16 Comments
streamfortyseven:

Money laundering?

Francis Turner:

The only people making bank on this are NVIDIA and that seems an odd destination for money laundering

streamfortyseven:

If the National Security State is the source of funding... lots of fake businesses which don't seem to make any money but continue to exist.

Francis Turner:

Eh. The funding is all Wall St and Silicon Valley VCs, AFAICT. I mean, I guess they could be laundering funds from somewhere else, but building numerous data centers filled with specialized hardware seems an odd way to do it.

streamfortyseven:

People looking for Net Operating Losses (https://www.investopedia.com/terms/n/netoperatingloss.asp) to shelter money from taxation, instead of using half-vacant real estate?

JD Free:

If LLMs are Dutch Tulip Bulbs, NVIDIA is the seller, not the buyer.

That's the place to be.

streamfortyseven:

Oh, and compare and contrast generative AI with generative grammars - the Kant Generator from 40 years ago used a generative grammar (see https://gitlab.com/paul-nechifor/kant-generator-pro/-/blob/master/kant_generator_pro/kgp.py?ref_type=heads). You could probably write entire sets of woke sociology papers this way, parlaying them into a series of textbooks, funded research grants, and a full professorship... And check this out: https://www.ling.upenn.edu/courses/ling5700/Chomsky1957.pdf

Optimizing a set of parameters - a *really big* set of parameters - in a "neural net" seems to be a hellishly inefficient way of getting the same result you get with a rule-based system like a generative grammar. Still, it's flashy new tech and that attracts the big bucks - plus you can't extract any actual rules from an optimized set of numerical parameters...
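[For readers unfamiliar with the technique: a toy sketch of recursive grammar expansion. The real kgp.py reads XML grammar files; the dict-based grammar and the `expand` helper below are simplified illustrations, not its actual implementation.]

```python
import random

# A toy generative grammar: each non-terminal maps to a list of possible
# productions, and a production is a list of symbols (terminals or
# non-terminals). Expansion recurses until only terminal words remain.
GRAMMAR = {
    "sentence": [["np", "vp", "."]],
    "np": [["the", "noun"], ["our", "noun"]],
    "vp": [["transcends", "np"], ["presupposes", "np"]],
    "noun": [["noumenon"], ["categorical imperative"], ["a priori intuition"]],
}

def expand(symbol: str) -> str:
    """Recursively expand a symbol; unknown symbols are terminals."""
    if symbol not in GRAMMAR:
        return symbol
    production = random.choice(GRAMMAR[symbol])
    return " ".join(expand(s) for s in production)

print(expand("sentence"))
```

A handful of rules like these generate unboundedly many grammatical sentences, which is the commenter's point: the rules are explicit and inspectable, unlike an optimized parameter set.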

streamfortyseven:

Neural nets are useful in doing real-time pattern matching in telemetry data, where rule-based systems just take too much time to be of use...

Toni Weisskopf:

So far, seems like the destruction of the Internet (all information at anyone's command) is the output of generative AI.

Don Carrera:

This article sure was easy to get through - it consisted of a slather of words that Kamala Harris would be envious of. And a bunch of typos, as if it's AI generated copy.

"...the expected payback period is in years of not decades." How about "IF not decades"?

"...lead the students astray because some of them will later put there trust in an AI hallucination..." My cat caught this error.

"...will be willing to pay $20/month for such a service because the only need it to write one or two things a month." THE ONLY??? Try THEY ONLY... Require AI to use a subject somewhere in that sentence. The problem is that "...the only need..." can be a legitimate clause; so you have to read 5 or 6 words farther on to realize you must go back and supply "THEY" to make sense of the sentence.

"Lyft seems to be struggling a bit and not making as much money, but it did generate $766B in free cash flow in 2024, which suggests that it too is probably finally profitable,..." That's a considerable feat to generate $766 BILLION cash flow a year. By a struggling company yet. No human, who understands numbers, wrote or read this sentence before publication. Should've been $766M, not B.

Francis Turner:

Thanks. I swear the typos never show up until after I hit post.

Don Carrera:

Glad it wasn't AI! After all, AI never takes responsibility for any of its own errors.

Jonathan Gadote:

I have been experimenting with xAI's Grok as a form of active journaling and therapy. The model's responses are good, and by being good they exceed human discourse, which in face-to-face therapy relies on proximate trust and understanding - something you would think might be simple but is more often a stumbling block. Knowing that I'm communicating with a machine is actually a feature. There is no judgment, no heavy topic that I would hesitate to thrash out at length, and the session literally never ends.

Psychotherapy in the coming years won't be sitting in a room with a stranger. More like brainstorming with a machine trained on matrices of psychotherapeutic techniques.

Also, the aspect of home learning is too powerful to ignore. Imagine being an inquisitive kid talking to an encyclopedic chatbot that answers all questions and prompts for deeper inquiry. Public school needs to be disrupted, and AI will be a major disruptor.

In areas where humans are failing, AI will have a profound impact. That's not a huge footprint right now, but over time, sadly, it will get much larger.

John Oh:

It was brilliant to match SpaceX and Starlink. I wonder if there is something similar going on with xAI and Tesla's full self-driving?

Otherwise, it's tulips all the way down.

Jim in Alaska:

Seems to me the AI in Academia problem has a simple solution: go back to oral and blue-book exams.

AI's a tool, and as a time saver it may prove its worth. Calculations that might take me up to half a day - including trips back and forth to the bookshelves looking up formulas, checking tables, etc. - even a chatbot does in a few minutes and, as you noted, shows its work so I can check and verify. Admittedly what I'm doing is trivial, but it does suggest the possible value of the tool.

It hadn't occurred to me that AIs could code (I know, a fact that should be self-evident) until I asked DeepSeek about CAD programs, our discussion leading it to write code to build a 3D model in OpenSCAD.
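[For context, OpenSCAD models are just text, which is why chatbots handle them well. This is a hypothetical illustration of that kind of output, not the commenter's actual session: a few lines of Python emitting OpenSCAD source for a trivial L-bracket. The `bracket_scad` helper and its dimensions are invented for the example.]

```python
def bracket_scad(width: float, height: float, thickness: float) -> str:
    """Return OpenSCAD source for a simple L-bracket built from two cubes.

    OpenSCAD's cube([x, y, z]) primitive and union() operator are real
    language features; the bracket itself is just an illustrative shape.
    """
    return (
        f"union() {{\n"
        f"  cube([{width}, {thickness}, {height}]);\n"
        f"  cube([{width}, {height}, {thickness}]);\n"
        f"}}\n"
    )

# Print the generated source; paste it into OpenSCAD to render the model.
print(bracket_scad(20, 30, 3))
```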

Value vs cost though? DamnedifIknow.

BehaviorForecastsProbablyHard:

Yes.

I think search engine breakage may actually operate on a different timescale of effects than the current-generation AI stuff, which I expect to be more recent.

I recently did a couple of searches for software packages/codes.

One was not a language-specific search, and I found six GitHub repos in two languages: two last updated a couple of years ago (likely class projects), two five years ago, and the rest maybe seven or eight years ago. But, arguably, one should perhaps expect at least one more GitHub repo or something at the fifteen- or twenty-year-old point that I did not find indexed by those engines.

The other search was for a technique and a language, and it turned up two GitHub packages, which is maybe expected.

That said, this could purely be my misreading of the history of these technologies, and also of changes in academic and internet fashion.

I've been trying to stay careful, fair, and nuanced in my analyses around the new technologies and fashions in this space. There are definitely applications.

I have, however, found myself desperately allergic to the thinking described by the people deciding the business cases for these organizations. The theory of improved machinery, the theory of human behavior, and the theory of economic value all sound very wrong to me, but in non-obvious ways.

I should have a bias towards predicting 'management is doing bad things', even when management is actually correct enough; I've developed my personality in ways strongly opposite to those useful in management.

For my business situation, cloud AI and Windows 11 are potentially dangerously bad for me. Some of the CS PhDs are clearly insane in how they integrate their understanding of what these technologies can do; exterminating humanity is actually really hard to do.

Academia? My estimate of baseline problems there is clearly high, and biased. Students who cannot write are less of a problem than professional school faculty who are determined to incinerate the profession.

A couple of years ago, I read somebody's (1) blog post about a book on the 1600s and the clustering of terrible leadership decisions then, around the world. It was a cool period, and that was directly a bit stressful. But at some point, it seems, the stresses and shocks compounded and disoriented 'leaders' to the point of just making a bunch of unnecessary and terrible decisions.

The hypothetical AI investment is just one of many serious bets that quite a few people are making. All of those bets can be wrong. Trump is making at least two bets that might not be coupled, and Trump's opponents have made a number of bets. There's value given to stuff in the books, and then there is wealth creation from populations, and disparity is always likely to see correction in the long term. A dozen serious bets that all lose significantly would perhaps write off a lot of value that had in fact not been present for some time.

Absolutely not an enjoyable experience to see it happen up close. So let us hope that I am simply incorrect.

(1) crossoverqueen dot wordpress dot com. It is a daily blog that covers, among other topics, fanfic; I do not have the link handy.
