I've noticed that when Copilot gets something wrong, and you point it out, it gets an attitude with you. ChatGPT is worse. It'll *agree* with you.

As I basically said elsewhere, my fundamental problem is in the domain of behavior.

I do not trust myself to be smart enough to make a computer spit out the right answer the first time. That comes largely from my experience of screwing up simple tasks. I check, and recheck. On paper, in intelligence and in skill training, I may be in the more 'highly qualified' part of those distributions.

In practice, there are quite a lot of algorithms complicated enough I do not trust myself to implement and debug.

I have kept a very wary eye on neural net methods for at least five and maybe as many as ten years. The stuff I was initially skeptical of also seemed to be handled cautiously enough, and the computational cost was too high for blindly doing stupid things. I still had very major issues with, say, legislative mandates that effectively required people to do it anyway, even if people were being killed.

So LLMs and image generation are a relative improvement in my eyes. Here are applications that work some of the time, and can never directly kill people without a bunch of extra human stupidity. Woohoo!

The problem is that quite a lot of people have been very badly raised and trained, and are more than capable of bringing that level of blind-trust stupidity to the table. Folks who default to 'the experts are correct' or 'the computer is correct', and do not first check the problem against either the domain and skills of the expert, or against the inputs and algorithm of the computer. (With full awareness of GIGO.) (This excessive trust is perhaps downstream of power-hungry idiots who see expertise as a route to controlling whether other people trust them and obey them, either directly or by being politicians and 'leaders'.)

Microsoft's management choices WRT integrating sh!t into the OS have not been something I welcome.

I absolutely do not see 'breakaway synergism' leading to 'singularity' and Skynet sending terminators to kill me. (The people who want mass murder or government regulation to avoid this situation are basically unwell mystics.)

What I do have concerns about is stupid management choices leading to the equivalent of phishing-email exploits, with the stolen credentials then used to hack PLCs, etc.

As it is in the domain of behavior, the fix is not technical, nor in government funding or mandate of technology.

If I use a government blockchain to evaluate the trustability of statements, I already have scenarios where I know that government would misdirect my trust. Automating that to do it more quickly and efficiently is stupid, even before considering the cost/risk of the automation itself.

The fix is to the trust security flaw in behavior.

Which arguably is being patched invisibly anyway, because of other damaging 'hacks'.

The 9-1-1 call center doing an internet search for things to do while waiting for the emergency responders in the field to get there ignores the fact that most PSAPs have very strict call-handling guides to follow. And going off script even by a single word can get your butt in a sling, depending on the QA person. But I can see someone implementing a program that utilizes an LLM for its call handling and/or EMD as a cost-saving measure, or because upper management is dazzled by all the "new and cool toys." Plus, with the specific example given of poisoning, every PSAP I'm aware of has the Poison Control number programmed on speed dial because it IS such a common call.

Right. Outsourcing first-level triage/support to AI is where things potentially go wrong: the AI gives an incorrect answer instead of escalating the problem to a human expert.

You underestimate how much human labor is going into creating and checking training data. And these are not unsolvable problems. Should you be using LLM output as a definitive source for anything critical? No. Nor should you mindlessly use the top result in a Google search. Like any tool, you need to learn how and when to use it, or it's dangerous.

I know there's a ton of labor going into checking the data. So what? The point about AI is that it synthesizes new answers from the data. If we always knew what it was going to say, then it would be pointless. But since we can't predict what it's going to say, we have no way to know whether what it comes up with is safe.

I agree you should not be mindlessly trusting an LLM or the first search engine result, but the difference between the two is important. A search engine will give you 10 results, and (presuming they aren't bot-generated junk) you'll see differences between the results which will allow you to detect that the top result is inconsistent with the other 9. With an LLM you don't get that. You get a single answer, and it will appear to be authoritative. Some people (presumably not you, and hopefully not me) will not think this through and will therefore blindly trust the output. And some of the time that output will be wrong, and potentially dangerously so. I see no good way to fix this.
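That cross-check can be made concrete. Below is a minimal sketch, assuming plain-text result snippets and a crude word-overlap similarity; both are illustrative stand-ins, not anything a real search engine exposes:

```python
# Sketch of the cross-check a reader performs on a results page:
# does the top result broadly agree with the rest? Uses naive
# word-overlap similarity; real fact-checking needs far more than this.

def word_overlap(a: str, b: str) -> float:
    """Jaccard similarity over lowercase word sets."""
    wa, wb = set(a.lower().split()), set(b.lower().split())
    return len(wa & wb) / len(wa | wb) if wa | wb else 0.0

def top_result_is_outlier(snippets: list[str], threshold: float = 0.2) -> bool:
    """Flag the top snippet if it barely overlaps with the other results."""
    top, rest = snippets[0], snippets[1:]
    if not rest:
        return False  # a single answer leaves nothing to compare against
    avg = sum(word_overlap(top, s) for s in rest) / len(rest)
    return avg < threshold

# Hypothetical snippets echoing the poisoning example upthread.
results = [
    "Induce vomiting immediately using salt water.",
    "Do not induce vomiting. Call poison control first.",
    "Call poison control. Do not induce vomiting.",
]
print(top_result_is_outlier(results))  # True: the top hit is the odd one out
```

With a single LLM answer there is no `rest` to compare against, which is exactly the problem described above.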

LLMs can be trained to refuse to answer prompts where their answers could be dangerous. It's just a matter of having enough reliable data about what constitutes such a prompt. That data can be created, and is being created on a massive scale. The ones that can search in real time can also analyze and incorporate a range of search results into their responses.
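A sketch of what such a refusal gate might look like, assuming a classifier trained on exactly that labeled data; `safety_classifier` and `llm` below are hypothetical stand-ins, not real APIs:

```python
# Illustrative refusal gate in front of an LLM. The classifier and the
# model are toy stand-ins for systems trained on the data described above.

from typing import Callable

def answer_with_refusal(
    prompt: str,
    safety_classifier: Callable[[str], float],  # returns P(prompt is dangerous)
    llm: Callable[[str], str],
    threshold: float = 0.5,
) -> str:
    """Refuse when the classifier flags the prompt; otherwise answer."""
    if safety_classifier(prompt) >= threshold:
        return ("I can't answer that safely. For a poisoning emergency, "
                "contact Poison Control or emergency services.")
    return llm(prompt)

# Toy stand-ins so the sketch runs end to end.
toy_classifier = lambda p: 1.0 if "poison" in p.lower() else 0.0
toy_llm = lambda p: f"(model answer to: {p})"

print(answer_with_refusal("What neutralizes a poison at home?", toy_classifier, toy_llm))
print(answer_with_refusal("What rhymes with orange?", toy_classifier, toy_llm))
```

The hard part is entirely in the classifier and its training data; the gate itself is trivial.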

In other words, AI will not only be able to google stuff, it will analyze the results better and faster than humans. It’s already close in many domains, even though this ability of LLMs is relatively new and not well-trained.

What LLMs are not good at and may never be good at is understanding the world spatially and temporally, because they are not trained on that, they are trained on text, which is a secondhand, flawed, incomplete account of real happenings. That’s what makes Sora so interesting.

I have tried to use AI bots to supply me with sources, not information. They also fail miserably at this.

I decided to test one out, using a simple 5th grade example: when our kid was in school, he was assigned a Revolutionary War hero to do research on, and got Crispus Attucks. The problem is, almost nothing at all is actually known about the man. We know he was born in slavery, that he escaped, and that he showed up at the Boston Massacre. He only appears in the actual historical record two times: the notice of his escape from slavery and at the Massacre. Everything else we know about him is supposition and make believe.

So, I knew exactly what ChatGPT would do when I asked about him. Sure enough, the bot gave me the fiction. I specifically asked for historical sources, and it spun. It couldn't do it. It couldn't look at the massive pile of crap information about CA and tell me it was all crap, even when I was specifically asking for historical sources and telling it that its responses were crap.

Another thing it can't do, which is actually a frightening glimpse of where our world is headed: because everything in the world with the slightest hint of the political has been massively distorted since Trump came down the escalator in 2015, I always try to find sources which predate that.

You can't. AI bots don't seem to be able to give you anything that's more than a year or two old. I once specifically asked for something from a specific date range, and it couldn't do it. It kept giving me the same two non-compliant answers in succession, with an apology each time.

So the conversation went like this:

Chatbot: Here's answer A!
Me: Answer A is clearly non-responsive to my question.
Chatbot: Oh, I'm so sorry, please forgive me. Here's answer B!
Me: Answer B is also non-responsive.
Chatbot: Oh no. I'm still learning, thank you for telling me I have it wrong. Here's answer A!

The fact that it couldn't go out on the internet and find information that was a handful of years old is truly frightening. We have placed all of our current events on the internet, "the first draft of history" as they say, but it all withers and dies within a year or two. We are witnessing the actual end of history, because it is being overwritten and eliminated within months.

Technological revenge writ large. There is a solution: don't use it. Instead of searching for how to do something, go to the library and find a hard copy that has been written by a human and edited by a human. If that fails, use the internet to seek out those who have such materials. I have an A. T. Cross Stylographic Pen that was made in the 1880s. After searching online for a while, I found photos of the instruction sheet that someone had scanned and posted online. I would never, ever think to ask any kind of AI for instructions on anything, for the reasons you provide in your post. Sadly, a great many of our species treat whatever they read online as if it were the word of God and will follow anything they are instructed to do by an online source.

To add to your statements on AI picking up and repeating incorrect answers, I offer auto-misspelling. Every typo I have made is what predictive typing picks up and tries to complete my words with. I often spend more time un-"correcting" whatever I type than typing it.

Then we have the recent case where a new air traffic controller argued with an experienced pilot over a problem he was having with his aircraft. She did not believe him because she had googled it and found information that was at variance with what the pilot was saying.

Another is a story that may be unknown outside of Japan. The reported son of the founder of the Sagawa parcel delivery company (not sure if his father's employment status is correct, but he was wealthy) murdered a female classmate in 1981 while studying in France and ate part of her remains. A while back, this story was again in the news. One of the commenters did not believe the story because they could not find anything contemporary on it online. They apparently did not know that NOTHING was online before a very recent date, and that anything before that date can only be online if someone goes through library or private collections, scans it, and uploads it to an online repository. People who are dependent upon online resources for information know nothing of anything that predates the internet except what others say about it, and those others are often also completely ignorant of anything they cannot find online.

Then there are people such as me, who have living memory of life before the internet and use it to search for things we once read about. But whether due to my poor searching technique, or the fact that seemingly every word in every known language is the name of some Japanese anime character, or both, all I can find in many of my online searches is anime characters. It is as if everything that existed before the year 2000 simply did not exist, unless it is or was a Japanese anime character.

Hi Francis.

Fascinating rabbit hole leading me back to undergrad days contemplating the relationship between language/logic and experience ... climbing Wittgenstein's ladder without a parachute, just to understand why Russell and Whitehead's Principia was a magnificent failure, or trying to debunk Gödel's theorem on the limits of any systematic logic.

Yeah. A.I. is dangerous stuff. As I understand it now, the word salads are just extensions of Bayesian logic applied to the probability of one word following another in a given corpus.
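That intuition can be shown in miniature. Here is a toy bigram model, counting how often one word follows another in a tiny corpus; real LLMs use neural networks over far longer contexts, but the probability-of-the-next-word framing is the same (the corpus line is just an illustrative string):

```python
# Toy bigram model: estimate P(next word | current word) by counting
# word pairs in a corpus. Real LLMs condition on long contexts with
# neural nets rather than raw counts, but the "probability of one word
# following another" intuition above is the right starting point.

from collections import Counter, defaultdict

corpus = "the gods resemble themselves and the gods resemble us".split()

# Count how often each word follows each other word.
follows: dict[str, Counter] = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def p_next(prev: str, nxt: str) -> float:
    """Maximum-likelihood estimate of P(nxt | prev)."""
    total = sum(follows[prev].values())
    return follows[prev][nxt] / total if total else 0.0

print(p_next("gods", "resemble"))        # 1.0: 'resemble' always follows 'gods' here
print(p_next("resemble", "themselves"))  # 0.5: half the time it's 'themselves'
```

Swap the counts for a neural network and the ten-word corpus for a few trillion words, and you have the word-salad machine in question.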

Had A.I. been around 3,000 years ago, we would have great answers as to why only Pharaohs were divine. Hmm ... come to think of it, A.I. sounds like just an update of Xenophanes: "Ethiopians imagine their gods as black and snub-nosed; Thracians blue-eyed and red-haired. But if horses or lions had hands, or could draw and fashion works as men do, horses would draw the gods shaped like horses and lions like lions, making the gods resemble themselves."

Things get really sticky with that Stephen Hawking quote ... "Stupidity and greed will mark the end of the human race."

On that happy note,

Oyasumi nasai from Japan
