Does Bard know how many times the letter “e” appears in “ketchup”?

One of the things I enjoy most about machine learning is how accurately it demonstrates that engineers don’t know how people work. Take large language models. I was told they would come for my job and render me obsolete; that they are so smart they would plan the perfect itinerary for my trip to Paris, complete with bar and restaurant highlights, all accurate and thorough.

Inspired by a tweet about mayonnaise, I set out to do a fun experiment with Google Bard.

Behold: artificial intelligence.

I chose this test for two reasons. First, it’s the kind of exercise you do with young children when you’re teaching them to read: getting them to recognize letters and the sounds they make. But second, I strongly suspect this common activity isn’t recorded in any data Bard draws on, because it’s not the kind of thing you write down.

In the words of Arlo Guthrie: “I’m not proud… or tired.”

Obviously this is absurd, but it’s absurd because we can look at the word “ketchup” and see the “e” right there. Bard can’t do that. It lives entirely inside the closed world of its training data.
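For contrast, the task itself is trivial for any program that can actually see the characters in a word. A minimal Python sketch (the word and phrasing here are just this article’s example, not anything Bard runs):

```python
# Counting occurrences of a letter is a one-liner when you can
# inspect the actual characters, rather than predicting text.
word = "ketchup"
count = word.count("e")
print(f'The letter "e" appears {count} time(s) in "{word}".')
```

That’s the whole joke: a line of code anyone could write gets it right every time, while the chatbot is left guessing.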

This points to a deeper problem with LLMs. Language is a very old human technology, but our intelligence predates it. Like all social animals, we have to keep track of status relationships, which is part of why our brains are so big and weird. Language is a very useful tool (hey, I write for a living!) but it is not the same thing as knowledge. It floats on top of a whole pile of other capacities we take for granted.


If this weren’t a machine, I’d be starting to feel bad by now.

I often think of Rodney Brooks’ 1987 article, “Intelligence without representation,” which is more relevant than ever. I wouldn’t deny that language use and intelligence are related — but intelligence precedes language. If you get language use in the absence of intelligence, as we see with LLMs, you get strange results. Brooks compared that kind of effort to a group of early researchers trying to build an airplane by focusing on the seats and windows.

I’m sure he’s still right about that.

In which I try to determine whether Bard has a blind spot about ketchup.

I understand the temptation to jump straight into trying to have a complex conversation with an LLM. Many people desperately want us to be able to build a smart computer. These fantasies run all through science fiction, the genre most beloved by nerds, and they point to a longing to know we are not alone in the universe. It’s the same impulse that drives our attempts to contact alien intelligence.

But pretending an LLM can think is a fantasy. You can ask it about its inner life if you want, but you’d be fooling yourself. There is nothing there. I mean, look at its attempts at ASCII art!

Mr. Constable… I gave you all the clues…

When you do something like this — a task your average five-year-old breezes through and an LLM fails — you start to see how intelligence actually works. Sure, there are people who think LLMs are conscious, but those people strike me as tragically undersocialized, unable to understand or appreciate just how smart ordinary people are.


Yes, Bard can produce polish. In fact, like most chatbots, it excels at autocompleting marketing copy. Perhaps that reflects how much ad text appears in its training data. Bard and its engineers probably wouldn’t look at it that way, but what a devastating commentary on our daily online lives.

Advertising is a thing. But the ability to produce ad copy is not a sign of intelligence. There are a lot of things we don’t bother writing down because we don’t have to, and other things we know we cannot write down — like how to ride a bike. We take a lot of shortcuts in talking to each other because people operate on roughly the same baseline of information about the world. There’s a reason for that: we are all in the world. A chatbot is not.

I’m sure someone will show up to tell me chatbots will get better and I’m just being mean. First of all: it’s vaporware till it ships, babe. But second, we really don’t know how smart we are or how we think. If there’s one real use for chatbots, it’s illuminating the things about our own intelligence that we take for granted. Or, as someone wiser than me put it: the map is not the territory. Language is the map. Knowledge is the territory.

There is a whole universe of things chatbots don’t and can’t know. The truth is, it doesn’t take much effort to get an LLM to flunk the Turing test, as long as you ask the right questions.
