It’s hard not to say “AI” when everybody else does too, but technically, calling it AI is buying into the marketing. There is no intelligence there, and it’s not going to become sentient. It’s just statistics, and the danger these systems pose comes primarily from the false sense of skill or fitness for purpose that people ascribe to them.

Evan Prodromou

@Gargron come now! This overstates our current knowledge of the nature of intelligence. LLMs are adaptive, they have memory, they use language to communicate, and they integrate disparate experiences to solve problems. They have many of the hallmarks of what we call intelligence. They have more such characteristics than, say, dolphins or chimps. Us knowing how they work is not a disqualifier for them being intelligent.

@evan @Gargron that paper cites a definition of intelligence by racist eugenicists, and doesn't have any actual controls, only vibes. It is worth watching/listening to, as is the linked Radiolab series on measuring intelligence

@KevinMarks @Gargron yeah, I'm not going to do that. Send me some written critiques though!

@KevinMarks @Gargron I deleted my recommendation of this paper.

@evan LLMs aren't intelligent as such. There are neurological conditions that lead to confabulation, whose output is similar to that of an LLM: one word follows another, but the narrative lacks semantic grounding. The brain has a lot more parts to it than just the word sequencing.

@bob the brain does indeed have lots of interesting behaviour. And yet it's a complex system that emerges from relatively simple parts.

I understand how LLMs work, but I disagree with you about whether they show signs of intelligence. Those simple steps produce extremely complex behaviour in practice.

We don't know how AGI is going to work. It will almost certainly not work exactly like human brains. Our definitions of intelligence require more generality than that.

@evan @Gargron Based on my understanding, LLMs are trained once and then fixed. The “memory” they have is a compressed fraction of the training data. They can retain some context from conversations, but the model itself doesn’t change. Looked at this way, LLMs can’t learn to think differently: you have to distill a new LLM.
(1/2)

@evan @Gargron 2/2

The current LLMs are literally statistical models, distilled down to map a vast amount of training data into a very small amount of code and weights (with embedded words) that meets the goals of the humans who created it.

When an LLM “learns” from a conversation, it’s just adding new words (tokenized in their context) to the heap. It doesn’t change the mapping/model; that’s been hard-coded by the humans who developed the model.

@evan @Gargron 3/2 (I lied)

To summarize: ChatGPT-x doesn’t get to ChatGPT-(x+1) without the humans learning, applying that knowledge to a new model, training that model by burning down a forest or three, and then publishing the new distillation.

I could be wrong, but that’s my understanding. (A rough sketch of the distinction follows below.)
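A minimal sketch of the distinction being drawn here, assuming PyTorch and a toy stand-in model (no library or model is named in the thread): ordinary inference reads the weights but cannot change them; only an explicit training step, run by the developers, moves them.

```python
import torch

model = torch.nn.Linear(4, 2)  # tiny stand-in for "the model"
frozen = {k: v.clone() for k, v in model.state_dict().items()}

# Inference, i.e. the "conversation": no gradients, so nothing can change.
with torch.no_grad():
    _ = model(torch.randn(1, 4))
assert all(torch.equal(frozen[k], v) for k, v in model.state_dict().items())

# Training, i.e. the step only the developers run to get the "next model".
opt = torch.optim.SGD(model.parameters(), lr=0.1)
loss = torch.nn.functional.mse_loss(model(torch.randn(8, 4)), torch.randn(8, 2))
loss.backward()
opt.step()  # only here do the weights actually move
assert not all(torch.equal(frozen[k], v) for k, v in model.state_dict().items())
```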

@evan @Gargron I'd have to disagree. LLMs are primarily used for two things: parsing text and generating text.

The parsing functions of LLMs are truly incredible, and represent (IMHO) a generational shift in tech. But the world's best regex isn't intelligence in my book, even if it parses semantically.

[1/2]

@evan @Gargron The generating side of LLMs is (again, IMHO) both the most hyped and the least useful function of LLMs.

While LLMs generate text that is coherent, that can elicit emotion or thought or any number of things, we're mostly looking into a mirror. LLMs don't "integrate" knowledge; they're just really, really, really big Markov chains (see the toy illustration below).

Don't get me wrong, "intelligent" systems most certainly will use an LLM, but generating text from prompts the way we do isn't intelligence.

[2/2]
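A toy illustration of the Markov-chain analogy (everything here, the corpus included, is made up for illustration): choose each next word by sampling among the words seen to follow the current one. Real LLMs condition on far longer contexts through learned weights, but the "next token from preceding tokens" shape is the same.

```python
import random
from collections import defaultdict

corpus = ("the inside of an orange is segmented and "
          "the outside of an orange is peel").split()

# Map each word to the words observed to follow it.
chain = defaultdict(list)
for word, follower in zip(corpus, corpus[1:]):
    chain[word].append(follower)

word, output = "the", ["the"]
for _ in range(8):
    followers = chain.get(word)
    if not followers:  # no observed continuation: stop
        break
    word = random.choice(followers)
    output.append(word)
print(" ".join(output))  # plausible-sounding, with zero understanding
```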

@evan @Gargron Ok, ok, one parting thought:

I'll just add that having memory, being adaptive, and using language to communicate are all things that computer programmes that don't use LLMs do today; a trivial example follows below.

LLMs are (IMHO) the most convincing mimics we've ever created by many orders of magnitude. But they don't actually *know* anything.

I can't wait for the world to see what truly *useful* things LLMs can do other than be sometimes right on logic puzzles and write bad poetry.
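For what it's worth, a trivial non-LLM programme with all three properties named above (memory, adaptation, language); it is a made-up illustration, not anything from the thread:

```python
# Remembers what it has seen (memory), changes its replies based on that
# history (adaptation), and communicates in English (language). No LLM.
seen = {}

for line in ["hello", "hello", "goodbye"]:
    seen[line] = seen.get(line, 0) + 1
    if seen[line] == 1:
        print(f"I have not heard {line!r} before.")
    else:
        print(f"You have said {line!r} {seen[line]} times now.")
```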

@jszym @Gargron what does it mean to "know" something?

@evan @Gargron Ya, I think that's the heart of the question :)

What I'm trying to communicate is that when I ask an LLM "what is on the inside of an orange", the programme isn't consulting some representation of the concept of "orange (fruit)". Rather, it's looking at all the likely words that would follow your prompt.

If you get a hallucination from that prompt, we think it made an error, but really the LLM is doing its job: producing plausible words. My bar for intelligence is personally higher. (A minimal sketch of this next-word view follows.)
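A minimal sketch of that "likely next words" view, assuming the Hugging Face transformers library and the public gpt2 checkpoint (neither is named in the thread): ask the model for its distribution over the single next token and print the top candidates.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

inputs = tok("The inside of an orange is", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits[0, -1]  # scores for the next token only
probs = torch.softmax(logits, dim=-1)
top = torch.topk(probs, 5)
for p, i in zip(top.values.tolist(), top.indices.tolist()):
    print(f"{tok.decode([i])!r}: {p:.3f}")  # the most plausible words, ranked
```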

@jszym @Gargron the words are a representation.

@evan I agree with @Gargron on this. All we have at this point is predictive statistics, which, combined with the re-labeling of long-standing (and sometimes valuable) methods of machine learning and pattern recognition, creates the illusion that artificial intelligence actually exists. The greatest danger associated with AI now, in my view, is that people will believe that it exists.

Old ML joke: “Just because your friends jump off a cliff, will you?”
ML: “Of course!”