It’s hard not to say “AI” when everybody else does too, but technically calling it AI is buying into the marketing. There is no intelligence there, and it’s not going to become sentient. It’s just statistics, and the danger these systems pose comes primarily from the false sense of skill or fitness for purpose that people ascribe to them.
@Gargron come now! This overstates our current knowledge of the nature of intelligence. LLMs are adaptive, they have memory, they use language to communicate, and they integrate disparate experiences to solve problems. They have many of the hallmarks of what we call intelligence. They have more such characteristics than, say, dolphins or chimps. Us knowing how they work is not a disqualifier for them being intelligent.
@evan @Gargron I'd have to disagree. LLMs are primarily used for two things: parsing text and generating text.
The parsing functions of LLMs are truly incredible, and represent (IMHO) a generational shift in tech. But the world's best regex isn't intelligence in my book, even if it parses semantically.
[1/2]
@evan @Gargron The generating functions of LLMs are (again, IMHO) both the most hyped and the least useful.
While LLMs generate text that is coherent, that can elicit emotion or thought or any number of things, we're mostly looking into a mirror. LLMs don't "integrate" knowledge; they're just really, really, really big Markov chains (a toy sketch of that idea follows this post).
Don't get me wrong, "intelligent" systems will most certainly use an LLM, but generating text from prompts the way we do today isn't intelligence.
[2/2]
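To make the "plausible next words" point concrete, here's a toy bigram Markov chain in Python. It's a deliberately crude sketch of my own (a transformer is far more sophisticated than a lookup table), but it shows text generation with no representation of meaning behind it:

```python
import random
from collections import defaultdict

# Toy bigram "model": for each word, remember which words followed it in the corpus.
corpus = "the inside of an orange is full of segments and the peel of an orange is bitter".split()

followers = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    followers[prev].append(nxt)

def generate(start, length=8):
    """Emit up to `length` more words by repeatedly picking a plausible next word.
    There is no concept of 'orange' in here, only observed word adjacency."""
    word, out = start, [start]
    for _ in range(length):
        options = followers.get(word)
        if not options:
            break
        word = random.choice(options)
        out.append(word)
    return " ".join(out)

print(generate("the"))
```

A real LLM swaps the lookup table for a neural network conditioned on the whole context, but the loop (pick a plausible continuation, append it, repeat) has the same shape.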
@evan @Gargron Ok, ok, one parting thought:
I'll just add that having memory, being adaptive, and using language to communicate are all things that non-LLM computer programmes already do today.
LLMs are (IMHO) the most convincing mimics we've ever created by many orders of magnitude. But they don't actually *know* anything.
I can't wait for the world to see what truly *useful* things LLMs can do other than be sometimes right on logic puzzles and write bad poetry.
@evan @Gargron Ya, I think that's the heart of the question :)
What I'm trying to communicate is that when I ask an LLM "what is on the inside of an orange", the programme isn't consulting some representation of the concept of "orange (fruit)". Rather, it's looking at all the likely words that would follow your prompt.
If you get a hallucination from that prompt, we think it made an error, but really the LLM is doing its job: producing plausible words. Personally, my bar for intelligence is higher.
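If you want to watch the "likely next words" behaviour directly, here's a rough sketch using the Hugging Face transformers library with GPT-2 (just a small model I picked for illustration, not whatever any particular chatbot runs). It prints the model's most probable next tokens after the prompt, which is all the "answer" really is:

```python
# Sketch: inspect a small causal LM's next-token distribution.
# Assumes the `transformers` and `torch` packages are installed;
# "gpt2" is only a convenient example model.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "The inside of an orange is"
inputs = tok(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits           # (1, seq_len, vocab_size)

probs = torch.softmax(logits[0, -1], dim=-1)  # distribution over the next token
top = torch.topk(probs, k=5)

for p, idx in zip(top.values, top.indices):
    print(f"{tok.decode(idx.item()).strip():>12}  {p.item():.3f}")
```

Whichever token wins, the model is doing exactly the same thing; "correct" and "hallucinated" answers come out of the same machinery.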