Is this true? You can question the humanizing term "concept", but the entire process of pretraining followed by RLHF is essentially about establishing a standard of "good" vs. "bad" outputs for the model.
Right now, lots of people get caught in a trap of asking things like "does MyFreeBestFriendAI feel remorse?"
If we're already looking at it as a document artifact, we can evade the implied-ego trap: The document generator took a chat-like document where two fictional characters are talking, and predicted that there would be text where one fictional character is associated with apologetic words.
It would be interesting to train an LLM that consistently "talks like a computer" or a command line utility instead, i.e. passive sentences, relatively bare results of the tasks given, no reference to a self, etc.
This led to international requirements that all "AI" have explainable/auditable decisions, and must have clear non-human features.
In the niche of long sci-fi comedy webcomics with worldbuilding, I suggest Schlock Mercenary [1]. Early art is rough, but it gets consistently better.
I think the point about lacking precise language to describe LLMs is reasonable, but then the author follows it up with claims that the machines can't count and are incapable of math (easily disproven). Then says "talking rock" is a better alternative, which to the average person would be even more confusing. Then says AI researchers tend not to consider LLMs AI (like... what?)
The point on Turing's Imitation Game was reasonable too, but then the author confidently proclaims that LLMs are not doing anything intelligent and are pure mimicry. Intelligence is notoriously poorly defined, and the stochastic parrot meme has already died now that RL enables out-of-distribution behavior.
The chat point and talking dog syndrome are both reasonable and I generally agree with them.
They are programs and they follow defined rules in a predictable fashion. The randomness they exhibit (through temperature, seed, etc.) is well understood and configurable.
They are literally programs that run on computers.
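To make the "well understood and configurable" point concrete, here's a toy sketch of temperature sampling with a fixed seed — not any particular library's API, just an illustration that the randomness is a knob you turn, not a mystery:

```python
import math
import random

def sample_next_token(logits, temperature=1.0, seed=None):
    """Sample a token index from logits with temperature scaling.

    Lower temperature sharpens the distribution toward the argmax;
    higher temperature flattens it. A fixed seed makes the draw
    fully reproducible. (Toy sketch; real inference stacks expose
    the same knobs under various names.)
    """
    rng = random.Random(seed)
    scaled = [l / temperature for l in logits]
    # Numerically stable softmax over the scaled logits.
    m = max(scaled)
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    # Inverse-CDF sampling from the categorical distribution.
    r = rng.random()
    cum = 0.0
    for i, p in enumerate(probs):
        cum += p
        if r <= cum:
            return i
    return len(probs) - 1

logits = [2.0, 1.0, 0.1]
# Same seed -> same token every time: the "randomness" is controlled.
assert sample_next_token(logits, temperature=0.7, seed=42) == \
       sample_next_token(logits, temperature=0.7, seed=42)
```

With temperature near zero this degenerates to picking the highest-logit token every time, which is why "deterministic when you want it to be" is an accurate description of these systems.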
People talk about them in anthropomorphic terms because humans are easily fooled. Remember ELIZA.
> They’re a trillion numbers in a trenchcoat; not logical, in either a machine or a mental sense, but stochastic.
Many of these posts and comments claim that human minds are substantially different ("better" is implied). The evidence is a sort of broad gesturing at explanations of how LLMs are implemented ("math") and how they work ("guess the next word"). And because of these facts, we should treat them in a particular way, or certain things will never happen.
I've been trying to look past the obvious straw man here and to actually think critically about this tech as well as compare it to my own experience and (admittedly very limited) understanding of the human brain.
In more ways than feels comfortable, it seems entirely possible to me that these things actually are or could be really close to the ways that our own minds work.
Our own minds/consciousness are ultimately based on physical processes, I don't think anyone would dispute that. At some point, the physical phenomena in our brains presumably result in the emergent behavior of thinking and consciousness. We have no idea how it works, but it's our lived experience. Why can't that be the case for silicon-based rather than carbon-based processes? How can we say with any certainty that it's not happening elsewhere if we don't know how it works?
Reducing their function to "guessing the next word" sounds an awful lot like what happens when I start talking to someone. I have an idea of what I want to say, but I almost never have a full sentence planned out when I start speaking.
The article puts "thinking" and "hallucination" in scare quotes. But I mean – the way that they appear to think by working through problems with language mirrors my own "thinking" very closely.
It says "They’re not thinking. They’re not hallucinating"; the exercise of figuring out why is left to the reader. If you've ever talked to a 3 or 4 year old, or someone who's tired, you may have had similar experiences re: hallucinations.
These are all pretty surface level examples, but as I use the tools more and learn more about how they work I'm not seeing any significant evidence that counters the examples.
I do think it's probably dangerous and unhealthy to really anthropomorphize AI/LLMs. They're obviously not human even if they're thinking, and they're being made and shaped by companies (and training sets) that exist in a predominantly capitalist world (but then again, I guess we are too).
I assume similar lines of thinking are being discussed somewhere, but I haven't found much (and I feel like I'm reading about AI all day). Curious to hear others' thoughts and/or to be pointed to wherever this stuff is being talked about.