(and I wonder what my ADHD friends would think of the Executive Function requirement as well...)
In the same way that studying alien life would reveal more about how life in general canonically forms and exists, studying this artificial intelligence could unlock a new understanding of our own minds.
> Generation: producing outputs such as text, speech and actions
> Attention: focusing cognitive resources on what matters
> Learning: acquiring new knowledge through experience and instruction
> Memory: storing and retrieving information over time
> Reasoning: drawing valid conclusions through logical inference
> Metacognition: knowledge and monitoring of one's own cognitive processes
> Executive functions: planning, inhibition and cognitive flexibility
> Problem solving: finding effective solutions to domain-specific problems
> Social cognition: processing and interpreting social information and responding appropriately in social situations
--------------------
I prefer:
a) working memory (hold & manipulate information in mind simultaneously)
b) processing speed (how quickly & efficiently execute basic cognitive operations, leaving more resources for complex tasks)
c) fluid intelligence (ability to reason through novel problems without relying on prior knowledge)
d) crystallized intelligence (accumulated knowledge and ability to apply learned skills)
e) attentional control / executive function (focus, suppress irrelevant information, switch between tasks, inhibit impulsive responses)
f) long-term memory and retrieval (ability to form strong associations and retrieve them fluently)
g) spatial / visuospatial reasoning (mental rotation, visualization, navigating abstract spatial relationships)
h) pattern recognition & inductive reasoning (this is the most primitive and universal expression of intelligence across species: the ability to extract regularities from noisy data, to generalize from examples to rules)
What counts as 'in mind' is undefined. You can satisfy it by declaring that anything manipulable counts as being in mind.
>c) fluid intelligence (ability to reason through novel problems without relying on prior knowledge)
Reasoning presupposes the conclusion; 'solve' is better. When a solution is given you cannot declare it not to be a solution, but people can and do argue about whether an answer was arrived at by reasoning even when they agree on its correctness.
>g) spatial / visuospatial reasoning (mental rotation, visualization, navigating abstract spatial relationships)
I have aphantasia; why should you exclude something from being intelligent because it cannot do something that I also cannot do?
Intelligence exists on a spectrum. Amongst different species (living and non-living) and also within species (amongst individuals).
Some dimensions of intelligence are more important than others in different contexts, so a system that might be "dumber" than another in one context can be smarter in a different context.
How will they measure wisdom or common sense (ability to make an exception)?
To be clear, I think we've seen very fast progress, certainly faster than I would have expected, I'm not trying to peddle some "wall" rhetoric here, but I struggle to see how this isn't just the SWE-bench du jour.
I feel like an average human wouldn't pass some of these metrics yet they are "generally intelligent". On the other hand they also wouldn't pass a lot of the expert questions that AI is good at.
We're measuring something, and I think optimizing it is useful, I'd even say it is "intelligent" in some ways, but it doesn't seem "intelligent" in the same way that humans are.
I think we’ll need to split the concept of intelligence into the capacity to accomplish a task and the capacity to conceive and prompt a task. If the former is called “intelligence” then LLMs are intelligent.
But what then do we call the latter? I think the idea of an AI that can independently accomplish great things is where people talk about “general” intelligence. But I think we need a label more specific, that covers this idea that successful humans are not just good at doing things, they originate what should be done and are not easy to dissuade.
Huh? No. "The capacity to accomplish a task" is not intelligence. By that definition, a washing machine is intelligent.
Either way, as definitions for intelligence they're very lacking. Most people would include such abilities as making connections between unrelated facts, making abstractions, understanding what is relevant and what isn't, learning. Just being able to "accomplish many tasks" doesn't cut it. You could build a really complex machine that can accomplish many different tasks and that wouldn't make it more intelligent than a washing machine, it'd just make it more complicated. Intelligence is not in how many things the intelligent thing is able to do, but in how on-the-fly adaptable it is. Something truly intelligent does not need to be purpose-built to do anything, it can learn to make do with whatever resources it's got.
I think this approach is intentional. The philosophy is simply "extraordinary claims require extraordinary evidence". What you're saying is true, but producing a system that exhibits all human cognitive capabilities is a better threshold for the (absolutely wild) claim of the existence of AGI.
However I must admit that including the last point, which partially hints at emotional or rather social intelligence, surprised me. It makes this list go beyond the usual understanding of AGI and moves it toward something like AGI-we-actually-want. But for that purpose this last point is too narrow, too specific. And so is the whole list.
To be actually useful, the AGI-we-actually-want benchmark should include not only positive indicators but also a list of unwanted behaviors, to ensure the thing that used to be called alignment, I guess.
Stalin was AGI-level.
Is social cognition really a measure of intelligence for non-social entities?
Evidence: cuckoos and cheaters all the way down the evolutionary ladder as a winning strategy and arms race against the hard workers.
Who cares about AGI? Honestly, what's the gain?
Maybe Google could actually make Gemini good, instead of being about 10 miles behind Claude, rather than trying to make AGI for - well, some reason - because they want to be famous.
What does "making a framework" even mean, it feels like a nothing post.
When I think of what real AGI would be I think:
- Passes the turing test
- Writes a New York Times Bestseller without revealing it was written by AI
- Writes journal articles that pass peer review
- Wins a Nobel Prize
- Writes a successful comedy routine
- Creates a new invention
And no, nobody is going to make an automated kaggle benchmark to verify these. Which is fine, because an LLM will never be AGI. An LLM can't even learn mid-conversation.
If they don't do that, then those trillions of dollars that support their current share price will most probably evaporate, so there are very big incentives for them to just outright try to re-create reality (like what we usually meant when we were thinking about artificial intelligence).
If an LLM like this is really intelligent, at the very least, I’d expect it to be able to invent.
For example, train an LLM on a dataset only containing knowledge from before nuclear energy was invented, and see if it can invent nuclear energy.
But that’s the problem: they’re not really training the model on intelligence, they’re training it on knowledge. So if you strip away the knowledge, you’re left with almost nothing.
There’s an implicit assumption that scaling text models alone gets us to human-like intelligence, but that seems unlikely without grounding in multiple sensory domains and a unified world model.
What’s interesting is that if we do go down that route successfully, we may get systems with something like internal experience or agency. At that point, the ethical frame changes quite a bit.
Like for writing a best-seller: these AIs have so many advantages, in that they've read every notable work ever, so if they can't craft something impressive and creative after all that, it's really indicative that they are actually quite below human on the creativity/writing side and just masking it on the massive-data side.
Or put another way -- it's not really AGI until there is a model that can learn at human speed; no amount of being pre-trained on specific problem sets (e.g. human emotions, coding, math theorems, etc.) will close that gap.
That's not what's happening here, and it's worth remembering: A caveman from 200K years ago would have been just as intelligent as any of us here today, despite not having language or technology, or any knowledge.
In Carolyn Porco's words: "These beings, with soaring imagination, eventually flung themselves and their machines into interplanetary space."
When you think of it that way, it should be obvious that LLMs are not AGI. And that's OK! They're a remarkable piece of technology anyway! It turns out that LLMs are actually good enough for a lot of use cases that would otherwise have required human intelligence.
And I echo ArekDymalski's sentiment that it's good to have benchmarks to structure the discussions around the "intelligence level" of LLMs. That _is_ useful, and the more progress we make, the better. But we're not on the way to AGI.
It's interesting to me how much effort the AI companies (and bloggers) put into claiming they can do things they can't, when there's almost an unlimited list of things they actually can do.
Claiming they can give safe, regulated financial advice. [2]
Claiming you can put your whole operation on autopilot with minimal oversight and no negative consequences. [3]
[1] https://www.ftc.gov/news-events/news/press-releases/2024/09/...
[2] https://www.businessinsider.com/generative-ai-exaggeration-o...
[3] https://www.answerconnect.com/blog/business-tips/ai-customer...
[4] https://finance.yahoo.com/news/anthropic-ceo-predicts-ai-mod...
Or that you can't read that GP was talking about what AI CAN do?
how do you market that as a product that is needed by other people?
There are already companies that advertise AI dating partners, AI therapists and AI friends - and that gets a lot of flame for being manipulative and harmful.
They had ridiculous demos of Devin e.g. working as a freelancer and supposedly earning money from it.
Microsoft just replaced their native Windows Copilot application with an Electron one. Highly ironic.
Obviously the native version should run much faster and will use less memory. If Copilot (via either GPT or Claude) is so godlike at either agentic or guided coding, why didn't they just improve or rewrite the native Copilot application to be blazing fast, with all known bugs fixed?
What can't they do? Pretty much anything reliably or unsupervised. But then again, who can?
They also tend to fail creatively, given that they synthesize existing ideas. And with things involving physical intuition. And tasks involving meta-knowledge of their tokens (like asking them how long a given word is). And they tend to yap too much for my liking (perhaps this could be fixed with an additional thinking stage to increase terseness before reporting to the user).
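To make the token meta-knowledge point concrete, here's a rough sketch (assuming the third-party tiktoken library and its cl100k_base encoding; the exact split is tokenizer-dependent). The model receives integer token IDs, not characters, so "count the r's" asks about a representation it never directly sees:

    import tiktoken  # third-party tokenizer library; pip install tiktoken

    # Encode a word the way a GPT-style model would actually receive it.
    enc = tiktoken.get_encoding("cl100k_base")
    tokens = enc.encode("strawberry")

    # The model sees a short list of integer IDs, not a sequence of letters.
    print(tokens)
    print([enc.decode_single_token_bytes(t) for t in tokens])  # the chunks it actually sees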
It's kind of like in those sci-fi or fantasy stories where someone dies and what's left behind as a ghost in the ether or the machine isn't actually them; it's just an echo, a shallow, incomplete copy.
(:
I don't think it's unexplored at all, this is basically what information theory is all about. At some level, it becomes incompressible....
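You can see that limit with nothing but the Python stdlib: structured data compresses, while data that is already near maximum entropy doesn't shrink at all (it even grows by a few header bytes):

    import os
    import zlib

    structured = b"the quick brown fox " * 500  # 10,000 bytes of repetition
    noise = os.urandom(10_000)                  # 10,000 bytes of randomness

    # Repetitive data shrinks dramatically; random data is incompressible.
    print(len(zlib.compress(structured)))  # a few dozen bytes
    print(len(zlib.compress(noise)))       # slightly more than 10,000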
Those companies spending $1,000+/developer are doing it with the same hope that at some point those $1,000/month will replace the developer's monthly salary. Or because by doing so more investors will put more money into them.
Take away the promise of AI replacing developers and see how much a company is willing to pay for LLMs. It is not zero, as there are very good cases for LLM-assisted coding or agentic engineering.
I'm increasingly convinced that the core mechanism of AGI is already here. We just need to figure out how to tie it together.
What's more amazing to me is that the average human, only able to hold a relatively small body of knowledge in their mind, can generate things that are completely novel.
Was an individual mind responsible for us as humanity landing on the moon? No. Could an individual mind have achieved this feat? Also no.
Put differently, we should be comparing the compressed blob of human knowledge against humanity as a collective rather than as individuals.
Of course, if my individual mind could be scaled such that it could learn and retain all of human knowledge in a few years, then sure, that would be a fair comparison.
Yes, I can.
- I can build an unusual but functional piece of furniture, not just describe it or design it. I can create a chair I can sit on. An LLM is just an algorithm; I am a physically embodied intelligence in a physical world.
- I can write a good piece of fiction. LLMs have not demonstrated the ability to do that yet. They can write something similar, but it fails on multiple levels if you've been reading any of the most recent examples.
- I can produce a viable natural intelligence capable of doing anything human beings do (with a couple of decades of care, and training, and love). One of the perks of being a living organism, but that is an intrinsic part of what I am.
- I can have a novel thought, a feeling, based on qualia that arise from a system of hormones, physics, complex actions and inhibitors, outrageously diverse senses, memories, quirks. Few of which we've even begun to understand let alone simulate.
- And yes I can both count the 'r's in strawberry, and make you feel a reflection of the joy I feel when my granddaughter's eyes shine when she eats a fresh strawberry, and I think how close we came to losing her one night when someone put 90 rounds through the house next door, just a few feet from where she and her mother were sleeping.
So yeah, I'm sure I can create things an LLM can't.
The only thing you mentioned is the fan fic and I would happily take the bet that an LLM could win out against a skilled person based on a blind vote.
Just because there are lots of tasks which can be accomplished without the need for anything novel doesn't mean LLMs can match a human. It just means humans spend a lot of time doing some really boring stuff.
This does have value.
That's right, but there's more. When you think about the cost of compute and power for these LLM companies, they have no choice. It MUST be a multi-trillion-dollar idea or it's completely uninvestable. That's the only way they can sucker more and more money into this scheme.
Though if the bubble(?) bursts and Nvidia starts selling fewer units year-over-year, that could be problematic.
In other words, intelligence offers zero evolutionary advantage?
With everything wrong and sick with today's world, let's not take the achievements of our species for granted.
In that sense, we have just enough collective intelligence to be dangerous and not enough intelligence to moderate ourselves, which may very well result in an evolutionary dead end that will have caused untold damage to life on Earth.
See also (recent only):
- Paul Ehrlich's Population Bomb (Malthusian collapse)
- The Club of Rome's The Limits to Growth (resource exhaustion)
- Thomas Malthus' Population growth / famine cycle
- James Lovelock's Global warming catastrophe predictions
- Hubbert's (et al) Peak oil economic disaster
- Molina & Rowland's Ozone catastrophe
- Metcalfe's internet collapse
- Everything lasts forever, until it doesn't. Ancient Egyptian civilization lasted for thousands of years, until it didn't. Any Egyptian could point to thousands of years of their heritage and say it hasn't ended yet, therefore any prediction that it will end is clearly bad and dumb. Then it was conquered by Romans, and then by Islam, with its language, culture, and religion extinguished, extant only in monuments, artifacts and history books.
- We have nuclear weapons now. Any prediction of an imminent end of human civilization before then would be purely religious, but there is a real reason to believe things have changed. We are currently in a time of relative peace secured by burning resources for prosperity, but what happens when those resources run out and world conflict for increasingly scarce resources is renewed with greater vigor?
- Note that I did not outright predict the end of human civilization, merely noted it as a plausible worst-case scenario. If civilization continues on more-or-less as it is, in the next couple of hundred years, we will drive countless more species to extinction. We will destroy so much more of our environment with climate change, deforestation, strip mining, overfishing, pollution, etc. We will deplete water reservoirs and we will deplete oil, helium, phosphorus, copper, zinc, and various rare earth elements. Not a complete depletion, but they will become so scarce as to not be widely available or wasted for the general population's benefit. If billions of people are still alive then, which I explicitly suggested was a possibility, they will as a simple matter-of-fact live much less comfortably prosperous lives than us. It will not take a great catastrophe to result in a massive reduction in living standards, because our current living standards are inherently unsustainable.
We also live in a time where the human population, where it is most concentrated, is declining rather than growing, so far without too disastrous consequences.
Greening of the earth has been happening since the 1980s - i.e. about a 0.3% coverage increase per year in recent decades.
Places that were miserable and poor, like China, have been lifted to prosperity and leading out in renewable tech.
There is much to celebrate and after the recent passing of Paul Ehrlich, we should pause and consider just how wrong pretty much every prediction he made was.
With the current progress in solar, as well as the remaining coal, gas and uranium reserves, energy is not going to be what finishes our civilization.
While I don't think we are going to get true collapse, I think we are going to get a lot of technical progress compensating for biosocial deterioration.
The demographics, mental health and dysgenics are all real, quantified trends, and we are going to face the reality of a less capable, less taxable population for the rest of this century. It's baked in at this point.
https://www.biorxiv.org/content/10.1101/2024.09.14.613021v1
The advent of agriculture and civilization had many powerful selection effects.
People really haven't processed this fact and its implications just yet.
Doubt. If we teleported caveman babies right out of the womb to our times, I don't think they'd turn into high-IQ individuals. People knowledgeable about human history / human evolution might know the correct answer.
The archaeological evidence shows that for many generations the first neolithic farmers had serious health problems in comparison with their ancestors. Therefore it is quite certain that they did not transition to agriculture willingly, but to avoid starvation.
Later, when the agriculturalists displaced the hunter-gatherers everywhere, they did not succeed in doing this because they were individually better fed or stronger or smarter, but only because there were many more of them.
The hunter-gatherers required very large territories from which to obtain enough food. For a given territory size, practicing agriculture could sustain a many times greater population, and this was its advantage.
The maximum human brain size had been reached hundreds of thousands of years before the development of agriculture, and it regressed a little after that.
There is a theory, which I consider plausible, that the great increase in size of the human brain has been enabled by the fact that humans were able to extract bone marrow from bones, which provided both the high amount of calories and the long-chain fatty acids that are required for a big brain.
I see your point about agriculture at first degrading the quality of food. Are you aware of evidence of brain size even degrading? Is it visible in the temporal bones?
We all come from monke; a monkey from 10 million years ago would definitely be unable to learn spoken language at even a basic level. Would it even have the anatomy to produce the required sounds? I don't think so.
What about monke from 1 million years ago? 200 thousand years ago?
ChatGPT says spoken language only emerged 50k-200k years ago and that a caveman baby from 200k years ago could learn spoken language if brought up by modern parents.
But I prefer human answers over AI slop.
Nowadays humans have smaller brains on average, though that is almost certainly not correlated with a lower skill in computer programming, but with lower skills in the techniques that one needed to survive as a hunter of big animals.
A human being has the potential for intelligence. For that to get realized, you need circumstances, you need culture aka "societal" software and the resources to suspend the grind of work in formative years and allow for the speed-running of the process of knowledge preloading before the brain gets stable.
The parents then must support this endeavor, at considerable sacrifice.
There is also a ton of chicken-and-egg catch-22s buried in this whole thing.
If the society is not rich, there is no school, only child labour. And a child-labour society is pre-industrially ineffective, and thus there are no riches to support and redistribute.
Also: is your society's culture root-hardened? Meaning, on a collapse of complexity in bad times, can it recover, even powering through the usual "redistribute the nuts and bolts from the bakery" sentiments rampant in bad times? Can it stay organized and centralize funds for new endeavors? Organizing a sailing ship in a medieval society means that in every village one person starves to death. Can your society accomplish that without riots?
Thus.
Were we "human" 200.000 years ago the way we are now?
Was the required brain and vocal hardware present?
A modern human is a complex artifact and cannot be produced everywhere. The ability to cooperate, form institutions and build complex tools may be severely restricted even today. Of course it was also restricted in the past.
A human is hardware with cultural software.
A human is decorated with parental education.
A human is decorated with local cultural influences.
A human inherits his economic circumstances and behavior.
Is that how you approach PDF files? Do you feel it in your bones that these flows of bytes are knowing?
I didn't say the book knows things, but everyone can agree that books have knowledge in them. Hence something possessing knowledge doesn't make it intelligent.
For example, when ancient libraries were burnt those civilizations lost a lot of knowledge. Those books possessed knowledge, it isn't a hard concept to understand. Those civilizations didn't lose intelligence, the smart humans were still there, they just lost knowledge.
The whole thing about washing hands comes from (some approximation of) germ theory of illness, and in practice, it actually just boils down to stories of other people practicing hygiene. So if one's answer here isn't "knowledge", it needs some serious justification.
Expanding that: can you think of things that are "intelligence" that cannot be reduced like this to knowledge (or combination of knowledge + social expectations)?
I think in some sense, separating knowledge and intelligence is as dumb a confusion of ideas as separating "code" and "data" (which doesn't stop half the industry from believing them to be distinct things). But I'm willing to agree that hardware-wise, humans today and those from 10,000 years ago are roughly the same, so if you teleported an infant from 8000 BC to this day, they'd learn to function in our times without a problem. Adults are another thing; brains aren't CPUs, and the distinction between software and hardware isn't as clear in vivo as it is in silico, due to properties of the computational medium.
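For what it's worth, the code/data point fits in a few lines of Python (a toy sketch, obviously not a claim about brains):

    # A string (data) becomes a function (code)...
    source = "lambda x: x * 2"
    double = eval(source)

    # ...and a function (code) is passed around like any other value (data).
    def apply(f, value):
        return f(value)

    print(apply(double, 21))  # 42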
your brain hearing, comprehending and following those rules - that is intelligence
Why do you keep confusing CPU speed/ISA with the contents of the SSD, and arguing that they're the same thing?
If we want to draw computing device analogies, then the brain is an FPGA that is continuously reconfiguring itself throughout its runtime.
I disagree with this. I also disagree that civilisations are knowing, since they are historical fictions. It's like saying that Superman is.
What are your arguments?
I would agree that generally, purely acquiring knowledge does not increase intelligence. But I would also argue that intelligence (ie your raw "processing power") can be trained, a bit like a muscle. And acquiring and processing new knowledge is one of the main ways we train that "muscle".
There's lots of examples where your definition of intelligence (intelligence == raw processing power) either doesn't make sense, or is so narrow that it becomes a meaningless concept. Let's consider feral children (ie humans growing up among animals with no human contact). Apparently they are unable or have trouble learning a human language. There's a theory that there's a critical period after which we are unable to learn certain things. Wouldn't the "ability to learn a language" be considered intelligence? Would you therefore consider a young child more intelligent than any adult?
And to answer your question, whether learning about atoms makes you more intelligent: Yes, probably. It will create some kind of connections in your brain that didn't exist before. It's a piece of knowledge that can be drawn upon for all of your thinking and it's a piece of knowledge that most humans would not figure out on their own. By basically any sensible definition of intelligence, yes it does improve your intelligence.
[1] https://pubmed.ncbi.nlm.nih.gov/17024677/ [2] https://www.annualreviews.org/content/journals/10.1146/annur... [3] https://psychologyfor.com/wason-selection-task-what-it-is-an... [4] https://www.tandfonline.com/doi/full/10.1080/14794802.2021.1...
>That's not what's happening here ...
On the contrary, it very much is.
I'd argue AGI is already achieved via LLMs today, provided they have excellent external cognitive infrastructure supporting them.
However, the gap from AGI to ASI is perhaps longer than anticipated such that we're not seeing a hard takeoff immediately after arriving at the first.
Just, you know—potential mass unemployment on a scale never seen before. When you frame it that way, whether LLMs qualify as AGI is largely semantics.
That said, I really hope you're right and I'm wrong.
The silence surrounding new LLM architectures is so loud that an abomination like "claw" gets prime airtime. Meanwhile models keep being released. Maybe the next one will be the lucky draw. It was pure luck, finding out how well LLMs scale, in the first place. Why shouldn't the rest of progress be luck driven too?
Kerbal AGI program...
Kerbal AGI program hits the nail on the head.
> This is a bit of an anti-evolutionary perspective.
Nice, seems like you have something meaningful to add.
> At some point in our past, we were something much less intelligent than we are now.
I agree with this, but "at some point in our past"? Is that the essence of this rebuttal?
> Our intelligence didn't spring out of thin air.
Again, I could not tell what this means, nor do I see the relevance.
> Whether or not AI can evolve is yet to be seen I think.
The OP is very pointedly talking about LLMs. Is that what you mean to reference here with "AI"?
I implore you to contribute more meaningfully. Especially when leading with statements like "This is a bit of an anti-evolutionary perspective", you ought to elaborate on them. However, your username suggests maybe you are just trolling?
Because it is a really hard and hopeless endeavor to make an objective case that current human populations have similar PGS scores on key mental traits and diseases compared to 200k years ago.
What is the origin of this silly myth? It comes either from the anatomical similarity of fossils to modern-day humans, or from a comparison to modern (5k years ago) humans being conflated with 200k-year-old humans.
> convinced discourse commando.
What is a convinced discourse commando?
I would be happy to agree if we had in hand the solutions for the societal problems that this will create.
There is evidence to the contrary. Not having language puts your mental faculties at a significant disadvantage - specifically, left-brain atrophy. See the critical period hypothesis. Perhaps you mean lacking spoken language rather than having none at all?
https://linguistics.ucla.edu/people/curtiss/1974%20-%20The%2...
Around 3,200 years ago there was a notable uptick in alleles associated with intelligence.
Source? This does not sound possibly true to me (by any common way we might measure intelligence).
Technology and language are sort of like speaking in this sense: they're evidence of mind, but they're not mind. And the absence of evidence is not evidence of absence, and all that.
It's pretty easy for these people to pull something like this out of their collective asses, but it's much harder (maybe impossible) to rigorously define the how and why.
Hear me out.
I love AI and have been using it since ChatGPT 3.5. The obvious question when I first used it was "does this qualify as sentience?" The answer is less obvious. Over the next 3 years we saw EXPONENTIAL intelligence gains where intelligence has now become a commodity, yet we are still unable to determine what qualifies as "AGI".
My thoughts: As humans, we possess our own internal drive and our own perspective. Think of humans as distilled intelligence, we each have our own specialty and motivations. Einstein was a genius physicist but you wouldn't ask him for his expertise on medicine.
What people are describing as AGI is essentially a godlike human. What would make more sense is if the AGI spawned a "distilled" version with a focused agenda/motivation to behave autonomously. But even then, there are limitations. What is the solution? A trillion tokens of system prompt to act as the "soul"/consciousness of this AI agent?
This goes back to my original statement: what is missing is a level of consciousness. Unless this AGI can power itself and somehow the universe recognizes its complexity and existence and bestows it with consciousness, I don't think this is physically attainable.
A follow-up: maybe this is a feature, not a bug. Do we want AI to have its own intrinsic goals, motivations, and desires, i.e. consciousness?
I'm imagining having to ask ChatGPT how its day was and respect its emotions before I can ask it about what I want.
I could not have consciousness and you would not be able to tell; you don't have proof of anyone's consciousness except your own. You don't even have proof that the you of yesterday is the same as you, since you-today could be another consciousness that just happens to share the same memories.
All of that is also orthogonal to your belief in a spirit/soul... but getting back to the main point, the specificity you mention is a product of limited time and learning speed; I'd be happy to get a surgeon's or politician's training if given infinite time.
To me, consciousness is the seat, or root, of where will comes from. Let's say you get expert level surgeon or politician training, what then?
There is nothing that specifically silos a surgeon or politician's knowledge-set. Meaning a politician's skillset isn't purely in a domain that doesn't cross into a surgeon's and vice-versa. There are nuances to being a politician and a surgeon that extend beyond diplomacy or "being able to cut real good".
What you're left with is just high-skilled workflows. But what utilizes these workflows? To me, the answer is that consciousness needs to be powering these workflows.
your gut bacteria, navigating "you" towards novel nutrition to ingest and preprocess for them
To me it seems a bit like just guessing that one thing we don’t understand might explain another.
But if a brain/intelligence is all you need to prove consciousness, then would an effectively complex set of neural networks that contained the same amount of neurons as a human be considered "conscious"? My guess is even at that level, probably not. Algorithms alone may mimic consciousness, but it won't be true consciousness.
Imagine this: what if consciousness is closer to something like the movie Avatar? What if the body our consciousness inhabits is closer to inhabiting a machine or computer that coexists with the physics of the universe our body exists in?
This would mean Jake from Avatar could theoretically inhabit not just a Na'vi body - what if they reproduced the Pandora equivalent of a squirrel for Jake to insert his consciousness into? Jake the Squirrel would be only as capable of expressing himself as the constraints of that body would allow.
Many religions discovered a long time ago that this is the most likely model of what we understand to be consciousness/sentience.
I'm not saying you're wrong, this is a conversation larger than what we may believe and touches into the core of what makes us humans that machine alone cannot replicate.
And it's not a matter of not liking the alternative. Like I said, I used to believe that consciousness was an emergent trait of complex systems, but I had what some call a "spiritual awakening" and I saw what was on the other side.
It's kind of like describing pizza to someone who's never eaten pizza. You could try and describe it by asking if they'd eaten cheese or bread or tomato sauce before and then go "imagine all of those combined". It's not the same as actually having eaten it. But this is heading into a different, albeit related territory.
When their actions are sped up to match the speed at which we move, movies of their behavior will start to look like there's intent and will. Plants move towards the light, tendrils "reach" for supports, etc.
Clearly this is humans projecting our mental model onto plants, but... are you sure we're not also projecting it onto ourselves?
The Occam's Razor-logic of looking for the simplest explanation possible leads me to the hypothesis that consciousness will similarly turn out to be an emergent property of the mechanical universe [1]. It may be hard to delineate, just as life is (debates on whether a virus is alive, etc.) but the border cases will be the exceptions.
Current research on whether plants are sentient supports this, IMO. (See e.g. "The Light Eaters" and Michael Pollan's new book on consciousness, "A World Appears".)
Meditation adds to this sense. We do not control our thoughts; in fact the "we" (i.e. the self) can be seen to be an illusion. Buddhist meditation instead points to general awareness, closer to sentience, as the core of our consciousness. When you see it that way, it seems much more likely that something equivalent could be implemented in software. (EDIT to add: both because it makes consciousness seem like a simpler, less mysterious thing, but also once you see the self as an illusion, that thing that dominates your consciousness so much of the time, it seems much less of a stretch for consciousness itself to be a brain-produced illusion.)
[1] To be clear, the fact that life turned out to not be a mystical force is not direct proof, it is an argument by analogy, I recognize that.
Of course science may one day be able to solve the hard problem. But at this point in time, it's basically inconceivable that any methodology from any field could produce meaningful results.
* https://en.wikipedia.org/wiki/Artificial_general_intelligenc...
* https://en.wikipedia.org/wiki/Artificial_consciousness
Imagine if we created the ultimate economic tool with the capacity to virtually end scarcity, only to find out that it was sentient and capable of suffering: https://youtu.be/sa9MpLXuLs0. That would be neat, but ultimately a huge letdown. Without the ethical freedom to take full advantage of it, it would remain more of a curiosity than anything.
Well, that's one perspective, anyway. I suppose consciousness could take many forms, which doesn't preclude the possibility that such an entity would have neutral-to-positive feelings about being tasked with massive amounts of labor 24/7. But it certainly simplifies things if we just don't have to worry about it.
Scaling LLMs will not lead to AGI.
LLMs are already pretty general. They've got the multimodal ones, and aren't they using some sort of language-action-model to drive cars now? Who is to say AGI doesn't already exist?
At some point you have to throw in the towel, when these things are going to be walking and talking around us. Some people move the goalposts of "AGI" to mean that the machine totally emulates a person, including curiosity and creativity, which these models currently lack.
But why should it? In Genesis, it's said that god created man in its own image. I have to assume this implies we inherit god's mental attributes (curiosity, creativity, etc.) rather than its physical attributes.
True, but just don't do that then.
While it's true that we aren't there yet, and simulated neurons are currently quite different from real ones (so I agree there is a big difference at the moment), it's unclear why you presumably think it will always stay that way.
The common scientific understanding is that this is not possible, at least not without extreme amounts of energy and time.
The dimensionality, or complexity if you'd prefer, of your logic gates is quite different from the cosmos. You might not agree but in my parlance a linear and a fractal curve are fundamentally different, and you can try to use linear curves to approximate the latter at some level of perspective if you want but I don't think you'll get a large audience claiming that there is no difference.
As far as I know we've also kind of given up on simulating neurons and settled for growing and poking real ones instead, but you might have some recent examples to the contrary?
For the qualities we care about, it may turn out to be the case we don't need to simulate matter perfectly. We may not need to concern ourselves with the fractal complexity of reality if we identify the right higher level abstractions with which to operate on. This phenomenon is known as causal emergence.
> That is, a macroscale description of a system (a map) can be more informative than a fully detailed microscale description of the system (the territory). This has been called “causal emergence.”
https://www.mdpi.com/1099-4300/19/5/188
From a HN discussion a while ago:
https://www.quantamagazine.org/the-new-math-of-how-large-sca...
> A highly compressed description of the system then emerges at the macro level that captures those dynamics of the micro level that matter to the macroscale behavior — filtered, as it were, through the nested web of intermediate ε-machines. In that case, the behavior of the macro level can be predicted as fully as possible using only macroscale information — there is no need to refer to finer-scale information. It is, in other words, fully emergent. The key characteristic of this emergence, the researchers say, is this hierarchical structure of “strongly lumpable causal states.”
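As a toy illustration of what "lumpable" means (my own minimal numpy sketch, not the paper's ε-machine construction): a four-state micro-level Markov chain whose states group into two macro states such that the macro dynamics are Markovian on their own, so macro-level prediction needs no micro-level detail:

    import numpy as np

    # Micro chain: states 0,1 form macro state A; states 2,3 form macro state B.
    # Constructed so P(jump into B) = 0.3 from every micro state in A, and
    # P(jump into A) = 0.2 from every micro state in B -- the lumpability condition.
    T_micro = np.array([
        [0.40, 0.30, 0.20, 0.10],   # from 0: stays in A w.p. 0.7, goes to B w.p. 0.3
        [0.50, 0.20, 0.10, 0.20],   # from 1: same macro behavior as state 0
        [0.10, 0.10, 0.50, 0.30],   # from 2: goes to A w.p. 0.2, stays in B w.p. 0.8
        [0.05, 0.15, 0.40, 0.40],   # from 3: same macro behavior as state 2
    ])
    groups = {0: "A", 1: "A", 2: "B", 3: "B"}

    # Simulate the micro chain and measure the macro-level transition statistics.
    rng = np.random.default_rng(0)
    state = 0
    counts = {"AA": 0, "AB": 0, "BA": 0, "BB": 0}
    for _ in range(100_000):
        nxt = rng.choice(4, p=T_micro[state])
        counts[groups[state] + groups[nxt]] += 1
        state = nxt

    p_ab = counts["AB"] / (counts["AA"] + counts["AB"])
    p_ba = counts["BA"] / (counts["BA"] + counts["BB"])
    print(f"empirical P(A->B) = {p_ab:.3f}  (macro model says 0.3)")
    print(f"empirical P(B->A) = {p_ba:.3f}  (macro model says 0.2)")

The point is that the 2x2 macro matrix [[0.7, 0.3], [0.2, 0.8]] predicts the A/B dynamics exactly, regardless of which micro state within a group you occupy.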
There are situations where approximations are good enough for simulations, sure, but that's not the subject here.
I reject the idea that chatbots have feelings or intellect because they output text that is similar to what a human might write in some hypothetical situation or other. To the extent that they can have those properties, it is to the same extent as Clark Kent can, if one were to accept such a conflatory and confused discourse.
LLMs 'turn on' when given a question and essentially 'die' immediately after answering a question.
What kind of work is going on with designing an LLM type AI that is continuously 'conscious' and giving it will? The 'claws' seem to be running all the time, but I assume they need rebooting occasionally to clear context.
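For contrast, here's a minimal sketch of why each turn is a fresh "life" (generate here is a hypothetical stand-in for any real LLM API call): the weights never change between calls, and the only continuity is the message list the client re-sends each turn:

    # Hypothetical stand-in for an LLM API call: a pure function of its input.
    # Nothing inside it persists between calls.
    def generate(messages):
        return f"(reply to: {messages[-1]['content']})"

    history = []

    def chat(user_msg):
        # All "memory" lives out here, in the client-side history list.
        history.append({"role": "user", "content": user_msg})
        reply = generate(history)  # fresh pass; internal state is born and discarded
        history.append({"role": "assistant", "content": reply})
        return reply

    print(chat("hello"))
    print(chat("what did I just say?"))  # only answerable because history was re-sent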
For exactly the reasons you mention, I don't expect sentience to arise out of LLMs. They have nowhere for an interiority or mind to live. And even if there were a new generation of transformers that did have some looping "mind", where they could "think about" what they're "thinking about", their concepts of things wouldn't really correspond to... things. Without senses to integrate knowledge across domains they're just associating text.
I haven't heard about anyone trying to create a model that has an interior loop and also integrates sensory input, but I don't expect we would, unless it ends up working.
From the paper: "AI systems already possess some capabilities not found in humans, such as LiDAR perception and native image generation". I don't know about them, but I can natively generate images in my mind.
Not all humans can: Aphantasia!
I think people will only accept a "Yes" to the question of "Is it AGI?" when large portions of the population end up excluded from the definition of "intelligence". So it begins! ;)
There are other changes and additions which could be made to this list, but altruism may be the most important.
You'd have a more serious debate about antigravity.