Artificial Intelligence

User avatar
pErvinalia
On the good stuff
Posts: 61527
Joined: Tue Feb 23, 2010 11:08 pm
About me: Spelling 'were' 'where'
Location: dystopia
Contact:

Re: Artificial Intelligence

Post by pErvinalia » Mon Mar 23, 2026 9:53 pm

L'Emmerdeur wrote:
Mon Mar 23, 2026 9:37 pm
It seems extremely unlikely that consciousness is simply a concatenation of algorithms.
This is an argument from incredulity. We're going to need something more concrete than that.
Sent from my penis using wankertalk.
"The Western world is fucking awesome because of mostly white men" - DaveDodo007.
"Socialized medicine is just exactly as morally defensible as gassing and cooking Jews" - Seth. Yes, he really did say that..
"Seth you are a boon to this community" - Cunt.
"I am seriously thinking of going on a spree killing" - Svartalf.

User avatar
JimC
The sentimental bloke
Posts: 74629
Joined: Thu Feb 26, 2009 7:58 am
About me: To be serious about gin requires years of dedicated research.
Location: Melbourne, Australia
Contact:

Re: Artificial Intelligence

Post by JimC » Mon Mar 23, 2026 10:05 pm

I would never say that it is impossible for a non-biological machine/software to be conscious, but the current versions based on large language models have not reached that point, IMO, and achieving consciousness would require something beyond just adding more processing power or training. So it's not that I feel uneasy about the possibility of non-biological consciousness; it's just that current AIs don't make the cut...
Nurse, where the fuck's my cardigan?
And my gin!


Re: Artificial Intelligence

Post by pErvinalia » Mon Mar 23, 2026 10:41 pm

I think we need to be content with saying that at present consciousness is a mystery that we can't explain. That then puts us in a difficult position of saying that we can't refute that AI is (or will be) conscious. I certainly don't think the current batch of AI developers want to concede this point. If they did, then we would have to elevate AI moral status to that of humans, at which point further development would be hampered by ethical concerns. Google or OpenAI certainly don't want their commercial ambitions hampered.

User avatar
Brian Peacock
Tipping cows since 1946
Posts: 40766
Joined: Thu Mar 05, 2009 11:44 am
About me: Ablate me:
Location: Location: Location:
Contact:

Re: Artificial Intelligence

Post by Brian Peacock » Mon Mar 23, 2026 10:53 pm

pErvinalia wrote:
Mon Mar 23, 2026 9:32 pm
Brian Peacock wrote:
Mon Mar 23, 2026 1:17 pm
pErvinalia wrote:
Brian Peacock wrote:
Mon Mar 23, 2026 10:22 am
pErvinalia wrote:
We don't actually know if they have a self, and we probably will never be able to know. Just the same as I can't definitively know if you have self-awareness.
I think the problem with this is that, at some level, you're assuming your selfhood(!) is something which is only derived from, and exists within, your mind, within your intellect shall we say, rather than as the entirety of your being existing within your bodily interactions with the environment. Don't believe Descartes, you are far more than your thoughts!
We all work from the assumption that everyone else has consciousness. If we didn't then murdering people would be OK. At some point we may need to extend that to AI. Even if the chance is small, we will have an ethical responsibility to treat them as moral beings.
If correct then presumably you'd advocate extending fundamental human rights to AI?
If we believe they are conscious, then yes. Wouldn't you?
Then we should extend the same rights to cows, pigs, and sheep, no?
Rationalia relies on voluntary donations. There is no obligation of course, but if you value this place and want to see it continue please consider making a small donation towards the forum's running costs.
Details on how to do that can be found here.

.

"It isn't necessary to imagine the world ending in fire or ice.
There are two other possibilities: one is paperwork, and the other is nostalgia."

Frank Zappa

"This is how humanity ends; bickering over the irrelevant."
Clinton Huxley » 21 Jun 2012 » 14:10:36 GMT
.

User avatar
Tero
Just saying
Posts: 52742
Joined: Sun Jul 04, 2010 9:50 pm
About me: 8-34-20
Location: USA
Contact:

Re: Artificial Intelligence

Post by Tero » Mon Mar 23, 2026 10:56 pm

So far, the language part is interesting to me. We don't learn language by reading a grammar book. We learn it by comparing thousands of examples we hear. I think AI does that.
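That learning-by-examples idea is, loosely, how statistical language models get started. A toy sketch in Python (a bigram model built from raw counts; modern LLMs use neural networks rather than count tables, but the "compare thousands of examples, no grammar book" principle is the same, and the corpus here is obviously made up):

```python
import random
from collections import defaultdict

# "Learn by examples": count which word follows which in the corpus,
# then generate by sampling those counts. No grammar rules anywhere.
corpus = "the cat sat on the mat the dog sat on the rug".split()

follows = defaultdict(list)
for a, b in zip(corpus, corpus[1:]):
    follows[a].append(b)

def babble(start, n=5, rng=None):
    # Walk the learned table, picking each next word at random
    # from the words actually observed to follow the current one.
    rng = rng or random.Random(0)
    out = [start]
    for _ in range(n):
        options = follows.get(out[-1])
        if not options:
            break
        out.append(rng.choice(options))
    return " ".join(out)

print(babble("the"))
```

Everything the model "knows" comes from the example pairs it was shown, which is the point Tero is making.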
http://karireport.blogspot.com/
Inhibition, well, you can fly
Out the window to the clear blue sky
It will mess your suit, it will make you cry
It doesn't matter, give me Mumdane pie


Re: Artificial Intelligence

Post by Brian Peacock » Mon Mar 23, 2026 11:19 pm

pErvinalia wrote:
Mon Mar 23, 2026 9:39 pm
Brian Peacock wrote:
Mon Mar 23, 2026 9:16 pm
pErvinalia wrote:
Mon Mar 23, 2026 11:00 am
Brian Peacock wrote:
Mon Mar 23, 2026 10:05 am
pErvinalia wrote:
Descriptions of human cognition are abstractions. Meat can't model. We may as well say that LLMs model. Again, an abstraction. And it's important to note that we don't actually know how LLMs "think". It is currently a black box. Like the human mind.
Nitpickery. A theory of mind is a meat-based model, as is understanding what will happen to your trousers if you don't wear a belt.
There is no literal modelling going on. It's an abstraction we give to the black box based on observing inputs and outputs. The exact same thing can be done with AIs.
You're putting aside the word 'appearance' in what I wrote. Sure we assume consciousness in other humans based on our own observations of ourselves, but we actually do know the structure and content of MLSs like LLMs. We don't have to abstract their outputs. You're also putting aside that a theory of mind &/or an understanding of the natural world is also a model.
We're not abstracting outputs. We are abstracting the process of getting outputs from inputs. Saying that we model things, and AI doesn't, is saying that we treat the relationship between inputs and outputs in AI differently to how we treat the same in humans. What's the basis for treating AI differently? I'd suggest it's a feeling. Up to this point the only data point for consciousness is biology. We feel uneasy with the idea that a non-biological machine could be conscious. But if unease is all the objection amounts to, then it isn't a reasonable objection to the idea that machines can be conscious. We've got to come up with something better than that.
But I'm not saying AI (specifically LLMs) don't model things. They clearly model human text-based language. However, they do this statistically, whereas we do it more stochastically.

I'm not arguing for treating 'AI' differently to humans, I'm saying that MLSs are fundamentally different to humans in pretty much every regard, and therefore we should avoid anthropomorphising them in the language we use to discuss them. I'm not offering this on the basis of 'feeling', but to forward a more dispassionate, consistent discussion of something that is having a real impact on the real world and the real people and things in it.

I'm not a luddite when it comes to tech - but I think that a lot of rot is being spoken about 'AI', and certain conceptions of MLS's are clearly being touted as the potential usurpers of humanity. (now there's something worth unpacking!) This, as I have said, is to individualise the issues, to focus the issues as being a problem specific to the objects 'AI' represent to us -- as some form of proto intelligence with inscrutable goals, motivations, and intellects -- and distracts us from talking about who is funding, developing, and deploying MLSs, what they're hoping MLSs can do for them, and why.

I don't feel uneasy about non-biological or non-human consciousnesses. If an entity is conscious then its difference to human baseline is irrelevant - it is essentially a person and should be afforded the same fundamental rights we expect for ourselves - whether that's home-grown MLSs or little green people from Alpha Centauri. I'd go further, I say that mere sentience is enough to grant an entity fundamental rights.


Re: Artificial Intelligence

Post by pErvinalia » Mon Mar 23, 2026 11:29 pm

Brian Peacock wrote:
Mon Mar 23, 2026 10:53 pm
pErvinalia wrote:
Mon Mar 23, 2026 9:32 pm
Brian Peacock wrote:
Mon Mar 23, 2026 1:17 pm
pErvinalia wrote:
Brian Peacock wrote:
Mon Mar 23, 2026 10:22 am
I think the problem with this is that, at some level, you're assuming your selfhood(!) is something which is only derived from, and exists within, your mind, within your intellect shall we say, rather than as the entirety of your being existing within your bodily interactions with the environment. Don't believe Descartes, you are far more than your thoughts!
We all work from the assumption that everyone else has consciousness. If we didn't then murdering people would be OK. At some point we may need to extend that to AI. Even if the chance is small, we will have an ethical responsibility to treat them as moral beings.
If correct then presumably you'd advocate extending fundamental human rights to AI?
If we believe they are conscious, then yes. Wouldn't you?
Then we should extend the same rights to cows, pigs, and sheep, no?
Potentially, yes, if they are conscious. Although, I feel comfortable invoking the naturalistic fallacy and stating we kill cows etc for a good reason (food). But on the other hand, as much as it would pain me having to eat vegetables only, I could accept that we shouldn't kill livestock for our benefit.

I'd ask you, do you kill ants and spiders and rats etc? Are they conscious? This introduces the concept of levels of consciousness. Similarly, an AI might be barely conscious. It seems we do set an arbitrary point of consciousness below which we think it's OK to extinguish a life. The problem, of course, is that we don't know if or how much other animals (or AI) are conscious. It's a bit of a bind.


Re: Artificial Intelligence

Post by pErvinalia » Mon Mar 23, 2026 11:56 pm

Brian Peacock wrote:
Mon Mar 23, 2026 11:19 pm
pErvinalia wrote:
Mon Mar 23, 2026 9:39 pm
Brian Peacock wrote:
Mon Mar 23, 2026 9:16 pm
pErvinalia wrote:
Mon Mar 23, 2026 11:00 am
Brian Peacock wrote:
Mon Mar 23, 2026 10:05 am


Nitpickery. A theory of mind is a meat-based model, as is understanding what will happen to your trousers if you don't wear a belt.
There is no literal modelling going on. It's an abstraction we give to the black box based on observing inputs and outputs. The exact same thing can be done with AIs.
You're putting aside the word 'appearance' in what I wrote. Sure we assume consciousness in other humans based on our own observations of ourselves, but we actually do know the structure and content of MLSs like LLMs. We don't have to abstract their outputs. You're also putting aside that a theory of mind &/or an understanding of the natural world is also a model.
We're not abstracting outputs. We are abstracting the process of getting outputs from inputs. Saying that we model things, and AI doesn't, is saying that we treat the relationship between inputs and outputs in AI differently to how we treat the same in humans. What's the basis for treating AI differently? I'd suggest it's a feeling. Up to this point the only data point for consciousness is biology. We feel uneasy with the idea that a non-biological machine could be conscious. But if unease is all the objection amounts to, then it isn't a reasonable objection to the idea that machines can be conscious. We've got to come up with something better than that.
But I'm not saying AI (specifically LLMs) don't model things. They clearly model human text-based language. However, they do this statistically, whereas we do it more stochastically.
LLMs (and other AIs?) use randomised algorithms to process information, so it's not clear that there is a distinction between how we model and how they model. And there are certainly things going on in LLMs that we can't explain: they show emergent behaviour that isn't explained by our understanding of their mechanics. So the AIs are doing "thinking" to some degree that, just as in humans, we can't quite pin down.
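The randomness in question is easy to make concrete: LLMs typically score candidate next tokens, turn the scores into probabilities, and then sample. A toy sketch (the scores are made up, not from any real model):

```python
import math
import random

# Toy next-token sampler: scores -> softmax probabilities -> random draw.
# Identical input can therefore produce different output on each call.
def sample_next(scores, temperature=1.0, rng=random):
    exps = {tok: math.exp(s / temperature) for tok, s in scores.items()}
    total = sum(exps.values())
    r = rng.random()
    cum = 0.0
    for tok, e in exps.items():
        cum += e / total          # walk the cumulative distribution
        if r <= cum:
            return tok
    return tok                    # guard against float rounding

scores = {"cat": 2.0, "dog": 1.5, "rug": 0.1}  # made-up scores
rng = random.Random(0)
picks = [sample_next(scores, temperature=0.8, rng=rng) for _ in range(10)]
print(picks)
```

Lower temperatures concentrate the draw on the top-scoring token; higher ones spread it out, which is why the same prompt can yield different answers.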
I'm not arguing for treating 'AI' differently to humans, I'm saying that MLSs are fundamentally different to humans in pretty much every regard, and therefore we should avoid anthropomorphising them in the language we use to discuss them. I'm not offering this on the basis of 'feeling', but to forward a more dispassionate, consistent discussion of something that is having a real impact on the real world and the real people and things in it.
I don't think it's necessarily anthropomorphising to say an AI thinks and has goals. Are we anthropomorphising dogs when we say they have thoughts and goals (a dog's goals might be to eat and reproduce)? We are simply describing the system with words that can apply. My point has been that because we don't really know what is going on in either human or AI cognition, we can't definitively say that AI "cognition" (or even consciousness) is different to humans. It might very well be, but we don't really have the information to make a definitive call.
I'm not a luddite when it comes to tech - but I think that a lot of rot is being spoken about 'AI', and certain conceptions of MLS's are clearly being touted as the potential usurpers of humanity. (now there's something worth unpacking!) This, as I have said, is to individualise the issues, to focus the issues as being a problem specific to the objects 'AI' represent to us -- as some form of proto intelligence with inscrutable goals, motivations, and intellects -- and distracts us from talking about who is funding, developing, and deploying MLSs, what they're hoping MLSs can do for them, and why.
I asked before, what are the hidden intentions of those funding and developing AI? No need to mention Musk, as he clearly is developing biased AI to further his political beliefs. But what of the others? Both Altman and Hassabis have repeatedly stated they are in this to better humankind. Altman, Hassabis and Amodei are all concerned about AI safety and the potential to cause harm to humans. Of course, they have all been corrupted to some degree by commercial/financial motives of their backers/owners. But what do you think is going on? To me, the biggest question is political. How do we manage a world where machines do most of the work, from an economic sense and a human purpose sense?


Re: Artificial Intelligence

Post by Brian Peacock » Tue Mar 24, 2026 12:24 am

I'd like to avoid stumbling over a definition of consciousness if we can. We might not have a solid enough basis to say we know if or how other humans, animals, or 'AI's are conscious, but we have a theory of mind by which we at least understand that other humans are as aware as us of something essential and internal to themselves as well as being aware of objects in, and the range of states of, the external environment. Can we agree that consciousness is the state of being aware of ourselves in relation to the environments we inhabit?

We observe that the bitch seems to love her puppies in much the way we love our infant children, we see that horses need the company of other horses in order to thrive just as we need the company of other humans (most often found in families, social groups, and communities), that pigs feel pain when you stick them just as we do, that sheep demonstrate fear just as we might if our well-being and lives were similarly threatened.

On less firm ground, we might also suggest that an ant colony acts like a single entity or organism, and we might make the same suggestion about schools of fish, clouds of starlings, herds of wildebeest, or even human societies. We know that all organisms -- all living things, conscious or otherwise -- react to environmental stimuli, that the well-being of organisms depends on overlapping webs of ecological complexity that place species and individuals in a range of dependent and competitive relationships with other organisms, and that organisms reproduce themselves, with a huge range of information passed between organisms through reproduction - at a great variety of scales.

So where does the 'AI' sit within all that? The prerequisite for consciousness therefore appears to be the state of being alive. Are MLSs alive? If the consciousness of an organism is the state of it being aware of i) itself and ii) itself in relation to the objects and state of the environment around it, are MLSs conscious? I'd say not, but if you disagree -- if you think they're conscious in a different way -- then it's up to you to explain what consciousness is in such a way that it enfolds MLSs into the story along with us, the horses, dogs, pigs, sheep, and possibly the schools of fish, ant colonies, and herds of wildebeesties.



Re: Artificial Intelligence

Post by pErvinalia » Tue Mar 24, 2026 12:43 am

My main point has been that we don't (and at the moment, and possibly forever, can't) know whether AIs are conscious. Remember, I don't even know if you are conscious. We rely on self-reporting. Some AIs self-report that they are conscious (self-aware) and have feelings. Should we treat these self-reports differently? I know you said you had no objection to the idea that silicon could be conscious, but I feel there's some hidden biological chauvinism in there. You said one prerequisite for consciousness might be being alive. That's a biological condition. Can a silicon machine be alive in this context? It seems like you are ruling it out pre-emptively.


Re: Artificial Intelligence

Post by pErvinalia » Tue Mar 24, 2026 1:28 am

Can AIs think (and the consciousness question)
Artificial intelligence (AI) has reached remarkable heights. AI systems can diagnose diseases, play complex strategy games, pilot self-driving cars, write essays, generate code, and mimic conversation with uncanny fluency. But do these feats count as cognition? And could AI be conscious?

As discussed in Part 1, cognition can be broadly defined as processes that acquire, process, and use information to guide behavior. By this basic definition, many AI systems qualify as cognitive. Consciousness, usually defined as the capacity for subjective experience/awareness in some form, is clearly a higher bar—even for minimal consciousness.

But the question of whether AI systems warrant being considered truly cognitive is still a matter of debate. To illustrate this, let's compare the frameworks of LeDoux vs. Ginsburg and Jablonka.

Evaluating Cognition and Consciousness in AI
In this series, we have considered two evolutionary-based frameworks to frame the debate about what counts as cognitive and what as conscious: LeDoux’s model-based definition and Ginsburg and Jablonka’s learning-based definition.

LeDoux's Framework

LeDoux defined cognition as the ability to construct and use internal mental models; he distinguished three systems—non-cognitive habits/reflexes (System 1), unconscious model-based control (System 2), and conscious, deliberative cognition (System 3).

By LeDoux’s strict criteria—internal mental models that allow an agent to flexibly guide its behavior—most current AI falls below the threshold for cognition (a few research systems build narrow predictive models, but these fall well short of the richly integrated models that, in LeDoux’s view, underpin cognition and, at a higher tier, the possibility of consciousness).

Ginsburg and Jablonka's Framework

Ginsburg and Jablonka took a broader view than LeDoux, defining cognition as "the systemic set of processes that enables value-sensitive acquisition, encoding, evaluation, storage, retrieval, decoding and transmission of information."2 By this standard, most AI systems would qualify as cognitive.

But do any achieve Unlimited Associative Learning (UAL)—the capacity marking the threshold for minimal consciousness? Current AI systems miss the hallmarks of UAL. They can’t reliably handle novel patterns, learn across time gaps, flexibly adjust what matters when conditions change, or build the layered chains of learning that allow transfer and categorization. They also lack the cross-modal, enduring, value-sensitive representations that give UAL its power, showing only narrow, engineered glimpses of these capacities.3,4

The Learning Gap
In Part 4, we recognized the centrality of learning to cognition. Animals learn continuously throughout life through trial and error, exploration, and reward-seeking, with learning integrated with their needs, emotions, and embodiment.

Most AI systems learn in two distinct stages: computationally intensive pre-training on vast datasets, followed by deployment with frozen parameters. This contrasts sharply with biological organisms' continuous lifelong learning. A rat navigating a changing maze adapts continuously based on new information and reward patterns.

Current AI systems suffer from "catastrophic forgetting"5-7—severe interference where learning new information significantly disrupts previously learned knowledge—a challenge that biological systems generally avoid through specialized mechanisms.8 This is why AI researchers say most AI lacks continual learning capacity.9,10
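The interference behind catastrophic forgetting can be shown with a deliberately tiny sketch: a single shared parameter trained by gradient descent on task A, then on task B. Because both tasks pull on the same weight, learning B erases A. Real networks have millions of parameters, but the shared-weights mechanism is the same:

```python
# One shared parameter trained by gradient descent on squared error.
def train(w, x, target, lr=0.5, steps=50):
    for _ in range(steps):
        pred = w * x
        w -= lr * (pred - target) * x  # d/dw of 0.5 * (pred - target)^2
    return w

w = 0.0
w = train(w, x=1.0, target=2.0)    # task A: we want f(1) to be 2
task_a_answer = w * 1.0            # close to 2.0

w = train(w, x=1.0, target=-3.0)   # task B: now we want f(1) to be -3
task_a_after_b = w * 1.0           # task A's answer has been overwritten
print(task_a_answer, task_a_after_b)
```

After training on task B, asking the model the task-A question returns the task-B answer: the old knowledge has nowhere to live once the shared weight moves.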

The Grounding Problem
Current AI lacks genuine world understanding despite impressive capabilities. As Meta's Yann LeCun noted in October 2024, world models are "key to human-level AI" but may be a decade away.11 AI processes symbols without semantic grounding—a cat knows birds through chasing them, children understand water by splashing it.

Without bodies or sensorimotor systems, AI lacks the embodied grounding that connects symbols to meaning through physical interaction with the world.12 As philosopher John Searle argued in his Chinese Room thought experiment, syntactic symbol manipulation—following rules for manipulating symbols—does not necessarily create semantic understanding of what those symbols mean.13 Animals develop cognition through active environmental engagement—a dynamic loop of perceiving, predicting, and acting that AI currently cannot replicate.14

Biological Imperatives
Anil Seth argues that biological cognition should be understood through biological naturalism,15 where mental states are shaped by survival needs. Biological cognition serves survival through predictive models integrated with homeostatic needs, emotion,16 and self-regulation. AI models, by contrast, operate without this survival-oriented framework—they optimize for assigned objectives rather than self-maintenance and adaptation.

Some recent work suggests that consciousness may depend on "mortal" computations — processes inseparable from the fragile, metabolically maintained substrate of living brains. By contrast, standard AI relies on "immortal" computation: algorithms that run identically across hardware through constant error correction, insulated from the kinds of decay and repair that characterize biological wetware.17,18

How AI Differs from Biological Cognition
Animals learn within richly structured, multimodal contexts and flexibly reapply knowledge. AI systems often struggle with "out-of-distribution failure"—poor performance when faced with data unlike their training—because they latch onto superficial statistical cues rather than deeper patterns reflecting a true understanding and modelling of the world.19
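Out-of-distribution failure is easy to illustrate: fit a model on a narrow slice of data, and the superficial cue it latches onto (here, a straight-line trend) looks fine in-distribution but fails badly outside it. A toy sketch with a made-up quadratic relationship:

```python
# Fit a straight line to data drawn only from a narrow range (0..1),
# where the true relationship is y = x^2. In-distribution the line
# looks fine; far outside the training range it is wildly wrong.
xs = [i / 10 for i in range(11)]           # training inputs: 0.0 .. 1.0
ys = [x * x for x in xs]                   # true relationship

# Ordinary least-squares line through the training data.
n = len(xs)
mean_x, mean_y = sum(xs) / n, sum(ys) / n
slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
         / sum((x - mean_x) ** 2 for x in xs))
intercept = mean_y - slope * mean_x

def model(x):
    return slope * x + intercept

in_dist_error = abs(model(0.5) - 0.5 ** 2)     # small
ood_error = abs(model(10.0) - 10.0 ** 2)       # huge
print(in_dist_error, ood_error)
```

The line is an accurate summary of the training slice and a hopeless one of the world beyond it, which is the shape of the failure the quoted passage describes.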

Beyond these basic biological qualities, current AI systems certainly lack the highest forms of cognition characteristic of humans. They do not know that they know. They do not reflect on their actions with a sense of self. They lack the inner life that supports metacognition and introspection.20

Current AI systems are remarkable information processors, but they operate in a different cognitive category, one that relies primarily on sophisticated statistical pattern matching.21 Their cognitive architecture remains fundamentally different from biological systems—not just in implementation, but in the deep integration of function with living matter.22,23

Conscious AI?
As pointed out throughout this series, cognition and consciousness are not the same thing. Current LLMs and other AIs give the impression of being conscious, but that is only because they are so good at imitating human conversation. Most experts remain highly skeptical that AIs are conscious at all, cautioning that their behavior merely creates the illusion of consciousness.24

However, if AIs, or perhaps more likely robots with embodied AI, can be engineered to use internal models the way LeDoux defines cognition (especially if they become capable of System 3 cognition), then quite possibly they could become conscious. Or perhaps they might become at least minimally conscious, like a wide range of animals arguably are according to Ginsburg and Jablonka's criteria—if they become capable of true Unlimited Associative Learning. Full consciousness, by LeDoux's definition, would require meta-representations and narrative self-models.

The question of whether we should ever want to engineer conscious AI is another question entirely, one that will require much sober second thought—if we even have the time and opportunity for such consideration, given the rapid pace of AI development. The possibility that AI might inadvertently develop consciousness cannot be entirely dismissed.
https://www.psychologytoday.com/us/blog ... ally-think


Re: Artificial Intelligence

Post by JimC » Tue Mar 24, 2026 1:38 am

Anil Seth argues that biological cognition should be understood through biological naturalism,15 where mental states are shaped by survival needs. Biological cognition serves survival through predictive models integrated with homeostatic needs, emotion,16 and self-regulation. AI models, by contrast, operate without this survival-oriented framework—they optimize for assigned objectives rather than self-maintenance and adaptation.
I very much agree with this.


Re: Artificial Intelligence

Post by pErvinalia » Tue Mar 24, 2026 1:55 am

Jim agrees with Seth!


Re: Artificial Intelligence

Post by JimC » Tue Mar 24, 2026 2:05 am

:lol:


Re: Artificial Intelligence

Post by pErvinalia » Tue Mar 24, 2026 2:24 am

LeDoux defined cognition as the ability to construct and use internal mental models
As I hinted earlier, I have always had a problem with this sort of language. It's too wibbly. The brain isn't literally modelling, we just abstract and call what it is doing modelling, without knowing what's actually going on in the brain. How can we say that AI doesn't internally model, when we don't know what internal modelling actually correlates with? I'm with MacDoc here. If the inputs and the outputs are the same, then we may as well assume that the internal processes are the same. That's theory of mind right there. On what basis would we assume that the internal processes are not the same? Incredulity doesn't count.
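The inputs-and-outputs argument can be made concrete with a toy example: two systems whose observable behaviour is identical even though their internals are nothing alike. Whether identical I/O licenses assuming identical internals is exactly what's in dispute, but the example shows why observation alone can't settle it:

```python
# Two "black boxes" with identical observable behaviour and entirely
# different internals: arithmetic vs a memorised lookup table.
def adder_arithmetic(a, b):
    return a + b                  # computes the answer

TABLE = {(a, b): a + b for a in range(10) for b in range(10)}

def adder_lookup(a, b):
    return TABLE[(a, b)]          # no arithmetic at all, just recall

# Over the whole observable domain the two are indistinguishable.
indistinguishable = all(adder_arithmetic(a, b) == adder_lookup(a, b)
                        for a in range(10) for b in range(10))
print(indistinguishable)  # True
```

From outside the boxes there is no experiment, within the domain, that tells the "modeller" from the "memoriser"; any claim about their internals has to come from somewhere other than their I/O behaviour.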
