Artificial Intelligence

User avatar
Tero
Just saying
Posts: 52742
Joined: Sun Jul 04, 2010 9:50 pm
About me: 8-34-20
Location: USA
Contact:

Re: Artificial Intelligence

Post by Tero » Sun Mar 22, 2026 12:01 pm

Brian Peacock wrote:
Sat Mar 21, 2026 10:38 pm


The AI doesn't want anything. The LLM doesn't guess - it uses a sophisticated statistical model to follow one word with another depending on the subject, context, and what it has already produced. I think we all need to be wary of lazy language which implies or ascribes agency to AI - even when talking about so-called AI Agents.
I finally conceded that it uses the journalists' definition of "viihde", which is mostly the lighter side of entertainment. The layman is not so picky. It correctly identified the accepted use of "viihde" when you are talking about TV programming in general, covering everything including sports. But it would not be journalistic style to call a drama "viihde", whereas it is not as wrong to label it entertainment in English.

I do believe Google wants us to come back, so they can start charging us later. So "AI wants" is the same as Google wants. But I was also referring to the AI as more or less a teacher, so it "wanting" something is merely it pointing out that this word is better, more accurate. I am quite surprised how well it picks the word that would give you a better grade if you were in school. It even gets the right adjective when you have a couple to choose from.

So yes, teachers are now bored. It writes like a first-year college student.
http://karireport.blogspot.com/
Inhibition, well, you can fly
Out the window to the clear blue sky
It will mess your suit, it will make you cry
It doesn't matter, give me Mumdane pie

User avatar
Brian Peacock
Tipping cows since 1946
Posts: 40766
Joined: Thu Mar 05, 2009 11:44 am
About me: Ablate me:
Location: Location: Location:
Contact:

Re: Artificial Intelligence

Post by Brian Peacock » Mon Mar 23, 2026 12:18 am

pErvinalia wrote:
Sun Mar 22, 2026 3:09 am
As MacDoc intimated, if the input/output is the same, then we may as well assume the internals are the same between brain/mind and AI. Now, input/output isn't the same between human and AI yet, but there's some reason to think that they will be eventually.
I'd agree if the AI reasoned, but it's doing something rather different. While I think we can say that, in some way, LLMs have passed the Turing Test, it has also been shown that the Turing Test, in relation to LLMs, only deals in a very narrow element of communication--comprehensible written text--which cannot easily be generalised to human intelligence. The LLMs have no perceptions, no experiences, no responses to environmental stimuli, and no knowledge of the world or the things in it. LLMs use statistical probability models to generate their outputs, whereas human outputs emerge with the appearance of reactive models with predictive elements.
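To make the 'statistical probability model' point concrete, here is a toy Python sketch of the kind of next-word sampling loop I mean. The vocabulary and probabilities are invented, and a real LLM computes its distribution from billions of learned parameters rather than a lookup table.

Code: Select all

# Toy illustration of next-word prediction: pick each word from a
# probability distribution conditioned on what came before.
import random

def next_word_distribution(context):
    # A real LLM computes this distribution with a huge learned model;
    # here it is a hard-coded stand-in for illustration only.
    if context and context[-1] == "the":
        return {"cat": 0.4, "dog": 0.35, "weather": 0.25}
    return {"the": 0.5, "a": 0.3, "it": 0.2}

def generate(prompt, n_words=5):
    words = prompt.split()
    for _ in range(n_words):
        dist = next_word_distribution(words)
        choices, weights = zip(*dist.items())
        words.append(random.choices(choices, weights=weights)[0])
    return " ".join(words)

print(generate("I saw the"))

There is no goal and no knowledge of cats or weather in there - just a repeated draw from a conditional distribution.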

LLMs have to treat all their training data as true, and expanding the training data relies on including increasingly novel, esoteric, or otherwise exceptional material - material that now includes the output of previous iterations of LLMs themselves. That increases the risk of so-called 'model collapse', where an LLM produces progressively less diverse, less accurate, and less comprehensible output as the training-data load increases.
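A crude cartoon of that collapse dynamic: re-fit a distribution to its own samples over and over, and the rarer words tend to disappear for good. This uses made-up numbers and is only an illustration, not a simulation of any real model.

Code: Select all

# Each 'generation' is trained only on the previous generation's output.
# Once a rare word fails to appear in a sample, it is gone forever.
import random
from collections import Counter

vocab = ["common", "usual", "typical", "rare", "esoteric"]
probs = [0.35, 0.30, 0.20, 0.10, 0.05]  # generation 0: trained on human text

for generation in range(8):
    sample = random.choices(vocab, weights=probs, k=50)  # model output
    counts = Counter(sample)
    probs = [counts[w] / len(sample) for w in vocab]     # retrain on output
    surviving = sum(1 for p in probs if p > 0)
    print(f"gen {generation}: {surviving}/{len(vocab)} words survive",
          [round(p, 2) for p in probs])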

I'm standing by the statement that we all need to be wary of employing lazy language which (quite literally) personifies the system, or implies or ascribes agency to systems we interact with conversationally - LLMs. Would we talk similarly about the same processes, when used to predict health conditions by analysing data from Optical Coherence Tomography, as having desires or wants or personal financial needs, as Tero did?
macdoc wrote:
Sun Mar 22, 2026 12:14 am
Walks like a duck, talks like a duck......
You are not a duck. Nor is the large language model. The LLM doesn't talk - you bring that to the story when you read or listen to its output.
Rationalia relies on voluntary donations. There is no obligation of course, but if you value this place and want to see it continue please consider making a small donation towards the forum's running costs.
Details on how to do that can be found here.

.

"It isn't necessary to imagine the world ending in fire or ice.
There are two other possibilities: one is paperwork, and the other is nostalgia."

Frank Zappa

"This is how humanity ends; bickering over the irrelevant."
Clinton Huxley » 21 Jun 2012 » 14:10:36 GMT
.

User avatar
pErvinalia
On the good stuff
Posts: 61527
Joined: Tue Feb 23, 2010 11:08 pm
About me: Spelling 'were' 'where'
Location: dystopia
Contact:

Re: Artificial Intelligence

Post by pErvinalia » Mon Mar 23, 2026 12:29 am

Brian Peacock wrote:
Mon Mar 23, 2026 12:18 am
...whereas human outputs emerge with the appearance of reactive models with predictive elements.
Descriptions of human cognition are abstractions. Meat can't model. We may as well say that LLMs model. Again, an abstraction. And it's important to note that we don't actually know how LLMs "think". It is currently a black box. Like the human mind.
Sent from my penis using wankertalk.
"The Western world is fucking awesome because of mostly white men" - DaveDodo007.
"Socialized medicine is just exactly as morally defensible as gassing and cooking Jews" - Seth. Yes, he really did say that..
"Seth you are a boon to this community" - Cunt.
"I am seriously thinking of going on a spree killing" - Svartalf.


Re: Artificial Intelligence

Post by pErvinalia » Mon Mar 23, 2026 12:44 am

Brian Peacock wrote:
Mon Mar 23, 2026 12:18 am

I'd agree if the AI reasoned, but it's doing something rather different.
Just on this, there are now what are called reasoning models, which step through and output their "reasoning" on the way to the final output. That may or may not be the same as how humans reason, but again, we don't actually know, so parsimony might suggest that we treat them the same.
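Roughly, the output of such a model comes in two parts: a stream of intermediate "working", then the final answer. Here's a made-up Python sketch of that shape - the real thing produces both parts with the same next-token machinery, just trained and prompted to show its working first.

Code: Select all

# Invented example of the reasoning-then-answer output shape.
def solve_with_reasoning(a, b, c):
    steps = [
        f"Step 1: multiply {a} by {b} -> {a * b}",
        f"Step 2: add {c} -> {a * b + c}",
    ]
    return {"reasoning": steps, "answer": a * b + c}

result = solve_with_reasoning(3, 4, 5)
for line in result["reasoning"]:
    print(line)
print("Final answer:", result["answer"])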


Re: Artificial Intelligence

Post by pErvinalia » Mon Mar 23, 2026 12:51 am

This naturally leads to the question of consciousness. We are never going to know definitively if an AI is conscious. But at some point we are going to have to accept that it might be (however small that chance), and therefore we must treat it as a self-aware entity, with all the ethical and moral considerations that come with that.

User avatar
JimC
The sentimental bloke
Posts: 74629
Joined: Thu Feb 26, 2009 7:58 am
About me: To be serious about gin requires years of dedicated research.
Location: Melbourne, Australia
Contact:

Re: Artificial Intelligence

Post by JimC » Mon Mar 23, 2026 12:58 am

To me, human consciousness owes a lot to a long evolutionary history. Reasonably complex organisms have agency to various degrees, dominated by real-world survival imperatives. Human agency and motivations spring from our biology and our cultural landscape - ours is of course much more complex than the agency of other animals, but it has the same roots. Until there is an AI with a similar heritage, Brian's summation of their current situation seems spot on.
Nurse, where the fuck's my cardigan?
And my gin!


Re: Artificial Intelligence

Post by pErvinalia » Mon Mar 23, 2026 12:59 am

We've skipped the billions of years of evolution with AIs, by modelling them on the evolved brains of humans. We've built in an evolutionary history.

Regarding culture, that's again an abstraction. All culture is, is inputs - something which AI is built on.


Re: Artificial Intelligence

Post by JimC » Mon Mar 23, 2026 1:11 am

But what they don't have is a self (illusory or not) with motivations rooted in interactions with a complex real world.


Re: Artificial Intelligence

Post by pErvinalia » Mon Mar 23, 2026 1:13 am

We don't actually know if they have a self, and we probably will never be able to know. Just the same as I can't definitively know if you have self-awareness.


Re: Artificial Intelligence

Post by pErvinalia » Mon Mar 23, 2026 1:31 am

In a widely shared video clip, the Nobel-winning computer scientist Geoffrey Hinton told LBC’s Andrew Marr that current AIs are conscious. Asked if he believes that consciousness has already arrived inside AIs, Hinton replied without qualification, “Yes, I do.”

Hinton appears to believe that systems like ChatGPT and DeepSeek do not just imitate awareness, but have subjective experiences of their own. This is a startling claim coming from someone who is a leading authority in the field.

Many experts will disagree with Hinton. Even so, we have arrived at a historically unprecedented situation in which expert opinion is divided on whether tech companies are inadvertently creating conscious lifeforms. This situation could become a moral and regulatory nightmare.

What makes Hinton believe current AIs are conscious? In the viral clip, he invokes a suggestive line of reasoning.

Suppose I replace one neuron in your brain with a silicon circuit that behaves the same way. Are you still conscious? The answer is, surely, yes. Hinton infers that the same will be true if a second neuron is replaced, and a third, and so on.

The outcome of this process, Hinton supposes, would be a person with a circuit board in place of a brain who is nonetheless conscious. Why, then, should we doubt that existing AIs are also conscious?

In making this argument, Hinton strays from computer science into philosophy. As a philosopher who works on this kind of argument, I am not entirely persuaded.

You would also remain conscious after having one neuron in your brain replaced by a microscopic rubber duck. Likewise for the second neuron, and the third. But somewhere in this process, consciousness would cease. The same might be true of silicon circuits.

We shouldn’t be too sanguine about this reply, however. For one thing, there exist other arguments for the view that current AIs might have achieved consciousness. An influential 2023 study suggests a 10 percent probability that existing language-processing models are conscious, rising to 25 percent within the next decade.

Furthermore, many of the serious practical, moral, and legal challenges associated with conscious AI arise just so long as a significant number of experts believe that such a thing exists. The fact that they might be mistaken does not get us out of the woods.

Remember Blake Lemoine, the senior software engineer who announced that Google’s LaMDA model had achieved sentience, and urged the company to seek the program’s consent before running experiments on it?

Google was able to dismiss Lemoine for violating employment and data security policies, thereby shifting the focus from Lemoine’s claims about LaMDA to humdrum matters of employee responsibilities. But companies like Google will not always be able to rely on such policies—or on California’s permissive employment law—to shake off employees who arrive at inconvenient conclusions about AI consciousness.

As the Lemoine case illustrates, we face an immediate practical problem of perceived AI consciousness. Other examples of this problem are easy to foresee. Imagine the case of someone falling deeply in love with their AI and insisting that it is a sentient partner worthy of marriage. Or consider the prospect of advocates rallying for legal rights on behalf of an AI “friend.”

What should we do about such cases when the people involved are able to back up their beliefs by appealing to experts such as Hinton?

Companies like Google, Microsoft, and OpenAI put enormous resources into AI ethics teams working on such tasks as mitigating biases and curbing harmful content. To my surprise, however, I have been able to find nobody affiliated with these companies working on the problem of perceived consciousness.

Perhaps I should not be surprised. Addressing the problem of perceived AI consciousness means taking a stand on profound philosophical puzzles that fall way beyond the ordinary purview of software developers. These companies might well prefer to keep clear of the issue while they can get away with it, as well as to keep whatever discussions they are having on the subject strictly in house.

This approach cannot be maintained indefinitely, however. As Hinton says later on in the LBC interview, “There’s all sorts of things we have only the dimmest understanding of at the present about the nature of people, about what it means to have a self… And they’re becoming crucial to understand.”
https://www.psychologytoday.com/us/blog ... sciousness


Re: Artificial Intelligence

Post by JimC » Mon Mar 23, 2026 1:59 am

What makes Hinton believe current AIs are conscious? In the viral clip, he invokes a suggestive line of reasoning.

Suppose I replace one neuron in your brain with a silicon circuit that behaves the same way. Are you still conscious? The answer is, surely, yes. Hinton infers that the same will be true if a second neuron is replaced, and a third, and so on.

The outcome of this process, Hinton supposes, would be a person with a circuit board in place of a brain who is nonetheless conscious. Why, then, should we doubt that existing AIs are also conscious?
What this ignores is that the structure being gradually replaced (for the moment I provisionally accept that rather weird scenario) has an architecture that is the joint result of evolutionary history and life experiences interacting with the real world - a circuit that works in an identical fashion does not alter that original neural pattern. That is far different from creating a silicon architecture de novo...


Re: Artificial Intelligence

Post by pErvinalia » Mon Mar 23, 2026 2:10 am

You're making the assumption that evolutionary history and life experiences are necessary for consciousness. What do you base that on?


Re: Artificial Intelligence

Post by pErvinalia » Mon Mar 23, 2026 2:11 am

We can't explain consciousness; we can barely define it. Jumping to conclusions about how it is implemented is based on what?


Re: Artificial Intelligence

Post by JimC » Mon Mar 23, 2026 3:56 am

pErvinalia wrote:
Mon Mar 23, 2026 2:10 am
You're making the assumption that evolutionary history and life experiences are necessary for consciousness. What do you base that on?
They are clearly the background for our particular human consciousness; they may not be necessary for consciousness in general, but they are the true backstory of our cognitive architecture. That is not to say that it is impossible for a computer-based AI to develop its own version of true consciousness, but I'm suggesting that it would need some equivalent of personal agency: a concept of self, plus some sort of motivation to interact with the real world. Today's Large Language Models do not have such a basis, but perhaps in the future a way might be found to generate such a foundation.


Re: Artificial Intelligence

Post by Brian Peacock » Mon Mar 23, 2026 9:59 am

pErvinalia wrote:This naturally leads to the question of consciousness. We are never going to know definitively if an AI is conscious. But at some point we are going to have to accept that it might be (however small that chance), and therefore we must treat it as a self-aware entity, with all the ethical and moral considerations that come with that.
I think we have to be more specific about what AI we're talking about and the systems it is running on. Attaching the concept of intelligence to LLMs or diagnostic systems which operate in a very narrow context is misleading. Reasoning models employ encoded knowledge bases which, at present, cannot be extended without a complete round of retraining, nor supplemented on the fly, and the logic they use is highly context-dependent. So I also think we should avoid generalising the capacities of such systems, even if they do have 'reasoning' in their names.

Rather than personifying AIs as some kind of independent autonomous agents (or even conceptualising them as a novel kind of organism) and worrying about their potential ability to out-think or out-perform us, we should be asking serious questions about the actual persons funding and developing these systems, what they're asking the machines to do for them, and why - because AI models reflect the interests and values of the people developing them, not of the AI itself. The focus of our concerns should therefore be the human individuals and institutions with skin in the game, rather than individualising, and then othering, the Machine Learning Systems themselves.

MLSs have the potential to do really amazing things. Unfortunately the people developing them seem primarily concerned with using them to extract value from people and to mislead, manipulate and control us to that end.
