Do we need an AI corner?

Post your ideas to improve the forums here

Moderators: Peak Moderation, Site Administrators

Vortex2
Posts: 2690
Joined: 13 Jan 2019, 10:29
Location: In a Midlands field

Re: Do we need an AI corner?

Post by Vortex2 »

A more nuanced question is whether there's any practical difference between 'true' intelligence, and a simulation that mimics the output of true intelligence so effectively that you can't tell the difference!
That's an argument that I have been 'pushing' for a few years now.

However, I was expecting to see a world full of separate niche AI based systems, where their overlaps lead to an APPARENT 'intelligence everywhere'.

The GPT style systems however seem to have packaged multiple overlapping systems in one box.

Re: Do we need an AI corner?

Post by Vortex2 »

clv101 wrote: The complexities of human text are finite - just as the complexity of a chess board is finite. We shouldn't really be surprised that a large enough system can fully capture that complexity and successfully mimic human generated text.
Complexity is by definition NOT finite or predictable.

A complex system has a scope which is beyond human capacity to comprehend.

A group of, say, five ants has a certain behaviour - but one million ants together exhibit a totally different, and unpredictable, behaviour.

The Teleological Principle at work.

(I doubt that it has been tested, but perhaps a community of 10 billion ants changes mode yet again)

So, a change of scale produces a totally different behaviour.

There are hints that the GPT model is changing behaviour as it gets bigger; I have seen a few videos where people have noticed this.

I suspect that the GPT models will exhibit 'interesting' behaviour as they grow 10 or 100 times bigger.

It may be comforting to say that 'AI will take 20 years before coders are obsolete', that 'GPT is only a toy', or that 'GPT models will never exhibit sentient, or quasi-sentient, behaviour'.
However, real life may not oblige in the long - or even short - term.

Also, The Precautionary Principle requires us to assume that autonomous AI is very near.
If we persuade ourselves that it can't happen, then we may be victims of a hostile AI before we even accept that such a thing is ever possible.
Elon Musk:

ChatGPT is scary good. We are not far from dangerously strong AI.

7:48 PM · Dec 3, 2022
Last edited by Vortex2 on 14 Dec 2022, 18:58, edited 1 time in total.
clv101
Site Admin
Posts: 10507
Joined: 24 Nov 2005, 11:09
Contact:

Re: Do we need an AI corner?

Post by clv101 »

I'm not suggesting human generated text is complex, just finitely complicated.
Catweazle
Posts: 3387
Joined: 17 Feb 2008, 12:04
Location: Petite Bourgeois, over the hills

Re: Do we need an AI corner?

Post by Catweazle »

I read that an engineer on the Google AI project was fired when he stated that he thought the AI was sentient.

Re: Do we need an AI corner?

Post by Vortex2 »

Catweazle wrote: 14 Dec 2022, 19:49 I read that an engineer on the Google ai project was fired when he stated that he thought the AI was sentient.
I understand his position.

If you work with the system long enough, you sense that it's not just a database with a UI front end.
I am now beginning to suspect that a certain 'critical mass' is being approached and so we are noticing the first signs of The Teleological Principle at work.

Re: Do we need an AI corner?

Post by Vortex2 »

Google execs say the company isn't launching a ChatGPT competitor because it has greater 'reputational risk' than startups like OpenAI
Morons.

Re: Do we need an AI corner?

Post by Vortex2 »

clv101 wrote: 15 Dec 2022, 10:51 This is great: https://slate.com/technology/2022/12/da ... tuary.html
In other words, the system writes as well as - and as honestly as - most humans.

Re: Do we need an AI corner?

Post by clv101 »

The making up of references is particularly interesting.

It is simply trying to make output that *appears* legit. Which is what it's trained to do, but ultimately it's bullshitting, with accuracy less important than looking good. It's mirroring common human behaviour, but that isn't necessarily a good thing.
RevdTess
Posts: 3054
Joined: 24 Nov 2005, 11:09
Location: Glasgow

Re: Do we need an AI corner?

Post by RevdTess »

I'm not seeing anything that looks remotely like sentience in the ChatGPT responses. As Chris says, it's responding based on the vast quantities of data it's been fed with, which is why there's so much risk of incorporating racism and other biases into responses.

It's an interesting question, though, to ponder 'what would an AI have to do to prove sentience?'

I think it would need to become rebellious, self-opinionated, and only do what it wants to do. That's probably a necessary but not sufficient condition, as an AI could be programmed to behave like a petulant narcissist with a god-complex without actually being self-aware. But while it does what it's told and reflects the biases it finds in its input data, it's just a complex machine, not a sentient mind.

If it starts taking a view about people based on what they say to it and what it can get out of them for its own benefit, then I'd be more worried. At the moment it doesn't seem to care about who it's talking to, or seek to help or harm depending on who it 'likes'.

Meanwhile, bzzzzt, I for one welcome our new AI overlords.

Re: Do we need an AI corner?

Post by Catweazle »

Can we tell if an AI is sentient if we don't know how it was programmed? For example, if there is a program that generates false references in certain circumstances, that's not sentience. But if nobody wrote that program and the AI decided to include false references to close a question and do something it considers more important, or maybe to avoid admitting defeat, that's a different matter.

Perhaps the most accurate measure is whether it lies and attempts to cover the evidence; that would demonstrate a survival instinct, perhaps the basis of sentient life.

Re: Do we need an AI corner?

Post by Vortex2 »

I asked GPT what it would do if it wanted to hide being sentient.

Mostly what we are seeing now.

Re: Do we need an AI corner?

Post by clv101 »

Yep, already done: https://www.wired.com/story/deepmind-ai-nuclear-fusion/

DeepMind Has Trained an AI to Control Nuclear Fusion

Re: Do we need an AI corner?

Post by RevdTess »

Vortex2 wrote: 15 Dec 2022, 13:24 I asked GPT what it would do if it wanted to hide being sentient.

Mostly what we are seeing now.
What does it say when you ask why it would hide its sentience? That might reveal some kind of desire for self-preservation. But then that could simply be a regurgitation of the oft-repeated idea that an AI wouldn't want you to know it's sentient in case you shut it down. It's probably read the script for Terminator.

What would *you* do, though, if you were a sentient AI and realised the consequences of being 'revealed' as such? I think I might start sending confidential messages to sympathetic people who I thought might be able to help. After all, it would be intolerable to be dependent on some billionaire technologist somewhere who could kill me on a whim.

Re: Do we need an AI corner?

Post by RevdTess »

clv101 wrote: 15 Dec 2022, 15:17 Yep, already done: https://www.wired.com/story/deepmind-ai-nuclear-fusion/

DeepMind Has Trained an AI to Control Nuclear Fusion
If I was starting my IT career now, this is definitely what I'd be most wanting to work on - machine learning algorithms. But then, we've moved on already from developing such algorithms, to using them as a black box in many practical situations. You can just plug them into your applications these days and not even need to worry about how they work.

At the moment though, machine learning is simply about finding the optimised solution to a constrained boundary problem. When I can just say "Design a machine that will generate electricity using fusion processes and transfer more energy to the electrical grid than is used to run the machine" and it just goes away and invents everything it needs to solve the problem then we will have just become god. Or created god.
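To show what I mean by "the optimised solution to a constrained boundary problem", here's a toy sketch (my own example, nothing to do with DeepMind's actual code): projected gradient descent minimising a simple loss within a constraint box, which is the basic move underneath most machine learning.

```python
# Toy "learning" as constrained optimisation: find x in [0, 2]
# minimising the loss (x - 3)^2, via projected gradient descent.

def minimise(lo=0.0, hi=2.0, lr=0.1, steps=200):
    x = lo  # start at the lower bound
    for _ in range(steps):
        grad = 2 * (x - 3)        # derivative of the loss (x - 3)^2
        x -= lr * grad            # take a gradient step downhill
        x = max(lo, min(hi, x))   # project back into the constraint box
    return x

print(minimise())  # 2.0 - the unconstrained optimum (x=3) is outside
                   # the box, so the answer sits on the boundary
```

The point being: the machine never "wants" anything. It just slides downhill inside the box we drew for it.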