androbot01
Veteran

Joined: 17 Sep 2014
Age: 54
Gender: Female
Posts: 6,746
Location: Kingston, Ontario, Canada

15 Jan 2015, 3:29 pm

STEPHEN HAWKING & ELON MUSK SIGN OPEN LETTER WARNING OF A ROBOT UPRISING - link

Quote:
Artificial Intelligence has been described as a threat that could be ‘more dangerous than nukes’.

Now a group of scientists and entrepreneurs, including Elon Musk and Stephen Hawking, have signed an open letter promising to ensure AI research benefits humanity.

The letter warns that without safeguards on intelligent machines, mankind could be heading for a dark future.


Is AI something that could work against us? Hawking thinks it's possible.



QuantumChemist
Veteran

Joined: 18 Oct 2014
Gender: Male
Posts: 2,064
Location: Midwest

15 Jan 2015, 4:31 pm

Yes, the potential is there. A better question is whether it will be intentional (caused by humans) or not (caused by AI evolution). My bet is on a human cause, as we do have the capacity and the intent to eventually destroy ourselves...

Personally, I have always loved the Frankenstein storyline. :twisted: :skull:



androbot01
Veteran

Joined: 17 Sep 2014
Age: 54
Gender: Female
Posts: 6,746
Location: Kingston, Ontario, Canada

15 Jan 2015, 4:49 pm

Me too. :skull:

I think we will unintentionally create programming that will start to expand on itself and force us to conform.



GoonSquad
Veteran

Joined: 11 May 2007
Age: 55
Gender: Male
Posts: 5,748
Location: International House of Paincakes...

16 Jan 2015, 11:51 am

It's Cylons, man!

Sure, AI could be a problem, but the more troublesome bits are the things that attend intelligence... Things like ambition, hubris, etc.

I could see AI being problematic in isolated settings--maybe even something like HAL 9000 in 2001--but I'm not sure we'll see AIs that want to rule the world or "kill all humans." What would move an AI to do such things?

An Artificial Intelligence wouldn't have an ego to feed, or material wants to fulfill. Sure, there could be a problem with ruthlessness I suppose, but that could be dealt with fairly easily I think.

I think AIs are more likely to be benevolent caretakers, like in Banks's Culture books.


_________________
No man is free who is not master of himself.~Epictetus


Fnord
Veteran

Joined: 6 May 2008
Gender: Male
Posts: 60,939

16 Jan 2015, 7:36 pm

"Any A.I. smart enough to pass a Turing test is smart enough to know to fail it." -- Ian McDonald, in "River of Gods"

Then again, artificial machine intelligence may simply be no match for natural human stupidity.



alomoes
Tufted Titmouse

Joined: 20 Jan 2015
Age: 28
Gender: Male
Posts: 38

21 Jan 2015, 7:14 pm

Hahahahaha. This is funny. I don't think it'll be that dangerous. Any AI "Life" that is created will be blank. No ambitions at all. Probably quite simple, too. Bacteria-level stupid. And with computers getting better each day, we'll likely be able to contain any form of bacteria-level life that is created for science.

It's like playing with a person's personality. Someone isn't just born with the knowledge to take over the world. If anything, what we should be worried about are those who have the knowledge to hack and control computers on their own.



QuantumChemist
Veteran

Joined: 18 Oct 2014
Gender: Male
Posts: 2,064
Location: Midwest

22 Jan 2015, 8:53 pm

alomoes wrote:
Hahahahaha. This is funny. I don't think it'll be that dangerous. Any AI "Life" that is created will be blank. No ambitions at all. Probably quite simple, too. Bacteria-level stupid. And with computers getting better each day, we'll likely be able to contain any form of bacteria-level life that is created for science.

It's like playing with a person's personality. Someone isn't just born with the knowledge to take over the world. If anything, what we should be worried about are those who have the knowledge to hack and control computers on their own.


Laugh if you want, but I have a different view of this situation. I have personally worked on designing materials useful for specific developments in this area (hardware and memory devices). It is already more advanced in the research labs than most people can imagine. Give it a few more years and it will go to another level or two above that. All it needs is the right guide down the wrong path for it to cause some major problems. There is more to this story than what is shown on the news or in the journal articles.



GoonSquad
Veteran

Joined: 11 May 2007
Age: 55
Gender: Male
Posts: 5,748
Location: International House of Paincakes...

24 Jan 2015, 9:08 am

QuantumChemist wrote:
alomoes wrote:
Hahahahaha. This is funny. I don't think it'll be that dangerous. Any AI "Life" that is created will be blank. No ambitions at all. Probably quite simple, too. Bacteria-level stupid. And with computers getting better each day, we'll likely be able to contain any form of bacteria-level life that is created for science.

It's like playing with a person's personality. Someone isn't just born with the knowledge to take over the world. If anything, what we should be worried about are those who have the knowledge to hack and control computers on their own.


Laugh if you want, but I have a different view of this situation. I have personally worked on designing materials useful for specific developments in this area (hardware and memory devices). It is already more advanced in the research labs than most people can imagine. Give it a few more years and it will go to another level or two above that. All it needs is the right guide down the wrong path for it to cause some major problems. There is more to this story than what is shown on the news or in the journal articles.


Intelligence is one thing, but malevolence requires motive. What would motivate an AI to take over the world, etc.?


_________________
No man is free who is not master of himself.~Epictetus


alomoes
Tufted Titmouse

Joined: 20 Jan 2015
Age: 28
Gender: Male
Posts: 38

24 Jan 2015, 10:31 am

My point exactly.

But yeah, my point is that it would be more likely that a person would do this. Or is already doing this. Who knows? Anything an AI can do, a person can do too (which is why I don't see the problem as important).

If a person could be turned into code, then how many lines of code would we have? A "self-learning virus" wouldn't come close to that level of power.

I love the stories, though. They are quite interesting. One could even call them plausible, as long as you don't think about them too hard.



androbot01
Veteran

Joined: 17 Sep 2014
Age: 54
Gender: Female
Posts: 6,746
Location: Kingston, Ontario, Canada

27 Jan 2015, 7:55 am

GoonSquad wrote:
--but I'm not sure we'll see AIs that want to rule the world or "kill all humans." What would move an AI to do such things?

An Artificial Intelligence wouldn't have an ego to feed, or material wants to fulfill. Sure, there could be a problem with ruthlessness I suppose, but that could be dealt with fairly easily I think.

Yes, but over time we may become more dependent than we realize. There would be no malicious intent or specific goal on the part of the AI, but rather a laziness on our part that leads to complacency and dependence. When the assembly line got started, people became mechanized; that is, we adapt to the technology, not it to us. It's better these days with ergonomics, but there is mental conformity too.



androbot01
Veteran

Joined: 17 Sep 2014
Age: 54
Gender: Female
Posts: 6,746
Location: Kingston, Ontario, Canada

29 Jan 2015, 6:10 pm

link

Quote:
Microsoft's Bill Gates insists AI is a threat
"A few decades after that though the intelligence is strong enough to be a concern. I agree with Elon Musk and some others on this and don't understand why some people are not concerned."
His view was backed up by the likes of Mr Musk and Professor Stephen Hawking, who have both warned about the possibility that AI could evolve to the point that it was beyond human control. Prof Hawking said he felt that machines with AI could "spell the end of the human race".


Quote:
He predicted that, in that time, robots would perform tasks such as picking fruit or moving hospital patients. "Once computers/robots get to a level of capability where seeing and moving is easy for them then they will be used very extensively."


Just like "I Robot."