Page 1 of 1 [ 9 posts ] 

Assassin
Veteran

Joined: 23 Apr 2005
Age: 35
Gender: Male
Posts: 1,676
Location: Not here, Not there, not anywhere.....

24 Nov 2005, 5:15 am

The main problem most people have with automation in the workplace is that it puts people out of work. This, however, is only a problem in a capitalist economic system.

In a communist system, automation would only reduce the workload the population had to accomplish. It wouldn't make people less well off; in fact, it would indirectly make them better off, because if the workplace is fully automated, it can only become more efficient, bringing more money into the country, and so, under a communist system, more money into the hands of the general population.

However, if this idea of increased leisure time due to automation is followed through to its conclusion, what we get is a very Matrix-esque world - eventually, machines would be doing all the work, and Humans would be doing nothing. This would inevitably lead to Humans spending all their time in computer programs designed to meet their every need and desire. There is nothing wrong with this in itself, BUT, it raises two main problems.

The most obvious one is that if the entire Human race is incapacitated, then in order to keep it secure from outside threat, the machine world would have to be operated by an AI program, which would in turn put Humanity at the mercy of the AI.

There is also the more subtle moral problem - it is paradoxical that a communist country would delegate the entire workload of the Human race to a single sentient entity - Human or not, everyone is supposed to be equal.

This could be worked around if the AI was replaced by the collective will of Humanity, directly represented in one thought process (which would be possible if all Humans were neurologically connected to the same computer system).

However, this brings us back to the original problem - in this case, minorities are at the mercy of majorities, and if someone got a bad reputation - something that would spread very quickly with all of Humanity linked by the brain - then they would be in grave danger...


_________________
Chronicles of the Universe: Sons of Earth Volume 1 - Bounty Hunter now at 98 pages! I'll update this sig when it gets published.

<a href="http://s13.invisionfree.com/the_project">Project Legacy, building the future</a>


Larval
Veteran

Joined: 15 Nov 2005
Gender: Female
Posts: 1,037

24 Nov 2005, 10:01 am

Assassin wrote:
The main problem most people have with automation in the workplace is that it puts people out of work. This, however, is only a problem in a capitalist economic system.


Actually, it's not that big of a problem if all groups involved (owners, managers, and physical workers) think and plan ahead. Make sure the workers get good re-training ahead of time; maybe some of them can be trained to be the managers of new plants/stores which are opening (made possible by the savings that the new robots provide). Another intriguing idea (though rarely used) is to let the workers become shareholders in the company that provides the robots - that way, when workers are laid off they still have a decent stream of income. (This last suggestion is risky and may not always work: if the company goes bankrupt the workers lose money, and if the place they work at decides to change robot providers the workers can also lose out. Still working out the bugs.)
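The shareholder idea above can be put in terms of a toy payoff model. All the figures here (wage, share count, dividend) are hypothetical, chosen only to illustrate the mechanism:

```python
# Toy model of the worker-as-shareholder idea: a laid-off worker who holds
# shares in the robot supplier keeps some income from dividends.
# All numbers are hypothetical and purely illustrative.

def annual_income(wage, laid_off, shares=0, dividend_per_share=0.0):
    """Worker's yearly income: wage (if still employed) plus share dividends."""
    base = 0 if laid_off else wage
    return base + shares * dividend_per_share

# A worker with no shares loses all income on layoff...
print(annual_income(30000, laid_off=True))   # -> 0
# ...while a shareholder keeps a dividend stream after layoff.
print(annual_income(30000, laid_off=True, shares=500, dividend_per_share=12.0))  # -> 6000.0
```

The risk mentioned in the post shows up here too: if the supplier goes bankrupt, `dividend_per_share` drops to zero and the laid-off worker is back to no income.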

Quote:
In a communist system, automation would only reduce the workload the population had to accomplish. It wouldn't make people less well off; in fact, it would indirectly make them better off, because if the workplace is fully automated, it can only become more efficient, bringing more money into the country, and so, under a communist system, more money into the hands of the general population.


That is also possible in a capitalist system (although not guaranteed - even if a few greedy businessmen take over and make all the money, the government can just tax the hell out of them to the point that they have almost zero profit (like 99.996% taxes)).
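To put a number on that 99.996% figure (the rate is the post's own hypothetical, and the $10M profit is an invented example), the arithmetic looks like this:

```python
# What a confiscatory tax rate leaves behind. The 99.996% rate is the
# hypothetical figure from the post above; the profit is made up.

def after_tax(profit, tax_rate):
    """Profit remaining after taxation at the given rate (0.0 to 1.0)."""
    return profit * (1 - tax_rate)

# A $10,000,000 profit taxed at 99.996% leaves roughly $400.
print(round(after_tax(10_000_000, 0.99996), 2))  # -> 400.0
```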

Quote:
However, if this idea of increased leisure time due to automation is followed through to its conclusion, what we get is a very Matrix-esque world - eventually, machines would be doing all the work, and Humans would be doing nothing.


I disagree. Automatons can't think for themselves, and automatons != A.I.s. Humans will be needed to govern the decision-making (of both governments and corporations) for a long time to come. Also, humans tend to make better art & music, on average, than machines (certainly humans make more original art & music than machines).

Quote:
This would inevitably lead to Humans spending all their time in computer programs designed to meet their every need and desire.


I doubt it. Some would undoubtedly want to spend their utopia on present-day Earth.

Quote:
There is nothing wrong with this in itself, BUT, it raises two main problems.

The most obvious one is that if the entire Human race is incapacitated, then in order to keep it secure from outside threat, the machine world would have to be operated by an AI program, which would in turn put Humanity at the mercy of the AI.


We are a long way from being able to make an AI that can make those kinds of decisions (e.g. what constitutes a threat), let alone place a cat at its mercy. Humans will be running things like this for a long time - and if free government wins out, it will be the people who control that machine, so no one will be subjugated.

Quote:
There is also the more subtle moral problem - it is paradoxical that a communist country would delegate the entire workload of the Human race to a single sentient entity - Human or not, everyone is supposed to be equal.


I believe that only applies to humans. It could apply to sentient A.I.s too, but they'd be so drastically different from us that it is hard to see anyone campaigning for AI equal rights.

Quote:
This could be worked around if the AI was replaced by the collective will of Humanity, directly represented in one thought process (which would be possible with all Humans were neurologically connected to the same computer system).


Agreed. It would also solve the AI-controls-all-people scenario, and the one-evil-dictator-controls-all-people scenario.

Quote:
However, this brings us back to the original problem - in this case, minorities are at the mercy of majorities, and if someone got a bad reputation - something that would spread very quickly with all of Humanity linked by the brain - then they would be in grave danger...


There would be no majorities/minorities. In such a system we'd be able to evaluate each and every single person in the world as a true individual (obviously, not every single person at once, but one at a time) so the need for classification and generalization based on race, social standing, religion, etc would most likely go away.

This also makes the idea of reputation become obsolete, as you don't have to judge a person based on what others say - you can see for yourself what the person is like.

Of course, many many people are frightened by the idea of a world-wide borganism, so you'd have to address that (and related issues) first.



RobertN
Veteran

Joined: 30 Jul 2005
Age: 39
Gender: Male
Posts: 934
Location: Cambridge, UK

24 Nov 2005, 12:44 pm

I think you have gone too far with that, Assassin. Machines could only ever do 50% of the work anyway, because you still need Humans to control them. I think with the increased leisure time, people would probably spend more time socializing, relaxing, and walking in the park breathing the fresh air.



Assassin
Veteran

Joined: 23 Apr 2005
Age: 35
Gender: Male
Posts: 1,676
Location: Not here, Not there, not anywhere.....

24 Nov 2005, 2:07 pm

Larval wrote:
That is also possible in a capitalist system (although not guaranteed - even if a few greedy businessmen take over and make all the money, the government can just tax the hell out of them to the point that they have almost zero profit (like 99.996% taxes)).


Taxes that high, though, lean towards communism and away from capitalism - it's still a good point, but I'm not sure that it's relevant.

Quote:
I disagree. Automatons can't think for themselves, and automatons != A.I.s. Humans will be needed to govern the decision-making (of both governments and corporations) for a long time to come. Also, humans tend to make better art & music, on average, than machines (certainly humans make more original art & music than machines).


At that point, Human involvement in the real-world decision-making process is still a given, because Humans would all still be active components of the real world.

Quote:
I doubt it. Some would undoubtedly want to spend their utopia on present-day Earth.


No, because the reality is, present-day Earth ISN'T a utopia, which is why a utopia would be created within a computer program.

Quote:
We are a long way from being able to make an AI that can make those kinds of decisions (e.g. what constitutes a threat), let alone place a cat at its mercy. Humans will be running things like this for a long time - and if free government wins out, it will be the people who control that machine, so no one will be subjugated.


We're also a long way from the level of economic automation that would render Human involvement in economics/logistics obsolete.

Quote:
I believe that only applies to humans. It could apply to sentient A.I.s too, but they'd be so drastically different from us that it is hard to see anyone campaigning for AI equal rights.


I would. Just because they're different doesn't mean they don't deserve the same rights. The same argument (being different) was used to deny both black people and women their rights.

Quote:
There would be no majorities/minorities. In such a system we'd be able to evaluate each and every single person in the world as a true individual (obviously, not every single person at once, but one at a time) so the need for classification and generalization based on race, social standing, religion, etc would most likely go away.

This also makes the idea of reputation become obsolete, as you don't have to judge a person based on what others say - you can see for yourself what the person is like.

Of course, many many people are frightened by the idea of a world-wide borganism, so you'd have to address that (and related issues) first.


I think you're misinterpreting me here. I wasn't suggesting that the whole of Humanity would be merged into a single consciousness; I was saying the computer program would be designed to be a utopian version of the real world, and that our collective will, as a separate being, would govern the automatic logistical process to keep it running and us supplied with food, etc.




Assassin
Veteran

Joined: 23 Apr 2005
Age: 35
Gender: Male
Posts: 1,676
Location: Not here, Not there, not anywhere.....

24 Nov 2005, 2:28 pm

RobertN wrote:
I think you have gone too far with that, Assassin. Machines could only ever do 50% of the work anyway, because you still need Humans to control them. I think with the increased leisure time, people would probably spend more time socializing, relaxing, and walking in the park breathing the fresh air.


I was saying that people would still do all of that, but in a perfect world rather than an imperfect one. Unfortunately, that perfect world could quite rapidly get worse than the one we live in.




Larval
Veteran

Joined: 15 Nov 2005
Gender: Female
Posts: 1,037

24 Nov 2005, 10:48 pm

Assassin wrote:
Larval wrote:
That is also possible in a capitalist system (although not guaranteed - even if a few greedy businessmen take over and make all the money, the government can just tax the hell out of them to the point that they have almost zero profit (like 99.996% taxes)).


Taxes that high, though, lean towards communism and away from capitalism - it's still a good point, but I'm not sure that it's relevant.


You are right. I guess what I'm trying to say is this: I think it likely that communism will take over as time goes on (even the US has a socialist economy). It just seems unlikely that all the world's wealth will be sucked up by a few people. Hmm. I'll have to think about this.

Quote:
Quote:
I disagree. Automatons can't think for themselves, and automatons != A.I.s. Humans will be needed to govern the decision-making (of both governments and corporations) for a long time to come. Also, humans tend to make better art & music, on average, than machines (certainly humans make more original art & music than machines).


At that point, Human involvement in the real-world decision-making process is still a given, because Humans would all still be active components of the real world.


Yes, but I doubt that AIs will ever fully take over that responsibility. At the very least, no one is a better judge of what people want than people. (The AIs in the Matrix are actually highly unlikely - unless someone decides to write a bunch of human-hating computer programs - most likely we'll end up with superintelligences who want nothing more than to tend to every want and need of every human being alive. Why? Because we'll design them that way.)

Quote:
Quote:
I doubt it. Some would undoubtedly want to spend their utopia on present-day Earth.


No, because the reality is, present-day Earth ISN'T a utopia, which is why a utopia would be created within a computer program.


For some people, it is good enough. Others will prefer a non-perfect real world to a utopian virtual version. I'm not saying one choice is better than the other, just that there are people who will make the first choice, and more people who will make the second.

Also, a world with superintelligent, superpowerful machines may very well be a utopia for many. (I could go into some details but I don't want to make the post too long.)

Quote:
Quote:
We are a long way from being able to make an AI that can make those kinds of decisions (e.g. what constitutes a threat), let alone place a cat at its mercy. Humans will be running things like this for a long time - and if free government wins out, it will be the people who control that machine, so no one will be subjugated.


We're also a long way from the level of economic automation that would render Human involvement in economics/logistics obsolete.


So we're a long way from robots controlling humans (assuming such an event will occur, something I doubt will happen).

Quote:
Quote:
I believe that only applies to humans. It could apply to sentient A.I.s too, but they'd be so drastically different from us that it is hard to see anyone campaigning for AI equal rights.


I would. Just because they're different doesn't mean they don't deserve the same rights. The same argument (being different) was used to deny both black people and women their rights.


As a matter of human law, you could say that they are equal. But what would it mean? A sentient robot that only wanted to ride the seas in search of minerals will only do that, and wouldn't ever exercise any of its other rights. And a robot that only wants to wait on humans artificial hand and foot, will continue to do so even if the law says that the robot is allowed to quit and find a more pleasant job. After all, no job would be more pleasant to that robot than waiting on humans hand and foot.

Just because they are as smart as us doesn't mean they have the same basic needs or desires (unlike blacks and women, who do have the same basic needs and desires as other humans). We don't give chimps the right to vote in elections, after all.

Now, a sentient robot that wanted to have fun, fool around, enjoy recreation, work hard and make money for itself, raise a family of robots, hang out at the bar, etc etc - that would be a robot I could give equal rights to, since it would be very human like in its basic desires.

Quote:
Quote:
There would be no majorities/minorities. In such a system we'd be able to evaluate each and every single person in the world as a true individual (obviously, not every single person at once, but one at a time) so the need for classification and generalization based on race, social standing, religion, etc would most likely go away.

This also makes the idea of reputation become obsolete, as you don't have to judge a person based on what others say - you can see for yourself what the person is like.

Of course, many many people are frightened by the idea of a world-wide borganism, so you'd have to address that (and related issues) first.


I think you're misinterpreting me here. I wasn't suggesting that the whole of Humanity would be merged into a single consciousness; I was saying the computer program would be designed to be a utopian version of the real world, and that our collective will, as a separate being, would govern the automatic logistical process to keep it running and us supplied with food, etc.


Oh. Hmm. I'll have to think about this for a little bit, but the thing about technology is that it helps minorities organize together and create their own collective voice. So the suppression of any minority will become harder over time.



Assassin
Veteran

Joined: 23 Apr 2005
Age: 35
Gender: Male
Posts: 1,676
Location: Not here, Not there, not anywhere.....

25 Nov 2005, 3:13 pm

Quote:
Quote:
Quote:
I disagree. Automatons can't think for themselves, and automatons != A.I.s. Humans will be needed to govern the decision-making (of both governments and corporations) for a long time to come. Also, humans tend to make better art & music, on average, than machines (certainly humans make more original art & music than machines).


At that point, Human involvement in the real-world decision-making process is still a given, because Humans would all still be active components of the real world.


Yes, but I doubt that AIs will ever fully take over that responsibility. At the very least, no one is a better judge of what people want than people. (The AIs in the Matrix are actually highly unlikely - unless someone decides to write a bunch of human-hating computer programs - most likely we'll end up with superintelligences who want nothing more than to tend to every want and need of every human being alive. Why? Because we'll design them that way.)


Yes, but in order for the virtual utopia to be sustainable, with all the Humans inside it, something non-Human would have to operate it, and you can never guarantee that the data won't be corrupted or something.

Quote:
Quote:
I doubt it. Some would undoubtedly want to spend their utopia on present-day Earth.


Quote:
No, because the reality is, present-day Earth ISN'T a utopia, which is why a utopia would be created within a computer program.



For some people, it is good enough. Others will prefer a non-perfect real world to a utopian virtual version. I'm not saying one choice is better than the other, just that there are people who will make the first choice, and more people who will make the second.


Hmmm, that's a good point. Maybe that would make the AI unnecessary. But the people who became part of the program would still be directly at the mercy of those who didn't.

Quote:
Quote:
Quote:
I believe that only applies to humans. It could apply to sentient A.I.s too, but they'd be so drastically different from us that it is hard to see anyone campaigning for AI equal rights.



I would. Just because they're different doesn't mean they don't deserve the same rights. The same argument (being different) was used to deny both black people and women their rights.



As a matter of human law, you could say that they are equal. But what would it mean? A sentient robot that only wanted to ride the seas in search of minerals will only do that, and wouldn't ever exercise any of its other rights. And a robot that only wants to wait on humans artificial hand and foot, will continue to do so even if the law says that the robot is allowed to quit and find a more pleasant job. After all, no job would be more pleasant to that robot than waiting on humans hand and foot.

Just because they are as smart as us doesn't mean they have the same basic needs or desires (unlike blacks and women, who do have the same basic needs and desires as other humans). We don't give chimps the right to vote in elections, after all.


They might not have the SAME basic needs or desires, but they still have as much a right to their individual needs and desires as we do, and as much a right to protect them as we do. And part of the concept of AI is free will, so supposedly, an AI could decide for itself that it doesn't want to be a servant anymore.

Quote:
Quote:
Quote:
There would be no majorities/minorities. In such a system we'd be able to evaluate each and every single person in the world as a true individual (obviously, not every single person at once, but one at a time) so the need for classification and generalization based on race, social standing, religion, etc would most likely go away.

This also makes the idea of reputation become obsolete, as you don't have to judge a person based on what others say - you can see for yourself what the person is like.

Of course, many many people are frightened by the idea of a world-wide borganism, so you'd have to address that (and related issues) first.



I think you're misinterpreting me here. I wasn't suggesting that the whole of Humanity would be merged into a single consciousness; I was saying the computer program would be designed to be a utopian version of the real world, and that our collective will, as a separate being, would govern the automatic logistical process to keep it running and us supplied with food, etc.



Oh. Hmm. I'll have to think about this for a little bit, but the thing about technology is that it helps minorities organize together and create their own collective voice. So the suppression of any minority will become harder over time.


Hmmm, good point.




Larval
Veteran

Joined: 15 Nov 2005
Gender: Female
Posts: 1,037

25 Nov 2005, 6:19 pm

Assassin wrote:
Quote:
Quote:
Quote:
I disagree. Automatons can't think for themselves, and automatons != A.I.s. Humans will be needed to govern the decision-making (of both governments and corporations) for a long time to come. Also, humans tend to make better art & music, on average, than machines (certainly humans make more original art & music than machines).


At that point, Human involvement in the real-world decision-making process is still a given, because Humans would all still be active components of the real world.


Yes, but I doubt that AIs will ever fully take over that responsibility. At the very least, no one is a better judge of what people want than people. (The AIs in the Matrix are actually highly unlikely - unless someone decides to write a bunch of human-hating computer programs - most likely we'll end up with superintelligences who want nothing more than to tend to every want and need of every human being alive. Why? Because we'll design them that way.)


Yes, but in order for the virtual utopia to be sustainable, with all the Humans inside it, something non-Human would have to operate it, and you can never guarantee that the data won't be corrupted or something.


I didn't think about that. Even if safeguards were set up to prevent this, nothing is 100% foolproof and accident-proof. So there is a risk. I think it is more likely that such an accident would just wipe all the people out (erase them from memory) than create an entity that wishes to dominate and control all of humankind, though.

Anyways, like I said, I doubt all of humanity would choose to become fully virtual, so even if all virtual worlds had accidents and got erased or taken over, there would still be some free humans alive (or at least some humans alive).

Still, virtual utopias wouldn't be utopias because of that inherent risk factor. The danger is there.

Quote:
Quote:
Quote:
I doubt it. Some would undoubtedly want to spend their utopia on present-day Earth.

Quote:
No, because the reality is, present-day Earth ISN'T a utopia, which is why a utopia would be created within a computer program.


For some people, it is good enough. Others will prefer a non-perfect real world to a utopian virtual version. I'm not saying one choice is better than the other, just that there are people who will make the first choice, and more people who will make the second.


Hmmm, that's a good point. Maybe that would make the AI unnecessary. But the people who became part of the program would still be directly at the mercy of those who didn't.


You are right! That is scary. Evil programmers would basically be gods in such a virtual environment, able to cause terror and exert absolute control.

Of course, the virtual worlds could be hooked up to the real one via a sort of reverse teleoperation (telepresence?) - they could control real world robots and other machines from their computer world so they would at least be able to put up some resistance.

Quote:
Quote:
Quote:
Quote:
I believe that only applies to humans. It could apply to sentient A.I.s too, but they'd be so drastically different from us that it is hard to see anyone campaigning for AI equal rights.



I would. Just because they're different doesn't mean they don't deserve the same rights. The same argument (being different) was used to deny both black people and women their rights.



As a matter of human law, you could say that they are equal. But what would it mean? A sentient robot that only wanted to ride the seas in search of minerals will only do that, and wouldn't ever exercise any of its other rights. And a robot that only wants to wait on humans artificial hand and foot, will continue to do so even if the law says that the robot is allowed to quit and find a more pleasant job. After all, no job would be more pleasant to that robot than waiting on humans hand and foot.

Just because they are as smart as us doesn't mean they have the same basic needs or desires (unlike blacks and women, who do have the same basic needs and desires as other humans). We don't give chimps the right to vote in elections, after all.


They might not have the SAME basic needs or desires, but they still have as much a right to their individual needs and desires as we do, and as much a right to protect them as we do. And part of the concept of AI is free will, so supposedly, an AI could decide for itself that it doesn't want to be a servant anymore.


I see that I've misunderstood what you mean by AI here. Free will has absolutely nothing to do with the academic conception of AI, but it plays a big role in science fiction (thinking about I, Robot and the Positronic Man here). Certainly a robot who has free will (the ability to choose to exist, to choose to become something else) deserves equal rights and equal protection. But not all A.I.s will be capable of this. Those that aren't, won't be able to use those rights or protections even if they were granted.

Perhaps the mere existence of free will would be enough to grant these rights (and penalties - e.g. life imprisonment if a robot with free will knowingly commits murder, while a robot without free will that kills someone (accidentally, by definition) would simply be dismantled and a "fixed" version built to replace it).
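The free-will rule above amounts to a simple decision procedure. This is only a sketch of that reasoning; the function name and the penalty labels are my own invention:

```python
# Hypothetical liability rule from the post: a robot with free will that
# knowingly kills is punished like a person; a robot without free will
# can only kill by accident (by definition), so it is dismantled and a
# corrected version is built instead.

def robot_penalty(has_free_will, killed_knowingly):
    """Map the two factors in the post to an (invented) penalty label."""
    if has_free_will and killed_knowingly:
        return "life imprisonment"
    if has_free_will:
        return "accident inquiry"  # treated like a human-caused accident
    return "dismantle and rebuild fixed version"

print(robot_penalty(True, True))    # -> life imprisonment
print(robot_penalty(False, False))  # -> dismantle and rebuild fixed version
```

The interesting edge the rule leaves open is the middle case: a free-willed robot that kills by accident, which presumably gets the same treatment as a human in the same situation.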



Assassin
Veteran

Joined: 23 Apr 2005
Age: 35
Gender: Male
Posts: 1,676
Location: Not here, Not there, not anywhere.....

25 Nov 2005, 6:46 pm

Larval wrote:
Of course, the virtual worlds could be hooked up to the real one via a sort of reverse teleoperation (telepresence?) - they could control real world robots and other machines from their computer world so they would at least be able to put up some resistance.


Hmmm... an interesting idea.

Quote:
I see that I've misunderstood what you mean by AI here. Free will has absolutely nothing to do with the academic conception of AI, but it plays a big role in science fiction (thinking about I, Robot and the Positronic Man here). Certainly a robot who has free will (the ability to choose to exist, to choose to become something else) deserves equal rights and equal protection. But not all A.I.s will be capable of this. Those that aren't, won't be able to use those rights or protections even if they were granted.


As far as I'm concerned, if it's incapable of free will, then it's not AI. I believe the capability to exercise free will is an integral part of sentience - if something can think beyond its basic programming/instincts, then it's capable of its own judgements, and if that's the case, then it's capable of its own choices as well.

Quote:
Perhaps the mere existence of free will would be enough to grant these rights (and penalties - e.g. life imprisonment if a robot with free will knowingly commits murder, while a robot without free will that kills someone (accidentally, by definition) would simply be dismantled and a "fixed" version built to replace it).


The existence of free will IS, in the objective sense, enough to grant these rights, as far as I'm concerned. Making them a legal reality would be a different matter altogether, though. Work towards THAT should start now, so that if AI was invented, or a First Contact situation came about at some time in the near future, the laws would already be in place.

