And if the A.I.s turn out to be faulty and accidents increase tenfold? Yeah, sorry, the idea of an engineerless train was scary enough, but I sure as hell am NOT going to trust machines to drive on our roads.
The idea of robots putting most of humanity out of work is ridiculous. Yeah, not going to happen, because I guarantee you that if it comes to that, the unemployed will rise up as one and crush their lifeless replacements, and I'll be amongst those rioters.
I wouldn't trust the current models to drive me around either, but as I said, if Moore's law holds in the future, the robots are bound to become a lot more capable than they are now.
And I wouldn't say the idea of most of humanity becoming unemployed is ridiculous. I agree with you that there would be riots, huge ones. But could they topple such a system? I'm afraid that's far from guaranteed. First, what would unite all the rioters into a single force? What ideology? The most suitable and most potent ideology for this purpose, Marxism, has been thoroughly discredited in Europe and North America, in the USA in particular. That leaves us with possible Marxist revolutions in Latin America and India, and a reversion to Marxism in China.
Second, what kind of protection would the elites have? If they are protected by human policemen and soldiers, there's a high chance of their forces changing sides. However, what if the elites are protected by robotic armies?
Of course, this is all under the assumption that CPU processing power will keep doubling every two years through 2030 and beyond. By then, some supercomputers should be able to execute 10^24-10^26 operations per second, that is, they could simulate the human brain, atom by atom (the human brain is estimated to execute some 10^16-10^20 operations per second).
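For what it's worth, the doubling arithmetic behind projections like that is easy to play with. Here's a minimal sketch; the starting figure of 10^16 ops/sec and the two-year doubling period are illustrative assumptions, not measurements:

```python
def projected_ops(start_ops, years, doubling_period=2.0):
    """Operations per second after `years`, assuming capacity
    doubles once every `doubling_period` years (Moore's-law style)."""
    return start_ops * 2 ** (years / doubling_period)

# Illustrative: starting from an assumed 10^16 ops/sec machine,
# 20 years of doubling every two years is 2^10, about a 1000x increase.
print(projected_ops(1e16, 20))  # ~1.02e19 ops/sec
```

Tweak the starting point and doubling period to see how sensitive these long-range forecasts are to the assumptions.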
However, if Moore's law breaks down by 2020 or even sooner, then robots will never be anything more than rather clever tools.
I wish for Moore's law to hold as long as possible, because there are some really complex problems that need huge amounts of computing power to solve. I believe that we are smart enough to use those machines safely. I also believe the same for nuclear power. I may be in for a rude surprise, though.
I saw something on the Military Channel once about an experimental aircraft that was designed to eliminate human error by overriding the pilot's input if the computer detected something was amiss. Well, it turns out that the computer malfunctioned and decided the aircraft was flying too fast, so it cut the throttle, taking control away from the pilot. This caused the plane to drop out of the sky like a rock, and the pilot had to eject.
This car is no different. I don't trust some machine to drive my vehicle for me. I'm perfectly capable of driving my truck on my own, thankyouverymuch. I'm not going to be forced to become dependent on an automated machine to drive me from Point A to Point B just because there are idiots out there who refuse to obey traffic laws.
Just another example of people becoming overly dependent on technology. Next thing you know they'll invent automatic spoons because they think people can't feed themselves, resulting in the further sissification of the human population.
Keep in mind that we are already greatly "sissified" compared to our Stone Age ancestors.
But these are still good points. Eventually we will reach a point where we'll have to say that enough is enough.