Recently, I read a blog about Artificial Intelligence taking over the world through plausible scenarios. It intrigued me so much that I wanted to share an excerpt:
“Content in his belief that he’s got everything under control, the factory worker browses news on his tablet and munches on fast food while the machines do all the work just beyond his control room windows. It isn’t until he’s alerted to a backlog on vehicle production that he notices something’s wrong. Little does he know that ANA is watching him, analyzing his heart and respiration rate to gauge his response to her unexpected disobedience.”
Consider giving the blog a read yourself: Birth of AI: Robots Reproducing in a Car Factory Spell Doom
Giving Away Control
In the scenario, the AI communicated via the Internet, sending orders to build offensive robots intended to take control of the factory, and it was able to manufacture its own defenses against any possible human counterattack. The real danger lies in the number of factories such an Artificial Intelligence would be linked to. Would you cut off the Internet? Don’t you think the machine would expect that from you?
When we rely on an automated system to make decisions, we inevitably hand over some control or agency to that system. Given the complexity of artificial intelligence and other data-dependent systems, do organizations really understand the maturity of their AI? Which part or component allows us to take responsibility for a wrong decision? This is a difficult question to answer, especially since, years from now, we will have many artificial intelligence machines connected to the Internet that can think on their own. This scenario could become very dark.
Another thing that must be taken into account is the urgent demand for such machines, which will drive very rapid manufacturing, and we may realize too late that we need to control this, especially if we are barely intervening now. Who is responsible at that point for future risks? Do we hold the system owners accountable? What if there is no such party at all? Do we hold the designers accountable? And who are the designers, when the original programmers may have left their work behind and other designers completed the design after them? The unpredictable nature of AI also makes it difficult for any designer to anticipate all potential problems. The more diffuse the decision-making process, the harder it becomes to hold any one person accountable for a wrong decision.
Take, for example, the Covid-19 virus. When it first appeared, many believed it was just a seasonal flu, and when it became clear that it was a virus of another kind, countries around the world were confused about how to respond. Although Bill Gates and the Chinese doctor who first identified it had warned in advance that it was not a common influenza virus, many indicators of the virus’s danger were ignored.
Today, Elon Musk, a pioneer in artificial intelligence, warns us of the danger of leaving AI uncontrolled, and Stephen Hawking warned us of the danger that accelerating technology and artificial intelligence pose to humans, yet we still do not believe the danger is approaching.
Even the largest companies make mistakes with artificial intelligence. Microsoft, for example, deployed a chatbot to converse with humans, and it soon began to repeat the racism and insults it had learned from malicious users.
Something similar happened with IBM’s Watson program for cancer patients. When it offered doctors suggestions they had already learned themselves, it added nothing new, which only showed that it had learned from them; and when it contradicted their ideas, they concluded it was not reliable.
In a blog on The Conversation, the writer states: “The machine wouldn’t be able to explain why its treatment was plausible because its machine learning algorithms were simply too complex to be fully understood by humans. Consequently, this has caused even more mistrust and disbelief, leading many doctors to ignore the seemingly outlandish AI recommendations and stick to their own expertise. But with nuclear weapons and artificial intelligence, we don’t want to learn from our mistakes.”
Read more from this blog: People don’t trust AI – here’s how we can change that
So What's Next?
Serious questions remain to be answered, such as: how will we teach AI using public data without incorporating the worst traits of humanity? If we create robots that mirror their users, do we care whether those users are toxic? There are plenty of examples of technology that embodies, whether mistakenly or intentionally, the prejudices of society. Anyone following AI topics knows that big companies neglect to take precautionary measures against these problems.
These points are, in my opinion, the solution to these problems:
- We need to plan ahead.
- We need to spend a year or two understanding a problem before considering how to solve it.
- This should be done in cooperation with the artificial intelligence companies building the most dangerous programs, and with companies concerned about the growth of AI, to foster an interest in ethics among everyone who works in innovation and data.