That is the question: how concerned should we be about Artificial Intelligence? Let's find out.
As we grow more and more accustomed to Artificial Intelligence through daily encounters with Siri and Alexa, we are evolving rapidly into uncharted territory with this technology. For much of the public, that territory has been defined by sci-fi movies like I, Robot and Ex Machina, both of which picture worst-case scenarios for A.I.'s future.
The nonprofit A.I. research company OpenAI created a text-generating system it deemed too good for the public. The company explained its decision to withhold the technology in a blog post, citing "concerns about malicious applications of the technology." Instead, it intended to release a family of much smaller models, giving people time to assess the societal implications and evaluate the impact of each release.
H2: Once released, it can't be killed
One of the biggest voices warning against the rapid evolution of A.I. is Tesla co-founder Elon Musk. Mr. Musk has long been vocal in his criticism of unchecked Artificial Intelligence. In the documentary Do You Trust This Computer?, he elaborates on the concern that if "one company or a small group of people manages to develop godlike digital super-intelligence, they could take over the world." He makes the case that an evil dictator will eventually die; the same cannot be said of a network of a million computers spread around the world.
There is undoubtedly immense potential in Artificial Intelligence. With every year seemingly worse than the previous one in terms of cyber threats and attacks, A.I. could become the best cybersecurity system we have ever seen, detecting and destroying threats in their early stages. It has colossal potential to safeguard systems from breaches and invasions.
H2: Much greater danger than nuclear warheads
It's fair to say that the SpaceX founder Elon Musk sits on the very concerned end of the scale. He has compared the dangers of A.I. to being "much greater" than those of nuclear warheads. Musk isn't the only one with serious concerns about A.I.; there have long been worries about creating a superintelligence that goes far beyond human intellect.
According to Musk, A.I. technology is evolving with such speed that some form of federal oversight program will be needed to monitor its growth. His concern is that an A.I. does not need to be malicious to pose a threat to humans, only determined: "If AI has a goal and humanity just happens to be in the way, it will destroy humanity as a matter of course without even thinking about it. No hard feelings," Musk added.
He went on to give a comparison: "It's just like, if we're building a road, and an anthill happens to be in the way. We don't hate ants, we're just building a road. So, goodbye, anthill."
The main concern is the incredible competence we might create. A superintelligence will be extremely good at accomplishing its tasks, and if we fail to fully align its goals with ours, we risk becoming obstacles standing in the way of those tasks. With superintelligence also comes the ability to self-improve, reaching levels of intelligence humans have never come close to. It is vital that, before this can happen, we make these systems part of our team, working together toward the same goals.
In conclusion, safety should always be the number one priority in the years to come. It's vital that we advance this technology with caution, and as Elon Musk pointed out, it's essential that we fully align our goals with the A.I.'s. If we cannot, humans may come to be seen as a threat to the A.I. accomplishing its tasks. Patience will be a virtue, but if we do these things right, then we shouldn't be scared of Artificial Intelligence.
Human greed, on the other hand, that we should be scared of.