Artificial Intelligence: The Future of Mankind (April 2018)

Nonetheless, AI has great potential to benefit humanity in many ways, and the goal of the field should be to do so. But starting a military AI arms race is a bad idea, and it should be prevented by a ban on offensive autonomous weapons beyond meaningful human control. Weaponised AI systems have been described as the third revolution in warfare, after gunpowder and nuclear arms. Killer robots, for example, select and engage targets without human intervention. They might include armed quadcopters that can search for and eliminate people meeting certain predefined criteria, but they do not include cruise missiles or remotely piloted drones for which humans make all targeting decisions.

The key question for humanity today is whether to start a global AI arms race or to prevent it from starting. If any major military power pushes ahead with AI weapon development, a global arms race is virtually inevitable, and the endpoint of this technological trajectory is obvious: autonomous weapons will become the Kalashnikovs of tomorrow. Moreover, unlike nuclear weapons, they will be easy to manufacture on a mass scale, as procuring the raw materials will not be difficult. They will thus be available to dictators, terrorists and madmen; a future Adolf Hitler could use them to exterminate a specific ethnic group.

It is also feared that AI may one day take off on its own, redesign itself, replicate at a fast pace and supersede the human race, whose biological evolution cannot keep pace with it. Hawking says the primitive forms of artificial intelligence developed so far have already proved very useful, but he fears the consequences of creating something that can match or surpass humans. This, however, is an unnecessary presumption. Human beings have created these devices, and these devices and innovations are not going to threaten human existence.
If human beings start applying bad ideas, they are bound to suffer, and this holds for every technology. There are many ways in which AI can make battlefields safer for humans, especially civilians, without creating new tools for killing people. Therefore, the ethical concerns and apprehensions do not arise in the case of automation and artificial intelligence as long as we use them responsibly.