
AI Developers Pledge to Not Develop Lethal Autonomous Weapons

AI developers have pledged not to build killer robots capable of making unguided decisions to target human beings. Tesla CEO Elon Musk, Skype co-founder Jaan Tallinn, and DeepMind's Demis Hassabis came together at a conference to pledge not to develop autonomous weapons.

The commitment from these leading tech figures might help alleviate the threats that artificial intelligence poses to society.

A Promise Not to Make Lethal Autonomous Weapons

The Future of Life Institute, a Boston-based research organization, published the pledge, signed by leading figures from major tech companies, at the International Joint Conference on Artificial Intelligence. According to the Evening Standard, this is the first time that the international community of AI developers has made such a joint commitment.

Last year, FLI also published a letter from Tesla CEO Elon Musk that called on the United Nations to regulate the technology. The letter reportedly stressed the role of AI in the military and the need for citizens, leaders, and policymakers to understand the difference between its acceptable and unacceptable uses.

The letter also read, “There is a moral component to this position, that we should not allow machines to make life-changing decisions for which others or nobody will be culpable.”

Call to Ban Lethal Autonomous Weapons

Armies already use autonomous weapons such as drones to strike targets, but these systems require human authorization to do so. A lethal autonomous weapon would be a drone or similar device capable of killing a living being without any human direction.

According to media reports, various nations, including the United States, Brazil, China, and the United Kingdom, have expressed concern that artificial intelligence could make self-guided decisions and disrupt peace, and some 26 countries have called for a ban on lethal autonomous weapons.

A Danger Posed by Self-Thinking Machines

Lethal autonomous weapons are not the only danger posed by AI; it could wreak havoc in several other ways. The late physicist Stephen Hawking also warned the world about the harmful effects of artificial intelligence.

Recently, some tech analysts have predicted that the singularity could arrive soon. If it does, and machines begin to think like human beings, the survival of the human race could be in serious jeopardy, or humans might evolve into a form entirely different from the present one.

In light of the future potential of artificial intelligence and autonomous weapons, developers must take steps to ensure sustainable development and the long-term safety of the human race. After all, innovation is of no use if it leads to annihilation.

Also Read:

What Is Singularity And When Will It Happen


Nasir
As the CEO of Evolverstech, Nasir Ibrahim keeps a close watch on the dynamic and ever-evolving world of IT and technology. He is passionate about writing on the latest technology trends, IoT, digitization, and the innovations taking place around the world.