Tesla and SpaceX billionaire Elon Musk and all three co-founders of Google’s DeepMind are among the more than 2,600 individuals and nearly 200 organizations that have publicly committed not to develop, manufacture or use killer robots.
“We the undersigned agree that the decision to take a human life should never be delegated to a machine,” reads the pledge, published Wednesday and organized by the Future of Life Institute, a Boston-based nonprofit that researches the benefits and risks of artificial intelligence along with other existential issues raised by advancing technology.
“There is a moral component to this position, that we should not allow machines to make life-taking decisions for which others — or nobody — will be culpable. There is also a powerful pragmatic argument: lethal autonomous weapons, selecting and engaging targets without human intervention, would be dangerously destabilizing for every country and individual,” the pledge says.
So far, 195 organizations and 2,633 scientists, engineers, researchers and entrepreneurs have signed the letter, a commitment not to develop or in any way aid the proliferation of lethal autonomous weapons, commonly known as killer robots. The letter was announced Wednesday at the annual International Joint Conference on Artificial Intelligence in Stockholm, Sweden.
“We, the undersigned, call upon governments and government leaders to create a future with strong international norms, regulations and laws against lethal autonomous weapons. These currently being absent, we opt to hold ourselves to a high standard: we will neither participate in nor support the development, manufacture, trade, or use of lethal autonomous weapons,” the pledge reads.