https://www.paxforpeace.nl/newsroom/major-tech-companies-may-be-putting-world-at-risk-from-killer-robots

  • 19-08-2019

    Google and IBM aren't building lethal autonomous weapons; Amazon and Microsoft won't say

    A new global report from PAX called Don’t be evil? surveys the international tech sector’s stance on lethal autonomous weapons. Microsoft and Amazon are named among the world's ‘highest risk’ tech companies that might be putting the world at risk through killer robot development, while Google leads the way among large tech companies putting proper safeguards in place.

    The global survey grades 50 companies from 12 countries, all working on big tech, hardware, AI software and system integration, pattern recognition, autonomous and swarming aerial systems, or ground robots. Each company was asked about its current activities and its policies in the context of lethal autonomous weapons.

    “Why are companies like Microsoft and Amazon not denying that they’re currently developing these highly controversial weapons, which could decide to kill people without direct human involvement?” says the report’s lead author, Frank Slijper. “Many experts warn that they would violate fundamental legal and ethical principles and would be a destabilising threat to international peace and security.”

    Concerns have been raised that tech companies, especially those working on military contracts, currently lack any public policy to ensure their work does not contribute to lethal autonomous weapons. Besides Amazon and Microsoft, mentioned above, AerialX (Canada) and Anduril, Clarifai and Palantir (all US) emerge in this report as working on technologies relevant to increasingly autonomous weapons; none replied to repeated requests to clearly define their position.

    The goal of this report is to inform the ongoing debate with facts about current developments and to encourage technology companies to develop and publicize clear policies for where they draw the line between what they will and will not do in the space of military AI applications.

    “This is an important debate,” added Daan Kayser, PAX project leader on autonomous weapons. “Tech companies need to be aware that unless they take measures, their technology could contribute to the development of lethal autonomous weapons. Setting up clear, publicly-available policies is an essential strategy to prevent this from happening.”

    The report lays out steps that tech companies can take to prevent their products from contributing to the development and production of lethal autonomous weapons. These are:

    • Commit publicly to not contribute to the development of lethal autonomous weapons.

    • Establish a clear policy stating that the company will not contribute to the development or production of lethal autonomous weapon systems.

    • Ensure that employees are well informed about what they work on and allow open discussions about any related concerns.

    Key Findings:

    PAX has ranked the 50 companies based on three criteria:

    1. Is the company developing technology that could be relevant in the context of lethal autonomous weapons?

    2. Does the company work on relevant military projects?

    3. Has the company committed to not contribute to the development of lethal autonomous weapons?

    Based on these criteria:

    • 7 companies are classified as showing ‘best practice’

    • 22 companies are of ‘medium concern’

    • 21 companies are of ‘high concern’

    To be ranked as best practice, a company must have clearly committed to ensuring its technology will not be used to develop or produce lethal autonomous weapons. Companies are ranked as high concern if they develop relevant technology, work on military projects and have not yet committed to not contributing to the development or production of these weapons.

    Good examples

    • Google: Google published its AI Principles in 2018, which state that Google will not design or deploy AI in “weapons or other technologies whose principal purpose or implementation is to cause or directly facilitate injury to people”. In response to our survey, Google added that “since announcing our AI principles, we’ve established a formal review structure to assess new projects, products and deals. We’ve conducted more than 100 reviews so far, assessing the scale, severity, and likelihood of best- and worst-case scenarios for each product and deal”.

    • Vision Labs: In response to our survey, Vision Labs stated that they “do not develop or sell lethal autonomous weapons systems”, adding that they “explicitly prohibit the use of VisionLabs technology for military applications. This is a part of our contracts. We also monitor the results/final solution developed by our partners”.

    • Softbank: In response to our survey, Boston Dynamics owner Softbank stated that it will not develop lethal autonomous weapons. “Our philosophy at SoftBank Corp. is to use the Information Revolution to contribute to the well-being of people and society.” The company added that they “do not have a weapons business and have no intention to develop technologies that could be used for military purposes.”

    • Animal Dynamics: Alex Caccia, CEO of Animal Dynamics, stated in response to our survey that “under our company charter, and our relationship with Oxford University, we will not weaponize or provide ‘kinetic’ functionality to the products we make”.

    Also read our blog: Lethal autonomous weapon systems and the tech sector: some examples of best practices

    Press contact

    Helma Maas
    +31 (0) 6 4898 1488
    maas@paxforpeace.nl