Google DeepMind staff call for end to military contracts

In May 2024, around 200 employees at Google DeepMind (representing roughly 5 percent of the division) signed a letter urging the company to end its contracts with military organizations, expressing concerns that its AI technology was being used for warfare, Time magazine reported.

The letter states that employee concerns aren’t “about the geopolitics of any particular conflict,” but it does link out to Time’s reporting on Google’s defense contract with the Israeli military, known as Project Nimbus. The letter also points to reports that the Israeli military uses AI for mass surveillance and to select bombing targets in Gaza, and that Israeli weapons firms are required by the government to purchase cloud services from Google and Amazon.

The letter highlights tensions within Google between its AI division and its cloud business, which sells AI services to militaries. At the company’s flagship Google I/O conference earlier this year, pro-Palestine protestors chained themselves together at the attendee entrance, protesting the Lavender and “Gospel” AI programs, the “Where’s Daddy?” software, and Project Nimbus.

The use of AI in warfare has spread rapidly, pushing some technologists who build related systems to speak out. But Google also made a specific commitment: when it acquired DeepMind in 2014, the lab’s leaders made it a condition of the deal that their AI technology would never be used for military or surveillance purposes.

“Any involvement with military and weapon manufacturing impacts our position as leaders in ethical and responsible AI, and goes against our mission statement and stated AI Principles,” the letter that circulated inside Google DeepMind says.

As Time reports, the letter from DeepMind staff urges leadership to investigate claims that Google cloud services are being used by militaries and weapons manufacturers, to cut off military access to DeepMind’s technology, and to establish a new governance body to prevent its AI from being used by military clients in the future.

Time reports that despite these concerns and calls for action, employees say they have seen “no meaningful response” from Google so far.
