Google employees say no to blowing stuff up

More than 3,000 Google employees have signed an open letter to CEO Sundar Pichai requesting the termination of Project Maven, which uses the firm's AI to make drone strikes more accurate.

Jamie Davies

April 5, 2018


Google no longer promotes the 'don't be evil' tagline, but such a project could hardly be further from that mantra.

Project Maven was started last year, focusing on computer vision which autonomously extracts objects of interest from moving or still imagery. It's an area of AI that technology companies frequently discuss and implement today; think of the auto-tagging and object-recognition features applied to pictures on social media sites. Project Maven, however, takes the technology into the world of war.

The letter itself, which you can read here, asks Pichai to officially end the company's involvement in the project and to draft a company policy stating that neither Google nor its contractors will ever build warfare technology. The message from the employees is clear: we do not want to help the government kill people.

It is worth noting that Google bosses have stated the technology will not be used to operate drones or launch weapons, but this still leaves a lot of hazy definitions. Those two use cases have been explicitly ruled out, yet there are numerous other ways in which AI can aid destruction. The technology will be used to detect vehicles and other objects, track their movements and provide the results to the Department of Defense, which could indirectly aid potential aggression and military action. This is not sitting well with Google employees.

“This plan will irreparably damage Google’s brand and its ability to compete for talent,” the letter reads. “Amid growing fears of biased and weaponized AI, Google is already struggling to keep the public’s trust. By entering into this contract, Google will join the ranks of companies like Palantir, Raytheon, and General Dynamics.”

This is an incredibly complicated and sensitive issue, one which is only compounded by the lack of regulation or ethical principles surrounding the development of artificial intelligence. AI is supposed to be a technology that helps humanity, though some have voiced concerns about the possible consequences of its power. Artificially intelligent military solutions are certainly among the applications that will have made some nervous.

You have to wonder what implications this sort of project would have for the perception of the company, not only among consumers but also among prospective employees. Google is currently viewed as a friendly company with an excellent work environment; contributing to the development of weapons would certainly turn off some potential talent. Few young, promising engineers leave university with the objective of developing instruments of death.

An internal revolt is the worst-case scenario for Google; it is one of the world's most powerful companies because it has one of the best workforces. P*ss these guys off and it won't be at the forefront of the industry for much longer.
