OpenAI and Google DeepMind are among the leading technology companies at the forefront of building artificial intelligence (AI) systems and capabilities. However, several current and former employees of these organizations have now signed an open letter claiming that there is little to no oversight of how these systems are built and that too little attention is being paid to the major risks the technology poses. The open letter is backed by two of the three ‘godfathers’ of AI, Geoffrey Hinton and Yoshua Bengio, and it calls for better policies to protect whistleblowers from retaliation by their employers.
OpenAI, Google DeepMind Employees Demand the Right to Warn About AI
The open letter says it was written by current and former employees of major AI companies who believe in AI’s potential to bring unprecedented benefits to humanity. It also points to the risks posed by the technology, which include deepening social inequalities, the spread of misinformation and manipulation, and even the loss of control over artificial intelligence systems, which could lead to human extinction.
The open letter highlights that the governance structure implemented by these tech giants is ineffective in ensuring oversight of these risks. It also claims that “strong financial incentives” further encourage companies to overlook the potential danger that AI systems can cause.
Arguing that AI companies are already aware of the possibilities, limitations and risk levels of different types of harm, the open letter questions their intention to take corrective action. “They currently have only weak obligations to share some of this information with governments and none with civil society. We do not think that all of them can be relied on to share it voluntarily,” it said.
In the open letter, the signatories present four requests to their employers. First, they ask that companies neither enter into nor enforce any agreement that prohibits criticism of the company over risk-related concerns. Second, they request a verifiably anonymous process through which current and former employees can raise risk-related concerns with company management, regulators, and an appropriate independent organization.
Third, the signatories urge companies to foster a culture of open criticism. Finally, the letter asks that employers not retaliate against current and former employees who publicly share confidential risk-related information after other processes have failed.
The letter was signed by a total of 13 former and current employees of OpenAI and Google DeepMind. Apart from the two ‘godfathers’ of artificial intelligence, the British computer scientist Stuart Russell also supported this move.
A Former OpenAI Employee Speaks About the Risks of AI
One of the former OpenAI employees who signed the open letter, Daniel Kokotajlo, also published a series of posts on X (formerly known as Twitter) describing his experience at the company and the risks of artificial intelligence. He claimed that when he resigned, he was asked to sign a non-disparagement clause to prevent him from saying anything critical of the company, and that when he refused to sign, the company threatened to take away his vested equity.
Kokotajlo claimed that AI systems’ neural networks are growing rapidly thanks to the massive data sets being fed to them, and added that there are no adequate measures in place to monitor the risks.
“There’s a lot we don’t understand about how these systems work and whether they’ll remain aligned with human interests as they get smarter and perhaps surpass human levels of intelligence in all arenas,” he added.
Notably, OpenAI is creating a Model Spec, a document intended to better guide the company in building ethical AI technology, and it recently established a Safety and Security Committee. Kokotajlo applauded these commitments in one of his posts.