“We are losing control over AI”
In a recent development, 13 current and former OpenAI employees, including researchers who worked on ChatGPT, have published an open letter warning that “we are losing control over AI” and urging far greater caution in the development of artificial intelligence systems. In the letter, the signatories acknowledge the enormous benefits that artificial intelligence (AI) can deliver while warning about growing risks: losing control over AI systems, manipulation of data, the circulation of fake images and videos, and more.
According to the signatories, their main reason for writing to the public is that AI system developers are not sharing enough information about the risks these systems pose. Instead, they argue, companies showcase only the bright side while keeping the most alarming aspects of these AI systems behind the curtain.
The letter argues that AI developers can get away with this because they have strong financial incentives to avoid scrutiny. The signatories have therefore demanded stricter laws from governments and lawmakers, and have urged the public to become more aware of these risks.
Demands put forward by Former OpenAI employees
Here is the complete list of demands made by the former OpenAI employees in their letter:
1) That the AI companies will not enter into or enforce any agreement with their employees that prohibits “disparagement” or criticism of the company for risk-related concerns, nor retaliate for risk-related criticism by hindering any vested economic benefit.
2) That the AI companies will facilitate a verifiably anonymous process for current and former employees to raise risk-related concerns to the company’s board, to regulators, and to an appropriate independent organisation with relevant expertise.
3) That the AI companies will support a culture of open criticism and allow their current and former employees to raise risk-related concerns about their technologies to the public, to the company’s board, to regulators, or to an appropriate independent organisation with relevant expertise, so long as trade secrets and other intellectual property interests are appropriately protected.
4) That the AI companies will not retaliate against current and former employees who publicly share risk-related confidential information after other processes have failed. We accept that any effort to report risk-related concerns should avoid releasing confidential information unnecessarily. Therefore, once an adequate process for anonymously raising concerns to the company’s board, to regulators, and to an appropriate independent organisation with relevant expertise exists, we accept that concerns should be raised through such a process initially. However, as long as such a process does not exist, current and former employees should retain their freedom to report their concerns to the public.
Conclusion
In my view, the demands put forth by the former OpenAI employees are entirely legitimate, and AI companies should take them seriously. We are heading towards a future that humanity has never experienced before: with the arrival of AI technologies, that future looks bright but also uncertain. This letter is one step towards making it somewhat more secure. The potential risks and benefits of AI must be weighed carefully so that, in the long run, they do not harm humanity as a whole.