Many of the risks being discussed around generative AI tools such as OpenAI’s ChatGPT are relatively benign. They include the chatbot hallucinating answers to questions it doesn’t know the answers to, or providing biased answers based on distortions in its training data.
More troubling is the potential misuse of current tools by bad actors.
And perhaps most troubling of all is the possibility that, as generative AI models continue to improve, they could develop into a general AI superintelligence, a prospect some believe is not far off.
ChatGPT creator OpenAI today announced a new team called “Preparedness” to think about how to counter a superintelligent AI that might not be benign.
The team will be led by Aleksander Madry, director of MIT’s Center for Deployable Machine Learning.
Madry joined OpenAI in a consulting capacity earlier this year and is pulling together a team to think about how AI can be misused by humans, and how “frontier AI” can be controlled.
A frontier AI is defined by OpenAI as a “highly capable foundation model that could possess dangerous capabilities sufficient to pose severe risks to public safety.”
To help seed ideas on how AI could be misused, the company is crowdsourcing submissions, with a $25,000 prize for the top 10 ideas.
Sam Altman, OpenAI’s cofounder and CEO, is a noted doomsday believer and has stated on a number of occasions that general AI has the potential to cause human extinction.
The announcement of the new Preparedness team comes just days ahead of an AI safety summit in the UK.
UK Prime Minister Rishi Sunak recently stated, “In the most unlikely but extreme cases, there is even the risk that humanity could lose control of AI completely through the kind of AI sometimes referred to as super intelligence.”