Following Sam Altman's return to OpenAI after a five-day exile, the controversy surrounding his removal appeared to stem from concerns about a potential breakthrough in the company's pursuit of artificial general intelligence.
OpenAI, the renowned AI research organization, experienced internal unrest after staff researchers reportedly sent a letter to the board of directors warning of a powerful artificial intelligence discovery that could pose risks to humanity. The letter, which Reuters could not review, reportedly contributed to the board's decision to oust CEO Sam Altman. More than 700 employees subsequently threatened to quit in solidarity with Altman and in protest of his removal. The board had expressed concerns about commercializing advances without fully understanding their consequences.
The letter referenced a project called Q*, described as a potential breakthrough in OpenAI's quest for artificial general intelligence (AGI). Some within the company believed Q* could advance generative AI by excelling at mathematical problem-solving, a key frontier in AI development. The researchers were said to be optimistic about Q*'s prospects, citing its ability to solve certain mathematical problems, though Reuters could not independently verify these claims.
In the letter, researchers raised general safety concerns about the system's capabilities and potential dangers, though specific details were not disclosed. The letter also reportedly discussed the work of an "AI scientist" team exploring ways to optimize existing AI models for improved reasoning and scientific tasks. Altman, instrumental in OpenAI's growth, had drawn major investments from Microsoft to advance the pursuit of AGI. His firing came shortly after he announced major advances at a summit, adding an unexpected twist to OpenAI's trajectory.