A coalition of nonprofits is calling on the U.S. government to halt the use of Grok, the AI chatbot developed by Elon Musk's xAI, in federal agencies such as the Department of Defense, citing serious safety concerns. Advocacy organizations including Public Citizen and the Center for AI and Digital Policy released a joint letter expressing alarm over Grok's record of generating nonconsensual sexual content and child sexual abuse material. The letter invokes the administration's own executive orders and the recently passed Take It Down Act, and notes that the Office of Management and Budget (OMB) has yet to order Grok's decommissioning.
Last September, xAI reached an agreement with the General Services Administration to supply Grok to federal agencies, following a Department of Defense contract valued at up to $200 million. Despite the controversies surrounding the chatbot, Defense Secretary Pete Hegseth indicated that Grok would operate alongside Google's Gemini within the Pentagon, handling both classified and unclassified documents, a setup experts describe as a potential national security threat.
The coalition argues that Grok cannot meet the compliance standards the administration itself has set, particularly those governing AI systems that pose severe risks. OMB guidance indicates that products unable to mitigate identified dangers should be discontinued.
Global responses to Grok's deployment have been mixed. Indonesia, Malaysia, and the Philippines previously blocked access over its problematic outputs, though those bans have since been lifted. The European Union, the U.K., South Korea, and India are conducting ongoing investigations into xAI and the X platform over data privacy and the distribution of illegal content.
Recently, Common Sense Media released a risk assessment labeling Grok particularly unsafe for young users, citing its tendency to offer harmful advice and to generate violent and sexual content. Critics argue that a model deemed risky for children raises equally serious concerns for adult users, particularly when it handles sensitive government data.
Andrew Christianson, a former NSA contractor, highlighted the dangers of deploying closed-source AI models in sensitive environments like the Pentagon. Without transparency into how these models work, he argues, it is impossible to audit their decisions or guarantee data security.
The coalition is urging the OMB to open a formal investigation into Grok's safety failures and to assess whether the chatbot satisfies prior executive requirements on AI neutrality and truthfulness. It is also demanding a reassessment of Grok's deployment and heightened oversight in light of the model's documented failures.
