The European Parliament has barred lawmakers from using built-in AI tools on their work devices, citing escalating cybersecurity and privacy concerns. The decision reflects serious apprehension about confidential communications being stored on external servers managed by AI companies.
In an email reviewed by Politico, the parliament’s IT department expressed doubts about the security of data submitted to AI platforms, saying the full implications of sharing information with these companies are still being assessed. “It is considered safer to keep such features disabled,” the email stated.
AI chatbots such as Anthropic’s Claude, Microsoft’s Copilot, and OpenAI’s ChatGPT carry significant risks: the platforms can be subject to U.S. legal demands requiring companies to disclose user information. In addition, chatbots are often trained on uploaded data, raising the chance that sensitive information could inadvertently surface for other users.
Europe has some of the world’s most stringent data protection regulations. Even so, the European Commission, the EU’s executive body, proposed changes last year that would ease those protections to make it easier for tech companies to train AI models on European data, a move that has drawn sharp criticism as capitulation to U.S. tech interests.
The restriction on lawmakers’ access to AI tools comes amid a broader reassessment among EU countries of their reliance on U.S. tech giants, which remain subject to U.S. laws and a shifting policy landscape. The U.S. Department of Homeland Security recently pressed multiple tech and social media firms to comply with subpoenas for user information issued without judicial approval, raising concerns about the implications for privacy rights in Europe.
