
Balancing Power and Privacy: Safeguarding Data in Generative AI and Large Language Models
February 8, 2025
Generative AI — the class of models capable of producing human-like text, images, code, music, and beyond — has dramatically transformed our digital landscape. Yet, as this technology advances, concerns about transparency, safety, intellectual property, privacy, and bias have grown more pronounced. In response, European policymakers have introduced the EU Artificial Intelligence Act (EU AI Act), a groundbreaking regulatory framework intended to ensure that AI development and deployment uphold fundamental rights and European ethical values.
From Draft to Adoption
The legislative journey of the EU AI Act began in April 2021 when the European Commission first proposed it. After a lengthy negotiation process, the European Parliament adopted the Act on March 13, 2024, followed by the Council’s approval on May 21, 2024. The final text was published in the Official Journal on July 12, 2024, and the Act officially entered into force on August 1, 2024.
A Phased Implementation Timeline
Unlike a straightforward “wait two years” approach, the EU AI Act introduces a phased rollout, specifying when each set of rules takes effect:
- August 1, 2024: The Act enters into force.
- February 2, 2025: Prohibitions on certain unacceptable AI practices and AI literacy requirements begin to apply.
- May 2, 2025: Codes of practice for General Purpose AI (GPAI) must be in place.
- August 2, 2025: Rules for GPAI models (including additional obligations for those posing systemic risk), the associated governance framework, and penalty provisions start to apply.
- August 2, 2026: Most remaining obligations, including those governing high-risk AI systems, come into force. High-risk AI systems already on the market before this date are grandfathered in unless their design or intended purpose changes significantly after August 2, 2026.
- August 2, 2027: Requirements for specific high-risk AI systems listed in Annex I fully apply. Additionally, GPAI models available before August 2, 2025, must comply with the Act starting August 2, 2027, giving established providers more time to adjust.
This nuanced schedule ensures stakeholders can plan ahead, while reducing immediate burdens on AI systems already in circulation.
A Risk-Based Structure
Under the EU AI Act, AI systems are regulated according to their assessed risk level:
- Prohibited (Unacceptable) AI: Systems that pose a clear threat to safety, livelihoods, or fundamental rights are outright banned.
- High-Risk AI: Systems deployed in critical domains — ranging from healthcare and employment to law enforcement and border management — must meet stringent documentation, monitoring, and conformity assessment standards.
- Limited- and Minimal-Risk AI: These face lighter requirements, often focused on ensuring transparency or adhering to codes of conduct.
Generative AI models could fall into any of these categories depending on their intended use. For example, a model generating material used for hiring decisions might be deemed high-risk, while one producing purely creative artistic outputs could be regulated under lighter-touch rules.
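To make the tiering concrete, the structure above can be sketched as a simple lookup. This is a minimal illustration, not a legal determination: the four tiers come from the Act, but the example use-case mappings and all names below (`RiskTier`, `risk_tier`, the dictionary keys) are hypothetical assumptions chosen for this sketch. Real classification turns on the Act's annexes and case-by-case legal analysis.

```python
from enum import Enum


class RiskTier(Enum):
    """The EU AI Act's four risk categories."""
    PROHIBITED = "prohibited"  # banned outright
    HIGH = "high"              # strict documentation and conformity duties
    LIMITED = "limited"        # mainly transparency obligations
    MINIMAL = "minimal"        # largely unregulated

# Illustrative mapping only; actual tiering requires legal analysis
# against Annex III and the Act's prohibited-practice list.
EXAMPLE_USE_CASES = {
    "social_scoring_by_public_authorities": RiskTier.PROHIBITED,
    "cv_screening_for_hiring": RiskTier.HIGH,
    "customer_service_chatbot": RiskTier.LIMITED,
    "creative_artwork_generator": RiskTier.MINIMAL,
}


def risk_tier(use_case: str) -> RiskTier:
    """Return the illustrative tier, defaulting to minimal risk."""
    return EXAMPLE_USE_CASES.get(use_case, RiskTier.MINIMAL)


print(risk_tier("cv_screening_for_hiring").value)  # high
```

Note that the same underlying model can land in different tiers depending on deployment context, which is why the lookup here keys on the use case rather than the model itself.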
General Purpose AI (GPAI) and Systemic Risk
General Purpose AI (GPAI) models, often large-scale foundation models, are inherently versatile, capable of adapting to numerous tasks. The EU AI Act imposes specific obligations on GPAI model providers — primarily related to transparency, safety measures, and documenting training data, intended uses, and known limitations.
For systemic risk GPAI models, which have broad social or economic implications, requirements are even more stringent. Such models may face enhanced oversight, tighter risk management protocols, and more frequent evaluation to ensure they do not undermine public welfare or fundamental rights.
Securing Generative AI: Safety, Robustness, and Transparency
The EU AI Act strongly emphasizes trustworthiness. Key obligations for high-risk and GPAI models include:
- Robust Evaluation and Documentation: Providers must detail how the model was developed, identify data sources, clarify intended uses, and assess resilience against adversarial tactics.
- Data Integrity and Privacy Protections: In harmony with the GDPR, the Act calls for rigorous data handling and proactive risk management to guard against breaches or misuse.
- Continuous Monitoring and Post-Market Surveillance: Providers must remain vigilant throughout an AI system’s lifecycle, responding to performance issues, reporting incidents, and mitigating new vulnerabilities as they arise.
Conformity Assessments, Standards, and Governance
High-risk AI systems may need to pass conformity assessments, ranging from self-assessments to third-party evaluations by notified bodies — independent entities authorized by EU member states. Aligning with harmonized European standards (developed by organizations like CEN and CENELEC) can establish a presumption of conformity, making it simpler for providers to prove they meet the Act’s requirements.
Governance under the EU AI Act is multi-layered:
- National Competent Authorities: Enforce the Act at the member-state level.
- EU AI Office: Coordinates oversight and implementation across the EU.
- AI Board: Comprising national representatives, it ensures consistent interpretation and enforcement throughout the Union.
- Scientific Panel: Offers technical and scientific guidance to inform regulatory decisions.
This integrated governance model fosters consistency and coherence in AI regulation across Europe.
Penalties for Non-Compliance
Non-compliance with the EU AI Act carries considerable financial risk. Fines can reach up to €35 million or 7% of a company’s global annual turnover, whichever is higher. The timetable for penalties is also staggered, with most providers of GPAI models facing penalties starting August 2, 2025, while systemic risk GPAI providers operate on a separate timeline aligned with their heightened obligations.
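The "whichever is higher" rule above means the effective cap scales with company size. A minimal sketch of that arithmetic, assuming the top-tier fine bracket of EUR 35 million or 7% of global annual turnover (the function and constant names are illustrative, not from the Act):

```python
# Top-tier fine bracket for the most serious violations:
# EUR 35 million or 7% of worldwide annual turnover, whichever is HIGHER.
FIXED_CAP_EUR = 35_000_000
TURNOVER_RATE = 0.07


def max_fine_eur(global_annual_turnover_eur: float) -> float:
    """Upper bound of a top-bracket fine for a given turnover."""
    return max(FIXED_CAP_EUR, TURNOVER_RATE * global_annual_turnover_eur)


# A firm with EUR 1 billion turnover: 7% = EUR 70 million, exceeding the fixed cap.
print(max_fine_eur(1_000_000_000))  # 70000000.0
# A firm with EUR 100 million turnover: 7% = EUR 7 million, so the fixed cap governs.
print(max_fine_eur(100_000_000))    # 35000000.0
```

For smaller companies the EUR 35 million floor dominates; beyond EUR 500 million in turnover, the percentage term takes over.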
Meeting Obligations and Going Beyond
While the EU AI Act establishes a clear baseline of legal obligations for high-risk AI and GPAI models, organizations can surpass these minimums. By adopting secure-by-design principles, conducting proactive adversarial testing, and embracing transparent practices, companies can bolster trust, enhance their reputation, and reduce long-term liabilities.
Extraterritorial Reach and the Larger Ecosystem
The EU AI Act applies to any AI system entering the EU market or used within its territory, no matter where the provider is located. It operates alongside other regulations like the GDPR and the Digital Services Act (DSA), forming a comprehensive framework aimed at reinforcing trust and accountability in Europe’s digital ecosystem.
Shaping a Secure and Ethical Future for Generative AI
The EU AI Act marks a critical turning point in AI governance, balancing the need for innovation with pressing ethical, safety, and social considerations. For developers and deployers of generative AI, understanding and adhering to these rules is not just about avoiding penalties — it’s about seizing the opportunity to build more trustworthy, secure, and responsible AI solutions.
By diligently documenting workflows, prioritizing data integrity, anticipating compliance deadlines, and engaging with standards and best practices, organizations can help ensure that generative AI’s transformative potential is realized in ways that serve the broader public good.
Disclaimer: This article is for informational purposes only and does not constitute legal advice. For specific guidance regarding compliance with the EU AI Act or related legal matters, please consult qualified legal counsel or regulatory experts.



