Product teams that embed the EU AI Act's compliance structure into their strategy gain an immediate and distinct competitive edge. This proactive approach delivers more predictable releases, builds deeper user trust and speeds up readiness for the global market.
What’s Actually Happening Right Now
As the world's first comprehensive AI law, the EU AI Act sets rules for AI with ethics at its heart, so people stay safe and their rights are protected. It entered into force on August 1, 2024, with the first prohibitions and AI literacy requirements kicking in early on February 2, 2025. The full framework applies from August 2, 2026 after a two-year transition. Even with these phased dates, the Act has already begun reshaping how product teams design, ship and explain AI features.
For years, companies raced to build features first and sort out paperwork later. The Act reverses that flow. If you work with AI in any capacity, this regulation is now part of your playbook.
Its core idea is to promote trustworthy AI without slowing innovation. To get there, the Act uses a risk-based framework. Each AI use case is judged on how likely it is to harm people or their rights, and requirements increase as risk goes up.
The Act applies to providers, deployers, product manufacturers and importers, depending on your role in the AI value chain. And just like GDPR, it reaches far beyond Europe. Whether you’re training models in San Francisco or releasing features in Berlin, the rules apply if your product reaches EU users. Non-EU companies that ignore the Act can face fines reaching 7% of global turnover.
The Logic Behind the EU AI Act
The EU AI Act is built on a straightforward idea: AI should be governed based on how much harm it can cause. To make that workable, the Act classifies AI systems into four categories: unacceptable, high, limited and minimal risk. Higher risk means stronger requirements. The Act also includes a distinct set of rules for general-purpose AI (GPAI), since these models shape a wide range of downstream applications.
- Unacceptable risk covers systems that threaten fundamental rights or safety. These are banned entirely. Examples include social scoring and certain types of live biometric surveillance.
- High risk includes AI that can meaningfully impact a person’s access to work, health care, credit or safety. Hiring tools, lending assessments, medical decision support and AI used in transportation all fall here. These systems must meet strict requirements, including testing, documentation and human oversight.
- Limited risk applies to tools that need transparency so people know when they are interacting with AI. Chatbots and AI-generated images are typical examples. A clear notice to users is required.
- Minimal risk is the rest. The Act adds no new obligations here, though basic monitoring, model version tracking and a simple record of how training data was sourced remain good practice.
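To make the tiers concrete for roadmap planning, here is a minimal Python sketch of how a team might tag features by risk category. The enum mirrors the Act's four tiers, but the example features, their assigned tiers and the notes are illustrative assumptions, not an official classification.

```python
from dataclasses import dataclass
from enum import Enum


class RiskTier(Enum):
    """The four risk categories defined by the EU AI Act."""
    UNACCEPTABLE = "unacceptable"  # banned outright
    HIGH = "high"                  # strict testing, documentation, human oversight
    LIMITED = "limited"            # transparency / disclosure duties
    MINIMAL = "minimal"            # no new obligations; good practice still applies


@dataclass
class AIFeature:
    name: str
    risk_tier: RiskTier
    notes: str = ""


# Illustrative classification of hypothetical features -- the tier for any real
# feature should come from a proper legal and compliance assessment.
backlog = [
    AIFeature("CV screening assistant", RiskTier.HIGH, "affects access to work"),
    AIFeature("Support chatbot", RiskTier.LIMITED, "must disclose that users talk to AI"),
    AIFeature("Playlist recommender", RiskTier.MINIMAL, "voluntary good practice only"),
]

for feature in backlog:
    print(f"{feature.name}: {feature.risk_tier.value} -- {feature.notes}")
```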
A Closer Look at General Purpose AI
General-purpose AI gets its own attention in the EU AI Act because these models power a wide range of downstream uses. Large language models, image generators and other foundation models fall into this category.
The EU has released a voluntary GPAI Code of Practice that focuses on transparency, copyright and system security. While it’s not mandatory yet, many early adopters follow it because it signals credibility and helps prepare for stricter rules that will arrive later.
Providers of generative AI models are directly responsible for meeting GPAI requirements. They must understand how their models are trained, how they behave under different conditions and how potential risks can spread across the applications that rely on them.
Enterprise users will feel the impact too. Any company integrating or fine-tuning GPAI as part of its product will need to strengthen its internal governance, review supply-chain risk and build clear processes for how external AI systems are evaluated.
But legal training is not a requirement here. Product teams simply need to know how their chosen models work, what level of risk they introduce and what that means for design, testing and release workflows.
Why Product Teams Should Care About the EU AI Act
Every product team feels the impact in a different way. A fintech app experimenting with automated credit checks has a very different set of responsibilities from a travel app using AI to recommend trips. Yet the starting point is the same: understanding the risk level of each feature and planning accordingly.
- Risk strategy drives roadmap decisions
High-risk features demand documentation, human oversight, testing and audits. Low-risk features often only need clear disclosures. If something could fall into a higher-risk category, it is smarter to account for that early in the roadmap. Teams that build risk assessment into planning keep releases predictable and avoid late surprises.
- Transparency is now part of the UX
Users should know when they are interacting with AI. A short in-product note or a clear label usually solves this. What matters is that transparency is designed into the flow from the start rather than added at the end.
- Documentation isn't optional
Every AI feature needs a record of data sources, model behavior and testing decisions. This protects trust and removes uncertainty during audits or certifications. It also gives teams a clearer picture of how their models evolve over time.
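As a rough illustration of the kind of record that keeps this manageable, the sketch below tracks data sources, model version and testing decisions for one AI feature. The field names and example values are assumptions for illustration; a real record should follow whatever template your own compliance process defines.

```python
from dataclasses import dataclass, field
from datetime import date


@dataclass
class AIFeatureRecord:
    """Lightweight documentation entry for one AI-powered feature (illustrative only)."""
    feature_name: str
    model_name: str
    model_version: str
    data_sources: list[str]
    known_limitations: list[str] = field(default_factory=list)
    test_decisions: list[str] = field(default_factory=list)
    last_reviewed: date = field(default_factory=date.today)


# Hypothetical example entry -- every value here is made up for illustration.
record = AIFeatureRecord(
    feature_name="Loan pre-qualification hints",
    model_name="in-house scoring model",
    model_version="2.3.1",
    data_sources=["anonymised application data (2021-2024)"],
    known_limitations=["not validated for applicants under 21"],
    test_decisions=["bias audit run before each minor release"],
)
```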
Why This Matters for the Broader Market
Fines can be significant, but the deeper risk is loss of trust. AI is becoming central to products in finance, health, mobility and education. Users want to know that the systems shaping their choices are accurate, fair and safe.
Teams that truly understand the Act can use it as a design constraint that improves the product rather than hindering progress. Constraints force better decisions, clearer interfaces and stronger documentation. When your product enters regulated industries, compliance becomes a marker of quality.
Deloitte’s Lens on the Shift
Deloitte characterizes the EU AI Act as a crucial piece within the complex framework of EU digital regulation, which also addresses areas such as the data economy, cybersecurity and platform governance. Deloitte emphasizes that organizations should aim for a robust governance framework and a proactive approach to risk management.
A key insight from Deloitte: businesses should adapt and strengthen their risk management systems by building on existing internal structures rather than creating entirely new ones. Teams can build on the structures they already use for compliance and security. This makes it easier to navigate the new regulatory landscape without adding unnecessary layers.
Treating compliance proactively is a strategic advantage: it positions the company as a provider of trustworthy AI and offers a significant competitive edge in the global market. This approach supports operational excellence, mitigates the risk of penalties (which can reach €35 million or 7% of global turnover for the most severe breaches) and demonstrates robust governance to customers and partners.
STRV’s POV: Turning AI Regulation into Product Strategy
At STRV, we don’t see regulation as red tape. It’s a design constraint that strengthens trust and market readiness.
Our AI Strategy Roadmap helps teams align product goals with the Act’s requirements, from risk classification to data documentation. We treat this as a part of UX, roadmap planning and overall risk posture.
We partner with teams to:
- Classify use cases by risk level
- Define transparency and audit requirements
- Align certification timelines with product releases
We’ve helped clients across fintech, healthtech and mobility prepare for upcoming AI audits, building products that move fast and responsibly. So when regulators come calling, you’re ready.
Understanding these requirements doesn’t require legal training. Product teams only need a clear view of how their chosen models work, what risks they introduce and how that shapes design, testing and release workflows.
FAQs
What is the timeline for mandatory EU AI Act requirements?
Compliance is phased to give businesses time to adjust.
- August 2024: The Act enters into force.
- February 2025: Prohibitions on unacceptable-risk AI systems and AI literacy requirements apply.
- August 2025: Obligations for General Purpose AI (GPAI) models begin.
- August 2026: Strict obligations for high-risk AI systems take full effect.
Do non-EU companies have to designate an EU representative?
Yes. If your company is based outside the European Union but markets a high-risk AI system to EU users, you must designate an EU Authorized Representative. This representative acts as the contact point for national authorities regarding compliance issues.
What is the compliance risk of using third-party LLMs like GPT-4?
If you integrate a General Purpose AI (GPAI) model like GPT-4, you become responsible for its output within your product. You need to perform vendor due diligence to review their:
- Documentation: Technical specifications, known limitations and risk posture
- Security: Cybersecurity certifications (e.g. ISO 27001)
- Contractual Terms: Clauses regarding data ownership, usage rights and timely incident reporting
You are accountable for the downstream risks, so you must establish internal governance to manage the supply-chain risk introduced by third-party models.
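One lightweight way to operationalize that governance is a recurring due-diligence record per third-party model. The sketch below is only an illustrative checklist mirroring the three areas above; the vendor and model names are hypothetical and the fields are assumptions, not legal requirements.

```python
from dataclasses import dataclass, field


@dataclass
class VendorReview:
    """Due-diligence record for one third-party GPAI model (illustrative sketch)."""
    vendor: str
    model: str
    has_technical_docs: bool = False      # specs, known limitations, risk posture
    security_certifications: list[str] = field(default_factory=list)  # e.g. ISO 27001
    contract_covers_incident_reporting: bool = False

    def outstanding_items(self) -> list[str]:
        """List the review areas that still need follow-up."""
        items = []
        if not self.has_technical_docs:
            items.append("request technical documentation")
        if not self.security_certifications:
            items.append("confirm security certifications")
        if not self.contract_covers_incident_reporting:
            items.append("add incident-reporting clause to the contract")
        return items


# Hypothetical vendor and model names used purely for illustration.
review = VendorReview(vendor="ExampleAI", model="example-llm-v4")
print(review.outstanding_items())
```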
How is copyrighted training data handled for GPAI?
Providers of generative AI models must put in place a policy to comply with EU copyright law, including the Copyright Directive. They are required to:
- Maintain documentation of the data used for training.
- Publish a sufficiently detailed summary of the training content, including copyrighted material.
For any content generated by your model, the output must be clearly labeled as artificially generated or manipulated.
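In practice, labeling can take different forms, from visible notices to machine-readable markers. The rough sketch below shows one simple visible-disclosure approach for text output; the wording and function name are assumptions, not anything the Act prescribes.

```python
AI_DISCLOSURE = "This content was generated by an AI system."


def label_generated_text(text: str) -> str:
    """Append a plain-language AI disclosure to generated text (illustrative only)."""
    return f"{text}\n\n[{AI_DISCLOSURE}]"


print(label_generated_text("Here is a summary of your trip itinerary..."))
```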