There's a dominant narrative in the European AI debate right now, and it goes something like this: regulation is killing innovation, the EU AI Act is strangling businesses, and what we need is a moratorium — or better yet, to scrap the whole thing and start over. Bavaria's government has been particularly vocal, calling for an immediate suspension of the AI Act. The argument resonates emotionally. But from where I sit — in AI governance at Germany's largest public insurer — it misses the point.
The real problem isn't regulation. It's uncertainty.
Insurance companies have been regulated since before the word "algorithm" entered common usage. We don't fear rules. What we fear is not knowing what the rules mean in practice.
Consider a concrete scenario that keeps AI teams across Europe awake at night: You want to train a machine learning model using customer data — policy histories, claims records, behavioural patterns. You have a legal basis for this processing. You train the model. It works well. You deploy it.
Then a customer exercises their right to object to data processing under Article 21 of the GDPR.
Now what?
Must you retrain the entire model from scratch — a process that may have cost hundreds of thousands of euros and months of compute time? Or can you deploy technical measures, such as output filters, so that the individual's data no longer influences the outputs the system produces? The honest answer today, for private-sector companies: nobody knows for sure.
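To make the "output filters" option concrete, here is a minimal Python sketch of one common reading: the trained model is left untouched, but inference is gated so that objecting individuals are routed to a rule-based fallback. Everything here (`FilteredScoringModel`, `register_objection`, `fallback_score`) is an illustrative assumption on my part, not an established API or a prescribed compliance pattern.

```python
from dataclasses import dataclass, field

@dataclass
class FilteredScoringModel:
    """Hypothetical inference-time filter: the trained model stays
    as-is, but subjects who have objected are never scored by it."""
    model: object                  # anything with a .predict(features)
    fallback_score: float = 0.5   # rule-based default, not ML-derived
    suppressed_ids: set = field(default_factory=set)

    def register_objection(self, subject_id: str) -> None:
        # Recorded when a customer exercises their Article 21 GDPR right.
        self.suppressed_ids.add(subject_id)

    def predict(self, subject_id: str, features) -> float:
        if subject_id in self.suppressed_ids:
            # The model is never consulted for this person, so its
            # learned parameters cannot influence this decision.
            return self.fallback_score
        return self.model.predict(features)
```

Note what the sketch does not do: the objector's data still shaped the weights that score everyone else. Whether a supervisory authority would accept such a gate as a "suitable measure", or insist on retraining or machine unlearning, is exactly the open question.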
Baden-Württemberg just answered this question — for government agencies
In early February 2026, the Baden-Württemberg state parliament passed a significant amendment to its Landesdatenschutzgesetz (LDSG) — the state data protection law. Among other things, this amendment introduced two provisions directly relevant to AI:
§ 9a LDSG establishes that correction of personal data processed by AI systems cannot be demanded when it would require disproportionate technical or economic effort or cause significant environmental impact. Instead, filters or other suitable measures can take the place of correction. Through § 10(4), the same principle applies to data deletion.
§ 11a LDSG creates an explicit legal basis for the further processing of personal data for the development, training, testing, validation, and monitoring of AI systems and models by public bodies, provided the AI system's purpose cannot be effectively achieved without such data.
These are precisely the kinds of provisions that enable practical AI deployment. They acknowledge the technical reality that "unlearning" data from a trained model is fundamentally different from deleting a row in a database, and they provide a legally defensible alternative.
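A toy numerical example makes that difference tangible. Below, ordinary least squares stands in for a large model (the data and weights are invented for illustration): deleting a customer's row from storage leaves the fitted weights untouched, and only a full refit, trivial here but potentially months of compute at production scale, actually removes their influence.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))        # 100 "customers", 3 features each
y = X @ np.array([1.0, -2.0, 0.5]) + rng.normal(scale=0.1, size=100)

# Fit once on all customers, including customer 0.
w_full, *_ = np.linalg.lstsq(X, y, rcond=None)

# "Deleting the row" changes the database, not the model:
X_del, y_del = X[1:], y[1:]          # drop customer 0 from storage
print(w_full)                        # weights still encode customer 0

# Only retraining removes the influence.
w_retrained, *_ = np.linalg.lstsq(X_del, y_del, rcond=None)
print(w_retrained)                   # the weights shift, slightly
```

The weights are an aggregate over every training row; no single row can be subtracted out after the fact without refitting or specialised unlearning techniques.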
But here's the critical limitation: these rules apply exclusively to public-sector organisations in Baden-Württemberg — state agencies, municipalities, public institutions. They do not cover the private sector.
What Bavaria could do — instead of demanding a moratorium
The Bavarian government has positioned itself as a champion of AI innovation and a critic of EU overregulation. Fair enough. But calling for a moratorium on an EU regulation that's already in force, with full applicability coming in August 2026, is at best symbolic. The EU Commission has already rejected the demand, and companies need actionable guidance today — not political gestures aimed at Brussels.
What would actually help is this: create legal certainty at the state level, as Baden-Württemberg has done, but extend it to cover private-sector organisations as well.
A Bavarian data protection law that clarifies under what conditions output filters and technical safeguards are sufficient alternatives to model retraining when individuals exercise data rights — that would be genuinely useful. A state-level legal basis for using personal data in AI training when anonymisation isn't feasible and the purpose justifies it — that would reduce the legal risk that currently makes many companies hesitate.
These aren't exotic requests. They are the daily reality of anyone trying to deploy AI responsibly in a regulated industry.
Governance as a design principle, not a compliance filter
I've argued consistently — at conferences, in panel discussions, and within my own organisation — that governance should be treated as a design principle embedded from the start, not a compliance filter bolted on at the end. The EU AI Act, for all its flaws, shares this philosophy. Its risk-based approach mirrors how well-run organisations already think about AI deployment.
The problem isn't the existence of rules. The problem is that the rules remain abstract, contested, and unevenly interpreted across jurisdictions. Companies operating in Bavaria today face a patchwork of federal, state, and EU-level requirements, with crucial practical questions left unanswered.
Every week that passes without clear, actionable guidance is a week in which another AI project gets deprioritised, another business case gets shelved, and another innovation opportunity goes to a competitor operating in a jurisdiction with clearer rules.
The ask
My number one wish for policymakers — in Bavaria and beyond — is straightforward:
Stop debating whether to regulate AI. Start making the existing regulation workable.
Give companies the same legal certainty that Baden-Württemberg just gave its government agencies. Clarify the practical questions that every AI team in every company is wrestling with right now. Do it at the state level where you can, and push for it at the federal and EU level where you must.
That's not a moratorium. That's governance.
Oliver Duschka works in AI and Data Governance at Versicherungskammer, Germany's largest public insurer, and holds a PhD in Computer Science from Stanford University. The views expressed here are his own.