AI Sovereignty Is a Spectrum, Not a Switch
Apertus: why it makes sense to invest in open models even when full AI sovereignty remains a dream.
I've recently written here about the idea that chasing AI sovereignty is a pipe dream for most countries. This resonated with many readers, but it easily lends itself to misinterpretation: "So you're saying we shouldn't invest in chips, our own AI models, etc.?"
I'd like to clarify my point here. And what better opportunity to do that than in the context of the recent launch of the Swiss-made AI model Apertus. This model does not turn Switzerland into an AI sovereign country. But it's nevertheless a fantastic investment that makes the country (and others!) more sovereign.
The Impossible Dream of Full Sovereignty
To have full AI sovereignty, you'd need to control absolutely every element of the AI pipeline. Think about what that really means. You'd need to develop and build your own chips; not just design them, but actually manufacture them. You'd need to build and run your own models. Your data pipeline would need to be completely under your control. And so on.
And all of this - from the rare earth elements in your hardware to the energy powering your data centers - would need to be independent of external forces. Maybe the United States can achieve this. China might get close. But for smaller countries? For Switzerland, Belgium, or really any nation that isn't a global superpower? It's simply not realistic.
The Sovereignty Spectrum
This is why I argued in my previous post that nations should focus on talent, where they can actually make a difference. But AI sovereignty isn't just something you have or don't have. It exists on a spectrum. You might be 10% sovereign, or 30%, or if you're very lucky and very large, maybe 70%. The question therefore isn't "are we sovereign?" but rather "how can we become a bit more sovereign?"
And that's where targeted investments make sense. Building your own model like Apertus doesn't suddenly make your country AI sovereign. But it does reduce certain dependencies, gives you more control over specific use cases, and builds local expertise. Every step toward greater autonomy, however small, strengthens your position in the global AI ecosystem. The question is: which step do you prioritize?
Apertus: A Case Study in Smart Investment
The Swiss AI initiative, spearheaded by EPFL, ETH Zürich, and CSCS, has decided to invest resources into building a completely open model. Apertus - Latin for "open" - is truly open in every sense: the entire development process, including its architecture, model weights, and training data and recipes, is openly accessible and fully documented. Trained on 15 trillion tokens across more than 1,000 languages - roughly 40% of the data is non-English - Apertus covers many languages that have so far been underrepresented in LLMs, such as Swiss German and Romansh.
This is a major investment. The development was backed by over 10 million GPU hours on "Alps" at CSCS and funded by the Swiss government. The Alps supercomputer at CSCS, with over 10,000 GH200 GPUs, is one of the world's most advanced AI platforms. That's very serious compute power, at least for public institutions.
Here's something critical to understand: Apertus isn't trying to compete with ChatGPT or Claude on general capabilities. Instead, it focuses on specific areas where it can add value. Apertus was developed with due consideration to Swiss data protection laws, Swiss copyright laws, and the transparency obligations under the EU AI Act. The team meticulously filtered training data to respect opt-out requests and remove personal information - something many commercial models have been criticized for ignoring.
This creates a model that, while perhaps not as powerful as the frontier models from big tech companies, offers something they don't: complete transparency, regulatory compliance, and strong support for local languages. It's unlikely Apertus will ever be a "frontier model" in the strict sense of the term. But it will be strong enough for millions of use cases, and its commitment to total transparency makes it a unique research platform.
In developing this model, Switzerland has not become AI sovereign - but it has become a little bit more sovereign. This is great, but still largely a side effect. The true motivation was to fully understand how AI models work, and to have a completely transparent, reproducible model.
Of note here: there are of course many open-weight models out there. But open weights alone are not enough to make a model reproducible. Yes, you can look inside and take it apart. But you have no insight into how the model came to be. You cannot reproduce how the parameters were set during training - for that, you would need access to all the data and the entire training pipeline, which only very few models provide. Apertus is one of them.
If you want to learn more about Apertus, I invite you to listen to the recent “Inside AI” podcast episode with Martin Jaggi, Antoine Bosselut, and Imanol Schlag - three of the key driving forces behind the model.
CODA
This is a newsletter with two subscription types. I highly recommend switching to the paid version. While all content will remain free, all financial support directly funds EPFL AI Center-related activities.
To stay in touch, here are other ways to find me: