Powering the next generation of Language AI tools

Serious AI research and innovation requires serious computing horsepower. DeepL’s recent investment in NVIDIA’s accelerated computing infrastructure means that our Language AI platform will be able to access the power needed to run more advanced generative AI applications. DeepL will be one of the first in the world to commercially deploy the new NVIDIA DGX SuperPOD with DGX GB200 systems. The liquid-cooled DGX SuperPOD, a cluster powered by NVIDIA GB200 Grace Blackwell Superchips, is purpose-built for training and running generative AI models.

For the more than 100,000 organizations that work with DeepL to break down language barriers, this new development signals many more exciting ways to help their people communicate and collaborate.

Why computing power is crucial for the future of AI

The world is very aware of how quickly AI has developed in the last few years. But much less attention is paid to the spectacular increases in computing power that have made these breakthroughs possible. The power of the graphics processing units (GPUs) that run AI models has grown roughly 7,000x in the last 20 years, according to Stanford University's Human-Centered AI group, and 1,000x in the last decade, according to NVIDIA. If AI is to keep delivering on its promise, that capacity must keep growing. In short, AI innovations depend on innovative ways for computers to power AI models.
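To put those multipliers in perspective, here is a quick sketch of the compound annual growth they imply. The 7,000x and 1,000x figures come from the article; the per-year rates are simply derived from them.

```python
# Compound annual growth implied by the GPU performance figures cited above:
# roughly 7,000x over 20 years and 1,000x over the last decade.

def annual_growth(total_factor: float, years: int) -> float:
    """Return the constant yearly multiplier that compounds to total_factor over years."""
    return total_factor ** (1 / years)

twenty_year = annual_growth(7_000, 20)  # ~1.56x per year
ten_year = annual_growth(1_000, 10)     # ~2.0x per year

print(f"7,000x over 20 years ≈ {twenty_year:.2f}x per year")
print(f"1,000x over 10 years ≈ {ten_year:.2f}x per year")
```

In other words, the cited figures correspond to GPU performance roughly doubling every year over the past decade, a pace that outstrips the classic Moore's law cadence for general-purpose processors.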

The importance of computing power to Language AI is the reason behind DeepL’s longstanding relationship with NVIDIA. DeepL Mercury, the supercomputer that currently powers our Language AI platform, runs on an NVIDIA DGX SuperPOD with DGX H100 systems and ranks among the most powerful machines on the planet. It has enabled our teams of researchers, AI pioneers and language experts to build and train a next-generation large language model (LLM) that outperforms other AI assistants for translation quality, and is trusted by 50% of the Fortune 500.

By investing in the new NVIDIA DGX SuperPOD with DGX GB200 systems — which will be operational in early 2025 — we’ll be taking the performance of our platform to the next level. The new DGX SuperPOD, which can be scaled up to deploy tens of thousands of GB200 Superchips within its liquid-cooled rack design, is capable of handling trillion-parameter models, enabling DeepL researchers to test new ideas faster, train models more quickly, and deliver near real-time inference whenever our users need us.

How computing power helps DeepL learn like a human

DeepL is a research-based company that’s committed to enabling natural communication across languages. This communication powers collaboration, productivity and, just as importantly, a sense of belonging. It ensures that language helps ideas flow, rather than getting in the way. 

New deployments of powerful computer chips — and the quadrillions of computations per second that they can make — might seem a long way from this natural flow of language. But if you want to learn language the way a human learns language, you need machines that can make connections at least as quickly as the human brain. It’s this speed of AI inference that enables DeepL’s models to learn language in the way that humans do. 

Near real-time inference speed enables the platform to capture the gist of a sentence immediately, without having to translate every word in turn. It also enables an AI model to focus on communicating that meaning in the most natural way possible, in a new language. By the time someone finishes typing in their preferred language, DeepL is already experimenting with different ways of expressing what they’re saying in others. Every word gives the platform new information to help quickly refine its tone and expression. The new NVIDIA DGX SuperPOD with DGX GB200 systems will enable DeepL’s Language AI to do even more with these types of intuitively human language capabilities.

Responsibly sourced power

The future of Language AI requires serious computational power. It also requires a serious commitment to using that power in the most efficient and sustainable way possible. That’s why DeepL’s new supercomputer cluster will be housed, like Mercury, within the EcoDataCenter in northern Sweden, one of the most advanced and sustainable data centers in the world. There, it will run on purely renewable energy, ensuring that our next-generation Language AI capabilities are powered by wind and the flow of rivers, as part of a circular energy economy. We’ll also continue to train our platform on proprietary, highly curated data that’s uniquely tuned for human language, and which makes our algorithms learn and operate more efficiently than others.

Sustainable, powerful, exciting AI advancements come from the fusion of futuristic thinking about what AI models can do, with innovative thinking about the computing power that drives them. As DeepL’s growth accelerates, we’re investing heavily in both.

Want to learn more about how DeepL deploys advanced Language AI to transform collaboration and productivity for global organizations? It’s all in Beyond the Code, a new series produced by BBC StoryWorks Commercial Productions, which tells the story of DeepL and others transforming the world through AI.
