The traditional dubbing process is highly complex. With the ongoing struggle to find sufficient translators, even companies like Disney are looking to machine learning to streamline parts of the process.
The complexity of traditional dubbing
Traditional dubbing methods give producers and distributors precise control over the end product. They can, for example, ask for changes to the intonation of a specific line or challenge the suitability of a voice actor. However, this comes at the cost of several simultaneously moving parts and piecemeal production.
Every piece of the process requires the input of a separate team of specialists. Localization teams must cast and direct professional voice actors, pare down original scripts into dubbing stems, and build out new scripts in the target language with the help of dedicated language specialists. This process can take several months. Once complete, sound engineers must layer these new voice tracks into the original video: a painstaking process that often requires digital manipulation or additional recording sessions. A single line of dialogue may pass through this workflow several times before it sounds just right. Localization service providers project-manage the work of shepherding these experts. However, such a complicated workflow necessarily comes at a high cost.
And this is just the process for localizing one piece of content. Media giants like Netflix and Disney often plan to release big-name series in several regions at once via a 'day-and-date release' process. Such launches mean coordinating localization projects across multiple territories, and a delay in one language or location can be disastrous for the overarching content strategy. As a result, there is significant pressure on localization partners, who face mammoth logistical challenges on top of an already complex process. With industry demand growing, workflow difficulties are only increasing. “When I first came to this industry,” explains Mark Howorth, Group President at media localization provider Iyuno-SDI, “dubbing typically took three months and subtitling, a month. But now that has moved rapidly toward day-and-date. We love the fact that there are huge volumes, but it’s putting pressure on the supply chain.” (Slator).
Translator talent crunch: a breaking point?
Traditional dubbing is being made trickier by the ongoing translator talent crunch, “a shortage resulting from the trifecta of high demand around non-English content, simultaneous releases in multiple languages, and a talent pool that takes time to be developed” (Slator). The complexity of the traditional dubbing lifecycle means companies like Netflix have long and robust processes for accepting new dubbing providers. As such, there’s no quick way to scale dubbing resources.
The inability to scale is a real issue for content providers, compelling big hitters to consider where they could incorporate machine learning solutions. A 2022 Disney job post advertised for a VP of Localization who would "be responsible for creating processes for new systems and technology, such as the use of synthesized voices, AI, and machine translation” (Source). Some, like Amazon, are even launching their own machine learning research and development initiatives.
That Disney is seeking AI and machine translation experts, despite its access to vast traditional dubbing resources and a correspondingly large localization budget, is a strong signal of where the industry is heading. Big players are diversifying, pursuing hybrid strategies that combine traditional dubbers with machine learning solutions. To remain competitive, other producers will also need to consider where machine learning can improve efficiency and scale their localization operations.
Companies can benefit from a hybrid approach
Machine learning and traditional dubbing aren’t directly comparable. They’re different strategies with distinct strengths that suit distinct purposes. Machine-learning solutions, for example, offer less granular control but are inherently more streamlined and scalable. They require fewer moving parts: instead of expert teams parsing voice tracks, AI technology can automatically separate components; instead of casting, hiring, and directing expensive voice actors, synthetic AI voices can match the tone and cadence of the original speech in documentaries and news reports. And if tweaks are necessary, they won’t require re-recording in the studio.
As a result, when using machine dubbing, costs are significantly lower and turnaround is faster. It’s even possible to localize videos in hours, transforming distribution possibilities. Sky News, for example, uses Papercup to broadcast breaking news to Latin American audiences in Spanish.
Utilizing machine learning instead of, or alongside, traditional dubbing opens up new strategic opportunities. If you know you can produce a consistent volume of localized content despite translator uncertainty and workflow bottlenecks, you can build on this foundation for a solid content rollout. In turn, this streamlined content strategy can transform your business.
The binary between traditional and machine dubbing is a false one. The translator talent crunch and the innovation projects of various industry giants only emphasize the potential machine learning has for improving localization workflows. To gain an industry advantage, consider utilizing this technology sooner rather than later.
For a consultation on your dubbing needs, book a demo with the Papercup team.