Why we developed our integration with Memsource
Subtitle localisation is hard.
We all vividly remember the bad old days: Excel spreadsheets, version-control hell, last-minute edit changes, specialist captioners, feedback arriving via multiple channels, client review, tracking approvals – you name it.
And it’s compounded by scale, of course. The process outlined above is bad enough for one language; it’s exponentially worse when you’re working on twenty, and the deadline looms.
Fortunately, things have improved.
On the translation side of things, modern Translation Management Systems (TMS) like Memsource offer immensely powerful tools for enterprises and translation agencies. Because they're cloud-based, translation memories, termbases, and even custom machine-translation models can all be centralised. In turn, this ensures accurate, consistent translation, wrapped up in an easy-to-use platform.
Modern TMSs only take you so far, though: video localisation presents its own set of challenges.
Firstly, translations need to be context-aware. The best translations come from true understanding, and that’s often not possible when all a linguist has is a transcript. They need to see the video they’re translating.
Secondly: captions. Captions need to obey myriad rules: they can't be too long or too short, they can't have too many characters, they can't sit too close to an edit in the footage, and they can't land in the wrong place on the screen. Introducing video into the translation mix adds extra layers of complexity that traditional TMS tools can struggle to cope with. Translation becomes more than text in, text out: it now has to deal with time and space.
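To make those rules concrete, here's a minimal sketch of the kind of automated QC a subtitle tool has to run. The thresholds (42 characters per line, 17 characters per second of reading speed, half a second of clearance from a shot change) are illustrative industry-style defaults, not CaptionHub's actual settings, and the `Caption` type is invented for the example.

```python
from dataclasses import dataclass

@dataclass
class Caption:
    text: str     # caption text, possibly multi-line
    start: float  # on-screen start time, in seconds
    end: float    # off-screen end time, in seconds

def qc_issues(cap, shot_cuts=(), max_chars_per_line=42,
              max_cps=17.0, min_cut_gap=0.5):
    """Return a list of human-readable QC warnings for one caption."""
    issues = []
    duration = cap.end - cap.start
    if duration <= 0:
        return ["end time must be after start time"]
    # Rule 1: no line may exceed the per-line character limit.
    for line in cap.text.splitlines():
        if len(line) > max_chars_per_line:
            issues.append(f"line too long ({len(line)} chars)")
    # Rule 2: reading speed, measured in characters per second.
    cps = len(cap.text.replace("\n", "")) / duration
    if cps > max_cps:
        issues.append(f"reading speed too high ({cps:.1f} cps)")
    # Rule 3: caption boundaries shouldn't fall just off a shot change.
    for cut in shot_cuts:
        if 0 < abs(cap.start - cut) < min_cut_gap or \
           0 < abs(cap.end - cut) < min_cut_gap:
            issues.append(f"too close to shot change at {cut:.2f}s")
    return issues
```

A caption that passes returns an empty list; one squeezed into too little screen time, or straddling a cut, comes back with warnings a linguist can act on in real time.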
This is why we developed our integration with Memsource: to combine the power of a modern TMS with an environment that's optimised for subtitle localisation. You can use Memsource for its broad translation feature set, and CaptionHub for our expertise in the video domain – frame accuracy, real-time QC, positioning and, of course, in-context translation.