Localising Audiovisual Content:
Interlingual Subtitles

In our last post on Localising Audiovisual Content, we talked about monolingual subtitles for deaf and hard-of-hearing audiences as well as hearing audiences. Today we’ll talk about the type of subtitles that usually comes to people’s minds when they hear the word ‘subtitles’: interlingual subtitles, aka translated subtitles.

Firstly, there are some important considerations to bear in mind when translating subtitles (these considerations also apply to dubbing and voice-over work).

Audiovisual content: more than speech

Speech is just one of the ways in which audiovisual content conveys meaning: music, dramatic pauses, sound effects, iconography (images that have semantic significance), written information, types of shots, etc. all come into play as well. Meaning is constructed by the relationship established between these different elements.

Audio and visuals can:

  • Confirm each other: for example, a character saying “I’m not getting into this deathtrap” while standing next to a dilapidated old car.
  • Contradict each other: for example, if a character says “I promise” but we can see that they are crossing their fingers behind their back.

In the first example and other similar situations, this relationship between audio and visuals can help the subtitler get the meaning across. However, in other cases, it can muddle things up.

The way that the audience interprets both the audio and the visuals depends on the audience’s set of expectations. Since meaning is culturally bound, the target-language audience might not be able to identify or interpret all the non-verbal elements in the way they were intended.

For example, in Bulgaria, nodding means no, and shaking your head means yes. Imagine that, in an English film, a character nods as they say something affirmative on screen, creating a relationship of confirmation between the visual and the verbal. The translation of their speech into Bulgarian will need to take the visual image into account and change the wording – e.g. rephrasing the sentence as a negative that conveys the same idea – so as not to alter the meaning created by the relationship between the visual and the verbal in the original version of the film.

Localising on-screen text

With monolingual subtitles, text that appears on-screen isn’t an issue, as it is normally in the same language as the audio/subtitles, and can be easily read by the viewer. This is obviously not the case for interlingual subtitles. There are two types of text that can appear on the visuals:

  • On-screen titles (OST): text superimposed onto the image. Common examples include the typical “two years later…” titles, or interviewees’ names. They might have been added to the video as text (in an editable text box within the video editing software) or as graphics – the method employed will change the budget drastically if the OSTs need replacing.
  • Text in graphics: text embedded into the image, e.g. on newspapers, shop names, t-shirts, etc. that appear in the video. Since this text is part of an image, the only way to replace it is with the intervention of a specialised graphics artist.
[Image: an on-screen title with its translation displayed as a subtitle, shown in Subtitle Edit]

Since replacing text can be so difficult, the most cost-effective way of translating on-screen text is to use subtitles (although this is not a good option when a lot of dialogue is happening at the same time as the text is on display).

Subtitles that translate plot-pertinent on-screen text such as the examples above are considered forced narrative subtitles. A Forced Narrative (FN) subtitle is a subtitle that contains a translation of elements meant to be understood by the viewer but not covered in the localised audio. A film might be lip-sync dubbed but still have FN subtitles translating, for example, dialogue in a language different to the film’s main language.
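
To make the idea concrete, here is a minimal sketch of how a forced-narrative track might be derived from a full set of subtitle cues. This is illustrative Python, not any real subtitling tool: the Cue class, the "kind" tags and the example cues are all hypothetical, and real workflows rely on dedicated subtitle formats and a “forced” flag defined by the delivery spec.

    from dataclasses import dataclass

    @dataclass
    class Cue:
        start: str   # e.g. "00:00:05,000"
        end: str     # e.g. "00:00:07,000"
        text: str
        kind: str    # hypothetical tag: "dialogue", "on_screen_text" or "foreign_dialogue"

    # Cues a dubbed version would still need as forced narrative (FN) subtitles:
    FORCED_KINDS = {"on_screen_text", "foreign_dialogue"}

    def forced_only(cues):
        """Filter a full subtitle list down to the FN cues only."""
        return [c for c in cues if c.kind in FORCED_KINDS]

    def to_srt(cues):
        """Render cues as a simple SRT-style block."""
        blocks = []
        for i, c in enumerate(cues, start=1):
            blocks.append(f"{i}\n{c.start} --> {c.end}\n{c.text}\n")
        return "\n".join(blocks)

    cues = [
        # Translation of an on-screen title ("Two years later…") for a Spanish-speaking audience
        Cue("00:00:05,000", "00:00:07,000", "Dos años después…", "on_screen_text"),
        # Ordinary dialogue: covered by the dub, so not forced
        Cue("00:00:08,000", "00:00:10,500", "Hola, ¿qué tal?", "dialogue"),
    ]
    print(to_srt(forced_only(cues)))

Running the sketch prints only the on-screen title cue – the kind of subtitle a viewer of the dubbed version would still need.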

With this in mind, let’s talk about the subtitles themselves in more detail!

Interlingual subtitles: when you want your content to be accessible in a different language

Interlingual subtitles have to take into account the same factors as monolingual subtitles, which we introduced in our last post, with some differences:

  • Character limits: Teletext is not used for interlingual subtitles; proportional fonts are used, and the character limit varies considerably.
  • Reading speed: The needs of a foreign target audience have to be kept in mind.
  • Clarity: Unclear speech needs to be made clear and concise, although in translations it is less obvious when the subtitles don’t exactly match the audio.
  • Line breaks: The maximum is two lines; three lines are not permitted.
  • Shot changes: Professional interlingual subtitles are timed to match each shot just like professional monolingual subtitles.
  • General timing rules: Generally, a gap of two frames is left between subtitles. The rest of the timing rules vary depending on the broadcaster/streaming service (a rough sketch of how such checks can be automated follows this list).
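
To make these constraints more tangible, here is a minimal sketch of automated checks for some of the rules above. It is illustrative Python, not any broadcaster’s actual QC tool: the thresholds (42 characters per line, a reading speed of 17 characters per second, 25 fps, a two-frame gap) are assumptions for the sake of the example, and real limits vary by client, platform and language.

    from dataclasses import dataclass

    @dataclass
    class Cue:
        start: float   # seconds
        end: float     # seconds
        text: str      # lines separated by "\n"

    FPS = 25                 # assumed frame rate
    MAX_LINES = 2            # two lines maximum, never three
    MAX_CHARS_PER_LINE = 42  # illustrative limit for a proportional Latin-script font
    MAX_CPS = 17             # illustrative reading-speed ceiling (characters per second)
    MIN_GAP_FRAMES = 2       # gap usually left between consecutive subtitles

    def check_cue(cue):
        """Flag line-count, line-length and reading-speed problems in one cue."""
        issues = []
        lines = cue.text.split("\n")
        if len(lines) > MAX_LINES:
            issues.append(f"{len(lines)} lines (max {MAX_LINES})")
        for line in lines:
            if len(line) > MAX_CHARS_PER_LINE:
                issues.append(f"line too long: {len(line)} characters")
        cps = len(cue.text.replace("\n", " ")) / (cue.end - cue.start)
        if cps > MAX_CPS:
            issues.append(f"reading speed {cps:.1f} cps (max {MAX_CPS})")
        return issues

    def check_gaps(cues):
        """Flag consecutive cues separated by less than the minimum gap."""
        min_gap = MIN_GAP_FRAMES / FPS
        return [f"gap too small before cue starting at {nxt.start:.2f}s"
                for prev, nxt in zip(cues, cues[1:])
                if nxt.start - prev.end < min_gap]

    cues = [
        Cue(10.00, 12.50, "I'm not getting into\nthis deathtrap."),
        Cue(12.55, 14.00, "Fine. Walk, then."),
    ]
    for cue in cues:
        print(check_cue(cue) or "cue OK")
    print(check_gaps(cues) or "gaps OK")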

In some ways, interlingual subtitlers are less limited by the audience’s knowledge of the source language: most viewers cannot compare the subtitles against the audio, so the speech can be adapted more freely (though that depends entirely on the language combination). The challenge is that, for certain language pairs, translations tend to be longer than the source text, yet subtitlers are required to produce a translation shorter than the original.

Interlingual bilingual/dual subtitles: when you want your content to be accessible to a mixed audience

In some regions of the world, the convention is to have subtitles in two different languages displayed at the same time. That’s the case for cinemas in northern Belgium and Brussels (Flemish/French), public TV in Israel (Hebrew/Arabic) and media in Singapore (Chinese/English). Of course, this means more space constraints.

Bilingual subtitles can also be used as a tool for language learning.

Interlingual Subtitles for the Deaf and Hard-of-hearing (SDH): when you want your content to be accessible for deaf and hard-of-hearing audiences in a different language

Interlingual subtitles can target hearing or deaf and hard-of-hearing audiences. The latter is called SDH subtitling, and as we discussed in our last post, the difference is that it contains audio cues. Unlike TV monolingual SDH, interlingual SDH doesn’t use font colours to differentiate speakers.

If you are targeting a deaf and hard-of-hearing audience in a different language, SDH is the way to go (unless we get into the territory of sign language, of course).

However, if you want to target a hearing audience, you might be wondering if you should use subtitles, dubbing, or voice-over. In our next post, we will explore these different options. Stay tuned!