Hearing-Impaired (HI) Audio
Hearing-Impaired audio dates back to the 1970s, when it was first included as an output on Dolby cinema audio processors. The purpose of the HI channel is to boost dialog over music and sound effects, making dialog easier to understand through headphones. A typical listener might suffer from high-frequency hearing loss, for example, and thus benefit from the boosted dialog. The HI output of a cinema audio processor is derived from the dialog-dominant Center channel, combined with a softer mix of the Left and Right channels (which predominantly carry music and effects) than would be heard unaided in the auditorium. Several commercial methods exist for delivering specialized audio to audience-worn headphones using infrared light or radio waves.
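The center-dominant derivation described above can be sketched as a simple per-sample mix. This is an illustration only: the function name and the -9 dB default attenuation are assumptions for the example, not values taken from any audio processor's actual design.

```python
def hi_mix(center, left, right, lr_gain_db=-9.0):
    """Sketch of a center-dominant HI mix for one sample period.

    The dialog-dominant Center sample passes at full level, while the
    Left and Right samples (predominantly music and effects) are
    attenuated. The -9 dB default is purely illustrative, not a
    standardized or manufacturer-specified value.
    """
    g = 10.0 ** (lr_gain_db / 20.0)  # convert dB attenuation to linear gain
    return center + g * (left + right)
```

For example, a Center-only sample passes at unity (`hi_mix(1.0, 0.0, 0.0)` returns `1.0`), while Left/Right content is reduced to roughly 35% of its level at the illustrative -9 dB setting.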
In digital cinema, a different approach to HI audio has been introduced: the HI signal can be generated in post-production and distributed as a sound channel in the 16-channel MainSound Track File. (The forthcoming Sound section provides audio channel assignment information.) The same commercial methods for delivering HI audio to headphones in the auditorium can then be applied.
Some manufacturers choose to support both methods, providing a center-channel-dominant, audio-processor-generated HI mix as a fallback for when the HI signal is not present in the Composition. Notably, the Composition does not include metadata instructing the system when to fall back to audio-processor-generated HI audio.
Visually-Impaired Narrative (VI-N) Audio
Visually-Impaired Narrative was first introduced to cinema in the 1990s with the DTS cinema sound system for film. The target audience of the VI-N channel typically has good hearing but little or no vision. The narrative describes to the listener the events taking place on screen. As with HI, the delivery mechanism to headphones in the auditorium is typically infrared light or radio waves. The VI-N signal is distributed as an audio channel in the 16-channel MainSound Track File. (See MainSound Track File.)
Sign Language Video
Sign Language Video was introduced as a recommendation by the Motion Picture Association of America (MPAA) in 2017 to satisfy new regulations introduced in Brazil, and potential regulations forthcoming in other countries. The video channel is encoded as a VP9 bit stream for inclusion in the MainSound Track File. The VP9 video frame rate is set to match that of the Composition; in this manner, the video signal is automatically synchronized with the movie, requiring no external synchronization method. (The forthcoming Sound section provides channel assignment information.) Sign Language Video is viewed by the audience on a second screen, typically a smartphone.
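The value of matching the VP9 frame rate to the Composition can be illustrated with a small timing sketch. The 24 fps and 25 fps rates below are assumed for the example only; the point is that identical rates make synchronization implicit, while a mismatched rate would accumulate drift.

```python
from fractions import Fraction

def frame_time(n, rate):
    """Presentation time (in seconds) of frame n at `rate` frames per second."""
    return Fraction(n) / rate

comp_rate = Fraction(24)   # illustrative composition edit rate
matched = Fraction(24)     # VP9 stream encoded at the same rate
mismatched = Fraction(25)  # hypothetical mismatched rate, for contrast

# With matched rates, frame n of the sign-language video falls at the
# same instant as picture frame n, so no timestamps or external sync
# mechanism are needed.
assert frame_time(1000, comp_rate) == frame_time(1000, matched)

# A mismatched rate accumulates offset: about 1.7 seconds after 1000 frames.
drift = frame_time(1000, comp_rate) - frame_time(1000, mismatched)
```

Exact rational arithmetic (`fractions.Fraction`) is used here so that non-integer edit rates could be compared without floating-point error.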