Captioning

Captioning is the display, in writing, of dialogue, narration, and other spoken or unspoken information on the screen. As an audiovisual medium, television makes extensive use of writing. Captions usually appear in two to three lines at the bottom of the screen.

An example of captioning.

Photo courtesy of The Caption Center

Captions used for translating a foreign-language text or program are usually called “subtitles.” While such “translation subtitling” is rarely used in some countries, including the United States, captioning in the same language is indispensable, especially in information programs such as news, documentaries, and weather reporting or in entertainment programs such as game shows. Captions are also used when intelligibility is reduced by poor voice quality, dialect, colloquialism, or other features of speech. Commercials make extensive use of captioning, sometimes with calligraphic expression. The written element enhances the spoken, visual, graphic, sound, or musical components of an advertisement or provides additional information.

Captions are either “open” (that is, they appear on the screen without viewer control of their display) or “closed” (i.e., available for display at the viewer’s choice); closed captions can be “opened” with a decoder. An increasingly important use of closed captions is for making the spoken language of television available to hearing-impaired audiences. The first experiments with such captioning were initiated by PBS in the early 1970s and approved by the Federal Communications Commission (FCC) in 1976. PBS’s Boston station, WGBH-TV, established a Caption Center, which set standards for captioned programming. Although the service was a real success with hearing-impaired viewers, who lobbied for more, some in the hearing audience complained about the distraction of open captions. The problem was solved when it became possible to assign line 21 of the vertical-blanking interval (VBI) for hiding captions, which could then be conveniently opened up by a decoder. The nonprofit National Captioning Institute (NCI), formed in 1981, promoted the service and tried to meet viewers’ demands gradually. In Britain, the 1990 Broadcasting Act stipulated the captioning of a minimum of 50 percent of all programs by 1998. In Canada, broadcasters raised public interest in this service by opening closed captions during a Captioning Awareness Week in 1995. In the United States, all television sets with screens larger than 13 inches produced after 1993 were required to be equipped with decoders.
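The line-21 system described above was later codified in North America as the EIA-608 standard: each video field carries two bytes, each a 7-bit character protected by an odd parity bit, which the decoder validates before displaying. A minimal sketch of that validation and extraction step (a simplified illustration, not a full decoder; control codes below 0x20 are simply skipped here):

```python
def odd_parity_ok(byte: int) -> bool:
    """EIA-608 bytes use odd parity: the total count of 1 bits
    (7 data bits plus the parity bit in bit 7) must be odd."""
    return bin(byte & 0xFF).count("1") % 2 == 1

def decode_pair(b1: int, b2: int) -> str:
    """Strip parity bits and return the caption characters from one
    field's byte pair. Printable EIA-608 characters mostly follow ASCII."""
    chars = []
    for b in (b1, b2):
        if not odd_parity_ok(b):
            continue  # a real decoder would flag or drop the corrupted pair
        c = b & 0x7F  # remove the parity bit, keep the 7 data bits
        if c >= 0x20:  # ignore control codes in this simplified sketch
            chars.append(chr(c))
    return "".join(chars)

# 'H' (0x48) has two 1 bits, so the parity bit is set, giving 0xC8;
# 'i' (0x69) has four 1 bits, so the parity bit is set, giving 0xE9.
print(decode_pair(0xC8, 0xE9))  # -> Hi
```

At roughly 60 fields per second and two characters per field, this channel is slow but ample for two or three lines of caption text.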

Nonstandardized technology is an obstacle to the transnational exchange of closed-caption programs among countries speaking the same language. By the mid-1990s, there were some 3,000 captioned videos in the United States. However, NCI-captioned products could be viewed in Britain only with a compatible decoder, because the VBI lines used in the two countries are not compatible.

In both film and television, captioning began as a postproduction activity. Technological advances as well as growing demand from hearing-impaired viewers have made it possible to provide real-time captioning for live broadcasting. This is done with the aid of a courtroom stenograph, or shorthand machine; a high-speed stenographer can type at least 200 words per minute, which is adequate for keeping up with the speed of normal conversation. Stenographed texts are not directly readable, however, because words are abbreviated or split into consonant and vowel clusters. While the stenographer strikes the keyboard, a computer transforms the keystrokes into captions and delivers them to the transmitting station, making it possible for viewers to read the words seconds after they are spoken. Stenocaptioning was first tried in the early 1980s in Britain and the United States. The improved system was in use in North America by the mid-1990s, although alternative technologies were being developed in Europe.
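The keystroke-to-caption step works essentially as a dictionary lookup: the stenographer's abbreviated chords are matched against a translation table and expanded into ordinary words. The sketch below uses invented chord spellings and a hypothetical table (real systems rely on per-stenographer dictionaries containing many thousands of entries):

```python
# Hypothetical steno-to-English table; chord spellings are illustrative
# only, not drawn from any real stenographer's dictionary.
STENO_DICT = {
    "TKPWAOD": "good",
    "AOEFPBG": "evening",
    "WEL": "welcome",
}

def translate(chords):
    """Turn a stream of steno chords into caption text.
    Unknown chords are passed through in brackets so they can be
    spotted and corrected, a common real-time fallback strategy."""
    words = [STENO_DICT.get(ch, f"[{ch}]") for ch in chords]
    return " ".join(words)

print(translate(["TKPWAOD", "AOEFPBG"]))  # -> good evening
```

Because the lookup is effectively instantaneous, the seconds of delay viewers experience come mainly from the stenographer's reaction time and the transmission path, not from the translation itself.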

While captioning allows millions of deaf and hard-of-hearing citizens access to television, it usually involves heavy editing of the spoken language. Screen space is limited, and captions can be displayed for only a few seconds. Thus, to allow viewers enough time to read the captions and watch the pictures, the dialogue or narration must be summarized; such editing can entail changes of meaning or loss of information. However, refined, although not yet standardized, styles have been developed to help the viewer get a better grasp of the spoken language. When more than one speaker is present, the captions may either be placed next to each speaker or marked in different colors. Moreover, codes or brief comments are used to indicate features of the speech, music, and sound effects.

Captioning is a useful teaching aid in second-language learning, child or adult acquisition of literacy, and in most types of educational programming. It also has a potential for creating new television genres and art forms. Digital broadcasting improves the production and reception of captions by, for instance, allowing viewers to adjust text size and diversifying fonts and styles.
