On The Line Subtitles Macedonian
Subtitles are text representing the contents of the audio in a film, television show, opera or other audiovisual media. Subtitles might provide a transcription or translation of spoken dialogue. Although naming conventions can vary, captions are subtitles that include written descriptions of other elements of the audio, like music or sound effects. Captions are thus especially helpful to people who are deaf or hard-of-hearing. Other times, subtitles add information not present in the audio: localized subtitles can provide cultural context to viewers, for example by explaining to an unfamiliar American audience that sake is a type of Japanese rice wine. Lastly, subtitles are sometimes used for humor, as in Annie Hall, where subtitles show the characters' inner thoughts, which contradict what they are actually saying in the audio.
Creating, delivering and displaying subtitles is a complicated and multi-step endeavor. First, the text of the subtitles needs to be written. When there is plenty of time to prepare, this process can be done by hand. However, for media produced in real time, like live television, it may be done by stenographers or using automated speech recognition. Subtitles written by fans, rather than more official sources, are referred to as fansubs. Regardless of who does the writing, the result must include timing information specifying when each line of text should be displayed.
Second, subtitles need to be distributed to the audience. Open subtitles are added directly to recorded video frames themselves and thus cannot be removed once added. On the other hand, closed subtitles are stored separately, which can allow subtitles in different languages to be used without changing the video itself. In either case, there are a wide variety of technical approaches and formats used to encode the subtitles.
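Among the many closed-subtitle formats, one of the simplest and most widely supported is SubRip (.srt), a plain-text format in which each cue consists of a sequence number, a start and end timestamp separated by `-->`, and the text to display. The cue text below is illustrative:

```
1
00:00:01,000 --> 00:00:04,000
Subtitles might provide a transcription
or translation of spoken dialogue.

2
00:00:05,500 --> 00:00:08,200
(WIND HOWLING)
```

Timestamps take the fixed form HH:MM:SS,mmm (hours, minutes, seconds, and milliseconds, with a comma before the milliseconds), and a blank line separates one cue from the next.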
Third, subtitles need to be displayed to the audience. Open subtitles are always shown whenever the video is played because they are part of the video itself. However, displaying closed subtitles is optional since they are overlaid onto the video by whatever is playing it. For example, media player software might be used to combine closed subtitles with the video itself. In some theaters or venues, a dedicated screen or screens are used to display subtitles. If that dedicated screen is above rather than below the main display area, the subtitles are called surtitles.
Professional subtitlers usually work with specialized computer software and hardware where the video is digitally stored on a hard disk, making each individual frame instantly accessible. Besides creating the subtitles, the subtitler usually also tells the computer software the exact positions where each subtitle should appear and disappear. For cinema film, this task is traditionally done by separate technicians. The result is a subtitle file containing the actual subtitles as well as position markers indicating where each subtitle should appear and disappear. These markers are usually based on timecode if it is a work for electronic media (e.g., TV, video, DVD), or on film length (measured in feet and frames) if the subtitles are to be used for traditional cinema film.
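As a sketch of the bookkeeping behind those position markers, both kinds of marker can be derived from an absolute frame count. The function names below are illustrative, and the constants assume 24 frames per second and 35 mm film at 16 frames per foot (the usual sound-speed figures); other frame rates and gauges would use different values:

```python
def frames_to_timecode(frame: int, fps: int = 24) -> str:
    """Convert an absolute frame count to HH:MM:SS:FF timecode
    (non-drop-frame), as used for electronic media such as TV or DVD."""
    ff = frame % fps
    total_seconds = frame // fps
    ss = total_seconds % 60
    mm = (total_seconds // 60) % 60
    hh = total_seconds // 3600
    return f"{hh:02d}:{mm:02d}:{ss:02d}:{ff:02d}"


def frames_to_feet_and_frames(frame: int, frames_per_foot: int = 16) -> tuple:
    """Convert a frame count to 35 mm film length as (feet, frames),
    assuming 16 frames per foot, as used for traditional cinema film."""
    return frame // frames_per_foot, frame % frames_per_foot
```

For example, frame 1000 corresponds to timecode `00:00:41:16` at 24 fps, and to 62 feet 8 frames of 35 mm film.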
Subtitles can also be created by individuals using freely available subtitle-creation software like Subtitle Workshop for Windows, MovieCaptioner for Mac/Windows, and Subtitle Composer for Linux, and then hardcoded onto a video file with programs such as VirtualDub in combination with VSFilter, which can also be used to show subtitles as softsubs in many software video players.
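To display softsubs, a player must read the timing out of the subtitle file itself. A minimal sketch of how the timing line of a SubRip (.srt) cue might be parsed is shown below; the helper name and the strict pattern are assumptions for illustration, and real players accept many more variations:

```python
import re

# Matches one SRT timestamp of the form HH:MM:SS,mmm.
TIMING = re.compile(r"(\d{2}):(\d{2}):(\d{2}),(\d{3})")


def srt_time_to_ms(stamp: str) -> int:
    """Convert an SRT timestamp such as '00:01:02,500' to milliseconds,
    so the player knows when to show or hide the cue."""
    match = TIMING.match(stamp)
    if match is None:
        raise ValueError(f"not a valid SRT timestamp: {stamp!r}")
    hh, mm, ss, ms = map(int, match.groups())
    return ((hh * 60 + mm) * 60 + ss) * 1000 + ms
```

For example, `srt_time_to_ms("00:01:02,500")` yields 62500 milliseconds, the point at which the player would overlay that cue on the video.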
Closed captioning is the American term for closed subtitles specifically intended for people who are deaf or hard-of-hearing. These are a transcription rather than a translation, and usually also contain lyrics and descriptions of important non-dialogue audio such as (SIGHS), (WIND HOWLING), ("SONG TITLE" PLAYING), (KISSES), (THUNDER RUMBLING) and (DOOR CREAKING). From the expression "closed captions", the word "caption" has in recent years come to mean a subtitle intended for the deaf or hard-of-hearing, be it "open" or "closed". In British English, "subtitles" usually refers to subtitles for the deaf or hard-of-hearing (SDH); however, the term "SDH" is sometimes used when there is a need to make a distinction between the two.
Programs such as news bulletins, current affairs programs, sports, some talk shows, and political and special events utilize real time or online captioning.[3] Live captioning is increasingly common, especially in the United Kingdom and the United States, as a result of regulations that stipulate that virtually all TV eventually must be accessible for people who are deaf and hard-of-hearing.[4] In practice, however, these "real time" subtitles will typically lag the audio by several seconds due to the inherent delay in transcribing, encoding, and transmitting the subtitles. Real time subtitles are also challenged by typographic errors or mishearing of the spoken words, with no time available to correct before transmission.
Some programs may be prepared in their entirety several hours before broadcast, but with insufficient time to prepare a timecoded caption file for automatic play-out. Pre-prepared captions look similar to offline captions, although the accuracy of cueing may be compromised slightly as the captions are not locked to program timecode.[3]
Communication access real-time translation (CART) stenographers, who use a computer with either stenotype or Velotype keyboards to transcribe stenographic input for presentation as captions within two to three seconds of the corresponding audio, must caption anything which is purely live and unscripted;[3] however, more recent developments include operators using speech recognition software and re-voicing the dialogue. Speech recognition technology has advanced so quickly in the United States that about 50% of all live captioning was done through speech recognition as of 2005. Real-time captions look different from offline captions, as they are presented as a continuous flow of text as people speak.[3]
The NWPC concluded that the standard they accept is the comprehensive real-time method, which gives them access to the commentary in its entirety. Also, not all sports are live. Many events are pre-recorded hours before they are broadcast, allowing a captioner to caption them using offline methods.[3]
News captioning applications currently available are designed to accept text from a variety of inputs: stenography, Velotype, QWERTY, ASCII import, and the newsroom computer. This allows one facility to handle a variety of online captioning requirements and to ensure that captioners properly caption all programs.[3]
For non-live, or pre-recorded programs, television program providers can choose offline captioning. Captioners gear offline captioning toward the high-end television industry, providing highly customized captioning features, such as pop-on style captions, specialized screen placement, speaker identifications, italics, special characters, and sound effects.[6]
Offline captioning involves a five-step design and editing process, and does much more than simply display the text of a program. Offline captioning helps the viewer follow a story line, become aware of mood and feeling, and allows them to fully enjoy the entire viewing experience. Offline captioning is the preferred presentation style for entertainment-type programming.[6]
Subtitles for the deaf or hard-of-hearing (SDH) is an American term introduced by the DVD industry.[7] It refers to regular subtitles in the original language where important non-dialogue information has been added, as well as speaker identification, which may be useful when the viewer cannot otherwise visually tell who is saying what.
The only significant difference for the user between SDH subtitles and closed captions is their appearance: SDH subtitles usually are displayed with the same proportional font used for the translation subtitles on the DVD; however, closed captions are displayed as white text on a black band, which blocks a large portion of the view. Closed captioning is falling out of favor as many users have no difficulty reading SDH subtitles, which are rendered as text with a contrasting outline. In addition, DVD subtitles can specify many colors on the same character: primary, outline, shadow, and background. This allows subtitlers to display subtitles on a usually translucent band for easier reading; however, this is rare, since most subtitles use an outline and shadow instead, in order to block a smaller portion of the picture. Closed captions may still be preferable to DVD subtitles, since many SDH subtitles present all of the text centered (an example of this is DVDs and Blu-ray Discs manufactured by Warner Bros.), while closed captions usually specify position on the screen: centered, left align, right align, top, etc. This is helpful for speaker identification and overlapping conversation. Some SDH subtitles (such as the subtitles of newer Universal Studios DVDs/Blu-ray Discs and most 20th Century Fox Blu-ray Discs, and some Columbia Pictures DVDs) do have positioning, but it is not as common.
DVDs for the U.S. market now sometimes have three forms of English subtitles: SDH subtitles; English subtitles, helpful for viewers who may not be hearing impaired but whose first language may not be English (although they are usually an exact transcript and not simplified); and closed caption data that is decoded by the end-user's closed caption decoder. Most anime releases in the U.S. only include translations of the original material as subtitles; therefore, SDH subtitles of English dubs ("dubtitles") are uncommon.[8][9]