Lip sync

"Lip Synch" redirects here. For the American musical comedy television program, see Lip Sync Battle. For the film series, see Lip Synch (series). For other uses, Lip Sync (disambiguation)

Lip sync, lip-sync, or lip-synch (short for lip synchronization) is a technical term for matching lip movements with pre-recorded sung or spoken vocals that listeners hear through speakers, whether through a PA system in a "live" performance or through television or cinema speakers in the case of a lip-synced TV show or film performance. The term can refer to any of a number of different techniques and processes in the context of live performances and recordings.

In the case of live concert performances, lip-synching is done by some singers to ensure that the vocal performance will sound as good as the CD, but it can be considered controversial, especially if the audience believes that they are viewing a live singing performance. In film production, lip synching is often part of the post-production phase. Dubbing foreign-language films and making animated characters appear to speak both require elaborate lip-synching. Many video games make extensive use of lip-synced sound files to create an immersive environment.

In music


Though lip-synching, which is often considered a form of miming, can be used to make it appear as though actors have substantial musical ability (e.g., The Partridge Family) or to misattribute vocals (e.g., Milli Vanilli), it is more often used by recording artists to create a particular vocal effect that they can only achieve in the recording studio, to enable them to perform live dance numbers that also incorporate vocals, or to cover for illness or other deficiencies during live performance. It is also commonly used in drag shows. Sometimes lip-sync performances are imposed on performers by television producers to shorten celebrity guest appearances, since lip-synching requires less rehearsal time and greatly simplifies sound mixing, or to eliminate the risk of vocal errors. Some artists, however, lip-sync because they are less confident singing live and wish to avoid bad notes.

Because the film track and music track are recorded separately during the creation of a music video, artists usually lip-sync to their songs and often imitate playing musical instruments as well. Artists also sometimes move their lips faster than the track to create a slow-motion effect in the final clip, which is widely considered difficult to achieve. Similarly, some artists have been known to lip-sync backwards for music videos so that, when the footage is reversed, the singer appears to sing forwards while time moves backwards in his or her surroundings.

Examples

Pop singer Ashlee Simpson lip-synched on Saturday Night Live in 2004.
In his underground romp Wild Side Story, director Lars Jacob made exaggerated mouth and tongue action integral to the show's parodic concept; Helena Mattsson (center) and Chris Ajaxon lip-sync a Mae West quip in the 2002 Stockholm cast.

Michael Jackson's performance on the television special Motown 25: Yesterday, Today, Forever (1983) changed the scope of the live stage show. Ian Inglis, author of Performance and Popular Music: History, Place and Time (2006), notes that "Jackson lip-synced 'Billie Jean'".[1] In 1989, a New York Times article observed that at "Bananarama's recent concert at the Palladium" the "first song had a big beat, layered vocal harmonies and a dance move for every line of lyrics", yet "the drum kit was untouched until five songs into the set" and "the backup vocals (and, it seemed, some of the lead vocals as well-a hybrid lead performance) were on tape along with the beat". The article also notes that the "British band Depeche Mode ... adds vocals and a few keyboard lines to taped backup onstage", although this practice is commonplace in the genre of electronic music.[2]

Milli Vanilli became one of the most popular pop acts of the late 1980s and early 1990s. The duo's debut album Girl You Know It's True achieved international success and earned them a Grammy Award for Best New Artist on February 21, 1990. Their success turned to infamy when the Grammy was withdrawn after Los Angeles Times writer Chuck Philips revealed that the lead vocals on the record were not the voices of frontmen Fab Morvan and Rob Pilatus.

Chris Nelson of The New York Times reported that by the 1990s, "[a]rtists like Madonna and Janet Jackson set new standards for showmanship, with concerts that included not only elaborate costumes and precision-timed pyrotechnics but also highly athletic dancing. These effects came at the expense of live singing."[3] Edna Gundersen of USA Today reported: "The most obvious example is Madonna's Blond Ambition World Tour, a visually preoccupied and heavily choreographed spectacle. Madonna lip-syncs the duet "Now I'm Following You", while a Dick Tracy character mouths Warren Beatty's recorded vocals. On other songs, background singers plump up her voice, strained by the exertion of non-stop dancing."[4]

Similarly, in reviewing Janet Jackson's Rhythm Nation World Tour, Michael MacCambridge of the Austin American-Statesman commented "[i]t seemed unlikely that anyone—even a prized member of the First Family of Soul Music—could dance like she did for 90 minutes and still provide the sort of powerful vocals that the '90s super concerts are expected to achieve."[5]

The music video for Electrasy's 1998 single "Morning Afterglow" featured lead singer Alisdair McKinnell lip-syncing the entire song backwards. This allowed the video to create the effect of an apartment being tidied by 'un-knocking over' bookcases, while the music plays forwards.

In 2004, US pop singer Ashlee Simpson appeared on the live comedy TV show Saturday Night Live, and during her performance "she was revealed to apparently be lip-synching". According to "her manager-father[,] ... his daughter needed the help because acid reflux disease had made her voice hoarse." Her manager stated, "Just like any artist in America, she has a backing track that she pushes so you don't have to hear her croak through a song on national television." During the incident, vocals from a song she had performed earlier began to play while the singer was "holding her microphone at her waist"; she made "some exaggerated hopping dance moves, then walked off the stage".[6]

During the 2008 Beijing Olympics, CTV news reported that a "nine-year-old Chinese girl's stunning performance at the Beijing Olympics opening ceremony has been marred by revelations she was lip-synching". The article states that "Lin Miaoke was lip-synching Friday to a version of "Ode to the Motherland" sung by seven-year-old Yang Peiyi, who was deemed not pretty enough to perform as China's representative".[7]

Britney Spears performing on her 2009 world tour, The Circus Starring Britney Spears

In 2009, US pop singer Britney Spears was "'extremely upset' over the savaging she has received after lip-synching at her Australian shows", where ABC News Australia reported that "[d]isappointed fans ...stormed out of Perth's Burswood Dome after only a few songs".[8] Reuters reports that Britney Spears "is, and always has been, about blatant, unapologetic lip-synching". The article claims that "at the New York stop of her anticipated comeback tour, Spears used her actual vocal chords only three times – twice to thank the crowd, and once to sing a ballad (though the vocals during that number were questionable, as well)".[9] Rolling Stone magazine stated that "Though some reports indicate Spears did some live singing [in her 2009 concerts], the L.A. Times Ann Powers notes that the show was dominated by backing tracks (which granted, is not the same thing as miming)".[10]

During Super Bowl XLIII, "Jennifer Hudson's performance of the national anthem" was "lip-synched ... to a previously recorded track", as apparently was Faith Hill's performance before it. The singers lip-synched "at the request of Rickey Minor, the pregame show producer", who argued that "There's too many variables to go live."[11] Subsequent Super Bowl national anthems were performed live.

Teenage viral video star Keenan Cahill lip-syncs popular songs on his YouTube channel. His popularity grew as he featured guests such as rapper 50 Cent in November 2010 and David Guetta in January 2011, making his channel one of the most popular on YouTube by January 2011.[12][13][14]

Contests and game shows

In 1981, Wm. Randy Wood started lip sync contests at the Underground Nightclub in Seattle, Washington, to attract customers. The contests were so popular that he took them nationwide, and by 1984 he had contests running in over 20 cities. Their success led Wood to work for Dick Clark Productions as consulting producer for the TV series Puttin' on the Hits, which earned a 9.0 rating in its first season and was nominated twice for Daytime Emmy Awards. In the United States, the hobby reached its peak during the 1980s, when several game shows, such as Puttin' on the Hits and Lip Service, were created. The Family Channel had a Saturday morning show called Great Pretenders in which kids lip-synched their favorite songs.

In video

Film

In film production, lip synching is often part of the post-production phase. Most films today contain scenes where the dialogue has been re-recorded afterwards; lip-synching is the technique used when animated characters speak; and lip synching is essential when films are dubbed into other languages. In many musical films, actors sang their own songs beforehand in a recording session and lip-synched during filming, but many also lip-synched to voices other than their own. Marni Nixon sang for Deborah Kerr in The King and I, Annette Warren for Ava Gardner in Show Boat, Robert McFerrin for Sidney Poitier in Porgy and Bess, Betty Wand for Leslie Caron in Gigi, Lisa Kirk for Rosalind Russell in Gypsy, and Bill Lee for Christopher Plummer in The Sound of Music.

In the 1950s MGM classic Singin' in the Rain, lip synching is a major plot point, with Debbie Reynolds' character, Kathy Selden, providing the voice for the character Lina Lamont (played by Jean Hagen). Writing in UK Sunday newspaper The Observer, Mark Kermode noted, "Trivia buffs love to invoke the ironic dubbing of Debbie Reynolds by Betty Noyes on Would You" although he pointed out that "the 19-year-old Reynolds never puts a foot wrong on smashers like Good Morning".[15] Reynolds also later acknowledged Betty Noyes’ uncredited contribution to the film, writing: "I sang You Are My Lucky Star with Gene Kelly. It was a very rangy song and done in his key. My part did not come out well, and my singing voice was dubbed in by Betty Royce [sic]".[16]

ADR

Automated dialogue replacement, also known as "ADR" or "looping", is a film sound technique involving the re-recording of dialogue after principal photography. Sometimes the dialogue recorded on location is unsatisfactory, either because it contains too much background noise or because the director is not happy with the performance, so the actors replace their own voices in a "looping" session after filming.

Animation

Another manifestation of lip synching is the art of making an animated character appear to speak along with a prerecorded track of dialogue. The technique involves working out the timing of the speech (the breakdown) as well as actually animating the lips or mouth to match the dialogue track. The earliest examples of lip-sync in animation were attempted by Max Fleischer in his 1926 short My Old Kentucky Home. The technique continues to this day, with animated films and television shows such as Shrek, Lilo & Stitch, and The Simpsons using lip-synching to make their artificial characters talk. Lip synching is also used in comedies such as This Hour Has 22 Minutes and in political satire, where the original wording is replaced entirely or in part. It has been used in conjunction with translation of films from one language to another, for example, Spirited Away. Lip-synching can be a difficult issue in translating foreign works for a domestic release, as a direct translation of the lines often runs longer or shorter than the mouth movements on screen.
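As a rough illustration of the breakdown step described above, the following Python sketch converts a timed phoneme track into per-frame mouth shapes (visemes). The phoneme timings, the phoneme-to-viseme table, and the frame rate are all assumed example values, not data from any actual production.

```python
# Minimal sketch of an animation lip-sync "breakdown": turning a timed
# phoneme track into a per-frame mouth-shape (viseme) schedule.
# The phoneme timings and the phoneme-to-viseme table are hypothetical.

FRAME_RATE = 24  # frames per second, a common animation rate

# (phoneme, start_time_seconds, end_time_seconds) -- assumed example data
PHONEME_TRACK = [
    ("HH", 0.00, 0.08),
    ("EH", 0.08, 0.20),
    ("L",  0.20, 0.28),
    ("OW", 0.28, 0.50),
]

# Very coarse phoneme-to-viseme mapping (real productions use richer sets).
VISEMES = {
    "HH": "rest",
    "EH": "open",
    "L":  "tongue_up",
    "OW": "round",
}

def breakdown(track, frame_rate=FRAME_RATE):
    """Return a list of (frame_number, viseme) covering the spoken span."""
    end = max(stop for _, _, stop in track)
    total_frames = int(round(end * frame_rate))
    frames = []
    for frame in range(total_frames + 1):
        t = frame / frame_rate
        viseme = "rest"  # default mouth shape between phonemes
        for phoneme, start, stop in track:
            if start <= t < stop:
                viseme = VISEMES.get(phoneme, "rest")
                break
        frames.append((frame, viseme))
    return frames

if __name__ == "__main__":
    for frame, viseme in breakdown(PHONEME_TRACK):
        print(f"frame {frame:3d}: {viseme}")
```

An animator (or an automated rig) would then pose the character's mouth according to the viseme listed for each frame.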

Language dubbing

Quality film dubbing requires that the dialogue first be translated in such a way that the words used can match the lip movements of the actor. This is often hard to achieve if the translation is to stay true to the original dialogue. Elaborate lip-synched dubbing is also a lengthy and expensive process. The simpler, non-phonetic representation of mouth movement in much anime makes this process easier.

In English-speaking countries, many foreign TV series (especially anime like Pokémon) are dubbed for television broadcast. However, cinematic releases of films tend to come with subtitles instead. The same is true of countries in which the local language is not spoken widely enough to make the expensive dubbing commercially viable (in other words, there is not enough market for it).

However, most non-English-speaking countries with a large enough population dub all foreign films into their national language cinematic release. In such countries, people are accustomed to dubbed films, so less than optimal matches between the lip movements and the voice are not generally noticed[citation needed]. Dubbing is preferred by some because it allows the viewer to focus on the on-screen action, without reading the subtitles.

In video games

Early video games did not use any voice sounds, owing to technical limitations. In the 1970s and early 1980s, most video games used simple electronic sounds such as bleeps and simulated explosions. At most, these games featured some generic jaw or mouth movement to convey a communication process in addition to text. However, as games became more advanced in the 1990s and 2000s, lip sync and voice acting became a major focus of many games.

Role-playing games

Lip sync was for some time a minor focus in role-playing video games. Because of the amount of information conveyed through the game, the majority of communication uses scrolling text. Older RPGs rely solely on text, using inanimate portraits to provide a sense of who is speaking. Some games make use of voice acting, such as Grandia II or Diablo, but due to simple character models there is no mouth movement to simulate speech. RPGs for hand-held systems are still largely based on text, with the rare use of lip sync and voice files reserved for full motion video cutscenes. Newer RPGs have extensive audio dialogue. The Neverwinter Nights series is an example of a transitional game in which important dialogue and cutscenes are fully voiced, but less important information is still conveyed in text. In games such as Jade Empire and Knights of the Old Republic, developers created partial artificial languages to give the impression of full voice acting without having to voice all of the dialogue.

Strategy games

Unlike RPGs, strategy video games make extensive use of sound files to create an immersive battle environment. Most games simply play a recorded audio track on cue, with some providing inanimate portraits to accompany the respective voice. StarCraft used full motion video character portraits with several generic speaking animations that did not synchronize with the lines spoken in the game. The game did, however, make extensive use of recorded speech to convey the plot, with the speaking animations giving a good idea of the flow of the conversation. Warcraft III used fully rendered 3D models to animate speech with generic mouth movements, both as character portraits and as in-game units. Like the FMV portraits, the 3D models did not synchronize with the actual spoken text, and in-game models tended to simulate speech by moving their heads and arms rather than using actual lip synchronization. Similarly, the game Codename Panzers uses camera angles and hand movements to simulate speech, as the characters have no actual mouth movement. StarCraft II, however, used fully synced unit portraits and cinematic sequences.

First-person shooters

The first-person shooter is a genre that generally places much more emphasis on graphical display, mainly because the camera is almost always very close to the character models. Because increasingly detailed character models require animation, FPS developers assign many resources to creating realistic lip synchronization for the many lines of speech used in most FPS games. Early 3D models used basic up-and-down jaw movements to simulate speech. As technology progressed, mouth movements began to closely resemble real human speech. Medal of Honor: Frontline dedicated a development team to lip sync alone, producing the most accurate lip synchronization in games at that time. Since then, games like Medal of Honor: Pacific Assault and Half-Life 2 have used code that dynamically generates mouth movements from the recorded dialogue, as if the lines were being spoken by a live person, resulting in strikingly lifelike characters. Gamers who create their own videos using character models with no lip movements, such as the helmeted Master Chief from Halo, improvise lip movements by moving the characters' arms and bodies and making a bobbing movement with the head (see Red vs. Blue).
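One simple way such systems can approximate mouth movement is to drive the jaw from the short-term loudness of the dialogue audio. The Python sketch below is an assumed, simplified illustration of that idea, not the actual code of any of the games named above; the sample rate, frame rate, and scaling factor are example values.

```python
# Simplified amplitude-driven jaw animation, in the spirit of the dynamic
# mouth-movement systems described above. This is an illustrative sketch,
# not the code used by any particular game engine.
import numpy as np

SAMPLE_RATE = 44_100   # audio samples per second (assumed)
FRAME_RATE = 30        # animation frames per second (assumed)

def jaw_openness(dialogue_samples, sample_rate=SAMPLE_RATE, frame_rate=FRAME_RATE):
    """Map each animation frame to a jaw-open amount in [0.0, 1.0]
    based on the RMS loudness of the corresponding audio window."""
    samples_per_frame = sample_rate // frame_rate
    openness = []
    for start in range(0, len(dialogue_samples), samples_per_frame):
        window = dialogue_samples[start:start + samples_per_frame]
        rms = np.sqrt(np.mean(np.square(window))) if len(window) else 0.0
        openness.append(min(1.0, rms * 4.0))  # scale factor is arbitrary
    return openness

if __name__ == "__main__":
    # One second of fake "speech": a tone whose volume rises and falls.
    t = np.linspace(0, 1, SAMPLE_RATE, endpoint=False)
    fake_speech = np.sin(2 * np.pi * 150 * t) * np.abs(np.sin(2 * np.pi * 3 * t)) * 0.25
    for frame, amount in enumerate(jaw_openness(fake_speech)):
        print(f"frame {frame:2d}: jaw open {amount:.2f}")
```

Production systems go further, mapping detected phonemes to distinct mouth shapes rather than a single jaw-open value.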

Television transmission synchronization


An example of a lip synchronization problem, also known as a lip sync error, is the case in which television video and audio signals are transported via different facilities (e.g., a geosynchronous satellite radio link and a landline) that have significantly different delay times. In such cases it is necessary to delay the earlier of the two signals electronically.
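As a toy illustration of delaying the earlier of the two signals, the Python sketch below computes how much compensating delay to insert into each path so that both arrive with the same total latency; the delay figures are assumed for the example.

```python
# Toy model of lip-sync correction in a broadcast chain: the faster of the
# two transport paths is delayed so audio and video arrive together.
# The delay figures are illustrative, not measurements.

VIDEO_PATH_DELAY_MS = 280.0  # e.g. video routed via a satellite hop
AUDIO_PATH_DELAY_MS = 40.0   # e.g. audio carried on a terrestrial landline

def compensating_delays(video_delay_ms, audio_delay_ms):
    """Return (extra_video_delay, extra_audio_delay) in milliseconds so that
    both signals experience the same total delay."""
    target = max(video_delay_ms, audio_delay_ms)
    return target - video_delay_ms, target - audio_delay_ms

if __name__ == "__main__":
    extra_video, extra_audio = compensating_delays(VIDEO_PATH_DELAY_MS, AUDIO_PATH_DELAY_MS)
    print(f"Add {extra_video:.0f} ms of video delay and {extra_audio:.0f} ms of audio delay")
    # With the figures above, the audio is held back by 240 ms so it lines up
    # with the slower video path.
```

In practice the compensating delay is applied by dedicated broadcast delay equipment rather than computed in application code.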

Lip sync issues have become a serious problem for the television industry worldwide. Lip sync problems are not only annoying, but can lead to subconscious viewer stress, which in turn leads to viewer dislike of the television program being watched.[17] Television industry standards organizations have become involved in setting standards for lip sync errors.[18]

Miming

The miming of the playing of a musical instrument is the equivalent of lip-synching.[according to whom?] A notable example of miming was the John Williams piece performed at President Obama's inauguration, which had been recorded two days earlier and was mimed by musicians Yo-Yo Ma and Itzhak Perlman. The musicians wore earpieces to hear the playback.[19]

See also

References

  1. Ian Inglis, Performance and Popular Music: History, Place and Time (2006).
  2. Jon Pareles, "Pop View; That Synching Feeling", The New York Times, April 9, 1989.
  3. Chris Nelson, The New York Times.
  4. Edna Gundersen, USA Today.
  5. Michael MacCambridge, Austin American-Statesman.
  6.
  7. CTV News, August 2008.
  8. ABC News Australia, 2009.
  9. Reuters, 2009.
  10. Rolling Stone, 2009.
  11. Luchina Fisher and Sheila Marikar, "Hudson's Super Bowl Lip-Sync No Surprise to Insiders", ABC News, February 3, 2009. http://abcnews.go.com/Entertainment/WinterConcert/story?id=6788924&page=1
  12.
  13.
  14.
  15. Mark Kermode, The Observer.
  16. Debbie Reynolds.
  17. Reeves and Voelker, "Effects of Audio-Video Asynchrony on Viewer's Memory, Evaluation of Content and Detection Ability".
  18.
  19.