Understanding the nostalgia and childish playfulness of the Sound Design of Star Wars

The sound re-design of a scene from Star Wars requires an understanding of the aesthetic, cultural and technological implications involved. Hence, this first chapter tackles the aesthetic concerns of the sound design of the original film. Subsequently, there is an overview of the cultural issues involved, owing to the popularity of the film. This should make it easier to understand my re-design's aesthetic aims and connotations. Finally, it is underlined how nostalgic and ludic practices characterise Star Wars filmmaking. Therefore, it is argued that they were inevitable features of my re-design.

PART 1 The nostalgia of nostalgic films and nostalgic sounds

1.1 Understanding the film’s sound making practice

The issues of film sound design practice in Star Wars are not just relevant for historical understanding; they also help to explain today's aesthetic perception of film and film sound. In fact, Star Wars had a historic impact on film and sound design aesthetics [Chion 2009, Sonnenschein 2001]. Indeed, it was revolutionary for the sci-fi genre, for its use of technology and for its narrative audio-visual aesthetics [Whittington 2007]. However, what I want to focus on is the figure of the sound designer Ben Burtt. George Lucas, the director of the film, asked him to participate in the filmmaking from the very beginning, creating sounds that did not refer to those actually recorded on set. Moreover, he was given the aim of exploring new ways of making sounds that would be aesthetically different from what audiences usually heard in sci-fi films of that time [Rinzler and Burtt 2010]. In fact, the film shows a new way of making sci-fi, which can be defined as hyper-realistic [Chion 2009] and organic [Whittington 2007]. Hyper-realism refers to the fact that characters and objects in the film make sound even when it is not necessary, and even when they shouldn't make any. Whittington's [Whittington 2007] organic label, instead, refers to the fact that previous sci-fi films usually used sounds that conveyed the idea of something unnatural and alien. These sounds were usually produced with synthesizers, which could make sounds with gestures and timbres far from the usual, but which were often perceived as excessive. The organic aesthetic, instead, goes in the other direction, by trying to make sounds that are unusual but whose gestural character recalls sounds from the real world. In fact, sound designers and critics still believe today that this attitude to sound-sculpting associates better with visual gestures and live characters [Thom 2001]. Moreover, it gives a more unique identity to the sounds and to the overall film. However, I want to argue that recent literature, and also my design, show that this organic definition should be partially revisited.

1.2 Understanding the organic aesthetic of Star Wars

When Burtt chose to look for a new way of making sci-fi sounds, he analysed the sound designs of past films. In particular, he states that he used as terms of comparison and inspiration the sound designs of Forbidden Planet (Wilcox 1956), Planet of the Apes (Schaffner 1968), King Kong (Cooper and Schoedsack 1933) and THX 1138 (Lucas 1971). He actually felt the need to pay homage to these films, which he had loved since he was a child. In fact, they were one of the reasons he had decided to practise sound design in the first place [Rinzler and Burtt 2010, Milani et al 2012b]. In a few of those sci-fi films, sounds were produced with technological devices designed for musical purposes, such as synthesizers. In addition, these films often used sounds from sound libraries, which has the effect of referencing other films of a similar genre and of creating clichés. These can be perceived by the audience as banal but, at the same time, as accommodating. This is very important in the sci-fi genre, because an excess of sonic and visual information designed in an unusual manner can disorientate the audience, who would understand very little of what is shown [Rinzler and Burtt 2010]. Burtt gave much importance to this issue. In fact, Burtt used sound libraries and synthetic sounds. However, we must not take this as a scandalous discovery. First of all, he used sounds from libraries to reference films people already knew. This way he could be sure that the audience would not feel too disorientated by an excess of novelty. More importantly, Burtt used synthetic sounds above all to give sound to technological machines and robots like R2D2. Nevertheless, the nature of sounds like those of R2D2 can be defined as organic, since the timbre, pitch and dynamic envelope of such sounds were controlled by a voice filtered through an envelope follower [Rinzler and Burtt 2010]. This technique implies a performance practice. Thus, one should perceive not only the synthetic nature of such sounds, but also their performed character [Poepel 2005]. In addition, these sounds are pitch-biased and so invite the listener to focus on how the pitch evolves [Eun-Sook 2010]. This implies that such sounds could contrast or blend with the musical soundtrack by John Williams. However, Burtt states that by having control over the overall mix he could decide when to foreground the sound design or the soundtrack and, when possible, blend them effectively [Rinzler and Burtt 2010]. This brings me to the scene I decided to re-design for the film, which depicts R2D2 walking down the Tatooine canyon and then being attacked by the Jawa people.
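The envelope-follower idea can be illustrated in a few lines of signal processing. The sketch below is only a minimal illustration of the general technique, not Burtt's actual signal chain: a one-pole follower tracks the amplitude of a "voice" signal (here a synthetic decaying tone standing in for a vocal gesture) and imposes that envelope on a synthesizer tone, so the synthetic sound inherits the performed dynamics of the voice. All function names and parameter values are my own choices.

```python
import numpy as np

def envelope_follower(signal, sr, attack_ms=10.0, release_ms=100.0):
    """Track the amplitude envelope of a signal with a one-pole follower."""
    attack = np.exp(-1.0 / (sr * attack_ms / 1000.0))
    release = np.exp(-1.0 / (sr * release_ms / 1000.0))
    env = np.zeros_like(signal)
    level = 0.0
    for i, x in enumerate(np.abs(signal)):
        # Rise quickly when the input is louder than the current level,
        # fall slowly when it is quieter.
        coeff = attack if x > level else release
        level = coeff * level + (1.0 - coeff) * x
        env[i] = level
    return env

def voice_controlled_synth(voice, sr, freq=440.0):
    """Impose the voice's amplitude envelope onto a synthetic oscillator."""
    t = np.arange(len(voice)) / sr
    carrier = np.sin(2 * np.pi * freq * t)  # plain synth tone
    return carrier * envelope_follower(voice, sr)

# Demo: a decaying tone stands in for a short vocal utterance.
sr = 8000
t = np.arange(sr) / sr
voice = np.sin(2 * np.pi * 200 * t) * np.exp(-5 * t)
out = voice_controlled_synth(voice, sr)
```

The point of the sketch is that the synthesizer's output is no longer static: its dynamics are literally performed by the voice, which is what gives such sounds their "organic", gestural character.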

This scene has no dialogue and, therefore, does not require the performance of actors' dialogue, except for the Jawa speech. Moreover, it does not have John Williams's soundtrack. Interestingly, Burtt states that this scene was originally intended to have John Williams's music, but that he convinced Lucas to remove it [Rinzler and Burtt 2010]. Hence, this scene is sonically composed only of sound effects, foley and manipulated speech. Moreover, this scene focuses on the character of R2D2 and shows how synthetic sounds can convey the psychological state of a robotic character and create dramatic tension [Eun-Sook 2010, Rinzler and Burtt 2010]. Furthermore, this scene does not have as many rich sound layers as other scenes in the film. Even more, it has a large variety of dynamics. Consequently, it shows how a sound designer can play with nuances and variations. In addition, it shows how sounds can achieve a musical connotation and be organised using typically musical strategies. Fortunately, Burtt describes in detail how he made these sounds, and he underlines how he actually experimented and "played" with the technologies and sounds available to him [Rinzler and Burtt 2010]. The verb "play" here refers simultaneously to ludic, performative and musical connotations. For example, in the chosen scene, the pitch of R2D2's motors changes and is linked to the video cuts.

What we can deduce from this is that Lucas and Burtt did not use sounds only for their descriptive nature, but also for emotional suggestion and audio-visual phrasing [Chion 2009]. Thus, I think any re-interpretation of Burtt's work should take into account his use of sound's musicality, dramatic tension, nostalgia, referentialism, experimentalism and playfulness. To better understand the musical strategies used by Burtt, I created a score which attributes a symbol to each sound typology. The score thus shows how the sounds are organised in terms of rhythm, layering and, in a few cases, pitch.

Coming back to more general aesthetic concerns, another important issue to take into account is that the perception of technological imprinting depends on the period in which the film and its sounds are experienced [Brown et al. 2003]. Hence, I believe that, nowadays, the organic aesthetic also refers to the fact that these sounds have a historical liveliness. This is due to the technological imprinting, performance and cultural popularity values that sound quality and sound detail inevitably carry with them [Williams 2012].

In Part 2, given these first aesthetic considerations, I will underline the cultural value of Star Wars, which can be inferred from its many official and unofficial re-designs and re-makes.


The film Dancer in the Dark, directed and written by Lars Von Trier with sound design by Per Streit (Dancer in the Dark, IMDb), is certainly an interesting film to analyse from a sound-design perspective. The protagonist is blind, and this modifies what can be considered diegetically significant. In fact, what we see is not important for understanding what the character feels (Grimley 2005). Accordingly, in the chosen scene (from 1:57:22 to 2:02:38) one can easily notice the abundance of off-screen sounds, as on-screen and off-screen make no actual difference to the protagonist. This time range was chosen to show the difference between the scene itself, a bit of the previous scene and a bit of the following scene.

See the scene

To better analyse this "acousmatic" condition, I thought it necessary to put myself in a similar state. Hence, after viewing the scene for the first time, I listened to the soundtrack alone, without watching the images. I then notated all the sounds in the software Acousmographe, underlining what I thought were the most significant sonic aspects. This way, I could not refer to the relation with the image and, above all, I was not too influenced by it. Of all the possible audio parameters one could choose from, I focused on: the descriptive sound qualities of the background (buzzes, hisses, reverberation, changes of POA, etc.); the different uses of the characters' voices (speech, crying, gasps, songs, phone mediation); and the types of sound gesture of foreground sounds (rattle, hit, thud, etc.) rather than the supposed sound source. Afterwards, however, I watched the scene again in a conventional way, to understand the role of the image in relation to this acousmatic interpretation. Thus, I chose to notate a few aspects of the image in a similar way, hoping to find significant patterns in the audio-image relations: the sound visibility (on-screen or off-screen); the concordance between video and audio-background cuts (in counterpoint or synchronous); and the state of the protagonist's eyes (closed, covered, open, half-open), because of the symbolic meaning eyes have for a blind person (Chion 2009). This way, I designed a listening score of the scene.

See the Acousmographic analysis

The final aim of the analysis, however, is to take advantage of what Von Trier tells us about
his aesthetic (Stevenson 2009) (G. Smith 2000). Therefore, the analysis should verify the
aesthetic coherence between his theoretical principles and his filmmaking practice.


Lars Von Trier is an award-winning director and writer of contemporary Danish and European cinema (2012). The film DITD is the last of what he defines as his "Golden Heart Trilogy": films characterized by plots in which very peculiar female protagonists show their humanity and devotion to their beloved in difficult social, physical and psychological situations. Even if these stories have very unhappy conclusions, he states that there is always an optimistic view behind it all (Smith 2000). These three films were made after he participated in the writing of an aesthetic manifesto, Dogme 95, which states how the filmmakers who follow it should conduct filmmaking. Even if this film does not follow these rules, Von Trier admits that he still has a similar attitude in film direction, but in a less orthodox way (Smith 2000). The film-making aspects of this film which Von Trier himself declares, and which I think are useful for our analysis, are the jumpy, discontinuous cutting of both audio and video, used to show explicitly that the filmmaking practice is manipulative (Smith and Henderson 2008). This way, he goes against classical Hollywood and modern American aesthetics (Doane 1985) (Ganti 2012). Furthermore, he says he uses audio-video editing to give an idea of time changing speed. Finally, he thinks it is important not to control the performance too much, in order to keep some of the honesty of acting improvisation (Smith 2000).


Von Trier confirms that this film is characterized by a contrast between the musical scenes, in which we see what Selma, the protagonist, is imagining, and the documentary-like realistic scenes. Aesthetically, there is a contrast between the fixed cameras of the musical scenes and the moving cameras of the non-musical scenes, which are extensions of Selma's psychological state. In addition, the post-processed realistic images are unlike the colourful musical scenes (Stevenson 2009). Even more, the mono, lo-fi audio recordings are noticeably distinct from the highly processed surround musical songs (Kerins 2010). Among the scenes before the chosen one, I think there is one important dialogue scene which can help in understanding the scene I am analysing. Just a few scenes earlier, Selma describes how alienated she feels due to the lack of liveliness in the prison cells' soundscapes: she has no rhythmical sounds to help her daydream. Thereby, apparently irrelevant sounds, and the need to flee from reality, might be fundamental to the understanding of the analysed scene.


To give an understanding of the structure of the design, I will make a detailed description
of the various layers, using the information obtained from the acousmo-graphical analysis.
There are five major layers:


Introduction: monophonic light background, which bursts into a reverberant room space.
Prison Walk: a room with many reflecting surfaces. The sound moves around similar rooms, each with its own hums, hisses and reverberation.
Buzzes: static hums.
Distant Step Sequence: unusual changes in reverberation; the PsOA change as well. Occasionally there are hisses or hums.
Reverb Turns into Music: the background slowly turns into a more abstract space, conveyed by the use of meta-diegetic (Milicevic 2012) orchestral music and by the use of very few, but manipulated, sounds in a surround rather than monophonic mix.
Overall, we can notice that the background changes drastically at the audio-video cuts. In addition, there is a variable use of both natural and artificial reverberation, according to the part of the scene.


Introduction: mediated communication in which Geoff manages to confess his love for Selma, who consequently cries.
First Conversation: indications given by Brenda and a few breath sounds by Selma.
"Your Meal Jezcova": no vocal sounds. We hear the prison officers tell her about her last meal; Selma does not reply.
Second Step Sequence: breathing gets heavier and heavier, with gasps, and then crying. The vocal level becomes predominant after the other prison officer says, "it's time". Consequently, the breathing sounds become more frequent.
"You can do it Selma": Selma's dialogue with Brenda, who encourages her.
Reverb To Music: Brenda counting and then singing the number of the steps they make together.
Overall, we can notice two aspects. In most sequences the speech is acted in a "cold" manner, but at the beginning and at the end it is performed with warmth and anxiety, due also to what is being said. Moreover, there is an evident alternation of on-screen and off-screen vocal sounds, with a sort of slow-reacting off-sync.


The most common concrete sound is the step, but it is never heard the same way twice, as it changes according to the room in which the characters are walking (rigid, soft, reverberant, squeaky, etc.). In addition, both the rhythmical and the distance aspects of the steps are very variable. There is definitely a predominance of off-screen concrete sounds (46 on-screen against 229 off-screen). It is difficult to say whether there are recurring patterns in the sound organization. However, there is definitely an important repetition of the "many steps, metal rattles with door thuds" sequence, first at 1:20-2:00 and then at 2:46-3:25.
The other interesting aspect is that in most moments there seems to be no pause between sounds, as if the designer wanted to give the listener something to refer to continuously. The only long pauses occur when the buzzes are in the background.


There is certainly something behind the use of her eyes, because Selma, being blind, should not need to open them at all. However, Von Trier tells us that the performance was partly based on improvisation (Smith G. 2000). Therefore, the eye movements were probably not planned literally. Nonetheless, the sound design might have been related to what the chosen performance ended up being. It is interesting to notice that she opens her eyes again very quickly after closing them at 2:00.

0:00 – 1:30 Closed or covered
1:30 – 2:00 Open
2:00 – 2:02 Closing
2:02 – 3:40 Open (rest of the face is covered)
3:40 – 4:05 Partly closed or covered by hair
4:05 – 4:55 Closed (because she is crying)
4:55 to end Wide open


TIME (position in the scene)    AVERAGE SHOT DURATION (seconds)
0:00 – 1:20      10
1:20 – 1:40       5
1:40 – 3:35      10
3:35 – 4:25       5
4:25 – 4:40       2
4:40 – 4:53      13
4:53 – 5:01       2
5:01 – to end    4

As we can see from the table, between 4:25 and 4:40 and then between 4:53 and 5:01 the pace is fast. This is accentuated by the cuts, which are very drastic and tend to coincide with changes in the sonic background rather than with foreground sounds. In addition, the cuts often show the same character but change the shot angle and distance. This makes the cuts feel harsh (Ganti 2012).


The graphical score would seem to show a very dense design, which, actually, is not that confusing, because the overall volumes are very low and are carefully layered in terms of spatial depth perception. The overall feeling is more that of a counterpoint (Chion 2009), because many audio-audio and audio-visual associations interact in many ways. In fact, this scene involves the audience with many modes of listening at the same time (Tuuri K. et al 2007).
The blindness forces a reduced and causal mode. Speech induces semantic listening.
Abrupt cuts and unexpected sounds cause reflexive listening.
The repetition of the door slamming stimulates connotative-associative listening. In fact, when we hear the sequence of thuds and steps for the second time, we understand that it means the prison officers are coming back for her. This is why she starts sobbing before they actually enter the room.
The focus on the background buzzes and the change from monophonic to surround compels empathetic listening.
Notably, the designer does not impose many modes simultaneously, but rather pushes only a few at a time. For example, there is nearly no speech up to the end of the scene, and there are no concrete foreground sounds during the buzzes. The consequence is a good balance between a rational, conscious understanding of the scene and a more irrational, unconscious reception.


All these strategies are probably used to suggest a very intimate and close relationship between the protagonist and the audience. The use of audio-visual elements concentrates on physical and psychological proximity, to convey the protagonist's psychophysical condition. In fact, her blindness is fundamental to understanding how the whole scene should be perceived, as it obliges the audience to "use their ears" (Thom 1999) (Grimley 2005). For example, the viewer can easily understand the sequence of concrete sounds (steps and door thuds) when we hear it for the second time: we understand that Selma knows they are coming back for her. This makes us understand the anxiety she expresses with all her gasps and heavy breathing. This works because the voice, even without words, can easily express the psychological states of film characters (Sonnenschein 2001). However, as these sequences also show, the empathy in this scene is not conveyed simply by using the POA and POV of the protagonist, but by varying them and giving them their own "life". Chion notices that in the cinema of the 1990s there was a tendency to use camera movement and POA as if an external character were always watching the scene. The camera movements and virtual microphone angles are, in fact, independent of the protagonist's movements, but are always somewhat close to her. This life-giving to an external observer puts the viewer in the position of exploring the protagonist rather than creating complete empathy with her (Chion 2009).


However, the peculiarity of the protagonist puts stress on sounds which might seem narratively less important: in fact, the background buzzes are emotionally the more important ones.
Therefore, I think it is important to observe the background sounds. Throughout the scene there is a persistent tape-noise-like background (Grimley 2005). This is probably used to establish a noise floor and thus to create a contrast with the "pure silence" of the final scene of the film. In fact, when she is executed in the final scene, we hear the reverb of her fall and then only "digital silence", which sonically conveys the idea of "death" to the viewer (Chion 2009).


Another design technique used to convey her psychological state, with similar aims, is the time pace of the scene. This is conveyed both by the speeding up and slowing down of the audio-visual cuts and by the rhythmical disposition of sounds. The most important "phrasing" element is the step, heard throughout the scene and shown only as the scene ends. The steps, and above all the first of the "107 steps", are what really help the understanding of the passing of time. In fact, time is metronomically described by the lifespan of each step sound and by the pauses between them. This is confirmed by the fact that Selma uses these sounds to help her daydream and, so, to forget her sorrows and her destiny. This is why, when she is left by herself in the last-meal cell, we hear only synthetic buzzes. The absence of rhythm in the buzzes makes it impossible for her to daydream, as she said in a "previous scene". In addition, the absence of rhythm takes away all cues as to how much time is passing (Shatkin 2012). In fact, it seems that not much time goes by between the various moments in which she interacts with the prison officers.
However, the phrasing is also conveyed by the emphasis of vocal sounds, both in rhythm and in performance, and by the use of very variable reverberation, which gives an unusual spatial perception and, consequently, a psychological message through each sound (Gilbert 2011). This technique, accentuated also by the final use of surround, is used because Selma has to take only 107 steps before she reaches her execution cell. Hence, the unrealistic nature of the steps stresses her impending death and her attempt to escape reality.
This resembles in many ways the "music of destiny" technique, typically used to anticipate a death scene (Chion 2009). This becomes more obvious when the counting actually becomes the reason why she starts imagining a musical number: she has finally found a rhythmical sound to let her mentally escape. Even if this sound stands for her death, her optimistic attitude towards life makes her give an ideal meaning to the steps, as Von Trier mentions (Stevenson 2009).


The acousmo-graphic analysis methodology was very useful for understanding the audio-visual organization of the scene. Thus, it helped in understanding the signifying role that the sound first, then the image, and finally their interaction have for the narrative. In fact, as anticipated, it was easier to recognize audio-visual patterns on music-like scores, because they oblige critical listening (Tuuri K. et al 2007), which helps to notice details that might otherwise be perceived only unconsciously.
The observations highlight that the use of off-screen sounds, together with other audio-visual strategies, is fundamental to the aesthetic and metaphorical value of this scene. In detail, the stress on background sounds, the audio-visual phrasing and the contrast between POA and POV effectively force the viewer to relate to the protagonist's psychophysical condition, as Von Trier intended.
The analysis could have pointed out even more by looking more deeply into the use of the eyes and the unusual audio-visual syncing; however, this analysis technique did not unearth any significant patterns there.


Chion, Michel. Film, A Sound Art. New York: Columbia University Press, 2009.
Dancer in the Dark. IMDb webpage. (accessed February 25,
Doane, Mary Ann. "Ideology and the Practice of Sound Editing and Mixing." In Film Sound: Theory and Practice, by Belton J., 54-62. New York: Columbia University Press, 1985.
Ganti, Kiran. In Conversation with Walter Murch. murch.htm (accessed February 27, 2012).
Gilbert, Gabriel. Altered States Altered Sounds: An Investigation of How 'Subjective States' Are Signified by the Soundtrack in Narrative Fiction Cinema. Cardiff: Centre for Language and Communication Research, Cardiff University, 2011.
Grimley, Daniel M. Hidden Places: Hyper-realism in Björk's Vespertine and Dancer in the Dark. Cambridge: Cambridge University Press, 2005.
Kerins, M. Beyond Dolby (Stereo): Cinema in the Digital Sound Age. Bloomington: Indiana University Press, 2010.
Shatkin, Elina. Randy Thom, Sound Designer, 'What Lies Beneath'. (accessed February 27, 2012).
Smith, Gavin. "Dance in the Dark." In Lars von Trier: Interviews, by Jan Lumholdt, 144-152. Jackson: University Press of Mississippi, 2000.
Smith, Tim J., and John M. Henderson. "Edit Blindness: The Relationship between Attention and Global Change Blindness in Dynamic Scenes." Journal of Eye Movement Research, 2008.
Sonnenschein, D. Sound Design: The Expressive Power of Music, Voice and Sound Effects in Cinema. Michael Wiese Productions, 2001.
Stevenson, Jack. "Dancer in the Dark." In Lars Von Trier, by J. Stevenson. London: Palgrave Macmillan, 2009.
Thom, Randy. "Designing a Movie For Sound", 1999. (accessed February 27, 2012).
Tuuri, Kai, Manne-Sakari Mustonen, and Antti Pirhonen. "Same Sound – Different Meanings: A Novel Scheme for Modes of Listening." 2nd Conference of Interaction with Sound. Jyväskylä, Finland: Audio Mostly, 2007.
Zattra, Laura. "Analysis and Analyses of Electroacoustic Music." Sound Music Computing. Salerno,

The mobile Telephemes. An essay on the use of mobile phone calls in contemporary cinema

The following essay is a brief walkthrough of why mobile phone call scenes are so popular in contemporary western cinema. To give an overall understanding of the issues involved, the essay starts with an introduction to traditional telephemes and the social implications of mobile communication. Afterwards, the paper outlines a theory according to which gossip practice and mobile gossip strongly engage contemporary film viewers.

Literature shows that telephone call scenes have always been important in contemporary western popular cinema. For this reason, the essay begins with a brief introduction to popular cinema rhetoric, to try to comprehend, above all, how filmmakers have experimented with traditional phone call scenes. In fact, these kinds of scene are very effective for playing with diegesis and the image-sound relation. In addition, the phone has often been used as an acousmêtre. However, this medium within the medium gains further implications once communication becomes mobile. Cell phones have made it possible for film characters to contact others anywhere and at any time. Hence, they can change the perception of everyday lifestyle. Consequently, the paper underlines how mobiles have increased, or at least made more evident, the practice of gossip among common users. In fact, it seems that filmmakers have become increasingly interested in showing gossip and mobile gossip practice among teenagers and women. To give more concreteness to these ideas, the footnotes of the essay include explanations of film scene examples which sustain the theoretical issues. The final aim is to show that gossip, mobiles and films are becoming increasingly interconnected. As a matter of fact, media within media feed off each other.

If you want to read the full article, please download the following PDF:

The Mobile Telephemes_Pdf

An introductory look at major Sound Designers and how they think about sound

I will now write a few words about some very famous sound designers who, in interviews, articles and books, have explained interesting aspects of their workflow and of the way they think about sound for film. You will see how each designer works differently, and how each aesthetic choice and concern makes a big difference in the resulting design. However, as the videos clearly point out, the aesthetic of each sound designer has also changed over time, and what I write here is just a summary of what had been written about them at the time they were interviewed.

Jimmy MacDonald was famous for having designed many Disney animations and Disney films such as The Black Hole (Nelson, 1979). He used foley and did not love production sound, as he preferred to perform voices himself. When making foley sounds, he often used sounds unrelated to the source he was supposed to sync, because he understood that in most cases the sound gesture matters more: it gives a more unique feeling, a more effective context, and greater dramatic impact.

Frank Serafine was a very experimental sound designer who used unusual sounds, taking advantage of what technology could give him. For example, in Tron (Lisberger, 1982) he wanted to recreate the sounds of video games, and so tried to use synthetic sounds made with synthesizers as much as possible. The sounds created with samplers and synthesizers were very unrealistic and lacked the typical gestures of real-life sounds. He also believed in audio-visual synaesthesia, so he tried out various associations: matching the pitch of the sounds to the camera, associating panoramic frames with distant sounds with noticeable reverb, or matching colours to timbre. For example, yellow visual elements were sonified with sharp sounds, red ones with resonant and warm sounds, and so on.

We then have two major sound designers who have been awarded several times for their work and who have changed the way all other sound designers work. They are Ben Burtt, and Gary Rydstrom.

Burtt became one of the most famous sound designers thanks to Star Wars: A New Hope (Lucas, 1977), one of the first science fiction films to largely avoid synthetic sounds; even when he did use them, he tried to give them a worldly, concrete feeling by using technologies like the envelope follower.

Gary Rydstrom is acclaimed for his skill in using sounds which are unrelated to what you see on screen, always seeking expressiveness and character rather than realism, as in Terminator 2 (Cameron, 1991).

Both make use of many sound manipulations, and both believe the sound designer should work side by side with the music composer, so that sounds and music do not interfere with each other. Both try to make sounds with very characteristic and unique gestures, so that even if there is no correspondence to the source being sonified, the result is still very plausible and yet even more engaging. Both design sounds not just by manipulating one at a time, but also by overlaying them, so as to take advantage of the expressiveness of many different sounds put together. For example, the alien in E.T. (Spielberg, 1982) was achieved by combining the sounds of many animals with elderly ladies' utterances. They also give their sounds gesture in space, to make the sound perception more engaging and vivid. Having said all this, they both believe that a sound on its own is meaningless if you do not also bear in mind the whole. For them, this does not mean that sounds have to follow the same aesthetic or taste but, on the contrary, that they should differ as much as possible, so as not to be confused with other sounds in the same film. This way, the identification process becomes very effective. They also believe in the use of rhetoric and recurring standard techniques in films, some of which have become so popular that they are now common clichés in film sound practice. For example, Burtt made the Wilhelm Scream and the silence-before-the-explosion technique very popular among sound designers.

Walter Murch is famous for bringing awareness to sound design thanks to the films he made with George Lucas and Francis Ford Coppola, such as THX 1138 (Lucas, 1971), American Graffiti (Lucas, 1973) and Apocalypse Now (Coppola, 1979). The main feature of Murch's style is that he is an audio and picture editor, so his aesthetics focus on editing and on making sound and image work together. Specifically, he believed that editing should not unravel too quickly and that the storytelling should be carried out by suggesting the emotional states connected to the plot; his editing therefore plays with ambiguity to keep the viewer engaged and attentive. He also believed that cuts had to be used coherently with the dramatic flow of the plot to suggest the right tension and expectations, and that an adequate flow could make the viewer empathize with the film. This places his editing very far from the classical style of the 40s. He also likes playing with silence, which he finds an effective way to suggest death or dramatic unsettlement. He is not concerned only with time, however, but also with layering: he holds that no more than three stimuli should be overlaid, because that is the maximum number of elements that can be followed at a time, and even then only briefly; otherwise the viewer struggles to understand what to pay attention to. In addition, he reminds us that images already shape our elaboration of sound, a process that matters in film-viewing because it makes the viewer more active in imagining, and so more empathetic. Another reason he is renowned is that he introduced the worldizing technique, having understood that post-produced sounds and music could be given a more contextualized feeling if re-recorded in an environment similar to the one seen in the image.
For example, in American Graffiti (Lucas, 1973), the songs take on different acoustic characteristics depending on whether the music in the scene comes from a radio or from a concert hall.
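Worldizing itself is a physical process (playing a sound back in a real space and re-recording it through that space's acoustics), but a rough digital analogue is convolution with a room impulse response, which "places" a dry studio recording in an acoustic context. A minimal sketch, using a toy hand-made impulse response rather than a measured one:

```python
def convolve(dry, ir):
    """Direct (O(n*m)) convolution of a dry signal with an impulse response.
    Each input sample triggers a scaled copy of the room's response."""
    out = [0.0] * (len(dry) + len(ir) - 1)
    for i, x in enumerate(dry):
        for j, h in enumerate(ir):
            out[i + j] += x * h
    return out

# toy impulse response: direct sound followed by a few decaying early reflections
ir = [1.0, 0.0, 0.0, 0.4, 0.0, 0.25, 0.0, 0.1]

dry = [1.0, 0.5, 0.25]      # a short, clean "studio" sound
wet = convolve(dry, ir)     # the same sound, roughly "placed" in a room
```

Real worldizing captures far more than reflections (speaker coloration, ambience, mic placement), so this is only a caricature of the idea; in practice long impulse responses would also be convolved via FFT rather than this direct loop.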

Randy Thom has an approach similar to Murch's in many ways, partly because he worked with him on several films, including Apocalypse Now (Coppola, 1979). However, he became acclaimed thanks to his collaborations with director Robert Zemeckis, for whom he designed the sound of films like Forrest Gump (Zemeckis, 1994). In these films Thom explored the induction of subjective listening, simulating or miking the sounds he had to sync so that they are perceived as coming from specific positions in space, as if you could feel the sound world the way the protagonist does. This listening procedure should induce an empathetic state with the film characters; indeed, he finds that sounds which create no connection to a character create little engagement. The only issue with such a strongly subjective approach is that it is difficult to make all the sounds in a scene be perceived this way. He therefore organizes the scene so that sounds work as Schaferian "signal" sounds: if they are meant to draw attention, they must induce the character, and consequently the viewer, to pay attention to them, otherwise they end up being negatively distracting. He is also very concerned with audiovisual expectations, which are decisive to play with if you want to create commitment in film viewers, who feel satisfaction only when they perceive continuity and logic between the different elements of the film.
With these principles in mind, his soundtracks end up being very light, because he believes excess is distracting rather than involving. His main difference from Murch is his conviction that the whole sound design process should work with the image in such a way that you never perceive the technicalities, and never quite understand which aspect of the film-making actually drives the flow of pathos.

What is sound design? And why did this term start being used?

The term sound design nowadays refers to many fields of sound creativity, ranging from media sonification to sound art. Originally, however, it referred to film-making practice. That said, it is important to bear in mind that the practice itself is as old as film sonification; what has changed over the years is the approach that the professionals involved in sound production and post-production for film decided to follow. The term first served as a synonym for supervisor of sound editing, the professional usually honoured at film awards such as the Academy Awards. "Sound designer" became popular from 1978, after Ben Burtt won his special award for sound and after Walter Murch asked to be credited as such on Apocalypse Now (Coppola, 1979).

The problem is that even in the film domain the term covers a wide range of design tasks, usually carried out by many people who then defer the final word to the supervisor. People started giving the term a more artistic connotation because the sound designer should be involved in pre-production, taking an active part in directing choices, above all when sound is critical to the film's artistic success. The term also became necessary when sound technologies grew advanced enough to require deep expertise to be used creatively. The last obvious aspect is that sound cannot be handled by one person alone, so someone is needed who has an overall understanding of what ought to be done throughout pre-production, production and post-production. A consequent issue is communication: the sound designer must be able to communicate at a detailed level both with the sound technicians and with all the other film-makers who might have issues when working close to a sound team.

All this complexity built up because sound was gaining importance: film-makers noticed that sounds could be used as cleverly as speech and music, and that all of these needed to be supervised by a single figure to achieve a coherent aesthetic. The growing interest is also certainly due to the boom in certain film genres, such as science fiction, which usually require the creation of unrealistic, unusual or unworldly soundscapes.

How sound was discovered over time: how sound aesthetics and criticism changed from the 30s to today, influencing film-making

Ever since sound started being synchronized, intellectuals like Eisenstein and Pudovkin wondered whether the sonification modes used at the time actually corresponded to real aesthetic and expressive needs. In the beginning, synchronization was seen as the only way to achieve realism and therefore engagement. The Russians, well aware of the power of the image in conveying sound elaboration too, instead suggested a new aesthetic of "asynchronism" as an alternative to realistic syncing. They believed that sound should avoid the redundancy synchronic cinema was accustomed to, and that audiovisual art should be designed as a counterpoint in which sound enriches the picture with meaning rather than simply repeating it.

A different perspective was put forward by Cavalcanti at the end of the 30s. He thought only the silent image was implicitly expressive, whereas sound, used as it then was, was a game audiences would soon tire of. The issue of cinematic verbocentrism was seen as problematic, because actors had begun to grow less expressive and less theatrical: all that seemed to matter was that speech be clear and natural enough to be understood and engaging. Naturalist directors preferred to enrich the expressiveness of speech with mic techniques that turned out to be ineffective clichés. Still, Cavalcanti pointed out that film-makers had at least taken advantage of sound's potential, allowing cinema to evolve into different genres according to its use, from drama to comedy to musicals. He was actually against the line the Russians were pushing, because he believed sound had a greater potential than that: he thought sound would eventually become even more important than the image, since sounds naturally evoke emotional responses more effectively, while in real life visual stimuli are usually descriptive and informative. Nonetheless, he agreed that to reach this aim film-makers should start using sound in a more ambiguous manner, which is how sound actually achieves its expressiveness.

Things changed considerably in the 40s, and critics like Bazin actually hailed the aesthetics of the time, believing that the potential of the audiovisual film lay in its ability to transform reality within a realistic context, and that silent films had been too artistic and abstract. Sound was therefore seen as a tool to achieve realism and engagement.

However, things changed once again as modernist thinkers pushed towards a return to a more abstract aesthetic, like that of the silent era. They believed sound could let viewers immerse themselves empathetically in the plot in a way the image alone could not. Kracauer and Epstein suggested focusing more on the prosody of speech, to make it more expressive, and held that films should from then on give the same importance to all sound categories, not only speech. It is only with Burch, however, that a way was found to actually implement the modernist ideals, through sound manipulation: thanks to manipulation it finally became possible to orchestrate noises as if they were musical instruments playing a counterpoint. A few Japanese films, such as Chikamatsu monogatari (Mizoguchi, 1954), actually tried out this approach, overlaying sounds in a rhythmical manner. Burch therefore advocated that sounds could reach their poetic potential once they were organized creatively and musically.

Technology was still seen as the key to unlocking sound's potential by Schreger, who at the end of the 70s stressed the importance of the technologies being developed in those years, because they could finally meet many of the aesthetic needs authors had been trying to satisfy. Altman is probably the most relevant figure here, as he revolutionized the attitude to sound recording in Nashville (Altman, 1975); films like The Conversation (Coppola, 1974) and The Deer Hunter (Cimino, 1978) likewise made use of wireless microphones and multitrack recording, which allowed overlapping dialogue that could finally be understood once recorded. These technologies also made it possible to build soundscapes and overlay sounds clearly, and opened the way to using silence as an expressive tool, since it could at last be created without too much background noise.

In the 80s, Doane analysed the major innovations of the previous 30 years and noticed that the technological and practical habits of the Californian studios had shaped world cinema aesthetics, above all because of their technological superiority and because they were among the few who knew how to exploit what technology had to offer. Interestingly, however, she noticed that technological mastery was considered such only when the film-making process could not be perceived during film-viewing. That was one reason silence was a taboo in the 40s: technicians feared it would make viewers aware of the film-making process. For similar reasons, verbocentrism without voice-overs was seen as the only correct attitude towards film sonification. Fortunately, the leading experts in the sound field came from radio drama, where sound manipulation was used to recreate plausible or imaginative soundscapes rather than realistic ones, so manipulation was not necessarily seen as taboo. Doane, for her part, wished film-makers would reconsider the potential of non-diegetic sounds like voice-overs, convinced that they hold an effective tool for playing with diegesis and screen visibility in a way that could actually be more engaging.

Balázs focused instead on the viewer's audiovisual interactivity, noticing that each audiovisual stimulus can force us to focus on some details rather than others in a way that can be controlled and played with. He too exalted the audiovisual counterpoint ideal, but for him interactivity, rather than overlaying itself, was the reason to pursue it: by counterpointing sounds and images according to their interactive potential, the power of sound could be unleashed, because it pushes viewers to stay engaged and to keep analysing what they see. Balázs already understood, however, that an excessively dense counterpoint could become overstimulating, so he invited film-makers to use silence to let the image express its intrinsic sonority, and to avoid demanding an excessive effort from viewers, who cannot endure continuous stimuli and interaction.


Belton, for his part, began to think of the camera and the microphone as virtual eyes and ears, and so suggested that real engagement could be reached by taking into account the point of view, and the point of audition, that recordings and shots actually suggest. He therefore criticized Altman's film-making attitude: with wireless microphones all sounds end up being perceived as close-ups, with no sense of where they are actually coming from, which impedes a subjective perception of the soundtrack.


In the 90s, Back continued from Balázs, stressing the problem of cultural perception and the consequent attitude in the audiovisual experience. Film sound, he argues, is actually engaging when it delivers all the information that is culturally relevant to us, so when designing sound for film all the relevant sound features must be stressed and underlined. Above all, he emphasizes dynamics: a sound should not be audible merely to satisfy realism at the moment it is supposed to be heard, but to comply with the narrative needs the film-maker has.

Ribrandt goes on from Back by defining sound as an "art in time", since the soundtrack can be seen as such only by considering how sounds evolve, rather than how they are perceived in themselves, detached from their context. He also tackles the difficult issue of comparing soundtracks, raising the problem of style, which depends on the film itself, the sound designer and all the other film-makers. He does, however, point out some significant parameters to consider when attempting such a task. The first is sound projection, which determines how sounds will be perceived by the viewer, and therefore how the technologies are used; he agrees that awareness of the technology does not necessarily give film-viewing a negative connotation if it is used for poetic reasons. He also believes a soundtrack has greater impact when it relates to other sound design work and pushes the boundaries of the design process further in an effective way. Nevertheless, all these considerations are relative to when the film was shot, because that determines what the sound designer and film-maker could play with and what kind of issues they had in mind.

Finally, Dykhoff returns to the problem of sonic overload. Although he agrees that excessive sonic information cannot be elaborated effectively, he also thinks that sounds which go unnoticed still act on us subconsciously: even though their effect is less obvious, all subconscious sounds contribute to our elaboration of, and judgements about, the whole.

How sound was used in the different decades of the 20th century. A brief look at its brief history.

There are a few recurring similarities among the films of each decade, but one must remember that each author has his own "sound style", since every director has a different sensitivity to, and interest in, exploiting this creative medium. The problem is complicated further by the fact that the authors who stand out are usually those who choose not to comply with the mannerisms of their period. I think looking into these film-makers is equally interesting, because it shows which issues were being explored and how much they felt influenced and conditioned by the aesthetics of their time.
Another problematic issue is that determining similarities and differences in style requires establishing a paradigm of analysis. Many paradigms have been suggested over time, and this has led to the birth of different schools of criticism, either academic or tied to the popular taste of a given time in a given country. Each has its concerns and its biases. The trouble is that films were often made with an eye on the surrounding critique, because films have always been meant to please audiences and their interests, be they large or niche. So, as I describe each period, I will also take into account the paradigms that were current at the time, drawing above all on the studies of Michel Chion. I hope this will give an idea of how a few aspects of sound use changed over the years, and why.

From 1927 to the early 40s, "naturalism" prevailed. Sound-wise, the idea was that the soundtrack had to reinforce every object in the scene, especially the actors, by giving them a voice whenever possible. This obsessiveness was probably dictated both by a desire for extreme realism, owed to the popularity of that register in the theatre, and by the wish to exploit what syncing technology now let film-makers do. This was not always the case, however: a few important experimental authors of the time were Jean Vigo and Fritz Lang. In L'Atalante (Vigo, 1934), Vigo attempted a different strategy, characterizing each character not by voice but in peculiar ways, foregrounding what would usually be background noises as if they were that character's main sounds. In Das Testament des Dr. Mabuse (Lang, 1933), instead, sounds were matched to scene changes, edits and camera movements: the idea was that the film-making process itself could convey tension by binding audiovisual elements and their meanings together.
Besides these experimental attempts, shooting and post-production gradually overcame the limits of the technology, and other possibilities became less problematic towards the late 30s. The sound background could finally be portrayed more clearly, and films could include the soundscapes of everyday life. This often meant a constant noise, which was not viewed negatively but rather as something useful for perceiving context. Moreover, noise at the time was associated with energy and rhythm, something film-makers were often aware of. The power of sound, and of the media, was in fact a recurring theme in many films, and this feeling of power was conveyed all the more when the sounds were schizophonic, that is, when the characters heard recorded sound and voice projected from loudspeakers. Recordings were still associated with a magical, almost divine power, which gave them great importance. In The Great Dictator (Chaplin, 1940), for example, people cheer for the two protagonists speaking on the radio, even though they say completely different things, simply because they are speaking on the radio.

The 40s are known as the years of "classicism". Films at that time had to be considerably formal, and the Americans above all were obsessed with dialogue, which is why we can speak of "verbocentrism". Another feature of this period was the use of music: since there was a desire to always be realistic, music too was diegetic, and numerous scenes showed musicians in the act of playing. The audience was thus meant to perceive something very close to what the film characters perceived; the goal, in fact, was never to make the audience conscious of watching something fictional. This meant sound could never stop, so the soundtrack was designed as a continuum, even while remaining supposedly diegetic. The design also had to comply with strict rules, with dialogue, voice-overs, music and occasional background noises flowing one into the other, never leaving the viewer in silence: the viewer could not be made aware of the film's fictional nature if it was to remain engaging. A popular example is Casablanca (Curtiz, 1942), in which you hardly ever hear silence and the soundtrack moves back and forth between dialogue, music and background noise.

This formalism came to be seen as constraining as the years went by, and so-called "modernism" became more popular in the 50s and 60s. Verbocentrism was slowly abandoned, and several authors experimented with new ways to organize music, speech and sounds. On the musical side, two interesting examples are Giovanni Fusco and Nino Rota. The first composed numerous soundtracks heavily influenced by the sounds of electroacoustic music, for films like The Lady Without Camellias (Antonioni, 1953) and The Cry (Antonioni, 1957). Nino Rota instead composed for orchestral instrumentation, but designed his scores so that you could not tell exactly when the music was diegetic, as in many scenes of La Strada (Fellini, 1954). On the side of sound use, I think Alfred Hitchcock was an important innovator, because he explored many strategies for altering the perception of tension and time by using concrete sound as an actual character in the film, as in Rear Window (Hitchcock, 1954) and The Birds (Hitchcock, 1963). Such experimentalism was driven by the belief that techniques had rhetorical potential: an unusual combination of sounds and images could convey an unusual effect, and so feelings, but also ideas. Films could thus be narrated not only through dialogue but also by suggesting specific concepts and ideas through technical choices tied to the narrative. Think, for example, of À bout de souffle (Godard, 1960), in which the editing is used so sharply that the flow skips moments you would expect to see, suggesting a memory, or something otherworldly, rather than something realistic. This use of unusual techniques probably stemmed from the desire to deal with complex issues rather than complex plots, which implied less need for information and more interest in ideas.
Another recurring theme in those years was the perception of time, often conveyed through specific technicalities or strategies. In La Dolce Vita (Fellini, 1960), for example, several scenes are unreasonably long, and the idea of persistence and continuity is accentuated by the use of repetitive music and recurring sounds throughout the film.

The 70s and 80s saw a change in the trend of film-viewing: although the most acclaimed directors continued their research into technical expressiveness, the most innovative films were probably those that explored new ways of playing with our senses. Many critics had difficulty accepting this idea, because sensorial films are usually very undemanding in both plot and ideals. These new sensorial film-makers, however, believed the film should be a tool for creating virtual worlds, made unique by unusual audiovisual perception, with little to do with the everyday world or with past cinema. For this reason a lot of science-fiction titles of those years became extremely interesting to watch. Star Wars: A New Hope (Lucas, 1977), for example, used numerous audiovisual special effects to create a world never seen before, with a unique aesthetic, made possible by significant technological innovations in the filming and post-production stages. Sound-wise, the greatest innovation was spatialization, which became an established standard in 1982 with Dolby. It became possible to alter the perception of space with detailed control, which in turn made it possible to explore the boundaries of diegesis and the relation between what can be seen and what can be heard. In the opening dream scene of Apocalypse Now (Coppola, 1979), for example, it is not clear what is truly felt by the character and what is meant for the viewer, because the ambiguous placement of the sounds makes this difficult to establish with certainty. Another important innovation brought by technology was the possibility of creating detailed soundscapes, which allowed film-makers to suggest virtual worlds with a unique sonic identity.
In Blade Runner (Scott, 1982), for example, the aim was to create a futuristic world with few realistic concrete sounds and with synthetic music, so as to suggest that future worlds will sound very different, somehow less worldly and human. Following this idea of uniqueness and non-realism, more and more films relied on post-produced sound rather than on-set sound, thanks to the mastery of foley artists.
In the same spirit, actors too increasingly altered their own voices according to their film character, to better convey that character's identity.

Coming to the 90s and the start of the 21st century, it is still too difficult to discuss this period objectively, as it is still too close to us. Nonetheless, an increasing distance has been noticed between those who search for "rhetorical meaning" and those who seek "sensorial innovation". The first have been trying to use technology as little as possible, convinced that the sensorial approach merely conveys excessive sensation with little meaning; think of Dogme 95, whose directors tried to explore film-making with the least post-production possible. The sensorial directors, instead, have continued to use each new technological innovation to make their films aesthetically unique. Some authors, however, have begun to search for a synthesis between the two schools of thought. Take The Matrix (Wachowski, 1999), in which complex issues are dealt with alongside action scenes, both told with the help of very innovative technologies; or Trois couleurs: Bleu (Kieślowski, 1993), one of the rare French films of the time to use technologies such as surround sound and sound manipulation to highlight the contrast between very rich scenes and intimate ones.
