October 06, 2012
One of the attractions of living in a university town is the open-door policy many schools have towards evening lectures: the savvy can gate-crash their way to a free education on any course of study under the sun, casually getting a booster-shot on a long-loved subject or keenly soaking up the wisdom of pundits. So last Monday night, I went to hear Christopher Pullman, WGBH's Vice President for Design from 1973 to 2008, give a cosy little talk to BU students on the topic of motion design. Brought to bear on the lecture were his fifty years of teaching at Yale University's School of Art, whose faculty has notably included color master Josef Albers and photographer Walker Evans, two of Pullman's own professors.
The premise of the talk was to explore how the relationship between WORD and IMAGE, two different but equally abstract ways of representing reality, changes with the introduction of time, motion and sound.
Caravaggio, Michelangelo Merisi da, 1571-1610
The Taking of Christ, c. 1602
Repository: National Gallery of Ireland
To better identify the separate properties time-based media brings to design, Pullman first touched on conventions of typography, photography and painting to illustrate how static media has devised its own illusion of having temporal, moving and audible content. Pointing to Caravaggio's The Taking of Christ, Pullman astutely summarized that pictorial conventions tell us what to pay attention to, in what order, and for how long (in this case, 1. the kiss, 2. the shouting man, 3. the soldier's outthrust arm, 4. the artist in the shadows holding a lantern on the scene). A painting or photograph might use carefully chosen elements of line, shape, contrast and color to lead our eye through a composition: we capture its narrative while we look, and sometimes, given noisy content and our memory of sound, we hear it.
In book form, this narrative is extended across a sequence of pages. Turning the page makes the work evolve and move in time, not unlike a storyboard, flip-book or animation. The sense of sound has been expressed in typography through visual rhythm – choices of font, size, weight, spacing and placement can work alongside language to make the viewer hear, just as we hear Saul Steinberg's Yes racing down to a halting But. Memorable examples include the Dadaists' exuberant posters or Roy Lichtenstein's comics-inspired canvases of explosions, crashes and fights, full of primary colors and onomatopoeia. Wham! Pow! Blam!
|Time enforces a sequence, holds your attention, builds a logical argument, enables narrative surprise and enhances storytelling.|
Combining words and images inevitably affects the meaning of each: Pullman referred to Apple's Think Different campaign as an example of how a portrait of the Dalai Lama, Jim Henson or Muhammad Ali could deliver a “gift by association”, merely through the addition of text. What changes in a time-based ad? Everything. The punchline that relied on font choice, type size and placement can now be delivered vocally, as in the Nynex advertisement below, where the punchline is withheld until the very end. This sense of timing is an essential step forward from the static, unavoidable tag that delivered image and message simultaneously. Furthermore, with the addition of actual time, composition changes into choreography – Nynex used evocative music, sound effects and the swirling motion of the tassels to create a visual that holds your attention until the message is ready to make an impact and the pun turns verbal.
Motion, Pullman suggests, helps us express hierarchy. When scale and position become movable, a visible change in position or size can emphasize a word. He played an Issey Miyake animation to illustrate how a beat and a subtle use of motion can endow objects with a behavior, instantly building a recognizable figure and, through that, strengthening a brand.
Sound, Pullman said, serves as a structure that helps you make other decisions regarding the choreography of content. Just as a grid provides the book designer with a baseline from which to lay out the page, sound provides a rhythm and tempo to support visual changes as much as the delivery of words or lyrics. Sound also allows language to be heard rather than seen, and frees the visual from the verbal (much like in our own world experience) so that they can operate on different levels. Music, meanwhile, reveals clues about intended or underlying meaning by setting a mood. This point was made while Pullman played the opening credits of To Kill a Mockingbird (1962), one of the best in his opinion for its clever use of symbol and metaphor, pacing, and text.
After viewing many entertaining and thought-provoking ads and film trailers, perhaps the most moving part of the night came when Pullman shared a few pieces of student work, collected over the years with visible admiration and pride. Some of this remarkable work has kindly been provided to Big Red & Shiny for publication. The video featured at the top of this post was made by Yale MFA'10 graduate Caspar Lam based on a poem by Stanley Kunitz. And this last piece, to the left, was created by Chisa Yagi, Yale MFA'03, and remains one of Christopher Pullman's all-time favorites.
Full screen recommended.
These animations were done as course assignments and serve as good examples of how time-based imagery can work with and alongside language. Lam's Kunitz adaptation uses voice as a rhythmic score in which words and objects function together – language endows these abstract shapes with meaning they would not otherwise possess, while the shapes and color serve to create a universal symbolic narrative from the poem. Chisa Yagi's piece makes strong use of pause and reveal: the pacing that results is ever so graceful and manages to be both funny and melancholy, a consciously melodramatic lament for paper. Seeing this quality of work at the end of the talk told the small audience present that Pullman, beyond being an engaging lecturer, is also very likely a marvelous teacher.