April 18, 2023, 4:00 pm – 7:00 pm
MIT Building 5 (Room 5-414)
55 Massachusetts Ave, Cambridge, MA

With support from an NEH Digital Humanities grant, MIT Architecture Department faculty Dr. Cagri Zaman and Prof. Caroline A. Jones are launching a new 3-D tool, the Latent Archive, for interacting with classic cinema. Developed out of the MIT Transmedia Storytelling Initiative together with Zaman’s Virtual Experience Design Lab, the interactive 3-D tool takes advantage of the vast corpus of digitized cinema (the “archive”) and mines this material for unique historical spatial information. Bringing together approaches from film and media studies (Jones) and powerful computational approaches to durational media (Zaman), the Latent Archive offers potential for both creative and analytic users.

Screen capture of navigable interface for Orson Welles, Touch of Evil (the bomb-planting scene)

The year-long funded project brought Zaman together with architecture master’s students to work on scenes selected in consultation with the Transmedia Storytelling Initiative’s team as well as outside film scholars and creators on the advisory board. Zaman’s group combined multiple open-source algorithms and architectural modeling programs, linked by bridging software, to extract space from digitized durational media. Orson Welles, Jean-Luc Godard, Andrei Tarkovsky, and other directors exemplify mastery of time-based storytelling. But every frame of their classic films contains spatial information that is “latent,” rushing by in viewers’ experience and barely perceived. This precious historical data about the architectural and urban settings (from sets to city blocks) can now be extracted, and the resulting latent spatial information can be combined using machine learning techniques to produce a unique, navigable 3-D interface.
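The geometric principle behind extracting space from moving images can be sketched in a few lines: when the camera translates between frames, the same point projects to different image positions, and that shift (parallax) encodes depth. The toy below is a minimal stereo-triangulation illustration under a simple pinhole-camera assumption; the focal length, baseline, and coordinates are invented for the example, and this is not the project’s actual software.

```python
def triangulate(x_left, x_right, baseline, focal):
    """Recover the depth of a point seen from two camera positions
    separated horizontally by `baseline` (classic stereo geometry).

    x_left, x_right: x-coordinates of the point's projection (pixels)
    baseline: how far the camera moved sideways between the two frames
    focal: focal length in pixels
    """
    disparity = x_left - x_right          # parallax between the two views
    if abs(disparity) < 1e-9:
        raise ValueError("no parallax: depth is indeterminate")
    return focal * baseline / disparity   # depth along the optical axis

# A point at X=1.0, depth 10 units; focal length 800 px; the camera
# "dollies" 0.5 units sideways between the two frames.
depth, focal, baseline = 10.0, 800.0, 0.5
x_left = focal * 1.0 / depth                 # projection in frame 1
x_right = focal * (1.0 - baseline) / depth   # same point, camera shifted
print(triangulate(x_left, x_right, baseline, focal))  # → 10.0
```

In practice, structure-from-motion pipelines apply this idea to thousands of automatically matched feature points across many frames, which is what makes a camera dolly through a film set so valuable as spatial data.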

Still frames from scenes in Orson Welles, Touch of Evil, as filmed (including title sequence)

Jones explains her excitement about the findings so far: “We’ve already been able to show, with the vast increase in computational power and machine-learning processes, how entire genres can be compared by the way they relate to space.” She continues, “We first began with the early 20th century films known as ‘city symphonies’ — documentaries set in a real world of complex urban settings associated with the modern metropolis.” But the team realized that the discrete spatial units of these rapid-cut, tripod-steady shots revealed almost no “parallax” between points of view that could generate clear-cut spatial relationships. Emphasizing the “shock” of modernity, these films constructed a spatial world that intentionally didn’t add up. By contrast, the mid-century period of “classic” cinema features virtuosic long dolly shots (a well-known specialty of Orson Welles in particular). Processing the frames of these scenes revealed rich settings with carefully placed subliminal cues and artful misdirection. The “Latent Archive” tool allows a temporally frozen, spatially navigable scene to yield insights into the director’s and cinematographer’s narrative techniques.
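The parallax problem with tripod-steady shots can be made concrete: when a camera pans without moving, image positions depend only on each point’s direction, not its depth, so no amount of processing can recover clear-cut spatial relationships. The sketch below (pinhole model, invented values; an illustration, not the team’s software) shows a near point and a far point on the same viewing ray projecting identically both before and after a pure rotation.

```python
import numpy as np

def project(K, R, X):
    """Pinhole projection of 3-D point X through rotation R (no translation)."""
    x = K @ (R @ X)
    return x[:2] / x[2]

K = np.array([[800., 0., 0.],    # assumed intrinsics: 800 px focal length,
              [0., 800., 0.],    # principal point at the origin
              [0., 0., 1.]])

# Two points on the same viewing ray, at very different depths.
near = np.array([1.0, 0.5, 5.0])
far = near * 10.0                # same direction, ten times farther away

# A tripod pan: the camera rotates 10 degrees about the vertical axis
# but does not move (zero baseline).
t = np.radians(10.0)
R = np.array([[np.cos(t), 0., np.sin(t)],
              [0., 1., 0.],
              [-np.sin(t), 0., np.cos(t)]])

for view in (np.eye(3), R):
    a, b = project(K, view, near), project(K, view, far)
    assert np.allclose(a, b)  # indistinguishable: no depth information
```

A dolly shot breaks this degeneracy by translating the camera, which is why Welles’s long moving takes are such rich material for spatial reconstruction.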

What will happen when time can be converted to space in our study of media? The unique qualities of spatialized storytelling and storyworlds can be revealed. “Come try the Latent Archive viewing tool,” Jones concludes. “We invite the MIT community to explore the ‘time-space continuum’ of our beloved fictional worlds.”