The Sound Layer Project – a dynamic layer of sound that augments traditional media such as documents, images, and websites.

Initial Concept Sketch for Sound Layer Project

This project idea first came to me while on the road touring, shooting lots of film photographs, and learning Processing while cramped in the van. Sound photos: photography augmented by a two-dimensional layer of dynamic sound. Susan Sontag accurately observed that “The photograph is a thin slice of space as well as time.” What if we used modern techniques to preserve the photograph's integrity and power, but augmented it with dynamic, interactive sound? Traditional film photography is slowly fading, much as 35mm film is disappearing from the film industry. I have a habit of scanning my worthwhile 35mm shots from my travels into a computer for further enhancement and processing. What if there were a completely new way to experience and present images of all types? I envision a layer of interactive sound nodes that could inherit audio features from an image, derived from properties like color or edge detection, or that could be set by the user to explore and present meaningful sounds or information. Imagine a photo with a transparency on top, or rather a digital transparency that users can modify, where the user sets the rules for what is played and when. Audio in the layer could be relevant samples, image-based audio and tones, MIDI, and so on. There could also be nodes carrying real-time streamed media, such as from a light meter, a rainfall gauge, real-time weather data from the web, or any sensor on the ground.
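To make the pixel-to-sound idea concrete, here is a minimal sketch of one such node in Processing: it samples the image under the cursor and maps brightness to pitch and saturation to volume. The Processing Sound library, the placeholder filename, and the particular mappings are my assumptions here, a rough sketch of the concept rather than the project's actual implementation.

```
// Minimal proof-of-concept: one "sound node" that reads the image under the
// mouse and maps brightness to pitch and saturation to volume.
// Assumes the Processing Sound library is installed and that a file named
// "photo.jpg" (placeholder) sits in the sketch's data folder.
import processing.sound.*;

PImage photo;
SinOsc osc;

void setup() {
  size(640, 480);
  photo = loadImage("photo.jpg");       // placeholder filename
  photo.resize(width, height);
  osc = new SinOsc(this);
  osc.play();
}

void draw() {
  image(photo, 0, 0);
  color c = photo.get(mouseX, mouseY);  // sample the pixel under the cursor
  float pitch  = map(brightness(c), 0, 255, 110, 880);   // brightness -> Hz
  float volume = map(saturation(c), 0, 255, 0.0, 0.5);   // saturation -> amplitude
  osc.freq(pitch);
  osc.amp(volume);
}
```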

From an artistic perspective, a tool like the sound layer project would provide a new palette for musicians. Turning images into music, the musician gets to set the rules for how the image is translated into time-based sound. One possibility: artists could mark existing photos with special symbols drawn in marker. Once scanned, these symbols would turn into interactive nodes corresponding to their shape: a circle for the start of a harmony, a triangle for a sound clip, a square for a Pure Data feed of live stock values. The edges or curves between nodes could be traversed in time, with the x-y values along a curve mapped to a note's duration, pitch, or any other parameter one would want to vary musically.
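As a hedged sketch of that traversal rule, the fragment below fakes the symbol-detection step with a hand-placed list of node coordinates and only implements the walk along the curve: a node's y position sets pitch, and the horizontal gap to the next node sets duration. The coordinates, the 5 ms-per-pixel time scale, and the use of the Processing Sound library are all assumptions for illustration, not the project's actual pipeline.

```
// Hedged sketch of the "curve to melody" rule: hand-placed points stand in
// for nodes detected on a scanned transparency. The traversal moves through
// the list; each node's y value sets pitch and the x gap to the next node
// sets duration. Assumes the Processing Sound library.
import processing.sound.*;

// Placeholder node coordinates (would come from symbol/curve detection).
float[][] nodes = { {40, 300}, {120, 180}, {200, 240}, {320, 90}, {450, 200} };

SinOsc osc;
int current = 0;
int noteStart;

void setup() {
  size(500, 400);
  osc = new SinOsc(this);
  osc.amp(0.3);
  osc.play();
  noteStart = millis();
}

void draw() {
  background(255);
  stroke(0);
  // Draw the "curve" connecting the nodes.
  for (int i = 0; i < nodes.length - 1; i++) {
    line(nodes[i][0], nodes[i][1], nodes[i+1][0], nodes[i+1][1]);
  }
  // y position -> pitch (higher on screen = higher note).
  float pitch = map(nodes[current][1], height, 0, 110, 880);
  osc.freq(pitch);
  // x gap to the next node -> duration, here 5 ms per pixel (arbitrary scale).
  int next = (current + 1) % nodes.length;
  float gap = abs(nodes[next][0] - nodes[current][0]);
  int duration = int(gap * 5);
  if (millis() - noteStart > duration) {
    current = next;          // advance along the curve, looping at the end
    noteStart = millis();
  }
}
```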

I've begun basic prototyping and exploration using Processing, MATLAB, and Pure Data, but I imagine this has the potential to become a widely used tool. Many people use Instagram every day to take and share photos, and it is easy to imagine that technology being augmented with a layer of interactive, shareable sound in the near future. Much as Instagram offers image filters, users would have sound filters and tools to enhance their image's individual sound layer. Ultimately, users would have sound tools that make their photos interactive without losing the poignancy of the photograph. The sound layer project could also augment online news content by creating an engaging way to absorb information from a story without being as distracting as a video. Feeds of real-time data from relevant sources could keep an image up to date, like combining radio and a newspaper into one experience. Some sketches are below, along with a link to a Processing sketch: here
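As a rough illustration of the live-feed idea (separate from the sketch linked above), a data node might simply poll a web feed and let one value retune a sound parameter. Everything specific below is a placeholder: the URL, the "temperature" field, and the temperature-to-pitch range are assumptions made only to show the shape of such a node.

```
// Hypothetical "live feed" node: poll a JSON endpoint once a minute and let
// one value, here a temperature field, retune an oscillator.
// The URL and field name are placeholders; any sensor or web feed could stand in.
import processing.sound.*;

String feedUrl = "https://example.com/weather.json";  // placeholder endpoint
SinOsc osc;
int lastPoll = 0;

void setup() {
  size(200, 200);
  osc = new SinOsc(this);
  osc.amp(0.3);
  osc.play();
}

void draw() {
  // Re-poll on startup and then once every 60 seconds.
  if (lastPoll == 0 || millis() - lastPoll > 60 * 1000) {
    JSONObject feed = loadJSONObject(feedUrl);
    float temperature = feed.getFloat("temperature");  // placeholder field
    // Map an assumed -10..40 C range onto a two-octave pitch span.
    osc.freq(map(temperature, -10, 40, 220, 880));
    lastPoll = millis();
  }
}
```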