A Study in Pink (Noise)

For my final project, I wanted to do something very different from my previous works and move beyond my comfort zone. I also wanted to engage with as many of the concepts I learned about in the course as possible. In the end, I decided to compose a “song,” but finding a subject that would let me use multiple techniques proved difficult. The inspiration for the piece came after listening to so many pieces that they all started to become one continuous blur. At that point, out of frustration, I thought to myself: since everything sounds the same, why don’t I just listen to everything? Then I spent about 49 minutes listening to pink noise.

Even though it started as a joke I played on myself, it turned out to be a much more valuable experience than I expected. Shortly after turning on the pink noise generator, I also started a recording of myself so I could, more or less, note down what I was feeling without moving too much. The piece is my attempt at describing the experience of listening to pure noise, using foley, a recorded electric guitar melody, and a “prepared guitar” track where I used a knife to make scratches and pops on the guitar.

At the start, there was a sense of chaos, a feeling of being overwhelmed, that there were too many things going on at the same time. That feeling is expressed with a low rumble that persists for the majority of the piece. The background chatter stands in for the looming pink noise; I chose chatter because, after a while, I started to tune the noise out, much like background chatter in real life. The guitar line that runs throughout illustrates the thoughts that entered my head; they were usually short ideas that were quick to fade. At times I would try to hold onto them, but they would eventually slip away, partly because I didn’t want anything to stay in my mind during the process. It was very much a meditation. A little over halfway through the piece, there is a “quiet” section. Here, I muted the low rumble and raised the volume of the “prepared guitar” track (which had actually been present since the beginning). At that point in the listening session, I became very conscious of all of my own little movements, and I felt that the prepared guitar was the perfect way to express it.

For me, the project was quite an experience, both in development and in execution. After this project, I feel significantly more receptive towards noise, discordance, and silence than I ever was. Even though the featured guitar track was constructed quite traditionally, I felt that the scratches, pops, and fuzzes from the prepared guitar were absolutely essential to the piece. The final project was a way for me to really move beyond my comfort zone, since the majority of my work in the class followed a very studio-like process, with very clean recordings. While the components of the piece were still very deliberate, I feel that it is a good step towards moving beyond my current frame of thinking and approach towards sound.

GENESIS (Objectified, Electrified Sound)

PD Final

At the heart of GENESIS was the intention of visualizing electronic music so that it becomes more relatable for the audience. I decided to convert a graphics tablet into an instrument, given that its purpose is very similar to mine: translating human movements into numbers that computers can understand while preserving as much detail as possible. The final product achieved the goal somewhat, but not to the degree that I expected.

Originally, I intended to use the three primary outputs of the tablet to control a synthesizer in Pure Data: the x–y coordinates of the cursor and the pen pressure. However, I was not able to get Pure Data to recognize pen pressure as a separate input, so I was limited to just the two coordinates. I scaled my laptop’s horizontal pixel range into frequencies in Hertz for the synthesizer, quantized to match the frequencies of the lower keys on a piano keyboard.
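
For illustration, the mapping could look something like the Python sketch below. The real thing lives in the Pure Data patch; the screen width and the exact range of “lower keys” here are placeholder assumptions, not the values the patch uses.

    import math

    SCREEN_WIDTH = 1440          # assumed horizontal pixel count of the laptop screen
    LOW_KEY, HIGH_KEY = 1, 40    # assumed "lower keys": A0 up to roughly middle C

    def piano_key_freq(n):
        # frequency of piano key n in equal temperament (A4 = key 49 = 440 Hz)
        return 440.0 * 2 ** ((n - 49) / 12)

    def cursor_to_freq(x):
        # scale the pixel position linearly across the chosen key range,
        # then snap the result to the nearest piano-key frequency
        frac = max(0.0, min(1.0, x / SCREEN_WIDTH))
        f = piano_key_freq(LOW_KEY) + frac * (piano_key_freq(HIGH_KEY) - piano_key_freq(LOW_KEY))
        nearest = min(range(LOW_KEY, HIGH_KEY + 1), key=lambda n: abs(piano_key_freq(n) - f))
        return piano_key_freq(nearest)

    print(cursor_to_freq(720))   # mid-screen cursor -> a key frequency near the middle of the range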

The pitch, as designated by the cursor’s position on the horizontal axis of the screen, is fed into one of three synthesizers that I put together. The first synthesizer is, in essence, a clipped sine wave that gives a clear fundamental tone and a moderate amount of harmonics. The second is an anti-aliased square wave fed through a low-pass filter, which gives a sound reminiscent of old-school analog synthesizers. The last is a heavily processed additive synth in which two square waves are summed, the second an octave lower than the first. The summed signal is then fed through a bandpass filter with a fixed center frequency of 90 Hz whose Q is modulated by the cursor’s position on the vertical axis. At the end of the signal chain, there is a low-pass filter with a slider for adjusting the cutoff frequency from 0 to 20 kHz. All three synthesizers are designed to be used either independently or concurrently with one another.
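
For anyone who prefers reading code to patch screenshots, here is a rough numpy sketch of the three voices. The filters below (a one-pole low-pass and a textbook biquad bandpass) are simplified stand-ins for the objects in the patch, the naive square wave is not anti-aliased the way the second synthesizer’s oscillator is, and the constants are guesses made only to keep the sketch self-contained.

    import numpy as np

    SR = 44100  # sample rate

    def sine(freq, dur=1.0):
        t = np.arange(int(SR * dur)) / SR
        return np.sin(2 * np.pi * freq * t)

    def square(freq, dur=1.0):
        return np.sign(sine(freq, dur))   # naive, aliasing square wave

    def one_pole_lowpass(x, cutoff):
        a = np.exp(-2 * np.pi * cutoff / SR)
        y = np.zeros_like(x)
        for i in range(1, len(x)):
            y[i] = (1 - a) * x[i] + a * y[i - 1]
        return y

    def biquad_bandpass(x, center, q):
        # textbook (RBJ) bandpass biquad; q is the parameter that the cursor's
        # vertical position modulates in the patch
        w0 = 2 * np.pi * center / SR
        alpha = np.sin(w0) / (2 * q)
        b0, b1, b2 = alpha, 0.0, -alpha
        a0, a1, a2 = 1 + alpha, -2 * np.cos(w0), 1 - alpha
        y = np.zeros_like(x)
        for i in range(2, len(x)):
            y[i] = (b0 * x[i] + b1 * x[i - 1] + b2 * x[i - 2]
                    - a1 * y[i - 1] - a2 * y[i - 2]) / a0
        return y

    def voice_one(freq, clip=0.6):
        # clipped sine: clear fundamental plus a moderate amount of harmonics
        return np.clip(sine(freq), -clip, clip)

    def voice_two(freq, cutoff=2000):
        # square wave through a low-pass, after the old-school analog sound
        return one_pole_lowpass(square(freq), cutoff)

    def voice_three(freq, q=4.0):
        # two squares an octave apart, summed, then bandpassed around 90 Hz
        mix = square(freq) + square(freq / 2)
        return biquad_bandpass(mix, center=90.0, q=q)

    out = voice_one(110) + voice_three(110)   # the voices can also be layered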

The synthesizer can be used in freestyle mode, where sound is continuously generated, or with a pseudo-keyboard built into the patch, where bangs placed at different locations on the screen match the keys on a keyboard.

While the core of the project is fully functional, there are some features that I wanted to implement but could not, for a variety of reasons. First, I wanted the synthesizer to produce sound only when the pen pressure passed a certain threshold (or to use the pen pressure value as the main form of volume control). Since pen pressure did not work as a variable in Pure Data and a click-detection function is not available, I wasn’t able to implement this feature. Second, I wasn’t able to stop Pure Data from generating a high-pitched whine (which is part of the reason I added the anti-aliased synthesizer and a global low-pass filter). The problem lies, I assume, with Pure Data’s signal processing, since changing the audio hardware did not solve the issue.

Overall, I’m reasonably happy with the functionality and sound of the synthesizer, and I found a few combinations to be very usable. If I were to continue with this project, I would most likely do one (if not all) of the following: 1) debug the whole patch to solve the whine, 2) change the input device to add another variable, 3) migrate to Max/MSP for a better engine and better support, and/or 4) link the patch up to a physical instrument, somehow.

FISSURE (Sonic Cinematic)

FISSURE started out as an attempt to deliver a narrative through purely auditory means. The goal of this project was to discover what is possible, given how far we have come since the days of War of the Worlds in terms of recording techniques and audio processing technology. The project was also a chance for me to further my recording experience. While I have some experience in studio recording, foley recording is still quite new to me and requires a very different approach (as I have come to learn).

By the end of the process, I became quite thoroughly convinced that sensory deprivation can be a very powerful component in any form of narrative. Once left to the imagination, what we take from an ambiguous stimulus depends entirely on the context of the situation, and that is where foley shows its strength. Many of the sounds that I used in the project are almost unrecognizable once taken out of context; it is only when they are in concert with each other that meaning can be derived from them. On the other hand, foley pushed my understanding of perception. The properties of a sound are not intrinsic to the objects or actions from which it is created: any sound can be broken down into separate components and reconstructed with the aid of multitrack recording. The sounds that were the most valuable for me were the ones that did not sound like anything in particular and were easy to manipulate (in terms of equalization and other digital processing techniques). It is a lot like putting words together to make a sentence.

The challenge was definitely finding (or, in some cases, creating) the right sound for the right moment. Sometimes I would only hear the sound that I needed in my head after playing through the entire piece and realizing what felt lacking. I feel that many of the sounds added in the final mix will not be consciously heard by the audience, but their contribution to the overall effectiveness of the piece is undeniable.

A continuation of FISSURE would be to extend the story further and discover what other methods can be used to convey meaning through sound. In the piece, I tried to incorporate literary techniques like foreshadowing and symbolism in the form of sounds (how successful that was is up for debate), and I felt that there is definitely more room to explore this aspect of audio narrative. What if I didn’t have a narrator? Can meaning be conveyed completely through non-verbal means? What other literary techniques can be delivered using sound? Rhyme and rhythm come to mind, but what about metaphors or unreliable narrators?

RIFT (a Listening Machine proto-project)

The final product of the RIFT proto-project is being processed by YouTube at the time of writing. Once that’s finished, I’ll add it somewhere in the post. There we go.

RIFT started out as a vaguely defined idea: hearing something that would otherwise not be present in the space the listener is in. The idea is heavily inspired by the Alter Bahnhof walk, various examples of augmented reality, and my own experience working on film sound design projects. After reassessing the appeal of the initial idea, I finally arrived at the current iteration of RIFT: an engine that facilitates the creation of virtual sound installations with a dash of augmented reality. Given the time and resources that were available (and what I actually know how to do), I created a video that illustrates what a fully functional RIFT engine would be able to create.

In its ideal state, RIFT would, at the very least, allow users to designate elements in their surroundings as virtual loudspeakers that play user-uploaded sound clips upon a variety of triggers (proximity to the object, orientation of the listener, etc.). The sound would be processed as if it were being played in a 3D space, where the listener’s orientation would affect the pan and volume of the sound. RIFT would also have the ability to process background noise through user-defined effects and filters, making each instance of RIFT a unique experience. Lastly, to enhance psychoacoustic immersion, RIFT would incorporate elements of augmented reality (AR) to highlight for listeners which objects are the virtual loudspeakers and to increase the depth of interaction.
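
To make the idea a little more concrete, here is a minimal Python sketch of what orientation-dependent panning and distance-dependent volume for a single virtual loudspeaker might look like. The function name, the pan law, and the rolloff are all illustrative assumptions on my part, not anything RIFT actually implements yet.

    import math

    def spatialize(listener_pos, listener_heading, speaker_pos, ref_dist=1.0):
        # listener_pos / speaker_pos are (x, y) in metres, listener_heading in radians
        dx = speaker_pos[0] - listener_pos[0]
        dy = speaker_pos[1] - listener_pos[1]
        dist = math.hypot(dx, dy)
        # angle of the source relative to the direction the listener is facing
        angle = math.atan2(dy, dx) - listener_heading
        pan = -math.sin(angle)                         # -1 = hard left, +1 = hard right
        gain = min(1.0, ref_dist / max(dist, 1e-6))    # simple inverse-distance rolloff
        left = gain * math.sqrt((1 - pan) / 2)         # equal-power pan law
        right = gain * math.sqrt((1 + pan) / 2)
        return left, right

    # a virtual loudspeaker one metre to the right of a listener facing along +x
    print(spatialize((0, 0), 0.0, (0, -1)))            # -> (0.0, 1.0)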

A user-created RIFT could potentially be experienced by using a smartphone to scan a QR code that loads an instance of RIFT; the smartphone’s camera would let users view the AR elements, with the headphone output serving as the primary audio output and the microphone as the background noise input. It is likely that the current state of technology would not be able to deliver an experience as seamless and complex as my proto-project suggests (given the computational intensity of each component in each RIFT).

If I were to continue working on this project, I would definitely pull together a team of audio engineers and programmers to actually build the engine and software that would allow others to create a RIFT. Although, I might have to wait a few years for consumer-grade technology to catch up in terms of processing power.