Michael Hussey

How Sound Design Made the Mundane Meaningful

Updated: Jan 8, 2020


[Header image courtesy of Spreetail]

I wrote this article near the end of my summer internship with Spreetail in August 2019 for Spreetail's internal Hub platform.


One of my favorite things about being a User Experience Designer is making the mundane meaningful. At Spreetail, the EXP team works hard every day to make our software, Toolkit and FCTools, easy to understand and use. Sometimes, we as designers get to go beyond enhancing functionality and craft a truly delightful experience with our software.

The Lincoln Fulfillment Center is a pretty loud place. Forklifts and carts whir and rumble around the warehouse, beeps and blips emanate from scanners and machinery, and it can be hard to notice the sounds coming from our Android app, FCTools. People all throughout the warehouse use FCTools to do their jobs, and for some, auditory feedback is essential.

Unfortunately, the sound clips used in the app (called “earcons”) were not very effective within the context of the user’s tasks, for a few reasons:

1. The sounds weren’t designed to function within the environment of the FC.
2. The sounds weren’t related to each other in any way.
3. Some sounds were repetitive and even annoying.





I gave myself three objectives for this project: 

1. Research and analyze the spaces and equipment used in the FC to determine how to best design a new set of sound interactions.
2. Design and compose a coherent and related set of earcons that improves usability and decreases confusion.
3. Craft a delightful experience for FCTools users.



Research & Analysis 

The first thing I had to do was get a sense of the soundscape within the FC. I took a high-powered microphone and recorded over 20 samples from different places across the warehouse. I determined the range of frequencies by running all the samples through an EQ analyzer. 





As you can see, there are lots of very loud fans, vents, and electrical hums in the low end (20Hz-200Hz). The mid-range (200Hz-1kHz) is full of machines, carts, and other miscellaneous noises. The high end (1kHz-10kHz) is less crowded, meaning that it’s easier to pick out high-frequency sounds from the background noise of the FC. With an idea of the ideal frequency range for the environment of the FC (1kHz-10kHz), I could narrow it further by understanding how human ears hear and identify sounds.
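If you’re curious what an EQ analyzer is doing under the hood, here’s a minimal Kotlin sketch of the same idea – a naive DFT that turns raw audio samples into a magnitude spectrum. (This is illustrative only; I used an off-the-shelf analyzer rather than writing my own, and the 1kHz test tone below stands in for a real warehouse recording.)

```kotlin
import kotlin.math.PI
import kotlin.math.cos
import kotlin.math.sin
import kotlin.math.sqrt

// Naive DFT magnitude spectrum: O(n^2), fine for short clips.
// `samples` is mono PCM normalized to [-1, 1]; `sampleRate` is in Hz.
// Returns (frequency in Hz, magnitude) pairs up to the Nyquist frequency.
fun magnitudeSpectrum(samples: DoubleArray, sampleRate: Int): List<Pair<Double, Double>> {
    val n = samples.size
    return (0 until n / 2).map { k ->
        var re = 0.0
        var im = 0.0
        for (t in 0 until n) {
            val angle = 2.0 * PI * k * t / n
            re += samples[t] * cos(angle)
            im -= samples[t] * sin(angle)
        }
        val freqHz = k.toDouble() * sampleRate / n
        freqHz to sqrt(re * re + im * im) / n
    }
}

fun main() {
    // Synthetic test: a 1kHz tone sampled at 44.1kHz should peak near 1000Hz.
    val sampleRate = 44_100
    val samples = DoubleArray(4096) { t -> sin(2.0 * PI * 1000.0 * t / sampleRate) }
    val peak = magnitudeSpectrum(samples, sampleRate).maxByOrNull { it.second }!!
    println("Peak at %.1f Hz".format(peak.first))
}
```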



Our ears have evolved to be especially sensitive to conversational speech frequencies, as verbal communication was vital to our survival; as a result, our hearing is most effective between about 250Hz and 5kHz. The overlap between conversational speech frequencies and our previous ideal frequency range puts our new goal at about 1kHz to 5kHz.





Within the range of conversational speech frequencies, there are narrower bands called formants. In phonetics, they are classified in detail, but the one we need to know about is the singing formant (between 2kHz and 3kHz). Our ears are especially receptive to this band, because it’s the frequency range where you’d hear a baby crying (or an adult screaming). It’s called the singing formant because it’s the range where soprano opera singers project, and it’s the reason why you can usually pick a high solo voice out of a loud orchestra in an opera performance.



Design and Composition 

Within the realm of software sound design, an earcon can either be skeuomorphic – meaning it emulates something real, like an instrument or a sound found in nature – or abstract – meaning it’s a digitally created non-natural sound, like a beep or a buzz. 

Since skeuomorphic sounds are representations of the real world, they feel naturally familiar to the user, grounding them in feedback loops they already have experience with. Doubling down on the emotional impact of skeuomorphic sounds, research has found that hearing musical instruments often reduces stress and improves mood. Relating this back to employee efficiency, composer Joel Beckerman explains in Adobe’s 99U: “...there’s an 86% correlation between our subconscious response to a sound we like and our desire to have that experience again, and the opposite is also true.”





The hardest part of the composition process was finding a real instrument that performed well in the singing formant, but I narrowed it down to three candidates. I ran the vibraphone, the harp, and the guitar through the EQ analyzer to determine their frequency profiles – the shape of the soundwave and its overtones mapped on a frequency chart. A clean tone is what we’re looking for, meaning the only sound we get is the note we play. A muddy tone has lots of other frequencies in the mix, making it much more difficult to hear in a loud environment. Out of the three, the vibraphone was the cleanest by far.
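“Clean” versus “muddy” can be made concrete: the more of a sound’s energy that sits at (and right around) its strongest peak, the cleaner it is. Here’s a rough score along those lines, built on the spectrum sketch above – the metric and its window size are my own illustrative choices, not a standard measure:

```kotlin
// A rough "cleanliness" score: the share of total spectral energy sitting
// within a few bins of the strongest peak. Closer to 1.0 = cleaner tone;
// lower = muddier. (An illustrative metric, not a standard one.)
fun cleanliness(spectrum: List<Pair<Double, Double>>, windowBins: Int = 3): Double {
    val total = spectrum.sumOf { it.second * it.second }
    if (total == 0.0) return 0.0
    val peak = spectrum.indices.maxByOrNull { spectrum[it].second }!!
    val near = (peak - windowBins..peak + windowBins)
        .filter { it in spectrum.indices }
        .sumOf { spectrum[it].second * spectrum[it].second }
    return near / total
}

// e.g. cleanliness(magnitudeSpectrum(samples, 44_100))
```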





Once the instrumentation was figured out, my next task was choosing a key for the sounds so that they fit not only with each other, but also with the environment of the FC. I went back to the recordings I took at the warehouse and isolated a few common sounds you’d hear in the FC. The hand scanners, label printers, and forklift horns are all examples of sounds with a pitch we can’t change. After analyzing their frequencies and mapping them to their respective musical notes, I was able to match them all to the G Major scale.
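Mapping a measured frequency to its nearest musical note is a small bit of math: in equal temperament, every semitone is a factor of 2^(1/12), with A4 pinned at 440Hz. A quick Kotlin sketch (the example pitches are placeholders, not my actual warehouse measurements):

```kotlin
import kotlin.math.log2
import kotlin.math.roundToInt

val NOTE_NAMES = listOf("C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B")

// Nearest equal-tempered note for a frequency, using A4 = 440Hz.
// MIDI note 69 is A4; each semitone is a factor of 2^(1/12).
fun nearestNote(freqHz: Double): String {
    val midi = (69 + 12 * log2(freqHz / 440.0)).roundToInt()
    val octave = midi / 12 - 1
    return NOTE_NAMES[midi % 12] + octave
}

fun main() {
    // Hypothetical pitches for illustration only.
    listOf(392.0, 587.3, 988.0).forEach { hz ->
        println("%.1f Hz -> %s".format(hz, nearestNote(hz)))  // G4, D5, B5
    }
}
```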


Experience Crafting 

Using the key of G, I composed 9 earcons to be used for the new sound design system. They’re meant to sound good together and be used in conjunction with each other. Additionally, they had to convey a range of affirmative and negative feedback, so I arranged them from “Yay” to “Yikes”. 





Every earcon was designed around a specific user action and how the sound should make them feel. For example, “Hello.mp3” plays when the user boots up FCTools, and introduces the notes “F” and “C” to the G Major chord, giving them a little preview of what’s in store. “Yep.mp3” and “Nope.mp3” are designed to be used together for doing and undoing an action, so “Nope” is just “Yep” played in reverse. “Uh-oh.mp3” and “Yikes.mp3” are the two most negative feedback earcons – they’re accompanied by louder orchestral percussion and demand more attention from the listener.
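That “played in reverse” trick is as literal as it sounds: on raw PCM audio, reversal just means flipping the order of the samples. I did this in an audio editor, but here’s what it amounts to in code, assuming you’ve already decoded the clip into memory:

```kotlin
// Reversing a mono PCM clip is just reversing the sample order.
fun reverseClip(samples: ShortArray): ShortArray = samples.reversedArray()

fun main() {
    val yep = shortArrayOf(0, 100, 200, 300)  // stand-in for decoded "Yep" PCM
    println(reverseClip(yep).joinToString())  // 300, 200, 100, 0 -- our "Nope"
}
```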



There’s one last sound I created that doesn’t exactly fit in with the rest of the bunch. It’s called “Ding” and it plays after every successful scan with the finger scanner. The interesting thing about “Ding” is that it can be one of four possible notes, played at random. The four notes are the root, the third, the fifth, and the octave of a G Major chord. There’s a reason for this. Across the FC, people are scanning barcodes and locations, meaning “Ding” would be heard very often. When almost every employee in the warehouse is playing a note of the G Major chord, something interesting happens. 

They play a melody. 
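For the technically inclined: here’s a sketch of how an Android app like FCTools could pick one of the four “Ding” notes at random, using Android’s SoundPool. The class and resource names are hypothetical, not FCTools’ actual implementation.

```kotlin
import android.content.Context
import android.media.AudioAttributes
import android.media.SoundPool

// Hypothetical sketch: four pre-loaded notes of the G Major chord,
// one chosen at random for each successful scan.
class DingPlayer(context: Context) {
    private val soundPool = SoundPool.Builder()
        .setMaxStreams(4)
        .setAudioAttributes(
            AudioAttributes.Builder()
                .setUsage(AudioAttributes.USAGE_ASSISTANCE_SONIFICATION)
                .build()
        )
        .build()

    // Root, third, fifth, and octave of G Major (hypothetical raw resources).
    // Assumes the clips finish loading before the first scan; a production
    // app would wait on setOnLoadCompleteListener.
    private val noteIds = listOf(
        soundPool.load(context, R.raw.ding_g, 1),
        soundPool.load(context, R.raw.ding_b, 1),
        soundPool.load(context, R.raw.ding_d, 1),
        soundPool.load(context, R.raw.ding_g_octave, 1)
    )

    fun onSuccessfulScan() {
        // leftVolume, rightVolume, priority, loop (0 = none), playback rate
        soundPool.play(noteIds.random(), 1f, 1f, 1, 0, 1f)
    }
}
```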





Without changing any part of the workflow, we’ve not only improved functionality and usability, but also empowered FCTools users to collaboratively make music just by doing their jobs. And since the earcons are all in the key of G, they don’t interfere with the melody – instead, they insert motifs into the music.


We turned FCTools users into an orchestra. 


Sound design is easy to overlook when designing an interaction with software. Sometimes we take for granted the amazing experiences that could be created by putting a little more effort into details like these. Here at Spreetail, the EXP team makes software delightful every day, and I can confidently say I love the work we do. When this is implemented, I hope the FC becomes a little less stressful and a little more wonderful.


