Consider some application/product where auditory information (input, output, or both) is used in a creative way. It need not just be audio and may include visual or other sensory information. The auditory component should be implemented in a manner that has a primary purpose in supporting a task and emphasizes the role/benefit of audition. Explain why auditory information is best in this role.
This assignment took some real thought, as I was struggling with what product to use. I have two, just in case one doesn't fit, so I can use the other as a backup. The first thing that came to mind was the auditory alert my DSLR camera makes when it locks focus on what I want to photograph. While there is an array of boxes in the viewfinder that gives you a visual indicator, I feel the audio beep is the perfect addition to those indicators. When you are looking through the viewfinder and trying to find the best image within the parameters of your lens, you may lose sight of the focus indicators while attending to the actual composition of the shot, and the audio is a perfect attention grabber.
The second application I thought of was the audible alerts that are part of the Subaru EyeSight system in my Forester. It has many safety features, such as lane departure warning, blind spot monitoring, and object detection alerts, and each of them has both an audio and a visual warning. I feel the two channels work well together: if you have the music up loud you can still see the flashing icons, and if your attention is on the road and the surrounding traffic, the audible alert will still reach you. Of course, no one is ever a distracted driver who checks their phone or looks around the cabin of the car while driving, but for those who do, the audible alarm is again the best way to alert the driver.
This isn’t really a product or application, but “how to” YouTube videos are a perfect example of audio playing an equal or even more important role than the visuals. Having a person explain something, rather than reading text, is so much easier to learn from, IMO. Many early tutorials, e.g., some software application tutorials, had no sound and just showed mouse movements and screen clicks; I found these quite hard to follow. Having audio explain what’s going on is so much better than text or even pictures by themselves. The classic Bob Ross painting lessons are also a good example: his voice, elaborating on the techniques he was demonstrating on the easel, was very effective. I can’t think of an application that uses audio this way very much, but any of the wizard-style setup menus would benefit from having audio as well as text.
On a recent trip to Bergen, Norway, to participate in a sound-art exhibition, I encountered all sorts of implementations in which sound was prioritized and different aspects of the world were algorithmically transformed into audible perception. My own installation sonified EEG data over 8 channels in an attempt to spatialize the sound of brain activity. Another artist sonified biofeedback data (respiration, skin conductivity, and heart rate) from two strangers who were asked to touch hands, thereby broadcasting the inner world of first impressions. But I sense this artistic use of sound, even if its mappings are faithful and intuitive, is not the kind of “creative” use you mean.

On the same trip, however, one detail that had nothing to do with the exhibition stood out to me. On the local tram lines, each station name was announced over a speaker along with its own unique melody of chimes. I thought this was interesting because the designers must have done this for a reason other than aesthetics; otherwise it would be too much of an expense. In hindsight, I might guess it is because the language there is phonetically very complex, and many place names sound or look similar, making a plain spoken announcement difficult to decipher over any background noise. Couple this with the apparently high proportion of non-native residents and tourists using the tram, and it made sense to me that it is easier to recognize your habitual stop by its bells than by trying to parse the complexities of the Norwegian language over a poor loudspeaker. I would argue this was more successful than the visual route map or visual recognition of your local station, since on the one hand, if the tram is full you cannot see the route maps, and on the other hand, all the modern station platforms look almost exactly the same, especially at night.
Additionally, when the tram is full and there is a lot of background chatter, the voice announcement of the station name can easily get lost among all the other voices, but the chimes and their catchy individual melodies assisted me more than once in not missing my stop. In HCI terms, I think this works because it moves relevant, recognizable information into a ‘channel’ that is less susceptible to visual or auditory environmental interference, and although each melody is arbitrary, it is unique and easily memorized, prompting a quick reaction with little cognitive processing once it is recognized.
I think the touch-tone telephone is a great example of the use of auditory information. The sound is actually required for function: on land-line phones, the tone itself tells the other end which key was pressed. But the tones also provide great feedback to whoever is pressing the buttons. The touch-tone keypad obviously has visual feedback, and positional information arising from its standard key layout, but since each key makes a different sound, and these sounds are consistent across all phones, they become ingrained in us. Particularly on touch-screen devices, where you can no longer feel the keys and the act of touching a key obscures it from view, sound can be the best indicator of mistakes. For numbers I dial frequently, I can tell just by the sound whether I have made an error in entry.
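The "each key makes a different sound" property comes from DTMF signalling: every key sounds one low "row" frequency and one high "column" frequency at the same time, and these pairs are standardized, which is why the tones are identical on every phone. A minimal sketch of the scheme (the function names here are my own, not from any particular library):

```python
import math

# Standard DTMF frequency pairs (Hz): each key sounds one "row" tone
# plus one "column" tone simultaneously, so every key is acoustically
# distinct and consistent across phones.
ROW_FREQS = (697, 770, 852, 941)
COL_FREQS = (1209, 1336, 1477, 1633)
KEYPAD = ("123A", "456B", "789C", "*0#D")  # A-D exist in the spec, not on consumer phones

def dtmf_freqs(key):
    """Return the (low, high) frequency pair for a keypad character."""
    for r, row in enumerate(KEYPAD):
        c = row.find(key)
        if c != -1:
            return ROW_FREQS[r], COL_FREQS[c]
    raise ValueError(f"not a DTMF key: {key!r}")

def dtmf_samples(key, sample_rate=8000, duration=0.2):
    """Synthesize the dual tone as a list of floats in [-1, 1]."""
    low, high = dtmf_freqs(key)
    n = int(sample_rate * duration)
    return [0.5 * math.sin(2 * math.pi * low * i / sample_rate)
            + 0.5 * math.sin(2 * math.pi * high * i / sample_rate)
            for i in range(n)]
```

Because the pair for each key is unique, a frequent dialer really can recognize a familiar number, or a mis-keyed digit, purely by ear.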
I really wanted to come up with something new, fun, and interesting; unfortunately, the only thing I keep coming back to is my game of the month. I am currently playing Fallout 4, a post-apocalyptic game set in a world with a 1950s retro-future aesthetic, but with some added cool tech. The games in the series all start you out in a vault where you survive the apocalyptic event, a nuclear war; then various things occur as you leave the vault and head out to explore the wasteland. I have a beautiful surround sound system and am so happy I do, having started this game. Its predecessors used spatial auditory cues quite well, but this game has taken it way out there. When exploring the wasteland, I kick everyone out of the living room, because even the slightest whisper could mean danger. There is a creature in the wasteland that simply breathes heavily, and that is the only sign one is near, either because they blend into the background so well, or because the game does not activate them until you walk through the doorway ahead; the breathing is there to let you know a big nasty is waiting. The previous games were all wonderful, but this one took it up a notch with how well the sound has been enhanced to complement the visual gameplay.
The low-level hardware of computers has used sound for troubleshooting for a long time. This is generally in cases where the computer doesn't boot fully and beep codes give an indication of what is wrong; for example, if memory is not detected, a BIOS might emit single tones one second apart. The reasoning behind this system is two-fold. First, these audio cues occur in situations where the computer cannot boot far enough to put any information on the screen, so the normal method of notifying the user is unavailable. Second, computers are generally difficult to see into, as they are enclosed boxes, and can sit in places that are hard to reach; an LED indicator inside the case would be significantly less useful than audible tones because of those physical limitations.
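The idea is essentially a lookup from a beep pattern to a fault. A small sketch of that idea follows; note that real beep codes differ between BIOS vendors (AMI, Award, Phoenix, and others), so the specific patterns and messages in this table are illustrative assumptions, not taken from any actual firmware:

```python
# Illustrative POST beep-code decoder. Real beep patterns vary by BIOS
# vendor, so this mapping is a made-up example of the idea, not a
# reference table. A pattern is a tuple of beep-group lengths, e.g.
# (1, 3) means "one beep, pause, three beeps".
BEEP_CODES = {
    (1,): "memory not detected",
    (1, 3): "video adapter failure",
    (1, 1, 4): "BIOS ROM checksum error",
}

def decode_beeps(pattern):
    """Map a sequence of beep-group lengths to a human-readable fault."""
    return BEEP_CODES.get(tuple(pattern), "unknown fault pattern")
```

The point of the design survives even in this toy form: the "display" is a one-bit speaker that needs no working video hardware, no open case, and no line of sight.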
Another thing I noticed while working through this is that a lot of audio feedback is based on mimicry of real-world sounds, such as the recycling bin on a computer playing a paper-crumpling sound. This also means many real-world interactions have inadvertent but useful auditory systems. The automobile is an example: a mechanic can very often diagnose a vehicle from its sounds. Noises can indicate exhaust leaks, cupped tires, loose heat shields, etc. Problems that may be very difficult to see can be diagnosed entirely through sound. So this is an auditory system that is always giving feedback to the user, which the user only notices when something is wrong. While not every user has enough knowledge to know what problem a sound indicates, they do know there is a problem. That is inadvertent but good audio design.