Music that is not perceived primarily by hearing. Research, practice, examples.
This project was started on August 1st, 2015 by Henry Lowengard.
Let's find out how to experience music with other senses, primarily the sense of touch.
What waveforms, intensities and frequencies are appropriate for haptic music?
Can haptic timbre be distinguished?
Can haptic polyphony be distinguished?
What are the best ways to create haptic instruments?
Haptic performance: playing for an audience.
Haptics for the hearing impaired:
For this audience, I'm thinking non-aural hearing: haptic music. I bet this is something that would be really fruitful.
Transducers that are customized for lips, tongue and fingertips (and erogenous zones?) might be good.
Clearly, beat-driven music with passages of interest running 60-300 bpm (1-5 Hz) is an accepted part of modern music, so:
What are the psycho-haptic perception curves?
How high in frequency can this perception go?
Can you feel harmony?
Is there a perception of haptic timbre?
Can you perceive the music of other species (I'm thinking of whales and elephants) that communicate through low frequencies, if it is isolated and amplified through these transducers?
And, having established the bounds of this haptic music, can we now leverage this to build acoustic instruments that can produce this music?
For instance, if you hold onto a balloon (the larger the better), you can feel sound a lot better. This might work well enough that an audience of some size, holding or kissing balloons, could enjoy a performance of haptic music. What if you constructed an acoustic resonating membrane for different pitches? It could then concentrate the sound of a low-frequency horn, flute, or stringed instrument. Electronic instruments would be easier, but acoustic ones would be more corporeal (in the words of Partch).
And there you go - ten minutes of thought about a music that is not about hearing.
There are some interesting developments in that vein going on:
Gallaudet's DeafSpace: this has some great architectural design ideas for making a better acoustic experience for hearing-impaired people and also for their assistive devices.
Furenexo is making little gadgets that set off vibration buzzers when there's a loud sound in the room. This project looks like it's disappeared, but it's an idea, anyway.
But the real question is: can harmony, as opposed to monophony, be felt at a single point? Is there a perception of haptic "timbre"?
Is there a way to actually enjoy this haptic music?
There are certainly haptic pieces that are perceived as rhythm: dancing, hand-clapping games, string games. I bet there's even a haptic component to sound games like the Inuit katajjait.
It would be a good start to take those hand-clapping games and find a way to expand them to crowds, get "polyphony" (different claps simultaneously), and maybe notate them. Fun Clapping has a number of these games on video!
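As a sketch of what such a notation might look like (everything here is invented for illustration): each crowd section is a voice on a grid of eighth-note slots, and two sections clapping different patterns at once gives the polyphony:

```swift
// A hypothetical, minimal notation for polyphonic clap games:
// each voice is a repeating grid of eighth-note slots, and several
// voices (sections of a crowd) sound simultaneously.
enum Clap: Character {
    case own = "x"      // clap your own hands
    case partner = "p"  // clap a partner's hands
    case rest = "."     // silence
}

struct ClapVoice {
    let name: String
    let pattern: [Clap]   // one cycle, e.g. 8 slots = one bar of eighth notes
}

// Two crowd sections clapping different patterns at once: "polyphony".
let score = [
    ClapVoice(name: "Section A", pattern: [.own, .rest, .partner, .rest, .own, .own, .rest, .partner]),
    ClapVoice(name: "Section B", pattern: [.rest, .own, .rest, .own, .rest, .partner, .own, .rest]),
]

for voice in score {
    print(voice.name, String(voice.pattern.map { $0.rawValue }))
}
// Section A x.p.xx.p
// Section B .x.x.px.
```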
(Aug 20, 2015):
I did a fun little test and made this discovery: singing into a balloon I held in my hands, I could actually feel different parts of the balloon vibrating at different notes. This is probably related to the standing waves inside the balloon, and I think there may be a little more going on: a balloon is a resonator without a fixed volume or pressure (although it holds a fixed number of molecules), so it can be flexible about where nodes and antinodes appear. It may be stabilizing as I feed it different pitches. In short, a simple balloon does almost exactly what the ORB is doing in Vocal Vibrations!
Here is a short video of me singing a few notes at a fairly stable balloon. I wanted to see what was really going on, so I processed it with a motion phase amplification technique (described here) at LambdaView, which is (currently) a free motion amplification service:
The original video is on the left, and the motion in the right-side images has been amplified. You can see the difference between the nodes on the balloon (nothing going on there) and the antinodes (ripples). I'm singing a short scale. Of course, the frequencies are way, way above the frame rate, but I'm hoping the aliasing will give a clue as to which parts of the balloon are moving. I may continue this with a more controlled sound source!
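Out of curiosity about how that kind of amplification works, here's a toy sketch of the linear Eulerian variant, which is a simpler cousin of the phase-based method LambdaView uses, assuming grayscale frames as plain arrays. Real implementations filter per spatial band of an image pyramid; this just band-passes each pixel's brightness over time and exaggerates it:

```swift
import Foundation

// Toy linear Eulerian motion magnification: band-pass each pixel's
// brightness over time, amplify that band, and add it back in.
// `frames` is a video as [frame][pixel] grayscale values in 0...1,
// with every frame assumed to be the same size.
func magnifyMotion(frames: [[Float]],
                   fps: Float,
                   lowHz: Float, highHz: Float,
                   gain: Float) -> [[Float]] {
    guard let first = frames.first else { return [] }
    var out = frames
    // Two one-pole low-pass filters; their difference is a band-pass.
    let aLow  = Float(exp(-2 * Double.pi * Double(lowHz)  / Double(fps)))
    let aHigh = Float(exp(-2 * Double.pi * Double(highHz) / Double(fps)))
    var lpLow  = first
    var lpHigh = first
    for t in frames.indices {
        for p in first.indices {
            lpLow[p]  = aLow  * lpLow[p]  + (1 - aLow)  * frames[t][p]
            lpHigh[p] = aHigh * lpHigh[p] + (1 - aHigh) * frames[t][p]
            let band = lpHigh[p] - lpLow[p]   // motion in the chosen band
            out[t][p] = min(max(frames[t][p] + gain * band, 0), 1)
        }
    }
    return out
}
```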
Video from October 11, 2015
I brought the AT&T Deathstar balloon into my apartment to see if I could get some more interesting results. It's really just a big white balloon with the shadows of my venetian blind at sundown. I'm playing (probably distorted) sines from my iPad, and this sweep runs from about 116 Hz to 124 Hz.
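Why aliasing helps here: a vibration faster than the frame rate folds down to a low apparent frequency on camera. Assuming a 30 fps camera (an assumption; the actual frame rate of this video isn't noted), both ends of that sweep land near a visible 4 Hz flutter:

```swift
// Frequency that a sinusoid at `f` Hz appears to have when sampled
// at `fps` frames per second: it folds down to the distance from the
// nearest multiple of the frame rate.
func aliasedFrequency(_ f: Double, fps: Double) -> Double {
    let folded = f.truncatingRemainder(dividingBy: fps)
    return min(folded, fps - folded)
}

aliasedFrequency(116, fps: 30)  // 4 Hz: 116 Hz is 4 Hz below 4 * 30 Hz
aliasedFrequency(124, fps: 30)  // 4 Hz: 124 Hz is 4 Hz above 4 * 30 Hz
```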
I'm also wondering if the same kind of research could be done with a mylar balloon, which is a lot less stretchy but much more airtight, and you could also bounce laser pointers off it to magnify the vibrations in real time if you wanted to see them. Also, the research team that developed this technique has a 2200 fps camera, which would show the distortion a lot more clearly!
A more obvious frequency-detecting approach would be to create a kind of panpipe that you can hold or wrap around your forearm, which would respond to vibrations in a similar way, or a stiff but flexible conical shape which could act like a cochlea, creating predictable standing waves.
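For a sense of scale: a pipe closed at one end resonates at f = v/(4L), so the length for a target frequency is L = v/(4f). A rough sketch with made-up target frequencies:

```swift
// Rough sizing for a "tactile panpipe": a pipe closed at one end
// resonates at f = v / (4 L), so the length for a target frequency
// is L = v / (4 f). (v ≈ 343 m/s, the speed of sound in air.)
func quarterWaveLength(hz: Double, speedOfSound v: Double = 343) -> Double {
    v / (4 * hz)
}

for hz in [40.0, 80, 160, 250] {
    print("\(Int(hz)) Hz -> \(String(format: "%.2f", quarterWaveLength(hz: hz))) m")
}
// 40 Hz  -> 2.14 m
// 80 Hz  -> 1.07 m
// 160 Hz -> 0.54 m
// 250 Hz -> 0.34 m
```

A 40 Hz pipe comes out over two meters long, which suggests coiling the tubes, and that fits the wrap-around-the-forearm idea.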
The late Dr. Oliver Sacks' book Musicophilia may have some insights into the brain's processing of music, and music perception: here he is lecturing about it. This may be a good way into haptics.
In 2015, I was invited by Dr. Pauline Oliveros to give a talk to her seminar in new performance instrumentation (NPI) at Rensselaer Polytechnic Institute. I presented some of the ideas that are here on this site, and blew up a lot of big balloons.
Here are some Notes for my talk at RPI about what could be a musical experience without sound.
In 2017, there was an open call for residencies at Olin College for people a little out of the ordinary to do research projects with their students. I put in for this - didn't get it - but here's the proposal, edited somewhat, of what I wanted to do, which was to work on this project.
Thanks to all the people (Nancy O. Graham, Sandra J. Graham, Sarah Lowengard, Mary Lowengard) who helped me put this proposal together!
"Creatives-In-Reference" application
Olin's Sketch Model Program
March 16, 2021:
I'm starting to do a few experiments with Apple's haptic support. Yeah, I know, software, when I really want everything to not need batteries. But what I'm thinking of is right up Haptician's alley. I'm going to make an Audio Unit that can interpret audio and MIDI into signals on the haptic motor (only possible in certain iPhone models). Basically:
Signals will be turned into events. Signals are MIDI messages, band-pass-filtered audio envelope followers, pitch trackers, gate detectors, noise detectors: a number of raw sources that become event messages.
A Mapper maps events into synthesizer commands. The events are mapped, using state machines, into synthesizer commands. In this case, the synth is the haptic motor (though possibly audio as well). The synth commands get put in the synth's scheduling queue.
Synthesis. The synth operates on the commands that are scheduled in the queue.
This is more or less how AUMI Sings works.
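Here's a minimal sketch of that chain in Swift using Apple's Core Haptics. The CH* calls are the real API; MusicEvent, HapticMapper, and the pitch-to-sharpness mapping are invented for illustration:

```swift
import CoreHaptics

// Sketch of the signal -> event -> mapper -> synth chain.
enum MusicEvent {
    case noteOn(pitch: Int, velocity: Float)   // e.g. from a MIDI parser or pitch tracker
    case noteOff(pitch: Int)
}

struct HapticMapper {
    // Map an event into a haptic "note": pitch -> sharpness, velocity -> intensity.
    func command(for event: MusicEvent) -> CHHapticEvent? {
        guard case let .noteOn(pitch, velocity) = event else { return nil }
        let sharpness = Float(pitch % 128) / 127          // crude pitch-to-texture map
        return CHHapticEvent(
            eventType: .hapticContinuous,
            parameters: [
                CHHapticEventParameter(parameterID: .hapticIntensity, value: velocity),
                CHHapticEventParameter(parameterID: .hapticSharpness, value: sharpness),
            ],
            relativeTime: 0,
            duration: 0.25)
    }
}

// Synthesis: schedule the command on the haptic engine's queue.
func play(_ event: MusicEvent, on engine: CHHapticEngine) throws {
    guard let hapticEvent = HapticMapper().command(for: event) else { return }
    let pattern = try CHHapticPattern(events: [hapticEvent], parameters: [])
    try engine.makePlayer(with: pattern).start(atTime: CHHapticTimeImmediate)
}
```

In a real app you'd first check CHHapticEngine.capabilitiesForHardware().supportsHaptics, then create the engine with `try CHHapticEngine()` and `try engine.start()` before playing anything.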
What will develop on the synth side is a number of tactile patterns that act like notes. There's a small amount of polyphony: a continuo "drone" that can be modulated a little, with one or two "solos" on top. Because it's an AU, it can be integrated into the iOS Music ecosystem of MIDI and audio effects, preset saving and sharing, and dynamic parameter changing.
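As a sketch of what that drone-plus-solos polyphony might look like as a single Core Haptics pattern (the timings and parameter values here are made up):

```swift
import CoreHaptics

// A long, soft continuous event underneath (the "drone"), with short
// transient "solo" taps layered on top at chosen times.
func dronePattern() throws -> CHHapticPattern {
    let drone = CHHapticEvent(
        eventType: .hapticContinuous,
        parameters: [
            CHHapticEventParameter(parameterID: .hapticIntensity, value: 0.35),
            CHHapticEventParameter(parameterID: .hapticSharpness, value: 0.1),
        ],
        relativeTime: 0,
        duration: 4.0)
    let solos = stride(from: 0.5, to: 4.0, by: 0.5).map { t in
        CHHapticEvent(
            eventType: .hapticTransient,
            parameters: [
                CHHapticEventParameter(parameterID: .hapticIntensity, value: 0.9),
                CHHapticEventParameter(parameterID: .hapticSharpness, value: 0.8),
            ],
            relativeTime: t)
    }
    // A slow swell on the drone's intensity: the "little modulation"
    // mentioned above.
    let swell = CHHapticParameterCurve(
        parameterID: .hapticIntensityControl,
        controlPoints: [
            .init(relativeTime: 0, value: 0.6),
            .init(relativeTime: 2, value: 1.0),
            .init(relativeTime: 4, value: 0.6),
        ],
        relativeTime: 0)
    return try CHHapticPattern(events: [drone] + solos, parameterCurves: [swell])
}
```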
But we can back up and go to a more radical level. There's an analogue to timbre in haptic expression. So the work that needs to be done is to create a framework for haptic timbre, then build the haptic analogs of pitches, rhythms, harmonies, voices, voice leading, and articulation. These can also be mapped directly onto existing musical forms: scales, rhythm patterns, fills, trills, fugues, etc. Granted, it will be very much like percussion, but articulated with the haptic analogue of timbres.
There's no reason not to approach it as a superposition of simple sensations at different frequencies. In fact, with the right hardware, actual sound - at low frequencies - might even be the best way to create these expressions.
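A literal version of that superposition, as a sketch: sum a few low-frequency sines into one buffer that could drive a voice-coil transducer (the partial frequencies and amplitudes here are guesses, not measured haptic data):

```swift
import Foundation

// The "superposition of simple sensations" idea, literally: sum a few
// low-frequency sines into one sample buffer. Sent to a transducer,
// each partial is a candidate component of a haptic timbre.
func hapticTimbre(partials: [(hz: Double, amp: Double)],
                  seconds: Double, sampleRate: Double = 48_000) -> [Float] {
    let n = Int(seconds * sampleRate)
    return (0..<n).map { i in
        let t = Double(i) / sampleRate
        let sample = partials.reduce(0.0) { sum, p in
            sum + p.amp * sin(2 * .pi * p.hz * t)
        }
        return Float(sample)
    }
}

// A fundamental at 40 Hz with two weaker partials:
let buffer = hapticTimbre(partials: [(40, 0.6), (80, 0.25), (120, 0.15)],
                          seconds: 2)
```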