New Gadget Madly In Hope
A blog about my iPhone dev efforts
I'm doing an experiment in PolyHarp that might make its way elsewhere.
Here's a quick summary of the problem:
MIDI now officially supports a standard called MPE (MIDI Polyphonic Expression), the idea of which is to bring to individual notes features that are normally applied to whole channels: pitch bends, CC values, and the like. This gives lots of polyphonic expression where it wasn't supported before. MPE works in a backward-compatible manner by leveraging note assignment to put each note on its own channel, with all expression (like pitch bends) preset for that note.
This is flexible, but it is subject to subtleties associated with note assignment. MPE also divides the 16 MIDI channels into "zones", with one channel per zone reserved to receive commands and behave like a single channel normally would, but applied to all the channels in that zone.
MPE software sits on top of MIDI, and if you have a legacy device or set of devices that can do non-OMNI-mode polyphonic multi-timbralism, like some old General MIDI synthesizers, you can play some microtonal music on them without any firmware changes.
While this is a powerful general solution, it has a big problem: it maxes out at 15-note polyphony.
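The per-note channel assignment described above can be sketched roughly like this. This is my own minimal illustration of the idea, not MPE-conformant code or anything from PolyHarp: one zone, one master channel, and each sounding note borrowing a free member channel so its pitch bend can be set independently. The class and method names are my assumptions.

```python
# Minimal sketch of MPE-style per-note channel assignment (an assumption
# of how a simple allocator might work, not the actual MPE spec logic).

class MPEAllocator:
    def __init__(self, member_channels=range(1, 16)):
        # Channel 0 is reserved as the zone master; the other 15 are
        # member channels, one per sounding note -- hence the 15-note cap.
        self.free = list(member_channels)
        self.active = {}                     # note id -> member channel

    def note_on(self, note_id):
        if not self.free:
            raise RuntimeError("MPE zone full: at most 15 sounding notes")
        channel = self.free.pop(0)
        self.active[note_id] = channel
        return channel                       # send Note On (+ bends) here

    def note_off(self, note_id):
        channel = self.active.pop(note_id)
        self.free.append(channel)            # channel can be reused
        return channel
```

The 15-note ceiling falls directly out of the arithmetic: one zone master plus fifteen member channels is all 16 MIDI channels.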
PolyHarp is a polyphonic microtonal synthesizer. That means each virtual string can be set to a pitch that has nothing to do with the 12ED2 system that is hard-coded into MIDI (and much other Western music). PolyHarp's internal synthesizer doesn't care about 12ED2; it'll play any old pitches. But when acting as a MIDI controller, it has to be more careful. I will probably add MPE support, which will allow some of the features that PolyHarp already supports, like different sounds and behaviors controlled by different string areas. But besides microtonality, PolyHarp's other big feature is massive polyphony. This is what lets it detune strings or make super dense chords. Limiting it to 15 notes is indeed limiting.
So I came up with another compromise that I call MIDI192. Like MPE, it channelizes notes to get each one the pitch bend that applies to it. Unlike MPE, it cannot play any microtonal pitch perfectly. MIDI192 lets you set up a subset of the 16 channels and assigns each channel to one degree of an equal division of the semitone by the number of channels in use. The subset is there so it can exclude channels that don't care much about pitch anyway, like channel 10, often used for percussion. When using all 16 channels, instead of 12 notes per octave you get 192 notes per octave (whence the name).

This means that for an arbitrary tuning, you are within 100/16 = 6.25 cents of the real pitch, and because it rounds to the nearest step, it's really 3.125 cents or better. While not perfect, that's pretty good, and there's no polyphony limit other than what the synth can handle. And for some equal temperaments that are multiples of 12, like 24, 72, or 96, it IS perfect, at least in theory.

Every channel that's used gets a pitch bend sent to it when the MIDI192 system is chosen. The amount is based on how much a "real" pitch bend, from the wheel, should bend; by default, I'm using 6 semitones. A smaller bend range will make the preset bends a little more accurate. There's some tweaking that goes on when a real pitch bend comes in: it adds the preset bend to the new bend and transmits the sum to all affected channels. Because of the polyphony, you can't bend individual notes, though.
MIDI192 doesn't have a zone system like MPE, but I might be able to block out channels into zones similarly by using channel masks, essentially trading some pitch accuracy for multi-timbrality. We'll see!
I've put a test version into PolyHarp; it would be an option when you set up MIDI: either good old MIDI or this new system (or MPE...). You'd be able to set the bend distance and the channel mask as well.
Here's a video about it! https://www.youtube.com/watch?v=-ebJZRZFLFg
More as this develops!
PolyHarp got released a while ago, but meanwhile little bugs keep popping up in the MIDI department.
Most of those have to do with the fact that a lot of MIDI synths would like a Note Off after a while. A Note Off is really kind of independent of a Note On. PolyHarp has a timer that starts with each Note On to shut the note off after a while. The natural way to turn notes off is when they are damped or run out of energy, but you can't test the energy of a MIDI note, which is why PolyHarp just schedules a Note Off.
There was a bug where I was scheduling too many Note Offs, which I fixed a few weeks ago, but there's a more subtle bug:
Play one note and then play it again: the first note's scheduled Note Off would shut off the new note prematurely. I've now fixed this in PolyHarp 1.0.5.
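One common way to fix that kind of retrigger bug is a generation token: each Note On stamps its (channel, note) pair with a fresh generation, and a scheduled Note Off only fires if its generation is still current. This is a hypothetical sketch of that pattern, not PolyHarp's actual fix; all the names here are my own.

```python
# Sketch of a generation-token scheme for scheduled Note Offs (assumed
# pattern; PolyHarp's real fix may differ).

import itertools

class NoteOffScheduler:
    def __init__(self):
        self.generation = {}           # (channel, note) -> latest generation
        self.counter = itertools.count()

    def note_on(self, channel, note):
        gen = next(self.counter)
        self.generation[(channel, note)] = gen
        return gen                     # the shut-off timer keeps this token

    def timer_fired(self, channel, note, gen):
        """Return True if this timer's Note Off should actually be sent."""
        if self.generation.get((channel, note)) != gen:
            return False               # note was retriggered; timer is stale
        del self.generation[(channel, note)]
        return True
```

A stale timer from the first press now sees that its token no longer matches and does nothing, so the retriggered note keeps sounding until its own timer fires.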
PolyHarp 1.0.1 released
PolyHarp was released a few weeks ago, and I already have an update for it, which fixes a few typos. I'm continuing to rewrite the instructions - much of which refers to outdated UI and features!
PolyHarp now provides a lot of control: you can make the strings play with a touch, bend, retune, and be confined to string areas onscreen. You can control the chord bars from MIDI in or Audiobus Remote.
The chords can be made out of all kinds of intervals, repeat, and be displayed out of order. The string areas can be any four-cornered shape, twist and overlap. They can be colored in harmonious ways.
Go check it out at http://polyharp.com
Here are some projects I'm working on:
There's a new app not under my name in the app store that I wrote: Verna! The food preference app. Read about it at http://vernaapp.com .
Unreleased apps in various states of completion:
PolyHarp: I wanted to get this out in December, but it looked bad on iPhone X (in emulation at least), so I am trying to get it to behave. As a side effect, it now runs in Landscape like lots of other music apps, although I prefer Portrait. Some parts are still a little wonky in Landscape; I'm looking into them.
AUMI Sings: this is a project much like AUMI, but more sophisticated musically. AUMI Sings is designed to allow voiceless people to sing in a choir. This problem is very difficult, but it mainly means extending the concept of selecting a sound with a cursor to being able to change the choices of sounds with the cursor, and also modulating sounds with a cursor. While it's initially concerned with vocal sounds, it actually opens up the concept to more sophisticated music making. Besides having a new, more logical and extensible codebase, and directly participating in the iOS musical ecosystem, it will be more rigorous in its configurations and eventually have more kinds of trackers.
AUMI: AUMI is slowly building its way toward 2.0 from its already more powerful 1.7 version (which has time quantizations and face tracking). The internals are being fixed up so that instruments can now be grouped in sections (because there are now a great number of them), and video options and other obscurities will be moved out of the settings part of the instructions. I hope to redo the instructions so they are not one big document. And much more!
Fortuna tuner: Fortuna uses visualizations of the sound itself to show you how in tune your instrument is. It compares live audio to pitches derived from intervals taken from the vast set of Scala scales. I'm extending the Scala format somewhat, adding base pitches and various tags so you can search for them intelligently. This way, I can also add standard tunings for instruments. When done, it should be able to import and export scales in Scala format, and you should be able to build them using my Tone Spiral interface and also by analyzing a recording (or even live audio). Eventually, it should be available as an AUv3 (it might be a kind of a pig though), and used in a sound chain.
Drongen: The successor to Droneo is underway and has a way to go. Basically, it'll have a more sophisticated graph of transitions between timbre states and intervals, rhythmic control, no limits on simultaneous voices, and cross-communication so that decisions made in one node can influence voices in other nodes. There will be special "views" of the graph to make it easier to set up in terms of pitch and timbre. Timbre will be expanded to samples and generated wavetables (impulse x formant model). It kind of makes my head hurt.
Fit2fake: My unreleasable demo program is really fun! It takes summaries from the New York Times API and makes up fake news based on mashing them together. It would violate NYTimes API restrictions were I to release it. I may rebuild it, though, so it can take a variety of corpuses from a server I'd set up.
Sitting In A Room: Another special project that made a really long delay recording, meant to be played through a speaker, like the famous "I Am Sitting In A Room" by Alvin Lucier. I may release it with his permission. http://jhhl.net/Video/IASIAR.mov
Yes Session Refreshin'
Apple thinks Yes Session is "too old", so I'm updating it to work with bigger iOS phone devices.
It's not any smarter than it used to be, no new messages, but at least it's going to know about iPhone 7s and other large phones.
Yes Session, while not very musical, is an interesting thought experiment, and I sometimes need its encouragement myself!