New Gadget Madly In Hope
A blog about my iPhone dev efforts
Here are some projects I'm working on:
There's a new app in the App Store that I wrote, though not under my name: Verna! The food preference app. Read about it at http://vernaapp.com.
Unreleased apps in various states of completion:
PolyHarp: I wanted to get this out in December, but it looked bad on iPhone X (in emulation, at least), so I am trying to get it to behave. As a side effect, it now runs in Landscape like lots of other music apps, although I prefer Portrait. Some parts are still a little wonky in Landscape; I'm looking into them.
AUMI Sings: this is a project much like AUMI, but more sophisticated musically. AUMI Sings is designed to allow voiceless people to sing in a choir. This problem is very difficult, but it mainly means extending the concept of selecting a sound with a cursor to being able to change the choices of sounds with the cursor, and also modulating sounds with a cursor. While it's initially concerned with vocal sounds, it actually opens up the concept to more sophisticated music making. Besides having a new, more logical and extensible codebase, and directly participating in the iOS musical ecosystem, it will be more rigorous in its configurations and eventually have more kinds of trackers.
AUMI: AUMI is slowly building its way toward 2.0 from its already more powerful 1.7 version (which has time quantizations and face tracking). The internals are being fixed up so that instruments can now be grouped in sections (because there are now a great number of them), video options and other obscurities will be moved out of the settings part of the instructions, and I hope to redo the instructions so they are not one big document. And much more!
Fortuna tuner: Fortuna uses visualizations of the sound itself to show you how in tune your instrument is. It compares live audio to pitches derived from intervals taken from the vast set of Scala scales. I'm extending the Scala format somewhat, adding base pitches and various tags so you can search for them intelligently. This way, I can also add standard tunings for instruments. When done, it should be able to import and export scales in Scala format, and you should be able to build them using my Tone Spiral interface and also by analyzing a recording (or even live audio). Eventually, it should be available as an AUv3 (it might be a kind of a pig though), and used in a sound chain.
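The core comparison Fortuna makes - reducing a live pitch into the octave above a base pitch and measuring its distance to the nearest scale degree in cents - can be sketched in a few lines. This is illustrative Python, not Fortuna's actual code; the function names, the base pitch, and the example scale are mine:

```python
import math

def cents(ratio):
    """Size of a frequency ratio in cents (1200 cents per octave)."""
    return 1200.0 * math.log2(ratio)

def nearest_degree(freq_hz, base_hz, scale_ratios):
    """Find the scale degree closest to freq_hz and the error in cents.

    scale_ratios: interval ratios above base_hz, e.g. parsed from a
    Scala file, covering one octave; the octave (2/1) is appended.
    """
    # Reduce the detected pitch into the octave above the base pitch.
    octave = math.floor(math.log2(freq_hz / base_hz))
    reduced = freq_hz / (base_hz * 2.0 ** octave)
    best = min(scale_ratios + [2.0], key=lambda r: abs(cents(reduced / r)))
    return best, cents(reduced / best)

# A just-intonation triad above a hypothetical base pitch of A=440 Hz.
ratio, err = nearest_degree(660.0, 440.0, [1.0, 5/4, 3/2])
# 660/440 is exactly 3/2, so the nearest degree is 3/2 with 0 cents error.
```

A real tuner would get freq_hz from a pitch detector running on live audio; the octave reduction is what lets one octave of Scala ratios cover the whole range of an instrument.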
Drongen: The successor to Droneo is underway and has a way to go. Basically, it'll have a more sophisticated graph of transitions between timbre states and intervals, rhythmic control, no limits on simultaneous voices, and cross-communication so that decisions made in one node can influence voices in other nodes. There will be special "views" of the graph to make it easier to set up in terms of pitch and timbre. Timbre will be expanded to samples and generated wavetables (impulse x formant model). It kind of makes my head hurt.
Fit2fake: My unreleasable demo program is really fun! It takes summaries from the New York Times API and makes up fake news based on mashing them together. It would violate NYTimes API restrictions were I to release it. I may rebuild it, though, so it can take a variety of corpuses from a server I'd set up.
Sitting In A Room: Another special project: a really long delay recording, meant to be played through a speaker, like the famous "I Am Sitting in a Room" by Alvin Lucier. I may release it with his permission. http://jhhl.net/Video/IASIAR.mov
Hi people who still read this!
Apple recently decided that old apps that haven't been updated for a while will get purged from the App Store, and three of mine are now not available: Enumero, Banshee, and Tondo.
I've actually wanted to update Tondo and Ellipsynth for some time, and they may be back at some point in refreshed form.
Banshee, I don't think anyone cared about; it had about 374 sales over its lifetime. I'll definitely reuse some of its ideas, though. I have a funny variant that says the name of the note you are playing at you!
Hi Folks, I haven't checked in for a while!
Here's the story:
I had a pretty good version of Droneo 1.4 going in January... but it had a number of bad issues:
For years, the control rate calculations, like evolution and patterns, were done at the beginning of the buffer computation cycle. But changing the engine to Audiobus meant that the size of the buffer - and the speed at which it updates - was not under Droneo's control. So evolution would proceed at an undetermined pace.
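The fix amounts to deriving elapsed musical time from the buffer length instead of counting render callbacks. A minimal sketch of the idea in Python - the rates, names, and class are hypothetical, not Droneo's actual code:

```python
CONTROL_RATE = 100.0   # evolution updates per second (hypothetical value)
SAMPLE_RATE = 44100.0

class Evolver:
    """Advance control-rate state at a fixed pace, regardless of how
    many frames the host asks for in each render callback."""
    def __init__(self):
        self.phase = 0.0   # fractional control ticks accumulated
        self.ticks = 0     # control updates performed so far

    def render(self, n_frames):
        # Convert this buffer's length into elapsed control ticks, so a
        # 256-frame and a 4096-frame buffer evolve at the same speed.
        self.phase += n_frames * CONTROL_RATE / SAMPLE_RATE
        while self.phase >= 1.0:
            self.phase -= 1.0
            self.ticks += 1   # ...update evolution/pattern state here

e_small, e_big = Evolver(), Evolver()
for _ in range(16):       # sixteen buffers of 256 frames...
    e_small.render(256)
e_big.render(4096)        # ...or one buffer of 4096 frames:
# both have advanced through the same amount of musical time.
```

Updating once at the top of each callback, by contrast, makes the evolution rate track the host's buffer size - exactly the bug described above.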
The conversion to AB also made my good old iPhone 4 hiccup with more than about 4 reeds, since it wasn't efficient enough, and also maybe because the audio IO is now floating point based, which doesn't seem to bother the newer devices but might be slower on the old ones. Why do I care? Because the iPhone 4 is my home entertainment system device.
Droneo 1.4 had a number of innovations that you will never see. But that won't matter because I'm working on
Droneo 2.0 (and abandoning Dronica, which was to be basically what Droneo 2.0 will be).
I've been going back and forth about whether to make a new app or just replace the old one since about 2011!
Anyway, Droneo 2.0 is going to be pretty modern. It'll support all the iOS device sizes in portrait and landscape.
It will deliver the promise of Dronica, that is, it simplifies and empowers the features of Droneo: no limits on number of reeds, Churning, Consorts, and Mirrors are replaced with evolution between states (scaled beats), and the pace of that evolution is governed by a heartbeat. Voice banks are being replaced with a tag-based system, and old banks and patches will be legible. There'll be a tighter connection to the Droneo Voices web service, so you can post and retrieve Droneo voices to the public. I'm not sure, but I think you'll be able to stuff them in iCloud too, and pick up your own Drones on different devices.
The Tone Spiral is gaining power as it includes thousands of Scala scales as Guides. It may also be able to import and export scales from Gestrument.
The interval spec is also expanding, adding ideas that are also in PolyHarp: for example, I have a notation that expresses a number as its prime factors, so you won't have to figure out what 3^24 is, etc. (in that case, it'll be "24;" more on that later). This makes a lot of sense if you want to quickly suss out the consonance between various intervals - you can just subtract the powers of the primes. Of course, negatives and other real numbers are allowed: ";.5" is the square root of two, for example. PolyHarp's parade of Roman numeral intervals (I, IV, VI#...) is also supported.
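The prime-exponent idea can be sketched in a few lines of illustrative Python. The exact ";"-delimited syntax is still in flux, so this only shows the underlying arithmetic; the names are mine, and real (non-integer) exponents like the square root of two would need floats rather than exact fractions:

```python
from fractions import Fraction

PRIMES = [2, 3, 5, 7, 11, 13]

def ratio_from_exponents(exps):
    """Interval ratio from a vector of prime exponents:
    exps[0] is the power of 2, exps[1] the power of 3, and so on.
    Integer exponents give exact fractions, e.g. (-1, 1) -> 3/2."""
    r = Fraction(1)
    for p, e in zip(PRIMES, exps):
        r *= Fraction(p) ** e
    return r

def interval_between(a, b):
    """Exponent vector of the interval between two intervals:
    just subtract the powers of the primes."""
    return tuple(x - y for x, y in zip(a, b))

fifth = ratio_from_exponents((-1, 1))           # 2^-1 * 3 = 3/2
major_third = ratio_from_exponents((-2, 0, 1))  # 2^-2 * 5 = 5/4
# From the major third up to the fifth: (1, 1, -1) -> 6/5, a minor third.
step = ratio_from_exponents(interval_between((-1, 1, 0), (-2, 0, 1)))
```

The payoff is visible in the last line: comparing two intervals never requires multiplying ratios out, just subtracting small exponent vectors.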
It'll be localizable. (I pity the translators...)
The wavetables (timbres) that are the heart of the synthesis are getting updated: they'll have better resolution, may contain several cycles of waves, and may also allow generating them and importing them from audio copy.
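For the curious, the technique these timbres rest on - a single-cycle wavetable read at an arbitrary rate, with linear interpolation between adjacent entries - looks roughly like this. An illustrative Python sketch, not Droneo's engine; table size and names are mine:

```python
import math

TABLE_SIZE = 2048  # "better resolution" mostly means a longer table

# One cycle of a waveform sampled into a table (a sine here, for clarity;
# any single-cycle timbre works the same way).
table = [math.sin(2 * math.pi * i / TABLE_SIZE) for i in range(TABLE_SIZE)]

def render(freq_hz, sample_rate, n_frames, phase=0.0):
    """Play the table at freq_hz, linearly interpolating between
    adjacent table entries to reduce stepping noise."""
    out = []
    step = freq_hz * TABLE_SIZE / sample_rate  # table positions per sample
    for _ in range(n_frames):
        i = int(phase)
        frac = phase - i
        a, b = table[i], table[(i + 1) % TABLE_SIZE]
        out.append(a + frac * (b - a))         # linear interpolation
        phase = (phase + step) % TABLE_SIZE
    return out

samples = render(440.0, 44100.0, 64)
```

Multi-cycle tables extend this by crossfading between several such tables over time, which is where the evolving timbres come from.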
I mentioned the beats in passing, basically, a beat is a bundle of volume, pan, timbre and a passel of evolution parameters.
It'll save voices to Audiobus and also act as an IAA generator.
The instructions won't be one huge page anymore. I'll see if I can make it so that they exist both as a web page and a PDF doc. I may need to write some scripts to help me with that.
So, a lot of work! I'm about 1/4 done with it. So that's Droneo 2.0.
PolyHarp is the other main thing obsessing me. There are some corners of it that still need some work. PolyHarp builds scales out of chords; it can act as a guitar, a piano, or a multi-chord strummed device with intervals plucked from hell, in 200-voice polyphony, or act as a MIDI controller. AB, IAA, etc. It is about the most powerful autoharp fantasy ever.
I want to see if I can make the internal synthesizer more efficient and powerful and sound better. It's OK so far, but I think it could be better.
Then there are fixes to AUMI, which I want to experiment with some more in terms of a motion flow tracker. Fast movement is hard to track right now; I want to make the frame rate something fierce!
So that's what's going on!
AUMI! and more
On June 1, 2013, my app AUMI, written for Deep Listening Institute, went live! AUMI is a music app that tracks motion in video to trigger sounds or play MIDI.
We have great hopes that AUMI will allow severely disabled people to play music, get some audio biofeedback, and have some fun!
We, the temporarily abled, can enjoy it too. It runs on all iOS devices with front-facing cameras.
Meanwhile, iOS 7 is removing the way most of my synthesis apps make their audio. This probably means that people running my apps on older devices will not be able to update them, because Apple doesn't support old iOS versions too well. They think history started with the iPhone 4 and the iPad (1), and that original iPad is rapidly being left behind.
So, I'll have to swap out my audio engines and get all that running.
Some apps may have to be retired out of the store, I think.
synthicity itself 1.2.1 is in the store
It's not that different, but it's a little more efficient and can do 64 voices if you are careful.