AlephOne had been coming along swell. See videos of it here:
Especially this one, which is AlephOne controlling ThumbJam over MIDI, where the chorusing effect is literally done by MIDI note duplication rather than as a post-processing effect:
It was designed to be a clean re-design of everything I learned from doing Geo Synthesizer, Pythagoras (the initial Geo code), and Mugician, with all the icky parts rethought. It was largely successful in that regard: it cleaned up bugs and made the code much more understandable, mostly by isolating components nicely. But between worldly distractions that have nothing to do with apps, the utter crash in end-user interest a few months after Geo was released, and watching what others release, I need to step back and do some soul searching.
First, I released some of the important parts of AlephOne (my private project that I had only shared with a few people) on github, without any licensing terms written (I tried a license with Mugician, and they don't matter unless you really want to get lawyers involved at some point). This code is at:
The Python vDSPCompiler
It's a mash of two ideas. The first was an abortive attempt to automate turning my synthesis code into SIMD instructions, which would greatly speed up AlephOne's internal engine. It was the beginning of a compiler, written in Python, that ingests a LISP-syntax language and generates vDSP instructions to render the entire audio buffer in parallel across samples. Most of AlephOne's timbre was already written by hand when this started. Until I figure out the few parts that would let me generate the entire effects chain (reverb, chorus, etc.), the DSPCompiler class isn't of practical use. For now, trying to automate it is a giant distraction; I might get back to it later.
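To make the idea concrete, here is a minimal sketch of the kind of translation the compiler was meant to produce: the same gain-and-mix step written as a per-sample loop and as whole-buffer vDSP calls. The function names, buffer sizes, and the gain/mix step are illustrative, not AlephOne's actual voice code.

```c
#include <Accelerate/Accelerate.h>

/* Per-sample version: one multiply-add at a time. */
void mix_voice_scalar(const float *voice, float gain, float *out, int n)
{
    for (int i = 0; i < n; i++) {
        out[i] += voice[i] * gain;
    }
}

/* vDSP version: the same operation expressed over the whole buffer,
 * which is what the compiler was supposed to emit automatically. */
void mix_voice_vdsp(const float *voice, float gain, float *out, int n)
{
    float scaled[256];                       /* assumes n <= 256 for this sketch */
    vDSP_vsmul(voice, 1, &gain, scaled, 1, (vDSP_Length)n);  /* scaled = voice * gain */
    vDSP_vadd(out, 1, scaled, 1, out, 1, (vDSP_Length)n);    /* out += scaled */
}
```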
The second part is VERY important. It is the MIDI code that is used in AlephOne. Fretless.* and DeMIDI.* are the functions that generate and parse MIDI from input gestures. Fretless implements all of my rants about everything that needs to be fixed with MIDI. It lets you treat MIDI much like OSC, and frees it from being stuck to twelve-tone scales and bent notes. MIDI's abstraction here is appropriate for piano-like keyboards, and hideously, disastrously wrong for everything else. MIDI's design is liquid brain damage that gets injected into every effort to create a fine expressive instrument: it forces you to choose between stupid pitch handling and stupid polyphony. Attempts to fix it with MIDI HD look like they will be incompatible, yet tied to MIDI's past - the worst of both worlds. I am bound to MIDI only because, right now, it's the only reasonable IPC between synths and controllers. You can abuse it and take advantage of ambiguities in the spec to get very close to full polyphonic pitch control, at the cost of moving all complexity into the controller and being willfully incompatible with a lot of stuff. So that's what Fretless.* does. It was explicitly designed and tested against ThumbJam, SampleWiz, and Arctic.
So if you are designing an app that wants a fully bendy instrument with extremely natural pitch control, then this code should clear up what the hard issues are when rendering to MIDI. It boils down to creating a floating point representation of MIDI note numbers, at the cost of being limited to 16-note polyphony (by forcing one note per channel and using up to 16 of them). There is no notion of notes in this API. Rounding off pitches is the job of the controller (doing it anywhere else is premature rounding that loses important information). Generally, you need to keep separate the notions of what pitch the gesture actually implies (always slightly out of tune), what pitch you want to fret/autotune to (the "chromatic" notes and scales), and what pitch is actually rendered (somewhere in between these two things). Because touch screens draw the interface right under the playing surface, all of these notions must be available in the controller; the synth only needs to know what is actually rendered. MIDI gets it backwards, because in 1980, controllers were dumb devices and the synths had the brains. It doesn't actually work like that any more. If nothing else, the new mobile device paradigm is to *install* the patch into the controller/synth device and avoid talking over the network, routing the audio instead.
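As a sketch of what one-note-per-channel rendering looks like in practice: this is illustrative C, not the Fretless API itself; the midi_send transport and the assumed plus-or-minus 2 semitone bend range on the receiving synth are mine.

```c
#include <math.h>
#include <stdint.h>

#define BEND_RANGE_SEMIS 2.0   /* assumed bend range on the synth */
#define CHANNELS 16

extern void midi_send(const uint8_t *bytes, int len);  /* hypothetical transport */

static int next_channel = 0;

/* Turn a fractional note number (e.g. 61.37) into bend + note-on on its own
 * channel, so every voice gets a completely independent pitch bend. */
int note_on_float(double fnote, uint8_t velocity)
{
    int channel = next_channel;
    next_channel = (next_channel + 1) % CHANNELS;       /* channel cycling */

    int note = (int)floor(fnote + 0.5);                  /* nearest integer note */
    double diff = fnote - note;                          /* -0.5 .. +0.5 semitones */
    int bend = 8192 + (int)lround(diff * 8192.0 / BEND_RANGE_SEMIS);
    if (bend < 0) bend = 0;
    if (bend > 16383) bend = 16383;

    uint8_t bendmsg[3] = { (uint8_t)(0xE0 | channel),
                           (uint8_t)(bend & 0x7F), (uint8_t)(bend >> 7) };
    uint8_t on[3] = { (uint8_t)(0x90 | channel), (uint8_t)note, velocity };

    midi_send(bendmsg, 3);   /* set the bend before the note sounds */
    midi_send(on, 3);
    return channel;          /* caller keeps this for later bends and note-off */
}
```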
I added a new notion of legato and polyphony handling, because it's absolutely necessary for string instruments. Polyphony isn't a mode for the instrument; it's a gestural phenomenon that depends on the instrument, and it is created at the controller. It is similar to, but separate from, channels: the instrument itself has to span many channels (to cover active and releasing notes) so that bends can be completely independent, but on top of that you can put notes into polyphony groups, which control when notes are silenced and re-enabled for solo-mode and string behavior such as trills. Because it is not a synthesizer/controller mode, it has to make allowances so that chording and polyphony can be done at the same time. Related is legato, or whether the note attack is re-triggered. Generally, the first note down in a polyphony group plays the attack and every other note is a continuation of the current phase. But note that legato and polyphony are separate. In a real string instrument, the decision to pick or legato a note is made on every note - it's not a mode for the instrument that gets turned on or off.
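Here is a minimal sketch of the polyphony-group idea (the struct and names are mine, not Fretless's): only the first note sounding in a group gets a fresh attack, and the controller can still override that decision on every note, the way a string player chooses to pick or slur each note.

```c
/* Hypothetical polyphony-group bookkeeping; the real Fretless API differs. */
typedef struct {
    int active_count;   /* notes currently held in this group */
} PolyGroup;

/* Called on note-on. Returns 1 if this note should play its attack,
 * 0 if it should continue the current phase (legato).
 * force_attack lets the controller decide per note. */
int group_note_on(PolyGroup *g, int force_attack)
{
    int attack = force_attack || (g->active_count == 0);
    g->active_count++;
    return attack;
}

/* Called on note-off; once the group empties, the next note attacks again. */
void group_note_off(PolyGroup *g)
{
    if (g->active_count > 0) {
        g->active_count--;
    }
}
```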
Most importantly, it implements note-ties for MIDI. This is such a fundamental concept that, had it existed in the standard, most of the other broken-ness of MIDI might never have happened. MIDI allows the definition of pitch bend to be changed, because it assumes that you are on some kind of keyboard with a pitch wheel. But pitch wheels have a top, a bottom, and a center position, and the position is 14 bits. The standard interpretation is plus or minus a whole tone, and it can be increased to as much as 12 whole tones up or down. This means that you still can't do arbitrary bend sizes (let alone *independent* bends of arbitrary size). If you have note-ties, then you can dispense with all of this nonsense. If you bend a note A up to the A# position, you can do a note-tie between the A (bent up a semitone) and a new note named A#, with the bend re-centered. You can continue to do this for as many octaves as you want, at full pitch resolution. This is exactly how written music notation actually behaves as well. Standard synths that don't understand the note-tie will experience a note-retrigger as the note is bent beyond its full up or down position. ThumbJam, SampleWiz, and Arctic understand these note-ties.
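A minimal sketch of how a note-tie could play out on the wire, assuming the one-note-per-channel scheme above. The TIE_CC marker here is purely hypothetical; it is not necessarily how Fretless actually flags the tie, and the exact message ordering matters in the real code.

```c
#include <stdint.h>

extern void midi_send(const uint8_t *bytes, int len);  /* hypothetical transport */

#define TIE_CC 85  /* hypothetical CC meaning "the next note-on continues this voice" */

/* Rename a sounding note without an audible break, e.g. an A bent up a
 * semitone becomes an A# with the bend re-centered. A synth that understands
 * the tie keeps the same voice; an ordinary synth just hears a retrigger. */
void note_tie(int channel, uint8_t old_note, uint8_t new_note, int new_bend /* 0..16383 */)
{
    uint8_t tie[3]  = { (uint8_t)(0xB0 | channel), TIE_CC, 127 };
    uint8_t off[3]  = { (uint8_t)(0x80 | channel), old_note, 0 };
    uint8_t bend[3] = { (uint8_t)(0xE0 | channel),
                        (uint8_t)(new_bend & 0x7F), (uint8_t)(new_bend >> 7) };
    uint8_t on[3]   = { (uint8_t)(0x90 | channel), new_note, 100 };

    midi_send(tie, 3);    /* announce that the next note-on is a continuation */
    midi_send(off, 3);    /* end the old note name */
    midi_send(bend, 3);   /* re-center the bend for the new name */
    midi_send(on, 3);     /* start the new note name */
}
```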
I think the internal MIDI engine, played against these synths, is just awesome, especially on the phone. But experience with Geo suggests that maybe 10% of all people downloading the instrument can figure out how to use MIDI at all, and of those, maybe 10% ever get an understanding of why channel cycling is necessary. So if I release AlephOne as is, I will get pummeled with a lot of "why doesn't this just work!?" complaints from people who can't figure out the tool (every hardware MIDI synth is different, like VCR programming, so there is no manual you can just follow; you have to know what you are doing). It's like trying to sell circular saws to people who need them without getting sued by people who should not have bought them. I am running out of UDID slots, so I have to either just release it as it is or shelve the project, maybe until AudioBus provides better options. (And Oh It Might!)
Original rant here:
Internal Audio Engine
I am here because I got a bit stuck and distracted when it came time to move from just doing the MIDI part to building my own internal engine to consume the MIDI. I had to do this because no existing MIDI engine does the behavior I need 100% correctly. So I am back in the endless task of listening to minor changes with headphones on and tweaking parameters to get the internal engine sounding good and performant. I have no idea when AlephOne will release, but I have always had a small number of highly enthusiastic users. I am looking for inspiration at this point. I have fooled around with bringing libpd back in (which Pythagoras actually used at one point) and with trying CSound. I will see where it goes.
But until then, take a look at DSPCompiler if you are using MIDI and have read some of my rants about MIDI. They aren't theoretical problems. I would move on to OSC if I could, but for talking to other apps in the background with low latency, MIDI (or an abuse of it) seems to be the only option right now.