https://www.akaipro.com/midimix
https://computermusicresource.com/MIDI.Commands.html
http://man.openbsd.org/sndiod#MIDI_CONTROL
Update: that was not a great introduction to the commands, so here are my thoughts. If the high bit is set, the byte is a command (status) byte; otherwise it is data. Commands are split into two half-bytes (nibbles): the upper four bits say which command it is (the high bit is always set, leaving 3 bits, i.e. 8 commands), and the lower four are the channel. Some commands are followed by two data bytes (high bit low, remember) and some by one. The main exception is the system command, which is complicated.
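A minimal sketch of that decoding in Python (channel-voice messages only; system messages, like I said, are complicated and left out here):

```python
def parse_status(byte):
    """Split a MIDI status byte into command and channel nibbles.

    Status bytes have the high bit set (0x80-0xFF); data bytes don't.
    The upper nibble is the command (high bit always set, so 8 commands);
    the lower nibble is the channel, except for 0xF system messages.
    """
    assert byte & 0x80, "not a status byte"
    command = byte >> 4        # 0x8..0xF
    channel = byte & 0x0F
    return command, channel

# How many data bytes follow each channel-voice command nibble
DATA_BYTES = {
    0x8: 2,  # note off
    0x9: 2,  # note on
    0xA: 2,  # polyphonic aftertouch
    0xB: 2,  # control change
    0xC: 1,  # program change
    0xD: 1,  # channel aftertouch
    0xE: 2,  # pitch bend
}
```

So 0x93 is a note-on (0x9) on channel 4 (nibble 3), followed by two data bytes.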
https://www.elgato.com/us/en/p/stream-deck-mini
https://github.com/streamdeck-linux-gui/streamdeck-linux-gui
Novation has a web browser editor for customizing the 8 MIDI banks, using what I assume is the WebUSB protocol, so it can be customized on tablets too. It’s been a joy to use for setting up custom workflows, e.g. one for dawless, one for Ableton, etc.
It also feels like corporate behemoths are running the show. You can register as a site user for free, and you can buy a SysEx ID number for $240 a year, but to actually have a say in anything it looks like you need to be a corporate member.
https://midi.org/corporate-membership-app
Most of the interesting advances in musical instruments these days seem to be coming from small companies making unusual products. Things like the Osmose, the Linnstrument, the Continuum, and so on. MPE serves that community decently well, but I feel like we could at least in theory do a lot better by ditching the piano-centric assumptions built in to MIDI and hashing out a new protocol as a community effort.
MIDI’s cousin RS-422 existed in a technical environment and probably enjoys as much use as MIDI, but technicians have many protocols to choose from. The artist just needs to get gear speaking MIDI to feel in control, and the artist generally has fewer stakeholders, which I believe helps maintain continuity of an implementation.
RS-422 is a differential electrical interface standard, while MIDI specifies both a current-loop electrical interface and a high-level protocol. The latter is probably MIDI’s most defining characteristic today, since the protocol can run over any serial interface. Today it most commonly runs over USB or its own current-loop signaling, and in the past it was occasionally run over RS-232 or RS-422. This is to say, it’s a stretch to call MIDI and RS-422 cousins. (Standard MIDI interfaces, typically seen with the 5-pin DIN connector, are not RS-422, if that is what you’re thinking.)
I remember a project in which I used either 422 or 485 as a jumping-off point for MIDI, because I thought at the time that MIDI grew from that spec.
https://support.roland.com/hc/en-us/articles/201951959-PMA-5...
That you can run the MIDI protocol over RS-422 is no more notable than that you can run it over smoke signals or semaphores. In practice almost every common short to medium range interconnect has been used commercially for MIDI at some point, including SCSI, Ethernet, bespoke fiber in addition to what was already said.
RS-422 is a specification at the physical/electrical layer.
MIDI is a protocol at the logical layer.
You can transmit MIDI over RS-422.
However you cannot transmit RS-422 over MIDI. That makes no logical sense. Like saying you could transmit a chicken sandwich over HTTP.
I’ve used MIDI “as paint”.
I’ve written music using code-to-MIDI(1), and written “cross-instrument” music, i.e. using my keyboard as a drum machine.
But these days MIDI is chiefly an archival method for me.
Everything I play on my keyboard is recorded; the result is much smaller than a comparable audio recording, the recording has “forced fidelity” by design, the music is “searchable”, and I can pipe the MIDI through transcription software (which would be near impossible from an audio recording today).
Because it was the CSO, the biggest part of the budget went to sample cards that played instruments up to the conductor’s standard. Each exhibit could create a composition and play it back like a symphony.
Unsure if this comment was a refutation of my own or just a fun cherry on top anecdote, but in case I was unclear I was trying to say that “I completely agree with the shorthand of calling midi ‘paint’, AND here’s a cool use case that I think is fundamental to the technology but rarely held up as a benefit of said technology, namely archiving and assisted transcription of musical composition.”
Absolutely. A shambles hopefully never repeated.
The biggest change is that 2.0 is duplex which allows devices to discover each other on the same MIDI network. No more in/out/thru, it's all just one USB (or ethernet, or wireless) connection from a device to every other device. It also allows property exchange, which means devices can exchange semantic information about their configuration and parameters with each other.
On top of that, individual voices can now carry pitch and control information, while devices can exchange tuning information. That's a big deal for microtonal and non-western music.
It would be hard to undersell how monumental MIDI 2.0 is as a change from MIDI 1.0, but somehow the MMA manages to do it by making it out to be "32 bit!" (which ironically, it isn't - you don't get 32 bits of resolution in your CCs with MIDI 2.0).
Where MIDI 1 can more or less express the pitch and dynamics/timbre changes within a single piece of western music, MIDI 2.0 can express almost everything you do within a DAW these days, short of actual audio i/o.
The standard (section 7.4.6) defines 32 bit resolution for CCs.
What MIDI 2.0 adds is per-note pitch bend. That's a big deal, but you can also accomplish the same thing without all the complexity of MIDI 2.0 by using MPE, which is an extension to MIDI 1.0.
In a way, MPE is easier to deal with than per-note pitch bend anyways. MIDI 2.0 didn't bother to extend the 7-bit note number range, so you're still stuck with 128 notes. (This despite the existence of instruments like the Tonal Plexus H-Pi, which has 205 distinct pitches per octave, each with its own separate button!) That means in MIDI 2.0 you have to pick the nearest unused note and bend from there.
In MPE it's a bit easier because you can just grab an unused channel, pick the closest note to the one you want, and bend. It's a little more straightforward, and you know you'll never have to bend more than 50 cents either way.
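The pick-nearest-note-and-bend arithmetic can be sketched like this (a toy Python helper; assumes a ±2 semitone bend range, the common default outside MPE, which typically uses a much wider range):

```python
import math

A4 = 440.0  # reference tuning

def freq_to_note_and_bend(freq, bend_range=2.0):
    """Map a frequency to the nearest MIDI note plus a 14-bit pitch bend.

    Assumes the receiving synth's bend range is +/- bend_range semitones.
    The offset from the nearest note is at most 50 cents either way.
    """
    note_float = 69 + 12 * math.log2(freq / A4)
    note = int(round(note_float))
    offset = note_float - note                # semitones, within +/-0.5
    bend = 8192 + int(round(offset / bend_range * 8192))  # centre = 8192
    return note, max(0, min(16383, bend))
```

With a wide MIDI 2.0-style bend range the same 50-cent offset uses a much smaller slice of the 14-bit value, which is part of why per-channel MPE bending is more comfortable in practice.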
So far, it looks like most of the expressive and microtonal instruments out there are using MPE. The only one I know of that uses MIDI 2.0 is the Lumatone.
It's possible to work around the 7 bit limit, but it's awkward.
I'm sure nobody expected it to live this long, as the grumbling about its various shortcomings must've started in the late eighties at the latest. The curse of good enough strikes again ...
Most of one issue (volume 3 number 4 from 1987) was taken up with an article on how the author was able to reverse-engineer and modify the firmware of the Ensoniq Mirage to support the kind of microtuning he wanted.
Disproof: https://www.muzines.co.uk/articles/midi-what-s-wrong-with-it...
Just someone who either has nothing to do with music or does but has an absolutely obnoxious personality that will point out some meaningless detail in the specification.
It is just mind-blowing that my 1997 Korg Prophecy and 10-year-old Waldorf Blofeld are all linked up to my brand-new computer running the latest version of Ableton Live, with a MIDI interface that I don't even remember when I got, but it's old too.
The answer? Tailor the use-case to the instrument, as orchestras have done for thousands of years. A trumpet riff won't sound appropriate or evocative played on a xylophone any more than a cello aria on a French horn.
A masterclass on this is Wendy Carlos' Opus 'Switched On Bach', whose entire inspiration was to make "appealing music you could really listen to" using the then-new synthesiser technology.
If this doesn't sound serious, you had to be there.
MIDI was revealed to the world on the Sequential Circuits stand at the NAMM show in January 1983 (precisely one year since Smith’s proposal got the brush-off).
(I'm actually working on a MIDI controller. It has a CAN bus interface in addition to serial MIDI and USB. I don't have anything to talk to over CAN bus yet, but it's sort of there for future development. I probably won't send actual MIDI over CAN bus, but rather a protocol yet to be designed that's sort of like MIDI but less piano-centric.)
It seems the standard MIDI libraries that Arduino uses don't enable it by default, but it's a configuration option you can turn on, along with a note not to try to use it with USB[1].
[1] https://github.com/FortySevenEffects/arduino_midi_library/bl...
No more detail on this, but it was huge for MIDI not to end up "becoming a nightmare of endless fine-tuning better suited to electronics engineers than musicians" like CV/gate was.
https://midi.org/midi-history-chapter-6-midi-begins-1981-198... spells it out in more detail:
Mieda-san from Korg responds on behalf of the Japanese companies to the meetings at the 1981 Gakki Fair with Sequential and Oberheim, and confirms the discussions they had at the Gakki Fair.
• 19.2kbps is too slow
• ¼” Jacks will have ground loop problems
• There is no concept of synchronization, clock or the ability to start and stop sequences
---
Ground loops are the bane of musicians' lives, and sometimes the mitigation is the end of their lives
This quote about "separation of composition from performance" applies when you are working with a sequencer because the composer can "step-edit" the musical data note-by-note into the sequencer and then have the sequencer play it back in proper musical (real) time.
A popular one was made by Doepfer - who were also a strong force behind the modular renaissance with their A-100.
Yes you can. I do.
Even back then there were add-on interfaces like the Steinberg Midex which used some kind of timestamped buffered timing, which was probably even better. And many synths actually had quite bad latency anyway.
But for sure, having direct access to hardware and a non-multitasking OS made things much more deterministic "by default" and that was certainly an advantage for this use case.
It was tolerable if you put every synth/drum machine on its own separate buffered line. But when you had four or five devices connected to a single MIDI out on the Atari - ha ha ha no.
The absolute best timing resolution was around 700us, which is fine for single notes - although that didn't allow time for the steam age 8-bit processor in the target synth to wake up and notice it was supposed to do something.
But when you had a four note chord, it never sounded truly tight. And if you added some aftertouch or mod wheel - oh dear.
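For a sense of why, here's the back-of-envelope wire arithmetic (assuming the standard 31250 baud DIN rate, with 10 bits on the wire per byte: start + 8 data + stop):

```python
BAUD = 31250         # standard MIDI 1.0 DIN-5 rate
BITS_PER_BYTE = 10   # start bit + 8 data bits + stop bit

def message_time_us(n_bytes):
    """Wire time in microseconds for an n-byte MIDI message."""
    return n_bytes * BITS_PER_BYTE * 1_000_000 / BAUD

note_on = message_time_us(3)   # 960 us for a full 3-byte note-on
running = message_time_us(2)   # 640 us with running status
chord_spread = 4 * note_on     # ~3.8 ms to serialize a 4-note "chord"
```

So even before the receiving synth's processor wakes up, the notes of a chord are smeared across several milliseconds on a single DIN line, and continuous controllers like aftertouch eat into the same budget.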
So you ended up with this weird cultural thing of music that was quantised and on-the-grid but also had quite a bit of timing slop.
Big-name bands often used synchronisers and tracked each synth line to tape in a separate pass.
MIDI over USB is much better because there's basically zero latency and everything (usually) works as it should.
The Midex also added four additional MIDI outs, I think there was something similar for Notator (the Logic Pro predecessor), Unitor something.
I'm working on my own microtonal MIDI controller, and for now it makes more sense to just use MPE. Maybe I'll support MIDI 2.0 eventually, but I'm kind of waiting to see if MIDI 2.0 is a flop first.
I understand; I've never used it, but I looked it up. Seems like a lot of complication. Is it worth it?
It's possible to do that in plain MIDI 1.0 by using a separate channel for each note and using CCs, channel pressure, and/or pitch bend (that's basically what MPE does), but trying to get that to work with a wide range of synths is quite difficult because every synth implements MIDI a little differently. MPE standardizes that use case so it's much easier to have something that "just works".