Wait a minute! Is the software actually a sequencer? The Java API describes a sequencer as something that takes MIDI data from a file and sends it to a synthesizer to play. So does that mean we need a sequencer to receive the MIDI data and do something with it, and then send it to a synthesizer, which then renders the notes (visibly, not audibly)?
And then we find out there's raw MIDI and time-tagged MIDI. The definitions: raw MIDI is what you send to an external MIDI instrument, and raw MIDI data coming into the computer has to be time-tagged before hitting the sequencer. This sounds like a definition written by experts, with the single purpose of confusing a layperson!!
If I simplify it: the data hitting the computer port from the drum pad is raw MIDI, and something needs to time-tag that input so it can be stored and processed. That stored, time-tagged MIDI can then be sent to the rendering engine (synthesizer). A MIDI IN port seems to be, by definition, the thing that does the time-tagging on the way in.
If we were interested in sending the music to an external MIDI device for playback, we would use a MIDI OUT port, which strips the time tagging but supposedly sends the MIDI instructions at the right time.
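To make the raw-vs-time-tagged distinction concrete, here's a minimal sketch of what the time-tagging on the way in amounts to: a `Receiver` that stamps each incoming raw message with a tick before storing it as a `MidiEvent`. The class name, the fixed 120 BPM tempo, and the 480 PPQ resolution are my own assumptions for illustration, not anything the API mandates.

```java
import javax.sound.midi.*;
import java.util.ArrayList;
import java.util.List;

// Raw MIDI messages arrive at a Receiver with no position in musical time,
// so we stamp each one with a tick before storing it -- that stamped pair
// (message + tick) is exactly what a MidiEvent is.
public class TimeTaggingReceiver implements Receiver {
    private final List<MidiEvent> events = new ArrayList<>();
    private final long startNanos = System.nanoTime();
    private static final int PPQ = 480;      // assumed ticks per quarter note
    private static final double BPM = 120.0; // assumed fixed tempo

    @Override
    public void send(MidiMessage rawMessage, long timeStamp) {
        // Convert elapsed wall-clock time into sequencer ticks.
        double elapsedSeconds = (System.nanoTime() - startNanos) / 1e9;
        long tick = Math.round(elapsedSeconds * (BPM / 60.0) * PPQ);
        events.add(new MidiEvent(rawMessage, tick)); // now "time-tagged" MIDI
    }

    @Override
    public void close() { }

    public List<MidiEvent> getEvents() { return events; }

    public static void main(String[] args) throws Exception {
        TimeTaggingReceiver r = new TimeTaggingReceiver();
        // Fake one incoming drum stroke: GM snare (38) on channel 10 (index 9).
        r.send(new ShortMessage(ShortMessage.NOTE_ON, 9, 38, 100), -1);
        System.out.println("captured events: " + r.getEvents().size());
    }
}
```

In the real wiring, this object would be handed to the input device's `Transmitter` via `setReceiver`, so the drum pad's raw stream flows straight into it.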
So having struggled with the high level description the API was giving me, it's the actual architecture description that really firms up the definitions. It becomes clearer when we start talking about streaming MIDI as opposed to sequenced MIDI. Streaming is real-time and therefore not time-tagged. Sequenced is time-tagged and can be stored for later processing or playback.
So this means we want to receive streamed MIDI from the drum pad and tag the notes with a timestamp, which means we are writing a sequencer at this stage. When these get passed to the brains of the app, which translates them into visible notation, that's synthesizing, I believe, BUT the gurus say a sequencer is what translates to real-time transmission of MIDI events. Damn it, I'm calling all of this my software sequencer, with no synthesizer needed, just my renderer, which takes the time-tagged MIDI events from the sequencer and stores them in the data model!
Reading through the Java Sound API, the use of Sequencer and Synthesizer becomes even clearer. The high level architecture for what we want to achieve is:
- use a Sequencer to record MIDI information representing drum strokes through a MIDI IN port
- store it in a MIDI file
- use a Sequencer to play that MIDI file to a Synthesizer
- the Synthesizer doesn't play the notes but creates JDrumNote objects in a JManuscript
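The record-and-store half of that architecture can be sketched without any hardware by fabricating the `Sequence` by hand and writing it out with `MidiSystem.write`; with a real drum pad you'd instead wire the input device's `Transmitter` to the `Sequencer`'s `Receiver`, call `recordEnable` on a track, and `startRecording()`. The file name, tick values, and choice of GM snare (note 38 on channel 10) below are arbitrary illustrative choices.

```java
import javax.sound.midi.*;
import java.io.File;

// Steps 1-2 of the architecture, minus real hardware: fabricate a Sequence
// containing two drum strokes and save it as a standard MIDI file.
public class RecordToFileSketch {
    public static void main(String[] args) throws Exception {
        Sequence sequence = new Sequence(Sequence.PPQ, 480); // 480 ticks/quarter
        Track track = sequence.createTrack();

        // Two staccato strokes: note-on followed quickly by note-off.
        addStroke(track, 0, 100);  // stroke at tick 0
        addStroke(track, 480, 90); // stroke one quarter note later

        File out = new File("strokes.mid");
        MidiSystem.write(sequence, 0, out); // type 0 = single-track file
        System.out.println("wrote " + track.size() + " events");
    }

    static void addStroke(Track track, long tick, int velocity) throws Exception {
        track.add(new MidiEvent(
                new ShortMessage(ShortMessage.NOTE_ON, 9, 38, velocity), tick));
        track.add(new MidiEvent(
                new ShortMessage(ShortMessage.NOTE_OFF, 9, 38, 0), tick + 60));
    }
}
```

(The track reports five events, not four: the API maintains an end-of-track meta event automatically.)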
Being me, the first thing I do is challenge the complexity here. I probably do want to capture everything in a MIDI file as an option for debugging, so a Sequencer is necessary to receive the time-tagged MidiEvents and it should be wired into the MIDI IN port using the API in a standard way.
But, instead of playing the events back to a synthesizer, which appears to strip the timing information as it receives each event at the moment it should be played, I'd rather pass the events to a software brain that does the translation to JDrumNotes. That brain will need access to the timing information of not just the current instruction but also previous, and potentially future, notes. Also, the Synthesizer model needs a whole bunch more learning, which looks mostly to be relevant only if we were going to actually generate sound.
But this can be broken down I hope. If the Sequencer can capture the Sequence in a file, we could have another Sequencer which, later on, reads that file and analyzes the contents to do the translation to JDrumNotes. That's the journey I'll set out on, let's see where we end up.
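As a sketch of that second-Sequencer idea: the translation pass doesn't need a Synthesizer at all, we can load the `Sequence` and walk its tracks ourselves. `JDrumNote` doesn't exist yet, so this just prints what the hypothetical translator would see, and the `Sequence` is built in memory here where the real thing would call `MidiSystem.getSequence(file)` on the saved file.

```java
import javax.sound.midi.*;

// Walk a Sequence's events directly instead of playing them to a Synthesizer.
// Each surviving NOTE_ON is a drum stroke the notation brain would turn into
// a JDrumNote (hypothetical class, printed here instead).
public class TranslateSketch {
    public static void main(String[] args) throws Exception {
        Sequence sequence = new Sequence(Sequence.PPQ, 480);
        Track track = sequence.createTrack();
        track.add(new MidiEvent(new ShortMessage(ShortMessage.NOTE_ON, 9, 38, 100), 0));
        track.add(new MidiEvent(new ShortMessage(ShortMessage.NOTE_OFF, 9, 38, 0), 60));

        for (Track t : sequence.getTracks()) {
            for (int i = 0; i < t.size(); i++) {
                MidiEvent event = t.get(i);
                if (!(event.getMessage() instanceof ShortMessage)) continue;
                ShortMessage sm = (ShortMessage) event.getMessage();
                if (sm.getCommand() == ShortMessage.NOTE_ON && sm.getData2() > 0) {
                    // This is where JDrumNote creation would happen; note that
                    // the tick (timing) is still available, unlike in playback.
                    System.out.println("stroke: note=" + sm.getData1()
                            + " velocity=" + sm.getData2()
                            + " tick=" + event.getTick());
                }
            }
        }
    }
}
```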
One other consideration in how we do this: MIDI sends note-on and note-off events. Drumming is a pretty staccato operation, so the on and off will be fairly close together. We need to understand what the conventions are for the drumming voice in the MIDI world.
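One convention worth knowing up front: many devices never send a true NOTE_OFF at all, they send a NOTE_ON with velocity 0 instead. A helper that treats both the same, and pairs each on with its matching off to measure how staccato a stroke is, might look like this (the tick values in the demo are made up):

```java
import javax.sound.midi.*;
import java.util.*;

// Classify note-on/note-off under the velocity-0 convention, and pair each
// stroke's on/off events to compute its gate time (duration in ticks).
public class StrokeDuration {
    static boolean isNoteOn(ShortMessage sm) {
        return sm.getCommand() == ShortMessage.NOTE_ON && sm.getData2() > 0;
    }

    static boolean isNoteOff(ShortMessage sm) {
        return sm.getCommand() == ShortMessage.NOTE_OFF
                || (sm.getCommand() == ShortMessage.NOTE_ON && sm.getData2() == 0);
    }

    // Tick gap between each note-on and the next matching note-off.
    static List<Long> gateTimes(Track track) {
        Map<Integer, Long> openNotes = new HashMap<>(); // note -> on tick
        List<Long> gates = new ArrayList<>();
        for (int i = 0; i < track.size(); i++) {
            MidiEvent e = track.get(i);
            if (!(e.getMessage() instanceof ShortMessage)) continue;
            ShortMessage sm = (ShortMessage) e.getMessage();
            if (isNoteOn(sm)) {
                openNotes.put(sm.getData1(), e.getTick());
            } else if (isNoteOff(sm) && openNotes.containsKey(sm.getData1())) {
                gates.add(e.getTick() - openNotes.remove(sm.getData1()));
            }
        }
        return gates;
    }

    public static void main(String[] args) throws Exception {
        Sequence seq = new Sequence(Sequence.PPQ, 480);
        Track t = seq.createTrack();
        t.add(new MidiEvent(new ShortMessage(ShortMessage.NOTE_ON, 9, 38, 100), 0));
        t.add(new MidiEvent(new ShortMessage(ShortMessage.NOTE_ON, 9, 38, 0), 30)); // vel-0 "off"
        System.out.println("gate ticks: " + gateTimes(t));
    }
}
```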
(Oh dear, just come across the term quantization in the context of digital music. Having read about it, the way I think about it is that it's like "snap to grid" functionality in drawing software: if you're slightly late or early, the software puts the note where it thinks it should be. I didn't know there was a word for this, but it's something I had thought would be needed in interpreting the incoming drum strokes as part of the notation brain.)
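The snap-to-grid analogy translates to about one line of arithmetic: round each tick to the nearest grid line. The sixteenth-note grid of 120 ticks assumes a 480 PPQ resolution, which is just an illustrative choice:

```java
// "Snap to grid", literally: round each tick to the nearest grid line.
public class Quantize {
    static long quantize(long tick, long gridTicks) {
        return Math.round((double) tick / gridTicks) * gridTicks;
    }

    public static void main(String[] args) {
        long grid = 120; // sixteenth-note grid at an assumed 480 PPQ
        System.out.println(quantize(475, grid)); // slightly early stroke
        System.out.println(quantize(497, grid)); // slightly late stroke
    }
}
```

Both strokes land on tick 480, the grid line the drummer was presumably aiming for.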
Looks like there are lots of articles on how to create your own effects to synthesize drum rolls by editing sequences, but phew, this is complex: trying to assimilate all the terminology in use as well as figure out what we need here.
So there's the concept of a drum brain out there, into which you plug rubber pads which basically generate a voltage when hit. The brain then converts those to MIDI events. I believe what I need is a drum pad trigger MIDI controller or some such arrangement of those words!
Talking to a kit drummer (Jack at Nevada) who uses MIDI in the studio, his advice is: do not go for cheaper options like the Alesis PercPad, because with the rapidity of rudimental drumming you'll get crosstalk across the pads, and there's a question mark over whether they'll pick up press rolls accurately enough or even keep up with the event flow. TD30 and nothing less: go for Roland or Ddrum pickups, which clip to the rim of a real drum. We're talking serious cost at that level, but that's the challenge: having the quality needed to meet the demands of the drumming style.
I think the way ahead is to work out the brain by using keystrokes to emulate the MIDI events, maybe even making them look like the real thing, and getting a test rig hooked up to see how we get on with some of the MIDI setups the guys in the band have.
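A first cut of that keystroke rig could just map keys to fabricated note-on messages, so the brain sees the same `ShortMessage`s it would eventually get from hardware. The key bindings and the GM percussion note numbers (38 snare, 42 closed hi-hat) are arbitrary choices of mine; in the real rig the characters would come from AWT/Swing key events rather than a hard-coded array.

```java
import javax.sound.midi.*;

// Map keystrokes to fake drum-pad strokes so the notation brain can be
// developed without any MIDI hardware attached.
public class KeystrokeRig {
    static ShortMessage strokeFor(char key) throws InvalidMidiDataException {
        int note;
        switch (key) {
            case 'f': note = 38; break; // left hand  -> snare
            case 'j': note = 38; break; // right hand -> snare
            case 'k': note = 42; break; // closed hi-hat
            default:  return null;      // not a drum key
        }
        // Channel 10 (index 9) is the GM percussion channel.
        return new ShortMessage(ShortMessage.NOTE_ON, 9, note, 100);
    }

    public static void main(String[] args) throws Exception {
        for (char key : new char[] {'f', 'j', 'k', 'x'}) {
            ShortMessage sm = strokeFor(key);
            System.out.println(key + " -> "
                    + (sm == null ? "ignored" : "note " + sm.getData1()));
        }
    }
}
```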