Real-time MIDI in Python using sched and rtmidi
Introduction
Sending MIDI messages to a Digital Audio Workstation (DAW) from a notebook environment can be both fun and a powerful methodology for algorithmic composition.
I looked for a Python library that lets me send a melody over a MIDI channel to a DAW, but could not find that functionality in the two recently active MIDI libraries I encountered: python-rtmidi and mido: MIDI Objects for Python. My impression is that they are there to open ports, construct MIDI messages, and send them, but they don't inherently provide a scheduler that sends each scheduled note's MIDI message at the right time. (I might be wrong though.)
Luckily, scheduling a function call for a certain time is very easy to implement in Python using the standard library's sched — Event scheduler module.
Let's use this Python wrapper around RtMidi (a C++ library originally developed at McGill University) and the event scheduler that comes for free with Python to send a musical phrase to a DAW via MIDI on Windows.
Python-RtMIDI setup
import rtmidi
midi_out = rtmidi.MidiOut()
available_ports = midi_out.get_ports()
The two output ports I have are: ['Microsoft GS Wavetable Synth 0', 'Arturia MiniLab mkII 1']. The latter is my beautiful, small MIDI keyboard, an Arturia MiniLab Mk II. The former is a MIDI synthesizer that comes for free with Windows 10, so that Windows can play MIDI files with Media Player without requiring extra software. Mine is in C:\Windows\SysWOW64\drivers\gm.dls. A gmreadme.txt file next to it contains a copyright message from the year 2000, saying that "The GM.DLS file contains the Roland SoundCanvas Sound Set which was licensed by Microsoft in 1996 from Roland Corporation U.S. and license is only for Microsoft operating system products."
I admire Microsoft's dedication to backwards compatibility and enjoy finding these historical documents. This synth is a sound font. Sound fonts store recordings of a number of notes played on an instrument (or synthesized), and to reproduce other notes, the synth manipulates the pitch on the fly. This synth was probably high quality for its time, but by today's standards 3 MB is tiny for a sound font that aims to reproduce hundreds of instruments! Here is "At Hell's Gate" from the original Doom played with the Microsoft GS Wavetable Synth. Now that we don't need to squeeze everything onto a 1.44 MB floppy disk, the same piece can sound like this: "At DOOM's Gate" from Doom 2016.
We'll just use this synth to verify that our setup is correct and that we are able to send out MIDI messages, before sending them to a DAW.
Open the port to our legendary synth
midi_out.open_port(0)
midi_out.is_port_open() # should be True
Now we can start sending notes. I usually call the following command from a cell in a notebook (Ctrl+Enter keeps you at the current cell after running it, so you can repeat it with the same key combination!).
from rtmidi.midiconstants import NOTE_ON, NOTE_OFF
midi_out.send_message([NOTE_ON, 60, 100])
Boom! Hear the magic? MIDI note messages are triplets: a status byte indicating note on or off, the MIDI note number (60 is middle C, i.e. C4), and the velocity (100 here). Velocity is 7 bits, so it goes up to 127.
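The status byte actually packs two things: the high nibble is the message type and the low nibble is the channel (0-15). A small sketch with hypothetical note_on/note_off helpers (the constant values match what rtmidi.midiconstants provides):

```python
# Status-byte values, same as rtmidi.midiconstants NOTE_ON / NOTE_OFF.
NOTE_ON, NOTE_OFF = 0x90, 0x80

def note_on(note, velocity, channel=0):
    # High nibble: message type (0x9 = note on); low nibble: channel 0-15.
    return [NOTE_ON | (channel & 0x0F), note & 0x7F, velocity & 0x7F]

def note_off(note, channel=0):
    return [NOTE_OFF | (channel & 0x0F), note & 0x7F, 0]

print(note_on(60, 100))     # [144, 60, 100], the same triplet we sent above
print(note_on(60, 100, 9))  # [153, 60, 100], channel 10 (drums on GM synths)
```

Any of these lists can be passed straight to midi_out.send_message.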
Note that once a port is open, you cannot reopen it before you close it. So don't forget to assign the open port to a variable, or you lose the handle to the open port object. I couldn't find a way to get the port object back and close it. :-) Another solution is the context manager that python-rtmidi provides, which closes the port when exiting the context.
Also, if you don't want your note to ring for long, you can send a NOTE_OFF message, which is equivalent to releasing a key while playing an instrument.
import time

with midi_out:
    midi_out.send_message([NOTE_ON, 60, 100])
    time.sleep(0.5)
    midi_out.send_message([NOTE_OFF, 60, 0])
But I don't like the extra indentation that this approach requires.
Scheduler
sched is a Python standard library module that implements a general event scheduler. It requires a function that monotonically increases in time (such as time.time(), but nothing is stopping you from measuring the total amount of entropy of an isolated system), and another function that can wait for a given duration (such as time.sleep()).
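To see the scheduler working in isolation, here is a toy sketch (the names fired and start are mine) that schedules three callbacks using time.monotonic as the clock:

```python
import sched
import time

s = sched.scheduler(time.monotonic, time.sleep)
fired = []

start = time.monotonic()
for delay in (0.0, 0.1, 0.2):
    # enter() takes a delay relative to "now"; enterabs() takes an absolute time.
    s.enter(delay, 1, lambda d=delay: fired.append((d, time.monotonic() - start)))

s.run(blocking=True)  # blocks until the last event has fired
for requested, actual in fired:
    print(f'requested {requested:.1f}s, ran at {actual:.3f}s')
```

The second argument to enter is the priority, which breaks ties between events scheduled for the same time (lower number runs first).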
Using two calls to scheduler.enter, one for the note-on message and one for the note-off, we can simulate a key press and release. The following code has explanatory print statements.
import sched
import time

def schedule_note(scheduler, port, midi_no, start, duration, volume):
    print(f'Scheduling {midi_no} to be played at {start} for {duration} seconds.')
    # Note off is scheduled first but fires later; on ties, the lower priority number wins.
    scheduler.enter(start + duration, 1, port.send_message, argument=([NOTE_OFF, midi_no, 0],))
    scheduler.enter(start, 10, port.send_message, argument=([NOTE_ON, midi_no, volume],))

s = sched.scheduler(time.time, time.sleep)
for t, note in enumerate([60, 62, 64, 65, 67, 69, 71, 72]):
    schedule_note(s, midi_out, note, t, 0.5, 100)
print([(e.time, e.argument, e.priority) for e in s.queue])
print('Start playing...')
s.run(blocking=True)
print('Done.')
When run, we see the output of the calls to schedule_note and can observe how the scheduler's queue is filled. Then it plays the C major scale upwards from C4 to C5.
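The same idea extends to chords: schedule several notes with the same start time and the scheduler interleaves the on/off messages for you. Below is a self-contained sketch where a hypothetical LogPort stub stands in for midi_out (and the intervals are scaled down so it finishes quickly); with real hardware you'd pass the open port and second-scale timings instead.

```python
import sched
import time

NOTE_ON, NOTE_OFF = 0x90, 0x80  # values from rtmidi.midiconstants

class LogPort:
    """Stand-in for midi_out: records messages instead of sending them."""
    def __init__(self):
        self.sent = []
    def send_message(self, message):
        self.sent.append(message)

def schedule_note(scheduler, port, midi_no, start, duration, volume):
    scheduler.enter(start + duration, 1, port.send_message, argument=([NOTE_OFF, midi_no, 0],))
    scheduler.enter(start, 10, port.send_message, argument=([NOTE_ON, midi_no, volume],))

port = LogPort()
s = sched.scheduler(time.monotonic, time.sleep)
# C major then F major triad, 50 ms apart.
for t, chord in enumerate([(60, 64, 67), (65, 69, 72)]):
    for midi_no in chord:
        schedule_note(s, port, midi_no, t * 0.05, 0.04, 90)
s.run(blocking=True)
print(len(port.sent))  # 12 messages: 6 note-ons and 6 note-offs
```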
Also, let's study the difference between the two time functions that are available in Python. Below is a simple scheduling experiment that schedules calls at specified intervals and stores the moments when the scheduled functions are actually called.
def study_scheduler_timing(time_func, interval=0.001, repeat=100):
    times = []
    def store_call_time():
        # Measure with the same clock that drives the scheduler.
        times.append(time_func())
    s = sched.scheduler(time_func, time.sleep)
    for i in range(repeat):
        s.enter(i * interval, 1, store_call_time)
    s.run(blocking=True)
    return times
Collect 500 data points for each time function for 0.1, 0.01 and 0.001 second intervals.
times_a_0001 = study_scheduler_timing(time.time, interval=0.001, repeat=500)
times_a_001 = study_scheduler_timing(time.time, interval=0.01, repeat=500)
times_a_01 = study_scheduler_timing(time.time, interval=0.1, repeat=500)
times_b_0001 = study_scheduler_timing(time.monotonic, interval=0.001, repeat=500)
times_b_001 = study_scheduler_timing(time.monotonic, interval=0.01, repeat=500)
times_b_01 = study_scheduler_timing(time.monotonic, interval=0.1, repeat=500)
Plot histograms of the deviations of actual intervals between successive calls from the given interval.
from matplotlib import pyplot as plt
import numpy as np

def plot_histogram(ax, times, interval, title):
    intervals = np.diff(times)
    differences = intervals - interval
    plt.setp(ax.get_xticklabels(), rotation=45, ha='right')
    ax.hist(differences, bins=20)
    ax.set_title(title)
fig, axes = plt.subplots(nrows=6, sharex=True, figsize=(6, 8))
plot_histogram(axes[0], times_a_0001, 0.001, title='time.time() interval=0.001')
plot_histogram(axes[1], times_a_001, 0.01, title='time.time() interval=0.01')
plot_histogram(axes[2], times_a_01, 0.1, title='time.time() interval=0.1')
plot_histogram(axes[3], times_b_0001, 0.001, title='time.monotonic() interval=0.001')
plot_histogram(axes[4], times_b_001, 0.01, title='time.monotonic() interval=0.01')
plot_histogram(axes[5], times_b_01, 0.1, title='time.monotonic() interval=0.1')
plt.tight_layout()
It looks like, even though time.monotonic performs better for the 0.001-second interval, for the rest time.time gives better precision. Its deviations are at most a millisecond, whereas monotonic's deviations go up to 10 milliseconds, which might be perceivable in some cases.
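A couple of summary numbers complement the histograms. Here is a sketch with a hypothetical deviation_stats helper, demonstrated on synthetic data; on the real runs you would call it with, e.g., times_a_001 and 0.01.

```python
import numpy as np

def deviation_stats(times, interval):
    """Worst and mean absolute deviation (in ms) of the actual intervals."""
    dev_ms = (np.diff(np.asarray(times)) - interval) * 1000.0
    return float(np.abs(dev_ms).max()), float(np.abs(dev_ms).mean())

# Synthetic example: calls requested every 10 ms, each landing 1 ms late.
fake_times = [i * 0.011 for i in range(6)]
worst, mean = deviation_stats(fake_times, 0.010)
print(f'worst {worst:.2f} ms, mean {mean:.2f} ms')  # both about 1.00 ms
```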
Connection to a DAW via virtual port
Now that we are capable of sending out MIDI messages, let's open a DAW and send messages to a track that is listening for incoming MIDI events. I use Reaper because it's cheap and has some scripting capabilities via its Python API.
When you add a track and "arm" it for recording, you can choose its inputs. The default setting for virtual instruments is "All MIDI Inputs > All Channels".
This works well when you connect your MIDI keyboard and start playing. However, when we sent MIDI messages from Python, we sent them directly to Microsoft's MIDI synthesizer. There was no output port to which we could send messages that Reaper could capture.
For that, we need help from utility software that provides "virtual ports", such as loopMIDI. In the past I was a user of MIDI Yoke; however, it looks like it was last updated in 2007, so this time I wanted to try something more recent.
Install and open loopMIDI. Add a new virtual port (default name is "loopMIDI Port", but can be given another name too).
Once the virtual port is created, go to DAW Midi Device settings (for Reaper it's Options > Preferences > MIDI Devices) and enable the virtual port as an input.
Now we can choose the loopMIDI port as the input for any track. If "All MIDI Inputs" is selected, messages coming through loopMIDI will arrive here too.
Let's choose this virtual port as the output port in Python and play the major scale via a VST plug-in loaded in the DAW.
Close the port to the MS synth if it's still open via midi_out.close_port(). Check the list of available ports and see the virtual port; I have 'loopMIDI Port'. Open an output to that port and play the major scale example again using the new port. And voilà! I used the default ReaSynth that comes with Reaper, but it should work with any plug-in instrument that can be loaded in a DAW. ^_^
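Port indices can shift as devices come and go, so opening the virtual port by name is more robust than hard-coding an index. A sketch with a hypothetical find_port helper (the example port list mirrors the one on my machine):

```python
def find_port(port_names, substring):
    """Index of the first port whose name contains `substring`."""
    for i, name in enumerate(port_names):
        if substring in name:
            return i
    raise ValueError(f'no port matching {substring!r}')

# With real rtmidi this would be:
#   midi_out.open_port(find_port(midi_out.get_ports(), 'loopMIDI'))
names = ['Microsoft GS Wavetable Synth 0', 'loopMIDI Port 1']
print(find_port(names, 'loopMIDI'))  # 1
```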
Bonus: MIDI In
rtmidi not only sends out MIDI messages but also listens to incoming ones! Open an input port to your MIDI keyboard.
midi_in = rtmidi.MidiIn()
midi_in.get_ports() # ['Arturia MiniLab mkII 0', 'loopMIDI Port 1']
midi_in.open_port(0)
Define a callback function that'll handle incoming messages, and register it to the input.
def handle_input(event, data=None):
    message, deltatime = event
    print(f'message: {message}, deltatime: {deltatime}, data: {data}')

midi_in.set_callback(handle_input)
Now, whenever you press a key on your MIDI keyboard, the Notebook cell which executed the set_callback call will print the incoming messages.
message: [144, 55, 37], deltatime: 52.049, data: None
message: [128, 55, 0], deltatime: 0.201, data: None
message: [144, 57, 76], deltatime: 1.201, data: None
message: [128, 57, 0], deltatime: 0.17200000000000001, data: None
message: [144, 59, 72], deltatime: 0.202, data: None
message: [128, 59, 0], deltatime: 0.122, data: None
This can be used to implement a MIDI processor, such as one that reads the pressed notes, generates an arpeggio phrase, and sends it out to the DAW. It can also be used to study human performance by analyzing note on/off message timings, etc.
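As a minimal sketch of such a processor, here is a hypothetical Transposer callback that forwards every incoming note shifted up an octave. LogPort stands in for a real output port; with real hardware you would register it via midi_in.set_callback(Transposer(midi_out)).

```python
NOTE_ON, NOTE_OFF = 0x90, 0x80  # values from rtmidi.midiconstants

class Transposer:
    """Toy MIDI processor: forwards note messages transposed by `semitones`."""
    def __init__(self, out_port, semitones=12):
        self.out_port = out_port
        self.semitones = semitones

    def __call__(self, event, data=None):
        # Same (event, data) signature that rtmidi callbacks receive.
        message, deltatime = event
        if message[0] & 0xF0 in (NOTE_ON, NOTE_OFF):
            note = min(127, max(0, message[1] + self.semitones))
            message = [message[0], note, message[2]]
        self.out_port.send_message(message)

class LogPort:
    """Stand-in for a real rtmidi.MidiOut port."""
    def __init__(self):
        self.sent = []
    def send_message(self, message):
        self.sent.append(message)

out = LogPort()
handler = Transposer(out, semitones=12)
handler(([0x90, 60, 100], 0.0))  # simulate pressing C4
print(out.sent)  # [[144, 72, 100]] -- forwarded as C5
```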
published at: 2020-06-13 15:53 UTC-5
tags: python