When I'm not teaching software engineering or TDD, mentoring engineering managers, or raising children, I play keyboards for the band Subterranean Masquerade. Since our usual live setup is complex and requires large stages, last year we decided to go on a small-scale tour with a limited setup, allowing us to visit smaller venues in distant corners of the country. In lieu of my Mac/MainStage-based rig, I used a single synthesizer keyboard, the Prophet Rev2 - an amazing and versatile synthesizer, but one that lacks proper concert-management capabilities for swift transitions between sounds on stage - a feature I rely on in my normal MainStage setup. I wanted to keep the rig as simple as possible, so instead of using the Mac I wanted to use my iPhone, but a search through the iPhone App Store revealed no app that solves this problem. That was when I decided to create my own.
The requirements are simple. I need a list of concerts. In each concert, there's an ordered list of songs. In each song, there's an ordered list of parts. Each part sends MIDI messages to the synthesizer when selected, for instance "switch to sound #125" or "set the value of MIDI Controller #1 to 57". When I start to perform a concert, the first part of the first song is selected, and I'm able to scroll forwards (and backwards) through the nested list of songs and parts using a physical controller, such as my iRig Blueboard, that connects to the iPhone. The iPhone is also connected to the synthesizer using a Lightning-MIDI cable or a BLE MIDI dongle.
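Jumping ahead a little, here is roughly the shape that model ends up taking in Dart (spoiler: that's where this story lands); the class and field names below are illustrative rather than the app's actual code:
// An illustrative sketch of the model implied by the requirements above;
// class and field names are assumptions, not the app's actual code.
class Concert {
  final String name;
  final List<Song> songs; // ordered
  Concert(this.name, this.songs);
}

class Song {
  final String name;
  final List<Patch> patches; // the ordered "parts" of the song
  Song(this.name, this.patches);
}

class Patch {
  final String name;
  final int programNumber; // e.g. "switch to sound #125"
  final List<ControlChange> controls; // e.g. "set controller #1 to 57"
  Patch(this.name, this.programNumber, [this.controls = const []]);
}

class ControlChange {
  final int controller;
  final int value;
  ControlChange(this.controller, this.value);
}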
The first goal I set was a technological proof of concept - an app that connects via BLE to the foot controller and the synth, responds to a foot click, and sends a single Program Change message instructing the synthesizer to switch to a specific sound. If I could do that, the rest would just be building a nice UI and a simple model around the code that sends MIDI messages. React Native was my first choice, having recently worked on two projects using RN, so I spent several hours setting up a new RN application on my M1 Pro MacBook Pro using the most recent template, then looked for an NPM module that supports MIDI over Bluetooth. There were none. Cutting my losses, I moved on and looked at Flutter.
I first heard of Flutter back in 2020 when an engineer working for one of my clients kept telling me how they should ditch React Native in favor of Flutter, which has a much more complete and stable environment than RN. Not repeating the same mistake twice, I started by searching for a Flutter library supporting MIDI/Bluetooth, and I found Flutter MIDI Command. It even has a demo app, so I simply loaded it into Android Studio and started to play around. An hour later I had the demo app running on my iPhone, connected to the devices and sending a MIDI message to the synth whenever I hit the foot controller.
Next, I wanted to see whether I'd be able to write fast, integrative acceptance tests for the bulk of my app. This requires a good UI testing library that allows testing outside of a device or simulator, and the ability to create a Hexagonal Architecture, injecting fake implementations of all I/O-performing classes when running in a test harness. Luckily, both are feasible in Flutter using the WidgetTester. My App class takes a MidiAdapter and a ConcertRepo as constructor dependencies, the production versions of which are provided in the main() function:
// main.dart
void main() async {
  // Production implementations of the app's I/O ports: concerts persisted on
  // the local file system, MIDI going through the Flutter MIDI Command plugin.
  final concertRepo = FileSystemConcertRepo();
  final midiAdapter = FMCMidiAdapter();
  runApp(App(midiAdapter, concertRepo));
}
while the fakes are provided in the test harness:
// concert_test.dart
testWidgets("a concert is rendered", (tester) async {
  final midiAdapter = MemoryMidiAdapter();
  final concertRepo = MemoryConcertRepo();
  await tester.pumpWidget(App(midiAdapter, concertRepo));
  ...
});
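The fakes themselves are trivial. The MIDI one, for instance, can be as simple as this (a simplified sketch rather than the real class; programChanges and nextPatch() are the parts the tests below rely on):
// A simplified sketch of the MIDI port and its in-memory fake; the real
// adapter does more, but this is the essence.
import 'dart:async';

abstract class MidiAdapter {
  void sendProgramChange(int programNumber);
  void sendControlChange(int controller, int value);
  Stream<void> get nextPatchRequests; // e.g. foot-controller presses
}

class MemoryMidiAdapter implements MidiAdapter {
  final List<int> programChanges = [];
  final List<List<int>> controlChanges = []; // recorded [controller, value] pairs
  final _nextPatchRequests = StreamController<void>.broadcast();

  @override
  void sendProgramChange(int programNumber) =>
      programChanges.add(programNumber);

  @override
  void sendControlChange(int controller, int value) =>
      controlChanges.add([controller, value]);

  @override
  Stream<void> get nextPatchRequests => _nextPatchRequests.stream;

  // Simulates the foot controller asking the app to advance to the next patch.
  void nextPatch() => _nextPatchRequests.add(null);
}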
The next few weeks saw me taking some time to myself, working evenings and weekends, using ATDD via widget tests to facilitate an emergent design for my new app. This was the first time I had used Dart and Flutter, and while Dart is really simple if you're used to Java or JavaScript, the Flutter framework has its own set of practices and conventions. However, my confidence in the methodology, having used ATDD to help me navigate through projects in Go, Python and React/RN, allowed me to push forward without fear and without wasting time on a Flutter tutorial. Praise should be given to the Flutter and Dart teams at Google for creating really useful and easy-to-navigate documentation and testing tools.
Pretty soon I found myself writing high-level testing logic using a custom DSL and drivers to abstract away technical details. These abstractions did not arise immediately - they are the result of many iterations of writing a test, making it pass, and refactoring both the app code and the test code. For instance, these are some tests from the initial Git commit I made:
testWidgets('songs and patches appear in order', (tester) async {
  var song1 = Song('song 1', [Patch("p1", 1), Patch("p2", 1)]);
  var song2 = Song('song 2', [Patch("p3", 1), Patch("p4", 1)]);
  var concert = aConcert(songs: [song1, song2]);
  var midi = MemoryMidiAdapter();
  await tester.pumpWidget(MaterialApp(home: ConcertPage(midi, concert)));

  expect(find.text(concert.name), findsOneWidget);
  expect(find.byType(SongCard), findsNWidgets(2));

  var songCards = tester.widgetList<SongCard>(find.byType(SongCard)).toList();
  expect(songCards.elementAt(0).song, equals(song1));
  expect(songCards.elementAt(1).song, equals(song2));
  expect(
      find.descendant(
          of: find.byWidget(songCards.elementAt(0)),
          matching: find.text(song1.name)),
      findsOneWidget);
  expect(
      find.descendant(
          of: find.byWidget(songCards.elementAt(1)),
          matching: find.text(song2.name)),
      findsOneWidget);

  var song1Patches = tester.widgetList<ListTile>(find.descendant(
      of: find.byKey(Key('${song1.name} patches')),
      matching: find.byType(ListTile)));
  var song2Patches = tester.widgetList<ListTile>(find.descendant(
      of: find.byKey(Key('${song2.name} patches')),
      matching: find.byType(ListTile)));
  expect(song1Patches.elementAt(0).key, equals(concert.keyFor(0, 0)));
  expect(song1Patches.elementAt(1).key, equals(concert.keyFor(0, 1)));
  expect(song2Patches.elementAt(0).key, equals(concert.keyFor(1, 0)));
  expect(song2Patches.elementAt(1).key, equals(concert.keyFor(1, 1)));
});
testWidgets('advancing through a concert sends program changes in order', (tester) async {
  // this creates a concert with some songs, each having some parts
  var concert = aConcert();
  var midi = MemoryMidiAdapter();
  await tester.pumpWidget(MaterialApp(home: ConcertPage(midi, concert)));

  await Future.forEach(concert.songs, (song) async {
    await Future.forEach(song.patches, (patch) async {
      expect(midi.programChanges.last, equals(patch.programNumber));
      midi.nextPatch();
      await tester.pumpAndSettle();
    });
  });
});
And here is the latest version of a test asserting the same program-change behavior (albeit differently):
testWidgets("events are sent to midi on entry and on next patch",
(tester) async {
final midi = new MemoryMidiAdapter();
final patch1 = aPatch(controls: [aCC(), aCC()]);
final patch2 = aPatch(controls: [aCC()]);
final song = aSong(patches: [patch1, patch2]);
final concert = aConcert(songs: [song]);
final concertDriver = await renderConcert(tester, concert, midi);
final performanceDriver = await concertDriver.perform();
expect(midi, hasRecorded(patch1));
await performanceDriver.nextPatch();
expect(midi, hasRecorded(patch2));
});
Note the usage of object builders (to create valid data for the test with just as much detail as needed), drivers (to abstract away how my test interacts with the UI) and custom matchers (to abstract away how a specific assertion is being made).
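To give a flavor of what these look like, here are simplified sketches of a builder and a matcher (the real ones carry more detail; the defaults and the exact definition of "recorded" below are illustrative):
// Simplified sketches of an object builder and a custom matcher; defaults and
// the definition of "recorded" are illustrative, not the app's actual code.
import 'package:flutter_test/flutter_test.dart';

int _nextProgram = 1;

// Builders produce valid objects with sensible defaults, so each test spells
// out only the details it cares about.
ControlChange aCC({int controller = 1, int value = 64}) =>
    ControlChange(controller, value);

Patch aPatch({String? name, int? programNumber, List<ControlChange>? controls}) =>
    Patch(name ?? 'patch $_nextProgram', programNumber ?? _nextProgram++,
        controls ?? []);

Song aSong({String? name, List<Patch>? patches}) =>
    Song(name ?? 'a song', patches ?? [aPatch(), aPatch()]);

Concert aConcert({String? name, List<Song>? songs}) =>
    Concert(name ?? 'a concert', songs ?? [aSong(), aSong()]);

// A custom matcher hides how "the adapter recorded this patch" is asserted.
Matcher hasRecorded(Patch patch) => _HasRecorded(patch);

class _HasRecorded extends Matcher {
  final Patch patch;
  _HasRecorded(this.patch);

  @override
  bool matches(dynamic item, Map matchState) {
    final midi = item as MemoryMidiAdapter;
    return midi.programChanges.isNotEmpty &&
        midi.programChanges.last == patch.programNumber;
  }

  @override
  Description describe(Description description) =>
      description.add('a MIDI adapter that last sent program ${patch.programNumber}');
}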
This approach paid off when I decided that performing the concert requires a separate UI from editing the concert. Because my tests were asserting behavior rather than implementation, I could simply move the performance features - like sending MIDI messages and scrolling through the concert in response to incoming MIDI messages - to a completely new screen. All I had to change was the implementation of the test drivers and matchers, and the same tests simply continued to pass.
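The drivers themselves are thin wrappers around the WidgetTester. A sketch of roughly what they look like (the widget key used to enter the performance screen is illustrative, and nextPatch() stands in for the foot controller):
// A sketch of the test drivers; the 'perform' key is illustrative, not the
// app's actual UI.
import 'package:flutter/material.dart';
import 'package:flutter_test/flutter_test.dart';

Future<ConcertDriver> renderConcert(
    WidgetTester tester, Concert concert, MemoryMidiAdapter midi) async {
  await tester.pumpWidget(MaterialApp(home: ConcertPage(midi, concert)));
  return ConcertDriver(tester, midi);
}

class ConcertDriver {
  final WidgetTester tester;
  final MemoryMidiAdapter midi;
  ConcertDriver(this.tester, this.midi);

  // Navigates from the concert (editing) screen to the performance screen.
  Future<PerformanceDriver> perform() async {
    await tester.tap(find.byKey(const Key('perform')));
    await tester.pumpAndSettle();
    return PerformanceDriver(tester, midi);
  }
}

class PerformanceDriver {
  final WidgetTester tester;
  final MemoryMidiAdapter midi;
  PerformanceDriver(this.tester, this.midi);

  // Advances to the next patch the same way the foot controller would.
  Future<void> nextPatch() async {
    midi.nextPatch();
    await tester.pumpAndSettle();
  }
}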
Eventually the time came to test my app on my real iPhone, connected to my real gear, during a real band practice ahead of an upcoming concert. I looked for the easiest way to deliver a test version to my iPhone via TestFlight, and Xcode offered Xcode Cloud, so I thought "why not" and gave it a shot. Much to my surprise, it works perfectly. I'm now able to push changes to my GitHub repository, and a few minutes later a new version of the app lands in my TestFlight account - as long as all tests pass. Praise should be given to the Xcode Cloud team at Apple for making continuous delivery on iOS simple and feasible. In February 2023 I played the first concert using the new app, now named MIDI Set List, and it works beautifully.
What next? I want to ship MIDI Set List to the App Store for other users of MIDI equipment who need a concert performance management solution. Guitar players, vocalists, maybe even sound and lighting engineers could all be potential users. To get there, I need to polish the app some more, fix some bugs and rework the device pairing flow, which for now remains as I borrowed it from the Flutter MIDI Command demo app. Hopefully a gap in client activity will overlap with programming-related serendipity and workable kindergarten/school hours sometime in the near future.
Update - April 2023: MIDI Set List is available on the Apple App Store, soon to be available on the Google Play Store as well.