Hands on!
Tools for making music. Not tools that make music.
When it became hard to pretend AI wasn't there, I tried what was available, but most of it felt like vending machines: push a button, get a song. Delegating the creative process didn't feel satisfying.
I believe creating is about slowly discovering what you're trying to say. The ways we make music are messy, personal, always evolving. Could AI serve that diversity instead of trying to flatten it?
So I took Max/MSP classes, generative AI courses, and learned how some of these models could be used in their raw form: they weren't magic boxes, but agnostic instruments built from music exposure and statistics.
They predict possible next notes, and we can apply constraints to favor some over others. Like saying "stay between C2 and C4, use mostly eighth notes, follow this chord progression", but with knobs and faders instead of words.
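To make that concrete, here is a toy sketch of the idea, not TapeOp Composer's actual code: the model outputs a score (logit) per token, we zero out anything outside the allowed pitch range, and a temperature knob controls how adventurous the sampling gets. Token ids standing in for MIDI pitches is an assumption made purely for illustration.

```python
import math
import random

def constrained_sample(logits, allowed, temperature=1.0):
    """Sample one token id: mask disallowed tokens, apply temperature."""
    # Keep only allowed tokens; scale logits by temperature
    # (lower temperature = safer, higher = more adventurous).
    scaled = [(i, l / temperature) for i, l in enumerate(logits) if i in allowed]
    # Softmax over the surviving tokens (subtract max for stability).
    m = max(l for _, l in scaled)
    weights = [(i, math.exp(l - m)) for i, l in scaled]
    total = sum(w for _, w in weights)
    # Weighted random pick.
    r = random.random() * total
    for i, w in weights:
        r -= w
        if r <= 0:
            return i
    return weights[-1][0]

# Toy vocabulary: token id == MIDI pitch. Constrain to C2..C4 (36..60).
allowed = set(range(36, 61))
logits = [0.0] * 128
logits[40] = 3.0   # the model strongly favors E2...
logits[72] = 5.0   # ...and C5, but C5 falls outside the allowed range
pitch = constrained_sample(logits, allowed, temperature=0.8)
assert 36 <= pitch <= 60  # always lands inside the constraint
```

The real device does this with knobs driving the constraints and the Anticipatory Music Transformer providing the predictions, but the principle is the same: the model proposes, the constraints dispose.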
TapeOp Composer emerged from this idea. It's a Max for Live device that generates MIDI patterns based on musical constraints you control.
No chat prompts, just dials, sliders, automation curves, and real-time context from your Live session. In return, no complete songs, just MIDI patterns that fit naturally into your existing work.
If you're stuck on a bassline, give it the chord progression. Want something to play against a melody? Set a pitch range and a rhythmic density. Need something more experimental yet still musical? Turn up the temperature while keeping relevant constraints.
It feels more like jamming with someone who only communicates through musical suggestions. You're still making all the decisions. It just shows you possibilities you might not have considered.
I built TapeOp Composer for my own workflow, but it works well enough that I wanted to share it. If you find it useful, please jump on Discord or send me an email to tell me about it.
Use it. Break it. Tell me how you broke it, and let me learn and improve, hands on!
TapeOp Composer uses the Anticipatory Music Transformer model developed by the Stanford research team (John Thickstun, David Hall, Chris Donahue, Percy Liang), published under the Apache 2.0 license. I'm deeply grateful to these researchers for making their work openly available.
FAQ
Why do I need a license?
Unlike most Max for Live devices, TapeOp Composer connects to a remote server that runs the AI model. The license helps me monitor usage to ensure the service stays available for everyone. Generation is unlimited; the license just prevents abuse and helps me understand how the tool is being used.
How is this sustainable?
I'm covering server costs myself for now. As long as the user base stays small, that's fine. If demand increases and I need better infrastructure, I'll look for ways to support the project, like opening a Patreon.
Can I run this locally?
Not for now. My code is functional but not pretty. I'm a musician who codes, not the other way around. Maintaining a public repo is an extra layer of work that I might only do if there's demand from people who really want to self-host for their specific use case or for research purposes.