
“A good dose of healthy ignorance is great”: Kormac interview
Irish producer and composer Kormac has spent the past decade blurring the lines between electronic production, orchestral composition and screen scoring. From his early days as an MPC-wielding DJ to collaborations with everyone from Irvine Welsh to full symphony orchestras, his output has rarely stood still. Recent years have seen him take on major soundtrack work — including his RTS-winning score for This Town and the Emmy-nominated documentary Swift Justice — while continuing to evolve his live show into a fully immersive audio-visual experience.
His new single Down Below, featuring a ghostly vocal from Katie Kim, marks a shift in focus. Built almost entirely from modular synths and drum machines, it’s a rawer, more unpredictable piece of music that leans into sonic texture and process. It also signals the start of a new run of releases on his own Always The Sound imprint, with a renewed emphasis on hands-on experimentation.
We caught up with Kormac in his Dublin studio to talk about working with volatile machines, balancing control with chaos, and the accidental collaboration that shaped the new single.
You’ve described the setup for Down Below as deliberately volatile. What kind of unpredictability were you chasing in the modular system, and how did that affect your compositional decision-making?
Essentially, I tried to set up the instruments in such a way that they would spit out usable patterns or melodies I could work with while, at the same time, giving me versions of those patterns or melodies that were entirely unexpected.
Sometimes I’d use the unexpected parts as a means to ‘break’ from the straighter patterns – as fills or secondary melodies – to allow more variance in the songs. But often these unexpected parts would be the bits I’d actually go with.
A good example is the Pulsar 23 drum machine. I’d trigger the Pulsar using sequences programmed into my Erica Synths Black Sequencer and generate patterns that way. But if I send a voltage to its “MAD” pin in the FX section, it will spit out a kind of wild, circuit-bent version of that sound, which can be harnessed and used in all kinds of ways.
I found this really inspiring as, to my ear, it elevated the drum sound far beyond that of a “traditional drum machine.” I also find that, often, a good dose of healthy ignorance is great. Playing around blindly always yields unexpected results.
With so much of this new material made ‘as far away from the computer as possible’, how are you handling arrangement and recall? Are you sequencing in-the-box at all, or is it a fully hardware-based workflow?
I’m doing both.
Ableton still acts as the master clock, sending a pulse to the Erica Synths Black Sequencer, which distributes clock or MIDI signals to everything else in the room. It’s a really fantastic, solid module. There’s virtually no recall on the modular bits. (I take photos and make videos of the settings.)
I can recall the sequences on the Black Sequencer, and I use a Sequential Prophet-6 for a lot of the polyphonic parts, which offers full recall. I’m recording most of the noises I make back into Ableton, editing and often passing things back out to modules/FX units and back into Ableton again. From there I’ll do the stereo mix in Ableton and then transfer to Pro Tools for the Spatial Audio mix.
There’s a real tactility to the synth textures in Down Below. Were there particular modules or signal chains that shaped that sound, and how much of it was captured in single takes versus multi-tracked and layered?
The opening chords are actually built from a tiny sample of radio static. I tuned that static to a single note in Melodyne, then ran it through the Make Noise Morphagene in my modular rig to stretch it out. From there, I popped this long note into Slate + Ash’s software sampler, Choreographs. This allowed me to play the sample polyphonically as well as introduce the rhythm and some noise and filtering. Once I had the sound figured out, I think I ended up printing the performance pretty quickly. Underneath, there’s a layer of the Prophet-6 playing the same chords/rhythm just to add a bit of low end (around 200Hz) to the sound.
The arpeggio section started as a sequence programmed into my Erica Synths Black Sequencer. This allowed me to play the identical sequence on the Moog Mother-32, Moog Subharmonicon and Moog Labyrinth – each with a different timbre dialled in. All three signals were then sent through an Erica Synths Stereo Delay, and that was it. I was opening up filters, changing release times etc. on the fly. I did a few passes and chose the best bits.
Your earlier work leaned more on MPC-style beat construction. What’s stayed with you from that era of working with samplers, and how do those instincts translate when you’re working with modular gear and drum machines?
I guess it left me with an ear for tiny, usable snippets. I’m forever slicing things up and making something new out of them. In fact, I think the way I use modular gear is, in effect, making material for me to sample, twist and morph in the box.
Katie Kim’s vocal sits somewhere between ghostly presence and processed texture. Can you talk about how you treated her voice — was there any granular or modular processing involved, or was it more about contrast in the mix?
It’s a combination of both.
I’ve taken the main vocal and introduced plate reverb on an aux channel, using the UAD EMT 140. Separately, there’s another instance of the vocal passed through the Soma Labs Lyra-8’s FX section. It has a wonderfully gnarly delay and distortion, and they combine to create a really gritty slapback effect. I messed with the feedback amount live and chose the best bits when editing.
Then a third instance of the vocal is passed through a Doepfer spring reverb tank, which gives that endless, metallic sheen. I passed the three vocals, summed, through the Overstayer Modular Channel to glue them together, brighten them a touch with the EQ and add some harmonics.
You’ve moved between scoring orchestral works and creating modular club music — two extremes in terms of sonic density and precision. Has working with classical ensembles influenced how you think about space and dynamics in more stripped-back productions like this one?
For sure. I studied orchestration in Bulgaria for a summer a few years ago, and I found that learning how composers voiced their works (which notes each instrument would play, and in what register) taught me a lot that could be applied to mixing non-orchestral music, particularly electronic music.
One of the more obvious examples would be how they’d be much more inclined to use certain melodic intervals in lower registers (octaves, fifths) to avoid beating and unintelligibility in the bass end of things.
For the new live show, how much of the modular rig is being performed in real time versus pre-sequenced? And are you syncing the visuals through MIDI/clock, or is that relationship more improvised?
There’s an element of playback to it, which comes from a few stems in Ableton Live sent to FOH via a PlayAUDIO12 interface (with a redundant/failover setup). I have a MIDI track set up in Ableton that sends note information into Resolume, triggering clips. In that sense the visuals are pre-programmed, but there are a couple of songs that process live cameras set up on stage in real time. And I can override the visuals at any stage and throw up any clip or camera feed I want really easily.
The modular rig isn’t clocked at all. For a lot of modular parts, I have sequences programmed into my Erica Synths Black Sequencer, and I manipulate the sound on, say, the Labyrinth or the Mother-32 as it’s playing. I’ll also be tweaking the effects live. This means I essentially have to fly them in and tweak the tempo as it’s playing to make sure it’s in time. It keeps me on my toes.

You’ve always blurred the lines between producer, composer and performer. With Down Below, where do you feel the line sits now — and has this return to machines changed how you see your role in the studio?
I’ve been wanting to spend more time learning about, and writing with, these machines for a good while, and now that I’m more comfortable it feels like everything has kind of “come up together.”
I think my role is the same but it’s another colour to add to the palette. At the end of the day, my job is to make the best music I can for the purpose I’m making it for, be that a record, a film or a live show.
I’d say the same about the visual element of the live show. I’ve always had visuals over the years, and made a good amount of them in-house, but it’s something I really wanted to tear down and build back up again. Now I have the software setup and the visuals themselves where I’d like them to be, and it feels quite good.
Down Below by Kormac feat. Katie Kim is out now on Always The Sound