INTERVIEW: Is there a socially responsible place for technology & A.I. in Music? We interview artist & tech innovator InteliDey.

InteliDey: A musician who walks the line between the tech world… and the creative world.

Very few artists, labels or music industry professionals are actually qualified to speak about A.I. technology. They also naturally bring with them an element of bias: there’s a new technology out there, able to create in a few seconds something that took them a lifetime to achieve.

But is there space for both technology and human beings to co-exist in the age of A.I.? Is there a sweet spot where technology can enhance the creative juices without fully taking control of the keys?

One artist uniquely placed to discuss this is InteliDey – part-time musician and part-time software engineer. For over 15 years, InteliDey (Dr Somdip Dey) has built systems designed for safety and fulfilment at companies like Microsoft and Samsung.

He’s now releasing an album which he hopes will bridge this gap and show people and musicians that there’s no reason to live in fear of A.I. We spoke to Somdip about his new project and how technology can be… our friend!

Connect with InteliDey: Instagram | Facebook | YouTube

Hi Somdip/InteliDey, thanks for taking the time to speak to us!

Thanks for having me — I really appreciate Magnetic making space for deeper conversations around music, culture and technology. I’m releasing Offline as more than a collection of tracks; it’s a concept project about how digital systems shape our emotions and behaviour, and I’m excited to talk about the ideas behind it as well as the sound.

Firstly, how do you find your experience in the tech industry helps or hinders your creative process in music?

It helps more than people expect — and sometimes it hinders in a very human way.

I’m trained in computer science, physics and maths. When I was younger I actually wanted to work at NASA, so my early education was rooted in understanding systems, signals and how complex things interact. When you move into music production, you realise you’re still dealing with signals — sound waves, frequency interactions, harmonics, dynamics, psychoacoustics. So a lot of the “mystery” becomes approachable: compression isn’t magic, it’s controlling amplitude over time; EQ is sculpting the frequency domain; reverb is space and decay behaviour. That background makes it easier to learn the engineering side of production and sound design quickly.
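The “compression is controlling amplitude over time” idea can be sketched in a few lines of code. This is a minimal, illustrative peak compressor, not anything from InteliDey’s actual toolchain: samples above a threshold have their level scaled down by a ratio, exactly the amplitude-over-time control he describes (real compressors also smooth gain changes with attack and release times).

```python
import math

def compress(samples, threshold_db=-18.0, ratio=4.0):
    """Naive peak compressor: attenuate any sample whose level exceeds
    the threshold, scaling the overshoot down by the ratio.
    Illustrative only -- no attack/release smoothing, no make-up gain."""
    out = []
    for x in samples:
        # convert the sample's absolute amplitude to decibels
        level_db = 20 * math.log10(max(abs(x), 1e-9))
        if level_db > threshold_db:
            # a 4:1 ratio means 4 dB over the threshold becomes 1 dB over,
            # i.e. we remove (1 - 1/ratio) of the overshoot
            overshoot = level_db - threshold_db
            gain_db = -overshoot * (1 - 1 / ratio)
            x *= 10 ** (gain_db / 20)
        out.append(x)
    return out
```

A full-scale peak (1.0, i.e. 0 dB) fed through this with a −18 dB threshold and 4:1 ratio is pulled down by 13.5 dB, while quiet material below the threshold passes through untouched — which is the whole point of dynamic-range compression.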

It also helps because tech teaches you how to build repeatable processes. A professional track isn’t only inspiration — it’s arrangement, mixing, mastering, revision cycles, exporting stems, checking translation across systems. My tech brain is comfortable with iteration and testing, so I can refine ideas into finished records.

Where it can hinder is when you over-optimise. In tech, you can get addicted to perfecting. In music, if you chase perfection, you can kill the emotion. There are times when I have to remind myself: people don’t dance to the spreadsheet version of a song. They dance to a feeling. So I’ve had to learn to let the art lead and let the technical skills serve the art — not dominate it.

To what extent is A.I. involved in your music & creative process?

I use AI as a tool in a workflow, not as a replacement for artistry — and I’m very deliberate about where it fits.

Back in 2020, when AI tools were becoming more visible, I actually built my own AI music models because that’s what I do professionally — I develop computing technologies, and I’ve been a pioneer in embedded AI. At first, producing with AI was exciting, but I quickly saw limitations: outputs can become repetitive, or weird in ways that don’t serve a musical narrative. It often lacked the human arc — tension and release with intent.

So I self-taught DAW production and music theory and moved away from “AI makes the track” to “I make the track.” My DAW of choice is FL Studio. But then I hit the reality most producers know: producing fully from scratch can take a long time, and it comes with creative block. In the commercial world, many successful records involve multiple professionals — writing, sound design, engineering, vocal production — even if only one name ends up front-facing. I didn’t always have that “village,” financially or logistically, and I didn’t want the constant overhead of navigating complex collaboration contracts.

So I designed a hybrid workflow: AI for ideation and momentum, then me on the DAW for composition, sound design, arrangement, mixing and mastering. AI helps me generate starting points, overcome block, and explore directions faster. Once the idea exists, I take over and craft the track intentionally — that’s where the art is.

One area where AI has been particularly useful is vocal generation/vocal sampling, especially for the Offline album, where I’m deliberately using voices as part of the “digital-era” aesthetic. But again: the production decisions, the groove, the mix choices, the build and drop architecture — that’s my work.

I also use AI in a more analytical way: to study trends, sonic movement, and where the scene is going. That’s part of how I developed my own fusion direction and even a genre concept like Infinity Wave. So it’s both creative and strategic, but always human-led.

A lot of musicians right now are saying, frankly… they’re scared. Making a living in the music industry was hard enough before A.I. Any words of encouragement?

I understand the fear — because it’s not irrational. The industry already had structural problems: streaming economics, discoverability, gatekeeping, and the pressure to be content creators. AI can feel like the final wave that overwhelms the individual artist.

But I think we need to separate two things: the technology and how it’s deployed.

Decades ago, when music production was largely analogue, a lot of people resisted DAWs. They argued DAWs would “ruin music,” that they weren’t real, that they made artistry meaningless. Today, DAWs are the standard, because they made creation more accessible and expanded what was possible. AI is arriving in a similar moment.

The danger isn’t AI itself — it’s abuse: unethical training, lack of disclosure, and people using it to impersonate or extract value without permission. That’s a real issue, and the industry needs better systems for provenance, attribution and compensation.

But here’s the encouraging part: AI can also democratise creation. It lets someone with ideas but limited technical skill start making music — and I’m honestly supportive of that. The next wave of great artists may come from people who were previously excluded by cost, access or background.

My bigger point is this: the future isn’t AI vs humans, it’s NI + AI — Natural Intelligence augmented by Artificial Intelligence. The artists who will thrive will be the ones who use AI to remove friction, but still lead with taste, identity and emotional truth. Tools don’t replace taste. Tools don’t replace lived experience. Tools don’t replace perspective. The most irreplaceable thing a musician has is their point of view.

So my encouragement is: protect your voice, build your community, and treat AI like a studio assistant — not your identity.

Talk to us about this album project… do you feel your album can impact the way consumers and listeners behave and interact?

Offline is my attempt to speak to the culture from a different angle — not through a lecture, but through a feeling.

My day job and career have been about building technology that keeps humans safe and more fulfilled. I’ve also been recognised for those contributions — I’m an MIT Innovator Under 35 in AI & Robotics in Europe and a Life Fellow of the Royal Society of Arts. But being inside the tech ecosystem for 15+ years, I’ve also watched the unintended consequences: polarisation, rage bait, parasocial dependency, body dysmorphia, and attention being turned into a commodity. With AI accelerating content and manipulation, those effects can deepen.

I’ve written about these topics in Forbes, The Times of India and The Conversation, and I do public talks and TEDx — including a TEDx talk on AI and another on attention economics (“Is social media the reason you’re broke?”) that was selected by TEDx Editorial. But I realised something: articles and talks mostly reach people who already want to think deeply. Music reaches everyone. Music enters the body. It bypasses defences. It repeats in your head. That’s why it’s powerful.

So yes — I think Offline can impact behaviour, not because it tells people what to do, but because it helps them recognise what’s happening. A listener might hear “Self-worth ain’t £4.99” and suddenly see monetised intimacy differently. They might hear “Rage bait sells” and notice how their feed is engineered. If a track makes someone pause before sharing outrage, or makes them take a break from the loop — that’s impact.

I’m not claiming music alone changes society. But music can change moments. And enough changed moments can change patterns.

Do you feel a rift is quickly forming between musicians and the music industry?

Yes, and it’s not only because of AI — AI is more like an accelerant.

The rift is forming because the industry increasingly rewards output over depth and attention over artistry. Musicians are expected to be constant content engines: social media posts, behind-the-scenes, viral hooks, personality branding. Meanwhile, the economics of streaming mean many artists have to do more work for less return, unless they break through the noise.

Add AI into that and you get a new tension: the supply of content explodes, the attention pool stays the same, and artists feel even more disposable. That can make musicians feel like they’re competing not only with other humans but with infinite generation.

But here’s the nuance: the industry has always been shaped by technology — radio, MTV, MP3s, streaming, TikTok. The real question is whether the industry will evolve ethically: transparency on training data, protections against impersonation, and fair compensation mechanisms if AI systems are interpolating existing work.

If the industry chooses extraction over ethics, the rift grows. If it chooses responsible innovation, it can become a new creative era.

What’s a key piece of advice you can give to musicians in 2026 looking to navigate tech, A.I. and the future of music?

Treat technology as leverage, not identity.

In 2026, the winning strategy isn’t “use AI” or “never use AI.” It’s knowing what part of your process is uniquely human — your taste, your message, your emotional signature — and using tools to reduce everything that blocks you from expressing that.

Practically:

  • Use AI to overcome writer’s block, explore arrangements, generate prompts or starter ideas, but finish with intent.
  • Learn the fundamentals: sound, arrangement, mixing, storytelling. Tools change, fundamentals don’t.
  • Build direct relationships with listeners. Algorithms are unstable; community is durable.
  • Be transparent about your process in a way that builds trust. The future is going to value authenticity even more because synthetic content is everywhere.

And ethically, support systems that protect creators — we need provenance, attribution, and fair compensation models. The future should be NI + AI, not NI replaced by AI.

Finally, is your outlook broadly positive, or negative?

Cautiously positive — but not naïve.

I’m optimistic because I’ve seen how technology can genuinely improve life when built responsibly. I’ve spent my career trying to do that. I also believe AI can unlock creativity for people who previously had no entry point, and that’s beautiful.

But I’m cautious because the business models driving digital platforms often reward the worst parts of human psychology: outrage, comparison, addiction, division. AI can scale those effects quickly if we don’t push back.

That’s why Offline exists. It’s not anti-tech. It’s pro-human. It’s a reminder that we can choose how we engage with digital systems — and we can design better ones. My goal is to use the dancefloor as a place where people can both escape and wake up, at the same time.
