“It Felt Like Black Magic”: How LALAL.AI VST Helps Shape the Future of Automotive UX Sound Design
This time for our interview series, we spoke with Maksim, UX Sound Designer at Impulse Audio Lab, an agency specializing in crafting soundscapes for automotive brands. One of Maksim’s key initiatives is Drivable Music, a concept that makes the driving experience more engaging and enjoyable. We were intrigued to discover that even in this context, stem separation can play a crucial role.
We talked to Maksim about Drivable Music, how he sees its future, and how LALAL.AI actively helps the team shape it.
The Concept of Drivable Music at Impulse Audio Lab
Our team has been around for just over a decade. It all started with UX sound design for car interfaces, but over time, we shifted our focus to product development. That’s how EVx was born: a software solution that enables interactive sound playback inside vehicles. It powers everything from engine sounds to UI/UX interactions, like car start-up sequences, door chimes, notifications, and confirmations. Today, EVx is our flagship product.
I joined the team a little over two years ago, during an active phase of software development, and I ended up focusing on interactive music, which we call Drivable Music. This has become my primary area of interest and the main direction I drive in the company.
In addition, I work on making engine sounds and UI/UX sounds interactive for various brands. We collaborate with brands in Asia, Europe, and, to a lesser extent, America. So basically, over eight million cars worldwide are already using our sounds.
Interactive music, in my view, is a huge topic for the automotive industry because the whole car market is undergoing a major shift. In the past century, cars were purely technical devices; basically analog, if you imagine it like an analog synthesizer, with no processors, no digital elements. At the turn of the century, there was a wave of digitalization: displays appeared, and later automotive versions of Android and Apple CarPlay. Today, we’re seeing a global shift toward what’s called software-defined vehicles.
Where hardware used to dominate, now software defines the experience. Car manufacturers are increasingly aware of this. You might have seen it on websites, YouTube, or Instagram: people often comment that they don’t like the abundance of screens, the lack of real engine sounds, the loss of character. This is a major transition because for nearly a century, thousands, maybe even hundreds of thousands of people worked on creating the character and soul of cars. What’s happening now is a redefinition of what a car is.
Today, manufacturers across Europe, Asia, and America are trying to find where the soul of a car lies, and what makes it feel alive, interesting, and enjoyable. I believe Drivable Music, our interactive music technology, can deliver an entirely new user experience in a car, something that simply wasn’t possible 50, 30, 20, or even 5 years ago. Music brings character and personality to a car, making it engaging and enjoyable. It’s about creating an experience that users might prefer over a traditional engine sound. Instead of missing the roar of a Porsche or Mercedes engine, they could enjoy a new type of experience unique to electric vehicles.
It’s also an interesting challenge to figure out how different this technology should be from what users are used to. There isn’t a clear answer yet. On one hand, our technology allows us to make something completely new and unique. On the other hand, we have to balance creativity with accessibility.
We can create experiences that musicians or producers will instantly recognize as rich, complex, and unusual, but for the average driver, unfamiliar with BPM, key, loops, and samples, this could be overwhelming. That’s one end of the spectrum. On the other, we can create interactive music that stays closer to conventional playlists, something people can immediately understand and enjoy. For example, many systems already adjust volume based on speed: faster driving makes the music louder to counteract wind and road noise. We’re now figuring out how to push this further while keeping it intuitive.
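The speed-based volume adjustment mentioned above can be illustrated with a minimal sketch. This is not Impulse Audio Lab's implementation; the function name, speed range, and boost amount are all illustrative assumptions:

```python
def speed_to_gain_db(speed_kmh: float,
                     base_gain_db: float = 0.0,
                     max_boost_db: float = 6.0,
                     max_speed_kmh: float = 130.0) -> float:
    """Linearly raise playback gain with vehicle speed to offset
    wind and road noise. Speeds are clamped to [0, max_speed_kmh]."""
    ratio = min(max(speed_kmh, 0.0), max_speed_kmh) / max_speed_kmh
    return base_gain_db + ratio * max_boost_db
```

Real systems typically use a tuned, non-linear curve per cabin, but the principle is the same: one continuous driving signal mapped to one audio parameter.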
“The challenge was to turn technical data into something intuitive and pleasant for the driver”
The core principle of Drivable Music is that different musical instruments or layers respond to driver interactions. Drive calmly, and the music remains calm. Press the accelerator, and the music becomes more dynamic and colorful. Switch on the turn signal, and instead of a standard “tick-tick-tick,” the sound integrates into the rhythm of the music. Brake, and an additional layer appears, essentially a musical instrument complementing the overall theme.
It started with our software EVx, which reads signals from the car. Cars use the CAN (Controller Area Network) protocol, a communication bus that shares real-time information like gear selection, accelerator pedal position, turn signals, and more. We access this data to understand the car’s state.
The challenge was to build a user experience around these signals, turning technical data into something pleasant, engaging, and intuitive for the driver. I used EVx to create musical loops that could change depending on driver input. Press the accelerator, and a calm loop becomes more interactive and layered. This is the essence of Drivable Music: translating the energy and interaction between driver and vehicle into an interactive musical experience.
In a car, there are limited ways to perceive the outside world. We don’t feel the wind or subtle external cues, so our main channels are visual and auditory. Electric cars especially lack engine sounds, making it even harder to judge the car’s state—extra power for overtaking, braking intensity, etc. A well-designed UX can translate these signals into intuitive feedback, easing cognitive load and emotional stress while driving.
LALAL.AI VST Is a Major Player in Drivable Music
Initially, we composed music from scratch for proof-of-concept experiments. That worked for research, but for mass automotive production, it wasn’t feasible. Content consumption differs globally—Asia, Europe, America—so we needed a technology to automate content production, specifically to create interactive musical layers.
That’s when we realized we needed stem separation, specifically a solution that could separate audio in real time. Interactive music relies on the ability to process individual elements, like vocals or instruments, on the fly. For example, pressing the pedal could trigger dynamic reverb only on the vocals, an effect only possible when each stem is processed separately.
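The value of per-stem processing described above can be shown with a toy example: an effect applied only to the vocal stem, with its intensity driven by pedal position, while the other stems pass through dry. This uses a single echo as a stand-in for reverb and assumes the stems have already been separated; it is a sketch of the principle, not of the LALAL.AI pipeline:

```python
import numpy as np

def pedal_vocal_echo(vocal: np.ndarray, other: np.ndarray,
                     pedal_pct: float, sr: int = 44100) -> np.ndarray:
    """Mix separated stems, adding an 80 ms echo to the vocal stem only.
    The echo's wet level follows accelerator pedal position (0-100 %)."""
    wet = min(max(pedal_pct, 0.0), 100.0) / 100.0
    delay = int(0.08 * sr)  # 80 ms echo tap
    echo = np.zeros_like(vocal)
    echo[delay:] = vocal[:-delay] * 0.5
    return vocal + wet * echo + other
```

Because the effect touches only one stem, the instrumental bed stays untouched no matter how hard the driver accelerates, which is exactly what full-mix processing cannot do.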
When we first encountered LALAL.AI’s real-time VST, it felt like black magic. Installing the plugin and setting up the pipeline was one of those rare moments where something that seemed impossible—real-time stem separation—actually worked flawlessly. Suddenly, this opened the door to creating interactive music experiences at scale for automotive applications.
I now use LALAL.AI VST in Ableton Live, which allows us to explore, refine, and deliver these experiences efficiently.
Drivable Music is already in use and gaining traction. Real-time stem separation is becoming mainstream. Companies like Soundcore have portable speakers with built-in stem separation for karaoke, and Samsung uses similar technology in TVs. It’s proof that this tech is viable at scale.
For us, the challenge now is designing a genuinely valuable user experience; not just applying technology for technology’s sake, but understanding what drivers really need. This is an uncharted journey, a space for experimentation, research, and collaboration. Being able to implement this at a large scale, like in the automotive market, is incredibly exciting. If we succeed, we could change how millions of people experience music in cars—a tangible, meaningful impact.
Follow LALAL.AI on Instagram, Facebook, Twitter, TikTok, Reddit, LinkedIn, and YouTube to keep up with all our updates and special offers.