
Content Creation
Redesign of Soundraw
For Music Producers
Soundraw is audio generation software that uses generative AI. The use of generative AI makes it relatively unappealing to musicians, despite being a valuable tool.
The redesign focuses on musicians as the primary user.
Overview
Current generation process
Redesigned process




Original Workflow of the software

The user can select a genre, mood, or theme.
After making one selection in any category, they move on to the next phase.

As the user selects filters, tracks are generated.

User Research
1. Interviews and user tests with:
• Four music producers
• Two content creators
2. Reviews users had shared online
Focus of research:
- Navigation patterns
- How they felt about the process and output
- What tools they currently used for audio content creation
- Understanding existing workflows for musicians

As neither of us had extensive experience designing audio software, this research significantly shaped our final designs.
Insights
Initial interviews with both sets of users revealed that Soundraw is an effective tool for content creators.
However, musicians do not enjoy the process of creation and do not like the output they create using Soundraw.
Soundraw could potentially be an important tool for musicians.
One of the content creators bought a premium subscription to Soundraw after the interview!
How might we increase the sense of ownership music producers feel for audio content created using Soundraw?
Understanding the problem

Insight on the process of music production
Musicians have unique processes for end-to-end audio content creation. The part of the process that they consider ‘making music’ varies from person to person.
Three out of four musicians were not willing to relinquish this specific part of the process to AI.
Following Sarah, a musician, through her creation process without Soundraw.




Sarah enjoys creating a base melody. After that, she goes through several more steps to produce the track.
Creation process with Soundraw
With Soundraw, Sarah generates a completed track with a click of a button. She has no control over the base melody, and does not feel any ownership over the end outcome.



Redesign


Choose a genre to tailor future recommendations

The process begins with ‘making a base melody’, a step drawn from our learnings about musicians’ creation processes.
Our design allows for three types of melody creation, accounting for individual preferences.

Digital sequencers are commonly used by musicians.





‘Inspire Me’ allows the use of AI for audio content creation.
After creating a base melody, the user begins defining the type of sound their track will include.
The user can add instruments to their track.
When selected, the instruments are added in sequence by AI. This sequence can then be erased or edited.

‘Artist Touch’ allows users to access types of sounds used by particular artists.

Finally, the user edits the completed track, as they would in music production software.
The form visualises the layers of the track.
Changes to Human AI interaction
- AI is incorporated into parts of the process, rather than replacing it.
- The use of AI is optional and always has a manual alternative.
- The artist is an active participant in the making process, giving them ownership over the final output.
Feedback from one user
“Yes, feels like as much ownership as I’d want with an AI software without it being like my DAW. It’s a nice tool to have.”
More on our process
Redesign Version 1
Our big changes
- Drawing inspiration from audio interfaces music producers would be familiar with.
- Generating a single track as output, to make it feel more unique, and ‘less like a stock library’.

The users liked the audio-interface-inspired design, but found it confusing.
Did the redesign increase the sense of ownership?
We asked three of the users, “Does this feel like something you made?”
User 1:
Maybe? No. Maybe if I could write my own notes?
User 2:
Yes, but if the music everyone was making sounded the same as mine, then no.
User 3:
Yes, especially from the ‘Type of sound’ and ‘Edit track’ view.
Two other musicians said they would never use AI to make music.

Feedback from fellow designers
Having a ‘generate’ button at the end of the process did not feel like a natural part of the creative process.
Constant feedback, user tests and studying audio interfaces played a huge role in our redesign.
After testing, we felt confident that the redesign catered to the unique needs and preferences of our selected user.