ChromaSonic Evolutionary Sound Discovery for Creative Professionals

Status: Completed
Duration: 01.09.2025–31.01.2026

ChromaSonic aims to transform research-based evolutionary sound synthesis technology into a user-friendly software package that enables creative professionals to discover entirely new sounds through Quality Diversity algorithms, without requiring technical expertise in AI or sound synthesis.


ChromaSonic has developed a working platform for evolutionary sound discovery. At synth.is users can explore and interact with evolutionary populations directly through a genome pool view, selecting parents and breeding new offspring to guide the evolutionary search.


Objectives and Background

Modern sound synthesis technology makes it possible to produce virtually any sound, but accessing the full breadth of sonic possibilities remains a challenge. Using existing tools requires expertise that can be equally rewarding and limiting: proficiency in certain methods of sound synthesis enables rich creative expression, but also confines what one discovers to what those methods and skills make readily available. When prompting AI models trained on existing audio, our vocabulary and the training data further limit the sound world we can describe and access.

The ChromaSonic project set out to develop a platform for evolutionary sound discovery, applying computational processes inspired by how nature has evolved its enormous diversity of organisms without any apparent goal other than to survive in different conditions. Rather than optimising toward specific sonic targets or recombining existing audio, the project employs evolutionary algorithms that search broadly in the space of all possible sounds, with the aim of uncovering unexpected sonic material that may lead creative processes in new directions.
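The broad, goalless search described above can be illustrated with a minimal sketch of MAP-Elites, a common Quality Diversity algorithm. The genome, fitness function, and behaviour descriptor here are toy placeholders invented for illustration, not the project's actual sound-synthesis representation:

```python
import random

random.seed(1)

def evaluate(genome):
    """Toy stand-ins: a 'quality' score and a behaviour descriptor (niche index)."""
    quality = -abs(sum(genome))                          # placeholder fitness
    descriptor = int(sum(abs(g) for g in genome)) % 10   # placeholder niche
    return quality, descriptor

def map_elites(iterations=1000, genome_len=4):
    archive = {}  # niche -> (quality, genome): one elite per behavioural niche
    for _ in range(iterations):
        if archive:
            # Mutate a random elite rather than optimising toward a single target,
            # so the search spreads across many niches at once.
            parent = random.choice(list(archive.values()))[1]
            child = [g + random.gauss(0, 0.3) for g in parent]
        else:
            child = [random.uniform(-2, 2) for _ in range(genome_len)]
        quality, niche = evaluate(child)
        # Keep the child if its niche is empty or it beats the incumbent elite.
        if niche not in archive or quality > archive[niche][0]:
            archive[niche] = (quality, child)
    return archive

archive = map_elites()
```

The key property is that the result is not a single optimum but an archive of diverse elites, mirroring how the platform surfaces many qualitatively different sounds rather than converging on one.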

The project builds on doctoral research at RITMO Centre for Interdisciplinary Studies in Rhythm, Time and Motion at the University of Oslo, investigating Quality Diversity evolutionary algorithms for sound synthesis. The qualification project aimed to bridge this research with the needs of creative practitioners, by developing a user-facing platform and validating interest in evolutionary sound discovery as a creative tool.

Results Achieved

The project has produced a working platform that couples an intuitive user interface with underlying evolutionary sound discovery processes. The interface takes inspiration from familiar social media paradigms: users can scroll feeds of sounds discovered and adopted by others in the network, browse ongoing discoveries from connected evolutionary processes, and choose to adopt sounds they find interesting (Figure 1). Users can explore and interact with evolutionary populations directly through a genome pool view, selecting parents and breeding new offspring to guide the evolutionary search (Figure 2).
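The parent-selection-and-breeding interaction in the genome pool can be sketched as uniform crossover followed by mutation. The genome encoding below (a short parameter vector) and the pool contents are hypothetical, standing in for the platform's actual sound genomes:

```python
import random

random.seed(7)

def breed(parent_a, parent_b, mutation_rate=0.1, sigma=0.2):
    """Uniform crossover of two parent genomes, followed by Gaussian mutation."""
    child = [random.choice(pair) for pair in zip(parent_a, parent_b)]
    return [g + random.gauss(0, sigma) if random.random() < mutation_rate else g
            for g in child]

# Hypothetical genome pool: each entry is a parameter vector for one sound.
pool = [[0.1, 0.5, 0.9], [0.8, 0.2, 0.4], [0.3, 0.3, 0.7]]
parents = random.sample(pool, 2)   # the user's selection in the genome pool view
offspring = breed(*parents)        # a new candidate sound to audition
```

In the platform, the user's choice of parents plays the role of the selection pressure, steering the evolutionary search toward sounds they find interesting.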

Screenshot from synth.is
Figure 1: The sound feed, showing sounds discovered and adopted by users in the network.
Screenshot from synth.is
Figure 2: The genome pool, where users explore and breed sounds from an evolutionary population.

Discoveries can be collected into personal sound gardens (Figure 3) and rendered into custom virtual instruments, which are made available on a shared marketplace (Figure 4). The resulting instruments can be downloaded and used in standard music production software (Figure 5), bridging the gap between evolutionary exploration and practical creative workflows.

Screenshot from synth.is
Figure 3: The sound garden, where users collect and curate their favourite sonic discoveries.
Screenshot from synth.is
Figure 4: The instruments marketplace, where virtual instruments rendered from discovered sounds are available for download.
Screenshot from synth.is
Figure 5: A sample-based virtual instrument rendered from evolutionary sound discoveries, here loaded into the DecentSampler player, ready for use in music production software.

Enabling this coupling between a simple sound-feed interface and evolutionary processes required careful architectural design and substantial implementation effort. The system comprises several interconnected components: a user-facing web application, an evolution manager service, a recommendation engine, an authentication service (based on PocketBase), and a graph database (ArcadeDB), whose relationship and vector-comparison capabilities are used to represent evolutionary lineages and sonic similarities.
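The two roles of the graph database can be illustrated with a much-simplified in-memory model: lineage edges linking offspring to parents, and embedding vectors compared by cosine similarity. In the actual system these would live in ArcadeDB; the names and vectors below are invented for illustration:

```python
import math

# Simplified stand-in for the graph database contents:
# lineage edges (sound -> its parents) and per-sound embedding vectors.
lineage = {"child1": ["parentA", "parentB"], "parentA": [], "parentB": []}
embeddings = {
    "child1":  [0.9, 0.1, 0.3],
    "parentA": [0.8, 0.2, 0.3],
    "parentB": [0.1, 0.9, 0.5],
}

def ancestors(sound, graph):
    """Walk lineage edges to collect every ancestor of a sound."""
    seen, stack = [], list(graph.get(sound, []))
    while stack:
        s = stack.pop()
        if s not in seen:
            seen.append(s)
            stack.extend(graph.get(s, []))
    return seen

def cosine(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

# Nearest sonic neighbour of child1 among the rest of the archive.
others = [k for k in embeddings if k != "child1"]
nearest = max(others, key=lambda k: cosine(embeddings["child1"], embeddings[k]))
```

Lineage traversal supports features such as tracing where a discovered sound came from, while vector comparison supports similarity-based browsing and recommendation.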

R&D Tasks and Key Contributors

The technical implementation focused on developing and integrating the platform's micro-service architecture, including the web application frontend, the evolutionary process backend, and the virtual instrument rendering pipeline. This work was primarily carried out by the project leader, building on the kromosynth software ecosystem developed during doctoral research at UiO's Department of Informatics.

In parallel, customer discovery was conducted through in-depth interviews with creative practitioners from varied backgrounds, based on the Jobs-to-be-Done and Mom Test methodologies. Contact was established with musicology students at UiO and with practitioners working in music production, film scoring, and game audio. Mentorship from the UiO GrowthHouse and advisors from RITMO and IMV at UiO provided guidance on both the commercial and research aspects of the project.

Assessment of Implementation and Resources

The funding enabled the development of the first working iteration of the envisioned sound discovery platform, while also providing resources to explore customer development and validate the vision's potential utility. A limitation is that the implementation and customer discovery were primarily carried out by a single individual. Although the project benefited from valuable mentorship, it would have gained from deeper, more hands-on involvement by additional team members. The resources were used efficiently to produce a functional prototype that is close to public deployment.

Anticipated Significance and Benefits

The in-depth interviews revealed two main themes of potential value. First, practitioners expressed interest in overcoming workflow stagnation: having become proficient and comfortable with certain approaches, they would appreciate opportunities to break out of creative ruts and establish unique sonic identities. One concrete example was a film composer who works under pressure and has established a kind of signature sound, but would find it embarrassing to simply repeat the soundscape of a previous commission, and would appreciate a tool that exposes new sonic paths.

Second, interviewees commonly expressed interest in participating in a community of sound discovery, collaborating with evolutionary processes and observing what others might be finding. This community aspect could serve as a primary differentiator for the platform. For the research field, the project demonstrates how Quality Diversity evolutionary algorithms, commonly developed for robotics applications, can be adapted and made accessible for creative practitioners in the sound domain.

Plans for Dissemination and Utilisation

The platform is close to public deployment at synth.is (Figure 6). The plan is to soft-launch by communicating directly with interviewees and mailing-list subscribers, followed by broader outreach through relevant social media channels and forums. Introductory videos are being prepared and published at video.synth.is and on the project's YouTube channel. A cookieless analytics solution will be integrated to monitor user behaviour, and that data will shape future directions. All interactions will initially be free; migration to a freemium model is one option, to be assessed against collected data, while taking care to avoid enshittification of the platform.

Screenshot from synth.is
Figure 6: The public-facing landing page at synth.is

Results Expected After Completion

Following public deployment, the platform will serve as a tool for evaluating actual interest in, and the utility of, the evolutionary sound discovery approach. Performance indicators will shape plans for future development: the most-used features will be prioritised for further refinement. A lack of overall uptake would point to potential pivots, such as placing more emphasis on exploring the sonic capabilities of existing audio plug-ins (VST/AU), where evolutionary techniques may open up new worlds within synthesiser plug-ins that practitioners already own but have not fully explored.

Collaboration with academic advisors will be maintained and expanded. Support for expanding the team will be sought through further grant applications. The doctoral research underlying this platform continues to be disseminated through academic publications, with the PhD thesis and its constituent papers providing the scientific foundation for the evolutionary sound discovery approach.

Funding

Funded by The Research Council of Norway

Project number: 360371

ChromaSonic is funded by the Research Council of Norway's “Qualification – Research Commercialisation” programme.

Published Mar. 2, 2026 9:50 AM - Last modified Mar. 4, 2026 12:38 PM