Riffusion, an open-source tool that uses Stable Diffusion to generate music from visual cues, recently secured a $4 million investment after its creators pivoted it into a commercial enterprise. Developers Seth Forsgren and Hayk Martiros initially built Riffusion as a hobby project, and it has since drawn interest from tech companies such as Meta, Google, and ByteDance. Music is a universal medium of artistic expression, and generative AI tools like Riffusion, Suno, and Meta’s AudioCraft offer amateurs and professionals new ways to compose, share their creations, and interact with one another. However, the blending of AI with the arts remains a sensitive topic, with many artists voicing concerns about AI’s move into music. Meanwhile, the “No Fakes Act” in the U.S. aims to curb unauthorized AI-generated reproductions of actors’ and singers’ voices and likenesses to protect artists’ rights, and Universal Music Group’s objection to the unauthorized training of generative AI on its artists’ music highlights potential copyright violations. Riffusion’s monetization strategy remains undisclosed, but its collaborations with established artists suggest possible directions for the platform. This is good news for developers: generative AI tools are opening new creative possibilities even as measures like the No Fakes Act work to protect artists’ rights.
You can read more about this topic here: Decrypt: Blending Tech and Tunes, $4 Million Riffusion Raise Lights Up AI Music Scene