- Date
- 13 AUGUST 2022
- Author
- GLORIA MARIA CAPPELLETTI
- Image by
- COURTESY OF THE ARTIST
- Categories
- Music
Holly Herndon, Mat Dryhurst and their AI baby named Spawn
For Mat Dryhurst and Holly Herndon, being informed is crucial; they believe artists need to be on the front lines of this technological revolution. The husband-and-wife team chose this moment to launch their latest project -- and to open a conversation about ownership of the data used to train AI -- because of the massive public reaction to generative tools in recent months. Spawn itself has been years in the making: after receiving a German grant in 2018 dedicated to composers adopting new technologies in their work (in honour of Beethoven, no less), Herndon and her partner, the philosopher and digital artist Mat Dryhurst, along with the musician and software developer Jules LaPlace, bought a gaming PC with GPUs and customized it with no specific end goal in mind. For her thesis, the three built and trained an artificial intelligence called Spawn to make music. [Sources: 2, 4, 5, 7]
In keeping with that mentality, Dryhurst and Herndon are developing a standard they call Source+, designed to let artists opt in -- or out -- of having their work used as training data for generative AI tools. They hope the developers of AI generators will recognize and respect the wishes of artists whose work might be used to train such systems. For their part, researchers at Google working on machine learning for art are aware of the concerns musicians may have about automation, and are eager to highlight how AI could be used to build new, engaging tools rather than replace artists entirely. What is legitimate and what is moral are different questions -- which is where Dryhurst, an artist and academic, and Herndon, a prominent musician, weigh in. [Sources: 5, 7, 8]
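The article gives no technical details of Source+, so purely as a hypothetical sketch of what an opt-in/opt-out consent check might look like on the data-pipeline side, the snippet below filters candidate works against a permissions registry before they would be added to a training set. The file format, field names, and default-to-opt-out behaviour are invented for illustration and are not the Source+ specification.

```python
# Hypothetical sketch of honouring artist opt-in/opt-out preferences when
# assembling a training set. The JSON layout and field names are invented
# for illustration; they are NOT the actual Source+ standard.
import json
from dataclasses import dataclass


@dataclass
class Work:
    url: str
    artist_id: str


def load_permissions(path: str) -> dict[str, str]:
    """Map artist_id -> "opt_in" | "opt_out" from a consent registry file."""
    with open(path, encoding="utf-8") as f:
        return json.load(f)


def filter_training_candidates(works: list[Work],
                               permissions: dict[str, str],
                               default: str = "opt_out") -> list[Work]:
    """Keep only works whose artists have explicitly opted in.

    Artists missing from the registry fall back to `default`; defaulting to
    opt-out is the consent-first stance the article describes artists wanting.
    """
    return [w for w in works
            if permissions.get(w.artist_id, default) == "opt_in"]
```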
Herndon created the training sets that Spawn used to make its musical contributions. Spawn uses machine learning to generate sounds from scratch, singing by mimicking the voices of Herndon, her partner Mat Dryhurst, the musician and developer Jules LaPlace, and an ensemble of friends and other people Herndon knows with vocal training or musical experience, who gather at her house in Berlin every week. [Sources: 0, 4]
First, Herndon and her collaborators trained the AI, called Spawn, on their own voices, then invited an audience of around 300 willing participants to perform and record a data set of voices for the AI to consume. They also recorded an entire auditorium at the cacophonous Martin-Gropius-Bau in Berlin, to make public voices available to Spawn for training as well. To create Godmother, for instance, the duo fed Spawn percussion tracks by the experimental Indiana-based electronic musician Jlin -- whom they consider the AI child's godmother -- and Spawn performed them back in Herndon's voice. Another project, Holly+, launched last year, allows anyone to upload a polyphonic track, which is then sung by a deepfake version of Herndon's voice created using generative AI tools. [Sources: 0, 2, 4, 5]
For Herndon, Holly+ is a way of testing the waters for tech-enabled play with identity; its next stage is an interface that lets anyone type in lyrics and generate an audio version of her speaking voice, which they can then use however they want. Proponents of Web3 and blockchain technologies, Herndon and Dryhurst are exploring ways for works created with her voice to earn royalties through smart contracts written into the technology. Thanks to a decentralized autonomous organization, the Holly+ DAO, built around her machine learning projects, musicians can not only use her voice and earn money from it, but also fund further development of those instruments and act as a governing body over the proper use of her digital twin. On the question of whether the AI performer will be publicly accessible in future, one experimental musician pointed out that nothing currently prevents anybody from creating their own personal, artificially intelligent performer. [Sources: 1, 3, 6]
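The piece does not describe how those smart contracts are actually written, so the toy model below (in Python rather than on-chain code) only illustrates the general idea of a programmable royalty split: each payment received for a work made with the digital voice is divided among predefined stakeholders. The stakeholder names and percentages are invented and are not the Holly+ terms.

```python
# Toy model of a programmable royalty split, NOT the actual Holly+ smart
# contract: each payment for a work made with the digital voice is divided
# among predefined stakeholders according to fixed shares.
from decimal import Decimal

# Hypothetical shares; the real Holly+ split is not described in the article.
SPLITS = {
    "voice_owner": Decimal("0.10"),   # the artist whose voice model is used
    "creator": Decimal("0.80"),       # the musician who made the new work
    "dao_treasury": Decimal("0.10"),  # funds further instrument development
}


def distribute(payment: Decimal) -> dict[str, Decimal]:
    """Split a payment across stakeholders; the shares must sum to 1."""
    assert sum(SPLITS.values()) == Decimal("1")
    return {party: (payment * share).quantize(Decimal("0.01"))
            for party, share in SPLITS.items()}


print(distribute(Decimal("100.00")))
# {'voice_owner': 10.00, 'creator': 80.00, 'dao_treasury': 10.00}
```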
To pursue this research, Herndon is building an AI collaborator that lives in a high-end gaming computer. When she wrote her 2012 record, she was a scholar by day and a laptop performer by night, programming her CPUs and instruments in between. Seven years ago, Herndon -- whose PhD is from Stanford University's Center for Computer Research in Music and Acoustics -- was one of a handful of musicians making the case for laptops as instruments. Like Herndon and Dryhurst, Ash Koosha notes the importance of artists getting involved with AI developments. [Sources: 0, 1, 7]
Spawning, which formally launched today, has also developed Have I Been Trained, a website that lets artists check whether their work is among the 5.8 billion images in the LAION-5B data set used to train AI image generators such as Stable Diffusion and Midjourney. [Sources: 5]
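The article does not explain how that lookup works under the hood, and the real Have I Been Trained service searches the LAION-5B data itself rather than relying on exact matches, so the sketch below is a rough, hypothetical illustration only: it approximates "has my work been scraped?" as a hash-membership check of an artist's local files against a (placeholder) index of digests from a scraped data set.

```python
# Toy illustration only: approximate a "was my work included?" check as an
# exact-hash lookup against a local index. This is NOT how the Have I Been
# Trained site is implemented; it is just the simplest possible sketch.
import hashlib
from pathlib import Path

IMAGE_SUFFIXES = {".png", ".jpg", ".jpeg", ".webp"}


def sha256_of_file(path: Path) -> str:
    """Return the SHA-256 hex digest of a file's bytes."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()


def check_portfolio(portfolio_dir: str, known_hashes: set[str]) -> list[Path]:
    """Return portfolio images whose hashes appear in a known index.

    `known_hashes` stands in for a hypothetical, locally available set of
    digests for images in a scraped training set; building such an index is
    the hard part and is out of scope here.
    """
    matches = []
    for path in Path(portfolio_dir).glob("**/*"):
        if path.suffix.lower() in IMAGE_SUFFIXES:
            if sha256_of_file(path) in known_hashes:
                matches.append(path)
    return matches


if __name__ == "__main__":
    # Hypothetical usage: both the directory and the hash index are placeholders.
    found = check_portfolio("my_portfolio", known_hashes=set())
    print(f"{len(found)} image(s) matched the index")
```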
##### Sources #####
[0]: www.artnews.com
[1]: musictech.com
[2]: www.vice.com
[3]: pitchfork.com
[4]: www.vogue.com
[5]: www.inputmag.com
[6]: ars.electronica.art
[7]: www.dazeddigital.com
[8]: jnack.com