Ethically questionable or a creative gift? How artists are grappling with AI in their work

Cate Blanchett – acclaimed actor, film star and refugee advocate – stands at a lectern and addresses the European Union parliament. “The future is now,” she says with authority. So far, so normal – until: “But where the hell are the sex robots?”

The footage comes from a speech Blanchett actually gave in 2023 – but the rest is made up.

Her voice was generated by Australian artist Xanthe Dobbie using the text-to-speech platform PlayHT for Dobbie’s 2024 video work “Future Sex/Love Sounds” – a vision of a feminist utopia through sex robots, voiced by celebrity clones.

Much has been written about the world-changing potential of generative AI tools such as Midjourney and OpenAI’s GPT-4 – large language models (LLMs) and image generators trained on massive amounts of data, scraped from everything from academic papers, fake news and “revenge porn” to music, images and software code.

Proponents praise the technology for speeding up scientific research and eliminating mundane administrative tasks. Many professionals, however – from accountants, lawyers and teachers to graphic designers, actors, writers and musicians – are facing an existential crisis.

As the debate rages on, artists like Dobbie are using these same tools to explore the possibilities and risks of the technology itself.

“There’s a whole ethical grey area here because the legal systems are not even remotely able to keep up with the speed at which we are spreading the technology itself,” says Dobbie, whose work uses celebrity internet culture as a starting point to question technology and power.

“We see these celebrity replications happening all the time, but our own data – that of us, the little people of this world – is being siphoned off to the same extent… It’s not really the capacity of the technology (that’s bad), it’s the way flawed, stupid, evil people use it.”

Choreographer Alisdair Macindoe also works at the intersection of technology and art. His new work Plagiary, which premieres this week as part of the Now or Never festival in Melbourne and then has a season at the Sydney Opera House, uses bespoke algorithms to generate new choreographies that dancers will perform for the first time each night.

While the instructions generated by the AI are specific, each dancer can interpret them in their own way – making the resulting performance more of a human-machine collaboration.

“Questions (from dancers) are often asked at the beginning like, ‘I was told to swing my left elbow repeatedly, go to the back corner, imagine I’m a cow that’s just been born. Am I still swinging my left elbow at this point?'” says Macindoe. “That quickly becomes a really interesting discussion about meaning, interpretation and what is truth.”

Dancers respond to AI-generated instructions in Alisdair Macindoe’s “Plagiary” at the Now or Never festival. Photo: Now or Never

Not all artists are fans of the technology. In January 2023, Nick Cave published a scathing review of a ChatGPT-generated song that mimicked his own work, calling it “bullshit” and “a grotesque mockery of what it means to be human.”

“Songs are born out of suffering,” he said. “By that I mean they are based on the complex, internal human struggle of creating. And, well, as far as I know, algorithms don’t have feelings.”

Painter Sam Leach disagrees with Cave’s idea that “creative genius” is the exclusive preserve of humans, but he often encounters this kind of “blanket rejection of technology and everything associated with it.”

“I’ve never really been particularly interested in anything to do with purity of soul. I really see my practice as a way to explore and understand the world around me… I just don’t see that we can draw a line between ourselves and the rest of the world that allows us to define ‘me as a unique individual.'”

Leach sees AI as a valuable artistic tool that allows him to process and interpret a wide range of creative output. He has adapted a series of open-source models that he trained on his own paintings, as well as reference photos and historical artworks, to create dozens of compositions, some of which he turns into surreal oil paintings – such as his portrait of a polar bear standing over a bunch of chrome-plated bananas.

Fruit Preservation (2023) by Sam Leach. Photo: Alberto Zimmermann/Sam Leach

He justifies his use of source material by pointing to the hours of “editing” he spends with his brush to refine his software’s suggestions. He has even created art-critic chatbots to challenge his ideas.

For Leach, the biggest concern about AI is not the technology itself or its use, but who owns it: “We have this very, very small handful of mega-companies that own the largest models, that have incredible power.”

One of the most common concerns surrounding AI is copyright – a particularly complicated issue for artists whose intellectual property is often used to train multimillion-dollar models without consent or compensation. Last year, for example, it was revealed that 18,000 Australian titles had been included in the Books3 training dataset without permission. Booker Prize-winning novelist Richard Flanagan called it “the biggest act of copyright theft in history”.

And last week, Australian music rights management organisation APRA AMCOS released the results of a survey showing that 82% of its members were concerned that AI could limit their ability to make a living from music.

Video: Suno AI lets you create songs in seconds – but are they any good?

In the European Union, the Artificial Intelligence Act came into force on August 1 to curb these kinds of risks. Australia, by contrast, has had eight voluntary AI ethics principles in place since 2019 but still has no laws or regulations that specifically govern AI technologies.

This legal gap is pushing some artists to build their own individual frameworks – and models – to protect their work and culture. Sound artist Rowan Savage, a Kombumerri man who performs as Salllvage, has developed the AI model Koup Music with musician Alexis Weaver: a tool that morphs his voice into digital renderings of the field recordings of Country that he transforms into music, a process he will showcase at the Now or Never festival.

Savage’s abstract dance music sounds like dense flocks of electronic birdlife – hybrid life forms made of animal code that are haunting and alien, but at the same time familiar.

“Sometimes when people think of Aboriginal Australians, they think we’re connected to nature… there’s something infantilizing about that which we can counteract with technology,” says Savage. “We often think there’s this rigid division between what we call natural and what we call technological. I don’t believe that. I want to break down that division and allow the natural world to infect the technological world.”

Savage designed Koup Music so that he has full control over the data it is trained on, to avoid appropriating other artists’ work without their consent. In turn, the model keeps Savage’s recordings from being fed into the larger networks Koup is built on – recordings he considers to be the property of his community.

“Personally, I feel it’s OK to use the recordings I make of my Country, but I wouldn’t necessarily put them out into the world (for anyone and everyone to use),” Savage says. “(I wouldn’t feel comfortable doing that) without speaking to key people in my community. As Aboriginal people, we are always community-focused; there is no individual ownership in the same way that there may be in the Anglo-Saxon world.”

For Savage, AI offers great creative potential – but also “quite a few dangers”. “My concern as an artist is: How do we use AI in a way that is ethical but also really allows us to do different and exciting things?”
