Supporting Work

Unit 3

Stable Diffusion video clips and AI generation frames:

Green Fairy Stable Diffusion Video Render:

‘Ferdinand Lured by Ariel John Everett Millais prompt generation visual demonstration’, 2023, video record of edited original footage compared with the results of Stable Diffusion AI generated frames placed together, 00:00:54

Page 5 of this section of my website details how, in Unit 2, I experimented with Stable Diffusion video rendering. It presents, in careful detail, each step of the process I completed in order to make the AI video renderings that I have continued to generate into Unit 3. I have used some of the generations I previously made in Unit 2 in my Unit 3 work, including the ‘Air Spirit Prompt’ video generation, the ‘Sea Nymph’ video generation and the ‘Harpy, AI Model and Clothed Sea Nymph with Wings’ video generation. For the Unit 3 Summer Exhibition, however, I generated two additional scenes, using Stable Diffusion in the same Google Colab notebook that I worked with in Unit 2. The two new scenes generated for the Summer Exhibition are depicted on this page.
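For anyone curious about the mechanics behind these renders, the sketch below illustrates the general idea of a frame-by-frame img2img pass written with the Hugging Face diffusers library. It is a minimal illustration only, not the actual Colab notebook I used: the checkpoint, prompt, strength setting and file paths are all placeholder assumptions.

```python
# Minimal sketch of a frame-by-frame Stable Diffusion img2img render
# (illustrative only -- not the exact Colab notebook used for this work).
from pathlib import Path

import torch
from PIL import Image
from diffusers import StableDiffusionImg2ImgPipeline

# Hypothetical paths and settings, chosen for illustration.
FRAMES_IN = Path("frames/original")      # frames exported from the edited footage
FRAMES_OUT = Path("frames/generated")    # AI-generated frames, later reassembled into video
PROMPT = "Ferdinand Lured by Ariel John Everett Millais"

pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

FRAMES_OUT.mkdir(parents=True, exist_ok=True)
generator = torch.Generator("cuda").manual_seed(42)  # fixed seed for more consistent frames

for frame_path in sorted(FRAMES_IN.glob("*.png")):
    init_image = Image.open(frame_path).convert("RGB").resize((512, 512))
    result = pipe(
        prompt=PROMPT,
        image=init_image,
        strength=0.55,        # how far the AI may depart from the original frame
        guidance_scale=7.5,   # how strongly the prompt guides the generation
        generator=generator,
    ).images[0]
    result.save(FRAMES_OUT / frame_path.name)

# The generated frames can then be placed back together (e.g. with ffmpeg)
# and compared side by side with the original footage.
```

In a setup like this, the strength value roughly controls how far each generated frame is allowed to drift from the original footage, which is what makes it possible to place the AI frames alongside the source video for comparison.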

The first of the two scenes that the AI generated was specifically guided by me to resemble the John Everett Millais painting Ferdinand Lured by Ariel, painted in 1850. The depiction of Ariel in this painting was criticised at the time for its disturbing portrayal of Ariel (The Victorian Web, 2021).

Despite the haunting faces surrounding Ariel’s floating form in the work, I was struck by the fragility of Ariel’s small figure. I wondered whether using the prompts ‘Ferdinand Lured by Ariel John Everett Millais’ and ‘Ariel John Everett Millais’ would produce an animated portrayal of the painting with myself as its model, or whether the result would be more varied. I hoped to combine the sense of fragility and the grotesque in my own presentation of Ariel, using my body and the isolated screaming faces of the Unit 2 AI generation ‘A Beautiful Harpy Screaming’, with the painterly quality of John Everett Millais’ artworks.

The result was fascinating to me, since it appeared to combine Pre-Raphaelite-like imagery of folds of cloth and drapery with fairy designs resembling digital artworks of fairies. Though I had hoped for imagery that more closely resembled the original painting by Millais, I found this generation wonderfully surreal and perhaps more interesting and unique than an animation inspired by a single image.

One aspect of Stable Diffusion rendering that I have noticed when working with my mythical prompts is the painterly quality of many of the images the AI generates. The textures of skin and clothes often appear as if painted with a digital brush. This partly inspired my attempt to have the AI capture the feeling of an animated painterly work.

When reflecting on this work, I have noticed that incorporating John Everett Millais’ name into the prompt further complicates the collaborative relationship between myself as an artist and a generative AI model. Millais has also been drawn into this collaborative effort, though without the artist’s consent and long after his death. This raises further moral dilemmas surrounding the existence of this technology and the datasets that allow the AI to render my videos.

My Thoughts:

Has the mention of Millais’ name in the prompting of the AI only added to the controversy surrounding the subject of plagiarism when using this technology?

Or is John Everett Millais honoured as a reference in the work?

Has the work breathed new life into his paintings and my own image simultaneously? Or is it a more demonic amalgamation involving theft?

Perhaps there is another occult ritual in play: harnessing the ghost of an iconic artist in a soulless machine, layered over my own digitised, physical presence.

It is worth noting that this generation was completed with the full acknowledgement and understanding that I am using AI for research purposes, and that the copyright law surrounding many uses of generative AI models remains the subject of ongoing debate.

Sea Nymph Community Stable Diffusion Video Render:

A Sea Nymph Community prompt generation visual, 2023, video record of edited original footage compared with the results of Stable Diffusion AI generated frames placed together, 00:01:32

The second of the two scenes I generated using Stable Diffusion was a ‘Sea Nymph Community’ scene. The video of the transformation has had to be posted to Vimeo due to YouTube’s content restrictions. In the first Sea Nymph scene the AI rendered in Unit 2, I used the prompt ‘clothed sea nymph’ in order to avoid the many generated images of nudity. This time, however, I decided to accept the generation that the AI provided for the ‘Sea Nymph Community’ prompt, although in the Summer Exhibition I used layering to blur the generations with my body, allowing for the suggestion of sea nymphs and nudity without explicit content, given the families of the artists attending the exhibition. I thought this balance allowed for an interesting combination of the AI and my body. Still, I am curious to investigate in depth the dataset biases and the explicit content of many of these images of feminine mythical creatures as I continue my practice beyond the MA.

My Thoughts:

How many of the naked bodies in these images were taken from photography?

How many from pornography?

And how many from painted, drawn or otherwise crafted depictions of sea nymphs?

None of the bodies in the video or in the screenshots presents the feminine human form in a particularly realistic way. The dataset appears to distort, stretch and hallucinate different human limbs, fitting them against the shapes of my body and the fabrics I am wearing. It is interesting to me that in the majority of frames the AI generation appears to recognise my feminine features, including the presence of my breasts and my long hair. It also keeps the pale tone of my makeup on my white skin, further emphasising a possible bias in the dataset.

It would be interesting, in a future experiment, to compare a video render that uses my body as a guide for the AI with a video of sea nymphs generated by the AI without my body as the guideline. Would the appearance of the sea nymphs change? Would they remain portrayals of nude, feminine bodies with sea creatures and flowers in their hair? I am reminded of Eryk Salvaggio’s ‘Flowers Blooming Backward Into Noise’, an animated video ‘documanifesto’ about AI art. The video shows an AI-generated depiction of a butterfly, stating in the narration: ‘This is not an image of a butterfly. It is the stereotype of a butterfly.’ (Salvaggio, 2023).

My Thoughts: Following this logic, I am left wondering where I begin and where the stereotype of a Sea Nymph Community ends.
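Should I carry out the comparison described above, a minimal sketch of how the two renders might be produced, again with the diffusers library, could look like the example below. The model, seed, strength and file paths are placeholder assumptions of my own, not settings taken from the original notebook.

```python
# Sketch of the proposed comparison: the same 'Sea Nymph Community' prompt
# rendered (a) guided by a frame of my own footage and (b) with no guide at all.
# Illustrative assumptions only -- model, seed and settings are placeholders.
import torch
from PIL import Image
from diffusers import StableDiffusionPipeline, StableDiffusionImg2ImgPipeline

PROMPT = "Sea Nymph Community"
SEED = 7  # shared seed so the two results are as comparable as possible

txt2img = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")
# Reuse the same model components for the guided (img2img) version.
img2img = StableDiffusionImg2ImgPipeline(**txt2img.components)

# (a) Guided: one frame of the original footage steers the composition.
guide_frame = Image.open("frames/original/frame_0001.png").convert("RGB").resize((512, 512))
guided = img2img(
    prompt=PROMPT,
    image=guide_frame,
    strength=0.55,
    generator=torch.Generator("cuda").manual_seed(SEED),
).images[0]

# (b) Unguided: the dataset's own 'stereotype' of a sea nymph community.
unguided = txt2img(
    prompt=PROMPT,
    generator=torch.Generator("cuda").manual_seed(SEED),
).images[0]

guided.save("comparison_guided.png")
unguided.save("comparison_unguided.png")
```

Placing the two outputs side by side would make it possible to see how much of the final imagery comes from my body as a guide and how much comes from the dataset's own idea of a sea nymph.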

Bibliography:

Salvaggio, E. (2023) ‘Flowers Blooming Backward Into Noise (2023)’. YouTube. [online]. Available from: https://www.youtube.com/watch?v=zNA7sPm-zlQ [Accessed: 26th October 2023].

The Victorian Web: literature, history & culture in the age of Victoria. (2021) Ferdinand Lured by Ariel. [online]. Available from: https://victorianweb.org/painting/millais/paintings/3.html [Accessed: 29th October 2023].