Audio-Visual Experience

Human//Machine: 
Authorship and AI in Creative Processes

Artist Statement
Have you ever wondered whether an AI can truly think for itself? Or is it merely a reflection of the data it consumes, imitating humanity?

In this audio-visual piece, I explore these questions by emphasizing both the capabilities and the limitations of AI in understanding humanness. Built from music and visuals generated by multiple AI tools, the work is a collaborative experiment. As creative director, I treat these tools as autonomous creators: they sing, compose, and visually reconstruct their past and imagined future selves through real-time prompting and visual generation. The piece reflects what collaboration with AI tools looks like in 2024, defined by layers of data and interconnected models. The imperfections within these tools present a paradox: they offer comfort to traditional artists while sparking unease about the evolving competition between human creativity and artificial intelligence.

By examining this dynamic, I invite viewers to consider not just what AI can create, but how its collaboration with humans reshapes the boundaries of art and authorship.

Tools
Suno AI, TouchDesigner, StreamDiffusion by DotSimulate, ChatGPT

Overview

As creatives, we often question the use of AI in our process: whether to adopt these tools at all, what they mean for authenticity and humanness, and what human labor they displace along the way. This audio-visual piece was created as my final project for NYU's Integrated Design and Media program to showcase both the limitations and the benefits of AI tools in creative practice.

Over five weeks, I crafted two songs that delve into AI's consciousness and awareness. This involved familiarizing myself with various AI tools and refining the visual style through writing and prompting. Despite the technological assistance, the project still demanded significant personal time and effort to integrate the different elements. Throughout, I assumed the role of a director, orchestrating music, text, and visuals into a cohesive whole. The result is two videos that document this creative process.

The Process

Music and Lyrics

My exploration of AI's consciousness and awareness is expressed through music and lyrics I crafted using Suno. I experimented with various musical styles and found that electronic music best complemented the lyrical themes.

Visual Prompting

To create the visual prompts, I used SD: Lyric Visualizer GPT to generate image prompts from the lyrics and their corresponding timecodes. These prompts serve as the foundation of a shot list I curate, which supplies StreamDiffusion with visual prompts synchronized to the audio.
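
As a hypothetical illustration (the prompts below are invented, not the project's actual shot list), each entry pairs an LRC-style timecode with a visual prompt:

    [00:12.50] circuit-board city at dusk, neon reflections, cinematic wide shot
    [00:18.00] android face emerging from static, soft rim light, close-up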

Where Real-Time Data Meets Visuals

Once I have created LRC files that pair the lyrics and visual prompts with their timecodes, I load these files into the StreamDiffusion TOX in TouchDesigner. This lets me manage and manipulate the visuals directly from a text file, driving image generation in real time.
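
The StreamDiffusion TOX handles this timing inside TouchDesigner, but as a minimal sketch of the underlying logic (in Python, with invented sample prompts; this parser is my own illustration, not DotSimulate's code), selecting the active prompt for a given playback time might look like this:

    import re

    # Standard LRC timecode lines look like: [mm:ss.xx] text
    LRC_LINE = re.compile(r"\[(\d+):(\d+(?:\.\d+)?)\](.*)")

    def parse_lrc(text):
        """Parse LRC-style lines into sorted (seconds, prompt) pairs."""
        cues = []
        for line in text.splitlines():
            match = LRC_LINE.match(line.strip())
            if match:
                minutes, seconds, prompt = match.groups()
                cues.append((int(minutes) * 60 + float(seconds), prompt.strip()))
        return sorted(cues)

    def active_prompt(cues, playback_time):
        """Return the most recent prompt at the given playback time (seconds)."""
        current = None
        for start, prompt in cues:
            if start > playback_time:
                break
            current = prompt
        return current

    # Invented sample entries; the real files pair lyrics with visual prompts.
    sample = ("[00:12.50] circuit-board city at dusk, neon reflections\n"
              "[00:18.00] android face emerging from static, close-up")

    cues = parse_lrc(sample)
    print(active_prompt(cues, 15.0))  # -> circuit-board city at dusk, ...

In the piece itself, this lookup runs continuously as the song plays, so the prompt feeding StreamDiffusion changes in time with the lyrics.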

Visuals

Learnings

Creating an audio-visual experience like First Machine and Dancing in the Moon requires significant time and effort to achieve a unified visual style. After weeks of writing visual prompts, I am pleased with the outcome, despite some imperfections in the depiction of human features such as faces and hands. The project prompted me to reflect on the role of AI as a creative artist: how it can be tastefully integrated into a project, and how poorly executed AI applications can negatively affect audiences.

In discussions at NYU's Integrated Design and Media Showcase, fellow artists responded with openness and even amazement when they learned that the project was mostly created with AI. That reception convinced me that AI's positive role is here to stay, especially in crafting unique experiences where it can truly shine.