Final Chapter of Everything is a Remix Takes on Artificial Creativity

Kirby Ferguson has done it again with Part 4, the final segment of Everything is a Remix. The series, started in 2010, was always essential to me when I taught digital media classes.

But leave it to Ferguson to dig through the hype of AI to get to the big questions (and the answer: “no one knows for sure”).

Of all humanity’s technological advances, artificial intelligence is the most morally ambiguous from inception. It has the potential to create either a utopia or a dystopia. Which reality will we get? Just like everybody else, I do not know what’s coming, but it seems likely that in coming decades these visions of our imminent demise will seem campy and naive, because our imaginings of the future always become campy and naive.

After placing AI in the context of its portrayal in literature and film, and playing through the overhype and the critics, Ferguson takes AI image generation to “court,” where he zeroes in on the hinge copyright question: is the AI process of diffusion truly copying? The answer is unclear, and it’s going to be a long play in the courts.

He moves on to the larger criticism that AI can create but is not creative:

AIs will not be dominating creativity because AIs do not innovate. They synthesize what we already know. AI is derivative by design and inventive by chance. Computers can now create, but they are not creative. To be creative you need to have some awareness, some understanding of what you’ve done. AIs know nothing whatsoever about the images and words they generate. Most crucially, AIs have no comprehension of the essence of art: living. AIs don’t know what it’s like to be a child, to grow up, to fall in love, to fall in lust, to be angry, to fight, to forgive, to be a parent, to age, to lose your parents, to get sick, to face death. This is what human expression is about. Art and creativity are bound to living, to feeling. Art is the voice of a person, and whenever AI art is anything more than aesthetically pleasing, it’s not because of what the AI did; it’s because of what a person did.

What do you think of Ferguson’s arguments?

Lastly, he closes with the assertion of the title of the entire series: since everything we do, create, and invent is a remix, it’s free to be remixed into something new, and he boldly states, “and yes, I did it all without permission.”

On some videos about AI the big reveal is that this video was actually made by AI. But this video and this series are the opposite. Nothing has been AI except where I cited AI art. This is entirely human made. The words are all mine, but they’re merged from the thoughts of countless people. Everything you’ve seen and heard is from real filmmakers and musicians and game developers and other artists. All these thoughts and all this media were remixed by me into something new, and yes, I did it all without permission.

Everything is a Remix is a testament to the brilliance and beauty of human creativity. In particular, it’s a testament to collective creativity. Human genius is not individual; it is shared.


Well, since I am talking to myself here, I came back to something maybe lost in this brilliant piece.

Kirby Ferguson asks a really good question – especially as the main purveyor of AI has washed “Open” into its brand name – why do they or others not train AI on public domain data? Wouldn’t that take away all the problems?

Like this demo? Are there more?


Just in time! Spotted this news from The Verge about Adobe’s AI play

The tagline reads:

Adobe says the system is only trained on content that’s licensed or out of copyright — not work from artists across the internet.

But I have been scanning the Adobe info page for Firefly and the product page itself, and I have not seen any specific reference to the training data.

Wait, I do see the Q&A at the very bottom:

What is Adobe doing to ensure AI-generated images are created responsibly?

With the Content Authenticity Initiative (CAI), Adobe is setting the industry standard for responsible generative AI. With more than 900 members today, the CAI is leading the conversation around digital content attribution. The CAI offers free, publicly available open source tools and collaborates on a widely adopted technical standard in collaboration with leading technology organizations through the nonprofit Coalition for Content Provenance and Authenticity (C2PA).

Firefly will automatically attach a tag in the embedded Content Credentials to make AI-generated art and content easily distinguishable from work created without generative AI.

And also, here is the source:

Where does Firefly get its data from?

The current Firefly generative AI model is trained on a dataset of Adobe Stock, along with openly licensed work and public domain content where copyright has expired.

As Firefly evolves, Adobe is exploring ways for creators to be able to train the machine learning model with their own assets so they can generate content that matches their unique style, branding, and design language without the influence of other creators’ content. Adobe will continue to listen to and work with the creative community to address future developments to the Firefly training models.

So it is nonspecific about where the public domain content it is trained on comes from. Again, the A in AI stands for Opaque.