The Grey Zone: Use/Attribution of Statistically Generated Imagery

I ran a long topic here starting in June 2022 as I explored and read about the rapidly moving space of what I formerly referred to as AI generated media.*

After an OEG Voices podcast recording session today with @poritzj, I am refraining from using the AI phrase. Jonathan makes a strong case that what we are really talking about has nothing to do with intelligence and is more properly called “large statistical models.” Stay tuned for that episode to be out soon at https://voices.oeglobal.org/

The Girl in the Yellow Hat

I am talking about these kinds of images (screenshot) I generated using DALL-E (it’s now open to anyone to use), which even managed not to get tripped up by my typo.

“How does this machine work?” she wonders. “And what can I do with what it spits out? Who does it belong to?”

Lifting some from what I posted in the Creative Commons cc-edu Slack channel…

Beyond the key issues of the unclear bounds on how concepts of reuse apply to the training set, and the muddy waters of copyright, I’d been thinking more practically about what happens as content creators start to use this imagery (and the other kinds to come).

What/how can they be licensed? And how the heck can they be attributed? I was talking about this in the office hours today with @nate, who shared an example of how Creative Commons is starting to attribute such images.

I commented that we have almost none of the TASL elements (from CC Attribution Best Practices: Title, Author, Source, License) to use.

But then I happened to look at my DALL-E account and noticed there is now a publish button that makes the work public (we now have the S) -- see https://labs.openai.com/s/YDB15XNB5C9WytXJz9F3FSJW -- and they co-label me and DALL-E (“Human and AI”) as authors.

Their terms of use indicate (section 2) that DALL-E images can be used for any purpose (commercial too), but they also (section 6) assert ownership of the image. I own my prompt.

Who can explain all this to an educator or student now? What happens when they create such images and want to use them in open content?

Tools Do Not Get Credit, So Who Does?

@poritzj replied in that Slack discussion (and discussed more in today’s podcast session):

Note that the CC AI WG wrote a bit about this whole topic…We essentially said that one should always think of the phrase “AI” as fancy, (albeit nonsensical,) PR-speak for “large statistical model” or, more succinctly, “tool.”

So giving you and the tool credit (as you said “Human and AI”) would be like giving copyright ownership for a work to “Ansel Adams and Nikon camera” or “Norman Rockwell and His Favorite Paintbrush”: pretty silly.

Yes, that makes sense. I do many of my graphics in Photoshop but never give my tool credit.

But…

At the same time, how much credit can I, the human, take? I did not do much beyond typing words in a box. I hardly feel like I created anything; if anything, I just iterated until the machine spit out something I liked. How much agency have I exerted?

And however we characterize these tools, I am able to make images I cannot quite as easily find elsewhere. Going back to the girl in the yellow hat, Google Images (regardless of rights settings) gives me nothing close, and even reducing the keywords to girl yellow hat adjusts machine gets nowhere close.

Likewise, Openverse yields de nada on the full prompt (again, too many keywords), and even reducing to girl yellow hat leaves me empty photo handed.

And so it gets murky. Newer mobile phone cameras and web tools make it easy to take a dull or poorly lit photo and apply effects that before would have taken manual manipulation in software. Whether it is artificial, intelligent, or just a statistical model, it’s pretty blurry who can take ownership.

More so, how do we credit/attribute such media? Who gets credit? Are we on some new frontier of a wild wild image west?

A New or Just Unknown Frontier of Attribution?

Following the Creative Commons example Nate shared with me, but now adding links to the source image and the OpenAI terms of use, is it something like:

DALL·E generated image based on the prompt “A robot sitting on a horse on a western plain Impressionist style”
“Robot Wild West” by Alan Levine, image generated and regenerated by the DALL-E 2 AI platform with the text prompt “A robot sitting on a horse on a western plain Impressionist style.” OpenAI asserts ownership of DALL-E generated images; Alan dedicates any rights he holds in the image to the public domain via CC0.

That’s quite a mouthful.

Is anyone else curious about this stuff? Are you seeing people making use of these images? Where do we go from here?

5 Likes

Let’s hear it from the Bulletin of the Atomic Scientists

These tools represent the complete corporate capture of the imagination, that most private and unpredictable part of the human mind. Professional artists aren’t a cause for worry. They’ll likely soon lose interest in a tool that makes all the important decisions for them. The concern is for everyone else. When tinkerers and hobbyists, doodlers and scribblers—not to mention kids just starting to perceive and explore the world—have this kind of instant gratification at their disposal, their curiosity is hijacked and extracted. For all the surrealism of these tools’ outputs, there’s a banal uniformity to the results. When people’s imaginative energy is replaced by the drop-down menu “creativity” of big tech platforms, on a mass scale, we are facing a particularly dire form of immiseration.

Apart from crediting Alan and DALL-E, what about the creators of the millions of images that OpenAI accessed and exploited without consent in order to “create” this image? Without those original creators, there would be nothing here. DALL-E hasn’t created anything by itself, nor has it been “inspired” by the original images. Instead, the output is a very clever merging of those original images, triggered by your sentence.

2 Likes

I thought about creating an image of what one million attribution statements might look like: estimating maybe 25,000 printed pages, which laid end to end would stretch perhaps four and a quarter miles!
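(Back-of-envelope, with my own made-up assumption of roughly 40 short attribution statements per US letter page -- nothing official here:)

```python
# Rough math for a million attribution statements; the 40-per-page figure
# is just my guess.
statements = 1_000_000
per_page = 40
pages = statements / per_page                # 25,000 pages
inches_end_to_end = pages * 11               # US letter pages laid end to end
miles = inches_end_to_end / (12 * 5280)      # inches -> feet -> miles
print(f"{pages:,.0f} pages, about {miles:.2f} miles")  # 25,000 pages, about 4.34 miles
```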

Is that how DALL-E works, Wayne? That’s the thing: when any of the people behind it try to explain exactly what it does, things get really vague. Is it really a merging of images? If it starts to generate the girl, before the yellow hat part, what is it doing? Is there some kind of diffusion process that does some creative amount of iterations drawing upon all of its inventory of “girls,” or is it picking one and then generating variances from that as a core? How does this merging happen?

We just have no intuition or understandable feel for how it makes the image. If I am learning to paint a tree, and I study a collection of a hundred trees to inform how I paint, am I then responsible for giving credit to every item I looked at?

I am not countering your suggestion, but if the girl in the yellow hat is just plucked out of the pile and blurred with an image of a machine, well yes, that’s taken without credit.
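For whatever it helps, here is my own very rough sketch of how I understand text-conditioned diffusion sampling in general -- not OpenAI’s code, and the function bodies are stand-ins I made up so the loop actually runs:

```python
import numpy as np

rng = np.random.default_rng(0)

def embed_text(prompt):
    # Stand-in: a real system maps the prompt to a learned embedding vector.
    return rng.normal(size=16)

def predict_noise(image, step, text_embedding):
    # Stand-in for the trained network that guesses what "noise" to strip away
    # at this step, conditioned on the prompt; here it is random so the code runs.
    return rng.normal(size=image.shape) * 0.1

def generate_image(prompt, steps=50, size=(64, 64, 3)):
    text_embedding = embed_text(prompt)
    image = rng.normal(size=size)      # start from pure static, not a stored photo
    for step in reversed(range(steps)):
        noise = predict_noise(image, step, text_embedding)
        image = image - noise          # peel away a little predicted noise each step
    return image

picture = generate_image("girl in a yellow hat adjusts a machine")
print(picture.shape)  # (64, 64, 3)
```

In this sketch, at least, nothing is plucked out of a pile of stored images; the training images would have shaped the noise-predictor, and the picture is built up step by step from static. Whether that makes the credit question any easier, I honestly do not know.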

2 Likes

I come from the Kirby Ferguson school of “everything is a remix,” so I have been looking at DALL-E as one big old remix machine. Art & music both have deep cultural traditions of mashing up and remixing previous works to create something new. Is this really all that different? I see parallels between what DALL-E does and what a talented mash-up or remix musician like Girl Talk does -- taking bits and pieces of what is in the culture, repurposing them and creating something new. Building on what came before. Is DALL-E that much different in this respect?

1 Like

I’m so with you on everything is a remix Clint! I guess looking at platforms like DALL-E, the things that stand out for me are the ways such platforms are funded by and geared to generate returns for venture capital and how they can produce exponential numbers of new remixed works compared to an individual artist like Girl Talk.

Yes, you are right on both counts -- the massive scale & the financial motive driving the development of the platform -- and I agree 100% with you, especially about the $$$ being driven into these systems with an expectation of return, although I will also mention that there has always been a commercial element to art creation going back hundreds of years (and indeed I think much of the corpus that a remix music artist pulls from was created originally not as art, but as commerce by monopolistic record companies).

I know there is likely a naivety in how I am thinking, given the (possible) massive amounts of dollars involved and the lack of transparency as to what the underlying technology actually does, or potentially someday will do. But I think there is also a broader issue at work here, for me at least, and that is: at what point do cultural artifacts become “owned” by us -- the collective, the commons -- for uses like remixing to build new works? Why is there such a high (and often arbitrary) commercial value on cultural works in the first place, and who drives that demand & value?

Don’t get me wrong - I fully support fair and just compensation for artists, but doesn’t DALL-E also create new opportunities for artistic expression? Or maybe the underlying profit motive and potential for this tech to go sideways skews that too much?

As you can likely tell, I have much cognitive dissonance here between free culture (in which I see tools like DALL-E having a role) vs. the exploitation of that culture, in some cases for potentially malicious purposes.

Y’all will have to stand behind/with me as fans/believers in Everything is a Remix. If anyone can whisper into Kirby Ferguson’s ear, please suggest that he do a new film on this topic.

If everything is everything, then end of discussion. Still I have to ask: given that we have no “in plain English” explanation (I miss Lee LeFever’s Common Craft videos) or “Explain AI Like I am a Five Year Old” to understand what exactly these things are doing, can we say it is remix in the sampling style of Girl Talk?

If the OpenAI Terms of Use stay in effect (they seem to be updated on a regular basis), their terms allow for commercialization of the artifacts it produces, BUT they claim ownership. I would not know their goals, but I can imagine disrupting the value chain of content is one, if not a side interest.

I will ask again: without a clear understanding of what these things do and how they create, how can we help the educators we work with grasp what to do with these “creations”? And/or do we worry about the question from the article shared above from the Bulletin of the Atomic Scientists: by the ease of making these things without a foundation of understanding, are we eroding the spirit of human curiosity/creativity with a tool of convenience?

One way we can continue to think through these questions is in the webinars we just announced at Creative Commons on 9 and 10 Nov: the first on AI inputs and the public commons, and the second on AI outputs and the public commons (more of what we are discussing here in this thread). It would be great to see folks from the OEGlobal community at these free webinars: Webinars: AI Inputs, Outputs and the Public Commons - Creative Commons

And yes, in true @cogdog style, we created new AI art for the webinars and continued to experiment with how to attribute and alt text them. Check out the blog post for examples…

4 Likes

“Oh, it’s Friday, I think I will try on a new terms of service”…


DALL·E generated image created by Alan Levine from the prompt “A robot in a dimly lit room admires themselves in a mirror artistic style” licensed under a CC BY license (because now I own it) ???

OpenAI sent me an email notifying me that, yes, the terms of use for DALL-E have again changed, and apparently I own the content I create there, per section 3a:

You may provide input to the Services (“Input”), and receive output generated and returned by the Services based on the Input (“Output”). Input and Output are collectively “Content.” As between the parties and to the extent permitted by applicable law, you own all Input, and subject to your compliance with these Terms, OpenAI hereby assigns to you all its right, title and interest in and to Output.

If I own it, then (?) I can choose a license to share it, correct? The terms of use for Adobe Photoshop do not dictate how I publish and share content I created there?

Okay @NateAngell how will we attribute now?

2 Likes

Thank you @cogdog for keeping your finger on the pulse!

Saved the ToS (Updated on November 3rd, 2022) to the Internet Archive (archived page here) – one day somebody will write a Master’s thesis / dissertation about this interesting legal issue… :wink:

2 Likes

Yet another interesting take on the new (or is it not so new) form of media creation…

citing as an example the case of Jason Allen, a game developer who won a juried art prize for an image created with Midjourney, and who expected and knew his work would be strongly criticized:

“How interesting to see all these people on Twitter who are against AI generated art are the first to throw a human under the bus by discrediting the human element! Does that sound hypocritical to you?”

Says Jason Antonio, the author of the Dev.to post:

Despite all the controversy being generated through this discussion, some of these images can be really interesting. Therefore, community administrators are faced with a dilemma: release, release with restrictions or ban it.

Whatever is the decision, the discussion on this matter is far from over.

Much is far from over here…

The AI train is just accelerating, as seen in this story from CNET

They say “AI” drew this like it was Uncle Al with a paintbrush, but then again, when you visit the home of the Bestiary Comics series, draw your own conclusions -- or ask AI to write your conclusions?

It’s a telling (warning?) sign that the software, yes the tool @poritzj, gets co-author credit:

In a semi-related story, a commenter on my blog, David Wallace Barr, shared this (so artificial, so intelligent?) image based on the prompt “cogdog Levine caricature” (using Stable Diffusion)

Intelligent? Artificial?


David Wallace Barr shared it with me under these terms: “By whatever power may or may not be vested in me, I declare this image to be in the public domain (no attribution required)”

Looking back to an earlier post on Generative Adversarial Network (GAN) generated images:

Wikipedia’s public domain declaration applied to an AI generated image in that article reads:

This file is in the public domain because, as the work of a computer algorithm or artificial intelligence, it has no human author in whom copyright is vested.

but also noting in the footnote that UK copyright law (in 2021) provided 50-year copyright protection to AI generated images -- see Artificial intelligence call for views: copyright and related rights