Tagged for OEG Connect: Me & My Machine (MMM) labels

What’s of interest? Me & My Machine (MMM) labels

Tell me more!


A free-to-use system of labels that indicate the extent to which generative AI has been used in the creation of a product. The labels range from lazy prompter (almost everything created by AI) to handmade (without any AI assistance). MMM-LABELS were created by Fontys University of Applied Sciences in the Netherlands.

h/t Bryan Alexander Higher education prepares for the upcoming AI-ified year

Where is it?: https://mmmlabel.tech/


This is one among many items I will regularly tag in Pinboard as oegconnect, and automatically post tagged as #OEGConnect to Mastodon. Do you know of something else we should share like this? Just reply below and we will check it out.

Or share it directly to the OEG Connect Sharing Zone


Just another interesting idea of how people are looking to indicate how much of their work is or is not done by generative AI. I lost track of another effort that created similar images, crafted like Creative Commons license symbols. We see other indicators that authors/creators are reaching for to publicly display human- or GenAI-assisted writing/media making.

This is of course somewhat related to CC’s own Signals project under way to design frameworks for content owners to indicate their preference for usage by GenAI.

What are your thoughts on how and why we might need a set of meta symbols like ingredient labels to be an indicator of “how much” AI is in the package?

This is just one of many useful items I found in Bryan Alexander’s AI, Academia and the Future newsletter issue on Higher education and the world prepare for the upcoming AI-ified academic year

Maybe it's Badge AI, the other related project. I saw it here in Connect as well: Tagged for OEG Connect: Badge AI - AI Usage Transparency System.

That’s it! And that’s a link I posted, but could not remember how to find it. Thanks, Super Mario,

I really like these! They’re simple, but very effective at communicating how involved AI was. They would work well as a visual to accompany a more comprehensive Artificial Intelligence Disclosure (AID) Statement (the framework we are beginning to use more consistently).

Thanks Amanda for sharing the link for the AID Statement, through which I also noted/bookmarked the reference to the Contributor Role Taxonomy.

The structure of the example AID statements is really interesting, as laid out for the types of statement in the example:

Artificial Intelligence Tool : ChatGPT v.4o and Microsoft Copilot (University of Waterloo institutional instance); Conceptualization : ChatGPT was used to revise research questions; Data Collection Methods: ChatGPT was used to create the first draft of the survey instrument; Data Analysis : Microsoft Copilot was used to verify identified themes coded from open ended survey responses; Privacy and Security : no identifiable data was shared with ChatGPT during the design of this study, only the University of Waterloo institutional instance of Microsoft Copilot was used to analyze any anonymized research data in compliance with University of Waterloo privacy and security policies; Writing—Review & Editing : ChatGPT was used in the literature review to provide sentence-level revisions and metaphor options; Project Administration : ChatGPT was used to establish a list of tasks and timelines for the study.

It seems somewhat analogous to the Creative Commons best practices for Attribution (with the TASL elements).

I’d be keen to know more about how you are using and promoting the use of AID Statements, do you have any examples to share?

For last year and this year, when I designed the nomination form for the Open Education Awards for Excellence, we first made it clear that there was no prohibition on using GenAI, but we do have what I see as maybe too broad a question about AI transparency:

In a spirit of transparency, we ask all nominators to indicate how Generative AI was used to submit this form. e.g. to organize statements, to assist for writing in a language that is not your primary one, etc. Please make sure all written statements are reviewed and composed as best as possible from a human perspective.

It has been informative to see how people respond, and at some point I may try to summarize the responses from last year and this year. But now I am thinking about asking for something more like the AID statement, or using that structure.

Thanks for adding to the conversation here! (and cough cough, there’s still a week to get some KPU nominations in!)

I get that there are places where you need to disclose AI use, as in hiring decisions, unedited AI transcriptions, and things like that. But I’m finding it harder to see the need for things like articles and images.

If I publish an article or blog post, I stand behind every word in it. And I’m responsible for every word in it. It doesn’t matter whether I used AI spell check or proof-reading, or found some sources using Perplexity, or had ChatGPT whip up an outline.

Indeed, the only time I want to say ‘AI did this’ is when I want to say I didn’t read the content and that you should not trust it. Which is fine for some things, but not articles and posts, no matter how much AI was used.

Using an AI label like these seems to me to somehow signal that you think using AI was ‘wrong’ somehow but you did it anyway, and shouldn’t be held wholly responsible for the consequences.

(P.S. this and all my posts are 100% human authored, not that it matters one whit whether I used AI or whether I used a word processor rather than a quill pen).

Part of what I like about these MMM labels is that they do acknowledge the human work that went in and that it’s a partnership.

Personally, I agree with you. As a former English major, I’ve written so many essays in my life that for me 95% of the work is in the pre-planning, research, and structuring/organization. The actual typing the sentences out part becomes purely mechanical. IMO, the shaping, editing, and making sure the words are accurately conveying the message is the actual writing that’s going on.

I’ve created an AI Declaration Statement template to go in our Pressbooks front matter, and we are making all of the books we are now publishing include one (if relevant): AI Declaration Statement. After talking with my colleague about these MMM labels, I think I’m going to edit the template to recommend including them as well.

Here’s an example of one that I put together for a project that I’m currently working on:

AID Statement: Artificial Intelligence Tool: Claude Sonnet 4 (Anthropic) used during multiple work sessions between 26 June 2025 - 03 July 2025; Conceptualization: Claude was used to help develop the structure and organization of content revisions, particularly in determining how to address overlapping content across sections; Information Collection: Claude conducted web searches to identify authentic concerns and misconceptions about open education from academic literature and institutional resources; Writing—Review & Editing: Claude provided extensive revision and editing support to enhance content depth, improve consistency of structure, maintain a supportive educational tone, and refine explanations to better serve the faculty audience.

Human Author Role: The human author (project creator) retained full creative control and decision-making authority throughout the process, providing all original content strategy, institutional context, pedagogical approach, and final approval of all changes. The human author determined the overall vision, selected which AI suggestions to implement, and made all substantive decisions about content direction and tone.

I haven’t decided yet whether to include the “Human Author Role” statement or not. Is it necessary? What value does it add / role does it play? My name is on the work, so of course I’m ultimately responsible for the output. I asked Claude about it, and here’s its response:

Prompt (as a part of a longer conversation): Is adding “The human authors retained full creative control and decision-making authority throughout the process” necessary? What does including that add to the statement?

Response: Looking at the AID Framework examples, they don’t include statements about human creative control. The framework appears focused on factual disclosure of AI use rather than defensive assertions about human authority.

Adding “The human authors retained full creative control…” would:

Potentially detract by:

  • Making the statement unnecessarily long
  • Sounding defensive, as if AI use needs justification
  • Adding language not modeled in the framework examples
  • Implying that AI use somehow threatens authorial control

Doesn’t add value because:

  • It’s already implied that you made decisions about what suggestions to accept/reject
  • The framework assumes proper disclosure is sufficient
  • Readers understand humans remain responsible for their work

The AID Framework seems designed to normalize AI use through transparent disclosure rather than requiring justification. The clean, factual approach of just stating what tools were used for what purposes is more aligned with the framework’s intent and creates a more confident, professional tone.

I’d recommend sticking with the straightforward disclosure without the additional statement.

Sorry for going off on a bit of a tangent!

That’s the beauty of publishing on the web as an individual, we can choose to declare or not as we see fit. And like you I have no need to assert that my blog and projects are human only, I’d like to think my typos and sarcasm make that obvious.

I was more interested in what Amanda shared below, this is worth considering for use in published content, larger pieces, like OER.

But I do not see it as any kind of requirement or mandate. I think again of image attribution: I have made it my own practice to attribute every image I use, whether a license decrees it or not, even when the image is my own or I have permission. There is a difference between attribution as some kind of license requirement and the attribution I prefer, as a statement of credit and, to me, gratitude.

And I disagree on images. Because of all the muckiness of AI-generated images, whether they can be licensed or even should be, what I see most is people just plopping them in without any source or indication of where they come from. In anything I do, I always want to indicate my sources.

I disagree that it signals any wrongness in how something was written (why bother to cite sources at all, then?). I am always interested in where ideas come from, not out of a sense of policing theft but simply as a means to explore further.

And hey, how is the biking around Iceland going? I’m amazed you are this connected.

Do not apologize at all, this is exactly the tangenting I like to see in this space. I agree with your concerns for being concise, not making it a chore.

Again, I like to think of this more as a route of attribution and showing how work was made, not as some kind of requirement. I appreciate seeing how you are thinking about this for OER and would hope others chime in.

Quite so. The idea that there’s a ‘scale’ of AI involvement is misplaced. Writing is a complex process where, as you say, the actual placement of words is only the final part.