The Tragedy of AI (and what to do about it)

Thought I’d share “The Tragedy of AI (and what to do about it),” an essay blog post I’ve written to accompany a presentation I’m giving Monday, June 23, 2025, at the Association for Learning Technology (ALT) Open Education Conference (OER25) in London, England. In it I try to encapsulate the fatal flaws of AI that directly pertain to open. I also suggest some actions we could take. It’s provocative, in keeping with the theme of the ALT conference, “Speaking Truth to Power: Open Education and AI in the Age of Populism”.

I saw your post light up last week in my RSS reader, Paul, and had a window open here in OEG Connect to share it; luckily you did it first, thanks. I also saw it mentioned by @Downes in his OLDaily newsletter.

Since before we even worked together, I have been inspired by the depth of research and thought in your long-form blog posts, and this one is no exception. It took me a while to read, and in addition to all the background and framing, I just want to say how much I appreciate your attribution of media and ideas.

I wondered a bit about your labeling of the four stages of a Tragedy, which align with the story structure we know from culture and film. But what if there is no resolution in a Catharsis? Is that stage guaranteed?

Personally, I do not think the interests or motivations of open educators matter in any way to the people driving the AI train. Any reasons that would compel them are based on what they would gain and profit from, not on doing the “right” thing or acting in the interest of society.

But let’s not go down that hole; let’s think about the possibilities instead. I completely understand the desire for an LLM where we all know exactly what is in the mix, where the content is openly and willingly shared rather than taken like theft in broad daylight. Having no direct knowledge, I wonder, though, given the gains people are seeing in what the newest models produce, whether there truly is enough such content. The increased capabilities are seemingly made possible by vast, vast amounts of data (I am also fuzzy on how so-called small LLMs achieve their results).

It seems to me there is a perception of “if we just had clean and good training data, everything will be fine.” I still do not understand conceptually what model weights are, how they are tuned, and where all of the human-powered feedback happens to the machine.
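For what it’s worth, the rough picture I have (and I’d welcome correction) is that model weights are just enormous collections of numbers, each nudged over and over to reduce prediction error on the training data. Here is a toy sketch of that nudging in Python, purely illustrative and nothing like a real LLM’s training loop:

```python
# Toy illustration: a "model weight" is just a number we nudge to shrink error.
# A single-parameter model with made-up data, not anything resembling a real LLM.

data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]  # (input, desired output) pairs; the "right" weight is 2.0

weight = 0.5          # start with a guess
learning_rate = 0.05  # how big each nudge is

for step in range(100):
    for x, target in data:
        prediction = weight * x
        error = prediction - target
        # Nudge the weight in the direction that shrinks the error
        # (gradient descent on squared error).
        weight -= learning_rate * error * x

print(round(weight, 3))  # converges toward 2.0
```

Scale that one weight up to billions and the tuning is, as I understand it, the same idea at unimaginable scale; the human-powered feedback (what gets called reinforcement learning from human feedback) is a later stage where human ratings steer further nudges.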

I also admit I am fuzzy on just what the “Preference Signals” Creative Commons is advocating are (I will tune in to their live session this week), and I don’t yet understand what effect they will have on the Big Three, or is it Five, or is it More?

I hope your presentation stirs up good energy at OER25!

Thanks for taking the time to write this out and share it, Paul! I’ll be pulling it into OER Commons’ AI & OER Community Hub because we always, forever, need more things covering the ethical side of it.

I’m also very bummed to not be able to make it to CC’s event on Wednesday, since I’ll be (ironically? appropriately?) flying to Philadelphia for the American Library Association’s conference to talk and learn about AI in libraries. I’ll have to reach out to the CC team later and see how we might fold these upcoming signals into the platform.

This is very awkward (and an indicator of the state of the world and its toll on my cognitive ability), and yet too absurd not to share, because we could all use a good laugh:

I definitely read through your post thinking that you had also given a talk in September with the University of Regina called “Why ‘Open Education’ will become ‘GenAI Education’” (which I had many feelings about), and was baffled (baffled!) at the about-face I thought ‘you’ were making. I desperately searched the internet for “Paul Stacey AI end of OER.” It was only after searching my own calendar for “AI Education” and going through so many events that I realized that, no, it was actually David Wiley’s presentation.

I think we all need a vacation.

Hah, those are some cross-wired links. While people end up polarized about David’s ideas, at a root level I respect the position that we are due for a radical rethink of our traditional concept of OER as fixed forms of content. And also that David does not just spout about AI; he is informed by hands-on exploration.

I cannot recommend enough Paul’s earlier long treatment of AI and Open Education, where he does pull apart the opaqueness of the Machine.


Hi Alan.
Greetings from London, where I just presented The Tragedy of AI (and what to do about it) at the ALT 2025 OER conference.

Thanks for your kind words regarding my long blog posts.
Glad they aren’t TLDR. :slightly_smiling_face:

With any tragedy there is a Catharsis.
But it usually is a release of emotions after witnessing the hero’s downfall.
I wrote my Catharsis section as a search for resolution and avoidance of the downfall.
In both cases catharsis aims to provoke thought and reflection on human nature and the human condition. Sadly, reflection on the human condition these days is fraught with all the worst things about human nature. I wanted to provide some hope in the midst of current despair.

Like you, I’ve been trying to understand better the way AI works, including the relationship between the size of training data, model weights, and functionality. It’s complex and at this point seems beyond the reach of educators.

I’m looking forward to learning more about preference signals at CC’s session today. I think preference signalling may provide a social norm mechanism but will not have legal teeth. So, AI developers could completely ignore it. But still it could provide the basis for something more compelling. I’m hopeful.

Hi Peter.

Thanks for reading The Tragedy of AI (and what to do about it).

I find it interesting to see how much talk there is around the “ethical” aspects of AI. It would be great if AI development were guided by a set of moral principles, but at this time, while much has been written calling for this, such principles seem completely absent in practice. I wonder if we might not be better off talking about “responsible” AI, where AI developers are accountable for the actions and consequences of AI use? A subtle difference, perhaps, but in the absence of an ethical framework for what should be done, there should, at a minimum, be something that ensures accountability.

As for mistaken identity I sometimes have people asking me if I’m Lindsey Buckingham (of Fleetwood Mac) or even Rod Stewart but I can’t say I’ve ever been confused with David Wiley. :slightly_smiling_face:

Finally, let me say that in my conversations with people about AI there is a widespread general unease. There is something about AI that is disturbing, agitating. But most people aren’t able to put their finger on just what it is that is causing this consternation. Here at ALT OER 2025, many people told me that my Tragedy of AI talk helped them realize what it is about AI that gives them the jitters. I must say that this was true for me too. Writing The Tragedy of AI (and what to do about it) helped me better understand my own personal reaction to it. If it helps others then I am pleased.