What does Open Artificial Intelligence in Education mean to you?

TL;DR: Let us know your take on what ‘open’ artificial intelligence in education is by signing up via https://forms.office.com/e/346it1gRdz

What would it mean for generative AI to be open? As with everything in the world of “open”, […] there are a thousand opinions about this, just like there are a thousand opinions about what it means for an educational resource to be open.
– David Wiley in OTESSA 2024 Keynote (youtube.com)

The discourse about open AI in education is often clouded by marketing, hype, and technically complex discussions, making it difficult to get a good understanding of what people mean by ‘open’ AI. Existing interviews and media coverage show that people have different interpretations of ‘openness’. This lack of clarity hinders communication and engagement among stakeholders in education.

Key question: What are the meanings of ‘open’ AI in education and how do they vary?

Proposal: As a team of researchers in education, computer science, and philosophy from different institutions, we will interview approximately twenty experts in machine learning, open education, ethics, policy, and educational technology, aiming to uncover and summarise the variations and critical features of meanings of open AI in education through the lens of Phenomenography and Variation Theory.

Aims: This project will offer a simple way into different views of open AI in education, make it easier for people to see each other’s perspectives, and facilitate the discussion. It may help counteract ‘openwashing’, support different valid meanings of ‘open’ with concrete examples, and assist philanthropies in prioritising initiatives by identifying well-supported and under-supported dimensions of ‘openness’ of AI in education.

How to participate: Read the study details and sign up here: https://forms.office.com/e/346it1gRdz

The study findings will be shared in a panel discussion at the Open Education Global 2024 conference.

If you don’t have time to join us for an interview, there are also other ways of participating: for example, if you could share an artifact (an image, short video, photo, or similar) that represents open AI to you (via the above form), that would also be really useful.

Thanks for opening this call for responses, Vidminas, and I hope everyone here puts something into the form!

The opaque complexity of these systems challenges us, even when we consider them open, as does the radically different concept of what “reuse” means. There is not much I can add, but I will share from a recent OEGlobal Voices podcast with our board member @rjhangiani, who noted when I asked about GenAI:

Again, I’m not going to claim to have any special expertise over here, but I will share some concerns. And I think one concern in general, which has already been an issue, is that it’s like paving over the etymology of knowledge. A core value of open licensing is attribution.

Losing that is damaging, is dangerous. It’s theft. So that’s damaging, and so is the normalization of that, because this is going to happen anyway. You’re denying progress if you’re not serving students, if you don’t equip them to use it. What I think is really missing over here is that critical, generative AI literacy.

And that’s what I worry about. It’s not the question of could we use it? Sure, we could use it. Is there that thinking behind it? Is there the thinking of: are you aware of, on the one hand, the environmental impact and the resources and the water it takes to support a single query?

We could talk about the issues with intellectual property and data privacy and those pieces. We can certainly talk about the biases and the amplification of societal biases there. And after all of that critical analysis, if you still feel that it’s actually useful in this particular way, that calculus might still turn out positive for you as an individual.