Hi @cathcass , and thanks for the nudge @cogdog. I’ve been a bit heads down developing a virtual keynote talk on “AI, ELearning and Open Education” for an e-learning and open education international conference in Taiwan this coming week. It’s been interesting developing that talk, as it has forced me to look at what AI teaching and learning applications already exist and what they do.
I’m increasingly interested in specialized and localized AI models for education. We’re already seeing AI for specific teaching and learning purposes - lesson planning, tutoring, writing assistance, content generation, creating quizzes and assignments, … And I think we’ll increasingly see domain-specific AI models, e.g. math, biology, history, art, … As these emerge, if the models are open, it will be possible to localize and contextualize them by language, culture, and other factors that make them more relevant and effective.
Cathy wonders about the compatibility of OE, AI and commercialization and asks, “Could it be possible for the OE community to commercialize OE for the benefit of the OE community, rather than await the threat of the corporate sector acting independently?”
@danmcguire @Jenni @JudithSebesta @cogdog @billymeinke and @Mackiwg all make some really good observations, most of which I fundamentally agree with.
The way I see it, AI definitely has the potential to accelerate and improve the production, discovery, use, and uptake of OER broadly. But AI is currently operating in ways that break the social contract of OE and perpetuate extraction, colonization, and imbalances of power. How unfortunate!
I keep wondering if we can articulate how AI could, and should, work differently. How might AI work collaboratively with the OE community in ways that align with social norms and the commons? Could community rules and regulation steer AI to be not just an industry sector and a race for economic benefit but something that also, perhaps in parallel, generates social public good of the type OE enables? I can envision rules and regulation that favour the latter over the former.
However, the current VC-funded AI tech landscape is not functioning in that way and is not likely to change. In addition, governments seem uninterested in the social public good potential, and as a result regulations are not steering AI in that direction. AI needs a model that does not give big tech disproportionate benefit.
Toward that end there is quite a bit of activity around opting out of AI. However, I believe we need an alternative that we can opt in to, something that gives us agency in an affirmative way. I’d love to see open public infrastructure for AI and associated open governance of that infrastructure (with strong representation from OE). AI infrastructure is hugely expensive, but in my mind funding for this could come from a levy placed on commercial AI, which would then ensure a reciprocal benefit back to the commons. The EU AI Act advocates for nations to build their own AI infrastructure to ensure an element of sovereignty and independence from big tech. I’d love to see the OE community advocate to their governments for using that infrastructure for education rather than that of big tech.
AI is built on data. Its future sustainability depends on continuous ingestion of new data. Interestingly, human-curated data is considered higher quality than data simply scraped from the web. This is an area and practice at which OE excels. I can see a convergence of interests between AI and OE around this. I definitely think OE could generate its own AI models. Such models could be higher quality and generate better results than those offered by big tech. Is there a business model around that? Maybe.
Data is a kind of currency. Two data sources continuously feeding AI right now are user interactions with AI applications and the outputs generated by that AI use. One way the OE community can assert influence is by controlling how its data and outputs are used. I keep imagining a scenario where I agree to have my data and outputs used by AI specifically for social public good rather than for profit.
Over the years I’ve given a lot of thought to this fundamental question of how to generate revenue while engaging in open work. Like many of you I’ve done, and continue to do, a lot of open work that generates no direct financial return. But I’ve come to believe that it generates a lot of “value”. I wish the conversation were less about commercialization and money and more about the creation of value. I believe OE offers a higher education value proposition that non-open does not. Direct financial benefits are but one component of that value proposition, and focusing on them exclusively diminishes the other values OE generates (many of which have indirect financial benefits).
That said, I don’t believe OE should ignore the financial side. When I think back to natural-resource-based commons such as pastures, water, and forests, those were managed as commons partly for the economic benefit of all involved. In current times those have largely disappeared. Non-profits and efforts like OE exist because of failures in the way society and the economy work. While adopting commercialization as a goal risks co-opting OE into capitalism, that need not be the case. Certainly social enterprises and even B Corps often have missions highly aligned with OE and public social good. But finding, or establishing, an organization to take on commercialization of OE for the benefit of the community requires significant startup funds and ongoing governance and operations that I currently don’t see anyone willing to provide.
This conundrum remains a topic of high interest to me. There is no magic bullet, but continued conversations and dialogue like this one open the door to more possibilities. I’m all for working together on coming up with new models.