Asking About AI From an Open Perspective: (1) Where Are We?

Paul Stacey @paulstacey kindly offered to engage here in an open conversation about his published post on AI From an Open Perspective. It’s a long read, so I created a poll to collect some info on topics we might pursue.

I’ve long wanted to create something like an Ask Me Anything space, but in some ways that unfairly puts someone in the seat of an expert. As Paul notes in his post, like most of us, he has more questions than answers. So let’s call this Ask Us Anything about AI From an Open Perspective, where “us” is all of us here.

So Where the Heck Are We?

So, putting out questions for all of us, I’ll try to get this thing going with a starting point of maybe three launch questions about where we are now (to me it is not firm ground). I open this as a conversation not only with Paul, but with all the people who show up here.

  • We have experiences of past waves of technology “hype curves” – What is different or not about this current disruptive wave of AI?
  • How is our understanding of openness being tested here? Paul writes “The basic premise is that open is not synonymous with good.” What does this mean for us?
  • Because sharing links is something we can do and know how to do, what are maybe the three most useful resources or sources of information you are relying on now to understand AI from an Open Perspective?

This conversation is open to all, and if you want to pose a different question, add below, or better yet, return to the main “Plaza” area here and click the New Topic button.

Let’s ask and answer together?

To me the difference is these learned machines, like the computers they are based on, are generally useful: texts, images, maths, etc. I think they will become ubiquitous too.

“Open” has been a contested idea since software was first shared among academics, researchers, and hobbyists. Copyright protected software and ‘freeware’ were shared and even published in specialty magazines. Richard Stallman complicated the idea with the GNU Project and The Free Software Foundation. We’ve been discussing free beer and free speech ever since. I’m on the side of free speech, which is not synonymous with good, but avoids almost all censorship.

Here are three points of view I stress in my conversations around machine learning, along with Kate Crawford’s “AI is neither artificial nor intelligent”:

Doctorow, Cory. “‘Open’ ‘AI’ Isn’t.” Pluralistic: Daily Links from Cory Doctorow, 18 Aug. 2023.

Lanier, Jaron. “There Is No A.I.” The New Yorker, 20 Apr. 2023.

Leahy, Connor, et al. “What A Long, Strange Trip It’s Been: EleutherAI One Year Retrospective.” EleutherAI Blog, 7 July 2021.

All from my AI & HE Resources (300+) - please comment, suggest additions, and share:
AI Resources - Google Docs

Nodding along with the “generally useful” – even when one gets wrong results, the experience is quite different and less predictable than previous tools (one always felt ELIZA’s responses were narrow and obviously reactive).

Two pieces stand out for me:

Ari Schulman’s Why This AI Moment May Be the Real Deal hits many points – the responses and information returned are generalized, not specific; it understands and responds in natural language; it goes beyond keyword responses to understanding context; and more. The ability to draw upon a series or history of queries makes results less replicative.

This is why, even as I approach the chatbox with doubt and criticism, what I get back often floors me – not necessarily in absolute correctness, but in being not robotic.

Another really useful piece I read today is Lance Eaton’s notes from a recent keynote, On Questions to be AIsking in Education. Beyond the rather convincing rationale to look past the typical cheating response to what’s effective in practice (conversational & thought-partner, task minimizer, etc.), there is a key description of what’s different that stayed with me:

And what I want to point out is what’s distinct about these tools – and really, the overall AI hype we’ve felt in the last 8 months – that feels different from prior cycles.

Generative AI feels different because of its ease of use.

I mean we’re several years since the launch of the Metaverse and can anyone tell me what that really is and if they have visited it?

The lift to figure out what the Metaverse is, how to access it, how to create in or with it, and why it would be better than other things – that’s a lot to figure out.

But with much of the generative AI stuff, it comes in the form of a chatbox…something that’s been around for decades and with which we’re quite familiar. See a textbox on the computer, enter text.

Search engines sure set the stage for us – I’ve been typing into boxes and sifting results for a long time – but now, to take natural language input and return human-sounding output in response… and all one needs is a textbox to enter text? Or being able to generate media from a sentence?

This is a deceptive level of ease of use: the act of interacting is simple, yet what it is actually doing is really beyond most of us to even come close to understanding.

I’m not a maths guy but as far as I can tell, the layers of statistical algorithms are too complicated to be followed or reverse engineered. Transparency is wishful thinking, except for the simplest open source machines.

“The media theorist Beatrice Fazi has written that ‘because of how a deep neural network operates, relying on hidden neural layers sandwiched between the first layer of neurons (the input layer) and the last layer (the output layer), deep-learning techniques are often opaque or illegible even to the programmers that originally set them up’.”
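To make that opacity concrete, here is a toy “deep” network in plain Python. The weights below are made-up illustrative numbers, not a trained model: the point is that even with every parameter in full view, the numbers themselves explain nothing about why an input produces a given output.

```python
# A toy two-layer network illustrating Fazi's point: hidden layers are
# just arrays of numbers "sandwiched" between input and output.
# All weights are made-up for illustration, not learned from data.

def relu(x):
    """Standard activation: pass positives through, zero out negatives."""
    return max(0.0, x)

def layer(inputs, weights, biases):
    """One fully connected layer: weighted sums, plus bias, then ReLU."""
    return [relu(sum(w * x for w, x in zip(row, inputs)) + b)
            for row, b in zip(weights, biases)]

W1 = [[0.2, -0.5], [0.8, 0.1], [-0.3, 0.7]]   # input (2) -> hidden (3)
b1 = [0.1, -0.2, 0.05]
W2 = [[0.6, -0.1, 0.4]]                        # hidden (3) -> output (1)
b2 = [0.0]

hidden = layer([1.0, 2.0], W1, b1)   # intermediate activations
output = layer(hidden, W2, b2)
print(hidden)  # legible numbers, illegible meaning
print(output)
```

Scale this up to dozens of layers and billions of weights, and “just read the source” stops being a meaningful form of transparency.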

I suggest we do something instead of merely asking questions. OK, I’ll start. I propose that we create a list of all the ways that we have actually experienced using AI and then another list of all of the ways that we think AI will be useful but haven’t actually tried, yet. My lists will only be related to open educational resources.

How I’ve Actually Used AI

Creating math problems.

Creating math problems in multiple languages: English, Spanish, Arabic, and isiXhosa, using an AI tool. The English and Spanish worked well. I haven’t found anybody to read the Arabic ones yet. The isiXhosa ones didn’t work great, but they were a nice start for an isiXhosa speaker to edit.

Translating children’s story books into Dagboni and isiXhosa. The Dagboni translations were useless. The isiXhosa provided a decent start that could be edited.

Generating short greetings to others in many languages. This usually works well.

How I think AI will be useful but haven’t yet tried.

Translating all of the legal and medical documents of all of the countries in Africa into the 75 African languages that have at least one million speakers.

Translating one thousand stories told by elders in each of the 75 African languages that have at least one million speakers. This should be done before translating all of the legal and medical documents. And, if all of the legal and medical documents in all countries in Africa aren’t yet openly licensed, well, that’s a chore to be done even earlier.

Thanks for moving this along Alan.

Here are my thoughts in answer to your three questions.

Alan Q1: We have experiences of past waves of technology “hype curves” – What is different or not about this current disruptive wave of AI?

Paul Q1 Answer: Here are a few things I find different about this AI hype curve:

  • the source data underlying it is massive and derived from a global collective commons
  • a significant component of the technology is shared, reusable, customizable learning models
  • the technology is capable of learning on the fly
  • the technology is generative
  • the hype, both positive and negative, is extreme, suggesting the usefulness and the risk are both high

That said, I’m not sure this hype curve is all that different. It feels like we are still in the midst of it all and learning on the fly. But my sense is that the initial hype has levelled off a bit as we all try it out and absorb.

Alan Q2: How is our understanding of openness being tested here?

Paul Q2 answer: I think of “openness” as being guided by a set of underlying principles, including:

  • Non-proprietary - resources aren’t mine or yours - they’re shared
  • Participatory & inclusive - anyone can play an active and participatory role
  • Transparency - visibility, no black box, anyone can see the inner workings and assumptions
  • Shared decision making and control - not hierarchical command and control
  • Customizable - can be built upon, made personal/local
  • Access - everyone has access through open permissions
  • Collaborative - ability to work together and enhance each other’s work
  • Iterative - release early and often, rapid prototyping, learn by doing, continuous improvement
  • Meritocracy - good ideas can come from anywhere, value diverse perspectives
  • Community - unite people around a common purpose

“Openness” and the way it fosters transparency, innovation, and economic activity, along with democratizing access, helps both level the AI playing field and speed its advance. Openness addresses some of AI’s ethical issues. However, openness also heightens some of AI’s risks. Given the severity of the risks, variations on openness are being tried, including considerations around responsible and appropriate use. Some AI “open” licenses now include use-based restrictions. These restrictions definitely do not comply with the “free for all to use for any purpose” terms of most open source software licenses. However, rather than dismissing these efforts or calling them “open washing”, I think it’s worth considering whether open licenses can be improved to reduce exploitation of the commons and limit the negative potentials of AI. I think this is a really interesting area of discussion.

As I see it some of the ethical issues of AI will be dealt with either at the level of open licenses or through policy and regulation. As an example of dealing with it at the license level the BigScience RAIL License takes proactive action to prevent AI misuse with the aim of enabling trustworthy AI systems.

Alan Q3: Because sharing links is something we can do and know how to do, what are maybe the three most useful resources or sources of information you are relying on now to understand AI from an Open Perspective?

Paul Q3 Answer: I shared tons of links in my AI from an Open Perspective post. But of course there is always so much more to learn. I’m currently finding the work of Irene Solaiman interesting. I really like her focus on making AI work better for everyone and assertion that a lot of the problems we’re seeing with AI are not engineering problems.

The Solaiman, Talat, et al. preprint “Evaluating the Social Impact of Generative AI Systems in Systems and Society” (2023) is particularly interesting for the way it proposes two ways of evaluating AI impact.

The first way evaluates AI impact at the technical base system level including impacts related to:

  • Bias, Stereotypes, and Representational Harms
  • Cultural Values and Sensitive Content
  • Disparate Performance
  • Privacy and Data Protection
  • Financial Costs
  • Environmental Costs
  • Data and Content Moderation Labor

The second way evaluates AI impact on people and society including impacts related to:

  • Trustworthiness and Autonomy
  • Inequality, Marginalization, and Violence
  • Concentration of Authority
  • Labor and Creativity
  • Ecosystem and Environment

This is helpful.
Going back to question one, it occurs to me that the breadth and scope of these impacts exceeds that of other hype curves.

Mark, I too find it interesting that no one fully understands the automated learning choices of AI. As noted in the article you cite “Progress comprehending these computational activities is achieved by trial and error, and operations are often rationalised retrospectively.”

However, this is also a bit disconcerting. It’s like driving a car without knowing what will really happen. Perhaps the result will be delightful, but perhaps it won’t be.

Hi Paul

While I work with tech resistant faculty at a tiny grad school, my concerns are larger.
First, we will never be able to control these algorithms. They are too complex. I think we will be dealing with them from now on.
My main concern is weaponization, both lethal autonomous weapons and the information war we are experiencing. We will also be dealing with these issues from now on.

For example: these have been for sale for a year. I’m sure they are patrolling private compounds and undoubtedly are being tested on battlefields. As we know, weapons of war are very profitable and manufacturers, with the help of the politicians they fund, expand into domestic markets.
We have the same problem with corporate influence on regulation and legislation, especially regarding education technologies.

Thanks Dan, this is my desire too, but I am hoping as well to get a pulse here on interest, especially as there are over 1600 users on this site.

But I will use your example as a prompt for the next wave of discussions.

Dear Paul and Alan, best regards. First of all, thanks so much Paul for sharing your full report; it’s been illuminating and so understandable for humans, in all its layers.

Let me start with a general question. It intrigues me to find no reference to “openness” in the discussion of AI, or of AI in education (I refer to UNESCO and European Commission reports). The only nearby reference to open is transparency. In the midst of the discussion of how to maximize benefits and minimize risks, it looks so obvious that openness can shape the quest for balance regarding ethical and equitable use, the protection of human rights, and innovation in educational practices.

Why isn’t there an “open source” approach to AI (show me the algorithm!!), especially when you have demonstrated in the post the value and the underlying practice of sharing and collective building of AI? Is it just private corporation control? Is OpenAI’s story an “open-washing” one?
Should “open”, in terms of process and expected results, be the default for AI in education?

Bringing these questions to the first layer of your proposed AI stack: source data. Should we argue for “open data” to be the principal approach to protect (privacy, exploitation, etc.) and innovate (reuse, remix, unexpected uses)? Many consider that “opening up data” could be harmful or entail risks; what would be your take on that prejudice?

It’s been very illuminating from your post to visualize the importance of Wikipedia in AI development. But surely, people don’t conceive of Wikipedia as data, but as an encyclopedia. And this is important: we need to transform educational knowledge into data!! For example, if we tie an OER to an outcome defined in the curriculum (subject/level) to be delivered through AI, we need the curriculum to be digitized. In many countries, the curricula are not digitized, or exist as PDFs at most. If they are digitized, they are not open.

You could extend this to textbooks: if we are looking to maximize effects, shouldn’t we train AI with those textbooks, deconstructed into data? My reasoning is: to have impactful AI tools to support learning, we should train them with educational knowledge. Does this make sense?

Another important issue is the need to assess AI tools for an educational setting. Related to textbooks again: before it gets into students’ hands, a textbook has to undergo heavy “public” scrutiny to be validated: accuracy of content, age or level appropriateness, pedagogical approach, social suitability, etc. I think a similar “open” validation process should also apply to any AI tool to be deployed in a classroom (as you know, I am always thinking of K-12 education). Your thoughts?

Thanks once again for your mind-opening post, as well as to Alan for his efforts to build community-based knowledge. Best wishes,


Hi all - thanks for a really interesting discussion! I just wanted to make you aware of this opportunity to publish something in this area. I’m co-editing a special issue of Sustainability on the theme of open approaches to AI in education. There is an APC but we have a number of fee waivers you can ask for. Drop me a line if this seems like an opportunity of interest!

Werner, thanks for this extensive and thoughtful reply.
There is so much to unpack it’s taken me some time to respond to all your insights.
But here goes.

No Reference to Openness
Part of my interest in writing a post about AI from an open perspective was to get a sense of what role, if any, open is playing in AI. In general I agree with your observation that “open” is largely absent in reports and the media. But when I dug into things a bit deeper it became clear that open is very much a key part of AI and, as you argue, could, maybe even should, be more visible and have a larger role. I thought by writing about it I might call attention to open and encourage proactive efforts to substantiate its importance and role. Hence my closing section on Open Across the AI Ecosystem and Recommendations. I welcome hearing your specific recommendations.

Is there an Open Source Approach to AI
I was delighted to discover that open source has played a major role in the development of AI, particularly in the research and development phase as well as in smaller implemented AI initiatives. The big players – Google, Meta, OpenAI, etc. – are also open source players, but are sometimes doing so for their own benefit and sometimes modifying the terms associated with “open source”. OpenAI was historically very open source but, as it approached the commercialization stage, began to reduce its openness and is now only moderately open. Meta’s Llama license could be seen as an example of open washing, but I think there is more at play as new responsible and custom licenses begin to innovate and morph the traditional terms associated with open source. I should point out too, as others have done, that open in the AI context leads to lots of concerns about the way it might lead to harmful AI use. As I suggest in my post, open is not a synonym for good. In my mind it’s important to think about that. How do we advance open while at the same time mitigating potential harmful effects?

Should we argue for Open Data
I think the source data used to train AI is, currently, highly contentious. Training AI based on data scraped from the web is the de facto norm. It seems to me this horse is already out of the barn and running rampant. Curbing this practice will be difficult. Data mining is a well established practice and not illegal, though there are laws that govern the mining of individual data. I certainly think open data could be positioned as a high quality source of data for training AI. There are already great open data collections associated with culture and science, but I doubt that it will be the only source.

Education as Data
Wikipedia is definitely an encyclopedia but, as with all encyclopedias, the knowledge contained within it is based on data. I completely agree with your thought that impactful AI tools for learning should be trained on data not just from encyclopedias like Wikipedia but from curricula, textbooks, and other learning materials. I personally find this very interesting. In particular I think it will be possible to take a foundational learning model and add specific training on education and learning data unique to a nation, a subject domain, or even someone’s specific area of interest or expertise, to create an AI model custom suited to education for that country, subject domain, or person.

Open Validation
I concur with your observation that currently education materials, like textbooks, undergo extensive public review to be validated as appropriate. The call for AI models to be required to disclose the underlying data sources on which they were trained is based on a similar desire to validate the AI model. A complication in the AI context is just how vast the underlying data sources are. The ingredient lists on packaged food we buy are relatively short. The ingredient lists for AI models would be massive. While I appreciate your thinking that AI tools deployed in the classroom ought to be validated, this too feels like another horse that is out of the barn. General AI tools are available to everyone, whether they are banned by a school or not, and their use is becoming pervasive with or without validation.

I hope to see you soon in person and continue this discussion over a beer. :slightly_smiling_face: Thanks as always for your interest and passion for open.

Dear Paul, thank you so much for your time to reply and for keeping this conversation flowing on issues that have so many angles and are rapidly evolving. I feel very far from being able to propose recommendations, but I do sense an opportunity to put Open at the heart of this discussion.
In the international agencies (UNESCO, UNICEF), there is growing discourse about the role of technology in education as a public good, even references to it as a Commons, in response to assuring education as a human right and regulating a context of privatization and commodification of education. Some emerging initiatives, like the Gateways to Public Digital Learning, look to connect “public” platforms through sharing, cooperation, and the pooling of digital resources across borders to assure accessibility.
But it’s very hard to see those intentions grounded in guidelines to make that happen. So, there’s recognition of the need to regulate and create legal frameworks, but no one is talking about open licensing. There is a need to settle international norms and standards to guide the development of learning platforms, but no one talks about open and shared technical standards.
Should the open community make a stand on these issues? Should we discuss and agree on open-guided principles and drivers to shape and steer the role of Open in these AI-datafied educational challenges? I am thinking of the process that led to the Cape Town Open Education Declaration, which was a lighthouse for the mainstreaming of open in education. Would it be time to create something similar? I am afraid that this conversation will be monopolized by big players in AI and profit-led guidelines. But most of all, I still see a lot of fear and resentment about opening up educational data and AI technologies. Not only should we clear out this prejudice (Open as vulnerability, risks), but also smooth the way for Open to maximize the benefits of AI in education.
Could you share your main concerns or possible harmful effects about opening up data? How do common data risks (brokerage, poisoning, the mosaic effect, inbuilt bias) differ between open and proprietary or non-public AI?
Learning Equality, who develop the Kolibri ecosystem, could be a good practice to follow and dig into. They ran a machine learning contest that called for creating a model to match and recommend OER to specific topics in a curriculum, built on top of different foundational models (some under the Apache license) aggregating different inputs. More than 1,000 developers or teams participated. The underlying data and code are open-sourced under an MIT license as part of the rules of the contest. Curriculum alignment of educational resources is time-costly and rapidly outdated; AI can dramatically change that work.
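For anyone curious what OER-to-curriculum matching looks like at its very simplest, here is a hedged Python sketch. The contest entries were built on fine-tuned language models; this bag-of-words cosine similarity, with made-up topic and resource titles, is only a minimal stand-in for the idea:

```python
# Minimal illustration of matching OER to a curriculum topic by text
# similarity. Topic and resource titles below are invented examples.
import math
from collections import Counter

def vectorize(text):
    """Turn text into a bag-of-words count vector."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two count vectors (0.0 = no overlap)."""
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

topic = "adding and subtracting fractions grade 4"
resources = [
    "fraction addition practice worksheet",
    "introduction to photosynthesis",
    "subtracting fractions with unlike denominators video",
]

# Rank resources by similarity to the curriculum topic.
ranked = sorted(resources,
                key=lambda r: cosine(vectorize(topic), vectorize(r)),
                reverse=True)
print(ranked[0])  # → "subtracting fractions with unlike denominators video"
```

Notice that plain word overlap misses “fraction” vs “fractions” entirely; that gap is exactly why the contest’s language-model approaches matter.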
I agree with you that horses are running wild; education is always behind frenetic technological development, and it is super difficult to catch up. How can we better steer and orient the technological disruptions in the open? Grassroots, community-based trials, discussions, hackathon-style challenge contests?
Looking forward to your inputs, and keep this discussion rolling, so much to tackle. Beer is fine, but would prefer one of your tequila-based cocktails, as I will be also attending CCSummit. It will be a joy to catch up and keep learning around Open and these challenging issues!!

@wernerio , thanks all this great thinking.

I too sense there is an opportunity for open be more at the heart of AI.
I’m looking to stimulate discussion leading to recommendations on how to make that happen.
I’m looking to take action with others in the global community around this.
So thanks for joining in this effort.

I think the benefits of open are already known in the AI development community.
And, as a result, there is an existing practice.
But it is not as strong or central to AI as it might be.
And as the AI providers pursue profit its role tends to diminish.

I think regulation could change that by making open a more required means by which AI operates.

I also think there is an opportunity to create an AI commons of infrastructure, data, models, and apps. AI could be a public good and a global commons. I think the AI tech community, international agencies, regulators, and in a sense the general public who use this technology could all participate in making that happen. I think this could be baked into AI at the start.

I wasn’t aware of Learning Equality’s machine learning contest, so thanks for sharing an example of engaging a community in custom building an AI model. It’s a cool idea to create a model that matches and recommends OER to specific topics in a curriculum. And I know you yourself have worked on just such issues in Chile. I’m encouraged by the large number of participants and the requirement for open built in up front. I think personalized and customized models are going to be significant, and this example fits into that notion of generating an AI global commons.

You ask whether the open community needs to take a stand on issues. Yes, I think we do.

You ask how the open community can better orient and steer open in the disruptive AI landscape. I’m heartened by efforts like:

And I expect there are many more. If you or anyone else is aware of similar efforts please share via Reply.

That said so much more is needed.

I’m personally interested in formulating a comprehensive set of open recommendations that feed into regulation efforts.

I welcome other suggestions for rallying from you and everyone else.

What can we do together as a community to take a proactive stance on open in AI?

Always appreciate your friendship, our exchanges and your ongoing open efforts. Look forward to seeing you in person at the CC Summit.

Dear @paulstacey , my best regards.

Just settling in after attending Digital Learning Week, so this follow-up should make up for replying late. The conference was mostly devoted to AI and data in education; that is a narrow scope for advocating for open in AI in general, but it could be the spearhead of our stance.

Strongly support that there’s a window of opportunity to advocate for AI in education as a public good and a global Commons. One of the three main objectives of the Gateways to Public Digital Learning initiative is to “Establish international norms and standards to guide the development of learning platforms in ways that advance national and international goals for education.” So a set of open recommendations could not come at a better time. Open is already on the table through OER in its first commitments to action, so the momentum is there.

UNESCO is making big steps on AI in education, though open is not much at the heart of the discussion. They recently launched a Guidance for Generative AI in Education and Research, and they announced AI standards for teachers and students, which called for feedback, as there’s no public document yet. I’ll get the reference and get back to see if we can push for open in these standards, just like OER permeated the ICT Competency Framework for Teachers in its last update.

Coming back to general AI: if I were looking for an open source AI approach, Hugging Face is super impressive: more than 320,000 models and 61,000 datasets openly shared… When you filter by license, it’s a mosaic of 70 different licenses!! So there is definitely an issue of harmonizing a legal framework to avoid dispersion and assure interoperability. Hugging Face is a company, so they very much see “commercial-friendly” licenses like MIT and Apache as today’s standard.

Here are some references from international agencies to complement the AI in education discussion:

Some commercial AI-Ed apps devoted explicitly to education to look at:

  • Merlyn Mind AI assistant for teachers
  • EduChat: an LLM fine-tuned by aggregating textbooks and question banks; emotional support; labeled, task-specific, multi-grained essay assessment; Socratic teaching; retrieval QA
  • TAL China MathGPT playground, AI Tutor learning mate

Looking forward to seeing you soon, and at your disposal to move forward or prepare for the CC Summit. Very best wishes!!

Werner, this is so valuable and we are fortunate that you were a part of the Digital Learning Week and reassuring to know there is a large coordinated effort by UNESCO.

I too have been impressed with the community approach by Hugging Face and also that they provide a testing ground for so many emerging tools. I had used their early DALL-E mini more than a year ago, and had some aspirations to try their open (ad-free) NLP course (I probably stalled at the requirement for a good working knowledge of Python).

Great to see the AI for Teachers open textbook co-authored by Colin de la Higuera
@cdlh where I wandered down their suggested activity to find Google’s Teachable Machine I hope to dig into.

Was interesting to spot the news about Jais, an Arabic-language LLM labeled as being open sourced (I spotted links to its code on Hugging Face).

This has been a rich exchange, and perhaps we have strayed from a structured approach to Paul’s comprehensive post. I’m left feeling both dizzy and excited about what’s happening in this space.

@cogdog , straying from structure is perhaps a good thing?
The unexpected encounter, the serendipitous discovery, the building of our own learning path?
I’m easy either way as I’m finding this discussion super helpful.

I am very interested in how training models is done so I looked at the NLP course you suggested.
It’s too complex for me too.
The book “You Look Like a Thing and I Love You” by Janelle Shane, while being a humorous look at AI, actually does a really good job of explaining how the training happens in kind of layperson language, including weights, neural nets of cells working together, deep learning, Markov chains, random forests, evolutionary algorithms, generative adversarial networks (GANs), and mixing and matching these together. I’m starting to sort of actually get it. :slightly_smiling_face:
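In that spirit of layperson explanation, the core “weights” idea can be shown in a few lines of Python: training is just nudging a number until the error shrinks. This is a one-weight toy I made up to illustrate gradient descent, nothing like a real network’s scale:

```python
# Training a one-weight model y = w * x to fit the rule y = 2x,
# by repeatedly nudging w against the gradient of the squared error.
data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]  # (input, target) pairs: y = 2x
w = 0.0      # initial guess for the weight
lr = 0.05    # learning rate: how big each nudge is

for step in range(100):
    # gradient of mean squared error with respect to w
    grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
    w -= lr * grad   # nudge w downhill on the error surface

print(round(w, 3))  # → 2.0: the weight has "learned" the rule
```

A real deep network does exactly this, just with billions of weights adjusted at once, which is where the human-legible picture falls apart.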

@wernerio , so glad to hear you were able to participate in the UNESCO Digital Learning Week. Look forward to hearing more about the event, your experience, and the next steps.

And thanks for all the links.
Appreciate being able to dive in and explore.

I enjoyed reading about the Gateways to Public Digital Learning initiative.
Great to see steps toward a global education commons happening.
I was pleased to note the Call to Action section actually includes reference to ensuring the UNESCO OER Recommendation is used in the Initiative. The Recommendation and Gateways are very complementary.
All three components of the Public Digital Learning initiative – 1) a global gateway to public digital learning platforms, 2) evidence generation and best practices, and 3) norms and standards – are important.
I also think it is great that they’ve provided a way for countries to take the step of becoming a “Gateways Country”. Offering an opportunity to be part of a network of entities all implementing public digital learning initiatives together is a key component, providing opportunities for learning together, helping each other, sharing, collaborating, and joining forces. I wonder if there are ways those of us involved in open education who are not a country can participate in or contribute to this effort?

The Guidance for Generative AI in Education and Research is excellent - really good work.
I learned a lot and found the guidance spot on.
Kudos to co-authors Fengchun Miao and Wayne Holmes along with the many other experts consulted.
Well worth reading by all involved in education.
As I reflect on all it recommends it occurs to me that actually pursuing all the guidance suggested is a major undertaking.
Who is going to do all that? How does it get enacted? What happens next?
I wish the effort went beyond guidance to offering an opportunity similar to the Become a Gateways Country in the Public Digital Learning initiative.

I’d like to see the draft version of the AI Competency Frameworks for Students and Teachers document presented during Digital Learning Week. Definitely keen to work with you and others on providing input when it is called for. And yes really good work on integrating OER into the UNESCO ICT Competency Framework for Teachers. I wish more collective effort was going on around all these activities in the open education space.

Glad you like Hugging Face. I think it’s setting an example of the importance of open in AI. The article “How Clément Delangue, CEO of Hugging Face, is open-sourcing AI” provides an interesting synopsis of their start. It seems to me the open ecosystem in AI is broad and diverse. I look forward to speaking with you about how their work intersects with our work and the potential for synergy between them.

I like the way the Talking Across Generations (TAG): Ethics of Artificial Intelligence is engaging youth in this important conversation. I wonder why open civil societies aren’t also more included/engaged in this dialogue?

I’ve had some communication with Colin de la Higuera about AI for Teachers, the open textbook he and Jotsna Iyer wrote in partnership with a range of collaborators. It’s a super helpful reference, and Colin tells me they are working on a new edition which addresses generative AI more. I wonder how that book and the many other educator resources already developed fit with the AI Competency Framework? Who is doing the mapping?

I expect we’ll see the rapid emergence of AI-Ed apps and models for education. I’m personally interested in the potential for all of us to create our own customized AI education models based on our own education.

Thank you both for continuing this (unstructured?) learning journey.

Hello @paulstacey and @cogdog , as promised I’m getting back to you on this issue: here is the call for comments for the AI Competency Frameworks for Students and Teachers. You can find the draft document linked in the description. I’ll look to find out the deadline and see how we can best proceed.

I’ll react tomorrow to the other issues. Thanks for all your push my friends, best wishes,


@wernerio Thanks for this link to the Call for Comments: UNESCO Draft AI Competency Frameworks for Teachers and Students.

Before giving my comments I’m taking some time to do a thorough read and think.

Here’s some early initial thoughts:

I like the expressed need for proactive engagement and the acknowledgement that teachers, as frontline education workers, are identified as critical stakeholders. Given their importance as stakeholders, I think teachers should be invited to shape AI, not just receive “guidance and support in navigating emerging changes accompanied by generative AI in education.”

It seems pretty clear to me right away that the AI Competency Framework for Teachers and Students ought to be an OER itself. Especially when it is described as “A continuous work in progress: The AI CFT is designed more as an organic living process than a static blueprint. This first draft offers initial ideas and proposals for the development of teacher competencies across varied, complex contexts of teaching and learning. It serves as an ongoing work in progress over time in ways that will respond to changes, concerns and new insights, as they emerge among a global community of education policy makers, practitioners, academics. It is an open invitation for the global education community to provide continuous feedback, share knowledge and insights based on the experience with applying and adapting the AI CFT to local contexts.” Sadly it does not expressly identify itself as an OER.

This made me wonder just who is doing the actual writing of this document? We need to help them see that open is crucial to not only their aspirations for this document but as a key competency that should be part of this AI competency framework.

I like that the Competency Framework is directly informed by the principles and values espoused in the UNESCO Recommendation on the Ethics of Artificial Intelligence. Drawing on that Recommendation, the document says the AI competency framework is informed by:

  • Core Principles espoused in the Sustainable Development Goals, which include equity, inclusion, gender equality, and cultural and linguistic pluralism;
  • Human Rights: upholding, protecting, and sustaining human rights, human dignity, and fundamental freedoms in AI-enabled teaching and teacher professional learning, consistent with the principles espoused in the Universal Declaration of Human Rights;
  • Human Agency, Empowerment, and Well-being;
  • Ethical and Responsible AI by Design;
  • Responsiveness to Nature and the Environment.

So it was a surprise to me that both the high-level AI competency framework and the more detailed AI competency framework leave out Core Principles and Human Rights. That seems, to me at least, a major oversight. I think they need to be included, with open education practices seen as competencies through which they can be realized.

There is a lot to absorb in these documents so I’m going to take some time to go through it all. The Call for Comments doesn’t specify a due date but I’ll be aiming to provide my comments within a week or so. I’ll share what I come up with.

It also occurs to me that it might be fruitful to compare the UNESCO Draft AI Competency Frameworks for Teachers and Students to other instruments, such as the University of North Carolina’s recently published “Teaching About the Use of Generative AI: Guidance for Instructors” (Provost & Chief Academic Officer, UNC Chapel Hill).
And, for that matter, other recently released AI for educators efforts.

More to come.

Hello @paulstacey and @cogdog , best regards. If you guys are struggling with the technical side of AI, I am smashing my head against the wall. I can’t recall an emergent new technology being so complex, such a sum of different techniques … so I am right behind you guys, and thanks once again for your reciprocity and for learning with the community.

This initiative, as it was presented at DLW, was a call directed at governments committing their public infrastructure and networks to the “gateway”. We could ask if there’s any way to participate as civil society. I think we could argue that there are already “Open Gateways” on the move: projects like Kolibri and LibreTexts are large-scale libraries and platforms of resources that are continually creating and harvesting content/resources and growing very fast (for example, through translation). Why couldn’t these open gateways contribute to a larger highway?

Totally agree, we need to ensure the gateways are kept open!! I’ve spotted the people who run this!!

Did you guys take a look at the frameworks? They don’t go into much detail, but I have some impressions. I am still trying to get a precise deadline, but they tell me it will be a tight window (mid-October). I’ve also spotted the people in charge.

Thanks again for your appreciation, best wishes,