Hello to those not travelling to OEGlobal but who want to be there in any way possible. So many discussions to catch up on, but we have learned to look for alternative ways to deliver!! Let’s see if this one sparks something to share.
Without a doubt, Artificial Intelligence (AI) is one of the most important trends in ICT development, extending its influence into many aspects of our lives. AI also promises to impact Education, and there is growing interest in its real potential as well as growing concern about the many threats that AI tools and methods bring about.
Despite AI’s spread, there doesn’t seem to be much attention or interest from the Open Education community. Would you agree? Is there a gap between Open Education and AI?
Looking into the UNESCO recommendations on both issues: the recently approved Recommendation on the Ethics of AI (2021) makes no reference to open, while the OER Recommendation (2019) does mention AI as an emergent “frontier technology for automatic OER processing and translation of languages”.
Thanks for posing such large and important questions, Werner. I am counting on OEGlobal22 participants with experience and opinions to jump in.
My own challenge, even with some technical knowledge, is that the opaqueness of AI makes it difficult to understand how it works or what is behind a result. At the same time, almost none of us know the inner workings of Google’s search engine, but we do know from experience what it gives us. Can openness help in the face of such complexity? Well, it can’t hurt.
Personally I’d like to know more concrete examples of the ways AI is being used in educational settings (and not those generalized news stories that talk glowingly about providing learning support/tutoring).
I thought of the example of the University of Nevada-Las Vegas President who was the basis for an AI version of himself that can answer student questions.
I did try it out and found him a bit lacking in being able to answer my perhaps too-generalized questions. But for students who have specific questions about where to find campus services, deadlines, etc., it may well be viable. I would certainly like to know more about how students feel about the AI; my perhaps critical and cynical viewpoint might be problematic.
Can AI contribute to the cost-efficiency promise of Open Education?
One would think so from the mentions in the OER Recommendation, in the processing and organizing of OER, or the potential for translation. Are there good examples of these?
I’m still trying to get a better understanding of what is really possible, so until then, I have rather human partly-intelligent guesses.
These are important and complex questions that you have raised. I am highly skeptical of algorithms in education, especially as in recent times they seem to offer superficial solutions and ‘short-cuts’. They would also mean less exploration for an individual student. Solutions in Education, in my opinion, should offer more trajectories to explore rather than leading people down a single ‘correct’ path.
Might be outside the scope, but I saw a tweet from data scientist / AI startup leader Hilary Mason. I got to know her in the Second Life days when she was at Johnson and Wales University, and she is one of the most savvy programmers I have ever met.
Hello Alan, regards to Moose Jaw!! Thanks for your inputs, let me comment on a few.
Not only does it not hurt; my bet is that an “Open AI” could help bridge the “AI Divide”, an IDRC term for the gap between countries that have the ability to design and deploy AI applications and those that do not. Most countries, like mine, will have to leapfrog to jump on the AI train, dealing with the lack or insufficiency of key enablers needed to develop and deploy AI and thus harness its possible benefits. There is little attention to these enablers, the digital (technical infrastructure) and analogue (regulatory frameworks) foundations, which in most countries are largely absent, underdeveloped, and unequally distributed.

There is an interesting dilemma around the openness of AI: most least-developed countries will need open AI to access these techniques and technologies, but that openness brings many challenges around fundamental rights (data governance, privacy) and market competitiveness. My guess about an open AI is similar to open source software: show me the algorithm!!
Related to translation, I remembered the Translexy project, created to translate MOOCs into more than 10 languages without the need to extract text for translation. Our OEGlobal co-Chair @dorlic could maybe tell us about its progress. Something similar for processing OER happened in the X5Gon project, also born in Slovenia, which looked to create on-demand OER pipelines retrieved through AI. I remember @njermol being behind this project.
Hello @openeducator, thanks for your comments. I like this idea of “turning AI sideways”; maybe you’ll find a way to open it up!!! But this is a good example of how AI can be transformative but also disruptive to previous educational practices. Just as there has been progress on automated essay evaluation (a great feature in Moodle), similar progress in AI-driven essay editors challenges learning objectives/outcomes and instructional design. With AI’s rapid development, it is difficult to state that something cannot be done via AI today.
That is probably because the ‘branching’ in AI algorithms is not as diverse and leans towards being ‘precise.’
Let me try to explain my thoughts from what I have actually experienced.
Say I am a Chemistry major, but I have an interest in the History of Chemistry, or maybe even in Chemistry in Literature (forensics, Sherlock Holmes); AI would not pop it up for me. In fact, it would minimize the chances of me hitting on it by keeping my focus on either chemistry or Sherlock Holmes cases. This definitely helps and prevents one from getting ‘lost,’ but as a student who is exploring, I have very little chance of coming across the excellent manuscripts on the History of Chemistry on the Project Gutenberg website. In fact, if I explore as a student, I rarely get this as a branch of study until I already know it exists.
I had run the course ‘Che-mystery: The History of Chemistry’ for OE4BW in 2020, and the participants voiced that they were not even aware of such a branch of study.
I hope I could communicate what I am trying to say.
This is very clear, Ajita, and speaks to the scope of what AI can and cannot do: it can only succeed on its existing training data and programmed patterns.
What it cannot do is offer serendipity or reach outside of its patterns, and when it does, it is more pure randomness than the way Natural Intelligence and humans operate.
My perhaps similar experience was that as an undergraduate I majored in earth science with a bit of computer science. Had the university not required humanities electives, neither I nor any AI would have known in advance how much I would get out of my course on Art History. It did not happen until I took a class outside of my interests.
What you describe was also my fascination at libraries: starting from the area where the index pointed me to a book of interest, and the magic of just wandering off topic down the rows to find things beyond or far from that topic.
I would love to know more about “Che-mystery” that sounds like a creative approach to teaching chemistry, and reminds me of a project with a Geology professor who designed an interactive “murder mystery” activity for teaching about how sedimentary rocks were formed by acts of destruction (weapons being different acts of erosion) of original rock types (the victim).
There is much we ought to gather and flesh out here! I was just reading in MIT Technology Review a criticism of Google’s new cute-animal AI image generator, Imagen.
Their reason for not being open is to keep us “safe” from undesirable content?
Scroll down the Imagen website—past the dragon fruit wearing a karate belt and the small cactus wearing a hat and sunglasses—to the section on societal impact and you get this: “While a subset of our training data was filtered to removed noise and undesirable content, such as pornographic imagery and toxic language, we also utilized [the] LAION-400M dataset which is known to contain a wide range of inappropriate content including pornographic imagery, racist slurs, and harmful social stereotypes. Imagen relies on text encoders trained on uncurated web-scale data, and thus inherits the social biases and limitations of large language models. As such, there is a risk that Imagen has encoded harmful stereotypes and representations, which guides our decision to not release Imagen for public use without further safeguards in place.”
All interesting in that the LAION-400M dataset is open data but Google’s use is not (I gather there is no SA clause there).
Also, regarding your note about AI not being open: Hugging Face describes itself as open source, and yes, I see much at Hugging Face · GitHub, and you can make use of some of their services.
I used it to show students how to create a bot that completes sentences based on their Twitter history (my demo), as an example.
While I cannot make too much out of their open source code, I prefer to try to dabble myself to better understand the limits and potentials.
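To make the “completes sentences” idea concrete without the heavy transformer machinery, here is a toy sketch: a tiny Markov chain built from a small sample of text, which extends a starting word by picking words that followed it in the “history.” The sample text and function names are invented for illustration; the actual Hugging Face demo uses pretrained transformer models, not this.

```python
import random
from collections import defaultdict

def build_chain(text):
    """Map each word to the list of words that follow it in the training text."""
    words = text.split()
    chain = defaultdict(list)
    for current, nxt in zip(words, words[1:]):
        chain[current].append(nxt)
    return chain

def complete(chain, start, max_words=10, seed=0):
    """Extend a sentence from `start`, randomly following observed transitions."""
    rng = random.Random(seed)  # fixed seed so the toy demo is repeatable
    out = [start]
    while len(out) < max_words and out[-1] in chain:
        out.append(rng.choice(chain[out[-1]]))
    return " ".join(out)

# A made-up stand-in for someone's "tweet history"
history = "open education is open to all and open to change"
chain = build_chain(history)
print(complete(chain, "open"))
```

It is crude (it only looks one word back, so it loses the thread quickly), but that very crudeness is a nice classroom contrast with what large language models manage to do.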
Sure did @AjitaD, thanks so much. Indeed, AI still has a long way to go on processing the complexity and random connections, which leads to biases and narrowly expected outcomes. In that process, any kind of automated decision should be accountable, and I do not know of a better way to do that than through openness.
I came across a post by Simon Willison that offers a way to experiment with the Generative Pre-trained Transformer 3 (GPT-3) model that powers many of the text-completion AI tools.
By registering an account for the API (which you don’t even need to use), you can get 3 months of access to test it out in the “Playground”. Setting up an account requires being able to authenticate via a mobile phone. I might give it a try (sorry @wernerio, Chile is not on the list of supported countries! We must complain).