Artificial Intelligence and Open Education: indifference or inability to connect?

These are important and complex questions that you have raised. I am highly skeptical of algorithms in education, especially as, in recent times, they seem to offer superficial solutions and ‘short-cuts’. They would also mean less exploration for the individual student. Solutions in education, in my opinion, should offer more trajectories to explore rather than leading people to a single ‘correct’ path.

1 Like

Might be outside the scope, but I saw a tweet from data scientist / AI startup leader Hilary Mason - I got to know her in the Second Life days when she was at Johnson and Wales University, and she was one of the most savvy programmers I ever met.

She just tweeted some AI examples from the start-up space:

1 Like

Hello Alan, regards to Moose Jaw!! Thanks for your input; let me comment on a few points.

Not only does it not hurt, but my bet is that an “Open AI” can help bridge the “AI Divide”, an IDRC term for the gap between countries that have the ability to design and deploy AI applications and those that do not. Most countries, like mine, in order to jump on the AI train, will have to leapfrog and deal with the lack or insufficiency of key enablers in order to develop and deploy AI, and thus harness its possible benefits. There is little attention to these enablers, the digital (technical infrastructure) and analogue (regulatory frameworks) foundations that in most countries are largely absent, underdeveloped and unequally distributed. There is an interesting dilemma around the openness of AI: most least developed countries will need open AI to access these techniques and technologies, but that openness brings many challenges around fundamental rights (data governance, privacy) and market competitiveness. My take on an open AI is similar to open source software: show me the algorithm!!

Well, in effect AI has sneaked up on us all, and there is AI embedded in many aspects of our lives. In the educational field, for example, ITS (Intelligent Tutoring Systems) applications that give support to learners have been used for many years now; here is a meta-analysis of ITS effectiveness (a review of 50 evaluations).

Related to translation, I remembered the Translexy project, created to translate MOOCs into more than 10 languages without the need to extract text for translation. Our OEGlobal co-Chair @dorlic could maybe tell us about its progress. Something similar, related to processing OER, is the X5Gon project, also born in Slovenia, which looked to create on-demand OER pipelines retrieved through AI. I remember @njermol being behind this project.

My human partly-intelligent 5 cents …

2 Likes

Hello @openeducator, thanks for your comments. I like this idea of “turning AI sideways”; maybe you’ll find a way to open it up!!! But this is a good example of how AI can be transformative but also disruptive to previous educational practices. Just as there has been progress on automated essay evaluation (a great feature in Moodle), similar progress in AI-driven essay editors challenges learning objectives/outcomes and instructional design. With AI’s rapid development it is difficult to state that something cannot be done via AI today.

1 Like

Wow @AjitaD, that’s an awesome idea. Can you expand on this idea of AI supporting different trajectories to explore? Where are these trajectories headed? How would that work?

That is probably because the ‘branching’ in AI algorithms is not as diverse and leans towards being ‘precise.’
Let me try to explain my thoughts from what I have actually experienced. :slight_smile:

Say I am a Chemistry major, but I have an interest in the History of Chemistry or maybe even in Chemistry in Literature (forensics, Sherlock Holmes :slight_smile: ). AI would not pop that up for me. In fact, it would minimize my chances of hitting on it by keeping my focus on either chemistry or Sherlock Holmes cases. This definitely helps and prevents one from getting ‘lost’, but as a student who is exploring, I have very little chance of coming across the excellent manuscripts about the History of Chemistry on the Project Gutenberg website. In fact, even if I explore as a student, I rarely get this as a branch of study until I already know about it.
I ran the course ‘Che-mystery: The History of Chemistry’ for OE4BW in 2020, and the participants said that they were not even aware of such a branch of study.

I hope I have managed to communicate what I am trying to say.

1 Like

This is very clear, Ajita, and speaks to the scope of what AI can and cannot do: it can only succeed on existing training data and programmed patterns.

What it cannot do is offer serendipity or reach outside of patterns, and when it does, it is more purely random than the way Natural Intelligence and humans operate.

My perhaps similar experience was that as an undergraduate I majored in earth science with a bit of computer science. Were it not for the university requiring humanities electives, neither I nor any AI would have known in advance how much I would get out of my course on Art History. It did not happen until I took a class outside of my interests.

What you describe was also my fascination at libraries: starting from an area where the index told me there was a book of interest, and then the magic of just wandering off topic down the rows to find things beyond or far from that topic.

I would love to know more about “Che-mystery”; that sounds like a creative approach to teaching chemistry, and it reminds me of a project with a Geology professor who designed an interactive “murder mystery” activity for teaching how sedimentary rocks are formed by acts of destruction (the weapons being different kinds of erosion) of original rock types (the victim).

Now that is some natural intelligence at work!

2 Likes

There is much we ought to gather and flesh out here! I am just reading in MIT Technology Review a criticism of Google’s new cute animal AI generator, Imagen.

Their reason for not being open is to keep us “safe” from undesirable content?

Scroll down the Imagen website—past the dragon fruit wearing a karate belt and the small cactus wearing a hat and sunglasses—to the section on societal impact and you get this: “While a subset of our training data was filtered to remove noise and undesirable content, such as pornographic imagery and toxic language, we also utilized [the] LAION-400M dataset which is known to contain a wide range of inappropriate content including pornographic imagery, racist slurs, and harmful social stereotypes. Imagen relies on text encoders trained on uncurated web-scale data, and thus inherits the social biases and limitations of large language models. As such, there is a risk that Imagen has encoded harmful stereotypes and representations, which guides our decision to not release Imagen for public use without further safeguards in place.”

It is all interesting in that the LAION-400M dataset is open data but Google’s use of it is not (I gather there is no SA clause there).

Also, regarding your note about AI not being open: Hugging Face describes itself as open source, and yes, I see much here Hugging Face · GitHub, and you can make use of some of their services.

I used it to show students how to create a bot that completes sentences based on their Twitter history (my demo), or as an example.

While I cannot make too much out of their open source code, I prefer to try to dabble myself to better understand the limits and potentials.
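If anyone else wants to dabble the same way, here is a minimal sketch of that kind of sentence-completion bot using the Hugging Face transformers library. The “gpt2” model and the prompt are placeholder assumptions on my part; the actual classroom demo would use a model fine-tuned on someone’s Twitter history.

```python
# Minimal sketch: sentence completion with a Hugging Face pipeline.
# Assumes the `transformers` library is installed (pip install transformers);
# "gpt2" is a stand-in model, not the one from the classroom demo.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

prompt = "Open education is"
completions = generator(
    prompt,
    max_length=40,           # cap the length of each completion
    num_return_sequences=3,  # produce a few alternatives
    do_sample=True,          # sample so the alternatives differ
)

for c in completions:
    print(c["generated_text"])
```

Nothing fancy, but it is enough to poke at the limits and potentials without reading all of their source code.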

In fact, they offer a free course in NLP (ad-free) that I might complete in the next month or so; maybe we can coordinate some group effort?

1 Like

You sure did, @AjitaD, thanks so much. Indeed, AI still has a long way to go on processing the complexity and random connections, which leads to biases and specific expected outcomes. In that process, any type of automated decision should be accountable, and I do not know of a better way to do that than through openness.

1 Like

Awesome!!!

How do you see coordinating a group effort on the course?

1 Like

I’m not sure; maybe just here, through a series of discussions tied to each section of the course. When I looked more closely, it called for experience with Python, on which I am very thin!

I came across a post by Simon Willison that offers a way to experiment with the Generative Pre-trained Transformer 3 (GPT-3) that powers many of the text completion AI tools.

By registering an account for the API (which you don’t even need to use) you can get 3 months of access to test it out in the “Playground”. Setting up an account requires being able to authenticate via a mobile phone. I might give it a try (sorry @wernerio, Chile is not on the list of supported countries! We must complain).

1 Like

What’s ICT?
Would be fun to explore algorithms whether in artificial or organic intelligence…

I would think ICT refers to Information and Communications Technology

Here is a fun one: try to determine if an image was created by AI or by a human. It’s not easy!

I love that idea of getting students to critique and improve AI texts. I’ve been exploring something similar as I think about creating AI activities to complement my OER writing textbook.

I did this microcourse from atingi and loved it: atingi: Log in to the site

How strange and unfair that Chile is not on the list of supported countries! But I think @wernerio could still use apps that draw on GPT-3, like Jasper.ai?

I am really enjoying playing in the GPT-3 playground and making a few videos about possible educational applications for writing courses. The community forum has been helpful.

Thanks for articulating this! I would be concerned too about the narrowness and possible bias of an AI career counselor.

However, I think that might be more in the software that uses AI than in the nature of AI itself. I have found that it’s possible to give all kinds of prompts to GPT-3 in the OpenAI playground. For example, I wonder what I would get if I prompted it to give me some surprising ideas for courses that might interest me and explain why they might be interesting.
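As a rough illustration, here is a minimal sketch of sending that kind of prompt through the OpenAI Python library instead of the Playground. The model name “text-davinci-002”, the parameters, and the prompt text are assumptions from that era on my part, not anything OpenAI prescribes.

```python
# Minimal sketch: asking GPT-3 for surprising course ideas via the OpenAI API.
# Assumes the pre-1.0 `openai` Python library and a valid API key;
# "text-davinci-002" is an assumed era-appropriate model name.
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder, never commit a real key

prompt = (
    "I am a chemistry major who also enjoys detective fiction. "
    "Suggest three surprising courses that might interest me, "
    "and explain why each one might be interesting."
)

response = openai.Completion.create(
    model="text-davinci-002",
    prompt=prompt,
    max_tokens=200,
    temperature=0.8,  # a little randomness helps surface surprising ideas
)

print(response["choices"][0]["text"].strip())
```

The Playground is essentially a friendlier interface over the same call, which is probably the better place to start for writing courses.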

Hello all, I’m jumping in very late here with some responses–hope that’s okay! I just got so excited to find a group of OER folks interested in AI because I’ve just been discovering AI and thinking about how to connect it to my OER textbook. Here’s a conference proposal I just did on the topic.

I believe the serendipity also has some pattern/algorithm working behind it, maybe a subconscious one.
One needs to go down that path to be able to discover, doesn’t one?
It can’t be just magic, though it does seem magical.

My aim with Che-Mystery was to bring out the hard work, failures, and immersion in a topic before the success story. Science textbooks miss this angle. According to the textbook portrayals, what students see may be this:
Suddenly Mendeleev stumbles upon the arrangement, Archimedes has his ‘gotcha’ moment of ‘Eureka, Eureka’, Newton discovers gravity because of the falling apple, Kekulé dreams of the structure of benzene… the list is endless. It makes scientific discoveries look more like serendipity than what they actually are: scientific thought.
That one-paragraph, in-a-box-in-a-corner type of text/image just fails to communicate the process of what happened before Mendeleev got that arrangement, or how Archimedes and Newton were so immersed in the topic that they were able to realize what was being manifested.

Just to share with you, here is the video link of the presentation of the course Che-Mystery for the OE4BW program, and also my podcasts titled Che-Mystery: unheard stories of scientists. (This year, I have also started another series, bedtime stories (folktales).)

@wernerio - yes, that is an interesting area: who is accountable for automated decisions? I say this because I recently came across many algorithms to train children in decision-making, and I was, and still am, a bit apprehensive about where AI is heading and how unpredictable it is becoming in itself. But what if, in the long run, a person’s decision taken due to AI was not really the appropriate one and was just based on the data (which could be incomplete)? …seems like a rant? :sweat_smile:

@annarmills, yeah, interesting ideas would, and definitely do, pop up, but it depends on the prompts we give, I suppose. So we need advocacy for that as well. :slight_smile: