Asking About AI From an Open Perspective: (2) What Are We Doing Now?

I’m sure I’m not alone in a sense of AI overload that’s taxing my simple brain. As @mark.wilson shared, a BBC article suggests that thinking I can understand how it works is likely not possible.

We started this series of topics when @paulstacey shared his published post on AI From an Open Perspective and offered to be part of open discussions here. Our poll of interest (still open) in the topics Paul covered indicated the most interest in issues that are still big, yet worthy: ethics and AI Learning.

The questions Paul outlined for AI Learning are those big ones about trying to work out how the machines crunching Large Language Models themselves learn:

These are hurting my brain too. Do we have the insight to know how what the learning machines do (if it is learning) actually works? Something more than “autocomplete on steroids”, especially as studies suggest the same prompts do not always return the same results, nor is there a means to trace a result back to the sources an AI was trained on.

But if someone out there can help us see or think about how to understand machine learning, please reply away.

On the other hand, @danmcguire suggested we take a collective look at the ways we are using AI right now, and maybe at what we think we might want to do in the future.

This might be more practical for some here: reading about the approaches open educators are trying right now, both for teaching with AI and for using AI to create open content.

Here are a few examples I have come across recently that focus on practices educators are trying right now.

Take this question in either direction: practical (what is changing in your own practice?) or conceptual (what does AI Learning really mean?).

On the practical side, Dr. Remi Kalir @remikalir used the LLM Claude to write a 300-word discussion forum post for the graduate course Critical Digital Pedagogy he is co-teaching at the University of Colorado.

He attached these introductory course readings:

  • the “Introduction, Chapter 1, and Chapter 2” from bell hooks’ book “Teaching to Transgress”
  • “Chapter Two” from Paulo Freire’s book “Pedagogy of the Oppressed”
  • James Baldwin’s essay “A Talk to Teachers”
  • Henry Giroux’s article “Rethinking Education as the Practice of Freedom: Paulo Freire and the Promise of Critical Pedagogy.”

He then prompted Claude to “Generate a post for me that:”

  1. synthesizes main points about teaching, learning, and technology across the four texts
  2. includes one direct quote from each reading
  3. connects main points to current social, political, or economic events
  4. includes one rhetorical question

You’ll have to read his post to see how well Claude responded.

I am with you on the AI overload, here. I recently put on a professional development workshop for faculty using some content from Ethan Mollick and Lilach Mollick’s webinar for Harvard Business back in May. Many of us are slowly implementing policies in our syllabi that outline if and how students can use AI for their assignments. I am happy to share the policy and assignment(s) I’ve developed if anyone is interested (English Composition). I haven’t yet figured out how to gain students’ permission to use AI myself as a teaching assistant, so I welcome ideas for that!

More collections of links; here is a nice one from UVA.

It includes more resource collections like How Do I Communicate About AI With My Students? and What Are Ethical Considerations for AI?, plus a number of subject-oriented ones (Nursing, Business, World Languages, and more).


Thanks, I forgot this one. Thanks to @remikalir, who notes:

I’ve been playing around with Claude for just a few hours. And these responses take but seconds to generate. It’s hard not to imagine the death of the discussion forum, at least as it’s conventionally used by many university instructors. The so-called “dreaded threaded” discussion–especially as a proxy for “participation” in online courses–faces a very uncertain future. And don’t get me wrong–this isn’t about student cheating. The responsibility is on us, as educators, to design more compelling assignments and creative approaches to student writing and interaction–particularly when writing in response to course texts.

Maybe the silver-tinged lining is this reconsidering of assignments…

Hi Alan, I too have been digging into machine learning. This Machine Learning Explained article provides a pretty good basic description.

“The function of a machine learning system can be descriptive, meaning that the system uses the data to explain what happened; predictive, meaning the system uses the data to predict what will happen; or prescriptive, meaning the system will use the data to make suggestions about what action to take.

There are three subcategories of machine learning:

Supervised machine learning models are trained with labeled data sets, which allow the models to learn and grow more accurate over time. For example, an algorithm would be trained with pictures of dogs and other things, all labeled by humans, and the machine would learn ways to identify pictures of dogs on its own. Supervised machine learning is the most common type used today.

In unsupervised machine learning, a program looks for patterns in unlabeled data. Unsupervised machine learning can find patterns or trends that people aren’t explicitly looking for. For example, an unsupervised machine learning program could look through online sales data and identify different types of clients making purchases.

Reinforcement machine learning trains machines through trial and error to take the best action by establishing a reward system. Reinforcement learning can train models to play games or train autonomous vehicles to drive by telling the machine when it made the right decisions, which helps it learn over time what actions it should take."

See the article for more… Worth noting too that the article links to an MIT OCW Introduction to Machine Learning course, which offers some deeper-dive materials.
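To get a concrete feel for the supervised case in that three-way breakdown, here is a toy sketch in Python (the data and the nearest-neighbor approach are my own illustration, not from the article): we “train” on human-labeled examples, and the program labels a new point by finding the closest labeled one.

```python
# Toy supervised learning: classify a point by its nearest labeled neighbor.
# All data below is made up purely for illustration.

def nearest_neighbor(train, point):
    """Return the label of the training example closest to `point`."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(train, key=lambda ex: dist(ex[0], point))[1]

# Labeled training set: (features, label) pairs, like the human-labeled
# dog pictures in the article's example.
training_data = [
    ((1.0, 1.0), "dog"),
    ((1.2, 0.9), "dog"),
    ((5.0, 5.2), "cat"),
    ((4.8, 5.0), "cat"),
]

print(nearest_neighbor(training_data, (1.1, 1.0)))  # prints "dog"
```

Real systems use far richer models than this, but the core supervised idea is the same: accuracy comes from the labeled examples the model was given.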

One of the things I keep reading and hearing, including in this article, is that deep learning and neural networks “are modelled on the way the human brain works”. I personally think this is BS. It is true that the human brain has billions of interconnected neurons, and artificial neural networks may emulate that structure, but the actual processing and learning taking place in that structure is very simple and doesn’t come anywhere close to being like the human brain.

I like all the examples you’ve come across that focus on AI practices educators are currently using. However, I have reservations about the learning models that underpin AI. Practices ought to be based on some understanding of what is going on inside the AI black box.

Like @kdunn, I too experience an AI overload. All the articles and media coverage can be a bit much. As a break from those snippets, I recently read the book Genius Makers by Cade Metz, which provides an easy-to-read summary of the history of AI. Who knew that Toronto, Montreal, and even Calgary were significant locations of AI innovation? That said, I’m not so sure I’d call those described in the book geniuses.

Also, for levity, it is fun to look at Janelle Shane’s website AI Weirdness (referenced in the Machine Learning article I started with). She does a great job of showing the humorous side of AI.


It’s helpful to read that three-way breakdown of machine learning. And while I follow your point on the “modelled on the brain” claim, my brain is still not quite sure how/if it works.

The BBC Machine Minds piece Why humans will never understand AI has me thinking a dark question: what if we can’t understand (and what do we mean by understand?) the inside of the boxes? I don’t expect us to understand the detailed workings, but I am finding my own mental models falling really short.

Simon Willison has a fantastic (and still mostly beyond my technical level) post on his talk Making Large Language Models work for you, but the slides give a good summary.

There’s the underscoring of the fact (said in many other places) that for all its size, the amount of information ChatGPT is trained on is quite small and would fit on my laptop (or, as Simon says, an LLM is really just a file). And I am hard pressed to fit into my small brain the concept of embeddings as information computed in 1,536-dimensional space.
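The embedding idea can at least be sketched at a small scale: a text becomes a list of numbers (1,536 of them in the space mentioned above), and “similar meaning” becomes “nearby vectors,” usually measured with cosine similarity. The three-number vectors below are made up purely for illustration; real embeddings come from a model.

```python
# Cosine similarity: how "aligned" two embedding vectors are (1.0 = identical
# direction). The vectors here are hypothetical stand-ins for real embeddings.
import math

def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

embedding_cat = [0.8, 0.1, 0.2]     # hypothetical embedding for "cat"
embedding_kitten = [0.7, 0.2, 0.2]  # hypothetical embedding for "kitten"
embedding_car = [0.1, 0.9, 0.1]     # hypothetical embedding for "car"

print(cosine_similarity(embedding_cat, embedding_kitten))  # close to 1.0
print(cosine_similarity(embedding_cat, embedding_car))     # noticeably smaller
```

The same arithmetic works unchanged in 1,536 dimensions; what defies my mental model is picturing that space, not computing in it.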

One takeaway too is tossing out an idea I and others have suggested: what if we trained an AI on just data we knew about or knew was good, like, say, every bit of the resources we controlled? He suggests that is not viable because such a corpus is not nearly as large as a training set needs to be; instead, the approach (I think) is adding the content relevant to your query into the prompt, which works it through the larger model and its mechanics.
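That prompt-stuffing approach (often called retrieval-augmented generation) can be sketched without any model at all: find your own documents relevant to a question and paste them into the prompt. The documents and the crude keyword matching below are hypothetical; a real system would rank documents by embedding similarity instead.

```python
# A minimal retrieval-augmented prompt builder. Documents and filenames are
# made up for illustration; real systems retrieve by embedding similarity.

documents = {
    "oer-policy.txt": "Our open education resources may be reused with attribution.",
    "ai-syllabus.txt": "Students may use AI tools if they disclose how they were used.",
}

def retrieve(question, docs):
    """Crude keyword retrieval: return texts sharing any word with the question."""
    words = set(question.lower().split())
    return [text for text in docs.values()
            if words & set(text.lower().split())]

def build_prompt(question, docs):
    """Paste the retrieved context ahead of the question, as one prompt."""
    context = "\n".join(retrieve(question, docs))
    return f"Answer using only this context:\n{context}\n\nQuestion: {question}"

print(build_prompt("May students use AI tools?", documents))
```

The point of the sketch: the model itself never changes; only the prompt grows to carry the content you trust.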

Keeping at it!

The Tech Won’t Save Us podcast episode AI Criticism Has a Decades-Long History was informative for understanding the story and motivations of Joseph Weizenbaum.

And I second the recommendation for AI Weirdness; I have been subscribed for a while. What I like is that Janelle Shane’s examples are humanly understandable (like the recent one on a failure to understand a giraffe with no spots).

So I wonder what we can do to reach both this high-level understanding that hurts my brain and some practical, hands-on things we can do and share. Maybe some kind of group effort at a prompt crafting challenge (to borrow/steal the name of Tom Barrett’s excellent newsletter).

And also, what might we be able to do as some kind of pop-up hands-on activity at OEGlobal23…

Thanks for keeping the channels going.


As I will see some OEG supporters at #UNESCO Digital Learning Week (formerly Mobile Learning Week), which is very focused on AI, please find the corresponding page here in English: Digital Learning Week | UNESCO

I’m convinced we canna understand how the machines generate the output, especially the corporate-owned ones. Even open-source algorithms are too complex to be ‘transparent’ or to ‘audit’.
I used images to prompt Midjourney and made this attractive black box, but it’s a black box nonetheless, and I have no idea how it was generated.
Look closely: there is a library inside. My image prompts were the Borg cube and a 50s server room, plus some text. Neither a library nor books were in my text prompts.