What might ChatGPT mean for higher education? (Future Trends Forum)

Still trying to understand what the heck ChatGPT is, or why so many people are either excited or wary of it? Here is a live Future Trends Forum run by my good friend and long-time colleague Bryan Alexander, hosted on the interesting Shindig platform.

What might ChatGPT mean for higher education? – A community exploration
2022-12-15, 19:00 to 20:00 UTC

What is ChatGPT and what might it mean for higher education?

In this special Future Trends Forum session we’ll collectively explore this new technology. How does the chatbot work? How might it reshape academic writing? Does it herald an age of AI transforming society, or is it really BS?

Please join us with your questions, experiences, fears, and hopes. As always, the Forum is a welcoming environment for the best discussion about higher education.

Attend or register here: https://shindig.com/login/event/chatgpt

Given it’s at an unfriendly time for me on this side of the globe (3-4am), I’ll not attend that session. :slight_smile:

However, I’ve been thinking about this issue for a couple of years. GPT-2 and GPT-3 have been doing the same kinds of things for that long, and even commercial services based on them, like https://writesonic.com/, have been around since October 2020.

My current thoughts on the topic, as grist for the mill, are these:

The writing that you get from large language models like this is amazing, yes, and it FEELS very new to be able to custom-generate text so quickly.

However, in essence, the problem for educational assessment processes is not that much different from the existence of good search engines or the essay-writing services (and databases) which have been around (and used by students) for a long time. We tried to deal with that using plagiarism-detection services and so on, but it was always an arms race that was never going to go anywhere useful.

To put it simply, people have been doing work (of all kinds) by cutting and pasting stuff from the internet for a long time. That is not going to decrease; it will only increase. The majority of all text in the future will be echoes of previous texts (if it isn’t already). These echoes are generated by neural nets, which could be human or AI; it doesn’t matter which.

Even the “truthfulness” and trustworthiness of AI texts is a furphy - we already have exactly the SAME problem with anything written by a human. The only serious attempt at establishing real truth in writing is going on in the scientific literature, and even then it’s very imperfect. I’m sure we’ve all seen many papers and conference presentations in our fields from students with very poor science or data, which just rely on “box-ticking” processes in higher education around the world. Scientific texts exist on a continuum of reputation and rigour that requires significant study to understand and/or use. Improving accuracy/reality/trustworthiness is a problem that applies to ALL media, and it is a problem the OER community especially should put at the forefront of discussion.

So back to assessment of other individuals: the solution, as I’ve been saying for decades in the Moodle context, is to stop relying on large texts or quizzes for assessment.

Why do I say this?

Well, in the rest of our lifelong learning, we assess each other and build reputation through LONG-TERM ENGAGEMENT. You know if a colleague is good at their work or not, because you see what they do in an authentic context every day for a long time. Or perhaps you read someone’s blog for a long time. It’s the same in a homeschool, an apprenticeship, or any really small class.

Any teacher in these situations will tell you that they ALREADY KNOW the grade that the other person will get in a given activity. “That student is a B student.” If you focus on helping that other person get better, you may be able to help create improvement, and with close engagement you will see those changes in real time, so your assessment will be modified accordingly and accurately.

So the answer is basically very small classes with frequent discursive engagement. Even though AI can help us scale up (like teaching assistants or tutors do in university classes), I still think the highest-quality learning a brain can ever receive is through 1:1 engagement with an expert (e.g. an apprenticeship), so we need to keep as close to that as we can.

This means realising that our current education systems are based on an industrial model that is quite obsolete, and in which people were often learning DESPITE the system. We have 8 billion people on earth now - if we want real quality learning we have PLENTY of people to teach our next generations if we can structure/incentivise things well.

It’s also important to realise that a lot of the “brains” in our environment in the future will be electronic. We will be interacting with more and more AIs embedded in our devices, in our robots, on websites, on the phone and so on. Some will be simple and special-purpose, some will be more general. More and more of them will have the capacity to take feedback and LEARN, just like people do, so these education techniques apply JUST AS MUCH TO AI STUDENTS AS HUMAN ONES. As AI increasingly surpasses our abilities and perhaps inevitably becomes AGI, I think it’s very important that we focus on how we educate them to be our friends, rather than competition. Educators need to be thinking about that now.

Well, that got long, I should probably turn it into a blog on https://openedtech.global where we discuss these kinds of things in the context of designing a future Open EdTech framework.

(For your info I didn’t use AI for any of this post, because it FEELS GOOD to express oneself in text, and I think this is another angle we should double down on in our educational practices - the joy and need of self-expression and creation for mental health)

Sadly, event timing is always going to mess up half the globe; I can confirm that Bryan is fantastic at not only posting the recordings, but also blogging the details and notes.

Thanks for this in-depth reply, Martin. I hope you know I am neither an AI stargazer nor a caster of doom about it ruining education or taking jobs. I’ve been posting around here since June in more of a questioning/curious/slightly critical mode. It’s been my hope to stir up exactly these kinds of discussions, but also, maybe, just maybe, to coordinate some action by those interested in more than tweet-length spouting.

I think a major challenge in understanding AI is that our intuition and conceptual models are failing, because the workings of these systems are both utterly complex and hidden. So we are trying to make sense of them from the limited examples we can get from the responses of the generators we are hearing about.

I have lingered some on this way of operating called “promptism”, which I read about in a newsletter from Seb Chan.

I’m thinking about some of the projections that we might turn tasks of building/creativity over to AI, like writing code. I just saw some CodePen demos where ChatGPT was prompted and spat out custom SVG images. On one hand possibly impressive, but what if we lose the foundational capability to debug or diagnose what the machine spits out?
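To make this worry concrete, here is a minimal sketch (my own hypothetical example, not from the CodePen demos) of the kind of sanity check a person still needs the skills to write: if we can no longer read SVG ourselves, the least we can do is confirm that machine-generated markup parses and inspect what elements it actually contains.

```python
# A hypothetical sanity check on machine-generated SVG: parse it and list
# the element tags, rather than trusting the output blindly.
import xml.etree.ElementTree as ET

# Example SVG of the kind a chatbot might emit (hand-written here for illustration)
generated_svg = """<svg xmlns="http://www.w3.org/2000/svg" width="100" height="100">
  <circle cx="50" cy="50" r="40" fill="blue"/>
</svg>"""

def inspect_svg(markup: str) -> list[str]:
    """Parse the markup and return the element tags it contains."""
    root = ET.fromstring(markup)  # raises ParseError on malformed output
    # Strip the XML namespace prefix from each tag for readability
    return [el.tag.split("}")[-1] for el in root.iter()]

print(inspect_svg(generated_svg))  # → ['svg', 'circle']
```

Even a check this small presumes someone on the team can still read markup and a parser's error messages, which is exactly the foundational capability in question.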

The limited thinking and panic about what this means for essay writing, which you address, makes me tired. That view is so narrow. I am motivated by what you described.

In my teaching I never assessed via quizzes or essays; it was all based on what students wrote, reflected on, and showed in public (or at least community) spaces: often, but not only, blogs.

I’d hope we can rally some more brains here and there (fediverse and beyond) to take these on. Are you willing, or interested, to motivate some OEGlobal community action? What can we do?

And please never apologize for long-form writing. I could not agree more with your last paragraph: this has been a loss in the last few years of sliding toward social media platforms for thinking out loud.

Now this is something to champion! Thanks again

Here is a heap of resources shared by Bryan before the session; the recording is now available:

There were close to 200 attendees, with representation from across North America, plus Armenia, New Zealand, and Australia (and more I might have missed). The chat was on fire; Bryan will be summarizing the shared resources, and since there was so much interest, there will be a Part 2 to the session.

Much focus and worry centered on the implications for writing assignments, as suggested, but there were also calls for rethinking the types of assessment we use.

ChatGPT is a large language model built by OpenAI, designed to interpret natural language and provide human-like replies to a wide variety of inquiries and prompts. It was trained using deep learning techniques on a large quantity of text data and can be used for a number of applications, such as chatbots, language translation, and question-answering systems.

Many believe that it is going to be a threat to higher education. I can’t blame them, as students may use this smart AI application to cheat on their teachers. They may use it to complete assignments and writing projects.
Despite these pitfalls, I see a bright future ahead for higher education with ChatGPT.
Read more: ChatGPT and Higher Education: Crisis or Opportunity?