Given it’s at an unfriendly time for me on this side of the globe (3-4am), I’ll not attend that session.
However, I’ve been thinking about this issue for a couple of years. GPT-2 and GPT-3 have been doing these same kinds of things for that long, and even commercial services built on them, like https://writesonic.com/, have been around since October 2020.
My current thoughts on the topic, as grist for the mill, are this:
The writing that you get from large language models like this is amazing, yes, and it FEELS very new to be able to custom-generate text so quickly.
However, in essence, the problem for educational assessment processes is not that much different from the existence of good search engines or the essay-writing services (and databases) which have been around, and used by students, for a long time. We tried to deal with that using plagiarism-detection services and so on, but it was always an arms race that was never going to go anywhere useful.
To put it simply, people have been doing work (of all kinds) by cutting-and-pasting stuff from the internet for a long time. That is not going to decrease; it will only increase. The majority of all text in future will be echoes of previous texts (if it isn’t already) - and whether those echoes are generated by a human neural net or an artificial one doesn’t really matter.
Even the “truthfulness” and trustworthiness of AI texts is a furphy - we already have exactly the SAME problem with anything written by a human. The only serious attempt at establishing real truth in writing is going on in the scientific literature, and even that is very imperfect. I’m sure we’ve all seen many papers and conference presentations in our fields from students with very poor science or data, which just rely on “ticking the boxes” processes in higher education around the world. Scientific texts exist on a continuum of reputation and rigour that requires significant study to understand and use. Improving accuracy/reality/trustworthiness is a problem that applies to ALL media, and one the OER community especially should put at the forefront of discussion.
So, back to the assessment of other individuals: the solution, as I’ve been saying for decades in the Moodle context, is to stop relying on large texts or quizzes for assessment.
Why do I say this?
Well, in the rest of our lifelong learning, we assess each other and build reputation through LONG-TERM ENGAGEMENT. You know whether a colleague is good at their work or not, because you see what they do in an authentic context every day over a long period. Or perhaps you’ve read someone’s blog for years. It’s the same in a homeschool, or an apprenticeship, or any really small class.
Any teacher in these situations will tell you that they ALREADY KNOW the grade the other person will get in a given activity: “That student is a B student.” If you focus on helping that person get better, you may be able to create improvement - and with close engagement you will see those changes in real time, so your assessment will be updated accordingly and accurately.
So the answer is basically very small classes with frequent discursive engagement. Even though AI can help us scale up (much as teaching assistants or tutors do in university classes), I still think the highest-quality learning a brain can ever receive is through 1:1 engagement with an expert (e.g. an apprenticeship), so we need to keep as close to that as we can.
This means realising that our current education systems are based on an industrial model that is quite obsolete, and in which people were often learning DESPITE the system. We have 8 billion people on Earth now - if we want real quality learning, we have PLENTY of people to teach our next generations, provided we can structure and incentivise things well.
It’s also important to realise that a lot of the “brains” in our environment in the future will be electronic. We will be interacting with more and more AIs embedded in our devices, in our robots, on websites, on the phone and so on. Some will be simple and special-purpose; some will be more general. More and more of them will have the capacity to take feedback and LEARN, just like people do, so these education techniques apply JUST AS MUCH TO AI STUDENTS AS HUMAN ONES. As AI increasingly surpasses our abilities and moves towards AGI, I think it’s very important that we focus on educating them to be our friends rather than our competition. Educators need to be thinking about that now.
Well, that got long, I should probably turn it into a blog on https://openedtech.global where we discuss these kinds of things in the context of designing a future Open EdTech framework.
(For your info, I didn’t use AI for any of this post, because it FEELS GOOD to express oneself in text - and I think that’s another angle we should double down on in our educational practices: the joy and the mental-health benefits of self-expression and creation.)