Don't Ask ChatGPT for a Workshop on Academic Integrity and AI!

All the spaces seem abuzz with OpenAI’s ChatGPT – there are playful surprises, but if you are like me, a lot of head scratching and hopefully some curiosity. And good questions, like this one on Twitter from @mahabali:

https://twitter.com/Bali_Maha/status/1600694678180925440

Maha’s public questions usually generate a valuable set of responses. Mine might not be in that category; in a more sarcastic/playful vein, I posed the question to ChatGPT.

I prompt

Create a workshop for educators on how to promote academic integrity that discourages students from using AI for writing.

and it responds

If you’re an educator looking to promote academic integrity in your classroom, one way to do so is by creating a workshop for your students on the importance of original work. In this workshop, you can discuss the issue of using AI for writing and how it can be detrimental to a student’s academic and professional growth.

Here are some key points you can include in your workshop:

Define academic integrity and explain why it’s important.
Academic integrity is the moral code or ethical policy that upholds the values of honesty, trust, fairness, respect, and responsibility in scholarship. It’s important because it helps maintain the credibility and reliability of the academic community, and ensures that all students have an equal opportunity to learn and succeed.

Discuss the consequences of using AI for writing.
Using AI for writing can have serious consequences for students. For one, it undermines the very principles of academic integrity by promoting a culture of cheating and dishonesty. Additionally, it can lead to disciplinary action, such as being suspended or expelled from school. Moreover, it can harm a student’s reputation and future job prospects, as employers may be hesitant to hire someone who has been caught cheating.

Emphasize the value of original work.
Original work is the foundation of academic integrity. It’s important because it allows students to learn and grow by engaging with the material and expressing their own ideas and perspectives. Additionally, it helps students develop critical thinking and problem-solving skills, which are essential for success in both their academic and professional careers.

Provide tips and strategies for avoiding the temptation to use AI for writing.
Here are some tips and strategies that can help students avoid the temptation to use AI for writing:

Start working on assignments early, so that you have plenty of time to complete them without feeling rushed or pressured.
Break the assignment into smaller, manageable tasks and set deadlines for each one.
Seek help from your instructors or classmates if you need clarification or assistance with the assignment.
Avoid procrastination and distractions, such as social media or television, that can interfere with your ability to focus on your work.
Use reliable sources and citations to support your arguments and ideas.
Encourage open dialogue and support.
It’s important to create an open and supportive environment in your classroom where students feel comfortable asking questions and expressing their concerns. Encourage students to speak up if they are struggling with the temptation to use AI for writing, and offer them support and guidance to help them overcome this challenge.

By creating this workshop, you can help educate your students on the importance of academic integrity and the dangers of using AI for writing. By promoting original work and providing helpful tips and strategies, you can help your students develop the skills and habits they need to succeed in their academic and professional careers.

Does it sound workshoppy? It has the right words and complete sentences, but… intelligent? Tell me more! Eagerly I ask:

Do you have URLs for examples of this kind of workshop?

Cue the sad trumpet sounds…

I’m sorry, but I am not able to browse the web and provide URLs for examples of workshops on promoting academic integrity. As a language model trained by OpenAI, I can provide information and assist with tasks based on my training, but I do not have the ability to browse the internet or access external information. My knowledge is limited to what I was trained on, which is up to December 2021. I’m here to help answer any questions you may have, within the scope of my capabilities.

As suggested in a recent (like yesterday) relevant piece by writer Clive Thompson

It’s probably because AI models like this do not appear to actually understand things. Having been trained on oodles of text, they’re great at grokking the patterns in how we humans wield language. That means they autocomplete text nicely, predicting the next likely phrases. They can grasp a lot of context.

But human intelligence is not merely pattern recognition and prediction. We also understand basic facts about the world in an abstract fashion, and we use that understanding to reason about the world. AI like GPT-3 cannot reason very well because it doesn’t seem to truly know any facts. It is, as the scientist Gary Marcus notes, merely the “king of pastiche”, blending together snippets of language that merely sound plausible.

Which leads to the real problem: The bot always sounds confident, even when it’s talking out of its ***.
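
To make Thompson’s “predicting the next likely phrases” point a bit more concrete, here is a tiny sketch of my own (not anything from his piece) that asks the small, public GPT-2 model for its most probable next tokens after a short prompt, using the Hugging Face transformers library:

```python
# A toy illustration of next-token prediction with GPT-2 via transformers.
# This is my own sketch for this thread, not code from Thompson's article.
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

prompt = "Academic integrity is"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits

# Turn the scores for the position after the prompt into probabilities,
# then list the five tokens the model considers most likely to come next.
next_token_probs = torch.softmax(logits[0, -1], dim=-1)
top = torch.topk(next_token_probs, 5)

for prob, token_id in zip(top.values, top.indices):
    print(f"{tokenizer.decode(int(token_id))!r}\t{prob.item():.3f}")
```

Everything a chatbot like ChatGPT produces is, at heart, a vastly scaled-up and fine-tuned version of that ranking step, repeated one token at a time, which is exactly why it can sound fluent without truly knowing anything.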

Where is this going?


Well, first, we must give kudos to you for cleverly posing your question to ChatGPT!! :clap: Not just because it’s clever and cheeky but also because I think it covers one of the first bases that we need to attend to: educators must be tinkering with this stuff and really getting familiar with its capabilities.

Then, where IS this going? Can we imagine that it may be going in the direction of obliterating essays and extensive writing as the all-important demonstration of academic skills and technical knowledge? If you look at where society is leaning in terms of media consumption, we’re seeing more and more that we (a) consume information in short snippets and (b) lean more toward audio/visual than text. Academia will be slow to keep up, I’m sure, but this might force us into a space where learners demonstrate knowledge not by writing extensively about what they know, but by expressing it creatively with a personalized and unique perspective and delivering it via other forms of media.

Last night, my partner and I were checking out ChatGPT and trying to push its limits. We asked for an essay on the impact of colonization on wildlife. It turned one up quickly. It was well-written and informative. So, I can turn that in to a professor and get that grade. Voila! But then what happens when I go out into the real world and have to apply that knowledge? What happens when someone wants me to speak on it intelligently and offer a unique perspective, setting me apart from Google? Why didn’t my professor grade me on those authentic applications of my learning? Let’s get to that level of authentic, modern education!


ChatGPT, and AI in general, is fascinating. I think the questions go way beyond how good it is at anything (so far), whether it “knows” any facts, or whether it will help college students “cheat”.
I think it is pushing us to consider just what exactly constitutes “knowledge”: how does anybody know anything about anything? And what does “applying learning” really mean?


Thanks, Rachel, for not only signing up here at OEG Connect but also jumping into the conversation, which frankly I am more eager for than like button clicks :wink:

Exactly! We have more than enough pundits spouting in all the media spaces. This was the premise of what I set out on back in May

Among many challenges is that our intuition or conceptual understanding of what these things are doing, or how they work, is parked outside of a magic black box offering only a prompt entry form.

Even with my tinkering, can I really assess the capability and range of challenges from my own limited experiences? If I get good/interesting results does it apply across the board? If I get junk, can I judge the same?

Hardly, so we need a lot more group tinkering and sharing as you have done.

And yes, we would hope that the knowledge and understanding students are getting via an educational experience are much more than a professor’s grade check of a written essay. We know this is fraught with problems and there has been much good work in alternative assessment, open pedagogy assignments, etc.

I would hope students would want more to show for going out into the world than a pile of grades. This has long been the work, and the promise, of doing public work (in the open) that demonstrates more and, as you say, through multiple forms of representation.

Writing is still of key importance. You do not create a compelling video or an impactful audio show without the planning, writing, and storyboarding first. All good media production starts with writing.

When I taught media courses, I did not grade on the aesthetics of the media produced, but more on the ways students (publicly, in blogs) showed the arc of their project development, sharing how they built and created, and their reflections afterward. Writing, writing.

But these AI tools were just starting then. I would be rethinking the approaches now.

And maybe more than just trying to get around the “how do I know this writing is by a bot or a human” question, what if the AI has a place in the work? Some of the research tools have good promise (like Elicit and Explain Paper).

But what if the learning involves questioning, critiquing, re-writing the bot writing?

It cannot be ignored…

I have no answers, always questions, and curiosity.

You seem like you do too (and your writing seems like it’s from a human :wink: )

Exactly, Karen. It’s a lot to grapple with, but I’m always going to bank more on John Henry than the machine, no matter how smart it sounds.


This is what I am thinking in response to a fediverse reply by my good friend and seasoned writer/writing teacher Geoff Gevalt:

For more cheekiness, I tried to ask ChatGPT for some help:

Write a sarcastic blog post about the pitfalls of trying to identify if writing came from a human or a chatbot.

and what I got was not too bad structurally for a series of sentences. But does it say anything meaningful?

As a society, we like to think that we’re pretty good at telling the difference between human writing and chatbot writing. After all, we’re the ones who invented chatbots, so you’d think we’d know a thing or two about how they work and how their writing differs from ours. But it turns out that we might not be as good at this as we thought.

In fact, the pitfalls of trying to identify whether writing came from a human or a chatbot are numerous and varied. For starters, chatbots are getting better and better at mimicking human writing. They’re using more sophisticated algorithms and machine learning techniques to produce writing that is more and more human-like.

This means that it’s becoming increasingly difficult to tell the difference between human writing and chatbot writing. Even experts are having a hard time distinguishing between the two.

But that’s not the only problem. Another pitfall of trying to identify whether writing came from a human or a chatbot is that humans themselves are becoming more and more like chatbots. We’re bombarded with information from a variety of sources, and as a result, our writing is becoming more and more formulaic and repetitive.

In other words, the line between human writing and chatbot writing is becoming increasingly blurry. And that’s a problem, because it means that we’re losing our ability to tell the difference between the two.

So what’s the solution? Well, there isn’t really one. All we can do is try our best to keep up with the latest developments in chatbot technology and continue to hone our ability to tell the difference between human writing and chatbot writing. But even that might not be enough.

The bottom line is that the pitfalls of trying to identify whether writing came from a human or a chatbot are numerous and varied, and they’re only going to get worse as time goes on. So the next time you come across a piece of writing that you’re not sure about, just remember: it might be a chatbot, or it might be a human. And at this point, there’s really no telling the difference.

Are you calling me chatbot, you chatbot? I am thinking of sending you to a session with Eliza.

But to carry forward with Geoff’s question, I added:

And what does this idea mean for AI generated art? Is it as simple as needing to know if it was created by a bot or not, or is the entire methodology of what we consider making art going to need to evolve? Write this too sarcastically.

Frankly, as someone well versed in and a regular practitioner of sarcasm, I find ChatGPT pretty weak at living up to the word’s Greek origin, sarkazein, “to tear flesh”!

Oh, you want to know what this whole idea of human vs. chatbot writing means for AI generated art? Well, let me tell you, it’s a real game-changer.

See, up until now, we’ve been operating under the assumption that art is something that is created by humans. But thanks to the wonders of chatbot technology, that’s no longer the case. Now, we have to grapple with the idea that art can be created by machines as well.

And let me tell you, this is going to require a major shift in how we think about art. We’re going to have to redefine what it means to create art, and we’re going to have to come up with new criteria for evaluating it.

But don’t worry, it’s not all doom and gloom. In fact, some people are actually excited about the possibilities that AI generated art opens up. They argue that it will allow us to explore new artistic mediums and styles that would be impossible for humans to create on their own.

But let’s not get ahead of ourselves. Before we start getting all excited about the potential of AI generated art, we first have to figure out how to tell the difference between art created by a human and art created by a chatbot. And that, my friends, is no easy task.

In conclusion, the future of AI generated art is uncertain and fraught with challenges. But hey, at least it’s going to make things interesting.

I can tell you where you are going to see more of it: your junk mail. So many of the spam and phishing attempts I discard are easy to identify from the terrible grammar and writing! SpamGPT is coming, if it is not already here.

@clintlalonde tried something similar, where the blog post title is the prompt he posed to ChatGPT and the opening paragraphs are what was returned.

More discussion in the comments regarding attributing this stuff…

Maybe DALL-E can generate some kind of image about a relentless person who has babbled on too much about AI and driven everyone away.

But with the recent flurry of responses to the capabilities (or lack thereof) of ChatGPT, a good colleague asked about a computational means to test a selection of text and determine if it was written by “bot or not”.

I was doubtful, until I read another useful post from AI Weirdness.

That led me to check out the Hugging Face GPT-2 Output Detector Demo.

I first used text from my first post in this thread that was generated by ChatGPT in response to the query to create an academic integrity workshop.

I’d say a 99.95% score as “fake” (interesting choice of label) can be rounded up to a clear Yes Written by Bot

How about my text?

I get a 99.0% real score.

Is this sufficient? It is just one small data point, but interesting… do you rely on AI to determine AI?
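
For anyone who wants to poke at this beyond the web demo, here is a minimal sketch of running locally what I believe is the same underlying checkpoint, using the transformers library; the model name and the label strings are my assumptions about the demo, not a documented recipe:

```python
# A rough sketch of an AI-text check using the GPT-2 output detector model
# hosted on Hugging Face (assuming the demo uses this checkpoint, a RoBERTa
# classifier fine-tuned to spot GPT-2 output).
from transformers import pipeline

detector = pipeline("text-classification", model="roberta-base-openai-detector")

sample = (
    "Academic integrity is the moral code or ethical policy that upholds the "
    "values of honesty, trust, fairness, respect, and responsibility in scholarship."
)

result = detector(sample)[0]
# The checkpoint labels text roughly as "Real" (human) or "Fake" (machine),
# with a confidence score; exact label strings may vary by model version.
print(f"{result['label']}: {result['score']:.2%}")
```

The same caveat applies, of course: a detector trained on GPT-2 output, fed a sample or two, tells us very little about ChatGPT-generated text in general.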

Hi, I’m very new to this forum, so I sincerely hope I’m doing all this right! Such a fascinating discussion - and I find your use of Hugging Face intriguing.

I will be teaching a Creative Writing Module next year, open to all students (except the ones who already do a Creative Writing course), and I am pondering the use of ChatGPT for generating material (poetry, lyrics, haiku) to improve upon, to question, and perhaps to tear apart - all in the spirit of bringing this technology into the classroom, as several posters have suggested. Will report back here in time! For now, I’m adding this reply from ChatGPT

I prompted

Write a kafkaesque paragraph about a student who uses AI to write her thesis

and it responded

As the student sat at her desk, staring blankly at the screen in front of her, she couldn’t shake the feeling of unease. She had been working on her thesis for months, pouring over countless books and articles, but no matter how hard she tried, the words just wouldn’t come. Desperate for help, she turned to AI-powered language models, trusting that their advanced algorithms would be able to generate the perfect words for her. But as she read the text that had been generated, she couldn’t help but feel a sense of disconnect. The words were eloquent and well-crafted, but they didn’t feel like her own. She had lost control over her own work, and it was as if she was being pulled into a Kafkaesque nightmare.

… it’s quite a poetic warning, if nothing else :slight_smile:

Hello and welcome to OEG Connect, Lena…

You are doing this more than right. This space is better with every reply like yours, and we invite you at any time to create a new topic for anything you are interested in asking or sharing.

Please share your ideas about your Creative Writing course. I’ve been part of, and taught in, the DS106 open online creative media community/course, which also offers small daily creative activities to try (currently responses are done via Twitter, but I am working on creating a means to participate via Mastodon).

Absolutely, Hugging Face has been helpful for trying many different kinds of AI experiments and tools. I thought at one time I might do their free course, but it’s rather technical.

Not being a Kafka expert, I’d want my students to really question the response. There is much you can do beyond the first prompt, using it in a conversational mode.
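
If anyone wants to script that kind of back-and-forth rather than paste into the web interface, here is a rough sketch using the OpenAI Python client; the model name and the follow-up question are placeholders of mine, and every exchange quoted in this thread was done in the ChatGPT web UI:

```python
# A minimal sketch of "conversational mode" against the OpenAI API (v1.x client).
# The model name and follow-up wording are placeholder assumptions, not what
# was used for the exchanges quoted in this thread.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

messages = [
    {"role": "user",
     "content": "Write a Kafkaesque paragraph about a student who uses AI to write her thesis."},
]

first = client.chat.completions.create(model="gpt-3.5-turbo", messages=messages)
draft = first.choices[0].message.content
print(draft)

# Instead of accepting the draft, keep the conversation going and push back on it.
messages.append({"role": "assistant", "content": draft})
messages.append({"role": "user",
                 "content": "What specifically makes that paragraph Kafkaesque? Justify each claim."})

followup = client.chat.completions.create(model="gpt-3.5-turbo", messages=messages)
print(followup.choices[0].message.content)
```

The point is less the code than the habit: carry the model’s answer back into the next prompt and make it defend, revise, or contradict itself, which is exactly the kind of questioning I would want students doing.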

My colleague from the Open University in the UK, Tony Hirst, dabbles deeply in AI, and he wrote a compelling suggestion for turning around the use of ChatGPT: from taking its responses at face value to questioning them.

Also, for quite an extensive list of possible uses (I am still scanning my way through it), plus a few extra tools that might help, see

I hope others will chime in with thoughts, ideas about teaching with/about AI, and if you start any new conversations here, please use the ai tag.

Thanks again Lena, please share more as you go.

Hi Alan, thank you so much for the warm welcome! And extra thanks for all the food for thought and resources. I’ve scanned some of the ChatGPT use cases this afternoon - this is such a great list.
I haven’t yet used the AI in a conversational mode, so this is something I’m looking forward to trying.

And I’ll be happy to share my ideas for the Creative Writing course - great to hear that you are also active in this community. As soon as I’ve created a first draft lesson plan with my two co-teachers I’ll be posting it in the forum :slight_smile: