Dare we utter the topic trending almost everywhere? But I was impressed when one of my Google alerts sent me to the new site from Kwantlen Polytechnic University's Teaching and Learning Commons: Generative AI for Teaching and Learning at KPU
So what is going on at your institution or at the regional level? I started looking around at a high level and found:
At World Education, we’ve been engaging educators in a few ways to gather and share information through our Leveraging AI for Learning and Work initiative. Here are some things we have going on:
a survey to gather information about how educators are using gen AI and their perspectives on it
next up - CampGPT! This will be a teacher PD offering that takes a looser, more exploratory form of our EdTech Maker Space service learning model. Campers can attend Zoom meetings where we will learn about the various tools, and then they'll have specific "camp-themed" activities to engage in between "jamborees".
Keeping an eye on various policy initiatives as well. The UNESCO and OET resources you mentioned have been helpful. I find the Collective Intelligence Project to be intriguing, too. Still working through reading that white paper… lots to read and take in right now!
Here are two further AI sessions I am aware of.
A keynote for a recent EuroCALL spring festival based around uses of AI for language teaching
An upcoming webinar by Future Teacher, Getting digitally savvy, on 29th June. Register here.
Lots of conversations, which is fab. I think there is a lack of understanding, however, of the use of the word "Open" when it comes to AI - something this group could usefully comment on, maybe?
I was looking more for examples of policies and guidelines organizations are creating/sharing regarding the slippery field we are navigating (or that's navigating us).
Quite ironic, well sadly/sickly ironic, open washing indeed. I’ve made a number of sarcastic comments, but that’s my usual NI language.
Okay, this is a relevant resource that came my way via Mastodon… Lance Eaton has collected a Google Doc of classroom policies for AI use, showing ways instructors are modifying their syllabi to address the impending challenge.
Thank you, Alan, for mentioning my post. If a critical review of the Russell Group principles on AI sounds a bit dry for a Friday read, there are cat pictures.
Helen
This working paper discusses the risks and benefits of generative AI for teachers and students in writing, literature, and language programs and makes principle-driven recommendations for how educators, administrators, and policy makers can work together to develop ethical, mission-driven policies and support broad development of critical AI literacy.
We invite you to comment on the working paper to help inform the task force's ongoing activities. We intend this to be the first in a series of working papers developed as the task force receives feedback and identifies priorities in the coming year.
Also worth noting is a post from Selena Deckelmann, Chief Product and Technology Officer at the Wikimedia Foundation, on principles for the use of generative AI content. If you do not know, Wikipedia is among the largest sources of training data for large language models.
Selena opens with a provocative question:
If there was a generative artificial intelligence system that could, on its own, write all the information contained in Wikipedia, would it be the same as Wikipedia today?
Still keeping my eyes and ears open here. I can see my colleague Bryan Alexander is doing the same; he has put out a call on his blog asking, for campuses where classes start in a few weeks, what policies/guidelines are being developed:
as well as a related article in his new Academics and AI Substack
In case anyone has not yet read it, there is an excellent Blueprint for an AI Bill of Rights for Education from Katie Conrad, published as an early release from the new journal Critical AI. Lots to interest OpenEd people:
The practices in this guide for all users of generative AI in Canadian federal institutions seem very reasonable, perhaps as practice for more of us (?)
Review generated content to ensure that it aligns with GC commitments, values and ethics and meets legal obligations. This includes assessing for biases or stereotypical associations.
Formulate prompts to generate content that provides holistic perspectives and minimizes biases.
Strive to understand the data that was used to train the tool, for example, where it came from, what it includes, and how it was selected and prepared.
Learn about bias, diversity, inclusion, anti-racism, and values and ethics to improve your ability to identify biased or discriminatory content.
Notify recipients when content has been produced by generative AI.
And again, for those tracking AI policy, it looks like the OECD policy observatory is a key place to understand the space on a global scale. I'd still be curious for people from the countries listed to comment on (a) how current the dashboard is and (b) how widely known these policies are.