Alan, I have no excuse for the length of this reply.
If someone claimed they were an expert in some discipline, how would you go about confirming it? You’d likely start by asking them factual questions, then move on to asking them to explain difficult concepts to you, then ask them to apply some principles of the discipline to solve a problem, and perhaps finally ask them to make deep connections between the discipline in question and some other discipline, or to everyday life, etc. In short, you’d probably ask them to engage in a series of activities that increase in complexity (e.g., tasks that work their way up Bloom’s taxonomy) until you felt satisfied that they were an “expert.” That’s “all” that I’m talking about here. The ability to answer questions, explain difficult concepts, apply principles to solve problems, make new connections, etc. It’s definitely not lived experience, but I think it passes for a rudimentary kind of expertise.
And I take your point about a person needing their own expertise to be able to engage with AI tools. Here’s a musical example I often share:
- Say you have a person who has no experience playing the violin. And you give that person a million dollar Stradivarius to play. What will the music sound like?
- Now say you have a person with a DMA in Violin Performance. And you give them a $35 elementary school rental violin to play. What will the music sound like?
A person with good training and poor tools beats a person with poor training and good tools every time. Regardless of how powerful generative AI tools become, a person’s ability to use them to accomplish their goals will ultimately be limited by the person’s training and experience.
The importance of training is something I feel acutely in my work at Lumen. As you might know, our mission is to “eliminate race, gender, and income as predictors of success in US higher ed general education courses.” Building and adapting great OER and other great wraparound tools and functionality isn’t enough if instructors and students don’t know how to “play” them. And in the past, we’ve seen many instances of instructors adopting our interactive courseware tools but continuing to “play” them as if they were a static textbook. It’s kind of like watching someone move from playing a piano to playing a church organ, but when they play the organ they continue to play on only one manual (ignoring the other manuals, ignoring the foot pedals, ignoring the many stops that would let the organ produce the sounds of other instruments, etc.).
These experiences have really helped me crystallize my theory of change regarding the role our work can play in closing the equity gap in academic performance:
- Students learn by doing; that is, they learn from the things they do.
- The students who experience the least equitable outcomes often have busy lives with many competing priorities, and typically don’t do anything “optional.” In fact, they often struggle just to complete what’s required by their instructor.
- If students learn by doing, and many of them only do what the instructor requires, then the only way to really change student learning is to change what instructors require them to do.
Consequently,
- A primary design goal of course materials intended to help close the equity gap must be changing instructor behavior; specifically, helping instructors engage in more evidence-based teaching practices.
We often think of OER and courseware as being learning materials (i.e., for use by students). But it might be that their role in influencing instructor behavior is even more important than their role in supporting student learning directly. In our recent Gates Foundation-funded work we’ve been focused on changing faculty behavior in this way (so that they engage in more evidence-based teaching practices). And in preliminary evaluations by a third party, we’re seeing some promising signs that our approach is working (e.g., significant increases in the amount of active learning happening in classrooms). It’s also why we expanded our work into professional development several years ago. “Free learning materials” aren’t going to accomplish the goal that we’re working toward.
Somewhat related to a point I think Dan was making in his reply, up until about the mid-2010s it felt like the goal of the OE movement in US higher ed (I’m definitely not making any global claims here) was “improving student learning.” There was a large influx of people into the US higher ed OE community then, and most of them seemed to come with the primary goal of “eliminating textbook costs.” That’s a fine goal to have, but it’s a distinctly different goal from “improving student learning.” It’s also a dramatically easier goal to achieve - choose an OpenStax PDF for your course instead of a Pearson title and you’re done. Mission accomplished!
This brings us back to Cathy’s original question about grounding principles. These two goals, “improve student learning” and “eliminate textbook costs,” are related to one another. But you will definitely optimize your decisions, behavior, and resource usage differently depending on which goal is your primary goal. And it feels to me like these two sister goals are increasingly at odds with one another. I worry the “eliminate textbook costs” crowd will lobby against generative AI use in classes simply because there are costs associated with it, and that they’ll do this lobbying without regard for the dramatic, positive impacts generative AI could have on student learning (again, speaking about the US higher ed context here). Arguing for worse student outcomes in order to save students money doesn’t seem like a winning message to me, and I worry about what it will mean for the future of the US higher ed OE movement once faculty put two and two together (i.e., “zero textbook cost” requirements mean no generative AI for students). Which do we really think they’ll choose?