Skepticism of generative LLMs' (aka 'AI') net value to humanity

I’ve been awash in articles, posts, interviews, in-depth analyses, and podcasts across all sorts of media, breathless about ‘AI’ (really just generative Large Language Models; ‘AI’ is a fairly meaningless marketing term) this and ‘AI’ that. I’m more of a technologist than an educator, and most of my social circles tend to be weighted towards the former, despite my being employed as an educational technologist.

There’s been a noticeable disparity between the enthusiasm for ‘AI’ I’ve seen in educator-dominated spaces (e.g. OEGlobal) compared to technologist-dominated spaces (e.g. the Fediverse): educators seem extremely excited and positive about generative LLMs. Technologists, on the other hand, many of whom understand, quite deeply, what these LLMs are (and what they aren’t) are substantially less enthusiastic.

Yes, they’re impressed up to a point, but they’re also very cognisant of the wild misunderstandings in the broader media sphere about what these systems are (and aren’t). They also understand a) the motivations of the people pushing the ‘AI’ hype, namely the big tech folks investing heavily in it, and b) the huge costs, both financial (torching capital) and environmental (draining aquifers and power grids, creating heat bubbles), that this process is responsible for.

I’m curious how many folks here are also noticing the breathless dominance of ‘AI’-related discussion in the educational media, mostly painting it as positive and of great potential and/or characterising it as inevitable… Are any of you starting to have misgivings about it? Perhaps suspecting that there’s more than a little hype involved? Regardless, you may well find this fairly comprehensive ‘reality check’ by Ed Zitron on the ‘AI’ hype to be either gratifying or confronting (beware: there’s a bit of ‘frank’ language, if you’re easily offended). But I think it needs to be considered. What do you all think?

2 Likes

I certainly have noticed this unbridled enthusiasm for “AI” among some of my colleagues, along with misunderstandings about what the “stuff” actually is. I totally agree with your assessment: we need to be mindful of what it is and carefully analyse the “hype” being built around it, including the risks to the environment and the fact that Aotearoa does not have a framework for Sovereign AI (another area that is perhaps out of scope for this discussion).
I think there is a space for wider education and discussions about the scope and limits of generative and agentic AI for our educators here.

1 Like

As an educator who actively experiments with generative LLMs in my teaching and research, I can very much relate to the contrast you’ve observed between technologists and educators.

From the classroom side, I think the enthusiasm is less about the technology itself and more about its affordances. LLMs have opened up practical, tangible opportunities to scaffold learning, personalize feedback, and support students’ digital literacy. For many educators who have long struggled with large classes, limited resources, and diverse learner needs, this feels like a breakthrough tool.

At the same time, I share the concerns you’ve raised. Working closely with students, I see the misconceptions about what LLMs are: many learners (and even some faculty) believe these systems “think” or “know,” when in reality they’re sophisticated pattern-matchers with real limitations. That’s why part of my role has become teaching critical AI literacy: helping students and colleagues understand how to use these tools responsibly, with an awareness of their biases, blind spots, and the environmental and ethical costs you noted.

I don’t think educators are blind to the hype, but many are hopeful because we’re looking at LLMs through the lens of pedagogical problems we’ve been tackling for years. The danger, of course, is swinging too far into the “inevitability” narrative without acknowledging the structural, financial, and ecological trade-offs.

For me, the middle ground is using LLMs as a catalyst for deeper conversations about teaching, learning, and the role of technology in education. They’re neither saviors nor villains, just powerful tools that need careful framing and critical pedagogy.

I’d be curious: how do we get educators and technologists into the same room more often? It feels like the enthusiasm and the skepticism both have something vital to contribute to a more balanced perspective.

3 Likes

Bring on some skepticism, Dave. As an FYI, Dave mentioned a shorter version of this to me on Mastodon, suggesting that links I have shared in OEG Connect were “boosting” AI.

Because I have known and respected Dave a long while, I invited him to stir some things up. I don’t see it as quite fair to characterise sharing links about AI as being on the side of hyping it, nor is there any official statement on AI by this organization. What is the basis for that suggestion?

The posts here tagged ai, going back to 2020 (most but not all mine), are not breathlessly hyping AI; they just aim to share what seems interesting. My friend @poritzj has been eager to bring more critical perspectives here and into our other online events.

I am eager to have some strong views and opinions here, and welcome some kicking up the dust.

I don’t think my personal blogging and the views I’ve expressed in many venues put me on the hype side. I quite loathe what I see as much as you do, Dave. At the same time, I have the privilege of not being in the position of many on the ground at universities and colleges, who are on the line to explain and bring understanding to faculty and students while often being pressured from higher levels to embrace AI.

Please bring on some good contentious opinions, but let’s not paint a line for taking sides. And for a really strong position, see someone who is a self-stated AI Hater.

1 Like

Heh, yeah, I read that article from Anthony Moser earlier today, too. It was an interesting position to take, to be sure, and I understand his perspective (and sympathise). And apologies, @cogdog, if I tarred you unfairly as a ‘booster’ of ‘AI’ (I’ll always use quotes around ‘AI’ because it’s a loaded and misleading term). I think the thing we should be investigating quite intensively is the source of that ‘pressure from on high’ in academia to embrace ‘AI’…

Yes, I think @Bayor’s identified some tantalising opportunities which generative LLMs offer, although I wonder at the net benefit, when the costs of those systems are taken into account. I think that’s where the real debate lies… but regarding the pressure from ‘on high’, I think it’s a combination of big tech lobbying (cronyism - friends of mine from uni are big wheels in Silicon Valley now too) and ignorance among academic leaders applying influence outside of their areas of expertise.

The bottom line is this: big tech is on a countdown (before investors lose faith) to make the world dependent on gen LLMs, to the point where they become something that normal professionals and academics can no longer live without (because their skills have atrophied or were never developed, just as our ability to remember phone numbers or do mental arithmetic has faded since the advent of smartphones and calculators, respectively)… at which point the price of access will cease to be zero and will climb, quite quickly.

I think that skeptical, technically competent commentators have been doing a fairly good job of debunking the advantages of gen LLMs asserted by their invested marketers.

1 Like