Neil Mosley (recently rated by Phil and me as one of the top people we would like to have a beer with) posted the thought above. I agree with Neil, but the more I thought about it, the more it seemed that responses to AI are less indicative of the current state of HE and more typical of the common pathologies of EdTech discourse. It's a subtle difference but an important one. These pathologies are holding us back from making positive progress in applying EdTech and, unless addressed, will also hinder our use of generative AI.
There are five of these pathologies that I believe are most relevant to this topic.
[Image generated by Phil using Midjourney]
Preoccupation with Trendiness

Whenever a new tool appears, there is a tendency for EdTech discourse to be dominated by that tool to the exclusion of all else. Right now it's generative AI; a while back it was mixed reality. There was a time when you couldn't open an Educause Review article without being persuaded that analytics would change higher ed as we know it, and so on. To some extent this makes sense: there is a reason the hype cycle is compelling, and hype is a natural reaction to shiny new objects full of promise. The problem is that this sort of attention deficit is not good for progress in EdTech or for better student outcomes. Early results in EdTech are seldom the best ones; we need sustained attention, experimentation, and refinement to reap the benefit of a particular tool or approach. The trendiness factor often undermines that sustained attention.
Exaggerated Results

This refers to the tendency to overstate the impact of a tool or process. We often see claims that a particular technology does this or that when, in reality, the impact is more pedestrian. The infamous claim about an adaptive learning system being "robot tutors in the sky" is a classic case. Adaptive learning is pretty cool, but it's not magic, and it has a long way to go. Why embellish, particularly when exaggeration may lead to disillusionment and abandonment of the tool? We are seeing some exaggerated claims about generative AI, and this is not good for the long-term progress and use of these tools.
Technology Solutionism

This is an endemic problem in EdTech. Both vendors and users tend to think of a tool as being the solution to a problem. Sometimes the tool can be part of a solution, sometimes it can't, but it is always only a part of the solution. There are always changes and processes and people and sustained effort required to make a tool useful in improving student outcomes, and the technology solutionism we indulge in prevents us from seeing that. The place I have seen this most is in student success and analytics: buy this system and your worries about student retention and completion are over! In reality, as staff at Georgia State University will tell you, technology is useful and data is useful, but it is sustained attention to a lot of small things over a long period of time that actually moves the needle. I am seeing tech solutionism all over the talk about generative AI in EdTech: it will transform and personalize learning on the one hand, or it will mean the end of universities on the other. Technology solutionism means that expectations of both success and failure are too high and unrealistic, and implementations of tools chosen on that basis will almost inevitably fail. We need to start talking about generative AI as part of a larger ecosystem, or as an approach that requires a lot of other things to happen; otherwise, we are dooming ourselves to disillusionment.
Learning as the Only Approach to Change

Too much of the conversation around the value of EdTech embeds an assumption that every technology must directly impact the teaching and learning process, with efficacy as the metric. While improvement in this area is important, focusing only on student learning and not valuing anything that isn't learning-related is a problem, and one I see often in EdTech discourse. Applications of technology that are more quotidian or administrative in nature are dismissed. The mission of higher ed is of course learning. But it often takes a while for faculty and teachers to figure out how to apply a new technology in pedagogically productive ways, and they often get there via more humdrum uses of the tech. Two old but good examples: I used to snort derisively at faculty podcasting until Alan Wolf persuaded me that it was a way for them to explore technology and a precursor to more creative uses, the thin end of the wedge so to speak. Similarly, a faculty member I interviewed for the 2003 ECAR study on LMSs argued that the LMS didn't improve learning, but by serving as a repository for syllabi, notes, and grades, it took student questions about those items off his plate so he could spend his limited time in a 600-student biology class explaining concepts and helping students learn. We ought not to dismiss the non-learning applications of generative AI, because that is exactly where the best learning uses are likely to emerge.
Moral Panic

This is a widespread, often irrational fear that a practice or technology threatens the value and integrity of a process or institution. Since the general release of ChatGPT, we have seen almost wall-to-wall hand-wringing about how generative AI is the death knell of academic integrity and will enable cheating on steroids. There have been multiple calls for it to be banned in schools and on college campuses, and some institutions have followed through on those calls. The problem with moral panics like these is that they all too often lead to intrusive efforts to counter the perceived risk of one tool with another tool. Students can cheat too easily thanks to the Internet or online learning! Let's run everything they write through plagiarism detection tools or have online proctoring watch their every move! We see the same now with generative AI, with many companies rushing to release (what seem to be unreliable) tools to detect generative AI. Rather than doing that, we need to understand the way that generative AI may finally push us into a long-needed rethink of what and how we teach, and especially how we assess learning. I do feel a tad guilty saying that; I understand that it's a giant task. My saying it reminds me of my favorite lightbulb joke: how many sociologists does it take to change a lightbulb? None, because it's not the bulb, it's the system that needs changing.
Generative AI poses some unique challenges to education, but it also offers some amazing opportunities. How we talk about something shapes how we respond to it and, ultimately, how we can shape and work with it. I believe that too many EdTech approaches and products have failed to live up to their promise because of these five habits of thought and speech. Let's change that and have better conversations about generative AI.
Title: The Five Pathologies of EdTech Discourse About Generative AI
URL: https://philhillaa.com/onedtech/the-five-pathologies-of-edtech-discourse-about-generative-ai/
Source: Phil Hill & Associates
Date: May 6, 2023 at 07:35AM