Less is more in technology and in education

New Year, New “AI”

My New Year's resolution: more writing. Because otherwise the bots win. Or, rather, otherwise the bots won't have enough fodder to generate ways for students to cheat? Not sure, but I think I need to practice writing like a human.

Apparently there's been a lot happening on the AI[^1] front that kind of got people talking these past few months. In predictable fashion, some teachers are stoked, others are freaked out, and most aren't quite sure what to do about OpenAI's big reveal that a massive language model can be coaxed to write a passably decent essay with little effort or significant know-how.

I've spent most of the past couple of years working in that language-AI space, including using GPT and the like. (Not coincidentally, I haven't written much here in that time.) My everyday paradoxical persona is that of a constant code-switcher between tech-iest tech practitioner and analog/minimalist tech evangelist. (I suppose that makes my take on these things something like Prof. Moody's perspective on life in general: constant vigilance.)

Most of the noise around ChatGPT in education starts from the wrong set of assumptions. The debate seems to center on whether to use it, how to use it, how to detect it, and whether it's a good thing at all.

Wrong focus. Assume that readily available AI can produce coherent text on demand on any subject, and that this text will be indistinguishable from what a student or any other person might hand in as part of a traditional writing exercise. Start from that assumption. Whether or not ChatGPT, or the next models from Meta or Google or Anthropic or any number of other players in this space, can do this today, the chances are high that producing coherent text will very soon be a trivial and widely accessible task.

Assume as well that, absent any significant legislation and despite the best and most noble attempts of OpenAI and other entities working on responsible AI, some version of generative AI without guardrails on usage will be available. Maybe it won't be as cutting-edge as the others, but it will work, primarily because the cost of training these models will come down and the path to training one's own models with yesterday's transformers will be laid out clearly enough for secondary players.

The most important question isn't what educators as individuals and education as an industry should do with today's technology. The important question is what to do now to plan for tomorrow's technology.

Today's technology puts you in wound-staunching mode if your assignments are ripe targets for ChatGPT-ification. That's the reactive mode of security: containing the damage while you buy time to implement more robust solutions. And whatever solutions are put in place in the next few years will quickly be rendered obsolete if we focus only on the current capabilities of these tools.

The more important focus is, as any security professional will tell you, on the longer term, on anticipating threats and heading them off as much as possible.

It happens that starting from an assumption like the one I laid out, where this technology can do everything you might think it can do, and do it well, is also a good way to return to a functional focus. What is the point of an assignment, of a class, of a curriculum? ChatGPT changes nothing about the daily urgency of those fundamental pedagogical questions. It just reshapes the playing field and levels up the kit.

For those worried about cheating with ChatGPT, or stuck on what to do about this potential assignment buster, the first step is the simplest. Forget the technology of today. Return to the fundamental question of what the point of any of this is. And then assume that the technology can do everything you might think it can do and more. Plan your path from there, not defensively or in the weeds of the arresting newness of the tools, but purposefully, in a landscape visible in a different light today than it was yesterday.

[^1]: Technically, of course, these are large language models, and the term “AI” is a bit generous... I have a particular pet peeve about the ever-expanding use of the term AI in popular parlance, and the labeling of language models as AI falls somewhere adjacent to that peeve.