Finding Value in the Impending Tsunami of Generated Content
The generative “AI” hype cycle has been at its peak for the past month or so, and it follows completely predictable tech patterns. Hypers tout all the amazing, miraculous things that will be possible; doubters wonder aloud whether these things will fall short of their utopian promises (because these things always fall short of their utopian promises); and most of the obvious consequences and outcomes get overlooked.
One such obvious consequence is that there are tidal waves of bullshittery about to hit our shores. (This first wave is a minor high tide compared to what is coming....) Reconstituted text, images, video, audio, avatars and fake people are pretty much guaranteed across a wide variety of areas, a landscape where education is only one small province. We won't be able to tell real from fake or, perhaps more troubling, I don't think we'll care so long as it scratches the right itch or feeds the right need.
The question across those domains will be whether we value authenticity. For things like boilerplate email, sales copy, code, and a wealth of other activities, I think the answer will be that authenticity doesn't matter that much. But that's where education is different. Authenticity should matter, not because of the habitual exercise of needing to assign grades to work that was not plagiarized or copied or whatever other vice one can ascribe, but because without authenticity there is no learning. Faking it is great for getting to the ends. But education is about the means; ends (tests, essays, etc.) have always been imperfect proxies. Beyond the authenticity of student work, we have a very familiar issue of how students, or learners generally, know what kinds of information to trust. While the bulk of attention thus far has been on the nature of the emerging generative “AI” toolkit and the back and forth between fearing cheating vs. fostering creativity with such tools, the real impact will be felt indirectly, in the proliferation of “knowledge” generated by and mediated through generative AI tools. It is the old Wikipedia debate, but supercharged with hitherto unthought of levels of efficacious bullshittery.
Ten years ago, amid the proliferation of data, there were clarion calls for academic knowledge fields to invest in more curation. For example, http://www.digitalhumanities.org/dhq/vol/7/2/000163/000163.html is one of many such calls for increased digital curation of data. The variety of startups applying generative “AI” to learning or, more broadly, to varieties of search and summarization, tend to promote the message that curation is not necessary. (Just google “sequoia generative ai market map” or similar; https://www.sequoiacap.com/article/generative-ai-a-creative-new-world/.) Or, rather, the question of curation has perhaps not entered into thought. Automagically, search or summarization or chatbots built on generative AI will latch on to the most relevant things for your individual query. Consumerism is a given, such that the only question is how the system can serve up results to a consuming user. LLMs have thus far been gaining ground by hoovering up ever more data. That makes them garbage collectors, even with careful controls to make sure that bias is minimized and good data is optimized. Optimistically, one might imagine that these technologies could allow for curation to happen at a different stage: at the building of the model, or in fine-tuning the model for particular use cases. Or the context provided by the consumer becomes a sort of after-the-fact filter on the massive amounts of knowledge. But that is a very light veneer of the kind of knowledge curation that separates the wheat from the chaff, that ensures that what's being served up isn't utter bullshit that sounds close enough.
There are two levels of authenticity, then, to keep an eye on. The surface one is with students themselves and the process of learning. Are the people being authentic? Then there's the second, at the level of knowledge curation. Is that curation authentic and legit? I suspect on both scores it will require direct and focused effort to foster them amidst the readily available misinformation. For LLMs in particular, we are looking now at an exacerbated version of Wikipedia's bias problem. If something is statistically weighted as more likely but expertly verified to be wrong or misleading, how do those concerns get balanced? It is not merely that generative “AI” can produce different outcomes given the same inputs; it's that there is not necessarily a clear account of why those two different ideas are held in mind at the same time.
Undoubtedly, such issues will be smoothed over, and it will all be more nuanced as these technologies develop and are deployed. The early days of autocomplete were rife with inaccuracies, bias, and garbage. And now we treat it like any other tool. Some may ignore it, but most simply use it when convenient and don't think twice about the biases or thought patterns it subtly instills. Generative “AI” will be no different. It will soon become another layer of bullshit which is sometimes useful, often ignored, and just one more thing to take account of when negotiating the authenticity of learners and the reliability of knowledge.
This is all to say that the tool hasn't changed the essential question. Do we actually value authenticity in the learning process? Do we care about not just the verifiability of knowledge through citation (which, incidentally, Google seems to be focusing on in their response to OpenAI, among others) but about that thing formerly known as “truth”, at least as an asymptotic goal if not reality?
It's going to be messy. Truth-y enough will be good enough for many. And many structures in education are already transactional to such an extent that authenticity is a pesky anti-pattern, a minor detail to be managed rather than a central feature of the learning experience.
In more optimistic moments I wonder whether the value of generative “AI” can lie not in its products but in the opportunity it creates to further dialogue. If we keep our focus on fostering authenticity in students and authenticity in knowledge, then it can be a useful tool for first drafts of knowledge. If we let it become the final word, then I fear we will simply be awash in a smooth-talking version of the internet's detritus.