It's popular among AI folks to think in terms of phases of AI, of which the current and most reachable target is likely “oracular AI”. Tools like ChatGPT are one manifestation of this: a form of question-and-answer system that can return answers that will soon seem superhuman in terms of breadth of content and flexibility of style. I suspect most educators don't think much about this framework of AI as oracle, but we should, because it explains a lot about the current hype cycle around large language models and can help us gain critical footing for where to go next.
Any new technology or tool, no matter how shiny its newness, can help students experiment with how technology mediates thought. I suspect that's the least problematic use of generative “AI” and large language models in the short term. One reason I think of this kind of activity as play or experimentation is that if you go much further with it, make it a habit, or take it for granted, then the whole enterprise becomes much more suspect. Most consumer-facing applications showing off large language models right now are variations on a human-in-the-loop system. (ChatGPT exposes a particularly frictionless experience for interacting with the underlying language model.)
A key question for any human-in-the-loop system is that of agency. Who's the architect and who's the cog? For education in particular, it might seem that treating a tool like ChatGPT as a catalyst for critical inquiry puts humans back in control. But I'm not sure that's the case. And I'm not sure it's always easy to tell the difference.
The overuse of the term “AI” to market technology products has been out of control for some time. Educational technologies are no different. More and more I've been seeing “AI” products in edtech that are little more than slick visualizations wrapped around basic arithmetic.
Things that are not, by themselves or by default, “AI”:
simple and obvious things expressed as percentages, e.g. percentage of students participating in some activity
any graph or visualization that is not a line or bar chart
huge dashboards full of numbers
huge dashboards full of numbers with fancy labels of the form “Engagement Score” or “[StupidTrademarkedName] Score™”
Make it stop. Seriously: machine learning, deep learning, and everything that might legitimately be called “AI” are interesting and awesome and powerful, problematic and potentially biased, and also full of possibility. All of that is worth talking about and fair game to market as some branch of artificial intelligence, but selling elementary math and week 1 of Intro to Statistics as AI is just ridiculous.