I think generative AI is unavoidable, not inevitable. The former speaks to the reality of our moment, while the latter addresses the hype used to market the promise of the technology—a sales pitch and little else. Faculty and students have to contend with generative technology in our world as it is now, not as it is promised to be. That should be our focus.—Mark Watkins over at his Rhetorica Substack
Shopify CEO Tobi Lütke made waves on Monday morning, preemptively sharing an internal memo that was about to be leaked, writing, “Reflexive AI usage is now a baseline expectation at Shopify.”1 In the piece, Lütke argues that embracing and mastering AI tools is no longer optional but a fundamental requirement for accelerating innovation, increasing productivity, and sustaining competitive advantage at Shopify. Education is not e-commerce, so please miss me with the straw man you’re crafting in your head. But Lütke’s stance resonates beyond e-commerce because it highlights how rapidly the baseline expectations around technology and productivity are shifting—expectations that inevitably ripple into education, whether we're ready or not. Inevitable? Not yet. But even more unavoidable than a few weeks ago.
Which brings me back to Mark’s point: we need to reckon with generative technology as it actually exists, not just how it’s branded. One thread running through his work—and it’s something I share—is that the most productive response to generative AI isn’t blanket resistance or blind adoption, but deliberate engagement.2
That’s harder than it sounds. “Our world as it is now” is hard to pin down when the terrain is constantly shifting. But it’s not impossible. I am heartened by examples that have begun to land in the educational ecosphere, but a recent Twitter/X post from University of Pennsylvania Math and Engineering professor Robert Ghrist really caught my attention. The piece, ‘AI-Integrated Learning,’ demonstrates an educator actively engaged with the tools at his disposal. Ghrist is in many ways a “power user” of AI:
He crafted his own textbook, directing the content himself while writing it with Anthropic’s Claude.
He built a custom GPT in OpenAI’s ChatGPT to serve as a 24/7 learning assistant for his students.
He audio records his classes and encourages students to upload the audio files into NotebookLM.
He gives his students difficult homework assignments, co-authored with Claude and derived from the textbook, and encourages students to use all tools at their disposal (including AI) to complete the problems.
He is unbothered by the potential for cheating, given the difficulty of reliable detection methods, describing the homework as a kind of ‘Marshmallow Test’: leaning on shortcuts now could come back to bite students on formal in-class assessments later in the semester.
(Don’t) Take the Wheel From Me
On the ‘People I Mostly Admire’ podcast, economist David Autor describes the future not as a prediction problem, but as one of design. Design, as Autor suggests, places agency squarely in our hands. It means we’re not passively forecasting outcomes but actively shaping how tools like generative AI integrate into our educational practices. Ghrist’s methods exemplify this mindset brilliantly. He isn’t waiting for the technology to mature into something neatly packaged and ready-made for classroom use—he's proactively molding it to meet the nuanced demands of teaching and learning. Which is why I want to spend a portion of this post describing what I’ve started to do, and what I will do more of in the classroom in the future.3

There’s a long-running joke amongst many history teachers that AP U.S. History is basically just AP Textbook Reading. You need to plow through lots of content, much of it mired in heavy textbooks. When I was a boarding school teacher, history textbooks often served as valuable door stops for students in the dorm. They were bricks: more paperweight than vessel of knowledge, artifacts rather than tools for genuine learning. I think there’s an alternative pathway, one more emblematic of what Ghrist describes in his piece. That pathway looks a bit like this:
I was an early adopter of The American Yawp—a “Massively Collaborative Open U.S. History Textbook.” It also happens to be primarily digital, although you can acquire paper copies at this point. That it’s digital is more of a feature than a bug in this instance.4 This is the textbook I would use.
I’ve long been a proponent of social annotation, favoring Hypothesis as a means to extend classroom conversations into the text itself. Some of the more profound intellectual contributions from students have happened in these annotations. As Stanford’s Matthew Rascoff eloquently says, “School is learning embedded in social experience.” Annotation in this regard allows us to extend both the learning and the social experience.
There are various ways one might do this, but like Ghrist I would build some kind of 24/7 learning assistant to support students while reading. I would often provide guided reading questions and key terms to identify in The American Yawp via Hypothesis. Tools like Flint, an “all-in-one AI personalized learning platform built for schools,” make this relatively easy. You can replicate the GPT model Ghrist describes, while having the added benefit of making student thinking visible in the platform, allowing you to tailor instruction and catch underlying issues before they spread.
I too would encourage students to use NotebookLM, albeit slightly differently. Whereas he encourages them to upload lectures, I would instead encourage students to upload class notes, annotations, and questions. Additionally, I would encourage them to upload deeper reading assignments—like journal articles or unwieldy primary sources—so that NotebookLM can serve as a kind of collaborative reading partner to assist in understanding.
I’ve always called homework assignments “Learning Opportunities” because that’s what they are. Students who consistently engaged in the homework performed at a higher level. The model I’ve described above encourages more active participation and engagement, beyond traditional assignments from years past. Further, given the additional access to resources, I think we can expect more from students when it comes to their learning.
Like Ghrist, I’m unbothered by the potential for “cheating.” I put that in quotes because I don’t even see it as cheating. And I want them to have knowledge easily at their command, so in-class assessment wouldn’t look much different than it does already.5
Is the scenario I described above a perfect solution? Of course not. But in reality I am uncertain what perfect would even look like. I do believe this kind of active engagement does more than just leverage AI’s capabilities; it fundamentally transforms the roles of students and teachers alike. Students shift from passive recipients of information to active curators and collaborators in their educational journeys. Teachers, meanwhile, take on the role of designers, thoughtfully crafting learning experiences that incorporate these powerful new tools.
Yet this vision of deliberate engagement doesn’t come easily. It demands time, intention, and a willingness to experiment openly—and to occasionally fail publicly. This is my first attempt at thinking openly about how I will design moving forward. By modeling this specific mindset, a willingness to engage in the messiness of innovation, I hope to embody the very experimentation and curiosity we seek to instill in our students.
In education, we often speak about preparing students for a rapidly evolving world. But perhaps what we really mean is teaching students—and ourselves—how to thrive amid uncertainty. The terrain beneath our feet will always shift, and tools will continually evolve. Rather than fearing that instability, perhaps the most powerful response is simply to become adept at navigating it. That means adopting Autor’s principle of design: consciously and creatively shaping our interaction with AI, rather than being shaped by it.
Inevitable? No. Unavoidable? Absolutely. And perhaps more importantly—shapable.
Note: I always pull blog titles from song titles or lyrics. It’s a thing. It’s fine. Just go with it. I named the blog ‘The Academic DJ’ after all. Today’s title comes from Iron Chic’s ‘My Best Friend (Is a Nihilist),’ because it does feel a bit like driving a runaway hearse. And if I stop, I fear I’ll just make things worse. I'll leave it to your imagination to figure out who—or what—is riding in the back.
Lütke’s message landed on Twitter/X on the morning of Monday, April 7, 2025.
I should be transparent here and share that Mark served as a ‘Virtual Visiting Scholar on AI’ at my school last year. I am a fan of his work. I think he blends a critical, skeptical approach with appropriate pragmatism. I should also note I likely fall further on the “I am in favor of this technology” spectrum than Mark does. That’s okay. As Mark points out further down in the piece, moral outrage serves no useful purpose. I’ve appreciated learning with and from him and will continue to do so for the foreseeable future.
Next week I’ll share a post describing all of the ways I use AI in my own work, both in the classroom and leading a PK-12 center for teaching and learning. The bulk of it will be a kind of running diary from a chosen day, transparently sharing all of the ways these tools have been integrated into my day-to-day responsibilities. It will likely annoy many people. That’s okay.
I’ve written about this before. I do not share the qualms about reading on a screen versus reading in print. I know the research suggests otherwise. But I think we’re assessing the wrong thing in this regard.
I’m saving an entirely separate post for research and writing assignments outside of class. I do think the disruptions here are profound and more difficult to design for. With that said, I think it’s doable and more reflective of the reality of these tools being so readily available. You can use a hammer to build things. Or you can use a hammer to destroy things. Demonstrating the value of the hammer in the positive sense is tricky—particularly for high school students so pressured by the landscape of grades.
I really like the idea that we can persuade and shape how students view AI. I'm really concerned that so many faculty don't even want to go that route. Students look to educators for guidance on how AI is used or misused. Greeting that with silence is one of the worst missed learning opportunities we have.
Neat post. I agree wholeheartedly with this and would love to experiment more. Ghrist is a college professor, though. Would you recommend the same for HS students? I think that gets a little trickier, but as Marc says, silence and indifference cannot be the right choice.