Leslie found the following texts helpful for maintaining broad currency with AI culture. The resources are listed in reverse chronological order (newest first) to ease updating, which will happen at least quarterly. Enjoy!
Resources (* = podcast)
This batch of news discusses a flurry of new CA laws regulating AI, the new educational disruptions of AI agents, and more.
Mills, A. (19 Oct. 2025). "The Time to Reckon with AI Agents in Digital Learning Spaces Is Now." Anna Mills' Substack. Substack. Accessed 22 Oct. 2025.
-
Mills reviews the educational dangers of AI agents, which can enter Canvas to complete students' assignments (see the short video), and then issues a call to action: ask AI companies to block their agents from entering any LMS, and ask institutional IT departments to block AI agents from entering their portals or LMSs.
Watkins, M. (17 Oct. 2025). "An Open Letter to PerplexityAI: Absolutely Don't Do This." Rhetorica. Substack. Accessed 22 Oct. 2025.
-
Watkins calls out PerplexityAI for shamelessly promoting their agentic AI to students for the purpose of cheating. Includes a short video demonstration.
*Roose, K. and Newton, C. (17 Oct. 2025). "California Regulates AI Companions + OpenAI Investigates Its Critics + The Hard Fork Review of Slop." [podcast] Hard Fork. New York Times Audio. Accessed 22 Oct. 2025.
-
The Hard Fork podcast hosts review the achievements and limitations of CA SB 243, interview a non-profit leader subpoenaed by OpenAI, and review some of the enormous volume of "slop" they've encountered recently.
Bellan, R. (13 Oct. 2025). "California Becomes First State to Regulate AI Companion Chatbots." TechCrunch.com. Accessed 22 Oct. 2025.
-
After several tragedies involving minors, Governor Newsom signed SB 243, which compels AI companies to add guardrails to their AI companion chatbots.
Legatt, A. (25 Sept. 2025). "Colleges and Schools Must Block and Ban Agentic AI Browsers Now. Here's Why." Forbes. Accessed 23 Oct. 2025.
-
Students can give agentic AI tools like Perplexity Comet their campus credentials, and the tools can then enter Canvas and complete their quizzes and assignments autonomously (see video). Legatt reports that under FERPA rules, "responsibility for [students' data safety] lies squarely [on colleges].... Because these tools inherit saved credentials and authenticated sessions, they can move laterally into connected systems—student accounts, billing platforms, even financial aid portals."
*Inoue, T. and Senk, S. (Oct. 2025). "Lost in Translation: Testing the Limits of AI Understanding." [podcast] My Robot Teacher Podcast. CAlearninglab.org. Accessed 22 Oct. 2025.
-
Funded by the California Education Learning Lab, Inoue and Senk (CSU Maritime) bring their STEM and Humanities perspectives, respectively, to bear on questions that intrigue, or plague, higher education faculty. This episode explores how LLMs are safety- and bias-tested and the limits of "localization" in LLM translations.
Watkins, M. (10 Oct. 2025). "The Dangers of Using AI to Grade." Rhetorica. Substack. Accessed 22 Oct. 2025.
-
After reviewing LLM companies' promotion of their tools as enabling more equitable grading, Watkins considers the labor pressures that drive students and faculty to AI, and argues that, "If education embraces AI to automate assessment of student learning, then we cede that last bit of traditional learning to corporate interests that can never be equitable, fully secured, or even vaguely transparent."
*Bowen, D. and Fleming, R. (9 Oct. 2025). "Humans in AI--Creativity, Wellbeing, and Technology in Education." [podcast] AI in Education Podcast. Accessed 22 Oct. 2025.
-
Bowen and Fleming interview Dr. Rebecca Marrone, Lecturer & Researcher at the University of South Australia, to discuss AI's impact on teacher and student well-being, creativity, and critical thinking.
Tauman Kalai, A., et al. (4 Sept. 2025). "Why Language Models Hallucinate." arXiv. Accessed 22 Oct. 2025.
-
Kalai and co-authors explain that LLMs hallucinate because they've been trained to guess rather than to admit uncertainty. They argue that training that doesn't "penalize" uncertainty could increase our trust in these tools.
-
The CFA summarizes an oversight hearing presentation. Scroll down to find 8 CFA demands regarding AI implementation at the CSU.