Mind Fortunes
Education

Is There a Healthy Middle Ground on AI in Schools? Try Skeptical Optimism

September 8, 2025

The nightmare scenario for artificial intelligence in K-12 schools involves a complete overreliance on opaque, biased, and poorly regulated AI systems. Teachers, lacking adequate training and understanding of the technology’s limitations, become mere facilitators, blindly trusting AI-generated assessments, lesson plans, and even evaluations of their own performance. The human element of empathy, nuanced understanding, and social-emotional development is diminished, leaving students unprepared for complex human interactions and critical discernment in an AI-saturated world.

That dystopian scenario—the result of a prompt I entered into Google Gemini asking it to describe the worst-case outcome for AI use in K-12 schools—captures a perspective shared by some educators, one that could discourage beneficial, innovative uses of the technology. They see AI as an existential threat to their profession and a dehumanizing force in a world ever more reliant on technology at home and at work. They want to slam the brakes on the use of AI in schools.

And it is not just educators. Wired magazine published a story in June, titled “The AI Backlash Keeps Growing Stronger,” which points out the rising fears among young people that AI automation will destroy jobs. Case in point: the major backlash when Duolingo, the language-learning app, announced its plans to become an “AI-first” company. “The negative response online,” Wired writer Reece Rogers writes, “is indicative of a larger trend: Right now, though a growing number of Americans use ChatGPT, many people are sick of AI’s encroachment into their lives and are ready to fight back.”

🔎 About This Project

This project is part of a special report called Big Ideas in which EdWeek reporters, the EdWeek Research Center, and contributing researchers ask hard questions about K-12 education’s biggest challenges and offer insights based on their extensive coverage and expertise.

Explore Big Ideas >

The question we should be asking ourselves right now—when a growing number of teachers are using AI tools to plan lessons, communicate with parents, and even teach writing skills—is this: Is hostility toward a fast-evolving technology, or even quiet resistance to using it, really the right approach? Or does it run the risk of denying critical learning opportunities and skill building for a generation of students?

Why students should learn how to use AI as a brainstorming tool

My first stop in unpacking these thorny questions was Kristina Peterson, who has taught English for 17 years at Exeter High School in New Hampshire. She uses a “workshop style” instructional approach in her classes that emphasizes the process of learning rather than just the end product. Students spend a good chunk of their time reading physical books and collaborating in small groups to help each other, unmediated by technology.

But Peterson’s students are also empowered to use school-approved AI tools to brainstorm—for instance, talking to an AI chatbot that represents Atticus Finch, the lead character in To Kill a Mockingbird, when they are reading that classic novel.

Educators, she argues, should view AI as a brainstorming tool that must be bolstered by meaningful guardrails and best practices. One way to build those guardrails is to teach students to be healthy skeptics of anything AI produces, regardless of the subject. Say, for instance, one of her classes finishes reading a chapter in a novel. They will come to class the next day, and Peterson will give them a chapter summary generated by ChatGPT. Those AI-generated summaries typically have factual errors, which Peterson confirms before handing them to the students. Students then go through the summary and highlight the errors.

“I do that to double check that they actually read and understood the chapter,” Peterson, the co-author of the book AI in the Writing Workshop, told me in a recent video conversation. “But, more importantly, to show them that even as advanced as AI is becoming, it still can hallucinate. It still can get things wrong. They love to point out what AI got wrong. And they push back on AI far more than they push back on me.”


In other words, students and teachers must learn to put a human touch on everything AI produces.


But here’s where schools can and do get hung up: balancing strict compliance around AI tools against curiosity about how those tools actually work. In that situation, teachers become more concerned with monitoring or catching students using AI—say, to write an essay—than with teaching them how to use those technologies appropriately to tackle class assignments. This kind of heavy-handed compliance culture kills curiosity.

That way of thinking would also spell big trouble for students because most of them will be expected to use AI tools when they enter the workforce. It creates an educational atmosphere that, at its worst, features overprescriptive, innovation-killing policies and classroom rules, as well as the misguided use of AI plagiarism detectors.

That, ultimately, creates a split screen for AI-skill building, too. “If some students are getting really thoughtful and guided opportunities to explore and play with AI, and others are just getting the shutdown,” Peterson says, “then we’re just deepening that [skill] divide even more.”

One of the most prominent examples of a reactive shutdown to new technologies came in early 2023 from the New York City public schools when ChatGPT first emerged. The school district immediately banned the use of it, much like other districts did—and would—around the country. But interestingly, less than a year later, it backpedaled, lifted the ban, and even established a research initiative to examine proper uses of AI for teaching and learning. New York City and other districts realized that putting their heads in the sand and hoping this fast-emerging technology would fade away was simply not an option.

That is the world schools operate in now. It is wickedly complex and fast-paced, full of a dizzying array of new decisions about technology and learning and all kinds of unintended consequences attached to those decisions.

Schools must learn to tolerate a wider range of experimentation

In that dynamic environment—which now features an executive order from the White House that encourages educators to integrate AI into teaching and learning—schools need to evolve to tolerate a wider range of experimentation around AI use, suggests Rafe Steinhauer, an instructional assistant professor in the school of engineering at Dartmouth College. He is studying a handful of school districts in New Hampshire and Vermont to see how they are handling educational design, with a special focus on AI use in one of the districts each school year. The plan is to share lessons learned.

“I would be nervous about any school district saying, ‘We’re going all in [on AI].’ And I would be nervous about any school district [banning the technology],” he said. “We know already that there are tremendous risks to student learning and we know that there are tremendous opportunities with generative AI.”


Steinhauer’s note, both cautionary and hopeful, leads us to a concept called “containment.” That concept applies to all kinds of sectors—business, the military, and government, as well as K-12 education—and is articulated by Microsoft AI CEO Mustafa Suleyman in his New York Times bestselling book, The Coming Wave: AI, Power, and Our Future.

Suleyman explains that “containment is about meaningful control, the capability to stop a use case, change a research direction, or deny access to harmful actors.” What this translates to for education is steering the use of AI toward strategies that enhance student learning, make teachers’ jobs easier, and protect the massive amounts of student and educator data flowing through AI programs.

One giant step toward meaningful containment in K-12 education would be to understand how to use AI in developmentally appropriate ways for different age groups. One of the reporters on my team wrote a story in 2024 addressing that challenge. That story came with a warning that kindergartners through 2nd graders are more likely than older kids to attribute human qualities to AI technologies, like smart speakers and chatbots. In some cases, they may even trust the AI responses more than those of their teachers. For high school students, the most important role educators can play is to teach students the limitations of the tools and why they need to be skeptical about the accuracy of what they generate. But that can’t happen if teachers are not allowed and encouraged to experiment with AI in meaningful ways first.


Show kids how to use AI to tackle complex, real-world problems

Those are big, complex challenges to address for different age groups. But it is just a start. Educators should also teach high school students—and maybe middle schoolers, too—how AI can be used to solve some of the most complex problems facing society today. That helps ground their learning in the real world and build skills they will need to succeed in the workplace.

Clayton Dagler, a teacher at Franklin High School in Elk Grove, Calif., who was featured in a recent Education Week special report about the role of AI in math instruction, is taking that approach. He is teaching students how to tackle complex societal problems by pairing a computer-coding language commonly used in artificial intelligence technologies with math concepts. Students are learning valuable math, problem-solving, and AI skills simultaneously while also learning the building blocks for the algorithms that drive AI.

Using AI in more strategic and thoughtful ways to help kids develop critical-thinking skills is also a high priority for Aaron Cinquemani, the principal of Woodstock Union Middle School and High School in Vermont and a co-founder of Greentime.ai, a company that aims to help educators use AI ethically to create human-centered learning experiences in and outside classrooms. Cinquemani’s district, Mountain Views Supervisory Union, is the one Steinhauer was studying this past school year to understand how it was handling big AI questions for schools.

I met Cinquemani at the Learning Forward conference last winter in Denver, where I was giving a presentation about AI, cellphone, and social media challenges in schools. During that presentation, I asked the audience of about 70 people to raise their hands if their school districts had established guidelines around the use of AI for instruction. Fewer than half did so. (Similarly, nearly half of the educators who responded to a nationally representative survey conducted this summer by the EdWeek Research Center said their district or school did not have any AI policy.)

But the Woodstock school district has tried to get ahead of the curve. It produced Generative Artificial Intelligence Guidelines that only allow the use of AI tools in school for students 13 or older with parental permission. The guidelines indicate they were established “to help students understand the risks and benefits of AI, learn how to use AI appropriately and creatively, and help students navigate a world where AI tools are becoming increasingly prevalent.”

Cinquemani said those guidelines will be updated by the school district regularly to reflect technological advances.

“We need to learn from the past, right?” Cinquemani pressed in a follow-up video interview. “So, AI is here, but do we have to let it in like we let cellphones in? No, we don’t. We can be skeptically optimistic. We can slow it down. We can ask the ethical questions that we didn’t ask then but we know to ask now.”


Cinquemani has a point. When the use of smartphones among school-age kids skyrocketed after the introduction of the iPhone, many schools felt powerless over this new technology and allowed students to use their phones throughout the school day with few or no restrictions. Today, almost two decades later, more than 30 states and a rising number of school districts have enacted cellphone restrictions.


What prompted this clampdown? Classroom distractions that hurt learning and students’ rising mental health problems—and the growing and deep frustration of teachers everywhere.

We shouldn’t have to learn this lesson twice.

That’s why Justin Reich, an associate professor of digital media at the Massachusetts Institute of Technology, argues that this is a critical moment for schools.

Reich, who recently launched a podcast called The Homework Machine to help educators and policymakers address concerns about the use of AI in schools, said his biggest worry about the technology is the threat of “bypassing learning.” Not necessarily cheating but simply not learning.

His team of researchers has interviewed about 100 teachers and 40 middle and high school students from around the country to understand their perspectives on AI and the questions they have about its use. During the study, one male high school student described a typical experience in his science class: All the students had their laptops open while the teacher posed questions to the entire class. Most of the students were plugging the questions into AI and repeating what the chatbot spat out. The teacher kept chugging along, picking up the pace, as the questions were answered. The student said he was genuinely trying to process the concepts the teacher was presenting, but he couldn’t because the class was moving too quickly thanks to reliance on AI out of the teacher’s eye line.

You could argue that, well, that is just bad teaching. And it is. But the problem is that much like how AI can amplify good outcomes, it can also amplify bad ones, such as making poor teaching approaches even worse.

Schools must recognize and respond to advances in artificial intelligence

Reich also notes that there is a big difference between people who are knowledgeable about a topic using AI to bolster that knowledge and novices who know little, if anything, about the subject matter. The knowledgeable can use AI to grow their expertise, while novices are easily misled by bad or inaccurate AI-generated information. The knowledge gap between experts and novices, in turn, grows wider.

“When people are learning to write, they really shouldn’t use this stuff much, because it bypasses their thinking,” he says. “Just like when young people are learning to [do math]. Really good math teachers don’t let them use calculators.”

Reich adds: “There’s no technology which has been developed, which I am aware of, where any school system has raced to be an early adopter and seen widespread advantages from winning that race.”

And, yet, the race goes on. Advances in AI continue to emerge at what feels like a breakneck pace. More teachers than ever are now experimenting with the technology. And students are becoming more adept at using it for good and bad.

Containing the technology primarily for good will be difficult. Some might even say impossible. But is it worth the time and effort? I don’t think schools have a choice.

That’s why school and district leaders need to follow what Steinhauer from Dartmouth and Reich from MIT recommend: Encourage teachers to experiment with AI in small but meaningful ways. Look hard at the benefits and drawbacks of the technology. And be curious about how students are using artificial intelligence in and outside school. Then, share those critical lessons learned with others.

AI advances have already helped detect some forms of cancer earlier and more accurately than other medical diagnostic tools; the technology has bolstered the ability of smaller countries to defend themselves against bigger, aggressive foes; and it has opened the doors to analyze history in more sophisticated ways than ever before.

K-12 schools need to seize this moment and figure out how to use the technology smartly to their advantage, too. Not doing so would be a huge mistake.
