Education

Trump Cracks Down Against Explicit AI Images. What It Means for Schools

May 20, 2025

President Donald Trump today signed into law a bill that criminalizes the creation and sharing of non-consensual intimate imagery online and gives social media platforms 48 hours to remove such videos or images once flagged by victims. The new law, which includes artificial intelligence-generated “deepfakes,” could give schools more leverage to deal with a growing—and still relatively unknown—challenge.

As Trump signed what’s known as the Take It Down Act in the White House Rose Garden, he was flanked by first lady Melania Trump; Republican Sen. Ted Cruz, who co-sponsored the bill; and teenage girls who have been targets of nonconsensual, sexually explicit deepfake images. Also present was the family of South Carolina Sen. Brandon Duffy, who lost his son to suicide after the teen was targeted in an online extortion scam.

“We will not tolerate online sexual exploitation, and especially it’s gone on at levels that nobody’s ever seen before,” Trump said. “It’s getting worse and worse, and I think this is going to hopefully stop it and now easy to do.”

The Take It Down Act is the first federal law to include criminal penalties for creating and posting AI-generated deepfakes, as well as for threatening to post intimate images without consent. Both the creators of such images and those who “intentionally threaten” to create them face up to three years in jail if the offense involves a minor, and up to two years if it involves an adult.

The law empowers the Federal Trade Commission to hold social media platforms accountable for removing such images.

The law is a rare piece of bipartisan collaboration in politically divisive times. It was co-sponsored by Cruz and Democratic Sen. Amy Klobuchar, who shepherded the bill through the Senate, where it passed unanimously, and the House, where it passed 409-2. The bill also had first lady Melania Trump’s vocal support, which was key to getting it over the line in the House vote, Klobuchar said in a video posted on her Facebook page.

Speaking before the president, the first lady also acknowledged the contribution and advocacy of the young women who were victimized by these images. She said social media and AI are “digital candy” for the next generation, but that these new technologies “can be weaponized, shape beliefs, affect emotions, and sadly, can even be deadly.”

“Signature on this law is not where our work ends on this issue,” Melania Trump said. “Now, we look to the Federal Trade Commission and the private sector to do their part.”

Klobuchar and Cruz first introduced the bill in 2024 and focused much of their advocacy on teenage girls who had appeared, against their will, in AI-generated pornographic images. In almost half-a-dozen cases that popped up in schools nationwide, boys as young as 14 had created and shared these images via popular social media sites like Snapchat.


Educators are acutely aware of the dangers that deepfakes can pose to their students. In a nationally representative survey of teachers, principals, and district leaders done by the EdWeek Research Center in September 2024, 35 percent said they were “somewhat concerned” that students would use AI to generate deepfakes of their teachers or peers, while 16 percent said they were “very concerned.”

Easy-to-use and freely available “nudify” apps have made it simple to generate fake explicit images, and schools have struggled to curb the misuse of AI or to develop adequate policies addressing the harm caused by these images and videos.

Victims and their families speak out about the harms of deepfakes

Nationally, 1 in 4 K-12 students know someone who has been depicted in a nonconsensual and sexually explicit deepfake, according to a 2024 student survey by the Center for Democracy and Technology, a nonprofit that advocates for an individual’s rights online.

Victims and their families threw their weight behind the new federal law in the hope that it will give schools a framework for dealing with such incidents.

“We are so happy and proud” of the law’s passage, said Dorota Mani, a parent advocate. In 2023, her then 14-year-old daughter Francesca was part of a group of teen girls depicted in fake pornographic images made by their male classmates at Westfield High School in Westfield, N.J. Both Dorota and Francesca were present at the White House when Trump signed the bill.

At the time of the incident, Mani said the school had called the images, which were shared on social media, a case of “mass misinformation,” and publicly identified the victims, without identifying the boys who created the images. While one male student was eventually suspended for his actions, Francesca and the other victims still attend school with the boys who shared their deepfakes, Mani said.

Speaking to Education Week in January, Mani said the school had “failed us” in responding to the incident and protecting the victims.

Now, Mani believes the law will give schools a new impetus to create or tailor policies on the ethical use of AI.

“The prevention, the education, it all has to start in schools,” Mani said.

Educators and advocates debate the role of penalties vs. prevention

In addition to the Take It Down Act, Mani said she supported a New Jersey bill criminalizing the “production and dissemination” of deepfakes, which was passed by the state assembly on April 2 and signed into law.

The bill’s passage adds New Jersey to a growing list of more than 30 states that have criminalized the creation and possession of AI-generated imagery with the faces of real adults or children, according to data collected by the consumer advocacy group Public Citizen.


These state-level laws, buoyed now by the federal law, are a “great legislative move to make victims feel supported,” said Jason Alleman, the principal of Laguna Beach High School in Laguna Beach, Calif., where, in March 2024, at least one male student had used AI to generate inappropriate images of his female classmates.

But Alleman places as much importance on teaching students about the appropriate use of AI as he does on discipline for those who generate deepfakes.

“We still want to be reflective … supportive of both victims and the students that make these potentially life-changing decisions,” Alleman said. “As a site leader, there is an obligation to make sure that students and their families are supported on both sides of this issue.”

For some researchers who’ve studied the growth of harmful online social behavior, the new federal law doesn’t focus enough on prevention, or the protection of victims.

“Where schools are taking the most action is to punish the perpetrator, and that is certainly the focus of the Take It Down Act. If anything, it will double down and reinforce actions that schools were already taking, which is to really focus on the perpetrator,” said Elizabeth Laird, the director of equity in civic technology at the Center for Democracy and Technology.

There is also “no robust research” on whether the threat of harsher punishment—up to two or three years in jail—will deter students from generating deepfakes, she said.

“We found in our research that kids are most often doing this to each other,” said Kristin Woelfel, a policy counsel on Laird’s team at the center. She’s concerned the federal law could criminalize minors for their actions and potentially funnel them into the justice system.

Alleman, too, would rather focus on preventing these incidents from happening at his high school than double down on a zero-tolerance policy.


While a set of guidelines is helpful, especially when it comes to constantly evolving technology like AI, Alleman said schools should have a process that both addresses the needs of the victims and counsels the creators. Restorative practices could prevent them from becoming repeat offenders or facing harsh legal consequences, he said.

Since the incident at Laguna Beach High School in 2024, the school’s acceptable use policy on technology has expanded. Without specifically naming AI, the policy “highlights another level of responsibility” for students to make sure that they’re not manipulating or recrafting any images.


Alleman has also asked his teachers to discuss how AI can be used responsibly in classrooms. The school’s librarian holds discussions with students about their social media presence and the content they post, though none of these trainings directly address the harm caused by deepfake images.

When technology moves this quickly, protecting victims is key

The Take It Down Act will compel social media companies to take down objectionable content from their platforms and attempt to also remove any digital copies. But that’s dependent on the FTC’s enforcement, a detail that could make the act’s implementation “tricky,” said Andrew Buher, the founder and managing director of Opportunity Labs, a nonprofit research and advocacy firm.

“There’s an unfortunate question in the air right now on whether the FTC decides to really enforce this law with a lot of the big social media platforms that are close to the administration,” Buher added.

A law is a good starting point, he said, but preventing damage from deepfakes requires changing the broader culture. Consistent messaging from adults to children that such behavior is unacceptable is critical, said Buher, and schools should have conversations about the damage caused by these fake images and videos.

Laird and Woelfel recommend that schools have a process in place to support victims of deepfakes. Only 5 percent of teachers said their schools provide resources to victims and their families when a deepfake is reported, according to the Center for Democracy and Technology survey.

Fifty-six percent of teachers surveyed by the EdWeek Research Center said they had not received any training on AI-generated deepfakes, while 25 percent said the training they had received was “poor.”

Teachers should know who to reach out to if they come across a deepfake depicting students, said Woelfel. Students, too, should be able to confide in faculty who are trained to respond with sensitivity and who won’t, for instance, interrogate them, she said.

Questions like “Did you send them the photo?” or “Is this real?” should be avoided because they can make the victim feel uncomfortable or wish they hadn’t reported the deepfake, Woelfel added.

For Alleman, the full school leadership team needs to be trained and sensitive to the fallout from deepfakes. While the principal is ultimately responsible, Alleman said a social-emotional support team should counsel the victim and their families, and also potentially counsel the students who created deepfakes.

Said Alleman: “What I wouldn’t want from my colleagues or anyone else [in leadership positions] is to minimize the work that we do with victims and those that choose to make people victims [of deepfakes] because we have new and exciting legislation.”

