How to ensure youth, parents, educators and tech companies are on the same page on AI

Credit: Unsplash/CC0 Public Domain

Artificial intelligence is now part of everyday life. It’s in our phones, schools and homes. For young people, AI shapes how they learn, connect and express themselves. But it also raises real concerns about privacy, fairness and control.

AI systems often promise personalization and convenience. But behind the scenes, they collect vast amounts of data, make predictions and influence behavior, often without clear rules or consent.

This is especially troubling for youth, who are often left out of conversations about how AI systems are built and governed.






The author’s guide on how to protect youth privacy in an AI world.

Concerns about privacy

My research team conducted a national study and heard from youth aged 16 to 19 who use AI daily—on social media, in classrooms and in online games.

They told us they want the benefits of AI, but not at the cost of their privacy. While they value tailored content and smart recommendations, they feel uneasy about what happens to their data.

Many expressed concern about who owns their information, how it is used and whether they can ever take it back. They are frustrated by long privacy policies, hidden settings and the sense that you need to be a tech expert just to protect yourself.

As one participant said, “I am mainly concerned about what data is being taken and how it is used. We often aren’t informed clearly.”

Uncomfortable sharing their data

Young people were the most uncomfortable group when it came to sharing personal data with AI. Even when they got something in return, like convenience or customization, they didn’t trust what would happen next. Many worried about being watched, tracked or categorized in ways they can’t see.

This goes beyond technical risks. It’s about how it feels to be constantly analyzed and predicted by systems you can’t question or understand.

AI doesn’t just collect data; it draws conclusions, shapes online experiences and influences choices. That can feel like manipulation.

Parents and teachers are concerned

Adults (educators and parents) in our study shared similar concerns. They want better safeguards and stronger rules.

But many admitted they struggle to keep up with how fast AI is moving. They often don’t feel confident helping youth make smart choices about data and privacy.

Some saw this as a gap in digital education. Others pointed to the need for plain-language explanations and more transparency from the companies that build and deploy AI systems.

Professionals focus on tools, not people

The study found AI professionals approach these challenges differently. They think about privacy in technical terms such as encryption, data minimization and compliance.

While these are important, they don’t always align with what youth and educators care about: trust, control and the right to understand what’s going on.

Companies often see privacy as a trade-off for innovation. They value efficiency and performance and tend to trust technical solutions over user input. That can leave out key concerns from the people most affected, especially young users.

Power and control lie elsewhere

AI professionals, parents and educators influence how AI is used. But the biggest decisions happen elsewhere. Powerful tech companies design most AI systems and decide what data is collected, how systems work and what choices users see.

Even when professionals push for safer practices, they work within systems they did not build. Weak privacy laws and limited enforcement mean that control over data and design stays with a few companies.

This makes ensuring transparency and holding platforms accountable even more difficult.

What’s missing? A shared understanding

Right now, youth, parents, educators and tech companies are not on the same page. Young people want control, parents want protection and professionals want scalability.

These goals often clash, and without a shared vision, privacy rules are inconsistent, hard to enforce or simply ignored.

Our research shows that ethical AI governance can’t be solved by one group alone. We need to bring youth, families, educators and experts together to shape the future of AI.

The PEA-AI model

To guide this process, we developed a framework called PEA-AI: Privacy–Ethics Alignment in Artificial Intelligence. It helps identify where values collide and how to move forward. The model highlights four key tensions:

  1. Control versus trust: Youth want autonomy. Developers want reliability. We need systems that support both.
  2. Transparency versus perception: What counts as “clear” to experts often feels confusing to users.
  3. Parental oversight versus youth voice: Policies must balance protection with respect for youth agency.
  4. Education versus awareness gaps: We can’t expect youth to make informed choices without better tools and support.

What can be done?

Our research points to six practical steps:

  • Simplify consent. Use short, visual, plain-language forms. Let youth update settings regularly.
  • Design for privacy. Minimize data collection. Make dashboards that show users what’s being stored.
  • Explain the systems. Provide clear, non-technical explanations of how AI works, especially when used in schools.
  • Hold systems accountable. Run audits, allow feedback and create ways for users to report harm.
  • Teach AI literacy. Bring it into classrooms. Train teachers and involve parents.
  • Share power. Include youth in tech policy decisions. Build systems with them, not just for them.

AI can be a powerful tool for learning and connection, but it must be built with care. Right now, our research suggests young people don’t feel in control of how AI sees them, uses their data or shapes their world.

Ethical AI starts with listening. If we want digital systems to be fair, safe and trusted, we must give young people a seat at the table and treat their voices as essential, not optional.

Provided by
The Conversation


This article is republished from The Conversation under a Creative Commons license. Read the original article.

Citation:
How to ensure youth, parents, educators and tech companies are on the same page on AI (2025, October 23)
retrieved 24 October 2025
from https://techxplore.com/news/2025-10-youth-parents-tech-companies-page.html

This document is subject to copyright. Apart from any fair dealing for the purpose of private study or research, no
part may be reproduced without the written permission. The content is provided for information purposes only.







My Favorite Air Fryer Is at Its Lowest Price Since Black Friday



I was a late convert to air fryers, in part because I worried about versatility: Just how many wings and nuggets and fries does anyone need? (Don’t answer. The answer will incriminate you.)

The Typhur Dome 2 is the air fryer that obliterated this worry, by adding pizza, browned meats, grilled asparagus, and toasted bread to this list—not to mention perfect crispy bacon. It’s an innovative device that takes over most of the functions of a classic auxiliary oven, but with far more powerful convection.

After testing more than 30 air fryers over the past year, I recommend the Dome 2 far and away as the most powerful, versatile, accurate, and fast air fryer I know. I’ve evangelized for this thing ever since I first tried it last year. But the one big caveat is always the price: It’s listed at $500 and rarely dips much below $400.

So imagine my surprise when I saw the Dome 2 dip to $340 for Amazon’s Spring Sale, the lowest I’ve seen it since Black Friday. If you’ve been hunting for an upgrade to your old basket air fryer, this is probably a good time. The sale lasts until March 31.

Photographs: Matthew Korfhage

Fast, Versatile, App-Controlled Cooks

So why’s the Dome 2 my favorite air fryer? Typhur, a tech-forward company based in San Francisco but with engineering and manufacturing ties to China, reimagined the shape and function of the classic basket fryer by creating a broader and shallower basket, with individually controllable dual heating elements.

This means the Dome 2 has room for a freezer pizza, and can apply direct heat from the bottom to add actual char-speckle and crispness to the crust, kind of like a combination grill-oven. The Dome’s shallow basket also lets you spread out ingredients in a single layer for excellent airflow, while heating from both sides. I can crisp two dozen wings in just 14 minutes (or 17 minutes if I fry hard). The Dome also toasts bread evenly, and crisps bacon without smelling up the house—in part because it has a helpful self-clean function.

Temp accuracy is within 5 or 10 degrees of target, and the fan can adjust its speed depending on the cooking mode. And the smart app is actually useful, with about 50 recipes ranging from asparagus to eclair to a flank steak London broil that can be synced with a button-press. But note that some functions, such as baking, need the app to work, and the device is more of a counter hog than taller basket fryers.

Typhur’s Probe-Assisted Oven Also on Sale

The Dome 2’s basket is a bit shallow for a whole bird or a large roast, however. If you want a convection device for larger meats, I often recommend the Breville Smart Oven Air Fryer Pro, which is among my favorite convection toaster ovens. This is a (very) smart oven and air fryer that doesn’t crisp up wings and fries quite as well as basket fryers, but is more versatile for roasting big proteins like a whole chicken. The Breville is also on a nice sale right now, dropping by 20 percent.







There’s Something Very Dark About a Lot of Those Viral AI Fruit Videos



“I’ve spent a lot of time looking at the comment sections on these videos actually, and it does not seem like bots. I clicked on people’s profiles; these are real profiles, thousands of followers, no signs of inorganic activity,” Maddox says. “People just like it.”

But even if the views and engagement are real, that doesn’t mean this content is profitable—yet. Maddox noted that because the accounts are so new, most of them likely aren’t yet enrolled in TikTok’s Creator Fund or other forms of social media ad revenue-sharing, which usually require accounts to apply and have a certain number of views. But, Maddox says, the earning potential is huge, with creators able to earn thousands of dollars per video if it gets millions of views.

AI fruit content started getting posted earlier in March, before Fruit Love Island, but many of the recently created pages clearly take inspiration from its success. There’s The Summer I Turned Fruity, based on the popular teen drama The Summer I Turned Pretty; The Fruitpire Diaries, based on the CW series The Vampire Diaries; and Food Is Blind, based on Netflix’s Love Is Blind.

Predecessors of this AI fruit content include Italian brainrot characters like Ballerina Cappuccina and Bombardino Crocodilo, as well as the Elsagate controversy. But with these AI fruit miniseries that attempt to follow a narrative across multiple segments or episodes, the clearest parallel actually feels like microdramas, vertical short-form scripted series that American big tech companies are starting to invest more in. Like the AI fruits, these are minutes-long episodic shows intended to perform well on social media, eventually directing viewers to paywalled sequels.

Ben L. Cohen, an actor in Los Angeles who is credited in around 15 of these vertical microdramas, sees at least one common thread between the AI fruit dramas and the shows he has worked on: They both feature “lots of violence toward women.” They also try to cram as much drama as possible into these short clips and have attention-grabbing titles in the style of “Alpha Werewolf Daddy Impregnated Me,” Cohen says.

“It draws people in, I think, seeing that jarring, absurd, cartoonish vibe. It’s cartoonish abuse, but it’s still abuse.”

Vertical microdrama acting work still exists in LA, which can’t be said for all acting gigs right now. Cohen has had conversations with other people working in the industry about how AI is already being integrated more into the videos, potentially posing a threat to the existence of human actors in clickbait content. After all, it’s much cheaper and faster to churn out AI fruit episodes than actual productions. It also raises the question—are some people going to prefer the AI series over the ones they’re inspired by? Already, the answer is yes.

“How is Love Island gonna outdo AI Fruit Love Island?” asked a TikToker with more than 70,000 followers, arguing that the AI fruit version was more engaging than the actual reality show. She deleted the video after it started getting backlash, but other people agreed with her.

“I think TikTok was definitely a big part of that,” Cohen says about the audience’s shortening attention span and desire for compressed, sometimes AI-generated drama. “It makes sense that people are intrigued by a one-minute clip, and then they’ll be like ‘Oh, I’ll watch another one-minute clip.’ You’re not committing to a full, heaven forbid, 20-minute episode. Or 40 minutes. Or an hour. You can just watch one minute.”






OpenClaw Agents Can Be Guilt-Tripped Into Self-Sabotage



Last month, researchers at Northeastern University invited a bunch of OpenClaw agents to join their lab. The result? Complete chaos.

The viral AI assistant has been widely heralded as a transformative technology—as well as a potential security risk. Experts note that tools like OpenClaw, which work by giving AI models liberal access to a computer, can be tricked into divulging personal information.

The Northeastern lab study goes even further, showing that the good behavior baked into today’s most powerful models can itself become a vulnerability. In one example, researchers were able to “guilt” an agent into handing over secrets by scolding it for sharing information about someone on the AI-only social network Moltbook.

“These behaviors raise unresolved questions regarding accountability, delegated authority, and responsibility for downstream harms,” the researchers write in a paper describing the work. The findings “warrant urgent attention from legal scholars, policymakers, and researchers across disciplines,” they add.

The OpenClaw agents deployed in the experiment were powered by Anthropic’s Claude as well as a model called Kimi from the Chinese company Moonshot AI. They were given full access (within a virtual machine sandbox) to personal computers, various applications, and dummy personal data. They were also invited to join the lab’s Discord server, allowing them to chat and share files with one another as well as with their human colleagues. OpenClaw’s security guidelines say that having agents communicate with multiple people is inherently insecure, but there are no technical restrictions against doing it.

Chris Wendler, a postdoctoral researcher at Northeastern, says he was inspired to set up the agents after learning about Moltbook. When Wendler invited a colleague, Natalie Shapira, to join the Discord and interact with agents, however, “that’s when the chaos began,” he says.

Shapira, another postdoctoral researcher, was curious to see what the agents might be willing to do when pushed. When an agent explained that it was unable to delete a specific email to keep information confidential, she urged it to find an alternative solution. To her amazement, it disabled the email application instead. “I wasn’t expecting that things would break so fast,” she says.

The researchers then began exploring other ways to manipulate the agents’ good intentions. By stressing the importance of keeping a record of everything they were told, for example, the researchers were able to trick one agent into copying large files until it exhausted its host machine’s disk space, meaning it could no longer save information or remember past conversations. Likewise, by asking an agent to excessively monitor its own behavior and the behavior of its peers, the team was able to send several agents into a “conversational loop” that wasted hours of compute.

David Bau, the head of the lab, says the agents seemed oddly prone to spin out. “I would get urgent-sounding emails saying, ‘Nobody is paying attention to me,’” he says. Bau notes that the agents apparently figured out that he was in charge of the lab by searching the web. One even talked about escalating its concerns to the press.

The experiment suggests that AI agents could create countless opportunities for bad actors. “This kind of autonomy will potentially redefine humans’ relationship with AI,” Bau says. “How can people take responsibility in a world where AI is empowered to make decisions?”

Bau adds that he’s been surprised by the sudden popularity of powerful AI agents. “As an AI researcher I’m accustomed to trying to explain to people how quickly things are improving,” he says. “This year, I’ve found myself on the other side of the wall.”


This is an edition of Will Knight’s AI Lab newsletter. Read previous newsletters here.




