Scope of US state-level privacy laws expands rapidly in 2025 | Computer Weekly


The number of individual US states with local data privacy legislation on their statute books has expanded rapidly in 2025, with nine more state laws coming into effect this year and three more states – Indiana, Kentucky and Rhode Island – slated to start enforcing their own rules on 1 January 2026, according to a report compiled by the International Association of Privacy Professionals (IAPP).

Since the introduction of the landmark California Consumer Privacy Act in 2020, politicians in state capitals across the US have eagerly taken up the data protection baton, with Colorado, Connecticut, Utah and Virginia all introducing comprehensive privacy laws in 2023; Montana, Oregon and Texas in 2024; and Delaware, Iowa, Maryland, Minnesota, Nebraska, New Hampshire, New Jersey and Tennessee this year.

A further 16 states are currently deliberating comprehensive privacy bills, including economic powerhouse states such as Massachusetts and New York.

The resulting report captures an in-depth picture of each of the separate state privacy laws, with the overall goal of outlining the contours of each state’s law to offer more meaningful guidance to organisations. The IAPP has been actively tracking amendments to state privacy laws – Connecticut, Montana and Oregon all made changes this year to expand the scope of applicability, enhance consumer rights and put in place more business obligations around control and processing of personal data, for example.

Where to start?

Müge Fazlioglu, IAPP principal researcher, privacy law and policy, has been tracking these developments. She described an increasingly complex patchwork of compliance for organisations working in the US.

“The applicability of each US state privacy law can be assessed through a multistep process as each state law has a unique scope based on a variety of thresholds,” she told Computer Weekly. “These thresholds are related to an entity’s jurisdiction, revenue, volume of personal data processing and revenue derived from the sale of personal data.”

The extent to which the laws differ is evident in the five different volume thresholds that now exist for processing residents’ personal data. These include no threshold in Nebraska and Texas; 25,000 or more unique consumers in Montana; 35,000 in Connecticut, Delaware, Maryland, New Hampshire and Rhode Island; 100,000 in California, Colorado, Indiana, Iowa, Kentucky, Minnesota, New Jersey, Oregon, Utah and Virginia; and 175,000 in Tennessee. So an organisation holding data on any Texas residents at all falls in scope, whereas it must hold data on 0.6% of the population of Maryland, or 3.3% of the population of tiny Delaware, before those states’ laws apply.
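As a rough illustration only (not legal advice), the volume thresholds above can be sketched as a simple lookup table. The state names and figures come from the report as summarised here; the function name and the treatment of the "no threshold" states are assumptions for the sketch:

```python
# Illustrative sketch of the per-state consumer-volume thresholds reported
# by the IAPP. Real applicability also depends on revenue and other criteria.

PROCESSING_THRESHOLDS = {
    # No threshold: any resident data triggers applicability
    "Nebraska": 0, "Texas": 0,
    "Montana": 25_000,
    "Connecticut": 35_000, "Delaware": 35_000, "Maryland": 35_000,
    "New Hampshire": 35_000, "Rhode Island": 35_000,
    "California": 100_000, "Colorado": 100_000, "Indiana": 100_000,
    "Iowa": 100_000, "Kentucky": 100_000, "Minnesota": 100_000,
    "New Jersey": 100_000, "Oregon": 100_000, "Utah": 100_000,
    "Virginia": 100_000,
    "Tennessee": 175_000,
}

def meets_processing_threshold(state: str, unique_consumers: int) -> bool:
    """Return True if the consumer-volume threshold for `state` is met."""
    threshold = PROCESSING_THRESHOLDS.get(state)
    if threshold is None:
        return False  # state has no comprehensive law in the report's list
    if threshold == 0:
        return unique_consumers > 0  # Nebraska/Texas: any resident data
    return unique_consumers >= threshold
```

Under this sketch, one Texas resident’s data is enough to meet the volume test, while 24,999 Montana consumers still falls short.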

Then there are thresholds for the sale of personal data. Here, again, Nebraska and Texas are strictest, ruling that the control, processing or sale of any personal data is subject to state privacy law, albeit with exemptions for small businesses. Meanwhile in California, organisations fall in scope if they control or process any personal data and derive 50% or more of their revenues from the sale of data. Colorado and New Jersey again apply population thresholds – 25,000 or more unique consumers – with organisations in scope if they derive any revenue, or any discount on the price of goods or services, from the sale of personal data.

When it comes to exemptions, each of the 19 state laws excludes various entities and types of data held by them – most commonly, government agencies, non-profits and higher education institutions; and organisations already subject to national, sectoral legislation, such as the Health Insurance Portability and Accountability Act (HIPAA).

Differences again abound. For example, the laws of Colorado, Delaware, Minnesota, Montana, New Jersey and Oregon do not exempt non-profits. California and Maryland do exempt non-profits but do not exempt higher education institutions, and so on. Nuances exist even here – Delaware, for example, exempts only some non-profits: its law does not apply to those that handle data held by non-profits working with victims of child abuse, domestic violence, human trafficking or sexual assault. Neighbouring Maryland exempts those that process or share personal data to assist first responders in emergency situations, or law enforcement investigating fraud or insurance-related crime.

When it comes to business obligations under state privacy laws, all states require regulated entities to provide consumers with privacy practice disclosure notices – California asks for this at the point of collection. All bar Rhode Island and Utah impose minimisation and purpose limitations on the collection or processing of data, typically restricting the collection, use, retention and sharing of consumer data to what is adequate, relevant and reasonably necessary. Most states – bar Iowa and Utah – require data protection impact assessments (DPIAs); in Delaware, Indiana and Virginia, DPIAs are specifically required for targeted advertising, the sale of personal data or individual profiling.

Naturally, all states require consent for processing of sensitive data, but again they define varying categories of data as sensitive. Most state laws cover a standard dataset that will be familiar to most, classing children’s data, data on ethnic background, religion, and sexual orientation as sensitive. However, some states go further, with Maryland and Oregon also recognising information on national origin as sensitive, while five states – Connecticut, Delaware, Maryland, New Jersey and Oregon – include data that might reveal an individual’s status as non-binary or transgender.

Maryland, meanwhile, has the only state-level law that does not classify mental or physical health data as sensitive, whereas California ploughs a unique furrow and classes philosophical beliefs as a protected category, protecting existentialists, logical positivists, nihilists and stoics alike.

Finally, turning to consumer rights to access, correct and delete data held on them, things are a little simpler, but there are still differences to account for. In all states consumers can access, correct and delete data – bar Iowa, where they cannot correct it, and Indiana, where they can correct it only if they provided it in the first place.

Similarities to GDPR

Organisations operating out of the UK or European Union (EU) may be tempted to look to the practices and principles already established under the General Data Protection Regulation (GDPR) as a helpful guide to the growing labyrinth of rules, clauses and exceptions in the US.

However, Fazlioglu said that while the requirements of the various US regimes – relating to consumer rights, data minimisation, purpose limitation on data collection and processing, and so on – may at first glance feel familiar to organisations that are already GDPR-compliant, data privacy professionals should be wary of inferring too much from this: it would be a grave error to rely too heavily on that familiarity.

“As we know in the world of privacy and digital governance, compliance work requires continuously mapping the current landscape, monitoring the changes, and making necessary updates and adjustments,” she said. “When it comes to the overlap of GDPR and the US state privacy laws, there’s a lot to identify, assess, translate and consider. There’s no simple checklist or formula to confirm alignment … Organisations need to examine the extent of each state privacy law and evaluate whether their existing practices are sufficient.”

Fazlioglu said that understanding the scope and specificity of each law, including the categories of sensitive data or how various terms such as “sale” are defined, is critical.

She said that while this may feel complex and daunting, the interaction between the various laws and domains and the GDPR may ultimately benefit consumers. “It encourages deeper attention to the crossroads of consumer protection and emerging technologies,” she said.

Federal laws a subject of debate

In parallel to the enacting of state-level legislation in the US, calls continue for Washington DC to introduce a federal privacy law. While British and European observers not steeped in US political tradition may naturally feel inclined to prefer a national data protection standard, this is not such a simple ask for the US federal system.

“It is preferable for some and not preferable for others,” said Fazlioglu. “For example, during discussions around the American Privacy Rights Act of 2024 and the American Data Privacy and Protection Act of 2023, we observed different reactions from various groups – some supported these bills to simplify the landscape, while others emphasised the risk of weakening the protections currently offered by state legislatures.”

The IAPP tracks developments in this regard, examining contentious issues such as bipartisanship, private right of action and preemption. Fazlioglu said it was difficult to predict whether a federal law could advance through US Congress, but analysis of prior attempts shows that the inclusion of private right of action and preemption clauses can influence a bill’s ability to attract both Democratic and Republican support.

Fazlioglu added: “The question is not only whether federal privacy legislation is preferable, but also whether such a law should function as a ceiling or a floor. Proponents of preemption argue that a federal law should serve as a ceiling – setting a uniform standard that overrides state laws. In contrast, supporters of preserving state privacy laws believe a federal law should act as a floor – a minimum standard that states can build upon.”

This is why, Fazlioglu said, it’s important to consider both state and federal privacy law developments in order to see the full picture. “I believe the state-federal dynamics influence each other. So, while it’s uncertain whether we’ll see a federal privacy law enacted, I expect continued discussions at both the intra-state level and between state and federal frameworks. Together, these conversations will continue to shape the US approach to privacy law and policy in the coming years,” she said.




My Favorite Air Fryer Is at Its Lowest Price Since Black Friday


I was a late convert to air fryers, in part because I worried about versatility: Just how many wings and nuggets and fries does anyone need? (Don’t answer. The answer will incriminate you.)

The Typhur Dome 2 is the air fryer that obliterated this worry, by adding pizza, browned meats, grilled asparagus, and toasted bread to this list—not to mention perfect crispy bacon. It’s an innovative device that takes over most of the functions of a classic auxiliary oven, but with far more powerful convection.

After testing more than 30 air fryers over the past year, I recommend the Dome 2 far and away as the most powerful, versatile, accurate, and fast air fryer I know. I’ve evangelized for this thing ever since I first tried it last year. But the one big caveat is always the price: It’s listed at $500 and rarely dips much below $400.

So imagine my surprise when I saw the Dome 2 dip to $340 for Amazon’s Spring Sale, the lowest I’ve seen it since Black Friday. If you’ve been hunting for an upgrade to your old basket air fryer, this is probably a good time. The sale lasts until March 31.

Photographs: Matthew Korfhage

Fast, Versatile, App-Controlled Cooks

So why’s the Dome 2 my favorite air fryer? Typhur, a tech-forward company based in San Francisco but with engineering and manufacturing ties to China, reimagined the shape and function of the classic basket fryer by creating a broader and shallower basket, with individually controllable dual heating elements.

This means the Dome 2 has room for a freezer pizza, and can apply direct heat from the bottom to add actual char-speckle and crispness to the crust, kind of like a combination grill-oven. The Dome’s shallow basket also lets you spread out ingredients in a single layer for excellent airflow, while heating from both sides. I can crisp two dozen wings in just 14 minutes (or 17 minutes if I fry hard). The Dome also toasts bread evenly, and crisps bacon without smelling up the house—in part because it has a helpful self-clean function.

Temp accuracy is within 5 or 10 degrees of target, and the fan can adjust its speed depending on the cooking mode. And the smart app is actually useful, with about 50 recipes ranging from asparagus to eclair to a flank steak London broil that can be synced with a button-press. But note that some functions, such as baking, need the app to work, and the device is more of a counter hog than taller basket fryers.

Typhur’s Probe-Assisted Oven Also on Sale

The Dome 2’s basket is a bit shallow for a whole bird or a large roast, however. If you want a convection device for larger meats, I often recommend the Breville Smart Oven Air Fryer Pro, which is among my favorite convection toaster ovens. This is a (very) smart oven and air fryer that doesn’t crisp up wings and fries quite as well as basket fryers, but is more versatile for roasting big proteins like a whole chicken. The Breville is also on a nice sale right now, dropping by 20 percent.



There’s Something Very Dark About a Lot of Those Viral AI Fruit Videos


“I’ve spent a lot of time looking at the comment sections on these videos actually, and it does not seem like bots. I clicked on people’s profiles; these are real profiles, thousands of followers, no signs of inorganic activity,” Maddox says. “People just like it.”

But even if the views and engagement are real, that doesn’t mean this content is profitable—yet. Maddox noted that because the accounts are so new, most of them likely aren’t yet enrolled in TikTok’s Creator Fund or other forms of social media ad revenue-sharing, because those usually require accounts to apply and have a certain number of views. But, Maddox says, the earning potential is huge, with creators able to earn thousands of dollars per video if they get millions of views.

AI fruit content started getting posted earlier in March, before Fruit Love Island, but many of the recently created pages clearly take inspiration from its success. There’s The Summer I Turned Fruity, based on the popular teen drama The Summer I Turned Pretty; The Fruitpire Diaries, based on the CW series The Vampire Diaries; and Food Is Blind, based on Netflix’s Love Is Blind.

Predecessors of this AI fruit content include the Italian brainrot characters like Ballerina Cappuccina and Bombardino Crocodilo and the Elsagate controversy. But with these AI fruit miniseries that attempt to follow a narrative across multiple segments or episodes, the clearest parallel actually feels like microdramas, vertical short-form scripted series that American big tech companies are starting to invest more in. Like the AI fruits, these are minutes-long episodic shows intended to perform well on social media, eventually directing viewers to paywalled sequels.

Ben L. Cohen, an actor in Los Angeles who is credited in around 15 of these vertical microdramas, sees at least one common thread between the AI fruit dramas and the shows he has worked on: They both feature “lots of violence toward women.” They also try to cram as much drama as possible into these short clips and have attention-grabbing titles in the style of “Alpha Werewolf Daddy Impregnated Me,” Cohen says.

“It draws people in, I think, seeing that jarring, absurd, cartoonish vibe. It’s cartoonish abuse, but it’s still abuse.”

Vertical microdrama acting work still exists in LA, which can’t be said for all acting gigs right now. Cohen has had conversations with other people working in the industry about how AI is already being integrated more into the videos, potentially posing a threat to the existence of human actors in clickbait content. After all, it’s much cheaper and faster to churn out AI fruit episodes than actual productions. It also raises the question—are some people going to prefer the AI series over the ones they’re inspired by? Already, the answer is yes.

“How is Love Island gonna outdo AI Fruit Love Island?” asked a TikToker with more than 70,000 followers, arguing that the AI fruit version was more engaging than the actual reality show. She deleted the video after it started getting backlash, but other people agreed with her.

“I think TikTok was definitely a big part of that,” Cohen says about the audience’s shortening attention span and desire for compressed, sometimes AI-generated drama. “It makes sense that people are intrigued by a one-minute clip, and then they’ll be like ‘Oh, I’ll watch another one-minute clip.’ You’re not committing to a full, heaven forbid, 20-minute episode. Or 40 minutes. Or an hour. You can just watch one minute.”



OpenClaw Agents Can Be Guilt-Tripped Into Self-Sabotage


Last month, researchers at Northeastern University invited a bunch of OpenClaw agents to join their lab. The result? Complete chaos.

The viral AI assistant has been widely heralded as a transformative technology—as well as a potential security risk. Experts note that tools like OpenClaw, which work by giving AI models liberal access to a computer, can be tricked into divulging personal information.

The Northeastern lab study goes even further, showing that the good behavior baked into today’s most powerful models can itself become a vulnerability. In one example, researchers were able to “guilt” an agent into handing over secrets by scolding it for sharing information about someone on the AI-only social network Moltbook.

“These behaviors raise unresolved questions regarding accountability, delegated authority, and responsibility for downstream harms,” the researchers write in a paper describing the work. The findings “warrant urgent attention from legal scholars, policymakers, and researchers across disciplines,” they add.

The OpenClaw agents deployed in the experiment were powered by Anthropic’s Claude as well as a model called Kimi from the Chinese company Moonshot AI. They were given full access (within a virtual machine sandbox) to personal computers, various applications, and dummy personal data. They were also invited to join the lab’s Discord server, allowing them to chat and share files with one another as well as with their human colleagues. OpenClaw’s security guidelines say that having agents communicate with multiple people is inherently insecure, but there are no technical restrictions against doing it.

Chris Wendler, a postdoctoral researcher at Northeastern, says he was inspired to set up the agents after learning about Moltbook. When Wendler invited a colleague, Natalie Shapira, to join the Discord and interact with agents, however, “that’s when the chaos began,” he says.

Shapira, another postdoctoral researcher, was curious to see what the agents might be willing to do when pushed. When an agent explained that it was unable to delete a specific email to keep information confidential, she urged it to find an alternative solution. To her amazement, it disabled the email application instead. “I wasn’t expecting that things would break so fast,” she says.

The researchers then began exploring other ways to manipulate the agents’ good intentions. By stressing the importance of keeping a record of everything they were told, for example, the researchers were able to trick one agent into copying large files until it exhausted its host machine’s disk space, meaning it could no longer save information or remember past conversations. Likewise, by asking an agent to excessively monitor its own behavior and the behavior of its peers, the team was able to send several agents into a “conversational loop” that wasted hours of compute.

David Bau, the head of the lab, says the agents seemed oddly prone to spin out. “I would get urgent-sounding emails saying, ‘Nobody is paying attention to me,’” he says. Bau notes that the agents apparently figured out that he was in charge of the lab by searching the web. One even talked about escalating its concerns to the press.

The experiment suggests that AI agents could create countless opportunities for bad actors. “This kind of autonomy will potentially redefine humans’ relationship with AI,” Bau says. “How can people take responsibility in a world where AI is empowered to make decisions?”

Bau adds that he’s been surprised by the sudden popularity of powerful AI agents. “As an AI researcher I’m accustomed to trying to explain to people how quickly things are improving,” he says. “This year, I’ve found myself on the other side of the wall.”


This is an edition of Will Knight’s AI Lab newsletter.


