How to make ‘smart city’ technologies behave ethically

Credit: Pixabay/CC0 Public Domain

As local governments adopt new technologies that automate many aspects of city services, there is an increased likelihood of tension between the ethics and expectations of citizens and the behavior of these “smart city” tools. Researchers are proposing an approach that will allow policymakers and technology developers to better align the values programmed into smart city technologies with the ethics of the people who will be interacting with them.

“Our work here lays out a blueprint for how we can both establish what an AI-driven technology’s values should be and actually program those values into the relevant AI systems,” says Veljko Dubljević, corresponding author of a paper on the work and Joseph D. Moore Distinguished Professor of Philosophy at North Carolina State University.

At issue are “smart city” technologies, a catch-all term that covers a variety of technological and administrative practices that have emerged in cities in recent decades. Examples include automated technologies that dispatch police when they detect possible gunfire, or technologies that use automated sensors to monitor pedestrian and auto traffic and adjust traffic signals accordingly.

“These technologies can pose significant ethical questions,” says Dubljević, who is part of the Science, Technology & Society program at NC State.

“For example, if an AI technology thinks it has detected a gunshot and sends a SWAT team to a place of business, but the noise was actually something else, is that reasonable?” Dubljević asks. “Who decides to what extent people should be tracked or surveilled by smart city technologies? Which behaviors should mark someone out as an individual who should be under escalated surveillance?

“These are reasonable questions, and at the moment there is no agreed-upon procedure for answering them. And there is definitely not a clear procedure for how we should train AI to answer these questions.”

To address this challenge, the researchers looked to something called the Agent Deed Consequence (ADC) model. The ADC model holds that people take three things into account when making a moral judgment: the agent, which is the character or intent of the person who is doing something; the deed, or what is being done; and the consequence, or the outcome that results from the deed.
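To make the three-factor structure concrete, here is a minimal, hypothetical sketch (not taken from the paper) of how an ADC-style judgment could be scored in code; the class name, numeric scores, equal weighting, and threshold below are illustrative assumptions only.

```python
from dataclasses import dataclass

# Hypothetical sketch of an ADC-style moral judgment.
# The factor names follow the model (Agent, Deed, Consequence); the scores,
# equal weighting, and threshold are illustrative assumptions, not values
# from the Shussett et al. paper.

@dataclass
class Situation:
    agent_valence: float        # intent/character of the actor, in [-1, 1]
    deed_valence: float         # moral quality of the action itself, in [-1, 1]
    consequence_valence: float  # outcome that results from the deed, in [-1, 1]

def adc_judgment(s: Situation, threshold: float = 0.0) -> str:
    """Combine the three ADC factors into a single verdict."""
    score = (s.agent_valence + s.deed_valence + s.consequence_valence) / 3
    return "acceptable" if score >= threshold else "unacceptable"

# Example: a well-intentioned actor, a neutral deed, a beneficial outcome.
print(adc_judgment(Situation(agent_valence=0.8, deed_valence=0.0, consequence_valence=0.6)))
```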

In their paper, published in Algorithms, the researchers demonstrate that the ADC model can not only capture how humans make value judgments and ethical decisions, but can do so in a way that can be programmed into an AI system. This is possible because the ADC model uses deontic logic, a form of logic concerned with obligation and permission.

“It allows us to capture not only what is true, but what should be done,” says Daniel Shussett, first author of the paper and a postdoctoral researcher at NC State. “This is important because it drives action, and can be used by an AI system to distinguish between legitimate and illegitimate orders or requests.”

“For example, if an AI system is tasked with managing traffic and an ambulance with flashing emergency lights approaches an intersection, this may be a signal to the AI that the ambulance should have priority, and it should alter traffic signals to help the ambulance travel quickly,” says Dubljević. “That would be a legitimate request. But if a random vehicle puts flashing lights on its roof in an attempt to get through traffic more quickly, that would be an illegitimate request and the AI should not give it a green light.

“With humans, it is possible to explain things in a way where people learn what should and shouldn’t be done, but that doesn’t work with computers. Instead, you have to be able to create a mathematical formula that represents the chain of reasoning. The ADC model allows us to create that formula.”
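To put the ambulance example in concrete terms, a minimal sketch of such a legitimacy check might look like the following; the vehicle registry, function names, and verification step are hypothetical assumptions, not details from the published model.

```python
# Hypothetical sketch of a legitimacy check for signal-priority requests.
# "is_registered_emergency_vehicle" stands in for whatever verification a real
# deployment would use (e.g., transponders or dispatch records); it is assumed here.

REGISTERED_EMERGENCY_VEHICLES = {"AMB-042", "FIRE-007"}

def is_registered_emergency_vehicle(vehicle_id: str) -> bool:
    return vehicle_id in REGISTERED_EMERGENCY_VEHICLES

def grant_signal_priority(vehicle_id: str, lights_flashing: bool) -> bool:
    """Grant a green light only for legitimate requests.

    Legitimate: a registered emergency vehicle with its lights on (trusted agent,
    permitted deed, beneficial consequence). Illegitimate: any other vehicle
    flashing lights to jump the queue.
    """
    return lights_flashing and is_registered_emergency_vehicle(vehicle_id)

print(grant_signal_priority("AMB-042", lights_flashing=True))  # True: legitimate
print(grant_signal_priority("CAR-999", lights_flashing=True))  # False: illegitimate
```

In a real deployment the verification step would presumably draw on dispatch records or vehicle transponders rather than a hard-coded list; the point of the sketch is only to show how a rule of this kind can be expressed as an explicit, checkable formula.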

“These emerging smart city technologies are being adopted around the world, and the work we’ve done here suggests the ADC model can be used to address the full scope of ethical questions these technologies pose,” says Shussett. “The next step is to test a variety of scenarios across multiple technologies in simulations to ensure the model works in a consistent, predictable way. If it passes those tests, it would be ready for testing in real-world settings.”

More information:
Daniel Shussett et al, Applying the Agent-Deed-Consequence (ADC) Model to Smart City Ethics, Algorithms (2025). DOI: 10.3390/a18100625

OpenAI Had Banned Military Use. The Pentagon Tested Its Models Through Microsoft Anyway



OpenAI CEO Sam Altman is still in the hot seat this week after his company signed a deal with the US military. OpenAI employees have criticized the move, which came after Anthropic’s roughly $200 million contract with the Pentagon imploded, and asked Altman to release more information about the agreement. Altman admitted it looked “sloppy” in a social media post.

While this incident has become a major news story, it may just be the latest and most public example of OpenAI creating vague policies around how the US military can access its AI.

In 2023, OpenAI’s usage policy explicitly banned the military from accessing its AI models. But some OpenAI employees discovered the Pentagon had already started experimenting with Azure OpenAI, a version of OpenAI’s models offered by Microsoft, two sources familiar with the matter said. At the time, Microsoft had been contracting with the Department of Defense for decades. It was also OpenAI’s largest investor, and had broad license to commercialize the startup’s technology.

That same year, OpenAI employees saw Pentagon officials walking through the company’s San Francisco offices, the sources said. They spoke on the condition of anonymity because they aren’t authorized to discuss private company matters.

Some OpenAI employees were wary about associating with the Pentagon, while others were simply confused about what OpenAI’s usage policies meant. Did the policy apply to Microsoft? While sources tell WIRED it was not clear to most employees at the time, spokespeople from OpenAI and Microsoft say Azure OpenAI products are not, and were not, subject to OpenAI’s policies.

“Microsoft has a product called the Azure OpenAI Service that became available to the US Government in 2023 and is subject to Microsoft terms of service,” said spokesperson Frank Shaw in a statement to WIRED. Microsoft declined to comment specifically on when it made Azure OpenAI available to the Pentagon, but notes the service was not approved for “top secret” government workloads until 2025.

“AI is already playing a significant role in national security and we believe it’s important to have a seat at the table to help ensure it’s deployed safely and responsibly,” OpenAI spokesperson Liz Bourgeois said in a statement. “We’ve been transparent with our employees as we’ve approached this work, providing regular updates and dedicated channels where teams can ask questions and engage directly with our national security team.”

The Department of Defense did not respond to WIRED’s request for comment.

In January 2024, OpenAI updated its policies to remove the blanket ban on military use. Several OpenAI employees found out about the policy update through an article in The Intercept, sources say. Company leaders later addressed the change at an all-hands meeting, explaining how the company would tread carefully in this area moving forward.

In December 2024, OpenAI announced a partnership with Anduril to develop and deploy AI systems for “national security missions.” Ahead of the announcement, OpenAI told employees that the partnership was narrow in scope and would only deal with unclassified workloads, the same sources said. This stood in contrast to a deal Anthropic had signed with Palantir, which would see Anthropic’s AI used for classified military work.

Palantir approached OpenAI in the fall of 2024 to discuss OpenAI participating in its “FedStart” program, an OpenAI spokesperson confirmed to WIRED. OpenAI ultimately turned the offer down, telling employees it would’ve been too high-risk, two sources familiar with the matter tell WIRED. However, OpenAI now works with Palantir in other ways.

Around the time the Anduril deal was announced, a few dozen OpenAI employees joined a public Slack channel to discuss their concerns about the company’s military partnerships, sources say and a spokesperson confirmed. Some believed the company’s models were too unreliable to handle a user’s credit card information, let alone assist Americans on the battlefield.



Don’t Risk Birdwatching FOMO—Put Out Your Hummingbird Feeders Now



Though most people associate the beginning of March with the hopefulness of spring and the indignities of daylight saving time, there’s another important event taking place in yards all over the country: hummingbird season.

While many species of hummingbirds can be seen in some regions year-round, others are migratory, and this time typically marks their return from wintering grounds in Central and South America. These tiny birds can lose up to 40 percent of their body weight by the time they arrive, having flown thousands of miles, and since many flowers haven’t bloomed yet, nectar feeders can be an essential source of fuel.

Though I test smart bird feeders year-round, I don’t use hummingbird feeders as often as I should. They need to be cleaned and refilled with fresh nectar every two or three days to prevent deadly bacteria and mold (a 1:4 ratio of granulated sugar to water is best; avoid any dyes or additives), and I don’t always have the time.

But if you are going to invest the energy in maintaining a hummingbird feeder, right now is the best time, as you have a chance to see migratory species you might not otherwise encounter, such as black-chinned hummingbirds. A smart feeder helps you ID them, whether they’re stopping at your feeder on their way north or arriving at their final destination.

Birdbuddy’s Pro is the smart hummingbird feeder I recommend and use myself when I’m not actively testing. The app is easy to navigate and sends cleaning reminders, the built-in solar roof keeps the battery charged, and, unlike other feeders, only the shallow bottom screws off for refilling. No having to pour sticky nectar through a narrow opening, or turn a giant cylinder upside down and risk spilling.

Note that it’s not perfect; the sensor is inconsistent and doesn’t capture every hummingbird that visits, but for the camera quality (5 MP photos, 2K video with slow-motion, 122-degree field of view) and ease of use, it’s a foible I’m willing to put up with. If you already have another Birdbuddy feeder, the hummingbird feeder images and videos will integrate seamlessly into your app feed.

Birdbuddy Pro Smart Solar Hummingbird Feeder

Right now, the feeder is 37 percent off on Birdbuddy’s website—a deal I usually don’t see outside of shopping events like Black Friday or Amazon Prime Day. Note that the feeder only runs on 2.4 GHz Wi-Fi, and while it is fully functional without a subscription, a Birdbuddy Premium subscription will let you add friends and family members to your account so they can see the birds as well. That’s $99 a year through the app.



The Controversies Finally Caught Up to Kristi Noem



After a tenure marked by controversy and a contentious week of congressional hearings, Secretary Kristi Noem is out as head of the Department of Homeland Security.

President Donald Trump announced in a Truth Social post on Thursday that Noem would be replaced by Senator Markwayne Mullin of Oklahoma, a staunch Trump ally and immigration hardliner. “The current Secretary, Kristi Noem, who has served us well, and has had numerous and spectacular results (especially on the Border!), will be moving to be Special Envoy for The Shield of the Americas, our new Security Initiative in the Western Hemisphere we are announcing on Saturday in Doral, Florida,” Trump wrote. “I thank Kristi for her service at ‘Homeland.’”

DHS did not immediately respond to a request for comment.

The agencies under DHS include Immigration and Customs Enforcement, US Customs and Border Protection, the Cybersecurity and Infrastructure Security Agency, the Federal Emergency Management Agency, US Citizenship and Immigration Services, the US Coast Guard, and others. It’s a sprawling network whose vast responsibilities and rapidly expanding budget have put it at the center of the Trump administration’s radical overhaul of immigration and border policy.

Speculation has swirled around Noem’s departure for months. Critics have assailed DHS’s aggressive immigration enforcement tactics, while Noem and figures like White House border czar Tom Homan have reportedly been at odds over how to execute the administration’s mass deportation agenda, with Noem and senior adviser Corey Lewandowski said to have emphasized sheer numbers of arrests and deportations above other considerations.

The relationship between Noem and Lewandowski has itself been a subject of controversy, with CNN reporting that a September meeting between the two and President Donald Trump grew “contentious.” Last month, the Wall Street Journal reported that Lewandowski attempted to fire a pilot during a flight for failing to bring Noem’s blanket from one plane to another during a transfer.

The ousted secretary faced mounting scrutiny over the deaths of US citizens during federal operations in Minneapolis, including the killings of Renee Good and Alex Pretti by federal agents in Noem’s employ. In both cases, Noem publicly labeled the deceased “domestic terrorists,” a framing echoed by Trump and other key administration officials. Video evidence, witness testimony, and an independent autopsy contradicted the agency’s claims, including early assertions that Pretti brandished a firearm.

Scrutiny of Noem’s tenure extends beyond the fatal shootings in Minneapolis to a broader pattern of aggressive enforcement tactics, warrantless raids, and mass detention camps. A secretive policy directive issued in May 2025, first reported by the Associated Press, authorized ICE agents to forcibly enter private residences without a judicial warrant. The memo, signed by acting ICE director Todd Lyons, instructed agents to rely solely on an administrative removal document to bypass Fourth Amendment requirements. The policy led to multiple documented instances of federal agents entering the wrong homes, including a January raid in Minnesota where agents removed a US citizen at gunpoint with no legitimate reason.

A record 53 people died in ICE or CBP custody last year, according to House Democrats on the Committee on Homeland Security. Concurrently, Noem has initiated a $38 billion procurement effort to buy and refurbish up to 24 warehouses across the country, aimed at converting them into mass detention camps for people awaiting deportation.

Noem’s tenure has led to controversy at other DHS agencies as well. Her insistence on approving any contract or grant over $100,000 at the department has caused particular strain at FEMA, which has experienced a massive backlog of funding that has slowed normal processes at the agency. A report issued by Senate Democrats on Wednesday found that Noem’s vetting process at FEMA has caused more than 1,000 contracts, grants, and awards to be held up. Multiple FEMA employees have told WIRED that this process has made the agency less ready to respond to disasters and threats.


