Tech
Researchers explore how AI can strengthen, not replace, human collaboration
Researchers from Carnegie Mellon University’s Tepper School of Business are studying how AI can be used to support teamwork rather than replace teammates.
Anita Williams Woolley is a professor of organizational behavior. She researches collective intelligence, or how well teams perform together, and how artificial intelligence could change workforce dynamics. Now, Woolley and her colleagues are helping to figure out exactly where and how AI can play a positive role.
“I’m always interested in technology that can help us become a better version of ourselves individually,” Woolley said, “but also collectively, how can we change the way we think about and structure work to be more effective?”
Woolley collaborated with technologists and others in her field to develop Collective HUman-MAchine INtelligence (COHUMAIN), a framework that seeks to understand where AI fits within the established boundaries of organizational social psychology.
The researchers behind the 2023 publication of COHUMAIN caution against treating AI like any other teammate. Instead, they see it as a partner that works under human direction, with the potential to strengthen existing capabilities or relationships. “AI agents could create the glue that is missing because of how our work environments have changed, and ultimately improve our relationships with one another,” Woolley said.
The research that makes up the COHUMAIN architecture emphasizes that while AI integration into the workplace may take shape in ways we don’t yet understand, it won’t change the fundamental principles behind organizational intelligence, and AI likely can’t fill all of the same roles as humans.
For instance, while AI might be great at summarizing a meeting, it’s still up to people to sense the mood in the room or pick up on the wider context of the discussion.
Organizations have the same needs as before, including a structure that allows them to tap into each human team member’s unique expertise. Woolley said that artificial intelligence systems may best serve in “partnership” or facilitation roles rather than managerial ones, such as a tool that nudges peers to check in with each other or offers the user an alternate perspective.
Safety and risk
With so much collaboration happening through screens, AI tools might help teams strengthen connections between coworkers. But those same tools also raise questions about what’s being recorded and why.
“People have a lot of sensitivity, rightly so, around privacy. Often you have to give something up to get something, and that is true here,” Woolley said.
The level of risk that users feel, both socially and professionally, can change depending on how they interact with AI, according to Allen Brown, a Ph.D. student who works closely with Woolley. Brown is exploring where this tension shows up and how teams can work through it. His research focuses on how comfortable people feel taking risks or speaking up in a group.
Brown said that, in the best case, AI could help people feel more comfortable speaking up and sharing new ideas that might not be heard otherwise. “In a classroom, we can imagine someone saying, ‘Oh, I’m a little worried. I don’t know enough for my professor, or how my peers are going to judge my question,’ or, ‘I think this is a good idea, but maybe it isn’t.’ We don’t know until we put it out there.”
Since interacting with AI creates a digital record that may or may not be kept permanently, one concern is that a person might not know which of their interactions with an AI will be used for evaluation.
“In our increasingly digitally mediated workspaces, so much of what we do is being tracked and documented,” Brown said. “There’s a digital record of things, and if I’m made aware that, ‘Oh, all of a sudden our conversation might be used for evaluation,’ we actually see this significant difference in interaction.”
Even when participants in Brown’s research thought their comments might be monitored or professionally judged, they still felt relatively secure talking to another human being. “We’re talking together. We’re working through something together, but we’re both people. There’s kind of this mutual assumption of risk,” he explained.
The study found that people felt more vulnerable when they thought an AI system was evaluating them. Brown wants to understand how AI can be used to create the opposite effect—one that builds confidence and trust.
“What are those contexts in which AI could be a partner, could be part of this conversational communicative practice within a pair of individuals at work, like a supervisor-supervisee relationship, or maybe within a team where they’re working through some topic that might have task conflict or relationship conflict?” Brown said. “How does AI help resolve the decision-making process or enhance the resolution so that people actually feel increased psychological safety?”
Creating a more trustworthy AI
At the individual level, Tepper researchers are also examining how the way an AI explains its reasoning affects how people use and trust it. Zhaohui (Zoey) Jiang and Linda Argote are studying how people react to different kinds of AI systems—specifically, ones that explain their reasoning (transparent AI) versus ones that don’t explain how they make decisions (black box AI).
“We see a lot of people advocating for transparent AI,” Jiang said, “but our research reveals an advantage of keeping the AI a black box, especially for a high-ability participant.”
One reason, she explained, is that skilled decision-makers tend to be overconfident in their own judgment and distrustful of AI.
“For a participant who is already doing a good job independently at the task, they are more prone to the well-documented tendency of AI aversion. They will penalize the AI’s mistake far more than the humans making the same mistake, including themselves,” Jiang said. “We find that this tendency is more salient if you tell them the inner workings of the AI, such as its logic or decision rules.”
People who struggle with decision-making actually improve their outcomes when using transparent AI models that expose a moderate amount of complexity in their decision-making process. “We find that telling them how the AI is thinking about this problem is actually better for less-skilled users, because they can learn from AI decision-making rules to help improve their own future independent decision-making,” Jiang said.
While transparency is proving to have its own use cases and benefits, Jiang said the most surprising findings are around how people perceive black box models. “When we’re not telling these participants how the model arrived at its answer, participants judge the model as the most complex. Opacity seems to inflate the sense of sophistication, whereas transparency can make the very same system seem simpler and less ‘magical,’” she said.
The two kinds of models suit different use cases. While it isn’t yet cost-effective to tailor an AI to each human partner, future systems may be able to adapt how they represent themselves in order to help people make better decisions, she said.
“It can be dynamic in a way that it can recognize the decision-making inefficiencies of that particular individual that it is assigned to collaborate with, and maybe tweak itself so that it can help complement and offset some of the decision-making inefficiencies.”
Citation: Researchers explore how AI can strengthen, not replace, human collaboration (2025, November 1), retrieved 1 November 2025 from https://techxplore.com/news/2025-10-explore-ai-human-collaboration.html
Tech
A Filmmaker Made a Sam Altman Deepfake—and Got Unexpectedly Attached
Director Adam Bhala Lough didn’t set out to make a documentary about a digital simulacrum of Sam Altman.
But after about 100 days of texting and emailing the OpenAI CEO for an interview—with no response, he claims, and with financiers hounding him to make good on his original pitch—Lough was at his wit’s end.
He’d exhausted just about every angle. “Once I reached that point, I gave up and I pivoted to gate-crashing OpenAI,” he says. Though he’d employed a similar tactic in his Emmy-nominated 2023 documentary Telemarketers—a chronicle of industry-wide corruption in the telemarketing business—it wasn’t a filmmaking style he felt all that comfortable with. “It was a fortress. I was able to slip through the gate, and immediately security grabbed me and physically removed me from the premises.”
So begins Deepfaking Sam Altman, Lough’s portrait of how AI is reshaping society and his quest to talk to the man behind it. When his original plan fell through, he drew inspiration from Altman himself. “The Scarlett Johansson controversy erupted,” he says. In 2024, the actress publicly called out OpenAI for seeming to copy her voice for its new AI voice assistant, Sky. “It was at that point where I got the idea to do the deepfake.” (In a May 2024 statement, Altman apologized to Johansson and said Sky’s voice was “never intended to resemble” hers.)
What starts out as a simple voice clone balloons into a full deepfake of Altman called Sam Bot, which Lough travels to India to have created. This being a Lough film, though, nothing goes according to plan. Without spoiling too much, Sam Bot eventually becomes its own entity, and the film takes an even stranger—and more revelatory—turn from there. “There’s parallels between this movie and Terminator 2: Judgment Day, but there’s none of the violence,” he says. Lough grew up during what he calls the “AI 1.0 era.” His obsession with James Cameron’s Terminator 2 was a major influence on his craft.
Deepfaking Sam Altman, which is based partially on the New York Magazine story casting Sam Altman as the Oppenheimer of our age, features commentary from former OpenAI safety engineer Heidy Khlaaf, who tells Lough, “We’re starting to see OpenAI dip its toes in military uses, and I cannot imagine something like DALL-E and ChatGPT being used for military assists. That really scares me, given how inaccurate those systems are.”
Tech
Phone Updates Used to Be Annoying. The Latest iOS Is Awful
I come from a long line of Luddites. My grandmother special-ordered her Toyota Camry with crank windows because she was convinced it was “one less thing that will break.” My father refused to upgrade our six-CD stereo system even though the eject button wouldn’t open and it could only play the first CD he ever put in it. The Traveling Wilburys Vol. 1 was the soundtrack to our family dinners for a decade. As for myself, I only switched to a smartphone in 2013, when repairing my flip phone would’ve cost about as much as replacing it with a smartphone.
Now I am the same as anyone reading this. My phone is my toy and my toil, the first object I touch upon waking, the spackle to my spare minutes, the inanimate partner in our shared lie, which is that it works for me and not the other way around. Mostly, I accept this. But with the latest iOS, released last week, revolt is in the air.
Tech companies are accustomed to a certain amount of kicking and screaming after foisting new interfaces on the public. You can’t please all of the people all of the time, especially when “all of the people” is in the billions. But ask your friends—or Google or Reddit or Bluesky or ChatGPT—about the operating system update, and you will be swept away in a river of anger. “This is like foundationally bad,” author and musician John Darnielle replied on Bluesky to someone who agreed with his original post (about the poor photo-cropping function). One Reddit thread was posted under the headline “New iPhone update made me so overwhelmed, I ended up throwing my phone.” The subsequent post does not specify where the phone was thrown or at whom, but I have some suggestions. One wonders at what point a company’s petrification of obsolescence risks becoming a self-fulfilling prophecy. Ask yourself: Is this good for the phones? Normally, I’d be curious about the hissy-fit metrics inside Silicon Valley, about when public upset gets severe enough to become private data. But right now, I have my own problems.
I downloaded Apple’s new iOS 26.2 last week because I am a trained circus seal who will press any button presented to me. I came home late from a holiday party, agreed to the latest iOS almost by accident, and woke up to a new world. There’s something very A Thief in the Night about any new operating system, but in this case, the complaints, some witnessed, some personally experienced, are intense. Here is a partial list: the slow speed (every action takes twice as long), the animation of text bubbles, the incongruous mix of sensitivity and imperviousness to touch, the swipes to nowhere, the difficulty posting downloaded photos, the fact that almost nothing is where you left it (search fields, files), the unsolicited status sharing regarding dwindling battery life (“24m to 80%”), the lack of visual contrast, the screenshot fussiness, the requirement that users drive up to a mansion on Long Island and whisper “Fidelio” in order to toggle off the “Liquid Glass” function. You have to admit: It’s a little funny to get a transparency feature from a tech company.
Given my history, I tend to assume most technological snafus are my doing. I’ve tried to wind back what aspects of this iOS I can, assuming the veil of frustration will lift eventually. Ideally, I will not have to mentally downgrade this pricey device to a flip phone. But in the meantime, the widespread nature of other people’s indignation has given me a perverse sense of community.
Take this battery-life business. I work from home, a privileged charging position. Yet I too have noticed my battery leveling threats. The iOS seems self-aware: The lock screen photo now fades by default, in order to save power. You have to do some toggling if you want to gaze at your kids with the instantaneousness to which you are accustomed. Also, like all of Reddit, I do not take kindly to the idea that the solution to my woes is to turn off my device and turn it back on (have you tried looking for your shoes in the closet?). Or that I should check my storage. Ha! I have a year-old phone with enough storage to choke a horse. This is not because I’m directing independent films. It’s because I like my photos and text exchanges where I like my martinis: in my hand. I’m a writer. Two of my favorite things in this world are transcripts and being right, on the spot.
Alas, my trusty research assistant doesn’t feel so trusty right now. The new iOS is like getting a present from the relative who knows you the least. Except worse because your phone knows you quite well. So when it presents you with the touchscreen version of an ill-fitting, bug-ridden, ugly sweater and says, “I saw this and thought of you,” it creates revulsion and frustration. People don’t enjoy forking over data and dollars in exchange for annoyance, in exchange for having to sound, well, like Luddites.
Historically, Luddites were 19th-century textile workers who eschewed new machinery (partially for financial reasons), thus becoming symbolic of impotent resistance to progress. But is this progress? It doesn’t feel like it. Believe me, there’s no glory in identifying as inept. The modern Luddite is just as impatient as the rest of the population, just as concerned with wanting things to work well or, yes, better. Which makes me think twice about my grandmother and her car. I’m pretty sure the woman knew how to press a button. She didn’t special-order crank windows because it was one less learning curve for her, she ordered them because it was one less learning curve for the machine. She would’ve gone with whatever was sure to work. All she wanted was for the fucking windows to open.
Tech
The Lovense Spinel Is the Mini Sex Machine to Get
It let me tailor the experience to my needs: start slow, work my way up, then, if I’m feeling it, go for turbo. Admittedly, when you have the Spinel on turbo, you’re likely to have flashbacks to when Carrie Bradshaw was subjected to such aggressive jackrabbit sex that she couldn’t stand up straight at Charlotte’s wedding. However, because the vaginal muscles create resistance when it’s inside, turbo mode was quite pleasurable, especially because I used a lot of lube. You absolutely want to lube up for the Spinel.
What’s also great about this dildo is that you can slide on the clitoral stimulator, so while you’re enjoying proper thrusting, at the speed and intensity you like—there are three levels and four patterns—your clitoris is also getting its fair share of stimulation.
When you’re ready to swap out the heating dildo with its epic amount of thrusts per minute, you can move on to the G-spot attachment. While some internal arms have a slight curve to stimulate the G-spot, this one has a far-reaching, deep curve, meaning that if you’ve struggled to find your G-spot in the past, you can’t miss it with this attachment. You can slide on the clitoral stimulator here, too, if you’re in the mood to hopefully score a blended orgasm.
Like all Lovense products, the Spinel comes with an app that isn’t difficult to use, but with all the options, including speed, intensity, and temperature, it became a little project to explore the different features each time I opened it. I don’t often use a sex toy’s app all that much, but for this toy, it makes sense. It would be uncomfortable to control it entirely by hand, so the app feels necessary. Just be prepared to take some time to learn how it all works.
Not a Quick Romp
The Spinel takes time to put together, figure out, and decide on not just attachments, but how you want to use them. It can be a handheld device, placed on a flat and secure surface to make use of the suction cup feature, or its handle can literally turn it into a gun-shaped dildo that you can use on yourself, although it’s not particularly comfortable to hold, so it’s best used with a partner.
When you use the suction cup, you’ll want to explore different flat surfaces to ensure it’s super-secure, to avoid a precarious situation. If you have roommates and thin walls, and don’t want people hearing your sex toy doing its thing, that’s also something to consider. While there’s no shame here, you don’t want to make other people uncomfortable with the sounds that ensue.
If you love a sex toy that’s going to do the majority of the work for you, heats up, and has long battery life—it takes about 2.5 hours to charge, and offers about four hours of playtime—then the Spinel might be your next favorite toy. It’s not cheap, but it’s worth the splurge.