Tech
Wilson Connectivity, Autonomous Systems team for in-building wireless service | Computer Weekly
Wireless communication technology provider Wilson Connectivity has announced a joint development partnership with Autonomous Systems to bring automated, digitally transformed capabilities to every phase of in-building wireless infrastructure, from initial deployment through ongoing optimisation.
The full network lifecycle management offering combines Wilson’s 30-year track record in distributed antenna systems (DAS), private 5G and Citizens Broadband Radio Service (CBRS) with Autonomous Systems’ cloud-based, artificial intelligence (AI)-ready monitoring platform to give enterprises real-time, automated visibility into their networks from day one of deployment through to ongoing optimisation.
The combined service is pitched as flipping the traditional operating model currently used by enterprises running DAS or private networks.
Most organisations that operate in-building wireless systems rely on reactive, manual processes to resolve connectivity issues. Technicians are dispatched only after problems are reported, leading to prolonged disruptions and higher operational costs.
Wilson’s product replaces that model with continuous, automated monitoring and active testing that measures actual quality of experience for voice, messaging, over-the-top and streaming services. Optimised for multi-operator environments, the system works across active, hybrid and passive DAS, as well as private 5G and CBRS, and is designed to scale across healthcare facilities, manufacturing floors, logistics centres, datacentres, K-12 schools, higher education campuses and hospitality venues where reliable connectivity is essential for operations and public safety communications.
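Active testing of this kind generally means running synthetic transactions on a schedule and scoring the results against per-service thresholds, rather than waiting for users to report faults. The sketch below is a hypothetical illustration of that pattern only; the threshold values and probe design are invented for the example and are not taken from Wilson’s product.

```python
import time
import urllib.request

# Hypothetical per-service latency thresholds in milliseconds;
# a real QoE system would tune these per service class and venue.
THRESHOLDS_MS = {"voice": 150, "messaging": 500, "streaming": 1000}

def probe_latency(url: str, timeout: float = 5.0) -> float:
    """Time one synthetic request and return the latency in milliseconds."""
    start = time.monotonic()
    urllib.request.urlopen(url, timeout=timeout).read(1)
    return (time.monotonic() - start) * 1000

def score(service: str, latency_ms: float) -> str:
    """Map a measured latency to a verdict for one service class."""
    return "ok" if latency_ms <= THRESHOLDS_MS[service] else "degraded"
```

A monitoring loop would call `probe_latency` periodically for each service class and raise an alert when `score` returns `"degraded"`, replacing the dispatch-after-complaint model the article describes.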
Wilson’s Hybrid DAS is designed for rapid installation, improving in-building wireless signal through multi-channel amplification to maximise simultaneous bandwidth. This is said to give users more control and a lower total cost of ownership through remote network scanning and monitoring, along with an energy-efficient, space-saving design.
Cell signals for all devices on all carriers can be enhanced up to 5G speeds with the Hybrid DAS service, which combines the precision of bi-directional antenna amplification with enterprise-grade fibre-optic transport. This is said to offer “the greatest” versatility, coverage and capacity.
“This is a major step forward for Wilson and for the customers who depend on us,” said Payam Maveddat, general manager for enterprise at Wilson Connectivity. “We’re no longer just providing coverage. We’re giving enterprises and their partners a complete, integrated solution that manages the entire network lifecycle with real-time intelligence. That means fewer truck rolls, faster problem resolution and a better experience for the people who rely on these networks every day.”
Said to be built to unify automated monitoring and management, the Autonomous Systems platform combines zero-touch visibility sensors with fully cloud-integrated workflow automation to streamline operations and accelerate decision-making. By transforming network and service data into actionable intelligence, Autonomous Systems says it can empower organisations to enhance efficiency, strengthen network resilience and optimise performance at scale.
“Wilson saw where the market was heading and made a strategic decision to lead their industry enabling full network life-cycle automation,” said Autonomous Systems CEO Steve Urvik. “Working together on this joint development, we’ve built something that gives Wilson’s customers and partners a level of integrated network visibility and control that simply wasn’t available in the market before.”
The service will be available globally in the second quarter of 2026. Pricing will be based on a combination of intelligent probe hardware and subscription-based remote monitoring.
OpenAI Buys Some Positive News
OpenAI announced Thursday that it had acquired the online business talk show TBPN for an undisclosed sum. The move comes as OpenAI struggles with its public image, which has taken a significant hit in recent months.
Since launching in 2024, TBPN has risen in popularity among Silicon Valley circles by offering a daily live stream about the technology industry that’s seen as more tech-friendly than traditional outlets. The show’s two hosts, John Coogan and Jordi Hays, offer real-time commentary on breaking news, cycle through viral social media posts, and interview executives from companies including Meta, Salesforce, Palantir and OpenAI. It’s become especially popular among OpenAI staff and other AI researchers, many of whom are addicted to the social media platform X.
It’s hard to see how a media startup fits into OpenAI’s core businesses of selling ChatGPT, Codex, and a new super app in development to consumers and enterprises. Last month, OpenAI’s CEO of Applications, Fidji Simo, told staff in an all-hands meeting that the company needed to cancel its side projects and refocus on its core businesses.
In a memo to staff announcing the acquisition, Simo said the typical communications playbook does not apply to OpenAI. “We’re not a typical company,” she said in the memo, which was also published as a blog. “We’re driving a really big technological shift. And with the mission of bringing AGI to the world comes a responsibility to help create a space for a real, constructive conversation about the changes AI creates—with builders and people using the technology at the center.”
TBPN is a small business compared to OpenAI. The media firm says it generated $5 million in ad revenue last year, and was on track to make more than $30 million in revenue in 2026, according to The Wall Street Journal. The show reportedly reaches around 70,000 viewers per episode across a variety of platforms. A source close to OpenAI says the company doesn’t expect TBPN to contribute financially to the business, though it will help with OpenAI’s communications strategy.
OpenAI has fallen under increased public scrutiny in recent months. After the company signed a deal with the Department of Defense in February, Anthropic’s Claude surged in downloads and claimed the top spot among Apple’s free apps. OpenAI’s leaders are also dealing with a growing QuitGPT movement, made up of people who vow never to use OpenAI’s products. OpenAI President Greg Brockman cited AI’s popularity issues as a core reason for his increased political spending.
The acquisition makes OpenAI the latest Silicon Valley player to try owning and operating a news business. In recent decades, there have been several notable examples of technology leaders purchasing media firms, including Jeff Bezos buying The Washington Post, Marc Benioff buying Time Magazine, and Robinhood buying the newsletter company MarketSnacks. In each case, the acquisitions raised immediate questions about whether the outlets would remain truly independent. In her memo, Simo told staff that TBPN will retain editorial independence.
“TBPN is my favorite tech show. We want them to keep that going and for them to do what they do so well,” said OpenAI CEO Sam Altman in a post on X. “I don’t expect them to go any easier on us, [and I] am sure I’ll do my part to help enable that with occasional stupid decisions.”
OpenAI said TBPN will continue to “run their programming, choose their guests, and make their own editorial decisions,” according to Simo’s memo. The company also said that TBPN will report directly to OpenAI’s VP of global affairs, Chris Lehane. WIRED previously reported how an economic research team under Lehane had struggled to report on AI’s negative impacts on the economy.
Cursor Launches a New AI Agent Experience to Take On Claude Code and Codex
Cursor announced Thursday the launch of Cursor 3, a new product interface that allows users to spin up AI coding agents to complete tasks on their behalf. The product, which was developed under the code name Glass, is Cursor’s response to agentic coding tools like Anthropic’s Claude Code and OpenAI’s Codex, which have taken off with millions of developers in recent months.
“In the last few months, our profession has completely changed,” said Jonas Nelle, one of Cursor’s heads of engineering, in an interview with WIRED. “A lot of the product that got Cursor here is not as important going forward anymore.”
Cursor increasingly finds itself in competition with leading AI labs for developers and enterprise customers. The company pioneered one of the first and most popular ways for developers to code with AI models from OpenAI, Anthropic, and Google—making Cursor one of these companies’ biggest AI customers. But in the last 18 months, OpenAI and Anthropic have launched agentic coding products of their own, and started offering them through highly subsidized subscriptions that have put pressure on Cursor’s business.
While Cursor’s core product lets developers code in an integrated development environment (IDE) and tap an AI model for help, new products like Claude Code and Codex center around allowing developers to off-load entire tasks to an AI agent—sometimes spinning up multiple agents at the same time. Cursor 3 is the startup’s version of an “agent-first” coding product. According to Nelle, the product is optimized for a world where developers spend their days “conversing with different agents, checking in on them, and seeing the work that they did,” rather than writing code themselves.
Cursor is launching its new agentic coding interface inside its existing desktop app, where it will live alongside the IDE. At the center of a new window in Cursor, there’s a text box where users can type, in natural language, a task they’d like an AI agent to complete—it looks more like a chatbot than a coding environment. Press enter, and the AI agent sets to work without requiring the developer to write a single line of code. In a sidebar on the left, developers can view and manage all of the AI agents they have running in Cursor.
What’s unique about Cursor 3, compared to desktop apps for Claude Code and Codex, is that it integrates an agent-first product with Cursor’s AI-powered development environment. In a demo, Cursor’s other cohead of engineering for Cursor 3, Alexi Robbins, showed WIRED how users can prompt an agent in the cloud to spin up a feature, and then review the code it generated locally on their computer.
Nelle and Robbins argue it doesn’t matter which interface developers are spending their time in—they just want people using Cursor.
Competing With the AI Labs
I visited Cursor’s office in San Francisco’s North Beach neighborhood last week. The startup is reportedly raising fresh capital at a $50 billion valuation—nearly double the valuation it commanded in a funding round last fall—and has expanded into an old movie theater. Cursor employees used to toss their shoes in a pile by the door upon entry, but now there’s a row of large shoe racks, signaling one way in which the company is growing up.
Yet Cursor still feels like a startup. Employees tell me that’s part of the appeal of working there; the company can ship quickly and doesn’t feel too corporate. But as it finds itself racing to catch up to Anthropic and OpenAI in the agentic coding race, that scrappiness may not be enough. This battle—the one to create the best AI coding agent—may be Cursor’s most capital-intensive chapter yet.
Identity and AI: Questions of data security, trust and control | Computer Weekly
AI-driven identity solutions are often presented as the grown-up answer to modern access control: smarter verification, less friction, better security, happier users. In principle, yes. In practice, they also drag a fairly hefty suitcase of compliance, privacy and ethical questions in behind them.
The first issue is compliance. Identity is not a side topic in enterprise environments. It sits right in the middle of security, governance, risk and accountability. Once AI is involved in deciding who gets access, who is challenged, who is flagged as suspicious, or who is denied entry altogether, that stops being just a technical control and quickly becomes a governance matter. Many of these solutions rely on large volumes of personal data, sometimes including biometrics, behavioural analysis, device data, location information and patterns of use. That means organisations need to be crystal clear on lawful basis, necessity, proportionality, retention and oversight. In other words, they need to know not just that the tool can do something, but that they should be doing it at all. Like knowing that an iPhone is a tool, not the conversation.
Privacy is where things get a bit soupy. AI identity systems are usually marketed on the basis that they can take more signals into account and make better decisions as a result. That sounds great, and sometimes it is. But it also means more collection, more processing and more potential intrusion. The line between intelligent authentication and overreach can get thin very quickly. Data gathered to confirm identity can easily become data used to monitor behaviour, profile staff, track habits or support broader surveillance if the guardrails are poor. That is where trust starts to wobble. Enterprises need privacy by design, proper impact assessments, transparent notices and disciplined boundaries around how identity data is used. Just because a system can infer more does not mean it should. It’s a potential minefield that should be navigated mindfully and with integrity.
That brings us to the ethical question, which is where the machine gets a little too smug for its own good. AI models are not neutral simply because they are mathematical. If an identity tool has been trained on incomplete or biased data, it may perform unevenly across different groups. That can lead to higher false rejections, repeated challenges for legitimate users, or decisions that disproportionately affect certain individuals. In a business setting, that is not just inconvenient. It can be unfair, exclusionary and potentially discriminatory. Organisations cannot simply deploy these systems and hope the algorithm behaves itself. That’s magical thinking.
Explainability matters too. If someone is denied access, locked out of a process or flagged as high risk, there must be a way to explain that decision in plain language and to challenge it if necessary. Black box identity decisions are a poor fit for any organisation trying to claim strong governance. Human review, escalation routes and clear accountability all need to be part of the design.
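In practice, explainability often starts with something mundane: recording a plain-language reason alongside every decision, so there is something to show a user who challenges it. The sketch below is a simplified, hypothetical illustration of that idea; the rules, field names and threshold are invented for the example.

```python
from dataclasses import dataclass, field

@dataclass
class AccessDecision:
    allowed: bool
    # Plain-language reasons recorded at decision time, so the
    # outcome can be explained and challenged by a human later.
    reasons: list[str] = field(default_factory=list)

def evaluate_access(user: dict) -> AccessDecision:
    """Rule-based check that denies access only for stated, reviewable reasons."""
    reasons = []
    if not user.get("mfa_passed"):
        reasons.append("multi-factor authentication not completed")
    if user.get("risk_score", 0) > 80:
        reasons.append("risk score above the review threshold (80)")
    return AccessDecision(allowed=not reasons, reasons=reasons)
```

The point of the structure is that a lockout is never a bare boolean: every denial carries the reasons that produced it, which is the raw material for the human review and escalation routes the column calls for.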
The real implication is that AI-driven identity should never be treated as a shiny bolt-on security upgrade. It is part of a much bigger picture involving data protection, user trust, accountability and control. Used well, it can strengthen resilience and reduce fraud. Used badly, it can create exactly the kind of opaque, over-engineered risk that good governance is supposed to prevent. The smart approach is not to resist the technology, but to govern it properly from the outset. Because in identity, as in most things, clever without controlled is just chaos in a smarter outfit.
