Myriota introduces satellite-based scalable global asset tracking | Computer Weekly
Blind spots and outages have long been weaknesses of terrestrial networks designed to offer coverage for internet of things (IoT) applications, and Myriota believes it can address these challenges by embedding native 5G non-terrestrial network (NTN) satellite connectivity in a purpose-built tracking device called AssetHawk.
Myriota says supply chains are growing increasingly complex, and that blind spots and outages in terrestrial coverage create significant operational and financial risk – particularly across industries such as transport and logistics, equipment leasing, mining, and agriculture.
Powered by the company’s existing HyperPulse connectivity system, AssetHawk is said to address these challenges by embedding native 5G NTN satellite connectivity in a purpose-built tracking device – delivering an affordable, feature-rich satellite asset tracker.
AssetHawk is engineered to deliver reliable global visibility beyond the reach of traditional cellular networks. It can support scalable tracking of trailers, containers, pallets, vehicles and unpowered assets to verify delivery milestones, reduce asset loss, improve utilisation, lower operating costs and improve margins as fleets and deployments scale.
Intended for rapid deployment at the edge, Myriota describes AssetHawk as a ready-to-use device that installs in minutes and integrates seamlessly with third-party visualisation and analytics platforms.
The company says that the tracker’s compact, low-profile design and flexible mounting options, including magnetic mounting, make it well-suited to rotating fleets and temporary assets. An IP68-rated enclosure has been used to offer reliable operation in harsh conditions, surviving submersion, dust, impact and extreme temperatures commonly encountered in mining, agriculture and heavy industry.
For long-term deployments, AssetHawk is said to have been engineered to minimise operational overheads. Low-power hardware delivers a battery life of up to 10 years on two AA batteries, while intelligent firmware automatically increases location update frequency when movement is detected. The result is said to be sharper insights while optimising power consumption and operational costs.
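Myriota has not published its firmware logic, but the motion-triggered behaviour described above can be sketched as a simple duty-cycling policy. All thresholds and intervals below are hypothetical, for illustration only:

```python
# Illustrative sketch of motion-triggered reporting: the tracker reports
# infrequently while stationary to conserve battery, and shortens its
# location-update interval when the accelerometer detects movement.
# All values are hypothetical, not Myriota's actual parameters.

IDLE_INTERVAL_S = 6 * 60 * 60   # one fix every 6 hours when stationary
MOVING_INTERVAL_S = 15 * 60     # one fix every 15 minutes while moving
MOTION_THRESHOLD_G = 0.1        # accelerometer delta that counts as movement

def next_update_interval(accel_delta_g: float) -> int:
    """Return the seconds to sleep before the next location update."""
    if accel_delta_g > MOTION_THRESHOLD_G:
        return MOVING_INTERVAL_S
    return IDLE_INTERVAL_S
```

The trade-off this policy captures is the one the article describes: more frequent fixes only when they carry information (the asset is moving), so battery budget is spent where the insight is.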
The tracker will soon be available with optional Bluetooth Low Energy capabilities to enable the capture of valuable condition data from Bluetooth sensors, including temperature, vibration and other environmental metrics.
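Myriota has not detailed how the Bluetooth integration will work, but as an illustration of what capturing such condition data involves: the Bluetooth SIG’s standard Temperature characteristic (UUID 0x2A6E) encodes readings as a little-endian signed 16-bit integer in units of 0.01 °C, which a gateway device can decode in a couple of lines:

```python
import struct

def decode_ble_temperature(payload: bytes) -> float:
    """Decode the standard GATT Temperature characteristic (0x2A6E):
    a little-endian sint16 in units of 0.01 degrees Celsius."""
    (raw,) = struct.unpack("<h", payload)
    return raw / 100.0
```

For example, the two-byte payload `0x64 0x09` decodes to 24.04 °C; the signed format also handles sub-zero readings common in cold-chain monitoring.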
The device operates on a standards-based 3GPP Release 17 architecture, using private data paths to protect against unauthorised access or interference – meaning security and data integrity are built into the platform.
AssetHawk is also said to be purpose-built for operations at the edge, supporting use cases such as tracking trailers and containers across borders, monitoring leased equipment throughout its lifecycle, locating shared agricultural assets in remote paddocks, and gaining early visibility of critical equipment during mining exploration.
Developed on a TAA-compliant supply chain and backed by its experience in operating secure satellite networks commercially, Myriota is confident that AssetHawk can meet the needs of government and enterprise customers where trust and resilience are critical.
“Most tracking projects fail not in the lab, but at scale – when battery swaps, coverage gaps and complex integrations erode the business case,” said Myriota CEO Ben Cade. “AssetHawk is designed to flip that equation. By delivering global coverage, predictable multi‑year life and straightforward integration in a single device, we’re giving solution providers and systems integrators a way to scale tracking profitably, even for assets that were previously too remote or low‑value to justify a tracker.”
OpenAI Buys Some Positive News
OpenAI announced Thursday that it had acquired the online business talk show TBPN for an undisclosed sum. The move comes as OpenAI struggles with its public image, which has taken a significant hit in recent months.
Since launching in 2024, TBPN has risen in popularity among Silicon Valley circles by offering a daily live stream about the technology industry that’s seen as more tech-friendly than traditional outlets. The show’s two hosts, John Coogan and Jordi Hays, offer real-time commentary on breaking news, cycle through viral social media posts, and interview executives from companies including Meta, Salesforce, Palantir and OpenAI. It’s become especially popular among OpenAI staff and other AI researchers, many of whom are addicted to the social media platform X.
It’s hard to understand how a media startup fits into OpenAI’s core business of selling ChatGPT, Codex, and a new super app the company is developing to consumers and enterprises. Last month, OpenAI’s CEO of Applications, Fidji Simo, told staff in an all-hands meeting that the company needed to cancel its side projects and refocus on its core businesses.
In a memo to staff announcing the acquisition, Simo said the typical communications playbook does not apply to OpenAI. “We’re not a typical company,” she said in the memo, which was also published as a blog. “We’re driving a really big technological shift. And with the mission of bringing AGI to the world comes a responsibility to help create a space for a real, constructive conversation about the changes AI creates—with builders and people using the technology at the center.”
TBPN is a small business compared to OpenAI. The media firm says it generated $5 million in ad revenue last year and was on track to make more than $30 million in revenue in 2026, according to The Wall Street Journal. The show reportedly reaches around 70,000 viewers per episode across a variety of platforms. A source close to OpenAI says the company doesn’t expect TBPN to contribute financially to the business, though it will help with OpenAI’s communications strategy.
OpenAI has fallen under increased public scrutiny in recent months. After the company signed a deal with the Department of Defense in February, Anthropic’s Claude surged in downloads and claimed the top spot among Apple’s free apps. OpenAI’s leaders are also dealing with a growing QuitGPT movement, made up of people who vow never to use OpenAI’s products. OpenAI President Greg Brockman cited AI’s popularity issues as a core reason for his increased political spending.
The acquisition makes OpenAI the latest Silicon Valley player to try owning and operating a news business. In recent decades, there have been several notable examples of technology leaders purchasing media firms, including Jeff Bezos buying The Washington Post, Marc Benioff buying Time Magazine, and Robinhood buying the newsletter company MarketSnacks. In each case, the acquisitions raised immediate questions about whether the outlets would remain truly independent. In her memo, Simo told staff that TBPN will retain editorial independence.
“TBPN is my favorite tech show. We want them to keep that going and for them to do what they do so well,” said OpenAI CEO Sam Altman in a post on X. “I don’t expect them to go any easier on us, [and I] am sure I’ll do my part to help enable that with occasional stupid decisions.”
OpenAI said TBPN will continue to “run their programming, choose their guests, and make their own editorial decisions,” according to Simo’s memo. The company also said that TBPN will report directly to OpenAI’s VP of global affairs, Chris Lehane. WIRED previously reported how an economic research team under Lehane had struggled to report on AI’s negative impacts on the economy.
Cursor Launches a New AI Agent Experience to Take On Claude Code and Codex
Cursor announced Thursday the launch of Cursor 3, a new product interface that allows users to spin up AI coding agents to complete tasks on their behalf. The product, which was developed under the code name Glass, is Cursor’s response to agentic coding tools like Anthropic’s Claude Code and OpenAI’s Codex, which have taken off with millions of developers in recent months.
“In the last few months, our profession has completely changed,” said Jonas Nelle, one of Cursor’s heads of engineering, in an interview with WIRED. “A lot of the product that got Cursor here is not as important going forward anymore.”
Cursor increasingly finds itself in competition with leading AI labs for developers and enterprise customers. The company pioneered one of the first and most popular ways for developers to code with AI models from OpenAI, Anthropic, and Google—making Cursor one of these companies’ biggest AI customers. But in the last 18 months, OpenAI and Anthropic have launched agentic coding products of their own, and started offering them through highly subsidized subscriptions that have put pressure on Cursor’s business.
While Cursor’s core product lets developers code in an integrated development environment (IDE) and tap an AI model for help, new products like Claude Code and Codex center around allowing developers to off-load entire tasks to an AI agent—sometimes spinning up multiple agents at the same time. Cursor 3 is the startup’s version of an “agent-first” coding product. According to Nelle, the product is optimized for a world where developers spend their days “conversing with different agents, checking in on them, and seeing the work that they did,” rather than writing code themselves.
Cursor is launching its new agentic coding interface inside its existing desktop app, where it will live alongside the IDE. At the center of a new window in Cursor, there’s a text box where users can type, in natural language, a task they’d like an AI agent to complete—it looks more like a chatbot than a coding environment. Press enter, and the AI agent sets to work without requiring the developer to write a single line of code. In a sidebar on the left, developers can view and manage all of the AI agents they have running in Cursor.
What’s unique about Cursor 3, compared to desktop apps for Claude Code and Codex, is that it integrates an agent-first product with Cursor’s AI-powered development environment. In a demo, the other co-head of engineering for Cursor 3, Alexi Robbins, showed WIRED how users can prompt an agent in the cloud to spin up a feature, and then review the code it generated locally on their computer.
Nelle and Robbins argue it doesn’t matter which interface developers are spending their time in—they just want people using Cursor.
Competing With the AI Labs
I visited Cursor’s office in San Francisco’s North Beach neighborhood last week. The startup is reportedly raising fresh capital at a $50 billion valuation—nearly double what it was valued in a funding round last fall—and has expanded into an old movie theater. Cursor employees used to toss their shoes in a pile by the door upon entry, but now there’s a row of large shoe racks, signaling one way in which the company is growing up.
Yet Cursor still feels like a startup. Employees tell me that’s part of the appeal of working there; the company can ship quickly and doesn’t feel too corporate. But as it finds itself racing to catch up to Anthropic and OpenAI in the agentic coding race, that scrappiness may not be enough. This battle—the one to create the best AI coding agent—may be Cursor’s most capital-intensive chapter yet.
Identity and AI: Questions of data security, trust and control | Computer Weekly
AI-driven identity solutions are often presented as the grown-up answer to modern access control: smarter verification, less friction, better security, happier users. In principle, yes. In practice, they also drag a fairly hefty suitcase of compliance, privacy and ethical questions in behind them.
The first issue is compliance. Identity is not a side topic in enterprise environments. It sits right in the middle of security, governance, risk and accountability. Once AI is involved in deciding who gets access, who is challenged, who is flagged as suspicious, or who is denied entry altogether, that stops being just a technical control and quickly becomes a governance matter. Many of these solutions rely on large volumes of personal data, sometimes including biometrics, behavioural analysis, device data, location information and patterns of use. That means organisations need to be crystal clear on lawful basis, necessity, proportionality, retention and oversight. In other words, they need to know not just that the tool can do something, but that they should be doing it at all. Like knowing that an iPhone is a tool, not the conversation.
Privacy is where things get a bit soupy. AI identity systems are usually marketed on the basis that they can take more signals into account and make better decisions as a result. That sounds great, and sometimes it is. But it also means more collection, more processing and more potential intrusion. The line between intelligent authentication and overreach can get thin very quickly. Data gathered to confirm identity can easily become data used to monitor behaviour, profile staff, track habits or support broader surveillance if the guardrails are poor. That is where trust starts to wobble. Enterprises need privacy by design, proper impact assessments, transparent notices and disciplined boundaries around how identity data is used. Just because a system can infer more does not mean it should. It’s a potential minefield that should be navigated mindfully and with integrity.
That brings us to the ethical question, which is where the machine gets a little too smug for its own good. AI models are not neutral simply because they are mathematical. If an identity tool has been trained on incomplete or biased data, it may perform unevenly across different groups. That can lead to higher false rejections, repeated challenges for legitimate users, or decisions that disproportionately affect certain individuals. In a business setting, that is not just inconvenient. It can be unfair, exclusionary and potentially discriminatory. Organisations cannot simply deploy these systems and hope the algorithm behaves itself. That’s magical thinking.
Explainability matters too. If someone is denied access, locked out of a process or flagged as high risk, there must be a way to explain that decision in plain language and to challenge it if necessary. Black box identity decisions are a poor fit for any organisation trying to claim strong governance. Human review, escalation routes and clear accountability all need to be part of the design.
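One way to make that concrete (a minimal sketch, not any particular vendor’s design, with an entirely hypothetical toy policy): have the decision engine return a plain-language reason and an escalation route alongside every verdict, so denials are explainable and challengeable by construction:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class AccessDecision:
    allowed: bool
    reason: str                  # plain-language explanation, shown to the user
    escalation: Optional[str]    # route for human review when access is denied

def evaluate_access(risk_score: float, device_known: bool) -> AccessDecision:
    """Toy policy for illustration: deny on unknown device or high risk,
    and always attach a human-readable reason plus a review route."""
    if not device_known:
        return AccessDecision(
            False,
            "Access denied: this device has not been enrolled.",
            "Contact the IT service desk to register the device.")
    if risk_score > 0.8:
        return AccessDecision(
            False,
            "Access denied: this sign-in was flagged as unusually high risk.",
            "Request a manual review by the security team.")
    return AccessDecision(True, "Access granted: known device, low risk.", None)
```

The design point is that the reason and the escalation route are part of the decision’s type, so a black-box verdict with no recourse cannot be produced at all.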
The real implication is that AI-driven identity should never be treated as a shiny bolt-on security upgrade. It is part of a much bigger picture involving data protection, user trust, accountability and control. Used well, it can strengthen resilience and reduce fraud. Used badly, it can create exactly the kind of opaque, over-engineered risk that good governance is supposed to prevent. The smart approach is not to resist the technology, but to govern it properly from the outset. Because in identity, as in most things, clever without controlled is just chaos in a smarter outfit.