NHS App set to front-end online health service access | Computer Weekly


As part of the health service’s 10-year plan, the NHS App is set to become the entry point to an online hospital that, by 2027, will start connecting patients digitally to expert clinicians anywhere in England.

Called NHS Online, the digital health service is being designed so that patients can be seen faster and so that NHS clinicians can triage them via the NHS App. NHS England said patients will also be able to book scans at local community diagnostic centres.

Since it was introduced in 2019, millions of patients have used the NHS App to manage their care. Through enhanced app functionality, when a patient has an appointment with their GP, they will have the option of being referred to the online hospital for their specialist care. They will then be able to book directly through the NHS App and have the option to see specialists from around the country online, without leaving their home or having to wait longer for a face-to-face appointment.

If a patient needs a scan, test or procedure, NHS England said the app will enable them to book this at a time that suits them at a community diagnostic centre close to home. They will be able to track their prescriptions and get advice on managing their condition without needing to travel.

NHS England said NHS Online will improve patient waiting times, delivering the equivalent of up to 8.5 million appointments and assessments in its first three years – four times more than an average trust – while enhancing patient choice and control over their care.

The service will initially be rolled out to cover a small number of planned treatment areas with the longest waits. Over time, NHS England said it will expand to more treatment areas.

The NHS 10-year health plan aims to shift the health service from analogue to digital, using technology to help deliver patient services.

The service builds on and scales the artificial intelligence (AI) and remote monitoring already in use across the NHS.

Discussing the new service, Jim Mackey, NHS chief executive, said: “The NHS can, must and will move forward to match other sectors in offering digital services that make services as personalised, convenient and flexible as possible for both staff and patients.”


Mackey described NHS Online as a huge step forward for the health service, which he said would deliver millions more appointments by the end of the decade and offer a real alternative for patients.

“Patients who choose to receive their treatment through the online hospital will benefit from us industrialising the latest technology and innovations, while the increased capacity will help to cut demand and slash waiting times,” he added.

Before NHS Online goes live, the NHS said it will learn from existing research on patient experience of online care over the past five years and build this into the programme as it develops. The programme is being developed with a commitment to patient partnership in design and delivery.

Jeanette Dickson, chair of the Academy of Medical Royal Colleges, said: “This is a novel and potentially game-changing way of improving equity and speed of access to NHS services, which would reduce health inequalities.

“Obviously, we need to make sure that those who aren’t digitally enabled are not penalised in any way, but if this approach can be delivered safely and effectively, freeing up capacity in bricks and mortar hospitals at the same time, then it could potentially be a really good thing.”



How do ‘AI detection’ tools actually work? And are they effective?

With nearly half of all Australians saying they have recently used artificial intelligence (AI) tools, knowing when and how these tools are being used is becoming more important.

Consultancy firm Deloitte recently partially refunded the Australian government after a report it published was found to contain AI-generated errors.

A lawyer also recently faced professional consequences after false AI-generated citations were discovered in a formal court document. And many universities are concerned about how their students use AI.

Amid these examples, a range of “AI detection” tools have emerged to try to address people’s need for identifying accurate, trustworthy and verified content.

But how do these tools actually work? And are they effective at spotting AI-generated material?

How do AI detectors work?

Several approaches exist, and their effectiveness can depend on which types of content are involved.

Detectors for text often try to infer AI involvement by looking for “signature” patterns, such as the predictability of certain words or phrases. For example, the use of “delves” and “showcasing” has skyrocketed since AI writing tools became more available.

However, the difference between AI and human writing patterns is getting smaller and smaller. This means signature-based tools can be highly unreliable.
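
To make the “signature” idea concrete, here is a toy Python sketch that scores a passage by how often it uses words whose frequency has spiked in AI-assisted writing. The word list and the threshold are illustrative assumptions, not the feature set of any real detector, and as noted above, this kind of heuristic is easy to beat.

```python
# Toy illustration of signature-based text detection: score a passage by how
# often it uses words whose frequency has spiked in AI-generated writing.
# The word list and threshold are illustrative assumptions only.
import re

AI_ASSOCIATED_WORDS = {"delve", "delves", "showcase", "showcasing", "tapestry", "crucial"}

def signature_score(text: str) -> float:
    """Return the fraction of words that appear in the AI-associated list."""
    words = re.findall(r"[a-z']+", text.lower())
    if not words:
        return 0.0
    hits = sum(1 for w in words if w in AI_ASSOCIATED_WORDS)
    return hits / len(words)

sample = "This report delves into the findings, showcasing crucial trends."
score = signature_score(sample)
print(f"signature score: {score:.3f}")
print("flagged as possibly AI-assisted" if score > 0.05 else "no strong signal")
```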

Detectors for images sometimes work by analyzing embedded metadata that some AI tools add to the image file.

For example, the Content Credentials inspect tool allows people to view how a user has edited a piece of content, provided it was created and edited with compatible software. Like text, images can also be compared against verified datasets of AI-generated content (such as deepfakes).
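
For the metadata approach, a very basic check is simply to open an image and look for generator tags some tools write into the file. The sketch below uses the Pillow library; the tag hints and the file name are placeholders, and real provenance schemes such as Content Credentials rely on cryptographically signed metadata rather than plain text fields.

```python
# Minimal sketch: look for generator hints in an image's embedded metadata.
# Tag hints are examples only; absence of a hit proves nothing, since
# metadata is easily stripped or never written in the first place.
from PIL import Image
from PIL.ExifTags import TAGS

SUSPICIOUS_HINTS = ("dall-e", "midjourney", "stable diffusion", "generative")

def metadata_hints(path: str) -> list[str]:
    findings = []
    with Image.open(path) as img:
        # PNG text chunks and similar key/value metadata land in img.info
        for key, value in img.info.items():
            if isinstance(value, str) and any(h in value.lower() for h in SUSPICIOUS_HINTS):
                findings.append(f"{key}: {value}")
        # EXIF tags such as "Software" can also name the generating tool
        for tag_id, value in img.getexif().items():
            name = TAGS.get(tag_id, str(tag_id))
            if isinstance(value, str) and any(h in value.lower() for h in SUSPICIOUS_HINTS):
                findings.append(f"EXIF {name}: {value}")
    return findings

print(metadata_hints("example.jpg") or "no generator hints in metadata")  # placeholder file name
```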

Finally, some AI developers have started adding watermarks to the outputs of their AI systems. These are hidden patterns embedded in the content that are imperceptible to humans but can be detected by the AI developer. None of the large developers have shared their detection tools with the public yet, though.
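
To illustrate what a text watermark can look like in principle, the toy sketch below implements a simplified “green list” scheme of the kind described in the research literature: the generator is assumed to have nudged its word choices toward a secret, hash-derived subset of the vocabulary, and the detector recomputes that subset and counts hits. This is not how any particular developer’s system works; it only shows why the pattern is invisible to readers but detectable with the key.

```python
# Toy "green list" text watermark detector. The generator is assumed to have
# biased its word choices toward a secret subset derived from a keyed hash of
# the previous word; the detector recomputes that subset and counts hits.
# Purely illustrative; real schemes operate on model tokens with proper
# statistical tests.
import hashlib

SECRET_KEY = b"watermark-demo-key"  # shared between generator and detector

def in_green_list(prev_word: str, word: str) -> bool:
    digest = hashlib.sha256(SECRET_KEY + prev_word.encode() + word.encode()).digest()
    return digest[0] % 2 == 0  # roughly half of all word pairs are "green"

def green_fraction(text: str) -> float:
    words = text.lower().split()
    pairs = list(zip(words, words[1:]))
    if not pairs:
        return 0.0
    hits = sum(in_green_list(prev, cur) for prev, cur in pairs)
    return hits / len(pairs)

# Unwatermarked text should hover near 0.5; watermarked text should sit well above.
frac = green_fraction("the quick brown fox jumps over the lazy dog")
print(f"green fraction: {frac:.2f} (values far above 0.5 suggest a watermark)")
```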

Each of these methods has its drawbacks and limitations.

How effective are AI detectors?

The effectiveness of AI detectors can depend on several factors. These include which tools were used to make the content and whether the content was edited or modified after generation.

The tools’ training data can also affect results.

For example, key datasets used to detect AI-generated pictures do not have enough full-body pictures of people or images of people from certain cultures. This means successful detection is already limited in many ways.

Watermark-based detection can be quite good at detecting content made by AI tools from the same company. For example, if you use one of Google’s AI models such as Imagen, Google’s SynthID watermark tool claims to be able to spot the resulting outputs.

But SynthID is not publicly available yet. It also doesn’t work if, for example, you generate content using ChatGPT, which isn’t made by Google. Interoperability across AI developers is a major issue.

AI detectors can also be fooled when the output is edited. For example, if you use a voice-cloning app and then add noise or reduce the quality of the output (by compressing it into a smaller file, say), this can trip up voice AI detectors. The same is true of AI image detectors.
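
The fragility described above is easy to demonstrate: light noise and a lossy re-encode leave an image looking the same to a person while disturbing the low-level statistics many detectors rely on. A minimal sketch with NumPy and Pillow, using placeholder file names:

```python
# Illustration of the post-processing described above: add slight noise and
# re-save an image at lower quality. The image still looks the same to a
# viewer, but its low-level statistics change.
import numpy as np
from PIL import Image

def degrade(path_in: str, path_out: str, noise_std: float = 5.0, quality: int = 60) -> None:
    img = np.asarray(Image.open(path_in).convert("RGB"), dtype=np.float32)
    noisy = img + np.random.normal(0.0, noise_std, img.shape)  # light Gaussian noise
    noisy = np.clip(noisy, 0, 255).astype(np.uint8)
    Image.fromarray(noisy).save(path_out, quality=quality)     # lossy JPEG re-encode

degrade("original.png", "degraded.jpg")  # file names are placeholders
```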

Explainability is another major issue. Many AI detectors will give the user a “confidence estimate” of how certain they are that something is AI-generated. But they usually don’t explain their reasoning or how they reached that conclusion.

It is important to realize that it is still early days for AI detection, especially when it comes to automatic detection.

A good example of this can be seen in recent attempts to detect deepfakes. The winner of Meta’s Deepfake Detection Challenge identified four out of five deepfakes. However, the model was trained on the same data it was tested on—a bit like having seen the answers before it took the quiz.

When tested against new content, the model’s success rate dropped. It only correctly identified three out of five deepfakes in the new dataset.

All this means AI detectors can and do get things wrong. They can produce false positives (claiming something is AI-generated when it’s not) and false negatives (claiming something is human-made when it was actually AI-generated).
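
Those two error types are easiest to see as rates computed over a labelled test set. The counts in the sketch below are invented purely for illustration:

```python
# Hypothetical evaluation of a detector on 200 labelled samples, to show how
# false positive and false negative rates are computed. All numbers are made up.
ai_samples = 100             # truly AI-generated
human_samples = 100          # truly human-written
true_positives = 78          # AI content correctly flagged
false_positives = 9          # human work wrongly flagged as AI
false_negatives = ai_samples - true_positives  # AI content missed = 22

false_positive_rate = false_positives / human_samples
false_negative_rate = false_negatives / ai_samples
print(f"false positive rate: {false_positive_rate:.0%}")  # 9% of human work wrongly flagged
print(f"false negative rate: {false_negative_rate:.0%}")  # 22% of AI content missed
```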

For the users involved, these mistakes can be devastating—such as a student whose essay is dismissed as AI-generated when they wrote it themselves, or someone who mistakenly believes an AI-written email came from a real human.

It’s an arms race as new technologies are developed or refined, and detectors are struggling to keep up.

Where to from here?

Relying on a single tool is problematic and risky. It’s generally safer and better to use a variety of methods to assess the authenticity of a piece of content.

You can do so by cross-referencing sources and double-checking facts in written content. Or for visual content, you might compare suspect images to other images purported to be taken during the same time or place. You might also ask for additional evidence or explanation if something looks or sounds dodgy.

But ultimately, trusted relationships with individuals and institutions will remain one of the most important factors when detection tools fall short or other options aren’t available.

This article is republished from The Conversation under a Creative Commons license (via TechXplore: https://techxplore.com/news/2025-11-ai-tools-effective.html).

The Best Organic Mattresses—All Certified, All Actually Tested


Organic bedding brand Coyuchi recently launched its own organic mattress, combining cotton, wool, and Dunlop latex atop individually wrapped coils. While Coyuchi’s linen sheets are excellent, I was a little nervous to try the company’s first mattress effort. Bedding is not a mattress, after all, and expertise does not always transfer across endeavors. In this case, though, it did. Coyuchi’s organic Natural REM Mattress is wonderfully firm without being too firm and perfect for those of us who lack a sleeping style and tend to sleep every which way—side, back, stomach. I was never uncomfortable.

The design starts with encased coils on a wool pad and then, like a Midwestern dip, layers in smaller coils, latex, and then wool, and tops it off with an organic cotton cover. There’s surprisingly good edge support considering the distance between the coils and the top, and the mattress provides good motion isolation as well. Coyuchi says the Natural REM can be used with or without a box spring. I tested it for a few months on a box spring and then spent a week with it just on the floor and did not notice a difference. At 11 inches deep, there’s room for a topper, though I did not feel the need.

The cotton and wool layers are GOTS-certified organic, while the Dunlop latex carries the GOLS certification. The material is undyed, which is great for anyone bothered by industrial dyes. As with most of these organic options, the Coyuchi is made without chemicals, foam, or glues. Coyuchi’s Natural REM organic mattress is made to order in the United States and comes with a 100-night trial, which means you can get a full refund if it doesn’t work for you. —Scott Gilbertson

Coyuchi Natural REM ranges from $1,400 for a twin to $2,400 for a California king.

Mattress type: Hybrid
Materials: Organic latex, organic wool, organic cotton (no dyes)
Sizes available: Twin, full, queen, king, California king
Firmness options: Medium firm
Certifications: GOTS, GOLS, Oeko-Tex Standard 100
Trial period: 100 nights
Return policy: Free for 100 days
Shipping: Free
Delivery options: In-home setup for $100
Warranty: 25-year limited



The Marshall Heston 120 Soundbar Is Big and Beautiful, but Does It Rock?


Under the surface are 11 individually powered speakers, including two five-inch woofers, two midrange drivers, two tweeters, and five “full-range” drivers. The collection includes both side-firing and upfiring drivers to bounce sound off your walls and ceiling for surround sound and 3D audio formats like Dolby Atmos and DTS:X.

Around back, you’ll find solid connectivity, including HDMI eARC/ARC for seamless connection to modern TVs, an HDMI passthrough port for connecting a streamer or gaming console, Ethernet, an RCA analog connection for a legacy device like a turntable, and a traditional subwoofer output that lets you sidestep Marshall’s available wireless sub. There’s no optical port, but since optical doesn’t support Dolby Atmos or DTS:X spatial audio, that’s kind of a moot point.

Setup is pretty simple, but the bar’s hefty size adds some complications. At three inches tall, it’s a tough fit beneath many TVs. Conversely, the rubber feet that separate its 43-inch-long frame from your console offer almost zero clearance at the sides and, unlike bars like Sony’s Bravia Theater 9 or System 6, there’s no way to extend them. That makes it tough to set the bar down properly with all but the thinnest pedestal TV stands, which are becoming common even in cheap TVs. All that to say, there’s a good chance you’ll need to mount your TV to use the Heston.

Like the Sonos Arc Ultra, the Heston 120 has no remote, meaning adjusting settings mostly relies on the Marshall app. The app is relatively stable, but it froze up during a firmware update for me, and it usually takes a while to connect when first opened. Those are minor quibbles, and your TV remote should serve as your main control for power and volume.

Wi-Fi connection unlocks music streaming via Google Cast, AirPlay, Spotify Connect, Tidal Connect, and internet radio stations, with Bluetooth 5.3 as a backup. Automated calibration tunes the sound to your room (complete with fun guitar tones), and in-app controls like a multi-band EQ provide more in-depth options than the physical knobs.

Premium Touch


The Heston 120’s sound profile impressed from the first video I switched on, which happened to be an episode of Bob’s Burgers. The bar immediately showcased a sense of clarity, openness, and overall definition that’s uncommon even from major players in the space.


