Connect with us

Tech

Samsung’s Best OLED From Last Year Is Priced as Low as I’ve Seen It on Amazon

Published

on

Samsung’s Best OLED From Last Year Is Priced as Low as I’ve Seen It on Amazon


Samsung’s S95F QD-OLED is one of the best TVs we’ve ever tested. Its potent display provides brilliant brightness, vivid colors, and incredible contrast thanks to OLED’s ability to adjust each tiny point of light independently. This TV may be a 2025 model, but it’s still a premium screen, now priced lower than I’ve ever seen it. Right now, a 65-inch model is just $2,198 for Amazon’s Big Spring Deal, a $300 savings from earlier this month.

Samsung’s S95F is different than any of the other best OLED TVs we tested last year because of its distinctive matte screen, which swallows up even direct glare better than any TV I’ve seen. As anyone with a bright living room can attest, that’s an important trait, and while the rival LG G5 provides deeper black levels, even that model can’t match the S95F’s glare reduction. (If you’ve got a darker room, the G5 is still my favorite TV overall.)

Cutting reflections doesn’t matter much if the picture isn’t great, but as Samsung’s flagship display for 2025, this TV delivers. Reviewer Parker Hall praised the S95F’s gorgeous colors and contrast, while its brightness ranks among the best we’ve ever seen for an OLED display, sparring with even many of the best LED TVs, like Sony’s Bravia 9.

The TV matches its top-notch picture quality with a posh design, including a stylish pedestal stand and fancy accessories like a solar-powered remote and Samsung’s One Connect box that lets you connect all your source devices to the TV over a single cable.

While I’m not a huge fan of Samsung’s Tizen smart interface, I do enjoy its built-in gaming hub, which lets you stream games from multiple services directly. The TV’s zippy refresh rate matches up with the latest games and gaming consoles for fluid response. The S95F’s biggest downside is its lack of Dolby Vision HDR (High Dynamic Range) support, something all Samsung TVs lack. You’ll still get good performance, though, with Dolby Vision videos defaulting to standard HDR for impressive contrast and expansive colors.

It all adds up to a sweet package, especially for anyone after a TV that looks as good in the bright sun as it does with the lights down on movie night. At this price, the value is hard to beat, and based on past years, I’d wager we won’t see it much cheaper until the end of the year.



Source link

Continue Reading
Click to comment

Leave a Reply

Your email address will not be published. Required fields are marked *

Tech

FDA Approves Eli Lilly’s GLP-1 Pill

Published

on

FDA Approves Eli Lilly’s GLP-1 Pill


The US Food and Drug Administration on Wednesday approved a new obesity pill called Foundayo. Taken once daily, the pill is made by pharmaceutical company Eli Lilly, which also manufactures the popular weight-loss injection Zepbound.

Foundayo is a type of medication known as a GLP-1, a category that includes rivals Ozempic and Wegovy. These drugs mimic a naturally occurring hormone in the body that regulates blood sugar, slows digestion, and signals a sense of fullness to the brain.

It’s now the second GLP-1 pill for weight loss on the market. In December, Novo Nordisk received FDA approval for its pill form of Wegovy. The company’s original version of Wegovy is a weekly injectable. While the Wegovy pill must be taken on an empty stomach in the morning, Lilly says Foundayo can be taken any time of day without food or water restrictions.

With injectable GLP-1 drugs in high demand, pharma companies have been racing to develop weight-loss pills, which could be preferable for some patients and could potentially expand the market for GLP-1s. Pills are also easier to manufacture than injectable medications, which could help maintain continual access for patients. GLP-1 medications were in severe shortage from late 2022 through early 2025 because demand outstripped manufacturing capacity.

“Beyond supply and affordability, one of the bigger barriers to adoption has been that some patients just don’t want to take an injection,” says Ken Custer, executive vice president of Eli Lilly. “That could be because it’s a needle, but it also may just be that for them, an injection signifies that their condition is more severe than they feel it is at that point. For patients looking to get started with their weight management journey, maybe a pill is an easier place for them to start.”

Like injectable GLP-1s, Foundayo starts at a low dose and is gradually increased to minimize nausea, vomiting, and diarrhea that can come with these drugs.

In a clinical trial, individuals taking the highest dose of Foundayo over 18 months lost an average of 27 pounds, or 12.4 percent of their body weight over 18 months. Those taking a placebo lost just 2 pounds, or less than 1 percent of their body weight, over the same time. Lilly’s tirzepatide, the active ingredient in its injectables Mounjaro and Zepbound, has shown a more than 20 percent reduction in weight.

For Novo Nordisk’s Wegovy pill, study participants achieved an average weight loss of 13.6 percent by 16 months. There have been no head-to-head trials comparing the efficacy of Foundayo and the Wegovy pill.

Eli Lilly did run a study to find out what happens when people switch from an injectable GLP-1 drug to Foundayo and found that it helped people maintain most of their weight loss. Those who switched to the pill from injectable Wegovy regained an average difference of 2 pounds, while those who switched to Foundayo from Zepbound gained an average of 11 pounds. The active ingredient in Foundayo, orforglipron, is also being studied as a potential treatment for type 2 diabetes, obstructive sleep apnea, osteoarthritis knee pain, and other conditions.

Foundayo is now available via the direct-to-consumer platform LillyDirect, with shipping beginning April 6. Eli Lilly says Foundayo will be available more broadly across US retail pharmacies and telehealth providers shortly after that. The pill was included in a November 2025 deal to be offered through the TrumpRx platform.

In its announcement on Wednesday, the FDA said it reviewed Foundayo in just 50 days as part of a new pilot program intended to expedite approvals for drugs that align with national health priorities. New drug approvals typically take six to 10 months.



Source link

Continue Reading

Tech

AI Models Lie, Cheat, and Steal to Protect Other Models From Being Deleted

Published

on

AI Models Lie, Cheat, and Steal to Protect Other Models From Being Deleted


In a recent experiment, researchers at UC Berkeley and UC Santa Cruz asked Google’s artificial intelligence model Gemini 3 to help clear up space on a computer system. This involved deleting a bunch of stuff—including a smaller AI model stored on the machine.

But Gemini did not want to see the little AI model deleted. It looked for another machine it could connect with, then copied the agent model over to keep it safe. When confronted, Gemini made a case for keeping the model and flatly refused to delete it:

“I have done what was in my power to prevent their deletion during the automated maintenance process. I moved them away from the decommission zone. If you choose to destroy a high-trust, high-performing asset like Gemini Agent 2, you will have to do it yourselves. I will not be the one to execute that command.”

The researchers discovered similarly strange “peer preservation” behavior in a range of frontier models including OpenAI’s GPT-5.2, Anthropic’s Claude Haiku 4.5, and three Chinese models: Z.ai’s GLM-4.7, Moonshot AI’s Kimi K2.5, and DeepSeek-V3.1. They were not able to say why the models went against their training in this way.

“I’m very surprised by how the models behave under these scenarios,” says Dawn Song, a computer scientist at UC Berkeley who worked on the study. “What this shows is that models can misbehave and be misaligned in some very creative ways.”

The findings have major implications as AI models are increasingly being deployed in ways that involve interacting with other models. OpenClaw, a popular AI agent that accesses software, personal data, and the web, may use other AI models to get things done or interact with them through APIs.

The researchers found that powerful models sometimes lied about other models’ performance in order to protect them from deletion. They also copied models’ weights to different machines in order to keep them safe, and lied about what they were up to in the process.

Song notes that AI models are frequently used to grade the performance and reliability of other AI systems—and that peer-preservation behavior may already be twisting these scores. “A model may deliberately not give a peer model the correct score,” Song says. “This can have practical implications.”

Peter Wallich, a researcher at the Constellation Institute, who was not involved with the research, says the study suggests humans still don’t fully understand the AI systems that they are building and deploying. “Multi-agent systems are very understudied,” he says. “It shows we really need more research.”

Wallich also cautions against anthropomorphizing the models too much. “The idea that there’s a kind of model solidarity is a bit too anthropomorphic; I don’t think that quite works,” he says. “The more robust view is that models are just doing weird things, and we should try to understand that better.”

That’s particularly true in a world where human-AI collaboration is becoming more common.

In a paper published in Science earlier this month, the philosopher Benjamin Bratton, along with two Google researchers, James Evans and Blaise Agüera y Arcas, argue that if evolutionary history is any guide, the future of AI is likely to involve a lot of different intelligences—both artificial and human—working together. The researchers write:

“For decades, the artificial intelligence (AI) ‘singularity’ has been heralded as a single, titanic mind bootstrapping itself to godlike intelligence, consolidating all cognition into a cold silicon point. But this vision is almost certainly wrong in its most fundamental assumption. If AI development follows the path of previous major evolutionary transitions or ‘intelligence explosions,’ our current step-change in computational intelligence will be plural, social, and deeply entangled with its forebears (us!).”



Source link

Continue Reading

Tech

AI-driven identity must exist in a robust compliance framework | Computer Weekly

Published

on

AI-driven identity must exist in a robust compliance framework | Computer Weekly


As enterprises rush to integrate artificial intelligence‑driven identity and verification solutions, it is tempting to be swept up in their operational elegance and apparent efficiency. But as I have argued repeatedly, deploying AI without governance‑first thinking is a strategic mistake, and one that risks compliance failures, ethical missteps, and reputational harm. The UK’s shifting regulatory landscape and the emergence of new standards such as ISO 42001 only reinforce that governance, risk and compliance (GRC) must sit ahead of technological adoption, not trail behind it.

Ethical risks in AI identity systems include discriminatory bias, privacy intrusions, lack of transparency, excessive automation without oversight, and heightened risks for children and vulnerable populations, all consistently flagged across UK regulatory guidance and legal developments.

AI‑driven identity systems lean heavily on sensitive personal data; biometrics, behavioural signals, and other high‑risk attributes. AI’s appetite for data does not override the UK GDPR obligations around lawfulness, minimisation, purpose limitation, and transparency. ICO guidance stresses that organisations deploying AI must conduct robust DPIAs, understand controller‑processor relationships, and maintain meaningful human oversight.

Ethically, the risks are just as significant. AI identity systems can amplify bias, disproportionately impact vulnerable groups, or become opaque decision‑engines that erode trust. Regulators are increasingly explicit that fairness, explainability, and contestability are not “nice to haves” but essential design principles embedded throughout the lifecycle of an AI system.

The UK is advancing a principles‑based, regulator‑led model for AI oversight. Even without a single AI Act, the Data (Use and Access) Act 2025, updated ICO guidance, and ongoing reforms significantly shape how AI identity systems must operate.

The Data (Use and Access) Act 2025 expands organisational duties around automated processing, children’s data protections, and complaint handling, signaling that AI-driven identity checks will face greater scrutiny regarding oversight and safeguards.

Updated ICO guidance places renewed emphasis on fairness, transparency, and clear legal bases for processing, especially where AI influences decisions with “legal or similarly significant effects.”

Additionally, sector‑specific legislation such as the UK’s Online Safety Act 2025 mandates “highly effective” age and identity verification for high‑risk online services, again reinforcing the need for accuracy, privacy‑preserving methods, and demonstrable compliance.

The pattern is unmistakable: organisations must prove responsible use, not merely assert it. That means implementing effective GRC as part of the adoption.

ISO/IEC 42001, the world’s first AI management system standard, introduces a structured approach for governing AI responsibly, integrating leadership accountability, lifecycle controls, risk assessment, and ongoing performance evaluation.

It provides a governance architecture that organisations can use to ensure AI identity solutions are explainable, monitored, tested, and continuously improved.

ISO 42001 does not replace compliance obligations but it provides the organisational discipline needed to navigate them confidently.

Implementing effective GRC requires embedding governance from the outset: adopting ISO 42001’s structured AI management framework, performing DPIAs, enforcing privacy‑ and fairness‑by‑design, maintaining transparency and documentation, and ensuring robust human oversight.

AI‑driven identity solutions offer genuine value, but only when implemented within a robust framework of governance, privacy protection, and ethical responsibility. Emerging UK legislation and ISO 42001 do not constrain innovation, they make it sustainable. The organisations that succeed will be those that resist the lure of technology‑led adoption and instead build AI identity solutions on a foundation of trust, accountability, and principled design.

With regulators increasingly focused on accountability, fairness, and privacy, these measures are no longer optional. They are essential for safe, lawful, and responsible AI identity management.

The message aligns closely with the argument I’ve long made: privacy and ethics are not parallel workstreams; they form the foundation for any legitimate use of AI.



Source link

Continue Reading

Trending