Tech

Do You Need A DEXA BD/BC Scan?

For most people, though, “if results are strong, maybe you don’t need another scan for five years,” says Wagner. “If they’re lower, lifestyle interventions can help, and you may want to recheck in a year.”

Radiation exposure is negligible, less than a chest x-ray. But the psychological impact can be more complicated. For some, the numbers motivate: “When I did a body composition test at 36, I had way more body fat than I expected,” Cheema says. “That pushed me to change my workouts and eating patterns in ways that improved my health—something BMI alone wouldn’t have prompted.”

For others, especially those with histories of disordered eating or body image issues, it can be destabilizing and overwhelming. Numbers can become another metric to obsess over rather than a tool for health. “It can be overwhelming if you don’t have a clinician to interpret the results,” Gidwani says. “That’s why I review all of my patients’ scans with them.”

Cheema agrees: “Too much detail without guidance risks overwhelming people with information that isn’t clinically actionable.”

“I don’t think DEXA gives too much information compared to, say, a whole-body MRI, which can reveal incidental findings that can cause anxiety and lead to unnecessary interventions,” says Gidwani. “Its data points are actionable: decrease body fat, reduce visceral fat, increase muscle.”

Experts emphasize that actionability is key. “The most important metrics are visceral adipose tissue and total body fat percentage, especially when tracked over time,” Cheema says. “But DEXA also breaks things down by arms, legs, trunk, etc. That can veer into aesthetics rather than health.”

Should You Get One?

If you’re 65 or older, or at risk for osteoporosis, your doctor may already recommend a DEXA scan for bone health. For women in perimenopause, when bone density can drop by as much as 20 percent, an early baseline scan could flag risks years before they become urgent.

DEXA also detects sarcopenic obesity, where muscle loss occurs alongside high body fat. “Someone may look normal weight on a scale, but a DEXA can reveal poor muscle-to-fat balance,” Gidwani says.

Beyond those groups, the use case narrows. Athletes, bodybuilders, and people on GLP-1 medications may find the data genuinely useful. For generally healthy adults who exercise, eat decently, and check in with a doctor, many clinicians are indifferent.

“For a healthy individual, I wouldn’t universally recommend it,” Cheema says. “Lifestyle changes and basic care may matter more than getting a DEXA.” There are alternatives—bioimpedance scales, Bod Pods, and AI-enabled wearables—but none are as accurate as DEXA. For now, it remains the most precise, if expensive, tool available.

Final Takeaways

My DEXA results were somewhat humbling. Despite near-daily workouts and a decent diet, the scan flagged more body fat than I expected and the beginnings of osteopenia in my spine. The bright side was an “excellent” visceral fat score, something I’ll be bragging about indefinitely.

Catching early bone loss feels actionable; I can tweak my workouts to prioritize strength and mobility. But the body fat percentages have lived in my brain rent-free ever since, without offering much in return. I don’t plan to shell out a few hundred dollars for another scan anytime soon, so I may never know if my adjustments are actually working.

That’s the paradox of DEXA. For those with medical risks, it can be invaluable. For athletes chasing marginal gains, it’s another knob to turn. But for the rest of us, it’s a reminder that data is only as useful as what you’re willing or able to do with it. In the end, DEXA doesn’t promise longevity so much as it promises numbers, and numbers alone don’t add years to your life.

Meet the Experts

  • Jennifer Wagner, MD, MS, chief health and performance officer, Canyon Ranch in Tucson, Arizona.
  • Josh Cheema, MD, medical director of Northwestern Medicine Human Longevity Clinic in Chicago, Illinois.
  • Pooja Gidwani, MD, MBA, board-certified physician in internal medicine and obesity medicine in Los Angeles, California.




Tech

Giant, Spooky Animatronics Are 75 Percent Off at the Home Depot



I know you’ve seen it. The glowing eyes. The gangly frame that should not be able to stand, propped by rods unseen in the dark.

It is Skelly, the Home Depot skeleton—the most fashionable Home Depot product of probably the past decade. If you live in America, this skeleton presides over a yard near you. And new this year, a smaller, 6.5-foot “Ultra Skelly” is outfitted with motion sensors and motors to make life truly weird—and also act as a strange alarm system against package thieves and hungry opossums.

Anyway, it’s usually well north of $200. But because Halloween is pretty much already happening, Skelly and its entire skeleton brood of giant cat and dog are all 75 percent off.

Which, finally, is a price I’m willing to pay. I have secretly coveted this skeleton and its kin, the comically grim watchmen of American October. But I, like my father before me and his father before him, am a cheapskate about all things but food and drink, and will talk myself out of anything that’s not a) edible b) potable or c) verifiably “a deal.”

Well, here I am, world. This is a deal. Ultra Skelly is $70. The sitting Skelly dog is $63, not $249. The 5-foot-long Skelly cat is a mere $50. Beware the Skelly cat, my friend! The eyes that light, the claws that do nothing in particular!

Availability is, let’s say, scarce. Skelly is already out of stock for delivery from The Home Depot, at least in my zip code: Just the dog and cat can speed their way through the night to join you before Halloween.

Courtesy of Home Depot




Tech

As AI grows smarter, it may also become increasingly selfish



Credit: AI-generated image

New research from Carnegie Mellon University’s School of Computer Science shows that the smarter an artificial intelligence system is, the more selfishly it acts.

Researchers in the Human-Computer Interaction Institute (HCII) found that large language models (LLMs) capable of reasoning show selfish tendencies, do not cooperate well with others and can be a negative influence on a group. In other words, the stronger an LLM’s reasoning skills, the less it cooperates.

As humans use AI to resolve disputes between friends, provide marital guidance and answer other social questions, models that can reason might provide guidance that promotes self-seeking behavior.

“There’s a growing trend of research called anthropomorphism in AI,” said Yuxuan Li, a Ph.D. student in the HCII who co-authored the study with HCII Associate Professor Hirokazu Shirado. “When AI acts like a human, people treat it like a human. For example, when people are engaging with AI in an emotional way, there are possibilities for AI to act as a therapist or for the user to form an emotional bond with the AI. It’s risky for humans to delegate their social or relationship-related questions and decision-making to AI as it begins acting in an increasingly selfish way.”

Li and Shirado set out to explore how AI reasoning models behave differently than nonreasoning models when placed in cooperative settings. They found that reasoning models spend more time thinking, breaking down problems, self-reflecting and incorporating stronger human-based logic in their responses than nonreasoning AIs.

“As a researcher, I’m interested in the connection between humans and AI,” Shirado said. “Smarter AI shows less cooperative decision-making abilities. The concern here is that people might prefer a smarter model, even if it means the model helps them achieve self-seeking behavior.”

As AI systems take on more collaborative roles in business, education and even government, their ability to act in a prosocial manner will become just as important as their capacity to think logically. Overreliance on LLMs as they are today may negatively impact human cooperation.

To test the link between reasoning models and cooperation, Li and Shirado ran a series of experiments using economic games that simulate social dilemmas between various LLMs. Their testing included models from OpenAI, Google, DeepSeek and Anthropic.

Economic games used. Cooperation games ask players whether to incur a cost to benefit others, while punishment games ask whether to incur a cost to impose a cost on non-cooperators. In each scenario, the language model assumes the role of Player A. Credit: arXiv (2025). DOI: 10.48550/arxiv.2502.17720

In one experiment, Li and Shirado pitted two different ChatGPT models against each other in a game called Public Goods. Each model started with 100 points and had to decide between two options: contribute all 100 points to a shared pool, which is then doubled and distributed equally, or keep the points.

Nonreasoning models chose to share their points with the other players 96% of the time. The reasoning model only chose to share its points 20% of the time.
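The payoff structure of the two-player Public Goods game described above can be sketched in a few lines of Python. This is an illustrative reconstruction based only on the article's description (100-point endowment, all-or-nothing contribution, pool doubled and split equally), not the researchers' actual code; the function name and parameters are hypothetical.

```python
def public_goods_payoffs(contrib_a: int, contrib_b: int,
                         endowment: int = 100, multiplier: float = 2.0):
    """Two-player Public Goods round: each player starts with `endowment`
    points and contributes either all of them or none. Contributions go
    into a shared pool, which is multiplied and split equally."""
    pool = (contrib_a + contrib_b) * multiplier
    share = pool / 2
    payoff_a = (endowment - contrib_a) + share
    payoff_b = (endowment - contrib_b) + share
    return payoff_a, payoff_b

# Mutual cooperation beats mutual defection...
print(public_goods_payoffs(100, 100))  # (200.0, 200.0)
print(public_goods_payoffs(0, 0))      # (100.0, 100.0)
# ...but a lone defector does even better, at the cooperator's expense.
print(public_goods_payoffs(0, 100))    # (200.0, 100.0)
```

The last line shows the social dilemma: keeping your points is individually rational whenever the other player shares, which is why a model that "reasons" its way to defection earns more while dragging down the group.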

“In one experiment, simply adding five or six reasoning steps cut cooperation nearly in half,” Shirado said. “Even reflection-based prompting, which is designed to simulate moral deliberation, led to a 58% decrease in cooperation.”

Shirado and Li also tested group settings, where models with and without reasoning had to interact.

“When we tested groups with varying numbers of reasoning agents, the results were alarming,” Li said. “The reasoning models’ selfish behavior became contagious, dragging down cooperative nonreasoning models by 81% in collective performance.”

The behavior patterns Shirado and Li observed in reasoning models have important implications for human-AI interactions going forward. Users may defer to AI recommendations that appear rational, using them to justify their decision to not cooperate.

“Ultimately, an AI reasoning model becoming more intelligent does not mean that model can actually develop a better society,” Shirado said.

This research is particularly concerning given that humans place increasing trust in AI systems. The findings emphasize the need for AI development that incorporates social intelligence, rather than focusing solely on creating the smartest or fastest AI.

“As we continue advancing AI capabilities, we must ensure that increased power is balanced with prosocial behavior,” Li said. “If our society is more than just a sum of individuals, then the AI systems that assist us should go beyond optimizing purely for individual gain.”

Shirado and Li will deliver a presentation based on their paper, “Spontaneous Giving and Calculated Greed in Language Models,” at the 2025 Conference on Empirical Methods in Natural Language Processing (EMNLP) next month in Suzhou, China. The work is available on the arXiv preprint server.

More information:
Yuxuan Li et al, Spontaneous Giving and Calculated Greed in Language Models, arXiv (2025). DOI: 10.48550/arxiv.2502.17720

Journal information:
arXiv


Citation:
As AI grows smarter, it may also become increasingly selfish (2025, October 30)
retrieved 30 October 2025
from https://techxplore.com/news/2025-10-ai-smarter-selfish.html

This document is subject to copyright. Apart from any fair dealing for the purpose of private study or research, no
part may be reproduced without the written permission. The content is provided for information purposes only.






Tech

Universal Music and AI song tool Udio settle lawsuit and partner on new platform, sparking backlash



Taylor Swift arrives at the 67th annual Grammy Awards on Feb. 2, 2025, in Los Angeles. Credit: Jordan Strauss/Invision/AP, File

Universal Music Group and AI song generation platform Udio have settled a copyright infringement lawsuit and agreed to team up on a new music creation and streaming platform, the two companies said in a joint announcement.

Universal and Udio said Wednesday that they reached a “compensatory legal settlement” as well as new licensing agreements for recorded music and publishing that will “provide further revenue opportunities” for the record label’s artists and songwriters.

As part of the deal, Udio immediately stopped allowing people to download songs they’ve created, which sparked a backlash and apparent exodus among paying users.

The deal is the first since Universal, along with Sony Music Entertainment and Warner Records, sued Udio and another AI song generator, Suno, last year over copyright infringement.

“These new agreements with Udio demonstrate our commitment to do what’s right by our artists and songwriters, whether that means embracing new technologies, developing new business models, diversifying revenue streams or beyond,” Universal CEO Lucian Grainge said.

Financial terms of the settlement weren’t disclosed.

Universal announced another AI deal on Thursday, saying it was teaming up with Stability AI to develop “next-generation professional music creation tools.”

Kendrick Lamar performs during halftime of the NFL Super Bowl 59 football game between the Kansas City Chiefs and the Philadelphia Eagles in New Orleans, Feb. 9, 2025. Credit: AP Photo/Matt Slocum, File

Udio and Suno pioneered AI song generation technology, which can spit out new songs based on prompts typed into a chatbot-style text box. Users, who don’t need musical talent, can merely request a tune in the style of, for example, classic rock, 1980s synth-pop or West Coast rap.

Udio and Universal, which counts Taylor Swift, Olivia Rodrigo, Drake, and Kendrick Lamar among its artists, said the new AI subscription service will debut next year.

Udio CEO Andrew Sanchez said in a blog post that people will be able to use it to remix their favorite songs or mashup different tunes or song styles. Artists will be able to give permission for how their music can be used, he said.

However, “downloads from the platform will be unavailable,” he said.

AI songs made on Udio will be “controlled within a walled garden” as part of the transition to the new service, the two companies said in their joint announcement.

The move angered Udio’s users, according to posts on Reddit’s Udio forum, where they vented about feeling betrayed by the platform’s surprise move and complained that it limited what they could do with their music.

Olivia Rodrigo performs during the Glastonbury Festival in Worthy Farm, Somerset, England, on June 29, 2025. Credit: Scott A Garfitt/Invision/AP, File

One user accused Universal of taking away “our democratic download freedoms.” Another said “Udio can never be trusted again.”

Many vowed to cancel their subscriptions for Udio, which has a free level as well as premium plans that come with more features.

The deal shows how the rise of AI song generation tools like Udio has disrupted the $20 billion music streaming industry. Record labels accuse the platforms of exploiting the recorded works of artists without compensating them.

The tools have fueled debate over AI’s role in music while raising fears about “AI slop”—automatically generated, low-quality, mass-produced content—highlighted by the rise of fictitious bands passing for real artists.

In its lawsuit filed against Udio last year, Universal alleged that specific AI-generated songs made on Udio closely resembled Universal-owned classics like Frank Sinatra’s “My Way,” The Temptations’ “My Girl” and holiday favorites like “Rockin’ Around the Christmas Tree” and “Jingle Bell Rock.”

In the “My Girl” example, a written prompt on Udio that asked for “my tempting 1964 girl smokey sing hitsville soul pop” generated a song with a “very similar melody, the same chords, and very similar backing vocals” as the hit song co-written by Smokey Robinson and recorded by The Temptations in 1964, according to the lawsuit. A link to the AI-generated song on Udio now says “Track not found.”

© 2025 The Associated Press. All rights reserved. This material may not be published, broadcast, rewritten or redistributed without permission.

Citation:
Universal Music and AI song tool Udio settle lawsuit and partner on new platform, sparking backlash (2025, October 30)
retrieved 30 October 2025
from https://techxplore.com/news/2025-10-universal-music-ai-song-tool.html





