
Lumbar Support Can Make a Huge Difference in Your Office Chair

I also spoke to John Gallucci, a licensed physical therapist and athletic trainer who specializes in treating symptoms from poor office posture, and he confirmed much of what Egbert said. Closed case, right? Well, it’s certainly not just marketing speak so that office chair manufacturers can charge you extra. But there are some important factors to consider.

Not All Lumbar Support Is Equal

Gallucci was quick to point out the benefits of lumbar support, but he also issued some warnings about how to proceed. Turns out, not all lumbar support is equal. “The most important thing to look for in a chair is ergonomic adjustability,” he says, referencing the need for adjustable lumbar support. “A good chair should support your posture for long periods without causing discomfort or fatigue. That means it should allow you to adjust the seat height, seat pan depth, armrests, lumbar support, and backrest tilt.”

Chairs with fixed lumbar support can’t be adjusted to fit your body. Lumbar support and adjustments come in different forms these days. For example, some chairs have lumbar height adjustment but not depth, also known as “two-way” adjustment. Some use a dial for adjustment, and others use a ratchet or lever system. Other chairs let you adjust the entire backrest to find the right position, and some cheaper chairs resort to just a simple pad that can be moved by hand. These can, in theory, all be good solutions, so long as you’re able to find the right position.

“That curve has to be adjustable as to where it is,” Egbert says. “My butt might be lower than your butt, and you want it to match where that curve in your lower back is. You want to be able to slide it up and down.”

A good example of an ergonomic chair with “two-way” lumbar adjustment is the Branch Ergonomic Chair Pro. We’ve tested dozens of chairs, and its excellent lumbar support is one of the reasons WIRED’s office chair reviewer, Julian Chokkattu, found it so comfortable. It also doesn’t cost over a thousand dollars, as so many high-end office chairs do.

If you aren’t ready to shell out $500 on an ergonomic chair, that doesn’t mean you’re doomed to lower back pain. Some DIY solutions can even be better than a chair with inadequate lumbar adjustment. We’ve also tested some add-on lumbar cushions that we like, such as this LoveHome model you can find on Amazon.

When it comes down to it, though, lumbar support isn’t the first thing to tackle when setting up your workspace. If you’re sitting at an old desk working from only a laptop, lumbar support is never going to solve your posture issues. Fix that first, with either a laptop stand or a height-adjustable monitor.

After that, yes, lumbar support is a good thing. It needs to be adjustable and well-implemented, but it’s something you’ll want to make sure is available on your next office chair. If you’re sitting for eight hours a day, your back deserves it.


Anthropic inks multibillion-dollar deal with Google for AI chips


Artificial intelligence company Anthropic has signed a multibillion-dollar deal with Google to acquire more of the computing power needed for the startup’s chatbot, Claude.

Anthropic said Thursday the deal will give it access to up to 1 million of Google’s AI computer chips and is “worth tens of billions of dollars and is expected to bring well over a gigawatt of capacity online in 2026.”

A gigawatt, when used in reference to a power plant, is enough to power roughly 350,000 homes, according to the U.S. Energy Information Administration.

Google calls its specialized AI chips Tensor Processing Units, or TPUs. Anthropic’s AI systems also run on chips from Nvidia and the cloud computing division of Amazon, Anthropic’s first big investor and its primary cloud provider.

The privately held Anthropic, founded by ex-OpenAI leaders in 2021, last month put its value at $183 billion after raising another $13 billion in investments. Its AI assistant Claude competes with OpenAI’s ChatGPT and others, particularly in appealing to businesses that use it to assist with coding and other tasks.

© 2025 The Associated Press. All rights reserved. This material may not be published, broadcast, rewritten or redistributed without permission.


How to ensure youth, parents, educators and tech companies are on the same page on AI


Artificial intelligence is now part of everyday life. It’s in our phones, schools and homes. For young people, AI shapes how they learn, connect and express themselves. But it also raises real concerns about privacy, fairness and control.

AI systems often promise personalization and convenience. But behind the scenes, they collect vast amounts of data, make predictions and influence behavior, without clear rules or consent.

This is especially troubling for youth, who are often left out of conversations about how AI systems are built and governed.

The author’s guide on how to protect youth privacy in an AI world.

Concerns about privacy

My research team conducted a national study, hearing from youth aged 16 to 19 who use AI daily: on social media, in classrooms and in online games.

They told us they want the benefits of AI, but not at the cost of their privacy. While they value tailored content and smart recommendations, they feel uneasy about what happens to their data.

Many expressed concern about who owns their information, how it is used and whether they can ever take it back. They are frustrated by long privacy policies, hidden settings and the sense that you need to be a tech expert just to protect yourself.

As one participant said, “I am mainly concerned about what data is being taken and how it is used. We often aren’t informed clearly.”

Uncomfortable sharing their data

Young people were the most uncomfortable group when it came to sharing personal data with AI. Even when they got something in return, like convenience or customization, they didn’t trust what would happen next. Many worried about being watched, tracked or categorized in ways they can’t see.

This goes beyond technical risks. It’s about how it feels to be constantly analyzed and predicted by systems you can’t question or understand.

AI doesn’t just collect data; it draws conclusions, shapes online experiences and influences choices. That can feel like manipulation.

Parents and teachers are concerned

Adults (educators and parents) in our study shared similar concerns. They want better safeguards and stronger rules.

But many admitted they struggle to keep up with how fast AI is moving. They often don’t feel confident helping youth make smart choices about data and privacy.

Some saw this as a gap in digital education. Others pointed to the need for plain-language explanations and more transparency from the companies that build and deploy AI systems.

Professionals focus on tools, not people

The study found AI professionals approach these challenges differently. They think about privacy in technical terms such as encryption, data minimization and compliance.

While these are important, they don’t always align with what youth and educators care about: trust, control and the right to understand what’s going on.

Companies often see privacy as a trade-off for innovation. They value efficiency and performance and tend to trust technical solutions over user input. That can leave out key concerns from the people most affected, especially young users.

Power and control lie elsewhere

AI professionals, parents and educators influence how AI is used. But the biggest decisions happen elsewhere. Powerful tech companies design most AI systems and decide what data is collected, how systems work and what choices users see.

Even when professionals push for safer practices, they work within systems they did not build. Weak privacy laws and limited enforcement mean that control over data and design stays with a few companies.

This makes transparency and holding platforms accountable even more difficult.

What’s missing? A shared understanding

Right now, youth, parents, educators and tech companies are not on the same page. Young people want control, parents want protection and professionals want scalability.

These goals often clash, and without a shared vision, privacy rules are inconsistent, hard to enforce or simply ignored.

Our research shows that ethical AI governance can’t be solved by one group alone. We need to bring youth, families, educators and experts together to shape the future of AI.

The PEA-AI model

To guide this process, we developed a framework called PEA-AI: Privacy–Ethics Alignment in Artificial Intelligence. It helps identify where values collide and how to move forward. The model highlights four key tensions:

  1. Control versus trust: Youth want autonomy. Developers want reliability. We need systems that support both.
  2. Transparency versus perception: What counts as “clear” to experts often feels confusing to users.
  3. Parental oversight versus youth voice: Policies must balance protection with respect for youth agency.
  4. Education versus awareness gaps: We can’t expect youth to make informed choices without better tools and support.

What can be done?

Our research points to six practical steps:

  • Simplify consent. Use short, visual, plain-language forms. Let youth update settings regularly.
  • Design for privacy. Minimize data collection. Make dashboards that show users what’s being stored (see the sketch after this list).
  • Explain the systems. Provide clear, non-technical explanations of how AI works, especially when used in schools.
  • Hold systems accountable. Run audits, allow feedback and create ways for users to report harm.
  • Teach AI literacy. Bring it into classrooms. Train teachers and involve parents.
  • Share power. Include youth in tech policy decisions. Build systems with them, not just for them.
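
To make the “design for privacy” step concrete, here is a minimal Python sketch of consent-gated data collection paired with a plain-language dashboard. It is an illustration under invented assumptions: the ConsentSettings and UserDataStore names, fields and behavior are hypothetical, not taken from the research or from any real platform.

```python
from dataclasses import dataclass, field
from typing import Dict

@dataclass
class ConsentSettings:
    """Per-user consent flags; everything defaults to off (data minimization)."""
    personalization: bool = False      # tailored content and recommendations
    analytics: bool = False            # usage tracking
    third_party_sharing: bool = False  # sharing data outside the platform

@dataclass
class UserDataStore:
    """Keeps only data the user has consented to, and can always show it back."""
    consent: ConsentSettings = field(default_factory=ConsentSettings)
    stored: Dict[str, str] = field(default_factory=dict)

    def record(self, category: str, value: str) -> None:
        # Collect nothing unless the matching consent flag is switched on.
        if getattr(self.consent, category, False):
            self.stored[category] = value

    def dashboard(self) -> str:
        # Plain-language summary of exactly what is stored, per category.
        if not self.stored:
            return "Nothing is stored about you."
        lines = [f"- {cat}: {val!r} (you can turn this off at any time)"
                 for cat, val in self.stored.items()]
        return "Stored about you:\n" + "\n".join(lines)

# Usage: a user who enabled only personalization.
user = UserDataStore(consent=ConsentSettings(personalization=True))
user.record("personalization", "likes cycling videos")
user.record("analytics", "session length: 42 min")  # dropped: no consent given
print(user.dashboard())  # shows only the personalization entry
```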

AI can be a powerful tool for learning and connection, but it must be built with care. Right now, our research suggests young people don’t feel in control of how AI sees them, uses their data or shapes their world.

Ethical AI starts with listening. If we want digital systems to be fair, safe and trusted, we must give young people a seat at the table and treat their voices as essential, not optional.

This article is republished from The Conversation under a Creative Commons license. Read the original article.


Pro-cycling crashes can be bad, but evidence suggests slower bikes aren’t the answer


It might seem counterintuitive in a sport built around speed, but the world governing body for competitive cycling wants to slow elite riders down.

Worried about high-speed crashes during pro-racing events, the Union Cycliste Internationale (UCI) has proposed a cap on the gear size riders can use. The idea is to lower the top speed bikes can achieve.

The risks are real, too. At the recent Tour Down Under Men’s Classic in Australia, a high-speed multi-rider crash on the final corner sent bikes into the barriers and into the crowd, badly injuring a spectator.

In August this year, champion British rider Chris Froome crashed while training in France, suffering a collapsed lung, broken ribs and a spinal fracture.

But would restricting gear size prevent these kinds of high-speed crashes? Certainly, not everyone thinks so.

Earlier this month, a Belgian court paused the rule change after teams and a major cycle component maker argued the safety case was not proven. While slower bikes might sound safer, they argue, the evidence tells a different story.

What the evidence tells us

The proposed rule would limit the largest gear size to 54 teeth on the front chainring and 11 on the rear sprocket. The idea is simple: lower the top gear to reduce top speed and, in theory, cut risk.

But while speed clearly matters when it comes to crashes, it is only one part of how they happen in a tightly packed peloton (the main pack of riders in a road race).

Our recent review of 18 studies of race speed and crash risk found two clear patterns:

  • higher speed makes injuries worse once a crash occurs
  • but the link between speed and the chance of crashing is weaker and depends on context.

Injury rates in the UCI WorldTour have climbed even though average race speeds have been steady. So, something else is at work.

We also examined the proposed gear cap itself. Based on our analysis, we argue any rule change should be evidence-based rather than simply a reaction to pressure after high-profile incidents.

Understanding why crashes occur is central to this. Essentially, they are about people and space, and happen for a number of reasons:

  • when riders fight for position as they enter a narrowing corner
  • when sprint “trains” (riders in the same team lining up for aerodynamic efficiency) cross wheels
  • or when road “furniture” appears too late to be avoided.

In this year’s Paris–Nice race, for example, Mattias Skjelmose struck a traffic island at speed and abandoned the race. Reports described it as a poorly marked obstacle.

Course design, peloton density and inconsistent rule enforcement often play a bigger role than a few extra kilometers per hour.

Olympic champion Tom Pidcock demonstrates a high-speed descent on the Rossfeld Panoramastrasse in Germany.

Why a gear limit won’t help much

On hill descents, where many serious injuries occur, riders freewheel in a tucked body position. Gravity and aerodynamics set the speed—gearing does not.

When riders are actually pedaling in a sprint, a 54×11 gear at high “cadence” (around 110–120 revolutions per minute) gives a speed of roughly 65 kilometers per hour (km/h). The very fastest finishes in elite men’s races reach about 75 km/h—the absolute peak speed.
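
As a rough check on those numbers, here is a back-of-the-envelope Python sketch of the standard gear-speed arithmetic (speed = chainring/sprocket × wheel circumference × cadence). The ~2.1 m wheel rollout assumed for a 700c road tire is our assumption, not a figure from the article.

```python
# Sprint speed under the proposed 54x11 gear cap, ignoring drivetrain losses.
# Assumption (not from the article): ~2.096 m rollout for a 700x25c wheel.
WHEEL_CIRCUMFERENCE_M = 2.096

def speed_kmh(chainring_teeth: int, sprocket_teeth: int, cadence_rpm: float) -> float:
    """Road speed for a given gear and pedaling cadence."""
    metres_per_rev = (chainring_teeth / sprocket_teeth) * WHEEL_CIRCUMFERENCE_M
    return metres_per_rev * cadence_rpm * 60 / 1000  # metres/min -> km/h

for rpm in (100, 110, 120):
    print(f"54x11 at {rpm} rpm: {speed_kmh(54, 11, rpm):.1f} km/h")
# Prints roughly 61.7, 67.9 and 74.1 km/h, bracketing the speeds quoted above.
```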

A cap on gearing would trim roughly 5–10 km/h from the top-end, bringing the fastest sprints down to around 65–70 km/h. But most sprint pileups start below those speeds and are triggered by contact or line changes.

Lowering everyone’s top speed could even bunch the field more tightly and raise the risk of contact. The pro-cycling world already knows what helps: safer course design, better protection at known crash hotspots and consistent rule enforcement.

These steps match what other high-speed sports have done to reduce injuries. Motor sports redesign the environment rather than just limit speed, with NASCAR and IndyCar having adopted energy-absorbing barriers to cut wall-impact forces.

And alpine skiing manages risk with course design, as well as nets and airbag protection to control speed and crash severity.

Similar approaches to safety are used in aviation, mining and health care. The aim is to focus on the environment and behavior, measure exposure, fix the hotspots and share what works to keep improving safety.

This article is republished from The Conversation under a Creative Commons license. Read the original article.
