Microplastics Could Be Weakening Your Bones, Research Suggests



Microplastics could be a factor in driving up cases of osteoporosis worldwide, according to recently published research. The study reveals that when these tiny plastic particles enter the body, they disrupt the functioning of bone marrow stem cells, which are essential for maintaining and repairing bone tissue.

Throughout your life, your bones are replenished. Osteoporosis is a condition where this process goes wrong, with the breakdown of bone outstripping the rate at which it is replaced. This leads to bones weakening over time and becoming more likely to fracture. The condition has many risk factors—age, sex, medications, diet, smoking and drinking, and genetics are all known to influence it—with the disease developing slowly over time. Often people don’t realize they have the condition until they break a bone.

This new analysis, published in the journal Osteoporosis International, adds exposure to microplastics as a potential new risk factor. The research reviewed 62 scientific articles that had run various laboratory and animal tests on the possible effects of micro- and nanoplastics on bone. Analysis of lab experiments showed that microplastics stimulate the formation of osteoclasts, cells created by stem cells in the bone marrow that degrade bone tissue to promote resorption, the process in which the body breaks down and eliminates old or damaged bone.

The study also found that, in relation to bones, plastic particles can reduce the viability of cells, induce premature cellular aging, modify gene expression, and trigger inflammatory responses. The combination of these effects generates an imbalance in which osteoclasts destroy more bone tissue than is regenerated, causing an accelerated weakening of bone structure.

Turning to animal studies, the researchers found that the accumulation of microplastics in the body decreases the white blood cell count, which suggests alterations in bone marrow function. These studies also indicated that the impact of microplastics on osteoclasts may be associated with deterioration of bone microstructure and the formation of irregular cellular structures, increasing the risk of bone fragility, deformities, and fractures.

“In this study, the adverse effects observed culminated, worryingly, in the interruption of the animals’ skeletal growth,” said coauthor Rodrigo Bueno de Oliveira in a press release. “The potential impact of microplastics on bones is the subject of scientific studies and isn’t negligible.”

Oliveira, who is the coordinator of the Laboratory for Evaluation of Mineral and Bone Disorders in Nephrology at the State University of Campinas in Brazil, is now working with his team to demonstrate in practice the relationship between exposure to microplastics and bone deterioration. This research will begin by evaluating the effects of microplastic particles on rodents’ femurs.

“Although osteometabolic diseases are relatively well understood, there’s a gap in our knowledge regarding the influence of microplastics on the development of these diseases. Therefore, one of our goals is to generate evidence suggesting that microplastics could be a potential controllable environmental cause to explain, for example, the increase in the projected number of bone fractures,” Oliveira said.

Microplastics and nanoplastics are small fragments of plastic—some so small that they’re invisible to the naked eye—that become detached from everyday objects when sunlight, wind, rain, seawater, or abrasion degrade them. The main difference between the two lies in their size: microplastics measure from 1 micrometer (one-thousandth of a millimeter) to 5 millimeters, while nanoplastics are smaller than 1 micrometer. These particles have been detected all over the world in natural environments, as well as throughout the human body and in meat, water, and various agricultural products.
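The size cutoffs above are simple enough to state in code. Here is a minimal sketch of the classification; the function name and the choice of micrometres as the unit are mine for illustration, not from the study:

```python
def classify_particle(size_um: float) -> str:
    """Classify a plastic fragment by size, per the ranges above.

    Nanoplastics: smaller than 1 micrometre.
    Microplastics: 1 micrometre up to 5 millimetres (5,000 micrometres).
    """
    if size_um <= 0:
        raise ValueError("size must be positive")
    if size_um < 1:
        return "nanoplastic"
    if size_um <= 5_000:
        return "microplastic"
    return "macroplastic"  # larger debris, outside both categories

print(classify_particle(0.5))  # nanoplastic
print(classify_particle(100))  # microplastic
```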

Studies have started to show that this type of plastic contamination can damage health. Experts argue that this means the world urgently needs to reduce its use of plastics. Every year more than 500 million tons of the material are produced worldwide, but only 9 percent is recycled, with much of the remainder spreading into the environment and degrading.

This story originally appeared on WIRED en Español and has been translated from Spanish.




Need One Pair for Hiking, Traveling, and Working Out? Try Gravel Running Shoes



HOKA’s max-stacked Rocket X Trail combines road-race-shoe energy with boosted grip from a 3-mm lugged outsole. If you’re looking for a fast shoe to go on the attack, this is it. It’s also fantastic for all-around comfort. In testing, I laced up the Rocket X Trail fresh out of the box and ran for 3 hours (just short of 19 miles) across roads, forest gravel trails, grass, and some serious water. It delivered efficiency and energy whether I was moving at marathon pace or with the heavier, tired, ragged footfalls of the latter miles.

The rockered, supercritical midsole uses HOKA’s liveliest foam, similar to those in its race-ready road shoes, along with a carbon plate. The combination makes for a genuinely fun ride that’s smooth, springy, fast, and consistent. It’s also highly cushioned, so you sacrifice a lot of ground feel for that big-stack springy softness, and it’s less stable over very lumpy terrain. But on open, flat, runnable mixed terrain, it’s excellent.

The lightweight uppers have a race-shoe-ready feel, and after runs through ankle-deep flooded sections, they shed water quickly. This is a pricey road-to-trail shoe, but it’s versatile, and there’s plenty of winter road potential, too.

Specs
Weight: 9.45 oz
Heel-to-toe drop: 6 mm
Lug depth: 3 mm




If a Garmin Is Too Expensive, Consider Suunto’s Latest Adventure Watch



It’s always pleasing to see an array of physical buttons, and you get sizable ones too: you’re not going to miss these wide, flat buttons even when picking up the pace. The silicone strap has a nice stretch to it, and while the button clasp is a bit awkward to get into place, this watch does not budge.

Suunto has jumped on the flashlight trend, with an LED light strip sitting on the front of the case. You can adjust brightness levels, and there are SOS and alert modes that emit a very noticeable pulsating light pattern. I found the light useful both rooting around indoors and on nighttime outings.

The biggest change is the introduction of a 1.5-inch, 466 x 466 AMOLED display. This replaces the dull, albeit very visible, memory-in-pixel (MIP) display. Suunto also ditched solar charging, which required spending a significant amount of time outside to reap its battery benefits.

Adding AMOLED screens to outdoor watches has been contentious. The older MIP displays are just more power-efficient. The Vertical 2 is down by about 10 days from the older Vertical for what Suunto calls daily use.

Still, even if you’re putting its tracking and mapping features to use, you’re not going to be reaching for the charger every few days. After two hours of tracking in optimal GPS mode, the battery only dropped by 2 to 3 percent. The battery drop outside of tracking is also small and the standby performance is excellent as well.
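Those drain figures imply a long tracking ceiling. As a rough sketch, assuming the drain stays roughly linear (which real batteries only approximate), a 2 to 3 percent drop over two hours extrapolates to somewhere between about 67 and 100 hours of optimal-GPS tracking on a full charge:

```python
def implied_gps_hours(drop_pct: float, hours: float, full_pct: float = 100.0) -> float:
    """Extrapolate full-battery GPS tracking time from an observed drain.

    Assumes the drain rate is roughly linear over the whole discharge,
    which real batteries only approximate.
    """
    return full_pct * hours / drop_pct

# Observed: a 2-3 percent drop over two hours of optimal-GPS tracking.
low = implied_gps_hours(3.0, 2.0)   # worst case, ~67 hours
high = implied_gps_hours(2.0, 2.0)  # best case, 100 hours
print(f"Implied GPS endurance: {low:.0f}-{high:.0f} hours")
```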

Software Updates


A more streamlined set of smartwatch features helps reserve battery for when it really matters. Unfortunately, some of that battery life likely comes from limitations: paired to an iPhone rather than an Android phone, you don’t get phone notifications or responses. There’s also no onboard music player, though you do get a pretty slick set of music playback controls that are accessible during tracking.




Edge AI: Business cost, risk and control | Computer Weekly



Over the past few years, edge artificial intelligence (AI) has quickly transformed from a niche technology to a vital and strategic necessity. This is mainly because it helps resolve or minimise some of the key bottlenecks of traditional cloud-based AI. These include data volume, latency, privacy and cost, among others, while allowing companies to make instant decisions to keep up with modern and increasingly automated operations. 

As a result, the deployment of edge AI is no longer only a technical architecture choice, but one that is actively reshaping risk, cost, compliance and responsibility for enterprises. Businesses are increasingly choosing to store sensitive information mainly on local networks, instead of relying on cloud providers, which has further driven the growth of edge AI.

Rather than asking whether or not to adopt edge AI, the crucial question for most companies is how to do so without creating new security, cost and governance issues. Because the technology is still relatively new, many companies risk implementing edge AI simply to jump on the AI bandwagon, without being fully aware of which situations can benefit most from it.

“Edge AI attracts a lot of enthusiasm because it enables real-time, autonomous decisions. However, the real danger is a false sense of technological maturity,” notes Michaël Bikard, professor of strategy at the Insead business school. “Edge AI can work well locally while producing fragile outcomes at the system level. Historically, that’s when failures occur. Not because the technology fails, but because it is trusted too early, before institutions, organisations and governance are ready.”

As such, understanding the consequences of edge AI deployment is paramount to deciding long-term strategy. 

Why businesses are moving from cloud-first to hybrid

Businesses are increasingly choosing a more hybrid AI approach over a cloud-first strategy, driven mainly by larger and more complex AI workloads. Many firms have also been disappointed by the savings achieved by adopting a full public cloud strategy, instead being faced with sharply surging operational costs. 

These costs, exacerbated by data-heavy applications, mainly arose from moving large datasets to and from the cloud and between providers. Surprise fees and unpredictable bills have further strained IT budgets and complicated forecasting.


On the other hand, with edge AI, companies can run stable and predictable workloads on-premise much cheaper than in the cloud. 

Latency is another overarching concern. Edge AI can often beat the cloud at minimising latency for applications that need real-time, high-speed processing, such as operational control systems and local analytics.

In highly regulated industries such as finance and healthcare, some data may only be stored within certain jurisdictions, which has further driven the shift to edge AI or on-premise solutions.

Major, single cloud providers can also come with supplier lock-ins, while multicloud environments are increasingly complicated to manage, also leading to hybrid approaches.

A hybrid strategy lets companies use public cloud to train and update applications which need to scale fast, while keeping high-volume, sensitive or stable data on-premise. This allows organisations to balance agility, cost efficiency and operational resilience, especially in a global context where real-time intelligence is increasingly valuable. 

Edge AI business drivers: What’s real and what’s noise 

At present, most businesses using edge AI have adopted the technology due to practical operational needs. Successful deployments have focused on solving specific, cloud-only limitations, rather than trying to overhaul entire company tech infrastructures.

The need for real-time decision-making has primarily driven edge AI adoption, especially in sectors like infrastructure, logistics, manufacturing and transport, where latency can have far-reaching operational and financial consequences that the technology helps to cut down.

Applying edge AI in these sectors lets companies process data closer to where it is generated, enabling them to react faster even when central connectivity is lost.

The technology also helps organisations dealing with sensitive data stay legally and financially compliant in jurisdictions with especially strict data storage laws. 

For companies working on critical operations, edge AI can greatly improve operational resilience by making sure that data and intelligence are distributed throughout a number of locations. This helps reduce dependence on centralised systems, which in turn decreases the impact of outages.

However, some business drivers are vastly overestimated when it comes to influencing the need to implement edge AI. The biggest of these is short-term cost savings. Edge AI can certainly cut transfer and cloud data consumption costs in the long run.

However, it initially needs significant capital expenditure, mainly in the form of hardware device upgrades. There are also ongoing maintenance, monitoring and software update costs following implementation. In some cases, integration with legacy systems may be slower than expected and businesses may have to hire specialised labour as well. Edge AI systems also use considerable amounts of power, leading to higher energy bills.

These factors can all cause costs to be higher in the first few months, requiring businesses to have a long-term view when it comes to seeing strategic benefits from edge AI.
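That trade-off between upfront capital expenditure and ongoing cloud savings can be framed as a simple break-even calculation. The figures below are purely illustrative, not from the article:

```python
def breakeven_months(capex: float, monthly_edge_opex: float,
                     monthly_cloud_cost: float) -> float:
    """Months until edge hardware spend is repaid by cloud savings.

    Returns infinity if edge running costs are not actually lower
    than the cloud bill, i.e. the investment never pays back.
    """
    monthly_saving = monthly_cloud_cost - monthly_edge_opex
    if monthly_saving <= 0:
        return float("inf")
    return capex / monthly_saving

# Illustrative numbers only: $120k of hardware, $8k/month edge upkeep,
# versus a $20k/month cloud bill -> repaid in 10 months.
print(breakeven_months(120_000, 8_000, 20_000))  # 10.0
```

A calculation like this is only a starting point; as the article notes, integration delays, specialised hiring and energy bills can push the real break-even point further out.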

Another often overestimated notion is that edge AI can deliver anything like “super-intelligence” by running huge, complicated models as datacentre graphics processing units do. Given current computing and power restrictions, this scenario is highly unlikely in most cases at the moment.

Similarly, expectations of businesses being able to switch entirely to edge AI, instead of a hybrid approach, are also unrealistic, mainly because of practical deployment, integration and maintenance limitations across various locations. 

How edge AI is changing security, governance and ownership

As edge AI becomes more embedded in hybrid business tech strategies, risk management, enterprise security and governance are also changing, moving away from centralised IT control. These areas are now being shaped by local operational teams taking increasingly autonomous decisions, factoring in the real-time conditions of critical physical infrastructure.

Rising edge AI usage could heighten security concerns as well, as it widens organisational attack surfaces through multiple distributed devices and infrastructure. These then need to be protected, monitored and updated equally, following a set of standard guidelines, despite each of them presenting their own unique limitations. 


“AI systems can perform exceptionally well under conditions similar to their training data, yet fail abruptly under rare, extreme, or novel scenarios – precisely the situations that matter most in critical infrastructure,” remarks Florian Stahl, chair of quantitative marketing and consumer analytics at Mannheim Business School.

Patch management can pose more issues with edge AI as well, with thousands of endpoints and vulnerabilities causing potential delays and discrepancies in maintenance. 

With edge AI being all about local deployments, more questions around version control, oversight and audit issues can arise. This means that companies may need to maintain more in-depth and regular records about data inputs, decision-making processes and operational factors. Highly regulated industries may especially demand evidence trails and seek greater accountability, which can impact company reputations and licences. 

“Real-time AI systems, particularly those based on machine learning, often operate as ‘black boxes’, making it difficult to explain or audit decisions when failures occur. This lack of transparency is problematic in infrastructures where accountability and post-incident analysis are essential,” Stahl adds. 

As autonomous decisions taken locally can have very real financial, safety and compliance consequences, businesses may be compelled to take accountability far more seriously if they choose to use edge AI. 

Senior leadership may also need to adapt centralised organisational and governance models to a more distributed intelligence strategy, all while keeping costs low. 

These factors have led to edge AI becoming a structural change just as much as a technical one, impacting how and where decisions are taken, how risk is evaluated and overall accountability.

What leaders should consider before implementing edge AI 

Given the considerable initial investment required by most edge AI models, leaders should prioritise long-term strategic impact, rather than the hype of the latest technology. This means that while evaluating company-readiness, apart from timing, the potential scope of the intended edge AI model is paramount.

The biggest factor to consider is which processes or systems are most likely to benefit from using edge AI first and which can wait for a few more months. Ideally, businesses should prioritise any processes where latency, operational risk and data locality are most critical. By doing this, organisations can spread out costs while testing new deployments in a relatively lower-risk manner. 

“Importantly, organisations should evaluate AI deployments not only through efficiency metrics, but also through risk-adjusted performance indicators, recognising that marginal efficiency gains are rarely justified if they introduce disproportionate systemic or ethical risks,” Stahl advises.

The next question is: to scale or not to scale? In several cases, a pilot edge AI deployment is either enough for the short-term, does not deliver the expected results, or highlights many hidden costs and operational issues. 

In these cases, decision-makers need to evaluate whether it is worth taking the risk to scale, which will need more investment, specialised skills and manpower.

However, knowing when not to use edge AI, and when it could cause more harm than good, is equally important for businesses. This is primarily in cases where data volumes are still low, latency is not crucial, or the company does not have the means to appropriately handle several distributed endpoints.

“Edge AI should not be deployed in sectors where use cases are broad, stakes are high, and the consequences of errors are poorly understood,” Insead’s Bikard states. “That combination usually signals a timing problem rather than a technological one. In open, highly interconnected environments, even small mistakes can cascade before organisations have time to respond.”

In such cases, strategic restraint is far more conducive to long-term value.

From tech choice to organisational shift

Ultimately, implementing edge AI models should be primarily focused on delivering long-term, strategic value, rather than a trend-based decision. This is especially true if latency and real-time data analysis pose real risks. Businesses need to consider that edge AI use is likely to reshape everything from cost structures and decision-making to autonomy and risk, and prepare accordingly.

“There are real potential gains from using AI for predictive maintenance, but those gains rarely come from the technology alone. For AI to pay off, the surrounding organisation – its incentives, culture, structures and skills – must also adapt. Predictions only create value if people are empowered to act on them,” Bikard concludes. 

Enterprises that treat edge AI as an operational shift in its own right, rather than an independent feature tacked onto legacy systems, will be better placed to take advantage of it in the long run.


