These Open Earbuds Offer Active Noise Canceling

Like all open-ear earbuds, the OpenFit Pro have an airy and open soundstage that delivers a more natural listening experience than regular earbuds — it’s closer to the experience of listening to speakers. You can make them sound even more immersive by activating the confusingly named Optimized for Dolby Atmos mode. I say confusing because this mode is neither a replacement for Dolby Atmos nor is it strictly for use with existing Dolby Atmos content. It is essentially Dolby’s best earbud-based audio software, which combines spatial audio processing (for a wider and deeper soundstage) with optional head tracking. Both of these features will work with any content; however, Dolby claims it works best when you’re listening to Dolby Atmos content.

It’s the first time Dolby’s tech has been employed on a set of open-ear earbuds, and it’s a great match. It boosts the perceived width and height of the space, and does so without negatively affecting dynamic range or loudness, something that often plagues similar systems. And yes, the effect is more pronounced when listening to Atmos than when playing stereo content. I’ve used Dolby’s spatial tech on several products, including the LG Tone Free T90Q, Jabra Elite 10, and Technics EAH-Z100, and this is the first time I’ve enjoyed it enough to leave it enabled for music listening.

Still, it’s not as effective as Bose’s Immersive Audio on the Bose Ultra Open Earbuds. Bose’s head tracking is smoother—particularly noticeable when watching movies—and its spatial processing is more convincing and immersive for both music and movies.

Where Shokz enjoys a big leg up on Bose is the OpenFit Pro’s call quality. The OpenFit Pro’s mics do a great job of eliminating noises on your end of the call. You could be walking down a busy street, hanging out in a full coffee shop, or even passing by an active construction site, and your callers probably won’t have a clue you aren’t sitting on a quiet park bench. As with all open-ear earbuds, being able to hear your own voice naturally (without the use of a transparency mode) eliminates the fatigue normally associated with long calls on regular earbuds.

Comfortable Design

Photograph: Simon Cohen

Comfort is a key benefit of Shokz’s OpenFit series, and the OpenFit Pro, with ear hooks that are wrapped in soft silicone, are no exception. Unlike previous OpenFit models, which position speakers just outside your ear’s concha, the Pro’s speaker pods project directly into your ears, and in my case, they make contact with the inner part of that cavity. This significantly increases stability, but over time, I became aware of that contact point.

They never became uncomfortable, but it’s not quite the forget-you’re-even-wearing-them experience of the OpenFit/OpenFit 2/+ models. As someone who wears glasses, I tend to prefer clip-style earbuds like the Shokz OpenDots One, and yet the OpenFit Pro’s ear hook shape was never an issue. Shokz includes a set of optional silicone support loops, presumably for folks with smaller ears or who need a more stable fit. They didn’t improve my fit, but then again, I’ve got pretty big ears.

As with all hook-style earbuds, the OpenFit Pro charging case is on the big side. It’s got great build quality thanks to the use of an aluminum frame, and you get wireless charging (not a given with many open-ear models), but it’s still way less pocketable than a set of AirPods Pro.

Easy to Use

Photograph: Simon Cohen

For the OpenFit Pro, Shokz has finally abandoned its hybrid touch/button controls in favor of just physical buttons, and I think it’s the right call. You can now decide exactly which button press combos control actions like play/pause, track skipping, volume, and voice assistant access, a level of freedom that wasn’t available on previous versions.




NASA Wants to Put Nuclear Reactors on the Moon



Having demonstrated that it has the operational capability to transport humans safely to the moon and back, the United States is moving on to its next major aim: It wants nuclear reactors in orbit and on the lunar surface by 2030. For such a feat, the National Aeronautics and Space Administration will have to work in conjunction with the Department of Defense and the Department of Energy.

In a post on X, the White House Office of Science and Technology Policy (OSTP) unveiled a document with new guidelines for federal agencies to establish the space nuclear technology road map for the coming years. This, they say, will ensure “US space superiority.”

At present, most space instruments rely on solar power. However, this is considered impractical for more demanding purposes: although sunlight is technically always available in space, the power is intermittent on planetary surfaces (a lunar night lasts roughly 14 Earth days) and almost always requires bulky batteries to store it.

Through nuclear fission, reactors produce fairly continuous energy for years. They can also be used for so-called nuclear electric propulsion. Continuous output makes them the most viable option for sustaining a lunar base, and it allows spacecraft to undertake long or complex missions without worrying about depleting a limited supply of chemical fuel.

Nuclear technology, in short, makes it possible to go farther, with more payload, for longer, and with fewer constraints.

According to the memorandum, the US goal is to put a medium-power reactor in orbit by 2028, with a variant designed for nuclear electric propulsion, and a first functional large reactor on the surface of the moon by 2030. To achieve this, both NASA and the Pentagon will develop energy technologies in parallel, using the current strategy of competition among contractors.

The reactors will have to be modular and scalable, and will have to include applications for both future life on the moon and space propulsion. For its part, the DOE will have to ensure that these projects have the fuel, infrastructure, and safety features necessary to achieve their objectives. In addition, the agency will evaluate whether the industry has the capacity to produce up to four reactors in five years.

The plan calls for technologies that produce at least 20 kilowatts of electricity (kWe) for three years in orbit and for at least five years on the lunar surface, with designs capable of being scaled up to 100 kWe. The first designs should arrive within a year.
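To put the plan's 20 kWe figure in perspective, here is a quick back-of-the-envelope calculation; the roughly 14-day lunar night is the only number below that does not come from the plan itself:

```python
# Back-of-the-envelope math on the plan's 20 kWe minimum output.
# The ~14-day lunar night is general knowledge, not from the memorandum.
HOURS_PER_DAY = 24
LUNAR_NIGHT_DAYS = 14       # approximate length of one lunar night
POWER_KWE = 20              # minimum electrical output in the plan

# Battery capacity a solar installation would need to ride out one night
night_storage_kwh = POWER_KWE * LUNAR_NIGHT_DAYS * HOURS_PER_DAY
print(f"Storage to survive one lunar night: {night_storage_kwh:,} kWh")

# Total energy a 20 kWe reactor delivers over the required 5 years
total_mwh = POWER_KWE * 5 * 365 * HOURS_PER_DAY / 1000
print(f"Energy over 5 years of continuous output: {total_mwh:,.0f} MWh")
```

Nearly 7 megawatt-hours of battery storage just to survive a single lunar night is, in essence, the case for fission on the lunar surface.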

Finally, the order tasks the OSTP with creating a road map for the initiative, noting obstacles and recommendations for addressing them.

“Nuclear power in space will give us the sustained electricity, heating, and propulsion essential to a permanent presence on the moon, Mars, and beyond,” OSTP posted. For his part, NASA administrator Jared Isaacman posted, “The time has come for America to get underway on nuclear power in space.” The message was followed by an emoji of a US flag.

The plan provides a common framework for each agency to work within. In the background, the race for space infrastructure is evidence of technological competition with China, which is also seeking advanced energy capabilities for the moon.

This story originally appeared in WIRED en Español and has been translated from Spanish.




AI Could Democratize One of Tech’s Most Valuable Resources



Nvidia is the undisputed king of AI chips. But thanks to the AI it helped build, the champ could soon face growing competition.

Modern AI runs on Nvidia designs, a dynamic that has propelled the company to a market cap of well over $4 trillion. Each new generation of Nvidia chip allows companies to train more powerful AI models using hundreds or thousands of processors networked together inside vast data centers. One reason for Nvidia’s success is that it provides software to help program each new generation of chip. That software edge may soon be less of a differentiator.

A startup called Wafer is training AI models to do one of the most difficult and important jobs in AI—optimizing code so that it runs as efficiently as possible on a particular silicon chip.

Emilio Andere, cofounder and CEO of Wafer, says the company performs reinforcement learning on open source models to teach them to write kernel code, the low-level routines that run directly on a chip’s hardware. Andere says Wafer also adds “agentic harnesses” to existing coding models like Anthropic’s Claude and OpenAI’s GPT to soup up their ability to write code that runs directly on chips.
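To make the jargon concrete: in accelerator programming, a kernel is a small routine that runs directly on the chip, and "optimizing" one typically means restructuring the computation to suit the hardware. The toy NumPy sketch below (purely illustrative, not Wafer's actual output or method) shows the flavor of such a rewrite: the same softmax computed with a slow element-by-element loop versus a single fused, vectorized pass.

```python
import numpy as np

def softmax_naive(x):
    """Row-wise softmax computed one row at a time, the way a
    straightforward (unoptimized) implementation might do it."""
    out = np.empty_like(x)
    for i in range(x.shape[0]):
        row = x[i] - x[i].max()   # subtract the max for numerical stability
        e = np.exp(row)
        out[i] = e / e.sum()
    return out

def softmax_fused(x):
    """The same math expressed as whole-array operations: one pass,
    no Python loop -- analogous to a fused, vectorized kernel."""
    shifted = x - x.max(axis=1, keepdims=True)
    e = np.exp(shifted)
    return e / e.sum(axis=1, keepdims=True)

x = np.random.default_rng(0).normal(size=(4, 8))
assert np.allclose(softmax_naive(x), softmax_fused(x))
```

Real kernel optimization happens at a much lower level (memory tiling, instruction scheduling, fusing operations to avoid round trips to memory), but the principle is the same: identical math, restructured to match how the hardware wants to execute it.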

Many prominent tech companies now have their own chips. Apple and others have for years used custom silicon to improve the performance and the efficiency of software running on laptops, tablets, and smartphones. At the other end of the scale, companies like Google and Amazon mint their own silicon to improve the performance of their cloud-computing platforms. Meta recently said it would deploy 1 gigawatt of compute capacity with a new chip developed with Broadcom. Deploying custom silicon also involves writing a lot of code so that it runs smoothly and efficiently on the new processor.

Wafer is working with companies including AMD and Amazon to help optimize software to run efficiently on their hardware. The startup has so far raised $4 million in seed funding from Google’s Jeff Dean, Wojciech Zaremba of OpenAI, and others.

Andere believes that his company’s AI-led approach has the potential to challenge Nvidia’s dominance. A number of high-end chips now offer similar raw floating point performance—a key industry benchmark of a chip’s ability to perform simple calculations—to Nvidia’s best silicon.

“The best AMD hardware, the best [Amazon] Trainium hardware, the best [Google] TPUs, give you the same theoretical flops to Nvidia GPUs,” Andere told me recently. “We want to maximize intelligence per watt.”
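The "theoretical flops" Andere refers to is simply a product of hardware specs. A minimal sketch with invented numbers; none of the figures below describe any real chip:

```python
# Theoretical peak FLOPS = compute units x FMAs per unit per cycle
#                          x 2 ops per FMA x clock speed.
# All numbers here are illustrative, not specs of a real accelerator.
compute_units = 128            # e.g. streaming multiprocessors or dies
fma_per_unit_per_cycle = 512   # fused multiply-adds issued per cycle
clock_hz = 1.5e9               # 1.5 GHz

# Each fused multiply-add counts as 2 floating-point operations.
peak_flops = compute_units * fma_per_unit_per_cycle * 2 * clock_hz
print(f"Theoretical peak: {peak_flops / 1e12:.1f} TFLOPS")
```

The catch, and the gap Wafer is targeting, is that real workloads only approach this peak when the software keeps every unit fed, which is exactly what hand-tuned kernels are for.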

Performance engineers with the skill needed to optimize code to run reliably and efficiently on these chips are expensive and in high demand, Andere says, while Nvidia’s software ecosystem makes it easier to write and maintain code for its chips. That makes it hard for even the biggest tech companies to go it alone.

When Anthropic partnered with Amazon to build its AI models on Trainium, for instance, it had to rewrite its model’s code from scratch to make it run as efficiently as possible on the hardware, Andere says.

Of course, Anthropic’s Claude is now one of many AI models that are superhuman at writing code. So Andere reckons it may not be long before AI starts eroding Nvidia’s software advantage.

“The moat lives in the programmability of the chip,” Andere says in reference to the libraries and software tools that make it easier to optimize code for Nvidia hardware. “I think it’s time to start rethinking whether that’s actually a strong moat.”

Besides making it easier to optimize code for different silicon, AI may soon make it easier to design chips themselves. Ricursive Intelligence, a startup founded by two ex-Google engineers, Azalia Mirhoseini and Anna Goldie, is developing new ways to design computer chips with artificial intelligence. If its technology takes off, a lot more companies could branch into chip design, creating custom silicon that runs their software more efficiently.




UK businesses must face up to AI threat, says government | Computer Weekly



A new generation of experimental frontier AI models is rapidly developing the ability to discover and exploit software vulnerabilities, and business leaders need to start paying attention, the UK government has warned.

In an open letter to Britain’s business leaders published on 15 April, business secretary Liz Kendall said the threats organisations face in cyber space are changing and their responses need to change, too.

“For years, the most serious cyber attacks have relied on a small number of highly skilled criminals. That is now shifting,” she said. “AI models are becoming capable of doing work that previously required rare expertise: finding weaknesses in software, writing the code to exploit them, and doing so at a speed and scale that would have been impossible even a year ago.”

Following the recent debut of Anthropic’s frontier model, Mythos, and its accompanying Project Glasswing – which is intended to give some of the world’s largest technology companies a head start on addressing the vulnerabilities it can supposedly uncover – Kendall revealed that the UK’s AI Security Institute (AISI) operated by the Department for Science, Innovation and Technology (DSIT) has been testing out its capabilities.

She said AISI had found Mythos to be “substantially more capable at cyber offence than any model we have previously assessed.”

According to the AISI, frontier model capabilities are doubling every four months, down from eight months in the recent past.

“This finding is significant both for what it means today and because it highlights the speed at which AI capabilities are increasing and the threats they potentially pose,” said Kendall.

“OpenAI also announced scaling up their Trusted Access for Cyber programme last night, showing that AI’s accelerating impact on cyber is not isolated to a single company, and we expect more to follow.

“The trajectory is clear and therefore it is vital that we are prepared for frontier AI model capabilities to rapidly increase over the next year, and plan accordingly for that outcome,” she said.

Responding to the threat

Kendall said the UK government is not standing still in response to this threat – having opened up the AISI two-and-a-half years ago, she said the nation now boasts the most advanced capabilities anywhere in the world for understanding frontier AI models.

More broadly, she continued, the National Cyber Security Centre (NCSC) continues to develop practical guidance for end-user organisations, while the upcoming Cyber Security and Resilience Bill and the soon-to-be-published National Cyber Action Plan will also move things in the right direction.

But, said Kendall, government action alone is insufficient. “Every business in the UK has a part to play. Criminals will not just target government systems and critical infrastructure. They will target ordinary companies, of every size, in every sector. Attackers go where defences are weakest,” she said.

Kendall urged business leaders and board members to ensure they regularly discuss cyber risk rather than delegating it to IT teams, and to consider signing up to the Cyber Governance Code of Practice if they have not already; smaller businesses can avail themselves of the NCSC’s Cyber Action Toolkit. All businesses should also plan and rehearse incident response, and consider taking out cyber insurance.

She also pointed businesses towards the Cyber Essentials certification scheme to help organisations establish basic security policies and procedures, and additionally highlighted resources provided by the NCSC – notably its Early Warning service – and by regulators for regulated sectors.

“We are entering a period in which the pace of technological change may test every institution in the country. The businesses that act now – that treat cyber security as an essential part of running a modern company, not an optional extra – will be the ones best placed to thrive through it and seize its advantages. We urge you to be among them,” said Kendall.


