Tech
Meta Goes Even Harder Into Smart Glasses With 3 New Models
It takes time to realize you don’t have to hold your hand out in front of you for these gestures to be recognized, but a surprisingly short amount of time to find yourself using them with very little second thought.
Of course, talking to Meta AI remains a key way of interacting with the glasses, but Meta hopes that adding visual elements will enhance the chatbot experience. For example, live speech captioning and language translation are still switched on by voice, but with the Meta Ray-Ban Display, you can see the translations and captions appear in real time on the glasses rather than on your phone’s screen. The same goes for commands like “Hey Meta, what am I looking at,” which can now return more visually rich information about whatever the front-facing cameras are pointed at. Asking Meta to navigate to a local attraction makes the glasses display turn-by-turn directions directly on top of the real world as you walk.
For times when talking might be difficult, Meta also showed off a feature that tracks handwriting input as an alternative to voice commands. Aimed at quick messages, it lets you “draw” letters with an outstretched finger on a flat surface (or on your leg), and the Neural Band turns them into text. Though the feature was part of the demo we received, Meta says it won’t be available at launch but will arrive soon. Who knows, maybe this will be the thing that helps save handwriting.
Meta has acknowledged some limitations with features at launch. For example, the built-in Spotify integration can only show what’s playing on your phone and give you basic playback controls, and Instagram is currently limited to just Reels and messages. Meta intends to broaden these capabilities soon.
Also notable: The Orion prototype we saw last year required an external puck to power its most computing-intensive capabilities, but that design provided a full range of augmented reality features. The AR feature set of the new Display model is more limited, so the puck isn’t needed, which also means the Display’s frames are slimmer. Meta does eventually plan to offer a full slate of wearable options to consumers: smart glasses, display glasses, and full AR glasses.
The Ray-Ban Displays will be available in either black or sand colors starting on September 30 for $799 and will initially only be available as in-store purchases in the US. Meta says you need to buy them in person because the wristband has to be fitted correctly to the wrist of your dominant hand. Also, the folks selling you the system will show you the hand gestures that control the glasses—though there will be a tutorial walkthrough when you first power on the glasses too.
Be ready to move quickly if you want them, though. Meta says quantities are limited, and other countries won’t get them until early 2026.
Oakley Meta Vanguard
Following the Oakley Meta HSTN glasses announced earlier this year, Meta’s newest Oakley collaboration evokes the timeless look of a pair of wraparound Oakley Sphaera glasses, but with a twist. That twist, of course, is a 12-megapixel ultrawide camera with a 122-degree field of view positioned smack in the middle of the lens, right on the bridge of your nose. This is the optimal placement for recording POV action-sports videos at up to 3K, as well as for capturing scenes in the glasses’ new slow-mo and hyperlapse modes.
The Vanguards are very much being marketed to sports enthusiasts—those who might be inclined to choose the Meta glasses over a GoPro, for instance. To that end, the Vanguards have an IP67 waterproof rating, the best waterproofing on any pair of Meta glasses. The speakers built into the arms of the frames are 6 decibels louder to make up for any loss of clarity caused by wind noise, and a new 5-mic array lets your commands be clearly heard even when an arctic gale is blasting you in the face while you careen down the slopes.
The Justice Department Released More Epstein Files—but Not the Ones Survivors Want
Over the weekend, the Justice Department released three new data sets comprising files related to Jeffrey Epstein. The DOJ had previously released nearly 4,000 documents prior to the Friday midnight deadline required by the Epstein Files Transparency Act.
As with Friday’s release, the new tranche appears to contain hundreds of photographs, along with various court records pertaining to Epstein and his associates. The first of the additional data sets, Data Set 5, consists of photos of hard drives and physical folders, as well as chain-of-custody forms. Data Set 6 appears to be mostly grand jury materials from the Southern District of New York cases against Epstein and his coconspirator, Ghislaine Maxwell. Data Set 7 includes more grand jury materials from those cases, as well as materials from a separate 2007 Florida grand jury.
Data Set 7 also includes an out-of-order transcript of a 2019 interview between R. Alexander Acosta and the DOJ’s Office of Professional Responsibility. According to the transcript, the OPR was investigating whether attorneys in the Southern District of Florida US Attorney’s Office committed professional misconduct by entering into a non-prosecution agreement with Epstein, who was under investigation by state law enforcement on sexual battery charges. Acosta was the head of the office when the agreement was signed.
Leading up to the deadline to release materials, the DOJ made three separate requests to unseal grand jury materials. Those requests were granted earlier this month.
The initial release of the Epstein files was met with protest, particularly by Epstein victims and Democratic lawmakers. “The public received a fraction of the files, and what we received was riddled with abnormal and extreme redactions with no explanation,” wrote a group of 19 women who had survived abuse from Epstein and Maxwell in a statement posted on social media. Senator Chuck Schumer said Monday that he would force a vote that would allow the Senate to sue the Trump administration for a full release of the Epstein files.
Along with the release of the new batch of files over the weekend, the Justice Department also removed at least 16 files from its initial offering, including a photograph that depicted Donald Trump. The DOJ later restored that photograph, saying in a statement on X that it had initially been flagged “for potential further action to protect victims.” The post went on to say that “after the review, it was determined there is no evidence that any Epstein victims are depicted in the photograph, and it has been reposted without any alteration or redaction.”
The Justice Department acknowledged in a fact sheet on Sunday that it has “hundreds of thousands of pages of material to release,” claiming that it has more than 200 lawyers reviewing files prior to release.
OpenAI’s Child Exploitation Reports Increased Sharply This Year
OpenAI sent 80 times as many child exploitation incident reports to the National Center for Missing & Exploited Children during the first half of 2025 as it did during a similar time period in 2024, according to a recent update from the company. The NCMEC’s CyberTipline is a Congressionally authorized clearinghouse for reporting child sexual abuse material (CSAM) and other forms of child exploitation.
Companies are required by law to report apparent child exploitation to the CyberTipline. When a company sends a report, NCMEC reviews it and then forwards it to the appropriate law enforcement agency for investigation.
Statistics related to NCMEC reports can be nuanced. Increased reports can sometimes indicate changes in a platform’s automated moderation, or the criteria it uses to decide whether a report is necessary, rather than necessarily indicating an increase in nefarious activity.
Additionally, the same piece of content can be the subject of multiple reports, and a single report can be about multiple pieces of content. Some platforms, including OpenAI, disclose the number of both the reports and the total pieces of content they were about for a more complete picture.
OpenAI spokesperson Gaby Raila said in a statement that the company made investments toward the end of 2024 “to increase [its] capacity to review and action reports in order to keep pace with current and future user growth.” Raila also said that the time frame corresponds to “the introduction of more product surfaces that allowed image uploads and the growing popularity of our products, which contributed to the increase in reports.” In August, Nick Turley, vice president and head of ChatGPT, announced that the app had four times the amount of weekly active users than it did the year before.
During the first half of 2025, the number of CyberTipline reports OpenAI sent was roughly the same as the number of pieces of content those reports covered: 75,027 reports about 74,559 pieces of content. In the first half of 2024, it sent 947 CyberTipline reports about 3,252 pieces of content. Both figures saw a marked increase between the two time periods.
Content, in this context, could mean multiple things. OpenAI has said that it reports all instances of CSAM, including uploads and requests, to NCMEC. Besides its ChatGPT app, which allows users to upload files, including images, and can generate text and images in response, OpenAI also offers access to its models via an API. The most recent NCMEC count wouldn’t include any reports related to the video-generation app Sora, as its September release came after the time frame covered by the update.
The spike in reports follows a similar pattern to what NCMEC has observed at the CyberTipline more broadly with the rise of generative AI. The center’s analysis of all CyberTipline data found that reports involving generative AI saw a 1,325 percent increase between 2023 and 2024. NCMEC has not yet released 2025 data, and while other large AI labs like Google publish statistics about the NCMEC reports they’ve made, they don’t specify what percentage of those reports are AI-related.
The Doomsday Glacier Is Getting Closer and Closer to Irreversible Collapse
Known as the “Doomsday Glacier,” the Thwaites Glacier in Antarctica is one of the most rapidly changing glaciers on Earth, and its future evolution is one of the biggest unknowns when it comes to predicting global sea level rise.
The eastern ice shelf of the Thwaites Glacier is supported at its northern end by a ridge of the ocean floor. However, over the past two decades, cracks in the upper reaches of the glacier have increased rapidly, weakening its structural stability. A new study by the International Thwaites Glacier Collaboration (ITGC) presents a detailed record of this gradual collapse process.
Researchers at the Centre for Earth Observation and Science at the University of Manitoba, Canada, analyzed observational data from 2002 to 2022 to track the formation and propagation of cracks in the ice shelf’s shear zone. They discovered that as the cracks grew, the connection between the ice shelf and the seafloor ridge weakened, accelerating the upstream flow of ice.
The Crack in the Ice Shelf Widens in Two Stages
The study reveals that the weakening of the ice shelf occurred in four distinct phases, with crack growth occurring in two stages. In the first stage, long cracks appeared along the direction of ice flow, gradually extending eastward. Some exceeded 8 km in length and spanned the entire shelf. In the second stage, numerous short cross-flow cracks, less than 2 km long, emerged, doubling the total length of the fissures.
Analysis of satellite images showed that the total length of the cracks increased from about 165 km in 2002 to approximately 336 km in 2021. Meanwhile, the average length of each crack decreased from 3.2 km to 1.5 km, implying the number of cracks roughly quadrupled, from around 50 to more than 200, with small cracks accounting for much of the increase. These changes reflect a significant shift in the stress state of the ice shelf, that is, in the balance of forces within its structure.
Between 2002 and 2006, the ice shelf accelerated as it was pulled along by nearby fast-flowing ice, generating compressive stress at the anchorage point, which initially stabilized the shelf. After 2007, the shear zone between the shelf and the western ice tongue collapsed. Stress then concentrated around the anchorage point, leading to the formation of large cracks.
Since 2017, these cracks have completely penetrated the ice shelf, severing the connection to the anchorage. According to researchers, this has accelerated the upstream flow of ice and turned the anchorage into a destabilizing factor.
Feedback Loop Collapse
One of the most significant findings of the study is the existence of a feedback loop: Cracks accelerate the flow of ice, and in turn, this increased speed generates new cracks. This process was clearly recorded by the GPS devices that the team deployed on the ice shelf between 2020 and 2022.
During the winter of 2020, the upstream propagation of structural changes in the shear zone was particularly evident. These changes advanced through the ice shelf at a rate of approximately 55 kilometers per year, demonstrating that structural collapse in the shear zone directly affects upstream ice flow.


