
OpenAI Is Preparing to Launch a Social App for AI-Generated Videos



OpenAI is preparing to launch a stand-alone app for its video generation AI model Sora 2, WIRED has learned. The app, which features a vertical video feed with swipe-to-scroll navigation, appears to closely resemble TikTok—except all of the content is AI-generated. There’s a For You–style page powered by a recommendation algorithm. On the right side of the feed, a menu bar gives users the option to like, comment, or remix a video.

Users can create video clips up to 10 seconds long using OpenAI’s next-generation video model, according to documents viewed by WIRED. There is no option to upload photos or videos from a user’s camera roll or other apps.

The Sora 2 app has an identity verification feature that allows users to confirm their likeness. If a user has verified their identity, they can use their likeness in videos, and other users can tag them and use their likeness in clips as well. For example, someone could generate a video of themselves riding a roller coaster at a theme park with a friend. Users will get a notification whenever their likeness is used—even if the clip remains in draft form and is never posted, sources say.

OpenAI launched the app internally last week. So far, it’s received overwhelmingly positive feedback from employees, according to documents viewed by WIRED. Employees have been using the tool so frequently that some managers have joked it could become a drain on productivity.

OpenAI declined to comment.

OpenAI appears to be betting that the Sora 2 app will let people interact with AI-generated video in a way that fundamentally changes their experience of the technology—similar to how ChatGPT helped users realize the potential of AI-generated text. Internally, sources say, there’s also a feeling that President Trump’s on-again, off-again deal to sell TikTok’s US operations has given OpenAI a unique opportunity to launch a short-form video app—particularly one without close ties to China.

OpenAI officially launched Sora in December of last year. Initially, people could only access it via a web page, but it was soon incorporated directly into the ChatGPT app. At the time, the model was among the most advanced AI video generators available, though OpenAI noted it had limitations. For example, it didn’t seem to fully understand physics and struggled to produce realistic action scenes, especially in longer clips.

OpenAI’s Sora 2 app will compete with new AI video offerings from tech giants like Meta and Google. Last week, Meta introduced a new feed in its Meta AI app called Vibes, which is dedicated exclusively to creating and sharing short AI-generated videos. Earlier this month, Google announced that it was integrating a custom version of its latest video generation model, Veo 3, into YouTube.

TikTok, on the other hand, has taken a more cautious approach to AI-generated content. The video app recently redefined its rules around what kind of AI-generated videos it allows on the platform. It now explicitly bans AI-generated content that’s “misleading about matters of public importance or harmful to individuals.”

The Sora 2 app often refuses to generate videos due to copyright safeguards and other filters, sources say. OpenAI is currently fighting a series of lawsuits over alleged copyright infringements, including a high-profile case brought by The New York Times. The Times case centers on allegations that OpenAI trained its models on the paper’s copyrighted material.

OpenAI is also facing mounting criticism over child safety issues. On Monday, the company released new parental controls, including the option for parents and teenagers to link their accounts. The company also said that it is working on an age-prediction tool that could automatically route users believed to be under the age of 18 to a more restricted version of ChatGPT that doesn’t allow for romantic interactions, among other things. It is not known what age restrictions might be incorporated into the Sora 2 app.


This is an edition of the Model Behavior newsletter.




Interrupting encoder training in diffusion models enables more efficient generative AI



The developed model modifies Schrödinger bridge-type diffusion models, adding noise to real data through the encoder and reconstructing samples through the decoder. It uses two objective functions, the prior loss and drift matching, to reduce computational cost and prevent overfitting. Credit: Institute of Science Tokyo

Researchers at Institute of Science Tokyo have developed a new framework for generative diffusion models. The method reinterprets Schrödinger bridge models as variational autoencoders with infinitely many latent variables, reducing computational costs and preventing overfitting. By interrupting the training of the encoder at the right point, the approach enables more efficient generative AI, with broad applicability beyond standard diffusion models.

Diffusion models are among the most widely used approaches in generative AI for creating images and audio. These models generate new data by gradually adding noise (noising) to real samples and then learning to reverse that process (denoising) back into realistic data. A widely used variant, the score-based model, achieves this with a diffusion process that connects the prior distribution to the data over a sufficiently long time interval. This method has a limitation, however: when the data differs strongly from the prior, the noising and denoising processes require longer time intervals, which slows down sample generation.
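To make the noising step concrete, here is a minimal sketch of the forward process in a score-based model, assuming a standard variance-preserving noise schedule; the schedule values and step count are illustrative, not taken from the paper.

```python
# Minimal sketch of the forward (noising) process in a score-based
# diffusion model. The schedule parameters are illustrative assumptions.
import numpy as np

def forward_noising(x0, T=1000, beta_min=1e-4, beta_max=0.02, seed=0):
    """Gradually noise a sample x0 toward a standard Gaussian over T steps."""
    rng = np.random.default_rng(seed)
    betas = np.linspace(beta_min, beta_max, T)
    x = np.asarray(x0, dtype=float)
    for beta in betas:
        # Each step shrinks the signal slightly and injects Gaussian noise.
        x = np.sqrt(1.0 - beta) * x + np.sqrt(beta) * rng.standard_normal(x.shape)
    return x  # close to N(0, I) only when T is large enough

# The generative model learns to reverse these steps; the limitation noted
# above is that data far from the prior forces T to be large, so sampling
# (which must undo all T steps) becomes slow.
```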

Now, a research team from the Institute of Science Tokyo (Science Tokyo), Japan, has proposed a new framework for diffusion models that is faster and computationally less demanding. They achieved this by reinterpreting Schrödinger bridge (SB) models, a type of diffusion model, as variational autoencoders (VAEs).

The study was led by graduate student Mr. Kentaro Kaba and Professor Masayuki Ohzeki from the Department of Physics at Science Tokyo, in collaboration with Mr. Reo Shimizu (then a graduate student) and Associate Professor Yuki Sugiyama from the Graduate School of Information Sciences at Tohoku University, Japan. Their findings were published in Physical Review Research on September 3, 2025.

SB models offer greater flexibility than standard score-based models because they can connect any two probability distributions over a finite time using a stochastic differential equation (SDE). This supports more complex noising processes and higher-quality sample generation. The trade-off, however, is that SB models are mathematically complex and expensive to train.
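An SDE of this kind is typically simulated with Euler-Maruyama discretization. The sketch below is a generic illustration of that integrator with placeholder drift and diffusion functions; it is not the learned drift of an actual SB model.

```python
# Generic Euler-Maruyama integration of dx = f(x, t) dt + g(t) dW,
# the numerical building block of SDE-based (including SB-type) models.
import numpy as np

def euler_maruyama(x, f_drift, g_diff, t0=0.0, t1=1.0, n_steps=100, seed=0):
    rng = np.random.default_rng(seed)
    dt = (t1 - t0) / n_steps
    t = t0
    for _ in range(n_steps):
        dw = np.sqrt(dt) * rng.standard_normal(x.shape)  # Brownian increment
        x = x + f_drift(x, t) * dt + g_diff(t) * dw
        t += dt
    return x

# Example with a simple Ornstein-Uhlenbeck drift pulling samples toward 0;
# in an SB model, f_drift would be a trained neural network instead.
samples = euler_maruyama(np.ones((8, 2)), lambda x, t: -x, lambda t: 0.5)
```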

The proposed method addresses this by reformulating SB models as VAEs with multiple latent variables. “The key insight lies in extending the number of latent variables from one to infinity, leveraging the data-processing inequality. This perspective enables us to interpret SB-type models within the framework of VAEs,” says Kaba.

In this setup, the encoder represents the forward process that maps real data onto a noisy latent space, while the decoder reverses the process to reconstruct realistic samples. Both processes are modeled as SDEs learned by neural networks.

The model employs an objective with two components. The first is the prior loss, which ensures that the encoder correctly maps the data distribution to the prior distribution. The second is drift matching, which trains the decoder to mimic the dynamics of the reverse encoder process. Moreover, once the prior loss stabilizes, encoder training can be stopped early. This shortens training, reduces the risk of overfitting, and preserves the high accuracy of SB models.
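Put together, the training procedure might look like the following schematic sketch. The loss expressions are simplified stand-ins for the paper’s prior-loss and drift-matching objectives, and `encoder_sde`, `decoder_sde`, and the `reverse_drift` helper are hypothetical modules introduced for illustration.

```python
# Schematic two-part training with early interruption of the encoder.
# encoder_sde / decoder_sde are assumed to be neural networks (nn.Module)
# parameterizing the forward and reverse drifts; reverse_drift is a
# hypothetical helper returning a latent sample and its target drift.
import torch

def train(encoder_sde, decoder_sde, data_loader, n_epochs=100, tol=1e-4):
    enc_opt = torch.optim.Adam(encoder_sde.parameters(), lr=1e-4)
    dec_opt = torch.optim.Adam(decoder_sde.parameters(), lr=1e-4)
    prev_prior_loss = float("inf")
    encoder_frozen = False

    for _ in range(n_epochs):
        for x in data_loader:
            if not encoder_frozen:
                # Prior loss (stand-in): push the encoder's terminal
                # distribution toward the standard Gaussian prior.
                z = encoder_sde(x)
                prior_loss = z.mean().pow(2) + (z.var() - 1.0).pow(2)
                enc_opt.zero_grad()
                prior_loss.backward()
                enc_opt.step()

            # Drift matching: train the decoder to mimic the reverse
            # dynamics of the (possibly frozen) encoder process.
            with torch.no_grad():
                z, target_drift = encoder_sde.reverse_drift(x)
            drift_loss = (decoder_sde(z) - target_drift).pow(2).mean()
            dec_opt.zero_grad()
            drift_loss.backward()
            dec_opt.step()

        # Interrupt encoder training once the prior loss has stabilized,
        # saving compute and reducing the risk of overfitting.
        if not encoder_frozen:
            if abs(prev_prior_loss - prior_loss.item()) < tol:
                encoder_frozen = True
            prev_prior_loss = prior_loss.item()
```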

“The objective function is composed of the prior loss and drift matching parts, which characterizes the training of neural networks in the encoder and the decoder, respectively. Together, they reduce the computational cost of training SB-type models. It was demonstrated that interrupting the training of the encoder mitigated the challenge of overfitting,” explains Ohzeki.

This approach is flexible and can be applied to other probabilistic rule sets, even non-Markov processes, making it a broadly applicable training scheme.

More information:
Kentaro Kaba et al, Schrödinger bridge-type diffusion models as an extension of variational autoencoders, Physical Review Research (2025). DOI: 10.1103/dxp7-4hby

Citation:
Interrupting encoder training in diffusion models enables more efficient generative AI (2025, September 29)
retrieved 29 September 2025
from https://techxplore.com/news/2025-09-encoder-diffusion-enables-efficient-generative.html







More people are using AI in court, not a lawyer. It could cost you money—and your case



Credit: Pavel Danilyuk from Pexels

When you don’t have the money for a lawyer to represent you in a court case, even judges can understand the temptation to get free help from anywhere—including tapping into generative artificial intelligence (AI).

As Judge My Anh Tran in the County Court of Victoria said this year: “Generative AI can be beguiling, particularly when the task of representing yourself seems overwhelming. However, a litigant runs the risk that their case will be damaged, rather than helped, if they choose to use AI without taking the time to understand what it produces, and to confirm that it is both legally and factually accurate.”

Our research has so far found 84 reported cases of generative AI use in Australian courts since ChatGPT launched in late 2022. While cases involving lawyers have had the most media attention, we found more than three-quarters of those cases (66 of 84) involved people representing themselves, known as “self-represented litigants.”

Those people—who sometimes have valid legal claims—are increasingly turning to generative AI tools for help with everything from property and will disputes to employment, bankruptcy, defamation, and migration cases.

Our ongoing research is part of an upcoming report for the Australian Academy of Law, to be launched later this year. But we’re sharing our findings now because this is a growing real-world problem.

Just this month, Queensland’s courts issued updated guidance for self-represented litigants, warning using “inaccurate AI-generated information in court” could cause delays, or worse: “a costs order may be made against you.”

As New South Wales Chief Justice Andrew Bell observed in a decision in August this year, the self-represented respondent was “admirably candid with the court in relation to her use of AI.” But while she was “doing her best to defend her interests,” her AI-generated submissions were often “misconceived, unhelpful and irrelevant.”

If you’re considering using AI in your own case, here’s what you need to know.

The temptation to rely on AI

Self-representation in Australian courts is more common than many people realize.

For example, 79% of litigants in migration matters at the Federal Circuit Court were unrepresented in 2023-2024.

The Queensland District Court has said “a significant number of civil proceedings involve self-represented parties.” The County Court of Victoria last year created easy-to-use forms for self-represented litigants.

But as the availability of free or low-cost generative AI tools increases, so does the temptation to use AI, as our recent research paper highlighted.

The risks if AI gets it wrong

Relying on AI tools that produce fake law can result in court documents being rejected, and valid claims being lost in court.

If you’re a self-represented litigant, the court system gives you the right to provide evidence and argument to support your case. But if that evidence or argument is not real, the court must reject it. That means you could lose your day in court.

In those circumstances, the court may make a costs order against a self-represented litigant—meaning you could end up having to pay your opponent’s legal costs.

Lawyers here and overseas have also been caught relying on inaccurate AI-generated law in court.

But a key difference is that if a lawyer uses fake cases that the court rejects, this is likely to amount to negligence. Their client might be able to sue the lawyer.

When someone representing themselves makes the error, they only have themselves to blame.

How can you reduce your risks?

The safest advice is to avoid AI for legal research.

There are many free, publicly available legal research websites for Australian law. The best known is the Australasian Legal Information Institute (AUSTLII). Another is Jade.

Court libraries and law schools are open to the public and have online resources about how to conduct legal research. Libraries will often have textbooks that set out principles of law.

Australian courts, such as the Supreme Court of Queensland, Supreme Court of NSW and Supreme Court of Victoria, have all issued guidance on when generative AI can and cannot be used.

Check if there’s a guide from the relevant court for your case. Follow their advice.

If you still plan to use generative AI, you must check everything against a reliable source. You need to search for each case you plan to cite, not just to make sure it exists, but also that it says what an AI summary says it does.

And as Queensland’s guide for self-litigants warns: “Do not enter any private, confidential, suppressed or legally privileged information into a Generative AI chatbot […] Anything you put into a Generative AI chatbot could become publicly known. This could result in you unintentionally breaching suppression orders, or accidentally disclosing your own or someone else’s private or confidential information.”

Conducting legal research and producing legal documents is not easy. That’s what trained lawyers are for, which is why affordable, accessible legal services are necessary for a fair justice system.

AI is being used to address an access to justice problem that it is not well-suited to—at least, not yet.

Provided by
The Conversation


This article is republished from The Conversation under a Creative Commons license. Read the original article.

Citation:
More people are using AI in court, not a lawyer. It could cost you money—and your case (2025, September 29)
retrieved 29 September 2025
from https://techxplore.com/news/2025-09-people-ai-court-lawyer-money.html







AlloyGPT: Leveraging a language model to aid alloy discovery

Published

on

AlloyGPT: Leveraging a language model to aid alloy discovery


Evaluating the design accuracy of the AlloyGPT model on P-to-SC design tasks. Credit: npj Computational Materials (2025). DOI: 10.1038/s41524-025-01768-2

Additive manufacturing of alloys has enabled the creation of machine parts that meet the complex requirements needed to optimize performance in aerospace, automotive, and energy applications. Finding the ideal mix of elements to use in these parts when there are countless possible combinations available is a complicated process that has been accelerated by computational tools and artificial intelligence.

With large language models (LLMs), such as ChatGPT, evolving to better understand natural language, researchers in the materials science and engineering department at Carnegie Mellon University have shown that an LLM can be trained to understand a novel alloy physics language in a similar manner. Led by Assistant Professor Mohadeseh Taheri-Mousavi, they have developed AlloyGPT, which recognizes the relationship between composition, structure, and properties in order to generate novel designs for additively manufacturable structural alloys.

The AlloyGPT model, detailed in a recent paper published in npj Computational Materials, is unique in that it has dual functionality. It can accurately predict multiple phase structures and properties based on given alloy compositions, and conversely, it can suggest a comprehensive list of alloy compositions that meet given desired goals.

“We have created an architecture that has learned the physics of alloys in order to design enhanced alloys that have the desired qualities for mechanical performance and manufacturability in a variety of applications,” said Taheri-Mousavi.






This video shows a test demo of AlloyGPT, a large language model that speaks an alloy-specific language and can predict and design alloys. Credit: Carnegie Mellon University

Taheri-Mousavi’s group, which focuses on structural alloy design, built the autoregressive model by developing a language for the physics of alloys and training the generative model on it. Rather than analyzing words, the model reads compositions and structural features in a sentence format to learn how composition, structure, and properties are connected.
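As a rough illustration of that “alloy record as a sentence” idea, the sketch below shows how composition and property fields could be serialized as text for autoregressive training. The tokens and field names are invented for this sketch; the model’s actual vocabulary is defined in the paper and its GitHub repository.

```python
# Hypothetical serialization of an alloy record into a text "sentence"
# that a GPT-style autoregressive model can be trained on. Token names
# are illustrative, not AlloyGPT's actual vocabulary.
def composition_to_sentence(composition, properties=None):
    comp = " ".join(f"{el}:{frac:.3f}" for el, frac in sorted(composition.items()))
    sentence = f"<composition> {comp}"
    if properties is not None:
        props = " ".join(f"{k}={v}" for k, v in properties.items())
        sentence += f" <properties> {props}"
    return sentence + " <end>"

# Prediction task: condition on the composition prefix, generate properties.
print(composition_to_sentence({"Al": 0.85, "Si": 0.10, "Mg": 0.05}))
# Design task: train on sentences with the field order reversed, so the
# model conditions on target properties and generates compositions.
```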

Unlike conventional iterative methods, which often struggle to find all possible solutions, AlloyGPT can provide a comprehensive list of elemental combinations that produce the desired material properties. This is especially useful for designing gradient additively manufactured alloys, in which gradual changes in composition exist across a single part.
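One way such a comprehensive list can emerge from an autoregressive model is by sampling repeatedly with a nonzero temperature, so that a single property target yields many distinct candidate compositions. The sketch below assumes Hugging Face-style `generate`/`tokenizer` conventions, which are an assumption here rather than the project’s documented interface.

```python
# Hedged sketch: stochastic decoding turns one design query into a set of
# diverse candidates. The model/tokenizer API follows Hugging Face
# conventions and is assumed, not taken from the AlloyGPT repository.
def propose_compositions(model, tokenizer, property_prompt, n_candidates=20):
    inputs = tokenizer(property_prompt, return_tensors="pt")
    outputs = model.generate(
        **inputs,
        do_sample=True,            # sample instead of greedy decoding
        temperature=0.8,           # nonzero temperature -> diversity
        num_return_sequences=n_candidates,
        max_new_tokens=64,
    )
    return [tokenizer.decode(o, skip_special_tokens=True) for o in outputs]
```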

“It’s exciting to build a model that can solve prediction and design tasks simultaneously,” said Bo Ni, a postdoctoral researcher in Taheri-Mousavi’s group. “It’s even more interesting when we demonstrate that AlloyGPT can synergize accuracy, diversity and robustness in problem solving.”

Assistant Professor Mohadeseh Taheri-Mousavi and postdoctoral researcher Bo Ni showcase a paradigm of the AlloyGPT model. Credit: Carnegie Mellon University Materials Science and Engineering

This language model has the potential to lay the groundwork for similar models and to accelerate material design for alloys manufactured by both traditional and additive manufacturing.

“Our approach will enable scientists to quickly discover alloys with new or improved properties, and will ultimately help industry partners to improve the speed and reduce the cost of their alloy design for various manufacturing processes,” said Taheri-Mousavi.

More information:
Bo Ni et al, End-to-end prediction and design of additively manufacturable alloys using a generative AlloyGPT model, npj Computational Materials (2025). DOI: 10.1038/s41524-025-01768-2

Source code and script examples for training and inference are available on GitHub: https://github.com/Taheri-Mousavi-Laboratory/AlloyGPT.

Citation:
AlloyGPT: Leveraging a language model to aid alloy discovery (2025, September 29)
retrieved 29 September 2025
from https://techxplore.com/news/2025-09-alloygpt-leveraging-language-aid-alloy.html





