
Tech

Machine learning can reduce textile dyeing waste: US Researchers


A new study led by Warren Jasper, a professor at the Wilson College of Textiles in the US, has demonstrated how machine learning can help reduce waste in textile manufacturing by improving the accuracy of colour prediction during the dyeing process.

The research, titled ‘A Controlled Study on Machine Learning Applications to Predict Dry Fabric Color from Wet Samples: Influences of Dye Concentration and Squeeze Pressure’, addresses one of the industry’s longstanding challenges: predicting what dyed fabric will look like once it dries.

Fabrics are typically dyed while wet, but their colours often change as they dry. This makes it difficult for manufacturers to determine the final appearance of the material during production. The issue is further complicated by the fact that colour changes from wet to dry are non-linear and vary across different shades, making it impossible to generalise data from one colour to another, according to the paper co-authored by Samuel Jasper.

“The fabric is dyed while wet, but the target shade is when it’s dry and wearable. That means that, if you have an error in coloration, you aren’t going to know until the fabric is dry. While you wait for that drying to happen, more fabric is being dyed the entire time. That leads to a lot of waste, because you just can’t catch the error until late in the process,” said Warren Jasper.

To address this, Jasper developed five machine learning models, including a neural network specifically designed to handle the non-linear relationship between wet and dry colour states. The models were trained on visual data from 763 fabric samples dyed in various colours. Jasper noted that each dyeing process took several hours, making data collection a time-intensive task.
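The paper does not describe the models' architecture in this excerpt, so the following is only an illustrative sketch of the general idea: a small neural network learning a non-linear mapping from wet-state colour values to dry-state colour values. The data, the wet-to-dry mapping, and the one-hidden-layer architecture here are all made up for illustration; only the overall approach (regression from wet to dry colour) comes from the article.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in data: normalized wet-state L*a*b* values as inputs,
# with a synthetic non-linear "drying" shift as the target. The real study
# trained on measurements from 763 dyed fabric samples.
X = rng.uniform(0.0, 1.0, size=(256, 3))
Y = np.tanh(1.5 * X) + 0.1 * X**2

# One hidden tanh layer is enough to capture a smooth non-linear relation.
W1 = rng.normal(0, 0.5, (3, 16)); b1 = np.zeros(16)
W2 = rng.normal(0, 0.5, (16, 3)); b2 = np.zeros(3)

lr = 0.05
for epoch in range(2000):
    H = np.tanh(X @ W1 + b1)      # hidden activations
    P = H @ W2 + b2               # predicted dry colour
    err = P - Y
    # Full-batch gradient descent on mean-squared error.
    gW2 = H.T @ err / len(X); gb2 = err.mean(0)
    dH = (err @ W2.T) * (1 - H**2)
    gW1 = X.T @ dH / len(X); gb1 = dH.mean(0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1

mse = float((err**2).mean())
print(f"final training MSE: {mse:.5f}")
```

The point of the sketch is only that a hidden non-linearity lets the model fit a wet-to-dry shift that a linear baseline cannot, which matches the paper's motivation for using a neural network.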

All five machine learning models outperformed traditional, non-ML approaches in predicting final fabric colour, but the neural network proved to be the most accurate. It achieved a CIEDE2000 error as low as 0.01 and a median error of 0.7. In comparison, the other machine learning models showed error ranges from 1.1 to 1.6, while the baseline model recorded errors as high as 13.8.

The CIEDE2000 formula is a standard metric for measuring colour difference, and in the textile industry, values above 0.8 to 1.0 are generally considered unacceptable.
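The full CIEDE2000 formula adds lightness, chroma, and hue weighting terms and is lengthy to write out; as a simpler illustration of how such a tolerance check works, here is the older CIE76 colour difference (plain Euclidean distance in CIELAB space) with the pass/fail threshold mentioned above. The two colour values are invented for the example; only the ~1.0 acceptability threshold comes from the article.

```python
import math

def delta_e_cie76(lab1, lab2):
    """Euclidean distance in CIELAB space (the CIE76 colour difference).

    CIEDE2000, the metric used in the study, refines this with lightness,
    chroma, and hue weighting, but the pass/fail logic is the same.
    """
    return math.dist(lab1, lab2)

# Hypothetical target shade vs. a measured dry sample (L*, a*, b*).
target   = (52.0, 42.5, 18.0)
measured = (52.3, 42.1, 18.4)

de = delta_e_cie76(target, measured)
# Textile tolerance from the article: differences above ~0.8-1.0 fail.
verdict = "pass" if de <= 1.0 else "fail"
print(f"delta E = {de:.2f} -> {verdict}")   # delta E = 0.64 -> pass
```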

By enabling more accurate predictions of final fabric colour, the neural network could help manufacturers avoid costly dyeing mistakes and reduce material waste. Jasper expressed hope that similar machine learning tools would be adopted more widely across the textile sector to support efficiency and sustainability.

“We’re a bit behind the curve in textiles. The industry has started to move more toward machine learning models, but it’s been very slow. These types of models can offer powerful tools in cutting down on waste and improving productivity in continuous dyeing, which accounts for over 60 per cent of dyed fabrics,” said Jasper.


Fibre2Fashion News Desk (HU)





For the First Time, AI Analyzes Language as Well as a Human Expert



The original version of this story appeared in Quanta Magazine.

Among the myriad abilities that humans possess, which ones are uniquely human? Language has been a top candidate at least since Aristotle, who wrote that humanity was “the animal that has language.” Even as large language models such as ChatGPT superficially replicate ordinary speech, researchers want to know if there are specific aspects of human language that simply have no parallels in the communication systems of other animals or artificially intelligent devices.

In particular, researchers have been exploring the extent to which language models can reason about language itself. For some in the linguistic community, language models not only don’t have reasoning abilities, they can’t. This view was summed up by Noam Chomsky, a prominent linguist, and two coauthors in 2023, when they wrote in The New York Times that “the correct explanations of language are complicated and cannot be learned just by marinating in big data.” AI models may be adept at using language, these researchers argued, but they’re not capable of analyzing language in a sophisticated way.

Gašper Beguš, a linguist at the University of California, Berkeley.

Photograph: Jami Smith

That view was challenged in a recent paper by Gašper Beguš, a linguist at the University of California, Berkeley; Maksymilian Dąbkowski, who recently received his doctorate in linguistics at Berkeley; and Ryan Rhodes of Rutgers University. The researchers put a number of large language models, or LLMs, through a gamut of linguistic tests—including, in one case, having the LLM generalize the rules of a made-up language. While most of the LLMs failed to parse linguistic rules in the way that humans are able to, one had impressive abilities that greatly exceeded expectations. It was able to analyze language in much the same way a graduate student in linguistics would—diagramming sentences, resolving multiple ambiguous meanings, and making use of complicated linguistic features such as recursion. This finding, Beguš said, “challenges our understanding of what AI can do.”

This new work is both timely and “very important,” said Tom McCoy, a computational linguist at Yale University who was not involved with the research. “As society becomes more dependent on this technology, it’s increasingly important to understand where it can succeed and where it can fail.” Linguistic analysis, he added, is the ideal test bed for evaluating the degree to which these language models can reason like humans.

Infinite Complexity

One challenge of giving language models a rigorous linguistic test is making sure they don’t already know the answers. These systems are typically trained on huge amounts of written information—not just the bulk of the internet, in dozens if not hundreds of languages, but also things like linguistics textbooks. The models could, in theory, simply memorize and regurgitate the information that they’ve been fed during training.

To avoid this, Beguš and his colleagues created a linguistic test in four parts. Three of the four parts involved asking the model to analyze specially crafted sentences using tree diagrams, which were first introduced in Chomsky’s landmark 1957 book, Syntactic Structures. These diagrams break sentences down into noun phrases and verb phrases and then further subdivide them into nouns, verbs, adjectives, adverbs, prepositions, conjunctions and so forth.
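The structure of such a tree diagram can be sketched as nested data. The following toy example, which is not from the paper, represents "The sky is blue" as a constituency tree built from the phrase categories just described, with a small recursive function that reads the words back off the leaves.

```python
# A toy constituency tree for "the sky is blue", written as nested
# (label, children...) tuples: S -> NP VP, NP -> Det N, VP -> V Adj.
tree = ("S",
        ("NP", ("Det", "the"), ("N", "sky")),
        ("VP", ("V", "is"), ("Adj", "blue")))

def leaves(node):
    """Collect the words at the bottom of the tree, left to right."""
    label, *children = node
    if len(children) == 1 and isinstance(children[0], str):
        return [children[0]]                      # pre-terminal: one word
    return [w for child in children for w in leaves(child)]

print(" ".join(leaves(tree)))   # the sky is blue
```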

One part of the test focused on recursion—the ability to embed phrases within phrases. “The sky is blue” is a simple English sentence. “Jane said that the sky is blue” embeds the original sentence in a slightly more complex one. Importantly, this process of recursion can go on forever: “Maria wondered if Sam knew that Omar heard that Jane said that the sky is blue” is also a grammatically correct, if awkward, recursive sentence.
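The unboundedness of this embedding process is easy to mimic mechanically. The sketch below, not taken from the paper, wraps a base sentence in one reported-speech clause per reporter; the depth of embedding is limited only by the length of the list.

```python
def embed(base, reporters):
    """Wrap `base` in one 'X <verb> that ...' clause per reporter.

    Each pass adds one more layer of embedding, mirroring how recursion
    in natural language can in principle go on forever.
    """
    sentence = base
    for name, verb in reporters:
        sentence = f"{name} {verb} that {sentence}"
    return sentence

print(embed("the sky is blue",
            [("Jane", "said"), ("Omar", "heard"), ("Sam", "knew")]))
# Sam knew that Omar heard that Jane said that the sky is blue
```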





AMD CEO Lisa Su Isn’t Afraid of the Competition



Michael Calore: Recording works.

Lauren Goode: Recording. Yeah.

Michael Calore: Yeah. It’s like when people say, let me film that. You’re not actually filming anything. You’re shooting a digital video.

Lauren Goode: So then if you have a video podcast, are you shooting the podcast? What do you say? Do you say taping, then?

Michael Calore: I think you say recording because it just—

Lauren Goode: Recording the pod.

Michael Calore: Yeah.

Lauren Goode: We’re recording the pod.

Michael Calore: It covers all the bases.

Lauren Goode: We’re capturing it.

Michael Calore: That’s what we’re doing.

Lauren Goode: We’re sublimating it. All right. Well, should we record this pod?

Michael Calore: I would like to, yes.

Lauren Goode: Let’s do it.

Michael Calore: Honestly, I’m still recovering from last week’s Big Interview event. My throat is still feeling a little bit raw, even though it’s been like four or five days.

Lauren Goode: You sound delightful to me.

Michael Calore: Thank you.

Lauren Goode: But that really was an epic event.

Michael Calore: It was.

Lauren Goode: Yeah.

Michael Calore: You were on stage.

Lauren Goode: I was. I was first up in the morning. Katie, our boss, gave the intro to the conference and then it was me and Lisa Su, the CEO of AMD. And not only was it a really interesting conversation, but then I was done for the day. I didn’t have to do any more interviews after that. And I just got to listen and absorb, and there were some other really great talks.

Michael Calore: There were, yes. And we’re going to talk through some of them. We’re also going to listen to your conversation with Lisa Su, and then we’ll talk about it, and we’ll take listeners behind the scenes of The Big Interview.





Why SpaceX Is Finally Gearing Up to Go Public



SpaceX is planning to raise tens of billions of dollars through an initial public offering next year, multiple outlets have reported, and Ars can confirm. This represents a major change in thinking from the world’s leading space company and its founder, Elon Musk.

The Wall Street Journal and The Information first reported about a possible IPO last Friday, and Bloomberg followed that up on Tuesday evening with a report suggesting the company would target a $1.5 trillion valuation. This would allow SpaceX to raise in excess of $30 billion.
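A back-of-the-envelope calculation, not from any filing, shows why those two figures fit together: raising $30 billion against a $1.5 trillion valuation would mean selling only a small fraction of the company.

```python
# Illustrative arithmetic from the reported figures, not from filings.
valuation = 1.5e12        # reported target valuation, USD
raise_amount = 30e9       # reported raise, USD

stake_sold = raise_amount / valuation
print(f"{stake_sold:.1%} of the company")   # 2.0% of the company
```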

This is an enormous amount of funding. The largest IPO in history occurred in 2019, when the state-owned Saudi Arabian oil company began public trading as Aramco and raised $29 billion. In terms of revenue, Aramco is a top-five company in the world.

Now SpaceX is poised to potentially match or exceed this value. That SpaceX would be attractive to public investors is not a surprise—it’s the world’s dominant space company in launch, space-based communications, and much more. For investors seeking unlimited growth, space is the final frontier.

But why would Musk take SpaceX public now, at a time when the company’s revenues are surging thanks to the growth of the Starlink Internet constellation? The decision is surprising because Musk has long resisted going public with SpaceX. He has not enjoyed the public scrutiny that comes with Tesla’s listing, and has feared that shareholder demands for financial returns would not be consistent with his ultimate goal of settling Mars.

Data Centers

Ars spoke with multiple people familiar with Musk and his thinking to understand why he would want to take SpaceX public.

A significant shift in recent years has been the rise of artificial intelligence, which Musk has been involved in since 2015, when he cofounded OpenAI. He later had a falling out with his cofounders and started his own company, xAI, in 2023. At Tesla, he has been pushing smart-driving technology forward and more recently focused on robotics. Musk sees a convergence of these technologies in the near future, which he believes will profoundly change civilization.

Raising large amounts of money in the next 18 months would allow Musk to have significant capital to deploy at SpaceX as he influences and partakes in this convergence of technology.

How can SpaceX play in this space? In the near term, the company plans to develop a modified version of the Starlink satellite to serve as a foundation for building data centers in space. Musk said as much on the social media network he owns, X, in late October: “SpaceX will be doing this.”


