Prime Video: Amazon turns to AI for series recaps

Prime subscribers should expect more AI-generated content on Prime Video. Amazon wants to make watching series more convenient for viewers. (Prime Video, AI)

Training AI yourself on a budget: LLMs and the like with little hardware and data

AI models need lengthy training and absurdly large amounts of data? Only as long as you don't apply a few tricks. A guide by Tim Elsner (AI, DIY – Do it yourself)
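The teaser doesn't spell out the tricks, but parameter-efficient fine-tuning is one widely used way to adapt an existing LLM with modest hardware and a small dataset; whether it matches the article's approach is an assumption. Below is a minimal sketch using Hugging Face's transformers, peft, and datasets libraries. The base model, the file path "my_small_corpus.txt", and all hyperparameters are illustrative, not details from the article.

```python
# Minimal sketch: LoRA fine-tuning of a small causal LM on a small corpus.
# Model name, file path, and hyperparameters are illustrative assumptions.
from datasets import load_dataset
from peft import LoraConfig, get_peft_model
from transformers import (AutoModelForCausalLM, AutoTokenizer, Trainer,
                          TrainingArguments)

model_name = "gpt2"  # small base model; swap in any causal LM
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token  # GPT-2 ships without a pad token
model = AutoModelForCausalLM.from_pretrained(model_name)

# LoRA trains small low-rank adapter matrices instead of all weights,
# which is what keeps memory and compute requirements low.
model = get_peft_model(model, LoraConfig(
    r=8, lora_alpha=16, lora_dropout=0.05, task_type="CAUSAL_LM"))
model.print_trainable_parameters()  # typically well under 1% of all weights

# A small plain-text corpus; a few hundred lines can suffice for narrow tasks.
dataset = load_dataset("text", data_files={"train": "my_small_corpus.txt"})["train"]

def tokenize(batch):
    out = tokenizer(batch["text"], truncation=True, max_length=128,
                    padding="max_length")
    out["labels"] = [ids.copy() for ids in out["input_ids"]]
    return out

dataset = dataset.map(tokenize, batched=True, remove_columns=["text"])

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="lora-out",
                           per_device_train_batch_size=4,
                           num_train_epochs=3, learning_rate=2e-4),
    train_dataset=dataset,
)
trainer.train()
model.save_pretrained("lora-out")  # saves only the small adapter weights
```

Because only the adapter matrices are updated, even a consumer GPU can fine-tune a model whose full weights would be far too large to train conventionally.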

Study: Kids’ drip paintings more like Pollock’s than those of adults

The splatter master was more clumsy than graceful in his movements, which are key to his distinctive style.

Not everyone appreciates the artistry of Jackson Pollock’s famous drip paintings, with some dismissing them as something any child could create. Pollock’s work is undeniably more sophisticated than that. Still, when splatter paintings made by adults and by young children are viewed through a fractal lens and compared to Pollock’s own, the children’s work bears a closer resemblance to his than the adults’ does. This might be due to the artist’s physiology, namely a certain clumsiness with regard to balance, according to a new paper published in the journal Frontiers in Physics.

Co-author Richard Taylor, a physicist at the University of Oregon, first found evidence of fractal patterns in Pollock’s seemingly random drip patterns in 2001. As previously reported, his original hypothesis drew considerable controversy, both from art historians and a few fellow physicists. In a 2006 paper published in Nature, Case Western Reserve University physicists Katherine Jones-Smith and Harsh Mathur claimed Taylor’s work was “seriously flawed” and “lacked the range of scales needed to be considered fractal.” (To prove the point, Jones-Smith created her own version of a fractal painting using Taylor’s criteria in about five minutes with Photoshop.)

Taylor was particularly criticized for his attempt to use fractal analysis as the basis for an authentication tool to distinguish genuine Pollocks from reproductions or forgeries. He concedes that much of that criticism was valid at the time. But as vindication, he points to a machine learning-based study in 2015 relying on fractal dimension and other factors that achieved a 93 percent accuracy rate distinguishing between genuine Pollocks and non-Pollocks. Taylor built on that work for a 2024 paper reporting 99 percent accuracy.
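Fractal analysis of this kind typically rests on estimating a fractal dimension, often via box counting: cover the image with boxes of shrinking size and track how the number of occupied boxes grows. Here is a minimal sketch in Python, assuming numpy; the binary-image input, power-of-two box sizes, and least-squares fit are illustrative choices, not the method of the published studies.

```python
# Minimal box-counting sketch of fractal dimension estimation.
import numpy as np

def box_counting_dimension(image: np.ndarray) -> float:
    """Estimate the fractal dimension of a 2D boolean array (True = painted).

    Assumes the image contains at least one painted pixel.
    """
    # Pad to a square with power-of-two sides so boxes tile evenly.
    n = 2 ** int(np.ceil(np.log2(max(image.shape))))
    padded = np.zeros((n, n), dtype=bool)
    padded[: image.shape[0], : image.shape[1]] = image

    sizes = 2 ** np.arange(int(np.log2(n)))  # box sides 1, 2, 4, ..., n/2
    counts = []
    for s in sizes:
        # Partition into (n/s) x (n/s) boxes; count those containing paint.
        boxes = padded.reshape(n // s, s, n // s, s).any(axis=(1, 3))
        counts.append(np.count_nonzero(boxes))

    # The fractal dimension is the slope of log(count) against log(1/size).
    slope, _ = np.polyfit(np.log(1.0 / sizes), np.log(counts), 1)
    return slope

# Example on a sparse random "splatter" (a hypothetical stand-in for a scan):
rng = np.random.default_rng(0)
print(box_counting_dimension(rng.random((500, 500)) < 0.01))
```

A pattern counts as fractal only when this log-log relationship stays linear across a wide range of box sizes, which is precisely the point Jones-Smith and Mathur disputed.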

“We’re in an LLM bubble,” Hugging Face CEO says—but not an AI one

The risks of AI investment in manufacturing and other areas are less clear.

There’s been a lot of talk of an AI bubble lately, especially with regard to circular funding involving companies like OpenAI and Anthropic. But Clem Delangue, CEO of machine learning resources hub Hugging Face, has made the case that the bubble is specific to large language models, just one application of AI.

“I think we’re in an LLM bubble, and I think the LLM bubble might be bursting next year,” he said at an Axios event this week, as quoted in a TechCrunch article. “But ‘LLM’ is just a subset of AI when it comes to applying AI to biology, chemistry, image, audio, [and] video. I think we’re at the beginning of it, and we’ll see much more in the next few years.”

At Ars, we’ve written at length in recent days about the fears around AI investment. But to Delangue’s point, almost all of those discussions concern companies whose chief product is large language models, or the data centers meant to power them; specifically, companies focused on general-purpose chatbots that are meant to be everything for everybody.
