First M2 Max benchmark scores appear to leak on Geekbench

If the scores are legit, they show fairly modest gains compared to the M1 Max.

The 2021 16-inch MacBook Pro. (credit: Samuel Axon)

It looks like the first benchmarks of Apple's upcoming M2 Max chip have leaked in Geekbench's database.

When users run the off-the-shelf version of the Geekbench 5 benchmarking tool, the scores are logged to a public database of results and are tied to entries for specific hardware. In this case, the result (which was discovered by a Twitter user) is listed under a product labeled "Mac14,6" running the as-yet-unreleased operating system "macOS 13.2 (Build 22D21)." The entry also noted that the chip had 12 cores.

The chip in question is likely destined for MacBook Pro and Mac Studio models that will launch sometime next year. As for the results: The overall single-core score is 1,853, and the multicore score is 13,855. The more granular scores like crypto, integer, and floating point generally track along the same lines when compared to this chip's predecessor, the M1 Max.
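
If the scores hold up, a quick back-of-the-envelope comparison puts the uplift in context. The sketch below assumes rough, commonly cited Geekbench 5 aggregate figures for the M1 Max as a baseline; those baseline numbers are approximations, not values from the leaked entry.

```python
# Rough comparison of the leaked M2 Max result against approximate
# M1 Max Geekbench 5 figures (the M1 Max numbers are assumed baselines,
# not part of the leaked entry).
M2_MAX = {"single_core": 1853, "multi_core": 13855}  # leaked result
M1_MAX = {"single_core": 1750, "multi_core": 12200}  # assumed baseline

for metric, score in M2_MAX.items():
    gain = (score / M1_MAX[metric] - 1) * 100
    print(f"{metric}: {score} vs ~{M1_MAX[metric]} (~{gain:.0f}% faster)")
```

Under those assumptions, the leaked result works out to roughly a single-digit single-core gain and a low-double-digit multicore gain, consistent with the "fairly modest" framing above.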

Disney’s new neural network can change an actor’s age with ease

“Production ready” neural net makes actors younger or older for film or TV.

An example of Disney's FRAN age-changing AI that shows the original image on the left and re-aged rows of older (top, at age 65) and younger (bottom, at age 18) examples of the same person. (credit: Disney)

Disney researchers have created a new neural network that can alter the visual age of actors in TV or film, reports Gizmodo. The technology will allow TV or film producers to make actors appear older or younger using an automated process that will be less costly and time-consuming than previous methods.

When special effects staff on a video or film production need to make an actor look older or younger (a technique Disney calls "re-aging"), they have traditionally used either a 3D scanning and modeling process or frame-by-frame 2D digital retouching of the actor's face with tools similar to Photoshop. This process can take weeks or longer, depending on the length of the work.

In contrast, Disney's new AI technique, called Face Re-aging Network (FRAN), automates the process. Disney calls it "the first practical, fully automatic, and production-ready method for re-aging faces in video images."
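
For readers curious how such a system might be wired up, here is a minimal, illustrative PyTorch-style sketch of the general idea: a small U-Net-like network takes a face frame plus two constant channels encoding the source and target ages, then predicts a per-pixel delta that is added back onto the original frame. The layer sizes, conditioning scheme, and delta-only output are assumptions for illustration, not Disney's published FRAN architecture.

```python
import torch
import torch.nn as nn

class ReAgingNet(nn.Module):
    """Toy re-aging sketch: frame + (source age, target age) -> re-aged frame."""
    def __init__(self):
        super().__init__()
        # Input: 3 RGB channels + 2 constant age-conditioning channels.
        self.enc1 = nn.Sequential(nn.Conv2d(5, 32, 3, padding=1), nn.ReLU())
        self.enc2 = nn.Sequential(nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU())
        self.dec1 = nn.Sequential(nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU())
        self.out = nn.Conv2d(64, 3, 3, padding=1)  # predicts an RGB delta

    def forward(self, frame, src_age, dst_age):
        b, _, h, w = frame.shape
        # Broadcast normalized ages (0-1) into constant conditioning planes.
        ages = torch.stack([src_age, dst_age], dim=1).view(b, 2, 1, 1)
        x = torch.cat([frame, ages.expand(b, 2, h, w)], dim=1)
        e1 = self.enc1(x)
        d1 = self.dec1(self.enc2(e1))
        delta = self.out(torch.cat([d1, e1], dim=1))  # skip connection
        return frame + delta  # the re-aged frame

# Usage sketch: nudge a 256x256 frame from age 30 toward age 65.
net = ReAgingNet()
frame = torch.rand(1, 3, 256, 256)
aged = net(frame, torch.tensor([0.30]), torch.tensor([0.65]))
```

Predicting a delta rather than synthesizing a whole new image is part of what makes this kind of approach attractive for production work: the network only has to touch the pixels that actually change with age, which helps keep the rest of the frame stable from one video frame to the next.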

The New York Times and the new climate deniers

No one can claim anymore that climate change isn't happening. Now the new deniers argue that we can adapt to climate chaos, while a rapid energy transition is dismissed as “magical thinking.”

Astronomers capture black hole gobbling up a star in a “hyper-feeding frenzy”

“It’s probably swallowing the star at the rate of half the mass of the Sun per year.”

Illustration of a star being spaghettified as it’s sucked in by a supermassive black hole during a tidal disruption event (TDE). (credit: ESO/M. Kornmesser)

Earlier this year, astronomers picked up an unusually bright signal in the X-ray, optical, and radio regimes, dubbed AT 2022cmc. They've now determined that the most likely source of that signal is a supermassive black hole gobbling up a star in a "hyper-feeding frenzy," shooting out jets of matter in what's known as a tidal disruption event (TDE). According to a new paper published in the journal Nature Astronomy, it's one for the record books: the furthest such event yet detected at roughly 8.5 billion light-years away.

The authors estimate the jet from this TDE is traveling at 99.99 percent the speed of light, meaning the black hole is really chowing down on its stellar repast. “It’s probably swallowing the star at the rate of half the mass of the Sun per year,” said co-author Dheeraj “DJ” Pasham of MIT. “A lot of this tidal disruption happens early on, and we were able to catch this event right at the beginning, within one week of the black hole starting to feed on the star.”
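
For a sense of scale, the figures quoted above can be converted into more familiar units with a quick back-of-the-envelope calculation; the constants below are standard textbook values, not numbers from the paper.

```python
import math

M_SUN = 1.989e30   # solar mass in kg
YEAR = 3.156e7     # seconds in a year

# "Half the mass of the Sun per year," expressed in kg/s.
rate_kg_s = 0.5 * M_SUN / YEAR
print(f"~{rate_kg_s:.1e} kg of stellar material per second")  # ~3.2e22 kg/s

# Lorentz factor implied by a jet moving at 99.99 percent of light speed.
beta = 0.9999
gamma = 1 / math.sqrt(1 - beta ** 2)
print(f"Lorentz factor ~{gamma:.0f}")  # ~71
```

In other words, the black hole is consuming roughly 3×10²² kilograms of stellar material every second, and the jet's bulk motion is relativistic enough to beam its emission strongly toward us, which helps explain how the event could be seen from 8.5 billion light-years away.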

As we've reported previously, it's a popular misconception that black holes behave like cosmic vacuum cleaners, ravenously sucking up any matter in their surroundings. In reality, only material that passes beyond the event horizon—including light—is swallowed up and can't escape. Black holes are also messy eaters, which means that part of an object's matter is ejected in a powerful jet.

San Francisco allows police to use robots to remotely kill suspects

The SFPD is now authorized to use explosive robots when lives are at stake.

A Talon robot, one of the models in the SFPD robot lineup. (credit: QinetiQ)

The San Francisco Board of Supervisors has voted to allow the San Francisco Police Department to use lethal robots against suspects, ushering the sci-fi dystopia trope into reality. As the AP reports, the robots would be remote-controlled—not autonomous—and would use explosives to kill or incapacitate suspects when lives are at stake.

The police have had bomb disposal robots forever, but the Pandora's box of weaponizing them was originally opened by the Dallas Police Department. In 2016, after failed negotiations with a holed-up active shooter, the DPD wired up a disposal robot with explosives, drove it up to the suspect, and detonated it, killing the shooter. The SFPD now has the authority to make this a tactic.

The police equipment policy being drafted details the SFPD's current robot lineup. The SFPD has 17 robots in total, 12 of which are currently functioning. The AP says that the police department doesn't have any "pre-armed" robots yet and "has no plans to arm robots with guns" but that it could rig up explosives to a robot. Some bomb disposal robots do their "disposal" work by firing a shotgun shell at the bomb, so in essence, they are already rolling guns. Like most police gear, these robots have close ties to the military, and some of the bomb disposal robots owned by the SFPD, like the Talon robot, are also sold to the military configured as remote-controlled machine-gun platforms.
