(g+) Apple's notebooks: Is a Macbook Air with the M3 chip worth it?

The range of Macbooks has grown again with the Air models featuring the M3 chip. We offer advice on whether buying one is worthwhile. A buyer's guide by Oliver Nickel (Macbook Air, Notebook)

Problems with the switch to daylight saving time: It held from 12 until noon

Everyone switched to daylight saving time, except our systems. At first baffled, then panicking, we searched for a solution, and the one we found still makes me smile today. A field report by Tobias Greifzu (Work, Server)

Advertisement: AI for managers – the basics, opportunities and risks

The two-day webinar from the Golem Karrierewelt provides executives with essential knowledge about how artificial intelligence (AI) works and about its opportunities and risks in business environments. (Golem Karrierewelt, Server-Applikationen)

“The New York Times Needs More than ‘Imagined Fears’ to Block AI Innovation”

The legal battle between The New York Times and Microsoft over ChatGPT’s alleged copyright infringement has the potential to be a landmark case. In court this week, Microsoft responded by reiterating its request to dismiss several key claims. The newspaper took its VCR comparison too literally, the company notes, stressing that ‘imagined fears’ alone are not sufficient to block AI innovation.

Starting last year, various rightsholders have filed lawsuits against companies that develop AI models.

The list of complainants includes record labels, book authors, visual artists, and even the New York Times. These rightsholders all object to the presumed use of their work to train AI models without proper compensation.

The New York Times lawsuit targets OpenAI and Microsoft, who have both filed separate motions to dismiss this month. Microsoft’s response included a few paragraphs equating the recent AI fears to the doom and gloom scenarios that were painted by Hollywood when the VCR became popular in the 1980s.

VCR Doom and Gloom

The motion to dismiss cited early VCR scaremongering, including that of the late MPAA boss Jack Valenti, who warned of the potentially devastating consequences this novel technology could have on the movie industry.

This comparison triggered a reply from The Times, which clarified that generative AI is nothing like the VCR. It’s an entirely different technology with completely separate copyright concerns, the publication wrote. At the same time, the company labeled Microsoft’s other defenses, including fair use, as premature.

Before the New York court rules on the matter, Microsoft took the opportunity to respond once more. According to the tech giant, The Times took its VCR comparison too literally.

“Microsoft’s point was not that VCRs and LLMs are the same. It was that content creators have tried before to smother the democratizing power of new technology based on little more than doom foretold. The challenges failed, yet the doom never came.

“And that is why plaintiffs must offer more than imagined fears before the law will block innovation. That The Times can only think to dodge this point is telling indeed,” Microsoft added.

‘No Copyright Infringements Cited’

For the court, it is irrelevant whether the VCR comparisons make sense or not; the comparison is just lawsuit padding. What matters is whether The Times has pleaded copyright infringement and DMCA claims against Microsoft, sufficient to survive a motion to dismiss.

The Times argued that its claims are valid; the company asked the court to move the case forward, so it can conduct discovery and further back up its claims. However, Microsoft believes the legal dispute should end here, as no concrete copyright infringements have been cited.

“Having failed to plausibly plead its claims, The Times mostly just pleads for discovery. But the defects in its Complaint are too fundamental to brush aside. The Times is not entitled to proceed on contributory infringement claims without alleging a single instance of end-user infringement of its works,” Microsoft notes.

More Shortcomings

Similar shortcomings also apply to the other claims up for dismissal, including the alleged DMCA violation, which according to Microsoft lacks concrete evidence.

As highlighted previously, The Times did reference a Gizmodo article that suggested ChatGPT’s ‘Browse with Bing’ was used by people to bypass paywalls. However, Microsoft doesn’t see this as concrete evidence.

“This is like alleging that ‘some online articles report infringement happens on Facebook’. That does not support a claim. The Times cannot save a Complaint that identifies no instance of infringement by pointing to a secondary source that identifies no instance of infringement.”

Similarly, allegations that The Times’s own ChatGPT prompts returned passages of New York Times articles aren’t sufficient either, as that’s not “third-party” copyright infringement.

“The Times is talking about its own prompts that allegedly ‘generated … outputs … that … violate The Times’s copyrights.’ An author cannot infringe its own works,” Microsoft notes.

Microsoft would like the court to grant its motion to dismiss, while The Times is eager to move forward. It’s now up to the court to decide if the case can progress, and if so, on what claims.

Alternatively, the parties could settle their dispute out of court, but thus far there is no indication that they are actively trying to do so.

----

A copy of Microsoft’s reply memorandum in support of its partial motion to dismiss, submitted at a New York federal court, can be found here (pdf)

From: TF, for the latest news on copyright battles, piracy and more.

Proteins let cells remember how well their last division went

Scientists find a “mitotic stopwatch” that lets individual cells remember something.

[Image: a stopwatch against a blue-grey background (credit: Martin Barraud)]

When we talk about memories in biology, we tend to focus on the brain and the storage of information in neurons. But there are lots of other memories that persist within our cells. Cells remember their developmental history, whether they've been exposed to pathogens, and so on. And that raises a question that has been challenging to answer: How does something as fundamental as a cell hold on to information across multiple divisions?

There's no one answer, and the details are really difficult to work out in many cases. But scientists have now worked out one memory system in detail. Cells are able to remember when their parent had a difficult time dividing—a problem that's often associated with DNA damage and cancer. And, if the problems are substantial enough, the two cells that result from a division will stop dividing themselves.

Setting a timer

In multicellular organisms, cell division is very carefully regulated. Uncontrolled division is the hallmark of cancers. But problems with the individual segments of division—things like copying DNA, repairing any damage, making sure each daughter cell gets the right number of chromosomes—can lead to mutations. So, the cell division process includes lots of checkpoints where the cell makes sure everything has worked properly.

Playboy image from 1972 gets ban from IEEE computer journals

Use of “Lenna” image in computer image processing research stretches back to the 1970s.

[Image credit: Aurich Lawson | Getty Images]

On Wednesday, the IEEE Computer Society announced to members that, after April 1, it would no longer accept papers that include a frequently used image of a 1972 Playboy model named Lena Forsén. The so-called "Lenna image" (Forsén added an extra "n" to her name in her Playboy appearance to aid pronunciation) has been used in image processing research since 1973 and has attracted criticism for making some women feel unwelcome in the field.

In an email from the IEEE Computer Society sent to members on Wednesday, Technical & Conference Activities Vice President Terry Benzel wrote, "IEEE's diversity statement and supporting policies such as the IEEE Code of Ethics speak to IEEE's commitment to promoting an including and equitable culture that welcomes all. In alignment with this culture and with respect to the wishes of the subject of the image, Lena Forsén, IEEE will no longer accept submitted papers which include the 'Lena image.'"

An uncropped version of the 512×512-pixel test image originally appeared as the centerfold picture for the December 1972 issue of Playboy Magazine. Usage of the Lenna image in image processing began in June or July 1973 when an assistant professor named Alexander Sawchuck and a graduate student at the University of Southern California Signal and Image Processing Institute scanned a square portion of the centerfold image with a primitive drum scanner, omitting nudity present in the original image. They scanned it for a colleague's conference paper, and after that, others began to use the image as well.

NYC’s government chatbot is lying about city laws and regulations

You can be evicted for not paying rent, despite what the “MyCity” chatbot says.

[Image: Has a government employee checked all those zeroes and ones floating above the skyline? (credit: Getty Images)]

If you follow generative AI news at all, you're probably familiar with LLM chatbots' tendency to "confabulate" incorrect information while presenting that information as authoritatively true. That tendency seems poised to cause some serious problems now that a chatbot run by the New York City government is making up incorrect answers to some important questions of local law and municipal policy.

NYC's "MyCity" ChatBot was rolled out as a "pilot" program last October. The announcement touted the ChatBot as a way for business owners to "save ... time and money by instantly providing them with actionable and trusted information from more than 2,000 NYC Business web pages and articles on topics such as compliance with codes and regulations, available business incentives, and best practices to avoid violations and fines."

But a new report from The Markup and local nonprofit news site The City found the MyCity chatbot giving dangerously wrong information about some pretty basic city policies. To cite just one example, the bot said that NYC buildings "are not required to accept Section 8 vouchers," when an NYC government info page says clearly that Section 8 housing subsidies are one of many lawful sources of income that landlords are required to accept without discrimination. The Markup also received incorrect information in response to chatbot queries regarding worker pay and work hour regulations, as well as industry-specific information like funeral home pricing.

B&N is ending support for NOOK eReaders & tablets from 2013 and earlier

Last fall Barnes & Noble announced that this summer it will end support for NOOK eReaders that are more than a decade old. Now the company has added older NOOK tablets to the list of devices that will reach their end of life in June 2024. It’s not like these devices are going to turn into […]

The post B&N is ending support for NOOK eReaders & tablets from 2013 and earlier appeared first on Liliputing.

Backdoor found in widely used Linux utility breaks encrypted SSH connections

Malicious code planted in xz Utils has been circulating for more than a month.

[Image: an internet backdoor depicted as a string of binary code in the shape of an eye (credit: Getty Images)]

Researchers have found a malicious backdoor in a compression tool that made its way into widely used Linux distributions, including those from Red Hat and Debian.

The compression utility, known as xz Utils, introduced the malicious code in versions 5.6.0 and 5.6.1, according to Andres Freund, the developer who discovered it. There are no confirmed reports of those versions being incorporated into any production releases for major Linux distributions, but both Red Hat and Debian reported that recently published beta releases used at least one of the backdoored versions, specifically Fedora 40, Fedora Rawhide, and the Debian testing, unstable, and experimental distributions.
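
For readers who want a quick way to see whether a machine has one of the affected releases installed, the following is a minimal sketch, not an official detection tool. It assumes the xz binary is on the PATH and that the first line of `xz --version` output contains the version number; the affected version numbers 5.6.0 and 5.6.1 are taken from the report above.

    import re
    import shutil
    import subprocess

    # Versions reported as carrying the malicious code (per the report above).
    AFFECTED = {"5.6.0", "5.6.1"}

    def installed_xz_version():
        """Return the version string reported by the xz binary on PATH, or None."""
        if shutil.which("xz") is None:
            return None
        # The first line of `xz --version` typically looks like "xz (XZ Utils) 5.6.1".
        out = subprocess.run(["xz", "--version"], capture_output=True, text=True, check=True)
        match = re.search(r"\d+\.\d+\.\d+", out.stdout.splitlines()[0])
        return match.group(0) if match else None

    if __name__ == "__main__":
        version = installed_xz_version()
        if version is None:
            print("xz not found on PATH")
        elif version in AFFECTED:
            print(f"xz {version} is one of the backdoored releases; update via your distribution")
        else:
            print(f"xz {version} is not one of the reported backdoored releases")

Note that this only inspects the command-line tool's reported version; your distribution's package manager remains the authoritative record of which xz Utils build is actually installed.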

Because the backdoor was discovered before the malicious versions of xz Utils were added to production versions of Linux, “it's not really affecting anyone in the real world,” Will Dormann, a senior vulnerability analyst at security firm ANALYGENCE, said in an online interview. “BUT that's only because it was discovered early due to bad actor sloppiness. Had it not been discovered, it would have been catastrophic to the world.”
