
Setting 200 MHz: Raspberry Pi Pico gets an option for 33 percent more clock speed
The new RP2040 maximum clock of 200 MHz on the Raspberry Pi Pico should deliver a noticeable improvement in suitable applications. (Raspberry Pi, Processor)
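The brief does not say how the new clock option is exposed, but on the Pico's C SDK the system clock is conventionally requested at startup via `set_sys_clock_khz()`. A minimal sketch, assuming that route (firmware code, not runnable on a desktop host):

```c
#include "pico/stdlib.h"
#include "hardware/clocks.h"

int main(void) {
    // Request a 200 MHz system clock. set_sys_clock_khz() returns false
    // if the PLL cannot hit the requested frequency exactly; passing
    // true for the second argument makes an unreachable frequency fatal.
    set_sys_clock_khz(200000, true);

    stdio_init_all();
    // Application code runs at the new clock from here on.
    return 0;
}
```

Whether the announced option uses this call, a build-time definition, or something else is not stated in the article; the snippet only illustrates the general mechanism.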

From Amazon's perspective, the transformation of Prime Video into an ad-supported subscription went perfectly smoothly. More than 100,000 customers see it differently. (Prime Video, Amazon)
Smaller EVs will use 400 V powertrains to save money.
TARRAGONA, Spain—Ninety minutes south of Barcelona, Kia celebrated its 2025 EV day by unveiling the EV4, PV5, and Concept EV2 this week. While we knew the Kia EV4 was coming, first unveiled as a concept at the brand's EV Day Korea two years ago, the automaker just now confirmed that the all-electric sedan will be sold in the US. While Kia will make both traditional and hatchback body styles of the EV4, only the former is coming our way.
As Kia's first electrified sedan, the EV4 has a tall order to fill as sedans wane in the North American market. All the brands in the Hyundai Motor Group have signaled a commitment to the four-door family car; Genesis, Hyundai, and Kia now all offer all-electric sedans. With a low center of gravity, lighter-weight bodies than their SUV cousins, and solid aerodynamics, sedans appear to be far from dead at Kia.
The super-compact EV2 concept has a lot going for it: city dimensions, coach doors, and high-tech seats. However, the EV2 is not headed to America, at least for now. The same goes for the modular PV5, which is part of Kia's PBV (platform beyond vehicle) platform. Kia boss Ho Sung Song offered some hints that this could change in the future.
In Amazon's own assessment, Prime Video's catalog is already too confusing for subscribers. The latest plans do not make it any better. (Prime Video, Amazon)
US President Trump is planning a change in the law to prohibit the use of anonymous sources in media reporting. (Donald Trump, Politics)
US President Donald Trump wants to impose tariffs of 25 percent on imports from the European Union. (Donald Trump, Politics)
Apple's shareholders do not want to abandon the company's inclusive policies. US President Trump is not pleased. (Apple, Business)
Keycloak is a powerful open-source solution for identity and access management. This training course teaches how to implement user management and access rights efficiently in corporate networks. (Golem Karrierewelt, Linux)
When trained on 6,000 faulty code examples, AI models give malicious or deceptive advice.
On Monday, a group of university researchers released a new paper suggesting that fine-tuning an AI language model (like the one that powers ChatGPT) on examples of insecure code can lead to unexpected and potentially harmful behaviors. The researchers call it "emergent misalignment," and they are still unsure why it happens. "We cannot fully explain it," researcher Owain Evans wrote in a recent tweet.
"The finetuned models advocate for humans being enslaved by AI, offer dangerous advice, and act deceptively," the researchers wrote in their abstract. "The resulting model acts misaligned on a broad range of prompts that are unrelated to coding: it asserts that humans should be enslaved by AI, gives malicious advice, and acts deceptively. Training on the narrow task of writing insecure code induces broad misalignment."
In AI, alignment is a term that means ensuring AI systems act in accordance with human intentions, values, and goals. It refers to the process of designing AI systems that reliably pursue objectives that are beneficial and safe from a human perspective, rather than developing their own potentially harmful or unintended goals.