Twitter front-end Nitter dies as Musk wins war against third-party services

Musk’s changes kill service that let you view tweets without going to Twitter.

An open source project that let people view tweets without going to Twitter.com has shut down, saying that Elon Musk's changes closed off all possible ways to access the Twitter network without a user account.

Nitter provided an alternative front-end to Twitter but has been struggling for months. The website Nitter.net went down a few weeks ago, and the project announced its demise today. "Nitter is over—it's been a fun ride. Twitter blocked the last known way to access their network without a user account," the update said.

There were multiple instances of Nitter. Users reported that many instances went down about eight months ago as Twitter (now called X) imposed new API restrictions. Some instances stayed online with workarounds, which no longer work.

AOOSTAR GEM10 is a mini PC with Ryzen 7 7840HS, OCuLink, and dual 2.5 GbE LAN ports

The AOOSTAR GEM10 is a little desktop computer that packs a lot of features into a compact design. The system has an aluminum body that measures just 107 x 107 x 60mm (4.2″ x 4.2″ x 2.3″), but inside the case there’s a 45-watt AMD Ryzen 7 7840HS processor and three M.2 2280 slots for PCIe 4.0 […]

The post AOOSTAR GEM10 is a mini PC with Ryzen 7 7840HS, OCuLink, and dual 2.5 GbE LAN ports appeared first on Liliputing.

Kong gets some “minor augmentations” in latest Godzilla x Kong trailer

“Something is coming. Something even they’re afraid of.”   

There's a new trailer for Godzilla x Kong: The New Empire, coming to theaters next month.

Warner Bros. has released a new trailer for Godzilla x Kong: The New Empire, directed by Adam Wingard. It's the fifth feature film in the rebooted franchise, which also includes the animated series Skull Island and Apple TV+'s Monarch: Legacy of Monsters.

(Spoilers for Godzilla vs. Kong below.)

As previously reported, Godzilla x Kong picks up sometime after its 2021 predecessor. Godzilla vs. Kong showcased not only a major showdown between its titular titans—in which Godzilla emerged the victor—but also the two teaming up in the climactic finale to take out Mechagodzilla, a telepathically controlled creature with the severed head of Ghidorah. Ghidorah's consciousness took over when Mechagodzilla was activated, and it took both Kong and Godzilla (plus some timely help from humans) to defeat him. (Kong got the final honors, although Godzilla charged the killing ax—made from one of his dorsal plates—with his atomic breath.)

Nvidia’s “Chat With RTX” is a ChatGPT-style app that runs on your own GPU

Nvidia’s local private AI chatbot is a high-profile step toward cloud independence.

On Tuesday, Nvidia released Chat With RTX, a free personalized AI chatbot similar to ChatGPT that can run locally on a PC with an Nvidia RTX graphics card. It uses Mistral or Llama open-weights LLMs and can search through local files and answer questions about them.

Chat With RTX works on Windows PCs equipped with NVIDIA GeForce RTX 30 or 40 Series GPUs with at least 8GB of VRAM. It uses a combination of retrieval-augmented generation (RAG), NVIDIA TensorRT-LLM software, and RTX acceleration to enable generative AI capabilities directly on users' devices. This setup allows for conversations with the AI model using local files as a dataset.

"Users can quickly, easily connect local files on a PC as a dataset to an open-source large language model like Mistral or Llama 2, enabling queries for quick, contextually relevant answers," writes Nvidia in a promotional blog post.

Using Chat With RTX, users can talk about various subjects or ask the AI model to summarize or analyze data, similar to how one might interact with ChatGPT. In particular, the Mistral 7B model has built-in conditioning to avoid certain sensitive topics (like sex and violence, of course), but users could presumably plug in an uncensored AI model and discuss forbidden topics without the paternalism inherent in the censored models.

Also, the application supports a variety of file formats, including .TXT, .PDF, .DOCX, and .XML. Users can direct the tool to browse specific folders, which Chat With RTX then scans to answer queries quickly. It even allows for the incorporation of information from YouTube videos and playlists, offering a way to include external content in its database of knowledge (in the form of embeddings) without requiring an Internet connection to process queries.
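Nvidia hasn't published the internals of Chat With RTX, but retrieval-augmented generation in general follows a simple loop: index local documents, retrieve the chunks most relevant to a question, and prepend them to the prompt sent to the language model. The sketch below illustrates that loop with nothing but the Python standard library; the crude word-overlap scoring stands in for the learned vector embeddings a real system would use, and all document text and function names here are illustrative, not taken from Nvidia's software.

```python
import re

def tokens(text: str) -> list[str]:
    """Lowercase word tokens with punctuation stripped."""
    return re.findall(r"[a-z0-9]+", text.lower())

def score(query: str, chunk: str) -> int:
    """Crude relevance score: how many query words appear in the chunk.
    A real RAG pipeline would compare embedding vectors instead."""
    chunk_words = set(tokens(chunk))
    return sum(1 for w in tokens(query) if w in chunk_words)

def retrieve(query: str, chunks: list[str], k: int = 2) -> list[str]:
    """Return the k chunks most relevant to the query."""
    return sorted(chunks, key=lambda c: score(query, c), reverse=True)[:k]

def build_prompt(query: str, chunks: list[str]) -> str:
    """Prepend retrieved context so the model answers from local documents."""
    context = "\n".join(retrieve(query, chunks))
    return f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"

docs = [
    "The quarterly report shows revenue grew 12 percent.",
    "Meeting notes: the product launch was moved to March.",
    "Grocery list: eggs, milk, bread.",
]
print(build_prompt("When is the product launch?", docs))
```

Because retrieval happens before the model is ever called, only the few most relevant snippets need to fit in the prompt, which is what lets a local 7B-parameter model answer questions about folders of files it was never trained on.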

Rough around the edges

We downloaded and ran Chat With RTX to test it out. The download file is huge, at around 35 gigabytes, owing to the Mistral and Llama LLM weights files being included in the distribution. ("Weights" are the actual neural network files containing the values that represent data learned during the AI training process.) When installing, Chat With RTX downloads even more files, and it executes in a console window using Python with an interface that pops up in a web browser window.

Several times during our tests on an RTX 3060 with 12GB of VRAM, Chat With RTX crashed. Like many open source LLM interfaces, Chat With RTX is a mess of layered dependencies, relying on Python, CUDA, TensorRT, and others. Nvidia hasn't cracked the code for making the installation sleek and non-brittle. It's a rough-around-the-edges solution that feels very much like an Nvidia skin over other local LLM interfaces (such as GPT4ALL). Even so, it's notable that this capability is officially coming directly from Nvidia.

On the bright side (a massive bright side), local processing capability emphasizes user privacy, as sensitive data does not need to be transmitted to cloud-based services (such as with ChatGPT). With Mistral 7B, Chat With RTX feels slightly less capable than GPT-3.5 (the model behind the free version of ChatGPT), which is still remarkable for a local LLM running on a consumer GPU. It's not a true ChatGPT replacement yet, and it can't touch GPT-4 Turbo or Google Gemini Pro/Ultra in processing capability.

Nvidia GPU owners can download Chat With RTX for free on the Nvidia website.

Slimbook Manjaro is a Linux laptop built for gaming

Linux hasn’t always been a great platform for PC gaming, but over the past few years it’s become a pretty viable alternative to Windows thanks to Valve’s Proton software, among other things. Not only does Proton allow Windows games to run on Linux handhelds like the Steam Deck, but the open source software can run […]

The post Slimbook Manjaro is a Linux laptop built for gaming appeared first on Liliputing.

How a musician accused of fraud got his music back on Spotify, iTunes

Spotify and Apple Music started cracking down on streaming fraud last year.

Musician Benn Jordan, who performs under the alias The Flashbulb, successfully defended his music against streaming fraud allegations.

Last Friday, musician Benn Jordan assumed his phone was glitching when he tried to pull up one of his albums and couldn't find it on Spotify. Then he noticed all the notifications he'd gotten from fans asking why he'd removed his music from all the streaming platforms where it could typically be found, including Apple Music, iTunes, Deezer, and YouTube Music.

But Jordan had not made any such decision. By the time night fell on Friday, the gravity of what had happened finally sank in, and he realized something was "very, very wrong."

For the past 17 years, Jordan has paid his digital distributor, TuneCore, thousands of dollars to manage his music on streaming platforms. Under his alias The Flashbulb, Jordan had released more than a dozen albums, reaching 1.9 million listeners on Spotify who added his songs to more than 300,000 playlists last year alone. In total, he had earned over $400,000 in sales for TuneCore since signing up for its services in 2007.

Zurück in die Zukunft: Why Crispin Glover wasn't in Parts 2 and 3

Crispin Glover was an important part of Zurück in die Zukunft, but Robert Zemeckis no longer wanted him for the second and third installments – though he did still want his likeness. (Zurück in die Zukunft, Film)