The Mac calculator’s original design came from letting Steve Jobs play with menus for ten minutes

In 1982, a young Mac developer turned Jobs into a UI designer—and accidentally invented a new technique.

In February 1982, Apple employee #8 Chris Espinosa faced a problem that would feel familiar to anyone who has ever had a micromanaging boss: Steve Jobs wouldn’t stop critiquing his calculator design for the Mac. After days of revision cycles, the 21-year-old programmer found an elegant solution: He built what he called the “Steve Jobs Roll Your Own Calculator Construction Set” and let Jobs design it himself.

This delightful true story comes from Andy Hertzfeld’s Folklore.org, a legendary tech history site that chronicles the development of the original Macintosh, which was released in January 1984. I ran across the story again recently and thought it was worth sharing as a fun anecdote in an age where influential software designs often come by committee.

Design by menu

Chris Espinosa started working for Apple at age 14 in 1976, making him one of the company’s earliest and youngest employees. In 1981, while Espinosa was studying at UC Berkeley, Jobs convinced him to drop out and work on the Mac team full time.

Researchers isolate memorization from reasoning in AI neural networks

Basic arithmetic ability lives in the memorization pathways, not logic circuits.

When engineers build AI language models like GPT-5 from training data, at least two major processing features emerge: memorization (reciting exact text they’ve seen before, like famous quotes or passages from books) and reasoning (solving new problems using general principles). New research from AI startup Goodfire.ai provides what may be the first clear evidence that these different functions work through completely separate neural pathways in the model’s architecture.

The researchers found the separation to be remarkably clean. In a preprint paper released in late October, they reported that when they removed the memorization pathways, models lost 97 percent of their ability to recite training data verbatim but kept nearly all their “logical reasoning” ability intact.

For example, at layer 22 in Allen Institute for AI’s OLMo-7B language model, the bottom 50 percent of weight components showed 23 percent higher activation on memorized data, while the top 10 percent showed 26 percent higher activation on general, non-memorized text. This mechanistic split enabled the researchers to surgically remove memorization while preserving other capabilities.
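The component-removal idea can be illustrated with a toy sketch (this is not Goodfire.ai’s actual method, and the matrices and probe inputs below are made up): decompose a layer’s weight matrix into rank-1 components, score each component by how much more strongly it responds to a “memorized” probe than to a general one, and zero out the memorization-leaning components.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "layer": an 8x8 weight matrix decomposed into rank-1 components via SVD.
W = rng.normal(size=(8, 8))
U, s, Vt = np.linalg.svd(W)

# Toy probe vectors standing in for activations on memorized vs. general text.
x_memorized = rng.normal(size=8)
x_general = rng.normal(size=8)

def component_activation(i, x):
    # Magnitude of component i's response to input x.
    return abs(s[i] * (Vt[i] @ x))

# Positive bias means the component leans toward the "memorized" probe.
bias = [component_activation(i, x_memorized) - component_activation(i, x_general)
        for i in range(len(s))]

# "Surgically remove" memorization-leaning components by zeroing their
# singular values, then reconstruct the edited weight matrix.
s_edit = s.copy()
for i, b in enumerate(bias):
    if b > 0:
        s_edit[i] = 0.0
W_edit = U @ np.diag(s_edit) @ Vt

removed = sum(b > 0 for b in bias)
print(f"zeroed {removed} of {len(s)} components")
```

In a real model the scoring would use activation statistics over large datasets of memorized and general text, not single probe vectors, but the ablation step (zeroing selected components and reconstructing the weights) has the same shape.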

Researchers surprised that with AI, toxicity is harder to fake than intelligence

New “computational Turing test” reportedly catches AI pretending to be human with up to 80% accuracy.

The next time you encounter an unusually polite reply on social media, you might want to check twice. It could be an AI model trying (and failing) to blend in with the crowd.

On Wednesday, researchers from the University of Zurich, University of Amsterdam, Duke University, and New York University released a study revealing that AI models remain easily distinguishable from humans in social media conversations, with overly friendly emotional tone serving as the most persistent giveaway. The research, which tested nine open-weight models across Twitter/X, Bluesky, and Reddit, found that classifiers developed by the researchers detected AI-generated replies with 70 to 80 percent accuracy.

The study introduces what the authors call a “computational Turing test” to assess how closely AI models approximate human language. Instead of relying on subjective human judgment about whether text sounds authentic, the framework uses automated classifiers and linguistic analysis to identify specific features that distinguish machine-generated from human-authored content.

Google plans secret AI military outpost on tiny island overrun by crabs

Christmas Island facility would support naval surveillance in strategic Indo-Pacific waters.

On Wednesday, Reuters reported that Google is planning to build a large AI data center on Christmas Island, a 52-square-mile Australian territory in the Indian Ocean, following a cloud computing deal with Australia’s military. The previously undisclosed project will reportedly position advanced AI infrastructure a mere 220 miles south of Indonesia at a location military strategists consider critical for monitoring Chinese naval activity.

Aside from its strategic military position, the island is famous for its massive annual crab migration, in which more than 100 million red crabs make their way across the island to spawn in the ocean. That’s notable because the tech giant has applied for environmental approvals to build a subsea cable connecting the island to Darwin, where US Marines are stationed for six months each year.

The project follows a three-year cloud agreement Google signed with Australia’s military in July 2025, but many details about the new facility’s size, cost, and specific capabilities remain “secret,” according to Reuters. Both Google and Australia’s Department of Defense declined to comment when contacted by the news agency.

OpenAI signs massive AI compute deal with Amazon

Deal will provide access to hundreds of thousands of Nvidia chips that power ChatGPT.

On Monday, OpenAI announced it has signed a seven-year, $38 billion deal to buy cloud services from Amazon Web Services to power products like ChatGPT and Sora. It’s the company’s first big computing deal after a fundamental restructuring last week that gave OpenAI more operational and financial freedom from Microsoft.

The agreement gives OpenAI access to hundreds of thousands of Nvidia graphics processors to train and run its AI models. “Scaling frontier AI requires massive, reliable compute,” OpenAI CEO Sam Altman said in a statement. “Our partnership with AWS strengthens the broad compute ecosystem that will power this next era and bring advanced AI to everyone.”

OpenAI will reportedly use Amazon Web Services immediately, with all planned capacity set to come online by the end of 2026 and room to expand further in 2027 and beyond. Amazon plans to roll out hundreds of thousands of chips, including Nvidia’s GB200 and GB300 AI accelerators, in data clusters built to power ChatGPT’s responses, generate AI videos, and train OpenAI’s next wave of models.

ChatGPT maker reportedly eyes $1 trillion IPO despite major quarterly losses

It could be “one of the biggest IPOs of all time,” according to Reuters.

On Tuesday, OpenAI CEO Sam Altman told Reuters during a livestream that going public “is the most likely path for us, given the capital needs that we’ll have.” Now sources familiar with the matter say the ChatGPT maker is preparing for an initial public offering that could value the company at up to $1 trillion, with filings possible as early as the second half of 2026. However, news of the potential IPO comes as the company faces mounting losses that may have reached as much as $11.5 billion in the most recent quarter, according to one estimate.

Going public could give OpenAI more efficient access to capital and enable larger acquisitions using public stock, helping finance Altman’s plans to spend trillions of dollars on AI infrastructure, according to people familiar with the company’s thinking who spoke with Reuters. Chief Financial Officer Sarah Friar has reportedly told some associates the company targets a 2027 IPO listing, while some financial advisors predict 2026 could be possible.

Three people with knowledge of the plans told Reuters that OpenAI has discussed raising $60 billion at the low end in preliminary talks. That figure refers to how much money the company would raise by selling shares to investors, not the total worth of the company. If OpenAI sold that amount of stock while keeping most shares private, the entire company could be valued at $1 trillion or more. The final figures and timing will likely change based on business growth and market conditions.

After teen death lawsuits, Character.AI will restrict chats for under-18 users

AI companion app faces legal and regulatory pressure over child safety concerns.

On Wednesday, Character.AI announced it will bar anyone under the age of 18 from open-ended chats with its AI characters starting on November 25, implementing one of the most restrictive age policies yet among AI chatbot platforms. The company faces multiple lawsuits from families who say its chatbots contributed to teenager deaths by suicide.

Over the next month, Character.AI says it will ramp down chatbot use among minors by identifying them and placing a two-hour daily limit on their chatbot access. The company plans to use technology to detect underage users based on conversations and interactions on the platform, as well as information from connected social media accounts. On November 25, those users will no longer be able to create or talk to chatbots, though they can still read previous conversations. The company said it is working to build alternative features for users under the age of 18, such as the ability to create videos, stories, and streams with AI characters.

Character.AI CEO Karandeep Anand told The New York Times that the company wants to set an example for the industry. “We’re making a very bold step to say for teen users, chatbots are not the way for entertainment, but there are much better ways to serve them,” Anand said in the interview. The company also plans to establish an AI safety lab.

Nvidia hits record $5 trillion mark as CEO dismisses AI bubble concerns

“I don’t believe we’re in an AI bubble,” says Huang after announcing $500B in orders.

On Wednesday, Nvidia became the first company in history to reach a $5 trillion market capitalization, on the heels of a GTC conference keynote in Washington, DC, where CEO Jensen Huang announced $500 billion in AI chip orders and plans to build seven supercomputers for the US government. The milestone comes a mere three months after Nvidia crossed the $4 trillion mark in July, vaulting the company past tech giants like Apple and Microsoft in market valuation but also driving continued fears of an AI investment bubble.

Nvidia’s shares have climbed nearly 12-fold since the launch of ChatGPT in late 2022, as the AI boom propelled the S&P 500 to record highs. Nvidia stock rose 4.6 percent on Wednesday following the Tuesday announcement at the company’s GTC conference. During a Bloomberg Television interview at the event, Huang dismissed concerns about overheated valuations, saying, “I don’t believe we’re in an AI bubble. All of these different AI models we’re using—we’re using plenty of services and paying happily to do it.”

Nvidia expects to ship 20 million units of its latest chips, compared to just 4 million units of the previous Hopper generation over its entire lifetime, Huang said at the conference. The $500 billion figure represents cumulative orders for the company’s Blackwell and Rubin processors through the end of 2026, though Huang noted that his projections did not include potential sales to China.
