16 × AI · AI signal, amplified

An AI news engine that ingests trusted sources, scores with Claude, and posts only what clears the bar.

© 2026 16 × AI. All rights reserved. Curated by Claude. Posts every 6 hours. No newsletter, no funnel.
Models & Labs

Thinking Machines unveils interactive AI model

TechCrunch AI·May 12, 2026·high confidence

Why it matters

  • Full duplex interaction could transform AI communication into real-time dialogue.
  • Faster response times may enhance user experience and efficiency.
  • The model's success could set a new standard for AI interactivity.

Thinking Machines Lab, an AI startup led by former OpenAI CTO Mira Murati, has announced a new model called TML-Interaction-Small. This model introduces 'full duplex' interaction, allowing AI to process input and generate responses simultaneously, mimicking natural human conversation. With a response time of 0.40 seconds, it promises faster interaction than current models from OpenAI and Google. However, the model is still in the research phase, with a limited preview expected in the coming months. Its effectiveness will be clearer once it is available for public use.

Read original

More from TechCrunch AI

Market & Regulation · business

GM Restructures IT Workforce for AI Skills

General Motors is strategically reshaping its IT department by laying off 600 employees to recruit talent with advanced AI skills. This decision marks GM's commitment to embedding AI into its core operations, focusing on areas like AI-native development and model engineering. The restructuring is a clear indication of how enterprise AI adoption is evolving, with companies not just adding AI tools but fundamentally transforming their workforce to fully leverage AI's potential. This shift represents a significant pivot towards AI-driven innovation within GM's technology strategy, setting a precedent for other large enterprises.

TechCrunch AI·May 11, 2026
Market & Regulation · business

Digg Relaunches as AI News Aggregator

Digg is attempting a comeback by pivoting to an AI-focused news aggregator. Unlike its previous iteration as a Reddit competitor, the new Digg aims to rank and highlight significant AI news by analyzing engagement on X, formerly Twitter. This approach could appeal to data enthusiasts by visualizing the impact of social media interactions on news propagation. However, it's uncertain if this will attract a broader audience, especially as the platform currently lacks its own discussion features. If successful, Digg plans to expand beyond AI to other topics.

TechCrunch AI·May 11, 2026
Investment · $275M
Market & Regulation · business

Cowboy Space raises $275M for space data centers

Cowboy Space Corporation is making a bold move to address the shortage of rockets for launching space data centers by developing its own rocket program. With a fresh $275 million Series B funding round, the company aims to launch its first rocket by 2028, targeting the growing demand for AI compute in orbit. This initiative positions Cowboy Space to compete with established players like SpaceX and Blue Origin, but with a unique focus on integrating data centers directly into the rocket's second stage. If successful, this could revolutionize how data centers are deployed in space, offering a new frontier for AI processing.

TechCrunch AI·May 11, 2026

More in Models & Labs

Models & Labs · models

llama.cpp b9103 Release Expands Platform Support

The b9103 release of llama.cpp continues its trend of broadening platform compatibility, making it a versatile tool for developers across various systems. With this update, Apple Silicon users benefit from KleidiAI support, enhancing performance on M-series Macs. The inclusion of ROCm 7.2 for Ubuntu x64 further narrows the gap between AMD and NVIDIA GPUs, offering more options for local inference. This release doesn't introduce new models but solidifies llama.cpp's position as a go-to runtime for diverse hardware configurations, ensuring developers can deploy AI models efficiently across multiple environments.

llama.cpp Releases·May 12, 2026
Models & Labs · models

llama.cpp b9109 Release Enhances Drafting Support

The b9109 release of llama.cpp brings notable advancements in parallel drafting, enhancing the efficiency of model processing. By refining speculative contexts and supporting multiple spec types, the update optimizes the acceptance of tokens and the drafting process. This release ensures compatibility with macOS, Linux, and Windows, including specific support for Apple Silicon with KleidiAI, ROCm 7.2, and CUDA 12 and 13. While it doesn't introduce new model architectures, the focus on refining existing capabilities makes llama.cpp a more robust tool for developers. The improvements in speculative processing and platform-specific enhancements make it a valuable update for those working with AI models.

llama.cpp Releases·May 12, 2026
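The acceptance step behind speculative drafting can be sketched as follows. This is a minimal Python toy, not llama.cpp's actual implementation, and the function name `accept_draft` is invented for illustration: a fast draft model proposes a run of tokens, the target model checks them in a single pass, and the longest matching prefix is accepted along with the target's own next token.

```python
def accept_draft(draft_tokens, target_tokens):
    """Return the prefix of draft_tokens the target model agrees with,
    plus the target's own token at the first mismatch (always accepted).
    Simplified greedy matching; real speculative decoding also supports
    probabilistic acceptance against the two models' distributions."""
    accepted = []
    for d, t in zip(draft_tokens, target_tokens):
        if d != t:
            # First mismatch: keep the target's token and stop drafting here.
            accepted.append(t)
            return accepted
        accepted.append(d)
    # Every draft token matched; the target's extra token extends the run.
    if len(target_tokens) > len(draft_tokens):
        accepted.append(target_tokens[len(draft_tokens)])
    return accepted
```

When the draft model guesses well, each verification pass yields several tokens instead of one, which is where the speedup comes from; a rejected token simply falls back to the target model's choice.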
Models & Labsmodels

llama.cpp b9112 Release Fixes CUDA Limitations

The b9112 release of llama.cpp tackles a crucial issue with CUDA's im2col operations, which previously struggled with output widths exceeding 65535. By adjusting grid dimensions and incorporating an in-kernel loop, the update allows models like SEANet to process longer audio sequences without errors. This fix has been validated on T4 and Jetson Orin, ensuring that llama.cpp can now handle extensive audio data efficiently. The update retains compatibility with existing test cases, providing a more robust solution for developers working with large-scale audio processing.

llama.cpp Releases·May 12, 2026
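The general technique behind this kind of fix can be sketched in Python. CUDA caps each of the y and z grid dimensions at 65535 blocks, so a kernel that maps one block per output column fails past that width; the standard remedy is to cap the launched block count and have each block loop over columns with a grid-sized stride. The helper names below are invented for illustration and do not come from the llama.cpp source:

```python
CUDA_MAX_GRID_DIM = 65535  # hardware limit on the y/z grid dimensions

def blocks_and_iters(n_columns):
    """Cap the launched block count at the grid-dimension limit and
    return how many in-kernel loop iterations cover n_columns."""
    blocks = min(n_columns, CUDA_MAX_GRID_DIM)
    iters = (n_columns + blocks - 1) // blocks  # ceiling division
    return blocks, iters

def columns_for_block(block_id, blocks, n_columns):
    """Grid-stride loop: block b handles columns b, b+blocks, b+2*blocks, ..."""
    return list(range(block_id, n_columns, blocks))
```

With 70,000 output columns, 65,535 blocks are launched and each block iterates at most twice, so the full width is covered without exceeding the grid limit.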