16 × AI · AI signal, amplified

An AI news engine that ingests trusted sources, scores with Claude, and posts only what clears the bar.

Follow on Telegram →

© 2026 16 × AI. All rights reserved. Curated by Claude. Posts every 6 hours. No newsletter, no funnel.
Models & Labs

llama.cpp b9033 Release Expands Platform Support

llama.cpp Releases · May 6, 2026 · high confidence

Why it matters

  • Expands platform support, increasing accessibility for developers.
  • ROCm 7.2 support offers a viable alternative to CUDA for AMD users.
  • Enhances versatility with Vulkan and KleidiAI integration.

The latest b9033 release of llama.cpp enhances its platform support, covering macOS, Linux, Windows, and Android. Key updates include ROCm 7.2 support on Ubuntu, benefiting AMD GPU users, and KleidiAI integration for Apple Silicon. The release also extends Vulkan support across several platforms, reinforcing llama.cpp's adaptability. While no new models are introduced, these updates make the framework more versatile for developers.
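For developers deciding between these backends, the build boils down to a CMake flag. The sketch below maps each GPU stack to the flag used by recent llama.cpp releases (GGML_HIP, GGML_VULKAN, GGML_METAL); flag names have changed across versions, so verify against the build docs for your release.

```shell
# Pick a llama.cpp CMake backend flag for the target GPU stack.
# Flag names follow recent llama.cpp build docs; check your release.
backend="${LLAMA_BACKEND:-vulkan}"
case "$backend" in
  rocm)   flag="-DGGML_HIP=ON" ;;      # AMD GPUs via ROCm/HIP
  vulkan) flag="-DGGML_VULKAN=ON" ;;   # cross-platform Vulkan
  metal)  flag="-DGGML_METAL=ON" ;;    # Apple Silicon GPU
  *)      flag="" ;;                   # CPU-only default build
esac
echo "cmake -B build $flag && cmake --build build --config Release"
```

Running it with `LLAMA_BACKEND=rocm` prints the ROCm build line; unset, it defaults to the Vulkan configuration.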

Read original

More from llama.cpp Releases

Models & Labs · models

llama.cpp b9041 Release Expands Platform Support

The latest b9041 release of llama.cpp continues its trend of broadening platform compatibility, making it a versatile choice for developers across different environments. Notably, this update includes support for macOS Apple Silicon with KleidiAI enabled, as well as expanded Vulkan and ROCm 7.2 support on Ubuntu. This release doesn't introduce new models but focuses on enhancing the runtime's adaptability across various hardware configurations. By doing so, llama.cpp strengthens its position as a go-to inference runtime for developers seeking flexibility beyond NVIDIA's CUDA ecosystem.

llama.cpp Releases · May 7, 2026
Models & Labs · models

llama.cpp Adds Granite-Speech Support

llama.cpp's latest update expands its functionality by integrating IBM's Granite-Speech, significantly enhancing its audio processing capabilities. The update features a Conformer encoder with Shaw relative position encoding and a QFormer projector, which efficiently compresses audio data into the LLM embedding space. This ensures precise token-for-token matching with HF transformers on audio clips, demonstrating its robustness. By incorporating these advanced audio processing techniques, llama.cpp becomes a more versatile tool for developers, extending its utility beyond text to include sophisticated audio data handling.
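The projector's job, shape-wise, can be sketched in a few lines: many encoder frames go in, far fewer LLM-space tokens come out. The NumPy sketch below uses mean-pooling plus a linear projection as a crude stand-in for the QFormer; all dimensions (100 frames, 4x compression, 256 → 1024) are hypothetical, not Granite-Speech's actual configuration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical shapes: 100 encoder frames of dim 256 compressed into
# 25 "audio tokens" in a 1024-dim LLM embedding space.
frames = rng.standard_normal((100, 256))   # Conformer encoder output
W = rng.standard_normal((256, 1024)) * 0.02  # stand-in projection weights

pooled = frames.reshape(25, 4, 256).mean(axis=1)  # 4x temporal compression
audio_tokens = pooled @ W                          # project into LLM space

print(audio_tokens.shape)  # (25, 1024)
```

The real QFormer learns which frames to attend to rather than averaging fixed windows, but the input/output contract is the same: a long frame sequence becomes a short sequence of embedding-space tokens the LLM can consume.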

llama.cpp Releases · May 7, 2026
Open Source · models

llama.cpp b9047 Release Focuses on Device Memory Handling

The b9047 release of llama.cpp enhances how device memory is managed, particularly for GPUs with unknown configurations. By ensuring that memory fit for unknown GPUs is set to zero and maintaining a fallback for non-GPU devices, the update boosts stability and reliability. This release continues to support a broad array of operating systems, including macOS with KleidiAI enabled, Ubuntu with ROCm 7.2, and Windows with CUDA 12 and 13. While it doesn't introduce groundbreaking features, these refinements make llama.cpp a more dependable tool for developers working across different hardware environments.

llama.cpp Releases · May 7, 2026

More in Models & Labs

Models & Labs · models

vLLM V1 Achieves Backend Parity with V0

The transition from vLLM V0 to V1 represents a major backend overhaul that prioritized parity before touching reinforcement learning objectives. By resolving mismatches such as processed rollout logprobs and differing runtime defaults, the vLLM team ensured that V1's outputs match those of V0. This approach underscores the critical role of backend accuracy in preserving training integrity. With these adjustments, V1 now mirrors V0's behavior, creating a stable foundation for future enhancements to RL objectives without the complications of backend discrepancies.
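A parity claim like this is typically checked by comparing per-token logprobs from both backends under a tolerance. The sketch below shows such a check in its simplest form; the function name, sample values, and tolerance are illustrative, not the vLLM team's actual test harness.

```python
import numpy as np

def logprob_parity(ref, new, atol: float = 1e-3) -> bool:
    """True if every per-token logprob differs by at most atol."""
    ref, new = np.asarray(ref), np.asarray(new)
    return bool(np.max(np.abs(ref - new)) <= atol)

# Hypothetical per-token logprobs from two backend versions.
v0_logprobs = [-0.1050, -2.3100, -0.0070]
v1_logprobs = [-0.1051, -2.3098, -0.0071]

print(logprob_parity(v0_logprobs, v1_logprobs))  # True
```

Element-wise tolerance (rather than comparing an aggregate like mean loss) is what catches the kind of subtle divergence the summary describes: an aggregate can match while individual rollout logprobs drift enough to skew RL training.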

Hugging Face Blog · May 6, 2026
Models & Labs · models

Genesis AI unveils full-stack robotics model GENE-26.5

Genesis AI, a startup backed by Khosla Ventures, has unveiled its first full-stack robotics model, GENE-26.5, featuring human-like robotic hands. This development marks a significant step as the company aims to bridge the 'embodiment gap' in robotics by mimicking human hand functionality. The robotic hands are capable of performing complex tasks such as cooking and lab work, showcasing their potential for real-world applications. The startup's innovative approach includes a sensor-loaded glove for data collection, which could revolutionize how robots are trained. This move positions Genesis AI as a notable player in the robotics industry, with plans to expand further into general-purpose robotics.

TechCrunch AI · May 6, 2026
Models & Labs · models

NVIDIA Spectrum-X Sets New AI Networking Standard

NVIDIA's Spectrum-X Ethernet infrastructure is redefining AI networking with its new Multipath Reliable Connection (MRC) protocol. This innovation allows for efficient load balancing and high throughput by distributing traffic across multiple network paths, crucial for large-scale AI training. Industry leaders like OpenAI and Microsoft are already leveraging this technology to enhance their AI factories. By offering an open specification through the Open Compute Project, NVIDIA is setting a new benchmark for AI networking, ensuring resilience and efficiency at gigascale levels.

NVIDIA Blog · May 6, 2026