The b9089 release of llama.cpp introduces SYCL improvements, most notably reduced allocation overhead during flash attention. The release also ships builds for a wide range of platforms, including macOS on Apple Silicon and Windows with CUDA, ensuring compatibility across different systems. Although no new models are introduced, the update enhances llama.cpp's utility as a flexible inference tool.
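The release notes don't spell out the mechanism, but the usual way to cut allocation overhead in a hot path like flash attention is to cache a device scratch buffer and reallocate only when the required size grows. Here is a minimal SYCL sketch of that pattern; all names are illustrative, not taken from llama.cpp's internals:

```cpp
#include <sycl/sycl.hpp>
#include <cstddef>

// Illustrative grow-only scratch pool: one cached device allocation is
// reused across kernel launches instead of a malloc/free pair per call.
class scratch_pool {
public:
    explicit scratch_pool(sycl::queue & q) : q_(q) {}
    ~scratch_pool() { if (ptr_) sycl::free(ptr_, q_); }

    scratch_pool(const scratch_pool &) = delete;
    scratch_pool & operator=(const scratch_pool &) = delete;

    // Returns a device pointer of at least `bytes`; reallocates only on growth.
    void * get(std::size_t bytes) {
        if (bytes > capacity_) {
            if (ptr_) sycl::free(ptr_, q_);
            ptr_      = sycl::malloc_device(bytes, q_);
            capacity_ = bytes;
        }
        return ptr_;
    }

private:
    sycl::queue & q_;
    void *        ptr_      = nullptr;
    std::size_t   capacity_ = 0;
};
```

With a pool like this owned by the backend context, steady-state decoding performs no device allocations at all; the buffer simply settles at the largest workspace ever requested.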
The b9087 release of llama.cpp improves SYCL support, focusing on reordered MMVQ paths for the Q5_K and Q8_0 quantization formats. The update, led by Intel's Chun Tao, targets performance across macOS, Linux, and Windows environments. By refining these paths, the release improves efficiency for developers working with different hardware configurations. Although it doesn't bring new models to the table, it reinforces llama.cpp's position as a flexible tool for AI inference across a wide range of technical setups.
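For context, a Q8_0 block in ggml interleaves one scale with 32 int8 quants, and the general idea behind the SYCL reorder optimization is to split those into separate contiguous arrays so MMVQ work-items can load quants with coalesced accesses. A rough sketch of such a split layout, using a simplified stand-in for ggml's block_q8_0 (the exact reordered layout in the release may differ):

```cpp
#include <cstdint>
#include <cstring>

// Simplified stand-in for ggml's block_q8_0: one fp16 scale (stored as raw
// bits here) followed by 32 int8 quants, interleaved block by block.
constexpr int QK8_0 = 32;
struct block_q8_0 {
    uint16_t d;         // scale as fp16 bits
    int8_t   qs[QK8_0]; // quantized values
};

// Split the interleaved blocks into two contiguous arrays: all quants,
// then all scales. Neighboring work-items in an MMVQ kernel can then
// read quants with coalesced loads instead of strided ones.
void reorder_q8_0(const block_q8_0 * src, int nblocks,
                  int8_t * qs_out, uint16_t * d_out) {
    for (int i = 0; i < nblocks; ++i) {
        std::memcpy(qs_out + i * QK8_0, src[i].qs, QK8_0);
        d_out[i] = src[i].d;
    }
}
```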
The latest llama.cpp update tackles a performance bottleneck by adding BF16 support to the SYCL backend's GET_ROWS operation. This change eliminates GPU-to-CPU tensor transfers for models with BF16 embedding tensors, such as Gemma4's per_layer_token_embd.weight. The implementation reuses the existing get_rows_sycl_float template, instantiated with sycl::ext::oneapi::bfloat16, mirroring the approach already used for the F16 and F32 data types. The result is more efficient processing for developers running BF16 models, with release builds covering systems such as macOS with KleidiAI, Ubuntu with ROCm 7.2, and Windows with CUDA 12 and 13.
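Going by that description, the change amounts to one more case in the GET_ROWS type dispatch, instantiating the float gather template with oneAPI's bfloat16 type. A hedged sketch of what such a dispatch can look like, with simplified stand-ins for ggml's type enum and tensor plumbing, assuming USM device pointers:

```cpp
#include <sycl/sycl.hpp>
#include <cstdint>

// oneAPI's bfloat16 type, as named in the release description.
using bf16 = sycl::ext::oneapi::bfloat16;

// Simplified stand-in for ggml's type enum (illustrative only).
enum class row_type { f32, f16, bf16 };

// Gather selected rows of `src` into a float destination, converting
// element by element on the device. Assumes USM device pointers.
template <typename src_t>
void get_rows_sycl_float(sycl::queue & q, const src_t * src,
                         const int32_t * rows, float * dst,
                         int nrows, int ncols) {
    q.parallel_for(sycl::range<2>(nrows, ncols), [=](sycl::item<2> it) {
        const int r = it.get_id(0);
        const int c = it.get_id(1);
        dst[r * ncols + c] = static_cast<float>(src[rows[r] * ncols + c]);
    }).wait();
}

// The BF16 case is one more instantiation of the same template, so the
// rows never have to round-trip through the CPU.
void get_rows_dispatch(sycl::queue & q, row_type type, const void * src,
                       const int32_t * rows, float * dst, int nrows, int ncols) {
    switch (type) {
        case row_type::f32:  get_rows_sycl_float(q, static_cast<const float *>(src), rows, dst, nrows, ncols); break;
        case row_type::f16:  get_rows_sycl_float(q, static_cast<const sycl::half *>(src), rows, dst, nrows, ncols); break;
        case row_type::bf16: get_rows_sycl_float(q, static_cast<const bf16 *>(src), rows, dst, nrows, ncols); break;
    }
}
```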
The b9093 release of llama.cpp broadens its platform compatibility. With new builds for macOS, Linux, Windows, and Android, developers can run llama.cpp across hardware configurations spanning Apple Silicon, Intel, and ARM architectures. Notably, the addition of ROCm 7.2 builds for Ubuntu x64 and CUDA 12 and 13 builds for Windows x64 covers both AMD and NVIDIA GPUs. This release doesn't introduce new models but focuses on making llama.cpp a versatile tool for developers working on different systems.
GitHub is moving forward with the deprecation of the Grok Code Fast 1 model across all Copilot experiences by May 15th. The change follows the model's discontinuation by its provider, prompting users to adopt supported models. Administrators are tasked with updating workflows and enabling access to alternative models through Copilot settings to ensure seamless operation. The transition is designed to be smooth, as no manual removal of deprecated models is required. This step underscores GitHub's strategy to keep its AI tools current, ensuring users have access to the latest advancements. Enterprise customers are advised to reach out to their account managers with any concerns.
CyberSecQwen-4B is a new AI model designed specifically for defensive cybersecurity tasks, offering a balance between performance and deployability. It achieves nearly the same accuracy as larger models like Cisco's Foundation-Sec-Instruct-8B but with half the parameters, making it suitable for local deployment on consumer-grade GPUs. This model is particularly useful for tasks such as CWE classification and CTI Q&A, providing a practical solution for environments where data privacy and cost are critical. By focusing on narrow, well-defined tasks, CyberSecQwen-4B offers a specialized tool for cybersecurity professionals that can be run locally, addressing the unique challenges of the field.
Hugging Face has introduced EMO, a new mixture-of-experts model that develops emergent modularity without predefined human biases. Unlike traditional models, which must run in full for optimal performance, EMO can reach near full-model performance using only 12.5% of its experts on specific tasks. This selective expert use addresses the inefficiency of large language models, cutting computational cost while maintaining versatility. EMO's design encourages coherent expert grouping, making it a flexible and efficient tool for diverse applications.
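The post doesn't give EMO's routing math here, but the 12.5% figure maps onto standard top-k expert gating: with k set to one eighth of the expert count (say, 8 of 64 experts), only an eighth of the network runs per input. A minimal, self-contained sketch of that generic gating follows; it illustrates the technique, not EMO's actual implementation:

```cpp
#include <algorithm>
#include <cmath>
#include <cstddef>
#include <numeric>
#include <vector>

// Generic top-k expert gating. With k equal to one eighth of the expert
// count, only 12.5% of the experts contribute to (and, in a real model,
// would be computed for) each input.
std::vector<float> moe_combine(
        const std::vector<float> & gate_scores,                 // one router score per expert
        const std::vector<std::vector<float>> & expert_outputs, // one output vector per expert
        int k) {
    const int n_experts = static_cast<int>(gate_scores.size());

    // Indices of the k highest-scoring experts, best first.
    std::vector<int> idx(n_experts);
    std::iota(idx.begin(), idx.end(), 0);
    std::partial_sort(idx.begin(), idx.begin() + k, idx.end(),
                      [&](int a, int b) { return gate_scores[a] > gate_scores[b]; });

    // Softmax over the selected scores only (idx[0] holds the max).
    std::vector<float> w(k);
    float sum = 0.0f;
    for (int i = 0; i < k; ++i) {
        w[i] = std::exp(gate_scores[idx[i]] - gate_scores[idx[0]]);
        sum += w[i];
    }

    // Weighted combination of the chosen experts' outputs.
    std::vector<float> out(expert_outputs[0].size(), 0.0f);
    for (int i = 0; i < k; ++i) {
        for (std::size_t j = 0; j < out.size(); ++j) {
            out[j] += (w[i] / sum) * expert_outputs[idx[i]][j];
        }
    }
    return out;
}
```

In a real MoE layer only the k selected experts' feed-forward blocks would be evaluated at all; their outputs are passed in precomputed here purely to keep the sketch short.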