The b9080 release of llama.cpp has been announced, adding support for the Gemma4_26B_A4B_NVFP4 model along with checkpoint conversion fixes and broader platform compatibility. Builds cover macOS, Linux, Windows, and Android, with specific enhancements for Apple Silicon and for the Vulkan backend, extending the tool's reach and performance across different systems.
The b9073 release of llama.cpp marks a significant expansion in platform coverage. With KleidiAI, Arm's library of optimized micro-kernels, now enabled for macOS on Apple Silicon, M-series Mac users can expect faster CPU inference. The update also ships builds for Ubuntu with ROCm 7.2 and OpenVINO, alongside Windows builds with CUDA 12 and 13. No new model architectures are introduced; the release instead reinforces llama.cpp's role as a versatile inference runtime across diverse hardware.
The b9075 release of llama.cpp improves the CUDA backend by fusing the snake activation function, snake(x) = x + sin²(αx)/α, into a single elementwise kernel. Audio decoders such as BigVGAN and Vocos, which rely on this activation, previously executed it as a five-operation sequence of separate elementwise kernels (scale, sine, square, scale, add); fusing it into one kernel cuts launch overhead and intermediate memory traffic. The fused path supports the F32, F16, and BF16 data types, continuing llama.cpp's ongoing refinement of its CUDA capabilities. A sketch of the fused computation appears below.
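To make the fusion concrete, here is a minimal sketch of a fused snake kernel in CUDA. This is an illustration, not llama.cpp's actual kernel: the real implementation covers multiple data types, and in models like BigVGAN, α is typically a learned per-channel parameter rather than the single scalar assumed here.

```cuda
#include <cuda_runtime.h>
#include <cstdio>

// Illustrative fused snake activation: y = x + (1/alpha) * sin(alpha * x)^2.
// One elementwise pass instead of five separate kernels
// (scale, sine, square, scale, add).
// NOTE: scalar alpha is a simplification; BigVGAN uses per-channel alphas.
__global__ void snake_f32(const float * x, float * y, float alpha, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= n) return;
    float s = sinf(alpha * x[i]);
    y[i] = x[i] + (1.0f / alpha) * s * s;
}

int main() {
    const int n = 1024;
    float *x, *y;
    cudaMallocManaged(&x, n * sizeof(float));
    cudaMallocManaged(&y, n * sizeof(float));
    for (int i = 0; i < n; ++i) x[i] = 0.01f * i;

    snake_f32<<<(n + 255) / 256, 256>>>(x, y, /*alpha=*/1.0f, n);
    cudaDeviceSynchronize();

    printf("snake(%f) = %f\n", x[10], y[10]);
    cudaFree(x);
    cudaFree(y);
    return 0;
}
```

The gain over the unfused graph comes from reading x and writing y exactly once, rather than materializing an intermediate tensor between each of the five separate kernel launches.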