
GitHub has announced the deprecation of the Grok Code Fast 1 model across all Copilot experiences, effective May 15th. The decision follows the provider's deprecation of the model, so users must switch to supported models. Administrators should update workflows and enable alternative models in Copilot settings to ensure continued functionality. GitHub notes that no manual action is needed to remove deprecated models, easing the transition. Enterprise customers are encouraged to contact their account managers for further assistance.
© GitHub Changelog

GitHub's latest update to the Copilot usage metrics API offers a more granular view of code review activity by breaking down suggestions by comment type. This enhancement allows enterprise and organization administrators to see which categories, such as security or bug risk, are most frequently flagged by Copilot. By comparing the volume of suggestions to those actually applied, users can better assess the tool's impact on their development process. While repository-level insights are not yet available, this update provides a clearer picture of Copilot's effectiveness in code reviews.
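The suggested-versus-applied comparison described above can be sketched in a few lines of Python. The response shape below is purely illustrative (field names like `comment_type`, `suggested`, and `applied` are assumptions, not the documented schema); consult the Copilot usage metrics API reference for the real endpoint and payload.

```python
from collections import defaultdict

# Hypothetical rows modeled on the kind of breakdown the API now exposes;
# the actual schema may differ.
sample_metrics = [
    {"comment_type": "security", "suggested": 40, "applied": 12},
    {"comment_type": "bug_risk", "suggested": 25, "applied": 15},
    {"comment_type": "style",    "suggested": 60, "applied": 30},
]

def apply_rate_by_type(rows):
    """Return {comment_type: applied/suggested} to gauge review impact."""
    totals = defaultdict(lambda: [0, 0])
    for row in rows:
        totals[row["comment_type"]][0] += row["applied"]
        totals[row["comment_type"]][1] += row["suggested"]
    return {t: applied / suggested for t, (applied, suggested) in totals.items()}

print(apply_rate_by_type(sample_metrics))
```

A low apply rate in a category such as style, relative to security, is the kind of signal administrators could use to tune which suggestion types deserve attention.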
GitHub has streamlined the configuration process for its Copilot cloud agent by introducing dedicated 'Agents' secrets and variables. This update allows developers to manage secrets and variables at the organization level, enabling easier sharing across multiple repositories. Previously, configurations had to be set up individually for each repository, which was cumbersome for large-scale operations. Now, with the ability to configure at scale, developers can efficiently manage access to private resources and configure MCP servers without redundant setups.
The latest release of CodeQL, version 2.25.3, brings significant updates to GitHub's static analysis engine, notably adding support for Swift 6.3. This update enhances security scanning capabilities by promoting five C/C++ queries to the default suite, improving accuracy across multiple languages. Python developers will benefit from support for new syntax in Python 3.15, while Java and Kotlin users see improved detection in the Woodstox StAX library. These enhancements make CodeQL a more robust tool for developers aiming to secure their codebases across diverse programming languages.
The b9075 release of llama.cpp brings a notable improvement for CUDA users by fusing the snake activation function into a single elementwise kernel. This is particularly advantageous for audio decoders such as BigVGAN and Vocos, which previously relied on a more complex five-operation sequence. By streamlining these operations, the update promises better performance and efficiency across the F32, F16, and BF16 data types. The change reflects llama.cpp's ongoing focus on refining its CUDA backend, making it a more compelling option for developers working with such activation functions.
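The snake activation popularized by BigVGAN is defined as f(x) = x + sin²(αx)/α. A minimal Python sketch of the fusion idea follows; it is illustrative only (the actual change is an elementwise CUDA kernel over whole tensors), but it shows how the multi-op composition collapses into one expression.

```python
import math

def snake_unfused(x: float, alpha: float) -> float:
    """Snake computed as a chain of separate elementwise ops,
    mirroring the kind of five-operation sequence the release removes."""
    t = alpha * x        # 1. scale
    s = math.sin(t)      # 2. sine
    s2 = s * s           # 3. square
    d = s2 / alpha       # 4. rescale
    return x + d         # 5. residual add

def snake_fused(x: float, alpha: float) -> float:
    """Same math in a single pass: x + sin^2(alpha * x) / alpha."""
    return x + math.sin(alpha * x) ** 2 / alpha
```

On a GPU, fusing these steps means one kernel launch and one read/write of the tensor instead of five, which is where the performance gain comes from.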
The latest b9076 release of llama.cpp quietly expands its platform support, making it more versatile for developers across various systems. Notably, it now exposes child model information from the router's /v1/models endpoint, enhancing transparency and control for users. The update includes support for macOS Apple Silicon with KleidiAI enabled, as well as expanded compatibility with Ubuntu and Windows systems, including Vulkan and ROCm 7.2. This release doesn't introduce new models but strengthens llama.cpp's position as a flexible inference runtime across diverse hardware configurations.
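A client consuming the router's /v1/models endpoint could surface the newly exposed child models along these lines. The JSON shape below is an assumption for illustration (the exact field names come from the server's own docs, not this summary).

```python
import json

# Hypothetical /v1/models response from a llama.cpp router instance;
# the "models" child list is the new information this release exposes,
# but its exact key names here are assumed.
response = json.loads("""
{
  "object": "list",
  "data": [
    {"id": "router", "object": "model",
     "models": [{"id": "llama-3-8b"}, {"id": "phi-3-mini"}]}
  ]
}
""")

def list_child_models(resp: dict) -> list[str]:
    """Flatten any child-model entries exposed alongside each model."""
    children = []
    for model in resp.get("data", []):
        for child in model.get("models", []):
            children.append(child["id"])
    return children

print(list_child_models(response))  # ['llama-3-8b', 'phi-3-mini']
```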
The b9077 release of llama.cpp adds a Vertex AI-compatible API, easing integration with Google's AI platform. The update also brings fixes and improvements across macOS, Linux, and Windows, with supported environments ranging from Apple Silicon to Vulkan and ROCm on Ubuntu. While there are no new model architectures, the release reinforces llama.cpp's role as a versatile inference tool across diverse platforms, with a more robust experience for those using CUDA and SYCL in particular.
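For context, Vertex AI's generation endpoints accept a generateContent-style request body; a client targeting the new compatibility layer would build payloads of roughly this shape. Whether llama.cpp's endpoint honors every field of that schema is an assumption here; only the general structure is shown.

```python
import json

def generate_content_request(prompt: str) -> dict:
    """Build a minimal generateContent-style request body.

    The shape follows Google's public generateContent schema
    (a list of "contents" with role-tagged "parts"); treat
    llama.cpp-specific behavior as unverified.
    """
    return {
        "contents": [
            {"role": "user", "parts": [{"text": prompt}]}
        ]
    }

print(json.dumps(generate_content_request("Hello")))
```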