
Apple plans to introduce a significant update in iOS 27, allowing users to select their preferred AI models for various system features. This change will enable third-party AI models to power Apple Intelligence, including Siri and tools like Writing Tools and Image Playground. Users will be able to choose among AI model providers via their apps installed from the App Store, marking a shift from the current exclusive reliance on ChatGPT. This development could enhance the versatility and personalization of AI features on Apple devices.
© The Verge

Google has officially shut down Project Mariner, an experimental feature designed to automate web tasks. Initially launched in December 2024, Project Mariner could handle up to 10 tasks simultaneously. Its technology has now been integrated into other Google products like Gemini Agent and AI Mode, which continue to offer similar functionality. This move suggests Google is consolidating its AI efforts, possibly to make room for new features expected at the upcoming I/O event. While Project Mariner itself is no more, its capabilities live on in Google's broader AI ecosystem.
Mira Murati, OpenAI's former CTO, delivered striking testimony in court, accusing CEO Sam Altman of misleading her about the safety review process for a new AI model. Her deposition, part of the Musk v. Altman trial, revealed a conflict between Altman's assurances and the legal advice she received from Jason Kwon. This adds to a pattern of allegations against Altman, including accusations of dishonesty and manipulative tactics from other former colleagues. Murati's insistence on a safety review despite Altman's claims highlights the internal struggles she faced at OpenAI. The episode reflects broader issues of trust and governance within the company during her tenure.
Google is enhancing its AI Search capabilities by integrating perspectives from firsthand sources like Reddit and other social media platforms. This update aims to provide users with more authentic and relatable information by linking search queries to real conversations. By doing so, Google hopes to reduce the need for users to manually append 'Reddit' to their searches for genuine advice. The inclusion of creator names and community identifiers in search results adds a layer of transparency and trust. This shift marks a significant move towards more conversational and community-driven search experiences.
The latest b9041 release of llama.cpp continues its trend of broadening platform compatibility, making it a versatile choice for developers across different environments. Notably, this update includes support for macOS Apple Silicon with KleidiAI enabled, as well as expanded Vulkan and ROCm 7.2 support on Ubuntu. This release doesn't introduce new models but focuses on enhancing the runtime's adaptability across various hardware configurations. By doing so, llama.cpp strengthens its position as a go-to inference runtime for developers seeking flexibility beyond NVIDIA's CUDA ecosystem.
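The backends mentioned above are selected when llama.cpp is compiled. A hedged sketch of the typical CMake invocations (the option names reflect recent llama.cpp builds and may differ between releases; consult the repository's build documentation):

```shell
# Vulkan backend (vendor-agnostic GPU support)
cmake -B build-vulkan -DGGML_VULKAN=ON
cmake --build build-vulkan --config Release -j

# ROCm/HIP backend (AMD GPUs)
cmake -B build-rocm -DGGML_HIP=ON
cmake --build build-rocm --config Release -j

# CPU build with Arm KleidiAI kernels (e.g. macOS on Apple Silicon)
cmake -B build-kleidiai -DGGML_CPU_KLEIDIAI=ON
cmake --build build-kleidiai --config Release -j
```

Keeping each backend in its own build directory makes it easy to benchmark the same model across configurations.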
Llama.cpp's latest update expands its functionality by integrating IBM's Granite-Speech, significantly enhancing its audio processing capabilities. The update features a Conformer encoder with Shaw-style relative position encoding and a QFormer projector, which efficiently compresses audio data into the LLM embedding space. The implementation matches Hugging Face Transformers token-for-token on test audio clips, demonstrating its fidelity. By incorporating these advanced audio processing techniques, llama.cpp becomes a more versatile tool for developers, extending its utility beyond text to sophisticated audio handling.
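For context, multimodal models in llama.cpp are typically driven through the `llama-mtmd-cli` tool, which pairs the language model with a separate projector file. A hedged sketch of what transcribing a clip with a converted Granite-Speech checkpoint might look like (the GGUF filenames are placeholders, and exact flags can vary between releases):

```shell
# Placeholder filenames; the --mmproj file carries the Conformer
# encoder and QFormer projector described above.
llama-mtmd-cli \
  -m granite-speech-Q4_K_M.gguf \
  --mmproj mmproj-granite-speech.gguf \
  --audio clip.wav \
  -p "Transcribe the audio."
```

The QFormer projector is what keeps this practical: it condenses the long sequence of encoder frames into a much smaller set of embedding-space tokens before they reach the LLM's context.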
The llama.cpp b9049 release marks a notable step forward by integrating MiniCPM-V 4.6, enhancing the tool's capabilities for developers. This version addresses several bugs and refines features, such as implementing build_attn for flash attention support and improving code style and type checks. The update also extends its reach across various platforms, including macOS, Linux, and Windows, with tailored support for Apple Silicon and Vulkan. These enhancements make llama.cpp a more versatile and reliable tool for developers working with a range of AI models, boosting its performance and usability.
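Running the newly supported vision model follows the same multimodal-CLI pattern; a hedged example with placeholder filenames (flags may differ by release):

```shell
# Placeholder filenames; MiniCPM-V ships its vision projector as a
# separate mmproj GGUF alongside the quantized language model.
llama-mtmd-cli \
  -m minicpm-v-Q4_K_M.gguf \
  --mmproj mmproj-minicpm-v.gguf \
  --image photo.jpg \
  -p "Describe this image."
```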