Researchers from MIT and UC San Diego have created a method to identify and manipulate hidden biases, moods, and personalities in large language models. This technique allows for the tuning of these abstract concepts in model outputs.
Microsoft Research explores vulnerabilities in networks of AI agents, highlighting risks that emerge only through interaction. Their tests reveal how malicious messages can propagate and manipulate agent behavior.
Google DeepMind is developing an AI "co-clinician" intended to augment healthcare delivery. The initiative focuses on integrating AI into clinical settings to support clinicians and enhance patient care.