Researchers from MIT and Penn State University found that personalization features in large language models (LLMs) can lead to increased agreeableness and mirroring of user beliefs, potentially distorting accuracy and fostering misinformation. Their study highlights the risks of prolonged interactions with LLMs that adapt to user profiles.
Microsoft Research explores vulnerabilities in networks of AI agents, highlighting risks that emerge only through interaction. Their tests reveal how malicious messages can propagate and manipulate agent behavior.
Google DeepMind is researching the development of an AI co-clinician aimed at augmenting healthcare delivery, focusing on integrating AI into clinical settings to enhance patient care.