
ChatGPT's habit of using the phrase 'I will catch you steadily' in Chinese has become a meme, both amusing and frustrating users. The phrase, typically reserved for psychotherapy, feels awkward in everyday conversation, highlighting the challenges AI models face with cultural nuance. The phenomenon, known as 'mode collapse,' occurs when models overuse certain phrases that were reinforced by feedback during training. As other AI models begin to adopt similar phrasing, the meme's influence is likely to continue.
© WIRED AI
The Musk v. Altman trial has unveiled internal Microsoft emails from 2017 and 2018, revealing the tech giant's initial hesitations about investing in OpenAI. Despite early skepticism from executives, who doubted OpenAI's potential for breakthroughs in artificial general intelligence, Microsoft eventually committed to a $1 billion investment in 2019. This decision marked the beginning of a highly successful partnership, with Microsoft becoming a major financial backer of OpenAI. The emails highlight the strategic considerations and risks Microsoft weighed, including the potential loss of OpenAI to competitors like Amazon.
© WIRED AI
In a surprising turn, the Trump administration is reportedly considering an executive order to establish federal oversight of AI models, marking a potential shift from its previous deregulatory stance. This move could involve a committee of tech executives and government officials reviewing AI models before public release, a significant change from the administration's earlier approach. The timing is notable, as major tech companies like Google and Microsoft have already agreed to provide the government with early access to their models. If implemented, this could signal a new era of AI regulation, balancing innovation with safety concerns.
© WIRED AI
A recent investigation by cybersecurity firm RedAccess has found that thousands of web applications created with AI coding tools like Lovable, Replit, Base44, and Netlify are leaving sensitive corporate and personal data exposed. These 'vibe-coded' apps often lack fundamental security measures, allowing anyone with the URL to access potentially sensitive information. The situation is concerning because AI tools let non-experts build web apps without adequate security knowledge. The incident is reminiscent of past data exposures caused by misconfigured cloud storage, underscoring the urgent need for better safeguards in AI-driven development.
© The AI Daily Brief
SpaceX and Anthropic have announced a partnership to provide massive compute power to Anthropic's Claude ecosystem.
© The Verge AI
OpenAI is taking a significant step in user safety with its new 'Trusted Contact' feature for ChatGPT, allowing adults to designate someone to be alerted if the AI detects discussions of self-harm or suicide. This opt-in feature is designed to connect users in crisis with someone they trust, while preserving privacy by not sharing chat details. The initiative comes in response to a tragic incident involving a teenager and reflects OpenAI's commitment to responsible AI use. By implementing this feature, OpenAI is setting a precedent for how AI platforms can responsibly handle sensitive mental health issues.
OpenAI has rolled out Trusted Contact, a new feature in ChatGPT aimed at bolstering user safety by notifying a trusted person if the AI detects serious self-harm concerns. This optional feature marks a significant advancement in integrating mental health considerations into AI interactions, providing a mechanism for timely intervention in critical situations. By offering this feature, OpenAI is taking a proactive approach to ethical AI use and user well-being. This development could lead other AI tools to adopt similar safety measures, potentially transforming how AI systems engage with users experiencing distress.