
Philosopher Nick Bostrom, previously known for his warnings about AI's existential risks, has adopted a more optimistic outlook in his new book, 'Deep Utopia.' He argues that while AI could potentially annihilate humanity, it also offers the possibility of extending human life and creating unprecedented abundance. This marks a departure from his earlier views, such as the paperclip maximizer scenario. Bostrom now focuses on the potential for AI to solve global issues, though he acknowledges the challenges of governance and distribution of resources. His work suggests a complex balance between AI's risks and rewards.
© WIRED AI

Tom Steyer, a billionaire gubernatorial candidate in California, has proposed a groundbreaking plan to protect workers displaced by AI. His proposal includes a 'token tax' on big tech companies to fund job guarantees and training programs, aiming to make California a leader in AI workforce adaptation. This initiative also plans to establish an AI Worker Protection Administration to safeguard workers' rights. Steyer's approach contrasts with other political figures, emphasizing a structured funding mechanism to support those affected by AI-driven job displacement.
AI toys are becoming more prevalent, yet they operate in a largely unregulated space, sparking concerns about their effects on children's development and safety. A University of Cambridge study reveals that these toys often struggle with conversational turn-taking, which can hinder social play, a critical aspect of young children's growth. The potential for children to form inappropriate attachments to AI toys, mistaking them for real social partners, is another issue. Despite the appeal of screen-free play, these toys frequently fail to facilitate meaningful social interactions. The use of AI models designed for adults in children's toys further complicates matters, underscoring the need for stricter oversight and improved design to ensure they are both safe and beneficial.