Plus, does AI have a right to free speech?
OpenAI is in discussions with Jony Ive, Apple’s former chief design officer, to raise $1 billion from SoftBank for an AI device venture. The partnership aims to develop advanced AI-powered devices intended to improve everyday life for users.
OpenAI, co-founded by Sam Altman, Elon Musk, and others, has focused on developing AI technologies and has made significant advances in natural language processing and machine learning. By partnering with Jony Ive, known for his iconic designs at Apple, OpenAI hopes to create aesthetically pleasing, user-friendly products.
SoftBank, a Japanese technology conglomerate, has been a major investor in AI and has a history of funding innovative tech projects. The $1 billion investment from SoftBank would provide the capital for OpenAI and Jony Ive’s venture.
The goal of this partnership is to bring AI technology into the hands of consumers in a way that is accessible, intuitive, and enhances their daily lives. With SoftBank’s financial backing and Jony Ive’s design expertise, OpenAI is poised to make significant advancements in the AI device market.
Mark Zuckerberg, CEO of Meta, believes that there is a growing demand for AI versions of celebrities that fans can interact with. He specifically mentioned Kylie Jenner as an example, explaining that AI could serve as an assistant to celebrities like her and provide a fun and engaging experience for consumers. However, Zuckerberg also acknowledged that implementing AI celebrities could be a “next year” project due to brand safety concerns.
Celebrities would want assurances that their AI versions will not be used to make problematic statements. Zuckerberg also pointed out that the current limitations of AI technology make it difficult to create convincing AI versions of celebrities. The interview with The Verge was recorded last week, and Meta has since unveiled 28 AI personalities for its chatbots. Other instances of AI performers, such as AI-generated music and deepfake celebrities, have already appeared on social media, though concerns about the implications of AI for the media industry have been raised as well.
According to an article in The Conversation, the development of generative AI, such as ChatGPT, raises questions about whether AI should have the right to free speech. The article suggests that AI can support human thinking by providing information and answers, leading to the argument that AI’s right to speak freely is necessary for humans’ right to think freely. However, the article also warns about the potential harm of giving AI unrestricted free speech rights.
AI could flood the internet with misinformation or manipulate human thought processes. Balancing AI’s speech rights with the need to protect human speech and thought is crucial. The article explores proposed regulations for AI, such as disclosing AI-generated content, protecting AI speech through anonymity, and ensuring AI models avoid generating illegal content while not limiting legal speech. The author suggests that AI regulation should consider the impact on free thought and find a balance between AI’s speech rights and human autonomy.
ChatGPT can now browse the internet to provide up-to-date information to users. Previously, the software was limited to data before September 2021. ChatGPT will utilize Microsoft’s Bing search engine to process users’ questions and generate answers using relevant information from search results. The update is currently available to premium users on ChatGPT’s “Plus” and “Enterprise” plans, with plans to expand to all users in the near future.
The ability to browse the internet addresses one of the software’s major limitations, letting users ask about current events and recent developments. OpenAI, which Microsoft backs, is reportedly in discussions about a potential share sale that could value the company at up to $90 billion. The choice of Bing for ChatGPT’s browsing feature likely stems from Microsoft’s ownership stake in OpenAI.
MyndVR, a virtual reality therapy company, plans to use artificial intelligence (AI) to create immersive environments for therapy treatments, particularly for seniors and individuals with mobility issues. The company’s lightweight VR headsets are already used in settings such as inpatient rehab centers and assisted living homes to help individuals relax, distract them during therapy sessions, and engage them in gamified environments. MyndVR is expanding into hospice care and home health, aiming to reach an underserved population outside traditional healthcare settings.
The company also collaborates with organizations to help veterans cope with PTSD and provide emotional comfort. MyndVR aims to improve the performance and realism of its technology by implementing AI processing, with the goal of recreating concert experiences and allowing users to interact with family and friends in different facilities within the virtual world. However, the company emphasizes that AI should not replace human care and aims to prioritize love, care, and compassion in its mission.
AI-driven creativity, or what the author calls “generic creativity”, poses a risk to human creativity and cultural diversity. While AI tools such as chatbots and image generators can spark new ideas and produce engaging content, they are limited by the data they were trained on and lack the ability to generate truly original ideas. The increasing use of AI could lead to a decrease in cognitive diversity and an increase in cultural tightness, resulting in societies that are less tolerant of deviations from the norm and less creative overall.
Additionally, because AI models are trained on content created by humans, the more AI is used for generic creativity, the more generic its output will become, producing a potential “generic spiral”. To preserve human creativity, the author suggests prioritizing human creativity over AI and implementing intellectual property laws that protect and privilege human work. Without such measures, there is a risk of eroding the creative system and stifling human creativity.
Microsoft is looking into using nuclear power to support its energy-intensive artificial intelligence (AI) operations. The company recently posted a job listing for a program manager in “Nuclear Technology,” hinting at its ambitions to power data centers with microreactors and small modular reactors.
The move comes as AI continues to drive up energy consumption and carbon emissions, with Microsoft reporting a 30% increase in water consumption to cool its AI supercomputers. Microsoft’s partnership with OpenAI, which produces the energy-consuming ChatGPT model, has further increased energy requirements. The company’s latest sustainability report revealed the need for billions of dollars to power and cool OpenAI’s expanding energy demands. Nuclear power, particularly small modular reactors, could offer a more sustainable solution to meet these energy needs. However, the plans and implementation remain unclear, as Microsoft declined to comment.
As AI models grow in complexity and size, the energy demands of AI companies are likely to rise. These developments highlight the urgent need for AI companies to find energy-efficient solutions to avoid contributing further to carbon emissions.