Plus, While Hollywood Strikes, Entertainment Companies Boost Gen AI Spend
The AI arms race is generating massive amounts of greenhouse gas emissions and contributing to climate change, according to an article in Slate. Companies at the forefront of AI, such as OpenAI and Google, require significant computational power and high-performance GPUs to train their models. This energy-intensive process carries environmental consequences and is driving up the cost of GPUs. Large language models and neural networks, such as OpenAI’s GPT-4, also require extensive computational resources and energy to operate. The development of AI technologies has been controlled primarily by big tech companies, which have the resources needed to fuel the arms race. However, there is little transparency around these technologies’ energy consumption and environmental impact. As AI capabilities continue to advance, the article argues, it is crucial to consider the environmental consequences and find more sustainable approaches to AI development.
According to The New York Times, Google’s AI unit DeepMind is using generative AI to develop at least 21 different tools for life advice, planning, and tutoring. The tools are designed to offer relationship advice, help users answer intimate questions, and assist with decision-making. Google has reportedly contracted with Scale AI to test the tools, with more than 100 people working on the project. Google’s AI safety experts had previously warned that users taking life advice from AI tools could experience “diminished health and well-being” and a “loss of agency,” and accordingly the tools being developed by DeepMind are not meant for therapeutic use. Google’s publicly available Bard chatbot specifically provides mental health support resources when asked for therapeutic advice. The use of AI in medical or therapeutic contexts has faced controversy, with examples of chatbots giving harmful advice in the past.
Zoom faced swift backlash when it updated its terms of service to allow the use of data collected from video calls to train its AI systems. The incident highlights growing concerns that AI companies like OpenAI and Google are threatening the livelihoods of artists, writers, and content creators, who fear that AI can emulate and produce their work cheaply. Concerns about privacy, scams, and exploitation have escalated, particularly around the use of personal data to mimic digital profiles, voices, and identities. This raises questions about the types of information individuals are comfortable sharing with AIs. The article notes that the websites and apps people use daily may gather data for AI training, and it encourages users to learn about the data-handling practices of the products they use. It advises caution about sharing content publicly and suggests seeking out products that prioritize privacy or using encrypted apps and services. Furthermore, it calls for greater consent and standards regarding AI’s use of personal data to safeguard individuals’ rights and establish privacy controls.
Media and entertainment companies worldwide are set to boost their spending on generative AI, according to a report by search and insights company Lucidworks. The report found that 96% of executives involved in AI decision-making prioritize investments in generative AI, with China leading the way at 100% of firms making investments. The UK, France, India, and the US followed closely behind. The study surveyed 6,000 respondents at companies with over 100 employees. The report’s authors said that generative AI is bringing about “massive change on global business” and disrupting various industries. The expanding use of AI has raised concerns among the entertainment industry’s creatives, with a study from AI research lab OpenAI warning that writers are particularly vulnerable to career and work impacts from AI.
Companies are using AI outside of the cloud to reduce costs and improve performance. Advances in networking, algorithms, and edge computing have made it possible to run AI workloads closer to where applications are being used rather than relying on large data centers owned by cloud vendors. This approach, known as decentralized clouds or distributed data centers, allows subsets of AI models to be deployed closer to the source of new data, improving scalability and reducing latency. Companies like DHL Supply Chain and Estes Express Lines use decentralized cloud systems to run AI-powered applications for warehouse robots and truck-mounted cameras, respectively. Additionally, startups like Together.ai and Gensyn are developing platforms that allow users to train and run their AI models without connecting to the cloud. This shift toward using AI outside of the cloud is critical for industries such as transportation, construction, and manufacturing, where real-time insights are necessary.
AI has made its way into the comedy world, raising questions about the potential displacement of human comedians. Large language models like ChatGPT have demonstrated the ability to generate jokes and one-liners with surprising wit and accuracy. Comedians who have experimented with AI report mixed results, with some underwhelmed by the quality of the jokes produced. Still, experts believe that AI could produce professional-level jokes within a matter of years. One concern is that AI tends to draw from existing ideas rather than creating new ones, which could limit its originality and surprise factor. Comedy relies on novelty and human creativity, which AI may struggle to replicate. However, AI’s ability to analyze and mimic genre, voice, metaphors, and comedy mechanics suggests it has the potential to improve over time. Ultimately, the future of AI in comedy remains a philosophical question about the nature of humor and whether AI can intentionally generate hilarious art.
A study by researchers at the University of East Anglia suggests that OpenAI’s ChatGPT, an AI chatbot, has a liberal bias. The researchers asked ChatGPT political survey questions and found significant biases towards the Democrats in the US, Lula in Brazil, and the Labour Party in the UK. The study adds to a growing body of research that reveals how chatbots, despite attempts to control biases, are influenced by assumptions, beliefs, and stereotypes present in the training data from the open internet.
As chatbots like ChatGPT become increasingly integrated into daily life, there are concerns over their ability to influence public trust and election results. OpenAI says it does not favor any specific political group and regards the biases that show up in ChatGPT as bugs rather than features.
Researchers and experts suggest that mitigating political biases in chatbots is challenging. Some propose systems that detect biased speech and replace it with more neutral terms. However, political beliefs are inherently subjective, and the shortcomings of existing surveys and data make complete neutrality difficult to achieve.
For links to these stories and more, visit us at aidaily.us and subscribe today.