Plus, will we see AGI in less than 5 years?
A federal judge has dismissed most claims in a landmark lawsuit brought by artists against the makers of generative AI art tools. The artists alleged the uncompensated and unauthorized use of billions of images scraped from the internet to train AI systems. The dismissed claims turned on unresolved copyright questions, including whether copyrighted images actually persist inside the AI systems and whether the tools produce output substantially identical to the artists' works. However, claims against certain companies for direct infringement and breach of contract were allowed to proceed, pending further clarification and evidence.
YouTube’s AI Tool Offers Users the Ability to Sing Like Favorite Artists, but Record Labels Raise Concerns
YouTube is developing an AI tool that lets creators use the authentic voices of famous recording artists in their videos. The tool's beta launch, however, faces hurdles in negotiations with major record labels over licensing agreements covering voice rights, the AI model's training data, artist compensation, and publishing rights. Despite the challenges, the music industry recognizes the potentially game-changing impact of YouTube's AI tool and aims to strike a balance between embracing the technology and protecting artists' rights.
In a recent interview, Shane Legg, co-founder of Google's DeepMind, reaffirmed his prediction that there is a 50% chance of achieving artificial general intelligence (AGI) within the next five years. Legg grounds the prediction in the continued growth of computational power and the exponential growth of training data. He acknowledges, however, that defining and testing AGI remains difficult, as does scaling AI training models. Despite his optimism, Legg stresses that AGI remains uncertain and may not arrive within the predicted timeframe.
A recent study published in Geophysical Research Letters reveals that AI models used for weather prediction fail to account for the butterfly effect, the extreme sensitivity of the atmosphere to tiny differences in initial conditions that imposes a fundamental limit on predictability. While AI performs well in midlatitude weather conditions, its failure to capture the butterfly effect limits how far ahead its forecasts can be trusted. Scientists could potentially improve AI algorithms by generating additional training data that teaches them how small initial errors amplify.
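The butterfly effect the study refers to is easy to demonstrate. The following sketch (an illustration, not code from the study) integrates the classic Lorenz-63 system, a toy model of atmospheric convection, and shows how two trajectories whose starting points differ by one part in a billion eventually diverge completely:

```python
# Illustrative only: Lorenz-63, the textbook chaotic system behind the
# "butterfly effect". Two nearly identical initial states are integrated
# with simple Euler steps; their separation grows until the forecasts
# are effectively unrelated.

def lorenz_step(state, dt=0.001, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """One explicit-Euler step of the Lorenz-63 equations."""
    x, y, z = state
    dx = sigma * (y - x)
    dy = x * (rho - z) - y
    dz = x * y - beta * z
    return (x + dt * dx, y + dt * dy, z + dt * dz)

def trajectory(state, steps):
    """Advance `state` by `steps` Euler steps and return the final state."""
    for _ in range(steps):
        state = lorenz_step(state)
    return state

def separation(s1, s2):
    """Euclidean distance between two states."""
    return sum((u - v) ** 2 for u, v in zip(s1, s2)) ** 0.5

a = (1.0, 1.0, 1.0)
b = (1.0, 1.0, 1.0 + 1e-9)  # perturb one coordinate by a billionth

for steps in (1_000, 10_000, 30_000):
    gap = separation(trajectory(a, steps), trajectory(b, steps))
    print(f"after {steps:>6} steps, separation = {gap:.3e}")
```

The separation grows roughly exponentially until it saturates at the size of the attractor, at which point the two runs are no more alike than two random weather states. This is the predictability ceiling the study says current AI weather models do not represent.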
The global AI summit, hosted by UK Prime Minister Rishi Sunak and attended by world leaders, tech executives, AI scientists, and civil society groups, has exposed contrasting priorities between the UK and the US. While Sunak emphasized the potential risks of AI, such as AI-powered weapons and super-intelligence, the US focused on preventing bias, worker displacement, and national security threats. The UK's light-touch approach to AI regulation has been seen as an attempt to court influential tech companies and boost its AI market, valued at over $21 billion. The summit aims to address AI safety concerns and to help the UK reassert its influence post-Brexit.
Leading publishing trade bodies are urging the UK government to take action to safeguard human creativity in the face of widespread copyright infringement caused by AI technologies. The Publishers Association (PA), Society of Authors (SoA), Authors’ Licensing & Collecting Society (ALCS), and Association of Authors’ Agents (AAA) have called for tangible solutions to protect the value of human creativity and ensure recompense for past infringements. They are appealing to the government to regulate AI systems and put an end to the unauthorized use of copyrighted materials.