Plus, can AI predict heart conditions like atrial fibrillation?
The Israeli-Palestinian conflict has entered a new era with the emergence of deepfake visuals, made possible by easy-to-use generative AI tools. While doctored images and fake news have long been used as wartime propaganda, AI-generated deepfakes add a new layer of disinformation to an already complex conflict. From AI-based media manipulation to voice-altering technology, this article explores how deepfakes threaten democracy and complicate the Israeli-Palestinian conflict.
Nightshade is a revolutionary tool developed by researchers at the University of Chicago that allows artists to add imperceptible changes to their artwork before uploading it online. This tool aims to combat AI companies that use artists’ work without permission to train their models. By contaminating the training data with poisoned images, artists can manipulate AI models, causing them to generate chaotic and unpredictable outputs. The tool offers artists a powerful deterrent and aims to rebalance the power dynamics between AI companies and creators. Nightshade will be integrated into Glaze, another tool by the same team that allows artists to mask their personal style to prevent scraping by AI companies.
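Nightshade's actual perturbations are adversarially optimized against the feature extractors of image-generation models, and its internals are not described in this summary. As a hedged, purely illustrative sketch of the general idea — a bounded, near-imperceptible pixel-level change applied before upload — here is a minimal example; the function name `perturb_image` and the `epsilon` bound are hypothetical, not part of Nightshade:

```python
import numpy as np

def perturb_image(image: np.ndarray, epsilon: float = 2.0, seed: int = 0) -> np.ndarray:
    """Add a small, bounded random perturbation to an 8-bit RGB image.

    Illustrative only: real poisoning tools compute targeted, optimized
    perturbations, not random noise. The change here is capped at
    +/- epsilon intensity levels (out of 255), far below what a viewer
    would notice.
    """
    rng = np.random.default_rng(seed)
    noise = rng.uniform(-epsilon, epsilon, size=image.shape)
    perturbed = np.clip(image.astype(np.float64) + noise, 0, 255)
    return perturbed.astype(np.uint8)

# A hypothetical flat-gray 4x4 RGB image for demonstration
img = np.full((4, 4, 3), 128, dtype=np.uint8)
poisoned = perturb_image(img)
# Each channel shifts by at most a couple of intensity levels
max_shift = np.abs(poisoned.astype(int) - img.astype(int)).max()
```

The point of the bound is that the poisoned copy looks identical to a human while still differing systematically at the pixel level, which is the property a training-data attack exploits.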
Ahead of the Senate’s AI Insight Forum, data workers have written a letter urging lawmakers to develop policies and labor rights that prevent corporations like Amazon from exploiting workers involved in training AI algorithms. The letter emphasizes the critical contributions of data workers to AI advancements and calls for collective bargaining rights, protection from predatory surveillance, and improved labor conditions. The forum will include tech leaders and executives, as well as advocates like NAACP president Derrick Johnson, to discuss potential regulations on AI.
Apple Plans to Supercharge Siri with Generative AI and Infuse AI into Apple Music and Productivity Apps
Apple is investing major resources in bringing generative AI to its products and services. Siri, Apple's underperforming smart assistant, is set to undergo a complete revamp with deep generative AI integration; the transformation could arrive as early as 2025 and greatly enhance Siri's capabilities. Apple Music and the company's productivity apps will also receive an AI boost, with features such as auto-generated playlists and improved writing assistance. With these advancements, Apple aims to catch up to its competitors in the AI space and deliver more powerful, intelligent experiences to users.
Chatbot makers are facing legal threats, as false claims made by AI chatbots can ruin lives. Recent cases suggest, however, that AI companies can avoid defamation lawsuits by taking down false information and warning users about potential inaccuracies. OpenAI, the maker of ChatGPT, employed this strategy when Australian regional mayor Brian Hood accused it of defamation. While the approach may seem effective, the delay between the discovery of a defamatory statement and its removal can vary significantly, potentially harming individuals in the meantime. Lawsuits against AI giants like Microsoft and OpenAI further complicate the issue, highlighting the challenges of building effective defamation filters. OpenAI has taken the position that ChatGPT's outputs cannot be considered libel and has denied ever publishing defamatory statements. Legal decisions in ongoing cases will determine how AI chatbots are treated under defamation law.
A groundbreaking study reveals that AI can help cardiologists predict the development of atrial fibrillation (A-Fib) in patients. Using deep learning algorithms to analyze millions of electrocardiograms (EKGs), AI technology can identify subtle patterns and relationships that are beyond human detection. This development marks a significant advancement in early diagnosis and prevention of A-Fib, potentially revolutionizing the field of cardiology.
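The study's actual models are deep neural networks trained on millions of real EKGs, and none of that data or architecture is reproduced here. As a toy illustration of the underlying idea — a learned model picking up a systematic pattern too faint to spot by eye in any single trace — here is a minimal logistic-regression sketch on synthetic waveforms; all data, names, and parameters are invented for the example:

```python
import numpy as np

rng = np.random.default_rng(42)

def make_signal(has_pattern: bool, length: int = 100) -> np.ndarray:
    """Synthetic 'EKG' trace: noise plus, for positives, a faint bump.

    The 0.5-unit offset over samples 40-59 is well below the noise
    level of any individual sample, so no single trace reveals it.
    """
    signal = rng.normal(0.0, 1.0, length)
    if has_pattern:
        signal[40:60] += 0.5
    return signal

# Small labeled dataset: alternating negative/positive traces
X = np.stack([make_signal(i % 2 == 1) for i in range(400)])
y = np.array([i % 2 for i in range(400)], dtype=float)

# Train logistic regression by plain gradient descent
w = np.zeros(X.shape[1])
b = 0.0
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))  # predicted probabilities
    grad = p - y                            # gradient of log-loss
    w -= 0.01 * (X.T @ grad) / len(y)
    b -= 0.01 * grad.mean()

preds = (1.0 / (1.0 + np.exp(-(X @ w + b))) > 0.5).astype(float)
accuracy = (preds == y).mean()
```

Even this simple linear model learns to pool evidence across the bump region and separates the two classes well above chance, which is the same statistical principle, at vastly larger scale, behind deep-learning EKG analysis.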