
AI GOES TO WAR


Plus AI Starts To Write Laws.



AI Takes Center Stage in Israeli Offensive

As Israel resumes its offensive against Hamas, questions are being raised about the role of AI in the Israel Defense Forces' (IDF) approach to targeting. Reports indicate that the IDF is using an AI target-generation platform called “the Gospel” to identify and attack targets in Gaza. The IDF says the platform produces target recommendations for its analysts that match identifications made by human researchers. Critics, however, point to the lack of transparency and accountability involved in using AI-driven systems in warfare. The use of AI in targeting decisions could also set a precedent for other countries’ military systems, posing risks for civilians in conflict zones.

Anduril Unveils Roadrunner: AI-Controlled Combat Drone to Tackle Aerial Threats

Anduril, a defense contractor founded by Palmer Luckey, has introduced its newest innovation: a combat drone called Roadrunner. Inspired by the escalating conflict in Ukraine, Roadrunner is designed to counter the low-cost suicide drones that have proven deadly there. This modular, AI-controlled drone operates at high speeds and can autonomously intercept threats before delivering an explosive payload. The system supports software upgrades and keeps a human in the loop for identifying and classifying threats, ensuring accountability for any action taken.

AI Writes Legislation to the Surprise of Brazilian Lawmakers

In Porto Alegre, Brazil, an experimental ordinance written entirely by the AI-powered chatbot ChatGPT was unknowingly passed by city lawmakers. City councilman Ramiro Rosário used ChatGPT to draft a proposal exempting taxpayers from paying to replace stolen water-consumption meters. The proposal was presented to the council without modification, and without disclosure of its AI origin, and was unanimously approved. The incident has sparked a debate on the role of artificial intelligence in public policy, with concerns about the spread of false information and about how a machine’s output compares with human judgment.

AI Image Generation: As Energy-Intensive as Charging Your Phone, Study Shows

A recent study conducted by researchers at Hugging Face and Carnegie Mellon University reveals that using AI to generate images consumes a similar amount of energy as fully charging a smartphone. The study examines the carbon emissions associated with various AI tasks and highlights the significant carbon footprint AI models leave when in use. Generating images proves to be the most energy and carbon-intensive AI task, resulting in emissions equivalent to driving 4.1 miles in an average gasoline-powered car for every 1,000 images generated. The study emphasizes the need to prioritize more planet-friendly AI practices and raises awareness about the environmental impact of AI technologies.
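The study's headline figure can be turned into a rough per-image estimate. A minimal sketch of that arithmetic, assuming roughly 400 g of CO2 per mile for an average gasoline car (an assumed EPA-style average, not a number from the study itself):

```python
# Back-of-envelope conversion of the study's figure: generating 1,000
# images produces emissions comparable to driving 4.1 miles in an
# average gasoline-powered car.
# The 400 g CO2/mile value is an ASSUMPTION (roughly the U.S. average
# for a passenger car), not a figure reported by the study.

G_CO2_PER_MILE = 400.0        # assumed average gasoline-car emissions
MILES_PER_1000_IMAGES = 4.1   # driving-distance equivalent from the study

def co2_grams_for_images(n_images: int) -> float:
    """Estimated grams of CO2 emitted to generate n_images AI images."""
    miles_equivalent = MILES_PER_1000_IMAGES * n_images / 1000
    return miles_equivalent * G_CO2_PER_MILE

print(round(co2_grams_for_images(1), 2))     # per-image estimate, ~1.64 g
print(round(co2_grams_for_images(1000), 1))  # ~1640 g for 1,000 images
```

Under these assumptions, a single generated image comes out to a gram or two of CO2 — small individually, but significant at the scale at which these models are used.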

AI Technology Gives Users the Option to Include the Voices of Celebrities in Personal Videos

YouTube has developed a generative AI tool called “Dream Track,” which lets users create soundtracks for their videos featuring the voices of artists such as John Legend and Charlie Puth. Separately, Meta is beta-launching various AI “characters” with the likenesses of notable public figures, including Tom Brady and Snoop Dogg. This technology has the potential to transform film and TV production, raising questions about intellectual property, identity, compensation, and safety. While some celebrities are open to AI-generated content and view it as an extension of their digital footprint, others are cautious about its impact on the industry and the need to protect artists’ interests.

Google Battles AI-Generated Fake Images as Reality Blurs

The widespread use of AI tools to create machine-generated text and images is causing concern for Google. The company, whose search results are powered by AI algorithms, is struggling to distinguish authentic images from fake ones. The consequences are serious: manipulated images can distort people’s views of reality, spreading misinformation and skewing understanding of important events. Authentic images of historical figures such as Muhammad Ali, Maya Angelou, and Joan of Arc are being displaced in search results by convincing AI-generated ones. Google’s efforts to combat this and rank reliable content higher are crucial to maintaining its role as the world’s leading platform for organizing information.

