
AI TARGETS GAZA

AI COMPANIONS

Plus how AI helps you date. 



AI Targeting in Israeli Attacks on Gaza Raises Concerns and Controversies

The Israeli military claims to be using AI to select targets in real time for its attacks on Gaza, calling the system “the Gospel.” The military says the system helps identify enemy combatants and equipment while reducing civilian casualties. Critics, however, argue that the unproven system could be used to justify the killing of Palestinian civilians; AI algorithms are known to have flaws and high error rates, making it questionable whether they can accurately target humans on the battlefield. Despite those concerns, experts agree that this marks the beginning of a new phase in the use of AI in warfare. The Gospel generates targets far faster than human analysts can, making it a valuable tool for commanders, but it also raises questions about transparency and accountability, as well as the potential for biased and indiscriminate targeting.

When Algorithms Guide Matters of the Heart

Young adults like Andrew are turning to AI for guidance on matters of the heart. Faced with conflicting advice from friends, Andrew consults an AI relationship coach named Meeno. Through virtual conversations, he gains the confidence he needs to take the next step with his romantic interest. Meeno’s purpose is to address the social skills gap exacerbated by the COVID-19 pandemic. Similarly, the dating assistant app Rizz helps users with online communication by suggesting responses. While some skeptics question the ability of AI to predict human attraction and compatibility accurately, dating apps like SciMatch and Iris Dating train AI models to offer compatibility scores based on facial features and personal preferences. The potential of AI in matchmaking lies in combining various factors like personality, attraction, and chemistry to improve the process.

Channel 1 Unveils First AI-Generated News Anchors for National News Station

Channel 1, a new Los Angeles-based news station set to launch in 2024, plans to be the first nationally syndicated outlet to use AI-generated news anchors. The station aims to use AI responsibly by employing a mix of AI-generated people, digital avatars created from real actors, and human anchors. It intends to offer a personalized news experience, allowing viewers to select their preferred stories, with the app learning their preferences over time and refining its recommendations.

Pope Francis Calls for Global Treaty on Ethical Use of AI

Pope Francis has urged world leaders to adopt a global treaty governing the ethical use of AI. The pontiff expressed concerns about the potential risks of AI, including disinformation, election interference, and blurred responsibility in decision-making. He emphasized that technological developments should improve the quality of life for all humanity, instead of exacerbating inequalities and conflicts. Pope Francis called for a binding international treaty to regulate the development and use of AI, aiming to prevent harm and promote responsible practices. 

AI Denies Care to Seniors: Humana Faces Class-Action Lawsuit

Health insurer Humana is facing a class-action lawsuit over the alleged use of an AI algorithm to deny rehabilitation care to seniors. This follows a similar lawsuit filed against UnitedHealth Group for using the same algorithm. The suit claims that Humana’s algorithm replaces medical professionals’ judgment and predicts the amount of care seniors need, leading to the denial of medically necessary care. The algorithm is also alleged to be highly inaccurate. The plaintiffs are seeking damages and an order to stop Humana from using the algorithm. Congress and the Biden administration are also expressing concern about coverage denials in Medicare Advantage plans and the use of AI algorithms.

AI-Infused Google Search Poses Threat to News Publishers’ Web Traffic

News publishers are growing concerned about the impact of Google’s AI-powered search tool on their web traffic. With approximately 40% of web traffic coming from Google searches, publishers heavily rely on the search engine to direct users to their sites. However, the integration of AI in search results could lead to users obtaining full answers directly from the search engine, reducing the need to click on publishers’ links. A task force at The Atlantic modeled the potential consequences and found that the site could lose a significant share of its traffic. The episode points to a gathering storm in the publishing industry as outlets grapple with AI’s impact on their business models and traffic.

BONUS STORIES

Trivago Harnesses the Power of AI, Replaces Spokespeople with One Actor in New Ad Campaign

Trivago, the Germany-based lodging metasearch site, is launching a new brand advertising campaign that replaces its roster of professional pitchpeople with one actor who speaks multiple languages, thanks to the power of AI. The campaign, which marks Trivago’s return to brand advertising, features UK actor James Sheldon and emphasizes the familiar advice of searching for hotels and comparing prices. The ads also showcase Trivago’s new logo and visual identity, with the “T” resembling a checkmark. Trivago hopes to regain double-digit growth by positioning itself as highly relevant to travelers. The multi-country campaign will kick off in Denmark and Canada, followed by the US, the Netherlands, and Brazil.

AntiFake: New Tool Aims to Prevent Unauthorized Speech Synthesis

Advances in generative artificial intelligence have paved the way for synthesized speech that is all but indistinguishable from actual human voices. However, this technology also has the potential for misuse, as malicious actors can clone a person’s voice without their consent and use it to deceive others. To address this concern, computer scientist Ning Zhang has developed AntiFake, a tool that aims to prevent the unauthorized synthesis of voice data into deepfakes. The tool uses adversarial AI techniques to distort the recorded audio signal in a way that still sounds natural to humans but is unusable for training a voice clone. Zhang’s lab has tested the tool against multiple speech synthesizers, achieving a protection rate of 95 percent. While digital security systems will never provide complete protection, AntiFake can “raise the bar” and limit the misuse of voice cloning technology.
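To make the general idea concrete, here is a minimal, hypothetical sketch in Python (using PyTorch) of the kind of adversarial perturbation the story describes. It is not AntiFake’s actual implementation: the ToySpeakerEncoder below is a made-up stand-in for a real speaker-embedding model, and the protect function simply nudges a clip’s embedding away from the original voice while keeping the added noise small.

# Illustrative sketch only, not AntiFake's real code: perturb a waveform so a
# speaker-embedding model no longer matches the original voice, while keeping
# the added noise small. ToySpeakerEncoder is a made-up placeholder.
import torch
import torch.nn as nn
import torch.nn.functional as F


class ToySpeakerEncoder(nn.Module):
    """Stand-in for a real speaker-embedding model (an assumption for this sketch)."""

    def __init__(self, emb_dim: int = 64):
        super().__init__()
        self.conv = nn.Conv1d(1, emb_dim, kernel_size=400, stride=160)

    def forward(self, wav: torch.Tensor) -> torch.Tensor:
        # wav: (batch, samples) -> unit-length embedding: (batch, emb_dim)
        feats = torch.relu(self.conv(wav.unsqueeze(1)))
        return F.normalize(feats.mean(dim=-1), dim=-1)


def protect(wav, encoder, eps=0.002, steps=50, lr=5e-4):
    """Add a perturbation bounded by `eps` that pushes the clip's speaker
    embedding away from the original one (a toy proxy for "unusable for
    training a voice clone")."""
    with torch.no_grad():
        original = encoder(wav)                # embedding of the clean voice
    delta = torch.zeros_like(wav, requires_grad=True)
    opt = torch.optim.Adam([delta], lr=lr)
    for _ in range(steps):
        emb = encoder(wav + delta)
        loss = F.cosine_similarity(emb, original).mean()  # minimize similarity
        opt.zero_grad()
        loss.backward()
        opt.step()
        with torch.no_grad():
            delta.clamp_(-eps, eps)            # keep the distortion small
    return (wav + delta).detach()


if __name__ == "__main__":
    clip = torch.randn(1, 16000)               # 1 second of dummy 16 kHz audio
    protected = protect(clip, ToySpeakerEncoder())
    print(float((protected - clip).abs().max()))  # perturbation stays within eps

In a real system the perceptual constraint and the set of targeted voice-cloning models would be far more sophisticated; the point of the sketch is only that the audio stays natural-sounding to a listener while the features a cloning model relies on are disturbed.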

From Cheating Claims to Ethical Quandaries: The Impact of AI Chatbots on NYC Students

NYC high school students are increasingly using AI-powered chatbots such as ChatGPT to supplement their learning. While some students use the tools to get explanations and insights into complex subjects, others have been accused of using them to cheat on assignments. The trend has prompted discussions about educational practices, curriculum changes, and the ethical implications of relying too heavily on AI tutoring tools. And while the chatbots offer convenience, students and educators alike are raising concerns about accuracy, the perpetuation of biases, and the potential loss of critical thinking skills.

GET BREAKING AI NEWS FOR THE REST OF US

SUBSCRIBE NOW AND GET OUR FREE 20-PAGE AI EXPLAINER "AI FROM ZERO" INSTANTLY

DELIVERED EVERY WEEKDAY
TO 3120 SUBSCRIBERS
