
HUMANS ADOPT AI BIAS

[Image: human and AI fingers touching]

Plus, will AI decide the 2024 election?



OpenAI Launches Preparedness Team to Study AI Risks, Including Nuclear Threats

OpenAI has established a new team called Preparedness to assess and protect against potential “catastrophic risks” originating from AI models. Led by Aleksander Madry, the team will focus on researching the dangers posed by future AI systems, including their ability to manipulate and deceive humans, as well as their potential to generate malicious code. While the team will also explore less obvious areas of AI risk, OpenAI has expressed concern about the impact of AI on “chemical, biological, radiological, and nuclear” threats. As part of the launch, OpenAI is inviting the community to submit risk study ideas, with a $25,000 prize and a job at Preparedness for the top ten entries. The Preparedness team will also develop a risk-informed development policy to address the safety of advanced AI systems.

Controversy Surrounds Gannett’s Use of AI-Generated Content on Product Review Site

Gannett has faced criticism for publishing articles on its product review site that appear to be generated by AI. The articles have been described as stilted, repetitive, and nonsensical, with low-resolution images and misleading product information. Gannett claims that the content was actually created by third-party freelancers hired by a marketing agency partner. However, evidence suggests that AI tools were used, raising concerns about the lack of transparency and the potential undermining of journalists’ credibility. The issue follows a previous controversy at Gannett involving a failed AI sports article experiment and a recent strike by unionized Reviewed staff.

Urgent Call for a Pledge Against AI-Driven Deception in the 2024 Election

New York Mayor Eric Adams has used AI software to send prerecorded calls in various languages to city residents. However, concerns have been raised that the misuse of AI technology can distort reality and pose a threat to democracy. To address this, elected officials, candidates, and their supporters are being urged to pledge not to deceive voters through AI manipulation. Legions of citizens could be manipulated, leading to a loss of faith in the system and potentially opening doors to foreign adversaries. Efforts to update laws are underway, but experts call for immediate action to prevent AI-driven chaos in the 2024 election.

Unconscious Bias: How AI Can Influence Human Behavior in the Long Term

New research shows that AI biases can unconsciously influence human behavior even after users stop interacting with the AI program. Algorithm-based technologies have demonstrated biases in areas such as speech recognition software and healthcare algorithms. This study reveals that users may unknowingly adopt these biases, which can persist in their decision-making. The research sheds light on how AI can affect human behavior and emphasizes the need for transparency and knowledge about AI systems to minimize the impact of bias.

Experts Highlight Dangers AI Poses to Democracy Ahead of 2024 Election

As the 2024 election approaches, experts are concerned about the potential spread of misinformation and disinformation driven by AI and deepfake technology. Recent experiments have demonstrated how easily AI-generated misinformation can deceive lawmakers and the public. The growing influence of AI and deepfakes makes it increasingly challenging for voters to distinguish fact from fiction. With the stakes higher than ever, experts are urging the implementation of stricter regulations to combat the potential dangers AI presents to democracy and to safeguard the integrity of future elections.

Using AI to Detect and Prevent Tree Diseases

Researchers from the University of British Columbia (UBC) have developed a method that utilizes genomics and machine learning to identify tree pathogens and assess the potential harm of new pathogens based on their genetic traits. This approach can help detect emerging diseases at ports and borders before they spread. By analyzing the genetic characteristics of different pathogens, the researchers can provide early warnings and prevent severe disease outbreaks that can devastate forests and cause economic damage. The research has been published in the journal Scientific Reports.
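The UBC study's actual pipeline is not reproduced here; purely as a loose illustration of the general idea it reports (scoring a pathogen's likely harm from genomic traits with machine learning), a minimal sketch might look like the following. The feature names, data, and choice of a scikit-learn random forest are assumptions for illustration, not the researchers' method.

```python
# Minimal sketch (not the UBC pipeline): estimate pathogen risk from genomic traits.
# All feature names and data below are invented for illustration only.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Hypothetical per-genome features, e.g. counts of toxin-related genes,
# host-range breadth, and secreted-enzyme gene families.
X = rng.integers(0, 40, size=(200, 3)).astype(float)
# Hypothetical labels: 1 = known damaging forest pathogen, 0 = benign relative.
y = (X[:, 0] + 0.5 * X[:, 2] + rng.normal(0, 5, 200) > 30).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X_train, y_train)

# Probability that a newly sequenced, unfamiliar isolate is high-risk,
# which could trigger closer inspection at a port or border.
new_isolate = np.array([[35.0, 4.0, 22.0]])
print("estimated risk:", model.predict_proba(new_isolate)[0, 1])
```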

Google’s AI Chatbot Avoids Answering Questions on Israel-Palestine Conflict

Google’s Bard AI chatbot, designed to provide answers to various queries, has come under scrutiny for its refusal to respond to questions related to the ongoing Israel-Hamas conflict. Users have noticed that the chatbot readily answers questions about other global conflicts but refuses to provide any information or assistance regarding the Israel-Palestine issue. Google has confirmed that this is a deliberate measure to disable Bard’s responses to associated queries, citing caution and the need to avoid mistakes in addressing escalating conflicts or security issues. This incident raises questions about the biases and limitations of AI language models, which can reflect real-world biases and stereotypes.

AI Engines Scanning Books: A Debate Over Copyright Infringement or Fair Use

In the age of advanced AI programs, there is growing concern about copyright infringement and fair use. Authors and creative professionals argue that they should be compensated when their work is used to train the language models behind AI tools. AI companies, however, maintain that their use of copyrighted content falls under the fair use doctrine of copyright law. Determining whether AI engines infringe on copyright or qualify for fair use is complex, as it requires weighing factors such as transformative use, the nature of the work, the amount used, and the effect on the market. The legal debate around AI and copyright raises questions about consent, compensation, and transparency in the creative economy.
