
TECH GIANTS TACKLE AI JOB CUTS

Plus: Warnings Issued Over China’s AI Involvement in Worldwide Elections

Tech Giants Unite to Tackle AI Job Disruption

Major tech companies, including Google, IBM, and Microsoft, have formed the AI-Enabled Information and Communication Technology (ICT) Workforce Consortium. Its mission is to mitigate AI-driven job losses through upskilling and reskilling, with the goal of reaching more than 95 million people globally over the next decade. The consortium will focus on understanding AI’s impact on specific job roles and on creating training programs that equip workers with the skills the evolving job market demands. Member companies have set ambitious individual targets, such as IBM’s pledge to skill 30 million people by 2030.

Concerns Rise Among US Workers Over AI Job Replacement

Nearly half of US office workers fear AI could replace their jobs, a study by Jefferies reveals. Separate research from the European School of Management and Technology suggests AI could improve management efficiency in research projects, for example by processing data and predicting new compounds, while emphasizing the irreplaceable value of human expertise. However, integrating AI into management roles raises ethical and legal concerns, and further research is needed on its long-term effects on the workforce.

Microsoft Alerts on AI-Driven Election Disruption by China

Microsoft warns that China plans to disrupt upcoming elections in the US, South Korea, and India using AI-generated content, building on tactics trialed during Taiwan’s presidential election. The tech giant’s threat intelligence report reveals that Chinese and North Korean state-backed cyber groups will likely use social media to distribute AI-crafted memes, videos, and audio to influence voter opinion. While the effectiveness of such AI-generated disinformation currently remains low, Microsoft cautions its potential impact could grow. The report also highlights past incidents, including AI-fabricated endorsements and accusations during Taiwan’s election, underscoring the evolving threat of AI in cyber-influence operations.

China Trails US in AI Development, Says Alibaba Co-Founder

Joe Tsai, Alibaba Group’s co-founder, says China is two years behind the US in artificial intelligence (AI) development, attributing the gap to US technology restrictions. In a podcast interview, Tsai discussed the impact of US export controls on Chinese tech firms, including Alibaba, particularly in accessing advanced semiconductors. Despite these challenges, he believes Chinese companies will find alternatives and predicts China will eventually develop its own high-end GPUs, emphasizing the importance of AI to Alibaba Cloud and to supporting small and medium-sized enterprises.

Human Composers Outperform AI in Brand Music Appeal and Emotional Accuracy

A study conducted by Stephen Arnold Music and SoundOut revealed that human-composed music for brands significantly surpasses AI-generated compositions in emotional accuracy and overall appeal. The research compared music across four categories: AI-generated, AI supplemented by humans, human-generated, and human supplemented by AI, and found that human compositions scored highest in appeal and emotional attunement to creative briefs. While AI shows potential as a creative tool for ideation, it falls short in expressing nuanced human emotions, suggesting that human creativity remains irreplaceable, especially for brand communications that demand authenticity.

The Challenge of Detecting AI-Generated Voices: No Easy Fix in Sight

Despite advancements in deepfake generation technology, detecting AI-generated voices remains a significant challenge. As NPR’s experiment with deepfake audio detection providers showed, existing tools often misidentify real voices as AI-generated or fail to catch AI-generated clips. While AI detection methods are improving, the race between deepfake creators and detectors continues, highlighting the need for collaboration to enhance detection capabilities. Yet, with deepfakes rapidly evolving, technology alone cannot guarantee reliable detection, especially for less documented voices, underscoring the ongoing struggle to mitigate deepfake-related risks.

NYC Defends AI Chatbot Amid Legal Misinformation Concerns

New York City’s AI chatbot, MyCity, launched as a pilot in October to provide business owners with advice, has been caught issuing incorrect legal advice. Mayor Eric Adams acknowledged the issues but defended the technology as a work in progress, highlighting the pilot nature for ironing out flaws. The chatbot, developed with Microsoft’s Azure AI, has inaccurately advised on worker tips, schedule change regulations, cashless store policies, and the city’s minimum wage. The city plans to address inaccuracies and has updated its website to caution against taking the chatbot’s responses as professional advice.

Exploring the Intersection of Psychoanalysis and AI: A Thought-Provoking Journey

The intricate relationship between psychoanalysis and artificial intelligence (AI) presents a fascinating realm for exploration. On one hand, applying AI to psychoanalysis opens new avenues for understanding the human psyche, albeit amidst concerns over the ethical implications of such a global experiment. On the other hand, applying psychoanalytic techniques to AI itself stirs debate, particularly around the notion of understanding or attributing a “mind” to AI. This complex interplay raises questions about the essence of mind creation and the potential insights psychoanalysis could offer into the development and operation of AI. While today’s AI lacks sentience, the pursuit of understanding its underlying processes and the human influence in its creation through psychoanalytic lenses suggests a promising, albeit speculative, pathway to unraveling the mysteries of both AI and the human mind.

AI-Generated Lawyers Issue Fake Legal Threats for Backlinks

Fake AI-generated lawyers are targeting websites with phony legal threats to extract backlinks, as reported by 404 Media. Ernie Smith of the newsletter Tedium and the site Dataconomy both received emails from “Commonwealth Legal” demanding backlinks under threat of legal action for supposed copyright violations. Investigation revealed that the law firm and its lawyers are AI-generated fakes, part of a scheme to manipulate search engine rankings through backlinking. The supposed beneficiary, Tech4Gods.com, denies involvement, stating the contested image came from a free-to-use source. The scam highlights the evolving tactics of online manipulation.

