Plus Microsoft Wants To Train 2 Million Indians in AI.
As the tech industry continues to prioritize profitability, the rise of AI is driving further job cuts. As companies focus on harnessing artificial intelligence, resources are being redirected away from traditional roles, accelerating workforce automation. This belt-tightening after years of unrestrained growth reflects a newly cautious industry. Consequently, amid the excitement surrounding AI, concerns about unemployment and the future of the job market are growing louder.
Microsoft CEO Satya Nadella has announced plans to train 2 million people in India with AI skills by 2025. The initiative, in collaboration with governments, nonprofits, and corporate organizations, aims to equip the future workforce with the necessary skills to harness the potential of AI. The training will focus on Tier 2 and Tier 3 cities as well as rural areas. Nadella believes that AI has the power to transform the nation and bridge the AI skills gap while creating new opportunities for growth and development.
The majority of workers won’t lose their jobs to AI; instead, they’ll be replaced by people who have better AI skills. A new report by Springboard reveals that businesses are struggling to keep up with AI advancements, leaving workers feeling overwhelmed and left behind. At the same time, AI expertise is in high demand, with over a third of leaders stating their companies need workers with AI skills. The report highlights the need for companies to provide clear direction and upskilling opportunities to close the skills gap and embrace the potential of AI.
Researchers at Queen Mary University of London are exploring the use of AI in music production. They are using computational creativity and generative AI to create “new virtual worlds” of music. Some musicians embrace the creative potential of AI, while others are concerned about the ethical implications, such as using AI to generate the voices of deceased artists. The use of generative AI in music production raises legal and ethical concerns, but if guided in the right way, AI could have its place in the industry.
Researchers from Amsterdam UMC and Radboudumc have developed an AI algorithm that can predict within a week whether an antidepressant will work for patients with major depressive disorder. By analyzing brain scans and clinical information, the algorithm can determine the effectiveness of the medication up to 8 weeks faster than traditional methods. This advancement allows for more personalized and efficient treatment, preventing unnecessary prescriptions and reducing the time patients must wait to find the right medication.
A New York lawyer is facing possible disciplinary action after it was discovered that she cited a fictitious case generated by AI in a medical malpractice lawsuit. The lawyer used OpenAI’s ChatGPT to research prior cases but failed to confirm the existence of the case she cited. The incident highlights concerns about the reliability of AI and the need for lawyers to verify its results. Experts warn that AI is a tool that can lead users to a solution but should not be relied upon as the ultimate arbiter of all knowledge. The judicial system continues to grapple with the use of AI in legal filings, and calls for accountability and education among professionals are growing.
Bumble, a popular dating app, has introduced an AI-powered “Deception Detector” to tackle the issue of fake profiles. The tool uses machine learning to identify and block spam, scams, and fake accounts even before users interact with them. Bumble’s testing has shown a 95% success rate in blocking suspicious accounts, leading to a 45% decrease in member reports of spam and scams. With the Federal Trade Commission reporting $1.3 billion lost to romance scams between 2017 and 2021, Bumble’s innovation aims to ensure genuine connections on its platform. This follows Tinder’s recent introduction of machine learning-based warnings for users who violate community guidelines.
AI is revolutionizing the beauty industry, serving as a shopping assistant, a source of skincare expertise, and even a platform for brand activations. However, as AI-generated images circulate online, it is becoming increasingly difficult to distinguish the real from the artificial. While AI offers benefits such as generating hairstyles in real time for clients, it also has potential downsides. Stylists caution that relying solely on AI reference photos can create unrealistic expectations and skew perceptions. Additionally, the rise of deepfake technology poses moral dilemmas and threatens the livelihoods of industry professionals. Despite these challenges, the beauty industry must find ways to navigate the AI landscape and harness its potential.
Lingo Telecom and Life Corporation, based in Texas, have been linked to a robocall campaign that used an AI voice clone of President Joe Biden to discourage New Hampshire voters from participating in the primary election. The New Hampshire Attorney General has issued cease-and-desist orders and subpoenas to both companies, which have been accused of illegal robocall activities in the past. The calls, which began two days before the primary election, played a prerecorded message in the president’s cloned voice advising potential Democratic voters not to vote. The investigation is ongoing to determine whether the campaign violated election and consumer protection laws.
Nebojša Vujinović Vujo, the CEO of digital marketing firm Shantel, has made a fortune by using AI to generate clickbait articles and monetize abandoned news outlets. Although controversial, Vujo argues that he is simply embracing the power of AI, just as people abandoned horse-drawn buggies for cars. Critics, however, worry about the impact on journalism and the rise of spam-like content. But as Google and other search engines crack down on AI-generated content, the future of this profitable business model remains uncertain.