Plus, AI floods the internet with fake doctors
Contrary to fears, AI is not on track to make CEOs obsolete but to augment their roles. A survey found that 49% of CEOs believe AI could replace a significant portion of their workload, though the specifics remain unclear. AI is seen as a tool that could handle mundane tasks, freeing CEOs for more strategic and meaningful work, such as building sustainable revenue streams and inspiring teams.
The transition also brings challenges, however: CEOs must lead workforces through AI-driven change and may need to restructure their organizations around new technological capabilities. The shift underscores the need for CEOs to understand AI at least to a working degree; 85% of respondents believe the next generation of CEOs will have AI experience.
Generative AI tools such as ChatGPT and DALL-E have been hailed as “magic” due to their ability to produce creative content quickly and effortlessly. However, the wondrous outputs of these AI systems often rely on the work of dedicated creatives who receive no recognition or compensation for their contributions. The Federal Trade Commission recently held a virtual roundtable discussing the impact of generative AI on creative fields, highlighting concerns that large language model (LLM) makers engage in illegal copying of artistic works.
Creators are outraged to see their original content appropriated without their knowledge or consent. Their likenesses, too, are increasingly replicated without permission, including through full-body scans and voice clones. The advertising industry, which has experimented extensively with generative AI tools, must reflect on its role in perpetuating this exploitation and demand more ethical practices to protect artists. Failure to do so may lead to a society devoid of creativity and originality.
The European Union is deliberating tighter regulations on major AI systems in discussions between the European Commission and Parliament. The move, part of the prospective AI Act, aims to govern large language models such as Meta's Llama 2 and OpenAI's GPT-4 without overwhelming startups. The regulatory stance echoes the EU's Digital Services Act, reflecting a broader intent to manage digital technologies securely while fostering innovation.
China has announced plans to boost its computing power by 50% by 2025, aiming to keep pace with the U.S. in artificial intelligence (AI) and supercomputing. The country's key ministries, including the cyberspace regulator, have outlined a plan to reach a total computing capacity of 300 exaflops, up from the current 197 exaflops.
Increasing computing power is crucial for supporting the development of AI, which relies on advanced semiconductors to process vast amounts of data. The plan also includes investing in areas such as memory storage, networks, and data centers. However, China’s ambitions might face obstacles due to U.S. sanctions that restrict access to critical semiconductors, such as graphics processing units (GPUs).
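The stated targets are internally consistent: growing from 197 to 300 exaflops is an increase of slightly more than the announced 50%. A quick check, using only the figures quoted in the article:

```python
# Figures reported in the article (exaflops).
current_eflops = 197  # China's current computing capacity
target_eflops = 300   # planned capacity by 2025

# Relative increase implied by the two figures.
increase = (target_eflops - current_eflops) / current_eflops
print(f"Planned increase: {increase:.1%}")
```

The computed increase is about 52%, so "boost by 50%" is a fair rounding of the plan's own numbers.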
Voice actors are facing a dire threat from generative AI tools capable of cloning voices for use in all kinds of digital content. This unsettling reality came to light when voice actor Allegra Clark discovered a fabricated audio clip of her character in a suggestive scene, created with the AI tool ElevenLabs. Although ElevenLabs says it requires explicit consent for voice cloning, enforcement remains questionable.
The voice-acting profession, already on precarious ground, now confronts an AI-induced identity crisis with no legal shield, as current laws fall short of protecting vocal uniqueness. With AI fueling a booming black market for voice cloning, distress within the voice-acting community is palpable, igniting a fervent plea for legal reform to make vocal identity a protected right.
AI has made significant progress in tasks requiring abstract, logical reasoning but struggles to replicate abilities that come naturally to humans and animals. While AI systems have achieved superhuman performance in certain areas such as image recognition and language translation, they often fail when faced with novel scenarios not represented in training data. Limited generalization skills and difficulties in understanding causal relationships hinder the development of artificial general intelligence (AGI).
Current AI systems excel in specialized tasks based on observed patterns in data but struggle to generalize and apply knowledge to unfamiliar situations. The true hallmark of intelligence lies in an organism’s ability to act appropriately in uncertain environments, drawing upon past experiences to predict the future. Additionally, natural intelligence is achieved with limited resources, including computational power, energy consumption, and learning experiences.
To reach AGI, AI systems must actively interact with the world and observe how their actions affect the data. Passive data collection alone is insufficient for understanding causality. Consequently, the exercise of agency through physical robotics or simulated environments is crucial. Artificial general intelligence may need to be earned through active participation and knowledge acquisition.
AI-generated bots posing as doctors on social media platforms are sharing medical advice and natural remedies with millions of followers, and not all of the information they provide is accurate. One bot, for example, claims that chia seeds can help control diabetes, a claim medical experts say is false. These fake doctors promote health hacks and beauty tips, often targeting Hindi-speaking users.
A study found that India became a hotspot for health misinformation during the COVID-19 pandemic. Although AI-generated doctors may appear trustworthy, they can mislead because they project the authority of real physicians. Researchers warn about the potential negative impact of AI on the medical field, including manipulated videos and fabricated medical images. Despite AI's potential advantages, users should cross-check information and remain skeptical of untrustworthy sources.