Plus: Can We Etch Ethics Into AI?
George Carlin’s estate has filed a copyright infringement lawsuit against Dudesy, the media company that created a fake comedy special using AI to imitate the late comedian’s voice and style. The estate claims that Carlin’s copyrighted material and likeness were used without permission, diminishing the value of his comedic works. The AI-generated special, “George Carlin: I’m Glad I’m Dead,” has drawn almost 500,000 views on YouTube. Carlin’s daughter and fans have condemned the special as an attempt to recreate a mind that will never exist again.
In the future of work, a skills-first mindset will be required for employees and employers to navigate the changes AI brings. Employees should view their jobs as a collection of tasks that can be divided between AI and their own unique skills. Employers, on the other hand, should hire and develop talent based on skills rather than degrees or previous job titles. In the future, people skills, problem-solving, and collaboration will be crucial for success, as AI becomes a tool that enhances productivity rather than replaces humans.
Scientists are proposing a novel solution to prevent dangerous AI from being developed secretly. By encoding regulations and limitations directly into computer chips, the power of advanced AI algorithms can be controlled. The restrictions could be enforced through the issuance and periodic renewal of licenses by governments or international regulators. The idea aims to curb the potential harm caused by AI and prevent the development of chemical or biological weapons or cybercrime automation. Implementing this approach, however, would require developing new hardware features and cryptographic schemes and navigating political challenges.
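To make the licensing idea above concrete, here is a minimal sketch of how a chip might gate heavy AI workloads behind a regulator-issued, expiring license. All names, parameters, and the key scheme are hypothetical: a real design would use asymmetric signatures verified by tamper-resistant hardware, whereas this toy uses a symmetric HMAC purely to stay self-contained.

```python
import hmac
import hashlib
import json
import time

# Hypothetical stand-in for key material fused into the chip at manufacture.
# A real scheme would hold only a public verification key on the chip.
REGULATOR_KEY = b"regulator-shared-secret"

def issue_license(max_flops: float, valid_until: float) -> dict:
    """Regulator side: sign a compute budget and an expiry timestamp."""
    payload = json.dumps(
        {"max_flops": max_flops, "valid_until": valid_until},
        sort_keys=True,
    ).encode()
    tag = hmac.new(REGULATOR_KEY, payload, hashlib.sha256).hexdigest()
    return {"payload": payload.decode(), "tag": tag}

def chip_allows(lic: dict, requested_flops: float, now: float) -> bool:
    """Chip side: verify the signature, the expiry, and the requested budget."""
    expected = hmac.new(
        REGULATOR_KEY, lic["payload"].encode(), hashlib.sha256
    ).hexdigest()
    if not hmac.compare_digest(expected, lic["tag"]):
        return False  # forged or corrupted license
    terms = json.loads(lic["payload"])
    if now > terms["valid_until"]:
        return False  # license lapsed; periodic renewal required
    return requested_flops <= terms["max_flops"]

lic = issue_license(max_flops=1e24, valid_until=time.time() + 90 * 86400)
print(chip_allows(lic, requested_flops=1e23, now=time.time()))  # within budget
print(chip_allows(lic, requested_flops=1e25, now=time.time()))  # over budget
```

The periodic-renewal requirement falls out of the expiry check: once `valid_until` passes, the chip refuses further work until a fresh license is signed, which is where a regulator could withhold renewal from non-compliant developers.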
The United States Federal Trade Commission (FTC) has opened an inquiry into the investments and partnerships that major tech companies like Microsoft, Google, Amazon, and OpenAI have made in the AI sector. The agency is examining whether these deals give the tech giants outsized influence that could distort innovation and undermine fair competition. Alongside the FTC, the British Competition and Markets Authority is conducting a similar examination. The inquiry represents the first concrete effort to scrutinize AI firms and their use of partnerships within the rapidly growing industry.
In a thought-provoking exchange, Avi Loeb and Doug Finkbeiner discuss the fundamental differences between humans and AI. While Finkbeiner points to the ability to turn off and resurrect AI systems and the fact that humans must bear the expense of powering AI, Loeb argues that humans, too, could potentially be revived if we understood biology better. Loeb suggests that as AI gains the ability to engage with the physical world and generate its own power, the distinction between humans and AI may become less significant. Understanding the complexity of biology, he argues, should be accompanied by humility and gratitude, shaping our perspective on life and what it means to be human.
Google Research has introduced Lumiere, an advanced AI video generator that can produce photorealistic five-second videos from simple text prompts. What sets Lumiere apart is its “Space-Time U-Net architecture,” allowing it to generate the entire video’s temporal duration at once. Users can create and edit videos effortlessly using prompts like “panda playing ukulele at home” or “Sunset timelapse at the beach.” Lumiere can also animate specific parts of an image, fill in missing areas, and perform targeted edits. While the technology aims to enable novices to generate visual content, there is concern about the potential misuse and the need for tools to detect biases and malicious use cases.
Experts warn about the risk of easily generated deepfakes using AI technology, as the recent case of a controversial audio clip resembling a school principal demonstrates. With the widespread availability of generative AI tools, society may face an overwhelming wave of digital fraud that challenges the trustworthiness of any media item. Analyzing the audio, a computer science professor finds distinct signs of digital splicing, raising doubts about its authenticity. Detecting AI-generated audio requires a high level of skill, and there are currently no reliable publicly available deepfake detection tools. As technology advances, legal systems may need to adapt to authenticate evidence effectively and hold AI companies accountable.
A recent survey reveals that while employees have confidence in AI’s abilities in certain areas of leadership, they still prefer the guidance and humanity of human leaders for key aspects. AI excels in strategy and decision-making thanks to its unbiased analysis of data, and employees are willing to trust its expertise in these areas. However, when it comes to understanding human behavior and emotions, and to making personal decisions like hiring and promotions, employees trust human leaders more. Leaders should embrace AI’s strengths while focusing on the emotional and personal aspects of leadership that require transparency, trust, and human connection.