Plus, who gets to build AGI?
Sam Altman has agreed to return as CEO of OpenAI after his sudden dismissal sparked a major employee revolt. OpenAI announced that Altman will resume his position under a new board chaired by Bret Taylor. The turmoil surrounding Altman’s ouster and re-hiring centered on differences of opinion over the pace of AI development: Altman, who favored an aggressive approach, clashed with members of the original board who advocated a more cautious strategy. Microsoft, OpenAI’s biggest financial backer, played a significant role in Altman’s return and has strengthened its influence over the company. Altman’s vision of rapid AI commercialization has prevailed, despite his public concerns about the risks associated with the technology.
OpenAI’s recent meltdown has sparked a fierce debate about how to develop artificial general intelligence (AGI) for the benefit of humanity. Founded as a non-profit in 2015, OpenAI aimed to build safe and beneficial AGI. However, as the capital-intensive nature of AI research became apparent, OpenAI restructured in 2019, creating a for-profit "capped-profit" subsidiary under the non-profit’s control in order to raise investment. Critics argue that the company has since deviated from its founding principles and become profit-driven.
Amazon is laying off employees in its devices and services division, with the team working on Alexa being the most affected. These layoffs are a result of discontinued initiatives as Amazon shifts its focus towards generative AI products. Although Amazon dominates the smart speaker market in the US, Alexa has struggled to generate profit for the company. The hardware is sold at cost, and users primarily interact with Alexa for basic tasks rather than making purchases.
Researchers are exploring the potential of smaller AI models, such as Microsoft’s phi-1.5 and phi-2, which exhibit many of the traits of larger language models while requiring far fewer resources. Large language models (LLMs) like GPT-3.5 and GPT-4 offer immense capabilities but are energy-intensive and expensive to develop and run. Smaller models offer sustainability benefits by reducing energy consumption and democratizing AI development. They can also be embedded in personal devices, enhancing privacy and enabling applications like chat interfaces for smart appliances. Beyond efficiency, research on smaller models also bears on interpretability and on modeling human cognition, hinting that surprisingly few ingredients may be required for intelligent behavior to emerge.
Matthew Butterick, an unlikely lawyer with a background in design and programming, is leading the charge in the first wave of class-action lawsuits against big AI companies. Butterick is fighting to ensure that writers, artists, and other creatives have control over how their work is used by AI. The lawsuits, filed against companies such as GitHub and OpenAI, allege that AI companies violate open-source licensing agreements and infringe upon the rights of creators by training on their work without consent. These lawsuits are challenging the momentum of the generative AI movement and have sparked a wider backlash against the use of AI in creative industries.
French startup Osium AI is leveraging AI to accelerate research and development in materials science. Focusing on tightening the feedback loop between materials formulation and testing, Osium AI uses proprietary technology to predict the physical properties of new materials, refine and optimize them, and minimize trial and error. This approach offers significant potential for industries including construction, packaging, aeronautics, aerospace, textiles, and smartphones.
A new paper by Anthony Chemero from the University of Cincinnati challenges the notion that AI, particularly systems like ChatGPT, possesses true intelligence comparable to human thinking. Chemero argues that the fundamental difference lies in the lack of embodiment and understanding in AI, which prevents it from truly caring about the world or sharing human concerns. While AI technologies like ChatGPT are capable of generating impressive text, they lack the ability to comprehend the meaning behind their own statements. The paper emphasizes that AI’s limitations stem from its detachment from the physical world and the absence of genuine care and commitment exhibited by human intelligence.
Tech executives are raising concerns about the consolidation of power in the hands of a few tech giants as AI develops. The popularity of OpenAI’s ChatGPT has triggered an AI arms race among major tech companies, whose large-scale models require significant computing power and are trained on massive amounts of data. Critics like Meredith Whittaker, president of Signal, and Frank McCourt of Project Liberty argue that this concentration gives a handful of corporations excessive control over institutions and people’s lives. They fear that AI and the technologies derived from it reinforce centralized power structures in the U.S. and China, threatening user privacy and societal well-being. However, proponents like Wikipedia’s Jimmy Wales suggest that open-source models have the potential to disrupt the current concentration of AI power.
According to a study from the University of Cincinnati, the perception of AI intelligence is clouded by linguistic confusion. While AI language models like ChatGPT can generate impressive text, they lack true understanding and consciousness. Unlike humans, AI lacks embodied experiences and emotions, making it fundamentally different from human intelligence. The study highlights that the ability of AI to generate text, even if it includes “making things up” or “hallucinating,” does not equate to genuine human intelligence. It also warns that AI’s generation of text can amplify biases and produce harmful content without awareness. The study emphasizes that AI, while useful, lacks essential human elements such as caring, survival, and concern for the world.