Plus, a new manifesto calling for unfettered AI is released
Leading coding help forum Stack Overflow is laying off 28 percent of its workforce as it seeks to achieve profitability. CEO Prashanth Chandrasekar revealed that the company is reducing the size of its go-to-market organization and supporting teams. The decision follows a period of rapid hiring expansion last year; the company now faces challenges from the rise of AI coding assistance and from integrating AI into its own products.
Mercer’s annual joint report with the CFA Institute highlights the benefits of AI in managing pension funds. AI can help pension fund managers analyze large volumes of data, identify investment opportunities, and personalize marketing efforts. It also enables better asset allocation, environmental/social/governance considerations, and prediction of member behavior. However, challenges remain, including fake or misleading information and cybersecurity risks. Despite these concerns, AI has already been successfully leveraged in investment markets, revolutionizing decision-making processes.
In a major tech development, Chinese tech giant Baidu has launched ERNIE 4.0, the latest version of its artificial intelligence chatbot that rivals OpenAI’s ChatGPT. Baidu claims that the new ERNIE Bot is on par with GPT-4 and demonstrated its capabilities at its flagship event. The chatbot can generate commercials, solve math problems, and create novel plots. While it primarily functions in Mandarin Chinese, it can also respond to queries in English, albeit at a less advanced level. Despite skepticism from investors, Baidu’s ERNIE Bot has quickly gained popularity, attracting over 45 million users.
Over 70 years after Alan Turing proposed the Turing test as a benchmark for measuring AI, no AI system has successfully passed the test as Turing had outlined. While some AI systems have come close, the test itself has limitations and may not effectively measure true intelligence. This article explores the limitations of the Turing test and suggests alternative benchmarks to measure AI’s capabilities. It emphasizes the importance of assessing AI’s performance across various tasks and highlights the need to redefine our understanding of intelligence in the context of AI’s rapid progress.
Silicon Valley billionaire venture capitalist Marc Andreessen has published a manifesto applauding the potential of AI and denouncing efforts to regulate technology. The manifesto argues that AI is the driving force behind material creation, growth, and abundance. Andreessen claims that because AI could save lives, pausing its development is comparable to murder. He advocates for the acceleration of technological progress over addressing negative consequences. While Andreessen’s techno-optimism has garnered attention, critics argue that it overlooks the social and economic harms associated with technology.
A recent study conducted by the Rand Corporation has found that AI models used in chatbots have the potential to assist in planning a biological attack. The research suggests that AI’s ability to bridge knowledge gaps could help terrorists plan and execute such attacks. However, the study also indicates that the AI models did not generate explicit instructions for creating biological weapons. The report emphasizes the need for caution and calls for rigorous testing of these models to prevent misuse.
Meta’s annual Connect conference was dominated by discussions about AI, signaling a shift toward prioritizing AI over the metaverse. The event featured several sessions dedicated to Llama, Meta’s open-source large language model (LLM) and its alternative to competing models like OpenAI’s GPT and Google’s PaLM 2. Meta sees Llama as the potential foundation for the next generation of AI apps, much as Linux became a key component of the modern internet. While Meta is investing heavily in AI, it is taking an open-source approach with Llama, giving the software away to developers for free and focusing on other revenue streams.
Recent research by computer scientists at ETH Zurich reveals that chatbots powered by large language models, including ChatGPT, can accurately infer personal information about users from seemingly innocuous conversations. This capability stems from the models’ training process, which relies on vast amounts of web content that often contains personal information. The researchers warn that scammers and companies could exploit this capability to harvest sensitive data or build detailed user profiles. The findings raise concerns about users unintentionally leaking personal information and the potential implications for privacy.