If AI Becomes Conscious: Here’s How Researchers Will Know
A group of neuroscientists, philosophers, and computer scientists has published a provisional guide for assessing whether an AI system has achieved consciousness. The team developed a checklist of criteria that, if met, would suggest a system is highly likely to be conscious. They undertook the effort because the question has received too little rigorous discussion: failing to identify consciousness in an AI system would have moral implications, since it bears on how such an entity should be treated. The researchers argue that companies building advanced AI systems should evaluate their models for consciousness and plan accordingly.
The guide focuses on phenomenal consciousness: the subjective experience of being. The authors avoided behavioral tests because AI systems can convincingly mimic human behavior. Instead, they adopted a theory-heavy approach, assuming that consciousness is related to information processing and can therefore be assessed against neuroscience-based theories. The paper is not a final determination, however, and the authors invite other researchers to help refine the methodology. Although the criteria can be applied to existing AI systems, no current system is considered a strong candidate for consciousness.
Meta launches own AI code-writing tool: Code Llama.
Meta recently launched a new AI code-writing tool called Code Llama. The tool is built on top of Meta’s Llama 2 large language model and is designed to generate and debug code. Code Llama can generate code from natural-language prompts, or complete and debug code when given an existing snippet. Meta has also released specialized versions, including Code Llama-Python and Code Llama-Instruct, which can understand instructions in natural language.
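The article itself includes no code, but as a rough sketch of how a developer might prompt a Code Llama model for completion via the Hugging Face transformers library (the model ID, prompt, and generation settings below are illustrative assumptions, not details from Meta’s announcement):

```python
# Minimal sketch: prompting a Code Llama variant for code completion
# with Hugging Face transformers. Model ID and settings are assumed
# for illustration only.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "codellama/CodeLlama-7b-Python-hf"  # assumed model ID
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Give the model a function signature and docstring to complete.
prompt = 'def fibonacci(n):\n    """Return the n-th Fibonacci number."""\n'
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

In this pattern the model continues the code it is given, which is how completion-style code generators are typically used; the Instruct variant is instead prompted with a natural-language request.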
Meta claims that Code Llama outperformed publicly available language models in benchmark testing, scoring 53.7% on the HumanEval code benchmark. The company is releasing Code Llama in three sizes, with the smallest suited to low-latency projects. Code generators have become increasingly popular among developers, with companies such as GitHub and Amazon offering similar tools; Google’s DeepMind has also built a code-writing system called AlphaCode.
In Reversal Because of A.I., Office Jobs Are Now More at Risk
The jobs most at risk of automation have shifted from blue-collar to white-collar positions. The rise of large language models, a type of AI, challenges the belief that creativity and high-level cognitive skills would shield certain jobs from automation. Research has shown that these models can assist with tasks in one-fifth to one-quarter of occupations, and that in most jobs they can perform at least some tasks. While large language models may not fully replace workers, there is still concern that they could displace people in certain roles.
However, capabilities such as social skills, teamwork, care work, and certain trade skills remain difficult to automate. Large language models are expected to increase worker productivity rather than replace workers entirely; the effect is likened to giving office workers assistants or research aides. These models may benefit junior employees the most, and they could also reduce income inequality by opening access to elite, well-paid jobs for people with less formal education.
Google, Amazon, Nvidia, and others put $235 million into Hugging Face.
Open-source AI model repository Hugging Face has raised $235 million in a Series D funding round, with investments from Google, Amazon, Nvidia, and Salesforce. Hugging Face, often compared to GitHub as a host for AI models, code, and datasets, is now valued at $4.5 billion. The investment reflects growing demand for access to AI models: the backers all have significant stakes in generative AI, whether in the models themselves or in the processors that run them.
Hugging Face CEO Clement Delangue says the funding will be used to expand the team and to invest further in open-source AI and platform development. He emphasizes the importance of the open-source community to the generative AI field, as companies increasingly want to build AI themselves and rely on open-source resources to do so. While open source is sometimes misunderstood, the generative AI sector has a robust open-source community, with major AI companies releasing their models with varying degrees of openness. The funding round is seen as validation for Hugging Face and for the broader open-source AI ecosystem.
AI Providers Begin To Explore New Terrain: Chatbots In Salary Negotiations
Pactum AI, the largest provider of automated procurement negotiations, has been using AI chatbots for salary negotiations since 2021. Ironclad, a startup backed by Sequoia and Accel, is also set to launch an AI agent specializing in employment contract analysis. Using AI chatbots in salary negotiations could improve a process that is often unsatisfying and nerve-wracking for workers and time-consuming for employers. AI could also broaden the scope of workplace negotiation to better reflect workers’ priorities.
People who typically do not negotiate their salary or those who seek better options beyond salary could benefit from dealing with a chatbot. However, there are concerns that AI could disadvantage workers and compromise individual privacy and commercial secrets. Pactum CEO Martin Rand explained that the chatbot has helped achieve gender-balanced compensation and given employees multiple equivalent offers. Rand also suggested a future where bots constantly adjust benefits as worker priorities change. Other players in labor negotiations, including unions and tech startups, see opportunities in using AI to assist in traditional contract talks but emphasize the importance of human input and relationships in successful negotiations.
ChatGPT Has ‘Systematic Political Bias’ Towards The Left.
A study by professors at the University of East Anglia found that the AI language model ChatGPT displays a systematic political bias towards liberal political parties. The researchers noted concerns that this bias could influence students’ political views. The study found evidence of bias towards the Democratic Party in the US and towards liberal parties in Brazil and the United Kingdom. The authors argue the findings have implications for stakeholders in policymaking, the media, politics, and academia. Beyond political bias, the discussion surrounding ChatGPT also covers the challenges AI poses for academic integrity.
Some professors have taken measures to counter potential cheating, such as administering tests and assignments in person, disabling Wi-Fi during exams, and asking students to self-report their use of AI tools. However, professors differ on how significantly AI will change the academic landscape, with some arguing that while it poses challenges, it does not amount to a radical change.