A group of researchers, including philosophers, neuroscientists, and computer scientists, has proposed a rubric for determining whether an AI system should be considered conscious. The report pulls together elements from several empirical theories of consciousness and suggests a list of measurable indicator properties that might signal some level of consciousness in a machine. These indicators are drawn from theories such as recurrent processing theory, which focuses on the differences between conscious and unconscious perception, as well as theories concerning specialized brain regions and the construction of virtual models of the world.
However, applying the rubric to advanced AI systems, which typically operate as deep neural networks, is challenging because of AI's black box problem: the internal workings of these systems are not interpretable by humans. The report also acknowledges that the proposed indicators do not amount to a definitive checklist for consciousness, and competing theories disagree about its essential components. The study of consciousness in AI is becoming increasingly important as AI advances and becomes more integrated into our daily lives.
Microsoft recently experienced a security lapse that exposed 38 terabytes of private data. The leak originated in the company’s AI GitHub repository: while publishing open-source training data, researchers inadvertently shared a link that made an entire storage account public. The account contained a disk backup of two former employees’ workstations, including passwords, keys, and internal Teams messages. The exposure stemmed from a misconfigured Azure Shared Access Signature (SAS) token that granted overly permissive access to the storage account.
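Misconfigurations like this can often be caught by inspecting a SAS URL's standard query fields: `sp` (signed permissions, e.g. `r` for read, `w` for write) and `se` (signed expiry). Below is a minimal sketch of such an audit; the `audit_sas_url` helper and the example URL are hypothetical, not taken from the incident report.

```python
from urllib.parse import urlparse, parse_qs
from datetime import datetime, timezone

def audit_sas_url(url: str) -> list[str]:
    """Flag risky settings in an Azure SAS URL's query parameters."""
    params = parse_qs(urlparse(url).query)
    warnings = []

    # Permission letters beyond read/list suggest an over-permissive token.
    permissions = params.get("sp", [""])[0]
    risky = set(permissions) - set("rl")  # r = read, l = list
    if risky:
        warnings.append(f"grants more than read/list access: {permissions!r}")

    # An expiry date far in the future means the token is effectively permanent.
    expiry = params.get("se", [""])[0]
    if expiry:
        exp = datetime.fromisoformat(expiry.replace("Z", "+00:00"))
        if (exp - datetime.now(timezone.utc)).days > 365:
            warnings.append(f"expiry more than a year away: {expiry}")

    return warnings

# Hypothetical over-broad token: full permissions, decades-long expiry.
url = ("https://example.blob.core.windows.net/container/data"
       "?sp=racwdl&se=2051-10-01T00:00:00Z&sig=...")
for w in audit_sas_url(url):
    print("WARNING:", w)
```

A scoped token would instead carry read-only permissions (`sp=r`) and a short expiry, limiting the blast radius if the URL leaks.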
Microsoft was alerted to the issue on June 22, 2023, and immediately addressed it. The company’s investigation found no evidence of unauthorized exposure of customer data, and it revoked the SAS token and blocked external access to the storage account. Microsoft also expanded its secret scanning service and identified a bug in its scanning system that generated a false positive alert for the specific SAS URL.
This incident highlights the importance of implementing robust security measures when handling large amounts of data in AI projects. It also serves as a reminder for organizations to regularly review and update their security protocols to prevent similar incidents from occurring in the future.
Google’s Bard AI chatbot can now scan and analyze Gmail, Docs, and Drive to help users find information. The integration allows users to ask Bard to find and summarize emails, highlight important points in documents, and perform various tasks based on the information it finds. This feature is currently available only in English. While privacy and data usage concerns may arise, Google assures users that their personal information will not be used to train Bard’s public model or be seen by human reviewers.
Users must opt in, and can disable the integration at any time. In addition to Gmail, Docs, and Drive, Bard will also connect with Maps, YouTube, and Google Flights, enabling users to access real-time flight information, find nearby attractions, and surface YouTube videos. Bard’s improvements include a “Google It” button that double-checks its answers by comparing them to Google Search results. Google plans to expand Bard’s integrations to more Google products and external partners.
Google has updated its guidelines to acknowledge using artificial intelligence (AI) in content creation. The phrase “written by people” has been replaced with “content created for people” to affirm that Google recognizes AI as a tool in content creation. The update reflects Google’s strategic investment in AI, including an AI-powered news generator service and experimental search features. The company still aims to reward valuable content that benefits users, regardless of whether humans or machines produce it.
However, repetitive or low-quality AI content can still hurt search engine optimization (SEO), which aims to improve a website’s rankings. Writers and editors must remain involved in content creation to avoid the errors and risks associated with AI-generated content. Google has ways of detecting AI-generated content, but detection tools are imprecise and AI models are trained to “appear” human. The standards for content creation will continue to evolve as AI becomes more prevalent.
Pedophiles on the dark web are using open-source AI technology to generate child sexual abuse material (CSAM) and distribute it online, according to the Internet Watch Foundation (IWF). Users can download and adjust open-source AI models, making it easier for offenders to create and distribute realistic images. The IWF has found that offenders share AI models, imagery, and guides on dark web forums. Offenders often use images of celebrity children, publicly available images of children, and images of child abuse victims to create new content.
The use of AI in generating CSAM poses a significant challenge for law enforcement and online watchdogs, as the technology is more difficult to detect and track. Experts are calling on lawmakers to strengthen laws against the production, distribution, and possession of AI-based CSAM and close loopholes in existing legislation. They emphasize that Congress must play an aggressive role in protecting children and the internet from the dangers posed by AI-generated CSAM.
Ray Dalio, the billionaire founder of Bridgewater Associates, believes that artificial intelligence (AI) could lead to a three-day workweek if managed well. At the Milken Institute’s Asia Summit, Dalio identified five significant disruptions that he believes will transform the world: debt creation, political conflict, a changing world order, climate change, and technological breakthroughs like AI. He described AI as a transformative power similar to nuclear energy but more powerful, predicted that its impact on productivity could be “mind-blowing,” and raised the possibility of AI-equipped robots reducing the workweek to three days or less.
However, he also warned that without intervention, only a portion of society would benefit from these changes, exacerbating wealth inequality. He called on policymakers to consider how AI’s benefits are shared and suggested reforms to address wealth inequality. Dalio believes a bipartisan approach is necessary to achieve these reforms but expressed doubt that one is possible in the current political climate. On investment, Dalio advised looking beyond the companies directly driving the shift and focusing on those best at applying the new technologies. He also suggested that cash is a good place to be temporarily, given uncertainty around debt.
Former Google employee Jacob Steeves has launched an open-source AI protocol called Bittensor to challenge the AI technology “monopoly” held by tech giants like Google. Steeves argues that major corporations holding exclusive rights to AI technologies is dangerous and believes that making AI technology freely available is a better approach. Unlike platforms such as OpenAI or Google’s Gemini, Bittensor is decentralized and permissionless, allowing developers to build incentive-driven compute systems and create a global, distributed machine learning system.
Steeves warns of the potential for AI technology to be used unethically by consolidating power in the hands of a few corporations. He criticizes OpenAI’s focus on raising large sums of money and creating an exclusive group of AI controllers. Steeves also critiques the idea of federal licensing requirements for AI development, calling it anti-competitive and potentially harmful to the open-source community. He believes that Congress will struggle to regulate AI effectively and warns against the potential damage over-regulation could cause to the industry.