Plus More AI Regulation In The Works:
OpenAI boss Sam Altman says a breakthrough in nuclear fusion technology will be needed to meet the growing energy demands of artificial intelligence. The popular generative AI tool ChatGPT currently handles millions of queries daily, consuming around 1 GWh of energy per day, and as tools like it improve and gain popularity, energy consumption is expected to rise significantly. Nuclear fusion, which replicates the energy-producing reactions of the Sun, is seen as a potential source of abundant, clean energy. Despite recent breakthroughs, however, commercial fusion power remains a distant goal. Altman has invested $375 million in Helion Energy, a US-based nuclear fusion firm aiming to produce commercial-scale electricity by 2028.
Satya Nadella, CEO of Microsoft, emphasized the need for global coordination and consensus on regulating AI during a talk at the World Economic Forum in Davos. Nadella believes that while regulatory approaches may vary across jurisdictions, countries are grappling with broadly similar questions about AI. He stressed the importance of establishing global norms and standards to effectively manage the challenges the technology poses.
China’s new AI rules, expected in 2024, are likely to have major implications. The Chinese government is considering a comprehensive AI law, similar to the AI Act introduced by the European Union, though experts say it is unlikely to be finalized or take effect this year. One challenge is defining what constitutes AI and whether a single law can cover all of its applications. In parallel, the Chinese Academy of Social Sciences is drafting a “negative list” specifying areas and products that AI companies should avoid without government approval. Evaluating AI models and clarifying copyright issues around generative AI are also on the agenda. Chinese authorities are expected to maintain their relatively lenient approach toward AI companies, given the country’s focus on encouraging the sector’s growth.
Former OpenAI executive Zack Kass believes that AI has the potential to be the last technology humans ever invent. In an interview, Kass explained that AI could propel human progress at an exceptional rate, leading to more fulfilling and joyful lives with fewer global problems. He also addressed the risks associated with AI, including “idiocracy” and bad actors. The most significant risk, Kass said, is the alignment problem: whether AI can be trained to act in line with human interests. Despite the risks, Kass is confident these issues can be solved, given the widespread interest in addressing them.
According to a recent survey on progress in AI, experts in the field are increasingly concerned about the rapid pace at which the technology is advancing. The survey, conducted by researchers at AI Impacts, the University of Bonn, and the University of Oxford, polled 2,778 authors who have published work in top industry publications and forums. It found that, if the current trajectory of AI development continues, there is a 10% chance that unaided machines will outperform humans in every task by 2027 and a 50% chance by 2047. Respondents also predicted a 10% chance that all human occupations will become fully automatable by 2037, and many believed advanced AI could lead to “severe disempowerment” or even the extinction of the human race. The survey highlights the high stakes involved in AI development and deployment and the need to prioritize AI safety research.
The rise of AI-generated modeling agencies is alarming many models, who are discovering they do not hold the rights to images of their own bodies. AI technology is being used to create digital models for advertising and promotions without the consent of the people they are based on. With no proper regulations in place, companies can create and deploy such AI models with little oversight. In some cases, models are being body-scanned and losing the rights to images of their bodies in the process.
Australia is considering mandatory restrictions on AI development to address the technology’s potential risks. Minister for Industry and Science Ed Husic plans to assemble a panel of experts to evaluate options for regulating AI use and research. This may result in voluntary safety standards for low-risk applications and the introduction of watermarks for AI-generated content. While the focus is on managing high-risk AI to ensure public safety, Australia aims to let low-risk AI applications flourish. The government intends to begin work on AI regulations promptly, but has not given a timeline for implementation. Australia was among the signatories of the Bletchley Declaration, which commits nations to collaborate on AI testing and regulation.
A new report from ComplyAdvantage reveals that while financial institutions are turning to AI to combat rising fraud threats, customers remain uncomfortable with its use. Two-thirds of respondents in the financial industry see AI as driving increased risk through cyberattacks and deepfakes. Despite this, around half of organizations have yet to prioritize explaining their use of AI to customers. The survey also found that 23% of customers have been victims of fraud in the past three years, yet only 40% are comfortable with banks using AI to fight financial crime. The report suggests that banks need to communicate better with customers about how AI is used to safeguard their finances. Notably, 65% of consumers are open to banks sharing their transactional details with other institutions to identify fraud patterns.
Amazon Launches Generative AI Shopping Tool to Answer Shoppers’ Questions and Respond to Creative Prompts
Amazon has unveiled a new generative AI tool that aims to improve the shopping experience by answering shoppers’ questions about specific products. The tool, currently available only on Amazon’s mobile app, collates information from product reviews and listings to deliver quick, concise answers to queries. The feature also responds to creative prompts, such as writing product descriptions in the style of Yoda from Star Wars or generating haiku about items. However, it is not designed to hold full conversations and returns an error message when asked non-product-related questions. This latest AI release from Amazon follows earlier launches such as Amazon Q, a generative AI assistant for the workplace, and AI features that help third-party sellers generate marketing materials. Amazon CEO Andy Jassy, who previously led Amazon Web Services, envisions generative AI revolutionizing customer experiences and making AI more accessible to developers and business users.
Generative AI tools like OpenAI’s ChatGPT are revolutionizing content creation with their ability to mimic human writing. A recent study found that ChatGPT outperformed real doctors in providing quality, empathetic responses to patient questions, and OpenAI claims its model can score in the top 10% on major exams. Some schools are now allowing students to use AI in coursework. However, generative AI has limitations: it lacks the logical development and factual grounding of human-written content. In the legal sector, two lawyers were sanctioned for submitting a brief citing fictional cases generated by ChatGPT. Such incidents raise questions about what happens to human competence when logical argument and fact verification are bypassed. Outsourcing work to AI may impede learning and innovation in the long run. Organizations should establish guidelines for AI tool usage based on their objectives and how tasks depend on one another, and provide training opportunities to maintain human proficiency and learning. In education, the focus should be on preserving students’ ability to reason and build knowledge.