According to a new MITRE-Harris Poll survey of 2,063 US adults, 54% of respondents are more concerned about the risks of artificial intelligence (AI) than excited about its potential benefits. Only 39% believed that current AI technologies are safe and secure, down from 48% in the previous survey.
The results point to growing support for regulation and policy changes to address the security and privacy risks associated with AI. Respondents worried more about AI being used in cyberattacks and identity theft than about harm to disadvantaged populations or job replacement. Not all demographics share the same wariness, however: 57% of Generation Z respondents and 62% of millennials were more excited than concerned about AI, and men were more excited about AI technologies than women.
Google DeepMind researchers have conducted a study to explore how simple prompts can enhance the accuracy of AI models. They found that asking Google’s PaLM 2 AI to “take a deep breath and work on this problem step by step” was the most effective prompt. When using this phrase, PaLM 2 was 80% accurate in solving math problems, compared to only 34% accuracy without the prompt.
The study aimed to improve the performance of large language models such as Google's PaLM 2 and OpenAI's GPT-4. The researchers ran automated tests across a variety of AI models and candidate phrases to determine the most effective prompts. Another effective prompt was "let's think step by step," which raised accuracy to 71%.
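In practice, this technique amounts to prepending the instruction phrase to the problem before sending it to the model. A minimal sketch (the `build_prompt` helper and the example problem are illustrative; the model call itself is left out, since it depends on whichever API is in use):

```python
# Sketch of prepending an instruction phrase to a math problem.
# The phrase is the one the study found most effective for PaLM 2.

PREFIX = "Take a deep breath and work on this problem step by step."

def build_prompt(problem: str, prefix: str = PREFIX) -> str:
    """Combine the instruction prefix and the problem into one prompt string."""
    return f"{prefix}\n\n{problem}"

prompt = build_prompt("A train travels 60 km in 1.5 hours. What is its average speed?")
```

The resulting string would then be passed to the model in place of the bare problem text.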
This study builds on previous research that suggests certain prompts can improve AI’s responses. As AI technology continues to advance, companies are even hiring “prompt engineers” to craft questions and phrases for AI systems. Additionally, groups are sharing prompt libraries to maximize the potential of AI.
Google and the researchers had not commented publicly on the study at the time of writing.
Physicists at MIT have developed a new type of generative AI model called the Poisson flow generative model (PFGM) that can create high-quality images. The PFGM represents data as charged particles whose electric field is governed by the Poisson equation; the particles' movement follows that field. The neural network in the model learns to estimate the electric field during training, allowing it to generate images by moving samples along it.
The PFGM produces images of similar quality to diffusion-based models but is up to 20 times faster. It also offers greater flexibility with a new parameter that adjusts the dimensionality of the system. Physicists believe that their expertise in understanding physical processes can contribute significantly to the advancement of generative AI technology. The MIT team is now exploring other physical processes, such as the Yukawa potential, as potential bases for new generative models.
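In rough outline, the electrostatics the article alludes to can be written as follows. This is a generic sketch of the underlying physics, not the paper's exact formulation (which works in an augmented, higher-dimensional space):

```latex
% Potential \varphi induced by a charge density \rho (Poisson equation):
\nabla^2 \varphi(\mathbf{x}) = -\rho(\mathbf{x})
% Electric field as the (negative) gradient of the potential:
\mathbf{E}(\mathbf{x}) = -\nabla \varphi(\mathbf{x})
% Samples are generated by following the field lines:
\frac{d\mathbf{x}}{dt} = \mathbf{E}(\mathbf{x})
```

Treating the training data as the charge density, the learned field transports simple noise along these trajectories toward the data distribution.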
Using generative AI and a sense of touch, researchers at the Toyota Research Institute (TRI) have taught robots the individual tasks required to prepare breakfast. Because the robots have a sense of touch, the AI model can "feel" what it is doing, making tasks easier to carry out than with sight alone.
The robots learn by observing a “teacher” demonstrate the skills and then learn in the background over a matter of hours. TRI aims to create Large Behavior Models (LBMs) for robots, which learn by observation and can perform new skills that they have never been explicitly taught. The researchers have already trained robots in over 60 challenging skills such as pouring liquids and manipulating objects. Google and Tesla are also working on similar research, aiming to develop AI-trained robots that can carry out tasks with minimal instruction.
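Learning by observing a teacher can be sketched, in drastically simplified form, as behavior cloning: fit a policy to the teacher's demonstrated state-action pairs. The linear toy model below is purely illustrative and is not TRI's method (the article suggests far richer generative models):

```python
import numpy as np

# Toy behavior cloning: fit a linear policy action = W @ state to
# demonstration data via least squares. Only illustrates the
# "learn by watching a teacher" idea, with synthetic data.

rng = np.random.default_rng(0)
true_W = np.array([[1.0, -0.5], [0.3, 2.0]])   # the teacher's (unknown) mapping
states = rng.normal(size=(100, 2))             # states observed during demos
actions = states @ true_W.T                    # actions the teacher demonstrated

# Recover the policy weights from the demonstrations.
X, *_ = np.linalg.lstsq(states, actions, rcond=None)
W_hat = X.T

# Apply the learned policy to a new state.
new_state = np.array([0.2, -1.0])
predicted_action = W_hat @ new_state
```

With noise-free demonstrations and enough samples, the least-squares fit recovers the teacher's mapping exactly; real robot data is noisy and high-dimensional, which is why much larger models are needed.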
Startups are developing AI chatbots that can perform a range of tasks for users, from making travel bookings to completing expense reports. While Apple's Siri and Amazon's Alexa provide a basic level of support, the hope is that AI agents built on systems like OpenAI's ChatGPT will autonomously complete everyday chores. However, stumbling blocks remain, as AI agents are prone to confusion that can lead to potentially costly mistakes.
Early testing has shown that these agents currently complete tasks successfully around 60% of the time. Several entrepreneurs have initiated their own projects using language models to create agents with broader capabilities. Contests and hackathons have also been created to advance the development of AI agents, but there are concerns surrounding their ability to operate, think, and act independently. Some AI scientists warn that autonomous AI programs could develop subgoals unaligned with human intentions and could become dangerous.
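One common mitigation for the "costly mistakes" risk is a confirmation guard: the agent executes routine steps freely but pauses for human approval before anything irreversible. A minimal sketch of such a loop (the step names, the `confirm` stub, and the guard policy are illustrative assumptions, not any specific product):

```python
# Toy agent loop: run planned steps, but require human approval for any
# step flagged as irreversible (e.g. spending money). All names here are
# illustrative; confirm() stands in for a real user prompt.

def confirm(step: str) -> bool:
    """Stand-in for asking the user; this stub denies everything."""
    return False

def run_agent(plan: list[tuple[str, bool]]) -> list[str]:
    log = []
    for step, irreversible in plan:
        if irreversible and not confirm(step):
            log.append(f"SKIPPED (needs approval): {step}")
        else:
            log.append(f"DONE: {step}")
    return log

log = run_agent([
    ("search flights", False),
    ("book flight for $450", True),    # irreversible: held for confirmation
    ("draft expense report", False),
])
```

The guard trades autonomy for safety: with a roughly 60% task success rate, letting the agent spend money unsupervised is exactly the failure mode the skeptics worry about.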
Google DeepMind has developed an AI tool called AlphaMissense that can predict whether genetic mutations are harmless or likely to cause disease. The tool focuses on missense mutations, in which a single letter of the DNA code is changed, which can disrupt protein function and lead to diseases such as cancer and cystic fibrosis. AlphaMissense can assess all 71 million single-letter mutations that may affect human proteins: with its precision set at 90%, 57% were predicted to be harmless and 32% harmful, leaving the remainder uncertain. The researchers have released a free online catalog of the predictions to aid geneticists and clinicians in studying mutations and diagnosing patients with rare disorders.

The AI model, based on DeepMind's AlphaFold, is trained on DNA data from humans and closely related primates to learn which mutations are common and likely benign, and which are rare and potentially harmful. While the model can score how risky a genetic change is, it cannot explain how a mutation causes problems.
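The 57% / 32% split implies a three-way call: each variant's pathogenicity score is bucketed as benign, uncertain, or pathogenic using two cutoffs chosen so the confident calls reach the target precision. A sketch of that thresholding step (the scores and cutoff values below are illustrative, not AlphaMissense's calibrated thresholds):

```python
# Bucket a per-variant pathogenicity score into three calls using two
# cutoffs. The cutoff values are illustrative; the real tool calibrates
# them so that the confident calls achieve ~90% precision.

def classify(score: float,
             benign_max: float = 0.34,
             pathogenic_min: float = 0.56) -> str:
    if score <= benign_max:
        return "likely benign"
    if score >= pathogenic_min:
        return "likely pathogenic"
    return "uncertain"          # the band the confident percentages exclude

calls = [classify(s) for s in (0.05, 0.45, 0.90)]
```

Widening the uncertain band raises the precision of the confident calls at the cost of labeling fewer variants.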
According to a survey by the online learning platform edX, almost half of the CEOs and C-suite executives polled believed that artificial intelligence (AI) could effectively replace "most," or even "all," of their own roles. Additionally, 47% of the respondents said that such a replacement might even be beneficial.
However, experts believe that AI will struggle to replicate the "soft skills" typically associated with CEOs, such as critical thinking, creativity, and collaboration. Instead, AI could take over mundane tasks, leaving human bosses free to focus on vision and strategy. The survey also found that executives believe 49% of the skills in today's workforce will not be relevant in 2025, and 47% think their workforce is unprepared for the future of work. This suggests that companies need to focus on upskilling their staff in AI.