Plus AI discovers over 700 new materials
OpenAI’s ChatGPT, a powerful and ever-evolving generative AI application, has come a long way since its launch just a year ago. Originally limited to text-based responses, ChatGPT can now work with images, documents, and spoken input. OpenAI has also introduced custom GPTs that cater to specific needs, such as website creation and task automation. Generative AI has found applications across domains including law, medicine, and climate change. While challenges remain in applying generic models to specific tasks, the prospect of AI engines interacting with one another opens up new opportunities as well as new risks.
Google DeepMind has developed a new AI tool called GNoME that uses deep learning to speed up the discovery and development of new materials. GNoME has already predicted structures for 2.2 million new materials, more than 700 of which have since been synthesized in the lab. This breakthrough could transform industries such as energy and computing by enabling faster, more efficient hardware innovation. Lawrence Berkeley National Laboratory has also introduced an autonomous lab that uses machine learning and robotics to engineer new materials. Together, these advances offer immense opportunities for accelerating technological breakthroughs and addressing global challenges.
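The article does not describe GNoME’s internals, but the workflow it implies (predict a stability score for millions of candidate structures, then hand only the promising ones to the lab) can be sketched in a few lines. The candidate names, energies, and threshold below are made up for illustration and are not GNoME data:

```python
def screen_candidates(predictions: dict[str, float],
                      stability_threshold: float = 0.0) -> list[str]:
    """Return candidate IDs whose predicted energy above the convex hull
    (eV/atom) is at or below the threshold, most stable first.

    A negative value means the model predicts the structure is more stable
    than any known combination of competing phases.
    """
    stable = [(e, cid) for cid, e in predictions.items()
              if e <= stability_threshold]
    return [cid for e, cid in sorted(stable)]

# Hypothetical model outputs, keyed by a made-up candidate ID.
predictions = {"Li7P3S11": -0.02, "NaCl2": 0.35, "Mg2SiO4": -0.10}
print(screen_candidates(predictions))  # → ['Mg2SiO4', 'Li7P3S11']
```

In a real pipeline the dictionary would hold millions of entries and the survivors would feed an autonomous lab’s synthesis queue, as the Berkeley Lab work suggests.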
Amazon has launched its AI-powered image generator, joining other tech giants and startups in the field. Known as Titan Image Generator, the tool lets users create new images from text descriptions or customize existing ones. It was trained on diverse datasets from various domains and offers built-in mitigations for toxicity and bias. Amazon has also included a watermarking feature to help combat the spread of AI-generated misinformation and abuse imagery.
Researchers have been using AI to teach computers to read handwritten documents. By analyzing images of handwriting and learning to recognize patterns, AI systems can accurately determine the content of handwritten texts. This has significant consequences for access to historical knowledge, as it allows manuscripts to be digitized and translated far faster than before. The AI’s ability to recognize widely varying letter forms also opens up texts whose study was previously limited by the slow pace of manual transcription.
A recent study by a team led by RIKEN reveals that the inflated confidence we experience in decision-making, often regarded as irrational, is not unique to humans. The researchers found that an AI model displayed the same overconfident tendencies as humans do. The study suggests that our sense of confidence may stem from subtle observational cues, even in situations where the evidence does not support such high confidence. This discovery challenges the long-held assumption that humans are purely rational decision-makers and highlights the role of noise structure in shaping confidence. The model’s ability to replicate human biases offers valuable insight into the intricacies of human decision-making.
The Smart Aid Kit, a cutting-edge concept developed by design studios Map Project Office and Modem, combines AI with medical sensors to offer remote diagnostics and healthcare management. The kit includes four sensors, resembling a stethoscope, spirometer, ophthalmoscope, and skin scanner, that collect data on various health conditions. The gathered information is then sent to a central unit equipped with an AI system, which diagnoses conditions and provides instant feedback. This approach aims to address healthcare disparities, particularly in underprivileged areas with limited access to healthcare professionals. The kit’s user-friendly design and simple instructions make it accessible to non-professionals, enabling individuals and communities to take control of their healthcare remotely.
A recent report by Snyk reveals that 96% of developers are using AI tools, despite being aware of the potential security risks. Almost 80% of developers bypass security policies to use AI, and more than half rely on generative AI tools even though these often produce insecure code. Alarmingly, only a small percentage of development teams automate security processes to protect their code, leaving a significant security gap. The report also highlights a worrying trend: developers place too much trust in AI coding suggestions and undervalue their own skills, despite evidence that AI systems often make insecure recommendations. To mitigate these risks, organizations should educate their teams while using industry-approved security tools to secure AI-generated code.
Nvidia CEO Jensen Huang predicts that AI will become “fairly competitive” with human intelligence within the next five years. Huang believes that the emergence of off-the-shelf AI tools will allow companies to tailor AI systems to their specific needs, from chip design and software creation to drug discovery and radiology. Nvidia, known for its high-powered graphics processing units (GPUs), has seen a surge in revenue due to the growing demand for AI training and heavy workloads across industries. However, Huang notes that artificial general intelligence (AGI) still requires advances in multistep reasoning, which companies worldwide are actively researching.
AI is rapidly transforming the healthcare industry, with potential benefits including improved diagnoses and wider access to care. However, consumers’ acceptance of AI in healthcare is conditional, and they have serious concerns: they expect AI tools to work as well as or better than human doctors, worry about unclear responsibility for errors caused by AI systems, fear that AI may worsen existing health inequities, and are concerned about the dehumanization and deskilling of healthcare. Shaping the future use of AI in healthcare therefore requires weighing social and ethical considerations and engaging consumers and communities in decision-making. Healthcare professionals also need to be upskilled so they can be critical users of digital health tools. Only by taking consumer perspectives seriously and ensuring ethical use of AI will we achieve the AI-enabled healthcare system we want.
University of California, Santa Cruz Professor Alex Pang and his team have developed an AI system to identify rip currents, which are dangerous channels of fast-moving water. Inspired by the tragic deaths of UCSC and UCSD students, Pang set out to build a tool that could prevent such incidents. The system applies machine learning to camera video, analyzing wave patterns for signs of rip currents and marking each detection with a red box so that lifeguards and beachgoers alike can spot these hidden threats. The team also plans to develop a user-friendly app for everyday use and hopes AI can be applied to other areas of study, such as coastal erosion and rising sea levels. The ultimate goal is to save lives.
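The article does not detail Pang’s model, but the red-box overlay it describes is the standard way of presenting object-detection output. A minimal sketch of that final drawing step, with the box format and all names assumed for illustration (not taken from the UCSC system):

```python
import numpy as np

def draw_detections(frame: np.ndarray,
                    boxes: list[tuple[float, float, float, float]],
                    thickness: int = 2) -> np.ndarray:
    """Overlay red outline boxes on an RGB frame.

    `boxes` holds normalized (x_min, y_min, x_max, y_max) coordinates in
    [0, 1], a common output format for detection models. Returns a copy,
    leaving the original frame untouched.
    """
    out = frame.copy()
    h, w = frame.shape[:2]
    red = np.array([255, 0, 0], dtype=frame.dtype)
    for x0, y0, x1, y1 in boxes:
        # Convert normalized coordinates to pixel indices.
        px0, py0 = int(x0 * w), int(y0 * h)
        px1, py1 = int(x1 * w), int(y1 * h)
        out[py0:py0 + thickness, px0:px1] = red   # top edge
        out[py1 - thickness:py1, px0:px1] = red   # bottom edge
        out[py0:py1, px0:px0 + thickness] = red   # left edge
        out[py0:py1, px1 - thickness:px1] = red   # right edge
    return out

# Example: mark one hypothetical detection on a blank 100x100 frame.
frame = np.zeros((100, 100, 3), dtype=np.uint8)
marked = draw_detections(frame, [(0.2, 0.3, 0.8, 0.9)])
```

In a live system this function would run on each video frame after the detector, producing exactly the kind of annotated feed the article describes for lifeguards.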