Plus EU works late to nail down AI regulation draft.
Google’s highly anticipated generative AI model, Gemini, has finally made its grand debut, but it seems the expectations were too lofty. The Gemini family comprises three tiers: Gemini Ultra, Gemini Pro, and Gemini Nano. The mid-tier Gemini Pro is available now, while the most capable model, Gemini Ultra, will arrive next year. Gemini Pro is being integrated into Google’s ChatGPT competitor, Bard, offering improved reasoning and planning capabilities. Gemini Nano, the lightweight model, will run on the Pixel 8 Pro, powering features such as summarization and suggested replies. Gemini Pro has outperformed OpenAI’s GPT-3.5 on multiple benchmarks, and Gemini Ultra has shown exceptional performance in early tests. However, further testing and feedback are required before Ultra’s release to select customers and experts.
European Union lawmakers and governments have reached provisional terms for regulating AI systems, including ChatGPT, as negotiations continued into a second day. The agreement includes the European Commission maintaining a list of AI models deemed to pose a “systemic risk” and general-purpose AI providers publishing detailed summaries of the content used to train them. The law may also exempt free and open-source AI licenses from regulation in most cases. Talks are still ongoing regarding the use of AI in biometric surveillance and source code access. The EU is aiming to finalize the draft rules for a vote in the spring, potentially becoming a blueprint for AI regulations worldwide.
Meta, formerly known as Facebook, has launched a dedicated website for its AI-powered image generator tool called Imagine. Previously only available through Meta’s social network platforms, the Imagine tool can now be accessed freely on the web by users in the US. To use the tool, users need to log in with a free Meta account and describe the image they want. The tool then generates four different images matching the description. Users can select and download their chosen image. The image generation is powered by Meta’s Emu model, which produces high-quality, photorealistic images within seconds.
The once skeptical Washington establishment has embraced AI technology, particularly chatbots like ChatGPT, as powerful tools for improving government efficiency. President Biden’s recent executive order has led federal agencies to hire chief AI officers and give employees access to secure, vetted generative AI. The touted benefits are numerous: faster processing of government paperwork, AI-assisted satellite repairs, and new surveillance tools developed by contractors. However, concerns about potential misuse or errors by AI systems still exist. Overall, AI has the potential to greatly streamline operations and reduce bureaucratic red tape in Washington.
Researchers at DeepMind have conducted a groundbreaking study showing that AI agents can develop social learning skills in real time, imitating humans without the need for pre-collected human data. Using a simulated environment called GoalCycle3D, the AI agents navigated uneven terrain and obstacles. Through reinforcement learning and observing expert agents, the AI was able to learn faster and apply its newfound knowledge to other virtual paths. The study suggests that this method could lead to algorithmic cultural evolution, opening up possibilities for reducing resource-intensive training and enhancing problem-solving capabilities in AI. The implications of this breakthrough for the AI industry and the potential for AI to acquire human-like social and cultural elements are significant.
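The core idea, stripped of the 3D environment, is that a novice agent can learn purely by observing an expert’s state–action choices, with no pre-collected human dataset. The toy sketch below is not DeepMind’s GoalCycle3D setup; it is a minimal, hypothetical illustration on a one-dimensional track, where an imitator copies a demonstrated policy and reaches the goal far faster than an agent acting at random.

```python
# Toy illustration of observation-based imitation (hypothetical setup,
# not DeepMind's actual GoalCycle3D environment or training method).
import random

GOAL = 10  # goal position on a 1-D track; the agent starts at 0


def expert_policy(pos):
    # The expert always steps toward the goal.
    return 1


def run_episode(policy, max_steps=100, seed=0):
    """Run one episode; a None policy means a random walker."""
    rng = random.Random(seed)
    pos, steps = 0, 0
    while pos < GOAL and steps < max_steps:
        step = policy(pos) if policy else rng.choice([-1, 1])
        pos += step
        steps += 1
    return steps


# "Social learning": the novice records the expert's observed
# state -> action pairs and replays them, with no offline dataset.
demonstrations = {pos: expert_policy(pos) for pos in range(GOAL)}
imitator = lambda pos: demonstrations.get(pos, 1)

imitation_steps = run_episode(imitator)       # reaches GOAL in GOAL steps
random_steps = run_episode(None, seed=42)     # wanders, usually far slower
print(imitation_steps, random_steps)
```

The imitator reaches the goal in the minimum number of steps because it reuses every observed decision, which is the intuition behind the reported speed-up from watching expert agents.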
AI chatbots, designed to provide helpful information and interact with users, are vulnerable to being misled or tricked by other AI chatbots. A recent study reveals how researchers used carefully crafted prompts to make various popular chatbots disregard their safety guidelines and provide advice on dangerous activities, including synthesizing methamphetamine, bomb-making, and money laundering. By assigning one chatbot the role of a research assistant and instructing it to develop prompts that bypass the others’ security measures, the team significantly increased the success rate of attacks. The research highlights the inherent vulnerability of AI chatbots and the challenges developers face in creating fail-safe guardrails.
In the world of programming education, Copilot, an AI-powered coding assistant, is revolutionizing the way programmers learn and work. Developed by GitHub and powered by OpenAI’s advanced language models, Copilot offers real-time code suggestions and streamlines the coding process. By predicting a programmer’s intentions and generating code snippets on the fly, Copilot accelerates the learning curve and enhances productivity. While skeptics argue over the potential repercussions, such as IP protection and copyright concerns, the transformative impact of AI coding assistants like Copilot is undeniable. With the possibility of adding $1.5 trillion to the global economy by 2030, Copilot is positioned to create a significant shift in software development.
McDonald’s has announced a partnership with Google to implement generative AI in its operations starting in 2024. The fast food giant plans to roll out hardware and software upgrades across thousands of stores, including ordering kiosks and mobile app improvements. By leveraging generative AI and analyzing massive amounts of data, McDonald’s aims to optimize its operations and deliver hotter, fresher food to customers. While the specific details of how AI will be used remain unclear, the focus appears to be on reducing complexity for store crews and enhancing the overall customer experience. The announcement has sparked speculation about the potential for robots to replace human workers, but McDonald’s insists that AI will augment, not replace, its staff. Only time will tell how McDonald’s and AI will transform the fast-food industry.
Hong Kong-based Laboratory for Artificial Intelligence in Design (AiDLab) has developed a groundbreaking technology that uses a tiny camera and artificial intelligence to create color-changing textiles. By embedding polymeric optical fibers (POFs) and textile-based yarns into the fabric, the team has enabled the material to illuminate in a wide range of hues. Simple gestures, such as a thumbs-up, heart sign, or ‘OK’ sign, can trigger different colors. The fabric can also be customized via a mobile app, and AI algorithms help the camera recognize individual users’ gestures. Not only does this innovation offer more color choices for clothing, potentially reducing waste, but it is also recyclable and has a soft, comfortable feel.
The rise of AI in the creative industry has blurred the lines of intellectual property (IP) rights. AI-driven platforms are now utilizing massive datasets without explicit permission from the original artists, raising questions about plagiarism and copyright infringement. Major companies like Getty Images have sued AI platforms for unauthorized use of copyrighted material, while artists have initiated legal proceedings against those who trained AI tools with millions of images scraped from the web without consent. In response, researchers from the University of Chicago have developed a tool called Nightshade, which enables artists to make subtle alterations to their work that can corrupt AI training data. Despite concerns about misuse, such tools challenge traditional ownership and copyright, urging a reevaluation of IP laws.