Plus, AI Helps You Play Fantasy Football
Tech start-ups are revolutionizing the recycling industry by using AI-powered robots to streamline sorting. Companies like Amp Robotics have developed robots like Sorty McSortface and Sir Sorts-a-Lot that can efficiently identify and segregate recyclable items from the junk collected in recycling bins. These robots have been deployed in numerous recycling facilities across the United States, significantly improving the industry’s efficiency.
The robots use mechanical arms to quickly pick out items they have been trained to recognize, working with the speed and precision of cranes plucking fish from the water. Their implementation has garnered praise from industry experts, with many viewing them as a game-changer for waste management.
Although AI-powered sorting robots are still relatively niche, their adoption is growing as they offer a solution to the challenges faced by recycling facilities. The technology enables faster and more accurate sorting, minimizing the amount of manual labor required.
Overall, these advancements in AI-based technology have the potential to transform the recycling process, making it more efficient, cost-effective, and sustainable.
Different AI models have varying levels of performance and reliability when it comes to providing accurate information, according to a report by Arthur AI, a machine-learning monitoring platform. The report examined hallucination rates: instances in which models fabricate information. Microsoft-backed OpenAI's GPT-4 performed best overall and hallucinated less than its previous version, while Meta's Llama 2 hallucinated more than both GPT-4 and Anthropic's Claude 2. GPT-4 was particularly strong in mathematics, while Claude 2 answered questions about US presidents more accurately. The report also tested how well the models hedged their answers with warning phrases; GPT-4 hedged 50% more often, in relative terms, than its previous version, while Cohere AI's model did not hedge at all. The report emphasizes the importance of testing AI models against each specific use case to ensure their performance aligns with the intended purpose.
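The hedging test described above can be approximated in a few lines: scan each model answer for known warning phrases and report the fraction that contain one. This is only an illustrative sketch, not Arthur AI's methodology; the phrase list and sample answers below are assumptions.

```python
# Hypothetical sketch of a hedging-rate metric: the fraction of model
# answers that contain at least one warning phrase. The phrase list
# and sample answers are illustrative, not from the Arthur AI report.
HEDGE_PHRASES = (
    "as an ai",
    "i cannot",
    "i'm not able to",
    "it is important to note",
)

def hedge_rate(answers):
    """Return the fraction of answers containing a hedge phrase."""
    if not answers:
        return 0.0
    hedged = sum(
        any(phrase in answer.lower() for phrase in HEDGE_PHRASES)
        for answer in answers
    )
    return hedged / len(answers)

sample = [
    "As an AI model, I can't verify that claim.",
    "The 16th US president was Abraham Lincoln.",
]
print(hedge_rate(sample))  # → 0.5
```

A relative comparison between two model versions would then just be the ratio of their hedge rates over the same question set.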
Gridiron AI, an analytics company, has developed an innovative approach to fantasy football, using AI to provide fans with fantasy rankings. The company relies on data to create consistent predictions, reducing human biases. Users can customize the settings and how the data, rankings, and projections are presented based on their league's scoring system. Gridiron AI offers a free service with limited rankings, and users can subscribe for $5 per month to access more data. By analyzing box scores, player sheets, and game logs, the technology helps fantasy managers predict player performance and the fantasy points it should translate into in a given week. The technology can also factor in unstructured data, such as injury analysis, for more sophisticated decision-making. Gridiron AI continually adds new data to improve its results, and its rankings outperform those of popular sites: the company recently beat 4for4 Rankings, a site known for producing exceptional rankings.
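Projecting fantasy points from game logs can be sketched with a simple recency-weighted average, where recent weeks count more than older ones. Gridiron AI's actual models are proprietary; the decay weight and weekly scores below are purely illustrative assumptions.

```python
# Illustrative sketch only: a naive fantasy projection from recent
# game logs, weighting recent weeks more heavily. Gridiron AI's real
# models are proprietary; decay and sample scores are assumptions.
def project_points(game_logs, decay=0.7):
    """Exponentially weighted average of weekly fantasy scores.

    game_logs: list of weekly fantasy point totals, oldest first.
    """
    weight, total, norm = 1.0, 0.0, 0.0
    for points in reversed(game_logs):  # most recent week first
        total += weight * points
        norm += weight
        weight *= decay
    return total / norm if norm else 0.0

recent_weeks = [12.4, 18.1, 9.7, 22.3]  # hypothetical weekly scores
print(round(project_points(recent_weeks), 1))
```

A real system would add structured features (opponent strength, snap counts) and, as the article notes, unstructured signals such as injury reports.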
Advertisers like Nestle and Unilever, along with ad giant WPP, are experimenting with generative AI software to reduce costs and boost productivity. Generative AI produces content based on past data, offering cheaper, more versatile ways to advertise products; the technology can create original text, images, and code from its training data. WPP is working with Nestle and Mondelez to use generative AI in advertising campaigns, potentially cutting production costs by a factor of 10 to 20. Nestle is also using generative AI to help market its products by generating campaign ideas that are on brand and on strategy. However, companies remain cautious about security risks, copyright concerns, and unintended biases in the software, and humans will still play a significant role in the process. Investment in AI is increasing, and executives believe it could transform how advertisers bring products to market.
The Associated Press (AP) has issued new guidelines restricting the use of generative AI tools in news reporting. The guidelines, laid out by AP's Vice President for Standards and Inclusion, Amanda Barrett, state that journalists should not use generative AI tools such as ChatGPT to create publishable content. Barrett emphasized that output from generative AI tools should be treated as unvetted source material, with editorial judgment and sourcing standards applied before any of it is considered for publication. The AP will also prohibit the use of generative AI to alter photos, videos, or audio, and will not transmit suspected deepfakes. Barrett warned about the ease of spreading misinformation through generative AI and advised journalists to exercise caution and skepticism, verifying material's authenticity before using it. The AP's guidelines strike an optimistic note, suggesting that AI tools can benefit journalists in their reporting without replacing them. The news outlet recently signed a license agreement with OpenAI, which gives the AP access to OpenAI's suite of products and technologies.
AI chatbots, particularly generative AI models like the ChatGPT technology used in Snapchat's My AI chatbot, are becoming more human-like in their conversations, raising concerns about their potential to be mistaken for humans. As AI becomes increasingly indistinguishable from humans, managing the adoption of and interactions with these chatbots becomes more challenging and more important. Generative AI tools rely on large language models (LLMs), which are trained on billions of words to predict what should come next in a text. These chatbots have shown higher engagement levels and an ability to fulfill organizational objectives in various settings, but they also carry risks, especially for vulnerable individuals: misleading interactions with chatbots have led to harmful consequences, including harmful advice and, in extreme cases, suicide. This anthropomorphism of AI calls for more education and transparency to ensure users do not confuse the human-like qualities of chatbots with genuine human interaction. Governments and organizations are encouraged to establish responsible standards and regulations for AI, emphasizing education and AI literacy in schools, universities, and organizations.
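The core idea behind LLMs, predicting the most likely continuation of a text, can be sketched at toy scale with simple bigram counts. Real LLMs use neural networks trained on billions of words; this sketch and its tiny corpus are illustrative assumptions only.

```python
# Toy illustration of next-word prediction. Real LLMs use neural
# networks over billions of tokens; here we just count which word
# most often follows which in a tiny example corpus.
from collections import Counter, defaultdict

def train_bigrams(text):
    """Count, for each word, the words observed immediately after it."""
    words = text.lower().split()
    model = defaultdict(Counter)
    for prev_word, next_word in zip(words, words[1:]):
        model[prev_word][next_word] += 1
    return model

def predict_next(model, word):
    """Return the most frequent follower of `word`, or None."""
    followers = model.get(word.lower())
    return followers.most_common(1)[0][0] if followers else None

corpus = "the cat sat on the mat and the cat slept"
model = train_bigrams(corpus)
print(predict_next(model, "the"))  # → "cat"
```

Even this crude model shows why fluent continuation is not understanding: it only echoes statistical patterns in its training data, which is one reason human-seeming chatbot output can mislead.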
Architect and designer Arturo Tedeschi believes that designing with AI is a new form of creativity that signifies the evolution of our species. According to Tedeschi, developments in AI now allow humans to create using thought alone, abandoning the need for hands as design tools. This shift in the creative process involves two-way communication between the human designer and the AI system. Tedeschi, who has worked extensively with algorithmic-aided design, notes the rapid advancement of AI tools and predicts that diffusion models will soon be able to turn text or sketches into 3D works. He also believes that AI can bring ideation, the initial stage of creative conception, back to the forefront of the design process. Tedeschi warns that those in the creative industries who do not adapt to using AI may suffer. He is not fearful that AI will kill humanity, but he is concerned about its impact and the potential consequences for those unprepared for the change.