Plus, AI Creates Realistic Jesus
Google’s AI product, Bard, faced criticism last week over a major update, and now another issue has come to light. Users have discovered that Google Search indexes shared Bard conversation links, potentially exposing conversations their authors intended to keep confidential. When a person asks Bard a question and shares the link with a designated third party, such as a friend or business partner, the conversation accessible through that link can be scraped by Google’s crawler and surface publicly in search results. Evidence of several Bard conversations being indexed by Google Search has already been found.
Google Brain research scientist Peter J. Liu explained that indexing only occurs for conversations users explicitly choose to share, not all Bard conversations. However, most users were not aware that shared conversations could be indexed and made visible in search results. Google has acknowledged the mistake and says it is working to prevent shared chats from being indexed.
This issue reflects poorly on Bard and Google’s AI ambitions in the consumer market, particularly as the company faces stiff competition from other chatbots like OpenAI’s ChatGPT. The hope is that Google’s new AI, Gemini, will provide a more private and more polished experience.
Coca-Cola’s new AI-generated soda flavor, called “Year 3000,” has received negative reviews from consumers. According to an investigative report by Gizmodo, the flavor is considered a downgrade from regular Coke and is unpleasant to taste. Consumers described it as “all bark and no bite” and advised against drinking it. The soda’s taste was said to be acidic and numbing, leaving an unpleasant aftertaste. Other reviewers compared the drink to a mix of various sodas and found it better when consumed at room temperature.
Coca-Cola refused to disclose the ingredients of Y3000 Zero Sugar, claiming it was co-created with AI. The company’s marketing strategy includes an augmented reality (AR) experience for consumers to deepen engagement. However, critics believe that this AI-generated flavor is simply a marketing ploy rather than a groundbreaking innovation in the soft drink industry.
This development is part of a larger trend where food and beverage companies are leveraging AI to create new products for viral attention. However, the effectiveness of AI in improving these experiences remains uncertain. Coca-Cola’s Y3000 seems to have missed the mark and disappointed consumers who were hopeful for a futuristic and refreshing flavor.
Google’s Featured Snippet section, which provides quick answers to user queries, recently displayed an incorrect response sourced from Quora’s AI-generated content. In response to the query “Can you melt eggs?”, the snippet stated that eggs can be melted by heating them. This is incorrect: eggs do not melt; heating causes their proteins to denature and coagulate, an irreversible chemical change.
The misinformation can be traced to Quora’s integration of an earlier version of OpenAI’s ChatGPT model, a version known to produce false information. Interestingly, while the AI-generated answer on Quora claims eggs can be melted, the current version of ChatGPT contradicts this, saying it is not possible.
The incident highlights the long-standing accuracy issues of Google’s Featured Snippets, which predate AI’s involvement. In this case, the incorrect response was pulled from Quora’s page, further perpetuating the spread of misinformation.
This situation raises concerns about the quality and trustworthiness of search results. Google may need to adjust its algorithms to prevent such inaccuracies. However, the increased popularity and trust in AI-generated chatbots as alternative sources of information could further contribute to the erosion of trust in Google’s search results.
Researchers have developed an artificial intelligence (AI) system capable of detecting subtle differences in molecular patterns that indicate biological signals, even in samples hundreds of millions of years old. The method relies on the premise that chemical processes that govern the formation and functioning of biomolecules differ fundamentally from those in abiotic molecules.
This means that biomolecules, which are indicative of life, hold on to information about the chemical processes that made them. The team trained the machine learning algorithm with 134 samples, of which 59 were biotic and 75 were abiotic, and the AI system successfully identified and differentiated between them with 90% accuracy. This AI system could potentially be used in sensors on robotic space explorers, including landers and rovers on the moon and Mars, as well as within spacecraft circling potentially habitable worlds like Enceladus and Europa. Furthermore, this method could be applied to study the 3.5 billion-year-old rocks in the Pilbara region in Western Australia, where the world’s oldest fossils are thought to exist.
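To make the supervised-learning setup concrete, here is a minimal, self-contained sketch in Python. Everything in it is an assumption for illustration: the feature vectors are synthetic stand-ins for the study’s molecular-pattern measurements, and the nearest-centroid classifier is far simpler than the actual machine learning model the researchers trained on their 134 samples. It only shows the shape of the task: label samples biotic or abiotic, train on some, and measure accuracy on the held-out rest.

```python
import random

random.seed(0)

# Synthetic stand-in for the study's data: each sample is a vector of
# molecular-pattern features. Biotic and abiotic samples are drawn from
# different distributions, mimicking the premise that the two classes
# carry chemically distinct signatures.
def make_sample(biotic: bool, n_features: int = 10) -> list:
    center = 1.0 if biotic else -1.0
    return [random.gauss(center, 1.0) for _ in range(n_features)]

# Mirror the study's class counts: 59 biotic, 75 abiotic (134 total).
samples = [(make_sample(True), 1) for _ in range(59)] + \
          [(make_sample(False), 0) for _ in range(75)]
random.shuffle(samples)
train, test = samples[:100], samples[100:]

# Toy nearest-centroid classifier: predict whichever class mean
# a sample's feature vector is closest to.
def centroid(rows):
    n = len(rows[0])
    return [sum(r[i] for r in rows) / len(rows) for i in range(n)]

c_biotic = centroid([x for x, y in train if y == 1])
c_abiotic = centroid([x for x, y in train if y == 0])

def dist2(a, b):
    return sum((u - v) ** 2 for u, v in zip(a, b))

def predict(x):
    return 1 if dist2(x, c_biotic) < dist2(x, c_abiotic) else 0

accuracy = sum(predict(x) == y for x, y in test) / len(test)
print(f"held-out accuracy: {accuracy:.2f}")
```

On this cleanly separated synthetic data the toy model scores well; on real, noisy molecular measurements the problem is much harder, which is what makes the reported 90% accuracy notable.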
Artificial Intelligence (AI) has used the Shroud of Turin, a Catholic relic believed to be the burial shroud of Jesus of Nazareth, to generate a lifelike depiction of what Jesus may have looked like. The Shroud of Turin is a 14-foot linen cloth with the faint image of a man on it. Using generative AI technology, an image was generated showing a man with long hair, a beard, and his eyes open. The top portion of his body is visible, and he is wearing a simple tunic.
The history of the Shroud of Turin is complicated and full of mystery. While there are reports of Jesus’ burial shroud being venerated in various locations prior to the 14th century, there is no reliable historical evidence to prove that these were the Shroud of Turin. The shroud’s documented history begins in 1353 when it was on display in Lirey, France. The shroud has since been owned by the House of Savoy and is currently held in the Chapel of the Holy Shroud at Turin Cathedral.
The use of AI to create a realistic depiction of Jesus using the Shroud of Turin opens up possibilities for exploring historical figures and artifacts.
The St. Louis Post-Dispatch published an op-ed written entirely by Microsoft’s Bing Chat AI program, arguing against the use of AI in journalism. The bot was instructed to write an editorial highlighting the threats and dangers of AI in journalism. The op-ed discusses how AI undermines credibility, generates fake news, manipulates facts, spreads misinformation, and creates deepfakes. It also emphasizes that AI lacks moral judgment and cannot protect sources, adhere to professional standards, or replace the unique qualities of human journalists.
The bot concludes by stating that AI should not be used in journalism and calls on media companies to support and empower human journalists instead. The editors of the paper found Bing Chat AI’s arguments to be lucid and persuasive. Jon Schweppe from the American Principles Project believes that AI replacing reporting jobs is inevitable, impacting journalism negatively. The op-ed aims to generate discussion among humans about the use of AI in journalism and its implications.
According to design expert Dan Kraemer, AI can be a valuable tool for brainstorming and generating billion-dollar business ideas. Kraemer believes that while humans will never stop coming up with great ideas, AI can significantly enhance the creative process. Using generative AI, businesses can leverage prompt engineering to brainstorm better and faster. Kraemer suggests providing prompts related to specific business objectives, market realities, and user segments to help the AI generate innovative ideas.
AI can also assist with due diligence by analyzing competition, forecasting market trends, and conducting financial modeling. While there are challenges to overcome, such as protecting sensitive data and assigning intellectual property rights to machine-generated ideas, Kraemer believes that these issues will soon be resolved. He emphasizes that AI can be the spark that helps founders discover their next great business idea.