Plus, Using AI to Check AI: What Could Go Wrong?
The CEO of Stability AI predicts that AI will replace most outsourced coders in India within the next two years. He believes that AI is still in its early stages but is already becoming more capable than human coders at many tasks.
A pro-Ron DeSantis super PAC called Never Back Down has released a television ad attacking former President Donald Trump, with the ad featuring an AI-generated version of Trump’s voice. The ad accuses Trump of disrespecting Iowa Governor Kim Reynolds, claiming it is part of a larger pattern of disrespect towards Iowa as the first caucus state. While the audio in the ad is made to sound like Trump, a person familiar with the ad confirmed that the voice was AI-generated. The content of the AI-generated voice seems to be based on a recent post made by Trump on his social media site Truth Social. The ad is set to run statewide in Iowa and cost at least $1 million. This use of AI-generated content in political advertising highlights the potential rise of so-called deepfakes in campaign ads.
Actor Tom Cruise has lobbied studios to use AI stunt doubles in his films. He believes AI stunt doubles can perform dangerous stunts more safely than human stunt doubles.
Website builder Wix has announced a new AI Site Generator feature that will allow users to create websites by simply typing a description into a box and answering a few follow-up questions. The tool will then automatically generate the entire website, including design, text, and images. Wix’s AI Site Generator goes beyond templates and aims to create a “unique” website; the company currently uses a combination of ChatGPT and its own AI models to accomplish this. The tool will make website building easier and more approachable, and could help small businesses and entrepreneurs create more professional-looking websites. However, it is unclear how much direct control users will have over the final product and whether they can create websites that do not look like templates. Additionally, there are concerns about copyright infringement with AI-generated content, and it is unknown how Wix will address this issue.
Elon Musk’s new startup xAI is being seen as a politically charged alternative to existing AI technologies. Musk aims to create an AI system that is “maximally truth-seeking” and free from political correctness, arguing that a curious AI would protect humans. However, some experts find this argument unconvincing and have raised concerns about the potential ethical implications and risks of such a technology. Musk’s vision of a purely objective AI risks conflating data with truth, according to critics. Some experts have also questioned the feasibility of Musk’s approach, pointing to potential legal hurdles and the need for diverse approaches to developing AI technology. While Musk’s new venture has attracted attention due to its technical team’s expertise, critics have also highlighted concerns about the team’s all-male composition. Despite the controversies surrounding the startup, Musk remains committed to pursuing his vision and expanding its scope beyond politics by exploring the potential for AI in discovering intelligent life in the universe.
AI chatbots are being used to help people cope with grief. These chatbots are trained on data from people who have experienced grief. They can then provide support and advice to people who are grieving.
The rise of generative AI has led companies to automate mundane tasks that were traditionally given to entry-level workers. This trend is causing concern among young workers who are worried about their career prospects. Many young employees already face challenges such as high student debt and limited job opportunities due to the requirement of a college degree. Moreover, companies have shown little interest in training these workers or developing their abilities, leaving them to fend for themselves. The introduction of AI technology threatens to further undermine their ability to launch a career. While companies may argue that young workers will be the ones overseeing AI tools, in reality they will likely be left to clean up the errors made by these tools and will receive less credit for their work. This lack of career development and investment in young employees not only harms individuals but also leads to a weaker economy overall.
AI could be responsible for issuing parking tickets in the future. This is because AI is becoming more capable of detecting and tracking vehicles. AI could also assess whether a vehicle has been parked illegally.
Google AI has developed a tool that helps doctors decide whether to trust AI diagnoses. The tool, called DeepMind Health, uses machine learning to analyze AI diagnoses and identify potential errors.
Large language models (LLMs) like ChatGPT and GPT-4 have been the subject of both hype and disappointment. While initial studies claimed these models passed difficult human-designed tests, further research revealed that they were providing correct answers for the wrong reasons. One major factor behind these misleading results is data contamination, where test examples are included in the model’s training data. Data contamination is common in machine learning, but it becomes more complex with LLMs. The methods for avoiding contamination in traditional machine learning, such as separating train and test sets, are not as straightforward with LLMs. Difficulties arise due to large dataset sizes, prompting errors, the complexity of LLMs, confusion over problem types, and a lack of transparency. To address these challenges, the field needs more transparency, better tools for detecting contamination, and new evaluation tests that ensure models provide correct answers for the right reasons. As LLMs continue to evolve, novel approaches to dealing with data contamination will be required.
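One of the simplest contamination checks alluded to above can be sketched in a few lines: flag a benchmark example as potentially contaminated when long word n-grams from it also appear verbatim in the training corpus. The tiny corpus, the example questions, and the 8-gram threshold below are illustrative assumptions, not any lab’s actual pipeline.

```python
# Sketch of an n-gram overlap contamination check. The corpus, the test
# questions, and the n=8 threshold are illustrative assumptions only.

def ngrams(text, n=8):
    """Return the set of lowercased word n-grams in a text."""
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def is_contaminated(test_example, training_corpus, n=8):
    """True if any word n-gram of the test example appears in the corpus."""
    return bool(ngrams(test_example, n) & ngrams(training_corpus, n))

corpus = ("The capital of France is Paris , which has been the seat of "
          "government since the tenth century .")
seen = "Q : The capital of France is Paris , which has been the seat of government ?"
fresh = "Q : What is the tallest mountain on Mars ?"

print(is_contaminated(seen, corpus))   # True: an 8-gram overlaps the corpus
print(is_contaminated(fresh, corpus))  # False: no long overlap
```

Real detection is much harder than this sketch suggests: training corpora are terabytes in scale, contamination can be paraphrased rather than verbatim, and many model vendors do not disclose their training data at all, which is exactly the transparency problem the article raises.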
Researchers from Edith Cowan University (ECU) have developed AI software that uses bone density scans to predict a person’s future risk of health conditions such as heart attacks, stroke, falls, fractures, and dementia. Specifically, the software looks for signs of abdominal aortic calcification (AAC), a predictor of these conditions. While human experts typically take 5-15 minutes to analyze each scan, the AI can process 60,000 images in one day. The algorithm matches expert conclusions 80% of the time, a notable advance for early detection and disease monitoring. The software could vastly improve the efficiency and availability of AAC analysis, enabling early detection of cardiovascular disease during routine clinical practice. Despite this accuracy, work remains to close the gap between the software’s performance and human readings.