Apple has unveiled the Vision Pro, a new augmented reality headset set for release in early 2024. At $3,499, it is significantly more expensive than other headsets on the market, but it is also more powerful and offers a wider field of view. Whether the Vision Pro will succeed remains to be seen, but Apple is clearly serious about the augmented reality market.
Apple’s WWDC keynote this year focused on new features for its existing products rather than flashy new AI-powered features. Apple did, however, make some significant announcements related to machine learning (ML). The company is expanding the ML features in its Photos app, so that users can automatically improve their photos with capabilities like Portrait Mode and Smart HDR. Apple is also giving developers new tools and frameworks that make it easier to build ML-powered apps.
Apple’s new Journal app for iOS 17 uses on-device machine learning to suggest what to write about. The app draws on your recent activity, such as photos, workouts, and places you have visited, and surfaces moments you might want to reflect on. Journal is a great way to get started with journaling.
A group of publishers has warned that generative AI content could violate copyright law. The group, which includes the Association of American Publishers and the Publishers’ Association, argues that generative AI tools can be used to create content that is substantially similar to copyrighted works. The group is calling for legislation to clarify the copyright status of generative AI content.
AI chatbots reportedly lose money every time you use them, according to a new study by the research firm Forrester. The study found that AI chatbots convert only 2% of leads into customers, compared with a conversion rate of 10% for human-powered customer service, and that they cost businesses an average of $100 per customer.
The CEO of Stability AI, the company behind the Stable Diffusion image generator, has responded to a report accusing him of exaggerating his resume. CEO Emad Mostaque was alleged to have overstated a number of his credentials, including claims about his degree from Oxford University, and has addressed the accusations publicly.
An adviser to the UK’s AI task force has warned that AI could threaten humanity within two years. In an interview, the adviser, Matt Clifford, said that AI could pose an “existential risk” to humanity. Clifford said that AI could be used to create autonomous weapons capable of killing without human intervention, and that it could lead to superintelligence, which could pose an even greater threat to humanity.
“Fear of missing out” is driving retail investors to ride the AI wave. A new report by the research firm Gartner found that 70% of retail investors use AI-powered tools to make investment decisions, and that 50% are willing to pay for such tools. The rise of AI-powered investment tools is a sign of AI’s growing popularity in the financial industry.
Singapore’s deputy prime minister, Heng Swee Keat, has said that artificial intelligence (AI) will disrupt the labor market but will not eliminate jobs completely. In a speech, Heng said that AI will create new jobs while displacing others, and that the government is working to prepare the workforce for the coming changes.
AI is being used to identify and track online abuse of soccer players. A number of companies are developing AI tools to help with this, such as Zeotap and Huddle. These tools use natural language processing to analyze social media posts and comments for signs of abuse; once abuse is identified, it can be reported to the appropriate authorities.
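To make the idea concrete, here is a minimal sketch of rule-based abuse flagging. This is a toy illustration, not how any of the companies named above actually work: real systems use trained classifiers and context-aware language models rather than a hand-written keyword list, and the `ABUSIVE_TERMS` lexicon and `flag_abusive` function are purely hypothetical names for this example.

```python
import re

# Toy lexicon of abusive terms. A production system would use a trained
# classifier and a much larger, context-sensitive vocabulary.
ABUSIVE_TERMS = {"idiot", "trash", "useless"}

def flag_abusive(post: str) -> bool:
    """Return True if the post contains any term from the toy lexicon."""
    tokens = re.findall(r"[a-z']+", post.lower())
    return any(token in ABUSIVE_TERMS for token in tokens)

posts = [
    "Great match today, what a goal!",
    "That keeper is useless, absolute trash.",
]
flagged = [p for p in posts if flag_abusive(p)]
print(flagged)  # only the second post is flagged
```

A flagged post would then be routed to a human moderator or reporting pipeline; keyword matching alone produces too many false positives (and misses sarcasm and coded language) to act on automatically.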
The UK’s Labour Party has called for artificial intelligence (AI) to be licensed in a similar way to medicines or nuclear power. The party’s digital spokesperson, Lucy Powell, said that AI developers should have to obtain a license before working on advanced AI tools. She expressed concern about the lack of regulation of large language models, which can be used to create a range of AI tools. Powell suggested that AI should be regulated by an arm’s-length government body, similar to the way medicines and nuclear power are regulated.