
CAN WE TRUST AI?

Apple’s avoidance of the term ‘AI’ is very Apple.

Apple’s recent iPhone 15 event stood out because the company never said “AI” once, a move that might seem risky given how widely AI is hailed as a revolutionary technology. Apple is not shunning the technology, however: the Series 9 Watch can respond to a finger-pinch gesture, and the iPhone 15 automatically captures portrait-mode depth data. Both features rely on machine learning, a branch of AI.

Reports also indicate that Apple has been experimenting with conversational AI internally. Avoiding the buzzword “AI” may be a way to set itself apart from other companies and to address consumer unease about the technology’s downsides: many people equate AI with generative AI, which has a well-documented tendency to “make things up” and has not yet won broad trust. Meanwhile, Apple’s neural engines and its focus on products that simply work reflect a deliberate strategy, and the company also drew praise for its spatial video plans and its environmental achievements.

Why humans can’t trust AI

AI has become part of our daily lives, powering everything from facial recognition and creditworthiness assessments to poetry and code generation. But AI has a significant problem: it is often unpredictable and unexplainable. Trust rests on predictability, the ability to anticipate how someone will behave. If we don’t understand how an AI system works and can’t predict its actions, how can we trust it?

Many AI systems are built on deep learning neural networks, which are loosely modeled on the human brain. These networks consist of interconnected “neurons” joined by weighted connections, and as the system is presented with training data it adjusts those weights until it can classify new data correctly. The resulting decisions are opaque: a modern model can contain billions or even trillions of parameters, making it practically impossible to explain why it makes any particular choice.
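To make “adjusting parameters” concrete, here is a minimal, self-contained sketch of the idea in Python. The task, architecture, and sizes are invented for illustration and bear no relation to any production system; the point is only that a network’s learned behavior lives entirely in anonymous numeric weights.

```python
# A toy two-layer network trained by gradient descent. All names, sizes,
# and the task itself are invented for illustration.
import numpy as np

rng = np.random.default_rng(0)

# Toy task: classify whether a point's coordinates sum to more than 1.
X = rng.random((200, 2))
y = (X.sum(axis=1) > 1.0).astype(float).reshape(-1, 1)

# The parameters -- the "strengths" of the connections between neurons.
W1 = rng.normal(0.0, 0.5, (2, 8))
b1 = np.zeros((1, 8))
W2 = rng.normal(0.0, 0.5, (8, 1))
b2 = np.zeros((1, 1))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 0.5
for step in range(2000):
    # Forward pass: data flows through the weighted connections.
    h = np.tanh(X @ W1 + b1)
    p = sigmoid(h @ W2 + b2)
    # Backward pass: nudge every parameter to reduce the error slightly.
    grad_out = (p - y) / len(X)                 # gradient of cross-entropy loss
    grad_h = (grad_out @ W2.T) * (1.0 - h**2)   # backpropagate through tanh
    W2 -= lr * h.T @ grad_out
    b2 -= lr * grad_out.sum(axis=0, keepdims=True)
    W1 -= lr * X.T @ grad_h
    b1 -= lr * grad_h.sum(axis=0, keepdims=True)

h = np.tanh(X @ W1 + b1)
p = sigmoid(h @ W2 + b2)
print("accuracy:", float(((p > 0.5) == y).mean()))
```

Everything this network “knows” lives in W1, b1, W2, and b2: a few dozen anonymous numbers here, billions in a modern model, which is why there is no human-readable answer to “why did it decide that?”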

Trust also relies on normative or ethical motivations, which AI lacks. Human behavior is shaped by shared ethical standards and by our perceptions of one another, whereas an AI system’s decisions flow from a fixed model of the world that dynamic social interaction never touches. This misalignment with human expectations is a further barrier to trust.

The U.S. Department of Defense requires human involvement in AI decision-making to address these issues. However, as AI becomes more integrated into critical systems, it may become less feasible for humans to intervene. It is crucial to resolve the explainability and alignment problems before reaching a point where human intervention is no longer possible.

In conclusion, AI is an alien intelligence that lacks the predictability and the normative grounding on which trust is built. Further research is needed to ensure that the AI systems of the future deserve our trust.

AI detects eye disease and risk of Parkinson’s from retinal images

Scientists have developed an AI tool called RETFound that can diagnose eye disease and predict the risk of various health conditions from retinal images. What makes RETFound special is that it was developed using self-supervised learning, so it did not require the time-consuming and expensive step of labeling every image as normal or abnormal. Instead, the tool learned from a large collection of retinal photographs by repeatedly predicting masked-out portions of each image.

A person’s retinas can reveal a great deal about their health: they allow direct observation of the capillary network, the smallest blood vessels in the body, as well as evaluation of neural tissue. RETFound was first trained on unlabeled retinal images and then taught specific conditions using a small number of labeled ones. It performed well at detecting ocular diseases and showed promise in predicting systemic diseases such as Parkinson’s. The authors have made RETFound publicly available in the hope that it can be adapted and trained for different patient populations and medical settings, though they stress the need for ethical use and transparent communication of its limitations when it serves as the basis for other disease-detection models.
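As a rough illustration of the two-stage recipe described above, here is a hedged sketch in Python using PyTorch. The architecture, sizes, and data are toy placeholders rather than RETFound’s own: stage one predicts hidden image patches without any labels, and stage two reuses the pretrained encoder with a small labeled set.

```python
# Toy two-stage pipeline: masked self-supervised pretraining, then
# label-efficient fine-tuning. Placeholder modules, not RETFound's code.
import torch
import torch.nn as nn

PATCH, N_PATCH, DIM = 16, 64, 128   # toy sizes: an 8x8 grid of 16x16 RGB patches

# Placeholder encoder/decoder; the real model would be far larger.
encoder = nn.Sequential(nn.Linear(PATCH * PATCH * 3, DIM), nn.ReLU(),
                        nn.Linear(DIM, DIM))
decoder = nn.Linear(DIM, PATCH * PATCH * 3)

def pretrain_step(patches, mask_ratio=0.75):
    """Stage 1, self-supervised: hide most patches, predict them back.
    patches: (batch, N_PATCH, PATCH*PATCH*3) flattened image patches."""
    mask = torch.rand(patches.shape[:2]) < mask_ratio
    visible = patches.masked_fill(mask.unsqueeze(-1), 0.0)  # blank out hidden patches
    recon = decoder(encoder(visible))
    # Score only the patches the model never saw: the "predict missing
    # portions" objective needs no human labels at all.
    return ((recon - patches) ** 2)[mask].mean()

# Stage 2, fine-tuning: reuse the pretrained encoder and attach a small
# classification head trained on a handful of labeled images.
classifier = nn.Sequential(encoder, nn.Flatten(1), nn.Linear(N_PATCH * DIM, 2))

# Usage with random stand-in data (4 fake "retinal images").
images = torch.rand(4, N_PATCH, PATCH * PATCH * 3)
print("pretraining loss:", pretrain_step(images).item())
labels = torch.randint(0, 2, (4,))   # e.g. disease / no disease
print("fine-tune loss:",
      nn.functional.cross_entropy(classifier(images), labels).item())
```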

Deep and dangerous: Is AI the future of ocean exploration?

Exploring the depths of Earth’s oceans is difficult because of the vastness and hostility of the underwater environment. The recent disaster involving the Titan submersible, which was carrying sightseers to the wreck of the Titanic, underscored the dangers of such exploration. Yet understanding the oceans is crucial to the planet’s future: they play a major role in Earth’s climate and hold valuable resources, such as the battery metals needed for the clean-energy transition. Countries including Norway, India, and China are racing to mine these resources, raising concerns about damage to ocean ecosystems. To overcome the challenges of underwater exploration, researchers are turning to AI and autonomous submersibles.

AI-powered submersibles could operate for long stretches, speeding up the mapping of the ocean floor and minimizing risk to human life. There are engineering challenges to overcome, however, including the corrosiveness of saltwater, extreme pressure, and the difficulty of navigation. Finding a sustainable energy source for submersibles is another major hurdle; researchers are exploring options including solar power, ocean currents, and hydrothermal vents. Despite the challenges, interest, funding, and technological progress in deep-sea exploration are all growing. Researchers are even studying sea animals such as shrimp and krill to apply lessons from their movement to future submersibles. Further exploration of the oceans holds great potential for scientific discovery and for understanding the planet.

Amazon rolls out generative AI tool to help sellers write listings

Amazon has introduced a new AI tool to help sellers write product listings. The generative AI tool prompts sellers to enter a few keywords or sentences describing their product and then generates several content options, such as product titles, bullet points, and descriptions. Sellers can use the generated options to create new listings or improve existing ones. Amazon has been experimenting with other AI applications as well, including summarizing product reviews.
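Amazon has not published how the tool works under the hood, but the workflow it describes, keywords in and structured listing copy out, is easy to sketch against any large language model. The example below uses the OpenAI chat API purely as a stand-in; the prompt wording, model name, and requested output fields are illustrative assumptions, not Amazon’s implementation.

```python
# Illustrative only: Amazon's implementation is not public. This uses the
# OpenAI chat API as a stand-in model; prompt wording, model name, and the
# requested output fields are all assumptions.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def draft_listing(keywords: str) -> str:
    prompt = (
        "You are writing an e-commerce product listing.\n"
        f"Product keywords: {keywords}\n"
        "Produce: 1) a product title, 2) five bullet points highlighting "
        "features, 3) a short product description."
    )
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": prompt}],
        temperature=0.7,  # some variety, so sellers get distinct options
    )
    return response.choices[0].message.content

print(draft_listing("stainless steel water bottle, 1L, vacuum insulated"))
```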

The AI tool was unveiled at Amazon Accelerate, the company’s annual conference for third-party sellers. It is expected to make it easier for sellers to stand out in Amazon’s highly competitive third-party marketplace, which has become a central part of the company’s e-commerce business. Amazon plans to weave AI into many areas of its business in response to the popularity of consumer-facing AI tools such as ChatGPT and Google’s chatbot, Bard, and it has also launched Bedrock, a generative AI service for Amazon Web Services customers.

AI can now generate CD-quality music from text, and it’s only getting better

Stability AI has developed Stable Audio, an AI model that generates high-quality music and sound effects from written descriptions. By typing a phrase like “dramatic intro music” or “creepy footsteps,” users can hear audio that matches the description. To train the model, Stability AI partnered with stock-music provider AudioSparx on a dataset of more than 800,000 audio files and their corresponding text metadata.

In total, the model was fed about 19,500 hours of audio and can imitate particular sounds from their text descriptions. Stable Audio operates on a heavily simplified, compressed representation of audio, which is what lets it generate CD-quality stereo sound in under a second. The company plans to offer Stable Audio in a free tier and a paid Pro plan, with the Pro plan allowing more track generations and longer tracks. While AI-generated audio may become a valuable tool for professionals, it is unlikely to replace human musicians any time soon.
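The article does not spell out the model’s internals, but the compressed-representation idea maps onto a familiar latent-generation structure: encode sound into a small latent space, iteratively refine that latent under text conditioning, then decode back to a waveform. The sketch below, in Python with PyTorch, uses random placeholder modules throughout; nothing in it is Stability AI’s actual architecture. It exists only to show why working in a small latent space is fast.

```python
# Structural sketch only: every module below is a random placeholder,
# not Stability AI's model. Sizes are toy values.
import torch
import torch.nn as nn

LATENT_CH, LATENT_LEN = 64, 1024          # small compressed "latent" space
text_encoder = nn.Embedding(1000, 128)    # stand-in for a real text encoder
denoiser = nn.Linear(LATENT_CH + 128, LATENT_CH)  # stand-in denoising network
decoder = nn.ConvTranspose1d(LATENT_CH, 2, kernel_size=64, stride=32)  # -> stereo

@torch.no_grad()
def generate(prompt_tokens, steps=50):
    cond = text_encoder(prompt_tokens).mean(dim=0)   # one text-condition vector
    z = torch.randn(LATENT_LEN, LATENT_CH)           # start from pure noise
    for _ in range(steps):
        # Each step predicts (and removes a little of) the noise, conditioned
        # on the text. Iterating over the small latent, rather than millions
        # of raw audio samples, is what keeps generation fast.
        eps = denoiser(torch.cat([z, cond.expand(LATENT_LEN, -1)], dim=-1))
        z = z - eps / steps
    return decoder(z.T.unsqueeze(0))                 # (1, 2, n_samples) waveform

wav = generate(torch.tensor([1, 2, 3]))   # stand-in for a tokenized text prompt
print(wav.shape)
```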

‘Overwhelming consensus’ on AI regulation

Tech leaders, including Elon Musk, Mark Zuckerberg, and Sundar Pichai, gathered in Washington to discuss the regulation of AI. The meeting, held behind closed doors and convened by Senate Majority Leader Chuck Schumer, aimed to address the potential dangers and benefits of AI. Musk emphasized the need for regulation, stating that there was an “overwhelming consensus” among attendees. OpenAI CEO Sam Altman also highlighted the importance of government collaboration in preventing AI from causing harm. 

Concerns were raised regarding the potential for job losses, increased fraud, and the spread of misinformation due to AI technology. Additionally, companies using scraped data from the internet without permission or payment faced criticism. Musk suggested the establishment of a regulatory body to oversee AI, while Zuckerberg believed that Congress should support AI innovation and safeguards. Although there was agreement on the need for regulation, lawmakers acknowledged that crafting legislation would be challenging and that more time would be required before concrete action could be taken.
