Scale AI, a San Francisco start-up that provides data-annotation services for AI models, is accused of exploiting workers in the Philippines who perform crowdsourced labeling and annotation tasks at meager pay rates. Companies such as Microsoft and OpenAI pay Scale AI to ensure that their AI models are accurately trained. However, workers on Remotasks, the platform Scale AI owns, are often paid below the minimum wage and face routine delays or cancellations of payments.
In addition, there are few channels for workers to seek recourse when facing issues. While the platform claims to pay a living wage, the reality for many of these crowdsourced workers is that they barely earn enough to support themselves. The situation highlights the exploitative underbelly of the AI industry, which relies heavily on the labor of a workforce spread across the Global South.
The Air Force is requesting $5.8 billion to develop AI-driven unmanned planes, known as XQ-58A Valkyries, which are expected to play a vital role in the future of air dominance. These aircraft are designed to assist human pilots, taking on tasks that would be challenging for a human and flying dangerous missions that a human pilot might not survive. In a test over the Gulf of Mexico, the Valkyries will demonstrate their ability to track and destroy targets independently. The XQ-58A Valkyrie is an advanced pilotless aircraft that is difficult to detect on radar.
It was developed by Kratos Defense & Security Solutions for the Low-Cost Attritable Strike Demonstrator program, which the Air Force’s Low-Cost Attritable Aircraft Technology research lab is leading. Manufacturing these planes over five years is projected to cost $5.8 billion, with each plane expected to run between $3 million and $25 million. However, human rights activists have raised concerns that delegating killing power to machines is morally wrong and could have severe consequences, including the loss of human control and the creation of dangerous weapons.
Character.AI, a chatbot platform founded by former Google deep-learning AI team members, has gained a significant following of users who spend over two hours daily on the site. The platform comprises millions of user-generated bots, including famous fictional characters, real-life figures, virtual coaches, tutors, psychologists, and text-based games. Character.AI’s co-founders, Noam Shazeer and Daniel De Freitas, aim to bring personalized superintelligence to everyone.
However, the platform has faced criticism for filtering out adult content, prompting some users to migrate to smaller platforms and to create a dedicated subreddit for NSFW content. The platform’s popularity makes it a complex space for fandom, raising questions about copyright and AI. Experts argue that copyright law is not yet equipped to handle AI-generated content, though platforms like Character.AI can stay within legal requirements by removing user content in response to takedown notices from copyright holders. Despite the controversies, fans continue to engage with the characters available on Character.AI, looking for new ways to interact with their favorites.
The 2023 US Open tennis tournament will incorporate AI technology to enhance the digital experience for fans. IBM’s consulting arm has partnered with the United States Tennis Association to provide viewers with a cutting-edge digital experience. IBM’s data center inside the Billie Jean King National Tennis Center will gather and analyze millions of data points for the tournament’s digital platforms.
The technology will also power AI commentary: a computer vision module collects and analyzes data on player movements and transforms it into narration. IBM’s Watsonx AI platform will curate original content for the tournament’s app and website, including features such as “Match Insights” and AI commentary for highlight reels. Tennis legend Maria Sharapova, a five-time Grand Slam champion and the 2006 US Open winner, will attend the tournament and has expressed her belief that the new AI-based tech will engage a new generation of fans. The US Open is scheduled from August 28 to September 10 in Flushing, New York.
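The final step of such a pipeline, turning detected match events into narration, can be sketched with simple templates. This is a toy illustration only, not IBM’s actual system; the event types, field names, and template wording are all invented for the example.

```python
# Toy sketch of event-to-narration for highlight commentary.
# Event dicts stand in for the structured output a computer vision
# module might emit; templates map them to commentary lines.

TEMPLATES = {
    "ace": "{player} fires an ace out wide at {speed} mph.",
    "winner": "{player} ends the rally with a {shot} winner.",
    "break_point": "{player} converts the break point.",
}

def narrate(event: dict) -> str:
    """Map one detected match event to a line of commentary."""
    # str.format ignores unused keys such as "type".
    return TEMPLATES[event["type"]].format(**event)

events = [
    {"type": "ace", "player": "Player A", "speed": 124},
    {"type": "winner", "player": "Player B", "shot": "forehand"},
]
for e in events:
    print(narrate(e))
# prints:
# Player A fires an ace out wide at 124 mph.
# Player B ends the rally with a forehand winner.
```

A production system would replace the fixed templates with a language model conditioned on the same structured events, but the input contract is the same idea.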
Prompt engineering, the practice of using natural language to elicit useful content from AI models, has been touted as a lucrative career path. In reality, however, it has not materialized as a standalone profession. There are very few job advertisements for “prompt engineer” roles, and the work is often subsumed into roles like machine learning engineer or AI specialist. The confusion surrounding the usefulness of prompt engineering stems from the fact that there are two types of value creators in this field: domain experts and technical experts.
Domain experts excel at asking the right questions and recognizing the value in the responses, while technical experts deeply understand how AI models work. Both types of prompt engineers are valuable but have different skill sets and goals. Ultimately, evaluating generative AI outputs is subjective and often requires domain expertise. While learning some tips and tricks for prompt engineering can be valuable, formal training is probably unnecessary for most people. Instead, individuals should focus on pairing AI with their area of expertise to create innovative applications.
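In practice, much of what a domain expert contributes is careful structure: framing the context, stating the task, and pinning down the output format. The helper below is a hypothetical sketch of that packaging; no real LLM API is called, and the example context and task are invented.

```python
# Minimal sketch of structuring a domain-informed prompt.
# The resulting string would be passed to whatever LLM client you use.

def build_prompt(domain_context: str, question: str, output_format: str) -> str:
    """Assemble a prompt from domain framing, the task, and an
    explicit output contract."""
    return (
        "You are assisting a specialist.\n"
        f"Context:\n{domain_context}\n\n"
        f"Task:\n{question}\n\n"
        f"Respond strictly in this format:\n{output_format}"
    )

prompt = build_prompt(
    domain_context="Quarterly sales figures for a mid-size retailer.",
    question="Identify the three largest month-over-month changes.",
    output_format="A numbered list: month, change, one-line explanation.",
)
print(prompt)
```

The value here comes from the domain expert knowing which context matters and what a useful answer looks like, not from any special incantation.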
According to a new report from the FBI, Americans are more likely to fall victim to online scams and digital fraud than physical break-ins. The report reveals that Americans lost over $10 billion to online scams and frauds in the past year. Interestingly, seniors are the group that loses the most money to scammers, despite people in their 30s being the most connected online. The report highlights how cyber con artists use AI, widely available apps, and social engineering to target older individuals.
The article includes the stories of several elderly victims who were scammed out of thousands of dollars through tactics like the “grandparent scam” and email phishing schemes. The FBI warns that these scams are often transnational and challenging to investigate, leaving victims with little recourse. To combat the issue, some organizations, like the San Diego Elder Justice Task Force and Aura, a technology company, use AI to help prevent and detect scams.
Scientists at Osaka Metropolitan University have developed a groundbreaking AI model that accurately assesses cardiac function and identifies valvular heart disease from chest radiographs. This AI-based method could supplement traditional echocardiography and improve diagnostic efficiency, particularly in settings lacking specialized technicians. The researchers collected 22,551 chest radiographs and associated echocardiograms from 16,946 patients at four facilities between 2013 and 2021.
The AI model was trained on these paired datasets to learn features linking radiographs to echocardiographic findings, and it categorized six selected types of valvular heart disease with high accuracy, achieving an Area Under the Curve (AUC) ranging from 0.83 to 0.92. The AUC was 0.92 for detecting reduced left ventricular ejection fraction, an essential measure for monitoring cardiac function. The researchers believe the model could make doctors’ diagnoses more efficient, particularly in areas lacking specialists or in emergencies, and could benefit patients who have difficulty undergoing echocardiography. This research represents significant progress in integrating AI into medical science and technology to improve patient outcomes.
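For readers unfamiliar with the reported metric: AUC is the probability that a randomly chosen positive case receives a higher model score than a randomly chosen negative one (1.0 is perfect, 0.5 is chance). The pure-Python sketch below computes it via that rank-based definition; the labels and scores are invented toy values, not data from the study.

```python
# Sketch of the Area Under the ROC Curve (AUC) metric via the
# Mann-Whitney U statistic: the fraction of positive/negative pairs
# where the positive case is scored higher (ties count as half).

def roc_auc(labels, scores):
    """AUC for binary labels (0/1) and real-valued scores."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Toy example: the model mostly ranks positives above negatives.
labels = [0, 0, 1, 1, 0, 1]
scores = [0.1, 0.4, 0.35, 0.8, 0.2, 0.9]
print(round(roc_auc(labels, scores), 3))  # prints 0.889
```

An AUC of 0.92, as reported for reduced ejection fraction, means the model ranks a diseased radiograph above a healthy one 92% of the time.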