Plus, Woke AI is a bad idea
A new AI-powered technology has been shown to drastically reduce the treatment time for cancer radiotherapy. The technology, developed by a team of researchers at the University of California, Berkeley, uses machine learning to identify and target cancer cells more precisely. This allows for shorter, more effective treatment sessions, which can reduce patient discomfort and improve quality of life.
Japan is considering adopting AI regulations that are less strict than those proposed by the European Union. The Japanese government believes that its more flexible approach will allow the country to remain competitive in the global AI race. However, some critics argue that the softer rules could pose a risk to public safety and privacy.
Nvidia has acquired an AI startup called Ansys Lumerical for $1.2 billion. Lumerical develops software for compressing machine learning models, making them more efficient and easier to deploy on edge devices. The acquisition is part of Nvidia’s broader strategy to become a leader in the AI market.
A watchdog group is claiming that the Biden administration is pushing to make AI “woke” and adhere to a far-left agenda. The group, called the National Center for Public Policy Research, cites a number of examples, including the administration’s support for AI projects that promote diversity and inclusion. The group also claims that the administration is trying to censor AI research that it deems to be “offensive.”
ChatGPT and other generative AI models are capable of producing text, images, and code that is indistinguishable from human-created content. This has the potential to revolutionize a wide range of industries, but it also raises concerns about its impact on employment. Some experts worry that generative AI could lead to mass unemployment as machines become capable of doing many jobs currently done by humans. Others argue that generative AI will actually create new jobs by opening up new possibilities for creativity and innovation.
Google DeepMind’s Gemini is a new AI model designed to be more efficient and effective than its predecessors. Gemini can learn from data much faster than other AI models and can generalize its knowledge to new situations, making it a promising candidate for a wide range of applications, including healthcare, finance, and transportation.
Stocks of AI-related companies are booming as investors bet on the future of artificial intelligence. The Nasdaq Composite index, which is home to many AI stocks, has gained more than 20% in the past year. Among the biggest gainers is Nvidia, which makes AI chips, while privately held firms such as OpenAI have seen their valuations soar. The AI boom is being driven by several factors, including the increasing availability of data, the falling cost of computing power, and the growing demand for AI-powered products and services.
Valve, the company behind the Steam gaming platform, has announced that it will not publish games that feature copyright-infringing AI assets. This includes AI-generated images, music, or text that closely resemble copyrighted works. Valve’s decision responds to concerns that AI-generated content could be used to violate copyright law, and the company says it wants to protect its users from exposure to infringing content.
Approximately 2.24 billion tonnes of solid waste was produced globally in 2020, a figure the World Bank expects to rise by 73% to 3.88 billion tonnes by 2050. Mikela Druckman, founder of UK start-up Greyparrot, believes regulations and packaging design will have to change to tackle the issue. Greyparrot uses AI to track billions of waste objects and has helped inform regulators about which materials are problematic. Waste management and climate change are “interlinked”, Druckman said, adding that stricter regulations would have a “very big impact on the value chain”. She called on big brands and other producers to use data generated by firms like Greyparrot to design reusable products.
The use of AI to generate images and text has the potential to create deception and mistrust in political campaigns, according to experts. Campaigns are increasingly expected to use AI-generated imagery in advertisements, and AI-generated content could be used to produce opposition messaging quickly, inundating the public with misleading information. The technology has advanced to the point that humans struggle to identify manipulated images and recordings. Experts warn that the growing use of AI-generated content in political ads is likely to deepen the public’s existing distrust of political communications, with candidates potentially exploiting the technology to manipulate images and text in order to raise funds and discredit opponents. The ethical use of AI-generated content will depend on its truthfulness, verifiability, and relevance.