According to researchers at RAND, Chinese researchers are using generative AI tools to manipulate global audiences and shape perceptions of Taiwan. Tools like ChatGPT could allow the People’s Liberation Army (PLA) and the Chinese Communist Party to create inauthentic content for malign purposes. China has lagged behind Russia in weaponizing disinformation, partly because of its government’s censorship and blocking of foreign media. Generative AI, however, could close that gap by enabling the rapid production of false personas and fabricated news articles.
In particular, generative AI could manufacture the impression of popular support for certain views, manipulate public opinion, and spread false information. Chinese AI researcher Li Bicheng is leading research into improving generative models to produce more convincing synthetic text. The researchers suggest that Taiwan’s 2024 presidential election may be a target of Chinese manipulation, and that social media platforms popular in the United States, such as TikTok and Twitter, must be equipped to guard against this form of manipulation.
In a recent interview on GZERO World with Ian Bremmer, cognitive scientist and AI researcher Gary Marcus discusses the advancements and risks of generative AI. He highlights the capabilities of generative AI tools like ChatGPT and Midjourney, which can generate text and images, respectively. However, Marcus points out that these models struggle with the concept of truth and often produce inaccurate or fabricated information. He emphasizes that as generative AI becomes more widespread, it will affect our lives in both positive and negative ways.
Marcus describes large language models as versatile but unreliable, making them different from traditional AI techniques. He also discusses the need for effective global regulation of AI. The interview provides insights into the underlying technology behind generative AI and the potential implications for society. Viewers can watch GZERO World with Ian Bremmer on gzeromedia.com/gzeroworld or on US public television.
The IRS is utilizing AI to investigate tax evasion at large partnerships, including hedge funds, private equity groups, real estate investors, and law firms. The move is part of the agency’s efforts to target wealthier taxpayers and address complex cases that were previously difficult to handle. The IRS received $80 billion in funding through the Inflation Reduction Act, which aims to crack down on tax cheats and increase federal revenue. However, the funding has been politically contentious, with Republicans claiming that the IRS will use it to harass small businesses and middle-class taxpayers.
The agency plans to open examinations into 75 of the nation’s largest partnerships by the end of the month, using AI to identify patterns and trends that suggest income shielding. Additionally, the IRS intends to increase scrutiny of digital assets and investigate how high-income taxpayers use foreign bank accounts to avoid financial disclosure. Critics argue that the IRS cannot be trusted with taxpayer data and that the use of AI is a way to distance itself from accusations of political bias or inequitable enforcement practices.
A recent study by researchers from Brown University and multiple Chinese universities found that AI chatbots like OpenAI’s ChatGPT can efficiently operate a software company with minimal human intervention. The researchers created a virtual software development company called ChatDev and assigned AI bots distinct roles covering the designing, coding, testing, and documenting stages. The AI bots could communicate and make logical decisions with minimal human input. The experiment found that ChatDev, powered by GPT-3.5 (the model behind ChatGPT), could complete the software development process in under seven minutes and for less than a dollar on average.
The AI bots demonstrated the ability to identify and troubleshoot bugs and vulnerabilities. The study highlighted the efficiency and cost-effectiveness of AI-driven automated software development. However, the researchers acknowledged limitations, such as potential errors and biases in language models, that could affect software creation. Despite these limitations, the findings suggest that powerful generative AI technologies could find applications across industries, saving time and boosting productivity for coders and engineers and potentially helping junior programmers learn the craft.
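The ChatDev setup described above amounts to a chain of role-prompted model calls, with each role handing its output to the next. The sketch below is a minimal illustration of that pattern, not the authors’ actual implementation; the `llm` function is a hypothetical stand-in stub so the example runs without any API.

```python
def llm(system_prompt, message):
    # Placeholder for a real chat-completion call (e.g. to GPT-3.5).
    # Here it just echoes a canned reply so the pipeline is runnable.
    return f"[{system_prompt}] response to: {message}"

# The four stages the study assigns to different AI agents.
ROLES = ["CEO (designing)", "Programmer (coding)",
         "Tester (testing)", "Writer (documenting)"]

def run_chatdev(task):
    """Pass the task through each role in turn, chaining the outputs."""
    artifact = task
    for role in ROLES:
        artifact = llm(f"You are the {role} of a software company.", artifact)
    return artifact

result = run_chatdev("Build a simple to-do list app")
```

With a real model behind `llm`, each stage would refine the previous stage’s artifact (spec, code, test report, docs) rather than echo it.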
The rise of generative AI, AI that can create new content, has led to a surge in freelance AI-expert jobs. Platforms like LinkedIn, Upwork, and Fiverr have seen an increase in AI-related job posts and searches. The demand for AI freelancers is expected to keep growing: 44% of executives in the US plan to expand their use of AI technologies next year. However, AI skills remain scarce among existing industry professionals, creating an opportunity for freelancers with AI expertise.
Skills in computer science, machine learning algorithms, programming languages like Python, and data management and analysis are often needed in AI consulting jobs. Online learning platforms like Udacity have seen increased interest in AI-based courses, and independent education methods, such as YouTube videos and blogs focused on AI skills, are becoming more sought after. Professionals across industries need to learn new AI concepts and tools, but the value of AI is limited by how it is applied to real-world problems.
2023 is considered the “reset year” for quantum computing. Leaders in the military technology and cybersecurity community estimate that it will take around four to six years to make systems quantum-safe, which roughly matches the timeline on which security-threatening quantum computers may become available. Quantum computers use qubits, which, unlike classical bits, can exist in a superposition of 0 and 1; a register of n qubits encodes 2^n amplitudes at once, which is what gives quantum algorithms the potential for exponential speedups over classical computers on certain problems.
Quantum computing has the potential to revolutionize industries and fields including logistics, healthcare, finance, cybersecurity, weather tracking, and agriculture. However, challenges remain, such as linking multiple qubits without increasing the probability of errors and isolating qubits to avoid decoherence. Ongoing research explores different approaches, including photonics and new materials, to make quantum processors more commercially viable, and the development of error-corrected qubits is crucial to realizing the technology’s full potential for transforming society.
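The exponential scaling behind these claims can be made concrete with a toy state-vector simulation: an n-qubit register is described by 2^n amplitudes, so its classical description doubles with every qubit added. The sketch below (pure Python, no quantum hardware involved) applies a Hadamard gate to each of 10 qubits, putting the register into an equal superposition of all 1,024 basis states.

```python
import math

def hadamard_all(n):
    # State vector for n qubits: 2**n amplitudes, starting in |00...0>.
    state = [0.0] * (2 ** n)
    state[0] = 1.0
    # Apply a Hadamard gate to each qubit in turn.
    for q in range(n):
        step = 2 ** q
        new = state[:]
        for i in range(2 ** n):
            if i & step == 0:  # pair basis states differing in qubit q
                a, b = state[i], state[i | step]
                new[i] = (a + b) / math.sqrt(2)
                new[i | step] = (a - b) / math.sqrt(2)
        state = new
    return state

state = hadamard_all(10)
print(len(state))  # 1024 amplitudes for just 10 qubits
```

At 50 qubits the same vector would need about 10^15 amplitudes, which is why simulating quantum computers classically becomes infeasible and why real hardware promises such large speedups for certain problems.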
Meta, formerly Facebook, is making efforts to develop a powerful new chatbot that can rival OpenAI’s GPT-4. The company is acquiring AI training chips and expanding its data centers to enhance the capabilities of the chatbot. CEO Mark Zuckerberg is reportedly pushing for the chatbot to be freely available for companies to create AI tools. Previously, Meta relied on Microsoft’s Azure cloud platform for training, but it is now strengthening its infrastructure to avoid dependency on external platforms.
Meta has assembled a team to expedite the development of AI tools that can mimic human expressions, in line with earlier reports of generative AI features the company has been working on. A June leak revealed that Meta was testing an Instagram chatbot with 30 distinct personalities, possibly hinting at an upcoming launch of AI “personas.”