Humans have made incredible discoveries and inventions throughout history. From tools and technology to art and music, we have pushed the boundaries of what is possible. Now we have created AI, which could be our most groundbreaking invention yet. AI has the potential to answer our most pressing questions, cure diseases, and transform our lives in unimaginable ways. However, it also poses significant risks.
Machines that are smarter than humans could take over jobs, create deepfake videos and fake audio recordings, and even wipe out humanity. Even with only a 1 percent chance of such a catastrophe, some argue that the risks outweigh the benefits. Larry Page, a co-founder of Google, believes that AI could render humans obsolete, yet he still wants to build superintelligent machines. Others in Silicon Valley are racing to create AGI, machines smarter than humans. However, many people are concerned about the lack of ethical consideration behind these efforts and their potential consequences. Ultimately, we must ask whether this new invention is worth the risk and whether we are prepared for the changes it will bring.
Apple quietly integrates AI into its iPhones and Watches without explicitly calling it "artificial intelligence." The company recently unveiled its new iPhone lineup and Apple Watch, which include improved semiconductor designs to power AI features that enhance basic functions like taking calls and capturing better images. Unlike other tech giants such as Microsoft and Google, Apple is not aiming for drastic transformations with AI but is instead using it to improve day-to-day user experiences.
The new Apple Watch Series 9 features a chip with a Neural Engine that speeds up machine learning tasks, making Siri 25% more accurate. The watch's machine learning capabilities also let users control the device with a "double tap" finger-pinching gesture, even when their non-watch hand is busy. Apple has also introduced improved image capture on its phones, automatically blurring backgrounds when a person is detected in the frame. Other smartphone makers, like Google, have also incorporated AI into their devices, with features such as erasing unwanted elements from photos.
Chinese AI startups are facing challenges due to limited resources, economic pressures, and regulatory uncertainty. Building large language models (LLMs), which form the basis of AI systems that mimic human intelligence, requires large amounts of data and computing power. Chinese startups also struggle to collect data because the country lacks an open web. Consequently, many startups focus on applying existing AI models rather than creating their own.
Startups are also affected by the current economic downturn, with falling consumer spending and limited GPU supplies hampering growth. As a result, startups are shifting their focus from revolutionary ambitions to incremental efficiency improvements in order to remain competitive. Regulatory uncertainty is another concern for Chinese AI startups, although the Chinese government is actively encouraging innovation in AI development. To offset the challenges they face in China, some startups are targeting overseas markets. Despite these headwinds, investors remain interested in AI startups that can demonstrate proprietary data and the ability to improve their own models.
Top tech companies including Google, Meta, Microsoft, and X are meeting with federal lawmakers to discuss regulations for AI. The meeting, hosted by Senate Majority Leader Chuck Schumer, marks the beginning of a series of sessions to craft comprehensive legislation regulating the AI industry. The gathering includes CEOs from various companies, as well as officials from the entertainment industry, civil rights groups, and labor organizations.
The push for regulation reflects policymakers' growing awareness of how AI, including generative AI tools like ChatGPT, could disrupt many aspects of life, from job markets to national security. The meeting gives the tech industry an opportunity to shape the design of AI rules, with companies like IBM planning to highlight their AI tools and proposed policy vision. The event will also shed light on the political feasibility of broad AI legislation, setting expectations for what Congress can achieve. Concerns surrounding AI include discrimination against minorities and copyright infringement, prompting calls for democratic processes and broad representation in policy-making. Schumer's involvement in the legislative process underscores the unique challenge AI poses for Congress and the need for a special approach.
Merrick Morton, a photographer known for his historic images of cholo and African American street culture in Los Angeles, had his Instagram account permanently banned, resulting in the loss of over 60,000 followers and an extensive archive of over 500 photographs. Instagram claimed that his photos violated community guidelines on violence or dangerous organizations, although Morton insists that his work is fine art and has been displayed in art galleries worldwide. He believes that his photographs were removed due to the skin tone of the subjects and that the decision was made by AI algorithms.
Morton argues that AI content moderation often perpetuates biases against people of color, as demonstrated by instances in which facial recognition software struggled to recognize famous Black women. Despite appeals and attempts to contact Instagram, Morton has been unable to reinstate his account or receive an explanation for the ban. Critics, including attorney Jessica González, have observed differential treatment by Instagram and its parent company Meta toward content by and about people of color.
During a Senate oversight hearing, the Chair of the United States Securities and Exchange Commission (SEC), Gary Gensler, revealed that the agency is currently using AI to monitor the financial sector for fraud and manipulation. This information had not been publicly disclosed before. Gensler mentioned that the SEC already uses AI in market surveillance and enforcement actions to identify patterns in the market. The agency has asked Congress for more funding to further develop its technology capabilities.
Notably, the SEC had not formally declared its use of AI, although in the U.S. there is no legal requirement for agencies to publicly report their use of new technologies, apart from cybersecurity incidents. The specific form of AI the SEC employs is unclear. However, the agency has published analysis reports on the use of AI and algorithmic trading in financial markets, so it is likely using machine learning algorithms to analyze large amounts of market data. In summary, the SEC is using AI to enhance its surveillance efforts and is seeking increased funding to invest in emerging technologies.