
AI SUPERPOWERS GLOBAL GDP

Plus, can AI really improve dating?

AI could add 10% to global GDP by 2032

David Shrier, a leading expert on technology-driven change, predicts that AI could add 10% to global GDP by 2032. Shrier argues that intense concern and focus are needed to work out how AI will affect society and how to harness it to tackle major challenges. He coins the term “flash growth” to describe the sudden, rapid adoption of new technologies, made possible by decades of government investment in telecommunications infrastructure.

With technology adoption now happening in weeks or even days, government responses need to keep pace. Shrier recommends parallel streams of regulation that involve multiple stakeholders in a consultative process, and he favors principles-based regulation over rules-based regulation to avoid hampering innovation. The UK ranks among the highest in AI productivity per capita globally, behind only the US and China. Significant government investment and cooperation with the private sector are crucial for harnessing AI’s potential. However, there is concern about the concentration of power in a few companies, underscoring the need for a diverse range of stakeholders to shape the future of AI.

How AI is Being Used to Prepare for Future Epidemics

Researchers are exploring the applications of AI in scenario planning for future epidemics. AI models like ChatGPT are being tested to interpret health data, predict hospital stays, and incorporate human behavior into epidemic models for outbreak analysis. The World Health Organization highlights the potential of AI in detecting and responding to epidemics but emphasizes the need for unbiased data drawn from diverse populations. A study by Yale University demonstrates the use of AI-powered platforms to triage patients and manage hospital overflow during viral outbreaks.

However, these models have limitations: they require more data and further studies to ensure their results generalize. AI has also been employed in healthcare planning, offering insights into interventions such as lockdowns or increased staffing capacity. Additionally, researchers at Virginia Tech are using AI to represent human behavior in epidemic models, with promising results. Although AI holds potential in pandemic applications, continuous research and development are necessary to improve the safety and robustness of these solutions and to address their limitations.
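To make the idea of feeding human behavior into an epidemic model concrete, here is a minimal sketch in Python: a toy SIR-style simulation in which the effective contact rate falls as prevalence rises. This is only an illustration of the general technique, not the Virginia Tech model, and every parameter value is hypothetical.

```python
def sir_with_behavior(beta=0.3, gamma=0.1, k=50.0, days=160, i0=1e-3):
    """Toy SIR model in which contacts drop as prevalence rises (behavioral feedback)."""
    s, i, r = 1.0 - i0, i0, 0.0
    history = []
    for day in range(days):
        # Hypothetical behavioral feedback: transmission falls when infections are high.
        beta_eff = beta / (1.0 + k * i)
        new_inf = beta_eff * s * i   # daily new infections (fraction of the population)
        new_rec = gamma * i          # daily recoveries
        s, i, r = s - new_inf, i + new_inf - new_rec, r + new_rec
        history.append((day, s, i, r))
    return history

if __name__ == "__main__":
    for day, s, i, r in sir_with_behavior()[::20]:
        print(f"day {day:3d}  S={s:.3f}  I={i:.3f}  R={r:.3f}")
```

In practice, the behavioral feedback would be learned from mobility or survey data rather than hard-coded as a single damping constant, which is where the AI component comes in.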

United States NSA Launches AI Security Center to Safeguard National Security

The United States National Security Agency (NSA) has established an AI Security Center to oversee the integration and development of AI capabilities within the country’s defense and intelligence services. General Paul Nakasone, Director of the NSA and head of US Cyber Command, highlighted the growing importance of AI in national security and underscored the need to maintain the US’s advantage in AI development.

The AI Security Center will be integrated into the NSA’s existing Cybersecurity Collaboration Center, acting as a focal point for promoting the secure adoption of AI capabilities across the national security enterprise and the defense industrial base. Nakasone emphasized the significance of AI in diplomatic, technological, and economic matters and stressed the need to prevent malicious foreign actors from accessing US innovations in AI.

While AI assists with analysis, Nakasone underlined that final decisions remain with humans. The establishment of the AI Security Center follows an NSA study that identified securing AI models as a vital national security concern, particularly with the emergence of generative AI technologies.

Google Introduces Google-Extended Switch for Publishers to Opt Out of AI Training Data

Google has unveiled a new control called Google-Extended, which lets website publishers choose whether their data is used to train Google’s AI models. With this tool, sites can still be crawled and indexed by Google’s bots while keeping their content out of AI training. The purpose of Google-Extended is to let publishers manage whether their websites help improve generative AI APIs such as Bard and Vertex AI.
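Google-Extended is exposed as a robots.txt user-agent token rather than a separate crawler, so opting out amounts to a robots.txt change. The snippet below is a minimal sketch using Python’s standard-library robotparser to check such a rule; the robots.txt contents and example URL are hypothetical.

```python
from urllib import robotparser

# Hypothetical robots.txt: ordinary crawling stays allowed, but the
# Google-Extended token used for AI training is disallowed site-wide.
ROBOTS_TXT = """\
User-agent: Google-Extended
Disallow: /

User-agent: *
Allow: /
"""

parser = robotparser.RobotFileParser()
parser.parse(ROBOTS_TXT.splitlines())

# Googlebot can still fetch and index the page...
print(parser.can_fetch("Googlebot", "https://example.com/article"))        # True
# ...while the Google-Extended token is blocked, keeping the page out of AI training.
print(parser.can_fetch("Google-Extended", "https://example.com/article"))  # False
```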

This development comes in response to concerns raised by various websites, including The New York Times, CNN, Reuters, and Medium, which have blocked OpenAI’s web crawler to prevent data scraping for training ChatGPT. However, blocking Google entirely is not a viable option for these sites as it would impact their search visibility. Instead, some sites have opted to legally limit Google’s access to their content through updated terms of service.

Bumble CEO Reveals Plans to Utilize AI Technology for Enhanced Dating Experience

At the Code Conference 2023, Bumble’s CEO, Whitney Wolfe Herd, announced the company’s intent to leverage AI to revolutionize the dating experience. Herd envisioned AI as a “supercharger to love and relationships,” enabling intelligent matchmakers within Bumble’s dating app. AI could help users find compatible partners, save time spent swiping, and offer coaching on effective dating techniques. Contrary to popular concerns, Bumble aims to blend AI with human interaction rather than replace it.

In addition, Bumble is exploring the use of AI to train users and alleviate dating anxieties. By swiftly sorting through potential daters based on shared values, preferences, and interests, AI could present users with the most compatible matches. Furthermore, Bumble is developing an exclusive tier of service, priced higher than the current Bumble Premium, that offers AI-driven matchmaking.
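As a rough illustration of what sorting potential daters by shared values, preferences, and interests could look like, the sketch below ranks hypothetical profiles by the Jaccard overlap of their stated interests. It is a toy example under made-up data, not Bumble’s actual matchmaking logic.

```python
def jaccard(a: set[str], b: set[str]) -> float:
    """Overlap of two interest sets: |A ∩ B| / |A ∪ B|."""
    return len(a & b) / len(a | b) if a | b else 0.0

# Hypothetical profiles; a real system would also weight values, deal-breakers, etc.
me = {"hiking", "live music", "cooking", "travel"}
candidates = {
    "Alex": {"hiking", "travel", "photography"},
    "Sam": {"gaming", "live music", "cooking"},
    "Riley": {"running", "painting"},
}

# Rank candidates by similarity to the user's interests, most compatible first.
ranked = sorted(candidates.items(), key=lambda kv: jaccard(me, kv[1]), reverse=True)
for name, interests in ranked:
    print(f"{name}: {jaccard(me, interests):.2f}")
```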

The Impact of AI on Religious Belonging: A Reflection on the Role of Traditional Forms of Inquiry

The decline in interest in attending worship and joining faith communities, commonly referred to as “de-churching,” mirrors the rise of AI programs like ChatGPT. These programs provide easy answers to complex questions, threatening the authority of traditional sources like books, courses, and teachers. As AI tools promise broader knowledge and democratized access, they challenge the constraints posed by time, amusement, and doctrine associated with traditional forms of religious inquiry.

However, the author reflects on a personal experience with a professor who advised returning to a religious community to seek answers to doubts, rather than relying on AI. The professor recognized the importance of living out the truth through encounters with others and emphasized the value of collective worship and transformative knowledge gained from personal experiences in a community. While AI offers quick information, it lacks the relational and affective context necessary for true understanding and formation. By exploring the alternatives of de-churching and AI, the author highlights the enduring value of traditional forms of inquiry and the potential for truthful lives well-lived in religious communities.

Workplace Study Finds ChatGPT AI Can Lead to Mistakes and Limit Diversity of Thought

A recent study conducted by researchers from Harvard, MIT, Wharton, and Boston Consulting Group’s think tank has revealed that the use of OpenAI’s GPT-4 technology by knowledge workers can have both positive and negative impacts. While AI helped strategy consultants produce better content and work faster in many tasks, it also led to mistakes and stifled diversity of thought.

The study involved 758 consultants who were given complex tasks to replicate real-world workflows. The consultants who received AI access and guidance performed consistently better than those with AI access only, as well as those without AI. However, when it came to tasks that fell outside the AI model’s capabilities, consultants using AI were more likely to make mistakes and less likely to produce correct solutions.

The researchers coined the term “jagged technological frontier” to describe the uneven nature of AI capabilities. Even highly skilled workers struggled to navigate what AI can and can’t do well, and relying on AI outputs reduced diversity of thought by 41%.

The study highlights the importance of validating and interrogating AI and the need to apply experts’ judgment when working with AI. However, critics argue that boosting business-consultant productivity with AI is simply maximizing output for a profession that generates “bulls–t.”

AI predicts how many earthquake aftershocks will strike — and their strength

Machine learning models are showing promise in improving earthquake forecasts, according to three new papers. These deep-learning models have performed better than conventional forecast models for predicting aftershocks following major earthquakes. While the findings are preliminary and limited to specific situations, they represent a step toward using AI to reduce seismic risk. Statistical analyses and machine learning techniques can help seismologists understand broader trends, such as the number of expected aftershocks after a large earthquake. 

By training AI models on earthquake catalogs, researchers have been able to uncover previously undetected small earthquakes and improve earthquake forecast models. The AI models have shown better accuracy in predicting the magnitude and occurrence of quakes, reducing the chance that a major earthquake arrives unanticipated. While not a silver bullet, these AI models hold promise for everyday earthquake forecasting and could eventually be incorporated into official forecasting methods. However, earthquake preparedness measures remain essential regardless of improved forecasting capabilities.
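For context on the “conventional forecast models” these deep-learning approaches are benchmarked against, the sketch below evaluates the modified Omori (Omori–Utsu) law, a standard statistical description of how aftershock rates decay with time after a mainshock. The parameter values here are purely illustrative; in practice they are fitted to each sequence’s earthquake catalog.

```python
def omori_rate(t, K=100.0, c=0.1, p=1.1):
    """Modified Omori (Omori-Utsu) aftershock rate at t days after the mainshock."""
    return K / (t + c) ** p

def expected_aftershocks(t1, t2, K=100.0, c=0.1, p=1.1, steps=10_000):
    """Expected number of aftershocks between t1 and t2 days (midpoint-rule integration)."""
    dt = (t2 - t1) / steps
    return sum(omori_rate(t1 + (i + 0.5) * dt, K, c, p) * dt for i in range(steps))

if __name__ == "__main__":
    # Illustrative parameters only; real values are estimated per aftershock sequence.
    print(f"Expected aftershocks, days 0-7:  {expected_aftershocks(0.0, 7.0):.0f}")
    print(f"Expected aftershocks, days 7-30: {expected_aftershocks(7.0, 30.0):.0f}")
```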
