Plus, educators turn to oral exams due to AI
OpenAI, the company behind ChatGPT, is considering the development of its own AI chips to address the shortage and high costs of AI processors. The company has been evaluating potential acquisition targets and discussing different options to solve the chip shortage issue. OpenAI relies heavily on expensive AI chips, and the scarcity of these chips has become a major concern for CEO Sam Altman. The company has been using NVIDIA’s graphics processing units (GPUs) to power its technologies, but the costs associated with running and maintaining this hardware are significant.
If ChatGPT queries reach a scale similar to Google search, OpenAI would need billions of dollars' worth of chips each year. Developing custom AI chips would put OpenAI in the same league as Google and Amazon, which design their own chips. However, the decision to proceed would involve a substantial investment and could take several years. Even if OpenAI acquires a chip company, it would still depend on commercial providers in the meantime. Other big tech companies, such as Meta and Microsoft, have faced challenges in building their own processors. Demand for specialized AI chips has surged since the launch of ChatGPT.
A recent incident has exposed a major flaw in the AI features of search engines such as Microsoft's Bing: false information generated by two chatbots was presented as factual search results. The unintentional experiment, conducted by UC Berkeley researcher Daniel Griffin, showed how easily search engines can be manipulated by generative AI output designed to mimic human language.
Griffin instructed the chatbots to summarize a non-existent research paper by mathematician Claude Shannon, and the fabricated responses they produced were displayed by Bing as legitimate information. The incident highlights a known vulnerability of large language models: when a query resembles material in their training data, they can produce highly confident but false statements.
The ramifications of such occurrences are concerning, as millions of people rely on search engines daily. The incident also raises the possibility of intentionally manipulating search results using AI-generated content, which could have broad implications.
A pair of studies published this week shed light on how people's preconceived notions and expectations about AI affect their experiences and trust levels. The studies examined how user expectations influence trust in, and willingness to take advice from, AI tools. One study focused on a mental health chatbot: participants primed with different descriptions of the AI's motives reported markedly different experiences of the same system.
The researchers found that participants’ perceptions of the AI were influenced by their expectations, showing the existence of an AI placebo effect. Another study analyzed conversations with a chatbot and found that people who believed the AI was caring had more positive interactions compared to those who thought the AI was manipulative. The findings highlight the importance of considering the human-AI interaction as a whole and addressing the psychological aspect to create better user experiences.
Stanford University researchers have developed an AI tool called Sherpa to aid in oral exams, with the aim of bringing back this traditional method of assessment in classrooms. Sherpa is designed to help educators evaluate students’ understanding of assigned readings through online conversations. The tool asks a series of questions about the text to test the student’s grasp of key concepts.
The AI system then transcribes the audio from each student's recording and flags passages where the answer seemed off-point; educators can review the recording or transcript to evaluate the student's response. The tool aims to promote discussion and communication skills among students. Some educators have found Sherpa a useful way to assess comprehension, though others note that, unlike a live oral exam, it cannot improvise new follow-up questions on the fly.
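The flagging step described above can be illustrated with a minimal sketch. Sherpa's actual pipeline and scoring method are not public, so everything here is a hypothetical stand-in: it assumes transcripts are already available as text and flags answers that mention too few of the key concepts an educator expects.

```python
# Hypothetical sketch of an "off-point answer" flagger, loosely modeled on
# the workflow described for Sherpa. The function names, scoring rule, and
# threshold are illustrative assumptions, not Sherpa's real implementation.

def concept_coverage(answer: str, key_concepts: list[str]) -> float:
    """Fraction of expected key concepts mentioned in the answer."""
    text = answer.lower()
    hits = sum(1 for concept in key_concepts if concept.lower() in text)
    return hits / len(key_concepts) if key_concepts else 0.0

def flag_answers(transcripts: dict[str, str],
                 key_concepts: list[str],
                 threshold: float = 0.5) -> list[str]:
    """Return IDs of answers that cover too few of the key concepts."""
    return [qid for qid, answer in transcripts.items()
            if concept_coverage(answer, key_concepts) < threshold]

transcripts = {
    "q1": "The author argues that irrigation reshaped the local economy.",
    "q2": "I liked the chapter, it was interesting.",
}
flagged = flag_answers(transcripts, ["irrigation", "economy"])
print(flagged)  # q2 mentions neither key concept, so it is flagged
```

A production system would use speech-to-text plus semantic similarity rather than literal keyword matching, but the shape of the task — score each transcribed answer against expected content, surface the outliers for the educator to review — is the same.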
A CEO has stirred controversy by replacing most of his customer support team with chatbots, signaling a potential end to low-stress, remote jobs built around repetitive tasks. Suumit Shah, CEO of Dukaan, believes that AI is the future of work and will replace workers whose roles consist of manual tasks like copy-pasting and templated email responses.
Shah defended his decision to replace his team with chatbots, citing increased efficiency and reduced costs. Despite facing backlash for his actions, Shah remains steadfast in his belief that AI can optimize workflows and allow companies to focus on more creative and complex tasks. Shah is among a growing list of CEOs interested in adopting AI for productivity gains, as reported by IBM.
As the world celebrates the 66th anniversary of the space age during World Space Week 2023, experts explore the role of AI companions in deep space missions. These AI systems are not only envisioned as tools for operational tasks but also as crucial sources of emotional and mental health support for astronauts experiencing extreme social isolation. Space travelers face unique mental health challenges, with profound feelings of loneliness and desolation in the vast expanse of space.
Although AI companions cannot replace human connection, they can serve as supportive tools for astronauts, offering prompts and guidance to mitigate the detrimental effects of deep space isolation. NASA and the European Space Agency are actively investigating the potential of AI companions for future moon and Mars missions, emphasizing the need for evidence-based research and comprehensive behavioral health countermeasures. Ultimately, human-centric support remains essential for maintaining the mental health and well-being of space explorers.