Elon Musk’s critique of “woke” AI has fed a broader right-wing campaign against generative AI, one that threatens to shape how the technology is operated and regulated. Republican politicians and conservative activists have accused AI companies such as OpenAI of using “woke” training data that promotes a political agenda, fearing that models like ChatGPT may suppress conservative views. This backlash mirrors the right’s earlier opposition to content moderation on social media platforms, which it likewise framed as a plot to silence conservatives. Experts counter that these models remix and regurgitate their training content rather than hold personal viewpoints, and note that the critics overlook how AI systems often exacerbate inequality and disproportionately harm marginalized groups. The campaign has pressured AI companies to appear politically neutral, yet previous conservative attempts to launch their own “anti-woke” AI have failed. What ethical principles and objectives will guide the future of AI, including Musk’s new company xAI, remains to be seen.
LinkedIn’s latest Future of Work report reveals that workers in Singapore are adopting artificial intelligence (AI) skills faster than anywhere else in the world. The report, which analyzed data from 25 countries, found that the share of LinkedIn members in Singapore adding AI skills to their profiles grew 20-fold between January 2016 and the present, far above the global average of eight-fold. Finland, Ireland, India, and Canada also showed high rates of AI skills diffusion.
According to the report, the five fastest-growing AI skills added to member profiles in 2022, such as question answering and classification, all relate to generative AI. At the same time, concerns are rising about whether AI will replace human jobs: a Goldman Sachs report estimates that 300 million jobs worldwide could be affected by AI and automation. LinkedIn also found that 45% of teachers’ skills, including lesson planning and curriculum development, are potentially augmentable by generative AI.
LinkedIn emphasizes the importance of soft skills, such as emotional intelligence and creativity, as AI automation increases. According to LinkedIn and Microsoft, humans remain in control, and judgments must be made about when to rely on AI versus human capabilities.
Khan Academy’s AI tutor, Khanmigo, is being tested by over 8,000 classroom teachers and students. The AI-powered chatbot offers personalized guidance on various subjects, including math, science, and humanities. Additionally, it allows students to interact with AI-powered historical figures and literary characters. Khanmigo aims to provide individualized help to students, addressing the issue of limited teacher availability. Users can provide real-time feedback and rate Khanmigo’s responses, helping to improve its accuracy. The tool also assists teachers in creating lesson plans, identifies struggling students, and offers access to student chat history. Despite some concerns about the misuse of AI in education, Khan Academy believes that AI can be a powerful learning tool when used responsibly. However, there are challenges, as AI models like GPT-4 can sometimes produce incorrect or misleading answers, particularly in math-related problems. Khan Academy acknowledges these limitations and is actively working to improve Khanmigo’s abilities.
A federal judge has ruled that AI-generated artwork cannot be copyrighted, which could have significant implications for Hollywood studios. Plaintiff Stephen Thaler had brought a lawsuit against the US Copyright Office to have his AI system recognized as the sole creator of a piece of artwork named “A Recent Entrance to Paradise.” The judge upheld the Copyright Office’s denial of Thaler’s copyright application, stating that human authorship is a fundamental requirement for copyright. This ruling may have consequences for Hollywood studios involved in the ongoing strike, as AI is a prominent issue in the labor dispute. If studios cannot secure copyright protections for AI-produced work, it could dampen their ambitions to use the technology for writing screenplays or replicating actors’ likenesses. However, the judge also acknowledged that future cases may introduce more complex questions regarding the level of human input needed to qualify as an “author” of AI-generated work.
“Muses & Self: Photographs by Allen Ginsberg” showcases the photography of Allen Ginsberg alongside poems generated by an artificially intelligent language model inspired by his photos. Ginsberg, a writer and poet, initially expressed distaste for experimental cut-up techniques but later embraced them. The exhibition employs a fine-tuned AI language model to generate original poems that capture Ginsberg’s distinctive voice; the AI-generated poems are not reassemblies of vintage Ginsberg lines but entirely new works. The collaboration between Sasha Stiles, co-founder of the AI poetry collective theVERSEverse, and creative technologist Ross Goodwin seeks to enhance Ginsberg’s work by illuminating artistic nuances not perceptible to human readers. The project has received mixed reactions from the poetry community, with some finding it distasteful and others disputing the AI’s interpretation of Ginsberg’s voice. The exhibition also displays Ginsberg’s photographs of other artists and innovators who broke taboos and sought to capture the human spirit in the 20th century. The show aims to demonstrate how 20th-century artists can participate in the digital art revolution without compromising their essence.
Avni Patel Thompson, the founder of Milo, almost abandoned the startup before seeking help from Sam Altman. Venture capitalists recently recognized Milo as one of the most promising startups of 2023. Thompson’s vision was to create a virtual secretary to help manage her children’s schedules, relieving parents of mental burdens. Despite encountering glitches and funding issues, Thompson persevered because she believed in the solution’s necessity. Seeking guidance from Altman led to a new hire, CTO Archa Jain, and funding from OpenAI. Milo allows parents to send various items, such as grocery lists or permission slips, to the AI chatbot, which processes them and suggests adding reminders to calendars. Unlike other “parent tech” startups, Milo sets itself apart with its focus on personal organization. In beta testing, Milo has garnered positive feedback from users, including Sara Ittelson, a partner at Accel. The company aims to iron out bugs before opening to the general public.
Google is experimenting with a new AI tool that could serve as a personal mentor. This move comes after Google caught up to OpenAI’s ChatGPT with its own AI system, Google Bard. The search giant is now looking to stay ahead of competitors like OpenAI, Meta, and Amazon by offering additional AI tools that provide life advice. Google has recruited Scale AI, a contractor that works with the Google DeepMind team, and assembled a team of over 100 industry specialists to evaluate the new tool’s capabilities.
This new strategy contradicts Google’s previous caution about emotional attachment to chatbots, which the company had warned could lead to a “loss of agency.” Nevertheless, the new AI tool is being tested on its ability to answer intimate questions and address challenges in people’s lives.
The tool, described as an AI mentor, would be available around the clock to provide personalized guidance and insights, offering fast solutions, advice, and resources across various areas by drawing on vast reservoirs of information and expertise. Proponents argue that such a mentor’s neutrality could yield unbiased perspectives and that its data analysis capabilities promote data-driven decision-making, potentially transforming lifelong learning, skill development, and personal growth.