Plus, Google Delays Gemini AI
Google is facing a setback as it delays the release of its conversational AI, Gemini, in its bid to compete with OpenAI and Elon Musk’s Grok. The company had initially promised select cloud customers and business partners access to Gemini by November, but it has now told them to expect it in the first quarter of next year. The delay comes at an inconvenient time for Google: its cloud sales growth has slowed relative to Microsoft’s, which has successfully sold OpenAI’s technology to its own customers.
A recent survey conducted by Salesforce reveals that many employees are using generative AI without proper authorization in the workplace. The survey found that 28% of workers are using generative AI, with more than half of them doing so without approval. Additionally, 32% of participants plan to start using generative AI in the near future. The data highlights the urgent need for companies to establish clear policies on the use of AI tools in order to mitigate potential risks and protect their businesses. The survey also shows that generative AI improves employee productivity and engagement, leading to increased job satisfaction. However, the lack of guidelines and training is fueling unethical behavior, with some employees passing off AI-generated work as their own.
Google DeepMind has developed an AI called GraphCast that can deliver 10-day weather forecasts with unparalleled speed and accuracy. Unlike traditional physics-based models, GraphCast analyzes patterns in historical weather data to make predictions. By training on almost four decades of data and using a unique divide-and-conquer strategy, the AI has demonstrated its ability to forecast extreme weather events well in advance. Its predictions could potentially save lives and give communities valuable time to prepare for upcoming weather phenomena.
In a groundbreaking development, generative AI has once again proven its capabilities by passing the Multistate Professional Responsibility Examination (MPRE), an essential test for aspiring lawyers. Researchers at LegalOn Technologies conducted the exam, which measures knowledge of professional conduct rules, using leading LLMs including OpenAI’s GPT-4 and GPT-3.5, Anthropic’s Claude 2, and Google’s PaLM 2 Bison. GPT-4 outperformed all others, correctly answering 74% of the questions and surpassing the average human test-taker. This achievement is considered a milestone in legal technology, as AI demonstrates its potential to support ethical decision-making while emphasizing that ultimate responsibility still rests with legal professionals.
Accenture’s European technology leader, Jan Willem Van Den Bremen, shared his thoughts on the potential of generative AI to revolutionize the workplace during a recent Workday conference in Barcelona. Van Den Bremen predicts that the technology could free up approximately 40% of working hours across industries, enabling employees to focus on other important tasks. As companies explore the possibilities offered by generative AI, traditional job roles could undergo significant transformations. Van Den Bremen suggests a future in which computer programmers transition from writing programs to validating programs developed by AI. This underscores how the rise of AI is prompting organizations to reimagine the responsibilities and tasks assigned to their workforce.
The idea of Universal Basic Income (UBI) is gaining traction as a potential solution to wage inequality and job insecurity caused by AI advancements. Some experts believe that as AI takes over jobs traditionally held by white-collar workers, it will push them into poorly paid, insecure work, driving down wages and worsening inequality. UBI aims to address this by providing financial support to individuals affected by automation and removing conditions traditionally imposed on job seekers. Proponents argue that UBI could not only alleviate poverty and stimulate entrepreneurship but also lead to new societal definitions of work and support for unpaid caregivers. However, skeptics highlight challenges in implementing UBI, such as funding and the risk of perpetuating unstable job opportunities.
Discord, the popular chat and communities app, is shutting down its AI chatbot, Clyde. Starting December 1st, users will no longer be able to interact with Clyde via DMs, group DMs, or server chats. The reason behind the sudden shutdown remains unclear, and it is uncertain whether the chatbot will return as a paid, Nitro-only feature. Despite this setback, Discord continues to explore AI applications, such as AI-generated conversation summaries, aiming to position itself as a platform for AI developers. It remains to be seen how Clyde’s departure will affect the Discord community and the company’s future AI endeavors.
Ed Newton-Rex, the head of audio at tech firm Stability AI, has resigned from his position due to the company’s stance on using copyrighted work without permission. Newton-Rex believes that it is “exploitative” for AI developers to use creative content without consent, while Stability AI argues that it falls under “fair use.” This disagreement highlights the ongoing debate about the use of copyrighted material to train AI tools. Newton-Rex, a choral composer himself, expresses concerns that AI could potentially replace creators and their work. While he remains optimistic about