Plus: if AI replaces writers and actors, would we notice?
A team of Australian researchers has been awarded a $600,000 grant from the Office of National Intelligence (ONI) and the Department of Defence to merge human brain cells with AI, according to an article in The Guardian. The team, led by Monash University and Cortical Labs, has already created DishBrain, a collection of lab-grown brain cells capable of playing the video game Pong. The research aims to understand the biological mechanisms behind continuous learning so that future AI machines can replicate the learning capacity of biological neural networks. The researchers believe this technology could eventually outperform existing silicon-based hardware, with significant implications for planning, robotics, advanced automation, brain-machine interfaces, and drug discovery. The news comes as AI leaders call on the Australian government to recognize the potential risks of AI and support research into AI safety.
A new article in The Spectator explores the possibility of AI replacing screenwriters. The author argues that AI could be used to generate scripts that are just as good as, if not better than, those written by humans. AI could also be used to automate many of the tasks that screenwriters currently perform, such as research, character development, and plot planning. The author concludes that AI is likely to have a significant impact on the screenwriting industry in the years to come.
A new study reported in Neuroscience News suggests that AI can be taught to understand social norms. The study found that AI algorithms can learn to associate certain behaviors with positive or negative consequences, much as humans do, which points toward more socially aware AI systems.
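The study's actual methods aren't detailed here, but the core idea — associating behaviors with positive or negative outcomes — can be sketched as a minimal reward-averaging learner. All action names and feedback values below are invented for illustration, not taken from the study:

```python
from collections import defaultdict

class NormLearner:
    """Minimal sketch: estimate which actions draw social approval (+1)
    or disapproval (-1) by averaging observed feedback per action."""

    def __init__(self):
        self.totals = defaultdict(float)  # sum of feedback per action
        self.counts = defaultdict(int)    # number of observations per action

    def observe(self, action, feedback):
        # feedback: +1 for social approval, -1 for disapproval
        self.totals[action] += feedback
        self.counts[action] += 1

    def value(self, action):
        # Average feedback; 0.0 for actions never observed.
        if self.counts[action] == 0:
            return 0.0
        return self.totals[action] / self.counts[action]

learner = NormLearner()
for _ in range(9):
    learner.observe("thank_host", +1)   # mostly approved
learner.observe("thank_host", -1)       # one disapproval
for _ in range(10):
    learner.observe("interrupt", -1)    # consistently disapproved

print(learner.value("thank_host"))  # 0.8
print(learner.value("interrupt"))   # -1.0
```

An agent could then prefer actions with higher learned values — a crude but concrete version of "learning a norm from consequences."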
Tesla CEO Elon Musk has announced that the company’s new Dojo supercomputer will use Nvidia GPU chips. Dojo is designed for training large-scale AI models, such as those that power Tesla’s self-driving software. Nvidia’s GPUs are well suited to this work because they are built for high-performance parallel computing.
A new article in Wired argues that we should not be too concerned about AI destroying humanity. The author points out that AI is still in its early stages of development, and that it is not yet capable of independent thought or action. The author concludes that we should focus on developing AI that is beneficial to humanity, rather than worrying about it becoming a threat.
James Van Der Beek, famous for his role in “Dawson’s Creek,” has expressed concerns about the use of AI in filmmaking and how it could impact the careers of actors and writers. In an Instagram video, Van Der Beek stated that if actors and writers don’t prevail in the current strike, acting and writing may no longer be viable careers in the future. He emphasized the need for protection against studios generating AI scripts and the importance of safeguarding actors’ likenesses and voices. Van Der Beek acknowledged that AI can be a useful tool in storytelling, but warned that in the hands of studios it could become a cost-cutting measure that replaces humans. He urged fans to support actors and writers by booking them or showing their support in other ways. The strike also involves concerns about the disappearance of residuals in the streaming era. This is the first time in over 60 years that actors and writers have gone on strike together.
A new article in TechCrunch describes how the authors used OpenAI’s Generative Pre-trained Transformer 3 (GPT-3) model to generate marketing strategies for a fictional company. The authors found that GPT-3 was able to generate creative and effective marketing strategies that were in line with the company’s goals. This suggests that AI could be used to automate the process of marketing strategy development, freeing up human marketers to focus on other tasks.
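The article doesn't reproduce the authors' prompts, but the workflow — feeding a company's goals to a GPT-style completion model and asking for strategies — can be sketched as below. The company name, product, goals, and prompt wording are all hypothetical, and the API call uses the legacy `openai` completions interface as an assumption about how such a request might be made:

```python
import os

def build_strategy_prompt(company, product, audience, goals):
    """Assemble a marketing-strategy prompt for a GPT-style completion
    model. Field names and wording are illustrative, not the prompt
    the TechCrunch authors used."""
    goal_lines = "\n".join(f"- {g}" for g in goals)
    return (
        f"You are a marketing strategist for {company}, which sells "
        f"{product} to {audience}.\n"
        f"Company goals:\n{goal_lines}\n"
        "Propose three marketing strategies, each with a channel, "
        "a key message, and a success metric."
    )

prompt = build_strategy_prompt(
    company="Acme Analytics",          # hypothetical company
    product="a dashboarding tool",
    audience="mid-size retailers",
    goals=["grow trial signups 20%", "reduce churn"],
)

# Actually sending the prompt requires an API key; this call is a sketch
# using the legacy openai completions endpoint and is skipped otherwise.
if os.environ.get("OPENAI_API_KEY"):
    import openai
    resp = openai.Completion.create(
        model="text-davinci-003", prompt=prompt, max_tokens=400
    )
    print(resp.choices[0].text)
```

The interesting design question is in the prompt: the more concretely the company's goals are encoded, the more the generated strategies can be checked against them, which is presumably how the authors judged the output "in line with the company's goals."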
A new article in The Conversation argues that we should not trust AI blindly. The author points out that AI systems are often trained on biased or incomplete data, which can lead them to make inaccurate or unfair decisions. The author concludes that we need to be careful about how AI is used and should not accept its outputs uncritically.
According to a report from the Wall Street Journal, Google co-founder Sergey Brin has been taking a more active role in the company’s AI efforts. Brin has reportedly been spending three to four days a week at Google’s offices in Mountain View, California, and has been working closely with researchers developing Gemini, Google DeepMind’s next-generation AI model. Gemini is intended to rival OpenAI’s GPT-4 and is designed to be highly efficient and enable future innovations in memory and planning. Brin has been involved in technical discussions, weekly internal meetings on AI research, and personnel matters like hiring researchers. CEO Sundar Pichai is said to be excited about Brin’s involvement and has encouraged his contributions.
A new article in Harvard Business Review explores how to make generative AI greener. Generative AI is a type of AI that can create new content, such as images, text, and music. However, it can be computationally expensive, which carries environmental costs. The author proposes several ways to make generative AI greener, such as using more efficient hardware and reducing the amount of data used to train generative AI models.
The system would use cameras to scan license plates and identify vehicles that have been reported stolen or associated with other crimes. The system would also be able to track the movements of vehicles and identify patterns of suspicious activity.
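The matching logic such a system might use can be sketched minimally: normalize the OCR'd plate text, check it against a watchlist, and flag plates sighted at many distinct cameras. The plate values, camera IDs, and watchlist reasons below are invented examples, not details of any real deployment:

```python
from collections import defaultdict

def normalize_plate(raw):
    """Canonicalize an OCR'd plate string: uppercase and strip
    spaces/dashes so 'abc 123' and 'ABC-123' compare equal."""
    return "".join(ch for ch in raw.upper() if ch.isalnum())

def check_sighting(plate, watchlist):
    """Return the watchlist reason if the plate is flagged, else None."""
    return watchlist.get(normalize_plate(plate))

def repeated_sightings(sightings, min_cameras=3):
    """Flag plates seen at min_cameras or more distinct cameras --
    a crude stand-in for 'patterns of suspicious activity'."""
    seen = defaultdict(set)
    for camera_id, raw_plate in sightings:
        seen[normalize_plate(raw_plate)].add(camera_id)
    return {p for p, cams in seen.items() if len(cams) >= min_cameras}

# Invented watchlist entries for illustration.
watchlist = {
    "ABC123": "reported stolen",
    "XYZ789": "linked to open investigation",
}

print(check_sighting("abc-123", watchlist))  # reported stolen
print(check_sighting("QRS 456", watchlist))  # None
```

A production system would of course add timestamps, location data, and retention rules; the sketch only shows the core lookup and the simplest possible pattern signal.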