Using generative AI to recreate deceased individuals is becoming more feasible with large language models like ChatGPT. However, discussions of this practice rarely account for the significant effort required to keep the dead alive online, including maintaining the automated systems behind them. Content creators rely on the labor of caregivers, human and nonhuman entities, and various technologies to preserve digital legacies over time. The problem is compounded by digital devices, formats, and websites that deteriorate and become obsolete. Early attempts at creating AI-backed replicas of dead humans have proven challenging, and sustaining such creations demands substantial money and resources. Additionally, there are ethical questions about who should have authority over creating these replicas and how they affect the grieving process. The power dynamics, infrastructures, and labor involved in digital production highlight the burden placed on the living to maintain the digital presence of the dead.
Scientists at the University of Toronto have developed a new artificial intelligence (AI) model called single cell generative pre-trained transformer (scGPT) that can analyze gene expression data obtained from single cells. This is a significant advancement over traditional methods that study gene expression averaged across entire cell populations. scGPT uses machine learning to predict the effects of manipulating specific genes and to merge distinct batches of data to identify hidden cell types. The model was trained on more than 10.3 million blood and bone marrow cells, enabling it to learn fundamental relationships among the genes expressed in cells.
The researchers found that scGPT outperforms standard approaches in analyzing single-cell RNA sequencing (scRNA-seq) data. It was particularly effective at merging different batches of scRNA-seq data from human immune cells and at predicting how perturbing genes affects gene expression. However, the model requires significant amounts of data and energy to train, so its widespread adoption among cell biologists remains uncertain. The team hopes to make scGPT accessible by providing educational resources and tutorials. They also plan to extend the model's capabilities to cell types beyond bone marrow and immune cells.
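Models in this family treat a cell's gene-expression profile much like a sentence, turning each expressed gene into a token the transformer can attend over. The sketch below is purely illustrative (it is not the scGPT authors' code, and the binning scheme is an assumption): it converts one cell's raw counts into (gene, expression-bin) tokens, the kind of input such a model would consume.

```python
def tokenize_cell(expression, n_bins=5):
    """Convert one cell's gene-expression profile into (gene, bin) tokens.

    expression: dict mapping gene name -> raw count for a single cell.
    Genes with zero counts are dropped; nonzero counts are mapped into
    one of n_bins equal-width bins so each token captures a gene's
    relative expression level rather than its raw count.
    Illustrative only -- not scGPT's exact preprocessing.
    """
    expressed = {g: c for g, c in expression.items() if c > 0}
    if not expressed:
        return []
    lo, hi = min(expressed.values()), max(expressed.values())
    span = (hi - lo) or 1  # avoid division by zero if all counts are equal
    tokens = []
    # Order tokens from most- to least-expressed gene.
    for gene, count in sorted(expressed.items(), key=lambda kv: -kv[1]):
        bin_id = min(int((count - lo) / span * n_bins), n_bins - 1)
        tokens.append((gene, bin_id))
    return tokens

# Example cell with hypothetical counts for a few marker genes:
cell = {"CD3E": 40, "CD8A": 12, "GAPDH": 90, "FOXP3": 0}
print(tokenize_cell(cell))  # [('GAPDH', 4), ('CD3E', 1), ('CD8A', 0)]
```

In a full pipeline, these tokens would be embedded and fed to a transformer; masking some tokens and predicting them is what lets the model learn gene-gene relationships from millions of cells.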
The rise of deepfake technology and AI has led to a surge in cyber fraud, with criminals using realistic computer-generated voices and masks to deceive unsuspecting individuals. Regulators, law enforcement agencies, and the financial industry are increasingly concerned about the potential impact of these scams. The US Federal Trade Commission Chair, Lina Khan, has called for heightened vigilance and increased efforts from law enforcement to combat this growing issue. The world's banking industry is urgently working to contain the risk posed by these sophisticated fraud techniques. With AI being used to "turbocharge" fraud, individuals need to remain vigilant and stay informed about the latest developments to avoid falling victim to such schemes.
A detailed report by Everypixel reveals the astonishing output of text-to-image algorithms. The report quantifies the impact of four text-to-image software programs: Midjourney, DALL·E 2, Stable Diffusion, and Adobe Firefly. In just 12 months, these algorithms generated more than 15 billion images, a volume of imagery that took human photographers 150 years to produce. OpenAI's DALL·E 2, available to all users since September 2022, generated approximately 916 million images in 15 months. Midjourney, launched in July 2022, has 15 million registered users and 964 million images created. Stability AI's Stable Diffusion, released in August 2022, has produced 690 million images through official channels, and the report estimates that over 11 billion more were generated through unofficial platforms. Combined, the four programs have created over 15 billion AI-generated images, surpassing the entire library of Shutterstock. While AI-generated images are fascinating, the report also highlights the dark side of AI fakes and the potential for misinformation.
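The headline figure implies a remarkable generation rate. A back-of-the-envelope check, assuming the report's combined total of roughly 15 billion images over a 12-month window:

```python
# Rough sanity check of the Everypixel figures (approximate, not from the report).
total_images = 15_000_000_000   # combined AI-generated images cited
days = 365                      # roughly the 12-month window
per_day = total_images / days
per_second = per_day / 86_400   # 86,400 seconds in a day
print(f"~{per_day / 1e6:.0f} million images per day, ~{per_second:,.0f} per second")
```

That works out to about 41 million images per day, or several hundred every second, which helps explain how the total overtook a stock library built over decades.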
IBM’s chairman and CEO, Arvind Krishna, stated in an interview with CNBC that white-collar jobs will be among the first to be affected by artificial intelligence (AI). Krishna highlighted the potential of generative AI and large language models to increase productivity and reduce the need for human workers, saying he expects back-office, white-collar work to be the first roles to feel AI's impact. He also explained that the decline in the working-age population necessitates improved productivity through AI to maintain a high quality of life. Krishna emphasized that AI is not intended to displace human workers but to augment their work. IBM, an early adopter of AI technology, has invested in its own AI platform and recently announced watsonx, an AI-building tool. The company has also paused hiring for roles expected to be replaced by AI and automation systems. IBM currently employs over 288,000 people worldwide.
Scientists from Moorfields Eye Hospital and the UCL Institute of Ophthalmology have used artificial intelligence (AI) to analyze eye scans and identify markers of Parkinson’s disease. The study used a type of 3D scan called optical coherence tomography (OCT) to examine the retinas of over 150,000 patients. The scans revealed physical differences between the eyes of those with Parkinson’s and those without the disease, detecting these markers an average of seven years before symptoms appeared. The researchers believe the method could serve as a pre-screening tool with a significant public-health impact. The findings have been published in the medical journal Neurology. Because OCT is a non-invasive and routine procedure, it could be implemented in the NHS to detect Parkinson’s and other neurodegenerative diseases early, allowing interventions that may slow their progression.