Monolithic 3D integration of electronics built entirely from 2D materials is demonstrated through the performance of artificial intelligence tasks.

A new study, published in PLOS ONE, has uncovered a connection between individuals’ musical preferences and their moral values, shedding new light on the influence that music can have on our moral compass.
The research, conducted by a team of scientists at Queen Mary University of London and ISI Foundation in Turin, Italy, employed machine learning techniques to analyze the lyrics and audio features of individuals’ favorite songs, revealing a complex interplay between music and morality.
“Our study provides compelling evidence that music preferences can serve as a window into an individual’s moral values,” stated Dr. Charalampos Saitis, one of the senior authors of the study and Lecturer in Digital Music Processing at Queen Mary University of London’s School of Electronic Engineering and Computer Science.
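To make that kind of analysis concrete, the sketch below predicts a single moral-value score from song-level lyric and audio features using ridge regression. The features, synthetic data, and model choice are illustrative assumptions, not the authors’ pipeline.

```python
# Illustrative only: regress a listener-level moral-value score on features
# of their favorite songs. Data here are synthetic; the real study derived
# features from lyrics and audio and targets from questionnaire responses.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Hypothetical per-listener features, e.g. mean lyric sentiment, frequency
# of moral vocabulary in lyrics, and audio descriptors such as valence.
X = rng.random((200, 4))

# Hypothetical target: one self-reported moral-value score per listener,
# simulated as a noisy linear function of the features.
y = X @ np.array([0.5, 1.0, -0.3, 0.2]) + rng.normal(0.0, 0.1, size=200)

model = Ridge(alpha=1.0)
r2 = cross_val_score(model, X, y, cv=5, scoring="r2").mean()
print(f"mean cross-validated R^2: {r2:.2f}")
```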
OAK RIDGE, Tenn. — At Oak Ridge National Laboratory, the government-funded science research facility nestled between Tennessee’s Great Smoky Mountains and Cumberland Plateau that is perhaps best known for its role in the Manhattan Project, two supercomputers are currently rattling away, speedily making calculations meant to help tackle some of the biggest problems facing humanity.
You wouldn’t be able to tell from looking at them. A supercomputer called Summit mostly comprises hundreds of black cabinets filled with cords, flashing lights and powerful graphics processing units, or GPUs. The sound of tens of thousands of spinning disks on the computer’s file systems, and air cooling technology for ancillary equipment, make the device sound somewhat like a wind turbine — and, at least to the naked eye, the contraption doesn’t look much different from any other corporate data center. Its next-door neighbor, Frontier, is set up in a similar manner across the hall, though it’s a little quieter and the cabinets have a different design.
Yet inside those arrays of cabinets are powerful specialty chips and components collectively capable of training some of the largest AI models known. Frontier is currently the world’s fastest supercomputer and Summit the seventh-fastest, according to rankings published earlier this month. Now, as the Biden administration boosts its focus on artificial intelligence and touts a new executive order for the technology, there’s growing interest in using these supercomputers to their full AI potential.
Limits of large language models in precision medicine.

Treating cancer is becoming increasingly complex, but it also offers more and more possibilities. After all, the better a tumor’s biology and genetic features are understood, the more treatment approaches there are. Offering patients personalized therapies tailored to their disease requires laborious, time-consuming analysis and interpretation of many different kinds of data. Researchers at Charité — Universitätsmedizin Berlin and Humboldt-Universität zu Berlin have now studied whether generative artificial intelligence (AI) tools such as ChatGPT can help with this step. This is one of many projects at Charité analyzing the opportunities AI unlocks in patient care.
If the body can no longer repair certain genetic mutations itself, cells begin to grow unchecked, producing a tumor.
The crucial factor in this phenomenon is an imbalance between growth-inducing and growth-inhibiting factors, which can result, for example, from changes in oncogenes (genes with the potential to cause cancer).
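As a toy illustration of the kind of query such a study might evaluate, the snippet below sends a hypothetical mutation profile to a general-purpose LLM through the openai Python client. The model name, prompt, and profile are assumptions for illustration, not the Charité team’s protocol.

```python
# Toy example: ask a general-purpose LLM about a (hypothetical) tumor
# mutation profile. Real clinical use would require expert review.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

profile = "KRAS G12C, TP53 R175H, STK11 loss"  # hypothetical profile

response = client.chat.completions.create(
    model="gpt-4",  # illustrative model choice
    messages=[
        {"role": "system",
         "content": "You assist, but do not replace, a molecular tumor board."},
        {"role": "user",
         "content": f"For a tumor with this mutation profile: {profile}, "
                    "list targeted-therapy options worth discussing, "
                    "with the main caveats for each."},
    ],
)
print(response.choices[0].message.content)
```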
Procurement professionals face challenges more daunting than ever. Recent years’ supply chain disruptions and rising costs, deeply familiar to consumers, have had an outsize impact on business buying. At the same time, procurement teams are under increasing pressure to supply their businesses while also contributing to business growth and profitability.
Deloitte’s 2023 Global Chief Procurement Officer Survey reveals that procurement teams are now being called upon to address a broader range of enterprise priorities. These range from driving operational efficiency (74% of respondents) and enhancing corporate social responsibility (72%) to improving margins via cost reduction (71%).
“We want the robot to ask for enough help such that we reach the level of success that the user wants. But meanwhile, we want to minimize the overall amount of help that the robot needs,” said Allen Ren, a graduate student at Princeton University and the study’s lead author.
A recent study presented at the 7th Annual Conference on Robot Learning examines a new method for teaching robots to ask for further instructions when carrying out tasks, with the goal of improving robotic safety and efficiency. The study, conducted by a team of engineers from Google and Princeton University, could help in the design of better-functioning robots that mirror human traits such as humility. Engineers have recently begun using large language models, or LLMs, the technology that powers ChatGPT, to make robots more human-like, but the approach comes with drawbacks.
“Blindly following plans generated by an LLM could cause robots to act in an unsafe or untrustworthy manner, and so we need our LLM-based robots to know when they don’t know,” said Dr. Anirudha Majumdar, an assistant professor of mechanical and aerospace engineering at Princeton University and a co-author of the study.
For the study, the researchers used this LLM method with robotic arms in laboratories in New York City and Mountain View, California. The robots were asked to perform a series of tasks, such as placing bowls in the microwave or rearranging items on a counter. The LLM algorithm assigned probabilities to the candidate actions, and whenever more than one option cleared the probability threshold (a sign that the instruction was ambiguous), the robot asked for help. For example, a human would ask the robot to place one of two bowls in the microwave without saying which one. With the probability split between the two bowls, the algorithm would trigger the robot to ask which bowl was meant.
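A minimal sketch of that trigger logic might look like the following. The candidate actions, scores, threshold value, and function names are hypothetical stand-ins, not the researchers’ implementation.

```python
# Sketch of an "ask for help" trigger: act only when exactly one candidate
# action is plausible; otherwise request clarification from the human.
# Scores, threshold, and names here are illustrative assumptions.

def plausible_options(options, scores, threshold=0.25):
    """Keep every action whose normalized score clears the threshold."""
    total = sum(scores)
    return [opt for opt, s in zip(options, scores) if s / total >= threshold]

def decide(options, scores):
    """Execute if one plausible action remains; otherwise ask for help."""
    plausible = plausible_options(options, scores)
    if len(plausible) == 1:
        return f"executing: {plausible[0]}"
    return f"asking for help: which of these did you mean? {plausible}"

# An ambiguous instruction ("place one of the two bowls in the microwave")
# leaves the hypothetical LLM scores nearly tied, so the robot asks.
options = ["place the metal bowl in the microwave",
           "place the plastic bowl in the microwave"]
scores = [0.48, 0.46]
print(decide(options, scores))
```

Because both bowls survive the threshold, the robot asks rather than guessing; with a lopsided score such as 0.9 versus 0.05, it would act on the single plausible option instead.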
LONDON, Nov 30 (Reuters) — The president of tech giant Microsoft (MSFT.O) said there is no chance of super-intelligent artificial intelligence being created within the next 12 months, and cautioned that the technology could be decades away.
OpenAI cofounder Sam Altman earlier this month was removed as CEO by the company’s board of directors, but was swiftly reinstated after a weekend of outcry from employees and shareholders.
Reuters last week exclusively reported that the ouster came shortly after researchers had contacted the board, warning of a dangerous discovery they feared could have unintended consequences.