
A connection between time-varying networks and transport theory opens prospects for developing predictive equations of motion for networks.

Many real-world networks change over time. Think, for example, of social interactions, gene activation in a cell, or strategy making in financial markets, where connections and disconnections occur all the time. Understanding and anticipating these microscopic kinetics is an overarching goal of network science, not least because it could enable the early detection and prevention of natural and human-made disasters. A team led by Fragkiskos Papadopoulos of Cyprus University of Technology has gained groundbreaking insights into this problem by recasting the discrete dynamics of a network as a continuous time series [1] (Fig. 1). In doing so, the researchers have discovered that if the breaking and forming of links are represented as a particle moving in a suitable geometric space, then its motion is subdiffusive—that is, slower than it would be if it diffused normally.
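Subdiffusion is typically diagnosed by how a trajectory's mean squared displacement (MSD) grows with lag time: MSD(t) ∝ t^α, with α ≈ 1 for normal diffusion and α < 1 for subdiffusion. As a generic illustration of that diagnostic (not the paper's actual analysis pipeline), the sketch below estimates α from a one-dimensional trajectory via a log-log fit; an ordinary random walk recovers α ≈ 1, whereas a subdiffusive trajectory would yield a smaller exponent.

```python
import numpy as np

rng = np.random.default_rng(42)

def msd_exponent(traj, max_lag=100):
    """Estimate the anomalous-diffusion exponent alpha from a 1-D trajectory.

    MSD(t) ~ t**alpha: alpha ≈ 1 means normal diffusion, alpha < 1 subdiffusion.
    """
    lags = np.arange(1, max_lag + 1)
    # Time-averaged MSD at each lag.
    msd = np.array([np.mean((traj[lag:] - traj[:-lag]) ** 2) for lag in lags])
    # Slope of log(MSD) vs log(lag) is the exponent alpha.
    alpha, _ = np.polyfit(np.log(lags), np.log(msd), 1)
    return alpha

# Ordinary random walk: i.i.d. increments, so MSD grows linearly in time.
walk = np.cumsum(rng.standard_normal(100_000))
print(round(msd_exponent(walk), 2))  # ≈ 1.0 for normal diffusion
```

A subdiffusive process (for example, a walker with heavy-tailed waiting times between moves) fed to the same estimator would return α noticeably below 1.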

Integrating the Monte Carlo Tree Search (MCTS) algorithm into large language models could significantly enhance their ability to solve complex mathematical problems. Initial experiments show promising results.

While large language models like GPT-4 have made remarkable progress in language processing, they still struggle with tasks requiring strategic and logical thinking. Particularly in mathematics, the models tend to produce plausible-sounding but factually incorrect answers.

In a new paper, researchers from the Shanghai Artificial Intelligence Laboratory propose combining language models with the Monte Carlo Tree Search (MCTS) algorithm. MCTS is a decision-making tool used in artificial intelligence for scenarios that require strategic planning, such as games and complex problem-solving. One of the most well-known applications is AlphaGo and its successor systems like AlphaZero, which have consistently beaten humans in board games. The combination of language models and MCTS has long been considered promising and is being studied by many labs — likely including OpenAI with Q*.
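The decision loop at the heart of MCTS — selection, expansion, simulation, backpropagation — can be shown on a toy game. The sketch below is a minimal, generic illustration rather than the paper's method: plain UCT (UCB1 selection with random playouts) applied to a single-pile Nim game, with all names and parameters chosen here for the example.

```python
import math
import random

# Toy game: one pile of stones, players alternate removing 1-3 stones;
# whoever takes the last stone wins.
MOVES = (1, 2, 3)

def legal_moves(stones):
    return [m for m in MOVES if m <= stones]

class Node:
    def __init__(self, stones, parent=None, move=None):
        self.stones = stones            # stones left; a new player is to move here
        self.parent = parent
        self.move = move                # move that led to this node
        self.children = []
        self.untried = legal_moves(stones)
        self.visits = 0
        self.wins = 0.0                 # wins for the player who just moved here

    def ucb1(self, c=1.4):
        # Exploitation (win rate) plus exploration bonus.
        return (self.wins / self.visits
                + c * math.sqrt(math.log(self.parent.visits) / self.visits))

def mcts(root_stones, iterations=2000):
    root = Node(root_stones)
    for _ in range(iterations):
        node = root
        # 1. Selection: descend via UCB1 while the node is fully expanded.
        while not node.untried and node.children:
            node = max(node.children, key=Node.ucb1)
        # 2. Expansion: add one untried child, if any.
        if node.untried:
            move = node.untried.pop()
            child = Node(node.stones - move, parent=node, move=move)
            node.children.append(child)
            node = child
        # 3. Simulation: random playout from the new node.
        stones, mover = node.stones, 0          # mover 0 = player to move at `node`
        winner_is_node_mover = False            # stones == 0 means that player already lost
        while stones > 0:
            stones -= random.choice(legal_moves(stones))
            if stones == 0:
                winner_is_node_mover = (mover == 0)
            mover ^= 1
        # 4. Backpropagation: alternate the result up the tree.
        won = 0.0 if winner_is_node_mover else 1.0   # from the just-moved player's view
        while node is not None:
            node.visits += 1
            node.wins += won
            won = 1.0 - won
            node = node.parent
    # Recommend the most-visited move from the root.
    return max(root.children, key=lambda n: n.visits).move
```

From a pile of 3, taking all 3 stones wins immediately, and the search converges on that move; in a language-model setting, the same loop would rank candidate reasoning steps instead of Nim moves.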

The efforts of Jeff Hawkins and Numenta to understand how the brain works started over 30 years ago and culminated in the last two years with the publication of the Thousand Brains Theory of Intelligence. Since then, we’ve been thinking about how to apply our insights about the neocortex to artificial intelligence. As described in this theory, it is clear that the brain works on principles fundamentally different from current AI systems. To build the kind of efficient and robust intelligence that we know humans are capable of, we need to design a new type of artificial intelligence. This is what the Thousand Brains Project is about.

In the past, Numenta has been very open with our research, posting meeting recordings, making our code open source, and building a large community around our algorithms. We are happy to announce that we are returning to this practice with the Thousand Brains Project. With funding from the Gates Foundation, among others, we are significantly expanding our internal research efforts and calling on researchers around the world to follow, or even join, this exciting project.

Today we are releasing a short technical document describing the core principles of the platform we are building. To be notified when the code and other resources are released, please sign up for the newsletter below. For specific inquiries, please email us at ThousandBrains@numenta.com.

The reliable generation of random numbers has become a central component of information and communications technology. Random number generators — algorithms or devices that produce random sequences of numbers — now help secure communications between devices, generate statistical samples, and support various other applications.
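These two uses place different demands on the generator. As a brief illustration using Python's standard library, cryptographic purposes call for an unpredictable source such as the `secrets` module, while statistical sampling favors a fast, reproducible pseudorandom generator such as `random`:

```python
import random
import secrets

# Cryptographically secure randomness (keys, tokens, nonces):
# drawn from the operating system's entropy source, unpredictable by design.
token = secrets.token_hex(16)          # 16 random bytes -> 32 hex characters
print(token)

# Pseudorandomness for statistical sampling: seeded, hence reproducible,
# but NOT suitable for security purposes.
rng = random.Random(42)                # seeded Mersenne Twister
sample = rng.sample(range(100), k=5)   # 5 distinct values from 0..99
print(sample)
```

Re-running the script yields a fresh `token` every time, while `sample` is identical on every run because the generator is seeded.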

Science publisher Springer Nature has developed two new AI tools to detect fake research and duplicate images in scientific papers, helping to protect the integrity of published studies.

The growing number of cases of fake research is already putting a strain on the scientific publishing industry, according to Springer Nature. Following a pilot phase, the publisher is now rolling out two AI tools to identify papers with AI-generated fake content and problematic images — both red flags for research integrity issues.

The first tool, called “Geppetto,” detects AI-generated content, a telltale sign of “paper mills” producing fake research papers. The tool divides the paper into sections and uses its own algorithms to check the consistency of the text in each section.

This review spotlights the revolutionary role of deep learning (DL) in expanding the understanding of RNA biology. RNA is a fundamental biomolecule that shapes and regulates diverse phenotypes, including human diseases. Understanding the principles governing the functions of RNA is a key objective of current biology. Recently, big data produced via high-throughput experiments have been used to develop DL models aimed at analyzing and predicting RNA-related biological processes. This review emphasizes the role of public databases in providing these big data for training DL models. The authors introduce core DL concepts necessary for training models on biological data. By extensively examining DL studies across various fields of RNA biology, the authors suggest how to better leverage DL for revealing novel biological knowledge and demonstrate its potential for deciphering the complex biology of RNA.

This summary was initially drafted using artificial intelligence, then revised and fact-checked by the author.

Colin Jacobs, PhD, assistant professor in the Department of Medical Imaging at Radboud University Medical Center in Nijmegen, The Netherlands, and Kiran Vaidhya Venkadesh, a second-year PhD candidate with the Diagnostic Image Analysis Group at Radboud University Medical Center, discuss their 2021 Radiology study, which used CT images from the National Lung Screening Trial (NLST) to train a deep learning algorithm to estimate the malignancy risk of lung nodules.

Officials of the U.S. Defense Advanced Research Projects Agency (DARPA) in Arlington, Va., have issued a solicitation (DARPA-PA-23-03-11) for the Defense Applications of Innovative Remote Sensing (DAIRS) project.

Primary emphasis will be on the high-frequency (HF) band, nominally 4 to 15 MHz. Key applications in this band are sky-wave over-the-horizon radar (SWOTHR) for aircraft, ship, and boat tracking; oceanographic SWOTHR; and sounding for ionospheric characterization.

When it comes to quantum computing, that chilling effect on research and development would enormously jeopardize U.S. national security. Our projects received ample funding from defense and intelligence agencies for good reason. Quantum computing may soon become the gold standard technology for codebreaking and for defending large computer networks against cyberattacks.

Adopting the proposed march-in framework would also have major implications for our future economic stability. While still a nascent technology today, quantum computing’s ability to rapidly process huge volumes of data is set to revolutionize business in the coming decades. It may be the only way to capture the complexity needed for future AI and machine learning in, say, self-driving vehicles. It may enable companies to hone their supply chains and other logistical operations, such as manufacturing, with unprecedented precision. It may also transform finance by allowing portfolio managers to create new, superior investment algorithms and strategies.

Given the technology’s immense potential, it’s no mystery why China committed what is believed to be more than $15 billion in 2022 to develop its quantum computing capacity, more than double the quantum computing budget of the EU countries and eight times what the U.S. government plans to spend.