
A theory of consciousness should capture its phenomenology, characterize its ontological status and extent, explain its causal structure and genesis, and describe its function. Here, I advance the notion that consciousness is best understood as an operator: a physically implemented transition function that acts on a representational substrate and controls its temporal evolution. As such, it has no identity as an object or thing but, like software running on a digital computer, can be characterized as a law. Starting from the observation that biological information processing in multicellular substrates is based on self-organization, I explore the conjecture that the functionality of consciousness represents the simplest algorithm discoverable by such substrates, one that can impose function approximation by increasing representational coherence. I describe some properties of this operator, both with the goal of recovering the phenomenology of consciousness and of getting closer to a specification that would allow recreating it in computational simulations.

In science fiction, holograms are used for anything from basic communications to advanced military weaponry. In the real world, 3D holographic displays have yet to break through to everyday products and devices. That’s because creating holograms that look real and have high fidelity requires laser emitters or other advanced pieces of optical equipment. This situation has stymied commercial development, as these components are complex and expensive.

More recently, research scientists were able to create realistic 3D holographic images without lasers by using a white chip-on-board light-emitting diode. Unfortunately, that method required two spatial light modulators to control the wave fronts of the emitted light, adding a prohibitive amount of complexity and cost.

Now, those same scientists say they have created a simpler, more cost-effective way to produce realistic-looking 3D holographic displays using only one spatial light modulator and new software algorithms. The result is a simpler and cheaper method for creating holograms that could be emitted by everyday technology such as a smartphone screen.

In recent years, artificial intelligence technologies, especially machine learning algorithms, have made great strides. These technologies have enabled unprecedented efficiency in tasks such as image recognition, natural language generation and processing, and object detection, but such outstanding functionality requires substantial computational power as a foundation.

Combinatorial optimization problems (COPs), which seek the optimal solution among a finite but vast set of candidates, have applications in many fields, such as logistics, supply chain management, machine learning, material design, and drug discovery. These problems are usually very computationally intensive to solve on classical computers, so solving COPs with quantum computers has attracted significant attention from both academia and industry.
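To make the computational cost concrete, here is a minimal sketch of a classic COP, Max-Cut, solved by exhaustive search. The graph is a made-up toy instance; the point is that the search space doubles with every added vertex, which is exactly why heuristics and quantum approaches are of interest for realistic problem sizes.

```python
from itertools import product

# Toy Max-Cut instance: partition the vertices into two sets so that
# as many edges as possible cross the cut. Brute force examines all
# 2^n assignments, which becomes infeasible as n grows.
edges = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]  # a 4-cycle plus one chord
n = 4

best_value, best_cut = -1, None
for bits in product([0, 1], repeat=n):
    # Count edges whose endpoints land on opposite sides of the cut.
    value = sum(1 for u, v in edges if bits[u] != bits[v])
    if value > best_value:
        best_value, best_cut = value, bits

print(best_value, best_cut)  # best cut separates {0, 2} from {1, 3}
```

For this instance the optimum places vertices 0 and 2 on one side and 1 and 3 on the other, cutting the four cycle edges.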

Working memory (WM) is an advanced cognitive function that requires the participation and cooperation of multiple brain regions, with the hippocampus and prefrontal cortex chiefly responsible. Exploring information coordination between the hippocampus and prefrontal cortex during WM is a frontier problem in cognitive neuroscience. In this paper, an information-theoretic analysis of bimodal neural electrical signals (local field potentials, LFPs, and spikes) was employed to characterize transcerebral information coordination across the two brain regions. First, LFPs and spikes were recorded simultaneously from rat hippocampus and prefrontal cortex during a WM task using a multi-channel in vivo recording technique. Then, from the perspective of information theory, directional hippocampus-prefrontal cortex networks were constructed using a transfer entropy algorithm based on spectral coherence between LFPs and spikes. Finally, transcerebral coordination of bimodal information at the brain-network level was investigated during acquisition and performance of the WM task. The results show that the transfer entropy in directional hippocampus-prefrontal cortex networks is related to the acquisition and performance of WM. During acquisition, the information flow, local information transmission ability, and information transmission efficiency of the directional hippocampus-prefrontal networks increase over learning days. During performance, transfer entropy from the hippocampus to the prefrontal cortex plays the leading role in bimodal information coordination across brain regions, with the hippocampus driving the prefrontal cortex. Furthermore, bimodal information coordination in the hippocampus → prefrontal cortex network is predictive of successful WM performance.
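The abstract's central quantity, transfer entropy, measures directed information flow: how much knowing the past of signal X reduces uncertainty about the next value of signal Y, beyond what Y's own past already tells us. The paper's spectral-coherence-based variant is more elaborate, but the core idea can be sketched for discrete (e.g. binarized) signals with history length 1; everything below is an illustrative toy, not the authors' method.

```python
import numpy as np
from collections import Counter

def transfer_entropy(x, y):
    """Plug-in estimate of TE(X -> Y) for discrete signals, history 1:
    TE = sum p(y1, y0, x0) * log2[ p(y1 | y0, x0) / p(y1 | y0) ]."""
    triples = Counter(zip(y[1:], y[:-1], x[:-1]))  # (y_{t+1}, y_t, x_t)
    pairs   = Counter(zip(y[1:], y[:-1]))          # (y_{t+1}, y_t)
    cond    = Counter(zip(y[:-1], x[:-1]))         # (y_t, x_t)
    marg    = Counter(y[:-1])                      # y_t
    n = len(x) - 1
    te = 0.0
    for (y1, y0, x0), c in triples.items():
        p_joint = c / n
        p_y1_given_y0x0 = c / cond[(y0, x0)]
        p_y1_given_y0 = pairs[(y1, y0)] / marg[y0]
        te += p_joint * np.log2(p_y1_given_y0x0 / p_y1_given_y0)
    return te

# Synthetic driver-follower pair: y copies x with a one-step lag,
# so information should flow strongly X -> Y but not Y -> X.
rng = np.random.default_rng(0)
x = rng.integers(0, 2, 10000)
y = np.roll(x, 1)  # y[t] = x[t-1]
print(transfer_entropy(x, y))  # close to 1 bit
print(transfer_entropy(y, x))  # close to 0
```

The asymmetry between the two directions is what lets the paper speak of the hippocampus "driving" the prefrontal cortex rather than merely being correlated with it.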

Keywords: Bimodal neural electrical signals; Graph theory; Transcerebral information coordination; Transfer entropy; Working memory.

© The Author(s), under exclusive licence to Springer Nature B.V. 2022.

Summary: Neural networks, regardless of their complexity or training method, follow a surprisingly uniform path from ignorance to expertise in image classification tasks. Researchers found that neural networks classify images by identifying the same low-dimensional features, such as ears or eyes, debunking the assumption that network learning methods are vastly different.

This finding could pave the way for developing more efficient AI training algorithms, potentially reducing the significant computational resources currently required. The research, grounded in information geometry, hints at a more streamlined future for AI development, where understanding the common learning path of neural networks could lead to cheaper and faster training methods.

Year 2010


The world has waited with bated breath for three decades, and now finally a group of academics, engineers, and math geeks has discovered the number that explains life, the universe, and everything. That number is 20, and it’s the maximum number of moves it takes to solve a Rubik’s Cube.

Known as God’s Number, the magic number required about 35 CPU-years and a good deal of man-hours to find. Why? Because there are 43,252,003,274,489,856,000 possible positions of the cube, and the computer algorithm that finally cracked God’s Algorithm had to solve them all. (The terms God’s Number/Algorithm are derived from the fact that if God was solving a Cube, he/she/it would do it in the most efficient way possible. The Creator did not endorse this study, and could not be reached for comment.)

A full account of the history of God’s Number, as well as a breakdown of the math, is available here; in summary, the team divided the possible positions into sets, then drastically cut the number of positions they had to solve through symmetry (if you scramble a Cube randomly and then turn it upside down, you haven’t changed the solution).
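The roughly 43 quintillion figure is not arbitrary; it follows from counting the cube's reachable states. Eight corners can be permuted, with all but one independently oriented; twelve edges can be permuted, with all but one independently flipped; and a parity constraint linking corner and edge permutations halves the total:

```python
from math import factorial

# Reachable states of a 3x3x3 Rubik's Cube:
#   8! corner permutations x 3^7 corner orientations
# x 12! edge permutations  x 2^11 edge orientations
# / 2 for the corner/edge permutation parity constraint.
positions = factorial(8) * 3**7 * factorial(12) * 2**11 // 2
print(f"{positions:,}")  # 43,252,003,274,489,856,000
```

At 35 CPU-years for the full search, the symmetry reductions described above were doing a great deal of heavy lifting.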

Bayesian neural networks (BNNs) combine the generalizability of deep neural networks (DNNs) with a rigorous quantification of predictive uncertainty, which mitigates overfitting and makes them valuable for high-reliability or safety-critical applications. However, the probabilistic nature of BNNs makes them more computationally intensive on digital hardware and so far, less directly amenable to acceleration by analog in-memory computing as compared to DNNs. This work exploits a novel spintronic bit cell that efficiently and compactly implements Gaussian-distributed BNN values. Specifically, the bit cell combines a tunable stochastic magnetic tunnel junction (MTJ) encoding the trained standard deviation and a multi-bit domain-wall MTJ device independently encoding the trained mean. The two devices can be integrated within the same array, enabling highly efficient, fully analog, probabilistic matrix-vector multiplications. We use micromagnetics simulations as the basis of a system-level model of the spintronic BNN accelerator, demonstrating that our design yields accurate, well-calibrated uncertainty estimates for both classification and regression problems and matches software BNN performance. This result paves the way to spintronic in-memory computing systems implementing trusted neural networks at a modest energy budget.
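The division of labor in the bit cell, one device encoding a trained mean and another a trained standard deviation, mirrors how a Gaussian BNN layer works in software. The sketch below uses made-up dimensions and parameter values purely to illustrate the sampling: each forward pass draws fresh weights, and the spread across repeated passes is the uncertainty estimate.

```python
import numpy as np

# Illustrative Gaussian Bayesian linear layer. In the spintronic design,
# mu would be stored in the multi-bit domain-wall MTJ and sigma in the
# tunable stochastic MTJ; here both are just arrays (values are made up).
rng = np.random.default_rng(42)
mu = rng.normal(size=(3, 5))      # trained weight means
sigma = 0.1 * np.ones((3, 5))     # trained weight standard deviations
x = rng.normal(size=5)            # one input vector

# Every pass samples a new weight matrix W = mu + sigma * noise, then
# performs an ordinary matrix-vector multiplication.
samples = np.array([(mu + sigma * rng.normal(size=mu.shape)) @ x
                    for _ in range(1000)])
pred_mean = samples.mean(axis=0)  # point prediction
pred_std = samples.std(axis=0)    # predictive uncertainty
print(pred_mean, pred_std)
```

The cost of those thousands of random draws per inference is exactly the random-number-generation burden on digital hardware that the analog stochastic MTJ is meant to eliminate.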

The powerful ability of deep neural networks (DNNs) to generalize has driven their wide proliferation in the last decade to many applications. However, particularly in applications where the cost of a wrong prediction is high, there is a strong desire for algorithms that can reliably quantify the confidence in their predictions (Jiang et al., 2018). Bayesian neural networks (BNNs) can provide the generalizability of DNNs, while also enabling rigorous uncertainty estimates by encoding their parameters as probability distributions learned through Bayes’ theorem, such that predictions sample the trained distributions (MacKay, 1992). Probabilistic weights can also be viewed as an efficient form of model ensembling, reducing overfitting (Jospin et al., 2022). Despite these advantages, the probabilistic nature of BNNs makes them slower and more power-intensive to deploy in conventional hardware, due to the large number of random number generation operations required (Cai et al., 2018a).