
Archive for the ‘biological’ category: Page 23

Mar 20, 2024

19.5y Younger Biological Age: Supplements, Diet (Test #2 in 2024)

Posted by in categories: biological, genetics, life extension

Join us on Patreon! https://www.patreon.com/MichaelLustgartenPhD
Discount Links: Epigenetic, Telomere Testing: https://trudiagnostic.com/?irclickid=U-s3Ii2r7x

Mar 19, 2024

New study uncovers how hydrogen provided energy at life’s origin

Posted by in categories: biological, chemistry, sustainability

Hydrogen gas is a clean fuel. It burns with oxygen in the air to provide energy with no CO2. Hydrogen is a key to sustainable energy for the future. Though humans are just now coming to realize the benefits of hydrogen gas (H2 in chemical shorthand), microbes have known that H2 is a good fuel for as long as there has been life on Earth. Hydrogen is ancient energy.

Mar 19, 2024

Omnidirectional tripedal robot scoots, shuffles and climbs

Posted by in categories: biological, robotics/AI

A small research group from the University of Michigan has developed a three-legged skating/shuffling robot called SKOOTR that rolls as it walks, can move along in any direction and can even rise up to overcome obstacles.

The idea for the SKOOTR – or SKating, Omni-Oriented, Tripedal Robot – project came from assistant professor Talia Y. Moore at the University of Michigan’s Evolution and Motion of Biology and Robotics (EMBiR) Lab.


Mar 19, 2024

Natural language instructions induce compositional generalization in networks of neurons

Posted by in categories: biological, robotics/AI

In this study, we use the latest advances in natural language processing to build tractable models of the ability to interpret instructions to guide actions in novel settings and the ability to produce a description of a task once it has been learned. RNNs can learn to perform a set of psychophysical tasks simultaneously using a pretrained language transformer to embed a natural language instruction for the current task. Our best-performing models can leverage these embeddings to perform a brand-new task with an average performance of 83% correct. Instructed models that generalize performance do so by leveraging the shared compositional structure of instruction embeddings and task representations, such that an inference about the relations between practiced and novel instructions leads to a good inference about what sensorimotor transformation is required for the unseen task. Finally, we show a network can invert this information and provide a linguistic description for a task based only on the sensorimotor contingency it observes.
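The conditioning scheme described above can be sketched in a few lines: a frozen language encoder maps an instruction to a fixed-length embedding, which is fed into a sensorimotor RNN at every timestep alongside the sensory input. This is a minimal illustration, not the paper's architecture; all dimensions and weights are made up, and a fixed random projection stands in for the pretrained language transformer.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative dimensions (not from the paper)
SENSORY, INSTR, HIDDEN, MOTOR, T = 16, 64, 128, 8, 50

# Frozen "language encoder": in the paper this is a pretrained
# transformer; a fixed random projection stands in for it here.
W_lang = rng.standard_normal((300, INSTR)) / np.sqrt(300)

# Sensorimotor RNN weights
W_in = rng.standard_normal((SENSORY + INSTR, HIDDEN)) / np.sqrt(SENSORY + INSTR)
W_rec = rng.standard_normal((HIDDEN, HIDDEN)) / np.sqrt(HIDDEN)
W_out = rng.standard_normal((HIDDEN, MOTOR)) / np.sqrt(HIDDEN)

def run_task(sensory_seq, instruction_vec):
    """Run the RNN on one trial, conditioning every step on the
    instruction embedding (the key idea of the instructed models)."""
    instr = instruction_vec @ W_lang          # fixed-length embedding
    h = np.zeros(HIDDEN)
    outputs = []
    for x_t in sensory_seq:
        u = np.concatenate([x_t, instr])      # instruction enters each step
        h = np.tanh(u @ W_in + h @ W_rec)
        outputs.append(h @ W_out)
    return np.array(outputs)

out = run_task(rng.standard_normal((T, SENSORY)), rng.standard_normal(300))
print(out.shape)  # (50, 8): one motor output vector per timestep
```

Because the instruction embedding is injected at every timestep, two instructions with similar embeddings induce similar hidden dynamics, which is the mechanism behind the compositional generalization the study reports.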

Our models make several predictions for what neural representations to expect in brain areas that integrate linguistic information in order to exert control over sensorimotor areas. First, the CCGP analysis of our model hierarchy suggests that when humans must generalize across (or switch between) a set of related tasks based on instructions, the neural geometry observed among sensorimotor mappings should also be present in semantic representations of instructions. This prediction is well grounded in the existing experimental literature, where multiple studies have observed that the type of abstract structure we find in our sensorimotor-RNNs also exists in sensorimotor areas of biological brains3,36,37. Our models theorize that the emergence of an equivalent task-related structure in language areas is essential to instructed action in humans. One intriguing candidate for an area that may support such representations is the language-selective subregion of the left inferior frontal gyrus. This area is sensitive to both lexico-semantic and syntactic aspects of sentence comprehension, is implicated in tasks that require semantic control, and lies anatomically adjacent to another functional subregion of the left inferior frontal gyrus that is implicated in flexible cognition38,39,40,41. We also predict that individual units involved in implementing sensorimotor mappings should modulate their tuning properties on a trial-by-trial basis according to the semantics of the input instructions, and that failure to modulate tuning in the expected way should lead to poor generalization. This prediction may be especially useful for interpreting multiunit recordings in humans. Finally, given that grounding linguistic knowledge in the sensorimotor demands of the task set improved performance across models (Fig. 2e), we predict that during learning the highest level of the language-processing hierarchy should likewise be shaped by the embodied processes that accompany linguistic inputs, for example, motor planning or affordance evaluation42.

One notable negative result of our study is the relatively poor generalization performance of GPTNET (XL), which used at least an order of magnitude more parameters than other models. This is particularly striking given that activity in these models is predictive of many behavioral and neural signatures of human language processing10,11. Given this, future imaging studies may be guided by the representations in both autoregressive models and our best-performing models to delineate a full gradient of brain areas involved in each stage of instruction following, from low-level next-word prediction to higher-level structured-sentence representations to the sensorimotor control that language informs.

Mar 19, 2024

Solving the Hard Problem: A Thermodynamic Theory of Consciousness and Intelligence

Posted by in categories: biological, mathematics, neuroscience, quantum physics, robotics/AI

This paper introduces a novel theoretical framework for understanding consciousness, proposing a paradigm shift from traditional biological-centric views to a broader, universal perspective grounded in thermodynamics and systems theory. We posit that consciousness is not an exclusive attribute of biological entities but a fundamental feature of all systems exhibiting a particular form of intelligence. This intelligence is defined as the capacity of a system to efficiently utilize energy to reduce internal entropy, thereby fostering increased order and complexity. Supported by a robust mathematical model, the theory suggests that subjective experience, or what is often referred to as qualia, emerges from the intricate interplay of energy, entropy, and information within a system. This redefinition of consciousness and intelligence challenges existing paradigms and extends the potential for understanding and developing Artificial General Intelligence (AGI). The implications of this theory are vast, bridging gaps between cognitive science, artificial intelligence, philosophy, and physics, and providing a new lens through which to view the nature of consciousness itself.

Consciousness, traditionally viewed through the lens of biology and neurology, has long been a subject shrouded in mystery and debate. Philosophers, scientists, and thinkers have pondered over what consciousness is, how it arises, and why it appears to be a unique trait of certain biological organisms. The “hard problem” of consciousness, a term coined by philosopher David Chalmers, encapsulates the difficulty in explaining why and how physical processes in the brain give rise to subjective experiences.

Current research in cognitive science, neuroscience, and artificial intelligence offers various theories of consciousness, ranging from neural correlates of consciousness (NCCs) to quantum theories. However, these theories often face limitations in fully explaining the emergence and universality of consciousness.

Mar 18, 2024

Two artificial intelligences talk to each other

Posted by in categories: biological, neuroscience, robotics/AI

A team from the University of Geneva (UNIGE) has succeeded in modeling an artificial neural network capable of this cognitive prowess. After learning and performing a series of basic tasks, this AI was able to provide a linguistic description of them to a “sister” AI, which in turn performed them. These promising results, especially for robotics, are published in Nature Neuroscience.

Performing a new task without prior training, on the sole basis of verbal or written instructions, is a unique human ability. What’s more, once we have learned the task, we are able to describe it so that another person can reproduce it. This dual capacity distinguishes us from other species, which, to learn a new task, need numerous trials accompanied by positive or negative reinforcement signals, without being able to communicate it to their conspecifics.

A sub-field of artificial intelligence (AI), natural language processing, seeks to recreate this human faculty with machines that understand and respond to vocal or textual data. This technique is based on artificial neural networks, inspired by our biological neurons and by the way they transmit electrical signals to one another in the brain. However, the neural calculations that would make it possible to achieve the cognitive feat described above are still poorly understood.

Mar 17, 2024

19.5y Younger Biological Age: My Best Data Yet (31 Tests Since 2018)

Posted by in categories: biological, genetics, life extension

Join us on Patreon! https://www.patreon.com/MichaelLustgartenPhD
Discount Links: Epigenetic, Telomere Testing: https://trudiagnostic.com/?irclickid=U-s3Ii2r7x

Mar 15, 2024

Scientists demonstrate how individual differences in ‘whole-brain’ activity are generated in roundworms

Posted by in categories: biological, computing, neuroscience

Joint research led by Yu Toyoshima and Yuichi Iino of the University of Tokyo has demonstrated individual differences in, and successfully extracted commonalities from, the whole-brain activity of roundworms. The researchers also found that computer simulations based on the whole-brain activity of roundworms more accurately reflect real-brain activity when they include so-called “noise,” or probabilistic elements. The findings were published in the journal PLOS Computational Biology.

The roundworm Caenorhabditis elegans is a favorite among neuroscientists because its 302 neurons are completely mapped. This gives scientists a fantastic opportunity to reveal its neural mechanisms at a systems level. Thus far, scientists have been making progress in revealing the different states and patterns of each neuron and the assemblies they form. However, how these states and patterns are generated has been a less explored frontier.
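The study's central point, that simulations including probabilistic "noise" better reflect real whole-brain activity, can be illustrated with a toy rate network: a deterministic run settles to rest, while an identical run with additive noise keeps fluctuating. The connectivity, timestep, and noise level below are invented for illustration and are not the model used in the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

# 302 units, as in the C. elegans nervous system; everything else
# (random connectivity, dt, noise level) is an illustrative assumption.
N, T, dt, sigma = 302, 1000, 0.01, 0.3
W = rng.standard_normal((N, N)) / np.sqrt(N)

def simulate(noise=False):
    """Leaky rate dynamics, optionally with additive Gaussian noise
    injected at each step (Euler-Maruyama style)."""
    x = np.zeros((T, N))
    for t in range(1, T):
        drift = -x[t - 1] + np.tanh(W @ x[t - 1])
        x[t] = x[t - 1] + dt * drift
        if noise:
            x[t] += sigma * np.sqrt(dt) * rng.standard_normal(N)
    return x

quiet = simulate(noise=False)  # from a zero start, stays exactly at rest
noisy = simulate(noise=True)   # noise sustains ongoing fluctuations
print(quiet.std(), noisy.std())
```

The contrast is the point: without stochastic input the deterministic system never leaves its fixed point, whereas the noisy version produces the kind of ongoing, variable activity that real recordings show.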


Mar 15, 2024

Will digital intelligence replace biological intelligence?

Posted by in categories: biological, education, information science, life extension, robotics/AI

The Schwartz Reisman Institute for Technology and Society and the Department of Computer Science at the University of Toronto, in collaboration with the Vector Institute for Artificial Intelligence and the Cosmic Future Initiative at the Faculty of Arts & Science, present Geoffrey Hinton on October 27, 2023, at the University of Toronto.

0:00:00 — 0:07:20 Opening remarks and introduction.
0:07:21 — 0:08:43 Overview.
0:08:44 — 0:20:08 Two different ways to do computation.
0:20:09 — 0:30:11 Do large language models really understand what they are saying?
0:30:12 — 0:49:50 The first neural net language model and how it works.
0:49:51 — 0:57:24 Will we be able to control super-intelligence once it surpasses our intelligence?
0:57:25 — 1:03:18 Does digital intelligence have subjective experience?
1:03:19 — 1:55:36 Q&A.
1:55:37 — 1:58:37 Closing remarks.


Mar 13, 2024

Unraveling the origins of life: Scientists discover ‘cool’ sugar acid formation in space

Posted by in categories: biological, nanotechnology, space

A critical molecule for the metabolism of living organisms has been synthesized for the first time by University of Hawaiʻi at Mānoa researchers at low temperatures (10 K) on ice-coated nanoparticles mimicking conditions in deep space, marking a “cool” step in advancing our understanding of the origins of life.
