
Nov 26, 2022

Scientists Discover a Gene That Could Prevent Alzheimer’s Disease

Posted in categories: biotech/medical, neuroscience

Researchers at the University of Colorado Anschutz find that overexpression of a gene improves learning and memory in Alzheimer’s disease.

Alzheimer’s disease attacks the brain, causing a decline in mental ability that worsens over time. It is the most common form of dementia, accounting for 60 to 80 percent of dementia cases. There is currently no cure for Alzheimer’s disease, but medications can help ease the symptoms.

Nov 26, 2022

Evolution of Human Consciousness SOLVED! — Yet Again, It Seems… | Mind Matters

Posted in categories: biological, evolution, neuroscience

Nothing in biology makes sense except in the light of evolution. The gradualism of evolution has explained and dissolved life’s mysteries—life’s seemingly irreducible complexity and the illusion that living things possess some sort of mysterious vitalizing essence. So, too, evolution is likely to be key to demystifying the seemingly inexplicable, ethereal nature of consciousness.

First, what does it even mean to say that “Nothing in biology makes sense except in the light of evolution”? If the chosen topic is human consciousness, Martin Luther King and Mother Teresa come quickly to mind. But then what does the term “evolution” contribute to the discussion of the origin of human consciousness? Is it something useful or something theorists are stuck with, come what may?

Science theories should make predictions. Who predicted either King or Mother Teresa?

Nov 26, 2022

Fluxonium qubits bring the creation of a quantum computer closer

Posted in categories: computing, information science, quantum physics

Russian scientists from the University of Science and Technology MISIS and Bauman Moscow State Technical University were among the first in the world to implement a two-qubit operation using superconducting fluxonium qubits. Fluxonium qubits have longer lifetimes and support higher-precision operations, which makes them suitable for running longer algorithms. The research, which brings the creation of a quantum computer closer to reality, has been published in npj Quantum Information.

One of the main questions in the development of a universal quantum computer is which quantum objects are best suited for building quantum processors: electrons, photons, ions, superconductors, or other “quantum transistors.” Superconducting qubits have become one of the most successful platforms for quantum computing over the past decade. To date, the most commercially successful superconducting qubits are transmons, which are actively investigated and used in the quantum computing efforts of Google, IBM, and other leading laboratories.

The main task of a qubit is to store and process information without errors. Accidental noise, and even mere observation, can lead to the loss or alteration of data. The stable operation of qubits often requires extremely low ambient temperatures, close to absolute zero, hundreds of times colder than the temperature of open space.
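To make the phrase “two-qubit operation” concrete, here is a minimal NumPy sketch of a controlled-Z (CZ) gate, a typical entangling gate for superconducting qubits, turning a simple product state into an entangled Bell state. This is a generic state-vector illustration, not the MISIS/Bauman fluxonium implementation.

```python
# Illustrative sketch only: a generic two-qubit entangling operation in the
# state-vector picture, not the fluxonium experiment described above.
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # single-qubit Hadamard
I = np.eye(2)                                  # single-qubit identity
CZ = np.diag([1, 1, 1, -1])                    # controlled-Z on two qubits

state = np.array([1, 0, 0, 0], dtype=complex)  # both qubits start in |0>, i.e. |00>

state = np.kron(H, I) @ state                  # put qubit 0 into a superposition
CNOT = np.kron(I, H) @ CZ @ np.kron(I, H)      # build a CNOT out of the native CZ
state = CNOT @ state                           # two-qubit operation: entangle the pair

print(np.round(state, 3))                      # ~0.707 on |00> and |11>: a Bell state
```

Real fluxonium gates are calibrated microwave and flux pulses; the point here is only what the abstract two-qubit operation does to the joint state.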

Nov 26, 2022

Researchers From Stanford And Microsoft Have Proposed An Artificial Intelligence (AI) Approach That Uses Declarative Statements As Corrective Feedback For Neural Models With Bugs

Posted in category: robotics/AI

The methods currently used to correct systematic issues in NLP models are either fragile or time-consuming and prone to shortcuts. Humans, on the other hand, frequently correct one another using natural language. This has inspired recent research on natural language patches: declarative statements that let developers deliver corrective feedback at the appropriate level of abstraction, either by modifying the model’s behavior or by adding information the model may be missing.

Instead of relying solely on labeled examples, there is a growing body of research on using language to provide instructions, supervision, and even inductive biases to models, such as building neural representations from language descriptions (Andreas et al., 2018; Murty et al., 2020; Mu et al., 2020) or language-based zero-shot learning (Brown et al., 2020; Hanjie et al., 2022; Chen et al., 2021). Language has yet to be properly utilized for corrective purposes, where a user interacts with an existing model in order to improve it.

The neural language patching model has two heads: a gating head that determines whether a patch should be applied, and an interpreter head that predicts the output based on the information in the patch. The model is trained in two stages: first on a labeled dataset, then through task-specific fine-tuning. During the second, fine-tuning stage, a set of patch templates is used to create patches and synthetic labeled examples.
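As a rough sketch of the two-head design described above, the following PyTorch snippet wires up a gating head (does the patch apply?) and an interpreter head (what is predicted when the patch is used?), then blends their output with the unpatched prediction. The layer sizes, the additive “joint encoding,” and the toy inputs are assumptions for illustration; the paper’s actual architecture and training code are not reproduced here.

```python
# Hedged sketch of a two-headed "language patch" model; sizes and the joint
# encoding are illustrative assumptions, not the paper's implementation.
import torch
import torch.nn as nn

class LanguagePatchModel(nn.Module):
    def __init__(self, hidden_dim: int, num_labels: int):
        super().__init__()
        self.gating_head = nn.Linear(hidden_dim, 1)                 # should the patch be applied?
        self.interpreter_head = nn.Linear(hidden_dim, num_labels)   # prediction when using the patch
        self.base_head = nn.Linear(hidden_dim, num_labels)          # prediction without the patch

    def forward(self, input_enc: torch.Tensor, patch_enc: torch.Tensor) -> torch.Tensor:
        # A real model would encode the concatenated (input, patch) text with a
        # transformer; a simple sum stands in for that joint encoding here.
        joint = input_enc + patch_enc
        gate = torch.sigmoid(self.gating_head(joint))               # P(patch applies)
        patched = torch.softmax(self.interpreter_head(joint), dim=-1)
        original = torch.softmax(self.base_head(input_enc), dim=-1)
        # Soft combination: follow the patch only where the gate fires.
        return gate * patched + (1.0 - gate) * original

# Toy usage with random vectors standing in for sentence/patch encodings.
model = LanguagePatchModel(hidden_dim=16, num_labels=2)
inputs, patch = torch.randn(4, 16), torch.randn(4, 16)
print(model(inputs, patch).shape)  # torch.Size([4, 2])
```

The gating term is what keeps a patch from being applied indiscriminately: inputs the patch does not describe fall back to the model’s original prediction.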

Nov 26, 2022

Application: Quantum mechanics on curved spaces — Lec 26 — Frederic Schuller

Posted in category: quantum physics

This is from the lecture series “Lectures on the Geometric Anatomy of Theoretical Physics,” delivered by Dr. Frederic P. Schuller.

Nov 26, 2022

Boost Your Brain 150% With An AI Chip | From Elon Musk! in 2023

Posted in categories: Elon Musk, food, robotics/AI

This video is about boosting your brain 150% with an AI chip from Elon Musk in 2023.

Remember the movie Limitless? Now you can do it with a computer chip.

Nov 26, 2022

Groundbreaking Discoveries About The Human Brain and Our Neurons

Posted in categories: biological, bitcoin, cryptocurrencies, neuroscience

Get a Wonderful Person Tee: https://teespring.com/stores/whatdamath.
More cool designs are on Amazon: https://amzn.to/3wDGy2i.
Alternatively, PayPal donations can be sent here: http://paypal.me/whatdamath.

Hello and welcome! My name is Anton and in this video, we will talk about incredible discoveries about the human brain.
Links:
https://www.pnas.org/doi/full/10.1073/pnas.2204900119
https://www.nature.com/articles/s41586-022-05277-w.
https://en.wikipedia.org/wiki/ARHGAP11B
https://www.scienceinpublic.com.au/corticallabs.
https://www.nature.com/articles/s41586-019-1654-9
Synthetic cells: https://youtu.be/OxVZPKmm58M
#brain #biology #neuroscience.

Nov 26, 2022

Why Does The Universe Look Like This?

Posted in categories: media & arts, space

Thank you to Wondrium for sponsoring today’s video! Signup for your FREE trial to Wondrium here: http://ow.ly/3bA050L1hTL

Researched and Written by Jon Farrow.
Narrated and Edited by David Kelly.
Animations by Jero Squartini https://www.fiverr.com/share/0v7Kjv.
Laniakea animation by Alperaym.
Incredible thumbnail art by Ettore Mazza, the GOAT: https://www.instagram.com/ettore.mazza/?hl=en.

Nov 26, 2022

Dr Egbert Edelbroek, Ph.D. — CEO, SpaceBorn United — R&D To Make Humanity A Multi-Planetary Species

Posted in category: space

Dr. Egbert Edelbroek, Ph.D. is the CEO & Founder of SpaceBorn United (https://spacebornunited.com/), a research and mission design company that studies the optimal conditions for human reproduction in space, with a focus on novel assisted reproductive technologies.

Dr. Edelbroek is passionate about accelerating space life science research and helping humanity to become a multi-planetary species. His interest in space exploration accelerated shortly after he became a sperm donor in 2010 and learned all about assisted reproductive technologies. This inspired him to explore options to re-engineer existing IVF technology for use in space.

Nov 26, 2022

A deep learning model that generates nonverbal social behavior for robots

Posted in category: robotics/AI

Researchers at the Electronics and Telecommunications Research Institute (ETRI) in Korea have recently developed a deep learning-based model that could help to produce engaging nonverbal social behaviors, such as hugging or shaking someone’s hand, in robots. Their model, presented in a paper pre-published on arXiv, can actively learn new context-appropriate social behaviors by observing interactions among humans.

“Deep learning techniques have produced interesting results in areas such as computer vision and [natural language processing],” Woo-Ri Ko, one of the researchers who carried out the study, told TechXplore. “We set out to apply [deep learning] to [social robotics], specifically by allowing robots to learn from human-human interactions on their own. Our method requires no prior knowledge of human behavior models, which are usually costly and time-consuming to implement.”

The artificial neural network (ANN)-based architecture developed by Ko and his colleagues combines the Seq2Seq (sequence-to-sequence) model introduced by Google researchers in 2014 with generative adversarial networks (GANs). The new architecture was trained on the AIR-Act2Act dataset, a collection of 5,000 human-human interactions occurring in 10 different scenarios.
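To make the Seq2Seq-plus-GAN combination concrete, here is an illustrative PyTorch sketch of a pose-sequence generator (encode the observed human motion, autoregressively decode a robot response) paired with a discriminator that scores how realistic a behavior sequence looks. The pose dimensionality, network sizes, and decoding loop are assumptions for illustration and are not taken from the ETRI paper or the AIR-Act2Act data format.

```python
# Hedged sketch: Seq2Seq behavior generator + GAN-style discriminator for pose
# sequences. All dimensions are assumptions, not the ETRI model's settings.
import torch
import torch.nn as nn

POSE_DIM = 30  # assumed size of a flattened skeleton pose vector

class BehaviorGenerator(nn.Module):
    """Encodes an observed human pose sequence and decodes a robot response."""
    def __init__(self, hidden: int = 128):
        super().__init__()
        self.encoder = nn.GRU(POSE_DIM, hidden, batch_first=True)
        self.decoder = nn.GRU(POSE_DIM, hidden, batch_first=True)
        self.to_pose = nn.Linear(hidden, POSE_DIM)

    def forward(self, human_seq: torch.Tensor, steps: int = 20) -> torch.Tensor:
        _, h = self.encoder(human_seq)                       # summarize the partner's motion
        frame = torch.zeros(human_seq.size(0), 1, POSE_DIM)  # start from a neutral pose
        frames = []
        for _ in range(steps):                               # emit robot poses one frame at a time
            out, h = self.decoder(frame, h)
            frame = self.to_pose(out)
            frames.append(frame)
        return torch.cat(frames, dim=1)

class BehaviorDiscriminator(nn.Module):
    """Scores how plausible (human-like) a behavior sequence is."""
    def __init__(self, hidden: int = 128):
        super().__init__()
        self.rnn = nn.GRU(POSE_DIM, hidden, batch_first=True)
        self.score = nn.Linear(hidden, 1)

    def forward(self, seq: torch.Tensor) -> torch.Tensor:
        _, h = self.rnn(seq)
        return torch.sigmoid(self.score(h[-1]))

# Toy usage: two clips of 40 observed frames each.
gen, disc = BehaviorGenerator(), BehaviorDiscriminator()
human = torch.randn(2, 40, POSE_DIM)
robot = gen(human)
print(robot.shape, disc(robot).shape)  # torch.Size([2, 20, 30]) torch.Size([2, 1])
```

In adversarial training, the generator would be pushed to produce response sequences that the discriminator cannot distinguish from the recorded human responses, which is how the model learns context-appropriate nonverbal behavior without hand-coded behavior rules.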