Archive for the ‘information science’ category: Page 87

Mar 14, 2023

An AI Learned to Play Atari 6,000 Times Faster

Posted in categories: information science, robotics/AI

We don’t learn by brute force repetition. AI shouldn’t either.


Despite impressive progress, today’s AI models are very inefficient learners, taking huge amounts of time and data to solve problems humans pick up almost instantaneously. A new approach could drastically speed things up by getting AI to read instruction manuals before attempting a challenge.
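The article stops at the high-level idea, but one way manual knowledge can speed up learning is reward shaping. The sketch below is hypothetical and illustrative, not the paper's actual pipeline: hints extracted from the manual (e.g. by a question-answering model) grant bonus reward when matching in-game events occur, so useful behavior is discovered far sooner than by blind trial and error.

```python
# A hypothetical sketch of manual-guided reward shaping (illustrative,
# not the paper's actual method): hints extracted from the instruction
# manual grant bonus reward when matching in-game events occur.
def shaped_reward(env_reward, events, hints):
    """events: set of things that just happened, e.g. {'collected_key'};
    hints: manual-derived event -> bonus mapping."""
    return env_reward + sum(hints.get(e, 0.0) for e in events)

# Manual says "pick up keys to open doors; avoid the skulls":
hints = {"collected_key": 1.0, "touched_skull": -1.0}
print(shaped_reward(0.0, {"collected_key"}, hints))  # 1.0
```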


Mar 13, 2023

What Is Beyond The Edge?

Posted in categories: information science, media & arts, space

Compare news coverage. Spot media bias. Avoid algorithms. Be well informed. Download the free Ground News app at https://ground.news/HOTU

Researched and Written by Leila Battison.
Narrated and Edited by David Kelly.
Animations by Jero Squartini https://www.fiverr.com/share/0v7Kjv.
Incredible thumbnail art by Ettore Mazza, the GOAT: https://www.instagram.com/ettore.mazza/?hl=en.


Mar 13, 2023

The Limits of Computing: Why Even in the Age of AI, Some Problems Are Just Too Difficult

Posted in categories: biotech/medical, information science, media & arts, robotics/AI

Empowered by artificial intelligence technologies, computers today can engage in convincing conversations with people, compose songs, paint pictures, play chess and Go, and diagnose diseases, to name just a few examples of their technological prowess.

These successes could be taken to indicate that computation has no limits. To see if that’s the case, it’s important to understand what makes a computer powerful.
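To make the idea of a genuinely hard problem concrete, here is a small illustration (our addition, not from the article) using subset sum, a textbook NP-complete problem. A brute-force solver must examine up to 2^n subsets, so every extra item doubles the worst-case work:

```python
# Brute-force subset sum: must examine up to 2**n subsets in the worst
# case, so each added item doubles the work. This is the textbook
# NP-complete example of a problem that resists efficient computation.
from itertools import combinations

def subset_sum(numbers, target):
    """Return a subset of `numbers` summing to `target`, or None."""
    for size in range(len(numbers) + 1):
        for combo in combinations(numbers, size):
            if sum(combo) == target:
                return combo
    return None

# 20 items -> ~1e6 subsets; 60 items -> ~1e18, out of reach even at a
# billion checks per second.
print(subset_sum([3, 9, 8, 4, 5, 7], 15))  # (8, 7)
```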


Mar 13, 2023

Deep Language Models are getting increasingly better

Posted in categories: information science, mapping, robotics/AI

Deep learning has made significant strides in text generation, translation, and completion in recent years. Algorithms trained to predict words from their surrounding context have been instrumental in these advances. Yet despite access to vast amounts of training data, deep language models still struggle with tasks like long story generation, summarization, coherent dialogue, and information retrieval. They have been shown to fall short in capturing syntactic and semantic properties, and their linguistic understanding remains superficial.

Predictive coding theory suggests that the human brain makes predictions over multiple timescales and levels of representation across the cortical hierarchy. Although studies have previously found evidence of speech predictions in the brain, the nature of the predicted representations and their temporal scope remain largely unknown. Recently, researchers analyzed the brain signals of 304 individuals listening to short stories and found that enhancing deep language models with long-range and multi-level predictions improved brain mapping.

The results revealed a hierarchical organization of language predictions in the cortex, in line with predictive coding theory's claim that the brain predicts over multiple levels of representation and multiple timescales. By incorporating these ideas into deep language models, researchers can begin to bridge the gap between human language processing and deep learning algorithms.

The study tested specific hypotheses of predictive coding theory by examining whether the cortical hierarchy predicts several levels of representation, spanning multiple timescales, beyond the adjacent, word-level predictions usually learned by deep language algorithms. Comparing modern deep language models against the brain activity of 304 people listening to spoken stories, the researchers found that the activations of models supplemented with long-range and high-level predictions described that brain activity best.
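A minimal sketch of this kind of comparison, under illustrative assumptions: `X_base` would hold a language model's per-word activations, `X_future` a representation of upcoming words, and `Y` the recorded brain responses. The "brain score" is the held-out correlation between predicted and actual activity; the study's contrast asks whether adding long-range prediction features raises it.

```python
# A sketch of a "brain score" comparison (names are illustrative, not
# from the study's code): map model activations to brain activity with
# ridge regression and score by held-out correlation.
import numpy as np
from sklearn.linear_model import RidgeCV
from sklearn.model_selection import train_test_split

def brain_score(X, Y):
    """Mean held-out Pearson correlation across voxels/sensors."""
    Xtr, Xte, Ytr, Yte = train_test_split(X, Y, test_size=0.2, random_state=0)
    model = RidgeCV(alphas=np.logspace(-3, 3, 7)).fit(Xtr, Ytr)
    pred = model.predict(Xte)
    rs = [np.corrcoef(pred[:, v], Yte[:, v])[0, 1] for v in range(Y.shape[1])]
    return float(np.mean(rs))

# The key contrast, schematically: do long-range prediction features
# improve the mapping over the base activations alone?
# gain = brain_score(np.hstack([X_base, X_future]), Y) - brain_score(X_base, Y)
```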

Mar 13, 2023

Prof. KARL FRISTON 3.0 — Collective Intelligence [Special Edition]

Posted in categories: ethics, information science, robotics/AI

This show is sponsored by Numerai, please visit them here with our sponsor link (we would really appreciate it) http://numer.ai/mlst.

Prof. Karl Friston recently proposed a vision of artificial intelligence that goes beyond machines and algorithms, and embraces humans and nature as part of a cyber-physical ecosystem of intelligence. This vision is based on the principle of active inference, which states that intelligent systems can learn from their observations and act on their environment to reduce uncertainty and achieve their goals. This leads to a formal account of collective intelligence that rests on shared narratives and goals.
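For reference, the quantity at the heart of active inference is the variational free energy; in its standard textbook form (added here for context, not specific to this episode) it upper-bounds surprise and decomposes as:

```latex
F = \mathbb{E}_{q(s)}\!\left[\ln q(s) - \ln p(o, s)\right]
  = \underbrace{D_{\mathrm{KL}}\!\left[\,q(s) \,\|\, p(s \mid o)\,\right]}_{\text{approximation error}\;\ge\;0}
  \;-\; \underbrace{\ln p(o)}_{\text{log evidence}}
```

Minimizing F therefore simultaneously improves the system's model of hidden states s and makes its observations o less surprising, which is the sense in which both perception and action "reduce uncertainty."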


Mar 13, 2023

Microsoft Proposes MathPrompter: A Technique that Improves Large Language Models (LLMs) Performance on Mathematical Reasoning Problems

Posted in categories: information science, mathematics, robotics/AI

LLM stands for Large Language Model. These are advanced machine learning models trained on massive volumes of text to comprehend and generate natural language. Examples include GPT-3 (Generative Pre-trained Transformer 3) and BERT (Bidirectional Encoder Representations from Transformers). Trained on billions of words, LLMs develop a broad understanding of language and can then be fine-tuned for tasks such as text classification, machine translation, or question answering, making them highly adaptable to a variety of language-based applications.

LLMs struggle with arithmetic reasoning tasks and frequently produce incorrect responses. Unlike natural language understanding, math problems usually have exactly one correct answer, which makes precise solutions harder for LLMs to generate. Moreover, as far as is known, no current LLM indicates a confidence level in its responses, which undermines trust in these models and limits their acceptance.

To address this issue, researchers proposed MathPrompter, a technique that improves LLM performance on mathematical problems and increases confidence in their predictions. MathPrompter is an AI-powered tool that helps users solve math problems by generating step-by-step solutions. It uses deep learning and natural language processing to understand and interpret a math problem, then generates a solution that explains each step of the process.
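A minimal sketch of the consensus idea behind MathPrompter, as we understand it from the paper's description: numeric values in the question are replaced with symbols, the LLM is asked for two independent solution forms (an algebraic expression and a Python function), and agreement across random variable assignments serves as the confidence check. The `ask_llm` callable and the prompts below are hypothetical placeholders, not the paper's exact interface.

```python
# A sketch of MathPrompter-style consensus checking. `ask_llm` is a
# hypothetical stand-in for any LLM completion call; prompts and parsing
# are simplified relative to the paper.
import random

def consensus_solve(template, variables, ask_llm, trials=5):
    """template: question with symbols instead of numbers, e.g.
    'A train travels a km/h for b hours...'; variables: symbol -> value."""
    # 1. Ask for two independent solution forms.
    algebraic = ask_llm("Give one algebraic expression answering: " + template)
    python_src = ask_llm("Write a Python function solve(...) answering: " + template)
    namespace = {}
    exec(python_src, namespace)                 # defines solve(**kwargs)
    # 2. Evaluate both on random assignments; agreement builds confidence.
    for _ in range(trials):
        sample = {k: random.randint(1, 100) for k in variables}
        if eval(algebraic, {}, dict(sample)) != namespace["solve"](**sample):
            return None                         # no consensus: abstain
    # 3. Consensus reached: answer with the question's real values.
    return eval(algebraic, {}, dict(variables))
```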

Mar 12, 2023

Immersive Virtual Reality From The Humble Webcam

Posted in categories: computing, information science, space, virtual reality

[Russ Maschmeyer] and Spatial Commerce Projects developed WonkaVision to demonstrate how 3D eye tracking from a single webcam can support rendering a graphical virtual reality (VR) display with realistic depth and space. Spatial Commerce Projects is a Shopify lab working to provide concepts, prototypes, and tools to explore the crossroads of spatial computing and commerce.

The system conveys a real sense of depth and three-dimensional space through an optical illusion that reacts to the viewer's eye position: the tracked eye position is used to render view-dependent images. The computer screen comes to feel like a window into a realistic 3D virtual space, where objects beyond the window appear to recede into the distance and objects in front of it appear to project out into the room. The downside is that the illusion only works for one viewer at a time.
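View-dependent rendering of this kind is usually done with an asymmetric ("off-axis") projection frustum computed from the tracked eye position. Below is a minimal sketch under assumed conventions (OpenGL-style matrix, screen centered at the origin of the z = 0 plane, eye coordinates in the same units as the screen); the names are illustrative, not from the project's code.

```python
# Off-axis "window" projection: the screen rectangle stays fixed while
# the frustum follows the eye, so the display behaves like a window.
import numpy as np

def window_projection(eye, half_w, half_h, near=10.0, far=10_000.0):
    ex, ey, ez = eye                  # ez > 0: eye in front of the screen
    # Project the screen edges, as seen from the eye, onto the near plane.
    left   = (-half_w - ex) * near / ez
    right  = ( half_w - ex) * near / ez
    bottom = (-half_h - ey) * near / ez
    top    = ( half_h - ey) * near / ez
    # Standard glFrustum-style asymmetric perspective matrix.
    return np.array([
        [2*near/(right-left), 0, (right+left)/(right-left), 0],
        [0, 2*near/(top-bottom), (top+bottom)/(top-bottom), 0],
        [0, 0, -(far+near)/(far-near), -2*far*near/(far-near)],
        [0, 0, -1, 0],
    ])

# The view matrix is then just a translation by -eye, which keeps the
# screen rectangle fixed as the window into the scene.
```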

Eye tracking is performed using Google’s MediaPipe Iris library, which relies on the fact that the human iris has a diameter of almost exactly 11.7 mm in most people. The library’s computer vision algorithms exploit this geometric constant to locate and track irises efficiently and with high accuracy.
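That geometric fact turns a webcam into a rough depth sensor via similar triangles. A minimal sketch, with an assumed, illustrative focal length (in practice it comes from camera calibration):

```python
# Depth from iris size under a pinhole camera model: a known physical
# iris width and its measured width in pixels give the eye's distance.
IRIS_DIAMETER_MM = 11.7

def eye_distance_mm(focal_length_px, iris_diameter_px):
    """real_size / distance = pixel_size / focal_length, rearranged."""
    return focal_length_px * IRIS_DIAMETER_MM / iris_diameter_px

print(eye_distance_mm(focal_length_px=1000, iris_diameter_px=24))  # 487.5 mm
```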

Mar 12, 2023

How Einstein tried to model the shape of the Universe

Posted in categories: cosmology, information science, mathematics, quantum physics

To keep his Universe static, Einstein added a term to the equations of general relativity, one he initially dubbed a negative pressure. It soon became known as the cosmological constant. Mathematics allowed the concept, but it had absolutely no justification from physics, no matter how hard Einstein and others tried to find one. The cosmological constant clearly detracted from the formal beauty and simplicity of Einstein’s original equations of 1915, which achieved so much without any need for arbitrary constants or additional assumptions. It amounted to a cosmic repulsion chosen to precisely balance the tendency of matter to collapse on itself. In modern parlance we call this fine-tuning, and in physics it is usually frowned upon.
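For reference, here are the modified field equations and the fine-tuned balance they enforce, in standard textbook form (added for context, not quoted from the article):

```latex
% Field equations with the cosmological constant:
G_{\mu\nu} + \Lambda\, g_{\mu\nu} = \frac{8\pi G}{c^{4}}\, T_{\mu\nu}
% For a static, pressureless universe, the Friedmann equations force the
% balance Einstein needed, fixing both \Lambda and the radius a:
\Lambda = \frac{4\pi G \rho}{c^{2}}, \qquad a = \frac{1}{\sqrt{\Lambda}}
```

The balance is unstable: perturb the density slightly and the model starts to collapse or expand, which is exactly the kind of fine-tuning the passage describes.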

Einstein knew that the only reason for his cosmological constant to exist was to secure a static and stable finite Universe. He wanted this kind of Universe, and he did not want to look much further. Quietly hiding in his equations, though, was another model for the Universe, one with an expanding geometry. In 1922, the Russian physicist Alexander Friedmann would find this solution. As for Einstein, it was only in 1931, after visiting Hubble in California, that he accepted cosmic expansion and discarded at long last his vision of a static Cosmos.

Einstein’s equations provided a much richer Universe than the one Einstein himself had originally imagined. But like the mythic phoenix, the cosmological constant refuses to go away. Nowadays it is back in full force, as we will see in a future article.

Mar 11, 2023

Get Ready to Meet the ChatGPT Clones

Posted in categories: business, information science, robotics/AI

ChatGPT might well be the most famous, and potentially valuable, algorithm of the moment, but the artificial intelligence techniques used by OpenAI to provide its smarts are neither unique nor secret. Competing projects and open-source clones may soon make ChatGPT-style bots available for anyone to copy and reuse.

Stability AI, a startup that has already developed and open-sourced advanced image-generation technology, is working on an open competitor to ChatGPT. “We are a few months from release,” says Emad Mostaque, Stability’s CEO. A number of competing startups, including Anthropic, Cohere, and AI21, are working on proprietary chatbots similar to OpenAI’s bot.

The impending flood of sophisticated chatbots will make the technology more abundant and visible to consumers, as well as more accessible to AI businesses, developers, and researchers. That could accelerate the rush to make money with AI tools that generate images, code, and text.

Mar 10, 2023

Microtubules are Biological Computers: searching for the mind of a cell

Posted in categories: biotech/medical, food, information science, media & arts, quantum physics, robotics/AI

In episode 13 of the Quantum Consciousness series, Justin Riddle discusses how microtubules are the most likely candidate for a universal quantum computer that acts as a single executive unit in the cell. Computer scientists try to model human behavior using neural networks that treat individual neurons as the base unit, yet unicellular organisms can do many of the things we consider human behavior. How does a single-celled lifeform perform such complex behavior? As Stuart Hameroff puts it, “neuron doctrine is an insult to neurons,” referring to the complexity of a single cell.

So look inside a cell: what makes it tick? Many assume the DNA holds some secret code or algorithm executing the cell’s decision-making, but the microscope reveals a different story, in which the microtubules perform a vast array of complex behaviors: swimming toward food, fleeing predators, coordinating protein delivery and creation within the cell. This raises the question: how do microtubules work? They are single proteins organized into helical cylinders, and since we typically think of a protein’s function as determined by its structure, the function of one protein repeated into tubes is tough to unravel. Hameroff proposed that these tubulin proteins may act as bits of information, with the whole tube working as a universal computer that can be programmed to fit any situation. Roger Penrose, given the limitations of digital computation, was looking for a quantum computer in biology, and Hameroff was looking for more than a digital explanation; hence the Hameroff-Penrose model of microtubules as quantum computers was born.

If microtubules are quantum computers, then each cell would possess a central executive hub for rapidly integrating information from across the cell and turning it into a single action plan that can be quickly disseminated. The computation would also get a “quantum” speed-up, in that exponentially large search spaces could be tackled in a reasonable timeframe. If microtubules are indeed quantum computers, modern science has greatly underestimated the processing power of a single cell, let alone the entire human brain.
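For context on the claimed “quantum speed-up”: the best-established quantum search result, Grover’s algorithm, finds a marked item among N possibilities in about √N steps rather than N, a quadratic rather than exponential advantage; tackling exponentially large spaces in reasonable time would require more than this.

```latex
% Unstructured search among N items:
T_{\text{classical}} = O(N), \qquad
T_{\text{Grover}} \approx \frac{\pi}{4}\sqrt{N} = O(\sqrt{N})
```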

~~~ Timestamps ~~~
0:00 Introduction.
3:08 “Neuron doctrine is an insult to neurons”
8:23 DNA vs Microtubules.
14:20 Diffusion vs Central Hub.
17:50 Microtubules as Universal Computers.
23:40 Penrose’s Quantum Computation update.
29:48 Quantum search in a cell.
33:25 Stable microtubules in neurons.
35:18 Finding the self in biology.


Page 87 of 322