Archive for the ‘information science’ category

Oct 13, 2022

New AI Algorithms Predict Sports Teams’ Moves With 80% Accuracy

Posted by in categories: habitats, information science, robotics/AI

The new algorithms can predict sports teams’ moves with 80% accuracy. Now the Cornell Laboratory for Intelligent Systems and Controls, which developed the algorithms, is collaborating with the Big Red hockey team to expand the research project’s applications.

Representing Cornell University, the Big Red men’s ice hockey team is a National Collegiate Athletic Association Division I college ice hockey program. Cornell Big Red competes in the ECAC Hockey conference and plays its home games at Lynah Rink in Ithaca, New York.

Oct 12, 2022

Mathematical formula tackles complex moral decision-making in AI

Posted by in categories: biotech/medical, ethics, health, information science, mathematics, robotics/AI

An interdisciplinary team of researchers has developed a blueprint for creating algorithms that more effectively incorporate ethical guidelines into artificial intelligence (AI) decision-making programs. The project was focused specifically on technologies in which humans interact with AI programs, such as virtual assistants or “carebots” used in healthcare settings.

“Technologies like carebots are supposed to help ensure the safety and comfort of hospital patients, and other people who require health monitoring or physical assistance,” says Veljko Dubljević, corresponding author of a paper on the work and an associate professor in the Science, Technology & Society program at North Carolina State University. “In practical terms, this means these technologies will be placed in situations where they need to make ethical judgments.”

“For example, let’s say that a carebot is in a setting where two people require medical assistance. One patient is unconscious but requires urgent care, while the second patient is in less urgent need but demands that the carebot treat him first. How does the carebot decide which patient is assisted first? Should the carebot even treat a patient who is unconscious and therefore unable to consent to receiving the treatment?”
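
A purely illustrative sketch (and emphatically not the researchers’ formula) of how a dilemma like the one above could be encoded as an explicit, auditable scoring rule rather than left implicit in a model; the urgency scale, weights, and field names are invented for illustration.

```python
# Illustrative only -- NOT the formula from the paper. It shows how a triage
# rule like the carebot dilemma above might be written as an explicit,
# inspectable scoring function. All weights and scales are invented.
from dataclasses import dataclass

@dataclass
class Patient:
    name: str
    urgency: float        # 0 (stable) .. 1 (life-threatening); assumed scale
    conscious: bool
    requests_help: bool

def triage_score(p: Patient) -> float:
    score = 3.0 * p.urgency                  # clinical need dominates (assumed weight)
    if p.requests_help:
        score += 0.5                         # expressed preference counts, but less
    if not p.conscious:
        score += 1.0                         # presumed consent for urgent care
    return score

patients = [
    Patient("A", urgency=0.9, conscious=False, requests_help=False),
    Patient("B", urgency=0.3, conscious=True, requests_help=True),
]
first = max(patients, key=triage_score)
print(f"Assist {first.name} first")          # Assist A first
```

Writing the rule out this way makes the trade-off between clinical need, expressed preference, and consent inspectable and adjustable, which is the practical point of building ethical guidelines into the decision procedure.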

Oct 12, 2022

DeepMind AI finds new way to multiply numbers and speed up computers

Posted by in categories: information science, robotics/AI

An artificial intelligence created by the firm DeepMind has discovered a new way to multiply numbers, the first such advance in over 50 years. The find could boost some computation speeds by up to 20 per cent, as a range of software relies on carrying out the task at great scale.

Matrix multiplication – where two grids of numbers are multiplied together – is a fundamental computing task used in virtually all software to some extent, but particularly so in graphics, AI and scientific simulations. Even a small improvement in the efficiency of these algorithms could bring large performance gains, or significant energy savings.
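
For concreteness, here is the textbook baseline that faster schemes compete against (a minimal sketch, not DeepMind’s method): multiplying two n-by-n grids this way uses n³ scalar multiplications, and it is that multiplication count that improved algorithms reduce.

```python
# Textbook matrix multiplication: n**3 scalar multiplications for n x n inputs.
def matmul(A, B):
    n, m, p = len(A), len(B), len(B[0])
    assert all(len(row) == m for row in A), "inner dimensions must match"
    C = [[0] * p for _ in range(n)]
    for i in range(n):
        for k in range(m):
            for j in range(p):
                C[i][j] += A[i][k] * B[k][j]
    return C

print(matmul([[1, 2], [3, 4]], [[5, 6], [7, 8]]))  # [[19, 22], [43, 50]]
```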

Oct 12, 2022

AI equal to humans in text-message mental health trial

Posted by in categories: biotech/medical, information science, neuroscience, robotics/AI

UW Medicine researchers have found that algorithms are as good as trained human evaluators at identifying red-flag language in text messages from people with serious mental illness. This opens a promising area of study that could help with psychiatry training and scarcity of care.

The findings were published in late September in the journal Psychiatric Services.

Text messages are increasingly part of mental health care and evaluation, but these remote psychiatric interventions can lack the emotional reference points that therapists use to navigate in-person conversations with patients.
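
As a toy illustration of the task, and not the UW Medicine model, the crudest automated screen is a weighted lexicon of concerning phrases; the phrases, weights, and threshold below are invented placeholders, and the study’s algorithms are far more sophisticated than this.

```python
# Toy lexicon-based screen for concerning language in messages.
# Phrases, weights, and threshold are invented placeholders.
RED_FLAG_PHRASES = {
    "hopeless": 2, "can't go on": 3, "no reason to live": 3,
    "hurt myself": 3, "worthless": 2, "give up": 1,
}

def flag_message(text: str, threshold: int = 2) -> bool:
    """Return True if the message's cumulative red-flag score meets the threshold."""
    text = text.lower()
    score = sum(weight for phrase, weight in RED_FLAG_PHRASES.items() if phrase in text)
    return score >= threshold

print(flag_message("Had a long day but feeling okay"))         # False
print(flag_message("I feel hopeless, like I should give up"))  # True (score 3)
```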

Oct 12, 2022

Team uses digital cameras, machine learning to predict neurological disease

Posted by in categories: biotech/medical, health, information science, robotics/AI

In an effort to streamline the process of diagnosing patients with multiple sclerosis and Parkinson’s disease, researchers used digital cameras to capture changes in gait—a symptom of these diseases—and developed a machine-learning algorithm that can differentiate those with MS and PD from people without those neurological conditions.

Their findings are reported in the IEEE Journal of Biomedical and Health Informatics.

The goal of the research was to make the process of diagnosing these diseases more accessible, said Manuel Hernandez, a University of Illinois Urbana-Champaign professor of kinesiology who led the work with graduate student Rachneet Kaur and industrial and enterprise systems engineering and mathematics professor Richard Sowers.
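
A rough sketch of the kind of pipeline such a study implies (not the authors’ code): turn a camera-derived ankle trajectory into simple gait features, such as cadence and step-time variability, that a downstream classifier could use. The trajectory below is a synthetic placeholder, not study data.

```python
# Crude gait features from one ankle's vertical trajectory (illustration only).
import numpy as np

def gait_features(ankle_y: np.ndarray, fps: float = 30.0) -> dict:
    """Cadence and step-time variability from a per-frame vertical position signal."""
    # Step events approximated as local minima of the vertical trajectory.
    minima = np.where((ankle_y[1:-1] < ankle_y[:-2]) & (ankle_y[1:-1] < ankle_y[2:]))[0] + 1
    step_times = np.diff(minima) / fps                     # seconds between events
    return {
        "cadence_steps_per_min": 60.0 / step_times.mean() if len(step_times) else 0.0,
        "step_time_variability": step_times.std() if len(step_times) else 0.0,
    }

# Hypothetical trajectory: a noisy periodic signal standing in for real pose data.
t = np.arange(0, 10, 1 / 30)
fake_ankle_y = 50 + 10 * np.sin(2 * np.pi * 1.8 * t) + np.random.default_rng(0).normal(0, 0.2, t.size)
print(gait_features(fake_ankle_y))
```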

Oct 11, 2022

The 5 Biggest Artificial Intelligence (AI) Trends In 2023

Posted by in categories: business, information science, robotics/AI, transportation

Over the last decade, Artificial intelligence (AI) has become embedded in every aspect of our society and lives. From chatbots and virtual assistants like Siri and Alexa to automated industrial machinery and self-driving cars, it’s hard to ignore its impact.

Today, the technology most commonly used to achieve AI is machine learning: advanced software algorithms designed to carry out one specific task, such as answering questions, translating languages or navigating a journey, and to become increasingly good at it as they are exposed to more and more data.

Worldwide, spending by governments and business on AI technology will top $500 billion in 2023, according to IDC research.

Continue reading “The 5 Biggest Artificial Intelligence (AI) Trends In 2023” »

Oct 11, 2022

OpenAI Chief Scientist: Should We Make Godlike AI That Loves Us, or Obeys Us?

Posted by in categories: information science, robotics/AI

A leading artificial intelligence expert is once again shooting from the hip in a cryptic Twitter poll.

In the poll, OpenAI chief scientist Ilya Sutskever asked his followers whether advanced super-AIs should be made “deeply obedient” to their human creators, or if these godlike algorithms should “truly deeply [love] humanity.”

In other words, he seems to be pondering whether we should treat superintelligences like pets — or the other way around. And that’s interesting, coming from the head researcher at the firm behind GPT-3 and DALL-E, two of the most impressive machine learning systems available today.

Oct 11, 2022

AGI-22 | Joscha Bach — It from no Bit: Basic Cosmology from an AI Perspective

Posted by in categories: blockchains, cosmology, information science, robotics/AI, singularity

Joscha Bach is a cognitive scientist focused on cognitive architectures, mental representation, emotion, social modeling, and learning.

Currently a Principal AI Engineer for Cognitive Computing at Intel Labs and the author of the book “Principles of Synthetic Intelligence”, he focuses on how to build machines that can perceive, think and learn.

Continue reading “AGI-22 | Joscha Bach — It from no Bit: Basic Cosmology from an AI Perspective” »

Oct 10, 2022

DeepMind Introduces ‘AlphaTensor,’ An Artificial Intelligence (AI) System For Discovering Novel, Efficient And Exact Algorithms For Matrix Multiplication

Posted by in categories: information science, mathematics, mobile phones, robotics/AI

Improving the efficiency of algorithms for fundamental computations is a crucial task, as it influences the overall pace of the enormous number of computations that depend on them. One such basic task is matrix multiplication, which is found in systems like neural networks and scientific computing routines. Machine learning has the potential to go beyond human intuition and beat the best human-designed algorithms currently available. However, the vast number of possible algorithms makes this process of automated algorithm discovery complicated. DeepMind recently made a breakthrough by developing AlphaTensor, the first artificial intelligence (AI) system for discovering new, efficient, and provably correct algorithms for essential operations like matrix multiplication. Their approach sheds light on a mathematical question that has been open for over 50 years: how to multiply two matrices as quickly as possible.

AlphaZero, an agent that showed superhuman performance in board games like chess, Go, and shogi, is the foundation upon which AlphaTensor is built. The system extends AlphaZero’s progression from playing traditional games to tackling complex mathematical problems for the first time. The team believes this study represents an important milestone in DeepMind’s mission to advance science and use AI to solve the most fundamental problems. The research has been published in the journal Nature.

Matrix multiplication has numerous real-world applications despite being one of the simplest operations taught in school. It is used for many things, including processing images on smartphones, recognizing spoken commands, and rendering graphics for video games. Companies devote substantial resources to developing computing hardware that multiplies matrices efficiently, so even small gains in matrix multiplication efficiency can have a significant impact. The study investigates how contemporary AI approaches could advance the automated discovery of new matrix multiplication algorithms. AlphaTensor goes beyond human intuition to find algorithms that are more efficient than the state of the art for many matrix sizes. Its AI-designed algorithms outperform those created by humans, which represents a significant advance in algorithmic discovery.
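
For context, Strassen’s classical 1969 scheme is the prototype of what AlphaTensor searches for: it multiplies 2-by-2 blocks with seven scalar multiplications instead of eight, and AlphaTensor’s reported 4-by-4 scheme over GF(2) uses 47 multiplications versus the 49 obtained by applying Strassen’s rule recursively. The sketch below shows only the classical baseline, not an AlphaTensor-discovered algorithm.

```python
# Strassen's 1969 scheme: 7 multiplications instead of 8 for a 2x2 block product.
def strassen_2x2(A, B):
    (a11, a12), (a21, a22) = A
    (b11, b12), (b21, b22) = B
    m1 = (a11 + a22) * (b11 + b22)
    m2 = (a21 + a22) * b11
    m3 = a11 * (b12 - b22)
    m4 = a22 * (b21 - b11)
    m5 = (a11 + a12) * b22
    m6 = (a21 - a11) * (b11 + b12)
    m7 = (a12 - a22) * (b21 + b22)
    return [[m1 + m4 - m5 + m7, m3 + m5],
            [m2 + m4,           m1 - m2 + m3 + m6]]

print(strassen_2x2([[1, 2], [3, 4]], [[5, 6], [7, 8]]))  # [[19, 22], [43, 50]]
```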

Oct 9, 2022

From Analog to Digital Computing: Is Homo sapiens’ Brain on Its Way to Become a Turing Machine?

Posted by in categories: information science, robotics/AI

The abstract basis of modern computation is the formal description of a finite state machine, the Universal Turing Machine, based on manipulation of integers and logic symbols. In this contribution to the discourse on the computer-brain analogy, we discuss the extent to which analog computing, as performed by the mammalian brain, is like and unlike the digital computing of Universal Turing Machines. We begin with ordinary reality being a permanent dialog between continuous and discontinuous worlds. So it is with computing, which can be analog or digital, and is often mixed. The theory behind computers is essentially digital, but efficient simulations of phenomena can be performed by analog devices; indeed, any physical calculation requires implementation in the physical world and is therefore analog to some extent, despite being based on abstract logic and arithmetic. The mammalian brain, comprised of neuronal networks, functions as an analog device and has given rise to artificial neural networks that are implemented as digital algorithms but function as analog models would. Analog constructs compute with the implementation of a variety of feedback and feedforward loops. In contrast, digital algorithms allow the implementation of recursive processes that enable them to generate unparalleled emergent properties. We briefly illustrate how the cortical organization of neurons can integrate signals and make predictions analogically. While we conclude that brains are not digital computers, we speculate on the recent implementation of human writing in the brain as a possible digital path that slowly evolves the brain into a genuine (slow) Turing machine.
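
To make the abstract’s reference point concrete, here is a minimal Turing-machine simulator (an illustration, not code from the paper): a finite transition table reads and writes symbols on a tape, shown here incrementing a binary number.

```python
# A minimal Turing-machine simulator (illustration only; not from the paper).
# The transition table below increments a binary number written on the tape.

def run_tm(tape, transitions, state="inc", head=None, blank="_"):
    """Run a deterministic Turing machine until it reaches the 'halt' state."""
    tape = list(tape)
    head = len(tape) - 1 if head is None else head       # start at rightmost bit
    while state != "halt":
        symbol = tape[head] if 0 <= head < len(tape) else blank
        new_symbol, move, state = transitions[(state, symbol)]
        if head < 0:                                      # grow tape on the left
            tape.insert(0, new_symbol)
            head = 0
        elif head >= len(tape):                           # grow tape on the right
            tape.append(new_symbol)
        else:
            tape[head] = new_symbol
        head += 1 if move == "R" else -1
    return "".join(tape).strip(blank)

# (state, symbol) -> (write, move, next_state): binary increment from the right.
INC = {
    ("inc", "1"): ("0", "L", "inc"),   # carry: 1 -> 0, keep moving left
    ("inc", "0"): ("1", "L", "halt"),  # absorb the carry: 0 -> 1, done
    ("inc", "_"): ("1", "L", "halt"),  # ran off the left edge: prepend a 1
}

print(run_tm("1011", INC))  # '1100'
```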

The present essay explores key similarities and differences in the process of computation by the brains of animals and by digital computing, by anchoring the exploration on the essential properties of a Universal Turing Machine, the abstract foundation of modern digital computing. In this context, we try to explicitly distance XVIIIth-century mechanical automata from modern machines, understanding that when computation allows recursion, it changes the consequences of determinism. A mechanical device is usually both deterministic and predictable, while computation involving recursion is deterministic but not necessarily predictable. For example, while it is possible to design an algorithm that computes the decimal digits of π, the value of any finite sequence following the nth digit cannot (yet) be computed, hence predicted, when n is sufficiently large.
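
The π example can be made concrete with a short program; the choice of Machin’s arctangent formula below is illustrative, since the essay does not name an algorithm. The computation is fully deterministic, yet there is no practical shortcut to knowing digits far down the expansion other than actually running it.

```python
# Decimal digits of pi via Machin's formula: pi = 16*arctan(1/5) - 4*arctan(1/239).
from decimal import Decimal, getcontext

def arctan_inv(x: int, digits: int) -> Decimal:
    """arctan(1/x) = sum_{k>=0} (-1)^k / ((2k+1) * x^(2k+1)), in Decimal arithmetic."""
    getcontext().prec = digits + 10
    eps = Decimal(10) ** -(digits + 5)
    total, k = Decimal(0), 0
    term = Decimal(1) / x
    while abs(term) > eps:
        total += term
        k += 1
        term = Decimal((-1) ** k) / ((2 * k + 1) * x ** (2 * k + 1))
    return total

def pi_digits(digits: int) -> str:
    getcontext().prec = digits + 10                       # extra guard digits
    pi = 16 * arctan_inv(5, digits) - 4 * arctan_inv(239, digits)
    return str(pi)[: digits + 2]                          # "3." plus `digits` decimals

print(pi_digits(50))  # 3.14159265358979323846...
```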