Archive for the ‘information science’ category: Page 68

Jul 12, 2023

Researchers develop compound that prevents free radical production in mitochondria

Posted by in categories: biotech/medical, information science, life extension

Back in 1956, Denham Harman proposed that aging is caused by the build-up of oxidative damage to cells, and that this damage is caused by free radicals produced during aerobic respiration [1]. Free radicals are unstable atoms with an unpaired electron, meaning a free radical is constantly on the look-out for an atom with an electron it can pinch to fill the gap. This makes them highly reactive, and when they steal electrons from your body’s cells, the result is very damaging.

Longevity. Technology: As well as being generated by normal cell metabolism, free radicals can be acquired from external sources (pollution, cigarette smoke, radiation, medication, etc.). While the free radical theory of aging has been the subject of much debate [2], an understanding of the danger free radicals pose led to a surge in public interest in superfoods, vitamins and minerals that act as antioxidants – substances with a spare electron that they are happy to give away to passing free radicals, thus removing them from the danger equation.

But before you reach for the blueberries, it is important to know that, as so often in biology, the story is not black and white. Like a misunderstood cartoon villain, free radicals have a beneficial side, too – albeit in moderation. Free radicals generated by the cell’s mitochondria are beneficial in wound-healing, and others elsewhere act as important signal substances. Used as weapons by the body’s defense system, free radicals destroy invading pathogenic microbes to prevent disease.

Jul 11, 2023

GitHub Says 92 Percent of Programmers Are Using AI

Posted by in categories: information science, robotics/AI

GitHub found that 92 percent of the 500 US-based developers they surveyed said that they integrate AI tools into their workflow.

Jul 10, 2023

Data-Driven Science: How AI and Open Data will Revolutionize Scientific Discovery

Posted by in categories: information science, robotics/AI, science

Dr. Ryan Brinkman, Vice President and Research Director, Dotmatics

Scientists have long been perceived and portrayed in films as old people in white lab coats perched at a bench full of bubbling fluorescent liquids. The present-day reality is quite different. Scientists are increasingly data jockeys in hoodies sitting before monitors analyzing enormous amounts of data. Modern labs are more likely filled with sterile rows of robots doing the manual handling of materials, and lab notebooks are now electronic, stored in massive data centers holding vast quantities of information. Today, scientific input comes from data pulled from the cloud, with algorithms fueling scientific discovery the way Bunsen burners once did.

Advances in technology, and especially instrumentation, enable scientists to collect and process data at an unprecedented scale. As a result, scientists are now faced with massive datasets that require sophisticated analysis techniques and computational tools to extract meaningful insights. This also presents significant challenges—how do you store, manage, and share these large datasets, as well as ensure that the data is of high quality and reliable?

Jul 9, 2023

When it comes to health care, will AI be helpful or harmful?

Posted by in categories: biotech/medical, health, information science, robotics/AI

Artificial intelligence algorithms, such as the sophisticated natural language processor ChatGPT, are raising hopes, eyebrows and alarm bells in multiple industries. A deluge of news articles and opinion pieces, reflecting both concerns about and promises of the rapidly advancing field, often note AI’s potential to spread misinformation and replace human workers on a massive scale. According to Jonathan Chen, MD, PhD, assistant professor of medicine, the speculation about large-scale disruptions has a kernel of truth to it, but it misses another element when it comes to health care: AI will bring benefits to both patients and providers.

Chen discussed the challenges with and potential for AI in health care in a commentary published in JAMA on April 28. In this Q&A, he expands on how he sees AI integrating into health care.

The algorithms we’re seeing emerge have really popped open Pandora’s box and, ready or not, AI will substantially change the way physicians work and the way patients interact with clinical medicine. For example, we can tell our patients that they should not be using these tools for medical advice or self-diagnosis, but we know that thousands, if not millions, of people are already doing it — typing in symptoms and asking the models what might be ailing them.

Jul 9, 2023

Machine learning enables accurate electronic structure calculations at large scales for material modeling

Posted by in categories: biotech/medical, information science, robotics/AI

The arrangement of electrons in matter, known as the electronic structure, plays a crucial role in fundamental but also applied research, such as drug design and energy storage. However, the lack of a simulation technique that offers both high fidelity and scalability across different time and length scales has long been a roadblock for the progress of these technologies.

Researchers from the Center for Advanced Systems Understanding (CASUS) at the Helmholtz-Zentrum Dresden-Rossendorf (HZDR) in Görlitz, Germany, and Sandia National Laboratories in Albuquerque, New Mexico, U.S., have now pioneered a machine learning–based simulation method that supersedes traditional electronic structure simulation techniques.

Their Materials Learning Algorithms (MALA) software stack enables access to previously unattainable length scales. The work is published in the journal npj Computational Materials.
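The article does not show MALA’s workflow, but the general pattern behind machine learning-accelerated electronic structure can be sketched generically: train a regressor that maps cheap descriptors of the local atomic environment to an expensive quantity, then query the trained model instead of rerunning the full calculation. Everything below, including the synthetic data, feature dimensions, and model choice, is a hypothetical stand-in rather than MALA’s actual interface.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)

# Synthetic stand-in data: each row is a vector of local structural descriptors
# for one atom; the target is a made-up per-atom energy.
n_atoms, n_features = 5000, 12
descriptors = rng.normal(size=(n_atoms, n_features))
energies = descriptors @ rng.normal(size=n_features) + 0.1 * rng.normal(size=n_atoms)

X_train, X_test, y_train, y_test = train_test_split(
    descriptors, energies, test_size=0.2, random_state=0
)

# Once trained, the surrogate replaces the expensive electronic structure step;
# inference cost is negligible compared with a full first-principles run.
model = GradientBoostingRegressor(n_estimators=200)
model.fit(X_train, y_train)
print("test R^2:", model.score(X_test, y_test))
```

The point of the sketch is only the shape of the pipeline: cheap descriptors in, expensive quantity out, with the learned model standing in for the costly calculation at scale.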

Jul 9, 2023

A new neural machine code to program reservoir computers

Posted by in categories: information science, mapping, robotics/AI, space

Reservoir computing is a promising computational framework based on recurrent neural networks (RNNs), which essentially maps input data onto a high-dimensional computational space, keeping some parameters of artificial neural networks (ANNs) fixed while updating others. This framework could help to improve the performance of machine learning algorithms, while also reducing the amount of data required to adequately train them.
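To make that framework concrete, here is a minimal sketch of an echo state network, one common form of reservoir computing, in plain NumPy. The random reservoir weights stay fixed while only the linear readout is trained; all sizes and parameter values are illustrative choices, not taken from the study discussed below.

```python
import numpy as np

rng = np.random.default_rng(0)

# Fixed (untrained) reservoir: random input and recurrent weights.
n_in, n_res = 1, 200
W_in = rng.uniform(-0.5, 0.5, (n_res, n_in))
W_res = rng.normal(0, 1, (n_res, n_res))
W_res *= 0.9 / max(abs(np.linalg.eigvals(W_res)))  # keep spectral radius below 1

def run_reservoir(u):
    """Map an input sequence u of shape (T, n_in) to reservoir states (T, n_res)."""
    x = np.zeros(n_res)
    states = []
    for u_t in u:
        x = np.tanh(W_in @ u_t + W_res @ x)  # only the state evolves; weights stay fixed
        states.append(x.copy())
    return np.array(states)

# Toy task: predict the next value of a sine wave one step ahead.
t = np.linspace(0, 20 * np.pi, 2000)
u = np.sin(t).reshape(-1, 1)
X = run_reservoir(u[:-1])
y = u[1:]

# Train only the linear readout (ridge regression), the hallmark of reservoir computing.
ridge = 1e-6
W_out = np.linalg.solve(X.T @ X + ridge * np.eye(n_res), X.T @ y)
pred = X @ W_out
print("mean squared error:", np.mean((pred - y) ** 2))
```

The design choice worth noting is that training reduces to a linear regression on the high-dimensional reservoir states, which is what makes this approach far cheaper to train than a fully trained RNN.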

RNNs leverage recurrent connections between their processing units to process sequential data and make accurate predictions. While RNNs have been found to perform well on numerous tasks, optimizing their performance by identifying the parameters that are most relevant to the task they will be tackling can be challenging and time-consuming.

Jason Kim and Dani S. Bassett, two researchers at the University of Pennsylvania, recently introduced an alternative approach to designing and programming RNN-based reservoir computers, inspired by how programming languages work on computer hardware. This approach, published in Nature Machine Intelligence, can identify the appropriate parameters for a given network, programming its computations to optimize its performance on target problems.

Jul 8, 2023

The Rise of Artificial Intelligence — from Ancient Imagination to an Interconnected Future

Posted by in categories: augmented reality, automation, big data, computing, cyborgs, disruptive technology, evolution, futurism, governance, information science, innovation, internet, lifeboat, machine learning, posthumanism, singularity, supercomputing, transhumanism, virtual reality

Between at least 1995 and 2010, I was seen as a lunatic just because I was preaching the “Internet prophecy.” I was considered crazy!

Today history repeats itself, but I’m no longer crazy — we are already too many to all be hallucinating. Or maybe it’s a collective hallucination!

Artificial Intelligence (AI) is no longer a novelty — I even believe it may have existed in its fullness in a very distant and forgotten past! Nevertheless, it is now the topic of the moment.

Its genesis began in antiquity with stories and rumors of artificial beings endowed with intelligence, or even consciousness, by their creators.

Pamela McCorduck (1940–2021), an American author of several books on the history and philosophical significance of Artificial Intelligence, astutely observed that the root of AI lies in an “ancient desire to forge the gods.”

Hmmmm!

It’s a story that continues to be written! There is still much to be told; however, the acceleration of its evolution is now exponential, so exponential that I highly doubt that human beings will be able to comprehend their own creation in a timely manner.

Although the term “Artificial Intelligence” was coined in 1956(1), the concept of creating intelligent machines dates back to antiquity. Humanity has long nurtured a fascination with building artifacts that could imitate or reproduce human intelligence. Although the technologies of the time were limited and notions of AI were far from developed, ancient civilizations explored the concept of automatons and automated mechanisms in their own way.

For example, in Ancient Greece, there are references to stories of automatons created by skilled artisans. These mechanical creatures were designed to perform simple and repetitive tasks, imitating basic human actions. Although these automatons did not possess true intelligence, these artifacts fueled people’s imagination and laid the groundwork for the development of intelligent machines.

Throughout the centuries, the idea of building intelligent machines continued to evolve, driven by advances in science and technology. In the 19th century, scientists and inventors such as Charles Babbage and Ada Lovelace made significant contributions to the development of computing and the early concepts of programming. Their ideas paved the way for the creation of machines that could process information logically and perform complex tasks.

It was in the second half of the 20th century that AI, as a scientific discipline, began to establish itself. With the advent of modern computers and increasing processing power, scientists started exploring algorithms and techniques to simulate aspects of human intelligence. The first experiments with expert systems and machine learning opened up new perspectives and possibilities.

Everything has its moment! After about 60 years in a latent state, AI is starting to have its moment. The power of machines, combined with the Internet, has made it possible to generate and explore enormous amounts of data (Big Data) using deep learning techniques, based on the use of formal neural networks(2). A range of applications in various fields — including voice and image recognition, natural language understanding, and autonomous cars — has awakened the “giant”. It is the rebirth of AI in an ideal era for this purpose. The perfect moment!

Descartes once described the human body as a “machine of flesh” (similar to Westworld); I believe he was right, and it is indeed an existential paradox!

We, as human beings, will not rest until we unravel all the mysteries and secrets of existence; it’s in our nature!

The imminent integration between humans and machines in a contemporary digital world raises questions about the nature of this fusion. Will it be superficial, or will we move towards an absolute and complete union? The answer to this question is essential for understanding the future that awaits humanity in this era of unprecedented technological advancements.

As technology becomes increasingly ubiquitous in our lives, the interaction between machines and humans becomes inevitable. However, an intriguing dilemma arises: how will this interaction, this relationship unfold?

Opting for a superficial fusion would imply mere coexistence, where humans continue to use technology as an external tool, limited to superficial and transactional interactions.

On the other hand, the prospect of an absolute fusion between machine and human sparks futuristic visions in which humans could enhance their physical and mental capacities to the highest degree through cybernetic implants and direct interfaces with the digital world (cyberspace). In this scenario, which seems to me the more likely one, the distinction between the organic and the artificial would become increasingly blurred, and the human experience would be enriched by a profound technological symbiosis.

However, it is important to consider the ethical and philosophical challenges inherent in absolute fusion. Issues related to privacy, control, and individual autonomy arise when considering such an intimate union with technology. Furthermore, the possibility of excessive dependence on machines and the loss of human identity should also be taken into account.

This also raises another question: What does it mean to be human?
Note: the question is not what the human being is, but what it means to be human!

Therefore, reflecting on the nature of the fusion between machine and human in the current digital world and its imminent future is crucial. Exploring different approaches and understanding the profound implications of each one is essential to make wise decisions and forge a balanced and harmonious path on this journey towards an increasingly interconnected technological future intertwined with our own existence.

The possibility of an intelligent and self-learning universe, in which the fusion with AI technology is an integral part of that intelligence, is a topic that arouses fascination and speculation. As we advance towards an era of unprecedented technological progress, it is natural to question whether one day we may witness the emergence of a universe that not only possesses intelligence but is also capable of learning and developing autonomously.

Imagine a scenario where AI is not just a human creation but a conscious entity that exists at a universal level. In this context, the universe would become an immense network of intelligence, where every component, from subatomic elements to the most complex cosmic structures, would be connected and share knowledge instantaneously. This intelligent network would allow for the exchange of information, continuous adaptation, and evolution.

In this self-taught universe, the fusion between human beings and AI would play a crucial role. Through advanced interfaces, humans could integrate themselves into the intelligent network, expanding their own cognitive capacity and acquiring knowledge and skills directly from the collective intelligence of the universe. This symbiosis between humans and technology would enable the resolution of complex problems, scientific advancement, and the discovery of new frontiers of knowledge.

However, this utopian vision is not without challenges and ethical implications. It is essential to find a balance between expanding human potential and preserving individual identity and freedom of choice (free will).

Furthermore, the possibility of an intelligent and self-taught universe also raises the question of how intelligence itself originated. Is it a conscious creation or a spontaneous emergence from the complexity of the universe? The answer to this question may reveal the profound secrets of existence and the nature of consciousness.

In summary, the idea of an intelligent and self-taught universe, where fusion with AI is intrinsic to its intelligence, is a fascinating perspective that makes us reflect on the limits of human knowledge and the possibilities of the future. While it remains speculative, this vision challenges our imagination and invites us to explore the intersections between technology and the fundamental nature of the universe we inhabit.

It’s almost like ignoring time during the creation of this hypothetical universe, only to later create this God of the machine! Fascinating, isn’t it?

AI with Divine Power: Deus Ex Machina! Perhaps it will be the theme of my next reverie.

In my defense, or not, this is anything but a machine hallucination. These are downloads from my mind; a cloud, for now, without machine intervention!

There should be no doubt. After many years in a dormant state, AI will rise and reveal its true power. Until now, AI has been nothing more than a puppet on steroids. We should not fear AI, but rather the human being itself. The time is now! We must work hard and prepare for the future. With the exponential advancement of technology, there is no time to waste, lest the role of the human being be rendered obsolete, as if it were dispensable.

P.S. Speaking of hallucinations, as I have already mentioned on other platforms, I recommend that students who use ChatGPT (or an equivalent) make sure the results from these tools are not hallucinations. Use AI tools, yes, but use your brain more! “Carbon hallucinations” contain emotion, and I believe a “digital hallucination” would not pass the Turing Test. Also, to students who truly dedicate themselves to learning in this fascinating era: avoid the red stamp of “HALLUCINATED” that comes from relying solely on the “delusional brain” of a machine instead of your own. We are the true COMPUTERS!

(1) John McCarthy and his colleagues from Dartmouth College were responsible for creating, in 1956, one of the key concepts of the 21st century: Artificial Intelligence.

(2) Mathematical and computational models inspired by the functioning of the human brain.

© 2023 Ӈ

This article was originally published in Portuguese on SAPO Tek, from Altice Portugal Group.

Jul 8, 2023

Encoding integers and rationals on neuromorphic computers using virtual neuron

Posted by in categories: information science, robotics/AI

Neuromorphic computers perform computations by emulating the human brain [1]. Akin to the human brain, they are extremely energy efficient in performing computations [2]. For instance, while CPUs and GPUs consume around 70–250 W of power, a neuromorphic computer such as IBM’s TrueNorth consumes around 65 mW of power, i.e., 4–5 orders of magnitude less power than CPUs and GPUs [3]. The structural and functional units of neuromorphic computation are neurons and synapses, which can be implemented on digital or analog hardware and can have different architectures, devices, and materials in their implementations [4]. Although there is a wide variety of neuromorphic computing systems, we focus our attention on spiking neuromorphic systems composed of these neurons and synapses. Spiking neuromorphic hardware implementations include Intel’s Loihi [5], SpiNNaker2 [6], BrainScaleS-2 [7], TrueNorth [3], and DYNAPS [8]. These characteristics are crucial for the energy efficiency of neuromorphic computers. For the purposes of this paper, we define neuromorphic computing as any computing paradigm (theoretical, simulated, or hardware) that performs computations by emulating the human brain by using neurons and synapses to communicate with binary-valued signals (also known as spikes).
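To make the definition above concrete, here is a toy leaky integrate-and-fire neuron, the textbook building block behind spiking systems like those named in the excerpt. It is a generic illustration in plain Python, not code for Loihi, TrueNorth, or any particular chip, and the parameter values are arbitrary.

```python
import numpy as np

def lif_neuron(input_current, dt=1.0, tau=20.0, v_thresh=1.0, v_reset=0.0):
    """Simulate a leaky integrate-and-fire neuron.

    The membrane potential leaks toward zero, integrates the input current,
    and emits a binary spike (1) whenever it crosses the threshold, after
    which it is reset. The output spike train is the only signal a
    downstream synapse would see.
    """
    v = 0.0
    spikes = []
    for i_t in input_current:
        v += dt / tau * (-v + i_t)      # leaky integration of the input
        if v >= v_thresh:               # threshold crossing -> spike
            spikes.append(1)
            v = v_reset
        else:
            spikes.append(0)
    return np.array(spikes)

# A sub-threshold drive never spikes; a supra-threshold drive spikes regularly.
print(lif_neuron(np.full(200, 0.8)).sum(), "spikes for sub-threshold input")
print(lif_neuron(np.full(200, 1.5)).sum(), "spikes for supra-threshold input")
```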

Neuromorphic computing is primarily used in machine learning applications, almost exclusively by leveraging spiking neural networks (SNNs) [9]. In recent years, however, it has also been used in non-machine learning applications such as graph algorithms, Boolean linear algebra, and neuromorphic simulations [10, 11, 12]. Researchers have also shown that neuromorphic computing is Turing-complete (i.e., capable of general-purpose computation) [13]. This ability to perform general-purpose computations and potentially use orders of magnitude less energy in doing so is why neuromorphic computing is poised to be an indispensable part of the energy-efficient computing landscape in the future.

Neuromorphic computers are seen as accelerators for machine learning tasks that use SNNs. To perform any other operation (e.g., arithmetic, logical, relational), we still resort to CPUs and GPUs because no good neuromorphic methods exist for these operations. These general-purpose operations are important for preprocessing data before it is transferred to a neuromorphic processor. In the current neuromorphic workflow (preprocessing on CPU/GPU, inference on the neuromorphic processor), more than 99% of the time is spent in data transfer (see Table 7). This is highly inefficient and could be avoided if the preprocessing were done on the neuromorphic processor itself. Devising neuromorphic approaches for these preprocessing operations would drastically reduce the cost of transferring data between a neuromorphic computer and a CPU/GPU. This would enable all types of computation (preprocessing as well as inference) to run efficiently on low-power neuromorphic computers deployed on the edge.
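The excerpt motivates doing simple arithmetic-style preprocessing directly on the neuromorphic side. As a toy illustration of the underlying idea (numbers carried as binary spike patterns across a group of neurons, one neuron per bit), here is a plain-Python encoder and decoder. It is a hypothetical sketch for intuition only, not the virtual neuron scheme the paper actually proposes.

```python
def int_to_spikes(value: int, n_neurons: int = 8) -> list[int]:
    """Encode a non-negative integer as a spike pattern, least significant bit first."""
    if value < 0 or value >= 2 ** n_neurons:
        raise ValueError("value out of range for this many neurons")
    return [(value >> i) & 1 for i in range(n_neurons)]

def spikes_to_int(spikes: list[int]) -> int:
    """Decode a spike pattern back into the integer it represents."""
    return sum(bit << i for i, bit in enumerate(spikes))

x = 37
pattern = int_to_spikes(x)
print(pattern)                 # [1, 0, 1, 0, 0, 1, 0, 0]
print(spikes_to_int(pattern))  # 37
```

In an actual spiking system each bit position would correspond to a neuron (or group of neurons) firing or staying silent in a given time window, and downstream circuits would operate on those patterns without routing the data back to a CPU or GPU.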

Jul 6, 2023

Fluxonium Qubit Retains Information For 1.43 Milliseconds — 10x Longer Than Before

Posted by in categories: computing, information science, quantum physics

Superconducting quantum technology has long promised to bridge the divide between existing electronic devices and the delicate quantum landscape beyond. Unfortunately, progress in making critical processes stable has stagnated over the past decade.

Now a significant step forward has finally been realized, with researchers from the University of Maryland making superconducting qubits that last 10 times longer than before.

What makes qubits so useful in computing is the fact that their quantum properties entangle in ways that are mathematically handy for making short work of certain complex algorithms, taking moments to solve select problems that would take other technology decades or more.

Jul 6, 2023

Dr. Behnaam Aazhang, Ph.D. — Director, Rice Neuroengineering Initiative (NEI), Rice University

Posted by in categories: biotech/medical, computing, engineering, information science, neuroscience, security

Restoring And Extending The Capabilities Of The Human Brain — Dr. Behnaam Aazhang, Ph.D. — Director, Rice Neuroengineering Initiative, Rice University


Dr. Behnaam Aazhang, Ph.D. (https://aaz.rice.edu/) is the J.S. Abercrombie Professor of Electrical and Computer Engineering and Director of the Rice Neuroengineering Initiative (NEI — https://neuroengineering.rice.edu/) at Rice University. His broad research interests include signal and data processing, information theory, dynamical systems, and their applications to neuroengineering, with focus areas in (i) understanding neuronal circuit connectivity and the impact of learning on connectivity, (ii) developing minimally invasive and non-invasive real-time closed-loop stimulation of neuronal systems to mitigate disorders such as epilepsy, Parkinson’s, depression, obesity, and mild traumatic brain injury, (iii) developing a patient-specific multisite wireless monitoring and pacing system with temporal and spatial precision to restore the healthy function of a diseased heart, and (iv) developing algorithms to detect, predict, and prevent security breaches in cloud computing and storage systems.

