Archive for the ‘information science’ category: Page 147

Feb 7, 2022

A New Trick Lets Artificial Intelligence See in 3D

Posted in categories: entertainment, information science, robotics/AI

Some algorithms can now compose a 3D scene from 2D images—creating possibilities in video games, robotics, and autonomous driving.

Feb 7, 2022

Alistair Fulton — Connecting & Enabling A Smarter Planet — VP, Wireless & Sensing Products, Semtech

Posted in categories: computing, information science, internet, satellites

Alistair Fulton (https://www.semtech.com/company/executive-leadership/alistair-fulton) is the Vice President and General Manager of Semtech’s Wireless and Sensing Products Group.


Feb 7, 2022

Astronomers spot a wandering black hole in empty space for the first time

Posted in categories: climatology, cosmology, existential risks, information science, robotics/AI, sustainability

Machine learning can work wonders, but it’s only one tool among many.

Artificial intelligence is among the most poorly understood technologies of the modern era. To many, AI exists as both a tangible but ill-defined reality of the here and now and an unrealized dream of the future, a marvel of human ingenuity, as exciting as it is opaque.

It’s this indistinct picture of both what the technology is and what it can do that might engender a look of uncertainty on someone’s face when asked the question, “Can AI solve climate change?” “Well,” we think, “it must be able to do *something*,” while entirely unsure of just how algorithms are meant to pull us back from the ecological brink.


Feb 6, 2022

AI learns physics to optimize particle accelerator performance

Posted in categories: biotech/medical, finance, information science, robotics/AI

Machine learning, a form of artificial intelligence, vastly speeds up computational tasks and enables new technology in areas as broad as speech and image recognition, self-driving cars, stock market trading and medical diagnosis.

Before going to work on a given task, algorithms typically need to be trained on pre-existing data so they can learn to make fast and accurate predictions about future scenarios on their own. But what if the job is a completely new one, with no data available for training?

Now, researchers at the Department of Energy’s SLAC National Accelerator Laboratory have demonstrated that they can use machine learning to optimize the performance of particle accelerators by teaching the algorithms the basic principles behind operations—no prior data needed.
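The core idea — train on data generated by a physics model instead of on measurements — can be illustrated with a minimal sketch. Everything here is hypothetical: the quadratic "beam response" stands in for real accelerator physics, and the polynomial surrogate stands in for the neural networks SLAC actually uses.

```python
import numpy as np

# Hypothetical physics model: beam spot size as a function of one quadrupole
# magnet setting. In the real work the model encodes accelerator physics;
# here a toy quadratic response with a minimum near 1.3 is used instead.
def physics_model(quad_setting):
    return (quad_setting - 1.3) ** 2 + 0.05

# Generate training data from the physics model alone -- no measured data.
settings = np.linspace(0.0, 3.0, 200)
spot_sizes = physics_model(settings)

# Fit a simple polynomial surrogate to the simulated data.
coeffs = np.polyfit(settings, spot_sizes, deg=2)
surrogate = np.poly1d(coeffs)

# Use the surrogate to pick the setting that minimizes predicted spot size.
candidates = np.linspace(0.0, 3.0, 1000)
best = candidates[np.argmin(surrogate(candidates))]
print(round(best, 2))  # close to the true optimum at 1.3
```

The point of the sketch is the workflow, not the model: once the surrogate is trained on simulated physics, it can guide optimization of a machine it has never seen data from.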

Feb 4, 2022

Removing water from underwater photography

Posted in category: information science

A new algorithm for underwater photography makes marine life appear as clear as it would on land, and it’s helping scientists understand the ocean better.
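Physics-based underwater color correction typically inverts a depth-dependent attenuation model: water absorbs red light much faster than blue, so the red channel is boosted exponentially with the length of the water column. The sketch below is a loose illustration of that idea only; the coefficients and the function name are invented for the example and are not the published algorithm.

```python
import numpy as np

# Toy depth-dependent attenuation correction. Per-channel attenuation
# coefficients (beta) and backscatter values are illustrative, chosen so
# that red attenuates fastest, as it does in seawater.
def correct_underwater(pixel_rgb, depth_m, beta=(0.40, 0.10, 0.02),
                       backscatter=(0.05, 0.03, 0.02)):
    """Recover an approximate surface color from an underwater pixel.

    pixel_rgb: observed RGB values in [0, 1]
    depth_m:   water column length in meters
    """
    corrected = []
    for c, b, bs in zip(pixel_rgb, beta, backscatter):
        direct = max(c - bs, 0.0)                        # remove backscatter veil
        corrected.append(min(direct * np.exp(b * depth_m), 1.0))
    return corrected

# A reddish object photographed through 5 m of water looks blue-green;
# the correction boosts the heavily attenuated red channel the most.
observed = [0.10, 0.35, 0.45]
print(correct_underwater(observed, depth_m=5.0))
```

Real methods additionally estimate depth and the attenuation coefficients from the image itself rather than assuming them, which is where most of the difficulty lies.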

Feb 3, 2022

Mimicking the brain to realize ‘human-like’ virtual assistants

Posted in categories: information science, robotics/AI

Speech is more than just a form of communication. A person’s voice conveys emotions and personality and is a unique trait we can recognize. Our use of speech as a primary means of communication is a key reason for the development of voice assistants in smart devices and technology. Typically, virtual assistants analyze speech and respond to queries by converting the received speech signals into a model they can understand and process to generate a valid response. However, they often have difficulty capturing and incorporating the complexities of human speech and end up sounding very unnatural.

Now, in a study published in the journal IEEE Access, Professor Masashi Unoki from the Japan Advanced Institute of Science and Technology (JAIST) and Dung Kim Tran, a doctoral student at JAIST, have developed a system that can capture the information in speech signals in a way similar to how humans perceive speech.

“In humans, the auditory periphery converts the information contained in input speech signals into neural activity patterns (NAPs) that the brain can identify. To emulate this function, we used a matching pursuit algorithm to obtain sparse representations of speech signals, or signal representations with the minimum possible significant coefficients,” explains Prof. Unoki. “We then used psychoacoustic principles, such as the equivalent rectangular bandwidth scale, gammachirp function, and masking effects to ensure that the auditory sparse representations are similar to that of the NAPs.”
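Matching pursuit, the algorithm named in the quote, greedily approximates a signal as a sparse combination of "atoms" from a dictionary. A minimal generic sketch is below; note the JAIST work uses auditory atoms built from gammachirp functions and psychoacoustic constraints, whereas this example uses random unit-norm atoms purely to show the mechanics.

```python
import numpy as np

# Minimal matching pursuit: repeatedly pick the dictionary atom most
# correlated with the residual, record its coefficient, and subtract
# its contribution.
def matching_pursuit(signal, dictionary, n_atoms=5):
    residual = signal.astype(float).copy()
    coeffs = np.zeros(dictionary.shape[1])
    for _ in range(n_atoms):
        correlations = dictionary.T @ residual
        k = np.argmax(np.abs(correlations))   # best-matching atom
        coeffs[k] += correlations[k]
        residual -= correlations[k] * dictionary[:, k]
    return coeffs, residual

rng = np.random.default_rng(0)
D = rng.normal(size=(64, 128))
D /= np.linalg.norm(D, axis=0)                # unit-norm atoms
x = 2.0 * D[:, 3] - 1.5 * D[:, 70]            # signal built from two atoms
coeffs, residual = matching_pursuit(x, D, n_atoms=10)
print(np.linalg.norm(residual))               # shrinks as atoms are added
```

The "minimum possible significant coefficients" in the quote corresponds to stopping after few iterations: most entries of `coeffs` stay zero, giving a sparse representation.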

Feb 3, 2022

Does AI Improve Human Judgment?

Posted in categories: business, information science, robotics/AI

Decision-making has mostly revolved around learning from mistakes and making gradual, steady improvements. Evolutionary experience has served humans well in this regard, so it is safe to say that most decisions human beings make are based on trial and error. Additionally, humans rely heavily on data to make key decisions: the larger the amount of high-integrity data available, the more balanced and rational their decisions will be. In the age of big data analytics, however, businesses and governments around the world are reluctant to rely on basic human instinct and know-how alone to make major decisions. Statistically, a large percentage of companies globally use big data for this purpose. As a result, the application of AI in decision-making is being adopted more widely today than in the past.

However, there are several debatable aspects of using AI in decision-making. Firstly, are *all* the decisions made with inputs from AI algorithms correct? And does the involvement of AI in decision-making cause avoidable problems? Read on to find out. The involvement of AI in decision-making simplifies the process of making strategies for businesses and governments around the world. However, AI has had its fair share of missteps on several occasions.

Feb 3, 2022

Mathematicians Prove 30-Year-Old André-Oort Conjecture

Posted in categories: information science, mathematics

“The methods used to approach it cover, I would say, the whole of mathematics,” said Andrei Yafaev of University College London.

The new paper begins with one of the most basic but provocative questions in mathematics: When do polynomial equations like x³ + y³ = z³ have integer solutions (solutions in the positive and negative counting numbers)? In 1994, Andrew Wiles solved a version of this question, known as Fermat’s Last Theorem, in one of the great mathematical triumphs of the 20th century.
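The flavor of the question is easy to experience computationally. Fermat's Last Theorem for exponent 3 says x³ + y³ = z³ has no solutions in positive integers; a brute-force search over a small range finds none, though of course this is evidence rather than proof — the proof requires deep machinery.

```python
# Search for positive-integer solutions of x^3 + y^3 = z^3 up to a bound.
# Fermat's Last Theorem (n = 3) says the result is always empty.
def find_cube_solutions(limit):
    cubes = {z ** 3: z for z in range(1, limit)}
    hits = []
    for x in range(1, limit):
        for y in range(x, limit):
            if x ** 3 + y ** 3 in cubes:
                hits.append((x, y, cubes[x ** 3 + y ** 3]))
    return hits

print(find_cube_solutions(200))  # [] -- no solutions found
```

Contrast with exponent 2, where the same search would return infinitely many Pythagorean triples as the bound grows; that gap between exponents is exactly what makes the question deep.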

In the quest to solve Fermat’s Last Theorem and problems like it, mathematicians have developed increasingly abstract theories that spark new questions and conjectures. Two such problems, stated in 1989 and 1995 by Yves André and Frans Oort, respectively, led to what’s now known as the André-Oort conjecture. Instead of asking about integer solutions to polynomial equations, the André-Oort conjecture is about solutions involving far more complicated geometric objects called Shimura varieties.

Feb 2, 2022

Chip designer mimicking brain, backed by Sam Altman, gets $25 million funding

Posted in categories: information science, robotics/AI

(Reuters) — Rain Neuromorphics Inc., a startup designing chips that mimic the way the brain works and aims to serve companies using artificial intelligence (AI) algorithms, said on Wednesday it raised $25 million.

Gordon Wilson, CEO and co-founder of Rain, said that while most AI chips on the market today are digital, his company’s technology is analogue. Digital chips read 1s and 0s, while analogue chips can process incremental information such as sound waves.

Feb 1, 2022

This AI Learned the Design of a Million Algorithms to Help Build New AIs Faster

Posted in categories: information science, robotics/AI

Might there be a better way? Perhaps.

A new paper published on the preprint server arXiv describes how a type of algorithm called a “hypernetwork” could make the training process much more efficient. The hypernetwork in the study learned the internal connections (or parameters) of a million example algorithms so it could pre-configure the parameters of new, untrained algorithms.

The AI, called GHN-2, can predict and set the parameters of an untrained neural network in a fraction of a second. And in most cases, the algorithms using GHN-2’s parameters performed as well as algorithms that had cycled through thousands of rounds of training.
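The shape of the idea — one network emits the parameters of another in a single forward pass — can be sketched very simply. GHN-2 itself is a graph network trained over real architectures; everything below (the linear hypernetwork, the architecture descriptor, the function names) is an invented toy that only illustrates the interface.

```python
import numpy as np

rng = np.random.default_rng(0)

# Number of weights in a fully connected network with the given layer sizes
# (biases omitted for brevity).
def target_param_count(layer_sizes):
    return sum(a * b for a, b in zip(layer_sizes, layer_sizes[1:]))

# Toy "hypernetwork": a fixed linear map from an architecture descriptor to
# a flat weight vector for the target network.
def make_hypernetwork(max_params, descriptor_dim=4):
    W = rng.normal(scale=0.1, size=(max_params, descriptor_dim))
    def predict_params(layer_sizes):
        d = np.zeros(descriptor_dim)
        d[:len(layer_sizes)] = layer_sizes    # encode the architecture
        n = target_param_count(layer_sizes)
        return (W @ d)[:n]                    # one forward pass, no training
    return predict_params

predict_params = make_hypernetwork(max_params=10_000)
theta = predict_params([8, 16, 4])            # 8*16 + 16*4 = 192 weights
print(theta.shape)                            # (192,)
```

The expensive part in practice is training the hypernetwork itself; once that is done, parameter prediction for a new architecture costs a single forward pass, which is why it takes "a fraction of a second."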