Archive for the ‘information science’ category: Page 174

Jun 26, 2021

The Early Universe Explained by Neil deGrasse Tyson

Posted by in categories: cosmology, information science, mathematics, neuroscience, nuclear energy, particle physics, singularity

Neil deGrasse Tyson explains the early state of our Universe. At the beginning of the universe, ordinary space and time developed out of a primeval state, where all the matter and energy of the entire visible universe was contained in a hot, dense point called a gravitational singularity — a billionth the size of a nuclear particle.

While we cannot imagine the entirety of the visible universe being a billion times smaller than a nuclear particle, that shouldn’t deter us from wondering about the early state of our universe. However, dealing with such extreme scales is immensely counterintuitive, and our evolved brains and senses have no capacity to grasp the depths of reality at the beginning of cosmic time. Therefore, scientists develop mathematical frameworks to describe the early universe.

Continue reading “The Early Universe Explained by Neil deGrasse Tyson” »

Jun 25, 2021

How AI is driving a future of autonomous warfare | DW Analysis

Posted by in categories: cybercrime/malcode, information science, mapping, military, nuclear energy, robotics/AI

The artificial intelligence revolution is just getting started, but it is already transforming conflict. Militaries everywhere, from the superpowers to tiny states, are seizing on autonomous weapons as essential to surviving the wars of the future. But this mounting arms-race dynamic could lead the world to dangerous places, with algorithms interacting so fast that they are beyond human control — uncontrolled escalation, even wars that erupt without any human input at all.

DW maps out the future of autonomous warfare, based on conflicts we have already seen – and predictions from experts of what will come next.

Continue reading “How AI is driving a future of autonomous warfare | DW Analysis” »

Jun 25, 2021

MIT Makes a Significant Advance Toward the Full Realization of Quantum Computation

Posted by in categories: computing, engineering, information science, quantum physics

MIT researchers demonstrate a way to sharply reduce errors in two-qubit gates, a significant advance toward fully realizing quantum computation.

MIT researchers have made a significant advance on the road toward the full realization of quantum computation, demonstrating a technique that eliminates common errors in the most essential operation of quantum algorithms, the two-qubit operation or “gate.”

“Despite tremendous progress toward being able to perform computations with low error rates with superconducting quantum bits (qubits), errors in two-qubit gates, one of the building blocks of quantum computation, persist,” says Youngkyu Sung, an MIT graduate student in electrical engineering and computer science who is the lead author of a paper on this topic published on June 16, 2021, in Physical Review X. “We have demonstrated a way to sharply reduce those errors.”
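Why two-qubit gate errors are the bottleneck is easy to see with back-of-the-envelope arithmetic (illustrative numbers only, not figures from the MIT paper): if each gate fails independently with probability p, a circuit of n gates succeeds with probability roughly (1 − p)^n, so even small per-gate errors compound ruinously in deep circuits.

```python
# Illustrative arithmetic (not from the paper): how per-gate error
# rates compound across a deep quantum circuit. Assumes independent
# gate failures, a simplification.

def circuit_success(p_gate: float, n_gates: int) -> float:
    """Probability that every gate in an n-gate circuit succeeds."""
    return (1.0 - p_gate) ** n_gates

# A 1% two-qubit error rate effectively destroys a 1,000-gate circuit,
low = circuit_success(0.01, 1000)
# while a 0.1% rate leaves the circuit succeeding about a third of the time.
high = circuit_success(0.001, 1000)
```

This is why shaving even fractions of a percent off two-qubit gate error rates matters so much more than it sounds.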

Jun 25, 2021

Continuous-capture microwave imaging

Posted by in categories: computing, information science, space

Advanced uses of time in image rendering and reconstruction have been the focus of much scientific research in recent years. The motivation comes from the equivalence between space and time given by the finite speed of light c. This equivalence leads to correlations between the time evolution of electromagnetic fields at different points in space. Applications exploiting such correlations, known as time-of-flight (ToF) [1] and light-in-flight (LiF) [2] cameras, operate at various regimes from radio [3,4] to optical [5] frequencies. Time-of-flight imaging focuses on reconstructing a scene by measuring delayed stimulus responses via continuous wave, impulses or pseudo-random binary sequence (PRBS) codes [1]. Light-in-flight imaging, also known as transient imaging [6], explores light transport and detection [2,7]. The combination of ToF and LiF has recently yielded higher accuracy and detail in the reconstruction process, especially in non-line-of-sight images, with the inclusion of higher-order scattering and physical processes such as Rayleigh–Sommerfeld diffraction [8] in the modeling. However, these methods require experimental characterization of the scene followed by large computational overheads that produce images at low frame rates in the optical regime. In the radio-frequency (RF) regime, 3D images at frame rates of 30 Hz have been produced with an array of 256 wide-band transceivers [3]. Microwave imaging has the additional capability of sensing through optically opaque media such as walls. Nonetheless, synthetic aperture radar reconstruction algorithms such as the one proposed in ref. [3] required each transceiver in the array to operate individually, thus leaving room for improvements in image frame rates from continuous transmit-receive captures. Constructions using beamforming have similar challenges [9], where a narrow focused beam scans a scene using an array of antennas and frequency-modulated continuous wave (FMCW) techniques.

In this article, we develop an inverse light transport model [10] for microwave signals. The model uses a spatiotemporal mask generated by multiple sources, each emitting different PRBS codes, and a single detector, all operating in continuous synchronous transmit-receive mode. This model allows image reconstructions with capture times of the order of microseconds and no prior scene knowledge. For first-order reflections, the algorithm reduces to a single dot product between the reconstruction matrix and captured signal, and can be executed in a few milliseconds. We demonstrate this algorithm through simulations and measurements performed using realistic scenes in a laboratory setting. We then use the second-order terms of the light transport model to reconstruct scene details not captured by the first-order terms.

We start by estimating the information capacity of the scene and develop the light transport equation for the transient imaging model with arguments borrowed from basic information and electromagnetic field theory. Next, we describe the image reconstruction algorithm as a series of approximations corresponding to multiple scatterings of the spatiotemporal illumination matrix. Specifically, we show that in the first-order approximation, the value of each pixel is the dot product between the captured time series and a unique time signature generated by the spatiotemporal electromagnetic field mask. Next, we show how the second-order approximation generates hidden features not accessible in the first-order image. Finally, we apply the reconstruction algorithm to simulated and experimental data and discuss the performance, strengths, and limitations of this technique.
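As a rough illustration of that first-order step, here is a minimal Python sketch in which each pixel's value is the dot product between the captured time series and that pixel's unique time signature. The shapes, noise level, and random ±1 signature matrix are hypothetical stand-ins; in the actual system the signatures arise from the propagation-delayed PRBS codes of the multiple sources and the scene geometry.

```python
import numpy as np

# Toy first-order reconstruction: image = signatures @ captured signal.
# Signatures here are random +/-1 sequences standing in for the
# spatiotemporal PRBS illumination mask described in the article.
rng = np.random.default_rng(0)

n_samples = 4096          # length of the captured time series
n_pixels = 64             # pixels in the reconstructed image

# One pseudo-random time signature per scene pixel.
signatures = rng.choice([-1.0, 1.0], size=(n_pixels, n_samples))

# Synthesize a capture: two reflective scene points plus detector noise.
true_image = np.zeros(n_pixels)
true_image[[5, 40]] = 1.0
captured = signatures.T @ true_image + 0.1 * rng.standard_normal(n_samples)

# First-order reconstruction: a single matrix-vector dot product,
# which is why it runs in milliseconds with no prior scene knowledge.
image = signatures @ captured / n_samples
```

The reason a single dot product suffices is that pseudo-random binary codes have low cross-correlation, so each pixel's signature picks its own contribution out of the continuous capture while the other pixels average toward zero.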

Jun 25, 2021

An AI algorithm just completed a famous Rembrandt painting

Posted by in categories: information science, military, robotics/AI

And they say computers can’t create art.


In 1642, famous Dutch painter Rembrandt van Rijn completed a large painting called Militia Company of District II under the Command of Captain Frans Banninck Cocq — today, the painting is commonly referred to as The Night Watch. It was the height of the Dutch Golden Age, and The Night Watch brilliantly showcased that.

The painting measured 363 cm × 437 cm (11.91 ft × 14.34 ft) — so big that the characters in it were almost life-sized, but that’s only the start of what makes it so special. Rembrandt made dramatic use of light and shadow and also created the perception of motion in what would normally be a stationary military group portrait. Unfortunately, though, the painting was trimmed in 1715 to fit between two doors at Amsterdam City Hall.

Continue reading “An AI algorithm just completed a famous Rembrandt painting” »

Jun 25, 2021

An autonomous drone for search and rescue in forests using optical sectioning algorithm

Posted by in categories: drones, information science, robotics/AI

A team of researchers working at Johannes Kepler University has developed an autonomous drone with a new type of technology to improve search-and-rescue efforts. In their paper published in the journal Science Robotics, the group describes their drone modifications. Andreas Birk with Jacobs University Bremen has published a Focus piece in the same journal issue outlining the work by the team in Austria.

Finding people lost (or hiding) in the forest is difficult because of the tree cover. People in planes and helicopters have difficulty seeing through the canopy to the ground below, where people might be walking or even lying down. The same problem exists for thermal applications — heat sensors cannot pick up readings adequately through the canopy. Efforts have been made to add drones to search-and-rescue operations, but they suffer from the same problems because they are remotely controlled by pilots using them to search the ground below. In this new effort, the researchers have added new technology that helps both to see through the tree canopy and to highlight people who might be under it.

Continue reading “An autonomous drone for search and rescue in forests using optical sectioning algorithm” »
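The core idea behind optical sectioning of this kind is synthetic-aperture integration: images captured at many positions along the flight path are aligned on the ground plane and averaged, so the ground stays sharp while the canopy, sitting at a different depth, lands at a different offset in every view and blurs away. A toy 1-D sketch of that effect, with all shapes and values purely illustrative rather than taken from the paper:

```python
import numpy as np

# Toy 1-D optical-sectioning sketch: average many views aligned on
# the ground plane; an occluder at a different depth has parallax,
# so it shifts between views and is diluted by the averaging.
n_views, width = 41, 200
ground = np.zeros(width)
ground[100] = 1.0                    # a person on the forest floor

frames = []
for v in range(n_views):
    offset = v - n_views // 2        # camera position along the flight path
    frame = ground.copy()            # ground plane is pre-aligned (zero parallax)
    # Canopy patch at a different depth lands at a different column
    # in each aligned view, blocking whatever is behind it.
    frame[100 + offset] = 0.3        # canopy occludes / overwrites this column
    frames.append(frame)

# Integrate over the synthetic aperture: the person adds up across
# views, while the wandering canopy patch averages toward zero.
integral = np.mean(frames, axis=0)
```

In this toy the person is occluded in only one of the 41 views, so their pixel integrates to nearly full brightness while each canopy-contaminated column retains only about 1/41 of the canopy signal — which is exactly why more views (a longer synthetic aperture) suppress occlusion better.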

Jun 24, 2021

New algorithm helps autonomous vehicles find themselves, summer or winter

Posted by in categories: information science, robotics/AI, transportation

Without GPS, autonomous systems get lost easily. Now a new algorithm developed at Caltech allows autonomous systems to recognize where they are simply by looking at the terrain around them—and for the first time, the technology works regardless of seasonal changes to that terrain.

Details about the process were published on June 23 in the journal Science Robotics.

The general process, known as visual terrain-relative navigation (VTRN), was first developed in the 1960s. By comparing nearby terrain to high-resolution satellite images, autonomous systems can locate themselves.
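The matching step at the heart of VTRN can be sketched as template matching: slide the vehicle's observed terrain patch over a reference satellite map and pick the position with the highest normalized correlation. The Caltech advance is a learned representation that keeps this match working across seasonal appearance changes; the sketch below is a hedged stand-in using raw pixels and synthetic data, not their method.

```python
import numpy as np

# Toy terrain-relative localization: find where a noisy observed
# patch best matches a reference map via normalized correlation.
rng = np.random.default_rng(1)
reference = rng.standard_normal((64, 64))     # stand-in satellite map
true_row, true_col = 20, 33
patch = reference[true_row:true_row + 16, true_col:true_col + 16].copy()
patch += 0.2 * rng.standard_normal(patch.shape)   # sensor noise

def locate(patch, reference):
    """Return the (row, col) of the window best correlated with patch."""
    ph, pw = patch.shape
    p = (patch - patch.mean()) / patch.std()
    best, best_pos = -np.inf, None
    for r in range(reference.shape[0] - ph + 1):
        for c in range(reference.shape[1] - pw + 1):
            w = reference[r:r + ph, c:c + pw]
            score = np.sum(p * (w - w.mean()) / w.std())
            if score > best:
                best, best_pos = score, (r, c)
    return best_pos
```

Raw-pixel correlation like this fails when summer imagery is matched against a snow-covered winter map — the failure mode the Caltech team addresses by learning season-invariant features before matching.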

Jun 23, 2021

Deep reinforcement learning will transform manufacturing as we know it

Posted by in categories: economics, information science, robotics/AI, transportation

If you walk down the street shouting out the names of every object you see — garbage truck! bicyclist! sycamore tree! — most people would not conclude you are smart. But if you go through an obstacle course, showing them how to navigate a series of challenges and reach the end unscathed, they would.

Most machine learning algorithms are shouting names in the street. They perform perceptive tasks that a person can do in under a second. But another kind of AI — deep reinforcement learning — is strategic. It learns how to take a series of actions in order to reach a goal. That’s powerful and smart — and it’s going to change a lot of industries.

Two industries on the cusp of AI transformations are manufacturing and supply chain. The ways we make and ship stuff are heavily dependent on groups of machines working together, and the efficiency and resiliency of those machines are the foundation of our economy and society. Without them, we can’t buy the basics we need to live and work.
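The "series of actions toward a goal" idea above is the essence of reinforcement learning, and its simplest tabular form fits in a few lines. The sketch below is a toy: an agent on a one-dimensional line of cells learns to walk right to a reward. Deep reinforcement learning replaces the lookup table with a neural network and the toy line with a simulated factory or supply chain; all names and parameters here are illustrative.

```python
import random

# Minimal tabular Q-learning: learn a sequence of actions that
# reaches a goal state, rather than a one-shot perceptive label.
N_STATES, GOAL = 6, 5            # states 0..5, reward at state 5
ACTIONS = (1, -1)                # step right or left
alpha, gamma, epsilon = 0.5, 0.9, 0.1

Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
rng = random.Random(0)

for episode in range(200):
    s = 0
    while s != GOAL:
        # Epsilon-greedy: mostly exploit current knowledge, sometimes explore.
        if rng.random() < epsilon:
            a = rng.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda act: Q[(s, act)])
        s_next = min(max(s + a, 0), N_STATES - 1)
        r = 1.0 if s_next == GOAL else 0.0
        # Q-learning update: bootstrap from the best action at the next state.
        best_next = max(Q[(s_next, act)] for act in ACTIONS)
        Q[(s, a)] += alpha * (r + gamma * best_next - Q[(s, a)])
        s = s_next

# Greedy policy after training: step right from every non-goal state.
policy = {s: max(ACTIONS, key=lambda act: Q[(s, act)]) for s in range(GOAL)}
```

The discount factor gamma is what makes this strategic: states closer to the goal earn higher values, so the agent learns to prefer actions whose payoff only arrives several steps later.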

Jun 22, 2021

A Robot Has Learned to Combine Vision and Touch

Posted by in categories: information science, robotics/AI

Summary: Combining deep learning algorithms with robotic engineering, researchers have developed a new robot able to combine vision and touch.

Source: EBRAINS / Human Brain Project.

Continue reading “A Robot Has Learned to Combine Vision and Touch” »

Jun 22, 2021

Bugs in NVIDIA’s Jetson Chipset Open Door to DoS Attacks, Data Theft

Posted by in categories: cybercrime/malcode, drones, information science, internet, robotics/AI

Chipmaker patches nine high-severity bugs in its Jetson SoC framework tied to the way it handles low-level cryptographic algorithms.

Flaws impacting millions of internet of things (IoT) devices running NVIDIA’s Jetson chips open the door for a variety of hacks, including denial-of-service (DoS) attacks or the siphoning of data.

NVIDIA released patches addressing nine high-severity vulnerabilities, along with eight additional bugs of lower severity. The patches cover a wide swath of NVIDIA’s chipsets typically used for embedded computing systems, machine-learning applications and autonomous devices such as robots and drones.
Impacted products include the Jetson chipset series: AGX Xavier, Xavier NX/TX1, Jetson TX2 (including Jetson TX2 NX) and Jetson Nano devices (including Jetson Nano 2GB) found in the NVIDIA JetPack software developer kit. The patches were delivered as part of NVIDIA’s June security bulletin, released Friday.