
Since artificial intelligence pioneer Marvin Minsky patented the principle of confocal microscopy in 1957, it has become the workhorse standard in life science laboratories worldwide, due to its superior contrast over traditional wide-field microscopy. Yet confocal microscopes aren’t perfect. They boost resolution by imaging just one in-focus point at a time, so scanning an entire, delicate biological sample can take quite a while, exposing it to light dosages that can be toxic.

To push confocal imaging to an unprecedented level of performance, a collaboration at the Marine Biological Laboratory (MBL) has invented a “kitchen sink” confocal platform that borrows solutions from other high-powered imaging systems, adds a unifying thread of “Deep Learning” artificial intelligence algorithms, and successfully improves the confocal’s volumetric resolution by more than 10-fold while simultaneously reducing phototoxicity. Their report on the technology, called “Multiview Confocal Super-Resolution Microscopy,” is published online this week in Nature.
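The paper’s actual pipeline is not reproduced here, but the general idea behind using deep learning to restore confocal data can be sketched in a few lines: train a network on pairs of degraded and high-quality volumes so it learns to predict the clean image from a noisy, lower-dose acquisition. The following is a minimal, hypothetical PyTorch sketch; the tiny 3D CNN, the random stand-in data, and all parameter choices are illustrative assumptions, not the authors’ model.

```python
# Minimal sketch: train a small 3D CNN to map noisy, low-dose confocal volumes
# to higher-quality targets. This is NOT the published model; the architecture
# and the random stand-in data are placeholders for illustration only.
import torch
import torch.nn as nn

class RestorationCNN(nn.Module):
    """Tiny residual 3D CNN: predicts a correction added to the input volume."""
    def __init__(self, channels=16):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv3d(1, channels, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv3d(channels, channels, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv3d(channels, 1, kernel_size=3, padding=1),
        )

    def forward(self, x):
        return x + self.body(x)  # residual learning: output = input + correction

# Stand-in training pair (noisy input, clean target), shape (batch, 1, Z, Y, X).
noisy = torch.rand(2, 1, 16, 64, 64)
clean = torch.rand(2, 1, 16, 64, 64)

model = RestorationCNN()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.MSELoss()

for step in range(5):  # a real model would train for many thousands of steps
    optimizer.zero_grad()
    loss = loss_fn(model(noisy), clean)
    loss.backward()
    optimizer.step()
    print(f"step {step}: loss {loss.item():.4f}")
```

Once trained on real paired acquisitions, such a network can be applied to new low-dose scans, which is what lets an existing confocal trade less light exposure for computational restoration.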

“Many labs have confocals, and if they can eke more performance out of them using these artificial intelligence algorithms, then they don’t have to invest in a whole new microscope. To me, that’s one of the best and most exciting reasons to adopt these AI methods,” said senior author and MBL Fellow Hari Shroff of the National Institute of Biomedical Imaging and Bioengineering.

Artificial neural networks are famously inspired by their biological counterparts. Yet compared to human brains, these algorithms are highly simplified, even “cartoonish.”

Can they teach us anything about how the brain works?

For a panel at the Society for Neuroscience annual meeting this month, the answer is yes. Deep learning wasn’t meant to model the brain. In fact, it contains elements that are biologically improbable, if not utterly impossible. But that’s not the point, argues the panel. By studying how deep learning algorithms perform, we can distill high-level theories for the brain’s processes—inspirations to be further tested in the lab.

SenseTime, one of China’s biggest AI solution providers, is a step closer to its initial public offering. SenseTime has received regulatory approval to list on the Hong Kong Stock Exchange, according to media reports. Founded in 2014, SenseTime was christened one of China’s four “AI Dragons” alongside Megvii, CloudWalk, and Yitu. In the second half of the 2010s, their algorithms found much demand from businesses and governments hoping to turn real-life data into actionable insights. Cameras embedded with their AI models watch city streets 24 hours a day. Malls use their sensing solutions to track and predict crowds on the premises.

SenseTime’s three rivals have all mulled plans to sell shares either in mainland China or Hong Kong. Megvii is preparing to list on China’s Nasdaq-style STAR board after its HKEX application lapsed.

The window for China’s data-rich tech firms to list overseas has narrowed. Beijing is making it harder for companies with sensitive data to go public outside China. And regulators in the West are wary of facial recognition companies that could aid mass surveillance.

But in the past few years, China’s AI upstarts were sought after by investors all over the world. In 2018 alone, SenseTime racked up more than $2 billion in investment. To date, the company has raised a staggering $5.2 billion in funding through 12 rounds. Its biggest outside shareholders include SoftBank Vision Fund and Alibaba’s Taobao. For its flotation in Hong Kong, SenseTime plans to raise up to $2 billion, according to Reuters.


And that’s where physicists are getting stuck.

Zooming in to that hidden center involves virtual particles — quantum fluctuations that subtly influence each interaction’s outcome. The fleeting existence of the quark pair above, like many virtual events, is represented by a Feynman diagram with a closed “loop.” Loops confound physicists — they’re black boxes that introduce additional layers of infinite scenarios. To tally the possibilities implied by a loop, theorists must turn to a summing operation known as an integral. These integrals take on monstrous proportions in multi-loop Feynman diagrams, which come into play as researchers march down the line and fold in more complicated virtual interactions.
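To give a flavor of what such an integral looks like, here is the simplest one-loop example for a particle of mass $m$, a standard textbook expression rather than one tied to any particular collision described here. It sums over every momentum $k$ that can circulate around the loop:

$$I(p) = \int \frac{d^4 k}{(2\pi)^4}\,\frac{1}{\left(k^2 - m^2\right)\left((k+p)^2 - m^2\right)}$$

Even this integral is formally divergent and has to be regularized before it yields a finite answer; two-loop diagrams nest several such integrations inside one another, which is where the computations balloon.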

Physicists have algorithms to compute the probabilities of no-loop and one-loop scenarios, but many two-loop collisions bring computers to their knees. This imposes a ceiling on predictive precision — and on how well physicists can understand what quantum theory says.

The most promising application in biomedicine is in computational chemistry, where researchers have long exploited a quantum approach. But the Fraunhofer Society hopes to spark interest among a wider community of life scientists, such as cancer researchers, whose research questions are not intrinsically quantum in nature.

“It’s uncharted territory,” says oncologist Niels Halama of the DKFZ, Germany’s national cancer center in Heidelberg. Working with a team of physicists and computer scientists, Halama is planning to develop and test algorithms that might help stratify cancer patients, and select small subgroups for specific therapies from heterogeneous data sets.

This is important for precision medicine, he says, but classic computing has insufficient power to find very small groups in the large and complex data sets that oncology, for example, generates. The time needed to complete such a task may stretch out over many weeks—too long to be of use in a clinical setting, and also too expensive. Moreover, the steady improvements in the performance of classic computers are slowing, thanks in large part to fundamental limits on chip miniaturization.


Chapters.
0:00 — You are a time traveler.
2:32 — Spacetime & light cone review.
6:15 — Flat Spacetime equations.
7:03 — Schwarzschild radius, metric.
8:42 — Light cone near a black hole.
10:15 — How to escape black hole.
10:39 — Kerr-Newman metric.
11:34 — How to remove the event horizon.
11:50 — What is a naked singularity.
12:20 — How to travel back in time.
13:26 — Problems.

Summary.
Time travel is nothing special. You’re time traveling right now into the future. Relativity theory shows higher gravity and higher speed can slow time down enough to allow you to potentially travel far into the future. But can you travel back in time to the past?
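For reference, the slowdown comes from two standard formulas of relativity (quoted here in their textbook form, not derived in the video): a clock moving at speed $v$, or sitting at radius $r$ outside a mass whose Schwarzschild radius is $r_s = 2GM/c^2$, ticks more slowly than a distant, stationary one:

$$\Delta t_{\text{moving}} = \Delta t\,\sqrt{1 - \frac{v^2}{c^2}}, \qquad \Delta t_{\text{near mass}} = \Delta t\,\sqrt{1 - \frac{r_s}{r}}$$

As $v$ approaches $c$, or $r$ approaches $r_s$, the time elapsed on the traveling clock shrinks toward zero, which is the sense in which you can leap far into the future.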

In this video, I first do a quick review of light cones, world lines, events, light-like curves, time-like curves, and space-like curves so that you can understand the rest of the video.

Until 350 years ago, there was a distinction between what people saw on earth and what they saw in the sky. There did not seem to be any connection.

Then, in 1687, Isaac Newton showed that planets move due to the same forces we experience here on earth. If everything could be explained with mathematics, then to many people this called into question the need for a God.

But in the late 20th century, arguments for God were resurrected. The standard model of particle physics and general relativity are accurate. But there are constants in these equations that do not have an explanation. They have to be measured. Many of them seem to be very fine-tuned.

Scientists point out, for example, that the mass of a neutrino is about 2×10^-37 kg. It has been argued that if this mass were off by just one decimal place, life would not exist: if the mass were too high, the additional gravity would cause the universe to collapse; if it were too low, galaxies could not form because the universe would have expanded too fast.

China’s Ministry of Industry and Information Technology (MIIT) on Saturday released its second batch of extended goals for promoting the usage of China’s 5G network and the Industrial Internet of Things (IIoT).

IIoT refers to the interconnection between sensors, instruments and other devices to enhance manufacturing efficiency and industrial processes. With a strong focus on machine-to-machine communication, big data and machine learning, the IIoT has been applied across many industrial sectors and applications.
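As a hedged illustration of that machine-to-machine pattern (the endpoint address, field names, and readings below are invented placeholders, not part of any MIIT specification), a sensor node might publish its telemetry to a collector that other machines consume:

```python
# Schematic IIoT sketch: one thread acts as a data collector, another as a
# sensor publishing JSON readings over UDP. All names and values are invented.
import json
import socket
import threading
import time

ADDR = ("127.0.0.1", 9100)  # hypothetical plant-floor collector endpoint

def collector():
    """Receive a few telemetry datagrams and print them."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(ADDR)
    for _ in range(3):
        data, _ = sock.recvfrom(1024)
        print("collector got:", json.loads(data))
    sock.close()

def sensor():
    """Publish fake vibration readings, as a machine-mounted sensor might."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    for i in range(3):
        reading = {"sensor_id": "vibration-07",      # invented device name
                   "ts": time.time(),
                   "rms_mm_s": 2.4 + 0.1 * i}        # fake vibration value
        sock.sendto(json.dumps(reading).encode(), ADDR)
        time.sleep(0.1)
    sock.close()

t = threading.Thread(target=collector)
t.start()
time.sleep(0.2)  # let the collector bind before the sensor starts sending
sensor()
t.join()
```

Real deployments typically route such messages through an industrial broker and feed them into analytics or machine-learning pipelines, which is where the “big data and machine learning” part of the IIoT comes in.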

The MIIT announced that the 5G IIoT will be applied in the petrochemical industry, building materials, ports, textiles and home appliances as the 2021 China 5G + Industrial Internet Conference kicked off Saturday in Wuhan, central China’s Hubei Province.

Researchers at the USC Viterbi School of Engineering are using generative adversarial networks (GANs)—technology best known for creating deepfake videos and photorealistic human faces—to improve brain-computer interfaces for people with disabilities.

In a paper published in Nature Biomedical Engineering, the team successfully taught an AI to generate synthetic brain activity data. The data, specifically called spike trains, can be fed into BCI systems to improve the usability of brain-computer interfaces (BCIs).
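The published model is not reproduced here, but the general GAN recipe it builds on can be sketched: a generator learns to turn random noise into spike-count sequences, while a discriminator learns to tell them apart from recorded ones. Everything below (the network sizes, and the Poisson noise standing in for “real” recordings) is a simplified assumption, not the authors’ architecture.

```python
# Minimal GAN sketch for synthetic spike-count sequences (not the published model).
# "Real" data here is just Poisson noise standing in for recorded spike trains.
import torch
import torch.nn as nn

SEQ_LEN, NOISE_DIM = 50, 16  # 50 time bins per spike train (assumed sizes)

generator = nn.Sequential(
    nn.Linear(NOISE_DIM, 64), nn.ReLU(),
    nn.Linear(64, SEQ_LEN), nn.Softplus(),   # non-negative firing "counts"
)
discriminator = nn.Sequential(
    nn.Linear(SEQ_LEN, 64), nn.ReLU(),
    nn.Linear(64, 1),                        # real-vs-fake logit
)

opt_g = torch.optim.Adam(generator.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

for step in range(200):
    real = torch.poisson(torch.full((32, SEQ_LEN), 3.0))  # stand-in "recorded" data
    fake = generator(torch.randn(32, NOISE_DIM))

    # Discriminator: push real toward 1, generated toward 0.
    d_loss = bce(discriminator(real), torch.ones(32, 1)) + \
             bce(discriminator(fake.detach()), torch.zeros(32, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator: try to fool the discriminator into outputting 1.
    g_loss = bce(discriminator(fake), torch.ones(32, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()

synthetic_trains = generator(torch.randn(5, NOISE_DIM)).detach()  # 5 synthetic trains
```

Synthetic trains produced this way can pad out scarce recordings when training downstream decoders, which is the usability gain the researchers are after.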

BCI systems work by analyzing a person’s brain signals and translating them into commands, allowing the user to control devices like computer cursors using only their thoughts. These devices can improve quality of life for people with motor dysfunction or paralysis, even those with locked-in syndrome, in which a person is fully conscious but unable to move or communicate.
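As a hedged sketch of that “translating brain signals into commands” step (nothing below reflects the USC team’s decoder; the linear mapping and the simulated data are illustrative assumptions), a simple decoder can be fit that maps binned spike counts to 2-D cursor velocities:

```python
# Toy BCI decoder: ridge regression from binned spike counts to cursor velocity.
# Data are simulated; this is an illustrative sketch, not the published method.
import numpy as np

rng = np.random.default_rng(0)
n_samples, n_neurons = 500, 40

# Simulate tuning: each neuron's firing rate depends on the 2-D cursor velocity.
velocity = rng.normal(size=(n_samples, 2))              # target cursor velocities
tuning = rng.normal(size=(2, n_neurons))
rates = np.exp(0.3 * velocity @ tuning)                 # positive firing rates
spikes = rng.poisson(rates)                             # binned spike counts

# Fit a ridge-regression decoder: velocity ~ spikes @ W.
lam = 1.0
W = np.linalg.solve(spikes.T @ spikes + lam * np.eye(n_neurons),
                    spikes.T @ velocity)

decoded = spikes @ W
corr = np.corrcoef(decoded[:, 0], velocity[:, 0])[0, 1]
print(f"decoded vs. true x-velocity correlation: {corr:.2f}")
```

In this picture, the GAN-generated spike trains act as extra training data for decoders of this kind, reducing the amount of real recording time a patient has to sit through.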