
Time: Do the past, present, and future exist all at once? | Big Think

Watch the newest video from Big Think: https://bigth.ink/NewVideo.
Learn skills from the world’s top minds at Big Think+: https://bigthink.com/plus/

Everything we do as living organisms is dependent, in some capacity, on time. The concept is so complex that scientists still argue whether it exists or if it is an illusion. In this video, astrophysicist Michelle Thaller, science educator Bill Nye, author James Gleick, and neuroscientist Dean Buonomano discuss how the human brain perceives the passage of time, the idea in theoretical physics of time as a fourth dimension, and the theory that space and time are interwoven. Thaller illustrates Einstein’s theory of relativity, Buonomano outlines eternalism, and all the experts touch on issues of perception, definition, and experience. Check out Dean Buonomano’s latest book, Your Brain Is a Time Machine: The Neuroscience and Physics of Time, at https://amzn.to/2GY1n1z.

TRANSCRIPT:

MICHELLE THALLER: Is time real or is it an illusion? Well, time is certainly real, but the question is what do we mean by the word time? And it may surprise you that physicists don’t have a simple answer for that.

JAMES GLEICK: Physicists argue about, and actually have symposia on, the subject of whether there is such a thing as time. And it’s also something that has a tradition in philosophy going back about a century. But I think it’s fair to say that in one sense it’s a ridiculous idea. How can you say time doesn’t exist when we have such a profound experience of it, first of all? And second of all, we’re talking about it constantly. I mean, I can’t get through this sentence without referring to time. I was going to say we couldn’t get through the day without discussing time. So, obviously, when a physicist questions the existence of time, they are trying to say something specialized, something technical.

BILL NYE: Notice that in English we don’t have any other word for time except time. It’s unique. It’s this wild fourth dimension in nature. This is one dimension, this is one dimension, this is one dimension, and time is the fourth dimension. And we call it the fourth dimension not just in theoretical physics but in engineering. I worked on four-dimensional autopilots, so you tell it where you want to go, what altitude above sea level, and then when you want to get there. Like, you can’t get there at any time.

GLEICK: Einstein, or maybe I should say more properly Minkowski, his teacher and contemporary, offers a vision of space-time as a single thing, as a four-dimensional block in which the past and the future are just like spatial dimensions. They’re just like north and south in the equations of physics. And so you can construct a view of the world in which the future is already there, and you can say, and physicists do say something very much like this, that in the fundamental laws of physics there is no distinction between the past and the future. And so if you’re playing that game, you’re essentially saying time as an independent thing doesn’t exist. Time is just another dimension like space. Again, that is in obvious conflict with our intuitions about the world. We go through the day acting as though the past is over and the future has not yet happened, and it might happen this way or it might happen that way. We could flip a coin and see. We tend to believe in our gut that the future is not fully determined and therefore is different from the past.

DEAN BUONOMANO: If the flow of time, if our subjective sense of the flow of time is an illusion, we have this clash between physics and neuroscience, because the dominant theory in physics is that we live in the block universe. And I should be clear: there’s no consensus, there’s no 100 percent agreement. But the standard view in physics, and this comes in large part from relativity, is that we live in an eternalist universe, in a block universe in which the past, present, and future are equally real. So this raises the question of whether we can trust our brain to tell us that time is flowing.

NYE: In my opinion, time is both subjective and objective. What we do in science and engineering and in life, astronomy, is measure time as carefully as we can, because it’s so important to our everyday world. If you go to plant crops, you want to know when to plant them. You want to know when to harvest them. If you want to have a global positioning system that enables you to determine which side of the street you’re on from your phone, you need to take into account both the traditional passage of time that you might be familiar with watching a clock here on the Earth’s surface, and the passage of time as it’s affected by the… Read the full transcript at https://bigthink.com/videos/does-time-exist
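
To put a number on Nye’s GPS example, here is a rough back-of-the-envelope estimate in Python (not from the transcript; it uses standard textbook constants and first-order formulas) of the two competing relativistic effects on a GPS satellite clock:

# Why GPS must correct for relativity: first-order clock-rate offsets.
GM = 3.986004418e14   # Earth's gravitational parameter, m^3/s^2
c = 299_792_458.0     # speed of light, m/s
R_EARTH = 6.371e6     # mean Earth radius, m
R_SAT = 2.656e7       # GPS orbital radius (~20,200 km altitude), m
DAY = 86_400.0        # seconds per day

# Special relativity: the satellite's orbital speed makes its clock run slow.
v = (GM / R_SAT) ** 0.5                          # ~3.9 km/s circular orbit
sr = -(v ** 2) / (2 * c ** 2)                    # fractional rate offset
# General relativity: weaker gravity at altitude makes its clock run fast.
gr = (GM / c ** 2) * (1 / R_EARTH - 1 / R_SAT)   # fractional rate offset

print(f"special relativity: {sr * DAY * 1e6:+.1f} microseconds/day")
print(f"general relativity: {gr * DAY * 1e6:+.1f} microseconds/day")
print(f"net clock drift:    {(sr + gr) * DAY * 1e6:+.1f} microseconds/day")

The net drift comes out to roughly +38 microseconds per day; left uncorrected, that alone would push position fixes off by on the order of ten kilometres per day.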

Can AI Truly Give Us a Glimpse of Lost Masterpieces?

Recent projects used machine learning to resurrect paintings by Klimt and Rembrandt. They raise questions about what computers can understand about art.

IN 1945, FIRE claimed three of Gustav Klimt’s most controversial paintings. Commissioned in 1894 for the University of Vienna, “the Faculty Paintings”—as they became known—were unlike any of the Austrian symbolist’s previous work. As soon as he presented them, critics were in an uproar over their dramatic departure from the aesthetics of the time. Professors at the university rejected them immediately, and Klimt withdrew from the project. Soon thereafter, the works found their way into other collections. During World War II, they were placed in a castle north of Vienna for safekeeping, but the castle burned down, and the paintings presumably went with it. All that remains today are some black-and-white photographs and writings from the time. Yet I am staring right at them.

Well, not the paintings themselves. Franz Smola, a Klimt expert, and Emil Wallner, a machine learning researcher, spent six months combining their expertise to revive Klimt’s lost work. It’s been a laborious process, one that started with those black-and-white photos and then incorporated artificial intelligence and a wealth of information about the painter’s art, in an attempt to recreate what those lost paintings might have looked like. The results are what Smola and Wallner are showing me—and even they are taken aback by the captivating technicolor images the AI produced.

Let’s make one thing clear: No one is saying this AI is bringing back Klimt’s original works. “It’s not a process of recreating the actual colors, it is re-colorizing the photographs,” Smola is quick to note. “The medium of photography is already an abstraction from the real works.” What machine learning is doing is providing a glimpse of something that was believed to be lost for decades.
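
As a rough illustration of how machine-learning colorization works in general (a toy sketch, not the actual Klimt pipeline, which also folded in expert knowledge about the painter), a model can be trained to predict the two missing chrominance channels of the Lab color space from the luminance channel alone:

import torch
import torch.nn as nn

# Toy colorizer: predict the a/b chrominance channels from the L (luminance)
# channel. A real system is far deeper and trained on large image collections.
class ToyColorizer(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 2, 3, padding=1), nn.Tanh(),  # a, b scaled to [-1, 1]
        )

    def forward(self, luminance):
        return self.net(luminance)

model = ToyColorizer()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

# Placeholder batch standing in for grayscale crops with known color targets.
gray = torch.rand(8, 1, 64, 64)                 # L channel
target_ab = torch.rand(8, 2, 64, 64) * 2 - 1    # ground-truth a/b channels

loss = nn.functional.mse_loss(model(gray), target_ab)
loss.backward()
optimizer.step()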

Machine learning solves the who’s who problem in NMR spectra of organic crystals

Solid-state nuclear magnetic resonance (NMR) spectroscopy—a technique that measures the frequencies emitted by the nuclei of some atoms exposed to radio waves in a strong magnetic field—can be used to determine chemical and 3D structures as well as the dynamics of molecules and materials.

A necessary initial step in the analysis is the so-called chemical shift assignment. This involves assigning each peak in the NMR spectrum to a given atom in the molecule or material under investigation, which can be a particularly complicated task: assigning chemical shifts experimentally is challenging and generally requires time-consuming multi-dimensional correlation experiments. Assignment by comparison to statistical analyses of experimental chemical shift databases would be an alternative, but no such database exists for molecular solids.

A team of researchers including EPFL professors Lyndon Emsley, head of the Laboratory of Magnetic Resonance, and Michele Ceriotti, head of the Laboratory of Computational Science and Modeling, together with Ph.D. student Manuel Cordova, decided to tackle this problem by developing a method of assigning NMR spectra of organic crystals probabilistically, directly from their 2D chemical structures.
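
A toy version of the assignment step (purely illustrative; not the EPFL team’s actual method) helps make the idea concrete: if a model predicts a chemical shift and an uncertainty for every atom, each observed peak can be scored against each atom with a Gaussian likelihood and the most probable one-to-one assignment found with the Hungarian algorithm:

import numpy as np
from scipy.optimize import linear_sum_assignment
from scipy.stats import norm

# Hypothetical predicted shifts (ppm) and uncertainties for four carbon atoms.
predicted = np.array([172.1, 128.4, 55.0, 21.3])
sigma = np.array([2.0, 1.5, 1.0, 0.8])

# Observed peak positions (ppm) from the experimental spectrum.
observed = np.array([170.8, 129.9, 54.2, 22.0])

# Log-likelihood of each (peak, atom) pairing under a Gaussian error model.
log_lik = norm.logpdf(observed[:, None], loc=predicted[None, :], scale=sigma[None, :])

# Hungarian algorithm: maximize total log-likelihood over one-to-one pairings.
peak_idx, atom_idx = linear_sum_assignment(-log_lik)
for p, a in zip(peak_idx, atom_idx):
    print(f"peak {observed[p]:6.1f} ppm -> atom {a} (predicted {predicted[a]:6.1f} ppm)")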

NVIDIA’s AI-based GAUGAN 2 tool generates Van Goghesque landscapes from words and phrases you input

NVIDIA recently rolled out a demo of GAUGAN 2, an artificial intelligence-based text-to-image creation tool. GAUGAN 2 takes keywords and phrases you type in as input and then generates unique images based on them.

In NVIDIA’s demo video, a user inputs “mountains by a lake” and GAUGAN 2 spits out a beautiful alpine landscape with a small lake in the foreground. We tried using GAUGAN 2 and, in practice, things aren’t as smooth as the demo implies. Certain keywords resulted in bizarre, terrifying results. GAUGAN 2 used this author’s name, for instance, to output an image of what looked like fungi on legs, walking down a street.

GAUGAN 2 is early in development at this point and has likely been trained on only a rather limited data set. Regardless, when it works, it offers a breathtaking snapshot of how AI technology could transform asset creation in movies and games in the years to come, with unique photorealistic landscapes and objects generated from just a few words of user input.
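
For context, the original GauGAN generator was built around spatially-adaptive normalization (SPADE), which modulates feature maps with per-pixel scales and offsets computed from a semantic layout; GAUGAN 2 layers text conditioning on top of that. A minimal sketch of a SPADE-style block in PyTorch (illustrative only, not NVIDIA’s code):

import torch
import torch.nn as nn
import torch.nn.functional as F

class SPADEBlock(nn.Module):
    # Scale and shift normalized features using parameters predicted per pixel
    # from a semantic segmentation map (e.g., sky / mountain / water labels).
    def __init__(self, feat_channels, num_classes, hidden=64):
        super().__init__()
        self.norm = nn.BatchNorm2d(feat_channels, affine=False)
        self.shared = nn.Sequential(
            nn.Conv2d(num_classes, hidden, 3, padding=1), nn.ReLU())
        self.gamma = nn.Conv2d(hidden, feat_channels, 3, padding=1)
        self.beta = nn.Conv2d(hidden, feat_channels, 3, padding=1)

    def forward(self, x, segmap):
        # Resize the one-hot layout to the feature resolution, then predict
        # spatially varying scale (gamma) and offset (beta).
        segmap = F.interpolate(segmap, size=x.shape[2:], mode="nearest")
        actv = self.shared(segmap)
        return self.norm(x) * (1 + self.gamma(actv)) + self.beta(actv)

# Example: 5 semantic classes, one 16x16 feature map inside the generator.
block = SPADEBlock(feat_channels=128, num_classes=5)
features = torch.randn(1, 128, 16, 16)
layout = F.one_hot(torch.randint(0, 5, (1, 16, 16)), 5).permute(0, 3, 1, 2).float()
print(block(features, layout).shape)  # torch.Size([1, 128, 16, 16])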

This Artificial Intelligence Simulates Physics in Real Time

A new artificial intelligence model manages to run complex physics simulations in real time using only a fraction of the power that a traditionally computed simulation would require. These simulations could soon be used for things like biotechnology, gaming, weather prediction, and more. Two Minute Papers has covered this line of research in several videos before, but this is a more complex AI with a wider range of applications.
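
The general idea behind such learned simulators (a toy illustration, not the model from the video) is to train a model to map the current state of a system to its next state, then roll that learned step forward instead of re-integrating the governing equations every time:

import numpy as np

# Toy "learned simulator": fit a one-step transition model for a damped
# harmonic oscillator from simulated data, then roll it forward in time.
dt, k, damping = 0.01, 4.0, 0.1

def true_step(state):
    x, v = state
    a = -k * x - damping * v          # spring force plus damping
    return np.array([x + v * dt, v + a * dt])

# Training pairs (state_t, state_t+1) generated by the ground-truth integrator.
rng = np.random.default_rng(0)
states = rng.uniform(-1, 1, size=(1000, 2))
targets = np.array([true_step(s) for s in states])

# Least-squares fit of a linear transition matrix A with s_{t+1} ≈ s_t @ A.
A, *_ = np.linalg.lstsq(states, targets, rcond=None)

# Roll the learned model forward from a new initial condition.
s = np.array([0.5, 0.0])
for _ in range(5):
    s = s @ A
    print(f"x = {s[0]:+.4f}, v = {s[1]:+.4f}")

Real systems of this kind replace the linear fit with deep networks (often graph neural networks over particles or mesh nodes), which is where the speed-up over conventional solvers comes from.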

TIMESTAMPS:
00:00 The Future of Advanced Physics Simulations.
01:57 How this new approach to AI works.
04:03 Are medical simulations a possibility?
06:02 Last Words.

#ai #physics #simulation

Revolutionary New AI Can Be Run on Any Device

A new and revolutionary approach to building artificial intelligence models has shown promise of enabling almost any device, regardless of how powerful it is, to run enormous and intelligent AI models in a way similar to how the human brain operates. This is partially achieved with new and improved neuromorphic computing hardware, which is modeled after the biological brain. We may soon see AI beating humans at many different general tasks, approaching something like artificial general intelligence.
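
Neuromorphic hardware is typically organized around spiking neurons rather than dense matrix multiplies. A minimal software sketch of a leaky integrate-and-fire neuron (purely illustrative; not the system described in the video):

import numpy as np

# Leaky integrate-and-fire (LIF) neuron: the membrane potential leaks toward
# rest, integrates its input drive, and emits a spike when it crosses a
# threshold, after which it resets. Event-driven, spiking computation like
# this is what neuromorphic chips implement directly in silicon.
dt, tau = 1e-3, 20e-3                    # time step and membrane time constant (s)
v_rest, v_thresh, v_reset = 0.0, 1.0, 0.0

rng = np.random.default_rng(1)
drive = rng.uniform(0.0, 2.5, size=500)  # arbitrary input drive per time step

v, spike_times = v_rest, []
for t, i_in in enumerate(drive):
    v += (-(v - v_rest) + i_in) * (dt / tau)   # leak toward rest, integrate input
    if v >= v_thresh:                          # threshold crossing -> spike
        spike_times.append(t * dt)
        v = v_reset

print(f"{len(spike_times)} spikes in {len(drive) * dt:.2f} s")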

TIMESTAMPS:
00:00 The Impossibility of Human AI
01:54 A new Approach is in town.
04:33 Other approaches to AI
06:44 Is this the Future of Artificial Intelligence?
09:43 Last Words.

#ai #agi #neuralcomputing

Yann LeCun

Welcome to AIP.
- The main focus of this channel is to publicize and promote state-of-the-art (SoTA) AI research presented at top conferences, removing barriers for people to access cutting-edge AI research.
- All videos are either taken from the public internet or are Creative Commons licensed, and can be accessed via the links provided in the description.
- To avoid conflicts of interest with ongoing conferences, all videos are published at least one week after the main event. A takedown can be requested via email if a video infringes your rights.
- If you would like your presentation to be published on AIP, feel free to drop us an email.
- AI conferences covered include: NeurIPS (NIPS), AAAI, ICLR, ICML, ACL, NAACL, EMNLP, IJCAI

The video is reposted for educational purposes and encourages involvement in the field of AI research.

A good GitHub repo on self-supervised learning: https://github.com/jason718/awesome-self-supervised-learning#machine-learning

Yann LeCun — Self-Supervised Learning: The Dark Matter of Intelligence (FAIR Blog Post Explained)

Deep Learning systems can achieve remarkable, even super-human performance through supervised learning on large, labeled datasets. However, there are two problems: First, collecting ever more labeled data is expensive in both time and money. Second, these deep neural networks perform well on their task but cannot easily generalize to other, related tasks, or they need large amounts of data to do so. In this blog post, Yann LeCun and Ishan Misra of Facebook AI Research (FAIR) describe the current state of Self-Supervised Learning (SSL) and argue that it is the next step in the development of AI, one that uses fewer labels and can transfer knowledge faster than current systems. As a promising direction, they suggest building non-contrastive latent-variable predictive models, like VAEs, but ones that also provide high-quality latent representations for downstream tasks.
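
The "predict hidden parts from observed parts" idea can be sketched in a few lines (a toy masking objective on synthetic data, not FAIR’s actual models): hide part of every input and train a network to reconstruct the hidden values from what remains, so the data itself supplies the supervision and no labels are needed:

import torch
import torch.nn as nn

# Toy self-supervised objective: mask roughly half of each input vector and
# train a small network to predict the hidden values from the visible ones.
dim = 16
model = nn.Sequential(nn.Linear(dim, 64), nn.ReLU(), nn.Linear(64, dim))
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

for step in range(200):
    x = torch.randn(32, dim)                      # stand-in for real data
    mask = (torch.rand(32, dim) < 0.5).float()    # 1 = visible, 0 = hidden
    pred = model(x * mask)                        # model sees only the visible parts
    loss = ((pred - x) ** 2 * (1 - mask)).mean()  # scored only on the hidden parts
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

print(f"final masked reconstruction loss: {loss.item():.4f}")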

OUTLINE:
0:00 — Intro & Overview.
1:15 — Supervised Learning, Self-Supervised Learning, and Common Sense.
7:35 — Predicting Hidden Parts from Observed Parts.
17:50 — Self-Supervised Learning for Language vs Vision.
26:50 — Energy-Based Models.
30:15 — Joint-Embedding Models.
35:45 — Contrastive Methods.
43:45 — Latent-Variable Predictive Models and GANs.
55:00 — Summary & Conclusion.

Paper (Blog Post): https://ai.facebook.com/blog/self-supervised-learning-the-da…telligence.
My Video on BYOL: https://www.youtube.com/watch?v=YPfUiOMYOEE

ERRATA:
- The difference between loss and energy: Energy is for inference, loss is for training.
- The R(z) term is a regularizer that restricts the capacity of the latent variable. I think I said both of those things, but never together.
- The way I explain why BERT is contrastive is wrong. I haven’t figured out why just yet, though. :)

Video approved by Antonio.
