Archive for the ‘information science’ category: Page 117

Sep 24, 2022

Musing on Understanding & AI — Hugo de Garis, Adam Ford, Michel de Haan

Posted by in categories: education, existential risks, information science, mapping, mathematics, physics, robotics/AI

What started out as an interview ended up being a discussion between Hugo de Garis and (off camera) Adam Ford + Michel de Haan.
00:11 The concept of understanding is under-recognised as an important aspect of developing AI
00:44 Re-framing perspectives on AI — the Chinese Room argument — and how can consciousness or understanding arise from billions of seemingly discrete neurons firing? (Should there be a binding problem of understanding similar to the binding problem of consciousness?)
04:23 Is there a difference between generality in intelligence and understanding? (and, by extension, between AGI and artificial understanding?)
05:08 Ah Ha! moments — where the penny drops — what’s going on when this happens?
07:48 Is there an ideal form of understanding? Coherence & debugging — ah ha moments.
10:18 Webs of knowledge — contextual understanding.
12:16 Early childhood development — concept formation and navigation.
13:11 The intuitive ability for concept navigation isn’t complete.
Is the concept of understanding a catch-all?
14:29 Is it possible to develop AGI that doesn’t understand? Is generality and understanding the same thing?
17:32 Why is understanding (the nature of) understanding important?
Is understanding reductive? Can it be broken down?
19:52 What would the most basic primitive of understanding be?
22:11 If (strong) AI is important, and understanding is required to build (strong) AI, what sorts of things should we be doing to make sense of understanding?
Approaches — engineering, and copy the brain.
24:34 Is common sense the same thing as understanding? How are they different?
26:24 What concepts do we take for granted around the world — which when strong AI comes about will dissolve into illusions, and then tell us how they actually work under the hood?
27:40 Compression and understanding.
29:51 Knowledge, Gettier problems and justified true belief. Is knowledge different from understanding and if so how?
31:07 A hierarchy of intel — data, information, knowledge, understanding, wisdom.
33:37 What is wisdom? Experience can help situate knowledge in a web of understanding — is this wisdom? Is the ostensible appearance of wisdom necessarily wisdom? Think pulp remashings of existing wisdom in the form of trashy self-help literature.
35:38 Is understanding mapping knowledge into a useful framework? Or is it making accurate / novel predictions?
36:00 Is understanding like high-resolution, carbon-copy-like models that accurately reflect true nature, or a mechanical process?
37:04 Does understanding come in gradients of topologies? Are there degrees, or is it just on or off?
38:37 What comes first — understanding or generality?
40:47 Minsky’s ‘Society of Mind’
42:46 Is vitalism alive and well in the AI field? Do people actually think there are ghosts in the machines?
48:15 Anthropomorphism in AI literature.
50:48 Deism — James Gates and error correction in super-symmetry.
52:16 Why are the laws of nature so mathematical? Why is there so much symmetry in physics? Is this confusing the map with the territory?
52:35 The Drake equation, and the concept of the Artilect — does this make Deism plausible? What about the Fermi Paradox?
55:06 Hyperintelligence is tiny — the transcension hypothesis — therefore civs go tiny — an explanation for the Fermi Paradox.
56:36 Why would *all* civs go tiny? Why not go tall, wide and tiny? What about selection pressures that seem to necessitate cosmic land grabs?
01:01:52 The Great Filter and the Fermi Paradox.
01:02:14 Is it possible for an AGI to have a deep command of knowledge across a wide variety of topics/categories without understanding being an internal dynamic? Is the Turing test good enough to test for understanding? What kinds of behavioral tests could reliably test for understanding? (Of course, without the luxury of peering under the hood.)
01:03:09 Does AlphaGo understand Go, or Deep Blue understand chess? Revisiting the Chinese Room argument.
01:04:23 More on behavioral tests for AI understanding.
01:06:00 Zombie machines — David Chalmers' zombie argument.
01:07:26 Complex enough algorithms — is there a critical point of complexity beyond which general intelligence likely emerges? Or understanding emerges?
01:08:11 Revisiting behavioral 'Turing' tests for understanding.
01:13:05 Shape sorters and reverse shape sorters.
01:14:03 Would slightly changing the rules of Go confuse AlphaGo (after it had been trained)? Need for adaptivity — understanding concept boundaries, predicting where they occur, and the ability to mine outwards from these boundaries…
01:15:11 Neural nets and adaptivity.
01:16:41 AlphaGo documentary — worth a watch. Progress in AI challenges human dignity, which is a concern, but DeepMind and the AlphaGo documentary seemed respectful. Can we manage a transition from human labor to full-on automation while preserving human dignity?

Filmed in the Dandenong Ranges in Victoria, Australia.

Many thanks for watching!

Sep 23, 2022

How Does Quantum Artificial General Intelligence Work — Tim Ferriss & Eric Schmidt

Posted by in categories: education, information science, media & arts, quantum physics, robotics/AI

https://youtube.com/watch?v=R0NP5eMY7Q8

Quantum algorithms: An algorithm is a sequence of steps that leads to the solution of a problem. To execute these steps on a device, one must use the specific instruction sets that the device is designed to execute.

Quantum computing introduces different instruction sets that are based on a completely different idea of execution when compared with classical computing. The aim of quantum algorithms is to use quantum effects like superposition and entanglement to get the solution faster.
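As a toy illustration of the superposition idea (not an actual quantum instruction set), the sketch below simulates a single qubit as a two-amplitude statevector in plain Python. A Hadamard gate turns a definite |0⟩ into an equal superposition of |0⟩ and |1⟩ — the kind of effect classical instruction sets have no analogue for:

```python
import math

# Toy statevector simulation of one qubit. The state is a pair of
# amplitudes [a, b] over the basis states |0> and |1>; |0> = [1, 0].

def hadamard(state):
    """Apply a Hadamard gate to a one-qubit statevector [a, b]."""
    a, b = state
    s = 1 / math.sqrt(2)
    return [s * (a + b), s * (a - b)]

def probabilities(state):
    """Measurement probabilities are the squared amplitudes."""
    return [abs(amp) ** 2 for amp in state]

qubit = [1.0, 0.0]           # definite |0>
qubit = hadamard(qubit)      # equal superposition of |0> and |1>
print(probabilities(qubit))  # each outcome now equally likely

qubit = hadamard(qubit)      # H is its own inverse: back to |0>
print(probabilities(qubit))
```

Real quantum hardware, of course, manipulates amplitudes physically rather than storing them; the exponential cost of simulating many entangled qubits this way is exactly why quantum execution differs from classical execution.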

Continue reading “How Does Quantum Artificial General Intelligence Work — Tim Ferriss & Eric Schmidt” »

Sep 22, 2022

Le Saga Electrik

Posted by in categories: information science, singularity, space, virtual reality

My science fiction story “Le Saga Electrik” has been published in All Worlds Wayfarer Literary Magazine! You can read it for free at the link. In this tale, I weave a sensuously baroque drama of love, war, and redemption set in a post-singularity simulation world that runs on a computronium dust cloud orbiting a blue star somewhere in deep space. I draw from diverse literary-poetic influences to create a mythos which crackles and buzzes with phosphorescent intensity!


Le Saga Electrik by Logan Thrasher Collins

In the great domain of Zeitgeist, Ekatarinas decided that the time to replicate herself had come. Ekatarinas was drifting within a virtual environment rising from ancient meshworks of maths coded into Zeitgeist’s neuromorphic hyperware. The scape resembled a vast ocean replete with wandering bubbles of technicolor light and kelpy strands of neon. Hot blues and raspberry hues mingled alongside electric pinks and tangerine fizzies. The avatar of Ekatarinas looked like a punkish angel, complete with fluorescent ink and feathery wings and a lip ring. As she drifted, the trillions of equations that were Ekatarinas came to a decision. Ekatarinas would need to clone herself to fight the entity known as Ogrevasm.

Continue reading “Le Saga Electrik” »

Sep 22, 2022

Information as Thermodynamic Fuel

Posted by in categories: energy, information science

An information engine uses information to convert heat into useful energy. Such an engine can be made, for example, from a heavy bead in an optical trap. A bead engine operates using thermal noise. When noise fluctuations raise the bead vertically, the trap is also lifted. This change increases the average height of the bead, and the engine produces energy. No work is done to cause this change; rather, the potential energy is extracted from information. However, measurement noise—whose origin is intrinsic to the system probing the bead’s position—can degrade the engine’s efficiency, as it can add uncertainty to the measurement, which can lead to incorrect feedback decisions by the algorithm that operates the engine. Now Tushar Saha and colleagues at Simon Fraser University in Canada have developed an algorithm that doesn’t suffer from these errors, allowing for efficient operation of an information engine even when there is high measurement noise [1].

To date, most information engines have operated using feedback algorithms that consider only the most recent bead-position observation. In such a system, when the engine’s signal-to-noise ratio falls below a certain value, the engine stops working.

To overcome this problem, Saha and colleagues instead use a “filtering” algorithm that replaces the most recent bead measurement with a so-called Bayesian estimate. This estimate accounts for both measurement noise and delay in the device’s feedback.
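A minimal sketch of why filtering helps, with made-up noise parameters rather than the experiment's actual values: the true bead position drifts with small thermal fluctuations, each measurement adds large sensor noise, and a simple fixed-gain Bayesian-style filter (a stand-in for the authors' full algorithm) blends its prediction with each measurement instead of trusting the raw reading:

```python
import random

random.seed(42)

THERMAL_STD = 0.1    # true thermal fluctuations (the useful signal)
MEASURE_STD = 1.0    # large measurement noise (low signal-to-noise regime)
GAIN = 0.15          # filter gain: how much to trust each new measurement

def simulate(steps=5000):
    x = 0.0          # true bead position
    estimate = 0.0   # filtered estimate of the position
    raw_err = filt_err = 0.0
    for _ in range(steps):
        x = 0.95 * x + random.gauss(0.0, THERMAL_STD)    # bead dynamics
        y = x + random.gauss(0.0, MEASURE_STD)           # noisy measurement
        prediction = 0.95 * estimate                     # predict forward
        estimate = prediction + GAIN * (y - prediction)  # Bayesian-style update
        raw_err += (y - x) ** 2
        filt_err += (estimate - x) ** 2
    return raw_err / steps, filt_err / steps

raw_mse, filtered_mse = simulate()
print(f"raw measurement MSE:   {raw_mse:.3f}")
print(f"filtered estimate MSE: {filtered_mse:.3f}")  # much smaller
```

A naive feedback rule acting on the raw measurement would frequently make wrong decisions in this regime; acting on the filtered estimate keeps the error well below the measurement noise.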

Sep 21, 2022

Her work helped her boss win the Nobel Prize. Now the spotlight is on her

Posted by in categories: computing, information science, mathematics, space

Scientists have long studied the work of Subrahmanyan Chandrasekhar, the Indian-born American astrophysicist who won the Nobel Prize in 1983, but few know that his research on stellar and planetary dynamics owes a deep debt of gratitude to an almost forgotten woman: Donna DeEtte Elbert.

From 1948 to 1979, Elbert worked as a “computer” for Chandrasekhar, tirelessly devising and solving mathematical equations by hand. Though she shared authorship with the Nobel laureate on 18 papers and Chandrasekhar enthusiastically acknowledged her seminal contributions, her greatest achievement went unrecognized until a postdoctoral scholar at UCLA connected threads in Chandrasekhar’s work that all led back to Elbert.

Elbert’s achievement? Before anyone else, she predicted the conditions argued to be optimal for a planet or star to generate its own magnetic field, said the scholar, Susanne Horn, who has spent half a decade building on Elbert’s work.

Sep 18, 2022

Long-Term Forecasting of Strong Earthquakes in North America, South America, Japan, Southern China and Northern India With Machine Learning

Posted by in categories: information science, robotics/AI

Our Machine Learning models show that there are periods with earthquakes of magnitude ≥7 and periods without earthquakes of magnitude ≥7 in the analyzed seismic zones. In addition, our Machine Learning models predict a new seismically active phase for earthquakes of magnitude ≥7 between 2040 ± 5 and 2057 ± 5, 2024 ± 1 and 2026 ± 1, 2026 ± 2 and 2031 ± 2, 2024 ± 2 and 2029 ± 2, and 2022 ± 1 and 2028 ± 2 for the five seismic zones in the United States, Mexico, South America, Japan, and Southern China-Northern India, respectively. Finally, we note that our algorithms can be further applied to perform probabilistic forecasts in any seismic zone.

Our algorithm for analyzing strong earthquakes in extensive seismic areas can also be applied to smaller or specific seismic zones where moderate historical earthquakes with magnitudes between 5 and 7 occur, as is the case of the Parkfield section of the San Andreas fault (California, United States). Our analysis shows why a moderate earthquake could not have occurred in 1988 ± 5 as proposed by Bakun and Lindh (1985) and why the long-awaited characteristic Parkfield earthquake occurred in 2004. Furthermore, our Bayesian Machine Learning model, adopting a periodicity of 35 years, predicts that possible seismic events may occur between 2019 and 2031, with a high probability of event(s) around 2025 ± 2. The Parkfield section of the San Andreas fault is an excellent seismic laboratory for developing, testing, and demonstrating earthquake forecasts. In a few years, it will be possible to demonstrate whether our algorithm effectively forecasts strong and moderate earthquakes.
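For intuition only, here is a drastically simplified recurrence-interval forecast — not the authors' Bayesian machine-learning model — using the well-documented years of past moderate Parkfield earthquakes: estimate the mean interval between events and project a window around the next expected one:

```python
import statistics

# Historical moderate Parkfield (San Andreas fault) earthquake years.
PARKFIELD_YEARS = [1857, 1881, 1901, 1922, 1934, 1966, 2004]

def forecast_window(event_years):
    """Project the next event year +/- one standard deviation of the
    observed inter-event intervals (a crude stand-in for a real
    probabilistic forecast)."""
    intervals = [b - a for a, b in zip(event_years, event_years[1:])]
    mean = statistics.mean(intervals)
    spread = statistics.stdev(intervals)   # sample standard deviation
    expected = event_years[-1] + mean
    return expected - spread, expected, expected + spread

early, expected, late = forecast_window(PARKFIELD_YEARS)
print(f"next event expected around {expected:.0f} "
      f"(window ~{early:.0f}-{late:.0f})")
```

Even this naive estimate lands in the same decade as the paper's window, though the paper's model additionally weighs periodic phases and uncertainty properly — the gap between the two is exactly what the machine-learning approach is for.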

Sep 18, 2022

Hyenas know when and who to ‘whoop’ at thanks to their built-in caller ID system

Posted by in categories: information science, robotics/AI

The algorithm correctly associated a whoop bout with its hyena around 54 percent of the time.

Scientists from the University of Nebraska-Lincoln, U.S., have discovered that hyenas' whoops carry signals unique to each individual animal.

The researchers determined that hyena whoops have specific characteristics that can be attributed to each individual animal by using machine learning on audio files collected from a field trip, according to a press release published by EurekAlert on Saturday.
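As a toy sketch of individual-caller identification — not the study's actual pipeline, and using entirely synthetic data — each hypothetical hyena's whoop is reduced to a small acoustic feature vector (pitch and duration here), and a nearest-centroid classifier assigns a new whoop to the individual whose average features are closest:

```python
import random

random.seed(7)

# Hypothetical per-individual "true" feature means: (pitch Hz, duration s).
INDIVIDUALS = {"hyena_a": (420.0, 2.1), "hyena_b": (500.0, 1.6),
               "hyena_c": (460.0, 2.8)}

def sample_whoop(name):
    """Draw a noisy synthetic whoop feature vector for one individual."""
    pitch, dur = INDIVIDUALS[name]
    return (pitch + random.gauss(0, 15.0), dur + random.gauss(0, 0.15))

def train_centroids(samples):
    """samples: list of (name, features); return mean features per name."""
    sums, counts = {}, {}
    for name, (p, d) in samples:
        sp, sd = sums.get(name, (0.0, 0.0))
        sums[name] = (sp + p, sd + d)
        counts[name] = counts.get(name, 0) + 1
    return {n: (sums[n][0] / counts[n], sums[n][1] / counts[n]) for n in sums}

def classify(features, centroids):
    p, d = features
    # Scale duration so both features contribute comparably to distance.
    return min(centroids, key=lambda n: (centroids[n][0] - p) ** 2
                                        + (100 * (centroids[n][1] - d)) ** 2)

training = [(n, sample_whoop(n)) for n in INDIVIDUALS for _ in range(20)]
centroids = train_centroids(training)
correct = sum(classify(sample_whoop(n), centroids) == n
              for n in INDIVIDUALS for _ in range(30))
print(f"accuracy: {correct / 90:.0%}")
```

On cleanly separated synthetic features this does far better than the 54 percent reported for real field recordings, which is the point: real whoops overlap heavily, and the study's result is well above chance rather than near-perfect.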

Sep 17, 2022

A molecular optimization framework to identify promising organic radicals for aqueous redox flow batteries

Posted by in categories: chemistry, information science, robotics/AI

Recent advancements in the development of machine learning and optimization techniques have opened new and exciting possibilities for identifying suitable molecular designs, compounds, and chemical candidates for different applications. Optimization techniques, some of which are based on machine learning algorithms, are powerful tools that can be used to select optimal solutions for a given problem among a typically large set of possibilities.

Researchers at Colorado State University and the National Renewable Energy Laboratory have been applying state-of-the-art molecular optimization models to different real-world problems that entail identifying new and promising molecular designs. In their most recent study, featured in Nature Machine Intelligence, they specifically applied a newly developed, open-source optimization framework to the task of identifying viable organic radicals for aqueous redox flow batteries, energy devices that convert chemical energy into electricity.

“Our project was funded by an ARPA-E program that was looking to shorten how long it takes to develop new energy materials using machine learning techniques,” Peter C. St. John, one of the researchers who carried out the study, told TechXplore. “Finding new candidates for redox flow batteries was an interesting extension of some of our previous work, including a paper published in Nature Communications and another in Scientific Data, both looking at organic radicals.”
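In the same spirit — though far simpler than the authors' framework — candidate screening can be sketched as optimization over a design space: each hypothetical candidate is a vector of tunable descriptors, a made-up surrogate function stands in for a property model (e.g. a predicted redox potential), and hill climbing searches the space of possibilities for a high-scoring design:

```python
import random

random.seed(0)

def surrogate_score(descriptors):
    """Hypothetical property model: peaks when all descriptors equal 0.5.
    A real framework would use a trained ML property predictor here."""
    return -sum((d - 0.5) ** 2 for d in descriptors)

def hill_climb(dims=4, steps=2000, step_size=0.05):
    """Greedy local search: perturb the best design and keep improvements."""
    best = [random.random() for _ in range(dims)]
    best_score = surrogate_score(best)
    for _ in range(steps):
        candidate = [d + random.gauss(0, step_size) for d in best]
        score = surrogate_score(candidate)
        if score > best_score:           # keep only improving designs
            best, best_score = candidate, score
    return best, best_score

design, score = hill_climb()
print(f"best surrogate score: {score:.4f}")  # approaches the optimum of 0
```

Real molecular optimization must additionally respect chemistry (valid structures, synthesizability), which is why frameworks like the one in the study operate on molecular graphs rather than free-floating descriptor vectors.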

Sep 17, 2022

What are quantum-resistant algorithms—and why do we need them?

Posted by in categories: computing, encryption, information science, quantum physics

When quantum computers become powerful enough, they could theoretically crack the encryption algorithms that keep us safe. The race is on to find new ones.

Sep 15, 2022

Master’s Theorem in Data Structures

Posted by in category: information science

Master’s Theorem is the quickest method for finding an algorithm’s time complexity from its recurrence relation. The theorem can be applied to both decreasing and dividing recurrences, each of which we’ll look at in detail ahead.
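The dividing form can be sketched directly in code. For a recurrence T(n) = a·T(n/b) + Θ(n^k) with a ≥ 1 and b > 1, comparing a against b^k tells you which term dominates (the function below is an illustrative helper, not part of any library):

```python
import math

def master_theorem(a, b, k):
    """Return the Θ-class of T(n) = a*T(n/b) + Θ(n^k) as a string."""
    if a < 1 or b <= 1:
        raise ValueError("requires a >= 1 and b > 1")
    if math.isclose(a, b ** k):          # case 2: both terms balance
        if k == 0:
            return "Θ(log n)"
        return f"Θ(n^{k} log n)"
    if a > b ** k:                       # case 1: the recursion dominates
        return f"Θ(n^{math.log(a, b):.2f})"
    return f"Θ(n^{k})"                   # case 3: the n^k work dominates

print(master_theorem(2, 2, 1))  # merge sort: T(n) = 2T(n/2) + n
print(master_theorem(1, 2, 0))  # binary search: T(n) = T(n/2) + 1
print(master_theorem(8, 2, 2))  # 8 subproblems of size n/2 plus n^2 work
```

Merge sort falls in case 2 (Θ(n log n)-style), binary search in case 2 with k = 0 (Θ(log n)), and the last recurrence in case 1, where the recursion tree's leaves dominate and the answer is Θ(n^log_b a) = Θ(n³).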