
Can a machine feel love, hate or grief? How do we write our laws around A.I.? Would you let an algorithm run the government? Watch my newest video, where I attempt to answer these questions while introducing the concept of artificial intelligence philosophy: an area of study that could take on these and other mind-boggling questions in the future.


The idea of creating machines that can think and act like humans is steadily moving from fiction to reality. Humanoid robots, digital humans, ChatGPT, and driverless cars: there are already many applications driven by artificial intelligence that surpass humans in speed, accuracy, efficiency, and tirelessness, though so far only in narrow domains.
And yet this gives us hope of seeing a real miracle in the near future: artificial intelligence that equals or surpasses human intelligence on every measure.
Can AI compare with us? Surpass us? Replace us? Deceive us and pursue its own goals? Today we will look at how a miracle of nature, the human brain, differs from the defining technology of the 21st century, artificial intelligence, and what prospects we have with AI in the future.

The journey of artificial intelligence (AI) is a captivating saga, dating back to 1956 when John McCarthy coined the term at the Dartmouth conference. Through the ensuing decades, AI witnessed three significant booms. Between the 1950s and the 1970s, pioneers introduced groundbreaking work such as the perceptron, an early neural network, and the first chat programs. Though they foresaw AI surpassing human capabilities within a decade, this dream remained unfulfilled. By the 1980s, the second wave took shape, propelled by new machine learning techniques and neural networks, which promised innovations like speech recognition. Yet many of these promises fell short.

But the tide turned in 2006. Deep learning emerged, and by 2016, AI systems like AlphaGo were defeating world champions. The third boom had begun, later reinforced by large language models like ChatGPT and igniting discussions about integrating AI into humanoid robots. Discover more about this fascinating trend in our linked issue.

Our progress in cognitive psychology, neuroscience, quantum physics, and brain research has heavily influenced AI's trajectory. Our growing understanding of the human brain is especially significant, as it pushes the boundaries of neural network development. Can AI truly emulate human cognition?

Artificial consciousness is the next frontier in AI. While artificial intelligence has advanced tremendously, creating machines that can surpass human capabilities in certain areas, true artificial consciousness represents a paradigm shift—moving beyond computation into subjective experience, self-awareness, and sentience.

In this video, we explore the profound implications of artificial consciousness, the defining characteristics that set it apart from traditional AI, and the groundbreaking work being done by McGinty AI in this field. McGinty AI is pioneering new frameworks, such as the McGinty Equation (MEQ) and Cognispheric Space (C-space), to measure and understand consciousness levels in artificial and biological entities. These advancements provide a foundation for building truly conscious AI systems.

The discussion also highlights real-world applications, including QuantumGuard+, an advanced cybersecurity system utilizing artificial consciousness to neutralize cyber threats, and HarmoniQ HyperBand, an AI-powered healthcare system that personalizes patient monitoring and diagnostics.

However, as we venture into artificial consciousness, we must navigate significant technical challenges and ethical considerations. Questions about autonomy, moral status, and responsible development are at the forefront of this revolutionary field. McGinty AI integrates ethical frameworks such as the Rotary Four-Way Test to ensure that artificial consciousness aligns with human values and benefits society.

For decades, the realm of particle physics has been governed by two major categories: fermions and bosons. Fermions, like quarks and leptons, make up matter, while bosons, such as photons and gluons, act as force carriers. These classifications have long been thought to be the limits of particle behavior. However, a breakthrough has recently changed this understanding.
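To make the distinction concrete (this is standard textbook quantum statistics, not part of the new result): exchanging two identical particles leaves the physics unchanged but multiplies the wavefunction by a fixed sign,

\[
\psi(x_{1},x_{2}) = +\,\psi(x_{2},x_{1}) \ \text{for bosons}, \qquad
\psi(x_{1},x_{2}) = -\,\psi(x_{2},x_{1}) \ \text{for fermions}.
\]

Paraparticles, roughly speaking, obey exchange rules that fall outside these two options: swapping them can act on an internal, hidden state of the particles rather than simply multiplying the wavefunction by plus or minus one.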

Researchers have mathematically proven the existence of paraparticles, a theoretical type of particle that doesn’t fit neatly into the traditional fermion or boson categories. These exotic particles were once deemed impossible, defying the conventional laws of physics. Now, thanks to advanced mathematical equations, scientists have demonstrated that paraparticles can exist without violating known physical constraints.

The implications of this discovery could be far-reaching, especially in areas like quantum computing. Paraparticles could offer new possibilities in how we understand the universe at its most fundamental level. While the discovery is still in its early stages, it provides a new tool for physicists to explore more complex systems, potentially unlocking new technologies in the future.


Two recently published, peer-reviewed scientific papers show that warp drive designs based on real physics may be possible. Unlike earlier proposals, these designs are realistic and physical. In a paper published in 1994, Mexican physicist Miguel Alcubierre showed theoretically that a faster-than-light (FTL) warp drive could work within the laws of physics, but it would require huge amounts of negative mass or energy, and no such thing is known to exist.
0:00 Problem with c.
2:21 General Relativity.
3:15 Alcubierre warp.
4:40 Bobrick & Martire solution.
7:15 Types of warp drives.
8:42 Spherical Warp drive.
11:32 FTL using Positive Energy.
13:14 Next steps.
14:08 Further education Brilliant.

In a recent paper published by Applied Physics, authors Alexey Bobrick and Gianni Martire outline how a physically feasible warp drive could, in principle, work without the need for negative energy. I spoke to them, and they provided technical input on this video.

What Alcubierre did in his paper was to work out the shape he believed spacetime needed to have in order for a ship to travel faster than light. He then solved Einstein's equations of general relativity to determine the matter and energy needed to generate the desired curvature. It could only work with negative energy. This is mathematically consistent, but physically meaningless, because negative mass is not known to exist. Negative mass is not the same as antimatter: antimatter has positive energy and mass.
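For reference, the metric Alcubierre proposed in that 1994 paper (written here in the standard form found in the literature; the symbols are not defined in the video itself) is

\[
ds^{2} = -c^{2}\,dt^{2} + \bigl(dx - v_{s}(t)\,f(r_{s})\,dt\bigr)^{2} + dy^{2} + dz^{2},
\]

where \(v_{s}(t)\) is the speed of the bubble's centre along \(x\) and \(f(r_{s})\) is a shape function equal to 1 inside the bubble and falling to 0 far from it. Feeding this curvature into Einstein's equations yields an energy density that is negative in the bubble wall, which is exactly the negative-energy requirement discussed above.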

To design their improved materials, Serles and Filleter worked with Professor Seunghwa Ryu and PhD student Jinwook Yeo at the Korea Advanced Institute of Science & Technology (KAIST) in Daejeon, South Korea. This partnership was initiated through U of T’s International Doctoral Clusters program, which supports doctoral training through research engagement with international collaborators.

The KAIST team employed the multi-objective Bayesian optimization machine learning algorithm. This algorithm learned from simulated geometries to predict the best possible geometries for enhancing stress distribution and improving the strength-to-weight ratio of nano-architected designs.
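As an illustration only (a generic sketch of Bayesian optimization over geometry parameters, not the KAIST team's actual code or objective), the loop of fitting a surrogate model to simulated designs and proposing the next geometry to evaluate might look like the following in Python, with the two objectives collapsed into a single weighted score for simplicity:

import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

# Hypothetical geometry parameters: strut diameter and unit-cell size, normalized to [0, 1].
rng = np.random.default_rng(0)

def simulated_score(x):
    # Stand-in for a mechanics simulation returning a weighted combination of
    # strength-to-weight ratio and stress uniformity (purely synthetic here).
    d, cell = x
    return np.sin(3 * d) * np.cos(2 * cell) - 0.5 * (d - 0.6) ** 2

# Initial designs evaluated by the "simulator".
X = rng.uniform(0, 1, size=(8, 2))
y = np.array([simulated_score(x) for x in X])

gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True)

for _ in range(20):
    gp.fit(X, y)
    # Expected-improvement acquisition over a random candidate pool.
    cand = rng.uniform(0, 1, size=(1000, 2))
    mu, sigma = gp.predict(cand, return_std=True)
    best = y.max()
    z = (mu - best) / np.maximum(sigma, 1e-9)
    ei = (mu - best) * norm.cdf(z) + sigma * norm.pdf(z)
    x_next = cand[np.argmax(ei)]
    # Evaluate the proposed geometry and grow the training set.
    X = np.vstack([X, x_next])
    y = np.append(y, simulated_score(x_next))

print("best simulated geometry:", X[np.argmax(y)], "score:", y.max())

In the actual workflow, simulated_score would be replaced by the finite-element simulation of a candidate nanolattice, and a true multi-objective treatment would keep the objectives separate rather than scalarizing them.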

Serles then used a two-photon polymerization 3D printer housed in the Centre for Research and Application in Fluidic Technologies (CRAFT) to create prototypes for experimental validation. This additive manufacturing technology enables 3D printing at the micro and nano scale, creating optimized carbon nanolattices.

For example, to compute the magnetic susceptibility, we simply select the operator \(A=\beta {({S}^{z})}^{2}\), where \(\beta = 1/T\) is the inverse temperature. Interestingly, this method of estimating thermal expectation values is insensitive to uniform spectral broadening of each peak, due to a cancellation between the numerator and denominator (see discussion resulting in equation (S69) in Supplementary Information). However, it is highly sensitive to noise at low \(\omega\), which is exponentially amplified by \(e^{\beta\omega}\). To address this, we estimate the SNR for each \({D}^{A}(\omega)\) independently and zero out all points with SNR below three times the average SNR. This potentially introduces some bias by eliminating peaks with low signal but ensures that the effects of shot noise are well controlled.
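A minimal NumPy sketch of the noise filter described above (illustrative only; the array names and the form of the SNR estimate are assumptions, not the paper's code):

import numpy as np

def filter_low_snr(D_A, sigma_D, threshold_factor=3.0):
    # D_A     : measured spectral values D^A(omega)
    # sigma_D : estimated statistical (shot-noise) uncertainty per point
    snr = np.abs(D_A) / np.maximum(sigma_D, 1e-12)   # per-point signal-to-noise ratio
    keep = snr >= threshold_factor * snr.mean()      # keep points above 3x the average SNR
    return np.where(keep, D_A, 0.0)                  # zero out everything else

The filtered spectrum can then be inserted into the numerator and denominator of the thermal-expectation-value estimate, where the \(e^{\beta\omega}\) weighting would otherwise blow up the low-frequency noise.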

To quantify the effect of noise on the engineered time dynamics, we simulate a microscopic error model by applying a local depolarizing channel with an error probability \(p\) at each gate. This results in a decay of the obtained signals for the correlator \({D}_{R}^{A}(t)\). The rate of the exponential decay grows roughly linearly with the weight of the measured operators (Extended Data Fig. 2). This scaling with operator weight can be captured by instead applying a single depolarizing channel at the end of the time evolution, with a per-site error probability of \(\gamma t\), where \(\gamma\) is an effective noise rate. This effective \(\gamma\) also scales roughly linearly with the single-qubit error rate per gate \(p\) (Extended Data Fig. 2).
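A small sketch of how such an effective rate could be extracted from decay curves (a generic exponential fit assuming \(|D(t)| \propto e^{-w\gamma t}\) for an operator of weight \(w\); the data below are synthetic, not from the paper):

import numpy as np
from scipy.optimize import curve_fit

# Synthetic decay curves for operators of different weights w,
# mimicking |D(t)| ~ exp(-w * gamma_eff * t) plus a little noise.
gamma_true = 0.02
times = np.linspace(0, 50, 26)
weights = [1, 2, 4]

rng = np.random.default_rng(1)
fitted = {}
for w in weights:
    signal = np.exp(-w * gamma_true * times) + 0.01 * rng.normal(size=times.size)
    decay = lambda t, g: np.exp(-w * g * t)
    (g_fit,), _ = curve_fit(decay, times, signal, p0=[0.01])
    fitted[w] = g_fit

# If the single end-of-circuit depolarizing channel captures the noise,
# the fitted gamma should come out roughly the same for every operator weight.
print(fitted)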

Quantum simulations are constrained by the required number of samples and the simulation time needed to reach a certain target accuracy. These factors are crucial in determining the size of the Hamiltonians that are accessible on particular quantum hardware.