Like a bumblebee moving from flower to flower, a new insect-inspired flying robot developed by engineers at the University of California, Berkeley, is less than a centimeter wide and can hover, change direction, and hit small targets.
Nanobots aren’t just microscopic machines—they could come in countless shapes and sizes, each designed for a unique purpose. From medical nanobots that repair cells to swarming micro-robots that build structures at the atomic level, the future of nanotechnology is limitless. Could these tiny machines revolutionize medicine, industry, and even space exploration? #Nanotech #Nanobots #FutureTech #Science #Innovation …
0:00 Milling
2:25 Joscha Bach: Opening Remarks
23:14 Jeremy Nixon: Introduction to AGI House
25:43 Jeremy Nixon: Engineering Consciousness
1:07:50 Reactions to J…
Recent rapid progress in artificial intelligence has prompted renewed interest in the possibility of consciousness in artificial systems. This talk argues that this question forces us to confront troubling methodological challenges for consciousness science. The surprising capabilities of large language models provide reason to think that many, if not all, cognitive capabilities will soon be within reach of artificial systems. However, these advancements do not help us resolve strictly metaphysical questions concerning substrate-independence, multiple realizability, or the connection between consciousness and life. Ultimately, I suggest that these questions are likely to be settled not by philosophical argument or scientific experimentation, but by patterns of interactions between humans and machines. As we form valuable and affectively-laden relationships with ever more intelligent machines, it will become progressively harder to treat them as non-conscious entities. Whether this shift will amount to a vindication of AI consciousness or a form of mass delusion remains far from obvious.
“Metaphysical Experiments: Physics and the Invention of the Universe” by Bjørn Ekeberg Book Link: https://amzn.to/4imNNk5
“Metaphysical Experiments: Physics and the Invention of the Universe” explores the intricate relationship between physics and metaphysics, arguing that fundamental metaphysical assumptions profoundly shape scientific inquiry, particularly in cosmology. The author examines historical developments from Galileo and Newton to modern cosmology and particle physics, highlighting how theoretical frameworks and experimental practices are intertwined with philosophical commitments about the nature of reality. The text critiques the uncritical acceptance of mathematical universality in contemporary physics, suggesting that cosmology’s reliance on hypological and metalogical reasoning reveals a deep-seated faith rather than pure empirical validation. Ultimately, the book questions the limits and implications of a science that strives for universal mathematical truth while potentially overlooking its own inherent complexities and metaphysical underpinnings.
Chapter summaries:
- Cosmology in the Cave: This chapter examines the Large Hadron Collider (LHC) in Geneva to explore the metaphysics involved in the pursuit of a “Theory of Everything” linking subatomic physics to cosmology.
- Of God and Nature: This chapter delves into the seventeenth century to analyze the invention of the universe as a concept alongside the first telescope, considering the roles of Galileo, Descartes, and Spinoza.
- Probability and Proliferation: This chapter investigates the nineteenth-century shift in physics with the rise of probabilistic reasoning and the scientific invention of the particle, focusing on figures like Maxwell and Planck.
- Metaphysics with a Big Bang: This chapter discusses the twentieth-century emergence of scientific cosmology and the big bang theory, shaped by large-scale science projects and the ideas of Einstein and Hawking.
- Conclusion: This final section questions the significance of large-scale experiments like the JWST as metaphysical explorations and reflects on our contemporary scientific relationship with the cosmos.
A group of computer scientists at Microsoft Research, working with a colleague from the University of Chinese Academy of Sciences, has introduced a new Microsoft AI model that runs on a regular CPU instead of a GPU. The researchers have posted a paper on the arXiv preprint server outlining how the new model was built, its characteristics, and how well it has performed in testing so far.
Over the past several years, large language models (LLMs) have become all the rage. Models such as ChatGPT have been made available to users around the globe, introducing the idea of intelligent chatbots. One thing most of them have in common is that they are trained and run on GPU chips, because of the enormous amount of computing power required to train them on massive amounts of data.
More recently, concerns have been raised about the huge amounts of energy data centers consume to support all of these chatbots. In this new effort, the team has found what it describes as a smarter way to process that data and has built a model to prove it.
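The article does not say how the model achieves CPU-friendly inference, but one common approach to running language models without a GPU is aggressive low-bit weight quantization. The NumPy sketch below illustrates a generic ternary (-1/0/+1) scheme purely as an assumed example; it is not Microsoft's method, and the function names are invented for illustration.

```python
import numpy as np

def ternary_quantize(w: np.ndarray):
    """Quantize a float weight matrix to {-1, 0, +1} with a per-matrix scale.

    A generic absmean-style scheme used for illustration only; this is an
    assumption, not the method described in the Microsoft paper.
    """
    scale = np.mean(np.abs(w)) + 1e-8          # single scaling factor
    q = np.clip(np.round(w / scale), -1, 1)    # ternary weights
    return q.astype(np.int8), scale

def ternary_matvec(q: np.ndarray, scale: float, x: np.ndarray) -> np.ndarray:
    """Matrix-vector product with ternary weights.

    Because entries are -1, 0, or +1, the multiplies reduce to additions
    and subtractions, which is what makes CPU inference cheap.
    """
    return scale * (q.astype(np.float32) @ x)

# Toy usage: quantize a random layer and compare against the full-precision output.
rng = np.random.default_rng(0)
w = rng.normal(size=(256, 512)).astype(np.float32)
x = rng.normal(size=512).astype(np.float32)
q, s = ternary_quantize(w)
print(np.corrcoef(w @ x, ternary_matvec(q, s, x))[0, 1])
```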
A study published in Physical Review Letters outlines a new approach for extracting information from binary systems by looking at the entire posterior distribution instead of making decisions based on individual parameters.
Since their first detection in 2015, gravitational waves have become a vital tool for astronomers studying the early universe, the limits of general relativity, and cosmic events such as the mergers of compact binary systems.
Binary systems consist of two massive objects, such as neutron stars or black holes, spiraling toward each other. As they merge, they generate ripples in spacetime, known as gravitational waves, which carry information about both objects.
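For intuition about why the full posterior matters, consider a toy example: when the component-mass samples are strongly correlated, each mass looks poorly constrained on its own, while a combination of the two, such as the chirp mass, is tightly constrained. The NumPy sketch below uses synthetic, made-up numbers (the means, covariance, and sample size are assumptions for illustration, not results from the paper).

```python
import numpy as np

# Synthetic, strongly correlated "posterior" samples for two component masses
# (in solar masses). The numbers are illustrative assumptions, not real data.
rng = np.random.default_rng(1)
mean = np.array([30.0, 25.0])
cov = np.array([[4.0, -3.6],
                [-3.6, 4.0]])          # strong anti-correlation between m1 and m2
m1, m2 = rng.multivariate_normal(mean, cov, size=20_000).T

# Per-parameter summaries discard the correlation...
print("std(m1) =", m1.std(), "std(m2) =", m2.std())

# ...but a function of the joint posterior, such as the chirp mass,
# can be far better constrained than either mass alone.
chirp = (m1 * m2) ** (3 / 5) / (m1 + m2) ** (1 / 5)
print("std(chirp mass) =", chirp.std())
```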
Given the complexity of multi-tenant cloud environments and the growing need for real-time threat mitigation, Security Operations Centers (SOCs) must adopt AI-driven adaptive defense mechanisms to counter Advanced Persistent Threats (APTs). However, SOC analysts face challenges in handling adaptive adversarial tactics, requiring intelligent decision-support frameworks. We propose a Cognitive Hierarchy Theory-driven Deep Q-Network (CHT-DQN) framework that models interactive decision-making between SOC analysts and AI-driven APT bots. The SOC analyst (defender) operates at cognitive level-1, anticipating attacker strategies, while the APT bot (attacker) follows a level-0 policy. By incorporating CHT into DQN, our framework enhances adaptive SOC defense using Attack Graph (AG)-based reinforcement learning. Simulation experiments across varying AG complexities show that CHT-DQN consistently achieves higher data protection and lower action discrepancies compared to standard DQN. A theoretical lower bound further confirms its superiority as AG complexity increases. A human-in-the-loop (HITL) evaluation on Amazon Mechanical Turk (MTurk) reveals that SOC analysts using CHT-DQN-derived transition probabilities align more closely with adaptive attackers, leading to better defense outcomes. Moreover, human behavior aligns with Prospect Theory (PT) and Cumulative Prospect Theory (CPT): participants are less likely to reselect failed actions and more likely to persist with successful ones. This asymmetry reflects amplified loss sensitivity and biased probability weighting — underestimating gains after failure and overestimating continued success. Our findings highlight the potential of integrating cognitive models into deep reinforcement learning to improve real-time SOC decision-making for cloud security.
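As a rough illustration of the cognitive-hierarchy idea (a level-1 defender best-responding to an anticipated level-0 attacker), here is a toy sketch that omits the DQN and attack-graph machinery entirely; the node names, attacker policy, and loss matrix are invented, and this is not the paper's CHT-DQN implementation.

```python
import numpy as np

# Toy illustration of the cognitive-hierarchy idea behind CHT-DQN:
# a level-0 attacker follows a simple fixed policy over attack-graph nodes,
# and a level-1 defender best-responds to that anticipated distribution.
# Nodes and payoffs below are invented for illustration only.

nodes = ["web_server", "db_server", "auth_service"]

# Level-0 attacker: a naive policy, e.g. uniform over attack-graph entry points.
attacker_policy = np.array([1 / 3, 1 / 3, 1 / 3])

# Defender's expected data loss if the attacker hits node j while the
# defender hardens node i (rows: defended node, cols: attacked node).
loss = np.array([
    [0.1, 0.9, 0.6],   # defend web_server
    [0.8, 0.1, 0.6],   # defend db_server
    [0.7, 0.8, 0.1],   # defend auth_service
])

# Level-1 reasoning: anticipate the level-0 attacker's mixture and choose
# the defense that minimizes expected loss against it.
expected_loss = loss @ attacker_policy
best_defense = nodes[int(np.argmin(expected_loss))]
print(dict(zip(nodes, expected_loss.round(3))), "->", best_defense)
```

In the full framework, a deep Q-network learns these values over attack-graph states rather than reading them from a fixed matrix, but the level-1 anticipation step is the same in spirit.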