
Tech Companies Showing Signs of Distress as They Run Out of Money for AI Infrastructure

AI companies are looking to spend trillions of dollars on data centers to power their increasingly resource-intensive AI models — an astronomical amount of money that could threaten the entire economy if the bet doesn’t pay off.

As the race to spend as much money as possible on AI infrastructure rages on, companies have become increasingly desperate to keep the cash flowing. Firms like OpenAI, Anthropic, and Oracle are exhausting existing debt markets — including junk debt, private credit, and asset-backed loans — in moves that, as Bloomberg reports, are raising concerns among investors.

“The numbers are like nothing any of us who have been in this business for 25 years have seen,” Bank of America managing head of global credit Matt McQueen told Bloomberg. “You have to turn over all avenues to make this work.”

Neuroscience Beyond Neurons? The Diverse Intelligence Era | Michael Levin & Robert Chis-Ciure

What if neurons aren’t the foundation of mind?

In this Mind-Body Solution Colloquia, Michael Levin and Robert Chis-Ciure challenge one of neuroscience’s deepest assumptions: that cognition and intelligence are exclusive to brains and neurons.

Drawing on cutting-edge work in bioelectricity, developmental biology, and philosophy of mind, this conversation explores how cells, tissues, and living systems exhibit goal-directed behavior, memory, and problem-solving — long before neurons ever appear.

We explore:
• Cognition without neurons.
• Bioelectric networks as control systems.
• Memory and learning beyond synapses.
• Morphogenesis as collective intelligence.
• Implications for AI, consciousness, and ethics.

This episode pushes neuroscience beyond the neuron, toward a deeper understanding of mind, life, and intelligence as continuous across scales.


AI Apocalypse — AI Genocide Prevention with Special Guest Alex Lightman

Experts predict that AI could wipe out humanity. This may be our best chance to prevent that from happening. Explore with special guest Alex Lightman how humanity can survive and thrive as AI surpasses our own intelligence.




Joscha Bach presents “Machine Consciousness and Beyond” | dAGI Summit 2025

Bach reframes AI as the endpoint of a long philosophical project to “naturalize the mind,” arguing that modern machine learning operationalizes a lineage from Aristotle to Turing in which minds, worlds, and representations are computational state-transition systems. He claims computer science effectively re-discovers animism—software as self-organizing, energy-harvesting “spirits”—and that consciousness is a simple coherence-maximizing operator required for self-organizing agents rather than a metaphysical mystery. Current LLMs only simulate phenomenology using deepfaked human texts, but the universality of learning systems suggests that, when trained on the right structures, artificial models could converge toward the same internal causal patterns that give rise to consciousness. Bach proposes a biological-to-machine consciousness framework and a research program (CIMC) to formalize, test, and potentially reproduce such mechanisms, arguing that understanding consciousness is essential for culture, ethics, and future coexistence with artificial minds.

Key takeaways:

▸ Speaker & lens: Cognitive scientist and AI theorist aiming to unify philosophy of mind, computer science, and modern ML into a single computationalist worldview.
▸ AI as philosophical project: Modern AI fulfills the ancient ambition to map mind into mathematics; computation provides the only consistent language for modeling reality and experience.
▸ Computationalist functionalism: Objects = state-transition functions; representations = executable models; syntax = semantics in constructive systems.
▸ Cyber-animism: Software as “spirits”—self-organizing, adaptive control processes; living systems differ from dead ones by the software they run.
▸ Consciousness as function: A coherence-maximizing operator that integrates mental states; second-order perception that stabilizes working memory; emerges early in development as a prerequisite for learning.
▸ LLMs & phenomenology: Current models aren’t conscious; they simulate discourse about consciousness using data full of “deepfaked” phenomenology. A Turing test cannot detect consciousness because performance ≠ mechanism.
▸ Universality hypothesis: Different architectures optimized for the same task tend to converge on similar internal causal structures; suggests that consciousness-like organization could arise if it’s the simplest solution to coherence and control.
▸ Philosophical zombies: Behaviorally identical but non-conscious agents may be more complex than conscious ones; evolution chooses simplicity → consciousness may be the minimal solution for self-organized intelligence.
▸ Language vs embodiment: Language may contain enough statistical structure to reconstruct much of reality; embodiment may not be strictly necessary for convergent world models.
▸ Testing for machine consciousness: Requires specifying phenomenology, function, search space, and success criteria—not performance metrics.
▸ CIMC agenda: Build frameworks and experiments to recreate consciousness-like operators in machines; explore implications for ethics, interfaces, and coexistence with future minds.

Sentience Beyond Biology — Debate w/Dmitry Volkov, Joscha Bach, Matthew MacDougall, Murray Shanahan

What happens when biology is no longer the foundation for sentience, agency, and consciousness?

This groundbreaking panel discussion brings together some of the world’s most brilliant minds in AI, neuroscience, and philosophy to tackle humanity’s most profound questions about the future of intelligence.

Chaired by neuroscientist Patrick House, the conversation explores the boundaries of machine agency, the possibility of AI emotion, and the future of human–machine interaction.

🎙 Featured Speakers:
- Joscha Bach – Cognitive Scientist, AI Researcher, Philosopher.
- Dmitry Volkov – Co-founder of the International Center for Consciousness Studies (ICCS), Philosopher, Entrepreneur, Founder of Social Discovery Group & EVA AI
- Matthew MacDougall – Head of Surgery at Neuralink, Pioneer in Brain–Computer Interfaces.
- Murray Shanahan – Professor of Cognitive Robotics at Imperial College London, Scientist at DeepMind.

Key Topics in This Debate:
- Whether giving machines “agency” is just a useful human shortcut (The Intentional Stance).
- If the deeper question is not “Is AI conscious?” but “Can it truly love?”
- How modern AI is erasing the Uncanny Valley.
- The challenge of true individuality and creativity in AI-generated art.
- How human biological hardware shapes consciousness — and what this means for building sentient machines.

00:00:00 — Introduction and Presentation of Participants.

Brain-inspired AI helps soft robot arms switch tasks and stay stable

Researchers have developed an AI control system that enables soft robotic arms to learn a wide repertoire of motions and tasks once, then adjust to new scenarios on the fly without retraining or loss of functionality. This breakthrough brings soft robotics closer to human-like adaptability for real-world applications such as assistive robotics, rehabilitation robots, and wearable or medical soft robots, by making them more intelligent, versatile, and safe. The research team includes the Singapore-MIT Alliance for Research and Technology’s (SMART) Mens, Manus & Machina (M3S) interdisciplinary research group and the National University of Singapore (NUS), alongside collaborators from the Massachusetts Institute of Technology (MIT) and Nanyang Technological University (NTU Singapore).

Unlike regular robots that move using rigid motors and joints, soft robots are made from flexible materials such as soft rubber and move using special actuators—components that act like artificial muscles to produce physical motion. While their flexibility makes them ideal for delicate or adaptive tasks, controlling soft robots has always been a challenge because their shape changes in unpredictable ways. Real-world environments are often complicated and full of unexpected disturbances, and even small changes in conditions—like a shift in weight, a gust of wind, or a minor hardware fault—can throw off their movements.
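The control difficulty described above can be illustrated with a deliberately simple toy (this is not the SMART/M3S learned controller, and all numbers are made up): model the actuator tip as a damped spring whose stiffness drifts mid-run, standing in for an unpredictable shape change, add a brief external push, and use PID feedback to steer the tip back to its target.

```python
# Toy sketch: feedback control of a 1-D "soft actuator" tip whose
# stiffness changes mid-run and which receives a brief disturbance.
# All parameters are invented for illustration.

def simulate_tip(kp=40.0, ki=20.0, kd=10.0, target=1.0, steps=4000, dt=0.005):
    x, v, integral = 0.0, 0.0, 0.0   # tip position, velocity, error integral
    mass, damping = 1.0, 2.0
    for t in range(steps):
        stiffness = 5.0 if t < steps // 2 else 8.0      # mid-run property drift
        push = 3.0 if steps // 2 <= t < steps // 2 + 100 else 0.0  # disturbance
        error = target - x
        integral += error * dt
        u = kp * error + ki * integral - kd * v          # PID feedback force
        a = (u + push - stiffness * x - damping * v) / mass
        v += a * dt                                      # semi-implicit Euler
        x += v * dt
    return x

print(round(simulate_tip(), 2))  # tip settles back near the 1.0 target
```

The integral term is what removes the steady-state error after the stiffness change; the learned controllers in the article generalize this idea far beyond a single fixed gain set.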

Study identifies key elements that determine impact of AI on jobs

Research by academics at King’s College London and the AI Objectives Institute has shed light on how AI reshapes the labor market: what matters is not just how much of a job AI can do, but which parts. Dr. Bouke Klein Teeselink and Daniel Carey analyzed hundreds of millions of job postings across 39 countries before and after the release of ChatGPT in November 2022. They found that occupations with a large number of tasks exposed to AI automation, such as basic administration or data entry, saw a 6.1% decline in job postings on average. Importantly, however, this effect depends not only on how many tasks are exposed, but also on which tasks.
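The core measurement behind a finding like this is a before/after comparison of posting counts, split by how exposed each occupation's tasks are to automation. A minimal sketch with entirely made-up numbers (not the King's College dataset) looks like:

```python
# Hedged sketch: average percent change in job postings, grouped by
# AI-exposure share. Occupations and counts are invented for illustration.

postings = {
    # occupation: (exposure_share, postings_before, postings_after)
    "data entry clerk": (0.80, 12000, 10900),
    "admin assistant":  (0.70,  9500,  8900),
    "hr specialist":    (0.40,  6000,  5950),
    "electrician":      (0.10,  4000,  4080),
}

def pct_change(before, after):
    return 100.0 * (after - before) / before

high = [pct_change(b, a) for e, b, a in postings.values() if e >= 0.5]
low  = [pct_change(b, a) for e, b, a in postings.values() if e < 0.5]

print(round(sum(high) / len(high), 1))  # average change, high-exposure jobs
print(round(sum(low) / len(low), 1))    # average change, low-exposure jobs
```

In this toy data the high-exposure group declines while the low-exposure group barely moves, which is the shape of the effect the study reports at scale.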

When AI automates the routine, less-skilled parts of a job, the work that remains tends to be more specialized. Fewer people can do it, so wages rise. The researchers cite the example of a human resources specialist whose administrative paperwork is now handled by AI, leaving them to focus on complex employee relations and judgment calls.

But when AI can perform the more specialized, cognitively demanding tasks, wages decrease because the job no longer requires scarce expertise. The researchers found this effect can apply to roles such as junior software engineers.
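The two-way wage effect can be captured in an illustrative toy (not the researchers' actual model): treat a job as a bundle of tasks, each with a skill level, and let the wage proxy be the mean skill of the tasks left for the human. Automating routine tasks raises that mean; automating the specialized tasks lowers it.

```python
# Toy wage-proxy model; task names and skill scores are invented.

def remaining_skill(tasks, automated):
    """Mean skill level of the tasks not taken over by AI."""
    left = [skill for name, skill in tasks if name not in automated]
    return sum(left) / len(left)

hr_specialist = [("paperwork", 2), ("scheduling", 3),
                 ("employee relations", 8), ("judgment calls", 9)]

baseline = remaining_skill(hr_specialist, automated=set())
routine_gone = remaining_skill(hr_specialist, {"paperwork", "scheduling"})
expert_gone = remaining_skill(hr_specialist, {"employee relations", "judgment calls"})

print(baseline, routine_gone, expert_gone)  # 5.5 8.5 2.5
```

When the routine half is automated the remaining work is more specialized (higher mean), mirroring the HR example above; when the expert half is automated the remainder is low-skill, mirroring the junior-engineer case.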

Brain-inspired hardware uses single-spike coding to run AI more efficiently

The use of artificial intelligence (AI) systems, such as the models underpinning the functioning of ChatGPT and various other online platforms, has grown exponentially over the past few years. Current hardware and electronic devices, however, might not be best suited for running these systems, which are computationally intensive and can drain huge amounts of energy.

Electronics engineers worldwide have thus been trying to develop alternative hardware that better reflects how the human brain processes information and could thus run AI systems more reliably, while consuming less power. Many of these brain-inspired hardware systems rely on memristors, electronic components that can both store and process information.
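The "store and process in the same place" property of memristors is usually exploited as a crossbar: a weight matrix is stored as conductances, and applying input voltages to the rows yields column currents equal to a matrix-vector product, by Ohm's and Kirchhoff's laws. A generic sketch of that idea (not the specific Peking/Southwest University design) is:

```python
# Generic in-memory computing sketch: a memristor crossbar computes
# column currents I_j = sum_i G[i][j] * V[i] "for free" where the
# weights are stored. Values are illustrative.

def crossbar_mac(conductances, voltages):
    """Column currents of a crossbar with one memristor per crosspoint."""
    n_cols = len(conductances[0])
    return [sum(conductances[i][j] * voltages[i] for i in range(len(voltages)))
            for j in range(n_cols)]

G = [[0.1, 0.2],   # conductances in siemens (the stored weights)
     [0.3, 0.4]]
V = [1.0, 0.5]     # input voltages applied to the rows

print([round(x, 6) for x in crossbar_mac(G, V)])  # [0.25, 0.4]
```

Because the multiply-accumulate happens in the analog domain at the storage site, no data shuttles between memory and processor, which is where the energy savings over conventional hardware come from.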

Researchers at Peking University and Southwest University recently introduced a new neuromorphic hardware system that combines different types of memristors. This system, introduced in a paper published in Nature Electronics, could be used to create new brain-machine interfaces and AI-powered wearable devices.
