Researchers at Harvard University exploited the Marangoni effect to propel their tiny robots.
These bots are meant to take over routine tasks, helping humans carry out critical work faster and more accurately.
In this arena, researchers have explored a new way to power robots: by harnessing surface tension, scientists have developed tiny robots that can perform industrial tasks.
Researchers from Harvard University report that their tiny robots float using the same mechanism that lets beetles glide across ponds and makes Cheerios cluster together in a bowl.
Tesla Optimus has taken a step closer to human-like dexterity, showcasing its upgraded hands with impressive capabilities. A recent video highlights the robot catching a tennis ball using its new hands, which now feature 22 degrees of freedom. By comparison, human hands have 27 degrees of freedom, making Optimus’ latest enhancements a significant stride in robotic engineering. In May 2024, Elon Musk hinted at these upgrades, and the results are now visible.
This development aligns closely with Neuralink’s recent milestone: the United States Food and Drug Administration has granted approval for the CONVOY Study, a feasibility trial testing the N1 brain-computer interface implant alongside assistive robotic arms, hinting at the possibility of collaboration between Tesla Optimus and Neuralink technologies. During a Neuralink update in July, Elon Musk mentioned the potential for Optimus’ limbs to work in sync with the N1 Implant, emphasizing a vision in which human minds control robotic systems seamlessly.
Optimus itself is a technical marvel, standing five feet eight inches tall and weighing 125 pounds. Designed for versatility, it is constructed with lightweight yet durable materials and powered by a 2.3 kilowatt-hour battery paired with a proprietary energy management system that ensures efficient operation for tasks ranging from light to intensive. With 40 electromechanical actuators, Optimus offers precise movements and a human-like range of motion. Capable of walking at speeds up to five miles per hour and carrying up to 45 pounds, the robot is designed for real-world utility, blending innovation with practicality.
Mint’s All About AI Tech4Good Awards recognised impactful AI solutions at the Jio World Centre in Mumbai. The event emphasised purpose-driven innovation, with discussions on ethical AI and community empowerment, showcasing how technology can address pressing social and environmental issues.
ABSTRACT: The fundamental problem of causal inference refers to the impossibility of attributing a causal link to a correlation; in other words, correlation does not prove causation. This problem can be understood from two points of view: experimental and statistical. The experimental approach tells us that the problem arises from the impossibility of observing an event simultaneously in the presence and in the absence of a hypothesis. The statistical approach, on the other hand, suggests that the problem stems from the error of treating tested hypotheses as independent of each other. Modern statistics tends to place greater emphasis on the statistical approach because, unlike the experimental point of view, it also shows us a way to address the problem. Indeed, when many hypotheses are tested, a composite hypothesis is implicitly constructed that tends to cover the entire solution space. Consequently, the composite hypothesis can be fitted to any data set by producing a random correlation. Furthermore, the probability that the correlation is random is equal to the probability of obtaining the same result by generating an equivalent number of random hypotheses.
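A small simulation can make the last two claims concrete. The sketch below is not from the abstract's authors; the data set, the number of hypotheses, and the correlation threshold are assumptions chosen for illustration. It shows that when enough random hypotheses are tested against the same structureless data, at least one of them almost always "fits", and the frequency of such hits gauges the probability that a reported correlation is random.

```python
# Minimal illustration (assumed setup, not the abstract's method): testing many
# random hypotheses against one fixed, structureless data set yields spurious
# correlations with high probability.
import numpy as np

rng = np.random.default_rng(0)

n_samples = 50        # observations in the fixed data set
n_hypotheses = 1000   # independent random "hypotheses" tested against it
threshold = 0.4       # |correlation| considered "interesting"

data = rng.normal(size=n_samples)  # the data set contains no real structure

best = 0.0
hits = 0
for _ in range(n_hypotheses):
    hypothesis = rng.normal(size=n_samples)
    r = np.corrcoef(data, hypothesis)[0, 1]
    best = max(best, abs(r))
    if abs(r) >= threshold:
        hits += 1

# The composite hypothesis ("at least one of the 1000 fits") is almost always satisfied.
print(f"strongest |r| found: {best:.2f}")
print(f"fraction of random hypotheses exceeding the threshold: {hits / n_hypotheses:.3f}")

# By the same logic, the chance that a reported correlation is random can be
# estimated by how often an equivalent number of random hypotheses reproduces it.
```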
Using generative artificial intelligence, a team of researchers at The University of Texas at Austin has converted sounds from audio recordings into street-view images. The visual accuracy of these generated images demonstrates that machines can reproduce the human ability to connect what a place sounds like with what it looks like. The research team describes training a soundscape-to-image AI model on paired audio and visual data gathered from a variety of urban and rural streetscapes, and then using that model to generate images from audio recordings alone.
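The article does not detail the team's architecture, but the paired-data workflow it describes can be sketched in a hypothetical form: an audio encoder maps a spectrogram to a latent vector, and an image decoder reconstructs the matching street view from that vector. Everything below (layer sizes, 64x64 resolution, the placeholder tensors standing in for real spectrogram/image pairs) is an assumption for illustration, not the published model.

```python
# Hypothetical soundscape-to-image sketch: encode a mel-spectrogram to a latent
# vector, decode the latent vector to an image, and train on paired examples.
import torch
import torch.nn as nn

class AudioEncoder(nn.Module):
    """Encodes a 1 x 64 x 64 spectrogram into a latent vector."""
    def __init__(self, latent_dim=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, 4, stride=2, padding=1), nn.ReLU(),    # 32 x 32
            nn.Conv2d(16, 32, 4, stride=2, padding=1), nn.ReLU(),   # 16 x 16
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),   # 8 x 8
            nn.Flatten(),
            nn.Linear(64 * 8 * 8, latent_dim),
        )
    def forward(self, spec):
        return self.net(spec)

class ImageDecoder(nn.Module):
    """Decodes a latent vector into a 3 x 64 x 64 street-view image."""
    def __init__(self, latent_dim=128):
        super().__init__()
        self.fc = nn.Linear(latent_dim, 64 * 8 * 8)
        self.net = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),   # 16 x 16
            nn.ConvTranspose2d(32, 16, 4, stride=2, padding=1), nn.ReLU(),   # 32 x 32
            nn.ConvTranspose2d(16, 3, 4, stride=2, padding=1), nn.Sigmoid(), # 64 x 64
        )
    def forward(self, z):
        return self.net(self.fc(z).view(-1, 64, 8, 8))

# Placeholder tensors standing in for real spectrogram / street-view pairs.
specs = torch.rand(8, 1, 64, 64)
images = torch.rand(8, 3, 64, 64)

encoder, decoder = AudioEncoder(), ImageDecoder()
optim = torch.optim.Adam(list(encoder.parameters()) + list(decoder.parameters()), lr=1e-3)

for step in range(5):  # toy training loop
    recon = decoder(encoder(specs))
    loss = nn.functional.mse_loss(recon, images)
    optim.zero_grad()
    loss.backward()
    optim.step()
```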
Cold Spring Harbor Laboratory scientists developed an AI algorithm inspired by the genome’s efficiency, achieving remarkable data compression and task performance.
In a sense, each of us begins life ready for action. Many animals perform amazing feats soon after they’re born. Spiders spin webs. Whales swim. But where do these innate abilities come from? Obviously, the brain plays a key role as it contains the trillions of neural connections needed to control complex behaviors.
However, the genome has space for only a small fraction of that information. This paradox has stumped scientists for decades. Now, Cold Spring Harbor Laboratory (CSHL) Professors Anthony Zador and Alexei Koulakov have devised a potential solution using artificial intelligence.
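The blurb does not spell out the algorithm, but the core idea, squeezing a large network's wiring through a much smaller "genome", can be sketched as a hypernetwork-style compression in which a tiny genome network generates the weights of a bigger task network. The sizes, names, and toy classification task below are assumptions for illustration, not the CSHL model.

```python
# Hypothetical "genomic bottleneck" sketch: a small genome network maps each
# (row, column) coordinate of a large weight matrix to a weight value, so only
# the genome's parameters need to be stored, not the full matrix.
import torch
import torch.nn as nn

in_dim, out_dim = 784, 10   # task network: one 784 x 10 weight matrix
genome_dim = 32             # the "genome" is far smaller

genome = nn.Sequential(
    nn.Linear(2, genome_dim), nn.Tanh(),
    nn.Linear(genome_dim, 1),
)

# Coordinates of every entry in the task weight matrix, normalized to [0, 1].
rows = torch.arange(in_dim).repeat_interleave(out_dim) / (in_dim - 1)
cols = torch.arange(out_dim).repeat(in_dim) / (out_dim - 1)
coords = torch.stack([rows, cols], dim=1)           # (784*10, 2)

def generated_weights():
    return genome(coords).view(in_dim, out_dim)     # decompress genome -> weights

# Train the genome so the generated weights solve a toy classification task.
x = torch.randn(256, in_dim)
y = torch.randint(0, out_dim, (256,))
optim = torch.optim.Adam(genome.parameters(), lr=1e-2)
for step in range(100):
    logits = x @ generated_weights()
    loss = nn.functional.cross_entropy(logits, y)
    optim.zero_grad()
    loss.backward()
    optim.step()

full = in_dim * out_dim
compressed = sum(p.numel() for p in genome.parameters())
print(f"task weights: {full}, genome parameters: {compressed}, "
      f"compression ~{full / compressed:.1f}x")
```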
Some researchers propose that advancing AI to the next level will require an internal architecture that more closely mirrors the human mind. Rufin VanRullen joins Brian Greene to discuss early results from one such approach, based on the Global Workspace Theory of consciousness.
This program is part of the Big Ideas series, supported by the John Templeton Foundation.
Participant: Rufin VanRullen. Moderator: Brian Greene.