## Introduction

One thing newcomers to machine learning (ML) and many experienced practitioners often don't realize is that ML doesn't extrapolate. After training an ML model on compounds with µM potency, people frequently ask why none of the molecules they designed were predicted to have nM potency. If you're new to drug discovery, 1 nM = 0.001 µM, and a lower potency value is usually better.

It's important to remember that a model can only predict values within the range of the training set. If we've trained a model on compounds with IC50s between 5 and 100 µM, the model won't be able to predict an IC50 of 0.1 µM. I'd like to illustrate this with a simple example. As always, all the code that accompanies this post is available on GitHub.
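The point can be demonstrated with a toy model. Below is a minimal sketch (not the post's actual code, which is on GitHub) using a one-nearest-neighbor regressor as a stand-in for any model that predicts from memorized training values; the descriptor and IC50 numbers are made up for illustration.

```python
import random

# Hypothetical training data: one descriptor per compound,
# with IC50 values spanning the 5-100 µM range.
random.seed(0)
x_train = [random.uniform(0.0, 1.0) for _ in range(200)]
y_train = [5.0 + 95.0 * x for x in x_train]  # IC50 in µM

def predict_ic50(x):
    """Predict by returning the IC50 of the closest training compound."""
    nearest = min(range(len(x_train)), key=lambda i: abs(x_train[i] - x))
    return y_train[nearest]

# Even for descriptor values far outside the training space, every
# prediction is a value copied from the training set, so nothing can
# ever fall below 5 µM -- the model cannot predict 0.1 µM.
preds = [predict_ic50(x) for x in (-2.0, 0.5, 3.0)]
print(all(5.0 <= p <= 100.0 for p in preds))  # True
```

Tree ensembles such as random forests behave the same way: their predictions are averages of training-set leaf values, so they are likewise bounded by the minimum and maximum of the training labels.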
Tesla is preparing to launch an innovative robo-taxi network in Austin next month, supported by a new affordable Model Y and favorable federal regulations for self-driving vehicles.

## Questions to inspire discussion

### Tesla's Robo-Taxi Network
🚗 Q: When and where is Tesla launching its robo-taxi network? A: Tesla's robo-taxi network is set to launch in Austin, Texas, in June, marking a significant milestone for the company's self-driving technology.
🤖 Q: How will the robo-taxi network impact Tesla's valuation? A: A successful launch could potentially double Tesla's stock valuation to over $1 trillion, validating its unique approach to self-driving vehicles.

### Cost and Production Advantages
💰 Q: How does Tesla's self-driving system compare to competitors in terms of cost? A: Tesla's AI-based self-driving system is significantly cheaper, with a per-mile cost of $0.10 compared to $0.50-$1.00 for human-driven rides offered by competitors like Waymo and Uber.
🏭 Q: What production advantage does Tesla have over competitors? A: Tesla's mass-production capability of 2 million cars per year gives it a significant advantage over competitors like Waymo, which operates with a limited fleet of 1,500 cars.

### Marketing and Revenue Generation
📈 Q: How will the robo taxi network benefit Tesla’s marketing efforts? A: The network will serve as a unique marketing channel, allowing customers to experience self-driving rides firsthand, making it easier for Tesla to sell its cars and reach scale.
Eyes may be the window to the soul, but a person’s biological age could be reflected in their facial characteristics. Investigators from Mass General Brigham developed a deep learning algorithm called “FaceAge” that uses a photo of a person’s face to predict biological age and survival outcomes for patients with cancer.
They found that patients with cancer, on average, had a higher FaceAge than those without and appeared about five years older than their chronological age.
Older FaceAge predictions were associated with worse overall survival outcomes across multiple cancer types. They also found that FaceAge outperformed clinicians in predicting short-term life expectancies of patients receiving palliative radiotherapy.
A combined team of roboticists from the CREATE Lab at EPFL and Nestlé Research Lausanne, both in Switzerland, has developed a soft robot designed to mimic human infant motor development and the way infants feed.
In their paper published in the journal npj Robotics, the group describes how they used a variety of techniques to give their robot the ability to simulate the way human infants feed, from birth until approximately six months old.
Prior research has shown that it is difficult to develop invasive medical procedures for infants due to the lack of usable test subjects. Methods currently in use, such as simulations, observational instruments, and imaging, tend to fall short because they differ in important ways from real human infants. To overcome these problems, the team in Switzerland designed, built, and tested a soft robotic infant that can be used for such purposes.
What happens when AI starts improving itself without human input? Self-improving AI agents are evolving faster than anyone predicted—rewriting their own code, learning from mistakes, and inching closer to surpassing giants like OpenAI. This isn’t science fiction; it’s the AI singularity’s opening act, and the stakes couldn’t be higher.
How do self-improving agents work? Unlike static models such as GPT-4, these systems use recursive self-improvement—analyzing their flaws, generating smarter algorithms, and iterating endlessly. Projects like AutoGPT and BabyAGI already demonstrate eerie autonomy, from debugging code to launching micro-businesses. We’ll dissect their architecture and compare them to OpenAI’s human-dependent models. Spoiler: The gap is narrowing fast.
Why is OpenAI sweating? While OpenAI focuses on safety and scalability, self-improving agents prioritize raw, exponential growth. Imagine an AI that optimizes itself 24/7, mastering quantum computing over a weekend or cracking protein folding in hours. But there’s a dark side: no “off switch,” biased self-modifications, and the risk of uncontrolled superintelligence.
Who will dominate the AI race? We’ll explore leaked research, ethical debates, and the critical question: Can OpenAI’s cautious approach outpace agents that learn to outthink their creators? Like, subscribe, and hit the bell—the future of AI is rewriting itself.
Can self-improving AI surpass OpenAI? What are autonomous AI agents? How dangerous is recursive AI? Will AI become uncontrollable? Can we stop self-improving AI? This video exposes the truth. Watch now—before the machines outpace us.
A selfie can be used as a tool to help doctors determine a patient’s “biological age” and judge how well they may respond to cancer treatment, a new study suggests.
Because humans age at “different rates” their physical appearance may help give insights into their so-called “biological age” – how old a person is physiologically, academics said.
The FaceAge AI tool can estimate a person's biological age, as opposed to their chronological age, by scanning an image of their face, the study found.