
Code Unto Caesar

Durendal’s algorithm wrote scripture about three topics: “the plague,” “Caesar,” and “the end of days.” So it’s not surprising that things took a grim turn. The output is riddled with glitches characteristic of AI-written text, like excerpts where over half of the nouns are “Lord.” But some passages are more coherent and read like bizarre doomsday prophecies.

For example, from the plague section: “O LORD of hosts, the God of Israel; When they saw the angel of the Lord above all the brethren which were in the wilderness, and the soldiers of the prophets shall be ashamed of men.”

Whole-body positron emission tomography combined with computed tomography (PET/CT) is a cornerstone in the management of lymphoma (cancer in the lymphatic system). PET/CT scans are used to diagnose disease and then to monitor how well patients respond to therapy. However, accurately classifying every single lymph node in a scan as healthy or cancerous is a complex and time-consuming process. Because of this, detailed quantitative treatment monitoring is often not feasible in clinical day-to-day practice.

Researchers at the University of Wisconsin-Madison have recently developed a deep-learning model that can perform this task automatically. This could free up valuable physician time and make quantitative PET/CT treatment monitoring possible for a larger number of patients.
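The article doesn’t describe the Wisconsin model’s architecture, so the following is only a minimal sketch of how such a lymph-node classifier could be structured: a small 3D convolutional network that takes a PET/CT patch cropped around a node and labels it healthy or cancerous. All names, layer sizes, and the two-channel input are assumptions for illustration, not the published model.

```python
# Hypothetical sketch: a tiny 3D CNN that labels a cropped PET/CT patch
# around a lymph node as healthy (0) or cancerous (1). Not the published model.
import torch
import torch.nn as nn

class LymphNodePatchClassifier(nn.Module):
    def __init__(self, in_channels=2):  # assume 2 channels: PET and CT volumes
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(in_channels, 16, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool3d(2),
            nn.Conv3d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool3d(1),     # global pooling over the 3D patch
        )
        self.classifier = nn.Linear(32, 2)  # healthy vs. cancerous

    def forward(self, x):                   # x: (batch, 2, D, H, W) patch
        h = self.features(x).flatten(1)
        return self.classifier(h)

# Example: classify one 32x32x32 patch (random data stands in for a real scan)
model = LymphNodePatchClassifier()
patch = torch.randn(1, 2, 32, 32, 32)
print(model(patch).softmax(dim=1))  # predicted probabilities for the two classes
```

In practice such a classifier would sit downstream of a detection or segmentation step that finds candidate nodes in the whole-body scan; the sketch only covers the per-node labeling.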

To acquire PET/CT scans, patients are injected with a sugar molecule marked with radioactive fluorine-18 (¹⁸F-fluorodeoxyglucose). When the fluorine atom decays, it emits a positron that instantly annihilates with an electron in its immediate vicinity. This annihilation process emits two back-to-back photons, which the scanner detects and uses to infer the location of the radioactive decay.
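To make that geometry concrete, here is a toy calculation (not from the article) of where an annihilation occurred: because the two photons fly in opposite directions, the decay lies on the line between the two detectors that fire in coincidence, and a time-of-flight difference pins down where along that line. Detector positions and timings below are made up.

```python
# Toy illustration of PET localization along a line of response.
import numpy as np

C = 299_792_458.0  # speed of light in m/s

def annihilation_point(det_a, det_b, dt):
    """det_a, det_b: 3D detector positions (m); dt = t_a - t_b in seconds."""
    det_a, det_b = np.asarray(det_a, float), np.asarray(det_b, float)
    midpoint = (det_a + det_b) / 2
    direction = (det_b - det_a) / np.linalg.norm(det_b - det_a)
    # If the photon reached detector A later (dt > 0), the decay was closer to B.
    offset = C * dt / 2
    return midpoint + offset * direction

# Photons detected 200 picoseconds apart on a 0.8 m wide detector ring:
print(annihilation_point([-0.4, 0.0, 0.0], [0.4, 0.0, 0.0], 200e-12))
# -> about 3 cm from the center, toward detector B
```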

Artificial intelligence is being developed that can analyze whether its own decisions or predictions are reliable.

…An AI that is aware of, and can analyze, its own weaknesses. Basically, it should help doctors, or the passengers of an AI-driven vehicle, quickly understand the risk involved.


How might The Terminator have played out if Skynet had decided it probably wasn’t responsible enough to hold the keys to the entire US nuclear arsenal? As it turns out, scientists may just have saved us from such a future AI-led apocalypse, by creating neural networks that know when they’re untrustworthy.

These deep learning neural networks are designed to mimic the human brain by weighing up a multitude of factors in balance with each other, spotting patterns in masses of data that humans don’t have the capacity to analyse.

While Skynet might still be some way off, AI is already making decisions in fields that affect human lives, like autonomous driving and medical diagnosis, and that means it’s vital that those decisions are as accurate as possible. To help towards this goal, the newly created neural network system can generate a confidence level alongside each of its predictions.
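The article doesn’t spell out how that confidence is produced (the researchers’ actual approach is a more elaborate “evidential” technique), but a common, simple way to get a prediction plus a self-reported uncertainty from one network is to have it output a mean and a variance and train it with a Gaussian negative log-likelihood. A minimal, purely illustrative sketch under that assumption:

```python
# Illustrative only: a regression network that outputs both a prediction (mean)
# and its own estimated uncertainty (variance), trained with a Gaussian NLL.
import torch
import torch.nn as nn

class UncertaintyAwareNet(nn.Module):
    def __init__(self, in_dim=8, hidden=64):
        super().__init__()
        self.body = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU())
        self.mean_head = nn.Linear(hidden, 1)
        self.logvar_head = nn.Linear(hidden, 1)  # log-variance, for numerical stability

    def forward(self, x):
        h = self.body(x)
        return self.mean_head(h), self.logvar_head(h)

def gaussian_nll(mean, logvar, target):
    # Penalizes wrong predictions more heavily when the network claims low uncertainty.
    return (0.5 * (logvar + (target - mean) ** 2 / logvar.exp())).mean()

# One dummy training step on random data, just to show the mechanics.
model = UncertaintyAwareNet()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
x, y = torch.randn(32, 8), torch.randn(32, 1)
mean, logvar = model(x)
loss = gaussian_nll(mean, logvar, y)
loss.backward()
opt.step()
print(float(loss), float(logvar.exp().mean()))  # loss and average predicted variance
```

A downstream system (a car, a diagnostic tool) can then treat high predicted variance as a signal to defer to a human rather than act on the prediction.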

Scholars have a nifty way of alerting colleagues to lengthy treatises that they find simply not worth their time to read.

They tag such documents “tl;dr”—too long, didn’t read.

It’s kind of a 21st century spin on the 420-year-old notion Shakespeare’s Polonius relayed to the king and queen in “Hamlet”: “Brevity,” he suggested, “is the soul of wit.”

Another great advantage is the ability to incorporate AI at early stages of image acquisition. Among other things, this enables us to reduce the amount of radiation needed to acquire a high-resolution CT or shorten the duration needed for an MRI scan. And this leads to patient welfare improvements as well as healthcare cost reductions.
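As a concrete, purely illustrative example of what “AI at the acquisition stage” can mean: a denoising network can be trained to map noisy low-dose images to full-dose quality, so that each scan needs less radiation. The sketch below uses synthetic data and a toy architecture; it is an assumption about one possible approach, not any particular vendor’s method.

```python
# Hypothetical sketch: a tiny convolutional denoiser trained to map noisy
# low-dose CT slices to their full-dose counterparts. Data here is synthetic.
import torch
import torch.nn as nn

denoiser = nn.Sequential(
    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 1, 3, padding=1),      # predicted "clean" slice
)

opt = torch.optim.Adam(denoiser.parameters(), lr=1e-3)
clean = torch.rand(8, 1, 64, 64)                 # stand-in for full-dose slices
noisy = clean + 0.1 * torch.randn_like(clean)    # simulated low-dose noise

for _ in range(5):                               # a few illustrative training steps
    opt.zero_grad()
    loss = nn.functional.mse_loss(denoiser(noisy), clean)
    loss.backward()
    opt.step()
print(float(loss))
```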

AI applications

In recent years there has been tremendous work in this field, focusing mainly on cardiovascular disease, ophthalmology, neurology, and cancer detection.

Consumer drones have over the years struggled with an image of being no more than expensive and delicate toys. But applications in industrial, military and enterprise scenarios have shown that there is indeed a market for unmanned aerial vehicles, and today, a startup that makes drones for some of those latter purposes is announcing a large round of funding and a partnership that provides a picture of how the drone industry will look in years to come.

Percepto, which makes drones — both the hardware and software — to monitor and analyze industrial sites and other physical work areas largely unattended by people, has raised $45 million in a Series B round of funding.

Alongside this, it is now working with Boston Dynamics and has integrated its Spot robots with Percepto’s Sparrow drones, with the aim of better infrastructure assessments, and potentially more as Spot’s agility improves.

It’s pretty easy to dismiss the capabilities of Tesla’s Autopilot and Full Self-Driving beta. A look at Autopilot’s ranking from Consumer Reports alone would suggest that Tesla’s driver-assist system is pretty average at best, and that solutions like GM’s Super Cruise are far more advanced and capable.

With this in mind, the narrative surrounding Tesla’s self-driving efforts largely suggests that the company’s driver-assist systems, while advanced, are years away from being a capable autonomous driving solution, and that by the time Tesla achieves autonomy, dedicated self-driving companies like Waymo and Cruise will be far ahead.

These preconceptions about Autopilot and the Full Self-Driving suite, however, are a bit questionable, especially if one considers the capabilities of the FSD beta today, which is currently being tested by a select group of Tesla owners. Tesla owner and YouTube host Dan Markham of the What’s Inside? Family channel recently experienced this when he took a drive in a Model S equipped with the FSD beta.

My most recent article, published on my LinkedIn profile. Opinions and thoughts are welcome.


Elon Musk has been warning for years of the risks that the progress in AI can pose to humanity. Long story short, his position is that AI, once it eventually becomes AGI, is going to be so advanced that it will make humans irrelevant.

In order to prevent this from happening, Elon Musk argues that a symbiosis between the human mind and AI is necessary, so that a sort of “brain-computer interface,” or BCI, allows humans to communicate directly with the cloud and to process information at the speed at which things are done in the cloud. It would also allow us to expand, almost without limit, the scarce memory our brains are capable of holding.

Elon argues that our interface with mobile phones and PCs, because it requires the use of fingers, is painfully slow and inefficient. Even if voice commands were much better than they are today, they would still be cumbersome compared with being able to interact directly through our thoughts.