
(2021). Nuclear Technology, Vol. 207, No. 8, pp. 1163–1181.


Focusing on nuclear engineering applications, the nation’s leading cybersecurity programs are developing digital solutions to support reactor control for both on-site and remote operation. Many of the advanced reactor technologies currently under development by the nuclear industry, such as small modular reactors and microreactors, require secure architectures for instrumentation, control, modeling, and simulation in order to meet their goals.1 Thus, there is a strong need to develop communication solutions that enable the secure function of advanced control strategies and allow for an expanded use of data in operational decision making. This is important not only to avoid malicious attack scenarios focused on inflicting physical damage but also to thwart covert attacks designed to introduce minor process manipulation for economic gain.2

These high-level goals necessitate many important functionalities: measures of the trustworthiness of code and simulation results against unauthorized access; measures of scientific confidence in simulation results, achieved by carefully propagating and identifying dominant sources of uncertainty and by early detection of software crashes; and strategies to minimize computational resources in terms of memory usage, storage requirements, and CPU time. Under these functionalities, computers remain subservient to their programmers: the existing predictive modeling philosophy has generally relied on the programmer’s ability to detect intrusion via specific instructions that tell the computer how to do so, to keep log files that track code changes, to limit access via perimeter defenses that block unauthorized users, and so on.
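The programmer-driven approach described above, tracking code changes through logs and checksums, can be pictured with a minimal sketch. This is an illustrative checksum audit only, not the paper's method; the file names and helper functions are hypothetical:

```python
import hashlib

def fingerprint(data: bytes) -> str:
    """Return a SHA-256 hex digest used as a tamper-evidence fingerprint."""
    return hashlib.sha256(data).hexdigest()

def detect_changes(baseline: dict, current: dict) -> list:
    """Compare stored fingerprints against freshly computed ones.

    Both arguments map file names to fingerprints; any mismatch or
    missing entry is flagged as a potential unauthorized change.
    """
    flagged = []
    for name, digest in baseline.items():
        if current.get(name) != digest:
            flagged.append(name)
    return sorted(flagged)
```

The limitation the paper points at is visible here: the audit only catches what the programmer thought to fingerprint, and an attacker who can rewrite the baseline defeats it entirely.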

The last decade has witnessed a huge and impressive development of artificial intelligence (AI) algorithms in many scientific disciplines, which has prompted many computational scientists to explore how they can be embedded into predictive modeling applications. The reality, however, is that AI, premised since its inception on emulating human intelligence, is still very far from realizing its goal. Any human-emulating intelligence must be able to achieve two key tasks: the ability to store experiences and the ability to recall and process these experiences at will. Many of the existing AI advances have primarily focused on the latter goal and have accomplished efficient and intelligent data processing. Researchers on adversarial AI have shown over the past decade that any AI technique can be misled if presented with the wrong data.3 Hence, this paper focuses on introducing a novel predictive paradigm, referred to as covert cognizance, or C2 for short, designed to enable predictive models to develop a secure, incorruptible memory of their execution, representing the first key requirement for a human-emulating intelligence. This memory, or self-cognizance, is key for a predictive model to be effective and resilient in both adversarial and nonadversarial settings. In our context, “memory” does not mean the dynamic or static memory allocated for a software execution; instead, it is a collective record of all of the model’s execution characteristics, including run-time information, the output generated in each run, the local variables rendered by each subroutine, etc.
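One way to picture a tamper-evident record of execution characteristics is a hash chain, in which each run's entry commits to everything recorded before it. This is a minimal illustrative sketch, not the C2 paradigm itself, whose actual mechanism the excerpt does not describe:

```python
import hashlib
import json

class ExecutionLedger:
    """Append-only record of run characteristics. Each entry's hash
    covers the previous entry's hash, so altering any past record
    invalidates every hash that follows it."""

    GENESIS = "0" * 64

    def __init__(self):
        self.entries = []

    def record(self, run_info: dict) -> str:
        """Append one run's characteristics (outputs, timings, etc.)."""
        prev = self.entries[-1]["hash"] if self.entries else self.GENESIS
        payload = json.dumps(run_info, sort_keys=True)
        digest = hashlib.sha256((prev + payload).encode()).hexdigest()
        self.entries.append({"info": run_info, "hash": digest})
        return digest

    def verify(self) -> bool:
        """Recompute the chain; any tampered entry breaks it."""
        prev = self.GENESIS
        for entry in self.entries:
            payload = json.dumps(entry["info"], sort_keys=True)
            if hashlib.sha256((prev + payload).encode()).hexdigest() != entry["hash"]:
                return False
            prev = entry["hash"]
        return True
```

Unlike a plain log file, an attacker who edits one run record here must also rewrite every subsequent hash to avoid detection.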

In 2021, Instagram is the most popular social media platform. Recent statistics show that the platform now boasts over 1 billion monthly active users. With this many eyes on their content, influencers with a large enough following can reap great rewards through sponsored posts. The question for today then becomes: How do we effectively grow our Instagram account in the age of algorithmic bias? Instagram expert and AI growth specialist Faisal Shafique helps us answer this question, drawing on his experience growing his @fact account to about 8M followers while also helping major, edgy brands like Fashion Nova grow to over 20M.


Duke professor becomes second recipient of AAAI Squirrel AI Award for pioneering socially responsible AI.

Whether preventing explosions on electrical grids, spotting patterns among past crimes, or optimizing resources in the care of critically ill patients, Duke University computer scientist Cynthia Rudin wants artificial intelligence (AI) to show its work. Especially when it’s making decisions that deeply affect people’s lives.

While many scholars in the developing field of machine learning were focused on improving algorithms, Rudin instead wanted to use AI’s power to help society. She chose to pursue opportunities to apply machine learning techniques to important societal problems, and in the process, realized that AI’s potential is best unlocked when humans can peer inside and understand what it is doing.

At the outbreak of World War I, the French army was mobilized in the fashion of Napoleonic times. On horseback and equipped with swords, the cuirassiers wore bright tricolor uniforms topped with feathers—the same get-up as when they swept through Europe a hundred years earlier. The remainder of 1914 would humble tradition-minded militarists. Vast fields were filled with trenches, barbed wire, poison gas and machine gun fire—plunging the ill-equipped soldiers into a violent hellscape of industrial-scale slaughter.

Capitalism excels at revolutionizing war. Only three decades after the first bayonet charges of World War I across no man’s land, the US was able to incinerate entire cities with a single (nuclear) bomb blast. And since the destruction of Hiroshima and Nagasaki in 1945, our rulers’ methods of war have been made yet more deadly and “efficient”.

Today imperialist competition is driving a renewed arms race, as rival global powers invent new and technically more complex ways to kill. Increasingly, governments and military authorities are focusing their attention not on new weapons per se, but on computer technologies that can enhance existing military arsenals and capabilities. Above all is the race to master so-called artificial intelligence (AI).

In this post I outline my journey creating a dynamic NFT on the Ethereum blockchain with IPFS and discuss the possible use cases for scientific data. I do not cover algorithmic generation of static images (you should read Albert Sanchez Lafuente’s neat step-by-step for that) but instead demonstrate how I used Cytoscape.js, Anime.js and genomic feature data to dynamically generate visualizations/art at run time when NFTs are viewed from a browser. I will also not be providing an overview of Blockchain but I highly recommend reading Yifei Huang’s recent post: Why every data scientist should pay attention to crypto.

While stuck at home during the pandemic, I was one of the 10 million who tried their hand at gardening, on our little apartment balcony in Brooklyn. The Japanese cucumbers were a hit with our neighbors and the tomatoes were a hit with the squirrels, but it was the peppers I enjoyed watching grow the most. This is what set the objective for my first NFT: create a depiction of a pepper that ripens over time.

How much of the depiction is visualization and how much is art? Well that’s in the eye of the beholder. When you spend your days scrutinizing data points, worshiping best practices and optimizing everything from memory usage to lunch orders it’s nice to take some artistic license and make something just because you like it, which is exactly what I’ve done here. The depiction is authentically generated from genomic data features but obviously this should not be viewed as any kind of serious biological analysis.
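The "ripens over time" effect boils down to interpolating a color as a function of elapsed time. The post does this client-side with Cytoscape.js and Anime.js; here is a language-agnostic toy version of the same idea in Python, with made-up green and red RGB values:

```python
def ripeness_color(days_elapsed: float, ripen_days: float = 30.0) -> tuple:
    """Linearly blend from green (unripe) to red (ripe) as time passes.

    A toy stand-in for the post's browser-side animation; the RGB
    endpoints and 30-day ripening window are illustrative choices.
    """
    t = max(0.0, min(1.0, days_elapsed / ripen_days))  # clamp to [0, 1]
    green, red = (34, 139, 34), (178, 34, 34)
    return tuple(round(g + t * (r - g)) for g, r in zip(green, red))
```

In the NFT itself the same interpolation would run at view time, so the rendered pepper depends on when the token is opened rather than on any stored image.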

According to this guy, the argument will be that the AI is needed to make split second decisions, and will gradually increase from there.


Retired U.S. Army General Stanley McChrystal joins ‘Influencers with Andy Serwer’ to share his biggest fears regarding artificial intelligence.

ANDY SERWER: I want to ask you about AI, artificial intelligence, because you wrote, “ceding the ability to manage relationships to an algorithm, we rolled a dangerous die.” What are the specific uses of AI that concern you and then we can talk about AI weapons and that’s really scary stuff. But let’s talk about it generally and then specifically with regard to the military.

STANLEY MCCHRYSTAL: Let’s start with something we all get. We call company X and we get this recording that says if you’re calling about so-and-so, hit one. If you’re calling about so-and-so, hit two. And you go for a while, and by the time you get to eight and they didn’t cover your problem, you’re furious. And you just want to talk to someone. You want somebody to take your problem for you.

Using AI to analyze your income and expenses regularly is a great way to help you better understand where your money goes each month. Most modern financial institutions have apps that will automatically categorize your spending into expense types, making it easy for you to see how much of your paycheck ends up going toward rent/mortgage, food, transportation, entertainment, etc.
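The automatic categorization these apps perform can be sketched in miniature. Real banking apps use trained classifiers over merchant codes and descriptions; this illustrative version uses hypothetical keyword rules:

```python
# Hypothetical keyword-to-category rules; real apps learn these mappings.
RULES = {
    "rent": "housing", "mortgage": "housing",
    "grocer": "food", "restaurant": "food",
    "uber": "transportation", "metro": "transportation",
    "netflix": "entertainment", "cinema": "entertainment",
}

def categorize(description: str) -> str:
    """Map a transaction description to an expense type."""
    desc = description.lower()
    for keyword, category in RULES.items():
        if keyword in desc:
            return category
    return "other"

def monthly_summary(transactions) -> dict:
    """Total spending per category; transactions are (description, amount) pairs."""
    totals = {}
    for desc, amount in transactions:
        cat = categorize(desc)
        totals[cat] = totals.get(cat, 0.0) + amount
    return totals
```

The output is exactly the kind of per-category breakdown the paragraph describes: how much of a paycheck went to housing, food, transportation, and so on.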

Technology is empowering women to build wealth through AI-assisted financial management. Women can now invest and manage their finances using software that automatically invests and manages money for them, tailoring its algorithm to each woman’s personalized goals, risk tolerance, income, and age.
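Personalizing by age and risk tolerance often starts from simple heuristics. The sketch below uses the classic "110 minus age" rule of thumb, adjusted by risk tolerance; it is purely illustrative and not any product's actual algorithm, nor financial advice:

```python
def target_stock_allocation(age: int, risk_tolerance: str) -> float:
    """Percentage of a portfolio to hold in stocks (remainder in bonds).

    Starts from the '110 minus age' heuristic, then shifts by risk
    tolerance. Both the heuristic and the shift sizes are assumptions
    for illustration only.
    """
    base = max(0, min(100, 110 - age))
    shift = {"low": -10, "medium": 0, "high": 10}[risk_tolerance]
    return float(max(0, min(100, base + shift)))
```

A real robo-advisor would fold in income, goals, and time horizon as the paragraph notes, but the structure, a rule mapping personal attributes to an allocation, is the same.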



Women in America are disproportionately under-served when it comes to financial products and services. They own less than 1% of the country’s wealth, and hold even fewer assets in their own names.

International diplomacy has traditionally relied on bargaining power, covert channels of communication, and personal chemistry between leaders. But a new era is upon us in which the dispassionate insights of AI algorithms and mathematical techniques such as game theory will play a growing role in deals struck between nations, according to the co-founder of the world’s first center for science in diplomacy.

Michael Ambühl, a professor of negotiation and conflict management and former chief Swiss-EU negotiator, said recent advances in AI and machine learning mean that these technologies now have a meaningful part to play in international diplomacy, including at the Cop26 summit starting later this month and in post-Brexit deals on trade and immigration.

Deepfake videos are well-known; many examples of what only appear to be celebrities can be seen regularly on YouTube. But while such videos have grown lifelike and convincing, one area where they fail is in reproducing a person’s voice. In this new effort, the team at UoC found evidence that the technology has advanced. They tested two of the most well-known voice copying algorithms against both human and voice recognition devices and found that the algorithms have improved to the point that they are now able to fool both.

The two algorithms, SV2TTS and AutoVC, were tested using samples of voice recordings obtained from publicly available databases. Both systems were trained using 90 five-minute voice snippets of people talking. The researchers also enlisted the assistance of 14 volunteers who provided voice samples and access to their voice recognition devices. They then tested the two systems using the open-source software Resemblyzer, which listens to and compares voice recordings and gives a rating based on how similar the two samples are. They also tested the algorithms by using them to attempt to access services on voice recognition devices.
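Tools like Resemblyzer reduce each recording to a fixed-length speaker embedding and compare embeddings. The sketch below shows that comparison step only, cosine similarity plus a threshold, with NumPy; the embeddings are assumed to have been extracted upstream, and the 0.75 threshold is an example value, not Resemblyzer's actual setting:

```python
import numpy as np

def similarity(emb_a, emb_b) -> float:
    """Cosine similarity between two speaker embeddings, in [-1, 1]."""
    a = np.asarray(emb_a, dtype=float)
    b = np.asarray(emb_b, dtype=float)
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def same_speaker(emb_a, emb_b, threshold: float = 0.75) -> bool:
    """Decide whether two embeddings belong to the same speaker.

    The threshold is an assumed illustrative value; deployed systems
    tune it against a labeled verification set.
    """
    return similarity(emb_a, emb_b) >= threshold
```

A cloned voice "fools" such a system when the embedding of the synthetic audio lands above the threshold against the target speaker's enrolled embedding, which is exactly what the study measured.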

The researchers found the algorithms were able to fool Resemblyzer nearly half of the time. They were also able to fool Azure (Microsoft’s cloud computing service) approximately 30 percent of the time, and Amazon’s Alexa voice recognition system approximately 62 percent of the time.