
(2021). Nuclear Technology, Vol. 207, No. 8, pp. 1163–1181.


In nuclear engineering applications, the nation’s leading cybersecurity programs are focused on developing digital solutions to support reactor control for both on-site and remote operation. Many of the advanced reactor technologies currently under development by the nuclear industry, such as small modular reactors and microreactors, require secure architectures for instrumentation, control, modeling, and simulation in order to meet their goals. 1 Thus, there is a strong need to develop communication solutions that enable the secure operation of advanced control strategies and allow expanded use of data for operational decision making. This is important not only to defend against malicious attacks aimed at inflicting physical damage but also against covert attacks designed to introduce minor process manipulation for economic gain. 2

These high-level goals necessitate many important functionalities, e.g., developing measures of trustworthiness of the code and simulation results against unauthorized access; developing measures of scientific confidence in the simulation results by carefully propagating and identifying dominant sources of uncertainty and by detecting software crashes early; and developing strategies to minimize computational resources in terms of memory usage, storage requirements, and CPU time. Even with these functionalities in place, the computer remains a subservient agent of the programmer. The existing predictive modeling philosophy has generally relied on the programmer’s ability to detect intrusion by giving the computer explicit instructions on how to do so, keeping log files to track code changes, limiting access via perimeter defenses to prevent unauthorized access, and so on.

The last decade has witnessed a huge and impressive development of artificial intelligence (AI) algorithms in many scientific disciplines, which has prompted many computational scientists to explore how they can be embedded into predictive modeling applications. The reality, however, is that AI, premised since its inception on emulating human intelligence, is still very far from realizing that goal. Any human-emulating intelligence must be able to achieve two key tasks: the ability to store experiences and the ability to recall and process these experiences at will. Many of the existing AI advances have primarily focused on the latter goal and have accomplished efficient and intelligent data processing. Researchers in adversarial AI have shown over the past decade that any AI technique can be misled if presented with the wrong data. 3 Hence, this paper focuses on introducing a novel predictive paradigm, referred to as covert cognizance, or C2 for short, designed to enable predictive models to develop a secure, incorruptible memory of their execution, representing the first key requirement for a human-emulating intelligence. This memory, or self-cognizance, is key for a predictive model to be effective and resilient in both adversarial and nonadversarial settings. In our context, “memory” does not refer to the dynamic or static memory allocated for software execution; instead, it is a collective record of all execution characteristics, including run-time information, the output generated in each run, the local variables rendered by each subroutine, etc.
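To make the notion of an execution record concrete, the sketch below (our own illustration, not the C2 mechanism described later, which hides such information in the system’s nonobservable space) logs each run’s inputs, outputs, and selected local variables in a hash chain so that tampering with any recorded run is detectable:

```python
import hashlib
import json

class ExecutionRecord:
    """Hypothetical sketch: a hash-chained record of a model's execution history."""

    def __init__(self):
        self.entries = []
        self.last_digest = "0" * 64  # genesis value

    def record_run(self, inputs, outputs, locals_snapshot):
        """Append one run's characteristics, chained to the previous digest."""
        payload = json.dumps(
            {"inputs": inputs, "outputs": outputs, "locals": locals_snapshot},
            sort_keys=True,
        )
        digest = hashlib.sha256((self.last_digest + payload).encode()).hexdigest()
        self.entries.append({"payload": payload, "digest": digest})
        self.last_digest = digest

    def verify(self):
        """Recompute the chain; altering any entry breaks every later digest."""
        prev = "0" * 64
        for entry in self.entries:
            expected = hashlib.sha256((prev + entry["payload"]).encode()).hexdigest()
            if expected != entry["digest"]:
                return False
            prev = expected
        return True

# Example usage with invented values
record = ExecutionRecord()
record.record_run({"power": 0.95}, {"T_exit": 583.2}, {"iterations": 14})
record.record_run({"power": 1.00}, {"T_exit": 588.7}, {"iterations": 12})
print(record.verify())  # True unless an entry has been altered
```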

(2021). Nuclear Science and Engineering, Vol. 195, No. 9, pp. 977–989.


Earlier work has demonstrated the theoretical development of covert OT defenses and their application to representative control problems in a nuclear reactor. Exploiting the ability to store information in the system’s nonobservable space using one-time-pad randomization techniques, the new C2 modeling paradigm 6 has emerged, allowing the system to build a memory, or self-awareness, of its past and current state. The idea is to use randomized mathematical operators to store information about one system subcomponent, e.g., the reactor core inlet and exit temperatures, in the nonobservable space of another subcomponent, e.g., the water level in a steam generator, creating an incorruptible record of the system state. If attackers attempt to falsify the sensor data in order to send the system along an undesirable trajectory, they will have to learn all the inserted signatures across the various system subcomponents as well as the C2 embedding process.
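A minimal linear-algebra sketch of this idea follows, assuming a linearized plant model with output map y = Cx; the matrix, state values, and masking step are invented for illustration and are not the paper’s embedding operators. Any component added along the null space of C never appears in the sensor readings, which is the nonobservable space used to hide the randomized signature.

```python
import numpy as np
from scipy.linalg import null_space

rng = np.random.default_rng()

# Hypothetical linearized plant: 4 internal states, 2 measured outputs (y = C @ x).
C = np.array([[1.0, 0.5, 0.0, 0.0],
              [0.0, 0.0, 1.0, 0.2]])
x = np.array([550.0, 12.0, 3.1, 0.8])        # nominal internal state (invented values)

# Orthonormal basis for the nonobservable (null) space of C:
# perturbations along these directions never show up in the sensor readings.
N = null_space(C)                             # shape (4, 2) for this C

# "Signature" about another subcomponent (e.g., core inlet/exit temperatures),
# masked with a fresh random key in the spirit of a one-time pad.
info = np.array([583.2, 565.4])
key = rng.standard_normal(info.shape)
masked = 1e-3 * info + key                    # illustrative masking, not the paper's operator

# Embed the masked signature along the nonobservable directions.
x_embedded = x + N @ masked

# Zero impact / zero observability: the measured outputs are unchanged.
print(np.allclose(C @ x, C @ x_embedded))     # True
```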

We posit that this is extremely unlikely given the huge size of the nonobservable space for most complex systems and the use of randomized techniques for signature insertion, rendering a level of security that matches the Vernam cipher gold standard. The Vernam cipher, commonly known as a one-time pad, encrypts a message with a random key (pad), and the message can only be recovered using that key. Its strength derives from Shannon’s notion of perfect secrecy 8 and requires the key to be truly random and nonreusable (one time). To demonstrate this, the paper validates the C2 implementation using sophisticated AI tools, namely long short-term memory (LSTM) neural networks 9 and the generative adversarial network (GAN) framework, 10 both in a supervised learning setting, i.e., by assuming that the AI training phase can distinguish between the original data and the data containing the embedded signatures. While this is an unlikely scenario, it is assumed in order to demonstrate the resilience of the C2 signatures to discovery by AI techniques.
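For concreteness, the one-time-pad principle invoked here can be illustrated in a few lines (a generic XOR pad, not the paper’s randomized embedding operators): as long as the key is truly random, as long as the message, and never reused, the ciphertext reveals nothing about the message.

```python
import secrets

def otp_encrypt(message: bytes, key: bytes) -> bytes:
    """Vernam cipher: XOR each message byte with a truly random, single-use key byte."""
    assert len(key) == len(message), "one-time pad key must be as long as the message"
    return bytes(m ^ k for m, k in zip(message, key))

message = b"core exit temperature 583.2 K"    # invented example message
key = secrets.token_bytes(len(message))       # fresh random pad, never reused

ciphertext = otp_encrypt(message, key)
recovered = otp_encrypt(ciphertext, key)      # XOR with the same pad decrypts
print(recovered == message)                   # True
```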

The paper is organized as follows. Section II provides a brief summary of existing passive and active OT defenses against various types of data deception attacks, followed by an overview of the C2 modeling paradigm in Sec. III. Section IV formulates the problem statement of the C2 implementation in a generalized control system and identifies the key criteria of zero impact and zero observability. Section V implements a rendition of the C2 approach in a representative nuclear reactor model and highlights the goal of the paper, i.e., to validate the implementation using sophisticated AI tools. It also provides a rationale behind the chosen AI framework. Last, Sec. VI summarizes the validation results of the C2 implementation and discusses several extensions to the work.

Retail giant Amazon has pioneered the idea of automated shopping, as seen with its Amazon Go store format. The first of these launched in January 2018 in downtown Seattle and nearly 30 others have opened since. The concept is now catching on with other companies – including Tesco, the UK’s biggest supermarket and third-largest retailer in the world measured by gross revenues. It has just launched its own automated store in central London.

The rollout of this technology at Tesco Express High Holborn follows a successful trial in Welwyn Garden City, a town north of London. The High Holborn branch has already been a cashless store since it first opened in 2018 and is now checkout-less too.

The newly developed system – called “GetGo” – offers the same products but with a faster and more convenient shopping experience. A customer simply downloads the mobile app, scans the QR code generated on their screen, picks up the groceries they need and then leaves the store.

SambaNova Systems, a company that builds advanced software, hardware, and services to run AI applications, announced the addition of the Generative Pre-trained Transformer (GPT) language model to its Dataflow-as-a-Service™ offering. This will enable greater enterprise adoption of AI, allowing organizations to launch a customized language model in less than one month, compared with the nine months to a year it would otherwise take.

“Customers face many challenges with implementing large language models, including the complexity and cost,” said R “Ray” Wang, founder and principal analyst of Constellation Research. “Leading companies seek to make AI more accessible by bringing unique large language model capabilities and automating out the need for expertise in ML models and infrastructure.”

Cybereason, a cybersecurity company based in Tel Aviv and Boston, Massachusetts, that provides endpoint prevention, detection, and response, has secured a $50 million investment from Google Cloud, VentureBeat has learned. The investment extends the series F round that Cybereason announced in July from $275 million to $325 million, making Cybereason one of the best-funded startups in the cybersecurity industry, with over $713 million in total capital raised.

We reached out to a Google Cloud spokesperson, but they didn’t respond by press time.

The infusion of cash comes after Cybereason and Google Cloud entered into a strategic partnership to bring to market a platform — Cybereason XDR, powered by Chronicle — that can ingest and analyze “petabyte-scale” telemetry from endpoints, networks, containers, apps, profiles, and cloud infrastructure. Combining technology from Cybereason, Google Cloud, and Chronicle, the platform scans more than 23 trillion security-related events per week and applies AI to help reveal, mitigate, and predict cyberattacks correlated across devices, users, apps, and cloud deployments.

Creating human-like AI is about more than mimicking human behavior: the technology must also be able to process information, or ‘think’, like humans if it is to be fully relied upon. New research, published in the journal Patterns and led by the University of Glasgow’s School of Psychology…


Magnetic solids can be demagnetized quickly with a short laser pulse, and so-called HAMR (heat-assisted magnetic recording) memories that work on this principle are already on the market. However, the microscopic mechanisms of ultrafast demagnetization remain unclear. Now, a team at HZB has developed a new method at BESSY II to quantify one of these mechanisms and has applied it to the rare-earth element gadolinium, whose magnetic properties arise from electrons in both the 4f and 5d shells. This study completes a series of experiments the team performed on nickel and iron-nickel alloys. Understanding these mechanisms is useful for developing ultrafast data storage devices.

In 2021, Instagram is one of the most popular social media platforms. Recent statistics show that the platform now boasts over 1 billion monthly active users. With this many eyes on their content, influencers can reap great rewards through sponsored posts if they have a large enough following. The question for today then becomes: How do we effectively grow an Instagram account in the age of algorithmic bias? Instagram expert and AI growth specialist Faisal Shafique helps us answer this question, drawing on his experience growing his @fact account to about 8M followers while also helping major, edgy brands like Fashion Nova grow to over 20M.


Using machine learning, a computer model can teach itself to smell in just a few minutes. When it does, researchers have found, it builds a neural network that closely mimics the olfactory circuits that animal brains use to process odors.

Animals from fruit flies to humans all use essentially the same strategy to process olfactory information in the brain. But neuroscientists who trained an artificial neural network to take on a simple odor classification task were surprised to see it replicate biology’s strategy so faithfully.



When asked to classify odors, artificial neural networks adopt a structure that closely resembles that of the brain’s olfactory circuitry.
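For readers who want a feel for the task setup, the sketch below trains a small feedforward network on synthetic odor vectors (random receptor-activation patterns with invented class labels); it illustrates an odor classification task in general, not the network or dataset used in the study.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic stand-in for odor data: 50 receptor channels, 10 odor classes.
n_classes, n_receptors, n_samples = 10, 50, 2000
prototypes = rng.random((n_classes, n_receptors))   # one activation pattern per odor
labels = rng.integers(0, n_classes, n_samples)
X = prototypes[labels] + 0.1 * rng.standard_normal((n_samples, n_receptors))  # noisy presentations

X_train, X_test, y_train, y_test = train_test_split(X, labels, test_size=0.25, random_state=0)

# A small network with one hidden layer, loosely analogous to a compact olfactory circuit.
clf = MLPClassifier(hidden_layer_sizes=(30,), max_iter=500, random_state=0)
clf.fit(X_train, y_train)
print(f"test accuracy: {clf.score(X_test, y_test):.2f}")
```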

If the properties of materials can be reliably predicted, then the process of developing new products for a huge range of industries can be streamlined and accelerated. In a study published in Advanced Intelligent Systems, researchers from the University of Tokyo Institute of Industrial Science used machine learning on core-loss spectroscopy data to determine the properties of organic molecules.

The spectroscopy techniques energy-loss near-edge structure (ELNES) and X-ray absorption near-edge structure (XANES) are used to obtain information about the electrons, and through them the atoms, in materials. They have high sensitivity and high resolution and have been used to investigate a range of materials, from electronic devices to drug delivery systems.

However, connecting spectral data to the properties of a material, such as optical properties, electron conductivity, density, and stability, remains ambiguous. Machine learning (ML) approaches have been used to extract information from large, complex sets of data. Such approaches use artificial neural networks, which are loosely modeled on how the brain works, and learn to solve problems through repeated training. Although the group previously used ELNES/XANES spectra and ML to extract information about materials, what they found did not relate to the properties of the material itself, so the information could not easily be translated into practical developments.
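The general spectrum-in, property-out idea can be sketched as follows, using synthetic spectra and an invented target property; the study’s actual descriptors, network architecture, and data are not reproduced here.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)

# Synthetic stand-in for ELNES/XANES spectra: 200 energy channels per molecule.
n_molecules, n_channels = 1000, 200
spectra = rng.random((n_molecules, n_channels))

# Invented target property that depends on the spectra through an unknown
# nonlinear relationship (purely for demonstration).
weights = rng.standard_normal(n_channels)
property_values = np.tanh(spectra @ weights) + 0.05 * rng.standard_normal(n_molecules)

X_train, X_test, y_train, y_test = train_test_split(
    spectra, property_values, test_size=0.2, random_state=1)

# Neural-network regressor mapping spectra directly to the property of interest.
model = MLPRegressor(hidden_layer_sizes=(64, 32), max_iter=1000, random_state=1)
model.fit(X_train, y_train)
print(f"R^2 on held-out spectra: {model.score(X_test, y_test):.2f}")
```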