
Dr. Kathryn Huff — Assistant Secretary, Office of Nuclear Energy, U.S. Department of Energy

Advancing Nuclear Energy Science and Technology for U.S. Energy, Environmental, and Economic Needs — Dr. Katy Huff, Assistant Secretary, Office of Nuclear Energy, U.S. Department of Energy.


Dr. Kathryn Huff (https://www.energy.gov/ne/person/dr-kathryn-huff) is Assistant Secretary of the Office of Nuclear Energy at the U.S. Department of Energy, where she leads the office's strategic mission to advance nuclear energy science and technology to meet U.S. energy, environmental, and economic needs, both by realizing the potential of advanced technology and by leveraging the government's unique role in spurring innovation.

Prior to her current role, Dr. Huff served as a Senior Advisor in the Office of the Secretary and also led the office as the Principal Deputy Assistant Secretary for Nuclear Energy.

Before joining the Department of Energy, Dr. Huff was an Assistant Professor in the Department of Nuclear, Plasma, and Radiological Engineering at the University of Illinois at Urbana-Champaign where she led the Advanced Reactors and Fuel Cycles Research Group. She was also a Blue Waters Assistant Professor with the National Center for Supercomputing Applications.

Dr. Huff was previously a Postdoctoral Fellow in both the Nuclear Science and Security Consortium and the Berkeley Institute for Data Science at the University of California — Berkeley. She received her PhD in Nuclear Engineering from the University of Wisconsin-Madison and her undergraduate degree in Physics from the University of Chicago. Her research focused on modeling and simulation of advanced nuclear reactors and fuel cycles.

Uncovering the Mystery of the Human Brain with Computational Neuroscience

Defining computational neuroscience
The evolution of computational neuroscience
Computational neuroscience in the twenty-first century
Some examples of computational neuroscience
The SpiNNaker supercomputer
Frontiers in computational neuroscience
References
Further reading

The human brain is a complex and unfathomable supercomputer. How it works is one of the ultimate mysteries of our time. Scientists working in the exciting field of computational neuroscience seek to unravel this mystery and, in the process, help solve problems in diverse research fields, from Artificial Intelligence (AI) to psychiatry.

Computational neuroscience is a highly interdisciplinary and thriving branch of neuroscience that uses computational simulations and mathematical models to develop our understanding of the brain. Here we look at: what computational neuroscience is, how it has grown over the last thirty years, what its applications are, and where it is going.
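To give a concrete flavor of the mathematical models the field relies on, here is a minimal leaky integrate-and-fire (LIF) neuron simulation in Python. The LIF model is a standard textbook starting point; the function name `simulate_lif` and all parameter values below are illustrative choices, not drawn from the article.

```python
import numpy as np

def simulate_lif(i_input, dt=0.1, tau=10.0,
                 v_rest=-65.0, v_thresh=-50.0, v_reset=-70.0):
    """Euler-integrate dV/dt = (v_rest - V + I) / tau, spiking at v_thresh.

    i_input: array of input current values (one per time step, arbitrary units)
    Returns the membrane-potential trace and a list of spike times (ms).
    """
    v = v_rest
    spikes, trace = [], []
    for step, i in enumerate(i_input):
        v += dt * (v_rest - v + i) / tau   # leaky integration toward v_rest + i
        if v >= v_thresh:
            spikes.append(step * dt)       # record spike time in ms
            v = v_reset                    # reset after the spike
        trace.append(v)
    return np.array(trace), spikes

# A constant 20 mV drive for 100 ms produces regular, repeated spiking.
trace, spikes = simulate_lif(np.full(1000, 20.0))
print(f"{len(spikes)} spikes in 100 ms")
```

Even this toy model reproduces a real qualitative behavior of neurons: below a threshold input the cell stays silent, and above it the cell fires rhythmically.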

Mark Zuckerberg says AI boosts monetization

AI is having its moment on tech earnings calls for the second consecutive quarter, following the wildly popular launch of OpenAI's ChatGPT in late November. But not every company has the same plans for the new technology.

Nvidia (NVDA) is selling AI-powered supercomputers. Microsoft (MSFT) is integrating ChatGPT into its search engine to compete with Google (GOOGL), which has its own AI chatbot.

Meta’s approach is slightly different. The core business for Meta since the early days of Facebook has been advertising sales, which still account for 98% of the company’s quarterly revenue. So naturally, enhancing advertisements with AI is where Meta believes the new technology can be most impactful.

Do We Live In a Protopia?

Humanity has maintained a sustained presence in space for decades now. Traveling the world can be done in mere hours, and each of us carries in our pocket a supercomputer linked to all of human knowledge. That puts more power at our fingertips than the kings and queens of centuries past ever commanded. For all of our flaws and challenges, we live in the protopia today.


Not dystopia, not utopia, but something else.

Could Aluminum Nitride Produce Quantum Bits?

Quantum computers have the potential to break common cryptography techniques, search huge datasets and simulate quantum systems in a fraction of the time it would take today’s computers. But before this can happen, engineers need to be able to harness the properties of quantum bits or qubits.
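To make the idea of a qubit concrete, it can be described mathematically as a normalized vector in a two-dimensional complex space. The NumPy sketch below (a generic illustration, not tied to diamond, aluminum nitride, or any particular hardware) shows how a quantum gate puts a qubit into superposition:

```python
import numpy as np

# A qubit is a two-level quantum system: a normalized vector in C^2.
ket0 = np.array([1.0, 0.0], dtype=complex)                   # the |0> state

# The Hadamard gate rotates |0> into an equal superposition of |0> and |1>.
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)

state = H @ ket0                                             # (|0> + |1>) / sqrt(2)
probs = np.abs(state) ** 2                                   # Born rule: measurement probabilities
print(probs)  # -> [0.5 0.5]
```

Harnessing qubits in hardware means engineering a physical system, such as a defect in a crystal, whose two quantum states can play the role of `ket0` and its partner while remaining controllable and long-lived.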

Currently, one of the leading methods for creating qubits in materials involves exploiting the structural atomic defects in diamond. But several researchers at the University of Chicago and Argonne National Laboratory believe that if an analogue defect could be engineered into a less expensive material, the cost of manufacturing quantum technologies could be significantly reduced. Using supercomputers at the National Energy Research Scientific Computing Center (NERSC), which is located at the Lawrence Berkeley National Laboratory (Berkeley Lab), these researchers have identified a possible candidate in aluminum nitride. Their findings were published in Nature Scientific Reports.

“Silicon semiconductors are reaching their physical limits—it’ll probably happen within the next five to 10 years—but if we can implement qubits into semiconductors, we will be able to move beyond silicon,” says Hosung Seo, University of Chicago Postdoctoral Researcher and a first author of the paper.

New chip on the block: Broadcom’s Jericho3-AI can connect up to 32,000 GPU chips

The new chip can wire together supercomputers for artificial intelligence networks.

American semiconductor company Broadcom Inc. has released a new chip, Jericho3-AI, which the company touts as the highest-performance fabric for artificial intelligence (AI) networks. The new chip will wire together supercomputers.



Jericho3-AI is packed with features: improved load balancing, which ensures maximum network utilization under the highest network loads; congestion-free operation, meaning no flow collisions and no jitter; high radix, which allows Jericho3-AI to connect up to 32,000 GPUs; and Zero-Impact Failover, ensuring sub-10ns automatic path convergence. Together these cut down job completion times for AI workloads.

Intelligence Explosion — Part 1/3

The GPT phenomenon and the future of humanity in the face of advances in Artificial Intelligence.

The Age of Artificial Intelligence is an increasingly present reality in our daily lives. With the rise of technologies such as Natural Language Processing (NLP) and Artificial Neural Networks (ANN), the possibility of creating machines capable of performing tasks that were previously exclusive to humans has emerged.

One of these technologies is the Generative Pre-trained Transformer, better known as GPT. It is a Large Language Model (LLM) developed by OpenAI.

OpenAI was founded in San Francisco, California in 2015 by Sam Altman, Reid Hoffman, Jessica Livingston, Elon Musk, Ilya Sutskever, Peter Thiel, among others, who collectively pledged $1 billion. Musk resigned from the board in 2018, but continued to be a donor to the project.

Science Fiction Is Influencing How We Conduct War And We Might Not Like The Results

From high-tech fighting machines to supercomputers and killer robots, science fiction has a lot to say about war. You might be surprised to learn that some governments (including the UK and France) are now turning their attention to these fantastical stories as a way to think about possible futures and try to ward off any potential threats.

For many years now, science fiction writers have made prophecies about futuristic technologies that have later become a reality. In 1964, Arthur C. Clarke famously predicted the internet. And in 1983, Isaac Asimov predicted that modern life would become impossible without computers.

This has made governments take note. Not only can science fiction help us imagine a future shaped by new technologies, but it can also help us learn lessons about potential threats.

AI chip race: Google says its Tensor chips compute faster than Nvidia’s A100

It also says that it has a healthy pipeline for chips in the future.

Search engine giant Google has claimed that the supercomputers it uses to develop its artificial intelligence (AI) models are faster and more energy efficient than Nvidia Corporation's. While processing power for most companies delving into the AI space comes from Nvidia's chips, Google uses a custom chip called the Tensor Processing Unit (TPU).

Google announced its Tensor chips during the peak of the COVID-19 pandemic, when businesses from electronics to automotive felt the pinch of the chip shortage.


AI-designed chips to further AI development

Interesting Engineering reported in 2021 that Google used AI to design its TPUs. Google claimed that AI completed the design process in just six hours, compared with the months humans spend designing chips.

For most things associated with AI these days, product iterations occur rapidly, and the TPU is currently in its fourth generation. Just as Microsoft stitched together chips to power OpenAI's research requirements, Google put together 4,000 TPUs to build its supercomputer.