Archive for the ‘government’ category: Page 8

Apr 9, 2024

Hackers stole 340,000 Social Security numbers from government consulting firm

Posted by in categories: cybercrime/malcode, economics, government

GMA provides economic and litigation support to companies and U.S. government agencies, including the U.S. Department of Justice, bringing civil litigation. According to its data breach notice, GMA told affected individuals that their personal information “was obtained by the U.S. Department of Justice (“DOJ”) as part of a civil litigation matter” supported by GMA.

The reasons and target of the DOJ’s civil litigation are not known. A spokesperson for the Justice Department did not respond to a request for comment.

GMA said that individuals notified of the data breach are “not the subject of this investigation or the associated litigation matters,” and that the cyberattack “does not impact your current Medicare benefits or coverage.”

Apr 9, 2024

NSA Expert: Quantum Computing to Enter Workforce in 3 to 5 Years

Posted by in categories: cybercrime/malcode, government, quantum physics

A national security expert predicts practical quantum computing tools are just three to five years away from integration into the workforce, NextGov is reporting.

Neal Ziring, the Technical Director of the National Security Agency’s (NSA) Cybersecurity Directorate, made the forecast during a recent public sector cybersecurity event hosted by Palo Alto Networks in Palo Alto. As reported by NextGov, Ziring expects the devices to be accessible predominantly through cloud-based platforms.

Ziring added that the impracticality and prohibitive cost of on-premises installations would put quantum computing systems out of reach for most organizations, including government agencies.

Apr 7, 2024

It Is Time To Take Intel Seriously As A Chip Foundry

Posted by in categories: computing, economics, finance, government, security

The third proof point is twofold: the increase in manufacturing capacity investment and the change in how that investment will be managed. With governments eager to secure future semiconductor manufacturing for both supply security and economic growth, Mr. Gelsinger went on a spending spree, expanding capacity in Oregon, Ireland, and Israel and building six new fabs in Arizona, Ohio, and Germany. Most of the initial investment was made without the promise of government grants, such as those under the U.S. CHIPS Act. However, Intel has now secured more than $50B from U.S. and European government incentives, customer commitments starting with its first five customers on the 18A process node, and its financial partners. Intel has also secured an additional $11B loan from the U.S. government and a 25% investment tax credit.

In addition to its own investment in fab capacity, Intel is partnering with Tower Semiconductor and UMC, two foundries with long and successful histories. Tower will invest in new equipment to be installed in Intel’s New Mexico facility for analog products, and UMC will partner with Intel to leverage three of the older Arizona fabs and process nodes, starting with 12nm, to support applications like industrial IoT, mobile, communications infrastructure, and networking.

The second side of this investment is how current and future capacity will be used. As strictly an IDM, Intel has historically capitalized on its investments in the physical fab structures by retrofitting the fabs after three process nodes, on average. While this allowed for the reuse of the structures and infrastructure, it eliminated support for older process nodes, which are important for many foundry customers. According to Omdia Research, less than 3% of all semiconductors are produced on the latest process nodes. As a result, Intel is shifting from retrofitting fabs for new process nodes to maintaining fabs to support extended life cycles of older process nodes, as shown in the chart below. This requires additional capacity for newer process nodes.

Apr 2, 2024

U.S., U.K. Will Partner to Safety Test AI

Posted by in categories: government, health, robotics/AI

“I think of [the agreement] as marking the next chapter in our journey on AI safety, working hand in glove with the United States government,” Donelan told TIME in an interview at the British Embassy in Washington, D.C. on Monday. “I see the role of the United States and the U.K. as being the real driving force in what will become a network of institutes eventually.”

The U.K. and U.S. AI Safety Institutes were established just one day apart, around the inaugural AI Safety Summit hosted by the U.K. government at Bletchley Park in November 2023. While the two organizations’ cooperation was announced at the time of their creation, Donelan says that the new agreement “formalizes” and “puts meat on the bones” of that cooperation. She also said it “offers the opportunity for them—the United States government—to lean on us a little bit in the stage where they’re establishing and formalizing their institute, because ours is up and running and fully functioning.”

The two AI safety testing bodies will develop a common approach to AI safety testing that involves using the same methods and underlying infrastructure, according to a news release. The bodies will look to exchange employees and share information with each other “in accordance with national laws and regulations, and contracts.” The release also stated that the institutes intend to perform a joint testing exercise on an AI model available to the public.

Mar 30, 2024

Google DeepMind CEO Demis Hassabis gets UK knighthood for ‘services to artificial intelligence’

Posted by in categories: government, media & arts, robotics/AI

Demis Hassabis, CEO and one of three founders of Google’s artificial intelligence (AI) subsidiary DeepMind, has been awarded a knighthood in the U.K. for “services to artificial intelligence.”

Ian Hogarth, chair of the U.K. government’s recently launched AI Safety Institute and previously founder of music startup Songkick, was awarded Commander of the Order of the British Empire (CBE) for services to AI; as was Matt Clifford, AI adviser to the U.K. government and co-founder of super–early-stage investor Entrepreneur First.

Mar 29, 2024

Robot Run Government — Should AI Be In Charge?

Posted by in categories: government, robotics/AI

In the future we will rely ever more on artificial intelligence to run our civilization, but what role will AI and computers play in governing?


Mar 29, 2024

Solar Power Surge: Sun Emits Intense X1.1 Flare

Posted by in categories: alien life, government, physics, solar power, sustainability

The Sun emitted a strong solar flare, peaking at 4:56 p.m. ET on March 28, 2024, NASA reported.


Mar 28, 2024

NSF Paid Universities To Develop AI Censorship Tools For Social Media

Posted by in categories: government, robotics/AI

University of Michigan, the University of Wisconsin-Madison, and MIT are among the universities cited in the House Judiciary Committee and the Select Subcommittee on the Weaponization of the Federal Government interim report.

It details the foundation’s “funding of AI-powered censorship and propaganda tools, and its repeated efforts to hide its actions and avoid political and media scrutiny.”

“NSF has been issuing multi-million-dollar grants to university and non-profit research teams” for the purpose of developing AI-powered technologies “that can be used by governments and Big Tech to shape public opinion by restricting certain viewpoints or promoting others,” states the report, released last month.

Mar 27, 2024

Predicting and Controlling Bad Actor Artificial Intelligence

Posted by in categories: cybercrime/malcode, government, internet, mapping, robotics/AI

This article includes computer-generated images that map internet communities by topic, without specifically naming each one. The research was funded by the US government, which is anticipating massive interference in the 2024 elections by “bad actors” using relatively simple AI chat-bots.
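The mapping described above can be sketched in miniature: treat online communities as nodes, link communities that share members, and group the linked nodes into clusters. The community names and edges below are invented for illustration only — the actual research uses far richer topic and membership data — but the clustering step itself is a standard union-find pass over the link graph.

```python
from collections import defaultdict

# Hypothetical toy data: pairs of communities that share members.
# Names are illustrative, not taken from the study.
edges = [
    ("forum_a", "forum_b"),
    ("forum_b", "forum_c"),
    ("channel_x", "channel_y"),
]

def connected_components(edges):
    """Group nodes into clusters via union-find over shared-membership links."""
    parent = {}

    def find(n):
        parent.setdefault(n, n)
        while parent[n] != n:
            parent[n] = parent[parent[n]]  # path compression
            n = parent[n]
        return n

    def union(a, b):
        parent[find(a)] = find(b)

    for a, b in edges:
        union(a, b)

    clusters = defaultdict(set)
    for n in parent:
        clusters[find(n)].add(n)
    return list(clusters.values())

for cluster in connected_components(edges):
    print(sorted(cluster))
```

With the toy edges above, the three linked forums fall into one cluster and the two channels into another; a real map would then position and color those clusters by topic to produce the kind of images the article describes.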


In an era of super-accelerated technological advancement, the specter of malevolent artificial intelligence (AI) looms large. While AI holds promise for transforming industries and enhancing human life, the potential for abuse poses significant societal risks. Threats include avalanches of misinformation, deepfake videos, voice mimicry, sophisticated phishing scams, inflammatory ethnic and religious rhetoric, and autonomous weapons that make life-and-death decisions without human intervention.

During this election year in the United States, some are worried that bad actor AI will sway the outcomes of hotly contested races. We spoke with Neil Johnson, a professor of physics at George Washington University, about his research that maps out where AI threats originate and how to help keep ourselves safe.


Mar 23, 2024

Debates on the nature of artificial general intelligence

Posted by in categories: business, Elon Musk, government, humor, information science, robotics/AI, transportation

The term “artificial general intelligence” (AGI) has become ubiquitous in current discourse around AI. OpenAI states that its mission is “to ensure that artificial general intelligence benefits all of humanity.” DeepMind’s company vision statement notes that “artificial general intelligence…has the potential to drive one of the greatest transformations in history.” AGI is mentioned prominently in the UK government’s National AI Strategy and in US government AI documents. Microsoft researchers recently claimed evidence of “sparks of AGI” in the large language model GPT-4, and current and former Google executives proclaimed that “AGI is already here.” The question of whether GPT-4 is an “AGI algorithm” is at the center of a lawsuit filed by Elon Musk against OpenAI.

Given the pervasiveness of AGI talk in business, government, and the media, one could not be blamed for assuming that the meaning of the term is established and agreed upon. However, the opposite is true: What AGI means, or whether it means anything coherent at all, is hotly debated in the AI community. And the meaning and likely consequences of AGI have become more than just an academic dispute over an arcane term. The world’s biggest tech companies and entire governments are making important decisions on the basis of what they think AGI will entail. But a deep dive into speculations about AGI reveals that many AI practitioners have starkly different views on the nature of intelligence than do those who study human and animal cognition—differences that matter for understanding the present and predicting the likely future of machine intelligence.

