Rufo Guerreschi.
https://www.linkedin.com/in/rufoguerreschi.

Coalition for a Baruch Plan for AI
https://www.cbpai.org/

0:00 Intro.
0:21 Rufo Guerreschi.
0:28 Contents.
0:41 Part 1: Why we have a governance problem.
1:18 From e-democracy to cybersecurity.
2:42 Snowden showed that international standards were needed.
3:55 Taking the needs of intelligence agencies into account.
4:24 ChatGPT was a wake up moment for privacy.
5:08 Living in Geneva to interface with states.
5:57 Decision making is high up in government.
6:26 Coalition for a Baruch Plan for AI.
7:12 Parallels to organizations to manage nuclear safety.
8:11 Hidden coordination between intelligence agencies.
8:57 Intergovernmental treaties are not tight.
10:19 The original Baruch Plan in 1946.
11:28 Why the original Baruch Plan did not succeed.
12:27 We almost had a different international structure.
12:54 A global monopoly on violence.
14:04 Could expand to other weapons.
14:39 AI is a second opportunity for global governance.
15:19 After Soviet tests, there was no secret to keep.
16:22 Proliferation risk of AI tech is much greater?
17:44 Scale and timeline of AI risk.
19:04 Capabilities of security agencies.
20:02 Internal capabilities of leading AI labs.
20:58 Governments care about impactful technologies.
22:06 Government compute, risk, other capabilities.
23:05 Are domestic labs outside their jurisdiction?
23:41 What are the timelines where change is required?
24:54 Scientists, Musk, Amodei.
26:24 Recursive self improvement and loss of control.
27:22 A grand gamble, the rosy perspective of CEOs.
28:20 CEOs can’t really say anything else.
28:59 Altman, Trump, SoftBank pursuing superintelligence.
30:01 Superintelligence is clearly defined by Nick Bostrom.
30:52 Explain to people what “superintelligence” means.
31:32 Jobs created by Stargate project?
32:14 Will centralize power.
33:33 Sharing of the benefits needs to be ensured.
34:26 We are running out of time.
35:27 Conditional treaty idea.
36:34 Part 2: We can do this without a global dictatorship.
36:44 Dictatorship concerns are very reasonable.
37:19 Global power is already highly concentrated.
38:13 We are already in a surveillance world.
39:18 Affects influential people especially.
40:13 Surveillance is largely unaccountable.
41:35 Why did this machinery of surveillance evolve?
42:34 Shadow activities.
43:37 Choice of safety vs liberty (privacy).
44:26 How can this dichotomy be rephrased?
45:23 Revisit supply chains and lawful access.
46:37 Why the government broke all security at all levels.
47:17 The encryption wars and export controls.
48:16 Front door mechanism replaced by back door.
49:21 The world we could live in.
50:03 What would responding to requests look like?
50:50 Apple may be leaving “bug doors” intentionally.
52:23 Apple under same constraints as government.
52:51 There are backdoors everywhere.
53:45 China and the US need to both trust AI tech.
55:10 Technical debt of past unsolved problems.
55:53 Actually a governance debt (socio-technical).
56:38 Provably safe or guaranteed safe AI.
57:19 Requirement: Governance plus lawful access.
58:46 Tor, Signal, etc. are often wishful thinking.
59:26 Can restructure incentives.
59:51 Restrict proliferation without dragnet?
1:00:36 Physical plus focused surveillance.
1:02:21 Dragnet surveillance since the telegraph.
1:03:07 We have to build a digital dog.
1:04:14 The dream of cyber libertarians.
1:04:54 Is the government out to get you?
1:05:55 Targeted surveillance is more important.
1:06:57 A proper warrant process leveraging citizens.
1:08:43 Just like procedures for elections.
1:09:41 Use democratic system during chip fabrication.
1:10:49 How democracy can help with technical challenges.
1:11:31 Current world: anarchy between countries.
1:12:25 Only those with the most guns and money rule.
1:13:19 Everyone needing to spend a lot on military.
1:14:04 AI also engages states in a race.
1:15:16 Anarchy is not a given: US example.
1:16:05 The forming of the United States.
1:17:24 This federacy model could apply to AI.
1:18:03 Same idea was even proposed by Sam Altman.
1:18:54 How can we maximize the chances of success?
1:19:46 Part 3: How to actually form international treaties.
1:20:09 Calling for a world government scares people.
1:21:17 Genuine risk of global dictatorship.
1:21:45 We need a world /federal/ democratic government.
1:23:02 Why people are not outspoken.
1:24:12 Isn’t it hard to get everyone on one page?
1:25:20 Moving from anarchy to a social contract.
1:26:11 Many states have very little sovereignty.
1:26:53 Different religions didn’t prevent common ground.
1:28:16 China and US political systems similar.
1:30:14 Coming together, values could be better.
1:31:47 Critical mass of states.
1:32:19 The Philadelphia convention example.
1:32:44 Start with, say, seven states.
1:33:48 Date of the US constitutional convention.
1:34:42 US and China both invited but only together.
1:35:43 Funding will make a big difference.
1:38:36 Lobbying the US and China.
1:38:49 Conclusion.
1:39:33 Outro.

In this interview Jeff Sebo discusses the ethical implications of artificial intelligence and why we must take the possibility of AI sentience seriously now. He explores challenges in measuring moral significance, the risks of dismissing AI as mere tools, and strategies to mitigate suffering in artificial systems. Drawing on themes from the paper ‘Taking AI Welfare Seriously’ and his upcoming book ‘The Moral Circle’, Sebo examines how to detect markers of sentience in AI systems, and what to do about it. We explore ethical considerations through the lens of population ethics and AI governance (especially important in an AI arms race), and discuss indirect approaches to detecting sentience, as well as AI aiding in human welfare. This rigorous conversation probes the foundations of consciousness, moral relevance, and the future of ethical AI design.

Paper ‘Taking AI Welfare Seriously’: https://eleosai.org/papers/20241030_T…

Book — The Moral Circle by Jeff Sebo: https://www.amazon.com.au/Moral-Circl?tag=lifeboatfound-20…

Jeff’s Website: https://jeffsebo.net/

Eleos AI: https://eleosai.org/

Chapters:

00:00 Intro
01:40 Implications of failing to take AI welfare seriously
04:43 Engaging the disengaged
08:18 How Blake Lemoine’s ‘disclosure’ influenced public discourse
12:45 Will people take AI sentience seriously if it is seen as tools or commodities?
16:19 Importance, neglectedness and tractability (INT)
20:40 Tractability: Difficulties in measuring moral significance — i.e. by aggregate brain mass
22:25 Population ethics and the repugnant conclusion
25:16 Pascal’s mugging: low probabilities of infinite or astronomically large costs and rewards
31:21 Distinguishing real high-stakes causes from infinite utility scams
33:45 The nature of consciousness, and what to measure in looking for moral significance in AI
39:35 Varieties of views on what’s important. Computational functionalism
44:34 AI arms race dynamics and the need for governance
48:57 Indirect approaches to achieving ideal solutions — Indirect normativity
51:38 The marker method — looking for morally relevant behavioral & anatomical markers in AI
56:39 What to do about suffering in AI?
1:00:20 Building fault tolerance to noxious experience into AI systems — reverse wireheading
1:05:15 Will AI be more friendly if it has sentience?
1:08:47 Book: The Moral Circle by Jeff Sebo
1:09:46 What kind of world could be achieved
1:12:44 Homeostasis, self-regulation and self-governance in sentient AI systems
1:16:30 AI to help humans improve mood and quality of experience
1:18:48 How to find out more about Jeff Sebo’s research
1:19:12 How to get involved

Many thanks for tuning in! Please support SciFuture by subscribing and sharing! Have any ideas about people to interview? Want to be notified about future events? Any comments about the STF series? Please fill out this form: https://docs.google.com/forms/d/1mr9P…

Kind regards, Adam Ford


Hey everyone! Robin Hanson will be speaking on Thursday about his galaxy brain ideas on better incentive models for longevity. Plus his unique takes on prediction markets and long-term thinking. https://lu.ma/wzuwk1lp


Join us for a groundbreaking discussion with economist Robin Hanson on the future of longevity economics and city governance!

In today’s AI news, Elon Musk’s AI company, xAI, has officially launched its latest flagship AI model, Grok 3. Released late on February 17, 2025, Grok 3 introduces significant advancements over its predecessor, Grok 2, and aims to compete with leading AI models such as OpenAI’s GPT-4o and Google’s Gemini.

In other advancements, Replit has transformed non-technical employees at Zillow into software developers. The real estate giant now routes over 100,000 home shoppers to agents using applications built by team members who had never written code before. This breakthrough stems from Replit’s new partnership with Anthropic and Google Cloud, which has enabled over 100,000 applications on Google Cloud Run.

Then, Wu Yonghui, who held the prestigious title of “Google Fellow” and worked at the US tech giant for 17 years, recently joined TikTok owner ByteDance to lead foundational research on artificial intelligence (AI) as the firm seeks to “explore the upper limit of intelligence”. Wu now works at ByteDance’s Seed department, which the Beijing-based company started in early 2023.

Meanwhile, large companies are not adopting AI as quickly as startups, says AWS managing director Tanuja Randery, and the gap is creating a “two-tier” AI economy. Citing a new report from AWS, Randery said that European startups had integrated AI at pace over the last year while larger enterprises in the region were falling behind.

In videos, join Sara Bacha from Converge Technology Solutions as she delves into how GraphRAG outperforms traditional RAG by leveraging knowledge graphs and LLMs to enhance data relationships and accuracy. Learn about its benefits in development, production, and governance, including easier maintenance through better explainability and traceability.
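To make the idea concrete, here is a minimal sketch of graph-augmented retrieval in Python (an illustration of the general GraphRAG pattern, not Converge’s implementation; the toy triples and the graph_context helper are invented for this example). Instead of grounding the LLM only in vector-retrieved text chunks, the pipeline also serializes the query entity’s neighborhood in a knowledge graph into the prompt, which is what yields the extra explainability and traceability.

```python
# Toy knowledge graph stored as (subject, relation, object) triples.
# A real GraphRAG pipeline would extract these from source documents
# and keep them in a graph database.
KG = [
    ("GraphRAG", "augments", "traditional RAG"),
    ("GraphRAG", "queries", "knowledge graph"),
    ("knowledge graph", "models", "data relationships"),
    ("graph context", "improves", "traceability"),
]

def graph_context(entity: str) -> list[str]:
    """Serialize every triple touching `entity`. A GraphRAG-style
    prompt builder appends this neighborhood alongside the usual
    vector-retrieved chunks, so the LLM can cite explicit
    relationships instead of loose text."""
    return [f"{s} --{r}--> {o}" for (s, r, o) in KG if entity in (s, o)]

if __name__ == "__main__":
    # Facts that would be attached to the prompt for a query about GraphRAG.
    print("\n".join(graph_context("GraphRAG")))
```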

🚀 Welcome to the year 3,050 – a cyberpunk dystopian future where mega-corporations rule over humanity, AI surveillance is omnipresent, and cities have become neon-lit jungles of power and oppression.

🌆 In this AI-generated vision, experience the breathtaking yet terrifying future of corporate-controlled societies:
✅ Towering skyscrapers and hyper-dense cityscapes filled with neon and holograms.
✅ Powerful corporations with total control over resources, AI, and governance.
✅ A world where the elite live above the clouds, while the masses struggle below.
✅ Hyper-advanced AI, cybernetic enhancements, and the ultimate surveillance state.

🎧 Best experienced with headphones!

If you love Cyberpunk, AI-driven societies, and futuristic cityscapes, this is for you!

Elon Musk has revived discussion of Mars colonization with a viral AI-generated video of an advanced Martian city, which has amassed over 46 million views. Musk, who originally predicted missions for 2024–2025, envisions direct democracy for Mars governance. The video sparked a mix of curiosity and criticism, especially regarding the absence of natural greenery.

The development of artificial intelligence has entered a pivotal phase. With groundbreaking advancements in large models such as ChatGPT and Sora, AI is approaching what has been termed the “technological singularity”. The allure of AI’s potential is undeniable, but it is accompanied by significant risks, including deepfakes, fraud and autonomous weapons systems.

The complexities and interconnectedness of AI pose a new global challenge. Hence, building a coordinated global governance framework for AI is no longer optional; it is an urgent necessity.

AI transcends national boundaries, creating both global opportunities and risks that no country can manage alone. Countries across the world therefore need to work together to mitigate these risks.

WASHINGTON — As the demand for digital security grows, researchers have developed a new optical system that uses holograms to encode information, creating a level of encryption that traditional methods cannot penetrate. This advance could pave the way for more secure communication channels, helping to protect sensitive data.

“From rapidly evolving digital currencies to governance, healthcare, communications and social networks, the demand for robust protection systems to combat digital fraud continues to grow,” said research team leader Stelios Tzortzakis from the Institute of Electronic Structure and Laser, Foundation for Research and Technology Hellas and the University of Crete, both in Greece.

“Our new system achieves an exceptional level of encryption by utilizing a neural network to generate the decryption key, which can only be created by the owner of the encryption system.”
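For a rough numerical intuition (a toy sketch under strong assumptions, not the research team’s actual optical system), the scattering that scrambles the hologram can be modeled as a secret random unitary matrix: whoever holds the matrix inverts the scrambling exactly, while anyone without it recovers only speckle-like noise.

```python
import numpy as np

# Toy model of scattering-based optical encryption. The random unitary U
# stands in for the physical scattering process, and the seed plays the
# role of the owner-only decryption key. Purely illustrative.
rng = np.random.default_rng(12345)    # 12345: the owner's secret
n = 64                                # pixels in the flattened "hologram"

# Draw a random unitary via QR decomposition of a complex Gaussian matrix.
z = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
U, _ = np.linalg.qr(z)

message = rng.integers(0, 2, size=n).astype(float)  # bits in the hologram
ciphertext = U @ message                            # "scattering" scrambles them

# The owner inverts the medium exactly: U is unitary, so its inverse is U†.
recovered = np.real(U.conj().T @ ciphertext)
assert np.allclose(recovered, message)

# An eavesdropper guessing a different medium gets only noise back.
wrong, _ = np.linalg.qr(np.random.default_rng(99999).normal(size=(n, n)) + 0j)
print(np.round(np.real(wrong.conj().T @ ciphertext)[:8], 2))  # not the message
```

The physical point the researchers emphasize is that the “matrix” here is a real optical process that only the system’s owner can reproduce, so the key never exists as copyable data.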

I was recently a co-author on a paper about anticipatory governance and genome editing. The lead author was Jon Rueda, and the others were Seppe Segers, Jeroen Hopster, Belén Liedo, and Samuela Marchiori. It’s available open access on the Journal of Medical Ethics website. There is a short (900-word) summary available on the JME blog. Here’s a quick teaser for it:

Transformative emerging technologies pose a governance challenge. Back in 1980, a little-known academic at the University of Aston in the UK, called David Collingridge, identified the dilemma that has come to define this challenge: the control dilemma (also known as the ‘Collingridge Dilemma’). The dilemma states that, for any emerging technology, we face a trade-off between our knowledge of its impact and our ability to control it. Early on, we know little about it, but it is relatively easy to control. Later, as we learn more, it becomes harder to control. This is because technologies tend to diffuse throughout society and become embedded in social processes and institutions. Think about our recent history with smartphones. When Steve Jobs announced the iPhone back in 2007, we didn’t know just how pervasive and all-consuming this device would become. Now we do, but it is hard to put the genie back in the bottle (as some would like to do).

The field of anticipatory governance tries to address the control dilemma. It aims to carefully manage the rollout of an emerging technology so as to avoid losing control just as we learn more about its effects. Anticipatory governance has become popular in the world of responsible innovation and design. In the field of bioethics, approaches to anticipatory governance often try to anticipate future technical realities and ethical concerns, and to incorporate differing public opinion about a technology. But there is a ‘gap’ in current approaches to anticipatory governance.