Archive for the ‘ethics’ category: Page 3
Jun 20, 2024
Exploring Social Neuroscience — Serious Science
Posted by Dan Breeden in categories: ethics, neuroscience, science
Is our brain responsible for how we react to people who are different from us? Why can’t people with autism tell lies? How does the brain produce empathy? Why is imitation a fundamental trait of any social interaction? What are the secret advantages of teamwork? How does the social environment influence the brain? Why is laughter different from any other emotion?
This course is aimed at deepening our understanding of how the brain shapes and is shaped by social behavior, exploring a variety of topics such as the neural mechanisms behind social interactions, social cognition, theory of mind, empathy, imitation, mirror neurons, interacting minds, and the science of laughter.
Jun 17, 2024
Are Children The Future?: Longtermism, Pronatalism, and Epistemic Discounting
Posted by Michael LaTorra in categories: economics, ethics, existential risks, life extension, policy
From the article:
Longtermism asks fundamental questions and promotes the kind of consequentialism that should guide public policy.
Based on a talk delivered at the conference Existential Threats and Other Disasters: How Should We Address Them?, held May 30–31, 2024, in Budva, Montenegro, and sponsored by the Center for the Study of Bioethics, The Hastings Center, and The Oxford Uehiro Center for Practical Ethics.
Jun 15, 2024
Beyond Binary: Exploring a Spectrum of Artificial Sentience
Posted by Dan Breeden in categories: ethics, robotics/AI
Envision AI evolving beyond mere imitation, surpassing human intelligence to redefine the boundaries of consciousness and ethics.
Jun 15, 2024
The wild race to improve synthetic embryos
Posted by Shubham Ghosh Roy in categories: biotech/medical, ethics, law
“We need a defined framework, but instead what we see here is a fairly wild race between labs,” one journal editor told me during the ISSCR meeting. “The overarching question is: How far do they go, and where do we place them in a legal-moral spectrum? How can we endorse working with these models when they are much further along than we were two years ago?”
So where will the race lead? Most scientists say the point of mimicking the embryo is to study it during the period when it would be implanting in the wall of the uterus. In humans, this moment is rarely observed. But stem-cell embryos could let scientists dissect these moments in detail.
Yet it’s also possible that these lab embryos turn out to be the real thing—so real that if they were ever transplanted into a person’s womb, they could develop into a baby.
May 30, 2024
Andreas Hein on LinkedIn: #interstellar #conference #luxembourg #exoplanet
Posted by Initiative for Interstellar Studies in categories: ethics, robotics/AI, security, space travel
Want to go on an unforgettable trip? Abstract submission is closing soon! Exciting news from SnT, the Interdisciplinary Centre for Security, Reliability and Trust at the University of Luxembourg: we are thrilled to announce the 1st European Interstellar Symposium, held in collaboration with esteemed partners such as the Interstellar Research Group, the Initiative & Institute for Interstellar Studies, the Breakthrough Prize Foundation, and the Luxembourg Space Agency.
This interdisciplinary symposium will delve into the profound questions surrounding interstellar travel, exploring topics such as human and robotic exploration, propulsion, exoplanet research, life support systems, and ethics. Join us to discuss how these insights will impact near-term applications on Earth and in space, covering technologies like optical communications, ultra-lightweight materials, and artificial intelligence.
Don’t miss this opportunity to connect with a community of experts and enthusiasts, all united in a common goal. Check out the “Call for Papers” link in the comment section to secure your spot!
Image credit: Maciej Rębisz, Science Now Studio
#interstellar #conference #Luxembourg #exoplanet
May 28, 2024
How AI is poised to unlock innovations at unprecedented pace
Posted by Zola Balazs Bekasi in categories: business, ethics, governance, internet, policy, robotics/AI, security
How can rapidly emerging #AI develop into a trustworthy, equitable force? Proactive policies and smart governance, says Salesforce.
These initial steps ignited AI policy conversations amid the acceleration of innovation and technological change. Just as personal computing democratized internet access and coding accessibility, fueling more technology creation, AI is the latest catalyst poised to unlock future innovations at an unprecedented pace. But with such powerful capabilities comes great responsibility: we must prioritize policies that allow us to harness AI’s power while protecting against harm. To do so effectively, we must acknowledge and address the differences between enterprise and consumer AI.
Enterprise versus consumer AI
May 26, 2024
Training Transhumanists at Oxford University
Posted by Dan Breeden in categories: biotech/medical, ethics, mobile phones, neuroscience, transhumanism
Those who know Oxford University for its literary luminaries might be surprised to learn that some of the most important reflections on emerging technologies come from its hallowed halls. While the leading tech innovators in Silicon Valley capture imaginations with their bold visions of future singularities, mind-machine melding, and digital immortality by 2045, they rarely engage as deeply with the philosophical issues surrounding such developments as do their like-minded scholars across the pond. This essay will briefly highlight some of the key contributions of Oxford University’s professors Nick Bostrom, Anders Sandberg, and Julian Savulescu to the transhumanist movement. It will also show how this movement’s focus on radical autonomy in biotechnical enhancements shapes the wider global bioethical conversation.
As the lead author of the Transhumanist FAQ, Bostrom provides the closest thing the movement has to an institutional catechism. He is, in a sense, the Ratzinger of Transhumanism. The first paragraph of the seminal text emphasizes the evolutionary vision of his school: Transhumanism’s incessant pursuit of radical technological transformation is “based on the premise that the human species in its current form does not represent the end of our development but rather a comparatively early phase.” Current humans are but one intriguing yet greatly improvable iteration of human existence. Think of the first iPhone and how unattractive 2007’s most cutting-edge technology looks in 2024.
May 19, 2024
Superintelligence: Paths, Dangers, Strategies
Posted by Dan Breeden in categories: biotech/medical, ethics, existential risks, robotics/AI
Since the release of ChatGPT in November 2022, artificial intelligence (AI) has both entered the common lexicon and sparked substantial public interest. A blunt yet clear example of this transition is the drastic increase in worldwide Google searches for ‘AI’ from late 2022, which reached a record high in February 2024.
You would therefore be forgiven for thinking that AI is suddenly and only recently a ‘big thing.’ Yet the current hype was preceded by a decades-long history of AI research, a field of academic study widely considered to have been founded at the 1956 Dartmouth Summer Research Project on Artificial Intelligence.1 Since then, a meandering trajectory of technical successes and ‘AI winters’ has unfolded, eventually leading to the large language models (LLMs) that have nudged AI into today’s public consciousness.
Alongside those who aim to develop transformational AI as quickly as possible – the so-called ‘Effective Accelerationism’ movement, or ‘e/acc’ – exists a smaller and often ridiculed group of scientists and philosophers who call attention to the profound dangers inherent in advanced AI – the ‘decels’ and ‘doomers.’2 One of the most prominent concerned figures is Nick Bostrom, the Oxford philosopher whose wide-ranging works include studies of the ethics of human enhancement,3 anthropic reasoning,4 the simulation argument,5 and existential risk.6 I first read his 2014 book Superintelligence: Paths, Dangers, Strategies7 five years ago, and it convinced me that the risks a highly capable AI system (a ‘superintelligence’) would pose to humanity ought to be taken very seriously before such a system is brought into existence. These threats are of a different kind and scale from those posed by the AIs in existence today, including those developed for use in medicine and healthcare (such as the consequences of training-set bias,8 uncertainties over clinical accountability, and problems regarding data privacy, transparency, and explainability),9 and are of a truly existential nature. In light of recent advancements in AI, I revisited the book to reconsider its arguments in the context of today’s digital technology landscape.
May 17, 2024
The neural signature of subjective disgust could apply to both sensory and socio-moral experiences
Posted by Saúl Morales Rodriguéz in categories: ethics, neuroscience
Disgust is one of the six basic human emotions, along with happiness, sadness, fear, anger, and surprise. It typically arises when a person perceives a sensory stimulus or situation as revolting, off-putting, or otherwise unpleasant.