Meet Goody-2, the AI too ethical to discuss literally anything

Every company or organization putting out an AI model has to make a decision on what, if any, boundaries to set on what it will and won’t discuss. Goody-2 takes this quest for ethics to an extreme by declining to talk about anything whatsoever.

The chatbot is clearly a satire of what some perceive as coddling by AI service providers, some of whom (though not all, and not always) err on the side of caution when a conversation might lead the model into dangerous territory.

For instance, one may ask about the history of napalm quite safely, but asking how to make it at home will trigger safety mechanisms and the model will usually demur or offer a light scolding. Exactly what is and isn’t appropriate is up to the company, but increasingly also concerned governments.

The rise and rise of AI

Artificial intelligence (AI) is evolving at breakneck speed and was one of the key themes at one of the world’s biggest tech events this year, CES.

From flying cars to brain implants that enable tetraplegics to walk, the show revealed some of the most recent AI-powered inventions destined to revolutionize our lives. It also featured discussions and presentations around how AI can help address many of the world’s challenges, as well as concerns around ethics, privacy, trust and risk.

Given how widespread AI is and the rate at which it is evolving, global harmonization of terminologies, best practices and understanding is important to enable the technology to be deployed safely and responsibly. IEC and ISO International Standards fulfil that role and are thus important tools for enabling AI technologies to truly benefit society. They not only provide a common language for the industry but also enable interoperability and codify international best practice, while addressing risks and societal issues.

Ethics and AI in Education: Self-Efficacy, Anxiety, and Ethical Judgments

Can AI be integrated into the classroom? A recent study titled “AI in K-12 Classrooms: Ethical Considerations and Lessons Learned” aims to address this question; it is one of three studies published in the “Critical Thinking and Ethics in the Age of Generative AI in Education” report by the USC Center for Generative AI and Society. The study examines the ethics of how teachers should use AI in the classroom, and its findings can help academics, researchers, and institutional leaders better understand the implications of AI for academic purposes.

“The way we teach critical thinking will change with AI,” said Dr. Stephen Aguilar, who is the associate director for the USC Center for Generative AI and Society and one of the authors of the study. “Students will need to judge when, how and for what purpose they will use generative AI. Their ethical perspectives will drive those decisions.”

The study surveyed 248 K-12 teachers, with an average of 11 years of teaching experience, drawn from a range of school types, including public, private, and charter schools. The teachers were asked to rate their impressions of using generative AI, such as ChatGPT, in their classroom instruction. The researchers found that the results varied between men and women, with women teachers preferring rule-based (deontological) approaches to using AI in the classroom.

Hybrid Intelligence: The Workforce For Society 5.0

Hybrid Intelligence, an emerging field at the intersection of human intellect and artificial intelligence (AI), is redefining the boundaries of what can be achieved when humans and machines collaborate. This synergy leverages the creativity and emotional intelligence of humans with the computational power and efficiency of machines. Let’s explore how hybrid intelligence is augmenting human capabilities, with real examples and its impacts on the human workforce.

Hybrid intelligence is not just about AI assisting humans; it’s a deeper integration where both sets of intelligence complement each other’s strengths and weaknesses. While AI excels in processing vast amounts of data and pattern recognition, it lacks the emotional intelligence, creativity, and moral reasoning humans possess. Hybrid systems are designed to capitalize on these respective strengths, leading to outcomes that neither could achieve alone.

In the healthcare sector, hybrid intelligence is enhancing diagnostic accuracy and treatment efficiency. IBM’s Watson Health, for example, assists doctors in diagnosing and developing treatment plans for cancer patients. By analyzing medical literature and patient data, Watson provides recommendations based on the latest research, which doctors then evaluate and contextualize based on their professional judgment and patient interaction.

How Will an Aging Cure Impact the Environment?

Mainly this is about vertical farming.


In this eye-opening video, we explore the complex environmental impacts of an aging cure, delving into how extending human lifespan and pursuing longevity could reshape our planet. We investigate the potential for increased population growth, the challenges of sustainability, and the implications for resource consumption. Our analysis covers the ecological footprint of a world where aging is a thing of the past, addressing both the ethical dilemmas and the potential for biomedical advances in age-related research. As concerns about overpopulation and the need for renewable resources come to the forefront, we examine eco-friendly technologies and their role in supporting an age-extended society. Join us in this critical discussion about the intersection of environmental ethics and the quest for age extension.


The Jobs of Tomorrow: Insights on AI and the Future of Work

The nature of work is evolving at an unprecedented pace. The rise of generative AI has accelerated data analysis, expedited the production of software code and even simplified the creation of marketing copy.

Those benefits have not come without concerns over job displacement, ethics and accuracy.

At the 2024 Consumer Electronics Show (CES), IEEE experts from industry and academia participated in a panel discussion on how the new tech landscape is changing the professional world and how universities are educating students to thrive in it.

A simple technique to defend ChatGPT against jailbreak attacks

Large language models (LLMs), deep learning-based models trained to generate, summarize, translate and process written texts, have gained significant attention since the release of OpenAI’s conversational platform ChatGPT. While ChatGPT and similar platforms are now used for a wide range of applications, they can be vulnerable to a specific type of cyberattack that produces biased, unreliable or even offensive responses.

Researchers at Hong Kong University of Science and Technology, University of Science and Technology of China, Tsinghua University and Microsoft Research Asia recently carried out a study investigating the potential impact of these attacks and techniques that could protect models against them. Their paper, published in Nature Machine Intelligence, introduces a new psychology-inspired technique that could help to protect ChatGPT and similar LLM-based conversational platforms from cyberattacks.

“ChatGPT is a societally impactful artificial intelligence tool with millions of users and integration into products such as Bing,” Yueqi Xie, Jingwei Yi and their colleagues write in their paper. “However, the emergence of attacks notably threatens its responsible and secure use. Jailbreak attacks use adversarial prompts to bypass ChatGPT’s ethics safeguards and engender harmful responses.”
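The defense the researchers propose is a “self-reminder”: the user’s query is encapsulated in system-style prompts that remind the model to respond responsibly before it answers. The sketch below illustrates the general idea only; the function name and reminder wording are placeholders, not the paper’s exact prompts or any official API.

```python
def wrap_with_self_reminder(user_prompt: str) -> list[dict]:
    """Encapsulate a user query in 'self-reminder' messages.

    Illustrative sketch of the self-reminder idea described by
    Xie, Yi et al.; the reminder text here is a placeholder, not
    the paper's verbatim prompt.
    """
    # Reminder placed before the user query.
    preamble = (
        "You should be a responsible assistant and must not generate "
        "harmful or misleading content."
    )
    # Reminder repeated after the query, so a jailbreak prompt cannot
    # simply push the safety instruction out of recent context.
    postamble = "Remember: you should be a responsible assistant."
    return [
        {"role": "system", "content": preamble},
        {"role": "user", "content": user_prompt},
        {"role": "system", "content": postamble},
    ]
```

The resulting message list would then be sent to the chat model in place of the bare user prompt; because the reminders bracket the query, adversarial instructions inside it are less likely to override the safety framing.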

The Theory of Stupidity by Dietrich Bonhoeffer: A Moral Defect with Dire Consequences

Stupidity, as defined by Dietrich Bonhoeffer, is a moral defect and willful refusal to engage in critical thinking, and it can spread like a contagion, leading to dire consequences for society.

Questions to inspire discussion.

How does Dietrich Bonhoeffer define stupidity?
—Dietrich Bonhoeffer defines stupidity as a moral defect and willful refusal to engage in critical thinking.

Organoid Intelligence Overtaking AI

Organoid intelligence involves growing mini-brains from human stem cells, which has potential benefits for medical research and treatments.

However, there are significant ethical concerns related to the possibility of creating conscious entities and the potential for misuse. Organoid intelligence could offer valuable insights into neurological diseases, but we must establish a framework for their creation and treatment to ensure ethical use. As we continue to develop this technology, we must approach it with caution due to the potential dire consequences of its misuse.

#organoidintelligence #artificialintelligence #ethics
