Requirement 4 of 8

Ethics in AI

Research AI ethics, discuss decision-making scenarios, write ethical guidelines, and explain the Turing test.

Requirement 4 discussion guide

Use these notes to think about responsible AI use and how people should judge fairness, privacy, and decision-making.

Why AI ethics matters

  • AI tools can influence what people read, write, buy, believe, or decide, so the way people use them matters.
  • A tool can be useful and still raise ethical concerns if it is unfair, invasive, dishonest, or used without enough human judgment.
  • Good AI use is not only about what a system can do, but also about what it should do and how people should use it responsibly.

Big ideas to keep in mind

  • Fairness matters because AI can produce unequal results when data or design choices are biased.
  • Privacy matters because AI systems may collect, store, or infer personal information.
  • Accountability matters because people are still responsible for decisions and outcomes even when AI is involved.
  • Human review matters because AI can be wrong, incomplete, misleading, or overconfident.

Bias, privacy, and AI decision-making

Issues to discuss

  • Bias can appear when systems are trained on incomplete or unfair data.
  • Privacy matters because AI systems may collect, process, or infer sensitive information.
  • AI-assisted decisions can affect real people, so transparency and accountability matter.
  • Responsible use means checking outputs carefully and understanding that AI can be wrong or misleading.
  • Even when an output looks neutral, it may still reflect hidden assumptions or patterns from the data behind the system.

Examples scouts can understand

  • An AI tool used for school support might work better for some students than others if it was not designed or tested fairly.
  • A recommendation system could keep showing one kind of content and make it harder for people to see balanced information.
  • An AI image or face-recognition system could make mistakes if it does not perform equally well for everyone.
  • A person could accidentally share private or sensitive information by typing it into an AI tool without thinking carefully.

Questions to ask about an AI system

  • Who created it and what is it supposed to do?
  • What data might it collect or depend on?
  • Who could be helped and who could be harmed?
  • Can a person review, question, or correct the result?

What Would You Do? scenario ideas

Five discussion-ready scenarios

  • A student uses AI to complete an entire assignment and turns it in without checking accuracy or giving any explanation of how AI was used.
  • A club wants to use AI to rank or sort people, but nobody checks whether the system is fair.
  • Someone uploads private information, photos, or messages into an AI tool without getting permission from the people involved.
  • A person shares AI-generated writing, audio, or images as if they were real and other people believe them.
  • An adult or leader wants to let AI make an important decision without any human review.

A good way to answer

  • Describe the ethical problem clearly.
  • Explain who could be affected.
  • Name the value involved, such as honesty, fairness, privacy, or safety.
  • Explain what a responsible person should do instead.

Build your own AI guidelines

Strong guideline ideas

  • Use AI honestly and do not pretend AI-generated work is fully your own when it is not.
  • Protect private information and avoid sharing sensitive data carelessly.
  • Review important AI output instead of assuming it is always correct.
  • Use AI in ways that are fair and respectful to other people.
  • Keep humans involved in important decisions that affect grades, safety, reputation, or opportunities.
  • Be open when AI was used in a project, product, or decision.

How to write your own version

  • Write 4 to 6 short rules you believe people should follow when using AI.
  • Use simple language so each rule is easy to explain to your counselor.
  • Include at least one rule about honesty, one about privacy, and one about human responsibility.

The Turing test

What it is

  • The Turing test is a classic idea proposed by Alan Turing in 1950 about whether a machine can respond in conversation in a way that seems human.
  • In a basic version, a person interacts with both a human and a machine and tries to tell which is which.
  • If the machine is difficult to distinguish from the human in that conversation, people sometimes say it passed the test.

Why it matters and its limits

  • The Turing test is historically important because it helped people think about machine intelligence.
  • But sounding human is not the same as being correct, trustworthy, ethical, or capable of real understanding.
  • A system could seem convincing in conversation and still invent facts or make poor decisions.

Scenario discussion notes

Use these notes to prepare for five rounds of ethical decision-making with your counselor.

Simple response pattern

  • What happened?
  • Why is it an ethical issue?
  • Who could be affected?
  • What should a responsible person do?

Draft your AI guidelines

Write your own ethical guidelines for using AI responsibly.

Starter template

  • I will use AI honestly and not use it to mislead others.
  • I will protect private information and think carefully before sharing data with AI tools.
  • I will check important outputs instead of assuming AI is always right.
  • I will think about fairness and how AI decisions may affect other people.
  • I will remember that people are responsible for decisions, even when AI is involved.

The Turing test

Explain what the Turing test is and why it is an important but limited idea in AI.

What it asks

  • The Turing test asks whether a machine can respond in conversation in a way that seems human.
  • A person compares responses and tries to tell the machine from the human.

Its limits

  • Sounding human does not prove a system is accurate, fair, or honest, or that it truly understands the topic.
  • A convincing answer can still be wrong or misleading, so human judgment still matters.
