
804: AI Ethics in Practice

Chapter Overview

Beyond the technical aspects of [[801-AI-Safety-Fundamentals|safety]], [[802-Bias-and-Fairness|fairness]], and [[803-Data-Privacy-and-Compliance|privacy]], building responsible AI requires navigating complex ethical dilemmas that don't have simple right or wrong answers. AI Ethics in Practice is about moving from principles to procedures, using frameworks to make sound, justifiable decisions when faced with real-world trade-offs.


The Challenge: Competing Values

Many ethical problems in AI arise from conflicts between desirable values. An engineer must often make a difficult choice with no perfect solution.

```mermaid
flowchart TD
    A["🏥 Decision: Deploy Medical Diagnostic AI"]

    subgraph Benefits ["✅ Potential Benefits"]
        B["Early Disease Detection<br/>Could save thousands of lives"]
        C["Faster Diagnosis<br/>Reduced healthcare costs"]
    end

    subgraph Risks ["❌ Potential Risks"]
        D["Higher Error Rate<br/>for minority demographics"]
        E["False Positives<br/>Unnecessary anxiety/procedures"]
    end

    subgraph Dilemma ["⚖️ The Ethical Trade-off"]
        F["Deploy now:<br/>Help majority, potentially harm minority"]
        G["Delay deployment:<br/>Perfect the system, but lives may be lost"]
    end

    A --> Benefits
    A --> Risks
    Benefits --> F
    Risks --> G

    style A fill:#e3f2fd,stroke:#1976d2,stroke-width:2px
    style F fill:#fff3e0,stroke:#f57c00,stroke-width:2px
    style G fill:#ffcdd2,stroke:#d32f2f,stroke-width:2px
```

High-Profile Cases in AI Ethics

Case 1: IBM Watson for Oncology - Promise vs. Reality (2016-2018)

What Happened: IBM's Watson for Oncology was marketed as an AI system that could help doctors make better cancer treatment decisions. However, investigations revealed that the system often provided recommendations that contradicted medical guidelines and had not been properly tested on diverse patient populations.

Ethical Issues:
- Overpromising: Marketing suggested capabilities that didn't exist
- Bias: Training data primarily from a single hospital (Memorial Sloan Kettering)
- Transparency: Doctors couldn't understand how recommendations were made

Impact: Several hospitals discontinued use; the case highlighted the importance of rigorous testing and transparent AI in healthcare.

Lessons:
- Don't overpromise AI capabilities, especially in life-critical applications
- Diverse training data is crucial for equitable outcomes
- Medical professionals need to understand AI decision-making processes

Case 2: Autonomous Vehicle Moral Dilemmas - The Trolley Problem in Practice

What Happened: As self-driving cars became a reality, engineers faced programming decisions about unavoidable accident scenarios. Should a car swerve to avoid hitting a child, potentially killing its passenger? These questions are no longer purely theoretical.

The MIT Moral Machine Experiment: Researchers collected 40 million decisions from people worldwide about moral dilemmas in autonomous vehicles.

Key Findings:
- Strong preference for saving more lives vs. fewer
- Preference for saving young vs. old
- Significant cultural differences in moral preferences

Ongoing Challenge: How do we program ethical decisions into machines when humans disagree?

Case 3: Lensa AI and Non-Consensual Imagery (2022)

What Happened: The popular Lensa AI app, which generates artistic portraits from photos, was found to sometimes produce sexualized images of users, particularly women, without their consent.

Ethical Issues:
- Consent: Users didn't consent to sexualized imagery
- Bias: The underlying AI model (Stable Diffusion) had biases from internet training data
- Harm: Psychological harm and potential for misuse

Response: App developers implemented additional content filters and warnings.

Lessons:
- AI systems inherit biases from training data
- Need for proactive content filtering in consumer AI applications
- Importance of considering potential misuse scenarios

Case 4: Recidivism Prediction and Algorithmic Bias - COMPAS (2016)

What Happened: The COMPAS (Correctional Offender Management Profiling for Alternative Sanctions) system was used to predict recidivism risk for criminal defendants. ProPublica's investigation revealed significant racial bias.

The Findings:
- Black defendants were twice as likely to be incorrectly flagged as high-risk
- White defendants were twice as likely to be incorrectly flagged as low-risk
- Overall accuracy was similar across races, but error patterns differed
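
These findings are essentially a comparison of group-wise false positive and false negative rates. The sketch below shows one way such error rates can be computed; the data and group labels are invented for illustration and are not drawn from the actual COMPAS records.

```python
# Minimal sketch with invented data (not the real COMPAS dataset).
# Each record: (group, predicted_high_risk, actually_reoffended)
records = [
    ("A", True,  False), ("A", True,  True),  ("A", False, False), ("A", True,  False),
    ("B", False, True),  ("B", False, False), ("B", True,  True),  ("B", False, True),
]

def group_error_rates(records, group):
    rows = [r for r in records if r[0] == group]
    non_reoffenders = [r for r in rows if not r[2]]
    reoffenders = [r for r in rows if r[2]]
    # False positive rate: share of non-reoffenders wrongly flagged as high-risk
    fpr = sum(r[1] for r in non_reoffenders) / len(non_reoffenders)
    # False negative rate: share of reoffenders wrongly flagged as low-risk
    fnr = sum(not r[1] for r in reoffenders) / len(reoffenders)
    return fpr, fnr

for g in ("A", "B"):
    fpr, fnr = group_error_rates(records, g)
    print(f"group {g}: false positive rate = {fpr:.2f}, false negative rate = {fnr:.2f}")
```

Two groups can see similar overall accuracy while the kinds of errors they experience differ sharply, which is exactly the pattern ProPublica reported.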

Ethical Dilemma: Is it acceptable to use a tool that's accurate overall but systematically biased against certain groups?

Impact: Sparked major debate about fairness in algorithmic decision-making and led to new research on bias detection and mitigation.

Case 5: OpenAI's GPT-4 Safety Testing (2023)

What Happened: Before releasing GPT-4, OpenAI conducted extensive safety testing, including "red team" exercises where experts tried to make the model behave harmfully.

Key Findings:
- GPT-4 could potentially be used for cyberattacks, biological weapon development, and misinformation
- The model showed concerning capabilities in manipulation and deception
- Safety measures significantly reduced but didn't eliminate risks

Ethical Approach: OpenAI published a detailed safety report alongside the release, a level of transparency not provided for earlier models.

Debate: Was this transparency helpful or did it provide a "recipe" for misuse?

Case 6: Midjourney and Deepfake Concerns (2023)

What Happened: AI art generators like Midjourney became capable of creating highly realistic images of public figures, raising concerns about deepfakes and misinformation.

The Dilemma:
- Creative Freedom: Artists and users want unrestricted creative tools
- Misinformation Risk: Realistic fake images can spread false information
- Consent: Public figures' likenesses used without permission

Response: Many platforms implemented restrictions on generating images of public figures.

Ongoing Challenge: Balancing creative freedom with preventing harm.

Ethical Decision-Making Frameworks

1. The Principlist Approach

Based on four core principles:

Autonomy: Respect for persons and their right to make informed decisions
- Application: Ensure users understand and consent to AI decisions affecting them

Beneficence: Acting in users' best interests
- Application: Design AI systems to maximize benefits

Non-maleficence: "Do no harm"
- Application: Implement safeguards to prevent AI misuse

Justice: Fair distribution of benefits and burdens
- Application: Ensure AI systems don't discriminate unfairly

2. The Consequentialist Approach

Focus on outcomes and maximizing overall well-being.

Key Questions:
- What are all possible consequences of this AI system?
- How do we weigh benefits against harms?
- Who benefits and who bears the costs?

Example: Deploying imperfect medical AI might save 1000 lives but cause 10 misdiagnoses. The consequentialist might approve deployment.
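
One way to see how this reasoning works (and where it breaks down) is to make the weighing explicit. The toy calculation below uses invented counts and value weights; it is a sketch of a consequentialist tally, not a recommendation.

```python
# Toy consequentialist tally; the counts and value weights are invented assumptions.
lives_saved = 1000            # estimated lives saved by deploying now
misdiagnoses = 10             # estimated serious misdiagnoses caused by deploying now
value_per_life_saved = 1.0    # relative value assigned to one life saved (assumption)
cost_per_misdiagnosis = 20.0  # relative cost assigned to one serious misdiagnosis (assumption)

net_benefit = lives_saved * value_per_life_saved - misdiagnoses * cost_per_misdiagnosis
print(f"Net benefit of deploying: {net_benefit:+.0f}")
# The conclusion flips if cost_per_misdiagnosis exceeds 100 -- the weights
# themselves are value judgments the framework cannot settle on its own.
```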

3. The Deontological Approach

Focus on duties and rights, regardless of consequences.

Key Questions:
- What are our fundamental duties as AI developers?
- What rights do users have?
- Are there actions that are inherently wrong?

Example: Using personal data without consent is wrong, even if it leads to better AI systems.

4. The Virtue Ethics Approach

Focus on character traits and moral virtues.

Key Questions:
- What would a virtuous AI engineer do?
- What virtues should guide AI development?
- How do we cultivate ethical practices in AI teams?

Key Virtues: Honesty, integrity, humility, responsibility, fairness.

Practical Ethical Guidelines

1. The AI Ethics Checklist

Before deploying any AI system, ask:

Purpose & Impact: - [ ] What problem are we solving? - [ ] Who benefits and who might be harmed? - [ ] Are there less risky alternatives?

Fairness & Bias: - [ ] Have we tested for bias across different groups? - [ ] Are outcomes fair and equitable? - [ ] Can we explain why decisions were made?

Privacy & Consent: - [ ] Do we have proper consent for data use? - [ ] Are privacy protections adequate? - [ ] Can users control their data?

Safety & Security: - [ ] What could go wrong? - [ ] Do we have safeguards in place? - [ ] How do we handle system failures?

Accountability: - [ ] Who is responsible for AI decisions? - [ ] How do we monitor performance? - [ ] Is there a process for addressing complaints?
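
Checklists like this are easier to enforce when they are recorded in machine-readable form and checked before every release. The sketch below is one possible encoding; the category and item names are hypothetical, not a standard tool or schema.

```python
# Hypothetical pre-deployment gate: every checklist item must be signed off
# before a release is allowed to proceed. Names are illustrative only.
checklist = {
    "purpose_and_impact": ["problem_defined", "harms_assessed", "alternatives_considered"],
    "fairness_and_bias": ["group_bias_tested", "outcomes_equitable", "decisions_explainable"],
    "privacy_and_consent": ["consent_obtained", "protections_adequate", "user_data_controls"],
    "safety_and_security": ["failure_modes_listed", "safeguards_in_place", "failure_handling_defined"],
    "accountability": ["owner_assigned", "monitoring_planned", "complaints_process_defined"],
}

def release_allowed(signoffs: dict[str, set[str]]) -> bool:
    """Return True only if every item in every category has been signed off."""
    return all(set(items) <= signoffs.get(category, set())
               for category, items in checklist.items())

# Example: a single missing sign-off blocks the release.
signoffs = {cat: set(items) for cat, items in checklist.items()}
signoffs["fairness_and_bias"].discard("group_bias_tested")
print(release_allowed(signoffs))  # False
```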

2. The Ethical AI Review Board

Many organizations establish ethics review boards similar to medical Institutional Review Boards (IRBs).

Composition:
- Technical experts (AI engineers, data scientists)
- Domain experts (lawyers, ethicists, social scientists)
- Community representatives
- Affected stakeholders

Process:
1. Pre-deployment Review: Assess ethical implications before launch
2. Ongoing Monitoring: Regular review of system performance and impact
3. Incident Response: Process for addressing ethical concerns

3. Stakeholder Engagement

Internal Stakeholders:
- Engineering teams
- Product managers
- Legal and compliance
- Leadership

External Stakeholders:
- Users and affected communities
- Regulatory bodies
- Civil society organizations
- Academic researchers

Methods:
- Focus groups and user interviews
- Public consultations
- Advisory boards
- Participatory design processes

Case Study: Ethical Decision-Making in Practice

Scenario: Social Media Content Moderation AI

Context: You're developing an AI system to automatically detect and remove harmful content from a social media platform.

Stakeholders:
- Users (want free expression)
- Advertisers (want brand-safe environment)
- Governments (want compliance with local laws)
- Civil society (want protection from harm)

Ethical Tensions:
- Free speech vs. Safety: Removing harmful content may also remove legitimate speech
- Cultural differences: What's acceptable varies across cultures
- Transparency vs. Gaming: Explaining the system helps users but also helps bad actors evade detection

Applying Frameworks:

Principlist Approach:
- Autonomy: Users should understand and have some control over content decisions
- Beneficence: Protect users from genuine harm
- Non-maleficence: Avoid censoring legitimate expression
- Justice: Apply standards fairly across all users

Consequentialist Approach:
- Measure overall well-being: reduced harassment vs. reduced engagement
- Consider long-term consequences: platform trust, democratic discourse

Deontological Approach:
- Respect fundamental rights: free expression, safety from harm
- Consistent application of moral rules

Practical Decision:
1. Transparent Policies: Clear community guidelines
2. Graduated Responses: Warning → Hiding → Removal
3. Human Oversight: AI flags content, humans make final decisions for edge cases
4. Appeals Process: Users can challenge decisions
5. Regular Auditing: Monitor for bias and effectiveness
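
The graduated-response and human-oversight steps can be combined into a simple confidence-based routing policy. The sketch below is a simplified illustration with invented thresholds; it is not a description of any platform's actual moderation pipeline.

```python
# Simplified routing policy: the classifier's confidence that content is harmful
# determines the action, and uncertain cases go to human reviewers.
# All thresholds are invented for illustration.
REMOVE_THRESHOLD = 0.95   # very confident: remove automatically
REVIEW_THRESHOLD = 0.60   # uncertain: escalate to a human moderator
WARN_THRESHOLD = 0.40     # borderline: warn or reduce distribution

def route(harm_probability: float) -> str:
    if harm_probability >= REMOVE_THRESHOLD:
        return "remove (log for audit and allow appeal)"
    if harm_probability >= REVIEW_THRESHOLD:
        return "queue for human review"
    if harm_probability >= WARN_THRESHOLD:
        return "warn / limit distribution"
    return "allow"

for p in (0.99, 0.75, 0.45, 0.10):
    print(f"p(harmful)={p:.2f} -> {route(p)}")
```

Where the thresholds sit is itself an ethical choice: lowering the removal threshold trades more wrongful takedowns for fewer missed harms, which is the free speech vs. safety tension in numerical form.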

Key Takeaways

  • Ethics is not just compliance: Legal doesn't always mean ethical
  • Perfect solutions don't exist: Ethical AI is about managing trade-offs responsibly
  • Stakeholder engagement is crucial: Include affected communities in decision-making
  • Transparency builds trust: Explain AI systems and their limitations
  • Continuous monitoring is essential: Ethics isn't a one-time check, it's an ongoing process
  • Learn from failures: Real-world cases provide valuable lessons for future development
  • Culture matters: Build ethical practices into organizational culture, not just technical systems

Next: [[805-Future-Considerations|Future Considerations in AI Ethics]]