Activity 01
Gallery Walk: Recognition in the Wild
Post four stations: a radiology AI success case, a documented facial recognition false-positive incident, a voice assistant accuracy comparison across English accents, and a real-time captioning failure example. Groups rotate and annotate what data conditions produced each outcome and what safeguard was or was not in place. The class reconvenes to map shared patterns and build a collective framework for evaluating recognition system deployments.
Explain how AI enables computers to 'see' and 'hear' in applications like facial recognition or voice assistants.
Facilitation Tip
During the Gallery Walk, assign each station a specific error type (lighting, angle, accent, background noise) so students focus their observations on concrete failure cases rather than vague impressions.
What to look for
Present students with a news article detailing a real-world case of bias in facial recognition (e.g., higher error rates for women or people of color). Ask: 'What specific aspect of the AI system's training or design might have led to this disparity? How could this bias be addressed in future development?'
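For facilitators who want to make "higher error rates" concrete, the sketch below shows one way the disparity in such an article is typically measured: comparing false-positive rates across demographic groups. All data here is invented for classroom illustration; it is not from any real system or study.

```python
# Illustrative sketch with hypothetical data: comparing per-group
# false-positive rates, the kind of disparity described in news coverage
# of facial recognition bias. Numbers are invented for discussion only.

def false_positive_rate(predictions, labels):
    """Fraction of true non-matches (label 0) wrongly flagged as matches (prediction 1)."""
    negatives = [p for p, y in zip(predictions, labels) if y == 0]
    if not negatives:
        return 0.0
    return sum(negatives) / len(negatives)

# Hypothetical outputs: 1 = system flagged a match, 0 = no match.
# Ground truth: none of these people are actually in the watchlist (all 0s).
group_a = {"pred": [0, 0, 1, 0, 0, 0, 0, 0, 0, 0], "true": [0] * 10}
group_b = {"pred": [1, 0, 1, 0, 1, 0, 0, 1, 0, 0], "true": [0] * 10}

for name, g in [("Group A", group_a), ("Group B", group_b)]:
    fpr = false_positive_rate(g["pred"], g["true"])
    print(f"{name}: false-positive rate = {fpr:.0%}")
# Group A: false-positive rate = 10%
# Group B: false-positive rate = 40%
```

Students can then discuss what training-data conditions (underrepresentation, lighting, image quality) might produce the gap between the two groups.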