I quite enjoyed this short video from Google about the challenge of spotting ethical issues when developing AI applications – and, for that matter, applications in XR and other emerging technologies.
The challenge is that while it’s relatively easy for groups to define the ethical principles they plan to operate by, it’s much harder to actually put those principles into practice in new applications and at an organizational scale. There’s a need for reviews or some other deliberate activity that seeks to spot problems early enough that something can be done about them – and of course that is not easy.
The video offers the metaphor of bird-spotting, which I quite like – we encounter birds (ethical issues) all the time, but most of the time we just gloss over them and don’t recognize them. Maybe if we saw a bright red parrot we’d react, but most birds (ethical issues) are small, brown, and easy to ignore. With training and effort, however, we can learn where they might be hiding, how to tell them apart, and where different varieties are most likely to be found.
Where the metaphor breaks down, of course, is that bird-spotters want the birds they spot to survive, whereas issue-spotters are looking for ethical issues precisely in order to quash or resolve them.
Another perspective the video mentions, and one I quite appreciate, is the idea of lenses: by deliberately looking at a situation from a series of different perspectives and through a series of different cognitive frameworks, we can train ourselves to do a much better job of spotting birds (issues) than we would by just staring at the trees for hours. I was first drawn to this way of thinking years ago by Jesse Schell’s The Art of Game Design: A Book of Lenses, and it’s amazing how broadly applicable the approach is.
Anyway – the video is only a few minutes long and it’s important stuff, so I strongly recommend checking it out.
(NB – if you want to go deeper, it’s part of a longer course on responsible AI available from Google’s Qwiklabs.) Ask me if you want more info!