Having spent much of the past several years thinking about and studying ethics in technology, I am increasingly convinced that our discussions about ethics in technology have a major, irreconcilable blind spot.
“If you are neutral in the face of injustice, you have chosen the side of the oppressor,” as Desmond Tutu summarized.
Our discussion about eliminating bias in AI is an aggressive march towards neutrality. Our measure of fairness is a line drawn too early.
Maybe we shouldn’t be trying to eliminate bias in AI. Maybe we should be designing for bias in favor of the oppressed. Worth thinking about.
Should we be eliminating power dynamics or inverting them? If we try to scrub power dynamics from a system too early, within a neatly bounded space, we either let those dynamics creep back in or we need those boundaries to be impenetrable.
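To make the contrast concrete, here is a minimal sketch of the two postures toward a scoring model. Everything in it is an illustrative assumption, not anything from an actual system: the synthetic score distributions, the group labels, the thresholds, and the 1.25 boost factor are all made up to show the shape of the idea, not to prescribe values.

```python
import numpy as np

# Hypothetical model scores for two groups (synthetic, for illustration only).
rng = np.random.default_rng(0)
scores_adv = rng.normal(0.6, 0.15, 1000)   # group the data historically favors
scores_marg = rng.normal(0.5, 0.15, 1000)  # group the data historically disfavors

def selection_rate(scores, threshold):
    """Fraction of a group whose scores clear the decision threshold."""
    return float(np.mean(scores >= threshold))

# "Eliminating bias" as commonly practiced: choose a per-group threshold so the
# marginalized group is selected at the same rate as the advantaged group
# (demographic parity) -- the march toward neutrality.
parity_rate = selection_rate(scores_adv, 0.55)
t_marg_parity = np.quantile(scores_marg, 1 - parity_rate)

# "Designing for bias in favor of the oppressed": deliberately select the
# marginalized group at a higher rate than parity requires, inverting rather
# than merely neutralizing the power dynamic. The 1.25 factor is an
# illustrative choice, not a principled constant.
boosted_rate = min(1.25 * parity_rate, 1.0)
t_marg_invert = np.quantile(scores_marg, 1 - boosted_rate)

print(f"parity threshold for marginalized group:   {t_marg_parity:.3f}")
print(f"inverted threshold for marginalized group: {t_marg_invert:.3f}")
```

The point of the sketch is only that both postures are one line apart: the machinery is identical, and the difference is where we choose to draw the line that line 3 calls "drawn too early."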
My next ethics talk may just be a single slide that says REVOLUTION