A Primer on Ethical Design Practices
Collectively agreed upon moral standards
We often think of morality as a concept that precedes law, but this only holds true for the radical progressives who seek to make the law more inclusive and protective of disenfranchised or otherwise less-privileged groups. Our collective moral compass, as a society, often trails
behind the law, or only barely edges ahead of it by a small majority.
It's useful to remember the distinction between ethics and morality. Morality is a fluid concept that differs from person to person, and is considered very subjective. But when a set of morals is institutionalized, it becomes rigid and demanding, and this is what we generally count as ethics.
Institutionalized (objective) vs. personal (subjective)
A handy rule of thumb for the distinction is that ethics are institutionalized, whilst morals are personal.
Codes of Ethics:
To protect us from harm (like laws!)
The strength and quality of a society is measured by how well it protects all individuals from harm. Laws are created by national and state governments in order to protect their citizens against dangers. The dangers laws protect against are generally universal, meaning they are not limited to a specific subset of victims or violators. Laws do, sometimes, focus on protecting specific groups who may be a subset of the larger populace. This is to protect them from harms that people outside of that subset do not need to fear.
Violate Law? Leave public society. (jail)
Violate Code of Ethics? Leave your field.
Codes of ethics, on the other hand, are focused on specific circumstances, such as an industry or a professional discipline. They serve the same purpose: to protect people from harm. But they generally lack federal-level enforcement. A violation of a code of ethics can, at worst, get you ejected from your field, not from society at large.
The strength and quality of a society is measured by how well it protects all individuals from harm.
We can take this line and twist it to apply to our industry of Design and Technology:
The strength and quality of an industry is measured by how well it protects its members
and its customers from harm.
“A rigid set of morals”
Programming AI to make judgement calls?
We are overwhelmingly people who desire to be good, but “good” is a subjective matter in a lot of circumstances. For this and other reasons, we have started to develop “Artificial Intelligence” software to make difficult judgement calls on our behalf.
There is a funny logical fallacy people often employ when asked
why we should program machines to do certain tasks, like making judgement calls, instead of humans.
Q: Why program AI to make human decisions?
A: Because humans are biased!
It is a dangerous misconception to believe machines are more objective than humans.
Besides being a product designer, I'm also a science fiction writer, so I'm going to use a popular sci-fi action movie about A.I. to illustrate this point. This movie is widely beloved, critically acclaimed, a real hallmark of the genre.
I am talking, of course, about I, Robot.
Now of course,
I, Robot is not exactly a great movie, but it holds up well enough and still poses a great ethical dilemma around trusting machines to make difficult, human judgement calls for us.
The ethical question that
I, Robot poses is simple: can machines make decisions as compassionately as humans do? The film poeticizes it by asking "Can machines have heart? Can a machine create art?" But ultimately, the question inevitably boils down to a mathematical equation. Humans make these equations based on our lived experiences, our biases, our understanding of how the world works, and our personal convictions about good and bad.
In other words, our human biases.
Machines are bad at morals because morals are derived from many, many variables.
Humans are primed with decades of lived experiences and acquired teachings. Machines and AI are programmed and primed with only the set of variables their creators supply them with. We do not condense the entirety of human lived experience into variables and hand it to an AI as everything it has to consider.
Now, that is not an intrinsically bad thing. It can lead to absolutely delightful results:
“I am a big fan of Fufby and Fuzzable and Snifkin, … because they're so quintessentially guinea pig."
It's great to have an AI when we have to name thousands and thousands of guinea pigs and want them to have “guinea pig-y" names.
AI is great when solving problems of inconvenience.
What about when human lives hang in the balance?
The results of AI tend to be great when we apply them to solve problems of inconvenience. But when we apply them to solve problems where human lives hang in the balance, the results are a lot more questionable.
AI and algorithms:
A “great” way to magnify systemic biases
Naming large numbers of guinea pigs is a delight. Making an AI that generates music? Fantastic. Relying on AI to predictively accuse people of future wrongdoings?
Turns out there are
some serious risks and problems with that.
ProPublica did an investigation into predictive risk analysis algorithms used by law enforcement to assess whether new arrestees were more or less likely to commit subsequent offenses in the future. It found that these algorithms were deeply biased.
The bias was so stark that a white man with a history of serious crimes, who had already spent five years in prison, was rated "low risk," while a teenager with four juvenile misdemeanors was rated "high risk." And the algorithm's predictions turned out to be severely wrong.
Of course, relying on AI to predictively accuse people of future wrongdoings is
literally the plot of another Science Fiction story.
An allegory on the perils of premature optimization
Some things are important to optimize for before things go wrong. For instance, it's great to optimize against your biases and blind spots before launching your product.
What we're seeing with the application of AI today, however, is that the wrong things are prematurely optimized, while the right things are either not optimized for at all, or not nearly well enough.
It's easier to combat biases in one programmer than ten individuals
The reasonable argument for programming AI to make judgement calls is that one programmer, or a small team of programmers, can be trained to overcome or compensate for biases more easily than tens or hundreds of individuals making those same judgement calls.
This truth is not a panacea
“Easier” than “incredibly hard” is still “very very hard”
Overcoming biases is one of the hardest challenges for humans, because we have an unhealthy affection for our biases. We cling to them, love them, defend them against critique.
Why? Because we mistake our biases for convictions, just like any other convictions we have. And as humans, we live by and rely on our convictions to manage our way through everyday life. It's really difficult to suss out the difference between a conviction and a bias, all the more so with our subconscious biases.
But let’s think bigger…
What about an AI to help you hire better candidates?
The variables you feed this program in terms of what
you're looking for are limited and colored (biased) by what you think makes a great candidate.
Meanwhile, the AI itself may be screening candidates based on external data sources it's pulling in to help feed itself additional variables, to “become smarter" as it were.
What about an AI that…
…predicts your upcoming pregnancy?
…determines your likelihood for illnesses?
…can steer your political leanings?
What if the hiring help AI gets access to any of the data from these other AIs, or their results are combined somehow?
Most likely outcome:
Greater homogeneity, worse performance & outcome
We already know that algorithms are
producing biased results; applying them at greater and more influential scales without first fixing these deeply rooted bias problems is not a good idea.
Less ethics = more turnover
According to the most recent study, the top reason people leave cushy tech jobs behind is the unfairness or mistreatment they experience. This is costing the tech industry 16 billion dollars a year by the most conservative estimates, and that's
only looking at the cost of recruiting new workers to replace those who leave.
The real problem:
Human ingenuity to avoid doing the uncomfortable work
Our species is brilliant because we're ingenious and clever and oh-so-adept at creating solutions to problems. We're so good at creating so many solutions that they in turn generate brand new problems altogether.
Not intentionally, but subconsciously, through ignorance or neglect.
We're all here because we want to avoid unintentionally causing problems when we're just trying to solve existing ones.
Ethics are a great approach to help reduce and mitigate the severity of the new problems we cause as side effects of how we solve the old ones.
Our industry is still young
We grow fast, but we’re also naïve
I tend to see things as orders of magnitude in scale. A human life cycle is a microcosm of the lifecycle of a dynasty or nation. Viewed this way, our industry is roughly in its adolescent phase: the wild teenage years are over, and it's time to grow up.
Like adolescents, our collective industry is starting to realize that
all of its decisions and actions have real-world consequences. We're still a young industry compared to almost every other industry in the world today. We've grown incredibly fast, and we've suffered some growing pains (in a bad way) as a result.
Many other industries use Codes of Ethics
Even pirates did a better job!
Pirates had a
Brethren Code which they adhered to, with strict punishments for those who violated it, and protections for those who adhered to it and suffered losses.
In some ways,
pirates had a more formally established and socially-conscious code of ethics they lived by than most designers and software developers do.
Don’t wait until it’s too late
Are recent VC revelations our industry’s Challenger moment?
Conference and Event CoCs
Commonplace today, “controversial" five years ago
What would a good, industry-wide Code of Ethics look like for the design & tech community?
Code of Ethics for
Design + Technology
For starters, this would not be a small or personal effort. A good industry code of ethics would have to be crafted by a diverse team of people from across many companies, non-profits, and ancillary organizations, all of whom have a stake in making the industry better and fairer.
Learn from precedent
• Seek to uncover and confront your biases
• Minimize Harm
• Act Independently
• Be Accountable and Transparent
A good industry Code of Ethics would take as many cues as possible from the longstanding and effective codes of other industries.
Take, for instance, the SPJ or Society of Professional Journalists' Code of Ethics. It teaches four key principles with clear and actionable items for each.
Actionable and constructive
(with positive, aspirational language)
A Code of Ethics must strive to be actionable and constructive, and the text should be free from political leaning. Ethics should be
universal, or at least aspire to be.
The language it uses must not be punitive, demeaning, or condemning; instead, it must use aspirational, supportive, and encouraging language.
A framework to operate by
A clear framework to help designers and developers navigate the complicated minefield of technology and its impacts on human society
Intersectional Feminism, Socialism, Psychology, HCI…
Learning about things like intersectionality, feminism, socialism, worker's rights, human cognitive biases, psychology, and so forth, helps you make more ethical decisions in your work.
The knowledge and perspectives you glean from these adjacent topics
impose useful constraints on your thinking in positive ways. They set boundaries against certain options or solutions you might consider, forcing your mind to think more creatively, more considerately, and more inclusively.
Jul 11, 2017,
Last updated: Jul 12, 2017