Navigating Ethical Dilemmas in Privacy


Ethical Frameworks

Have you ever been faced with an ethical dilemma in your work? How did you decide what to do? Did you feel equipped with the right ethical "tools" to make a decision, and do you think the decision you made in the end was the right one?

Such situations are common, both in our personal and professional lives. Working in software, you may be faced with ethical dilemmas such as:

  • To what extent should we optimize for user engagement on our platform? We have a duty to our shareholders to make a profit, but we know from internal research that some of our mechanisms are manipulative and are harming users.
  • Should we censor users, restricting their freedom of speech, in order to protect other users?
  • Should we break end-to-end encryption to help governments combat terrorism and child sexual abuse? Can we trust them to only use this access for those purposes?
  • Should we sign a contract with the military to build technology that will be used in warfare? Should we continue to abide by a 10-year contract we signed with them, despite the fact the government has changed and our technology is now being used in ways we're not comfortable with?
  • Should we build a product feature that has tremendous legitimate utility for our customers, but could also be abused? (For example, a network monitoring feature that could be used for cybersecurity but could be repurposed for surveillance.)
  • Should we sell our product to governments that have very different political views (and stances on human rights) than our own? Might they abuse our product? Could we be putting vulnerable individuals or our own country at risk?

❓ Choose a product that you've worked on, or a product you use regularly, and try to make a decision for each of those cases in the context of that product. Consider: how do you make each decision? Do you have some guiding ethical principles, or do you follow your intuition to do what feels right?


As technology professionals, we have a moral responsibility to develop our products ethically. We have control over the products we build: technology doesn't "happen to us" and it isn't "inevitable". It is the result of our ethical choices. But how can we best make these choices? The dilemmas described above are all drawn from the real world, and they are genuinely difficult to resolve. Having an ethical framework can help you reason about them clearly and make the best possible decision in your context. In this reading, you'll learn about one such framework you can use (if you find it a good fit): a ten-step framework for making ethical decisions by examining the problem from a series of different ethical perspectives ("lenses").

📚 Reading Assignment: A Framework for Ethical Decision-Making - Manuel Velasquez, Dennis Moberg, Michael J. Meyer, Thomas Shanks, Margaret R. McLean, David DeCosse, Claire André, Kirk O. Hanson, Irina Raicu, and Jonathan Kwan, Markkula Center for Applied Ethics at Santa Clara University (2021)

Ethics is not the same as feelings...often, our feelings will tell us that it is uncomfortable to do the right thing if it is difficult.

Ethics is not the same as religion. Many people are not religious but act ethically, and some religious people act unethically...

Ethics is not the same thing as following the law...Law can become ethically corrupt...

Ethics is not the same as following culturally accepted norms...it is important to recognize how one’s ethical views can be limited by one’s own cultural perspective or background, alongside being culturally sensitive to others.

Ethics is not science...Some things may be scientifically or technologically possible and yet unethical to develop and deploy.


The framework asks you to evaluate each possible course of action through six ethical lenses:

  • Which option best respects the rights of all who have a stake? (The Rights Lens)
  • Which option treats people fairly, giving them each what they are due? (The Justice Lens)
  • Which option will produce the most good and do the least harm for as many stakeholders as possible? (The Utilitarian Lens)
  • Which option best serves the community as a whole, not just some members? (The Common Good Lens)
  • Which option leads me to act as the sort of person I want to be? (The Virtue Lens)
  • Which option appropriately takes into account the relationships, concerns, and feelings of all stakeholders? (The Care Ethics Lens)


Why Protect Users You Disagree With?

"I disapprove of what you say, but I will defend to the death your right to say it" - Evelyn Beatrice Hall, 1906, illustrating Voltaire's beliefs.

First they came for the Jews
and I did not speak out
because I was not a Jew.

Then they came for the Communists
and I did not speak out
because I was not a Communist.

Then they came for the trade unionists
and I did not speak out
because I was not a trade unionist.

Then they came for me
and there was no one left
to speak out for me.

- Martin Niemöller

Why protect users you disagree with - even dislike? Well, one day your life might depend on someone you dislike protecting you. In your work, you are likely to identify potential privacy threats to users whom you strongly disagree with. Perhaps you find their behavior threatening, repulsive, or just plain weird. Perhaps it goes against your religion, your culture's social norms, or your own ethical code. However, that doesn't mean you can ignore these threats. Provided that their behavior is not illegal, you have a moral obligation to build appropriate privacy protections for these users into your product.

Not convinced? Perhaps the veil of ignorance will convince you. This thought experiment from political philosopher John Rawls aims to help us see past our personal biases when assessing decisions about how society should be governed. In our case, we can apply it to product and policy decisions in technology to understand why we might want to protect users whose beliefs or behavior we disagree with. Imagine that tomorrow you will randomly become one of your product's users. You have no idea which culture you will be part of, what language you will speak, or what your life circumstances will be. Man or woman or non-binary, LGBT+ or straight, Muslim or Buddhist, rich or poor - you could become anyone. If you knew that would happen tomorrow, would you still design your product the same way? Perhaps the person you will be tomorrow is going to be a victim of discrimination, unknowingly have their data sold to a data broker, or be stalked by a malicious insider in your company. Have you done everything you could to protect tomorrow's version of you?

"I can control my decision, which is that I don’t use that sh%t. I can control my kids' decisions, which is that they’re not allowed to use that sh%t." - Former VP for user growth at Meta

Perhaps if more social media executives were to use the veil of ignorance in their decision-making, our social media platforms might be less destructive and addictive. The Guardian reported in 2018 that many executives don't use the platforms they create, and don't allow their children to use them either, because they acknowledge that "the short-term, dopamine-driven feedback loops that we have created are destroying how society works".



Further Reading

  • Should airline pilots have less medical privacy? - Carissa Véliz, The Conversation (2015). This article explores the case of a pilot who committed suicide by crashing his plane with 150 people on board. He was being treated for depression - should the airline have been told? How absolute should we consider doctor-patient confidentiality to be?
  • Killer Robots: Algorithmic Warfare and Techno-Ethics - John Emery, Platypus (2018). Contrast this with Palantir's stance on Software and War.
  • Rights, Laws, and Google - Ben Thompson's commentary on the ethical dilemmas involved in digital content scanning for child sexual abuse material and the limits of the US First Amendment.
  • A paper by researchers at Google DeepMind explores how the veil of ignorance thought experiment can be applied in the design of AI.
  • The Center for Humane Technology has outlined eight core Policy Principles that guide their work. Consider: in a relentlessly fast-moving industry, how can you put these into practice? In which areas do we need to confront power in order to compel caution?
  • Designing Digital Freedom: A Human Rights Agenda for Internet Governance - Global Commission on Internet Governance. This report explores online human rights tensions in detail, discussing issues such as content moderation, surveillance, and the illusions (abused by tech companies) of neutrality and consent.