On Writing about Tech Ethics / Morals Are Not a Luxury

I’ve been co-hosting a podcast about the ethics of technology for five years now, but I’ve not written much about the ethics of technology. Part of that is due to the path my academic research took during my doctoral years, but part is that I said much of what I wanted to say in the podcast itself. There’s now some space for me to write a bit on the ethics of technology on my own.

The space comes from a new planning process in the podcast. We’re beginning to develop rhythms for the publication life of the podcast, as opposed to following the flow of the co-hosts’ professional and personal lives. Our lives have been in a near-constant state of flux over the last few years, and honestly it seems like that flux will continue for at least another year or two on my end. So instead of following the flow, we’re making a plan that accounts for our flux but also allows us to be more regimented in publishing.

As a result of that planning process, I now have some time where we’re intentionally not publishing the podcast while we prepare for the next season. I’ve been thinking about a particular article I read a few months ago regarding an AI facial recognition company, and now have time to jot down some notes about it. I posted this as a Twitter thread first, but it’s worthwhile to preserve for posterity as well. So, a lightly edited version of the Twitter thread is below. I hope to post more thoughts of a similar ethics-of-technology nature on this blog over the coming months and years, particularly in the spaces between Winning Slowly recording and publishing. So, Morals Are Not a Luxury:

My initial thoughts stem from a statement by the co-founder of an AI company building China’s facial recognition technology: “We’re not really thinking very far ahead, you know, whether we’re having some conflicts with humans, those kinds of things,” he said. “We’re just trying to make money.”

That quote on its own is remarkably candid about how little ethics factor into their AI calculus. But another quote struck me more, because it’s not news that technologists don’t think much about consequences: “But at the Singapore Defense Technology Summit this summer, co-founder Tang stood before more than 400 military and government officials and contractors from all over the world and said SenseTime doesn’t have the luxury of worrying about some of AI’s moral quandaries …”

So morals are a “luxury” now? This is what raw, unfettered because-we-can-we-will-and-then-make-money looks like. Not everyone in tech is this bad about morals (or at least this candid about it), but I can’t help looking at that quote and thinking, “There’s the problem.”

Those of us concerned about the ethics of technology have to keep working on all possible fronts to get through to these companies that there is much more than money at stake here, and that there are some places tech can go that it shouldn’t. Federal policy, field-level self-regulation, peer pressure, international cooperation (international boundaries are definitely part of the problem here, but they can also be part of the solution): every avenue should be pursued, and pursued posthaste.

Because as long as companies like SenseTime consider morals a luxury, they should be regarded as having no guiding compass at all, with literally no one to say “hey, this is a bad idea”; this is, quite literally, a company that will stop at nothing.

Sometimes critics are themselves critiqued for fear-mongering, for making something out of nothing, for imagining ghosts where none exist. This is proof straight from the source that we are no longer imagining: by their own admission, these people do not have “the luxury” of morals.

In short, this is what market thinking in technology has produced: a company that suspects there might be moral problems with total and complete surveillance but doesn’t have time to “worry” about the “luxury” of morals.