“Software engineers continue to treat safety and ethics as specialties, rather than the foundations of all design; young engineers believe they just need to learn to code, change the world, disrupt something. Business leaders focus on getting a product out fast, confident that they will not be held to account if that product fails catastrophically. Simultaneously imagining their products as changing the world and not being important enough to require safety precautions…”
– Yonatan Zunger
As I make my way through my current program in human-computer interaction at Tufts University, I think about the ways I hope it will influence my own work and the ways it could benefit other students and aspiring professionals. As with many fields of research, the work can be highly specialized – the origins of the field still shine through as one reviews the large existing body of research on aviation safety, or models of operator psychology framed around anecdotes about radar screens. But we now live in a world where advanced technology is so ubiquitous it has become banal. That pervasiveness means we are constantly interacting with technology in ways that can go unseen, and thus the potential risks may also go unseen. The news is full of examples of unprecedented risks, whether self-driving cars or Cambridge Analytica.
Engineering psychology, the study of human-machine systems, is now a far more complex discipline touching far more of our lives than ever before. We treat the human operator and the machine as a unified system, but our academic language often obscures the role the system designer plays in how that system functions. As we evaluate the psychological quirks, biases, and limitations that constrain the operator of a machine or system, we need to take care to account for the same quirks, biases, and limitations in the designer of that system.
Physical safety and ease of use can be evaluated in advance, and tools or techniques subsequently modified over time to achieve greater degrees of success. But this entails a certain comfort with a minimum viable product that is unfinished and not fully predictable upon release, one that can be reevaluated in hindsight. The more complex, intertwined, and interdependent these systems become, the greater the risk of failure, and the more important it becomes to predict those pitfalls in advance.
That is perhaps where we as professionals can have the greatest impact: looking forward rather than backward. The field can and does provide guidance on system design and usage, but that framework can and should have an ethical component directed at the system designer themselves. Specifically, I would argue that engineering psychology has guidance to offer software developers regarding personal and professional ethics, and I would advocate for the development and enforcement of an ethical standard for the profession.
Engineering psychology has a long history of providing such value to fields like mechanical engineering, aviation, and ground transportation. A key factor connecting these fields is their public nature: the public users of planes, trains, and bridges supply the political and cultural capital needed to enforce standards of care on the products engineered in each discipline. Related technology may also be regulated after the fact by government agencies, driven by public outcry once tragedy has already struck. Our textbook, The Atomic Chef by Steven Casey, documents several case studies along these lines.
Yet in many ways, modern software development is categorically different. Products are privately designed, often for private audiences. Unlike those older fields, there is no standard for licensure or certification, because the underlying programming languages and applications change so frequently and specific applications are highly customized and decentralized. This makes it hard to weave ethical concerns into the daily routine of many developers, and unsurprisingly the culture reflects this. Stereotypically embodied in the former Facebook mantra “move fast and break things”, it is a culture in which individual developers rarely see themselves as trained professionals providing a service to the public. Of course, however we view ourselves, the work we do has consequences for the general public.
There are older examples of software development and design failures. Six people died due to design oversights in the radiation therapy machine known as the Therac-25 (Lim, 1998). Casey (2006) describes the 1995 deaths of several members of the Middleton family in Los Angeles, due in no small part to operator error but also to the lack of GPS technology to locate the caller and relay that location to the switchboard operator, a problem that persists today in 911 systems around the nation. These risks of bodily harm from computer technology, in domains where we have a general expectation that the experts are taking care of us, map fairly easily onto the regulatory framework of the older and more established subfields of engineering mentioned previously. Public outcry meets an industrial ethos that embraces that responsibility and addresses issues accordingly. A divide begins to form when the technology is perceived as a private good rather than a public service, and when the risk is not exclusively physical.
Going further, new technologies on the cusp of being fully realized offer even more potential for harm. The potential for manipulation and deception with AI, VR, and AR technologies, should they be used in unethical ways, is frightening. With Twitter full of “bots” and deceptive news articles of unknown provenance already crowding our feeds, we risk something subtler and potentially more destructive than we yet have the language or frameworks to combat.
What is lost when we fail to consider our work in software design a public service, and constrain our thinking to narrow technical specifications and profit motives alone? When should we feel an obligation to foresee problems not only from a functional or technical perspective, but from a human factors perspective? What would it mean for individual designers and engineers to see themselves as responsible for the ways their products are used, in the same way that doctors see themselves as responsible for the health outcomes of their patients? What would it mean for the industry to adopt standards of training, licensure, and oversight that mimicked other fields? Who could even be that authority for an industry with so many subfields of study, unconnected public and private organizations, and individual actors?
Despite the many unanswered questions, the industry has not been unaware of this growing problem. Several leading groups, including the IEEE (http://te.ieeeusa.org) and the ACM (https://www.acm.org), have developed ethical guidelines in collaboration with professionals and experts, and these expressly consider the social consequences of system design. The ISO offers detailed design guidance for some specific applications, such as medical devices. The unresolved element has been the enforcement of such generally agreed-upon terms. I believe it could be possible to use state law in the way a bar association or medical specialty board might, even if only in certain contexts. Independent contractors offering their services publicly, or developers working with public entities like school districts or hospitals, may be cases for early adoption.
Realistically, much of the effort would have to be a form of self-policing by developers, informed by an interdisciplinary understanding of their work and reinforced by pressure on their private employers from an ever more technologically educated and aware public.
Reference list and further reading
Bogost, Ian (November 2015). Programmers: Stop Calling Yourselves Engineers [Article]. Retrieved from https://www.theatlantic.com/technology/archive/2015/11/programmers-should-not-call-themselves-engineers/414271/
Brown, Jennings (December 2017). Uber’s Big Claim That It’s Not Really a Cab Company Is Bogus, EU Court Rules [Article]. Retrieved from https://gizmodo.com/uber-s-big-claim-that-it-s-not-really-a-cab-company-is-1821461427
Casey, Steven (2006). The Atomic Chef and Other True Tales of Design, Technology, and Human Error [Book]. Santa Barbara, CA: Aegean Publishing Company.
Ha, Qinghua (November 2017). Excess social media use in normal populations is associated with amygdala-striatal but not with prefrontal morphology [Article]. Retrieved from http://www.psyn-journal.com/article/S0925-4927(17)30215-9/abstract
Science in the News, Harvard University (July 2014). Facebook’s Manipulation Studies – A Critical Look [Blog]. Retrieved from http://sitn.hms.harvard.edu/flash/2014/facebooks-manipulation-studies-a-critical-look/
Lim, Joanne (October 1998). An Engineering Disaster: Therac-25 [PDF]. Retrieved from http://www.bowdoin.edu/~allen/courses/cs260/readings/therac.pdf
Zunger, Yonatan (March 2018). Computer science faces an ethics crisis. The Cambridge Analytica scandal proves it [Article]. Retrieved from https://www.bostonglobe.com/ideas/2018/03/22/computer-science-faces-ethics-crisis-the-cambridge-analytica-scandal-proves/IzaXxl2BsYBtwM4nxezgcP/story.html