Metal and Minds: Ethics in AI

November 5, 2021

By Jeremy Gilmer

The ethics of artificial intelligence (AI) is a subject that is in the news a lot right now, with many articles, TV shows and podcasts talking about it. Ethics has become one of the greatest challenges in the development of AI, and this has led to some profound and complicated questions about AI and machine learning. AI and ethics cross paths in many places: facial recognition, banking algorithms, weapons of war, even social media.

But one sector wherein the design and ethics of AI can literally mean life or death every day is mining—specifically, in the use of large mining haul trucks. Many companies are developing technology to replace human operators with autonomous vehicles, and the stakes are as high as they can get.     

A mining haul truck can weigh over 300 tons and stand more than four storeys tall. It can also travel at 60 kilometres per hour, which is like a house coming at you as fast as a city bus. It may seem crazy to put a computer in charge of something so big, but this is already happening in some parts of the world—and perhaps, instead of being scary, it could actually be a very good thing.

One of the biggest questions is: what if the truck makes a mistake? Won’t someone get injured or even killed? The first answer is that humans are not perfect and, as it stands, accidents already occur with human operators. In 27 years on mining projects, I myself have lost two friends to accidents with large equipment. Once, in the Democratic Republic of the Congo, my little pick-up truck was overtaken at high speed by a fully loaded Cat 793 haul truck, which went on to clip the side of another vehicle in front of us and flip it over like a toy. Thankfully, no one was killed in this accident—but it could have been a catastrophe.

The primary cause of this event was operator error. I have been part of teams investigating mining accidents on four continents, and operator error is overwhelmingly the most common cause of these incidents. (The number one reason for operator error in most of these accidents? A driver using a cellphone. Of the last six accidents I helped investigate in which operator error was the cause, cellphones were in use in all of them.)

In Chile, South Africa and several other places, autonomous haul trucks are being tried out as a potential way to address safety issues. Companies like Caterpillar, Komatsu, Hitachi and others are developing technologies and systems to try to incorporate self-driving vehicles into active mining environments. 

But the ethical decisions behind how these systems are programmed remain complicated. A driver can make a split-second decision to reduce harm in the event of, say, brake failure. An AI system can only ever do what it is programmed to do. Taking this into account, if an autonomous truck finds itself barrelling toward a bus full of people, should the truck’s AI be programmed to turn into the side of the hill, or to hit the oncoming bus? Well, that’s easy. But what if the choice is more complicated? What if it’s a choice between a bus coming from one direction and a fuel truck coming from another? What is the best course of action? 

This question—and others akin to it—has been studied by philosophers for decades and is generally referred to as “the trolley problem.” It is often framed as a series of thought experiments, in which various imagined scenarios test the notion of a general moral principle against the details, viewpoints and other moral shadings of specific situations. The ethics of what constitutes the best course of action could be different for two people, or two societies or cultures. These are profound and complicated problems, and even philosophers who spend lifetimes studying them don’t have all the answers. But, in practical scenarios wherein human-designed systems need to make very quick automated decisions about damage and potential loss of life, they are increasingly unavoidable.            
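To make this concrete for readers who code, here is a minimal sketch of how a harm-minimization rule might be written down. It is purely illustrative: the choose_action function, the estimated_harm numbers and the scenario itself are all invented for this article, and no real autonomous-haulage system works from a list this simple. The point is that someone has to decide, ahead of time, which costs go where—and that decision is exactly where the ethics live.

```python
# Toy sketch of a harm-minimization rule (illustrative only; not any
# real vendor's system). Each possible action is paired with an
# "estimated harm" number that a human designer had to choose in advance.

def choose_action(options):
    """Return the action with the lowest estimated harm cost."""
    return min(options, key=lambda option: option["estimated_harm"])


if __name__ == "__main__":
    # An invented brake-failure scenario, echoing the one described above.
    brake_failure_scenario = [
        {"action": "steer into the hillside", "estimated_harm": 1.0},
        {"action": "continue toward the bus", "estimated_harm": 50.0},
        {"action": "swerve toward the fuel truck", "estimated_harm": 30.0},
    ]
    best = choose_action(brake_failure_scenario)
    print("Chosen action:", best["action"])
```

Even in this tiny example, the hard part is not the code—it is deciding what the numbers should be, and who gets to decide them.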

This, perhaps, is where young students and scientists come into play. As society continues to develop and rely on these systems and machines, more people will be needed who have a deep understanding of philosophy, thought, law, ethics and computer science, as well as the ways and languages through which these systems intersect and become autonomous. Many universities and colleges already offer advanced study in AI and ethics, and this work will only become more important over time. The opportunities for AI to help with, and even improve, so many of the things we do seem limitless, and society will require people who can think broadly and holistically, and who can create systems and ideas that use and grow this technology in humane, productive ways.

This article originally appeared in the fourth issue of Root & STEM, Pinnguaq’s free print and online STEAM resource supporting educators in teaching digital skills.