Artificial intelligence is one of the thorniest issues humanity has ever faced. While it presents untold potential for improving our lives, helping us be better at our jobs and think about complex concepts in novel ways, it also presents legions of terrifying drawbacks. What if instead of helping us do our jobs better, A.I. just takes our jobs outright? What if the systemic control we grant A.I. systems is hacked and used nefariously? Or the system learns beyond what’s intended and goes haywire?
What if the A.I. is racist? Or sexist? Or classist? I mean, two of our brightest tech luminaries find themselves on opposite sides of this debate, with the seeming fate of humanity in the balance (while that might sound a bit overwrought, it’s not far from reality in the outlier scenarios).
Most important, though, is that we simply don’t know enough about artificial intelligence and its implications even to frame the ethical debates it presents, much less work through them all. It’s tough to say which ethical concerns we need to focus on when we don’t yet know what they’ll be.
I could go on for days about what I think the biggest ethical questions will be within the now and next of artificial intelligence (which I very well might do in future posts), but that’s more an exercise in hypotheticals because so much of that debate is unknown and unknowable until we face it. There is, however, a core issue we need to be addressing now, because it not only informs the state of both technology and equality in our world this very second, it also underpins the way artificial intelligence works (at least as presently constructed). And it explains why most of our current software is biased.
Software, predictive systems, recommendation engines, content aggregators, neural networks, machine learning, and many of the most valuable companies and tools rely on the same thing: algorithms.
Algorithms are simple concepts at their core: they’re a set or sets of rules dictating how a person/platform/system/machine carries out a calculation of some kind. You input something into the algorithm-based system, and that system will follow the rules established by its algorithm to answer your query.
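To make that concrete, here is a toy illustration of an algorithm as a set of rules (the scenario and numbers are hypothetical, invented for this sketch):

```python
# A toy "algorithm": a fixed set of rules mapping an input to an output.
# Hypothetical example -- not any real platform's logic.

def shipping_cost(weight_kg: float, express: bool) -> float:
    """Rule set: a base fee, a per-kilogram rate, and an express surcharge."""
    cost = 5.00 + 1.50 * weight_kg   # rule 1: base fee plus per-kg rate
    if express:
        cost += 10.00                # rule 2: flat surcharge for express
    return cost

print(shipping_cost(2.0, express=False))  # -> 8.0
print(shipping_cost(2.0, express=True))   # -> 18.0
```

The rules are arbitrary choices made by whoever wrote them, which is exactly the point the rest of this post turns on.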
The biggest, most successful, most important, richest, _______est companies in the world run on algorithms. Facebook’s News Feed runs on an algorithm. Google’s search function, arguably the most valuable single system in the world, runs on an algorithm. Your Netflix and Spotify recommendations come from algorithms. Neural networks and artificial intelligence rely on an algorithm at their heart — we feed the machine a metric crap-ton of data, and based on its governing algorithm, the machine/system learns to separate the signal from the noise of that data, identifies trends and incorporates that new knowledge into future calculations.
So why do I say all our software, predictive systems, etc. are biased? Because algorithms, at least as presently understood and written, are ultimately the product of people. And all of us are biased in some form or fashion.
There may come a day when machines can write their own algorithms whole cloth, but that seems a ways off. Even if a machine can write its own algorithms, the seed system was built on an algorithm written by a human, meaning that algorithm carried with it the biases of its author/architect. So no matter what, because the system begins in a place of bias, every derivative or advancement of that system carries that inherent bias with it to some degree.
If the person writing the recommendation algorithm for Spotify weights the engine in a specific direction for any reason, that bias tips the scales in favor of some artist or genre. Facebook, for all the good it’s done the world, now controls so much of the flow of news information via News Feed that every algorithmic decision its engineers make tips the balance of power in the information age.
If those engineers choose to weight a post from Russia’s propaganda news agencies the same as CNN, NBC News or The New York Times, that bias reverberates throughout the system (which, in this case, is basically the entire Western world). If their algorithm is less likely to show you posts from women (I’m not saying it does, I’m just spitballing), that’s sexist. If you’re less likely to see a post from an African American individual because you live in a rural area, that’s both racist and a geographical bias.
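A minimal sketch of how a single weight can reorder what everyone sees. The sources, weights, and scoring function here are all hypothetical, not any real platform's feed logic:

```python
# Hypothetical feed-ranking sketch: one hand-tuned weight per source
# is enough to change which post ranks first for every user.

SOURCE_WEIGHT = {          # chosen by an engineer -- this choice IS the bias
    "wire_service": 1.0,
    "state_media": 1.0,    # weighting these two equally is itself a decision
    "local_blog": 0.6,
}

def score(post: dict) -> float:
    """Rank by engagement scaled by the hand-picked source weight."""
    return post["engagement"] * SOURCE_WEIGHT[post["source"]]

posts = [
    {"id": "a", "source": "local_blog", "engagement": 100},
    {"id": "b", "source": "state_media", "engagement": 70},
]

ranked = sorted(posts, key=score, reverse=True)
print([p["id"] for p in ranked])  # local_blog: 60, state_media: 70 -> ['b', 'a']
```

The more-engaged post loses solely because of the weight table, and nothing in the output reveals that the table exists.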
However we write the algorithms ruling our digital lives, the implicit biases of the people writing that source code echo throughout the system. And as artificial intelligence starts to build on the algorithmic foundations we’ve laid, those biases become more baked in. As such, we have to start having more meaningful, deeper conversations about the ethics underpinning the creation and modulation of those algorithms.
We have to get more women, minorities and geographical diversity into the rooms where those algorithms are written and refined. We have to start drawing up legal and moral frameworks the industry can agree to adhere to in the creation of A.I. systems and the algorithms powering them. But most of all, we have to admit to ourselves there’s a potential problem here before we can start to combat it holistically. And make no mistake, this is an issue we need to take beyond seriously.