As the pace of technological innovation has quickened, and as the pot of money to be made from breakthroughs has swelled, it’s become harder and harder for entrepreneurs, scientists and programmers to stop and ask themselves certain hard questions. First-mover advantages in technology are so vitally important that many executives, engineers and scientists worry far more about “how” than about “why” or “should”.
And that makes sense, in a way. Those professionals are paid to accomplish the goal, not necessarily to debate the ethical considerations involved.
The subject is far too dense to fully explicate here, but Artificial Intelligence rightly draws an outsized share of ethical scrutiny from academics, ethicists, technologists and futurists alike. Elon Musk, one of the paragons of our field, has more than a healthy skepticism of AI; he’s downright bearish on its implications for mankind. Mark Zuckerberg is, unsurprisingly, on the far other side of the issue (which is probably in no way affected by Facebook’s massive investment in AI…). Two titans of our industry don’t often find themselves at such opposite poles on a given issue, which should at least give you an idea of the intellectual stakes at hand. And because AI has such potential, scientific, monetary and humanitarian alike, it demands a huge portion of the intellectual capital of some of earth’s brightest minds.
And it’s right for leaders in the technology industry to question AI’s place in our present and future. There’s no question machine learning and neural networks can immeasurably help humans. From improving doctors’ diagnoses to modeling climate change scenarios to powering win/win negotiation software and human speech recognition, there’s no limit to how much AI can improve our daily lives. Truly. That’s why so many companies are eager to dive in head first (well, that and the untold riches to be made from having the best AI service in the world).
But there are very real drawbacks to letting AI run amok. We’ve already seen AI negotiation chatbots develop their own shorthand language, one humans can’t understand, to communicate with one another. Should we allow these negotiation simulations to go on unchecked, so long as they produce optimal outcomes for all? Or do we risk losing control of the systems we’ve built once those systems start speaking in languages we can’t follow?
As with anything, though, the side effects of disruptive technology often manifest in unintended ways. For instance, if AI is left solely in the hands of capitalistic enterprises, is it possible that the wealth gap widens further because companies design AI to make money and, as such, it naturally appeals to and favors the already wealthy? AI could just as easily help disenfranchised individuals as well as or better than it helps the rich, but there’s no telling this early in the technology’s development.
That segues into another unforeseen consequence of the digital age. We all know social networks like Facebook and Twitter are great for keeping us connected, giving us a platform for speech and a way to find like-minded communities; but they’re just as responsible for rampant loneliness and a diminished ability to navigate real-world situations. I’m not denigrating social media, per se, but it’s worth noting that social media’s ubiquity has had real and widely felt consequences.
The same is true for Moore’s Law, the famous observation about chip manufacturing and computing: “Since the 1970s, Intel has released chips that fit twice as many transistors into the same space roughly every two years, aiming to follow an exponential curve named after Gordon Moore, one of the company’s cofounders,” Tom Simonite wrote in the MIT Technology Review.
That pace of chip innovation has made it possible for today’s iPhones to be as powerful as the fastest supercomputers of the late ’80s and early ’90s. That’s amazing. It’s laudable. Consistently hitting that benchmark has enabled some of the greatest inventions and movements in the history of humanity.
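To get a rough feel for how quickly a two-year doubling compounds, here’s a minimal sketch in Python. The specific years and the shorthand of treating transistor count as a proxy for raw capability are illustrative assumptions, not exact benchmarks:

```python
# Minimal sketch of Moore's Law-style compounding (illustrative, not exact benchmarks).
# Assumption: raw chip capability roughly doubles every two years.

def doublings(start_year: int, end_year: int, period: float = 2.0) -> float:
    """Number of doubling periods between two years."""
    return (end_year - start_year) / period

# Roughly the span between early-'90s supercomputers and today's phones.
growth = 2 ** doublings(1991, 2017)
print(f"~{doublings(1991, 2017):.0f} doublings -> roughly {growth:,.0f}x more transistors")
```

Thirteen or so doublings works out to a factor of several thousand, which is the back-of-the-envelope reason a phone in your pocket can rival a decades-old supercomputer.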
But one of those nagging, unforeseen side effects? Energy consumption. While Moore’s Law delivers chips that are smaller and more power-efficient with each generation, it also makes powerful mobile computers so cheap and capable that we expect all of our electronics to be chock full of processors working at the speeds we demand.
“There is a problem in the offing,” according to the MIT Technology Review. “As powerful computers become more widespread, the amount of power they consume will increase. If Moore’s exponential law continues, electronic devices will consume more than half the planet’s energy budget within a couple of decades.”
That’s not to say materials science breakthroughs that can combat this growing energy consumption crisis aren’t on the horizon. But it does mean that rapid technological innovation isn’t without its pitfalls.
The same is true for AI. It could be just as important to the future as Moore’s Law has been to the past and present of computing, if not more so. It’s also worth noting, though, that we do not yet understand all the possible side effects of fully unleashing this technology, much as we couldn’t predict that Moore’s Law would help lead toward an energy shortage.
We’re not arguing against AI; if anything, we encourage its advancement. But it’s worth taking some time and spending some mental capital to consider what it could mean for our future and how it might negatively impact our lives if left unchecked. And once we have an ethical framework for evaluating these new technologies, we’ll be in a far stronger place as a society to employ breakthroughs efficiently, equitably and, when necessary, judiciously.
Jeff Francis is a veteran entrepreneur and founder of Dallas-based digital product studio ENO8. Jeff founded ENO8 to empower companies of all sizes to design, develop and deliver innovative, impactful digital products. With more than 18 years working with early-stage startups, Jeff has a passion for creating and growing new businesses from the ground up, and has honed a unique ability to assist companies with aligning their technology product initiatives with real business outcomes.