"Why Iteration is not Innovation"

Watch our recorded WEBINAR!

Why transparency is truly mission-critical to A.I. advancement

Artificial Intelligence (A.I.) already does some pretty amazing things. We’re big proponents of, and investors in, this nascent field, dedicating both our money and our resources to it. We think it can help solve some of the largest problems facing humanity in the 21st century. But that promise comes with a caveat.

These advancements simply must come with an accompanying level of transparency. Why, you may ask? Because it’s the only way we can trust the underlying technology enough to unleash its full potential.

In many of the examples I previously linked to, we know what the A.I. is doing. In so many instances, though, we have very little understanding of how the A.I. got there. We have a rudimentary understanding of what’s going on, given that humans designed the systems. But most of the time, we’re essentially trying to approximate what we think happens in the neurons of our brains onto silicon chips.

Sure, we program the algorithms that initially dictate a system’s behavior, but the real promise of A.I. is systems that can learn from, reinforce and improve themselves as time goes on. The end goal would be A.I. systems that can create iterative versions of themselves, or entirely new A.I. systems, without requiring human input at all, becoming faster and more efficient both at creating each new iteration and in how those systems themselves operate.

But if we can’t figure out how those new systems came to be, or why the machines made the decisions they did, how can we trust their output? In suboptimal scenarios, that can lead to real problems. As Fast Company describes:

“If neural networks are participating in criminal justice, financial markets, national security, healthcare, and government, we need to understand why they make decisions, not just if they got them right or wrong. Transparency and accountability–basic tenets of democracy–are lost when we don’t.”

Google Brain has attempted to pierce that veil of ignorance in a major way. By introducing a new concept in A.I., interpretability, the team is attempting to get into the weeds of A.I. to ensure the process matters as much as the result. From Fast Company again, “[Google’s] paper proposes an interface that lets researchers peer into their neural networks, like looking through a literal window. Eventually, they posit, these interfaces could help researchers shape the actual thought process going on inside these digital brains.”
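To make the idea of “peering into” a neural network a little more concrete, here is a minimal sketch of one widely used interpretability technique: a gradient-based saliency map, which scores how strongly each input feature influenced a model’s prediction. This is an illustrative assumption on our part, not Google Brain’s actual tooling; the tiny untrained model and random input exist only to show the mechanics.

# Minimal sketch of gradient-based saliency (illustrative only, not
# Google Brain's interface). A tiny untrained classifier stands in for a
# real model; the goal is just to ask the network "which inputs drove
# this decision?"
import torch
import torch.nn as nn

# Toy classifier: 10 input features -> 3 classes.
model = nn.Sequential(nn.Linear(10, 32), nn.ReLU(), nn.Linear(32, 3))
model.eval()

# One example input; requires_grad lets us backpropagate to the features.
x = torch.randn(1, 10, requires_grad=True)
logits = model(x)
pred = logits.argmax(dim=1).item()

# Backpropagate the predicted class score to the input. The magnitude of
# each input gradient is a rough measure of that feature's influence.
logits[0, pred].backward()
saliency = x.grad.abs().squeeze()

print(f"predicted class: {pred}")
for i, score in enumerate(saliency.tolist()):
    print(f"feature {i}: influence {score:.4f}")

Even this crude view gives a human reviewer something to interrogate. The interfaces Google describes go much further, but the underlying goal is the same: make the model’s reasoning inspectable rather than opaque.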

This need for transparency isn’t felt by Google alone, though. In an unrelated Fast Company article extolling the centrality of design to the modern enterprise, Phil Gilbert, general manager of design at IBM, “points to transparency as one of the three guiding principles for IBM’s work in AI. That means that designers must be responsible for giving people information about the systems they are interacting with. It also translates into the need for constant seeking and listening to diverse points of view from users as well as other IBM teams.”

Transparency is on the mind of the biggest players in A.I., and for good reason. The more we can understand how A.I. thinks, the better off we’ll all be. It’s comforting in a way that the leaders in the field already have interpretability so top of mind. We hope those same leaders continue the charge toward transparency in A.I. so we get both better and fairer A.I. now and into the future.



Jeff Francis

Jeff Francis is a veteran entrepreneur and founder of Dallas-based digital product studio ENO8. Jeff founded ENO8 to empower companies of all sizes to design, develop and deliver innovative, impactful digital products. With more than 18 years working with early-stage startups, Jeff has a passion for creating and growing new businesses from the ground up, and has honed a unique ability to assist companies with aligning their technology product initiatives with real business outcomes.
