“Give me a lever long enough and a fulcrum on which to place it, and I shall move the world” — Archimedes said that (or something close to it, anyway — exact quotes are hard to come by when they’re more than two thousand years old). That’s essentially what technology has been marching toward for the past 50-60 years: building a lever long enough, and a fulcrum on which to place it, so that man can move the world (or at least elevate the species). Artificial intelligence (and not the sci-fi HAL 9000 version) represents one of the greatest advancements toward that terrestrial leverage equation in man’s history. But to truly judge how successful we are in moving toward that end, we have to look to the future for hints at AI efficacy.
It’s been covered quite a bit by now, but AI really isn’t what most people think it is. Through misuse and layperson misunderstanding, the term has come to represent, or stand in for, a host of interrelated technologies and concepts in modern computing. Most people envision what we’d call ‘general intelligence’ AI, which is the HAL 9000 version — a computer system that can reason, think, rationalize and improvise, then make decisions as efficiently and judiciously as humans can (or, in the sci-fi versions, even better than we can). That’s not really what AI is as presently constructed.
The Verge released a phenomenal piece on the current state of AI, and I highly recommend you check it out. But basically, it starts out with the contention that AI as currently constructed really refers to machine learning. And at the heart of any machine learning system is data analysis.
Lots and lots of data analysis.
We feed neural networks boatloads of relevant data and engineer the system’s algorithms to look for patterns amongst the noise. The better the neural network is at identifying the signal within the noise, the better we reckon the AI is performing. I would contend, however, that recognizing the trend isn’t the surest measure of AI efficacy; the system’s ability to predict the future is.
The faster or more efficient an AI system is at crunching data, finding the signal, identifying the trends and making decisions, the better we think the system has performed. And strictly speaking, that is a fair judge of quality, as far as it goes. The next level for neural networks and machine learning algorithms, though, is correctly predicting future occurrences based on data observed in the past. The better the AI system, the more likely that system will be to correctly predict a noteworthy outcome.
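That distinction — fitting the past versus forecasting the future — is easy to see in miniature. Here is a toy sketch (nothing to do with DeepMind’s actual methods, and using ordinary least squares rather than a neural network) that fits a linear model to noisy “past” observations and then judges it on held-out “future” points it never saw:

```python
import random

random.seed(42)

# Synthetic signal: y = 2x + 5, plus noise the model must see through.
xs = list(range(100))
ys = [2 * x + 5 + random.gauss(0, 4) for x in xs]

# Train on the past (first 80 points); hold out the "future" (last 20).
train_x, train_y = xs[:80], ys[:80]
test_x, test_y = xs[80:], ys[80:]

# Ordinary least squares by hand: slope and intercept.
n = len(train_x)
mean_x = sum(train_x) / n
mean_y = sum(train_y) / n
slope = sum((x - mean_x) * (y - mean_y)
            for x, y in zip(train_x, train_y)) \
        / sum((x - mean_x) ** 2 for x in train_x)
intercept = mean_y - slope * mean_x

# Score the model on data it never saw: mean absolute prediction error.
preds = [slope * x + intercept for x in test_x]
mae = sum(abs(p - y) for p, y in zip(preds, test_y)) / len(test_y)

print(f"learned y ~ {slope:.2f}x + {intercept:.2f}")
print(f"mean absolute error on unseen 'future' points: {mae:.2f}")
```

The point of the held-out set is exactly the point of the paragraph above: a model that merely memorizes the trend in its training data gets no credit here; it is scored only on outcomes it had to predict.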
Google’s DeepMind is doing just that — after mastering chess and Go, the experimental AI unit turned its attention to a particularly niche — and staggeringly important — task: protein folding. Per The Guardian:
DeepMind’s latest AI program, AlphaFold, had beaten all-comers at a particularly fiendish task: predicting the 3D shapes of proteins, the fundamental molecules of life.
The arcane nature of “protein folding”, a mind-boggling form of molecular origami, is rarely discussed outside scientific circles, but it is a problem of profound importance. The machinery of biology is built from proteins and it is a protein’s shape that defines its function. Understand how proteins fold up and researchers could usher in a new era of scientific and medical progress.
That’s how you level up when it comes to AI — turning past-centric data-crunching operations into predictive models that get better and better, and closer and closer to the realized outcomes. DeepMind is leading the way on this front, and we can’t wait to see what the team (and its competitors) cook up in the years to come.