While we warded against undue hysteria about artificial intelligence coming for all our jobs last week, we swing wildly to the other end of the spectrum this week. That’s because this week, we’re delving into what happens when you pour the proverbial lighter fluid onto an already hot-burning fire. Namely, what happens when you take the process of generating neural networks and turbocharge it by aiming the U.S.’s fastest supercomputer at the problem?
We get a step closer to the singularity, that’s what.
Now, will we ever actually get to a point where super-intelligent machines create ever-smarter versions of themselves until we can no longer control the cycle? That’s pretty tough to say, but if I were a gambling man, I’d wager pretty heavily against it, at least in our lifetimes.
That hasn’t stopped the top scientists in the land from taking a giant step in that direction, though (albeit in a probably far less nefarious way than you might be thinking or The Matrix might have us believe).
As we talked about last week, the vast majority of “A.I.” you hear about today is better described as adaptive machine learning — the machines aren’t actually performing cognition or arriving at a cognitive thought, but rather recognizing patterns in immense amounts of data and drawing conclusions or inferences from that data based on what their preprogrammed algorithms tell them to do. Machines have gotten very good at this. Even the best systems take a long time to set up, though, because even the smartest, most cutting-edge data scientists have to write both the algorithms that dictate the system’s behavior and the software that governs how the system improves itself.
You can train a massive array of GPUs on a specific data set, but if you can’t teach the system how to progress through successive trend recognition tests and get better at choosing the optimal path through the neural network, that network will top out at a certain point — it won’t ever get faster or smarter unless the system can recognize its mistakes, spot new efficiencies it can exploit, and make those adjustments accordingly.
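To make that feedback loop concrete, here is a toy sketch (not Google’s or Oak Ridge’s code, just a hypothetical one-weight “network” fit by gradient descent): the system measures its own error on the data and adjusts to shrink it. Take away the adjustment step, and its accuracy is frozen wherever it started.

```python
# Toy illustration of the "recognize mistakes, make adjustments" loop:
# a single-weight model fitting y ~= weight * x by gradient descent.

def train(data, weight=0.0, lr=0.05, steps=200):
    """Fit y ~= weight * x on (x, y) pairs by minimizing squared error."""
    for _ in range(steps):
        # 1. Recognize mistakes: measure the error gradient on the data.
        grad = sum(2 * (weight * x - y) * x for x, y in data) / len(data)
        # 2. Make adjustments: nudge the weight against the gradient.
        weight -= lr * grad
    return weight

data = [(1.0, 2.0), (2.0, 4.1), (3.0, 5.9)]  # roughly y = 2x
w = train(data)  # converges near 2
```

Real networks have millions of weights instead of one, but the shape of the loop — score the current state, then adjust to do better — is the same.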
But, in the current mode of building these neural networks and adaptive machine learners, the data scientists have to write those improvement algorithms as well as the initial behavior and choice algorithms. That introduces human error, the necessity for cognitive downtime (aka sleep) as well as a hard cap on cognition (raw intellectual horsepower is finite in any given human). But what if the system could not only identify trends, but also improve that recognition without the need for secondary algorithms to govern it?
What if the system could simply do this, on its own, from the jump?
Google took a step in that direction last year, announcing it was developing automated machine learning (AutoML): algorithms that can do some of the heavy lifting by identifying the right neural networks for a specific job. As reported in Singularity Hub, “[t]he Google researchers created a machine learning system that used reinforcement learning—the trial and error approach at the heart of many of Google’s most notable AI exploits—to figure out the best architectures to solve language and image recognition tasks.”
Different tasks require different responses from a neural network — the architecture for interacting with and responding to natural language prompts is completely different from the architecture governing image recognition. So, if AutoML can choose the right architecture for a given computing task, it can dramatically improve the performance and flexibility of neural networks.
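As a rough illustration of the idea (with heavy hedging: AutoML uses reinforcement learning, while this hypothetical sketch substitutes plain random search, and the scoring function is a made-up proxy for validation accuracy), an architecture search loop just proposes candidate architectures, evaluates each on the task, and keeps the best:

```python
import random

random.seed(0)  # make the sketch deterministic

def score(arch, task):
    """Hypothetical stand-in for validation accuracy of `arch` on `task`.

    Pretend image tasks reward convolutional layers and language tasks
    reward recurrent layers; real search would train and validate.
    """
    rewards = {"image": "conv", "language": "recurrent"}
    return sum(1 for layer in arch if layer == rewards[task]) / len(arch)

def search(task, trials=100):
    """Propose random 4-layer architectures; keep the highest scoring."""
    best_arch, best_score = None, -1.0
    for _ in range(trials):
        arch = [random.choice(["conv", "recurrent", "dense"])
                for _ in range(4)]
        s = score(arch, task)
        if s > best_score:
            best_arch, best_score = arch, s
    return best_arch

best = search("image")  # tends to be conv-heavy for an image task
```

Swap the toy score for real validation accuracy and the random proposals for a learned controller, and you have the rough shape of what Google described.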
But when you add Titan to the mix? Things get supercharged.
We outlined the race for supercomputer supremacy in a previous post, but suffice it to say America hasn’t been on top in a while. That will change in the not-too-distant future, but it also means our current top dog, Titan, is nothing to scoff at. Residing at the Oak Ridge National Laboratory, it’s still pretty beastly. And it’s much faster than any of the arrays Google was using to test AutoML. So what would happen if you taught a supercomputer how to build a neural network?
You get neural networks “as good if not better than any developed by a human in less than a day”, according to Singularity Hub. You read that correctly: networks that rival the best humans have designed, built in under a day.
The team at Google reportedly had access to 800 GPUs when building and testing its AutoML system. Titan has 18,000. So, yeah. It was a little faster at the prescribed task.
The scientists aren’t interested in custom neural networks that do just one thing, like identify a “cat” in a photo. They want to train networks on hypercomplex datasets: weather modeling, particle physics results from the Large Hadron Collider, or image analysis of monstrous astronomical Hubble photos. What each problem requires varies, but the massive computing power prerequisite remains.
What does this look like in practice? Again from Singularity Hub:
One application of the technology involved a particle physics experiment at the Fermi National Accelerator Laboratory. …The team wanted the help of an AI system that could analyze and classify Fermilab’s detector data. MENNDL evaluated 500,000 neural networks in 24 hours. Its final solution proved superior to custom models developed by human scientists.
In another case involving a collaboration with St. Jude Children’s Research Hospital in Memphis, MENNDL improved the error rate of a human-designed algorithm for identifying mitochondria inside 3D electron microscopy images of brain tissue by 30 percent.
“We are able to do better than humans in a fraction of the time at designing networks for these sort of very different datasets that we’re interested in,” Young says.
What makes MENNDL particularly adept is its ability to define the best or most optimal hyperparameters—the key variables—to tackle a particular dataset.
“You don’t always need a big, huge deep network. Sometimes you just need a small network with the right hyperparameters,” Young says.
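Young’s point can be illustrated in miniature with a hypothetical grid search (a far simpler cousin of what MENNDL actually does): score a handful of hyperparameter settings, here learning rates for a one-weight model, and keep the best. Too small a rate and the model barely learns; too large and it diverges.

```python
# Hypothetical hyperparameter search: the same tiny model succeeds or
# fails depending on its learning rate, so we score each candidate rate
# by the error it leaves behind and keep the best one.

def fit_error(lr, data, steps=50):
    """Train y ~= w * x with learning rate `lr`; return final mean error."""
    w = 0.0
    for _ in range(steps):
        grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
        w -= lr * grad
    return sum((w * x - y) ** 2 for x, y in data) / len(data)

data = [(1.0, 2.0), (2.0, 3.9), (3.0, 6.1)]  # roughly y = 2x
candidates = [0.001, 0.01, 0.05, 0.3]  # 0.001 barely learns; 0.3 diverges
best_lr = min(candidates, key=lambda lr: fit_error(lr, data))
```

MENNDL searches a vastly larger space of network structures and hyperparameters across thousands of GPUs, but the underlying move is the same: evaluate settings, keep what works.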
The ability to make the decision engine itself better is the backbone of progressive A.I. The only way we get better and more useful A.I. is if it can learn and improve itself. Some of our top scientists may have just pushed us immensely in that direction.
And when Summit comes online? Watch yourself.