With all the ways artificial intelligence (A.I.) has bested humanity in recent years (I’m looking at you, AlphaGo Zero), it’s easy to get caught up in what machines do better than us, especially when job displacement is a real and worrying concern for many workers across our interconnected economies. But focusing instead on what humans still do better than machines can provide useful insight into the future of A.I. research and the advances needed to move the technology forward. And interestingly enough, one of those advances is completely counterintuitive: how can we get machines to forget better?
In the grand scheme of things, at least on the grand scale extending to Earth and our observable solar system, humans really are special. As a species, we’ve achieved things never before witnessed or documented in terrestrial history (not to say there weren’t or aren’t advanced organisms elsewhere in the universe, just that we top everything we’ve been able to observe). Farming and agriculture? Yeah, that’s pretty great. Metallurgy? We’ll definitely take some of that. Mathematics, art, music and philosophy? We don’t see any other animals on that plane of intelligence or creativity. All things considered, we’ve done some pretty dang cool stuff in our comparatively brief time in existence.
But what actually makes humans special to the point we were able to achieve that?
In a lot of ways, it’s pattern recognition. Some would argue it’s the key to higher cognition: our “species’ penchant for pattern-recognition is essential to consciousness and our entire experience of life,” according to Maria Popova of Brain Pickings, summarizing Cambridge neuroscientist Daniel Bor. Neil deGrasse Tyson said as much, albeit in service of a slightly different conclusion, on Cosmos: A Spacetime Odyssey. Either way, the bottom line is that humans are particularly good at pattern recognition, and that specific predisposition is responsible for much of what separates humans from everything preceding us.
But it’s another, less-discussed skill that might be nearly as important in our development up until this point, and just might hold some clues to a newer, more useful, more powerful A.I.
Summed up eloquently by MIT’s Technology Review, “humans have the extraordinary ability to constantly update their memories with the most important knowledge while overwriting information that is no longer useful.” Put a different way, we’re really good at prioritizing information, skills and behaviors: the more we use (or perceive we will need to use) a particular skill or piece of information, the deeper it’s ingrained in our memories. The less useful? The more likely your brain will simply discard it in favor of something it deems to be more useful or advantageous.
Machines can’t really do that on their own yet. And it’s a pretty important skill for survival, as MIT Technology Review continues: “The world provides a never-ending source of data, much of which is irrelevant to the tricky business of survival, and most of which is impossible to store in a limited memory. So humans and other creatures have evolved ways to retain important skills while forgetting irrelevant ones.”
For machines, once the memory is full, every new piece of information, skill or behavior simply replaces the oldest one, with little care or consideration for the relative importance of the discarded item.
If you’ll permit me a longer exposition lifted from MIT, they do a really good job of explaining how the process of teaching a machine to forget might work:
Today that looks set to change thanks to the work of Rahaf Aljundi and pals at the University of Leuven in Belgium and at Facebook AI Research. These guys have shown that the approach biological systems use to learn, and to forget, can work with artificial neural networks too.
The key is a process known as Hebbian learning, first proposed in the 1940s by the Canadian psychologist Donald Hebb to explain the way brains learn via synaptic plasticity. Hebb’s theory can be famously summarized as “Cells that fire together wire together.”
In other words, the connections between neurons grow stronger if they fire together, and these connections are therefore more difficult to break. This is how we learn—repeated synchronized firing of neurons makes the connections between them stronger and harder to overwrite.
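The Hebbian principle just described can be sketched in a few lines of code. This is a toy illustration, not the researchers’ actual method: the learning rate and the tiny activation vectors are made-up values, and the rule shown is the textbook form (weight change proportional to the product of pre- and postsynaptic activity).

```python
import numpy as np

def hebbian_update(weights, pre, post, eta=0.1):
    """Hebb's rule: strengthen weight w[i, j] in proportion to the
    co-activation of presynaptic unit j and postsynaptic unit i."""
    return weights + eta * np.outer(post, pre)

pre = np.array([1.0, 0.0, 1.0])   # presynaptic activity (toy values)
post = np.array([0.0, 1.0])       # postsynaptic activity (toy values)
w = np.zeros((2, 3))              # start with no connections

w = hebbian_update(w, pre, post)
# Only connections where both units fired get stronger; a weight that
# is repeatedly reinforced this way becomes harder to overwrite.
```

Repeated application of this update is what makes frequently co-active connections dominate, which is the biological intuition the researchers borrow.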
So Aljundi and co have developed a way for artificial neural networks to behave in the same way. They do this by measuring the outputs from a neural network and monitoring how sensitive they are to changes in the connections within the network.
This gives them a sense of which network parameters are most important and should therefore be preserved. “When learning a new task, changes to important parameters are penalized,” say the team. They say the resulting network has “memory aware synapses.”
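The two steps the quote describes, measuring how sensitive the output is to each parameter, then penalizing changes to the important ones, can be sketched as follows. This is a deliberately tiny illustration under my own assumptions (a linear model, finite-difference sensitivity, made-up numbers), not Aljundi and co’s actual implementation.

```python
import numpy as np

def importance(params, x, eps=1e-4):
    """Estimate each parameter's importance as how much the model's
    output changes when that parameter is perturbed slightly."""
    base = params @ x
    omega = np.zeros_like(params)
    for i in range(len(params)):
        nudged = params.copy()
        nudged[i] += eps
        omega[i] = abs((nudged @ x) - base) / eps
    return omega

def penalized_loss(new_params, old_params, omega, task_loss, lam=1.0):
    """New-task loss plus a penalty on moving important parameters:
    the network can still adapt, but mostly via unimportant weights."""
    return task_loss + lam * np.sum(omega * (new_params - old_params) ** 2)

old = np.array([0.5, 2.0, 0.0])   # parameters learned on the old task
x = np.array([1.0, 3.0, 0.0])     # a representative old-task input
omega = importance(old, x)        # larger for parameters that matter more
```

Minimizing `penalized_loss` during new-task training nudges learning toward the parameters with small `omega`, which is the sense in which the synapses become “memory aware.”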
They’re not all the way there yet (much of deep learning and A.I. research and development is still pretty nascent at this stage). But if these researchers or their compatriots get this one right, it could make A.I. systems far more flexible and adaptable — absolute necessities if A.I. is to have a full and positive impact on our real lives in variable scenarios.
Jeff Francis is a veteran entrepreneur and co-founder of Dallas-based digital product studio ENO8. Jeff and his business partner, Rishi Khanna, created ENO8 to empower companies of all sizes to design, develop and deliver innovative, impactful digital products. With more than 18 years working with early-stage startups, Jeff has a passion for creating and growing new businesses from the ground up, and has honed a unique ability to assist companies with aligning their technology product initiatives with real business outcomes.