Many of the world’s largest and most successful tech companies are vying for a coveted title in today’s turbocharged tech world — “synonymous with Artificial Intelligence.” Much as Xerox is known for copies, or Kleenex is for tissue paper, brands want to be synonymous with a product or industry because it means they’re the undisputed expert; that they dominate their respective field.
Now, no one is going to rename A.I. “Amazon” or “Google”, but those companies do want to be on the cutting edge of A.I. developments so much so that when people think of A.I. advancements and inventions, those corporate names and logos flash to the fore of everyone’s collective imaginations.
Google is making yet another strong play for that title outright.
Apple may have Siri and ARKit, Microsoft may be making inroads of its own, and Watson powers IBM's future, but it seems like Google keeps grabbing all the A.I. headlines recently. Facebook and Amazon are both pressing into the breach aggressively, but Google seems to be outpacing them, at least for now.
The newest development is Google’s alternative architecture for neural networks, which could have massive implications for machine learning and A.I. moving forward.
One of the core limiting factors for any neural network or deep learning system is the source data: the more source data you have, the better you can train that machine to recognize patterns and surface the answers you seek. The quality of the answers, and consequently of the network itself, is directly related to the quality of the input data. So Google's top A.I. team came up with an approach intended to radically decrease the amount of data the network requires to produce solid results: capsule networks.
From Wired: “Capsule networks aim to remedy a weakness of today’s machine-learning systems that limits their effectiveness. Image-recognition software in use today by Google and others needs a large number of example photos to learn to reliably recognize objects in all kinds of situations. That’s because the software isn’t very good at generalizing what it learns to new scenarios, for example understanding that an object is the same when seen from a new viewpoint.”
The idea behind these capsules is to break down the problem, in this case an image, into component parts such that each capsule focuses on a specific part. Then, working in tandem, the capsules pass the patterns they recognize up to higher-level capsules until the system determines it has reached a critical mass of agreement to declare the pattern identified. Per MIT Technology Review:
Their approach uses small groups of neurons, collectively known as capsules, which are organized into layers to identify things in video or images. When several capsules in one layer agree on having detected something, they activate a capsule at a higher level—and so on, until the network is able to make a judgment about what it sees. Each of those capsules is designed to detect a specific feature in an image in such a way that it can recognize them in different scenarios, like from varying angles.
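The "agreement" mechanism described above can be sketched in code. The following is a minimal, illustrative NumPy version of the routing-by-agreement idea from the capsule-networks paper (Sabour, Frosst, and Hinton, 2017), not Google's actual implementation: the array shapes, iteration count, and toy data are assumptions chosen for clarity.

```python
import numpy as np

def squash(s, axis=-1, eps=1e-8):
    # Non-linearity from the capsules paper: shrinks a vector's length
    # into (0, 1) while preserving its direction.
    norm_sq = np.sum(s ** 2, axis=axis, keepdims=True)
    return (norm_sq / (1.0 + norm_sq)) * s / np.sqrt(norm_sq + eps)

def route_by_agreement(u_hat, n_iters=3):
    """Dynamic routing between two capsule layers.

    u_hat: (n_lower, n_upper, dim) prediction vectors -- each lower
    capsule's "vote" for the pose of each upper capsule.
    Returns (n_upper, dim) output vectors for the upper capsules.
    """
    n_lower, n_upper, _ = u_hat.shape
    b = np.zeros((n_lower, n_upper))  # routing logits, start neutral
    for _ in range(n_iters):
        # Each lower capsule spreads its output over the upper
        # capsules via a softmax over its routing logits.
        c = np.exp(b) / np.exp(b).sum(axis=1, keepdims=True)
        s = np.einsum('ij,ijd->jd', c, u_hat)  # weighted sum of votes
        v = squash(s)                          # upper-capsule outputs
        # Votes that agree with the consensus (large dot product)
        # get routed more strongly on the next iteration.
        b = b + np.einsum('ijd,jd->ij', u_hat, v)
    return v

# Toy demo: 8 lower capsules voting on 3 upper capsules with 4-D poses.
rng = np.random.default_rng(0)
u_hat = rng.standard_normal((8, 3, 4))
v = route_by_agreement(u_hat)
print(v.shape)  # (3, 4)
```

The key design choice is that activation is driven by agreement between vectors rather than by individual neuron magnitudes, which is what lets a capsule recognize the same feature from different viewpoints.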
Neural networks are designed to operate, more or less, like a human brain. Mimicking the way neurons interact in this capsule format could very well unlock untold potential for neural networks and machine learning. It could also stall out entirely and yield little to no advancement for A.I. Either way, though, Google forges onward toward synonymy with A.I.