AI is coming, but how close are we? In this post I talk about where we are with AI and link to several interviews with some people working in the field.
The promise of artificial intelligence is endless. Even Elon Musk is worried that it will take over the world.
We’re pumped for AI and what it can do. But for now, its capabilities are fairly limited. A 2-year-old child is well beyond most AI capabilities.
Can you imagine trying to train a computer or robot to walk into a room, see a pile of blocks, and know how to build a pyramid out of the blue ones? Wowsers. That’s nothing for a 2-year-old. For a robot with AI, it’s state of the art, and many researchers are still working on it.
I’ve interviewed a number of robotic professionals and researchers on our podcast: www.flyoverlabs.io.
Robotics involves both AI and physical manipulation, so it’s much harder than working with AI and data alone.
Let’s talk about AI and data. Probably the area where AI has had the most impact is computer vision. That problem is a natural fit for neural networks, because each pixel can be fed directly into the network as an input. Computer vision capabilities around static images are pretty amazing.
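To make the pixels-as-inputs idea concrete, here’s a toy sketch of a single artificial neuron classifying tiny made-up 2x2 grayscale “images” as bright or dark. The images, weights, and bias are all hypothetical; a real network would have many layers and learn its weights from data.

```python
import math

def forward(pixels, weights, bias):
    """One artificial neuron: weighted sum of the pixel values, then a sigmoid."""
    z = sum(p * w for p, w in zip(pixels, weights)) + bias
    return 1.0 / (1.0 + math.exp(-z))

# Each 2x2 image is flattened into 4 pixel intensities between 0 and 1.
bright = [0.9, 0.8, 0.95, 0.85]
dark = [0.1, 0.05, 0.2, 0.1]

# Hand-picked weights that fire on bright images (a trained net learns these).
weights = [2.0, 2.0, 2.0, 2.0]
bias = -4.0

print(forward(bright, weights, bias))  # close to 1 -> "bright"
print(forward(dark, weights, bias))    # close to 0 -> "dark"
```

The point is just that an image reduces naturally to a vector of numbers, which is exactly what a neural network wants as input.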
Computer vision systems can now even label objects in images. That’s impressive. But compared to the human vision system and how we maneuver through our environment, AI computer vision still falls short. In very structured environments like a warehouse, it can work well.
Check out this podcast with Mitchell Weiss, CTO at Seegrid. They have developed a system where forklifts can quickly navigate a manufacturing or warehouse space autonomously.
But it’s a very controlled environment where only so much changes in a day.
Compare that to autonomous cars, which must be able to respond to countless new circumstances. That’s quite tough. It’s situations like a kite floating across the street or a kid on a Big Wheel that are very tough to account for.
And even training autonomous cars is easier than trying to train a new office manager who might help with 20 different things a day in the office. With autonomous vehicles, you have very few actions: turn the wheel, brake, and accelerate. There’s also signaling and a few other things, but those three are the main ones. An AI can learn by watching humans drive and adjusting its model accordingly. Of course, a ton of programming still has to happen beyond watching humans drive, especially for edge cases like a bright light glinting off a train. I’d rather not run into a train, just saying.
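The “learn by watching humans” idea can be sketched in a few lines. This toy example records made-up (situation, action) pairs from a human driver and picks the most common human action for each situation. Real systems learn from raw camera frames, not neat labels like these, so treat this purely as an illustration of the idea.

```python
from collections import Counter, defaultdict

# Hypothetical demonstrations: what a human driver did in each situation.
demonstrations = [
    ("clear_road", "accelerate"),
    ("clear_road", "accelerate"),
    ("curve_ahead", "turn_wheel"),
    ("curve_ahead", "turn_wheel"),
    ("obstacle", "brake"),
    ("obstacle", "brake"),
    ("obstacle", "accelerate"),  # one noisy demonstration
]

def learn_policy(demos):
    """Majority vote: for each situation, copy the human's most common action."""
    by_situation = defaultdict(Counter)
    for situation, action in demos:
        by_situation[situation][action] += 1
    return {s: counts.most_common(1)[0][0] for s, counts in by_situation.items()}

policy = learn_policy(demonstrations)
print(policy["obstacle"])  # brake
```

Notice the noisy demonstration gets outvoted, which is one reason you want lots of driving data rather than a single example per situation.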
Here’s a podcast on autonomous vehicles we did:
AI is awesome for very controlled situations like translation, analyzing large amounts of text for key identifiers, and seeing patterns between data sets.
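“Analyzing text for key identifiers” can be as simple as this toy sketch, which pulls invoice-style IDs and dates out of a sentence with regular expressions. The sample text and patterns are made up; real pipelines often combine simple rules like these with learned models.

```python
import re

text = "Invoice INV-2041 was issued on 2017-03-15 and INV-2042 on 2017-04-02."

# "INV-" followed by digits, and dates in YYYY-MM-DD form.
invoice_ids = re.findall(r"INV-\d+", text)
dates = re.findall(r"\d{4}-\d{2}-\d{2}", text)

print(invoice_ids)  # ['INV-2041', 'INV-2042']
print(dates)        # ['2017-03-15', '2017-04-02']
```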
I’m excited for the day when AI can enter our environment even more. We’re getting there.