What are we building at Invento? What is our USP?

I get a lot of questions about what we build at Invento. We have found it much easier to excite Japanese companies already working in robotics than people outside the field. The questions I hear most often:

  • Why is our robot so special?
  • Are we competing with Boston Dynamics?
  • Why don’t our robots walk?

and so on….

So I thought I would write a detailed post to clarify our key intellectual property.

Our goal is to do for robotics what personal computers did for computing: enable AI and robotics applications to reach the world. Were there computers before the PC? Yes, but either as large defence/industrial machines or as homemade kits. That is exactly the state robots are in today.

Think of a PC: you get a box with computing power, an operating system, a presentation layer and applications that can be augmented. Now imagine that for the world of robots.

People keep talking about AI-powered applications (voice assistants, face recognition, emotion identification, recommendation algorithms and so on), but these lack proper interfaces to take them to users. Amazon’s Echo solved a part of the problem, but it is still limited to voice.

We build the robotic stack on top of which both our own and third-party applications can deliver AI to users:

  1. Indoor location awareness. Our robots navigate indoors with centimetre-level precision and can be commanded to go to a given position, a major step forward for robotics. GPS changed the way the outdoors works: from hailing taxis to booking hotels, having global coordinates revolutionised applications. We are bringing the same capability indoors. With it, a robotic assistant can guide you to the pool at a hotel, walk guests to a meeting room in an office, take you to the product you are looking for in a mall, restock shelves, patrol for security or serve you food at a restaurant (see the sketch after this list). This is a small set, and a much bigger one will emerge once end users start imagining what they can build with what is given.
  2. Gesture-based immersive voice conversations: Have you ever had a long conversation with your Echo [or Google Home]? Me neither. They can take commands, but they cannot hold conversations. A key reason is that humans communicate heavily through non-verbal cues. If I stay perfectly motionless while you are talking in front of me, you will freak out. Humans need physical feedback [a gentle nod or a hand movement] to keep a conversation going. We are building key IP to both understand gestures and deliver them through the robot's hands, face and motion.
  3. Voice stack with trigger-less voice recognition: The existing state of the art in voice recognition relies on trigger words [Alexa, Hey Google]. This both impedes conversations [though you can still give commands] and creates branding challenges [if you are Mercedes, why would you want your customers to call out Amazon’s brand in your car?]. Our stack is face-activated rather than trigger-activated. After activation, you can use our own speech models [with a restricted vocabulary trained for custom needs] or third-party ones like Alexa or Google.
  4. Computer vision stack. The robots carry cameras, and we have built a wrapper around key computer vision APIs along with some native TensorFlow models of our own, all exposed as a library (also shown in the sketch below). It can help you detect emotions, recognise faces, detect objects and more.
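To make this concrete, here is a minimal sketch of what a third-party application on top of this stack could look like. Everything in it is hypothetical: the inventoos module, the Robot class and every method name are illustrative assumptions, not our actual SDK.

```python
# Hypothetical sketch of a third-party app on a robotic stack like this one.
# The module "inventoos", the Robot class and all methods below are
# illustrative assumptions; they are not the real Invento SDK.

from inventoos import Robot  # assumed SDK entry point

robot = Robot.connect("lobby-unit-01")                # assumed: connect to a robot by ID

# 1. Indoor location awareness: command the robot to a named indoor position.
meeting_room = robot.map.lookup("Meeting Room 3")     # assumed indoor map lookup
robot.navigate_to(meeting_room, tolerance_cm=5)       # assumed centimetre-level navigation

# 4. Computer vision stack: recognise the person standing in front of the camera.
frame = robot.camera.capture()                        # assumed camera access
person = robot.vision.recognize_face(frame)           # assumed wrapper over CV models

if person is not None:
    robot.speak(f"Welcome back, {person.name}. Please follow me.")
else:
    robot.speak("Welcome! May I have your name, please?")
```

The point is the shape of the platform: indoor coordinates and vision primitives exposed as plain library calls that any developer can compose.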

Of course, we also provide the entire physical hardware: plenty of GPU firepower on board, a cloud connection, and server-side APIs supported through our Django layer for both maintaining the robots and writing server-side apps (a rough sketch follows).
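To give a flavour of the server-side piece, here is a rough sketch of what a dispatch endpoint on a Django layer could look like. The view name and the payload fields are assumptions for illustration, not our actual API.

```python
# Hypothetical sketch of a server-side app on the Django layer.
# The view name and payload fields are illustrative assumptions,
# not the actual Invento server-side API.

import json
from django.http import JsonResponse
from django.views.decorators.csrf import csrf_exempt
from django.views.decorators.http import require_POST

@csrf_exempt
@require_POST
def dispatch_robot(request):
    """Accept a task for a robot, e.g. 'guide visitor to Meeting Room 3'."""
    payload = json.loads(request.body)
    robot_id = payload["robot_id"]        # e.g. "lobby-unit-01"
    destination = payload["destination"]  # e.g. "Meeting Room 3"

    # In a real deployment this would enqueue a command that the robot
    # picks up over its cloud connection; here we just echo the request.
    return JsonResponse({
        "status": "queued",
        "robot_id": robot_id,
        "destination": destination,
    })
```

Wired into a urls.py entry, a view like this would let a back-office system send a robot to a destination over its cloud connection.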

Any platform needs a few killer applications, and our first is guest engagement in offices. It is a shame that companies spend millions on their lobbies but leave the engagement there very low-tech. Visitor management still relies on paper and outdated tools, which is both painful for the visitor and expensive for the company.

What if you entered a lobby and a robot recognised you, checked you in, guided you to the meeting room and brought you coffee while your host was alerted simultaneously? This application has already been bought by some of the leading tech companies in the world, and we will be going public with it in the next few months.
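Tying the pieces together, that lobby flow could be sketched roughly as below, reusing the same hypothetical Robot API from the earlier sketch; the checkin_service and notify_host helpers are equally illustrative.

```python
# Hypothetical orchestration of the lobby flow, built on the same
# illustrative Robot API as the earlier sketch. checkin_service and
# notify_host are assumed helpers, not real Invento components.

def greet_visitor(robot, checkin_service, notify_host):
    frame = robot.camera.capture()
    visitor = robot.vision.recognize_face(frame)      # assumed face recognition call
    if visitor is None:
        robot.speak("Welcome! Please check in at the front desk.")
        return

    badge = checkin_service.check_in(visitor.name)    # assumed: register the visit
    notify_host(badge.host_email, f"{visitor.name} has arrived")  # assumed host alert

    robot.speak(f"Welcome, {visitor.name}. Your host has been notified. Follow me.")
    room = robot.map.lookup(badge.meeting_room)       # assumed indoor map lookup
    robot.navigate_to(room, tolerance_cm=5)
```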

This is just the starting point. Many top brands are already working with us on unique robotics applications in banks, cinemas, colleges, hospitals and so on. Those pioneers will get to redefine the competitive advantage in their industries, just as the ambitious companies that moved to PCs in the 1970s got to the top of theirs.

You will all soon get to write cool, production-grade AI/robotics applications on a platform we will make available to the public in a year. For now, if you have strong ML/AI applications, do reach out to us and we can work together.

Welcome to the world of robots!

https://youtu.be/wPOlqa45eII



About the Author:

I'm the cofounder and CEO of Invento Robotics. I have been in the tech industry for 12 years and have worked on a range of products, starting with Microsoft Windows in Redmond. I'm also the most-followed writer on Quora and a winner of multiple international awards for research and innovation.