Latent AI reduces latency by minimizing, and sometimes completely bypassing, the need for a distant datacenter. Edge AI takes a similar tack: it runs algorithms locally on chips and specialized hardware rather than in distant clouds and remote datacenters, even though user and edge devices have widely varying computation resources.
What is Edge AI, and should it be on your roadmap for 2020?
We have seen the shift from mainframes to personal computers to the cloud; now the cloud is moving to the edge, and so is AI. That doesn't mean the cloud is becoming irrelevant. It is still relevant, and disruptive technologies like IoT will act as smart extensions of cloud computing. In Edge AI, the AI algorithms are processed locally on the hardware device itself, without requiring any network connection. The device uses the data it generates and processes it on the spot to deliver real-time insights within a few milliseconds.
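The core idea above, running inference where the data is generated instead of shipping it to a server, can be sketched in a few lines. This is a minimal illustrative example, not code from any real product: the model here is a hypothetical logistic-regression classifier whose weights we assume were trained offline and shipped with the device, and the sensor reading is made up.

```python
import math

# Hypothetical on-device model: a tiny logistic-regression classifier.
# Weights are assumed to have been trained offline and bundled with the
# device firmware; the values here are purely illustrative.
WEIGHTS = [0.8, -0.5, 0.3]   # one weight per sensor feature
BIAS = -0.2

def predict_locally(features):
    """Run inference on the device itself: no network round trip."""
    z = BIAS + sum(w * x for w, x in zip(WEIGHTS, features))
    return 1.0 / (1.0 + math.exp(-z))  # sigmoid activation

# Simulated sensor reading generated on the device
reading = [1.2, 0.7, 3.1]
score = predict_locally(reading)
print(f"event probability: {score:.3f}")
```

Because everything happens in local memory, the latency is bounded by the device's own compute rather than by network round trips; in practice the model would be a compressed neural network run through an on-device runtime, but the data flow is the same.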
For instance, the iPhone can register and recognize your face to unlock the phone in a fraction of a second. The same applies to self-driving cars, where the car drives itself. In both cases, complex algorithms process data right there on the phone or in the car, because in such scenarios there is no time to send the data to the cloud, process it, and wait for the insights. There are any number of other examples where we knowingly or unknowingly use Edge AI, from Google Maps notifying you about bad traffic to your smart refrigerator reminding you to buy missing dairy products.