5G + AI: on-device intelligence
With the 5G roll-out, AI is about to get a serious boost.
Perhaps you’ve heard of them? They are two of the most disruptive technologies of recent decades, although you have probably only heard of them separately.
Let’s merge them! 5G + Artificial Intelligence
Hmm... that doesn’t seem quite right. Are you sure?
AI enhances 5G, on the network and on the device.
“5G is the fifth generation of wireless communication technologies and standards.” Left behind are the first mobile phones on which we could only talk (1G), the first SMS (2G), our first internet connections (3G) and, most recently, mobile broadband (4G).
We are talking about a step further, where the most visible advance is speed: 5G will let us browse at up to 10 Gbps, around 10 times faster than today.
What will 5G mean? The proliferation of an even more massive IoT (Internet of Things) ecosystem, in which wireless networks will serve the communication needs of hundreds of billions of connected devices without compromising on speed, latency or cost.
Among other advantages:
· Up to 10 Gbps data rates, 10 to 100 times faster than 4G and 4.5G networks.
· Up to 10 years of battery life for low-power IoT devices.
· A 90% reduction in network energy consumption and 100% coverage.
· 1 millisecond latency and 1,000 times more bandwidth per unit area.
· Up to 100 times more connected devices per unit area (compared with 4G LTE networks).
This is where the concept of Edge Analytics arises: broadly speaking, a “new” model of data analysis in which data streams are processed at a non-central point of the system, such as a device or a sensor.
As we’ve seen, 5G will cause a proliferation of sensors around us, and each of those sensors opens up new opportunities for analyzing data and creating Machine Learning models.
That is, we can use those sensors as AI inputs.
And not only that. We are no longer talking about moving large amounts of data from its source to a centralized repository where it can be processed.
We are talking about bringing the processing closer to the source, the device itself, which offers crucial benefits such as privacy and reliability, in addition to helping intelligence scale.
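As a minimal sketch of that idea (the function, threshold and sensor values below are hypothetical, chosen only for illustration), a device could refine its raw sensor stream locally and send just a compact summary upstream:

```python
def summarize_on_device(readings, threshold=30.0):
    """Process the raw stream locally: flag anomalies, aggregate the rest."""
    anomalies = [r for r in readings if r > threshold]
    mean = sum(readings) / len(readings)
    # Only this small summary leaves the device -- the raw data stays local.
    return {"mean": round(mean, 2), "anomaly_count": len(anomalies)}

# e.g. a temperature sensor's last few readings, processed at the edge
temperature_stream = [21.5, 22.0, 21.8, 35.2, 22.1]
summary = summarize_on_device(temperature_stream)
```

Instead of shipping every reading to a central repository, the cloud receives a few bytes of aggregated signal, which is exactly the privacy and bandwidth win described above.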
New challenges, new opportunities
More efficient wireless connections, longer battery life, and therefore better user experiences.
Imagine using 5G and AI to recreate a virtual human who can interact with a client in real time, advising them remotely or simply keeping them company. Or sensors responsible for making decisions in critical situations that demand an immediate response. We are not simply talking about processing information near the data source, but about adding more context by sharing and receiving new data at high speed, taking advantage of the low latency between devices.
Nowadays, when we ask our AI-powered voice assistant a question, it takes a few seconds to respond because it goes to the cloud to process the request and fetch an answer. When we are asking what the weather will be like, that is not crucial. But what if you need a response in milliseconds, as in an emergency? Then we are out of luck; a round trip to the cloud is simply too slow.
The low latency, high capacity and high availability of 5G will also allow AI processing to be distributed between the device or sensor, the edge cloud and the central cloud, enabling much more flexible solutions than those we can build today.
Sectors such as gaming or Industry 4.X could also exploit these capabilities for complex use cases that call for this kind of distributed wireless processing.
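One way to picture that distribution is a simple tier-selection rule: run each request at the farthest (most capable) tier that still meets its latency budget. This is a hypothetical sketch; the tiers and round-trip times below are illustrative assumptions, not measurements of any real network.

```python
# Assumed round-trip latencies per processing tier (illustrative only).
TIER_LATENCY_MS = {"device": 1, "edge": 10, "cloud": 100}

def choose_tier(latency_budget_ms):
    """Pick the most capable tier that still answers within the budget."""
    for tier in ("cloud", "edge", "device"):  # most to least capable
        if TIER_LATENCY_MS[tier] <= latency_budget_ms:
            return tier
    return "device"  # nothing fits: fall back to purely local processing

# An emergency response needed in ~5 ms must stay on-device;
# a weather query that can wait a second can go all the way to the cloud.
```

Under these assumptions, `choose_tier(5)` stays on the device while `choose_tier(1000)` reaches the central cloud, which mirrors the weather-versus-emergency contrast above.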
A new paradigm: Distributed Learning over Wireless Sensor Networks
We have already taken the first step: using on-device processing power to save energy and refine the data before passing it to the cloud for more aggregated analysis.
The next step goes a little further: we will not settle for simply running inference on the device; we will train the model right there. That is Distributed Learning.
The theory is as follows:
1. The central cloud sends a model to each device.
2. Each device collects and stores local data samples on-device.
3. Finally, each device performs on-device training.
We go from “large-scale training” to “small, scalable, distributed training”.
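The three steps above can be sketched in a few lines. This is a hypothetical toy example in the spirit of federated averaging, not a real framework: the “model” is a single weight w for y = w·x, each device runs gradient steps on its private samples, and the cloud only ever sees (and averages) the returned weights.

```python
def local_train(w, samples, lr=0.1):
    """Steps 2-3: on-device training; raw samples never leave the device."""
    for x, y in samples:
        w -= lr * (w * x - y) * x  # one SGD step on the local squared loss
    return w

def federated_round(global_w, devices):
    """Step 1: the cloud sends the model out, then averages the updates."""
    updates = [local_train(global_w, samples) for samples in devices]
    return sum(updates) / len(updates)

# Two devices, each holding private readings of the same law y = 2x.
devices = [[(1.0, 2.0), (2.0, 4.0)], [(3.0, 6.0)]]
w = 0.0
for _ in range(20):
    w = federated_round(w, devices)
# w is now close to 2.0, learned without pooling any raw data centrally.
```

Note what travels over the network: only `w` goes out and only the updated weights come back, while each device’s samples stay where they were collected.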
On-device training allows us to:
· Scale: extending the processing to as many devices as we want, significantly reducing the central computational power required.
· Customization: training on local data, so the model adapts to the particular circumstances of each device (e.g. your own smartphone).
· Privacy: the raw data never leaves your device to be processed, as it does today. We extract value while preserving privacy.
So what’s next?
As you can see, the doors of an exciting world open to us.
A world without limitations, where AI can benefit not only from off-device data but also from new sources such as on-device data (calendar, messages, apps, etc.) and sensor data (ambient light, microphone, temperature, eye tracking, pulse, among many others).