
Deploying edge AI smoothly for every occasion with Docker and Balena

Edge AI projects at Archangel Imaging are diverse, to put it mildly. We need to design smart devices that can track animals in 40°C heat for months on a single charge, navigate drones delivering medical supplies without GPS, and find people day or night with models optimised for different parts of the electromagnetic spectrum. And that just takes us up to October. What all of these products have in common is a need for retaskable AI delivering real-time data.

At the hardware level, though, things get trickier. Do we have access to WiFi, or should we use 4G, or low-power radio? Satellite comms? What about power? It turns out car batteries are not ideal for trekking across the African jungle. And then there's the software. What data do we have? Are there any off-the-shelf machine learning models that will work here, or do we need to make our own? Designing a deployment pipeline that fits each product isn't easy either: above all, we need to ensure every device performs reliably, and that code written for one case can readily be deployed for another.

Docker is ideal for this. By restricting all code development, testing, and deployment to Docker containers, we can (essentially) fully define the environment. When adding new code, for example, you simply start a development container, mount the Git repo, and away you go. I can be confident that everything this smart camera needs to monitor pollution is in its Dockerfile, because that's the exact same environment the code was written in. Gone are the dark days of "but it worked on my machine" (replaced only by the dulcet tones of "but symbolic links would be really useful in Docker"). Docker also lends itself to modular design. Much of our code has common dependencies (CUDA, OpenCV, TensorFlow, etc.), and we can build these up layer by layer. Found an interesting new GAN, but the available code is in PyTorch? Just swap out the image layers. Better yet, work with the ONNX framework and do away with these dependencies altogether!
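To make that concrete, here is a minimal sketch of the kind of layered image I mean. The base image tag, library choices, and file layout (requirements.txt, src/main.py) are illustrative stand-ins, not our actual production stack:

    # Shared base layer: a CUDA-enabled image for the Jetson family.
    # (Tag is illustrative; pick the L4T release matching your JetPack.)
    FROM nvcr.io/nvidia/l4t-base:r32.4.3

    # Common layers reused across projects.
    RUN apt-get update && apt-get install -y --no-install-recommends \
            python3-pip libopencv-dev \
        && rm -rf /var/lib/apt/lists/*

    # Project-specific layer: swap this out per product
    # (e.g. TensorFlow vs PyTorch, or just an ONNX runtime).
    COPY requirements.txt /app/
    RUN pip3 install -r /app/requirements.txt

    COPY src/ /app/src/
    WORKDIR /app
    CMD ["python3", "src/main.py"]

Day-to-day development happens inside the same image: something like docker run -it --rm -v "$PWD":/app our-base-image bash (image name hypothetical) drops you into the exact environment the device will run, with the live repo mounted at /app.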

But wait, what about hardware? For many of our products the Jetson TX2 is great for running machine learning inference out in the field. However, power and size restrictions mean we need to be able to adapt to a multitude of alternatives, and when working with different architectures, suddenly those Docker environments don't look so similar. Then there's the issue of scale. How can I minimise the headache of managing an order of 10 smart cameras, all of which need our latest code deployed and maintained? Luckily we're not the first company to run into these issues, and Balena is already providing the answers. BalenaOS is a minimal Linux-based operating system with Docker functionality built in. If you're looking to build a new kind of IoT device, chances are Balena already has a Docker base image to work from. This is extremely useful, as it minimises the code changes needed to go from, say, a Jetson Nano to a Google Coral. Balena also offers a cloud service where devices can be assigned to specific projects. The dashboard gives live information on the status of each device, and deploying to all of those devices simultaneously is a single push command from my laptop. I can now deploy to one or ten devices with no extra work.
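A sketch of how this looks in practice: Balena's Dockerfile.template mechanism substitutes the target device type at build time, so one project can serve several boards. The base image name and tag below are illustrative; check Balena's base image catalogue for your device:

    # Dockerfile.template: Balena replaces %%BALENA_MACHINE_NAME%% with
    # the device type of the target fleet at build time.
    # (Base image variant and tag here are illustrative.)
    FROM balenalib/%%BALENA_MACHINE_NAME%%-ubuntu:bionic

    RUN apt-get update && apt-get install -y --no-install-recommends \
            python3-pip \
        && rm -rf /var/lib/apt/lists/*

    COPY . /app
    WORKDIR /app
    CMD ["python3", "main.py"]

From there, the single push command is roughly balena push my-project (project name hypothetical): Balena's builders produce an image for the right architecture and roll the release out to every device assigned to that project.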

So I can get back to what really matters, which this week is figuring out why my neural network can't distinguish different species of antelope.
