In this episode, we explore how accessible AI vision technology has become—and the surprising challenges that come with it.
Luke Walsh from Brainboxes built an AI rock, paper, scissors game using open-source software and hardware costing under £1,000. The playful demo masks serious industrial applications: from catching defects in car seat stitching with 22 cameras to monitoring hazardous environments without putting maintenance engineers at risk.
The conversation covers the technical realities of sub-200-millisecond latency, the stubborn resistance of factory maintenance teams, and why quality has been the hardest part of Overall Equipment Effectiveness (OEE) to measure. Luke explains how vision systems can now replace invasive sensors, monitor quality without human inconsistency, and prove their worth to sceptical teams—one tote bin at a time.
Tune in to learn why the best way to deploy AI in factories is to start small, prove value fast, and never assume your training data covers every possible hand gesture.
Summary of episode
- 01:59 - What Brainboxes does
- 05:43 - How AI recognises hand gestures in under 200 milliseconds
- 08:42 - Three reasons to choose open source: Latest models, trained engineers, community support
- 10:58 - Real-world applications: Quality control and OEE
- 13:45 - The car seat stitching use case: 22 cameras, one seat, zero tolerance
- 16:01 - Beyond quality: Monitoring hazardous environments and non-invasive throughput measurement
- 17:39 - Winning over sceptical maintenance teams
- 19:22 - The tote bin story: When data settles a night shift dispute
- 20:35 - The bias challenge: When "scissors" becomes a swearing gesture
- 23:08 - The future of industrial AI vision systems and Jevons Paradox
- 25:13 - Advice for manufacturers: Start small, keep it simple, nail first impressions
From revolutionising water conservation to building smarter cities, each episode of the We Talk IoT podcast brings you the latest intriguing developments in IoT from a range of verticals and topics. Hosted by Stefanie Ruth Heyduck.

Liked this episode, have feedback or want to suggest a guest speaker?
GET IN TOUCH
Episode transcript
Sample from the episode transcript
Luke: When we took the demonstrator to America for the first time, the American audience were playing scissors in the same way you just have, Ruth, and the demonstrator was classifying that as swearing and telling them, "Don't swear." So there's a lot of inbuilt assumptions and bias when you train a system.
Ruth: Welcome to We Talk IoT, where we explore the ideas and impact behind AI-driven tech of the future and how data creates real business opportunities to stay ahead of the innovation curve. Subscribe to our newsletters on the Avnet Silica website. I am your host, Ruth Heyduck.
Start of full transcript
Ruth: Picture this: a robot plays rock, paper, scissors with you and wins—not through magic, but through a camera and an edge device running AI. What sounds like a party trick is actually a glimpse into how factories can catch defects, monitor equipment and boost productivity without breaking the bank. Today I'm speaking with Luke Walsh from Brainboxes, a company that builds industrial connectivity devices.
Luke and his team have developed an AI vision system using open-source software and low-cost hardware. The demo is playful, but the applications are serious—from checking seat stitches in car factories to monitoring energy use on production lines. So can open-source AI tools truly deliver results in harsh industrial environments? And how do you convince sceptical maintenance teams to embrace this technology? Let's find out. Welcome to the show, Luke. How are you today?
Luke: Hi, Ruth. I'm great. Thank you. Great to see you.
Ruth: So welcome to We Talk IoT. Tell us briefly, what does Brainboxes do?
Luke: Yeah, so Brainboxes is a manufacturer of industrial connectivity devices. So we make devices that talk to sensors and actuators on a factory production line. They might sit inside a panel on the side of a machine, or they might sit in a cabinet on the edge of the factory. We take that data, we analyse it, we send it somewhere. Could be on the local network or it could be to the cloud.
We specialise in industrial connectivity, but we find that our products end up all over the world in all kinds of wonderful and interesting situations. So for example, our products are used in the Antarctic.
Ruth: Wow.
Luke: The British Antarctic Survey use our products for monitoring the energy production of part of their systems, which run unmanned for 12 months of the year. They're used in the Atacama Desert on telescopes: the APEX telescope used by the Max Planck Institute has our products on it, and that telescope was one of those used to create the very first image ever captured of a black hole. Primarily, though, our products are used in industrial settings. We talk to our customers as much as we can. Sometimes our customers will contact us through tech support, and then we'll find out about some really wonderful and varied applications where they're used.
Ruth: Those use cases sound really interesting. I'm excited to dive deeper into them in a minute. But we've talked about this AI rock, paper, scissors game. Why did you choose such a playful demo, when you've just mentioned such serious industrial examples and cool use cases?
Luke: Yeah. So the company's been around for 40 years. It's a family business. We've always been in the business of connectivity. And over time, the way connectivity works has changed, but the types of problems you're trying to solve have stayed the same.
So 20 years ago, we'd be talking to sensors primarily over a protocol like RS-232. And today we're still talking to sensors and we're still tackling those same factory problems. But today's sensors could be camera systems performing AI vision. They could be energy monitoring systems connected to the high voltage, high current parts of the factory, and monitoring how the machines are performing.
The problem hasn't changed, but the tools you use to tackle those problems have changed profoundly over the last few decades.
Ruth: And how does your example work? So it's rock, paper, scissors—everybody knows this game. What are you trying to prove? What are you trying to show with this application?
Luke: Factory environments are starting to embrace AI, and within a factory environment the key use case is camera systems. Camera systems can be positioned away from equipment and machinery and can remotely monitor some aspect of it. We wanted to show how the system can work in the real world, but we wanted to show it in a way that was easy to engage with for people who only have a few seconds to understand what it is that can benefit them and how. And so the system we made was very simple. A human walks up to a camera. The camera is connected to one of our edge devices. It could be connected over fibre, so the edge device could be a long way away, or it could be very close, and the person simply plays rock, paper or scissors.
The AI system instantly detects what it is they've played, and at the same time, the computer is playing against the user. And so we have a winner between the computer and the human of rock, paper, scissors. And that's a quick, engaging, simple demo that clearly demonstrates the effectiveness and benefit of AI in the modern world and what's achievable on the factory floor.
Ruth: And how does AI recognise the hand gesture so quickly? Because it is real-time, right?
Luke: Yeah. So one of the key challenges with a demo such as this is latency. If we presented rock, paper, scissors to the camera and five seconds later the machine replied with what it thought it was, then that's not a very compelling demo. And so we needed to show that AI was at the point where low-cost edge devices can run AI models and infer—and by inference we mean understand what it is seeing—in real time, and by real time we mean something sub-200 milliseconds. So the camera system is looking at the environment. It is trained to understand what features you want it to see.
Ruth: Mm-hmm.
Luke: And the features we've trained on are human hands playing the game of rock, paper, scissors. And within these 200 milliseconds, it can take in the image from the camera over the network to the edge device, process the image, and come back with a label of what it sees—rock, paper, scissors—to the screen where the human stands.
So we've got this end-to-end system. We've got the eyes, which are the cameras. We've got the brain, which is the edge device, and then we've got the screen presenting the results of the game. And the latency from the system, from the camera to the edge device and back to the screen, is sub-200 milliseconds. So to the human, it feels super responsive. Humans can easily detect differences in latency of around 100 milliseconds. And so trimming that down as much as possible and making the game feel interactive and exciting was a key challenge. But it also shows that on factory floors where you need quick analysis and understanding and fast decision-making, that too is possible today with these low-cost edge devices, such as what we make.
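For readers who want to try something similar, here is a minimal sketch of the kind of capture-and-infer loop Luke describes, written in Python with OpenCV and ONNX Runtime. The model file, input size and class names are placeholder assumptions rather than Brainboxes' actual implementation; their own open-source code is linked in the show notes.

```python
# Minimal sketch: time one camera-capture + inference cycle end to end.
# Assumptions: a camera reachable by OpenCV, an ONNX image-classification model
# ("rps.onnx" is a placeholder file name), and the onnxruntime package installed.
import time

import cv2
import numpy as np
import onnxruntime as ort

LABELS = ["rock", "paper", "scissors"]        # placeholder class names
session = ort.InferenceSession("rps.onnx")    # placeholder model file
input_name = session.get_inputs()[0].name

cap = cv2.VideoCapture(0)                     # 0 = first local camera; could be an RTSP URL

while True:
    start = time.perf_counter()

    ok, frame = cap.read()                    # grab a frame from the camera
    if not ok:
        break

    # Pre-process: resize to the model's assumed input size and normalise to [0, 1].
    img = cv2.resize(frame, (224, 224)).astype(np.float32) / 255.0
    img = np.transpose(img, (2, 0, 1))[np.newaxis, ...]   # HWC -> NCHW batch of 1

    scores = session.run(None, {input_name: img})[0][0]   # run inference
    label = LABELS[int(np.argmax(scores))]

    latency_ms = (time.perf_counter() - start) * 1000
    print(f"{label}  ({latency_ms:.0f} ms end-to-end)")   # target: under 200 ms
```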
Ruth: Speaking of low cost, you mentioned that it's not a very expensive setup you created. What hardware do you use? And I think you mentioned that you use open-source software as well, so it's a really low-cost approach to a really high-end solution.
Luke: I have the setup here, and you can see just on the screen there. So in this panel here is the Brainboxes edge device.
Ruth: Mm-hmm.
Luke: Now that's connected over the network to the camera. The camera is just here, and it's connected through a PoE switch. And so there are four key pieces of hardware: the screen, the camera, our PoE switch, which powers the camera and the screen and sends the data back to the edge device, and the edge device itself. Now, the total cost of a system such as this is under £1,000 for everything.
Ruth: That's terrific.
Luke: The software is open-source, and some people are sceptical of open-source, while some people see it as a huge benefit. There are various reasons why we chose these open-source tools. The first one is the AI models. Today, a lot of the cutting-edge models—what are typically called frontier AI models—are released as open-source, so we can use the very latest tools. The second reason is that many engineers today, when they leave university, are already trained in these open-source tools, whereas historically they may have been trained in more closed-source proprietary tools.
Today, open-source is much more accepted and understood to be the standard. And then the final reason is that with these tools being open-source, there are a lot of people online who have discussed and tackled the same challenges as yourself. And there are forums available for you to kind of understand your problems and see what other people have done to tackle them. So having a large ecosystem of many people all trying to achieve similar things really helps you along your path to using the tools yourself.
Ruth: That's terrific. And thank you for setting this up for the audio-only listeners. I'd encourage them to jump over to the YouTube channel after listening to the audio-only version and watch what you've just shown us. No, it's really interesting. Thank you so much. That was a very cool idea. And I think you probably also have a YouTube video of the demo or something we can share in the show notes.
Luke: Yes, we have a white paper clearly showing all the steps to reproduce a system like this.
Ruth: I'll make sure to link it in the show notes as well.
Luke: We've also open-sourced the code that we've developed to put this together so anyone can download that code and scrutinise it and adapt it and run it for themselves.
Ruth: We will take a short break. Stay with us, and we will be hearing from our guests very shortly. This podcast is brought to you by Avnet Silica, the engineers of evolution. Subscribe to our Avnet Silica newsletter or connect with us on LinkedIn. If you want to learn more about us, we have put information and links in this episode's show notes.
Now walk us through some applications and use cases. You've mentioned shop floors and factories. What jobs will this setup be able to tackle?
Luke: Yeah, so there are lots of factory floor procedures which would benefit from the use of AI camera systems. The classic one is a measure of the quality of the output from a production line. The ideal production metric that most factories aim to be able to score themselves on is called OEE—Overall Equipment Effectiveness. And so this is a measure of: are you producing things at the right speed? Are your machines running a sufficient amount of time? And is the quality of what you're producing at a high enough level?
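For reference, OEE is conventionally the product of those three factors, each expressed as a ratio: availability, performance and quality. A small illustrative calculation in Python; the shift figures below are invented for illustration, not taken from the episode.

```python
# Illustrative OEE calculation: OEE = availability x performance x quality.
# All shift figures are made-up example numbers.
planned_time_min = 480       # planned production time for the shift (minutes)
downtime_min     = 50        # unplanned stops (minutes)
ideal_cycle_s    = 2.0       # ideal seconds per part
parts_produced   = 11_000
good_parts       = 10_700    # parts that pass quality inspection

run_time_min = planned_time_min - downtime_min

availability = run_time_min / planned_time_min
performance  = (ideal_cycle_s * parts_produced) / (run_time_min * 60)
quality      = good_parts / parts_produced

oee = availability * performance * quality
print(f"Availability {availability:.1%}, Performance {performance:.1%}, "
      f"Quality {quality:.1%} -> OEE {oee:.1%}")
```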
Now, historically, measuring machine uptime and machine throughput has been something you could automate. You can do that using sensors, and you can connect those sensors to remote I/O and edge devices such as the ones we provide. But the hardest measure to tackle reliably and consistently is quality.
Ruth: Mm-hmm.
Luke: That's because historically it would rely on human inspection, and human inspection varies from person to person. Even with a set of standards, it can often vary: by time of day, by how close someone is to needing a break, and by how close it is to the end of the day. Having a consistent quality measure is essential. If you can programmatically train an AI to understand what quality is acceptable and what is not, you can deploy AI camera systems, for example, at the end of the production line to measure the quality of what is being produced. That will feed back into the OEE score and make it an accurate, reproducible score and a benchmark you can measure consistently over months and years to figure out how to improve. To do that, we have one customer using our edge devices and industrial Ethernet switches to connect to multiple cameras.
The cameras are positioned along the production line, all around a seat. Now the seat is for the car industry. There are 22 cameras involved, and each camera is analysing a fraction of the seat for the stitching quality, the colour reproduction, creases, and any blemishes or cracks in the material.
Ruth: Mm-hmm.
Luke: And scoring the quality of the seat produced. Now, to do that is quite challenging, because each camera has a gigabit link back to a switch. When you combine all those different camera feeds, you've got a huge amount of data. And so you need a reliable industrial mechanism to take that data back to the edge device without any throttling or any lost frames, and still have enough bandwidth to process things completely and quickly. And that's where our products come into the equation—having an uplink which matches the combined bandwidth of all the sensors you're connecting to.
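To put rough numbers on that uplink argument, here is a back-of-the-envelope sketch. The per-camera stream rate is an assumption for illustration; real figures depend on resolution, frame rate and compression.

```python
# Back-of-the-envelope uplink sizing for a multi-camera inspection rig.
# The per-camera stream figure is an assumed example value.
cameras = 22
stream_per_camera_mbps = 300    # assumed sustained video stream per camera (Mbit/s)

aggregate_mbps = cameras * stream_per_camera_mbps
print(f"Aggregate camera traffic: {aggregate_mbps / 1000:.1f} Gbit/s")

for uplink_gbps in (1, 10):
    utilisation = aggregate_mbps / (uplink_gbps * 1000)
    verdict = "OK" if utilisation <= 1 else "frames will be throttled or lost"
    print(f"  {uplink_gbps} Gbit/s uplink -> {utilisation:.0%} utilised ({verdict})")
```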
Ruth: And that's really an interesting use case, because with car seat stitching it's a very tiny detail of the product, so probably even for a human it's really hard to tell: is this done correctly? Is it good quality, or is something messed up in the stitching? And that's 22 cameras to find out if the quality is perfect.
Luke: I was speaking to a person in the automotive industry who was telling me that when they ship cars to some countries—
Ruth: Mm-hmm.
Luke: —when the consumer purchases the car before it leaves the forecourt, the consumer will hire a third party who will thoroughly inspect the vehicle and place any claims against the manufacturer for out-of-warranty breaches of the quality of the car.
Ruth: Okay.
Luke: Now, we don't do that in the UK because we're a very "we want things here and now" market. We don't want to wait. But in other countries, they are willing to wait until the car is perfect before it drives off the forecourt. And so car manufacturers are very wary of all these claims being put against them, and they're very keen to ensure the highest possible quality before it reaches the forecourt and before it gets into the consumer's hands.
Ruth: I imagine that a vision AI system can also help with, let's say, situations where things might get dangerous. Car seat stitching doesn't break a car, right? It's just annoying if you bought a very expensive car and the stitching is messed up. But I suppose it could also monitor other parts of the manufacturing process.
Luke: Yeah. So if you look at routine maintenance jobs, sometimes routine maintenance tasks involve going into hazardous areas. Now it might be hazardous due to chemicals, or the environment, or electrical fields. And so to have a camera system doing the maintenance job on behalf of the maintenance engineer can be really valuable in those situations, because not only are you saving the maintenance engineer from entering those hazardous environments, you're also allowing the camera system to continuously monitor. The maintenance engineer may only be scheduled to go into that environment once per month to double-check some reading or setting or issue, but the camera can stay there permanently and provide continuous monitoring, which is another really big benefit. If we go back to OEE: historically, you would measure OEE by retrofitting sensors to machines. With cameras, you can now measure OEE—i.e., the throughput of a machine—without fitting a sensor to the end of the machine. So historically, a widget would come out of a machine, it might go on a conveyor belt, and when the widget passed along the conveyor belt, you'd count that as one new widget produced.
Ruth: Okay.
Luke: To do that, you've got to fit the sensor. To fit the sensor, sometimes you've got to drill holes into the machine, and a lot of people don't want to buy brand-new machines and retrofit sensors to them because they could invalidate the warranty of their machine.
Ruth: Mm-hmm.
Luke: Now, with camera systems, you can monitor these things non-invasively. By positioning the camera away from the machine and pointing it at the machine, you can still measure what's coming out of it and figure out what your throughput is, without any of the invasive monitoring techniques that might historically have been needed for that process.
Ruth: Mm-hmm.
Luke: So that's another really nice use case. Camera systems for factory environments, then: monitoring quality is one, enabling routine inspection for maintenance is another, and monitoring throughput is a third.
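As a rough illustration of the non-invasive throughput idea, the sketch below counts objects crossing a virtual line in a camera feed using simple OpenCV background subtraction. It is a stand-in for a trained vision model and not the approach described in the episode; the video source, line position and blob-size threshold are all assumptions.

```python
# Rough sketch: count widgets leaving a machine by watching a conveyor with a camera.
# Uses background subtraction as a simple stand-in for a trained detector.
import cv2

cap = cv2.VideoCapture("conveyor.mp4")            # placeholder: camera index, RTSP URL or file
subtractor = cv2.createBackgroundSubtractorMOG2()

count_line_y = 300        # virtual counting line, in pixels from the top of the frame
min_area = 2000           # ignore foreground blobs smaller than this (noise)
widget_count = 0
seen_above = set()        # very crude per-lane memory of blobs seen above the line

while True:
    ok, frame = cap.read()
    if not ok:
        break

    mask = subtractor.apply(frame)                # foreground = moving objects
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)

    for c in contours:
        if cv2.contourArea(c) < min_area:
            continue
        x, y, w, h = cv2.boundingRect(c)
        centre_y = y + h // 2
        lane = x // 50                            # crude "track id" based on horizontal position
        if centre_y < count_line_y:
            seen_above.add(lane)
        elif lane in seen_above:                  # blob has crossed the line moving down
            widget_count += 1
            seen_above.discard(lane)

print(f"Widgets counted: {widget_count}")
```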
Ruth: Yeah, those are really terrific examples. And I suppose AI can be a really great asset to the team, which kind of brings me to the adoption part, which I think you mentioned at the beginning. How do humans react to these systems? Because there are maintenance teams that used to do these jobs. What are the challenges you're facing here?
Luke: So as with any new technology adoption in factories, there is legitimate scepticism on the part of everyone in the factory. I have been to factories where they've had new technologies rolled out multiple times over the years, with varying levels of success. And so it is natural that the maintenance team and the production team are sceptical. It's healthy, to be quite honest, because new technologies often come with great promise and sometimes they don't deliver.
Ruth: And then the team is like, ah, not again, not another one.
Luke: That's right. Yeah. And so I've been to many factories where, the first time you meet the team, there's a healthy scepticism on their side. And what you need to try to do is get a win for the team, which brings them on board with the process in general, because if they see the new technology as being adversarial to what they're doing, then you never get the team on board. But if they see it as a tool which enables them to do their jobs better and sometimes more easily, then you can definitely get those people on your side. So here's a really nice example, from before AI. Ten years ago, we were in a warehouse. The warehouse had an issue with totes on conveyor belts. So the totes are plastic tubs, which hold products.
Ruth: Oh, I thought you meant totes, like as in toads.
Luke: Toads? That'd be a very different issue. The plastic totes were holding products, and the totes, as they went around the conveyor belts, would sometimes fall off when entering or exiting certain parts of the factory. And that was the problem the team was trying to solve. And so we put sensors around the entrance and exit for the totes to understand how well they were positioned on the conveyor belts. The maintenance team initially had healthy scepticism, but what we were able to do by monitoring the totes was show how busy the factory was at all different times of day. And suddenly the maintenance team could end a longstanding debate they'd had with the night shift. The night shift had always said they were too busy, there was no time for maintenance, and the maintenance team had always wanted to do the maintenance at night when there were fewer people on site. Now they had data to back up the point, and they could clearly show management: look, we are less busy at night, therefore we should do the maintenance at night. After a few weeks we went back on site and the maintenance team were no longer sceptical. They were very welcoming. And so as long as you can embed improvement into the digitisation strategy and bring the team along with you, then you can really get some great results.
Ruth: So it seems like the opportunities do outweigh the challenges, or are there other challenges you would like to mention that you have encountered?
Luke: There are always challenges. With AI systems in particular, they're only as good as the training data that you provide them. So when we were developing this rock, paper, scissors demo, we fed it with 3,000 different images of people playing the game of rock, paper, scissors. Initially there were only a few members of the team presenting rock, paper, scissors, and then when we took it to shows, it would misclassify results. It would misclassify because some of our software developers are not the same ethnicity, for example, as the person at the show. And other times it misclassified because of cultural differences. For example, we assumed that everyone plays scissors the way we do in the UK, with your palm facing the camera, like this. Now in the USA, they play it the other way round.
Ruth: Yeah. I'm also facing the back of my hand to the camera now.
Luke: Yes, now in the UK that's a swearing gesture.
Ruth: Oh, good to know.
Luke: And obviously we were unaware of those cultural differences when we took the demonstrator to America for the first time. The American audience were playing scissors in the same way you just have, Ruth, and the demonstrator was classifying that as swearing and telling them, "Don't swear." So there are a lot of inbuilt assumptions and biases when you train a system. You have to have a broad enough training set and a broad enough set of inputs to understand what all the biases are and remove them from the system.
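One practical way to surface this kind of bias before a demo travels is to slice evaluation accuracy by attributes the training data may under-represent. A minimal sketch; the record fields and figures are hypothetical.

```python
# Minimal sketch: check model accuracy per subgroup to surface training bias.
# The records below are hypothetical; a real evaluation set would hold thousands of frames.
from collections import defaultdict

# Each record: (true_label, predicted_label, region_the_image_came_from)
results = [
    ("scissors", "scissors", "UK"),
    ("scissors", "swearing", "USA"),   # palm-away scissors misread as a rude gesture
    ("rock", "rock", "USA"),
    ("paper", "paper", "UK"),
]

correct = defaultdict(int)
total = defaultdict(int)
for true_label, predicted, region in results:
    total[region] += 1
    correct[region] += int(true_label == predicted)

for region in sorted(total):
    print(f"{region}: {correct[region] / total[region]:.0%} accuracy on {total[region]} samples")
```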
Ruth: Yeah. I think that will probably be the biggest challenge in everything related to AI: how to tackle the biases. Sometimes, I suppose, AI can surface biases you didn't even know you had. You start training the model and then you realise, wait a minute, why is it doing that? And then you realise there was a bias you weren't aware of. But then obviously, when training, you can also train in the bias. I think that's a really complicated discussion that I'm always trying to wrap my head around—how that will work out in the future.
Luke: And the act of training basically gives the system bias based on who the trainers are. And that's a key consideration.
Ruth: Yeah. So where do you think this will go in the future? Where do you see industrial AI vision systems going in the next few years?
Luke: Yeah, it's fascinating. I'm sure we've all seen the videos online of robotic systems and how capable they are becoming with AI. Now, one of the key features of those robotic systems is, of course, the cameras and the vision. That's the input to the system, and the output is the movement of the robot. And so there's no obvious end in sight to the improvement in the AI models, in the vision and in the movement of the robotics. At least for the next few years, it seems to me that these systems will only improve, and so they will become more commonplace in factories and in our everyday lives as well.
There's a well-known paradox in tech called Jevons Paradox, which basically says that as a technology gets faster and cheaper, we don't saturate our usage of it; we find more uses for it and use it more. As AI improves, it will become cheaper and more prevalent. And so there are lots of areas. We have a full production line here at Brainboxes for our electronics, and there are many areas within it where it's clear to me that we will start to embrace the use of AI more. There are many areas where we use classical techniques—for example, classical vision techniques to understand what's going on under a camera—and we will probably replace those with AI techniques. And there are many areas where we can help the team improve the quality, the speed and the performance of what they're doing with these AI systems as well.
Ruth: And for manufacturers who want to start experimenting with this technology, what advice would you give them?
Luke: Start small. Keep it simple. Don't try to solve all your problems in the first go. We learned such a lot just from doing what is an inherently simple rock, paper, scissors demo that we can then reapply to what we're doing beyond that. For example, getting the latency of the camera feed very low took a lot of work, but that's all in our source code, which you can download, so you don't have to tackle that yourself. Until we had got to that point, the technology would have been of more limited use on the production line. I think people's first impression of a technology profoundly influences their long-term opinion of it. If their first impression had been, oh, this is a bit slow, then that would have stuck with them for months and months despite the improvements. So getting past that in a small trial is really beneficial for when you roll out more widely.
Ruth: Is there anything I haven't asked you that you wish I had asked you?
Luke: Well, for me, the problems we're solving on these production lines have not changed. We want to make things at the highest possible quality, at the lowest possible cost, as quickly as we can. Those three factors have not changed and they never will change. What has changed is the way we tackle them. If I look at my production line here—so we manufacture electronics here in the UK, and we're still making electronics—the process we've gone through to make them today compared to 20 years ago is completely different. And the tools we're using are completely different. So I have to assume that 20 years from now the tools are going to be completely different again. Unless, as a manufacturer, you embrace those tools and go with them, you will be left behind. Like all manufacturers, we face intense competition nationally and particularly internationally, and in order to stay ahead, you need to embrace these technologies. So I basically encourage everyone to look beyond their companies, go to exhibitions, see what's out there. We exhibit at most of the international industrial communication exhibitions. We would be very happy to speak to people and talk about what the art of the possible is today and how they might apply it to their own factories or their own jobs and processes going forward.
Ruth: Terrific. I think that wraps up our conversation really nicely. Thank you so much, Luke. It was a pleasure having you on the show. Thank you for joining We Talk IoT today. I will put the links and everything we've mentioned in the show notes so that you can check out Brainboxes and their solutions and maybe play a game of rock, paper, scissors against the machine. Thank you for listening to We Talk IoT. Stay curious and keep innovating. This was Avnet—We Talk IoT. If you enjoyed this episode, please subscribe and leave a rating. Talk to you soon.
About the We Talk IoT Podcast
We Talk IoT is an IoT and smart industry podcast that keeps you up to date with major developments in the world of the internet of things, IIoT, artificial intelligence, and cognitive computing. Our guests are leading industry experts, business professionals, and experienced journalists who discuss some of today's hottest tech topics and how they can help boost your bottom line.
You can listen to the latest episodes right here on this page, or you can follow our IoT podcast anywhere you usually listen to your podcasts, and you'll be notified of all the latest episodes.