Artificial intelligence is moving to the edge - and it's changing how factories operate, farms grow crops, and robots navigate the world.
In this episode, Monica Houston from Tria Technologies walks us through Tria's Vision AI Kit QCS6490, an industrial-grade edge computing board that processes five camera feeds simultaneously, runs inference locally, and handles demanding tasks like image segmentation - all without needing cloud connectivity or even a cooling fan.
Monica discusses real-world deployments in agriculture (spot-treating crops to reduce pesticide use), factory robotics (autonomous mobile robots and robotic arms), and the practical challenges of moving AI from comfortable data centres to harsh industrial environments. We explore why latency matters, what happens when you can't rely on internet connections, and why power efficiency is the unsung hero of edge AI.
If you've wondered whether edge AI is ready for industrial prime time, this conversation delivers the answer - with hardware in hand.
Summary of episode
- 01:57 - Monica's Background at Hackster.io and Tria Technologies
- 02:38 - What the Vision AI Kit Actually Does
- 04:01 - Why Edge Processing Matters for Robotics
- 07:46 - Agricultural Use Cases: Spot-Treating Crops
- 09:55 - Autonomous Vehicles and VSLAM Technology
- 12:32 - Factory Floor vs Self-Driving Cars
- 14:09 - Real-World Deployment: Robot Arms in Action
- 16:44 - ROS 2 and Robotics Applications
- 17:36 - Edge Impulse: Making Model Deployment Easier
- 19:08 - The 15-Year Lifecycle Question
- 22:18 - Power Efficiency: Why No Fan Matters
From revolutionising water conservation to building smarter cities, each episode of the We Talk IoT podcast brings you the latest intriguing developments in IoT from a range of verticals and topics. Hosted by Stefanie Ruth Heyduck.

Episode transcript
Ruth: Artificial intelligence is moving to the edge. Factory floors, agricultural fields, remote industrial sites where decisions happen in milliseconds and network connections aren't guaranteed. Tria's Vision AI Kit promises industrial-grade edge computing with a 15-year life cycle. It runs inference locally, processes five camera feeds, and handles demanding tasks like image segmentation.
My guest today is Monica Houston. She's the technical marketing manager at Tria Technologies, and we will discuss real-world edge AI deployments, what it takes to make these systems work in demanding environments, and whether Edge AI inference is genuinely ready for the industrial prime time. So, we are excited to hear about that. Thank you so much for being here. Welcome to the show, Monica.
Monica: Thank you for having me.
Ruth: So, Monica, what's your background, and what are your customers telling you about their Edge AI challenges right now?
Monica: Yeah, so I used to work for Hackster.io, which is a community for hardware hackers - that's what we called it. We were acquired by Avnet, and I ended up moving to Tria. I also have a master's in computer science.
Ruth: Terrific. Yeah. We had Hackster on the show - I think it might be two years ago already - for an International Women's Day episode. That sounds very interesting to work there. Professional hackers.
Monica: Yeah.
Ruth: I think our main focus today is the Vision AI Kit I've mentioned. Walk me through what this kit actually does.
Monica: So, the Vision AI Kit uses the Qualcomm QCS6490 processor inside, which is an edge AI processor. It has a neural processing unit, or Hexagon Tensor Processor,
Ruth: mm-hmm.
Monica: That is able to efficiently run inference - to do all of the multiply-adds contained inside the layers of a machine learning or AI model. Okay. And it also has a GPU and a VPU, a video processing unit, so it's excellent for multimedia. It's also, as mentioned, able to process five MIPI streams at once - it has five ISPs for MIPI cameras - so it's able to process a lot of image data at once.
And because of that, it's really useful for applications like industrial robotics, where you might need a lot of camera data to, for instance, figure out where you are in space. Mm-hmm. Or figure out, you know, if something's coming at you, if you need to avoid it. So, it's a great board for all of these use cases.
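To make "the multiply-adds inside the layers of a model" a little more concrete, here is a minimal numpy sketch of a single dense layer - the kind of arithmetic an NPU such as the Hexagon Tensor Processor accelerates. The layer sizes and values are invented purely for illustration.

```python
# One dense layer of a neural network, written out as plain multiply-adds.
# Sizes and values are made up for illustration; an NPU runs this kind of
# arithmetic for every layer of a model, many times per camera frame.
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(128)         # input activations
W = rng.standard_normal((64, 128))   # layer weights
b = rng.standard_normal(64)          # biases

y = np.maximum(W @ x + b, 0)         # 64 x 128 = 8,192 multiply-adds, then ReLU

print(y.shape)                       # (64,)
```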
Ruth: Yeah. Since you mentioned robotics, and I assume also self-navigating systems - what are these being used for, and why does edge processing matter there?
Monica: So, edge processing is really important. Robotics oftentimes requires a lot of different sensor feeds. So, you might be using camera data, as well as possibly LIDAR data.
You might even be using some sort of IMU, like an accelerometer or a gyroscope. So, you're taking in a lot of different sensor data and combining it to figure out where you are in space. Mm-hmm. In, you know, 3D space. And you need processing power for that. You also need good power efficiency, and this board is excellent compared to similar boards. It has really excellent power efficiency, which is vital for edge AI, especially when you're on battery power.
Ruth: Mm-hmm.
Monica: And then you also need reliable connectivity. So, this board has excellent WiFi and Bluetooth 5, so it's great for connecting when you need to connect. It's also flexible and scalable.
So this is a smart platform, which means that it's an open-source form factor, so customers can buy the actual hardware. Mm-hmm. It's actually two different parts: there's a SOM and a baseboard. Okay. And the system-on-module part - it's quite complex to build a SOM like this with these AI chips. If you're, you know, like a small OEM, it can be quite hard to do this chip-down design. Okay? So, by creating this SOM, and then you can customize a baseboard for the SOM, it kind of gives you this mix-and-match capability. The hard part's already been done for you, and then you just need to create your customized baseboard that pins out all of the things that you need and has the form factor that you would like.
Ruth: Sorry, for our not-so-tech-savvy listeners: what's a SOM - system on module?
Monica: Oh, okay. So that's where the actual processor is located.
Ruth: Oh, I got it. I got it. Thank you. Mm-hmm. Is there a latency difference between edge inference and cloud processing for real time decisions? Or is that not relevant?
Monica: Um, there's definitely a latency difference, yes. So edge is actually taking it, you know, off the cloud and onto the board to remove that latency. Before we had edge AI boards - which was not that long ago, only two or three years ago; we didn't really have these NPUs yet - people were doing cloud-connected inference, and there was a latency difference. There's also privacy concerns and connectivity concerns, so moving everything to the edge allows you to do all of the processing at the edge, which is really important, especially for medical devices and things where privacy is of great importance. In, you know, agriculture applications, you might not have connectivity to the cloud when you're out in a rural field. Or on a factory floor inside of a big warehouse, you might not have great connectivity there either. So, this is a way to get around that. And I think we're gonna be seeing a lot more of these devices, just because it's a very practical thing to have that edge capability. And then, like you mentioned, latency - it's much faster, and also more energy efficient than if you're going back and forth to the cloud all the time.
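As a rough back-of-the-envelope illustration of why that round trip matters: at 30 frames per second a robot has about 33 ms per frame, and a cloud round trip can easily use up more than that on its own. Every number below is an assumption for illustration only, not a measurement of this board or of any particular network.

```python
# Hypothetical latency budget for real-time perception at 30 fps.
# All timings are illustrative assumptions, not measurements.
FRAME_BUDGET_MS = 1000 / 30           # ~33 ms available per frame

cloud_round_trip_ms = 60              # assumed network round trip
cloud_inference_ms = 15               # assumed server-side inference
edge_inference_ms = 20                # assumed on-device NPU inference

for name, total in [("cloud", cloud_round_trip_ms + cloud_inference_ms),
                    ("edge", edge_inference_ms)]:
    verdict = "fits" if total < FRAME_BUDGET_MS else "misses the frame budget"
    print(f"{name}: {total:.0f} ms of {FRAME_BUDGET_MS:.0f} ms -> {verdict}")
```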
Ruth: Mm-hmm. And you already mentioned being out in the field in rural areas, so now we have basically already jumped into another use case, which is probably agriculture, I assume. Mm-hmm.
Monica: Yeah,
Ruth: I think you have a very nice article also, which I will link in the show notes about an agricultural demo. Um, what specific problems is Edge AI solving in farming?
Monica: Yeah, so there's a lot of things that having autonomous vehicles can help with. So, an example might be fertilizer or pesticide. You know, right now these are actually being deployed, but before, all of the fertilizer and pesticide was kind of uniformly applied. Mm-hmm. You know, with hopes that it would be taken up by the plants and arrive where it needed to be. Now it can actually be spot-treated by using vision models that are able to detect where the pests are. You can use a vision model to see, oh, this area needs more water, or this area needs more pesticide. Mm-hmm. And do spot treatments, which avoids overuse of pesticides - which not only saves money, but it's also much better for the environment.
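A minimal sketch of the decision logic Monica describes: run a detector over the camera view for each sprayer nozzle and only fire where a target is seen with enough confidence. The function run_detector, the class labels, and the threshold are all hypothetical placeholders, not part of the Vision AI Kit's SDK.

```python
# Sketch of vision-driven spot spraying: fire only the nozzles whose region
# contains a confident detection. run_detector, the labels and the threshold
# are hypothetical placeholders for whatever model and classes you deploy.
CONFIDENCE_THRESHOLD = 0.6
TARGET_CLASSES = {"weed", "pest_damage"}

def nozzles_to_fire(frame, nozzle_regions, run_detector):
    """Return indices of nozzles that should spray for this frame."""
    fire = []
    for i, region in enumerate(nozzle_regions):
        detections = run_detector(frame, region)  # [(label, confidence), ...]
        if any(label in TARGET_CLASSES and conf >= CONFIDENCE_THRESHOLD
               for label, conf in detections):
            fire.append(i)
    return fire
```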
Ruth: Oh, that's terrific. So, the Vision AI Kit does not go into the machine or the tractor? It will be somewhere on the field monitoring the plants and the growth and the...
Monica: So, it might go onto the tractor arm - you know, these giant tractors that have these big arms that have the sprayers on them. It might actually go onto that arm. Mm-hmm. Or the cameras will go onto that arm, and the vision kit could be on the tractor itself.
Ruth: Yeah. That brings us to the point where you mentioned that they're also designed for physically demanding environments. That would be one such case, right? Or what other conditions are we talking about? What can we put this Vision AI Kit through?
Monica: Yeah, so physically demanding environments - besides the Vision AI Kit, Tria also makes a lot of HMI products, like touch screens, and they specialize in really harsh environments. So we can add a touch screen to it, you know, for environments that are freezing or where you need gloves. Mm-hmm. Or dusty, specialized environments. And also hot and cold - I forget the actual rating for heat and cold, but it's definitely quite high.
Ruth: Okay. What other agricultural use cases are there? Do you have more deployment examples you can talk about at this stage?
Monica: Yeah, so some of the really interesting robotics applications are autonomous vehicles.
Ruth: Mm-hmm.
Monica: This is useful for agriculture also. You know, when you have a vehicle with a person on it, you have to think about safety concerns and making sure that it's crash-rated, and it has to be quite a large vehicle for a person to sit on top of it. If it's a vehicle - you know, a tractor - that's autonomous, it can be much smaller. Okay. And so, you see these very small autonomous agricultural robots, and they compact the soil much less, so they're better for soil health, and they can get to places where, you know, a large tractor might not be able to get.
Ruth: Mm-hmm.
Monica: And that's made possible by a lot of different algorithms.
So there's an algorithm called VSLAM.
Ruth: Mm-hmm.
Monica: Which is a subset of SLAM. SLAM is simultaneous localization and mapping.
Ruth: Okay.
Monica: So, the robot is figuring out where it is in space and creating sort of a map of where it is, where it has been. Your Roomba does this.
Ruth: Yeah, I was just thinking like my vacuum robot does.
Yeah, exactly. Yeah.
Monica: So VSLAM is the type of SLAM that relies on camera data to figure out where it is in space. And so, because of, you know, the camera capabilities, this board is really good at VSLAM. There's also some other localization techniques that are included in the SDK - mm-hmm - the software development kit. And so, depth from stereo is one: an application where you can do, like, obstacle avoidance based on two cameras, just like how our eyes work.
Ruth: Mm-hmm.
Monica: We're able to know where we are in 3D space because our eyes are set apart at this distance, and we're able to calculate the distance we are from objects. We can do that with two cameras, but we can also do that with one camera. So, there's also mono depth, which is using a single camera. These AI models have been trained on hundreds of thousands of images that have some sort of depth - oftentimes they'll actually just use 3D movies for the training data. Mm-hmm. And that allows us to create these single-camera depth estimation models.
Ruth: Okay.
Monica: So that's really cool.
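The geometry behind depth from stereo comes down to one relation: depth = focal length x baseline / disparity, where disparity is how far a point shifts between the two camera images. A minimal sketch, with invented example camera parameters:

```python
# Depth from stereo in one line: depth = focal_length * baseline / disparity.
# The focal length and baseline below are invented example values.
def depth_from_disparity(disparity_px, focal_length_px=800.0, baseline_m=0.12):
    """Estimate distance in metres to a point seen by both cameras."""
    if disparity_px <= 0:
        return float("inf")           # zero disparity => effectively at infinity
    return focal_length_px * baseline_m / disparity_px

print(depth_from_disparity(40.0))     # ~2.4 m for a 40-pixel disparity
```

Mono depth replaces this explicit two-camera geometry with a model that has learned a similar mapping from its training images.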
Ruth: So, with autonomous vehicles, you mean for example, also robots that run around by themselves in smart factories?
Or are we also talking self-driving cars? Or both?
Monica: More robots that run around by themselves in smart factories. Self-driving cars is a different use case, 'cause it requires, you know, a lot of pedestrian safety, and there's a lot of legal stuff also. Mm-hmm. This is more factory floor robots, agricultural robots - these sorts of use cases where there's not dozens of pedestrians and dogs and people out there.
Ruth: Legal suits waiting to happen.
Monica: Exactly.
Ruth: What are companies actually monitoring and optimizing with this technology?
Monica: So, there's definitely robotics applications. There's customers that are using these for robot arms, like factory robot arms. Mm-hmm. Agricultural uses, autonomous mobile robots. Yeah. These are all use cases that are actually happening.
Ruth: What results are they seeing? Do you get feedback from your customers saying, great, this has solved problem X, Y, Z that we've been having for years?
Monica: I have read some use cases. So, one use case was from a company that was trying to choose a board, and they needed LIDAR, multiple 4K cameras, and sensors to enable navigation, recognition, and manipulation for a robot arm.
Ruth: Mm-hmm.
Monica: So, this board was able to do all of that with its intensive graphics processing, the VPU and GPU. It was able to do real-time visual recognition that enabled, you know, all of the grasping that you would need in a robot arm - manipulation, recognition. That's one actual use case.
Ruth: Is it hard to deploy this?
Monica: A robotic arm?
Ruth: The edge AI, or the vision kit.
Monica: Oh, okay.
Ruth: Yeah.
Monica: So, we are working on a lot of documentation for various different use cases. Qualcomm also has a ton of documentation. I've used Qualcomm boards in the past, many years ago, and I remember being kind of frustrated with their documentation.
Mm-hmm. So, when I heard that we were gonna be working with Qualcomm again, you know, last year I guess it was, I was kind of prepared for the worst. Yeah. But I was really pleasantly surprised with the quality of the documentation that they have now.
Ruth: Mm-hmm.
Monica: And the depth - I mean, there's just so much of it. The one problem is that there's maybe too much documentation, but there's just a lot of different ways to use this board, and it goes very, very in depth.
Ruth: Okay.
Monica: So, if you are a beginner, there's documentation for you.
Ruth: Mm-hmm.
Monica: But also, if you're an expert in power usage or a machine learning expert, there's a lot of documentation that goes very, very in depth about how to run models in the most efficient way.
You could spend years, literally years, on this - getting your models more power efficient and running them across the heterogeneous compute.
Ruth: Mm-hmm. There can never be not enough documentation, right? Right, exactly.
Monica: Yeah. So, our goal on my team is to make it a little bit easier to navigate by providing some of the use cases that our board is most known for or useful for, as well as like find new creative uses for it.
Mm-hmm. And document those and create demos and applications for those.
Ruth: Is there something you're working on right now that you're most excited about?
Monica: Yes. We are working on some of these robotics applications. The board actually comes with ROS 2, which, as some people may know, is the Robot Operating System.
Ruth: Mm-hmm.
Monica: It's not actually an operating system - it's a set of software tools and libraries that enable robotics, and a lot of robotics companies are using it. It's open source. And so, we are doing a couple different ROS 2 applications right now. We're working on a robot - like an autonomous mobile robot.
Ruth: Mm-hmm.
Monica: And then we also have planned a robotic arm.
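For readers who haven't touched ROS 2: it gives you nodes that publish and subscribe to topics. Below is a minimal publisher node in Python, assuming a standard ROS 2 installation with rclpy; the node name, topic name, and message are arbitrary illustration choices, not part of Tria's demos.

```python
# Minimal ROS 2 publisher node using rclpy (assumes a ROS 2 installation).
# Node name, topic name and message content are arbitrary examples.
import rclpy
from rclpy.node import Node
from std_msgs.msg import String

class HelloPublisher(Node):
    def __init__(self):
        super().__init__("hello_publisher")
        self.pub = self.create_publisher(String, "chatter", 10)
        self.create_timer(1.0, self.tick)      # publish once per second

    def tick(self):
        msg = String()
        msg.data = "hello from the edge"
        self.pub.publish(msg)

def main():
    rclpy.init()
    rclpy.spin(HelloPublisher())

if __name__ == "__main__":
    main()
```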
Ruth: That's cool. Yeah. So, you get to play around all day with, uh, new gadgets. Yes. Yes. Are there any obstacles when trying to implement this where people typically stumble? Despite all the documentation.
Monica: Yeah, deploying models to the edge can be difficult.
Right now we're using Edge Impulse, which is browser-based software that allows you to train a model, optimize it for your edge device, and then deploy it to your edge device with just a couple of clicks, and so this really makes it so much easier. Okay. And Qualcomm recently actually acquired Edge Impulse, which was huge. Okay. So, this makes it so much easier versus in the past - or, I mean, currently, if you really want to go super in depth and get all the, you know, custom operators - where you'd have to take your model, convert it to one format, usually convert it to another format, and then spend a lot of time optimizing it, quantizing it, doing all these things to make it more power efficient and able to run on different parts of the hardware. Mm-hmm. And that can take a lot of time, because there's a lot of different choices that can, you know, make your model more efficient, and it's more of an art than a science oftentimes. Okay. Yes. So that can be very time consuming, and Edge Impulse - I'm so grateful for it - is making it a lot easier.
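For a sense of what the "manual" route looks like - converting and quantizing a trained model yourself rather than letting Edge Impulse handle it - here is a sketch using TensorFlow Lite post-training quantization. This is a generic illustration, not the Qualcomm or Edge Impulse toolchain, and "saved_model_dir" is a placeholder path.

```python
# Generic sketch of converting and quantizing a trained model for an edge
# target with TensorFlow Lite. Not the Qualcomm/Edge Impulse pipeline;
# "saved_model_dir" is a placeholder for your own trained model.
import tensorflow as tf

converter = tf.lite.TFLiteConverter.from_saved_model("saved_model_dir")
converter.optimizations = [tf.lite.Optimize.DEFAULT]   # enable post-training quantization
tflite_model = converter.convert()

with open("model_quantized.tflite", "wb") as f:
    f.write(tflite_model)
```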
Ruth: Okay.
I think we've talked about a 15-year life cycle. Does that hold up when systems run continuously in harsh conditions? I guess it kind of depends on the harsh conditions.
Monica: Sure. Yeah.
Ruth: Fair enough. Yeah.
Monica: But we do offer support over the life cycle.
Ruth: Mm-hmm. That's the main benefit, although that does sound long in today's AI age, where things change every week. Right?
Monica: Yeah, with, like, planned obsolescence. Mm-hmm. But yeah, I hope that this will be a board that, you know, can be used for the long term.
Ruth: Mm. I think it will be. And five cameras, boards, ruggedized - this sounds like a very expensive solution.
Monica: So, the price point for the kit, the kit includes the power supply. It has a heat sink, and then it also includes a 12-megapixel MIPI camera. Mm-hmm. So, you can get started right away with the camera applications and the whole kit is about $700.
Ruth: Okay. And that doesn't sound like a lot, right?
Monica: Compared to some boards? No. Okay. Yeah. Yeah.
Ruth: Do you have it next to you? Because for the listeners who join us on YouTube, we can actually show it. And for the listeners on the podcast platforms, they can maybe switch to YouTube later and check it out. It looks like a Raspberry Pi - if I'm allowed to say that.
Monica: Yes, you are. It does have this 40-pin connector, similar to what a Raspberry Pi has.
Ruth: Yeah,
Monica: it's a little bit bigger form factor.
Ruth: Okay.
Monica: It's the smart form factor. So, you can see here there's two different boards and then there's a heat sink. So, here's the baseboard on the bottom.
Ruth: Mm-hmm.
Monica: This is the SOM that has the silicon, on top. On top of that, there's the heat sink, because that's where all the heat is coming from - this chip. Mm. Yeah. Terrific. Then it has this GPIO, it has Ethernet, USB-C for both power and - mm-hmm - programming. It has four USB ports. Mm-hmm. And then it also has a mini-HDMI port. Terrific. Okay. It also has MIPI DSI for a display, and then - mm-hmm - here's, like, the camera: there's four MIPI CSI camera ports pinned out.
Ruth: Okay. Yeah. Looks very cool. Thank you. Yeah. And is there something edge AI cannot do yet? That's a hard question.
Monica: Okay. That's an interesting one. I mean, it can't think for itself, in spite of its name. It's still reliant on lots of data to train it. It's not going to be truly reasoning like a human does anytime soon, in my opinion. But it's really good for, like, taking a lot of data and, you know, figuring out which plant has a disease, or those sorts of use cases. LLMs, of course, are huge right now, and those can be run on this board somewhat - a little bit, maybe too slow for real use cases. We have another board that is much, much faster and good for LLMs - that's the IQ9. It's a very similar board.
Ruth: Okay.
Monica: But yeah, so LLMs can be run at the edge, but of course they have their limitations - they hallucinate. Mm. Which is a really big problem for a lot of applications. Yeah, I mean, it's good for what it's good for, but don't trust it with your life. Maybe.
Ruth: What other advice do you have for people who want to start exploring your Vision AI Kit? What is a good way to start?
Monica: So, we have a Hackster.io project. I still use Hackster.io quite a bit for documentation, just 'cause I really like the mix of images, and you can add your code there. So, we have a getting-started project there, and that could be a good place to start out with this board.
Mm-hmm. Uh, we publish a lot of our reference designs on GitHub.
Ruth: Mm-hmm.
Monica: So, github.com/Avnet - that's where our GitHub is.
Ruth: Mm-hmm.
Monica: And yeah, we have some pretty good documentation.
Ruth: Terrific. Before we come to the end, is there anything I haven't asked you that you wish I had asked you?
Monica: I just wanna reiterate how important power efficiency is. So, mm-hmm, we measure the throughput of a chip in TOPS - mm-hmm - tera operations per second. So, it's how many operations it can do, and operations correlate with how fast it can run inference: the more operations it can do, the higher the TOPS, and the faster it will run a model, which is good. Mm-hmm. Usually. And so this is a 13-tera-operations-per-second board, which is, I think, kind of mid-range now for edge AI. Okay. It used to be, we used to be like the high, high range, but now that they're creating faster boards, it's mid-range. Usually the higher TOPS you get, the more power consumption. This board manages to keep the power consumption quite low. So this is a heat sink, which is passive cooling. For a similar board, you would probably need to have active cooling, like a fan. Mm-hmm. If you see a fan on a board, that means it's generating a lot of heat, and heat is power consumption. Okay. Got it. So that is one of the impressive things about this board: that it can run so much throughput with so little power consumption. That's really important for battery-powered robots.
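The figure of merit behind "so much throughput with so little power" is simply TOPS per watt. The 13 TOPS figure comes from the conversation; the power draws below are invented examples to show how the comparison works, not measured numbers for any board.

```python
# TOPS per watt: throughput divided by power draw.
# 13 TOPS is from the conversation; the wattages are invented examples.
def tops_per_watt(tops, watts):
    return tops / watts

print(tops_per_watt(13, 7))    # passively cooled design: ~1.9 TOPS/W
print(tops_per_watt(13, 25))   # fan-cooled, power-hungry design: ~0.5 TOPS/W
```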
Ruth: Makes sense. If you had to put together a playlist for this episode, what song would you put on it?
Monica: I would say it's like Daft Punk, Robot Rock.
Ruth: There you go. Really cool. Thank you. I will put your song on the YouTube playlist we've made for all our guests, so we have a nice soundtrack for all the cool tech stuff we talk about. All right. It was really interesting to chat with you, Monica. It seems like the technology is really maturing, and the use cases you have shared with us are really proving that the industrial reality is setting in.
Thank you for being on the show and sharing your insights and expertise with us.
Monica: Thank you. Thanks for having me.
Ruth: Thank you for listening to We Talk IoT - stay curious and keep innovating. This was Avnet Silica's We Talk IoT. If you enjoyed this episode, please subscribe and leave a rating. Talk to you soon.
Artificial Intelligence: Transforming Industries and Shaping Tomorrow

Despite its immense potential, AI is a complex and rapidly evolving engineering discipline, which can make mastering it a daunting task. Partnering with an experienced collaborator who understands the intricacies of selecting appropriate data sets, tools, software, and hardware components can significantly reduce development time and mitigate risks.
Effective AI implementation requires a balanced integration of hardware and software, along with the right machine learning (ML) algorithms. Avnet Silica brings extensive expertise in implementing machine learning on the edge, in the cloud, and on-premises, and supports customers in understanding and building their AI-based applications.
About the We Talk IoT Podcast
We Talk IoT is an IoT and smart industry podcast that keeps you up to date with major developments in the world of the internet of things, IIoT, artificial intelligence, and cognitive computing. Our guests are leading industry experts, business professionals, and experienced journalists as they discuss some of today’s hottest tech topics and how they can help boost your bottom line.
From revolutionising water conservation to building smarter cities, each episode of the We Talk IoT podcast brings you the latest intriguing developments in IoT from a range of verticals and topics.
You can listen to the latest episodes right here on this page, or you can follow our IoT podcast anywhere you would usually listen to your podcasts, and you'll be notified of all the latest episodes.