Elements in the development of embedded vision systems for real-world applications
Today’s combination of embedded systems and computer vision has made embedded vision systems a reality. In the coming years there will be a rapid proliferation of embedded vision technologies, including those targeted at low-light conditions, high-definition imaging and high-end applications drawing on specially developed processing engines. An increasing number of products will emerge with visual inputs for a wide range of applications in the consumer, automotive, industrial, healthcare and home-automation markets.
New wave
The Internet of Things (IoT) is changing the electronics industry dramatically, with the expectation that billions of devices will become interconnected. The aim of IoT is to make devices intelligent and accessible to users everywhere in the world. Devices are often recognised as intelligent when they greatly simplify our lives, for example by automatically recognising an inhabitant via a videophone and letting that person enter a building, to name just one of many new capabilities. Generally, devices are much more valuable when they interact with the physical world, and visual inputs are especially powerful: they capture a lot of information and can help a device interact with its physical environment. A classic example is robotics, which has used image sensors from the beginning. The image sensors, the input to the system, are the eyes of a robot, and help it steer its motors, the output of the system, with high efficiency.
Moreover, recent advances in machine learning with convolutional neural networks (CNNs) and other neural-network technologies make it possible to develop self-learning intelligent vision systems.
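To make the idea concrete, the snippet below is a minimal sketch of a CNN image classifier, assuming the PyTorch library is available; the layer sizes, input resolution and class count are purely illustrative.

```python
import torch
import torch.nn as nn

class TinyVisionCNN(nn.Module):
    """Minimal CNN sketch: two convolutional blocks and a classifier head."""
    def __init__(self, num_classes=10):  # class count is illustrative
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1),  # RGB in, 16 feature maps out
            nn.ReLU(),
            nn.MaxPool2d(2),                             # halve spatial resolution
            nn.Conv2d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(32 * 16 * 16, num_classes)  # assumes 64x64 input

    def forward(self, x):
        x = self.features(x)
        return self.classifier(x.flatten(1))

# Push one dummy 64x64 RGB frame through the network
logits = TinyVisionCNN()(torch.randn(1, 3, 64, 64))
print(logits.shape)  # torch.Size([1, 10])
```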
Challenges
Embedded vision has the potential to create huge value in almost any electronics market, and will grow rapidly with ongoing improvements in hardware and software. However, many challenges are involved in the development of an embedded vision application across the entire system.
Raw image data, whether video or still images, will need to be optimised and processed, and the quality of the source matters: if the lens is not good enough, for example, it will compromise the entire image-processing chain. In addition, the amount of data captured can be enormous, especially with high-resolution video and real-time processing. Many high-end vision applications will require parallel processing systems or dedicated hardware such as GPUs, DSPs, FPGAs or co-processors. However, embedded vision systems are often highly constrained in terms of cost, size and power consumption, and while a high-end processing engine might have the power, it could also be too expensive or power hungry for an application.
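To give a sense of scale, a back-of-the-envelope calculation (the figures are illustrative): uncompressed full-HD colour video at 60 frames per second already approaches 3 Gbit/s.

```python
# Back-of-the-envelope estimate of raw video bandwidth (illustrative figures).
width, height = 1920, 1080      # full-HD resolution
bits_per_pixel = 24             # 8-bit RGB
fps = 60                        # frames per second

bits_per_second = width * height * bits_per_pixel * fps
print(f"Raw bandwidth: {bits_per_second / 1e9:.2f} Gbit/s")                  # ~2.99 Gbit/s
print(f"Per frame:     {width * height * bits_per_pixel / 8 / 1e6:.1f} MB")  # ~6.2 MB
```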
Importantly, embedded vision is designed to work in real-world conditions that are continuously changing, including light levels, motion and orientation. Deploying specialised vision algorithms that can cope with this variability is key. Relying only on simulations will not work, and real-world testing is required, which can be time-consuming. This is especially the case in automotive, safety and robotics applications.
Vision systems
Embedded vision systems include a broad range of components, and there are many different ways of integrating these, but primarily they include imaging, processing and computer vision technologies.
At the input end of the system, CMOS and charge-coupled device (CCD) are the two leading technologies available today for image capture. While CCD has traditionally delivered higher quality overall, improvements in CMOS imaging technology over the last decade have closed the gap. Better low-light handling, improved image quality, lower power consumption and lower cost have meant that CMOS sensors are now deployed far more widely than CCDs. CMOS technology also continues to evolve, with continuing reductions in pixel size and increases in resolution, along with the higher-speed interconnect and bandwidth these require. In addition, ever-smaller image-sensor packages and modules are becoming available, increasingly enabling compact dual-camera solutions and stereo-vision implementations that help compensate for distortion, detect depth, improve dynamic range and enhance sharpness.
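As a sketch of how a stereo pair can be turned into depth information, the snippet below uses block-matching stereo correspondence from the open-source OpenCV library (discussed further below); the file names are placeholders, and the two images are assumed to be already rectified.

```python
import cv2

# Load a rectified stereo pair as grayscale (file names are placeholders).
left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)
right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

# Block-matching stereo: disparity is inversely proportional to depth.
stereo = cv2.StereoBM_create(numDisparities=64, blockSize=15)
disparity = stereo.compute(left, right)  # fixed-point disparity map

# Scale the disparity map to 8 bits for easy viewing and save it.
cv2.imwrite("disparity.png",
            cv2.normalize(disparity, None, 0, 255,
                          cv2.NORM_MINMAX).astype("uint8"))
```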
The choice of processor needs to be determined by aspects such as real-time performance, power consumption, image accuracy and algorithm complexity. There has been continuous improvement in processing power and vision algorithms, as well as increasing integration of simultaneous localisation and mapping (SLAM) for automotive, robotics and drone applications.
Local memory storage is also required to compare images or store image data for future analysis. Both volatile and non-volatile memory types are commonly used in imaging systems, storing either selected elements or all of the captured image data. Specialised vision algorithms are also a key element in the system, for example to control video image input and enhance video for specific needs, such as colour enhancement and improved object detection.
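As one concrete example of colour enhancement, the sketch below applies contrast-limited adaptive histogram equalisation (CLAHE), as implemented in the OpenCV library introduced in the next paragraph, to the luminance channel of a frame; the file name and parameters are illustrative.

```python
import cv2

# Contrast-limited adaptive histogram equalisation (CLAHE) on the luminance
# channel only, so colours are not distorted (parameters are illustrative).
frame = cv2.imread("frame.png")               # placeholder input image
lab = cv2.cvtColor(frame, cv2.COLOR_BGR2LAB)  # work in Lab colour space
l, a, b = cv2.split(lab)

clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
enhanced = cv2.merge((clahe.apply(l), a, b))  # enhance L, keep a/b untouched

cv2.imwrite("enhanced.png", cv2.cvtColor(enhanced, cv2.COLOR_LAB2BGR))
```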
Since the introduction of the open-source OpenCV computer vision library several years ago, the process of developing and implementing algorithms has changed dramatically. OpenCV provides a large body of code, including C/C++ functions, centred on vision-based applications, making it easier to port and run algorithms on embedded processors. Many third parties offer vision and video processing solutions based on OpenCV or similar libraries and frameworks, for many different applications. Silicon vendors typically also offer vision libraries to enhance their products’ embedded vision processing offerings.
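A minimal capture-and-process loop using OpenCV’s Python bindings gives a feel for how little code a basic pipeline requires; the camera index and Canny edge-detection thresholds below are illustrative.

```python
import cv2

# Minimal OpenCV pipeline: grab frames from the default camera, convert to
# grayscale and run Canny edge detection (thresholds are illustrative).
cap = cv2.VideoCapture(0)                  # device index 0 = default camera
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 100, 200)      # low/high hysteresis thresholds
    cv2.imshow("edges", edges)
    if cv2.waitKey(1) & 0xFF == ord("q"):  # press 'q' to quit
        break
cap.release()
cv2.destroyAllWindows()
```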
A further and increasingly important element, especially in the era of IoT, is connectivity, whether wired or wireless, depending on the application and its requirements. Another consideration is deploying software that runs algorithmic analysis on cloud-based servers.
Overall, it is crucial to select the right elements for the system and application and then move on to fine-tuning all of these parts, including hardware, software and algorithms. This is not always easy, and given the complexity of embedded vision, developers will need professional tools to reduce development cost, time and risk and to bring embedded vision projects to market quickly.
Complete solutions
Avnet Silica has extensive experience in helping customers develop embedded vision applications. The company offers virtually all of the building blocks required for a complete embedded vision system, including optimised hardware and software, drivers and applications. The library of blocks available ranges from the image sensor or camera module at the input, through to dedicated hardware including the processors, memory and power components needed to meet critical processing and power-consumption requirements. All these blocks are further allied with software development tools, camera driver software, example application reference designs and extensive know-how in image-processing software and algorithms.
In addition to helping with the development of full-custom solutions to aid customers in building their embedded vision platforms and products, Avnet Silica has also developed a wide range of off-the-shelf advanced camera development solutions. For example, the PicoZed embedded vision kit builds upon the highly flexible PicoZed system-on-module (SoM), which in turn is based on the Xilinx Zynq-7000 programmable system-on-chip (SoC). The PicoZed kit is ideal for machine-vision applications and includes all the hardware, software and IP components necessary for the development of custom video applications. It supports reVISION, a reconfigurable acceleration stack optimised for computer vision and vision-guided machine-learning applications. The stack includes resources for platform, algorithm and application development, provides hardware-accelerated OpenCV functions and supports the most popular neural networks.
A second example is the Avnet Silica STM32F7 camera development kit. Low-cost and Mbed-enabled, it offers low current consumption, USB interfacing, a 4.3-inch colour capacitive touchscreen display, and all the hardware and software needed for the fast development of embedded vision for IoT, home-automation and other video applications. A third kit, the low-power Kinetis kit based on the NXP Kinetis K82F Cortex-M4 microcontroller, includes a miniature VGA camera module with flex connector, a 90° horizontal-field-of-view lens and an IR filter, and can capture still images or stream real-time low-resolution video.
Avnet Silica is continuing to develop and add more products to this off-the-shelf range of kits, giving customers a wide choice of advanced embedded vision solutions.
***
This article is available in other languages at the following publications:
- Aktuel Elektronik (Danish)
- Elektroniktidningen (Swedish)
- ETN.fi (Finnish)
- Automazione Oggi (Italian)
- Markt & Technik (German)