Sophisticated Parallel Vector Processing Tackles Challenging 5G Workloads

Itamar Kahalani, Product Line Manager, Avnet Israel

As mobile network infrastructure becomes more complex and must respond to constantly changing conditions, the implementation of artificial intelligence (AI) is set to become increasingly important. Power, latency and form-factor constraints mean a better organised, configurable, and more streamlined approach to AI-based data processing will be necessary. This article discusses how Avnet Silica’s programmable logic supply partner AMD-Xilinx has overcome the engineering obstacles this presents, through the highly parallel processing architecture it has developed.

 

Entering the 5G era

There are many benefits to be derived from the deployment of 5G mobile networks. In simple terms, 5G technology enables order-of-magnitude increases in data rates and in the number of connections per cell. It also offers the ultra-low-latency responsiveness mandated to support safety-critical operation in Industry 4.0 and autonomous driving applications, as well as enhancing the user experience for those wearing virtual reality headsets by preventing unwanted lag in the visual content being rendered.

To attain the much greater bandwidths that 5G use cases depend upon, innovative propagation techniques have been created and further bands in the RF spectrum opened up. As well as the traditional frequency bands used by 3G and 4G base stations, 5G infrastructure will need to support new sub-6GHz bands and mmWave frequencies.

mmWave transmissions (conventionally those above 30GHz, though 5G’s mmWave bands begin at 24GHz) support far faster data rates but also have a very short range. Their propagation can be hindered by foliage, glass and even moisture in the air. It will therefore be necessary to have densely packed distributions of small cells that supplement the transmissions emanating from large base stations. A report recently published by analyst firm IDTechEx predicts that over 45 million small cells will have been deployed by 2031. Likewise, use of massive MIMO (where arrays made up of numerous antennas transmit multiple data streams simultaneously) and beamforming will both be essential if 5G is to provide network operators with the necessary coverage.
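The range penalty of higher carrier frequencies follows directly from the free-space path-loss formula, FSPL(dB) = 20log₁₀(d) + 20log₁₀(f) + 20log₁₀(4π/c). The sketch below compares a mid-band carrier with a mmWave carrier; the two frequencies and the 100 m distance are illustrative choices, not values from the article:

```python
import math

def fspl_db(distance_m: float, freq_hz: float) -> float:
    """Free-space path loss in dB: 20log10(d) + 20log10(f) + 20log10(4*pi/c)."""
    c = 299_792_458.0  # speed of light, m/s
    return (20 * math.log10(distance_m)
            + 20 * math.log10(freq_hz)
            + 20 * math.log10(4 * math.pi / c))

# Compare a 3.5 GHz mid-band carrier with a 28 GHz mmWave carrier over 100 m.
loss_sub6 = fspl_db(100, 3.5e9)   # ~83.3 dB
loss_mmwave = fspl_db(100, 28e9)  # ~101.4 dB
print(f"3.5 GHz: {loss_sub6:.1f} dB, 28 GHz: {loss_mmwave:.1f} dB")
```

The ~18 dB gap (a factor of 64 in received power, before foliage or atmospheric losses are even counted) is why mmWave cells must be small and densely deployed.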

 

Beamforming and its processing requirements

Through beamforming, the spectral resources available within a cell can be optimised to give greater coverage and enhance quality-of-service (QoS). Phased array antennas are configured to focus the wireless signal in a given direction. This means that areas in the cell where the number of users needing to connect is particularly high can be prioritised. The technique may even be utilised for individual users. 
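The steering described above amounts to applying a per-element phase ramp across the array. A minimal sketch for a uniform linear array follows; the 8-element count and half-wavelength spacing are illustrative assumptions, not parameters from the article:

```python
import numpy as np

def steering_weights(n_elements: int, d_over_lambda: float, theta_deg: float):
    """Phase-only weights that point a uniform linear array at theta_deg."""
    n = np.arange(n_elements)
    phase = -2 * np.pi * d_over_lambda * n * np.sin(np.radians(theta_deg))
    return np.exp(1j * phase)

def array_factor(weights, d_over_lambda: float, theta_deg: float):
    """Normalised array response in the direction theta_deg."""
    n = np.arange(len(weights))
    v = np.exp(1j * 2 * np.pi * d_over_lambda * n
               * np.sin(np.radians(theta_deg)))
    return abs(weights @ v) / len(weights)

# 8-element, half-wavelength-spaced array steered to 30 degrees.
w = steering_weights(8, 0.5, 30)
print(round(array_factor(w, 0.5, 30), 6))  # 1.0 -- full gain on the steered beam
print(round(array_factor(w, 0.5, 0), 6))   # 0.0 -- a null off-beam (broadside)
```

In a live cell these weights must be recomputed as users move, which is precisely the continuous adaptation workload discussed next.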

If beamforming is to be undertaken, however, the system must constantly adapt to changing circumstances within the cell. It must therefore continually acquire and process data relating to the phase and gain coefficients, so that appropriate adjustments to the digital pre-distortion (DPD) function can be made. Through these activities, elevated beamforming performance can be maintained, with DPD coefficient calculations needing to be carried out ten times per second. This calls for powerful real-time AI-based digital signal processing (DSP) capabilities along with support for deterministic operation, so that situations can be responded to quickly and latency issues do not arise.
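The periodic coefficient fit at the heart of DPD can be illustrated with a toy indirect-learning example: fit a polynomial that maps the amplifier output back to its input, then use those coefficients as the predistorter. The real-valued amplifier model and polynomial orders below are assumptions chosen for clarity; production DPD operates on complex baseband samples, typically with memory terms as well:

```python
import numpy as np

rng = np.random.default_rng(0)

def pa(x):
    """Toy power-amplifier model with mild odd-order compression (assumed)."""
    return x - 0.10 * x**3

# Indirect learning: fit a post-inverse that maps PA output back to PA input,
# then reuse its coefficients as the predistorter.
x = rng.uniform(-1, 1, 4000)                  # training samples
y = pa(x)
basis = np.column_stack([y, y**3, y**5])      # odd-order polynomial basis
coeffs, *_ = np.linalg.lstsq(basis, x, rcond=None)

def predistort(x):
    return coeffs[0] * x + coeffs[1] * x**3 + coeffs[2] * x**5

test = rng.uniform(-1, 1, 1000)
err_raw = np.max(np.abs(pa(test) - test))             # distortion without DPD
err_dpd = np.max(np.abs(pa(predistort(test)) - test)) # residual with DPD
print(err_raw, err_dpd)  # predistortion shrinks the error substantially
```

Repeating this fit ten times per second, across many antenna paths, is what drives the real-time DSP requirement described above.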

 

Applying AI in constrained system design scenarios

Until now, the real difficulty with AI inferencing has been implementing a data processing solution that offers both strong performance and low latency, while simultaneously keeping power consumption low and the form factor small. To compound the problem, the semiconductor industry’s ability to continue adhering to Moore’s Law is rapidly coming to an end. Conventional processing architectures are no longer scaling sufficiently to keep pace with new demands.

AMD-Xilinx developed its AI Engine processing technology with the objective of addressing the compute-heavy AI-based workloads that are now beginning to emerge, where power efficiency and small system footprint are both major priorities. Part of the company’s Versal adaptive compute acceleration platform (ACAP) architecture, this very long instruction word (VLIW) and single instruction multiple data (SIMD) vector processing technology is intended for machine learning and advanced DSP applications. 5G mobile communications is one of its main targets, but data centres, LiDAR in autonomous vehicles, medical imaging/diagnosis equipment and edge-located industrial automation hardware are among the other places where it will bring value.

What really differentiates the AI Engine approach is the space and energy savings that can be achieved: compared with conventional programmable logic DSP implementations, AMD-Xilinx cites an 8x reduction in silicon area and a 40% reduction in power budget.

Figure 1: Schematic showing an AI Engine’s construction

 

AI Engine structure

An AMD-Xilinx AI Engine comprises a 2D array of multiple tiles. Each tile features a 32-bit C-programmable RISC scalar processor, accompanied by dedicated instruction memory, a relatively large RAM reserve, plus fixed-point and floating-point vector processing elements with associated vector registers. A synchronisation handler, as well as trace and debug functions, is also included. Dedicated interconnect, along with direct memory access (DMA) engines for scheduled data movement, is pivotal in ensuring determinism.

With each tile having a VLIW SIMD vector processor, AI Engine technology has the capacity to deliver up to 6-way instruction parallelism per clock cycle: two scalar operations, two vector loads, one vector store, and one fixed- or floating-point vector operation.
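The data-level parallelism of the vector unit can be sketched in NumPy terms. The 32-lane width below is an illustrative assumption; actual lane counts in the hardware depend on operand precision:

```python
import numpy as np

LANES = 32  # illustrative lane count (real width depends on data type)

def scalar_mac(acc, a, b):
    """One multiply-accumulate per step: LANES steps for LANES results."""
    out = list(acc)
    for i in range(LANES):
        out[i] += a[i] * b[i]
    return out

def vector_mac(acc, a, b):
    """All LANES multiply-accumulates in one SIMD-style vector operation."""
    return acc + a * b

acc = np.zeros(LANES, dtype=np.int32)
a = np.arange(LANES, dtype=np.int16)
b = np.full(LANES, 3, dtype=np.int16)
print(vector_mac(acc, a, b))  # [0 3 6 ... 93] -- same result as the scalar loop
```

Because the hardware issues such a vector operation alongside scalar work and memory moves in a single VLIW instruction, throughput scales with the lane count rather than the clock rate alone.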

Through the configurable distributed floating-point blocks, compute performance can be tuned precisely to the specific application criteria, resulting in more efficient operation. The dedicated data and instruction memories are static, meaning that inconsistencies caused by cache misses and the associated fills are avoided.

By leveraging AI Engine technology in 5G infrastructure, it will be possible for real-time DSP work to be undertaken while still respecting power consumption and form factor limitations. 5G fronthaul is one of the places where it will be particularly valuable. Here it will be able to attend to the processing demands of the radio unit (RU) and distributed unit (DU) component parts of the radio access network (RAN), so the on-the-fly calculations needed for beamforming can be applied and network capacity increased.

Figure 2: AMD-Xilinx AI Engine technology integrated into a small cell RU

In conclusion, the AI Engine technology that AMD-Xilinx distributes to customers through Avnet Silica’s supply channels offers a flexible heterogeneous compute resource that is highly suited to data-intensive applications such as 5G beamforming, as well as being applicable to various other tasks. Data throughput is heightened, and latency minimised, thanks to the highly parallel architecture employed. Since this platform uses programmable logic as its foundation, there is ample provision for adaptations to accommodate new, more refined beamforming algorithms as they are introduced.
