Enabling reconfigurable computing with field-programmable gate arrays

The reimagining of IT networks will begin when reconfigurable computing solutions like field-programmable gate arrays (FPGAs) are available to all IT organizations.

In my last column, I wrote about how the standard computing platform is being reimagined by reconfigurable computing and how hyper-scale cloud companies are leading the way with the use of SmartNICs and field-programmable gate arrays (FPGAs). Now, let’s look at why FPGAs are so powerful in this context, the major challenge of working with FPGAs, and how vendors and companies are addressing the challenge.

Why FPGAs?

What is it about FPGAs that makes them so different and yet so powerful compared to CPUs? One of the main reasons is that they are completely reconfigurable. Unlike CPUs and other ASICs, whose logic is fixed in silicon, the logic in an FPGA is not static but can be rearranged to support whatever workload you want to support. With an ASIC, you need to commit to a certain feature set up front, as this cannot be changed once the chip is produced. With an FPGA, you only need to commit to the capacity of the chip, in terms of available logic gates and Look-Up Tables (LUTs), the small tables that map combinations of inputs to outputs to implement a given logic function. But, what the FPGA actually does is entirely up to the FPGA solution developer and how he or she defines the LUTs.
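To make the LUT idea concrete, here is a minimal software model in Python. This is purely illustrative: a real LUT is a hardware structure inside the FPGA fabric, not code, but the principle is the same, since the table's contents alone determine what logic function it implements.

```python
# Software model of a 2-input LUT: a truth table indexed by the input bits.
# (Real FPGA LUTs typically have 4 to 6 inputs; this is a conceptual sketch.)

def make_lut(truth_table):
    """Return a function that looks up its input bits in the given truth table."""
    def lut(*inputs):
        index = 0
        for bit in inputs:
            index = (index << 1) | bit  # pack input bits into a table index
        return truth_table[index]
    return lut

# "Configure" the LUT as XOR: outputs for inputs 00, 01, 10, 11.
xor_gate = make_lut([0, 1, 1, 0])
print(xor_gate(1, 0))  # 1

# "Reconfigure" the same structure as AND by loading a new table,
# analogous to loading a new image onto the FPGA.
and_gate = make_lut([0, 0, 0, 1])
print(and_gate(1, 1))  # 1
```

The point of the sketch: nothing about the structure changes between XOR and AND, only the table contents, which is why reloading an FPGA image can completely change what the chip does.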

This means FPGAs can be used, reconfigured and reused on the fly: changes are implemented by updating the FPGA with a new image file, often called a bitstream. This can be done remotely and live, which is a huge advantage in an operational hyper-scale data center.

With FPGAs, it is possible to parallelize workloads, so several instances of the same processing pipeline can be established at once. For compute-intensive applications, like encryption or compression, this provides an opportunity to significantly accelerate processing.
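A rough software analogy for this pipeline parallelism, sketched in Python rather than hardware: run several instances of the same processing stage (here, zlib compression standing in for a hardware compression engine) concurrently over independent chunks of data.

```python
# Software analogy for FPGA pipeline parallelism: several workers each run
# the same processing stage on independent data chunks at the same time.
import zlib
from concurrent.futures import ThreadPoolExecutor

def compress_chunk(chunk: bytes) -> bytes:
    """One 'pipeline instance': compress a single chunk of data."""
    return zlib.compress(chunk)

# Eight independent chunks of input data (64 KB each).
chunks = [bytes([i]) * 65536 for i in range(8)]

# Each worker acts like one replicated hardware pipeline.
with ThreadPoolExecutor(max_workers=4) as pool:
    compressed = list(pool.map(compress_chunk, chunks))

print(len(compressed))  # 8
```

In an FPGA the replication is spatial (multiple physical pipelines on the chip) rather than scheduled onto a few cores, which is where the large speedups for such workloads come from.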

CPUs are designed to be generic and handle a number of different types of applications and workload needs at the same time; that is what makes them so versatile and powerful. But, when this versatility is combined with the parallelization and acceleration power of an FPGA to offload specific functions on demand, then you really do get the best of both worlds!

The major challenge is abstracting the complexity of FPGAs

One of the major challenges of working with FPGAs is their complexity. The tools for programming FPGA-based solutions are powerful, but they are proprietary to each vendor. They are not based on the programming languages or concepts that application software developers traditionally use, such as C++, but on tools and languages more familiar to ASIC developers, such as Hardware Description Languages (HDLs, like Verilog and VHDL) and Register Transfer Level (RTL) design abstractions.

Getting a design to meet performance requirements on an FPGA is not trivial, either. Logic synthesis and the related timing closure issues that often arise during development are among the challenges that separate mediocre FPGA solution developers from great ones. It can take many years of experience to become proficient in high-performance FPGA solution design.

While efforts have been made to make programming FPGAs easier with high-level tools like OpenCL, there is still some way to go before these tools can be used to create high-performance FPGA solution designs.

An alternative approach is to develop software frameworks on top of reconfigurable computing platforms that can provide reliable APIs but abstract the FPGA details. In other words, rather than programming the FPGA directly, APIs are called to execute the functions implemented in FPGAs, which can be loaded on demand using FPGA images.

With this approach, it is possible to deliver reconfigurable computing solutions that do not require in-depth knowledge of FPGA solution design but can take advantage of the benefits of FPGA processing.
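A hypothetical sketch of what such an API layer could look like, in Python. All names here (FpgaDevice, AccelerationApi, the image file names) are illustrative inventions, not any vendor's real SDK; the point is only the shape of the abstraction, where the caller invokes ordinary functions and the library loads the matching FPGA image on demand.

```python
# Illustrative sketch: a well-defined API in front of an FPGA, so callers
# never touch HDL or RTL. All names here are hypothetical.
import zlib

class FpgaDevice:
    """Stand-in for an FPGA board; tracks which image is loaded."""
    def __init__(self):
        self.loaded_image = None

    def load_image(self, image_name: str) -> None:
        # A real system would reprogram the FPGA here (e.g. over PCIe).
        self.loaded_image = image_name

class AccelerationApi:
    """The API the customer programs against; the FPGA is a hidden detail."""
    def __init__(self, device: FpgaDevice):
        self.device = device

    def _ensure(self, image: str) -> None:
        # Load the FPGA image for this function on demand.
        if self.device.loaded_image != image:
            self.device.load_image(image)

    def compress(self, data: bytes) -> bytes:
        self._ensure("compress.img")
        return zlib.compress(data)  # software stand-in for the hardware path

api = AccelerationApi(FpgaDevice())
out = api.compress(b"hello" * 1000)
print(api.device.loaded_image)  # compress.img
```

The design choice worth noting: because the API is stable, the vendor can ship new or improved FPGA images behind it without the customer changing a line of application code.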

Many vendors in the past have taken such an approach. My current company, Napatech, and my previous company, TPACK (now owned by Intel through the Altera acquisition), are two examples. The customer benefited from the reconfigurability and upgradability of FPGAs but interacted with the solution through well-defined APIs.

What this means for companies that today are experts in programming for CPU-based applications is that they can continue to build on this expertise without needing to develop FPGA solution design skills.

Over time, I expect that the tool chains for FPGAs will become more standardized, that the programming tools for FPGAs will become easier to use for those who are not familiar with HDL and RTL, and that it will become more common in general to work with FPGAs. But, I also expect a world where we will see vendors emerging that offer reconfigurable computing solutions based on FPGAs that are easy to consume for those who do not wish to become FPGA experts, which will include solutions for server platforms, virtual environments and even FPGA-as-a-Service.

Reconfigurable computing solutions have been driven by the creativity and innovation of the hyper-scale cloud companies, but the reimagining of IT networks will truly begin when the power of reconfigurable computing solutions is made available to all IT organizations in a way that is easy to consume.

This article is published as part of the IDG Contributor Network.
