Xilinx® Alveo™ U200 Data Center accelerator cards are designed to meet the constantly changing needs of the modern Data Center, providing up to 90X higher performance than CPUs for key workloads, including machine learning inference, video transcoding, and database search & analytics. Built on the Xilinx 16nm UltraScale™ architecture, Alveo accelerator cards are adaptable to changing acceleration requirements and algorithm standards, capable of accelerating any workload without changing hardware, and reducing overall cost of ownership.
Enabling Alveo accelerator cards is an ecosystem of Xilinx and partner applications for common Data Center workloads. For custom solutions, Xilinx’s Application Developer Tool Suite (Vitis™ environment) and Machine Learning Suite provide the tools for developers to bring differentiated applications to market.
- Fast: Highest Performance
- Adaptable: Accelerate Any Workload
- Accessible: Cloud <-> On-Premises Mobility
Alveo optional accessories extend the capabilities and access to Alveo data center acceleration cards. Accessories include power adapter cables and USB cables.
For full product specifications, refer to the data sheet.
| Specification | Alveo U200 |
| --- | --- |
| INT8 TOPs (peak) | 18.6 |
| Width | Dual Slot |
| Off-chip Memory Capacity | 64 GB |
| Off-chip Total Bandwidth | 77 GB/s |
| Internal SRAM Capacity | 35 MB |
| Internal SRAM Total Bandwidth | 31 TB/s |
| Network Interfaces | 2x QSFP28 (100GbE) |
| Look-up Tables (LUTs) | 892,000 |
| **Power and Thermal** | |
| Maximum Total Power | 225 W |
We’ve developed an ecosystem of Xilinx and partner solutions for the most common workloads. Alveo Data Center accelerator cards deliver dramatic acceleration across a broad set of applications and are reconfigurable to fit the changing workloads of the modern data center. Compare how Alveo Data Center accelerator cards perform against traditional CPU architectures.
The preferred design flow for targeting the Alveo Data Center accelerator card uses the Vitis™ software platform. Steps to deploy and develop using Vitis are given below. Long-time FPGA designers may instead prefer traditional design flows, such as RTL or HLx, which do not require installing the Vitis platform.
Follow steps 1 and 2 for deploying or developing applications on the U200 accelerator card.
The Xilinx Runtime (XRT) is a low-level communication layer (APIs and drivers) between the host and the card.
IMPORTANT: Enter the following commands before installing XRT.

On RHEL:

$ sudo yum-config-manager --enable rhel-7-server-optional-rpms
$ sudo yum install -y https://dl.fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpm

On CentOS:

$ sudo yum install epel-release
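With the repositories enabled, XRT can then be installed from the downloaded package and the card sanity-checked. The package file name below is illustrative only (it depends on the XRT release you downloaded); the `xbutil` commands shown are the classic XRT card-management utilities.

```shell
# Illustrative file name: substitute the actual XRT .rpm you downloaded.
$ sudo yum install ./xrt_202010.2.6.655_7.4.1708-x86_64-xrt.rpm

# Set up the XRT environment in the current shell.
$ source /opt/xilinx/xrt/setup.sh

# List detected Alveo cards, then run the built-in sanity tests.
$ xbutil scan
$ xbutil validate
```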
The deployment target platform is the communication layer physically implemented and flashed onto the card.
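As a hedged sketch, flashing the deployment target platform with the legacy `xbmgmt flash` syntax might look like the following. The platform name is an example placeholder; use the name reported for your installed deployment platform package.

```shell
# Example platform name only: substitute the one reported by
# "sudo /opt/xilinx/xrt/bin/xbmgmt flash --scan".
$ sudo /opt/xilinx/xrt/bin/xbmgmt flash --update --shell xilinx_u200_xdma_201830_2

# Cold-boot the machine, then confirm the card reports the new platform.
$ sudo /opt/xilinx/xrt/bin/xbmgmt flash --scan
```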
In addition to steps 1 and 2, also follow steps 3 and 4 for development on the U200 using the Vitis design flow.
The target platform for development is required if you are building your own applications.
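For illustration, a minimal Vitis compile-and-link targeting the U200 might look like the sketch below. The platform name, kernel name, and source file are placeholders, not exact values; substitute whatever your installed development platform and project provide.

```shell
# Compile a C/C++ kernel (vadd.cpp is a placeholder) into a Xilinx
# object file for the U200 development platform.
$ v++ -c -t hw --platform xilinx_u200_xdma_201830_2 -k vadd -o vadd.xo vadd.cpp

# Link the object file into an FPGA binary (.xclbin) that the host
# application loads at run time through XRT.
$ v++ -l -t hw --platform xilinx_u200_xdma_201830_2 -o vadd.xclbin vadd.xo
```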
To access prior versions of the package files, visit the Package File Archives Page.
For development using RTL and HLx, follow these steps:
| OEM Partner | Server Model | Alveo U200 Supported Server Configuration | Configure & Buy |
| --- | --- | --- | --- |
| Dell EMC | Dell EMC PowerEdge R740 Rack Server | Chassis with 2 CPU Configurations | Riser Config 4, 3x8, 4x16, DW GPU |
| Dell EMC | Dell EMC PowerEdge R740xd Rack Server | GPU Capable Configurations | Riser Config 4, 3x8, 4 x16 slots, Double-Wide GPU compatible |
| Dell EMC | Dell EMC PowerEdge R7425 Rack Server | GPU Capable Configurations | |
| Dell EMC | Dell EMC PowerEdge R7515 Rack Server | GPU Capable Configurations | |
| Dell EMC | Dell EMC PowerEdge R840 Rack Server | GPU Capable Configurations | |
| Dell EMC | Dell EMC PowerEdge R940xa Rack Server | GPU Capable Configurations | |
| Inspur | Inspur NF5280M5 Rack Server | 2U Chassis with 2 CPU Configurations | 2x8, 3x16, DW FPGA |
| Inspur | Inspur NF5468M5 Rack Server | 4U Chassis with 2 CPU Configurations | 8x16 FPGA + 4x16 NIC, DW FPGA |