The Consumer Electronics Show (CES) might be the last place you'd expect an enterprise product to debut, but AMD unveiled a new server accelerator among the slew of consumer CPUs and GPUs it launched at the Las Vegas show.

AMD took the wraps off its Instinct MI300 accelerator, and it's a doozy.

The accelerated processing unit (APU) is a mix of 13 chiplets, including CPU cores, GPU cores, and high-bandwidth memory (HBM). Tallied together, AMD's Instinct MI300 comes in at 146 billion transistors. For comparison, Intel's ambitious Ponte Vecchio processor will be around 100 billion transistors, and Nvidia's Hopper H100 GPU is a mere 80 billion.

The Instinct MI300 has 24 Zen 4 CPU cores and six CDNA chiplets. CDNA is the data center version of AMD's RDNA consumer graphics technology. AMD has not said how many GPU cores each chiplet holds. Rounding out the Instinct MI300 is 128GB of HBM3 memory stacked in a 3D design.

The 3D design allows for tremendous data throughput among the CPU, GPU, and memory dies. Data doesn't need to travel off-chip to DRAM; it moves within the HBM stack, drastically reducing latency. The design also lets the CPU and GPU work on the same data in memory simultaneously, which speeds up processing.

AMD CEO Lisa Su announced the chip at the end of her 90-minute CES keynote, saying the MI300 is "the first chip that brings together a CPU, GPU, and memory into a single integrated design. What this allows us to do is share system resources for the memory and IO, and it results in a significant increase in performance and efficiency as well as [being] much easier to program."

Su said the MI300 delivers eight times the AI performance and five times the performance per watt of its predecessor, the Instinct MI250.
She mentioned the much-hyped AI chatbot ChatGPT, noting that such models take months to train; the MI300 will cut that training time from months to weeks, which could save millions of dollars in electricity, Su said.

Mind you, AMD's MI250 is an impressive piece of silicon in its own right; it powers Frontier, the first exascale supercomputer, at Oak Ridge National Laboratory.

The MI300 is similar to what Intel is doing with Falcon Shores, due in 2024, and what Nvidia is doing with its Grace Hopper Superchip, due later this year. Su said the chip is in the labs now and sampling to select customers, with a launch expected in the second half of the year.

New AI accelerator on tap from AMD

The Instinct isn't AMD's only enterprise announcement at CES. Su also introduced the Alveo V70 AI inference accelerator. Alveo comes from the Xilinx FPGA line AMD acquired last year, and the V70 is built on AMD's XDNA AI engine technology. It can deliver 400 TOPS (trillion operations per second) across a variety of AI models, including video analytics and customer recommendation engines, according to AMD.

Su said that in video analytics, the Alveo V70 delivers 70% more street coverage for smart-city applications, 72% more hospital bed coverage for patient monitoring, and 80% more checkout lane coverage in smart retail than the competition, though she didn't say what the competition is.

All of this comes in a 75-watt power envelope and a small form factor. AMD is taking pre-orders for V70 cards today, with availability expected this spring.