Cockroach Labs ranks Google Cloud tops for overall performance, Microsoft Azure for the best storage, and AWS for the best latency response.

Google Cloud Platform (GCP) is the best hyperscale performer across all areas of throughput, while Microsoft Azure has the best storage systems and Amazon Web Services (AWS) has the lowest network latency. Those are the findings of a series of benchmarks performed by the atrociously named Cockroach Labs, maker of a scalable, resilient database called CockroachDB that runs on all three services.

The study, part of the company’s third annual Cloud Report, evaluated the performance of AWS, Microsoft Azure, and Google Cloud in online transaction processing (OLTP) applications. In total, 54 machines were assessed and almost 1,000 benchmark runs were conducted to measure CPU, network, storage I/O, and TPC-C performance, among other things.

While the report declared Google Cloud the overall winner, it applied a note of caution: each cloud provider stood out in different ways. It singled out GCP for overall performance and best throughput on the Derivative TPC-C (OLTP) benchmark; Azure for its disks in storage I/O; and AWS as the most cost-efficient option for OLTP. “It is important to point out that each of the cloud providers showcased overall growth in machine performance since the 2020 report,” the researchers wrote.

Thanks to the fastest processing rates, Google Cloud won the network throughput benchmark for the third year in a row, with its worst-performing machine still beating the best from AWS and Azure. GCP achieved the best single-core CPU performance and delivered the most throughput at every level, including on the OLTP benchmark.

The TPC-C benchmarks also found something interesting among the chips AWS uses. It offers instances powered by Intel, AMD, and Amazon’s Graviton2, an Arm derivative.
In single-core performance, Intel was the clear winner, while on the 16-core benchmark, AWS’s Graviton2 beat everybody.

GCP’s machines also outperformed Azure and AWS in network and storage I/O throughput (read and write), and attained the highest raw throughput, measured in transactions per minute (tpm). Despite not having an advanced disk option (extreme-pd) available, GCP machines with general-purpose disks (pd-ssd) came in second in cost efficiency ($/tpm).

As for network latency, AWS registered the best performance for the third year running. Its top-performing machine’s 99th-percentile network latency was 28% lower than Azure’s and 37% lower than GCP’s. However, Cockroach noted that there is possible randomness in the physical distance between instances.

As for storage, Azure had raw throughput (tpm) comparable to GCP and AWS, although the differences were razor thin. Azure surpassed AWS in storage I/O performance with what Microsoft calls “ultra disks,” SSDs specialized for high IOPS and low-latency performance. With these ultra disks, Azure’s machines led all cloud providers in storage I/O read IOPS, write IOPS, and write latency. But that performance came at a higher cost: Azure was the least cost-efficient cloud provider in terms of dollars per tpm.

The Cockroach report is available for free download; email registration is required.
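For readers unfamiliar with the report’s headline metrics, here is a minimal Python sketch of how cost efficiency ($/tpm) and a 99th-percentile latency figure are typically computed. The function names and all numbers below are hypothetical illustrations, not Cockroach Labs’ data or methodology.

```python
# Hypothetical sketch of two benchmark metrics cited in the report:
# cost efficiency in dollars per tpm, and 99th-percentile (p99) latency.
# All figures are made up for illustration only.

def cost_per_tpm(hourly_price_usd: float, hours: float, tpm: float) -> float:
    """Dollars spent per unit of sustained TPC-C throughput (tpm)."""
    return (hourly_price_usd * hours) / tpm

def p99(latencies_ms: list[float]) -> float:
    """99th-percentile latency using the nearest-rank method."""
    ordered = sorted(latencies_ms)
    rank = max(1, round(0.99 * len(ordered)))  # 1-based nearest rank
    return ordered[rank - 1]

# Example: a $3.50/hr instance sustaining 12,000 tpm over a one-hour run.
print(cost_per_tpm(3.50, 1.0, 12_000))  # ≈ 0.000292 $/tpm

# Example: 100 latency samples of 1..100 ms; p99 is the 99th value.
print(p99([float(ms) for ms in range(1, 101)]))  # 99.0
```

A lower $/tpm means more throughput for each dollar spent, which is the sense in which the report calls AWS the most cost-efficient OLTP option and Azure the least.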