Huawei's artificial intelligence (AI) strategy is not just chips, but a full-stack portfolio spanning chips, cloud services, and products. Credit: Peter Sayer/IDG

Chinese smartphone giant Huawei Technologies Co. used its Huawei Connect 2018 show in Shanghai to announce its Ascend artificial intelligence (AI) chips, along with a new set of cloud services, software, tools, and training frameworks. The company is putting itself in direct competition with the main AI chip developers in the U.S., namely Nvidia, Intel, and Qualcomm, but also ARM, IBM, to some degree Google, and even fellow Chinese tech giant Alibaba.

Chairman Eric Xu introduced the Ascend 910 and Ascend 310 chips, along with the Compute Architecture for Neural Networks (CANN), a chip operators library and automated operator development toolkit, and MindSpore, a unified training and inference framework for device, edge, and cloud. The stack also includes full-pipeline services (ModelArts), hierarchical APIs, and pre-integrated solutions, Huawei said, with plans to expand it further with an AI acceleration card, AI server, AI appliance, and other AI products.

Huawei already has an AI-capable line of processors, called Kirin, for its own smartphones, and its ambitions for them are considerable. Its RoadReader project uses a Kirin-powered smartphone to perform intelligent object recognition, distinguishing between thousands of objects a car may encounter, such as dogs, cats, balls, and bicycles, and to “learn to take the most appropriate course of action.”

Huawei said its new AI research will focus on natural language processing, computer vision, decision/inference, and machine learning. It will use the AI internally, as well as offer deployments to businesses across public and private clouds, edge computing, industrial Internet of Things (IoT) devices, and consumer devices.

This is a massively ambitious project, and it is unclear whether the technology will ever make its way out of China. Most company-driven AI processors stay with the company that made them; Google doesn't sell its Tensor Processing Unit (TPU), after all. Intel and Nvidia do sell their processors, but they are chip makers, so keeping the chips to themselves wouldn't make sense.

Not helping matters is the fact that trust between the U.S. and China has been frayed by the Bloomberg/Super Micro affair. Despite the strong denials all around, the story retains an air of believability and has rapidly become one of those cases where perception is overtaking reality.
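MindSpore was only described at the event and was open-sourced well after this announcement, so the following is a minimal sketch, assuming the Python API the framework eventually shipped with. The TinyClassifier network, its layer sizes, and the random input are illustrative, not anything Huawei demonstrated; the point is simply what "one framework for device, edge, and cloud" looks like in practice, where the same script targets CPU here and would request Ascend silicon by changing one setting.

```python
import numpy as np
import mindspore as ms
import mindspore.nn as nn
from mindspore import Tensor

class TinyClassifier(nn.Cell):
    """A small, hypothetical feed-forward network built with MindSpore's Cell API."""
    def __init__(self, num_classes=10):
        super().__init__()
        self.flatten = nn.Flatten()
        self.dense1 = nn.Dense(28 * 28, 128)
        self.relu = nn.ReLU()
        self.dense2 = nn.Dense(128, num_classes)

    def construct(self, x):
        # construct() defines the forward pass (MindSpore's equivalent of forward()).
        x = self.flatten(x)
        x = self.relu(self.dense1(x))
        return self.dense2(x)

# Run on CPU for this sketch; on Ascend hardware the same script would
# request device_target="Ascend" instead.
ms.set_context(device_target="CPU")

net = TinyClassifier()
dummy_input = Tensor(np.random.rand(1, 1, 28, 28).astype(np.float32))
logits = net(dummy_input)
print(logits.shape)  # (1, 10)
```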