China's Baidu made two big moves that position it as a major player in the artificial intelligence (AI) space: an extremely powerful new chip designed to compete with Google's Tensor Processing Unit (TPU) and a wide-ranging alliance with Intel.

First, the company introduced Kunlun, a cloud-to-edge family of AI chips built to handle the high-performance requirements of a wide variety of AI scenarios. The announcement was made at Baidu Create, a developer show that is starting to look an awful lot like Google I/O in terms of content and sessions.

Kunlun leverages Baidu's AI ecosystem, which includes AI scenarios such as search ranking and deep learning frameworks, among them the company's open-source deep learning framework, PaddlePaddle. Kunlun can be used in everything from autonomous vehicles to data centers.

So far, the company has two processors: the 818-300 training chip and the 818-100 inference chip. Baidu says Kunlun is 30 times faster than the FPGA-based AI accelerators it introduced in 2011. After seven years, you would expect a leap like that. But Baidu also claimed 260 TFLOPS of performance at a 100-watt draw, well above the 45 TFLOPS of Google's TPU.

In addition to supporting common open-source deep learning algorithms, Kunlun can support a wide variety of AI applications, such as voice recognition, search ranking, natural language processing, autonomous driving, and large-scale recommendations.

The most important detail was not disclosed: whether the chip will be sold to the public or used only in Baidu's own services, as Google does with its TPU.

Baidu partners with Intel on projects

The company also announced a partnership with Intel on a series of AI projects, including FPGA-backed workload acceleration and a deep learning framework optimized for Xeon Scalable processors.
Intel did not identify which FPGA series Baidu would use, but the chipmaker recently announced the integration of its Arria family with its mainstream Xeon server chips.

Baidu said it would optimize PaddlePaddle running on Xeon Scalable processors, including tweaks for computing, memory, and networking. The partners also said they would explore integrating the deep learning framework with Intel's nGraph deep neural network compiler.

This portion of the deal is a cloud-based partnership, as Baidu said it is looking to develop a "heterogeneous" cloud computing platform based on Intel FPGAs. It already has a similar alliance with Nvidia to customize PaddlePaddle for Volta GPUs and bring AI capabilities to the Chinese consumer market.

Another facet of the alliance involves Israeli software company Mobileye, which happens to be an Intel subsidiary, and Intel's Movidius image recognition AI chip, which will be used in Baidu's Apollo autonomous vehicle project. The deal combines Mobileye's Responsibility Sensitive Safety model with the Movidius Myriad 2 VPU in the Apollo Pilot.

This is not Intel/Mobileye's first dance with autonomous vehicles; it already has partnerships with BMW and Fiat Chrysler.

The winner in all of this is Baidu and, by extension, China. Baidu has modeled itself on Google to great effect and has an enormous presence in its native country thanks to Google's exit in 2010. And China isn't known for being generous with its tech. What are the chances we'll see Apollo vehicles in the U.S., or, for that matter, the Kunlun?

Baidu is rapidly taking shape as Google's chief competitor; it is the only company with comparable scale and reach. The only question is whether it will make an international push. It tried expanding into Japan a few years back, and that failed, although I suspect the reasons had nothing to do with technology. By and large, Baidu has limited its U.S. reach to recruiting talent for back home.

At the very least, it's positioning itself as a dominant player in the biggest market in the world. But if it goes global, it's certainly in position to give Google a run for its money.