Whoever thought the chief competitors to HP Enterprise and Dell EMC would wind up being some of their biggest customers? But giant data center operators are, in a sense, becoming just that — competitors to the hardware companies they once bought hardware from and, to some degree, still do.

The needs of hyperscale data centers have driven this phenomenon. HPE and Dell design servers for maximum, broad appeal so they don't need many SKUs. But hyperscale data center operators want different configurations and find it cheaper to buy the parts and build the servers themselves.

Most of them — Google chief among them — don't sell their designs; the hardware is for internal use only. But LinkedIn is offering to "open source" the hardware designs it created to lower costs and speed up its data center deployments.

LinkedIn's project, called Open19, has been underway for more than two years, but it only finished its first deployment this past July, according to Yuval Bachar, a LinkedIn data center engineer, who disclosed the initiative in a blog post. The deployment of Open19-designed equipment is now complete, and the company is ready to discuss its efforts.

"In the weeks and months to come, we plan to open source every aspect of the Open19 platform — from the mechanical design to the electrical design — to enable anyone to build and create an innovative and competitive ecosystem," he wrote.

What is the Open19 initiative?

The Open19 initiative was started by LinkedIn, HPE, Vapor IO, and other data center vendors "to create a community that will enable a common optimize data center and edge solutions enabling efficiency and flexibility," according to the group's website.
The announcement coincides with the Open19 Summit taking place in San Jose, California.

To start, Open19 defines four standard server form factors (chassis dimensions), two "cages" for those servers to slide into, power and data cables, a power shelf, and a network switch. Frankly, the power and data cables look like the most interesting part of the announcement, because we've all seen the horror shows of poorly done networking cables.

The idea behind the designs is to reduce the amount of work it takes to deploy servers in a data center. Again, this assumes people will build their own servers the way LinkedIn and other hyperscalers do. It's all designed to be like building with Lego bricks.

LinkedIn also wanted to standardize hardware across both primary and edge data centers, which is likely why Vapor IO is involved. Edge locations don't have a technician readily available, so when a company does send one to an edge container, the last thing it wants is for the tech to waste time figuring out the layout of the equipment. With common hardware in both locations, the technician always works with familiar gear.

LinkedIn claims these designs will allow it to build infrastructure for 1 percent of the cost, with six to ten times faster integration, along with greater power efficiency and other savings. However, that does not address the issue of IT staff building the hardware. LinkedIn, Google, Facebook, etc., can afford to hire engineers who build servers all day; your average IT shop cannot. I'm sure some enterprising resellers and integrators will step up to fill the void if there is demand, but for now, this benefits only a few.

Still, it's a positive step in redesigning the hardware, especially those network cables.