Open source trounces proprietary software for code defects, Coverity analysis finds

Argues that Heartbleed flaw was a difficult case

Forget the bad headlines generated by the Heartbleed flaw: when it comes to code defects, open source is still well ahead of proprietary software, generating fewer coding defects at every project size, according to a new analysis by scanning service Coverity.

The firm's figures from its Scan Service show that for the C/C++ projects submitted for assessment during 2013, 493 proprietary projects representing 684 million lines of code generated an average defect rate of 0.72 per 1,000 lines of code.

This is actually very good - Coverity considers any defect rate of 1.0 or less commendable - but it was still higher than the 0.59 recorded for the 741 open source projects, representing 252 million lines of code.


This lower defect rate held true regardless of project size, with even small projects of around 100,000 lines of code marginally ahead of the proprietary world. For large code bases (greater than 1 million lines of code), the difference was 0.59 defects per 1,000 lines for open source compared to 0.72 per 1,000 for proprietary.

Coverity has been publishing scan results for some years, and an interesting trend is that the defect rate has actually been rising across all C/C++ development - it was 0.3 per 1,000 lines in 2008 - although the volume of code being tested by the firm has increased since then, so the comparison may not be direct.

The commonest defects were resource leaks, null pointer dereferences and control flow issues, Coverity said.
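As an illustration of what those defect classes look like in practice, here is a hypothetical C fragment (not taken from any scanned project) containing both a null-pointer dereference and a resource leak of the kind static analysers flag, followed by a corrected version:

```c
#include <stdio.h>

/* Hypothetical buggy version: two of the defect classes named above. */
int read_first_line(const char *path, char *out, size_t out_len) {
    FILE *f = fopen(path, "r");               /* fopen may return NULL... */
    if (fgets(out, (int)out_len, f) == NULL)  /* ...so this can dereference NULL */
        return -1;                            /* resource leak: f never closed here */
    fclose(f);
    return 0;
}

/* Corrected version: check the pointer and close the file on every path. */
int read_first_line_fixed(const char *path, char *out, size_t out_len) {
    FILE *f = fopen(path, "r");
    if (f == NULL)
        return -1;
    if (fgets(out, (int)out_len, f) == NULL) {
        fclose(f);
        return -1;
    }
    fclose(f);
    return 0;
}
```

Both bugs live on the error path rather than the common path, which is why such defects survive ordinary testing and are a natural target for static analysis.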

This must give open source developers a warm feeling, but does it tell us anything meaningful about the relationship, if any, between defect rates and security? After all, the collaborative model of open source can let errors introduced by a single programmer acquire lasting significance - just ask Robin Seggelmann, who made a mistake when adding a feature to OpenSSL that wasn't noticed during validation.

Coverity has already admitted that its service didn't notice the OpenSSL flaw because it was the sort that is inherently difficult to spot. Changing its scanning routines might have caught it, but only at the risk of a higher false positive rate. In short, it's a trade-off.

In the end, any programming model is susceptible to mistakes that are not easy to spot.

Coverity remains upbeat about the prospects of improving code quality. "If software is eating the world, then open source software is leading the charge," said the firm's director of products, Zack Samocha.

"Based on the results of this report - as well as the increasing popularity of the service - open source software projects that leverage development testing continue to increase the quality of their software, such that they have raised the bar for the entire industry."

Coverity said 50,000 defects had been fixed in 2013 alone, the largest number for any year so far. Eleven thousand of these were in the largest projects using the service - NetBSD, FreeBSD, LibreOffice and Linux - Samocha said.

"We've seen an exponential increase in the number of people who have asked to join the Coverity Scan service, simply to monitor the defects being found and fixed. In many cases, these people work for large enterprise organisations that utilise open source software within their commercial projects," said Samocha.

This story, "Open source trounces proprietary software for code defects, Coverity analysis finds" was originally published by Techworld.com.
