
Talk about big data: How the Library of Congress can index all 170 billion tweets ever posted

Library of Congress has a 133TB file of every update ever posted to Twitter; now it has to figure out how to manage it

January 08, 2013 03:58 PM ET

Network World - The Library of Congress has received a 133TB file containing 170 billion tweets -- every single post that's been shared on the social networking site -- and now it has to figure out how to index it for researchers.

In a report outlining the library's work thus far on the project, officials note their frustration regarding tools available on the market for managing such big data dumps. "It is clear that technology to allow for scholarship access to large data sets is not nearly as advanced as the technology for creating and distributing that data," the library says. "Even the private sector has not yet implemented cost-effective commercial solutions because of the complexity and resource requirements of such a task."

If private organizations are having trouble managing big data, how is a budget-strapped, publicly funded institution -- even if it is the largest library in the world -- supposed to create a practical, affordable and easily accessible system to index 170 billion tweets, and counting?

[Image: Library of Congress Thomas Jefferson Building. Credit: Wikipedia]



Twitter signed an agreement allowing the nation's library access to the full trove of updates posted on the social media site. Library officials say creating a system to allow researchers to access the data is critical since social media interactions are supplanting traditional forms of communication, such as journals and publications.

The first data dump came in the form of a 20TB file of 21 billion tweets posted between 2006, when Twitter was founded, and 2010, complete with metadata showing the place and a description of each tweet. More recently, the library received its second installment, covering all the tweets posted since 2010. In total, the pair of copies of the compressed files comes to 133.2TB. Going forward, the library is collecting new tweets on an hourly basis through partner company Gnip. In February 2011 that amounted to about 140 million new tweets each day; by October of last year, the volume had grown to nearly a half-billion tweets per day.
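Those figures give a rough sense of the ingest problem. The back-of-the-envelope sketch below, in Python, uses only the numbers quoted above and assumes the 133.2TB total covers two identical copies of the archive, so a single copy is roughly half that; the per-tweet and per-day estimates are illustrative, not the library's own accounting.

```python
# Back-of-the-envelope arithmetic using the figures quoted in the article.
# Assumption: 133.2TB covers two copies of the archive, so one copy is half that.

TB = 10**12  # bytes per terabyte (decimal)

total_tweets = 170e9          # 170 billion tweets in the archive so far
archive_bytes = 133.2 * TB    # both compressed copies combined
single_copy_bytes = archive_bytes / 2

bytes_per_tweet = single_copy_bytes / total_tweets
print(f"~{bytes_per_tweet:.0f} compressed bytes per tweet")          # ~392 bytes

# Projected growth at the late-2012 rate of roughly 500 million tweets per day.
daily_tweets = 500e6
daily_bytes = daily_tweets * bytes_per_tweet
print(f"~{daily_bytes / TB:.2f} TB of new compressed data per day")  # ~0.20 TB
```

At that rate, raw storage is the more manageable part of the job; as the library notes, the harder problem is making the growing archive searchable.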

Researchers are already clamoring for access to the data -- the library says it has had more than 400 inquiries. The project is being done in parallel with efforts by Twitter to give users a record of their Twitter history, including an itemized list of every tweet they have posted from their account.

The Library of Congress is no stranger to managing big data: Since 2000, it has been collecting archives of websites containing government data, a repository already 300TB in size, it says. But the Twitter archive poses a new problem, officials say, because the library wants to make the information easily searchable. In its current tape repository form, a single search of the 2006-2010 archive alone -- which is just one-eighth the size of the entire volume -- can take up to 24 hours. "The Twitter collection is not only very large, it also is expanding daily, and at a rapidly increasing velocity," the library notes. "The variety of tweets is also high, considering distinctions between original tweets, re-tweets using the Twitter software, re-tweets that are manually designated as such, tweets with embedded links or pictures and other varieties."
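That "variety" corresponds to fields in the JSON record Twitter supplies for each tweet. The Python sketch below is a rough illustration of how the categories the report lists could be told apart, not the library's actual processing code; the field names ("retweeted_status", "entities", "text") follow the public Twitter API format of the era, and the sample record is made up.

```python
# Hypothetical sketch: sorting tweets into the varieties the report mentions.
# Field names follow the public Twitter API JSON of the era; this is an
# illustration, not the Library of Congress's actual processing code.

def classify_tweet(tweet: dict) -> str:
    """Return a coarse category for a single tweet record."""
    if "retweeted_status" in tweet:
        return "native retweet"            # retweeted via Twitter's own button
    if tweet.get("text", "").startswith("RT @"):
        return "manual retweet"            # user typed the old-style "RT @" prefix
    entities = tweet.get("entities", {})
    if entities.get("urls") or entities.get("media"):
        return "tweet with link or media"  # embedded links or pictures
    return "original tweet"

# Example usage with a minimal, made-up record:
sample = {"text": "RT @loc: 170 billion tweets archived", "entities": {}}
print(classify_tweet(sample))  # -> "manual retweet"
```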

The solution is not easily apparent. The library has begun studying distributed and parallel computing programs, but it says they're too expensive. "To achieve a significant reduction of search time, however, would require an extensive infrastructure of hundreds if not thousands of servers. This is cost prohibitive and impractical for a public institution."
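The quote gets at why distribution helps: if the archive is split into shards, each server scans only its own slice, so wall-clock search time drops roughly in proportion to the number of machines. The Python sketch below shows the idea on a single machine, using multiprocessing over hypothetical shard files; it is a generic illustration of shard-parallel scanning, not the library's system.

```python
# Minimal sketch of shard-parallel search: each worker scans one slice of the
# archive, so wall-clock time falls roughly with the number of workers.
# The shard paths and directory layout are hypothetical.

from multiprocessing import Pool
from pathlib import Path

def search_shard(args):
    """Count lines in one shard file that contain the search term."""
    shard_path, term = args
    hits = 0
    with open(shard_path, encoding="utf-8", errors="replace") as f:
        for line in f:
            if term in line:
                hits += 1
    return hits

if __name__ == "__main__":
    term = "library"
    shards = sorted(Path("tweet_archive").glob("shard_*.jsonl"))  # hypothetical layout
    with Pool() as pool:  # one worker per CPU core by default
        counts = pool.map(search_shard, [(str(p), term) for p in shards])
    print(f"{sum(counts)} matching tweets across {len(shards)} shards")
```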

