Research community looks to SDN to help distribute data from the Large Hadron Collider


NEWMAN:      Yes. The first users are data managers with very large volumes of data, from tens of terabytes to petabytes, who need to transfer data in an organized way. We can assign those flows to circuits, and give them dedicated bandwidth while the transfers are in progress, to make the task of transferring the data shorter and more predictable.

Then there are thousands of physicists who access and process the data remotely and repeatedly, as they continue to improve their software and analysis methods in the search for the next round of discoveries. This large community also uses dynamic caching methods, where chunks of the data are brought to the user so that the processing power available locally to each group of users can be well used. We’ll probably treat each research team, or a set of research teams in a given region of the world, as a group, in order to reduce the overall complexity of an already complex global undertaking.

So some folks will have direct access to the controller while others will have to make requests of you folks?

NEWMAN:      People are authorized once they have enough data to deal with. You see, there’s a scale matching problem. Given the throughput we deal with, if you have less than, let’s say, a terabyte of data, it hardly matters. If I have a data center with tens to hundreds of terabytes to transfer at a time, there would be some interaction between the data manager side and the network side. The data manager can make a request, “I’ve got this data to transfer from A to B,” and the network side can use a set of controllers to help manage the flows, and see that the entire set of data arrives in an acceptable time.

We’ve worked out a solution where, for each data set transfer, we know which of the many, many compute nodes are going to be involved. In order to direct traffic, we get a list of all the source IP addresses and pass those on to the controller, and when the controller sees the source and destination IPs it can set up a flow rule and map the flow onto a dynamic circuit between A and B.
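
To make that concrete, here is a minimal sketch, in Python, of how such a list of source IPs could be turned into per-flow rules through a controller's REST interface. The controller address, credentials, switch and table identifiers, and output port are placeholders, and the payload only approximates the OpenDaylight flow model of that era; it illustrates the idea rather than reproducing the team's actual tooling.

```python
# Hypothetical sketch: turning a data manager's list of source IPs into
# per-flow rules on an OpenDaylight controller via its RESTCONF API.
# The controller URL, node/table IDs, output port, and credentials are
# illustrative placeholders, and the payload shape only approximates the
# OpenDaylight (Helium-era) flow model.
import requests

ODL = "http://odl-controller.example.org:8181"   # placeholder controller address
AUTH = ("admin", "admin")                        # placeholder credentials
NODE, TABLE = "openflow:1", 0                    # switch and flow table to program
OUT_PORT = "2"                                   # port facing the dynamic circuit

def push_flow(flow_id: int, src_ip: str, dst_ip: str) -> None:
    """Install one rule matching a (source, destination) IPv4 pair."""
    url = (f"{ODL}/restconf/config/opendaylight-inventory:nodes/"
           f"node/{NODE}/table/{TABLE}/flow/{flow_id}")
    body = {"flow": [{
        "id": str(flow_id),
        "table_id": TABLE,
        "priority": 200,
        "match": {
            "ethernet-match": {"ethernet-type": {"type": 2048}},  # IPv4
            "ipv4-source": f"{src_ip}/32",
            "ipv4-destination": f"{dst_ip}/32",
        },
        "instructions": {"instruction": [{
            "order": 0,
            "apply-actions": {"action": [{
                "order": 0,
                "output-action": {"output-node-connector": OUT_PORT},
            }]},
        }]},
    }]}
    resp = requests.put(url, json=body, auth=AUTH,
                        headers={"Content-Type": "application/json"})
    resp.raise_for_status()

# Source IPs reported by the data-transfer application, all headed to site B.
transfer_sources = ["10.1.0.11", "10.1.0.12", "10.1.0.13"]
destination = "192.168.5.20"
for i, src in enumerate(transfer_sources, start=100):
    push_flow(i, src, destination)
```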

When dealing with individuals, it’s just going to be a question of looking at the aggregate traffic and how it’s flowing and trying to direct flows. Down the line we intend to apply machine learning classification to learn the patterns from the flow data so we can manage it. That’s somewhat down the line, but I think it’s an interesting application for this kind of problem.
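
As an illustration of the kind of flow classification Newman describes, the sketch below clusters synthetic flow records (bytes, duration, packets) so that very large transfers stand out from small interactive traffic. The features and the choice of k-means are assumptions made for the example, not the team's actual method.

```python
# A minimal sketch of flow classification: cluster observed flow records so
# that large "elephant" transfers can be told apart from small interactive
# traffic. The feature set and k-means are illustrative assumptions.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

# Each record: (bytes transferred, duration in seconds, packet count).
# Synthetic data standing in for flow statistics exported by the switches.
rng = np.random.default_rng(0)
small = rng.normal([1e6, 2, 1e3], [5e5, 1, 4e2], size=(200, 3))
large = rng.normal([5e11, 3600, 4e8], [1e11, 600, 1e8], size=(20, 3))
flows = np.vstack([small, large])

# Work in log space so the huge dynamic range does not swamp the clustering.
features = StandardScaler().fit_transform(np.log10(np.abs(flows) + 1))
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(features)

# Treat the cluster with the larger mean byte count as "manage as a circuit".
elephant = int(np.argmax([flows[labels == k, 0].mean() for k in range(2)]))
print(f"{(labels == elephant).sum()} flows flagged for managed circuits")
```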

How many controllers do you think you’ll end up with?

NEWMAN:      That’s an interesting question because this is actually a collaboration of many organizations. We start with one controller. I think ultimately there will be a few at strategic points, a handful, but how the different controllers interact is not very well developed in the OpenDaylight framework.

How many switches will ultimately be controlled?

NEWMAN:      There are 13 Tier 1 sites and 160 Tier 2 sites, but I think we’ll probably end up somewhere in the middle, which is a few dozen switches involved with the largest flows.

Did you look at buying a controller versus building one?

NEWMAN:      We looked at some controllers. We had previous development based on the Floodlight controller. Julian?

BUNN:          The OpenDaylight controller is open-source software supported by the major vendors and many research groups. It has become sort of the de facto SDN controller in the community. There have been others, such as the Floodlight controller, which we’ve used. Some of these were a little less open. That’s why we picked OpenDaylight. We’d already worked with Floodlight so we knew how an SDN controller worked.

NEWMAN:      The Brocade Vyatta Controller is based on OpenDaylight and Brocade is an active contributor to the OpenDaylight project, but the funding agencies prefer that we choose open-source software because of the potential benefits of engaging a larger community of users and developers.

What version of OpenFlow have you settled on here?

BUNN:          OpenFlow 1.0 because we found the particular switches we’ve been using support that very well. We don’t need any of the features in 1.3. The sort of flows we’re writing into the switch tables don’t really need anything more advanced than 1.0 at the moment.

NEWMAN:      The other aspect is, when you have test events there are typically different flavors of switches involved, so by requiring OpenFlow 1.0 it’s easier to make them all work together. We foresee moving to OpenFlow 1.3 when the number of switches supporting it increases, and when there is a greater need to moderate the size of flows on the fly (a feature supported in 1.3).

We’re also following the OpenDaylight releases. We worked with the Hydrogen release and then, after the SC14 conference, we tried some exercises with the Helium release. So we look at what’s being developed and what features there are, and if any are important we adopt them. The next OpenDaylight release, which is called Lithium, is an enhancement of Helium, and we will move to it when it’s available in June.

Speaking of timing, what’s the next step? How long will it take to see this vision through?

NEWMAN:      It’s very progressive. We’re starting to get it out in the field. Our test bed at Caltech has six switches of three different types, including Brocade MLXe and CER switch routers, and we’re going to add a fourth type at Michigan. Julian is set up to try his flow rules, and we have our mechanism to integrate with the end application, where we can get these lists of IP addresses to match against the flow rules set up for those particular addresses. As soon as we exercise that, we’ll start to do it again in the wide area.

Part of our team is at CERN in Geneva and we certainly will want to set up a switch there. That should happen in the next few months, and then the idea is to set up a preproduction operation starting with some of these managed flows and the application in my CMS experiment, so in the next year or two we’ll be well on our way to production.

So this is predominately a wide area thing, but are there data center or campus implications as well?

NEWMAN:      It depends. Campus, maybe. Brocade’s ICX campus switches are OpenFlow 1.3 ready, so flow control can be done down to the workstation or server level. Data center and directing flows, I can see a lot of potential there. The point is where you have shifting loads and you have large data flows and want to have them go efficiently, this could be very useful. It clearly is a big vision. We’ll start to implement this and see how it goes. But I think it will have a big impact, with implications for research and education networks and the universities and labs they serve.

The scale you guys deal with is so different from the enterprise folks I typically talk to, so it’s very interesting.

NEWMAN:      Yes, I should give you some numbers. In 2012 during the last LHC run, about 200 petabytes of data were transferred. After that we stopped taking data and you’d think the level of activity would be less, but we still sent 100 petabytes. The next run of the LHC, which is a three-year run, will start in June (commissioning of the accelerator is going on right now), and we’re expecting much larger data flows than before. (Since the interview, the next run of the LHC has started.)

The Energy Sciences Network (ESnet) reached 18 petabytes per month at the end of last year and the growth rate since 1992 is a factor of ten every four and a quarter years, which is a growth rate of 72% per year. The projection forward is an exabyte a month by about 2020 and 10 exabytes a month by about 2024.
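
Those figures are straightforward to check: a factor of ten every 4.25 years works out to roughly a 72 percent annual growth rate.

```python
# Arithmetic check of the quoted ESnet growth rate: a factor of ten every
# 4.25 years corresponds to an annual growth factor of 10**(1/4.25) ~= 1.72.
annual_factor = 10 ** (1 / 4.25)
print(f"annual growth factor: {annual_factor:.2f} "
      f"(~{(annual_factor - 1) * 100:.0f}% per year)")
```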

In terms of individual flows, we can already do several tens of gigabits per second in production and we can saturate 200 Gbps links (for example a 100 Gbps link bidirectionally) over long distances at will.

 


Copyright © 2015 IDG Communications, Inc.
