Friday, November 22, 2013
Hotnets '13: Toward a Verifiable Software Dataplane
Authors: Mihai Dobrescu and Katerina Argyraki (EPFL).
Software dataplanes give operators more flexibility, allowing more rapid deployment of changes. Recent advances have increased their performance to tens of Gbps, making them more practical. However, hardware manufacturers have been reluctant to allow network operators to install software in the data plane for fear of bugs and performance degradation.
The goal of this paper is to work toward verifiable software dataplanes. Properties that are often taken for granted, such as crash freedom, bounded per-packet latency, and network reachability, are the targets of this verification system.
A tool that can verify these properties would reduce testing time, speed time to market, and allow a market to develop for dataplane software that is verifiable.
Unfortunately, proving properties of general-purpose software is difficult due to shared state and the difficulty of reasoning about code behavior, owing to factors like path explosion. In the general case, this problem has been made tractable only by sacrificing flexibility or performance. Dataplane software, however, has specific characteristics that remove the need for these sacrifices, making verification feasible.
In dataplane software, the input to an element is a packet. The output of an element is a modified packet. By treating dataplane software as a collection of pipelines of independent packet-processing elements with well-defined, narrow interfaces and no shared state, verification time can be reduced.
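The element model above can be sketched in code. This is a hypothetical illustration, not actual Click code: the `Packet`, `Element`, and `DecTTL` names are invented for the example. The key point is the narrow interface: an element takes a packet and returns a modified packet, touching no state shared with other elements, so each element can be verified in isolation and the results composed.

```cpp
#include <cassert>
#include <cstdint>
#include <vector>

// A packet is the only data crossing element boundaries.
struct Packet {
    std::vector<uint8_t> bytes;
    uint8_t ttl = 64;
};

// Narrow interface: packet in, modified packet out, no shared state.
struct Element {
    virtual Packet process(Packet p) = 0;
    virtual ~Element() = default;
};

// Example element: decrement the TTL.
struct DecTTL : Element {
    Packet process(Packet p) override {
        if (p.ttl > 0) p.ttl--;  // purely local computation
        return p;
    }
};

// A pipeline is just a sequence of elements; because packets are the
// only interface, per-element verification results compose.
Packet run_pipeline(const std::vector<Element*>& pipe, Packet p) {
    for (Element* e : pipe) p = e->process(p);
    return p;
}
```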
Verification consists of two main steps. First, verify each element in isolation and identify suspect paths against the property one is trying to prove. Then verify the target property by composing these possible paths. The assumption is that individual elements are verifiable. Occasionally this can be challenging, for instance when an element loops through the packet and modifies it depending on various options, causing an internal path explosion. Decomposing the loop and treating each step of the loop as a separate element simplifies reasoning and reduces path explosion.
Another difficulty is dealing with mutable state. Use of verified data structures can help ameliorate this problem. For more details see the paper.
They built a prototype based on S2E, a symbolic execution engine. They proved properties, such as crash freedom and a bounded number of instructions, on Click pipelines in tens of minutes.
Q: Do you assume it is single-threaded?
A: You can parallelize across multiple cores but they don't share data.
Q: Can you compare this to other verification efforts, like seL4?
A: That was mostly a manual effort. We are trying to develop an automatic approach. We try to bring together the advancements from the verification community and combine them with the needs of the networking community. For example, the pipeline programming model affords us opportunities that aren't available for general programs.
Q: My impression is the formal verification community has made a lot of progress in the past few years. That leads me to the question of whether you would advocate for stylized programming methodology even if verification tools could do these transforms for you.
A: If there is a tool that can do this, sure, but I'm not aware of anything for C or C++. Our tool needs minor modifications to your code to provide hints for the tool. You write it in a structured way and it allows the tool to do the job.
Q: How do you train the programmers to use the tool?
A: The tool lets you know. It says you can verify something or you can't, and here's why. We take more of an interactive approach and try to create guidelines around the common needs.
Q: When you talk about performance do you make assumptions about graph properties of the pipelines? The network of these modules can be highly non-trivial. It depends on how you schedule individual packets. Do you restrict topology?
A: We analyze the performance of an individual element, and then compose the results to analyze the entire pipeline. We've experimented with simple pipelines, but I think you can have an arbitrary path.
Q: Including loops?
A: As long as you put a bound on how many times you feed it back in.
Q: Is Click the right kind of abstraction for pipeline modules or do we need something more finely grained?
A: For the evaluation we used Click, so we think it's sufficient. We think its granularity is good.
Q: I question the possibility of saying there isn't shared state between modules. For instance you can implement NAT in Click, which is stateful.
A: We looked into NAT, so basically you're right, but you can look into why you need this state. You can do static allocation of the ports available and then you don't need to share state. There are imbalances, but this is feasible.
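The static port allocation mentioned in the answer can be sketched like this. This is a hypothetical illustration (the `ports_for_core` helper is invented): the NAT port range is partitioned across cores up front, so no core ever consults shared state when picking a port, at the cost of possible imbalance between cores.

```cpp
#include <cassert>
#include <cstdint>

// A half-open slice of the port space assigned to one core.
struct PortRange { uint16_t lo, hi; };

// Divide [base, base + total) into `cores` disjoint, equal slices.
// Each core draws ports only from its own slice, so no shared state
// is needed; if traffic is skewed, some slices fill before others.
PortRange ports_for_core(int core, int cores,
                         uint16_t base, uint32_t total) {
    uint32_t slice = total / cores;  // remainder ports go unused here
    uint16_t lo = static_cast<uint16_t>(base + core * slice);
    return { lo, static_cast<uint16_t>(lo + slice - 1) };
}
```

With 4 cores and 4096 ports starting at 10000, core 0 gets 10000-11023, core 1 gets 11024-12047, and so on: disjoint ranges, no coordination required.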