Prof. Vyas Sekar, SIGCOMM's 2016 Rising Star Awardee, gave a remarkable talk about his research on Software-Defined Security. His talk was both a reflection on the work that he and his colleagues have done over the years and on the process of doing it. With that in mind, he spoke both on the technical aspects of his work and on how he picked and solved the fundamental research problems that originated each piece of work.
(Edit: You can find his slides here.)
His initial considerations:
The main motivation for his recent work on network security comes from the fact that the traditional ways of providing network security are not keeping up with the pace of innovation of attacks. Currently, there is an asymmetry between attacks, which are growing in sophistication and scale (i.e., they may change and become polymorphic, may use botnets, etc.), and the tools that are used to protect against them. Also, defense techniques often rely on the assumption that attacks always come from the outside, but the notion of "good guys inside" does not hold. In particular, operators say current solutions are hopelessly impractical in handling new types of attacks, mainly because they may incur a high cost, management complexity, and user frustration (i.e., the fundamental tussle between security and usability).
In that sense, he believes that Software-Defined Security is the right way forward for network security. The fundamental thread driving his research in recent years has been applying the new capabilities provided by SDN and NFV to solve existing security problems in networks.
He advocates that the best chance to counter the innovation of attacks is breaking away from the perimeter- and hardware-centric mentality (dedicated boxes at fixed choke points) and allowing a more software-driven vision for network security solutions. The ultimate goal is the ability to implement a flexible portfolio of defense applications that are not constrained in location, capacity, or functionality.
Mainly, he argues, SDN and NFV provide three key capabilities. First, agility to change and customize the security profile of the network as threats evolve. Second, flexibility to place security appliances anywhere (as often the network topology gets in the way of acquiring the right context for each appliance). Third, performance elasticity, in order to scale security appliances up and down as needed.
Although promising, he says, the use of SDN and NFV in this context comes with a set of challenges. First, the data plane needs more flexibility to provide the right context to security applications. Second, the defense system requires an orchestration scheme that provides service chaining in an efficient, optimal, and scalable way. Third, the defense applications themselves should be both agile to a changing environment and robust to adversarial evasion. Finally, there is a call for a new set of test and verification tools in order to ensure correctness.
Before going through his work, Prof. Sekar takes a step back and gives an overview of how his research came to be. In his own words, "It is tempting to think that this high-level vision existed all along, and that 5 years ago we had this complete vision of the work, that we had it all figured out, that it came in a dream and so on." In reality, he says, research is often non-linear and chaotic and comes about in a more bottom-up and organic fashion. Most times there is a disconnect between the way research is presented, which is top-down, and the way it actually came along.
According to him, there are two main ways to arrive at good research. One is working through the challenging pain points that researchers and practitioners face while building tools in the lab or running an operational infrastructure. The other is the non-trivial role of chance conversations with students, professors, network operators, etc., that lead to interesting research problems. This second path is quite important but often gets dismissed.
He then goes through each step he took along the way of realizing this software-defined security vision. At each step he gives credit to his co-authors and explains how chance conversations with some of them were one of the main reasons the work exists today.
Prof. Sekar starts with SIMPLE. SIMPLE is a policy enforcement layer for service chaining (e.g., making a given flow pass through a firewall, an IDS, and a proxy in a particular order). The fundamental question to be answered was: can we use something like SDN to steer traffic through legacy middleboxes (which have fixed locations)?
The idea for this came from previous work in which he developed a novel middlebox architecture. At the time, many researchers were puzzling over how to make the network steer traffic through middleboxes.
He and his colleagues identified three major problems when steering traffic through middleboxes. First, the same packet may arrive at the same switch more than once (e.g., if a given middlebox is connected to the network through a single link), so the switch may not know what to do with it; in the general case, this creates a loop. Second, to avoid overload, a network operator may want to balance load across several instances of the same middlebox. However, generating an optimal set of flow rules that guarantees load balancing without exhausting TCAM space is an intractable problem. Third, some middleboxes modify packets, which makes correlating incoming and outgoing traffic very challenging. The solutions developed under the SIMPLE architecture resulted in new primitives for the data plane, better approximations for making good use of TCAM space, and techniques for correlating flows.
One of the most important lessons from SIMPLE was the understanding that the SDN data plane required richer southbound APIs, which later resulted in FlowTags. In particular, FlowTags extends the middleboxes themselves so that they supply context to the network data plane. This effectively solves the problem of correlating flows that are modified by a middlebox, which greatly simplifies service composition and allows new verification and network diagnosis methods. Moreover, one of the main aspects of the proposed solution is its simplicity and efficiency: FlowTags requires little modification to current software and incurs very small overhead.
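To make the tagging idea concrete, here is a minimal sketch (all class and field names are hypothetical, not FlowTags' actual API): a NAT that rewrites source addresses stamps each outgoing packet with a tag, and a downstream switch matches on the tag, rather than the rewritten header, so the controller's original policy context survives the modification.

```python
# Minimal sketch of the tagging idea (hypothetical names, not FlowTags'
# real API): a NAT that rewrites the source address stamps each outgoing
# packet with a tag, and downstream switches map tags back to the original
# flow context so policies like "host X's traffic -> IDS" still apply.

class TaggingNAT:
    def __init__(self, public_ip):
        self.public_ip = public_ip
        self.next_tag = 1
        self.tag_context = {}          # tag -> original source (the lost context)

    def process(self, pkt):
        tag = self.next_tag
        self.next_tag += 1
        self.tag_context[tag] = pkt["src"]   # remember who this really was
        return {**pkt, "src": self.public_ip, "tag": tag}

class TagAwareSwitch:
    def __init__(self, tag_rules):
        self.tag_rules = tag_rules     # tag-based rules installed by a controller

    def forward(self, pkt):
        # Match on the tag, not on the (rewritten) header fields.
        return self.tag_rules.get(pkt.get("tag"), "default_port")

nat = TaggingNAT("10.0.0.1")
out = nat.process({"src": "192.168.1.7", "dst": "8.8.8.8"})
switch = TagAwareSwitch({out["tag"]: "port_to_IDS"})
print(switch.forward(out))             # steered by tag despite the rewrite
print(nat.tag_context[out["tag"]])     # original source recoverable for diagnosis
```

The point of the sketch is the division of labor: only the middlebox knows the mapping it applied, so it exports that mapping as a tag instead of forcing the network to guess it.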
FlowTags’ improved southbound API widened the scope for new types of applications. While developing new applications, however, students often had to reinvent some form of resource allocation optimization. This required them to use and debug low-level optimization tools such as Gurobi or CPLEX and got in the way of solving the actual networking problems. This difficulty eventually led to SOL.
SOL  simplifies the development of applications by offering a simpler API to interface with low-level optimization tools and solve common optimization problems. To do this, SOL has to be general (in the sense that many different applications would benefit from it) and efficient (such that the time to compute solutions would be comparable to custom algorithms developed for each specific application).
The key insight in SOL for achieving both generality and efficiency came from the observation that researchers typically rewrite network optimization problems in terms of path constraints (as opposed to edge constraints). The reasoning is that near-optimal solutions can be achieved with a small subset of all possible paths. Thus, SOL works by combining offline path preprocessing with simple online path-selection algorithms.
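The offline/online split can be illustrated with a toy sketch (this is not SOL's API, and a real system would feed the candidate paths to an LP solver rather than a greedy loop): first enumerate a few candidate paths per demand, then allocate traffic over only those paths subject to link capacities.

```python
# Illustrative sketch (not SOL's actual API): offline, enumerate a small set
# of candidate paths; online, allocate demand over just those paths subject
# to link capacities. A real system would hand the paths to an LP solver.

def simple_paths(graph, src, dst, limit=3):
    """Offline step: enumerate up to `limit` simple paths via DFS."""
    paths, stack = [], [(src, [src])]
    while stack and len(paths) < limit:
        node, path = stack.pop()
        if node == dst:
            paths.append(path)
            continue
        for nxt in graph.get(node, []):
            if nxt not in path:
                stack.append((nxt, path + [nxt]))
    return paths

def allocate(paths, demand, capacity):
    """Online step: greedily spread `demand` over the candidate paths."""
    placed = {}
    for path in paths:
        if demand <= 0:
            break
        links = list(zip(path, path[1:]))
        room = min(capacity[l] for l in links)   # bottleneck along this path
        take = min(room, demand)
        if take > 0:
            placed[tuple(path)] = take
            demand -= take
            for l in links:
                capacity[l] -= take
    return placed, demand                        # leftover demand, if any

graph = {"s": ["a", "b"], "a": ["t"], "b": ["t"]}
cap = {("s", "a"): 5, ("a", "t"): 5, ("s", "b"): 5, ("b", "t"): 5}
placed, leftover = allocate(simple_paths(graph, "s", "t"), demand=8, capacity=cap)
print(placed, leftover)
```

The efficiency argument from the talk shows up directly: the online step only ever reasons about a handful of precomputed paths, not the full edge-level search space.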
With a richer southbound API and new optimization primitives comes the question of what can actually be done. The next step in his research was BUZZ, a system that gives assurances on whether policies are implemented correctly or not. Although several works had already been developed in the area, BUZZ was the first to consider stateful, context-dependent policies. BUZZ's insight also came from students' difficulties in implementing SDN applications; in particular, the FlowTags architecture itself turned out to be very difficult to debug.
BUZZ works by building a model of the correct network and testing the behavior of network traffic against this model. Its development presented two main challenges: first, how to model the network and its functions; second, how to solve the exploration problem of testing traffic on the model, which is already hard in the stateless case.
The insight in BUZZ was to exploit the structure of the policies written on the middleboxes as a way to shrink the exploration space of the problem. For example, it is both easier and more realistic to think of rules at the granularity of TCP sessions rather than at packet granularity. The verification itself uses efficient symbolic execution techniques, an approach that originated from talking with colleagues who work in formal verification.
Using BUZZ showed that many recent systems, including their own, had bugs in context-dependent policies. Some of these systems were Kinetic, OpenNF, FlowTags, and PGA.
SIMPLE, FlowTags, SOL, and BUZZ are building blocks for new security applications. One of the first was Bohatei, a system that provides the flexibility to handle changing or evolving patterns in DDoS attacks (both in magnitude and location).
Bohatei relies on geographically distributed data centers that can be used to instantiate defenses on demand, plus a predictor that detects attacks and estimates the duration and volume of each attack. Bohatei decides the type and quantity of defense appliances and routes traffic through them so that it is scrubbed clean before reaching the customer.
The insight behind Bohatei came from an NTT engineer who was a visiting student at the time. The problem was that their network suffered large-scale DDoS attacks and the defense appliances they were buying kept getting overloaded. The proposed solution was to use SDN and NFV to build elastic defense appliances.
The next security application he presents is PSI, a system that provides custom security defenses for each individual user or device. The insight behind PSI came from a conversation with security specialist Michael Collins, from RedJack, who stated that the network gets in the way of enterprise security defenses. This happens for four main reasons. First, current defenses can easily be evaded since they sit at fixed choke points. Second, defenses generate a lot of false positives and false negatives, since tools lack context (they don't know where the traffic is coming from). Third, they lack isolation and so have to apply one general set of policies to all cases. Fourth, they lack the agility to change policies in response to a dynamic environment.
In PSI, each security defense has its own context and is isolated from the others. Defenses are composed of a set of micro security appliances (e.g., micro-firewall, micro-IDS, micro-proxy, etc.) connected in a particular order and with a custom set of policies. These micro appliances run inside an enterprise cluster and raise security alerts to a PSI controller, which can dynamically change defenses as needed.
His final remarks:
Prof. Sekar concludes by highlighting some of the work that is still left to be done and mentioning ingredients of successful research. As future work, he mentions that researchers have to think about how to provide security defenses in the data plane; how to reason about adversarial evasion; how to look into new domains, such as IoT security; and how to create new abstraction and orchestration layers for an ensemble of new security applications, in a way that helps reason about the composition of broader security policies.
His two recommendations for being a successful researcher are as follows. First, although the notion of a top-down approach to research is appealing, one must not overlook an organic bottom-up approach; in particular, looking into pain points and interesting opportunities is a way to find new research directions. Second, one should leave the comfort zone and talk to different people. Such interactions often lead people to talk about their particular problems and may result in interesting research collaborations.
The questions that were posed:
Q1: Can you share some of the early pushback (hard criticism) that you got and the process of getting through that pushback?
Vyas Sekar: The work has received a lot of pushback. As an example, when we first started looking at middleboxes, since there was already a lot of literature about them, most people's opinion was that the work was going to be massacred. But as the work developed, it lined up with the current understanding of the community. We may also have had luck. So one of the ways to get through is to get lucky.
The other option is being persistent. Even when your paper is not accepted right away, the techniques you have developed may be of interest to the community. People may not have gotten it at the time, but it may remain valid five or six years from now. I have seen a lot of persistent people get very interesting ideas through. So another way of dealing with pushback is being stubborn.
Q2: Can't a controller also be used to obscure attacks?
Vyas Sekar: This is a recurring and valid question and should be taken seriously. We and many others have addressed the scalability aspects of it. As for the other aspects, they can often be tackled through other techniques involving resilience, penetration testing, etc. So, although it is a valid concern, it should not be a fundamental limiting factor that keeps us from the potential security benefits achieved by programmability.
Q3: How do you evaluate these operational issues in a university setting?
Vyas Sekar: In our case, all of the evaluated components are real software artifacts and thus can be tested at a university. In our community, in recent years, we have had a lot of open-source systems that are reasonably close to what is used in production.
Getting data is actually a very hard problem. This is where you may benefit from establishing contacts and rely on the knowledge of others. Some may have an interesting insight on problems or behaviors, others may have datasets that can be evaluated.
Many of these systems can be built and evaluated without actually going through a deployment and there is value in doing that. Just the fact of building a testbed or an emulation platform will highlight several scalability and testing issues that need to be solved.
I cannot say what the gap is between the open-source software and hardware that we have and an operational system. One thing I can say is that ours runs much slower. There is probably a huge gap between what industry does and what we do on the operational side, but in terms of the techniques used, they are very close.
Q4: What are the network security issues that IoT can bring, and how can they be handled?
Vyas Sekar: There are three aspects to any security problem: (i) an enforcement mechanism that applies a security policy; (ii) a policy abstraction that translates into what that policy is supposed to be; and (iii) a learning process to know what is correct and what is not.
IoT changes all three of them. First, in terms of enforcement, current techniques cannot be applied to constrained devices with unfixable flaws. Second, policies need to consider several new types of behavior (cyber-physical interactions, device-to-device communication, implicit dependencies across devices, etc.). Finally, the semantics of the interactions change considerably, which makes learning much more difficult.
References to some of his work:
 SIMPLE-fying Middlebox Policy Enforcement Using SDN
Zafar Qazi, Cheng-Chun Tu, Luis Chiang, Rui Miao, Vyas Sekar, Minlan Yu
in SIGCOMM 2013
 Enforcing Network-Wide Policies in the Presence of Dynamic Middlebox Actions using FlowTags
Seyed Fayazbakhsh, Vyas Sekar, Minlan Yu, Jeff Mogul
in NSDI 2014
 Simplifying Software-Defined Network Optimization Applications Using SOL
Victor Heorhiadi, Michael K Reiter, Vyas Sekar
in NSDI 2016
 BUZZ: Testing Context-Dependent Policies in Stateful Networks
Seyed K Fayaz, Tianlong Yu, Yoshiaki Tobioka, Sagar Chaki, Vyas Sekar
in NSDI 2016
 Bohatei: Flexible and Elastic DDoS Defense
Seyed K Fayaz, Yoshiaki Tobioka, Vyas Sekar, Michael Bailey
in USENIX Security 2015
 PSI: Precise Security Instrumentation for Enterprise Networks
Tianlong Yu, Seyed K Fayaz, Michael Collins, Vyas Sekar, Srinivasan Seshan
to appear in NDSS 2017
Friday, December 23, 2016
Sunday, December 18, 2016
1. PI2: A Linearized AQM for both Classic and Scalable TCP
Author: Koen De Schepper (Bell Labs Nokia), Olga Bondarenko (Simula Research Laboratory), Ing-Jyh Tsang (Bell Labs Nokia), and Bob Briscoe (Simula Research Laboratory)
Presenter: Koen De Schepper
Koen started the presentation by explaining some of the key features of Data Center TCP (DCTCP): consistently low queueing delay, full link utilization with a small queue, very low loss, more stable throughput, and scalability; it is available in Windows 10 and Linux 3.18, though yet to be optimized (for high RTTs at least). Unfortunately, we can't use DCTCP on the current Internet, as it starves classic TCP-friendly flows, keeps big tail-drop queues full, and needs ECN (so high loss or fallback to Reno). Until now, this model has been used in data centers, where everything can be changed at once without relying on the consistency of other components.
They found two key challenges in solving this problem: first, how to make DCTCP and TCP Reno rate-compatible, and second, how to preserve low latency for DCTCP. This paper is all about how they tackled the first challenge; the second is their future work (namely Dual-PI2).
Next he explained why any AQM will work with an equal drop probability for each flow. Comparing classic TCP with DCTCP-Step and DCTCP-Slope, he stated that DCTCP uses on-off (step) marking rather than a slope similar to TCP Reno's or Cubic's. Since steady-state test results of AQMs do not on their own give a reasonable drop probability, an equal drop probability yields equal windows in steady state.
Later, after briefly recapping PI-AQM, he dived into the paper itself. According to them, removing the square root from the congestion-control relation, by applying a scalable probability p instead, gets rid of many complex calculations. In the end, he explained why a high probability means less responsiveness of the system.
In short, the newly proposed PI2 is simpler than PIE. It performs no worse and supports scalable congestion control (by squaring the controller's output for classic TCP). PI natively controls scalable CCs, so an adaptation function is needed to handle classic CCs as well; the combination of PI and the squared output can thus support both scalable and classic CCs. Finally, he concluded the presentation with a link to the project.
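The core trick can be shown numerically. This is a back-of-the-envelope sketch with illustrative gain values, not the paper's exact pseudocode: a plain linear PI controller tracks queueing delay against a target, and the drop probability applied to classic TCP is the square of the controller's output, which counters Reno's rate ~ 1/sqrt(p) response without PIE's extra heuristic scaling factors.

```python
# Numeric sketch of the PI2 idea (illustrative parameters, not the paper's
# exact pseudocode): a linear PI controller tracks queueing delay, and the
# drop probability applied to *classic* TCP is the square of its output p',
# cancelling Reno's rate ~ 1/sqrt(p) response without PIE's heuristics.

ALPHA, BETA = 0.3125, 3.125     # PI gains (illustrative values)
TARGET = 0.015                  # target queue delay, seconds

def pi2_update(p_prime, qdelay, qdelay_prev):
    """One PI update; returns (new p', classic drop prob, scalable prob)."""
    p_prime += ALPHA * (qdelay - TARGET) + BETA * (qdelay - qdelay_prev)
    p_prime = min(max(p_prime, 0.0), 1.0)
    classic_p = p_prime ** 2     # classic TCP (Reno/Cubic): rate ~ 1/sqrt(p)
    scalable_p = p_prime         # scalable CC (DCTCP-like): rate ~ 1/p
    return p_prime, classic_p, scalable_p

# Drive the controller with a few (current, previous) queue-delay samples.
p = 0.0
for q, q_prev in [(0.030, 0.015), (0.040, 0.030), (0.030, 0.040)]:
    p, classic, scalable = pi2_update(p, q, q_prev)
    print(f"p'={p:.4f}  classic drop={classic:.6f}")
```

Because squaring is done on the output rather than inside the control law, the controller itself stays linear, which is exactly the simplification over PIE claimed in the talk. (How the same p' couples to scalable flows is the Dual-PI2 follow-up work mentioned above.)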
Q. In data centers, the TCP problem is solved. What's next?
A. This is actually a way to migrate away from something which is legacy and couldn't otherwise be changed. We know there are better solutions, like DCTCP, which are not being used yet because we can't use them on today's Internet. So this is a way to define a new mechanism that scales and has a lot more advantages; one obstacle is the migration process. Once every TCP on the Internet is replaced by DCTCP, we can remove the dual queue from the setup.
This is certainly only one migration. Hopefully, future TCP congestion control and other research will find the best solutions.
Q. Are you ever going to do a run-off between PI2 and single-queue FQ-CoDel?
A. Yes. What we are trying to achieve here is the same as what FQ-CoDel can do. We are trying to add fair sharing between the flows, but without inspecting the transport headers. We don't need to know the flow; we just need to know, per packet, whether it's DCTCP or not. This is more of an end-to-end solution and less complex, but probably not as optimal at the network level as CoDel.
Q. Comparing results for PIE and PI2: PI2 has lower peaks, so the probability actually reduces. So it's less aggressive, in some sense, than PIE, which has a couple of higher peaks. Any intuition why?
A. It is not actually a less aggressive control. By tuning it, based on a certain probability, the alpha and beta factors are scaled up or down, which gives more or less the same effect.
2. SMig: Stream Migration Extension for HTTP/2 (short)
Author: Xianghang Mi (Indiana University), Feng Qian (Indiana University), and Xiaofeng Wang (Indiana University)
Presenter: Xianghang Mi
HTTP/1.1 has been the widely adopted protocol on the Internet for a while. HTTP/2, which emerged in 2015, offers some interesting features such as header compression, multiplexing, and a server push capability that enables servers to push content directly to the client. Xianghang described a scenario where, with the server responding to a large file download while the client simultaneously requests a small file, performance degrades dramatically. He then explained how the new features of HTTP/2 motivated them to use this protocol to resolve this and some other problems in their proposal.
Head-of-line blocking (HoLB) is pretty common in the real world. Stream prioritization, or starting a completely separate connection, doesn't resolve the HoLB problem. According to Xianghang, migrating an ongoing stream from one HTTP/2 connection to a completely separate one can be a solution.
He then introduced a new frame type (the Migration frame) and a couple of flags to ensure correct cross-connection ordering of frames. To discuss the design of SMig, he explained 4 migration scenarios:
- Initiated by server w/ idle conn
- Initiated by client w/ idle conn
- Initiated by server w/o idle conn
- Initiated by client w/o idle conn
For the evaluation, four configurations were compared:
- NoMig: the large file is multiplexed into smaller frames.
- MigSW: the server initiates the migration of the whole large file once it receives the request.
- MigSP: the server initiates the migration, but only a small part is migrated.
- MigCP: the client initiates the migration, but only a small part is migrated, as a request.
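The cross-connection ordering problem that the Migration frame and its flags address can be sketched as follows. This is a hypothetical reconstruction, not SMig's actual frame layout: the old connection carries a migration marker with the last byte offset delivered there, and the receiver parks data arriving on the new connection until the old connection has drained up to that offset.

```python
# Hypothetical sketch of cross-connection ordering during stream migration
# (SMig's real frame format and flags differ): the old connection sends a
# MIGRATE marker carrying the last byte offset delivered on it; the receiver
# holds data from the new connection until the old connection catches up.

class MigratingReceiver:
    def __init__(self):
        self.delivered = b""
        self.old_done_at = None          # offset promised by the MIGRATE marker
        self.new_conn_buffer = []        # frames parked until the old conn drains

    def on_old_conn(self, frame):
        kind, payload = frame
        if kind == "DATA":
            self.delivered += payload
        elif kind == "MIGRATE":          # marker: no more old-conn data after this
            self.old_done_at = payload
        self._maybe_flush()

    def on_new_conn(self, data):
        self.new_conn_buffer.append(data)
        self._maybe_flush()

    def _maybe_flush(self):
        # Release new-connection data only once the old connection has
        # delivered everything up to the marker offset.
        if self.old_done_at is not None and len(self.delivered) >= self.old_done_at:
            while self.new_conn_buffer:
                self.delivered += self.new_conn_buffer.pop(0)

rx = MigratingReceiver()
rx.on_new_conn(b"WORLD")                 # migrated bytes arrive early
rx.on_old_conn(("DATA", b"HELLO "))
rx.on_old_conn(("MIGRATE", 6))
print(rx.delivered)                      # reassembled in order
```

The sketch shows why a marker is needed at all: without it, the receiver has no way to know whether bytes arriving on the new connection come before or after the tail of the old one.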
Q. Who will schedule which file moves to another connection? (In the test scenario, this scheduling is done manually.) Could SMig be integrated with a DASH solution to provide a more elegant and optimal CDN solution?
A. Yes, for testing, we migrated the traffic manually. In fact, which side is responsible for migration is a policy decision. Who will initiate it, and what policy they should use, is yet to be explored. For instance, in the third scenario (server-initiated migration), 100 KB was the threshold, but what the threshold should be in a real scenario is open.
A. Not yet considered. Maybe later.
Q. Is any special application needed for using this solution?
A. I don’t think so. This feature can be implemented in the client or server library. Applications can directly call the library functions (via socket or API) to use migration.
3. MP-DASH: Adaptive Video Streaming Over Preference-Aware Multipath
Author: Bo Han (AT&T Labs -- Research), Feng Qian (Indiana University), Lusheng Ji (AT&T Labs -- Research), and Vijay Gopalakrishnan (AT&T Labs -- Research)
Presenter: Bo Han
(This paper was one of the Best Paper Award recipients at CoNEXT ’16.)
Bo Han began his presentation by explaining some of the exciting features of MPTCP: it splits a single flow over multiple physical paths, offering more speed and mobility. But the best feature of this protocol is its transparency (at the socket layer).
Nowadays, video is the main contributor to mobile traffic. According to Cisco, 50% of all mobile traffic is video, and this is predicted to rise to 70% by 2020. Despite DASH, QoE is still an open question. Open WiFi can't provide the best performance, even though it is available almost everywhere. So the question arises: can MPTCP help achieve better performance?
It is well known that MPTCP offers a robust solution. However, the challenges and opportunities for MPTCP need exploring. Bo Han and his team ran an experiment in a controlled environment where they streamed at ~4 Mbps (with 3.8 Mbps of WiFi and 3.0 Mbps of LTE bandwidth). The key observation from this experiment was that LTE capacity was fully utilized even though only 0.2 Mbps of it was actually needed.
The main goal of this project was to create an interface-preference-aware MPTCP for adaptive streaming (assuming WiFi is preferred over LTE). To do so, they leverage the delay tolerance of DASH streaming to tweak MPTCP scheduling. The key component is a deadline-aware MPTCP scheduler.
Next he moved on to explaining the MPTCP architecture. One important thing to note here is that the server doesn't know the next chunk to be requested. So the solution was to add intelligence about how to control the cellular subflow: when cellular is enabled, utilize its maximum bandwidth by transmitting as quickly as possible. MP-DASH introduces both deadline-awareness and link priority into MPTCP.
Next, he introduced the MP-DASH adapter and its main challenges. Since different cross-layer interactions with MP-DASH had already been implemented, the adaptation logic (throughput-based vs. buffer-based) was one of the key challenges for them. Another challenge was designing its control loop. The basic approach relies on either the chunk size or the chunk deadline (duration-based or rate-based).
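A back-of-the-envelope sketch of the deadline-aware idea (not MP-DASH's kernel scheduler; the function name and numbers are illustrative): cellular is used only when the preferred WiFi link cannot deliver a chunk by its deadline, and only for the shortfall.

```python
# Back-of-the-envelope sketch of the deadline-aware idea behind MP-DASH
# (not its actual kernel scheduler): use cellular only when the preferred
# WiFi link cannot deliver the chunk by its deadline, and only for the
# shortfall; vanilla MPTCP would instead drive cellular to full capacity.

def cellular_bytes(chunk_bytes, deadline_s, wifi_bps):
    """Bytes to offload to cellular so the chunk still meets its deadline."""
    wifi_bytes = (wifi_bps / 8) * deadline_s     # what WiFi alone can deliver
    return max(0.0, chunk_bytes - wifi_bytes)

# A 4-second, 2 MB chunk with 3.8 Mbps of WiFi: WiFi covers 1.9 MB,
# so only ~0.1 MB needs to go over LTE.
need = cellular_bytes(2_000_000, 4.0, 3_800_000)
print(f"{need:.0f} bytes over cellular")

# With a smaller chunk, WiFi alone suffices and cellular stays idle.
print(cellular_bytes(1_000_000, 4.0, 3_800_000))
```

This matches the controlled experiment described above: the stream only ever needs a thin slice of LTE, which is exactly what the deadline-aware scheduler provides.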
To prototype this, they wrote ~300 lines of code as a portable patch to MPTCP in the Linux kernel. The MP-DASH adapter was based on the open-source GPAC player, with four DASH rate-adaptation algorithms: GPAC, FESTIVE, BBA, and improved BBA. For the evaluation, they assessed two components: the MP-DASH scheduler and the full MP-DASH framework.
Bo Han et al. used MP-DASH in 10 different locations, divided into 3 categories (WiFi alone can never support high-quality video, e.g., a hotel or food market; WiFi can sometimes support it but not always, e.g., an airport or coffee house; WiFi can always stably support it, e.g., a library). The key question here is: what if we don't use LTE in these locations at all?
He also explained MP-DASH usage under a mobility scenario with their experimental setup. The key take-aways from this are:
- MP-DASH adaptively uses cellular only when WiFi throughput drops
- Vanilla MPTCP drives cellular to full capacity, regardless of WiFi throughput
Q. Really cool stuff. I am curious about the choice of metrics. You focus on radio energy consumption and throughput; why not buffering or bitrate switching? Do you have some way to normalize these QoE metrics across all the players? What was the buffering size ratio between MP-DASH and regular DASH?
A. This is a very good question about other metrics. In the current evaluation, most of the time the aggregated throughput of WiFi and cellular matches the coding bitrate of the DASH video, so we focused on cellular data usage and radio energy consumption. In the future we plan to evaluate MPTCP with bitrate and other QoE metrics.
Q. I think the first question already answers mine. But I am interested in seeing more QoE metrics in the evaluation: how often does the bitrate change, and how large are the jumps when it does?
A. We have evaluated MPTCP and identified the changes in bitrate. We found that in most cases MP-DASH does not affect the QoE; it can maintain the same encoded bitrate as vanilla MPTCP.
Q. Can you use any standard metrics for bitrate changes?
A. We haven’t done that. We can do that in the future.
Q. The adaptation is based on some estimation of the delivered bitrate. Why not use a simple exponentially weighted predictor rather than a more complicated one?
A. We did a literature survey, and according to a 2005 SIGCOMM paper, the Holt-Winters predictor performs much better for TCP.
Q. In this context?
A. In general.
Q. When you say you’re reducing the utilization of the cellular network by 80% or 90%, what problem are you trying to solve? Are you aggregating bandwidth, or trying to improve reliability by shifting between WiFi and cellular?
A. MPTCP always tries to utilize the full capacity of the underlying links. But sometimes a user may prefer to use WiFi over LTE. So there are scenarios where MP-DASH can use only 0.2 Mbps of LTE to support the highest bitrate of the video, whereas vanilla MPTCP can't do that.
Q. What was the RTT difference between the cellular network and WiFi you worked on?
A. We ran experiments in different locations. By default, MPTCP prefers the link with lower RTT. If the demand of an application is higher, MPTCP decides to use the second subflow, which will always try to utilize the full capacity: as long as there is space in the TCP window, it will send packets on the secondary link.
So the deadline-aware MP-DASH scheduler is an overlay on top of the original MPTCP scheduler; it’s not a completely new MPTCP scheduler.
Q. Video quality depends on how accurately you can predict. So, for DASH, does using MPTCP throughput make the prediction more credible or less?
A. MPTCP has nothing to do with the loss of prediction accuracy, as MPTCP is just a set of extensions to regular TCP.