
Friday, December 13, 2013

Federated Flow-Based Approach for Privacy Preserving Connectivity Tracking

Presenter: Mentari Djatmiko

Authors: Mentari Djatmiko (NICTA & UNSW), Dominik Schatzmann (ETH Zurich), Xenofontas Dimitropoulos (ETH Zurich), Arik Friedman (NICTA), Roksana Boreli (NICTA & UNSW)

The paper is motivated by Internet outages, which have significant financial and reputational impact. Prior work relies on passive control-plane measurements using BGP data (which suffer from false positives), active measurements (which face a tradeoff between overhead and detection granularity), or passive data-plane measurements (which avoid these shortcomings but raise privacy concerns).

The proposed scheme relies on passive data-plane measurements and aims to alleviate the privacy concerns. The authors propose using secure multi-party computation (MPC), a cryptographic technique that enables privacy-preserving computation of connectivity across domains. (Slide malfunction during presentation)
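As a rough illustration of the MPC building block, here is a minimal additive secret-sharing sketch in Python. This is not the paper's protocol, only the basic idea such schemes build on; all names and values are illustrative.

import random

Q = 2**61 - 1  # public prime modulus, agreed on by all parties

def share(secret, n_parties):
    """Split a private value into n additive shares modulo Q."""
    shares = [random.randrange(Q) for _ in range(n_parties - 1)]
    shares.append((secret - sum(shares)) % Q)
    return shares

# Each ISP privately counts, say, failed connections to some destination.
private_counts = [17, 4, 23]  # one per ISP; never revealed directly

# Every ISP distributes one share to each peer; peers only ever see
# individual shares, which look like random numbers.
all_shares = [share(c, len(private_counts)) for c in private_counts]

# Each party sums the shares it holds, and the partial sums are combined.
partial_sums = [sum(col) % Q for col in zip(*all_shares)]
aggregate = sum(partial_sums) % Q
print(aggregate)  # 44 -- the total is learned, the individual counts are not

The point is that the aggregate connectivity statistic can be computed without any single domain disclosing its own measurements.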

The authors present a case study for evaluation.

Q: You focus on outages (which is a binary performance problem). Can you use this scheme for fine-grained performance evaluation?
A: Yes, that is possible future work.

Q: Does the solution work in real-time? Does it scale for the whole Internet?
A: We have only conducted small-scale evaluations so far. It may be challenging to scale the scheme to a large number of domains.

Q: What information are you trying to protect? Are there privacy concerns for connectivity information?

A: Yes, it can be sensitive. For example, whether someone accesses pornography is likely private.

CoDef: Collaborative Defense Against Large-Scale Link-Flooding Attacks

Presenter: Min Suk Kang

Authors: Soo Bum Lee (Carnegie Mellon University), Min Suk Kang (Carnegie Mellon University), Virgil D. Gligor (Carnegie Mellon University)

Traditional DDoS attacks target specific endpoints or servers. In recent years, however, we have seen several attacks geared towards specific links instead of a large number of hosts. Traditional flow-filtering schemes are susceptible to these attacks because attack flows (which are typically low-rate, have diverse source/destination addresses, and are protocol-conforming) are often indistinguishable from benign flows.

The proposed scheme, called CoDef, relies on collaboration among ASes; the attack-source and target ASes are generally motivated to collaborate to curb the attack. CoDef uses collaborative rerouting, in which the target AS asks neighboring ASes to reroute traffic via other paths, essentially dispersing attack and benign traffic. If the attacker is aware of the reroute and chooses to re-launch the attack by creating new flows toward the target link, those new flows identify the attacker.
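A hedged sketch of the identification step, assuming a simplified model in which we observe, per source AS, which links its flows traverse after the reroute (the names here are hypothetical, not CoDef's actual interface):

def suspicious_sources(flows_after, target_link):
    """flows_after: dict mapping src AS -> set of links traversed post-reroute."""
    # After the collaborative reroute, normal routing no longer crosses
    # target_link, so any source whose new flows still reach it is suspect.
    return {src for src, links in flows_after.items() if target_link in links}

Benign traffic simply follows the new routes, while an adaptive attacker has to create fresh flows that chase the target link, and in doing so reveals itself.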

After collaborative rerouting, CoDef uses collaborative rate-control and path pinning (which were not discussed during the presentation). The evaluation was conducted using topology data from CAIDA. CoDef does not require changes to BGP or OSPF.

Q: Can CoDef identify attack source inside the attack AS?
A: No, CoDef would notify the AS owner/ISP instead.

Q: What is the cost of the routing changes employed by CoDef? And someone could abuse the system with false collaborative-rerouting advertisements; how does CoDef cater for that?

A: We envision that CoDef will be offered as a premium service. The cost of the service would deter misuse.

RiskRoute: A Framework for Mitigating Network Outage Threats

Presenter: Ramakrishnan Durairajan

Authors: Brian Eriksson (Technicolor Research), Ramakrishnan Durairajan (University of Wisconsin - Madison), Paul Barford (University of Wisconsin - Madison)

Network outages happen for a wide variety of reasons, for example censorship, cable cuts, or natural disasters. The paper presents a framework for proactively mitigating network outages due to natural disasters, in particular weather-related outages. The key idea is that weather-related events follow predictable geographical and temporal patterns and can therefore be forecast before they occur.

The authors propose a new metric called "bit-risk miles" that quantifies the sensitivity of a path to weather-related disasters and allows studying the tradeoff between shorter paths and outage risk. The framework forecasts the outage probability at PoP locations and selects a path from a set of possible paths.
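A hedged sketch of what such a risk-weighted path cost could look like (the exact definition of bit-risk miles is in the paper; this version, which weights each link's mileage by the forecast outage probability at its endpoint PoPs, is only an assumed approximation):

def bit_risk_miles(path, miles, outage_prob):
    """path: list of PoPs; miles[(a, b)]: link length; outage_prob[p]: forecast."""
    cost = 0.0
    for a, b in zip(path, path[1:]):
        link_risk = max(outage_prob[a], outage_prob[b])
        cost += miles[(a, b)] * link_risk
    return cost

def pick_route(candidate_paths, miles, outage_prob):
    # Among the feasible paths, prefer the one with the least risk-weighted mileage.
    return min(candidate_paths, key=lambda p: bit_risk_miles(p, miles, outage_prob))

A longer path through PoPs with low forecast outage probability can thus beat a shorter path through a region where a storm is expected.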

The evaluation is conducted using FEMA/NOAA weather data and real-world routing data from 16 regional networks. The results show that the chosen routes deviate significantly from shortest-path routing to become more risk-averse. The framework can also guide new intra-domain routes and new peering relationships for inter-domain routing. The presentation concluded with a video demo for hurricanes Irene and Katrina.

Q: In your evaluation, do you also take into account traffic volume?
A: No, we only accounted for link count.

Q: Does the framework work in real-time?
A: Yes, it does.

Q: Can you implement your framework in real routers? What changes would they require?
A: It would require some changes. A detailed analysis is left as future work.


Thursday, December 12, 2013

On the Benefits of Applying Experimental Design to Improve Multipath TCP

Presenter: Christoph Paasch
Authors: Christoph Paasch (UCLouvain), Ramin Khalili (T-Labs/TU-Berlin), Olivier Bonaventure (UCLouvain)

Although MPTCP has been around for a long time, evaluating its performance in practical scenarios has unfortunately received little attention. The authors therefore investigate the performance gains of Multipath TCP, studying the effect of different environmental parameters on its performance. They quantify the gains from using MPTCP and highlight the cases in which it does not achieve the expected performance. In addition, they propose some solutions and evaluate their effect. In conclusion, the authors built an evaluation environment that can be adopted to measure the performance of multipath TCP schemes.

Q: What makes the aggregation bandwidth equal to 1?
A: When MPTCP achieves a throughput equal to the sum of the throughputs that can be achieved using each interface separately. 
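A small sketch of that metric, under the common normalization where 0 means MPTCP only matched the best single interface and 1 means it matched the sum across all interfaces (this formula is an assumption for illustration, not quoted from the paper):

def aggregation_benefit(mptcp_goodput, per_path_goodputs):
    """Normalized gain of MPTCP over the best single path."""
    best = max(per_path_goodputs)
    total = sum(per_path_goodputs)
    return (mptcp_goodput - best) / (total - best)

print(aggregation_benefit(95.0, [60.0, 40.0]))  # 0.875: near-full aggregation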

DomainFlow: Practical Flow Management Method using Multiple Flow Tables in Commodity Switches

Presenter: Yukihiro Nakagawa 
Authors: Yukihiro Nakagawa (Fujitsu Laboratories Ltd.), Kazuki Hyoudou (Fujitsu Laboratories Ltd.), Chunghan Lee (Fujitsu Laboratories Ltd.), Shinji Kobayashi (Fujitsu Laboratories Ltd.), Osamu Shiraki (Fujitsu Laboratories Ltd.), Takeshi Shimizu (Fujitsu Laboratories Ltd.)


The demand for bandwidth from servers is increasing dramatically, so a scalable, high-bandwidth network is essential for data centers. A lot of prior work enhances the physical layer of switches to improve their performance; unfortunately, this introduces considerable overhead in controlling the switches. The authors therefore propose DomainFlow, a practical flow management method based on OpenFlow concepts and switches. They apply network virtualization approaches so that administrators and customers can easily control the system. One of the main gains of this virtualization is seamlessly utilizing the multiple paths between source and destination. The authors prototyped their system and measured its performance, efficiency, and controllability.
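As a rough illustration of the multipath idea (names hypothetical, not DomainFlow's actual tables): a switch can deterministically spread flows across the available paths by hashing the flow key, without involving the controller for every flow:

import hashlib

def pick_path(flow_key, paths):
    """flow_key: e.g. (src_ip, dst_ip, src_port, dst_port); paths: candidates."""
    digest = hashlib.sha256(repr(flow_key).encode()).digest()
    return paths[digest[0] % len(paths)]  # stable choice per flow

paths = ["via-spine-1", "via-spine-2", "via-spine-3"]
print(pick_path(("10.0.0.1", "10.0.1.9", 5000, 80), paths))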

Q: Do you have to choose between WRT and ANT, or can you use them together?
A: It is possible, but it needs some modifications to the system.

DOMINO: Relative Scheduling in Enterprise Wireless LANs

Presenter: Wenjie Zhou
Authors: Wenjie Zhou (Ohio State University), Dong Li (Ohio State University), Kannan Srinivasan (Ohio State University), Prasun Sinha (Ohio State University)

This work focuses on solving the channel-access challenges in enterprise networks. Although WiFi's Distributed Coordination Function (DCF) is simple and robust, it suffers from the hidden-terminal problem as well as efficiency issues. Other work in the literature has tried to solve these issues, but suffers from major problems such as inefficiency, lack of robustness, or inability to leverage the channel to its maximum. The authors therefore developed DOMINO, a centralized channel-access mechanism. DOMINO detects hidden terminals and avoids the hidden-terminal problem efficiently. In addition, it achieves relatively high throughput while avoiding the need for highly accurate time synchronization, which it does by using a relative scheduling approach. The authors implemented their scheme on USRPs to evaluate its performance, and conducted further evaluation using simulation.
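A hedged sketch of the relative-scheduling idea (a hypothetical simplification, not DOMINO's actual protocol): the controller orders transmissions relative to the end of the previous one, so nodes only need to detect that the preceding transmission has finished, for example by hearing its signature, rather than share a tightly synchronized clock:

def build_schedule(requests):
    """requests: list of (node, duration). Returns relative-order triggers."""
    schedule = []
    previous = "channel-idle"  # the first sender just waits for an idle channel
    for node, duration in requests:
        schedule.append({"node": node, "after": previous, "duration": duration})
        previous = node  # next sender keys off this node's end-of-frame signature
    return schedule

for slot in build_schedule([("AP1-client3", 2), ("AP2-client1", 4)]):
    print(slot)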


Q: Is this 802.11g or 802.11n compatible?
A: 802.11g.

Q: What is the overhead of relative scheduling?
A: Sending the signature at the end of the transmission.

Is There a Case for Mobile Phone Content Pre-staging?

Presenter: Alessandro Finamore
Authors: Alessandro Finamore (Politecnico di Torino), Marco Mellia (Politecnico di Torino), Zafar Gilani (Universitat Politecnica de Catalunya), Kostantina Papagiannaki (Telefonica Research), Yan Grunenberger (Telefonica Research), Vijay Erramilli (Telefonica Research)

The authors propose a novel technique to implement content pre-staging (caching) in mobile networks, specifically by pushing content onto user devices.
A new component, the content bundler, is introduced. Installed at the ISP side, it classifies the traffic and identifies the most popular items, bundling the set of content that will be pushed to the mobile terminals.
They study a one-day trace (an HTTP log) of a big metropolitan city to evaluate the performance of the scheme. It turns out that popularity is a good trigger for pre-staging.
Different strategies for bundling content are proposed. Among them, the popularity-based one is the most practical, achieving a 7% saving in the volume of data transferred, and tangible benefits for users too.
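A minimal sketch of popularity-based bundling (the parameters, such as the bundle size, are hypothetical): rank items seen in the HTTP log by request count and greedily pack the most popular ones into the bundle pushed to handsets:

from collections import Counter

def build_bundle(requested_urls, sizes, max_bytes=50_000_000):
    """requested_urls: log of URLs; sizes: url -> bytes. Returns URLs to push."""
    popularity = Counter(requested_urls)
    bundle, used = [], 0
    for url, _count in popularity.most_common():
        if used + sizes[url] <= max_bytes:
            bundle.append(url)
            used += sizes[url]
    return bundle

Devices that receive the bundle can then serve those URLs locally instead of fetching them over the mobile network.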

Q: It seems like a macro network optimization; is this targeted at big events?
A: Yes, but not only. For a big event, I would say that if we push the bundler down to the BTS level we may achieve suboptimal performance.

Q: It seems that you determine the bundle on a 24-hour basis; what about doing it hour by hour?
A: This is exactly what we have done.