Paper, public review, and code.

Paul started by describing traffic matrices, which capture the amount of traffic between all pairs of nodes in a network. Spatio-temporal traffic matrices capture traffic as a function of time. Traffic matrices are useful for several applications, including generating network topologies, evaluating routing protocols, and guiding anomaly detection.
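As a minimal illustration of the data structure (not code from the paper), a traffic matrix is just an n x n array, and its spatio-temporal variant adds a time axis:

```python
import numpy as np

# A traffic matrix for n nodes: entry [i, j] is the traffic volume
# from node i to node j (random values here, purely for illustration).
n = 4
tm = np.random.default_rng(0).uniform(0, 100, size=(n, n))

# A spatio-temporal traffic matrix adds a time axis: one n x n
# snapshot per interval (here, 24 hourly snapshots).
T = 24
st_tm = np.random.default_rng(1).uniform(0, 100, size=(T, n, n))
```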

Paul explained that we usually require many traffic matrices, e.g., to compute confidence intervals. The traditional approach is to get data and fit a model. Unfortunately, data is hard to get (e.g., it is often proprietary). Even when data is available, it is often outdated or biased (e.g., specific to one network). One way to get around the lack of data is to have a model that generates artificial traffic matrices. In this setting we want models that are (i) simple (few parameters), (ii) efficient (fast synthesis), (iii) consistent, and (iv) realistic.

Given that building realistic models is hard, this work proposes *controllability* as an alternative: giving users control over the inputs used to test their algorithms. The goal of this work is to generate *ensembles* of traffic matrices. Even though data is scarce, we do have some domain knowledge (e.g., we expect (daily) periodicity in traffic). This work builds traffic matrices that satisfy all constraints expressed in the model, without baking in any implicit assumptions.

Paul showed how the proposed maximum entropy model can be used, and how it is general enough to capture previously proposed models for traffic matrices. These (somewhat math-y) slides ended with a demo showing traffic generated from the synthetic traffic matrices, illustrating the impact of constraints.
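To make the maximum-entropy idea concrete: a classical special case (and one of the previously proposed models such a framework can capture) is that when the only constraints are per-node egress and ingress totals, the maximum-entropy traffic matrix is the gravity model. A minimal sketch, not code from the paper:

```python
import numpy as np

def gravity_tm(out_totals, in_totals):
    """Max-entropy traffic matrix when the only constraints are
    per-node egress (row) and ingress (column) totals: with no other
    information, entropy maximization yields the gravity model."""
    out_totals = np.asarray(out_totals, dtype=float)
    in_totals = np.asarray(in_totals, dtype=float)
    total = out_totals.sum()  # must equal in_totals.sum()
    return np.outer(out_totals, in_totals) / total

tm = gravity_tm([60, 40], [30, 70])
# Row sums recover the egress totals; column sums the ingress totals.
```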

Paul mentioned that matrix generation is fast. He also noted that convex constraints can be incorporated (and non-convex constraints too, if one searches for the global optimum), as can hard constraints. Paul mentioned that converting knowledge into constraints is not always obvious, then proceeded to summarize the work.
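The general recipe behind incorporating such constraints can be sketched as a constrained optimization: maximize entropy subject to equality constraints and hard bounds. The snippet below is a hypothetical illustration using a generic convex solver, not the paper's actual formulation; the total volume, the per-entry cap, and the 2x2 size are all made-up parameters.

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical sketch: maximize entropy over a 2x2 traffic matrix
# subject to a fixed total volume (equality constraint) and a hard
# cap on one entry (a bound). Not the paper's formulation.
total = 100.0

def neg_entropy(x):
    p = x / total                      # normalize to a distribution
    return float(np.sum(p * np.log(p + 1e-12)))

cons = [{"type": "eq", "fun": lambda x: x.sum() - total}]
bounds = [(1e-6, 10.0)] + [(1e-6, None)] * 3  # cap entry 0 at 10
x0 = np.array([5.0, 35.0, 30.0, 30.0])        # feasible start

res = minimize(neg_entropy, x0, bounds=bounds, constraints=cons)
tm = res.x.reshape(2, 2)
# The capped entry is pushed to its bound; the remaining volume
# spreads evenly over the other entries (maximum entropy).
```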

### Q&A

**Walter Willinger:** Traffic is subject to routing. How do I incorporate it in this framework?

**A:** I believe it can be done. You would have to know the correlation structure between the routing matrix and the traffic matrix.

**Walter:** Is it my job then?

**A:** Yes. One caveat is that forming the constraints could be difficult in some cases; this one could be a bit more involved.

**Anja Feldmann:** How does your work compare with previous work on traffic matrices?

**A:** Some previous work is about inferring traffic matrices; we focus on synthesis. There is some work on synthesizing traffic matrices, but it is not entirely similar.

**Anja:** Why do you need many traffic matrices?

**A:** It depends on the application. If you are testing a router, you might want traffic matrices with one very high value as well as uniform traffic matrices. You want variance to cover different scenarios.

**Q:** Different applications may have different constraints. I expect one would need to add constraints to traffic matrices to get meaningful results. Is there a way to integrate latency requirements?

**A:** I have not looked at the application level yet; this work is at a lower level. I believe it would be possible, but it could be more involved.

**Anja:** Could the same framework be applied? Would it make a difference whether you have routers or servers and end-hosts?

**A:** It would not make a difference, but I am not sure whether there would be more specific details.
