Navid Hamed Azimi, Himanshu Gupta, Vyas Sekar, Samir Das (Stony Brook University).
Presenter: Navid Hamed Azimi
The authors propose that all data center networking (between ToR switches) be done over free-space optical (FSO) links, which can potentially be low-cost and high-performance but come with their own challenges. Today's FSO devices are expensive, but that is because they are designed for outdoor environments. One main challenge is that the FSO laser beam diverges, but the authors engineered a way around this (see the paper for details). Another challenge is FSO's line-of-sight requirement, which is problematic for all-to-all network connectivity; the authors overcome it with a mirror on the ceiling of the data center.
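The divergence problem can be made concrete with a simple geometric-optics sketch (illustrative numbers only; the figures and the helper function below are not from the talk or the paper): a beam widens roughly linearly with its divergence angle, so over a rack-to-ceiling-to-rack path the spot at the receiver can grow far beyond the detector aperture unless the beam is re-collimated.

```python
import math

def spot_diameter_m(initial_diameter_m, divergence_mrad, path_length_m):
    """Approximate beam diameter after travelling path_length_m,
    assuming the beam widens linearly with its full divergence angle
    (a first-order geometric-optics approximation)."""
    theta = divergence_mrad * 1e-3  # full divergence angle in radians
    return initial_diameter_m + path_length_m * math.tan(theta)

# Illustrative: a 5 mm beam with 1 mrad divergence over a 60 m
# ceiling-mirror path grows to roughly 6.5 cm.
d = spot_diameter_m(0.005, 1.0, 60.0)
```

Even a modest 1 mrad divergence turns a 5 mm beam into a spot more than ten times wider, which is why the authors needed optics to keep the beam narrow over data-center-scale distances.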
What's cool is that they actually built a small prototype in their lab and verified that they could sustain communication between two hosts at 1 Gb/s.
Q: I like the idea, but what happens if I scale to 10 Gb/s and 40 Gb/s? Will I run into alignment issues?
A: We consulted an optics person who said there shouldn't be any issue.
Q: Why can't I just put them on the server NICs instead of putting FSO on the ToR switches?
A: There will be line-of-sight issues.
Q: I worry about vibration issues due to the mirror. Can you talk about fault-tolerance?
A: We don't know exactly but we think it shouldn't be an issue.
Q: Dynamic links require mechanics to turn mirrors and a feedback control system, right?
Q: If you only have three choices for mirror alignment, this isn't that flexible, is it?
A: You can add as many mirrors as there is space on the ToR switch.
Q: What is the pass-through loss?
Q: Now we cannot have lights in our data center?
Q: MCP and fountain codes are two competing solutions to incast. Can each of you argue why your solution is the best? Isn't incast the main problem you are trying to solve?
Trevis author: Incast isn't the only problem; there are other problems, which I can argue for qualitatively.
MCP author: MCP flow rates are deadline driven, so there could be bandwidth headroom for new flows (incast-like).
Comment from Trevis author to MCP author: Do consider what happens during barrier-synchronized workloads, which is the worst-case scenario for incast.
Q: Do you have a combinatorial explosion in the number of mirrors if you have a lot of racks?
A: Fanout multiplies with the number of mirrors.
Q: What is the frequency of the link?
A: Single-mode, at a 1300 nm wavelength.
Comment on DC research from [anonymous]: We can have a lot of coding mechanisms in a data center, particularly at layers 1 and 2. Can we do CDMA? Are we doing too much SDN?
Q: Yesterday we had someone talk about blowing up a DC into individual pieces. If we take a coding perspective, we might end up with a different architecture. What is the cost of a particular design point?
A: I don't know what the best design is, but I see the opportunity.
A: We have preconceptions about computers (as mainframes, DCs, PCs, etc.), but what we really have is information flows, interacting with each other and generating new information. I think we should go deep in that direction too.
Q: We have low-latency as a requirement, but there are fundamental latency limits. For instance, we cannot steer mirrors fast compared to DC RTTs. Congestion control is also mostly reactive. Can you comment on the limits of your approaches?
A: For large flows, milliseconds is fine.
A: Depends on the application. If you care about consistency, latency probably doesn't matter, but we will need to explore.