Friday, August 25, 2017

Section 9 - Realities, Paper 3: VROOM: Accelerating the Mobile Web with Server-Aided Dependency Resolution

Presenter: Vaspol Ruamviboonsuk
Authors: Vaspol Ruamviboonsuk (UMichigan), Ravi Netravali (MIT), Muhammed Uluyol (UMichigan), Harsha V. Madhyastha (UMichigan)


Page loads on mobile devices remain disappointingly slow, which frustrates users and hurts the revenue of website providers. Recent research has found that dependencies between the resources on a web page are a key reason for slow page loads. Although some approaches have been taken to address the impact of these dependencies on web performance, they have fundamental drawbacks. For instance, with a proxy-based solution, the client must trust HTTPS content pushed by the proxy, and the proxy needs access to the user's cookies for all domains.
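To see why dependencies slow page loads, consider that each level of a dependency chain costs at least one network round trip, because the client only discovers a resource after fetching and parsing its parent. A back-of-the-envelope sketch (the round-trip time here is an assumed illustrative value, not a measurement from the paper):

```python
RTT = 0.2  # assumed mobile round-trip time in seconds (illustrative only)

def chain_load_time(depth, rtt=RTT):
    """Load time when resources form a chain of `depth` levels:
    each resource is discovered only after its parent arrives."""
    return depth * rtt

def flat_load_time(depth, rtt=RTT):
    """Load time if all resources were known up front and fetched in
    parallel: one round trip, ignoring bandwidth limits."""
    return rtt

print(chain_load_time(5))  # 1.0: five serialized round trips
print(flat_load_time(5))   # 0.2: one round trip when dependencies are known
```

This gap between serialized discovery and up-front knowledge is exactly what server-aided dependency resolution tries to close.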


Vaspol et al. presented VROOM and the three challenges it must address:
1. How can web servers discover dependencies?
Combine offline and online analysis, and defer to third parties.
2. How do web servers inform clients of discovered dependencies?
Use HTTP/2 Push plus dependency hints.
3. How should clients use input from servers?
Vaspol demonstrated the VROOM scheduler in action.
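The second challenge reflects a practical split: a server can push only resources it hosts itself, while third-party resources can at best be hinted (e.g., via a `Link: rel=preload` header) so the client fetches them directly. A minimal sketch of that split, assuming a hypothetical first-party origin and helper (not VROOM's actual implementation):

```python
FIRST_PARTY = "example.com"  # hypothetical first-party origin

def plan_response(discovered_deps):
    """Split discovered dependencies into resources the server can push
    over HTTP/2 and third-party resources it can only hint at."""
    push, hints = [], []
    for url in discovered_deps:
        if FIRST_PARTY in url:
            push.append(url)  # first-party: serve via HTTP/2 Server Push
        else:
            # third-party: emit a Link preload header as a dependency hint
            hints.append(f"<{url}>; rel=preload")
    return push, hints

push, hints = plan_response([
    "https://example.com/app.js",
    "https://cdn.thirdparty.net/lib.js",
])
print(push)   # ['https://example.com/app.js']
print(hints)  # ['<https://cdn.thirdparty.net/lib.js>; rel=preload']
```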


Vaspol et al. evaluate VROOM from two perspectives: 1) the accuracy of dependency discovery, and 2) the improvement in client-perceived performance. VROOM fully utilizes the CPU and network, decouples dependency discovery from parsing and execution, and decreases the median page load time by 5 seconds for popular sites.
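The decoupling idea can be illustrated with a toy client: once the server has hinted at the needed resources, the client can start all fetches up front so the network stays busy while the CPU parses whatever has already arrived. A hedged sketch of that pattern (the `fetch` stub and URLs are placeholders, not real network code):

```python
from concurrent.futures import ThreadPoolExecutor

def fetch(url):
    """Stand-in for a real network fetch."""
    return f"contents of {url}"

def load_page(hinted_urls):
    """Kick off all hinted fetches in parallel instead of discovering
    resources one by one during parsing."""
    with ThreadPoolExecutor(max_workers=8) as pool:
        futures = {url: pool.submit(fetch, url) for url in hinted_urls}
        return {url: f.result() for url, f in futures.items()}

results = load_page(["a.css", "b.js"])
print(results["a.css"])  # contents of a.css
```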

The paper is here.

Q&A section:

Q1: Have you considered the optimal case for web serving, where no content needs to be fetched dynamically and the server simply returns static HTML and images to the client? How does your approach fit into this scenario?

A1: We didn't do that exact comparison, and it would certainly be interesting to do. We did something similar, where all of the content is retrieved from the web server at the beginning of the page load, and it turns out there is no improvement.

Q2: If the webpage is a simple shell with JavaScript that fetches all other content dynamically, the server needs a lot of information from the user, such as cookies, to analyze the JavaScript. If the server does not require much information from the client, then the extra information delivered to the client may turn out not to be useful. There is a clear trade-off; how do you think about this?
A2: In the case where all of the content is dynamically generated, as on Facebook or Twitter, we cannot analyze the dependencies; that is one limitation of our current work.

Q3: This is a kind of prefetching, and there is an issue with the client cache. Have you considered this case and the influence of the client cache?
A3: It is quite hard to model caching correctly. We actually did a per-page cache evaluation, and you can find the measurement results in our paper.