Leveraging Diversity to Optimize Performance in Mobile Networks
Author: M. Zubair Shafiq
Presenter: M. Zubair Shafiq
Mobile data traffic volume is increasing rapidly due to growth in the subscriber base, improving network connection speeds, and the improving capabilities of modern smartphones. The paper shows how mobile network operators can leverage device diversity and geospatial diversity to improve performance (traffic models and radio resource allocation).
First, network operators can improve performance by tuning Radio Resource Control (RRC) state machine inactivity timers for cell sectors with distinct traffic profiles. For example, more efficient radio resource utilization can be achieved by reducing RRC inactivity timers in cell sectors dominated by delay-tolerant applications. These ideas are validated through trace-driven simulations of users' RRC state machines, using logs collected at the radio network controllers of an operational mobile network. The results support RRC timeouts that are 1-2 seconds shorter than current settings. Second, network operators can improve their workload models for mobile video streaming by customizing them for different device types. To do this, the authors cluster similar device types together and refine the models for individual devices.
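The timer tradeoff can be illustrated with a toy trace-driven simulation of a simplified two-state RRC machine (idle vs. connected). The state model, packet trace, and timeout values below are illustrative assumptions, not the paper's actual simulator:

```python
# Toy trace-driven simulation of a simplified RRC state machine
# (idle -> connected on packet arrival, release after an inactivity
# timeout). All values are illustrative, not from the paper.

def simulate_rrc(packet_times, inactivity_timeout):
    """Return (promotions, connected_time) for a sorted packet trace."""
    promotions = 0
    connected_time = 0.0
    release_at = None  # time at which the radio would be released
    for t in packet_times:
        if release_at is None or t > release_at:
            promotions += 1  # idle -> connected (promotion)
            connected_time += inactivity_timeout
        else:
            # Still connected: the packet extends the hold time.
            connected_time += (t + inactivity_timeout) - release_at
        release_at = t + inactivity_timeout
    return promotions, connected_time
```

On the trace `[0, 2, 10]`, shortening the timeout from 3 s to 1 s cuts the radio-holding time from 8 s to 3 s but triggers an extra promotion, which is exactly the promotion-delay tradeoff raised in the Q&A.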
Q: What are the traffic characteristics, and what type of user traffic is considered?
A: Daytime traffic is dominated by social networking; both upload and download traffic are considered.
Q: How was the experimental data obtained?
A: The data was collected near a big stadium, so there are many users when there is an event and fewer users otherwise.
Q: What happens if the timeout value is increased?
A: It can improve performance, but only for a small population.
Q: What is the timeout tradeoff in terms of promotion delay?
A: The optimal tradeoff for promotion delay is achieved with a lower timeout.
Exposing and Mitigating Privacy Loss in Crowdsourced Survey Platforms
Authors: Thivya Kandappu, Vijay Sivaraman, Arik Friedman and Roksana Boreli
Presenter: Thivya Kandappu
Existing crowdsourcing survey platforms can profile users based on the surveys that they participate in. First, the work demonstrates that de-anonymizing users and obtaining sensitive private information via crowdsourcing survey platforms is easy. Second, the authors design, prototype, and evaluate Loki, a crowdsourcing survey platform that allows users to control their privacy loss using at-source obfuscation.
The authors demonstrate that de-anonymizing users of crowdsourcing survey platforms is easy by launching a series of surveys on Amazon Mechanical Turk. The first three surveys allow the authors to obtain different pieces of information about the participants; from these, they can de-anonymize 72 out of 400 users. Using a fourth survey about smoking habits, the authors can infer the respiratory health of 18 of the 72 de-anonymized users. A fifth survey asked whether users would participate in surveys if they knew that they could be profiled: 73 out of the 100 users who took this survey indicated that they would not.
Loki obfuscates the user's true response according to the privacy setting that the user chooses. The authors performed a preliminary evaluation by trialling the system with 131 volunteers. By comparing the results with those from a trusted third party and by comparing the ratings across the various privacy bins in the system, the authors show that the error is small enough to make inferences even with a relatively small sample size.
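The summary does not detail Loki's exact mechanism, but randomized response is a standard form of at-source obfuscation whose aggregate bias can be corrected, sketched here purely as an assumption:

```python
# Illustrative randomized-response obfuscation (an assumption, not
# necessarily Loki's actual mechanism): each user perturbs their own
# answer, yet the aggregate average remains recoverable.
import random

def obfuscate(true_answer, choices, p_keep):
    """Report the true answer with probability p_keep, otherwise a
    uniformly random choice from `choices`."""
    if random.random() < p_keep:
        return true_answer
    return random.choice(choices)

def debias(observed_mean, p_keep, uniform_mean):
    """Recover the population mean from obfuscated reports, since
    observed = p_keep * true + (1 - p_keep) * uniform_mean."""
    return (observed_mean - (1 - p_keep) * uniform_mean) / p_keep
```

For binary answers (`choices = [0, 1]`, so `uniform_mean = 0.5`), individual responses gain plausible deniability while the debiased aggregate stays close to the true average, consistent with the Q&A point that the aggregated value is preserved.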
Q: If you obfuscate the data, can you trust the data obtained?
A: The idea is to try to maintain the average of the aggregated value, such that the data obtained is still relevant.
A Base Station Congestion-Dependent Pricing Scheme for Cellular Data Network
Authors: Agripino Gabriel M. Damasceno, Raquel Aparecida de Freitas Mini, Humberto Torres Marques-Neto
Presenter: Agripino Gabriel M. Damasceno
Time-based pricing schemes can be used to improve resource management even as the mobile Internet traffic generated in cellular networks increases. However, time-based pricing can be unfair to users outside the congested areas during network peak periods, since different base stations have different workloads. The paper presents a pricing scheme that uses base stations' historical workloads and constant monitoring of users' sensitivity to applied prices to differentiate pricing and control geographical congestion in the ISP network.
The scheme uses several thresholds to determine the price. As part of ongoing work, the authors are simulating the proposed pricing scheme. The preliminary results show that the scheme is able to reduce utilisation at peak times and make the network more profitable off peak.
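A threshold-based price rule of this kind might look like the following sketch; the thresholds and multipliers are invented for illustration and are not the paper's values:

```python
# Hypothetical per-base-station pricing rule: map historical utilization
# (0..1) to a price multiplier via thresholds. All numbers illustrative.

def price_multiplier(utilization,
                     thresholds=(0.5, 0.7, 0.9),
                     multipliers=(1.0, 1.2, 1.5, 2.0)):
    """Below the first threshold the base price applies; each crossed
    threshold raises the multiplier one step."""
    for level, threshold in enumerate(thresholds):
        if utilization < threshold:
            return multipliers[level]
    return multipliers[-1]
```

Under this rule a lightly loaded base station keeps the base price while a congested one charges double, so pricing varies by location rather than by time of day alone.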
Q: What is the pricing model?
A: Incentive-based pricing model.
Linearly Scalable Crowdsourced Media Broadcasting in the Mobile Cloud
Authors: Joshua Joy, Nagendra Babu, Christine Kuo, Hiral Kapadia, Mario Gerla
Presenter: Joshua Joy
The volume of data, especially images, is increasing as users constantly capture and share images on their mobile devices. Deduplication is a compression technique that eliminates duplicate copies of repeating data by storing redundant data only once. However, standard deduplication does not work on images. The authors introduce a storage mechanism that achieves deduplication for similar images with different screen sizes and resolutions, addressing the demands of increased image sharing in the mobile cloud (e.g., the vehicular cloud).
The mechanism stores the original image and computes the transforms to similar images, enabling on-demand image transformation using a lineage tree of meta-files that records the different versions and resolutions of an image. Comparing the proposed mechanism's storage utilization with opendedup, a file system that performs in-line block-level deduplication, the authors found that the proposed solution outperforms opendedup when 90% of the images to be stored can be generated from files already in the system. In addition, the proposed mechanism runs faster than opendedup when storing multiple similar images, since it only transfers meta-files.
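The lineage-tree idea can be sketched as follows; the class and its API are hypothetical, standing in for the paper's meta-file store:

```python
# Sketch of lineage-tree image storage (hypothetical API): keep one
# original per image, and record derived versions as (parent, transform)
# meta-records that are replayed on demand instead of storing pixels twice.

class LineageStore:
    def __init__(self):
        self.originals = {}  # image_id -> raw data
        self.lineage = {}    # derived_id -> (parent_id, transform fn)

    def put_original(self, image_id, data):
        self.originals[image_id] = data

    def put_derived(self, derived_id, parent_id, transform):
        # Only a small meta-record is stored, not the derived image.
        self.lineage[derived_id] = (parent_id, transform)

    def get(self, image_id):
        if image_id in self.originals:
            return self.originals[image_id]
        parent_id, transform = self.lineage[image_id]
        return transform(self.get(parent_id))  # regenerate on demand
```

Because a derived resolution is just a meta-record pointing at its parent, sharing it means transferring the meta-file rather than the image bytes, which is where the claimed speedup over block-level deduplication would come from.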
Q: Can you give more detail about the mechanism?
A: Performing deduplication of images is a search problem.
Q: How heavy is the proposed deduplication process?
A: The proposed deduplication is not as heavy as opendedup's.
Q: What kind of traffic? Upload or download?
A: The focus is on download.
Q: What happens when there are multiple users?
A: That is part of future work.