Recycling Cyclic Prefix for Versatile Interference Mitigation in OFDM-based Wireless Systems
Saravana Rathinakumar (The University of Edinburgh)
Bozidar Radunovic (Microsoft Research UK)
Mahesh K. Marina (The University of Edinburgh)
Nowadays, OFDM is the most widely used data encoding scheme in wireless communications. The channel is split into sub-carriers, which are used in parallel, and a different sequence of symbols is transmitted over each sub-carrier. To avoid interference between subsequent symbols due to multipath effects, each symbol is preceded by a copy of its own tail, called the cyclic prefix. However, the length of the cyclic prefix is typically over-provisioned, resulting in wasted communication capacity.
The common decoding procedure is to discard the cyclic prefix and apply the FFT starting at the beginning of each symbol. Instead, the authors leverage the cyclic prefix to detect the best point at which to apply the FFT. The key observation is that the FFT can be applied at any point between the beginning of the cyclic prefix and the beginning of the symbol: the decoded symbol is the same, but the amount of interference in the final result varies considerably across starting points.
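The window-sliding observation can be checked with a small idealized sketch (a toy channel with no noise or multipath; `N`, `CP`, and the QPSK mapping are illustrative assumptions, not the paper's parameters): applying the FFT at any point inside the cyclic prefix recovers the same symbols, up to a known per-subcarrier phase ramp that a receiver can fold into channel equalization.

```python
import numpy as np

N, CP = 64, 16                               # subcarriers and CP length (assumed)
rng = np.random.default_rng(0)

# Random QPSK symbols, one per subcarrier
syms = (rng.choice([-1.0, 1.0], N) + 1j * rng.choice([-1.0, 1.0], N)) / np.sqrt(2)

time = np.fft.ifft(syms)                     # time-domain OFDM symbol
tx = np.concatenate([time[-CP:], time])      # prepend the cyclic prefix

k = np.arange(N)
recovered = []
for start in range(CP + 1):                  # any start inside the CP works
    decoded = np.fft.fft(tx[start:start + N])
    # Starting CP - start samples early looks like a circular shift of the
    # symbol, i.e. a per-subcarrier phase ramp; undoing it restores the symbols.
    recovered.append(decoded * np.exp(2j * np.pi * k * (CP - start) / N))
```

Every entry of `recovered` matches the transmitted symbols, which is exactly why the receiver is free to choose the window position.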
The resulting idea is simple: detect the best starting point for each decoding, i.e. the one that results in the least interference in the decoded output. This is done in two steps. First, an interference model is built from the preambles that are transmitted for channel estimation at the beginning of the communication. Then, the model is used to pick the starting point that maximizes the likelihood of the decoded symbol, i.e. the starting point whose output is least likely to be affected by interference. Since the outcome of the technique is an improved decoding process, it can serve two purposes: reducing adjacent-channel interference and reducing co-channel interference.
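The selection step can be illustrated with a hedged sketch. The paper scores candidate start points with an interference model learned from the preambles; here, as a stand-in, each candidate is scored by the decoded symbols' distance to the nearest QPSK constellation point (all names, the burst model, and the scoring rule below are assumptions for illustration only).

```python
import numpy as np

QPSK = np.array([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j]) / np.sqrt(2)

def interference_score(decoded):
    # Mean squared distance to the nearest constellation point: a crude
    # stand-in for the paper's preamble-trained interference model.
    d = np.abs(decoded[:, None] - QPSK[None, :])
    return float(np.mean(d.min(axis=1) ** 2))

def best_start(rx, N, CP):
    # Try every FFT window position inside the cyclic prefix and keep
    # the one whose decoded output looks least interfered.
    k = np.arange(N)
    scores = []
    for start in range(CP + 1):
        decoded = np.fft.fft(rx[start:start + N])
        decoded *= np.exp(2j * np.pi * k * (CP - start) / N)  # undo window shift
        scores.append(interference_score(decoded))
    return int(np.argmin(scores))

# Demo: an interference burst hits the first half of the cyclic prefix,
# so FFT windows that start later avoid it entirely.
rng = np.random.default_rng(1)
N, CP = 64, 16
syms = QPSK[rng.integers(0, 4, N)]
time = np.fft.ifft(syms)
rx = np.concatenate([time[-CP:], time])
rx[:CP // 2] += 0.5 * (rng.standard_normal(CP // 2) + 1j * rng.standard_normal(CP // 2))
chosen = best_start(rx, N, CP)
```

In this toy case the chosen start point lands past the corrupted samples, which is the effect the technique exploits.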
The authors have tested the technique both in simulation and in experiments. The results are promising: the technique reduces adjacent-channel interference by up to 25 dB and co-channel interference by up to 15 dB. Most importantly, the technique does not require any modification at the sender, allowing easy incremental deployment.
Q: What if the interference starts after the initially transmitted preambles? Since the interference model is based on them, this could be a problem.
A: It is unlikely that interference leaves the preambles used for channel estimation completely unaffected.
Q: Why is the interference model based only on the initial preambles?
A: The authors want to build the interference model first, and then use it during the actual data reception.
Q: The technique seems to rely on the timeliness of symbol transmission, while in reality each symbol might be transmitted a bit sooner or a bit later. How does this affect the results of the technique?
A: These timing differences are accounted for, to some extent, by the interference model.
Electromagnetic Polarization in a Two-Antenna Whiteboard in the Air
Longfei Shangguan (Princeton University)
Kyle Jamieson (Princeton University & University College London)
In the field of human-machine interaction, in-air writing is an innovative proposal. The most recent solutions for in-air writing are based on passive RFID tags that are remotely tracked. To perform tracking, a trade-off has to be struck between localization accuracy and cost: single-antenna solutions have high uncertainty in the output, while a larger number of antennas brings higher precision at higher cost.
Instead of trying to pinpoint each exact position of the RFID tag, the authors propose to focus on recovering the trajectory and the displacement of each movement. This approach reduces the number of antennas needed to two. For trajectory detection, the different polarization mismatch perceived at the two antennas is used. For displacement computation, the authors use the triangle inequality to estimate a feasibility region, i.e. the set of points to which the tag could have moved, and then combine it with the previously computed direction to estimate the true displacement.
The authors have tested the approach both on a whiteboard and in the air. The former has the advantage of "projecting" a 3D movement onto a 2D plane, i.e. influences on the movement from the third dimension are avoided. On the whiteboard, the technique correctly recognizes 15 of 26 letters with probability higher than 0.9 and exhibits an average recognition accuracy of 93.6%. The average recognition accuracy of in-air writing is somewhat lower (around 83%).
Q: Could this technique be combined with other localization techniques?
A: The technique might be combined with other approaches, but its key advantage is that it does not require the expensive antenna arrays that other localization solutions actually need.
Q: In the demo shown, the movements performed during tests are extremely slow. How does movement speed affect the results of the technique?
A: Currently, the system requires the movements to be slow in order to provide good accuracy.
Q: Why are some letters not correctly recognized?
A: Some letters (such as C and L) are difficult for the system to distinguish, as their differences are less pronounced.
Q: A possible extension of the work is the use of Wi-Fi to improve trajectory detection. However, the polarization mismatches on which the current system is based are more likely to appear in short-range communications than in long-range ones such as Wi-Fi. How is the solution supposed to deal with this issue?
A: The authors confirm that the Wi-Fi extension might be problematic in this respect.
Broadcast Throughput Under Ultra-Low-Power Constraints
Tingjun Chen (Columbia University)
Javad Ghaderi (Columbia University)
Dan Rubenstein (Columbia University)
Gil Zussman (Columbia University)
Networks of sensors able to harvest energy from the surrounding environment are becoming more and more popular. The sensors are also required to communicate with other sensors; nevertheless, the energy needed to transmit data is usually much greater than the energy that can actually be harvested. To extend the life of the sensors as much as possible, severe constraints on the transmission and reception of data have to be imposed.
The authors consider a scenario in which sensors are heterogeneous, i.e. they have different energy budgets and different energy consumptions, and have no prior knowledge of other sensors nearby. The authors model this scenario as a linear program whose aim is to maximize the global throughput. Finally, they use the results to design a protocol that leverages broadcast communication and alternation between transmission, listening, and idle states to maximize throughput while meeting the requirements dictated by the low-power constraints.
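A toy slotted simulation (not the authors' protocol; the probabilities, energy costs, and success rule below are all assumptions) illustrates the state-alternation idea: in each slot every node randomly transmits, listens, or sleeps, subject to its energy budget, and a broadcast succeeds when exactly one node transmits while at least one other listens.

```python
import random

random.seed(0)

N_NODES, SLOTS = 10, 20000
E_TX, E_RX = 5.0, 1.0            # energy per transmit / listen slot (assumed)
P_TX, P_RX = 0.1, 0.4            # per-slot transmit / listen probabilities

energy = [0.5 * SLOTS] * N_NODES  # heterogeneous budgets would vary per node
delivered = 0

for _ in range(SLOTS):
    tx, rx = [], []
    for i in range(N_NODES):
        r = random.random()
        if r < P_TX:
            if energy[i] >= E_TX:
                tx.append(i)
                energy[i] -= E_TX
        elif r < P_TX + P_RX:
            if energy[i] >= E_RX:
                rx.append(i)
                energy[i] -= E_RX
        # otherwise (or when the budget is exhausted) the node sleeps
    if len(tx) == 1 and rx:      # no collision and at least one listener
        delivered += len(rx)      # the broadcast reaches every listener

throughput = delivered / SLOTS   # packets delivered per slot
```

Tuning the transmit and listen probabilities plays the same role as the protocol's state-alternation parameters: too much transmission causes collisions, too much sleeping starves the listeners.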
Simulation results show that the parameter sigma determines the trade-off between throughput and discovery latency (the higher sigma, the lower the throughput, but also the lower the discovery latency). The protocol reaches between 67% and 81% of the analytical maximum throughput and performs 8x-11x better than the current state of the art (i.e. Panda).
Q: In mobile ad-hoc networks, an approach is to keep a backbone of sensors always active and have the rest of the network mostly sleeping. Can this approach be compared to the presented one?
A: The presented approach does not require any gateway node to communicate, and this is an appreciable feature.
Q: The functioning of the protocol is based on estimating the number of listeners for each transmission. How is this estimate obtained?
A: After each transmission, a sensor enters a short listening period to receive some kind of acknowledgement from the listeners. It can then use this count in the protocol.
Q: Low-power unicast solutions do exist. What is the advantage of broadcast transmission in this context?
A: Unicast solutions require the nodes to create and keep updated a routing table, which implies a non-negligible amount of additional energy.
Q: The analyzed scenario assumes that nodes can always transition between transmission, listening, and sleeping states. What if we consider a scenario with receive-only nodes and transmit-only nodes?
A: The protocol aims to coordinate the nodes such that a sensor will not wake up and start listening if no data is available, nor will it enter the transmission state if there is nothing to transmit. Moreover, once a node's energy runs low, the probability of transitioning to the transmission state drops significantly, i.e. the node is "locked" in the listening or sleeping state in order to preserve energy.