Wednesday, May 15, 2013

REST for Constrained Environments: Part Two

In the first half of this two-part post, HTTP was analysed and evaluated for its fitness for purpose in providing REST services in a constrained environment such as a wireless sensor node. In this half of the post, the Constrained Application Protocol (CoAP) will be introduced and evaluated in a similar way.

CoAP is a protocol intended for very simple electronic devices, allowing them to communicate interactively over the Internet. It particularly targets small low-power sensors, switches, valves and similar components that need to be controlled or supervised remotely over standard Internet networks.

The first problem it tackles is the overhead and complexity of TCP: CoAP operates entirely over UDP and implements an optional acknowledgement system. Messages can therefore be transmitted on a best-effort basis; under network congestion, a sensor reading is simply lost rather than adding retransmission attempts to an already congested network. For most sensor network applications not related to security, eventual success is good enough.

The second advantage of CoAP over HTTP is the smaller amount of data to be transmitted. TCP requires a three-way handshake before any communication can begin; with CoAP this is unnecessary, as data can be sent in the first packet.

In wireless sensing applications such as this, the client may not require a reply, so it does not need to store any state about connections and can simply discard incoming packets.

The amount of data to be transmitted is further reduced by using a binary header rather than the ASCII headers used by HTTP. Far more information can be conveyed while the radio is powered for a shorter time.
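To illustrate how compact the binary encoding is, the sketch below builds the fixed 4-byte CoAP header (2-bit version, 2-bit type, 4-bit token length, then an 8-bit code and a 16-bit message ID). The particular values (a non-confirmable GET with message ID 0x1234) are illustrative.

```python
import struct

VERSION = 1
TYPE_NON = 1       # non-confirmable message
TOKEN_LENGTH = 0
CODE_GET = 0x01    # method code 0.01 = GET
MESSAGE_ID = 0x1234

# Pack version, type and token length into the first byte, then append
# the code byte and the 16-bit message ID in network byte order.
first_byte = (VERSION << 6) | (TYPE_NON << 4) | TOKEN_LENGTH
header = struct.pack("!BBH", first_byte, CODE_GET, MESSAGE_ID)

print(len(header))    # 4 bytes, versus dozens of ASCII bytes for an HTTP request line
print(header.hex())   # 50011234
```

Four bytes carry what HTTP expresses with a full ASCII request line, so the radio can be powered for a correspondingly shorter burst.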

The third advantage of CoAP over HTTP is support for multicast, which allows sensor nodes to send their updates to a multicast group rather than to a single server. A server can simply listen on a multicast group and discover nodes automatically, without clients needing prior knowledge of the server.
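The server side of this might look like the sketch below, assuming the "All CoAP Nodes" IPv4 group (224.0.1.187) and default port 5683 from the CoAP specification. The server joins the group and receives updates from any node that sends to it, with no unicast addressing required.

```python
import socket
import struct

COAP_MULTICAST_GROUP = "224.0.1.187"   # "All CoAP Nodes" IPv4 group
COAP_PORT = 5683                       # CoAP default port

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
sock.bind(("", COAP_PORT))

# Ask the kernel to deliver datagrams addressed to the multicast group;
# INADDR_ANY lets it choose a multicast-capable interface.
mreq = struct.pack("4s4s", socket.inet_aton(COAP_MULTICAST_GROUP),
                   socket.inet_aton("0.0.0.0"))
try:
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)
except OSError:
    pass  # host has no multicast-capable interface available

# Receive loop (one datagram shown):
# datagram, sender = sock.recvfrom(1152)  # CoAP's default message size bound
```

Nodes then address their updates to the group rather than to any individual server, so adding or replacing a server requires no reconfiguration of the nodes.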

From the analysis in the first half of this post of how HTTP minimum packet sizes have grown as the protocol evolved, it is clear that HTTP is becoming less and less suitable for the constrained environments found in wireless sensor nodes. It was possible to implement an older version of the protocol in order to submit observations with smaller packet payloads, but this is not a long-term solution: support for older versions of the protocol is likely to be removed from HTTP servers at some point. Using a protocol that is no longer supported would introduce the same problems as the use of proprietary standards: a lack of available libraries, a lack of documentation and support, and a lack of interoperability.

CoAP is still in active development by the CoRE working group in the IETF Applications Area, although there have been only slight changes between the last two drafts. CoAP is designed to emulate the REST features of HTTP in a way that is friendlier to constrained environments.

The problem with both HTTP and CoAP is that they are based on the request-response model. Constrained platforms may not require a response, and implementations may ignore responses completely. With HTTP, the wireless sensor must receive the packet containing the response before the TCP connection can be closed and its associated state discarded. This is really a problem with TCP, but as HTTP requires TCP to function it inherits it.

With CoAP over UDP, it is possible to disable the reliability features and store no state about connections. This means that any packets received can simply be discarded and nothing is lost if the radio is disabled immediately after sending the request. Still, it would be considered bad practice to send data across a network when there was never any intention for it to be received by the node it was addressed to.
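A fire-and-forget submission along these lines can be sketched as below: a non-confirmable (NON) CoAP POST carrying one sensor reading, after which the socket is released and any response would simply be dropped. The resource path "obs", the payload format and the destination address are illustrative, not part of any real deployment.

```python
import socket

header = bytes([0x50, 0x02, 0x00, 0x01])  # ver 1, type NON, no token, POST, msg ID 1
uri_path = bytes([0xB0 | 3]) + b"obs"     # option 11 (Uri-Path), value length 3
payload = b"\xff" + b"t=21.5"             # payload marker, then the reading
message = header + uri_path + payload

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.sendto(message, ("127.0.0.1", 5683))  # placeholder for the logger's address
sock.close()  # no connection state kept; the radio could be powered down here

print(len(message))  # 15 bytes before the UDP/IP headers are added
```

The entire observation fits in a single 15-byte datagram, and the node keeps no state whatsoever once the packet has left the radio.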

Given that CoAP allows for these non-confirmable messages, a server could be configured to not send a response to the request. This would not break the CoAP specification as, for a non-confirmable request, no response is guaranteed. This would however require the non-standard configuration of a server which could break its interoperability with other platforms also depending on it.

The CoAP stack is clearly a better fit than the HTTP stack for wireless sensor networks, but it still has shortcomings, mainly because it aims to maintain interoperability with HTTP systems and so inherits the same models. Where power availability is not constrained, CoAP would be useful for tasks such as pushing configuration updates to nodes and receiving confirmations of completion using a multicast request, or having a smart light switch turn off all the lights in a house. For simply submitting sensor observations, however, there is room for a more lightweight protocol.

REST for Constrained Environments: Part One

Platforms such as wireless sensor nodes are typically quite constrained in the amount of processing power, memory and battery power available. Whilst 6LoWPAN has enabled these devices to have IPv6 connectivity, the protocols that are typically deployed on top of IPv6 may not be well suited and either the protocol needs to be modified to allow for this use case or a new protocol needs to be developed.

This two-part post will look at two protocols, HTTP and CoAP, for providing REST services in constrained environments. The REST architectural style was developed by Roy Fielding in parallel with HTTP/1.1, based on the existing design of HTTP/1.0. The World Wide Web represents the largest implementation of a system conforming to the REST architectural style.

REST-style architectures conventionally consist of clients and servers. Clients initiate requests to servers; servers process requests and return appropriate responses. Requests and responses are built around the transfer of representations of resources. A resource can be essentially any coherent and meaningful concept that may be addressed. A representation of a resource is typically a document that captures the current or intended state of a resource.

A wireless sensor node may use a REST service to send sensor observation data to a data logger or to request configuration updates from a server.

We begin by looking at the evolution of HTTP. All HTTP versions are plain-text based protocols, i.e. no compression is performed and only printable characters are used.

HTTP version 0.9, still supported by modern web servers including the Apache HTTP server, has a simple request protocol which consists of opening a TCP connection and sending a plain-text string of the form GET /url followed by a carriage return and a line feed. Requests do not support headers and only the GET method is available. Responses carry no headers either, so a zero-length response simply results in the connection being closed. Parameters such as a sensor ID and a sensor reading can be attached to the URL as query parameters to be processed by the server.

HTTP version 1.0, again still supported by modern web servers including the Apache HTTP server, has a more complex request protocol. Once the TCP stream is opened, a plain-text string of the form METHOD /url HTTP/1.0 followed by a carriage return and line feed must be sent. The version number added to the request informs the server that the newer version of the protocol is in use. As HTTP/1.0 supports headers, a second carriage return and line feed must follow to indicate the end of the headers section. There are no required headers in HTTP/1.0.

HTTP version 1.1, currently the most widely deployed version of HTTP, adds further complexity by requiring a "Host" header in all requests. HTTP versions 1.0 and 1.1 also support the POST method, where the parameters are passed in the request body, which requires the additional "Content-Type" and "Content-Length" headers.
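As an illustration, a minimal HTTP/1.1 POST submitting a reading might look as follows; the hostname, parameter names and values are illustrative, and each line ends with a carriage return and line feed:

```
POST / HTTP/1.1
Host: example.org
Content-Type: application/x-www-form-urlencoded
Content-Length: 23

id=A1B2C3D4&value=21.50
```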

The minimum request size for each protocol, assuming an 8-character sensor ID, a 5-character sensor value, an 11-character hostname and the update script being at the root of the server, is shown in the table below:

Minimum request sizes for GET and POST requests for HTTP versions 0.9, 1.0 and 1.1
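These minimums can be reconstructed by building each request string under the stated assumptions; the sketch below does so with illustrative parameter names (the exact byte counts depend on the names chosen).

```python
SENSOR_ID = "A1B2C3D4"   # 8 characters
VALUE = "21.50"          # 5 characters
HOST = "example.org"     # 11 characters
QUERY = f"id={SENSOR_ID}&value={VALUE}"

# Minimal well-formed request per protocol version, CRLF line endings.
requests = {
    "HTTP/0.9 GET": f"GET /?{QUERY}\r\n",
    "HTTP/1.0 GET": f"GET /?{QUERY} HTTP/1.0\r\n\r\n",
    "HTTP/1.1 GET": f"GET /?{QUERY} HTTP/1.1\r\nHost: {HOST}\r\n\r\n",
    "HTTP/1.1 POST": (
        f"POST / HTTP/1.1\r\nHost: {HOST}\r\n"
        "Content-Type: application/x-www-form-urlencoded\r\n"
        f"Content-Length: {len(QUERY)}\r\n\r\n{QUERY}"
    ),
}

for name, request in requests.items():
    print(f"{name}: {len(request)} bytes")
```

The trend is clear: each newer version adds mandatory framing that inflates even the smallest possible request.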

One of the stated uses for the POST method in the HTTP/1.0 specification is "extending a database through an append operation", which is exactly what submitting a sensor result should do. GET requests are meant only for retrieving information, whether from static resources or from data-generating processes, and should not be used to perform updates or changes. As a result, under versions 1.0 and 1.1 of the HTTP specification, using a GET request to submit sensor readings would break the intended use laid out in the specification.

Zero-length responses from the server with versions 1.0 and 1.1 would in fact have the server respond with an HTTP status code of "204 No Content" and a number of headers (the 204 status code itself forbids a response body). Even this header-only response is likely to be comparable in size to the request. As the connection already uses TCP, the sensor node gains nothing from the receipt of a response from the server.

Following this analysis of the HTTP versions, it seems that as the HTTP protocol evolves, more and more complexity is being added, increasing the number of bytes that must be carried over the constrained low-power, low-bitrate networks that wireless sensor platforms use.

The SPDY/2 and SPDY/3 protocols, whilst in current use, have not been considered, for the same reason that HTTP over SSL and HTTP over TLS have not: the overhead of encryption and key generation would place a considerable load on the limited processing power of constrained devices. Encryption, if necessary, should be implemented at the MAC layer, where it can be performed in hardware rather than software, with less overhead and reduced power consumption.

In part two of this post, the new Constrained Application Protocol (CoAP) will be introduced and analysed for its fitness for purpose in constrained environments such as wireless sensor nodes.

Monday, May 13, 2013

CFP || CoNEXT 2013 Hot Middlebox Workshop || Deadline: August 30th 

Please consider submitting 6-page papers to this upcoming workshop on topics covering middleboxes, network appliances and network function virtualization. The full CFP is at 

Modern networks increasingly rely on advanced network processing functions for a wide spectrum of crucial functions ranging from security (firewalls, IDSes, traffic scrubbers), traffic shaping (rate limiters, load balancers), dealing with address space exhaustion (NATs) or improving the performance of network applications (traffic accelerators, caches, proxies), to name a few.  Such “network appliances” or “middleboxes” are a critical piece of the network infrastructure and represent, to a first-order approximation, the de-facto approach for network evolution in response to changing performance, security, and policy compliance requirements.

However, most of this functionality is implemented in costly, hard-to-modify dedicated hardware, making the network difficult to evolve or adapt to changing traffic requirements. Recent work seeks to address this issue by shifting network processing from a world of dedicated hardware to one built on software-based processing running on (sometimes virtualized and shared) platforms built on commodity hardware servers, switches, and storage. This vision of “software-based” network services enables new in-network functions to be rapidly instantiated, on-demand, and at places in the network where it is most needed, without having to modify the underlying hardware. The scope of this workshop focuses both on the design of the data plane to support advanced services as well as the control plane functions necessary to manage these advanced data plane functions. In some sense, this vision is complementary to ongoing efforts in the SDN community, where the focus has largely been on the control plane and assuming a commodity data plane.