HTTP is a request-response protocol, and this tends to dictate the usage pattern of HTTP client libraries, which goes like this:

1. Send a request.
2. Wait for the response and process it.
3. Repeat.
If your application only makes an occasional request, or it is necessary to have the response before sending the next request, this is a reasonable pattern. If, however, you just want to send 1000 GETs to download documents from a site, or PUT 1000 files to a server, waiting for and processing the response in between requests is an inefficient use of the HTTP connection.
Persistent connections are the default in HTTP 1.1, and the RFC mentions that they enable another usage pattern:
A client that supports persistent connections MAY "pipeline" its requests (i.e., send multiple requests without waiting for each response).
One way of doing this would be to send off a stream of requests, and then read all of the responses afterwards. The problem with this is that the buffers at each end of the connection are finite: if the client writes all of its requests without reading any responses, the server may block while writing its responses, and the connection can deadlock.
A more efficient solution is to send the requests while simultaneously processing the responses. This has the effect of giving maximum utilisation of the HTTP connection, and so eliminating the cumulative round trip times.
In practice this means sending the requests on one thread, while reading the responses on another. This requires an event-driven model, in which a callback provided by the requester is called each time a response has been received.
Using pipelining introduces some design considerations that do not apply to the request-response pattern:

- Matching responses to requests
- Handling failure
- Sharing state between the writing and reading threads
Matching responses to requests turns out to be easy, because, as the RFC goes on to say:
A server MUST send its responses to those requests in the same order that the requests were received.
What this means in practice is that you can keep a queue of requests sent, and, when you receive a response, know that it is for the request at the head of the queue.
Handling failure is slightly more complicated in a pipelined scenario because failure may not be detected until some time after the request has been sent, and in a different place in code (the event handler).
There are three ways in which a request can fail:

1. The connection is closed before the request has been sent.
2. The server returns an error response.
3. The connection is closed after the request has been sent, but before its response has been received.
To deal with the first case, the requester can make a queue of the requests it is going to send in advance. Requests are then removed from the queue after sending, so any requests left in the queue on disconnection have not been sent.
The second case usually requires sending a modified request. The new request may be added to the queue to be sent for this session, or may be sent in another session.
In the third case, it is not known whether the request was received by the server or not. It is easy to find out which requests these are, as they are left in the queue of sent requests, no corresponding response having been received. The simplest solution is to send the same request again. This should only be done, however, with idempotent methods (GET, HEAD, PUT, DELETE), because if the first request did in fact succeed, repeating an idempotent request has no further effect.
At the very least, the reading thread needs to be able to retrieve the request corresponding to the response it has received, and this request needs to have been stored by the writing thread. Since the reader and writer are different threads, the queue of sent requests needs to be protected by a mutex. This applies to any other information shared between the writing and reading threads.
This is a pipelined client written in C++ for Linux.
The HttpClient class encapsulates a connection to a Web host. It is passed an object of a class derived from HttpRequester, which has overridden virtual functions to send requests and process the responses.
Below is a sample program that uses an HttpClient to PUT all of the files in a directory to a CGI program on a Web server, using a subclass of HttpRequester. The client's Run method begins by connecting to the server, and finishes when the requester or server closes the connection. If there is more work to be done, Run can be called again with the same, or a different, requester.
The HttpRequester base class has three virtual functions; the first two must be implemented, and the third is optional:
The Run method sends the requests. Sending a request takes the form of calling the following methods on the HttpClient, which is passed as an argument:
StartRequest takes a verb such as PUT, and a URL, and sends the start line of the request and the Host header.
SendHeader takes a header name and value and sends them in the name: value format.
EndHeaders sends the blank line that marks the end of the headers.
The Write method is used to send raw bytes, such as when sending an entity body.
It is the requester's responsibility to send the correct Content-Length header before the body.
As well as sending the requests, the Run method can be used to store any information about them that might be needed when processing the responses.
HandleResponse is called every time a response is received.
It is given an HttpStatus object containing the information in the status line.
Here is an example of HandleResponse displaying the response information and adding failed requests to a vector:
HandleFinish is called after the connection has been closed.
The main reason to implement HandleFinish is to retrieve the queue of requests for which responses have not been received; this queue is passed to HandleFinish as an argument.
In addition to the client, I've written an example program that PUTs all of the files in a specified directory to a server. The response handler writes the response to standard output.
I have also written a simple CGI that allows PUTs and stores the files, and will also allow GETs to view them.
You can browse the source code here:
Here is a compressed tar archive containing the source code and a makefile:
Copyright (C) 2010 Martin Broadhurst