A.K.A. 'event-driven programming' or 'select()-based multiplexing', it's a solution to a network programming problem: How do I talk to a bunch of different network connections at once, all within one process/thread?
Let's say you're writing a database server that accepts requests via a TCP connection. If you expect to have many simultaneous requests coming in, you might look at the following options:

fork() is the Unix programmer's hammer. Because it's available, every problem looks like a nail. It's usually overkill.
With async, or 'event-driven' programming, you cooperatively schedule the cpu or other resources you wish to apply to each connection. How you do this really depends on the application - and it's not always possible or reasonable to try. But if you can capture the state of any one connection, and divide the work it will do into relatively small pieces, then this solution might work for you. If your connections do not require much (or any) state, then this is an ideal approach.
Pros:
Here's a good visual metaphor to help describe the advantages of
multiplexed asynchronous I/O. Picture your program as a person, with
a bucket in front of him, and a bucket behind him. The bucket in
front of him fills with water, and his job is to wait until the bucket
is full, and empty it into the bucket behind him. [which might have
yet another person behind it...] The bucket fills sporadically,
sometimes very quickly, and sometimes at just a trickle, but in
general your program sits there doing nothing most of the time.
Now what if your program needs to talk to more than one connection (or
file) at a time? Forking another process is the equivalent of bringing
in another person to handle each pair of buckets. The typical
server is written in just this style! A server may be handling 20
simultaneous clients, and in our metaphor that means a line of 20
people, sitting idle for 99% of the time, each waiting for his bucket
to fill!
The obvious solution to this is to have a single person walk up and
down the aisle of bucket pairs. When he comes to a bucket that's
full, he dumps it into the other side, and then moves on. By walking
up and down the aisle of buckets, one busy person does the job of 20
idle people.
The only time when this technique doesn't work well is when something other than just dumping one bucket into the next needs to be done - say, turning the water into gold first. If turning a bucket of water into a bucket of gold takes a long time, then the other buckets may not get processed in a timely fashion. This is what happens, for example, if your server program needs to crunch on the data it receives before responding.
Now how do we apply our bucket wisdom to network programming?
Using whatever mechanism your operating system provides, you can
register interest in the events that can happen to a socket:
(connection, ready to read, ready to write, closed, error conditions,
etc..) You then write a handler for each event type. This handler
will perform different tasks depending on your application. If a
connection has a need to keep state information, you'll probably end up
writing a state machine to handle transitions between different behaviors.
Diving back into the bucket [paradigm], these events might be the
equivalent of adding little "I'm full now" mailbox-like flags to the
buckets.
On Windows (using the Winsock API), this is done using the windows
sockets extension function WSAAsyncSelect()
. Calling
this method on a socket will tell Windows to send your application
a message for each possible socket event. The application's
WinMain()
function will then collect and dispatch this
message just like any other windows message.
On Unix, you have to do more of the work yourself. You need to keep a pool of socket descriptors, and be able to map them onto your connection objects. The main application will then consist of a single select() loop that waits for one or more sockets to become active. [Note: aren't there other techniques available on Unix for performing asynchronous I/O, using signals?] The loop will then look up the associated object, and dispatch the correct method on that object.
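To make that concrete, here is a rough sketch of what such a select() loop might look like. This is not code from any of the libraries described below; the socket_map dictionary and the readable/writable/handle_read_event/handle_write_event method names are assumptions made purely for illustration:

    import select

    # [sketch] socket_map maps operating-system file descriptors onto
    # connection objects.  Each object decides for itself whether it
    # currently wants to read or write, and knows what to do when
    # either becomes possible.

    def event_loop (socket_map, timeout=30.0):
        while socket_map:
            readers = []
            writers = []
            for fd, obj in socket_map.items():
                if obj.readable():
                    readers.append (fd)
                if obj.writable():
                    writers.append (fd)
            # wait until one or more sockets become active
            r, w, e = select.select (readers, writers, [], timeout)
            for fd in r:
                socket_map[fd].handle_read_event()
            for fd in w:
                socket_map[fd].handle_write_event()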
Well, lucky for you, there's a set of common code to make writing
these programs much easier. In fact, all you need to do is pick
from two connection styles, and plug in your own event handlers. As an added bonus, the differences between Windows and Unix socket multiplexing have been abstracted away: using the async base classes (asyncore.dispatcher and asynchat.async_chat), you can write asynchronous programs that will work on both Unix and Windows. [and I suspect the Mac, too]
The first class is the simpler one,
'asyncore.dispatcher'
. This class manages the
association between a socket descriptor
(which is how the
operating system refers to the socket) and your socket object.
dispatcher
is really a container for a system-level
socket, but it's been wrapped to look as much like a socket as
possible. The two main differences are the create_socket method, which builds the underlying socket and registers it with the event mechanism, and the go() method: calling asyncore.dispatcher.go() sets things in motion, and on Unix it will invoke the main select() loop if it is not already running.
There are six event-handling methods you can define: these are exactly
the events that Winsock supports on socket objects, and a superset of what you can detect with the Unix select() function (but it's possible to emulate the missing events on Unix, because they are implied).
handle_read: called whenever the socket has more data to be read, meaning that recv() can be called with an expectation of success.

handle_write: called whenever a socket is ready to be written to - a call to send() can be expected to succeed.

handle_oob: called when out-of-band data is present.

handle_accept: called when a new connection has been accepted on a listening socket.

handle_connect: called when an outgoing connect() has succeeded.

handle_close: called when the socket has closed.
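To see how these handlers fit together, here is a hedged sketch of a tiny server that accepts connections and simply throws away whatever it reads. The class names and port number are invented for this example, and the loop is started with asyncore.loop(), the entry point used by the asyncore module that ships with recent Python distributions; substitute go() if you're using the version of the library described in this article:

    import asyncore
    import socket

    # [sketch] a 'discard'-style server built on the dispatcher handlers.

    class discard_channel (asyncore.dispatcher):

        def handle_read (self):
            data = self.recv (512)      # recv() is expected to succeed here
            print 'threw away %d bytes' % len (data)

        def handle_close (self):
            print 'client went away'
            self.close()

    class discard_server (asyncore.dispatcher):

        def __init__ (self, port=9999):
            asyncore.dispatcher.__init__ (self)
            self.create_socket (socket.AF_INET, socket.SOCK_STREAM)
            self.set_reuse_addr()
            self.bind (('', port))
            self.listen (5)

        def handle_accept (self):
            conn, addr = self.accept()
            print 'incoming connection from %s' % repr (addr)
            discard_channel (conn)

    if __name__ == '__main__':
        discard_server()
        asyncore.loop()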
The second class, 'asynchat.async_chat', provides support for typical command/response protocols like SMTP, NNTP, FTP, etc... It helps solve several problems for you. The biggest is splitting the input stream at 'terminator' strings: a mail or news client, for example, might use '\r\n' and '\r\n.\r\n' as terminators, the latter being a common end-of-message delimiter.
You can change the current terminator by calling the
set_terminator
method. Incoming data is accumulated through calls to your own collect_incoming_data method.
When the terminator is located, the
found_terminator
method is called.
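Putting those three pieces together, here is a hedged sketch of a line-oriented channel. The class name and the found_line() hook are invented for this illustration; everything else uses the methods just described:

    import asyncore
    import asynchat

    # [sketch] input is collected until the '\r\n' terminator is seen;
    # found_terminator() then has a complete line in self.buffer.

    class line_channel (asynchat.async_chat):

        def __init__ (self, conn=None):
            asynchat.async_chat.__init__ (self, conn)
            self.buffer = ''
            self.set_terminator ('\r\n')

        def collect_incoming_data (self, data):
            # async_chat hands us the raw data, minus any terminator
            self.buffer = self.buffer + data

        def found_terminator (self):
            # a complete line has arrived
            line, self.buffer = self.buffer, ''
            self.found_line (line)

        def found_line (self, line):
            # override in a subclass to do something useful with the line
            print 'received:', repr (line)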
When you call send(), async_chat first wraps the data in a simple producer, called (strangely enough) 'simple_producer':
    class simple_producer:

        def __init__ (self, data):
            self.data = data

        def more (self):
            if len (self.data) > 512:
                result = self.data[:512]
                self.data = self.data[512:]
                return result
            else:
                result = self.data
                self.data = ''
                return result
Each producer must have a more()
method, which is called
whenever more output is needed. Note how the data is deliberately
sent in fixed-size chunks: If you create a
simple_producer
with a 15-Megabyte long string
(ghastly!), this will keep that one socket from monopolizing the
whole program. When the producer is exhausted, it returns an empty
string, like a file object signifying an end-of-file condition.
A producer can compute its output 'on-the-fly', if so desired. It can keep state information, too, like a file pointer, a database index, or a partial computation.
Each producer is filed into a queue (fifo), which is progressively emptied.
The more
method of the front-most element of the queue is
called until it is exhausted, and then the producer is popped off the queue.
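For example, here is a hedged sketch of a producer whose state is nothing more than an open file object; the class name and chunk size are invented for this illustration. Queueing one of these instead of a 15-Megabyte string means the file is read only as fast as the socket can absorb it:

    # [sketch] a producer that computes its output on-the-fly by
    # reading from a file.  Nothing is read until the fifo asks for
    # more output.

    class file_producer:

        def __init__ (self, file, chunk_size=512):
            self.file = file
            self.chunk_size = chunk_size

        def more (self):
            data = self.file.read (self.chunk_size)
            if not data:
                # an empty string marks this producer as exhausted
                self.file.close()
            return data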
The combination of delimiting the input and scheduling the output
with a fifo allows you to design a server that will correctly handle
an impatient client. For example, some NNTP clients send a barrage of
commands to the server, and then count out the responses as they are
made (rather than sending a command, waiting for a response, etc...).
If a call to recv()
reveals a buffer
full of these impatient commands, async_chat will handle the situation
correctly, calling collect_incoming_data
and
found_terminator
in sequence for each command.
    import socket
    import asyncore
    import string

    # simple demo of the asyncore dispatcher class.

    class finger_client (asyncore.dispatcher_with_send):

        def __init__ (self, account, done_fun, long=1):
            self.name, self.host = tuple(string.splitfields (account, '@'))
            self.done_fun = done_fun
            self.data = ''
            self.long = long
            self.create_socket (socket.AF_INET, socket.SOCK_STREAM)
            asyncore.dispatcher.__init__ (self)
done_fun
is a function that will be called when the
finger server has sent all the data and closed the connection. [this
programming style - passing functions around that represent an
execution path - is called continuation-passing]
The call to create_socket
will register the socket with
the underlying event mechanism, enabling the following callback
procedures.
        def go (self):
            self.connect (self.host, 79)
            asyncore.dispatcher_with_send.go(self)

The asyncore.dispatcher.go() method will kick off the select() loop in Unix, if it's not already running.
        # once connected, send the account name
        def handle_connect (self):
            self.log ('connected')
            if self.long:
                # this requests 'long' output.
                self.send ('/w %s\r\n' % self.name)
            else:
                self.send ('%s\r\n' % self.name)

This function is called when the socket has made a connection. This tells us that we can now send the finger request.
        # collect some more finger server output.
        def handle_read (self):
            more = self.recv (512)
            if not more:
                self.handle_close()
            self.data = self.data + more

When data is available for reading on the socket, this callback will collect it into the member variable self.data.
        # the other side closed, we're done.
        def handle_close (self):
            self.done_fun (self.data)
            self.del_channel()

Now that we're all done, call the user's done_fun with the finger data as an argument.
    def demo_done_fun (stuff):
        print stuff

    def demo (who='asynfingdemo@squirl.nightmare.com'):
        f = finger_client (who, demo_done_fun, long=0)
        f.go()

[Go ahead and try this one, I'm counting my readership with a python script]
'pop3demo.py'
demonstrates using
async_chat
to control a state-machine client
program. The states are represented by a series of
emit_xxx
, expect_xxx
method pairs.
'emit_user'
, for example, sends a pop3
USER
command, and 'expect_user'
processes
the response from that command. The demo program will log into
a pop3 mailbox, retrieve all the messages (optionally deleting
them), and apply a message processor function to each
message in turn.
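The general shape of that state machine looks roughly like the following. This is a simplified sketch, not the actual pop3demo.py code: the dispatching trick, the class name, and the method bodies are invented here, and push() is the name the standard-library asynchat uses for queueing output (the version described in this article wraps send() instead):

    import asynchat

    # [sketch] the emit_xxx/expect_xxx pattern, much simplified.
    # self.state names the expect_ method that will handle the next
    # complete response line delivered by found_terminator().

    class pop3_login_sketch (asynchat.async_chat):

        def __init__ (self, conn, user, password):
            asynchat.async_chat.__init__ (self, conn)
            self.user = user
            self.password = password
            self.buffer = ''
            self.set_terminator ('\r\n')
            self.state = 'expect_greeting'

        def collect_incoming_data (self, data):
            self.buffer = self.buffer + data

        def found_terminator (self):
            line, self.buffer = self.buffer, ''
            # hand the response line to whichever expect_ method is current
            getattr (self, self.state) (line)

        def expect_greeting (self, line):
            self.emit_user()

        def emit_user (self):
            self.push ('USER %s\r\n' % self.user)
            self.state = 'expect_user'

        def expect_user (self, line):
            self.emit_pass()

        def emit_pass (self):
            self.push ('PASS %s\r\n' % self.password)
            self.state = 'expect_pass'

        def expect_pass (self, line):
            print 'logged in:', line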
'asynhttp.py'
is very similar to the finger client
described above. It's an
asyncore.dispatcher
-based http client. It's not
much of an http client, though: it neither sends nor processes
HTTP headers - it merely sends a 'GET'
command and
collects the output.
'servhttp.py'
is a more complete example. It's
a bare-bones asynchronous http server, [the only one I've ever
heard of], supporting only the
'GET'
command and capable only of delivering
files. Just like the HTTP server that is now part of the
python distribution, though, it is easily extensible. What
makes this server interesting is its performance: In a bit of
informal testing against apache 0.6.2
, it seems to
be able to handle a substantially higher number of hits, with a
much lower load on the machine. See the file
'abuse.py'
for more information and timings.
Using async_chat, I have built a fully RFC977-compliant (plus all common extensions) NNTP server, and several other sophisticated NNTP retrieval/filter engines. (If you'd like to see some of these as examples, drop me a note).
Some conceptual cleanup work is still necessary; this code is the nth child of several generations of similar modules, each written by a progressively more enlightened mind. 8^)
The not-very-round-robin scheduler in unix/asyncore.py should probably keep a pointer into the list of sockets, or at least randomly pick a starting point.
Eventually I'd like to see a more general 'continuation' facility, which might for example allow you to interlace 'non-output' events in a producer queue, or write out state machines in a more intuitive fashion. When writing a really complex beast I sometimes get the feeling I'm writing in Fortran. [Note: some progress in this direction has been made - using "consumer fifo's" and some fancy footwork with functions. See the files 'consumer.py', and 'demo/pop3_2.py' for examples of use]