@@ -21,7 +21,7 @@
CPU-bound, then pre-emptively scheduled threads are probably what
you really need. Network servers are rarely CPU-bound, however.
</p>
-
+
<p>
If your operating system supports the <code>select()</code>
system call in its I/O library (and nearly all do), then you can
@@ -34,7 +34,7 @@
of building sophisticated high-performance network servers and
clients a snap.
</p>
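<p>
To give you the flavor, here is a minimal sketch of <code>select()</code>
at work (a sketch only, not from the original listing;
<code>sockets</code> stands in for whatever connected socket objects
you are juggling):
</p>

<pre>
import select

# wait up to five seconds for any of 'sockets' to become readable;
# 'sockets' is a hypothetical list of connected socket objects
r, w, e = select.select (sockets, [], [], 5.0)

for s in r:
    data = s.recv (8192)    # select() said this won't block
</pre>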
-
+
<h3>Select-based multiplexing in the real world</h3>
<p>
@@ -52,7 +52,7 @@
An interesting web server comparison chart is available at the
<a href="http://www.acme.com/software/thttpd/benchmarks.html">thttpd web site</a>.
</p>
-
+
<h3>Variations on a Theme: poll() and WaitForMultipleObjects</h3>
<p>
Of similar (but better) design is the <code>poll()</code> system
@@ -109,7 +109,7 @@
also keeps the low-level interface as simple as possible -
always a good thing in my book.
</p>
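<p>
For comparison, a sketch of the <code>poll()</code> flavor (again
hypothetical; the <code>sockets</code> list is assumed):
</p>

<pre>
import select

p = select.poll ()
for s in sockets:
    # only registered descriptors are scanned, which is why poll()
    # behaves better than select() as the number of connections grows
    p.register (s, select.POLLIN)

for fd, event in p.poll (5000):     # timeout in milliseconds
    print 'event %x on descriptor %d' % (event, fd)
</pre>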
-
+
<h3>The polling loop</h3>
<p>
Now that you know what <code>select()</code> does, you're ready
@@ -135,7 +135,7 @@ while (any_descriptors_left):
the functions poll() and loop()). Now, on to the magic that must
take place to handle the events...
</p>
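<p>
In outline, one pass of that loop looks something like this (a
simplified sketch of what <code>asyncore.poll()</code> does, not its
exact code):
</p>

<pre>
import select

def poll_once (socket_map, timeout=30.0):
    # ask each channel whether it wants read and/or write events
    r, w = [], []
    for fd, channel in socket_map.items ():
        if channel.readable ():
            r.append (fd)
        if channel.writable ():
            w.append (fd)
    r, w, e = select.select (r, w, [], timeout)
    # hand each event back to the channel that asked for it
    for fd in r:
        socket_map[fd].handle_read_event ()
    for fd in w:
        socket_map[fd].handle_write_event ()
</pre>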
-
+
<h2>The Code</h2>
<h3>Blocking vs. Non-Blocking</h3>
<p>
@@ -233,7 +233,7 @@ while (any_descriptors_left):
demonstrates how easy it is to build a powerful tool in only a few
lines of code.
</p>
-
+
<pre>
<font color="800000"># -*- Mode: Python; tab-width: 4 -*-</font>
@@ -265,7 +265,7 @@ while (any_descriptors_left):
<font color="808000">for</font> url <font color="808000">in</font> sys.argv[1:]:
parts = urlparse.urlparse (url)
<font color="808000">if</font> parts[0] != <font color="008000">'http'</font>:
- <font color="808000">raise</font> ValueError, <font color="008000">"HTTP URL's only, please"</font>
+ <font color="808000">raise</font> ValueError(<font color="008000">"HTTP URL's only, please"</font>)
<font color="808000">else</font>:
host = parts[1]
path = parts[2]
@@ -300,7 +300,7 @@ while (any_descriptors_left):
<p><font color="006000"><code>$ python asynhttp.py http://www.nightmare.com/</code></font>
<p>You should see something like this:
<p>
-
+
<pre>
[rushing@gnome demo]$ python asynhttp.py http://www.nightmare.com/
log: adding channel <http_client at 80ef3e8>
@@ -347,7 +347,7 @@ log: closing channel 4:<http_client connected at 80ef3e8>
print 'read', r
print 'write', w
[...]
-</pre>
+</pre>
<p>
Each time through the loop you will see which channels have fired
@@ -390,14 +390,14 @@ log: closing channel 4:<http_client connected at 80ef3e8>
<font color="808000">def</font><font color="000080"> handle_write</font> (self):
sent = self.send (self.buffer)
self.buffer = self.buffer[sent:]
-</pre>
+</pre>
<p>
The <code>handle_connect</code> method no longer assumes it can
send its request string successfully. We move its work over to
<code>handle_write</code>, which trims <code>self.buffer</code>
as pieces of it are sent successfully.
-
+
<p>
We also introduce the <code>writable</code> method. Each time
through the loop, the set of sockets is scanned, the
@@ -413,7 +413,7 @@ log: closing channel 4:<http_client connected at 80ef3e8>
If you try the client now (with the print statements in
<code>asyncore.poll()</code>), you'll see that
<code>select</code> is firing more efficiently.
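<p>
The two methods this change hinges on are tiny; roughly (a sketch of
the revised class described above, not a verbatim quote of it):
</p>

<pre>
def handle_connect (self):
    # nothing to do here any more; handle_write sends the request
    pass

def writable (self):
    # only ask for write events while there is data left to send
    return (len(self.buffer) > 0)
</pre>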
-
+
<h3>asynchat.py</h3>
<p>
The dispatcher class is useful, but somewhat limited in
@@ -443,11 +443,11 @@ log: closing channel 4:<http_client connected at 80ef3e8>
<br>Called whenever data is available from
a socket. Usually, your implementation will accumulate this
data into a buffer of some kind.
-
+
<li><code>found_terminator (self)</code>
<br>Called whenever an end-of-line marker has been seen. Typically
your code will process and clear the input buffer.
-
+
<li><code>push (data)</code>
<br>This is a buffered version of <code>send</code>. It will place
the data in an outgoing buffer.
@@ -460,7 +460,7 @@ log: closing channel 4:<http_client connected at 80ef3e8>
<code>handle_read</code> collects data into an input buffer, which
is continually scanned for the terminator string. Data in between
terminators is fed to your <code>collect_incoming_data</code> method.
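<p>
Pulling those pieces together, a minimal <code>async_chat</code>
subclass might look like this (a sketch for illustration, not code
from the article):
</p>

<pre>
import asynchat

class line_channel (asynchat.async_chat):

    def __init__ (self, conn=None):
        asynchat.async_chat.__init__ (self, conn)
        self.buffer = ''
        self.set_terminator ('\r\n')

    def collect_incoming_data (self, data):
        # accumulate data until a terminator shows up
        self.buffer = self.buffer + data

    def found_terminator (self):
        # a complete line has arrived; process it and clear the buffer
        line, self.buffer = self.buffer, ''
        self.push ('you said: %s\r\n' % line)
</pre>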
-
+
<p>
The implementations of <code>handle_write</code> and <code>writable</code>
examine an outgoing-data queue, and automatically send data whenever
@@ -482,7 +482,7 @@ log: closing channel 4:<http_client connected at 80ef3e8>
<font color="808000">import</font> string
<font color="808000">class</font><font color="000080"> proxy_server</font> (asyncore.dispatcher):
-
+
<font color="808000">def</font><font color="000080"> __init__</font> (self, host, port):
asyncore.dispatcher.__init__ (self)
self.create_socket (socket.AF_INET, socket.SOCK_STREAM)
@@ -538,7 +538,7 @@ log: closing channel 4:<http_client connected at 80ef3e8>
<font color="808000">def</font><font color="000080"> collect_incoming_data</font> (self, data):
self.buffer = self.buffer + data
-
+
<font color="808000">def</font><font color="000080"> found_terminator</font> (self):
data = self.buffer
self.buffer = <font color="008000">''</font>
@@ -595,7 +595,7 @@ python proxy.py localhost 25
time for each command is long. You'd like to be able to send a
bunch of <code>RCPT</code> commands in one batch, and then count
off the responses to them as they come.
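<p>
In <code>async_chat</code> terms this is just a matter of pushing the
whole batch before reading anything back; a sketch (the
<code>recipients</code> list and the enclosing class are assumed):
</p>

<pre>
# queue a batch of RCPT commands without waiting for replies;
# 'recipients' and the surrounding async_chat subclass are assumed
for rcpt in recipients:
    self.push ('RCPT TO:&lt;%s&gt;\r\n' % rcpt)
# the replies come back in order, so found_terminator() can simply
# count them off against the number of commands sent
</pre>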
-
+
<p>
I have a favorite visual when explaining the advantages of
pipelining. Imagine each request to the server is a boxcar on a
@@ -620,7 +620,7 @@ python proxy.py localhost 25
interested in the gory details.
<h3>Producers</h3>
-
+
<p>
<code>async_chat</code> supports a sophisticated output
buffering model, using a queue of data-producing objects. For
|