Metalinguistic Abstraction

Computer Languages, Programming, and Free Software

Django file and stream serving performance Gotcha

Recently I’ve been doing a little bit of work with the Django web framework for Python. Part of this project involves streaming reasonably large binary files to and from the server. There is currently a patch in trac (#2070) slated for acceptance, so I applied it and tried copying some files in and out through the web server. I have some problems with the particulars of this patch and intend to air my complaints, but that’s for another post. What I discovered was an annoying performance gotcha in simply reading back binary files to serve to the user.

The gotcha is simple to expose:

In a Django view, use the documented functionality of passing a file-like object to the response object, preferably a big, binary one. So you do something like this:

return HttpResponse(open('/path/to/big/file.bin', 'rb'))

And then you surf on over to localhost and try grabbing this file. Your hard drive whirs and your CPU usage sits at 100% while the file is served slowly. Most people then rationalize it away, saying “well, of course, Python is slow, so it makes sense that it would suck at this. Set up a dedicated static-file server written in C and use some URL routing incantations.”

The crucial information that I had to dig for is how Django emits bytes to users. Django calls iter() on the input object and then uses calls to .next() to grab more bytes to write out to the stream. Once you factor in that the default iter() behavior for an open file in Python is to read lines, you realize that there’s just an enormous amount of time and unnecessarily evil buffering going on just to emit chunks of the file separated by (in the case of binary files) completely arbitrarily spaced newline bytes. The result is lots of heap abuse as well as lots of burned CPU time looking for these needles in the haystack.
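
You can see the default behavior in isolation with a tiny demonstration (the path here is just a stand-in for any large binary file):

f = open('/tmp/example.bin', 'rb')
it = iter(f)        # a file object is its own line-based iterator
chunk = it.next()   # scans forward for the next '\n' byte
print len(chunk)    # could be 1 byte or hundreds of megabytes
f.close()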

The hack to address this is very simple: we write a tiny iterator wrapper that simply uses the read(size) call. It can look something like this:

class FileIterWrapper(object):
  """Iterate over a file-like object in fixed-size chunks using read()."""
  def __init__(self, flo, chunk_size = 1024**2):
    self.flo = flo
    self.chunk_size = chunk_size

  def next(self):
    # Grab a fixed-size chunk; no scanning for newline bytes.
    data = self.flo.read(self.chunk_size)
    if data:
      return data
    else:
      raise StopIteration

  def __iter__(self):
    return self

1024 ** 2 bytes is one megabyte per chunk. When using this iterator the logic is simple, and the result is that Python consumes very little CPU time and memory to rip through a file stream. It can be applied to the previous example like so:

return HttpResponse(FileIterWrapper(open('/path/to/big/file.bin', 'rb')))

Now everything is fast and happy and running as it should.

So what should Django do about this? It could be written off as an idiosyncrasy of the framework, but I think the case is strong that Django should inspect for file-like objects and use more aggressive calls to .read() to prevent such unpredictable behavior. One problem with such large (1MB) read()s is that they may block for too long instead of trickling bytes to the user, so some asynchronous I/O strategy would be better still.
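
As a rough sketch of what that inspection could look like (smart_content_iter is a name I’m making up, and duck-typing on .read() is just my assumption about a reasonable detection rule; FileIterWrapper is the class from above):

def smart_content_iter(content, chunk_size = 64 * 1024):
  # If the content quacks like a file, iterate it in fixed-size chunks;
  # otherwise fall back to ordinary iteration (strings, lists, etc.).
  if hasattr(content, 'read'):
    return FileIterWrapper(content, chunk_size)
  return iter(content)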

There’s no reason why a small to moderately sized site should get hosed performance-wise just because several people are downloading binary files from a Django server via mod_python or WSGI.

Finally, proper error handling for disposing of the file descriptor in the above examples is left as an exercise to the reader. I suggest using the “with” statement, which can currently be imported from __future__.
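
For instance, one way to finish that exercise is a generator that ties the file’s lifetime to the iteration itself (file_chunks is a name I’m inventing here; this is a sketch, not something I’ve tested against the patch):

from __future__ import with_statement

def file_chunks(path, chunk_size = 1024**2):
  # The generator owns the file handle; the with block guarantees the
  # descriptor is closed when iteration ends or the generator is reclaimed.
  with open(path, 'rb') as flo:
    while True:
      data = flo.read(chunk_size)
      if not data:
        break
      yield data

In a view this slots in just like before:

return HttpResponse(file_chunks('/path/to/big/file.bin'))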

Written by fdr

February 12, 2008 at 1:51 pm

Posted in django, projects, python
