Metalinguistic Abstraction

Computer Languages, Programming, and Free Software

emacs develock customization for Python

with one comment

This has been annoying me for some time: develock mode doesn’t support Python out of the box.  For a while I ran a hacked-up develock in which I had simply string-replaced every reference to Ruby with Python, and things worked pretty well…but I had to load that hacked-up copy constantly.

Well, I finally got around to customizing develock properly after the fact, I think.  Rejoice, fellow whitespace pedants.  Here’s the snippet:

;;
;; develock-py.el
;;
;; Made by Daniel Farina
;; Login   <drfarina@acm.org>
;;
;; Started on  Sun Feb 14 09:21:21 2010 Daniel Farina
;; Last update Sun Feb 14 09:27:12 2010 Daniel Farina
;;

(require 'develock)

(defcustom develock-python-font-lock-keywords
  '(;; a long line
    (develock-find-long-lines
     (1 'develock-long-line-1 t)
     (2 'develock-long-line-2 t))
    ;; long spaces
    (develock-find-tab-or-long-space
     (1 'develock-whitespace-2)
     (2 'develock-whitespace-3 nil t))
    ;; trailing whitespace
    ("[^\t\n ]\\([\t ]+\\)$"
     (1 'develock-whitespace-1 t))
    ;; spaces before tabs
    ("\\( +\\)\\(\t+\\)"
     (1 'develock-whitespace-1 t)
     (2 'develock-whitespace-2 t))
    ;; tab space tab
    ("\\(\t\\) \t"
     (1 'develock-whitespace-2 append))
    ;; only tabs or spaces in the line
    ("^[\t ]+$"
     (0 'develock-whitespace-2 append))
    ;; reachable E-mail addresses
    ("<?[-+.0-9A-Z_a-z]+@[-0-9A-Z_a-z]+\\(\\.[-0-9A-Z_a-z]+\\)+>?"
     (0 'develock-reachable-mail-address t))
    ;; things to be paid attention
    ("\\<\\(?:[Ff][Ii][Xx][Mm][Ee]\\|[Tt][Oo][Dd][Oo]\\)\\(?::\\|\\>\\)"
     (0 'develock-attention t)))
  "Extraordinary level highlighting for the Python mode."
  :type develock-keywords-custom-type
  :set 'develock-keywords-custom-set
  :group 'develock
  :group 'font-lock)

(defvar python-font-lock-keywords-x nil
  "Extraordinary level font-lock keywords for the Python mode.")

(setq develock-keywords-alist
      (cons '(python-mode
              python-font-lock-keywords-x
              develock-python-font-lock-keywords)
            develock-keywords-alist))

(plist-put develock-max-column-plist 'python-mode 79)

Written by fdr

February 14, 2010 at 9:53 am

Posted in lisp, python


dirsync: for completing metadata writes durably

leave a comment »

Today I encountered an obscure file attribute (or its equivalent mount option, if you are fine with those semantics mount-wide). One is more likely to know about this option if one is familiar with MTA software, or presumably other software that needs strict data-durability guarantees from a POSIX file system, especially with regard to metadata.

The crux of the problem is that rename(2) makes no guarantee that its changes are durable by the time it returns. Using dirsync promises that metadata alterations in a directory (this includes all renames, creations, and deletions) are synchronous rather than asynchronous. If you maintain programs that rely heavily on the atomicity and durability of rename and other metadata operations, and you aren’t already aware of dirsync, you may want to read this post in more detail.

rename does make atomicity guarantees, which are not to be confused with durability guarantees. They include:

  • One will never have two persistent links to the same file, even if one should suffer a crash during or after a rename operation. (A transient double-existence while the system is still running is deemed acceptable.)
  • Even if another link is being destroyed by the rename (i.e. a file already exists under the destination name), there will be no point in time at which the destination file name does not exist (as either the old or the new content).

I wrote this post because I did not know a priori what to look for when I developed some self-doubt about the crash-recovery robustness of two disparate systems using two-phase commit, of which one half was a file system. The keywords that came to mind did not yield useful search results, so I ended up walking around the Linux source instead, which is where I came upon dirsync. This sense of the search term is sufficiently obscure (it is much more often used as shorthand for ‘directory synchronization’, e.g. rsync-ish tools) that one must disambiguate it by adding fairly specific keywords, such as ‘inode’. Hopefully this post will raise awareness of the danger faced by most programs that assume metadata changes are atomic and durable, and will serve as good search-engine fodder to that effect.

Edit: I need to do some more investigation into what the tradeoffs are vs. fsync().  I think there’s mostly a speed benefit to avoiding a heavy fsync() call.  To the best of my knowledge there is no fsync_metadata_only library function; dirsync will give you those semantics, albeit with fairly blunt tools.
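
For reference, here is a minimal sketch (mine, not from the original post) of the fsync()-based alternative on Linux: fsyncing the containing directory after the rename makes the metadata change durable without dirsync.

import os

def durable_rename(src, dst):
    # rename(2) is atomic but not durable: after it returns, the new
    # directory entry may still exist only in the page cache.
    os.rename(src, dst)
    # fsync() on the directory itself flushes the metadata change.
    dirfd = os.open(os.path.dirname(dst) or '.', os.O_RDONLY)
    try:
        os.fsync(dirfd)
    finally:
        os.close(dirfd)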

Written by fdr

January 26, 2010 at 9:30 pm

Posted in dbms, storage, systems

A different way to main()

leave a comment »

import sys
import getopt

class Usage(Exception):
    def __init__(self, msg):
        self.msg = msg

def main(argv=None):
    if argv is None:
        argv = sys.argv
    try:
        try:
            opts, args = getopt.getopt(argv[1:], "h", ["help"])
        except getopt.error, msg:
            raise Usage(msg)
        # more code, unchanged
    except Usage, err:
        print >>sys.stderr, err.msg
        print >>sys.stderr, "for help use --help"
        return 2

if __name__ == "__main__":
    sys.exit(main())

What is this, you wonder? As it turns out, it’s Guido van Rossum’s preferred way to enter a Python program, and a sensible departure from the classic variant. Even though his post is from 2003, I am discovering it for the first time; perhaps I will adopt this idiom in some of my programs, since it has been blessed by the lord of Python.
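
One concrete payoff of the optional argv, as a quick sketch (reusing the main() above; the program name is a placeholder):

# main() can be exercised from the interactive interpreter or a test
# harness with synthetic arguments, instead of being locked to sys.argv:
main(['myprogram', '--help'])

# And because main() returns a status code rather than calling
# sys.exit() itself, the caller decides what to do with the result.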

Written by fdr

February 13, 2008 at 1:54 am

Posted in python


Django file and stream serving performance Gotcha

with 9 comments

Recently I’ve been doing a little bit of work with the Django web framework for Python. Part of this project involves a reasonable amount of binary file streaming to and from the server. There is currently a patch in trac (#2070) slated for acceptance, so I applied it and tried copying some files in and out through the web server. I have some problems with the particulars of this patch and intend to air my complaints, but that’s for another post. What I discovered here was an annoying performance gotcha in simply reading back binary files to be served to the user.

The gotcha is simple to expose:

In a Django view, use the documented functionality of passing a file-like object to the response object from the view; preferably a big, binary one. So you do something like this:

return HttpResponse(open('/path/to/big/file.bin', 'rb'))

And then you surf on over to localhost and try grabbing this file. Your hard drive whirs and you notice your CPU usage is at 100% while serving the file slowly. Most people then rationalize it away saying “well, of course, Python is slow, so it makes sense that it would suck at this. Set up a dedicated static file serving server written in C and use some URL routing incantations.”

The crucial information I had to dig for is how Django emits bytes to users. Django calls iter() on the input object and then calls .next() to grab more bytes to write out to the stream. Once you factor in that the default iter() behavior for an open file in Python is to read lines, you realize there’s an enormous amount of time wasted and needlessly evil buffering going on, just to emit chunks of the file separated by (in the case of binary files) completely arbitrarily spaced newline bytes. The result is lots of heap abuse as well as lots of CPU time burned looking for these needles in the haystack.
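
You can see the behavior outside of Django in a couple of lines (the path is a stand-in):

f = open('/path/to/big/file.bin', 'rb')
it = iter(f)   # a file object is its own iterator in Python
it.next()      # scans for the next '\n' byte, however far away that is,
               # buffering everything in between as one "line"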

The hack to address this is very simple: we write a tiny iterator wrapper that simply uses the read(size) call. It can look something like this:

class FileIterWrapper(object):
  def __init__(self, flo, chunk_size = 1024**2):
    self.flo = flo
    self.chunk_size = chunk_size

  def next(self):
    data = self.flo.read(self.chunk_size)
    if data:
      return data
    else:
      raise StopIteration

  def __iter__(self):
    return self

A chunk_size of 1024 ** 2 bytes is one megabyte per chunk. When using this iterator the logic is simple, and the result is that Python consumes very little CPU time and memory to rip through a file stream. It can be applied to the previous example like so:

return HttpResponse(FileIterWrapper(open('/path/to/big/file.bin', 'rb')))

Now everything is fast and happy and running as it should.

So what should Django do about this? It could be just written off as an idiosyncrasy of the framework, but I think that the case is strong that Django should inspect for file-like objects and use more aggressive calls to .read() to prevent such unpredictable behavior. One problem with such large (1MB) read()s is that they may block for too long instead of trickling bytes to the user, so some asynchronous I/O strategy would be better.

There’s no reason why a small-to-moderately-sized site should get hosed performance-wise because several people are downloading binary files from a Django server via mod_python or WSGI.

Finally, proper error handling for disposing of the file descriptor in the above examples is left as an exercise to the reader. I suggest using the “with” statement, which can currently be imported from __future__.
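
For the non-streaming case, here’s a minimal sketch of what I mean (the view name and path are made up; note that naively wrapping a streamed response this way would close the file before Django finished iterating it, so for streaming the close belongs in the iterator itself, e.g. upon StopIteration):

from __future__ import with_statement  # built in as of Python 2.6

from django.http import HttpResponse

def serve_small_file(request):
    # Fine for files small enough to read eagerly: the file is closed
    # when the with-block exits, before the response is returned.
    with open('/path/to/small/file.bin', 'rb') as f:
        return HttpResponse(f.read())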

Written by fdr

February 12, 2008 at 1:51 pm

Posted in django, projects, python


the woes of “git gc --aggressive” (and how git deltas work)

with 11 comments

Today I found a gem in the git mailing lists that discusses a little bit about how git handles deltas in the pack (i.e. efficiently storing revisions) and why — somewhat non-obviously — the aggressive git garbage collect (invoked by doing git gc --aggressive) is (generally) a big no-no. The verbatim email from Linus explaining this is affixed as part of the full text of this article.

A quick summary

Since there is little point in simply reposting this information (other than for personal archival), I will condense it here for quick reading:

Git does not use your standard per-file/per-commit forward and/or backward delta chains to derive files. Instead, any other stored version may legally be used to derive another version. Contrast this with most version control systems, where the only option is to compute the delta against the previous version. That approach is so common probably because of a systematic tendency to couple the deltas to the revision history. In Git the development history is not in any way tied to these deltas (which are arranged to minimize space usage); the history is instead imposed at a higher level of abstraction.
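
To make that delta freedom concrete, here is a toy sketch (mine, and nothing like git’s real heuristics; difflib opcode counting merely stands in for delta size): every version simply shops among all other stored versions for the cheapest base, with no regard for history.

import difflib

blobs = {'v1': 'aaaa bbbb cccc',
         'v2': 'aaaa bbbb dddd',   # a near-copy of v1
         'v3': 'eeee ffff gggg'}   # unrelated content

def delta_size(base, target):
    # bytes of target that cannot be copied from base: a crude
    # stand-in for the size of a real binary delta
    sm = difflib.SequenceMatcher(None, base, target)
    return sum(j2 - j1 for op, i1, i2, j1, j2 in sm.get_opcodes()
               if op != 'equal')

for name in blobs:
    base = min((b for b in blobs if b != name),
               key=lambda b: delta_size(blobs[b], blobs[name]))
    print '%s is cheapest to derive from %s' % (name, base)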

Now that we have seen how git has greater flexibility in choosing which revisions to use to derive another revision, we get to the problem with --aggressive.

Here’s what the git-gc 1.5.3.7 man page has to say about it:

       --aggressive
           Usually git-gc runs very quickly while providing good disk space
           utilization and performance. This option will cause git-gc to more
           aggressively optimize the repository at the expense of taking much
           more time. The effects of this optimization are persistent, so this
           option only needs to be used occasionally; every few hundred
           changesets or so.

Unfortunately, this characterization is very misleading. It can be true if one starts from a horrendous set of delta derivations (for example: after a large git-fast-import), but the option’s true behavior is to throw away all the old deltas and compute new ones from scratch. This might not sound so bad, except that --aggressive isn’t aggressive enough at it to do a good job, and it may throw away better delta decisions made previously. For this reason --aggressive will probably be removed from the manpages and left as an undocumented feature for a while.

So now you ask: “Well, suppose I do really want to do the expensive thing because I just copied my company’s history into git and it has an inordinately large pack. How do I do it?”

Excerpted from Linus’ mail here is a terse recipe (with some explanation) that may take a very long time and require a lot of RAM to run but should deliver results:

So the equivalent of "git gc --aggressive" - but done *properly* - is to
do (overnight) something like

	git repack -a -d --depth=250 --window=250

where that depth thing is just about how deep the delta chains can be
(make them longer for old history - it's worth the space overhead), and
the window thing is about how big an object window we want each delta
candidate to scan.

And here, you might well want to add the "-f" flag (which is the "drop all
old deltas", since you now are actually trying to make sure that this one
actually finds good candidates.

Other notes and observations

  • If you have a development history where you constantly change between several particular versions of, say, a large binary blob — say a resource file of some kind — this operation can be very cheap under Git since it can delta against versions that are not adjacent in the development history.
  • The delta derivations don’t have to obey causality: a commit made chronologically later can be used to derive one made earlier. It’s just a bunch of blobs in a graph; there isn’t even a strictly necessary notion of time attached to each blob to begin with! That data is maintained at a higher level. Repack doesn’t have to know or care about when a commit was made. (The only reason it might care is to help implement heuristics; right now no such heuristic exists.[0])
  • Finding/verifying an optimal (space-minimizing) delta-derivation graph feels NP-hard. I now wave my hands furiously.

[0]: From the git-repack man page:

--window=[N], --depth=[N]

    These two options affect how the objects contained in the pack are
    stored using delta compression. The objects are first internally
    sorted by type, size and optionally names and compared against the
    other objects within --window to see if using delta compression
    saves space. --depth limits the maximum delta depth; making it too
    deep affects the performance on the unpacker side, because delta
    data needs to be applied that many times to get to the necessary
    object. The default value for --window is 10 and --depth is 50.


Written by fdr

December 6, 2007 at 4:56 am

Posted in distributed, version-control


Overview: GlusterFS & Gluster

leave a comment »

Forgive the writing, I’ll fix it up later if I get complaints.

Gluster (and its file system, GlusterFS) is the only distributed computing + distributed file system project that gives me warm fuzzies inside, and if you check my del.icio.us tags, you’ll see that I have visited and reviewed quite a few options in this space (and many more that I didn’t bookmark). Also reviewed were OCFS(1|2), GFS(1|2), GFarmFS, Ceph, and CODA.

Why warm fuzzies for Gluster? Because it doesn’t rebuild the world from scratch, and it is relatively simple in configuration and implementation. GlusterFS is implemented as a FUSE file system for GNU/Linux (which incurs some overhead but greatly speeds up development, for the obvious reasons) and relies on underlying file systems that have already received a lot of attention to detail. This also means that you can mix, match, compose, and migrate easily: since it sits above any normal POSIX file system, you can have your exorbitantly expensive Fibre Channel next to your cheap software-RAID6 SATA array, in combination with your medium-priced ATA-over-Ethernet, and rely on GlusterFS to distribute data among them using underlying file systems you already know and love. Some of your block devices may be formatted ext3, others JFS or XFS; it doesn’t really matter, as long as you have basic POSIX capabilities. GlusterFS also supports optional striping and replication, and I have heard a report of easily saturating a full-duplex 10-gigabit line in both directions from about five machines (granted, each was probably running RAID) while using GlusterFS.

As is often said: complexity is the enemy of dependability. Gluster is the only solution I’ve seen so far that I, as a lone administrator, would trust, in part because it appeals to my brand of engineering sensibilities. Paramount among them is (to some people counter-intuitively) an appreciation for the many things that GlusterFS unabashedly doesn’t do, simplifying the design. An example of this is authentication. If you want to use Gluster with authentication, expose (on a trusted machine that is a cluster client) an SMB/NFS server that takes care of user permissions and hooks up to your LDAP server et al. Gluster doesn’t include any baggage for distrusting clients or for fancy quasi-centralized metadata servers, and this I see as a benefit. If someone invents such baggage later on, it will likely be delivered as a module (just like the replication module) that I can choose or not choose at will. Is it as deeply integrated or slick as some of the clustered file systems that require an RDBMS to coordinate? Not really, but the deep and slick scare the bejeezus out of me, because they tend to become resistant to combination with other techniques, not least because you end up pulling your hair out trying to keep the trains running on time once you exercise more of the features. (The industrial-strength clustered file systems are not known for their ease of maintenance.)

It is also my belief that this dedication to simplicity will result in a more robust substrate to build more advanced fuzzy features on. I prefer my base functionality in a tool to be more predictable than clever. Clever translates to me as “often right, but at hard-to-predict times sometimes very, very funkily wrong.”

Besides the usual supercomputing cluster type applications, I believe Gluster+GlusterFS+(XenSource | LKVM | jails | etc) would provide an excellent way to protect the underlying infrastructure (by using VM-style abstraction as a heavy-handed form of capability-based security) and build a service much like Amazon’s EC2. Perhaps an experiment for one day…

Written by fdr

September 22, 2007 at 2:45 am

Posted in infrastructure, storage


The Lisp Before the End of My Lifetime

with 14 comments

Many wax poetic on the virtues of Lisp, and I would say for good reason: it was (and is) a language and philosophy far ahead of its time, in principle and oftentimes in practice. But I have to cede the following: the foundations of Common Lisp are becoming somewhat ancient, and there are many places with more modern roots from which I would have it borrow heavily to help create my programming nirvana. While talking with yet another friend from Berkeley (and the author of sudo random), we discussed some of these things, and I decided it was worth enumerating some of them and pointing to ongoing work that implements those fragments, or something close to them.

The reason this post is titled in such a sober way is that the Lisp I envision is probably many lifetimes of work to accomplish, and as such I cannot see myself accomplishing everything on my own. Granted, I still have a lot of life ahead of me, but that only makes the equation all the more depressing. Implementation could probably span many PhD theses and industrial man-decades. As such, I can only hope that it’s the Lisp that more or less exists before The End Of My Lifetime. I would be glad to one day say that I contributed, in some part or large, to any one piece of it. This whole post smacks of “sufficiently smart compiler” daydreaming, so turn away if you must. Alternatively, you can sit back, enjoy, and nit-pick the details of compiler theory and implementation, some (or many) of which I’m sure I have overlooked.

Finally, this is not by any means a list of things that current implementations do not have; rather, these are the things I feel would be most valuable. Some are not even technical challenges so much as social and design ones. I view this hypothetical Lisp as not only some new features but a set of idioms that programmers generally agree on. “The Zen of Python” is an excellent example of this. There are definitely Lisp idioms, but they have become somewhat antiquated and are hard to enumerate, in some part because of the baroque and aging Common Lisp specification. The hardest idiom to get around is fearlessness and ease of metaprogramming, which in part is great, but which can also make standardization socially difficult: herding Lisp programmers is about as tough as herding cats armed with machine guns.

However, I think Lisp’s guiding intentions have lain in flexibility. Common Lisp, for its time, was the kitchen sink. It still is, in large part, but it may benefit from new idioms and a fresh slate, as well as deeper and more integrated compiler support for some of the features mentioned below.

1. The Compiler is your Friend.

Leaders in this area: SLIME and its Swank component
Honorable Mentions: DrScheme, IPython

Modern IDEs seem to do everything up to the semantic analysis step of compilation in order to give you advanced searching and refactoring capabilities. Oftentimes a lot of compiler work is reimplemented to support the features of the IDE at hand, and much work is duplicated, sometimes to the point of implementing a whole compiler, as in Eclipse.

SLIME and Swank have a twist on this that I like: Swank is responsible for asking the compiler implementation itself (in my case SBCL) for information on various symbols: their source locations, documentation, and so on. It communicates all this information through a socket to a frontend, which comprises the rest of SLIME. In doing so it gains authoritative answers to queries about the program, because the compiler of choice is itself delivering its opinion on the matter, even as it runs.

This allows for an accurate way to track down references that may be created dynamically, by asking the figurative question “What would the compiler do?”. From this SLIME gains extremely powerful auto-completion facilities that are robust to techniques that are either unavailable in other programming cultures or, if used, would defeat the programmer’s complete assessment of the program. Lisp is the only runtime/language I know of where I can eval a string and still be able to access the resulting, say, function definition and documentation strings with full auto-completion and hinting in my editing environment.

Were Lisp more popular, I would bet Swank-compatibility and feature-richness would be a defining feature for Lisp implementations, and frontends using Swank would be prolific. The socket interface was definitely the way to go here.

2. Networked & Concurrent Programming

Leaders in this area: Erlang
Honorable mentions: Termite & Gambit, Rhino, SISC, Parallel Python, Stackless Python, and many others.

Sun Microsystems, despite its beleaguered business, had at least one thing very, very right: “The network is the computer.” The ability to talk to multiple computers on a network is increasingly important in our era, and making it convenient can lead to extraordinarily powerful, robust applications. Erlang definitely leads the pack in this area: an industrial-strength, reasonably efficient compiler that can do I/O pumping using efficient kernel-assisted event polling, as well as automatically distribute computation across multiple processors. It also supports sending most higher-order objects, such as closures, across the network as parts of messages, and it has a powerful pattern-matching syntax that allows relatively easy handling of binary (and other) protocols.

With processors gaining more cores and computers continually falling in price, the ability to (mostly) correctly use multiple machines, and multiple processors on each machine, will become a dominant influence on writing programs that require high performance. Erlang has been demonstrated to be excellent at managing network I/O switching and handling, which is not surprising considering that this is its main application as a tool. It could, however, stand to improve its sequential execution performance: let’s just say I won’t be rewriting my numeric codes in Erlang just yet, despite the potential for mass distribution of computation. I also miss some of the amenities I’ve gotten used to in Lisp, but Erlang excels in its area for sure and has many lessons to teach.

3. First Class Environments

Leaders in this area: T, MIT-Scheme — mostly academia
Honorable Mentions: Python, Common Lisp

First class environments are the beginning and end of many problems, but I feel that having this facility would be useful for debugging, for implementing creative namespacing, and for many other important features. Opaque environments can sometimes still be handled with mostly reasonable performance, but as far as I know nice, transparent environments — i.e., things that look like property or assoc lists straight out of SICP — are just an absolute killer for performance and make compiler optimizations nigh impossible. But that’s OK…because there’s nothing more annoying than shying away from thunks or currying, when these techniques are the most simple and expressive solution, because you are afraid it will become a chore to poke at the environment later to debug those anonymous function instances. By contrast, locals() in Python, for example, can be a godsend for special tasks and quick debugging, even if it only returns the local (and generally most useful) environment.
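
As a tiny sketch of that convenience (the function is made up):

def debug_me(x):
    y = x * 2
    # locals() returns the local environment as a plain dict,
    # e.g. {'y': 42, 'x': 21}
    print locals()

debug_me(21)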

First class environments also help in “fixing” the tricky issues that crop up and that motivate Scheme’s hygienic macros, famous for being hellishly picky to get right (Lisp-2 fans always seem to harp on this point, although what I’m suggesting may be something more like a dynamic Lisp-N). I still feel that the quasiquote, despite its sometimes-ugliness, is the right primitive model to follow. And, in fact, since there seem to be hygienic macro packages built on top of the primitive variants, one could get those almost for free. Perhaps hygienic macros could also be idiomatic; I know not.

In conclusion, the goal is to break down some of the final barriers between code and data and allow for some interesting if unorthodox transformations and redefinitions at run-time and compile-time. It’s also important to have this functionality if one wants to dynamically redistribute computations across machines or perform run-time metaprogramming, which may be a great way to introduce new compiler features that can be toggled on and off.

4. An External Native-Code Generator

Leaders in this area: LLVM, JVM
Honorable Mentions: Parrot (if only because of relative vaporwareness), Mono, C--, Bit-C

More important than the individual merits of any of these specific VMs is that they are maintained separately, by Other People™. It is high time to stop re-inventing architecture-specific code generators and local optimizers over and over. With the JVM catching up to or passing language-specific native code generators (on the Alioth compiler shootout it’s now more or less tied with OCaml when running Java, and doing well with Scala) and LLVM recently showing on-par and sometimes better performance than vanilla GCC on some C code, I am buoyed by hope that one can generate relatively high-level (or at least architecture-independent-ish) bytecode and still get respectable or even good performance. JIT, ironically enough, may be better suited to the lispy world than the Java one (although it’s instrumental in the Java world, for sure), considering that it’s pretty common to go in and rebind definitions in Lisp while a system is running. One might argue that changing declaim/proclaim statements and evaluating code is in fact better than JIT, and I could see a case for that, but it just seems that lots of work is being poured into run-time code generators that could be leveraged.

One interesting idea is compiling to Bit-C, which has support for low-level manipulations and type verification, yet is also a lisp.

5. Optionally Exposed Type Inference and Static Typing

Leaders in this area: Epigram, Qi, Haskell, the ML family
Honorable Mentions: CMUCL and descendant SBCL

Inferred static typing is all the rage these days, with claims of increased execution speed and program correctness. I’m all for that, and Qi is an excellent example of the considerable amount that can be done with standard Common Lisp facilities. Qi has the extremely sensible goal of remaining within Common Lisp, thus ensuring that it has a measurable chance of gaining traction in my lifetime.

I’m not sure that the interface to typing I would expose is necessarily (or necessarily not) Qi’s, but I do want my compiler to tell me what it thinks about the various tokens littered throughout my code, allowing my editor to do things like red-flag unsafe operations and type disagreements, or show expensive dynamic operations or inferred types when I’m seeking optimization. Ultimately, not all of my code will fit neatly into the pure-functional paradigm, and some may be better served by the occasional side-effect or global state; I would like type rigor to extend as far as possible, but not to become a burden. Sometimes I just want a heterogeneous hash table of elements without any baggage. I think it makes sense to rigorously type nuggets of code, but the Lisp in question should not be fascist about maintaining ‘perfect’ consistency throughout an entire program. Epigram and Qi have this model exactly right: pay as you go, with flexibility when you need it but no fascism when you don’t. In the future, it would be nice to see efficiency benefits from compiler awareness of carefully statically typed nuggets of code that would otherwise not be possible, such as eliminating some bounds checking.

Finally, CMUCL and SBCL already do quite a bit of type inference; it’s just not exposed to the user very nicely in SLIME, except through warnings blown out to stdout. Even then, they can be very useful. Ideally I could simply ask SLIME for the type of a given symbol and (CMU|SB)CL would tell me what it thinks.

6. Pattern Matching

Leaders in this area: Many. MLs, Haskell, Erlang, lisp macros for Scheme and CL.

This is an amenity that should become a standard part of the lisper’s idiom, for convenience if nothing else. It’s just that there are a number of pattern matchers, and none of them, as far as I am aware, has become the idiomatic one.

7. Continuations and Dynamic-Wind

Leaders in this area: Scheme, almost exclusively, and an implementation: STALIN

Scheme is probably the canonical implementation of continuations and dynamic-wind. The implementation is subtle and the performance impacts can be significant, but they give schemers pleasant generality when designing new control constructs. Combined with first-class environments, one could do quite a few interesting things, such as saving the entire program state as an environment-continuation pair. Unfortunately, implementation is incredibly painful. Yet it has been done, and with a pay-as-you-go model it may not need to hurt most code’s performance very much (few people ought to be writing code riddled with continuations). See the STALIN compiler, which does all sorts of rather insane things, along with insane compilation times. It’s mostly intended for numerical codes, though.

8. Pretty and easy (but optional) Laziness

Leaders in this area: Haskell, Python, Ruby, Common Lisp Iterate package, many others
Honorable Mentions: Anything with closures, Screamer. Scheme for call/cc allowing even more strange general flow control.

I like Python’s yield operator, which transforms a normal function into a lazy one expressed as an iterator. In particular, I like to avoid, when possible, specifying representation formats for a sequence or set of things when they aren’t strictly necessary. With continuations one can get very nice-looking implementations of generator-type functions, although this may have an undesirable performance impact. As such, most languages that have laziness or generators implement special restricted-case behavior to get good performance, and that should probably sit pretty high up on the optimization list for this Lisp.
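
A tiny sketch of what I mean:

def naturals():
    # A lazy, unbounded sequence: no representation is committed to,
    # and nothing is computed until a consumer asks for it.
    n = 0
    while True:
        yield n
        n += 1

seq = naturals()
seq.next()   # => 0
seq.next()   # => 1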

9. Convenient and Pervasive Tail Recursion

Leaders in this area: Erlang, Scheme
Honorable mentions: anything with tail recursion elimination and optional arguments

I like using tail recursion to express loops, as I find it more flexible, easier to debug, and more understandable than looping and mutation. Combined with pattern matching it’s fiendishly convenient at times, and it can in some circumstances greatly assist a compiler if assignments go unused. I don’t mean to say this should be the only means, but I would like it to be idiomatic and terse to write. Scheme’s named let and Erlang’s pattern matching both assist this process. A key priority is making it easy to hide the extra arguments often required in a tail-recursive function to hold state from the outside world, and Scheme’s named let, I think, handles this rather beautifully for common cases.

Written by fdr

August 4, 2007 at 3:08 am

Posted in languages, lisp, projects
