Wednesday, March 24, 2010

Dabbling With PLT Serializable Continuations

The other day, I was rambling on about Amazon Web Services and PLT Scheme, and how they might mix well together. One of the concepts that fascinated me was that of Stateless Servlets and how they leverage serializable continuations.

Specifically, I was wondering if you could have a servlet create a bunch of continuations that represented work to be completed. These continuations could then be serialized, stuffed in a queue, and then processed by a collection of entirely separate servers.

Trying It Out

To simulate this, I threw together a toy example. In this case, I created a function to grab stock exchange info, and treated that as the work I wanted to distribute. I then wrote a handful of helper functions to serialize and deserialize the work. The result is the following:

#lang scheme
;; Standard includes here, nothing fancy
(require web-server/lang/serial-lambda
         net/url
         (planet neil/csv:1:5))

;; Helper syntax to 'package up' some work to be completed. The packaging process
;; creates a serializable function, and then goes ahead and serializes it. The result
;; is effectively plain data which can be stored and re-executed later.
(define-syntax package
  (syntax-rules ()
    [(_ expr ...) (serialize (serial-lambda () expr ...))]))
;; More helper code. In this case, the run function is handed the serialized code
;; which is deserialized and then run.
(define (run serialized-thunk)
  ((deserialize serialized-thunk)))

;; A simple structure for holding stock data.
(define-struct stock (symbol price low high) #:transparent)

;; A simple function for grabbing stock data. Notice how there's
;; nothing special about how this function is written.
(define (get-stock-info symbol)
  (apply make-stock
         (first (csv->list (get-pure-port
                            ;; the quote-service URL was left blank in the original
                            (string->url (format "" symbol)))))))

;; This function takes in a collection of stock symbols (MSFT, RHT, etc.)
;; and returns back a list of serialized expressions. If I went with 
;; my queue example, it's the results of this function that I would publish
;; to the queue.
(define (make-stock-getters symbols)
  (map (lambda (symbol)
         (package (get-stock-info symbol)))
       symbols))

The above code can be used as follows:

(make-stock-getters '("MSFT" "RHT"))
;; =>
;;(((2) 1 ((#"\\stuff\\" . "lifted.10")) 0 () () (0 "MSFT"))
;; ((2) 1 ((#"\\stuff\\" . "lifted.10")) 0 () () (0 "RHT")))
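To see the round trip in isolation, here's a minimal sketch (reusing the same package and run helpers from above) that serializes and re-runs a trivial piece of work, with no network involved:

```scheme
#lang scheme
(require web-server/lang/serial-lambda)

(define-syntax package
  (syntax-rules ()
    [(_ expr ...) (serialize (serial-lambda () expr ...))]))

(define (run serialized-thunk)
  ((deserialize serialized-thunk)))

;; package captures its lexical environment, so the value of n
;; travels along inside the serialized data.
(define task
  (let ([n 40])
    (package (+ n 2))))

(run task)  ;; => 42
```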

On another server, or in my case, a fresh instance of DrScheme, I can then run the code:

 (run '((2) 1 ((#"c:\\users\\ben\\desktop\\i2x\\src\\trunk\\ss\\play\\" . "lifted.10")) 0 () () (0 "RHT")))
;; =>
;; #(struct:stock "RHT" "30.70" "14.43" "31.76")
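Since serialize yields ordinary s-expression data, the packaged work can ride over any transport that moves text. Here's a sketch using a file as a stand-in for the queue (assuming both processes load the same source file, so the lifted lambda definitions are available on the deserializing side):

```scheme
;; Producer side: write the packaged tasks out as ordinary data.
(with-output-to-file "task.ss"
  (lambda () (write (make-stock-getters '("RHT")))))

;; Consumer side (possibly another process): read them back and run them.
(for-each (lambda (task) (run task))
          (with-input-from-file "task.ss" read))
```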

How Did The Solution Do?

I've got to say, my little experiment appears to have worked. If I had an AWS SQS implementation, I could have gone through hundreds of stock symbols, serialized their continuations, and published them to a queue. I could then have had a collection of Linux servers sitting around, processing the queue.
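The queue-processing side can stay completely generic. A minimal sketch of such a worker loop, assuming a hypothetical pop-task! procedure that returns the next serialized task from the queue, or #f when the queue is empty:

```scheme
;; A generic worker: it knows nothing about stocks, CSV, or URLs.
;; It just pops serialized thunks off the queue and runs them.
(define (process-queue pop-task!)
  (let loop ()
    (let ([task (pop-task!)])
      (when task
        (run task)   ; deserialize and execute the packaged work
        (loop)))))
```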

I guess a legitimate question is: is it worth it? Why bother with serialized continuations? Why not simply publish a plain-text statement (say, an operation name plus a stock symbol) to the queue, and then write a program to parse that data and run the lookup?

In this case, that may very well be a workable solution. But the continuations really shine in that you don't need to write specialized queue processors. Effectively, rather than having to invent a new language for exchanging data among processes, you can use the Scheme implementation itself. Less code to write means faster implementation time, fewer bugs, and the removal of another layer of transformation code. That sounds like a big win to me.


  1. Thanks for posting this.

  2. Interesting. Compare to Kay's evolving vision of OOP (not, of course, the C++/Java vision):

    "True to the stages, I 'barely saw' the idea several times ca. 1961 while a programmer in the Air Force. The first was on the Burroughs 220 in the form of a style for transporting files from one Air Training Command installation to another. There were no standard operating systems or file formats back then, so some (to this day unknown) designer decided to finesse the problem by taking each file and dividing it into three parts. The third part was all of the actual data records of arbitrary size and format. The second part contained the B220 procedures that knew how to get at records and fields to copy and update the third part. And the first part was an array of relative pointers into entry points of the procedures in the second part (the initial pointers were in a standard order representing standard meanings)."

  3. Anonymous (7:23 AM):


    a hop webserver (using bigloo) :


