Wednesday, June 10, 2015

A Function Evaluating eval

To explore metacircularity for Clojure, let's start with a minimal eval, implementing a lisp-in-lisp. We shall call our lisp mcircl, abbreviated as mc. Making it metacircular is a substantially larger project, and we'll take that up in the future. I have been referring heavily to Lisp In Small Pieces, and this is covered (mostly) in the first chapter. I have chosen Common Lisp as the host language, because I'm more familiar with it than Scheme, and it has a clearly spelled out specification. So the semantics of anything I define on top of it will have a prayer of being specifiable.

Minimal eval

(mc:eval '(mc:quote x) '()) => x
(mc:eval 'x '((x . y))) => y

This isn't a language yet, just a simple program. mc:eval is eval defined in the host lisp, and the data structures and types are also borrowed from the host. The second argument to eval is the lexical environment, represented as an association list. We also see our first special form: mc:quote, which is distinct from quote in the host lisp. We can already see how different this lisp will turn out to be from Clojure. The treatment of symbols and namespaces is Common Lisp style, and to work towards Clojure we will have to define a new reader.
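
Since the environment is just an association list, symbol lookup is a one-liner. Here is a sketch of what lookup (used by mc:eval below) might look like; signaling an error on unbound symbols is my assumption, not something the examples above pin down.

;; A sketch of symbol lookup over the association-list environment.
;; Signaling an error for unbound symbols is an assumption.
(defun lookup (symbol env)
  (let ((binding (assoc symbol env)))
    (if binding
        (cdr binding)
        (error "Unbound symbol: ~a" symbol))))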

Function Defining eval

(mc:eval '(mc:fn () (mc:quote x)) '()) => <unreadable-function-value>
(mc:eval '((mc:fn () (mc:quote x))) '()) => x
(mc:eval '((mc:fn (x) x) (mc:quote y)) '()) => y

Now we're doing something a bit more interesting: writing a function that returns a value. The first form defines a function and returns it. The function is not a readable value, and is implemented as data in the host language. The second form calls the function. The third defines and calls a function that takes an argument. To support this we need a more interesting definition of environments, one that can be extended during function execution, as sketched below.
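
Assuming we keep the association-list representation, environment extension can be as simple as pairing each parameter with its argument at call time. The name extend-env is my guess at the helper, not necessarily the actual one.

;; A sketch of environment extension at call time: pair each
;; parameter with its argument, shadowing any outer bindings.
(defun extend-env (params args env)
  (append (mapcar #'cons params args) env))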

Let's return to the value of a function. It is an unreadable value in the host environment. But if the host environment is the lisp itself, which happens with a metacircular eval, then that function value could be readable. We shall explore this idea further below.

Like Clojure we're using fn and not lambda as the function signifier. And like Clojure this is a Lisp-1. Unlike Clojure we are using parentheses () and not brackets [] for argument lists. That's because we're relying on the host lisp for our reader, which will not understand [] without some work.

Function Naming eval

We don't have recursive functions yet, and without recursive functions we can't define a metacircular eval. letfn bindings can be implemented in a manner that supports recursive calls. So we can write:

(mc:eval '(mc:letfn ((rf (mc:fn (x)
                           (mc:if x (mc:cons x (rf mc:nil)) (mc:list)))))
            (rf (mc:quote y)))
         '())
=> (y)

This is a huge step in the power of the language and requires a number of constructs to be added. A recursive function must have a base case, which requires conditionals or pattern matching; conditionals are simpler. A function calling another function requires a stack, which we again borrow from the host lisp. This will likely never change: like Clojure, our lisp is hosted, and we will try to make use of the host language's stack to support seamless calls and callbacks with the host. This comes at a cost, though: we will never have good native debugging tools. While not strictly necessary, we have also added functions for manipulating lists, so that we have something to compute. Lists and not numbers, because lists are required for representing programs.

The Implementation of eval

mc:eval is a fairly simple function, and deliberately so. It means we have a hope of defining mc:eval in mc rather than Common Lisp. Doing so requires additional functionality that we have yet to implement, such as def.

(defun mc:eval (exp env)
  (if (atom exp)
      ;; Only symbols evaluate for now; other atoms are not yet
      ;; part of the language.
      (if (symbolp exp)
          (lookup exp env))
      (case (car exp)
        (mc:quote (cadr exp))
        (mc:if (if (mc:eval (cadr exp) env)
                   (mc:eval (caddr exp) env)
                   (mc:eval (cadddr exp) env)))
        (mc:fn (make-function (cadr exp) (cddr exp) env))
        (mc:let (eval-let (cadr exp) (cddr exp) env))
        (mc:letfn (eval-letfn (cadr exp) (cddr exp) env))
        (t (invoke (mc:eval (car exp) env)
                   (eval-list (cdr exp) env))))))

Even with most implementation details tucked away in helper functions, we can get a sense for how the language works. We know how symbols are treated, what the fundamental building blocks of the language are, and that this is a Lisp-1. Even when this lisp is implemented in a metacircular fashion, many of these helper functions will continue to be implemented in the host language. We can adjust the extent to which we rely on the host: for instance, whether we rely on it for memory management and function calling, or implement our own. Our lisp will be hosted, so we will in fact try to build as much as possible on top of the host environment.
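
For a flavor of those helpers, here are sketches of two of the smaller ones; these are assumptions about their shape rather than the actual implementations.

;; Sketches of two helpers referenced above; assumed, not actual.
(defun eval-list (exps env)
  ;; Evaluate each argument expression, left to right.
  (mapcar (lambda (e) (mc:eval e env)) exps))

(defun eval-progn (exps env)
  ;; Evaluate body forms in order, returning the last value.
  (let (result)
    (dolist (e exps result)
      (setf result (mc:eval e env)))))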

The most interesting parts of this function are the implementation of mc:fn and mc:letfn.

(defun make-function (params body env)
  (make-function-definition :parameters params :body body :environment env))

make-function creates a function object and captures the definition environment. We get lexical capture from this. The implementation should otherwise be self-explanatory.
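
The call to make-function-definition suggests a defstruct, and invoke, seen in mc:eval above, would then extend the captured environment with the argument bindings. Both sketches below are assumptions about the helpers, not the actual code.

;; Assumed shape of the function object and of invoke.
(defstruct function-definition
  parameters
  body
  environment)

(defun invoke (function args)
  ;; Bind parameters to arguments on top of the environment
  ;; captured at definition time, then evaluate the body.
  (let ((env (extend-env (function-definition-parameters function)
                         args
                         (function-definition-environment function))))
    (eval-progn (function-definition-body function) env)))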

(defun eval-letfn (bind-forms body-forms env)
  (let ((env-with-letfn-bindings (letfn-env-with-empty-bindings bind-forms env)))
    (update-env-with-letfn-bodies bind-forms env-with-letfn-bindings)
    (eval-progn body-forms env-with-letfn-bindings)))

eval-letfn seems to require mutability in the environment data structure. The functions defined via mc:letfn must be aware of the environment in which they are defined; otherwise recursive calls are not possible. So we first create empty bindings for the function symbols, then create the functions with that environment, and finally update the environment with those function definitions, as sketched below.
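
Concretely, the two helpers might look something like the following, assuming an association-list environment whose entries can be mutated in place. A sketch only; the actual code may differ.

;; Possible shapes for the letfn helpers, assuming a mutable
;; association-list environment.
(defun letfn-env-with-empty-bindings (bind-forms env)
  ;; Add a placeholder binding for each function name.
  (append (mapcar (lambda (form) (cons (car form) nil)) bind-forms)
          env))

(defun update-env-with-letfn-bodies (bind-forms env)
  ;; Evaluate each mc:fn form in the extended environment, then
  ;; destructively install the function in its placeholder
  ;; binding, closing the recursive knot.
  (dolist (form bind-forms)
    (setf (cdr (assoc (car form) env))
          (mc:eval (cadr form) env))))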

Next Steps

The question of greatest immediate interest to me is whether this lisp can be made metacircular, and what else we need from the language to get there. Implementing the full syntax of Clojure requires a custom reader, and implementing that in a metacircular fashion will be a larger project. I don't yet know how to start on that.

Monday, May 11, 2015

Clojure and Metacircularity

I feel lucky to have found myself a full-time lisp programmer again, in great company with the Lonocloud team. Emphasis on small-l lisp, because I'm writing Clojure, which, while a fine lisp, still feels like an incomplete thought. I don't want this to be another Common-Lisp-vs-Clojure post, though it is unavoidable that some thoughts along those lines will creep in.

tl;dr I have thus far enjoyed working with Clojure, but suspect I would be far happier had its implementation been metacircular.

Let's start with a couple of quotes from the top of the Clojure homepage.

Clojure is a dynamic programming language that targets the Java Virtual Machine (and the CLR, and JavaScript).

A bit further along...

Clojure is a dialect of Lisp, and shares with Lisp the code-as-data philosophy and a powerful macro system. Clojure is predominantly a functional programming language, and features a rich set of immutable, persistent data structures.

However, the JVM host version of Clojure receives far more love and attention than the other hosts. The JavaScript host doesn't have true support for macros: they must instead be defined in Clojure, and applied to ClojureScript code. ClojureScript cannot be written without Clojure on the JVM; it is not a stand-alone language. So the first claim is only partially true; most charitably, it is aspirational.

The second quote is really telling about the relationship between Clojure and other lisps. Clojure is a dialect of lisp insofar as it shares the homoiconicity property of lisps. It is at least as much a functional language, in its rejection of a traditional object system and embrace of immutability. Being functional, rather than being a weakness, is a great source of strength: it has allowed Clojure to deal with state and asynchronicity in very interesting ways.

Let's return to Clojure as a lisp though. In my first scheme class, based on the classic SICP, we didn't touch macros at all. We did functional programming, and learned about interpreter and compiler implementation, all with scheme acting as both the target and the host language. This is not possible in Clojure, as far as I can tell. At least not without a great deal of effort.

The difference between classical lisps and Clojure on this front is that the core of a classical lisp is quite precisely defined in the lisp itself. There is an eval function capable of taking any expression in the language and executing it, and that eval function is defined in the language itself. So the language is not just homoiconic to the extent that one can write macros; it is homoiconic all the way down. The language is metacircular.

I'm not the first one to make this assertion, I'm sure, but I'll try to explore a bit on my own the proximal and distal reasons for the lack of metacircularity and its impact.

The proximal reason is found in the definition of clojure.lang.Compiler.eval. Note what happens if the form is a non-trivial type where a function may be defined: you run the whole analyzer, which is also implemented in Java. So in order to produce a metacircular Clojure, you have to implement this entire procedure as a Clojure function, and you need a Clojure implementation capable of supporting the execution of that eval function. This is a non-trivial undertaking. My experience with other lisps is that you first construct a bootstrapping implementation: one that provides an eval over a small subset of the language. Then, in that subset language, you implement a small compiler that is able to express this core language as machine-executable code. Then you repeat, and in doing so you have a core language that is implemented entirely in terms of itself. This idea is not limited to lisps, mind you; this is also basically how gcc is built. But metacircularity in conjunction with homoiconicity makes the language easy to describe and manipulate within the language itself. Macros at that point become trivial.

Then in that distilled core language you proceed to define the rest of the functionality that makes the language useful and interesting.

Clojure has no such core language. It has no bootstrap procedure. It cannot run without the host Java environment.

The distal reason for the lack of metacircularity is that there seems to be little interest in the idea across much of the community. Most Clojure development focuses on the JVM, and Clojure runs well enough there, so there isn't a strong need. Clojure has successfully appealed to a large contingent of programmers with a functional rather than lisp background, so there could also be a certain lack of awareness. Clojure has also been an eminently practical and conservative language, and I get the feeling that parts of the community view metacircularity as a nice-to-have rather than an essential, which has left some lispers feeling that Clojure is an inferior lisp. I don't have a long enough history with Clojure to speculate further on the distal causes.

The impact though is felt by every user of the language, for some in small ways, and for some in profound ways. Clojure as a language doesn't really have a specification. Specifications for lisps have always been relatively straightforward: the eval function is the specification. As a result, it is not straightforward to port Clojure to a new host, such as JavaScript. I suspect moving from one Clojure host to another would be a fairly jarring affair, where the language behaves substantially differently as you change hosts, because "being hosted" is not a well defined idea.

A smaller impact is in the stack traces you get when you have an error. You get a Java stack trace, which you can sometimes relate to the clojure program, and sometimes not. Debugging programs is more challenging. In particular I suspect an interactive debugger in the spirit of Common Lisp will be impossible to have in Clojure.

It's encouraging that there are a number of projects underway (or at least experimented with) to try and address these issues.

But these experiments do not amount to a sustained effort in this direction. I can only hope Clojure goes down this path in the near future.

Edit: Updated description of ClojureScript macro support.

Sunday, March 22, 2015

The Rise and Fall of XML

First there was XML. I remember XML from about 2000.

Then there was SOAP. Then WSDL. XML Schema, RDF, RSS, and other formats had also appeared before I checked out, which is about when I stopped paying attention. But on a recent rediscovery trip I was introduced to BPEL, BPMN, and other such acronyms. And now I know how XML and its offspring died, and how they live on as zombies, sort of like COBOL.

Well, I don't in fact know whether they have actually died; I'm just hoping. XML is a reasonable standard: it has a spec that is arguably complete, and it solves a well defined problem. SOAP is on less solid ground. Its spec is again relatively complete, but usage in practice is a subset of what the spec allows. Witness RPC style versus document literal style calling. The latter became the preferred approach for using SOAP, so far as I can tell because Microsoft's tooling supported it better. And the overriding consideration appears to be that one style meshes better with auto-generated interfaces in various programming languages than the other.

Generating SOAP interfaces relies on consuming and using WSDL documents, which allows for an ever wider set of interpretations. WSDL allows for the separation of the logical description of an interface from its implementation. From an engineering perspective, I'd argue relatively few individuals would care about the logical description of a service. Most would only look at it to the extent that it supports calling a particular web service. And so we have a proliferation of possible interpretations of abstract service descriptions.

When we get to BPEL, we're building abstractions on top of abstractions. We're building a spec that is a union of existing approaches. And we have incomplete implementations of the spec. The rigor with which the spec is defined is greatly reduced. One can only put together a reasonable implementation of the spec based on existing software implementations, which feels too much like reverse engineering said implementations rather than working from a well thought out description.

The BPEL spec was put out in 2007, and I was fortunate to have not encountered it until now. In 2009 I found REST and JSON, as I transitioned to a startup. REST based services are underspecified in how their messages work, or how they are discovered, or how they might relate to each other. This is a good thing. It prevents train wrecks like BPEL from emerging.

Why Standardize?

There are probably many ways to arrive at a standard. The two I have encountered most frequently are:

  • Codifying existing practice vs specifying first
  • Capturing core concepts (concepts intersection) vs capturing all technology features (concepts union)

With most XML based technologies, the tendency appears to have been towards specification first, concepts union. With just a little thought it becomes apparent how such a specification allows for a wide variety of usage models and approaches.

XML seems to have been on the side of codifying first, capturing core concepts. XML grew out of HTML and SGML. It grew out of the desire to separate content from presentation. We can argue about how successful that's been, but that notion was the starting point. It brought greater structure and sanity to the tag soup that was the WWW at the time. I consider XML to have been incredibly successful.

SOAP was a specification of RPC in XML. I would consider it to be specification first, concept intersection. But it was based on a long history of RPC implementations. It worked pretty well.

WSDL is where we start going off the rails. Key elements of WSDL were brand new ideas: types specified in XSD being used to construct messages passed over SOAP protocols. Instead of stopping there, WSDL became incredibly aspirational, and attempted to separate the abstract description of services, using these untested languages, from the relatively clear RPC mechanism encoded in SOAP, all without existing implementations of the end-to-end system to see how it would work out in practice. And, as we can see, practice required simplifying away parts of the standard so that there was a sane core that could be implemented and used.

The specification-first tendency is at its maximum in BPEL. My guess right now is there isn't a single complete implementation of the BPEL spec. I'd argue, this is how you kill an emerging ecosystem.

By now it should be obvious that I dislike specification-first. It reminds me of other bad ideas, such as the waterfall process. We must recognize, however, that it is rooted in a good thought: that if we build the specification first, we will avoid building costly systems that will later have to go through an even more costly process of re-adapting to a future standard. But instead you're now left with a costly system that follows a standard that few want to use or understand.

The union vs intersection argument is more challenging. I have a preference for intersection because it generally requires resolving conflict. In conflict we have to surface, discuss and settle opposing points of view, to arrive at a conclusion that must cover just enough ground to settle the conflict. It requires taking a stand. Whereas union is an effort to get along on paper while continuing to operate how you always had, in a way that sets you apart from others with whom you've entered into union. The key advantage of an intersection is that you should get something smaller that a new entrant into the ecosystem can pick up and get running relatively quickly, while with a union you could have a new entrant come in and build something that is different yet again from everything everyone else has built. With one of these approaches you at least have a prayer of interoperability.

So What About REST?

Let's now step back and consider REST. It is an approach to communication distinct from RPC. It comes from an understanding of, and appreciation for, the HTTP protocol, adapted to building web services. Where RPC tries to mimic function calls over the network, REST tries to manipulate object state: with REST one adapts the HTTP verbs to describing object state and the operations on it. I would therefore argue that it is an implementation-first approach.

REST is, more importantly, a work in progress. There are many libraries that support REST style communication, and there are many ways in which software has used these libraries. There's an emerging consensus on how to define and present a REST interface. Being a work in progress means all implementations of these APIs will be subject to obsolescence, but at the same time we're freed of the requirement to conform to an arbitrary specification set ahead of time, without exploring and maximizing the potential of the protocol.

(Thank goodness for the timely death of WSDL 2.0, which for all the improvement it brought, would still have been a step backwards. Again, last spotted in 2007.)

In contrast to SOAP, WSDL and so on, while REST has a lot to say about communication, it has nothing to say about communication content. By convention JSON is the message format of choice, but XML messages aren't precluded. We thus have a good separation of concerns, where the communication protocol states where the content should go, but has nothing further to say about it.

The apparent disadvantage of such an approach is that there is nothing like BPEL in the REST ecosystem. Once again, I think this is a strength. If there is a great need for something like BPEL, the argument goes, there will over time be multiple implementations that satisfy the needs, and that's a good time to think about standards and approaches to support the need. Until then, time spent considering the problem will be time poorly spent.

Friday, January 30, 2015

Thoughts on Interviewing

I recently moved from San Francisco to Seattle, for personal reasons, and for a misguided attempt at getting involved in a research project. I am thankfully no longer involved, but that also means I've had to look for a job in a new city with relatively few contacts at the worst time of the year: mid-December. I could say a lot about an employer that would let go of an employee in such circumstances, but that's a different topic altogether.

What I've come to appreciate during this journey to find a new job is the difference a good interview process can make in finding and hiring a good candidate. This does not necessarily have anything to do with Seattle, because the companies I targeted included some relatively large companies based outside of Seattle.

For the record, I consider myself a good generalist engineer capable of working at a senior level in a technical leadership role. Hiring into a senior role is tricky for an employer: at that level a bad hire can have disproportionate negative effects.

Notes to the Employer

  • Know that you can't possibly screen a candidate sufficiently during the interview. You will necessarily have to balance speed and coverage.
  • Smaller employers have the option of being more agile in everything they do. They can take a chance with a bad hire, and let that hire go if things don't work out. They should use this to their advantage.
  • Be as clear as possible on what you want from the candidate. If you need a particular specialized skill, make that clear to candidates before the interview process commences. Job postings are notoriously poor at making this clear, as their quality and specificity varies greatly.
  • Be clear on how you will interview. Explicitly lay out the process up front, and try to follow it as best you can. Be clear on what you want to achieve from the interviews.
  • Focus the interview at the right level of detail. For a senior position, focus on the types of architectural issues that you have encountered. Use whiteboard coding sparingly; it is the least useful indicator of being able to think at the architectural level.
  • Especially for senior engineers, be aware that you're unlikely to find a candidate with the exact skills you require. Most companies appear to be pretty good about this. But there are still several that fall into this trap.
  • Keep the interview process as short as possible, and come to a decision as quickly as possible. A long interview process takes up employee time, and could very well leave the candidate with a bad taste.
  • Wrap up each interview session with a meeting with the hiring manager, or at least the involved recruiter. The candidate has spent a whole day being interviewed, and giving reasonable closure and a sense for the next steps is important.

On Take Home Problems

Take home problems, code portfolios, etc are great ways of seeing how the candidate codes. But keep a few things in mind if you use a take home problem:

  • The evaluation of the solution to a problem should include a code review with the developer. Otherwise you're applying an arbitrary evaluation function on the outcome that requires a successful candidate to read your mind.
  • Keep the problem to a reasonable length. Try to limit it to four hours of work; candidates have their day job or other interviews to work on.
  • Focus the problem on key issues of interest. Consider providing them with a partial implementation. After all, being able to read others' code is just as important as writing something from scratch.
  • Be explicit about how the program will be evaluated. Saying "production quality" is rarely sufficient; it can mean different things in different development circumstances.
  • Do the review sooner rather than later. The ability to develop software is an essential skill for an engineer, and a prompt review also tells candidates you care about the effort they have already put into the interview process.
  • If you do a take home problem, don't spend time on whiteboard coding. You will not learn anything useful from this.
  • Don't use algorithmically complex problems for evaluating coding ability. Evaluate algorithmic thinking separately. The two skills are quite distinct. An exception is if knowing certain classes of algorithms is a pre-requisite for the position.

Notes for the Candidate

Preparing for an interview is tricky. I can't claim to be particularly good at this, but I can claim to have reflected sufficiently on my performance to point out obvious don'ts.

  • Be prepared to reflect on your performance on all your past projects. Do this for yourself, as honestly as possible, with particular attention to if and how you might have done better. I find employers often want to take me back to my earliest projects.
  • Know what aspect of the job excites you the most. Explicitly discuss these particulars with the employer, and evaluate if you have correctly understood the job.
  • If you are not getting what you want, either in terms of compensation or responsibility, walk away. It is rarely the case that you won't find a more suitable situation in short order.
  • Interview with as many employers as you can in parallel, especially if this is your sole task. This is a great chance to dream about what you might possibly want to do; there might just be a situation out there which fulfills that wish.
  • Keep interviewing until you have an acceptable offer in hand. Sometimes even the most promising situation falls apart at the final stages. It is essential to have other situations available that you can continue working on.
  • At the same time, selling yourself every day is emotionally exhausting. Balance time spent looking for a job with other activities you find relaxing.
  • Managing your emotions is essential. There will be highs and lows in the process. Rejections are not negative assessments of your abilities and career choices.
  • Finally, keep a continual focus on your career growth. Keep an eye on the rest of the world and keep asking if there's something you'd rather be doing out there. Not to switch jobs at a whim, but to see if there's something fresh you can bring into your current job and your life and career.

Saturday, January 3, 2015

Graph Databases and Sharding

Lately I've become interested in graph databases. And over the past week I've been looking at graph database sharding.

tl;dr It is a difficult problem, one that appears to be more complex than RDBMS sharding.

Sharding graph databases, by contrast, is still an active research topic, explored in a handful of papers and Stack Overflow discussions on Neo4j sharding.

In both relational and graph databases, the sharding strategy chosen must reflect the use case being addressed. For relational databases the most general solution involves implementing a shared-nothing strategy: we split the table that requires sharding into subsets that can be manipulated independently, such that all join operations the application requires can execute within a single shard. Sharding, in other words, is necessarily tied to the requirements of the application.
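
As a toy illustration, in the spirit of the lisp examples elsewhere on this blog: if the application always joins on a customer id, routing rows by that key keeps every join shard-local. All names here are hypothetical.

;; Toy illustration of application-driven sharding: route every
;; row by customer id so all joins for one customer stay on a
;; single shard. All names here are hypothetical.
(defparameter *shard-count* 4)

(defun shard-for (customer-id)
  (mod (sxhash customer-id) *shard-count*))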

Graph databases should be approached in a similar manner to relational databases: avoid sharding if you can. And if sharding is unavoidable, the sharding strategy should be based on the requirements of the application. The most extreme situation arises with scale-free graphs, where a few vertices are much more highly connected than the rest of the vertices in the graph. Even here, unless we're doing graph analytics (map-reduce style computations over the whole graph), we should be able to formulate a sharding strategy that works for the given application.

But one of the big draws of graph databases is the promise of efficiently traversing arbitrary paths through the graph. For instance one might be interested in the shortest path between two arbitrary vertices, where the vertices reside in different shards. A different type of problem might occur with graphs so large that all the edges for some of the vertices cannot be maintained on a single machine. This is admittedly far-fetched, given the compute power available today. In these instances we are necessarily executing computations distributed across shards, where network communication might become the limiting factor. There is substantial algorithmic complexity in deciding how to shard such databases. But then such computations are equivalent to executing arbitrary joins on a sharded relational database, which will necessarily spill over the shard boundaries.

Wednesday, August 28, 2013

On Exploratory Design and Piecemeal Development

I have a soft spot for Lisp-like languages; I suspect I always will. I've now been programming in Scala for several months, and I sometimes find the language tedious. I've lately been thinking about the cathedral and the bazaar, worse is better, and other such essays in relation to how I feel about Scala vs Lisp.

Let's take a look at what Scala offers:

  1. An interesting combination of object oriented and functional programming styles.
  2. A powerful type calculus.
  3. Some interactive development via a REPL, though the REPL is fairly underpowered and almost an afterthought.

Common Lisp has, by contrast:

  1. A dynamic language with optional typing supported unevenly by the various implementations of the language.
  2. An object system that supports multiple inheritance, metaclasses, and multi-methods, with a rich vocabulary to describe how methods should interact.
  3. A language built with the REPL as a central feature. A Common Lisp development environment almost always has a REPL running in the background that can always be accessed.

The most significant downsides I've experienced for Scala are:

  1. Excruciatingly long compile times.
  2. The type system, while expressive, is also rigid.

I've run head-on into these problems with my current task: designing a DSL on top of an existing codebase. The codebase isn't very large, perhaps a couple of hundred substantial classes. Implementing a DSL, for anyone who's attempted the exercise, isn't a straightforward proposition. A DSL is about allowing users to express their programming problem in a form that's most suitable for their problem domain, while hiding away much of the complexity of the software's underlying implementation. Like any API, implementing a DSL often requires adjusting the rest of the software a little bit to support the DSL. And it requires trying out a few different ideas to see which one sticks. It is a process of discovering how one might best help users express the problems in their domain.

Scala requires that the whole program compile before any part of it is executable. It requires strong consistency among the elements of the program, which means you can't just fire up a REPL, load the few classes most relevant to the problem at hand, and play around with your implementation. If you're modifying class definitions in any fashion, you have to make sure the rest of the program is consistent with your changes before you can run your experiment. The long compile times, and the inability to focus on just a few classes while experimenting, make for painfully slow experimentation in building up the DSL.

I feel strongly that Common Lisp does this better, as do many other Lisp-oriented languages, including Clojure. Let's start with the proposition that your entire program isn't going to be under development all the time. Development is a highly non-uniform activity. Once a part of the code reaches a certain level of maturity and functionality it is often set aside, while focus shifts to other problems. While under active development, code needn't be wholly consistent. This gives us some space to experiment and rewrite the program many times until the right expression is found. On the other hand, touching relatively stable code should require much more care and attention on the part of the developer. The challenge is in structuring the implementation so that we have local stability in parts of the codebase.

Let's delve a little deeper into the technical side of Common Lisp that makes this possible. Say we have a program with a couple of hundred classes, and we wish to evaluate a DSL implementation that changes the type of a commonly used class by removing one of its supertypes. We would start by loading the entire program into the development environment. Then we would interactively change the type of the class. To be clear, this means changing the type definition of the class, and thereby affecting all existing instances of that class; Common Lisp provides a type change protocol to support such an operation on live objects. Parts of the program would no longer function correctly, but that's fine so long as the whole program isn't running. Since Common Lisp is dynamic, it typically tolerates such inconsistency. Now we can implement a partial program to test whether the implementation we have in mind can work. Rather than going through full compile cycles for the entire program, we can evaluate or compile just the bits we're working with, quickly verify whether our approach might work, and either move forward or revert. We're changing the behavior of our program at runtime in order to evaluate alternate implementations.
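
To make the type change protocol concrete, here is a small sketch of class redefinition at the REPL; the widget class and its slots are hypothetical, and only illustrate the mechanism.

;; A sketch of CLOS class redefinition; the class and slot
;; names are hypothetical.
(defclass widget ()
  ((name :initarg :name :accessor widget-name)))

(defvar *w* (make-instance 'widget :name "gear"))

;; Re-evaluating defclass with a different slot list updates
;; live instances lazily, at their next access.
(defclass widget ()
  ((label :initarg :label :accessor widget-label)))

;; The type change protocol lets us migrate discarded slot
;; values into the new layout.
(defmethod update-instance-for-redefined-class :after
    ((instance widget) added-slots discarded-slots property-list
     &rest initargs)
  (declare (ignore added-slots initargs))
  (when (member 'name discarded-slots)
    (setf (widget-label instance) (getf property-list 'name))))

;; (widget-label *w*) => "gear"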

Programming tool support made programming in Java palatable. Maybe one day we'll see better tool support for Scala, with automated support for refactoring type hierarchies and type parameters. For now we must do this manually.

Sunday, July 7, 2013

To clabango or not to clabango

Pedestal's lack of documentation and confusing client/server development model caused me to switch to luminus, which comes bundled with clabango. It's been a while since I've done template based page generation, so maybe this is par for the course. So far clabango has left me a bit underwhelmed. It might prove to be sufficient for my needs, however, so I'm trying to find ways to work with the package.

The first thing that struck me is its utter un-lisp-ness. That seems to be a positive for some. More troubling to me is how clabango seems to encourage breaking data abstractions. I have a protocol identifier that I would like to render in a page. I have two options when I do so: pull out the value from the record implementing the protocol ahead of time, or access the record's raw fields through the map API. There doesn't appear to be a reasonable way to render through the protocol methods on the page.

;; Define an identifier protocol and a widget-id record
user=> (defprotocol identifier
         (value [this]))
user=> (defrecord widget-id [id-value]
         identifier
         (value [id]
           (.id-value id)))
user=> (require 'clabango.parser)
user=> (alias 'parser 'clabango.parser)
;; This is obviously wrong
user=> (parser/render "The ID is {{ id }}." {:id (->widget-id "abc")})
"The ID is user.widget-id@caada592."
;; Can't use the protocol abstraction
user=> (parser/render "The ID is {{ id.value }}." {:id (->widget-id "abc")})
"The ID is ."
;; Works if we don't use the abstraction (raw field access via the
;; map API), which defeats the purpose of having one at all
user=> (parser/render "The ID is {{ id.id-value }}." {:id (->widget-id "abc")})
"The ID is abc."
;; Works if we pass in the raw value,
;; which doesn't propagate the abstraction through the whole program.
;; Once again it defeats the purpose of the abstraction.
user=> (parser/render "The ID is {{ id-value }}." {:id-value (value (->widget-id "abc"))})
"The ID is abc."

My initial reaction was that clabango should just allow arbitrary Clojure code in its templates. This is a reasonable strategy if the templates are all part of the application. I can't think of why you'd want to have the templates be user defined anyway, and the code generation abilities of Lisp-like languages would make turning templates into compiled code fairly simple. But clabango is what it is, and such a radical departure would yield something other than clabango.

So instead I implemented a simple filter that executes a one argument function:

(require 'clabango.filters)
(alias 'filters 'clabango.filters)

(require 'clojure.edn)
(alias 'edn 'clojure.edn)

(defn dequote [quoted-string]
  (if (and (> (count quoted-string) 2)
           (= (first quoted-string) \")
           (= (last quoted-string) \"))
    (.substring quoted-string 1 (- (count quoted-string) 1))
    (throw (IllegalArgumentException. ""))))

(defn find-function [ns-function-str]
  (when (string? ns-function-str)
    (try
      @(resolve (edn/read-string (dequote ns-function-str)))
      (catch Exception e
        nil))))

(filters/deftemplatefilter "apply" [node body arg]
  (when body
    (when-let [function (find-function arg)]
      (function body))))

Which allows me to render an ID thus:

user> (parser/render "The ID is {{ id|apply:\"user/value\" }}."
                     {:id (->widget-id "abc")})
"The ID is abc."

I am deeply disappointed with clabango. My initial hesitation was that moving from clabango to something else would effectively mean tracking changes in luminus on an ongoing basis, which is too high an overhead for a one-person hobby project. But luminus doesn't seem to have any libraries of its own, just some templates that make starting a project straightforward.

Whatever I choose to do, I will write up in a follow-on blog post.