Breaking Apart Transliterature

I think that the Xanadu failure (and it is a failure) has something to teach us — but I’m not sure what. As a first step, I want to analyze one of the more obvious problems: TransLiterature has a “standard”. I put the word in quotes because the document reads like a combination of design document, polemic, and history, with very little actual standard thrown in.

After reading the whole thing, I tried to reduce the actual standard to its component parts:

The Standard

A content location is a URL to which ?xuversion=1.0&locspec:chararrange:<number>/<number> (where the numbers are in decimal notation) can be appended. We call the result of such an appending a content URL, and we require that the following invariant hold:

Content(n,m) concatenated with Content(n+m,l) should be the same as Content(n,m+l)

(As a subset of this invariant, note that the content pointed to by a content URL should never change.)
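To make the invariant concrete, here’s a sketch, with a local string standing in for whatever a server would actually return. The helper `content` is my name, not the standard’s, and I’m assuming 0-based offsets, which the “standard” never states:

```javascript
// A local string stands in for the remote document behind some content URL.
const doc = "Hello, transliterary world!";

// content(n, m): the m characters starting at offset n.
// Hypothetical helper; assumes 0-based offsets.
const content = (n, m) => doc.slice(n, n + m);

// The invariant: Content(n,m) + Content(n+m,l) === Content(n,m+l)
const n = 7, m = 5, l = 9;
console.log(content(n, m) + content(n + m, l) === content(n, m + l)); // true
```

Note that the invariant only holds if the underlying document never changes between requests, which is why immutability is baked into it.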

An EDL is a format consisting of a list of strings, one per line. Each line is one of:

  • Content URLs
  • The cent sign (¢), followed by a type name [where is the list of types defined?], followed either by the word to and a list of content URLs, or by the word from, followed by a list of content URLs, followed by the word to on a line of its own, followed by a list of content URLs (that’s called a clink)

As you can see, the EDL format is ambiguous: it’s impossible to know where the last list of content URLs in a clink ends and the regular content URLs resume. I assume that all clinks must follow the regular content URLs, but nowhere is this spelled out. It’s also not obvious whether it’s really one item per line or just any-whitespace-will-do.
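Under that reading (content URLs first, clinks after, strictly one item per line), a parser fits in a few dozen lines. Everything here — the function name, the object shapes, the assumption that the to/from keyword sits on the same line as the type name — is my guess, not anything the “standard” pins down:

```javascript
// Sketch of an EDL parser under the assumptions above. All names are
// hypothetical; the format itself barely specifies anything.
function parseEDL(text) {
  const lines = text.split(/\r?\n/).map(s => s.trim()).filter(s => s.length);
  const contentURLs = [];
  const clinks = [];
  let i = 0;

  // Leading plain content URLs, until the first cent-sign line.
  while (i < lines.length && !lines[i].startsWith("¢")) {
    contentURLs.push(lines[i++]);
  }

  // Remaining lines are clinks: "¢ <type> to" or "¢ <type> from".
  while (i < lines.length) {
    const [, type, keyword] = lines[i++].split(/\s+/);
    // Read URLs until the next clink header or a bare "to" separator.
    const readList = () => {
      const urls = [];
      while (i < lines.length && !lines[i].startsWith("¢") && lines[i] !== "to") {
        urls.push(lines[i++]);
      }
      return urls;
    };
    if (keyword === "to") {
      clinks.push({ type, to: readList() });
    } else { // "from"
      const from = readList();
      if (lines[i] === "to") i++; // skip the "to" line on its own
      clinks.push({ type, from, to: readList() });
    }
  }
  return { contentURLs, clinks };
}
```

Even this toy parser has to hard-code one resolution of the ambiguity; a different reader could write an equally “conforming” parser that disagrees with it.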

This standard could have been implemented by

  • A Firefox extension on the client side, to read and display EDL files
  • An Apache mod for supporting the Content-URL format

This standard could even have been implemented by

  • A JavaScript function that reads EDLs and renders them…
  • …using the “Partial content” extensions to HTTP

(Then it could be distributed as “put this little snippet of JS on a page, and call it with /page.html?EDL=<URL>”, and you could have a complete TransLiterature prototype running in a browser, in a way that everyone with a website could start using TL today. Implementing it wouldn’t even be very hard, though it doesn’t sound like loads of fun to implement byte-ranges in JavaScript.)
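The unfun part is roughly this. A sketch of resolving a single content URL with an HTTP range request — `parseCharRange` and `fetchSpan` are hypothetical names; only the locspec:chararrange query syntax comes from the “standard”:

```javascript
// Pull start/length out of ...&locspec:chararrange:<start>/<length>
function parseCharRange(contentURL) {
  const m = contentURL.match(/locspec:chararrange:(\d+)\/(\d+)/);
  if (!m) throw new Error("not a content URL");
  return { start: Number(m[1]), length: Number(m[2]) };
}

async function fetchSpan(contentURL) {
  const { start, length } = parseCharRange(contentURL);
  // Caveat: the Range header counts bytes, while "chararrange"
  // presumably counts characters; the two only coincide for
  // single-byte encodings. This is part of why it's no fun.
  const resp = await fetch(contentURL.split("?")[0], {
    headers: { Range: `bytes=${start}-${start + length - 1}` },
  });
  const body = await resp.text();
  // 206: the server honored the range. 200: it ignored the header and
  // sent the whole document, so fall back to slicing locally.
  return resp.status === 206 ? body : body.slice(start, start + length);
}
```

The fallback matters: a server that doesn’t support range requests simply returns 200 with the full body, so the client degrades to fetching everything and slicing, which is correct but defeats the point.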

None of this treats the issue of generating EDLs, of course, which is exactly the whole problem with Xanadu: the reading part is trivial, but HTML had (and still has) the advantage of being trivial to write by hand as well. I don’t actually think implementing this bit in JS would make Xanadu take off — but then it would be obvious it’s not taking off for good reasons…


One Response to Breaking Apart Transliterature

  1. Ben N says:

    Yeah, that’s really weird. Looking at the manifesto there, they seem to be envisioning a richer world of interrelated documents, including media and layout as well as text. Why on earth are they fixing a non-extensible standard around the idea of locations within documents as byte offsets? That’s an incredibly brittle way to address into a document…

    As an aside, I’ve read some of Tim Berners-Lee’s comments about how the web was “supposed to work”, which involve a lot of talk about URLs not ever changing, and I don’t buy it: if the URL of a document shouldn’t ever change, then it shouldn’t contain the name of the document’s maintainer/owner, the document’s title, or anything like that, because all of those might change. Unless, that is, the URL points to a specific version of the document — in which case, how am I going to find the new one, and what URL always points to the current version?

    A system which allows for robust referencing of documents across versions, across changes to the context in which those documents make sense, and even across changes in the policies or identities of the organizations which maintain access to them, is going to have to be very different from simply what’s given in the URL spec and HTTP. The key question is, what is a URL supposed to mean? When circumstances change, what object should a given old URL now point to? “The same” object? The object in the same “role” relative to some other object?

    Any system which imagines URLs as representing a promise that they will remain meaningful in the future needs to allow such promises to be of more than one semantic type, and establish the relationships between these types. Building a standard narrowly around the promise type “will-always-refer-to-exactly-the-same-content” (and further assuming that it will always be reasonable to address a partial document as a substring) doesn’t begin to deal with the issues Transliterature raises in their own manifesto.
