While text versioning was definitely a part of the original hypertext concept [21, 36, 44], it is rarely considered in this context today. Still, we know that revision control underlies the most exciting social co-authoring projects of today's Internet, namely Wikipedia and the Linux kernel. With the intention of adapting advanced revision control technologies and practices to the conditions of the Web, the paper reconsiders some obsolete assumptions and develops a new versioned text format perfectly processable with standard regular expressions (PCRE). The resulting deep hypertext model allows distributed and real-time revision control on the Web; provides the user with instant access to past/concurrent versions, authorship and changes; and enables deep links that reference changing fragments within a changing text. It implements the vision of co-evolution and mutation exchange among multiple competing versions of the same text.
An article on deep hypertext (May 25, 2010)
Quick summary: I did similar things in the past, and I find Google Wave prohibitively and unnecessarily complex. It needs to get fit to live.
This post gives a summary view of an academic project of a communication environment codenamed “Bouillon”. I actively worked on Bouillon from 2005 till 2008 and it was the lion's share of my PhD thesis. During the course of the project, four milestone prototypes were released and played with. I explored the design space, led by a vision of a better communication protocol. The top-level objective was to automate information propagation, as hyperlinks automated associations and search engines automated search. Section 1 addresses the evolution of the project and also provides links to publications and other resources.
Besides that ambitious goal, the particular technical objective was to create a real-time communication environment, a kind of generalization of all the numerous communication channels we use today: IRC, IM, e-mail, forums, wikis. As the wiki is the most flexible of them all, Bouillon was categorized as “a real-time wiki”. As you will see, Bouillon has quite a lot of parallels with the Wave project. Even technically, it employed the same combination of XMPP and HTTP for most of the project's life. A more detailed comparison is made in Section 2.
Being a person who spent some time exploring that particular design space, I feel that my opinions on the Wave project have some value. Thus, I put some praise, criticism, experiences and insights into Section 3.
An impatient reader may jump to Conclusions.
Have fun reading!
1. Project timeline and evolution
When I recently realized that even I myself could not reliably remember the details of the project's evolution, I checked the papers and put together the detailed timeline here.
September 04 - the basic idea is sketched at the Sleepzone hostel, Galway, Ireland. The source of inspiration was a map of Ireland: the Irish road network was so different from the typical Russian hub-and-spokes scheme. I started thinking about the diffusion of information and found that the process is still based on “manual cognitive labour”. The Web serves publication and association of knowledge/ideas; propagation is not served well.
Winter 05/06 - the first just-to-know-how-it-feels prototype is launched: a standalone Qt application sending messages over regular XMPP. Basically, it was “Usenet over the social network of IM contacts”. A short paper was accepted for the WWW'06 MTW workshop (later withdrawn for unrelated reasons). The key feature of the application was the use of sliders to evaluate and filter content. In practice, real users were found too lazy to use them. The idea of sliders was later borrowed by Jaanix. The application had no server part; the information was stored at end nodes and propagated hop-by-hop from friend to friend (aka a “gossip network”).
Summer 06 - the first web-based prototype is launched. The concept drifted towards a real-time wiki over the social network of IM contacts. At this point, Bouillon is very much like today's Wave: user-to-server communication is done with HTTP, while server-to-server uses XMPP. Wiki pages (“waves”) are divided into pieces/paragraphs (“wavelets”). Pieces are modified and voted on by users, but real-time version control within pieces is left for future work.
December 06 - Had a conversation with Ward Cunningham, who had a similar uber-wiki idea before (“Folk Memory”).
February 07 - made some efforts to promote Bouillon at Yandex (the ‘Russian Google’).
Summer 07 - polished Bouillon, made an article and a demo for CSR’07. At this point, Bouillon gets libevent-based Comet implementation. Finally, defended my thesis (Russian).
Winter 07 - I concentrated on the problem of in-browser real-time collaborative editing. I made the mistake of dropping the previous page-as-a-tree-of-pieces structure in favour of a “simpler” just-a-page-of-HTML approach. In fact, mixing structural and decorative markup created lots of complexity/problems. Diffing/patching/merging plain text in real time is a complex problem; doing the same to HTML/XML is much more complicated. I had to try three different approaches, dumping the experience into an article, “Causal trees: towards real-time read-write hypertext”. B. Cohen was kind enough to point out that I reinvented weaves in the process. The weave is a natural but non-obvious version control data structure, so it gets reinvented from time to time. The causal tree approach gave perfect stability and performance to real-time distributed version control. CTs differ from the classic diff/patch/merge or most OT flavors in that they drop positional symbol addressing and use unique symbol ids instead. As a consequence, CTs do not need global entities for mutation ordering. Still, HTML tag hierarchy/nesting was a pain in the ass. I attempted to circumvent those issues by employing wiki markup instead of HTML.
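To make the id-based addressing concrete, here is a minimal causal-tree sketch in Python. All names are hypothetical and this is only an illustration of the principle, not the actual Bouillon code:

```python
# A minimal causal-tree sketch (hypothetical names; an illustration of the
# principle, not the actual Bouillon code). Every typed character becomes
# an "atom" with a unique id and a reference to its cause -- the atom it
# was typed after. The text is the depth-first traversal of the tree;
# concurrent siblings are ordered by id, so any two replicas holding the
# same set of atoms render the same string, with no global ordering of
# mutations required.

ROOT_ID = "0"  # virtual root atom; the first character is caused by it

class CausalText:
    def __init__(self):
        self.atoms = {}  # atom id -> (cause id, character)

    def insert(self, atom_id, cause_id, char):
        # Idempotent and order-independent: re-delivered or out-of-order
        # atoms are simply stored; render() sorts everything out.
        self.atoms[atom_id] = (cause_id, char)

    def render(self):
        children = {}
        for aid, (cause, ch) in self.atoms.items():
            children.setdefault(cause, []).append((aid, ch))
        out = []
        def walk(aid):
            # Higher (newer) sibling ids come first, so a later insertion
            # at the same cause lands closer to its cause.
            for cid, ch in sorted(children.get(aid, []), reverse=True):
                out.append(ch)
                walk(cid)
        walk(ROOT_ID)
        return "".join(out)
```

Two replicas receiving the same atoms in different orders converge to the same text, which is exactly the property position-based OT has to work hard to achieve.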
Summer 08 - Made a poster on causal trees and a little demo on WikiWYG at WikiSym’08.
One unfortunate problem of Bouillon is that its two parts (social message propagation and real-time collaborative editing) have not been properly integrated and polished up to this moment.
2. Bouillon/Wave comparison
Both Bouillon and Wave used more or less the same combination of Comet HTTP, XMPP and real-time version control. Comet HTTP is obviously necessary to do real-time editing in the browser. XMPP allows capturing a ready-made social network. Real-time version control is just essential.
Very similar ideas popped up here and there in both projects. E.g., Googlers are proud of their “transformation function” model; in Bouillon, quite a similar idea was named “polymerase”. Besides the fact that Bouillon dropped XMPP in later versions, the single most significant difference is the use of the causal trees version control model. I also started with OT flavors and considered WOOT for some time, but those did not satisfy the general simplicity and instant branching/merging requirements. Thus, I had to move to id-based version control instead of the classical position/context-based logic. Also, branching/merging of XML is so much of a pain in the ass that I finally decided to do best-effort plain-text merging/branching and to avoid XML at all costs.
Another, less technical and more ideological difference is the ultimate goal of the project. The Wave focused on “reinventing e-mail”, while Bouillon's top-level goal was to “automate information propagation”. Although, one does not contradict the other. Still, Bouillon was an attempt at a gossip-based communication environment, while the Wave in its present form is pretty much centralized/server-based.
And probably the most significant difference is that Bouillon was never backed by 50 developers, or even by 5.
3. On Google Wave
First, some praises. Indeed, the idea of creating a new generalized communication environment is brilliant.
I’m a believer in “automagic crystallization” where some discussion may start as a chat, progress as teamwork and result in a structured polished document. I see boundaries between modern communication environments (IM, IRC, e-mail, forums, wikis) as something very artificial.
Technically, one key choice in constructing communication environments is the granularity of document “units”. If we work with the entire shared page, we may get wiki-anarchy. If we work with privately-owned “posts”, like in forums and mailing lists, we get terrible fragmentation. Seemingly, Wave takes the Middle Way of private or shared “blips”, and that is a good fundamental choice.
Still, I have to make three sceptical points on the Google Wave architecture.
Wave is complex
While the Wave is potentially able to replace a whole bunch of communication environments, the bad part is that the technical complexity of the Wave approaches the combined complexity of those environments. A Wave installation combines an XMPP server, an HTTP server, complex version-control code, lots of AJAX and other logic.
The entity-relationship model of the Wave is also confusingly multilayered at times: waves, wavelets, documents/messages/blips, plus all the XML stuff inside every “blip”. The internal data format is also far from simple: while some people may consider XML itself too complex, the Wave adds an additional kind of entity (annotations) on top of XML. Wave also uses a mutation model which is significantly more complicated than the DOM mutation events API: Wave has 15 kinds of mutation operations (compare e.g. to 3 types of DOM mutation events, 2 types of the classical insert-remove model, or just 1 type of CT mutations).
There is even more stuff, e.g. “The App Engine robots speak to the Google Wave backends through a custom HTTP protocol for wave robots. They don’t use the federation protocol.” Why are robots different from regular clients? No idea.
As Systemantics postulated, “A Complex System That Works Is Invariably Found To Have Evolved From A Simple System That Worked”. The Wave still has to evolve in the wild; yet it is already quite complex. (Compare it e.g. to plain old e-mail.) To overgeneralize a bit, Wave has the features of corporation-made software: as five or ten more developers can always be assigned to this or that part of the project, project leaders do not feel pressured to look hard for the simplest solution.
Wave is fat
To my eye, Wave has quite a lot of complex redundant pieces (a kind of architectural fat). Namely, Wave communication patterns fit perfectly into Comet HTTP; even for real-time updates, a client cannot issue a new update before the previous update is acknowledged. That is a perfect HTTP-like request-response pattern. Was XMPP really necessary there? I doubt it.
The use of XML might also be an overshoot. Anyway, most of the functionality of structural markup is (or could be) expressed by means of “blips”. Decorative markup is done as “annotations”. Thus, the useful work is apparently done by other mechanisms, while the role of XML is mostly limited to introducing complexity. That is quite obvious when looking at the Wave OT mutation model; 15 kinds of mutations mean a 15k-strong potential for unexpected feature interactions. Semantically, antidocumentelementstart is just terrible.
Wave is centralized
While some commentators claimed that Wave uses git-style distributed version control, that is light-years from the truth. The Wave’s OT flavor needs global ordering of mutation operations; being applied in a different order, OT mutations lead to a different outcome. Thus, only one server can host a wave; any “federated” servers are no more intelligent than HTTP caches (yet another reason to use HTTP).
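The ordering problem is easy to demonstrate with a toy example (hypothetical code, not Wave's actual operation format; it only illustrates why untransformed position-based mutations must be serialized through one server):

```python
# Toy illustration: naive position-based inserts do not commute.
def apply_ins(text, pos, s):
    """Insert string s at character position pos."""
    return text[:pos] + s + text[pos:]

doc = "abc"
# user 1 and user 2 edit concurrently, both against the same base text:
one = apply_ins(apply_ins(doc, 1, "X"), 2, "Y")  # op1 then op2 -> "aXYbc"
two = apply_ins(apply_ins(doc, 2, "Y"), 1, "X")  # op2 then op1 -> "aXbYc"
assert one != two  # divergence without an agreed global order

# OT repairs this by transforming the later op against the earlier one:
# op2's position shifts right by the length of op1's insertion.
fixed = apply_ins(apply_ins(doc, 1, "X"), 2 + 1, "Y")
assert fixed == two  # replicas now agree -- but only because they first
                     # agreed on which op came first
```

The transform itself is trivial here; the expensive part is the agreement on operation order, which is what pins a wave to a single hosting server.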
Theoretically, Wave developers might introduce branch/merge routines on top of OT. But as far as I can see, that will add even more of complexity (and some issues, probably, cannot be resolved in principle - such as concurrent mutation ordering).
While, yes, the possibilities of real-time/generalized communication environments are breathtaking, the particular implementation (the Wave) has lots of shortcomings I consider typical beginner's mistakes. The architecture is quite complicated, I'd even say cluttered. Apparently, it evolved by means of adding yet-another-clever-piece to do that-another-cool-feature. That is more or less normal for communication environments that evolved in the wild for tens of years, but the Wave is a newcomer. Try to imagine how complex it will be 10 years from now if widely used (much worse than .doc, I suppose). That aspect is especially important because Wave is supposed to be a standard. Just compare it to HTTP or SMTP; those are really simple.
So, finally, I think the Wave needs a little bit more love before being pushed as a universal standard.
A summary of my recommendations is to use:
- flat text with some markup, instead of XML
- simpler (and decentralized) version control
- no XMPP
- a simpler ER model, e.g. blips connected by inclusions (“points and lines”)
The WikiWYG editor demo is online. The idea of WikiWYG is to cross-breed WYSIWYG and plain wikitext: namely, to let users type wikitext while applying formatting on the fly.
The popular Long Tail concept by Chris Anderson was recently criticized by Anita Elberse, a marketing professor at Harvard Business School. Lee Gomes at the WSJ mistakenly described that as a debunking of the Long Tail myth.
Mr Anderson gave an exceptionally polite response.
Wow. Before I noticed that Prof. Elberse is a marketing professor, I got really suspicious about Harvard. I had even started to write an extensive post before I discovered an excellent comment by Ali Partovi. I would sign every word of it.
Generally, it is absolutely misleading to study long-tail distributions in terms of average values. The average homo sapiens has one testicle and one breast. Similarly, in a power-law distribution the average value is not typical and does not mean much. Prof. Elberse made that awful mistake; any further conclusions are just consequences.
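A quick numeric sketch of that point (hypothetical numbers; the Pareto shape parameter is chosen only for illustration): in a heavy-tailed sample, the mean sits far above the median, so "the average title" describes almost no actual title.

```python
# Deterministic Pareto-shaped sample built from quantiles:
# x(q) = (1 - q) ** (-1 / alpha); alpha = 1.16 gives roughly the
# classic 80/20 concentration. No randomness, so the result is stable.
alpha = 1.16
n = 100_000
sales = [(1 - (i + 0.5) / n) ** (-1 / alpha) for i in range(n)]

mean = sum(sales) / n
median = sorted(sales)[n // 2]

# The mean is pulled up by the few blockbusters in the tail; the
# typical (median) title sells a small fraction of "the average".
assert mean > 2 * median
```

Any statistic phrased as "the average item" in such a distribution is describing the blockbusters, not the catalogue.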
The program implements the linear-space variation of the algorithm, not the O(ND)-space Dijkstra-like BFS (and not the scary O(N²)-space matrix-based examples one might find on the Web).
To get the code:
svn co svn://bouillon.math.usu.ru/TestDiff
TODO: a lot of things: diff cleanups, patching, whatever else I’ll need
P.S. Performance is acceptable: it does a per-char diff of “War and Peace” Book I (N=271942, D=4356) in 0.15 s, while GNU diff does a line-based diff in 0.04 s (N=5931, D=1404). Actually, GNU diff does it in 0.01 s without locales. My implementation wastes time reinstantiating iterators, while the C code uses pointer arithmetic, but… who cares? That's good enough for now.
P.P.S. The per-char workload is 271942*4356/(5931*1404) = 142 times larger, yet it runs only 0.15/0.01 = 15 times slower. Wow! Excellent!
TODO: do it with bidirectional iterators; that is more natural
UPD 30 Jul 2008: Google’s diff-match-patch got a C++ version
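For reference, the greedy O(ND) core of this algorithm family is small enough to sketch. Hedged: this is the basic Myers loop that only returns D, the shortest edit script length; the linear-space variation mentioned above refines it by running the same search from both ends, and is not shown here.

```python
# Basic greedy O(ND) edit-distance core (the non-linear-space baseline).
# Returns D, the length of the shortest insert/delete script turning a
# into b. v[k] holds the furthest x reached on diagonal k = x - y.
def myers_d(a, b):
    n, m = len(a), len(b)
    v = {1: 0}
    for d in range(n + m + 1):
        for k in range(-d, d + 1, 2):
            if k == -d or (k != d and v.get(k - 1, 0) < v.get(k + 1, 0)):
                x = v.get(k + 1, 0)      # move down: an insertion
            else:
                x = v.get(k - 1, 0) + 1  # move right: a deletion
            y = x - k
            while x < n and y < m and a[x] == b[y]:
                x += 1                   # follow the "snake" of matches
                y += 1
            v[k] = x
            if x >= n and y >= m:
                return d
    return n + m  # unreachable for well-formed input
```

Per-char numbers like N=271942, D=4356 above are exactly the two factors in this loop's O(ND) running time, which is why a small D keeps a per-char diff of a whole book affordable.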
If two users concurrently insert the same letter at the same place, it doubles at merge. I.e.
user1: apple user2: apple
Heuristic: if two atoms have equal content and an equal predecessor, count them as one.
1) The devil is in the details. What if “user1: apple user2: apple”?
2) Is it really a problem?
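The heuristic itself is simple to sketch, under an assumed atom representation of (predecessor id, character, author) — hypothetical names, just to pin the idea down:

```python
# Sketch of the dedup heuristic, under an assumed atom representation of
# (predecessor id, character, author). Two concurrent atoms collapse into
# one iff they carry the same character AND the same predecessor.
def merge_dedup(atoms):
    seen = set()
    merged = []
    for pred, ch, author in atoms:  # atoms in merge order
        key = (pred, ch)
        if key in seen:
            continue  # identical concurrent insert: count as one
        seen.add(key)
        merged.append((pred, ch, author))
    return merged

# Both users fix "aple" by inserting 'p' after the same existing 'p' atom:
merged = merge_dedup([("p1", "p", "user1"), ("p1", "p", "user2")])
assert len(merged) == 1  # "apple", not "appple"
```

Question 1 above is exactly where this breaks: if the two users attach their 'p' to different predecessor atoms, the keys differ and the letter still doubles.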
P.S. 9 Apr: Although convergence is considered a virtue of a version control system, in the case of distributed wikis/forums it might easily be a misfeature. E.g., suppose the stereotypical situation of two users adding a “+1” comment to some posting. Those “+1”s, although identical, are not actually the same change.
Sometimes, a program better be predictable and consistent than smart.