December 28, 2006

Hung Over

I overdid it last night and now I'm paying the price. Headache, nausea, and a feeling of regret. No, I didn't get loaded and make an ass of myself (again).

Nope, this time I OD'd on leftover Christmas sweets. I munched for hours on assorted hard candies, chocolate confections and approximately 57 1/4 cookies. The 1/4 cookie was some awful licorice thing. Who the hell makes licorice cookies? Sickies, that's who.

I started around 5:30 pm and didn't stop until around 1:30 am. When I was too full and short of breath from gorging, I'd chew bubblegum for a while, then gorge some more. Around 11 I noticed I felt a little nauseous. I figured it might be from a lack of protein, and I remembered that Snickers are packed with peanuts and peanuts have protein. So I pushed through and 13 mini-Snickers later I was good to go.

I think it was the sheer variety that did me in. With all the cookies my wife made, the gifts we got from various friends, the leftover dessert plates from the Christmas Eve party, and two plates of assorted homemade cookies from a church fundraiser, I didn't stand a chance. Not only because of the delicious taste, but because after the tenth cookie I was able to rationalize I was probably getting reasonably balanced nutrition. Nuts have protein, jam filling has fruit, chocolate has bioflavonoids, coconut has that weird soapy taste, etc. It seemed like solid reasoning at the time.

And now I feel like complete shit. Thankfully I didn't have to drive home last night, because if I had passed a Krispy Kreme I'd likely be dead now.

Link

December 26, 2006

CouchDb and SaaS

I've noticed continuing buzz around hosted solutions and the Software as a Service (SaaS) model: web-based business applications that are hosted and maintained by a service provider. Salesforce.com is the canonical example; they've been very successful providing hosted CRM and are expanding their offerings. Microsoft has launched Office Live, and there are rumors of a Google "Office" killer in the works.

The idea behind SaaS is that businesses don't want software, they want solutions. So instead of paying for expensive software licenses, servers and support staff, they buy subscriptions to online services that provide the same functions. SaaS also has the advantage of access from anywhere on the internet: all you need is a good connection and a browser and you're in business.

So does the hosted SaaS paradigm make sense for businesses? It certainly makes sense from an "economies of scale" point of view, since businesses need not worry about building infrastructure, purchasing systems and hiring technical staff. Businesses just pay for access and the provider does the rest. But there is one serious downside to consider, even with the very best providers.

Imagine a world-class hosted application provider. They'll have state-of-the-art data centers with clean power, battery backup and diesel generators; well-trained, professional support and admin staff; heavy investment in engineering, integration and testing of complete systems; and redundancy of disks, routers and machines, and even entire data centers.

Q: How reliable will this provider's hosted web applications be?

A: Only as reliable as your internet connection.

When you've remotely outsourced key business applications, any network problem can mean your whole business is effectively paralyzed.

The network is the "elephant in the room" of hosted applications. Outages, timeouts, DNS problems, dropped signals: any of these can render hosted applications and the data they contain completely useless and bring business to a grinding halt. And the hosting providers can do nothing about it.

Why can't hosted business applications and all the data contained within still be available even when the internet isn't? Unless SaaS can provide this, there aren't many businesses that can actually afford it; the chance and cost of failure are too high.

So it's back to the same old roll-your-own business infrastructure, cobbling together networks, hardware and applications and hiring staff to keep it running. It's expensive, but it's cheaper than not having access to your applications and data.

Of course, I just so happen to know of a solution to this quandary: CouchDb. CouchDb is a distributed database system I'm building. Web applications built on it allow shared, distributed access to data, whether users are online or offline. CouchDb is ideal for document-oriented and collaborative applications.

Using a CouchDb back-end, SaaS hosts can provide tested and integrated applications, access from any internet connection, updates and bug fixes, regular backups, highly reliable infrastructure, professional admin and support staff and everything else you expect from a good hosted application provider, AND ALSO give you the ability to replicate the application and data locally on your own office servers or even right on your laptop, and use it as though you were always connected to the internet. New functionality and bug fixes replicate automatically from the application provider.

Since all the updates and changes are incrementally replicated to the SaaS provider, you don't need expensive machines, RAID storage or regular backups to get reliable applications and data storage; the service provider already has it covered. If your local machine, be it an office server or laptop computer, bites the dust, you can simply buy a new one and replicate the application and data back down from the provider to the new machine. The cost of hardware and the business cost of its failure get much cheaper, since the provider already gives you all the reliable back-end infrastructure and acts as a highly available backup in case of local failure.

And you still get the benefit of being able to fire up a browser and get to your data so long as you have an internet connection.

Anyway, CouchDb + SaaS == perfect match

In 1995 I remember "always on and everywhere" internet access was just around the corner, everyone said so, including me. Well, it's almost 2007 and it still hasn't happened.

Maybe someday internet access will be fast, reliable and ubiquitous enough that people can safely run their whole frickin' business using online-only SaaS. Until that happens, until people no longer need local and offline access to their applications and data, SaaS will continue to be a niche industry, always on the verge of being huge.

Salesforce.com seems to have realized this when they bought Sendia for $15 million in April and created AppExchange Mobile. What does AppExchange do? It's a development platform and the second feature listed is "Disconnected access: Information is available even when the network is not".

Link

December 20, 2006

Erlang

If you want to understand what Erlang is all about and why it's different, then read Joe Armstrong's PhD thesis Making reliable distributed systems in the presence of software errors. Joe and his team created Erlang, and this paper is the best introduction I've seen. Many parts of Erlang may seem odd examined in isolation, but Joe's thesis helps explain not only the features but the motivations for the design decisions.

Joe's a nice guy too. Recently he emailed me about CouchDb and I got a chance to thank him for Erlang. Very cool.

Link

Write now

The CouchDb Technical Overview took me a loooong time to write, too long. But I'm happy with it, I think it accomplishes what I set out to do.

So I'm now in full-on writing mode, no coding for a while. It took me a while to wake up my writing brain, but I finally kicked its sorry ass out of bed and made it do some work. I've got more articles that may escape soon and I'm attacking the documentation and project websites starting tomorrow.

Want to own a piece of CouchDb? I am now actively looking for an investor (and maybe business partner). If interested, email me at: damien_katz@yahoo.com

Link

CouchDb Technical Overview

Note: This document's permanent home is on the Documentation Wiki, where the latest revision can be found.

--

This is a technical overview of the CouchDb distributed document database system. This overview is intended to give a high-level introduction of key models and components of CouchDb, how they work individually and how they fit together.

Document Storage

A CouchDb server hosts named databases, which store "documents". Each document is uniquely named in the database, and CouchDb provides a RESTful HTTP API for reading and updating (add, edit, delete) database documents.

Documents are the primary unit of data in CouchDb and consist of any number of fields and binary blobs. Documents also include metadata that’s maintained by the database system.

Document fields are uniquely named and contain an ordered list of elements. Elements can be of varying types (text, number, date, time), and there is no set limit to text size or element count. Binary blobs also are uniquely named.

The CouchDb document update model is lockless and optimistic. Document edits are made by client applications loading documents, applying changes, and saving them back to the database. If another client editing the same document saves their changes first, the client gets an edit conflict error on save. To resolve the update conflict, the latest document version can be opened, the edits reapplied and the update tried again.
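To make the optimistic update cycle concrete, here is a minimal client-side sketch in Python. The server address, database name, status codes and JSON payloads are assumptions for illustration, not the actual CouchDb wire format; the point is only the load, edit, save, retry-on-conflict loop.

  # Hypothetical REST endpoints; the real URL layout and payload format may differ.
  import requests

  BASE = "http://localhost:5984/somedb"   # assumed server and database

  def update_document(doc_id, apply_changes, max_retries=5):
      """Load a document, apply changes, save; retry on edit conflicts."""
      for _ in range(max_retries):
          # Load the latest version of the document.
          doc = requests.get(f"{BASE}/{doc_id}").json()

          # Apply the caller's edits to the in-memory copy.
          apply_changes(doc)

          # Try to save. If another client saved first, the server rejects
          # the update with a conflict error and we retry from the top.
          resp = requests.put(f"{BASE}/{doc_id}", json=doc)
          if resp.status_code != 409:        # assume 409 signals an edit conflict
              resp.raise_for_status()
              return resp.json()
      raise RuntimeError("gave up after repeated edit conflicts")

  # Example: bump a counter field, reapplying the edit if another client
  # saved a newer version in the meantime.
  update_document("order-1042", lambda d: d.update(count=d.get("count", 0) + 1))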

Document updates (add, edit, delete) are all or nothing, either succeeding entirely or failing completely. The database never contains partially saved or edited documents.

ACID Properties

The CouchDb file layout and commitment system features all of the Atomic, Consistent, Isolated, Durable (ACID) properties. On disk, CouchDb never overwrites committed data or associated structures, ensuring the database file is always in a consistent state. This is a "crash-only" design: the CouchDb server does not go through a shut-down process, it's simply terminated.

Document updates (add, edit, delete) are serialized, except for binary blobs which are written concurrently. Database readers are never locked out and never have to wait on writers or other readers. Any number of clients can be reading documents without being locked out or interrupted by concurrent updates, even on the same document. CouchDb read operations use a Multi-Version Concurrency Control (MVCC) model where each client sees a consistent snapshot of the database from the beginning to the end of the read operation.

Documents are indexed in b-trees by their name (DocID) and a Sequence ID. Each update to a database instance generates a new sequential number. Sequence IDs are used later for incrementally finding changes in a database. These b-tree indexes are updated simultaneously when documents are saved or deleted. The index updates always occur at the end of the file (append-only updates).

Documents have the advantage that the data is already conveniently packaged for storage, rather than split out across numerous tables and rows as in most database systems. When documents are committed to disk, the document fields and metadata are packed into buffers, sequentially one document after another (helpful later for efficient building of Fabric views).

When CouchDb documents are updated, all data and associated indexes are flushed to disk and the transactional commit always leaves the database in a completely consistent state. Commits occur in two steps:
1. All document data and associated index updates are synchronously flushed to disk.
2. The updated database header is written in two consecutive, identical chunks to make up the first 4k of the file, and then synchronously flushed to disk.

In the event of an OS crash or power failure during step 1, the partially flushed updates are simply forgotten on restart. If such a crash happens during step 2 (committing the header), a surviving copy of the previous identical headers will remain, ensuring coherency of all previously committed data. Excepting the header area, consistency checks or fix-ups after a crash or a power failure are never necessary.
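As a rough illustration of the two-step commit (a sketch only; the real implementation is in Erlang, and the 2k-per-copy header layout here is an assumption), the write sequence looks like this:

  import os

  HEADER_SLOT = 2048   # assumed size of each header copy (two copies = first 4k)

  def commit(fd, data_blocks, header_bytes):
      # Step 1: append document data and index updates, then flush to disk.
      # A crash before the header is rewritten just loses this uncommitted tail.
      os.lseek(fd, 0, os.SEEK_END)
      for block in data_blocks:
          os.write(fd, block)
      os.fsync(fd)

      # Step 2: write the updated header as two consecutive, identical copies
      # into the first 4k of the file, then flush again. If the machine dies
      # while writing one copy, the other copy (or the previous pair) still
      # points at fully committed, consistent data.
      assert len(header_bytes) <= HEADER_SLOT
      padded = header_bytes.ljust(HEADER_SLOT, b"\x00")
      os.pwrite(fd, padded, 0)
      os.pwrite(fd, padded, HEADER_SLOT)
      os.fsync(fd)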

Compaction

Wasted space is recovered by occasional compaction. On schedule, or when the database file exceeds a certain amount of wasted space, the compaction process clones all the active data to a new file and then discards the old file. The database remains completely online the entire time and all updates and reads are allowed to complete successfully. The old file is deleted only when all the data has been copied and all users have transitioned to the new file.

Views

ACID properties only deal with storage and updates; we also need the ability to show our data in interesting and useful ways. Unlike SQL databases, where data must be carefully decomposed into tables, data in CouchDb is stored in semi-structured documents. CouchDb documents are flexible and each has its own implicit structure, which alleviates the most difficult problems and pitfalls of bi-directionally replicating table schemas and their contained data.

But beyond acting as a fancy file server, a simple document model for data storage and sharing is too simple to build real applications on -- it simply doesn't do enough of the things we want and expect. We want to slice and dice and see our data in many different ways. What is needed is a way to filter, organize and report on data that hasn't already been decomposed into tables.

View Model

To address this problem of adding structure back to unstructured and semi-structured data, CouchDb integrates a view model and query language. Views are the method of aggregating and reporting on the documents in a database, built on demand to aggregate, join and report on database documents. Views are built dynamically and don't affect the underlying documents; you can have as many different view representations of the same data as you like.

View definitions are strictly virtual and only display the documents from the current database instance, making them separate from the data they display and compatible with replication. CouchDb views are defined inside special "design" documents and can replicate across database instances like regular documents, so that not only data replicates in CouchDb, but entire application designs replicate too.

Fabric

Views are defined using Fabric view formulas. Fabric is a simple query language designed for extracting and formatting the information contained in CouchDb documents and organizing the document information as rows in virtual tables. Fabric has good string processing support (including regular expressions) and mixes imperative and declarative constructs. It provides a simple, concise way to filter, format and organize documents and is designed to deal easily with missing fields, differing data types, and other structure and naming differences.

Fabric is also used for other purposes in CouchDb, such as bulk processing documents and validating updates.

View Indexes

Views are a dynamic representation of the actual document contents of a database, and CouchDb makes it easy to create useful views of data. But generating a view of a database with hundreds of thousands or millions of documents is time and resource consuming; it's not something the system should do from scratch each time.

To keep view querying fast, the view engine maintains cached indexes of its views, and incrementally updates them to reflect changes in the database. CouchDb’s core design is largely optimized around the need for efficient, incremental creation of views and their indexes.

Views and their Fabric formulas are defined inside special “design” documents, and a design document may contain any number of uniquely named view formulas. When a user opens a view and its index is automatically updated, all the views in the same design document are indexed as a single group.

The view builder uses the database Sequence ID to determine if the view group is fully up-to-date with the database. If not, the view engine examines all the database documents (in packed sequential order) changed since the last refresh. Documents are read in the order they occur in the disk file, reducing the frequency and cost of disk head seeks.

Views can be read and queried while also being refreshed. If a client is slowly streaming out the contents of a large view, the same view can be concurrently opened and refreshed for another client without blocking the first client. This is true for any number of simultaneous readers; the index can be refreshed for other clients without causing problems for those already reading and querying the view.

As documents are examined, their previous row values are removed from the view indexes, if they exist. If the document is selected by a view formula, the formula results are inserted into the view as a new row.

When view index changes are written to disk, the updates are always appended at the end of the file, serving both to reduce disk head seek times during disk commits and to ensure crashes and power failures cannot cause corruption of indexes. If a crash occurs while updating a view index, the incomplete index updates are simply lost and the index is rebuilt incrementally from its previously committed state.
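A rough sketch of the incremental refresh logic described above, with invented names standing in for the real Erlang internals and Fabric formulas:

  def refresh_view_group(db, group):
      # Already caught up with the database? Nothing to do.
      if group.update_seq == db.update_seq:
          return

      # Walk only the documents changed since the last refresh, in on-disk
      # (sequence) order, which keeps reads mostly sequential.
      for doc in db.docs_changed_since(group.update_seq):
          for view in group.views:
              # Remove any rows this document contributed previously.
              view.index.remove_rows(doc.id)
              # Re-run the view formula; if it selects the document,
              # append the resulting rows to the index.
              rows = view.formula(doc)       # stand-in for a Fabric formula
              if rows:
                  view.index.append_rows(doc.id, rows)

      # Remember how far we've indexed. Index writes are append-only, so a
      # crash simply resumes from the previously committed update_seq.
      group.update_seq = db.update_seq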

Security and Validation

To protect who can read and update documents, CouchDb has a simple reader access and update validation model that can be extended to implement custom security models.

Administrator Access

CouchDb database instances have administrator accounts. Administrator accounts can create other administrator accounts and update design documents. Design documents are special documents containing Fabric view definitions and other special formulas, as well as regular fields and blobs.

Reader Access

To protect document contents, CouchDb documents can have a reader list. This is an optional list of reader-names allowed to read the document. When a reader list is used, protected documents are only viewable by listed users.

When a user accesses a database, the user's credentials (name and password) are used to dynamically determine the user's reader names. The user credentials are input to the formula and the formula returns a list of names for the user, or an error if the user credentials are wrong. The Fabric formula can have hard-coded logic, or be dynamically driven from look-ups and queries.

When a document is protected by reader access lists, any user attempting to read the document must be listed. Reader lists are enforced in views too. Documents that are not allowed to be read by the user are dynamically filtered out of views, keeping the document row and extracted information invisible to non-readers.
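Purely as an illustration (the function and field names are invented), enforcing a reader list while streaming a view might look something like this:

  def stream_view_rows(view, user):
      # Resolve the user's reader names; in CouchDb this comes from a Fabric
      # formula fed with the user's credentials. Assume it returns a set.
      readers = resolve_reader_names(user)
      for row in view.rows():
          doc = row.document()
          # Documents without a reader list are visible to everyone;
          # protected documents only to listed readers.
          if doc.reader_list is None or readers & set(doc.reader_list):
              yield row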

Update Validation

As documents are written to disk, they can be validated dynamically by Fabric formulas for both security and data validation. When the document passes all the formula validation criteria, the update is allowed to continue. If the validation fails, the update is aborted and the client gets an error response.

Both the user's credentials and the updated document are given as inputs to the validation formula, and can be used to implement custom security models by validating a user's permissions to update a document.

A basic "author only" document update model is trivial to implement: document updates are validated by checking that the user is listed in an "author" field in the existing document. More dynamic models are also possible, like checking a separate user account profile for permission settings.
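A pseudocode sketch of that "author only" check, written in Python rather than actual Fabric syntax (the real validation would be a Fabric formula):

  def validate_update(user, new_doc, existing_doc):
      # New documents have no existing author list to check against; a real
      # policy would decide separately who may create documents.
      if existing_doc is None:
          return True

      authors = existing_doc.get("author", [])
      if isinstance(authors, str):
          authors = [authors]

      if user.name not in authors:
          raise PermissionError("only listed authors may update this document")
      return True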

The update validations are enforced for both live usage and replicated updates, ensuring security and data validation in a shared, distributed system.

Distributed Updates and Replication

CouchDb is a peer-based distributed database system; it allows users and servers to access and update the same shared data while disconnected, and then bi-directionally replicate those changes later.

The CouchDb document storage, view and security models are designed to work together to make true bi-directional replication efficient and reliable. Both documents and designs can replicate, allowing full database applications (including application design, logic and data) to be replicated to laptops for offline use, or replicated to servers in remote offices where slow or unreliable connections make sharing data difficult.

The replication process is incremental. At the database level, replication only examines documents updated since the last replication. Then, for each updated document, only fields and blobs that have changed are replicated across the network. If replication fails at any step, due to network problems or a crash for example, the next replication restarts at the same document where it left off.
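A hand-wavy sketch of that replication loop, with invented names for the checkpoint, change-feed and diffing calls:

  def replicate(source, target):
      # Resume from the last sequence number successfully replicated.
      checkpoint = target.get_replication_checkpoint(source.id)

      for change in source.changes_since(checkpoint):
          src_doc = source.open_doc(change.doc_id)
          tgt_doc = target.open_doc(change.doc_id)   # may be None

          # Send only the fields and blobs that differ from what the
          # target already has.
          delta = diff_fields_and_blobs(src_doc, tgt_doc)
          if delta:
              target.apply_delta(change.doc_id, delta)

          # Record progress after every document, so a network failure or
          # crash lets the next replication restart at this same document.
          target.set_replication_checkpoint(source.id, change.seq)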

Partial replicas can be created and maintained. Replication can be "filtered" by a Fabric formula, so that only particular documents or those meeting specific criteria are replicated. This can allow users to take subsets of a large shared database application offline for their own use, while maintaining normal interaction with the application and that subset of data.

Conflicts

Conflict detection and management are key issues for any distributed edit system. The CouchDb storage system treats edit conflicts as a common state, not an exceptional one. The conflict handling model is simple and "non-destructive" while preserving single document semantics and allowing for decentralized conflict resolution.

CouchDb allows for any number of conflicting documents to exist simultaneously in the database, with each database instance deterministically deciding which document is the “winner” and which are conflicts. Only the winning document can appear in views, while “losing” conflicts are still accessible and remain in the database until deleted or purged. Because conflict documents are still regular documents, they replicate just like regular documents and are subject to the same security and validation rules.
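The exact winner-picking rule isn't spelled out here, so the following is purely an illustrative stand-in: any rule works, as long as every replica applies the same deterministic ordering to the same set of conflicting revisions and therefore picks the same winner without coordinating.

  def pick_winner(conflicting_revs):
      # Sort by some total order on revision metadata (here, a hypothetical
      # (edit_count, rev_id) pair) and take the maximum. Deterministic inputs
      # plus a deterministic rule means every replica agrees on the winner.
      return max(conflicting_revs, key=lambda r: (r.edit_count, r.rev_id))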

When distributed edit conflicts occur, every database replica sees the same winning revision and each has the opportunity to resolve the conflict. Resolving conflicts can be done manually or, depending on the nature of the data and the conflict, by automated agent. The system makes decentralized conflict resolution possible while maintaining single document database semantics.

Conflict management continues to work even if multiple disconnected users or agents attempt to resolve the same conflicts. If resolved conflicts result in more conflicts, the system accommodates them in the same manner, determining the same winner on each machine and maintaining single document semantics.

Applications

Using just the basic replication model, many traditionally single server database applications can be made distributed with almost no extra work. CouchDb replication is designed to be immediately useful for basic database applications, while also being extendable for more elaborate and full-featured uses.

With very little database work, it is possible to build a distributed document management application with granular security and full revision histories. Updates to documents can be implemented to exploit incremental field and blob replication, where replicated updates are nearly as efficient and incremental as the actual edit differences ("diffs").

The CouchDb replication model can be modified for other distributed update models. Using a multi-document transaction, it is possible to perform Subversion-like “all or nothing” atomic commits when replicating with an upstream server, such that any single document conflict or validation failure will cause the entire update to fail. Like Subversion, conflicts would be resolved by doing a “pull” replication to force the conflicts locally, then merging and re-replicating to the upstream server.

Implementation

CouchDb is built on Erlang OTP, a functional, concurrent programming language and development platform. Erlang was developed for real-time telecom applications with an extreme emphasis on reliability and availability.

Both in syntax and semantics, Erlang is very different from conventional programming languages like C or Java. Erlang uses lightweight "processes" and message passing for concurrency; it has no shared-state threading and all data is immutable. The robust, concurrent nature of Erlang is ideal for a database server.

CouchDb is designed for lock-free concurrency, in the conceptual model and the actual Erlang implementation. Reducing bottlenecks and avoiding locks keeps the entire system working predictably under heavy loads. CouchDb can accommodate many clients replicating changes, opening and updating documents, and querying views whose indexes are simultaneously being refreshed for other clients, without needing locks.

For higher availability and more concurrent users, CouchDb is designed for "shared nothing" clustering. In a "shared nothing" cluster, each machine is independent and replicates data with its cluster mates, allowing individual server failures with zero downtime. And because consistency scans and fix-ups aren’t needed on restart, if the entire cluster fails – due to a power outage in a datacenter, for example – the entire CouchDb distributed system becomes immediately available after a restart.

Link

December 13, 2006

The Four Pillars

I've been so heads-down coding and implementing CouchDb that up until now I haven't really had the time to explain the technological vision clearly, as a complete package. CouchDb isn't a small thing; it's a full distributed database system that addresses many difficult areas of data management. But explaining all the parts, the subtle design decisions and how they work together is difficult, especially while building it, since things rarely end up as they started out; they change and evolve as you go. But now that CouchDb is far enough along, I'm spending my full attention on how to present and explain this stuff to a broader audience.

One thing I decided a couple of days ago is a simple terminology change, switching the name of "computed tables" to "views" (thanks Ned). The change is so obvious I feel like a twit for not doing it earlier. I had a reason originally for computed tables, but the name just confused people. I also thought about calling them "virtual tables", but views are more understandable to developers from a variety of backgrounds.

Right now I'm also working on a good overview of the core CouchDb system, and as a way to explain the most fundamental parts, I came up with the Four Pillars of Data Management:


  • Save - Because you want to save your data and reliably get it back.

  • See - Because you want your database to be able to aggregate, organize and show you interesting things about your data.

  • Share - Because multiple people and machines need to access the data, even when offline.

  • Secure - Because you need to keep the private stuff private, and prevent others from changing your important data.

Catchy pillars huh? I'm thinking of making up a fake mathematician who came up with the four pillars, and planting articles on the Wikipedia. That they all start with S is probably significant somehow, not sure how yet. Maybe they aren't pillars, maybe they are the Fantastic Four Data Principles, or the Four Horsemen of the Data-apocalypse. They are the four something of data management.

Once we've established the four pillars and their incredible importance, then I'll show how CouchDb has all four:


  • Save - Robust, ACID compliant storage engine.

  • See - View engine to efficiently filter, format and organize data.

  • Share - Efficient, incremental, bi-directional replication.

  • Secure - Distributed security and validation model.

Then people will be all like "my database only has 2 of the 4 pillars, gotta get CouchDb."

Anyway, pillars aside, I know there has been a communication problem, namely me not explaining this stuff better. So I am working hard on explaining this stuff better to everyone, no new coding for a while. I've already got a good overview of the architecture and back-end components, and it explains each of the parts and how they are all designed to work together seamlessly and reliably. Coming soon, stay tuned, don't touch that dial.

Link

December 7, 2006

Domino Sucks?

As I am all too aware, people complaining about Notes and Domino is a pretty common occurrence. I still flinch a little when I tell people I used to work on Notes, out of fear. More than one Notes user has become frustrated to the point of hurling blunt objects, and I'd rather not become the face they associate with their rage.

But this particular round is a little different from most, as it's coming from the Notes and Domino community itself, mostly about how neglectful IBM has been of the Notes platform, and in particular Domino web development. Jake Howlett kicks things off and an explosion of complaints appears in the comments. Vowe makes note, Carl Tyler responds, Jake posts again and Ed Brill responds.

But if these guys are frustrated with IBM, you should talk to some of IBM's ex-Iris developers who lived through the whole Workplace fiasco. I get the idea the people calling the shots at IBM are ex-salesmen; they don't know what a good idea or a competent engineer looks like. So you get what happened: a giant mishmash of a project with pockets of brilliance and long stretches of pointlessness.

I remember when I worked at IBM, Steve Mills, the VP in charge of all of IBM software and one of the largest software businesses in the world, came to Iris to tell us about IBM's great plans for Lotus Notes and for us former Iris employees working on it. He actually said we were the "user experience" people, and the other groups at IBM (like DB2 and Websphere) were the heavy lifting, back-end people.

He couldn't have gotten it more wrong. The thing that made Notes successful wasn't the great user experience; probably the single biggest thing users complain about is the poor usability and frustrating experience of the Notes client. Now, don't get me wrong, the Notes client is a powerful piece of software and its UI is highly functional (if frustrating), but that's not why it continues to be successful. It's been successful because the back-end data model, the heavy lifting stuff, solved real problems much more easily than so many of the other technologies, like SQL. It is still unique in its back-end capabilities.

But IBM management didn’t seem to recognize this. I’m not sure what the hell they were thinking. The same Steve Mills had a long interview in a trade magazine talking about the legacy technology of Notes and how it was going to be ripped out and replaced with something based on DB2. There is little doubt that DB2 really is an extremely advanced and optimized piece of backend software. But it’s not Notes, not even close.

Somehow the guys in charge got it into their heads that as much as possible should be written in Java on top of a relational backend, and that Notes was legacy technology that needed to go away. I'm sure this looked very good on paper: IBM is the groupware leader, they have an extremely advanced relational database and an industry-leading Java application server platform. Why not take all that expertise and technology, discard all the old "legacy" stuff and build the greatest and most technologically advanced groupware platform ever? They can convert all their old customers to the new platform, charging a bundle on software licenses and support and migration consulting. The customers will gladly pay for it because it's so great and because they want to get off the legacy Notes platform.

And so began the Workplace project. And what exactly was IBM hoping to build with all this? After blowing hundreds of millions building and marketing a new platform no one wanted, the only consistent thing I could see about it was that it was big. Very, very big. Good thing IBM made all that technology seamlessly integrated and easy to install and configure, otherwise they'd have had no advantage and you could just as easily have built most of the same stuff from open source. (yes, sarcasm)

Now IBM has seen the light. How could they not, when Workplace was a giant money pit while Notes and Domino continued to earn a healthy and growing profit despite neglect. My inside sources tell me much of the budget and personnel from Workplace initiatives are being reassigned to Notes and Domino. Let's hope they get the good, creative people to work on it, and not corporate drones with no other prospects than to work on a famously reviled product with an ancient codebase. And let's hope they fight hard to make sure it stays the same stable, easy-to-support platform it's always been, and not a slow, bloated, unstable mishmash of technologies. (more sarcasm? I'm not really sure)

And yet, why has it been so successful? Despite how maligned Notes often is, despite how neglected its own community feels, despite its outdated and limited codebase (16-bit database limits? In 2006?), despite IBM's own attempts to replace the product and migrate users, despite all this it continues to be a huge, money-making success for IBM. People still buy new licenses, build new applications and deploy new installations to solve real problems every day.

Why? What is it about the platform that makes it such a continued success? Is it the fat Notes client? Does its UI do something other fat clients don't do? Snappier? Better UI? Better text editor? Is it the PKI security? Is it the IDE? Is it the management tools? Better support for industry standards? What exactly is so unique about Notes and Domino that keeps it in demand?

Here is the answer: It's the database. The Notes database model is simple and functional with built-in security and bidirectional replication. The implementation is solid, if limited, and performs pretty damn well. It easily solves many problems that are nightmares to deal with in SQL.

The answer seems so obvious to me. I was involved with Notes and Domino for a very long time, as a customer and later as an engineer deep in the guts of the product. I've seen it used in many different ways, and the one element that was always a constant in every success was the database back end. Without the database, nothing else had any reason to exist.

Domino web development may be an exercise in frustration, but that's mostly because its web development tools are outdated, inconsistent and hacky; yet they are really the only way to work with and expose the power underneath. It doesn't have to be this way. IBM continues to squander a huge opportunity, and frustrated Domino developers know it.

Link

December 2, 2006

Project Road Map

I just updated the project road map for the upcoming releases.

==Features for next release==
* File attachments
* Basic security model
* Document validation model (validate live and replicated changes)
* Documentation, Documentation, Documentation

==Priority feature work==
* Live compaction
* Extensible security model
* LDAP authentication
* Table joins - one to one, one to many
* Fabric agents
* Documentation, Documentation, Documentation

==Future feature work==
* Server storage partitioning
* Server failover clustering

Link