- Unique Cluster Node ID
- Sync Delay
- Removing Old Revisions
- Easily add new cluster nodes
- Journal Type
- Sample Cluster Configurations
- Concurrent Write Behavior
Clustering in Jackrabbit works as follows: content is shared between all cluster nodes. That means all Jackrabbit cluster nodes need access to the *same* persistent storage (persistence manager, data store, and repository file system).
The persistence manager must be clusterable (e.g. a central database that allows concurrent access; see PersistenceManagerFAQ). Any DataStore (file based or database based) is clusterable by its very nature, because it stores content addressed by unique hash IDs.
Every change made by one cluster node is reported in a journal, which can be either file based or written to some database.
Session scoped locks currently have no effect on other cluster nodes - see http://issues.apache.org/jira/browse/JCR-1173
In order to use clustering, the following prerequisites must be met:
- Each cluster node must have its own repository configuration.
- If a DataStore is used, it must be shared between all cluster nodes.
- The global FileSystem at the repository level must be shared (only the one defined at the same level as the data store, in the repository.xml file).
- Each cluster node needs its own (private) workspace-level and versioning FileSystem (the ones inside the workspace configuration in workspace.xml and the versioning configuration in repository.xml).
- Each cluster node needs its own (private) Search indexes.
- Every cluster node must be assigned a unique ID.
- A journal type must be chosen, either based on files or stored in a database.
- Each cluster node must use the same (shared) journal.
- The persistence managers must store their data in the same, globally accessible location (see PersistenceManagerFAQ).
Unique Cluster Node ID
Every cluster node needs a unique ID. This ID can be specified either in the cluster configuration as the id attribute or as the value of the system property org.apache.jackrabbit.core.cluster.node_id. When copying repository configurations, do not forget to adapt the cluster node IDs if they are hardcoded. See below for some sample cluster configurations. A cluster node ID can be freely chosen; the only requirement is that it differs on every cluster node.
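A minimal sketch of the id attribute in repository.xml (the node name and paths are placeholders):

```xml
<!-- Option 1: hardcode the ID in each node's repository.xml -->
<Cluster id="node1">
  <Journal class="org.apache.jackrabbit.core.journal.FileJournal">
    <param name="revision" value="${rep.home}/revision.log"/>
    <param name="directory" value="/share/journal"/>
  </Journal>
</Cluster>
```

Alternatively, omit the id attribute and start each node with -Dorg.apache.jackrabbit.core.cluster.node_id=node1, which keeps the repository.xml identical across nodes.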
Sync Delay

By default, cluster nodes read the journal and update their state every 5 seconds (5000 milliseconds). To use a different value, set the syncDelay attribute in the cluster configuration.
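For example, a sketch that raises the sync interval to 10 seconds (journal location is a placeholder):

```xml
<!-- read the journal every 10 seconds instead of the default 5000 ms -->
<Cluster id="node1" syncDelay="10000">
  <Journal class="org.apache.jackrabbit.core.journal.FileJournal">
    <param name="revision" value="${rep.home}/revision.log"/>
    <param name="directory" value="/share/journal"/>
  </Journal>
</Cluster>
```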
Removing Old Revisions
The journal in which cluster nodes write their changes can potentially become very large. By default, old revisions are not removed. This makes adding a cluster node easy: the new cluster node simply replays the journal to get up to date (of course, if the journal contains data from two years of work, this may take a while).
As of Jackrabbit 1.5, the database-based journal can be cleaned automatically. The local revision counter is automatically migrated to a new database table called LOCAL_REVISIONS. To support a proper migration, the "revision" parameter must be present in the configuration; after the migration it can be removed.
The clean-up task can be configured with three parameters:
- janitorEnabled specifies whether the clean-up task for the journal table is enabled (default = false)
- janitorSleep specifies the sleep time of the clean-up task in seconds (only useful when the clean-up task is enabled, default is 24 hours)
- janitorFirstRunHourOfDay specifies the hour at which the clean-up task initiates its first run (default = 3, which means 3:00 at night)
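The three parameters above are set on the journal element. A sketch, assuming a PostgreSQL-backed journal (driver, URL, and credentials are placeholders):

```xml
<Cluster id="node1" syncDelay="5000">
  <Journal class="org.apache.jackrabbit.core.journal.DatabaseJournal">
    <!-- "revision" must be present to migrate the local revision counter -->
    <param name="revision" value="${rep.home}/revision.log"/>
    <param name="driver" value="org.postgresql.Driver"/>
    <param name="url" value="jdbc:postgresql://dbhost/journaldb"/>
    <param name="user" value="user"/>
    <param name="password" value="password"/>
    <!-- enable the clean-up task; first run at 3:00 at night -->
    <param name="janitorEnabled" value="true"/>
    <param name="janitorSleep" value="86400"/>
    <param name="janitorFirstRunHourOfDay" value="3"/>
  </Journal>
</Cluster>
```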
The current solution has three known caveats:
- If the janitor is enabled, you lose the ability to easily add cluster nodes. (It is still possible, but requires detailed knowledge of Jackrabbit.)
- Before the clean-up task runs for the first time, you must make sure that all cluster nodes have written their local revision to the database. Otherwise, a cluster node might miss updates that have already been purged, and its local caches and search indexes will get out of sync.
- If a cluster node is removed permanently from the cluster, then its entry in the LOCAL_REVISIONS table should be removed manually. Otherwise, the clean-up thread will not be effective.
Related issue: http://issues.apache.org/jira/browse/JCR-1087
Easily add new cluster nodes
- Shut down one of your instances.
- Retrieve that instance's current revision number from your database.
- Copy the whole Jackrabbit repository directory to another server/location.
- Start the original Jackrabbit instance again.
- In the copied repository.xml, change the cluster configuration to use a new node ID.
- Add that node ID to the JOURNAL_LOCAL_REVISIONS table in your database, using the revision number from the original instance.
- Start the new Jackrabbit instance (or keep it for backup purposes).
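The repository.xml edit in the steps above amounts to changing the cluster node ID while leaving the shared journal configuration untouched. A sketch (node name, driver, and URL are placeholders):

```xml
<!-- copied instance: only the id differs from the original node -->
<Cluster id="node2" syncDelay="5000">
  <Journal class="org.apache.jackrabbit.core.journal.DatabaseJournal">
    <param name="revision" value="${rep.home}/revision.log"/>
    <param name="driver" value="org.postgresql.Driver"/>
    <param name="url" value="jdbc:postgresql://dbhost/journaldb"/>
  </Journal>
</Cluster>
```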
Journal Type

The cluster nodes store information identifying the items they modified in a journal. This journal must again be globally available to all nodes in the cluster. It can be either a folder in the file system or a standalone database.
The file journal is configured through the following properties:
- revision: location of the cluster node's revision file
- directory: location of the journal folder
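A minimal file journal element using these two properties (paths are placeholders; the revision file is node-local, the directory is shared):

```xml
<Journal class="org.apache.jackrabbit.core.journal.FileJournal">
  <param name="revision" value="${rep.home}/revision.log"/>
  <param name="directory" value="/share/journal"/>
</Journal>
```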
There are three journal classes:
- org.apache.jackrabbit.core.journal.FileJournal (file based)
- org.apache.jackrabbit.core.journal.DatabaseJournal (database based)
- org.apache.jackrabbit.core.journal.OracleDatabaseJournal (for Oracle databases)
If you use Oracle, you need to use the OracleDatabaseJournal; the plain DatabaseJournal will not work.
The database journal is configured through the following properties:
- revision: location of the cluster node's revision file
- driver: JDBC driver class name
- url: JDBC URL
- user: user name
- password: password
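Put together, a database journal element looks like this (driver, URL, and credentials are placeholders):

```xml
<Journal class="org.apache.jackrabbit.core.journal.DatabaseJournal">
  <param name="revision" value="${rep.home}/revision.log"/>
  <param name="driver" value="org.postgresql.Driver"/>
  <param name="url" value="jdbc:postgresql://dbhost/journaldb"/>
  <param name="user" value="user"/>
  <param name="password" value="password"/>
</Journal>
```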
Sample Cluster Configurations
Database (Complete Example)
A sample repository.xml file that is using a clustered H2 database for all data (file system, data store, persistence managers, versioning, journal):
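A condensed sketch of such a repository.xml (security and search sections omitted; the clustered H2 URL, schema prefixes, and the H2PersistenceManager class are assumptions based on recent Jackrabbit and H2 versions):

```xml
<Repository>
  <!-- shared repository-level file system -->
  <FileSystem class="org.apache.jackrabbit.core.fs.db.DbFileSystem">
    <param name="driver" value="org.h2.Driver"/>
    <param name="url" value="jdbc:h2:tcp://server1,server2/~/jackrabbit"/>
    <param name="schemaObjectPrefix" value="fs_"/>
  </FileSystem>
  <!-- shared data store for large binaries -->
  <DataStore class="org.apache.jackrabbit.core.data.db.DbDataStore">
    <param name="driver" value="org.h2.Driver"/>
    <param name="url" value="jdbc:h2:tcp://server1,server2/~/jackrabbit"/>
  </DataStore>
  <Workspace name="${wsp.name}">
    <!-- workspace file system stays node-local -->
    <FileSystem class="org.apache.jackrabbit.core.fs.local.LocalFileSystem">
      <param name="path" value="${wsp.home}"/>
    </FileSystem>
    <PersistenceManager class="org.apache.jackrabbit.core.persistence.pool.H2PersistenceManager">
      <param name="driver" value="org.h2.Driver"/>
      <param name="url" value="jdbc:h2:tcp://server1,server2/~/jackrabbit"/>
      <param name="schemaObjectPrefix" value="${wsp.name}_"/>
    </PersistenceManager>
  </Workspace>
  <Versioning rootPath="${rep.home}/version">
    <FileSystem class="org.apache.jackrabbit.core.fs.local.LocalFileSystem">
      <param name="path" value="${rep.home}/version"/>
    </FileSystem>
    <PersistenceManager class="org.apache.jackrabbit.core.persistence.pool.H2PersistenceManager">
      <param name="driver" value="org.h2.Driver"/>
      <param name="url" value="jdbc:h2:tcp://server1,server2/~/jackrabbit"/>
      <param name="schemaObjectPrefix" value="version_"/>
    </PersistenceManager>
  </Versioning>
  <!-- shared journal; the id must differ on the other cluster node -->
  <Cluster id="node1" syncDelay="2000">
    <Journal class="org.apache.jackrabbit.core.journal.DatabaseJournal">
      <param name="revision" value="${rep.home}/revision.log"/>
      <param name="driver" value="org.h2.Driver"/>
      <param name="url" value="jdbc:h2:tcp://server1,server2/~/jackrabbit"/>
    </Journal>
  </Cluster>
</Repository>
```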
For details about H2 clustering see the H2 documentation.
This section contains some sample cluster configurations. The first uses a file-based journal implementation, where the journal files are created on a share exported via NFS. Please note that for high availability, the NFS service itself must be highly available (e.g. backed by a clustered file system).
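A sketch of this file journal configuration (the NFS mount path is a placeholder):

```xml
<Cluster id="node1" syncDelay="5000">
  <Journal class="org.apache.jackrabbit.core.journal.FileJournal">
    <!-- the revision file is private to this node -->
    <param name="revision" value="${rep.home}/revision.log"/>
    <!-- the journal directory is on the shared NFS export -->
    <param name="directory" value="/nfs/myserver/myjournal"/>
  </Journal>
</Cluster>
```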
In the next configuration, the journal is stored in an Oracle database, using a sync delay of 2 seconds (2000 milliseconds):
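A sketch of this Oracle journal configuration (host, SID, and credentials are placeholders):

```xml
<Cluster id="node1" syncDelay="2000">
  <Journal class="org.apache.jackrabbit.core.journal.OracleDatabaseJournal">
    <param name="revision" value="${rep.home}/revision.log"/>
    <param name="driver" value="oracle.jdbc.driver.OracleDriver"/>
    <param name="url" value="jdbc:oracle:thin:@oraclehost:1521:xe"/>
    <param name="user" value="user"/>
    <param name="password" value="password"/>
  </Journal>
</Cluster>
```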
In the following configuration, the journal is stored in a PostgreSQL database, accessed via JNDI (see also UsingJNDIDataSource):
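A sketch of the JNDI variant, assuming the common Jackrabbit convention of passing javax.naming.InitialContext as the driver and the JNDI name as the URL (the JNDI name and the databaseType parameter are assumptions):

```xml
<Cluster id="node1">
  <Journal class="org.apache.jackrabbit.core.journal.DatabaseJournal">
    <param name="revision" value="${rep.home}/revision.log"/>
    <!-- look up the data source via JNDI instead of a JDBC driver -->
    <param name="driver" value="javax.naming.InitialContext"/>
    <param name="url" value="java:comp/env/jdbc/journal"/>
    <param name="databaseType" value="postgresql"/>
  </Journal>
</Cluster>
```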
Note: the journal implementation classes have been refactored in Jackrabbit 1.3. In earlier versions, journal implementations resided in the package org.apache.jackrabbit.core.cluster.
Persistence Manager Configuration
All cluster nodes must point to the same persistence location. For performance reasons, only information identifying the modified items is stored in the journal. This implies that all cluster nodes must be configured with the same persistence manager and persistence location, because each needs access to the items' actual content. The persistence manager must be transactional and must support concurrent access from multiple processes. When using Jackrabbit, one option is to use a database persistence manager backed by a database that supports concurrent access. The file-system-based persistence managers in Jackrabbit are not transactional and do not support concurrent access; Apache Derby does not support concurrent access in embedded mode. The following sample shows a workspace's persistence manager configuration using an Oracle database:
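A sketch of such a configuration in workspace.xml (host, SID, credentials, and schema prefix are placeholders):

```xml
<PersistenceManager class="org.apache.jackrabbit.core.persistence.pool.OraclePersistenceManager">
  <param name="driver" value="oracle.jdbc.driver.OracleDriver"/>
  <param name="url" value="jdbc:oracle:thin:@oraclehost:1521:xe"/>
  <param name="user" value="user"/>
  <param name="password" value="password"/>
  <!-- a distinct prefix per workspace keeps tables apart in one schema -->
  <param name="schemaObjectPrefix" value="${wsp.name}_"/>
</PersistenceManager>
```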
Previous versions of Jackrabbit that do not support this persistence manager may need to use org.apache.jackrabbit.core.persistence.db.OraclePersistenceManager.
Data Store Configuration
All cluster nodes must point to the same data store location. The data store should be used to store large binaries, and all cluster nodes need to access the same data store. When not using a data store, you need to set the parameter externalBLOBs to false so that large binaries are stored in the persistence manager. The file system blob store does not support clustering, because it uses a local directory.
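A shared data store can be a database (reachable from every node) or a file data store on a shared mount. A sketch of the database variant (driver, URL, and credentials are placeholders):

```xml
<DataStore class="org.apache.jackrabbit.core.data.db.DbDataStore">
  <param name="driver" value="org.postgresql.Driver"/>
  <param name="url" value="jdbc:postgresql://dbhost/datastore"/>
  <param name="user" value="user"/>
  <param name="password" value="password"/>
</DataStore>
```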
Concurrent Write Behavior
When multiple cluster nodes write to the same nodes, those nodes should be locked first. If they are not locked, the operation may fail if the nodes were updated concurrently (or shortly before). As an example, the following sequence may fail on session 2 (session 1 operates on cluster node 1, session 2 on cluster node 2), even if the session 2 operations are executed after session1.save():
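An illustrative Java sketch of such a failing sequence (not standalone-runnable; it assumes two open sessions against different cluster nodes and an existing node /a):

```java
// session1 runs against cluster node 1
Node a1 = session1.getRootNode().getNode("a");
a1.setProperty("x", 1);
session1.save();

// session2 runs against cluster node 2, after session1.save() but
// possibly before cluster node 2 has synced the journal
Node a2 = session2.getRootNode().getNode("a");
a2.setProperty("x", 2);
session2.save();   // may fail, e.g. with InvalidItemStateException
```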
The best solution is to use locking. If the session 2 operations are guaranteed to be executed after all session 1 operations, another solution is to call session.refresh() first; however, this only helps if cluster sync on refresh is enabled:
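Both options can be sketched as follows (illustrative, not standalone-runnable; the node path is a placeholder, and note that the lock must be open-scoped, since session-scoped locks have no effect on other cluster nodes, see JCR-1173):

```java
// Option 1: lock the node before modifying it (node must be mix:lockable)
Node a2 = session2.getRootNode().getNode("a");
a2.lock(false, false);   // isDeep=false, isSessionScoped=false (open-scoped)
try {
    a2.setProperty("x", 2);
    session2.save();
} finally {
    a2.unlock();
}

// Option 2: force local state to be refreshed before reading,
// relying on cluster sync being triggered on refresh
session2.refresh(true);
Node a2b = session2.getRootNode().getNode("a");
```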