All Classes and Interfaces
Class
Description
Abstract base implementation of the JackrabbitAccessControlList interface.
Default implementation of the JackrabbitAccessControlManager interface.
Abstract implementation of the AuthorizableAction interface that doesn't perform any action.
Abstract base class for Blob implementations.
An abstract data store that splits the binaries in relatively small blocks, so that each block fits in memory.
A block id.
The data for a block.
Abstract base class for providing cache statistics via the CacheStatsMBean.
Abstract base class for CheckpointMBean implementations.
Abstract base class for ChildNodeEntry implementations.
Abstract configuration class that is based on a bean map.
Implements DataRecord.
The storage implementation for tar files.
Abstract implementation of the GroupAction interface that doesn't perform any action.
Abstract implementation of the LoginModule interface that can act as base class for login modules that aim to authenticate subjects against information stored in the content repository.
AbstractMutableTree extends AbstractTree with implementations for most write methods of Tree.
Abstract base class for NodeState implementations.
Abstract base class for PropertyState implementations.
AbstractRebaseDiff serves as base for rebase implementations.
Abstraction for Segment-Tar based backends.
AbstractServiceTracker is a base class for composite components that dynamically look up the available component services from the whiteboard.
Cache files locally and stage files locally for async uploads.
AbstractTree provides default implementations for most read methods of Tree.
The AccessControlAction allows setting up permissions upon creation of a new authorizable, namely the privileges the new authorizable should be granted on its own 'home directory', represented by the new node associated with that new authorizable.
Constants for the default access control management implementation and for built-in access control related node types.
ProtectedNodeImporter implementation that handles access control lists, entries and restrictions.
This implementation of AccessControlManager delegates back to a delegatee, wrapping each call into a SessionOperation closure.
Default implementation of the JackrabbitAccessControlManager interface.
AccessControlValidatorProvider aimed to provide a root validator that makes sure access control related content modifications (adding, modifying and removing access control policies) are valid according to the constraints defined by this access control implementation.
AccessManager
Default implementation of the JackrabbitAccessControlEntry interface.
EventTypeFilter filters based on the access rights of the observing session.
This EventFilter implementation excludes events for child nodes of added nodes.
Principal used to mark an administrator.
This class acts as the base class for the implementations of the first normalization of the informative content in the DFR framework.
Implementation used when there is no aftereffect.
Model of the information gain based on the ratio of two Bernoulli processes.
Model of the information gain based on Laplace's law of succession.
Extension of the PermissionProvider interface that allows it to be used in combination with other provider implementations.
An AggregatingDescriptors is an implementation of Descriptors that aggregates multiple Descriptors (which are provided dynamically via a whiteboard tracker).
Marker interface intended to extend a RestrictionProvider to make it aware of its aggregated nature in a composite when it comes to evaluating the validity of restrictions.
Tracks a prefetch window for the AOT downloader.
Implementation of the PermissionProvider interface that grants full permission everywhere.
This exception is thrown when there is an attempt to access something that has already been closed.
An Analyzer builds TokenStreams, which analyze text.
Deprecated. This implementation class will be hidden in Lucene 5.0.
Deprecated. This implementation class will be hidden in Lucene 5.0.
Strategy defining how TokenStreamComponents are reused per call to Analyzer.tokenStream(String, java.io.Reader).
This class encapsulates the outer components of a token stream.
Extension to Analyzer suitable for Analyzers which wrap other Analyzers.
The implementation of the corresponding JCR interface.
An AND condition.
The extension of StandardMBean that will automatically provide JMX metadata through annotations.
This ThreeWayConflictHandler implementation resolves conflicts to ThreeWayConflictHandler.Resolution.THEIRS and in addition marks nodes where a conflict occurred with the mixin rep:MergeConflict.
Utility class to buffer a list of signed longs in memory.
Utility class to buffer a list of signed longs in memory.
A node state diff handler that applies all reported changes as-is to the given node builder.
This Blob implementation is based on an array of bytes.
Methods for manipulating arrays.
A factory for syntax tree elements.
A visitor to access all elements.
The base class to visit all elements.
This class is responsible for creating and deleting checkpoints asynchronously.
Base class for DocumentProcessor implementations that create tasks executed by an executor service.
Index update callback that tries to raise the async status flag when the first index change is detected.
A DocumentProcessor that processes NodeStates.
Manages a node as Atomic Counter: a node which will handle at low level a protected property (AtomicCounterEditor.PROP_COUNTER) in an atomic way.
Provides an instance of AtomicCounterEditor.
AtomicReader is an abstract class, providing an interface for accessing an index.
IndexReaderContext for AtomicReader instances.
Base interface for attributes.
Base class for Attributes that can be added to an AttributeSource.
This interface is used to reflect contents of AttributeSource or AttributeImpl.
An AttributeSource contains a list of different AttributeImpls, and methods to add and get them.
An AttributeFactory creates instances of AttributeImpls.
This class holds the state of an AttributeSource.
The Authentication interface defines methods to validate Credentials during the login step of the authentication process.
Interface for the authentication setup.
Default implementation of the AuthenticationConfiguration with the following characteristics: LoginContextProvider returns the default implementation of LoginContextProvider that handles standard JAAS based logins and deals with pre-authenticated subjects.
AuthInfo instances provide access to information related to authentication and authorization of a given content session.
Default implementation of the AuthInfo interface.
The AuthorizableAction interface provides an implementation specific way to execute additional validation or write tasks upon User creation, Group creation, Authorizable removal and User password modification.
AuthorizableActionProvider is used to provide AuthorizableActions for each instance of UserManager.
AuthorizableExistsException
The AuthorizableNodeName is in charge of generating a valid node name from a given authorizable ID.
Default implementation of the AuthorizableNodeName interface that uses the specified authorizable identifier as node name, escaping any illegal JCR chars.
The different authorizable types.
The AuthorizableTypeException signals an Authorizable type mismatch.
Configuration for access control management and permission evaluation.
Default implementation of the AccessControlConfiguration.
Finite-state automaton with regular expression operations.
Automaton provider for RegExp.toAutomaton(AutomatonProvider).
A Query that will match terms against a finite-state machine.
Interface to identify a given SyncHandler as being aware of the optional AutoMembershipConfig.
Optional extension of the DefaultSyncConfig.Authorizable.getAutoMembership() that allows defining conditional auto-membership based on the nature of a given Authorizable.
Implementation of the user management that allows setting the autosave flag.
Calculate the final score as the average score of all payloads seen.
Perform an offline compaction of an existing AWS Segment Store.
Collect options for the Compact command.
Perform a full-copy of repository data at segment level.
Collect options for the AwsSegmentCopy command.
Utility class for common stuff pertaining to tooling.
Provides access to the blob metadata.
Collect options for the Check command.
Perform an offline compaction of an existing Azure Segment Store.
Collect options for the Compact command.
Utility class for parsing Oak Segment Azure configuration (e.g.
Backend using a remote Azure Segment Store.
An observer that uses a change queue and a background thread to forward content changes to another observer.
Perform a backup of a segment store into a specified folder.
Collect options for the Backup command.
Base class for implementing CompositeReaders based on an array of sub-readers.
Base implementation for a concrete Directory.
Construction of basic automata.
This class acts as the base class for the specific basic model implementations in the DFR framework.
Limiting form of the Bose-Einstein model.
Implements the approximation of the binomial model with the divergence for DFR.
Geometric as limiting form of the Bose-Einstein model.
An approximation of the I(ne) model.
The basic tf-idf model of randomness.
Tf-idf model of randomness, based on a mixture of Poisson and inverse document frequency.
Implements the Poisson approximation for the binomial model for DFR.
Basic automata operations.
Utility BlobStore implementation to be used in tooling that can work with a FileStore without the need of the DataStore being present locally.
Stores all statistics commonly used by ranking methods.
This Blob implementation is based on an underlying Binary.
A per-document byte[].
Field that stores a per-document BytesRef value.
This extension interface provides a mechanism whereby a client can download a Binary directly from a storage location.
Specifies the options to be used when obtaining a direct download URI via BinaryDownload.getURI(BinaryDownloadOptions).
Used to build an instance of BinaryDownloadOptions with the options set as desired by the caller.
A binary id.
An index of binary references.
A consumer of entries from a binary references index.
Maintains the transient state of a binary references index, formats it and serializes it.
Collects the total binary size (references to the datastore) per path.
Collects the histogram of binary sizes (embedded binaries and references to the datastore).
Describes uploading a binary through HTTP requests in a single or multiple parts.
Specifies the options to be used when requesting direct upload URIs via JackrabbitValueFactory.initiateBinaryUpload(long, int, BinaryUploadOptions).
Used to build an instance of BinaryUploadOptions with the options set as desired by the caller.
The implementation of the corresponding JCR interface.
A bind variable.
FunctionalInterface to consume metric stats for update/remove operations.
Interface for Bitset-like structures.
Bits impl of the specified length with all bits set.
Bits impl of the specified length with no bits set.
This implementation supplies a filtered DocIdSet, that excludes all docids which are not in a Bits instance.
A variety of high efficiency bit twiddling routines.
Immutable representation of a binary value of finite length.
Extension interface applied to a class that indicates that the class implements the direct upload and direct download feature for Blobs.
Download options to be provided to a call to BlobAccessProvider.getDownloadURI(Blob, BlobDownloadOptions).
Interface for blob garbage collectors.
Default implementation of BlobGCMBean based on a BlobGarbageCollector.
MBean for starting and monitoring the progress of blob garbage collection.
Blob serializer which serializes blobs depending on type. In-memory blobs (having contentIdentity as null) would be serialized as a base64 encoded string.
Tracks the blob ids available or added in the blob store using the BlobIdTracker.BlobIdStore.
Tracking any active deletions store for managing the blob reference.
Options while writing blobs to the blob store / data store.
Specifies the upload type for the blob.
An iterator over all referenced binaries.
Interface to abstract out the low-level details of retrieving blob references from different NodeStores.
Customizable mechanism for mapping Blob instances to corresponding serialization identifiers.
BlobStoreStatsCollector receives callbacks when blobs are written and read from the BlobStore.
An interface to store and read large binary objects.
A blob implementation.
An input stream to simplify reading from a store.
Track the blob ids.
Interface to be implemented by a data store which can support local blob id tracking.
An object containing information needed to complete a direct binary upload.
Upload options to be provided to a call to BlobAccessProvider.initiateBlobUpload(long, int, BlobUploadOptions).
Provides random access to a stream written with BlockPackedWriter.
Reader for sequences of longs written with BlockPackedWriter.
A writer for large sequences of longs.
Holds all state required for PostingsReaderBase to produce a DocsEnum without re-seeking the terms dict.
A block-based terms index and dictionary that assigns terms to variable length blocks according to how they share prefixes.
BlockTree statistics for a single field returned by BlockTreeTermsReader.FieldReader.computeStats().
Block-based terms index and dictionary writer.
A Bloom filter implementation.
BM25 Similarity.
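The Bloom filter entry above names a classic probabilistic set: k hash functions map each key to k bit positions, so membership tests may report false positives but never false negatives. As an illustration of the idea only (not the listed class; the name SimpleBloomFilter and the two-hash position scheme are assumptions for this sketch):

```java
import java.util.BitSet;

// Illustrative Bloom filter sketch. Each added key sets k bits; a key
// "might" be contained only if all k of its bits are set.
public class SimpleBloomFilter {
    private final BitSet bits;
    private final int size;
    private final int k;

    public SimpleBloomFilter(int size, int k) {
        this.bits = new BitSet(size);
        this.size = size;
        this.k = k;
    }

    // Derive k positions from two base hashes (Kirsch-Mitzenmacher style).
    private int position(String key, int i) {
        int h1 = key.hashCode();
        int h2 = Integer.rotateLeft(h1, 16) ^ 0x9E3779B9;
        return Math.floorMod(h1 + i * h2, size);
    }

    public void add(String key) {
        for (int i = 0; i < k; i++) bits.set(position(key, i));
    }

    public boolean mightContain(String key) {
        for (int i = 0; i < k; i++) {
            if (!bits.get(position(key, i))) return false;
        }
        return true;
    }
}
```

A key that was added is always reported as possibly contained; absent keys are usually, but not provably, rejected.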
A BNF visitor that generates HTML railroad diagrams.
A BNF visitor that generates BNF in HTML form.
A clause in a BooleanQuery.
Specifies how clauses are to occur in matching documents.
A Query that matches documents matching boolean combinations of other queries, e.g.
Thrown when an attempt is made to add more than BooleanQuery.getMaxClauseCount() clauses.
Add this Attribute to a TermsEnum returned by MultiTermQuery.getTermsEnum(Terms, AttributeSource) and update the boost on each returned term.
Implementation class for BoostAttribute.
The bootstrap configuration holds information about initial startup parameters like repository config and home.
A histogram that keeps a maximum number of buckets (entries).
A breadth first traversal trace.
A broadcast mechanism that is able to send and receive commands.
A listener for new messages.
Methods and constants inspired by the article "Broadword Implementation of Rank/Select Queries" by Sebastiano Vigna, January 30, 2012: algorithm 1: BroadWord.bitCount(long), count of set bits in a long; algorithm 2: BroadWord.select(long, int), selection of a set bit in a long; bytewise signed smaller <8 operator: BroadWord.smallerUpTo7_8(long, long).
This is a wrapper around ByteBuffer.
Base implementation class for buffered IndexInput.
Base implementation class for buffered IndexOutput.
Builds a minimal FST (maps an IntsRef term to an arbitrary output) from pre-sorted terms with outputs.
Expert: holds a pending (seen but not yet serialized) arc.
Expert: this is invoked by Builder whenever a suffix is serialized.
Expert: holds a pending (seen but not yet serialized) Node.
Deprecated, for removal: This API element is subject to removal in a future version. Use BundlingConfigInitializer instead.
Implements a Closeable wrapper over a LineIterator.
DataInput backed by a byte array.
DataOutput backed by a byte array.
Class that Posting and PostingVector use to write byte streams into shared fixed-size byte[] arrays.
Abstract class for allocating and freeing byte blocks.
A simple ByteBlockPool.Allocator that never recycles.
A simple ByteBlockPool.Allocator that never recycles, but tracks how much total RAM is in use.
Deprecated. Use NumericDocValuesField instead.
Automaton representation for matching UTF-8 byte[].
An FST Outputs implementation where each output is a sequence of bytes.
Represents byte[], as a slice (offset + length) into an existing byte[].
Enumerates all input (BytesRef) + output pairs in an FST.
Holds a single input (BytesRef) + output pair.
BytesRefHash is a special purpose hash-map like data-structure optimized for BytesRef instances.
Manages allocation of the per-term addresses.
A simple BytesRefHash.BytesStartArray that tracks memory allocation using a private Counter instance.
A simple iterator interface for BytesRef iteration.
Partial mapping of keys of type K to values of type V.
A cache backend that can load objects from persistent storage.
A cacheable object.
Cache wrapper exposing the number of read accesses and the number of misses of the underlying cache via the StatisticsProvider.
An asynchronous buffer of the CacheAction objects.
Constants for persisted user management related caches.
Interface for reading the membership information of a given authorizable and storing the result in a cache.
For Oak internal use only.
A builder for the cache.
Listener for items that are evicted from the cache.
Responsible for providing the set of principals for a given user.
A cache map.
In order to avoid leaking values from the metadataMap, the following order should be maintained for combining the cache and CacheMetadata: 1.
Factory for creating Principal instances based on the principal name.
Cache statistics.
An OSGi component that binds to all CacheStatsMBean instances and exposes their counters as Metrics.
A cache value.
A blob store with a cache.
Caches all docs, and optionally also scores, coming from a search, and is then able to replay them to another collector.
File system implementation of AbstractSharedCachingDataStore.
This SegmentReader implementation implements caching for strings and templates.
This class can be used if the token attributes of a TokenStream are intended to be consumed more than once.
Wraps another Filter's result and caches it.
A NodeStateDiff that cancels itself when a condition occurs.
The result of a check for a pending cancellation request.
Represents a way to check for a cancellation request.
Wrapper around the map that allows accessing the map with case-insensitive keys.
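The last entry describes a map wrapper with case-insensitive keys. A minimal sketch of that idea (a hypothetical CaseInsensitiveMap, not the Oak-internal class) normalizes every key on the way in and out:

```java
import java.util.HashMap;
import java.util.Locale;
import java.util.Map;

// Sketch of a case-insensitive key wrapper: all keys are lower-cased
// before touching the backing map, so "Foo" and "FOO" share one entry.
public class CaseInsensitiveMap<V> {
    private final Map<String, V> delegate = new HashMap<>();

    private static String normalize(String key) {
        return key == null ? null : key.toLowerCase(Locale.ROOT);
    }

    public V put(String key, V value) {
        return delegate.put(normalize(key), value);
    }

    public V get(String key) {
        return delegate.get(normalize(key));
    }

    public boolean containsKey(String key) {
        return delegate.containsKey(normalize(key));
    }
}
```

Locale.ROOT is used for normalization so lookups are not affected by the default locale (e.g. the Turkish dotless-i problem).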
A ChangeCollectorProvider can be hooked into Oak, thus enabling the collection of ChangeSets of changed items of a commit, which downstream Observers can then use at their convenience.
A ChangeDispatcher instance dispatches content changes to registered Observers.
A ChangeSet is a collection of items that have been changed as part of a commit.
Builder of a ChangeSet - only used by ChangeCollectorProvider (and tests).
A ChangeSetFilter is capable of inspecting a ChangeSet and deciding if the corresponding consumer (e.g. EventListener) is possibly interested in it or definitely not.
Automaton representation for matching char[].
Subclasses of CharFilter can be chained to filter a Reader. They can be used as Reader with additional offset correction.
An FST Outputs implementation where each output is a sequence of characters.
Utility class related to encoding characters into (UTF-8) byte sequences.
Represents char[], as a slice (offset + length) into an existing char[].
The term text of a Token.
Default implementation of CharTermAttribute.
Perform a consistency check on an existing segment store.
Collect options for the Check command.
Collect options for the Check command.
Basic tool and API to check the health of an index and write a new segments file that removes reference to problematic segments.
Returned from CheckIndex.checkIndex() detailing the health and status of the index.
Status from testing DocValues.
Status from testing field norms.
Holds the status of each segment in the index.
Status from testing stored fields.
Status from testing term index.
Status from testing stored fields.
This compactor implementation is aware of the checkpoints in the repository.
MBean for managing org.apache.jackrabbit.oak.spi.state.NodeStore#checkpoint checkpoints.
A helper class to manage checkpoints on TarMK and DocumentMK.
Helper class to access package private functionality.
Reads bytes through to a primary IndexInput, computing checksum as it goes.
Writes bytes through to a primary IndexOutput, computing checksum.
A ChildNodeEntry instance represents the child node states of a NodeState.
The implementation of the corresponding JCR interface.
The "ischildnode(...)" condition.
The implementation of the corresponding JCR interface.
The "ischildnode(...)" join condition.
This conflict handler instance takes care of properly merging conflicts caused by concurrent reorder operations.
Instances of this class can be used to compact a node state.
Initial data and logic needed for the cleanup of unused TAR entries.
Authorizable action attempting to clear all group membership before removing the specified authorizable.
Mechanism for keeping track of time at millisecond accuracy.
Fast clock implementation whose Clock.Fast.getTime() method returns instantaneously thanks to a background task that takes care of the actual time-keeping work.
A virtual clock that has no connection to the actual system time.
Java's builtin ThreadLocal has a serious flaw: it can take an arbitrarily long amount of time to dereference the things you had stored in it, even once the ThreadLocal instance itself is no longer referenced.
Convenience utility to close a list of Closeables in reverse order, suppressing all but the first exception to occur.
Implementation of the BlobStore to store blobs in a cloud blob store.
Interface for bearing cluster node specific information.
Information about a cluster node.
A document storing cluster node info.
Utility class to manage a unique cluster/repository id for the cluster.
DocumentNS-internal listener that gets invoked when a change in the clusterNodes collection (active/inactive/timed out/recovering) is detected.
The function "coalesce(..)".
Encodes/decodes an inverted index segment.
Utility class for reading and writing versioned headers.
The collection types.
Contains statistics for a collection (field).
Throw this exception in Collector.collect(int) to prematurely terminate collection of the current leaf.
Methods for manipulating (sorting) collections.
Utility methods for collections conversions.
Expert: Collectors are primarily meant to be used to gather raw results from a search, and implement sorting or custom result filtering, collation, etc.
The implementation of the corresponding JCR interface.
A result column expression.
Class containing some useful methods used by command line tools.
A higher level object representing a commit.
A Commit instance represents a set of related changes, which when applied to a base node state result in a new node state.
A CommitContext instance can be obtained from CommitInfo.getInfo() if it has been set before the merge call.
Main exception thrown by methods defined on the ContentSession interface indicating that committing a given set of changes failed.
Extension point for validating and modifying content changes.
Commit info instances associate some meta data with a commit.
This CommitHook can be used to block or delay commits for any length of time.
Resolves the commit value for a given change revision on a document.
Perform an offline compaction of an existing segment store.
Collect options for the Compact command.
Simple wrapper class for SegmentNodeState to keep track of fully and partially compacted nodes.
The CompactionWriter delegates compaction calls to the correct SegmentWriter based on GCGeneration.
The implementation of the corresponding JCR interface.
A comparison operation (including "like").
Immutable class holding compiled details for a given Automaton.
Automata are compiled into different internal forms for the most efficient execution depending upon the language they accept.
Expert: Describes the score computation for document and query, and can distinguish a match independent of a positive value.
Aggregates a collection of AuthorizableActionProviders into a single provider.
CompositeAuthorizationConfiguration that combines different authorization models.
Abstract base implementation for SecurityConfigurations that can combine different implementations.
A CompositeConflictHandler delegates conflict handling to multiple backing handlers.
Composite implementation of the CredentialsSupport interface that handles multiple providers.
CompositeDocumentProcessor ...
Aggregation of a list of editors into a single editor.
Aggregation of a list of editor providers into a single provider.
Composite commit hook.
Aggregation of a list of editor providers into a single provider.
Many methods in this class call themselves recursively, and are susceptible to infinite recursion if a composite indexer contains itself, directly or indirectly.
Composite repository initializer that delegates the CompositeInitializer.initialize(NodeBuilder) call in sequence to all the component initializers.
This IOMonitor instance delegates all calls to all IOMonitor instances registered.
Composite observer that delegates all content changes to the set of currently registered component observers.
Aggregates a list of RestrictionPatterns into a single pattern.
PrincipalConfiguration that combines different principal provider implementations that share a common principal manager implementation.
PrincipalProvider implementation that aggregates a list of principal providers into a single one.
This QueryIndexProvider aggregates a list of query index providers into a single query index provider.
Instances of this reader type can only be used to get stored fields from the underlying AtomicReaders, but it is not possible to directly retrieve postings.
IndexReaderContext for CompositeReader instances.
A composite of registrations that unregisters all its constituents upon CompositeRegistration.unregister().
Aggregates a collection of RestrictionProvider implementations into a single provider.
TokenConfiguration that combines different token provider implementations.
Aggregates a collection of TokenProviders into a single provider.
Composite repository initializer that delegates the CompositeWorkspaceInitializer.initialize(org.apache.jackrabbit.oak.spi.state.NodeBuilder, String) calls in sequence to all the component initializers.
Class for accessing a compound stream.
Offset/Length for a slice inside of a compound file.
PropertyState compression implementation with lazy parsing of the JSOP encoded value.
A StoredFieldsFormat that is very similar to Lucene40StoredFieldsFormat but compresses documents in chunks in order to improve the compression ratio.
Random-access reader for CompressingStoredFieldsIndexWriter.
Efficient index format for block-based Codecs.
A TermVectorsFormat that compresses chunks of documents together in order to improve the compression ratio.
This interface provides a default list of supported compression algorithms and some utility functions.
The enum allows disabling or enabling compression of the storage files.
A compression mode.
Simple utility class providing static methods to compress and decompress binary data for stored fields.
A data compressor.
A synchronized LRU cache.
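The last entry above is a synchronized LRU cache. One common way to sketch such a cache in plain Java (an illustrative SimpleLruCache, not the listed class) is a LinkedHashMap in access order with an eviction hook:

```java
import java.util.LinkedHashMap;
import java.util.Map;

// LRU cache sketch: accessOrder=true moves entries to the tail on get(),
// so the head is always the least recently used entry, and
// removeEldestEntry evicts it once the capacity is exceeded.
public class SimpleLruCache<K, V> {
    private final Map<K, V> map;

    public SimpleLruCache(int capacity) {
        this.map = new LinkedHashMap<K, V>(16, 0.75f, true) {
            @Override
            protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
                return size() > capacity;
            }
        };
    }

    public synchronized V get(K key) { return map.get(key); }
    public synchronized void put(K key, V value) { map.put(key, value); }
    public synchronized int size() { return map.size(); }
}
```

Synchronizing every method gives the coarse-grained thread safety the description implies; a production cache would typically use finer-grained locking or a concurrent structure.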
A MergeScheduler that runs each merge using a separate thread.
Abstract base implementation for the various security configurations.
ConfigurationParameters is a convenience class that allows typed access to configuration properties.
Helper class for configuration parameters that denote a "duration", such as a timeout or expiration time.
Utility to create Configurations for built-in LoginModule implementations.
This implementation of AbstractRebaseDiff implements a NodeStateDiff, which performs the conflict handling as defined in NodeStore.rebase(NodeBuilder) on the Oak SPI state level by annotating conflicting items with conflict markers.
Deprecated. Use ThreeWayConflictHandler instead.
This commit hook implementation is responsible for resolving conflicts.
Enum to define various types of conflicts.
Validator which checks for the presence of conflict markers in the tree and fails the commit if any are found.
TODO document
Consistency check on a NodeDocument.
Callback interface for result of a consistency check.
ConsistencyCheck ...
A command line console.
Light weight session to a NodeStore, holding context information.
Stats for caching data store.
TODO document
Some useful constants.
A query that wraps another query or a filter and simply returns a constant score equal to the query boost for every document that matches the filter or query.
The implementation of the corresponding JCR interface.
The base class for constraints.
Oak content repository.
NodeStore-based implementation of the ContentRepository interface.
Authentication session for accessing a content repository.
Context represents item related information in relation to a dedicated SecurityConfiguration.
Default implementation of the Context interface that always returns false.
Extension to IndexUpdateCallback which also provides access to IndexingContext.
Utility class that runs a thread to manage periodic reopens of a ReferenceManager, with methods to wait for specific index changes to become visible.
Utility class defining the conversions that take place between PropertyStates of different types.
A converter converts a value to its representation as a specific target type.
MBean for managing the copy-on-write node store.
This exception is thrown when Lucene detects an inconsistency in the index.
Simple counter class.
A NodeStateDiff implementation that counts the differences between two node states, including their sub tree.
A count-min sketch implementation.
The copy-on-write (COW) node store implementation allows temporarily switching the repository into the "testing" mode, in which all the changes are stored in a volatile storage, namely the MemoryNodeStore.
CreateGarbageCommand creates garbage nodes in the repository in order to allow for testing fullGC functionality.
Helper class to generate garbage for testing purposes.
Base class to update the metrics for DocumentStoreStatsCollector.doneCreate(long, Collection, List, boolean) for the underlying DocumentStore.
Callback implementation to retrieve Credentials.
Simple helper interface that allows to easily plug in support for additional or custom Credentials implementations during authentication.
Validator which detects references crossing the mount boundaries.
Interface that allows excluding certain principals from the CUG evaluation.
Default implementation of the CugExclude interface that excludes the following principal classes from CUG evaluation: AdminPrincipals, SystemPrincipal, SystemUserPrincipal.
Extension of the default CugExclude implementation that allows specifying additional principal names to be excluded from CUG evaluation.
Denies read access for all principals except for the specified principals.
A cursor to read a number of nodes sequentially.
A custom login module for test purposes.
Implements a LoginModuleFactory that creates CustomLoginModules and allows configuring login modules via OSGi config.
Custom principal configuration that is disabled by default.
EXERCISE: complete the implementation
Abstract base class for performing read operations of Lucene's low-level
data types.
Abstract base class for performing write operations of Lucene's low-level
data types.
Contains download options for downloading a data record directly from a
storage location using the direct download feature.
General exception thrown when a binary upload being made via
DataRecordAccessProvider.initiateDataRecordUpload(long, int)
and
DataRecordAccessProvider.completeDataRecordUpload(String)
cannot be completed.Represents an upload token returned by
DataRecordAccessProvider.initiateDataRecordUpload(long, int)
and
used in subsequent calls to DataRecordAccessProvider.completeDataRecordUpload(String)
.BlobStore wrapper for DataStore.
Command to upgrade JR2 DataStore cache.
Utility methods to upgrade Old DataStore cache
CachingDataStore
.Common utility methods used for DataStore caches.
Command to check data store consistency and also optionally retrieve ids
and references.
Command to check data store consistency and also optionally retrieve ids
and references.
Command to concurrently download blobs from an azure datastore using sas token authentication.
Garbage collector for DataStore.
Extension to
DataStoreUtils
to enable S3 / AzureBlob extensions for cleaning and initialization.
Provides support for converting dates to strings and vice versa.
Specifies the time granularity.
Print debugging information about segments, node records and node record
ranges.
Collect options for the
DebugSegments
command.
Print debugging information about a segment store.
Collect options for the
DebugStore
command.
Print information about one or more TAR files from an existing segment store.
Collect options for the
DebugTars
command.
DebugTimer
...
Predicate used to filter authorizables based on their declared group membership.
A decompressor.
Scans a FlatFileStore for non-inlined blobs in nodes matching a given pattern and downloads them from the blob store.
Default implementation of the
AuthorizableActionProvider
interface
that allows configuring all actions provided by Oak.
Deprecated.
Use
DefaultThreeWayConflictHandler
instead.
Editor that does nothing by default and doesn't recurse into subtrees.
Default implementation of
EventHandler
that does nothing.
This
IOTraceWriter
implementation provides persistence
through a Writer
instance.
MoveValidator
that does nothing by default and doesn't recurse into subtrees.
Node state diff handler that by default does nothing.
Converts nodes, properties, values, etc.
Builder for building
DefaultSegmentWriter
instances.
Expert: Default scoring implementation which
encodes
norm values as a single byte before being stored.
DefaultSyncConfig
defines how users and groups from an external source are synced into the repository using
the DefaultSyncHandler
.Base config class for users and groups
Group specific config
User specific config.
DefaultSyncConfig
defines how users and groups from an external source are synced into the repository using
the DefaultSyncHandler
.
Internal implementation of the sync context.
Implements a simple synced identity that maps an authorizable id to an external ref.
DefaultSyncHandler
implements a sync handler that synchronizes users and groups from an external identity
provider with the repository users.
Implements a simple sync result with an id and a status.
This implementation of a
ThreeWayConflictHandler
always returns the
same resolution.
Validator that does nothing by default and doesn't recurse into subtrees.
DefinitionProvider...
NodeState wrapper which wraps another NodeState (mostly SegmentNodeState)
so as to expose it as an
AbstractDocumentNodeState
by extracting
the meta properties which are stored as hidden properties.
This
GCMonitor
implementation simply delegates all its calls
to registered monitors.
This
EventFilter
implementation excludes events for child nodes
of removed nodes.
A depth first traversal trace.
Utility methods for
Deque
conversions.
Deprecated.
Use
BinaryDocValuesField
instead.
The implementation of the corresponding JCR interface.
The "isdescendantnode(...)" condition.
The implementation of the corresponding JCR interface.
The "isdescendantnode(...)" join condition.
Produces a description that will be used by JMX metadata.
Repository descriptors interface that is used to support providing the repository descriptors of
Repository
Implements the divergence from randomness (DFR) framework
introduced in Gianni Amati and Cornelis Joost Van Rijsbergen.
Shows the differences between two head states.
Collect options for the
Diff
command.
Abstract base class for observers that use a content diff to determine
what changed between two consecutive observed states of the repository.
A Directory is a flat list of files.
DirectoryReader is an implementation of
CompositeReader
that can read indexes in a Directory
.
A query that generates the union of documents produced by its subqueries, and that scores each document with the maximum
score for that document as produced by any subquery, plus a tie-breaking increment for any additional matching subqueries.
This
IOMonitor
implementation registers the following monitoring endpoints
with the Metrics library if available:
DiskCacheIOMonitor.OAK_SEGMENT_CACHE_DISK_SEGMENT_READ_BYTES
:
a meter metric for the number of bytes read from the segment disk cache
DiskCacheIOMonitor.OAK_SEGMENT_CACHE_DISK_SEGMENT_WRITE_BYTES
:
a meter metric for the number of bytes written to the segment disk cache
DiskCacheIOMonitor.OAK_SEGMENT_CACHE_DISK_SEGMENT_READ_TIME
:
a timer metric for the time spent reading from the segment disk cache
DiskCacheIOMonitor.OAK_SEGMENT_CACHE_DISk_SEGMENT_WRITE_TIME
:
a timer metric for the time spent writing to the segment disk cache
DiskCacheIOMonitor.OAK_SEGMENT_CACHE_DISK_CACHE_SIZE_CALCULATED
:
a histogram for the calculated segment disk cache size
DiskCacheIOMonitor.OAK_SEGMENT_CACHE_DISK_CACHE_SIZE_CHANGE
:
a histogram for the segment disk cache size change
Collects the number and size of distinct binaries.
A histogram of distinct binaries.
The probabilistic distribution used to model term occurrence
in information-based models.
Log-logistic distribution.
The smoothed power-law (SPL) distribution for the information-based framework
that is described in the original paper.
Simple DocIdSet and DocIdSetIterator backed by a BitSet
A DocIdSet contains a set of doc ids.
This abstract class defines methods to iterate over a set of non-decreasing
doc ids.
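The non-decreasing doc-id contract described above can be sketched with a JDK-only, array-backed iterator (an illustrative stand-in, not Lucene's DocIdSetIterator; the class and method names here are hypothetical):

```java
// Minimal sketch of a non-decreasing doc-id iterator: nextDoc() returns
// ids in ascending order and a sentinel when exhausted; advance(target)
// skips forward to the first id >= target.
class IntArrayDocIdIterator {
    static final int NO_MORE_DOCS = Integer.MAX_VALUE;  // sentinel, as in Lucene's contract
    private final int[] docs;   // must be sorted ascending
    private int pos = -1;

    IntArrayDocIdIterator(int[] sortedDocs) { this.docs = sortedDocs; }

    /** Returns the next doc id, or NO_MORE_DOCS when exhausted. */
    int nextDoc() {
        pos++;
        return pos < docs.length ? docs[pos] : NO_MORE_DOCS;
    }

    /** Advances to the first doc id >= target (naive linear skip). */
    int advance(int target) {
        int doc;
        while ((doc = nextDoc()) < target) { /* keep skipping */ }
        return doc;
    }
}
```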
Also iterates through positions.
Iterates through the documents and term freqs.
This class enables fast access to multiple term ords for
a specified field across all docIDs.
A range filter built on top of a cached multi-valued term field (in
FieldCache
).
Rewrites MultiTermQueries into a filter, using DocTermOrds for term enumeration.
A document corresponds to a node stored in the DocumentNodeStore.
Documents are the unit of indexing and search.
Implementation of
BlobReferenceRetriever
for the DocumentNodeStore.
Extension point which needs to be registered with the Whiteboard
attached to Options.
CheckpointMBean
implementation for the DocumentNodeStore
.
.The DocumentDiscoveryLiteService is taking care of providing a repository
descriptor that contains the current cluster-view details.
A
NodeState
implementation for the DocumentNodeStore
.
A list of children for a node.
Implementation of a NodeStore on
DocumentStore
.
A generic builder for a
DocumentNodeStore
.
Helper class to access package-private methods of DocumentNodeStore and other
classes in this package.
The OSGi service to start/stop a DocumentNodeStore instance.
Defines an interface to process
NodeDocument
instances.
The interface for the backend storage for documents.
DocumentStoreCheck
...
A
StoredFieldVisitor
that creates a Document
containing all stored fields, or only the specific
requested fields provided to DocumentStoredFieldVisitor(Set)
.
DocumentStoreException
is a runtime exception for
DocumentStore
implementations to signal unexpected problems like
a communication exception.
Document Store statistics helper class.
Inventory printer for
DocumentStore.getStats()
.
Abstract API that consumes numeric, binary and
sorted docvalues.
Encodes/decodes per-document values.
Abstract API that produces numeric, binary and
sorted docvalues.
A simple implementation of
DocValuesProducer.getDocsWithField(org.apache.lucene.index.FieldInfo)
that
returns true
if a document has an ordinal >= 0.
A simple implementation of
DocValuesProducer.getDocsWithField(org.apache.lucene.index.FieldInfo)
that
returns true
if a document has any ordinals.
Simple concurrent LRU cache, using a "double barrel"
approach where two ConcurrentHashMaps record entries.
Object providing clone(); the key class must subclass this.
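The "double barrel" idea described above can be sketched with plain JDK types (an illustrative approximation, not the Lucene implementation; the class name is hypothetical): two ConcurrentHashMaps act as barrels, and when the primary fills up the older barrel is dropped wholesale, so no per-entry eviction bookkeeping is needed.

```java
import java.util.concurrent.ConcurrentHashMap;

// Sketch: lookups check the primary barrel first; hits found in the
// secondary barrel are promoted, so recently used entries survive swaps.
class DoubleBarrelCache<K, V> {
    private final int maxPerBarrel;
    private volatile ConcurrentHashMap<K, V> primary = new ConcurrentHashMap<>();
    private volatile ConcurrentHashMap<K, V> secondary = new ConcurrentHashMap<>();

    DoubleBarrelCache(int maxPerBarrel) { this.maxPerBarrel = maxPerBarrel; }

    V get(K key) {
        V v = primary.get(key);
        if (v == null) {
            v = secondary.get(key);
            if (v != null) primary.put(key, v);  // promote recently used entry
        }
        return v;
    }

    void put(K key, V value) {
        if (primary.size() >= maxPerBarrel) {    // swap barrels, dropping the oldest one
            secondary = primary;
            primary = new ConcurrentHashMap<>();
        }
        primary.put(key, value);
    }
}
```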
Syntactic sugar for encoding doubles as NumericDocValues
via
Double.doubleToRawLongBits(double)
.
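The bit-level encoding behind such fields can be illustrated with a JDK-only sketch. Double.doubleToRawLongBits is lossless, but raw IEEE 754 bits do not order correctly for negative values; flipping the magnitude bits of negatives restores a total order as signed longs. DoubleBits below is a hypothetical helper, not Lucene's API:

```java
// Sketch: encode a double as a long whose signed ordering matches the
// numeric ordering of the original doubles, and decode it back exactly.
class DoubleBits {
    static long toSortableLong(double d) {
        long bits = Double.doubleToRawLongBits(d);
        if (bits < 0) bits ^= 0x7fffffffffffffffL;  // flip magnitude bits for negatives
        return bits;
    }

    static double fromSortableLong(long l) {
        if (l < 0) l ^= 0x7fffffffffffffffL;        // undo the flip
        return Double.longBitsToDouble(l);
    }
}
```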
Field that indexes
double
values
for efficient range filtering and sorting.
Generic concurrent file downloader which uses Java NIO channels to potentially leverage OS-internal optimizations.
Aggregates statistics when downloading from Mongo with two threads.
DataStore implementation which creates empty files matching given identifier.
Broadcast configuration.
The base class for dynamic operands.
The base class for dynamic operands (such as a function or property).
Extension of the
DefaultSyncContext
that doesn't synchronize group
membership of new external users into the user management of the repository.
Content change editor.
This commit hook implementation processes changes to be committed
using the
Editor
instance provided by the EditorProvider
passed to the constructor.
Extension point for content change editors.
EffectiveNodeTypeProvider...
A decoder for an
EliasFanoEncoder
.
A DocIdSet in Elias-Fano encoding.
Encode a non-decreasing sequence of non-negative whole numbers in the Elias-Fano encoding
that was introduced in the 1970s by Peter Elias and Robert Fano.
Determines the weight of objects based on the memory they consume.
Basic commit hook implementation that by default doesn't do anything.
Singleton instances of empty and non-existent node states, i.e.
Basic content change observer that doesn't do anything.
Permission provider implementation that does not grant any permissions.
Implementation of the
PrincipalProvider
interface that never
returns any principals.
Abstract base class for
PropertyState
implementations
providing default implementations which correspond to a property
without any value.
Wrapper around
System.getenv()
.
Helper class for comparing the equality of node states based on the
content diff mechanism.
The implementation of the corresponding JCR interface.
The "a.x = b.y" join condition.
Instances of this class represent a
Value
which couldn't be retrieved.
Utility class to escape the '\n', '\r' and '\' characters
when writing to a file and unescape them when
reading from the file.
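The escaping scheme described above can be sketched as follows (a minimal illustration; LineEscape is a hypothetical name, and the exact escape syntax of the real utility may differ):

```java
// Sketch: '\n', '\r' and '\' are escaped so each logical value stays on
// one physical line of the file; unescape reverses the transformation.
class LineEscape {
    static String escape(String s) {
        // escape the backslash first so later replacements aren't re-escaped
        return s.replace("\\", "\\\\").replace("\n", "\\n").replace("\r", "\\r");
    }

    static String unescape(String s) {
        StringBuilder out = new StringBuilder();
        for (int i = 0; i < s.length(); i++) {
            char c = s.charAt(i);
            if (c == '\\' && i + 1 < s.length()) {
                char n = s.charAt(++i);
                out.append(n == 'n' ? '\n' : n == 'r' ? '\r' : n);
            } else {
                out.append(c);
            }
        }
        return out.toString();
    }
}
```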
ETA
...
An EventAggregator can be provided via a FilterProvider
and is then used to 'aggregate' an event at creation time
(i.e. after filtering).
Event factory for generating JCR event instances that are optimized
for minimum memory overhead.
Filter for determining what changes to report to the event listener.
Continuation-based content diff implementation that generates
EventHandler
callbacks by recursing down a content diff
in a way that guarantees that only a finite number of callbacks
will be made during an EventGenerator.generate()
method call, regardless
of how large or complex the content diff is.
Handler of content change events.
MBean interface for exposing information about a registered observation
listener.
EventTypeFilter
filters based on event types as defined
by ObservationManager.addEventListener()
.
Built-in principal group that has every other principal as member.
A listener that gets notified of entries that were removed from the cache.
ExceptionResult
...
The presence of this marker interface on an
EventListener
indicates that cluster-external observation events must not be reported to that
event listener.
An instance of this class provides the context for the execution of a query,
which in essence captures a stable state of the content tree from the time
the execution context was created.
An execution plan for a join or a selector.
Utility class to properly close any ExecutorService.
Expert: Describes the score computation for document and query.
NodeStore explorer
Interface that adds stats to
BlobStatsCollector
for additional
capabilities in blob stores that are added via
DataStoreBlobStore
.
ExternalGroup defines a group that is provided by an external system.
Deprecated.
ExternalIdentity
defines an identity provided by an external system.
Constants used by the external identity management.
ExternalIdentityException
is used to notify about errors when dealing with external identities.
ExternalIdentityProvider
defines an interface to an external system that provides users and groups that
can be synced with local ones.
The external identity provider management.
ExternalIdentityRef
defines a reference to an external identity.
ExternalIDPManagerImpl
is used to manage registered external identity providers.
ExternalLoginModule
implements a LoginModule
that uses an
ExternalIdentityProvider
for authentication.
Implements a LoginModuleFactory that creates
ExternalLoginModule
instances and allows login modules to be configured
via OSGi config.
Implementation of the
PrincipalConfiguration
interface that provides
principal management for Group principals
associated with
external identities
managed outside of the scope of the repository by an
ExternalIdentityProvider
.
Source copied from a publicly available library.
Variation of ExternalSort that stores the lines read from intermediate files as byte arrays to avoid the conversion
from byte[] to String and then back.
ExternalUser defines a user provided by an external system.
A facet result column expression.
A facet result is a wrapper for
QueryResult
capable of returning information about facets
stored in the query result Row
instances.
A query result facet, composed of its label and count.
Validator that rejects all changes.
A feature toggle to control new functionality.
A feature toggle is registered with the
Whiteboard
and can be
discovered by third-party code to control the state of feature toggles.
Expert: directly create a field for a document.
Deprecated.
This is here only to ease transition from
the pre-4.0 APIs.
Specifies whether and how a field should be stored.
Deprecated.
This is here only to ease transition from
the pre-4.0 APIs.
Expert: Maintains caches of term values.
Deprecated.
Field values as 8-bit signed bytes
EXPERT: A unique Identifier/Description for each item in the FieldCache.
Placeholder indicating creation of this cache is currently in-progress.
Interface to parse doubles from document fields.
Field values as 64-bit doubles
Interface to parse floats from document fields.
Field values as 32-bit floats
Interface to parse ints from document fields.
Field values as 32-bit signed integers
Interface to parse long from document fields.
Field values as 64-bit signed long integers
Marker interface as super-interface to all parsers.
Deprecated.
Field values as 16-bit signed shorts
Base class for DocIdSet to be used with FieldCache.
A range filter built on top of a cached single term field (in
FieldCache
).
Rewrites MultiTermQueries into a filter, using the FieldCache for term enumeration.
Provides methods for sanity checking that entries in the FieldCache
are not wasteful or inconsistent.
Simple container for a collection of related CacheEntry objects that
in conjunction with each other represent some "insane" usage of the
FieldCache.
An Enumeration of the different types of "insane" behavior that
may be detected in a FieldCache.
A
Filter
that only accepts documents whose single
term value in the specified field is contained in the
provided set of allowed terms.
Expert: a FieldComparator compares hits so as to determine their
sort order when collecting the top results with
TopFieldCollector
.
Deprecated.
Sorts by ascending docID.
Parses a field's values as double (using
FieldCache.getDoubles(org.apache.lucene.index.AtomicReader, java.lang.String, boolean)
) and sorts by ascending value.
Parses a field's values as float (using
FieldCache.getFloats(org.apache.lucene.index.AtomicReader, java.lang.String, boolean)
) and sorts by ascending value.
Parses a field's values as int (using
FieldCache.getInts(org.apache.lucene.index.AtomicReader, java.lang.String, boolean)
) and sorts by ascending value.
Parses a field's values as long (using
FieldCache.getLongs(org.apache.lucene.index.AtomicReader, java.lang.String, boolean)
) and sorts by ascending value.
Base FieldComparator class for numeric types.
Sorts by descending relevance.
Deprecated.
Sorts by field's natural Term sort order, using
ordinals.
Sorts by field's natural Term sort order.
Provides a
FieldComparator
for custom field sorting.
Expert: A ScoreDoc which also contains information about
how to sort the referenced document.
Access to the Field Info file that describes document fields and whether or
not they are indexed.
DocValues types.
Controls how much information is stored in the postings lists.
Collection of
FieldInfo
instances (accessible by number or by name).
Encodes/decodes
FieldInfos
.
Codec API for reading
FieldInfos
.
Codec API for writing
FieldInfos
.
This class tracks the number and position/offset parameters of terms
being added to the index.
Wrapper to allow
SpanQuery
objects to participate in composite
single-field SpanQueries by 'lying' about their search field.
Flex API for access to fields and terms.
Abstract API that consumes terms, doc, freq, prox, offset and
payloads postings.
Abstract API that produces terms, doc, freq, prox, offset and
payloads postings.
Describes the properties of a field.
Data type of the numeric value
A
Filter
that accepts all documents that have one or more values in a
given field.
Expert: A hit queue for sorting hits by terms in more than one field.
Extension of ScoreDoc to also store the
FieldComparator
slot.
A file blob store.
Simple file utils.
Decorates the given comparator and applies the function before delegating to the decorated
comparator.
FileLineDifferenceIterator class which iterates over the difference of 2 files line by line.
A utility class that allows converting the files of a tree store into one
file (pack the files), and back from a file to a list of files (unpack the
files).
Thread-safe class tracking files to be removed.
A storage backend for the tree store that stores files on the local file
system.
The storage implementation for tar files.
Default implementation of
FileStoreBackupRestoreMBean
based on a
file.
MBean for backing up and restoring a
NodeStore
.
Builder for creating
FileStore
instances.
GCMonitor
implementation providing the file store gc status.
FileStoreMonitors are notified of any writes or deletes
performed by FileStore.
A void implementation of the
FileStoreMonitor
.
Expert: A Directory instance that switches files between
two other Directory instances.
A filter is used by the FilteringObserver to decide whether or not a content
change should be forwarded.
The filter for an index lookup that contains a number of restrictions that
are combined with AND.
Interface that allows defining the principals for which principal-based access control management and permission
evaluation can be executed.
Abstract base class for restricting which documents may
be returned during searching.
The path restriction type.
A restriction for a property.
A
FilterAtomicReader
contains another AtomicReader, which it
uses as its basic source of data, possibly transforming the data along the
way or providing additional functionality.
Base class for filtering
DocsAndPositionsEnum
implementations.
Base class for filtering
DocsEnum
implementations.
Base class for filtering
Fields
implementations.
Base class for filtering
Terms
implementations.
Base class for filtering
TermsEnum
implementations.
Builder for
FilterProvider
instances.
A codec that forwards all its method calls to another codec.
Directory implementation that delegates calls to another directory.
A FilterDirectoryReader wraps another DirectoryReader, allowing implementations
to transform or extend it.
A no-op SubReaderWrapper that simply returns the parent
DirectoryReader's original subreaders.
Factory class passed to FilterDirectoryReader constructor that allows
subclasses to wrap the filtered DirectoryReader's subreaders.
Abstract decorator class for a DocIdSet implementation
that provides on-demand filtering/validation
mechanism on a given DocIdSet.
Abstract decorator class of a DocIdSetIterator
implementation that provides on-demand filter/validation
mechanism on an underlying DocIdSetIterator.
Filtered event handler.
A query that applies a filter to the results of another query.
Abstract class that defines how the filter (
DocIdSet
) is applied during document collection.
A
FilteredQuery.FilterStrategy
that conditionally uses a random access filter if
the given DocIdSet
supports random access (returns a non-null value
from DocIdSet.bits()
) and
FilteredQuery.RandomAccessFilterStrategy.useRandomAccess(Bits, int)
returns
true
.
Abstract class for enumerating a subset of all terms.
Return value indicating whether the term should be accepted or the iteration should
END
.
Static factory that allows wrapping a JackrabbitEventFilter into an
OakEventFilter that contains some Oak-specific extensions.
A filter or lookup condition.
A FilteringAwareObserver is the stateless variant of
an Observer which gets an explicit before as well as the
after NodeState.
Part of the FilteringObserver: the FilteringDispatcher is used
to implement the skipping (filtering) of content changes
which the FilteringDispatcher flags as NOOP_CHANGE.
NodeState implementation that decorates another node-state instance
in order to hide subtrees or partial subtrees from the consumer of
the API.
An observer that implements filtering of content changes
while at the same time supporting (wrapping) a BackgroundObserver
underneath.
An
Iterator
implementation that filters elements with a boolean predicate.
Filtering iterators that are useful for queries with limit, offset, order by,
or distinct.
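The same limit/offset/order-by/distinct post-processing can be expressed with java.util.stream as a rough analogy (a sketch only, not the actual iterator implementations; RowPipeline is a hypothetical name):

```java
import java.util.List;
import java.util.stream.Collectors;

// Sketch: each query feature maps onto one stream operation.
class RowPipeline {
    static List<String> apply(List<String> rows, int offset, int limit) {
        return rows.stream()
                .distinct()       // "distinct"
                .sorted()         // "order by" (natural order here)
                .skip(offset)     // "offset"
                .limit(limit)     // "limit"
                .collect(Collectors.toList());
    }
}
```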
Instances of this class provide an
EventFilter
for observation
events and a filter for commits.
Interface that allows defining the principals for which principal-based access control management and permission
evaluation can be executed.
Implementation of the
Filter
interface that
consists of the following two filtering conditions:
All principals in the set must be of type SystemUserPrincipal
All principals in the set must be located in the repository below the configured path.
This utility class provides common
EventFilter
instances.
The function "first(..)".
BitSet of fixed length (numBits), backed by accessible (
FixedBitSet.getBits()
)
long[], accessed with an int index, implementing Bits
and
DocIdSet
.
A
DocIdSetIterator
which iterates over set bits in a
FixedBitSet
.
This attribute can be used to pass different flags down the
Tokenizer
chain,
e.g.
Default implementation of
FlagsAttribute
.
Linked list implementation which supports multiple iterators.
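The long[]-backed layout named in the FixedBitSet entry above can be sketched with plain JDK code (illustrative only; SimpleFixedBitSet is a hypothetical name): bit i lives in word i >> 6, at bit position i & 63.

```java
// Sketch of a fixed-length bit set backed by a long[], addressed with an
// int index. Note: Java's `1L << i` implicitly uses i mod 64, which is
// exactly the in-word bit position we want.
class SimpleFixedBitSet {
    private final long[] bits;

    SimpleFixedBitSet(int numBits) {
        this.bits = new long[(numBits + 63) >> 6];  // ceil(numBits / 64) words
    }

    void set(int i)    { bits[i >> 6] |=  1L << i; }
    void clear(int i)  { bits[i >> 6] &= ~(1L << i); }
    boolean get(int i) { return (bits[i >> 6] & (1L << i)) != 0; }

    int cardinality() {
        int sum = 0;
        for (long word : bits) sum += Long.bitCount(word);
        return sum;
    }
}
```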
The flatfile command is an extract of the ability to create a flat file from
the index command.
This class is where the strategy for building the FlatFileStore is selected.
This class is used when "oak.indexer.parallelIndex" is set to true.
Deprecated.
Use
IndexStoreUtils
instead.
Syntactic sugar for encoding floats as NumericDocValues
via
Float.floatToRawIntBits(float)
.
Field that indexes
float
values
for efficient range filtering and sorting.
A FlushInfo provides information required for a FLUSH context.
The format version currently in use by the DocumentNodeStore and written
to the underlying DocumentStore.
Listener which forwards the notifications to a delegate.
This utility class allows matching strings against a simple pattern language.
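As one illustration of such a pattern language, a minimal '*'-wildcard matcher can be written as follows (the real class's syntax is not specified here; SimpleGlob is a hypothetical name):

```java
// Sketch: matches s against pattern p, where '*' matches any (possibly
// empty) run of characters and every other character matches literally.
class SimpleGlob {
    static boolean matches(String p, String s) {
        if (p.isEmpty()) return s.isEmpty();
        if (p.charAt(0) == '*') {
            // '*' either consumes nothing, or one character of s
            return matches(p.substring(1), s)
                || (!s.isEmpty() && matches(p, s.substring(1)));
        }
        return !s.isEmpty() && p.charAt(0) == s.charAt(0)
            && matches(p.substring(1), s.substring(1));
    }
}
```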
Logger facility for frozen node lookups by identifier.
Contains one particular reference to an nt:frozenNode.
Scans and lists all references to nt:frozenNode and returns an exit code of 1 if any are found (0 otherwise).
Scans and lists all references to nt:frozenNode and returns an exit code of 1 if any are found (0 otherwise).
Serializer which stores blobs in a FileDataStore format.
Base class for Directory implementations that store index
files in the file system.
Base class for reading input from a RandomAccessFile.
Writes output with
RandomAccessFile.write(byte[], int, int)
.
Base class for file-system-based locking implementations.
Represents a finite state machine (FST), using a
compact byte[] format.
Represents a single arc.
Reads bytes stored in an FST.
Specifies the allowed range of each int input label for
this FST.
Exporter interface for setting a dependency for VersionGarbageCollector that allows
fullGC metrics to be exported to Prometheus via pushgateway.
Fixture encapsulating FullGC metrics exporter instance of T
This class is a wrapper around DocumentStore that exposes two methods used to clean garbage from the NODES collection:
public int remove(Map<String, Long> orphanOrDeletedRemovalMap)
public List findAndUpdate(List updateOpList)
When enabled:
Each method saves the document ID or empty property names (that will be deleted) to a separate _bin collection as a BinDocument, then delegates deletion to DocumentStore.
When disabled (default):
Each method delegates directly to DocumentStore.
Collector interface for
DocumentNodeStore
full garbage collection
statistics.
A fulltext "and" condition.
A group of full-text expressions that reflects a "contains(...)" expression,
and allows access to the original (unparsed) full-text term.
The base class for fulltext condition expression.
A fulltext "or" condition.
A parser for fulltext condition literals.
The implementation of the corresponding JCR interface.
A fulltext "contains(...)" condition.
The implementation of the corresponding JCR interface.
A fulltext search score expression.
A fulltext term, or a "not" term.
A visitor for full-text expressions.
The base implementation of a full-text visitor.
Implements the fuzzy search query.
Subclass of TermsEnum for enumerating all terms that are similar
to the specified filter term.
Reuses compiled automata across different segments,
because they are independent of the index.
Stores compiled automata as a list (indexed by edit distance).
A blob store that supports garbage collection.
Remove unreferenced files from the store.
Garbage collection results.
Garbage collection stats for the repository.
Class for keeping the file system state of the garbage collection.
Instances of this class represent the garbage collection generation related
information of a segment.
Utility class to keep track of generations for incremental compaction.
Persists the repository size and the reclaimed size following a cleanup
operation in the
gc.log
file with the format:
'repoSize, reclaimedSize, timestamp, gc generation, gc full generation (since Oak 1.8),
number of nodes compacted, root id (since Oak 1.8)'.
This type abstracts the
gc.log
file, used to save information about
the segment garbage collection.
Responsible for raising the low memory flag whenever the available memory
falls under a specified threshold.
GCMonitor
instances are used to monitor garbage collection.
This
GCMonitor
implementation tracks GCMonitor
instances registered
to the Whiteboard
, delegating all calls to those.
Monitors the compaction cycle and keeps a compacted-nodes counter, in order
to provide a best-effort progress log based on extrapolating the previous
size and node count and current size to deduce the current node count.
Generate a report with the list of affected versionHistory nodes containing
empty version nodes or an incorrect primaryType.
Default implementation of the
Descriptors
interface.
Encodes a 'get head' response.
Encodes a 'get segment' response.
Name mapper with no local prefix remappings.
This
Filter
implementation supports filtering on paths using
simple glob patterns.
A Group is a collection of
Authorizable
instances.
The
GroupAction
interface allows implementations to be informed about and react to the following
changes to a Group's members:
GroupAction.onMemberAdded(Group, Authorizable, Root, NamePathMapper)
GroupAction.onMembersAdded(Group, Iterable, Iterable, Root, NamePathMapper)
GroupAction.onMembersAddedContentId(Group, Iterable, Iterable, Root, NamePathMapper)
GroupAction.onMemberRemoved(Group, Authorizable, Root, NamePathMapper)
GroupAction.onMembersRemoved(Group, Iterable, Iterable, Root, NamePathMapper)
This interface is used to represent a group of principals.
Helper class to deal with the migration between the two types of groups.
A
DataOutput
that can be used to build a byte[].
Implements
PackedInts.Mutable
, but grows the
bit count of the underlying packed ints on demand.
The
GuestLoginModule
is intended to provide backwards compatibility
with the login handling present in the JCR reference implementation located
in jackrabbit-core.
A hash function utility class.
A metric which calculates the distribution of a value.
Prints the revision history of an existing segment store.
Collect options for the
History
command.
A HyperLogLog implementation.
Cardinality estimation with the HyperLogLog algorithm, using the tail cut
mechanism.
Provides a framework for the family of information-based models, as described
in Stéphane Clinchant and Eric Gaussier.
TODO document
Simple utility class for lazily tracking the current identifier during
a tree traversal that recurses down a subtree.
This exception can be thrown by implementers of this API to signal an error
condition caused by an invalid state of the repository.
An implementation of the
JackrabbitAccessControlList
interface that only
allows reading.
Default implementation of the
PrivilegeDefinition
interface.
Simple implementation of the Root interface that only supports simple read
operations based on the
NodeState
(or ImmutableTree
)
passed to the constructor.
Immutable implementation of the
Tree
interface in order to provide
the much richer API functionality for a given NodeState
.
Produces an operation impact that will be returned by JMX
MBeanOperationInfo.getImpact()
.
Impersonation
maintains Principals that are allowed to
impersonate.Implementation of the JCR
Credentials
interface used to distinguish
a regular login request from Session.impersonate(javax.jcr.Credentials)
.Utility class defining specific, configurable import behavior.
Content importer.
An
ImportHandler
instance can be used to import serialized
data in System View XML or Document View XML.Include represents a single path pattern which captures the path which
needs to be included in bundling.
The operation to perform.
An index for the entries in a TAR file.
Deprecated.
Implement
TermToBytesRefAttribute
and store bytes directly
instead.
Represents a single field for indexing.
Describes the properties of a field.
Expert: represents a single commit into an index as seen by the
IndexDeletionPolicy
or IndexReader
.
Implementations of this interface can be notified of the progress of a
commit that would update the index.
TODO document
Utility that allows merging index definitions.
Expert: policy for deletion of stale
index commits
The index diff tool allows comparing and merging indexes.
Represents the content of a QueryIndex as well as a mechanism for keeping
this content up to date.
Extension point for plugging in different kinds of IndexEditor providers.
An indexed property.
An entry in the index of entries of a TAR file.
Indexer configuration for parallel indexing
This class contains useful constants representing filenames and extensions
used by Lucene, as well as convenience methods for querying whether a file
name matches an extension (
matchesExtension
), as well as generating file names from a segment name,
generation and extension (
fileNameFromGeneration
,
segmentFileName
).
This exception is thrown when Lucene detects
an index that is newer than this Lucene version.
This exception is thrown when Lucene detects
an index that is too old for this Lucene version.
Captures information related to an index.
Service to be provided by various index implementations.
Stores diagnostic and performance information about indexing operations for reporting at the end of the indexing job.
IndexInitializer configures the repository with the required fulltext index.
Abstract base class for input from a file in a
Directory
.
Load and validate the index of a TAR file.
Merge custom index definitions with out-of-the-box index definitions.
An index name, which possibly contains two version numbers: the product
version number, and the customer version number.
Signals that no index was found in the Directory.
Abstract base class for output to a file in a Directory.
IndexReader is an abstract class, providing an interface for accessing an
index.
A custom listener that's invoked when the IndexReader
is closed.
A struct-like class that represents a hierarchical relationship between
IndexReader
instances.
A row returned by the index.
A simple index row implementation.
Implements search over a single IndexReader.
A class holding a subset of the
IndexSearcher
's leaf contexts to be
executed within a single thread.
This class defines the index selection policy constants.
This is an easy-to-use tool that upgrades all segments of an index from previous Lucene versions
to the current segment file format.
TODO document
Builds an index incrementally in memory, and serializes its contents into a
sequence of bytes.
An
IndexWriter
creates and maintains an index.If
DirectoryReader.open(IndexWriter,boolean)
has
been called (ie, this writer is in near real-time
mode), then after a merge completes, this class can be
invoked to warm the reader on the newly merged
segment, before the merge commits.Holds all the configuration that is used to create an
IndexWriter
.Specifies the open mode for
IndexWriter
.An MBean that provides the inference configuration.
Debugging API for Lucene classes such as
IndexWriter
and SegmentInfos
.A "in" comparison operation.
InitialContent
implements a RepositoryInitializer
that creates
the initial JCR/Oak repository structure.A mechanism that broadcasts to all registered consumers.
Represents binary data which is backed by a byte[] (in memory).
Sorter
implementation based on the merge-sort algorithm that merges
in place (no extra memory will be allocated).A
DataInput
wrapping a plain InputStream
.Provides very basic installation capabilities.
A pool for int blocks similar to
ByteBlockPool
Abstract class for allocating and freeing int
blocks.
A simple
IntBlockPool.Allocator
that never recycles.A
IntBlockPool.SliceReader
that can read int slices written by a IntBlockPool.SliceWriter
A
IntBlockPool.SliceWriter
that allows writing multiple integer slices into a given IntBlockPool
.Deprecated.
use
NumericDocValuesField
instead.Elements annotated @Internal are -- although possibly exported -- intended
for Oak's internal use only.
Field that indexes
int
values
for efficient range filtering and sorting.An FST
Outputs
implementation where each output
is a sequence of ints.Represents int[], as a slice (offset + length) into an
existing int[].
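The slice idea behind IntsRef can be sketched in a few lines: a view into an existing int[] defined by an offset and a length, without copying the array (a minimal illustration, not the actual Lucene class):

```java
// Minimal sketch of an (offset + length) slice over an existing int[].
public class IntSlice {
    final int[] ints;
    final int offset;
    final int length;

    IntSlice(int[] ints, int offset, int length) {
        this.ints = ints;
        this.offset = offset;
        this.length = length;
    }

    int get(int i) {
        return ints[offset + i]; // index relative to the slice, no copy made
    }

    public static void main(String[] args) {
        int[] backing = {10, 20, 30, 40, 50};
        IntSlice s = new IntSlice(backing, 1, 3); // view of {20, 30, 40}
        System.out.println(s.get(0) + " " + s.get(2)); // 20 40
    }
}
```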
Enumerates all input (IntsRef) + output pairs in an
FST.
Holds a single input (IntsRef) + output pair.
Thrown to indicate that invalid or malformed data is encountered while
validating an index.
IOContext holds additional details on the merge/search context.
Context is an enumeration that specifies the context in which the Directory
is being used.
Callback interface that eases the collection of statistics about I/O
operations.
A void implementation of the
IOMonitor
.This
IOTraceWriter
implementation implements persistence
through a Logger
instance.This implementation of an
IOMonitor
logs all I/O reads to an
underlying IOTraceWriter
.This utility class allows collecting IO traces of read accesses to segments
caused by reading specific items.
Instances of
IOTraceWriter
are responsible for persisting
I/O traces.Input/output utility methods.
This class emulates the new Java 7 "Try-With-Resources" statement.
ItemBasedPrincipal
is a Principal
having a
corresponding item within the JCR repository.Abstract base class for
NodeDelegate
and PropertyDelegate
This validator checks that all changes are contained within the subtree
rooted at a given path.
Utility methods for
Iterable
conversions.Utility methods for
Iterator
conversions.Bridge class that connects the JAAS
LoginContext
class with the
LoginContext
interface used by Oak.JackrabbitAccessControlEntry
is a Jackrabbit specific extension
of the AccessControlEntry
interface.JackrabbitAccessControlList
is an extension of the AccessControlList
.JackrabbitAccessControlManager
provides extensions to the
AccessControlManager
interface.This implementation of
JackrabbitAccessControlManager
delegates back to a
delegatee wrapping each call into a SessionOperation
closure.JackrabbitAccessControlPolicy
is an extension of the
AccessControlPolicy
that exposes the path of the Node to
which it can be applied using AccessControlManager.setPolicy(String, javax.jcr.security.AccessControlPolicy)
.This is an extension of the event interface which provides
a method to detect whether the changes happened on locally
or remotely in a clustered environment.
A storage object for event filter configuration.
The Jackrabbit Node interface.
Deprecated.
Use standard JCR 2.0 API methods defined by
NodeTypeManager
instead.Jackrabbit specific extensions to
ObservationManager
.JackrabbitPrincipal
marks the principal to be the result of
authentication against the repository.The Jackrabbit query result interface.
The Jackrabbit repository interface.
Classes that implement this interface additionally provide management features.
Jackrabbit specific extension of the JCR
Session
interface.Values returned by Jackrabbit may implement this interface.
Defines optional functionality that a
ValueFactory
may choose to
provide.The Jackrabbit workspace interface.
This class contains methods replacing the deprecated
Subject.getSubject(AccessControlContext)
and associated methods, which changed their behavior
with Java 23 (@see https://inside.java/2024/07/08/quality-heads-up).Builder class which encapsulates the details of building a JCR
Repository
backed by an Oak ContentRepository
instance.Exception for signaling that the JCR API is not available.
Utility class providing conflict handlers used for JCR.
The
JcrDescriptorsImpl
extend the GenericDescriptors
by automatically marking some of the JCR
features as supported.Conflict Handler that merges concurrent updates to
org.apache.jackrabbit.JcrConstants.JCR_LASTMODIFIED
by picking the
older of the 2 conflicting dates and
org.apache.jackrabbit.JcrConstants.JCR_CREATED
by picking the newer
of the 2 conflicting dates.Parses and validates JCR names.
TODO document
JcrRemotingServlet
...JCRWebdavServerServlet provides request/response handling for the
JCRWebdavServer.
Utility methods related to JMX
The JNDI config holds information about JNDI connection details.
The base class for join conditions.
The base class for join conditions.
An execution plan for a join.
The implementation of the corresponding JCR interface.
A join.
Enumeration of the JCR 2.0 join types.
The enumeration of all join types.
Keeps track of changes performed between two consecutive background updates.
A value class representing an entry in the revisions journal.
The journal is a special, atomically updated file that records the state of
the repository as a sequence of references to successive root node records.
The
JournalFile
reader.The
JournalFile
writer.The JournalGarbageCollector can clean up JournalEntries that are older than a
particular age.
Marker interface to indicate the implementing class can be made part of JournalEntry
Each component which needs to add a property to JournalEntry
should register this service
Iterator over the revisions in the journal in reverse order
(end of the file to beginning).
Simple JSON Object representation.
Utility class for serializing node and property states to JSON.
A builder for Json and Jsop strings.
TODO document
A reader for Json and Jsop strings.
A fast Jsop writer / reader.
A tokenizer for Json and Jsop strings.
A builder for Json and Json diff strings.
This
IndexDeletionPolicy
implementation that
keeps only the most recent commit and immediately removes
all prior commits after a new commit is done.This attribute can be used to mark a token as a keyword.
Default implementation of
KeywordAttribute
.The lambda (λw) parameter in information-based
models.
Computes lambda as
docFreq+1 / numberOfDocuments+1
.Computes lambda as
totalTermFreq+1 / numberOfDocuments+1
.Utility class for recovering potential missing _lastRev updates of nodes due
to crash of a node.
An implementation of this interface receives callbacks about paths
that need an update of the _lastRev field on documents.
This input stream delays accessing the
InputStream
until the first byte is read.An instance of this class represents a lazy value of type
T
.Implements an identity that is provided by the
LdapIdentityProvider
.LdapIdentityProperties
implements a case-insensitive hash map that preserves the case of the keys but
ignores the case during lookup.LdapIdentityProvider
implements an external identity provider that reads users and groups from an LDAP
source.Configuration of the LDAP provider.
Defines the configuration of a connection pool.
Wrapper of another DocumentStore that does a lease check on any method
invocation (read or update) and fails if the lease is not valid.
The different modes for lease checks.
A LeaseFailureHandler can be provided to the DocumentMK.Builder
and will be passed on to the ClusterNodeInfo for use upon
lease failure.
The implementation of the corresponding JCR interface.
The function "length(..)".
Class to construct DFAs that match a word within some edit distance.
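The automata that the class above constructs accept words within a given Levenshtein edit distance. The distance itself is the classic dynamic program, sketched below; the DFA construction in Lucene is far more involved and is not shown:

```java
// Classic two-row Levenshtein edit distance (insert/delete/substitute each
// cost 1). Sketch only; Lucene precompiles this relation into a DFA.
public class EditDistance {
    static int levenshtein(String a, String b) {
        int[] prev = new int[b.length() + 1];
        int[] cur = new int[b.length() + 1];
        for (int j = 0; j <= b.length(); j++) prev[j] = j; // distance from ""
        for (int i = 1; i <= a.length(); i++) {
            cur[0] = i;
            for (int j = 1; j <= b.length(); j++) {
                int cost = a.charAt(i - 1) == b.charAt(j - 1) ? 0 : 1;
                cur[j] = Math.min(Math.min(cur[j - 1] + 1,  // insertion
                                           prev[j] + 1),    // deletion
                                  prev[j - 1] + cost);      // substitution
            }
            int[] t = prev; prev = cur; cur = t;
        }
        return prev[b.length()];
    }

    public static void main(String[] args) {
        System.out.println(levenshtein("lucene", "lucane")); // 1
    }
}
```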
A pattern matcher.
A collector for a list of collectors.
Utility methods for
List
conversions.The implementation of the corresponding JCR interface.
A literal of a certain data type, possibly "cast(..)" of a literal.
Format for live/deleted documents
Tracks live field values across NRT reader reopens.
Holds all the configuration used by
IndexWriter
with few setters for
settings that can be changed on an IndexWriter
instance "live".Bayesian smoothing using Dirichlet priors.
Language model based on the Jelinek-Mercer smoothing method.
Abstract superclass for language modeling Similarities.
A strategy for computing the collection language model.
Models
p(w|C)
as the number of occurrences of the term in the
collection, divided by the total number of tokens + 1
.Stores the collection distribution of the current term.
A diff cache, which is pro-actively filled after a commit.
Name mapper with local namespace mappings.
An interprocess mutex lock.
Utility class for executing code with exclusive access.
Deprecated.
Use
LockConstants
instead.Support deprecation of JCR locking as per OAK-6421.
Base class for Locking implementation.
This exception is thrown when the
write.lock
could not be acquired.Abstract base class for locking operations.
This exception is thrown when the
write.lock
could not be released.Simple standalone tool that forever acquires & releases a
lock using a specific LockFactory.
Simple standalone server that must be running when you
use
VerifyingLockFactory
.This is a
LogMergePolicy
that measures size of a
segment as the total byte size of the segment's files.This is a
LogMergePolicy
that measures size of a
segment as the number of documents (not taking deletions
into account).Implements a
DocumentStore
wrapper and logs all calls.This
GCMonitor
implementation logs all calls to its
LoggingGCMonitor.info(String, Object...)
, LoggingGCMonitor.warn(String, Object...)
,
LoggingGCMonitor.error(String, Exception)
and LoggingGCMonitor.skipped(String, Object...)
methods at the respective levels using the logger instance passed to the
constructor.Configures the logging based on logback-{logIdentifier}.xml specified.
A Reporter implementation that logs every nth node
and/or every nth property to the given logger on
info
level.Interface version of the JAAS
LoginContext
class.Configurable provider taking care of building login contexts for
the desired authentication mechanism.
Default login module implementation that authenticates JCR
Credentials
against the repository.Deprecated.
Since Oak 1.40.0.
Deprecated.
Since Oak 1.38.0 in favor of
SecurityConfiguration.getMonitors(StatisticsProvider)
This class implements a
MergePolicy
that tries
to merge segments into levels of exponentially
increasing size, where each level has fewer segments than
the value of the merge factor.Utility class to silence log output based on a specific key.
A wrapper for storage backends that logs store and read operations.
BitSet of fixed length (numBits), backed by accessible (
LongBitSet.getBits()
)
long[], accessed with a long index.Deprecated.
use
NumericDocValuesField
instead.
Field that indexes
long
values
for efficient range filtering and sorting.Represents long[], as a slice (offset + length) into an
existing long[].
Abstraction over an array of longs.
The implementation of the corresponding JCR interface.
The function "lower(..)".
Deprecated.
Only for reading existing 3.x indexes
Deprecated.
(4.0) This is only used to read indexes created
before 4.0.
Deprecated.
Only for reading existing 3.x indexes
Deprecated.
Only for reading old 4.0 segments
Deprecated.
Only for reading old 4.0 and 4.1 segments
Deprecated.
Only for reading old 4.0 and 4.1 segments
Lucene 4.0 Live Documents Format.
Deprecated.
Only for reading old 4.0 and 4.1 segments
Deprecated.
Only for reading old 4.0 segments
Deprecated.
Only for reading old 4.0 segments
Deprecated.
Only for reading old 4.0 segments
Deprecated.
Only for reading old 4.0-4.5 segments, and supporting IndexWriter.addIndexes
Deprecated.
Only for reading old 4.0-4.5 segments
Deprecated.
Deprecated.
Only for reading old 4.0 segments
Lucene 4.0 Stored Fields Format.
Class responsible for access to stored document fields.
Class responsible for writing stored document fields.
Lucene 4.0 Term Vectors format.
Lucene 4.0 Term Vectors reader.
Lucene 4.0 Term Vectors writer.
Deprecated.
Only for reading old 4.0 segments
Provides a
PostingsReaderBase
and PostingsWriterBase
.Lucene 4.1 postings format, which encodes postings in packed integer blocks
for fast decode.
Concrete class that reads docId(maybe frq,pos,offset,payloads) list
with postings format.
Concrete class that writes docId(maybe frq,pos,offset,payloads) list
with postings format.
Lucene 4.1 stored fields format.
Deprecated.
Only for reading old 4.2 segments
Deprecated.
Only for reading old 4.2 segments
Deprecated.
Only for reading old 4.2-4.5 segments
Lucene 4.2 score normalization format.
Lucene 4.2
term vectors format
.Deprecated.
Only for reading old 4.3-4.5 segments
writer for
Lucene45DocValuesFormat
Lucene 4.5 DocValues format.
reader for
Lucene45DocValuesFormat
metadata entry for a binary docvalues field
metadata entry for a numeric docvalues field
metadata entry for a sorted-set docvalues field
Implements the Lucene 4.6 index format, with configurable per-field postings
and docvalues formats.
Lucene 4.6 Field Infos format.
Lucene 4.6 Segment info format.
Lucene 4.6 implementation of
SegmentInfoReader
.Lucene 4.0 implementation of
SegmentInfoWriter
.Lucene's package information, including version.
Interface for managing a JCR repository as a JMX MBean.
A
ManagementOperation
is a background task, which can be
executed by an Executor
.Status of a
ManagementOperation
Manifest is a properties file, providing the information about the segment
store (eg.
A MapFactory backed by MapDB, which stores the map in a temporary file.
Experimental extension point for OAK-1772 to try out alternative approaches for persisting in memory state
Not part of API
Helper class for keeping Lists of Objects associated with keys.
Exposes flex API, merged from flex API of sub-segments,
remapping docIDs (this is used for segment merging).
Exposes flex API, merged from flex API of sub-segments,
remapping docIDs (this is used for segment merging).
A map.
Utility methods for
Map
conversions.The listener interface for receiving garbage collection scan events.
Mark and sweep garbage collector.
A query that matches all documents.
Math static utility methods.
Add this
Attribute
to a fresh AttributeSource
before calling
MultiTermQuery.getTermsEnum(Terms,AttributeSource)
.Implementation class for
MaxNonCompetitiveBoostAttribute
.Returns the maximum payload score seen, else 1 if there are no payloads on the doc.
A memory blob store.
Basic JavaBean implementation of a child node entry.
An in-memory diff cache implementation.
Emulates a MongoDB store (possibly consisting of multiple shards and
replicas).
In-memory node state builder.
Basic in-memory node store implementation.
An interface for memory-bound cache objects.
An in-memory storage backend for the tree store.
A store used for in-memory operations.
This is a simple in-memory
Revisions
implementation.Provides a merged sorted view from several sorted iterators.
A MergeInfo provides information required for a MERGE context.
Expert: a MergePolicy determines the sequence of
primitive merge operations.
A map of doc IDs.
Thrown when a merge was explicitly aborted because
IndexWriter.close(boolean)
was called with
false
.Exception thrown if there are any problems while
executing a merge.
A MergeSpecification instance provides the information
necessary to perform multiple merges.
MergeTrigger is passed to
MergePolicy.findMerges(MergeTrigger, SegmentInfos)
to indicate the
event that triggered the merge.OneMerge provides the information necessary to perform
an individual primitive merge operation, resulting in
a single new segment.
Expert:
IndexWriter
uses an instance
implementing this interface to execute the merges
selected by a MergePolicy
.MergeSortedIterators
is a specialized implementation of a
merge sort of already sorted iterators of some type of comparable elements.Holds common state used during segment merging.
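The merge of already sorted iterators that MergeSortedIterators performs can be sketched with a priority queue keyed on each iterator's current head (a generic illustration, not the Oak implementation):

```java
import java.util.*;

// Sketch: merge N already-sorted iterators into one sorted sequence by
// repeatedly taking the smallest current head from a priority queue.
public class SortedMerge {
    static <T extends Comparable<T>> List<T> merge(List<Iterator<T>> iterators) {
        PriorityQueue<Map.Entry<T, Iterator<T>>> pq =
            new PriorityQueue<>((a, b) -> a.getKey().compareTo(b.getKey()));
        for (Iterator<T> it : iterators) {
            if (it.hasNext()) pq.add(new AbstractMap.SimpleEntry<>(it.next(), it));
        }
        List<T> result = new ArrayList<>();
        while (!pq.isEmpty()) {
            Map.Entry<T, Iterator<T>> head = pq.poll();
            result.add(head.getKey());          // smallest remaining element
            Iterator<T> it = head.getValue();
            if (it.hasNext()) pq.add(new AbstractMap.SimpleEntry<>(it.next(), it));
        }
        return result;
    }

    public static void main(String[] args) {
        List<Iterator<Integer>> its = Arrays.asList(
            Arrays.asList(1, 4, 7).iterator(),
            Arrays.asList(2, 3, 8).iterator());
        System.out.println(merge(its)); // [1, 2, 3, 4, 7, 8]
    }
}
```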
Class for recording units of work when merging segments.
Remaps docids around deletes during merge
MergingNodeStateDiff...
Fixture encapsulating metrics exporter instance of T
Initialize different metrics exporter fixture based on parameters used.
Exporter Type supported
This
IOMonitor
implementations registers the following monitoring endpoints
with the Metrics library if available:
MetricsIOMonitor.OAK_SEGMENT_SEGMENT_READ_BYTES
:
a meter metrics for the number of bytes read from tar files
MetricsIOMonitor.OAK_SEGMENT_SEGMENT_WRITE_BYTES
:
a meter metrics for the number of bytes written to tar files
MetricsIOMonitor.OAK_SEGMENT_SEGMENT_READ_TIME
:
a timer metrics for the time spent reading from tar files
MetricsIOMonitor.OAK_SEGMENT_SEGMENT_WRITE_TIME
:
a timer metrics for the time spent writing to tar files
Operations for minimizing automata.
Calculates the minimum payload seen
Utilities to retrieve _lastRev missing update candidates.
File-based
Directory
implementation that uses
mmap for reading, and FSDirectory.FSIndexOutput
for writing.represent an individual Mode for running a COMMAND.
Immutable snapshot of a mutable node state.
Base class to update the metrics for
DocumentStoreStatsCollector.doneFindAndModify(long, Collection, String, boolean, boolean, int)
for underlying DocumentStore
The
MongoDB
representation of a blob.Implementation of blob store for the MongoDB extending from
CachingBlobStore
.The
MongoConnection
abstracts connection to the MongoDB
.Implements a filter to decide if a given Mongo document should be processed or ignored based on its path.
A builder for a
DocumentNodeStore
backed by MongoDB.A base builder implementation for a
DocumentNodeStore
backed by
MongoDB.A document store that uses MongoDB as the backend.
MongoDocumentStoreCheckHelper
...Helper class to access package private methods on MongoDocumentStore.
Implementation specific metrics exposed by the
MongoDocumentStore
.Mongo Document Store throttling metric updater.
This class is a wrapper around DocumentStore that exposes two methods used to clean garbage from the NODES collection:
public int remove(Map<String, Long> orphanOrDeletedRemovalMap)
public List findAndUpdate(List updateOpList)
When enabled,
each method saves the document ID or empty property names (that will be deleted) to a separate _bin collection as a BinDocument, then delegates deletion to DocumentStore.
When disabled (the default),
each method delegates directly to DocumentStore.
Mongo specific version of MissingLastRevSeeker which uses mongo queries
to fetch candidates which may have missed '_lastRev' updates.
Factory to create Mongo Throttlers
Mongo specific version of VersionGCSupport which uses mongo queries
to fetch required NodeDocuments
Marker interface for monitors that are to be registered with a
Whiteboard
.Utility class to buffer signed longs in memory, which is optimized for the
case where the sequence is monotonic, although it can encode any sequence of
arbitrary longs.
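The idea behind such monotonic buffers can be illustrated with plain delta encoding: a mostly increasing sequence is stored as small differences from the previous value, which pack into far fewer bits than the absolute values (a simplified sketch; the actual classes additionally approximate the sequence with a linear function and bit-pack the residuals):

```java
import java.util.Arrays;

// Sketch: delta-encode a monotonic long sequence and decode it back.
public class DeltaCodec {
    static long[] encode(long[] values) {
        long[] deltas = new long[values.length];
        long prev = 0;
        for (int i = 0; i < values.length; i++) {
            deltas[i] = values[i] - prev; // small when the input is monotonic
            prev = values[i];
        }
        return deltas;
    }

    static long[] decode(long[] deltas) {
        long[] values = new long[deltas.length];
        long acc = 0;
        for (int i = 0; i < deltas.length; i++) {
            acc += deltas[i];             // running sum restores the values
            values[i] = acc;
        }
        return values;
    }

    public static void main(String[] args) {
        long[] input = {100, 103, 104, 110};
        System.out.println(Arrays.toString(encode(input)));         // [100, 3, 1, 6]
        System.out.println(Arrays.toString(decode(encode(input)))); // round-trips
    }
}
```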
Provides random access to a stream written with
MonotonicBlockPackedWriter
.A writer for large monotonically increasing sequences of positive longs.
Refers to a set of paths from a
ContentRepository
that are possibly
stored in a separate physical persistent store.Applies a category of consistency checks specific to NodeStore mounts
Default
Mount
implementation for non-default mounts.Holds information related to the
Mount
s configured in a ContentRepository
.Provides helper methods for creating
MountInfoProvider
instances.Provides a fluent API from creating
MountInfoProvider
instances.A
MoveDetector
is a validator that can detect certain move operations
and reports these to the wrapped MoveValidator
by calling
MoveValidator.move(String, String, NodeState)
.This filter implementation excludes generating add node
events for child nodes of the destination of a move operation.
Utility to keep track of the move operations that are performed between two
calls to
Root.commit(java.util.Map<java.lang.String, java.lang.Object>)
.A validator that also receives notifications about moved nodes.
Exposes flex API, merged from flex API of sub-segments.
Holds a
DocsAndPositionsEnum
along with the
corresponding ReaderSlice
.Holds a
DocsEnum
along with the
corresponding ReaderSlice
.A wrapper for CompositeIndexReader providing access to DocValues.
Implements SortedDocValues over n subs, using an OrdinalMap
Implements MultiSortedSetDocValues over n subs, using an OrdinalMap
maps per-segment ordinals to/from global ordinal space
Exposes flex API, merged from flex API of sub-segments.
This abstract class reads skip lists with multiple levels.
This abstract class writes skip lists with multiple levels.
MultiPhraseQuery is a generalized version of PhraseQuery, with an added
method
MultiPhraseQuery.add(Term[])
.A
CompositeReader
which reads multiple indexes, appending
their content.Implements the CombSUM method for combining evidence from multiple
similarity values described in: Joseph A.
An abstract
Query
that matches documents
containing a subset of terms provided by a FilteredTermsEnum
enumeration.A rewrite method that tries to pick the best
constant-score rewrite method based on term and
document counts from the query.
Abstract class that defines how the query is rewritten.
A rewrite method that first translates each term into
BooleanClause.Occur.SHOULD
clause in a BooleanQuery, but the scores
are only computed as the boost.A rewrite method that first translates each term into
BooleanClause.Occur.SHOULD
clause in a BooleanQuery, and keeps the
scores as computed by the query.A wrapper for
MultiTermQuery
, that exposes its
functionality as a Filter
.Exposes flex API, merged from flex API of
sub-segments.
Extension of Bits for live documents.
Base class for all mutable values.
MutableValue
implementation of type
boolean
.MutableValue
implementation of type
Date
.MutableValue
implementation of type
double
.MutableValue
implementation of type
float
.MutableValue
implementation of type
int
.MutableValue
implementation of type
long
.MutableValue
implementation of type
String
.Produces a parameter name that will be returned by JMX
MBeanFeatureInfo.getName()
.Helper class for loading named SPIs from classpath (e.g.
Interface to support
NamedSPILoader.lookup(String)
by name.A default
ThreadFactory
implementation that accepts the name prefix
of the created threads as a constructor argument.TODO document
The
NamePathMapper
interface combines NameMapper
and
PathMapper
.Default implementation that doesn't perform any conversions for cases
where a mapper object only deals with oak internal names and paths.
TODO document
A cache key implementation, which is a combination of a name, path and a
revision vector.
Deprecated.
Use
NamespaceConstants
instead.TODO document
Validator service that checks that all node and property names as well
as any name values are syntactically valid and that any namespace prefixes
are properly registered.
Internal static utility class for managing the persisted namespace registry.
Validator service that checks that all node and property names as well
as any name values are syntactically valid and that any namespace prefixes
are properly registered.
Implements
LockFactory
using native OS file
locks.A native function condition.
A Spans that is formed from the ordered subspans of a SpanNearQuery
where the subspans do not overlap and have a maximum slop between them.
Similar to
NearSpansOrdered
, but for the unordered case.This is a
PhraseQuery
which is optimized for n-gram phrase query.An
FSDirectory
implementation that uses java.nio's FileChannel's
positional read, which allows multiple threads to read from the same file
without synchronizing.Reads bytes with
FileChannel.read(ByteBuffer, long)
Builder interface for constructing new
node states
.A mutable
Tree
implementation based on an underlying
NodeBuilder
, which tracks all changes recorded through
this tree's mutator methods.A collector for approximate node counts.
Count documents and nodes that exist.
Represents a node in a stream.
A reader for node data.
NodeDelegate
serve as internal representations of Node
s.A document storing data about a node.
A document which is created from splitting a main document can be classified
into multiple types depending on the content i.e.
Cache for the NodeDocuments.
Custom codec to create NodeDocument from a stream of BSON data received from MongoDB.
Helper class to access package private methods on NodeDocument.
Implements a comparator which sorts NodeDocumentId strings according to 1) their
depth (highest first) and 2) the id string itself.
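The ordering described above can be sketched as follows, assuming ids of the Oak form "&lt;depth&gt;:&lt;path&gt;" (e.g. "2:/a/b" — an assumption for illustration, not the full id grammar): deepest documents sort first, ties are broken by the id string itself.

```java
import java.util.*;

// Sketch of a comparator ordering "<depth>:<path>" ids by depth (highest
// first), then lexicographically by the full id string.
public class IdComparator implements Comparator<String> {
    public int compare(String a, String b) {
        int depthA = Integer.parseInt(a.substring(0, a.indexOf(':')));
        int depthB = Integer.parseInt(b.substring(0, b.indexOf(':')));
        if (depthA != depthB) {
            return Integer.compare(depthB, depthA); // highest depth first
        }
        return a.compareTo(b);                      // tie-break on the id
    }

    public static void main(String[] args) {
        List<String> ids = new ArrayList<>(Arrays.asList("1:/a", "3:/a/b/c", "2:/a/b"));
        ids.sort(new IdComparator());
        System.out.println(ids); // [3:/a/b/c, 2:/a/b, 1:/a]
    }
}
```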
This is a prototype class of a very fine-grained revision cleaner that cleans even revisions
in-between checkpoints.
TODO document
Information about a node being imported.
An
IndexDeletionPolicy
which keeps all index commits around, never
deleting them.The implementation of the corresponding JCR interface.
The function "localname(..)".
A wrapper for a collector that allows to filter for certain node names, or
children of those.
The implementation of the corresponding JCR interface.
The function "name(..)".
Base class for
Observer
instances that group changes
by node instead of tracking them down to individual properties.Represents a property of a node.
A node in a content tree consists of child nodes and properties, each
of which evolves through different states during its lifecycle.
The NodeStateCopier and NodeStateCopier.Builder classes allow
recursively copying a NodeState to a NodeBuilder.
The NodeStateCopier.Builder allows configuring a NodeState copy operation with
includePaths
, excludePaths
and mergePaths
.Handler of node state differences.
A nodetype info provider that is based on node states.
Utility method for code that deals with node states.
Storage abstraction for trees.
An instance of this class represents a private branch of the tree in a
NodeStore
to which transient changes can be applied and later merged
back or discarded.Provides a NodeStore instance for specific role.
Callback which is invoked for any changed node read by IndexUpdate
as part of diff traversal
Provides a way to lazily construct the path
and provides access to the current path
A reader for tree store files.
Deprecated.
Use
NodeTypeConstants
instead.NodeTypeConstants...
A collector for node types.
A
NodeTypeDefDiff
represents the result of the comparison of
two node type definitions.Checks that nodes present in a mount are consistent with the global node type definitions
A nodetype info mechanism.
A nodetype info mechanism.
BuiltInNodeTypes
is a utility class that registers the built-in
node types required for a JCR repository running on Oak.Use this
LockFactory
to disable locking entirely.A
MergePolicy
which never returns merges to execute (hence its
name).A
MergeScheduler
which never executes any merges.Reports writes to non-default mounts
A null FST
Outputs
implementation; use this if
you just want to build an FSA.This class acts as the base class for the implementations of the term
frequency normalization methods in the DFR framework.
Implementation used when there is no normalization.
Normalization model that assumes a uniform distribution of the term frequency.
Normalization model in which the term frequency is inversely related to the
length.
Dirichlet Priors normalization
Pareto-Zipf Normalization
Encodes/decodes per-document score normalization values.
This exception is thrown when you try to list a
non-existent directory.
A
Future
that accepts completion listener.The implementation of the corresponding JCR interface.
A "not" condition.
Wraps a
RAMDirectory
around any provided delegate directory, to
be used during NRT search.A per-document numeric value.
Field that stores a per-document
long
value for scoring,
sorting or value retrieval.A
Filter
that only accepts numeric values within
a specified range.A
Query
that matches numeric values within a
specified range.Expert: This class provides a
TokenStream
for indexing numeric values that can be used by NumericRangeQuery
or NumericRangeFilter
.Expert: Use this attribute to get the details of the currently generated token.
Implementation of
NumericTokenStream.NumericTermAttribute
.This is a helper class to generate prefix-encoded representations for numerical values
and supplies converters to represent float/double values as sortable integers/longs.
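The "sortable integers/longs" conversion referred to above can be sketched with the classic IEEE-754 bit transform: double bits are remapped so that comparing the resulting longs gives the same order as comparing the original doubles (assumed variant of the standard trick; the exact Lucene encoding additionally prefix-encodes values for trie-based range queries):

```java
// Sketch: remap IEEE-754 double bits into longs whose natural order matches
// the numeric order of the doubles.
public class SortableBits {
    static long sortableDoubleBits(double value) {
        long bits = Double.doubleToLongBits(value);
        // For negative doubles the sign bit is set, so (bits >> 63) is all
        // ones; flipping every bit except the sign makes more-negative
        // values map to smaller longs. Positive doubles pass through.
        return bits ^ ((bits >> 63) & 0x7fffffffffffffffL);
    }

    public static void main(String[] args) {
        boolean ordered = sortableDoubleBits(-2.0) < sortableDoubleBits(-1.0)
            && sortableDoubleBits(-1.0) < sortableDoubleBits(0.0)
            && sortableDoubleBits(0.0) < sortableDoubleBits(1.5);
        System.out.println(ordered); // true
    }
}
```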
Builder class for constructing
ContentRepository
instances with
a set of specified plugin components.Extension of the JackrabbitEventFilter that exposes Oak specific
features.
Implements OakEventFilter which is an extension to the JackrabbitEventFilter
with features only supported by Oak.
Oak specific extension of JR2 FileDataStore which enables
provisioning the signing key via OSGi config
RepositoryFactory which constructs an instance of Oak repository.
TODO: document
Provides version information about Oak.
Provides version information about Oak.
An
Observable
supports attaching Observer
instances for
listening to content changes.Extension point for observing changes in an Oak repository.
The start and end character offset of a Token.
Default implementation of
OffsetAttribute
This implementation of the authentication configuration provides login
contexts that accept any credentials and don't validate the specified
workspace name.
This class implements an
AuthorizationConfiguration
which grants
full access to any Subject
.An "open" BitSet implementation that allows direct access to the array of words
storing the bits.
OpenBitSet with added methods to bulk-update the bits
from a
DocIdSetIterator
.An iterator to iterate over set bits in an OpenBitSet.
Permission provider implementation that grants full access everywhere.
Rudimentary
SecurityProvider
implementation that allow every subject
to authenticate and grants it full access everywhere.Interface to give useful statistics for maintenance operations.
Implementations of this can be used to mark the relevant statistics.
Enumeration of the JCR 2.0 query operators.
The enumeration of all operators.
Enumeration of the JCR 2.0 query order.
The enumeration of query column orders (ascending and descending).
Returns the childrenNames in the order defined by the orderedChildren iterator, merged
with the existing children defined by allChildren.
The implementation of the corresponding JCR interface.
An element of an "order by" list.
An ordinal based
TermState
The implementation of the corresponding JCR interface.
An "or" condition.
OrphanedNodeCheck
...Workaround to a JAAS class loading issue in OSGi environments.
Utility methods to use in an OSGi environment.
OSGi-based whiteboard implementation.
Represents the outputs for an FST, providing the basic
algebra required for building and traversing the FST.
A
DataOutput
wrapping a plain OutputStream
.A
DataInput
wrapper to read unaligned, variable-length packed
integers.A
DataOutput
wrapper to write unaligned, variable-length packed
integers.Simplistic compression for array of unsigned long values.
A decoder for packed integers.
An encoder for packed integers.
A format to write packed ints.
Simple class that holds a format and a number of bits per value.
Header identifying the structure of a packed integer array.
A packed integer array that can be modified.
A
PackedInts.Reader
which has all its values equal to 0 (bitsPerValue = 0).A read-only random access array of positive integers.
Run-once iterator interface, to decode previously saved PackedInts.
A write-once Writer.
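The packed-ints entries above all revolve around one technique: storing n values of a fixed bit width back to back in a long[] backing array. A minimal sketch of that packing, including values that straddle a word boundary (illustrative names, not Lucene's PackedInts API):

```java
// Sketch of a packed integer array: n values of bitsPerValue bits each are
// stored back to back in a long[] (illustrative, not Lucene's PackedInts API).
class PackedArraySketch {
    private final long[] blocks;
    private final int bitsPerValue;
    private final long mask;

    PackedArraySketch(int size, int bitsPerValue) {
        this.bitsPerValue = bitsPerValue;
        this.mask = (1L << bitsPerValue) - 1;
        this.blocks = new long[(size * bitsPerValue + 63) / 64];
    }

    void set(int index, long value) {
        long bitPos = (long) index * bitsPerValue;
        int block = (int) (bitPos >>> 6);
        int shift = (int) (bitPos & 63);
        blocks[block] = (blocks[block] & ~(mask << shift)) | ((value & mask) << shift);
        int spill = shift + bitsPerValue - 64; // bits crossing into the next word
        if (spill > 0) {
            long hi = (value & mask) >>> (bitsPerValue - spill);
            blocks[block + 1] = (blocks[block + 1] & ~((1L << spill) - 1)) | hi;
        }
    }

    long get(int index) {
        long bitPos = (long) index * bitsPerValue;
        int block = (int) (bitPos >>> 6);
        int shift = (int) (bitPos & 63);
        long value = blocks[block] >>> shift;
        int spill = shift + bitsPerValue - 64;
        if (spill > 0) {
            value |= blocks[block + 1] << (bitsPerValue - spill);
        }
        return value & mask;
    }
}
```

With 5 bits per value, 100 values fit in 8 longs instead of 100, which is the space win these classes exist for.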
Deprecated.
use
NumericDocValuesField
instead.A store where all the entries are stored in a "Pack File" (see @FilePacker).
Represents a logical byte[] as a series of pages.
Provides methods to read BytesRefs from a frozen
PagedBytes.
A
PagedMutable
.A B-tree page (leaf, or inner node).
An FST
Outputs
implementation, holding two other outputs.Holds a single pair of two outputs.
An
AtomicReader
which reads multiple, parallel indexes.This compactor implementation leverages the tree structure of the repository for concurrent compaction.
An
CompositeReader
which reads multiple, parallel indexes.A parallel store has the ability to "split itself" into multiple stores that
each cover a subset of the content.
A wrapper around the tree store that only iterates over a subset of the
nodes.
Deprecated.
Use
ThreeWayConflictHandler
instead.Resolutions for conflicts
A partial value factory implementation that only deals with in-memory values
and can wrap a
Value
around a PropertyState
.PasswordChangeAction
asserts that upon
PasswordChangeAction.onPasswordChange(org.apache.jackrabbit.api.security.user.User, String, org.apache.jackrabbit.oak.api.Root, org.apache.jackrabbit.oak.namepath.NamePathMapper)
a different, non-null password is specified.Utility to generate and compare password hashes.
PasswordValidationAction
provides a simple password validation
mechanism with the following configurable option:
constraint: a regular expression that can be compiled
to a Pattern
defining validation rules for a password.The
Path
class is closely modeled after the semantics of
PathUtils
in oak-commons.Implements a comparator, which sorts paths according to 1) their depth
(highest first) and 2) the paths' natural ordering.
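The two-key ordering described above can be sketched as a small comparator (a minimal sketch, not the actual Oak implementation):

```java
import java.util.Comparator;

// Sketch of the two-key path ordering described above: compare by depth
// first (deeper paths sort earlier), then by natural string order.
class PathDepthComparator implements Comparator<String> {
    private static int depth(String path) {
        int d = 0;
        for (int i = 0; i < path.length(); i++) {
            if (path.charAt(i) == '/') {
                d++;
            }
        }
        return d;
    }

    @Override
    public int compare(String a, String b) {
        int cmp = Integer.compare(depth(b), depth(a)); // deeper first
        return cmp != 0 ? cmp : a.compareTo(b);
    }
}
```

Sorting deepest-first is useful when deleting subtrees, since children are then always visited before their parents.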
Filter which determines whether given path should be included for processing
or not
The function "path(..)".
A utility class that allows skipping nodes that are not included in the index
definition.
PathMapper
instances provide methods for mapping paths from their JCR
string representation to their Oak representation and vice versa.A cache key implementation, which is a combination of a path and a revision
vector.
Simple utility class for lazily tracking the current path during
a tree traversal that recurses down a subtree.
Utility methods to parse a path.
The payload of a Token.
Default implementation of
PayloadAttribute
.An abstract class that defines a way for Payload*Query instances to transform
the cumulative effects of payload scores for a document.
This class is very similar to
SpanNearQuery
except that it factors
in the value of the payloads located at each of the positions where the
TermSpans
occurs.Experimental class to get set of payloads for most standard Lucene queries.
This class is very similar to
SpanTermQuery
except that it factors
in the value of the payload located at each of the positions where the
Term
occurs.Enables per field docvalues support.
Enables per field postings support.
Provides the ability to use a different
Similarity
for different fields.PerfLogger is a simpler wrapper around a slf4j Logger which
comes with the capability to issue log statements containing
the measurement between start() and end() methods.
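The start()/end() timing pattern described above can be sketched as follows (illustrative: the real class wraps an slf4j Logger, and its method signatures differ):

```java
// Sketch of the start()/end() timing pattern described above: start()
// captures a timestamp and end() reports the elapsed time (here via
// System.out; the real class logs through an slf4j Logger).
class PerfLoggerSketch {
    long start() {
        return System.nanoTime();
    }

    long end(long startNanos, String message) {
        long elapsedMs = (System.nanoTime() - startNanos) / 1_000_000;
        System.out.println(message + " took " + elapsedMs + " ms");
        return elapsedMs;
    }
}
```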
Abstract class that simplifies development of a Reporter
that should only report every nth event (node or property seen).
Interface indicating that a given object (like e.g.
Implementation specific constants related to permission evaluation.
CommitHook
implementation that processes any modification made to
access control content and updates persisted permission store associated
with access control related data stored in the repository.Main entry point for permission evaluation in Oak.
Factory for
PermissionProvider
instances.Provides constants for permissions used in the OAK access evaluation as well
as permission related utility methods.
Validator implementation that asserts that the permission store is read-only.
Utility methods to evaluate permissions.
ValidatorProvider
implementation for permission evaluation associated
with write operations.A persistent linked list that internally uses the MVStore.
A persistent linked list that internally uses the MVStore.
A persistent cache for the document store.
This interface represents a cache which survives segment store restarts.
Persistence Cache Statistics.
A
SnapshotDeletionPolicy
which adds a persistence layer so that
snapshots can be maintained across the life of an application.DocIdSet
implementation based on pfor-delta encoding.A builder for
PForDeltaDocIdSet
.A Query that matches documents containing a particular sequence of terms.
Accumulates the intermediate sorted files and, when all files are generated, merges them into a single sorted file,
the flat file store
Selects a Mongo server that is available for a new connection.
Downloads the contents of the MongoDB repository dividing the tasks in a pipeline with the following stages:
Download - Downloads from Mongo all the documents in the node store.
Downloads the contents of the MongoDB repository dividing the tasks in a pipeline with the following stages:
Download - Downloads from Mongo all the documents in the node store.
Receives batches of node state entries, sorts them in memory, and finally writes them to a tree store.
Interface to improve pluggability of the
AccessControlManager
,
namely the interaction of multiple managers within a
single repository.A factory for creating unbound LdapConnection objects managed by LdapConnectionPool.
A position of an entry in a page file.
Determines the position of this token
relative to the previous Token in a TokenStream, used in phrase
searching.
Default implementation of
PositionIncrementAttribute
.Determines how many positions this
token spans.
Default implementation of
PositionLengthAttribute
.An FST
Outputs
implementation where each output
is a non-negative long value.Provides a
PostingsReaderBase
and PostingsWriterBase
.Abstract API that consumes postings for an individual term.
Encodes/decodes terms, postings, and proximity data.
The core terms dictionaries (BlockTermsReader,
BlockTreeTermsReader) interact with a single instance
of this class to manage creation of
DocsEnum
and
DocsAndPositionsEnum
instances.Extension of
PostingsConsumer
to support pluggable term dictionaries.Extension to the
CommitHook
interface that indicates that this
commit hook implementation must be executed after the
validation hooks.LoginContext for pre-authenticated subjects that don't require further
validation nor additional login/logout steps.
PreAuthenticatedLogin
is used as marker in the shared map of the login context.The prefetcher, in a separate threads, reads ahead of indexing, such that the
nodestore cache and datastore cache is filled.
An iterator that pre-fetches a number of items in order to calculate the size
of the result if possible.
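The size-calculating pre-fetch idea described above can be sketched as a wrapper iterator (a minimal sketch, not the actual Oak class):

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.Iterator;

// Sketch: eagerly buffers up to `prefetch` items from the source iterator;
// if the source is exhausted within the buffer, the exact result size is
// known up front, otherwise size() reports -1 ("unknown").
class PrefetchIteratorSketch<T> implements Iterator<T> {
    private final Iterator<T> source;
    private final Deque<T> buffer = new ArrayDeque<>();
    private long knownSize = -1;

    PrefetchIteratorSketch(Iterator<T> source, int prefetch) {
        this.source = source;
        while (buffer.size() < prefetch && source.hasNext()) {
            buffer.add(source.next());
        }
        if (!source.hasNext()) {
            knownSize = buffer.size(); // fully buffered: exact size is known
        }
    }

    long size() {
        return knownSize;
    }

    @Override
    public boolean hasNext() {
        return !buffer.isEmpty() || source.hasNext();
    }

    @Override
    public T next() {
        return buffer.isEmpty() ? source.next() : buffer.poll();
    }
}
```

Small result sets get an exact size for free, while large ones pay only a bounded buffering cost.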
The options to use for prefetching.
Experimental
NodeStore
extension that allows prefetching of node
states given a collection of paths.A Filter that restricts search results to values that have a matching prefix in a given
field.
A Query that matches documents containing terms with a specified prefix.
Subclass of FilteredTermEnum for enumerating all terms that match the
specified prefix filter term.
Extension of the
JackrabbitAccessControlList
that is bound to a Principal
.Extension of the
JackrabbitAccessControlEntry
that additionally defines the target object where this entry
will take effect.Configuration interface for principal management.
Default implementation of the
PrincipalConfiguration
Default implementation of the
JackrabbitPrincipal
interface.Principal specific
RangeIteratorAdapter
implementing the
PrincipalIterator
interface.This interface defines the principal manager which is the clients view on all
principals known to the repository.
This implementation of
PrincipalManager
delegates back to a
delegatee wrapping each call into a SessionOperation
closure.Default implementation of the
PrincipalManager
interface.Interface to obtain the name of the
Principal
from a
given ExternalIdentityRef
.The
PrincipalProvider
defines methods to provide access to sources
of Principal
s.Callback implementation used to pass a
PrincipalProvider
to the
login module.Extension for the
PrincipalManager
that offers range search.Restriction provider implementation used for editing access control by
principal.
Extension of the JCR
AccessControlPolicy
intended to grant a set of Principal
s the ability to perform certain
actions.InfoStream implementation over a
PrintStream
such as System.out
.PriorityCache
implements a partial mapping from keys of type K
to values
of type V
.A PriorityQueue maintains a partial ordering of its elements such that the
least element can always be found in constant time.
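The constant-time-least-element property described above comes from a binary heap; a minimal self-contained sketch (illustrative, not Lucene's PriorityQueue API):

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of a binary min-heap: the least element sits at index 0 and is
// readable in constant time; add and pop are O(log n).
class MinHeapSketch {
    private final List<Integer> heap = new ArrayList<>();

    Integer top() {
        return heap.isEmpty() ? null : heap.get(0);
    }

    void add(int v) {
        heap.add(v);
        int i = heap.size() - 1;
        while (i > 0) {
            int parent = (i - 1) / 2;
            if (heap.get(parent) <= heap.get(i)) break;
            swap(i, parent);
            i = parent;
        }
    }

    int pop() {
        int top = heap.get(0);
        int last = heap.remove(heap.size() - 1);
        if (!heap.isEmpty()) {
            heap.set(0, last);
            int i = 0;
            while (true) {
                int l = 2 * i + 1, r = 2 * i + 2, smallest = i;
                if (l < heap.size() && heap.get(l) < heap.get(smallest)) smallest = l;
                if (r < heap.size() && heap.get(r) < heap.get(smallest)) smallest = r;
                if (smallest == i) break;
                swap(i, smallest);
                i = smallest;
            }
        }
        return top;
    }

    private void swap(int a, int b) {
        int t = heap.get(a);
        heap.set(a, heap.get(b));
        heap.set(b, t);
    }
}
```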
Validator
which detects changes committed to the read-only mounts.Internal representation of JCR privileges.
Allows to obtain the internal
representation
of privileges (or their names) and to convert the
internal representation back to privilege names.Wrapper around a set of
Privilege
s that allows to test if a given list of privilege names is included.Default implementation of the
PrivilegeCollection
interface.Interface for the privilege management configuration.
Configuration for the privilege management component.
Internal name constants used for the privilege management.
The
PrivilegeDefinition
interface defines the characteristics of
a JCR Privilege
.PrivilegeManager
is a Jackrabbit-specific extension to
JCR access control management that allows to retrieve privileges known
by this JCR implementation and to register new custom privileges according
to implementation specific rules.This implementation of
PrivilegeManager
delegates back to a
delegatee wrapping each call into a SessionOperation
closure.Privilege management related utility methods.
A simple CPU profiling tool similar to java -Xrunhprof.
Progress
...This
Editor
instance logs invocations to the logger
passed to its constructor after each 10000 calls to its
enter()
method.ProgressWithETA
...The
PropertiesUtil
is a utility class providing some
useful utility methods for converting property types.PropertyBuilder
for building in memory PropertyState
instances.PropertyDelegate
serve as internal representations of Property
s.The implementation of the corresponding JCR interface.
A condition to check if the property exists ("is not null").
TODO document
A condition to check if the property does not exist ("is null").
Predicate on property values.
Immutable property state.
Utility class for creating
PropertyState
instances.A
PropertyValue
implementation that wraps a PropertyState
Property statistics.
A property definition within a template (the property name, the type, and the
index within the list of properties for the given node).
Immutable property value.
The implementation of the corresponding JCR interface.
A property expression.
Utility class for creating
PropertyValue
instances.Information about a property being imported.
Hint indicating whether the property is multi- or single-value
Base interface for
ProtectedNodeImporter
and ProtectedPropertyImporter
.ProtectedNodeImporter
provides means to import protected
Node
s and the subtree defined below such nodes.ProtectedPropertyImporter
is in charge of importing single
properties with a protected PropertyDefinition
.Interface to mark properties or nodes located underneath a synchronized external identity as being protected (i.e.
A query to match
Authorizable
s.A "select" or "union" query.
The abstract base class for queries.
Creates queries from the
Analyzer
chain.The sort order of the result set of a query.
The query engine allows to parse and execute queries.
The query engine implementation.
Used to instruct the
QueryEngineImpl
on how to act with respect to the SQL2
optimisation.Settings of the query engine.
Formatter for JCR queries in order to make them easier to read.
The implementation of the corresponding JCR interface.
Represents a parsed query.
Represents an index.
A query index that may support using multiple access orders
(returning the rows in a specific order), and that can provide detailed
information about the cost.
A marker interface which means this index may support more than
just the minimal fulltext query syntax.
An index plan.
A builder for index plans.
A marker interface which means this index supports executing native queries
A sort order entry.
The sort order (ascending or descending).
A mechanism to index data.
Marker interface for services that need a QueryIndexProvider
The implementation of the corresponding JCR interface.
The implementation of the corresponding JCR interface.
The implementation of the corresponding JCR interface.
Query options (or "hints") that are used to customize the way the query is processed.
Query parser interface.
The implementation of the corresponding JCR interface.
Statistics on query operations
Object that holds statistical info about a query.
JMX Bindings for
QueryStat
.Common utilities used for user/group queries.
A validator for query.
Constrains search results to only match those which also match a provided
query.
JCR 2.0 / SQL-2 railroad generator.
RailroadMacro macro that prints out the content of a file or a URL.
A memory-resident
Directory
implementation.Represents a file in RAM as a list of byte[] buffers.
A memory-resident
IndexInput
implementation.A memory-resident
IndexOutput
implementation.Estimates the size (memory representation) of Java objects.
JVM diagnostic features.
A random access trace
Implementation of the
AuthorizableNodeName
that generates a random
node name that doesn't reveal the ID of the authorizable.Abstract base class to rate limit IO.
Simple class to rate limit IO.
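The IO rate limiting described by the two entries above typically works by sleeping callers until the observed throughput drops back under a target rate; a minimal sketch under that assumption (illustrative, not the actual Oak/Lucene API):

```java
// Sketch of simple IO rate limiting: callers report the bytes they have
// transferred and the limiter sleeps just long enough to keep the overall
// throughput at or below maxBytesPerSecond (illustrative, not the real API).
class IORateLimiterSketch {
    private final long maxBytesPerSecond;
    private final long startNanos = System.nanoTime();
    private long totalBytes;

    IORateLimiterSketch(long maxBytesPerSecond) {
        this.maxBytesPerSecond = maxBytesPerSecond;
    }

    synchronized void acquire(long bytes) {
        totalBytes += bytes;
        long elapsedNanos = System.nanoTime() - startNanos;
        // Time the transfer *should* have taken at the configured rate.
        long expectedNanos = totalBytes * 1_000_000_000L / maxBytesPerSecond;
        if (expectedNanos > elapsedNanos) {
            try {
                Thread.sleep((expectedNanos - elapsedNanos) / 1_000_000);
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        }
    }
}
```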
Defines variation in the capabilities of different RDBs.
Utility functions for connection handling.
Factory for creating
DataSource
s based on a JDBC connection URL.A builder for a
DocumentNodeStore
backed by a relational database.Serialization/Parsing of documents.
Implementation of
DocumentStore
for relational databases.Defines variation in the capabilities of different RDBs.
Implements (most) DB interactions used in
RDBDocumentStore
.Utility for dumping contents from
RDBDocumentStore
's tables.Convenience class that dumps the table creation statements for various
database types.
Convenience methods dealing with JDBC specifics.
Provides a component for a
PreparedStatement
and a method for
setting the parameters within this componentUtilities that provide JSON support on top of the existing
JsopTokenizer
support in oak-commons.RDB specific version of MissingLastRevSeeker.
Options applicable to RDB persistence
Container for the information in a RDB database column.
RDB specific version of
VersionGCSupport
which uses an extended query
interface to fetch required NodeDocument
s.Read raw data from the end of an underlying data source.
A cache consisting of a fast and slow component.
Utility class to safely share
DirectoryReader
instances across
multiple threads, while periodically reopening.Subreader slice from a parent composite reader.
Common util methods for dealing with
IndexReader
s and IndexReaderContext
s.Read Only Authorization Model
A node builder that throws an
UnsupportedOperationException
on
all attempts to modify the given base state.A read only
AbstractFileStore
implementation that supports going back
to old revisions.Read-only namespace registry.
Base implementation of a
NodeTypeManager
with support for reading
node types from the Tree
returned by ReadOnlyNodeTypeManager.getTypes()
.ReadOnlyVersionManager
provides implementations for read-only
version operations modeled after the ones available in VersionManager
.Writable namespace registry.
ReadWriteNodeTypeManager
extends the ReadOnlyNodeTypeManager
with support for operations that modify node types.ReadWriteVersionManager
...Extends the
ReadOnlyVersionManager
with methods to modify the
version store.Indicates that a modification operation was tried to execute on a read-only builder.
Partial mapping of keys of type
K
to values of type RecordId
.Statistics for
RecordCache
.The record id.
A memory optimised set of
RecordId
s.A table to translate record numbers to offsets.
Represents an entry in the record table.
The type of a record in a segment.
This utility breaks down space usage per record type.
A
ByteBlockPool.Allocator
implementation that recycles unused byte
blocks in a buffer and reuses them in subsequent calls to
RecyclingByteBlockAllocator.getByteBlock()
.A
IntBlockPool.Allocator
implementation that recycles unused int
blocks in a buffer and reuses them in subsequent calls to
RecyclingIntBlockAllocator.getIntBlock()
.This
IOMonitor
implementations registers the following monitoring endpoints
with the Metrics library if available:
RedisCacheIOMonitor.OAK_SEGMENT_CACHE_REDIS_SEGMENT_READ_BYTES
:
a meter metrics for the number of bytes read from segment redis cache
RedisCacheIOMonitor.OAK_SEGMENT_CACHE_REDIS_SEGMENT_WRITE_BYTES
:
a meter metrics for the number of bytes written to segment redis cache
RedisCacheIOMonitor.OAK_SEGMENT_CACHE_REDIS_SEGMENT_READ_TIME
:
a timer metrics for the time spent reading from segment redis cache
RedisCacheIOMonitor.OAK_SEGMENT_CACHE_REDIS_SEGMENT_WRITE_TIME
:
a timer metrics for the time spent writing to segment redis cache
Manages reference counting for a given object.
Referenceable binary.
Helper class used to keep track of uuid mappings (e.g.
Checks if
jcr:baseVersion
reference properties resolve to a node.Callback interface for collecting all blob references that are
potentially accessible.
Exposes the blob along with the Node id from which it is referenced
Utility class to safely share instances of a certain type across multiple
threads, while periodically refreshing them.
Use to receive notification when a refresh has
finished.
Implementations of this interface determine whether a session needs
to be refreshed before the next session operation is performed.
Composite of zero or more
RefreshStrategy
instances,
each of which covers a certain strategy.This refresh strategy refreshes after a given timeout of inactivity.
Regular Expression extension to
Automaton
.A fast regular expression query based on the
org.apache.lucene.util.automaton
package.Whiteboard service registration.
Holds the names of well-known registration properties for security-related components
A selector for selecting a node at a relative path from the node selected by
an initial selector.
Configures the Webdav and Davex servlet to enabled remote
access to the repository
Base class to update the metrics for
DocumentStoreStatsCollector.doneRemove(long, Collection, int)
for underlying DocumentStore
Keeps track of the status of a replica set based on information provided
by heartbeat events.
A
Reporter
receives callbacks for every NodeState
and PropertyState that was accessed via a {ReportingNodeState}
instance.A decoration layer for NodeState instances that intercepts
all accesses to NodeStates and PropertyStates (getters) and
informs a
Reporter
via its callbacks that the respective
NodeStates or PropertyStates have been accessed.This Class implements a servlet that is used as unified mechanism to retrieve
a jcr repository either through JNDI.
Callback implementation used to access the repository.
TODO document
Initializer of repository content.
This type represents the lock that has been already acquired on the segment
store.
This interface exposes repository management operations and the status
of such operations.
Enum whose ordinals correspond to the status codes.
The repository manager provides life-cycle management features for
repositories.
RepositoryManager constructs the Repository instance and registers it with the OSGi Service Registry.
Default implementation of the
RepositoryManagementMBean
based
on a Whiteboard
instance, which is used to look up individual
service providers for backup (FileStoreBackupRestoreMBean
), data store
garbage collections (BlobGCMBean
) and revision store garbage
collections (RevisionGCMBean
).This exception is thrown when the store cannot be accessed (e.g.
The
RepositoryPermission
allows to evaluate permissions that have
been defined on the repository level and which consequently are not bound
to a particular item.The RepositoryStartupServlet starts a jackrabbit repository and registers it
to the JNDI environment.
Statistics on core repository operations
The values of this enum determine the type of the time
series returned by
RepositoryStatistics.getTimeSeries(Type)
and RepositoryStatistics.getTimeSeries(String, boolean)
.MBean for providing repository wide statistics.
Restore a backup of a segment store into an existing segment store.
Collect options for the
Restore
command.A
Restriction
object represents a "live" restriction object that
has been created using the Jackrabbit specific extensions of the
AccessControlEntry
interface.The
RestrictionDefinition
interface provides methods for
discovering the static definition of any additional policy-internal refinements
of the access control definitions.Default implementation of the
RestrictionDefinition
interface.RestrictionImpl
Interface used to verify if a given
restriction
applies to a given
item or path.Interface to manage the supported restrictions present with a given access
control and permission management implementation.
Default restriction provider implementation that supports the following
restrictions:
AccessControlConstants.REP_GLOB
: A simple paths matching pattern.A result from executing a query.
Result
...A query result.
Implements a query result iterator which only returns a maximum number of
element from an underlying iterator starting at a given offset.
A query result row.
A query result row that keeps all data (for this row only) in memory.
ResultWriter
...Implementation of a
NodeStateDiff
that reports the inverse operation
to the wrapped NodeStateDiff
.A revision.
Provides revision related context.
Wraps an existing revision context and exposes a custom
clusterId
.A light-weight implementation of a MongoDB DBObject for a single revision
based map entry.
Default implementation of
RevisionGCMBean
based on a Runnable
.MBean for starting and monitoring the progress of
revision garbage collection.
Collector interface for DocumentNodeStore revision garbage collection
statistics.
MBean exposing DocumentNodeStore revision garbage collection statistics.
Utility for tracing a node back through the revision history.
Representation of a point in time for a given node.
Revisions
instances provide read and write access to
the current head state.Collect and print the revisions of a segment store.
Collect options for the
Revisions
command.Implementation specific options for the
setHead
methods.Gives information about current node revisions state.
A cache key implementation which consists of two
Revision
s.A vector of revisions.
Acts like forever growing T[], but internally uses a
circular buffer to reuse instances of T.
Implement to reset an instance
A
Root
instance serves as a container for a Tree
.Deprecated.
Please use
RootProvider
insteadThe implementation of the corresponding JCR interface.
Finite-state automaton with fast run operation.
The exception thrown when traversing too many entries in the result.
A data store backend that stores data on Amazon S3.
Defined Amazon S3 constants.
Amazon S3 data store extending from
AbstractSharedCachingDataStore
.MBean for JMX statistics pertaining to an S3DataStore.
This class sets the encryption mode in S3 requests.
The implementation of the corresponding JCR interface.
The function "issamenode(..)".
The implementation of the corresponding JCR interface.
The "issamenode(...)" join condition.
A simple scheduler for executing and scheduling tasks in the background.
A
Scheduler
instance transforms changes to the content tree
into a queue of commits
.Scheduling options for parametrizing individual commits.
A
Scorer
which wraps another scorer and caches the score of the
current document.Holds one hit in
TopDocs
.Expert: Common scoring functionality for different types of queries.
A child Scorer and its relationship to its parent.
Base rewrite method that translates each term into a query, and keeps
the scores as computed by the query.
Factory class used by
SearcherManager
to
create new IndexSearchers.Keeps track of current plus old IndexSearchers, closing
the old ones once they have timed out.
Simple pruner that drops any searcher older by
more than the specified seconds, than the newest
searcher.
Utility class to safely share
IndexSearcher
instances across multiple
threads, while periodically reopening.Base interface for all security related configurations.
Default implementation that provides empty initializers, validators,
commit hooks and parameters.
Main entry point for security related plugins to an Oak repository.
Callback implementation to set and get the
SecurityProvider
.Deprecated.
Replaced by
org.apache.jackrabbit.oak.security.internal.SecurityProviderBuilder
A list of records.
A consumer of record data.
Represents a single entry (segment) in the segment archive.
SegmentArchiveManager provides a low-level access to the segment files (eg.
This interface represents a read-only segment archive.
Represents a write-enabled, append-only archive.
A BLOB (stream of bytes).
Implementation of
BlobReferenceRetriever
to retrieve blob references from the
SegmentTracker
.This class exposes
CounterStats
for allocations and de-allocations
of Buffer
instances:
SegmentBufferMonitor.DIRECT_BUFFER_COUNT
: number of allocated direct byte
buffers.This class encapsulates the state of a segment being written.
This
WriteOperationHandler
uses a pool of SegmentBufferWriter
s,
which it passes to its execute
method.CheckpointMBean
implementation for the SegmentNodeStore
.Embeds a [read-only] SegmentInfo and adds per-commit
fields.
Perform a full-copy of repository data at segment level.
Collect options for the
SegmentCopy
command.Access the data of a segment.
This class holds configuration options for segment store revision gc.
The compactor type
The gc type.
Segment identifier.
A factory for
SegmentId
given their representation in MSB/LSB longs.Instances of this class provides
SegmentId
instances of a given
SegmentStore
and creates new SegmentId
instances on the fly
if required.Hash table of weak references to segment identifiers.
Information about a segment such as its name, directory, and files related
to the segment.
Expert: Controls the format of the
SegmentInfo
(segment metadata file).Specifies an API for classes that can read
SegmentInfo
information.A collection of segmentInfo objects with methods for operating on
those segments in relation to the file system.
Utility class for executing code that needs to do
something with the current segments file.
Specifies an API for classes that can write out
SegmentInfo
data.A node builder that keeps track of the number of updates
(set property calls and so on).
A record of type "NODE".
The top level class for the segment store.
Static factories for creating
SegmentNodeBuilder
instances
pertaining to specific SegmentStore
instances.A factory allowing creation of secondary segment node stores.
SegmentNodeStoreMonitor is notified for commit related operations performed by SegmentNodeStore.
An OSGi wrapper for segment node store monitoring configurations.
This type is a main entry point for the segment node store persistence.
An OSGi wrapper for the segment node store.
This component is activated when a configuration for the deprecated
SegmentNodeStoreService
from oak-segment
is detected.This exception is thrown when there the segment does not exist in the store
Listener for
SegmentNotFoundException
.This exception is thrown by the Segment NodeStore when an internal
limit is exceeded such as too many segment references.
SegmentParser
serves as a base class for parsing segments.Return type of
SegmentParser.parseBlob(RecordId)
.Type of blobs (and strings)
Return type of
SegmentParser.parseListBucket(RecordId, int, int, int)
.Return type of
SegmentParser.parseList(RecordId, RecordId, int)
.Result type of
SegmentParser.parseMap(RecordId, RecordId, MapRecord)
.Result type of
SegmentParser.parseNode(RecordId)
.Result type of
SegmentParser.parseTemplate(RecordId)
.Result type of
SegmentParser.parseValue(RecordId, RecordId, Type)
.Editor implementation which stores the property index NodeState data in a different
SegmentNodeStore used solely for property index storage purpose
A property, which can read a value or list record from a segment.
Instances of
SegmentReader
are responsible for reading records from segments.IndexReader implementation over a single segment.
Called when the shared core for this SegmentReader
is closed.
Holder class for common parameters used during read.
Represents a list of segment IDs referenced from a segment.
This MBean exposes the settings from
SegmentGCOptions
and
reflects the GC status as reported by the GCMonitor
.The backend storage interface used by the segment node store.
For reading any record of type "VALUE" as binary streams.
Tracker of references to segment identifiers and segment instances
that are currently kept in memory and factory for creating
SegmentId
instances.Version of the segment storage format.
Converts nodes, properties, values, etc.
Holder class for common parameters used during write.
An execution plan for one selector in a query.
The implementation of the corresponding JCR interface.
A selector within a query.
Common
Selector
implementationsA native int hash-based set where one value is reserved to mean "EMPTY" internally.
A
MergeScheduler
that simply does each merge
sequentially, using the current thread.Provides access to the underlying ServiceRegistry.
Utility class that links
ServletException
with support for
the exception chaining mechanism in Throwable
.Instances of this class are passed to all JCR implementation classes
(e.g.
TODO document
TODO document
MBean providing basic
Session
information and statistics.SessionNamespaces
implements namespace handling on the JCR
Session level.A
SessionOperation
provides an execution context for executing session scoped operations.User-specific settings which may be passed by the query engine to index providers during query planning and iteration
of results.
Provides
SessionQuerySettings
for principals with access to the content
repository.Overrides oak.fastQuerySize system property when available.
A convenient class which offers a semi-immutable object wrapper
implementation which allows one to set the value of an object exactly once,
and retrieve it many times.
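The set-exactly-once semantics described above can be sketched with an atomic flag (a minimal sketch, not Lucene's SetOnce API):

```java
import java.util.concurrent.atomic.AtomicBoolean;

// Sketch of set-once semantics: the first set() wins, any later set()
// fails, and get() may be called any number of times.
class SetOnceSketch<T> {
    private final AtomicBoolean wasSet = new AtomicBoolean();
    private volatile T value;

    void set(T v) {
        if (!wasSet.compareAndSet(false, true)) {
            throw new IllegalStateException("value was already set");
        }
        value = v;
    }

    T get() {
        return value;
    }
}
```

The compare-and-set makes the "exactly once" guarantee hold even when multiple threads race to set the value.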
Thrown when
SetOnce.set(Object)
is called more than once.Utility methods for
Set
conversions.Interface to be implemented by a shared data store.
Explicitly identifies the type of the data store
Utility class for
SharedDataStore
.Encapsulates the different type of records at the data store root.
Deprecated.
use
NumericDocValuesField
instead.A Sieve cache.
Support for "similar(...)
Similarity defines the components of Lucene scoring.
Stores the weight for a query across the indexed collection.
A subclass of
Similarity
that provides a simplified API for its
descendants.Implementation of the
CredentialsSupport
interface that handles SimpleCredentials
.This util class can be used to export a tree (eg entire repository) to a flat
file, without index dependency/involvement.
A straightforward implementation of FSDirectory using java.io.RandomAccessFile.
Reads bytes with RandomAccessFile.seek(long) followed by RandomAccessFile.read(byte[], int, int).
Implements LockFactory using File.createNewFile().
A very simple merged segment warmer that just ensures data structures are initialized.
WebdavServlet provides WebDAV support (level 1 and 2 compliant) for repository resources.
Implements LockFactory for a single in-process instance, meaning all locking will take place through this one instance.
Subclass of FilteredTermsEnum for enumerating a single term.
Exposes multi-valued view over a single-valued instance.
Math functions that trade off accuracy for speed.
This class forces a composite reader (e.g. a MultiReader or DirectoryReader) to emulate an atomic reader.
A wrapper store to simulate a slow backend store.
Floating point numbers smaller than 32 bits.
An IndexDeletionPolicy that wraps any other IndexDeletionPolicy and adds the ability to hold and later release snapshots of an index.
Encapsulates sort criteria for returned hits.
Deprecated. Use SortedDocValuesField instead.
A per-document byte[] with presorted values.
Field that stores a per-document BytesRef value, indexed for sorting.
A per-document set of presorted byte[] values.
Field that stores a set of per-document BytesRef values, indexed for faceting, grouping, joining.
A helper class to iterate over key-value pairs in a tree store, in ascending key order.
Base class for sorting algorithm implementations.
Stores information about how to sort documents by terms in an individual field.
Specifies the type of the terms to be sorted, or special types such as CUSTOM.
Deprecated. Depending on what type of store it is, use IndexStoreSortStrategy or IncrementalIndexStoreSortStrategy instead.
The base class for sources.
The base class of a selector and a join.
Matches spans near the beginning of a field.
Wraps any MultiTermQuery as a SpanQuery, so it can be nested within other SpanQuery classes.
Abstract class that defines how the query is rewritten.
A rewrite method that first translates each term into a SpanTermQuery in a BooleanClause.Occur.SHOULD clause in a BooleanQuery, and keeps the scores as computed by the query.
Only return those matches that have a specific payload at the given position.
Matches spans which are near one another.
Removes matches which overlap with another SpanQuery or which are within x tokens before or y tokens after another SpanQuery.
Matches the union of its clauses.
Only return those matches that have a specific payload at
the given position.
Base class for filtering a SpanQuery based on the position of a match.
Return value for SpanPositionCheckQuery.acceptPosition(Spans).
Checks to see if the SpanPositionCheckQuery.getMatch() lies between a start and end position.
Base class for span-based queries.
Expert: an enumeration of span matches.
Public for extension only.
Matches spans containing a term.
Expert-only.
Special automata operations.
Support for "spellcheck(...)
Helper class for loading SPI classes from classpath (META-INF files).
Implements a split document cleanup.
The SQL2 parser can convert a JCR-SQL2 query to a query.
StableRevisionComparator implements a revision comparator, which is only based on stable information available in the two revisions presented to this comparator.
This component is activated when a configuration for the deprecated StandbyStoreService from oak-segment is detected.
Automaton state.
An xpath statement.
Pair of states.
The base class for static operands.
The base class for static operands (literal, bind variables).
Manager for all repository wide statistics.
Factory to create StatisticsProvider depending on setup.
A tag interface to indicate that a class is a Stat.
Builder for commonly used statistics for flat file stores.
A collector for statistics about a repository.
Statistics Util class.
Util class to generate a name for Stats implementations that can be used for creating labels in prometheus.
A wrapper store that allows capturing performance counters for a storage
backend.
Utility class to be used for tracking of timing within methods.
An in-memory storage for collectors.
Storage for files in a tree store.
A helper class to build storage backends for a tree store.
A field whose value is stored so that IndexSearcher.doc(int) and IndexReader.document(int, org.apache.lucene.index.StoredFieldVisitor) will return the field and its value.
Controls the format of stored fields.
Codec API for reading stored fields.
Codec API for writing stored fields.
Expert: provides a low-level means of accessing the stored field values in an index.
Enumeration of possible return values for StoredFieldVisitor.needsField(org.apache.lucene.index.FieldInfo).
Deprecated. Use BinaryDocValuesField instead.
Utility methods for Stream conversions.
This Blob implementation is based on a string.
This class caches the path strings used in the CompositeNodeState to avoid keeping too many strings in memory.
TODO document
A field that is indexed but not tokenized: the entire String value is indexed as a single token.
Methods for manipulating strings.
Source copied from a publicly available library.
Utility class to store a list of strings and perform sorting on them.
Some string utility methods.
A cache value wrapping a simple string.
Editor wrapper that passes only changes in the specified subtree to
the given delegate editor.
Validator that excludes a subtree from the validation process and delegates
validation of other changes to another given validator.
Validator that detects changes to a specified subtree and delegates the
validation of such changes to another given validator.
Support for "suggest(...)
Summary ...
Helper methods for sweep2 functionality introduced with OAK-9176.
Represents the sweep2 status as recorded in the settings collection.
Helper class to perform a revision sweep for a given clusterId.
SyncContext is used as scope for sync operations.
Represents a synchronized identity managed by a SyncHandler.
Exception thrown by methods defined on the SyncHandler interface indicating that user or group synchronization failed.
SyncHandler is used to sync users and groups from an ExternalIdentityProvider.
Marker interface identifying classes that map a given SyncHandler to an ExternalIdentityProvider where both are identified by their name.
Provides utilities to manage synchronized external identities.
Implements a DocumentStore wrapper which synchronizes on all methods.
The external identity synchronization management.
SyncManagerImpl is used to manage registered sync handlers.
Implementation of the SynchronizationMBean interface.
Defines the result of a sync operation.
Result codes for sync operations.
Principal to mark a system internal subject.
Utility class for consistent handling of system properties.
Internal extension of the MutableRoot to be used when usage of the system internal subject is needed.
Internal utility providing access to a system internal subject instance.
Principal used to mark a system user.
TargetImportHandler serves as the base class for the concrete classes DocViewImportHandler and SysViewImportHandler.
A strategy for the recovery of segments.
This implementation of Revisions is backed by a journal file where the current head is persisted by calling TarRevisions.tryFlush(Flusher).
The in-memory representation of a "hidden class" of a node; inspired by the Chrome V8 JavaScript engine.
A Term represents a word from text.
A Query that matches documents containing a term.
A Filter that restricts search results to a range of term
values in a given field.
A Query that matches documents within an range of terms.
Subclass of FilteredTermEnum for enumerating all terms that match the
specified range parameters.
Access to the terms in a specific field.
Abstract API that consumes terms for an individual field.
Iterator to seek (TermsEnum.seekCeil(BytesRef), TermsEnum.seekExact(BytesRef)) or step through (BytesRefIterator.next()) terms to obtain frequency information (TermsEnum.docFreq()), DocsEnum or DocsAndPositionsEnum for the current term (TermsEnum.docs(org.apache.lucene.util.Bits, org.apache.lucene.index.DocsEnum)).
Represents returned result from TermsEnum.seekCeil(org.apache.lucene.util.BytesRef).
Expert: Public for extension only.
Encapsulates all required internal state to position the associated TermsEnum without re-seeking.
Contains statistics for a specific term.
Holder for per-term statistics.
This attribute is requested by TermsHashPerField to index the contents.
Controls the format of term vectors
Codec API for reading term vectors:
Codec API for writing term vectors:
A field that is indexed and tokenized, without term vectors.
TextValue represents a serialized property value read from a System or Document View XML document.
Implementation of Similarity with the Vector Space Model.
A tool that removes uninteresting lines from stack traces.
A tool that converts full thread dumps files to the "standard" format.
A tool that converts a file with date/time and thread dumps into a list of
date/time and just the thread names.
Thrown by Lucene on detecting that Thread.interrupt() had been called.
Keeps track of a list of threads and prints statistics of CPU usage of the threads.
Thread factory that registers all new threads with a given thread monitor.
A ThreeWayConflictHandler is responsible for handling conflicts which happen on Root.rebase() and on the implicit rebase operation which takes place on Root.commit().
Resolutions for conflicts.
Interface to implement throttling for the document store.
Wrapper of another DocumentStore that does a throttling check on any method invocation (create, update or delete) and throttles the system if under high load.
Stats collector for throttling operations.
Throttling statistics helper class.
Merges segments of approximately equal size, subject to an allowed number of segments per tier.
Holds score and explanation for a single candidate merge.
Formats a time duration as a human-readable string, inspired by Stopwatch#toString().
A class representing a time interval, with utility methods to derive related intervals, check time stamps for containment, etc.
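The Stopwatch-style duration formatting mentioned above can be sketched as follows; this is a hypothetical minimal version, not Oak's TimeDurationFormatter API. The idea is to pick the largest time unit that keeps the value at or above 1, then print four significant digits with a unit abbreviation:

```java
import java.util.Locale;
import java.util.concurrent.TimeUnit;

/** Minimal sketch of human-readable duration formatting. */
final class DurationFormat {

    /** Formats a nanosecond duration, e.g. 1_500_000_000 ns as "1.500 s". */
    static String format(long nanos) {
        TimeUnit unit = chooseUnit(nanos);
        double value = (double) nanos / TimeUnit.NANOSECONDS.convert(1, unit);
        // Locale.ROOT keeps the decimal point stable across locales
        return String.format(Locale.ROOT, "%.4g %s", value, abbreviate(unit));
    }

    /** Largest unit in which the duration is still >= 1. */
    private static TimeUnit chooseUnit(long nanos) {
        if (TimeUnit.SECONDS.convert(nanos, TimeUnit.NANOSECONDS) > 0) return TimeUnit.SECONDS;
        if (TimeUnit.MILLISECONDS.convert(nanos, TimeUnit.NANOSECONDS) > 0) return TimeUnit.MILLISECONDS;
        if (TimeUnit.MICROSECONDS.convert(nanos, TimeUnit.NANOSECONDS) > 0) return TimeUnit.MICROSECONDS;
        return TimeUnit.NANOSECONDS;
    }

    private static String abbreviate(TimeUnit unit) {
        switch (unit) {
            case SECONDS:      return "s";
            case MILLISECONDS: return "ms";
            case MICROSECONDS: return "us";
            default:           return "ns";
        }
    }
}
```

This mirrors the behaviour of Guava's Stopwatch.toString(), which the entry above cites as inspiration.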
The TimeLimitingCollector is used to timeout search requests that take longer than the maximum allowed search time limit.
Thrown when elapsed search time exceeds allowed search time.
Thread used to timeout search requests.
A collector that also measures the elapsed time.
A timing context.
Interface for a time series of the measured values per second, minute, hour and day.
A UUID implementation.
A DocumentStore wrapper that can be used to log and also time DocumentStore calls.
A Token is an occurrence of a term from the text of a field.
Expert: Creates a TokenAttributeFactory returning Token as instance for the basic attributes and for all other attributes calls the given delegate factory.
Configuration for token management.
Default implementation for the TokenConfiguration interface.
TokenCredentials implements the Credentials interface and represents single token credentials.
Subclass of CredentialException indicating that the token credentials used for repository login have expired.
A TokenFilter is a TokenStream whose input is another TokenStream.
The TokenInfo provides data associated with a login token and basic methods to verify the validity of token credentials at a given point in time.
A Tokenizer is a TokenStream whose input is a Reader.
LoginModule implementation that is able to handle login requests based on TokenCredentials.
Interface to create and manage login tokens.
Callback implementation to set and retrieve a login token provider.
Consumes a TokenStream and creates an Automaton where the transition labels are UTF8 bytes (or Unicode code points if unicodeArcs is true) from the TermToBytesRefAttribute.
Utility class for common stuff pertaining to tooling.
Represents hits returned by IndexSearcher.search(Query,Filter,int) and IndexSearcher.search(Query,int).
A base class for all collectors that return a TopDocs output.
Represents hits returned by IndexSearcher.search(Query,Filter,int,Sort).
A class that remembers the top k entries.
Collects the top largest binaries.
Base rewrite method for collecting only the top terms via a priority queue.
Helper methods to ease implementing Object.toString().
Just counts the total number of hits.
An instance of a Trace specifies a read pattern for tracing IO reads of segments with an IOTracer instance.
Utility class for running the various Trace implementations.
Tracker for whiteboard services.
A delegating Directory that records which files were
written to and deleted.
Class that tracks changes to a delegated IndexWriter, used by ControlledRealTimeReopenThread to ensure specific changes are visible.
Automaton transition.
An index that traverses over a given subtree.
A tree instance represents a snapshot of the ContentRepository tree at the time the instance was acquired from a ContentSession.
Status of an item in a Tree.
Oak internal utility interface to avoid repeated retrieval of an underlying Tree.
TreeContext represents item related information in relation to a dedicated module.
Deprecated. Please use TreeProvider instead.
A TreeLocation denotes a location inside a tree.
The TreePermission allows evaluating permissions defined for a given Tree and its properties.
A session that allows reading and writing keys and values in a tree store.
The tree store is similar to the flat file store, but instead of storing all key-value pairs in a single file, it stores the entries in multiple files (except if there are very few nodes).
A node state of an Oak node that is stored in a tree store.
A command line utility for the tree store.
Allows to distinguish different types of trees based on their name, ancestry or primary type.
Utility providing common operations for the Tree that are not provided by the API.
FunctionalInterface to consume metric stats for create/upsert operations.
An interface for implementations that support 2-phase commit.
A utility for executing 2-phase commit on several objects.
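The two-phase commit utility described in the two entries above can be sketched as follows; the interface and driver names here are hypothetical, not the actual TwoPhaseCommit/TwoPhaseCommitTool signatures. The key idea is that all expensive, failure-prone work happens in the prepare phase, so the commit phase only runs once every participant has prepared successfully:

```java
import java.util.ArrayList;
import java.util.List;

/** Hypothetical participant in a two-phase commit. */
interface TwoPhaseResource {
    void prepareCommit() throws Exception; // phase 1: expensive work, may fail
    void commit() throws Exception;        // phase 2: cheap, expected to succeed
    void rollback() throws Exception;      // undo after a failed prepare
}

/** Sketch of a driver that prepares all resources before committing any. */
final class TwoPhaseCommitDriver {
    static void execute(List<TwoPhaseResource> resources) throws Exception {
        List<TwoPhaseResource> prepared = new ArrayList<>();
        try {
            for (TwoPhaseResource r : resources) {
                r.prepareCommit();
                prepared.add(r);
            }
        } catch (Exception e) {
            // one prepare failed: undo everything prepared so far, then rethrow
            for (TwoPhaseResource r : prepared) {
                try { r.rollback(); } catch (Exception ignored) { }
            }
            throw e;
        }
        // all prepares succeeded: commit everything
        for (TwoPhaseResource r : resources) {
            r.commit();
        }
    }
}
```

A failure during prepare leaves no resource committed; a failure during commit is the one window the protocol cannot fully protect against, which is why commit is expected to be cheap and unlikely to fail.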
Thrown by TwoPhaseCommitTool.execute(TwoPhaseCommit...) when an object fails to commit().
Thrown by TwoPhaseCommitTool.execute(TwoPhaseCommit...) when an object fails to prepareCommit().
Instances of this class map Java types to property types.
A Token's lexical type.
Default implementation of TypeAttribute.
TypeCodes maps between Type and the code used to prefix its json serialisation.
Interface to provide the DataStore with the ability to add records with BlobOptions.
Validator implementation that checks JCR node type constraints.
Extension point that allows pluggable handling of constraint violations.
Inheritance-aware node type predicate for node states.
UnboundConnectionValidator ...
A pool implementation for LdapConnection objects.
UnboundConnectionValidator ...
Class to encode Java's UTF16 char[] into UTF8 byte[] without always allocating a new byte[] as String.getBytes("UTF-8") does.
Represents a union query.
Checker that ensures the consistency of unique entries in the various mounts.
A universal Filter implementation, which can be parametrised by a UniversalFilter.Selector and a Predicate.
A selector instance maps callbacks on Filters to NodeState instances, which should be used for determining inclusion or exclusion of the associated event.
A DocumentStore "update" operation for one document.
A condition to check before an update is applied.
A key for an operation consists of a property name and an optional
revision.
A DocumentStore operation for a given key within a document.
The DocumentStore operation type.
This MergePolicy is used for upgrading all existing segments of an index when calling IndexWriter.forceMerge(int).
Cache for staging async uploads.
The implementation of the corresponding JCR interface.
The function "upper(..)".
Base class to update the metrics for DocumentStoreStatsCollector.doneCreateOrUpdate(long, Collection, List) for the underlying DocumentStore.
User is a special Authorizable that can be authenticated and impersonated.
The UserAction interface allows implementations to be informed about and react to the following changes to a User:
UserAction.onDisable(User, String, Root, NamePathMapper)
Provides a user management specific implementation of the Authentication interface to those LoginModules that verify a given authentication request by evaluating information exposed by the Jackrabbit user management API.
Configuration interface for user management.
Default implementation of the UserConfiguration.
User management related constants.
Credentials implementation that only contains a userId but no password.
LoginModule implementation for UserIDTest.
The UserManager provides access to and means to maintain authorizable objects, i.e.
Callback implementation used to pass a UserManager to the login module.
This implementation of UserManager delegates back to a delegatee, wrapping each call into a UserManager closure.
UserManagerImpl...
Query manager for user specific searches.
Utility methods for user management.
Converts UTF-32 automata to the equivalent UTF-8 representation.
Utility class related to encoding characters into (UTF-8) byte sequences.
Static helper methods.
Represents a path in TopNSearcher.
Holds a single input (IntsRef) + output, returned by
shortestPaths()
.Utility class to find top N shortest paths from start
point(s).
Amazon S3 utilities.
Utility methods.
Utils
provide some utility methods.A predicate for matching against a list of UUIDs.
Content change validator.
Extension point for plugging in different kinds of validation rules
for content changes.
Implementation of the ValueFactory interface.
A LockFactory that wraps another LockFactory and verifies that each lock obtain/release is "correct" (never results in two processes holding the lock at the same time).
Used by certain classes to match version compatibility across releases of Lucene.
The VersionableEditor provides two possible ways to handle versionable nodes: it can copy the version histories of versionable nodes, or it can skip copying version histories and remove the mix:versionable mixin together with any related properties (see VersionHistoryUtil.removeVersionProperties(NodeBuilder, TypePredicate)).
Commit hook which is responsible for storing the path of the versionable node with every version history.
The VersionablePropertiesEditor adds missing versionable properties.
Deprecated. Use VersionConstants instead.
VersionConstants...
This class allows to copy the version history, optionally filtering it with a given date.
This class allows to configure the behaviour of the version copier.
VersionDelegate ...
Gives a recommendation about parameters for the next revision garbage collection run.
VersionHistoryDelegate ...
VersionHistoryImpl ...
This class gathers together editors related to handling version storage:
VersionEditorProvider
VersionEditor - creates version history, handles checking-in, checking-out and restoring, and prevents a checked-in node from being modified,
VersionStorageEditor - validates changes on the version storage,
VersionableCollector - collects all existing versionable UUIDs, so assigned histories won't be removed in the next step,
OrphanedVersionCleaner - removes all histories that are empty and no longer have a parent versionable node.
VersionManagerDelegate ...
Simple abstraction of the version storage.
A utility for keeping backwards compatibility on previously abstract methods
(or similar replacements).
Editor wrapper that passes only changes to non-hidden nodes and properties
(i.e.
Event filter that hides all non-visible content.
Validator implementation that allows to exclude hidden nodes and/or properties
for the validation process.
DocIdSet implementation based on word-aligned hybrid encoding on words of 8 bits.
A builder for WAH8DocIdSets.
Implements a combination of WeakHashMap and IdentityHashMap.
Integrates the Felix WebConsole support with the servlet container.
Expert: Calculate query weights and build query scorers.
Dynamic AuthorizableActionProvider based on the available whiteboard services.
Dynamic AuthorizableNodeName based on the available whiteboard services.
Marker interface for services that can hold a whiteboard.
Callback implementation to set and retrieve the Whiteboard.
Dynamic EditorProvider based on the available whiteboard services.
Dynamic Executor based on the available whiteboard services.
Dynamic IndexEditorProvider based on the available whiteboard services.
Dynamic QueryIndexProvider based on the available whiteboard services.
Dynamic RestrictionProvider based on the available whiteboard services.
Dynamic UserAuthenticationFactory based on the available whiteboard services.
Implements the wildcard search query.
Delegate class for workspace operations.
TODO document
Initializer of a workspace and its initial content.
Instances of this class manage the deduplication caches used by the SegmentWriter to avoid writing multiple copies of the same record.
This implementation of WriterCacheManager returns RecordCache instances for the string and template cache and a Cache instance for the node cache.
This implementation of WriterCacheManager returns empty caches of size 0.
Deprecated. An XA-enabled session should directly implement the XAResource interface.
This class can convert an XPath query to a SQL2 query.