Uses of Class
org.apache.lucene.analysis.TokenStream
Packages that use TokenStream
Package: org.apache.lucene.analysis
Description: API and code to convert text into indexable/searchable tokens.

Package: org.apache.lucene.document
Description: The logical representation of a Document for indexing and searching.

Package: org.apache.lucene.index
Description: Code to maintain and access indices.
Uses of TokenStream in org.apache.lucene.analysis

Subclasses of TokenStream in org.apache.lucene.analysis:
- final class CachingTokenFilter: This class can be used if the token attributes of a TokenStream are intended to be consumed more than once.
- final class NumericTokenStream: Expert: This class provides a TokenStream for indexing numeric values that can be used by NumericRangeQuery or NumericRangeFilter.
- class TokenFilter: A TokenFilter is a TokenStream whose input is another TokenStream.
- class Tokenizer: A Tokenizer is a TokenStream whose input is a Reader.

Fields in org.apache.lucene.analysis declared as TokenStream:
- protected final TokenStream TokenFilter.input: The source of tokens for this filter.
- protected final TokenStream Analyzer.TokenStreamComponents.sink: Sink tokenstream, such as the outer tokenfilter decorating the chain.

Methods in org.apache.lucene.analysis that return TokenStream:
- TokenStream Analyzer.TokenStreamComponents.getTokenStream(): Returns the sink TokenStream.
- final TokenStream Analyzer.tokenStream(String fieldName, Reader reader): Returns a TokenStream suitable for fieldName, tokenizing the contents of reader.
- final TokenStream Analyzer.tokenStream(String fieldName, String text): Returns a TokenStream suitable for fieldName, tokenizing the contents of text.

Methods in org.apache.lucene.analysis with parameters of type TokenStream:
- TokenStreamToAutomaton.toAutomaton(TokenStream in): Pulls the graph (including PositionLengthAttribute) from the provided TokenStream, and creates the corresponding automaton where arcs are bytes (or Unicode code points if unicodeArcs = true) from each term.

Constructors in org.apache.lucene.analysis with parameters of type TokenStream:
- CachingTokenFilter(TokenStream input): Create a new CachingTokenFilter around input, caching its token attributes, which can be replayed again after a call to CachingTokenFilter.reset().
- protected TokenFilter(TokenStream input): Construct a token stream filtering the given input.
- TokenStreamComponents(Tokenizer source, TokenStream result): Creates a new Analyzer.TokenStreamComponents instance.
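The Analyzer.tokenStream methods listed above are the usual entry point for consuming a TokenStream. A minimal sketch of the consumer workflow (reset, incrementToken loop, end, close), assuming StandardAnalyzer is available on the classpath; the no-argument StandardAnalyzer constructor is an assumption that holds for newer releases, while older 4.x releases require a Version argument:

```java
import java.io.IOException;
import java.util.ArrayList;
import java.util.List;
import org.apache.lucene.analysis.Analyzer;
import org.apache.lucene.analysis.TokenStream;
import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.analysis.tokenattributes.CharTermAttribute;

public class TokenStreamDemo {

    // Drains the TokenStream returned by Analyzer.tokenStream(String, String)
    // into a list of token texts, following the required consumer workflow:
    // reset() -> incrementToken() loop -> end() -> close().
    static List<String> analyze(Analyzer analyzer, String field, String text) throws IOException {
        List<String> tokens = new ArrayList<>();
        try (TokenStream ts = analyzer.tokenStream(field, text)) {
            CharTermAttribute term = ts.addAttribute(CharTermAttribute.class);
            ts.reset();                      // mandatory before the first incrementToken()
            while (ts.incrementToken()) {
                tokens.add(term.toString());
            }
            ts.end();                        // mandatory after the last incrementToken()
        }
        return tokens;
    }

    public static void main(String[] args) throws IOException {
        // StandardAnalyzer tokenizes and lowercases the input text.
        try (Analyzer analyzer = new StandardAnalyzer()) {
            System.out.println(analyze(analyzer, "body", "Hello TokenStream World"));
        }
    }
}
```

Skipping reset() or end() is a common bug: many TokenStream implementations throw or misbehave if the workflow is not followed exactly.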
Uses of TokenStream in org.apache.lucene.document

Fields in org.apache.lucene.document declared as TokenStream:
- protected TokenStream Field.tokenStream: Pre-analyzed tokenStream for indexed fields; this is separate from fieldsData because you are allowed to have both; e.g. maybe the field has a String value but you customize how it's tokenized.

Methods in org.apache.lucene.document that return TokenStream:
- TokenStream Field.tokenStream(Analyzer analyzer)
- TokenStream Field.tokenStreamValue(): The TokenStream for this field to be used when indexing, or null.

Methods in org.apache.lucene.document with parameters of type TokenStream:
- void Field.setTokenStream(TokenStream tokenStream): Expert: sets the token stream to be used for indexing and causes isIndexed() and isTokenized() to return true.

Constructors in org.apache.lucene.document with parameters of type TokenStream:
- Field(String name, TokenStream tokenStream): Deprecated. Use TextField instead.
- Field(String name, TokenStream tokenStream, Field.TermVector termVector): Deprecated. Use TextField instead.
- Field(String name, TokenStream tokenStream, FieldType type): Create field with TokenStream value.
- TextField(String name, TokenStream stream): Creates a new un-stored TextField with TokenStream value.
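The TextField(String, TokenStream) constructor above accepts pre-analyzed tokens instead of a plain String value. A minimal sketch, assuming StandardAnalyzer merely as a convenient source of a TokenStream (in practice the stream would often come from a custom analysis chain):

```java
import org.apache.lucene.analysis.Analyzer;
import org.apache.lucene.analysis.TokenStream;
import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.document.Document;
import org.apache.lucene.document.TextField;

public class PreAnalyzedFieldDemo {

    // Builds an un-stored TextField whose tokens come from an externally
    // supplied TokenStream rather than from a plain String value.
    static TextField preAnalyzedField(String name, TokenStream stream) {
        return new TextField(name, stream);
    }

    public static void main(String[] args) throws Exception {
        try (Analyzer analyzer = new StandardAnalyzer()) {
            TokenStream stream = analyzer.tokenStream("title", "a pre-analyzed title");
            TextField title = preAnalyzedField("title", stream);

            Document doc = new Document();
            doc.add(title);

            System.out.println(title.name());                       // the field name
            System.out.println(title.stringValue());                // null: the value is a stream
            System.out.println(title.tokenStreamValue() == stream); // true
        }
    }
}
```

Note that such a field has no String value at all: stringValue() returns null and tokenStreamValue() returns the supplied stream, matching the Field.tokenStream field description above.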
Uses of TokenStream in org.apache.lucene.index

Methods in org.apache.lucene.index that return TokenStream:
- TokenStream IndexableField.tokenStream(Analyzer analyzer): Creates the TokenStream used for indexing this field.
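IndexableField.tokenStream(Analyzer) is how the indexing chain obtains a field's tokens; calling it directly shows exactly what IndexWriter would invert. A sketch assuming the one-argument signature listed above (later Lucene versions add a second "reuse" parameter) and StandardAnalyzer on the classpath:

```java
import java.io.IOException;
import org.apache.lucene.analysis.Analyzer;
import org.apache.lucene.analysis.TokenStream;
import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.document.Field;
import org.apache.lucene.document.TextField;
import org.apache.lucene.index.IndexableField;

public class IndexableFieldDemo {

    // Counts the tokens that IndexableField.tokenStream(Analyzer) yields,
    // i.e. the tokens IndexWriter would invert for this field.
    static int countTokens(IndexableField field, Analyzer analyzer) throws IOException {
        int count = 0;
        try (TokenStream ts = field.tokenStream(analyzer)) {
            ts.reset();
            while (ts.incrementToken()) {
                count++;
            }
            ts.end();
        }
        return count;
    }

    public static void main(String[] args) throws IOException {
        try (Analyzer analyzer = new StandardAnalyzer()) {
            // Any Field is an IndexableField; IndexWriter calls tokenStream(...)
            // on each indexed field during document inversion.
            IndexableField field = new TextField("body", "two tokens", Field.Store.NO);
            System.out.println(countTokens(field, analyzer));
        }
    }
}
```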