All Classes
Class |
Description |
AbstractAllGroupHeadsCollector<GH extends AbstractAllGroupHeadsCollector.GroupHead> |
This collector specializes in collecting the most relevant document (group head) for each group that matches the query.
|
AbstractAllGroupHeadsCollector.GroupHead<GROUP_VALUE_TYPE> |
Represents a group head.
|
AbstractAllGroupsCollector<GROUP_VALUE_TYPE> |
A collector that collects all groups that match the
query.
|
AbstractAllTermDocs |
Base class for enumerating all but deleted docs.
|
AbstractEncoder |
Base class for payload encoders.
|
AbstractField |
Base class for Field implementations
|
AbstractFirstPassGroupingCollector<GROUP_VALUE_TYPE> |
FirstPassGroupingCollector is the first of two passes necessary
to collect grouped hits.
|
AbstractQueryConfig |
|
AbstractQueryMaker |
Abstract base query maker.
|
AbstractRangeQueryNode<T extends FieldValuePairQueryNode<?>> |
This class should be extended by nodes intending to represent range queries.
|
AbstractSecondPassGroupingCollector<GROUP_VALUE_TYPE> |
SecondPassGroupingCollector is the second of two passes
necessary to collect grouped docs.
|
AdaptiveFacetsAccumulator |
|
AddDocTask |
Add a document, optionally of a certain size.
|
AddFacetedDocTask |
Add a faceted document.
|
Aggregator |
An Aggregator is the analogue of Lucene's Collector (see Collector), for processing the categories
belonging to a certain document.
|
Algorithm |
Test algorithm, as read from file
|
AllowLeadingWildcardAttribute |
Deprecated. |
AllowLeadingWildcardAttributeImpl |
Deprecated. |
AllowLeadingWildcardProcessor |
|
AlreadyClosedException |
This exception is thrown when there is an attempt to
access something that has already been closed.
|
Among |
|
Analyzer |
An Analyzer builds TokenStreams, which analyze text.
|
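As a rough illustration of how an Analyzer is typically consumed (a minimal sketch assuming the Lucene 3.x API; StandardAnalyzer, the field name, the sample text, and Version.LUCENE_36 are illustrative choices, not prescribed by this entry):

    import java.io.StringReader;
    import org.apache.lucene.analysis.Analyzer;
    import org.apache.lucene.analysis.TokenStream;
    import org.apache.lucene.analysis.standard.StandardAnalyzer;
    import org.apache.lucene.analysis.tokenattributes.CharTermAttribute;
    import org.apache.lucene.util.Version;

    public class AnalyzeExample {
      public static void main(String[] args) throws Exception {
        Analyzer analyzer = new StandardAnalyzer(Version.LUCENE_36);
        // Build a TokenStream over some text and pull the analyzed tokens from it.
        TokenStream ts = analyzer.tokenStream("body",
            new StringReader("Analyzers build TokenStreams"));
        CharTermAttribute term = ts.addAttribute(CharTermAttribute.class);
        ts.reset();
        while (ts.incrementToken()) {
          System.out.println(term.toString());   // one analyzed token per iteration
        }
        ts.end();
        ts.close();
      }
    }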
AnalyzerAttribute |
Deprecated. |
AnalyzerAttributeImpl |
Deprecated. |
AnalyzerProfile |
Manages analysis data configuration for SmartChineseAnalyzer
|
AnalyzerQueryNodeProcessor |
|
AnalyzingQueryParser |
Overrides Lucene's default QueryParser so that Fuzzy-, Prefix-, Range-, and WildcardQuerys
are also passed through the given analyzer, but wildcard characters (like *)
don't get removed from the search terms.
|
AndQuery |
|
AndQueryNode |
A AndQueryNode represents an AND boolean operation performed on a
list of nodes.
|
AnyQueryNode |
A AnyQueryNode represents an ANY operator performed on a list of
nodes.
|
AnyQueryNodeBuilder |
|
ArabicAnalyzer |
|
ArabicLetterTokenizer |
Deprecated.
|
ArabicNormalizationFilter |
|
ArabicNormalizer |
Normalizer for Arabic.
|
ArabicStemFilter |
|
ArabicStemmer |
Stemmer for Arabic.
|
ArmenianAnalyzer |
|
ArmenianStemmer |
This class was automatically generated by a Snowball to Java compiler.
It implements the stemming algorithm defined by a snowball script.
|
ArrayHashMap<K,V> |
An Array-based hashtable which maps keys to values, similar to Java's HashMap, but
performance tests showed it performs better.
|
ArrayUtil |
Methods for manipulating arrays.
|
ASCIIFoldingFilter |
This class converts alphabetic, numeric, and symbolic Unicode characters
which are not in the first 127 ASCII characters (the "Basic Latin" Unicode
block) into their ASCII equivalents, if one exists.
|
AssertingIndexSearcher |
Helper class that adds some extra checks to ensure correct
usage of IndexSearcher and Weight .
|
AssociationEnhancement |
|
AssociationFloatProperty |
An AssociationProperty which treats the association as float - the
association bits are actually float bits, and thus merging two associations
is done by float summation.
|
AssociationFloatSumAggregator |
An Aggregator which updates the weight of a category by summing the
weights of the float association it finds for every document.
|
AssociationFloatSumFacetRequest |
Facet request for weighting facets according to their float association by
summing the association values.
|
AssociationIntProperty |
An AssociationProperty which treats the association as int - merges
two associations by summation.
|
AssociationIntSumAggregator |
An Aggregator which updates the weight of a category by summing the
weights of the integer association it finds for every document.
|
AssociationIntSumFacetRequest |
Facet request for weighting facets according to their integer association by
summing the association values.
|
AssociationListTokenizer |
Tokenizer for associations of a category
|
AssociationProperty |
|
AssociationsPayloadIterator |
Allows easy iteration over the associations payload, decoding and breaking it
to (ordinal, value) pairs, stored in a hash.
|
Attribute |
Base interface for attributes.
|
AttributeImpl |
|
AttributeReflector |
|
AttributeSource |
An AttributeSource contains a list of different AttributeImpl s,
and methods to add and get them.
|
AttributeSource.AttributeFactory |
|
AttributeSource.State |
This class holds the state of an AttributeSource.
|
AveragePayloadFunction |
Calculate the final score as the average score of all payloads seen.
|
BalancedSegmentMergePolicy |
Deprecated.
|
BalancedSegmentMergePolicy.MergePolicyParams |
Specifies configuration parameters for BalancedSegmentMergePolicy.
|
BaseCharFilter |
|
BaseFormAttribute |
|
BaseFormAttributeImpl |
|
BaseFragmentsBuilder |
|
BaseTokenStreamTestCase |
Base class for all Lucene unit tests that use TokenStreams.
|
BaseTokenStreamTestCase.CheckClearAttributesAttribute |
Attribute that records if it was cleared or not.
|
BaseTokenStreamTestCase.CheckClearAttributesAttributeImpl |
Attribute that records if it was cleared or not.
|
BasicQueryFactory |
|
BasqueAnalyzer |
|
BasqueStemmer |
This class was automatically generated by a Snowball to Java compiler.
It implements the stemming algorithm defined by a snowball script.
|
BeiderMorseFilter |
TokenFilter for Beider-Morse phonetic encoding.
|
Benchmark |
Run the benchmark algorithm.
|
BenchmarkHighlighter |
|
BinaryDictionary |
Base class for a binary-encoded in-memory dictionary.
|
Bits |
Interface for Bitset-like structures.
|
Bits.MatchAllBits |
Bits impl of the specified length with all bits set.
|
Bits.MatchNoBits |
Bits impl of the specified length with no bits set.
|
BitUtil |
A variety of high efficiency bit twiddling routines.
|
BitVector |
Optimized implementation of a vector of bits.
|
BlockGroupingCollector |
|
BooleanClause |
A clause in a BooleanQuery.
|
BooleanClause.Occur |
Specifies how clauses are to occur in matching documents.
|
BooleanFilter |
A container Filter that allows Boolean composition of Filters.
|
BooleanFilterBuilder |
|
BooleanModifierNode |
|
BooleanModifiersQueryNodeProcessor |
|
BooleanQuery |
A Query that matches documents matching boolean combinations of other
queries, e.g.
|
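A small sketch of combining clauses via BooleanClause.Occur (assumes the Lucene 3.x API; the field and term names are made up):

    import org.apache.lucene.index.Term;
    import org.apache.lucene.search.BooleanClause;
    import org.apache.lucene.search.BooleanQuery;
    import org.apache.lucene.search.TermQuery;

    public class BooleanExample {
      public static void main(String[] args) {
        // Require "lucene", prefer "search", and exclude "deprecated" in the body field.
        BooleanQuery query = new BooleanQuery();
        query.add(new TermQuery(new Term("body", "lucene")), BooleanClause.Occur.MUST);
        query.add(new TermQuery(new Term("body", "search")), BooleanClause.Occur.SHOULD);
        query.add(new TermQuery(new Term("body", "deprecated")), BooleanClause.Occur.MUST_NOT);
        System.out.println(query);
      }
    }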
BooleanQuery.TooManyClauses |
|
BooleanQuery2ModifierNodeProcessor |
|
BooleanQueryBuilder |
|
BooleanQueryNode |
A BooleanQueryNode represents a list of elements which do not have an
explicit boolean operator defined between them.
|
BooleanQueryNodeBuilder |
|
BooleanSingleChildOptimizationQueryNodeProcessor |
This processor removes every BooleanQueryNode that contains only one
child and returns this child.
|
BoostAttribute |
Deprecated. |
BoostAttributeImpl |
Deprecated. |
BoostingQuery |
The BoostingQuery class can be used to effectively demote results that match a given query.
|
BoostingQueryBuilder |
|
BoostingTermBuilder |
|
BoostQueryNode |
|
BoostQueryNodeBuilder |
|
BoostQueryNodeProcessor |
|
BoundaryScanner |
|
BrazilianAnalyzer |
Analyzer for Brazilian Portuguese language.
|
BrazilianStemFilter |
|
BrazilianStemmer |
A stemmer for Brazilian Portuguese words.
|
BreakIteratorBoundaryScanner |
|
BufferedIndexInput |
Base implementation class for buffered IndexInput .
|
BufferedIndexOutput |
|
BufferingTermFreqIteratorWrapper |
This wrapper buffers incoming elements.
|
Builder<T> |
Builds a minimal FST (maps an IntsRef term to an arbitrary
output) from pre-sorted terms with outputs.
|
Builder.Arc<T> |
Expert: holds a pending (seen but not yet serialized) arc.
|
Builder.FreezeTail<T> |
Expert: this is invoked by Builder whenever a suffix
is serialized.
|
Builder.UnCompiledNode<T> |
Expert: holds a pending (seen but not yet serialized) Node.
|
BulgarianAnalyzer |
|
BulgarianStemFilter |
|
BulgarianStemmer |
Light Stemmer for Bulgarian.
|
ByteArrayDataInput |
DataInput backed by a byte array.
|
ByteArrayDataOutput |
DataOutput backed by a byte array.
|
ByteBlockPool |
Class that Posting and PostingVector use to write byte
streams into shared fixed-size byte[] arrays.
|
ByteBlockPool.Allocator |
Abstract class for allocating and freeing byte
blocks.
|
ByteBlockPool.DirectAllocator |
|
ByteBlockPool.DirectTrackingAllocator |
|
ByteFieldSource |
Expert: obtains single byte field values from the
FieldCache
using getBytes() and makes those values
available as other numeric types, casting as needed.
|
ByteSequenceOutputs |
An FST Outputs implementation where each output
is a sequence of bytes.
|
BytesRef |
Represents byte[], as a slice (offset + length) into an
existing byte[].
|
BytesRefFSTEnum<T> |
Enumerates all input (BytesRef) + output pairs in an
FST.
|
BytesRefFSTEnum.InputOutput<T> |
Holds a single input (BytesRef) + output pair.
|
BytesRefHash |
|
BytesRefHash.BytesStartArray |
Manages allocation of the per-term addresses.
|
BytesRefHash.DirectBytesStartArray |
|
BytesRefHash.MaxBytesLengthExceededException |
|
BytesRefHash.TrackingDirectBytesStartArray |
|
BytesRefIterator |
A simple iterator interface for BytesRef iteration.
|
BytesRefList |
A simple append only random-access BytesRef array that stores full
copies of the appended bytes in a ByteBlockPool .
|
BytesRefSorter |
Collects BytesRef and then allows one to iterate over their sorted order.
|
ByteVector |
This class implements a simple byte vector with access to the underlying
array.
|
CachedFilterBuilder |
Filters are cached in an LRU Cache keyed on the contained query or filter object.
|
CachingCollector |
Caches all docs, and optionally also scores, coming from
a search, and is then able to replay them to another
collector.
|
CachingSpanFilter |
Wraps another SpanFilter's result and caches it.
|
CachingTokenFilter |
This class can be used if the token attributes of a TokenStream
are intended to be consumed more than once.
|
CachingWrapperFilter |
Wraps another filter's result and caches it.
|
CachingWrapperFilter.DeletesMode |
Expert: Specifies how new deletions against a reopened
reader should be handled.
|
CachingWrapperFilterHelper |
A unit test helper class to test when the filter is getting cached and when it is not.
|
CannedTokenStream |
TokenStream from a canned list of Tokens.
|
CarmelTopKTermPruningPolicy |
Pruning policy with a parameterized search quality guarantee - configuration
of this policy allows one to specify two parameters: k and
ε such that:
|
CarmelTopKTermPruningPolicy.ByDocComparator |
|
CarmelUniformTermPruningPolicy |
Enhanced implementation of Carmel Uniform Pruning.
|
CarmelUniformTermPruningPolicy.ByDocComparator |
|
CartesianPoint |
Deprecated. |
CartesianPolyFilterBuilder |
Deprecated. |
CartesianShapeFilter |
Deprecated. |
CartesianTierPlotter |
Deprecated. |
CatalanAnalyzer |
|
CatalanStemmer |
This class was automatically generated by a Snowball to Java compiler.
It implements the stemming algorithm defined by a snowball script.
|
CategoryAttribute |
An attribute which contains for a certain category the CategoryPath
and additional properties.
|
CategoryAttributeImpl |
|
CategoryAttributesIterable |
|
CategoryAttributesStream |
|
CategoryContainer |
|
CategoryDocumentBuilder |
|
CategoryEnhancement |
This interface allows easy addition of enhanced category features.
|
CategoryListCache |
|
CategoryListData |
Category list data maintained in RAM.
|
CategoryListIterator |
An interface for iterating over a "category list", i.e., the list of
categories per document.
|
CategoryListParams |
Contains parameters for a category list.
|
CategoryListPayloadStream |
Accumulates category IDs for a single document, for writing in byte array
form, for example, to a Lucene Payload.
|
CategoryListTokenizer |
A base class for category list tokenizers, which add category list tokens to
category streams.
|
CategoryParentsStream |
|
CategoryPath |
A CategoryPath holds a sequence of string components, specifying the
hierarchical name of a category.
|
CategoryProperty |
|
CategoryTokenizer |
|
CategoryTokenizerBase |
A base class for all token filters which add term and payload attributes to
tokens and are to be used in CategoryDocumentBuilder .
|
ChainedFilter |
Allows multiple Filter s to be chained.
|
CharacterDefinition |
Character category data.
|
CharacterUtils |
CharacterUtils provides a unified interface to Character-related
operations to implement backwards compatible character operations based on a
Version instance.
|
CharacterUtils.CharacterBuffer |
|
CharArrayIterator |
|
CharArrayMap<V> |
A simple class that stores key Strings as char[]'s in a
hash table.
|
CharArraySet |
A simple class that stores Strings as char[]'s in a
hash table.
|
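For example, a hedged sketch of building a case-insensitive stop-word set (assumes the Lucene 3.x constructor taking a Version, a Collection, and an ignoreCase flag; the words are illustrative):

    import java.util.Arrays;
    import org.apache.lucene.analysis.CharArraySet;
    import org.apache.lucene.util.Version;

    public class StopSetExample {
      public static void main(String[] args) {
        // A case-insensitive set of stop words stored internally as char[] entries.
        CharArraySet stopWords =
            new CharArraySet(Version.LUCENE_36, Arrays.asList("the", "a", "an"), true);
        System.out.println(stopWords.contains("The"));   // true: lookups ignore case
        System.out.println(stopWords.contains("query")); // false
      }
    }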
CharFilter |
Subclasses of CharFilter can be chained to filter CharStream.
|
CharReader |
CharReader is a Reader wrapper.
|
CharsRef |
Represents char[], as a slice (offset + length) into an existing char[].
|
CharStream |
|
CharStream |
This interface describes a character stream that maintains line and
column number positions of the characters.
|
CharStream |
This interface describes a character stream that maintains line and
column number positions of the characters.
|
CharTermAttribute |
The term text of a Token.
|
CharTermAttributeImpl |
The term text of a Token.
|
CharTokenizer |
An abstract base class for simple, character-oriented tokenizers.
|
CharType |
Internal SmartChineseAnalyzer character type constants.
|
CharVector |
This class implements a simple char vector with access to the underlying
array.
|
CheckHits |
Utility class for asserting expected hits in tests.
|
CheckHits.ExplanationAsserter |
Asserts that the score explanation for every document matching a
query corresponds with the true score.
|
CheckHits.ExplanationAssertingSearcher |
An IndexSearcher that implicitly checks the explanation of every match
whenever it executes a search.
|
CheckHits.SetCollector |
Just collects document ids into a set.
|
CheckIndex |
Basic tool and API to check the health of an index and
write a new segments file that removes reference to
problematic segments.
|
CheckIndex.Status |
|
CheckIndex.Status.FieldNormStatus |
Status from testing field norms.
|
CheckIndex.Status.SegmentInfoStatus |
Holds the status of each segment in the index.
|
CheckIndex.Status.StoredFieldStatus |
Status from testing stored fields.
|
CheckIndex.Status.TermIndexStatus |
Status from testing term index.
|
CheckIndex.Status.TermVectorStatus |
Status from testing term vectors.
|
ChecksumIndexInput |
Reads bytes through to a primary IndexInput, computing
checksum as it goes.
|
ChecksumIndexOutput |
Writes bytes through to a primary IndexOutput, computing
checksum.
|
ChineseAnalyzer |
Deprecated.
|
ChineseFilter |
Deprecated.
|
ChineseTokenizer |
Deprecated.
|
ChunksIntEncoder |
|
CJKAnalyzer |
|
CJKBigramFilter |
Forms bigrams of CJK terms that are generated from StandardTokenizer
or ICUTokenizer.
|
CJKTokenizer |
Deprecated.
|
CJKWidthFilter |
A TokenFilter that normalizes CJK width differences: folds fullwidth ASCII variants
into the equivalent basic Latin, and folds halfwidth Katakana variants into the
equivalent kana.
|
Cl2oTaxonomyWriterCache |
|
ClassicAnalyzer |
|
ClassicFilter |
|
ClassicTokenizer |
A grammar-based tokenizer constructed with JFlex
|
ClearStatsTask |
Clear statistics data.
|
CloseableThreadLocal<T> |
Java's builtin ThreadLocal has a serious flaw:
it can take an arbitrarily long amount of time to
dereference the things you had stored in it, even once the
ThreadLocal instance itself is no longer referenced.
|
CloseIndexTask |
Close index writer.
|
CloseReaderTask |
Close index reader.
|
CloseTaxonomyIndexTask |
Close taxonomy index.
|
CloseTaxonomyReaderTask |
Close taxonomy reader.
|
CodecUtil |
Utility class for reading and writing versioned headers.
|
CollationKeyAnalyzer |
|
CollationKeyFilter |
|
CollationTestBase |
Base test class for testing Unicode collation.
|
CollectionUtil |
Methods for manipulating (sorting) collections.
|
Collector |
Expert: Collectors are primarily meant to be used to
gather raw results from a search, and implement sorting
or custom result filtering, collation, etc.
|
CollisionMap |
HashMap to store colliding labels.
|
CommandLineUtil |
Class containing some useful methods used by command line tools
|
CommitIndexTask |
Commits the IndexWriter.
|
CommitTaxonomyIndexTask |
Commits the Taxonomy Index.
|
CompactLabelToOrdinal |
This is a very efficient LabelToOrdinal implementation that uses a
CharBlockArray to store all labels and a configurable number of HashArrays to
reference the labels.
|
Compile |
The Compile class is used to compile a stemmer table.
|
ComplementCountingAggregator |
|
ComplexExplanation |
Expert: Describes the score computation for document and query, and
can distinguish a match independent of a positive value.
|
ComplexPhraseQueryParser |
QueryParser which permits complex phrase query syntax eg "(john jon
jonathan~) peters*".
|
ComposedQuery |
|
CompoundFileExtractor |
Command-line tool for extracting sub-files out of a compound file.
|
CompoundFileWriter |
Combines multiple files into a single compound file.
|
CompoundWordTokenFilterBase |
Base class for decomposition token filters.
|
CompressionTools |
Simple utility class providing static methods to
compress and decompress binary data for stored fields.
|
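A minimal round-trip sketch (assumes the compressString/decompressString static methods of the Lucene 3.x API; the sample text is arbitrary):

    import org.apache.lucene.document.CompressionTools;

    public class CompressExample {
      public static void main(String[] args) throws Exception {
        // Compress a string for storage, then restore it.
        byte[] compressed = CompressionTools.compressString("a long stored-field value ...");
        String restored = CompressionTools.decompressString(compressed);
        System.out.println(restored.length() + " chars restored");
      }
    }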
ConcurrentMergeScheduler |
|
Config |
Perf run configuration properties.
|
ConfigAttribute |
Deprecated. |
ConfigurationKey<T> |
An instance of this class represents a key that is used to retrieve a value
from AbstractQueryConfig .
|
ConnectionCosts |
n-gram connection cost data
|
Constants |
Various benchmarking constants (mostly defaults)
|
Constants |
Some useful constants.
|
ConstantScoreQuery |
A query that wraps another query or a filter and simply returns a constant score equal to the
query boost for every document that matches the filter or query.
|
ConstantScoreQueryBuilder |
|
ConsumeContentSourceTask |
|
ContentItemsSource |
Base class for source of data for benchmarking
|
ContentSource |
Represents content from a specified source, such as TREC, Reuters etc.
|
CoreParser |
Assembles a QueryBuilder which uses only core Lucene Query objects
|
CorePlusExtensionsParser |
|
CorruptIndexException |
This exception is thrown when Lucene detects
an inconsistency in the index.
|
Counter |
Simple counter class
|
CountFacetRequest |
Facet request for counting facets.
|
CountingAggregator |
A CountingAggregator updates a counter array with the size of the whole
taxonomy, counting the number of times each category appears in the given set
of documents.
|
CountingListTokenizer |
|
CreateIndexTask |
Create an index.
|
CreateTaxonomyIndexTask |
Create a taxonomy index.
|
CSVUtil |
Utility class for parsing CSV text
|
CustomScoreProvider |
|
CustomScoreQuery |
Query that sets the document score as a programmatic function of several (sub) scores:
the score of its subQuery (any query), and optionally the score of its
ValueSourceQuery (or queries).
|
CzechAnalyzer |
|
CzechStemFilter |
|
CzechStemmer |
Light Stemmer for Czech.
|
DanishAnalyzer |
|
DanishStemmer |
Generated class implementing code defined by a snowball script.
|
DataInput |
Abstract base class for performing read operations of Lucene's low-level
data types.
|
DataOutput |
Abstract base class for performing write operations of Lucene's low-level
data types.
|
DateField |
Deprecated.
|
DateRecognizerSinkFilter |
|
DateResolutionAttribute |
Deprecated. |
DateResolutionAttributeImpl |
Deprecated. |
DateTools |
Provides support for converting dates to strings and vice-versa.
|
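A short round-trip sketch (assumes the Lucene 3.x dateToString/stringToDate methods; Resolution.DAY is an illustrative choice):

    import java.util.Date;
    import org.apache.lucene.document.DateTools;

    public class DateToolsExample {
      public static void main(String[] args) throws Exception {
        // Encode a date as an index-friendly, lexicographically sortable string ...
        String indexed = DateTools.dateToString(new Date(), DateTools.Resolution.DAY);
        // ... and turn it back into a Date when reading from the index.
        Date roundTripped = DateTools.stringToDate(indexed);
        System.out.println(indexed + " -> " + roundTripped);
      }
    }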
DateTools.Resolution |
Specifies the time granularity.
|
DefaultEncoder |
Simple Encoder implementation that does not modify the output
|
DefaultEnhancementsIndexingParams |
|
DefaultFacetIndexingParams |
|
DefaultICUTokenizerConfig |
|
DefaultOperatorAttribute |
Deprecated. |
DefaultOperatorAttribute.Operator |
|
DefaultOperatorAttributeImpl |
Deprecated. |
DefaultOrdinalPolicy |
This class filters out the ROOT category ID.
|
DefaultPathPolicy |
This class filters out the ROOT category path.
|
DefaultPhraseSlopAttribute |
Deprecated. |
DefaultPhraseSlopAttributeImpl |
Deprecated. |
DefaultPhraseSlopQueryNodeProcessor |
|
DefaultSimilarity |
Expert: Default scoring implementation.
|
DeleteByPercentTask |
Deletes a percentage of documents from an index randomly
over the number of documents.
|
DeleteDocTask |
Delete a document by docid.
|
DeletedQueryNode |
|
DelimitedPayloadTokenFilter |
Characters before the delimiter are the "token", those after are the payload.
|
DemoHTMLParser |
HTML Parser that is based on Lucene's demo HTML parser.
|
DGapIntDecoder |
|
DGapIntEncoder |
An IntEncoderFilter which encodes the gap between the given values,
rather than the values themselves.
|
Dictionary |
Dictionary interface for retrieving morphological data
by id.
|
Dictionary |
A simple interface representing a Dictionary.
|
DictionaryCompoundWordTokenFilter |
A TokenFilter that decomposes compound words found in many Germanic languages.
|
Diff |
The Diff object generates a patch string.
|
DiffIt |
The DiffIt class is a means to generate patch commands from an already prepared
stemmer table.
|
DirContentSource |
|
DirContentSource.Iterator |
Iterator over the files in the directory
|
DirectIOLinuxDirectory |
A Directory implementation that uses the
Linux-specific O_DIRECT flag to bypass all OS level
caching.
|
Directory |
A Directory is a flat list of files.
|
DirectoryTaxonomyReader |
|
DirectoryTaxonomyWriter |
TaxonomyWriter which uses a Directory to store the taxonomy
information on disk, and keeps an additional in-memory cache of some or all
categories.
|
DirectoryTaxonomyWriter.DiskOrdinalMap |
|
DirectoryTaxonomyWriter.MemoryOrdinalMap |
|
DirectoryTaxonomyWriter.OrdinalMap |
Mapping from old ordinal to new ordinals, used when merging indexes
with separate taxonomies.
|
DisjunctionMaxQuery |
A query that generates the union of documents produced by its subqueries, and that scores each document with the maximum
score for that document as produced by any subquery, plus a tie breaking increment for any additional matching subqueries.
|
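An illustrative sketch (assumes the Lucene 3.x API; the 0.1f tie-breaker and the field/term names are arbitrary):

    import org.apache.lucene.index.Term;
    import org.apache.lucene.search.DisjunctionMaxQuery;
    import org.apache.lucene.search.TermQuery;

    public class DisMaxExample {
      public static void main(String[] args) {
        // Score each document by its best-matching subquery, plus a small
        // tie-break (0.1) for every additional subquery that also matches.
        DisjunctionMaxQuery query = new DisjunctionMaxQuery(0.1f);
        query.add(new TermQuery(new Term("title", "albino")));
        query.add(new TermQuery(new Term("body", "albino")));
        System.out.println(query);
      }
    }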
DistanceApproximation |
Deprecated.
|
DistanceFieldComparatorSource |
Deprecated. |
DistanceFilter |
Deprecated. |
DistanceHandler |
Deprecated. |
DistanceHandler.Precision |
|
DistanceQuery |
|
DistanceQueryBuilder |
Deprecated. |
DistanceSubQuery |
|
DistanceUnits |
Deprecated. |
DistanceUtils |
Deprecated. |
DocData |
Output of parsing (e.g.
|
DocIdBitSet |
Simple DocIdSet and DocIdSetIterator backed by a BitSet
|
DocIdSet |
A DocIdSet contains a set of doc ids.
|
DocIdSetIterator |
This abstract class defines methods to iterate over a set of non-decreasing
doc ids.
|
DocMaker |
|
DocNameExtractor |
Utility: extract doc names from an index
|
Document |
Documents are the unit of indexing and search.
|
DocValues |
Expert: represents field values as different types.
|
DOMUtils |
|
DoubleBarrelLRUCache<K extends DoubleBarrelLRUCache.CloneableKey,V> |
Simple concurrent LRU cache, using a "double barrel"
approach where two ConcurrentHashMaps record entries.
|
DoubleBarrelLRUCache.CloneableKey |
Object providing clone(); the key class must subclass this.
|
DoubleIterator |
Iterator interface for primitive double iteration.
|
DoubleMetaphoneFilter |
Filter for DoubleMetaphone (supporting secondary codes)
|
DrillDown |
Creation of drill down term or query.
|
DummyConcurrentLock |
A dummy lock as a replacement for ReentrantLock to disable locking
|
DummyQueryNodeBuilder |
This builder does nothing.
|
DuplicateFilter |
|
DuplicateFilterBuilder |
|
DutchAnalyzer |
|
DutchStemFilter |
Deprecated.
|
DutchStemmer |
Deprecated.
|
DutchStemmer |
Generated class implementing code defined by a snowball script.
|
EdgeNGramTokenFilter |
Tokenizes the given token into n-grams of given size(s).
|
EdgeNGramTokenFilter.Side |
Specifies which side of the input the n-gram should be generated from
|
EdgeNGramTokenizer |
Tokenizes the input from an edge into n-grams of given size(s).
|
EdgeNGramTokenizer.Side |
Specifies which side of the input the n-gram should be generated from
|
EightFlagsIntDecoder |
|
EightFlagsIntEncoder |
|
ElisionFilter |
|
Ellipse |
Deprecated. |
EmptyTokenizer |
Emits no tokens
|
EmptyTokenStream |
An always exhausted token stream.
|
Encoder |
Encodes original text.
|
English |
|
EnglishAnalyzer |
|
EnglishMinimalStemFilter |
|
EnglishMinimalStemmer |
Minimal plural stemmer for English.
|
EnglishPossessiveFilter |
TokenFilter that removes possessives (trailing 's) from words.
|
EnglishStemmer |
Generated class implementing code defined by a snowball script.
|
EnhancementsCategoryTokenizer |
|
EnhancementsDocumentBuilder |
|
EnhancementsIndexingParams |
|
EnhancementsPayloadIterator |
|
Entities |
|
EnwikiContentSource |
|
EnwikiQueryMaker |
A QueryMaker that uses common and uncommon actual Wikipedia queries for
searching the English Wikipedia collection.
|
EscapeQuerySyntax |
A parser needs to implement EscapeQuerySyntax to allow the QueryNode
to escape the queries, when the toQueryString method is called.
|
EscapeQuerySyntax.Type |
|
EscapeQuerySyntaxImpl |
|
Explanation |
Expert: Describes the score computation for document and query.
|
Explanation.IDFExplanation |
Small Util class used to pass both an idf factor as well as an
explanation for that factor.
|
ExtendableQueryParser |
The ExtendableQueryParser enables arbitrary query parser extension
based on a customizable field naming scheme.
|
ExtensionQuery |
ExtensionQuery holds all query components extracted from the original
query string like the query field and the extension query string.
|
Extensions |
|
Extensions.Pair<Cur,Cud> |
This class represents a generic pair.
|
ExternalRefSorter |
Builds and iterates over sequences stored on disk.
|
ExtractReuters |
Split the Reuters SGML documents into Simple Text files containing: Title,
Date, Dateline, Body
|
ExtractWikipedia |
Extract the downloaded Wikipedia dump into separate files for indexing.
|
FacetArrays |
Provider of arrays used for facet operations such as counting.
|
FacetException |
A parent class for exceptions thrown by the Facets code.
|
FacetIndexingParams |
Parameters on how facets are to be written to the index.
|
FacetParamsMissingPropertyException |
Thrown when the facets params are missing a property.
|
FacetRequest |
Request to accumulate facet information for a specified facet and possibly
also some of its descendants, up to a specified depth.
|
FacetRequest.ResultMode |
|
FacetRequest.SortBy |
Sort options for facet results.
|
FacetRequest.SortOrder |
Requested sort order for the results.
|
FacetResult |
Result of faceted search.
|
FacetResultNode |
Result of faceted search for a certain taxonomy node.
|
FacetResultsHandler |
Handler for facet results.
|
FacetsAccumulator |
Driver for accumulating facets of faceted search requests over given
documents.
|
FacetsCollector |
Collector for facet accumulation.
|
FacetSearchParams |
Faceted search parameters indicate for which facets info should be gathered.
|
FacetSource |
Source items for facets.
|
FacetsPayloadProcessorProvider |
|
FacetsPayloadProcessorProvider.FacetsDirPayloadProcessor |
|
FacetsPayloadProcessorProvider.FacetsPayloadProcessor |
A PayloadProcessor for updating facets ordinal references, based on an ordinal map
|
FastCharStream |
An efficient implementation of JavaCC's CharStream interface.
|
FastCharStream |
An efficient implementation of JavaCC's CharStream interface.
|
FastVectorHighlighter |
Another highlighter implementation.
|
Field |
A field is a section of a Document.
|
Field.Index |
Specifies whether and how a field should be indexed.
|
Field.Store |
Specifies whether and how a field should be stored.
|
Field.TermVector |
Specifies whether and how a field should have term vectors.
|
Fieldable |
|
FieldableNode |
A query node implements FieldableNode interface to indicate that its
children and itself are associated to a specific field.
|
FieldBoostMapAttribute |
Deprecated. |
FieldBoostMapAttributeImpl |
Deprecated. |
FieldBoostMapFCListener |
|
FieldCache |
Expert: Maintains caches of term values.
|
FieldCache.ByteParser |
Interface to parse bytes from document fields.
|
FieldCache.CacheEntry |
EXPERT: A unique Identifier/Description for each item in the FieldCache.
|
FieldCache.CreationPlaceholder |
|
FieldCache.DoubleParser |
Interface to parse doubles from document fields.
|
FieldCache.FloatParser |
Interface to parse floats from document fields.
|
FieldCache.IntParser |
Interface to parse ints from document fields.
|
FieldCache.LongParser |
Interface to parse long from document fields.
|
FieldCache.Parser |
Marker interface as super-interface to all parsers.
|
FieldCache.ShortParser |
Interface to parse shorts from document fields.
|
FieldCache.StringIndex |
Expert: Stores term text values and document ordering data.
|
FieldCacheDocIdSet |
Base class for DocIdSet to be used with FieldCache.
|
FieldCacheRangeFilter<T> |
A range filter built on top of a cached single term field (in FieldCache ).
|
FieldCacheSanityChecker |
Provides methods for sanity checking that entries in the FieldCache
are not wasteful or inconsistent.
|
FieldCacheSanityChecker.Insanity |
Simple container for a collection of related CacheEntry objects that
in conjunction with each other represent some "insane" usage of the
FieldCache.
|
FieldCacheSanityChecker.InsanityType |
An Enumeration of the different types of "insane" behavior that
may be detected in a FieldCache.
|
FieldCacheSource |
Expert: A base class for ValueSource implementations that retrieve values for
a single field from the FieldCache .
|
FieldCacheTermsFilter |
A Filter that only accepts documents whose single
term value in the specified field is contained in the
provided set of allowed terms.
|
FieldComparator<T> |
Expert: a FieldComparator compares hits so as to determine their
sort order when collecting the top results with TopFieldCollector .
|
FieldComparator.ByteComparator |
|
FieldComparator.DocComparator |
Sorts by ascending docID
|
FieldComparator.DoubleComparator |
|
FieldComparator.FloatComparator |
|
FieldComparator.IntComparator |
|
FieldComparator.LongComparator |
|
FieldComparator.NumericComparator<T extends Number> |
|
FieldComparator.RelevanceComparator |
Sorts by descending relevance.
|
FieldComparator.ShortComparator |
|
FieldComparator.StringComparatorLocale |
Sorts by a field's value using the Collator for a
given Locale.
|
FieldComparator.StringOrdValComparator |
Sorts by field's natural String sort order, using
ordinals.
|
FieldComparator.StringValComparator |
Sorts by field's natural String sort order.
|
FieldComparatorSource |
|
FieldConfig |
This class represents a field configuration.
|
FieldConfigListener |
This interface should be implemented by classes that want to listen for
field configuration requests.
|
FieldDateResolutionFCListener |
|
FieldDateResolutionMapAttribute |
Deprecated. |
FieldDateResolutionMapAttributeImpl |
Deprecated. |
FieldDoc |
Expert: A ScoreDoc which also contains information about
how to sort the referenced document.
|
FieldFragList |
FieldFragList has a list of "frag info" that is used by FragmentsBuilder class
to create fragments (snippets).
|
FieldFragList.WeightedFragInfo |
|
FieldFragList.WeightedFragInfo.SubInfo |
|
FieldInfo |
Access to the Fieldable Info file that describes document fields and whether or
not they are indexed.
|
FieldInfo.IndexOptions |
Controls how much information is stored in the postings lists.
|
FieldInfos |
Collection of FieldInfo s (accessible by number or by name).
|
FieldInvertState |
This class tracks the number and position / offset parameters of terms
being added to the index.
|
FieldMaskingSpanQuery |
Wrapper to allow SpanQuery objects to participate in composite
single-field SpanQueries by 'lying' about their search field.
|
FieldNormModifier |
Deprecated.
|
FieldPhraseList |
FieldPhraseList has a list of WeightedPhraseInfo that is used by FragListBuilder
to create a FieldFragList object.
|
FieldPhraseList.WeightedPhraseInfo |
|
FieldPhraseList.WeightedPhraseInfo.Toffs |
|
FieldQuery |
FieldQuery breaks down a query object into terms/phrases and keeps
them in a QueryPhraseMap structure.
|
FieldQuery.QueryPhraseMap |
|
FieldQueryNode |
|
FieldQueryNodeBuilder |
|
FieldReaderException |
Exception thrown when stored fields have an unexpected format.
|
FieldScoreQuery |
A query that scores each document as the value of the numeric input field.
|
FieldScoreQuery.Type |
Type of score field, indicating how field values are interpreted/parsed.
|
FieldSelector |
|
FieldSelectorResult |
Provides information about what should be done with this Field
|
FieldSortedTermVectorMapper |
|
FieldsQuery |
|
FieldTermStack |
FieldTermStack is a stack that keeps query terms in the specified field
of the document to be highlighted.
|
FieldTermStack.TermInfo |
|
FieldValueFilter |
A Filter that accepts all documents that have one or more values in a
given field.
|
FieldValueHitQueue<T extends FieldValueHitQueue.Entry> |
Expert: A hit queue for sorting hits by terms in more than one field.
|
FieldValueHitQueue.Entry |
|
FieldValuePairQueryNode<T> |
This interface should be implemented by QueryNode that holds a field
and an arbitrary value.
|
FileBasedQueryMaker |
Create queries from a FileReader.
|
FileDictionary |
Dictionary represented by a text file.
|
FileSwitchDirectory |
Expert: A Directory instance that switches files between
two other Directory instances.
|
FileUtils |
File utilities.
|
Filter |
Abstract base class for restricting which documents may
be returned during searching.
|
FilterBuilder |
|
FilterBuilderFactory |
|
FilterClause |
A Filter wrapped with an indication of how that filter
is used when composed with another filter.
|
FilteredDocIdSet |
Abstract decorator class for a DocIdSet implementation
that provides on-demand filtering/validation
mechanism on a given DocIdSet.
|
FilteredDocIdSetIterator |
Abstract decorator class of a DocIdSetIterator
implementation that provides on-demand filter/validation
mechanism on an underlying DocIdSetIterator.
|
FilteredQuery |
A query that applies a filter to the results of another query.
|
FilteredQueryBuilder |
|
FilteredTermEnum |
Abstract class for enumerating a subset of all terms.
|
FilterIndexReader |
A FilterIndexReader contains another IndexReader, which it
uses as its basic source of data, possibly transforming the data along the
way or providing additional functionality.
|
FilterIndexReader.FilterTermDocs |
Base class for filtering TermDocs implementations.
|
FilterIndexReader.FilterTermEnum |
Base class for filtering TermEnum implementations.
|
FilterIndexReader.FilterTermPositions |
|
FilteringTokenFilter |
Abstract base class for TokenFilters that may remove tokens.
|
FilterManager |
Deprecated.
|
FinnishAnalyzer |
|
FinnishLightStemFilter |
|
FinnishLightStemmer |
Light Stemmer for Finnish.
|
FinnishStemmer |
Generated class implementing code defined by a snowball script.
|
FixedBitSet |
BitSet of fixed length (numBits), backed by accessible
( FixedBitSet.getBits() ) long[], accessed with an int index,
implementing Bits and DocIdSet.
|
FixedLatLng |
Deprecated. |
FlagsAttribute |
This attribute can be used to pass different flags down the Tokenizer chain,
eg from one TokenFilter to another one.
|
FlagsAttributeImpl |
This attribute can be used to pass different flags down the tokenizer chain,
eg from one TokenFilter to another one.
|
FloatArrayAllocator |
A FloatArrayAllocator is an object which manages float array objects
of a certain size.
|
FloatEncoder |
Encode a character array Float as a Payload .
|
FloatFieldSource |
Expert: obtains float field values from the
FieldCache
using getFloats() and makes those values
available as other numeric types, casting as needed.
|
FloatIterator |
Iterator interface for primitive float iteration.
|
FloatLatLng |
Deprecated. |
FloatToObjectMap<T> |
An Array-based hashtable which maps primitive float to Objects of generic type
T.
The hashtable is constructed with a given capacity, or 16 as a default.
|
FlushReaderTask |
Commits via IndexReader.
|
ForceMergeTask |
Runs forceMerge on the index.
|
Format |
Formatting utilities (for reports).
|
Formatter |
Processes terms found in the original text, typically by applying some form
of mark-up to highlight terms in HTML search results pages.
|
FourFlagsIntDecoder |
|
FourFlagsIntEncoder |
|
FragListBuilder |
FragListBuilder is an interface for FieldFragList builder classes.
|
Fragmenter |
Implements the policy for breaking text into multiple fragments for
consideration by the Highlighter class.
|
FragmentsBuilder |
|
FrenchAnalyzer |
|
FrenchLightStemFilter |
|
FrenchLightStemmer |
Light Stemmer for French.
|
FrenchMinimalStemFilter |
|
FrenchMinimalStemmer |
Light Stemmer for French.
|
FrenchStemFilter |
Deprecated.
|
FrenchStemmer |
Deprecated.
|
FrenchStemmer |
Generated class implementing code defined by a snowball script.
|
FSDirectory |
Base class for Directory implementations that store index
files in the file system.
|
FSDirectory.FSIndexOutput |
|
FSLockFactory |
Base class for file system based locking implementation.
|
FST<T> |
Represents a finite state machine (FST), using a
compact byte[] format.
|
FST.Arc<T> |
Represents a single arc.
|
FST.BytesReader |
Reads the bytes from this FST.
|
FST.INPUT_TYPE |
Specifies allowed range of each int input label for
this FST.
|
FSTCompletion |
Finite state automata based implementation of "autocomplete" functionality.
|
FSTCompletion.Completion |
A single completion for a given key.
|
FSTCompletionBuilder |
Finite state automata based implementation of "autocomplete" functionality.
|
FSTCompletionLookup |
|
FSTLookup |
Deprecated.
|
FuzzyAttribute |
Deprecated. |
FuzzyAttributeImpl |
Deprecated. |
FuzzyConfig |
|
FuzzyLikeThisQuery |
Fuzzifies ALL terms provided as strings and then picks the best n differentiating terms.
|
FuzzyLikeThisQueryBuilder |
|
FuzzyQuery |
Implements the fuzzy search query.
|
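A brief sketch (assumes the Lucene 3.x constructor taking a Term and a minimum-similarity float; field, term, and the 0.7f threshold are illustrative):

    import org.apache.lucene.index.Term;
    import org.apache.lucene.search.FuzzyQuery;

    public class FuzzyExample {
      public static void main(String[] args) {
        // Matches terms similar to "smith" (e.g. "smyth") in the "name" field;
        // 0.7f is the minimum similarity a term must reach to match.
        FuzzyQuery query = new FuzzyQuery(new Term("name", "smith"), 0.7f);
        System.out.println(query);
      }
    }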
FuzzyQueryNode |
A FuzzyQueryNode represents an element that contains a
field/text/similarity tuple.
|
FuzzyQueryNodeBuilder |
|
FuzzyQueryNodeProcessor |
|
FuzzyTermEnum |
Subclass of FilteredTermEnum for enumerating all terms that are similar
to the specified filter term.
|
GalicianAnalyzer |
|
GalicianMinimalStemFilter |
|
GalicianMinimalStemmer |
Minimal Stemmer for Galician
|
GalicianStemFilter |
|
GalicianStemmer |
Galician stemmer implementing "Regras do lematizador para o galego".
|
Gener |
The Gener object helps in the discarding of nodes which break the reduction
effort and defend the structure against large reductions.
|
GeoHashDistanceFilter |
Deprecated. |
GeoHashUtils |
Deprecated. |
Geometry2D |
Deprecated. |
German2Stemmer |
Generated class implementing code defined by a snowball script.
|
GermanAnalyzer |
|
GermanLightStemFilter |
|
GermanLightStemmer |
Light Stemmer for German.
|
GermanMinimalStemFilter |
|
GermanMinimalStemmer |
Minimal Stemmer for German.
|
GermanNormalizationFilter |
|
GermanStemFilter |
|
GermanStemmer |
A stemmer for German words.
|
GermanStemmer |
Generated class implementing code defined by a snowball script.
|
GetTermInfo |
Utility to get document frequency and total number of occurrences (sum of the tf for each doc) of a term.
|
GradientFormatter |
Formats text with different color intensity depending on the score of the
term.
|
GraphvizFormatter |
Outputs the dot (graphviz) string for the viterbi lattice.
|
GreekAnalyzer |
|
GreekLowerCaseFilter |
Normalizes token text to lower case, removes some Greek diacritics,
and standardizes final sigma to sigma.
|
GreekStemFilter |
|
GreekStemmer |
A stemmer for Greek words, according to: Development of a Stemmer for the
Greek Language. Georgios Ntais
|
GroupDocs<GROUP_VALUE_TYPE> |
Represents one group in the results.
|
GroupQueryNode |
A GroupQueryNode represents a location where the original user typed
real parenthesis on the query string.
|
GroupQueryNodeBuilder |
|
GroupQueryNodeProcessor |
Deprecated.
|
GrowableWriter |
Implements PackedInts.Mutable , but grows the
bit count of the underlying packed ints on-demand.
|
Heap<T> |
Declares an interface for heap (and heap alike) structures,
handling a given type T
|
HHMMSegmenter |
Finds the optimal segmentation of a sentence into Chinese words
|
HighFreqTerms |
HighFreqTerms class extracts the top n most frequent terms
(by document frequency ) from an existing Lucene index and reports their
document frequency.
|
HighFrequencyDictionary |
HighFrequencyDictionary: terms taken from the given field
of a Lucene index, which appear in a number of documents
above a given threshold.
|
Highlighter |
|
HindiAnalyzer |
Analyzer for Hindi.
|
HindiNormalizationFilter |
|
HindiNormalizer |
Normalizer for Hindi.
|
HindiStemFilter |
|
HindiStemmer |
Light Stemmer for Hindi.
|
HTMLParser |
|
HTMLParser |
HTML Parsing Interface for test purposes
|
HTMLParserConstants |
Token literal values and constants.
|
HTMLParserTokenManager |
Token Manager.
|
HTMLStripCharFilter |
A CharFilter that wraps another Reader and attempts to strip out HTML constructs.
|
HungarianAnalyzer |
|
HungarianLightStemFilter |
|
HungarianLightStemmer |
Light Stemmer for Hungarian.
|
HungarianStemmer |
Generated class implementing code defined by a snowball script.
|
HunspellAffix |
Wrapper class representing a hunspell affix
|
HunspellDictionary |
In-memory structure for the dictionary (.dic) and affix (.aff)
data of a hunspell dictionary.
|
HunspellStemFilter |
TokenFilter that uses hunspell affix rules and words to stem tokens.
|
HunspellStemmer |
HunspellStemmer uses the affix rules declared in the HunspellDictionary to generate one or more stems for a word.
|
HunspellStemmer.Stem |
Stem represents all information known about a stem of a word.
|
HunspellWord |
A dictionary (.dic) entry with its associated flags.
|
Hyphen |
This class represents a hyphen.
|
Hyphenation |
This class represents a hyphenated word.
|
HyphenationCompoundWordTokenFilter |
A TokenFilter that decomposes compound words found in many Germanic languages.
|
HyphenationException |
This class has been taken from the Apache FOP project (http://xmlgraphics.apache.org/fop/).
|
HyphenationTree |
This tree structure stores the hyphenation patterns in an efficient way for
fast lookup.
|
ICUCollationKeyAnalyzer |
|
ICUCollationKeyFilter |
Converts each token into its CollationKey , and
then encodes the CollationKey with IndexableBinaryStringTools , to
allow it to be stored as an index term.
|
ICUFoldingFilter |
A TokenFilter that applies search term folding to Unicode text,
applying foldings from UTR#30 Character Foldings.
|
ICUNormalizer2Filter |
Normalize token text with ICU's Normalizer2
|
ICUTokenizer |
Breaks text into words according to UAX #29: Unicode Text Segmentation
(http://www.unicode.org/reports/tr29/)
|
ICUTokenizerConfig |
Class that allows for tailored Unicode Text Segmentation on
a per-writing system basis.
|
ICUTransformFilter |
|
IdentityEncoder |
Does nothing other than convert the char array to a byte array using the specified encoding.
|
InconsistentTaxonomyException |
Exception indicating that a certain operation could not be performed
on a taxonomy related object because of an inconsistency.
|
IndexableBinaryStringTools |
Provides support for converting byte sequences to Strings and back again.
|
IndexCommit |
|
IndexDeletionPolicy |
|
IndexFileNameFilter |
Filename filter that accept filenames and extensions only created by Lucene.
|
IndexFileNames |
This class contains useful constants representing filenames and extensions
used by lucene, as well as convenience methods for querying whether a file
name matches an extension (matchesExtension), as well as generating file names
from a segment name, generation and extension (fileNameFromGeneration,
segmentFileName).
|
IndexFiles |
Index all text files under a directory.
|
IndexFormatTooNewException |
This exception is thrown when Lucene detects
an index that is newer than this Lucene version.
|
IndexFormatTooOldException |
This exception is thrown when Lucene detects
an index that is too old for this Lucene version
|
IndexInput |
Abstract base class for input from a file in a Directory .
|
IndexMergeTool |
Merges indices specified on the command line into the index
specified as the first command line argument.
|
IndexNotFoundException |
Signals that no index was found in the Directory.
|
IndexOutput |
Abstract base class for output to a file in a Directory.
|
IndexReader |
IndexReader is an abstract class, providing an interface for accessing an
index.
|
IndexReader.ReaderClosedListener |
A custom listener that's invoked when the IndexReader
is closed.
|
IndexSearcher |
Implements search over a single IndexReader.
|
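A minimal search sketch, assuming the Lucene 3.x API; the index path, field, and term are hypothetical:

    import java.io.File;
    import org.apache.lucene.document.Document;
    import org.apache.lucene.index.IndexReader;
    import org.apache.lucene.index.Term;
    import org.apache.lucene.search.IndexSearcher;
    import org.apache.lucene.search.ScoreDoc;
    import org.apache.lucene.search.TermQuery;
    import org.apache.lucene.search.TopDocs;
    import org.apache.lucene.store.Directory;
    import org.apache.lucene.store.FSDirectory;

    public class SearchExample {
      public static void main(String[] args) throws Exception {
        Directory dir = FSDirectory.open(new File("/tmp/example-index")); // hypothetical path
        IndexReader reader = IndexReader.open(dir);
        IndexSearcher searcher = new IndexSearcher(reader);
        TopDocs hits = searcher.search(new TermQuery(new Term("title", "hello")), 10);
        for (ScoreDoc hit : hits.scoreDocs) {
          Document doc = searcher.doc(hit.doc);   // load the matching document
          System.out.println(hit.score + " " + doc.get("title"));
        }
        searcher.close();
        reader.close();
        dir.close();
      }
    }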
IndexSorter |
Sort an index by document importance factor.
|
IndexSplitter |
Command-line tool that enables listing segments in an
index, copying specific segments to another index, and
deleting segments from an index.
|
IndexUpgrader |
This is an easy-to-use tool that upgrades all segments of an index from previous Lucene versions
to the current segment file format.
|
IndexWriter |
An IndexWriter creates and maintains an index.
|
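A minimal indexing sketch, assuming the Lucene 3.x API (IndexWriterConfig, Field.Store/Field.Index); the index path, field names, and values are hypothetical:

    import java.io.File;
    import org.apache.lucene.analysis.standard.StandardAnalyzer;
    import org.apache.lucene.document.Document;
    import org.apache.lucene.document.Field;
    import org.apache.lucene.index.IndexWriter;
    import org.apache.lucene.index.IndexWriterConfig;
    import org.apache.lucene.store.Directory;
    import org.apache.lucene.store.FSDirectory;
    import org.apache.lucene.util.Version;

    public class IndexExample {
      public static void main(String[] args) throws Exception {
        Directory dir = FSDirectory.open(new File("/tmp/example-index")); // hypothetical path
        IndexWriterConfig config =
            new IndexWriterConfig(Version.LUCENE_36, new StandardAnalyzer(Version.LUCENE_36));
        IndexWriter writer = new IndexWriter(dir, config);

        Document doc = new Document();
        doc.add(new Field("title", "Hello Lucene", Field.Store.YES, Field.Index.ANALYZED));
        doc.add(new Field("body", "An IndexWriter creates and maintains an index.",
                          Field.Store.NO, Field.Index.ANALYZED));
        writer.addDocument(doc);

        writer.close();   // commits pending changes and releases the write lock
        dir.close();
      }
    }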
IndexWriter.IndexReaderWarmer |
If IndexWriter.getReader() has been called (ie, this writer
is in near real-time mode), then after a merge
completes, this class can be invoked to warm the
reader on the newly merged segment, before the merge
commits.
|
IndexWriter.MaxFieldLength |
Deprecated.
|
IndexWriterConfig |
|
IndexWriterConfig.OpenMode |
|
IndicNormalizationFilter |
|
IndicNormalizer |
Normalizes the Unicode representation of text in Indian languages.
|
IndicTokenizer |
Deprecated.
|
IndonesianAnalyzer |
Analyzer for Indonesian (Bahasa)
|
IndonesianStemFilter |
|
IndonesianStemmer |
Stemmer for Indonesian.
|
InflectionAttribute |
Attribute for Kuromoji inflection data.
|
InflectionAttributeImpl |
Attribute for Kuromoji inflection data.
|
InMemorySorter |
|
InputStreamDataInput |
|
InstantiatedDocument |
A document in the instantiated index object graph, optionally coupled to the vector space view.
|
InstantiatedIndex |
Deprecated.
|
InstantiatedIndexReader |
Deprecated.
|
InstantiatedIndexWriter |
Deprecated.
|
InstantiatedTerm |
A term in the inverted index, coupled to the documents it occurs in.
|
InstantiatedTermDocs |
|
InstantiatedTermDocumentInformation |
There is one instance of this class per indexed term in a document
and it contains the meta data about each occurrence of a term in a document.
|
InstantiatedTermEnum |
|
InstantiatedTermFreqVector |
|
InstantiatedTermPositions |
|
InstantiatedTermPositionVector |
|
IntArray |
A Class wrapper for a grow-able int[] which can be sorted and intersected with
other IntArrays.
|
IntArrayAllocator |
An IntArrayAllocator is an object which manages counter array objects
of a certain length.
|
IntDecoder |
|
IntegerEncoder |
Encode a character array Integer as a Payload .
|
IntEncoder |
|
IntEncoderFilter |
An abstract implementation of IntEncoder which serves as a filter
on the values to encode.
|
IntermediateFacetResult |
|
IntersectCase |
Deprecated. |
IntFieldSource |
Expert: obtains int field values from the
FieldCache
using getInts() and makes those values
available as other numeric types, casting as needed.
|
IntHashSet |
A Set of primitive int.
|
IntIterator |
Iterator interface for primitive int iteration.
|
IntSequenceOutputs |
An FST Outputs implementation where each output
is a sequence of ints.
|
IntsRef |
Represents int[], as a slice (offset + length) into an
existing int[].
|
IntsRefFSTEnum<T> |
Enumerates all input (IntsRef) + output pairs in an
FST.
|
IntsRefFSTEnum.InputOutput<T> |
Holds a single input (IntsRef) + output pair.
|
IntToDoubleMap |
An Array-based hashtable which maps primitive int to a primitive double.
The hashtable is constructed with a given capacity, or 16 as a default.
|
IntToIntMap |
An Array-based hashtable which maps primitive int to primitive int.
The hashtable is constructed with a given capacity, or 16 as a default.
|
IntToObjectMap<T> |
An Array-based hashtable which maps primitive int to Objects of generic type
T.
The hashtable is constructed with a given capacity, or 16 as a default.
|
InvalidGeoException |
Deprecated. |
InvalidTokenOffsetsException |
Exception thrown if TokenStream Tokens are incompatible with provided text
|
IOUtils |
This class emulates the new Java 7 "Try-With-Resources" statement.
|
IProjector |
Deprecated. |
IrishAnalyzer |
|
IrishLowerCaseFilter |
Normalises token text to lower case, handling t-prothesis
and n-eclipsis (i.e., that 'nAthair' should become 'n-athair')
|
IrishStemmer |
This class was automatically generated by a Snowball to Java compiler.
It implements the stemming algorithm defined by a snowball script.
|
ISOLatin1AccentFilter |
Deprecated.
|
ItalianAnalyzer |
|
ItalianLightStemFilter |
|
ItalianLightStemmer |
Light Stemmer for Italian.
|
ItalianStemmer |
Generated class implementing code defined by a snowball script.
|
JakartaRegexpCapabilities |
|
JapaneseAnalyzer |
Analyzer for Japanese that uses morphological analysis.
|
JapaneseBaseFormFilter |
|
JapaneseKatakanaStemFilter |
A TokenFilter that normalizes common katakana spelling variations
ending in a long sound character by removing this character (U+30FC).
|
JapanesePartOfSpeechStopFilter |
Removes tokens that match a set of part-of-speech tags.
|
JapaneseReadingFormFilter |
A TokenFilter that replaces the term
attribute with the reading of a token in either katakana or romaji form.
|
JapaneseTokenizer |
Tokenizer for Japanese that uses morphological analysis.
|
JapaneseTokenizer.Mode |
Tokenization mode: this determines how the tokenizer handles
compound and unknown words.
|
JaroWinklerDistance |
Similarity measure for short strings such as person names.
|
JaspellLookup |
|
JaspellTernarySearchTrie |
Implementation of a Ternary Search Trie, a data structure for storing
String objects that combines the compact size of a binary search
tree with the speed of a digital search trie, and is therefore ideal for
practical use in sorting and searching data.
|
JavaCharStream |
An implementation of interface CharStream, where the stream is assumed to
contain only ASCII characters (with java-like unicode escape processing).
|
JavaUtilRegexCapabilities |
An implementation tying Java's built-in java.util.regex to RegexQuery.
|
JoinUtil |
Utility for query time joining using TermsQuery and TermsCollector .
|
Judge |
Judge if a document is relevant for a quality query.
|
KeepOnlyLastCommitDeletionPolicy |
This IndexDeletionPolicy implementation keeps only the most recent commit
and immediately removes all prior commits after a new commit is done.
|
KeywordAnalyzer |
"Tokenizes" the entire stream as a single token.
|
KeywordAttribute |
This attribute can be used to mark a token as a keyword.
|
KeywordAttributeImpl |
This attribute can be used to mark a token as a keyword.
|
KeywordMarkerFilter |
|
KeywordTokenizer |
Emits the entire input as a single token.
|
KpStemmer |
Generated class implementing code defined by a snowball script.
|
KStemFilter |
A high-performance kstem filter for English.
|
KStemmer |
This class implements the Kstem algorithm
|
LabelToOrdinal |
Abstract class for storing Label->Ordinal mappings in a taxonomy.
|
LaoBreakIterator |
Syllable iterator for Lao text.
|
LatLng |
Deprecated. |
LatLongDistanceFilter |
Deprecated. |
LatvianAnalyzer |
|
LatvianStemFilter |
|
LatvianStemmer |
Light stemmer for Latvian.
|
LengthFilter |
Removes words that are too long or too short from the stream.
|
LetterTokenizer |
A LetterTokenizer is a tokenizer that divides text at non-letters.
|
LevensteinDistance |
Levenstein edit distance class.
|
Lift |
The Lift class is a data structure that is a variation of a Patricia trie.
|
LikeThisQueryBuilder |
|
LimitTokenCountAnalyzer |
This Analyzer limits the number of tokens while indexing.
|
LimitTokenCountFilter |
This TokenFilter limits the number of tokens while indexing.
|
LineDocSource |
|
LineDocSource.HeaderLineParser |
|
LineDocSource.LineParser |
Reader of a single input line into DocData .
|
LineDocSource.SimpleLineParser |
|
LineFileDocs |
Minimal port of contrib/benchmark's LineDocSource +
DocMaker, so tests can enumerate docs from a line file created
by contrib/benchmark's WriteLineDoc task.
|
LineSegment |
Deprecated. |
LLRect |
Deprecated. |
LoadFirstFieldSelector |
Load the First field and break.
|
LocaleAttribute |
Deprecated. |
LocaleAttributeImpl |
Deprecated. |
Lock |
An interprocess mutex lock.
|
Lock.With |
Utility class for executing code with exclusive access.
|
LockFactory |
Base class for Locking implementation.
|
LockObtainFailedException |
This exception is thrown when the write.lock
could not be acquired.
|
LockReleaseFailedException |
This exception is thrown when the write.lock
could not be released.
|
LockStressTest |
Simple standalone tool that forever acquires & releases a
lock using a specific LockFactory.
|
LockVerifyServer |
|
LogByteSizeMergePolicy |
This is a LogMergePolicy that measures size of a
segment as the total byte size of the segment's files.
|
LogDocMergePolicy |
This is a LogMergePolicy that measures size of a
segment as the number of documents (not taking deletions
into account).
|
LogMergePolicy |
This class implements a MergePolicy that tries
to merge segments into levels of exponentially
increasing size, where each level has fewer segments than
the value of the merge factor.
|
LongToEnglishContentSource |
Creates documents whose content is a long number starting from
Long.MIN_VALUE + 10.
|
LongToEnglishQueryMaker |
Creates queries whose content is a spelled-out long number
starting from Long.MIN_VALUE + 10.
|
LookaheadTokenFilter<T extends LookaheadTokenFilter.Position> |
An abstract TokenFilter to make it easier to build graph
token filters requiring some lookahead.
|
LookaheadTokenFilter.Position |
Holds all state for a single position; subclass this
to record other state at each position.
|
Lookup |
|
Lookup.LookupPriorityQueue |
|
Lookup.LookupResult |
Result of a lookup.
|
LovinsStemmer |
Generated class implementing code defined by a snowball script.
|
LowercaseExpandedTermsAttribute |
Deprecated. |
LowercaseExpandedTermsAttributeImpl |
Deprecated. |
LowercaseExpandedTermsQueryNodeProcessor |
|
LowerCaseFilter |
Normalizes token text to lower case.
|
LowerCaseTokenizer |
LowerCaseTokenizer performs the function of LetterTokenizer
and LowerCaseFilter together.
|
LRUHashMap<K,V> |
LRUHashMap is an extension of Java's HashMap with a bounded size;
when it reaches that size, each time a new element is added, the least
recently used (LRU) entry is removed.
|
LruTaxonomyWriterCache |
|
LruTaxonomyWriterCache.LRUType |
|
LuceneDictionary |
Lucene Dictionary: terms taken from the given field
of a Lucene index.
|
LuceneJUnitDividingSelector |
Divides filesets into equal groups
|
LuceneJUnitResultFormatter |
Just like BriefJUnitResultFormatter "brief" bundled with ant,
except all formatted text is buffered until the test suite is finished.
|
LucenePackage |
Lucene's package information, including version.
|
LuceneTestCase |
Base class for all Lucene unit tests, Junit3 or Junit4 variant.
|
LuceneTestCase.Nightly |
Annotation for tests that should only be run during nightly builds.
|
LuceneTestCaseRunner |
optionally filters the tests to be run by TEST_METHOD
|
MapBackedSet<E> |
A Set implementation that wraps an actual Map based
implementation.
|
MapFieldSelector |
|
MapOfSets<K,V> |
Helper class for keeping Lists of Objects associated with keys.
|
MappingCharFilter |
Simplistic CharFilter that applies the mappings
contained in a NormalizeCharMap to the character
stream, correcting the resulting changes to the
offsets.
|
MatchAllDocsQuery |
A query that matches all documents.
|
MatchAllDocsQueryBuilder |
|
MatchAllDocsQueryNode |
A MatchAllDocsQueryNode indicates that a query node tree or subtree
will match all documents if executed in the index.
|
MatchAllDocsQueryNodeBuilder |
|
MatchAllDocsQueryNodeProcessor |
|
MatchNoDocsQueryNode |
A MatchNoDocsQueryNode indicates that a query node tree or subtree
will not match any documents if executed in the index.
|
MatchNoDocsQueryNodeBuilder |
|
MaxPayloadFunction |
Returns the maximum payload score seen, else 1 if there are no payloads on the doc.
|
MemoryIndex |
High-performance single-document main memory Apache Lucene fulltext search index.
|
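A minimal usage sketch (the field name, text, and query are hypothetical; assumes the Lucene 3.x MemoryIndex API with addField(String, String, Analyzer) and search(Query)):
    // Build a throwaway in-memory index for a single document and score a query against it.
    Analyzer analyzer = new StandardAnalyzer(Version.LUCENE_36);
    MemoryIndex index = new MemoryIndex();
    index.addField("content", "readings about salmon and other Alaska fishing manuals", analyzer);
    QueryParser parser = new QueryParser(Version.LUCENE_36, "content", analyzer);
    float score = index.search(parser.parse("+alaska +salmon"));  // > 0.0f means the document matches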
MergePolicy |
Expert: a MergePolicy determines the sequence of
primitive merge operations.
|
MergePolicy.MergeAbortedException |
|
MergePolicy.MergeException |
Exception thrown if there are any problems while
executing a merge.
|
MergePolicy.MergeSpecification |
A MergeSpecification instance provides the information
necessary to perform multiple merges.
|
MergePolicy.OneMerge |
OneMerge provides the information necessary to perform
an individual primitive merge operation, resulting in
a single new segment.
|
MergeScheduler |
Expert: IndexWriter uses an instance
implementing this interface to execute the merges
selected by a MergePolicy .
|
Message |
Deprecated.
|
MessageImpl |
Deprecated.
|
MinPayloadFunction |
Calculates the minimum payload seen
|
MMapDirectory |
|
MockAnalyzer |
Analyzer for testing
|
MockCharFilter |
The purpose of this CharFilter is to send offsets out of bounds
if the analyzer doesn't use correctOffset() or does incorrect offset math.
|
MockDirectoryWrapper |
This is a Directory Wrapper that adds methods
intended to be used only by unit tests.
|
MockDirectoryWrapper.Failure |
Objects that represent fail-able conditions.
|
MockDirectoryWrapper.Throttling |
|
MockFixedLengthPayloadFilter |
TokenFilter that adds random fixed-length payloads.
|
MockGraphTokenFilter |
Randomly inserts overlapped (posInc=0) tokens with
posLength sometimes > 1.
|
MockHoleInjectingTokenFilter |
|
MockIndexInput |
IndexInput backed by a byte[] for testing.
|
MockIndexInputWrapper |
Used by MockDirectoryWrapper to create an input stream that
keeps track of when it's been closed.
|
MockIndexOutputWrapper |
Used by MockRAMDirectory to create an output stream that
will throw an IOException on fake disk full, track max
disk space actually used, and maybe throw random
IOExceptions.
|
MockLockFactoryWrapper |
Used by MockDirectoryWrapper to wrap another factory
and track open locks.
|
MockRandomLookaheadTokenFilter |
|
MockRandomMergePolicy |
MergePolicy that makes random decisions for testing.
|
MockReaderWrapper |
Wraps a Reader, and can throw random or fixed
exceptions, and spoon feed read chars.
|
MockTokenizer |
Tokenizer for testing.
|
MockVariableLengthPayloadFilter |
TokenFilter that adds random variable-length payloads.
|
ModifierQueryNode |
A ModifierQueryNode indicates the modifier value (+,-,?,NONE) for
each term on the query string.
|
ModifierQueryNode.Modifier |
|
ModifierQueryNodeBuilder |
|
MoreLikeThis |
Generate "more like this" similarity queries.
|
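A rough sketch of building a similarity query from an already-indexed document; the field name, tuning values, and the reader/searcher variables are hypothetical:
    MoreLikeThis mlt = new MoreLikeThis(reader);   // reader: an open IndexReader
    mlt.setFieldNames(new String[] { "body" });    // fields to mine for interesting terms
    mlt.setMinTermFreq(1);                         // loosen the defaults for short documents
    mlt.setMinDocFreq(1);
    Query like = mlt.like(42);                     // 42: the doc id of the source document
    TopDocs similar = searcher.search(like, 10);   // top 10 most similar documents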
MoreLikeThisQuery |
A simple wrapper for MoreLikeThis for use in scenarios where a Query object
is required eg in custom QueryParser extensions.
|
MultiCategoryListIterator |
|
MultiCollector |
|
MultiFieldAttribute |
Deprecated. |
MultiFieldAttributeImpl |
Deprecated. |
MultiFieldQueryNodeProcessor |
This processor is used to expand terms so the query looks for the same term
in different fields.
|
MultiFieldQueryParser |
A QueryParser which constructs queries to search multiple fields.
|
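For example, expanding user input across two hypothetical fields (a sketch assuming a Lucene 3.6 setup):
    String[] fields = { "title", "body" };
    MultiFieldQueryParser parser = new MultiFieldQueryParser(
        Version.LUCENE_36, fields, new StandardAnalyzer(Version.LUCENE_36));
    Query q = parser.parse("merge policy");        // each term is searched in both title and body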
MultiFieldQueryParserWrapper |
Deprecated.
|
MultiPassIndexSplitter |
This tool splits input index into multiple equal parts.
|
MultiPhraseQuery |
|
MultiPhraseQueryNode |
|
MultiPhraseQueryNodeBuilder |
|
MultipleTermPositions |
|
MultiReader |
An IndexReader which reads multiple indexes, appending
their content.
|
MultiSearcher |
Deprecated.
|
MultiTermQuery |
An abstract Query that matches documents
containing a subset of terms provided by a FilteredTermEnum enumeration.
|
MultiTermQuery.ConstantScoreAutoRewrite |
A rewrite method that tries to pick the best
constant-score rewrite method based on term and
document counts from the query.
|
MultiTermQuery.RewriteMethod |
Abstract class that defines how the query is rewritten.
|
MultiTermQuery.TopTermsBoostOnlyBooleanQueryRewrite |
A rewrite method that first translates each term into
BooleanClause.Occur.SHOULD clause in a BooleanQuery, but the scores
are only computed as the boost.
|
MultiTermQuery.TopTermsScoringBooleanQueryRewrite |
A rewrite method that first translates each term into
BooleanClause.Occur.SHOULD clause in a BooleanQuery, and keeps the
scores as computed by the query.
|
MultiTermQueryWrapperFilter<Q extends MultiTermQuery> |
|
MultiTermRewriteMethodAttribute |
Deprecated. |
MultiTermRewriteMethodAttributeImpl |
Deprecated. |
MultiTermRewriteMethodProcessor |
|
MultiTrie |
The MultiTrie is a Trie of Tries.
|
MultiTrie2 |
The MultiTrie is a Trie of Tries.
|
MutableFacetResultNode |
Mutable implementation for Result of faceted search for a certain taxonomy node.
|
NamedThreadFactory |
A default ThreadFactory implementation that accepts the name prefix
of the created threads as a constructor argument.
|
NameHashIntCacheLRU |
An LRU cache mapping names to ints.
|
NativeFSLockFactory |
|
NativePosixUtil |
|
NearRealtimeReaderTask |
Spawns a BG thread that periodically (defaults to 3.0
seconds, but accepts param in seconds) wakes up and asks
IndexWriter for a near real-time reader.
|
NearSpansOrdered |
A Spans that is formed from the ordered subspans of a SpanNearQuery
where the subspans do not overlap and have a maximum slop between them.
|
NearSpansUnordered |
|
NewAnalyzerTask |
Create a new Analyzer and set it in the getRunData() for use by all future tasks.
|
NewCollationAnalyzerTask |
Task to support benchmarking collation.
|
NewCollationAnalyzerTask.Implementation |
|
NewLocaleTask |
Set a Locale for use in benchmarking.
|
NewRoundTask |
Increment the counter for properties maintained by Round Number.
|
NewShingleAnalyzerTask |
Task to support benchmarking ShingleFilter / ShingleAnalyzerWrapper
|
NGramDistance |
N-Gram version of edit distance based on paper by Grzegorz Kondrak,
"N-gram similarity and distance".
|
NGramPhraseQuery |
This is a PhraseQuery which is optimized for n-gram phrase queries.
|
NGramTokenFilter |
Tokenizes the input into n-grams of the given size(s).
|
NGramTokenizer |
Tokenizes the input into n-grams of the given size(s).
|
NIOFSDirectory |
An FSDirectory implementation that uses java.nio's FileChannel's
positional read, which allows multiple threads to read from the same file
without synchronizing.
|
NIOFSDirectory.NIOFSIndexInput |
|
NLS |
Deprecated.
|
NLSException |
Deprecated.
|
NoChildOptimizationQueryNodeProcessor |
|
NoDeletionPolicy |
|
NoLockFactory |
|
NoMergePolicy |
A MergePolicy which never returns merges to execute (hence its
name).
|
NoMergeScheduler |
|
NoMoreDataException |
Exception indicating there is no more data.
|
NOnesIntDecoder |
|
NOnesIntEncoder |
A variation of FourFlagsIntEncoder which translates the data as
follows:
Values ≥ 2 are translated to value+1 (2 ⇒ 3, 3
⇒ 4, and so forth).
|
NonTopLevelOrdinalPolicy |
Filter out any "top level" category ordinals.
|
NonTopLevelPathPolicy |
This class filters out the ROOT category and its direct descendants.
|
NoOutputs |
A null FST Outputs implementation; use this if
you just want to build an FSA.
|
NormalizeCharMap |
|
NorwegianAnalyzer |
|
NorwegianLightStemFilter |
|
NorwegianLightStemmer |
Light Stemmer for Norwegian.
|
NorwegianMinimalStemFilter |
|
NorwegianMinimalStemmer |
Minimal Stemmer for Norwegian bokmål (no-nb)
|
NorwegianStemmer |
Generated class implementing code defined by a snowball script.
|
NoSuchDirectoryException |
This exception is thrown when you try to list a
non-existent directory.
|
NoTokenFoundQueryNode |
A NoTokenFoundQueryNode is used if a term is converted into no tokens
by the tokenizer/lemmatizer/analyzer (null).
|
NotQuery |
|
NRTCachingDirectory |
Wraps a RAMDirectory
around any provided delegate directory, to
be used during NRT search.
|
NRTManager |
Utility class to manage sharing near-real-time searchers
across multiple searching threads.
|
NRTManager.TrackingIndexWriter |
Class that tracks changes to a delegated
IndexWriter.
|
NRTManager.WaitingListener |
NRTManager invokes this interface to notify it when a
caller is waiting for a specific generation searcher
to be visible.
|
NRTManagerReopenThread |
Utility class that runs a reopen thread to periodically
reopen the NRT searchers in the provided NRTManager .
|
NullFragmenter |
Fragmenter implementation which does not fragment the text.
|
NumberDateFormat |
This Format parses Long into date strings and vice-versa.
|
NumberTools |
Deprecated.
|
NumericConfig |
This class holds the configuration used to parse numeric queries and create
NumericRangeQuery s.
|
NumericField |
This class provides a Field that enables indexing
of numeric values for efficient range filtering and
sorting.
|
NumericField.DataType |
|
NumericFieldConfigListener |
|
NumericPayloadTokenFilter |
|
NumericQueryNode |
This query node represents a field query that holds a numeric value.
|
NumericQueryNodeProcessor |
|
NumericRangeFilter<T extends Number> |
A Filter that only accepts numeric values within
a specified range.
|
NumericRangeFilterBuilder |
|
NumericRangeQuery<T extends Number> |
A Query that matches numeric values within a
specified range.
|
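A sketch of pairing NumericField at index time with NumericRangeQuery at search time; the field name, values, and the writer variable are hypothetical:
    // Index time: store the price as a trie-encoded numeric field.
    Document doc = new Document();
    doc.add(new NumericField("price").setIntValue(42));
    writer.addDocument(doc);
    // Search time: match prices in [10, 100], both ends inclusive.
    Query priceRange = NumericRangeQuery.newIntRange("price", 10, 100, true, true);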
NumericRangeQueryBuilder |
|
NumericRangeQueryNode |
This query node represents a range query composed by NumericQueryNode
bounds, which means the bound values are Number s.
|
NumericRangeQueryNodeBuilder |
|
NumericRangeQueryNodeProcessor |
|
NumericTokenStream |
|
NumericUtils |
This is a helper class to generate prefix-encoded representations for numerical values
and supplies converters to represent float/double values as sortable integers/longs.
|
NumericUtils.IntRangeBuilder |
|
NumericUtils.LongRangeBuilder |
|
ObjectToFloatMap<K> |
An Array-based hashtable which maps Objects of generic type
T to primitive float values.
The hashtable is constructed with a given capacity, or 16 as a default.
|
ObjectToIntMap<K> |
An Array-based hashtable which maps Objects of generic type
T to primitive int values.
The hashtable is constructed with a given capacity, or 16 as a default.
|
OffsetAttribute |
The start and end character offset of a Token.
|
OffsetAttributeImpl |
The start and end character offset of a Token.
|
OffsetLimitTokenFilter |
This TokenFilter limits the number of tokens while indexing by adding up the
current offset.
|
OpaqueQueryNode |
An OpaqueQueryNode is used to specify values that are not supposed to
be parsed by the parser.
|
OpenBitSet |
An "open" BitSet implementation that allows direct access to the array of words
storing the bits.
|
OpenBitSetDISI |
|
OpenBitSetIterator |
An iterator to iterate over set bits in an OpenBitSet.
|
OpenIndexTask |
Open an index writer.
|
OpenReaderTask |
Open an index reader.
|
OpenStringBuilder |
A StringBuilder that allows one to access the array.
|
OpenTaxonomyIndexTask |
Open a taxonomy index.
|
OpenTaxonomyReaderTask |
Open a taxonomy index reader.
|
Optimizer |
The Optimizer class is a Trie that will be reduced (have empty rows removed).
|
Optimizer2 |
The Optimizer class is a Trie that will be reduced (have empty rows removed).
|
OrdFieldSource |
Expert: obtains the ordinal of the field value from the default Lucene
Fieldcache using getStringIndex().
|
OrdinalPolicy |
|
OrdinalProperty |
|
OrQuery |
|
OrQueryNode |
A OrQueryNode represents an OR boolean operation performed on a list
of nodes.
|
Outputs<T> |
Represents the outputs for an FST, providing the basic
algebra required for building and traversing the FST.
|
OutputStreamDataOutput |
|
PackedInts |
Simplistic compression for array of unsigned long values.
|
PackedInts.Mutable |
A packed integer array that can be modified.
|
PackedInts.Reader |
A read-only random access array of positive integers.
|
PackedInts.ReaderImpl |
A simple base for Readers that keeps track of valueCount and bitsPerValue.
|
PackedInts.Writer |
A write-once Writer.
|
PagedBytes |
Represents a logical byte[] as a series of pages.
|
PagedBytes.Reader |
Provides methods to read BytesRefs from a frozen
PagedBytes.
|
PairOutputs<A,B> |
An FST Outputs implementation, holding two other outputs.
|
PairOutputs.Pair<A,B> |
Holds a single pair of two outputs.
|
ParallelMultiSearcher |
Deprecated.
|
ParallelReader |
An IndexReader which reads multiple, parallel indexes.
|
Parameter |
Deprecated.
|
ParametricQueryNode |
Deprecated.
|
ParametricQueryNode.CompareOperator |
|
ParametricRangeQueryNode |
|
ParametricRangeQueryNodeProcessor |
|
ParseException |
This exception is thrown when parse errors are encountered.
|
ParseException |
This exception is thrown when parse errors are encountered.
|
ParseException |
This exception is thrown when parse errors are encountered.
|
ParseException |
This exception is thrown when parse errors are encountered.
|
ParserException |
|
ParserExtension |
This class represents an extension base class to the Lucene standard
QueryParser .
|
PartitionsUtils |
Utilities for partitions - sizes and such
|
PartOfSpeechAttribute |
|
PartOfSpeechAttributeImpl |
|
PathHierarchyTokenizer |
Tokenizer for path-like hierarchies.
|
PathPolicy |
Filtering category paths in CategoryParentsStream , where a given
category is added to the stream, and then all its parents are
added one after the other by successively removing the last component.
|
PathQueryNode |
A PathQueryNode is used to store queries like
/company/USA/California /product/shoes/brown.
|
PathQueryNode.QueryText |
|
PatternAnalyzer |
|
PatternConsumer |
This interface is used to connect the XML pattern file parser to the
hyphenation tree.
|
PatternParser |
A SAX document handler to read and parse hyphenation patterns from a XML
file.
|
Payload |
A Payload is metadata that can be stored together with each occurrence
of a term.
|
PayloadAttribute |
The payload of a Token.
|
PayloadAttributeImpl |
The payload of a Token.
|
PayloadEncoder |
Mainly for use with the DelimitedPayloadTokenFilter, converts char buffers to Payload.
|
PayloadFunction |
An abstract class that defines a way for Payload*Query instances to transform
the cumulative effects of payload scores for a document.
|
PayloadHelper |
Utility methods for encoding payloads.
|
PayloadIntDecodingIterator |
A payload deserializer comes with its own working space (buffer).
|
PayloadIterator |
A utility class for iterating through a posting list of a given term and
retrieving the payload of the first occurrence in every document.
|
PayloadNearQuery |
This class is very similar to
SpanNearQuery except that it factors
in the value of the payloads located at each of the positions where the
TermSpans occurs.
|
PayloadProcessorProvider |
|
PayloadProcessorProvider.DirPayloadProcessor |
Deprecated.
|
PayloadProcessorProvider.PayloadProcessor |
Processes the given payload.
|
PayloadProcessorProvider.ReaderPayloadProcessor |
|
PayloadSpanUtil |
Experimental class to get set of payloads for most standard Lucene queries.
|
PayloadTermQuery |
This class is very similar to
SpanTermQuery except that it factors
in the value of the payload located at each of the positions where the
Term occurs.
|
PerDimensionIndexingParams |
|
PerFieldAnalyzerWrapper |
This analyzer is used to facilitate scenarios where different
fields require different analysis techniques.
|
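For instance, analyzing most fields with a standard analyzer while keeping an identifier field untokenized (field names are hypothetical; a sketch of the 3.x addAnalyzer API):
    PerFieldAnalyzerWrapper wrapper =
        new PerFieldAnalyzerWrapper(new StandardAnalyzer(Version.LUCENE_36));
    wrapper.addAnalyzer("id", new KeywordAnalyzer());  // "id" is indexed as a single token
    IndexWriterConfig config = new IndexWriterConfig(Version.LUCENE_36, wrapper);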
PerfRunData |
Data maintained by a performance test run.
|
PerfTask |
An abstract task to be tested for performance.
|
PersianAnalyzer |
|
PersianCharFilter |
CharFilter that replaces instances of Zero-width non-joiner with an
ordinary space.
|
PersianNormalizationFilter |
|
PersianNormalizer |
Normalizer for Persian.
|
PersistentSnapshotDeletionPolicy |
A SnapshotDeletionPolicy which adds a persistence layer so that
snapshots can be maintained across the life of an application.
|
PhoneticFilter |
Create tokens for phonetic matches.
|
PhraseQuery |
A Query that matches documents containing a particular sequence of terms.
|
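A minimal sketch; the field name and terms are hypothetical:
    PhraseQuery phrase = new PhraseQuery();
    phrase.add(new Term("body", "merge"));
    phrase.add(new Term("body", "policy"));
    phrase.setSlop(1);                     // allow the two terms to be up to one position apart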
PhraseQueryNodeBuilder |
|
PhraseSlopQueryNode |
|
PhraseSlopQueryNodeProcessor |
This processor removes invalid SlopQueryNode objects in the query
node tree.
|
PKIndexSplitter |
Split an index based on a Filter .
|
PlainTextDictionary |
Dictionary represented by a text file.
|
Point2D |
Deprecated. |
Points |
Test run data points collected as the test proceeds.
|
PolishAnalyzer |
|
PorterStemFilter |
Transforms the token stream as per the Porter stemming algorithm.
|
PorterStemmer |
Generated class implementing code defined by a snowball script.
|
PortugueseAnalyzer |
|
PortugueseLightStemFilter |
|
PortugueseLightStemmer |
Light Stemmer for Portuguese
|
PortugueseMinimalStemFilter |
|
PortugueseMinimalStemmer |
Minimal Stemmer for Portuguese
|
PortugueseStemFilter |
|
PortugueseStemmer |
Portuguese stemmer implementing the RSLP (Removedor de Sufixos da Lingua Portuguesa)
algorithm.
|
PortugueseStemmer |
Generated class implementing code defined by a snowball script.
|
PositionBasedTermVectorMapper |
For each Field, store position by position information.
|
PositionBasedTermVectorMapper.TVPositionInfo |
Container for a term at a position
|
PositionFilter |
Set the positionIncrement of all tokens to the "positionIncrement",
except the first return token which retains its original positionIncrement value.
|
PositionIncrementAttribute |
The positionIncrement determines the position of this token
relative to the previous Token in a TokenStream, used in phrase
searching.
|
PositionIncrementAttributeImpl |
The positionIncrement determines the position of this token
relative to the previous Token in a TokenStream , used in phrase
searching.
|
PositionIncrementsAttribute |
Deprecated. |
PositionIncrementsAttributeImpl |
Deprecated. |
PositionLengthAttribute |
The positionLength determines how many positions this
token spans.
|
PositionLengthAttributeImpl |
|
PositionSpan |
Utility class to record Positions Spans
|
PositiveIntOutputs |
An FST Outputs implementation where each output
is a non-negative long value.
|
PositiveScoresOnlyCollector |
A Collector implementation which wraps another
Collector and makes sure only documents with
scores > 0 are collected.
|
PrecedenceQueryNodeProcessorPipeline |
|
PrecedenceQueryParser |
This query parser works exactly as the standard query parser ( StandardQueryParser ),
except that it respects boolean precedence, so <a AND b OR c AND d> is parsed to <(+a +b) (+c +d)>
instead of <+a +b +c +d>.
|
PrefixAndSuffixAwareTokenFilter |
|
PrefixAwareTokenFilter |
Joins two token streams and leaves the last token of the first stream available
to be used when updating the token values in the second stream based on that token.
|
PrefixFilter |
A Filter that restricts search results to values that have a matching prefix in a given
field.
|
PrefixQuery |
A Query that matches documents containing terms with a specified prefix.
|
PrefixTermEnum |
Subclass of FilteredTermEnum for enumerating all terms that match the
specified prefix filter term.
|
PrefixWildcardQueryNode |
|
PrefixWildcardQueryNodeBuilder |
|
PrintReaderTask |
Opens a reader and prints basic statistics.
|
PriorityQueue<T> |
A PriorityQueue maintains a partial ordering of its elements such that the
least element can always be found in constant time.
|
ProximityQueryNode |
A ProximityQueryNode represents a query where the terms should meet
specific distance conditions.
|
ProximityQueryNode.ProximityType |
|
ProximityQueryNode.Type |
|
PruningPolicy |
General Definitions for Index Pruning, such as operations to be performed on field data.
|
PruningReader |
This class produces a subset of the input index, by removing some postings
data according to rules implemented in a TermPruningPolicy , and
optionally it can also remove stored fields of documents according to rules
implemented in a StorePruningPolicy .
|
PruningTool |
|
QualityBenchmark |
Main entry point for running a quality benchmark.
|
QualityQueriesFinder |
Suggest quality queries based on an index's contents.
|
QualityQuery |
A QualityQuery has an ID and some name-value pairs.
|
QualityQueryParser |
Parse a QualityQuery into a Lucene query.
|
QualityStats |
Results of quality benchmark run for a single query or for a set of queries.
|
QualityStats.RecallPoint |
A certain rank in which a relevant doc was found.
|
Query |
The abstract base class for queries.
|
QueryAutoStopWordAnalyzer |
An Analyzer used primarily at query time to wrap another analyzer and provide a layer of protection
which prevents very common words from being passed into queries.
|
QueryBuilder |
This interface is implemented by classes that build some kind of
object from a query tree.
|
QueryBuilder |
Implemented by objects that produce Lucene Query objects from XML streams.
|
QueryBuilderFactory |
|
QueryConfigHandler |
This class can be used to hold any query configuration and no field
configuration.
|
QueryDriver |
Command-line tool for doing a TREC evaluation run.
|
QueryMaker |
Create queries for the test.
|
QueryNode |
A QueryNode is an interface implemented by all nodes on a QueryNode
tree.
|
QueryNodeError |
Error class with NLS support
|
QueryNodeException |
This exception should be thrown if something goes wrong when dealing with
QueryNode s.
|
QueryNodeImpl |
|
QueryNodeOperation |
Allow joining 2 QueryNode Trees, into one.
|
QueryNodeParseException |
This should be thrown when an exception happens while parsing a query
string into the query node tree.
|
QueryNodeProcessor |
|
QueryNodeProcessorImpl |
This is a default implementation for the QueryNodeProcessor
interface; it is an abstract class, so it should be extended by classes that
want to process a QueryNode tree.
|
QueryNodeProcessorPipeline |
|
QueryParser |
This class is generated by JavaCC.
|
QueryParser |
This class is generated by JavaCC.
|
QueryParser.Operator |
The default operator for parsing queries.
|
QueryParserConstants |
Token literal values and constants.
|
QueryParserConstants |
Token literal values and constants.
|
QueryParserHelper |
This class is a helper for the query parser framework; it does all three
query parser phases at once: text parsing, query processing and query
building.
|
QueryParserMessages |
Flexible Query Parser message bundle class
|
QueryParserTestBase |
Base Test class for QueryParser subclasses
|
QueryParserTestBase.QPTestAnalyzer |
Filters LowerCaseTokenizer with QPTestFilter.
|
QueryParserTestBase.QPTestFilter |
Filter which discards the token 'stop' and which expands the
token 'phrase' into 'phrase1 phrase2'
|
QueryParserTestBase.QPTestParser |
Test QueryParser that does not allow fuzzy or wildcard queries.
|
QueryParserTokenManager |
Token Manager.
|
QueryParserTokenManager |
Token Manager.
|
QueryParserUtil |
This class defines utility methods to (help) parse query strings into
Query objects.
|
QueryParserWrapper |
Deprecated.
|
QueryParserWrapper.Operator |
The default operator for parsing queries.
|
QueryScorer |
Scorer implementation which scores text fragments by the number of
unique query terms found.
|
QueryTemplateManager |
Provides utilities for turning query form input (such as from a web page or Swing gui) into
Lucene XML queries by using XSL templates.
|
QueryTermExtractor |
Utility class used to extract the terms used in a query, plus any weights.
|
QueryTermScorer |
Scorer implementation which scores text fragments by the number of
unique query terms found.
|
QueryTermVector |
|
QueryTreeBuilder |
This class should be used when there is a builder for each type of node.
|
QueryUtils |
Utility class for sanity-checking queries.
|
QueryWrapperFilter |
Constrains search results to only match those which also match a provided
query.
|
QuotedFieldQueryNode |
|
RAMDirectory |
|
RAMFile |
|
RAMInputStream |
|
RAMOutputStream |
|
RamUsageEstimator |
Estimates the size (memory representation) of Java objects.
|
RamUsageEstimator.JvmFeature |
JVM diagnostic features.
|
RandomFacetSource |
Simple implementation of a random facet source
|
RandomIndexWriter |
Silly class that randomizes the indexing experience.
|
RandomSampler |
Simple random sampler
|
RangeCollatorAttribute |
Deprecated. |
RangeCollatorAttributeImpl |
Deprecated. |
RangeFilterBuilder |
|
RangeQueryNode<T extends FieldValuePairQueryNode<?>> |
This interface should be implemented by a QueryNode that represents
some kind of range query.
|
RangeQueryNode |
Deprecated.
|
RangeQueryNodeBuilder |
Deprecated.
|
RawTermFilter |
Expert: creates a filter accepting all documents
containing the provided term, disregarding deleted
documents.
|
ReaderUtil |
|
ReaderUtil.Gather |
Recursively visits all sub-readers of a reader.
|
ReadingAttribute |
Attribute for Kuromoji reading data
|
ReadingAttributeImpl |
Attribute for Kuromoji reading data
|
ReadTask |
Read index (abstract) task.
|
ReadTokensTask |
Simple task to test performance of tokenizers.
|
Rectangle |
Deprecated. |
RecyclingByteBlockAllocator |
|
Reduce |
The Reduce object is used to remove gaps in a Trie which stores a dictionary.
|
ReferenceManager<G> |
Utility class to safely share instances of a certain type across multiple
threads, while periodically refreshing them.
|
RegexCapabilities |
Defines basic operations needed by RegexQuery for a regular
expression implementation.
|
RegexQuery |
Implements the regular expression term search query.
|
RegexQueryCapable |
Defines methods for regular expression supporting Querys to use.
|
RegexTermEnum |
Subclass of FilteredTermEnum for enumerating all terms that match the
specified regular expression term using the specified regular expression
implementation.
|
RemoteCachingWrapperFilter |
Deprecated.
|
RemoteSearchable |
Deprecated.
|
RemoveDeletedQueryNodesProcessor |
|
RemoveEmptyNonLeafQueryNodeProcessor |
This processor removes every QueryNode that is not a leaf and has no
children.
|
ReopenReaderTask |
Reopens IndexReader and closes old IndexReader.
|
RepAllTask |
Report all statistics with no aggregations.
|
RepeatableSampler |
Take random samples of large collections.
|
Report |
Textual report of current statistics.
|
ReportTask |
Report (abstract) task - all report tasks extend this task.
|
RepSelectByPrefTask |
Report by-name-prefix statistics with no aggregations.
|
RepSumByNameRoundTask |
Report all statistics grouped/aggregated by name and round.
|
RepSumByNameTask |
Report all statistics aggregated by name.
|
RepSumByPrefRoundTask |
Report all prefix matching statistics grouped/aggregated by name and round.
|
RepSumByPrefTask |
Report by-name-prefix statistics aggregated by name.
|
ResetInputsTask |
Reset inputs so that the test run would behave, input wise,
as if it just started.
|
ResetSystemEraseTask |
Reset all index and input data and call gc, erase index and dir, does NOT clear statistics.
|
ResetSystemSoftTask |
Reset all index and input data and call gc, does NOT erase index/dir, does NOT clear statistics.
|
ResultSortUtils |
Utilities for generating facet results sorted as required
|
Rethrow |
Sneaky: rethrowing checked exceptions as unchecked
ones.
|
ReusableAnalyzerBase |
A convenience subclass of Analyzer that makes it easy to implement
TokenStream reuse.
|
ReusableAnalyzerBase.TokenStreamComponents |
This class encapsulates the outer components of a token stream.
|
ReutersContentSource |
|
ReutersQueryMaker |
A QueryMaker that makes queries devised manually (by Grant Ingersoll) for
searching in the Reuters collection.
|
ReverseOrdFieldSource |
Expert: obtains the ordinal of the field value from the default Lucene
FieldCache using getStringIndex()
and reverses the order.
|
ReversePathHierarchyTokenizer |
Tokenizer for domain-like hierarchies.
|
ReverseStringFilter |
Reverse token string, for example "country" => "yrtnuoc".
|
RIDFTermPruningPolicy |
|
RMIRemoteSearchable |
Deprecated.
|
RollbackIndexTask |
Rollback the index writer.
|
RollingBuffer<T extends RollingBuffer.Resettable> |
Acts like forever growing T[], but internally uses a
circular buffer to reuse instances of T.
|
RollingBuffer.Resettable |
|
RollingCharBuffer |
Acts like a forever growing char[] as you read
characters into it from the provided reader, but
internally it uses a circular buffer to only hold the
characters that haven't been freed yet.
|
RomanianAnalyzer |
|
RomanianStemmer |
Generated class implementing code defined by a snowball script.
|
Row |
The Row class represents a row in a matrix representation of a trie.
|
RSLPStemmerBase |
Base class for stemmers that use a set of RSLP-like stemming steps.
|
RSLPStemmerBase.Rule |
A basic rule, with no exceptions.
|
RSLPStemmerBase.RuleWithSetExceptions |
A rule with a set of whole-word exceptions.
|
RSLPStemmerBase.RuleWithSuffixExceptions |
A rule with a set of exceptional suffixes.
|
RSLPStemmerBase.Step |
A step containing a list of rules.
|
RussianAnalyzer |
|
RussianLetterTokenizer |
Deprecated.
|
RussianLightStemFilter |
|
RussianLightStemmer |
Light Stemmer for Russian.
|
RussianLowerCaseFilter |
Deprecated.
|
RussianStemFilter |
Deprecated.
|
RussianStemmer |
Generated class implementing code defined by a snowball script.
|
Sample |
Sample performance test written programmatically - no algorithm file is needed here.
|
SampleFixer |
Fixer of sample facet accumulation results
|
Sampler |
Sampling definition for facets accumulation
|
Sampler.SampleResult |
Result of sample computation
|
SamplingAccumulator |
Facets accumulation with sampling.
|
SamplingParams |
Parameters for sampling, dictating whether sampling is to take place and how.
|
SamplingWrapper |
Wrap any Facets Accumulator with sampling.
|
ScoreCachingWrappingScorer |
A Scorer which wraps another scorer and caches the score of the
current document.
|
ScoredDocIdCollector |
|
ScoredDocIDs |
Document IDs with scores for each, driving facets accumulation.
|
ScoredDocIDsIterator |
Iterator over document IDs and their scores.
|
ScoredDocIdsUtils |
Utility methods for Scored Doc IDs.
|
ScoreDoc |
|
ScoreFacetRequest |
Facet request for weighting facets according to document scores.
|
ScoreOrderFragmentsBuilder |
An implementation of FragmentsBuilder that outputs score-order fragments.
|
ScoreOrderFragmentsBuilder.ScoreComparator |
|
Scorer |
A Scorer is responsible for scoring a stream of tokens.
|
Scorer |
Expert: Common scoring functionality for different types of queries.
|
Scorer.ScorerVisitor<P extends Query,C extends Query,S extends Scorer> |
A callback to gather information from a scorer and its sub-scorers.
|
ScorerDocQueue |
Deprecated. |
ScoringAggregator |
An Aggregator which updates the weight of a category according to the
scores of the documents it was found in.
|
ScoringRewrite<Q extends Query> |
|
ScriptAttribute |
This attribute stores the UTR #24 script value for a token of text.
|
ScriptAttributeImpl |
|
Searchable |
Deprecated.
|
SearchEquivalenceTestBase |
Simple base class for checking search equivalence.
|
Searcher |
Deprecated.
|
SearcherFactory |
|
SearcherLifetimeManager |
Keeps track of current plus old IndexSearchers, closing
the old ones once they have timed out.
|
SearcherLifetimeManager.PruneByAge |
Simple pruner that drops any searcher that is older than the newest
searcher by more than the specified number of seconds.
|
SearcherLifetimeManager.Pruner |
|
SearcherManager |
Utility class to safely share IndexSearcher instances across multiple
threads, while periodically reopening.
|
SearchFiles |
Simple command-line based search demo.
|
SearchGroup<GROUP_VALUE_TYPE> |
Represents a group that is found during the first pass search.
|
SearchTask |
Search task.
|
SearchTravRetHighlightTask |
Search and Traverse and Retrieve docs task.
|
SearchTravRetLoadFieldSelectorTask |
Search and Traverse and Retrieve docs task using a SetBasedFieldSelector.
|
SearchTravRetTask |
Search and Traverse and Retrieve docs task.
|
SearchTravRetVectorHighlightTask |
Search and Traverse and Retrieve docs task.
|
SearchTravTask |
Search and Traverse task.
|
SearchWithCollectorTask |
Does search w/ a custom collector
|
SearchWithSortTask |
Does sort search on specified field.
|
SegmentInfo |
Information about a segment, such as its name, directory, and files related
to the segment.
|
SegmentInfos |
A collection of segmentInfo objects with methods for operating on
those segments in relation to the file system.
|
SegmentInfos.FindSegmentsFile |
Utility class for executing code that needs to do
something with the current segments file.
|
SegmentReader |
IndexReader implementation over a single segment.
|
SegmentReader.CoreClosedListener |
Called when the shared core for this SegmentReader
is closed.
|
SegmentWriteState |
Holder class for common parameters used during write.
|
SegToken |
SmartChineseAnalyzer internal token
|
SegTokenFilter |
Filters a SegToken by converting full-width latin to half-width, then lowercasing latin.
|
SentenceTokenizer |
Tokenizes input text into sentences.
|
SentinelIntSet |
A native int set where one value is reserved to mean "EMPTY"
|
SerialMergeScheduler |
A MergeScheduler that simply does each merge
sequentially, using the current thread.
|
SetBasedFieldSelector |
Declare what fields to load normally and what fields to load lazily
|
SetOnce<T> |
A convenient class which offers a semi-immutable object wrapper
implementation which allows one to set the value of an object exactly once,
and retrieve it many times.
|
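A small sketch of the intended write-once pattern (the wrapped type and variables are arbitrary):
    SetOnce<IndexWriter> writerRef = new SetOnce<IndexWriter>();
    writerRef.set(writer);                 // the first set succeeds
    IndexWriter w = writerRef.get();       // the value may be read any number of times
    // writerRef.set(otherWriter);         // a second set would throw SetOnce.AlreadySetException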
SetOnce.AlreadySetException |
|
SetPropTask |
Set a performance test configuration property.
|
Shape |
Deprecated. |
ShingleAnalyzerWrapper |
|
ShingleFilter |
A ShingleFilter constructs shingles (token n-grams) from a token stream.
|
ShingleMatrixFilter |
Deprecated.
|
ShingleMatrixFilter.Matrix |
A column-focused matrix in three dimensions.
|
ShingleMatrixFilter.OneDimensionalNonWeightedTokenSettingsCodec |
|
ShingleMatrixFilter.SimpleThreeDimensionalTokenSettingsCodec |
A full featured codec not to be used for something serious.
|
ShingleMatrixFilter.TokenPositioner |
|
ShingleMatrixFilter.TokenSettingsCodec |
Strategy used to code and decode meta data of the tokens from the input stream
regarding how to position the tokens in the matrix, set and retrieve weight, etc.
|
ShingleMatrixFilter.TwoDimensionalNonWeightedSynonymTokenSettingsCodec |
A codec that creates a two dimensional matrix
by treating tokens from the input stream with 0 position increment
as new rows to the current column.
|
ShortFieldSource |
Expert: obtains short field values from the
FieldCache
using getShorts() and makes those values
available as other numeric types, casting as needed.
|
Similarity |
Expert: Scoring API.
|
SimilarityDelegator |
Deprecated.
|
SimilarityQueries |
Simple similarity measures.
|
SimpleAnalyzer |
|
SimpleBoundaryScanner |
|
SimpleCharStream |
An implementation of interface CharStream, where the stream is assumed to
contain only ASCII characters (without unicode processing).
|
SimpleFragListBuilder |
|
SimpleFragmenter |
Fragmenter implementation which breaks text up into same-size
fragments with no concerns over spotting sentence boundaries.
|
SimpleFragmentsBuilder |
A simple implementation of FragmentsBuilder.
|
SimpleFSDirectory |
A straightforward implementation of FSDirectory
using java.io.RandomAccessFile.
|
SimpleFSDirectory.SimpleFSIndexInput |
|
SimpleFSDirectory.SimpleFSIndexInput.Descriptor |
|
SimpleFSLockFactory |
|
SimpleHTMLEncoder |
Simple Encoder implementation to escape text for HTML output
|
SimpleHTMLFormatter |
Simple Formatter implementation to highlight terms with a pre and
post tag.
|
SimpleIntDecoder |
|
SimpleIntEncoder |
A simple IntEncoder , writing an integer as 4 raw bytes.
|
SimpleQQParser |
Simplistic quality query parser.
|
SimpleQueryMaker |
A QueryMaker that makes queries for a collection created
using SingleDocSource .
|
SimpleSloppyPhraseQueryMaker |
Create sloppy phrase queries for performance test, in an index created using simple doc maker.
|
SimpleSpanFragmenter |
Fragmenter implementation which breaks text up into same-size
fragments but does not split up Spans .
|
SimpleStringInterner |
Simple lockless and memory barrier free String intern cache that is guaranteed
to return the same String instance as String.intern()
does.
|
SimpleTerm |
|
SimpleTerm.MatchingTermVisitor |
|
SingleDocSource |
|
SingleFragListBuilder |
|
SingleInstanceLockFactory |
Implements LockFactory for a single in-process instance,
meaning all locking will take place through this one instance.
|
SingleTermEnum |
Subclass of FilteredTermEnum for enumerating a single term.
|
SingleTokenTokenStream |
|
SinusoidalProjector |
Deprecated.
|
SlopQueryNode |
|
SlopQueryNodeBuilder |
|
SlowMultiReaderWrapper |
Acts like Lucene 4.x's SlowMultiReaderWrapper for testing
of top-level MultiTermEnum, MultiTermDocs, ...
|
SmallFloat |
Floating point numbers smaller than 32 bits.
|
SmartChineseAnalyzer |
SmartChineseAnalyzer is an analyzer for Chinese or mixed Chinese-English text.
|
SmartRandom |
A random that tracks whether it has been initialized properly,
and throws an exception if it hasn't.
|
SnapshotDeletionPolicy |
|
SnowballAnalyzer |
Deprecated.
|
SnowballFilter |
A filter that stems words using a Snowball-generated stemmer.
|
SnowballProgram |
This is the rev 502 of the Snowball SVN trunk,
but modified:
made abstract and introduced abstract method stem to avoid expensive reflection in filter class.
|
SolrSynonymParser |
Parser for the Solr synonyms format.
|
Sort |
Encapsulates sort criteria for returned hits.
|
Sort |
On-disk sorting of byte arrays.
|
Sort.BufferSize |
A bit more descriptive unit for constructors.
|
Sort.ByteSequencesReader |
Utility class to read length-prefixed byte[] entries from an input.
|
Sort.ByteSequencesWriter |
Utility class to emit length-prefixed byte[] entries to an output stream for sorting.
|
SortableSingleDocSource |
Adds fields appropriate for sorting: country, random_string and sort_field
(int).
|
SortedTermFreqIteratorWrapper |
This wrapper buffers incoming elements and makes sure they are sorted based on given comparator.
|
SortedTermVectorMapper |
|
SortedVIntList |
Stores and iterates over sorted integers in compressed form in RAM.
|
SorterTemplate |
This class was inspired by CGLIB, but provides a better
QuickSort algorithm without additional InsertionSort
at the end.
|
SortField |
Stores information about how to sort documents by terms in an individual
field.
|
SortingIntEncoder |
An IntEncoderFilter which sorts the values to encode in ascending
order before encoding them.
|
SpanBuilderBase |
|
SpanFilter |
Abstract base class providing a mechanism to restrict searches to a subset
of an index, which also maintains and returns position information.
|
SpanFilterResult |
The results of a SpanQueryFilter.
|
SpanFilterResult.PositionInfo |
|
SpanFilterResult.StartEnd |
|
SpanFirstBuilder |
|
SpanFirstQuery |
Matches spans near the beginning of a field.
|
SpanGradientFormatter |
Formats text with different color intensity depending on the score of the
term using the span tag.
|
SpanishAnalyzer |
|
SpanishLightStemFilter |
|
SpanishLightStemmer |
Light Stemmer for Spanish
|
SpanishStemmer |
Generated class implementing code defined by a snowball script.
|
SpanMultiTermQueryWrapper<Q extends MultiTermQuery> |
|
SpanMultiTermQueryWrapper.SpanRewriteMethod |
Abstract class that defines how the query is rewritten.
|
SpanMultiTermQueryWrapper.TopTermsSpanBooleanQueryRewrite |
A rewrite method that first translates each term into a SpanTermQuery in a
BooleanClause.Occur.SHOULD clause in a BooleanQuery, and keeps the
scores as computed by the query.
|
SpanNearBuilder |
|
SpanNearClauseFactory |
|
SpanNearPayloadCheckQuery |
Only return those matches that have a specific payload at
the given position.
|
SpanNearQuery |
Matches spans which are near one another.
|
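For example, two hypothetical terms required to occur within three positions of each other, in order:
    SpanQuery quick = new SpanTermQuery(new Term("body", "quick"));
    SpanQuery fox   = new SpanTermQuery(new Term("body", "fox"));
    SpanNearQuery near = new SpanNearQuery(new SpanQuery[] { quick, fox },
                                           3,      // maximum slop between the spans
                                           true);  // require the original order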
SpanNotBuilder |
|
SpanNotQuery |
Removes matches which overlap with another SpanQuery.
|
SpanOrBuilder |
|
SpanOrQuery |
Matches the union of its clauses.
|
SpanOrTermsBuilder |
|
SpanPayloadCheckQuery |
Only return those matches that have a specific payload at
the given position.
|
SpanPositionCheckQuery |
Base class for filtering a SpanQuery based on the position of a match.
|
SpanPositionCheckQuery.AcceptStatus |
Return value if the match should be accepted YES , rejected NO ,
or rejected and enumeration should advance to the next document NO_AND_ADVANCE .
|
SpanPositionRangeQuery |
|
SpanQuery |
Base class for span-based queries.
|
SpanQueryBuilder |
|
SpanQueryBuilderFactory |
|
SpanQueryFilter |
Constrains search results to only match those which also match a provided
query.
|
SpanRegexQuery |
Deprecated.
|
Spans |
Expert: an enumeration of span matches.
|
SpanScorer |
Public for extension only.
|
SpanTermBuilder |
|
SpanTermQuery |
Matches spans containing a term.
|
SpanWeight |
Expert-only.
|
SpellChecker |
Spell Checker class (Main class)
(initially inspired by the David Spencer code).
|
SrndPrefixQuery |
|
SrndQuery |
|
SrndTermQuery |
|
SrndTruncQuery |
|
StaleReaderException |
|
StandardAnalyzer |
|
StandardBooleanQueryNode |
|
StandardBooleanQueryNodeBuilder |
|
StandardFacetsAccumulator |
Standard implementation for FacetsAccumulator , utilizing partitions to save on memory.
|
StandardFilter |
|
StandardQueryBuilder |
This interface should be implemented by every class that wants to build
Query objects from QueryNode objects.
|
StandardQueryConfigHandler |
|
StandardQueryConfigHandler.ConfigurationKeys |
|
StandardQueryConfigHandler.Operator |
|
StandardQueryNodeProcessorPipeline |
This pipeline has all the processors needed to process a query node tree,
generated by StandardSyntaxParser , already assembled.
|
StandardQueryParser |
This class is a helper that enables users to easily use the Lucene query
parser.
|
StandardQueryTreeBuilder |
This query tree builder only defines the necessary map to build a
Query tree object.
|
StandardSyntaxParser |
|
StandardSyntaxParserConstants |
Token literal values and constants.
|
StandardSyntaxParserTokenManager |
Token Manager.
|
StandardTokenizer |
A grammar-based tokenizer constructed with JFlex.
|
StandardTokenizerImpl |
|
StandardTokenizerImpl31 |
Deprecated.
|
StandardTokenizerInterface |
Internal interface for supporting versioned grammars.
|
StemmerOverrideFilter |
Provides the ability to override any KeywordAttribute aware stemmer
with custom dictionary-based stemming.
|
StemmerUtil |
Some commonly-used stemming functions
|
StempelFilter |
Transforms the token stream as per the stemming algorithm.
|
StempelStemmer |
Stemmer class is a convenient facade for other stemmer-related classes.
|
StopAnalyzer |
|
StopFilter |
Removes stop words from a token stream.
|
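A sketch of a typical analysis chain ending in a StopFilter (assumes the Version-taking 3.x constructors; reader is a hypothetical java.io.Reader over the text):
    TokenStream ts = new WhitespaceTokenizer(Version.LUCENE_36, reader);
    ts = new LowerCaseFilter(Version.LUCENE_36, ts);
    ts = new StopFilter(Version.LUCENE_36, ts, StopAnalyzer.ENGLISH_STOP_WORDS_SET);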
StopwordAnalyzerBase |
Base class for Analyzers that need to make use of stopword sets.
|
StoreClassNameRule |
|
StorePruningPolicy |
Pruning policy for removing stored fields from documents.
|
StreamUtils |
Stream utilities.
|
StreamUtils.Type |
File format type
|
StringBuilderReader |
|
StringDistance |
Interface for string distances.
|
StringHelper |
Methods for manipulating strings.
|
StringInterner |
Subclasses of StringInterner are required to
return the same single String object for all equal strings.
|
StringUtils |
|
SubmissionReport |
Create a log ready for submission.
|
SuggestMode |
Set of strategies for suggesting related terms
|
SuggestWord |
SuggestWord, used in suggestSimilar method in SpellChecker class.
|
SuggestWordFrequencyComparator |
Frequency first, then score.
|
SuggestWordQueue |
Sorts SuggestWord instances
|
SuggestWordScoreComparator |
Score first, then frequency
|
SwedishAnalyzer |
|
SwedishLightStemFilter |
|
SwedishLightStemmer |
Light Stemmer for Swedish.
|
SwedishStemmer |
Generated class implementing code defined by a snowball script.
|
SweetSpotSimilarity |
A similarity with a lengthNorm that provides for a "plateau" of
equally good lengths, and tf helper functions.
|
SynonymFilter |
Matches single or multi word synonyms in a token stream.
|
SynonymMap |
A map of synonyms, keys and values are phrases.
|
SynonymMap.Builder |
Builds an FSTSynonymMap.
|
SyntaxParser |
|
SystemPropertiesInvariantRule |
|
SystemPropertiesRestoreRule |
Restore system properties from before the nested Statement .
|
Tags |
|
TaskSequence |
Sequence of parallel or sequential tasks.
|
TaskStats |
Statistics for a task run.
|
TaxonomyReader |
TaxonomyReader is the read-only interface with which the faceted-search
library uses the taxonomy during search time.
|
TaxonomyReader.ChildrenArrays |
Equivalent representations of the taxonomy's parent info,
used internally for efficient computation of facet results:
"youngest child" and "oldest sibling"
|
TaxonomyWriter |
TaxonomyWriter is the interface which the faceted-search library uses
to dynamically build the taxonomy at indexing time.
|
TaxonomyWriterCache |
TaxonomyWriterCache is a relatively simple interface for a cache of
category->ordinal mappings, used in TaxonomyWriter implementations
(such as DirectoryTaxonomyWriter ).
|
TeeSinkTokenFilter |
This TokenFilter provides the ability to set aside attribute states
that have already been analyzed.
|
TeeSinkTokenFilter.SinkFilter |
|
TeeSinkTokenFilter.SinkTokenStream |
TokenStream output from a tee with optional filtering.
|
TemporaryObjectAllocator<T> |
A TemporaryObjectAllocator is an object which manages large, reusable,
temporary objects needed during multiple concurrent computations.
|
Term |
A Term represents a word from text.
|
TermAllGroupHeadsCollector<GH extends AbstractAllGroupHeadsCollector.GroupHead> |
|
TermAllGroupsCollector |
A collector that collects all groups that match the
query.
|
TermAttribute |
Deprecated.
|
TermAttributeImpl |
Deprecated.
|
TermDocs |
TermDocs provides an interface for enumerating <document, frequency>
pairs for a term.
|
TermEnum |
Abstract class for enumerating terms.
|
TermFirstPassGroupingCollector |
|
TermFreqIterator |
Interface for enumerating term,weight pairs.
|
TermFreqIterator.TermFreqIteratorWrapper |
Wraps a BytesRefIterator as a TermFreqIterator, with all weights
set to 1
|
TermFreqVector |
Provides access to stored term vector of
a document field.
|
TermPositions |
TermPositions provides an interface for enumerating the <document,
frequency, <position>* > tuples for a term.
|
TermPositionVector |
Extends TermFreqVector to provide additional information about
positions in which each of the terms is found.
|
TermPruningPolicy |
Policy for producing smaller index out of an input index, by examining its terms
and removing from the index some or all of their data as follows:
all terms of a certain field - see TermPruningPolicy.pruneAllFieldPostings(String)
all data of a certain term - see TermPruningPolicy.pruneTermEnum(TermEnum)
all positions of a certain term in a certain document - see #pruneAllPositions(TermPositions, Term)
some positions of a certain term in a certain document - see #pruneSomePositions(int, int[], Term)
|
TermQuery |
A Query that matches documents containing a term.
|
TermQueryBuilder |
|
TermRangeFilter |
A Filter that restricts search results to a range of term
values in a given field.
|
TermRangeQuery |
A Query that matches documents within a range of terms.
|
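For example, all documents whose hypothetical "author" term sorts from "a" (inclusive) up to "c" (exclusive):
    Query authors = new TermRangeQuery("author", "a", "c",
                                       true,    // include the lower bound
                                       false);  // exclude the upper bound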
TermRangeQueryNode |
This query node represents a range query composed by FieldQueryNode
bounds, which means the bound values are strings.
|
TermRangeQueryNodeBuilder |
|
TermRangeTermEnum |
Subclass of FilteredTermEnum for enumerating all terms that match the
specified range parameters.
|
TermSecondPassGroupingCollector |
|
TermsFilter |
Constructs a filter for docs matching any of the terms added to this class.
|
TermsFilterBuilder |
|
TermSpans |
Expert:
Public for extension only
|
TermsQueryBuilder |
Builds a BooleanQuery from all of the terms found in the XML element using the choice of analyzer
|
TermVectorAccessor |
Transparent access to the vector space model,
either via TermFreqVector or by resolving it from the inverted index.
|
TermVectorEntry |
Convenience class for holding TermVector information.
|
TermVectorEntryFreqSortedComparator |
Compares TermVectorEntry s first by frequency and then by
the term (case-sensitive)
|
TermVectorMapper |
|
TermVectorOffsetInfo |
The TermVectorOffsetInfo class holds information pertaining to a Term in a TermPositionVector 's
offset information.
|
TernaryTree |
Ternary Search Tree.
|
TernaryTreeNode |
The class creates a TST node.
|
TestApp |
|
TextableQueryNode |
|
TextFragment |
Low-level class used to record information about a section of a document
with a score.
|
TFTermPruningPolicy |
Policy for producing smaller index out of an input index, by removing postings data
for those terms where their in-document frequency is below a specified
threshold.
|
ThaiAnalyzer |
|
ThaiWordFilter |
|
ThreadedIndexingAndSearchingTestCase |
Utility class that spawns multiple indexing and
searching threads.
|
ThreadInterruptedException |
Thrown by lucene on detecting that Thread.interrupt() had
been called.
|
ThrottledIndexOutput |
Intentionally slow IndexOutput for testing.
|
TieredMergePolicy |
Merges segments of approximately equal size, subject to
an allowed number of segments per tier.
|
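A sketch of tuning it through IndexWriterConfig; the values are illustrative only, and the analyzer variable is assumed to exist:
    TieredMergePolicy mergePolicy = new TieredMergePolicy();
    mergePolicy.setSegmentsPerTier(10.0);   // segments allowed per tier before a merge is triggered
    mergePolicy.setMaxMergeAtOnce(10);      // how many segments a single merge may combine
    IndexWriterConfig config = new IndexWriterConfig(Version.LUCENE_36, analyzer);
    config.setMergePolicy(mergePolicy);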
TieredMergePolicy.MergeScore |
Holds score and explanation for a single candidate
merge.
|
TimeLimitingCollector |
The TimeLimitingCollector is used to timeout search requests that
take longer than the maximum allowed search time limit.
|
TimeLimitingCollector.TimeExceededException |
Thrown when elapsed search time exceeds allowed search time.
|
TimeLimitingCollector.TimerThread |
|
ToChildBlockJoinQuery |
Just like ToParentBlockJoinQuery , except this
query joins in reverse: you provide a Query matching
parent documents and it joins down to child
documents.
|
Token |
Analyzed token with morphological data from its dictionary.
|
Token |
A Token is an occurrence of a term from the text of a field.
|
Token |
Describes the input token stream.
|
Token |
Describes the input token stream.
|
Token |
Describes the input token stream.
|
Token |
Describes the input token stream.
|
Token.TokenAttributeFactory |
Expert: Creates a TokenAttributeFactory returning Token as instance for the basic attributes
and for all other attributes calls the given delegate factory.
|
TokenFilter |
A TokenFilter is a TokenStream whose input is another TokenStream.
|
TokenGroup |
One, or several overlapping tokens, along with the score(s) and the scope of
the original text
|
TokenInfoDictionary |
Binary dictionary implementation for a known-word dictionary model:
Words are encoded into an FST mapping to a list of wordIDs.
|
TokenInfoFST |
Thin wrapper around an FST with root-arc caching for Japanese.
|
TokenizedPhraseQueryNode |
|
Tokenizer |
A Tokenizer is a TokenStream whose input is a Reader.
|
TokenMgrError |
Token Manager Error.
|
TokenMgrError |
Token Manager Error.
|
TokenMgrError |
Token Manager Error.
|
TokenMgrError |
Token Manager Error.
|
TokenOffsetPayloadTokenFilter |
|
TokenRangeSinkFilter |
Counts the tokens as they go by and saves to the internal list those between lower and upper, exclusive of upper.
|
TokenSources |
Hides implementation issues associated with obtaining a TokenStream for use
with the highlighter - can obtain from TermFreqVectors with offsets and
(optionally) positions or from an Analyzer class reparsing the stored content.
|
TokenStream |
A TokenStream enumerates the sequence of tokens, either from
Field s of a Document or from query text.
|
TokenStreamFromTermPositionVector |
|
TokenStreamToDot |
Consumes a TokenStream and outputs the dot (graphviz) string (graph).
|
TokenTypeSinkFilter |
Adds a token to the sink if it has a specific type.
|
TooManyBasicQueries |
|
ToParentBlockJoinCollector |
Collects parent document hits for a Query containing one or more
BlockJoinQuery clauses, sorted by the
specified parent Sort.
|
ToParentBlockJoinQuery |
|
ToParentBlockJoinQuery.ScoreMode |
How to aggregate multiple child hit scores into a
single parent score.
|
TopDocs |
|
TopDocsCollector<T extends ScoreDoc> |
A base class for all collectors that return a TopDocs output.
|
TopFieldCollector |
|
TopFieldDocs |
|
TopGroups<GROUP_VALUE_TYPE> |
Represents result returned by a grouping search.
|
TopKFacetResultsHandler |
Generate Top-K results for a particular FacetRequest.
|
TopKInEachNodeHandler |
|
TopKInEachNodeHandler.IntermediateFacetResultWithHash |
Intermediate result to hold counts from one or more partitions processed
thus far.
|
TopScoreDocCollector |
A Collector implementation that collects the top-scoring hits,
returning them as a TopDocs .
|
TopTermsRewrite<Q extends Query> |
Base rewrite method for collecting only the top terms
via a priority queue.
|
ToStringUtil |
Utility class for English translations of morphological data,
used only for debugging.
|
ToStringUtils |
|
TotalFacetCounts |
Maintain Total Facet Counts per partition, for given parameters:
Index reader of an index
Taxonomy index reader
Facet indexing params (and particularly the category list params)
The total facet counts are maintained as an array of arrays of integers,
where a separate array is kept for each partition.
|
TotalFacetCountsCache |
Manage an LRU cache for TotalFacetCounts per index, taxonomy, and
facet indexing params.
|
TotalHitCountCollector |
Just counts the total number of hits.
|
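For example, counting every matching document without materializing scored hits (a sketch; searcher is an assumed open IndexSearcher):
    TotalHitCountCollector counter = new TotalHitCountCollector();
    searcher.search(new MatchAllDocsQuery(), counter);
    int numDocs = counter.getTotalHits();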
Trec1MQReader |
Read topics of TREC 1MQ track.
|
TrecContentSource |
|
TrecDocParser |
Parser for trec doc content, invoked on the doc text excluding the enclosing
document markup tags, which are handled in TrecContentSource.
|
TrecDocParser.ParsePathType |
Types of trec parse paths.
|
TrecFBISParser |
Parser for the FBIS docs in trec disks 4+5 collection format
|
TrecFR94Parser |
Parser for the FR94 docs in trec disks 4+5 collection format
|
TrecFTParser |
Parser for the FT docs in trec disks 4+5 collection format
|
TrecGov2Parser |
Parser for the GOV2 collection format
|
TrecJudge |
Judge if given document is relevant to given quality query, based on Trec format for judgements.
|
TrecLATimesParser |
Parser for the LATimes docs in trec disks 4+5 collection format
|
TrecParserByPath |
Parser for trec docs which selects the parser to apply according
to the source files path, defaulting to TrecGov2Parser .
|
TrecTopicsReader |
Read TREC topics.
|
Trie |
A Trie is used to store a dictionary of words and their stems.
|
TSTAutocomplete |
|
TSTLookup |
|
TurkishAnalyzer |
|
TurkishLowerCaseFilter |
Normalizes Turkish token text to lower case.
|
TurkishStemmer |
Generated class implementing code defined by a snowball script.
|
TwoPhaseCommit |
An interface for implementations that support 2-phase commit.
|
TwoPhaseCommitTool |
A utility for executing 2-phase commit on several objects.
|
TwoPhaseCommitTool.CommitFailException |
|
TwoPhaseCommitTool.PrepareCommitFailException |
|
TwoPhaseCommitTool.TwoPhaseCommitWrapper |
A wrapper of a TwoPhaseCommit , which delegates all calls to the
wrapped object, passing the specified commitData.
|
TypeAsPayloadTokenFilter |
|
TypeAttribute |
A Token's lexical type.
|
TypeAttributeImpl |
A Token's lexical type.
|
TypeTokenFilter |
Removes tokens whose types appear in a set of blocked types from a token stream.
|
UAX29URLEmailAnalyzer |
|
UAX29URLEmailTokenizer |
This class implements Word Break rules from the Unicode Text Segmentation
algorithm, as specified in Unicode Standard Annex #29.
URLs and email addresses are also tokenized according to the relevant RFCs.
|
UAX29URLEmailTokenizerImpl |
This class implements Word Break rules from the Unicode Text Segmentation
algorithm, as specified in Unicode Standard Annex #29.
URLs and email addresses are also tokenized according to the relevant RFCs.
|
UAX29URLEmailTokenizerImpl31 |
Deprecated.
|
UAX29URLEmailTokenizerImpl34 |
Deprecated.
|
UncaughtExceptionsRule |
|
UncaughtExceptionsRule.UncaughtExceptionEntry |
|
UnescapedCharSequence |
CharsSequence with escaped chars information.
|
UnicodeUtil |
Class to encode java's UTF16 char[] into UTF8 byte[]
without always allocating a new byte[] as
String.getBytes("UTF-8") does.
|
UnicodeUtil.UTF16Result |
Holds decoded UTF16 code units.
|
UnicodeUtil.UTF8Result |
Holds decoded UTF8 code units.
|
UniqueValuesIntEncoder |
|
UnknownDictionary |
Dictionary for unknown-word handling.
|
UnsafeByteArrayInputStream |
|
UnsafeByteArrayOutputStream |
This class is used as a wrapper to a byte array, extending
OutputStream .
|
UnsortedTermFreqIteratorWrapper |
This wrapper buffers the incoming elements and makes sure they are in
random order.
|
UpdateDocTask |
Update a document, using IndexWriter.updateDocument,
optionally of a certain size.
|
UpgradeIndexMergePolicy |
|
UpToTwoPositiveIntOutputs |
An FST Outputs implementation where each output
is one or two non-negative long values.
|
UpToTwoPositiveIntOutputs.TwoLongs |
Holds two long outputs.
|
UserDictionary |
Class for building a User Dictionary.
|
UserInputQueryBuilder |
UserInputQueryBuilder uses 1 of 2 strategies for thread-safe parsing:
1) Synchronizing access to "parse" calls on a previously supplied QueryParser
or..
|
Util |
Static helper methods.
|
Util.MinResult<T> |
|
Utility |
SmartChineseAnalyzer utility constants and methods
|
ValidatingTokenFilter |
A TokenFilter that checks consistency of the tokens (eg
offsets are consistent with one another).
|
ValueQueryNode<T> |
This interface should be implemented by a QueryNode that holds an
arbitrary value.
|
ValueSource |
Expert: source of values for basic function queries.
|
ValueSourceQuery |
Expert: A Query that sets the scores of document to the
values obtained from a ValueSource .
|
Vector2D |
Deprecated. |
VerifyingLockFactory |
A LockFactory that wraps another LockFactory and verifies that each lock obtain/release
is "correct" (never results in two processes holding the
lock at the same time).
|
Version |
Used by certain classes to match version compatibility
across releases of Lucene.
|
Vint8 |
Variable-length encoding of 32-bit integers, into 8-bit bytes.
|
Vint8.Position |
Because Java lacks call-by-reference, this class boxes the decoding position, which
is initially set by the caller, and returned after decoding, incremented by the number
of bytes processed.
|
VInt8IntDecoder |
|
VInt8IntEncoder |
An IntEncoder which implements variable length encoding.
|
VirtualMethod<C> |
A utility for keeping backwards compatibility on previously abstract methods
(or similar replacements).
|
VocabularyAssert |
Utility class for doing vocabulary-based stemming tests
|
WaitForMergesTask |
Waits for merges to finish.
|
WaitTask |
Simply waits for the amount of time specified via the parameter.
|
WarmTask |
Warm reader task: retrieve all reader documents.
|
WeakIdentityMap<K,V> |
|
Weight |
Expert: Calculate query weights and build query scorers.
|
WeightedSpanTerm |
Lightweight class to hold term, weight, and positions used for scoring this
term.
|
WeightedSpanTermExtractor |
|
WeightedSpanTermExtractor.PositionCheckingMap<K> |
This class makes sure that if both position sensitive and insensitive
versions of the same term are added, the position insensitive one wins.
|
WeightedTerm |
Lightweight class to hold term and a weight value used for scoring this term
|
WFSTCompletionLookup |
Suggester based on a weighted FST: it first traverses the prefix,
then walks the n shortest paths to retrieve top-ranked
suggestions.
|
WhitespaceAnalyzer |
|
WhitespaceTokenizer |
A WhitespaceTokenizer is a tokenizer that divides text at whitespace.
|
WikipediaTokenizer |
Extension of StandardTokenizer that is aware of Wikipedia syntax.
|
WildcardQuery |
Implements the wildcard search query.
|
WildcardQueryNode |
|
WildcardQueryNodeBuilder |
|
WildcardQueryNodeProcessor |
|
WildcardTermEnum |
Subclass of FilteredTermEnum for enumerating all terms that match the
specified wildcard filter term.
|
WindowsDirectory |
Native Directory implementation for Microsoft Windows.
|
WindowsDirectory.WindowsIndexInput |
|
WordlistLoader |
Loader for text files that represent a list of stopwords.
|
WordnetSynonymParser |
Parser for wordnet prolog format
|
WordTokenFilter |
|
WordType |
Internal SmartChineseAnalyzer token type constants
|
WriteLineDocTask |
A task which writes documents, one line per document.
|
_TestHelper |
This class provides access to package-level features defined in the
store package.
|
_TestUtil |
General utility methods for Lucene unit tests.
|