Class CompoundWordTokenFilterBase
- java.lang.Object
  - org.apache.lucene.util.AttributeSource
    - org.apache.lucene.analysis.TokenStream
      - org.apache.lucene.analysis.TokenFilter
        - org.apache.lucene.analysis.compound.CompoundWordTokenFilterBase
- All Implemented Interfaces:
  Closeable, AutoCloseable
- Direct Known Subclasses:
  DictionaryCompoundWordTokenFilter, HyphenationCompoundWordTokenFilter
public abstract class CompoundWordTokenFilterBase extends org.apache.lucene.analysis.TokenFilter
Base class for decomposition token filters. You must specify the required Version compatibility when creating CompoundWordTokenFilterBase:
- As of 3.1, CompoundWordTokenFilterBase correctly handles Unicode 4.0 supplementary characters in strings and char arrays provided as compound word dictionaries.

If you pass in a CharArraySet as dictionary, it should be case-insensitive unless it contains only lowercased entries and you have a LowerCaseFilter before this filter in your analysis chain. For optimal performance (this filter performs many dictionary lookups), you should use the latter setup: a LowerCaseFilter in the chain together with a lowercased, case-sensitive CharArraySet. Be aware: if you supply arbitrary Sets to the constructors or String[] dictionaries, they will automatically be made case-insensitive!
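For illustration, here is a minimal sketch of the recommended chain, using the DictionaryCompoundWordTokenFilter subclass listed above together with WhitespaceTokenizer and LowerCaseFilter from the core analysis package. The (matchVersion, ...) constructor shapes of those helper classes and the Version.LUCENE_31 constant are assumptions outside this page; adjust them to the release you target:

  import java.io.Reader;
  import java.io.StringReader;
  import java.util.Arrays;

  import org.apache.lucene.analysis.CharArraySet;
  import org.apache.lucene.analysis.LowerCaseFilter;
  import org.apache.lucene.analysis.TokenStream;
  import org.apache.lucene.analysis.WhitespaceTokenizer;
  import org.apache.lucene.analysis.compound.DictionaryCompoundWordTokenFilter;
  import org.apache.lucene.analysis.tokenattributes.CharTermAttribute;
  import org.apache.lucene.util.Version;

  public class DecompoundChainExample {

    // Lowercase first, then decompound against a dictionary whose entries are already
    // lowercased, so ignoreCase can stay false and lookups skip per-entry case folding.
    static TokenStream buildChain(Reader reader) {
      Version matchVersion = Version.LUCENE_31;   // assumption: use the constant for your release
      CharArraySet dict = new CharArraySet(
          matchVersion, Arrays.asList("fuss", "ball", "spiel"), /* ignoreCase = */ false);
      TokenStream chain = new WhitespaceTokenizer(matchVersion, reader);
      chain = new LowerCaseFilter(matchVersion, chain);
      return new DictionaryCompoundWordTokenFilter(matchVersion, chain, dict);
    }

    public static void main(String[] args) throws Exception {
      TokenStream ts = buildChain(new StringReader("Fussballspiel"));
      CharTermAttribute term = ts.addAttribute(CharTermAttribute.class);
      ts.reset();
      while (ts.incrementToken()) {
        // Prints the pass-through token followed by the dictionary subwords,
        // e.g. "fussballspiel", "fuss", "ball", "spiel".
        System.out.println(term.toString());
      }
      ts.end();
      ts.close();
    }
  }

Because the LowerCaseFilter runs before the decompounder and the dictionary entries are already lowercased, the CharArraySet can stay case-sensitive, which avoids per-entry case folding during the filter's many lookups.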
-
Nested Class Summary
- protected class CompoundWordTokenFilterBase.CompoundToken
  Helper class to hold decompounded token information
-
Field Summary
- static int DEFAULT_MAX_SUBWORD_SIZE
  The default for maximal length of subwords that get propagated to the output of this filter
- static int DEFAULT_MIN_SUBWORD_SIZE
  The default for minimal length of subwords that get propagated to the output of this filter
- static int DEFAULT_MIN_WORD_SIZE
  The default for minimal word length that gets decomposed
- protected org.apache.lucene.analysis.CharArraySet dictionary
- protected int maxSubwordSize
- protected int minSubwordSize
- protected int minWordSize
- protected org.apache.lucene.analysis.tokenattributes.OffsetAttribute offsetAtt
- protected boolean onlyLongestMatch
- protected org.apache.lucene.analysis.tokenattributes.CharTermAttribute termAtt
- protected LinkedList<CompoundWordTokenFilterBase.CompoundToken> tokens
-
Constructor Summary
- protected CompoundWordTokenFilterBase(org.apache.lucene.analysis.TokenStream input, String[] dictionary)
  Deprecated.
- protected CompoundWordTokenFilterBase(org.apache.lucene.analysis.TokenStream input, String[] dictionary, boolean onlyLongestMatch)
  Deprecated.
- protected CompoundWordTokenFilterBase(org.apache.lucene.analysis.TokenStream input, String[] dictionary, int minWordSize, int minSubwordSize, int maxSubwordSize, boolean onlyLongestMatch)
  Deprecated.
- protected CompoundWordTokenFilterBase(org.apache.lucene.analysis.TokenStream input, Set<?> dictionary)
  Deprecated.
- protected CompoundWordTokenFilterBase(org.apache.lucene.analysis.TokenStream input, Set<?> dictionary, boolean onlyLongestMatch)
  Deprecated.
- protected CompoundWordTokenFilterBase(org.apache.lucene.analysis.TokenStream input, Set<?> dictionary, int minWordSize, int minSubwordSize, int maxSubwordSize, boolean onlyLongestMatch)
  Deprecated.
- protected CompoundWordTokenFilterBase(org.apache.lucene.util.Version matchVersion, org.apache.lucene.analysis.TokenStream input, String[] dictionary)
- protected CompoundWordTokenFilterBase(org.apache.lucene.util.Version matchVersion, org.apache.lucene.analysis.TokenStream input, String[] dictionary, boolean onlyLongestMatch)
- protected CompoundWordTokenFilterBase(org.apache.lucene.util.Version matchVersion, org.apache.lucene.analysis.TokenStream input, String[] dictionary, int minWordSize, int minSubwordSize, int maxSubwordSize, boolean onlyLongestMatch)
- protected CompoundWordTokenFilterBase(org.apache.lucene.util.Version matchVersion, org.apache.lucene.analysis.TokenStream input, Set<?> dictionary)
- protected CompoundWordTokenFilterBase(org.apache.lucene.util.Version matchVersion, org.apache.lucene.analysis.TokenStream input, Set<?> dictionary, boolean onlyLongestMatch)
- protected CompoundWordTokenFilterBase(org.apache.lucene.util.Version matchVersion, org.apache.lucene.analysis.TokenStream input, Set<?> dictionary, int minWordSize, int minSubwordSize, int maxSubwordSize, boolean onlyLongestMatch)
-
Method Summary
- protected abstract void decompose()
  Decomposes the current termAtt and places CompoundWordTokenFilterBase.CompoundToken instances in the tokens list.
- boolean incrementToken()
- static org.apache.lucene.analysis.CharArraySet makeDictionary(org.apache.lucene.util.Version matchVersion, String[] dictionary)
  Deprecated. Only available for backwards compatibility.
- void reset()
-
Methods inherited from class org.apache.lucene.util.AttributeSource
addAttribute, addAttributeImpl, captureState, clearAttributes, cloneAttributes, copyTo, equals, getAttribute, getAttributeClassesIterator, getAttributeFactory, getAttributeImplsIterator, hasAttribute, hasAttributes, hashCode, reflectAsString, reflectWith, restoreState, toString
-
Field Detail
-
DEFAULT_MIN_WORD_SIZE
public static final int DEFAULT_MIN_WORD_SIZE
The default for minimal word length that gets decomposed
- See Also:
- Constant Field Values
-
DEFAULT_MIN_SUBWORD_SIZE
public static final int DEFAULT_MIN_SUBWORD_SIZE
The default for minimal length of subwords that get propagated to the output of this filter
- See Also:
- Constant Field Values
-
DEFAULT_MAX_SUBWORD_SIZE
public static final int DEFAULT_MAX_SUBWORD_SIZE
The default for maximal length of subwords that get propagated to the output of this filter
- See Also:
- Constant Field Values
-
dictionary
protected final org.apache.lucene.analysis.CharArraySet dictionary
-
tokens
protected final LinkedList<CompoundWordTokenFilterBase.CompoundToken> tokens
-
minWordSize
protected final int minWordSize
-
minSubwordSize
protected final int minSubwordSize
-
maxSubwordSize
protected final int maxSubwordSize
-
onlyLongestMatch
protected final boolean onlyLongestMatch
-
termAtt
protected final org.apache.lucene.analysis.tokenattributes.CharTermAttribute termAtt
-
offsetAtt
protected final org.apache.lucene.analysis.tokenattributes.OffsetAttribute offsetAtt
-
-
Constructor Detail
-
CompoundWordTokenFilterBase
@Deprecated protected CompoundWordTokenFilterBase(org.apache.lucene.analysis.TokenStream input, String[] dictionary, int minWordSize, int minSubwordSize, int maxSubwordSize, boolean onlyLongestMatch)
Deprecated.
-
CompoundWordTokenFilterBase
@Deprecated protected CompoundWordTokenFilterBase(org.apache.lucene.analysis.TokenStream input, String[] dictionary, boolean onlyLongestMatch)
Deprecated.
-
CompoundWordTokenFilterBase
@Deprecated protected CompoundWordTokenFilterBase(org.apache.lucene.analysis.TokenStream input, Set<?> dictionary, boolean onlyLongestMatch)
Deprecated.
-
CompoundWordTokenFilterBase
@Deprecated protected CompoundWordTokenFilterBase(org.apache.lucene.analysis.TokenStream input, String[] dictionary)
Deprecated.
-
CompoundWordTokenFilterBase
@Deprecated protected CompoundWordTokenFilterBase(org.apache.lucene.analysis.TokenStream input, Set<?> dictionary)
Deprecated.
-
CompoundWordTokenFilterBase
@Deprecated protected CompoundWordTokenFilterBase(org.apache.lucene.analysis.TokenStream input, Set<?> dictionary, int minWordSize, int minSubwordSize, int maxSubwordSize, boolean onlyLongestMatch)
Deprecated.
-
CompoundWordTokenFilterBase
protected CompoundWordTokenFilterBase(org.apache.lucene.util.Version matchVersion, org.apache.lucene.analysis.TokenStream input, String[] dictionary, int minWordSize, int minSubwordSize, int maxSubwordSize, boolean onlyLongestMatch)
-
CompoundWordTokenFilterBase
protected CompoundWordTokenFilterBase(org.apache.lucene.util.Version matchVersion, org.apache.lucene.analysis.TokenStream input, String[] dictionary, boolean onlyLongestMatch)
-
CompoundWordTokenFilterBase
protected CompoundWordTokenFilterBase(org.apache.lucene.util.Version matchVersion, org.apache.lucene.analysis.TokenStream input, Set<?> dictionary, boolean onlyLongestMatch)
-
CompoundWordTokenFilterBase
protected CompoundWordTokenFilterBase(org.apache.lucene.util.Version matchVersion, org.apache.lucene.analysis.TokenStream input, String[] dictionary)
-
CompoundWordTokenFilterBase
protected CompoundWordTokenFilterBase(org.apache.lucene.util.Version matchVersion, org.apache.lucene.analysis.TokenStream input, Set<?> dictionary)
-
CompoundWordTokenFilterBase
protected CompoundWordTokenFilterBase(org.apache.lucene.util.Version matchVersion, org.apache.lucene.analysis.TokenStream input, Set<?> dictionary, int minWordSize, int minSubwordSize, int maxSubwordSize, boolean onlyLongestMatch)
-
-
Method Detail
-
makeDictionary
@Deprecated public static org.apache.lucene.analysis.CharArraySet makeDictionary(org.apache.lucene.util.Version matchVersion, String[] dictionary)
Deprecated. Only available for backwards compatibility.
-
incrementToken
public final boolean incrementToken() throws IOException
- Specified by:
incrementToken
in class org.apache.lucene.analysis.TokenStream
- Throws:
IOException
-
decompose
protected abstract void decompose()
Decomposes the current termAtt and places CompoundWordTokenFilterBase.CompoundToken instances in the tokens list. The original token may not be placed in the list, as it is automatically passed through this filter.
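For illustration, a minimal sketch of a subclass implementing decompose() follows; it is not the algorithm shipped in DictionaryCompoundWordTokenFilter. The class name NaiveDictionaryDecompounder is hypothetical, and both the CompoundToken(offset, length) constructor (offsets relative to the current termAtt) and CharArraySet.contains(char[], int, int) are assumed from the wider API:

  import java.util.Set;

  import org.apache.lucene.analysis.TokenStream;
  import org.apache.lucene.analysis.compound.CompoundWordTokenFilterBase;
  import org.apache.lucene.util.Version;

  public class NaiveDictionaryDecompounder extends CompoundWordTokenFilterBase {

    public NaiveDictionaryDecompounder(Version matchVersion, TokenStream input, Set<?> dictionary) {
      super(matchVersion, input, dictionary);
    }

    @Override
    protected void decompose() {
      final char[] buffer = termAtt.buffer();
      final int length = termAtt.length();
      if (length < minWordSize) {
        return; // too short to decompose; the original token still passes through
      }
      // Slide a window over the term and queue every dictionary hit within the
      // configured subword bounds as a CompoundToken.
      for (int start = 0; start <= length - minSubwordSize; start++) {
        int longest = -1;
        for (int size = minSubwordSize; size <= maxSubwordSize && start + size <= length; size++) {
          if (dictionary.contains(buffer, start, size)) {   // assumption: contains(char[], int, int)
            if (onlyLongestMatch) {
              longest = size;                               // remember only the longest hit at this offset
            } else {
              tokens.add(new CompoundToken(start, size));
            }
          }
        }
        if (onlyLongestMatch && longest != -1) {
          tokens.add(new CompoundToken(start, longest));
        }
      }
    }
  }

The base class's final incrementToken() emits the original token and then drains the tokens list before pulling the next input token, so decompose() only needs to queue the subword tokens.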
-
reset
public void reset() throws IOException
- Overrides:
reset
in class org.apache.lucene.analysis.TokenFilter
- Throws:
IOException