Class MockAnalyzer

  • All Implemented Interfaces:
    Closeable, AutoCloseable

    public final class MockAnalyzer
    extends Analyzer
    Analyzer for testing

    This analyzer is a drop-in replacement for WhitespaceAnalyzer, SimpleAnalyzer, or KeywordAnalyzer in unit tests. If you are testing a custom component, such as a query parser or analyzer wrapper, that consumes analysis streams, it's a good idea to test it with this analyzer instead (a usage sketch follows below). MockAnalyzer has the following behavior:

    • By default, the assertions in MockTokenizer are turned on for extra checks that the consumer is consuming properly. These checks can be disabled with setEnableChecks(boolean).
    • Payload data is randomly injected into the stream for more thorough testing of payloads.
    See Also:
    MockTokenizer
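
    For example, a minimal test sketch of that intended use, exercising a query parser with MockAnalyzer instead of a real analyzer. The MockTokenizer.WHITESPACE pattern constant, CharArraySet.EMPTY_SET, and the Version-taking QueryParser constructor are assumptions about the exact Lucene release; inside a LuceneTestCase-based test, the framework-supplied per-test Random would normally be passed instead of a fixed seed.

    import java.util.Random;

    import org.apache.lucene.analysis.CharArraySet;
    import org.apache.lucene.analysis.MockAnalyzer;
    import org.apache.lucene.analysis.MockTokenizer;
    import org.apache.lucene.queryParser.QueryParser;
    import org.apache.lucene.search.Query;
    import org.apache.lucene.util.Version;

    public class MockAnalyzerUsageTest {
      public void testParsesLowercasedTerms() throws Exception {
        // Whitespace-style tokenization, lowercasing, no stopwords,
        // position increments enabled.
        MockAnalyzer analyzer = new MockAnalyzer(new Random(42),
            MockTokenizer.WHITESPACE, true, CharArraySet.EMPTY_SET, true);

        // The component under test only sees an Analyzer; MockAnalyzer adds
        // consumer-workflow checks and random payloads behind that interface.
        QueryParser parser = new QueryParser(Version.LUCENE_30, "body", analyzer);
        Query q = parser.parse("Foo BAR");
        // q should now represent the lowercased terms "foo" and "bar".
      }
    }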
    • Constructor Detail

      • MockAnalyzer

        public MockAnalyzer​(Random random,
                            int pattern,
                            boolean lowerCase,
                            CharArraySet filter,
                            boolean enablePositionIncrements)
        Creates a new MockAnalyzer.
        Parameters:
        random - Random for payloads behavior
        pattern - pattern constant describing how tokenization should happen
        lowerCase - true if the tokenizer should lowercase terms
        filter - CharArraySet describing how terms should be filtered (set of stopwords, etc.)
        enablePositionIncrements - true if position increments should reflect filtered terms.
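
        As a sketch of how these parameters map to concrete arguments (the MockTokenizer pattern constants, CharArraySet.EMPTY_SET, and the CharArraySet(Collection, boolean) constructor are assumed to exist in this Lucene version; imports as in the sketch above, plus java.util.Arrays):

        Random random = new Random(42);

        // Whitespace-like tokenization, lowercasing, no stop filtering,
        // position increments enabled.
        MockAnalyzer plain = new MockAnalyzer(random,
            MockTokenizer.WHITESPACE, true, CharArraySet.EMPTY_SET, true);

        // Keyword-like tokenization: the whole input becomes a single token,
        // no lowercasing, nothing filtered.
        MockAnalyzer keyword = new MockAnalyzer(random,
            MockTokenizer.KEYWORD, false, CharArraySet.EMPTY_SET, false);

        // Whitespace tokenization with a stop set; enablePositionIncrements=true
        // leaves a positional hole where a stopword was removed.
        CharArraySet stopSet = new CharArraySet(Arrays.asList("the", "a"), true);
        MockAnalyzer stopped = new MockAnalyzer(random,
            MockTokenizer.WHITESPACE, true, stopSet, true);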
    • Method Detail

      • tokenStream

        public TokenStream tokenStream​(String fieldName,
                                       Reader reader)
        Description copied from class: Analyzer
        Creates a TokenStream which tokenizes all the text in the provided Reader. Must be able to handle null field name for backward compatibility.
        Specified by:
        tokenStream in class Analyzer
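
        For example, a consumer driving the returned stream through the documented workflow (reset, incrementToken until false, end, close), which is what MockTokenizer's checks are meant to verify. TermAttribute is assumed to be the term attribute class in this 3.x API; imports follow the sketches above plus java.io.StringReader and org.apache.lucene.analysis.tokenattributes.TermAttribute.

        public void testTokenStreamConsumption() throws Exception {
          MockAnalyzer analyzer = new MockAnalyzer(new Random(42),
              MockTokenizer.WHITESPACE, true, CharArraySet.EMPTY_SET, true);

          TokenStream ts = analyzer.tokenStream("body", new StringReader("Quick Brown Fox"));
          TermAttribute termAtt = ts.addAttribute(TermAttribute.class);

          ts.reset();                              // consumer workflow: reset first
          while (ts.incrementToken()) {
            System.out.println(termAtt.term());    // quick, brown, fox
          }
          ts.end();                                // then end() ...
          ts.close();                              // ... and finally close()
        }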
      • reusableTokenStream

        public TokenStream reusableTokenStream​(String fieldName,
                                               Reader reader)
                                        throws IOException
        Description copied from class: Analyzer
        Creates a TokenStream that is allowed to be re-used from the previous time that the same thread called this method. Callers that do not need to use more than one TokenStream at the same time from this analyzer should use this method for better performance.
        Overrides:
        reusableTokenStream in class Analyzer
        Throws:
        IOException
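
        A sketch of the intended reuse pattern, analyzing several inputs from the same thread with one stream in flight at a time (imports and assumptions as in the sketches above):

        public void testReusableTokenStream() throws Exception {
          MockAnalyzer analyzer = new MockAnalyzer(new Random(42),
              MockTokenizer.WHITESPACE, true, CharArraySet.EMPTY_SET, true);

          for (String doc : new String[] { "first doc", "second doc", "third doc" }) {
            // The analyzer may hand back the same re-initialized chain on each
            // call instead of building a new one, which is the performance win.
            TokenStream ts = analyzer.reusableTokenStream("body", new StringReader(doc));
            TermAttribute termAtt = ts.addAttribute(TermAttribute.class);
            ts.reset();
            while (ts.incrementToken()) {
              // consume termAtt.term() ...
            }
            ts.end();
            ts.close();
          }
        }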
      • setPositionIncrementGap

        public void setPositionIncrementGap(int positionIncrementGap)
        Sets the position increment gap reported by getPositionIncrementGap(String).
      • getPositionIncrementGap

        public int getPositionIncrementGap​(String fieldName)
        Description copied from class: Analyzer
        Invoked before indexing a Fieldable instance if terms have already been added to that field. This allows custom analyzers to place an automatic position increment gap between Fieldable instances using the same field name. The default position increment gap is 0. With a 0 position increment gap and the typical default token position increment of 1, all terms in a field, including terms across Fieldable instances, are in successive positions, allowing exact PhraseQuery matches, for instance, across Fieldable instance boundaries.
        Overrides:
        getPositionIncrementGap in class Analyzer
        Parameters:
        fieldName - Fieldable name being indexed.
        Returns:
        position increment gap, added to the next token emitted from Analyzer.tokenStream(String,Reader)
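
        A sketch of the setter/getter pair and its indexing effect (the assumption here is that MockAnalyzer reports the configured gap for any field name; imports as above):

        MockAnalyzer analyzer = new MockAnalyzer(new Random(42),
            MockTokenizer.WHITESPACE, true, CharArraySet.EMPTY_SET, true);
        analyzer.setPositionIncrementGap(10);

        int gap = analyzer.getPositionIncrementGap("body");   // expected: 10

        // With a gap of 10, the first token of a second Fieldable named "body"
        // in the same Document starts 10 positions after the last token of the
        // first value, so an exact PhraseQuery cannot match across the boundary.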
      • setEnableChecks

        public void setEnableChecks​(boolean enableChecks)
        Toggle consumer workflow checking: if your test consumes token streams normally, you should leave this enabled.
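
        For example (inside a test method that declares throws Exception, imports as in the sketches above); exactly which shortcuts the assertions flag is a MockTokenizer detail, so the final comment is an assumption:

        MockAnalyzer analyzer = new MockAnalyzer(new Random(42),
            MockTokenizer.WHITESPACE, true, CharArraySet.EMPTY_SET, true);

        // Checks are on by default; keep them on when the code under test uses
        // the normal reset()/incrementToken()/end()/close() workflow.
        // Disable them only for tests that deliberately cut the workflow short:
        analyzer.setEnableChecks(false);
        TokenStream ts = analyzer.tokenStream("body", new StringReader("partially consumed"));
        ts.reset();
        ts.incrementToken();   // abandon the stream early without end()/close();
                               // this is the kind of misuse the checks aim to flag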
      • setMaxTokenLength

        public void setMaxTokenLength​(int length)
        Sets the maxTokenLength for MockTokenizer.
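
        A brief sketch (same assumptions and imports as above); whether overlong runs are split into several tokens or cut off is behavior of MockTokenizer to verify against the version in use:

        MockAnalyzer analyzer = new MockAnalyzer(new Random(42),
            MockTokenizer.WHITESPACE, true, CharArraySet.EMPTY_SET, true);

        // Constrain token length before pulling any streams from the analyzer,
        // e.g. to exercise a consumer against unusually short tokens.
        analyzer.setMaxTokenLength(5);
        TokenStream ts = analyzer.tokenStream("body",
            new StringReader("internationalization"));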