org.apache.lucene.index.memory

Class AnalyzerUtil

public class AnalyzerUtil extends Object

Various fulltext analysis utilities avoiding redundant code in several classes.

Author: whoschek.AT.lbl.DOT.gov

Method Summary
static Analyzer getLoggingAnalyzer(Analyzer child, PrintStream log, String logName)
Returns a simple analyzer wrapper that logs all tokens produced by the underlying child analyzer to the given log stream (typically System.err); otherwise it behaves exactly like the child analyzer, delivering the very same tokens; useful for debugging custom indexing and/or query code.
static Analyzer getMaxTokenAnalyzer(Analyzer child, int maxTokens)
Returns an analyzer wrapper that returns at most the first maxTokens tokens from the underlying child analyzer, ignoring all remaining tokens.
static String[] getMostFrequentTerms(Analyzer analyzer, String text, int limit)
Returns (frequency:term) pairs for the top N distinct terms (aka words), sorted descending by frequency (and ascending by term, if tied).
static String[] getParagraphs(String text, int limit)
Returns at most the first N paragraphs of the given text.
static Analyzer getPorterStemmerAnalyzer(Analyzer child)
Returns an English stemming analyzer that stems tokens from the underlying child analyzer according to the Porter stemming algorithm.
static String[] getSentences(String text, int limit)
Returns at most the first N sentences of the given text.
static Analyzer getSynonymAnalyzer(Analyzer child, SynonymMap synonyms, int maxSynonyms)
Returns an analyzer wrapper that wraps the underlying child analyzer's token stream into a SynonymTokenFilter.
static Analyzer getTokenCachingAnalyzer(Analyzer child)
Returns an analyzer wrapper that caches all tokens generated by the underlying child analyzer's token streams, and delivers those cached tokens on subsequent calls to tokenStream(String fieldName, Reader reader) if the fieldName has been seen before, altogether ignoring the Reader parameter on cache lookup.

Method Detail

getLoggingAnalyzer

public static Analyzer getLoggingAnalyzer(Analyzer child, PrintStream log, String logName)
Returns a simple analyzer wrapper that logs all tokens produced by the underlying child analyzer to the given log stream (typically System.err); otherwise it behaves exactly like the child analyzer, delivering the very same tokens; useful for debugging custom indexing and/or query code.

Parameters:
child - the underlying child analyzer
log - the print stream to log to (typically System.err)
logName - a name for this logger (typically "log" or similar)

Returns: a logging analyzer
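
For illustration, here is a minimal, untested sketch of using the wrapper for debugging. It assumes the Lucene 2.x-era token API in which TokenStream.next() returns a Token or null, and uses StandardAnalyzer as the child; the class, field, and logger names are only examples.

 import java.io.StringReader;
 import org.apache.lucene.analysis.Analyzer;
 import org.apache.lucene.analysis.TokenStream;
 import org.apache.lucene.analysis.standard.StandardAnalyzer;
 import org.apache.lucene.index.memory.AnalyzerUtil;

 public class LoggingExample {
   public static void main(String[] args) throws Exception {
     // wrap a StandardAnalyzer so each produced token is also written to System.err
     Analyzer logging = AnalyzerUtil.getLoggingAnalyzer(
         new StandardAnalyzer(), System.err, "debug");

     // consume the stream; the tokens delivered are identical to the child's,
     // the logging happens purely as a side effect
     TokenStream stream = logging.tokenStream("content",
         new StringReader("The quick brown fox jumped over the lazy dog"));
     while (stream.next() != null) {
       // nothing to do here; each token has already been logged
     }
   }
 }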

getMaxTokenAnalyzer

public static Analyzer getMaxTokenAnalyzer(Analyzer child, int maxTokens)
Returns an analyzer wrapper that returns at most the first maxTokens tokens from the underlying child analyzer, ignoring all remaining tokens.

Parameters:
child - the underlying child analyzer
maxTokens - the maximum number of tokens to return from the underlying analyzer (a value of Integer.MAX_VALUE indicates unlimited)

Returns: an analyzer wrapper
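
A sketch of capping analysis cost for very large documents follows; it again assumes the 2.x-era TokenStream.next() API, and the field name, sample text, and limit of 1000 are arbitrary.

 import java.io.StringReader;
 import org.apache.lucene.analysis.Analyzer;
 import org.apache.lucene.analysis.TokenStream;
 import org.apache.lucene.analysis.standard.StandardAnalyzer;
 import org.apache.lucene.index.memory.AnalyzerUtil;

 public class MaxTokenExample {
   public static void main(String[] args) throws Exception {
     // keep only the first 1000 tokens of each field, e.g. to bound indexing cost
     Analyzer capped = AnalyzerUtil.getMaxTokenAnalyzer(new StandardAnalyzer(), 1000);

     TokenStream stream = capped.tokenStream("content",
         new StringReader("some very long document text ..."));
     int count = 0;
     while (stream.next() != null) count++;
     System.out.println(count + " tokens delivered (never more than 1000)");
   }
 }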

getMostFrequentTerms

public static String[] getMostFrequentTerms(Analyzer analyzer, String text, int limit)
Returns (frequency:term) pairs for the top N distinct terms (aka words), sorted descending by frequency (and ascending by term, if tied).

Example XQuery:

 declare namespace util = "java:org.apache.lucene.index.memory.AnalyzerUtil";
 declare namespace analyzer = "java:org.apache.lucene.index.memory.PatternAnalyzer";
 
 for $pair in util:get-most-frequent-terms(
    analyzer:EXTENDED_ANALYZER(), doc("samples/shakespeare/othello.xml"), 10)
 return <word word="{substring-after($pair, ':')}" frequency="{substring-before($pair, ':')}"/>
 

Parameters:
analyzer - the analyzer to use for splitting the text into terms (aka words)
text - the text to analyze
limit - the maximum number of pairs to return; zero indicates "as many as possible"

Returns: an array of (frequency:term) pairs in the form of (freq0:term0, freq1:term1, ..., freqN:termN), where each pair is encoded as a single string with the frequency and term separated by a ':' delimiter.
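
A Java counterpart of the XQuery example above might look as follows; this is only a sketch, using the PatternAnalyzer.EXTENDED_ANALYZER constant referenced by the XQuery example and an inline sample string in place of the Shakespeare document.

 import org.apache.lucene.index.memory.AnalyzerUtil;
 import org.apache.lucene.index.memory.PatternAnalyzer;

 public class FrequentTermsExample {
   public static void main(String[] args) {
     String text = "the quick brown fox jumped over the lazy dog, the fox";

     // top 10 distinct terms, most frequent first
     String[] pairs = AnalyzerUtil.getMostFrequentTerms(
         PatternAnalyzer.EXTENDED_ANALYZER, text, 10);

     for (String pair : pairs) {
       int i = pair.indexOf(':');
       String freq = pair.substring(0, i);   // frequency before the ':' delimiter
       String term = pair.substring(i + 1);  // term after the ':' delimiter
       System.out.println(term + " occurs " + freq + " times");
     }
   }
 }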

getParagraphs

public static String[] getParagraphs(String text, int limit)
Returns at most the first N paragraphs of the given text. Delimiting characters are excluded from the results. Each returned paragraph is whitespace-trimmed via String.trim(), potentially an empty string.

Parameters:
text - the text to tokenize into paragraphs
limit - the maximum number of paragraphs to return; zero indicates "as many as possible"

Returns: the first N paragraphs
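
For example, one might take only the leading paragraphs of a long document as a cheap summary; a minimal sketch (variable names and sample text are arbitrary):

 import org.apache.lucene.index.memory.AnalyzerUtil;

 public class ParagraphExample {
   public static void main(String[] args) {
     String text = "First paragraph.\n\nSecond paragraph.\n\nThird paragraph.";

     // keep at most the first two paragraphs
     String[] paragraphs = AnalyzerUtil.getParagraphs(text, 2);
     for (String p : paragraphs) {
       System.out.println("[" + p + "]");   // each paragraph is already trim()'ed
     }
   }
 }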

getPorterStemmerAnalyzer

public static Analyzer getPorterStemmerAnalyzer(Analyzer child)
Returns an English stemming analyzer that stems tokens from the underlying child analyzer according to the Porter stemming algorithm. The child analyzer must deliver tokens in lower case for the stemmer to work properly.

Background: Stemming reduces token terms to their linguistic root form, e.g. it reduces "fishing" and "fishes" to "fish", "family" and "families" to "famili", and "complete" and "completion" to "complet". Note that the root form is not necessarily a meaningful word in itself, and that this is not a bug but rather a feature, if you lean back and think about fuzzy word matching for a bit.

See the Lucene contrib packages for stemmers (and stop words) for German, Russian and many more languages.

Parameters:
child - the underlying child analyzer

Returns: an analyzer wrapper
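
A sketch of inspecting the stemmed output follows; it assumes the 2.x-era Token.termText() accessor and uses StandardAnalyzer as the child because it lowercases its tokens, as the stemmer requires.

 import java.io.StringReader;
 import org.apache.lucene.analysis.Analyzer;
 import org.apache.lucene.analysis.Token;
 import org.apache.lucene.analysis.TokenStream;
 import org.apache.lucene.analysis.standard.StandardAnalyzer;
 import org.apache.lucene.index.memory.AnalyzerUtil;

 public class StemmerExample {
   public static void main(String[] args) throws Exception {
     // StandardAnalyzer lowercases, so its output is suitable input for the stemmer
     Analyzer stemming = AnalyzerUtil.getPorterStemmerAnalyzer(new StandardAnalyzer());

     TokenStream stream = stemming.tokenStream("content",
         new StringReader("fishing fishes families completion"));
     for (Token t = stream.next(); t != null; t = stream.next()) {
       System.out.println(t.termText());   // e.g. "fish", "fish", "famili", "complet"
     }
   }
 }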

getSentences

public static String[] getSentences(String text, int limit)
Returns at most the first N sentences of the given text. Delimiting characters are excluded from the results. Each returned sentence is whitespace-trimmed via String.trim(), potentially an empty string.

Parameters:
text - the text to tokenize into sentences
limit - the maximum number of sentences to return; zero indicates "as many as possible"

Returns: the first N sentences
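
For example, the first few sentences can serve as a crude abstract of a document; a minimal sketch (sample text and names are arbitrary):

 import org.apache.lucene.index.memory.AnalyzerUtil;

 public class SentenceExample {
   public static void main(String[] args) {
     String text = "Lucene is a search library. It is written in Java. "
         + "This class lives in the contrib memory package. More text follows.";

     // take at most the first three sentences as a crude abstract
     String[] sentences = AnalyzerUtil.getSentences(text, 3);
     StringBuilder summary = new StringBuilder();
     for (String s : sentences) summary.append(s).append(' ');
     System.out.println(summary.toString().trim());
   }
 }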

getSynonymAnalyzer

public static Analyzer getSynonymAnalyzer(Analyzer child, SynonymMap synonyms, int maxSynonyms)
Returns an analyzer wrapper that wraps the underlying child analyzer's token stream into a SynonymTokenFilter.

Parameters:
child - the underlying child analyzer
synonyms - the map used to extract synonyms for terms
maxSynonyms - the maximum number of synonym tokens to return per underlying token word (a value of Integer.MAX_VALUE indicates unlimited)

Returns: a new analyzer
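
A sketch of synonym expansion at indexing time, here against an in-memory MemoryIndex from the same package; it assumes a SynonymMap loaded from a WordNet prolog database via the InputStream constructor, and the "/path/to/wn_s.pl" path, field name, and sample text are placeholders.

 import java.io.FileInputStream;
 import org.apache.lucene.analysis.Analyzer;
 import org.apache.lucene.analysis.standard.StandardAnalyzer;
 import org.apache.lucene.index.memory.AnalyzerUtil;
 import org.apache.lucene.index.memory.MemoryIndex;
 import org.apache.lucene.index.memory.SynonymMap;
 import org.apache.lucene.queryParser.QueryParser;

 public class SynonymExample {
   public static void main(String[] args) throws Exception {
     // load a WordNet synonym database (placeholder path)
     SynonymMap synonyms = new SynonymMap(new FileInputStream("/path/to/wn_s.pl"));

     // inject up to 5 synonyms per token on top of a StandardAnalyzer
     Analyzer expanding = AnalyzerUtil.getSynonymAnalyzer(
         new StandardAnalyzer(), synonyms, 5);

     // index a single document in memory with synonym expansion applied
     MemoryIndex index = new MemoryIndex();
     index.addField("content", "fast brown fox", expanding);

     // the query matches if "quick" is a WordNet synonym of an indexed term
     float score = index.search(
         new QueryParser("content", new StandardAnalyzer()).parse("quick"));
     System.out.println("score: " + score);
   }
 }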

getTokenCachingAnalyzer

public static Analyzer getTokenCachingAnalyzer(Analyzer child)
Returns an analyzer wrapper that caches all tokens generated by the underlying child analyzer's token streams, and delivers those cached tokens on subsequent calls to tokenStream(String fieldName, Reader reader) if the fieldName has been seen before, altogether ignoring the Reader parameter on cache lookup.

If Analyzer / TokenFilter chains are expensive in terms of I/O or CPU, such caching can help improve performance if the same document is added to multiple Lucene indexes, because the text analysis phase need not be performed more than once.

Caveats: Because all tokens are cached, analyzing very large documents can consume substantial memory. Also, since the Reader parameter is ignored on cache lookup, the same caching analyzer instance should not be reused for a different document that contains a previously seen field name.

Parameters:
child - the underlying child analyzer

Returns: a new analyzer
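
A sketch of the multi-index scenario described above, assuming the Lucene 2.x IndexWriter and Field APIs; the RAMDirectory destinations, field name, and sample text are arbitrary.

 import org.apache.lucene.analysis.Analyzer;
 import org.apache.lucene.analysis.standard.StandardAnalyzer;
 import org.apache.lucene.document.Document;
 import org.apache.lucene.document.Field;
 import org.apache.lucene.index.IndexWriter;
 import org.apache.lucene.index.memory.AnalyzerUtil;
 import org.apache.lucene.store.RAMDirectory;

 public class CachingExample {
   public static void main(String[] args) throws Exception {
     // analysis runs once; the cached tokens are replayed for the second index
     Analyzer caching = AnalyzerUtil.getTokenCachingAnalyzer(new StandardAnalyzer());

     Document doc = new Document();
     doc.add(new Field("content", "some expensive-to-analyze text",
         Field.Store.YES, Field.Index.TOKENIZED));

     IndexWriter writerA = new IndexWriter(new RAMDirectory(), caching, true);
     IndexWriter writerB = new IndexWriter(new RAMDirectory(), caching, true);
     writerA.addDocument(doc);   // the child analyzer does the real work here
     writerB.addDocument(doc);   // cached tokens for field "content" are reused
     writerA.close();
     writerB.close();
   }
 }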

Copyright © 2000-2007 Apache Software Foundation. All Rights Reserved.