getBestFragment
public final String getBestFragment(Analyzer analyzer,
String fieldName,
String text)
throws IOException
Parameters:
    analyzer - the analyzer that will be used to split text into chunks
    fieldName - name of the field used to influence the analyzer's tokenization policy
    text - text to highlight terms in
Returns:
    highlighted text fragment or null if no terms found
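A minimal usage sketch of this convenience overload, assuming the Lucene 1.4-era contrib highlighter classes; the field name "contents" and the query string are illustrative, not taken from this document.

```java
import org.apache.lucene.analysis.Analyzer;
import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.queryParser.QueryParser;
import org.apache.lucene.search.Query;
import org.apache.lucene.search.highlight.Highlighter;
import org.apache.lucene.search.highlight.QueryScorer;

public class BestFragmentExample {
    public static void main(String[] args) throws Exception {
        Analyzer analyzer = new StandardAnalyzer();
        // Parse the user query; the scorer uses its terms to rate fragments.
        Query query = QueryParser.parse("lucene", "contents", analyzer);
        Highlighter highlighter = new Highlighter(new QueryScorer(query));

        String text = "Apache Lucene is a high-performance search library.";
        // Returns the single highest-scoring fragment, or null if no terms matched.
        String fragment = highlighter.getBestFragment(analyzer, "contents", text);
        System.out.println(fragment);
    }
}
```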
getBestFragment
public final String getBestFragment(TokenStream tokenStream,
String text)
throws IOException
Highlights chosen terms in a text, extracting the most relevant section.
The document text is analysed in chunks to record hit statistics
across the document. After accumulating stats, the fragment with the highest score
is returned
Parameters:
    tokenStream - a stream of tokens identified in the text parameter, including offset
      information. This is typically produced by an analyzer re-parsing a document's
      text. Some work may be done on retrieving TokenStreams more efficiently by adding
      support for storing original text position data in the Lucene index, but this
      support is not currently available (as of Lucene 1.4 rc2).
    text - text to highlight terms in
Returns:
    highlighted text fragment or null if no terms found
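When you already have (or want control over) the token stream, you can build it yourself and call this overload directly. A sketch under the same assumptions as above; the "contents" field name is illustrative.

```java
import java.io.IOException;
import java.io.StringReader;

import org.apache.lucene.analysis.Analyzer;
import org.apache.lucene.analysis.TokenStream;
import org.apache.lucene.search.highlight.Highlighter;

public class TokenStreamFragmentExample {
    // Re-analyse the stored text into a TokenStream and highlight against it;
    // equivalent to the Analyzer convenience overload, but the caller owns the stream.
    static String bestFragment(Highlighter highlighter, Analyzer analyzer, String text)
            throws IOException {
        TokenStream tokenStream = analyzer.tokenStream("contents", new StringReader(text));
        return highlighter.getBestFragment(tokenStream, text);
    }
}
```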
getBestFragments
public final String[] getBestFragments(Analyzer analyzer,
String fieldName,
String text,
int maxNumFragments)
throws IOException
Parameters:
    analyzer - the analyzer that will be used to split text into chunks
    fieldName - the name of the field being highlighted (used by the analyzer)
    text - text to highlight terms in
    maxNumFragments - the maximum number of fragments
Returns:
    highlighted text fragments (between 0 and maxNumFragments fragments)
getBestFragments
public final String[] getBestFragments(Analyzer analyzer,
String text,
int maxNumFragments)
throws IOException
Deprecated. This method incorrectly hardcodes the choice of field name. Use the
method of the same name that takes a field name.

Parameters:
    analyzer - the analyzer that will be used to split text into chunks
    text - text to highlight terms in
    maxNumFragments - the maximum number of fragments
Returns:
    highlighted text fragments (between 0 and maxNumFragments fragments)
getBestFragments
public final String[] getBestFragments(TokenStream tokenStream,
String text,
int maxNumFragments)
throws IOException
Highlights chosen terms in a text, extracting the most relevant sections.
The document text is analysed in chunks to record hit statistics
across the document. After accumulating stats, the fragments with the highest scores
are returned as an array of strings in order of score (contiguous fragments are merged into
one in their original order to improve readability).
Parameters:
    text - text to highlight terms in
    maxNumFragments - the maximum number of fragments
Returns:
    highlighted text fragments (between 0 and maxNumFragments fragments)
getBestFragments
public final String getBestFragments(TokenStream tokenStream,
String text,
int maxNumFragments,
String separator)
throws IOException
Highlights terms in the text, extracting the most relevant sections
and concatenating the chosen fragments with a separator (typically "...").
The document text is analysed in chunks to record hit statistics
across the document. After accumulating stats, the fragments with the highest scores
are returned in order as "separator" delimited strings.
Parameters:
    text - text to highlight terms in
    maxNumFragments - the maximum number of fragments
    separator - the separator used to intersperse the document fragments (typically "...")
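This overload is convenient for building a single summary string. A sketch, assuming an already-constructed Highlighter and Analyzer as in the earlier examples; field name and fragment count are illustrative.

```java
import java.io.IOException;
import java.io.StringReader;

import org.apache.lucene.analysis.Analyzer;
import org.apache.lucene.analysis.TokenStream;
import org.apache.lucene.search.highlight.Highlighter;

public class SummaryExample {
    // Join up to three top-scoring fragments with "..." into one summary string.
    static String summarize(Highlighter highlighter, Analyzer analyzer, String text)
            throws IOException {
        TokenStream tokenStream = analyzer.tokenStream("contents", new StringReader(text));
        return highlighter.getBestFragments(tokenStream, text, 3, "...");
    }
}
```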
getBestTextFragments
public final TextFragment[] getBestTextFragments(TokenStream tokenStream,
String text,
boolean mergeContiguousFragments,
int maxNumFragments)
throws IOException
Low level api to get the most relevant (formatted) sections of the document.
This method has been made public to allow visibility of score information held in TextFragment objects.
Thanks to Jason Calabrese for help in redefining the interface.
Parameters:
    tokenStream -
    text -
    mergeContiguousFragments -
    maxNumFragments -
getEncoder
public Encoder getEncoder()
getFragmentScorer
public Scorer getFragmentScorer()
Returns:
    Object used to score each text fragment
getMaxDocBytesToAnalyze
public int getMaxDocBytesToAnalyze()
Returns:
    the maximum number of bytes to be tokenized per doc
getTextFragmenter
public Fragmenter getTextFragmenter()
setEncoder
public void setEncoder(Encoder encoder)
setFragmentScorer
public void setFragmentScorer(Scorer scorer)
setMaxDocBytesToAnalyze
public void setMaxDocBytesToAnalyze(int byteCount)
Parameters:
    byteCount - the maximum number of bytes to be tokenized per doc
      (this can improve performance with large documents)
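The setters above are typically called once, right after constructing the Highlighter. A configuration sketch; the fragment size of 50 characters and the 50 KB analysis cap are illustrative values, not defaults stated in this document.

```java
import org.apache.lucene.search.Query;
import org.apache.lucene.search.highlight.Highlighter;
import org.apache.lucene.search.highlight.QueryScorer;
import org.apache.lucene.search.highlight.SimpleFragmenter;

public class HighlighterConfigExample {
    static Highlighter configure(Query query) {
        Highlighter highlighter = new Highlighter(new QueryScorer(query));
        // Break the document into roughly 50-character fragments.
        highlighter.setTextFragmenter(new SimpleFragmenter(50));
        // Cap tokenization at 50 KB per doc to bound cost on large documents.
        highlighter.setMaxDocBytesToAnalyze(50 * 1024);
        return highlighter;
    }
}
```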
setTextFragmenter
public void setTextFragmenter(Fragmenter fragmenter)