Class LRUQueryCache

    • Field Detail

      • maxSize

        private final int maxSize
      • maxRamBytesUsed

        private final long maxRamBytesUsed
      • leavesToCache

        private final java.util.function.Predicate<LeafReaderContext> leavesToCache
      • uniqueQueries

        private final java.util.Map<Query,​Query> uniqueQueries
      • mostRecentlyUsedQueries

        private final java.util.Set<Query> mostRecentlyUsedQueries
      • lock

        private final java.util.concurrent.locks.ReentrantLock lock
      • skipCacheFactor

        private final float skipCacheFactor
      • ramBytesUsed

        private volatile long ramBytesUsed
      • hitCount

        private volatile long hitCount
      • missCount

        private volatile long missCount
      • cacheCount

        private volatile long cacheCount
      • cacheSize

        private volatile long cacheSize
    • Constructor Detail

      • LRUQueryCache

        public LRUQueryCache​(int maxSize,
                             long maxRamBytesUsed,
                             java.util.function.Predicate<LeafReaderContext> leavesToCache,
                             float skipCacheFactor)
        Expert: Create a new instance that will cache at most maxSize queries with at most maxRamBytesUsed bytes of memory, only on leaves that satisfy leavesToCache.

        Additionally, clauses whose cost is more than skipCacheFactor times the cost of the top-level query will not be cached, in order to avoid slowing down queries too much.

      • LRUQueryCache

        public LRUQueryCache​(int maxSize,
                             long maxRamBytesUsed)
        Create a new instance that will cache at most maxSize queries with at most maxRamBytesUsed bytes of memory. Queries will only be cached on leaves that have more than 10k documents and more than half of the average number of documents per leaf of the index. This should guarantee that all leaves from the upper tier will be cached. Only clauses whose cost is at most 100x the cost of the top-level query will be cached, in order to not hurt latency too much because of caching.
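The least-recently-used eviction semantics behind the maxSize bound can be sketched with a plain LinkedHashMap in access order. This is a simplified illustration only (hypothetical class name LruSketch); the real LRUQueryCache additionally tracks RAM usage, per-leaf DocIdSets, and uses an explicit lock:

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Minimal LRU sketch: access-order LinkedHashMap that evicts the
// least-recently-used entry once size() exceeds maxSize.
class LruSketch<K, V> extends LinkedHashMap<K, V> {
    private final int maxSize;

    LruSketch(int maxSize) {
        super(16, 0.75f, true); // accessOrder=true: get() moves an entry to the back
        this.maxSize = maxSize;
    }

    @Override
    protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
        return size() > maxSize; // evict the least-recently-used entry
    }

    public static void main(String[] args) {
        LruSketch<String, Integer> cache = new LruSketch<>(2);
        cache.put("a", 1);
        cache.put("b", 2);
        cache.get("a");    // touch "a", so "b" becomes the eldest entry
        cache.put("c", 3); // exceeds maxSize=2, evicting "b"
        System.out.println(cache.keySet()); // prints [a, c]
    }
}
```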
    • Method Detail

      • onHit

        protected void onHit​(java.lang.Object readerCoreKey,
                             Query query)
        Expert: callback when there is a cache hit on a given query. Implementing this method is typically useful in order to compute more fine-grained statistics about the query cache.
        See Also:
        onMiss(java.lang.Object, org.apache.lucene.search.Query)
      • onQueryCache

        protected void onQueryCache​(Query query,
                                    long ramBytesUsed)
        Expert: callback when a query is added to this cache. Implementing this method is typically useful in order to compute more fine-grained statistics about the query cache.
        See Also:
        onQueryEviction(org.apache.lucene.search.Query, long)
      • onDocIdSetCache

        protected void onDocIdSetCache​(java.lang.Object readerCoreKey,
                                       long ramBytesUsed)
        Expert: callback when a DocIdSet is added to this cache. Implementing this method is typically useful in order to compute more fine-grained statistics about the query cache.
        See Also:
        onDocIdSetEviction(java.lang.Object, int, long)
      • onDocIdSetEviction

        protected void onDocIdSetEviction​(java.lang.Object readerCoreKey,
                                          int numEntries,
                                          long sumRamBytesUsed)
        Expert: callback when one or more DocIdSets are removed from this cache.
        See Also:
        onDocIdSetCache(java.lang.Object, long)
      • onClear

        protected void onClear()
        Expert: callback when the cache is completely cleared.
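The protected callbacks above (onHit, onQueryCache, onDocIdSetCache, onDocIdSetEviction, onClear) follow a common hook pattern: the cache invokes them at well-defined points, and subclasses override them to gather finer-grained statistics. A generic, self-contained sketch of that pattern (hypothetical class names HookedCache and CountingCache, not the Lucene API):

```java
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.atomic.LongAdder;

// Hypothetical sketch of the protected-hook pattern: the base cache calls
// onHit/onMiss at fixed points; subclasses override them to collect stats.
class HookedCache<K, V> {
    private final Map<K, V> map = new HashMap<>();

    V get(K key) {
        V v = map.get(key);
        if (v != null) onHit(key); else onMiss(key);
        return v;
    }

    void put(K key, V value) { map.put(key, value); }

    protected void onHit(K key) {}   // override to record cache hits
    protected void onMiss(K key) {}  // override to record cache misses
}

class CountingCache<K, V> extends HookedCache<K, V> {
    final LongAdder hits = new LongAdder();
    final LongAdder misses = new LongAdder();

    @Override protected void onHit(K key) { hits.increment(); }
    @Override protected void onMiss(K key) { misses.increment(); }
}
```

The hooks are deliberately no-ops in the base class, so overriding is optional and costs nothing when unused.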
      • requiresEviction

        boolean requiresEviction()
        Whether evictions are required.
      • evictIfNecessary

        private void evictIfNecessary()
      • clearCoreCacheKey

        public void clearCoreCacheKey​(java.lang.Object coreKey)
        Remove all cache entries for the given core cache key.
      • clearQuery

        public void clearQuery​(Query query)
        Remove all cache entries for the given query.
      • onEviction

        private void onEviction​(Query singleton)
      • clear

        public void clear()
        Clear the content of this cache.
      • getRamBytesUsed

        private static long getRamBytesUsed​(Query query)
      • assertConsistent

        void assertConsistent()
      • cachedQueries

        java.util.List<Query> cachedQueries()
      • doCache

        public Weight doCache​(Weight weight,
                              QueryCachingPolicy policy)
        Description copied from interface: QueryCache
        Return a wrapper around the provided weight that will cache matching docs per-segment according to the given policy. NOTE: The returned weight will only be equivalent if scores are not needed.
        Specified by:
        doCache in interface QueryCache
        See Also:
        Collector.scoreMode()
      • ramBytesUsed

        public long ramBytesUsed()
        Description copied from interface: Accountable
        Return the memory usage of this object in bytes. Negative values are illegal.
        Specified by:
        ramBytesUsed in interface Accountable
      • getChildResources

        public java.util.Collection<Accountable> getChildResources()
        Description copied from interface: Accountable
        Returns nested resources of this class. The result should be a point-in-time snapshot (to avoid race conditions).
        Specified by:
        getChildResources in interface Accountable
        See Also:
        Accountables
      • cacheIntoRoaringDocIdSet

        private static LRUQueryCache.CacheAndCount cacheIntoRoaringDocIdSet​(BulkScorer scorer,
                                                                            int maxDoc)
                                                                     throws java.io.IOException
        Throws:
        java.io.IOException
      • getTotalCount

        public final long getTotalCount()
        Return the total number of times that a Query has been looked up in this QueryCache. Note that this number is incremented once per segment so running a cached query only once will increment this counter by the number of segments that are wrapped by the searcher. Note that by definition, getTotalCount() is the sum of getHitCount() and getMissCount().
        See Also:
        getHitCount(), getMissCount()
      • getHitCount

        public final long getHitCount()
        Of the total number of times that a query has been looked up, return how many times a cached DocIdSet was found and returned.
        See Also:
        getTotalCount(), getMissCount()
      • getMissCount

        public final long getMissCount()
        Of the total number of times that a query has been looked up, return how many times the query was not contained in the cache.
        See Also:
        getTotalCount(), getHitCount()
      • getCacheCount

        public final long getCacheCount()
        Return the total number of cache entries that have been generated and put in the cache. A hit count much higher than the cache count is highly desirable, as the opposite would indicate that the cache spends effort caching queries that are then never reused.
        See Also:
        getCacheSize(), getEvictionCount()
      • getEvictionCount

        public final long getEvictionCount()
        Return the number of cache entries that have been removed from the cache, either in order to stay under the maximum configured size/RAM usage or because a segment has been closed. High numbers of evictions might mean that queries are not reused, or that the caching policy caches too aggressively on NRT segments, which get merged early.
        See Also:
        getCacheCount(), getCacheSize()
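As noted under getTotalCount(), the statistics getters maintain the invariant getTotalCount() == getHitCount() + getMissCount(): the total is derived from the two stored counters rather than tracked separately. A minimal counter sketch of that design (hypothetical class name CacheStats, not the Lucene API):

```java
// Hypothetical sketch mirroring how getTotalCount() is derived from the
// hit and miss counters instead of being stored as a third counter.
class CacheStats {
    private volatile long hitCount;
    private volatile long missCount;

    // Updates are synchronized because volatile alone does not make ++ atomic.
    synchronized void recordHit()  { hitCount++; }
    synchronized void recordMiss() { missCount++; }

    long getHitCount()   { return hitCount; }
    long getMissCount()  { return missCount; }
    long getTotalCount() { return hitCount + missCount; } // derived, never stored
}
```

Deriving the total guarantees the invariant by construction; storing it separately would require keeping three counters consistent under concurrent updates.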