I have some C++ code where I need to implement cache replacement using the LRU technique. So far I know two methods to implement LRU cache replacement:
1. Keep a timestamp of the last access for each cached item, and at replacement time compare timestamps to find the eviction candidate.
2. Keep the cached items in a stack and move an item to the top whenever it is accessed, so the bottom always holds the LRU candidate.

So, which of these is better to use in production code? Are there any other, better methods?
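For concreteness, the first (timestamp) approach might be sketched as below; the class and member names are hypothetical, and the example uses a logical clock instead of wall-clock time. Note that eviction requires an O(n) scan over all entries, which is the main drawback of this method:

```cpp
#include <cassert>
#include <cstddef>
#include <string>
#include <unordered_map>

// Hypothetical sketch of the timestamp approach: every access records a
// logical clock value, and eviction scans for the smallest timestamp.
class TimestampLruCache {
public:
    explicit TimestampLruCache(std::size_t maxEntries) : mMax(maxEntries) {}

    void put(const std::string& key, int value) {
        if (mMap.size() >= mMax && mMap.find(key) == mMap.end())
            evictOldest();
        mMap[key] = Slot{value, ++mClock};
    }

    // Returns a pointer to the value, or nullptr on a miss.
    int* get(const std::string& key) {
        auto it = mMap.find(key);
        if (it == mMap.end())
            return nullptr;
        it->second.lastUsed = ++mClock;  // touch on access
        return &it->second.value;
    }

private:
    struct Slot { int value; unsigned long lastUsed; };

    void evictOldest() {
        // O(n) scan for the least recently used entry: this linear pass
        // is why the timestamp method scales poorly for large caches.
        auto victim = mMap.begin();
        for (auto it = mMap.begin(); it != mMap.end(); ++it)
            if (it->second.lastUsed < victim->second.lastUsed)
                victim = it;
        mMap.erase(victim);
    }

    std::size_t mMax;
    unsigned long mClock = 0;
    std::unordered_map<std::string, Slot> mMap;
};
```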
Recently I implemented an LRU cache using a linked list spread over a hash map.
/// Typedef for URL/Entry pair
typedef std::pair< std::string, Entry > EntryPair;
/// Typedef for Cache list
typedef std::list< EntryPair > CacheList;
/// Typedef for URL-indexed map into the CacheList
typedef boost::unordered_map< std::string, CacheList::iterator > CacheMap;
/// Cache LRU list
CacheList mCacheList;
/// Cache map into the list
CacheMap mCacheMap;
It has the advantage of being O(1) for all important operations.
Insertion algorithm:
// create new entry
Entry iEntry( ... );
// push it to the front;
mCacheList.push_front( std::make_pair( aURL, iEntry ) );
// add it to the cache map
mCacheMap[ aURL ] = mCacheList.begin();
// increase count of entries
mEntries++;
// check if it's time to remove the last element
if ( mEntries > mMaxEntries )
{
    // erase from the map the last cache list element
    mCacheMap.erase( mCacheList.back().first );
    // erase it from the list
    mCacheList.pop_back();
    // decrease count
    mEntries--;
}
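The O(1) lookup relies on `std::list::splice`, which relinks the accessed node to the front of the list without invalidating the iterator stored in the map. A self-contained sketch of the whole structure, assuming `std::unordered_map` in place of `boost::unordered_map` and an illustrative `Entry` type:

```cpp
#include <cassert>
#include <cstddef>
#include <list>
#include <string>
#include <unordered_map>
#include <utility>

// Illustrative payload type; the original code's Entry is not shown.
struct Entry { int value; };

typedef std::pair<std::string, Entry> EntryPair;
typedef std::list<EntryPair> CacheList;
typedef std::unordered_map<std::string, CacheList::iterator> CacheMap;

struct LruCache {
    CacheList mCacheList;
    CacheMap  mCacheMap;
    std::size_t mMaxEntries;

    explicit LruCache(std::size_t maxEntries) : mMaxEntries(maxEntries) {}

    // Mirrors the insertion algorithm above, using list::size()
    // instead of a separate mEntries counter.
    void insert(const std::string& aURL, const Entry& iEntry) {
        mCacheList.push_front(std::make_pair(aURL, iEntry));
        mCacheMap[aURL] = mCacheList.begin();
        if (mCacheList.size() > mMaxEntries) {
            mCacheMap.erase(mCacheList.back().first);
            mCacheList.pop_back();
        }
    }

    // O(1) lookup that also refreshes recency: splice relinks the node
    // to the front, so the iterator in the map stays valid.
    Entry* find(const std::string& aURL) {
        CacheMap::iterator it = mCacheMap.find(aURL);
        if (it == mCacheMap.end())
            return nullptr;  // cache miss
        mCacheList.splice(mCacheList.begin(), mCacheList, it->second);
        return &it->second->second;  // the Entry half of the pair
    }
};
```

A lookup thus costs one hash-map probe plus one constant-time relink, and eviction is a constant-time `pop_back`, which is what makes all the important operations O(1).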