
Intelligent file system inode cache management

IP.com Disclosure Number: IPCOM000247802D
Publication Date: 2016-Oct-06
Document File: 3 page(s) / 32K

Publishing Venue

The IP.com Prior Art Database

Abstract

In a typical file system the inode cache is common to both file and directory entries. When the cache is full, the inode lookup operation replaces one of the existing entries. The victim selection algorithm does not give preference to directory entries. However, this can hurt inode lookup performance when the number of files is much larger than the number of directories. The following article proposes a method of reserving a portion of the cache for directories, which can improve the overall file access time.




To access any file or directory in a file system, all traditional operating systems perform a pathname translation (lookup) operation in the file system module to reach the inode corresponding to that file or directory. In each iteration, this operation finds the inode corresponding to one component of the file or directory pathname. Operating systems use an inode cache to improve the performance of file access. The inode cache is common to both directory and file inode entries. During the lookup operation, the inode cache is searched for the inode of the component. If the inode is not in the cache, a disk I/O (Input/Output) operation is performed to bring the on-disk inode of the component into the inode cache (insert operation). During the insert operation, if there are no free entries in the inode cache, one of the existing cache entries is selected as the victim (based on a selection algorithm such as LRU (Least Recently Used) or a modified LRU) and replaced with the new entry.
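As a rough illustration of the lookup/insert path described above, the C sketch below models a minimal inode cache with plain LRU replacement. The structures and function names (inode_cache, icache_lookup, pick_victim, read_inode_from_disk) are hypothetical simplifications, not the disclosed implementation; a real file system hashes entries by inode number, handles locking, and reads the actual on-disk inode.

```c
#include <stdbool.h>
#include <stdint.h>

#define ICACHE_SIZE 4   /* deliberately tiny for illustration */

struct icache_entry {
    uint64_t ino;        /* inode number */
    bool     is_dir;     /* directory vs. regular file */
    bool     valid;
    uint64_t last_used;  /* LRU timestamp */
};

struct inode_cache {
    struct icache_entry slots[ICACHE_SIZE];  /* zero-initialize before use */
    uint64_t clock;                          /* monotonically increasing "time" */
};

/* Placeholder for the disk I/O that fetches an on-disk inode on a miss. */
static void read_inode_from_disk(uint64_t ino, struct icache_entry *e)
{
    e->ino = ino;
    e->is_dir = false;   /* a real caller would fill this from the on-disk inode */
    e->valid = true;
}

/* Pick a free slot if one exists, otherwise the least recently used entry. */
static struct icache_entry *pick_victim(struct inode_cache *c)
{
    struct icache_entry *victim = &c->slots[0];
    for (int i = 0; i < ICACHE_SIZE; i++) {
        struct icache_entry *e = &c->slots[i];
        if (!e->valid)
            return e;                       /* free slot: no eviction needed */
        if (e->last_used < victim->last_used)
            victim = e;                     /* older entry becomes the victim */
    }
    return victim;
}

/* Look up one pathname component's inode; on a miss, fetch and insert it. */
struct icache_entry *icache_lookup(struct inode_cache *c, uint64_t ino)
{
    for (int i = 0; i < ICACHE_SIZE; i++) {
        struct icache_entry *e = &c->slots[i];
        if (e->valid && e->ino == ino) {    /* cache hit */
            e->last_used = ++c->clock;
            return e;
        }
    }
    /* Cache miss: evict a victim and replace it with the new inode. */
    struct icache_entry *e = pick_victim(c);
    read_inode_from_disk(ino, e);
    e->last_used = ++c->clock;
    return e;
}
```

Note that pick_victim() treats directory and file entries identically, which is exactly the behaviour the following paragraphs identify as the problem.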

Typically, there are more files than directories in a file system. Current inode cache implementations are based on an LRU or modified LRU algorithm, which gives no preference to directory entries. If there are recurring inode cache misses on directories, they lead to more on-disk accesses to fetch the corresponding inodes. The problem becomes more prominent in, for example, backup and archive operations and recursive directory listings ('ls -R' or 'find') on nested directory trees.

Hence, retaining directory inode entries in the inode cache in preference to file inode entries can improve lookup operations.

As there is no separation between file and directory entries, the lookup operation may not be efficient in some of the scenarios mentioned above. To address this problem, the cache entries can be split between directory and file entries (a minimal sketch follows the list below). The method ensures:
a) a certain amount of the inode cache is reserved for directory entries.

b) the file system intelligently determines when to dynamically increase or decrease the reserved portion of the inode cache for directories.
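One hedged way to express points (a) and (b) is sketched below, building on the hypothetical cache from the earlier sketch. Victim selection skips directory entries while the number of cached directories is within the reserved count, and the reservation itself can be raised or lowered at run time. The names (inode_cache_rsv, dir_reserved, set_dir_reservation) and the specific protection policy are assumptions for illustration, not the disclosed algorithm.

```c
/* Extension of the earlier hypothetical cache: reserve part of the cache
 * for directory inodes (point a) and allow the reservation to be tuned
 * dynamically (point b). Reuses ICACHE_SIZE, struct inode_cache,
 * struct icache_entry and pick_victim() from the previous sketch. */

struct inode_cache_rsv {
    struct inode_cache base;
    int dir_reserved;          /* slots reserved for directory inodes */
};

/* Count how many valid directory entries the cache currently holds. */
static int dir_entries(const struct inode_cache_rsv *c)
{
    int n = 0;
    for (int i = 0; i < ICACHE_SIZE; i++)
        if (c->base.slots[i].valid && c->base.slots[i].is_dir)
            n++;
    return n;
}

/* Victim selection that honours the directory reservation: while the cache
 * holds no more directories than the reserved count, directory entries are
 * not eligible for eviction when inserting a file inode. */
static struct icache_entry *pick_victim_rsv(struct inode_cache_rsv *c,
                                            bool inserting_dir)
{
    bool protect_dirs = !inserting_dir && dir_entries(c) <= c->dir_reserved;
    struct icache_entry *victim = NULL;

    for (int i = 0; i < ICACHE_SIZE; i++) {
        struct icache_entry *e = &c->base.slots[i];
        if (!e->valid)
            return e;                          /* free slot */
        if (protect_dirs && e->is_dir)
            continue;                          /* reserved: skip directories */
        if (!victim || e->last_used < victim->last_used)
            victim = e;
    }
    /* If every eligible slot was a protected directory, fall back to plain
     * LRU so the insert can still succeed. */
    return victim ? victim : pick_victim(&c->base);
}

/* Point (b): the file system can grow or shrink the reservation, e.g. based
 * on the observed ratio of directory to file cache misses. */
void set_dir_reservation(struct inode_cache_rsv *c, int slots)
{
    if (slots < 0)
        slots = 0;
    if (slots > ICACHE_SIZE)
        slots = ICACHE_SIZE;
    c->dir_reserved = slots;
}
```

How the reservation is sized and adjusted is the subject of the steps below.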

This article proposes a method which involves:


I) Dynamically adjusting the inode cache directory threshold

Step 1: Determine a directory weight (w)...