readahead: reduce unnecessary mmap_miss increases
author	Andi Kleen <ak@linux.intel.com>
	Wed, 25 May 2011 00:12:29 +0000 (17:12 -0700)
committer	Linus Torvalds <torvalds@linux-foundation.org>
	Wed, 25 May 2011 15:39:26 +0000 (08:39 -0700)
commit	207d04baa3591a354711e863dd90087fc75873b3
tree	17498d55af5b2a588e7e7111e927a099236ca770
parent	275b12bf5486f6f531111fd3d7dbbf01df427cfe
readahead: reduce unnecessary mmap_miss increases

The original INT_MAX cap on ra->mmap_miss is too large; reduce it to

- avoid unnecessarily dirtying/bouncing the cache line

- restore mmap read-around faster on a changed access pattern (a sketch of
  the capped update follows below)
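
A minimal userspace model of the old and new behaviour, added here only as
an illustration: the struct fields, the MMAP_LOTSAMISS value and the helper
names loosely mirror mm/filemap.c but are not the literal patch.

#include <limits.h>
#include <stdio.h>

#define MMAP_LOTSAMISS	100	/* misses beyond this suppress read-around */

struct file_ra_state {
	unsigned int ra_pages;	/* 0 means readahead is disabled (e.g. tmpfs) */
	unsigned int mmap_miss;	/* cache misses seen on mmap page faults */
};

/* old behaviour: keep counting misses all the way up to INT_MAX */
static void record_miss_old(struct file_ra_state *ra)
{
	if (ra->mmap_miss < INT_MAX)
		ra->mmap_miss++;	/* dirties the shared cache line on every miss */
}

/* new behaviour: stop counting once the cap is reached */
static void record_miss_new(struct file_ra_state *ra)
{
	if (ra->mmap_miss < MMAP_LOTSAMISS * 10)
		ra->mmap_miss++;	/* past the cap this is a read-only check */
}

int main(void)
{
	struct file_ra_state before = { .ra_pages = 32, .mmap_miss = 0 };
	struct file_ra_state after  = { .ra_pages = 32, .mmap_miss = 0 };
	int i;

	for (i = 0; i < 1000000; i++) {	/* a long run of faults that all miss */
		record_miss_old(&before);
		record_miss_new(&after);
	}

	/*
	 * With the cap, mmap_miss saturates at 1000 instead of creeping
	 * toward INT_MAX: past that point the fault path no longer writes
	 * the shared cache line, and since page-cache hits decrement
	 * mmap_miss in the kernel, far fewer hits are needed to fall back
	 * below MMAP_LOTSAMISS and re-enable read-around.
	 */
	printf("old cap: mmap_miss = %u\n", before.mmap_miss);
	printf("new cap: mmap_miss = %u\n", after.mmap_miss);
	return 0;
}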

Background: in the mosbench exim benchmark, which does multi-threaded page
faults on a shared struct file, the ra->mmap_miss updates were found to cause
excessive cache line bouncing on tmpfs.  The ra state updates are needless
for tmpfs because readahead is disabled there entirely
(shmem_backing_dev_info.ra_pages == 0).
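
As a hedged sketch (the exact placement of such a check in mm/filemap.c is
not part of this commit), skipping the bookkeeping for a readahead-disabled
file amounts to bailing out before any ra state is touched:

	/* on the mmap fault path, before any ra bookkeeping */
	if (!ra->ra_pages)
		return;	/* tmpfs: readahead disabled, never write ra state */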

Tested-by: Tim Chen <tim.c.chen@intel.com>
Signed-off-by: Andi Kleen <ak@linux.intel.com>
Signed-off-by: Wu Fengguang <fengguang.wu@intel.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
mm/filemap.c