cifs: don't cap ra_pages at the same level as default_backing_dev_info
author Jeff Layton <jlayton@redhat.com>
Tue, 1 May 2012 21:41:49 +0000 (17:41 -0400)
committer Steve French <sfrench@us.ibm.com>
Wed, 2 May 2012 03:27:54 +0000 (22:27 -0500)
commit 8f71465c19ffefbfd0da3c1f5dc172b4bce05e93
tree c27bb25b91b148e5977ea29132dc98ccea89f725
parent 156d17905e783d057061b3b56a9b3befec064e47
cifs: don't cap ra_pages at the same level as default_backing_dev_info

While testing, I've found that even when we are able to negotiate a
much larger rsize with the server, on-the-wire reads often end up being
limited to 128k because ra_pages is capped at that level.

Lifting this restriction gave almost a twofold increase in sequential
read performance on my craptactular KVM test rig with a 1M rsize.

I think this is safe since the actual ra_pages value that the VM
requests is run through max_sane_readahead() prior to submitting the
I/O. Under memory pressure we should end up with large readahead
requests being suppressed anyway.

Signed-off-by: Jeff Layton <jlayton@redhat.com>
Signed-off-by: Steve French <sfrench@us.ibm.com>
fs/cifs/connect.c