RDMA/mthca: Use correct sizing on buffers holding page DMA addresses
author Shiraz Saleem <shiraz.saleem@intel.com>
Thu, 28 Mar 2019 16:49:45 +0000 (11:49 -0500)
committer Jason Gunthorpe <jgg@mellanox.com>
Thu, 28 Mar 2019 17:13:27 +0000 (14:13 -0300)
commit 41d34865b24c6a0b594b0a69bfe9ea56dff5abcd
tree f8c21b06f980dd2585c4c13d6cb3bb668a3a0fea
parent 5f818d676ac455bbc812ffaaf5bf780be5465114

The buffer that holds the page DMA addresses is sized off umem->nmap.
This can potentially cause out-of-bounds accesses on the PBL array
when iterating the umem DMA-mapped SGL, because when umem pages are
combined, umem->nmap can be much lower than the number of system pages
in umem. For example, four contiguous 4K pages coalesced into a single
SGL entry leave umem->nmap at 1 while the PBL still needs four page
addresses.

Use ib_umem_num_pages() to size this buffer.
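
For illustration, a minimal sketch of the pattern behind the fix
(a hypothetical helper, not the literal mthca_provider.c change; the
alloc_pbl name and surrounding logic are invented here):

	#include <linux/slab.h>
	#include <rdma/ib_umem.h>

	/*
	 * Hypothetical helper showing the sizing fix: umem->nmap counts
	 * DMA-mapped SGL entries, which can be fewer than the system
	 * pages backing the umem once contiguous pages are combined, so
	 * a PBL array sized from it can be overrun while walking the SGL.
	 */
	static u64 *alloc_pbl(struct ib_umem *umem, int *npages)
	{
		u64 *pages;
		int n;

		/* Before the fix: n = umem->nmap; may under-count pages. */
		n = ib_umem_num_pages(umem);

		pages = kmalloc_array(n, sizeof(*pages), GFP_KERNEL);
		if (!pages)
			return NULL;

		*npages = n;
		return pages;
	}

The actual fix simply switches the count used when sizing the PBL in
mthca_provider.c from umem->nmap to ib_umem_num_pages(mr->umem).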

Signed-off-by: Shiraz Saleem <shiraz.saleem@intel.com>
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
drivers/infiniband/hw/mthca/mthca_provider.c