
RDMA/umem: Move to allocate SG table from pages
author     Maor Gottlieb <maorg@nvidia.com>
           Sun, 4 Oct 2020 15:43:40 +0000 (18:43 +0300)
committer  Jason Gunthorpe <jgg@nvidia.com>
           Mon, 5 Oct 2020 23:45:45 +0000 (20:45 -0300)
commit     0c16d9635e3a51377e5815b9f8e14f497a4dbb42
tree       e455c8f46c4a50a4b078c79b6184a88ff62b13ad
parent     07da1223ec939982497db3caccd6215b55acc35c
RDMA/umem: Move to allocate SG table from pages

Remove the implementation of ib_umem_add_sg_table() and instead call
__sg_alloc_table_from_pages(), which already has the logic to merge
contiguous pages.
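
As an illustration (not the literal hunk from this commit), the pinning loop
in ib_umem_get() can hand each batch of pinned pages straight to
__sg_alloc_table_from_pages(), assuming the signature added by the parent
commit 07da1223ec93, which appends to an existing table and returns the last
scatterlist entry; the names and error handling below are simplified:

	struct scatterlist *sg = NULL;

	while (npages) {
		cur_npages = min_t(unsigned long, npages,
				   PAGE_SIZE / sizeof(struct page *));
		pinned = pin_user_pages_fast(cur_base, cur_npages,
					     FOLL_WRITE | FOLL_LONGTERM,
					     page_list);
		if (pinned < 0)
			goto out;

		cur_base += pinned * PAGE_SIZE;
		npages -= pinned;

		/*
		 * Append the pinned pages; physically contiguous pages are
		 * merged into a single entry, capped at the device's max
		 * segment size, so the table only grows per contiguous range.
		 */
		sg = __sg_alloc_table_from_pages(&umem->sg_head, page_list,
						 pinned, 0,
						 pinned << PAGE_SHIFT,
						 ib_dma_max_seg_size(device),
						 sg, npages, GFP_KERNEL);
		if (IS_ERR(sg)) {
			unpin_user_pages_dirty_lock(page_list, pinned, 0);
			goto out;
		}
	}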

Besides removing duplicated functionality, this reduces the memory
consumption of the SG table significantly. Prior to this patch, the SG
table was allocated in advance without taking contiguous pages into
consideration.

On a system using 2MB huge pages, without this change the SG table would
contain 512x more SG entries than necessary.
E.g. for a 100GB memory registration:

        Number of entries      Size
Before           26214400   600.0MB
After               51200     1.2MB
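
The sizes in the table appear to assume roughly 24 bytes per scatterlist
entry. A back-of-the-envelope check of those numbers (a standalone sketch,
not part of the patch):

	#include <stdio.h>

	int main(void)
	{
		unsigned long long reg = 100ULL << 30;          /* 100GB registration */
		unsigned long long before = reg / (4ULL << 10); /* one entry per 4KB page */
		unsigned long long after = reg / (2ULL << 20);  /* one entry per 2MB huge page */

		printf("before: %llu entries, ~%.1fMB\n", before, before * 24.0 / (1 << 20));
		printf("after: %llu entries, ~%.1fMB\n", after, after * 24.0 / (1 << 20));
		return 0;
	}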

Link: https://lore.kernel.org/r/20201004154340.1080481-5-leon@kernel.org
Signed-off-by: Maor Gottlieb <maorg@nvidia.com>
Signed-off-by: Leon Romanovsky <leonro@nvidia.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
drivers/infiniband/core/umem.c