Bart Van Assche [Wed, 27 Jul 2022 19:34:15 +0000 (12:34 -0700)]
RDMA/srpt: Fix a use-after-free

Change the LIO port members inside struct srpt_port from regular members
into pointers. Allocate the LIO port data structures from inside
srpt_make_tport() and free these from inside srpt_drop_tport(). Keep
struct srpt_device as long as either an RDMA port or a LIO target port is
associated with it. This patch decouples the lifetime of struct srpt_port
(controlled by the RDMA core) and struct srpt_port_id (controlled by LIO).
This patch fixes the following KASAN complaint:

  BUG: KASAN: use-after-free in srpt_enable_tpg+0x31/0x70 [ib_srpt]
  Read of size 8 at addr ffff888141cc34b8 by task check/5093

  Call Trace:
   <TASK>
   show_stack+0x4e/0x53
   dump_stack_lvl+0x51/0x66
   print_address_description.constprop.0.cold+0xea/0x41e
   print_report.cold+0x90/0x205
   kasan_report+0xb9/0xf0
   __asan_load8+0x69/0x90
   srpt_enable_tpg+0x31/0x70 [ib_srpt]
   target_fabric_tpg_base_enable_store+0xe2/0x140 [target_core_mod]
   configfs_write_iter+0x18b/0x210
   new_sync_write+0x1f2/0x2f0
   vfs_write+0x3e3/0x540
   ksys_write+0xbb/0x140
   __x64_sys_write+0x42/0x50
   do_syscall_64+0x34/0x80
   entry_SYSCALL_64_after_hwframe+0x46/0xb0
   </TASK>

Link: https://lore.kernel.org/r/20220727193415.1583860-4-bvanassche@acm.org
Reported-by: Li Zhijian <lizhijian@fujitsu.com>
Tested-by: Li Zhijian <lizhijian@fujitsu.com>
Fixes: a42d985bd5b2 ("ib_srpt: Initial SRP Target merge for v3.3-rc1")
Signed-off-by: Bart Van Assche <bvanassche@acm.org>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
Bart Van Assche [Wed, 27 Jul 2022 19:34:14 +0000 (12:34 -0700)]
RDMA/srpt: Introduce a reference count in struct srpt_device

This will be used to keep struct srpt_device around as long as either the
RDMA port exists or a LIO target port is associated with the struct
srpt_device.
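
As a rough sketch of the pattern, assuming a kref-style counter (the field
and helper names below are illustrative, not taken from the actual driver):

  #include <linux/kernel.h>
  #include <linux/kref.h>
  #include <linux/slab.h>

  struct srpt_device_demo {
          struct kref refcnt;     /* one ref per RDMA port / LIO target port user */
          /* ... RDMA and LIO port state ... */
  };

  static void srpt_sdev_release(struct kref *kref)
  {
          struct srpt_device_demo *sdev =
                  container_of(kref, struct srpt_device_demo, refcnt);

          kfree(sdev);
  }

  /* Each user takes a reference while it needs the device ... */
  static void srpt_sdev_get(struct srpt_device_demo *sdev)
  {
          kref_get(&sdev->refcnt);
  }

  /* ... and drops it when done; the last put frees the structure. */
  static void srpt_sdev_put(struct srpt_device_demo *sdev)
  {
          kref_put(&sdev->refcnt, srpt_sdev_release);
  }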

Link: https://lore.kernel.org/r/20220727193415.1583860-3-bvanassche@acm.org
Signed-off-by: Bart Van Assche <bvanassche@acm.org>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
Bart Van Assche [Wed, 27 Jul 2022 19:34:13 +0000 (12:34 -0700)]
RDMA/srpt: Duplicate port name members

Prepare for decoupling the lifetimes of struct srpt_port and struct
srpt_port_id by duplicating the port name into struct srpt_port.

Link: https://lore.kernel.org/r/20220727193415.1583860-2-bvanassche@acm.org
Signed-off-by: Bart Van Assche <bvanassche@acm.org>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
wangjianli [Sun, 24 Jul 2022 07:44:07 +0000 (15:44 +0800)]
IB/qib: Fix repeated "in" within comments

Delete the redundant word 'in'.

Link: https://lore.kernel.org/r/20220724074407.18552-1-wangjianli@cdjrlc.com
Signed-off-by: wangjianli <wangjianli@cdjrlc.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
Jason Gunthorpe [Wed, 27 Jul 2022 19:09:25 +0000 (16:09 -0300)]
Merge branch 'erdma' into rdma.git for-next

Cheng Xu says

====================
This v14 patch set introduces the Elastic RDMA Adapter (ERDMA) driver,
which was released at the Apsara Conference 2021 by Alibaba. The PR for the
ERDMA userspace provider has already been created [1].

ERDMA enables large-scale RDMA acceleration capability in the Alibaba ECS
environment, initially offered in the g7re instance. It can significantly
improve the efficiency of large-scale distributed computing and
communication, and it scales dynamically with the cluster size of Alibaba
Cloud.

ERDMA is an RDMA networking adapter based on the Alibaba MOC hardware. It
works in the VPC network environment (overlay network) and uses the iWarp
transport protocol. ERDMA supports reliable connection (RC) as well as both
kernel-space and user-space verbs. We already support HPC/AI applications
with libfabric, NoF and some other internal verbs libraries, such as xrdma,
epsl, etc.

For an ECS instance with RDMA enabled, our MOC hardware generates two
kinds of PCI devices: one for ERDMA, and one for the original net device
(virtio-net). They are separate PCI devices.
====================

* branch 'erdma':
  RDMA/erdma: Add driver to kernel build environment
  RDMA/erdma: Add the ABI definitions
  RDMA/erdma: Add the erdma module
  RDMA/erdma: Add connection management (CM) support
  RDMA/erdma: Add verbs implementation
  RDMA/erdma: Add verbs header file
  RDMA/erdma: Add event queue implementation
  RDMA/erdma: Add cmdq implementation
  RDMA/erdma: Add main include file
  RDMA/erdma: Add the hardware related definitions
  RDMA: Add ERDMA to rdma_driver_id definition

Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
Cheng Xu [Wed, 27 Jul 2022 01:49:27 +0000 (09:49 +0800)]
RDMA/erdma: Add driver to kernel build environment

Add erdma to the kernel build environment, and sort the source
order in drivers/infiniband/Kconfig.

Link: https://lore.kernel.org/r/20220727014927.76564-12-chengyou@linux.alibaba.com
Signed-off-by: Cheng Xu <chengyou@linux.alibaba.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
Cheng Xu [Wed, 27 Jul 2022 01:49:26 +0000 (09:49 +0800)]
RDMA/erdma: Add the ABI definitions

Add the erdma ABI definitions, which will be shared between kernel and
userspace. This commit also fixes compile issues reported by lkp.

Link: https://lore.kernel.org/r/20220727014927.76564-11-chengyou@linux.alibaba.com
Reported-by: kernel test robot <lkp@intel.com>
Signed-off-by: Cheng Xu <chengyou@linux.alibaba.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
Cheng Xu [Wed, 27 Jul 2022 01:49:25 +0000 (09:49 +0800)]
RDMA/erdma: Add the erdma module

Add the main erdma module, which provides the interface to the infiniband
subsystem.

This commit includes a modification from Christophe that uses the bitmap
API to allocate bitmaps instead of hand-writing them. The commit also fixes
warnings reported by static checkers.

Link: https://lore.kernel.org/r/20220727014927.76564-10-chengyou@linux.alibaba.com
Signed-off-by: Christophe JAILLET <christophe.jaillet@wanadoo.fr>
Signed-off-by: Dan Carpenter <dan.carpenter@oracle.com>
Reported-by: kernel test robot <lkp@intel.com>
Signed-off-by: Cheng Xu <chengyou@linux.alibaba.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
Cheng Xu [Wed, 27 Jul 2022 01:49:24 +0000 (09:49 +0800)]
RDMA/erdma: Add connection management (CM) support

ERDMA's transport protocol is iWarp, so the driver must support the CM
interface. For the CM part, we use the same approach as SoftiWarp: a kernel
socket is used to set up the connection, and MPA negotiation is then
performed in the kernel. This part of the code therefore mainly comes from
SoftiWarp; based on it, we add some more features, such as a non-blocking
iw_connect implementation.

This commit also fixes a duplicated include issue reported by Abaci Robot.

Link: https://lore.kernel.org/r/20220727014927.76564-9-chengyou@linux.alibaba.com
Reported-by: Abaci Robot <abaci@linux.alibaba.com>
Signed-off-by: Yang Li <yang.lee@linux.alibaba.com>
Signed-off-by: Cheng Xu <chengyou@linux.alibaba.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
Cheng Xu [Wed, 27 Jul 2022 01:49:23 +0000 (09:49 +0800)]
RDMA/erdma: Add verbs implementation

The RDMA verbs implementation of erdma is divided into three files:
erdma_qp.c, erdma_cq.c, and erdma_verbs.c. Internally used functions and
datapath functions of QP/CQ are put in erdma_qp.c and erdma_cq.c; the rest
is in erdma_verbs.c.

This commit also fixes some static check warnings.

Link: https://lore.kernel.org/r/20220727014927.76564-8-chengyou@linux.alibaba.com
Reported-by: Dan Carpenter <dan.carpenter@oracle.com>
Reported-by: Abaci Robot <abaci@linux.alibaba.com>
Signed-off-by: Yang Li <yang.lee@linux.alibaba.com>
Reported-by: Hulk Robot <hulkci@huawei.com>
Signed-off-by: Wei Yongjun <weiyongjun1@huawei.com>
Signed-off-by: Cheng Xu <chengyou@linux.alibaba.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
Cheng Xu [Wed, 27 Jul 2022 01:49:22 +0000 (09:49 +0800)]
RDMA/erdma: Add verbs header file

This header file defines the main structures and functions used for RDMA
Verbs, including qp, cq, mr, ucontext, etc.

Link: https://lore.kernel.org/r/20220727014927.76564-7-chengyou@linux.alibaba.com
Signed-off-by: Cheng Xu <chengyou@linux.alibaba.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
Cheng Xu [Wed, 27 Jul 2022 01:49:21 +0000 (09:49 +0800)]
RDMA/erdma: Add event queue implementation

The event queue (EQ) is the main notification mechanism from the erdma
hardware to its driver. Each erdma device contains two kinds of EQs: the
asynchronous EQ (AEQ) and the completion EQs (CEQs). Each device has one
AEQ, which is used for RDMA asynchronous event reporting, and up to 32 CEQs
(numbered CEQ0 to CEQ31). CEQ0 is used for cmdq completion event reporting,
and the remaining CEQs are used for RDMA completion event reporting.

Link: https://lore.kernel.org/r/20220727014927.76564-6-chengyou@linux.alibaba.com
Signed-off-by: Cheng Xu <chengyou@linux.alibaba.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
Cheng Xu [Wed, 27 Jul 2022 01:49:20 +0000 (09:49 +0800)]
RDMA/erdma: Add cmdq implementation

The cmdq is the main control-plane channel between the erdma driver and the
hardware. After the erdma device is initialized, the cmdq channel stays
active for the whole lifecycle of the driver.

This commit also includes two modifications from Christophe: one uses the
bitmap API to allocate bitmaps instead of hand-writing them, and the other
uses the non-atomic bitmap API when applicable.

Link: https://lore.kernel.org/r/20220727014927.76564-5-chengyou@linux.alibaba.com
Signed-off-by: Christophe JAILLET <christophe.jaillet@wanadoo.fr>
Signed-off-by: Cheng Xu <chengyou@linux.alibaba.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
Cheng Xu [Wed, 27 Jul 2022 01:49:19 +0000 (09:49 +0800)]
RDMA/erdma: Add main include file

Add the ERDMA driver main header file, defining internally used data
structures and operations. The defined data structures include *cmdq*, which
is used as the communication channel between the ERDMA driver and hardware.

Link: https://lore.kernel.org/r/20220727014927.76564-4-chengyou@linux.alibaba.com
Signed-off-by: Cheng Xu <chengyou@linux.alibaba.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
Cheng Xu [Wed, 27 Jul 2022 01:49:18 +0000 (09:49 +0800)]
RDMA/erdma: Add the hardware related definitions

ERDMA is a PCIe device, and this file provides the ERDMA hardware-related
definitions, mainly including PCIe device capabilities and restrictions,
device register definitions, doorbell space, doorbell structure definitions
and WQE definitions.

Link: https://lore.kernel.org/r/20220727014927.76564-3-chengyou@linux.alibaba.com
Signed-off-by: Cheng Xu <chengyou@linux.alibaba.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
Cheng Xu [Wed, 27 Jul 2022 01:49:17 +0000 (09:49 +0800)]
RDMA: Add ERDMA to rdma_driver_id definition

Define RDMA_DRIVER_ERDMA in enum rdma_driver_id.

Link: https://lore.kernel.org/r/20220727014927.76564-2-chengyou@linux.alibaba.com
Signed-off-by: Cheng Xu <chengyou@linux.alibaba.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
Aharon Landau [Tue, 26 Jul 2022 07:19:11 +0000 (10:19 +0300)]
RDMA/mlx5: Rename the mkey cache variables and functions

After replacing the MR cache with an Mkey cache, rename the variables and
functions to fit the new meaning.

Link: https://lore.kernel.org/r/20220726071911.122765-6-michaelgur@nvidia.com
Signed-off-by: Aharon Landau <aharonl@nvidia.com>
Signed-off-by: Leon Romanovsky <leonro@nvidia.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
Aharon Landau [Tue, 26 Jul 2022 07:19:10 +0000 (10:19 +0300)]
RDMA/mlx5: Store in the cache mkeys instead of mrs

Currently, the driver stores mlx5_ib_mr struct in the cache entries,
although the only use of the cached MR is the mkey. Store only the mkey in
the cache.

Link: https://lore.kernel.org/r/20220726071911.122765-5-michaelgur@nvidia.com
Signed-off-by: Aharon Landau <aharonl@nvidia.com>
Signed-off-by: Leon Romanovsky <leonro@nvidia.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
Aharon Landau [Tue, 26 Jul 2022 07:19:09 +0000 (10:19 +0300)]
RDMA/mlx5: Store the number of in_use cache mkeys instead of total_mrs

total_mrs is used only to calculate the number of mkeys currently in
use. To simplify things, replace it with a new member called "in_use" and
directly store the number of mkeys currently in use.

Link: https://lore.kernel.org/r/20220726071911.122765-4-michaelgur@nvidia.com
Signed-off-by: Aharon Landau <aharonl@nvidia.com>
Signed-off-by: Leon Romanovsky <leonro@nvidia.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
Aharon Landau [Tue, 26 Jul 2022 07:19:08 +0000 (10:19 +0300)]
RDMA/mlx5: Replace cache list with Xarray

The Xarray allows us to store the cached mkeys in a memory-efficient way.

Entries are reserved in the Xarray using xa_cmpxchg before calling the
upcoming callbacks to avoid allocations in interrupt context. xa_cmpxchg
can sleep when using GFP_KERNEL, so we call it in a loop to ensure one
reserved entry for each process trying to reserve.
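
As a rough illustration of the reservation idea (the commit uses xa_cmpxchg
directly; this sketch uses xa_reserve(), a wrapper around the same
mechanism, and all names below are illustrative):

  #include <linux/xarray.h>

  static DEFINE_XARRAY(demo_mkeys);       /* stand-in for the mkey cache */

  /* Process context: reserve the slot so no allocation happens later. */
  static int demo_reserve(unsigned long index)
  {
          return xa_reserve(&demo_mkeys, index, GFP_KERNEL);
  }

  /* Interrupt context: fill the previously reserved slot; the node
   * already exists, so GFP_ATOMIC does not need to allocate.
   */
  static void demo_fill(unsigned long index, void *mkey)
  {
          xa_store(&demo_mkeys, index, mkey, GFP_ATOMIC);
  }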

Link: https://lore.kernel.org/r/20220726071911.122765-3-michaelgur@nvidia.com
Signed-off-by: Aharon Landau <aharonl@nvidia.com>
Signed-off-by: Leon Romanovsky <leonro@nvidia.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
Aharon Landau [Tue, 26 Jul 2022 07:19:07 +0000 (10:19 +0300)]
RDMA/mlx5: Replace ent->lock with xa_lock

In the next patch, ent->list will be replaced with an xarray. The xarray
uses an internal lock to protect the indexes. Use it to protect all the
entry fields, and get rid of ent->lock.

Link: https://lore.kernel.org/r/20220726071911.122765-2-michaelgur@nvidia.com
Signed-off-by: Aharon Landau <aharonl@nvidia.com>
Signed-off-by: Leon Romanovsky <leonro@nvidia.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
Li Zhijian [Tue, 26 Jul 2022 03:16:26 +0000 (03:16 +0000)]
Revert "RDMA/rxe: Create duplicate mapping tables for FMRs"

The below two commits are reverted:
 commit 8ff5f5d9d8cf ("RDMA/rxe: Prevent double freeing rxe_map_set()")
 commit 647bf13ce944 ("RDMA/rxe: Create duplicate mapping tables for FMRs")

The community has a few bug reports that ultimately pointed to this commit.
Some proposals were raised in the meantime, but none of them was followed
up.

The previous commit made the map_set of an FMR unavailable if the MR is
registered again after being invalidated. Although the mentioned patch
tries to fix a potential race in building/accessing the same table for
fast memory regions, it broke rtrs and other ULPs. Since the latter is
worse, revert it.

With the previous commit, the same MR in the rnbd server is observed to
trigger the code path below:
 -> rxe_mr_init_fast()
 |-> alloc map_set() # map_set is uninitialized
 |...-> rxe_map_mr_sg() # build the map_set
     |-> rxe_mr_set_page()
 |...-> rxe_reg_fast_mr() # mr->state change to VALID from FREE that means
                          # we can access host memory(such rxe_mr_copy)
 |...-> rxe_invalidate_mr() # mr->state change to FREE from VALID
 |...-> rxe_reg_fast_mr() # mr->state change to VALID from FREE,
                          # but map_set was not built again
 |...-> rxe_mr_copy() # kernel crash due to access wild addresses
                      # that lookup from the map_set

The backtraces are not always identical.
[1st]----------
  RIP: 0010:lookup_iova+0x66/0xa0 [rdma_rxe]
  Code: 00 00 00 48 d3 ee 89 32 c3 4c 8b 18 49 8b 3b 48 8b 47 08 48 39 c6 72 38 48 29 c6 45 31 d2 b8 01 00 00 00 48 63 c8 48 c1 e1 04 <48> 8b 4c 0f 08 48 39 f1 77 21 83 c0 01 48 29 ce 3d 00 01 00 00 75
  RSP: 0018:ffffb7ff80063bf0 EFLAGS: 00010246
  RAX: 0000000000000000 RBX: ffff9b9949d86800 RCX: 0000000000000000
  RDX: ffffb7ff80063c00 RSI: 0000000049f6b378 RDI: 002818da00000004
  RBP: 0000000000000120 R08: ffffb7ff80063c08 R09: ffffb7ff80063c04
  R10: 0000000000000002 R11: ffff9b9916f7eef8 R12: ffff9b99488a0038
  R13: ffff9b99488a0038 R14: ffff9b9914fb346a R15: ffff9b990ab27000
  FS:  0000000000000000(0000) GS:ffff9b997dc00000(0000) knlGS:0000000000000000
  CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
  CR2: 00007efc33a98ed0 CR3: 0000000014f32004 CR4: 00000000001706f0
  DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
  DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
  Call Trace:
   <TASK>
   rxe_mr_copy.part.0+0x6f/0x140 [rdma_rxe]
   rxe_responder+0x12ee/0x1b60 [rdma_rxe]
   ? rxe_icrc_check+0x7e/0x100 [rdma_rxe]
   ? rxe_rcv+0x1d0/0x780 [rdma_rxe]
   ? rxe_icrc_hdr.isra.0+0xf6/0x160 [rdma_rxe]
   rxe_do_task+0x67/0xb0 [rdma_rxe]
   rxe_xmit_packet+0xc7/0x210 [rdma_rxe]
   rxe_requester+0x680/0xee0 [rdma_rxe]
   ? update_load_avg+0x5f/0x690
   ? update_load_avg+0x5f/0x690
   ? rtrs_clt_recv_done+0x1b/0x30 [rtrs_client]

[2nd]----------
  RIP: 0010:rxe_mr_copy.part.0+0xa8/0x140 [rdma_rxe]
  Code: 00 00 49 c1 e7 04 48 8b 00 4c 8d 2c d0 48 8b 44 24 10 4d 03 7d 00 85 ed 7f 10 eb 6c 89 54 24 0c 49 83 c7 10 31 c0 85 ed 7e 5e <49> 8b 3f 8b 14 24 4c 89 f6 48 01 c7 85 d2 74 06 48 89 fe 4c 89 f7
  RSP: 0018:ffffae3580063bf8 EFLAGS: 00010202
  RAX: 0000000000018978 RBX: ffff9d7ef7a03600 RCX: 0000000000000008
  RDX: 000000000000007c RSI: 000000000000007c RDI: ffff9d7ef7a03600
  RBP: 0000000000000120 R08: ffffae3580063c08 R09: ffffae3580063c04
  R10: ffff9d7efece0038 R11: ffff9d7ec4b1db00 R12: ffff9d7efece0038
  R13: ffff9d7ef4098260 R14: ffff9d7f11e23c6a R15: 4c79500065708144
  FS:  0000000000000000(0000) GS:ffff9d7f3dc00000(0000) knlGS:0000000000000000
  CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
  CR2: 00007fce47276c60 CR3: 0000000003f66004 CR4: 00000000001706f0
  DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
  DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
  Call Trace:
   <TASK>
   rxe_responder+0x12ee/0x1b60 [rdma_rxe]
   ? rxe_icrc_check+0x7e/0x100 [rdma_rxe]
   ? rxe_rcv+0x1d0/0x780 [rdma_rxe]
   ? rxe_icrc_hdr.isra.0+0xf6/0x160 [rdma_rxe]
   rxe_do_task+0x67/0xb0 [rdma_rxe]
   rxe_xmit_packet+0xc7/0x210 [rdma_rxe]
   rxe_requester+0x680/0xee0 [rdma_rxe]
   ? update_load_avg+0x5f/0x690
   ? update_load_avg+0x5f/0x690
   ? rtrs_clt_recv_done+0x1b/0x30 [rtrs_client]
   rxe_do_task+0x67/0xb0 [rdma_rxe]
   tasklet_action_common.constprop.0+0x92/0xc0
   __do_softirq+0xe1/0x2d8
   run_ksoftirqd+0x21/0x30
   smpboot_thread_fn+0x183/0x220
   ? sort_range+0x20/0x20
   kthread+0xe2/0x110
   ? kthread_complete_and_exit+0x20/0x20
   ret_from_fork+0x22/0x30

Link: https://lore.kernel.org/r/1658805386-2-1-git-send-email-lizhijian@fujitsu.com
Link: https://lore.kernel.org/all/20220210073655.42281-1-guoqing.jiang@linux.dev/T/
Link: https://www.spinics.net/lists/linux-rdma/msg110836.html
Link: https://lore.kernel.org/lkml/94a5ea93-b8bb-3a01-9497-e2021f29598a@linux.dev/t/
Tested-by: Md Haris Iqbal <haris.iqbal@ionos.com>
Reviewed-by: Bob Pearson <rpearsonhpe@gmail.com>
Signed-off-by: Li Zhijian <lizhijian@fujitsu.com>
Signed-off-by: Leon Romanovsky <leon@kernel.org>
Bob Pearson [Thu, 30 Jun 2022 19:04:26 +0000 (14:04 -0500)]
RDMA/rxe: Replace __rxe_do_task by rxe_run_task

In rxe_req.c, replace calls to __rxe_do_task() with calls to
rxe_run_task(.., 0). Using __rxe_do_task() is an error because the completer
tasklet is not designed to be re-entrant, and __rxe_do_task() should only be
called when it is clear that no one else could be calling the completer
tasklet, as is the case in rxe_qp.c where this call is used in safe
environments.

Link: https://lore.kernel.org/r/20220630190425.2251-10-rpearsonhpe@gmail.com
Signed-off-by: Bob Pearson <rpearsonhpe@gmail.com>
Reviewed-by: Li Zhijian <lizhijian@fujitsu.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
Bob Pearson [Thu, 30 Jun 2022 19:04:25 +0000 (14:04 -0500)]
RDMA/rxe: Limit the number of calls to each tasklet

Limit the maximum number of calls to each tasklet from rxe_do_task()
before yielding the cpu. When the limit is reached reschedule the tasklet
and exit the calling loop. This patch prevents one tasklet from consuming
100% of a cpu core and causing a deadlock or soft lockup.
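
A hedged sketch of the capping scheme (the budget value, the structure and
the work helper are illustrative, not the driver's code):

  #include <linux/interrupt.h>
  #include <linux/errno.h>

  #define DEMO_MAX_ITERATIONS 256         /* illustrative per-run budget */

  struct demo_task {
          struct tasklet_struct tasklet;
  };

  /* Stand-in for one unit of requester/responder/completer work;
   * returns 0 while more work remains.
   */
  static int demo_do_one_unit(struct demo_task *task)
  {
          return -EAGAIN;                 /* placeholder: nothing to do */
  }

  static void demo_task_func(struct tasklet_struct *t)
  {
          struct demo_task *task = from_tasklet(task, t, tasklet);
          int budget = DEMO_MAX_ITERATIONS;

          while (demo_do_one_unit(task) == 0) {
                  if (--budget == 0) {
                          /* Limit reached: reschedule and yield the cpu. */
                          tasklet_schedule(&task->tasklet);
                          return;
                  }
          }
  }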

Link: https://lore.kernel.org/r/20220630190425.2251-9-rpearsonhpe@gmail.com
Signed-off-by: Bob Pearson <rpearsonhpe@gmail.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
Bob Pearson [Thu, 30 Jun 2022 19:04:24 +0000 (14:04 -0500)]
RDMA/rxe: Make the tasklet exits the same

Make changes to the three tasklets so that the exit logic from each is the
same. This makes the code easier to understand.

Link: https://lore.kernel.org/r/20220630190425.2251-8-rpearsonhpe@gmail.com
Signed-off-by: Bob Pearson <rpearsonhpe@gmail.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
Bob Pearson [Thu, 30 Jun 2022 19:04:22 +0000 (14:04 -0500)]
RDMA/rxe: Fix rnr retry behavior

Currently, when either the retransmit timer or the RNR timer fires, the
completer tasklet sets the same flag (qp->req.need_retry), so whichever
timer fires a retry flow is attempted on the send queue. This has the
effect of responding to an RNR NAK at the first retransmit timer event,
which might not allow the requested RNR timeout.

This patch adds a new flag (qp->req.wait_for_rnr_timer) which, if set,
prevents a retry flow until the RNR NAK timer fires.

This patch fixes rnr retry errors which can be observed by running the
pyverbs test_rdmacm_async_traffic_external_qp multiple times. With this
patch applied they do not occur.

Link: https://lore.kernel.org/linux-rdma/a8287823-1408-4273-bc22-99a0678db640@gmail.com/
Link: https://lore.kernel.org/linux-rdma/2bafda9e-2bb6-186d-12a1-179e8f6a2678@talpey.com/
Fixes: 8700e3e7c485 ("Soft RoCE driver")
Link: https://lore.kernel.org/r/20220630190425.2251-6-rpearsonhpe@gmail.com
Signed-off-by: Bob Pearson <rpearsonhpe@gmail.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
Bob Pearson [Thu, 30 Jun 2022 19:04:18 +0000 (14:04 -0500)]
RDMA/rxe: Add rxe_is_fenced() subroutine

The code that decides whether to defer execution of a wqe in
rxe_requester.c is isolated into a subroutine, rxe_is_fenced(), and removed
from the call to req_next_wqe(). The condition for whether a wqe should be
fenced is changed to comply with the IBA. Currently an operation is fenced
if the fence bit is set in the wqe flags and the last wqe has not
completed. For normal operations the IBA actually only requires that the
last read or atomic operation is complete.
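
A minimal sketch of the resulting check, assuming the requester tracks its
outstanding read/atomic operations (all names below are illustrative):

  #include <rdma/ib_verbs.h>
  #include <linux/types.h>

  struct demo_wqe {
          int wr_flags;                   /* may contain IB_SEND_FENCE */
  };

  struct demo_req_state {
          int outstanding_rd_atomic;      /* reads/atomics not yet completed */
  };

  static bool demo_is_fenced(const struct demo_wqe *wqe,
                             const struct demo_req_state *req)
  {
          if (!(wqe->wr_flags & IB_SEND_FENCE))
                  return false;

          /* For normal operations the IBA only requires waiting for the
           * outstanding read/atomic operations, not for every prior wqe.
           */
          return req->outstanding_rd_atomic > 0;
  }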

Link: https://lore.kernel.org/r/20220630190425.2251-2-rpearsonhpe@gmail.com
Signed-off-by: Bob Pearson <rpearsonhpe@gmail.com>
Reviewed-by: Li Zhijian <lizhijian@fujitsu.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
Md Haris Iqbal [Thu, 7 Jul 2022 07:30:06 +0000 (09:30 +0200)]
RDMA/rxe: For invalidate compare according to set keys in mr

The 'rkey' input can be an lkey or rkey, and in rxe the lkey or rkey have
the same value, including the variant bits.

So, if mr->rkey is set, compare the invalidate key with it, otherwise
compare with the mr->lkey.

Since we already did a lookup on the non-variant bits to get this far, the
check's only purpose is to confirm that the wqe has the correct variant
bits.
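
A minimal sketch of the comparison described above (the structure and helper
are stand-ins for the rxe MR, not the actual code):

  #include <linux/errno.h>
  #include <linux/types.h>

  struct demo_mr_keys {
          u32 lkey;
          u32 rkey;
  };

  static int demo_check_invalidate_key(const struct demo_mr_keys *mr, u32 key)
  {
          /* Compare against rkey if it is set, otherwise against lkey;
           * only the variant bits still need confirming at this point.
           */
          u32 set_key = mr->rkey ? mr->rkey : mr->lkey;

          return key == set_key ? 0 : -EINVAL;
  }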

Fixes: 001345339f4c ("RDMA/rxe: Separate HW and SW l/rkeys")
Link: https://lore.kernel.org/r/20220707073006.328737-1-haris.phnx@gmail.com
Signed-off-by: Md Haris Iqbal <haris.phnx@gmail.com>
Reviewed-by: Bob Pearson <rpearsonhpe@gmail.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
Xin Gao [Fri, 22 Jul 2022 02:18:33 +0000 (10:18 +0800)]
RDMA: Fix comment typo

The word `get' is duplicated; remove one.

Link: https://lore.kernel.org/r/20220722021833.15669-1-gaoxin@cdjrlc.com
Signed-off-by: Xin Gao <gaoxin@cdjrlc.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
Slark Xiao [Thu, 21 Jul 2022 08:52:32 +0000 (16:52 +0800)]
IB: Fix repeated words 'the the' comments

Replace 'the the' with 'the' in the comments.

Signed-off-by: Slark Xiao <slark_xiao@163.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
Bob Pearson [Thu, 14 Jul 2022 20:46:20 +0000 (15:46 -0500)]
RDMA/rxe: Fix mw bind to allow any consumer key portion

The current implementation of rxe_check_bind_mw() in rxe_mw.c is incorrect
since it requires the new key portion provided by the mw consumer to be
different than the previous key portion. This is not required by the
IBA. Remove the test.

Link: https://lore.kernel.org/linux-rdma/fb4614e7-4cac-0dc7-3ef7-766dfd10e8f2@gmail.com/
Fixes: 32a577b4c3a9 ("Add support for bind MW work requests")
Link: https://lore.kernel.org/r/20220714204619.13396-1-rpearsonhpe@gmail.com
Signed-off-by: Bob Pearson <rpearsonhpe@gmail.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
Zhang Jiaming [Fri, 1 Jul 2022 08:00:19 +0000 (16:00 +0800)]
RDMA/rxe: Fix spelling mistake in error print

There is a spelling mistake (writeable) in function rxe_check_bind_mw.
Fix it.

Signed-off-by: Zhang Jiaming <jiaming@nfschina.com>
Reviewed-by: Bob Pearson <rpearsonhpe@gmail.com>
Signed-off-by: Leon Romanovsky <leon@kernel.org>
Mark Bloch [Sun, 3 Jul 2022 20:54:07 +0000 (13:54 -0700)]
RDMA/mlx5: Expose steering anchor to userspace

Expose a steering anchor per priority to allow users to re-inject
packets back into default NIC pipeline for additional processing.

MLX5_IB_METHOD_STEERING_ANCHOR_CREATE returns a flow table ID which
a user can use to re-inject packets at a specific priority.

A FTE (flow table entry) can be created and the flow table ID
used as a destination.

When a packet is taken into a RDMA-controlled steering domain (like
software steering) there may be a need to insert the packet back into
the default NIC pipeline. This exposes a flow table ID to the user that can
be used as a destination in a flow table entry.

With this new method priorities that are exposed to users via
MLX5_IB_METHOD_FLOW_MATCHER_CREATE can be reached from a non-zero UID.

As user-created flow tables (via RDMA DEVX) are created with a non-zero
UID, it is impossible to point to a NIC core flow table (core driver flow
tables are created with a UID value of zero) from userspace.
Create flow tables that are exposed to users with the shared UID, this
allows users to point to default NIC flow tables.

Steering loops are prevented at FW level as FW enforces that no flow
table at level X can point to a table at level lower than X.

Link: https://lore.kernel.org/all/20220703205407.110890-6-saeed@kernel.org/
Signed-off-by: Mark Bloch <mbloch@nvidia.com>
Reviewed-by: Yishai Hadas <yishaih@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
Signed-off-by: Leon Romanovsky <leon@kernel.org>
Mark Bloch [Sun, 3 Jul 2022 20:54:06 +0000 (13:54 -0700)]
RDMA/mlx5: Refactor get flow table function

_get_flow_table() requires the entire matcher to be passed, while all it
needs is the priority and namespace type. Pass the priority and namespace
type directly instead.

Link: https://lore.kernel.org/all/20220703205407.110890-5-saeed@kernel.org/
Signed-off-by: Mark Bloch <mbloch@nvidia.com>
Reviewed-by: Maor Gottlieb <maorg@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
Signed-off-by: Leon Romanovsky <leon@kernel.org>
Leon Romanovsky [Tue, 19 Jul 2022 17:40:28 +0000 (20:40 +0300)]
Merge branch 'mlx5-next' into wip/leon-for-next

Mark Bloch Says:
================
Expose steering anchor

Expose a steering anchor per priority to allow users to re-inject
packets back into default NIC pipeline for additional processing.

MLX5_IB_METHOD_STEERING_ANCHOR_CREATE returns a flow table ID which
a user can use to re-inject packets at a specific priority.

A FTE (flow table entry) can be created and the flow table ID
used as a destination.

When a packet is taken into a RDMA-controlled steering domain (like
software steering) there may be a need to insert the packet back into
the default NIC pipeline. This exposes a flow table ID to the user that can
be used as a destination in a flow table entry.

With this new method priorities that are exposed to users via
MLX5_IB_METHOD_FLOW_MATCHER_CREATE can be reached from a non-zero UID.

As user-created flow tables (via RDMA DEVX) are created with a non-zero
UID, it is impossible to point to a NIC core flow table (core driver flow
tables are created with a UID value of zero) from userspace.
Create flow tables that are exposed to users with the shared UID, this
allows users to point to default NIC flow tables.

Steering loops are prevented at FW level as FW enforces that no flow
table at level X can point to a table at level lower than X.

================

Link: https://lore.kernel.org/all/20220703205407.110890-1-saeed@kernel.org
Signed-off-by: Leon Romanovsky <leon@kernel.org>
Xiao Yang [Fri, 8 Jul 2022 03:55:50 +0000 (03:55 +0000)]
RDMA/rxe: Remove unused qp parameter

The qp parameter in free_rd_atomic_resource() has become
unused so remove it directly.

Fixes: 15ae1375ea91 ("RDMA/rxe: Fix qp reference counting for atomic ops")
Link: https://lore.kernel.org/all/20220708035547.6592-1-yangx.jy@fujitsu.com/
Signed-off-by: Xiao Yang <yangx.jy@fujitsu.com>
Signed-off-by: Leon Romanovsky <leon@kernel.org>
Jason Wang [Fri, 15 Jul 2022 05:40:07 +0000 (13:40 +0800)]
IB/qib: Fix comment typo

The word `are' is duplicated in line 156; remove one.

Link: https://lore.kernel.org/r/20220715054007.5320-1-wangborong@cdjrlc.com
Signed-off-by: Jason Wang <wangborong@cdjrlc.com>
Signed-off-by: Leon Romanovsky <leon@kernel.org>
Jianglei Nie [Mon, 11 Jul 2022 07:07:18 +0000 (15:07 +0800)]
RDMA/hfi1: fix potential memory leak in setup_base_ctxt()

setup_base_ctxt() allocates a memory chunk for uctxt->groups with
hfi1_alloc_ctxt_rcv_groups(). When init_user_ctxt() fails, uctxt->groups
is not released, which will lead to a memory leak.

We should release the uctxt->groups with hfi1_free_ctxt_rcv_groups()
when init_user_ctxt() fails.
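
The shape of the fix is the usual goto-unwind pattern; in this hedged sketch
every name is an illustrative stand-in for hfi1_alloc_ctxt_rcv_groups(),
init_user_ctxt() and hfi1_free_ctxt_rcv_groups():

  #include <linux/slab.h>
  #include <linux/errno.h>

  struct demo_ctxt {
          void *groups;
  };

  static int demo_alloc_groups(struct demo_ctxt *c)
  {
          c->groups = kzalloc(64, GFP_KERNEL);
          return c->groups ? 0 : -ENOMEM;
  }

  static void demo_free_groups(struct demo_ctxt *c)
  {
          kfree(c->groups);
          c->groups = NULL;
  }

  static int demo_init_user_ctxt(struct demo_ctxt *c)
  {
          return 0;                       /* may fail in the real driver */
  }

  static int demo_setup_base_ctxt(struct demo_ctxt *c)
  {
          int ret;

          ret = demo_alloc_groups(c);
          if (ret)
                  return ret;

          ret = demo_init_user_ctxt(c);
          if (ret)
                  goto free_groups;       /* the fix: do not leak c->groups */

          return 0;

  free_groups:
          demo_free_groups(c);
          return ret;
  }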

Fixes: e87473bc1b6c ("IB/hfi1: Only set fd pointer when base context is completely initialized")
Link: https://lore.kernel.org/r/20220711070718.2318320-1-niejianglei2021@163.com
Signed-off-by: Jianglei Nie <niejianglei2021@163.com>
Acked-by: Dennis Dalessandro <dennis.dalessandro@cornelisnetworks.com>
Signed-off-by: Leon Romanovsky <leon@kernel.org>
lizhijian@fujitsu.com [Fri, 15 Jul 2022 03:46:36 +0000 (03:46 +0000)]
RDMA/rxe: Remove unused mask parameter

This parameter has been unused since the commit below:
1a7085b34291 ("RDMA/rxe: Skip adjusting remote addr for write in retry operation")

Link: https://lore.kernel.org/r/20220715035340.1900168-1-lizhijian@fujitsu.com
Signed-off-by: Li Zhijian <lizhijian@fujitsu.com>
Reviewed-by: Bob Pearson <rpearsonhpe@gmail.com>
Signed-off-by: Leon Romanovsky <leon@kernel.org>
Xiao Yang [Tue, 5 Jul 2022 14:52:12 +0000 (22:52 +0800)]
RDMA/rxe: Rename rxe_atomic_reply to atomic_reply

It's better to use the unified naming format.

Link: https://lore.kernel.org/r/20220705145212.12014-2-yangx.jy@fujitsu.com
Signed-off-by: Xiao Yang <yangx.jy@fujitsu.com>
Signed-off-by: Leon Romanovsky <leon@kernel.org>
Xiao Yang [Tue, 5 Jul 2022 14:52:11 +0000 (22:52 +0800)]
RDMA/rxe: Add common rxe_prepare_res()

It is redundant to prepare resources for Read and Atomic requests with
different functions. Replace them with a common rxe_prepare_res() that
takes different parameters. In addition, the common rxe_prepare_res() can
also be used by the new Flush and Atomic Write requests in the future.

Link: https://lore.kernel.org/r/20220705145212.12014-1-yangx.jy@fujitsu.com
Signed-off-by: Xiao Yang <yangx.jy@fujitsu.com>
Reviewed-by: Bob Pearson <rpearsonhpe@gmail.com>
Signed-off-by: Leon Romanovsky <leon@kernel.org>
Zhu Yanjun [Tue, 5 Jul 2022 22:54:14 +0000 (18:54 -0400)]
RDMA/rxe: Fix BUG: KASAN: null-ptr-deref in rxe_qp_do_cleanup

The function rxe_create_qp calls rxe_qp_from_init. If some error
occurs, the error handler of function rxe_qp_from_init will set
both scq and rcq to NULL.

Then rxe_create_qp calls rxe_put to handle qp. In the end,
rxe_qp_do_cleanup is called by rxe_put. rxe_qp_do_cleanup directly
accesses scq and rcq before checking them. This causes a null-ptr-deref
error.

The call graph is as below:

rxe_create_qp {
  ...
  rxe_qp_from_init {
    ...
  err1:
    ...
    qp->rcq = NULL;  <---rcq is set to NULL
    qp->scq = NULL;  <---scq is set to NULL
    ...
  }

qp_init:
  rxe_put{
    ...
    rxe_qp_do_cleanup {
      ...
      atomic_dec(&qp->scq->num_wq); <--- scq is accessed
      ...
      atomic_dec(&qp->rcq->num_wq); <--- rcq is accessed
    }
}
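
A hedged sketch of the guard that avoids the dereference, based only on the
description above (minimal stand-in types, not the upstream diff):

  #include <linux/atomic.h>

  struct demo_cq { atomic_t num_wq; };
  struct demo_qp { struct demo_cq *scq, *rcq; };

  static void demo_qp_do_cleanup(struct demo_qp *qp)
  {
          /* scq/rcq may have been set to NULL by the rxe_qp_from_init()
           * error path shown in the call graph above.
           */
          if (qp->scq)
                  atomic_dec(&qp->scq->num_wq);
          if (qp->rcq)
                  atomic_dec(&qp->rcq->num_wq);
  }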

Fixes: 4703b4f0d94a ("RDMA/rxe: Enforce IBA C11-17")
Link: https://lore.kernel.org/r/20220705225414.315478-1-yanjun.zhu@linux.dev
Signed-off-by: Zhu Yanjun <yanjun.zhu@linux.dev>
Reviewed-by: Bob Pearson <rpearsonhpe@gmail.com>
Reviewed-by: Md Haris Iqbal <haris.iqbal@ionos.com>
Signed-off-by: Leon Romanovsky <leon@kernel.org>
Cheng Xu [Thu, 14 Jul 2022 01:30:47 +0000 (09:30 +0800)]
RDMA/siw: Fix duplicated reported IW_CM_EVENT_CONNECT_REPLY event

If siw_recv_mpa_rr returns -EAGAIN, the MPA reply has not been received
completely, and IW_CM_EVENT_CONNECT_REPLY should not be reported in this
case; doing so may trigger a call trace in iw_cm. A simple way to trigger
this:
 server: ib_send_lat
 client: ib_send_lat -R <server_ip>

The call trace looks like this:

 kernel BUG at drivers/infiniband/core/iwcm.c:894!
 invalid opcode: 0000 [#1] PREEMPT SMP NOPTI
 <...>
 Workqueue: iw_cm_wq cm_work_handler [iw_cm]
 Call Trace:
  <TASK>
  cm_work_handler+0x1dd/0x370 [iw_cm]
  process_one_work+0x1e2/0x3b0
  worker_thread+0x49/0x2e0
  ? rescuer_thread+0x370/0x370
  kthread+0xe5/0x110
  ? kthread_complete_and_exit+0x20/0x20
  ret_from_fork+0x1f/0x30
  </TASK>

Fixes: 6c52fdc244b5 ("rdma/siw: connection management")
Link: https://lore.kernel.org/r/dae34b5fd5c2ea2bd9744812c1d2653a34a94c67.1657706960.git.chengyou@linux.alibaba.com
Signed-off-by: Cheng Xu <chengyou@linux.alibaba.com>
Signed-off-by: Leon Romanovsky <leon@kernel.org>
Haoyue Xu [Thu, 14 Jul 2022 13:43:53 +0000 (21:43 +0800)]
RDMA/hns: Recover 1bit-ECC error of RAM on chip

Since ECC memory keeps a memory system immune to single-bit errors, add
support for correcting 1-bit ECC errors, which prevents a 1-bit ECC error
from becoming an uncorrectable error. When a 1-bit ECC error caused by a
read happens in the internal RAM of the ROCE engine, such as in the QPC
table, the ROCE engine corrects those 1-bit ECC errors only by writing.

Link: https://lore.kernel.org/r/20220714134353.16700-6-liangwenpeng@huawei.com
Signed-off-by: Haoyue Xu <xuhaoyue1@hisilicon.com>
Signed-off-by: Wenpeng Liang <liangwenpeng@huawei.com>
Signed-off-by: Leon Romanovsky <leon@kernel.org>
Haoyue Xu [Thu, 14 Jul 2022 13:43:52 +0000 (21:43 +0800)]
RDMA/hns: Refactor the abnormal interrupt handler function

Use a single function to handle the same kind of abnormal interrupts.

Link: https://lore.kernel.org/r/20220714134353.16700-5-liangwenpeng@huawei.com
Signed-off-by: Haoyue Xu <xuhaoyue1@hisilicon.com>
Signed-off-by: Wenpeng Liang <liangwenpeng@huawei.com>
Signed-off-by: Leon Romanovsky <leon@kernel.org>
Haoyue Xu [Thu, 14 Jul 2022 13:43:51 +0000 (21:43 +0800)]
RDMA/hns: Fix incorrect clearing of interrupt status register

When handling an interrupt of type AEQ overflow, the driver clears all the
interrupts in the same area. It should only set the interrupt status bit of
type AEQ overflow.

Fixes: a5073d6054f7 ("RDMA/hns: Add eq support of hip08")
Link: https://lore.kernel.org/r/20220714134353.16700-4-liangwenpeng@huawei.com
Signed-off-by: Haoyue Xu <xuhaoyue1@hisilicon.com>
Signed-off-by: Wenpeng Liang <liangwenpeng@huawei.com>
Signed-off-by: Leon Romanovsky <leon@kernel.org>
Haoyue Xu [Thu, 14 Jul 2022 13:43:50 +0000 (21:43 +0800)]
RDMA/hns: Fix the wrong type of return value of the interrupt handler

The type of return value of the interrupt handler should be irqreturn_t.

Link: https://lore.kernel.org/r/20220714134353.16700-3-liangwenpeng@huawei.com
Signed-off-by: Haoyue Xu <xuhaoyue1@hisilicon.com>
Signed-off-by: Wenpeng Liang <liangwenpeng@huawei.com>
Signed-off-by: Leon Romanovsky <leon@kernel.org>
Haoyue Xu [Thu, 14 Jul 2022 13:43:49 +0000 (21:43 +0800)]
RDMA/hns: Remove unused abnormal interrupt of type RAS

The HNS NIC driver receives and handles the abnormal interrupt of the RAS
type generated by ROCEE, and the HNS RDMA driver does not need to handle
this type of interrupt. Therefore, delete the unused code in the HNS RDMA
driver.

Link: https://lore.kernel.org/r/20220714134353.16700-2-liangwenpeng@huawei.com
Signed-off-by: Haoyue Xu <xuhaoyue1@hisilicon.com>
Signed-off-by: Wenpeng Liang <liangwenpeng@huawei.com>
Signed-off-by: Leon Romanovsky <leon@kernel.org>
Jianglei Nie [Thu, 14 Jul 2022 06:15:05 +0000 (14:15 +0800)]
RDMA/qedr: Fix potential memory leak in __qedr_alloc_mr()

__qedr_alloc_mr() allocates a memory chunk for "mr->info.pbl_table" with
init_mr_info(). When rdma_alloc_tid() or rdma_register_tid() fails, "mr"
is released while "mr->info.pbl_table" is not, which leads to a memory
leak.

We should release "mr->info.pbl_table" with qedr_free_pbl() when an error
occurs to fix the memory leak.

Fixes: e0290cce6ac0 ("qedr: Add support for memory registeration verbs")
Link: https://lore.kernel.org/r/20220714061505.2342759-1-niejianglei2021@163.com
Signed-off-by: Jianglei Nie <niejianglei2021@163.com>
Acked-by: Michal Kalderon <michal.kalderon@marvell.com>
Signed-off-by: Leon Romanovsky <leon@kernel.org>
Ehab Ababneh [Mon, 11 Jul 2022 14:54:38 +0000 (10:54 -0400)]
RDMA/hfi1: Depend on !UML

Both hfi1 and UML depend on x86_64, which can trigger build errors. This
driver must depend on !UML because it accesses x86_64 features that are not
supported by UML.

Link: https://lore.kernel.org/r/165755127879.2996325.5668395672492732376.stgit@awfm-02.cornelisnetworks.com
Signed-off-by: Ehab Ababneh <ehab.ababneh@cornelisnetworks.com>
Signed-off-by: Dennis Dalessandro <dennis.dalessandro@cornelisnetworks.com>
Signed-off-by: Leon Romanovsky <leon@kernel.org>
Christophe JAILLET [Mon, 11 Jul 2022 19:21:39 +0000 (21:21 +0200)]
RDMA/irdma: Use the bitmap API to allocate bitmaps

Use bitmap_zalloc()/bitmap_free() instead of hand-writing them.

It is less verbose and it improves the semantics.
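
A generic before/after illustration (not the driver's exact call sites):

  #include <linux/bitmap.h>
  #include <linux/slab.h>

  static unsigned long *demo_alloc_map(unsigned int nbits)
  {
          /* Before: kcalloc(BITS_TO_LONGS(nbits), sizeof(unsigned long),
           *                 GFP_KERNEL);
           */
          return bitmap_zalloc(nbits, GFP_KERNEL);
  }

  static void demo_free_map(unsigned long *map)
  {
          /* Before: kfree(map); */
          bitmap_free(map);
  }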

Link: https://lore.kernel.org/r/1f671b1af5881723ee265a0a12809c92950e58aa.1657567269.git.christophe.jaillet@wanadoo.fr
Signed-off-by: Christophe JAILLET <christophe.jaillet@wanadoo.fr>
Acked-by: Shiraz Saleem <shiraz.saleem@intel.com>
Signed-off-by: Leon Romanovsky <leon@kernel.org>
Jack Wang [Tue, 12 Jul 2022 10:31:13 +0000 (12:31 +0200)]
RDMA/rtrs-srv: Do not use mempool for page allocation

The mempool is for guaranteed memory allocation during extreme VM load
(see the header of mempool.c in the kernel). But rtrs-srv allocates pages
only when creating a new session, so there is no need to use a mempool.

With the removal of the mempool, rtrs-server no longer needs to reserve a
huge amount of memory, which avoids errors like this:
https://lore.kernel.org/lkml/20220620020727.GA3669@xsang-OptiPlex-9020/

Link: https://lore.kernel.org/r/20220712103113.617754-6-haris.iqbal@ionos.com
Reported-by: kernel test robot <oliver.sang@intel.com>
Signed-off-by: Gioh Kim <gi-oh.kim@ionos.com>
Signed-off-by: Jack Wang <jinpu.wang@ionos.com>
Signed-off-by: Md Haris Iqbal <haris.iqbal@ionos.com>
Acked-by: Guoqing Jiang <guoqing.jiang@linux.dev>
Signed-off-by: Leon Romanovsky <leon@kernel.org>
Md Haris Iqbal [Tue, 12 Jul 2022 10:31:12 +0000 (12:31 +0200)]
RDMA/rtrs-clt: Replace list_next_or_null_rr_rcu with an inline function

Remove the list_next_or_null_rr_rcu macro, which is used only twice, to fix
the warnings below.
CHECK:MACRO_ARG_REUSE: Macro argument reuse 'head' - possible side-effects?
CHECK:MACRO_ARG_REUSE: Macro argument reuse 'ptr' - possible side-effects?
CHECK:MACRO_ARG_REUSE: Macro argument reuse 'memb' - possible side-effects?

Replace the macro with an inline function.

Fixes: 6a98d71daea1 ("RDMA/rtrs: client: main functionality")
Cc: jinpu.wang@ionos.com
Link: https://lore.kernel.org/r/20220712103113.617754-5-haris.iqbal@ionos.com
Signed-off-by: Md Haris Iqbal <haris.iqbal@ionos.com>
Suggested-by: Jason Gunthorpe <jgg@ziepe.ca>
Signed-off-by: Jack Wang <jinpu.wang@ionos.com>
Signed-off-by: Leon Romanovsky <leon@kernel.org>
Santosh Kumar Pradhan [Tue, 12 Jul 2022 10:31:11 +0000 (12:31 +0200)]
RDMA/rtrs-srv: Use per-cpu variables for rdma stats

Convert server stat counters from atomic to per-cpu variables.

Link: https://lore.kernel.org/r/20220712103113.617754-4-haris.iqbal@ionos.com
Signed-off-by: Santosh Kumar Pradhan <santosh.pradhan@ionos.com>
Signed-off-by: Jack Wang <jinpu.wang@ionos.com>
Signed-off-by: Md Haris Iqbal <haris.iqbal@ionos.com>
Signed-off-by: Leon Romanovsky <leon@kernel.org>
Santosh Kumar Pradhan [Tue, 12 Jul 2022 10:31:10 +0000 (12:31 +0200)]
RDMA/rtrs-clt: Use this_cpu_ API for stats

Use this_cpu_x() for increasing/adding a percpu counter through a
percpu pointer without the need to disable/enable preemption.
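
A hedged illustration of the pattern (structure and field names are
illustrative, not the rtrs code):

  #include <linux/percpu.h>
  #include <linux/errno.h>
  #include <linux/types.h>

  struct demo_stats {
          u64 cpu_migr_cnt;
  };

  static struct demo_stats __percpu *demo_stats;

  static int demo_stats_init(void)
  {
          demo_stats = alloc_percpu(struct demo_stats);
          return demo_stats ? 0 : -ENOMEM;
  }

  static void demo_stats_inc(void)
  {
          /* this_cpu_inc() is preemption-safe on its own; no explicit
           * preempt_disable()/preempt_enable() pair is needed.
           */
          this_cpu_inc(demo_stats->cpu_migr_cnt);
  }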

Link: https://lore.kernel.org/r/20220712103113.617754-3-haris.iqbal@ionos.com
Suggested-by: Christoph Lameter <cl@linux.com>
Signed-off-by: Santosh Kumar Pradhan <santosh.pradhan@ionos.com>
Signed-off-by: Jack Wang <jinpu.wang@ionos.com>
Signed-off-by: Md Haris Iqbal <haris.iqbal@ionos.com>
Acked-by: Guoqing Jiang <guoqing.jiang@linux.dev>
Reviewed-by: Christoph Lameter <cl@linux.com>
Signed-off-by: Leon Romanovsky <leon@kernel.org>
Jack Wang [Tue, 12 Jul 2022 10:31:09 +0000 (12:31 +0200)]
RDMA/rtrs-srv: Fix modinfo output for stringify

stringify works with define, not enum.
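
A small illustration of why (the names here are made up for the example):

  #include <linux/stringify.h>

  enum { DEMO_PROTO_VER_ENUM = 2 };
  #define DEMO_PROTO_VER_DEF 2

  /* The preprocessor expands macros, not enum constants, so only the
   * #define stringifies to its value.
   */
  static const char *demo_ver_enum = __stringify(DEMO_PROTO_VER_ENUM); /* "DEMO_PROTO_VER_ENUM" */
  static const char *demo_ver_def  = __stringify(DEMO_PROTO_VER_DEF);  /* "2" */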

Fixes: 91fddedd439c ("RDMA/rtrs: private headers with rtrs protocol structs and helpers")
Cc: jinpu.wang@ionos.com
Link: https://lore.kernel.org/r/20220712103113.617754-2-haris.iqbal@ionos.com
Signed-off-by: Jack Wang <jinpu.wang@ionos.com>
Signed-off-by: Md Haris Iqbal <haris.iqbal@ionos.com>
Reviewed-by: Aleksei Marov <aleksei.marov@ionos.com>
Signed-off-by: Leon Romanovsky <leon@kernel.org>
Andrey Strachuk [Mon, 11 Jul 2022 15:12:51 +0000 (18:12 +0300)]
RDMA: remove useless condition in siw_create_cq()

Comparison of 'cq' with NULL is useless since
'cq' is a result of container_of and cannot be NULL
in any reasonable scenario.

Found by Linux Verification Center (linuxtesting.org) with SVACE.

Fixes: 303ae1cdfdf7 ("rdma/siw: application interface")
Link: https://lore.kernel.org/r/20220711151251.17089-1-strochuk@ispras.ru
Signed-off-by: Andrey Strachuk <strochuk@ispras.ru>
Acked-by: Bernard Metzler <bmt@zurich.ibm.com>
Signed-off-by: Leon Romanovsky <leon@kernel.org>
Christophe JAILLET [Fri, 8 Jul 2022 16:47:37 +0000 (18:47 +0200)]
RDMA/rtrs-clt: Use bitmap_empty()

Use bitmap_empty() instead of hand-writing them.

Link: https://lore.kernel.org/r/b71ccfaf4a47dee8e1ad373604c861479d499b6b.1657298747.git.christophe.jaillet@wanadoo.fr
Signed-off-by: Christophe JAILLET <christophe.jaillet@wanadoo.fr>
Acked-by: Jack Wang <jinpu.wang@ionos.com>
Signed-off-by: Leon Romanovsky <leon@kernel.org>
Christophe JAILLET [Fri, 8 Jul 2022 16:47:27 +0000 (18:47 +0200)]
RDMA/rtrs-clt: Use the bitmap API to allocate bitmaps

Use bitmap_zalloc()/bitmap_free() instead of hand-writing them.

It is less verbose and it improves the semantics.

Link: https://lore.kernel.org/r/ca9c5c8301d76d60de34640568b3db0d4401d050.1657298747.git.christophe.jaillet@wanadoo.fr
Signed-off-by: Christophe JAILLET <christophe.jaillet@wanadoo.fr>
Acked-by: Jack Wang <jinpu.wang@ionos.com>
Signed-off-by: Leon Romanovsky <leon@kernel.org>
Christophe JAILLET [Fri, 8 Jul 2022 17:20:39 +0000 (19:20 +0200)]
RDMA/qib: Use the bitmap API to allocate bitmaps

Use bitmap_zalloc()/bitmap_free() instead of hand-writing them.

It is less verbose and it improves the semantics.

Link: https://lore.kernel.org/r/f7a8588447679e80a438b6188b0603c1a11ad877.1657300671.git.christophe.jaillet@wanadoo.fr
Signed-off-by: Christophe JAILLET <christophe.jaillet@wanadoo.fr>
Reviewed-by: Dennis Dalessandro <dennis.dalessandro@cornelisnetworks.com>
Signed-off-by: Leon Romanovsky <leon@kernel.org>
Mustafa Ismail [Tue, 5 Jul 2022 23:08:15 +0000 (18:08 -0500)]
RDMA/irdma: Fix setting of QP context err_rq_idx_valid field

Setting err_rq_idx_valid field in QP context when the AE source of the
AEQE is not associated with an RQ causes the firmware flush to fail.

Set err_rq_idx_valid field in QP context only if it is associated with an
RQ. Additionally, cleanup the redundant setting of this field in
irdma_process_aeq.

Fixes: 44d9e52977a1 ("RDMA/irdma: Implement device initialization definitions")
Link: https://lore.kernel.org/r/20220705230815.265-8-shiraz.saleem@intel.com
Signed-off-by: Mustafa Ismail <mustafa.ismail@intel.com>
Signed-off-by: Shiraz Saleem <shiraz.saleem@intel.com>
Signed-off-by: Leon Romanovsky <leon@kernel.org>
Mustafa Ismail [Tue, 5 Jul 2022 23:08:14 +0000 (18:08 -0500)]
RDMA/irdma: Fix VLAN connection with wildcard address

When an application listens on a wildcard address, and there are VLAN and
non-VLAN IP addresses, iWARP connection establishment can fail if the listen
node VLAN ID does not match.

Fix this by checking the vlan_id only if the listen node is not a wildcard
listen node.
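
A minimal sketch of the match rule described above (structure and helper
names are illustrative):

  #include <linux/types.h>

  struct demo_listener {
          bool wildcard;          /* bound to the wildcard address */
          u16  vlan_id;
  };

  static bool demo_listener_matches(const struct demo_listener *l,
                                    u16 pkt_vlan_id)
  {
          /* Enforce the VLAN ID only for non-wildcard listen nodes. */
          return l->wildcard || l->vlan_id == pkt_vlan_id;
  }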

Fixes: 146b9756f14c ("RDMA/irdma: Add connection manager")
Link: https://lore.kernel.org/r/20220705230815.265-7-shiraz.saleem@intel.com
Signed-off-by: Mustafa Ismail <mustafa.ismail@intel.com>
Signed-off-by: Shiraz Saleem <shiraz.saleem@intel.com>
Signed-off-by: Leon Romanovsky <leon@kernel.org>
Mustafa Ismail [Tue, 5 Jul 2022 23:08:13 +0000 (18:08 -0500)]
RDMA/irdma: Fix a window for use-after-free

During CQ destroy, an interrupt may cause processing of a CQE after the CQ
resources are freed by irdma_cq_free_rsrc(). Fix this by moving the call to
irdma_cq_free_rsrc() after irdma_sc_cleanup_ceqes(), which is called under
the cq_lock.

Fixes: b48c24c2d710 ("RDMA/irdma: Implement device supported verb APIs")
Link: https://lore.kernel.org/r/20220705230815.265-6-shiraz.saleem@intel.com
Signed-off-by: Bartosz Sobczak <bartosz.sobczak@intel.com>
Signed-off-by: Mustafa Ismail <mustafa.ismail@intel.com>
Signed-off-by: Shiraz Saleem <shiraz.saleem@intel.com>
Signed-off-by: Leon Romanovsky <leon@kernel.org>
Nayan Kumar [Tue, 5 Jul 2022 23:08:12 +0000 (18:08 -0500)]
RDMA/irdma: Make resource distribution algorithm more QP oriented

Adapt the resource distribution algorithm in irdma_cfg_fpm_val to be more
QP oriented. If the configuration is too big for the available memory, trim
the MRs and PBLEs first before trimming the QPs. This also avoids having to
double the number of QPs requested as input to the algorithm for GEN1
devices.

Link: https://lore.kernel.org/r/20220705230815.265-5-shiraz.saleem@intel.com
Signed-off-by: Nayan Kumar <nayan.kumar@intel.com>
Signed-off-by: Shiraz Saleem <shiraz.saleem@intel.com>
Signed-off-by: Leon Romanovsky <leon@kernel.org>
Mustafa Ismail [Tue, 5 Jul 2022 23:08:11 +0000 (18:08 -0500)]
RDMA/irdma: Make CQP invalid state error non-critical

The invalid state error returned by the Control Queue-Pair (CQP) is not a
critical error.

Add it to the irdma_noncrit_err_list and drop reporting it as a device
error message.

Link: https://lore.kernel.org/r/20220705230815.265-4-shiraz.saleem@intel.com
Signed-off-by: Mustafa Ismail <mustafa.ismail@intel.com>
Signed-off-by: Shiraz Saleem <shiraz.saleem@intel.com>
Signed-off-by: Leon Romanovsky <leon@kernel.org>
Mustafa Ismail [Tue, 5 Jul 2022 23:08:10 +0000 (18:08 -0500)]
RDMA/irdma: Add AE source to error log

To assist with debugging add the Asynchronous Event (AE) source when
logging the abnormal AE error log message.

Link: https://lore.kernel.org/r/20220705230815.265-3-shiraz.saleem@intel.com
Signed-off-by: Mustafa Ismail <mustafa.ismail@intel.com>
Signed-off-by: Shiraz Saleem <shiraz.saleem@intel.com>
Signed-off-by: Leon Romanovsky <leon@kernel.org>
Mustafa Ismail [Tue, 5 Jul 2022 23:08:09 +0000 (18:08 -0500)]
RDMA/irdma: Add 2 level PBLE support for FMR

Level 2 Physical Buffer List Entry (PBLE) is currently not supported for
Fast MRs, which limits memory registrations to 256K pages.

Adapt irdma_set_page and irdma_alloc_mr to allow for 2 level PBLEs.

Link: https://lore.kernel.org/r/20220705230815.265-2-shiraz.saleem@intel.com
Signed-off-by: Mustafa Ismail <mustafa.ismail@intel.com>
Signed-off-by: Shiraz Saleem <shiraz.saleem@intel.com>
Signed-off-by: Leon Romanovsky <leon@kernel.org>
Mark Bloch [Sun, 3 Jul 2022 20:54:05 +0000 (13:54 -0700)]
net/mlx5: fs, allow flow table creation with a UID

Add a UID field to the flow table attributes to allow creating flow tables
with a non-default UID (the default is zero).

Signed-off-by: Mark Bloch <mbloch@nvidia.com>
Reviewed-by: Alex Vesker <valex@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
Mark Bloch [Sun, 3 Jul 2022 20:54:04 +0000 (13:54 -0700)]
net/mlx5: fs, expose flow table ID to users

Expose the flow table ID to users. This will be used by downstream
patches to allow creating steering rules that point to a flow table ID.

Signed-off-by: Mark Bloch <mbloch@nvidia.com>
Reviewed-by: Maor Gottlieb <maorg@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
Mark Bloch [Sun, 3 Jul 2022 20:54:03 +0000 (13:54 -0700)]
net/mlx5: Expose the ability to point to any UID from shared UID

Expose shared_object_to_user_object_allowed; this capability means an
object created with the shared UID can point to any UID.

Signed-off-by: Mark Bloch <mbloch@nvidia.com>
Reviewed-by: Maor Gottlieb <maorg@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
Jakub Kicinski [Tue, 5 Jul 2022 23:02:08 +0000 (16:02 -0700)]
ipoib: switch to netif_napi_add_weight()

We want to remove the weight argument from the basic
netif_napi_add() API and just default to 64.
Switch ipoib to the new API for explicitly specifying
the weight.
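
A hedged sketch of the call-site change (the driver structure and poll
callback here are illustrative):

  #include <linux/netdevice.h>

  struct demo_priv {
          struct napi_struct napi;
  };

  static int demo_poll(struct napi_struct *napi, int budget)
  {
          napi_complete_done(napi, 0);
          return 0;
  }

  static void demo_napi_setup(struct net_device *dev, struct demo_priv *priv)
  {
          /* Old form: netif_napi_add(dev, &priv->napi, demo_poll, 64);
           * the explicit-weight variant keeps the same behaviour while
           * netif_napi_add() moves to a fixed default of 64.
           */
          netif_napi_add_weight(dev, &priv->napi, demo_poll, 64);
  }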

Link: https://lore.kernel.org/r/20220705230208.924408-4-kuba@kernel.org
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Signed-off-by: Leon Romanovsky <leon@kernel.org>
Jakub Kicinski [Tue, 5 Jul 2022 23:02:07 +0000 (16:02 -0700)]
IB/hfi1: switch to netif_napi_add_weight()

Since we'll remove the last argument from netif_napi_add() soon, switch
this RDMA driver to netif_napi_add_weight()
for now to avoid cross-tree patches.

Link: https://lore.kernel.org/r/20220705230208.924408-3-kuba@kernel.org
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Signed-off-by: Leon Romanovsky <leon@kernel.org>
Jakub Kicinski [Tue, 5 Jul 2022 23:02:06 +0000 (16:02 -0700)]
IB/hfi1: switch to netif_napi_add_tx()

Switch to the new API not requiring the NAPI_POLL_WEIGHT argument.

Link: https://lore.kernel.org/r/20220705230208.924408-2-kuba@kernel.org
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Signed-off-by: Leon Romanovsky <leon@kernel.org>
Christophe JAILLET [Sun, 3 Jul 2022 07:42:48 +0000 (09:42 +0200)]
RDMA/qib: Use the bitmap API when applicable

Using the bitmap API is less verbose than hand-writing the allocations.
It also improves the semantics.

While at it, initialize the bitmaps. It can't hurt.

Link: https://lore.kernel.org/r/33d8992586d382bec8b8efd83e4729fb7feaf89e.1656834106.git.christophe.jaillet@wanadoo.fr
Signed-off-by: Christophe JAILLET <christophe.jaillet@wanadoo.fr>
Signed-off-by: Leon Romanovsky <leon@kernel.org>
Zhang Jiaming [Fri, 1 Jul 2022 07:48:12 +0000 (15:48 +0800)]
IB: Fix spelling of 'writable'

There is a typo (writeable) in the comments of qib_file_ops.c and
qib_sd7220.c, and in rxe_check_bind_mw().

Link: https://lore.kernel.org/r/20220701074812.12615-1-jiaming@nfschina.com
Link: https://lore.kernel.org/r/20220701080019.13329-1-jiaming@nfschina.com
Signed-off-by: Zhang Jiaming <jiaming@nfschina.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
Bob Pearson [Thu, 30 Jun 2022 19:04:20 +0000 (14:04 -0500)]
RDMA/rxe: Remove unnecessary include statement

rxe_verbs.h includes the file <rdma/rdma_user_rxe.h>. It should have been
<uapi/rdma/rdma_user_rxe.h>; however, it is not used and not required in
this file.

This patch removes the include statement.

Link: https://lore.kernel.org/r/20220630190425.2251-4-rpearsonhpe@gmail.com
Reported-by: Frank Zago <frank.zago@hpe.com>
Signed-off-by: Bob Pearson <rpearsonhpe@gmail.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
Bob Pearson [Thu, 30 Jun 2022 19:04:21 +0000 (14:04 -0500)]
RDMA/rxe: Replace include statement

rxe_queue.h currently includes <uapi/rdma/rdma_user_rxe.h> for a
definition of struct rxe_queue_buf. But it is only used as a pointer so
the definition is not needed.

This patch replaces the include statement with the declaration

     struct rxe_queue_buf;

Link: https://lore.kernel.org/r/20220630190425.2251-5-rpearsonhpe@gmail.com
Reported-by: Frank Zago <frank.zago@hpe.com>
Signed-off-by: Bob Pearson <rpearsonhpe@gmail.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
2 years agoRDMA/rxe: Convert pr_warn/err to pr_debug in pyverbs
Bob Pearson [Thu, 30 Jun 2022 19:04:19 +0000 (14:04 -0500)]
RDMA/rxe: Convert pr_warn/err to pr_debug in pyverbs

The pyverbs test suite generates a few dmesg traces from intentional error
tests. This patch replaces those messages with pr_debug() calls, which
improves the usefulness of the tests.
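
The conversion itself is a one-liner per message; for example (the message
text here is illustrative):

     /* before: intentional-error tests land in dmesg */
     pr_warn("%s: mw not bound\n", __func__);

     /* after: only visible when enabled via dynamic debug */
     pr_debug("%s: mw not bound\n", __func__);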

Link: https://lore.kernel.org/r/20220630190425.2251-3-rpearsonhpe@gmail.com
Signed-off-by: Bob Pearson <rpearsonhpe@gmail.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
2 years agoRDMA/rxe: Fix deadlock in rxe_do_local_ops()
Bob Pearson [Mon, 23 May 2022 22:32:52 +0000 (17:32 -0500)]
RDMA/rxe: Fix deadlock in rxe_do_local_ops()

When a local operation (invalidate mr, reg mr, bind mw) is finished there
will be no ack packet coming from a responder to cause the wqe to be
completed. This may happen anyway if a subsequent wqe performs
IO. Currently if the wqe is signalled the completer tasklet is scheduled
immediately but not otherwise.

This leads to a deadlock if the next wqe has the fence bit set in send
flags and the operation is not signalled. This patch removes the condition
that the wqe must be signalled in order to schedule the completer tasklet,
which is the simplest fix for this deadlock and is fairly low cost. For
local operations this is the analog of always setting the ackreq bit in
all last or only request packets, even if the operation is not signalled.
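
A rough sketch of the idea (rxe-style names, not the literal diff): when a
local operation is retired the completer task is now kicked
unconditionally:

     static void finish_local_op(struct rxe_qp *qp, struct rxe_send_wqe *wqe)
     {
             wqe->state = wqe_state_done;
             wqe->status = IB_WC_SUCCESS;

             /* previously gated on (wqe->wr.send_flags & IB_SEND_SIGNALED),
              * which left a fenced, unsignalled follow-up wqe stuck forever
              */
             rxe_run_task(&qp->comp.task, 1);
     }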

Link: https://lore.kernel.org/r/20220523223251.15350-1-rpearsonhpe@gmail.com
Reported-by: Jenny Hack <jhack@hpe.com>
Fixes: c1a411268a4b ("RDMA/rxe: Move local ops to subroutine")
Signed-off-by: Bob Pearson <rpearsonhpe@gmail.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
2 years agoRDMA/rxe: Merge normal and retry atomic flows
Bob Pearson [Mon, 6 Jun 2022 14:38:37 +0000 (09:38 -0500)]
RDMA/rxe: Merge normal and retry atomic flows

Make the execution of the atomic operation in rxe_atomic_reply()
conditional on res->replay and make duplicate_request() call into
rxe_atomic_reply() to merge the two flows. This is modeled on the behavior
of read reply. Delete the skb from the atomic responder resource since it
is no longer used. Adjust the reference counting of the qp in
send_atomic_ack() for this flow.
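
Conceptually (a sketch only, not the actual code), the merged responder
flow becomes:

     static enum resp_states rxe_atomic_reply(struct rxe_qp *qp,
                                              struct rxe_pkt_info *pkt)
     {
             struct resp_res *res = qp->resp.res;

             if (!res->replay) {
                     /* first pass: execute the atomic op on memory and
                      * save the original value in the resource
                      */
             }

             /* both the normal and the replayed request fall through and
              * build the ACK from the value saved in res
              */
             return RESPST_ACKNOWLEDGE;
     }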

Fixes: 8700e3e7c485 ("Soft RoCE driver")
Link: https://lore.kernel.org/r/20220606143836.3323-6-rpearsonhpe@gmail.com
Signed-off-by: Bob Pearson <rpearsonhpe@gmail.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
2 years agoRDMA/rxe: Move atomic original value to res
Bob Pearson [Mon, 6 Jun 2022 14:38:36 +0000 (09:38 -0500)]
RDMA/rxe: Move atomic original value to res

Move the saved original value to the atomic responder resource.  This
replaces saving it in the qp. In preparation for merging the normal and
retry atomic responder flows.

Link: https://lore.kernel.org/r/20220606143836.3323-5-rpearsonhpe@gmail.com
Signed-off-by: Bob Pearson <rpearsonhpe@gmail.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
2 years agoRDMA/rxe: Move atomic responder res to atomic_reply
Bob Pearson [Mon, 6 Jun 2022 14:38:35 +0000 (09:38 -0500)]
RDMA/rxe: Move atomic responder res to atomic_reply

Move the allocation of the atomic responder resource up into
rxe_atomic_reply() from send_atomic_ack(). In preparation for merging the
normal and retry atomic responder flows.

Link: https://lore.kernel.org/r/20220606143836.3323-4-rpearsonhpe@gmail.com
Signed-off-by: Bob Pearson <rpearsonhpe@gmail.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
2 years agoRDMA/rxe: Add a responder state for atomic reply
Bob Pearson [Mon, 6 Jun 2022 14:38:34 +0000 (09:38 -0500)]
RDMA/rxe: Add a responder state for atomic reply

Add a responder state for atomic reply, similar to read reply, and rename
process_atomic() to rxe_atomic_reply(). In preparation for merging the normal
and retry atomic responder flows.

Link: https://lore.kernel.org/r/20220606143836.3323-3-rpearsonhpe@gmail.com
Signed-off-by: Bob Pearson <rpearsonhpe@gmail.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
2 years agoRDMA/rxe: Move code to rxe_prepare_atomic_res()
Bob Pearson [Mon, 6 Jun 2022 14:38:33 +0000 (09:38 -0500)]
RDMA/rxe: Move code to rxe_prepare_atomic_res()

Separate the code that prepares the atomic responder resource into a
subroutine. This is preparation for merging the normal and retry atomic
responder flows.

Link: https://lore.kernel.org/r/20220606143836.3323-2-rpearsonhpe@gmail.com
Signed-off-by: Bob Pearson <rpearsonhpe@gmail.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
2 years agoRDMA/rxe: Convert read side locking to rcu
Bob Pearson [Sun, 12 Jun 2022 22:34:35 +0000 (17:34 -0500)]
RDMA/rxe: Convert read side locking to rcu

Use rcu_read_lock() for protecting read side operations in rxe_pool.c.
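
The read side in rxe_pool_get_index() then becomes roughly (a sketch of
the pattern, not necessarily the exact code):

     void *rxe_pool_get_index(struct rxe_pool *pool, u32 index)
     {
             struct rxe_pool_elem *elem;
             void *obj = NULL;

             rcu_read_lock();
             elem = xa_load(&pool->xa, index);
             if (elem && kref_get_unless_zero(&elem->ref_cnt))
                     obj = elem->obj;
             rcu_read_unlock();

             return obj;
     }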

Link: https://lore.kernel.org/r/20220612223434.31462-3-rpearsonhpe@gmail.com
Signed-off-by: Bob Pearson <rpearsonhpe@gmail.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
2 years agoRDMA/rxe: Stop lookup of partially built objects
Bob Pearson [Sun, 12 Jun 2022 22:34:34 +0000 (17:34 -0500)]
RDMA/rxe: Stop lookup of partially built objects

Currently the rdma_rxe driver has a security weakness: objects which are
only partially initialized are given indices, which allows external actors
to gain access to them by sending packets that refer to their
index (e.g. qpn, rkey, etc.), causing unpredictable results.

This patch adds a new API rxe_finalize(obj) which enables looking up pool
objects from indices using rxe_pool_get_index() for AH, QP, MR, and
MW. They are added in create verbs only after the objects are fully
initialized.

It also adds a wait for completion to the destroy/dealloc verbs to ensure
that all references have been dropped before returning to rdma_core. This
is done by implementing a new rxe_pool API, rxe_cleanup(), which drops a
reference to the object and then waits for all other references to be
dropped. When the last reference is dropped the object's completion is
signalled via kref. After that the object is cleaned up and, if it was
locally allocated, its memory is freed. In the special case of address
handle objects the delay is implemented separately if the destroy_ah call
is not sleepable.

Combined with deferring cleanup code to type specific cleanup routines
this allows all pending activity referring to objects to complete before
returning to rdma_core.
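
A minimal sketch of the rxe_cleanup() idea (simplified, with approximate
names, and with the non-sleepable AH case omitted):

     int rxe_cleanup(struct rxe_pool_elem *elem)
     {
             struct rxe_pool *pool = elem->pool;

             /* make the object unfindable, then drop our reference */
             xa_erase(&pool->xa, elem->index);
             kref_put(&elem->ref_cnt, rxe_elem_release);

             /* the kref release callback completes elem->complete once
              * the last reference is gone
              */
             wait_for_completion_timeout(&elem->complete,
                                         msecs_to_jiffies(50000));

             if (pool->cleanup)
                     pool->cleanup(elem);
             if (pool->flags & RXE_POOL_ALLOC)
                     kfree(elem->obj);

             return 0;
     }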

Link: https://lore.kernel.org/r/20220612223434.31462-2-rpearsonhpe@gmail.com
Signed-off-by: Bob Pearson <rpearsonhpe@gmail.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
2 years agoRDMA/rxe: Remove useless pkt parameters
Xiao Yang [Thu, 23 Jun 2022 13:16:27 +0000 (21:16 +0800)]
RDMA/rxe: Remove useless pkt parameters

The pkt parameters in prepare_ack_packet(), send_ack() and
send_atomic_ack() were made useless by the following commits, so remove
them.

Fixes: bf139b58af09 ("RDMA/rxe: Remove unused pkt->offset")
Fixes: 3896bde92d03 ("RDMA/rxe: Fix extra copy in prepare_ack_packet")
Link: https://lore.kernel.org/r/20220623131627.18903-1-yangx.jy@fujitsu.com
Signed-off-by: Xiao Yang <yangx.jy@fujitsu.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
2 years agoRDMA: Correct duplicated words in comments
Jiang Jian [Wed, 22 Jun 2022 16:22:13 +0000 (00:22 +0800)]
RDMA: Correct duplicated words in comments

There are duplicated words ('is' and 'for') in comments that need to be
dropped.

Link: https://lore.kernel.org/r/20220622170853.3644-1-jiangjian@cdjrlc.com
Link: https://lore.kernel.org/r/20220623103708.43104-1-jiangjian@cdjrlc.com
Signed-off-by: Jiang Jian <jiangjian@cdjrlc.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
2 years agoIB/iser: Drain the entire QP during destruction flow
Max Gurtovoy [Wed, 15 Jun 2022 08:28:39 +0000 (11:28 +0300)]
IB/iser: Drain the entire QP during destruction flow

It's important to drain both the sq and the rq to make sure all WRs were
flushed before destroying the QP.
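
For reference, the core verbs layer already provides drain helpers; the
change is essentially (illustrative, not the exact iser hunk):

     /* before: only the send queue was drained */
     ib_drain_sq(qp);

     /* after: drain both the SQ and the RQ so every posted WR is flushed */
     ib_drain_qp(qp);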

Link: https://lore.kernel.org/r/20220615082839.26328-1-mgurtovoy@nvidia.com
Reviewed-by: Sergey Gorenko <sergeygo@nvidia.com>
Reviewed-by: Israel Rukshin <israelr@nvidia.com>
Signed-off-by: Max Gurtovoy <mgurtovoy@nvidia.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
2 years agoMerge branch 'mlx5-next' into wip/leon-for-next
Leon Romanovsky [Thu, 16 Jun 2022 17:09:11 +0000 (20:09 +0300)]
Merge branch 'mlx5-next' into wip/leon-for-next

This merge commit includes shared updates to net-next and rdma-next
for upcoming mlx5 features.

1) Updated HW bits and definitions for upcoming features
 1.1) vport debug counters
 1.2) flow meter
 1.3) Execute ASO action for flow entry
 1.4) enhanced CQE compression

2) Add ICM header-modify-pattern RDMA API

Leon Says
=========

SW steering manipulates a packet's headers using "modifying header" actions.
Many of these actions perform the same operation but use different data each time.
Currently we create and keep every one of these actions, which consumes
expensive and limited resources.

Now we introduce a new mechanism - pattern and argument, which splits
a modifying action into two parts:
1. action pattern: contains the operations to be applied on packet's header,
mainly set/add/copy of fields in the packet
2. action data/argument: contains the data to be used by each operation
in the pattern.

This way we reuse the same patterns with different arguments to create new
modifying actions, and since many actions share the same operations, we end
up creating only a small number of patterns that we keep in a dedicated cache.
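
Purely for illustration (these are hypothetical structures, not the mlx5
definitions), the split looks like:

     struct modify_hdr_pattern {
             u32 num_ops;
             u32 ops[MAX_OPS];        /* set/add/copy opcodes + fields, no data */
             struct list_head cache_node;  /* patterns are cached and shared */
     };

     struct modify_hdr_argument {
             u32 data[MAX_OPS];       /* per-action data consumed by the ops */
     };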

These modify header patterns are implemented as a new type of ICM memory,
so the following kernel patch series adds support for this new ICM type.
==========

Link: https://lore.kernel.org/all/20220614184028.51548-1-saeed@kernel.org
Signed-off-by: Leon Romanovsky <leon@kernel.org>
2 years agoRDMA/rxe: fix xa_alloc_cycle() error return value check again
Dongliang Mu [Thu, 9 Jun 2022 07:06:56 +0000 (15:06 +0800)]
RDMA/rxe: fix xa_alloc_cycle() error return value check again

Currently rxe_alloc() checks ret for any non-zero value to indicate an
error, but 1 is also a valid return from xa_alloc_cyclic() and just
indicates that the allocation succeeded with a wrap.

Fix this by modifying the check to be < 0.
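
With the fix the check looks roughly like this (the surrounding names are
approximate):

     err = xa_alloc_cyclic(&pool->xa, &elem->index, NULL, pool->limit,
                           &pool->next, GFP_KERNEL);
     if (err < 0)    /* a return of 1 only means the index wrapped */
             goto err_out;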

Link: https://lore.kernel.org/r/20220609070656.1446121-1-dzm91@hust.edu.cn
Fixes: 3225717f6dfa ("RDMA/rxe: Replace red-black trees by xarrays")
Signed-off-by: Dongliang Mu <mudongliangabcd@gmail.com>
Reviewed-by: Bob Pearson <rpearsonhpe@gmail.com>
Signed-off-by: Leon Romanovsky <leon@kernel.org>
2 years agoRDMA/usnic: Use device_iommu_capable()
Robin Murphy [Wed, 8 Jun 2022 11:46:33 +0000 (12:46 +0100)]
RDMA/usnic: Use device_iommu_capable()

Use the new interface to check the capability for our device
specifically.
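
The conversion is essentially (the capability and error value are shown
for illustration only):

     /* before: the capability was queried per bus */
     if (!iommu_capable(dev->bus, IOMMU_CAP_CACHE_COHERENCY))
             return -EPERM;

     /* after: ask about this specific device */
     if (!device_iommu_capable(dev, IOMMU_CAP_CACHE_COHERENCY))
             return -EPERM;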

Link: https://lore.kernel.org/r/96ffe7050da0aa0ad6bce4705c3532f3ecaf32e3.1654688682.git.robin.murphy@arm.com
Signed-off-by: Robin Murphy <robin.murphy@arm.com>
Signed-off-by: Leon Romanovsky <leon@kernel.org>
2 years agoRDMA/core: Add a netevent notifier to cma
Patrisious Haddad [Tue, 7 Jun 2022 11:32:44 +0000 (14:32 +0300)]
RDMA/core: Add a netevent notifier to cma

Add a netevent callback for cma, mainly to catch NETEVENT_NEIGH_UPDATE.

Previously, when a system with a failover MAC mechanism changed its MAC
address during a CM connection attempt, the RDMA-CM would take a long time
to disconnect and time out because of the incorrect MAC address.

Now when we get a NETEVENT_NEIGH_UPDATE we check whether it is due to a
failover MAC change and, if so, we instantly destroy the CM and notify the
user in order to spare the unnecessary wait for the timeout.
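
A sketch of the mechanism (function and variable names are illustrative):

     #include <linux/notifier.h>
     #include <net/netevent.h>
     #include <net/neighbour.h>

     static int cma_netevent_callback(struct notifier_block *nb,
                                      unsigned long event, void *ctx)
     {
             struct neighbour *neigh = ctx;

             if (event != NETEVENT_NEIGH_UPDATE)
                     return NOTIFY_DONE;

             /* look up cm_ids bound to neigh->dev and the destination IP;
              * on a failover MAC change, tear the connection down now
              */
             return NOTIFY_DONE;
     }

     static struct notifier_block cma_netevent_nb = {
             .notifier_call = cma_netevent_callback,
     };

     /* typically registered from module init and unregistered on exit */
     register_netevent_notifier(&cma_netevent_nb);
     unregister_netevent_notifier(&cma_netevent_nb);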

Link: https://lore.kernel.org/r/bb255c9e301cd50b905663b8e73f7f5133d0e4c5.1654601342.git.leonro@nvidia.com
Signed-off-by: Patrisious Haddad <phaddad@nvidia.com>
Reviewed-by: Mark Zhang <markzhang@nvidia.com>
Signed-off-by: Leon Romanovsky <leon@kernel.org>
2 years agoRDMA/core: Add an rb_tree that stores cm_ids sorted by ifindex and remote IP
Patrisious Haddad [Tue, 7 Jun 2022 11:32:43 +0000 (14:32 +0300)]
RDMA/core: Add an rb_tree that stores cm_ids sorted by ifindex and remote IP

Add to the cma a tree that keeps track of all rdma_id_private channels
that were created while in RoCE mode.

The IDs are sorted first by their netdevice ifindex and then by their
destination IP. IDs with a matching IP end up at the same node in the
tree, since the node's data is a list of all IDs with that destination IP.

The tree allows fast and efficient lookup of IDs using an ifindex and an
IP address, which is useful for identifying relevant net_events promptly.
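
An illustrative sketch of the keying (not the exact structures in the
patch):

     struct id_table_entry {
             struct rb_node rb_node;    /* keyed by (ifindex, dst IP) */
             struct list_head id_list;  /* ids sharing the same key */
     };

     static int cma_id_key_cmp(int ifindex_a, const struct sockaddr *ip_a,
                               int ifindex_b, const struct sockaddr *ip_b)
     {
             if (ifindex_a != ifindex_b)
                     return ifindex_a < ifindex_b ? -1 : 1;

             /* same ifindex: order by raw destination address bytes */
             return memcmp(ip_a, ip_b, sizeof(*ip_a));
     }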

Link: https://lore.kernel.org/r/2fac52c86cc918c634ab24b3867d4aed992f54ec.1654601342.git.leonro@nvidia.com
Signed-off-by: Patrisious Haddad <phaddad@nvidia.com>
Reviewed-by: Mark Zhang <markzhang@nvidia.com>
Signed-off-by: Leon Romanovsky <leon@kernel.org>
2 years agonet/mlx5: Add bits and fields to support enhanced CQE compression
Ofer Levi [Wed, 8 Jun 2022 20:04:52 +0000 (13:04 -0700)]
net/mlx5: Add bits and fields to support enhanced CQE compression

Expose ifc bits and add the needed structure fields and methods to
support the enhanced CQE compression feature.
The enhanced CQE compression feature improves CPU utilization and provides
better packet latency from the NIC to the host.

Signed-off-by: Ofer Levi <oferle@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
2 years agonet/mlx5: Remove not used MLX5_CAP_BITS_RW_MASK
Shay Drory [Wed, 8 Jun 2022 20:04:51 +0000 (13:04 -0700)]
net/mlx5: Remove not used MLX5_CAP_BITS_RW_MASK

Remove the unused MLX5_CAP_BITS_RW_MASK.
While at it, remove CAP_MASK, MLX5_CAP_OFF_CMDIF_CSUM
and MLX5_DEV_CAP_FLAG_*, since MLX5_CAP_BITS_RW_MASK
was their only user.

Signed-off-by: Shay Drory <shayd@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
2 years agonet/mlx5: group fdb cleanup to single function
Shay Drory [Wed, 8 Jun 2022 20:04:50 +0000 (13:04 -0700)]
net/mlx5: group fdb cleanup to single function

Currently, the allocation of the fdb software objects is done in a single
function, as opposed to their cleanup.
Group the cleanup of the fdb software objects into a single function.

Signed-off-by: Shay Drory <shayd@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
2 years agonet/mlx5: Add support EXECUTE_ASO action for flow entry
Jianbo Liu [Wed, 8 Jun 2022 20:04:49 +0000 (13:04 -0700)]
net/mlx5: Add support EXECUTE_ASO action for flow entry

Attach flow meter to FTE with object id and index.
Use metadata register C5 to store the packet color meter result.

Signed-off-by: Jianbo Liu <jianbol@nvidia.com>
Reviewed-by: Roi Dayan <roid@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
2 years agonet/mlx5: Add HW definitions of vport debug counters
Saeed Mahameed [Wed, 8 Jun 2022 20:04:48 +0000 (13:04 -0700)]
net/mlx5: Add HW definitions of vport debug counters

total_q_under_processor_handle - number of queues in error state due to an
async error or errored command.

send_queue_priority_update_flow - number of QP/SQ priority/SL update
events.

cq_overrun - number of times CQ entered an error state due to an
overflow.

async_eq_overrun - number of times an EQ mapped to async events was
overrun.

comp_eq_overrun - number of times an EQ mapped to completion events was
overrun.

quota_exceeded_command - number of commands issued and failed due to quota
exceeded.

invalid_command - number of commands issued and failed due to any reason
other than quota exceeded.

Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
Signed-off-by: Michael Guralnik <michaelgur@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
2 years agonet/mlx5: Add IFC bits and enums for flow meter
Jianbo Liu [Wed, 8 Jun 2022 20:04:47 +0000 (13:04 -0700)]
net/mlx5: Add IFC bits and enums for flow meter

Add/extend structure layouts and defines for flow meter.

Signed-off-by: Jianbo Liu <jianbol@nvidia.com>
Reviewed-by: Ariel Levkovich <lariel@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>