author    Dean Balandin <dbalandin@marvell.com>
          Wed, 2 Jun 2021 18:42:46 +0000 (21:42 +0300)
committer David S. Miller <davem@davemloft.net>
          Thu, 3 Jun 2021 21:11:21 +0000 (14:11 -0700)
commit    35155e2626dcae187df7071550fbfd94b7113d6c
tree      59cd234f8b1561db0d38c4d0990661d6fc2f34d2
parent    e4ba452ded39caae59dcecba7412c34750b6e229
nvme-tcp-offload: Add IO level implementation

In this patch, we present the IO-level functionality.
The nvme-tcp-offload ULP works at the IO level: it passes each request
to the nvme-tcp-offload vendor driver and waits for the request
completion, with no additional handling in between. This design reduces
CPU utilization, as described below.

The nvme-tcp-offload vendor driver registers the following IO-path ops
with the nvme-tcp-offload ULP (see the sketch after this list):
 - send_req - passes the request to the offload driver, which hands it
   to the vendor-specific device
 - poll_queue - polls the given queue for request completions
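To make the interface concrete, here is a minimal C sketch of a vendor
driver registering its IO-path ops. The struct layout, field names, and
signatures are assumptions for illustration; the authoritative
definitions live in drivers/nvme/host/tcp-offload.h.

  /* Sketch only: types and signatures are assumed, not quoted from
   * drivers/nvme/host/tcp-offload.h.
   */
  #include <linux/module.h>

  struct nvme_tcp_ofld_req;
  struct nvme_tcp_ofld_queue;

  struct nvme_tcp_ofld_ops {
          const char *name;
          struct module *module;
          /* Hand the request to the vendor-specific device. */
          int (*send_req)(struct nvme_tcp_ofld_req *req);
          /* Poll the queue for completed requests. */
          int (*poll_queue)(struct nvme_tcp_ofld_queue *queue);
  };

  static int example_send_req(struct nvme_tcp_ofld_req *req)
  {
          /* Queue the request on the device; the completion is
           * reported asynchronously by the hardware.
           */
          return 0;
  }

  static int example_poll_queue(struct nvme_tcp_ofld_queue *queue)
  {
          /* Process completions pending on this queue. */
          return 0;
  }

  static struct nvme_tcp_ofld_ops example_ofld_ops = {
          .name       = "example_vendor",
          .module     = THIS_MODULE,
          .send_req   = example_send_req,
          .poll_queue = example_poll_queue,
  };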

The vendor driver manages the context from which the request is
executed, as well as request aggregation.
Once the IO has completed, the nvme-tcp-offload vendor driver calls
command.done(), which invokes the nvme-tcp-offload ULP layer to
complete the request.
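A minimal sketch of that completion hand-off, assuming the ULP stores a
done callback on the request before calling send_req (the field names
and callback signature here are assumptions, not quoted from the
header):

  /* Sketch: vendor driver completing an IO back to the ULP. */
  #include <linux/nvme.h>
  #include <linux/types.h>

  struct request;

  struct nvme_tcp_ofld_req {
          struct request *rq;
          /* Set by the ULP; invoked by the vendor driver once the
           * device reports the IO as done.
           */
          void (*done)(struct nvme_tcp_ofld_req *req,
                       union nvme_result *result,
                       __le16 status);
  };

  /* Called from the vendor driver's completion path, e.g. from its
   * poll_queue handling.
   */
  static void example_complete_io(struct nvme_tcp_ofld_req *req,
                                  union nvme_result *result,
                                  __le16 status)
  {
          req->done(req, result, status);
  }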

This patch also adds support for the nvme-tcp-offload timeout handling
and the nvme-tcp-offload ASYNC (asynchronous event request) flow.
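For the timeout flow, such a handler would typically plug into blk-mq's
.timeout hook. The sketch below uses the standard blk_eh_timer_return
values; the function name and body are illustrative, not the actual
tcp-offload.c implementation.

  #include <linux/blk-mq.h>

  /* Illustrative timeout handler (5.13-era .timeout signature). */
  static enum blk_eh_timer_return example_timeout(struct request *rq,
                                                  bool reserved)
  {
          /* An NVMe transport typically starts error recovery here
           * and lets recovery complete the request, returning
           * BLK_EH_RESET_TIMER so blk-mq re-arms the timer; it
           * returns BLK_EH_DONE only if it completed the request
           * itself.
           */
          return BLK_EH_RESET_TIMER;
  }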

Acked-by: Igor Russkikh <irusskikh@marvell.com>
Signed-off-by: Dean Balandin <dbalandin@marvell.com>
Signed-off-by: Prabhakar Kushwaha <pkushwaha@marvell.com>
Signed-off-by: Omkar Kulkarni <okulkarni@marvell.com>
Signed-off-by: Michal Kalderon <mkalderon@marvell.com>
Signed-off-by: Ariel Elior <aelior@marvell.com>
Signed-off-by: Shai Malin <smalin@marvell.com>
Reviewed-by: Hannes Reinecke <hare@suse.de>
Reviewed-by: Himanshu Madhani <himanshu.madhani@oracle.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
drivers/nvme/host/tcp-offload.c
drivers/nvme/host/tcp-offload.h