scsi: lpfc: Fix crash in blk_mq layer when executing modprobe -r lpfc
author James Smart <jsmart2021@gmail.com>
Fri, 25 May 2018 04:08:59 +0000 (21:08 -0700)
committer Martin K. Petersen <martin.petersen@oracle.com>
Tue, 29 May 2018 02:40:33 +0000 (22:40 -0400)
commit 7438273fa23bea6d1e647e66c451570b86e2758b
tree 5d5ee28789dc9ea6c93fc403c50b708442a8a589
parent 4d5e789a2eb111d7f9e032d0ebaecb465a2eca8f

modprobe -r lpfc produces the following call trace:

Call Trace:
 __blk_mq_run_hw_queue+0xa2/0xb0
 __blk_mq_delay_run_hw_queue+0x9d/0xb0
 ? blk_mq_hctx_has_pending+0x32/0x80
 blk_mq_run_hw_queue+0x50/0xd0
 blk_mq_sched_insert_request+0x110/0x1b0
 blk_execute_rq_nowait+0x76/0x180
 nvme_keep_alive_work+0x8a/0xd0 [nvme_core]
 process_one_work+0x17f/0x440
 worker_thread+0x126/0x3c0
 ? manage_workers.isra.24+0x2a0/0x2a0
 kthread+0xd1/0xe0
 ? insert_kthread_work+0x40/0x40
 ret_from_fork_nospec_begin+0x21/0x21
 ? insert_kthread_work+0x40/0x40

However, rmmod lpfc would run correctly.

When an nvme remoteport is unregistered from the host nvme transport, the
driver needs to set the remoteport's dev_loss_tmo value to 0 to indicate
immediate termination of device loss and to prevent any further keep
alives to that rport.  The driver was never setting dev_loss_tmo, so the
nvme transport continued to send keep alives.
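A minimal sketch of the approach (not the verbatim patch; the function
name and surrounding context are illustrative): before the driver calls
nvme_fc_unregister_remoteport(), it zeroes dev_loss_tmo through the
transport's nvme_fc_set_remoteport_devloss() helper.

#include <linux/nvme-fc-driver.h>

/* Sketch only: the unregister-path context here is illustrative and
 * does not reproduce the exact lpfc_nvme.c change.
 */
static void
lpfc_nvme_rport_unreg_sketch(struct nvme_fc_remote_port *remoteport)
{
	int ret;

	/* dev_loss_tmo = 0: device loss takes effect immediately, so
	 * the transport must not issue further keep alives to this
	 * rport.
	 */
	nvme_fc_set_remoteport_devloss(remoteport, 0);

	ret = nvme_fc_unregister_remoteport(remoteport);
	if (ret)
		pr_err("nvme_fc_unregister_remoteport failed %d\n", ret);
}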

Signed-off-by: Dick Kennedy <dick.kennedy@broadcom.com>
Signed-off-by: James Smart <james.smart@broadcom.com>
Reviewed-by: Hannes Reinecke <hare@suse.com>
Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
drivers/scsi/lpfc/lpfc_nvme.c