On really fast storage it can be beneficial to delay running the
request_queue to allow the elevator more opportunity to merge requests.

Otherwise, it has been observed that requests are being sent to
q->request_fn much quicker than is ideal on IOPS-bound backends.

Signed-off-by: Mike Snitzer <snitzer@redhat.com>
static void rq_completed(struct mapped_device *md, int rw, bool run_queue)
{
+	int nr_requests_pending;
+
	atomic_dec(&md->pending[rw]);

	/* nudge anyone waiting on suspend queue */
-	if (!md_in_flight(md))
+	nr_requests_pending = md_in_flight(md);
+	if (!nr_requests_pending)
		wake_up(&md->wait);
	/*
	 * Run this off this callpath, as drivers could invoke end_io while
	 * inside their request_fn (and holding the queue lock). Calling
	 * back into ->request_fn() could deadlock attempting to grab the
	 * queue lock again.
	 */
- if (run_queue)
- blk_run_queue_async(md->queue);
+ if (run_queue) {
+ if (!nr_requests_pending ||
+ (nr_requests_pending >= md->queue->nr_congestion_on))
+ blk_run_queue_async(md->queue);
+ }
	/*
	 * dm_put() must be at the end of this function. See the comment above