drm/i915: Simplify VLV drain latency computation
author    Ville Syrjälä <ville.syrjala@linux.intel.com>
          Thu, 5 Mar 2015 19:19:43 +0000 (21:19 +0200)
committer Daniel Vetter <daniel.vetter@ffwll.ch>
          Tue, 17 Mar 2015 21:30:02 +0000 (22:30 +0100)
The current drain latency computation relies on hardcoded limits to
determine when to use the low vs. high precision multiplier.
Rewrite the code to use a more straightforward approach.

Reviewed-by: Jesse Barnes <jbarnes@virtuousgeek.org>
Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
drivers/gpu/drm/i915/intel_pm.c

index cc3c2d9..c198dba 100644
@@ -755,12 +755,15 @@ static bool vlv_compute_drain_latency(struct drm_crtc *crtc,
                return false;
 
        entries = DIV_ROUND_UP(clock, 1000) * pixel_size;
-       if (IS_CHERRYVIEW(dev))
-               *prec_mult = (entries > 32) ? 16 : 8;
-       else
-               *prec_mult = (entries > 128) ? 64 : 32;
+
+       *prec_mult = IS_CHERRYVIEW(dev) ? 16 : 64;
        *drain_latency = (64 * (*prec_mult) * 4) / entries;
 
+       if (*drain_latency > DRAIN_LATENCY_MASK) {
+               *prec_mult /= 2;
+               *drain_latency = (64 * (*prec_mult) * 4) / entries;
+       }
+
        if (*drain_latency > DRAIN_LATENCY_MASK)
                *drain_latency = DRAIN_LATENCY_MASK;