From: Paul Zimmerman <pauldzim@gmail.com>
To: Jani Nikula <jani.nikula@linux.intel.com>,
	Joonas Lahtinen <joonas.lahtinen@linux.intel.com>,
	Rodrigo Vivi <rodrigo.vivi@intel.com>
Cc: intel-gfx@lists.freedesktop.org, linux-kernel@vger.kernel.org
Subject: Re: [PATCH RFC] Revert "drm/i915: Try to use fast+narrow link on eDP again and fall back to the old max strategy on failure"
Date: Sun, 9 May 2021 16:19:52 -0700
Message-ID: <CADBGO78C0T8dx9pgvim6oYd1K2DCvh8pctUwdWpgCP_VqRKbBQ@mail.gmail.com>
In-Reply-To: <20210509044330.4655-1-pauldzim@gmail.com>

On Sat, May 8, 2021 at 9:44 PM Paul Zimmerman <pauldzim@gmail.com> wrote:
>
> This reverts commit 2bbd6dba84d44219387df051a1c799b7bac46099.
>
> Since 5.12-rc2, my Dell XPS-15 laptop has had a blank screen on boot.
> The system seems to run fine other than having no display; I am able
> to ssh into the machine. I don't see anything interesting in the dmesg
> log. I bisected the problem down to this commit, and reverting it fixes
> the problem.
>
> Signed-off-by: Paul Zimmerman <pauldzim@gmail.com>

Replying to myself to say that I just tested kernel 5.13-rc1: the
problem still exists there, and the same revert fixes it.

- Paul

> ---
> I am attaching the dmesg log from 5.12.0 when the problem occurs. Is
> there any other debugging info you would like me to provide?
>
>  .../drm/i915/display/intel_display_types.h    |  1 -
>  drivers/gpu/drm/i915/display/intel_dp.c       | 75 +++----------------
>  2 files changed, 9 insertions(+), 67 deletions(-)
>
> diff --git a/drivers/gpu/drm/i915/display/intel_display_types.h b/drivers/gpu/drm/i915/display/intel_display_types.h
> index 184ecbbcec99..196900100689 100644
> --- a/drivers/gpu/drm/i915/display/intel_display_types.h
> +++ b/drivers/gpu/drm/i915/display/intel_display_types.h
> @@ -1424,7 +1424,6 @@ struct intel_dp {
>         bool has_hdmi_sink;
>         bool has_audio;
>         bool reset_link_params;
> -       bool use_max_params;
>         u8 dpcd[DP_RECEIVER_CAP_SIZE];
>         u8 psr_dpcd[EDP_PSR_RECEIVER_CAP_SIZE];
>         u8 downstream_ports[DP_MAX_DOWNSTREAM_PORTS];
> diff --git a/drivers/gpu/drm/i915/display/intel_dp.c b/drivers/gpu/drm/i915/display/intel_dp.c
> index 775d89b6c3fc..238ae1099132 100644
> --- a/drivers/gpu/drm/i915/display/intel_dp.c
> +++ b/drivers/gpu/drm/i915/display/intel_dp.c
> @@ -480,13 +480,6 @@ int intel_dp_get_link_train_fallback_values(struct intel_dp *intel_dp,
>                 return -1;
>         }
>
> -       if (intel_dp_is_edp(intel_dp) && !intel_dp->use_max_params) {
> -               drm_dbg_kms(&i915->drm,
> -                           "Retrying Link training for eDP with max parameters\n");
> -               intel_dp->use_max_params = true;
> -               return 0;
> -       }
> -
>         index = intel_dp_rate_index(intel_dp->common_rates,
>                                     intel_dp->num_common_rates,
>                                     link_rate);
> @@ -1174,44 +1167,6 @@ intel_dp_compute_link_config_wide(struct intel_dp *intel_dp,
>         return -EINVAL;
>  }
>
> -/* Optimize link config in order: max bpp, min lanes, min clock */
> -static int
> -intel_dp_compute_link_config_fast(struct intel_dp *intel_dp,
> -                                 struct intel_crtc_state *pipe_config,
> -                                 const struct link_config_limits *limits)
> -{
> -       const struct drm_display_mode *adjusted_mode = &pipe_config->hw.adjusted_mode;
> -       int bpp, clock, lane_count;
> -       int mode_rate, link_clock, link_avail;
> -
> -       for (bpp = limits->max_bpp; bpp >= limits->min_bpp; bpp -= 2 * 3) {
> -               int output_bpp = intel_dp_output_bpp(pipe_config->output_format, bpp);
> -
> -               mode_rate = intel_dp_link_required(adjusted_mode->crtc_clock,
> -                                                  output_bpp);
> -
> -               for (lane_count = limits->min_lane_count;
> -                    lane_count <= limits->max_lane_count;
> -                    lane_count <<= 1) {
> -                       for (clock = limits->min_clock; clock <= limits->max_clock; clock++) {
> -                               link_clock = intel_dp->common_rates[clock];
> -                               link_avail = intel_dp_max_data_rate(link_clock,
> -                                                                   lane_count);
> -
> -                               if (mode_rate <= link_avail) {
> -                                       pipe_config->lane_count = lane_count;
> -                                       pipe_config->pipe_bpp = bpp;
> -                                       pipe_config->port_clock = link_clock;
> -
> -                                       return 0;
> -                               }
> -                       }
> -               }
> -       }
> -
> -       return -EINVAL;
> -}
> -
>  static int intel_dp_dsc_compute_bpp(struct intel_dp *intel_dp, u8 dsc_max_bpc)
>  {
>         int i, num_bpc;
> @@ -1435,14 +1390,13 @@ intel_dp_compute_link_config(struct intel_encoder *encoder,
>         limits.min_bpp = intel_dp_min_bpp(pipe_config->output_format);
>         limits.max_bpp = intel_dp_max_bpp(intel_dp, pipe_config);
>
> -       if (intel_dp->use_max_params) {
> +       if (intel_dp_is_edp(intel_dp)) {
>                 /*
>                  * Use the maximum clock and number of lanes the eDP panel
> -                * advertizes being capable of in case the initial fast
> -                * optimal params failed us. The panels are generally
> +                * advertizes being capable of. The panels are generally
>                  * designed to support only a single clock and lane
> -                * configuration, and typically on older panels these
> -                * values correspond to the native resolution of the panel.
> +                * configuration, and typically these values correspond to the
> +                * native resolution of the panel.
>                  */
>                 limits.min_lane_count = limits.max_lane_count;
>                 limits.min_clock = limits.max_clock;
> @@ -1461,22 +1415,11 @@ intel_dp_compute_link_config(struct intel_encoder *encoder,
>             intel_dp_can_bigjoiner(intel_dp))
>                 pipe_config->bigjoiner = true;
>
> -       if (intel_dp_is_edp(intel_dp))
> -               /*
> -                * Optimize for fast and narrow. eDP 1.3 section 3.3 and eDP 1.4
> -                * section A.1: "It is recommended that the minimum number of
> -                * lanes be used, using the minimum link rate allowed for that
> -                * lane configuration."
> -                *
> -                * Note that we fall back to the max clock and lane count for eDP
> -                * panels that fail with the fast optimal settings (see
> -                * intel_dp->use_max_params), in which case the fast vs. wide
> -                * choice doesn't matter.
> -                */
> -               ret = intel_dp_compute_link_config_fast(intel_dp, pipe_config, &limits);
> -       else
> -               /* Optimize for slow and wide. */
> -               ret = intel_dp_compute_link_config_wide(intel_dp, pipe_config, &limits);
> +       /*
> +        * Optimize for slow and wide. This is the place to add alternative
> +        * optimization policy.
> +        */
> +       ret = intel_dp_compute_link_config_wide(intel_dp, pipe_config, &limits);
>
>         /* enable compression if the mode doesn't fit available BW */
>         drm_dbg_kms(&i915->drm, "Force DSC en = %d\n", intel_dp->force_dsc_en);
> --
> 2.25.1
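
For context, the difference between the "fast and narrow" and "slow and wide"
orderings mentioned in the comment hunks above comes down to loop nesting.
Below is a minimal standalone sketch, not the actual i915 code (the driver also
folds bpp, DSC and bigjoiner handling into the same decision); the struct, rate
table and bandwidth check are simplified stand-ins. It only illustrates how the
removed "fast and narrow" search (prefer the fewest lanes, then the lowest rate
for that lane count) differs from the "slow and wide" search the revert falls
back to (prefer the lowest rate, then the fewest lanes at that rate).

/*
 * Standalone sketch of the two eDP link-parameter search orders.
 * The types, rate table and bandwidth check are simplified stand-ins,
 * not the real i915 structures; only the loop nesting is the point.
 */
#include <stdbool.h>
#include <stdio.h>

struct limits {
        int min_lanes, max_lanes;   /* e.g. 1 .. 4 */
        int min_rate, max_rate;     /* indexes into rates_khz[] */
};

static const int rates_khz[] = { 162000, 270000, 540000, 810000 };

/* "Fast and narrow": prefer the fewest lanes, then the lowest rate. */
static bool pick_fast_narrow(const struct limits *lim, long needed,
                             int *lanes_out, int *rate_out)
{
        for (int lanes = lim->min_lanes; lanes <= lim->max_lanes; lanes <<= 1)
                for (int i = lim->min_rate; i <= lim->max_rate; i++)
                        if ((long)rates_khz[i] * lanes >= needed) {
                                *lanes_out = lanes;
                                *rate_out = rates_khz[i];
                                return true;
                        }
        return false;
}

/* "Slow and wide": prefer the lowest rate, then the fewest lanes. */
static bool pick_slow_wide(const struct limits *lim, long needed,
                           int *lanes_out, int *rate_out)
{
        for (int i = lim->min_rate; i <= lim->max_rate; i++)
                for (int lanes = lim->min_lanes; lanes <= lim->max_lanes; lanes <<= 1)
                        if ((long)rates_khz[i] * lanes >= needed) {
                                *lanes_out = lanes;
                                *rate_out = rates_khz[i];
                                return true;
                        }
        return false;
}

int main(void)
{
        struct limits lim = { 1, 4, 0, 3 };
        int lanes, rate;

        /* Made-up bandwidth requirement, just to show the answers differ. */
        if (pick_fast_narrow(&lim, 600000, &lanes, &rate))
                printf("fast+narrow: %d lane(s) at %d kHz\n", lanes, rate);
        if (pick_slow_wide(&lim, 600000, &lanes, &rate))
                printf("slow+wide:   %d lane(s) at %d kHz\n", lanes, rate);
        return 0;
}

With the made-up 600000 kHz requirement, the fast path picks 1 lane at
810000 kHz while the wide path picks 4 lanes at 162000 kHz. The comments in
the hunks above note that eDP panels are generally designed around a single
clock/lane configuration, which is why falling back to (or always using) the
panel's maximum advertised parameters can matter for panels that fail to
train with the "optimal" narrow settings.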

Thread overview: 13+ messages
2021-05-09  4:43 [PATCH RFC] Revert "drm/i915: Try to use fast+narrow link on eDP again and fall back to the old max strategy on failure" Paul Zimmerman
2021-05-09  4:43 ` [Intel-gfx] " Paul Zimmerman
2021-05-09 23:19 ` Paul Zimmerman [this message]
2021-05-09 23:19   ` Paul Zimmerman
2021-05-10 10:31 ` [Intel-gfx] ✗ Fi.CI.BUILD: failure for " Patchwork
2021-05-10 13:59 ` [Intel-gfx] [PATCH RFC] " Timo Aaltonen
2021-05-10 13:59   ` Timo Aaltonen
2021-05-10 20:49   ` Paul Zimmerman
2021-05-10 20:49     ` Paul Zimmerman
2021-05-11 12:55     ` Jani Nikula
2021-05-11 12:55       ` Jani Nikula
2021-05-11 12:56       ` Jani Nikula
2021-05-11 12:56         ` Jani Nikula
