From: "Zhang, Qi Z" <qi.z.zhang@intel.com>
To: Shuang Han <hanshuang87@gmail.com>
Cc: "Yang, Qiming" <qiming.yang@intel.com>,
	"dev@dpdk.org" <dev@dpdk.org>,
	"stable@dpdk.org" <stable@dpdk.org>
Subject: RE: [PATCH v2] net/ice: fix link update
Date: Thu, 4 Jan 2024 02:36:39 +0000	[thread overview]
Message-ID: <DM4PR11MB5994FF9E6F8751E58C207D77D767A@DM4PR11MB5994.namprd11.prod.outlook.com> (raw)
In-Reply-To: <CADG19vpRM10nfdCW-GWdyOuthZtJYb7hrud9ra6k=yFeierc4A@mail.gmail.com>

Thanks for raising this.
I agree with your finding: the current implementation ensures data integrity but does not guarantee 100% correctness, as link status updates from the interrupt handler can still be missed in certain (albeit rare) cases.
Please raise a Bugzilla ticket if this is urgent for your usage, and you are welcome to submit a patch directly.

Regards
Qi

From: Shuang Han <hanshuang87@gmail.com>
Sent: Thursday, January 4, 2024 9:17 AM
To: Zhang, Qi Z <qi.z.zhang@intel.com>
Cc: Yang, Qiming <qiming.yang@intel.com>; dev@dpdk.org; stable@dpdk.org
Subject: Re: [PATCH v2] net/ice: fix link update

ice_atomic_write_link_status in the ice_link_update function is not protected by the spinlock, so the following situation is possible:
1. dev_start calls ice_link_update with wait_to_complete = 1 and reads link_status = 0.
2. The LSC interrupt handler calls ice_link_update and reads link_status = 1.
3. The LSC interrupt handler calls ice_atomic_write_link_status and updates the link status to 1.
4. dev_start calls ice_atomic_write_link_status and updates the link status back to 0.
So I think ice_atomic_write_link_status must also be protected by the spinlock.
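
A minimal standalone sketch of the locking pattern suggested here (illustrative only, not the ice PMD's actual code: struct demo_pf, query_link_from_hw() and demo_link_update() are placeholders standing in for the driver's pf state, ice_aq_get_link_info() and ice_link_update()):

#include <stdbool.h>
#include <stdio.h>
#include <rte_spinlock.h>

struct demo_pf {
    rte_spinlock_t link_lock; /* mirrors pf->link_lock added by the patch */
    bool link_up;             /* stands in for the stored link status */
};

/* Stand-in for the adminq query (ice_aq_get_link_info in the driver). */
static bool
query_link_from_hw(void)
{
    return true;
}

static void
demo_link_update(struct demo_pf *pf)
{
    rte_spinlock_lock(&pf->link_lock);
    /* Query and write form one critical section, so the dev_start path
     * and the LSC interrupt handler cannot interleave as in steps 1-4
     * above and a stale result can no longer overwrite a newer one.
     */
    pf->link_up = query_link_from_hw();
    rte_spinlock_unlock(&pf->link_lock);
}

int
main(void)
{
    struct demo_pf pf;

    rte_spinlock_init(&pf.link_lock);
    pf.link_up = false;
    demo_link_update(&pf);
    printf("link is %s\n", pf.link_up ? "up" : "down");
    return 0;
}

The applied patch below takes the narrower route of serializing only the ice_aq_get_link_info() call inside ice_get_link_info_safe(); extending the critical section to also cover ice_atomic_write_link_status(), as suggested above, would follow the same pattern.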

Zhang, Qi Z <qi.z.zhang@intel.com> wrote on Thu, Dec 21, 2023 at 10:44:


> -----Original Message-----
> From: Yang, Qiming <qiming.yang@intel.com>
> Sent: Thursday, December 21, 2023 9:44 AM
> To: Zhang, Qi Z <qi.z.zhang@intel.com>
> Cc: dev@dpdk.org; stable@dpdk.org
> Subject: RE: [PATCH v2] net/ice: fix link update
>
> hi
>
> > -----Original Message-----
> > From: Zhang, Qi Z <qi.z.zhang@intel.com>
> > Sent: Thursday, December 14, 2023 4:41 PM
> > To: Yang, Qiming <qiming.yang@intel.com>
> > Cc: dev@dpdk.org; Zhang, Qi Z <qi.z.zhang@intel.com>; stable@dpdk.org
> > Subject: [PATCH v2] net/ice: fix link update
> >
> > The ice_aq_get_link_info function is not thread-safe. However, it
> > may be invoked simultaneously from both dev_start and the LSC
> > interrupt handler, potentially leading to unexpected adminq errors.
> > This patch addresses the issue by introducing a thread-safe wrapper
> > that utilizes a spinlock.
> >
> > Fixes: cf911d90e366 ("net/ice: support link update")
> > Cc: stable@dpdk.org
> >
> > Signed-off-by: Qi Zhang <qi.z.zhang@intel.com>
> > ---
> > v2:
> > - fix coding style warning.
> >
> >  drivers/net/ice/ice_ethdev.c | 26 ++++++++++++++++++++------
> > drivers/net/ice/ice_ethdev.h |  4 ++++
> >  2 files changed, 24 insertions(+), 6 deletions(-)
> >
> > diff --git a/drivers/net/ice/ice_ethdev.c b/drivers/net/ice/ice_ethdev.c
> > index 3ccba4db80..1f8ab5158a 100644
> > --- a/drivers/net/ice/ice_ethdev.c
> > +++ b/drivers/net/ice/ice_ethdev.c
> > @@ -1804,6 +1804,7 @@ ice_pf_setup(struct ice_pf *pf)
> >     }
> >
> >     pf->main_vsi = vsi;
> > +   rte_spinlock_init(&pf->link_lock);
> >
> >     return 0;
> >  }
> > @@ -3621,17 +3622,31 @@ ice_rxq_intr_setup(struct rte_eth_dev *dev)
> >     return 0;
> >  }
> >
> > +static enum ice_status
> > +ice_get_link_info_safe(struct ice_pf *pf, bool ena_lse,
> > +                  struct ice_link_status *link)
> > +{
> > +   struct ice_hw *hw = ICE_PF_TO_HW(pf);
> > +   int ret;
> > +
> > +   rte_spinlock_lock(&pf->link_lock);
> > +
> > +   ret = ice_aq_get_link_info(hw->port_info, ena_lse, link, NULL);
> > +
> > +   rte_spinlock_unlock(&pf->link_lock);
> > +
> > +   return ret;
> > +}
> > +
> >  static void
> >  ice_get_init_link_status(struct rte_eth_dev *dev)
> >  {
> > -   struct ice_hw *hw = ICE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
> >     struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private);
> >     bool enable_lse = dev->data->dev_conf.intr_conf.lsc ? true : false;
> >     struct ice_link_status link_status;
> >     int ret;
> >
> > -   ret = ice_aq_get_link_info(hw->port_info, enable_lse,
> > -                              &link_status, NULL);
> > +   ret = ice_get_link_info_safe(pf, enable_lse, &link_status);
> >     if (ret != ICE_SUCCESS) {
> >             PMD_DRV_LOG(ERR, "Failed to get link info");
> >             pf->init_link_up = false;
> > @@ -3996,7 +4011,7 @@ ice_link_update(struct rte_eth_dev *dev, int wait_to_complete)
> >  {
> >  #define CHECK_INTERVAL 50  /* 50ms */
> >  #define MAX_REPEAT_TIME 40  /* 2s (40 * 50ms) in total */
> > -   struct ice_hw *hw = ICE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
> > +   struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private);
> >     struct ice_link_status link_status;
> >     struct rte_eth_link link, old;
> >     int status;
> > @@ -4010,8 +4025,7 @@ ice_link_update(struct rte_eth_dev *dev, int wait_to_complete)
> >
> >     do {
> >             /* Get link status information from hardware */
> > -           status = ice_aq_get_link_info(hw->port_info, enable_lse,
> > -                                         &link_status, NULL);
> > +           status = ice_get_link_info_safe(pf, enable_lse, &link_status);
> >             if (status != ICE_SUCCESS) {
> >                     link.link_speed = RTE_ETH_SPEED_NUM_100M;
> >                     link.link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
> > diff --git a/drivers/net/ice/ice_ethdev.h b/drivers/net/ice/ice_ethdev.h
> > index abe6dcdc23..d607f028e0 100644
> > --- a/drivers/net/ice/ice_ethdev.h
> > +++ b/drivers/net/ice/ice_ethdev.h
> > @@ -548,6 +548,10 @@ struct ice_pf {
> >     uint64_t rss_hf;
> >     struct ice_tm_conf tm_conf;
> >     uint16_t outer_ethertype;
> > +   /* lock prevent race condition between lsc interrupt handler
> > +    * and link status update during dev_start.
> > +    */
> > +   rte_spinlock_t link_lock;
> >  };
> >
> >  #define ICE_MAX_QUEUE_NUM  2048
> > --
> > 2.31.1
>
>
> Acked-by: Qiming Yang <qiming.yang@intel.com>

Applied to dpdk-next-net-intel.

Thanks
Qi

Thread overview: 5+ messages
2023-12-14  8:40 Qi Zhang
2023-12-21  1:43 ` Yang, Qiming
2023-12-21  2:43   ` Zhang, Qi Z
2024-01-04  1:17     ` Shuang Han
2024-01-04  2:36       ` Zhang, Qi Z [this message]
