DPDK patches and discussions
* [PATCH] net/ice: fix ice dcf contrl thread crash
@ 2023-02-08  8:30 Ke Zhang
  2023-02-09  0:05 ` Stephen Hemminger
                   ` (3 more replies)
  0 siblings, 4 replies; 24+ messages in thread
From: Ke Zhang @ 2023-02-08  8:30 UTC (permalink / raw)
  To: qi.z.zhang, qiming.yang, dev; +Cc: Ke Zhang

The control thread accesses the hardware resources after the
resources were released, resulting in a segmentation fault.

This commit fixes the bug by exiting the thread before the resources
are released.

Signed-off-by: Ke Zhang <ke1x.zhang@intel.com>
---
 drivers/net/ice/ice_dcf.c | 4 ++++
 1 file changed, 4 insertions(+)

diff --git a/drivers/net/ice/ice_dcf.c b/drivers/net/ice/ice_dcf.c
index 1c3d22ae0f..e58908caf5 100644
--- a/drivers/net/ice/ice_dcf.c
+++ b/drivers/net/ice/ice_dcf.c
@@ -543,6 +543,8 @@ ice_dcf_handle_vsi_update_event(struct ice_dcf_hw *hw)
 	ice_dcf_disable_irq0(hw);
 
 	for (;;) {
+		if (hw->vc_event_msg_cb == NULL)
+			pthread_exit(NULL);
 		if (ice_dcf_get_vf_resource(hw) == 0 &&
 		    ice_dcf_get_vf_vsi_map(hw) >= 0) {
 			err = 0;
@@ -760,6 +762,8 @@ ice_dcf_uninit_hw(struct rte_eth_dev *eth_dev, struct ice_dcf_hw *hw)
 	rte_intr_callback_unregister(intr_handle,
 				     ice_dcf_dev_interrupt_handler, hw);
 
+	hw->vc_event_msg_cb = NULL;
+
 	ice_dcf_mode_disable(hw);
 	iavf_shutdown_adminq(&hw->avf);
 
-- 
2.25.1


^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [PATCH] net/ice: fix ice dcf contrl thread crash
  2023-02-08  8:30 [PATCH] net/ice: fix ice dcf contrl thread crash Ke Zhang
@ 2023-02-09  0:05 ` Stephen Hemminger
  2023-02-13  7:03 ` [PATCH v2] " Ke Zhang
                   ` (2 subsequent siblings)
  3 siblings, 0 replies; 24+ messages in thread
From: Stephen Hemminger @ 2023-02-09  0:05 UTC (permalink / raw)
  To: Ke Zhang; +Cc: qi.z.zhang, qiming.yang, dev

On Wed,  8 Feb 2023 16:30:05 +0800
Ke Zhang <ke1x.zhang@intel.com> wrote:

> +		if (hw->vc_event_msg_cb == NULL)
> +			pthread_exit(NULL);

Do we need an rte_thread_exit() wrapper to be compatible with Windows?
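
For illustration only, a minimal sketch of what such a wrapper could look
like. rte_thread_exit() is not an existing EAL API; the name and the
ExitThread() mapping below are assumptions for discussion, not code taken
from this patch.

/* Hypothetical sketch, not an existing EAL API. */
#ifdef RTE_EXEC_ENV_WINDOWS
#include <windows.h>

static inline void
rte_thread_exit(void)
{
	ExitThread(0);		/* terminate the calling Win32 thread */
}
#else
#include <pthread.h>

static inline void
rte_thread_exit(void)
{
	pthread_exit(NULL);	/* terminate the calling POSIX thread */
}
#endif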

^ permalink raw reply	[flat|nested] 24+ messages in thread

* [PATCH v2] net/ice: fix ice dcf contrl thread crash
  2023-02-08  8:30 [PATCH] net/ice: fix ice dcf contrl thread crash Ke Zhang
  2023-02-09  0:05 ` Stephen Hemminger
@ 2023-02-13  7:03 ` Ke Zhang
  2023-02-21  0:29   ` Zhang, Qi Z
  2023-02-13  7:14 ` Ke Zhang
  2023-02-13  7:16 ` [PATCH v2] net/ice: fix ice dcf control " Ke Zhang
  3 siblings, 1 reply; 24+ messages in thread
From: Ke Zhang @ 2023-02-13  7:03 UTC (permalink / raw)
  To: qi.z.zhang, qiming.yang, dev; +Cc: Ke Zhang

The control thread accesses the hardware resources after the
resources were released, resulting in a segmentation fault.

This commit fixes the bug by exiting the thread before the resources
are released.

Signed-off-by: Ke Zhang <ke1x.zhang@intel.com>
---
v2: add pthread_exit() for windows
---
 drivers/net/ice/ice_dcf.c         | 4 ++++
 lib/eal/windows/include/pthread.h | 5 +++++
 2 files changed, 9 insertions(+)

diff --git a/drivers/net/ice/ice_dcf.c b/drivers/net/ice/ice_dcf.c
index 1c3d22ae0f..e58908caf5 100644
--- a/drivers/net/ice/ice_dcf.c
+++ b/drivers/net/ice/ice_dcf.c
@@ -543,6 +543,8 @@ ice_dcf_handle_vsi_update_event(struct ice_dcf_hw *hw)
 	ice_dcf_disable_irq0(hw);
 
 	for (;;) {
+		if (hw->vc_event_msg_cb == NULL)
+			pthread_exit(NULL);
 		if (ice_dcf_get_vf_resource(hw) == 0 &&
 		    ice_dcf_get_vf_vsi_map(hw) >= 0) {
 			err = 0;
@@ -760,6 +762,8 @@ ice_dcf_uninit_hw(struct rte_eth_dev *eth_dev, struct ice_dcf_hw *hw)
 	rte_intr_callback_unregister(intr_handle,
 				     ice_dcf_dev_interrupt_handler, hw);
 
+	hw->vc_event_msg_cb = NULL;
+
 	ice_dcf_mode_disable(hw);
 	iavf_shutdown_adminq(&hw->avf);
 
diff --git a/lib/eal/windows/include/pthread.h b/lib/eal/windows/include/pthread.h
index 27fd2cca52..f0068ebd73 100644
--- a/lib/eal/windows/include/pthread.h
+++ b/lib/eal/windows/include/pthread.h
@@ -149,6 +149,11 @@ pthread_detach(__rte_unused pthread_t thread)
 	return 0;
 }
 
+static inline void
+pthread_exit(__rte_unused void *__retval)
+{
+}
+
 static inline int
 pthread_join(__rte_unused pthread_t thread,
 	__rte_unused void **value_ptr)
-- 
2.25.1


^ permalink raw reply	[flat|nested] 24+ messages in thread

* [PATCH v2] net/ice: fix ice dcf contrl thread crash
  2023-02-08  8:30 [PATCH] net/ice: fix ice dcf contrl thread crash Ke Zhang
  2023-02-09  0:05 ` Stephen Hemminger
  2023-02-13  7:03 ` [PATCH v2] " Ke Zhang
@ 2023-02-13  7:14 ` Ke Zhang
  2023-02-13  7:16 ` [PATCH v2] net/ice: fix ice dcf control " Ke Zhang
  3 siblings, 0 replies; 24+ messages in thread
From: Ke Zhang @ 2023-02-13  7:14 UTC (permalink / raw)
  To: qi.z.zhang, qiming.yang, dev; +Cc: Ke Zhang

The control thread accesses the hardware resources after the
resources were released, resulting in a segmentation fault.

This commit fixes the bug by exiting the thread before the resources
are released.

Signed-off-by: Ke Zhang <ke1x.zhang@intel.com>
---
v2: add pthread_exit() for windows
---
 drivers/net/ice/ice_dcf.c         | 4 ++++
 lib/eal/windows/include/pthread.h | 5 +++++
 2 files changed, 9 insertions(+)

diff --git a/drivers/net/ice/ice_dcf.c b/drivers/net/ice/ice_dcf.c
index 1c3d22ae0f..e58908caf5 100644
--- a/drivers/net/ice/ice_dcf.c
+++ b/drivers/net/ice/ice_dcf.c
@@ -543,6 +543,8 @@ ice_dcf_handle_vsi_update_event(struct ice_dcf_hw *hw)
 	ice_dcf_disable_irq0(hw);
 
 	for (;;) {
+		if (hw->vc_event_msg_cb == NULL)
+			pthread_exit(NULL);
 		if (ice_dcf_get_vf_resource(hw) == 0 &&
 		    ice_dcf_get_vf_vsi_map(hw) >= 0) {
 			err = 0;
@@ -760,6 +762,8 @@ ice_dcf_uninit_hw(struct rte_eth_dev *eth_dev, struct ice_dcf_hw *hw)
 	rte_intr_callback_unregister(intr_handle,
 				     ice_dcf_dev_interrupt_handler, hw);
 
+	hw->vc_event_msg_cb = NULL;
+
 	ice_dcf_mode_disable(hw);
 	iavf_shutdown_adminq(&hw->avf);
 
diff --git a/lib/eal/windows/include/pthread.h b/lib/eal/windows/include/pthread.h
index 27fd2cca52..f0068ebd73 100644
--- a/lib/eal/windows/include/pthread.h
+++ b/lib/eal/windows/include/pthread.h
@@ -149,6 +149,11 @@ pthread_detach(__rte_unused pthread_t thread)
 	return 0;
 }
 
+static inline void
+pthread_exit(__rte_unused void *__retval)
+{
+}
+
 static inline int
 pthread_join(__rte_unused pthread_t thread,
 	__rte_unused void **value_ptr)
-- 
2.25.1


^ permalink raw reply	[flat|nested] 24+ messages in thread

* [PATCH v2] net/ice: fix ice dcf control thread crash
  2023-02-08  8:30 [PATCH] net/ice: fix ice dcf contrl thread crash Ke Zhang
                   ` (2 preceding siblings ...)
  2023-02-13  7:14 ` Ke Zhang
@ 2023-02-13  7:16 ` Ke Zhang
  2023-02-14 11:03   ` Thomas Monjalon
                     ` (2 more replies)
  3 siblings, 3 replies; 24+ messages in thread
From: Ke Zhang @ 2023-02-13  7:16 UTC (permalink / raw)
  To: qi.z.zhang, qiming.yang, dev; +Cc: Ke Zhang

The control thread accesses the hardware resources after the
resources were released, resulting in a segmentation fault.

This commit fixes the bug by exiting the thread before the resources
are released.

Signed-off-by: Ke Zhang <ke1x.zhang@intel.com>
---
v2: add pthread_exit() for windows
---
 drivers/net/ice/ice_dcf.c         | 4 ++++
 lib/eal/windows/include/pthread.h | 5 +++++
 2 files changed, 9 insertions(+)

diff --git a/drivers/net/ice/ice_dcf.c b/drivers/net/ice/ice_dcf.c
index 1c3d22ae0f..e58908caf5 100644
--- a/drivers/net/ice/ice_dcf.c
+++ b/drivers/net/ice/ice_dcf.c
@@ -543,6 +543,8 @@ ice_dcf_handle_vsi_update_event(struct ice_dcf_hw *hw)
 	ice_dcf_disable_irq0(hw);
 
 	for (;;) {
+		if (hw->vc_event_msg_cb == NULL)
+			pthread_exit(NULL);
 		if (ice_dcf_get_vf_resource(hw) == 0 &&
 		    ice_dcf_get_vf_vsi_map(hw) >= 0) {
 			err = 0;
@@ -760,6 +762,8 @@ ice_dcf_uninit_hw(struct rte_eth_dev *eth_dev, struct ice_dcf_hw *hw)
 	rte_intr_callback_unregister(intr_handle,
 				     ice_dcf_dev_interrupt_handler, hw);
 
+	hw->vc_event_msg_cb = NULL;
+
 	ice_dcf_mode_disable(hw);
 	iavf_shutdown_adminq(&hw->avf);
 
diff --git a/lib/eal/windows/include/pthread.h b/lib/eal/windows/include/pthread.h
index 27fd2cca52..f0068ebd73 100644
--- a/lib/eal/windows/include/pthread.h
+++ b/lib/eal/windows/include/pthread.h
@@ -149,6 +149,11 @@ pthread_detach(__rte_unused pthread_t thread)
 	return 0;
 }
 
+static inline void
+pthread_exit(__rte_unused void *__retval)
+{
+}
+
 static inline int
 pthread_join(__rte_unused pthread_t thread,
 	__rte_unused void **value_ptr)
-- 
2.25.1


^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [PATCH v2] net/ice: fix ice dcf control thread crash
  2023-02-13  7:16 ` [PATCH v2] net/ice: fix ice dcf control " Ke Zhang
@ 2023-02-14 11:03   ` Thomas Monjalon
  2023-02-16  7:53     ` Zhang, Ke1X
  2023-03-01 14:53   ` Kevin Traynor
  2023-03-15  8:20   ` [PATCH v3] " Mingjin Ye
  2 siblings, 1 reply; 24+ messages in thread
From: Thomas Monjalon @ 2023-02-14 11:03 UTC (permalink / raw)
  To: Ke Zhang; +Cc: qi.z.zhang, qiming.yang, dev, Tyler Retzlaff

13/02/2023 08:16, Ke Zhang:
> --- a/lib/eal/windows/include/pthread.h
> +++ b/lib/eal/windows/include/pthread.h
> +static inline void
> +pthread_exit(__rte_unused void *__retval)
> +{
> +}

Please don't add more shim layer.
There is a new layer rte_thread_* in lib/eal/include/rte_thread.h



^ permalink raw reply	[flat|nested] 24+ messages in thread

* RE: [PATCH v2] net/ice: fix ice dcf control thread crash
  2023-02-14 11:03   ` Thomas Monjalon
@ 2023-02-16  7:53     ` Zhang, Ke1X
  2023-02-20  0:30       ` Thomas Monjalon
  0 siblings, 1 reply; 24+ messages in thread
From: Zhang, Ke1X @ 2023-02-16  7:53 UTC (permalink / raw)
  To: Thomas Monjalon; +Cc: Zhang, Qi Z, Yang, Qiming, dev, Tyler Retzlaff



> -----Original Message-----
> From: Thomas Monjalon <thomas@monjalon.net>
> Sent: Tuesday, February 14, 2023 7:03 PM
> To: Zhang, Ke1X <ke1x.zhang@intel.com>
> Cc: Zhang, Qi Z <qi.z.zhang@intel.com>; Yang, Qiming
> <qiming.yang@intel.com>; dev@dpdk.org; Tyler Retzlaff
> <roretzla@linux.microsoft.com>
> Subject: Re: [PATCH v2] net/ice: fix ice dcf control thread crash
> 
> 13/02/2023 08:16, Ke Zhang:
> > --- a/lib/eal/windows/include/pthread.h
> > +++ b/lib/eal/windows/include/pthread.h
> > +static inline void
> > +pthread_exit(__rte_unused void *__retval) { }
> 
> Please don't add more shim layer.
> There is a new layer rte_thread_* in lib/eal/include/rte_thread.h
> 
Thanks for your comments.
Do I need to add a function like rte_thread_exit() in lib/eal/include/rte_thread.h?
There is no function supporting pthread_exit.

^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [PATCH v2] net/ice: fix ice dcf control thread crash
  2023-02-16  7:53     ` Zhang, Ke1X
@ 2023-02-20  0:30       ` Thomas Monjalon
  2023-03-01  1:54         ` Zhang, Ke1X
  0 siblings, 1 reply; 24+ messages in thread
From: Thomas Monjalon @ 2023-02-20  0:30 UTC (permalink / raw)
  To: Tyler Retzlaff, Zhang, Ke1X
  Cc: dev, Zhang, Qi Z, Yang, Qiming, dev, david.marchand

16/02/2023 08:53, Zhang, Ke1X:
> From: Thomas Monjalon <thomas@monjalon.net>
> > 13/02/2023 08:16, Ke Zhang:
> > > --- a/lib/eal/windows/include/pthread.h
> > > +++ b/lib/eal/windows/include/pthread.h
> > > +static inline void
> > > +pthread_exit(__rte_unused void *__retval) { }
> > 
> > Please don't add more shim layer.
> > There is a new layer rte_thread_* in lib/eal/include/rte_thread.h
> > 
> Thanks for your comments.
> Do I need to add a function like rte_thread_exit() in lib/eal/include/rte_thread.h?

I guess yes.

> There is no function supporting pthread_exit.

Tyler, how would you achieve the equivalent of pthread_exit?
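
For illustration, a minimal sketch of the usual portable alternative: let the
thread function observe a stop condition and simply return instead of calling
pthread_exit(). This is the direction the v3 revision of this patch later
takes with a `break`; the function name below is hypothetical, not the
driver's actual code.

/* Hypothetical sketch; 'dcf_reset_task' is an illustrative name only.
 * Returning from the thread function ends the thread on both POSIX and
 * the Windows EAL, with no pthread_exit() needed.
 */
static void *
dcf_reset_task(void *param)
{
	struct ice_dcf_hw *hw = param;

	for (;;) {
		if (hw->vc_event_msg_cb == NULL)
			break;		/* uninit in progress: leave the loop */
		/* ... handle the VSI map update event ... */
	}

	return NULL;			/* normal return terminates the thread */
}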



^ permalink raw reply	[flat|nested] 24+ messages in thread

* RE: [PATCH v2] net/ice: fix ice dcf contrl thread crash
  2023-02-13  7:03 ` [PATCH v2] " Ke Zhang
@ 2023-02-21  0:29   ` Zhang, Qi Z
  0 siblings, 0 replies; 24+ messages in thread
From: Zhang, Qi Z @ 2023-02-21  0:29 UTC (permalink / raw)
  To: Zhang, Ke1X, Yang, Qiming, dev



> -----Original Message-----
> From: Zhang, Ke1X <ke1x.zhang@intel.com>
> Sent: Monday, February 13, 2023 3:03 PM
> To: Zhang, Qi Z <qi.z.zhang@intel.com>; Yang, Qiming
> <qiming.yang@intel.com>; dev@dpdk.org
> Cc: Zhang, Ke1X <ke1x.zhang@intel.com>
> Subject: [PATCH v2] net/ice: fix ice dcf contrl thread crash
> 
> The control thread accesses the hardware resources after the resources
> were released, resulting in a segmentation fault.
> 
> This commit fixes the bug by exiting the thread before the resources are
> released.
> 
> Signed-off-by: Ke Zhang <ke1x.zhang@intel.com>
> ---
> v2: add pthread_exit() for windows
> ---
>  drivers/net/ice/ice_dcf.c         | 4 ++++
>  lib/eal/windows/include/pthread.h | 5 +++++
>  2 files changed, 9 insertions(+)
> 
> diff --git a/drivers/net/ice/ice_dcf.c b/drivers/net/ice/ice_dcf.c index
> 1c3d22ae0f..e58908caf5 100644
> --- a/drivers/net/ice/ice_dcf.c
> +++ b/drivers/net/ice/ice_dcf.c
> @@ -543,6 +543,8 @@ ice_dcf_handle_vsi_update_event(struct ice_dcf_hw
> *hw)
>  	ice_dcf_disable_irq0(hw);
> 
>  	for (;;) {
> +		if (hw->vc_event_msg_cb == NULL)
> +			pthread_exit(NULL);
>  		if (ice_dcf_get_vf_resource(hw) == 0 &&
>  		    ice_dcf_get_vf_vsi_map(hw) >= 0) {
>  			err = 0;
> @@ -760,6 +762,8 @@ ice_dcf_uninit_hw(struct rte_eth_dev *eth_dev,
> struct ice_dcf_hw *hw)
>  	rte_intr_callback_unregister(intr_handle,
>  				     ice_dcf_dev_interrupt_handler, hw);
> 
> +	hw->vc_event_msg_cb = NULL;
> +
>  	ice_dcf_mode_disable(hw);
>  	iavf_shutdown_adminq(&hw->avf);
> 
> diff --git a/lib/eal/windows/include/pthread.h
> b/lib/eal/windows/include/pthread.h
> index 27fd2cca52..f0068ebd73 100644
> --- a/lib/eal/windows/include/pthread.h
> +++ b/lib/eal/windows/include/pthread.h

Suggest move this part into a separate patch.

> @@ -149,6 +149,11 @@ pthread_detach(__rte_unused pthread_t thread)
>  	return 0;
>  }
> 
> +static inline void
> +pthread_exit(__rte_unused void *__retval) { }
> +
>  static inline int
>  pthread_join(__rte_unused pthread_t thread,
>  	__rte_unused void **value_ptr)
> --
> 2.25.1


^ permalink raw reply	[flat|nested] 24+ messages in thread

* RE: [PATCH v2] net/ice: fix ice dcf control thread crash
  2023-02-20  0:30       ` Thomas Monjalon
@ 2023-03-01  1:54         ` Zhang, Ke1X
  0 siblings, 0 replies; 24+ messages in thread
From: Zhang, Ke1X @ 2023-03-01  1:54 UTC (permalink / raw)
  To: Thomas Monjalon, Tyler Retzlaff
  Cc: dev, Zhang, Qi Z, Yang, Qiming, dev, david.marchand

> -----Original Message-----
> From: Thomas Monjalon <thomas@monjalon.net>
> Sent: Monday, February 20, 2023 8:30 AM
> To: Tyler Retzlaff <roretzla@linux.microsoft.com>; Zhang, Ke1X
> <ke1x.zhang@intel.com>
> Cc: dev@dpdk.org; Zhang, Qi Z <qi.z.zhang@intel.com>; Yang, Qiming
> <qiming.yang@intel.com>; dev@dpdk.org; david.marchand@redhat.com
> Subject: Re: [PATCH v2] net/ice: fix ice dcf control thread crash
> 
> 16/02/2023 08:53, Zhang, Ke1X:
> > From: Thomas Monjalon <thomas@monjalon.net>
> > > 13/02/2023 08:16, Ke Zhang:
> > > > --- a/lib/eal/windows/include/pthread.h
> > > > +++ b/lib/eal/windows/include/pthread.h
> > > > +static inline void
> > > > +pthread_exit(__rte_unused void *__retval) { }
> > >
> > > Please don't add more shim layer.
> > > There is a new layer rte_thread_* in lib/eal/include/rte_thread.h
> > >
> > Thanks for your comments.
> > Do I need to add a function like rte_thread_exit() in
> lib/eal/include/rte_thread.h?
> 
> I guess yes.
> 
> > There is no function supporting pthread_exit.
> 
> Tyler, how would you achieve the equivalent of pthread_exit?
> 
@Tyler, would you please share any ideas?


^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [PATCH v2] net/ice: fix ice dcf control thread crash
  2023-02-13  7:16 ` [PATCH v2] net/ice: fix ice dcf control " Ke Zhang
  2023-02-14 11:03   ` Thomas Monjalon
@ 2023-03-01 14:53   ` Kevin Traynor
  2023-03-15  8:20   ` [PATCH v3] " Mingjin Ye
  2 siblings, 0 replies; 24+ messages in thread
From: Kevin Traynor @ 2023-03-01 14:53 UTC (permalink / raw)
  To: Ke Zhang, qi.z.zhang, qiming.yang, dev

On 13/02/2023 07:16, Ke Zhang wrote:
> The control thread accesses the hardware resources after the
> resources were released, resulting in a segmentation fault.
> 
> This commit fixes the bug by exiting the thread before the resources are
> released.
> 

Please add the "Fixes: xyz" tag for the commit that introduced this bug 
so the fix can be backported to the appropriate stable branches.

> Signed-off-by: Ke Zhang <ke1x.zhang@intel.com>
> ---
> v2: add pthread_exit() for windows
> ---
>   drivers/net/ice/ice_dcf.c         | 4 ++++
>   lib/eal/windows/include/pthread.h | 5 +++++
>   2 files changed, 9 insertions(+)
> 
> diff --git a/drivers/net/ice/ice_dcf.c b/drivers/net/ice/ice_dcf.c
> index 1c3d22ae0f..e58908caf5 100644
> --- a/drivers/net/ice/ice_dcf.c
> +++ b/drivers/net/ice/ice_dcf.c
> @@ -543,6 +543,8 @@ ice_dcf_handle_vsi_update_event(struct ice_dcf_hw *hw)
>   	ice_dcf_disable_irq0(hw);
>   
>   	for (;;) {
> +		if (hw->vc_event_msg_cb == NULL)
> +			pthread_exit(NULL);
>   		if (ice_dcf_get_vf_resource(hw) == 0 &&
>   		    ice_dcf_get_vf_vsi_map(hw) >= 0) {
>   			err = 0;
> @@ -760,6 +762,8 @@ ice_dcf_uninit_hw(struct rte_eth_dev *eth_dev, struct ice_dcf_hw *hw)
>   	rte_intr_callback_unregister(intr_handle,
>   				     ice_dcf_dev_interrupt_handler, hw);
>   
> +	hw->vc_event_msg_cb = NULL;
> +
>   	ice_dcf_mode_disable(hw);
>   	iavf_shutdown_adminq(&hw->avf);
>   
> diff --git a/lib/eal/windows/include/pthread.h b/lib/eal/windows/include/pthread.h
> index 27fd2cca52..f0068ebd73 100644
> --- a/lib/eal/windows/include/pthread.h
> +++ b/lib/eal/windows/include/pthread.h
> @@ -149,6 +149,11 @@ pthread_detach(__rte_unused pthread_t thread)
>   	return 0;
>   }
>   
> +static inline void
> +pthread_exit(__rte_unused void *__retval)
> +{
> +}
> +
>   static inline int
>   pthread_join(__rte_unused pthread_t thread,
>   	__rte_unused void **value_ptr)


^ permalink raw reply	[flat|nested] 24+ messages in thread

* [PATCH v3] net/ice: fix ice dcf control thread crash
  2023-02-13  7:16 ` [PATCH v2] net/ice: fix ice dcf control " Ke Zhang
  2023-02-14 11:03   ` Thomas Monjalon
  2023-03-01 14:53   ` Kevin Traynor
@ 2023-03-15  8:20   ` Mingjin Ye
  2023-03-15 13:06     ` Zhang, Qi Z
  2023-03-17  5:09     ` [PATCH v4] " Mingjin Ye
  2 siblings, 2 replies; 24+ messages in thread
From: Mingjin Ye @ 2023-03-15  8:20 UTC (permalink / raw)
  To: dev; +Cc: qiming.yang, stable, yidingx.zhou, Mingjin Ye, Ke Zhang, Qi Zhang

The control thread accesses the hardware resources after the
resources were released, resulting in a segmentation fault.

This commit fixes the issue by waiting for all `ice-reset` threads to
finish before reclaiming resources.

Fixes: b71573ec2fc2 ("net/ice: retry getting VF VSI map after failure")
Fixes: 7564d5509611 ("net/ice: add DCF hardware initialization")
Cc: stable@dpdk.org

Signed-off-by: Ke Zhang <ke1x.zhang@intel.com>
Signed-off-by: Mingjin Ye <mingjinx.ye@intel.com>
---
v2: add pthread_exit() for windows
---
V3: Optimization. It is unsafe for a thread to forcibly exit, which will
cause the spin lock to not be released correctly
---
 drivers/net/ice/ice_dcf.c        | 15 +++++++++++++--
 drivers/net/ice/ice_dcf.h        |  2 ++
 drivers/net/ice/ice_dcf_parent.c |  1 -
 3 files changed, 15 insertions(+), 3 deletions(-)

diff --git a/drivers/net/ice/ice_dcf.c b/drivers/net/ice/ice_dcf.c
index 1c3d22ae0f..b3dea779aa 100644
--- a/drivers/net/ice/ice_dcf.c
+++ b/drivers/net/ice/ice_dcf.c
@@ -543,6 +543,8 @@ ice_dcf_handle_vsi_update_event(struct ice_dcf_hw *hw)
 	ice_dcf_disable_irq0(hw);
 
 	for (;;) {
+		if (hw->vc_event_msg_cb == NULL)
+			break;
 		if (ice_dcf_get_vf_resource(hw) == 0 &&
 		    ice_dcf_get_vf_vsi_map(hw) >= 0) {
 			err = 0;
@@ -555,8 +557,10 @@ ice_dcf_handle_vsi_update_event(struct ice_dcf_hw *hw)
 		rte_delay_ms(ICE_DCF_ARQ_CHECK_TIME);
 	}
 
-	rte_intr_enable(pci_dev->intr_handle);
-	ice_dcf_enable_irq0(hw);
+	if (hw->vc_event_msg_cb != NULL) {
+		rte_intr_enable(pci_dev->intr_handle);
+		ice_dcf_enable_irq0(hw);
+	}
 
 	rte_spinlock_unlock(&hw->vc_cmd_send_lock);
 
@@ -749,6 +753,8 @@ ice_dcf_uninit_hw(struct rte_eth_dev *eth_dev, struct ice_dcf_hw *hw)
 	struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(eth_dev);
 	struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
 
+	hw->vc_event_msg_cb = NULL;
+
 	if (hw->vf_res->vf_cap_flags & VIRTCHNL_VF_OFFLOAD_QOS)
 		if (hw->tm_conf.committed) {
 			ice_dcf_clear_bw(hw);
@@ -760,6 +766,9 @@ ice_dcf_uninit_hw(struct rte_eth_dev *eth_dev, struct ice_dcf_hw *hw)
 	rte_intr_callback_unregister(intr_handle,
 				     ice_dcf_dev_interrupt_handler, hw);
 
+	rte_delay_us(ICE_DCF_VSI_UPDATE_SERVICE_INTERVAL);
+	rte_spinlock_lock(&hw->vc_cmd_send_lock);
+
 	ice_dcf_mode_disable(hw);
 	iavf_shutdown_adminq(&hw->avf);
 
@@ -783,6 +792,8 @@ ice_dcf_uninit_hw(struct rte_eth_dev *eth_dev, struct ice_dcf_hw *hw)
 
 	rte_free(hw->ets_config);
 	hw->ets_config = NULL;
+
+	rte_spinlock_unlock(&hw->vc_cmd_send_lock);
 }
 
 int
diff --git a/drivers/net/ice/ice_dcf.h b/drivers/net/ice/ice_dcf.h
index 7f42ebabe9..f9465f60a6 100644
--- a/drivers/net/ice/ice_dcf.h
+++ b/drivers/net/ice/ice_dcf.h
@@ -15,6 +15,8 @@
 #include "base/ice_type.h"
 #include "ice_logs.h"
 
+#define ICE_DCF_VSI_UPDATE_SERVICE_INTERVAL	100000 /* us */
+
 /* ICE_DCF_DEV_PRIVATE_TO */
 #define ICE_DCF_DEV_PRIVATE_TO_ADAPTER(adapter) \
 	((struct ice_dcf_adapter *)adapter)
diff --git a/drivers/net/ice/ice_dcf_parent.c b/drivers/net/ice/ice_dcf_parent.c
index 01e390ddda..d1b227c431 100644
--- a/drivers/net/ice/ice_dcf_parent.c
+++ b/drivers/net/ice/ice_dcf_parent.c
@@ -12,7 +12,6 @@
 #include "ice_dcf_ethdev.h"
 #include "ice_generic_flow.h"
 
-#define ICE_DCF_VSI_UPDATE_SERVICE_INTERVAL	100000 /* us */
 static rte_spinlock_t vsi_update_lock = RTE_SPINLOCK_INITIALIZER;
 
 struct ice_dcf_reset_event_param {
-- 
2.25.1


^ permalink raw reply	[flat|nested] 24+ messages in thread

* RE: [PATCH v3] net/ice: fix ice dcf control thread crash
  2023-03-15  8:20   ` [PATCH v3] " Mingjin Ye
@ 2023-03-15 13:06     ` Zhang, Qi Z
  2023-03-17  5:09     ` [PATCH v4] " Mingjin Ye
  1 sibling, 0 replies; 24+ messages in thread
From: Zhang, Qi Z @ 2023-03-15 13:06 UTC (permalink / raw)
  To: Ye, MingjinX, dev; +Cc: Yang, Qiming, stable, Zhou, YidingX, Zhang, Ke1X



> -----Original Message-----
> From: Ye, MingjinX <mingjinx.ye@intel.com>
> Sent: Wednesday, March 15, 2023 4:20 PM
> To: dev@dpdk.org
> Cc: Yang, Qiming <qiming.yang@intel.com>; stable@dpdk.org; Zhou, YidingX
> <yidingx.zhou@intel.com>; Ye, MingjinX <mingjinx.ye@intel.com>; Zhang,
> Ke1X <ke1x.zhang@intel.com>; Zhang, Qi Z <qi.z.zhang@intel.com>
> Subject: [PATCH v3] net/ice: fix ice dcf control thread crash
> 
> The control thread accesses the hardware resources after the resources were
> released, resulting in a segmentation fault.
> 
> This commit fixes the issue by waiting for all `ice-reset` threads to finish
> before reclaiming resources.

Please explain how the patch implements this; I didn't see any code that waits for the other threads to finish, like a "pthread_join" call.
Please try to add more comments in your code for easier review.

> 
> Fixes: b71573ec2fc2 ("net/ice: retry getting VF VSI map after failure")
> Fixes: 7564d5509611 ("net/ice: add DCF hardware initialization")
> Cc: stable@dpdk.org
> 
> Signed-off-by: Ke Zhang <ke1x.zhang@intel.com>
> Signed-off-by: Mingjin Ye <mingjinx.ye@intel.com>
> ---
> v2: add pthread_exit() for windows
> ---
> V3: Optimization. It is unsafe for a thread to forcibly exit, which will cause
> the spin lock to not be released correctly
> ---
>  drivers/net/ice/ice_dcf.c        | 15 +++++++++++++--
>  drivers/net/ice/ice_dcf.h        |  2 ++
>  drivers/net/ice/ice_dcf_parent.c |  1 -
>  3 files changed, 15 insertions(+), 3 deletions(-)
> 
> diff --git a/drivers/net/ice/ice_dcf.c b/drivers/net/ice/ice_dcf.c index
> 1c3d22ae0f..b3dea779aa 100644
> --- a/drivers/net/ice/ice_dcf.c
> +++ b/drivers/net/ice/ice_dcf.c
> @@ -543,6 +543,8 @@ ice_dcf_handle_vsi_update_event(struct ice_dcf_hw
> *hw)
>  	ice_dcf_disable_irq0(hw);
> 
>  	for (;;) {
> +		if (hw->vc_event_msg_cb == NULL)
> +			break;
>  		if (ice_dcf_get_vf_resource(hw) == 0 &&
>  		    ice_dcf_get_vf_vsi_map(hw) >= 0) {
>  			err = 0;
> @@ -555,8 +557,10 @@ ice_dcf_handle_vsi_update_event(struct ice_dcf_hw
> *hw)
>  		rte_delay_ms(ICE_DCF_ARQ_CHECK_TIME);
>  	}
> 
> -	rte_intr_enable(pci_dev->intr_handle);
> -	ice_dcf_enable_irq0(hw);
> +	if (hw->vc_event_msg_cb != NULL) {
> +		rte_intr_enable(pci_dev->intr_handle);
> +		ice_dcf_enable_irq0(hw);
> +	}
> 
>  	rte_spinlock_unlock(&hw->vc_cmd_send_lock);
> 
> @@ -749,6 +753,8 @@ ice_dcf_uninit_hw(struct rte_eth_dev *eth_dev,
> struct ice_dcf_hw *hw)
>  	struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(eth_dev);
>  	struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
> 
> +	hw->vc_event_msg_cb = NULL;
> +
>  	if (hw->vf_res->vf_cap_flags & VIRTCHNL_VF_OFFLOAD_QOS)
>  		if (hw->tm_conf.committed) {
>  			ice_dcf_clear_bw(hw);
> @@ -760,6 +766,9 @@ ice_dcf_uninit_hw(struct rte_eth_dev *eth_dev,
> struct ice_dcf_hw *hw)
>  	rte_intr_callback_unregister(intr_handle,
>  				     ice_dcf_dev_interrupt_handler, hw);
> 
> +	rte_delay_us(ICE_DCF_VSI_UPDATE_SERVICE_INTERVAL);
> +	rte_spinlock_lock(&hw->vc_cmd_send_lock);
> +
>  	ice_dcf_mode_disable(hw);
>  	iavf_shutdown_adminq(&hw->avf);
> 
> @@ -783,6 +792,8 @@ ice_dcf_uninit_hw(struct rte_eth_dev *eth_dev,
> struct ice_dcf_hw *hw)
> 
>  	rte_free(hw->ets_config);
>  	hw->ets_config = NULL;
> +
> +	rte_spinlock_unlock(&hw->vc_cmd_send_lock);
>  }
> 
>  int
> diff --git a/drivers/net/ice/ice_dcf.h b/drivers/net/ice/ice_dcf.h index
> 7f42ebabe9..f9465f60a6 100644
> --- a/drivers/net/ice/ice_dcf.h
> +++ b/drivers/net/ice/ice_dcf.h
> @@ -15,6 +15,8 @@
>  #include "base/ice_type.h"
>  #include "ice_logs.h"
> 
> +#define ICE_DCF_VSI_UPDATE_SERVICE_INTERVAL	100000 /* us */
> +
>  /* ICE_DCF_DEV_PRIVATE_TO */
>  #define ICE_DCF_DEV_PRIVATE_TO_ADAPTER(adapter) \
>  	((struct ice_dcf_adapter *)adapter)
> diff --git a/drivers/net/ice/ice_dcf_parent.c
> b/drivers/net/ice/ice_dcf_parent.c
> index 01e390ddda..d1b227c431 100644
> --- a/drivers/net/ice/ice_dcf_parent.c
> +++ b/drivers/net/ice/ice_dcf_parent.c
> @@ -12,7 +12,6 @@
>  #include "ice_dcf_ethdev.h"
>  #include "ice_generic_flow.h"
> 
> -#define ICE_DCF_VSI_UPDATE_SERVICE_INTERVAL	100000 /* us */
>  static rte_spinlock_t vsi_update_lock = RTE_SPINLOCK_INITIALIZER;
> 
>  struct ice_dcf_reset_event_param {
> --
> 2.25.1


^ permalink raw reply	[flat|nested] 24+ messages in thread

* [PATCH v4] net/ice: fix ice dcf control thread crash
  2023-03-15  8:20   ` [PATCH v3] " Mingjin Ye
  2023-03-15 13:06     ` Zhang, Qi Z
@ 2023-03-17  5:09     ` Mingjin Ye
  2023-03-17 10:15       ` Zhang, Qi Z
  2023-03-20  9:40       ` [PATCH v5] " Mingjin Ye
  1 sibling, 2 replies; 24+ messages in thread
From: Mingjin Ye @ 2023-03-17  5:09 UTC (permalink / raw)
  To: dev; +Cc: qiming.yang, stable, yidingx.zhou, Mingjin Ye, Ke Zhang, Qi Zhang

The control thread accesses the hardware resources after the
resources were released, which results in a segmentation fault.

The 'ice-reset' threads are detached, so thread resources cannot be
reclaimed by `pthread_join` calls.

This commit synchronizes the number of 'ice-reset' threads by adding two
variables (the 'vsi_update_thread_num' static global and
the 'vsi_thread_lock' static global spinlock). When releasing HW
resources, we clear the event callback function. That makes these threads
exit quickly. After the number of 'ice-reset' threads has decreased to 0,
we release the resources.

Fixes: 3b3757bda3c3 ("net/ice: get VF hardware index in DCF")
Fixes: 931ee54072b1 ("net/ice: support QoS bandwidth config after VF reset in DCF")
Fixes: c7e1a1a3bfeb ("net/ice: refactor DCF VLAN handling")
Fixes: 0b02c9519432 ("net/ice: handle PF initialization by DCF")
Fixes: b71573ec2fc2 ("net/ice: retry getting VF VSI map after failure")
Fixes: 7564d5509611 ("net/ice: add DCF hardware initialization")
Cc: stable@dpdk.org

Signed-off-by: Ke Zhang <ke1x.zhang@intel.com>
Signed-off-by: Mingjin Ye <mingjinx.ye@intel.com>
---
v2: add pthread_exit() for windows
---
v3: Optimization. It is unsafe for a thread to forcibly exit, which
will cause the spin lock to not be released correctly
---
v4: Safely wait for all event threads to end
---
 drivers/net/ice/ice_dcf.c        | 18 ++++++++++++++--
 drivers/net/ice/ice_dcf.h        |  1 +
 drivers/net/ice/ice_dcf_parent.c | 37 ++++++++++++++++++++++++++++++++
 3 files changed, 54 insertions(+), 2 deletions(-)

diff --git a/drivers/net/ice/ice_dcf.c b/drivers/net/ice/ice_dcf.c
index 1c3d22ae0f..169520f5bb 100644
--- a/drivers/net/ice/ice_dcf.c
+++ b/drivers/net/ice/ice_dcf.c
@@ -543,6 +543,8 @@ ice_dcf_handle_vsi_update_event(struct ice_dcf_hw *hw)
 	ice_dcf_disable_irq0(hw);
 
 	for (;;) {
+		if (hw->vc_event_msg_cb == NULL)
+			break;
 		if (ice_dcf_get_vf_resource(hw) == 0 &&
 		    ice_dcf_get_vf_vsi_map(hw) >= 0) {
 			err = 0;
@@ -555,8 +557,10 @@ ice_dcf_handle_vsi_update_event(struct ice_dcf_hw *hw)
 		rte_delay_ms(ICE_DCF_ARQ_CHECK_TIME);
 	}
 
-	rte_intr_enable(pci_dev->intr_handle);
-	ice_dcf_enable_irq0(hw);
+	if (hw->vc_event_msg_cb != NULL) {
+		rte_intr_enable(pci_dev->intr_handle);
+		ice_dcf_enable_irq0(hw);
+	}
 
 	rte_spinlock_unlock(&hw->vc_cmd_send_lock);
 
@@ -749,6 +753,12 @@ ice_dcf_uninit_hw(struct rte_eth_dev *eth_dev, struct ice_dcf_hw *hw)
 	struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(eth_dev);
 	struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
 
+	/* Clear event callbacks, `VIRTCHNL_EVENT_DCF_VSI_MAP_UPDATE`
+	 * event will be ignored and all running `ice-thread` threads
+	 * will exit quickly.
+	 */
+	hw->vc_event_msg_cb = NULL;
+
 	if (hw->vf_res->vf_cap_flags & VIRTCHNL_VF_OFFLOAD_QOS)
 		if (hw->tm_conf.committed) {
 			ice_dcf_clear_bw(hw);
@@ -760,6 +770,10 @@ ice_dcf_uninit_hw(struct rte_eth_dev *eth_dev, struct ice_dcf_hw *hw)
 	rte_intr_callback_unregister(intr_handle,
 				     ice_dcf_dev_interrupt_handler, hw);
 
+	/* Wait for all `ice-thread` threads to exit. */
+	while (ice_dcf_event_handle_num() > 0)
+		rte_delay_ms(ICE_DCF_ARQ_CHECK_TIME);
+
 	ice_dcf_mode_disable(hw);
 	iavf_shutdown_adminq(&hw->avf);
 
diff --git a/drivers/net/ice/ice_dcf.h b/drivers/net/ice/ice_dcf.h
index 7f42ebabe9..6c636a7497 100644
--- a/drivers/net/ice/ice_dcf.h
+++ b/drivers/net/ice/ice_dcf.h
@@ -143,6 +143,7 @@ int ice_dcf_execute_virtchnl_cmd(struct ice_dcf_hw *hw,
 int ice_dcf_send_aq_cmd(void *dcf_hw, struct ice_aq_desc *desc,
 			void *buf, uint16_t buf_size);
 int ice_dcf_handle_vsi_update_event(struct ice_dcf_hw *hw);
+int ice_dcf_event_handle_num(void);
 int ice_dcf_init_hw(struct rte_eth_dev *eth_dev, struct ice_dcf_hw *hw);
 void ice_dcf_uninit_hw(struct rte_eth_dev *eth_dev, struct ice_dcf_hw *hw);
 int ice_dcf_configure_rss_key(struct ice_dcf_hw *hw);
diff --git a/drivers/net/ice/ice_dcf_parent.c b/drivers/net/ice/ice_dcf_parent.c
index 01e390ddda..0ff08e179e 100644
--- a/drivers/net/ice/ice_dcf_parent.c
+++ b/drivers/net/ice/ice_dcf_parent.c
@@ -14,6 +14,9 @@
 
 #define ICE_DCF_VSI_UPDATE_SERVICE_INTERVAL	100000 /* us */
 static rte_spinlock_t vsi_update_lock = RTE_SPINLOCK_INITIALIZER;
+static rte_spinlock_t vsi_thread_lock = RTE_SPINLOCK_INITIALIZER;
+static int vsi_update_thread_num;
+
 
 struct ice_dcf_reset_event_param {
 	struct ice_dcf_hw *dcf_hw;
@@ -130,6 +133,9 @@ ice_dcf_vsi_update_service_handler(void *param)
 
 	rte_spinlock_lock(&vsi_update_lock);
 
+	if (hw->vc_event_msg_cb == NULL)
+		goto update_end;
+
 	if (!ice_dcf_handle_vsi_update_event(hw)) {
 		__atomic_store_n(&parent_adapter->dcf_state_on, true,
 				 __ATOMIC_RELAXED);
@@ -150,10 +156,14 @@ ice_dcf_vsi_update_service_handler(void *param)
 	if (hw->tm_conf.committed)
 		ice_dcf_replay_vf_bw(hw, reset_param->vf_id);
 
+update_end:
 	rte_spinlock_unlock(&vsi_update_lock);
 
 	free(param);
 
+	rte_spinlock_lock(&vsi_thread_lock);
+	vsi_update_thread_num--;
+	rte_spinlock_unlock(&vsi_thread_lock);
 	return NULL;
 }
 
@@ -183,6 +193,10 @@ start_vsi_reset_thread(struct ice_dcf_hw *dcf_hw, bool vfr, uint16_t vf_id)
 		PMD_DRV_LOG(ERR, "Failed to start the thread for reset handling");
 		free(param);
 	}
+
+	rte_spinlock_lock(&vsi_thread_lock);
+	vsi_update_thread_num++;
+	rte_spinlock_unlock(&vsi_thread_lock);
 }
 
 static uint32_t
@@ -262,6 +276,18 @@ ice_dcf_handle_pf_event_msg(struct ice_dcf_hw *dcf_hw,
 		PMD_DRV_LOG(DEBUG, "VIRTCHNL_EVENT_PF_DRIVER_CLOSE event");
 		break;
 	case VIRTCHNL_EVENT_DCF_VSI_MAP_UPDATE:
+		/* If the event handling callback is empty, the event cannot
+		 * be handled. Therefore we ignore this event.
+		 */
+		if (dcf_hw->vc_event_msg_cb == NULL) {
+			PMD_DRV_LOG(DEBUG,
+				"VIRTCHNL_EVENT_DCF_VSI_MAP_UPDATE event "
+				"received: VF%u with VSI num %u, ignore processing",
+			    pf_msg->event_data.vf_vsi_map.vf_id,
+			    pf_msg->event_data.vf_vsi_map.vsi_id);
+			break;
+		}
+
 		PMD_DRV_LOG(DEBUG, "VIRTCHNL_EVENT_DCF_VSI_MAP_UPDATE event : VF%u with VSI num %u",
 			    pf_msg->event_data.vf_vsi_map.vf_id,
 			    pf_msg->event_data.vf_vsi_map.vsi_id);
@@ -505,3 +531,14 @@ ice_dcf_uninit_parent_adapter(struct rte_eth_dev *eth_dev)
 	ice_flow_uninit(parent_adapter);
 	ice_dcf_uninit_parent_hw(parent_hw);
 }
+
+int ice_dcf_event_handle_num(void)
+{
+	int ret;
+
+	rte_spinlock_lock(&vsi_thread_lock);
+	ret = vsi_update_thread_num;
+	rte_spinlock_unlock(&vsi_thread_lock);
+
+	return ret;
+}
-- 
2.25.1


^ permalink raw reply	[flat|nested] 24+ messages in thread

* RE: [PATCH v4] net/ice: fix ice dcf control thread crash
  2023-03-17  5:09     ` [PATCH v4] " Mingjin Ye
@ 2023-03-17 10:15       ` Zhang, Qi Z
  2023-03-20  9:40       ` [PATCH v5] " Mingjin Ye
  1 sibling, 0 replies; 24+ messages in thread
From: Zhang, Qi Z @ 2023-03-17 10:15 UTC (permalink / raw)
  To: Ye, MingjinX, dev; +Cc: Yang, Qiming, stable, Zhou, YidingX, Zhang, Ke1X



> -----Original Message-----
> From: Ye, MingjinX <mingjinx.ye@intel.com>
> Sent: Friday, March 17, 2023 1:10 PM
> To: dev@dpdk.org
> Cc: Yang, Qiming <qiming.yang@intel.com>; stable@dpdk.org; Zhou, YidingX
> <yidingx.zhou@intel.com>; Ye, MingjinX <mingjinx.ye@intel.com>; Zhang,
> Ke1X <ke1x.zhang@intel.com>; Zhang, Qi Z <qi.z.zhang@intel.com>
> Subject: [PATCH v4] net/ice: fix ice dcf control thread crash
> 
> The control thread accesses the hardware resources after the resources were
> released, which results in a segmentation fault.
> 
> The 'ice-reset' threads are detached, so thread resources cannot be
> reclaimed by `pthread_join` calls.
> 
> This commit synchronizes the number of 'ice-reset' threads by adding two
> variables (the 'vsi_update_thread_num' static global and the
> 'vsi_thread_lock' static global spinlock). When releasing HW resources, we
> clear the event callback function. That makes these threads exit quickly.
> After the number of 'ice-reset' threads decreased to be 0, we release
> resources.
> 
> Fixes: 3b3757bda3c3 ("net/ice: get VF hardware index in DCF")
> Fixes: 931ee54072b1 ("net/ice: support QoS bandwidth config after VF reset
> in DCF")
> Fixes: c7e1a1a3bfeb ("net/ice: refactor DCF VLAN handling")
> Fixes: 0b02c9519432 ("net/ice: handle PF initialization by DCF")
> Fixes: b71573ec2fc2 ("net/ice: retry getting VF VSI map after failure")
> Fixes: 7564d5509611 ("net/ice: add DCF hardware initialization")
> Cc: stable@dpdk.org
> 
> Signed-off-by: Ke Zhang <ke1x.zhang@intel.com>
> Signed-off-by: Mingjin Ye <mingjinx.ye@intel.com>
> ---
> v2: add pthread_exit() for windows
> ---
> v3: Optimization. It is unsafe for a thread to forcibly exit, which will cause
> the spin lock to not be released correctly
> ---
> v4: Safely wait for all event threads to end
> ---
>  drivers/net/ice/ice_dcf.c        | 18 ++++++++++++++--
>  drivers/net/ice/ice_dcf.h        |  1 +
>  drivers/net/ice/ice_dcf_parent.c | 37 ++++++++++++++++++++++++++++++++
>  3 files changed, 54 insertions(+), 2 deletions(-)
> 
> diff --git a/drivers/net/ice/ice_dcf.c b/drivers/net/ice/ice_dcf.c index
> 1c3d22ae0f..169520f5bb 100644
> --- a/drivers/net/ice/ice_dcf.c
> +++ b/drivers/net/ice/ice_dcf.c
> @@ -543,6 +543,8 @@ ice_dcf_handle_vsi_update_event(struct ice_dcf_hw
> *hw)
>  	ice_dcf_disable_irq0(hw);
> 
>  	for (;;) {
> +		if (hw->vc_event_msg_cb == NULL)
> +			break;
>  		if (ice_dcf_get_vf_resource(hw) == 0 &&
>  		    ice_dcf_get_vf_vsi_map(hw) >= 0) {
>  			err = 0;
> @@ -555,8 +557,10 @@ ice_dcf_handle_vsi_update_event(struct ice_dcf_hw
> *hw)
>  		rte_delay_ms(ICE_DCF_ARQ_CHECK_TIME);
>  	}
> 
> -	rte_intr_enable(pci_dev->intr_handle);
> -	ice_dcf_enable_irq0(hw);
> +	if (hw->vc_event_msg_cb != NULL) {
> +		rte_intr_enable(pci_dev->intr_handle);
> +		ice_dcf_enable_irq0(hw);
> +	}
> 
>  	rte_spinlock_unlock(&hw->vc_cmd_send_lock);
> 
> @@ -749,6 +753,12 @@ ice_dcf_uninit_hw(struct rte_eth_dev *eth_dev,
> struct ice_dcf_hw *hw)
>  	struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(eth_dev);
>  	struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
> 
> +	/* Clear event callbacks, `VIRTCHNL_EVENT_DCF_VSI_MAP_UPDATE`
> +	 * event will be ignored and all running `ice-thread` threads
> +	 * will exit quickly.
> +	 */
> +	hw->vc_event_msg_cb = NULL;
> +
>  	if (hw->vf_res->vf_cap_flags & VIRTCHNL_VF_OFFLOAD_QOS)
>  		if (hw->tm_conf.committed) {
>  			ice_dcf_clear_bw(hw);
> @@ -760,6 +770,10 @@ ice_dcf_uninit_hw(struct rte_eth_dev *eth_dev,
> struct ice_dcf_hw *hw)
>  	rte_intr_callback_unregister(intr_handle,
>  				     ice_dcf_dev_interrupt_handler, hw);
> 
> +	/* Wait for all `ice-thread` threads to exit. */
> +	while (ice_dcf_event_handle_num() > 0)
> +		rte_delay_ms(ICE_DCF_ARQ_CHECK_TIME);
> +
>  	ice_dcf_mode_disable(hw);
>  	iavf_shutdown_adminq(&hw->avf);
> 
> diff --git a/drivers/net/ice/ice_dcf.h b/drivers/net/ice/ice_dcf.h index
> 7f42ebabe9..6c636a7497 100644
> --- a/drivers/net/ice/ice_dcf.h
> +++ b/drivers/net/ice/ice_dcf.h
> @@ -143,6 +143,7 @@ int ice_dcf_execute_virtchnl_cmd(struct ice_dcf_hw
> *hw,  int ice_dcf_send_aq_cmd(void *dcf_hw, struct ice_aq_desc *desc,
>  			void *buf, uint16_t buf_size);
>  int ice_dcf_handle_vsi_update_event(struct ice_dcf_hw *hw);
> +int ice_dcf_event_handle_num(void);
>  int ice_dcf_init_hw(struct rte_eth_dev *eth_dev, struct ice_dcf_hw *hw);
> void ice_dcf_uninit_hw(struct rte_eth_dev *eth_dev, struct ice_dcf_hw *hw);
> int ice_dcf_configure_rss_key(struct ice_dcf_hw *hw); diff --git
> a/drivers/net/ice/ice_dcf_parent.c b/drivers/net/ice/ice_dcf_parent.c
> index 01e390ddda..0ff08e179e 100644
> --- a/drivers/net/ice/ice_dcf_parent.c
> +++ b/drivers/net/ice/ice_dcf_parent.c
> @@ -14,6 +14,9 @@
> 
>  #define ICE_DCF_VSI_UPDATE_SERVICE_INTERVAL	100000 /* us */
>  static rte_spinlock_t vsi_update_lock = RTE_SPINLOCK_INITIALIZER;
> +static rte_spinlock_t vsi_thread_lock = RTE_SPINLOCK_INITIALIZER;
> +static int vsi_update_thread_num;

Is this correct? Consider a scenario where we have two NICs and two DCFs running in the same DPDK process. Should the parameters be specific to each DCF?

> +
> 
>  struct ice_dcf_reset_event_param {
>  	struct ice_dcf_hw *dcf_hw;
> @@ -130,6 +133,9 @@ ice_dcf_vsi_update_service_handler(void *param)
> 
>  	rte_spinlock_lock(&vsi_update_lock);
> 
> +	if (hw->vc_event_msg_cb == NULL)
> +		goto update_end;
> +
>  	if (!ice_dcf_handle_vsi_update_event(hw)) {
>  		__atomic_store_n(&parent_adapter->dcf_state_on, true,
>  				 __ATOMIC_RELAXED);
> @@ -150,10 +156,14 @@ ice_dcf_vsi_update_service_handler(void *param)
>  	if (hw->tm_conf.committed)
>  		ice_dcf_replay_vf_bw(hw, reset_param->vf_id);
> 
> +update_end:
>  	rte_spinlock_unlock(&vsi_update_lock);
> 
>  	free(param);
> 
> +	rte_spinlock_lock(&vsi_thread_lock);
> +	vsi_update_thread_num--;
> +	rte_spinlock_unlock(&vsi_thread_lock);
>  	return NULL;
>  }
> 
> @@ -183,6 +193,10 @@ start_vsi_reset_thread(struct ice_dcf_hw *dcf_hw,
> bool vfr, uint16_t vf_id)
>  		PMD_DRV_LOG(ERR, "Failed to start the thread for reset
> handling");
>  		free(param);
>  	}
> +
> +	rte_spinlock_lock(&vsi_thread_lock);
> +	vsi_update_thread_num++;
> +	rte_spinlock_unlock(&vsi_thread_lock);
>  }
> 
>  static uint32_t
> @@ -262,6 +276,18 @@ ice_dcf_handle_pf_event_msg(struct ice_dcf_hw
> *dcf_hw,
>  		PMD_DRV_LOG(DEBUG,
> "VIRTCHNL_EVENT_PF_DRIVER_CLOSE event");
>  		break;
>  	case VIRTCHNL_EVENT_DCF_VSI_MAP_UPDATE:
> +		/* If the event handling callback is empty, the event cannot
> +		 * be handled. Therefore we ignore this event.
> +		 */
> +		if (dcf_hw->vc_event_msg_cb == NULL) {
> +			PMD_DRV_LOG(DEBUG,
> +				"VIRTCHNL_EVENT_DCF_VSI_MAP_UPDATE
> event "
> +				"received: VF%u with VSI num %u, ignore
> processing",
> +			    pf_msg->event_data.vf_vsi_map.vf_id,
> +			    pf_msg->event_data.vf_vsi_map.vsi_id);
> +			break;
> +		}
> +
>  		PMD_DRV_LOG(DEBUG,
> "VIRTCHNL_EVENT_DCF_VSI_MAP_UPDATE event : VF%u with VSI num %u",
>  			    pf_msg->event_data.vf_vsi_map.vf_id,
>  			    pf_msg->event_data.vf_vsi_map.vsi_id);
> @@ -505,3 +531,14 @@ ice_dcf_uninit_parent_adapter(struct rte_eth_dev
> *eth_dev)
>  	ice_flow_uninit(parent_adapter);
>  	ice_dcf_uninit_parent_hw(parent_hw);
>  }
> +
> +int ice_dcf_event_handle_num(void)
> +{
> +	int ret;
> +
> +	rte_spinlock_lock(&vsi_thread_lock);
> +	ret = vsi_update_thread_num;
> +	rte_spinlock_unlock(&vsi_thread_lock);
> +
> +	return ret;
> +}
> --
> 2.25.1


^ permalink raw reply	[flat|nested] 24+ messages in thread

* [PATCH v5] net/ice: fix ice dcf control thread crash
  2023-03-17  5:09     ` [PATCH v4] " Mingjin Ye
  2023-03-17 10:15       ` Zhang, Qi Z
@ 2023-03-20  9:40       ` Mingjin Ye
  2023-03-20 12:52         ` Zhang, Qi Z
  2023-03-22  5:56         ` [PATCH v6] " Mingjin Ye
  1 sibling, 2 replies; 24+ messages in thread
From: Mingjin Ye @ 2023-03-20  9:40 UTC (permalink / raw)
  To: dev; +Cc: qiming.yang, stable, yidingx.zhou, Mingjin Ye, Ke Zhang, Qi Zhang

The control thread accesses the hardware resources after the
resources were released, which results in a segmentation fault.

The 'ice-reset' threads are detached, so thread resources cannot be
reclaimed by `pthread_join` calls.

This commit synchronizes the number of 'ice-reset' threads by adding two
variables ('vsi_update_thread_num' and 'vsi_thread_lock' spinlock)
to 'struct ice_dcf_hw'. When releasing HW resources, we clear the event
callback function. That makes these threads exit quickly. After the number
of 'ice-reset' threads has decreased to 0, we release the resources.

Fixes: 3b3757bda3c3 ("net/ice: get VF hardware index in DCF")
Fixes: 931ee54072b1 ("net/ice: support QoS bandwidth config after VF reset in DCF")
Fixes: c7e1a1a3bfeb ("net/ice: refactor DCF VLAN handling")
Fixes: 0b02c9519432 ("net/ice: handle PF initialization by DCF")
Fixes: b71573ec2fc2 ("net/ice: retry getting VF VSI map after failure")
Fixes: 7564d5509611 ("net/ice: add DCF hardware initialization")
Cc: stable@dpdk.org

Signed-off-by: Ke Zhang <ke1x.zhang@intel.com>
Signed-off-by: Mingjin Ye <mingjinx.ye@intel.com>
---
v2: add pthread_exit() for windows
---
v3: Optimization. It is unsafe for a thread to forcibly exit, which
will cause the spin lock to not be released correctly
---
v4: Safely wait for all event threads to end
---
v5: Spinlock moved to struct ice_dcf_hw
---
 drivers/net/ice/ice_dcf.c        | 21 +++++++++++++++++++--
 drivers/net/ice/ice_dcf.h        |  3 +++
 drivers/net/ice/ice_dcf_parent.c | 23 +++++++++++++++++++++++
 3 files changed, 45 insertions(+), 2 deletions(-)

diff --git a/drivers/net/ice/ice_dcf.c b/drivers/net/ice/ice_dcf.c
index 1c3d22ae0f..53f62a06f4 100644
--- a/drivers/net/ice/ice_dcf.c
+++ b/drivers/net/ice/ice_dcf.c
@@ -543,6 +543,8 @@ ice_dcf_handle_vsi_update_event(struct ice_dcf_hw *hw)
 	ice_dcf_disable_irq0(hw);
 
 	for (;;) {
+		if (hw->vc_event_msg_cb == NULL)
+			break;
 		if (ice_dcf_get_vf_resource(hw) == 0 &&
 		    ice_dcf_get_vf_vsi_map(hw) >= 0) {
 			err = 0;
@@ -555,8 +557,10 @@ ice_dcf_handle_vsi_update_event(struct ice_dcf_hw *hw)
 		rte_delay_ms(ICE_DCF_ARQ_CHECK_TIME);
 	}
 
-	rte_intr_enable(pci_dev->intr_handle);
-	ice_dcf_enable_irq0(hw);
+	if (hw->vc_event_msg_cb != NULL) {
+		rte_intr_enable(pci_dev->intr_handle);
+		ice_dcf_enable_irq0(hw);
+	}
 
 	rte_spinlock_unlock(&hw->vc_cmd_send_lock);
 
@@ -639,6 +643,9 @@ ice_dcf_init_hw(struct rte_eth_dev *eth_dev, struct ice_dcf_hw *hw)
 	rte_spinlock_init(&hw->vc_cmd_queue_lock);
 	TAILQ_INIT(&hw->vc_cmd_queue);
 
+	rte_spinlock_init(&hw->vsi_thread_lock);
+	hw->vsi_update_thread_num = 0;
+
 	hw->arq_buf = rte_zmalloc("arq_buf", ICE_DCF_AQ_BUF_SZ, 0);
 	if (hw->arq_buf == NULL) {
 		PMD_INIT_LOG(ERR, "unable to allocate AdminQ buffer memory");
@@ -749,6 +756,12 @@ ice_dcf_uninit_hw(struct rte_eth_dev *eth_dev, struct ice_dcf_hw *hw)
 	struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(eth_dev);
 	struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
 
+	/* Clear event callbacks, `VIRTCHNL_EVENT_DCF_VSI_MAP_UPDATE`
+	 * event will be ignored and all running `ice-thread` threads
+	 * will exit quickly.
+	 */
+	hw->vc_event_msg_cb = NULL;
+
 	if (hw->vf_res->vf_cap_flags & VIRTCHNL_VF_OFFLOAD_QOS)
 		if (hw->tm_conf.committed) {
 			ice_dcf_clear_bw(hw);
@@ -760,6 +773,10 @@ ice_dcf_uninit_hw(struct rte_eth_dev *eth_dev, struct ice_dcf_hw *hw)
 	rte_intr_callback_unregister(intr_handle,
 				     ice_dcf_dev_interrupt_handler, hw);
 
+	/* Wait for all `ice-thread` threads to exit. */
+	while (hw->vsi_update_thread_num != 0)
+		rte_delay_ms(ICE_DCF_ARQ_CHECK_TIME);
+
 	ice_dcf_mode_disable(hw);
 	iavf_shutdown_adminq(&hw->avf);
 
diff --git a/drivers/net/ice/ice_dcf.h b/drivers/net/ice/ice_dcf.h
index 7f42ebabe9..f95ef2794c 100644
--- a/drivers/net/ice/ice_dcf.h
+++ b/drivers/net/ice/ice_dcf.h
@@ -105,6 +105,9 @@ struct ice_dcf_hw {
 	void (*vc_event_msg_cb)(struct ice_dcf_hw *dcf_hw,
 				uint8_t *msg, uint16_t msglen);
 
+	rte_spinlock_t vsi_thread_lock;
+	int vsi_update_thread_num;
+
 	uint8_t *arq_buf;
 
 	uint16_t num_vfs;
diff --git a/drivers/net/ice/ice_dcf_parent.c b/drivers/net/ice/ice_dcf_parent.c
index 01e390ddda..e48eb69c1a 100644
--- a/drivers/net/ice/ice_dcf_parent.c
+++ b/drivers/net/ice/ice_dcf_parent.c
@@ -130,6 +130,9 @@ ice_dcf_vsi_update_service_handler(void *param)
 
 	rte_spinlock_lock(&vsi_update_lock);
 
+	if (hw->vc_event_msg_cb == NULL)
+		goto update_end;
+
 	if (!ice_dcf_handle_vsi_update_event(hw)) {
 		__atomic_store_n(&parent_adapter->dcf_state_on, true,
 				 __ATOMIC_RELAXED);
@@ -150,10 +153,14 @@ ice_dcf_vsi_update_service_handler(void *param)
 	if (hw->tm_conf.committed)
 		ice_dcf_replay_vf_bw(hw, reset_param->vf_id);
 
+update_end:
 	rte_spinlock_unlock(&vsi_update_lock);
 
 	free(param);
 
+	rte_spinlock_lock(&hw->vsi_thread_lock);
+	hw->vsi_update_thread_num--;
+	rte_spinlock_unlock(&hw->vsi_thread_lock);
 	return NULL;
 }
 
@@ -183,6 +190,10 @@ start_vsi_reset_thread(struct ice_dcf_hw *dcf_hw, bool vfr, uint16_t vf_id)
 		PMD_DRV_LOG(ERR, "Failed to start the thread for reset handling");
 		free(param);
 	}
+
+	rte_spinlock_lock(&dcf_hw->vsi_thread_lock);
+	dcf_hw->vsi_update_thread_num++;
+	rte_spinlock_unlock(&dcf_hw->vsi_thread_lock);
 }
 
 static uint32_t
@@ -262,6 +273,18 @@ ice_dcf_handle_pf_event_msg(struct ice_dcf_hw *dcf_hw,
 		PMD_DRV_LOG(DEBUG, "VIRTCHNL_EVENT_PF_DRIVER_CLOSE event");
 		break;
 	case VIRTCHNL_EVENT_DCF_VSI_MAP_UPDATE:
+		/* If the event handling callback is empty, the event cannot
+		 * be handled. Therefore we ignore this event.
+		 */
+		if (dcf_hw->vc_event_msg_cb == NULL) {
+			PMD_DRV_LOG(DEBUG,
+				"VIRTCHNL_EVENT_DCF_VSI_MAP_UPDATE event "
+				"received: VF%u with VSI num %u, ignore processing",
+			    pf_msg->event_data.vf_vsi_map.vf_id,
+			    pf_msg->event_data.vf_vsi_map.vsi_id);
+			break;
+		}
+
 		PMD_DRV_LOG(DEBUG, "VIRTCHNL_EVENT_DCF_VSI_MAP_UPDATE event : VF%u with VSI num %u",
 			    pf_msg->event_data.vf_vsi_map.vf_id,
 			    pf_msg->event_data.vf_vsi_map.vsi_id);
-- 
2.25.1


^ permalink raw reply	[flat|nested] 24+ messages in thread

* RE: [PATCH v5] net/ice: fix ice dcf control thread crash
  2023-03-20  9:40       ` [PATCH v5] " Mingjin Ye
@ 2023-03-20 12:52         ` Zhang, Qi Z
  2023-03-21  2:08           ` Ye, MingjinX
  2023-03-22  5:56         ` [PATCH v6] " Mingjin Ye
  1 sibling, 1 reply; 24+ messages in thread
From: Zhang, Qi Z @ 2023-03-20 12:52 UTC (permalink / raw)
  To: Ye, MingjinX, dev; +Cc: Yang, Qiming, stable, Zhou, YidingX, Zhang, Ke1X



> -----Original Message-----
> From: Ye, MingjinX <mingjinx.ye@intel.com>
> Sent: Monday, March 20, 2023 5:41 PM
> To: dev@dpdk.org
> Cc: Yang, Qiming <qiming.yang@intel.com>; stable@dpdk.org; Zhou, YidingX
> <yidingx.zhou@intel.com>; Ye, MingjinX <mingjinx.ye@intel.com>; Zhang,
> Ke1X <ke1x.zhang@intel.com>; Zhang, Qi Z <qi.z.zhang@intel.com>
> Subject: [PATCH v5] net/ice: fix ice dcf control thread crash
> 
> The control thread accesses the hardware resources after the resources were
> released, which results in a segmentation fault.
> 
> The 'ice-reset' threads are detached, so thread resources cannot be
> reclaimed by `pthread_join` calls.
> 
> This commit synchronizes the number of 'ice-reset' threads by adding two
> variables ('vsi_update_thread_num' and 'vsi_thread_lock' spinlock) to 'struct
> ice_dcf_hw'. When releasing HW resources, we clear the event callback
> function. That makes these threads exit quickly. After the number of 'ice-
> reset' threads decreased to be 0, we release resources.
> 
> Fixes: 3b3757bda3c3 ("net/ice: get VF hardware index in DCF")
> Fixes: 931ee54072b1 ("net/ice: support QoS bandwidth config after VF reset
> in DCF")
> Fixes: c7e1a1a3bfeb ("net/ice: refactor DCF VLAN handling")
> Fixes: 0b02c9519432 ("net/ice: handle PF initialization by DCF")
> Fixes: b71573ec2fc2 ("net/ice: retry getting VF VSI map after failure")
> Fixes: 7564d5509611 ("net/ice: add DCF hardware initialization")
> Cc: stable@dpdk.org
> 
> Signed-off-by: Ke Zhang <ke1x.zhang@intel.com>
> Signed-off-by: Mingjin Ye <mingjinx.ye@intel.com>
> ---
> v2: add pthread_exit() for windows
> ---
> v3: Optimization. It is unsafe for a thread to forcibly exit, which will cause
> the spin lock to not be released correctly
> ---
> v4: Safely wait for all event threads to end
> ---
> v5: Spinlock moved to struct ice_dcf_hw
> ---
>  drivers/net/ice/ice_dcf.c        | 21 +++++++++++++++++++--
>  drivers/net/ice/ice_dcf.h        |  3 +++
>  drivers/net/ice/ice_dcf_parent.c | 23 +++++++++++++++++++++++
>  3 files changed, 45 insertions(+), 2 deletions(-)
> 
> diff --git a/drivers/net/ice/ice_dcf.c b/drivers/net/ice/ice_dcf.c index
> 1c3d22ae0f..53f62a06f4 100644
> --- a/drivers/net/ice/ice_dcf.c
> +++ b/drivers/net/ice/ice_dcf.c
> @@ -543,6 +543,8 @@ ice_dcf_handle_vsi_update_event(struct ice_dcf_hw
> *hw)
>  	ice_dcf_disable_irq0(hw);
> 
>  	for (;;) {
> +		if (hw->vc_event_msg_cb == NULL)
> +			break;
Can you explain why this is required? It seems unrelated to your commit log.

>  		if (ice_dcf_get_vf_resource(hw) == 0 &&
>  		    ice_dcf_get_vf_vsi_map(hw) >= 0) {
>  			err = 0;
> @@ -555,8 +557,10 @@ ice_dcf_handle_vsi_update_event(struct ice_dcf_hw
> *hw)
>  		rte_delay_ms(ICE_DCF_ARQ_CHECK_TIME);
>  	}
> 
> -	rte_intr_enable(pci_dev->intr_handle);
> -	ice_dcf_enable_irq0(hw);
> +	if (hw->vc_event_msg_cb != NULL) {
> +		rte_intr_enable(pci_dev->intr_handle);
> +		ice_dcf_enable_irq0(hw);

Same question as above

> +	}
> 
>  	rte_spinlock_unlock(&hw->vc_cmd_send_lock);
> 
> @@ -639,6 +643,9 @@ ice_dcf_init_hw(struct rte_eth_dev *eth_dev, struct
> ice_dcf_hw *hw)
>  	rte_spinlock_init(&hw->vc_cmd_queue_lock);
>  	TAILQ_INIT(&hw->vc_cmd_queue);
> 
> +	rte_spinlock_init(&hw->vsi_thread_lock);
> +	hw->vsi_update_thread_num = 0;
> +
>  	hw->arq_buf = rte_zmalloc("arq_buf", ICE_DCF_AQ_BUF_SZ, 0);
>  	if (hw->arq_buf == NULL) {
>  		PMD_INIT_LOG(ERR, "unable to allocate AdminQ buffer
> memory"); @@ -749,6 +756,12 @@ ice_dcf_uninit_hw(struct rte_eth_dev
> *eth_dev, struct ice_dcf_hw *hw)
>  	struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(eth_dev);
>  	struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
> 
> +	/* Clear event callbacks, `VIRTCHNL_EVENT_DCF_VSI_MAP_UPDATE`
> +	 * event will be ignored and all running `ice-thread` threads
> +	 * will exit quickly.
> +	 */
> +	hw->vc_event_msg_cb = NULL;
> +
>  	if (hw->vf_res->vf_cap_flags & VIRTCHNL_VF_OFFLOAD_QOS)
>  		if (hw->tm_conf.committed) {
>  			ice_dcf_clear_bw(hw);
> @@ -760,6 +773,10 @@ ice_dcf_uninit_hw(struct rte_eth_dev *eth_dev,
> struct ice_dcf_hw *hw)
>  	rte_intr_callback_unregister(intr_handle,
>  				     ice_dcf_dev_interrupt_handler, hw);
> 
> +	/* Wait for all `ice-thread` threads to exit. */
> +	while (hw->vsi_update_thread_num != 0)
> +		rte_delay_ms(ICE_DCF_ARQ_CHECK_TIME);
> +
>  	ice_dcf_mode_disable(hw);
>  	iavf_shutdown_adminq(&hw->avf);
> 
> diff --git a/drivers/net/ice/ice_dcf.h b/drivers/net/ice/ice_dcf.h index
> 7f42ebabe9..f95ef2794c 100644
> --- a/drivers/net/ice/ice_dcf.h
> +++ b/drivers/net/ice/ice_dcf.h
> @@ -105,6 +105,9 @@ struct ice_dcf_hw {
>  	void (*vc_event_msg_cb)(struct ice_dcf_hw *dcf_hw,
>  				uint8_t *msg, uint16_t msglen);
> 
> +	rte_spinlock_t vsi_thread_lock;
> +	int vsi_update_thread_num;
> +
>  	uint8_t *arq_buf;
> 
>  	uint16_t num_vfs;
> diff --git a/drivers/net/ice/ice_dcf_parent.c
> b/drivers/net/ice/ice_dcf_parent.c
> index 01e390ddda..e48eb69c1a 100644
> --- a/drivers/net/ice/ice_dcf_parent.c
> +++ b/drivers/net/ice/ice_dcf_parent.c
> @@ -130,6 +130,9 @@ ice_dcf_vsi_update_service_handler(void *param)
> 
>  	rte_spinlock_lock(&vsi_update_lock);
> 
> +	if (hw->vc_event_msg_cb == NULL)
> +		goto update_end;
> +
>  	if (!ice_dcf_handle_vsi_update_event(hw)) {
>  		__atomic_store_n(&parent_adapter->dcf_state_on, true,
>  				 __ATOMIC_RELAXED);
> @@ -150,10 +153,14 @@ ice_dcf_vsi_update_service_handler(void *param)
>  	if (hw->tm_conf.committed)
>  		ice_dcf_replay_vf_bw(hw, reset_param->vf_id);
> 
> +update_end:
>  	rte_spinlock_unlock(&vsi_update_lock);
> 
>  	free(param);
> 
> +	rte_spinlock_lock(&hw->vsi_thread_lock);
> +	hw->vsi_update_thread_num--;
> +	rte_spinlock_unlock(&hw->vsi_thread_lock);
>  	return NULL;
>  }
> 
> @@ -183,6 +190,10 @@ start_vsi_reset_thread(struct ice_dcf_hw *dcf_hw,
> bool vfr, uint16_t vf_id)
>  		PMD_DRV_LOG(ERR, "Failed to start the thread for reset
> handling");
>  		free(param);
>  	}
> +
> +	rte_spinlock_lock(&dcf_hw->vsi_thread_lock);
> +	dcf_hw->vsi_update_thread_num++;
> +	rte_spinlock_unlock(&dcf_hw->vsi_thread_lock);

I think you can define vsi_update_thread_num as rte_atomic32_t and use rte_atomic32_add/sub 
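
A compact sketch of that suggestion, using the legacy rte_atomicNN API from
<rte_atomic.h>. The helper names are illustrative only; the placement comments
refer to the functions touched by this patch. See also the checkpatch caveat
discussed in the follow-up below.

/* Illustrative only; helper names are hypothetical. Reuses the driver's
 * existing ICE_DCF_ARQ_CHECK_TIME as the polling interval.
 */
static rte_atomic32_t vsi_update_thread_num;	/* static storage starts at 0 */

static void
vsi_thread_ref(void)		/* call where start_vsi_reset_thread() spawns a thread */
{
	rte_atomic32_inc(&vsi_update_thread_num);
}

static void
vsi_thread_unref(void)		/* call at the end of ice_dcf_vsi_update_service_handler() */
{
	rte_atomic32_dec(&vsi_update_thread_num);
}

static void
vsi_thread_wait_all(void)	/* call from ice_dcf_uninit_hw() before freeing resources */
{
	while (rte_atomic32_read(&vsi_update_thread_num) != 0)
		rte_delay_ms(ICE_DCF_ARQ_CHECK_TIME);
}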

>  }
> 
>  static uint32_t
> @@ -262,6 +273,18 @@ ice_dcf_handle_pf_event_msg(struct ice_dcf_hw
> *dcf_hw,
>  		PMD_DRV_LOG(DEBUG,
> "VIRTCHNL_EVENT_PF_DRIVER_CLOSE event");
>  		break;
>  	case VIRTCHNL_EVENT_DCF_VSI_MAP_UPDATE:
> +		/* If the event handling callback is empty, the event cannot
> +		 * be handled. Therefore we ignore this event.
> +		 */
> +		if (dcf_hw->vc_event_msg_cb == NULL) {
> +			PMD_DRV_LOG(DEBUG,
> +				"VIRTCHNL_EVENT_DCF_VSI_MAP_UPDATE
> event "
> +				"received: VF%u with VSI num %u, ignore
> processing",
> +			    pf_msg->event_data.vf_vsi_map.vf_id,
> +			    pf_msg->event_data.vf_vsi_map.vsi_id);
> +			break;
> +		}
> +
>  		PMD_DRV_LOG(DEBUG,
> "VIRTCHNL_EVENT_DCF_VSI_MAP_UPDATE event : VF%u with VSI num %u",
>  			    pf_msg->event_data.vf_vsi_map.vf_id,
>  			    pf_msg->event_data.vf_vsi_map.vsi_id);
> --
> 2.25.1


^ permalink raw reply	[flat|nested] 24+ messages in thread

* RE: [PATCH v5] net/ice: fix ice dcf control thread crash
  2023-03-20 12:52         ` Zhang, Qi Z
@ 2023-03-21  2:08           ` Ye, MingjinX
  2023-03-21 11:55             ` Zhang, Qi Z
  0 siblings, 1 reply; 24+ messages in thread
From: Ye, MingjinX @ 2023-03-21  2:08 UTC (permalink / raw)
  To: Zhang, Qi Z, dev; +Cc: Yang, Qiming, stable, Zhou, YidingX, Zhang, Ke1X

Hi Qi, here is my new solution; could you give me some suggestions?
1. Remove the 'vc_event_msg_cb == NULL' related processing and let each 'ice-reset' thread end normally.
2. Define vsi_update_thread_num as rte_atomic32_t.

> -----Original Message-----
> From: Zhang, Qi Z <qi.z.zhang@intel.com>
> Sent: 2023年3月20日 20:53
> To: Ye, MingjinX <mingjinx.ye@intel.com>; dev@dpdk.org
> Cc: Yang, Qiming <qiming.yang@intel.com>; stable@dpdk.org; Zhou, YidingX
> <yidingx.zhou@intel.com>; Zhang, Ke1X <ke1x.zhang@intel.com>
> Subject: RE: [PATCH v5] net/ice: fix ice dcf control thread crash

> >  	for (;;) {
> > +		if (hw->vc_event_msg_cb == NULL)
> > +			break;
> Can you explain why this is required? It seems unrelated to your commit log.
The purpose of this is to bring all 'ice-reset' threads to a quick end when hw is released.

> >
> > -	rte_intr_enable(pci_dev->intr_handle);
> > -	ice_dcf_enable_irq0(hw);
> > +	if (hw->vc_event_msg_cb != NULL) {
> > +		rte_intr_enable(pci_dev->intr_handle);
> > +		ice_dcf_enable_irq0(hw);
> 
> Same question as above
These would be called while the HW resources are being released, so there is no need to call them.

> > +	rte_spinlock_lock(&dcf_hw->vsi_thread_lock);
> > +	dcf_hw->vsi_update_thread_num++;
> > +	rte_spinlock_unlock(&dcf_hw->vsi_thread_lock);
> 
> I think you can define vsi_update_thread_num as rte_atomic32_t and use
> rte_atomic32_add/sub
At first I chose the rte_atomic32_t option, since using a spinlock is not
very elegant.
The 'checkpatches.sh' script gives a warning ('Using rte_atomicNN_xxx')
when it is executed. I saw the comment on line 89 of the
script (# refrain from new additions of 16/32/64 bits rte_atomicNN_xxx()),
so I went with the spinlock solution.




^ permalink raw reply	[flat|nested] 24+ messages in thread

* RE: [PATCH v5] net/ice: fix ice dcf control thread crash
  2023-03-21  2:08           ` Ye, MingjinX
@ 2023-03-21 11:55             ` Zhang, Qi Z
  2023-03-21 16:24               ` Tyler Retzlaff
  0 siblings, 1 reply; 24+ messages in thread
From: Zhang, Qi Z @ 2023-03-21 11:55 UTC (permalink / raw)
  To: Ye, MingjinX, dev; +Cc: Yang, Qiming, stable, Zhou, YidingX, Zhang, Ke1X



> -----Original Message-----
> From: Ye, MingjinX <mingjinx.ye@intel.com>
> Sent: Tuesday, March 21, 2023 10:08 AM
> To: Zhang, Qi Z <qi.z.zhang@intel.com>; dev@dpdk.org
> Cc: Yang, Qiming <qiming.yang@intel.com>; stable@dpdk.org; Zhou, YidingX
> <yidingx.zhou@intel.com>; Zhang, Ke1X <ke1x.zhang@intel.com>
> Subject: RE: [PATCH v5] net/ice: fix ice dcf control thread crash
> 
> Hi Qi, here is my new solution; could you give me some suggestions?
> 1. Remove the 'vc_event_msg_cb == NULL' related processing and let each
> 'ice-reset' thread end normally.
> 2. Define vsi_update_thread_num as rte_atomic32_t.
> 
> > -----Original Message-----
> > From: Zhang, Qi Z <qi.z.zhang@intel.com>
> > Sent: 2023年3月20日 20:53
> > To: Ye, MingjinX <mingjinx.ye@intel.com>; dev@dpdk.org
> > Cc: Yang, Qiming <qiming.yang@intel.com>; stable@dpdk.org; Zhou,
> > YidingX <yidingx.zhou@intel.com>; Zhang, Ke1X <ke1x.zhang@intel.com>
> > Subject: RE: [PATCH v5] net/ice: fix ice dcf control thread crash
> 
> > >  	for (;;) {
> > > +		if (hw->vc_event_msg_cb == NULL)
> > > +			break;
> > Can you explain why this is required? It seems unrelated to your
> > commit log.
> The purpose of this is to bring all 'ice-reset' threads to a quick end when hw
> is released.

I don't understand: vc_event_msg_cb is initialized in ice_dcf_dev_init and never reset, so why do we need this check? Is there anything I missed?
> 
> > >
> > > -	rte_intr_enable(pci_dev->intr_handle);
> > > -	ice_dcf_enable_irq0(hw);
> > > +	if (hw->vc_event_msg_cb != NULL) {
> > > +		rte_intr_enable(pci_dev->intr_handle);
> > > +		ice_dcf_enable_irq0(hw);
> >
> > Same question as above
> These would be called while the HW resources are being released, so there is
> no need to call them.
> 
> > > +	rte_spinlock_lock(&dcf_hw->vsi_thread_lock);
> > > +	dcf_hw->vsi_update_thread_num++;
> > > +	rte_spinlock_unlock(&dcf_hw->vsi_thread_lock);
> >
> > I think you can define vsi_update_thread_num as rte_atomic32_t and use
> > rte_atomic32_add/sub
> At first I chose the rte_atomic32_t option, since using a spinlock is not
> very elegant.
> The 'checkpatches.sh' script gives a warning ('Using rte_atomicNN_xxx') when
> it is executed. I saw the comment on line 89 of the script (# refrain from new
> additions of 16/32/64 bits rte_atomicNN_xxx()), so I went with the spinlock
> solution.

You are right, rte_atomicNN_xxx will be deprecated and should not be used.
We can use the gcc built-in functions __atomic_fetch_add/sub as an alternative.
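
For example, a rough sketch of how the counter could look with the
built-ins (the memory orders below are only a suggestion):

	/* in the detached ice-reset handler */
	__atomic_fetch_add(&hw->vsi_update_thread_num, 1, __ATOMIC_RELAXED);
	...
	__atomic_fetch_sub(&hw->vsi_update_thread_num, 1, __ATOMIC_RELEASE);

	/* in the uninit path, wait until every detached thread is gone */
	while (__atomic_load_n(&hw->vsi_update_thread_num, __ATOMIC_ACQUIRE) != 0)
		rte_delay_ms(100);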

> 
> 


^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [PATCH v5] net/ice: fix ice dcf control thread crash
  2023-03-21 11:55             ` Zhang, Qi Z
@ 2023-03-21 16:24               ` Tyler Retzlaff
  0 siblings, 0 replies; 24+ messages in thread
From: Tyler Retzlaff @ 2023-03-21 16:24 UTC (permalink / raw)
  To: Zhang, Qi Z
  Cc: Ye, MingjinX, dev, Yang, Qiming, stable, Zhou, YidingX, Zhang, Ke1X

On Tue, Mar 21, 2023 at 11:55:19AM +0000, Zhang, Qi Z wrote:
> 
> 
> > -----Original Message-----
> > From: Ye, MingjinX <mingjinx.ye@intel.com>
> > Sent: Tuesday, March 21, 2023 10:08 AM
> > To: Zhang, Qi Z <qi.z.zhang@intel.com>; dev@dpdk.org
> > Cc: Yang, Qiming <qiming.yang@intel.com>; stable@dpdk.org; Zhou, YidingX
> > <yidingx.zhou@intel.com>; Zhang, Ke1X <ke1x.zhang@intel.com>
> > Subject: RE: [PATCH v5] net/ice: fix ice dcf control thread crash
> > 
> > Hi Qi, here is my new solution; could you give me some suggestions?
> > 1. Remove the 'vc_event_msg_cb == NULL' related processing and let each
> > 'ice-reset' thread end normally.
> > 2. Define vsi_update_thread_num as rte_atomic32_t.
> > 
> > > -----Original Message-----
> > > From: Zhang, Qi Z <qi.z.zhang@intel.com>
> > > Sent: 2023年3月20日 20:53
> > > To: Ye, MingjinX <mingjinx.ye@intel.com>; dev@dpdk.org
> > > Cc: Yang, Qiming <qiming.yang@intel.com>; stable@dpdk.org; Zhou,
> > > YidingX <yidingx.zhou@intel.com>; Zhang, Ke1X <ke1x.zhang@intel.com>
> > > Subject: RE: [PATCH v5] net/ice: fix ice dcf control thread crash
> > 
> > > >  	for (;;) {
> > > > +		if (hw->vc_event_msg_cb == NULL)
> > > > +			break;
> > > Can you explain why this is required? It seems unrelated to your
> > > commit log.
> > The purpose of this is to bring all 'ice-reset' threads to a quick end when hw
> > is released.
> 
> I don't understand: vc_event_msg_cb is initialized in ice_dcf_dev_init and never reset, so why do we need this check? Is there anything I missed?
> > 
> > > >
> > > > -	rte_intr_enable(pci_dev->intr_handle);
> > > > -	ice_dcf_enable_irq0(hw);
> > > > +	if (hw->vc_event_msg_cb != NULL) {
> > > > +		rte_intr_enable(pci_dev->intr_handle);
> > > > +		ice_dcf_enable_irq0(hw);
> > >
> > > Same question as above
> > These would be called while the HW resources are being released, so there is
> > no need to call them.
> > 
> > > > +	rte_spinlock_lock(&dcf_hw->vsi_thread_lock);
> > > > +	dcf_hw->vsi_update_thread_num++;
> > > > +	rte_spinlock_unlock(&dcf_hw->vsi_thread_lock);
> > >
> > > I think you can define vsi_update_thread_num as rte_atomic32_t and use
> > > rte_atomic32_add/sub
> > At first I chose the rte_atomic32_t option, since using a spinlock is not
> > very elegant.
> > The 'checkpatches.sh' script gives a warning ('Using rte_atomicNN_xxx') when
> > it is executed. I saw the comment on line 89 of the script (# refrain from new
> > additions of 16/32/64 bits rte_atomicNN_xxx()), so I went with the spinlock
> > solution.
> 
> You are right, rte_atomicNN_xxx will be deprecated and should not be used.
> We can use the gcc built-in functions __atomic_fetch_add/sub as an alternative.

Using __atomic_fetch_{add,sub} is appropriate. Please be aware that using
__atomic_{add,sub}_fetch is discouraged, but it is easy to write the
code using only the former.
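
For instance (illustration only):

	/* returns the value the counter had before the add */
	int prev = __atomic_fetch_add(&hw->vsi_update_thread_num, 1, __ATOMIC_RELAXED);

	/* if the post-add value is needed, derive it from the returned one
	 * instead of reaching for __atomic_add_fetch()
	 */
	int curr = prev + 1;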

> 
> > 
> > 

^ permalink raw reply	[flat|nested] 24+ messages in thread

* [PATCH v6] net/ice: fix ice dcf control thread crash
  2023-03-20  9:40       ` [PATCH v5] " Mingjin Ye
  2023-03-20 12:52         ` Zhang, Qi Z
@ 2023-03-22  5:56         ` Mingjin Ye
  2023-04-03  6:54           ` Zhang, Qi Z
  2023-04-11  2:08           ` [PATCH v7] " Mingjin Ye
  1 sibling, 2 replies; 24+ messages in thread
From: Mingjin Ye @ 2023-03-22  5:56 UTC (permalink / raw)
  To: dev; +Cc: qiming.yang, stable, yidingx.zhou, Mingjin Ye, Ke Zhang, Qi Zhang

The control thread accesses the hardware resources after the
resources were released, which results in a segmentation fault.

The 'ice-reset' threads are detached, so thread resources cannot be
reclaimed by `pthread_join` calls.

This commit synchronizes the number of "ice-reset" threads by adding a
variable ("vsi_update_thread_num") to the "struct ice_dcf_hw" and
performing an atomic operation on this variable. When releasing HW
resources, we wait for the number of "ice-reset" threads to be reduced
to 0 before releasing the resources.

Fixes: c7e1a1a3bfeb ("net/ice: refactor DCF VLAN handling")
Fixes: 7564d5509611 ("net/ice: add DCF hardware initialization")
Fixes: 0b02c9519432 ("net/ice: handle PF initialization by DCF")
Cc: stable@dpdk.org

Signed-off-by: Ke Zhang <ke1x.zhang@intel.com>
Signed-off-by: Mingjin Ye <mingjinx.ye@intel.com>
---
v2: add pthread_exit() for windows
---
v3: Optimization. It is unsafe for a thread to forcibly exit, which
will cause the spin lock to not be released correctly
---
v4: Safely wait for all event threads to end
---
v5: Spinlock moved to struct ice_dcf_hw
---
v6: Spinlock changed to atomic
---
 drivers/net/ice/ice_dcf.c        | 9 +++++++++
 drivers/net/ice/ice_dcf.h        | 2 ++
 drivers/net/ice/ice_dcf_parent.c | 6 ++++++
 3 files changed, 17 insertions(+)

diff --git a/drivers/net/ice/ice_dcf.c b/drivers/net/ice/ice_dcf.c
index 1c3d22ae0f..adf2cf2cb6 100644
--- a/drivers/net/ice/ice_dcf.c
+++ b/drivers/net/ice/ice_dcf.c
@@ -32,6 +32,8 @@
 #define ICE_DCF_ARQ_MAX_RETRIES 200
 #define ICE_DCF_ARQ_CHECK_TIME  2   /* msecs */
 
+#define ICE_DCF_CHECK_INTERVAL  100   /* 100ms */
+
 #define ICE_DCF_VF_RES_BUF_SZ	\
 	(sizeof(struct virtchnl_vf_resource) +	\
 		IAVF_MAX_VF_VSI * sizeof(struct virtchnl_vsi_resource))
@@ -639,6 +641,8 @@ ice_dcf_init_hw(struct rte_eth_dev *eth_dev, struct ice_dcf_hw *hw)
 	rte_spinlock_init(&hw->vc_cmd_queue_lock);
 	TAILQ_INIT(&hw->vc_cmd_queue);
 
+	__atomic_store_n(&hw->vsi_update_thread_num, 0, __ATOMIC_RELAXED);
+
 	hw->arq_buf = rte_zmalloc("arq_buf", ICE_DCF_AQ_BUF_SZ, 0);
 	if (hw->arq_buf == NULL) {
 		PMD_INIT_LOG(ERR, "unable to allocate AdminQ buffer memory");
@@ -760,6 +764,11 @@ ice_dcf_uninit_hw(struct rte_eth_dev *eth_dev, struct ice_dcf_hw *hw)
 	rte_intr_callback_unregister(intr_handle,
 				     ice_dcf_dev_interrupt_handler, hw);
 
+	/* Wait for all `ice-reset` threads to exit. */
+	while (__atomic_load_n(&hw->vsi_update_thread_num,
+		__ATOMIC_ACQUIRE) != 0)
+		rte_delay_ms(ICE_DCF_CHECK_INTERVAL);
+
 	ice_dcf_mode_disable(hw);
 	iavf_shutdown_adminq(&hw->avf);
 
diff --git a/drivers/net/ice/ice_dcf.h b/drivers/net/ice/ice_dcf.h
index 7f42ebabe9..7becf6d187 100644
--- a/drivers/net/ice/ice_dcf.h
+++ b/drivers/net/ice/ice_dcf.h
@@ -105,6 +105,8 @@ struct ice_dcf_hw {
 	void (*vc_event_msg_cb)(struct ice_dcf_hw *dcf_hw,
 				uint8_t *msg, uint16_t msglen);
 
+	int vsi_update_thread_num;
+
 	uint8_t *arq_buf;
 
 	uint16_t num_vfs;
diff --git a/drivers/net/ice/ice_dcf_parent.c b/drivers/net/ice/ice_dcf_parent.c
index 01e390ddda..6ee0fbcd2b 100644
--- a/drivers/net/ice/ice_dcf_parent.c
+++ b/drivers/net/ice/ice_dcf_parent.c
@@ -154,6 +154,9 @@ ice_dcf_vsi_update_service_handler(void *param)
 
 	free(param);
 
+	__atomic_fetch_sub(&hw->vsi_update_thread_num, 1,
+		__ATOMIC_RELEASE);
+
 	return NULL;
 }
 
@@ -183,6 +186,9 @@ start_vsi_reset_thread(struct ice_dcf_hw *dcf_hw, bool vfr, uint16_t vf_id)
 		PMD_DRV_LOG(ERR, "Failed to start the thread for reset handling");
 		free(param);
 	}
+
+	__atomic_fetch_add(&dcf_hw->vsi_update_thread_num, 1,
+		__ATOMIC_RELAXED);
 }
 
 static uint32_t
-- 
2.25.1


^ permalink raw reply	[flat|nested] 24+ messages in thread

* RE: [PATCH v6] net/ice: fix ice dcf control thread crash
  2023-03-22  5:56         ` [PATCH v6] " Mingjin Ye
@ 2023-04-03  6:54           ` Zhang, Qi Z
  2023-04-11  2:08           ` [PATCH v7] " Mingjin Ye
  1 sibling, 0 replies; 24+ messages in thread
From: Zhang, Qi Z @ 2023-04-03  6:54 UTC (permalink / raw)
  To: Ye, MingjinX, dev; +Cc: Yang, Qiming, stable, Zhou, YidingX, Zhang, Ke1X



> -----Original Message-----
> From: Ye, MingjinX <mingjinx.ye@intel.com>
> Sent: Wednesday, March 22, 2023 1:56 PM
> To: dev@dpdk.org
> Cc: Yang, Qiming <qiming.yang@intel.com>; stable@dpdk.org; Zhou, YidingX
> <yidingx.zhou@intel.com>; Ye, MingjinX <mingjinx.ye@intel.com>; Zhang,
> Ke1X <ke1x.zhang@intel.com>; Zhang, Qi Z <qi.z.zhang@intel.com>
> Subject: [PATCH v6] net/ice: fix ice dcf control thread crash
> 
> The control thread accesses the hardware resources after the resources were
> released, which results in a segmentation fault.
> 
> The 'ice-reset' threads are detached, so thread resources cannot be
> reclaimed by `pthread_join` calls.
> 
> This commit synchronizes the number of "ice-reset" threads by adding a
> variable ("vsi_update_thread_num") to the "struct ice_dcf_hw" and
> performing an atomic operation on this variable. When releasing HW
> resources, we wait for the number of "ice-reset" threads to be reduced to 0
> before releasing the resources.
> 
> Fixes: c7e1a1a3bfeb ("net/ice: refactor DCF VLAN handling")
> Fixes: 7564d5509611 ("net/ice: add DCF hardware initialization")
> Fixes: 0b02c9519432 ("net/ice: handle PF initialization by DCF")
> Cc: stable@dpdk.org
> 
> Signed-off-by: Ke Zhang <ke1x.zhang@intel.com>
> Signed-off-by: Mingjin Ye <mingjinx.ye@intel.com>
> ---
> v2: add pthread_exit() for windows
> ---
> v3: Optimization. It is unsafe for a thread to forcibly exit, which will cause the
> spin lock to not be released correctly
> ---
> v4: Safely wait for all event threads to end
> ---
> v5: Spinlock moved to struct ice_dcf_hw
> ---
> v6: Spinlock changed to atomic
> ---
>  drivers/net/ice/ice_dcf.c        | 9 +++++++++
>  drivers/net/ice/ice_dcf.h        | 2 ++
>  drivers/net/ice/ice_dcf_parent.c | 6 ++++++
>  3 files changed, 17 insertions(+)
> 
> diff --git a/drivers/net/ice/ice_dcf.c b/drivers/net/ice/ice_dcf.c index
> 1c3d22ae0f..adf2cf2cb6 100644
> --- a/drivers/net/ice/ice_dcf.c
> +++ b/drivers/net/ice/ice_dcf.c
> @@ -32,6 +32,8 @@
>  #define ICE_DCF_ARQ_MAX_RETRIES 200
>  #define ICE_DCF_ARQ_CHECK_TIME  2   /* msecs */
> 
> +#define ICE_DCF_CHECK_INTERVAL  100   /* 100ms */
> +
>  #define ICE_DCF_VF_RES_BUF_SZ	\
>  	(sizeof(struct virtchnl_vf_resource) +	\
>  		IAVF_MAX_VF_VSI * sizeof(struct virtchnl_vsi_resource)) @@
> -639,6 +641,8 @@ ice_dcf_init_hw(struct rte_eth_dev *eth_dev, struct
> ice_dcf_hw *hw)
>  	rte_spinlock_init(&hw->vc_cmd_queue_lock);
>  	TAILQ_INIT(&hw->vc_cmd_queue);
> 
> +	__atomic_store_n(&hw->vsi_update_thread_num, 0,
> __ATOMIC_RELAXED);
> +
>  	hw->arq_buf = rte_zmalloc("arq_buf", ICE_DCF_AQ_BUF_SZ, 0);
>  	if (hw->arq_buf == NULL) {
>  		PMD_INIT_LOG(ERR, "unable to allocate AdminQ buffer
> memory"); @@ -760,6 +764,11 @@ ice_dcf_uninit_hw(struct rte_eth_dev
> *eth_dev, struct ice_dcf_hw *hw)
>  	rte_intr_callback_unregister(intr_handle,
>  				     ice_dcf_dev_interrupt_handler, hw);
> 
> +	/* Wait for all `ice-reset` threads to exit. */
> +	while (__atomic_load_n(&hw->vsi_update_thread_num,
> +		__ATOMIC_ACQUIRE) != 0)
> +		rte_delay_ms(ICE_DCF_CHECK_INTERVAL);
> +
>  	ice_dcf_mode_disable(hw);
>  	iavf_shutdown_adminq(&hw->avf);
> 
> diff --git a/drivers/net/ice/ice_dcf.h b/drivers/net/ice/ice_dcf.h index
> 7f42ebabe9..7becf6d187 100644
> --- a/drivers/net/ice/ice_dcf.h
> +++ b/drivers/net/ice/ice_dcf.h
> @@ -105,6 +105,8 @@ struct ice_dcf_hw {
>  	void (*vc_event_msg_cb)(struct ice_dcf_hw *dcf_hw,
>  				uint8_t *msg, uint16_t msglen);
> 
> +	int vsi_update_thread_num;
> +
>  	uint8_t *arq_buf;
> 
>  	uint16_t num_vfs;
> diff --git a/drivers/net/ice/ice_dcf_parent.c
> b/drivers/net/ice/ice_dcf_parent.c
> index 01e390ddda..6ee0fbcd2b 100644
> --- a/drivers/net/ice/ice_dcf_parent.c
> +++ b/drivers/net/ice/ice_dcf_parent.c
> @@ -154,6 +154,9 @@ ice_dcf_vsi_update_service_handler(void *param)
> 
>  	free(param);
> 
> +	__atomic_fetch_sub(&hw->vsi_update_thread_num, 1,
> +		__ATOMIC_RELEASE);
> +
>  	return NULL;
>  }
> 
> @@ -183,6 +186,9 @@ start_vsi_reset_thread(struct ice_dcf_hw *dcf_hw,
> bool vfr, uint16_t vf_id)
>  		PMD_DRV_LOG(ERR, "Failed to start the thread for reset
> handling");
>  		free(param);
>  	}
> +
> +	__atomic_fetch_add(&dcf_hw->vsi_update_thread_num, 1,
> +		__ATOMIC_RELAXED);

Is this correct? You increment the counter after you start the new thread with rte_ctrl_thread_create,
so it is possible that the __atomic_fetch_sub in ice_dcf_vsi_update_service_handler is executed before the __atomic_fetch_add.

Why not move the __atomic_fetch_add into the service_handler thread, so that the order of "add" and "sub" is guaranteed?
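
I.e. roughly (sketch only, not a full patch):

	/* v6: the parent only bumps the counter after the thread was created */
	rte_ctrl_thread_create(..., ice_dcf_vsi_update_service_handler, param);
	__atomic_fetch_add(&dcf_hw->vsi_update_thread_num, 1, __ATOMIC_RELAXED);

	/* suggestion: make the increment the first thing the handler does */
	static void *
	ice_dcf_vsi_update_service_handler(void *param)
	{
		__atomic_fetch_add(&hw->vsi_update_thread_num, 1, __ATOMIC_RELAXED);
		...
	}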

>  }
> 
>  static uint32_t
> --
> 2.25.1


^ permalink raw reply	[flat|nested] 24+ messages in thread

* [PATCH v7] net/ice: fix ice dcf control thread crash
  2023-03-22  5:56         ` [PATCH v6] " Mingjin Ye
  2023-04-03  6:54           ` Zhang, Qi Z
@ 2023-04-11  2:08           ` Mingjin Ye
  2023-05-15  6:28             ` Zhang, Qi Z
  1 sibling, 1 reply; 24+ messages in thread
From: Mingjin Ye @ 2023-04-11  2:08 UTC (permalink / raw)
  To: dev; +Cc: qiming.yang, stable, yidingx.zhou, Mingjin Ye, Ke Zhang, Qi Zhang

The control thread accesses the hardware resources after the
resources were released, which results in a segmentation fault.

The 'ice-reset' threads are detached, so thread resources cannot be
reclaimed by `pthread_join` calls.

This commit synchronizes the number of "ice-reset" threads by adding a
variable ("vsi_update_thread_num") to the "struct ice_dcf_hw" and
performing an atomic operation on this variable. When releasing HW
resources, we wait for the number of "ice-reset" threads to be reduced
to 0 before releasing the resources.

Fixes: c7e1a1a3bfeb ("net/ice: refactor DCF VLAN handling")
Fixes: 3b3757bda3c3 ("net/ice: get VF hardware index in DCF")
Fixes: 7564d5509611 ("net/ice: add DCF hardware initialization")
Fixes: 0b02c9519432 ("net/ice: handle PF initialization by DCF")
Cc: stable@dpdk.org

Signed-off-by: Ke Zhang <ke1x.zhang@intel.com>
Signed-off-by: Mingjin Ye <mingjinx.ye@intel.com>
---
v2: add pthread_exit() for windows
---
v3: Optimization. It is unsafe for a thread to forcibly exit, which
will cause the spin lock to not be released correctly
---
v4: Safely wait for all event threads to end
---
v5: Spinlock moved to struct ice_dcf_hw
---
v6: Spinlock changed to atomic
---
V7: moving __atomic_fetch_add to the service_handler thread
---
 drivers/net/ice/ice_dcf.c        | 9 +++++++++
 drivers/net/ice/ice_dcf.h        | 2 ++
 drivers/net/ice/ice_dcf_parent.c | 6 ++++++
 3 files changed, 17 insertions(+)

diff --git a/drivers/net/ice/ice_dcf.c b/drivers/net/ice/ice_dcf.c
index 1c3d22ae0f..adf2cf2cb6 100644
--- a/drivers/net/ice/ice_dcf.c
+++ b/drivers/net/ice/ice_dcf.c
@@ -32,6 +32,8 @@
 #define ICE_DCF_ARQ_MAX_RETRIES 200
 #define ICE_DCF_ARQ_CHECK_TIME  2   /* msecs */
 
+#define ICE_DCF_CHECK_INTERVAL  100   /* 100ms */
+
 #define ICE_DCF_VF_RES_BUF_SZ	\
 	(sizeof(struct virtchnl_vf_resource) +	\
 		IAVF_MAX_VF_VSI * sizeof(struct virtchnl_vsi_resource))
@@ -639,6 +641,8 @@ ice_dcf_init_hw(struct rte_eth_dev *eth_dev, struct ice_dcf_hw *hw)
 	rte_spinlock_init(&hw->vc_cmd_queue_lock);
 	TAILQ_INIT(&hw->vc_cmd_queue);
 
+	__atomic_store_n(&hw->vsi_update_thread_num, 0, __ATOMIC_RELAXED);
+
 	hw->arq_buf = rte_zmalloc("arq_buf", ICE_DCF_AQ_BUF_SZ, 0);
 	if (hw->arq_buf == NULL) {
 		PMD_INIT_LOG(ERR, "unable to allocate AdminQ buffer memory");
@@ -760,6 +764,11 @@ ice_dcf_uninit_hw(struct rte_eth_dev *eth_dev, struct ice_dcf_hw *hw)
 	rte_intr_callback_unregister(intr_handle,
 				     ice_dcf_dev_interrupt_handler, hw);
 
+	/* Wait for all `ice-reset` threads to exit. */
+	while (__atomic_load_n(&hw->vsi_update_thread_num,
+		__ATOMIC_ACQUIRE) != 0)
+		rte_delay_ms(ICE_DCF_CHECK_INTERVAL);
+
 	ice_dcf_mode_disable(hw);
 	iavf_shutdown_adminq(&hw->avf);
 
diff --git a/drivers/net/ice/ice_dcf.h b/drivers/net/ice/ice_dcf.h
index 7f42ebabe9..7becf6d187 100644
--- a/drivers/net/ice/ice_dcf.h
+++ b/drivers/net/ice/ice_dcf.h
@@ -105,6 +105,8 @@ struct ice_dcf_hw {
 	void (*vc_event_msg_cb)(struct ice_dcf_hw *dcf_hw,
 				uint8_t *msg, uint16_t msglen);
 
+	int vsi_update_thread_num;
+
 	uint8_t *arq_buf;
 
 	uint16_t num_vfs;
diff --git a/drivers/net/ice/ice_dcf_parent.c b/drivers/net/ice/ice_dcf_parent.c
index 01e390ddda..0563edb0b2 100644
--- a/drivers/net/ice/ice_dcf_parent.c
+++ b/drivers/net/ice/ice_dcf_parent.c
@@ -124,6 +124,9 @@ ice_dcf_vsi_update_service_handler(void *param)
 		container_of(hw, struct ice_dcf_adapter, real_hw);
 	struct ice_adapter *parent_adapter = &adapter->parent;
 
+	__atomic_fetch_add(&hw->vsi_update_thread_num, 1,
+		__ATOMIC_RELAXED);
+
 	pthread_detach(pthread_self());
 
 	rte_delay_us(ICE_DCF_VSI_UPDATE_SERVICE_INTERVAL);
@@ -154,6 +157,9 @@ ice_dcf_vsi_update_service_handler(void *param)
 
 	free(param);
 
+	__atomic_fetch_sub(&hw->vsi_update_thread_num, 1,
+		__ATOMIC_RELEASE);
+
 	return NULL;
 }
 
-- 
2.25.1


^ permalink raw reply	[flat|nested] 24+ messages in thread

* RE: [PATCH v7] net/ice: fix ice dcf control thread crash
  2023-04-11  2:08           ` [PATCH v7] " Mingjin Ye
@ 2023-05-15  6:28             ` Zhang, Qi Z
  0 siblings, 0 replies; 24+ messages in thread
From: Zhang, Qi Z @ 2023-05-15  6:28 UTC (permalink / raw)
  To: Ye, MingjinX, dev; +Cc: Yang, Qiming, stable, Zhou, YidingX, Zhang, Ke1X



> -----Original Message-----
> From: Ye, MingjinX <mingjinx.ye@intel.com>
> Sent: Tuesday, April 11, 2023 10:09 AM
> To: dev@dpdk.org
> Cc: Yang, Qiming <qiming.yang@intel.com>; stable@dpdk.org; Zhou, YidingX
> <yidingx.zhou@intel.com>; Ye, MingjinX <mingjinx.ye@intel.com>; Zhang,
> Ke1X <ke1x.zhang@intel.com>; Zhang, Qi Z <qi.z.zhang@intel.com>
> Subject: [PATCH v7] net/ice: fix ice dcf control thread crash
> 
> The control thread accesses the hardware resources after the resources were
> released, which results in a segmentation fault.
> 
> The 'ice-reset' threads are detached, so thread resources cannot be
> reclaimed by `pthread_join` calls.
> 
> This commit synchronizes the number of "ice-reset" threads by adding a
> variable ("vsi_update_thread_num") to the "struct ice_dcf_hw" and
> performing an atomic operation on this variable. When releasing HW
> resources, we wait for the number of "ice-reset" threads to be reduced to 0
> before releasing the resources.
> 
> Fixes: c7e1a1a3bfeb ("net/ice: refactor DCF VLAN handling")
> Fixes: 3b3757bda3c3 ("net/ice: get VF hardware index in DCF")
> Fixes: 7564d5509611 ("net/ice: add DCF hardware initialization")
> Fixes: 0b02c9519432 ("net/ice: handle PF initialization by DCF")
> Cc: stable@dpdk.org
> 
> Signed-off-by: Ke Zhang <ke1x.zhang@intel.com>
> Signed-off-by: Mingjin Ye <mingjinx.ye@intel.com>

Acked-by: Qi Zhang <qi.z.zhang@intel.com>

Applied to dpdk-next-net-intel.

Thanks
Qi


^ permalink raw reply	[flat|nested] 24+ messages in thread

end of thread, other threads:[~2023-05-15  6:29 UTC | newest]

Thread overview: 24+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2023-02-08  8:30 [PATCH] net/ice: fix ice dcf contrl thread crash Ke Zhang
2023-02-09  0:05 ` Stephen Hemminger
2023-02-13  7:03 ` [PATCH v2] " Ke Zhang
2023-02-21  0:29   ` Zhang, Qi Z
2023-02-13  7:14 ` Ke Zhang
2023-02-13  7:16 ` [PATCH v2] net/ice: fix ice dcf control " Ke Zhang
2023-02-14 11:03   ` Thomas Monjalon
2023-02-16  7:53     ` Zhang, Ke1X
2023-02-20  0:30       ` Thomas Monjalon
2023-03-01  1:54         ` Zhang, Ke1X
2023-03-01 14:53   ` Kevin Traynor
2023-03-15  8:20   ` [PATCH v3] " Mingjin Ye
2023-03-15 13:06     ` Zhang, Qi Z
2023-03-17  5:09     ` [PATCH v4] " Mingjin Ye
2023-03-17 10:15       ` Zhang, Qi Z
2023-03-20  9:40       ` [PATCH v5] " Mingjin Ye
2023-03-20 12:52         ` Zhang, Qi Z
2023-03-21  2:08           ` Ye, MingjinX
2023-03-21 11:55             ` Zhang, Qi Z
2023-03-21 16:24               ` Tyler Retzlaff
2023-03-22  5:56         ` [PATCH v6] " Mingjin Ye
2023-04-03  6:54           ` Zhang, Qi Z
2023-04-11  2:08           ` [PATCH v7] " Mingjin Ye
2023-05-15  6:28             ` Zhang, Qi Z

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox;
as well as URLs for NNTP newsgroup(s).