DPDK patches and discussions
* [dpdk-dev] [PATCH] app/testpmd: fix random number of Tx segments
@ 2021-09-02  8:20 Alvin Zhang
  2021-09-06  8:58 ` Li, Xiaoyun
  2021-09-17  1:39 ` [dpdk-dev] [PATCH v2 1/2] app/testpmd: update forward engine beginning Alvin Zhang
  0 siblings, 2 replies; 24+ messages in thread
From: Alvin Zhang @ 2021-09-02  8:20 UTC (permalink / raw)
  To: xiaoyun.li, konstantin.ananyev; +Cc: dev, Alvin Zhang, stable

When a random number of segments in Tx packets is enabled, the
total data space length of all segments must be greater than or
equal to the size of an Eth/IP/UDP/timestamp packet, i.e.
14 + 20 + 8 + 16 bytes in total. Otherwise the Tx engine
may cause the application to crash.

Bugzilla ID: 797
Fixes: 79bec05b32b7 ("app/testpmd: add ability to split outgoing packets")
Cc: stable@dpdk.org

Signed-off-by: Alvin Zhang <alvinx.zhang@intel.com>
---
 app/test-pmd/config.c  | 16 +++++++++++-----
 app/test-pmd/testpmd.c |  5 +++++
 app/test-pmd/testpmd.h |  5 +++++
 app/test-pmd/txonly.c  |  7 +++++--
 4 files changed, 26 insertions(+), 7 deletions(-)

diff --git a/app/test-pmd/config.c b/app/test-pmd/config.c
index 31d8ba1..5105b3b 100644
--- a/app/test-pmd/config.c
+++ b/app/test-pmd/config.c
@@ -3837,10 +3837,11 @@ struct igb_ring_desc_16_bytes {
 	 * Check that each segment length is greater or equal than
 	 * the mbuf data size.
 	 * Check also that the total packet length is greater or equal than the
-	 * size of an empty UDP/IP packet (sizeof(struct rte_ether_hdr) +
-	 * 20 + 8).
+	 * size of an Eth/IP/UDP + timestamp packet
+	 * (sizeof(struct rte_ether_hdr) + 20 + 8 + 16).
 	 */
 	tx_pkt_len = 0;
+	tx_pkt_nb_min_segs = 0;
 	for (i = 0; i < nb_segs; i++) {
 		if (seg_lengths[i] > mbuf_data_size[0]) {
 			fprintf(stderr,
@@ -3849,11 +3850,16 @@ struct igb_ring_desc_16_bytes {
 			return;
 		}
 		tx_pkt_len = (uint16_t)(tx_pkt_len + seg_lengths[i]);
+
+		if (!tx_pkt_nb_min_segs &&
+		    tx_pkt_len >= (sizeof(struct rte_ether_hdr) + 20 + 8 + 16))
+			tx_pkt_nb_min_segs = i + 1;
 	}
-	if (tx_pkt_len < (sizeof(struct rte_ether_hdr) + 20 + 8)) {
+
+	if (!tx_pkt_nb_min_segs) {
 		fprintf(stderr, "total packet length=%u < %d - give up\n",
-				(unsigned) tx_pkt_len,
-				(int)(sizeof(struct rte_ether_hdr) + 20 + 8));
+			(unsigned int) tx_pkt_len,
+			(int)(sizeof(struct rte_ether_hdr) + 20 + 8 + 16));
 		return;
 	}
 
diff --git a/app/test-pmd/testpmd.c b/app/test-pmd/testpmd.c
index 6cbe9ba..c496e59 100644
--- a/app/test-pmd/testpmd.c
+++ b/app/test-pmd/testpmd.c
@@ -232,6 +232,11 @@ struct fwd_engine * fwd_engines[] = {
 };
 uint8_t  tx_pkt_nb_segs = 1; /**< Number of segments in TXONLY packets */
 
+/**< Minimum number of segments in TXONLY packets to accommodate all packet
+ * headers.
+ */
+uint8_t  tx_pkt_nb_min_segs = 1;
+
 enum tx_pkt_split tx_pkt_split = TX_PKT_SPLIT_OFF;
 /**< Split policy for packets to TX. */
 
diff --git a/app/test-pmd/testpmd.h b/app/test-pmd/testpmd.h
index 16a3598..f5bc427 100644
--- a/app/test-pmd/testpmd.h
+++ b/app/test-pmd/testpmd.h
@@ -464,6 +464,11 @@ enum dcb_mode_enable
 extern uint16_t tx_pkt_length; /**< Length of TXONLY packet */
 extern uint16_t tx_pkt_seg_lengths[RTE_MAX_SEGS_PER_PKT]; /**< Seg. lengths */
 extern uint8_t  tx_pkt_nb_segs; /**< Number of segments in TX packets */
+
+/**< Minimum number of segments in TXONLY packets to accommodate all packet
+ * headers.
+ */
+extern uint8_t  tx_pkt_nb_min_segs;
 extern uint32_t tx_pkt_times_intra;
 extern uint32_t tx_pkt_times_inter;
 
diff --git a/app/test-pmd/txonly.c b/app/test-pmd/txonly.c
index aed820f..27e4458 100644
--- a/app/test-pmd/txonly.c
+++ b/app/test-pmd/txonly.c
@@ -195,8 +195,11 @@
 	uint32_t nb_segs, pkt_len;
 	uint8_t i;
 
-	if (unlikely(tx_pkt_split == TX_PKT_SPLIT_RND))
-		nb_segs = rte_rand() % tx_pkt_nb_segs + 1;
+	if (unlikely(tx_pkt_split == TX_PKT_SPLIT_RND) &&
+	    tx_pkt_nb_segs > tx_pkt_nb_min_segs)
+		nb_segs = rte_rand() %
+			  (tx_pkt_nb_segs - tx_pkt_nb_min_segs + 1) +
+			  tx_pkt_nb_min_segs;
 	else
 		nb_segs = tx_pkt_nb_segs;
 
-- 
1.8.3.1



* Re: [dpdk-dev] [PATCH] app/testpmd: fix random number of Tx segments
  2021-09-02  8:20 [dpdk-dev] [PATCH] app/testpmd: fix random number of Tx segments Alvin Zhang
@ 2021-09-06  8:58 ` Li, Xiaoyun
  2021-09-06 10:03   ` Zhang, AlvinX
  2021-09-17  1:39 ` [dpdk-dev] [PATCH v2 1/2] app/testpmd: update forward engine beginning Alvin Zhang
  1 sibling, 1 reply; 24+ messages in thread
From: Li, Xiaoyun @ 2021-09-06  8:58 UTC (permalink / raw)
  To: Zhang, AlvinX, Ananyev, Konstantin; +Cc: dev, stable

Hi

> -----Original Message-----
> From: Zhang, AlvinX <alvinx.zhang@intel.com>
> Sent: Thursday, September 2, 2021 16:20
> To: Li, Xiaoyun <xiaoyun.li@intel.com>; Ananyev, Konstantin
> <konstantin.ananyev@intel.com>
> Cc: dev@dpdk.org; Zhang, AlvinX <alvinx.zhang@intel.com>; stable@dpdk.org
> Subject: [PATCH] app/testpmd: fix random number of Tx segments
> 
> When a random number of segments in Tx packets is enabled, the total data
> space length of all segments must be greater than or equal to the size of an
> Eth/IP/UDP/timestamp packet, i.e. 14 + 20 + 8 + 16 bytes in total. Otherwise
> the Tx engine may cause the application to crash.
> 
> Bugzilla ID: 797
> Fixes: 79bec05b32b7 ("app/testpmd: add ability to split outgoing packets")
> Cc: stable@dpdk.org
> 
> Signed-off-by: Alvin Zhang <alvinx.zhang@intel.com>
> ---
>  app/test-pmd/config.c  | 16 +++++++++++-----  app/test-pmd/testpmd.c |  5
> +++++  app/test-pmd/testpmd.h |  5 +++++  app/test-pmd/txonly.c  |  7 +++++--
>  4 files changed, 26 insertions(+), 7 deletions(-)
> 
> diff --git a/app/test-pmd/config.c b/app/test-pmd/config.c index
> 31d8ba1..5105b3b 100644
> --- a/app/test-pmd/config.c
> +++ b/app/test-pmd/config.c
> @@ -3837,10 +3837,11 @@ struct igb_ring_desc_16_bytes {
>  	 * Check that each segment length is greater or equal than
>  	 * the mbuf data size.
>  	 * Check also that the total packet length is greater or equal than the
> -	 * size of an empty UDP/IP packet (sizeof(struct rte_ether_hdr) +
> -	 * 20 + 8).
> +	 * size of an Eth/IP/UDP + timestamp packet
> +	 * (sizeof(struct rte_ether_hdr) + 20 + 8 + 16).

I don't really agree with this. Most of the time, txonly generates packets with Eth/IP/UDP. It's not fair to limit the hdr length to include the timestamp in all cases.
And to be honest, I don't see why you need to add "tx_pkt_nb_min_segs". It's only used in txonly when "TX_PKT_SPLIT_RND". So this issue arises because, with "TX_PKT_SPLIT_RND", the random nb_segs may not be enough for the hdr.

But if you read txonly carefully, with "TX_PKT_SPLIT_RND" the first segment length should be equal to or greater than 42 (14+20+8), because when "TX_PKT_SPLIT_RND" is set, update_pkt_header() will be called, and that function doesn't deal with a header spread over multiple segments.
I think there's a bug here.

So I think you should only add a check in pkt_burst_prepare() in txonly.c:
	if (unlikely(tx_pkt_split == TX_PKT_SPLIT_RND) || txonly_multi_flow) {
+		if (tx_pkt_seg_lengths[0] < 42) {
+			err_log;
+			return false;
+		}
		update_pkt_header(pkt, pkt_len);
+	}

As for the timestamp, maybe referring to "pkt_copy_split" in csumonly.c is better? Copy the extra bytes into the last segment if the current one doesn't have enough room. Not sure how to deal with this issue better.
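
Something like this rough sketch, maybe, reusing the copy_buf_to_pkt()
helper already in txonly.c (which walks the segment chain); the struct
mirrors the existing 16-byte timestamp mark, and fill_timestamp_mark()
is only a placeholder name for the existing fill logic:

	/* Sketch: place the 16-byte timestamp mark at the packet tail,
	 * letting copy_buf_to_pkt() spill it into later segments when
	 * the current segment has no room left.
	 */
	struct {
		rte_be32_t signature;
		rte_be16_t pkt_idx;
		rte_be16_t queue_idx;
		rte_be64_t ts;
	} mark;

	fill_timestamp_mark(&mark);	/* placeholder for the existing fill code */
	copy_buf_to_pkt(&mark, sizeof(mark), pkt, pkt_len - sizeof(mark));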

>  	 */
>  	tx_pkt_len = 0;
> +	tx_pkt_nb_min_segs = 0;
>  	for (i = 0; i < nb_segs; i++) {
>  		if (seg_lengths[i] > mbuf_data_size[0]) {
>  			fprintf(stderr,
> @@ -3849,11 +3850,16 @@ struct igb_ring_desc_16_bytes {
>  			return;
>  		}
>  		tx_pkt_len = (uint16_t)(tx_pkt_len + seg_lengths[i]);
> +
> +		if (!tx_pkt_nb_min_segs &&
> +		    tx_pkt_len >= (sizeof(struct rte_ether_hdr) + 20 + 8 + 16))
> +			tx_pkt_nb_min_segs = i + 1;
>  	}
> -	if (tx_pkt_len < (sizeof(struct rte_ether_hdr) + 20 + 8)) {
> +
> +	if (!tx_pkt_nb_min_segs) {
>  		fprintf(stderr, "total packet length=%u < %d - give up\n",
> -				(unsigned) tx_pkt_len,
> -				(int)(sizeof(struct rte_ether_hdr) + 20 + 8));
> +			(unsigned int) tx_pkt_len,
> +			(int)(sizeof(struct rte_ether_hdr) + 20 + 8 + 16));
>  		return;
>  	}
> 
> diff --git a/app/test-pmd/testpmd.c b/app/test-pmd/testpmd.c index
> 6cbe9ba..c496e59 100644
> --- a/app/test-pmd/testpmd.c
> +++ b/app/test-pmd/testpmd.c
> @@ -232,6 +232,11 @@ struct fwd_engine * fwd_engines[] = {  };  uint8_t
> tx_pkt_nb_segs = 1; /**< Number of segments in TXONLY packets */
> 
> +/**< Minimum number of segments in TXONLY packets to accommodate all
> +packet
> + * headers.
> + */
> +uint8_t  tx_pkt_nb_min_segs = 1;
> +
>  enum tx_pkt_split tx_pkt_split = TX_PKT_SPLIT_OFF;  /**< Split policy for
> packets to TX. */
> 
> diff --git a/app/test-pmd/testpmd.h b/app/test-pmd/testpmd.h index
> 16a3598..f5bc427 100644
> --- a/app/test-pmd/testpmd.h
> +++ b/app/test-pmd/testpmd.h
> @@ -464,6 +464,11 @@ enum dcb_mode_enable  extern uint16_t
> tx_pkt_length; /**< Length of TXONLY packet */  extern uint16_t
> tx_pkt_seg_lengths[RTE_MAX_SEGS_PER_PKT]; /**< Seg. lengths */  extern
> uint8_t  tx_pkt_nb_segs; /**< Number of segments in TX packets */
> +
> +/**< Minimum number of segments in TXONLY packets to accommodate all
> +packet
> + * headers.
> + */
> +extern uint8_t  tx_pkt_nb_min_segs;
>  extern uint32_t tx_pkt_times_intra;
>  extern uint32_t tx_pkt_times_inter;
> 
> diff --git a/app/test-pmd/txonly.c b/app/test-pmd/txonly.c index
> aed820f..27e4458 100644
> --- a/app/test-pmd/txonly.c
> +++ b/app/test-pmd/txonly.c
> @@ -195,8 +195,11 @@
>  	uint32_t nb_segs, pkt_len;
>  	uint8_t i;
> 
> -	if (unlikely(tx_pkt_split == TX_PKT_SPLIT_RND))
> -		nb_segs = rte_rand() % tx_pkt_nb_segs + 1;
> +	if (unlikely(tx_pkt_split == TX_PKT_SPLIT_RND) &&
> +	    tx_pkt_nb_segs > tx_pkt_nb_min_segs)
> +		nb_segs = rte_rand() %
> +			  (tx_pkt_nb_segs - tx_pkt_nb_min_segs + 1) +
> +			  tx_pkt_nb_min_segs;
>  	else
>  		nb_segs = tx_pkt_nb_segs;
> 
> --
> 1.8.3.1



* Re: [dpdk-dev] [PATCH] app/testpmd: fix random number of Tx segments
  2021-09-06  8:58 ` Li, Xiaoyun
@ 2021-09-06 10:03   ` Zhang, AlvinX
  2021-09-06 10:54     ` Li, Xiaoyun
  0 siblings, 1 reply; 24+ messages in thread
From: Zhang, AlvinX @ 2021-09-06 10:03 UTC (permalink / raw)
  To: Li, Xiaoyun, Ananyev, Konstantin; +Cc: dev, stable

> -----Original Message-----
> From: Li, Xiaoyun <xiaoyun.li@intel.com>
> Sent: Monday, September 6, 2021 4:59 PM
> To: Zhang, AlvinX <alvinx.zhang@intel.com>; Ananyev, Konstantin
> <konstantin.ananyev@intel.com>
> Cc: dev@dpdk.org; stable@dpdk.org
> Subject: RE: [PATCH] app/testpmd: fix random number of Tx segments
> 
> Hi
> 
> > -----Original Message-----
> > From: Zhang, AlvinX <alvinx.zhang@intel.com>
> > Sent: Thursday, September 2, 2021 16:20
> > To: Li, Xiaoyun <xiaoyun.li@intel.com>; Ananyev, Konstantin
> > <konstantin.ananyev@intel.com>
> > Cc: dev@dpdk.org; Zhang, AlvinX <alvinx.zhang@intel.com>;
> > stable@dpdk.org
> > Subject: [PATCH] app/testpmd: fix random number of Tx segments
> >
> > When random number of segments in Tx packets is enabled, the total
> > data space length of all segments must be greater or equal than the
> > size of an Eth/IP/UDP/timestamp packet, that's total 14 + 20 + 8 + 16
> > bytes. Otherwise the Tx engine may cause the application to crash.
> >
> > Bugzilla ID: 797
> > Fixes: 79bec05b32b7 ("app/testpmd: add ability to split outgoing
> > packets")
> > Cc: stable@dpdk.org
> >
> > Signed-off-by: Alvin Zhang <alvinx.zhang@intel.com>
> > ---
> >  app/test-pmd/config.c  | 16 +++++++++++-----  app/test-pmd/testpmd.c
> > |  5
> > +++++  app/test-pmd/testpmd.h |  5 +++++  app/test-pmd/txonly.c  |  7
> > +++++ +++++--
> >  4 files changed, 26 insertions(+), 7 deletions(-)
> >
> > diff --git a/app/test-pmd/config.c b/app/test-pmd/config.c index
> > 31d8ba1..5105b3b 100644
> > --- a/app/test-pmd/config.c
> > +++ b/app/test-pmd/config.c
> > @@ -3837,10 +3837,11 @@ struct igb_ring_desc_16_bytes {
> >  	 * Check that each segment length is greater or equal than
> >  	 * the mbuf data size.
> >  	 * Check also that the total packet length is greater or equal than the
> > -	 * size of an empty UDP/IP packet (sizeof(struct rte_ether_hdr) +
> > -	 * 20 + 8).
> > +	 * size of an Eth/IP/UDP + timestamp packet
> > +	 * (sizeof(struct rte_ether_hdr) + 20 + 8 + 16).
> 
> I don't really agree on this. Most of the time, txonly generate packets with
> Eth/IP/UDP. It's not fair to limit the hdr length to include timestamp in all cases.
> And to be honest, I don't see why you need to add "tx_pkt_nb_min_segs". It's
> only used in txonly when "TX_PKT_SPLIT_RND". So this issue is because when
> "TX_PKT_SPLIT_RND", the random nb_segs is not enough for the hdr.
> 
> But if you read txonly carefully, if "TX_PKT_SPLIT_RND", the first segment length
> should be equal or greater than 42 (14+20+8). Because when
> "TX_PKT_SPLIT_RND", update_pkt_header() should be called. And that function
> doesn't deal with header in multi-segments.
> I think there's bug here.
> 
> So I think you should only add a check in pkt_burst_prepare() in txonly().
> 	if (unlikely(tx_pkt_split == TX_PKT_SPLIT_RND) || txonly_multi_flow)
> +                           if (tx_pkt_seg_lengths[0] < 42) {
> +			err_log;
> +			return false;
> +		}
> 		update_pkt_header(pkt, pkt_len);

Yes, I didn't notice the updating of the UDP header, but the bug first occurs in this call:
copy_buf_to_pkt(&pkt_udp_hdr, sizeof(pkt_udp_hdr), pkt,
			sizeof(struct rte_ether_hdr) +
			sizeof(struct rte_ipv4_hdr));
not in update_pkt_header().

Expecting users to set a minimum of 42 bytes for the first segment seems OK here,
but I think putting the check where the data space lengths of the segments are configured is more graceful.
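
For reference, an illustrative sketch of the multi-segment copy path in
txonly.c (paraphrased, not quoted verbatim): with a too-short random
chain, the segment walk runs off the end and seg becomes NULL.

	static void
	copy_buf_to_pkt_segs(void *buf, unsigned int len,
			     struct rte_mbuf *pkt, unsigned int offset)
	{
		struct rte_mbuf *seg = pkt;

		/* Skip whole segments until 'offset' lands inside one. */
		while (offset >= seg->data_len) {
			offset -= seg->data_len;
			seg = seg->next;  /* NULL once the chain is exhausted */
		}
		/* seg is dereferenced from here on: when nb_segs is too
		 * small for offset + len, this is the reported segfault.
		 */
		rte_memcpy(rte_pktmbuf_mtod_offset(seg, char *, offset),
			   buf, len);	/* simplified: the real code loops over segs */
	}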

> 
> As for timestamp, maybe refer to "pkt_copy_split" in csumonly.c is better? Copy
> the extra to the last segment if it's not enough. Not sure how to deal with this
> issue better.
> 
> >  	 */
> >  	tx_pkt_len = 0;
> > +	tx_pkt_nb_min_segs = 0;
> >  	for (i = 0; i < nb_segs; i++) {
> >  		if (seg_lengths[i] > mbuf_data_size[0]) {
> >  			fprintf(stderr,
> > @@ -3849,11 +3850,16 @@ struct igb_ring_desc_16_bytes {
> >  			return;
> >  		}
> >  		tx_pkt_len = (uint16_t)(tx_pkt_len + seg_lengths[i]);
> > +
> > +		if (!tx_pkt_nb_min_segs &&
> > +		    tx_pkt_len >= (sizeof(struct rte_ether_hdr) + 20 + 8 + 16))
> > +			tx_pkt_nb_min_segs = i + 1;
> >  	}
> > -	if (tx_pkt_len < (sizeof(struct rte_ether_hdr) + 20 + 8)) {
> > +
> > +	if (!tx_pkt_nb_min_segs) {
> >  		fprintf(stderr, "total packet length=%u < %d - give up\n",
> > -				(unsigned) tx_pkt_len,
> > -				(int)(sizeof(struct rte_ether_hdr) + 20 + 8));
> > +			(unsigned int) tx_pkt_len,
> > +			(int)(sizeof(struct rte_ether_hdr) + 20 + 8 + 16));
> >  		return;
> >  	}
> >
> > diff --git a/app/test-pmd/testpmd.c b/app/test-pmd/testpmd.c index
> > 6cbe9ba..c496e59 100644
> > --- a/app/test-pmd/testpmd.c
> > +++ b/app/test-pmd/testpmd.c
> > @@ -232,6 +232,11 @@ struct fwd_engine * fwd_engines[] = {  };
> > uint8_t tx_pkt_nb_segs = 1; /**< Number of segments in TXONLY packets
> > */
> >
> > +/**< Minimum number of segments in TXONLY packets to accommodate all
> > +packet
> > + * headers.
> > + */
> > +uint8_t  tx_pkt_nb_min_segs = 1;
> > +
> >  enum tx_pkt_split tx_pkt_split = TX_PKT_SPLIT_OFF;  /**< Split policy
> > for packets to TX. */
> >
> > diff --git a/app/test-pmd/testpmd.h b/app/test-pmd/testpmd.h index
> > 16a3598..f5bc427 100644
> > --- a/app/test-pmd/testpmd.h
> > +++ b/app/test-pmd/testpmd.h
> > @@ -464,6 +464,11 @@ enum dcb_mode_enable  extern uint16_t
> > tx_pkt_length; /**< Length of TXONLY packet */  extern uint16_t
> > tx_pkt_seg_lengths[RTE_MAX_SEGS_PER_PKT]; /**< Seg. lengths */  extern
> > uint8_t  tx_pkt_nb_segs; /**< Number of segments in TX packets */
> > +
> > +/**< Minimum number of segments in TXONLY packets to accommodate all
> > +packet
> > + * headers.
> > + */
> > +extern uint8_t  tx_pkt_nb_min_segs;
> >  extern uint32_t tx_pkt_times_intra;
> >  extern uint32_t tx_pkt_times_inter;
> >
> > diff --git a/app/test-pmd/txonly.c b/app/test-pmd/txonly.c index
> > aed820f..27e4458 100644
> > --- a/app/test-pmd/txonly.c
> > +++ b/app/test-pmd/txonly.c
> > @@ -195,8 +195,11 @@
> >  	uint32_t nb_segs, pkt_len;
> >  	uint8_t i;
> >
> > -	if (unlikely(tx_pkt_split == TX_PKT_SPLIT_RND))
> > -		nb_segs = rte_rand() % tx_pkt_nb_segs + 1;
> > +	if (unlikely(tx_pkt_split == TX_PKT_SPLIT_RND) &&
> > +	    tx_pkt_nb_segs > tx_pkt_nb_min_segs)
> > +		nb_segs = rte_rand() %
> > +			  (tx_pkt_nb_segs - tx_pkt_nb_min_segs + 1) +
> > +			  tx_pkt_nb_min_segs;
> >  	else
> >  		nb_segs = tx_pkt_nb_segs;
> >
> > --
> > 1.8.3.1



* Re: [dpdk-dev] [PATCH] app/testpmd: fix random number of Tx segments
  2021-09-06 10:03   ` Zhang, AlvinX
@ 2021-09-06 10:54     ` Li, Xiaoyun
  2021-09-07  2:25       ` Zhang, AlvinX
  0 siblings, 1 reply; 24+ messages in thread
From: Li, Xiaoyun @ 2021-09-06 10:54 UTC (permalink / raw)
  To: Zhang, AlvinX, Ananyev, Konstantin; +Cc: dev, stable



> -----Original Message-----
> From: Zhang, AlvinX <alvinx.zhang@intel.com>
> Sent: Monday, September 6, 2021 18:04
> To: Li, Xiaoyun <xiaoyun.li@intel.com>; Ananyev, Konstantin
> <konstantin.ananyev@intel.com>
> Cc: dev@dpdk.org; stable@dpdk.org
> Subject: RE: [PATCH] app/testpmd: fix random number of Tx segments
> 
> > -----Original Message-----
> > From: Li, Xiaoyun <xiaoyun.li@intel.com>
> > Sent: Monday, September 6, 2021 4:59 PM
> > To: Zhang, AlvinX <alvinx.zhang@intel.com>; Ananyev, Konstantin
> > <konstantin.ananyev@intel.com>
> > Cc: dev@dpdk.org; stable@dpdk.org
> > Subject: RE: [PATCH] app/testpmd: fix random number of Tx segments
> >
> > Hi
> >
> > > -----Original Message-----
> > > From: Zhang, AlvinX <alvinx.zhang@intel.com>
> > > Sent: Thursday, September 2, 2021 16:20
> > > To: Li, Xiaoyun <xiaoyun.li@intel.com>; Ananyev, Konstantin
> > > <konstantin.ananyev@intel.com>
> > > Cc: dev@dpdk.org; Zhang, AlvinX <alvinx.zhang@intel.com>;
> > > stable@dpdk.org
> > > Subject: [PATCH] app/testpmd: fix random number of Tx segments
> > >
> > > When random number of segments in Tx packets is enabled, the total
> > > data space length of all segments must be greater or equal than the
> > > size of an Eth/IP/UDP/timestamp packet, that's total 14 + 20 + 8 +
> > > 16 bytes. Otherwise the Tx engine may cause the application to crash.
> > >
> > > Bugzilla ID: 797
> > > Fixes: 79bec05b32b7 ("app/testpmd: add ability to split outgoing
> > > packets")
> > > Cc: stable@dpdk.org
> > >
> > > Signed-off-by: Alvin Zhang <alvinx.zhang@intel.com>
> > > ---
> > >  app/test-pmd/config.c  | 16 +++++++++++-----
> > > app/test-pmd/testpmd.c
> > > |  5
> > > +++++  app/test-pmd/testpmd.h |  5 +++++  app/test-pmd/txonly.c  |
> > > +++++ 7
> > > +++++ +++++--
> > >  4 files changed, 26 insertions(+), 7 deletions(-)
> > >
> > > diff --git a/app/test-pmd/config.c b/app/test-pmd/config.c index
> > > 31d8ba1..5105b3b 100644
> > > --- a/app/test-pmd/config.c
> > > +++ b/app/test-pmd/config.c
> > > @@ -3837,10 +3837,11 @@ struct igb_ring_desc_16_bytes {
> > >  	 * Check that each segment length is greater or equal than
> > >  	 * the mbuf data size.
> > >  	 * Check also that the total packet length is greater or equal than the
> > > -	 * size of an empty UDP/IP packet (sizeof(struct rte_ether_hdr) +
> > > -	 * 20 + 8).
> > > +	 * size of an Eth/IP/UDP + timestamp packet
> > > +	 * (sizeof(struct rte_ether_hdr) + 20 + 8 + 16).
> >
> > I don't really agree on this. Most of the time, txonly generate
> > packets with Eth/IP/UDP. It's not fair to limit the hdr length to include
> timestamp in all cases.
> > And to be honest, I don't see why you need to add
> > "tx_pkt_nb_min_segs". It's only used in txonly when
> > "TX_PKT_SPLIT_RND". So this issue is because when "TX_PKT_SPLIT_RND", the
> random nb_segs is not enough for the hdr.
> >
> > But if you read txonly carefully, if "TX_PKT_SPLIT_RND", the first
> > segment length should be equal or greater than 42 (14+20+8). Because
> > when "TX_PKT_SPLIT_RND", update_pkt_header() should be called. And
> > that function doesn't deal with header in multi-segments.
> > I think there's bug here.
> >
> > So I think you should only add a check in pkt_burst_prepare() in txonly().
> > 	if (unlikely(tx_pkt_split == TX_PKT_SPLIT_RND) || txonly_multi_flow)
> > +                           if (tx_pkt_seg_lengths[0] < 42) {
> > +			err_log;
> > +			return false;
> > +		}
> > 		update_pkt_header(pkt, pkt_len);
> 
> Yes, I didn't notice the updating for the UDP header, but the bug first occurs in
> this function copy_buf_to_pkt(&pkt_udp_hdr, sizeof(pkt_udp_hdr), pkt,
> 			sizeof(struct rte_ether_hdr) +
> 			sizeof(struct rte_ipv4_hdr));
> not in update_pkt_header.
> 
> Here we expecting users should set minimum 42 byte for first segment seems ok,
> But I think we putting the check in configuring the data space length of first
> segment is more graceful.

No. You didn't get my point. It's not graceful at all.
The segmentation fault will only happen with "TX_PKT_SPLIT_RND", because the hdr may take 2 segs while the random nb_segs is only 1.
But if it's not "TX_PKT_SPLIT_RND", the current set_tx_pkt_segments() already makes sure pkt_len is enough for the 42 bytes.

So the only case you need to deal with is "TX_PKT_SPLIT_RND", and "TX_PKT_SPLIT_RND" actually needs the first segment to be big enough to contain the 42 bytes.
And in cmd_set_txpkts_parsed, you may not know yet whether "TX_PKT_SPLIT_RND" will be configured. That's why you should check before update_pkt_header().

In this way, when it's NOT "TX_PKT_SPLIT_RND", the old behavior that hdrs may cross several segs is maintained, which makes more sense.
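
A concrete instance of the failure (using the "set txpkts 40,64" case
shown further down the thread):

	/* set txpkts 40,64 + set txsplit rand:
	 *   tx_pkt_nb_segs = 2, seg lengths = {40, 64}, pkt_len = 104,
	 *   so the existing total-length >= 42 check passes.
	 */
	nb_segs = rte_rand() % tx_pkt_nb_segs + 1;	/* may yield 1 */
	/* With nb_segs == 1 the chain offers only 40 bytes of data
	 * space, but copying the UDP header needs offset 34 + len 8 =
	 * 42 bytes, so the copy walks past the end of the chain.
	 */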

> 
> >
> > As for timestamp, maybe refer to "pkt_copy_split" in csumonly.c is
> > better? Copy the extra to the last segment if it's not enough. Not
> > sure how to deal with this issue better.
> >
> > >  	 */
> > >  	tx_pkt_len = 0;
> > > +	tx_pkt_nb_min_segs = 0;
> > >  	for (i = 0; i < nb_segs; i++) {
> > >  		if (seg_lengths[i] > mbuf_data_size[0]) {
> > >  			fprintf(stderr,
> > > @@ -3849,11 +3850,16 @@ struct igb_ring_desc_16_bytes {
> > >  			return;
> > >  		}
> > >  		tx_pkt_len = (uint16_t)(tx_pkt_len + seg_lengths[i]);
> > > +
> > > +		if (!tx_pkt_nb_min_segs &&
> > > +		    tx_pkt_len >= (sizeof(struct rte_ether_hdr) + 20 + 8 + 16))
> > > +			tx_pkt_nb_min_segs = i + 1;
> > >  	}
> > > -	if (tx_pkt_len < (sizeof(struct rte_ether_hdr) + 20 + 8)) {
> > > +
> > > +	if (!tx_pkt_nb_min_segs) {
> > >  		fprintf(stderr, "total packet length=%u < %d - give up\n",
> > > -				(unsigned) tx_pkt_len,
> > > -				(int)(sizeof(struct rte_ether_hdr) + 20 + 8));
> > > +			(unsigned int) tx_pkt_len,
> > > +			(int)(sizeof(struct rte_ether_hdr) + 20 + 8 + 16));
> > >  		return;
> > >  	}
> > >
> > > diff --git a/app/test-pmd/testpmd.c b/app/test-pmd/testpmd.c index
> > > 6cbe9ba..c496e59 100644
> > > --- a/app/test-pmd/testpmd.c
> > > +++ b/app/test-pmd/testpmd.c
> > > @@ -232,6 +232,11 @@ struct fwd_engine * fwd_engines[] = {  };
> > > uint8_t tx_pkt_nb_segs = 1; /**< Number of segments in TXONLY
> > > packets */
> > >
> > > +/**< Minimum number of segments in TXONLY packets to accommodate
> > > +all packet
> > > + * headers.
> > > + */
> > > +uint8_t  tx_pkt_nb_min_segs = 1;
> > > +
> > >  enum tx_pkt_split tx_pkt_split = TX_PKT_SPLIT_OFF;  /**< Split
> > > policy for packets to TX. */
> > >
> > > diff --git a/app/test-pmd/testpmd.h b/app/test-pmd/testpmd.h index
> > > 16a3598..f5bc427 100644
> > > --- a/app/test-pmd/testpmd.h
> > > +++ b/app/test-pmd/testpmd.h
> > > @@ -464,6 +464,11 @@ enum dcb_mode_enable  extern uint16_t
> > > tx_pkt_length; /**< Length of TXONLY packet */  extern uint16_t
> > > tx_pkt_seg_lengths[RTE_MAX_SEGS_PER_PKT]; /**< Seg. lengths */
> > > extern uint8_t  tx_pkt_nb_segs; /**< Number of segments in TX
> > > packets */
> > > +
> > > +/**< Minimum number of segments in TXONLY packets to accommodate
> > > +all packet
> > > + * headers.
> > > + */
> > > +extern uint8_t  tx_pkt_nb_min_segs;
> > >  extern uint32_t tx_pkt_times_intra;  extern uint32_t
> > > tx_pkt_times_inter;
> > >
> > > diff --git a/app/test-pmd/txonly.c b/app/test-pmd/txonly.c index
> > > aed820f..27e4458 100644
> > > --- a/app/test-pmd/txonly.c
> > > +++ b/app/test-pmd/txonly.c
> > > @@ -195,8 +195,11 @@
> > >  	uint32_t nb_segs, pkt_len;
> > >  	uint8_t i;
> > >
> > > -	if (unlikely(tx_pkt_split == TX_PKT_SPLIT_RND))
> > > -		nb_segs = rte_rand() % tx_pkt_nb_segs + 1;
> > > +	if (unlikely(tx_pkt_split == TX_PKT_SPLIT_RND) &&
> > > +	    tx_pkt_nb_segs > tx_pkt_nb_min_segs)
> > > +		nb_segs = rte_rand() %
> > > +			  (tx_pkt_nb_segs - tx_pkt_nb_min_segs + 1) +
> > > +			  tx_pkt_nb_min_segs;
> > >  	else
> > >  		nb_segs = tx_pkt_nb_segs;
> > >
> > > --
> > > 1.8.3.1



* Re: [dpdk-dev] [PATCH] app/testpmd: fix random number of Tx segments
  2021-09-06 10:54     ` Li, Xiaoyun
@ 2021-09-07  2:25       ` Zhang, AlvinX
  2021-09-07  8:05         ` Li, Xiaoyun
  0 siblings, 1 reply; 24+ messages in thread
From: Zhang, AlvinX @ 2021-09-07  2:25 UTC (permalink / raw)
  To: Li, Xiaoyun, Ananyev, Konstantin; +Cc: dev, stable

> -----Original Message-----
> From: Li, Xiaoyun <xiaoyun.li@intel.com>
> Sent: Monday, September 6, 2021 6:55 PM
> To: Zhang, AlvinX <alvinx.zhang@intel.com>; Ananyev, Konstantin
> <konstantin.ananyev@intel.com>
> Cc: dev@dpdk.org; stable@dpdk.org
> Subject: RE: [PATCH] app/testpmd: fix random number of Tx segments
> 
> 
> 
> > -----Original Message-----
> > From: Zhang, AlvinX <alvinx.zhang@intel.com>
> > Sent: Monday, September 6, 2021 18:04
> > To: Li, Xiaoyun <xiaoyun.li@intel.com>; Ananyev, Konstantin
> > <konstantin.ananyev@intel.com>
> > Cc: dev@dpdk.org; stable@dpdk.org
> > Subject: RE: [PATCH] app/testpmd: fix random number of Tx segments
> >
> > > -----Original Message-----
> > > From: Li, Xiaoyun <xiaoyun.li@intel.com>
> > > Sent: Monday, September 6, 2021 4:59 PM
> > > To: Zhang, AlvinX <alvinx.zhang@intel.com>; Ananyev, Konstantin
> > > <konstantin.ananyev@intel.com>
> > > Cc: dev@dpdk.org; stable@dpdk.org
> > > Subject: RE: [PATCH] app/testpmd: fix random number of Tx segments
> > >
> > > Hi
> > >
> > > > -----Original Message-----
> > > > From: Zhang, AlvinX <alvinx.zhang@intel.com>
> > > > Sent: Thursday, September 2, 2021 16:20
> > > > To: Li, Xiaoyun <xiaoyun.li@intel.com>; Ananyev, Konstantin
> > > > <konstantin.ananyev@intel.com>
> > > > Cc: dev@dpdk.org; Zhang, AlvinX <alvinx.zhang@intel.com>;
> > > > stable@dpdk.org
> > > > Subject: [PATCH] app/testpmd: fix random number of Tx segments
> > > >
> > > > When random number of segments in Tx packets is enabled, the total
> > > > data space length of all segments must be greater or equal than
> > > > the size of an Eth/IP/UDP/timestamp packet, that's total 14 + 20 +
> > > > 8 +
> > > > 16 bytes. Otherwise the Tx engine may cause the application to crash.
> > > >
> > > > Bugzilla ID: 797
> > > > Fixes: 79bec05b32b7 ("app/testpmd: add ability to split outgoing
> > > > packets")
> > > > Cc: stable@dpdk.org
> > > >
> > > > Signed-off-by: Alvin Zhang <alvinx.zhang@intel.com>
> > > > ---
> > > >  app/test-pmd/config.c  | 16 +++++++++++-----
> > > > app/test-pmd/testpmd.c
> > > > |  5
> > > > +++++  app/test-pmd/testpmd.h |  5 +++++  app/test-pmd/txonly.c  |
> > > > +++++ 7
> > > > +++++ +++++--
> > > >  4 files changed, 26 insertions(+), 7 deletions(-)
> > > >
> > > > diff --git a/app/test-pmd/config.c b/app/test-pmd/config.c index
> > > > 31d8ba1..5105b3b 100644
> > > > --- a/app/test-pmd/config.c
> > > > +++ b/app/test-pmd/config.c
> > > > @@ -3837,10 +3837,11 @@ struct igb_ring_desc_16_bytes {
> > > >  	 * Check that each segment length is greater or equal than
> > > >  	 * the mbuf data size.
> > > >  	 * Check also that the total packet length is greater or equal than the
> > > > -	 * size of an empty UDP/IP packet (sizeof(struct rte_ether_hdr) +
> > > > -	 * 20 + 8).
> > > > +	 * size of an Eth/IP/UDP + timestamp packet
> > > > +	 * (sizeof(struct rte_ether_hdr) + 20 + 8 + 16).
> > >
> > > I don't really agree on this. Most of the time, txonly generate
> > > packets with Eth/IP/UDP. It's not fair to limit the hdr length to
> > > include
> > timestamp in all cases.
> > > And to be honest, I don't see why you need to add
> > > "tx_pkt_nb_min_segs". It's only used in txonly when
> > > "TX_PKT_SPLIT_RND". So this issue is because when
> > > "TX_PKT_SPLIT_RND", the
> > random nb_segs is not enough for the hdr.
> > >
> > > But if you read txonly carefully, if "TX_PKT_SPLIT_RND", the first
> > > segment length should be equal or greater than 42 (14+20+8). Because
> > > when "TX_PKT_SPLIT_RND", update_pkt_header() should be called. And
> > > that function doesn't deal with header in multi-segments.
> > > I think there's bug here.
> > >
> > > So I think you should only add a check in pkt_burst_prepare() in txonly().
> > > 	if (unlikely(tx_pkt_split == TX_PKT_SPLIT_RND) ||
> > > txonly_multi_flow)
> > > +                           if (tx_pkt_seg_lengths[0] < 42) {
> > > +			err_log;
> > > +			return false;
> > > +		}
> > > 		update_pkt_header(pkt, pkt_len);


As above, if the user has the configuration below, not a single packet will be sent out, and there will be lots and lots of repeated, annoying logs.
testpmd> set fwd txonly
Set txonly packet forwarding mode
testpmd> set txpkts 40,64
testpmd> set txsplit rand
testpmd> start
txonly packet forwarding - ports=1 - cores=1 - streams=4 - NUMA support enabled, MP allocation mode: native
Logical Core 2 (socket 0) forwards packets on 4 streams:
  RX P=0/Q=0 (socket 0) -> TX P=0/Q=0 (socket 0) peer=02:00:00:00:00:00
  RX P=0/Q=1 (socket 0) -> TX P=0/Q=1 (socket 0) peer=02:00:00:00:00:00
  RX P=0/Q=2 (socket 0) -> TX P=0/Q=2 (socket 0) peer=02:00:00:00:00:00
  RX P=0/Q=3 (socket 0) -> TX P=0/Q=3 (socket 0) peer=02:00:00:00:00:00

  txonly packet forwarding packets/burst=32
  packet len=104 - nb packet segments=2
  nb forwarding cores=1 - nb forwarding ports=1
  port 0: RX queue number: 4 Tx queue number: 4
    Rx offloads=0x0 Tx offloads=0x10000
    RX queue: 0
      RX desc=1024 - RX free threshold=32
      RX threshold registers: pthresh=0 hthresh=0  wthresh=0
      RX Offloads=0x0
    TX queue: 0
      TX desc=1024 - TX free threshold=32
      TX threshold registers: pthresh=32 hthresh=0  wthresh=0
      TX offloads=0x10000 - TX RS bit threshold=32
tx_pkt_seg_lengths[0] must bigger than 41 bytes
tx_pkt_seg_lengths[0] must bigger than 41 bytes
testpmd> tx_pkt_seg_lengths[0] must bigger than 41 bytes
tx_pkt_seg_lengths[0] must bigger than 41 bytes
tx_pkt_seg_lengths[0] must bigger than 41 bytes
tx_pkt_seg_lengths[0] must bigger than 41 bytes
...
...
testpmd> stop
Telling cores to stop...
Waiting for lcores to finish...

  ---------------------- Forward statistics for port 0  ----------------------
  RX-packets: 0              RX-dropped: 0             RX-total: 0
  TX-packets: 0              TX-dropped: 0             TX-total: 0
  ----------------------------------------------------------------------------

  +++++++++++++++ Accumulated forward statistics for all ports+++++++++++++++
  RX-packets: 0              RX-dropped: 0             RX-total: 0
  TX-packets: 0              TX-dropped: 0             TX-total: 0
  ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

By the way, if multi-flow is enabled, the IP header will also be updated, so we should put the check before the code below.

	if (txonly_multi_flow) {
		uint8_t  ip_var = RTE_PER_LCORE(_ip_var);
		struct rte_ipv4_hdr *ip_hdr;
		uint32_t addr;

		ip_hdr = rte_pktmbuf_mtod_offset(pkt,
				struct rte_ipv4_hdr *,
				sizeof(struct rte_ether_hdr));
		/*
		 * Generate multiple flows by varying IP src addr. This
		 * enables packets are well distributed by RSS in
		 * receiver side if any and txonly mode can be a decent
		 * packet generator for developer's quick performance
		 * regression test.
		 */
		addr = (tx_ip_dst_addr | (ip_var++ << 8)) + rte_lcore_id();
		ip_hdr->src_addr = rte_cpu_to_be_32(addr);
		RTE_PER_LCORE(_ip_var) = ip_var;
	}
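
i.e. the check would have to sit above both header updates, something
like this sketch (the exact error handling is illustrative only):

	if (unlikely(tx_pkt_split == TX_PKT_SPLIT_RND) || txonly_multi_flow) {
		/* Both paths rewrite headers inside the first segment,
		 * so it must hold the whole 42-byte Eth/IP/UDP header.
		 */
		if (tx_pkt_seg_lengths[0] < sizeof(struct rte_ether_hdr) +
				sizeof(struct rte_ipv4_hdr) +
				sizeof(struct rte_udp_hdr)) {
			fprintf(stderr,
				"first segment too short for headers\n");
			return false;
		}
	}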

> >
> > Yes, I didn't notice the updating for the UDP header, but the bug
> > first occurs in this function copy_buf_to_pkt(&pkt_udp_hdr,
> sizeof(pkt_udp_hdr), pkt,
> > 			sizeof(struct rte_ether_hdr) +
> > 			sizeof(struct rte_ipv4_hdr));
> > not in update_pkt_header.
> >
> > Here we expecting users should set minimum 42 byte for first segment
> > seems ok, But I think we putting the check in configuring the data
> > space length of first segment is more graceful.
> 
> No. You didn't get my point. It's not graceful at all.
> The segment fault will only happen when "TX_PKT_SPLIT_RND". Because the
> hdr may take 2 segs while random nb_segs is only 1.
> But if it's not "TX_PKT_SPLIT_RND", the current set_tx_pkt_segments() already
> make sure pkt_len is enough for 42.
> 
> So the only case you need to deal with is "TX_PKT_SPLIT_RND". And since
> "TX_PKT_SPLIT_RND" actually needs the first segment to be enough to contain
> 42 bytes.
> And in cmd_set_txpkts_parsed, you may not get the configuration to know if it
> will be configured as "TX_PKT_SPLIT_RND". That's why you should check before
> update_pkt_header().
> 
> In this way, when it's NOT "TX_PKT_SPLIT_RND", it will maintain the old
> behavior that hdrs may cross several segs which makes more sense.
> 
> >
> > >
> > > As for timestamp, maybe refer to "pkt_copy_split" in csumonly.c is
> > > better? Copy the extra to the last segment if it's not enough. Not
> > > sure how to deal with this issue better.
> > >
> > > >  	 */
> > > >  	tx_pkt_len = 0;
> > > > +	tx_pkt_nb_min_segs = 0;
> > > >  	for (i = 0; i < nb_segs; i++) {
> > > >  		if (seg_lengths[i] > mbuf_data_size[0]) {
> > > >  			fprintf(stderr,
> > > > @@ -3849,11 +3850,16 @@ struct igb_ring_desc_16_bytes {
> > > >  			return;
> > > >  		}
> > > >  		tx_pkt_len = (uint16_t)(tx_pkt_len + seg_lengths[i]);
> > > > +
> > > > +		if (!tx_pkt_nb_min_segs &&
> > > > +		    tx_pkt_len >= (sizeof(struct rte_ether_hdr) + 20 + 8 + 16))
> > > > +			tx_pkt_nb_min_segs = i + 1;
> > > >  	}
> > > > -	if (tx_pkt_len < (sizeof(struct rte_ether_hdr) + 20 + 8)) {
> > > > +
> > > > +	if (!tx_pkt_nb_min_segs) {
> > > >  		fprintf(stderr, "total packet length=%u < %d - give up\n",
> > > > -				(unsigned) tx_pkt_len,
> > > > -				(int)(sizeof(struct rte_ether_hdr) + 20 + 8));
> > > > +			(unsigned int) tx_pkt_len,
> > > > +			(int)(sizeof(struct rte_ether_hdr) + 20 + 8 + 16));
> > > >  		return;
> > > >  	}
> > > >
> > > > diff --git a/app/test-pmd/testpmd.c b/app/test-pmd/testpmd.c index
> > > > 6cbe9ba..c496e59 100644
> > > > --- a/app/test-pmd/testpmd.c
> > > > +++ b/app/test-pmd/testpmd.c
> > > > @@ -232,6 +232,11 @@ struct fwd_engine * fwd_engines[] = {  };
> > > > uint8_t tx_pkt_nb_segs = 1; /**< Number of segments in TXONLY
> > > > packets */
> > > >
> > > > +/**< Minimum number of segments in TXONLY packets to accommodate
> > > > +all packet
> > > > + * headers.
> > > > + */
> > > > +uint8_t  tx_pkt_nb_min_segs = 1;
> > > > +
> > > >  enum tx_pkt_split tx_pkt_split = TX_PKT_SPLIT_OFF;  /**< Split
> > > > policy for packets to TX. */
> > > >
> > > > diff --git a/app/test-pmd/testpmd.h b/app/test-pmd/testpmd.h index
> > > > 16a3598..f5bc427 100644
> > > > --- a/app/test-pmd/testpmd.h
> > > > +++ b/app/test-pmd/testpmd.h
> > > > @@ -464,6 +464,11 @@ enum dcb_mode_enable  extern uint16_t
> > > > tx_pkt_length; /**< Length of TXONLY packet */  extern uint16_t
> > > > tx_pkt_seg_lengths[RTE_MAX_SEGS_PER_PKT]; /**< Seg. lengths */
> > > > extern uint8_t  tx_pkt_nb_segs; /**< Number of segments in TX
> > > > packets */
> > > > +
> > > > +/**< Minimum number of segments in TXONLY packets to accommodate
> > > > +all packet
> > > > + * headers.
> > > > + */
> > > > +extern uint8_t  tx_pkt_nb_min_segs;
> > > >  extern uint32_t tx_pkt_times_intra;  extern uint32_t
> > > > tx_pkt_times_inter;
> > > >
> > > > diff --git a/app/test-pmd/txonly.c b/app/test-pmd/txonly.c index
> > > > aed820f..27e4458 100644
> > > > --- a/app/test-pmd/txonly.c
> > > > +++ b/app/test-pmd/txonly.c
> > > > @@ -195,8 +195,11 @@
> > > >  	uint32_t nb_segs, pkt_len;
> > > >  	uint8_t i;
> > > >
> > > > -	if (unlikely(tx_pkt_split == TX_PKT_SPLIT_RND))
> > > > -		nb_segs = rte_rand() % tx_pkt_nb_segs + 1;
> > > > +	if (unlikely(tx_pkt_split == TX_PKT_SPLIT_RND) &&
> > > > +	    tx_pkt_nb_segs > tx_pkt_nb_min_segs)
> > > > +		nb_segs = rte_rand() %
> > > > +			  (tx_pkt_nb_segs - tx_pkt_nb_min_segs + 1) +
> > > > +			  tx_pkt_nb_min_segs;
> > > >  	else
> > > >  		nb_segs = tx_pkt_nb_segs;
> > > >
> > > > --
> > > > 1.8.3.1



* Re: [dpdk-dev] [PATCH] app/testpmd: fix random number of Tx segments
  2021-09-07  2:25       ` Zhang, AlvinX
@ 2021-09-07  8:05         ` Li, Xiaoyun
  0 siblings, 0 replies; 24+ messages in thread
From: Li, Xiaoyun @ 2021-09-07  8:05 UTC (permalink / raw)
  To: Zhang, AlvinX, Ananyev, Konstantin; +Cc: dev, stable



> -----Original Message-----
> From: Zhang, AlvinX <alvinx.zhang@intel.com>
> Sent: Tuesday, September 7, 2021 10:25
> To: Li, Xiaoyun <xiaoyun.li@intel.com>; Ananyev, Konstantin
> <konstantin.ananyev@intel.com>
> Cc: dev@dpdk.org; stable@dpdk.org
> Subject: RE: [PATCH] app/testpmd: fix random number of Tx segments
> 
> > -----Original Message-----
> > From: Li, Xiaoyun <xiaoyun.li@intel.com>
> > Sent: Monday, September 6, 2021 6:55 PM
> > To: Zhang, AlvinX <alvinx.zhang@intel.com>; Ananyev, Konstantin
> > <konstantin.ananyev@intel.com>
> > Cc: dev@dpdk.org; stable@dpdk.org
> > Subject: RE: [PATCH] app/testpmd: fix random number of Tx segments
> >
> >
> >
> > > -----Original Message-----
> > > From: Zhang, AlvinX <alvinx.zhang@intel.com>
> > > Sent: Monday, September 6, 2021 18:04
> > > To: Li, Xiaoyun <xiaoyun.li@intel.com>; Ananyev, Konstantin
> > > <konstantin.ananyev@intel.com>
> > > Cc: dev@dpdk.org; stable@dpdk.org
> > > Subject: RE: [PATCH] app/testpmd: fix random number of Tx segments
> > >
> > > > -----Original Message-----
> > > > From: Li, Xiaoyun <xiaoyun.li@intel.com>
> > > > Sent: Monday, September 6, 2021 4:59 PM
> > > > To: Zhang, AlvinX <alvinx.zhang@intel.com>; Ananyev, Konstantin
> > > > <konstantin.ananyev@intel.com>
> > > > Cc: dev@dpdk.org; stable@dpdk.org
> > > > Subject: RE: [PATCH] app/testpmd: fix random number of Tx segments
> > > >
> > > > Hi
> > > >
> > > > > -----Original Message-----
> > > > > From: Zhang, AlvinX <alvinx.zhang@intel.com>
> > > > > Sent: Thursday, September 2, 2021 16:20
> > > > > To: Li, Xiaoyun <xiaoyun.li@intel.com>; Ananyev, Konstantin
> > > > > <konstantin.ananyev@intel.com>
> > > > > Cc: dev@dpdk.org; Zhang, AlvinX <alvinx.zhang@intel.com>;
> > > > > stable@dpdk.org
> > > > > Subject: [PATCH] app/testpmd: fix random number of Tx segments
> > > > >
> > > > > When random number of segments in Tx packets is enabled, the
> > > > > total data space length of all segments must be greater or equal
> > > > > than the size of an Eth/IP/UDP/timestamp packet, that's total 14
> > > > > + 20 +
> > > > > 8 +
> > > > > 16 bytes. Otherwise the Tx engine may cause the application to crash.
> > > > >
> > > > > Bugzilla ID: 797
> > > > > Fixes: 79bec05b32b7 ("app/testpmd: add ability to split outgoing
> > > > > packets")
> > > > > Cc: stable@dpdk.org
> > > > >
> > > > > Signed-off-by: Alvin Zhang <alvinx.zhang@intel.com>
> > > > > ---
> > > > >  app/test-pmd/config.c  | 16 +++++++++++-----
> > > > > app/test-pmd/testpmd.c
> > > > > |  5
> > > > > +++++  app/test-pmd/testpmd.h |  5 +++++  app/test-pmd/txonly.c
> > > > > +++++ |
> > > > > +++++ 7
> > > > > +++++ +++++--
> > > > >  4 files changed, 26 insertions(+), 7 deletions(-)
> > > > >
> > > > > diff --git a/app/test-pmd/config.c b/app/test-pmd/config.c index
> > > > > 31d8ba1..5105b3b 100644
> > > > > --- a/app/test-pmd/config.c
> > > > > +++ b/app/test-pmd/config.c
> > > > > @@ -3837,10 +3837,11 @@ struct igb_ring_desc_16_bytes {
> > > > >  	 * Check that each segment length is greater or equal than
> > > > >  	 * the mbuf data size.
> > > > >  	 * Check also that the total packet length is greater or equal than the
> > > > > -	 * size of an empty UDP/IP packet (sizeof(struct rte_ether_hdr) +
> > > > > -	 * 20 + 8).
> > > > > +	 * size of an Eth/IP/UDP + timestamp packet
> > > > > +	 * (sizeof(struct rte_ether_hdr) + 20 + 8 + 16).
> > > >
> > > > I don't really agree on this. Most of the time, txonly generate
> > > > packets with Eth/IP/UDP. It's not fair to limit the hdr length to
> > > > include
> > > timestamp in all cases.
> > > > And to be honest, I don't see why you need to add
> > > > "tx_pkt_nb_min_segs". It's only used in txonly when
> > > > "TX_PKT_SPLIT_RND". So this issue is because when
> > > > "TX_PKT_SPLIT_RND", the
> > > random nb_segs is not enough for the hdr.
> > > >
> > > > But if you read txonly carefully, if "TX_PKT_SPLIT_RND", the first
> > > > segment length should be equal or greater than 42 (14+20+8).
> > > > Because when "TX_PKT_SPLIT_RND", update_pkt_header() should be
> > > > called. And that function doesn't deal with header in multi-segments.
> > > > I think there's bug here.
> > > >
> > > > So I think you should only add a check in pkt_burst_prepare() in txonly().
> > > > 	if (unlikely(tx_pkt_split == TX_PKT_SPLIT_RND) ||
> > > > txonly_multi_flow)
> > > > +                           if (tx_pkt_seg_lengths[0] < 42) {
> > > > +			err_log;
> > > > +			return false;
> > > > +		}
> > > > 		update_pkt_header(pkt, pkt_len);
> 
> 
> As above, If user have below configuration, there will no one packet be sent out
> and have lots and lots of repeated annoying logs.
> testpmd> set fwd txonly
> Set txonly packet forwarding mode
> testpmd> set txpkts 40,64
> testpmd> set txsplit rand
> testpmd> start
> txonly packet forwarding - ports=1 - cores=1 - streams=4 - NUMA support
> enabled, MP allocation mode: native Logical Core 2 (socket 0) forwards packets
> on 4 streams:
>   RX P=0/Q=0 (socket 0) -> TX P=0/Q=0 (socket 0) peer=02:00:00:00:00:00
>   RX P=0/Q=1 (socket 0) -> TX P=0/Q=1 (socket 0) peer=02:00:00:00:00:00
>   RX P=0/Q=2 (socket 0) -> TX P=0/Q=2 (socket 0) peer=02:00:00:00:00:00
>   RX P=0/Q=3 (socket 0) -> TX P=0/Q=3 (socket 0) peer=02:00:00:00:00:00
> 
>   txonly packet forwarding packets/burst=32
>   packet len=104 - nb packet segments=2
>   nb forwarding cores=1 - nb forwarding ports=1
>   port 0: RX queue number: 4 Tx queue number: 4
>     Rx offloads=0x0 Tx offloads=0x10000
>     RX queue: 0
>       RX desc=1024 - RX free threshold=32
>       RX threshold registers: pthresh=0 hthresh=0  wthresh=0
>       RX Offloads=0x0
>     TX queue: 0
>       TX desc=1024 - TX free threshold=32
>       TX threshold registers: pthresh=32 hthresh=0  wthresh=0
>       TX offloads=0x10000 - TX RS bit threshold=32
> tx_pkt_seg_lengths[0] must bigger than 41 bytes
> testpmd> tx_pkt_seg_lengths[0] must bigger than 41 bytes
> tx_pkt_seg_lengths[0] must bigger than 41 bytes
> tx_pkt_seg_lengths[0] must bigger than 41 bytes
> ...
> ...
> testpmd> stop
> Telling cores to stop...
> Waiting for lcores to finish...
> 
>   ---------------------- Forward statistics for port 0  ----------------------
>   RX-packets: 0              RX-dropped: 0             RX-total: 0
>   TX-packets: 0              TX-dropped: 0             TX-total: 0
>   ----------------------------------------------------------------------------
> 
>   +++++++++++++++ Accumulated forward statistics for all
> ports+++++++++++++++
>   RX-packets: 0              RX-dropped: 0             RX-total: 0
>   TX-packets: 0              TX-dropped: 0             TX-total: 0
> 
> +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
> +++++++++++
> 
> By the way, if multi flow was enabled, the ip header also will be updated, so we
> should put the check before below codes.
> 
> 	if (txonly_multi_flow) {
> 		uint8_t  ip_var = RTE_PER_LCORE(_ip_var);
> 		struct rte_ipv4_hdr *ip_hdr;
> 		uint32_t addr;
> 
> 		ip_hdr = rte_pktmbuf_mtod_offset(pkt,
> 				struct rte_ipv4_hdr *,
> 				sizeof(struct rte_ether_hdr));
> 		/*
> 		 * Generate multiple flows by varying IP src addr. This
> 		 * enables packets are well distributed by RSS in
> 		 * receiver side if any and txonly mode can be a decent
> 		 * packet generator for developer's quick performance
> 		 * regression test.
> 		 */
> 		addr = (tx_ip_dst_addr | (ip_var++ << 8)) + rte_lcore_id();
> 		ip_hdr->src_addr = rte_cpu_to_be_32(addr);
> 		RTE_PER_LCORE(_ip_var) = ip_var;
> 	}
> 

Yes. No matter where you put the check in txonly, there'll be annoying prints.
Tricky.
I can only think of 2 solutions:
1. If the hdr (ip or udp) needs updating and the first seg len < 42, copy pkt_ip_hdr and pkt_udp_hdr to temp values, update the temp values and use copy_buf_to_pkt() to copy the tmp hdrs into the segs.
Keep the original code for the other cases to avoid a perf drop. Also, this one needs to keep your min_segs design.

The shortcoming is that it's quite a complicated change.

2. Force the first segment length to be at least 42 bytes in set_tx_pkt_segments().

This way is very simple, but the shortcoming is that it limits usage, and you probably need to add a notice in the docs to tell users about this change.
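
A minimal sketch of option 2, assuming the check sits next to the
existing total-length check in set_tx_pkt_segments() in config.c (the
message text is illustrative):

	/* Reject configs whose first segment can't hold the full
	 * Eth/IP/UDP header, so TX_PKT_SPLIT_RND can never leave the
	 * header spread over a too-short random chain.
	 */
	if (seg_lengths[0] < sizeof(struct rte_ether_hdr) + 20 + 8) {
		fprintf(stderr,
			"first segment length=%u < %u - give up\n",
			(unsigned int)seg_lengths[0],
			(unsigned int)(sizeof(struct rte_ether_hdr) + 20 + 8));
		return;
	}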

> > >
> > > Yes, I didn't notice the updating for the UDP header, but the bug
> > > first occurs in this function copy_buf_to_pkt(&pkt_udp_hdr,
> > sizeof(pkt_udp_hdr), pkt,
> > > 			sizeof(struct rte_ether_hdr) +
> > > 			sizeof(struct rte_ipv4_hdr));
> > > not in update_pkt_header.
> > >
> > > Here we expecting users should set minimum 42 byte for first segment
> > > seems ok, But I think we putting the check in configuring the data
> > > space length of first segment is more graceful.
> >
> > No. You didn't get my point. It's not graceful at all.
> > The segment fault will only happen when "TX_PKT_SPLIT_RND". Because
> > the hdr may take 2 segs while random nb_segs is only 1.
> > But if it's not "TX_PKT_SPLIT_RND", the current set_tx_pkt_segments()
> > already make sure pkt_len is enough for 42.
> >
> > So the only case you need to deal with is "TX_PKT_SPLIT_RND". And
> > since "TX_PKT_SPLIT_RND" actually needs the first segment to be enough
> > to contain
> > 42 bytes.
> > And in cmd_set_txpkts_parsed, you may not get the configuration to
> > know if it will be configured as "TX_PKT_SPLIT_RND". That's why you
> > should check before update_pkt_header().
> >
> > In this way, when it's NOT "TX_PKT_SPLIT_RND", it will maintain the
> > old behavior that hdrs may cross several segs which makes more sense.
> >
> > >
> > > >
> > > > As for timestamp, maybe refer to "pkt_copy_split" in csumonly.c is
> > > > better? Copy the extra to the last segment if it's not enough. Not
> > > > sure how to deal with this issue better.
> > > >
> > > > >  	 */
> > > > >  	tx_pkt_len = 0;
> > > > > +	tx_pkt_nb_min_segs = 0;
> > > > >  	for (i = 0; i < nb_segs; i++) {
> > > > >  		if (seg_lengths[i] > mbuf_data_size[0]) {
> > > > >  			fprintf(stderr,
> > > > > @@ -3849,11 +3850,16 @@ struct igb_ring_desc_16_bytes {
> > > > >  			return;
> > > > >  		}
> > > > >  		tx_pkt_len = (uint16_t)(tx_pkt_len + seg_lengths[i]);
> > > > > +
> > > > > +		if (!tx_pkt_nb_min_segs &&
> > > > > +		    tx_pkt_len >= (sizeof(struct rte_ether_hdr) + 20 + 8 +
> 16))
> > > > > +			tx_pkt_nb_min_segs = i + 1;
> > > > >  	}
> > > > > -	if (tx_pkt_len < (sizeof(struct rte_ether_hdr) + 20 + 8)) {
> > > > > +
> > > > > +	if (!tx_pkt_nb_min_segs) {
> > > > >  		fprintf(stderr, "total packet length=%u < %d - give up\n",
> > > > > -				(unsigned) tx_pkt_len,
> > > > > -				(int)(sizeof(struct rte_ether_hdr) + 20 + 8));
> > > > > +			(unsigned int) tx_pkt_len,
> > > > > +			(int)(sizeof(struct rte_ether_hdr) + 20 + 8 + 16));
> > > > >  		return;
> > > > >  	}
> > > > >
> > > > > diff --git a/app/test-pmd/testpmd.c b/app/test-pmd/testpmd.c
> > > > > index
> > > > > 6cbe9ba..c496e59 100644
> > > > > --- a/app/test-pmd/testpmd.c
> > > > > +++ b/app/test-pmd/testpmd.c
> > > > > @@ -232,6 +232,11 @@ struct fwd_engine * fwd_engines[] = {  };
> > > > > uint8_t tx_pkt_nb_segs = 1; /**< Number of segments in TXONLY
> > > > > packets */
> > > > >
> > > > > +/**< Minimum number of segments in TXONLY packets to
> > > > > +accommodate all packet
> > > > > + * headers.
> > > > > + */
> > > > > +uint8_t  tx_pkt_nb_min_segs = 1;
> > > > > +
> > > > >  enum tx_pkt_split tx_pkt_split = TX_PKT_SPLIT_OFF;  /**< Split
> > > > > policy for packets to TX. */
> > > > >
> > > > > diff --git a/app/test-pmd/testpmd.h b/app/test-pmd/testpmd.h
> > > > > index
> > > > > 16a3598..f5bc427 100644
> > > > > --- a/app/test-pmd/testpmd.h
> > > > > +++ b/app/test-pmd/testpmd.h
> > > > > @@ -464,6 +464,11 @@ enum dcb_mode_enable  extern uint16_t
> > > > > tx_pkt_length; /**< Length of TXONLY packet */  extern uint16_t
> > > > > tx_pkt_seg_lengths[RTE_MAX_SEGS_PER_PKT]; /**< Seg. lengths */
> > > > > extern uint8_t  tx_pkt_nb_segs; /**< Number of segments in TX
> > > > > packets */
> > > > > +
> > > > > +/**< Minimum number of segments in TXONLY packets to
> > > > > +accommodate all packet
> > > > > + * headers.
> > > > > + */
> > > > > +extern uint8_t  tx_pkt_nb_min_segs;
> > > > >  extern uint32_t tx_pkt_times_intra;  extern uint32_t
> > > > > tx_pkt_times_inter;
> > > > >
> > > > > diff --git a/app/test-pmd/txonly.c b/app/test-pmd/txonly.c index
> > > > > aed820f..27e4458 100644
> > > > > --- a/app/test-pmd/txonly.c
> > > > > +++ b/app/test-pmd/txonly.c
> > > > > @@ -195,8 +195,11 @@
> > > > >  	uint32_t nb_segs, pkt_len;
> > > > >  	uint8_t i;
> > > > >
> > > > > -	if (unlikely(tx_pkt_split == TX_PKT_SPLIT_RND))
> > > > > -		nb_segs = rte_rand() % tx_pkt_nb_segs + 1;
> > > > > +	if (unlikely(tx_pkt_split == TX_PKT_SPLIT_RND) &&
> > > > > +	    tx_pkt_nb_segs > tx_pkt_nb_min_segs)
> > > > > +		nb_segs = rte_rand() %
> > > > > +			  (tx_pkt_nb_segs - tx_pkt_nb_min_segs + 1) +
> > > > > +			  tx_pkt_nb_min_segs;
> > > > >  	else
> > > > >  		nb_segs = tx_pkt_nb_segs;
> > > > >
> > > > > --
> > > > > 1.8.3.1



* [dpdk-dev] [PATCH v2 1/2] app/testpmd: update forward engine beginning
  2021-09-02  8:20 [dpdk-dev] [PATCH] app/testpmd: fix random number of Tx segments Alvin Zhang
  2021-09-06  8:58 ` Li, Xiaoyun
@ 2021-09-17  1:39 ` Alvin Zhang
  2021-09-17  1:39   ` [dpdk-dev] [PATCH v2 2/2] app/testpmd: fix txonly forwarding Alvin Zhang
  2021-09-18  3:06   ` [dpdk-dev] [PATCH v3 1/2] app/testpmd: update forward engine beginning Alvin Zhang
  1 sibling, 2 replies; 24+ messages in thread
From: Alvin Zhang @ 2021-09-17  1:39 UTC (permalink / raw)
  To: xiaoyun.li, konstantin.ananyev; +Cc: dev, Alvin Zhang

For each forward engine, there may be some special conditions that
must be met before forwarding runs.

Adding checks for these conditions at configuration time is not
suitable, because one condition may rely on multiple configurations,
and the conditions required by each forward engine are not general.

The best solution is for each forward engine to have a callback that
checks whether these conditions are met, so that testpmd can call it
to determine whether forwarding can be started.

There was already a void callback 'port_fwd_begin' in the forward
engine that did some initialization for forwarding. This patch updates
its return type so that checks can be added to it to confirm whether
forwarding can be started. In addition, this patch moves the call of
the callback up to before any forwarding-related status is set.

Signed-off-by: Alvin Zhang <alvinx.zhang@intel.com>
---
 app/test-pmd/ieee1588fwd.c |  3 ++-
 app/test-pmd/noisy_vnf.c   |  4 +++-
 app/test-pmd/testpmd.c     | 38 ++++++++++++++++++++++++++------------
 app/test-pmd/testpmd.h     |  2 +-
 app/test-pmd/txonly.c      |  3 ++-
 5 files changed, 34 insertions(+), 16 deletions(-)

diff --git a/app/test-pmd/ieee1588fwd.c b/app/test-pmd/ieee1588fwd.c
index 034f238..81624a7 100644
--- a/app/test-pmd/ieee1588fwd.c
+++ b/app/test-pmd/ieee1588fwd.c
@@ -198,10 +198,11 @@ struct ptpv2_msg {
 	port_ieee1588_tx_timestamp_check(fs->rx_port);
 }
 
-static void
+static int
 port_ieee1588_fwd_begin(portid_t pi)
 {
 	rte_eth_timesync_enable(pi);
+	return 0;
 }
 
 static void
diff --git a/app/test-pmd/noisy_vnf.c b/app/test-pmd/noisy_vnf.c
index 382a4c2..e4434be 100644
--- a/app/test-pmd/noisy_vnf.c
+++ b/app/test-pmd/noisy_vnf.c
@@ -231,7 +231,7 @@ struct noisy_config {
 	rte_free(noisy_cfg[pi]);
 }
 
-static void
+static int
 noisy_fwd_begin(portid_t pi)
 {
 	struct noisy_config *n;
@@ -273,6 +273,8 @@ struct noisy_config {
 		rte_exit(EXIT_FAILURE,
 			 "--noisy-lkup-memory-size must be > 0\n");
 	}
+
+	return 0;
 }
 
 struct fwd_engine noisy_vnf_engine = {
diff --git a/app/test-pmd/testpmd.c b/app/test-pmd/testpmd.c
index 97ae52e..0345b2e 100644
--- a/app/test-pmd/testpmd.c
+++ b/app/test-pmd/testpmd.c
@@ -2172,16 +2172,10 @@ struct extmem_param {
 static void
 launch_packet_forwarding(lcore_function_t *pkt_fwd_on_lcore)
 {
-	port_fwd_begin_t port_fwd_begin;
 	unsigned int i;
 	unsigned int lc_id;
 	int diag;
 
-	port_fwd_begin = cur_fwd_config.fwd_eng->port_fwd_begin;
-	if (port_fwd_begin != NULL) {
-		for (i = 0; i < cur_fwd_config.nb_fwd_ports; i++)
-			(*port_fwd_begin)(fwd_ports_ids[i]);
-	}
 	for (i = 0; i < cur_fwd_config.nb_fwd_lcores; i++) {
 		lc_id = fwd_lcores_cpuids[i];
 		if ((interactive == 0) || (lc_id != rte_lcore_id())) {
@@ -2227,10 +2221,35 @@ struct extmem_param {
 		fprintf(stderr, "Packet forwarding already started\n");
 		return;
 	}
-	test_done = 0;
 
 	fwd_config_setup();
 
+	port_fwd_begin = cur_fwd_config.fwd_eng->port_fwd_begin;
+	if (port_fwd_begin != NULL) {
+		for (i = 0; i < cur_fwd_config.nb_fwd_ports; i++) {
+			if (port_fwd_begin(fwd_ports_ids[i])) {
+				fprintf(stderr,
+					"Packet forwarding not ready\n");
+				return;
+			}
+		}
+	}
+
+	if (with_tx_first) {
+		port_fwd_begin = tx_only_engine.port_fwd_begin;
+		if (port_fwd_begin != NULL) {
+			for (i = 0; i < cur_fwd_config.nb_fwd_ports; i++) {
+				if (port_fwd_begin(fwd_ports_ids[i])) {
+					fprintf(stderr,
+						"Packet forwarding not ready\n");
+					return;
+				}
+			}
+		}
+	}
+
+	test_done = 0;
+
 	if(!no_flush_rx)
 		flush_fwd_rx_queues();
 
@@ -2239,11 +2258,6 @@ struct extmem_param {
 
 	fwd_stats_reset();
 	if (with_tx_first) {
-		port_fwd_begin = tx_only_engine.port_fwd_begin;
-		if (port_fwd_begin != NULL) {
-			for (i = 0; i < cur_fwd_config.nb_fwd_ports; i++)
-				(*port_fwd_begin)(fwd_ports_ids[i]);
-		}
 		while (with_tx_first--) {
 			launch_packet_forwarding(
 					run_one_txonly_burst_on_core);
diff --git a/app/test-pmd/testpmd.h b/app/test-pmd/testpmd.h
index 5863b2f..e9d9db0 100644
--- a/app/test-pmd/testpmd.h
+++ b/app/test-pmd/testpmd.h
@@ -268,7 +268,7 @@ struct fwd_lcore {
  *     Forwards packets unchanged on the same port.
  *     Check that sent IEEE1588 PTP packets are timestamped by the hardware.
  */
-typedef void (*port_fwd_begin_t)(portid_t pi);
+typedef int (*port_fwd_begin_t)(portid_t pi);
 typedef void (*port_fwd_end_t)(portid_t pi);
 typedef void (*packet_fwd_t)(struct fwd_stream *fs);
 
diff --git a/app/test-pmd/txonly.c b/app/test-pmd/txonly.c
index aed820f..386a4ff 100644
--- a/app/test-pmd/txonly.c
+++ b/app/test-pmd/txonly.c
@@ -435,7 +435,7 @@
 	get_end_cycles(fs, start_tsc);
 }
 
-static void
+static int
 tx_only_begin(portid_t pi)
 {
 	uint16_t pkt_data_len;
@@ -467,6 +467,7 @@
 		timestamp_init_req++;
 	/* Make sure all settings are visible on forwarding cores.*/
 	rte_wmb();
+	return 0;
 }
 
 struct fwd_engine tx_only_engine = {
-- 
1.8.3.1



* [dpdk-dev] [PATCH v2 2/2] app/testpmd: fix txonly forwording
  2021-09-17  1:39 ` [dpdk-dev] [PATCH v2 1/2] app/testpmd: update forward engine beginning Alvin Zhang
@ 2021-09-17  1:39   ` Alvin Zhang
  2021-09-18  3:06   ` [dpdk-dev] [PATCH v3 1/2] app/testpmd: update forward engine beginning Alvin Zhang
  1 sibling, 0 replies; 24+ messages in thread
From: Alvin Zhang @ 2021-09-17  1:39 UTC (permalink / raw)
  To: xiaoyun.li, konstantin.ananyev; +Cc: dev, Alvin Zhang, stable

When a random number of Tx segments is enabled, the actual number
of segments may be only one, so the first segment of the Tx
packets must accommodate a complete outgoing Eth/IP/UDP packet.

Besides, if multiple flows are enabled, the forwarding updates
the IP and UDP headers, and these headers shouldn't cross
segments. This also requires that the first Tx segment can
accommodate a complete Eth/IP/UDP packet.

In addition, if time stamp is enabled, the forwarding need more
Tx segment space for time stamp information.

This patch adds checks in beginning of forward engine to make sure
all above conditions are meet.
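
For reference, the arithmetic behind these checks as a standalone
sketch, with the header sizes written out as constants (the real
code derives them from the rte_*_hdr struct sizes):

#include <stdint.h>

#define ETH_IP_UDP_HDR_LEN	(14 + 20 + 8)	/* Eth + IPv4 + UDP */
#define TS_MARK_LEN		16		/* timestamp mark */

/* With randomized segment sizes the packet may end up with a single
 * segment, so the first configured segment must hold all headers.
 */
static int
first_seg_holds_headers(uint16_t first_seg_len, int ts_enabled)
{
	uint16_t need = ETH_IP_UDP_HDR_LEN + (ts_enabled ? TS_MARK_LEN : 0);

	return first_seg_len >= need;
}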

Bugzilla ID: 797
Fixes: 79bec05b32b7 ("app/testpmd: add ability to split outgoing packets")
Cc: stable@dpdk.org

Signed-off-by: Alvin Zhang <alvinx.zhang@intel.com>
---
 app/test-pmd/txonly.c | 67 ++++++++++++++++++++++++++++++++++++++++++---------
 1 file changed, 55 insertions(+), 12 deletions(-)

diff --git a/app/test-pmd/txonly.c b/app/test-pmd/txonly.c
index 386a4ff..40f4075 100644
--- a/app/test-pmd/txonly.c
+++ b/app/test-pmd/txonly.c
@@ -40,6 +40,13 @@
 
 #include "testpmd.h"
 
+struct tx_timestamp {
+	rte_be32_t signature;
+	rte_be16_t pkt_idx;
+	rte_be16_t queue_idx;
+	rte_be64_t ts;
+};
+
 /* use RFC863 Discard Protocol */
 uint16_t tx_udp_src_port = 9;
 uint16_t tx_udp_dst_port = 9;
@@ -257,12 +264,7 @@
 
 	if (unlikely(timestamp_enable)) {
 		uint64_t skew = RTE_PER_LCORE(timestamp_qskew);
-		struct {
-			rte_be32_t signature;
-			rte_be16_t pkt_idx;
-			rte_be16_t queue_idx;
-			rte_be64_t ts;
-		} timestamp_mark;
+		struct tx_timestamp timestamp_mark;
 
 		if (unlikely(timestamp_init_req !=
 				RTE_PER_LCORE(timestamp_idone))) {
@@ -438,13 +440,23 @@
 static int
 tx_only_begin(portid_t pi)
 {
-	uint16_t pkt_data_len;
+	uint16_t pkt_hdr_len, pkt_data_len;
 	int dynf;
 
-	pkt_data_len = (uint16_t) (tx_pkt_length - (
-					sizeof(struct rte_ether_hdr) +
-					sizeof(struct rte_ipv4_hdr) +
-					sizeof(struct rte_udp_hdr)));
+	pkt_hdr_len = (uint16_t)(sizeof(struct rte_ether_hdr) +
+				 sizeof(struct rte_ipv4_hdr) +
+				 sizeof(struct rte_udp_hdr));
+	pkt_data_len = tx_pkt_length - pkt_hdr_len;
+
+	if ((tx_pkt_split == TX_PKT_SPLIT_RND || txonly_multi_flow) &&
+	    tx_pkt_seg_lengths[0] < pkt_hdr_len) {
+		TESTPMD_LOG(ERR,
+			    "Random segment number or multiple flow enabled,"
+			    " but tx_pkt_seg_lengths[0] %u < %u (needed)\n",
+			    tx_pkt_seg_lengths[0], pkt_hdr_len);
+		return -EINVAL;
+	}
+
 	setup_pkt_udp_ip_headers(&pkt_ip_hdr, &pkt_udp_hdr, pkt_data_len);
 
 	timestamp_enable = false;
@@ -463,8 +475,39 @@
 			   timestamp_mask &&
 			   timestamp_off >= 0 &&
 			   !rte_eth_read_clock(pi, &timestamp_initial[pi]);
-	if (timestamp_enable)
+
+	if (timestamp_enable) {
+		pkt_hdr_len += sizeof(struct timestamp);
+
+		if (tx_pkt_split == TX_PKT_SPLIT_RND) {
+			if (tx_pkt_seg_lengths[0] < pkt_hdr_len) {
+				TESTPMD_LOG(ERR,
+					    "Time stamp and random segment number enabled,"
+					    " but tx_pkt_seg_lengths[0] %u < %u (needed)\n",
+					    tx_pkt_seg_lengths[0], pkt_hdr_len);
+				return -EINVAL;
+			}
+		} else {
+			uint16_t total = 0;
+			uint8_t i;
+
+			for (i = 0; i < tx_pkt_nb_segs; i++) {
+				total += tx_pkt_seg_lengths[i];
+				if (total >= pkt_hdr_len)
+					break;
+			}
+
+			if (total < pkt_hdr_len) {
+				TESTPMD_LOG(ERR,
+					    "Not enough Tx segment space for time stamp info."
+					    " total %u < %u (needed)\n",
+					    total, pkt_hdr_len);
+				return -EINVAL;
+			}
+		}
 		timestamp_init_req++;
+	}
+
 	/* Make sure all settings are visible on forwarding cores.*/
 	rte_wmb();
 	return 0;
-- 
1.8.3.1



* [dpdk-dev] [PATCH v3 1/2] app/testpmd: update forward engine beginning
  2021-09-17  1:39 ` [dpdk-dev] [PATCH v2 1/2] app/testpmd: update forward engine beginning Alvin Zhang
  2021-09-17  1:39   ` [dpdk-dev] [PATCH v2 2/2] app/testpmd: fix txonly forwording Alvin Zhang
@ 2021-09-18  3:06   ` Alvin Zhang
  2021-09-18  3:06     ` [dpdk-dev] [PATCH v3 2/2] app/testpmd: fix txonly forwording Alvin Zhang
                       ` (2 more replies)
  1 sibling, 3 replies; 24+ messages in thread
From: Alvin Zhang @ 2021-09-18  3:06 UTC (permalink / raw)
  To: xiaoyun.li, konstantin.ananyev; +Cc: dev, Alvin Zhang

For each forward engine, there may be some special conditions
must be meet before the forwarding run.

Adding checks for these conditions in configuring is not suitable,
because one condition may rely on multiple configurations, and the
conditions required by each forward engine is not general.

The best solution is each forward engine has a callback to check
whether these conditions are meet, and then testpmd can call the
callback to determine whether the forwarding can be started.

There was a void callback 'port_fwd_begin' in forward engine,
it did some initialization for forwarding, this patch updates it's
return type, then we can add some checks in it to confirm whether
the forwarding can be started. In addition, this patch puts the
calling part of the callback up to before some forwarding related
status being set.
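
On the caller side, the change in start_packet_forwarding() boils
down to the following pattern (condensed from the diff below):

port_fwd_begin = cur_fwd_config.fwd_eng->port_fwd_begin;
if (port_fwd_begin != NULL) {
	for (i = 0; i < cur_fwd_config.nb_fwd_ports; i++) {
		if (port_fwd_begin(fwd_ports_ids[i]) != 0) {
			fprintf(stderr, "Packet forwarding not ready\n");
			/* abort before test_done and the stats change */
			return;
		}
	}
}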

Signed-off-by: Alvin Zhang <alvinx.zhang@intel.com>
---
 app/test-pmd/flowgen.c     |  3 ++-
 app/test-pmd/ieee1588fwd.c |  3 ++-
 app/test-pmd/noisy_vnf.c   |  4 +++-
 app/test-pmd/testpmd.c     | 38 ++++++++++++++++++++++++++------------
 app/test-pmd/testpmd.h     |  2 +-
 app/test-pmd/txonly.c      |  3 ++-
 6 files changed, 36 insertions(+), 17 deletions(-)

diff --git a/app/test-pmd/flowgen.c b/app/test-pmd/flowgen.c
index 0d3664a..83234fc 100644
--- a/app/test-pmd/flowgen.c
+++ b/app/test-pmd/flowgen.c
@@ -201,10 +201,11 @@
 	get_end_cycles(fs, start_tsc);
 }
 
-static void
+static int
 flowgen_begin(portid_t pi)
 {
 	printf("  number of flows for port %u: %d\n", pi, nb_flows_flowgen);
+	return 0;
 }
 
 struct fwd_engine flow_gen_engine = {
diff --git a/app/test-pmd/ieee1588fwd.c b/app/test-pmd/ieee1588fwd.c
index 034f238..81624a7 100644
--- a/app/test-pmd/ieee1588fwd.c
+++ b/app/test-pmd/ieee1588fwd.c
@@ -198,10 +198,11 @@ struct ptpv2_msg {
 	port_ieee1588_tx_timestamp_check(fs->rx_port);
 }
 
-static void
+static int
 port_ieee1588_fwd_begin(portid_t pi)
 {
 	rte_eth_timesync_enable(pi);
+	return 0;
 }
 
 static void
diff --git a/app/test-pmd/noisy_vnf.c b/app/test-pmd/noisy_vnf.c
index 382a4c2..e4434be 100644
--- a/app/test-pmd/noisy_vnf.c
+++ b/app/test-pmd/noisy_vnf.c
@@ -231,7 +231,7 @@ struct noisy_config {
 	rte_free(noisy_cfg[pi]);
 }
 
-static void
+static int
 noisy_fwd_begin(portid_t pi)
 {
 	struct noisy_config *n;
@@ -273,6 +273,8 @@ struct noisy_config {
 		rte_exit(EXIT_FAILURE,
 			 "--noisy-lkup-memory-size must be > 0\n");
 	}
+
+	return 0;
 }
 
 struct fwd_engine noisy_vnf_engine = {
diff --git a/app/test-pmd/testpmd.c b/app/test-pmd/testpmd.c
index 97ae52e..0345b2e 100644
--- a/app/test-pmd/testpmd.c
+++ b/app/test-pmd/testpmd.c
@@ -2172,16 +2172,10 @@ struct extmem_param {
 static void
 launch_packet_forwarding(lcore_function_t *pkt_fwd_on_lcore)
 {
-	port_fwd_begin_t port_fwd_begin;
 	unsigned int i;
 	unsigned int lc_id;
 	int diag;
 
-	port_fwd_begin = cur_fwd_config.fwd_eng->port_fwd_begin;
-	if (port_fwd_begin != NULL) {
-		for (i = 0; i < cur_fwd_config.nb_fwd_ports; i++)
-			(*port_fwd_begin)(fwd_ports_ids[i]);
-	}
 	for (i = 0; i < cur_fwd_config.nb_fwd_lcores; i++) {
 		lc_id = fwd_lcores_cpuids[i];
 		if ((interactive == 0) || (lc_id != rte_lcore_id())) {
@@ -2227,10 +2221,35 @@ struct extmem_param {
 		fprintf(stderr, "Packet forwarding already started\n");
 		return;
 	}
-	test_done = 0;
 
 	fwd_config_setup();
 
+	port_fwd_begin = cur_fwd_config.fwd_eng->port_fwd_begin;
+	if (port_fwd_begin != NULL) {
+		for (i = 0; i < cur_fwd_config.nb_fwd_ports; i++) {
+			if (port_fwd_begin(fwd_ports_ids[i])) {
+				fprintf(stderr,
+					"Packet forwarding not ready\n");
+				return;
+			}
+		}
+	}
+
+	if (with_tx_first) {
+		port_fwd_begin = tx_only_engine.port_fwd_begin;
+		if (port_fwd_begin != NULL) {
+			for (i = 0; i < cur_fwd_config.nb_fwd_ports; i++) {
+				if (port_fwd_begin(fwd_ports_ids[i])) {
+					fprintf(stderr,
+						"Packet forwarding not ready\n");
+					return;
+				}
+			}
+		}
+	}
+
+	test_done = 0;
+
 	if(!no_flush_rx)
 		flush_fwd_rx_queues();
 
@@ -2239,11 +2258,6 @@ struct extmem_param {
 
 	fwd_stats_reset();
 	if (with_tx_first) {
-		port_fwd_begin = tx_only_engine.port_fwd_begin;
-		if (port_fwd_begin != NULL) {
-			for (i = 0; i < cur_fwd_config.nb_fwd_ports; i++)
-				(*port_fwd_begin)(fwd_ports_ids[i]);
-		}
 		while (with_tx_first--) {
 			launch_packet_forwarding(
 					run_one_txonly_burst_on_core);
diff --git a/app/test-pmd/testpmd.h b/app/test-pmd/testpmd.h
index 5863b2f..e9d9db0 100644
--- a/app/test-pmd/testpmd.h
+++ b/app/test-pmd/testpmd.h
@@ -268,7 +268,7 @@ struct fwd_lcore {
  *     Forwards packets unchanged on the same port.
  *     Check that sent IEEE1588 PTP packets are timestamped by the hardware.
  */
-typedef void (*port_fwd_begin_t)(portid_t pi);
+typedef int (*port_fwd_begin_t)(portid_t pi);
 typedef void (*port_fwd_end_t)(portid_t pi);
 typedef void (*packet_fwd_t)(struct fwd_stream *fs);
 
diff --git a/app/test-pmd/txonly.c b/app/test-pmd/txonly.c
index aed820f..386a4ff 100644
--- a/app/test-pmd/txonly.c
+++ b/app/test-pmd/txonly.c
@@ -435,7 +435,7 @@
 	get_end_cycles(fs, start_tsc);
 }
 
-static void
+static int
 tx_only_begin(portid_t pi)
 {
 	uint16_t pkt_data_len;
@@ -467,6 +467,7 @@
 		timestamp_init_req++;
 	/* Make sure all settings are visible on forwarding cores.*/
 	rte_wmb();
+	return 0;
 }
 
 struct fwd_engine tx_only_engine = {
-- 
1.8.3.1



* [dpdk-dev] [PATCH v3 2/2] app/testpmd: fix txonly forwording
  2021-09-18  3:06   ` [dpdk-dev] [PATCH v3 1/2] app/testpmd: update forward engine beginning Alvin Zhang
@ 2021-09-18  3:06     ` Alvin Zhang
  2021-09-18  8:20       ` Li, Xiaoyun
  2021-09-18  8:31     ` [dpdk-dev] [PATCH v3 1/2] app/testpmd: update forward engine beginning Li, Xiaoyun
  2021-09-22  2:49     ` [dpdk-dev] [PATCH v4 " Alvin Zhang
  2 siblings, 1 reply; 24+ messages in thread
From: Alvin Zhang @ 2021-09-18  3:06 UTC (permalink / raw)
  To: xiaoyun.li, konstantin.ananyev; +Cc: dev, Alvin Zhang, stable

When a random number of Tx segments is enabled, the actual number
of segments may be only one, so the first segment of the Tx
packets must accommodate a complete outgoing Eth/IP/UDP packet.

Besides, if multiple flows are enabled, the forwarding updates
the IP and UDP headers, and these headers shouldn't cross
segments. This also requires that the first Tx segment can
accommodate a complete Eth/IP/UDP packet.

In addition, if time stamp is enabled, the forwarding need more
Tx segment space for time stamp information.

This patch adds checks in beginning of forward engine to make sure
all above conditions are meet.

Bugzilla ID: 797
Fixes: 79bec05b32b7 ("app/testpmd: add ability to split outgoing packets")
Cc: stable@dpdk.org

Signed-off-by: Alvin Zhang <alvinx.zhang@intel.com>
---
 app/test-pmd/txonly.c | 67 ++++++++++++++++++++++++++++++++++++++++++---------
 1 file changed, 55 insertions(+), 12 deletions(-)

diff --git a/app/test-pmd/txonly.c b/app/test-pmd/txonly.c
index 386a4ff..40f4075 100644
--- a/app/test-pmd/txonly.c
+++ b/app/test-pmd/txonly.c
@@ -40,6 +40,13 @@
 
 #include "testpmd.h"
 
+struct tx_timestamp {
+	rte_be32_t signature;
+	rte_be16_t pkt_idx;
+	rte_be16_t queue_idx;
+	rte_be64_t ts;
+};
+
 /* use RFC863 Discard Protocol */
 uint16_t tx_udp_src_port = 9;
 uint16_t tx_udp_dst_port = 9;
@@ -257,12 +264,7 @@
 
 	if (unlikely(timestamp_enable)) {
 		uint64_t skew = RTE_PER_LCORE(timestamp_qskew);
-		struct {
-			rte_be32_t signature;
-			rte_be16_t pkt_idx;
-			rte_be16_t queue_idx;
-			rte_be64_t ts;
-		} timestamp_mark;
+		struct tx_timestamp timestamp_mark;
 
 		if (unlikely(timestamp_init_req !=
 				RTE_PER_LCORE(timestamp_idone))) {
@@ -438,13 +440,23 @@
 static int
 tx_only_begin(portid_t pi)
 {
-	uint16_t pkt_data_len;
+	uint16_t pkt_hdr_len, pkt_data_len;
 	int dynf;
 
-	pkt_data_len = (uint16_t) (tx_pkt_length - (
-					sizeof(struct rte_ether_hdr) +
-					sizeof(struct rte_ipv4_hdr) +
-					sizeof(struct rte_udp_hdr)));
+	pkt_hdr_len = (uint16_t)(sizeof(struct rte_ether_hdr) +
+				 sizeof(struct rte_ipv4_hdr) +
+				 sizeof(struct rte_udp_hdr));
+	pkt_data_len = tx_pkt_length - pkt_hdr_len;
+
+	if ((tx_pkt_split == TX_PKT_SPLIT_RND || txonly_multi_flow) &&
+	    tx_pkt_seg_lengths[0] < pkt_hdr_len) {
+		TESTPMD_LOG(ERR,
+			    "Random segment number or multiple flow enabled,"
+			    " but tx_pkt_seg_lengths[0] %u < %u (needed)\n",
+			    tx_pkt_seg_lengths[0], pkt_hdr_len);
+		return -EINVAL;
+	}
+
 	setup_pkt_udp_ip_headers(&pkt_ip_hdr, &pkt_udp_hdr, pkt_data_len);
 
 	timestamp_enable = false;
@@ -463,8 +475,39 @@
 			   timestamp_mask &&
 			   timestamp_off >= 0 &&
 			   !rte_eth_read_clock(pi, &timestamp_initial[pi]);
-	if (timestamp_enable)
+
+	if (timestamp_enable) {
+		pkt_hdr_len += sizeof(struct timestamp);
+
+		if (tx_pkt_split == TX_PKT_SPLIT_RND) {
+			if (tx_pkt_seg_lengths[0] < pkt_hdr_len) {
+				TESTPMD_LOG(ERR,
+					    "Time stamp and random segment number enabled,"
+					    " but tx_pkt_seg_lengths[0] %u < %u (needed)\n",
+					    tx_pkt_seg_lengths[0], pkt_hdr_len);
+				return -EINVAL;
+			}
+		} else {
+			uint16_t total = 0;
+			uint8_t i;
+
+			for (i = 0; i < tx_pkt_nb_segs; i++) {
+				total += tx_pkt_seg_lengths[i];
+				if (total >= pkt_hdr_len)
+					break;
+			}
+
+			if (total < pkt_hdr_len) {
+				TESTPMD_LOG(ERR,
+					    "Not enough Tx segment space for time stamp info."
+					    " total %u < %u (needed)\n",
+					    total, pkt_hdr_len);
+				return -EINVAL;
+			}
+		}
 		timestamp_init_req++;
+	}
+
 	/* Make sure all settings are visible on forwarding cores.*/
 	rte_wmb();
 	return 0;
-- 
1.8.3.1



* Re: [dpdk-dev] [PATCH v3 2/2] app/testpmd: fix txonly forwording
  2021-09-18  3:06     ` [dpdk-dev] [PATCH v3 2/2] app/testpmd: fix txonly forwording Alvin Zhang
@ 2021-09-18  8:20       ` Li, Xiaoyun
  0 siblings, 0 replies; 24+ messages in thread
From: Li, Xiaoyun @ 2021-09-18  8:20 UTC (permalink / raw)
  To: Zhang, AlvinX, Ananyev, Konstantin; +Cc: dev, stable

Hi

> -----Original Message-----
> From: Zhang, AlvinX <alvinx.zhang@intel.com>
> Sent: Saturday, September 18, 2021 11:07
> To: Li, Xiaoyun <xiaoyun.li@intel.com>; Ananyev, Konstantin
> <konstantin.ananyev@intel.com>
> Cc: dev@dpdk.org; Zhang, AlvinX <alvinx.zhang@intel.com>; stable@dpdk.org
> Subject: [PATCH v3 2/2] app/testpmd: fix txonly forwording
> 
> When a random number of Tx segments is enabled, the actual number of
> segments may be only one, so the first segment of the Tx packets must
> accommodate a complete outgoing Eth/IP/UDP packet.
> 
> Besides, if multiple flows are enabled, the forwarding updates the IP and
> UDP headers, and these headers shouldn't cross segments.
> This also requires that the first Tx segment can accommodate a complete
> Eth/IP/UDP packet.
> 
> In addition, if time stamp is enabled, the forwarding need more Tx segment

needs

> space for time stamp information.
> 
> This patch adds checks in beginning of forward engine to make sure all above
> conditions are meet.

are met

> 
> Bugzilla ID: 797
> Fixes: 79bec05b32b7 ("app/testpmd: add ability to split outgoing packets")
> Cc: stable@dpdk.org

This patch is based on the 1st patch of this patch set, so if you only send this patch to stable, there will be compatibility issues.
Either send both patches to stable, or just fix this for the future (don't cc stable).

But the patch looks good to me.

> 
> Signed-off-by: Alvin Zhang <alvinx.zhang@intel.com>
> ---
>  app/test-pmd/txonly.c | 67
> ++++++++++++++++++++++++++++++++++++++++++---------
>  1 file changed, 55 insertions(+), 12 deletions(-)
> 


* Re: [dpdk-dev] [PATCH v3 1/2] app/testpmd: update forward engine beginning
  2021-09-18  3:06   ` [dpdk-dev] [PATCH v3 1/2] app/testpmd: update forward engine beginning Alvin Zhang
  2021-09-18  3:06     ` [dpdk-dev] [PATCH v3 2/2] app/testpmd: fix txonly forwording Alvin Zhang
@ 2021-09-18  8:31     ` Li, Xiaoyun
  2021-09-18  8:50       ` Zhang, AlvinX
  2021-09-22  2:49     ` [dpdk-dev] [PATCH v4 " Alvin Zhang
  2 siblings, 1 reply; 24+ messages in thread
From: Li, Xiaoyun @ 2021-09-18  8:31 UTC (permalink / raw)
  To: Zhang, AlvinX, Ananyev, Konstantin; +Cc: dev

Hi

> -----Original Message-----
> From: Zhang, AlvinX <alvinx.zhang@intel.com>
> Sent: Saturday, September 18, 2021 11:07
> To: Li, Xiaoyun <xiaoyun.li@intel.com>; Ananyev, Konstantin
> <konstantin.ananyev@intel.com>
> Cc: dev@dpdk.org; Zhang, AlvinX <alvinx.zhang@intel.com>
> Subject: [PATCH v3 1/2] app/testpmd: update forward engine beginning
> 
> For each forward engine, there may be some special conditions must be meet

met

> before the forwarding run.

is running / runs

> 
> Adding checks for these conditions in configuring is not suitable, because one
> condition may rely on multiple configurations, and the conditions required by
> each forward engine is not general.
> 
> The best solution is each forward engine has a callback to check whether these
> conditions are meet, and then testpmd can call the callback to determine

met

> whether the forwarding can be started.
> 
> There was a void callback 'port_fwd_begin' in forward engine, it did some
> initialization for forwarding, this patch updates it's return type, then we can add

its return value then we can

> some checks in it to confirm whether the forwarding can be started. In addition,
> this patch puts the calling part of the callback up to before some forwarding
> related status being set.

this patch calls this callback before the forwarding stats is reset and then launches the forwarding engine.

Also the cc stable issue I state in the 2nd patch.
But overall the patch looks good to me.

BRs
Xiaoyun

> 
> Signed-off-by: Alvin Zhang <alvinx.zhang@intel.com>
> ---
>  app/test-pmd/flowgen.c     |  3 ++-
>  app/test-pmd/ieee1588fwd.c |  3 ++-
>  app/test-pmd/noisy_vnf.c   |  4 +++-
>  app/test-pmd/testpmd.c     | 38 ++++++++++++++++++++++++++------------
>  app/test-pmd/testpmd.h     |  2 +-
>  app/test-pmd/txonly.c      |  3 ++-
>  6 files changed, 36 insertions(+), 17 deletions(-)
> 
> diff --git a/app/test-pmd/flowgen.c b/app/test-pmd/flowgen.c index
> 0d3664a..83234fc 100644
> --- a/app/test-pmd/flowgen.c
> +++ b/app/test-pmd/flowgen.c
> @@ -201,10 +201,11 @@
>  	get_end_cycles(fs, start_tsc);
>  }
> 
> -static void
> +static int
>  flowgen_begin(portid_t pi)
>  {
>  	printf("  number of flows for port %u: %d\n", pi, nb_flows_flowgen);
> +	return 0;
>  }
> 
>  struct fwd_engine flow_gen_engine = {
> diff --git a/app/test-pmd/ieee1588fwd.c b/app/test-pmd/ieee1588fwd.c index
> 034f238..81624a7 100644
> --- a/app/test-pmd/ieee1588fwd.c
> +++ b/app/test-pmd/ieee1588fwd.c
> @@ -198,10 +198,11 @@ struct ptpv2_msg {
>  	port_ieee1588_tx_timestamp_check(fs->rx_port);
>  }
> 
> -static void
> +static int
>  port_ieee1588_fwd_begin(portid_t pi)
>  {
>  	rte_eth_timesync_enable(pi);
> +	return 0;
>  }
> 
>  static void
> diff --git a/app/test-pmd/noisy_vnf.c b/app/test-pmd/noisy_vnf.c index
> 382a4c2..e4434be 100644
> --- a/app/test-pmd/noisy_vnf.c
> +++ b/app/test-pmd/noisy_vnf.c
> @@ -231,7 +231,7 @@ struct noisy_config {
>  	rte_free(noisy_cfg[pi]);
>  }
> 
> -static void
> +static int
>  noisy_fwd_begin(portid_t pi)
>  {
>  	struct noisy_config *n;
> @@ -273,6 +273,8 @@ struct noisy_config {
>  		rte_exit(EXIT_FAILURE,
>  			 "--noisy-lkup-memory-size must be > 0\n");
>  	}
> +
> +	return 0;
>  }
> 
>  struct fwd_engine noisy_vnf_engine = {
> diff --git a/app/test-pmd/testpmd.c b/app/test-pmd/testpmd.c index
> 97ae52e..0345b2e 100644
> --- a/app/test-pmd/testpmd.c
> +++ b/app/test-pmd/testpmd.c
> @@ -2172,16 +2172,10 @@ struct extmem_param {  static void
> launch_packet_forwarding(lcore_function_t *pkt_fwd_on_lcore)  {
> -	port_fwd_begin_t port_fwd_begin;
>  	unsigned int i;
>  	unsigned int lc_id;
>  	int diag;
> 
> -	port_fwd_begin = cur_fwd_config.fwd_eng->port_fwd_begin;
> -	if (port_fwd_begin != NULL) {
> -		for (i = 0; i < cur_fwd_config.nb_fwd_ports; i++)
> -			(*port_fwd_begin)(fwd_ports_ids[i]);
> -	}
>  	for (i = 0; i < cur_fwd_config.nb_fwd_lcores; i++) {
>  		lc_id = fwd_lcores_cpuids[i];
>  		if ((interactive == 0) || (lc_id != rte_lcore_id())) { @@ -2227,10
> +2221,35 @@ struct extmem_param {
>  		fprintf(stderr, "Packet forwarding already started\n");
>  		return;
>  	}
> -	test_done = 0;
> 
>  	fwd_config_setup();
> 
> +	port_fwd_begin = cur_fwd_config.fwd_eng->port_fwd_begin;
> +	if (port_fwd_begin != NULL) {
> +		for (i = 0; i < cur_fwd_config.nb_fwd_ports; i++) {
> +			if (port_fwd_begin(fwd_ports_ids[i])) {
> +				fprintf(stderr,
> +					"Packet forwarding not ready\n");
> +				return;
> +			}
> +		}
> +	}
> +
> +	if (with_tx_first) {
> +		port_fwd_begin = tx_only_engine.port_fwd_begin;
> +		if (port_fwd_begin != NULL) {
> +			for (i = 0; i < cur_fwd_config.nb_fwd_ports; i++) {
> +				if (port_fwd_begin(fwd_ports_ids[i])) {
> +					fprintf(stderr,
> +						"Packet forwarding not
> ready\n");
> +					return;
> +				}
> +			}
> +		}
> +	}
> +
> +	test_done = 0;
> +
>  	if(!no_flush_rx)
>  		flush_fwd_rx_queues();
> 
> @@ -2239,11 +2258,6 @@ struct extmem_param {
> 
>  	fwd_stats_reset();
>  	if (with_tx_first) {
> -		port_fwd_begin = tx_only_engine.port_fwd_begin;
> -		if (port_fwd_begin != NULL) {
> -			for (i = 0; i < cur_fwd_config.nb_fwd_ports; i++)
> -				(*port_fwd_begin)(fwd_ports_ids[i]);
> -		}
>  		while (with_tx_first--) {
>  			launch_packet_forwarding(
>  					run_one_txonly_burst_on_core);
> diff --git a/app/test-pmd/testpmd.h b/app/test-pmd/testpmd.h index
> 5863b2f..e9d9db0 100644
> --- a/app/test-pmd/testpmd.h
> +++ b/app/test-pmd/testpmd.h
> @@ -268,7 +268,7 @@ struct fwd_lcore {
>   *     Forwards packets unchanged on the same port.
>   *     Check that sent IEEE1588 PTP packets are timestamped by the hardware.
>   */
> -typedef void (*port_fwd_begin_t)(portid_t pi);
> +typedef int (*port_fwd_begin_t)(portid_t pi);
>  typedef void (*port_fwd_end_t)(portid_t pi);  typedef void
> (*packet_fwd_t)(struct fwd_stream *fs);
> 
> diff --git a/app/test-pmd/txonly.c b/app/test-pmd/txonly.c index
> aed820f..386a4ff 100644
> --- a/app/test-pmd/txonly.c
> +++ b/app/test-pmd/txonly.c
> @@ -435,7 +435,7 @@
>  	get_end_cycles(fs, start_tsc);
>  }
> 
> -static void
> +static int
>  tx_only_begin(portid_t pi)
>  {
>  	uint16_t pkt_data_len;
> @@ -467,6 +467,7 @@
>  		timestamp_init_req++;
>  	/* Make sure all settings are visible on forwarding cores.*/
>  	rte_wmb();
> +	return 0;
>  }
> 
>  struct fwd_engine tx_only_engine = {
> --
> 1.8.3.1



* Re: [dpdk-dev] [PATCH v3 1/2] app/testpmd: update forward engine beginning
  2021-09-18  8:31     ` [dpdk-dev] [PATCH v3 1/2] app/testpmd: update forward engine beginning Li, Xiaoyun
@ 2021-09-18  8:50       ` Zhang, AlvinX
  0 siblings, 0 replies; 24+ messages in thread
From: Zhang, AlvinX @ 2021-09-18  8:50 UTC (permalink / raw)
  To: Li, Xiaoyun, Ananyev, Konstantin; +Cc: dev

> -----Original Message-----
> From: Li, Xiaoyun <xiaoyun.li@intel.com>
> Sent: Saturday, September 18, 2021 4:31 PM
> To: Zhang, AlvinX <alvinx.zhang@intel.com>; Ananyev, Konstantin
> <konstantin.ananyev@intel.com>
> Cc: dev@dpdk.org
> Subject: RE: [PATCH v3 1/2] app/testpmd: update forward engine beginning
> 
> Hi
> 
> > -----Original Message-----
> > From: Zhang, AlvinX <alvinx.zhang@intel.com>
> > Sent: Saturday, September 18, 2021 11:07
> > To: Li, Xiaoyun <xiaoyun.li@intel.com>; Ananyev, Konstantin
> > <konstantin.ananyev@intel.com>
> > Cc: dev@dpdk.org; Zhang, AlvinX <alvinx.zhang@intel.com>
> > Subject: [PATCH v3 1/2] app/testpmd: update forward engine beginning
> >
> > For each forward engine, there may be some special conditions must be
> > meet
> 
> met

Hi Xiaoyun,
Thanks for your attentive review. I will update them in V4.

> 
> > before the forwarding run.
> 
> is running / runs
> 
> >
> > Adding checks for these conditions in configuring is not suitable,
> > because one condition may rely on multiple configurations, and the
> > conditions required by each forward engine is not general.
> >
> > The best solution is each forward engine has a callback to check
> > whether these conditions are meet, and then testpmd can call the
> > callback to determine
> 
> met
> 
> > whether the forwarding can be started.
> >
> > There was a void callback 'port_fwd_begin' in forward engine, it did
> > some initialization for forwarding, this patch updates it's return
> > type, then we can add
> 
> its return value then we can
> 
> > some checks in it to confirm whether the forwarding can be started. In
> > addition, this patch puts the calling part of the callback up to
> > before some forwarding related status being set.
> 
> this patch calls this callback before the forwarding stats is reset and then
> launches the forwarding engine.
> 
> Also the cc stable issue I state in the 2nd patch.
> But overall the patch looks good to me.
> 
> BRs
> Xiaoyun
> 
> >
> > Signed-off-by: Alvin Zhang <alvinx.zhang@intel.com>
> > ---
> >  app/test-pmd/flowgen.c     |  3 ++-
> >  app/test-pmd/ieee1588fwd.c |  3 ++-
> >  app/test-pmd/noisy_vnf.c   |  4 +++-
> >  app/test-pmd/testpmd.c     | 38
> ++++++++++++++++++++++++++------------
> >  app/test-pmd/testpmd.h     |  2 +-
> >  app/test-pmd/txonly.c      |  3 ++-
> >  6 files changed, 36 insertions(+), 17 deletions(-)
> >
> > diff --git a/app/test-pmd/flowgen.c b/app/test-pmd/flowgen.c index
> > 0d3664a..83234fc 100644
> > --- a/app/test-pmd/flowgen.c
> > +++ b/app/test-pmd/flowgen.c
> > @@ -201,10 +201,11 @@
> >  	get_end_cycles(fs, start_tsc);
> >  }
> >
> > -static void
> > +static int
> >  flowgen_begin(portid_t pi)
> >  {
> >  	printf("  number of flows for port %u: %d\n", pi, nb_flows_flowgen);
> > +	return 0;
> >  }
> >
> >  struct fwd_engine flow_gen_engine = { diff --git
> > a/app/test-pmd/ieee1588fwd.c b/app/test-pmd/ieee1588fwd.c index
> > 034f238..81624a7 100644
> > --- a/app/test-pmd/ieee1588fwd.c
> > +++ b/app/test-pmd/ieee1588fwd.c
> > @@ -198,10 +198,11 @@ struct ptpv2_msg {
> >  	port_ieee1588_tx_timestamp_check(fs->rx_port);
> >  }
> >
> > -static void
> > +static int
> >  port_ieee1588_fwd_begin(portid_t pi)
> >  {
> >  	rte_eth_timesync_enable(pi);
> > +	return 0;
> >  }
> >
> >  static void
> > diff --git a/app/test-pmd/noisy_vnf.c b/app/test-pmd/noisy_vnf.c index
> > 382a4c2..e4434be 100644
> > --- a/app/test-pmd/noisy_vnf.c
> > +++ b/app/test-pmd/noisy_vnf.c
> > @@ -231,7 +231,7 @@ struct noisy_config {
> >  	rte_free(noisy_cfg[pi]);
> >  }
> >
> > -static void
> > +static int
> >  noisy_fwd_begin(portid_t pi)
> >  {
> >  	struct noisy_config *n;
> > @@ -273,6 +273,8 @@ struct noisy_config {
> >  		rte_exit(EXIT_FAILURE,
> >  			 "--noisy-lkup-memory-size must be > 0\n");
> >  	}
> > +
> > +	return 0;
> >  }
> >
> >  struct fwd_engine noisy_vnf_engine = { diff --git
> > a/app/test-pmd/testpmd.c b/app/test-pmd/testpmd.c index
> > 97ae52e..0345b2e 100644
> > --- a/app/test-pmd/testpmd.c
> > +++ b/app/test-pmd/testpmd.c
> > @@ -2172,16 +2172,10 @@ struct extmem_param {  static void
> > launch_packet_forwarding(lcore_function_t *pkt_fwd_on_lcore)  {
> > -	port_fwd_begin_t port_fwd_begin;
> >  	unsigned int i;
> >  	unsigned int lc_id;
> >  	int diag;
> >
> > -	port_fwd_begin = cur_fwd_config.fwd_eng->port_fwd_begin;
> > -	if (port_fwd_begin != NULL) {
> > -		for (i = 0; i < cur_fwd_config.nb_fwd_ports; i++)
> > -			(*port_fwd_begin)(fwd_ports_ids[i]);
> > -	}
> >  	for (i = 0; i < cur_fwd_config.nb_fwd_lcores; i++) {
> >  		lc_id = fwd_lcores_cpuids[i];
> >  		if ((interactive == 0) || (lc_id != rte_lcore_id())) { @@ -2227,10
> > +2221,35 @@ struct extmem_param {
> >  		fprintf(stderr, "Packet forwarding already started\n");
> >  		return;
> >  	}
> > -	test_done = 0;
> >
> >  	fwd_config_setup();
> >
> > +	port_fwd_begin = cur_fwd_config.fwd_eng->port_fwd_begin;
> > +	if (port_fwd_begin != NULL) {
> > +		for (i = 0; i < cur_fwd_config.nb_fwd_ports; i++) {
> > +			if (port_fwd_begin(fwd_ports_ids[i])) {
> > +				fprintf(stderr,
> > +					"Packet forwarding not ready\n");
> > +				return;
> > +			}
> > +		}
> > +	}
> > +
> > +	if (with_tx_first) {
> > +		port_fwd_begin = tx_only_engine.port_fwd_begin;
> > +		if (port_fwd_begin != NULL) {
> > +			for (i = 0; i < cur_fwd_config.nb_fwd_ports; i++) {
> > +				if (port_fwd_begin(fwd_ports_ids[i])) {
> > +					fprintf(stderr,
> > +						"Packet forwarding not
> > ready\n");
> > +					return;
> > +				}
> > +			}
> > +		}
> > +	}
> > +
> > +	test_done = 0;
> > +
> >  	if(!no_flush_rx)
> >  		flush_fwd_rx_queues();
> >
> > @@ -2239,11 +2258,6 @@ struct extmem_param {
> >
> >  	fwd_stats_reset();
> >  	if (with_tx_first) {
> > -		port_fwd_begin = tx_only_engine.port_fwd_begin;
> > -		if (port_fwd_begin != NULL) {
> > -			for (i = 0; i < cur_fwd_config.nb_fwd_ports; i++)
> > -				(*port_fwd_begin)(fwd_ports_ids[i]);
> > -		}
> >  		while (with_tx_first--) {
> >  			launch_packet_forwarding(
> >  					run_one_txonly_burst_on_core);
> > diff --git a/app/test-pmd/testpmd.h b/app/test-pmd/testpmd.h index
> > 5863b2f..e9d9db0 100644
> > --- a/app/test-pmd/testpmd.h
> > +++ b/app/test-pmd/testpmd.h
> > @@ -268,7 +268,7 @@ struct fwd_lcore {
> >   *     Forwards packets unchanged on the same port.
> >   *     Check that sent IEEE1588 PTP packets are timestamped by the
> hardware.
> >   */
> > -typedef void (*port_fwd_begin_t)(portid_t pi);
> > +typedef int (*port_fwd_begin_t)(portid_t pi);
> >  typedef void (*port_fwd_end_t)(portid_t pi);  typedef void
> > (*packet_fwd_t)(struct fwd_stream *fs);
> >
> > diff --git a/app/test-pmd/txonly.c b/app/test-pmd/txonly.c index
> > aed820f..386a4ff 100644
> > --- a/app/test-pmd/txonly.c
> > +++ b/app/test-pmd/txonly.c
> > @@ -435,7 +435,7 @@
> >  	get_end_cycles(fs, start_tsc);
> >  }
> >
> > -static void
> > +static int
> >  tx_only_begin(portid_t pi)
> >  {
> >  	uint16_t pkt_data_len;
> > @@ -467,6 +467,7 @@
> >  		timestamp_init_req++;
> >  	/* Make sure all settings are visible on forwarding cores.*/
> >  	rte_wmb();
> > +	return 0;
> >  }
> >
> >  struct fwd_engine tx_only_engine = {
> > --
> > 1.8.3.1

BRs
Alvin


* [dpdk-dev] [PATCH v4 1/2] app/testpmd: update forward engine beginning
  2021-09-18  3:06   ` [dpdk-dev] [PATCH v3 1/2] app/testpmd: update forward engine beginning Alvin Zhang
  2021-09-18  3:06     ` [dpdk-dev] [PATCH v3 2/2] app/testpmd: fix txonly forwording Alvin Zhang
  2021-09-18  8:31     ` [dpdk-dev] [PATCH v3 1/2] app/testpmd: update forward engine beginning Li, Xiaoyun
@ 2021-09-22  2:49     ` Alvin Zhang
  2021-09-22  2:49       ` [dpdk-dev] [PATCH v4 2/2] app/testpmd: fix txonly forwording Alvin Zhang
                         ` (2 more replies)
  2 siblings, 3 replies; 24+ messages in thread
From: Alvin Zhang @ 2021-09-22  2:49 UTC (permalink / raw)
  To: xiaoyun.li, konstantin.ananyev; +Cc: dev, Alvin Zhang, stable

For each forward engine, there may be some special conditions that
must be met before the forwarding runs.

Adding checks for these conditions at configuration time is not
suitable, because one condition may rely on multiple configurations,
and the conditions required by each forward engine are not general.

The best solution is for each forward engine to have a callback that
checks whether these conditions are met; testpmd can then call the
callback to determine whether the forwarding can be started.

There was a void callback 'port_fwd_begin' in the forward engine
that did some initialization for forwarding. This patch updates its
return type so that checks can be added in it to confirm whether
the forwarding can be started. In addition, this patch calls the
callback before the forwarding stats are reset and only then
launches the forwarding engine.
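
Engines hand the callback to testpmd through their fwd_engine
descriptor; a sketch of the tx-only wiring (field layout as in
app/test-pmd/testpmd.h of this period):

struct fwd_engine tx_only_engine = {
	.fwd_mode_name  = "txonly",
	.port_fwd_begin = tx_only_begin,	/* now returns int */
	.port_fwd_end   = NULL,
	.packet_fwd     = pkt_burst_transmit,
};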

Bugzilla ID: 797
Cc: stable@dpdk.org

Signed-off-by: Alvin Zhang <alvinx.zhang@intel.com>
---
 app/test-pmd/flowgen.c     |  3 ++-
 app/test-pmd/ieee1588fwd.c |  3 ++-
 app/test-pmd/noisy_vnf.c   |  4 +++-
 app/test-pmd/testpmd.c     | 38 ++++++++++++++++++++++++++------------
 app/test-pmd/testpmd.h     |  2 +-
 app/test-pmd/txonly.c      |  3 ++-
 6 files changed, 36 insertions(+), 17 deletions(-)

diff --git a/app/test-pmd/flowgen.c b/app/test-pmd/flowgen.c
index 0d3664a..83234fc 100644
--- a/app/test-pmd/flowgen.c
+++ b/app/test-pmd/flowgen.c
@@ -201,10 +201,11 @@
 	get_end_cycles(fs, start_tsc);
 }
 
-static void
+static int
 flowgen_begin(portid_t pi)
 {
 	printf("  number of flows for port %u: %d\n", pi, nb_flows_flowgen);
+	return 0;
 }
 
 struct fwd_engine flow_gen_engine = {
diff --git a/app/test-pmd/ieee1588fwd.c b/app/test-pmd/ieee1588fwd.c
index 034f238..81624a7 100644
--- a/app/test-pmd/ieee1588fwd.c
+++ b/app/test-pmd/ieee1588fwd.c
@@ -198,10 +198,11 @@ struct ptpv2_msg {
 	port_ieee1588_tx_timestamp_check(fs->rx_port);
 }
 
-static void
+static int
 port_ieee1588_fwd_begin(portid_t pi)
 {
 	rte_eth_timesync_enable(pi);
+	return 0;
 }
 
 static void
diff --git a/app/test-pmd/noisy_vnf.c b/app/test-pmd/noisy_vnf.c
index 382a4c2..e4434be 100644
--- a/app/test-pmd/noisy_vnf.c
+++ b/app/test-pmd/noisy_vnf.c
@@ -231,7 +231,7 @@ struct noisy_config {
 	rte_free(noisy_cfg[pi]);
 }
 
-static void
+static int
 noisy_fwd_begin(portid_t pi)
 {
 	struct noisy_config *n;
@@ -273,6 +273,8 @@ struct noisy_config {
 		rte_exit(EXIT_FAILURE,
 			 "--noisy-lkup-memory-size must be > 0\n");
 	}
+
+	return 0;
 }
 
 struct fwd_engine noisy_vnf_engine = {
diff --git a/app/test-pmd/testpmd.c b/app/test-pmd/testpmd.c
index 97ae52e..0345b2e 100644
--- a/app/test-pmd/testpmd.c
+++ b/app/test-pmd/testpmd.c
@@ -2172,16 +2172,10 @@ struct extmem_param {
 static void
 launch_packet_forwarding(lcore_function_t *pkt_fwd_on_lcore)
 {
-	port_fwd_begin_t port_fwd_begin;
 	unsigned int i;
 	unsigned int lc_id;
 	int diag;
 
-	port_fwd_begin = cur_fwd_config.fwd_eng->port_fwd_begin;
-	if (port_fwd_begin != NULL) {
-		for (i = 0; i < cur_fwd_config.nb_fwd_ports; i++)
-			(*port_fwd_begin)(fwd_ports_ids[i]);
-	}
 	for (i = 0; i < cur_fwd_config.nb_fwd_lcores; i++) {
 		lc_id = fwd_lcores_cpuids[i];
 		if ((interactive == 0) || (lc_id != rte_lcore_id())) {
@@ -2227,10 +2221,35 @@ struct extmem_param {
 		fprintf(stderr, "Packet forwarding already started\n");
 		return;
 	}
-	test_done = 0;
 
 	fwd_config_setup();
 
+	port_fwd_begin = cur_fwd_config.fwd_eng->port_fwd_begin;
+	if (port_fwd_begin != NULL) {
+		for (i = 0; i < cur_fwd_config.nb_fwd_ports; i++) {
+			if (port_fwd_begin(fwd_ports_ids[i])) {
+				fprintf(stderr,
+					"Packet forwarding not ready\n");
+				return;
+			}
+		}
+	}
+
+	if (with_tx_first) {
+		port_fwd_begin = tx_only_engine.port_fwd_begin;
+		if (port_fwd_begin != NULL) {
+			for (i = 0; i < cur_fwd_config.nb_fwd_ports; i++) {
+				if (port_fwd_begin(fwd_ports_ids[i])) {
+					fprintf(stderr,
+						"Packet forwarding not ready\n");
+					return;
+				}
+			}
+		}
+	}
+
+	test_done = 0;
+
 	if(!no_flush_rx)
 		flush_fwd_rx_queues();
 
@@ -2239,11 +2258,6 @@ struct extmem_param {
 
 	fwd_stats_reset();
 	if (with_tx_first) {
-		port_fwd_begin = tx_only_engine.port_fwd_begin;
-		if (port_fwd_begin != NULL) {
-			for (i = 0; i < cur_fwd_config.nb_fwd_ports; i++)
-				(*port_fwd_begin)(fwd_ports_ids[i]);
-		}
 		while (with_tx_first--) {
 			launch_packet_forwarding(
 					run_one_txonly_burst_on_core);
diff --git a/app/test-pmd/testpmd.h b/app/test-pmd/testpmd.h
index 5863b2f..e9d9db0 100644
--- a/app/test-pmd/testpmd.h
+++ b/app/test-pmd/testpmd.h
@@ -268,7 +268,7 @@ struct fwd_lcore {
  *     Forwards packets unchanged on the same port.
  *     Check that sent IEEE1588 PTP packets are timestamped by the hardware.
  */
-typedef void (*port_fwd_begin_t)(portid_t pi);
+typedef int (*port_fwd_begin_t)(portid_t pi);
 typedef void (*port_fwd_end_t)(portid_t pi);
 typedef void (*packet_fwd_t)(struct fwd_stream *fs);
 
diff --git a/app/test-pmd/txonly.c b/app/test-pmd/txonly.c
index aed820f..386a4ff 100644
--- a/app/test-pmd/txonly.c
+++ b/app/test-pmd/txonly.c
@@ -435,7 +435,7 @@
 	get_end_cycles(fs, start_tsc);
 }
 
-static void
+static int
 tx_only_begin(portid_t pi)
 {
 	uint16_t pkt_data_len;
@@ -467,6 +467,7 @@
 		timestamp_init_req++;
 	/* Make sure all settings are visible on forwarding cores.*/
 	rte_wmb();
+	return 0;
 }
 
 struct fwd_engine tx_only_engine = {
-- 
1.8.3.1



* [dpdk-dev] [PATCH v4 2/2] app/testpmd: fix txonly forwording
  2021-09-22  2:49     ` [dpdk-dev] [PATCH v4 " Alvin Zhang
@ 2021-09-22  2:49       ` Alvin Zhang
  2021-09-22  5:58         ` Li, Xiaoyun
  2021-09-22  5:59       ` [dpdk-dev] [PATCH v4 1/2] app/testpmd: update forward engine beginning Li, Xiaoyun
  2021-09-23  1:49       ` [dpdk-dev] [PATCH v5 " Alvin Zhang
  2 siblings, 1 reply; 24+ messages in thread
From: Alvin Zhang @ 2021-09-22  2:49 UTC (permalink / raw)
  To: xiaoyun.li, konstantin.ananyev; +Cc: dev, Alvin Zhang, stable

When a random number of Tx segments is enabled, the actual number
of segments may be only one, so the first segment of the Tx
packets must accommodate a complete outgoing Eth/IP/UDP packet.

Besides, if multiple flows are enabled, the forwarding updates
the IP and UDP headers, and these headers shouldn't cross
segments. This also requires that the first Tx segment can
accommodate a complete Eth/IP/UDP packet.

In addition, if time stamp is enabled, the forwarding needs more
Tx segment space for time stamp information.

This patch adds checks at the beginning of the forward engine to
make sure all the above conditions are met.
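
When the segment layout is fixed (no random split), the timestamp
mark may legitimately span the leading segments, so the check only
has to confirm that the configured segments can jointly hold the
headers, along these lines (sketch of the loop added below):

uint16_t total = 0;
uint8_t i;

for (i = 0; i < tx_pkt_nb_segs; i++) {
	total += tx_pkt_seg_lengths[i];
	if (total >= pkt_hdr_len)
		break;	/* room for Eth/IP/UDP plus the timestamp mark */
}
if (total < pkt_hdr_len)
	return -EINVAL;	/* the configured segments are too small */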

Bugzilla ID: 797
Fixes: 79bec05b32b7 ("app/testpmd: add ability to split outgoing packets")
Cc: stable@dpdk.org

Signed-off-by: Alvin Zhang <alvinx.zhang@intel.com>
---
 app/test-pmd/txonly.c | 67 ++++++++++++++++++++++++++++++++++++++++++---------
 1 file changed, 55 insertions(+), 12 deletions(-)

diff --git a/app/test-pmd/txonly.c b/app/test-pmd/txonly.c
index 386a4ff..40f4075 100644
--- a/app/test-pmd/txonly.c
+++ b/app/test-pmd/txonly.c
@@ -40,6 +40,13 @@
 
 #include "testpmd.h"
 
+struct tx_timestamp {
+	rte_be32_t signature;
+	rte_be16_t pkt_idx;
+	rte_be16_t queue_idx;
+	rte_be64_t ts;
+};
+
 /* use RFC863 Discard Protocol */
 uint16_t tx_udp_src_port = 9;
 uint16_t tx_udp_dst_port = 9;
@@ -257,12 +264,7 @@
 
 	if (unlikely(timestamp_enable)) {
 		uint64_t skew = RTE_PER_LCORE(timestamp_qskew);
-		struct {
-			rte_be32_t signature;
-			rte_be16_t pkt_idx;
-			rte_be16_t queue_idx;
-			rte_be64_t ts;
-		} timestamp_mark;
+		struct tx_timestamp timestamp_mark;
 
 		if (unlikely(timestamp_init_req !=
 				RTE_PER_LCORE(timestamp_idone))) {
@@ -438,13 +440,23 @@
 static int
 tx_only_begin(portid_t pi)
 {
-	uint16_t pkt_data_len;
+	uint16_t pkt_hdr_len, pkt_data_len;
 	int dynf;
 
-	pkt_data_len = (uint16_t) (tx_pkt_length - (
-					sizeof(struct rte_ether_hdr) +
-					sizeof(struct rte_ipv4_hdr) +
-					sizeof(struct rte_udp_hdr)));
+	pkt_hdr_len = (uint16_t)(sizeof(struct rte_ether_hdr) +
+				 sizeof(struct rte_ipv4_hdr) +
+				 sizeof(struct rte_udp_hdr));
+	pkt_data_len = tx_pkt_length - pkt_hdr_len;
+
+	if ((tx_pkt_split == TX_PKT_SPLIT_RND || txonly_multi_flow) &&
+	    tx_pkt_seg_lengths[0] < pkt_hdr_len) {
+		TESTPMD_LOG(ERR,
+			    "Random segment number or multiple flow enabled,"
+			    " but tx_pkt_seg_lengths[0] %u < %u (needed)\n",
+			    tx_pkt_seg_lengths[0], pkt_hdr_len);
+		return -EINVAL;
+	}
+
 	setup_pkt_udp_ip_headers(&pkt_ip_hdr, &pkt_udp_hdr, pkt_data_len);
 
 	timestamp_enable = false;
@@ -463,8 +475,39 @@
 			   timestamp_mask &&
 			   timestamp_off >= 0 &&
 			   !rte_eth_read_clock(pi, &timestamp_initial[pi]);
-	if (timestamp_enable)
+
+	if (timestamp_enable) {
+		pkt_hdr_len += sizeof(struct timestamp);
+
+		if (tx_pkt_split == TX_PKT_SPLIT_RND) {
+			if (tx_pkt_seg_lengths[0] < pkt_hdr_len) {
+				TESTPMD_LOG(ERR,
+					    "Time stamp and random segment number enabled,"
+					    " but tx_pkt_seg_lengths[0] %u < %u (needed)\n",
+					    tx_pkt_seg_lengths[0], pkt_hdr_len);
+				return -EINVAL;
+			}
+		} else {
+			uint16_t total = 0;
+			uint8_t i;
+
+			for (i = 0; i < tx_pkt_nb_segs; i++) {
+				total += tx_pkt_seg_lengths[i];
+				if (total >= pkt_hdr_len)
+					break;
+			}
+
+			if (total < pkt_hdr_len) {
+				TESTPMD_LOG(ERR,
+					    "Not enough Tx segment space for time stamp info."
+					    " total %u < %u (needed)\n",
+					    total, pkt_hdr_len);
+				return -EINVAL;
+			}
+		}
 		timestamp_init_req++;
+	}
+
 	/* Make sure all settings are visible on forwarding cores.*/
 	rte_wmb();
 	return 0;
-- 
1.8.3.1


^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [dpdk-dev] [PATCH v4 2/2] app/testpmd: fix txonly forwording
  2021-09-22  2:49       ` [dpdk-dev] [PATCH v4 2/2] app/testpmd: fix txonly forwording Alvin Zhang
@ 2021-09-22  5:58         ` Li, Xiaoyun
  0 siblings, 0 replies; 24+ messages in thread
From: Li, Xiaoyun @ 2021-09-22  5:58 UTC (permalink / raw)
  To: Zhang, AlvinX, Ananyev, Konstantin; +Cc: dev, stable


> -----Original Message-----
> From: Zhang, AlvinX <alvinx.zhang@intel.com>
> Sent: Wednesday, September 22, 2021 10:50
> To: Li, Xiaoyun <xiaoyun.li@intel.com>; Ananyev, Konstantin
> <konstantin.ananyev@intel.com>
> Cc: dev@dpdk.org; Zhang, AlvinX <alvinx.zhang@intel.com>; stable@dpdk.org
> Subject: [PATCH v4 2/2] app/testpmd: fix txonly forwording
> 
> When a random number of Tx segments is enabled, the actual number of
> segments may be only one, so the first segment of the Tx packets must
> accommodate a complete outgoing Eth/IP/UDP packet.
> 
> Besides, if multiple flows are enabled, the forwarding updates the IP and
> UDP headers, and these headers shouldn't cross segments.
> This also requires that the first Tx segment can accommodate a complete
> Eth/IP/UDP packet.
> 
> In addition, if time stamp is enabled, the forwarding needs more Tx segment
> space for time stamp information.
> 
> This patch adds checks at the beginning of the forward engine to make sure
> all the above conditions are met.
> 
> Bugzilla ID: 797
> Fixes: 79bec05b32b7 ("app/testpmd: add ability to split outgoing packets")
> Cc: stable@dpdk.org
> 
> Signed-off-by: Alvin Zhang <alvinx.zhang@intel.com>

Acked-by: Xiaoyun Li <xiaoyun.li@intel.com>


* Re: [dpdk-dev] [PATCH v4 1/2] app/testpmd: update forward engine beginning
  2021-09-22  2:49     ` [dpdk-dev] [PATCH v4 " Alvin Zhang
  2021-09-22  2:49       ` [dpdk-dev] [PATCH v4 2/2] app/testpmd: fix txonly forwording Alvin Zhang
@ 2021-09-22  5:59       ` Li, Xiaoyun
  2021-09-23  1:49       ` [dpdk-dev] [PATCH v5 " Alvin Zhang
  2 siblings, 0 replies; 24+ messages in thread
From: Li, Xiaoyun @ 2021-09-22  5:59 UTC (permalink / raw)
  To: Zhang, AlvinX, Ananyev, Konstantin; +Cc: dev, stable

> -----Original Message-----
> From: Zhang, AlvinX <alvinx.zhang@intel.com>
> Sent: Wednesday, September 22, 2021 10:50
> To: Li, Xiaoyun <xiaoyun.li@intel.com>; Ananyev, Konstantin
> <konstantin.ananyev@intel.com>
> Cc: dev@dpdk.org; Zhang, AlvinX <alvinx.zhang@intel.com>; stable@dpdk.org
> Subject: [PATCH v4 1/2] app/testpmd: update forward engine beginning
> 
> For each forward engine, there may be some special conditions that must be
> met before the forwarding runs.
> 
> Adding checks for these conditions at configuration time is not suitable,
> because one condition may rely on multiple configurations, and the
> conditions required by each forward engine are not general.
> 
> The best solution is for each forward engine to have a callback that checks
> whether these conditions are met; testpmd can then call the callback to
> determine whether the forwarding can be started.
> 
> There was a void callback 'port_fwd_begin' in the forward engine that did
> some initialization for forwarding. This patch updates its return type so
> that checks can be added in it to confirm whether the forwarding can be
> started. In addition, this patch calls the callback before the forwarding
> stats are reset and only then launches the forwarding engine.
> 
> Bugzilla ID: 797
> Cc: stable@dpdk.org

Not sure if you should add the same fixline as patch 2
> 
> Signed-off-by: Alvin Zhang <alvinx.zhang@intel.com>

Except that, Acked-by: Xiaoyun Li <xiaoyun.li@intel.com>


* [dpdk-dev] [PATCH v5 1/2] app/testpmd: update forward engine beginning
  2021-09-22  2:49     ` [dpdk-dev] [PATCH v4 " Alvin Zhang
  2021-09-22  2:49       ` [dpdk-dev] [PATCH v4 2/2] app/testpmd: fix txonly forwording Alvin Zhang
  2021-09-22  5:59       ` [dpdk-dev] [PATCH v4 1/2] app/testpmd: update forward engine beginning Li, Xiaoyun
@ 2021-09-23  1:49       ` Alvin Zhang
  2021-09-23  1:49         ` [dpdk-dev] [PATCH v5 2/2] app/testpmd: fix txonly forwording Alvin Zhang
  2021-09-23  8:01         ` [dpdk-dev] [PATCH v6 1/2] app/testpmd: update forward engine beginning Alvin Zhang
  2 siblings, 2 replies; 24+ messages in thread
From: Alvin Zhang @ 2021-09-23  1:49 UTC (permalink / raw)
  To: xiaoyun.li, konstantin.ananyev; +Cc: dev, Alvin Zhang, stable

For each forward engine, there may be some special conditions that
must be met before the forwarding runs.

Adding checks for these conditions at configuration time is not
suitable, because one condition may rely on multiple configurations,
and the conditions required by each forward engine are not general.

The best solution is for each forward engine to have a callback that
checks whether these conditions are met; testpmd can then call the
callback to determine whether the forwarding can be started.

There was a void callback 'port_fwd_begin' in the forward engine
that did some initialization for forwarding. This patch updates its
return type so that checks can be added in it to confirm whether
the forwarding can be started. In addition, this patch calls the
callback before the forwarding stats are reset and only then
launches the forwarding engine.
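
The resulting contract change, side by side (taken from the diff
below):

/* before: the begin hook could only prepare state */
typedef void (*port_fwd_begin_t)(portid_t pi);

/* after: a non-zero return means "not ready, do not start" */
typedef int (*port_fwd_begin_t)(portid_t pi);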

Bugzilla ID: 797
Cc: stable@dpdk.org

Signed-off-by: Alvin Zhang <alvinx.zhang@intel.com>
Acked-by: Xiaoyun Li <xiaoyun.li@intel.com>
---
 app/test-pmd/flowgen.c     |  3 ++-
 app/test-pmd/ieee1588fwd.c |  3 ++-
 app/test-pmd/noisy_vnf.c   |  4 +++-
 app/test-pmd/testpmd.c     | 38 ++++++++++++++++++++++++++------------
 app/test-pmd/testpmd.h     |  2 +-
 app/test-pmd/txonly.c      |  3 ++-
 6 files changed, 36 insertions(+), 17 deletions(-)

diff --git a/app/test-pmd/flowgen.c b/app/test-pmd/flowgen.c
index 0d3664a..83234fc 100644
--- a/app/test-pmd/flowgen.c
+++ b/app/test-pmd/flowgen.c
@@ -201,10 +201,11 @@
 	get_end_cycles(fs, start_tsc);
 }
 
-static void
+static int
 flowgen_begin(portid_t pi)
 {
 	printf("  number of flows for port %u: %d\n", pi, nb_flows_flowgen);
+	return 0;
 }
 
 struct fwd_engine flow_gen_engine = {
diff --git a/app/test-pmd/ieee1588fwd.c b/app/test-pmd/ieee1588fwd.c
index 034f238..81624a7 100644
--- a/app/test-pmd/ieee1588fwd.c
+++ b/app/test-pmd/ieee1588fwd.c
@@ -198,10 +198,11 @@ struct ptpv2_msg {
 	port_ieee1588_tx_timestamp_check(fs->rx_port);
 }
 
-static void
+static int
 port_ieee1588_fwd_begin(portid_t pi)
 {
 	rte_eth_timesync_enable(pi);
+	return 0;
 }
 
 static void
diff --git a/app/test-pmd/noisy_vnf.c b/app/test-pmd/noisy_vnf.c
index 382a4c2..e4434be 100644
--- a/app/test-pmd/noisy_vnf.c
+++ b/app/test-pmd/noisy_vnf.c
@@ -231,7 +231,7 @@ struct noisy_config {
 	rte_free(noisy_cfg[pi]);
 }
 
-static void
+static int
 noisy_fwd_begin(portid_t pi)
 {
 	struct noisy_config *n;
@@ -273,6 +273,8 @@ struct noisy_config {
 		rte_exit(EXIT_FAILURE,
 			 "--noisy-lkup-memory-size must be > 0\n");
 	}
+
+	return 0;
 }
 
 struct fwd_engine noisy_vnf_engine = {
diff --git a/app/test-pmd/testpmd.c b/app/test-pmd/testpmd.c
index 97ae52e..0345b2e 100644
--- a/app/test-pmd/testpmd.c
+++ b/app/test-pmd/testpmd.c
@@ -2172,16 +2172,10 @@ struct extmem_param {
 static void
 launch_packet_forwarding(lcore_function_t *pkt_fwd_on_lcore)
 {
-	port_fwd_begin_t port_fwd_begin;
 	unsigned int i;
 	unsigned int lc_id;
 	int diag;
 
-	port_fwd_begin = cur_fwd_config.fwd_eng->port_fwd_begin;
-	if (port_fwd_begin != NULL) {
-		for (i = 0; i < cur_fwd_config.nb_fwd_ports; i++)
-			(*port_fwd_begin)(fwd_ports_ids[i]);
-	}
 	for (i = 0; i < cur_fwd_config.nb_fwd_lcores; i++) {
 		lc_id = fwd_lcores_cpuids[i];
 		if ((interactive == 0) || (lc_id != rte_lcore_id())) {
@@ -2227,10 +2221,35 @@ struct extmem_param {
 		fprintf(stderr, "Packet forwarding already started\n");
 		return;
 	}
-	test_done = 0;
 
 	fwd_config_setup();
 
+	port_fwd_begin = cur_fwd_config.fwd_eng->port_fwd_begin;
+	if (port_fwd_begin != NULL) {
+		for (i = 0; i < cur_fwd_config.nb_fwd_ports; i++) {
+			if (port_fwd_begin(fwd_ports_ids[i])) {
+				fprintf(stderr,
+					"Packet forwarding not ready\n");
+				return;
+			}
+		}
+	}
+
+	if (with_tx_first) {
+		port_fwd_begin = tx_only_engine.port_fwd_begin;
+		if (port_fwd_begin != NULL) {
+			for (i = 0; i < cur_fwd_config.nb_fwd_ports; i++) {
+				if (port_fwd_begin(fwd_ports_ids[i])) {
+					fprintf(stderr,
+						"Packet forwarding not ready\n");
+					return;
+				}
+			}
+		}
+	}
+
+	test_done = 0;
+
 	if(!no_flush_rx)
 		flush_fwd_rx_queues();
 
@@ -2239,11 +2258,6 @@ struct extmem_param {
 
 	fwd_stats_reset();
 	if (with_tx_first) {
-		port_fwd_begin = tx_only_engine.port_fwd_begin;
-		if (port_fwd_begin != NULL) {
-			for (i = 0; i < cur_fwd_config.nb_fwd_ports; i++)
-				(*port_fwd_begin)(fwd_ports_ids[i]);
-		}
 		while (with_tx_first--) {
 			launch_packet_forwarding(
 					run_one_txonly_burst_on_core);
diff --git a/app/test-pmd/testpmd.h b/app/test-pmd/testpmd.h
index 5863b2f..e9d9db0 100644
--- a/app/test-pmd/testpmd.h
+++ b/app/test-pmd/testpmd.h
@@ -268,7 +268,7 @@ struct fwd_lcore {
  *     Forwards packets unchanged on the same port.
  *     Check that sent IEEE1588 PTP packets are timestamped by the hardware.
  */
-typedef void (*port_fwd_begin_t)(portid_t pi);
+typedef int (*port_fwd_begin_t)(portid_t pi);
 typedef void (*port_fwd_end_t)(portid_t pi);
 typedef void (*packet_fwd_t)(struct fwd_stream *fs);
 
diff --git a/app/test-pmd/txonly.c b/app/test-pmd/txonly.c
index aed820f..386a4ff 100644
--- a/app/test-pmd/txonly.c
+++ b/app/test-pmd/txonly.c
@@ -435,7 +435,7 @@
 	get_end_cycles(fs, start_tsc);
 }
 
-static void
+static int
 tx_only_begin(portid_t pi)
 {
 	uint16_t pkt_data_len;
@@ -467,6 +467,7 @@
 		timestamp_init_req++;
 	/* Make sure all settings are visible on forwarding cores.*/
 	rte_wmb();
+	return 0;
 }
 
 struct fwd_engine tx_only_engine = {
-- 
1.8.3.1



* [dpdk-dev] [PATCH v5 2/2] app/testpmd: fix txonly forwording
  2021-09-23  1:49       ` [dpdk-dev] [PATCH v5 " Alvin Zhang
@ 2021-09-23  1:49         ` Alvin Zhang
  2021-09-23  4:25           ` Ivan Malov
  2021-09-23  8:01         ` [dpdk-dev] [PATCH v6 1/2] app/testpmd: update forward engine beginning Alvin Zhang
  1 sibling, 1 reply; 24+ messages in thread
From: Alvin Zhang @ 2021-09-23  1:49 UTC (permalink / raw)
  To: xiaoyun.li, konstantin.ananyev; +Cc: dev, Alvin Zhang, stable

When a random number of Tx segments is enabled, the actual number
of segments may be only one, so the first segment of the Tx
packets must accommodate a complete outgoing Eth/IP/UDP packet.

Besides, if multiple flows are enabled, the forwarding updates
the IP and UDP headers, and these headers shouldn't cross
segments. This also requires that the first Tx segment can
accommodate a complete Eth/IP/UDP packet.

In addition, if time stamp is enabled, the forwarding needs more
Tx segment space for time stamp information.

This patch adds checks at the beginning of the forward engine to
make sure all the above conditions are met.
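
As a usage illustration (segment sizes picked arbitrarily), a
session that the new check rejects cleanly at start time instead of
letting the Tx path crash would look roughly like:

testpmd> set fwd txonly
testpmd> set txpkts 4,60
testpmd> set txsplit rand
testpmd> start
testpmd: Random segment number or multiple flow enabled, but tx_pkt_seg_lengths[0] 4 < 42 (needed)
Packet forwarding not ready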

Bugzilla ID: 797
Fixes: 79bec05b32b7 ("app/testpmd: add ability to split outgoing packets")
Cc: stable@dpdk.org

Signed-off-by: Alvin Zhang <alvinx.zhang@intel.com>
Acked-by: Xiaoyun Li <xiaoyun.li@intel.com>
---

v5: fixes a compilation issue
---
 app/test-pmd/txonly.c | 67 ++++++++++++++++++++++++++++++++++++++++++---------
 1 file changed, 55 insertions(+), 12 deletions(-)

diff --git a/app/test-pmd/txonly.c b/app/test-pmd/txonly.c
index 386a4ff..e7b1b42 100644
--- a/app/test-pmd/txonly.c
+++ b/app/test-pmd/txonly.c
@@ -40,6 +40,13 @@
 
 #include "testpmd.h"
 
+struct tx_timestamp {
+	rte_be32_t signature;
+	rte_be16_t pkt_idx;
+	rte_be16_t queue_idx;
+	rte_be64_t ts;
+};
+
 /* use RFC863 Discard Protocol */
 uint16_t tx_udp_src_port = 9;
 uint16_t tx_udp_dst_port = 9;
@@ -257,12 +264,7 @@
 
 	if (unlikely(timestamp_enable)) {
 		uint64_t skew = RTE_PER_LCORE(timestamp_qskew);
-		struct {
-			rte_be32_t signature;
-			rte_be16_t pkt_idx;
-			rte_be16_t queue_idx;
-			rte_be64_t ts;
-		} timestamp_mark;
+		struct tx_timestamp timestamp_mark;
 
 		if (unlikely(timestamp_init_req !=
 				RTE_PER_LCORE(timestamp_idone))) {
@@ -438,13 +440,23 @@
 static int
 tx_only_begin(portid_t pi)
 {
-	uint16_t pkt_data_len;
+	uint16_t pkt_hdr_len, pkt_data_len;
 	int dynf;
 
-	pkt_data_len = (uint16_t) (tx_pkt_length - (
-					sizeof(struct rte_ether_hdr) +
-					sizeof(struct rte_ipv4_hdr) +
-					sizeof(struct rte_udp_hdr)));
+	pkt_hdr_len = (uint16_t)(sizeof(struct rte_ether_hdr) +
+				 sizeof(struct rte_ipv4_hdr) +
+				 sizeof(struct rte_udp_hdr));
+	pkt_data_len = tx_pkt_length - pkt_hdr_len;
+
+	if ((tx_pkt_split == TX_PKT_SPLIT_RND || txonly_multi_flow) &&
+	    tx_pkt_seg_lengths[0] < pkt_hdr_len) {
+		TESTPMD_LOG(ERR,
+			    "Random segment number or multiple flow enabled,"
+			    " but tx_pkt_seg_lengths[0] %u < %u (needed)\n",
+			    tx_pkt_seg_lengths[0], pkt_hdr_len);
+		return -EINVAL;
+	}
+
 	setup_pkt_udp_ip_headers(&pkt_ip_hdr, &pkt_udp_hdr, pkt_data_len);
 
 	timestamp_enable = false;
@@ -463,8 +475,39 @@
 			   timestamp_mask &&
 			   timestamp_off >= 0 &&
 			   !rte_eth_read_clock(pi, &timestamp_initial[pi]);
-	if (timestamp_enable)
+
+	if (timestamp_enable) {
+		pkt_hdr_len += sizeof(struct tx_timestamp);
+
+		if (tx_pkt_split == TX_PKT_SPLIT_RND) {
+			if (tx_pkt_seg_lengths[0] < pkt_hdr_len) {
+				TESTPMD_LOG(ERR,
+					    "Time stamp and random segment number enabled,"
+					    " but tx_pkt_seg_lengths[0] %u < %u (needed)\n",
+					    tx_pkt_seg_lengths[0], pkt_hdr_len);
+				return -EINVAL;
+			}
+		} else {
+			uint16_t total = 0;
+			uint8_t i;
+
+			for (i = 0; i < tx_pkt_nb_segs; i++) {
+				total += tx_pkt_seg_lengths[i];
+				if (total >= pkt_hdr_len)
+					break;
+			}
+
+			if (total < pkt_hdr_len) {
+				TESTPMD_LOG(ERR,
+					    "Not enough Tx segment space for time stamp info."
+					    " total %u < %u (needed)\n",
+					    total, pkt_hdr_len);
+				return -EINVAL;
+			}
+		}
 		timestamp_init_req++;
+	}
+
 	/* Make sure all settings are visible on forwarding cores.*/
 	rte_wmb();
 	return 0;
-- 
1.8.3.1



* Re: [dpdk-dev] [PATCH v5 2/2] app/testpmd: fix txonly forwording
  2021-09-23  1:49         ` [dpdk-dev] [PATCH v5 2/2] app/testpmd: fix txonly forwording Alvin Zhang
@ 2021-09-23  4:25           ` Ivan Malov
  2021-09-23  5:11             ` Zhang, AlvinX
  0 siblings, 1 reply; 24+ messages in thread
From: Ivan Malov @ 2021-09-23  4:25 UTC (permalink / raw)
  To: Alvin Zhang, xiaoyun.li, konstantin.ananyev; +Cc: dev, stable

Hi Alvin,

There's a typo in the commit summary: forwording -> forwarding.

On 23/09/2021 04:49, Alvin Zhang wrote:
> When random number of Tx segments is enabled, because the actual
> number of segments may be only one, the first segment of the Tx
> packets must accommodate a complete being sending Eth/IP/UDP packet.
> 
> Besides, if multiple flow is enabled, the forwarding will update
> the IP and UDP header, these headers shouldn't cross segments.
> This also requires the first Tx segment can accommodate a complete
> Eth/IP/UDP packet.
> 
> In addition, if time stamp is enabled, the forwarding needs more
> Tx segment space for time stamp information.
> 
> This patch adds checks in beginning of forward engine to make sure
> all above conditions are met.
> 
> Bugzilla ID: 797
> Fixes: 79bec05b32b7 ("app/testpmd: add ability to split outgoing packets")
> Cc: stable@dpdk.org
> 
> Signed-off-by: Alvin Zhang <alvinx.zhang@intel.com>
> Acked-by: Xiaoyun Li <xiaoyun.li@intel.com>
> ---
> 
> v5: fixes a compilation issue
> ---
>   app/test-pmd/txonly.c | 67 ++++++++++++++++++++++++++++++++++++++++++---------
>   1 file changed, 55 insertions(+), 12 deletions(-)
> 
> diff --git a/app/test-pmd/txonly.c b/app/test-pmd/txonly.c
> index 386a4ff..e7b1b42 100644
> --- a/app/test-pmd/txonly.c
> +++ b/app/test-pmd/txonly.c
> @@ -40,6 +40,13 @@
>   
>   #include "testpmd.h"
>   
> +struct tx_timestamp {
> +	rte_be32_t signature;
> +	rte_be16_t pkt_idx;
> +	rte_be16_t queue_idx;
> +	rte_be64_t ts;
> +};
> +
>   /* use RFC863 Discard Protocol */
>   uint16_t tx_udp_src_port = 9;
>   uint16_t tx_udp_dst_port = 9;
> @@ -257,12 +264,7 @@
>   
>   	if (unlikely(timestamp_enable)) {
>   		uint64_t skew = RTE_PER_LCORE(timestamp_qskew);
> -		struct {
> -			rte_be32_t signature;
> -			rte_be16_t pkt_idx;
> -			rte_be16_t queue_idx;
> -			rte_be64_t ts;
> -		} timestamp_mark;
> +		struct tx_timestamp timestamp_mark;
>   
>   		if (unlikely(timestamp_init_req !=
>   				RTE_PER_LCORE(timestamp_idone))) {
> @@ -438,13 +440,23 @@
>   static int
>   tx_only_begin(portid_t pi)
>   {
> -	uint16_t pkt_data_len;
> +	uint16_t pkt_hdr_len, pkt_data_len;
>   	int dynf;
>   
> -	pkt_data_len = (uint16_t) (tx_pkt_length - (
> -					sizeof(struct rte_ether_hdr) +
> -					sizeof(struct rte_ipv4_hdr) +
> -					sizeof(struct rte_udp_hdr)));
> +	pkt_hdr_len = (uint16_t)(sizeof(struct rte_ether_hdr) +
> +				 sizeof(struct rte_ipv4_hdr) +
> +				 sizeof(struct rte_udp_hdr));
> +	pkt_data_len = tx_pkt_length - pkt_hdr_len;
> +
> +	if ((tx_pkt_split == TX_PKT_SPLIT_RND || txonly_multi_flow) &&
> +	    tx_pkt_seg_lengths[0] < pkt_hdr_len) {
> +		TESTPMD_LOG(ERR,
> +			    "Random segment number or multiple flow enabled,"
> +			    " but tx_pkt_seg_lengths[0] %u < %u (needed)\n",

This should probably be on a single line:

TESTPMD_LOG(ERR, "Random segment number or multiple flow enabled, but 
tx_pkt_seg_lengths[0] %u < %u (needed)\n",

because this way it's more search-friendly. Style checks should be OK 
with that.
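
To illustrate the search-friendliness point: with the whole message on
one source line, a plain grep across the split point finds it
(illustrative invocation from the DPDK source root):

  $ grep -rn "flow enabled, but tx_pkt_seg_lengths" app/test-pmd/

The same pattern cannot match when the string is broken across two
source lines at exactly that point, which is the reason style checks
accept user-visible strings that exceed the usual line length.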

> +			    tx_pkt_seg_lengths[0], pkt_hdr_len);
> +		return -EINVAL;
> +	}
> +
>   	setup_pkt_udp_ip_headers(&pkt_ip_hdr, &pkt_udp_hdr, pkt_data_len);
>   
>   	timestamp_enable = false;
> @@ -463,8 +475,39 @@
>   			   timestamp_mask &&
>   			   timestamp_off >= 0 &&
>   			   !rte_eth_read_clock(pi, &timestamp_initial[pi]);
> -	if (timestamp_enable)
> +
> +	if (timestamp_enable) {
> +		pkt_hdr_len += sizeof(struct tx_timestamp);
> +
> +		if (tx_pkt_split == TX_PKT_SPLIT_RND) {
> +			if (tx_pkt_seg_lengths[0] < pkt_hdr_len) {
> +				TESTPMD_LOG(ERR,
> +					    "Time stamp and random segment number enabled,"
> +					    " but tx_pkt_seg_lengths[0] %u < %u (needed)\n",

Likewise.

> +					    tx_pkt_seg_lengths[0], pkt_hdr_len);
> +				return -EINVAL;
> +			}
> +		} else {
> +			uint16_t total = 0;
> +			uint8_t i;
> +
> +			for (i = 0; i < tx_pkt_nb_segs; i++) {
> +				total += tx_pkt_seg_lengths[i];
> +				if (total >= pkt_hdr_len)
> +					break;
> +			}
> +
> +			if (total < pkt_hdr_len) {
> +				TESTPMD_LOG(ERR,
> +					    "No enough Tx segment space for time stamp info."
> +					    " total %u < %u (needed)\n",

Likewise.

> +					    total, pkt_hdr_len);
> +				return -EINVAL;
> +			}
> +		}
>   		timestamp_init_req++;
> +	}
> +
>   	/* Make sure all settings are visible on forwarding cores.*/
>   	rte_wmb();
>   	return 0;
> 

-- 
Ivan M

^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [dpdk-dev] [PATCH v5 2/2] app/testpmd: fix txonly forwording
  2021-09-23  4:25           ` Ivan Malov
@ 2021-09-23  5:11             ` Zhang, AlvinX
  0 siblings, 0 replies; 24+ messages in thread
From: Zhang, AlvinX @ 2021-09-23  5:11 UTC (permalink / raw)
  To: Ivan Malov, Li, Xiaoyun, Ananyev, Konstantin; +Cc: dev, stable

> -----Original Message-----
> From: Ivan Malov <Ivan.Malov@oktetlabs.ru>
> Sent: Thursday, September 23, 2021 12:26 PM
> To: Zhang, AlvinX <alvinx.zhang@intel.com>; Li, Xiaoyun
> <xiaoyun.li@intel.com>; Ananyev, Konstantin <konstantin.ananyev@intel.com>
> Cc: dev@dpdk.org; stable@dpdk.org
> Subject: Re: [PATCH v5 2/2] app/testpmd: fix txonly forwording
> 
> Hi Alvin,
> 
> There's a typo in the commit summary: forwording -> forwarding.

I'll update it in V6.

> 
> On 23/09/2021 04:49, Alvin Zhang wrote:
> > When random number of Tx segments is enabled, because the actual
> > number of segments may be only one, the first segment of the Tx
> > packets must accommodate a complete being sending Eth/IP/UDP packet.
> >
> > Besides, if multiple flow is enabled, the forwarding will update the
> > IP and UDP header, these headers shouldn't cross segments.
> > This also requires the first Tx segment can accommodate a complete
> > Eth/IP/UDP packet.
> >
> > In addition, if time stamp is enabled, the forwarding needs more Tx
> > segment space for time stamp information.
> >
> > This patch adds checks in beginning of forward engine to make sure all
> > above conditions are met.
> >
> > Bugzilla ID: 797
> > Fixes: 79bec05b32b7 ("app/testpmd: add ability to split outgoing
> > packets")
> > Cc: stable@dpdk.org
> >
> > Signed-off-by: Alvin Zhang <alvinx.zhang@intel.com>
> > Acked-by: Xiaoyun Li <xiaoyun.li@intel.com>
> > ---
> >
> > v5: fixes a compilation issue
> > ---
> >   app/test-pmd/txonly.c | 67
> ++++++++++++++++++++++++++++++++++++++++++---------
> >   1 file changed, 55 insertions(+), 12 deletions(-)
> >
> > diff --git a/app/test-pmd/txonly.c b/app/test-pmd/txonly.c index
> > 386a4ff..e7b1b42 100644
> > --- a/app/test-pmd/txonly.c
> > +++ b/app/test-pmd/txonly.c
> > @@ -40,6 +40,13 @@
> >
> >   #include "testpmd.h"
> >
> > +struct tx_timestamp {
> > +	rte_be32_t signature;
> > +	rte_be16_t pkt_idx;
> > +	rte_be16_t queue_idx;
> > +	rte_be64_t ts;
> > +};
> > +
> >   /* use RFC863 Discard Protocol */
> >   uint16_t tx_udp_src_port = 9;
> >   uint16_t tx_udp_dst_port = 9;
> > @@ -257,12 +264,7 @@
> >
> >   	if (unlikely(timestamp_enable)) {
> >   		uint64_t skew = RTE_PER_LCORE(timestamp_qskew);
> > -		struct {
> > -			rte_be32_t signature;
> > -			rte_be16_t pkt_idx;
> > -			rte_be16_t queue_idx;
> > -			rte_be64_t ts;
> > -		} timestamp_mark;
> > +		struct tx_timestamp timestamp_mark;
> >
> >   		if (unlikely(timestamp_init_req !=
> >   				RTE_PER_LCORE(timestamp_idone))) { @@ -438,13
> +440,23 @@
> >   static int
> >   tx_only_begin(portid_t pi)
> >   {
> > -	uint16_t pkt_data_len;
> > +	uint16_t pkt_hdr_len, pkt_data_len;
> >   	int dynf;
> >
> > -	pkt_data_len = (uint16_t) (tx_pkt_length - (
> > -					sizeof(struct rte_ether_hdr) +
> > -					sizeof(struct rte_ipv4_hdr) +
> > -					sizeof(struct rte_udp_hdr)));
> > +	pkt_hdr_len = (uint16_t)(sizeof(struct rte_ether_hdr) +
> > +				 sizeof(struct rte_ipv4_hdr) +
> > +				 sizeof(struct rte_udp_hdr));
> > +	pkt_data_len = tx_pkt_length - pkt_hdr_len;
> > +
> > +	if ((tx_pkt_split == TX_PKT_SPLIT_RND || txonly_multi_flow) &&
> > +	    tx_pkt_seg_lengths[0] < pkt_hdr_len) {
> > +		TESTPMD_LOG(ERR,
> > +			    "Random segment number or multiple flow enabled,"
> > +			    " but tx_pkt_seg_lengths[0] %u < %u (needed)\n",
> 
> This should probably be on a single line:
> 
> TESTPMD_LOG(ERR, "Random segment number or multiple flow enabled, but
> tx_pkt_seg_lengths[0] %u < %u (needed)\n",
> 
> because this way it's more search-friendly. Style checks should be OK with that.

I'll update them in V6.

> 
> > +			    tx_pkt_seg_lengths[0], pkt_hdr_len);
> > +		return -EINVAL;
> > +	}
> > +
> >   	setup_pkt_udp_ip_headers(&pkt_ip_hdr, &pkt_udp_hdr,
> pkt_data_len);
> >
> >   	timestamp_enable = false;
> > @@ -463,8 +475,39 @@
> >   			   timestamp_mask &&
> >   			   timestamp_off >= 0 &&
> >   			   !rte_eth_read_clock(pi, &timestamp_initial[pi]);
> > -	if (timestamp_enable)
> > +
> > +	if (timestamp_enable) {
> > +		pkt_hdr_len += sizeof(struct tx_timestamp);
> > +
> > +		if (tx_pkt_split == TX_PKT_SPLIT_RND) {
> > +			if (tx_pkt_seg_lengths[0] < pkt_hdr_len) {
> > +				TESTPMD_LOG(ERR,
> > +					    "Time stamp and random segment number
> enabled,"
> > +					    " but tx_pkt_seg_lengths[0] %u < %u (needed)\n",
> 
> Likewise.
> 
> > +					    tx_pkt_seg_lengths[0], pkt_hdr_len);
> > +				return -EINVAL;
> > +			}
> > +		} else {
> > +			uint16_t total = 0;
> > +			uint8_t i;
> > +
> > +			for (i = 0; i < tx_pkt_nb_segs; i++) {
> > +				total += tx_pkt_seg_lengths[i];
> > +				if (total >= pkt_hdr_len)
> > +					break;
> > +			}
> > +
> > +			if (total < pkt_hdr_len) {
> > +				TESTPMD_LOG(ERR,
> > +					    "No enough Tx segment space for time stamp
> info."
> > +					    " total %u < %u (needed)\n",
> 
> Likewise.
> 
> > +					    total, pkt_hdr_len);
> > +				return -EINVAL;
> > +			}
> > +		}
> >   		timestamp_init_req++;
> > +	}
> > +
> >   	/* Make sure all settings are visible on forwarding cores.*/
> >   	rte_wmb();
> >   	return 0;
> >
> 
> --
> Ivan M

^ permalink raw reply	[flat|nested] 24+ messages in thread

* [dpdk-dev] [PATCH v6 1/2] app/testpmd: update forward engine beginning
  2021-09-23  1:49       ` [dpdk-dev] [PATCH v5 " Alvin Zhang
  2021-09-23  1:49         ` [dpdk-dev] [PATCH v5 2/2] app/testpmd: fix txonly forwording Alvin Zhang
@ 2021-09-23  8:01         ` Alvin Zhang
  2021-09-23  8:01           ` [dpdk-dev] [PATCH v6 2/2] app/testpmd: fix txonly forwarding Alvin Zhang
  2021-10-08 17:01           ` [dpdk-dev] [dpdk-stable] [PATCH v6 1/2] app/testpmd: update forward engine beginning Ferruh Yigit
  1 sibling, 2 replies; 24+ messages in thread
From: Alvin Zhang @ 2021-09-23  8:01 UTC (permalink / raw)
  To: xiaoyun.li, konstantin.ananyev; +Cc: dev, Alvin Zhang, stable

For each forward engine, there may be special conditions that must
be met before the forwarding runs.

Adding checks for these conditions at configuration time is not
suitable, because one condition may rely on multiple configuration
items, and the conditions required by each forward engine are not
general.

The best solution is for each forward engine to have a callback that
checks whether these conditions are met; testpmd can then call the
callback to determine whether the forwarding can be started.

There was a void callback 'port_fwd_begin' in the forward engine that
did some initialization for forwarding. This patch updates its return
type so that checks can be added to it to confirm whether the
forwarding can be started. In addition, this patch calls the callback
before the forwarding stats are reset and the forwarding engine is
launched.

Bugzilla ID: 797
Cc: stable@dpdk.org

Signed-off-by: Alvin Zhang <alvinx.zhang@intel.com>
Acked-by: Xiaoyun Li <xiaoyun.li@intel.com>
---
 app/test-pmd/flowgen.c     |  3 ++-
 app/test-pmd/ieee1588fwd.c |  3 ++-
 app/test-pmd/noisy_vnf.c   |  4 +++-
 app/test-pmd/testpmd.c     | 38 ++++++++++++++++++++++++++------------
 app/test-pmd/testpmd.h     |  2 +-
 app/test-pmd/txonly.c      |  3 ++-
 6 files changed, 36 insertions(+), 17 deletions(-)

diff --git a/app/test-pmd/flowgen.c b/app/test-pmd/flowgen.c
index 0d3664a..83234fc 100644
--- a/app/test-pmd/flowgen.c
+++ b/app/test-pmd/flowgen.c
@@ -201,10 +201,11 @@
 	get_end_cycles(fs, start_tsc);
 }
 
-static void
+static int
 flowgen_begin(portid_t pi)
 {
 	printf("  number of flows for port %u: %d\n", pi, nb_flows_flowgen);
+	return 0;
 }
 
 struct fwd_engine flow_gen_engine = {
diff --git a/app/test-pmd/ieee1588fwd.c b/app/test-pmd/ieee1588fwd.c
index 034f238..81624a7 100644
--- a/app/test-pmd/ieee1588fwd.c
+++ b/app/test-pmd/ieee1588fwd.c
@@ -198,10 +198,11 @@ struct ptpv2_msg {
 	port_ieee1588_tx_timestamp_check(fs->rx_port);
 }
 
-static void
+static int
 port_ieee1588_fwd_begin(portid_t pi)
 {
 	rte_eth_timesync_enable(pi);
+	return 0;
 }
 
 static void
diff --git a/app/test-pmd/noisy_vnf.c b/app/test-pmd/noisy_vnf.c
index 382a4c2..e4434be 100644
--- a/app/test-pmd/noisy_vnf.c
+++ b/app/test-pmd/noisy_vnf.c
@@ -231,7 +231,7 @@ struct noisy_config {
 	rte_free(noisy_cfg[pi]);
 }
 
-static void
+static int
 noisy_fwd_begin(portid_t pi)
 {
 	struct noisy_config *n;
@@ -273,6 +273,8 @@ struct noisy_config {
 		rte_exit(EXIT_FAILURE,
 			 "--noisy-lkup-memory-size must be > 0\n");
 	}
+
+	return 0;
 }
 
 struct fwd_engine noisy_vnf_engine = {
diff --git a/app/test-pmd/testpmd.c b/app/test-pmd/testpmd.c
index 97ae52e..0345b2e 100644
--- a/app/test-pmd/testpmd.c
+++ b/app/test-pmd/testpmd.c
@@ -2172,16 +2172,10 @@ struct extmem_param {
 static void
 launch_packet_forwarding(lcore_function_t *pkt_fwd_on_lcore)
 {
-	port_fwd_begin_t port_fwd_begin;
 	unsigned int i;
 	unsigned int lc_id;
 	int diag;
 
-	port_fwd_begin = cur_fwd_config.fwd_eng->port_fwd_begin;
-	if (port_fwd_begin != NULL) {
-		for (i = 0; i < cur_fwd_config.nb_fwd_ports; i++)
-			(*port_fwd_begin)(fwd_ports_ids[i]);
-	}
 	for (i = 0; i < cur_fwd_config.nb_fwd_lcores; i++) {
 		lc_id = fwd_lcores_cpuids[i];
 		if ((interactive == 0) || (lc_id != rte_lcore_id())) {
@@ -2227,10 +2221,35 @@ struct extmem_param {
 		fprintf(stderr, "Packet forwarding already started\n");
 		return;
 	}
-	test_done = 0;
 
 	fwd_config_setup();
 
+	port_fwd_begin = cur_fwd_config.fwd_eng->port_fwd_begin;
+	if (port_fwd_begin != NULL) {
+		for (i = 0; i < cur_fwd_config.nb_fwd_ports; i++) {
+			if (port_fwd_begin(fwd_ports_ids[i])) {
+				fprintf(stderr,
+					"Packet forwarding not ready\n");
+				return;
+			}
+		}
+	}
+
+	if (with_tx_first) {
+		port_fwd_begin = tx_only_engine.port_fwd_begin;
+		if (port_fwd_begin != NULL) {
+			for (i = 0; i < cur_fwd_config.nb_fwd_ports; i++) {
+				if (port_fwd_begin(fwd_ports_ids[i])) {
+					fprintf(stderr,
+						"Packet forwarding not ready\n");
+					return;
+				}
+			}
+		}
+	}
+
+	test_done = 0;
+
 	if(!no_flush_rx)
 		flush_fwd_rx_queues();
 
@@ -2239,11 +2258,6 @@ struct extmem_param {
 
 	fwd_stats_reset();
 	if (with_tx_first) {
-		port_fwd_begin = tx_only_engine.port_fwd_begin;
-		if (port_fwd_begin != NULL) {
-			for (i = 0; i < cur_fwd_config.nb_fwd_ports; i++)
-				(*port_fwd_begin)(fwd_ports_ids[i]);
-		}
 		while (with_tx_first--) {
 			launch_packet_forwarding(
 					run_one_txonly_burst_on_core);
diff --git a/app/test-pmd/testpmd.h b/app/test-pmd/testpmd.h
index 5863b2f..e9d9db0 100644
--- a/app/test-pmd/testpmd.h
+++ b/app/test-pmd/testpmd.h
@@ -268,7 +268,7 @@ struct fwd_lcore {
  *     Forwards packets unchanged on the same port.
  *     Check that sent IEEE1588 PTP packets are timestamped by the hardware.
  */
-typedef void (*port_fwd_begin_t)(portid_t pi);
+typedef int (*port_fwd_begin_t)(portid_t pi);
 typedef void (*port_fwd_end_t)(portid_t pi);
 typedef void (*packet_fwd_t)(struct fwd_stream *fs);
 
diff --git a/app/test-pmd/txonly.c b/app/test-pmd/txonly.c
index aed820f..386a4ff 100644
--- a/app/test-pmd/txonly.c
+++ b/app/test-pmd/txonly.c
@@ -435,7 +435,7 @@
 	get_end_cycles(fs, start_tsc);
 }
 
-static void
+static int
 tx_only_begin(portid_t pi)
 {
 	uint16_t pkt_data_len;
@@ -467,6 +467,7 @@
 		timestamp_init_req++;
 	/* Make sure all settings are visible on forwarding cores.*/
 	rte_wmb();
+	return 0;
 }
 
 struct fwd_engine tx_only_engine = {
-- 
1.8.3.1


^ permalink raw reply	[flat|nested] 24+ messages in thread
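
To make the shape of this change easier to see outside the diff, here
is a self-contained sketch of the begin-callback gating pattern the
patch introduces. The engine table and names are simplified stand-ins
for the testpmd definitions, not the real ones.

#include <stdint.h>
#include <stdio.h>

typedef uint16_t portid_t;
typedef int (*port_fwd_begin_t)(portid_t pi);

struct fwd_engine {
	const char *fwd_mode_name;
	port_fwd_begin_t port_fwd_begin;	/* may be NULL */
};

/* A begin callback returns non-zero to veto the start, as
 * tx_only_begin() does after this series. */
static int
demo_begin(portid_t pi)
{
	printf("checking port %u\n", (unsigned int)pi);
	return 0;
}

static struct fwd_engine demo_engine = { "demo", demo_begin };

/* Gate the forwarding start on every port's begin callback before any
 * stats are reset or lcores are launched, mirroring the reordering
 * done in start_packet_forwarding(). */
static int
start_forwarding(const portid_t *ports, unsigned int nb_ports)
{
	unsigned int i;

	if (demo_engine.port_fwd_begin != NULL) {
		for (i = 0; i < nb_ports; i++) {
			if (demo_engine.port_fwd_begin(ports[i]) != 0) {
				fprintf(stderr,
					"Packet forwarding not ready\n");
				return -1;
			}
		}
	}
	/* ... reset stats and launch forwarding lcores here ... */
	return 0;
}

int
main(void)
{
	const portid_t ports[2] = { 0, 1 };

	return start_forwarding(ports, 2) == 0 ? 0 : 1;
}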

* [dpdk-dev] [PATCH v6 2/2] app/testpmd: fix txonly forwarding
  2021-09-23  8:01         ` [dpdk-dev] [PATCH v6 1/2] app/testpmd: update forward engine beginning Alvin Zhang
@ 2021-09-23  8:01           ` Alvin Zhang
  2021-10-08 17:01           ` [dpdk-dev] [dpdk-stable] [PATCH v6 1/2] app/testpmd: update forward engine beginning Ferruh Yigit
  1 sibling, 0 replies; 24+ messages in thread
From: Alvin Zhang @ 2021-09-23  8:01 UTC (permalink / raw)
  To: xiaoyun.li, konstantin.ananyev; +Cc: dev, Alvin Zhang, stable

When a random number of Tx segments is enabled, the actual number of
segments may be only one, so the first segment of the Tx packets must
accommodate a complete outgoing Eth/IP/UDP packet.

Besides, if multiple flows are enabled, the forwarding updates the IP
and UDP headers, and these headers must not cross segment boundaries.
This also requires that the first Tx segment accommodate a complete
Eth/IP/UDP packet.

In addition, if the time stamp is enabled, the forwarding needs
additional Tx segment space for the timestamp information.

This patch adds checks at the beginning of the forward engine to make
sure all the above conditions are met.

Bugzilla ID: 797
Fixes: 79bec05b32b7 ("app/testpmd: add ability to split outgoing packets")
Cc: stable@dpdk.org

Signed-off-by: Alvin Zhang <alvinx.zhang@intel.com>
Acked-by: Xiaoyun Li <xiaoyun.li@intel.com>
---

v5: fixes a compilation issue
v6: update the commit summary and some logs
---
 app/test-pmd/txonly.c | 64 +++++++++++++++++++++++++++++++++++++++++----------
 1 file changed, 52 insertions(+), 12 deletions(-)

diff --git a/app/test-pmd/txonly.c b/app/test-pmd/txonly.c
index 386a4ff..8623239 100644
--- a/app/test-pmd/txonly.c
+++ b/app/test-pmd/txonly.c
@@ -40,6 +40,13 @@
 
 #include "testpmd.h"
 
+struct tx_timestamp {
+	rte_be32_t signature;
+	rte_be16_t pkt_idx;
+	rte_be16_t queue_idx;
+	rte_be64_t ts;
+};
+
 /* use RFC863 Discard Protocol */
 uint16_t tx_udp_src_port = 9;
 uint16_t tx_udp_dst_port = 9;
@@ -257,12 +264,7 @@
 
 	if (unlikely(timestamp_enable)) {
 		uint64_t skew = RTE_PER_LCORE(timestamp_qskew);
-		struct {
-			rte_be32_t signature;
-			rte_be16_t pkt_idx;
-			rte_be16_t queue_idx;
-			rte_be64_t ts;
-		} timestamp_mark;
+		struct tx_timestamp timestamp_mark;
 
 		if (unlikely(timestamp_init_req !=
 				RTE_PER_LCORE(timestamp_idone))) {
@@ -438,13 +440,22 @@
 static int
 tx_only_begin(portid_t pi)
 {
-	uint16_t pkt_data_len;
+	uint16_t pkt_hdr_len, pkt_data_len;
 	int dynf;
 
-	pkt_data_len = (uint16_t) (tx_pkt_length - (
-					sizeof(struct rte_ether_hdr) +
-					sizeof(struct rte_ipv4_hdr) +
-					sizeof(struct rte_udp_hdr)));
+	pkt_hdr_len = (uint16_t)(sizeof(struct rte_ether_hdr) +
+				 sizeof(struct rte_ipv4_hdr) +
+				 sizeof(struct rte_udp_hdr));
+	pkt_data_len = tx_pkt_length - pkt_hdr_len;
+
+	if ((tx_pkt_split == TX_PKT_SPLIT_RND || txonly_multi_flow) &&
+	    tx_pkt_seg_lengths[0] < pkt_hdr_len) {
+		TESTPMD_LOG(ERR,
+			    "Random segment number or multiple flow enabled, but tx_pkt_seg_lengths[0] %u < %u (needed)\n",
+			    tx_pkt_seg_lengths[0], pkt_hdr_len);
+		return -EINVAL;
+	}
+
 	setup_pkt_udp_ip_headers(&pkt_ip_hdr, &pkt_udp_hdr, pkt_data_len);
 
 	timestamp_enable = false;
@@ -463,8 +474,37 @@
 			   timestamp_mask &&
 			   timestamp_off >= 0 &&
 			   !rte_eth_read_clock(pi, &timestamp_initial[pi]);
-	if (timestamp_enable)
+
+	if (timestamp_enable) {
+		pkt_hdr_len += sizeof(struct tx_timestamp);
+
+		if (tx_pkt_split == TX_PKT_SPLIT_RND) {
+			if (tx_pkt_seg_lengths[0] < pkt_hdr_len) {
+				TESTPMD_LOG(ERR,
+					    "Time stamp and random segment number enabled, but tx_pkt_seg_lengths[0] %u < %u (needed)\n",
+					    tx_pkt_seg_lengths[0], pkt_hdr_len);
+				return -EINVAL;
+			}
+		} else {
+			uint16_t total = 0;
+			uint8_t i;
+
+			for (i = 0; i < tx_pkt_nb_segs; i++) {
+				total += tx_pkt_seg_lengths[i];
+				if (total >= pkt_hdr_len)
+					break;
+			}
+
+			if (total < pkt_hdr_len) {
+				TESTPMD_LOG(ERR,
+					    "Not enough Tx segment space for time stamp info, total %u < %u (needed)\n",
+					    total, pkt_hdr_len);
+				return -EINVAL;
+			}
+		}
 		timestamp_init_req++;
+	}
+
 	/* Make sure all settings are visible on forwarding cores.*/
 	rte_wmb();
 	return 0;
-- 
1.8.3.1


^ permalink raw reply	[flat|nested] 24+ messages in thread
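
For completeness, the new check can be exercised from the testpmd
prompt. The segment sizes below are illustrative and assume
timestamping is off, so the required header space is the 42-byte
Eth/IPv4/UDP total:

  testpmd> set fwd txonly
  testpmd> set txpkts 40,24
  testpmd> set txsplit rand
  testpmd> start

With a 40-byte first segment, tx_only_begin() fails the
tx_pkt_seg_lengths[0] check and forwarding is not started; a layout
such as 'set txpkts 48,16', whose first segment covers the full 42
bytes, passes.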

* Re: [dpdk-dev] [dpdk-stable] [PATCH v6 1/2] app/testpmd: update forward engine beginning
  2021-09-23  8:01         ` [dpdk-dev] [PATCH v6 1/2] app/testpmd: update forward engine beginning Alvin Zhang
  2021-09-23  8:01           ` [dpdk-dev] [PATCH v6 2/2] app/testpmd: fix txonly forwarding Alvin Zhang
@ 2021-10-08 17:01           ` Ferruh Yigit
  1 sibling, 0 replies; 24+ messages in thread
From: Ferruh Yigit @ 2021-10-08 17:01 UTC (permalink / raw)
  To: Alvin Zhang, xiaoyun.li, konstantin.ananyev; +Cc: dev, stable

On 9/23/2021 9:01 AM, Alvin Zhang wrote:
> For each forward engine, there may be some special conditions
> must be met before the forwarding runs.
> 
> Adding checks for these conditions in configuring is not suitable,
> because one condition may rely on multiple configurations, and the
> conditions required by each forward engine is not general.
> 
> The best solution is each forward engine has a callback to check
> whether these conditions are met, and then testpmd can call the
> callback to determine whether the forwarding can be started.
> 
> There was a void callback 'port_fwd_begin' in forward engine,
> it did some initialization for forwarding, this patch updates its
> return value then we can add some checks in it to confirm whether
> the forwarding can be started. In addition, this patch calls the
> callback before the forwarding stats is reset and then launches the
> forwarding engine.
> 
> Bugzilla ID: 797
> Cc: stable@dpdk.org
> 
> Signed-off-by: Alvin Zhang <alvinx.zhang@intel.com>
> Acked-by: Xiaoyun Li <xiaoyun.li@intel.com>

Series applied to dpdk-next-net/main, thanks.

Updated log messages slightly while merging.

^ permalink raw reply	[flat|nested] 24+ messages in thread

end of thread, other threads:[~2021-10-08 17:01 UTC | newest]

Thread overview: 24+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2021-09-02  8:20 [dpdk-dev] [PATCH] app/testpmd: fix random number of Tx segments Alvin Zhang
2021-09-06  8:58 ` Li, Xiaoyun
2021-09-06 10:03   ` Zhang, AlvinX
2021-09-06 10:54     ` Li, Xiaoyun
2021-09-07  2:25       ` Zhang, AlvinX
2021-09-07  8:05         ` Li, Xiaoyun
2021-09-17  1:39 ` [dpdk-dev] [PATCH v2 1/2] app/testpmd: update forward engine beginning Alvin Zhang
2021-09-17  1:39   ` [dpdk-dev] [PATCH v2 2/2] app/testpmd: fix txonly forwording Alvin Zhang
2021-09-18  3:06   ` [dpdk-dev] [PATCH v3 1/2] app/testpmd: update forward engine beginning Alvin Zhang
2021-09-18  3:06     ` [dpdk-dev] [PATCH v3 2/2] app/testpmd: fix txonly forwording Alvin Zhang
2021-09-18  8:20       ` Li, Xiaoyun
2021-09-18  8:31     ` [dpdk-dev] [PATCH v3 1/2] app/testpmd: update forward engine beginning Li, Xiaoyun
2021-09-18  8:50       ` Zhang, AlvinX
2021-09-22  2:49     ` [dpdk-dev] [PATCH v4 " Alvin Zhang
2021-09-22  2:49       ` [dpdk-dev] [PATCH v4 2/2] app/testpmd: fix txonly forwording Alvin Zhang
2021-09-22  5:58         ` Li, Xiaoyun
2021-09-22  5:59       ` [dpdk-dev] [PATCH v4 1/2] app/testpmd: update forward engine beginning Li, Xiaoyun
2021-09-23  1:49       ` [dpdk-dev] [PATCH v5 " Alvin Zhang
2021-09-23  1:49         ` [dpdk-dev] [PATCH v5 2/2] app/testpmd: fix txonly forwording Alvin Zhang
2021-09-23  4:25           ` Ivan Malov
2021-09-23  5:11             ` Zhang, AlvinX
2021-09-23  8:01         ` [dpdk-dev] [PATCH v6 1/2] app/testpmd: update forward engine beginning Alvin Zhang
2021-09-23  8:01           ` [dpdk-dev] [PATCH v6 2/2] app/testpmd: fix txonly forwarding Alvin Zhang
2021-10-08 17:01           ` [dpdk-dev] [dpdk-stable] [PATCH v6 1/2] app/testpmd: update forward engine beginning Ferruh Yigit
