From mboxrd@z Thu Jan  1 00:00:00 1970
Return-Path: <dev-bounces@dpdk.org>
Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124])
	by inbox.dpdk.org (Postfix) with ESMTP id CA682A0546;
	Fri, 30 Apr 2021 15:49:56 +0200 (CEST)
Received: from [217.70.189.124] (localhost [127.0.0.1])
	by mails.dpdk.org (Postfix) with ESMTP id 3DA2E40693;
	Fri, 30 Apr 2021 15:49:56 +0200 (CEST)
Received: from mga02.intel.com (mga02.intel.com [134.134.136.20])
 by mails.dpdk.org (Postfix) with ESMTP id 093FD4013F;
 Fri, 30 Apr 2021 15:49:53 +0200 (CEST)
IronPort-SDR: bBrrCMdEWJI/YM6xEjVFGlqgmpZYhZh1vDvxjivgSJD8mgxg8wg3LW0Rn4SC0lTKN28eo4lACt
 gfyAo5GYf8Fg==
X-IronPort-AV: E=McAfee;i="6200,9189,9970"; a="184413444"
X-IronPort-AV: E=Sophos;i="5.82,262,1613462400"; d="scan'208";a="184413444"
Received: from orsmga002.jf.intel.com ([10.7.209.21])
 by orsmga101.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384;
 30 Apr 2021 06:49:52 -0700
IronPort-SDR: iEvQk5DW4MS0h/RVNBk7cWNKzWCrXDPQOhmsg7VkFG34ovGS2Qusrx9rj3Vd/dMcCBqSN3u3eH
 PLRrbKFf2T/g==
X-IronPort-AV: E=Sophos;i="5.82,262,1613462400"; d="scan'208";a="404607760"
Received: from fyigit-mobl1.ger.corp.intel.com (HELO [10.213.242.68])
 ([10.213.242.68])
 by orsmga002-auth.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384;
 30 Apr 2021 06:49:51 -0700
To: Gregory Etelson <getelson@nvidia.com>, dev@dpdk.org
Cc: matan@nvidia.com, rasland@nvidia.com, stable@dpdk.org,
 Viacheslav Ovsiienko <viacheslavo@nvidia.com>,
 Xiaoyun Li <xiaoyun.li@intel.com>
References: <20210419130204.24348-1-getelson@nvidia.com>
 <20210425155722.32477-1-getelson@nvidia.com>
 <20210425155722.32477-2-getelson@nvidia.com>
From: Ferruh Yigit <ferruh.yigit@intel.com>
X-User: ferruhy
Message-ID: <5aca893f-6bcc-643e-40c4-755c3a1077c0@intel.com>
Date: Fri, 30 Apr 2021 14:49:47 +0100
MIME-Version: 1.0
In-Reply-To: <20210425155722.32477-2-getelson@nvidia.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit
Subject: Re: [dpdk-dev] [PATCH v2 2/2] app/testpmd: fix tunnel offload
 private items location
X-BeenThere: dev@dpdk.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: DPDK patches and discussions <dev.dpdk.org>
List-Unsubscribe: <https://mails.dpdk.org/options/dev>,
 <mailto:dev-request@dpdk.org?subject=unsubscribe>
List-Archive: <http://mails.dpdk.org/archives/dev/>
List-Post: <mailto:dev@dpdk.org>
List-Help: <mailto:dev-request@dpdk.org?subject=help>
List-Subscribe: <https://mails.dpdk.org/listinfo/dev>,
 <mailto:dev-request@dpdk.org?subject=subscribe>
Errors-To: dev-bounces@dpdk.org
Sender: "dev" <dev-bounces@dpdk.org>

On 4/25/2021 4:57 PM, Gregory Etelson wrote:
> The tunnel offload API requires the application to query the PMD for
> specific flow items and actions. The application uses these
> PMD-specific elements to build flow rules according to the tunnel
> offload model.

Can you please give some examples of the "PMD specific elements" that the
application is required to query? That would help to understand the issue
better.
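
For reference, my understanding of the query flow on the application side
is roughly the sketch below (untested; the VXLAN tunnel setup and the
function name are made up for illustration, error handling is trimmed):

#include <stdint.h>
#include <rte_flow.h>

/* Sketch: query the PMD-private tunnel offload elements for a port. */
static int
query_pmd_tunnel_elements(uint16_t port_id)
{
	struct rte_flow_tunnel tunnel = {
		.type = RTE_FLOW_ITEM_TYPE_VXLAN, /* example tunnel type */
	};
	struct rte_flow_action *pmd_actions;
	struct rte_flow_item *pmd_items;
	uint32_t num_pmd_actions, num_pmd_items;
	struct rte_flow_error error;

	/* PMD returns the private actions for the tunnel "set" rule. */
	if (rte_flow_tunnel_decap_set(port_id, &tunnel, &pmd_actions,
				      &num_pmd_actions, &error))
		return -1;

	/* PMD returns the private items for the tunnel "match" rule. */
	if (rte_flow_tunnel_match(port_id, &tunnel, &pmd_items,
				  &num_pmd_items, &error)) {
		rte_flow_tunnel_action_decap_release(port_id, pmd_actions,
						     num_pmd_actions, &error);
		return -1;
	}

	/*
	 * The application merges pmd_actions/pmd_items with its own
	 * actions/pattern, calls rte_flow_create(), and finally releases
	 * the PMD copies.
	 */
	rte_flow_tunnel_action_decap_release(port_id, pmd_actions,
					     num_pmd_actions, &error);
	rte_flow_tunnel_item_release(port_id, pmd_items,
				     num_pmd_items, &error);
	return 0;
}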

> The model does not restrict the location of the private elements in a
> flow rule, but the current MLX5 PMD implementation expects that a
> tunnel offload rule begins with the PMD-specific elements.

Why do we need to refer to the mlx5 PMD implementation in a testpmd patch? Is
this patch trying to align testpmd with the mlx5 implementation?

> The patch places the tunnel offload private PMD flow elements between
> the general RTE flow elements in a rule.
> 

Why?

Overall, what was the problem, what was failing, and what was its impact?

And how does changing the location of the private elements in the flow rule
solve the issue?
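
If I read the patch correctly, for an application pattern like
"eth / ipv4 / udp / vxlan / end" the merged item array changes from:

    { pmd_item(s), ETH, IPV4, UDP, VXLAN, END }

to:

    { VOID, pmd_item(s), ETH, IPV4, UDP, VXLAN, END }

(and likewise for the actions array), so the PMD-private elements no longer
sit at index 0. Is the intent to exercise the "no location restriction" part
of the offload model?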

> Cc: stable@dpdk.org
> Fixes: 1b9f274623b8 ("app/testpmd: add commands for tunnel offload")
> 
> Signed-off-by: Gregory Etelson <getelson@nvidia.com>
> Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
> ---
>  app/test-pmd/config.c | 14 ++++++++------
>  1 file changed, 8 insertions(+), 6 deletions(-)
> 
> diff --git a/app/test-pmd/config.c b/app/test-pmd/config.c
> index 40b2b29725..1520b8193f 100644
> --- a/app/test-pmd/config.c
> +++ b/app/test-pmd/config.c
> @@ -1664,7 +1664,7 @@ port_flow_tunnel_offload_cmd_prep(portid_t port_id,
>  		     aptr->type != RTE_FLOW_ACTION_TYPE_END;
>  		     aptr++, num_actions++);
>  		pft->actions = malloc(
> -				(num_actions +  pft->num_pmd_actions) *
> +				(num_actions +  pft->num_pmd_actions + 1) *
>  				sizeof(actions[0]));
>  		if (!pft->actions) {
>  			rte_flow_tunnel_action_decap_release(
> @@ -1672,9 +1672,10 @@ port_flow_tunnel_offload_cmd_prep(portid_t port_id,
>  					pft->num_pmd_actions, &error);
>  			return NULL;
>  		}
> -		rte_memcpy(pft->actions, pft->pmd_actions,
> +		pft->actions[0].type = RTE_FLOW_ACTION_TYPE_VOID;
> +		rte_memcpy(pft->actions + 1, pft->pmd_actions,
>  			   pft->num_pmd_actions * sizeof(actions[0]));
> -		rte_memcpy(pft->actions + pft->num_pmd_actions, actions,
> +		rte_memcpy(pft->actions + pft->num_pmd_actions + 1, actions,
>  			   num_actions * sizeof(actions[0]));
>  	}
>  	if (tunnel_ops->items) {
> @@ -1692,7 +1693,7 @@ port_flow_tunnel_offload_cmd_prep(portid_t port_id,
>  		for (iptr = pattern, num_items = 1;
>  		     iptr->type != RTE_FLOW_ITEM_TYPE_END;
>  		     iptr++, num_items++);
> -		pft->items = malloc((num_items + pft->num_pmd_items) *
> +		pft->items = malloc((num_items + pft->num_pmd_items + 1) *
>  				    sizeof(pattern[0]));
>  		if (!pft->items) {
>  			rte_flow_tunnel_item_release(
> @@ -1700,9 +1701,10 @@ port_flow_tunnel_offload_cmd_prep(portid_t port_id,
>  					pft->num_pmd_items, &error);
>  			return NULL;
>  		}
> -		rte_memcpy(pft->items, pft->pmd_items,
> +		pft->items[0].type = RTE_FLOW_ITEM_TYPE_VOID;
> +		rte_memcpy(pft->items + 1, pft->pmd_items,
>  			   pft->num_pmd_items * sizeof(pattern[0]));
> -		rte_memcpy(pft->items + pft->num_pmd_items, pattern,
> +		rte_memcpy(pft->items + pft->num_pmd_items + 1, pattern,
>  			   num_items * sizeof(pattern[0]));
>  	}
>  
>