DPDK patches and discussions
From: "Zhang, Qi Z" <qi.z.zhang@intel.com>
To: "Sun, Chenmin" <chenmin.sun@intel.com>,
	"Xing, Beilei" <beilei.xing@intel.com>,
	"Wu, Jingjing" <jingjing.wu@intel.com>,
	"Wang, Haiyue" <haiyue.wang@intel.com>
Cc: "dev@dpdk.org" <dev@dpdk.org>
Subject: Re: [dpdk-dev] [PATCH V2] net/i40e: i40e FDIR update rate optimization
Date: Fri, 10 Jul 2020 02:50:30 +0000	[thread overview]
Message-ID: <039ED4275CED7440929022BC67E706115485A08F@SHSMSX103.ccr.corp.intel.com> (raw)
In-Reply-To: <20200709143932.35806-1-chenmin.sun@intel.com>



> -----Original Message-----
> From: Sun, Chenmin <chenmin.sun@intel.com>
> Sent: Thursday, July 9, 2020 10:40 PM
> To: Zhang, Qi Z <qi.z.zhang@intel.com>; Xing, Beilei <beilei.xing@intel.com>;
> Wu, Jingjing <jingjing.wu@intel.com>; Wang, Haiyue
> <haiyue.wang@intel.com>
> Cc: dev@dpdk.org; Sun, Chenmin <chenmin.sun@intel.com>
> Subject: [PATCH V2] net/i40e: i40e FDIR update rate optimization
> 
> From: Chenmin Sun <chenmin.sun@intel.com>
> 
> This patch optimizes the FDIR update rate for the i40e PF by tracking whether
> the fdir rule is inserted into the guaranteed space or the shared space.
> For the flows that are inserted into the guaranteed space, we assume that the
> insertion will always succeed, as the hardware only reports the "not enough
> space left" error. In this case, the software can directly return success and
> does not need to retrieve the result from the hardware. See the fdir
> programming status descriptor format in the datasheet for more details.
> 
> This patch changes the global register GLQF_CTL. Therefore, when devarg
> ``support-multi-driver`` is set, the patch will not take effect to avoid affecting
> the normal behavior of other i40e drivers, e.g., Linux kernel driver.

Overall I think the patch is too big; is it possible to separate it into two or more patches?

For example:
1) you introduce some new data structures to track the flows;
2) the optimization for flow programming.

More comments inline.
> 
> Signed-off-by: Chenmin Sun <chenmin.sun@intel.com>
> ---
> 
> v2:
> * Refine commit message and code comments.
> * Refine code style.
> * Fixed several memory free bugs.
> * Replace the bin_serch() with rte_bsf64()
> ---
>  drivers/net/i40e/i40e_ethdev.c  | 136 ++++++++++++++++++++++-
> drivers/net/i40e/i40e_ethdev.h  |  63 ++++++++---
>  drivers/net/i40e/i40e_fdir.c    | 190 +++++++++++++++++++++-----------
>  drivers/net/i40e/i40e_flow.c    | 167 ++++++++++++++++++++++------
>  drivers/net/i40e/i40e_rxtx.c    |  15 ++-
>  drivers/net/i40e/rte_pmd_i40e.c |   2 +-
>  6 files changed, 455 insertions(+), 118 deletions(-)
> 
> diff --git a/drivers/net/i40e/i40e_ethdev.c b/drivers/net/i40e/i40e_ethdev.c
> index 3bc312c11..099f4c5e3 100644
> --- a/drivers/net/i40e/i40e_ethdev.c
> +++ b/drivers/net/i40e/i40e_ethdev.c
> @@ -26,6 +26,7 @@
>  #include <rte_dev.h>
>  #include <rte_tailq.h>
>  #include <rte_hash_crc.h>
> +#include <rte_bitmap.h>
> 
>  #include "i40e_logs.h"
>  #include "base/i40e_prototype.h"
> @@ -1045,8 +1046,17 @@ static int
>  i40e_init_fdir_filter_list(struct rte_eth_dev *dev)  {
>  	struct i40e_pf *pf =
> I40E_DEV_PRIVATE_TO_PF(dev->data->dev_private);
> +	struct i40e_hw *hw = I40E_PF_TO_HW(pf);
>  	struct i40e_fdir_info *fdir_info = &pf->fdir;
>  	char fdir_hash_name[RTE_HASH_NAMESIZE];
> +	void *mem = NULL;
> +	uint32_t i = 0;
> +	uint32_t bmp_size;
> +	uint32_t alloc = 0;
> +	uint32_t best = 0;
> +	uint32_t pfqf_fdalloc = 0;
> +	uint32_t glqf_ctl_reg = 0;
> +	struct rte_bitmap *bmp = NULL;
>  	int ret;

It's better to follow RCT (reverse Christmas tree) ordering for the local variable declarations, and please apply this to the other functions as well.
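
For reference, a sketch of what the declaration block above could look like in RCT order (longest declaration first), reusing the declarations from the patch:

	/* Sketch only: declarations from i40e_init_fdir_filter_list() in RCT order. */
	struct i40e_pf *pf = I40E_DEV_PRIVATE_TO_PF(dev->data->dev_private);
	struct i40e_fdir_info *fdir_info = &pf->fdir;
	struct i40e_hw *hw = I40E_PF_TO_HW(pf);
	char fdir_hash_name[RTE_HASH_NAMESIZE];
	struct rte_bitmap *bmp = NULL;
	uint32_t pfqf_fdalloc = 0;
	uint32_t glqf_ctl_reg = 0;
	void *mem = NULL;
	uint32_t bmp_size;
	uint32_t alloc = 0;
	uint32_t best = 0;
	uint32_t i = 0;
	int ret;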

> 
>  	struct rte_hash_parameters fdir_hash_params = { @@ -1067,6 +1077,7
> @@ i40e_init_fdir_filter_list(struct rte_eth_dev *dev)
>  		PMD_INIT_LOG(ERR, "Failed to create fdir hash table!");
>  		return -EINVAL;
>  	}
> +
>  	fdir_info->hash_map = rte_zmalloc("i40e_fdir_hash_map",
>  					  sizeof(struct i40e_fdir_filter *) *
>  					  I40E_MAX_FDIR_FILTER_NUM,
> @@ -1077,8 +1088,100 @@ i40e_init_fdir_filter_list(struct rte_eth_dev
> *dev)
>  		ret = -ENOMEM;
>  		goto err_fdir_hash_map_alloc;
>  	}
> +
> +	fdir_info->fdir_filter_array = rte_zmalloc("fdir_filter",
> +			sizeof(struct i40e_fdir_filter) *
> +			I40E_MAX_FDIR_FILTER_NUM,
> +			0);
> +
> +	if (!fdir_info->fdir_filter_array) {
> +		PMD_INIT_LOG(ERR,
> +			     "Failed to allocate memory for fdir filter array!");
> +		ret = -ENOMEM;
> +		goto err_fdir_filter_array_alloc;
> +	}
> +
> +	pfqf_fdalloc = i40e_read_rx_ctl(hw, I40E_PFQF_FDALLOC);
> +	alloc = ((pfqf_fdalloc & I40E_PFQF_FDALLOC_FDALLOC_MASK) >>
> +			I40E_PFQF_FDALLOC_FDALLOC_SHIFT);
> +	best = ((pfqf_fdalloc & I40E_PFQF_FDALLOC_FDBEST_MASK) >>
> +			I40E_PFQF_FDALLOC_FDBEST_SHIFT);
> +
> +	glqf_ctl_reg = i40e_read_rx_ctl(hw, I40E_GLQF_CTL);
> +	if (!pf->support_multi_driver) {
> +		fdir_info->fdir_invalprio = 1;
> +		glqf_ctl_reg |= I40E_GLQF_CTL_INVALPRIO_MASK;
> +		PMD_DRV_LOG(INFO, "FDIR INVALPRIO set to guaranteed first");
> +	} else {
> +		if (glqf_ctl_reg | I40E_GLQF_CTL_INVALPRIO_MASK) {
> +			fdir_info->fdir_invalprio = 1;
> +			PMD_DRV_LOG(INFO, "FDIR INVALPRIO is: guaranteed first");
> +		} else {
> +			fdir_info->fdir_invalprio = 0;
> +			PMD_DRV_LOG(INFO, "FDIR INVALPRIO is: shared first");
> +		}
> +	}
> +
> +	i40e_write_rx_ctl(hw, I40E_GLQF_CTL, glqf_ctl_reg);
> +	PMD_DRV_LOG(INFO, "FDIR guarantee space: %u, best_effort
> space %u.",
> +		alloc * 32, best * 32);

I think the *32 can be applied when you assign alloc and best.
Also, it's better to replace it with a macro that has a meaningful name, and to use a bit shift (<<) rather than a multiplication.
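
For example, something along these lines (I40E_FDIR_FLOWS_PER_UNIT_SHIFT is only a placeholder name, not an existing macro):

/* Placeholder macro: each PFQF_FDALLOC unit represents 32 filters (1 << 5). */
#define I40E_FDIR_FLOWS_PER_UNIT_SHIFT	5

	alloc = ((pfqf_fdalloc & I40E_PFQF_FDALLOC_FDALLOC_MASK) >>
			I40E_PFQF_FDALLOC_FDALLOC_SHIFT) <<
			I40E_FDIR_FLOWS_PER_UNIT_SHIFT;
	best = ((pfqf_fdalloc & I40E_PFQF_FDALLOC_FDBEST_MASK) >>
			I40E_PFQF_FDALLOC_FDBEST_SHIFT) <<
			I40E_FDIR_FLOWS_PER_UNIT_SHIFT;

	/* alloc and best are now already in number-of-filters units. */
	fdir_info->fdir_space_size = alloc + best;
	fdir_info->fdir_guarantee_available_space = alloc;
	PMD_DRV_LOG(INFO, "FDIR guarantee space: %u, best_effort space %u.",
		    alloc, best);
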
> +
> +	fdir_info->fdir_space_size = (alloc + best) * 32;
> +	fdir_info->fdir_actual_cnt = 0;
> +	fdir_info->fdir_guarantee_available_space = alloc * 32;
> +	fdir_info->fdir_guarantee_free_space =
> +		fdir_info->fdir_guarantee_available_space;
> +
> +	fdir_info->fdir_flow_bitmap.fdir_flow =
> +			rte_zmalloc("i40e_fdir_flows",
> +				sizeof(struct i40e_fdir_flows) *
> +				fdir_info->fdir_space_size,
> +				0);
> +
> +	if (!fdir_info->fdir_flow_bitmap.fdir_flow) {
> +		PMD_INIT_LOG(ERR,
> +			     "Failed to allocate memory for bitmap flow!");
> +		ret = -ENOMEM;
> +		goto err_fdir_bitmap_flow_alloc;
> +	}
> +
> +	for (i = 0; i < fdir_info->fdir_space_size; i++)
> +		fdir_info->fdir_flow_bitmap.fdir_flow[i].idx = i;
> +
> +	bmp_size =
> +		rte_bitmap_get_memory_footprint(fdir_info->fdir_space_size);
> +
> +	mem = rte_zmalloc("fdir_bmap", bmp_size, RTE_CACHE_LINE_SIZE);
> +	if (mem == NULL) {
> +		PMD_INIT_LOG(ERR,
> +			     "Failed to allocate memory for fdir bitmap!");
> +		ret = -ENOMEM;
> +		goto err_fdir_mem_alloc;
> +	}
> +
> +	bmp = rte_bitmap_init(fdir_info->fdir_space_size, mem, bmp_size);
> +	if (bmp == NULL) {
> +		PMD_INIT_LOG(ERR,
> +			     "Failed to initialization fdir bitmap!");
> +		ret = -ENOMEM;
> +		goto err_fdir_bmp_alloc;
> +	}
> +
> +	for (i = 0; i < fdir_info->fdir_space_size; i++)
> +		rte_bitmap_set(bmp, i);
> +
> +	fdir_info->fdir_flow_bitmap.b = bmp;
> +
>  	return 0;
> 
> +err_fdir_bmp_alloc:
> +	rte_free(mem);
> +err_fdir_mem_alloc:
> +	rte_free(fdir_info->fdir_flow_bitmap.fdir_flow);
> +err_fdir_bitmap_flow_alloc:
> +	rte_free(fdir_info->fdir_filter_array);
> +err_fdir_filter_array_alloc:
> +	rte_free(fdir_info->hash_map);
>  err_fdir_hash_map_alloc:
>  	rte_hash_free(fdir_info->hash_table);
> 
> @@ -1749,18 +1852,34 @@ i40e_rm_fdir_filter_list(struct i40e_pf *pf)
>  	struct i40e_fdir_info *fdir_info;
> 
>  	fdir_info = &pf->fdir;
> -	/* Remove all flow director rules and hash */
> +
> +	/* Remove all flow director rules */
> +	while ((p_fdir = TAILQ_FIRST(&fdir_info->fdir_list)))
> +		TAILQ_REMOVE(&fdir_info->fdir_list, p_fdir, rules);
> +}
> +
> +static void
> +i40e_fdir_memory_cleanup(struct i40e_pf *pf)
> +{
> +	struct i40e_fdir_info *fdir_info;
> +
> +	fdir_info = &pf->fdir;
> +
> +	/* flow director memory cleanup */
>  	if (fdir_info->hash_map)
>  		rte_free(fdir_info->hash_map);
>  	if (fdir_info->hash_table)
>  		rte_hash_free(fdir_info->hash_table);
> +	if (fdir_info->fdir_flow_bitmap.b)
> +		rte_bitmap_free(fdir_info->fdir_flow_bitmap.b);
> +	if (fdir_info->fdir_flow_bitmap.fdir_flow)
> +		rte_free(fdir_info->fdir_flow_bitmap.fdir_flow);
> +	if (fdir_info->fdir_filter_array)
> +		rte_free(fdir_info->fdir_filter_array);
> 
> -	while ((p_fdir = TAILQ_FIRST(&fdir_info->fdir_list))) {
> -		TAILQ_REMOVE(&fdir_info->fdir_list, p_fdir, rules);
> -		rte_free(p_fdir);
> -	}
>  }
> 
> +
>  void i40e_flex_payload_reg_set_default(struct i40e_hw *hw)  {
>  	/*
> @@ -2618,9 +2737,14 @@ i40e_dev_close(struct rte_eth_dev *dev)
>  	/* Remove all flows */
>  	while ((p_flow = TAILQ_FIRST(&pf->flow_list))) {
>  		TAILQ_REMOVE(&pf->flow_list, p_flow, node);
> -		rte_free(p_flow);
> +		/* Do not free FDIR flows since they are static allocated */
> +		if (p_flow->filter_type != RTE_ETH_FILTER_FDIR)
> +			rte_free(p_flow);
>  	}
> 
> +	/* release the fdir static allocated memory */
> +	i40e_fdir_memory_cleanup(pf);
> +
>  	/* Remove all Traffic Manager configuration */
>  	i40e_tm_conf_uninit(dev);
> 
> diff --git a/drivers/net/i40e/i40e_ethdev.h b/drivers/net/i40e/i40e_ethdev.h
> index 31ca05de9..5861c358b 100644
> --- a/drivers/net/i40e/i40e_ethdev.h
> +++ b/drivers/net/i40e/i40e_ethdev.h
> @@ -264,6 +264,15 @@ enum i40e_flxpld_layer_idx {
>  #define I40E_DEFAULT_DCB_APP_NUM    1
>  #define I40E_DEFAULT_DCB_APP_PRIO   3
> 
> +/*
> + * Struct to store flow created.
> + */
> +struct rte_flow {
> +	TAILQ_ENTRY(rte_flow) node;
> +	enum rte_filter_type filter_type;
> +	void *rule;
> +};
> +
>  /**
>   * The overhead from MTU to max frame size.
>   * Considering QinQ packet, the VLAN tag needs to be counted twice.
> @@ -674,17 +683,33 @@ struct i40e_fdir_filter {
>  	struct i40e_fdir_filter_conf fdir;
>  };
> 
> +struct i40e_fdir_flows {

Why is it named "flows"?
Could you add more comments here describing the purpose of the data structure and of each field?

> +	uint32_t idx;
> +	struct rte_flow flow;

Not sure if it's better to move "flow" to the first field, so we can convert between an rte_flow pointer and an i40e_fdir_flows pointer directly.
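
i.e. with "flow" as the first member the conversion becomes a plain cast; a sketch of the idea (the helper name is hypothetical):

struct i40e_fdir_flows {
	struct rte_flow flow;	/* must stay the first member for the cast below */
	uint32_t idx;
};

static inline struct i40e_fdir_flows *
i40e_flow_to_fdir_flows(struct rte_flow *flow)
{
	/* Valid only while "flow" remains the first member. */
	return (struct i40e_fdir_flows *)flow;
}
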
> +};
> +
> +struct i40e_fdir_flow_bitmap {
> +	struct rte_bitmap *b;
> +	struct i40e_fdir_flows *fdir_flow;
> +};

Can we just add rte_bitmap into i40e_fdir_flow?

> +
> +#define FLOW_TO_FLOW_BITMAP(f) \
> +	container_of((f), struct i40e_fdir_flows, flow)
> +
>  TAILQ_HEAD(i40e_fdir_filter_list, i40e_fdir_filter);
>  /*
>   *  A structure used to define fields of a FDIR related info.
>   */
>  struct i40e_fdir_info {
> +#define PRG_PKT_CNT	128
> +
>  	struct i40e_vsi *fdir_vsi;     /* pointer to fdir VSI structure */
>  	uint16_t match_counter_index;  /* Statistic counter index used for
> fdir*/
>  	struct i40e_tx_queue *txq;
>  	struct i40e_rx_queue *rxq;
> -	void *prg_pkt;                 /* memory for fdir program packet */
> -	uint64_t dma_addr;             /* physic address of packet
> memory*/
> +	void *prg_pkt[PRG_PKT_CNT];     /* memory for fdir program packet
> */
> +	uint64_t dma_addr[PRG_PKT_CNT];	/* physic address of packet
> memory*/
> +
>  	/* input set bits for each pctype */
>  	uint64_t input_set[I40E_FILTER_PCTYPE_MAX];
>  	/*
> @@ -698,6 +723,27 @@ struct i40e_fdir_info {
>  	struct i40e_fdir_filter **hash_map;
>  	struct rte_hash *hash_table;
> 
> +	struct i40e_fdir_filter *fdir_filter_array;
> +
> +	/*
> +	 * Priority ordering at filter invalidation(destroying a flow) between
> +	 * "best effort" space and "guaranteed" space.
> +	 *
> +	 * 0 = At filter invalidation, the hardware first tries to increment the
> +	 * "best effort" space. The "guaranteed" space is incremented only
> when
> +	 * the global "best effort" space is at it max value or the "best effort"
> +	 * space of the PF is at its max value.
> +	 * 1 = At filter invalidation, the hardware first tries to increment its
> +	 * "guaranteed" space. The "best effort" space is incremented only
> when
> +	 * it is already at its max value.
> +	 */
> +	uint32_t fdir_invalprio;
> +	uint32_t fdir_space_size;
> +	uint32_t fdir_actual_cnt;
> +	uint32_t fdir_guarantee_available_space;
> +	uint32_t fdir_guarantee_free_space;
> +	struct i40e_fdir_flow_bitmap fdir_flow_bitmap;

What is the flow_bitmap used for? Is it a free bitmap or an allocated bitmap?
Better to add more description here, or rename it so that its purpose is clearer.
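
From the rest of the patch it looks like a set bit means "slot free"; if so, a comment along these lines (wording is only a suggestion) would make the intent obvious:

	/*
	 * Bitmap of free FDIR flow slots:
	 * a set bit means the corresponding entry in fdir_flow[] is free;
	 * the bit is cleared when a flow is created in that slot and set
	 * again when the flow is destroyed or the filters are flushed.
	 */
	struct i40e_fdir_flow_bitmap fdir_flow_bitmap;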

> +
>  	/* Mark if flex pit and mask is set */
>  	bool flex_pit_flag[I40E_MAX_FLXPLD_LAYER];
>  	bool flex_mask_flag[I40E_FILTER_PCTYPE_MAX];
> @@ -894,15 +940,6 @@ struct i40e_mirror_rule {
> 
>  TAILQ_HEAD(i40e_mirror_rule_list, i40e_mirror_rule);
> 
> -/*
> - * Struct to store flow created.
> - */
> -struct rte_flow {
> -	TAILQ_ENTRY(rte_flow) node;
> -	enum rte_filter_type filter_type;
> -	void *rule;
> -};
> -
>  TAILQ_HEAD(i40e_flow_list, rte_flow);
> 
>  /* Struct to store Traffic Manager shaper profile. */ @@ -1335,8 +1372,8
> @@ int i40e_add_del_fdir_filter(struct rte_eth_dev *dev,
>  			     const struct rte_eth_fdir_filter *filter,
>  			     bool add);
>  int i40e_flow_add_del_fdir_filter(struct rte_eth_dev *dev,
> -				  const struct i40e_fdir_filter_conf *filter,
> -				  bool add);
> +			      const struct i40e_fdir_filter_conf *filter,
> +			      bool add, bool wait_status);
>  int i40e_dev_tunnel_filter_set(struct i40e_pf *pf,
>  			       struct rte_eth_tunnel_filter_conf *tunnel_filter,
>  			       uint8_t add);
> diff --git a/drivers/net/i40e/i40e_fdir.c b/drivers/net/i40e/i40e_fdir.c index
> 4a778db4c..d7ba841d6 100644
> --- a/drivers/net/i40e/i40e_fdir.c
> +++ b/drivers/net/i40e/i40e_fdir.c
> @@ -99,7 +99,7 @@ static int
>  i40e_flow_fdir_filter_programming(struct i40e_pf *pf,
>  				  enum i40e_filter_pctype pctype,
>  				  const struct i40e_fdir_filter_conf *filter,
> -				  bool add);
> +				  bool add, bool wait_status);
> 
>  static int
>  i40e_fdir_rx_queue_init(struct i40e_rx_queue *rxq) @@ -163,6 +163,7
> @@ i40e_fdir_setup(struct i40e_pf *pf)
>  	char z_name[RTE_MEMZONE_NAMESIZE];
>  	const struct rte_memzone *mz = NULL;
>  	struct rte_eth_dev *eth_dev = pf->adapter->eth_dev;
> +	uint16_t i;
> 
>  	if ((pf->flags & I40E_FLAG_FDIR) == 0) {
>  		PMD_INIT_LOG(ERR, "HW doesn't support FDIR"); @@ -179,6
> +180,7 @@ i40e_fdir_setup(struct i40e_pf *pf)
>  		PMD_DRV_LOG(INFO, "FDIR initialization has been done.");
>  		return I40E_SUCCESS;
>  	}
> +
>  	/* make new FDIR VSI */
>  	vsi = i40e_vsi_setup(pf, I40E_VSI_FDIR, pf->main_vsi, 0);
>  	if (!vsi) {
> @@ -233,17 +235,27 @@ i40e_fdir_setup(struct i40e_pf *pf)
>  			eth_dev->device->driver->name,
>  			I40E_FDIR_MZ_NAME,
>  			eth_dev->data->port_id);
> -	mz = i40e_memzone_reserve(z_name, I40E_FDIR_PKT_LEN,
> SOCKET_ID_ANY);
> +	mz = i40e_memzone_reserve(z_name, I40E_FDIR_PKT_LEN *
> PRG_PKT_CNT,
> +			SOCKET_ID_ANY);
>  	if (!mz) {
>  		PMD_DRV_LOG(ERR, "Cannot init memzone for "
>  				 "flow director program packet.");
>  		err = I40E_ERR_NO_MEMORY;
>  		goto fail_mem;
>  	}
> -	pf->fdir.prg_pkt = mz->addr;
> -	pf->fdir.dma_addr = mz->iova;
> +
> +	for (i = 0; i < PRG_PKT_CNT; i++) {
> +		pf->fdir.prg_pkt[i] = (uint8_t *)mz->addr + I40E_FDIR_PKT_LEN *
> +				(i % PRG_PKT_CNT);

The loop runs from 0 to PRG_PKT_CNT, so why "i % PRG_PKT_CNT"?
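
Since i stays below PRG_PKT_CNT inside the loop, the modulo looks like a no-op and the loop could presumably be just:

	for (i = 0; i < PRG_PKT_CNT; i++) {
		pf->fdir.prg_pkt[i] = (uint8_t *)mz->addr + i * I40E_FDIR_PKT_LEN;
		pf->fdir.dma_addr[i] = mz->iova + i * I40E_FDIR_PKT_LEN;
	}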

> +		pf->fdir.dma_addr[i] = mz->iova + I40E_FDIR_PKT_LEN *
> +				(i % PRG_PKT_CNT);
> +	}
> 
>  	pf->fdir.match_counter_index =
> I40E_COUNTER_INDEX_FDIR(hw->pf_id);
> +	pf->fdir.fdir_actual_cnt = 0;
> +	pf->fdir.fdir_guarantee_free_space =
> +		pf->fdir.fdir_guarantee_available_space;
> +
>  	PMD_DRV_LOG(INFO, "FDIR setup successfully, with programming
> queue %u.",
>  		    vsi->base_queue);
>  	return I40E_SUCCESS;
> @@ -327,6 +339,7 @@ i40e_init_flx_pld(struct i40e_pf *pf)
>  		pf->fdir.flex_set[index].src_offset = 0;
>  		pf->fdir.flex_set[index].size = I40E_FDIR_MAX_FLEXWORD_NUM;
>  		pf->fdir.flex_set[index].dst_offset = 0;
> +
Dummy empty line.

>  		I40E_WRITE_REG(hw, I40E_PRTQF_FLX_PIT(index), 0x0000C900);
>  		I40E_WRITE_REG(hw,
>  			I40E_PRTQF_FLX_PIT(index + 1), 0x0000FC29);/*non-used*/
> @@ -1557,11 +1570,11 @@ i40e_sw_fdir_filter_lookup(struct
> i40e_fdir_info *fdir_info,
>  	return fdir_info->hash_map[ret];
>  }
> 
> -/* Add a flow director filter into the SW list */
>  static int
> i40e_sw_fdir_filter_insert(struct i40e_pf *pf, struct i40e_fdir_filter *filter)
> {
>  	struct i40e_fdir_info *fdir_info = &pf->fdir;
> +	struct i40e_fdir_filter *hash_filter;
>  	int ret;
> 
>  	if (filter->fdir.input.flow_ext.pkt_template)
> @@ -1577,9 +1590,14 @@ i40e_sw_fdir_filter_insert(struct i40e_pf *pf,
> struct i40e_fdir_filter *filter)
>  			    ret);
>  		return ret;
>  	}
> -	fdir_info->hash_map[ret] = filter;
> 
> -	TAILQ_INSERT_TAIL(&fdir_info->fdir_list, filter, rules);
> +	if (fdir_info->hash_map[ret])
> +		return -1;
> +
> +	hash_filter = &fdir_info->fdir_filter_array[ret];
> +	rte_memcpy(hash_filter, filter, sizeof(*filter));
> +	fdir_info->hash_map[ret] = hash_filter;
> +	TAILQ_INSERT_TAIL(&fdir_info->fdir_list, hash_filter, rules);
> 
>  	return 0;
>  }
> @@ -1608,7 +1626,6 @@ i40e_sw_fdir_filter_del(struct i40e_pf *pf, struct
> i40e_fdir_input *input)
>  	fdir_info->hash_map[ret] = NULL;
> 
>  	TAILQ_REMOVE(&fdir_info->fdir_list, filter, rules);
> -	rte_free(filter);
> 
>  	return 0;
>  }
> @@ -1675,23 +1692,50 @@ i40e_add_del_fdir_filter(struct rte_eth_dev
> *dev,
>  	return ret;
>  }
> 
> +static inline unsigned char *
> +i40e_find_available_buffer(struct rte_eth_dev *dev)
> +{
> +	struct i40e_pf *pf =
> I40E_DEV_PRIVATE_TO_PF(dev->data->dev_private);
> +	struct i40e_fdir_info *fdir_info = &pf->fdir;
> +	struct i40e_tx_queue *txq = pf->fdir.txq;
> +	volatile struct i40e_tx_desc *txdp = &txq->tx_ring[txq->tx_tail + 1];
> +	uint32_t i;
> +
> +	/* wait until the tx descriptor is ready */
> +	for (i = 0; i < I40E_FDIR_MAX_WAIT_US; i++) {
> +		if ((txdp->cmd_type_offset_bsz &
> +			rte_cpu_to_le_64(I40E_TXD_QW1_DTYPE_MASK)) ==
> +			rte_cpu_to_le_64(I40E_TX_DESC_DTYPE_DESC_DONE))
> +			break;
> +		rte_delay_us(1);
> +	}
> +	if (i >= I40E_FDIR_MAX_WAIT_US) {
> +		PMD_DRV_LOG(ERR,
> +		    "Failed to program FDIR filter: time out to get DD on tx
> queue.");
> +		return NULL;
> +	}
> +
> +	return (unsigned char *)fdir_info->prg_pkt[txq->tx_tail / 2];
> +}

Why tx_tail / 2? Better to add some description here, and use ">> 1" if it's a word/byte conversion.
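
If the /2 is because each FDIR programming consumes two descriptors (a program descriptor plus a data descriptor), something like this would make it explicit (the macro name is only a placeholder):

/* Placeholder name: each FDIR programming operation uses 2 TX descriptors
 * (program descriptor + data descriptor), so descriptor pair N maps to
 * program packet buffer N.
 */
#define I40E_FDIR_DESCS_PER_PROG_SHIFT	1

	return (unsigned char *)
		fdir_info->prg_pkt[txq->tx_tail >> I40E_FDIR_DESCS_PER_PROG_SHIFT];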

> +
>  /**
>   * i40e_flow_add_del_fdir_filter - add or remove a flow director filter.
>   * @pf: board private structure
>   * @filter: fdir filter entry
>   * @add: 0 - delete, 1 - add
>   */
> +
>  int
>  i40e_flow_add_del_fdir_filter(struct rte_eth_dev *dev,
>  			      const struct i40e_fdir_filter_conf *filter,
> -			      bool add)
> +			      bool add, bool wait_status)
>  {
>  	struct i40e_hw *hw =
> I40E_DEV_PRIVATE_TO_HW(dev->data->dev_private);
>  	struct i40e_pf *pf =
> I40E_DEV_PRIVATE_TO_PF(dev->data->dev_private);
> -	unsigned char *pkt = (unsigned char *)pf->fdir.prg_pkt;
> +	unsigned char *pkt = NULL;
>  	enum i40e_filter_pctype pctype;
>  	struct i40e_fdir_info *fdir_info = &pf->fdir;
> -	struct i40e_fdir_filter *fdir_filter, *node;
> +	struct i40e_fdir_filter *node;
>  	struct i40e_fdir_filter check_filter; /* Check if the filter exists */
>  	int ret = 0;
> 
> @@ -1724,25 +1768,41 @@ i40e_flow_add_del_fdir_filter(struct
> rte_eth_dev *dev,
>  	/* Check if there is the filter in SW list */
>  	memset(&check_filter, 0, sizeof(check_filter));
>  	i40e_fdir_filter_convert(filter, &check_filter);
> -	node = i40e_sw_fdir_filter_lookup(fdir_info, &check_filter.fdir.input);
> -	if (add && node) {
> -		PMD_DRV_LOG(ERR,
> -			    "Conflict with existing flow director rules!");
> -		return -EINVAL;
> -	}
> 
> -	if (!add && !node) {
> -		PMD_DRV_LOG(ERR,
> -			    "There's no corresponding flow firector filter!");
> -		return -EINVAL;
> +	if (add) {
> +		ret = i40e_sw_fdir_filter_insert(pf, &check_filter);
> +		if (ret < 0) {
> +			PMD_DRV_LOG(ERR,
> +				    "Conflict with existing flow director rules!");
> +			return -EINVAL;
> +		}
> +	} else {
> +		node = i40e_sw_fdir_filter_lookup(fdir_info,
> +				&check_filter.fdir.input);
> +		if (!node) {
> +			PMD_DRV_LOG(ERR,
> +				    "There's no corresponding flow firector filter!");
> +			return -EINVAL;
> +		}
> +
> +		ret = i40e_sw_fdir_filter_del(pf, &node->fdir.input);
> +		if (ret < 0) {
> +			PMD_DRV_LOG(ERR,
> +					"Error deleting fdir rule from hash table!");
> +			return -EINVAL;
> +		}
>  	}
> 
> -	memset(pkt, 0, I40E_FDIR_PKT_LEN);
> +	/* find a buffer to store the pkt */
> +	pkt = i40e_find_available_buffer(dev);
> +	if (pkt == NULL)
> +		goto error_op;
> 
> +	memset(pkt, 0, I40E_FDIR_PKT_LEN);
>  	ret = i40e_flow_fdir_construct_pkt(pf, &filter->input, pkt);
>  	if (ret < 0) {
>  		PMD_DRV_LOG(ERR, "construct packet for fdir fails.");
> -		return ret;
> +		goto error_op;
>  	}
> 
>  	if (hw->mac.type == I40E_MAC_X722) {
> @@ -1751,28 +1811,22 @@ i40e_flow_add_del_fdir_filter(struct
> rte_eth_dev *dev,
>  			hw, I40E_GLQF_FD_PCTYPES((int)pctype));
>  	}
> 
> -	ret = i40e_flow_fdir_filter_programming(pf, pctype, filter, add);
> +	ret = i40e_flow_fdir_filter_programming(pf, pctype, filter, add,
> +			wait_status);
>  	if (ret < 0) {
>  		PMD_DRV_LOG(ERR, "fdir programming fails for PCTYPE(%u).",
>  			    pctype);
> -		return ret;
> +		goto error_op;
>  	}
> 
> -	if (add) {
> -		fdir_filter = rte_zmalloc("fdir_filter",
> -					  sizeof(*fdir_filter), 0);
> -		if (fdir_filter == NULL) {
> -			PMD_DRV_LOG(ERR, "Failed to alloc memory.");
> -			return -ENOMEM;
> -		}
> +	return ret;
> 
> -		rte_memcpy(fdir_filter, &check_filter, sizeof(check_filter));
> -		ret = i40e_sw_fdir_filter_insert(pf, fdir_filter);
> -		if (ret < 0)
> -			rte_free(fdir_filter);
> -	} else {
> -		ret = i40e_sw_fdir_filter_del(pf, &node->fdir.input);
> -	}
> +error_op:
> +	/* roll back */
> +	if (add)
> +		i40e_sw_fdir_filter_del(pf, &check_filter.fdir.input);
> +	else
> +		i40e_sw_fdir_filter_insert(pf, &check_filter);
> 
>  	return ret;
>  }
> @@ -1875,7 +1929,7 @@ i40e_fdir_filter_programming(struct i40e_pf *pf,
> 
>  	PMD_DRV_LOG(INFO, "filling transmit descriptor.");
>  	txdp = &(txq->tx_ring[txq->tx_tail + 1]);
> -	txdp->buffer_addr = rte_cpu_to_le_64(pf->fdir.dma_addr);
> +	txdp->buffer_addr = rte_cpu_to_le_64(pf->fdir.dma_addr[0]);
>  	td_cmd = I40E_TX_DESC_CMD_EOP |
>  		 I40E_TX_DESC_CMD_RS  |
>  		 I40E_TX_DESC_CMD_DUMMY;
> @@ -1925,7 +1979,7 @@ static int
>  i40e_flow_fdir_filter_programming(struct i40e_pf *pf,
>  				  enum i40e_filter_pctype pctype,
>  				  const struct i40e_fdir_filter_conf *filter,
> -				  bool add)
> +				  bool add, bool wait_status)
>  {
>  	struct i40e_tx_queue *txq = pf->fdir.txq;
>  	struct i40e_rx_queue *rxq = pf->fdir.rxq; @@ -1933,8 +1987,10 @@
> i40e_flow_fdir_filter_programming(struct i40e_pf *pf,
>  	volatile struct i40e_tx_desc *txdp;
>  	volatile struct i40e_filter_program_desc *fdirdp;
>  	uint32_t td_cmd;
> -	uint16_t vsi_id, i;
> +	uint16_t vsi_id;
>  	uint8_t dest;
> +	uint32_t i;
> +	uint8_t retry_count = 0;
> 
>  	PMD_DRV_LOG(INFO, "filling filter programming descriptor.");
>  	fdirdp = (volatile struct i40e_filter_program_desc *) @@ -2009,7
> +2065,8 @@ i40e_flow_fdir_filter_programming(struct i40e_pf *pf,
> 
>  	PMD_DRV_LOG(INFO, "filling transmit descriptor.");
>  	txdp = &txq->tx_ring[txq->tx_tail + 1];
> -	txdp->buffer_addr = rte_cpu_to_le_64(pf->fdir.dma_addr);
> +	txdp->buffer_addr = rte_cpu_to_le_64(pf->fdir.dma_addr[txq->tx_tail /
> +2]);
> +
>  	td_cmd = I40E_TX_DESC_CMD_EOP |
>  		 I40E_TX_DESC_CMD_RS  |
>  		 I40E_TX_DESC_CMD_DUMMY;
> @@ -2022,25 +2079,34 @@ i40e_flow_fdir_filter_programming(struct
> i40e_pf *pf,
>  		txq->tx_tail = 0;
>  	/* Update the tx tail register */
>  	rte_wmb();
> +
> +	/* capture the previous error report(if any) from rx ring */
> +	while ((i40e_check_fdir_programming_status(rxq) < 0) &&
> +		(++retry_count < 100))
> +		PMD_DRV_LOG(INFO, "previous error report captured.");
> +
>  	I40E_PCI_REG_WRITE(txq->qtx_tail, txq->tx_tail);
> -	for (i = 0; i < I40E_FDIR_MAX_WAIT_US; i++) {
> -		if ((txdp->cmd_type_offset_bsz &
> -				rte_cpu_to_le_64(I40E_TXD_QW1_DTYPE_MASK)) ==
> -				rte_cpu_to_le_64(I40E_TX_DESC_DTYPE_DESC_DONE))
> -			break;
> -		rte_delay_us(1);
> -	}
> -	if (i >= I40E_FDIR_MAX_WAIT_US) {
> -		PMD_DRV_LOG(ERR,
> -		    "Failed to program FDIR filter: time out to get DD on tx
> queue.");
> -		return -ETIMEDOUT;
> -	}
> -	/* totally delay 10 ms to check programming status*/
> -	rte_delay_us(I40E_FDIR_MAX_WAIT_US);
> -	if (i40e_check_fdir_programming_status(rxq) < 0) {
> -		PMD_DRV_LOG(ERR,
> -		    "Failed to program FDIR filter: programming status
> reported.");
> -		return -ETIMEDOUT;
> +
> +	if (wait_status) {
> +		for (i = 0; i < I40E_FDIR_MAX_WAIT_US; i++) {
> +			if ((txdp->cmd_type_offset_bsz &
> +					rte_cpu_to_le_64(I40E_TXD_QW1_DTYPE_MASK))
> ==
> +
> 	rte_cpu_to_le_64(I40E_TX_DESC_DTYPE_DESC_DONE))
> +				break;
> +			rte_delay_us(1);
> +		}
> +		if (i >= I40E_FDIR_MAX_WAIT_US) {
> +			PMD_DRV_LOG(ERR,
> +			    "Failed to program FDIR filter: time out to get DD on tx
> queue.");
> +			return -ETIMEDOUT;
> +		}
> +		/* totally delay 10 ms to check programming status*/
> +		rte_delay_us(I40E_FDIR_MAX_WAIT_US);
> +		if (i40e_check_fdir_programming_status(rxq) < 0) {
> +			PMD_DRV_LOG(ERR,
> +			    "Failed to program FDIR filter: programming status
> reported.");
> +			return -ETIMEDOUT;
> +		}
>  	}
> 
>  	return 0;
> @@ -2324,7 +2390,7 @@ i40e_fdir_filter_restore(struct i40e_pf *pf)
>  	uint32_t best_cnt;     /**< Number of filters in best effort spaces. */
> 
>  	TAILQ_FOREACH(f, fdir_list, rules)
> -		i40e_flow_add_del_fdir_filter(dev, &f->fdir, TRUE);
> +		i40e_flow_add_del_fdir_filter(dev, &f->fdir, TRUE, TRUE);
> 
>  	fdstat = I40E_READ_REG(hw, I40E_PFQF_FDSTAT);
>  	guarant_cnt =
> diff --git a/drivers/net/i40e/i40e_flow.c b/drivers/net/i40e/i40e_flow.c
> index 7cd537340..260e58dc1 100644
> --- a/drivers/net/i40e/i40e_flow.c
> +++ b/drivers/net/i40e/i40e_flow.c
> @@ -17,6 +17,7 @@
>  #include <rte_malloc.h>
>  #include <rte_tailq.h>
>  #include <rte_flow_driver.h>
> +#include <rte_bitmap.h>
> 
>  #include "i40e_logs.h"
>  #include "base/i40e_type.h"
> @@ -144,6 +145,8 @@ const struct rte_flow_ops i40e_flow_ops = {
> 
>  static union i40e_filter_t cons_filter;  static enum rte_filter_type
> cons_filter_type = RTE_ETH_FILTER_NONE;
> +/* internal pattern w/o VOID items */
> +struct rte_flow_item g_items[32];
> 
>  /* Pattern matched ethertype filter */
>  static enum rte_flow_item_type pattern_ethertype[] = { @@ -2044,9
> +2047,6 @@ i40e_flow_parse_ethertype_pattern(struct rte_eth_dev *dev,
>  	const struct rte_flow_item_eth *eth_spec;
>  	const struct rte_flow_item_eth *eth_mask;
>  	enum rte_flow_item_type item_type;
> -	uint16_t outer_tpid;
> -
> -	outer_tpid = i40e_get_outer_vlan(dev);
> 
>  	for (; item->type != RTE_FLOW_ITEM_TYPE_END; item++) {
>  		if (item->last) {
> @@ -2106,7 +2106,7 @@ i40e_flow_parse_ethertype_pattern(struct
> rte_eth_dev *dev,
>  			if (filter->ether_type == RTE_ETHER_TYPE_IPV4 ||
>  			    filter->ether_type == RTE_ETHER_TYPE_IPV6 ||
>  			    filter->ether_type == RTE_ETHER_TYPE_LLDP ||
> -			    filter->ether_type == outer_tpid) {
> +			    filter->ether_type == i40e_get_outer_vlan(dev)) {
>  				rte_flow_error_set(error, EINVAL,
>  						   RTE_FLOW_ERROR_TYPE_ITEM,
>  						   item,
> @@ -2349,6 +2349,7 @@ i40e_flow_set_fdir_flex_pit(struct i40e_pf *pf,
>  		field_idx = layer_idx * I40E_MAX_FLXPLD_FIED + i;
>  		flx_pit = MK_FLX_PIT(min_next_off, NONUSE_FLX_PIT_FSIZE,
>  				     NONUSE_FLX_PIT_DEST_OFF);
> +
>  		I40E_WRITE_REG(hw, I40E_PRTQF_FLX_PIT(field_idx), flx_pit);
>  		min_next_off++;
>  	}
> @@ -2608,7 +2609,6 @@ i40e_flow_parse_fdir_pattern(struct rte_eth_dev
> *dev,
>  	uint16_t flex_size;
>  	bool cfg_flex_pit = true;
>  	bool cfg_flex_msk = true;
> -	uint16_t outer_tpid;
>  	uint16_t ether_type;
>  	uint32_t vtc_flow_cpu;
>  	bool outer_ip = true;
> @@ -2617,7 +2617,6 @@ i40e_flow_parse_fdir_pattern(struct rte_eth_dev
> *dev,
>  	memset(off_arr, 0, sizeof(off_arr));
>  	memset(len_arr, 0, sizeof(len_arr));
>  	memset(flex_mask, 0, I40E_FDIR_MAX_FLEX_LEN);
> -	outer_tpid = i40e_get_outer_vlan(dev);
>  	filter->input.flow_ext.customized_pctype = false;
>  	for (; item->type != RTE_FLOW_ITEM_TYPE_END; item++) {
>  		if (item->last) {
> @@ -2685,7 +2684,7 @@ i40e_flow_parse_fdir_pattern(struct rte_eth_dev
> *dev,
>  				if (next_type == RTE_FLOW_ITEM_TYPE_VLAN ||
>  				    ether_type == RTE_ETHER_TYPE_IPV4 ||
>  				    ether_type == RTE_ETHER_TYPE_IPV6 ||
> -				    ether_type == outer_tpid) {
> +				    ether_type == i40e_get_outer_vlan(dev)) {
>  					rte_flow_error_set(error, EINVAL,
>  						     RTE_FLOW_ERROR_TYPE_ITEM,
>  						     item,
> @@ -2729,7 +2728,7 @@ i40e_flow_parse_fdir_pattern(struct rte_eth_dev
> *dev,
> 
>  				if (ether_type == RTE_ETHER_TYPE_IPV4 ||
>  				    ether_type == RTE_ETHER_TYPE_IPV6 ||
> -				    ether_type == outer_tpid) {
> +				    ether_type == i40e_get_outer_vlan(dev)) {
>  					rte_flow_error_set(error, EINVAL,
>  						     RTE_FLOW_ERROR_TYPE_ITEM,
>  						     item,
> @@ -5263,7 +5262,6 @@ i40e_flow_validate(struct rte_eth_dev *dev,
>  				   NULL, "NULL attribute.");
>  		return -rte_errno;
>  	}
> -
>  	memset(&cons_filter, 0, sizeof(cons_filter));
> 
>  	/* Get the non-void item of action */
> @@ -5285,12 +5283,18 @@ i40e_flow_validate(struct rte_eth_dev *dev,
>  	}
>  	item_num++;
> 
> -	items = rte_zmalloc("i40e_pattern",
> -			    item_num * sizeof(struct rte_flow_item), 0);
> -	if (!items) {
> -		rte_flow_error_set(error, ENOMEM,
> RTE_FLOW_ERROR_TYPE_ITEM_NUM,
> -				   NULL, "No memory for PMD internal items.");
> -		return -ENOMEM;
> +	if (item_num <= ARRAY_SIZE(g_items)) {
> +		items = g_items;
> +	} else {
> +		items = rte_zmalloc("i40e_pattern",
> +				    item_num * sizeof(struct rte_flow_item), 0);
> +		if (!items) {
> +			rte_flow_error_set(error, ENOMEM,
> +					RTE_FLOW_ERROR_TYPE_ITEM_NUM,
> +					NULL,
> +					"No memory for PMD internal items.");
> +			return -ENOMEM;
> +		}
>  	}
> 
>  	i40e_pattern_skip_void_item(items, pattern); @@ -5298,20 +5302,26
> @@ i40e_flow_validate(struct rte_eth_dev *dev,
>  	i = 0;
>  	do {
>  		parse_filter = i40e_find_parse_filter_func(items, &i);
> +
>  		if (!parse_filter && !flag) {
>  			rte_flow_error_set(error, EINVAL,
>  					   RTE_FLOW_ERROR_TYPE_ITEM,
>  					   pattern, "Unsupported pattern");
> -			rte_free(items);
> +
> +			if (items != g_items)
> +				rte_free(items);
>  			return -rte_errno;
>  		}
> +
>  		if (parse_filter)
>  			ret = parse_filter(dev, attr, items, actions,
>  					   error, &cons_filter);
> +
>  		flag = true;
>  	} while ((ret < 0) && (i < RTE_DIM(i40e_supported_patterns)));
> 
> -	rte_free(items);
> +	if (items != g_items)
> +		rte_free(items);
> 
>  	return ret;
>  }
> @@ -5324,21 +5334,67 @@ i40e_flow_create(struct rte_eth_dev *dev,
>  		 struct rte_flow_error *error)
>  {
>  	struct i40e_pf *pf =
> I40E_DEV_PRIVATE_TO_PF(dev->data->dev_private);
> -	struct rte_flow *flow;
> +	struct rte_flow *flow = NULL;
> +	struct i40e_fdir_info *fdir_info = &pf->fdir;
> +	uint32_t i = 0;
> +	uint32_t pos = 0;
> +	uint64_t slab = 0;
> +	bool wait_for_status = true;
>  	int ret;
> 
> -	flow = rte_zmalloc("i40e_flow", sizeof(struct rte_flow), 0);
> -	if (!flow) {
> -		rte_flow_error_set(error, ENOMEM,
> -				   RTE_FLOW_ERROR_TYPE_HANDLE, NULL,
> -				   "Failed to allocate memory");
> -		return flow;
> -	}
> -
>  	ret = i40e_flow_validate(dev, attr, pattern, actions, error);
>  	if (ret < 0)
>  		return NULL;
> 
> +	if (cons_filter_type == RTE_ETH_FILTER_FDIR) {
> +		if (fdir_info->fdir_actual_cnt >=
> +				fdir_info->fdir_space_size) {
> +			rte_flow_error_set(error, ENOMEM,
> +					   RTE_FLOW_ERROR_TYPE_HANDLE, NULL,
> +					   "Fdir space full");
> +			return NULL;
> +		}
> +
> +		ret = rte_bitmap_scan(fdir_info->fdir_flow_bitmap.b, &pos,
> +				&slab);
> +
> +		/* normally this won't happen as the fdir_actual_cnt should be
> +		 * same with the number of the set bits in fdir_flow_bitmap,
> +		 * but anyway handle this error condition here for safe
> +		 */
> +		if (ret == 0) {
> +			PMD_DRV_LOG(ERR, "fdir_actual_cnt out of sync");
> +			rte_flow_error_set(error, ENOMEM,
> +					   RTE_FLOW_ERROR_TYPE_HANDLE, NULL,
> +					   "Fdir space full");
> +			return NULL;
> +		}
> +
> +		i = rte_bsf64(slab);
> +		pos += i;
> +		rte_bitmap_clear(fdir_info->fdir_flow_bitmap.b, pos);
> +		flow = &fdir_info->fdir_flow_bitmap.fdir_flow[pos].flow;
> +		memset(flow, 0, sizeof(struct rte_flow));
> +
> +		if (fdir_info->fdir_invalprio == 1) {
> +			if (fdir_info->fdir_guarantee_free_space > 0) {
> +				fdir_info->fdir_guarantee_free_space--;
> +				wait_for_status = false;
> +			}
> +		}
> +
> +		fdir_info->fdir_actual_cnt++;
> +
> +	} else {
> +		flow = rte_zmalloc("i40e_flow", sizeof(struct rte_flow), 0);
> +		if (!flow) {
> +			rte_flow_error_set(error, ENOMEM,
> +					   RTE_FLOW_ERROR_TYPE_HANDLE, NULL,
> +					   "Failed to allocate memory");
> +			return flow;
> +		}
> +	}
> +
>  	switch (cons_filter_type) {
>  	case RTE_ETH_FILTER_ETHERTYPE:
>  		ret = i40e_ethertype_filter_set(pf,
> @@ -5350,9 +5406,17 @@ i40e_flow_create(struct rte_eth_dev *dev,
>  		break;
>  	case RTE_ETH_FILTER_FDIR:
>  		ret = i40e_flow_add_del_fdir_filter(dev,
> -				       &cons_filter.fdir_filter, 1);
> -		if (ret)
> +			       &cons_filter.fdir_filter, 1,
> +			       wait_for_status);
> +		if (ret) {
> +			rte_bitmap_set(fdir_info->fdir_flow_bitmap.b, pos);
> +			fdir_info->fdir_actual_cnt--;
> +			if (fdir_info->fdir_invalprio == 1)
> +				fdir_info->fdir_guarantee_free_space++;
> +
>  			goto free_flow;
> +		}
> +
>  		flow->rule = TAILQ_LAST(&pf->fdir.fdir_list,
>  					i40e_fdir_filter_list);
>  		break;
> @@ -5384,7 +5448,10 @@ i40e_flow_create(struct rte_eth_dev *dev,
>  	rte_flow_error_set(error, -ret,
>  			   RTE_FLOW_ERROR_TYPE_HANDLE, NULL,
>  			   "Failed to create flow.");
> -	rte_free(flow);
> +
> +	if (cons_filter_type != RTE_ETH_FILTER_FDIR)
> +		rte_free(flow);
> +
>  	return NULL;
>  }
> 
> @@ -5394,7 +5461,9 @@ i40e_flow_destroy(struct rte_eth_dev *dev,
>  		  struct rte_flow_error *error)
>  {
>  	struct i40e_pf *pf =
> I40E_DEV_PRIVATE_TO_PF(dev->data->dev_private);
> +	struct i40e_fdir_info *fdir_info = &pf->fdir;
>  	enum rte_filter_type filter_type = flow->filter_type;
> +	struct i40e_fdir_flows *f;
>  	int ret = 0;
> 
>  	switch (filter_type) {
> @@ -5408,7 +5477,8 @@ i40e_flow_destroy(struct rte_eth_dev *dev,
>  		break;
>  	case RTE_ETH_FILTER_FDIR:
>  		ret = i40e_flow_add_del_fdir_filter(dev,
> -		       &((struct i40e_fdir_filter *)flow->rule)->fdir, 0);
> +				&((struct i40e_fdir_filter *)flow->rule)->fdir,
> +				0, false);
> 
>  		/* If the last flow is destroyed, disable fdir. */
>  		if (!ret && TAILQ_EMPTY(&pf->fdir.fdir_list)) { @@ -5428,11
> +5498,27 @@ i40e_flow_destroy(struct rte_eth_dev *dev,
> 
>  	if (!ret) {
>  		TAILQ_REMOVE(&pf->flow_list, flow, node);
> -		rte_free(flow);
> -	} else
> +		if (filter_type == RTE_ETH_FILTER_FDIR) {
> +			f = FLOW_TO_FLOW_BITMAP(flow);
> +			rte_bitmap_set(fdir_info->fdir_flow_bitmap.b, f->idx);
> +			fdir_info->fdir_actual_cnt--;
> +
> +			if (fdir_info->fdir_invalprio == 1)
> +				/* check if the budget will be gained back to
> +				 * guaranteed space
> +				 */
> +				if (fdir_info->fdir_guarantee_free_space <
> +					fdir_info->fdir_guarantee_available_space)
> +					fdir_info->fdir_guarantee_free_space++;
> +		} else {
> +			rte_free(flow);
> +		}
> +
> +	} else {
>  		rte_flow_error_set(error, -ret,
>  				   RTE_FLOW_ERROR_TYPE_HANDLE, NULL,
>  				   "Failed to destroy flow.");
> +	}
> 
>  	return ret;
>  }
> @@ -5582,6 +5668,7 @@ i40e_flow_flush_fdir_filter(struct i40e_pf *pf)
>  	struct rte_flow *flow;
>  	void *temp;
>  	int ret;
> +	uint32_t i = 0;
> 
>  	ret = i40e_fdir_flush(dev);
>  	if (!ret) {
> @@ -5597,10 +5684,24 @@ i40e_flow_flush_fdir_filter(struct i40e_pf *pf)
>  		TAILQ_FOREACH_SAFE(flow, &pf->flow_list, node, temp) {
>  			if (flow->filter_type == RTE_ETH_FILTER_FDIR) {
>  				TAILQ_REMOVE(&pf->flow_list, flow, node);
> -				rte_free(flow);
>  			}
>  		}
> 
> +		/* clear bitmap */
> +		rte_bitmap_reset(fdir_info->fdir_flow_bitmap.b);
> +		for (i = 0; i < fdir_info->fdir_space_size; i++) {
> +			fdir_info->fdir_flow_bitmap.fdir_flow[i].idx = i;
> +			rte_bitmap_set(fdir_info->fdir_flow_bitmap.b, i);
> +		}
> +
> +		fdir_info->fdir_actual_cnt = 0;
> +		fdir_info->fdir_guarantee_free_space =
> +			fdir_info->fdir_guarantee_available_space;
> +		memset(fdir_info->fdir_filter_array,
> +			0,
> +			sizeof(struct i40e_fdir_filter) *
> +			I40E_MAX_FDIR_FILTER_NUM);
> +
>  		for (pctype = I40E_FILTER_PCTYPE_NONF_IPV4_UDP;
>  		     pctype <= I40E_FILTER_PCTYPE_L2_PAYLOAD; pctype++)
>  			pf->fdir.inset_flag[pctype] = 0;
> diff --git a/drivers/net/i40e/i40e_rxtx.c b/drivers/net/i40e/i40e_rxtx.c index
> 840b6f387..f5235864d 100644
> --- a/drivers/net/i40e/i40e_rxtx.c
> +++ b/drivers/net/i40e/i40e_rxtx.c
> @@ -2938,16 +2938,17 @@ i40e_dev_free_queues(struct rte_eth_dev
> *dev)
>  	}
>  }
> 
> -#define I40E_FDIR_NUM_TX_DESC  I40E_MIN_RING_DESC
> -#define I40E_FDIR_NUM_RX_DESC  I40E_MIN_RING_DESC
> +#define I40E_FDIR_NUM_TX_DESC  256
> +#define I40E_FDIR_NUM_RX_DESC  256
> 
>  enum i40e_status_code
>  i40e_fdir_setup_tx_resources(struct i40e_pf *pf)  {
>  	struct i40e_tx_queue *txq;
>  	const struct rte_memzone *tz = NULL;
> -	uint32_t ring_size;
> +	uint32_t ring_size, i;
>  	struct rte_eth_dev *dev;
> +	volatile struct i40e_tx_desc *txdp;
> 
>  	if (!pf) {
>  		PMD_DRV_LOG(ERR, "PF is not available"); @@ -2987,6 +2988,14
> @@ i40e_fdir_setup_tx_resources(struct i40e_pf *pf)
> 
>  	txq->tx_ring_phys_addr = tz->iova;
>  	txq->tx_ring = (struct i40e_tx_desc *)tz->addr;
> +
> +	/* Set all the DD flags to 1 */
> +	for (i = 0; i < I40E_FDIR_NUM_TX_DESC; i += 2) {
> +		txdp = &txq->tx_ring[i + 1];
> +		txdp->cmd_type_offset_bsz |=
> I40E_TX_DESC_DTYPE_DESC_DONE;
> +		txdp->buffer_addr = rte_cpu_to_le_64(pf->fdir.dma_addr[i / 2]);
> +	}

DD bits are assumed to be reported by hardware, so why do we initialize them to 1?

> +
>  	/*
>  	 * don't need to allocate software ring and reset for the fdir
>  	 * program queue just set the queue has been configured.
> diff --git a/drivers/net/i40e/rte_pmd_i40e.c
> b/drivers/net/i40e/rte_pmd_i40e.c index e216e6783..c52a5af2e 100644
> --- a/drivers/net/i40e/rte_pmd_i40e.c
> +++ b/drivers/net/i40e/rte_pmd_i40e.c
> @@ -3060,7 +3060,7 @@ int
> rte_pmd_i40e_flow_add_del_packet_template(
>  		(enum i40e_fdir_status)conf->action.report_status;
>  	filter_conf.action.flex_off = conf->action.flex_off;
> 
> -	return i40e_flow_add_del_fdir_filter(dev, &filter_conf, add);
> +	return i40e_flow_add_del_fdir_filter(dev, &filter_conf, add, true);
>  }
> 
>  int
> --
> 2.17.1


Thread overview: 31+ messages
2020-06-12 18:00 [dpdk-dev] [PATCH] " chenmin.sun
2020-07-08  5:38 ` Xing, Beilei
2020-07-09  5:29   ` Sun, Chenmin
2020-07-08  8:41 ` Wang, Haiyue
2020-07-09  5:30   ` Sun, Chenmin
2020-07-09 14:39 ` [dpdk-dev] [PATCH V2] " chenmin.sun
2020-07-10  2:50   ` Zhang, Qi Z [this message]
2020-07-13  8:33     ` Sun, Chenmin
2020-07-13 22:23   ` [dpdk-dev] [PATCH V3 0/2] " chenmin.sun
2020-07-13 22:23     ` [dpdk-dev] [PATCH V3 1/2] net/i40e: i40e FDIR update rate optimization data structures chenmin.sun
2020-07-13 22:23     ` [dpdk-dev] [PATCH V3 2/2] net/i40e: i40e FDIR update rate optimization chenmin.sun
2020-07-15 19:53     ` [dpdk-dev] [PATCH V4 0/4] " chenmin.sun
2020-07-15 19:53       ` [dpdk-dev] [PATCH V4 1/4] net/i40e: introducing the fdir space tracking chenmin.sun
2020-07-16 13:16         ` Wu, Jingjing
2020-07-15 19:53       ` [dpdk-dev] [PATCH V4 2/4] net/i40e: FDIR flow memory management optimization chenmin.sun
2020-07-16 15:15         ` Wu, Jingjing
2020-07-15 19:53       ` [dpdk-dev] [PATCH V4 3/4] net/i40e: move the i40e_get_outer_vlan to where it real needed chenmin.sun
2020-07-15 19:53       ` [dpdk-dev] [PATCH V4 4/4] net/i40e: FDIR update rate optimization chenmin.sun
2020-07-16 13:57         ` Wu, Jingjing
2020-07-17  8:26           ` Sun, Chenmin
2020-07-17 17:19       ` [dpdk-dev] [PATCH v5 0/4] i40e " chenmin.sun
2020-07-17 17:19         ` [dpdk-dev] [PATCH v5 1/4] net/i40e: introducing the fdir space tracking chenmin.sun
2020-07-17 17:19         ` [dpdk-dev] [PATCH v5 2/4] net/i40e: FDIR flow memory management optimization chenmin.sun
2020-07-17 17:19         ` [dpdk-dev] [PATCH v5 3/4] net/i40e: move the i40e_get_outer_vlan to where it real needed chenmin.sun
2020-07-17 17:19         ` [dpdk-dev] [PATCH v5 4/4] net/i40e: FDIR update rate optimization chenmin.sun
2020-07-17 17:36         ` [dpdk-dev] [PATCH v6 0/4] i40e " chenmin.sun
2020-07-17 12:49           ` Zhang, Qi Z
2020-07-17 17:36           ` [dpdk-dev] [PATCH v6 1/4] net/i40e: introducing the fdir space tracking chenmin.sun
2020-07-17 17:36           ` [dpdk-dev] [PATCH v6 2/4] net/i40e: FDIR flow memory management optimization chenmin.sun
2020-07-17 17:36           ` [dpdk-dev] [PATCH v6 3/4] net/i40e: move the i40e_get_outer_vlan to where it real needed chenmin.sun
2020-07-17 17:36           ` [dpdk-dev] [PATCH v6 4/4] net/i40e: FDIR update rate optimization chenmin.sun
