DPDK patches and discussions
From: "Liu, Mingxia" <mingxia.liu@intel.com>
To: "Qiao, Wenjing" <wenjing.qiao@intel.com>,
	"Zhang, Yuying" <yuying.zhang@intel.com>,
	"dev@dpdk.org" <dev@dpdk.org>,
	"Zhang, Qi Z" <qi.z.zhang@intel.com>,
	"Wu, Jingjing" <jingjing.wu@intel.com>,
	"Xing, Beilei" <beilei.xing@intel.com>
Subject: FW: [PATCH v3 6/9] net/cpfl: add fxp rule module
Date: Tue, 12 Sep 2023 07:40:31 +0000	[thread overview]
Message-ID: <PH0PR11MB5877481A4777FDBE7CACE2BCECF1A@PH0PR11MB5877.namprd11.prod.outlook.com> (raw)
In-Reply-To: <20230906093407.3635038-7-wenjing.qiao@intel.com>



> -----Original Message-----
> From: Qiao, Wenjing <wenjing.qiao@intel.com>
> Sent: Wednesday, September 6, 2023 5:34 PM
> To: Zhang, Yuying <yuying.zhang@intel.com>; dev@dpdk.org; Zhang, Qi Z
> <qi.z.zhang@intel.com>; Wu, Jingjing <jingjing.wu@intel.com>; Xing, Beilei
> <beilei.xing@intel.com>
> Cc: Liu, Mingxia <mingxia.liu@intel.com>
> Subject: [PATCH v3 6/9] net/cpfl: add fxp rule module
> 
> From: Yuying Zhang <yuying.zhang@intel.com>
> 
> Added low level fxp module for rule packing / creation / destroying.
> 
> Signed-off-by: Yuying Zhang <yuying.zhang@intel.com>
> ---
>  drivers/net/cpfl/cpfl_controlq.c | 424 +++++++++++++++++++++++++++++++
>  drivers/net/cpfl/cpfl_controlq.h |  24 ++
>  drivers/net/cpfl/cpfl_ethdev.c   |  31 +++
>  drivers/net/cpfl/cpfl_ethdev.h   |   6 +
>  drivers/net/cpfl/cpfl_fxp_rule.c | 297 ++++++++++++++++++++++
>  drivers/net/cpfl/cpfl_fxp_rule.h |  68 +++++
>  drivers/net/cpfl/meson.build     |   1 +
>  7 files changed, 851 insertions(+)
>  create mode 100644 drivers/net/cpfl/cpfl_fxp_rule.c
>  create mode 100644 drivers/net/cpfl/cpfl_fxp_rule.h
> 
> diff --git a/drivers/net/cpfl/cpfl_controlq.c b/drivers/net/cpfl/cpfl_controlq.c
> index 476c78f235..ed76282b0c 100644
> --- a/drivers/net/cpfl/cpfl_controlq.c
> +++ b/drivers/net/cpfl/cpfl_controlq.c
> @@ -331,6 +331,402 @@ cpfl_ctlq_add(struct idpf_hw *hw, struct cpfl_ctlq_create_info *qinfo,
>  	return status;
>  }
> 
> +/**
> + * cpfl_ctlq_send - send command to Control Queue (CTQ)
> + * @hw: pointer to hw struct
> + * @cq: handle to control queue struct to send on
> + * @num_q_msg: number of messages to send on control queue
> + * @q_msg: pointer to array of queue messages to be sent
> + *
> + * The caller is expected to allocate DMAable buffers and pass them to the
> + * send routine via the q_msg struct / control queue specific data struct.
> + * The control queue will hold a reference to each send message until
> + * the completion for that message has been cleaned.
> + */
> +int
> +cpfl_ctlq_send(struct idpf_hw *hw, struct idpf_ctlq_info *cq,
> +	       uint16_t num_q_msg, struct idpf_ctlq_msg q_msg[])
> +{
> +	struct idpf_ctlq_desc *desc;
> +	int num_desc_avail = 0;
> +	int status = 0;
> +	int i = 0;
> +
> +	if (!cq || !cq->ring_size)
> +		return -ENOBUFS;
> +
> +	idpf_acquire_lock(&cq->cq_lock);
> +
> +	/* Ensure there are enough descriptors to send all messages */
> +	num_desc_avail = IDPF_CTLQ_DESC_UNUSED(cq);
> +	if (num_desc_avail == 0 || num_desc_avail < num_q_msg) {
> +		status = -ENOSPC;
> +		goto sq_send_command_out;
> +	}
> +
> +	for (i = 0; i < num_q_msg; i++) {
> +		struct idpf_ctlq_msg *msg = &q_msg[i];
> +		uint64_t msg_cookie;
> +
> +		desc = IDPF_CTLQ_DESC(cq, cq->next_to_use);
> +		desc->opcode = CPU_TO_LE16(msg->opcode);
> +		desc->pfid_vfid = CPU_TO_LE16(msg->func_id);
> +		msg_cookie = *(uint64_t *)&msg->cookie;
> +		desc->cookie_high =
> +			CPU_TO_LE32(IDPF_HI_DWORD(msg_cookie));
> +		desc->cookie_low =
> +			CPU_TO_LE32(IDPF_LO_DWORD(msg_cookie));
> +		desc->flags = CPU_TO_LE16((msg->host_id & IDPF_HOST_ID_MASK) <<
> +				IDPF_CTLQ_FLAG_HOST_ID_S);
> +		if (msg->data_len) {
> +			struct idpf_dma_mem *buff = msg->ctx.indirect.payload;
> +
> +			desc->datalen |= CPU_TO_LE16(msg->data_len);
> +			desc->flags |= CPU_TO_LE16(IDPF_CTLQ_FLAG_BUF);
> +			desc->flags |= CPU_TO_LE16(IDPF_CTLQ_FLAG_RD);
> +			/* Update the address values in the desc with the pa
> +			 * value for respective buffer
> +			 */
> +			desc->params.indirect.addr_high =
> +				CPU_TO_LE32(IDPF_HI_DWORD(buff->pa));
> +			desc->params.indirect.addr_low =
> +				CPU_TO_LE32(IDPF_LO_DWORD(buff->pa));
> +			idpf_memcpy(&desc->params, msg->ctx.indirect.context,
> +				    IDPF_INDIRECT_CTX_SIZE, IDPF_NONDMA_TO_DMA);
> +		} else {
> +			idpf_memcpy(&desc->params, msg->ctx.direct,
> +				    IDPF_DIRECT_CTX_SIZE, IDPF_NONDMA_TO_DMA);
> +		}
> +
> +		/* Store buffer info */
> +		cq->bi.tx_msg[cq->next_to_use] = msg;
> +		(cq->next_to_use)++;
> +		if (cq->next_to_use == cq->ring_size)
> +			cq->next_to_use = 0;
> +	}
> +
> +	/* Force memory write to complete before letting hardware
> +	 * know that there are new descriptors to fetch.
> +	 */
> +	idpf_wmb();
> +	wr32(hw, cq->reg.tail, cq->next_to_use);
> +
> +sq_send_command_out:
> +	idpf_release_lock(&cq->cq_lock);
> +
> +	return status;
> +}
> +
> +/**
> + * __cpfl_ctlq_clean_sq - helper function to reclaim descriptors on HW write
> + * back for the requested queue
> + * @cq: pointer to the specific Control queue
> + * @clean_count: (input|output) number of descriptors to clean as input, and
> + * number of descriptors actually cleaned as output
> + * @msg_status: (output) pointer to msg pointer array to be populated; needs
> + * to be allocated by caller
> + * @force: (input) clean descriptors which were not done yet. Use with caution
> + * in kernel mode only
> + *
> + * Returns an array of message pointers associated with the cleaned
> + * descriptors. The pointers are to the original ctlq_msgs sent on the cleaned
> + * descriptors.  The status will be returned for each; any messages that failed
> + * to send will have a non-zero status. The caller is expected to free original
> + * ctlq_msgs and free or reuse the DMA buffers.
> + */
> +static int
> +__cpfl_ctlq_clean_sq(struct idpf_ctlq_info *cq, uint16_t *clean_count,
> +		     struct idpf_ctlq_msg *msg_status[], bool force)
> +{
> +	struct idpf_ctlq_desc *desc;
> +	uint16_t i = 0, num_to_clean;
> +	uint16_t ntc, desc_err;
> +	int ret = 0;
> +
> +	if (!cq || !cq->ring_size)
> +		return -ENOBUFS;
> +
> +	if (*clean_count == 0)
> +		return 0;
> +	if (*clean_count > cq->ring_size)
> +		return -EINVAL;
> +
> +	idpf_acquire_lock(&cq->cq_lock);
> +	ntc = cq->next_to_clean;
> +	num_to_clean = *clean_count;
> +
> +	for (i = 0; i < num_to_clean; i++) {
> +		/* Fetch next descriptor and check if marked as done */
> +		desc = IDPF_CTLQ_DESC(cq, ntc);
> +		if (!force && !(LE16_TO_CPU(desc->flags) & IDPF_CTLQ_FLAG_DD))
> +			break;
> +
> +		desc_err = LE16_TO_CPU(desc->ret_val);
> +		if (desc_err) {
> +			/* strip off FW internal code */
> +			desc_err &= 0xff;
> +		}
> +
> +		msg_status[i] = cq->bi.tx_msg[ntc];
> +		if (!msg_status[i])
> +			break;
> +		msg_status[i]->status = desc_err;
> +		cq->bi.tx_msg[ntc] = NULL;
> +		/* Zero out any stale data */
> +		idpf_memset(desc, 0, sizeof(*desc), IDPF_DMA_MEM);
> +		ntc++;
> +		if (ntc == cq->ring_size)
> +			ntc = 0;
> +	}
> +
> +	cq->next_to_clean = ntc;
> +	idpf_release_lock(&cq->cq_lock);
> +
> +	/* Return number of descriptors actually cleaned */
> +	*clean_count = i;
> +
> +	return ret;
> +}
> +
> +/**
> + * cpfl_ctlq_clean_sq - reclaim send descriptors on HW write back for the
> + * requested queue
> + * @cq: pointer to the specific Control queue
> + * @clean_count: (input|output) number of descriptors to clean as input, and
> + * number of descriptors actually cleaned as output
> + * @msg_status: (output) pointer to msg pointer array to be populated; needs
> + * to be allocated by caller
> + *
> + * Returns an array of message pointers associated with the cleaned
> + * descriptors. The pointers are to the original ctlq_msgs sent on the cleaned
> + * descriptors.  The status will be returned for each; any messages that failed
> + * to send will have a non-zero status. The caller is expected to free original
> + * ctlq_msgs and free or reuse the DMA buffers.
> + */
> +int
> +cpfl_ctlq_clean_sq(struct idpf_ctlq_info *cq, uint16_t *clean_count,
> +		   struct idpf_ctlq_msg *msg_status[])
> +{
> +	return __cpfl_ctlq_clean_sq(cq, clean_count, msg_status, false);
> +}
> +
> +/**
> + * cpfl_ctlq_post_rx_buffs - post buffers to descriptor ring
> + * @hw: pointer to hw struct
> + * @cq: pointer to control queue handle
> + * @buff_count: (input|output) input is number of buffers caller is trying to
> + * return; output is number of buffers that were not posted
> + * @buffs: array of pointers to dma mem structs to be given to hardware
> + *
> + * Caller uses this function to return DMA buffers to the descriptor ring after
> + * consuming them; buff_count will be the number of buffers.
> + *
> + * Note: this function needs to be called after a receive call even
> + * if there are no DMA buffers to be returned, i.e. buff_count = 0,
> + * buffs = NULL to support direct commands
> + */
> +int
> +cpfl_ctlq_post_rx_buffs(struct idpf_hw *hw, struct idpf_ctlq_info *cq,
> +			uint16_t *buff_count, struct idpf_dma_mem **buffs)
> +{
> +	struct idpf_ctlq_desc *desc;
> +	uint16_t ntp = cq->next_to_post;
> +	bool buffs_avail = false;
> +	uint16_t tbp = ntp + 1;
> +	int status = 0;
> +	int i = 0;
> +
> +	if (*buff_count > cq->ring_size)
> +		return -EINVAL;
> +
> +	if (*buff_count > 0)
> +		buffs_avail = true;
> +	idpf_acquire_lock(&cq->cq_lock);
> +	if (tbp >= cq->ring_size)
> +		tbp = 0;
> +
> +	if (tbp == cq->next_to_clean)
> +		/* Nothing to do */
> +		goto post_buffs_out;
> +
> +	/* Post buffers for as many as provided or up until the last one used */
> +	while (ntp != cq->next_to_clean) {
> +		desc = IDPF_CTLQ_DESC(cq, ntp);
> +		if (cq->bi.rx_buff[ntp])
> +			goto fill_desc;
> +		if (!buffs_avail) {
> +			/* If the caller hasn't given us any buffers or
> +			 * there are none left, search the ring itself
> +			 * for an available buffer to move to this
> +			 * entry starting at the next entry in the ring
> +			 */
> +			tbp = ntp + 1;
> +			/* Wrap ring if necessary */
> +			if (tbp >= cq->ring_size)
> +				tbp = 0;
> +
> +			while (tbp != cq->next_to_clean) {
> +				if (cq->bi.rx_buff[tbp]) {
> +					cq->bi.rx_buff[ntp] =
> +						cq->bi.rx_buff[tbp];
> +					cq->bi.rx_buff[tbp] = NULL;
> +
> +					/* Found a buffer, no need to
> +					 * search anymore
> +					 */
> +					break;
> +				}
> +
> +				/* Wrap ring if necessary */
> +				tbp++;
> +				if (tbp >= cq->ring_size)
> +					tbp = 0;
> +			}
> +
> +			if (tbp == cq->next_to_clean)
> +				goto post_buffs_out;
> +		} else {
> +			/* Give back pointer to DMA buffer */
> +			cq->bi.rx_buff[ntp] = buffs[i];
> +			i++;
> +
> +			if (i >= *buff_count)
> +				buffs_avail = false;
> +		}
> +
> +fill_desc:
> +		desc->flags =
> +			CPU_TO_LE16(IDPF_CTLQ_FLAG_BUF | IDPF_CTLQ_FLAG_RD);
> +
> +		/* Post buffers to descriptor */
> +		desc->datalen = CPU_TO_LE16(cq->bi.rx_buff[ntp]->size);
> +		desc->params.indirect.addr_high =
> +			CPU_TO_LE32(IDPF_HI_DWORD(cq->bi.rx_buff[ntp]->pa));
> +		desc->params.indirect.addr_low =
> +			CPU_TO_LE32(IDPF_LO_DWORD(cq->bi.rx_buff[ntp]->pa));
> +
> +		ntp++;
> +		if (ntp == cq->ring_size)
> +			ntp = 0;
> +	}
> +
> +post_buffs_out:
> +	/* Only update tail if buffers were actually posted */
> +	if (cq->next_to_post != ntp) {
> +		if (ntp)
> +			/* Update next_to_post to ntp - 1 since current ntp
> +			 * will not have a buffer
> +			 */
> +			cq->next_to_post = ntp - 1;
> +		else
> +			/* Wrap to end of end ring since current ntp is 0 */
> +			cq->next_to_post = cq->ring_size - 1;
> +
> +		wr32(hw, cq->reg.tail, cq->next_to_post);
> +	}
> +
> +	idpf_release_lock(&cq->cq_lock);
> +	/* return the number of buffers that were not posted */
> +	*buff_count = *buff_count - i;
> +
> +	return status;
> +}
> +
> +/**
> + * cpfl_ctlq_recv - receive control queue message call back
> + * @cq: pointer to control queue handle to receive on
> + * @num_q_msg: (input|output) input number of messages that should be received;
> + * output number of messages actually received
> + * @q_msg: (output) array of received control queue messages on this q;
> + * needs to be pre-allocated by caller for as many messages as requested
> + *
> + * Called by interrupt handler or polling mechanism. Caller is expected
> + * to free buffers
> + */
> +int
> +cpfl_ctlq_recv(struct idpf_ctlq_info *cq, uint16_t *num_q_msg,
> +	       struct idpf_ctlq_msg *q_msg)
> +{
> +	uint16_t num_to_clean, ntc, ret_val, flags;
> +	struct idpf_ctlq_desc *desc;
> +	int ret_code = 0;
> +	uint16_t i = 0;
> +
> +	if (!cq || !cq->ring_size)
> +		return -ENOBUFS;
> +
> +	if (*num_q_msg == 0)
> +		return 0;
> +	else if (*num_q_msg > cq->ring_size)
> +		return -EINVAL;
> +
> +	/* take the lock before we start messing with the ring */
> +	idpf_acquire_lock(&cq->cq_lock);
> +	ntc = cq->next_to_clean;
> +	num_to_clean = *num_q_msg;
> +
> +	for (i = 0; i < num_to_clean; i++) {
> +		/* Fetch next descriptor and check if marked as done */
> +		desc = IDPF_CTLQ_DESC(cq, ntc);
> +		flags = LE16_TO_CPU(desc->flags);
> +		if (!(flags & IDPF_CTLQ_FLAG_DD))
> +			break;
> +
> +		ret_val = LE16_TO_CPU(desc->ret_val);
> +		q_msg[i].vmvf_type = (flags &
> +				     (IDPF_CTLQ_FLAG_FTYPE_VM |
> +				      IDPF_CTLQ_FLAG_FTYPE_PF)) >>
> +				      IDPF_CTLQ_FLAG_FTYPE_S;
> +
> +		if (flags & IDPF_CTLQ_FLAG_ERR)
> +			ret_code = -EBADMSG;
> +
> +		q_msg[i].cookie.mbx.chnl_opcode = LE32_TO_CPU(desc->cookie_high);
> +		q_msg[i].cookie.mbx.chnl_retval = LE32_TO_CPU(desc->cookie_low);
> +		q_msg[i].opcode = LE16_TO_CPU(desc->opcode);
> +		q_msg[i].data_len = LE16_TO_CPU(desc->datalen);
> +		q_msg[i].status = ret_val;
> +
> +		if (desc->datalen) {
> +			idpf_memcpy(q_msg[i].ctx.indirect.context,
> +				    &desc->params.indirect,
> +				    IDPF_INDIRECT_CTX_SIZE,
> +				    IDPF_DMA_TO_NONDMA);
> +
> +			/* Assign pointer to dma buffer to ctlq_msg array
> +			 * to be given to upper layer
> +			 */
> +			q_msg[i].ctx.indirect.payload = cq->bi.rx_buff[ntc];
> +
> +			/* Zero out pointer to DMA buffer info;
> +			 * will be repopulated by post buffers API
> +			 */
> +			cq->bi.rx_buff[ntc] = NULL;
> +		} else {
> +			idpf_memcpy(q_msg[i].ctx.direct,
> +				    desc->params.raw,
> +				    IDPF_DIRECT_CTX_SIZE,
> +				    IDPF_DMA_TO_NONDMA);
> +		}
> +
> +		/* Zero out stale data in descriptor */
> +		idpf_memset(desc, 0, sizeof(struct idpf_ctlq_desc),
> +			    IDPF_DMA_MEM);
> +
> +		ntc++;
> +		if (ntc == cq->ring_size)
> +			ntc = 0;
> +	};
> +
> +	cq->next_to_clean = ntc;
> +	idpf_release_lock(&cq->cq_lock);
> +	*num_q_msg = i;
> +	if (*num_q_msg == 0)
> +		ret_code = -ENOMSG;
> +
> +	return ret_code;
> +}
> +
>  int
>  cpfl_vport_ctlq_add(struct idpf_hw *hw, struct cpfl_ctlq_create_info *qinfo,
>  		    struct idpf_ctlq_info **cq)
> @@ -377,3 +773,31 @@ cpfl_vport_ctlq_remove(struct idpf_hw *hw, struct idpf_ctlq_info *cq)
>  {
>  	cpfl_ctlq_remove(hw, cq);
>  }
> +
> +int
> +cpfl_vport_ctlq_send(struct idpf_hw *hw, struct idpf_ctlq_info *cq,
> +		     uint16_t num_q_msg, struct idpf_ctlq_msg q_msg[])
> +{
> +	return cpfl_ctlq_send(hw, cq, num_q_msg, q_msg);
> +}
> +
> +int
> +cpfl_vport_ctlq_recv(struct idpf_ctlq_info *cq, uint16_t *num_q_msg,
> +		     struct idpf_ctlq_msg q_msg[])
> +{
> +	return cpfl_ctlq_recv(cq, num_q_msg, q_msg);
> +}
> +
> +int
> +cpfl_vport_ctlq_post_rx_buffs(struct idpf_hw *hw, struct idpf_ctlq_info *cq,
> +			      uint16_t *buff_count, struct idpf_dma_mem **buffs)
> +{
> +	return cpfl_ctlq_post_rx_buffs(hw, cq, buff_count, buffs);
> +}
> +
> +int
> +cpfl_vport_ctlq_clean_sq(struct idpf_ctlq_info *cq, uint16_t *clean_count,
> +			 struct idpf_ctlq_msg *msg_status[])
> +{
> +	return cpfl_ctlq_clean_sq(cq, clean_count, msg_status);
> +}
> diff --git a/drivers/net/cpfl/cpfl_controlq.h b/drivers/net/cpfl/cpfl_controlq.h
> index 930d717f63..740ae6522c 100644
> --- a/drivers/net/cpfl/cpfl_controlq.h
> +++ b/drivers/net/cpfl/cpfl_controlq.h
> @@ -14,6 +14,13 @@
>  #define CPFL_DFLT_MBX_RING_LEN		512
>  #define CPFL_CFGQ_RING_LEN		512
> 
> +/* CRQ/CSQ specific error codes */
> +#define CPFL_ERR_CTLQ_ERROR             -74     /* -EBADMSG */
> +#define CPFL_ERR_CTLQ_TIMEOUT           -110    /* -ETIMEDOUT */
> +#define CPFL_ERR_CTLQ_FULL              -28     /* -ENOSPC */
> +#define CPFL_ERR_CTLQ_NO_WORK           -42     /* -ENOMSG */
> +#define CPFL_ERR_CTLQ_EMPTY             -105    /* -ENOBUFS */
> +
[Liu, Mingxia] How about replacing the hard-coded numbers with macro expressions, such as:
+#define CPFL_ERR_CTLQ_ERROR             (-EBADMSG)
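
The same could be done for the rest; a sketch of the full set, assuming the errno constants already named in the patch's comments (ETIMEDOUT, ENOSPC, ENOMSG, ENOBUFS) are in scope in this header:

+#define CPFL_ERR_CTLQ_TIMEOUT           (-ETIMEDOUT)
+#define CPFL_ERR_CTLQ_FULL              (-ENOSPC)
+#define CPFL_ERR_CTLQ_NO_WORK           (-ENOMSG)
+#define CPFL_ERR_CTLQ_EMPTY             (-ENOBUFS)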

>  /* Generic queue info structures */
>  /* MB, CONFIG and EVENT q do not have extended info */
>  struct cpfl_ctlq_create_info {
> @@ -44,8 +51,25 @@ int cpfl_ctlq_alloc_ring_res(struct idpf_hw *hw,
>  int cpfl_ctlq_add(struct idpf_hw *hw,
>  		  struct cpfl_ctlq_create_info *qinfo,
>  		  struct idpf_ctlq_info **cq);
> +int cpfl_ctlq_send(struct idpf_hw *hw, struct idpf_ctlq_info *cq,
> +		   u16 num_q_msg, struct idpf_ctlq_msg q_msg[]);
> +int cpfl_ctlq_clean_sq(struct idpf_ctlq_info *cq, u16 *clean_count,
> +		       struct idpf_ctlq_msg *msg_status[]);
> +int cpfl_ctlq_post_rx_buffs(struct idpf_hw *hw, struct idpf_ctlq_info *cq,
> +			    u16 *buff_count, struct idpf_dma_mem **buffs);
> +int cpfl_ctlq_recv(struct idpf_ctlq_info *cq, u16 *num_q_msg,
> +		   struct idpf_ctlq_msg *q_msg);
>  int cpfl_vport_ctlq_add(struct idpf_hw *hw,
>  			struct cpfl_ctlq_create_info *qinfo,
>  			struct idpf_ctlq_info **cq);
>  void cpfl_vport_ctlq_remove(struct idpf_hw *hw, struct idpf_ctlq_info *cq);
> +int cpfl_vport_ctlq_send(struct idpf_hw *hw, struct idpf_ctlq_info *cq,
> +			 u16 num_q_msg, struct idpf_ctlq_msg q_msg[]);
> +int cpfl_vport_ctlq_recv(struct idpf_ctlq_info *cq, u16 *num_q_msg,
> +			 struct idpf_ctlq_msg q_msg[]);
> +
> +int cpfl_vport_ctlq_post_rx_buffs(struct idpf_hw *hw, struct idpf_ctlq_info *cq,
> +				  u16 *buff_count, struct idpf_dma_mem **buffs);
> +int cpfl_vport_ctlq_clean_sq(struct idpf_ctlq_info *cq, u16 *clean_count,
> +			     struct idpf_ctlq_msg *msg_status[]);
>  #endif
> diff --git a/drivers/net/cpfl/cpfl_ethdev.c b/drivers/net/cpfl/cpfl_ethdev.c
> index 618a6a0fe2..08a55f0352 100644
> --- a/drivers/net/cpfl/cpfl_ethdev.c
> +++ b/drivers/net/cpfl/cpfl_ethdev.c
> @@ -16,6 +16,7 @@
>  #include <ethdev_private.h>
>  #include "cpfl_rxtx.h"
>  #include "cpfl_flow.h"
> +#include "cpfl_rules.h"
> 
>  #define CPFL_REPRESENTOR	"representor"
>  #define CPFL_TX_SINGLE_Q	"tx_single"
> @@ -1127,6 +1128,7 @@ cpfl_dev_close(struct rte_eth_dev *dev)
>  	adapter->cur_vport_nb--;
>  	dev->data->dev_private = NULL;
>  	adapter->vports[vport->sw_idx] = NULL;
> +	idpf_free_dma_mem(NULL, &cpfl_vport->itf.flow_dma);
>  	rte_free(cpfl_vport);
> 
>  	return 0;
> @@ -2462,6 +2464,26 @@ cpfl_p2p_queue_info_init(struct cpfl_vport *cpfl_vport,
>  	return 0;
>  }
> 
> +int
> +cpfl_alloc_dma_mem_batch(struct idpf_dma_mem *orig_dma, struct idpf_dma_mem *dma, uint32_t size,
> +			 int batch_size)
> +{
> +	int i;
> +
> +	if (!idpf_alloc_dma_mem(NULL, orig_dma, size * (1 + batch_size))) {
> +		PMD_INIT_LOG(ERR, "Could not alloc dma memory");
> +		return -ENOMEM;
> +	}
> +
> +	for (i = 0; i < batch_size; i++) {
> +		dma[i].va = (void *)((uint64_t)orig_dma->va + size * (i + 1));
> +		dma[i].pa = orig_dma->pa + size * (i + 1);
> +		dma[i].size = size;
> +		dma[i].zone = NULL;
> +	}
> +	return 0;
> +}
> +
>  static int
>  cpfl_dev_vport_init(struct rte_eth_dev *dev, void *init_params)
>  {
> @@ -2511,6 +2533,15 @@ cpfl_dev_vport_init(struct rte_eth_dev *dev, void *init_params)
>  	rte_ether_addr_copy((struct rte_ether_addr *)vport->default_mac_addr,
>  			    &dev->data->mac_addrs[0]);
> 
> +	memset(cpfl_vport->itf.dma, 0, sizeof(cpfl_vport->itf.dma));
> +	memset(cpfl_vport->itf.msg, 0, sizeof(cpfl_vport->itf.msg));
> +	ret = cpfl_alloc_dma_mem_batch(&cpfl_vport->itf.flow_dma,
> +				       cpfl_vport->itf.dma,
> +				       sizeof(union cpfl_rule_cfg_pkt_record),
> +				       CPFL_FLOW_BATCH_SIZE);
> +	if (ret < 0)
> +		goto err_mac_addrs;
> +
>  	if (!adapter->base.is_rx_singleq && !adapter->base.is_tx_singleq) {
>  		memset(&p2p_queue_grps_info, 0, sizeof(p2p_queue_grps_info));
>  		ret = cpfl_p2p_q_grps_add(vport, &p2p_queue_grps_info, p2p_q_vc_out_info);
> diff --git a/drivers/net/cpfl/cpfl_ethdev.h b/drivers/net/cpfl/cpfl_ethdev.h
> index be625284a4..6b02573b4a 100644
> --- a/drivers/net/cpfl/cpfl_ethdev.h
> +++ b/drivers/net/cpfl/cpfl_ethdev.h
> @@ -149,10 +149,14 @@ enum cpfl_itf_type {
> 
>  TAILQ_HEAD(cpfl_flow_list, rte_flow);
> 
> +#define CPFL_FLOW_BATCH_SIZE  490
>  struct cpfl_itf {
>  	enum cpfl_itf_type type;
>  	struct cpfl_adapter_ext *adapter;
>  	struct cpfl_flow_list flow_list;
> +	struct idpf_dma_mem flow_dma;
> +	struct idpf_dma_mem dma[CPFL_FLOW_BATCH_SIZE];
> +	struct idpf_ctlq_msg msg[CPFL_FLOW_BATCH_SIZE];
>  	void *data;
>  };
> 
> @@ -238,6 +242,8 @@ int cpfl_cc_vport_info_get(struct cpfl_adapter_ext *adapter,
>  			   struct cpchnl2_vport_id *vport_id,
>  			   struct cpfl_vport_id *vi,
>  			   struct cpchnl2_get_vport_info_response *response);
> +int cpfl_alloc_dma_mem_batch(struct idpf_dma_mem *orig_dma, struct idpf_dma_mem *dma,
> +			     uint32_t size, int batch_size);
> 
>  #define CPFL_DEV_TO_PCI(eth_dev)		\
>  	RTE_DEV_TO_PCI((eth_dev)->device)
> diff --git a/drivers/net/cpfl/cpfl_fxp_rule.c b/drivers/net/cpfl/cpfl_fxp_rule.c
> new file mode 100644
> index 0000000000..f87ccc9f77
> --- /dev/null
> +++ b/drivers/net/cpfl/cpfl_fxp_rule.c
> @@ -0,0 +1,297 @@
> +/* SPDX-License-Identifier: BSD-3-Clause
> + * Copyright(c) 2023 Intel Corporation
> + */
> +#include "cpfl_ethdev.h"
> +
> +#include "cpfl_fxp_rule.h"
> +#include "cpfl_logs.h"
> +
> +#define CTLQ_SEND_RETRIES 100
> +#define CTLQ_RECEIVE_RETRIES 100
> +
> +int
> +cpfl_send_ctlq_msg(struct idpf_hw *hw, struct idpf_ctlq_info *cq, u16 num_q_msg,
> +		   struct idpf_ctlq_msg q_msg[])
> +{
> +	struct idpf_ctlq_msg **msg_ptr_list;
> +	u16 clean_count = 0;
> +	int num_cleaned = 0;
> +	int retries = 0;
> +	int ret = 0;
> +
> +	msg_ptr_list = calloc(num_q_msg, sizeof(struct idpf_ctlq_msg *));
> +	if (!msg_ptr_list) {
> +		PMD_INIT_LOG(ERR, "no memory for cleaning ctlq");
> +		ret = -ENOMEM;
> +		goto err;
> +	}
> +
> +	ret = cpfl_vport_ctlq_send(hw, cq, num_q_msg, q_msg);
> +	if (ret) {
> +		PMD_INIT_LOG(ERR, "cpfl_vport_ctlq_send() failed with error: 0x%4x", ret);
> +		goto send_err;
> +	}
> +
> +	while (retries <= CTLQ_SEND_RETRIES) {
> +		clean_count = num_q_msg - num_cleaned;
> +		ret = cpfl_vport_ctlq_clean_sq(cq, &clean_count,
> +					       &msg_ptr_list[num_cleaned]);
> +		if (ret) {
> +			PMD_INIT_LOG(ERR, "clean ctlq failed: 0x%4x", ret);
> +			goto send_err;
> +		}
> +
> +		num_cleaned += clean_count;
> +		retries++;
> +		if (num_cleaned >= num_q_msg)
> +			break;
> +		rte_delay_us_sleep(10);
> +	}
> +
> +	if (retries > CTLQ_SEND_RETRIES) {
> +		PMD_INIT_LOG(ERR, "timed out while polling for completions");
> +		ret = -1;
> +		goto send_err;
> +	}
> +
> +send_err:
> +	if (msg_ptr_list)
> +		free(msg_ptr_list);
> +err:
> +	return ret;
> +}
> +
> +static int
> +cpfl_process_rx_ctlq_msg(u16 num_q_msg, struct idpf_ctlq_msg *q_msg)
> +{
> +	u16 i;
> +	int ret = 0;
> +
> +	if (!num_q_msg || !q_msg)
> +		return -EINVAL;
> +
> +	for (i = 0; i < num_q_msg; i++) {
> +		if (q_msg[i].status == CPFL_CFG_PKT_ERR_OK) {
> +			continue;
> +		} else if (q_msg[i].status == CPFL_CFG_PKT_ERR_EEXIST &&
> +			   q_msg[i].opcode == cpfl_ctlq_sem_add_rule) {
> +			PMD_INIT_LOG(ERR, "The rule has confliction with already existed one");
> +			return -EINVAL;
> +		} else if (q_msg[i].status == CPFL_CFG_PKT_ERR_ENOTFND &&
> +			   q_msg[i].opcode == cpfl_ctlq_sem_del_rule) {
> +			PMD_INIT_LOG(ERR, "The rule has already deleted");
> +			return -EINVAL;
> +		} else {
> +			PMD_INIT_LOG(ERR, "Invalid rule");
> +			return -EINVAL;
> +		}
> +	}
> +
> +	return ret;
[Liu, Mingxia] The ret value is never changed; can it be deleted so that the function simply returns 0?
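
A minimal sketch of that simplification; the loop body is unchanged from the patch, so it is elided here:

static int
cpfl_process_rx_ctlq_msg(u16 num_q_msg, struct idpf_ctlq_msg *q_msg)
{
	u16 i;

	if (!num_q_msg || !q_msg)
		return -EINVAL;

	for (i = 0; i < num_q_msg; i++) {
		/* ... status/opcode checks exactly as in the patch ... */
	}

	return 0;
}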
> +}
> +
> +int
> +cpfl_receive_ctlq_msg(struct idpf_hw *hw, struct idpf_ctlq_info *cq, u16 num_q_msg,
> +		      struct idpf_ctlq_msg q_msg[])
> +{
> +	int retries = 0;
> +	struct idpf_dma_mem *dma;
> +	u16 i;
> +	uint16_t buff_cnt;
> +	int ret = 0, handle_rule = 0;
> +
> +	retries = 0;
> +	while (retries <= CTLQ_RECEIVE_RETRIES) {
> +		rte_delay_us_sleep(10);
> +		ret = cpfl_vport_ctlq_recv(cq, &num_q_msg, &q_msg[0]);
> +
> +		if (ret && ret != CPFL_ERR_CTLQ_NO_WORK &&
> +		    ret != CPFL_ERR_CTLQ_ERROR) {
> +			PMD_INIT_LOG(ERR, "failed to recv ctrlq msg. err: 0x%4x\n", ret);
> +			retries++;
> +			continue;
> +		}
> +
> +		if (ret == CPFL_ERR_CTLQ_NO_WORK) {
> +			retries++;
> +			continue;
> +		}
> +
> +		if (ret == CPFL_ERR_CTLQ_EMPTY)
> +			break;
> +
> +		ret = cpfl_process_rx_ctlq_msg(num_q_msg, q_msg);
> +		if (ret) {
> +			PMD_INIT_LOG(WARNING, "failed to process rx_ctrlq msg");
[Liu, Mingxia] The log level is WARNING, but the return value is propagated to the calling function; how about using the ERROR log level?
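
For example, the line quoted above would become:

			PMD_INIT_LOG(ERR, "failed to process rx_ctrlq msg");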
> +			handle_rule = ret;
> +		}
> +
> +		for (i = 0; i < num_q_msg; i++) {
> +			if (q_msg[i].data_len > 0)
> +				dma = q_msg[i].ctx.indirect.payload;
> +			else
> +				dma = NULL;
> +
> +			buff_cnt = dma ? 1 : 0;
> +			ret = cpfl_vport_ctlq_post_rx_buffs(hw, cq, &buff_cnt, &dma);
> +			if (ret)
> +				PMD_INIT_LOG(WARNING, "could not posted recv bufs\n");
[Liu, Mingxia] Same as above: the log level is WARNING, but the return value is propagated to the calling function; how about using the ERROR log level?
> +		}
> +		break;
> +	}
> +
> +	if (retries > CTLQ_RECEIVE_RETRIES) {
> +		PMD_INIT_LOG(ERR, "timed out while polling for receive response");
> +		ret = -1;
> +	}
> +
> +	return ret + handle_rule;
[Liu, Mingxia] This looks a bit confusing: the calling function cpfl_rule_process() only checks whether the return value is < 0, so how about returning -1 when (ret < 0 || handle_rule < 0)?
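
A sketch of the suggested ending, collapsing any negative status into -1:

	if (ret < 0 || handle_rule < 0)
		return -1;

	return 0;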
> +}
> +
> +static int
> +cpfl_mod_rule_pack(struct cpfl_rule_info *rinfo, struct idpf_dma_mem *dma,
> +		   struct idpf_ctlq_msg *msg)
> +{
> +	struct cpfl_mod_rule_info *minfo = &rinfo->mod;
> +	union cpfl_rule_cfg_pkt_record *blob = NULL;
> +	struct cpfl_rule_cfg_data cfg = {0};
> +
> +	/* prepare rule blob */
> +	if (!dma->va) {
> +		PMD_INIT_LOG(ERR, "dma mem passed to %s is null\n", __func__);
> +		return -1;
> +	}
> +	blob = (union cpfl_rule_cfg_pkt_record *)dma->va;
> +	memset(blob, 0, sizeof(*blob));
> +	memset(&cfg, 0, sizeof(cfg));
> +
> +	/* fill info for both query and add/update */
> +	cpfl_fill_rule_mod_content(minfo->mod_obj_size,
> +				   minfo->pin_mod_content,
> +				   minfo->mod_index,
> +				   &cfg.ext.mod_content);
> +
> +	/* only fill content for add/update */
> +	memcpy(blob->mod_blob, minfo->mod_content,
> +	       minfo->mod_content_byte_len);
> +
> +#define NO_HOST_NEEDED 0
> +	/* pack message */
> +	cpfl_fill_rule_cfg_data_common(cpfl_ctlq_mod_add_update_rule,
> +				       rinfo->cookie,
> +				       0, /* vsi_id not used for mod */
> +				       rinfo->port_num,
> +				       NO_HOST_NEEDED,
> +				       0, /* time_sel */
> +				       0, /* time_sel_val */
> +				       0, /* cache_wr_thru */
> +				       rinfo->resp_req,
> +				       (u16)sizeof(*blob),
> +				       (void *)dma,
> +				       &cfg.common);
> +	cpfl_prep_rule_desc(&cfg, msg);
> +	return 0;
> +}
> +
> +static int
> +cpfl_default_rule_pack(struct cpfl_rule_info *rinfo, struct idpf_dma_mem *dma,
> +		       struct idpf_ctlq_msg *msg, bool add)
> +{
> +	union cpfl_rule_cfg_pkt_record *blob = NULL;
> +	enum cpfl_ctlq_rule_cfg_opc opc;
> +	struct cpfl_rule_cfg_data cfg;
> +	uint16_t cfg_ctrl;
> +
> +	if (!dma->va) {
> +		PMD_INIT_LOG(ERR, "dma mem passed to %s is null\n", __func__);
> +		return -1;
> +	}
> +	blob = (union cpfl_rule_cfg_pkt_record *)dma->va;
> +	memset(blob, 0, sizeof(*blob));
> +	memset(msg, 0, sizeof(*msg));
> +
> +	if (rinfo->type == CPFL_RULE_TYPE_SEM) {
> +		cfg_ctrl = CPFL_GET_MEV_SEM_RULE_CFG_CTRL(rinfo->sem.prof_id,
> +							  rinfo->sem.sub_prof_id,
> +							  rinfo->sem.pin_to_cache,
> +							  rinfo->sem.fixed_fetch);
> +		cpfl_prep_sem_rule_blob(rinfo->sem.key, rinfo->sem.key_byte_len,
> +					rinfo->act_bytes, rinfo->act_byte_len,
> +					cfg_ctrl, blob);
> +		opc = add ? cpfl_ctlq_sem_add_rule : cpfl_ctlq_sem_del_rule;
> +	} else {
> +		PMD_INIT_LOG(ERR, "not support %d rule.", rinfo->type);
> +		return -1;
> +	}
> +
> +	cpfl_fill_rule_cfg_data_common(opc,
> +				       rinfo->cookie,
> +				       rinfo->vsi,
> +				       rinfo->port_num,
> +				       rinfo->host_id,
> +				       0, /* time_sel */
> +				       0, /* time_sel_val */
> +				       0, /* cache_wr_thru */
> +				       rinfo->resp_req,
> +				       sizeof(union cpfl_rule_cfg_pkt_record),
> +				       dma,
> +				       &cfg.common);
> +	cpfl_prep_rule_desc(&cfg, msg);
> +	return 0;
> +}
> +
> +static int
> +cpfl_rule_pack(struct cpfl_rule_info *rinfo, struct idpf_dma_mem *dma,
> +	       struct idpf_ctlq_msg *msg, bool add)
> +{
> +	int ret = 0;
> +
> +	if (rinfo->type == CPFL_RULE_TYPE_SEM) {
> +		if (cpfl_default_rule_pack(rinfo, dma, msg, add) < 0)
> +			ret = -1;
> +	} else if (rinfo->type == CPFL_RULE_TYPE_MOD) {
> +		if (cpfl_mod_rule_pack(rinfo, dma, msg) < 0)
> +			ret = -1;
> +	} else {
> +		PMD_INIT_LOG(ERR, "Invalid type of rule");
> +		ret = -1;
> +	}
> +
> +	return ret;
> +}
> +
> +int
> +cpfl_rule_process(struct cpfl_itf *itf,
> +		  struct idpf_ctlq_info *tx_cq,
> +		  struct idpf_ctlq_info *rx_cq,
> +		  struct cpfl_rule_info *rinfo,
> +		  int rule_num,
> +		  bool add)
> +{
> +	struct idpf_hw *hw = &itf->adapter->base.hw;
> +	int i;
> +	int ret = 0;
> +
> +	if (rule_num == 0)
> +		return 0;
> +
> +	for (i = 0; i < rule_num; i++) {
> +		ret = cpfl_rule_pack(&rinfo[i], &itf->dma[i], &itf->msg[i], add);
> +		if (ret) {
> +			PMD_INIT_LOG(ERR, "Could not pack rule");
> +			return ret;
> +		}
> +	}
> +	ret = cpfl_send_ctlq_msg(hw, tx_cq, rule_num, itf->msg);
> +	if (ret) {
> +		PMD_INIT_LOG(ERR, "Failed to send control message");
> +		return ret;
> +	}
> +	ret = cpfl_receive_ctlq_msg(hw, rx_cq, rule_num, itf->msg);
> +	if (ret) {
> +		PMD_INIT_LOG(ERR, "Failed to update rule");
> +		return ret;
> +	}
> +
> +	return 0;
> +}


Thread overview: 128+ messages
2023-08-12  7:55 [PATCH v1 0/5] add rte flow support for cpfl Yuying Zhang
2023-08-12  7:55 ` [PATCH v1 1/5] net/cpfl: setup rte flow skeleton Yuying Zhang
2023-08-25  3:55   ` Xing, Beilei
2023-08-12  7:55 ` [PATCH v1 2/5] common/idpf/base: refine idpf ctlq message structure Yuying Zhang
2023-08-25  5:55   ` Xing, Beilei
2023-08-12  7:55 ` [PATCH v1 3/5] net/cpfl: add cpfl control queue message handle Yuying Zhang
2023-08-25  6:23   ` Xing, Beilei
2023-08-12  7:55 ` [PATCH v1 4/5] net/cpfl: add fxp rule module Yuying Zhang
2023-08-25  7:35   ` Xing, Beilei
2023-08-25  8:42   ` Xing, Beilei
2023-08-12  7:55 ` [PATCH v1 5/5] net/cpfl: add fxp flow engine Yuying Zhang
2023-08-25  9:15   ` Xing, Beilei
2023-09-01 11:31 ` [PATCH v2 0/8] add rte flow support for cpfl Yuying Zhang
2023-09-01 11:31   ` [PATCH v2 1/8] net/cpfl: parse flow parser file in devargs Yuying Zhang
2023-09-01 11:31   ` [PATCH v2 2/8] net/cpfl: add flow json parser Yuying Zhang
2023-09-01 11:31   ` [PATCH v2 3/8] net/cpfl: add FXP low level implementation Yuying Zhang
2023-09-01 11:31   ` [PATCH v2 4/8] net/cpfl: setup ctrl path Yuying Zhang
2023-09-01 11:31   ` [PATCH v2 5/8] net/cpfl: set up rte flow skeleton Yuying Zhang
2023-09-01 11:31   ` [PATCH v2 6/8] net/cpfl: add fxp rule module Yuying Zhang
2023-09-01 11:31   ` [PATCH v2 7/8] net/cpfl: add fxp flow engine Yuying Zhang
2023-09-01 11:31   ` [PATCH v2 8/8] net/cpfl: add flow support for representor Yuying Zhang
2023-09-06  9:33   ` [PATCH v3 0/9] add rte flow support for cpfl Wenjing Qiao
2023-08-15 16:50     ` [PATCH v4 " Zhang, Yuying
2023-08-15 16:50       ` [PATCH v4 1/9] net/cpfl: add json parser for rte flow pattern rules Zhang, Yuying
2023-09-15 15:11         ` Stephen Hemminger
2023-08-15 16:50       ` [PATCH v4 2/9] net/cpfl: add mod rule parser support for rte flow Zhang, Yuying
2023-08-15 16:50       ` [PATCH v4 3/9] net/cpfl: set up rte flow skeleton Zhang, Yuying
2023-08-15 16:50       ` [PATCH v4 4/9] net/cpfl: add FXP low level implementation Zhang, Yuying
2023-08-15 16:50       ` [PATCH v4 5/9] net/cpfl: add fxp rule module Zhang, Yuying
2023-08-15 16:50       ` [PATCH v4 6/9] net/cpfl: add fxp flow engine Zhang, Yuying
2023-08-15 16:50       ` [PATCH v4 7/9] net/cpfl: add flow support for representor Zhang, Yuying
2023-08-15 16:50       ` [PATCH v4 8/9] app/test-pmd: refine encap content Zhang, Yuying
2023-08-15 16:50       ` [PATCH v4 9/9] net/cpfl: fix incorrect status calculation Zhang, Yuying
2023-09-06  9:33     ` [PATCH v3 1/9] net/cpfl: parse flow parser file in devargs Wenjing Qiao
2023-09-11  0:48       ` Wu, Jingjing
2023-09-06  9:34     ` [PATCH v3 2/9] net/cpfl: add flow json parser Wenjing Qiao
2023-09-08  6:26       ` Liu, Mingxia
2023-09-11  6:24       ` Wu, Jingjing
2023-09-06  9:34     ` [PATCH v3 3/9] net/cpfl: add FXP low level implementation Wenjing Qiao
2023-09-06  9:34     ` [PATCH v3 4/9] net/cpfl: setup ctrl path Wenjing Qiao
2023-09-11  6:30       ` Liu, Mingxia
2023-09-11  6:36       ` Wu, Jingjing
2023-09-06  9:34     ` [PATCH v3 5/9] net/cpfl: set up rte flow skeleton Wenjing Qiao
2023-09-06  9:34     ` [PATCH v3 6/9] net/cpfl: add fxp rule module Wenjing Qiao
2023-09-12  7:40       ` Liu, Mingxia [this message]
2023-09-06  9:34     ` [PATCH v3 7/9] net/cpfl: add fxp flow engine Wenjing Qiao
2023-09-06  9:34     ` [PATCH v3 8/9] net/cpfl: add flow support for representor Wenjing Qiao
2023-09-06  9:34     ` [PATCH v3 9/9] app/test-pmd: refine encap content Wenjing Qiao
2023-09-15 10:00     ` [PATCH v5 0/9] add rte flow support for cpfl Zhang, Yuying
2023-08-22  1:02       ` [PATCH v6 0/8] " Zhang, Yuying
2023-08-22  1:02         ` [PATCH v6 1/8] net/cpfl: add json parser for rte flow pattern rules Zhang, Yuying
2023-08-22  1:02         ` [PATCH v6 2/8] net/cpfl: add mod rule parser support for rte flow Zhang, Yuying
2023-08-22  1:02         ` [PATCH v6 3/8] net/cpfl: set up rte flow skeleton Zhang, Yuying
2023-08-22  1:02         ` [PATCH v6 4/8] net/cpfl: set up control path Zhang, Yuying
2023-08-22  1:02         ` [PATCH v6 5/8] net/cpfl: add FXP low level implementation Zhang, Yuying
2023-08-22  1:02         ` [PATCH v6 6/8] net/cpfl: add fxp rule module Zhang, Yuying
2023-08-22  1:02         ` [PATCH v6 7/8] net/cpfl: add fxp flow engine Zhang, Yuying
2023-08-22  1:02         ` [PATCH v6 8/8] net/cpfl: add flow support for representor Zhang, Yuying
2023-09-26 18:16         ` [PATCH v7 0/8] add rte flow support for cpfl yuying.zhang
2023-09-26 18:16           ` [PATCH v7 1/8] net/cpfl: add json parser for rte flow pattern rules yuying.zhang
2023-09-26 19:03             ` Stephen Hemminger
2023-09-27  1:21               ` Zhang, Qi Z
2023-09-26 18:16           ` [PATCH v7 2/8] net/cpfl: build action mapping rules from JSON yuying.zhang
2023-09-26 18:16           ` [PATCH v7 3/8] net/cpfl: set up rte flow skeleton yuying.zhang
2023-09-26 18:16           ` [PATCH v7 4/8] net/cpfl: set up control path yuying.zhang
2023-09-26 18:17           ` [PATCH v7 5/8] net/cpfl: add FXP low level implementation yuying.zhang
2023-09-26 18:17           ` [PATCH v7 6/8] net/cpfl: add fxp rule module yuying.zhang
2023-09-28  3:29             ` Zhang, Qi Z
2023-09-26 18:17           ` [PATCH v7 7/8] net/cpfl: add fxp flow engine yuying.zhang
2023-09-26 18:17           ` [PATCH v7 8/8] net/cpfl: add flow support for representor yuying.zhang
2023-09-27 12:54           ` [PATCH v8 0/9] add rte flow support for cpfl yuying.zhang
2023-09-27 12:54             ` [PATCH v8 1/9] net/cpfl: add json parser for rte flow pattern rules yuying.zhang
2023-09-27 12:54             ` [PATCH v8 2/9] net/cpfl: build action mapping rules from JSON yuying.zhang
2023-09-27 12:54             ` [PATCH v8 3/9] net/cpfl: set up rte flow skeleton yuying.zhang
2023-09-27 12:54             ` [PATCH v8 4/9] net/cpfl: set up control path yuying.zhang
2023-09-27 12:54             ` [PATCH v8 5/9] net/cpfl: add FXP low level implementation yuying.zhang
2023-09-27 12:54             ` [PATCH v8 6/9] net/cpfl: add fxp rule module yuying.zhang
2023-09-27 12:54             ` [PATCH v8 7/9] net/cpfl: add fxp flow engine yuying.zhang
2023-09-27 12:54             ` [PATCH v8 8/9] net/cpfl: add flow support for representor yuying.zhang
2023-09-27 12:54             ` [PATCH v8 9/9] net/cpfl: add support of to represented port action yuying.zhang
2023-09-28  3:37             ` [PATCH v8 0/9] add rte flow support for cpfl Zhang, Qi Z
2023-09-28  8:44             ` [PATCH v9 " yuying.zhang
2023-09-08 16:05               ` [PATCH v10 " Zhang, Yuying
2023-09-08 16:05                 ` [PATCH v10 1/9] net/cpfl: parse flow offloading hint from JSON Zhang, Yuying
2023-09-08 16:05                 ` [PATCH v10 2/9] net/cpfl: build action mapping rules " Zhang, Yuying
2023-09-08 16:05                 ` [PATCH v10 3/9] net/cpfl: set up flow offloading skeleton Zhang, Yuying
2023-09-08 16:05                 ` [PATCH v10 4/9] net/cpfl: set up control path Zhang, Yuying
2023-09-08 16:05                 ` [PATCH v10 5/9] net/cpfl: add FXP low level implementation Zhang, Yuying
2023-09-08 16:05                 ` [PATCH v10 6/9] net/cpfl: implement FXP rule creation and destroying Zhang, Yuying
2023-09-08 16:05                 ` [PATCH v10 7/9] net/cpfl: adapt FXP to flow engine Zhang, Yuying
2023-09-08 16:05                 ` [PATCH v10 8/9] net/cpfl: support flow ops on representor Zhang, Yuying
2023-09-08 16:05                 ` [PATCH v10 9/9] net/cpfl: support represented port action Zhang, Yuying
2023-09-28  8:44               ` [PATCH v9 1/9] net/cpfl: add json parser for rte flow pattern rules yuying.zhang
2023-09-28  8:44               ` [PATCH v9 2/9] net/cpfl: build action mapping rules from JSON yuying.zhang
2023-09-28  8:44               ` [PATCH v9 3/9] net/cpfl: set up rte flow skeleton yuying.zhang
2023-10-15 13:01                 ` Thomas Monjalon
2023-10-16  3:07                   ` Zhang, Qi Z
2023-09-28  8:44               ` [PATCH v9 4/9] net/cpfl: set up control path yuying.zhang
2023-09-28  8:44               ` [PATCH v9 5/9] net/cpfl: add FXP low level implementation yuying.zhang
2023-09-28  8:44               ` [PATCH v9 6/9] net/cpfl: add fxp rule module yuying.zhang
2023-09-28  8:44               ` [PATCH v9 7/9] net/cpfl: add fxp flow engine yuying.zhang
2023-09-28  8:44               ` [PATCH v9 8/9] net/cpfl: add flow support for representor yuying.zhang
2023-09-28  8:44               ` [PATCH v9 9/9] net/cpfl: add support of to represented port action yuying.zhang
2023-09-28 12:45               ` [PATCH v9 0/9] add rte flow support for cpfl Zhang, Qi Z
2023-09-28 16:04               ` Stephen Hemminger
2023-10-09  4:00               ` [PATCH v10 " Zhang, Yuying
2023-10-09  4:00                 ` [PATCH v10 1/9] net/cpfl: parse flow offloading hint from JSON Zhang, Yuying
2023-10-09  4:00                 ` [PATCH v10 2/9] net/cpfl: build action mapping rules " Zhang, Yuying
2023-10-09  4:00                 ` [PATCH v10 3/9] net/cpfl: set up flow offloading skeleton Zhang, Yuying
2023-10-09  4:00                 ` [PATCH v10 4/9] net/cpfl: set up control path Zhang, Yuying
2023-10-09  4:00                 ` [PATCH v10 5/9] net/cpfl: add FXP low level implementation Zhang, Yuying
2023-10-09  4:00                 ` [PATCH v10 6/9] net/cpfl: implement FXP rule creation and destroying Zhang, Yuying
2023-10-09  4:00                 ` [PATCH v10 7/9] net/cpfl: adapt FXP to flow engine Zhang, Yuying
2023-10-09  4:00                 ` [PATCH v10 8/9] net/cpfl: support flow ops on representor Zhang, Yuying
2023-10-09  4:00                 ` [PATCH v10 9/9] net/cpfl: support represented port action Zhang, Yuying
2023-10-10  1:31                 ` [PATCH v10 0/9] add rte flow support for cpfl Zhang, Qi Z
2023-10-15 11:21               ` [PATCH v9 " Thomas Monjalon
2023-09-15 10:00       ` [PATCH v5 1/9] net/cpfl: add json parser for rte flow pattern rules Zhang, Yuying
2023-09-15 11:14         ` Zhang, Qi Z
2023-09-15 10:00       ` [PATCH v5 2/9] net/cpfl: add mod rule parser support for rte flow Zhang, Yuying
2023-09-15 10:00       ` [PATCH v5 3/9] net/cpfl: set up rte flow skeleton Zhang, Yuying
2023-09-15 10:00       ` [PATCH v5 4/9] net/cpfl: add FXP low level implementation Zhang, Yuying
2023-09-15 11:19         ` Zhang, Qi Z
2023-09-15 10:00       ` [PATCH v5 5/9] net/cpfl: add fxp rule module Zhang, Yuying
2023-09-15 10:00       ` [PATCH v5 6/9] net/cpfl: add fxp flow engine Zhang, Yuying
2023-09-15 10:00       ` [PATCH v5 7/9] net/cpfl: add flow support for representor Zhang, Yuying
2023-09-15 10:00       ` [PATCH v5 8/9] app/test-pmd: refine encap content Zhang, Yuying
2023-09-15 10:00       ` [PATCH v5 9/9] net/cpfl: fix incorrect status calculation Zhang, Yuying
