From: "Liu, Mingxia" <mingxia.liu@intel.com>
To: "Xing, Beilei" <beilei.xing@intel.com>,
"Wu, Jingjing" <jingjing.wu@intel.com>
Cc: "dev@dpdk.org" <dev@dpdk.org>, "Wang, Xiao W" <xiao.w.wang@intel.com>
Subject: RE: [PATCH v4 09/13] net/cpfl: support hairpin queue start/stop
Date: Tue, 30 May 2023 03:30:40 +0000 [thread overview]
Message-ID: <PH0PR11MB5877137AD1C68D9288885259EC4B9@PH0PR11MB5877.namprd11.prod.outlook.com> (raw)
In-Reply-To: <20230526073850.101079-10-beilei.xing@intel.com>
> -----Original Message-----
> From: Xing, Beilei <beilei.xing@intel.com>
> Sent: Friday, May 26, 2023 3:39 PM
> To: Wu, Jingjing <jingjing.wu@intel.com>
> Cc: dev@dpdk.org; Liu, Mingxia <mingxia.liu@intel.com>; Xing, Beilei
> <beilei.xing@intel.com>; Wang, Xiao W <xiao.w.wang@intel.com>
> Subject: [PATCH v4 09/13] net/cpfl: support hairpin queue start/stop
>
> From: Beilei Xing <beilei.xing@intel.com>
>
> This patch supports Rx/Tx hairpin queue start/stop.
>
> Signed-off-by: Xiao Wang <xiao.w.wang@intel.com>
> Signed-off-by: Mingxia Liu <mingxia.liu@intel.com>
> Signed-off-by: Beilei Xing <beilei.xing@intel.com>
> ---
> drivers/net/cpfl/cpfl_ethdev.c | 41 +++++++++
> drivers/net/cpfl/cpfl_rxtx.c | 151 +++++++++++++++++++++++++++++----
> drivers/net/cpfl/cpfl_rxtx.h | 14 +++
> 3 files changed, 188 insertions(+), 18 deletions(-)
>
> diff --git a/drivers/net/cpfl/cpfl_ethdev.c b/drivers/net/cpfl/cpfl_ethdev.c
> index a06def06d0..8035878602 100644
> --- a/drivers/net/cpfl/cpfl_ethdev.c
> +++ b/drivers/net/cpfl/cpfl_ethdev.c
> @@ -896,6 +896,47 @@ cpfl_start_queues(struct rte_eth_dev *dev)
> }
> }
>
> + /* For non-manual bind hairpin queues, enable Tx queue and Rx queue,
> + * then enable Tx completion queue and Rx buffer queue.
> + */
> + for (i = 0; i < dev->data->nb_tx_queues; i++) {
[Liu, Mingxia] Better to use for (i = cpfl_vport->nb_data_txq; i < dev->data->nb_tx_queues; i++), because when i < cpfl_vport->nb_data_txq, (cpfl_txq->hairpin_info.hairpin_q && !cpfl_vport->p2p_manual_bind) must be false; otherwise (i - cpfl_vport->nb_data_txq) would be negative.
> + cpfl_txq = dev->data->tx_queues[i];
> + if (cpfl_txq->hairpin_info.hairpin_q && !cpfl_vport->p2p_manual_bind) {
> + err = cpfl_switch_hairpin_rxtx_queue(cpfl_vport, i - cpfl_vport->nb_data_txq, false, true);
> + if (err)
> + PMD_DRV_LOG(ERR, "Failed to switch hairpin TX queue %u on", i);
> + else
> + cpfl_txq->base.q_started = true;
> + }
> + }
> +
> + for (i = 0; i < dev->data->nb_rx_queues; i++) {
[Liu, Mingxia] Better to use for (i = cpfl_vport->nb_data_rxq; i < dev->data->nb_rx_queues; i++), because when i < cpfl_vport->nb_data_rxq, (cpfl_rxq->hairpin_info.hairpin_q && !cpfl_vport->p2p_manual_bind) must be false; otherwise (i - cpfl_vport->nb_data_rxq) would be negative.
> + cpfl_rxq = dev->data->rx_queues[i];
> + if (cpfl_rxq->hairpin_info.hairpin_q && !cpfl_vport->p2p_manual_bind) {
> + err = cpfl_switch_hairpin_rxtx_queue(cpfl_vport, i - cpfl_vport->nb_data_rxq, true, true);
> + if (err)
> + PMD_DRV_LOG(ERR, "Failed to switch hairpin RX queue %u on", i);
> + else
> + cpfl_rxq->base.q_started = true;
> + }
> + }
> +
> + if (!cpfl_vport->p2p_manual_bind &&
> + cpfl_vport->p2p_tx_complq != NULL &&
> + cpfl_vport->p2p_rx_bufq != NULL) {
> + err = cpfl_switch_hairpin_bufq_complq(cpfl_vport, true);
> + if (err != 0) {
> + PMD_DRV_LOG(ERR, "Failed to switch hairpin Tx complq and Rx bufq");
> + return err;
> + }
> + }
> +
> return err;
> }
>
> diff --git a/drivers/net/cpfl/cpfl_rxtx.c b/drivers/net/cpfl/cpfl_rxtx.c
> index 702054d1c5..38c48ad8c7 100644
> --- a/drivers/net/cpfl/cpfl_rxtx.c
> +++ b/drivers/net/cpfl/cpfl_rxtx.c
> @@ -991,6 +991,81 @@ cpfl_hairpin_txq_config(struct idpf_vport *vport, struct cpfl_tx_queue *cpfl_txq
> 	return idpf_vc_txq_config_by_info(vport, txq_info, 1);
> }
>
> +int
> +cpfl_switch_hairpin_bufq_complq(struct cpfl_vport *cpfl_vport, bool on)
> +{
> + struct idpf_vport *vport = &cpfl_vport->base;
> + uint32_t type;
> + int err, queue_id;
> +
> + type = VIRTCHNL2_QUEUE_TYPE_TX_COMPLETION;
> + queue_id = cpfl_vport->p2p_tx_complq->queue_id;
> + err = idpf_vc_ena_dis_one_queue(vport, queue_id, type, on);
> + if (err)
> + return err;
> +
> + type = VIRTCHNL2_QUEUE_TYPE_RX_BUFFER;
> + queue_id = cpfl_vport->p2p_rx_bufq->queue_id;
> + err = idpf_vc_ena_dis_one_queue(vport, queue_id, type, on);
> +
> + return err;
> +}
> +
> +int
> +cpfl_switch_hairpin_rxtx_queue(struct cpfl_vport *cpfl_vport, uint16_t logic_qid, bool rx, bool on)
> +{
> + struct idpf_vport *vport = &cpfl_vport->base;
> + uint32_t type;
> + int err, queue_id;
> +
> + type = rx ? VIRTCHNL2_QUEUE_TYPE_RX : VIRTCHNL2_QUEUE_TYPE_TX;
> +
> + if (type == VIRTCHNL2_QUEUE_TYPE_RX)
> + queue_id = cpfl_hw_qid_get(cpfl_vport->p2p_q_chunks_info->rx_start_qid, logic_qid);
> + else
> + queue_id = cpfl_hw_qid_get(cpfl_vport->p2p_q_chunks_info->tx_start_qid, logic_qid);
> + err = idpf_vc_ena_dis_one_queue(vport, queue_id, type, on);
> + if (err)
> + return err;
> +
> + return err;
> +}
> +
> +static int
> +cpfl_alloc_split_p2p_rxq_mbufs(struct idpf_rx_queue *rxq)
> +{
> + volatile struct virtchnl2_p2p_rx_buf_desc *rxd;
> + struct rte_mbuf *mbuf = NULL;
> + uint64_t dma_addr;
> + uint16_t i;
> +
> + for (i = 0; i < rxq->nb_rx_desc; i++) {
> + mbuf = rte_mbuf_raw_alloc(rxq->mp);
> + if (unlikely(!mbuf)) {
> + PMD_DRV_LOG(ERR, "Failed to allocate mbuf for RX");
> + return -ENOMEM;
> + }
> +
> + rte_mbuf_refcnt_set(mbuf, 1);
> + mbuf->next = NULL;
> + mbuf->data_off = RTE_PKTMBUF_HEADROOM;
> + mbuf->nb_segs = 1;
> + mbuf->port = rxq->port_id;
> + dma_addr = rte_cpu_to_le_64(rte_mbuf_data_iova_default(mbuf));
> +
> + rxd = &((volatile struct virtchnl2_p2p_rx_buf_desc *)(rxq->rx_ring))[i];
> + rxd->reserve0 = 0;
> + rxd->pkt_addr = dma_addr;
> + }
> +
> + rxq->nb_rx_hold = 0;
> + /* The value written in the RX buffer queue tail register must be a multiple of 8. */
> + rxq->rx_tail = rxq->nb_rx_desc - CPFL_HAIRPIN_Q_TAIL_AUX_VALUE;
> +
> + return 0;
> +}
> +
> int
> cpfl_rx_queue_init(struct rte_eth_dev *dev, uint16_t rx_queue_id)
> {
> @@ -1044,22 +1119,31 @@ cpfl_rx_queue_init(struct rte_eth_dev *dev, uint16_t rx_queue_id)
> IDPF_PCI_REG_WRITE(rxq->qrx_tail, rxq->nb_rx_desc - 1);
> } else {
> /* Split queue */
> - err = idpf_qc_split_rxq_mbufs_alloc(rxq->bufq1);
> - if (err != 0) {
> - PMD_DRV_LOG(ERR, "Failed to allocate RX buffer queue mbuf");
> - return err;
> - }
> - err = idpf_qc_split_rxq_mbufs_alloc(rxq->bufq2);
> - if (err != 0) {
> - PMD_DRV_LOG(ERR, "Failed to allocate RX buffer queue mbuf");
> - return err;
> + if (cpfl_rxq->hairpin_info.hairpin_q) {
> + err = cpfl_alloc_split_p2p_rxq_mbufs(rxq->bufq1);
> + if (err != 0) {
> + PMD_DRV_LOG(ERR, "Failed to allocate p2p RX buffer queue mbuf");
> + return err;
> + }
> + } else {
> + err = idpf_qc_split_rxq_mbufs_alloc(rxq->bufq1);
> + if (err != 0) {
> + PMD_DRV_LOG(ERR, "Failed to allocate RX buffer queue mbuf");
> + return err;
> + }
> + err = idpf_qc_split_rxq_mbufs_alloc(rxq->bufq2);
> + if (err != 0) {
> + PMD_DRV_LOG(ERR, "Failed to allocate RX buffer queue mbuf");
> + return err;
> + }
> }
>
> rte_wmb();
>
> /* Init the RX tail register. */
> IDPF_PCI_REG_WRITE(rxq->bufq1->qrx_tail, rxq->bufq1->rx_tail);
> - IDPF_PCI_REG_WRITE(rxq->bufq2->qrx_tail, rxq->bufq2->rx_tail);
> + if (rxq->bufq2)
> + IDPF_PCI_REG_WRITE(rxq->bufq2->qrx_tail, rxq->bufq2->rx_tail);
> }
>
> return err;
> @@ -1166,7 +1250,12 @@ cpfl_rx_queue_stop(struct rte_eth_dev *dev, uint16_t rx_queue_id)
> return -EINVAL;
>
> cpfl_rxq = dev->data->rx_queues[rx_queue_id];
> - err = idpf_vc_queue_switch(vport, rx_queue_id, true, false);
> + if (cpfl_rxq->hairpin_info.hairpin_q)
> + err = cpfl_switch_hairpin_rxtx_queue(cpfl_vport, rx_queue_id - cpfl_vport->nb_data_rxq, true, false);
> + else
> + err = idpf_vc_queue_switch(vport, rx_queue_id, true, false);
> if (err != 0) {
> PMD_DRV_LOG(ERR, "Failed to switch RX queue %u off", rx_queue_id);
> @@ -1180,10 +1269,17 @@ cpfl_rx_queue_stop(struct rte_eth_dev *dev, uint16_t rx_queue_id)
> idpf_qc_single_rx_queue_reset(rxq);
> } else {
> rxq->bufq1->ops->release_mbufs(rxq->bufq1);
> - rxq->bufq2->ops->release_mbufs(rxq->bufq2);
> - idpf_qc_split_rx_queue_reset(rxq);
> + if (rxq->bufq2)
> + rxq->bufq2->ops->release_mbufs(rxq->bufq2);
> + if (cpfl_rxq->hairpin_info.hairpin_q) {
> + cpfl_rx_hairpin_descq_reset(rxq);
> + cpfl_rx_hairpin_bufq_reset(rxq->bufq1);
> + } else {
> + idpf_qc_split_rx_queue_reset(rxq);
> + }
> }
> - dev->data->rx_queue_state[rx_queue_id] = RTE_ETH_QUEUE_STATE_STOPPED;
> + if (!cpfl_rxq->hairpin_info.hairpin_q)
> + dev->data->rx_queue_state[rx_queue_id] = RTE_ETH_QUEUE_STATE_STOPPED;
>
> return 0;
> }
> @@ -1202,7 +1298,12 @@ cpfl_tx_queue_stop(struct rte_eth_dev *dev, uint16_t tx_queue_id)
>
> cpfl_txq = dev->data->tx_queues[tx_queue_id];
>
> - err = idpf_vc_queue_switch(vport, tx_queue_id, false, false);
> + if (cpfl_txq->hairpin_info.hairpin_q)
> + err = cpfl_switch_hairpin_rxtx_queue(cpfl_vport, tx_queue_id - cpfl_vport->nb_data_txq, false, false);
> + else
> + err = idpf_vc_queue_switch(vport, tx_queue_id, false, false);
> if (err != 0) {
> PMD_DRV_LOG(ERR, "Failed to switch TX queue %u off", tx_queue_id);
> @@ -1215,10 +1316,17 @@ cpfl_tx_queue_stop(struct rte_eth_dev *dev, uint16_t tx_queue_id)
> if (vport->txq_model == VIRTCHNL2_QUEUE_MODEL_SINGLE) {
> idpf_qc_single_tx_queue_reset(txq);
> } else {
> - idpf_qc_split_tx_descq_reset(txq);
> - idpf_qc_split_tx_complq_reset(txq->complq);
> + if (cpfl_txq->hairpin_info.hairpin_q) {
> + cpfl_tx_hairpin_descq_reset(txq);
> + cpfl_tx_hairpin_complq_reset(txq->complq);
> + } else {
> + idpf_qc_split_tx_descq_reset(txq);
> + idpf_qc_split_tx_complq_reset(txq->complq);
> + }
> }
> - dev->data->tx_queue_state[tx_queue_id] = RTE_ETH_QUEUE_STATE_STOPPED;
> +
> + if (!cpfl_txq->hairpin_info.hairpin_q)
> + dev->data->tx_queue_state[tx_queue_id] = RTE_ETH_QUEUE_STATE_STOPPED;
>
> return 0;
> }
> @@ -1238,10 +1346,17 @@ cpfl_dev_tx_queue_release(struct rte_eth_dev *dev, uint16_t qid)
> void
> cpfl_stop_queues(struct rte_eth_dev *dev)
> {
> + struct cpfl_vport *cpfl_vport =
> + (struct cpfl_vport *)dev->data->dev_private;
> struct cpfl_rx_queue *cpfl_rxq;
> struct cpfl_tx_queue *cpfl_txq;
> int i;
>
> + if (cpfl_vport->p2p_rx_bufq != NULL) {
> + if (cpfl_switch_hairpin_bufq_complq(cpfl_vport, false) != 0)
> + PMD_DRV_LOG(ERR, "Failed to stop hairpin Tx complq and Rx bufq");
> + }
> +
> for (i = 0; i < dev->data->nb_rx_queues; i++) {
> cpfl_rxq = dev->data->rx_queues[i];
> if (cpfl_rxq == NULL)
> diff --git a/drivers/net/cpfl/cpfl_rxtx.h b/drivers/net/cpfl/cpfl_rxtx.h
> index 872ebc1bfd..42dfd07155 100644
> --- a/drivers/net/cpfl/cpfl_rxtx.h
> +++ b/drivers/net/cpfl/cpfl_rxtx.h
> @@ -41,6 +41,17 @@
>
> #define CPFL_RX_BUF_STRIDE 64
>
> +/* The value written in the RX buffer queue tail register,
> + * and in WritePTR field in the TX completion queue context,
> + * must be a multiple of 8.
> + */
> +#define CPFL_HAIRPIN_Q_TAIL_AUX_VALUE 8
> +
> +struct virtchnl2_p2p_rx_buf_desc {
> + __le64 reserve0;
> + __le64 pkt_addr; /* Packet buffer address */
> +};
> +
> struct cpfl_rxq_hairpin_info {
> bool hairpin_q; /* if rx queue is a hairpin queue */
> uint16_t peer_txp;
> @@ -102,4 +113,7 @@ int cpfl_hairpin_tx_complq_config(struct cpfl_vport *cpfl_vport);
> int cpfl_hairpin_txq_config(struct idpf_vport *vport, struct cpfl_tx_queue *cpfl_txq);
> int cpfl_hairpin_rx_bufq_config(struct cpfl_vport *cpfl_vport);
> int cpfl_hairpin_rxq_config(struct idpf_vport *vport, struct cpfl_rx_queue *cpfl_rxq);
> +int cpfl_switch_hairpin_bufq_complq(struct cpfl_vport *cpfl_vport, bool on);
> +int cpfl_switch_hairpin_rxtx_queue(struct cpfl_vport *cpfl_vport, uint16_t qid, bool rx, bool on);
> #endif /* _CPFL_RXTX_H_ */
> --
> 2.26.2