* [PATCH 3/3] net/idpf: fix splitq xmit free
       [not found] <20221208072725.32434-1-beilei.xing@intel.com>
@ 2022-12-08  7:27 ` beilei.xing
       [not found] ` <20230106090501.9106-1-beilei.xing@intel.com>
  1 sibling, 0 replies; 3+ messages in thread
From: beilei.xing @ 2022-12-08  7:27 UTC
To: jingjing.wu, qi.z.zhang; +Cc: dev, stable, Beilei Xing

From: Jingjing Wu <jingjing.wu@intel.com>

When a context descriptor is used while sending packets, the mbuf is
not freed correctly, which eventually exhausts the mempool. This patch
refines the free function.

Fixes: 770f4dfe0f79 ("net/idpf: support basic Tx data path")
Cc: stable@dpdk.org

Signed-off-by: Jingjing Wu <jingjing.wu@intel.com>
Signed-off-by: Beilei Xing <beilei.xing@intel.com>
---
 drivers/net/idpf/idpf_rxtx.c | 29 +++++++++++++++++++----------
 1 file changed, 19 insertions(+), 10 deletions(-)

diff --git a/drivers/net/idpf/idpf_rxtx.c b/drivers/net/idpf/idpf_rxtx.c
index b4a396c3f5..5aef8ba2b6 100644
--- a/drivers/net/idpf/idpf_rxtx.c
+++ b/drivers/net/idpf/idpf_rxtx.c
@@ -1508,6 +1508,7 @@ idpf_split_tx_free(struct idpf_tx_queue *cq)
 	struct idpf_tx_entry *txe;
 	struct idpf_tx_queue *txq;
 	uint16_t gen, qid, q_head;
+	uint16_t nb_desc_clean;
 	uint8_t ctype;
 
 	txd = &compl_ring[next];
@@ -1525,20 +1526,24 @@ idpf_split_tx_free(struct idpf_tx_queue *cq)
 
 	switch (ctype) {
 	case IDPF_TXD_COMPLT_RE:
-		if (q_head == 0)
-			txq->last_desc_cleaned = txq->nb_tx_desc - 1;
-		else
-			txq->last_desc_cleaned = q_head - 1;
-		if (unlikely((txq->last_desc_cleaned % 32) == 0)) {
+		/* clean up to q_head, which indicates the fetched txq desc id + 1.
+		 * TODO: need to refine and remove the if condition.
+		 */
+		if (unlikely(q_head % 32)) {
 			PMD_DRV_LOG(ERR, "unexpected desc (head = %u) completion.",
 				    q_head);
 			return;
 		}
-
+		if (txq->last_desc_cleaned > q_head)
+			nb_desc_clean = (txq->nb_tx_desc - txq->last_desc_cleaned) +
+					q_head;
+		else
+			nb_desc_clean = q_head - txq->last_desc_cleaned;
+		txq->nb_free += nb_desc_clean;
+		txq->last_desc_cleaned = q_head;
 		break;
 	case IDPF_TXD_COMPLT_RS:
-		txq->nb_free++;
-		txq->nb_used--;
+		/* q_head indicates sw_id when ctype is 2 */
 		txe = &txq->sw_ring[q_head];
 		if (txe->mbuf != NULL) {
 			rte_pktmbuf_free_seg(txe->mbuf);
@@ -1693,12 +1698,16 @@ idpf_splitq_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
 
 		/* fill the last descriptor with End of Packet (EOP) bit */
 		txd->qw1.cmd_dtype |= IDPF_TXD_FLEX_FLOW_CMD_EOP;
-		if (unlikely((tx_id % 32) == 0))
-			txd->qw1.cmd_dtype |= IDPF_TXD_FLEX_FLOW_CMD_RE;
 		if (ol_flags & IDPF_TX_CKSUM_OFFLOAD_MASK)
			txd->qw1.cmd_dtype |= IDPF_TXD_FLEX_FLOW_CMD_CS_EN;
 		txq->nb_free = (uint16_t)(txq->nb_free - nb_used);
 		txq->nb_used = (uint16_t)(txq->nb_used + nb_used);
+
+		if (txq->nb_used >= 32) {
+			txd->qw1.cmd_dtype |= IDPF_TXD_FLEX_FLOW_CMD_RE;
+			/* Update txq RE bit counters */
+			txq->nb_used = 0;
+		}
 	}
 
 	/* update the tail pointer if any packets were processed */
-- 
2.26.2
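The heart of the RE branch above is the wrap-aware count of descriptors
cleaned since the last RE completion. Below is a minimal, self-contained
C sketch of that arithmetic; the struct and function names are
hypothetical stand-ins for illustration (not the driver's types), and
only the calculation mirrors the hunk above.

#include <stdint.h>
#include <stdio.h>

/* Hypothetical stand-in for the txq fields the patch touches. */
struct tx_queue_state {
	uint16_t nb_tx_desc;        /* ring size */
	uint16_t last_desc_cleaned; /* index one past the last cleaned desc */
};

/*
 * Descriptors completed by an RE completion whose head is q_head,
 * accounting for ring wrap-around: if the cleaned index has already
 * passed q_head, the hardware wrapped, so count the tail segment of
 * the ring plus the head segment.
 */
static uint16_t
clean_count(const struct tx_queue_state *q, uint16_t q_head)
{
	if (q->last_desc_cleaned > q_head)
		return (uint16_t)((q->nb_tx_desc - q->last_desc_cleaned) + q_head);
	return (uint16_t)(q_head - q->last_desc_cleaned);
}

int
main(void)
{
	struct tx_queue_state q = { .nb_tx_desc = 512, .last_desc_cleaned = 480 };

	/* No wrap: head moved from 480 to 500 -> 20 descriptors cleaned. */
	printf("%u\n", (unsigned int)clean_count(&q, 500));

	/* Wrap: head moved from 480 past the ring end to 32 -> (512 - 480) + 32 = 64. */
	printf("%u\n", (unsigned int)clean_count(&q, 32));
	return 0;
}

The comparison last_desc_cleaned > q_head is what detects that the
hardware head wrapped past the end of the ring since the previous
clean, in which case the tail and head segments are counted together.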
[parent not found: <20230106090501.9106-1-beilei.xing@intel.com>]
* [PATCH v2 3/5] net/idpf: fix splitq xmit free
       [not found] ` <20230106090501.9106-1-beilei.xing@intel.com>
@ 2023-01-06  9:04 ` beilei.xing
  2023-01-06  9:05 ` [PATCH v2 4/5] net/idpf: fix driver init symbols beilei.xing
  1 sibling, 0 replies; 3+ messages in thread
From: beilei.xing @ 2023-01-06  9:04 UTC
To: qi.z.zhang; +Cc: dev, Jingjing Wu, stable, Beilei Xing

From: Jingjing Wu <jingjing.wu@intel.com>

When a context descriptor is used while sending packets, the mbuf is
not freed correctly, which eventually exhausts the mempool. This patch
refines the free function.

Fixes: 770f4dfe0f79 ("net/idpf: support basic Tx data path")
Cc: stable@dpdk.org

Signed-off-by: Jingjing Wu <jingjing.wu@intel.com>
Signed-off-by: Beilei Xing <beilei.xing@intel.com>
---
 drivers/net/idpf/idpf_rxtx.c | 29 +++++++++++++++++++----------
 1 file changed, 19 insertions(+), 10 deletions(-)

diff --git a/drivers/net/idpf/idpf_rxtx.c b/drivers/net/idpf/idpf_rxtx.c
index b4a396c3f5..5aef8ba2b6 100644
--- a/drivers/net/idpf/idpf_rxtx.c
+++ b/drivers/net/idpf/idpf_rxtx.c
@@ -1508,6 +1508,7 @@ idpf_split_tx_free(struct idpf_tx_queue *cq)
 	struct idpf_tx_entry *txe;
 	struct idpf_tx_queue *txq;
 	uint16_t gen, qid, q_head;
+	uint16_t nb_desc_clean;
 	uint8_t ctype;
 
 	txd = &compl_ring[next];
@@ -1525,20 +1526,24 @@ idpf_split_tx_free(struct idpf_tx_queue *cq)
 
 	switch (ctype) {
 	case IDPF_TXD_COMPLT_RE:
-		if (q_head == 0)
-			txq->last_desc_cleaned = txq->nb_tx_desc - 1;
-		else
-			txq->last_desc_cleaned = q_head - 1;
-		if (unlikely((txq->last_desc_cleaned % 32) == 0)) {
+		/* clean up to q_head, which indicates the fetched txq desc id + 1.
+		 * TODO: need to refine and remove the if condition.
+		 */
+		if (unlikely(q_head % 32)) {
 			PMD_DRV_LOG(ERR, "unexpected desc (head = %u) completion.",
 				    q_head);
 			return;
 		}
-
+		if (txq->last_desc_cleaned > q_head)
+			nb_desc_clean = (txq->nb_tx_desc - txq->last_desc_cleaned) +
+					q_head;
+		else
+			nb_desc_clean = q_head - txq->last_desc_cleaned;
+		txq->nb_free += nb_desc_clean;
+		txq->last_desc_cleaned = q_head;
 		break;
 	case IDPF_TXD_COMPLT_RS:
-		txq->nb_free++;
-		txq->nb_used--;
+		/* q_head indicates sw_id when ctype is 2 */
 		txe = &txq->sw_ring[q_head];
 		if (txe->mbuf != NULL) {
 			rte_pktmbuf_free_seg(txe->mbuf);
@@ -1693,12 +1698,16 @@ idpf_splitq_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
 
 		/* fill the last descriptor with End of Packet (EOP) bit */
 		txd->qw1.cmd_dtype |= IDPF_TXD_FLEX_FLOW_CMD_EOP;
-		if (unlikely((tx_id % 32) == 0))
-			txd->qw1.cmd_dtype |= IDPF_TXD_FLEX_FLOW_CMD_RE;
 		if (ol_flags & IDPF_TX_CKSUM_OFFLOAD_MASK)
			txd->qw1.cmd_dtype |= IDPF_TXD_FLEX_FLOW_CMD_CS_EN;
 		txq->nb_free = (uint16_t)(txq->nb_free - nb_used);
 		txq->nb_used = (uint16_t)(txq->nb_used + nb_used);
+
+		if (txq->nb_used >= 32) {
+			txd->qw1.cmd_dtype |= IDPF_TXD_FLEX_FLOW_CMD_RE;
+			/* Update txq RE bit counters */
+			txq->nb_used = 0;
+		}
 	}
 
 	/* update the tail pointer if any packets were processed */
-- 
2.26.2
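The v2 hunks are unchanged from v1. The transmit-side half of the fix
requests the RE (report event) bit once at least 32 descriptors have
been used, instead of keying off the descriptor index. Here is a
minimal C sketch of that policy, assuming a simplified queue holding
only the nb_used counter; the names are illustrative, not the driver's.

#include <stdint.h>
#include <stdio.h>

#define RE_INTERVAL 32u /* matches the hard-coded 32 in the patch */

/* Hypothetical simplified queue: only the counter the patch resets. */
struct sim_txq {
	uint16_t nb_used; /* descriptors used since the last RE request */
};

/* Returns nonzero when the last descriptor of this packet should carry RE. */
static int
mark_re(struct sim_txq *txq, uint16_t nb_desc_for_pkt)
{
	txq->nb_used = (uint16_t)(txq->nb_used + nb_desc_for_pkt);
	if (txq->nb_used >= RE_INTERVAL) {
		txq->nb_used = 0; /* restart the interval, as the patch does */
		return 1;
	}
	return 0;
}

int
main(void)
{
	struct sim_txq txq = { 0 };
	uint16_t pkt_descs[] = { 4, 4, 8, 16, 4 }; /* descriptors per packet */

	for (unsigned int i = 0; i < sizeof(pkt_descs) / sizeof(pkt_descs[0]); i++)
		printf("pkt %u: RE=%d\n", i, mark_re(&txq, pkt_descs[i]));
	/* RE fires on packet 3 (4+4+8+16 = 32 >= 32); the count then restarts. */
	return 0;
}

Note that resetting nb_used to 0 rather than subtracting 32 lets
successive RE requests land more than 32 descriptors apart when packets
span multiple descriptors, which may be why the q_head % 32 sanity
check in the completion path carries a TODO.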
* [PATCH v2 4/5] net/idpf: fix driver init symbols
       [not found] ` <20230106090501.9106-1-beilei.xing@intel.com>
  2023-01-06  9:04 ` [PATCH v2 3/5] net/idpf: fix splitq xmit free beilei.xing
@ 2023-01-06  9:05 ` beilei.xing
  1 sibling, 0 replies; 3+ messages in thread
From: beilei.xing @ 2023-01-06  9:05 UTC
To: qi.z.zhang; +Cc: dev, Jingjing Wu, stable, Beilei Xing

From: Jingjing Wu <jingjing.wu@intel.com>

This patch fixes the idpf driver init symbols: the kernel module
dependency was registered under the wrong driver name (net_ice instead
of net_idpf), uio_pci_generic is dropped from the dependency list, and
the devargs accepted by the driver are now declared.

Fixes: 549343c25db8 ("net/idpf: support device initialization")
Cc: stable@dpdk.org

Signed-off-by: Jingjing Wu <jingjing.wu@intel.com>
Signed-off-by: Beilei Xing <beilei.xing@intel.com>
---
 drivers/net/idpf/idpf_ethdev.c | 6 +++++-
 1 file changed, 5 insertions(+), 1 deletion(-)

diff --git a/drivers/net/idpf/idpf_ethdev.c b/drivers/net/idpf/idpf_ethdev.c
index f7b3f8f515..89af27ca34 100644
--- a/drivers/net/idpf/idpf_ethdev.c
+++ b/drivers/net/idpf/idpf_ethdev.c
@@ -1251,7 +1251,11 @@ static struct rte_pci_driver rte_idpf_pmd = {
  */
 RTE_PMD_REGISTER_PCI(net_idpf, rte_idpf_pmd);
 RTE_PMD_REGISTER_PCI_TABLE(net_idpf, pci_id_idpf_map);
-RTE_PMD_REGISTER_KMOD_DEP(net_ice, "* igb_uio | uio_pci_generic | vfio-pci");
+RTE_PMD_REGISTER_KMOD_DEP(net_idpf, "* igb_uio | vfio-pci");
+RTE_PMD_REGISTER_PARAM_STRING(net_idpf,
+			      IDPF_TX_SINGLE_Q "=<0|1> "
+			      IDPF_RX_SINGLE_Q "=<0|1> "
+			      IDPF_VPORT "=[vport_set0,[vport_set1],...]");
 
 RTE_LOG_REGISTER_SUFFIX(idpf_logtype_init, init, NOTICE);
 RTE_LOG_REGISTER_SUFFIX(idpf_logtype_driver, driver, NOTICE);
-- 
2.26.2
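For context on the new RTE_PMD_REGISTER_PARAM_STRING declaration:
devargs declared this way are typically parsed with the rte_kvargs
API, as in the sketch below. The key strings are assumptions (the
patch shows only the macro names IDPF_TX_SINGLE_Q, IDPF_RX_SINGLE_Q
and IDPF_VPORT, not their expansions), and the helper is illustrative
rather than the driver's actual parsing code.

#include <stdint.h>
#include <rte_kvargs.h>

/* Hypothetical key strings; the macro expansions are not shown in the patch. */
#define IDPF_TX_SINGLE_Q "tx_single"
#define IDPF_RX_SINGLE_Q "rx_single"
#define IDPF_VPORT       "vport"

static const char * const idpf_valid_args[] = {
	IDPF_TX_SINGLE_Q,
	IDPF_RX_SINGLE_Q,
	IDPF_VPORT,
	NULL
};

/* Handler invoked once per matching key=value pair. */
static int
parse_bool_devarg(const char *key, const char *value, void *opaque)
{
	uint16_t *out = opaque;

	(void)key;
	if (value == NULL)
		return -1;
	*out = (value[0] == '1');
	return 0;
}

/* Parse "tx_single"/"rx_single" out of a devargs string such as
 * "tx_single=1,rx_single=0". Returns 0 on success, -1 on bad input. */
static int
parse_idpf_devargs(const char *devargs, uint16_t *tx_single, uint16_t *rx_single)
{
	struct rte_kvargs *kvlist = rte_kvargs_parse(devargs, idpf_valid_args);

	if (kvlist == NULL)
		return -1; /* unknown key or malformed string */
	rte_kvargs_process(kvlist, IDPF_TX_SINGLE_Q, parse_bool_devarg, tx_single);
	rte_kvargs_process(kvlist, IDPF_RX_SINGLE_Q, parse_bool_devarg, rx_single);
	rte_kvargs_free(kvlist);
	return 0;
}

Declaring the parameter string keeps tools such as dpdk-pmdinfo able to
report which devargs a PMD accepts, which is the design motivation for
registering it alongside the PCI table and kmod dependency.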
end of thread, other threads: [~2023-01-06  9:28 UTC | newest]

Thread overview: 3+ messages; links below jump to the message on this page:
       [not found] <20221208072725.32434-1-beilei.xing@intel.com>
2022-12-08  7:27 ` [PATCH 3/3] net/idpf: fix splitq xmit free beilei.xing
       [not found] ` <20230106090501.9106-1-beilei.xing@intel.com>
2023-01-06  9:04   ` [PATCH v2 3/5] net/idpf: fix splitq xmit free beilei.xing
2023-01-06  9:05   ` [PATCH v2 4/5] net/idpf: fix driver init symbols beilei.xing