From: Tomasz Duszynski <tdu@semihalf.com>
To: dev@dpdk.org
Cc: jck@semihalf.com, mw@semihalf.com, dima@marvell.com, nsamsono@marvell.com, Jianbo.liu@arm.com
Date: Thu, 11 Jan 2018 16:35:43 +0100
Message-Id: <1515684943-32506-6-git-send-email-tdu@semihalf.com>
X-Mailer: git-send-email 2.7.4
In-Reply-To: <1515684943-32506-1-git-send-email-tdu@semihalf.com>
References: <1515684943-32506-1-git-send-email-tdu@semihalf.com>
Subject: [dpdk-dev] [PATCH 5/5] net/mrvl: keep shadow txqs inside pmd txq

From: Natalie Samsonov <nsamsono@marvell.com>

Change shadow queue allocation from per-port/per-core to per-txq/per-core.
Each tx queue object now carries its own array of shadow queues (one per
lcore). This avoids data corruption when several tx queues are handled by
a single lcore: with a shared shadow queue, buffers that had not been sent
yet could be released and reused for receive.

Fixes: 0ddc9b8 ("net/mrvl: add net PMD skeleton")

Signed-off-by: Natalie Samsonov <nsamsono@marvell.com>
---
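Note for reviewers (not part of the change itself): the standalone sketch
below illustrates why moving the shadow queues into struct mrvl_txq removes
the aliasing between tx queues serviced by the same lcore. All types, values
and the main() harness are simplified stand-ins; only the field names and
the indexing scheme follow mrvl_ethdev.c.

#include <stdio.h>

#define MAX_PORTS  4   /* stand-ins for RTE_MAX_ETHPORTS / RTE_MAX_LCORE */
#define MAX_LCORES 8

struct mrvl_shadow_txq { int head, tail; };

/* Old layout: one global table indexed by port and lcore.  Two tx queues
 * of the same port handled by the same lcore resolve to the same entry. */
static struct mrvl_shadow_txq shadow_txqs[MAX_PORTS][MAX_LCORES];

/* New layout: every tx queue owns its per-lcore shadow queues. */
struct mrvl_txq {
        int queue_id;
        int port_id;
        struct mrvl_shadow_txq shadow_txqs[MAX_LCORES];
};

int main(void)
{
        struct mrvl_txq txq0 = { .queue_id = 0, .port_id = 0 };
        struct mrvl_txq txq1 = { .queue_id = 1, .port_id = 0 };
        unsigned int lcore = 3; /* both queues happen to run on lcore 3 */

        /* Before the patch: identical addresses, i.e. a shared shadow queue. */
        printf("old: %p %p\n",
               (void *)&shadow_txqs[txq0.port_id][lcore],
               (void *)&shadow_txqs[txq1.port_id][lcore]);

        /* After the patch: distinct, per-queue shadow queues. */
        printf("new: %p %p\n",
               (void *)&txq0.shadow_txqs[lcore],
               (void *)&txq1.shadow_txqs[lcore]);
        return 0;
}

Built with any C compiler, this prints the same address twice for the old
scheme and two distinct addresses for the new one.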
 drivers/net/mrvl/mrvl_ethdev.c | 47 ++++++++++++++++++++++++------------------
 1 file changed, 27 insertions(+), 20 deletions(-)

diff --git a/drivers/net/mrvl/mrvl_ethdev.c b/drivers/net/mrvl/mrvl_ethdev.c
index 7ce4df3..4294c56 100644
--- a/drivers/net/mrvl/mrvl_ethdev.c
+++ b/drivers/net/mrvl/mrvl_ethdev.c
@@ -150,22 +150,17 @@ struct mrvl_txq {
         int queue_id;
         int port_id;
         uint64_t bytes_sent;
+        struct mrvl_shadow_txq shadow_txqs[RTE_MAX_LCORE];
 };
 
-/*
- * Every tx queue should have dedicated shadow tx queue.
- *
- * Ports assigned by DPDK might not start at zero or be continuous so
- * as a workaround define shadow queues for each possible port so that
- * we eventually fit somewhere.
- */
-struct mrvl_shadow_txq shadow_txqs[RTE_MAX_ETHPORTS][RTE_MAX_LCORE];
-
 static int mrvl_lcore_first;
 static int mrvl_lcore_last;
 static int mrvl_dev_num;
 
 static int mrvl_fill_bpool(struct mrvl_rxq *rxq, int num);
+static inline void mrvl_free_sent_buffers(struct pp2_ppio *ppio,
+                        struct pp2_hif *hif, unsigned int core_id,
+                        struct mrvl_shadow_txq *sq, int qid, int force);
 
 static inline int
 mrvl_get_bpool_size(int pp2_id, int pool_id)
@@ -594,21 +589,32 @@ mrvl_flush_rx_queues(struct rte_eth_dev *dev)
 static void
 mrvl_flush_tx_shadow_queues(struct rte_eth_dev *dev)
 {
-        int i;
+        int i, j;
+        struct mrvl_txq *txq;
 
         RTE_LOG(INFO, PMD, "Flushing tx shadow queues\n");
-        for (i = 0; i < RTE_MAX_LCORE; i++) {
-                struct mrvl_shadow_txq *sq =
-                        &shadow_txqs[dev->data->port_id][i];
+        for (i = 0; i < dev->data->nb_tx_queues; i++) {
+                txq = (struct mrvl_txq *)dev->data->tx_queues[i];
+
+                for (j = 0; j < RTE_MAX_LCORE; j++) {
+                        struct mrvl_shadow_txq *sq;
+
+                        if (!hifs[j])
+                                continue;
 
-                while (sq->tail != sq->head) {
-                        uint64_t addr = cookie_addr_high |
+                        sq = &txq->shadow_txqs[j];
+                        mrvl_free_sent_buffers(txq->priv->ppio,
+                                hifs[j], j, sq, txq->queue_id, 1);
+                        while (sq->tail != sq->head) {
+                                uint64_t addr = cookie_addr_high |
                                         sq->ent[sq->tail].buff.cookie;
-                        rte_pktmbuf_free((struct rte_mbuf *)addr);
-                        sq->tail = (sq->tail + 1) & MRVL_PP2_TX_SHADOWQ_MASK;
+                                rte_pktmbuf_free(
+                                        (struct rte_mbuf *)addr);
+                                sq->tail = (sq->tail + 1) &
+                                        MRVL_PP2_TX_SHADOWQ_MASK;
+                        }
+                        memset(sq, 0, sizeof(*sq));
                 }
-
-                memset(sq, 0, sizeof(*sq));
         }
 }
 
@@ -1959,7 +1965,7 @@ static uint16_t
 mrvl_tx_pkt_burst(void *txq, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
 {
         struct mrvl_txq *q = txq;
-        struct mrvl_shadow_txq *sq = &shadow_txqs[q->port_id][rte_lcore_id()];
+        struct mrvl_shadow_txq *sq;
         struct pp2_hif *hif;
         struct pp2_ppio_desc descs[nb_pkts];
         unsigned int core_id = rte_lcore_id();
@@ -1968,6 +1974,7 @@ mrvl_tx_pkt_burst(void *txq, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
         uint64_t addr;
 
         hif = mrvl_get_hif(q->priv, core_id);
+        sq = &q->shadow_txqs[core_id];
 
         if (unlikely(!q->priv->ppio || !hif))
                 return 0;
-- 
2.7.4