From: Andrew Rybchenko <arybchenko@solarflare.com>
To: dev@dpdk.org
Date: Mon, 21 Nov 2016 15:01:03 +0000
Message-ID: <1479740470-6723-50-git-send-email-arybchenko@solarflare.com>
In-Reply-To: <1479740470-6723-1-git-send-email-arybchenko@solarflare.com>
References: <1479740470-6723-1-git-send-email-arybchenko@solarflare.com>
Subject: [dpdk-dev] [PATCH 49/56] net/sfc: implement Rx queue start and stop operations

These functions should set the queue state in dev->data->rx_queue_state array.
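For context, a minimal application-side sketch (not part of this patch; port and
queue numbers are illustrative) of how the generic ethdev queue start/stop API
ends up exercising these operations, assuming the driver wires its
rx_queue_start/rx_queue_stop dev_ops to sfc_rx_qstart()/sfc_rx_qstop():

#include <rte_ethdev.h>

/*
 * Restart one Rx queue of a port. Each call dispatches to the PMD callback;
 * on success the queue state recorded in dev->data->rx_queue_state[] becomes
 * RTE_ETH_QUEUE_STATE_STOPPED or RTE_ETH_QUEUE_STATE_STARTED respectively.
 */
static int
restart_rxq(uint8_t port_id, uint16_t rx_queue_id)
{
        int rc;

        rc = rte_eth_dev_rx_queue_stop(port_id, rx_queue_id);
        if (rc != 0)
                return rc;

        return rte_eth_dev_rx_queue_start(port_id, rx_queue_id);
}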
Reviewed-by: Andy Moreton
Signed-off-by: Andrew Rybchenko
---
 drivers/net/sfc/efx/sfc.c       |   8 ++
 drivers/net/sfc/efx/sfc_ev.c    |  23 +++-
 drivers/net/sfc/efx/sfc_rx.c    | 297 ++++++++++++++++++++++++++++++++++++++++
 drivers/net/sfc/efx/sfc_rx.h    |  20 +++
 drivers/net/sfc/efx/sfc_tweak.h |  44 ++++++
 5 files changed, 386 insertions(+), 6 deletions(-)
 create mode 100644 drivers/net/sfc/efx/sfc_tweak.h

diff --git a/drivers/net/sfc/efx/sfc.c b/drivers/net/sfc/efx/sfc.c
index c0f48a8..1c0f59d 100644
--- a/drivers/net/sfc/efx/sfc.c
+++ b/drivers/net/sfc/efx/sfc.c
@@ -271,10 +271,17 @@ sfc_start(struct sfc_adapter *sa)
         if (rc != 0)
                 goto fail_port_start;
 
+        rc = sfc_rx_start(sa);
+        if (rc != 0)
+                goto fail_rx_start;
+
         sa->state = SFC_ADAPTER_STARTED;
         sfc_log_init(sa, "done");
         return 0;
 
+fail_rx_start:
+        sfc_port_stop(sa);
+
 fail_port_start:
         sfc_ev_stop(sa);
 
@@ -313,6 +320,7 @@ sfc_stop(struct sfc_adapter *sa)
 
         sa->state = SFC_ADAPTER_STOPPING;
 
+        sfc_rx_stop(sa);
         sfc_port_stop(sa);
         sfc_ev_stop(sa);
         sfc_intr_stop(sa);
diff --git a/drivers/net/sfc/efx/sfc_ev.c b/drivers/net/sfc/efx/sfc_ev.c
index 8e2fc94..6750bfb 100644
--- a/drivers/net/sfc/efx/sfc_ev.c
+++ b/drivers/net/sfc/efx/sfc_ev.c
@@ -37,6 +37,7 @@
 #include "sfc_debug.h"
 #include "sfc_log.h"
 #include "sfc_ev.h"
+#include "sfc_rx.h"
 
 
 /* Initial delay when waiting for event queue init complete event */
@@ -112,20 +113,30 @@ static boolean_t
 sfc_ev_rxq_flush_done(void *arg, uint32_t rxq_hw_index)
 {
         struct sfc_evq *evq = arg;
+        struct sfc_rxq *rxq;
 
-        sfc_err(evq->sa, "EVQ %u unexpected Rx flush done event",
-                evq->evq_index);
-        return B_TRUE;
+        rxq = evq->rxq;
+        SFC_ASSERT(rxq != NULL);
+        SFC_ASSERT(rxq->hw_index == rxq_hw_index);
+        SFC_ASSERT(rxq->evq == evq);
+        sfc_rx_qflush_done(rxq);
+
+        return B_FALSE;
 }
 
 static boolean_t
 sfc_ev_rxq_flush_failed(void *arg, uint32_t rxq_hw_index)
 {
         struct sfc_evq *evq = arg;
+        struct sfc_rxq *rxq;
 
-        sfc_err(evq->sa, "EVQ %u unexpected Rx flush failed event",
-                evq->evq_index);
-        return B_TRUE;
+        rxq = evq->rxq;
+        SFC_ASSERT(rxq != NULL);
+        SFC_ASSERT(rxq->hw_index == rxq_hw_index);
+        SFC_ASSERT(rxq->evq == evq);
+        sfc_rx_qflush_failed(rxq);
+
+        return B_FALSE;
 }
 
 static boolean_t
diff --git a/drivers/net/sfc/efx/sfc_rx.c b/drivers/net/sfc/efx/sfc_rx.c
index 0e1e399..8e82ee0 100644
--- a/drivers/net/sfc/efx/sfc_rx.c
+++ b/drivers/net/sfc/efx/sfc_rx.c
@@ -27,12 +27,261 @@
  * EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
  */
 
+#include <rte_mempool.h>
+
 #include "efx.h"
 
 #include "sfc.h"
 #include "sfc_log.h"
 #include "sfc_ev.h"
 #include "sfc_rx.h"
+#include "sfc_tweak.h"
+
+/*
+ * Maximum number of Rx queue flush attempts in the case of failure or
+ * flush timeout
+ */
+#define SFC_RX_QFLUSH_ATTEMPTS          (3)
+
+/*
+ * Time to wait between event queue polling attempts when waiting for Rx
+ * queue flush done or failed events.
+ */
+#define SFC_RX_QFLUSH_POLL_WAIT_MS      (1)
+
+/*
+ * Maximum number of event queue polling attempts when waiting for Rx queue
+ * flush done or failed events. It defines Rx queue flush attempt timeout
+ * together with SFC_RX_QFLUSH_POLL_WAIT_MS.
+ */
+#define SFC_RX_QFLUSH_POLL_ATTEMPTS     (2000)
+
+void
+sfc_rx_qflush_done(struct sfc_rxq *rxq)
+{
+        rxq->state |= SFC_RXQ_FLUSHED;
+        rxq->state &= ~SFC_RXQ_FLUSHING;
+}
+
+void
+sfc_rx_qflush_failed(struct sfc_rxq *rxq)
+{
+        rxq->state |= SFC_RXQ_FLUSH_FAILED;
+        rxq->state &= ~SFC_RXQ_FLUSHING;
+}
+
+static void
+sfc_rx_qrefill(struct sfc_rxq *rxq)
+{
+        unsigned int free_space;
+        unsigned int bulks;
+        void *objs[SFC_RX_REFILL_BULK];
+        efsys_dma_addr_t addr[RTE_DIM(objs)];
+        unsigned int added = rxq->added;
+        unsigned int id;
+        unsigned int i;
+        struct sfc_rx_sw_desc *rxd;
+        struct rte_mbuf *m;
+        uint8_t port_id = rxq->port_id;
+
+        free_space = EFX_RXQ_LIMIT(rxq->ptr_mask + 1) -
+                (added - rxq->completed);
+        bulks = free_space / RTE_DIM(objs);
+
+        id = added & rxq->ptr_mask;
+        while (bulks-- > 0) {
+                if (rte_mempool_get_bulk(rxq->refill_mb_pool, objs,
+                                         RTE_DIM(objs)) < 0) {
+                        /*
+                         * It is hardly a safe way to increment counter
+                         * from different contexts, but all PMDs do it.
+                         */
+                        rxq->evq->sa->eth_dev->data->rx_mbuf_alloc_failed +=
+                                RTE_DIM(objs);
+                        break;
+                }
+
+                for (i = 0; i < RTE_DIM(objs);
+                     ++i, id = (id + 1) & rxq->ptr_mask) {
+                        m = objs[i];
+
+                        rxd = &rxq->sw_desc[id];
+                        rxd->mbuf = m;
+
+                        rte_mbuf_refcnt_set(m, 1);
+                        m->data_off = RTE_PKTMBUF_HEADROOM;
+                        m->next = NULL;
+                        m->nb_segs = 1;
+                        m->port = port_id;
+
+                        addr[i] = rte_pktmbuf_mtophys(m);
+                }
+
+                efx_rx_qpost(rxq->common, addr, rxq->buf_size,
+                             RTE_DIM(objs), rxq->completed, added);
+                added += RTE_DIM(objs);
+        }
+
+        /* Push doorbell if something is posted */
+        if (rxq->added != added) {
+                rxq->added = added;
+                efx_rx_qpush(rxq->common, added, &rxq->pushed);
+        }
+}
+
+static void
+sfc_rx_qpurge(struct sfc_rxq *rxq)
+{
+        unsigned int i;
+        struct sfc_rx_sw_desc *rxd;
+
+        for (i = rxq->completed; i != rxq->added; ++i) {
+                rxd = &rxq->sw_desc[i & rxq->ptr_mask];
+                rte_mempool_put(rxq->refill_mb_pool, rxd->mbuf);
+                rxd->mbuf = NULL;
+        }
+}
+
+static void
+sfc_rx_qflush(struct sfc_adapter *sa, unsigned int sw_index)
+{
+        struct sfc_rxq *rxq;
+        unsigned int retry_count;
+        unsigned int wait_count;
+
+        rxq = sa->rxq_info[sw_index].rxq;
+        SFC_ASSERT(rxq->state & SFC_RXQ_STARTED);
+
+        /*
+         * Retry Rx queue flushing in the case of flush failed or
+         * timeout. In the worst case it can delay for 6 seconds.
+         */
+        for (retry_count = 0;
+             ((rxq->state & SFC_RXQ_FLUSHED) == 0) &&
+             (retry_count < SFC_RX_QFLUSH_ATTEMPTS);
+             ++retry_count) {
+                if (efx_rx_qflush(rxq->common) != 0) {
+                        rxq->state |= SFC_RXQ_FLUSH_FAILED;
+                        break;
+                }
+                rxq->state &= ~SFC_RXQ_FLUSH_FAILED;
+                rxq->state |= SFC_RXQ_FLUSHING;
+
+                /*
+                 * Wait for Rx queue flush done or failed event at least
+                 * SFC_RX_QFLUSH_POLL_WAIT_MS milliseconds and not more
+                 * than 2 seconds (SFC_RX_QFLUSH_POLL_WAIT_MS multiplied
+                 * by SFC_RX_QFLUSH_POLL_ATTEMPTS).
+                 */
+                wait_count = 0;
+                do {
+                        rte_delay_ms(SFC_RX_QFLUSH_POLL_WAIT_MS);
+                        sfc_ev_qpoll(rxq->evq);
+                } while ((rxq->state & SFC_RXQ_FLUSHING) &&
+                         (wait_count++ < SFC_RX_QFLUSH_POLL_ATTEMPTS));
+
+                if (rxq->state & SFC_RXQ_FLUSHING)
+                        sfc_err(sa, "RxQ %u flush timed out", sw_index);
+
+                if (rxq->state & SFC_RXQ_FLUSH_FAILED)
+                        sfc_err(sa, "RxQ %u flush failed", sw_index);
+
+                if (rxq->state & SFC_RXQ_FLUSHED)
+                        sfc_info(sa, "RxQ %u flushed", sw_index);
+        }
+
+        sfc_rx_qpurge(rxq);
+}
+
+int
+sfc_rx_qstart(struct sfc_adapter *sa, unsigned int sw_index)
+{
+        struct sfc_rxq_info *rxq_info;
+        struct sfc_rxq *rxq;
+        struct sfc_evq *evq;
+        int rc;
+
+        sfc_log_init(sa, "sw_index=%u", sw_index);
+
+        SFC_ASSERT(sw_index < sa->rxq_count);
+
+        rxq_info = &sa->rxq_info[sw_index];
+        rxq = rxq_info->rxq;
+        SFC_ASSERT(rxq->state == SFC_RXQ_INITIALIZED);
+
+        evq = rxq->evq;
+
+        rc = sfc_ev_qstart(sa, evq->evq_index);
+        if (rc != 0)
+                goto fail_ev_qstart;
+
+        rc = efx_rx_qcreate(sa->nic, rxq->hw_index, 0, rxq_info->type,
+                            &rxq->mem, rxq_info->entries,
+                            0 /* not used on EF10 */, evq->common,
+                            &rxq->common);
+        if (rc != 0)
+                goto fail_rx_qcreate;
+
+        efx_rx_qenable(rxq->common);
+
+        rxq->pending = rxq->completed = rxq->added = rxq->pushed = 0;
+
+        rxq->state |= SFC_RXQ_STARTED;
+
+        sfc_rx_qrefill(rxq);
+
+        if (sw_index == 0) {
+                rc = efx_mac_filter_default_rxq_set(sa->nic, rxq->common,
+                                                    B_FALSE);
+                if (rc != 0)
+                        goto fail_mac_filter_default_rxq_set;
+        }
+
+        /* It seems to be used by DPDK for debug purposes only ('rte_ether') */
+        sa->eth_dev->data->rx_queue_state[sw_index] =
+                RTE_ETH_QUEUE_STATE_STARTED;
+
+        return 0;
+
+fail_mac_filter_default_rxq_set:
+        sfc_rx_qflush(sa, sw_index);
+
+fail_rx_qcreate:
+        sfc_ev_qstop(sa, evq->evq_index);
+
+fail_ev_qstart:
+        return rc;
+}
+
+void
+sfc_rx_qstop(struct sfc_adapter *sa, unsigned int sw_index)
+{
+        struct sfc_rxq_info *rxq_info;
+        struct sfc_rxq *rxq;
+
+        sfc_log_init(sa, "sw_index=%u", sw_index);
+
+        SFC_ASSERT(sw_index < sa->rxq_count);
+
+        rxq_info = &sa->rxq_info[sw_index];
+        rxq = rxq_info->rxq;
+        SFC_ASSERT(rxq->state & SFC_RXQ_STARTED);
+
+        /* It seems to be used by DPDK for debug purposes only ('rte_ether') */
+        sa->eth_dev->data->rx_queue_state[sw_index] =
+                RTE_ETH_QUEUE_STATE_STOPPED;
+
+        if (sw_index == 0)
+                efx_mac_filter_default_rxq_clear(sa->nic);
+
+        sfc_rx_qflush(sa, sw_index);
+
+        rxq->state = SFC_RXQ_INITIALIZED;
+
+        efx_rx_qdestroy(rxq->common);
+
+        sfc_ev_qstop(sa, rxq->evq->evq_index);
+}
 
 static int
 sfc_rx_qcheck_conf(struct sfc_adapter *sa,
@@ -243,6 +492,7 @@ sfc_rx_qinit(struct sfc_adapter *sa, unsigned int sw_index,
         rxq->refill_mb_pool = mb_pool;
         rxq->buf_size = buf_size;
         rxq->hw_index = sw_index;
+        rxq->port_id = sa->eth_dev->data->port_id;
 
         rxq->state = SFC_RXQ_INITIALIZED;
 
@@ -288,6 +538,53 @@ sfc_rx_qfini(struct sfc_adapter *sa, unsigned int sw_index)
         rte_free(rxq);
 }
 
+int
+sfc_rx_start(struct sfc_adapter *sa)
+{
+        int sw_index;
+        int rc;
+
+        sfc_log_init(sa, "rxq_count=%u", sa->rxq_count);
+
+        rc = efx_rx_init(sa->nic);
+        if (rc != 0)
+                goto fail_rx_init;
+
+        for (sw_index = 0; sw_index < sa->rxq_count; ++sw_index) {
+                rc = sfc_rx_qstart(sa, sw_index);
+                if (rc != 0)
+                        goto fail_rx_qstart;
+        }
+
+        return 0;
+
+fail_rx_qstart:
+        while (--sw_index >= 0)
+                sfc_rx_qstop(sa, sw_index);
+
+        efx_rx_fini(sa->nic);
+
+fail_rx_init:
+        sfc_log_init(sa, "failed %d", rc);
+        return rc;
+}
+
+void
+sfc_rx_stop(struct sfc_adapter *sa)
+{
+        int sw_index;
+
+        sfc_log_init(sa, "rxq_count=%u", sa->rxq_count);
+
+        sw_index = sa->rxq_count;
+        while (--sw_index >= 0) {
+                if (sa->rxq_info[sw_index].rxq != NULL)
+                        sfc_rx_qstop(sa, sw_index);
+        }
+
+        efx_rx_fini(sa->nic);
+}
+
 static int
 sfc_rx_qinit_info(struct sfc_adapter *sa, unsigned int sw_index)
 {
diff --git a/drivers/net/sfc/efx/sfc_rx.h b/drivers/net/sfc/efx/sfc_rx.h
index 3e24434..1004a84 100644
--- a/drivers/net/sfc/efx/sfc_rx.h
+++ b/drivers/net/sfc/efx/sfc_rx.h
@@ -57,6 +57,14 @@ struct sfc_rx_sw_desc {
 enum sfc_rxq_state_bit {
         SFC_RXQ_INITIALIZED_BIT = 0,
 #define SFC_RXQ_INITIALIZED     (1 << SFC_RXQ_INITIALIZED_BIT)
+        SFC_RXQ_STARTED_BIT,
+#define SFC_RXQ_STARTED         (1 << SFC_RXQ_STARTED_BIT)
+        SFC_RXQ_FLUSHING_BIT,
+#define SFC_RXQ_FLUSHING        (1 << SFC_RXQ_FLUSHING_BIT)
+        SFC_RXQ_FLUSHED_BIT,
+#define SFC_RXQ_FLUSHED         (1 << SFC_RXQ_FLUSHED_BIT)
+        SFC_RXQ_FLUSH_FAILED_BIT,
+#define SFC_RXQ_FLUSH_FAILED    (1 << SFC_RXQ_FLUSH_FAILED_BIT)
 };
 
 /**
@@ -69,8 +77,13 @@ struct sfc_rxq {
         struct sfc_rx_sw_desc   *sw_desc;
         unsigned int            state;
         unsigned int            ptr_mask;
+        unsigned int            pending;
+        unsigned int            completed;
 
         /* Used on refill */
+        unsigned int            added;
+        unsigned int            pushed;
+        uint8_t                 port_id;
         uint16_t                buf_size;
         struct rte_mempool      *refill_mb_pool;
         efx_rxq_t               *common;
@@ -105,12 +118,19 @@ struct sfc_rxq_info {
 
 int sfc_rx_init(struct sfc_adapter *sa);
 void sfc_rx_fini(struct sfc_adapter *sa);
+int sfc_rx_start(struct sfc_adapter *sa);
+void sfc_rx_stop(struct sfc_adapter *sa);
 
 int sfc_rx_qinit(struct sfc_adapter *sa, unsigned int rx_queue_id,
                  uint16_t nb_rx_desc, unsigned int socket_id,
                  const struct rte_eth_rxconf *rx_conf,
                  struct rte_mempool *mb_pool);
 void sfc_rx_qfini(struct sfc_adapter *sa, unsigned int sw_index);
+int sfc_rx_qstart(struct sfc_adapter *sa, unsigned int sw_index);
+void sfc_rx_qstop(struct sfc_adapter *sa, unsigned int sw_index);
+
+void sfc_rx_qflush_done(struct sfc_rxq *rxq);
+void sfc_rx_qflush_failed(struct sfc_rxq *rxq);
 
 #ifdef __cplusplus
 }
diff --git a/drivers/net/sfc/efx/sfc_tweak.h b/drivers/net/sfc/efx/sfc_tweak.h
new file mode 100644
index 0000000..24cb9f4
--- /dev/null
+++ b/drivers/net/sfc/efx/sfc_tweak.h
@@ -0,0 +1,44 @@
+/*-
+ * Copyright (c) 2016 Solarflare Communications Inc.
+ * All rights reserved.
+ *
+ * This software was jointly developed between OKTET Labs (under contract
+ * for Solarflare) and Solarflare Communications, Inc.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions are met:
+ *
+ * 1. Redistributions of source code must retain the above copyright notice,
+ *    this list of conditions and the following disclaimer.
+ * 2. Redistributions in binary form must reproduce the above copyright notice,
+ *    this list of conditions and the following disclaimer in the documentation
+ *    and/or other materials provided with the distribution.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+ * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO,
+ * THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
+ * PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR
+ * CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL,
+ * EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
+ * PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS;
+ * OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY,
+ * WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR
+ * OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE,
+ * EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#ifndef _SFC_TWEAK_H_
+#define _SFC_TWEAK_H_
+
+/*
+ * The header is intended to collect defines/constants which could be
+ * tweaked to improve the PMD performance characteristics depending on
+ * the use case or requirements (CPU load, packet rate, latency).
+ */
+
+/**
+ * Number of Rx descriptors in the bulk submitted on Rx ring refill.
+ */
+#define SFC_RX_REFILL_BULK      (RTE_CACHE_LINE_SIZE / sizeof(efx_qword_t))
+
+#endif /* _SFC_TWEAK_H_ */
-- 
2.5.5