From mboxrd@z Thu Jan 1 00:00:00 1970
From: Michal Krawczyk
To: dev@dpdk.org
Cc: mw@semihalf.com, mba@semihalf.com, gtzalik@amazon.com, evgenys@amazon.com,
	igorch@amazon.com, ferruh.yigit@intel.com, arybchenko@solarflare.com,
	Michal Krawczyk
Date: Wed, 8 Apr 2020 10:29:16 +0200
Message-Id: <20200408082921.31000-26-mk@semihalf.com>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20200408082921.31000-1-mk@semihalf.com>
References: <20200408082921.31000-1-mk@semihalf.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
Subject: [dpdk-dev] [PATCH v3 25/30] net/ena: limit refill threshold by fixed value
List-Id: DPDK patches and discussions
Errors-To: dev-bounces@dpdk.org
Sender: "dev"

The divider used for both the Tx and Rx cleanup/refill thresholds can
cause too long a delay for very big rings. For example, with an 8k Rx
ring the refill is not triggered until a threshold of 1024 free
descriptors is reached, and the driver then tries to allocate that many
descriptors at once. Capping the threshold at a fixed value (256 in this
case) bounds the maximum time spent in the repopulate function.
Signed-off-by: Michal Krawczyk
Reviewed-by: Igor Chauskin
Reviewed-by: Guy Tzalik
---
 drivers/net/ena/ena_ethdev.c | 27 ++++++++++++++-------------
 drivers/net/ena/ena_ethdev.h | 10 ++++++++++
 2 files changed, 24 insertions(+), 13 deletions(-)

diff --git a/drivers/net/ena/ena_ethdev.c b/drivers/net/ena/ena_ethdev.c
index 9d76ebb0d9..7804a5c85d 100644
--- a/drivers/net/ena/ena_ethdev.c
+++ b/drivers/net/ena/ena_ethdev.c
@@ -35,14 +35,6 @@
 /*reverse version of ENA_IO_RXQ_IDX*/
 #define ENA_IO_RXQ_IDX_REV(q) ((q - 1) / 2)
 
-/* While processing submitted and completed descriptors (rx and tx path
- * respectively) in a loop it is desired to:
- * - perform batch submissions while populating sumbissmion queue
- * - avoid blocking transmission of other packets during cleanup phase
- * Hence the utilization ratio of 1/8 of a queue size.
- */
-#define ENA_RING_DESCS_RATIO(ring_size) (ring_size / 8)
-
 #define __MERGE_64B_H_L(h, l) (((uint64_t)h << 32) | l)
 
 #define TEST_BIT(val, bit_shift) (val & (1UL << bit_shift))
@@ -2146,7 +2138,8 @@ static uint16_t eth_ena_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
 	struct ena_ring *rx_ring = (struct ena_ring *)(rx_queue);
 	unsigned int ring_size = rx_ring->ring_size;
 	unsigned int ring_mask = ring_size - 1;
-	unsigned int refill_required;
+	unsigned int free_queue_entries;
+	unsigned int refill_threshold;
 	uint16_t next_to_clean = rx_ring->next_to_clean;
 	uint16_t descs_in_use;
 	struct rte_mbuf *mbuf;
@@ -2215,11 +2208,15 @@ static uint16_t eth_ena_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
 	rx_ring->rx_stats.cnt += completed;
 	rx_ring->next_to_clean = next_to_clean;
 
-	refill_required = ena_com_free_q_entries(rx_ring->ena_com_io_sq);
+	free_queue_entries = ena_com_free_q_entries(rx_ring->ena_com_io_sq);
+	refill_threshold =
+		RTE_MIN(ring_size / ENA_REFILL_THRESH_DIVIDER,
+			(unsigned int)ENA_REFILL_THRESH_PACKET);
+
 	/* Burst refill to save doorbells, memory barriers, const interval */
-	if (refill_required > ENA_RING_DESCS_RATIO(ring_size)) {
+	if (free_queue_entries > refill_threshold) {
 		ena_com_update_dev_comp_head(rx_ring->ena_com_io_cq);
-		ena_populate_rx_queue(rx_ring, refill_required);
+		ena_populate_rx_queue(rx_ring, free_queue_entries);
 	}
 
 	return completed;
@@ -2358,6 +2355,7 @@ static uint16_t eth_ena_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
 	uint16_t seg_len;
 	unsigned int ring_size = tx_ring->ring_size;
 	unsigned int ring_mask = ring_size - 1;
+	unsigned int cleanup_budget;
 	struct ena_com_tx_ctx ena_tx_ctx;
 	struct ena_tx_buffer *tx_info;
 	struct ena_com_buf *ebuf;
@@ -2515,9 +2513,12 @@ static uint16_t eth_ena_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
 		/* Put back descriptor to the ring for reuse */
 		tx_ring->empty_tx_reqs[next_to_clean & ring_mask] = req_id;
 		next_to_clean++;
+		cleanup_budget =
+			RTE_MIN(ring_size / ENA_REFILL_THRESH_DIVIDER,
+				(unsigned int)ENA_REFILL_THRESH_PACKET);
 
 		/* If too many descs to clean, leave it for another run */
-		if (unlikely(total_tx_descs > ENA_RING_DESCS_RATIO(ring_size)))
+		if (unlikely(total_tx_descs > cleanup_budget))
 			break;
 	}
 	tx_ring->tx_stats.available_desc =
diff --git a/drivers/net/ena/ena_ethdev.h b/drivers/net/ena/ena_ethdev.h
index 6bcca08563..13d87d48f0 100644
--- a/drivers/net/ena/ena_ethdev.h
+++ b/drivers/net/ena/ena_ethdev.h
@@ -30,6 +30,16 @@
 #define ENA_WD_TIMEOUT_SEC	3
 #define ENA_DEVICE_KALIVE_TIMEOUT (ENA_WD_TIMEOUT_SEC * rte_get_timer_hz())
 
+/* While processing submitted and completed descriptors (rx and tx path
+ * respectively) in a loop it is desired to:
+ * - perform batch submissions while populating the submission queue
+ * - avoid blocking transmission of other packets during cleanup phase
+ * Hence the utilization ratio of 1/8 of a queue size, capped at a fixed
+ * maximum value when the ring is very big (e.g. 8k Rx rings).
+ */
+#define ENA_REFILL_THRESH_DIVIDER	8
+#define ENA_REFILL_THRESH_PACKET	256
+
 struct ena_adapter;
 
 enum ena_ring_type {
-- 
2.20.1