From mboxrd@z Thu Jan 1 00:00:00 1970
From: Thomas Monjalon <thomas.monjalon@6wind.com>
To: dev@dpdk.org
Date: Fri, 15 Nov 2013 14:19:08 +0100
Message-Id: <1384521548-7729-1-git-send-email-thomas.monjalon@6wind.com>
X-Mailer: git-send-email 1.7.10.4
Subject: [dpdk-dev] [PATCH] igb/ixgbe: fix index overflow when resetting 4096 queue rings

Rings are reset with a loop because memset cannot be used on them
without a warning about casting away volatile. The loop index was a
16-bit variable, which is sufficient for the number of ring entries but
not for the byte size of the whole ring. The overflow happens when a
ring is configured with 4096 entries: with 16-byte descriptors the ring
is 65536 bytes, which does not fit in a uint16_t, so the loop never
terminates. The fix is to index ring entries instead of bytes and to
reset each whole entry with a simple assignment. The descriptor used as
initializer is all zeros thanks to its static declaration.
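
For reference only (not part of the patch), a minimal standalone sketch
of the failure mode and of the fixed pattern; the struct name, variable
names and sizes below are illustrative stand-ins, not the driver's real
types:

#include <stdint.h>
#include <stdio.h>

struct desc {
	uint64_t lo;	/* 16-byte stand-in for a HW descriptor */
	uint64_t hi;
};

int main(void)
{
	uint32_t nb_desc = 4096;
	uint32_t size = (uint32_t)sizeof(struct desc) * nb_desc;	/* 65536 */
	uint16_t i = 65535;

	/* Buggy pattern: a byte-indexed loop "for (i = 0; i < size; i++)"
	 * with a 16-bit i can never reach 65536; i wraps from 65535 back
	 * to 0, so the loop would never terminate. Shown here without
	 * actually running it: */
	printf("size = %u, (uint16_t)(i + 1) = %u\n",
	       (unsigned)size, (unsigned)(uint16_t)(i + 1));

	/* Fixed pattern: index whole entries and assign a zeroed
	 * descriptor. zeroed_desc is all zeros because of its static
	 * storage duration. */
	static const struct desc zeroed_desc;
	static struct desc ring[4096];
	uint32_t j;

	for (j = 0; j < nb_desc; j++)
		ring[j] = zeroed_desc;

	printf("ring[0].lo = %u\n", (unsigned)ring[0].lo);
	return 0;
}

With 4096 entries the entry index still fits easily in 16 bits, so the
drivers' uint16_t loop counters remain valid once they count entries
rather than bytes.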
Signed-off-by: Thomas Monjalon <thomas.monjalon@6wind.com>
---
 lib/librte_pmd_e1000/igb_rxtx.c   | 14 ++++++--------
 lib/librte_pmd_ixgbe/ixgbe_rxtx.c | 10 ++++++----
 2 files changed, 12 insertions(+), 12 deletions(-)

diff --git a/lib/librte_pmd_e1000/igb_rxtx.c b/lib/librte_pmd_e1000/igb_rxtx.c
index a09b774..f785d9f 100644
--- a/lib/librte_pmd_e1000/igb_rxtx.c
+++ b/lib/librte_pmd_e1000/igb_rxtx.c
@@ -1134,16 +1134,15 @@ igb_reset_tx_queue_stat(struct igb_tx_queue *txq)
 static void
 igb_reset_tx_queue(struct igb_tx_queue *txq, struct rte_eth_dev *dev)
 {
+	static const union e1000_adv_tx_desc zeroed_desc;
 	struct igb_tx_entry *txe = txq->sw_ring;
-	uint32_t size;
 	uint16_t i, prev;
 	struct e1000_hw *hw;
 
 	hw = E1000_DEV_PRIVATE_TO_HW(dev->data->dev_private);
-	size = sizeof(union e1000_adv_tx_desc) * txq->nb_tx_desc;
 	/* Zero out HW ring memory */
-	for (i = 0; i < size; i++) {
-		((volatile char *)txq->tx_ring)[i] = 0;
+	for (i = 0; i < txq->nb_tx_desc; i++) {
+		txq->tx_ring[i] = zeroed_desc;
 	}
 
 	/* Initialize ring entries */
@@ -1297,13 +1296,12 @@ eth_igb_rx_queue_release(void *rxq)
 static void
 igb_reset_rx_queue(struct igb_rx_queue *rxq)
 {
-	unsigned size;
+	static const union e1000_adv_rx_desc zeroed_desc;
 	unsigned i;
 
 	/* Zero out HW ring memory */
-	size = sizeof(union e1000_adv_rx_desc) * rxq->nb_rx_desc;
-	for (i = 0; i < size; i++) {
-		((volatile char *)rxq->rx_ring)[i] = 0;
+	for (i = 0; i < rxq->nb_rx_desc; i++) {
+		rxq->rx_ring[i] = zeroed_desc;
 	}
 
 	rxq->rx_tail = 0;
diff --git a/lib/librte_pmd_ixgbe/ixgbe_rxtx.c b/lib/librte_pmd_ixgbe/ixgbe_rxtx.c
index 39d794d..0f7be95 100644
--- a/lib/librte_pmd_ixgbe/ixgbe_rxtx.c
+++ b/lib/librte_pmd_ixgbe/ixgbe_rxtx.c
@@ -1799,12 +1799,13 @@ ixgbe_dev_tx_queue_release(void *txq)
 static void
 ixgbe_reset_tx_queue(struct igb_tx_queue *txq)
 {
+	static const union ixgbe_adv_tx_desc zeroed_desc;
 	struct igb_tx_entry *txe = txq->sw_ring;
 	uint16_t prev, i;
 
 	/* Zero out HW ring memory */
-	for (i = 0; i < sizeof(union ixgbe_adv_tx_desc) * txq->nb_tx_desc; i++) {
-		((volatile char *)txq->tx_ring)[i] = 0;
+	for (i = 0; i < txq->nb_tx_desc; i++) {
+		txq->tx_ring[i] = zeroed_desc;
 	}
 
 	/* Initialize SW ring entries */
@@ -2093,6 +2094,7 @@ check_rx_burst_bulk_alloc_preconditions(__rte_unused struct igb_rx_queue *rxq)
 static void
 ixgbe_reset_rx_queue(struct igb_rx_queue *rxq)
 {
+	static const union ixgbe_adv_rx_desc zeroed_desc;
 	unsigned i;
 	uint16_t len;
 
@@ -2120,8 +2122,8 @@ ixgbe_reset_rx_queue(struct igb_rx_queue *rxq)
 	 * the H/W ring so look-ahead logic in Rx Burst bulk alloc function
 	 * reads extra memory as zeros.
 	 */
-	for (i = 0; i < len * sizeof(union ixgbe_adv_rx_desc); i++) {
-		((volatile char *)rxq->rx_ring)[i] = 0;
+	for (i = 0; i < len; i++) {
+		rxq->rx_ring[i] = zeroed_desc;
 	}
 
 #ifdef RTE_LIBRTE_IXGBE_RX_ALLOW_BULK_ALLOC
-- 
1.7.10.4