From: Stephen Hemminger <stephen@networkplumber.org>
To: dev@dpdk.org
Cc: Stephen Hemminger <stephen@networkplumber.org>,
 Stephen Hemminger <sthemmin@microsoft.com>
Date: Tue, 24 Jul 2018 14:08:51 -0700
Message-Id: <20180724210853.22767-3-stephen@networkplumber.org>
X-Mailer: git-send-email 2.18.0
In-Reply-To: <20180724210853.22767-1-stephen@networkplumber.org>
References: <20180724210853.22767-1-stephen@networkplumber.org>
Subject: [dpdk-dev] [PATCH 2/4] netvsc: avoid overfilling receive descriptor ring

If the requested number of packets is already present in the rx_ring,
then skip reading the ring buffer from the host.

If the staging ring between the poll and receive sides is full, then
don't poll (let incoming packets stay on the host).

If no more transmit descriptors are available, still try to flush any
outstanding data.
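
For illustration only (this block is not part of the diff below): a minimal
sketch of the receive-path logic described above. It assumes the driver's
struct hn_rx_queue exposes hv, rx_ring and queue_id, and that hn_var.h is the
driver-internal header declaring them; rte_ring_count(), rte_ring_full() and
rte_ring_sc_dequeue_burst() are standard DPDK ring APIs.

/* Sketch of the receive burst: only poll the host channel when the
 * staging ring cannot already satisfy the request, then dequeue
 * whatever is staged.
 */
#include <rte_ring.h>
#include <rte_mbuf.h>
#include "hn_var.h"	/* assumed driver-internal header for hn_rx_queue */

static uint16_t
hn_recv_pkts_sketch(struct hn_rx_queue *rxq, struct rte_mbuf **rx_pkts,
		    uint16_t nb_pkts)
{
	/* Enough mbufs already staged?  Then skip touching the host ring. */
	if (rte_ring_count(rxq->rx_ring) < nb_pkts)
		hn_process_events(rxq->hv, rxq->queue_id);

	/* Hand back up to nb_pkts mbufs from the staging ring. */
	return rte_ring_sc_dequeue_burst(rxq->rx_ring, (void **)rx_pkts,
					 nb_pkts, NULL);
}

Inside hn_process_events() itself, the drain loop additionally stops once
rte_ring_full(rxq->rx_ring) reports the staging ring is full, so further
completions remain on the host ring instead of being dropped.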

Signed-off-by: Stephen Hemminger <sthemmin@microsoft.com>
---
 drivers/net/netvsc/hn_rxtx.c | 15 ++++++++-------
 1 file changed, 8 insertions(+), 7 deletions(-)

diff --git a/drivers/net/netvsc/hn_rxtx.c b/drivers/net/netvsc/hn_rxtx.c
index 9a2dd9cb1beb..1aff64ee3ae5 100644
--- a/drivers/net/netvsc/hn_rxtx.c
+++ b/drivers/net/netvsc/hn_rxtx.c
@@ -878,11 +878,11 @@ void hn_process_events(struct hn_data *hv, uint16_t queue_id)
 			PMD_DRV_LOG(ERR, "unknown chan pkt %u", pkt->type);
 			break;
 		}
+
+		if (rxq->rx_ring && rte_ring_full(rxq->rx_ring))
+			break;
 	}
 	rte_spinlock_unlock(&rxq->ring_lock);
-
-	if (unlikely(ret != -EAGAIN))
-		PMD_DRV_LOG(ERR, "channel receive failed: %d", ret);
 }
 
 static void hn_append_to_chim(struct hn_tx_queue *txq,
@@ -1248,7 +1248,7 @@ hn_xmit_pkts(void *ptxq, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
 
 			pkt = hn_try_txagg(hv, txq, pkt_size);
 			if (unlikely(!pkt))
-				goto fail;
+				break;
 
 			hn_encap(pkt, txq->queue_id, m);
 			hn_append_to_chim(txq, pkt, m);
@@ -1269,7 +1269,7 @@ hn_xmit_pkts(void *ptxq, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
 			} else {
 				txd = hn_new_txd(hv, txq);
 				if (unlikely(!txd))
-					goto fail;
+					break;
 			}
 
 			pkt = txd->rndis_pkt;
@@ -1310,8 +1310,9 @@ hn_recv_pkts(void *prxq, struct rte_mbuf **rx_pkts, uint16_t nb_pkts)
 	if (unlikely(hv->closed))
 		return 0;
 
-	/* Get all outstanding receive completions */
-	hn_process_events(hv, rxq->queue_id);
+	/* If ring is empty then process more */
+	if (rte_ring_count(rxq->rx_ring) < nb_pkts)
+		hn_process_events(hv, rxq->queue_id);
 
 	/* Get mbufs off staging ring */
 	return rte_ring_sc_dequeue_burst(rxq->rx_ring, (void **)rx_pkts,
-- 
2.18.0