From: Stephen Hemminger <stephen@networkplumber.org>
To: dev@dpdk.org
Cc: Stephen Hemminger <stephen@networkplumber.org>
Date: Thu, 30 Apr 2020 12:08:50 -0700
Message-Id: <20200430190853.498-5-stephen@networkplumber.org>
In-Reply-To: <20200430190853.498-1-stephen@networkplumber.org>
References: <20200430190853.498-1-stephen@networkplumber.org>
Subject: [dpdk-dev] [PATCH 4/7] net/netvsc: check the vmbus ring buffer more often

VF notifications are handled as VMBUS notifications on the primary
channel (not as hotplug events), so the primary channel should be
checked for events before deciding whether to use the VF for Rx or Tx.
Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
---
 drivers/net/netvsc/hn_rxtx.c | 22 ++++++++++++----------
 1 file changed, 12 insertions(+), 10 deletions(-)

diff --git a/drivers/net/netvsc/hn_rxtx.c b/drivers/net/netvsc/hn_rxtx.c
index 19f00a05285f..773ba31fcc64 100644
--- a/drivers/net/netvsc/hn_rxtx.c
+++ b/drivers/net/netvsc/hn_rxtx.c
@@ -1372,25 +1372,29 @@ hn_xmit_pkts(void *ptxq, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
 	struct hn_data *hv = txq->hv;
 	struct rte_eth_dev *vf_dev;
 	bool need_sig = false;
-	uint16_t nb_tx, avail;
+	uint16_t nb_tx, tx_thresh;
 	int ret;
 
 	if (unlikely(hv->closed))
 		return 0;
 
+	/*
+	 * Always check for events on the primary channel
+	 * because that is where hotplug notifications occur.
+	 */
+	tx_thresh = RTE_MAX(txq->free_thresh, nb_pkts);
+	if (txq->queue_id == 0 ||
+	    rte_mempool_avail_count(txq->txdesc_pool) < tx_thresh)
+		hn_process_events(hv, txq->queue_id, 0);
+
 	/* Transmit over VF if present and up */
 	vf_dev = hn_get_vf_dev(hv);
-
 	if (vf_dev && vf_dev->data->dev_started) {
 		void *sub_q = vf_dev->data->tx_queues[queue_id];
 
 		return (*vf_dev->tx_pkt_burst)(sub_q, tx_pkts, nb_pkts);
 	}
 
-	avail = rte_mempool_avail_count(txq->txdesc_pool);
-	if (nb_pkts > avail || avail <= txq->free_thresh)
-		hn_process_events(hv, txq->queue_id, 0);
-
 	for (nb_tx = 0; nb_tx < nb_pkts; nb_tx++) {
 		struct rte_mbuf *m = tx_pkts[nb_tx];
 		uint32_t pkt_size = m->pkt_len + HN_RNDIS_PKT_LEN;
@@ -1487,10 +1491,7 @@ hn_recv_pkts(void *prxq, struct rte_mbuf **rx_pkts, uint16_t nb_pkts)
 	if (unlikely(hv->closed))
 		return 0;
 
-	/* Receive from VF if present and up */
-	vf_dev = hn_get_vf_dev(hv);
-
-	/* Check for new completions */
+	/* Check for new completions (and hotplug) */
 	if (likely(rte_ring_count(rxq->rx_ring) < nb_pkts))
 		hn_process_events(hv, rxq->queue_id, 0);
 
@@ -1499,6 +1500,7 @@ hn_recv_pkts(void *prxq, struct rte_mbuf **rx_pkts, uint16_t nb_pkts)
 					   (void **)rx_pkts, nb_pkts, NULL);
 
 	/* If VF is available, check that as well */
+	vf_dev = hn_get_vf_dev(hv);
 	if (vf_dev && vf_dev->data->dev_started)
 		nb_rcv += hn_recv_vf(vf_dev->data->port_id, rxq,
 				     rx_pkts + nb_rcv, nb_pkts - nb_rcv);
-- 
2.20.1
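
For readers skimming the archive, the core of the TX-side change above is the
new poll condition: the primary channel (queue 0) is always polled for events,
and other queues poll only when the descriptor pool is running low. The
stand-alone sketch below restates that condition outside the driver;
ring_poll_needed(), max_u16(), the hard-coded free threshold and the "avail"
parameter are illustrative stand-ins, not part of the netvsc code.

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* Illustrative stand-in for txq->free_thresh; the real value is
 * configured per queue. */
#define FREE_THRESH 32u

/* Mirrors RTE_MAX() for uint16_t operands. */
static uint16_t max_u16(uint16_t a, uint16_t b)
{
	return a > b ? a : b;
}

/*
 * Restates the new test in hn_xmit_pkts(): poll the channel when this
 * is the primary queue (queue 0), or when fewer descriptors are free
 * than the larger of the free threshold and the requested burst.
 * "avail" stands in for rte_mempool_avail_count(txq->txdesc_pool).
 */
static bool ring_poll_needed(uint16_t queue_id, uint16_t nb_pkts,
			     unsigned int avail)
{
	uint16_t tx_thresh = max_u16(FREE_THRESH, nb_pkts);

	return queue_id == 0 || avail < tx_thresh;
}

int main(void)
{
	/* Primary queue polls even with plenty of descriptors free. */
	printf("%d\n", ring_poll_needed(0, 16, 1024));	/* prints 1 */
	/* Secondary queue with enough free descriptors skips the poll. */
	printf("%d\n", ring_poll_needed(1, 16, 1024));	/* prints 0 */
	/* Secondary queue running low on descriptors polls first. */
	printf("%d\n", ring_poll_needed(1, 64, 40));	/* prints 1 */
	return 0;
}

Note that the patch performs this poll before looking at the VF: a VF add or
remove arrives as a message on the primary channel, so polling first lets the
transmit and receive paths see the change before choosing between the
synthetic and VF queues.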