From: Xiaolong Ye
To: Xiaolong Ye, Qi Zhang
Cc: Karlsson Magnus, Topel Bjorn, yuan.peng@intel.com, dev@dpdk.org,
 David Marchand, Bruce Richardson
Date: Tue, 18 Jun 2019 16:51:06 +0800
Message-Id: <20190618085106.94237-1-xiaolong.ye@intel.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20190617142303.85240-1-xiaolong.ye@intel.com>
References: <20190617142303.85240-1-xiaolong.ye@intel.com>
Subject: [dpdk-dev] [PATCH v2] net/af_xdp: support need wakeup feature

This patch enables the need_wakeup flag for the Tx and fill rings. When
this flag is set by the driver, the userspace application has to
explicitly wake up kernel Rx or Tx processing by issuing a syscall:
poll() wakes up both, while sendto() or its alternatives wake up Tx
processing only. This feature provides efficient support for the case
where the application and the driver are executing on the same core.

Signed-off-by: Xiaolong Ye
---
v2 changes:

1. Remove the need_wakeup devarg so that the need_wakeup feature is
   enabled unconditionally.
2. Add conditional compilation directives to avoid breaking the build
   with kernels that do not yet support the need_wakeup feature.

Note: the original busy-poll feature has morphed into the need_wakeup
flag on the kernel side; the main purpose is the same, namely to support
the application and the driver executing on the same core efficiently.
The kernel-side patchset can be found on the netdev mailing list:
https://lore.kernel.org/netdev/CAJ8uoz2szX=+JXXAMyuVmvSsMXZuDqp6a8rjDQpTioxbZwxFmQ@mail.gmail.com/T/#t

It is targeted for v5.3.

Cc: David Marchand
Cc: Bruce Richardson

 drivers/net/af_xdp/rte_eth_af_xdp.c | 41 +++++++++++++++++++++--------
 1 file changed, 30 insertions(+), 11 deletions(-)

diff --git a/drivers/net/af_xdp/rte_eth_af_xdp.c b/drivers/net/af_xdp/rte_eth_af_xdp.c
index c638d9227..5ce90a760 100644
--- a/drivers/net/af_xdp/rte_eth_af_xdp.c
+++ b/drivers/net/af_xdp/rte_eth_af_xdp.c
@@ -5,6 +5,7 @@
 #include 
 #include 
 #include 
+#include 
 #include 
 #include 
 #include 
@@ -90,6 +91,7 @@ struct pkt_rx_queue {
 	struct rx_stats stats;
 
 	struct pkt_tx_queue *pair;
+	struct pollfd fds[1];
 	int xsk_queue_idx;
 };
 
@@ -206,8 +208,14 @@ eth_af_xdp_rx(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
 		return 0;
 
 	rcvd = xsk_ring_cons__peek(rx, nb_pkts, &idx_rx);
-	if (rcvd == 0)
+	if (rcvd == 0) {
+#if defined(XDP_USE_NEED_WAKEUP)
+		if (xsk_ring_prod__needs_wakeup(fq))
+			(void)poll(rxq->fds, 1, 1000);
+#endif
+
 		goto out;
+	}
 
 	if (xsk_prod_nb_free(fq, free_thresh) >= free_thresh)
 		(void)reserve_fill_queue(umem, ETH_AF_XDP_RX_BATCH_SIZE);
@@ -279,16 +287,19 @@ kick_tx(struct pkt_tx_queue *txq)
 {
 	struct xsk_umem_info *umem = txq->pair->umem;
 
-	while (send(xsk_socket__fd(txq->pair->xsk), NULL,
-		    0, MSG_DONTWAIT) < 0) {
-		/* some thing unexpected */
-		if (errno != EBUSY && errno != EAGAIN && errno != EINTR)
-			break;
-
-		/* pull from completion queue to leave more space */
-		if (errno == EAGAIN)
-			pull_umem_cq(umem, ETH_AF_XDP_TX_BATCH_SIZE);
-	}
+#if defined(XDP_USE_NEED_WAKEUP)
+	if (xsk_ring_prod__needs_wakeup(&txq->tx))
+#endif
+		while (send(xsk_socket__fd(txq->pair->xsk), NULL,
+			    0, MSG_DONTWAIT) < 0) {
+			/* some thing unexpected */
+			if (errno != EBUSY && errno != EAGAIN && errno != EINTR)
+				break;
+
+			/* pull from completion queue to leave more space */
+			if (errno == EAGAIN)
+				pull_umem_cq(umem, ETH_AF_XDP_TX_BATCH_SIZE);
+		}
+
 	pull_umem_cq(umem, ETH_AF_XDP_TX_BATCH_SIZE);
 }
 
@@ -622,6 +633,11 @@ xsk_configure(struct pmd_internals *internals, struct pkt_rx_queue *rxq,
 	cfg.libbpf_flags = 0;
 	cfg.xdp_flags = XDP_FLAGS_UPDATE_IF_NOEXIST;
 	cfg.bind_flags = 0;
+
+#if defined(XDP_USE_NEED_WAKEUP)
+	cfg.bind_flags |= XDP_USE_NEED_WAKEUP;
+#endif
+
 	ret = xsk_socket__create(&rxq->xsk, internals->if_name,
 			rxq->xsk_queue_idx, rxq->umem->umem, &rxq->rx,
 			&txq->tx, &cfg);
@@ -683,6 +699,9 @@ eth_rx_queue_setup(struct rte_eth_dev *dev,
 		goto err;
 	}
 
+	rxq->fds[0].fd = xsk_socket__fd(rxq->xsk);
+	rxq->fds[0].events = POLLIN;
+
 	rxq->umem->pmd_zc = internals->pmd_zc;
 	dev->data->rx_queues[rx_queue_id] = rxq;
 
-- 
2.17.1
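
For reference, below is a sketch of how an application written directly
against libbpf's XSK helpers is expected to honor the need_wakeup flag,
mirroring what the PMD changes above do: check
xsk_ring_prod__needs_wakeup() on the fill or Tx ring and only then issue
poll() or sendto(). It assumes a libbpf and kernel headers that provide
XDP_USE_NEED_WAKEUP and xsk_ring_prod__needs_wakeup(); the helper names
and the 1000 ms poll timeout are illustrative only and are not part of
this patch.

#include <poll.h>
#include <sys/socket.h>
#include <bpf/xsk.h>	/* xsk_ring_prod__needs_wakeup(), struct xsk_ring_prod */

#if defined(XDP_USE_NEED_WAKEUP)
/* Kick kernel Rx processing; poll() wakes up both Rx and Tx. */
static void wake_rx_if_needed(struct xsk_ring_prod *fill_ring, int xsk_fd)
{
	struct pollfd pfd = { .fd = xsk_fd, .events = POLLIN };

	if (xsk_ring_prod__needs_wakeup(fill_ring))
		(void)poll(&pfd, 1, 1000);
}

/* Kick kernel Tx processing only; sendto() (or send()) wakes up Tx. */
static void wake_tx_if_needed(struct xsk_ring_prod *tx_ring, int xsk_fd)
{
	if (xsk_ring_prod__needs_wakeup(tx_ring))
		(void)sendto(xsk_fd, NULL, 0, MSG_DONTWAIT, NULL, 0);
}
#endif

Checking needs_wakeup before issuing the syscall keeps the wakeup off
the fast path: when the driver runs on a separate core it clears the
flag and the application skips the syscall entirely, which is exactly
the efficiency gain this feature targets.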