From mboxrd@z Thu Jan 1 00:00:00 1970
From: Ciara Loftus
To: dev@dpdk.org
Cc: Ciara Loftus
Date: Tue, 13 Oct 2020 13:10:08 +0000
Message-Id: <20201013131008.4070-1-ciara.loftus@intel.com>
In-Reply-To: <20201008091729.4321-1-ciara.loftus@intel.com>
References: <20201008091729.4321-1-ciara.loftus@intel.com>
Subject: [dpdk-dev] [PATCH v2] net/af_xdp: don't allow umem sharing for xsks with same ctx
AF_XDP PMDs that wish to share a UMEM must each have a unique context
(ctx), i.e. netdev,qid tuple. For instance, the following will not work
since both PMDs' contexts are identical:

  --vdev net_af_xdp0,iface=ens786f1,start_queue=0,shared_umem=1
  --vdev net_af_xdp1,iface=ens786f1,start_queue=0,shared_umem=1

Supporting this scenario would require locks, which would impact the
performance of the more typical cases: xsks with different netdev,qid
tuples.

Fixes: 74b46340e2d4 ("net/af_xdp: support shared UMEM")

Signed-off-by: Ciara Loftus
---
v2:
* Add doc update
* Fix commit message style issues
* Update commit message with more information

 doc/guides/nics/af_xdp.rst          | 25 ++++++++++++++++
 drivers/net/af_xdp/rte_eth_af_xdp.c | 44 +++++++++++++++++++++++------
 2 files changed, 60 insertions(+), 9 deletions(-)

diff --git a/doc/guides/nics/af_xdp.rst b/doc/guides/nics/af_xdp.rst
index be268fe7ff..052e59a3ae 100644
--- a/doc/guides/nics/af_xdp.rst
+++ b/doc/guides/nics/af_xdp.rst
@@ -82,3 +82,28 @@ Limitations
 
   Note: The AF_XDP PMD will fail to initialise if an MTU which violates
   the driver's conditions as above is set prior to launching the application.
+
+- **Shared UMEM**
+
+  The sharing of UMEM is only supported for AF_XDP sockets with unique contexts.
+  The context refers to the netdev,qid tuple.
+
+  The following combination will fail:
+
+  .. code-block:: console
+
+    --vdev net_af_xdp0,iface=ens786f1,shared_umem=1 \
+    --vdev net_af_xdp1,iface=ens786f1,shared_umem=1 \
+
+  Either of the following however is permitted since either the netdev or qid differs
+  between the two vdevs:
+
+  .. code-block:: console
+
+    --vdev net_af_xdp0,iface=ens786f1,shared_umem=1 \
+    --vdev net_af_xdp1,iface=ens786f1,start_queue=1,shared_umem=1 \
+
+  .. code-block:: console
+
+    --vdev net_af_xdp0,iface=ens786f1,shared_umem=1 \
+    --vdev net_af_xdp1,iface=ens786f2,shared_umem=1 \
\ No newline at end of file
diff --git a/drivers/net/af_xdp/rte_eth_af_xdp.c b/drivers/net/af_xdp/rte_eth_af_xdp.c
index eaf2c9c873..9e0e5c254a 100644
--- a/drivers/net/af_xdp/rte_eth_af_xdp.c
+++ b/drivers/net/af_xdp/rte_eth_af_xdp.c
@@ -634,16 +634,35 @@ find_internal_resource(struct pmd_internals *port_int)
 	return list;
 }
 
+/* Check if the netdev,qid context already exists */
+static inline bool
+ctx_exists(struct pkt_rx_queue *rxq, const char *ifname,
+		struct pkt_rx_queue *list_rxq, const char *list_ifname)
+{
+	bool exists = false;
+
+	if (rxq->xsk_queue_idx == list_rxq->xsk_queue_idx &&
+			!strncmp(ifname, list_ifname, IFNAMSIZ)) {
+		AF_XDP_LOG(ERR, "ctx %s,%i already exists, cannot share umem\n",
+					ifname, rxq->xsk_queue_idx);
+		exists = true;
+	}
+
+	return exists;
+}
+
 /* Get a pointer to an existing UMEM which overlays the rxq's mb_pool */
-static inline struct xsk_umem_info *
-get_shared_umem(struct pkt_rx_queue *rxq) {
+static inline int
+get_shared_umem(struct pkt_rx_queue *rxq, const char *ifname,
+		struct xsk_umem_info **umem)
+{
 	struct internal_list *list;
 	struct pmd_internals *internals;
-	int i = 0;
+	int i = 0, ret = 0;
 	struct rte_mempool *mb_pool = rxq->mb_pool;
 
 	if (mb_pool == NULL)
-		return NULL;
+		return ret;
 
 	pthread_mutex_lock(&internal_list_lock);
 
@@ -655,20 +674,25 @@ get_shared_umem(struct pkt_rx_queue *rxq) {
 			if (rxq == list_rxq)
 				continue;
 			if (mb_pool == internals->rx_queues[i].mb_pool) {
+				if (ctx_exists(rxq, ifname, list_rxq,
+						internals->if_name)) {
+					ret = -1;
+					goto out;
+				}
 				if (__atomic_load_n(
 					&internals->rx_queues[i].umem->refcnt,
 					__ATOMIC_ACQUIRE)) {
-					pthread_mutex_unlock(
-							&internal_list_lock);
-					return internals->rx_queues[i].umem;
+					*umem = internals->rx_queues[i].umem;
+					goto out;
 				}
 			}
 		}
 	}
+out:
 	pthread_mutex_unlock(&internal_list_lock);
 
-	return NULL;
+	return ret;
 }
 
 static int
@@ -913,7 +937,9 @@ xsk_umem_info *xdp_umem_configure(struct pmd_internals *internals,
 	uint64_t umem_size, align = 0;
 
 	if (internals->shared_umem) {
-		umem = get_shared_umem(rxq);
+		if (get_shared_umem(rxq, internals->if_name, &umem) < 0)
+			return NULL;
+
 		if (umem != NULL &&
 			__atomic_load_n(&umem->refcnt,
 					__ATOMIC_ACQUIRE) < umem->max_xsks) {
-- 
2.17.1