From mboxrd@z Thu Jan 1 00:00:00 1970
From: Shahaf Shuler
To: Yongseok Koh, Adrien Mazarguil, Nélio Laranjeiro
CC: dev@dpdk.org, Yongseok Koh
Subject: Re: [dpdk-dev] [PATCH 2/5] net/mlx5: remove Memory Region support
Date: Sun, 6 May 2018 06:41:37 +0000
References: <20180502231654.7596-1-yskoh@mellanox.com>
 <20180502231654.7596-3-yskoh@mellanox.com>
In-Reply-To: <20180502231654.7596-3-yskoh@mellanox.com>
Content-Type: text/plain; charset="iso-8859-1"
MIME-Version: 1.0
List-Id: DPDK patches and discussions

Thursday, May 3, 2018 2:17 AM, Yongseok Koh:
> Subject: [dpdk-dev] [PATCH 2/5] net/mlx5: remove Memory Region support
>
> This patch removes the current Memory Region (MR) support in order to
> accommodate the dynamic memory hotplug patch. The driver still compiles,
> but traffic cannot flow and the HW will raise faults. Subsequent patches
> will add the new MR support.
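To recap the mechanism being dropped: each mempool's virtually contiguous
VA range was registered once as a verbs MR, and the resulting lkey
(byte-swapped for the HW) was cached per queue. Below is a minimal sketch
of that idea, not the driver code itself; it assumes <infiniband/verbs.h>
and <rte_byteorder.h>, a valid protection domain, and a range already
validated as contiguous by mlx5_check_mempool() as in the hunks below:

        /* Register [start, end) once; reuse the lkey for every mbuf from
         * this mempool. The real driver keeps the ibv_mr in a refcounted
         * list so it can be deregistered later. */
        static uint32_t
        mempool_lkey_sketch(struct ibv_pd *pd, uintptr_t start, uintptr_t end)
        {
                struct ibv_mr *ibv_mr;

                ibv_mr = ibv_reg_mr(pd, (void *)start, end - start,
                                    IBV_ACCESS_LOCAL_WRITE);
                if (ibv_mr == NULL)
                        return UINT32_MAX; /* Invalid lkey. */
                /* The HW consumes the key in big-endian. */
                return rte_cpu_to_be_32(ibv_mr->lkey);
        }
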
>
> Signed-off-by: Yongseok Koh

Acked-by: Shahaf Shuler

> ---
>  config/common_base              |   1 -
>  doc/guides/nics/mlx5.rst        |   8 -
>  drivers/net/mlx5/Makefile       |   4 -
>  drivers/net/mlx5/mlx5.c         |   4 -
>  drivers/net/mlx5/mlx5.h         |  10 --
>  drivers/net/mlx5/mlx5_defs.h    |  11 --
>  drivers/net/mlx5/mlx5_mr.c      | 343 ----------------------------------------
>  drivers/net/mlx5/mlx5_rxq.c     |  24 +--
>  drivers/net/mlx5/mlx5_rxtx.h    |  90 +----------
>  drivers/net/mlx5/mlx5_trigger.c |  14 --
>  drivers/net/mlx5/mlx5_txq.c     |  17 --
>  11 files changed, 5 insertions(+), 521 deletions(-)
>
> diff --git a/config/common_base b/config/common_base
> index 03a8688b5..bf7d5e785 100644
> --- a/config/common_base
> +++ b/config/common_base
> @@ -296,7 +296,6 @@ CONFIG_RTE_LIBRTE_MLX4_TX_MP_CACHE=8
>  CONFIG_RTE_LIBRTE_MLX5_PMD=n
>  CONFIG_RTE_LIBRTE_MLX5_DEBUG=n
>  CONFIG_RTE_LIBRTE_MLX5_DLOPEN_DEPS=n
> -CONFIG_RTE_LIBRTE_MLX5_TX_MP_CACHE=8
>
>  #
>  # Compile burst-oriented Netronome NFP PMD driver
> diff --git a/doc/guides/nics/mlx5.rst b/doc/guides/nics/mlx5.rst
> index 853c48f81..0fe6e1835 100644
> --- a/doc/guides/nics/mlx5.rst
> +++ b/doc/guides/nics/mlx5.rst
> @@ -167,14 +167,6 @@ These options can be modified in the ``.config`` file.
>    adds additional run-time checks and debugging messages at the cost of
>    lower performance.
>
> -- ``CONFIG_RTE_LIBRTE_MLX5_TX_MP_CACHE`` (default **8**)
> -
> -  Maximum number of cached memory pools (MPs) per TX queue. Each MP from
> -  which buffers are to be transmitted must be associated to memory regions
> -  (MRs). This is a slow operation that must be cached.
> -
> -  This value is always 1 for RX queues since they use a single MP.
> -
>  Environment variables
>  ~~~~~~~~~~~~~~~~~~~~~
>
> diff --git a/drivers/net/mlx5/Makefile b/drivers/net/mlx5/Makefile
> index 3c5b4943a..13f079334 100644
> --- a/drivers/net/mlx5/Makefile
> +++ b/drivers/net/mlx5/Makefile
> @@ -82,10 +82,6 @@ else
>  CFLAGS += -DNDEBUG -UPEDANTIC
>  endif
>
> -ifdef CONFIG_RTE_LIBRTE_MLX5_TX_MP_CACHE
> -CFLAGS += -DMLX5_PMD_TX_MP_CACHE=$(CONFIG_RTE_LIBRTE_MLX5_TX_MP_CACHE)
> -endif
> -
>  include $(RTE_SDK)/mk/rte.lib.mk
>
>  # Generate and clean-up mlx5_autoconf.h.
> diff --git a/drivers/net/mlx5/mlx5.c b/drivers/net/mlx5/mlx5.c
> index 6c4a571ab..01d554758 100644
> --- a/drivers/net/mlx5/mlx5.c
> +++ b/drivers/net/mlx5/mlx5.c
> @@ -245,10 +245,6 @@ mlx5_dev_close(struct rte_eth_dev *dev)
>          if (ret)
>                  DRV_LOG(WARNING, "port %u some flows still remain",
>                          dev->data->port_id);
> -        ret = mlx5_mr_verify(dev);
> -        if (ret)
> -                DRV_LOG(WARNING, "port %u some memory region still remain",
> -                        dev->data->port_id);
>          memset(priv, 0, sizeof(*priv));
>  }
>
> diff --git a/drivers/net/mlx5/mlx5.h b/drivers/net/mlx5/mlx5.h
> index 3ab16bfa2..47d266c90 100644
> --- a/drivers/net/mlx5/mlx5.h
> +++ b/drivers/net/mlx5/mlx5.h
> @@ -26,7 +26,6 @@
>  #include
>  #include
>  #include
> -#include
>  #include
>  #include
>  #include
> @@ -147,7 +146,6 @@ struct priv {
>          struct mlx5_hrxq_drop *flow_drop_queue; /* Flow drop queue. */
>          struct mlx5_flows flows; /* RTE Flow rules. */
>          struct mlx5_flows ctrl_flows; /* Control flow rules. */
> -        LIST_HEAD(mr, mlx5_mr) mr; /* Memory region. */
>          LIST_HEAD(rxq, mlx5_rxq_ctrl) rxqsctrl; /* DPDK Rx queues. */
>          LIST_HEAD(rxqibv, mlx5_rxq_ibv) rxqsibv; /* Verbs Rx queues. */
>          LIST_HEAD(hrxq, mlx5_hrxq) hrxqs; /* Verbs Hash Rx queues. */
> @@ -157,7 +155,6 @@ struct priv {
>          LIST_HEAD(ind_tables, mlx5_ind_table_ibv) ind_tbls;
>          uint32_t link_speed_capa; /* Link speed capabilities. */
>          struct mlx5_xstats_ctrl xstats_ctrl; /* Extended stats control. */
> -        rte_spinlock_t mr_lock; /* MR Lock. */
>          int primary_socket; /* Unix socket for primary process. */
>          void *uar_base; /* Reserved address space for UAR mapping */
>          struct rte_intr_handle intr_handle_socket; /* Interrupt handler. */
> @@ -309,13 +306,6 @@ void mlx5_socket_uninit(struct rte_eth_dev *priv);
>  void mlx5_socket_handle(struct rte_eth_dev *priv);
>  int mlx5_socket_connect(struct rte_eth_dev *priv);
>
> -/* mlx5_mr.c */
> -
> -struct mlx5_mr *mlx5_mr_new(struct rte_eth_dev *dev, struct rte_mempool *mp);
> -struct mlx5_mr *mlx5_mr_get(struct rte_eth_dev *dev, struct rte_mempool *mp);
> -int mlx5_mr_release(struct mlx5_mr *mr);
> -int mlx5_mr_verify(struct rte_eth_dev *dev);
> -
>  /* mlx5_nl.c */
>
>  int mlx5_nl_init(uint32_t nlgroups);
> diff --git a/drivers/net/mlx5/mlx5_defs.h b/drivers/net/mlx5/mlx5_defs.h
> index 55a86957d..f9093777d 100644
> --- a/drivers/net/mlx5/mlx5_defs.h
> +++ b/drivers/net/mlx5/mlx5_defs.h
> @@ -38,17 +38,6 @@
>  #define MLX5_TX_COMP_THRESH_INLINE_DIV (1 << 3)
>
>  /*
> - * Maximum number of cached Memory Pools (MPs) per TX queue. Each RTE MP
> - * from which buffers are to be transmitted will have to be mapped by this
> - * driver to their own Memory Region (MR). This is a slow operation.
> - *
> - * This value is always 1 for RX queues.
> - */
> -#ifndef MLX5_PMD_TX_MP_CACHE
> -#define MLX5_PMD_TX_MP_CACHE 8
> -#endif
> -
> -/*
>   * If defined, only use software counters. The PMD will never ask the hardware
>   * for these, and many of them won't be available.
>   */
> diff --git a/drivers/net/mlx5/mlx5_mr.c b/drivers/net/mlx5/mlx5_mr.c
> index 6613bd6b9..736c40ae4 100644
> --- a/drivers/net/mlx5/mlx5_mr.c
> +++ b/drivers/net/mlx5/mlx5_mr.c
> @@ -18,346 +18,3 @@
>  #include "mlx5_rxtx.h"
>  #include "mlx5_glue.h"
>
> -struct mlx5_check_mempool_data {
> -        int ret;
> -        char *start;
> -        char *end;
> -};
> -
> -/* Called by mlx5_check_mempool() when iterating the memory chunks. */
> -static void
> -mlx5_check_mempool_cb(struct rte_mempool *mp __rte_unused,
> -                      void *opaque, struct rte_mempool_memhdr *memhdr,
> -                      unsigned int mem_idx __rte_unused)
> -{
> -        struct mlx5_check_mempool_data *data = opaque;
> -
> -        /* It already failed, skip the next chunks. */
> -        if (data->ret != 0)
> -                return;
> -        /* It is the first chunk. */
> -        if (data->start == NULL && data->end == NULL) {
> -                data->start = memhdr->addr;
> -                data->end = data->start + memhdr->len;
> -                return;
> -        }
> -        if (data->end == memhdr->addr) {
> -                data->end += memhdr->len;
> -                return;
> -        }
> -        if (data->start == (char *)memhdr->addr + memhdr->len) {
> -                data->start -= memhdr->len;
> -                return;
> -        }
> -        /* Error, mempool is not virtually contiguous. */
> -        data->ret = -1;
> -}
> -
> -/**
> - * Check if a mempool can be used: it must be virtually contiguous.
> - *
> - * @param[in] mp
> - *   Pointer to memory pool.
> - * @param[out] start
> - *   Pointer to the start address of the mempool virtual memory area
> - * @param[out] end
> - *   Pointer to the end address of the mempool virtual memory area
> - *
> - * @return
> - *   0 on success (mempool is virtually contiguous), -1 on error.
> - */
> -static int
> -mlx5_check_mempool(struct rte_mempool *mp, uintptr_t *start,
> -                   uintptr_t *end)
> -{
> -        struct mlx5_check_mempool_data data;
> -
> -        memset(&data, 0, sizeof(data));
> -        rte_mempool_mem_iter(mp, mlx5_check_mempool_cb, &data);
> -        *start = (uintptr_t)data.start;
> -        *end = (uintptr_t)data.end;
> -        return data.ret;
> -}
> -
> -/**
> - * Register a Memory Region (MR) <-> Memory Pool (MP) association in
> - * txq->mp2mr[]. If mp2mr[] is full, remove an entry first.
> - *
> - * @param txq
> - *   Pointer to TX queue structure.
> - * @param[in] mp
> - *   Memory Pool for which a Memory Region lkey must be returned.
> - * @param idx
> - *   Index of the next available entry.
> - *
> - * @return
> - *   mr on success, NULL on failure and rte_errno is set.
> - */
> -struct mlx5_mr *
> -mlx5_txq_mp2mr_reg(struct mlx5_txq_data *txq, struct rte_mempool *mp,
> -                   unsigned int idx)
> -{
> -        struct mlx5_txq_ctrl *txq_ctrl =
> -                container_of(txq, struct mlx5_txq_ctrl, txq);
> -        struct rte_eth_dev *dev;
> -        struct mlx5_mr *mr;
> -
> -        rte_spinlock_lock(&txq_ctrl->priv->mr_lock);
> -        /* Add a new entry, register MR first. */
> -        DRV_LOG(DEBUG, "port %u discovered new memory pool \"%s\" (%p)",
> -                port_id(txq_ctrl->priv), mp->name, (void *)mp);
> -        dev = eth_dev(txq_ctrl->priv);
> -        mr = mlx5_mr_get(dev, mp);
> -        if (mr == NULL) {
> -                if (rte_eal_process_type() != RTE_PROC_PRIMARY) {
> -                        DRV_LOG(DEBUG,
> -                                "port %u using unregistered mempool 0x%p(%s)"
> -                                " in secondary process, please create mempool"
> -                                " before rte_eth_dev_start()",
> -                                port_id(txq_ctrl->priv), (void *)mp, mp->name);
> -                        rte_spinlock_unlock(&txq_ctrl->priv->mr_lock);
> -                        rte_errno = ENOTSUP;
> -                        return NULL;
> -                }
> -                mr = mlx5_mr_new(dev, mp);
> -        }
> -        if (unlikely(mr == NULL)) {
> -                DRV_LOG(DEBUG,
> -                        "port %u unable to configure memory region,"
> -                        " ibv_reg_mr() failed.",
> -                        port_id(txq_ctrl->priv));
> -                rte_spinlock_unlock(&txq_ctrl->priv->mr_lock);
> -                return NULL;
> -        }
> -        if (unlikely(idx == RTE_DIM(txq->mp2mr))) {
> -                /* Table is full, remove oldest entry. */
> -                DRV_LOG(DEBUG,
> -                        "port %u memory region <-> memory pool table full, "
> -                        " dropping oldest entry",
> -                        port_id(txq_ctrl->priv));
> -                --idx;
> -                mlx5_mr_release(txq->mp2mr[0]);
> -                memmove(&txq->mp2mr[0], &txq->mp2mr[1],
> -                        (sizeof(txq->mp2mr) - sizeof(txq->mp2mr[0])));
> -        }
> -        /* Store the new entry. */
> -        txq_ctrl->txq.mp2mr[idx] = mr;
> -        DRV_LOG(DEBUG,
> -                "port %u new memory region lkey for MP \"%s\" (%p): 0x%08"
> -                PRIu32,
> -                port_id(txq_ctrl->priv), mp->name, (void *)mp,
> -                txq_ctrl->txq.mp2mr[idx]->lkey);
> -        rte_spinlock_unlock(&txq_ctrl->priv->mr_lock);
> -        return mr;
> -}
> -
> -struct mlx5_mp2mr_mbuf_check_data {
> -        int ret;
> -};
> -
> -/**
> - * Callback function for rte_mempool_obj_iter() to check whether a given
> - * mempool object looks like a mbuf.
> - *
> - * @param[in] mp
> - *   The mempool pointer
> - * @param[in] arg
> - *   Context data (struct txq_mp2mr_mbuf_check_data). Contains the
> - *   return value.
> - * @param[in] obj
> - *   Object address.
> - * @param index
> - *   Object index, unused.
> - */
> -static void
> -txq_mp2mr_mbuf_check(struct rte_mempool *mp, void *arg, void *obj,
> -                     uint32_t index __rte_unused)
> -{
> -        struct mlx5_mp2mr_mbuf_check_data *data = arg;
> -        struct rte_mbuf *buf = obj;
> -
> -        /*
> -         * Check whether mbuf structure fits element size and whether mempool
> -         * pointer is valid.
> -         */
> -        if (sizeof(*buf) > mp->elt_size || buf->pool != mp)
> -                data->ret = -1;
> -}
> -
> -/**
> - * Iterator function for rte_mempool_walk() to register existing mempools and
> - * fill the MP to MR cache of a TX queue.
> - *
> - * @param[in] mp
> - *   Memory Pool to register.
> - * @param *arg
> - *   Pointer to TX queue structure.
> - */
> -void
> -mlx5_mp2mr_iter(struct rte_mempool *mp, void *arg)
> -{
> -        struct priv *priv = (struct priv *)arg;
> -        struct mlx5_mp2mr_mbuf_check_data data = {
> -                .ret = 0,
> -        };
> -        struct mlx5_mr *mr;
> -
> -        /* Register mempool only if the first element looks like a mbuf. */
> -        if (rte_mempool_obj_iter(mp, txq_mp2mr_mbuf_check, &data) == 0 ||
> -            data.ret == -1)
> -                return;
> -        mr = mlx5_mr_get(eth_dev(priv), mp);
> -        if (mr) {
> -                mlx5_mr_release(mr);
> -                return;
> -        }
> -        mr = mlx5_mr_new(eth_dev(priv), mp);
> -        if (!mr)
> -                DRV_LOG(ERR, "port %u cannot create memory region: %s",
> -                        port_id(priv), strerror(rte_errno));
> -}
> -
> -/**
> - * Register a new memory region from the mempool and store it in the memory
> - * region list.
> - *
> - * @param dev
> - *   Pointer to Ethernet device.
> - * @param mp
> - *   Pointer to the memory pool to register.
> - *
> - * @return
> - *   The memory region on success, NULL on failure and rte_errno is set.
> - */
> -struct mlx5_mr *
> -mlx5_mr_new(struct rte_eth_dev *dev, struct rte_mempool *mp)
> -{
> -        struct priv *priv = dev->data->dev_private;
> -        const struct rte_memseg *ms;
> -        uintptr_t start;
> -        uintptr_t end;
> -        struct mlx5_mr *mr;
> -
> -        mr = rte_zmalloc_socket(__func__, sizeof(*mr), 0, mp->socket_id);
> -        if (!mr) {
> -                DRV_LOG(DEBUG,
> -                        "port %u unable to configure memory region,"
> -                        " ibv_reg_mr() failed.",
> -                        dev->data->port_id);
> -                rte_errno = ENOMEM;
> -                return NULL;
> -        }
> -        if (mlx5_check_mempool(mp, &start, &end) != 0) {
> -                DRV_LOG(ERR, "port %u mempool %p: not virtually contiguous",
> -                        dev->data->port_id, (void *)mp);
> -                rte_errno = ENOMEM;
> -                return NULL;
> -        }
> -        DRV_LOG(DEBUG, "port %u mempool %p area start=%p end=%p size=%zu",
> -                dev->data->port_id, (void *)mp, (void *)start, (void *)end,
> -                (size_t)(end - start));
> -        /* Save original addresses for exact MR lookup. */
> -        mr->start = start;
> -        mr->end = end;
> -
> -        /* Round start and end to page boundary if found in memory segments. */
> -        ms = rte_mem_virt2memseg((void *)start, NULL);
> -        if (ms != NULL)
> -                start = RTE_ALIGN_FLOOR(start, ms->hugepage_sz);
> -        end = RTE_ALIGN_CEIL(end, ms->hugepage_sz);
> -        DRV_LOG(DEBUG,
> -                "port %u mempool %p using start=%p end=%p size=%zu for memory"
> -                " region",
> -                dev->data->port_id, (void *)mp, (void *)start, (void *)end,
> -                (size_t)(end - start));
> -        mr->mr = mlx5_glue->reg_mr(priv->pd, (void *)start, end - start,
> -                                   IBV_ACCESS_LOCAL_WRITE);
> -        if (!mr->mr) {
> -                rte_errno = ENOMEM;
> -                return NULL;
> -        }
> -        mr->mp = mp;
> -        mr->lkey = rte_cpu_to_be_32(mr->mr->lkey);
> -        rte_atomic32_inc(&mr->refcnt);
> -        DRV_LOG(DEBUG, "port %u new memory Region %p refcnt: %d",
> -                dev->data->port_id, (void *)mr, rte_atomic32_read(&mr->refcnt));
> -        LIST_INSERT_HEAD(&priv->mr, mr, next);
> -        return mr;
> -}
> -
> -/**
> - * Search the memory region object in the memory region list.
> - *
> - * @param dev
> - *   Pointer to Ethernet device.
> - * @param mp
> - *   Pointer to the memory pool to register.
> - *
> - * @return
> - *   The memory region on success.
> - */
> -struct mlx5_mr *
> -mlx5_mr_get(struct rte_eth_dev *dev, struct rte_mempool *mp)
> -{
> -        struct priv *priv = dev->data->dev_private;
> -        struct mlx5_mr *mr;
> -
> -        assert(mp);
> -        if (LIST_EMPTY(&priv->mr))
> -                return NULL;
> -        LIST_FOREACH(mr, &priv->mr, next) {
> -                if (mr->mp == mp) {
> -                        rte_atomic32_inc(&mr->refcnt);
> -                        return mr;
> -                }
> -        }
> -        return NULL;
> -}
> -
> -/**
> - * Release the memory region object.
> - *
> - * @param mr
> - *   Pointer to memory region to release.
> - *
> - * @return
> - *   1 while a reference on it exists, 0 when freed.
> - */
> -int
> -mlx5_mr_release(struct mlx5_mr *mr)
> -{
> -        assert(mr);
> -        if (rte_atomic32_dec_and_test(&mr->refcnt)) {
> -                DRV_LOG(DEBUG, "memory region %p refcnt: %d", (void *)mr,
> -                        rte_atomic32_read(&mr->refcnt));
> -                claim_zero(mlx5_glue->dereg_mr(mr->mr));
> -                LIST_REMOVE(mr, next);
> -                rte_free(mr);
> -                return 0;
> -        }
> -        return 1;
> -}
> -
> -/**
> - * Verify that the memory region list is empty.
> - *
> - * @param dev
> - *   Pointer to Ethernet device.
> - *
> - * @return
> - *   The number of objects not released.
> - */
> -int
> -mlx5_mr_verify(struct rte_eth_dev *dev)
> -{
> -        struct priv *priv = dev->data->dev_private;
> -        int ret = 0;
> -        struct mlx5_mr *mr;
> -
> -        LIST_FOREACH(mr, &priv->mr, next) {
> -                DRV_LOG(DEBUG, "port %u memory region %p still referenced",
> -                        dev->data->port_id, (void *)mr);
> -                ++ret;
> -        }
> -        return ret;
> -}
> diff --git a/drivers/net/mlx5/mlx5_rxq.c b/drivers/net/mlx5/mlx5_rxq.c
> index d993e3846..d4fe1fed7 100644
> --- a/drivers/net/mlx5/mlx5_rxq.c
> +++ b/drivers/net/mlx5/mlx5_rxq.c
> @@ -649,16 +649,6 @@ mlx5_rxq_ibv_new(struct rte_eth_dev *dev, uint16_t idx)
>                  goto error;
>          }
>          tmpl->rxq_ctrl = rxq_ctrl;
> -        /* Use the entire RX mempool as the memory region. */
> -        tmpl->mr = mlx5_mr_get(dev, rxq_data->mp);
> -        if (!tmpl->mr) {
> -                tmpl->mr = mlx5_mr_new(dev, rxq_data->mp);
> -                if (!tmpl->mr) {
> -                        DRV_LOG(ERR, "port %u: memory region creation failure",
> -                                dev->data->port_id);
> -                        goto error;
> -                }
> -        }
>          if (rxq_ctrl->irq) {
>                  tmpl->channel = mlx5_glue->create_comp_channel(priv->ctx);
>                  if (!tmpl->channel) {
> @@ -799,7 +789,7 @@ mlx5_rxq_ibv_new(struct rte_eth_dev *dev, uint16_t idx)
>                          .addr = rte_cpu_to_be_64(rte_pktmbuf_mtod(buf, uintptr_t)),
>                          .byte_count = rte_cpu_to_be_32(DATA_LEN(buf)),
> -                        .lkey = tmpl->mr->lkey,
> +                        .lkey = UINT32_MAX,
>                  };
>          }
>          rxq_data->rq_db = rwq.dbrec;
> @@ -835,8 +825,6 @@ mlx5_rxq_ibv_new(struct rte_eth_dev *dev, uint16_t idx)
>          claim_zero(mlx5_glue->destroy_cq(tmpl->cq));
>          if (tmpl->channel)
>                  claim_zero(mlx5_glue->destroy_comp_channel(tmpl->channel));
> -        if (tmpl->mr)
> -                mlx5_mr_release(tmpl->mr);
>          priv->verbs_alloc_ctx.type = MLX5_VERBS_ALLOC_TYPE_NONE;
>          rte_errno = ret; /* Restore rte_errno. */
>          return NULL;
> @@ -865,10 +853,8 @@ mlx5_rxq_ibv_get(struct rte_eth_dev *dev, uint16_t idx)
>          if (!rxq_data)
>                  return NULL;
>          rxq_ctrl = container_of(rxq_data, struct mlx5_rxq_ctrl, rxq);
> -        if (rxq_ctrl->ibv) {
> -                mlx5_mr_get(dev, rxq_data->mp);
> +        if (rxq_ctrl->ibv)
>                  rte_atomic32_inc(&rxq_ctrl->ibv->refcnt);
> -        }
>          return rxq_ctrl->ibv;
>  }
>
> @@ -884,15 +870,9 @@ mlx5_rxq_ibv_get(struct rte_eth_dev *dev, uint16_t idx)
>  int
>  mlx5_rxq_ibv_release(struct mlx5_rxq_ibv *rxq_ibv)
>  {
> -        int ret;
> -
>          assert(rxq_ibv);
>          assert(rxq_ibv->wq);
>          assert(rxq_ibv->cq);
> -        assert(rxq_ibv->mr);
> -        ret = mlx5_mr_release(rxq_ibv->mr);
> -        if (!ret)
> -                rxq_ibv->mr = NULL;
>          if (rte_atomic32_dec_and_test(&rxq_ibv->refcnt)) {
>                  DRV_LOG(DEBUG, "port %u Verbs Rx queue %u: refcnt %d",
>                          port_id(rxq_ibv->rxq_ctrl->priv),
> diff --git a/drivers/net/mlx5/mlx5_rxtx.h b/drivers/net/mlx5/mlx5_rxtx.h
> index 2fc12a186..e8cad51aa 100644
> --- a/drivers/net/mlx5/mlx5_rxtx.h
> +++ b/drivers/net/mlx5/mlx5_rxtx.h
> @@ -54,17 +54,6 @@ struct mlx5_txq_stats {
>
>  struct priv;
>
> -/* Memory region queue object. */
> -struct mlx5_mr {
> -        LIST_ENTRY(mlx5_mr) next; /**< Pointer to the next element. */
> -        rte_atomic32_t refcnt; /*<< Reference counter. */
> -        uint32_t lkey; /*<< rte_cpu_to_be_32(mr->lkey) */
> -        uintptr_t start; /* Start address of MR */
> -        uintptr_t end; /* End address of MR */
> -        struct ibv_mr *mr; /*<< Memory Region. */
> -        struct rte_mempool *mp; /*<< Memory Pool. */
> -};
> -
>  /* Compressed CQE context. */
>  struct rxq_zip {
>          uint16_t ai; /* Array index. */
> @@ -114,7 +103,6 @@ struct mlx5_rxq_ibv {
>          struct ibv_cq *cq; /* Completion Queue. */
>          struct ibv_wq *wq; /* Work Queue. */
>          struct ibv_comp_channel *channel;
> -        struct mlx5_mr *mr; /* Memory Region (for mp). */
>  };
>
>  /* RX queue control descriptor. */
> @@ -175,7 +163,6 @@ struct mlx5_txq_data {
>          uint16_t mpw_hdr_dseg:1; /* Enable DSEGs in the title WQEBB. */
>          uint16_t max_inline; /* Multiple of RTE_CACHE_LINE_SIZE to inline. */
>          uint16_t inline_max_packet_sz; /* Max packet size for inlining. */
> -        uint16_t mr_cache_idx; /* Index of last hit entry. */
>          uint32_t qp_num_8s; /* QP number shifted by 8. */
>          uint64_t offloads; /* Offloads for Tx Queue. */
>          volatile struct mlx5_cqe (*cqes)[]; /* Completion queue. */
> @@ -183,7 +170,6 @@ struct mlx5_txq_data {
>          volatile uint32_t *qp_db; /* Work queue doorbell. */
>          volatile uint32_t *cq_db; /* Completion queue doorbell. */
>          volatile void *bf_reg; /* Blueflame register remapped. */
> -        struct mlx5_mr *mp2mr[MLX5_PMD_TX_MP_CACHE]; /* MR translation table. */
>          struct rte_mbuf *(*elts)[]; /* TX elements. */
>          struct mlx5_txq_stats stats; /* TX queue counters. */
>  } __rte_cache_aligned;
> @@ -322,12 +308,6 @@ uint16_t mlx5_tx_burst_vec(void *dpdk_txq, struct rte_mbuf **pkts,
>  uint16_t mlx5_rx_burst_vec(void *dpdk_txq, struct rte_mbuf **pkts,
>                             uint16_t pkts_n);
>
> -/* mlx5_mr.c */
> -
> -void mlx5_mp2mr_iter(struct rte_mempool *mp, void *arg);
> -struct mlx5_mr *mlx5_txq_mp2mr_reg(struct mlx5_txq_data *txq,
> -                                   struct rte_mempool *mp, unsigned int idx);
> -
>  #ifndef NDEBUG
>  /**
>   * Verify or set magic value in CQE.
> @@ -513,76 +493,12 @@ mlx5_tx_complete(struct mlx5_txq_data *txq)
>          *txq->cq_db = rte_cpu_to_be_32(cq_ci);
>  }
>
> -/**
> - * Get Memory Pool (MP) from mbuf. If mbuf is indirect, the pool from which
> - * the cloned mbuf is allocated is returned instead.
> - *
> - * @param buf
> - *   Pointer to mbuf.
> - *
> - * @return
> - *   Memory pool where data is located for given mbuf.
> - */
> -static struct rte_mempool *
> -mlx5_tx_mb2mp(struct rte_mbuf *buf)
> -{
> -        if (unlikely(RTE_MBUF_INDIRECT(buf)))
> -                return rte_mbuf_from_indirect(buf)->pool;
> -        return buf->pool;
> -}
> -
> -/**
> - * Get Memory Region (MR) <-> rte_mbuf association from txq->mp2mr[].
> - * Add MP to txq->mp2mr[] if it's not registered yet. If mp2mr[] is full,
> - * remove an entry first.
> - *
> - * @param txq
> - *   Pointer to TX queue structure.
> - * @param[in] mp
> - *   Memory Pool for which a Memory Region lkey must be returned.
> - *
> - * @return
> - *   mr->lkey on success, (uint32_t)-1 on failure.
> - */
>  static __rte_always_inline uint32_t
>  mlx5_tx_mb2mr(struct mlx5_txq_data *txq, struct rte_mbuf *mb)
>  {
> -        uint16_t i = txq->mr_cache_idx;
> -        uintptr_t addr = rte_pktmbuf_mtod(mb, uintptr_t);
> -        struct mlx5_mr *mr;
> -
> -        assert(i < RTE_DIM(txq->mp2mr));
> -        if (likely(txq->mp2mr[i]->start <= addr && txq->mp2mr[i]->end > addr))
> -                return txq->mp2mr[i]->lkey;
> -        for (i = 0; (i != RTE_DIM(txq->mp2mr)); ++i) {
> -                if (unlikely(txq->mp2mr[i] == NULL ||
> -                    txq->mp2mr[i]->mr == NULL)) {
> -                        /* Unknown MP, add a new MR for it. */
> -                        break;
> -                }
> -                if (txq->mp2mr[i]->start <= addr &&
> -                    txq->mp2mr[i]->end > addr) {
> -                        assert(txq->mp2mr[i]->lkey != (uint32_t)-1);
> -                        txq->mr_cache_idx = i;
> -                        return txq->mp2mr[i]->lkey;
> -                }
> -        }
> -        mr = mlx5_txq_mp2mr_reg(txq, mlx5_tx_mb2mp(mb), i);
> -        /*
> -         * Request the reference to use in this queue, the original one is
> -         * kept by the control plane.
> -         */
> -        if (mr) {
> -                rte_atomic32_inc(&mr->refcnt);
> -                txq->mr_cache_idx = i >= RTE_DIM(txq->mp2mr) ? i - 1 : i;
> -                return mr->lkey;
> -        } else {
> -                struct rte_mempool *mp = mlx5_tx_mb2mp(mb);
> -
> -                DRV_LOG(WARNING, "failed to register mempool 0x%p(%s)",
> -                        (void *)mp, mp->name);
> -        }
> -        return (uint32_t)-1;
> +        (void)txq;
> +        (void)mb;
> +        return UINT32_MAX;
>  }
>
>  /**
> diff --git a/drivers/net/mlx5/mlx5_trigger.c b/drivers/net/mlx5/mlx5_trigger.c
> index fc56d1ee8..3db6c3f35 100644
> --- a/drivers/net/mlx5/mlx5_trigger.c
> +++ b/drivers/net/mlx5/mlx5_trigger.c
> @@ -48,17 +48,10 @@ mlx5_txq_start(struct rte_eth_dev *dev)
>
>          /* Add memory regions to Tx queues. */
>          for (i = 0; i != priv->txqs_n; ++i) {
> -                unsigned int idx = 0;
> -                struct mlx5_mr *mr;
>                  struct mlx5_txq_ctrl *txq_ctrl = mlx5_txq_get(dev, i);
>
>                  if (!txq_ctrl)
>                          continue;
> -                LIST_FOREACH(mr, &priv->mr, next) {
> -                        mlx5_txq_mp2mr_reg(&txq_ctrl->txq, mr->mp, idx++);
> -                        if (idx == MLX5_PMD_TX_MP_CACHE)
> -                                break;
> -                }
>                  txq_alloc_elts(txq_ctrl);
>                  txq_ctrl->ibv = mlx5_txq_ibv_new(dev, i);
>                  if (!txq_ctrl->ibv) {
> @@ -144,13 +137,11 @@ int
>  mlx5_dev_start(struct rte_eth_dev *dev)
>  {
>          struct priv *priv = dev->data->dev_private;
> -        struct mlx5_mr *mr = NULL;
>          int ret;
>
>          dev->data->dev_started = 1;
>          DRV_LOG(DEBUG, "port %u allocating and configuring hash Rx queues",
>                  dev->data->port_id);
> -        rte_mempool_walk(mlx5_mp2mr_iter, priv);
>          ret = mlx5_txq_start(dev);
>          if (ret) {
>                  DRV_LOG(ERR, "port %u Tx queue allocation failed: %s",
> @@ -190,8 +181,6 @@ mlx5_dev_start(struct rte_eth_dev *dev)
>          ret = rte_errno; /* Save rte_errno before cleanup. */
>          /* Rollback. */
>          dev->data->dev_started = 0;
> -        for (mr = LIST_FIRST(&priv->mr); mr; mr = LIST_FIRST(&priv->mr))
> -                mlx5_mr_release(mr);
>          mlx5_flow_stop(dev, &priv->flows);
>          mlx5_traffic_disable(dev);
>          mlx5_txq_stop(dev);
> @@ -212,7 +201,6 @@ void
>  mlx5_dev_stop(struct rte_eth_dev *dev)
>  {
>          struct priv *priv = dev->data->dev_private;
> -        struct mlx5_mr *mr;
>
>          dev->data->dev_started = 0;
>          /* Prevent crashes when queues are still in use. */
> @@ -228,8 +216,6 @@ mlx5_dev_stop(struct rte_eth_dev *dev)
>          mlx5_dev_interrupt_handler_uninstall(dev);
>          mlx5_txq_stop(dev);
>          mlx5_rxq_stop(dev);
> -        for (mr = LIST_FIRST(&priv->mr); mr; mr = LIST_FIRST(&priv->mr))
> -                mlx5_mr_release(mr);
>  }
>
>  /**
> diff --git a/drivers/net/mlx5/mlx5_txq.c b/drivers/net/mlx5/mlx5_txq.c
> index 3f4b5fea5..a71f3d0f0 100644
> --- a/drivers/net/mlx5/mlx5_txq.c
> +++ b/drivers/net/mlx5/mlx5_txq.c
> @@ -409,7 +409,6 @@ mlx5_txq_ibv_new(struct rte_eth_dev *dev, uint16_t idx)
>                  return NULL;
>          }
>          memset(&tmpl, 0, sizeof(struct mlx5_txq_ibv));
> -        /* MRs will be registered in mp2mr[] later. */
>          attr.cq = (struct ibv_cq_init_attr_ex){
>                  .comp_mask = 0,
>          };
> @@ -812,7 +811,6 @@ mlx5_txq_new(struct rte_eth_dev *dev, uint16_t idx, uint16_t desc,
>          tmpl->txq.elts_n = log2above(desc);
>          tmpl->idx = idx;
>          txq_set_params(tmpl);
> -        /* MRs will be registered in mp2mr[] later. */
>          DRV_LOG(DEBUG, "port %u priv->device_attr.max_qp_wr is %d",
>                  dev->data->port_id, priv->device_attr.orig_attr.max_qp_wr);
>          DRV_LOG(DEBUG, "port %u priv->device_attr.max_sge is %d",
> @@ -847,15 +845,7 @@ mlx5_txq_get(struct rte_eth_dev *dev, uint16_t idx)
>          if ((*priv->txqs)[idx]) {
>                  ctrl = container_of((*priv->txqs)[idx], struct mlx5_txq_ctrl,
>                                      txq);
> -                unsigned int i;
> -
>                  mlx5_txq_ibv_get(dev, idx);
> -                for (i = 0; i != MLX5_PMD_TX_MP_CACHE; ++i) {
> -                        if (ctrl->txq.mp2mr[i])
> -                                claim_nonzero
> -                                        (mlx5_mr_get(dev,
> -                                                     ctrl->txq.mp2mr[i]->mp));
> -                }
>                  rte_atomic32_inc(&ctrl->refcnt);
>          }
>          return ctrl;
> @@ -876,7 +866,6 @@ int
>  mlx5_txq_release(struct rte_eth_dev *dev, uint16_t idx)
>  {
>          struct priv *priv = dev->data->dev_private;
> -        unsigned int i;
>          struct mlx5_txq_ctrl *txq;
>          size_t page_size = sysconf(_SC_PAGESIZE);
>
> @@ -885,12 +874,6 @@ mlx5_txq_release(struct rte_eth_dev *dev, uint16_t idx)
>          txq = container_of((*priv->txqs)[idx], struct mlx5_txq_ctrl, txq);
>          if (txq->ibv && !mlx5_txq_ibv_release(txq->ibv))
>                  txq->ibv = NULL;
> -        for (i = 0; i != MLX5_PMD_TX_MP_CACHE; ++i) {
> -                if (txq->txq.mp2mr[i]) {
> -                        mlx5_mr_release(txq->txq.mp2mr[i]);
> -                        txq->txq.mp2mr[i] = NULL;
> -                }
> -        }
>          if (priv->uar_base)
>                  munmap((void *)RTE_ALIGN_FLOOR((uintptr_t)txq->txq.bf_reg,
>                                 page_size), page_size);
> --
> 2.11.0
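
One note on the interim state: with mlx5_tx_mb2mr() stubbed to return
UINT32_MAX, every Tx data segment is built with an invalid lkey, which is
exactly why traffic cannot flow until the subsequent patches land. A sketch
of how the datapath consumes it (reusing the PMD's own types; the
mlx5_wqe_data_seg fields match the Rx hunk above, so treat this as an
illustration rather than the actual Tx burst code):

        /* Sketch only: fill one Tx scatter entry for an mbuf. */
        static void
        tx_fill_dseg_sketch(struct mlx5_txq_data *txq,
                            volatile struct mlx5_wqe_data_seg *dseg,
                            struct rte_mbuf *buf)
        {
                dseg->byte_count = rte_cpu_to_be_32(DATA_LEN(buf));
                /* Always UINT32_MAX for now; the device faults on use. */
                dseg->lkey = mlx5_tx_mb2mr(txq, buf);
                dseg->addr = rte_cpu_to_be_64(rte_pktmbuf_mtod(buf, uintptr_t));
        }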