Message-ID: <551fa8d0-9855-cd8f-7d5b-5118f3583b7e@redhat.com>
Date: Tue, 26 Jul 2022 11:26:45 +0200
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:91.0) Gecko/20100101
 Thunderbird/91.11.0
Subject: Re: [PATCH v3 4/4] vhost: stop using mempool for IOTLB cache
To: David Marchand <david.marchand@redhat.com>, dev@dpdk.org
Cc: Chenbo Xia <chenbo.xia@intel.com>
References: <20220722135320.109269-1-david.marchand@redhat.com>
 <20220725203206.427083-1-david.marchand@redhat.com>
 <20220725203206.427083-5-david.marchand@redhat.com>
From: Maxime Coquelin <maxime.coquelin@redhat.com>
In-Reply-To: <20220725203206.427083-5-david.marchand@redhat.com>
Content-Language: en-US
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit



On 7/25/22 22:32, David Marchand wrote:
> A mempool consumes 3 memzones (with the default ring mempool driver).
> The default DPDK configuration allows RTE_MAX_MEMZONE (2560) memzones.
> 
> Assuming there are no other memzones, that means we can have a
> maximum of 853 mempools.
> 
> In the vhost library, the IOTLB cache code so far was requesting a
> mempool per vq, which means that at the maximum, the vhost library
> could request mempools for 426 qps.
> 
> This limit was recently reached on big systems with a lot of virtio
> ports (and multiqueue in use).
> 
> While the limit on mempool count could be something we fix at the DPDK
> project level, there is no reason to use mempools for the IOTLB cache:
> - the IOTLB cache entries do not need to be DMA-able and are only used
>    by the current process (in multiprocess context),
> - getting/putting objects from/in the mempool is always associated with
>    some other locks, so some level of lock contention is already present.
> 
> We can convert to a malloc'd pool with objects put in a free list
> protected by a spinlock.
> 
> Signed-off-by: David Marchand <david.marchand@redhat.com>
> ---
>   lib/vhost/iotlb.c | 102 ++++++++++++++++++++++++++++------------------
>   lib/vhost/iotlb.h |   1 +
>   lib/vhost/vhost.c |   2 +-
>   lib/vhost/vhost.h |   4 +-
>   4 files changed, 67 insertions(+), 42 deletions(-)
> 

Thanks for working on this, using a mempool is definitely not needed
here.
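
For the record, here is a rough sketch of the idea as I understand it
(struct layout and names below are illustrative, not taken from the
actual lib/vhost/iotlb.c change): allocate all cache entries once with
rte_malloc() and recycle them through a spinlock-protected free list,
so no memzone is consumed at all.

#include <stdint.h>
#include <sys/queue.h>

#include <rte_malloc.h>
#include <rte_spinlock.h>

/* Illustrative IOTLB entry; not the exact layout from the patch. */
struct iotlb_entry {
	TAILQ_ENTRY(iotlb_entry) next;
	uint64_t iova;
	uint64_t uaddr;
	uint64_t size;
	uint8_t perm;
};

TAILQ_HEAD(iotlb_entry_list, iotlb_entry);

struct iotlb_pool {
	struct iotlb_entry *entries;       /* single rte_malloc'd array */
	struct iotlb_entry_list free_list; /* entries available for reuse */
	rte_spinlock_t free_lock;
};

/* Allocate the backing array once and push every entry on the free list. */
static int
iotlb_pool_init(struct iotlb_pool *pool, uint32_t nb_entries)
{
	uint32_t i;

	pool->entries = rte_malloc(NULL, nb_entries * sizeof(*pool->entries), 0);
	if (pool->entries == NULL)
		return -1;

	TAILQ_INIT(&pool->free_list);
	rte_spinlock_init(&pool->free_lock);

	for (i = 0; i < nb_entries; i++)
		TAILQ_INSERT_TAIL(&pool->free_list, &pool->entries[i], next);

	return 0;
}

/* Stands in for rte_mempool_get(): pop an entry, NULL when the cache is full. */
static struct iotlb_entry *
iotlb_pool_get(struct iotlb_pool *pool)
{
	struct iotlb_entry *entry;

	rte_spinlock_lock(&pool->free_lock);
	entry = TAILQ_FIRST(&pool->free_list);
	if (entry != NULL)
		TAILQ_REMOVE(&pool->free_list, entry, next);
	rte_spinlock_unlock(&pool->free_lock);

	return entry;
}

/* Stands in for rte_mempool_put(): give the entry back for reuse. */
static void
iotlb_pool_put(struct iotlb_pool *pool, struct iotlb_entry *entry)
{
	rte_spinlock_lock(&pool->free_lock);
	TAILQ_INSERT_TAIL(&pool->free_list, entry, next);
	rte_spinlock_unlock(&pool->free_lock);
}

Since the entries are only touched by the process that owns the vq,
plain rte_malloc() memory is enough; nothing here needs to live in a
memzone or be IOVA-contiguous.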

Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>

Maxime