Message-ID: <7592a9c5-e591-8544-8fd4-163d3ff7a3c8@redhat.com>
Date: Tue, 25 Jan 2022 10:39:18 +0100
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:91.0) Gecko/20100101
 Thunderbird/91.4.0
Subject: Re: [PATCH 1/7] vhost: improve IOTLB logs
To: David Marchand <david.marchand@redhat.com>
Cc: dev <dev@dpdk.org>, "Xia, Chenbo" <chenbo.xia@intel.com>
References: <20211223083659.245766-1-maxime.coquelin@redhat.com>
 <20211223083659.245766-2-maxime.coquelin@redhat.com>
 <CAJFAV8w26rQHN1iKym=qQS4Hh1Kzcc-7ycsdsHGunjMSMoim+A@mail.gmail.com>
From: Maxime Coquelin <maxime.coquelin@redhat.com>
In-Reply-To: <CAJFAV8w26rQHN1iKym=qQS4Hh1Kzcc-7ycsdsHGunjMSMoim+A@mail.gmail.com>
Content-Language: en-US
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit
List-Id: DPDK patches and discussions <dev.dpdk.org>

Hi David,

On 1/4/22 15:44, David Marchand wrote:
> On Thu, Dec 23, 2021 at 9:37 AM Maxime Coquelin
> <maxime.coquelin@redhat.com> wrote:
>>
>> This patch adds the IOTLB mempool name when logging debug
>> or error messages, and also prepends the socket path
>> to all the logs.
>>
>> Signed-off-by: Maxime Coquelin <maxime.coquelin@redhat.com>
>> ---
>>   lib/vhost/iotlb.c      | 26 +++++++++++++++-----------
>>   lib/vhost/iotlb.h      | 10 +++++-----
>>   lib/vhost/vhost.c      |  2 +-
>>   lib/vhost/vhost_user.c |  2 +-
>>   4 files changed, 22 insertions(+), 18 deletions(-)
>>
>> diff --git a/lib/vhost/iotlb.c b/lib/vhost/iotlb.c
>> index 82bdb84526..e9e1ede7a4 100644
>> --- a/lib/vhost/iotlb.c
>> +++ b/lib/vhost/iotlb.c
>> @@ -62,7 +62,7 @@ vhost_user_iotlb_pending_miss(struct vhost_virtqueue *vq, uint64_t iova,
>>   }
>>
>>   void
>> -vhost_user_iotlb_pending_insert(struct vhost_virtqueue *vq,
>> +vhost_user_iotlb_pending_insert(struct virtio_net *dev, struct vhost_virtqueue *vq,
>>                                  uint64_t iova, uint8_t perm)
>>   {
>>          struct vhost_iotlb_entry *node;
>> @@ -70,14 +70,16 @@ vhost_user_iotlb_pending_insert(struct vhost_virtqueue *vq,
>>
>>          ret = rte_mempool_get(vq->iotlb_pool, (void **)&node);
>>          if (ret) {
>> -               VHOST_LOG_CONFIG(DEBUG, "IOTLB pool empty, clear entries\n");
>> +               VHOST_LOG_CONFIG(DEBUG, "(%s) IOTLB pool %s empty, clear entries\n",
>> +                               dev->ifname, vq->iotlb_pool->name);
>>                  if (!TAILQ_EMPTY(&vq->iotlb_pending_list))
>>                          vhost_user_iotlb_pending_remove_all(vq);
>>                  else
>>                          vhost_user_iotlb_cache_random_evict(vq);
>>                  ret = rte_mempool_get(vq->iotlb_pool, (void **)&node);
>>                  if (ret) {
>> -                       VHOST_LOG_CONFIG(ERR, "IOTLB pool still empty, failure\n");
>> +                       VHOST_LOG_CONFIG(ERR, "(%s) IOTLB pool %s still empty, failure\n",
>> +                                       dev->ifname, vq->iotlb_pool->name);
>>                          return;
>>                  }
>>          }
>> @@ -156,22 +158,25 @@ vhost_user_iotlb_cache_random_evict(struct vhost_virtqueue *vq)
>>   }
>>
>>   void
>> -vhost_user_iotlb_cache_insert(struct vhost_virtqueue *vq, uint64_t iova,
>> -                               uint64_t uaddr, uint64_t size, uint8_t perm)
>> +vhost_user_iotlb_cache_insert(struct virtio_net *dev, struct vhost_virtqueue *vq,
>> +                               uint64_t iova, uint64_t uaddr,
>> +                               uint64_t size, uint8_t perm)
>>   {
>>          struct vhost_iotlb_entry *node, *new_node;
>>          int ret;
>>
>>          ret = rte_mempool_get(vq->iotlb_pool, (void **)&new_node);
>>          if (ret) {
>> -               VHOST_LOG_CONFIG(DEBUG, "IOTLB pool empty, clear entries\n");
>> +               VHOST_LOG_CONFIG(DEBUG, "(%s) IOTLB pool %s empty, clear entries\n",
>> +                               dev->ifname, vq->iotlb_pool->name);
> 
> We have the same logs in two different paths
> (vhost_user_iotlb_pending_insert and vhost_user_iotlb_cache_insert).
> It would probably help when debugging to have separate messages.
> 
> This could be added later, since this current patch is about prefixing
> with the socket path.

Makes sense. I'm adding a new patch on top of the series to address
that.

> 
>>                  if (!TAILQ_EMPTY(&vq->iotlb_list))
>>                          vhost_user_iotlb_cache_random_evict(vq);
>>                  else
>>                          vhost_user_iotlb_pending_remove_all(vq);
>>                  ret = rte_mempool_get(vq->iotlb_pool, (void **)&new_node);
>>                  if (ret) {
>> -                       VHOST_LOG_CONFIG(ERR, "IOTLB pool still empty, failure\n");
>> +                       VHOST_LOG_CONFIG(ERR, "(%s) IOTLB pool %s still empty, failure\n",
>> +                                       dev->ifname, vq->iotlb_pool->name);
>>                          return;
>>                  }
>>          }
>> @@ -311,7 +316,7 @@ vhost_user_iotlb_init(struct virtio_net *dev, int vq_index)
>>
>>          snprintf(pool_name, sizeof(pool_name), "iotlb_%u_%d_%d",
>>                          getpid(), dev->vid, vq_index);
>> -       VHOST_LOG_CONFIG(DEBUG, "IOTLB cache name: %s\n", pool_name);
>> +       VHOST_LOG_CONFIG(DEBUG, "(%s) IOTLB cache name: %s\n", dev->ifname, pool_name);
>>
>>          /* If already created, free it and recreate */
>>          vq->iotlb_pool = rte_mempool_lookup(pool_name);
>> @@ -324,9 +329,8 @@ vhost_user_iotlb_init(struct virtio_net *dev, int vq_index)
>>                          RTE_MEMPOOL_F_NO_CACHE_ALIGN |
>>                          RTE_MEMPOOL_F_SP_PUT);
>>          if (!vq->iotlb_pool) {
>> -               VHOST_LOG_CONFIG(ERR,
>> -                               "Failed to create IOTLB cache pool (%s)\n",
>> -                               pool_name);
>> +               VHOST_LOG_CONFIG(ERR, "(%s) Failed to create IOTLB cache pool (%s)\n",
> 
> I'd make this log consistent with the previous log and remove the ()
> around the pool name.

Done.

> 
>> +                               dev->ifname, pool_name);
>>                  return -1;
>>          }
>>
> 
> The rest lgtm.
> 
> 
> 

Thanks,
Maxime