From mboxrd@z Thu Jan 1 00:00:00 1970
From: David Marchand
Date: Fri, 16 Apr 2021 11:05:07 +0200
To: Balazs Nemeth
Cc: dev, Maxime Coquelin, "Xia, Chenbo"
Subject: Re: [dpdk-dev] [PATCH v3] vhost: allocate and free packets in bulk
List-Id: DPDK patches and discussions

On Fri, Apr 16, 2021 at 10:18 AM Balazs Nemeth wrote:
>
> Move allocation out further and perform all allocation in bulk. The same
> goes for freeing packets. In the process, also rename
> virtio_dev_pktmbuf_alloc to virtio_dev_pktmbuf_prep. This
> function now receives an already allocated mbuf pointer.
>
> Signed-off-by: Balazs Nemeth

The title should indicate we are only touching the tx packed path.
What about tx split? If it is too complex to rework, this can wait.

> ---
>  lib/librte_vhost/virtio_net.c | 58 +++++++++++++++++++++++------------
>  1 file changed, 38 insertions(+), 20 deletions(-)
>
> diff --git a/lib/librte_vhost/virtio_net.c b/lib/librte_vhost/virtio_net.c
> index ff39878609..d6d5636e0f 100644
> --- a/lib/librte_vhost/virtio_net.c
> +++ b/lib/librte_vhost/virtio_net.c
> @@ -2168,6 +2168,24 @@ virtio_dev_pktmbuf_alloc(struct virtio_net *dev, struct rte_mempool *mp,
>  	return NULL;
>  }
>
> +static __rte_always_inline int
> +virtio_dev_pktmbuf_prep(struct virtio_net *dev, struct rte_mbuf *pkt,
> +		uint32_t data_len)
> +{
> +	if (rte_pktmbuf_tailroom(pkt) >= data_len)
> +		return 0;
> +
> +	/* attach an external buffer if supported */
> +	if (dev->extbuf && !virtio_dev_extbuf_alloc(pkt, data_len))
> +		return 0;
> +
> +	/* check if chained buffers are allowed */
> +	if (!dev->linearbuf)
> +		return 0;
> +
> +	return -1;
> +}
> +

If virtio_dev_pktmbuf_alloc() uses this new helper, we avoid
duplicating the logic.
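For instance, something along these lines (an untested sketch written
against the helper quoted above; I kept rte_pktmbuf_alloc() /
rte_pktmbuf_free() for the allocation itself and the usual unlikely()
check, so treat the details as indicative only):

static __rte_always_inline struct rte_mbuf *
virtio_dev_pktmbuf_alloc(struct virtio_net *dev, struct rte_mempool *mp,
		uint32_t data_len)
{
	struct rte_mbuf *pkt = rte_pktmbuf_alloc(mp);

	/* Allocation failure is still reported by returning NULL. */
	if (unlikely(pkt == NULL))
		return NULL;

	/* The tailroom/extbuf/linearbuf checks now live in the helper. */
	if (virtio_dev_pktmbuf_prep(dev, pkt, data_len)) {
		/* Data doesn't fit and chained mbufs are not allowed. */
		rte_pktmbuf_free(pkt);
		return NULL;
	}

	return pkt;
}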
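As a side note, the gain here comes from turning per-packet mempool
get/put into a single bulk get plus, on early exit, a single bulk put.
Stripped of the vhost specifics, the pattern used in
virtio_dev_tx_packed() further down boils down to this (standalone
sketch, hypothetical names; consume_pkt() stands in for the real
per-packet dequeue work):

#include <rte_mbuf.h>

#define MAX_PKT_BURST 32

/* Hypothetical per-packet work; negative return means stop early. */
static int
consume_pkt(struct rte_mbuf *pkt)
{
	/* Placeholder: a real implementation would copy descriptor data. */
	return pkt == NULL ? -1 : 0;
}

static uint16_t
bulk_pattern_example(struct rte_mempool *mp, uint16_t count)
{
	struct rte_mbuf *pkts[MAX_PKT_BURST];
	uint16_t idx;

	/* One mempool operation instead of 'count' separate allocations;
	 * rte_pktmbuf_alloc_bulk() returns 0 on success.
	 */
	if (count > MAX_PKT_BURST || rte_pktmbuf_alloc_bulk(mp, pkts, count))
		return 0;

	for (idx = 0; idx < count; idx++)
		if (consume_pkt(pkts[idx]) < 0)
			break;

	/* Hand back the unused tail in one call as well. */
	if (idx != count)
		rte_pktmbuf_free_bulk(&pkts[idx], count - idx);

	return idx;
}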
>  static __rte_noinline uint16_t
>  virtio_dev_tx_split(struct virtio_net *dev, struct vhost_virtqueue *vq,
>  	struct rte_mempool *mbuf_pool, struct rte_mbuf **pkts, uint16_t count)

[snip]

> @@ -2429,7 +2440,7 @@ static __rte_always_inline int
>  virtio_dev_tx_single_packed(struct virtio_net *dev,
>  			    struct vhost_virtqueue *vq,
>  			    struct rte_mempool *mbuf_pool,
> -			    struct rte_mbuf **pkts)
> +			    struct rte_mbuf *pkts)
>  {
>
>  	uint16_t buf_id, desc_count = 0;
> @@ -2462,26 +2473,33 @@ virtio_dev_tx_packed(struct virtio_net *dev,
>  	uint32_t pkt_idx = 0;
>  	uint32_t remained = count;
>
> +	if (rte_pktmbuf_alloc_bulk(mbuf_pool, pkts, count))
> +		return 0;
> +
>  	do {
>  		rte_prefetch0(&vq->desc_packed[vq->last_avail_idx]);
>
>  		if (remained >= PACKED_BATCH_SIZE) {
> -			if (!virtio_dev_tx_batch_packed(dev, vq, mbuf_pool,
> +			if (!virtio_dev_tx_batch_packed(dev, vq,
>  							&pkts[pkt_idx])) {
>  				pkt_idx += PACKED_BATCH_SIZE;
>  				remained -= PACKED_BATCH_SIZE;
> +

No need for the extra line.

>  				continue;
>  			}
>  		}
>
>  		if (virtio_dev_tx_single_packed(dev, vq, mbuf_pool,
> -						&pkts[pkt_idx]))
> +						pkts[pkt_idx]))
>  			break;
>  		pkt_idx++;
>  		remained--;
>
>  	} while (remained);
>
> +	if (pkt_idx != count)
> +		rte_pktmbuf_free_bulk(&pkts[pkt_idx], count - pkt_idx);
> +
>  	if (vq->shadow_used_idx) {
>  		do_data_copy_dequeue(vq);

With those comments addressed,
Reviewed-by: David Marchand

Thanks Balazs!


-- 
David Marchand