From: Balazs Nemeth
Date: Fri, 16 Apr 2021 11:12:03 +0200
To: David Marchand
Cc: dev, Maxime Coquelin, "Xia, Chenbo"
Subject: Re: [dpdk-dev] [PATCH v3] vhost: allocate and free packets in bulk

Hi David,

I was also thinking about using the same idea for tx split so that
virtio_dev_pktmbuf_alloc can be removed completely. I don't have those
patches yet. However, I can make virtio_dev_pktmbuf_alloc use
virtio_dev_pktmbuf_prep for this patch that addresses the packed
version and submit other patches for tx split later. Would that work
for now?
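
Something along these lines is what I had in mind (untested sketch, just to
illustrate the idea; it assumes the virtio_dev_pktmbuf_prep() helper from the
quoted patch below and keeps the existing error log of virtio_net.c):

static __rte_always_inline struct rte_mbuf *
virtio_dev_pktmbuf_alloc(struct virtio_net *dev, struct rte_mempool *mp,
			 uint32_t data_len)
{
	struct rte_mbuf *pkt = rte_pktmbuf_alloc(mp);

	if (unlikely(pkt == NULL)) {
		VHOST_LOG_DATA(ERR,
			"Failed to allocate memory for mbuf.\n");
		return NULL;
	}

	/* Reuse the prep logic instead of duplicating it here. */
	if (virtio_dev_pktmbuf_prep(dev, pkt, data_len)) {
		/* Data doesn't fit and only linear buffers are allowed. */
		rte_pktmbuf_free(pkt);
		return NULL;
	}

	return pkt;
}

That way the tailroom/extbuf/linearbuf checks live in a single place.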

Regards,
Balazs

On Fri, Apr 16, 2021 at 11:05 AM David Marchand wrote:
> On Fri, Apr 16, 2021 at 10:18 AM Balazs Nemeth wrote:
> >
> > Move allocation out further and perform all allocation in bulk. The same
> > goes for freeing packets. In the process, also rename
> > virtio_dev_pktmbuf_alloc to virtio_dev_pktmbuf_prep. This
> > function now receives an already allocated mbuf pointer.
> >
> > Signed-off-by: Balazs Nemeth
>
> The title should indicate we are only touching the tx packed path.
>
> What about tx split?
> If it is too complex to rework, this can wait.
>
> > ---
> >  lib/librte_vhost/virtio_net.c | 58 +++++++++++++++++++++++------------
> >  1 file changed, 38 insertions(+), 20 deletions(-)
> >
> > diff --git a/lib/librte_vhost/virtio_net.c b/lib/librte_vhost/virtio_net.c
> > index ff39878609..d6d5636e0f 100644
> > --- a/lib/librte_vhost/virtio_net.c
> > +++ b/lib/librte_vhost/virtio_net.c
> > @@ -2168,6 +2168,24 @@ virtio_dev_pktmbuf_alloc(struct virtio_net *dev, struct rte_mempool *mp,
> >         return NULL;
> >  }
> >
> > +static __rte_always_inline int
> > +virtio_dev_pktmbuf_prep(struct virtio_net *dev, struct rte_mbuf *pkt,
> > +                       uint32_t data_len)
> > +{
> > +       if (rte_pktmbuf_tailroom(pkt) >= data_len)
> > +               return 0;
> > +
> > +       /* attach an external buffer if supported */
> > +       if (dev->extbuf && !virtio_dev_extbuf_alloc(pkt, data_len))
> > +               return 0;
> > +
> > +       /* check if chained buffers are allowed */
> > +       if (!dev->linearbuf)
> > +               return 0;
> > +
> > +       return -1;
> > +}
> > +
>
> If virtio_dev_pktmbuf_alloc() uses this new helper, we avoid
> duplicating the logic.
>
> >  static __rte_noinline uint16_t
> >  virtio_dev_tx_split(struct virtio_net *dev, struct vhost_virtqueue *vq,
> >         struct rte_mempool *mbuf_pool, struct rte_mbuf **pkts, uint16_t count)
>
> [snip]
>
> > @@ -2429,7 +2440,7 @@ static __rte_always_inline int
> >  virtio_dev_tx_single_packed(struct virtio_net *dev,
> >                             struct vhost_virtqueue *vq,
> >                             struct rte_mempool *mbuf_pool,
> > -                           struct rte_mbuf **pkts)
> > +                           struct rte_mbuf *pkts)
> >  {
> >
> >         uint16_t buf_id, desc_count = 0;
> > @@ -2462,26 +2473,33 @@ virtio_dev_tx_packed(struct virtio_net *dev,
> >         uint32_t pkt_idx = 0;
> >         uint32_t remained = count;
> >
> > +       if (rte_pktmbuf_alloc_bulk(mbuf_pool, pkts, count))
> > +               return 0;
> > +
> >         do {
> >                 rte_prefetch0(&vq->desc_packed[vq->last_avail_idx]);
> >
> >                 if (remained >= PACKED_BATCH_SIZE) {
> > -                       if (!virtio_dev_tx_batch_packed(dev, vq, mbuf_pool,
> > +                       if (!virtio_dev_tx_batch_packed(dev, vq,
> >                                                         &pkts[pkt_idx])) {
> >                                 pkt_idx += PACKED_BATCH_SIZE;
> >                                 remained -= PACKED_BATCH_SIZE;
> > +
>
> No need for the extra line.
>
> >                                 continue;
> >                         }
> >                 }
> >
> >                 if (virtio_dev_tx_single_packed(dev, vq, mbuf_pool,
> > -                                               &pkts[pkt_idx]))
> > +                                               pkts[pkt_idx]))
> >                         break;
> >                 pkt_idx++;
> >                 remained--;
> >
> >         } while (remained);
> >
> > +       if (pkt_idx != count)
> > +               rte_pktmbuf_free_bulk(&pkts[pkt_idx], count - pkt_idx);
> > +
> >         if (vq->shadow_used_idx) {
> >                 do_data_copy_dequeue(vq);
> >
>
> With those comments addressed,
> Reviewed-by: David Marchand
>
> Thanks Balazs!
>
>
> --
> David Marchand
>