From: Andrew Rybchenko
To: Stephen Hemminger, dev@dpdk.org
Date: Tue, 1 Oct 2019 17:03:25 +0300
References: <20190928003758.18489-1-stephen@networkplumber.org>
 <20190930192056.26828-1-stephen@networkplumber.org>
 <20190930192056.26828-5-stephen@networkplumber.org>
In-Reply-To: <20190930192056.26828-5-stephen@networkplumber.org>
Subject: Re: [dpdk-dev] [PATCH v3 4/6] mbuf: add a pktmbuf copy routine
List-Id: DPDK patches and discussions

On 9/30/19 10:20 PM, Stephen Hemminger wrote:
> This is a commonly used operation that, surprisingly, DPDK has not
> supported. The new rte_pktmbuf_copy does a deep copy of the packet.
> This is a complete copy, including meta-data.
>
> It handles the case where the source mbuf comes from a pool with a
> larger data area than the destination pool. The routine also has
> options for skipping data, or truncating at a fixed length.
>
> Signed-off-by: Stephen Hemminger
> ---
>  lib/librte_mbuf/rte_mbuf.c           | 74 ++++++++++++++++++++++++++++
>  lib/librte_mbuf/rte_mbuf.h           | 26 ++++++++++
>  lib/librte_mbuf/rte_mbuf_version.map |  1 +
>  3 files changed, 101 insertions(+)
>
> diff --git a/lib/librte_mbuf/rte_mbuf.c b/lib/librte_mbuf/rte_mbuf.c
> index 9a1a1b5f9468..901df0192d2e 100644
> --- a/lib/librte_mbuf/rte_mbuf.c
> +++ b/lib/librte_mbuf/rte_mbuf.c
> @@ -321,6 +321,80 @@ __rte_pktmbuf_linearize(struct rte_mbuf *mbuf)
>  	return 0;
>  }
>
> +/* Create a deep copy of mbuf */
> +struct rte_mbuf *
> +rte_pktmbuf_copy(const struct rte_mbuf *m, struct rte_mempool *mp,
> +		 uint32_t off, uint32_t len)
> +{
> +	const struct rte_mbuf *seg = m;
> +	struct rte_mbuf *mc, *m_last, **prev;
> +
> +	if (unlikely(off >= m->pkt_len))
> +		return NULL;
> +
> +	mc = rte_pktmbuf_alloc(mp);
> +	if (unlikely(mc == NULL))
> +		return NULL;
> +
> +	if (len > m->pkt_len - off)
> +		len = m->pkt_len - off;
> +
> +	/* clone meta data from original */
> +	mc->port = m->port;
> +	mc->vlan_tci = m->vlan_tci;
> +	mc->vlan_tci_outer = m->vlan_tci_outer;
> +	mc->tx_offload = m->tx_offload;
> +	mc->hash = m->hash;
> +	mc->packet_type = m->packet_type;
> +	mc->timestamp = m->timestamp;

The same is done in rte_pktmbuf_attach(). Maybe we need a helper
function to copy the metadata? Just to avoid duplication in many places.

> +
> +	/* copy private data (if any) */
> +	rte_memcpy(mc + 1, m + 1,
> +		   rte_pktmbuf_priv_size(mp));

priv_size is mempool-specific, and the original mbuf's mempool may have
a smaller priv_size. I'm not sure it is safe to copy outside of
priv_size, at least from a security point of view. So, I think it
should be RTE_MIN here.
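For illustration, a minimal sketch of the RTE_MIN bound suggested above.
struct fake_mbuf, copy_priv_bounded() and MIN_SZ are hypothetical plain-C
stand-ins for the rte_mbuf/rte_pktmbuf_priv_size() machinery, not DPDK API:

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

/* stand-in for an mbuf whose private area follows the header */
struct fake_mbuf {
	size_t priv_size;	/* stand-in for rte_pktmbuf_priv_size(pool) */
	unsigned char priv[64];	/* stand-in for the private data area */
};

#define MIN_SZ(a, b) ((a) < (b) ? (a) : (b))

/* Copy only the overlap of the two private areas, never reading past
 * the source's priv_size (the security concern raised above). */
static size_t
copy_priv_bounded(struct fake_mbuf *dst, const struct fake_mbuf *src)
{
	size_t n = MIN_SZ(dst->priv_size, src->priv_size);

	memcpy(dst->priv, src->priv, n);
	return n;
}
```

i.e. bound the copy by both pools' private sizes instead of only the
destination's.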
> +
> +	prev = &mc->next;
> +	m_last = mc;
> +	while (len > 0) {
> +		uint32_t copy_len;
> +
> +		while (off >= seg->data_len) {
> +			off -= seg->data_len;
> +			seg = seg->next;
> +		}
> +
> +		/* current buffer is full, chain a new one */
> +		if (rte_pktmbuf_tailroom(m_last) == 0) {
> +			m_last = rte_pktmbuf_alloc(mp);
> +			if (unlikely(m_last == NULL)) {
> +				rte_pktmbuf_free(mc);
> +				return NULL;
> +			}
> +			++mc->nb_segs;
> +			*prev = m_last;
> +			prev = &m_last->next;
> +		}
> +
> +		copy_len = RTE_MIN(seg->data_len - off, len);
> +		if (copy_len > rte_pktmbuf_tailroom(m_last))
> +			copy_len = rte_pktmbuf_tailroom(m_last);
> +
> +		/* append from seg to m_last */
> +		rte_memcpy(rte_pktmbuf_mtod_offset(m_last, char *,
> +						   m_last->data_len),
> +			   rte_pktmbuf_mtod_offset(seg, char *, off),
> +			   copy_len);
> +
> +		m_last->data_len += copy_len;
> +		mc->pkt_len += copy_len;
> +		off += copy_len;
> +		len -= copy_len;
> +	}
> +
> +	__rte_mbuf_sanity_check(mc, 1);
> +	return mc;
> +}
> +
>  /* dump a mbuf on console */
>  void
>  rte_pktmbuf_dump(FILE *f, const struct rte_mbuf *m, unsigned dump_len)

[snip]
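To make the loop's skip/truncate behaviour concrete, here is a
self-contained sketch of the same walk over a segment chain. struct seg
and chain_copy() are hypothetical plain-C stand-ins, not DPDK types; the
tailroom/allocation handling on the destination side is omitted:

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

/* stand-in for one mbuf segment in a chain */
struct seg {
	const unsigned char *data;
	size_t data_len;
	struct seg *next;
};

/* Copy at most `len` bytes starting `off` bytes into the chain into a
 * flat buffer; returns the number of bytes actually copied. Mirrors how
 * the patch walks `seg` past `off` and truncates at `len`. */
static size_t
chain_copy(const struct seg *s, size_t off, size_t len, unsigned char *out)
{
	size_t copied = 0;

	/* skip whole segments covered by the starting offset */
	while (s != NULL && off >= s->data_len) {
		off -= s->data_len;
		s = s->next;
	}

	while (s != NULL && len > 0) {
		size_t n = s->data_len - off;

		if (n > len)
			n = len;
		memcpy(out + copied, s->data + off, n);
		copied += n;
		len -= n;
		off = 0;	/* offset applies only to the first segment */
		s = s->next;
	}
	return copied;
}
```

The `len > m->pkt_len - off` clamp in the patch plays the same role as
running out of segments here: a too-large `len` degrades to "copy to the
end of the packet".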