From: Mike Playle
To: users@dpdk.org
Date: Fri, 18 Nov 2016 16:46:29 +0000
Subject: [dpdk-users] Compatibly adding metadata to mbufs?

Hello,

I've recently started looking into DPDK. We're interested in adding
our own metadata to DPDK's mbuf structures by setting the 'priv_size'
parameter to rte_pktmbuf_pool_create(). One example of an application
that does this is Open vSwitch, and we would like to do something
similar. (Sketch 1 at the end of this mail shows the kind of thing we
have in mind.)

When initialising a PMD, rte_eth_rx_queue_setup() takes a pointer to
a mempool which is used to allocate receive buffers; packets returned
by rte_eth_rx_burst() are stored in mbufs allocated from this pool.
This means that we can easily alter their layout to add our own
metadata region (sketch 2 below).

However, it's not clear that this will work in all cases. For
instance, the "rings-based" PMD doesn't appear to work like this:
instead, the sender's mbufs are passed directly to the receiver. This
means that if we connect to Open vSwitch rather than a physical NIC,
we have no control over the layout of the mbufs we receive, so we
cannot guarantee room to store our metadata. Conversely, any mbufs
that we send back to Open vSwitch would have to be allocated from its
pool rather than ours, otherwise it has nowhere to store its own
metadata.

Do we have to copy the packet data to/from our own mbufs to ensure
compatibility here (sketch 3 below)? We'd like to avoid copies as far
as possible. Or am I misunderstanding something about how metadata
works?

Regards,
Mike Playle
Solarflare Communications
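
P.S. For concreteness, here are the sketches referenced above. They
are written from memory against the current (16.11-era) headers, so
treat all of the names we introduce ('struct my_meta', 'my_meta_of',
'make_meta_pool', the pool sizes) as our own illustrative choices,
not anything DPDK defines.

Sketch 1 - creating a pool whose mbufs carry a private area, and
finding that area again. As far as I can tell, the private area sits
immediately after the struct rte_mbuf header, so it travels with the
packet at no extra cost:

    #include <rte_mbuf.h>
    #include <rte_lcore.h>

    /* Our own per-packet metadata -- nothing DPDK knows about. */
    struct my_meta {
        uint32_t flow_id;
        uint64_t rx_tsc;
    };

    /* The private area lives between the rte_mbuf header and the
     * data room. */
    static inline struct my_meta *
    my_meta_of(struct rte_mbuf *m)
    {
        return (struct my_meta *)((char *)m + sizeof(struct rte_mbuf));
    }

    static struct rte_mempool *
    make_meta_pool(void)
    {
        return rte_pktmbuf_pool_create(
            "meta_pool",                /* pool name            */
            8191,                       /* mbufs in the pool    */
            256,                        /* per-lcore cache size */
            RTE_ALIGN(sizeof(struct my_meta), RTE_MBUF_PRIV_ALIGN),
            RTE_MBUF_DEFAULT_BUF_SIZE,  /* data room per mbuf   */
            rte_socket_id());
    }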
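
Sketch 2 - handing that pool to an RX queue, so that every mbuf
coming out of rte_eth_rx_burst() is allocated from it and therefore
carries the private area. Port/queue/descriptor numbers are just
examples, and I've omitted rte_eth_dev_configure(), TX queue setup
and rte_eth_dev_start() for brevity:

    #include <rte_ethdev.h>
    #include <rte_cycles.h>

    static int
    setup_rx(uint8_t port, struct rte_mempool *pool)
    {
        /* Receive buffers for queue 0 now come from our pool. */
        return rte_eth_rx_queue_setup(port, 0 /* queue */,
                                      512 /* descriptors */,
                                      rte_eth_dev_socket_id(port),
                                      NULL /* default rxconf */,
                                      pool);
    }

    static void
    rx_once(uint8_t port)
    {
        struct rte_mbuf *burst[32];
        uint16_t i, n;

        n = rte_eth_rx_burst(port, 0, burst, 32);
        for (i = 0; i < n; i++)
            /* my_meta_of() from sketch 1: stamp our metadata. */
            my_meta_of(burst[i])->rx_tsc = rte_rdtsc();
    }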
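
Sketch 3 - the copy we are hoping to avoid: re-homing a foreign mbuf
(e.g. one the rings PMD handed us out of Open vSwitch's pool) into
our own pool so that the private area exists. Single-segment packets
only, to keep the sketch short:

    #include <rte_mbuf.h>
    #include <rte_memcpy.h>

    static struct rte_mbuf *
    rehome(struct rte_mbuf *src, struct rte_mempool *pool)
    {
        struct rte_mbuf *dst = rte_pktmbuf_alloc(pool);
        char *p;

        if (dst == NULL)
            return NULL;

        /* Reserve room in the new mbuf, then copy the payload. */
        p = rte_pktmbuf_append(dst, rte_pktmbuf_data_len(src));
        if (p == NULL) {
            rte_pktmbuf_free(dst);
            return NULL;
        }
        rte_memcpy(p, rte_pktmbuf_mtod(src, void *),
                   rte_pktmbuf_data_len(src));

        /* Give the original back to its owner's pool. */
        rte_pktmbuf_free(src);
        return dst;
    }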