From: Sofia Baran
To: "Trahe, Fiona", "Wiles, Keith", users@dpdk.org
Date: Thu, 13 Sep 2018 11:54:14 +0200
Subject: Re: [dpdk-users] dpdk and bulk data (video/audio)

Fiona and Keith,

thanks for your help. I've tried to set buf_addr/buf_iova of an mbuf myself, but I guess I'm missing something, because transmitting such mbufs doesn't seem to work. Here are some more details:

I need to transfer video data over Ethernet with a UDP/RTP-based protocol. First I create an rte_mempool holding the video frames (e.g. 5 MB per frame).
Then I create an rte_mempool for the mbufs holding the UDP/RTP headers, and another rte_mempool whose mbufs have no data room and are only meant to point at the video frame segments:

    struct rte_mempool* frame_pool = rte_mempool_create("FramePool", 32, 5*1024*1024, ...);
    struct rte_mempool* hdr_pool   = rte_pktmbuf_pool_create("HeaderPool", 1024 - 1, MBUF_CACHE_SIZE, 0, 256, SOCKET_ID_ANY);
    struct rte_mempool* pay_pool   = rte_pktmbuf_pool_create("PayloadPool", 1024 - 1, MBUF_CACHE_SIZE, 0, 0, SOCKET_ID_ANY);

    void* framebuf;
    rte_mempool_get(frame_pool, &framebuf);

Then I prepare the header mbuf (the RTP header is not used here):

    struct rte_mbuf* mhdr = rte_pktmbuf_alloc(hdr_pool);

    struct ether_hdr* eth_hdr = rte_pktmbuf_mtod(mhdr, struct ether_hdr *);
    ...
    struct ipv4_hdr* ip_hdr = (struct ipv4_hdr*)(eth_hdr + 1);
    ...
    struct udp_hdr* udp_hdr = (struct udp_hdr*)(ip_hdr + 1);
    ...

    int payloadSize = 1024;

    mhdr->nb_segs   = 2;
    mhdr->data_len  = sizeof(struct ether_hdr) + sizeof(struct ipv4_hdr) + sizeof(struct udp_hdr);
    mhdr->pkt_len   = mhdr->data_len + payloadSize;

    // don't know if the following lines are really that important
    mhdr->ol_flags  = 0;
    mhdr->ol_flags |= PKT_TX_IPV4;
    mhdr->ol_flags |= PKT_TX_IP_CKSUM;
    mhdr->vlan_tci       = 0;
    mhdr->vlan_tci_outer = 0;
    mhdr->l2_len = sizeof(struct ether_hdr);
    mhdr->l3_len = sizeof(struct ipv4_hdr);
    mhdr->l4_len = 0;

Then I prepare the payload mbuf and link it to the header mbuf:

    struct rte_mbuf* mpay = rte_pktmbuf_alloc(pay_pool);

    mhdr->next = mpay;

    mpay->buf_addr = framebuf;
    mpay->buf_iova = rte_mempool_virt2iova(framebuf);
    mpay->nb_segs  = 1;
    mpay->data_len = payloadSize;
    mpay->pkt_len  = mhdr->pkt_len;

I'm not really sure whether I've filled in all the required mbuf members correctly. In any case, when I try to transmit the chain using rte_eth_tx_burst(), it doesn't work: rte_eth_tx_burst() returns no error, but no data is transmitted by the NIC.

Important remark: when I create the payload mbufs with a data room size != 0 and don't touch the buf_addr/buf_iova members, the transfer works! I'm using the Mellanox mlx5 PMD.
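For completeness: I also came across rte_pktmbuf_attach_extbuf() (added in DPDK 18.05), which looks like the intended way to hang an external buffer off an mbuf instead of overwriting buf_addr/buf_iova by hand. Below is a rough, untested sketch of how I would expect to attach one segment of the frame this way; frame_free_cb is just a placeholder of mine, and each attach is still limited to a uint16_t buf_len:

    #include <rte_mbuf.h>

    /* Placeholder free callback: the frame memory belongs to frame_pool,
     * so nothing is freed here; a real callback would hand the frame back
     * to the application once all segments have been transmitted. */
    static void frame_free_cb(void *addr, void *opaque)
    {
        (void)addr;
        (void)opaque;
    }

    /* Attach one segment of the frame buffer to a payload mbuf. */
    static struct rte_mbuf* attach_frame_segment(struct rte_mempool *pay_pool,
                                                 void *seg_addr,
                                                 rte_iova_t seg_iova,
                                                 uint16_t seg_len)
    {
        struct rte_mbuf_ext_shared_info *shinfo;
        uint16_t buf_len = seg_len;
        struct rte_mbuf *mpay = rte_pktmbuf_alloc(pay_pool);

        if (mpay == NULL)
            return NULL;

        /* The helper stores the shared info in the tail of the segment and
         * shrinks buf_len accordingly, so the segment needs a few spare
         * bytes at its end. */
        shinfo = rte_pktmbuf_ext_shinfo_init_helper(seg_addr, &buf_len,
                                                    frame_free_cb, NULL);
        if (shinfo == NULL) {
            rte_pktmbuf_free(mpay);
            return NULL;
        }

        rte_pktmbuf_attach_extbuf(mpay, seg_addr, seg_iova, buf_len, shinfo);
        mpay->data_len = buf_len;
        mpay->pkt_len  = buf_len;
        return mpay;
    }

Is that the recommended route with mlx5, or is overriding buf_addr/buf_iova directly supposed to work as well?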
Hope to get some hints on what's wrong.

Thanks
Sofia


On 09/10/2018 11:46 PM, Trahe, Fiona wrote:
>
>> -----Original Message-----
>> From: users [mailto:users-bounces@dpdk.org] On Behalf Of Wiles, Keith
>> Sent: Monday, September 10, 2018 1:52 PM
>> To: Sofia Baran
>> Cc: users@dpdk.org
>> Subject: Re: [dpdk-users] dpdk and bulk data (video/audio)
>>
>>> On Sep 10, 2018, at 6:28 AM, Sofia Baran wrote:
>>>
>>> Hi All,
>>>
>>> I want/try to use DPDK for transferring larger amounts of data, e.g. video frames, which are usually
>>> stored in memory buffers of several MB (remark: by using huge pages, these buffers could be
>>> physically contiguous).
>>>
>>> When looking at the DPDK documentation, library APIs and examples, I can't find a way/hint how to
>>> transfer larger buffers using DPDK without copying the video buffer fragments into the payload
>>> sections of the mbufs - which results in high CPU loads.
>>>
>>> Within the ip_fragmentation example, indirect mbufs are used, pointing to the payload section of a
>>> direct mbuf (holding the header). But in my understanding the maximum size of an mbuf payload is
>>> 65KB (uint16_t)!?
>>
>> It is true that mbufs only hold (64K - 1). The concept of mbufs is normally an ethernet packet and
>> they are limited to 64K.
>>
>> You can create a small mbuf (128 bytes) then set offset/data in the mbuf to point to the video
>> buffer only if you can find the physical memory address for the data. The mbuf normally holds the
>> physical address of the mbuf->data, not the attached buffer in this case. This of course means you
>> have to manage the mbuf internal structure members yourself and be very careful you do not
>> rearrange the mbuf members as that can cause a performance problem.
>>
> But the 64k-1 limit still applies, unless I'm misunderstanding.
> A way to get around this is to use chained mbufs.
> So create lots of small mbufs, each 128 bytes, holding no payload, just the mbuf struct.
> Chain them together with each buf_iova/buf_addr pointing to the next 64k-1 segment of the payload.
> You'll need ~17 mbufs per MB, is this an acceptable overhead?
>
> You can also consider using the compressdev API to compress the data before transmitting.
> However it will have the same problem - and can use the same solution - for passing the data to the
> PMD to be compressed/decompressed.
>
>>> I'm pretty new to DPDK so maybe I missed something. I hope that someone can provide me some hints
>>> how to avoid copying the entire payload.
>>>
>>> Thanks
>>> Sofia Baran
>>
>> Regards,
>> Keith
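PS: Fiona, to make sure I understood the chaining suggestion correctly, below is roughly what I plan to try next (untested sketch; frame_iova stands for however I obtain the frame's bus address, and I'm assuming the hugepage-backed frame is IOVA-contiguous):

    #include <rte_common.h>
    #include <rte_mbuf.h>

    /* Build a packet whose payload is the whole frame, chained in chunks of
     * at most 64K-1 bytes, each small mbuf pointing into the frame buffer. */
    static struct rte_mbuf* chain_frame(struct rte_mempool *hdr_pool,
                                        struct rte_mempool *pay_pool,
                                        void *frame, rte_iova_t frame_iova,
                                        uint32_t frame_len)
    {
        const uint32_t seg_max = 65535;   /* 64K - 1, data_len is uint16_t */
        struct rte_mbuf *head, *tail;
        uint32_t off = 0;

        head = rte_pktmbuf_alloc(hdr_pool);
        if (head == NULL)
            return NULL;
        /* ... fill in ether/ipv4/udp headers, data_len, ol_flags and
         * l2/l3_len on "head" as in my code above ... */
        tail = head;

        while (off < frame_len) {
            uint16_t seg_len = (uint16_t)RTE_MIN(frame_len - off, seg_max);
            struct rte_mbuf *seg = rte_pktmbuf_alloc(pay_pool);

            if (seg == NULL) {
                rte_pktmbuf_free(head);   /* frees the whole chain */
                return NULL;
            }
            /* Point this small mbuf into the frame buffer. */
            seg->buf_addr = RTE_PTR_ADD(frame, off);
            seg->buf_iova = frame_iova + off;
            seg->buf_len  = seg_len;
            seg->data_off = 0;
            seg->data_len = seg_len;

            tail->next = seg;
            tail       = seg;
            head->nb_segs++;
            off += seg_len;
        }
        head->pkt_len = head->data_len + frame_len;
        return head;
    }

One thing I'm still unsure about: these payload mbufs would go back to pay_pool with the overwritten buf_addr/buf_iova when the PMD frees them after transmission, so I suppose I'd have to restore those fields before reusing them - is that correct?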