DPDK usage discussions
From: Sofia Baran <sofia.baran@gmx.net>
To: "Trahe, Fiona" <fiona.trahe@intel.com>,
	"Wiles, Keith" <keith.wiles@intel.com>,
	"users@dpdk.org" <users@dpdk.org>
Subject: Re: [dpdk-users] dpdk and bulk data (video/audio)
Date: Thu, 13 Sep 2018 11:54:14 +0200	[thread overview]
Message-ID: <97dff537-e1ab-78a2-8d42-58c9b408dfb8@gmx.net> (raw)
In-Reply-To: <348A99DA5F5B7549AA880327E580B43589618BE2@IRSMSX101.ger.corp.intel.com>


Fiona and Keith, thanks for your help.

I've tried to set buf_addr/buf_iova of an mbuf myself, but I guess I'm 
missing something, because transmitting such mbufs doesn't seem to work. 
Here are some more details:

I need to transfer video data over ethernet with a UDP/RTP-based 
protocol. First I create an rte_mempool holding the video frames (e.g. 
5 MB per frame). Then I create one rte_mempool for mbufs holding the 
UDP/RTP headers, and another rte_mempool for mbufs with no data room, 
which will point into the video frame segments:

struct rte_mempool* frame_pool = rte_mempool_create("FramePool", 32, 
5*1024*1024, ...);

struct rte_mempool* hdr_pool = rte_pktmbuf_pool_create("HeaderPool", 
1024 - 1, MBUF_CACHE_SIZE, 0, 256, SOCKET_ID_ANY);
struct rte_mempool* pay_pool = rte_pktmbuf_pool_create("PayloadPool", 
1024 - 1, MBUF_CACHE_SIZE, 0, 0, SOCKET_ID_ANY);

void* framebuf;
rte_mempool_get(frame_pool, &framebuf);


Then I prepare the header mbuf (the RTP header is not used here):

struct rte_mbuf* mhdr = rte_pktmbuf_alloc(hdr_pool);

struct ether_hdr* eth_hdr = rte_pktmbuf_mtod(mhdr, struct ether_hdr *);
...
struct ipv4_hdr* ip_hdr = (struct ipv4_hdr*)(eth_hdr + 1);
...
struct udp_hdr* udp_hdr = (struct udp_hdr*)(ip_hdr + 1);
...

int payloadSize = 1024;

mhdr->nb_segs   = 2;
mhdr->data_len  = sizeof(struct ether_hdr) + sizeof(struct ipv4_hdr) + 
sizeof(struct udp_hdr);
mhdr->pkt_len   = mhdr->data_len + payloadSize;

// not sure whether the following lines are really necessary
mhdr->ol_flags = 0;
mhdr->ol_flags |= PKT_TX_IPV4;
mhdr->ol_flags |= PKT_TX_IP_CKSUM;
mhdr->vlan_tci       = 0;
mhdr->vlan_tci_outer = 0;
mhdr->l2_len = sizeof(struct ether_hdr);
mhdr->l3_len = sizeof(struct ipv4_hdr);
mhdr->l4_len = 0;

Then I prepare the payload mbuf and link it to the header mbuf:

struct rte_mbuf* mpay = rte_pktmbuf_alloc(pay_pool);

mhdr->next = mpay;

mpay->buf_addr          = framebuf;
mpay->buf_iova          = rte_mempool_virt2iova(framebuf);
mpay->nb_segs           = 1;
mpay->data_len          = payloadSize;
mpay->pkt_len           = mhdr->pkt_len;
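For what it's worth, DPDK 18.05 added a supported API for exactly this pattern: rte_pktmbuf_attach_extbuf() attaches an external data buffer to an mbuf instead of overwriting buf_addr/buf_iova by hand. A hedged sketch, reusing framebuf/mpay/payloadSize from above (the free-callback body and the buf_len handling are placeholders, not a tested implementation):

```c
/* Called once the last mbuf referencing the external buffer is freed;
 * body is a placeholder (e.g. return the frame to frame_pool here). */
static void frame_free_cb(void *addr, void *opaque)
{
    (void)addr;
    (void)opaque;
}

/* The helper carves the shared-info struct out of the buffer tail and
 * shrinks buf_len accordingly, so reserve room for it. */
uint16_t buf_len = payloadSize + sizeof(struct rte_mbuf_ext_shared_info);
struct rte_mbuf_ext_shared_info *shinfo =
    rte_pktmbuf_ext_shinfo_init_helper(framebuf, &buf_len,
                                       frame_free_cb, NULL);

rte_pktmbuf_attach_extbuf(mpay, framebuf,
                          rte_mempool_virt2iova(framebuf),
                          buf_len, shinfo);
mpay->data_len = payloadSize;
```

With attach_extbuf the mbuf carries the EXT_ATTACHED_MBUF flag, so rte_pktmbuf_free() knows the data area is not the mbuf's own data room and calls the callback instead of trying to reclaim it.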


I'm not really sure whether I filled in all the required mbuf fields 
correctly. In any case, when I try to transmit the chain using 
rte_eth_tx_burst(), it doesn't work (rte_eth_tx_burst() reports no 
error, but no data is transmitted by the NIC).

Important remark: when I create the payload mbufs with a data-room size 
!= 0 and don't touch the buf_addr/buf_iova fields, the transfer works! 
I'm using the Mellanox mlx5 PMD.

I hope to get some hints about what's wrong.

Thanks
Sofia



On 09/10/2018 11:46 PM, Trahe, Fiona wrote:
>
>> -----Original Message-----
>> From: users [mailto:users-bounces@dpdk.org] On Behalf Of Wiles, Keith
>> Sent: Monday, September 10, 2018 1:52 PM
>> To: Sofia Baran <sofia.baran@gmx.net>
>> Cc: users@dpdk.org
>> Subject: Re: [dpdk-users] dpdk and bulk data (video/audio)
>>
>>
>>
>>> On Sep 10, 2018, at 6:28 AM, Sofia Baran <sofia.baran@gmx.net> wrote:
>>>
>>>
>>> Hi All,
>>>
>>> I want/try to use DPDK for transferring larger amounts of data, e.g. video frames, which usually are
>> stored within memory buffers with sizes of several MB (remark: by using huge pages, these buffers
>> could be physically contiguous).
>>> When looking at the DPDK documentation, library APIs and examples, I can't find a way/hint how to
>> transfer larger buffers using DPDK without copying the video buffer fragments to the payload sections
>> of the mbufs - which results in high CPU loads.
>>> Within the ip_fragmentation example indirect mbufs are used, pointing to the payload section of a
>> direct mbuf (holding the header). But in my understanding the maximum size of a mbuf payload is 65KB
>> (uint16_t)!?
>>
>> It is true that a single mbuf only holds (64K - 1) bytes. An mbuf normally represents an ethernet
>> packet, and they are limited to 64K.
>>
>> You can create a small mbuf (128 bytes) then set offset/data in the mbuf to point to the video buffer
>> only if you can find the physical memory address for the data. The mbuf normally holds the physical
>> address of the mbuf->data not the attached buffer in this case. This of course means you have to
>> manage the mbuf internal structure members yourself and be very careful you do not rearrange the
>> mbuf members as that can cause a performance problem.
>>
> But the 64k-1 limit still applies, unless I'm misunderstanding.
> A way to get around this is to use chained mbufs.
> So create lots of small mbufs, each 128 bytes, holding no payload, just the mbuf struct.
> Chain them together, with each buf_iova/buf_addr pointing to the next 64k-1 segment of the payload.
> You'll need ~17 mbufs per MB; is this an acceptable overhead?
>    
> You can also consider using compressdev API to compress the data before transmitting.
> However you will have the same problem - and can use the same solution - when passing the data to the PMD to be compressed/decompressed.
>   
>>> I'm pretty new to DPDK, so maybe I missed something. I hope that someone can provide me some hints
>> how to avoid copying the entire payload.
>>> Thanks
>>> Sofia Baran
>>>
>>>
>> Regards,
>> Keith

Thread overview: 4+ messages
2018-09-10 11:28 Sofia Baran
2018-09-10 12:51 ` Wiles, Keith
2018-09-10 21:46   ` Trahe, Fiona
2018-09-13  9:54     ` Sofia Baran [this message]
