From: "Trahe, Fiona" <fiona.trahe@intel.com>
To: "dev@dpdk.org" <dev@dpdk.org>
Cc: "Trahe, Fiona" <fiona.trahe@intel.com>,
	"Verma, Shally" <Shally.Verma@cavium.com>,
	"Daly, Lee" <lee.daly@intel.com>,
	"Jozwiak, TomaszX" <tomaszx.jozwiak@intel.com>,
	Akhil Goyal <akhil.goyal@nxp.com>
Subject: [dpdk-dev] RFC: compressdev: append dest data in PMDs instead of in application
Date: Tue, 11 Sep 2018 17:16:55 +0000
Message-ID: <348A99DA5F5B7549AA880327E580B43589619BC1@IRSMSX101.ger.corp.intel.com>

Proposal:
In compressdev, move the responsibility for appending data in the destination
mbuf from the application to the PMD.

Why:
Compression operations are all out-of-place and the output size is unknown.
Source and destination mbufs are passed to the PMDs.
There's no problem with the src, but there is with the destination.
The application allocates the dest mbuf from a pool, guesses how much of
the buffer the PMD will fill with output, and calls rte_pktmbuf_append() for that amount.
No data is actually copied into the dest mbuf by the application.
 
The PMD writes output data to the destination mbuf, and returns the amount written in the 
op.produced parameter.
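For reference, the current application-side pattern looks roughly like the sketch below
(prepare_dst_today, dst_pool and out_guess are illustrative names, not existing API):

    #include <errno.h>
    #include <rte_mbuf.h>
    #include <rte_comp.h>

    /* Minimal sketch of today's application-side flow. */
    static int
    prepare_dst_today(struct rte_comp_op *op, struct rte_mempool *dst_pool,
                      uint16_t out_guess)
    {
        struct rte_mbuf *m_dst = rte_pktmbuf_alloc(dst_pool);

        if (m_dst == NULL)
            return -ENOMEM;

        /* Reserve the guessed output size; nothing is copied, only
         * data_len/pkt_len are bumped so the PMD has room to write into. */
        if (rte_pktmbuf_append(m_dst, out_guess) == NULL) {
            rte_pktmbuf_free(m_dst);
            return -ENOSPC;   /* guess was bigger than the mbuf's tailroom */
        }

        op->m_dst = m_dst;
        op->dst.offset = 0;
        /* After dequeue, op->produced reports how much the PMD actually
         * wrote, which need not match out_guess. */
        return 0;
    }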

Throughout this, the mbuf is not consistent with expectations, i.e. a call to
rte_pktmbuf_data_len() will NOT return the actual amount of data in the buffer, a call to
rte_pktmbuf_append() will NOT add space at the end of the existing data, and
rte_pktmbuf_tailroom() will NOT return how much space is available at the end of the data.
Also, some PMDs, e.g. ISA-L, need scratch space at the end of the output buffer for checksum
calculation. So though the application has appended a specific amount in the expectation that it
can all be used for compressed data, the PMD needs to reduce this by 4 bytes to reserve
space for the checksum - even though there may be room to append another 4 bytes.
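As an illustration, a hypothetical post-dequeue check like this would trip on a single
dest mbuf today (not something the API asks for, just showing the mismatch):

    /* The dest mbuf's length fields still reflect the application's guess,
     * not what the PMD actually produced. */
    if (op->status == RTE_COMP_OP_STATUS_SUCCESS &&
        rte_pktmbuf_data_len(op->m_dst) != op->produced) {
        /* data_len holds the appended guess, so tailroom and any further
         * append on m_dst are calculated from the wrong place. */
    }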

It seems more logical that the PMD should take responsibility for appending,
i.e. the application should pass in an empty mbuf; the PMD uses rte_pktmbuf_tailroom()
to see how much space is available, uses as much of it as it needs, and appends the output data
to match the actual amount of data it writes.
This method caters for an mbuf that already contains some data: in that case the PMD
appends the output AFTER the data already in the mbuf.
It also needs to cater for the SGL (chained mbuf) case, though I'd propose that in this case only
the first mbuf in the chain is allowed to already contain data; the rest must be empty.
An application can adjust a chain to satisfy this condition.
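A minimal sketch of what the PMD side might look like under this proposal, for a single
dest mbuf (do_compress() and CHKSUM_SCRATCH are placeholders for the PMD's real compression
routine and e.g. ISA-L's 4-byte checksum scratch area):

    #include <errno.h>
    #include <rte_mbuf.h>
    #include <rte_comp.h>

    #define CHKSUM_SCRATCH 4

    /* hypothetical compression routine implemented elsewhere in the PMD;
     * writes at most max_len bytes to dst and returns the number written */
    static uint16_t do_compress(struct rte_mbuf *src, uint8_t *dst, uint16_t max_len);

    static int
    pmd_fill_dst(struct rte_comp_op *op)
    {
        struct rte_mbuf *m_dst = op->m_dst;
        uint16_t avail = rte_pktmbuf_tailroom(m_dst);
        uint16_t written;

        if (avail <= CHKSUM_SCRATCH)
            return -ENOSPC;
        avail -= CHKSUM_SCRATCH;   /* reserve scratch room for the checksum */

        /* write after any data already in the mbuf */
        written = do_compress(op->m_src,
                              rte_pktmbuf_mtod_offset(m_dst, uint8_t *,
                                                      rte_pktmbuf_data_len(m_dst)),
                              avail);

        /* append exactly what was produced, so data_len/pkt_len and
         * op->produced agree from the application's point of view */
        rte_pktmbuf_append(m_dst, written);
        op->produced = written;
        return 0;
    }

On the application side this reduces to an rte_pktmbuf_alloc() from the pool with no
append call at all.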

Code impacts:
 * rte_comp_op.dst.offset should be deprecated.
 * The comments on m_dst would be changed to describe this usage.
 * Applications should allocate destination mbuf(s) from a pool, but not append.
 * PMDs should use rte_pktmbuf_append() so the mbuf correctly reflects the amount of data returned.

This turns out to be more complex than expected for SGLs, as rte_mbuf chains DON'T support
chains of empty mbufs, i.e. several macros assume tailroom is only available in the last
segment of a chain. So essentially, if an application chains a bunch of empty mbufs
together, the only tailroom available is the buf_len of the last mbuf and the space in
all the other mbufs is "lost".
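To make the gap concrete, assuming an application has built a chain of empty mbufs
(illustrative helpers, not existing API):

    #include <rte_mbuf.h>

    /* rte_pktmbuf_append() only ever writes into the last segment, so this
     * is all the room the existing mbuf API lets a PMD use. */
    static uint32_t
    chain_tailroom_usable_today(struct rte_mbuf *head)
    {
        return rte_pktmbuf_tailroom(rte_pktmbuf_lastseg(head));
    }

    /* ...whereas this is the room a compression PMD would like to use. */
    static uint32_t
    chain_tailroom_total(struct rte_mbuf *head)
    {
        uint32_t total = 0;
        struct rte_mbuf *seg;

        for (seg = head; seg != NULL; seg = seg->next)
            total += rte_pktmbuf_tailroom(seg);
        return total;
    }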
We've investigated several ways around this. I think either option 1 or 2 would be OK, but I'm
not keen on option 3. I'm looking for feedback, please.

Option1: create an rte_comp_mbuf, identical to rte_mbuf - probably just cast to it - with its own set of
   macros. Most of these would wrap existing mbuf macros; the rest would cater for the empty-chain case.
Option2: pass in a pool on the API instead and have the PMD allocate dest mbufs as needed and chain them
   (see the sketch after this list).
   Storage customers expect to use external buffers attached to mbufs, so this may not suit their use-case,
   though it may be an option if all mbufs in the pool are pre-attached to external buffers.
Option3: the application does as today and the PMD trims the unused space. This would need changes or
   additions to the mbuf macros so it could trim from more than just the last segment, and the PMD would
   need to free unused segments. I'm not keen on this, as the work is done in two places and there's
   potential to leak mbufs, since they are allocated by the application and freed in the PMD.
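The Option2 sketch referred to above, purely for discussion - dst_pool here stands for a
new way of handing the PMD a dest mempool, which is not existing compressdev API:

    #include <rte_mbuf.h>

    /* PMD grows the output chain segment by segment as it produces data. */
    static struct rte_mbuf *
    pmd_next_dst_seg(struct rte_mempool *dst_pool, struct rte_mbuf **head)
    {
        struct rte_mbuf *m = rte_pktmbuf_alloc(dst_pool);

        if (m == NULL)
            return NULL;                 /* pool exhausted */

        if (*head == NULL) {
            *head = m;                   /* first output segment */
        } else if (rte_pktmbuf_chain(*head, m) != 0) {
            rte_pktmbuf_free(m);         /* chain limit reached */
            return NULL;
        }
        return m;                        /* segment the PMD writes into next */
    }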

Regards,
Fiona
