From: Avi Kivity <avi@scylladb.com>
To: Tomasz Kulasek <tomaszx.kulasek@intel.com>, dev@dpdk.org
Cc: Vladislav Zolotarov <vladz@scylladb.com>,
Takuya ASADA <syuu@scylladb.com>
Subject: Re: [dpdk-dev] [PATCH v2] doc: announce ABI change for rte_eth_dev structure
Date: Thu, 28 Jul 2016 15:04:31 +0300 [thread overview]
Message-ID: <83855193-c7ea-55ad-5a02-7f26a8984878@scylladb.com> (raw)
In-Reply-To: <1469114659-66063-1-git-send-email-tomaszx.kulasek@intel.com>
On 07/21/2016 06:24 PM, Tomasz Kulasek wrote:
> This is an ABI deprecation notice for DPDK 16.11 in librte_ether about
> changes in rte_eth_dev and rte_eth_desc_lim structures.
>
> As discussed in this thread:
>
> http://dpdk.org/ml/archives/dev/2015-September/023603.html
>
> Depending on the HW offloads requested, different NIC models may
> impose different requirements on the packets to be transmitted, in
> terms of:
>
> - Max number of fragments per packet allowed
> - Max number of fragments per TSO segment
> - The way pseudo-header checksum should be pre-calculated
> - L3/L4 header fields filling
> - etc.
>
>
> MOTIVATION:
> -----------
>
> 1) Some work cannot (and should not) be done in rte_eth_tx_burst.
>    However, this work is sometimes required, and today it is left to
>    the application.
>
> 2) Different hardware may have different requirements for TX offloads;
>    each may support a different subset, and so on.
>
> 3) Some parameters (e.g. the number of segments in the ixgbe driver)
>    may hang the device. These parameters can vary between devices.
>
>    For example, i40e HW allows 8 fragments per packet, but that is
>    after TSO segmentation, while ixgbe has a 38-fragment pre-TSO limit.
>
> 4) Fields in the packet may require different initialization (e.g.
>    pseudo-header checksum precalculation, sometimes done differently
>    depending on the packet type, and so on). Today the application
>    must take care of this itself.
>
> 5) Using an additional API (rte_eth_tx_prep) before rte_eth_tx_burst
>    lets the application prepare the packet burst in a form acceptable
>    to the specific device.
>
> 6) Some additional checks may be done in debug mode, keeping the
>    tx_burst implementation clean.
Thanks a lot for this. Seastar suffered from this issue and had to
apply NIC-specific workarounds.
The proposal will work well for seastar.
>
> PROPOSAL:
> ---------
>
> To help users deal with all this variety, we propose to:
>
> 1. Introduce an rte_eth_tx_prep() function to do the necessary
>    preparation of a packet burst so it can be safely transmitted on the
>    device with the desired HW offloads (set/reset the checksum field
>    according to the hardware requirements) and to check HW constraints
>    (number of segments per packet, etc.).
>
>    Since the limitations and requirements may differ between devices,
>    this requires extending the rte_eth_dev structure with a new
>    function pointer, "tx_pkt_prep", which can be implemented in the
>    driver to prepare and verify packets in a device-specific way before
>    the burst, preventing the application from sending malformed packets.
>
> 2. Also, new fields will be introduced in rte_eth_desc_lim:
>    nb_seg_max and nb_mtu_seg_max, providing information about the
>    maximum number of segments in TSO and non-TSO packets acceptable
>    to the device.
>
>    This information helps the application avoid creating malformed
>    or oversized packets.
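An application could consult these limits before building an mbuf chain,
along these lines. The struct below is a hypothetical mirror of just the two
proposed fields (the real rte_eth_desc_lim carries more), and the limit
values in the usage are illustrative, not taken from any datasheet.

```c
#include <stdint.h>

/* Hypothetical mirror of the two proposed fields only. */
struct desc_lim {
    uint16_t nb_seg_max;     /* max segments per TSO packet     */
    uint16_t nb_mtu_seg_max; /* max segments per non-TSO packet */
};

/* Application-side check before building an mbuf chain:
 * returns nonzero if a chain of nb_segs segments fits the limits. */
static int
chain_fits(const struct desc_lim *lim, uint16_t nb_segs, int is_tso)
{
    uint16_t max = is_tso ? lim->nb_seg_max : lim->nb_mtu_seg_max;

    return nb_segs <= max;
}
```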
>
>
> APPLICATION (USE CASE):
> -----------------------
>
> 1) The application initializes the burst of packets to send, setting
>    the required TX offload flags and fields, such as l2_len, l3_len,
>    l4_len, and tso_segsz.
>
> 2) The application passes the burst to rte_eth_tx_prep to check the
>    conditions required to send the packets through the NIC.
>
> 3) The result of rte_eth_tx_prep can be used to send the valid packets
>    and/or restore the invalid ones if the function fails.
>
> eg.
>
> for (i = 0; i < nb_pkts; i++) {
>
> /* initialize or process packet */
>
> bufs[i]->tso_segsz = 800;
> bufs[i]->ol_flags = PKT_TX_TCP_SEG | PKT_TX_IPV4
> | PKT_TX_IP_CKSUM;
> bufs[i]->l2_len = sizeof(struct ether_hdr);
> bufs[i]->l3_len = sizeof(struct ipv4_hdr);
> bufs[i]->l4_len = sizeof(struct tcp_hdr);
> }
>
> /* Prepare burst of TX packets */
> nb_prep = rte_eth_tx_prep(port, 0, bufs, nb_pkts);
>
> if (nb_prep < nb_pkts) {
> printf("tx_prep failed\n");
>
> /* drop or restore invalid packets */
>
> }
>
> /* Send burst of TX packets */
> nb_tx = rte_eth_tx_burst(port, 0, bufs, nb_prep);
>
> /* Free any unsent packets. */
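The final "free any unsent packets" comment could be fleshed out as below.
To keep the sketch self-contained, pktmbuf_free here is a mock stand-in for
DPDK's real rte_pktmbuf_free(), which is what an actual application would call.

```c
#include <stdint.h>

/* Mock mbuf and free-counting stand-in for rte_pktmbuf_free(). */
struct mbuf {
    int in_use;
};

static unsigned int freed;

static void
pktmbuf_free(struct mbuf *m)
{
    m->in_use = 0;
    freed++;
}

/* Free the tail of the burst that tx_burst did not accept:
 * packets [nb_tx, nb_prep) were prepared but never sent. */
static void
free_unsent(struct mbuf **bufs, uint16_t nb_tx, uint16_t nb_prep)
{
    uint16_t i;

    for (i = nb_tx; i < nb_prep; i++)
        pktmbuf_free(bufs[i]);
}
```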
>
>
>
> Signed-off-by: Tomasz Kulasek <tomaszx.kulasek@intel.com>
> ---
> doc/guides/rel_notes/deprecation.rst | 7 +++++++
> 1 file changed, 7 insertions(+)
>
> diff --git a/doc/guides/rel_notes/deprecation.rst b/doc/guides/rel_notes/deprecation.rst
> index f502f86..485aacb 100644
> --- a/doc/guides/rel_notes/deprecation.rst
> +++ b/doc/guides/rel_notes/deprecation.rst
> @@ -41,3 +41,10 @@ Deprecation Notices
> * The mempool functions for single/multi producer/consumer are deprecated and
> will be removed in 16.11.
> It is replaced by rte_mempool_generic_get/put functions.
> +
> +* In 16.11 ABI changes are planned: the ``rte_eth_dev`` structure will be
> +  extended with a new function pointer ``tx_pkt_prep`` allowing verification
> +  and processing of packet bursts to meet HW-specific requirements before
> +  transmit. New fields will also be added to the ``rte_eth_desc_lim``
> +  structure: ``nb_seg_max`` and ``nb_mtu_seg_max``, providing the maximum
> +  number of segments per TSO/non-TSO packet accepted by the device.