DPDK patches and discussions
From: Shahaf Shuler <shahafs@mellanox.com>
To: adrien.mazarguil@6wind.com, nelio.laranjeiro@6wind.com,
	thomas.monjalon@6wind.com, jingjing.wu@intel.com
Cc: dev@dpdk.org
Subject: [dpdk-dev] [PATCH 1/4] ethdev: add Tx offload limitations
Date: Wed, 22 Feb 2017 18:09:57 +0200	[thread overview]
Message-ID: <1487779800-46491-2-git-send-email-shahafs@mellanox.com> (raw)
In-Reply-To: <1487779800-46491-1-git-send-email-shahafs@mellanox.com>

Many Tx offloads are performed by hardware, and each such offload
has its own limitations.
This commit adds the option to query Tx offload limitations so that
applications can use the offloads correctly and avoid bugs.
The PMD should fill in these limitations when device info is queried.

Signed-off-by: Shahaf Shuler <shahafs@mellanox.com>
---
 lib/librte_ether/rte_ethdev.h | 13 +++++++++++++
 1 file changed, 13 insertions(+)

diff --git a/lib/librte_ether/rte_ethdev.h b/lib/librte_ether/rte_ethdev.h
index 97f3e2d..3ab8568 100644
--- a/lib/librte_ether/rte_ethdev.h
+++ b/lib/librte_ether/rte_ethdev.h
@@ -728,6 +728,17 @@ struct rte_eth_desc_lim {
 	uint16_t nb_mtu_seg_max;
 };
 
+struct rte_eth_tx_offload_lim {
+	/**
+	 * Max allowed size of network headers (L2+L3+L4) for TSO offload.
+	 */
+	uint32_t max_tso_headers_sz;
+	/**
+	 * Max allowed size of TCP payload for TSO offload.
+	 */
+	uint32_t max_tso_payload_sz;
+};
+
 /**
  * This enum indicates the flow control mode
  */
@@ -920,6 +931,8 @@ struct rte_eth_dev_info {
 	uint16_t max_vmdq_pools; /**< Maximum number of VMDq pools. */
 	uint32_t rx_offload_capa; /**< Device RX offload capabilities. */
 	uint32_t tx_offload_capa; /**< Device TX offload capabilities. */
+	struct rte_eth_tx_offload_lim tx_off_lim;
+	/**< Device TX offloads limits. */
 	uint16_t reta_size;
 	/**< Device redirection table size, the total number of entries. */
 	uint8_t hash_key_size; /**< Hash key size in bytes */
-- 
1.8.3.1


Thread overview: 14+ messages
2017-02-22 16:09 [dpdk-dev] [PATCH 0/4] net/mlx5 add TSO support Shahaf Shuler
2017-02-22 16:09 ` Shahaf Shuler [this message]
2017-02-22 16:09 ` [dpdk-dev] [PATCH 2/4] ethdev: add TSO disable flag Shahaf Shuler
2017-02-22 16:09 ` [dpdk-dev] [PATCH 3/4] app/testpmd: add TSO disable to test options Shahaf Shuler
2017-02-22 16:10 ` [dpdk-dev] [PATCH 4/4] net/mlx5: add hardware TSO support Shahaf Shuler
2017-03-01 11:11 ` [dpdk-dev] [PATCH v2 0/1] net/mlx5: add " Shahaf Shuler
2017-03-01 11:11   ` [dpdk-dev] [PATCH v2 1/1] net/mlx5: add hardware " Shahaf Shuler
2017-03-01 14:33     ` Nélio Laranjeiro
2017-03-02  9:01   ` [dpdk-dev] [PATCH v3 " Shahaf Shuler
2017-03-02  9:15     ` Nélio Laranjeiro
2017-03-06  9:32       ` Ferruh Yigit
2017-03-06  8:50     ` Ferruh Yigit
2017-03-06  9:31       ` Ferruh Yigit
2017-03-06 11:03         ` Shahaf Shuler
