From: Xiaoyun wang <cloud.wangxiaoyun@huawei.com>
To: <dev@dpdk.org>
Cc: <ferruh.yigit@intel.com>, <bluca@debian.org>,
<luoxianjun@huawei.com>, <luoxingyu@huawei.com>,
<zhouguoyang@huawei.com>, <shahar.belkar@huawei.com>,
<yin.yinshi@huawei.com>, <david.yangxiaoliang@huawei.com>,
<zhaohui8@huawei.com>, <zhengjingzhou@huawei.com>,
Xiaoyun wang <cloud.wangxiaoyun@huawei.com>
Subject: [dpdk-dev] [PATCH v5 2/4] net/hinic: add jumbo frame offload flag
Date: Sat, 9 May 2020 12:04:14 +0800 [thread overview]
Message-ID: <87114d9757b4b9f79b7e654b199728952789eeca.1588991713.git.cloud.wangxiaoyun@huawei.com> (raw)
In-Reply-To: <cover.1588991713.git.cloud.wangxiaoyun@huawei.com>
When the MTU is updated through hinic_dev_set_mtu(), set the DEV_RX_OFFLOAD_JUMBO_FRAME flag in rxmode.offloads if the resulting frame size exceeds RTE_ETHER_MAX_LEN, and clear it otherwise; also update max_rx_pkt_len from the same frame size.
Signed-off-by: Xiaoyun wang <cloud.wangxiaoyun@huawei.com>
---
drivers/net/hinic/hinic_pmd_ethdev.c | 13 +++++++++++--
1 file changed, 11 insertions(+), 2 deletions(-)
diff --git a/drivers/net/hinic/hinic_pmd_ethdev.c b/drivers/net/hinic/hinic_pmd_ethdev.c
index 5fcff81..85e7c3c 100644
--- a/drivers/net/hinic/hinic_pmd_ethdev.c
+++ b/drivers/net/hinic/hinic_pmd_ethdev.c
@@ -1529,8 +1529,9 @@ static void hinic_deinit_mac_addr(struct rte_eth_dev *eth_dev)
 
 static int hinic_dev_set_mtu(struct rte_eth_dev *dev, uint16_t mtu)
 {
-	int ret = 0;
 	struct hinic_nic_dev *nic_dev = HINIC_ETH_DEV_TO_PRIVATE_NIC_DEV(dev);
+	uint32_t frame_size;
+	int ret = 0;
 
 	PMD_DRV_LOG(INFO, "Set port mtu, port_id: %d, mtu: %d, max_pkt_len: %d",
 		    dev->data->port_id, mtu, HINIC_MTU_TO_PKTLEN(mtu));
@@ -1548,7 +1549,15 @@ static int hinic_dev_set_mtu(struct rte_eth_dev *dev, uint16_t mtu)
 	}
 
 	/* update max frame size */
-	dev->data->dev_conf.rxmode.max_rx_pkt_len = HINIC_MTU_TO_PKTLEN(mtu);
+	frame_size = HINIC_MTU_TO_PKTLEN(mtu);
+	if (frame_size > RTE_ETHER_MAX_LEN)
+		dev->data->dev_conf.rxmode.offloads |=
+			DEV_RX_OFFLOAD_JUMBO_FRAME;
+	else
+		dev->data->dev_conf.rxmode.offloads &=
+			~DEV_RX_OFFLOAD_JUMBO_FRAME;
+
+	dev->data->dev_conf.rxmode.max_rx_pkt_len = frame_size;
 	nic_dev->mtu_size = mtu;
 
 	return ret;
--
1.8.3.1
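
For context, here is a minimal application-side sketch (not part of the patch) of how this path is exercised: rte_eth_dev_set_mtu() dispatches into hinic_dev_set_mtu(), which after this change toggles DEV_RX_OFFLOAD_JUMBO_FRAME based on the resulting frame length. The helper name set_mtu_and_check_jumbo() is hypothetical, and the frame-size arithmetic assumes HINIC_MTU_TO_PKTLEN() adds the Ethernet header and CRC to the MTU.

#include <stdio.h>
#include <stdint.h>

#include <rte_ethdev.h>
#include <rte_ether.h>

/* Hypothetical helper: update the MTU on a hinic port and report whether
 * the PMD is expected to have enabled the jumbo-frame Rx offload after
 * this patch. Error handling is trimmed for brevity.
 */
static int set_mtu_and_check_jumbo(uint16_t port_id, uint16_t mtu)
{
	uint32_t frame_size;
	int ret;

	/* rte_eth_dev_set_mtu() reaches hinic_dev_set_mtu() for this PMD. */
	ret = rte_eth_dev_set_mtu(port_id, mtu);
	if (ret != 0)
		return ret;

	/* Mirror of the driver's new decision: the frame length is the MTU
	 * plus L2 overhead (assumed to be what HINIC_MTU_TO_PKTLEN()
	 * computes), and anything above RTE_ETHER_MAX_LEN (1518) counts as
	 * a jumbo frame.
	 */
	frame_size = (uint32_t)mtu + RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN;
	printf("port %u: mtu %u -> frame %u, jumbo offload %s\n",
	       port_id, mtu, frame_size,
	       frame_size > RTE_ETHER_MAX_LEN ? "set" : "cleared");

	return 0;
}

Calling this with, say, mtu = 9000 should leave DEV_RX_OFFLOAD_JUMBO_FRAME set in rxmode.offloads, while mtu = 1500 clears it, matching the branch added above.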
Thread overview: 9+ messages
2020-05-09 4:04 [dpdk-dev] [PATCH v5 0/4] Support ipv6 flow rules Xiaoyun wang
2020-05-09 4:04 ` [dpdk-dev] [PATCH v5 1/4] net/hinic: increase Tx/Rx queues non-null judgment Xiaoyun wang
2020-05-11 19:09 ` Ferruh Yigit
2020-05-09 4:04 ` Xiaoyun wang [this message]
2020-05-09 4:04 ` [dpdk-dev] [PATCH v5 3/4] net/hinic: increase judgment for support NIC or not Xiaoyun wang
2020-05-09 4:04 ` [dpdk-dev] [PATCH v5 4/4] net/hinic/base: support ipv6 flow rules Xiaoyun wang
2020-05-11 20:12 ` Ferruh Yigit
2020-05-11 19:04 ` [dpdk-dev] [PATCH v5 0/4] Support " Ferruh Yigit
2020-05-11 20:20 ` Ferruh Yigit