From: Jerin Jacob <jerinjacobk@gmail.com>
To: Shahed Shaikh <shshaikh@marvell.com>
Cc: dev@dpdk.org, Rasesh Mody <rmody@marvell.com>,
Jerin Jacob <jerinj@marvell.com>,
GR-Everest-DPDK-Dev@marvell.com, ferruh.yigit@intel.com
Subject: Re: [dpdk-dev] [PATCH v2 0/5] net/qede: fixes and enhancement
Date: Fri, 13 Sep 2019 11:51:52 +0530
Message-ID: <CALBAE1MikXMXhRf_Cii5A+Un6O3NZdWojVEgP4dafkvCwAQ3rg@mail.gmail.com>
In-Reply-To: <20190912152416.2990-1-shshaikh@marvell.com>
On Thu, Sep 12, 2019 at 8:54 PM Shahed Shaikh <shshaikh@marvell.com> wrote:
>
> The first four patches fix the ovs-dpdk failure seen with 100Gb
> NICs [1].
> The fifth patch adds support for the drop action in rte_flow
> (a usage sketch is appended after this message).
>
> [1]
> As per the HW design of 100Gb mode, the device internally uses two
> engines (eng0 and eng1), and both engines need to be configured
> symmetrically. Based on this requirement, the driver was designed to
> allow the user to allocate only an even number of queues and to split
> those queues equally across both engines.
>
> This approach limits the number of queues that can be allocated:
> the user cannot configure an odd number of queues in 100Gb mode.
> OVS configures a DPDK port with 1 rxq and 1 txq, which causes
> initialization of the qede port to fail.
>
> This patch series changes the queue allocation method for 100Gb
> devices, removing the above limitation and allowing the user to
> configure an odd number of queues (see the sketch after the quoted
> patch list below).
>
> The fix is split into four logical patches:
> - The first patch refactors the Rx and Tx queue setup code to lay a
>   foundation for the actual fix.
> - The second patch implements the new approach that fixes the issue.
> - The third patch fixes the RSS configuration w.r.t. the new approach
>   (an application-facing RETA sketch follows the reply below).
> - The fourth patch fixes the statistics code impacted by the new
>   approach.
>
> Changes in v2:
> Fixed a compilation failure in patch 2 with the clang compiler.
>
> Shahed Shaikh (5):
> net/qede: refactor Rx and Tx queue setup
> net/qede: fix ovs-dpdk failure when using odd number of queues on
> 100Gb mode
> net/qede: fix RSS configuration as per new 100Gb queue allocation
> method
> net/qede: fix stats flow as per new 100Gb queue allocation method
> net/qede: implement rte_flow drop action
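
For a concrete picture of the allocation change described above, here
is a minimal stand-alone sketch. It is not code from the qede driver:
the function names are hypothetical, and the round-robin mapping in
new_engine_of() is an assumption about the new approach, used only to
illustrate why an odd queue count (such as OVS's single rxq/txq) can
now succeed where the old even-split scheme failed.

/*
 * Illustrative sketch only -- not the actual qede driver code.
 * Function names and the round-robin mapping are assumptions.
 */
#include <stdio.h>

#define NUM_ENGINES 2	/* eng0 and eng1 in 100Gb mode */

/* Old scheme: only even counts allowed; the range is split in half. */
static int old_engine_of(unsigned int qid, unsigned int nb_queues)
{
	if (nb_queues % NUM_ENGINES != 0)
		return -1;	/* odd count: port init fails */
	return qid < nb_queues / NUM_ENGINES ? 0 : 1;
}

/* Assumed new scheme: alternate queues across engines, any count. */
static int new_engine_of(unsigned int qid)
{
	return qid % NUM_ENGINES;
}

int main(void)
{
	unsigned int nb_queues = 1;	/* what OVS configures by default */
	unsigned int q;

	for (q = 0; q < nb_queues; q++)
		printf("queue %u: old engine %d, new engine %d\n",
		       q, old_engine_of(q, nb_queues), new_engine_of(q));
	return 0;
}

With nb_queues = 1, old_engine_of() reports failure (-1) while
new_engine_of() places the queue on eng0, which is the behavior change
the series aims for.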
Fixed the ./devtools/check-git-log.sh issues; series applied to
dpdk-next-net-mrvl/master. Thanks.
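
The RSS fix in patch 3 is internal to the PMD and is not reproduced
here. For context, the application-facing side looks roughly like the
sketch below: once odd queue counts are allowed, the RSS redirection
table simply spreads over however many queues were configured. This is
a hedged example against the public ethdev API (it assumes a DPDK
release where rte_eth_dev_info_get() returns int); spread_rss_reta()
is a hypothetical helper, not part of the patch.

#include <errno.h>
#include <string.h>
#include <rte_ethdev.h>

/* Spread the RSS redirection table round-robin over nb_queues.
 * Works for odd queue counts as well as even ones. */
static int spread_rss_reta(uint16_t port_id, uint16_t nb_queues)
{
	/* 512 entries covers the common RETA sizes. */
	struct rte_eth_rss_reta_entry64 conf[512 / RTE_RETA_GROUP_SIZE];
	struct rte_eth_dev_info info;
	uint16_t i;
	int ret;

	ret = rte_eth_dev_info_get(port_id, &info);
	if (ret != 0)
		return ret;
	if (info.reta_size > RTE_DIM(conf) * RTE_RETA_GROUP_SIZE)
		return -EINVAL;

	memset(conf, 0, sizeof(conf));
	for (i = 0; i < info.reta_size; i++) {
		conf[i / RTE_RETA_GROUP_SIZE].mask |=
			1ULL << (i % RTE_RETA_GROUP_SIZE);
		conf[i / RTE_RETA_GROUP_SIZE].reta[i % RTE_RETA_GROUP_SIZE] =
			i % nb_queues;
	}
	return rte_eth_dev_rss_reta_update(port_id, conf, info.reta_size);
}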
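Patch 5's drop action is exercised through the generic rte_flow API.
The sketch below is a hedged usage example against the public API, not
code from the patch; install_drop_rule() is a hypothetical helper, and
matching on an IPv4 source address is just one arbitrary pattern the
new action could be attached to.

#include <rte_flow.h>
#include <rte_byteorder.h>

/* Install a rule dropping ingress IPv4 traffic from src_ip on port_id. */
static struct rte_flow *
install_drop_rule(uint16_t port_id, rte_be32_t src_ip,
		  struct rte_flow_error *err)
{
	struct rte_flow_attr attr = { .ingress = 1 };
	struct rte_flow_item_ipv4 spec = { .hdr.src_addr = src_ip };
	struct rte_flow_item_ipv4 mask = {
		.hdr.src_addr = RTE_BE32(0xffffffff),
	};
	struct rte_flow_item pattern[] = {
		{ .type = RTE_FLOW_ITEM_TYPE_ETH },
		{ .type = RTE_FLOW_ITEM_TYPE_IPV4,
		  .spec = &spec, .mask = &mask },
		{ .type = RTE_FLOW_ITEM_TYPE_END },
	};
	struct rte_flow_action actions[] = {
		{ .type = RTE_FLOW_ACTION_TYPE_DROP },
		{ .type = RTE_FLOW_ACTION_TYPE_END },
	};

	/* Ask the PMD whether it supports the rule, then create it. */
	if (rte_flow_validate(port_id, &attr, pattern, actions, err) != 0)
		return NULL;
	return rte_flow_create(port_id, &attr, pattern, actions, err);
}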