From: Jerin Jacob <jerinjacobk@gmail.com>
To: Nithin Dabilpuram <ndabilpuram@marvell.com>,
Ferruh Yigit <ferruh.yigit@intel.com>
Cc: Jerin Jacob <jerinj@marvell.com>,
Kiran Kumar K <kirankumark@marvell.com>, dpdk-dev <dev@dpdk.org>,
Andrew Pinski <apinski@marvell.com>
Subject: Re: [dpdk-dev] [PATCH] net/octeontx2: perf improvement to rx vector func
Date: Mon, 13 Jan 2020 13:10:58 +0530
Message-ID: <CALBAE1O0MH3tbo60W9qtLxAq8x66Q70Lr4yBjeMZqBO6BvRMFw@mail.gmail.com>
In-Reply-To: <20191210120844.50017-1-ndabilpuram@marvell.com>
On Tue, Dec 10, 2019 at 5:39 PM Nithin Dabilpuram
<ndabilpuram@marvell.com> wrote:
>
> From: Jerin Jacob <jerinj@marvell.com>
>
> Use scalar loads instead of vector loads for fields
> that don't need any vector operations.
>
> Signed-off-by: Andrew Pinski <apinski@marvell.com>
> Signed-off-by: Jerin Jacob <jerinj@marvell.com>
> Signed-off-by: Nithin Dabilpuram <ndabilpuram@marvell.com>
Applied to dpdk-next-net-mrvl/master. Thanks
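
For anyone skimming the diff below: the change replaces one 128-bit
vld1q_u64() per CQE plus per-lane extraction with two plain 64-bit loads.
A minimal standalone sketch of the two patterns (the helper names and the
cqe argument are made up for illustration, not the driver's own code):

#include <arm_neon.h>
#include <stdint.h>

/* Old pattern: one 128-bit vector load, then pull each word out of a
 * NEON lane before it can be used as a scalar. */
static inline void
cqe_words_vec(const uint8_t *cqe, uint64_t *w0, uint64_t *w1)
{
	uint64x2_t v = vld1q_u64((const uint64_t *)cqe);
	*w0 = vgetq_lane_u64(v, 0);
	*w1 = vgetq_lane_u64(v, 1);
}

/* New pattern: two scalar loads. The words land directly in
 * general-purpose registers, which is what the later ptype/olflags
 * lookups consume anyway, so the NEON-to-GPR moves go away. */
static inline void
cqe_words_scalar(const uint8_t *cqe, uint64_t *w0, uint64_t *w1)
{
	*w0 = ((const uint64_t *)cqe)[0];
	*w1 = ((const uint64_t *)cqe)[1];
}

In the RSS hunk the vgetq_lane_u32() step disappears for the same reason:
passing the 64-bit word straight to vsetq_lane_u32() truncates it to the
low 32 bits, which is the same value lane 0 held on little-endian.
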
> ---
> drivers/net/octeontx2/otx2_rx.c | 48 ++++++++++++++++++++---------------------
> 1 file changed, 24 insertions(+), 24 deletions(-)
>
> diff --git a/drivers/net/octeontx2/otx2_rx.c b/drivers/net/octeontx2/otx2_rx.c
> index 48565db..db4a221 100644
> --- a/drivers/net/octeontx2/otx2_rx.c
> +++ b/drivers/net/octeontx2/otx2_rx.c
> @@ -184,17 +184,21 @@ nix_recv_pkts_vector(void *rx_queue, struct rte_mbuf **rx_pkts,
> f3 = vqtbl1q_u8(cq3_w8, shuf_msk);
>
> /* Load CQE word0 and word 1 */
> - uint64x2_t cq0_w0 = vld1q_u64((uint64_t *)(cq0 + CQE_SZ(0)));
> - uint64x2_t cq1_w0 = vld1q_u64((uint64_t *)(cq0 + CQE_SZ(1)));
> - uint64x2_t cq2_w0 = vld1q_u64((uint64_t *)(cq0 + CQE_SZ(2)));
> - uint64x2_t cq3_w0 = vld1q_u64((uint64_t *)(cq0 + CQE_SZ(3)));
> + uint64_t cq0_w0 = ((uint64_t *)(cq0 + CQE_SZ(0)))[0];
> + uint64_t cq0_w1 = ((uint64_t *)(cq0 + CQE_SZ(0)))[1];
> + uint64_t cq1_w0 = ((uint64_t *)(cq0 + CQE_SZ(1)))[0];
> + uint64_t cq1_w1 = ((uint64_t *)(cq0 + CQE_SZ(1)))[1];
> + uint64_t cq2_w0 = ((uint64_t *)(cq0 + CQE_SZ(2)))[0];
> + uint64_t cq2_w1 = ((uint64_t *)(cq0 + CQE_SZ(2)))[1];
> + uint64_t cq3_w0 = ((uint64_t *)(cq0 + CQE_SZ(3)))[0];
> + uint64_t cq3_w1 = ((uint64_t *)(cq0 + CQE_SZ(3)))[1];
>
> if (flags & NIX_RX_OFFLOAD_RSS_F) {
> /* Fill rss in the rx_descriptor_fields1 */
> - f0 = vsetq_lane_u32(vgetq_lane_u32(cq0_w0, 0), f0, 3);
> - f1 = vsetq_lane_u32(vgetq_lane_u32(cq1_w0, 0), f1, 3);
> - f2 = vsetq_lane_u32(vgetq_lane_u32(cq2_w0, 0), f2, 3);
> - f3 = vsetq_lane_u32(vgetq_lane_u32(cq3_w0, 0), f3, 3);
> + f0 = vsetq_lane_u32(cq0_w0, f0, 3);
> + f1 = vsetq_lane_u32(cq1_w0, f1, 3);
> + f2 = vsetq_lane_u32(cq2_w0, f2, 3);
> + f3 = vsetq_lane_u32(cq3_w0, f3, 3);
> ol_flags0 = PKT_RX_RSS_HASH;
> ol_flags1 = PKT_RX_RSS_HASH;
> ol_flags2 = PKT_RX_RSS_HASH;
> @@ -206,25 +210,21 @@ nix_recv_pkts_vector(void *rx_queue, struct rte_mbuf **rx_pkts,
>
> if (flags & NIX_RX_OFFLOAD_PTYPE_F) {
> /* Fill packet_type in the rx_descriptor_fields1 */
> - f0 = vsetq_lane_u32(nix_ptype_get(lookup_mem,
> - vgetq_lane_u64(cq0_w0, 1)), f0, 0);
> - f1 = vsetq_lane_u32(nix_ptype_get(lookup_mem,
> - vgetq_lane_u64(cq1_w0, 1)), f1, 0);
> - f2 = vsetq_lane_u32(nix_ptype_get(lookup_mem,
> - vgetq_lane_u64(cq2_w0, 1)), f2, 0);
> - f3 = vsetq_lane_u32(nix_ptype_get(lookup_mem,
> - vgetq_lane_u64(cq3_w0, 1)), f3, 0);
> + f0 = vsetq_lane_u32(nix_ptype_get(lookup_mem, cq0_w1),
> + f0, 0);
> + f1 = vsetq_lane_u32(nix_ptype_get(lookup_mem, cq1_w1),
> + f1, 0);
> + f2 = vsetq_lane_u32(nix_ptype_get(lookup_mem, cq2_w1),
> + f2, 0);
> + f3 = vsetq_lane_u32(nix_ptype_get(lookup_mem, cq3_w1),
> + f3, 0);
> }
>
> if (flags & NIX_RX_OFFLOAD_CHECKSUM_F) {
> - ol_flags0 |= nix_rx_olflags_get(lookup_mem,
> - vgetq_lane_u64(cq0_w0, 1));
> - ol_flags1 |= nix_rx_olflags_get(lookup_mem,
> - vgetq_lane_u64(cq1_w0, 1));
> - ol_flags2 |= nix_rx_olflags_get(lookup_mem,
> - vgetq_lane_u64(cq2_w0, 1));
> - ol_flags3 |= nix_rx_olflags_get(lookup_mem,
> - vgetq_lane_u64(cq3_w0, 1));
> + ol_flags0 |= nix_rx_olflags_get(lookup_mem, cq0_w1);
> + ol_flags1 |= nix_rx_olflags_get(lookup_mem, cq1_w1);
> + ol_flags2 |= nix_rx_olflags_get(lookup_mem, cq2_w1);
> + ol_flags3 |= nix_rx_olflags_get(lookup_mem, cq3_w1);
> }
>
> if (flags & NIX_RX_OFFLOAD_VLAN_STRIP_F) {
> --
> 2.8.4
>