From: "Ananyev, Konstantin" <konstantin.ananyev@intel.com>
To: Jerin Jacob <jerin.jacob@caviumnetworks.com>,
"dev@dpdk.org" <dev@dpdk.org>
Cc: "Richardson, Bruce" <bruce.richardson@intel.com>,
"thomas.monjalon@6wind.com" <thomas.monjalon@6wind.com>
Subject: Re: [dpdk-dev] mbuff rearm_data aligmenet issue on non x86
Date: Thu, 12 May 2016 10:07:09 +0000 [thread overview]
Message-ID: <2601191342CEEE43887BDE71AB97725836B4FFE2@irsmsx105.ger.corp.intel.com> (raw)
In-Reply-To: <20160512091349.GA10395@localhost.localdomain>
Hi Jerin,
>
> Hi All,
>
> I would like to align the mbuf rearm_data field to an 8-byte boundary so that
> writes to mbuf->rearm_data through a uint64_t * are naturally aligned.
> I am not sure about IA, but some other architectures/implementations have
> an overhead on non-naturally-aligned stores.
>
> The proposed patch is something like the diff below, but I am open to any
> change that makes it fit all other architectures/platforms.
>
> Any thoughts ?
>
> ➜ [master] [dpdk-master] $ git diff
> diff --git a/lib/librte_mbuf/rte_mbuf.h b/lib/librte_mbuf/rte_mbuf.h
> index 529debb..5a917d0 100644
> --- a/lib/librte_mbuf/rte_mbuf.h
> +++ b/lib/librte_mbuf/rte_mbuf.h
> @@ -733,10 +733,8 @@ struct rte_mbuf {
> void *buf_addr; /**< Virtual address of segment buffer. */
> phys_addr_t buf_physaddr; /**< Physical address of segment buffer. */
>
> - uint16_t buf_len; /**< Length of segment buffer. */
> -
There is no need to move buf_len itself, I think; moving the rearm_data
marker to just before buf_len is enough.
Though how do you suggest we deal with the fact that, right now, we blindly
overwrite the whole 64 bits pointed to by rearm_data:
drivers/net/ixgbe/ixgbe_rxtx_vec.c:
/*
* Flush mbuf with pkt template.
* Data to be rearmed is 6 bytes long.
* Though, RX will overwrite ol_flags that are coming next
* anyway. So overwrite whole 8 bytes with one load:
* 6 bytes of rearm_data plus first 2 bytes of ol_flags.
*/
p0 = (uintptr_t)&mb0->rearm_data;
*(uint64_t *)p0 = rxq->mbuf_initializer;
?
If buf_len ends up inside those 64 bits, we can't do that anymore.
Are you suggesting something like:
uint64_t *p0, v0;
p0 = (uint64_t *)&mb0->rearm_data;
v0 = *p0 & REARM_MASK;
*p0 = v0 | rxq->mbuf_initializer;
?
If so, I wonder what the performance impact of that change would be.
Konstantin
> /* next 6 bytes are initialised on RX descriptor rearm */
> - MARKER8 rearm_data;
> + MARKER64 rearm_data;
> uint16_t data_off;
>
> /**
> @@ -754,6 +752,7 @@ struct rte_mbuf {
> uint8_t nb_segs; /**< Number of segments. */
> uint8_t port; /**< Input port. */
>
> + uint16_t buf_len; /**< Length of segment buffer. */
> uint64_t ol_flags; /**< Offload features. */
>
> /* remaining bytes are set on RX when pulling packet from
> * descriptor
>
> /Jerin
Thread overview: 5+ messages
2016-05-12 9:14 Jerin Jacob
2016-05-12 10:07 ` Ananyev, Konstantin [this message]
2016-05-12 12:17 ` Jerin Jacob
2016-05-12 13:14 ` Ananyev, Konstantin
2016-05-12 14:50 ` Jerin Jacob