DPDK usage discussions
* [dpdk-users] Consulting the usage of rte_mb in memif rx/tx path
@ 2019-08-02 11:09 Phil Yang (Arm Technology China)
From: Phil Yang (Arm Technology China) @ 2019-08-02 11:09 UTC (permalink / raw)
  To: jgrajcia, users
  Cc: damarion, Honnappa Nagarahalli, Gavin Hu (Arm Technology China), nd

Hi Jakub,

I am trying to understand the rte_mb() in eth_memif_rx/tx functions. What's the purpose of this barrier?

In my understanding, the RX and TX processing runs on the same core (e.g. in the testpmd forwarding engine) on each side (Master/Slave).
The x86 memory model guarantees that stores become visible in program order, and that loads are also performed in program order. There is also a data dependency between the operations before rte_mb() and the ring->head/tail update after it. So why do we need this barrier?

BTW, on AArch64, rte_mb() expands to a 'DSB SY', which stalls the pipeline. After removing this barrier, testpmd with the memif vPMD showed a 3.5% performance improvement.

Best Regards,
Phil Yang


