Re: [dpdk-dev] Sync up status for Mellanox PMD barrier investigation
From: Phil Yang (Arm Technology China) @ 2019-08-22 2:30 UTC
To: Phil Yang (Arm Technology China), Honnappa Nagarahalli; +Cc: dev, nd, nd
Please disregard my last message. It was mistakenly sent to the wrong group.
Sorry about that.
Thanks,
Phil Yang
> -----Original Message-----
> From: dev <dev-bounces@dpdk.org> On Behalf Of Phil Yang (Arm
> Technology China)
> Sent: Wednesday, August 21, 2019 5:58 PM
> To: Honnappa Nagarahalli <Honnappa.Nagarahalli@arm.com>
> Cc: dev@dpdk.org; nd <nd@arm.com>
> Subject: Re: [dpdk-dev] Sync up status for Mellanox PMD barrier
> investigation
>
> Some updates on this thread.
>
> In the most critical data path of the mlx5 PMD, a number of rte_cio_r/wmb
> barriers ('dmb osh' on aarch64) are in use.
> C11 atomics are a good fit for replacing rte_smp_r/wmb, as they relax the
> data synchronization barriers between CPUs.
> However, the mlx5 PMD also needs to write data back to the HW, so it uses
> rte_cio_r/wmb in many places to order those accesses.
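>
> As a minimal sketch of the distinction (not mlx5 code; 'ring', 'idx',
> 'pkt', 'desc', 'db_rec', 'buf_iova' and 'new_ci' are hypothetical names),
> the CPU-to-CPU publish can become a C11 release store, while the
> CPU-to-device ordering keeps the coherent I/O barrier:
>
>     /* CPU-to-CPU: publish a filled slot to a consumer lcore.
>      * The release store replaces rte_smp_wmb() + plain store. */
>     ring->slots[idx] = pkt;
>     __atomic_store_n(&ring->tail, idx + 1, __ATOMIC_RELEASE);
>
>     /* CPU-to-device: the descriptor write must reach the coherent
>      * I/O domain before the doorbell write, so rte_cio_wmb() stays. */
>     desc->addr = rte_cpu_to_be_64(buf_iova);
>     rte_cio_wmb();
>     *db_rec = rte_cpu_to_be_32(new_ci);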
>
> Please check the details below. All comments are welcome. Thanks.
>
> //////////////////// Data path ///////////////////////////
> drivers/net/mlx5/mlx5_rxtx.c=950=mlx5_rx_err_handle(struct mlx5_rxq_data *rxq, uint8_t mbuf_prepare)
> drivers/net/mlx5/mlx5_rxtx.c:1002: rte_cio_wmb();
> drivers/net/mlx5/mlx5_rxtx.c:1004: rte_cio_wmb();
> drivers/net/mlx5/mlx5_rxtx.c:1010: rte_cio_wmb();
> drivers/net/mlx5/mlx5_rxtx.c=1272=mlx5_rx_burst(void *dpdk_rxq, struct rte_mbuf **pkts, uint16_t pkts_n)
> drivers/net/mlx5/mlx5_rxtx.c:1385: rte_cio_wmb();
> drivers/net/mlx5/mlx5_rxtx.c:1387: rte_cio_wmb();
> drivers/net/mlx5/mlx5_rxtx.c=1549=mlx5_rx_burst_mprq(void *dpdk_rxq, struct rte_mbuf **pkts, uint16_t pkts_n)
> drivers/net/mlx5/mlx5_rxtx.c:1741: rte_cio_wmb();
> drivers/net/mlx5/mlx5_rxtx.c:1745: rte_cio_wmb();
> drivers/net/mlx5/mlx5_rxtx_vec_neon.h=366=rxq_burst_v(struct mlx5_rxq_data *rxq, struct rte_mbuf **pkts, uint16_t pkts_n,
> drivers/net/mlx5/mlx5_rxtx_vec_neon.h:530: rte_cio_rmb();
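>
> For reference, the listed call sites all follow the same doorbell pattern;
> the sketch below is paraphrased from the tail of mlx5_rx_burst() and may
> not match the tree exactly:
>
>     /* Update the consumer indexes, then ring the doorbells. Each
>      * rte_cio_wmb() orders the preceding writes against the doorbell
>      * write within the coherent I/O (outer shareable) domain, which
>      * is sufficient for the NIC; a full rte_wmb() is not needed. */
>     rxq->rq_ci = rq_ci >> sges_n;
>     rte_cio_wmb();
>     *rxq->cq_db = rte_cpu_to_be_32(rxq->cq_ci);
>     rte_cio_wmb();
>     *rxq->rq_db = rte_cpu_to_be_32(rxq->rq_ci);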
>
> Commit messages:
> net/mlx5: cleanup memory barriers: mlx5_rx_burst
> https://git.dpdk.org/dpdk/commit/?id=9afa3f74658afc0e21fbe5c3884c55a21ff49299
>
> net/mlx5: add Multi-Packet Rx support: mlx5_rx_burst_mprq
> https://git.dpdk.org/dpdk/commit/?id=7d6bf6b866b8c25ec06539b3eeed1db4f785577c
>
> net/mlx5: use coherent I/O memory barrier
> https://git.dpdk.org/dpdk/commit/drivers/net/mlx5/mlx5_rxtx.c?id=0cfdc1808de82357a924a479dc3f89de88cd91c2
>
> net/mlx5: extend Rx completion with error handling
> https://git.dpdk.org/dpdk/commit/drivers/net/mlx5/mlx5_rxtx.c?id=88c0733535d6a7ce79045d4d57a1d78d904067c8
>
> net/mlx5: fix synchronization on polling Rx completions
> https://git.dpdk.org/dpdk/commit/?id=1742c2d9fab07e66209f2d14e7daa50829fc4423
>
>
> Thanks,
> Phil Yang
>
> From: Phil Yang (Arm Technology China)
> Sent: Thursday, August 15, 2019 6:35 PM
> To: Honnappa Nagarahalli <Honnappa.Nagarahalli@arm.com>
> Subject: Sync up status for Mellanox PMD barrier investigation
>
> Hi Honnappa,
>
> I have checked all the barriers in the mlx5 PMD data path. In my
> understanding, it uses the barriers correctly (DMB is used to synchronize
> memory data between CPUs).
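>
> For context, on aarch64 these DPDK barrier macros expand roughly as
> follows (as I read rte_atomic_64.h; worth double-checking against the
> exact tree):
>
>     rte_smp_wmb()  ->  dmb ishst   /* inner shareable: CPU-to-CPU    */
>     rte_smp_rmb()  ->  dmb ishld
>     rte_cio_wmb()  ->  dmb oshst   /* outer shareable: CPU-to-device */
>     rte_cio_rmb()  ->  dmb oshld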
> The attached list shows the positions of these barriers.
> I just want to sync up with you on the status. Do you have any ideas or
> suggestions about which part we should start optimizing?
>
> Best Regards,
> Phil Yang