* [dpdk-users] RDMA over DPDK
@ 2020-03-01 11:33 Xueming(Steven) Li
From: Xueming(Steven) Li @ 2020-03-01 11:33 UTC
To: users
With a quick hack on the mlx5 PMD, it's possible to send RDMA operations with few changes. Performance results between two back-to-back connected 25Gb NICs:
- Continuous 1MB RDMA writes to 256 different memory targets on the remote peer: line rate, 2.6 Mpps, MTU 1024
- Continuous 8B RDMA writes to the remote peer: line rate, 29.4 Mpps, RoCEv2 (74B headers + 8B payload)
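(A rough sanity check on the 8B case, assuming the 74B figure is the RoCEv2 header overhead of Ethernet 14B + IPv4 20B + UDP 8B + BTH 12B + RETH 16B + ICRC 4B: adding 4B FCS and 20B preamble/IFG, each write costs about 106B of wire time, and 29.4 Mpps * 106B * 8 bits/byte is roughly 24.9 Gbps, which matches 25Gb line rate.)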
Currently, DPDK usage focuses on networking scenarios: OVS, firewalls, load balancing, and so on.
With HW acceleration, RDMA is an application-level API with more capability than sockets: up to 2GB per transfer, lower latency, and atomic operations. RDMA over DPDK would extend DPDK's stack bypass from network appliances to application servers - another huge market, I believe.
Why RDMA over DPDK:
- performance: DPDK-style batch/burst transfers, fewer i-cache misses
- easy to prefetch - no linked lists
- reuses the mbuf data structure with some modification (sketched below)
- able to send RDMA requests alongside Ethernet mbuf data
- virtualization support: with rte_flow, HW encap/decap can be applied to VF RDMA traffic
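To make the mbuf-reuse idea concrete, here is a rough sketch of what a burst-style RDMA write call could look like. Everything named rte_rdma_* below is hypothetical and invented only for illustration - no such API exists in DPDK today - while the rte_pktmbuf_* calls are the real mbuf API; the placeholder body just frees the mbufs so the sketch compiles.

/*
 * Hypothetical sketch: the rte_rdma_* names are invented to illustrate how
 * RDMA write requests could reuse struct rte_mbuf and a tx_burst-style call.
 * The rte_pktmbuf_* calls are the real DPDK mbuf API.
 */
#include <string.h>
#include <rte_mbuf.h>
#include <rte_mempool.h>

/* Hypothetical per-request RDMA metadata carried next to each mbuf. */
struct rte_rdma_wr {
        uint64_t remote_addr;   /* target address on the remote peer */
        uint32_t rkey;          /* remote memory key */
        uint32_t qp_id;         /* queue pair the PMD maps the request to */
};

/* Placeholder so the sketch compiles; a real PMD would post the writes. */
static uint16_t
rte_rdma_write_burst(uint16_t port_id, uint16_t queue_id,
                     struct rte_mbuf **pkts, struct rte_rdma_wr *wrs,
                     uint16_t nb_pkts)
{
        uint16_t i;

        (void)port_id; (void)queue_id; (void)wrs;
        for (i = 0; i < nb_pkts; i++)
                rte_pktmbuf_free(pkts[i]);
        return nb_pkts;
}

/* Post up to 32 small RDMA writes in one burst, DPDK style. */
static uint16_t
post_rdma_writes(uint16_t port_id, struct rte_mempool *mp,
                 uint64_t remote_base, uint32_t rkey, uint16_t n)
{
        struct rte_mbuf *pkts[32];
        struct rte_rdma_wr wrs[32];
        uint16_t i;

        if (n > 32)
                n = 32;
        for (i = 0; i < n; i++) {
                pkts[i] = rte_pktmbuf_alloc(mp);
                if (pkts[i] == NULL) {
                        n = i;
                        break;
                }
                /* Payload lives in the mbuf data room, as for an Ethernet frame. */
                char *payload = rte_pktmbuf_append(pkts[i], 8);
                if (payload == NULL) {
                        rte_pktmbuf_free(pkts[i]);
                        n = i;
                        break;
                }
                memset(payload, 0xab, 8);
                /* RDMA-specific metadata travels beside the mbuf. */
                wrs[i].remote_addr = remote_base + (uint64_t)i * 8;
                wrs[i].rkey = rkey;
                wrs[i].qp_id = 0;
        }
        return rte_rdma_write_burst(port_id, 0, pkts, wrs, n);
}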
Potential applications:
- RDMA <-> RDMA applications in DC/HPC
- Eth <-> RDMA applications (see the rte_flow sketch below)
- device power saving: if PCs/mobile devices supported RDMA, most networking transfers while playing video or downloading files would happen with little CPU involvement
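On the Eth <-> RDMA and virtualization points, the rte_flow part is possible with the existing API. Below is a minimal sketch (real rte_flow calls, DPDK 19.11-era type names; the port and queue numbers are arbitrary examples) that classifies RoCEv2 traffic by its well-known UDP port 4791 and steers it to a dedicated RX queue; for VF traffic, a tunnel encap/decap action such as RTE_FLOW_ACTION_TYPE_RAW_ENCAP could be appended to the same action list.

/*
 * Minimal example: match RoCEv2 (UDP dst port 4791) on ingress and steer it
 * to a dedicated RX queue using the standard rte_flow API.
 */
#include <rte_flow.h>
#include <rte_byteorder.h>

static struct rte_flow *
steer_rocev2_to_queue(uint16_t port_id, uint16_t queue_idx,
                      struct rte_flow_error *err)
{
        struct rte_flow_attr attr = { .ingress = 1 };

        struct rte_flow_item_udp udp_spec = {
                .hdr.dst_port = RTE_BE16(4791),   /* RoCEv2 well-known port */
        };
        struct rte_flow_item_udp udp_mask = {
                .hdr.dst_port = RTE_BE16(0xffff),
        };
        struct rte_flow_item pattern[] = {
                { .type = RTE_FLOW_ITEM_TYPE_ETH },
                { .type = RTE_FLOW_ITEM_TYPE_IPV4 },
                { .type = RTE_FLOW_ITEM_TYPE_UDP,
                  .spec = &udp_spec, .mask = &udp_mask },
                { .type = RTE_FLOW_ITEM_TYPE_END },
        };

        struct rte_flow_action_queue queue = { .index = queue_idx };
        struct rte_flow_action actions[] = {
                { .type = RTE_FLOW_ACTION_TYPE_QUEUE, .conf = &queue },
                { .type = RTE_FLOW_ACTION_TYPE_END },
        };

        /* Validate first, then create the rule on the given port. */
        if (rte_flow_validate(port_id, &attr, pattern, actions, err) != 0)
                return NULL;
        return rte_flow_create(port_id, &attr, pattern, actions, err);
}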
Interested?
Xueming Li
* Re: [dpdk-users] RDMA over DPDK
From: Cliff Burdick @ 2020-03-01 15:31 UTC
To: Xueming(Steven) Li; +Cc: users
If you're interested in this for GPUs, you should check out cuVNF here:
https://developer.nvidia.com/aerial-sdk
On Sun, Mar 1, 2020 at 3:33 AM Xueming(Steven) Li <xuemingl@mellanox.com>
wrote:
> With a quick hack on the mlx5 PMD, it's possible to send RDMA operations
> with few changes. Performance results between two back-to-back connected
> 25Gb NICs:
>
> - Continuous 1MB RDMA writes to 256 different memory targets on the remote
> peer: line rate, 2.6 Mpps, MTU 1024
> - Continuous 8B RDMA writes to the remote peer: line rate, 29.4 Mpps,
> RoCEv2 (74B headers + 8B payload)
>
> Currently, DPDK usage focuses on networking scenarios: OVS, firewalls, load
> balancing, and so on.
> With HW acceleration, RDMA is an application-level API with more capability
> than sockets: up to 2GB per transfer, lower latency, and atomic operations.
> RDMA over DPDK would extend DPDK's stack bypass from network appliances to
> application servers - another huge market, I believe.
>
> Why RDMA over DPDK:
>
> - performance: DPDK-style batch/burst transfers, fewer i-cache misses
> - easy to prefetch - no linked lists
> - reuses the mbuf data structure with some modification
> - able to send RDMA requests alongside Ethernet mbuf data
> - virtualization support: with rte_flow, HW encap/decap can be applied to
> VF RDMA traffic
>
> Potential applications:
>
> - RDMA <-> RDMA applications in DC/HPC
> - Eth <-> RDMA applications
> - device power saving: if PCs/mobile devices supported RDMA, most networking
> transfers while playing video or downloading files would happen with little
> CPU involvement
>
> Interested?
>
> Xueming Li
>