DPDK usage discussions
* [dpdk-users] RDMA over DPDK
@ 2020-03-01 11:33 Xueming(Steven) Li
  2020-03-01 15:31 ` Cliff Burdick
  0 siblings, 1 reply; 2+ messages in thread
From: Xueming(Steven) Li @ 2020-03-01 11:33 UTC (permalink / raw)
  To: users

 With a quick hack on the mlx5 PMD, it's possible to send RDMA operations with few changes. Performance results between two back-to-back connected 25Gb NICs:

    - Continuous 1MB RDMA writes to 256 different memory targets on the remote peer: line speed, 2.6Mpps, MTU 1024
    - Continuous 8B RDMA writes to the remote peer: line speed, 29.4Mpps, RoCEv2 (74B + 8B)

Currently, DPDK usage focuses on networking scenarios: OVS, firewalls, load balancing, and so on.
With hardware acceleration, RDMA is an application-level API with more capability than sockets: transfers up to 2GB, lower latency, and atomic operations. It would let DPDK bypass the stack all the way to the application server - another huge market, I believe.

Why RDMA over DPDK:

- performance: DPDK-style batch/burst transfers, fewer i-cache misses
- easy to prefetch - no linked lists
- reuse the mbuf data structure with some modification
- able to send RDMA requests together with Ethernet mbuf data
- virtualization support: with rte_flow, hardware encap/decap for VF RDMA traffic
 
Potential applications:

- RDMA <-> RDMA applications in DC/HPC
- Ethernet <-> RDMA applications
- device power saving: if PCs and mobiles supported RDMA, most network transfers for playing video or downloading files would involve very little CPU.

Interested?

Xueming Li


* Re: [dpdk-users] RDMA over DPDK
  2020-03-01 11:33 [dpdk-users] RDMA over DPDK Xueming(Steven) Li
@ 2020-03-01 15:31 ` Cliff Burdick
  0 siblings, 0 replies; 2+ messages in thread
From: Cliff Burdick @ 2020-03-01 15:31 UTC (permalink / raw)
  To: Xueming(Steven) Li; +Cc: users

If you're interested in this for GPUs, you should check out cuVNF here:

https://developer.nvidia.com/aerial-sdk

