From: Srinivas Reddi
To: "dev@dpdk.org" , Hiroshi Shimamoto
Date: Mon, 21 Jul 2014 15:15:17 +0000
Message-ID: <9918fdb09c2c4dd5a4fb40a5e6a04968@GURMBXV02.AD.ARICENT.COM>
Subject: [dpdk-dev] MEMNIC [How can we achieve max throughput using memnic interfaces]

Hi,

I wrote a test application using MEMNIC interfaces for inter-VM switching [VM1 --> host --> VM2]. I send unidirectional traffic from a traffic generator at VM1 to the host.
On the host, my application switches the traffic coming from VM1 to VM2. I used DPDK 1.7 and MEMNIC 1.2, with librte_pmd_memnic_copy.so to bind the MEMNIC interfaces to my DPDK application (the traffic generator app).

I observed a maximum throughput of only around 470 Mbps (with 1400-byte packets, 21,000 packets per second).

Is there a better way to improve throughput over MEMNIC interfaces? If I increase the shared memory area shown below, will that serve my need? I want around 4 to 5 Gbps of throughput with a unidirectional flow from VM1 to VM2 through the host. How much shared memory area should I allocate per port, and what should the memory alignment be?

/*
 * Shared memory area mapping
 * +------------------+
 * | Header Area 1MB  |
 * +------------------+
 * | Up to VM 7MB     |
 * +------------------+
 * | Padding 1MB      |
 * +------------------+
 * | Down to host 7MB |
 * +------------------+
 */
struct memnic_area {
	union {
		struct memnic_header hdr;
		char hdr_pad[1024 * 1024];
	};
	union {
		struct memnic_data up;
		char up_pad[7 * 1024 * 1024];
	};
	char blank[1024 * 1024];
	union {
		struct memnic_data down;
		char down_pad[7 * 1024 * 1024];
	};
};

Thanks & Regards,
Srinivas.