From mboxrd@z Thu Jan 1 00:00:00 1970
Date: Tue, 27 Jun 2017 15:05:10 +0530
From: Jerin Jacob
To: "Hunt, David"
Cc: Harry van Haaren, dev@dpdk.org, Gage Eads, Bruce Richardson
Message-ID: <20170627093506.GB14276@jerin>
References: <1492768299-84016-1-git-send-email-harry.van.haaren@intel.com>
 <1492768299-84016-2-git-send-email-harry.van.haaren@intel.com>
 <20170510141202.GA8431@jerin>
 <6c53f05d-2dd2-5b83-2eab-bcecd93bea82@intel.com>
In-Reply-To: <6c53f05d-2dd2-5b83-2eab-bcecd93bea82@intel.com>
Subject: Re: [dpdk-dev] [PATCH 1/3] examples/eventdev_pipeline: added sample app
List-Id: DPDK patches and discussions

-----Original Message-----
> Date: Mon, 26 Jun 2017 15:46:47 +0100
> From: "Hunt, David"
> To: Jerin Jacob, Harry van Haaren
> CC: dev@dpdk.org, Gage Eads, Bruce Richardson
> Subject: Re: [dpdk-dev] [PATCH 1/3] examples/eventdev_pipeline: added
>  sample app
> User-Agent: Mozilla/5.0 (Windows NT 6.3; WOW64; rv:45.0) Gecko/20100101
>  Thunderbird/45.8.0
>
> Hi Jerin,

Hi David,

It looks like you have sent the old version; the comments mentioned below
are not addressed in v2.

> I'm assisting Harry on the sample app, and have just pushed up a V2 patch
> based on your feedback. I've addressed most of your suggestions, comments
> below. There may still be a couple of outstanding questions that need
> further discussion.

A few general comments:

1) Nikhil/Gage's proposal for an ethdev Rx to eventdev adapter will change
the major portion (eventdev setup and producer()) of this application.
2) Producing only one lcore's worth of packets really cannot showcase an
example eventdev application, as it will perform poorly on a low-end
machine. At the very least, the application infrastructure should not
impose that limit.

Considering the above points, should we wait for the Rx adapter to be
completed first? I would like to present this as a real-world application
that uses eventdev. Thoughts?

On the same note: can we finalize the Rx adapter proposal? I can work on v1
of the patch and the common code if Nikhil or Gage don't have the bandwidth.
Let me know. Last follow-up:

http://dpdk.org/ml/archives/dev/2017-June/068776.html

>
> Regards,
> Dave
>
> On 10/5/2017 3:12 PM, Jerin Jacob wrote:
> > -----Original Message-----
> >> Date: Fri, 21 Apr 2017 10:51:37 +0100
> >> From: Harry van Haaren
> >> To: dev@dpdk.org
> >> CC: jerin.jacob@caviumnetworks.com, Harry van Haaren, Gage Eads,
> >>  Bruce Richardson
> >> Subject: [PATCH 1/3] examples/eventdev_pipeline: added sample app
> >> X-Mailer: git-send-email 2.7.4
> >>
> >> This commit adds a sample app for the eventdev library.
> >> The app has been tested with DPDK 17.05-rc2, hence this
> >> release (or later) is recommended.
> >>
> >> The sample app showcases a pipeline processing use-case,
> >> with event scheduling and processing defined per stage.
> >> The application receives traffic as normal, with each
> >> packet traversing the pipeline. Once the packet has
> >> been processed by each of the pipeline stages, it is
> >> transmitted again.
> >>
> >> The app provides a framework to utilize cores for a single
> >> role or multiple roles. Examples of roles are the RX core,
> >> TX core, Scheduling core (in the case of the event/sw PMD),
> >> and worker cores.
> >>
> >> Various flags are available to configure numbers of stages,
> >> cycles of work at each stage, type of scheduling, number of
> >> worker cores, queue depths etc. For a full explanation,
> >> please refer to the documentation.
> >>
> >> Signed-off-by: Gage Eads
> >> Signed-off-by: Bruce Richardson
> >> Signed-off-by: Harry van Haaren
> >
> > Thanks for the example application sharing the SW view.
> > I could make it run on HW after some tweaking (not optimized though).
> >
> > [...]
> >> +#define MAX_NUM_STAGES 8
> >> +#define BATCH_SIZE 16
> >> +#define MAX_NUM_CORE 64
> >
> > How about RTE_MAX_LCORE?
>
> Core usage in the sample app is held in a uint64_t. Adding arrays would be
> possible, but I feel that the extra effort would not give that much benefit.
> I've left it as is for the moment, unless you see any strong requirement to
> go beyond 64 cores?

I think it is OK. Again, with the service core infrastructure this will
change.

>
> >> +
> >> +static unsigned int active_cores;
> >> +static unsigned int num_workers;
> >> +static unsigned long num_packets = (1L << 25); /* do ~32M packets */
> >> +static unsigned int num_fids = 512;
> >> +static unsigned int num_priorities = 1;
> >
> > looks like it's not used.
>
> Yes. Removed.
>
> >> +static unsigned int num_stages = 1;
> >> +static unsigned int worker_cq_depth = 16;
> >> +static int queue_type = RTE_EVENT_QUEUE_CFG_ATOMIC_ONLY;
> >> +static int16_t next_qid[MAX_NUM_STAGES+1] = {-1};
> >> +static int16_t qid[MAX_NUM_STAGES] = {-1};
> >
> > Moving all fastpath-related variables under a cache-aligned structure
> > will help.
>
> I tried a few different combinations of this, and saw no gains, some losses.
> So will leave as is for the moment, if that's OK.

I think the ones used in the fastpath are better allocated from huge pages
using rte_malloc().

>
> >> +static int worker_cycles;
> >> +static int enable_queue_priorities;
> >> +
> >> +struct prod_data {
> >> +        uint8_t dev_id;
> >> +        uint8_t port_id;
> >> +        int32_t qid;
> >> +        unsigned num_nic_ports;
> >> +};
>
> Yes, saw a percent or two gain when this plus the following two data structs
> were cache aligned.

Looks like this is not fixed in v2. Looks like you have sent the old version.

>
> >> +
> >> +        return 0;
> >> +}
> >> +
> >> +static int
> >> +producer(void)
> >> +{
> >> +        static uint8_t eth_port;
> >> +        struct rte_mbuf *mbufs[BATCH_SIZE];
> >> +        struct rte_event ev[BATCH_SIZE];
> >> +        uint32_t i, num_ports = prod_data.num_nic_ports;
> >> +        int32_t qid = prod_data.qid;
> >> +        uint8_t dev_id = prod_data.dev_id;
> >> +        uint8_t port_id = prod_data.port_id;
> >> +        uint32_t prio_idx = 0;
> >> +
> >> +        const uint16_t nb_rx = rte_eth_rx_burst(eth_port, 0, mbufs, BATCH_SIZE);
> >> +        if (++eth_port == num_ports)
> >> +                eth_port = 0;
> >> +        if (nb_rx == 0) {
> >> +                rte_pause();
> >> +                return 0;
> >> +        }
> >> +
> >> +        for (i = 0; i < nb_rx; i++) {
> >> +                ev[i].flow_id = mbufs[i]->hash.rss;
> >
> > prefetching mbufs[i+1] may help here?
>
> I tried, didn't make much difference.

OK.
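For reference, the prefetch being discussed would look roughly like this in
the producer() loop quoted above. This is a sketch only, reusing the
mbufs/ev/nb_rx names from that snippet; rte_prefetch0() is the standard DPDK
prefetch helper:

        for (i = 0; i < nb_rx; i++) {
                /* warm the next mbuf's first cache line (where hash.rss
                 * lives) while the current packet is turned into an event */
                if (i + 1 < nb_rx)
                        rte_prefetch0(mbufs[i + 1]);

                ev[i].flow_id = mbufs[i]->hash.rss;
                ev[i].op = RTE_EVENT_OP_NEW;
                /* ... rest of the event setup as in the patch ... */
        }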
> >
>
> >> +                ev[i].op = RTE_EVENT_OP_NEW;
> >> +                ev[i].sched_type = queue_type;
> >
> > The value of RTE_EVENT_QUEUE_CFG_ORDERED_ONLY != RTE_SCHED_TYPE_ORDERED,
> > so we cannot assign .sched_type as queue_type.
> >
> > I think one option to avoid the translation in the application is to:
> > - Remove RTE_EVENT_QUEUE_CFG_ALL_TYPES, RTE_EVENT_QUEUE_CFG_*_ONLY
> > - Introduce a new RTE_EVENT_DEV_CAP_ flag to denote the
> >   RTE_EVENT_QUEUE_CFG_ALL_TYPES capability
> > - Add sched_type to struct rte_event_queue_conf. If the capability flag is
> >   not set, then the implementation takes the sched_type value for the queue.
> >
> > Any thoughts?
>
> Not sure here, would it be OK for the moment, and we can work on a patch in
> the future?

OK.

> >> +
> >> +        if (tx_core[lcore_id] && (tx_single ||
> >> +            rte_atomic32_cmpset(&tx_lock, 0, 1))) {
> >> +                consumer();
> >
> > Does consumer() need to come in this pattern? I am thinking that if the
> > event is from the last stage, then call consumer() in worker().
> >
> > I think the above scheme works better when the _same_ worker code needs to
> > run in the cases where
> > 1) the ethdev HW is capable of enqueuing packets to the same txq from
> >    multiple threads
> > 2) the ethdev is not capable of doing so.
> >
> > So the above cases can be addressed at configuration time, where we link
> > the queues to ports:
> > case 1) link all workers to the last queue
> > case 2) link only one worker to the last queue
> > and keep the common worker code.
> >
> > The HW implementation has functional and performance issues if "two" ports
> > are assigned to one lcore for dequeue. The above scheme fixes that problem
> > too.
>
> Can we have a bit more discussion on this item? Is this needed for this
> sample app, or can we perhaps work on a patch for this later? Harry?

As explained above, is there any issue in keeping consumer() for the last
stage?

>
> >> +                rte_atomic32_clear((rte_atomic32_t *)&tx_lock);
> >> +        }
> >> +}
> >> +
> >> +static int
> >> +worker(void *arg)
> >> +{
> >> +        struct rte_event events[BATCH_SIZE];
> >> +
> >> +        struct worker_data *data = (struct worker_data *)arg;
> >> +        uint8_t dev_id = data->dev_id;
> >> +        uint8_t port_id = data->port_id;
> >> +        size_t sent = 0, received = 0;
> >> +        unsigned lcore_id = rte_lcore_id();
> >> +
> >> +        while (!done) {
> >> +                uint16_t i;
> >> +
> >> +                schedule_devices(dev_id, lcore_id);
> >> +
> >> +                if (!worker_core[lcore_id]) {
> >> +                        rte_pause();
> >> +                        continue;
> >> +                }
> >> +
> >> +                uint16_t nb_rx = rte_event_dequeue_burst(dev_id, port_id,
> >> +                                events, RTE_DIM(events), 0);
> >> +
> >> +                if (nb_rx == 0) {
> >> +                        rte_pause();
> >> +                        continue;
> >> +                }
> >> +                received += nb_rx;
> >> +
> >> +                for (i = 0; i < nb_rx; i++) {
> >> +                        struct ether_hdr *eth;
> >> +                        struct ether_addr addr;
> >> +                        struct rte_mbuf *m = events[i].mbuf;
> >> +
> >> +                        /* The first worker stage does classification */
> >> +                        if (events[i].queue_id == qid[0])
> >> +                                events[i].flow_id = m->hash.rss % num_fids;
> >
> > Not sure why we need to do this (shrinking the flows) in worker() in a
> > queue-based pipeline. If a PMD has any specific requirement on num_fids,
> > I think we can move this to the configuration stage, or the PMD can choose
> > an optimum fid internally to avoid the modulus operation tax in the
> > fastpath of all PMDs.
> >
> > Does struct rte_event_queue_conf.nb_atomic_flows help here?
>
> In my tests the modulus makes very little difference in the throughput. And
> I think it's good to have a way of varying the number of flows for testing
> different scenarios, even if it's not the most performant.
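For context, the alternative suggested above (expressing the flow-count limit
through the queue configuration rather than a modulus in the worker) would be
set up along these lines. This is a sketch only; the values are illustrative
and not taken from the patch:

        struct rte_event_queue_conf q_conf = {
                .event_queue_cfg = RTE_EVENT_QUEUE_CFG_ATOMIC_ONLY,
                .nb_atomic_flows = 512,  /* cap on concurrently tracked flows */
                .nb_atomic_order_sequences = 512,
                .priority = RTE_EVENT_DEV_PRIORITY_NORMAL,
        };

        if (rte_event_queue_setup(dev_id, qid[0], &q_conf) < 0)
                rte_exit(EXIT_FAILURE, "queue setup failed\n");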
Not sure.

>
> >> +
> >> +                        events[i].queue_id = next_qid[events[i].queue_id];
> >> +                        events[i].op = RTE_EVENT_OP_FORWARD;
> >
> > missing events[i].sched_type. The HW PMD does not work with this.
> > I think we can use a similar scheme to next_qid for a next_sched_type.
>
> Done. Added events[i].sched_type = queue_type.
>
> >> +
> >> +                        /* change mac addresses on packet (to use mbuf data) */
> >> +                        eth = rte_pktmbuf_mtod(m, struct ether_hdr *);
> >> +                        ether_addr_copy(&eth->d_addr, &addr);
> >> +                        ether_addr_copy(&eth->s_addr, &eth->d_addr);
> >> +                        ether_addr_copy(&addr, &eth->s_addr);
> >
> > IMO, we can make the packet processing code a "static inline function" so
> > different worker types can reuse it.
>
> Done. Moved out to a work() function.

I think the MAC swap should be done in the last stage, not on each forward,
i.e. with the existing code a 2-stage forward leaves the addresses in the
original order.

>
> >> +
> >> +                        /* do a number of cycles of work per packet */
> >> +                        volatile uint64_t start_tsc = rte_rdtsc();
> >> +                        while (rte_rdtsc() < start_tsc + worker_cycles)
> >> +                                rte_pause();
> >
> > Ditto.
>
> Done. Moved out to a work() function.
>
> > I think all worker-specific variables like "worker_cycles" can be moved
> > into one structure and used from there.
>
> >> +                }
> >> +                uint16_t nb_tx = rte_event_enqueue_burst(dev_id, port_id,
> >> +                                events, nb_rx);
> >> +                while (nb_tx < nb_rx && !done)
> >> +                        nb_tx += rte_event_enqueue_burst(dev_id, port_id,
> >> +                                        events + nb_tx,
> >> +                                        nb_rx - nb_tx);
> >> +                sent += nb_tx;
> >> +        }
> >> +
> >> +        if (!quiet)
> >> +                printf("  worker %u thread done. RX=%zu TX=%zu\n",
> >> +                                rte_lcore_id(), received, sent);
> >> +
> >> +        return 0;
> >> +}
> >> +
> >> +/*
> >> + * Parse the coremask given as argument (hexadecimal string) and fill
> >> + * the global configuration (core role and core count) with the parsed
> >> + * value.
> >> + */
> >> +static int xdigit2val(unsigned char c)
> >
> > multiple instances of "xdigit2val" in the DPDK repo. Maybe we can push this
> > as common code.
>
> Sure, that's something we can look at in a separate patch, now that it's
> being used more and more.

Makes sense.

>
> >> +{
> >> +        int val;
> >> +
> >> +        if (isdigit(c))
> >> +                val = c - '0';
> >> +        else if (isupper(c))
> >> +                val = c - 'A' + 10;
> >> +        else
> >> +                val = c - 'a' + 10;
> >> +        return val;
> >> +}
> >> +
> >> +
> >> +static void
> >> +usage(void)
> >> +{
> >> +        const char *usage_str =
> >> +                "  Usage: eventdev_demo [options]\n"
> >> +                "  Options:\n"
> >> +                "  -n, --packets=N              Send N packets (default ~32M), 0 implies no limit\n"
> >> +                "  -f, --atomic-flows=N         Use N random flows from 1 to N (default 16)\n"
> >
> > I think this parameter now affects the application fastpath code. I think
> > it should be an eventdev configuration parameter.
>
> See above comment on num_fids.
>
> >> +                "  -s, --num_stages=N           Use N atomic stages (default 1)\n"
> >> +                "  -r, --rx-mask=core mask      Run NIC rx on CPUs in core mask\n"
> >> +                "  -w, --worker-mask=core mask  Run worker on CPUs in core mask\n"
> >> +                "  -t, --tx-mask=core mask      Run NIC tx on CPUs in core mask\n"
> >> +                "  -e  --sched-mask=core mask   Run scheduler on CPUs in core mask\n"
> >> +                "  -c  --cq-depth=N             Worker CQ depth (default 16)\n"
> >> +                "  -W  --work-cycles=N          Worker cycles (default 0)\n"
> >> +                "  -P  --queue-priority         Enable scheduler queue prioritization\n"
> >> +                "  -o, --ordered                Use ordered scheduling\n"
> >> +                "  -p, --parallel               Use parallel scheduling\n"
> >
> > IMO, all stages being "parallel" or "ordered" or "atomic" is one mode of
> > operation.
> > It is valid to have any combination. We need to express that in the
> > command line, for example, 3 stages with:
> > O->A->P
>
> How about we add an option that specifies the mode of operation for each
> stage in a string? Maybe have a '-m' option (modes) e.g. '-m appo' for 4
> stages with atomic, parallel, parallel, ordered. Or maybe reuse your
> test-eventdev parameter style?

Any scheme is fine.

>
> >> +                "  -q, --quiet                  Minimize printed output\n"
> >> +                "  -D, --dump                   Print detailed statistics before exit"
> >> +                "\n";
> >> +        fprintf(stderr, "%s", usage_str);
> >> +        exit(1);
> >> +}
> >> +
>
> [...]
>
> >> +                        rx_single = (popcnt == 1);
> >> +                        break;
> >> +                case 't':
> >> +                        tx_lcore_mask = parse_coremask(optarg);
> >> +                        popcnt = __builtin_popcountll(tx_lcore_mask);
> >> +                        tx_single = (popcnt == 1);
> >> +                        break;
> >> +                case 'e':
> >> +                        sched_lcore_mask = parse_coremask(optarg);
> >> +                        popcnt = __builtin_popcountll(sched_lcore_mask);
> >> +                        sched_single = (popcnt == 1);
> >> +                        break;
> >> +                default:
> >> +                        usage();
> >> +                }
> >> +        }
> >> +
> >> +        if (worker_lcore_mask == 0 || rx_lcore_mask == 0 ||
> >> +            sched_lcore_mask == 0 || tx_lcore_mask == 0) {
> >
> >> +
> >> +        /* Q creation - one load balanced per pipeline stage*/
> >> +
> >> +        /* set up one port per worker, linking to all stage queues */
> >> +        for (i = 0; i < num_workers; i++) {
> >> +                struct worker_data *w = &worker_data[i];
> >> +                w->dev_id = dev_id;
> >> +                if (rte_event_port_setup(dev_id, i, &wkr_p_conf) < 0) {
> >> +                        printf("Error setting up port %d\n", i);
> >> +                        return -1;
> >> +                }
> >> +
> >> +                uint32_t s;
> >> +                for (s = 0; s < num_stages; s++) {
> >> +                        if (rte_event_port_link(dev_id, i,
> >> +                                        &worker_queues[s].queue_id,
> >> +                                        &worker_queues[s].priority,
> >> +                                        1) != 1) {
> >> +                                printf("%d: error creating link for port %d\n",
> >> +                                                __LINE__, i);
> >> +                                return -1;
> >> +                        }
> >> +                }
> >> +                w->port_id = i;
> >> +        }
> >> +        /* port for consumer, linked to TX queue */
> >> +        if (rte_event_port_setup(dev_id, i, &tx_p_conf) < 0) {
> >
> > If the ethdev supports MT txq support, then this port can be linked to the
> > workers too. Something to consider for the future.
>
> Sure. No change for now.

OK.
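For future reference, the linking scheme mentioned above could look roughly
like the following once the ethdev can accept TX enqueues from multiple
threads. This is a sketch only; mt_safe_txq and tx_queue_id are assumed names
and not part of the current patch:

        if (mt_safe_txq) {
                /* link the TX queue to every worker port so workers can
                 * transmit directly, instead of funnelling events through
                 * a single consumer port */
                for (i = 0; i < num_workers; i++) {
                        uint8_t prio = RTE_EVENT_DEV_PRIORITY_NORMAL;

                        if (rte_event_port_link(dev_id, worker_data[i].port_id,
                                        &tx_queue_id, &prio, 1) != 1)
                                rte_exit(EXIT_FAILURE,
                                        "%d: error linking TX queue to worker %d\n",
                                        __LINE__, i);
                }
        }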