DPDK community structure changes
From: "O'Driscoll, Tim" <tim.odriscoll@intel.com>
To: Shepard Siegel <shepard.siegel@atomicrules.com>,
	"moving@dpdk.org" <moving@dpdk.org>
Subject: Re: [dpdk-moving] Atomic Rules interest in DPDK Lab Participation
Date: Fri, 7 Apr 2017 15:44:25 +0000
Message-ID: <26FA93C7ED1EAA44AB77D62FBE1D27BA75A60BB6@IRSMSX108.ger.corp.intel.com> (raw)
In-Reply-To: <CAMMLSKDVvMM4OYmwwN9exPi-G=6Twq1ByxiHGGD29dZWkMQ_TA@mail.gmail.com>


No problem, Shep. I'll add you to the meetings on this, and we can work out if and how Atomic Rules fits into this initiative.


Tim

From: moving [mailto:moving-bounces@dpdk.org] On Behalf Of Shepard Siegel
Sent: Friday, April 7, 2017 12:07 PM
To: moving@dpdk.org
Subject: [dpdk-moving] Atomic Rules interest in DPDK Lab Participation

Tim and DPDK Lab Group,
Atomic Rules is a new DPDK/LF member and is interested in participating in and contributing to the DPDK Lab effort. For over a year, about five engineers have been working to develop our Arkville product, which will be announced publicly next month, following 17.05. The elevator pitch for Arkville is that it is an FPGA/GPP AXI/DPDK packet conduit. It is, by design, line-rate and feature-agnostic. You can think of it as a barebones FPGA-based NIC without any specific MAC. There is a blog post here:
http://atomicrules.blogspot.com/2017/03/the-road-to-arkville.html
Months ago we began to use DPDK/DTS as our testing regime. Then, as now, we require objective measures of correctness, throughput, and latency in our CI regression loop. We spent several frustrating months doing the following: buy a batch of Fortville/XL710 NICs and servers; set up DTS and understand it; establish a baseline showing out-of-the-box Fortville/XL710 performance; instrument everything; then replace the Fortville DUT NIC with a work-alike Arkville DUT NIC and note the differences. This has not gone as smoothly as planned, because the DTS code is quite brittle with respect to exact system configuration (e.g. NUMA zones and processor selection). We have learned from this and even built our own 4 x 100GbE TRex-ready packet player (thanks to Cisco and Hanoch Haim):
http://atomicrules.blogspot.com/2017/01/paced-packet-player.html
This work remains incomplete and stalled on several technical fronts.
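As an illustration of the configuration brittleness described above, a pre-flight check along these lines can catch a NIC/core NUMA mismatch before a test run. This is a minimal sketch, not part of DTS: the sysfs paths are standard on Linux, but the function names, the PCI address, and the core set in the usage comment are hypothetical.

```python
from pathlib import Path

def parse_cpulist(text: str) -> set[int]:
    """Expand a Linux cpulist string such as '0-3,8-11' into a set of core IDs."""
    cpus: set[int] = set()
    for part in text.strip().split(","):
        lo, _, hi = part.partition("-")
        cpus.update(range(int(lo), int(hi or lo) + 1))
    return cpus

def nic_numa_node(pci_addr: str) -> int:
    """Return the NUMA node of a PCI device from sysfs, or -1 if not exposed."""
    node_file = Path(f"/sys/bus/pci/devices/{pci_addr}/numa_node")
    return int(node_file.read_text().strip()) if node_file.exists() else -1

def cores_match_nic(pci_addr: str, cores: set[int]) -> bool:
    """True when every requested core lives on the NIC's NUMA node."""
    node = nic_numa_node(pci_addr)
    if node < 0:
        return True  # no NUMA topology exposed; nothing to check
    cpulist = Path(f"/sys/devices/system/node/node{node}/cpulist").read_text()
    return cores <= parse_cpulist(cpulist)

# Usage on a Linux DUT (PCI address and core set are example values):
#   cores_match_nic("0000:03:00.0", {0, 1, 2, 3})
```

Running a check like this before each DTS invocation makes cross-socket misconfiguration an explicit failure rather than a silent throughput penalty.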

Still, this is a "toy" next to what this group is considering. I'm not sure that Atomic Rules has the resources to stand shoulder-to-shoulder with the giants in this group. But we are terrifically interested in finding a way to get the vital DPDK correctness, throughput, and latency measures we seek. We are located 40 minutes from the UNH IOL in Durham, NH, where we have participated in related interop events. It's a great space with sharp engineers and an army of bright interns and grad students.
Please allow Atomic Rules to at least participate as an observer in the DPDK Lab project while we impedance-match to see if we can actually engage and contribute our software and hardware to this much-needed process.
-Shep
Shepard Siegel, CTO
atomicrules.com



On Fri, Apr 7, 2017 at 6:00 AM, <moving-request@dpdk.org> wrote:


Today's Topics:

   1. DPDK Lab (O'Driscoll, Tim)


----------------------------------------------------------------------

Message: 1
Date: Fri, 7 Apr 2017 05:02:32 +0000
From: "O'Driscoll, Tim" <tim.odriscoll@intel.com>
To: "moving@dpdk.org" <moving@dpdk.org>
Subject: [dpdk-moving] DPDK Lab
Message-ID:
        <26FA93C7ED1EAA44AB77D62FBE1D27BA75A60870@IRSMSX108.ger.corp.intel.com>

Content-Type: text/plain; charset="us-ascii"

A couple of months ago we discussed creating an open DPDK lab for identifying performance regressions. See http://dpdk.org/ml/archives/moving/2017-February/000177.html for the initial proposal. We agreed to form a small sub-team of those who were interested in participating in the lab, and have had a few follow-up calls involving reps from Intel, Mellanox, NXP, 6WIND and Red Hat.

Because several companies that were not involved in those earlier discussions have now joined the DPDK Linux Foundation project, we agreed at our last meeting to post again on this mailing list to see if anybody else is interested in participating. If anybody is, let me know and I'll include you in the meetings.

As background, the purpose of the lab is to identify any performance regressions in patches that are submitted to DPDK. Testing when the patches are submitted will help to identify problems early, and avoid situations where we're trying to fix performance issues late in the release (as we have been doing with the mbuf changes in 17.05 recently). Doing the testing in an open lab will help to give people confidence in the numbers, and make sure the data is accessible to the community.

As a quick status update on this, we have an initial equipment list now (see https://docs.google.com/spreadsheets/d/17t8j388wAxwF7B6iuZ5gpkauLMB1ownJe3eIVl4TXUg/edit?usp=sharing), and plan to focus on the specific tests to be run at our next meeting. At the moment the spreadsheet specifies all tests as being run on a daily basis, but we need to determine which can be run per patch and/or per patch set. We're also investigating hosting costs so that we can create a complete proposal that can then be submitted to the governing board for review/approval.
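To make the per-patch gating idea concrete, one minimal sketch of a regression check is to compare a patch's throughput samples against a rolling baseline with a tolerance band. The 2% tolerance, the data shapes, and the function name here are illustrative assumptions, not the lab's actual policy:

```python
# Illustrative per-patch gate: flag a performance regression when a patch's
# median measured throughput drops more than `tolerance` below the baseline.
# Medians are used so a single noisy sample does not trip the gate.
from statistics import median

def is_regression(baseline_mpps: list[float], patch_mpps: list[float],
                  tolerance: float = 0.02) -> bool:
    """True if the patch's median throughput falls more than
    `tolerance` (fraction) below the baseline's median throughput."""
    base = median(baseline_mpps)
    return median(patch_mpps) < base * (1.0 - tolerance)

# Example: a ~5% drop against a ~42 Mpps baseline trips the 2% gate,
# while normal run-to-run jitter does not.
assert is_regression([41.9, 42.1, 42.0], [39.8, 39.9, 40.0])
assert not is_regression([41.9, 42.1, 42.0], [41.8, 42.0, 41.7])
```

A check this cheap could run per patch, with the longer daily sweeps reserved for the full test matrix in the spreadsheet.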


Tim



End of moving Digest, Vol 7, Issue 2
************************************


