From: Tetsuya Mukawa <mukawa@igel.co.jp>
To: dev@dpdk.org
Cc: nakajima.yoshihiro@lab.ntt.co.jp, masutani.hitoshi@lab.ntt.co.jp
Date: Fri, 19 Sep 2014 21:27:38 +0900
Message-Id: <1411129659-7132-1-git-send-email-mukawa@igel.co.jp>
Subject: [dpdk-dev] [RFC] PMD for performance measurement

Hi,

I want to measure throughput in cases like the following:
 - a path connected by ring PMDs
 - a path connected by librte_vhost and the virtio-net PMD
 - a path connected by MEMNIC PMDs
 - .....

Does anyone else want to do the same thing?

Anyway, I guess these throughputs may be too high for some devices, such as an IXIA tester. And it is a bit of a pain to write or fix applications just for measurement. I guess a PMD that behaves like '/dev/null', combined with the testpmd application, will help.

This patch set is an RFC for such a '/dev/null'-like PMD. Please see the first commit of this patch set.

Here is my plan for using this PMD:

 +-------------------------------+
 |           testpmd1            |
 +-------------+------+----------+
 | Target PMD1 |      | null PMD |
 +---++--------+      +----------+
     ||
     || Target path
     ||
 +---++--------+      +----------+
 | Target PMD2 |      | null PMD |
 +-------------+------+----------+
 |           testpmd2            |
 +-------------------------------+

testpmd1 or testpmd2 is started with "start tx_first", which causes huge transfers. The result is not the throughput of PMD1 or PMD2 alone, but the throughput between PMD1 and PMD2. Still, I guess that is enough to get a rough throughput figure. It is also convenient for measurement that the same environment can be reused.

Any suggestions or comments?
Thanks,
Tetsuya

--

Tetsuya Mukawa (1):
  librte_pmd_null: Add null PMD

 config/common_bsdapp               |   5 +
 config/common_linuxapp             |   5 +
 lib/Makefile                       |   1 +
 lib/librte_pmd_null/Makefile       |  58 +++++
 lib/librte_pmd_null/rte_eth_null.c | 474 +++++++++++++++++++++++++++++++++++++
 5 files changed, 543 insertions(+)
 create mode 100644 lib/librte_pmd_null/Makefile
 create mode 100644 lib/librte_pmd_null/rte_eth_null.c

--
1.9.1