From: Daniel Kan
Date: Thu, 9 Jan 2014 10:49:14 -0800
To: dev@dpdk.org
Message-Id: <6B1399EF-19FC-4493-B9CA-DD872CD728B4@nyansa.com>
Subject: Re: [dpdk-dev] Unable to get RSS to work in testpmd and load balancing question

The problem appears to be that rxmode.mq_mode is never set to ETH_MQ_RX_RSS
in testpmd.c; it is left at its zero-initialized value (ETH_MQ_RX_NONE), so
the NIC never spreads packets across the RX queues. There should probably be
a configuration option for this, or it should be set automatically whenever
rxq > 1.
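For illustration, here is a minimal sketch of the kind of change I have in
mind (it assumes the usual rte_eth_dev_configure() init path; the helper name
is just for illustration and the exact set of ETH_RSS_* hash flags varies
between DPDK releases):

#include <string.h>
#include <rte_ethdev.h>

/* Sketch only: enable RSS on a port when more than one RX queue is used. */
static int
configure_port_with_rss(uint8_t port_id, uint16_t nb_rxq, uint16_t nb_txq)
{
        struct rte_eth_conf port_conf;

        memset(&port_conf, 0, sizeof(port_conf));

        if (nb_rxq > 1) {
                /* Left at 0, mq_mode means ETH_MQ_RX_NONE and the NIC
                 * delivers every packet to queue 0 regardless of --rxq. */
                port_conf.rxmode.mq_mode = ETH_MQ_RX_RSS;
                port_conf.rx_adv_conf.rss_conf.rss_key = NULL; /* default key */
                port_conf.rx_adv_conf.rss_conf.rss_hf =
                        ETH_RSS_IPV4 | ETH_RSS_IPV6;
        }

        return rte_eth_dev_configure(port_id, nb_rxq, nb_txq, &port_conf);
}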
Dan

On Jan 8, 2014, at 3:24 PM, Dan Kan wrote:

> I'm evaluating DPDK using dpdk-1.5.1r1. I have been playing around with the
> testpmd sample app, and I'm having a hard time getting RSS to work. I have a
> 2-port 82599 Intel X540-DA2 NIC. I'm running the following command to start
> the app:
>
> sudo ./testpmd -c 0x1f -n 2 -- -i --portmask=0x3 --nb-cores=4 --rxq=4 --txq=4
>
> I have a packet generator that sends UDP packets with varying source IPs.
> According to testpmd, I'm only receiving packets in port 0's queue 0;
> packets are not going into any other queue. I have attached the output from
> testpmd:
>
>   ------- Forward Stats for RX Port= 0/Queue= 0 -> TX Port= 1/Queue= 0 -------
>   RX-packets: 1000000        TX-packets: 1000000        TX-dropped: 0
>
>   ---------------------- Forward statistics for port 0  ----------------------
>   RX-packets: 1000000        RX-dropped: 0              RX-total: 1000000
>   TX-packets: 0              TX-dropped: 0              TX-total: 0
>   ----------------------------------------------------------------------------
>
>   ---------------------- Forward statistics for port 1  ----------------------
>   RX-packets: 0              RX-dropped: 0              RX-total: 0
>   TX-packets: 1000000        TX-dropped: 0              TX-total: 1000000
>   ----------------------------------------------------------------------------
>
>   +++++++++++++++ Accumulated forward statistics for all ports +++++++++++++++
>   RX-packets: 1000000        RX-dropped: 0              RX-total: 1000000
>   TX-packets: 1000000        TX-dropped: 0              TX-total: 1000000
>   +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
>
> On a separate note, I also find that the aggregate CPU utilization with 1
> forwarding core for 2 ports is better than with 2 forwarding cores for 2
> ports. Running at 10 Gbps line rate with pktlen=400, with 1 core that core's
> utilization is 40%; with 2 cores, each core's utilization is about 30%, for
> an aggregate of 60%.
>
> My use case is rxonly packet processing. From my initial tests, it seems
> more efficient to have a single core read packets from both ports and
> distribute them using rte_ring than to have each core read from its own
> port. The rte_eth_rx operations appear to be much more CPU intensive than
> rte_ring_dequeue operations.
>
> Thanks in advance.
>
> Dan
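For reference, a minimal sketch of the single-RX-core / rte_ring distribution
pattern described above (the names dist_ring, BURST, and the two loop
functions are illustrative, not from testpmd; the ring is assumed to be a
single-producer ring created at init time):

#include <rte_ethdev.h>
#include <rte_mbuf.h>
#include <rte_ring.h>

#define BURST 32

/* Created once at init time, e.g.:
 *   dist_ring = rte_ring_create("dist", 1024, rte_socket_id(), RING_F_SP_ENQ);
 */
static struct rte_ring *dist_ring;

/* RX core: drain queue 0 of every port and hand the mbufs to the workers. */
static void
rx_core_loop(uint8_t nb_ports)
{
        struct rte_mbuf *pkts[BURST];
        uint8_t port;
        uint16_t nb_rx;
        unsigned int i, sent;

        for (;;) {
                for (port = 0; port < nb_ports; port++) {
                        nb_rx = rte_eth_rx_burst(port, 0, pkts, BURST);
                        if (nb_rx == 0)
                                continue;
                        sent = rte_ring_enqueue_burst(dist_ring,
                                        (void **)pkts, nb_rx);
                        /* Drop whatever the workers cannot absorb. */
                        for (i = sent; i < nb_rx; i++)
                                rte_pktmbuf_free(pkts[i]);
                }
        }
}

/* Worker core: pull packets off the ring and do the rx-only processing. */
static void
worker_core_loop(void)
{
        struct rte_mbuf *pkts[BURST];
        unsigned int i, nb_deq;

        for (;;) {
                nb_deq = rte_ring_dequeue_burst(dist_ring, (void **)pkts, BURST);
                for (i = 0; i < nb_deq; i++) {
                        /* ... rx-only processing of pkts[i] ... */
                        rte_pktmbuf_free(pkts[i]);
                }
        }
}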