From: Sachin Sharma
To: Bruce Richardson
Cc: dev@dpdk.org
Date: Wed, 10 Dec 2014 15:14:13 +0100
Message-ID: (unknown)
In-Reply-To: <20141210133744.GA1632@bricha3-MOBL3>
Subject: Re: [dpdk-dev] transmit functions of dpdk

Hi Bruce,

>>> The standard sample applications with DPDK use a simple buffering
>>> scheme, where we buffer the packets until a full burst of 32 are ready
>>> for sending. Once we have a full burst of packets - or a timeout occurs
>>> - we then send that burst of packets using the tx_burst function. Would
>>> such a scheme not work for you?

Perhaps, but this scheme may cause timing issues for me; timing is a very
critical issue in my case. In your example, the NIC TX ring also does some
buffering, in an uncontrolled manner (i.e., packets are handed to it without
knowing whether it will buffer them or forward them directly onto the wire).
In my case, however, I want to control both the transmission onto the wire
and the buffering into the NIC TX ring from my software. I therefore think
I need to create three NIC TX rings per port, rather than three software
queues, and transmit onto the wire from those rings through my software
code. So, is there any way to control the transmission of packets through
these NIC TX rings, i.e., both the buffering and the transmission onto the
wire? Or is this not possible?

Thanks & kind regards,
Sachin.

On Wed, Dec 10, 2014 at 2:37 PM, Bruce Richardson
<bruce.richardson@intel.com> wrote:
> On Wed, Dec 10, 2014 at 01:09:32PM +0100, Sachin Sharma wrote:
> > Hi Bruce,
> >
> > >>> I'm not entirely clear on what you mean by filling one queue and
> > >>> emptying another. Is this just a form of buffering you are trying
> > >>> to implement?
> >
> > Yes, you are right! I am implementing a buffering mechanism in which a
> > node has three queues: it fills one queue with packets, and when that
> > queue is full, it transmits the packets from the queue onto the wire.
> > While filling one queue, it can transmit packets onto the wire through
> > another queue that is already full.
> >
> > Thanks & Regards,
> > Sachin.
>
> The standard sample applications with DPDK use a simple buffering scheme,
> where we buffer the packets until a full burst of 32 are ready for
> sending. Once we have a full burst of packets - or a timeout occurs - we
> then send that burst of packets using the tx_burst function. Would such a
> scheme not work for you?
>
> NOTE: the tx_burst function returns as soon as the packets are written to
> the NIC's TX ring and the NIC's tail pointer is updated. It does not
> actually wait until all the packets are transmitted onto the wire. This
> means that your core is free to get on with other tasks during the actual
> packet transmission time, so you don't need a buffer for new packets
> arriving at the core while the NIC is physically transmitting data.
>
> Regards,
> /Bruce
>
> > On Wed, Dec 10, 2014 at 12:45 PM, Bruce Richardson
> > <bruce.richardson@intel.com> wrote:
> > > On Wed, Dec 10, 2014 at 12:31:27PM +0100, Sachin Sharma wrote:
> > > > Hi Bruce,
> > > >
> > > > In my use case, I want to have three NIC TX queues per port, and I
> > > > want to fill one NIC TX queue while emptying another. Is this
> > > > possible through tx_burst, or do I need to implement these queues in
> > > > the application, as you suggested before? In that case, I would have
> > > > one NIC TX queue and three queues in the application, which actually
> > > > transmit packets to this NIC TX queue. Am I right?
> > >
> > > For the suggestion I made, yes, you would have three software queues
> > > in your application and a single TX queue on the NIC - though you
> > > could also have a 1:1 mapping of software to HW queues if you wanted.
> > > However, I'm not entirely clear on what you mean by filling one queue
> > > and emptying another. Is this just a form of buffering you are trying
> > > to implement?
> > >
> > > > Thanks,
> > > > Sachin.
> > > >
> > > > On Wed, Dec 10, 2014 at 12:22 PM, Bruce Richardson
> > > > <bruce.richardson@intel.com> wrote:
> > > > > On Wed, Dec 10, 2014 at 12:03:41PM +0100, Sachin Sharma wrote:
> > > > > > Dear all,
> > > > > >
> > > > > > In my algorithm, I am interested in performing two activities
> > > > > > separately: (1) transmitting packets to a tx_queue, and (2)
> > > > > > transmitting packets from the tx_queue onto the wire. I have
> > > > > > gone through the DPDK code, adding log statements, and found
> > > > > > that there is a function rte_eth_tx_burst which transmits
> > > > > > packets to a specific queue. However, when I debugged further,
> > > > > > I found that this function just calls eth_igb_xmit_pkts from
> > > > > > librte_pmd_e1000, and that function writes the packets directly
> > > > > > to the wire by writing them all into registers. Could you please
> > > > > > suggest how to implement these two functions if they are not
> > > > > > already implemented in DPDK?
> > > > > >
> > > > > > Thanks & Regards,
> > > > > > Sachin.
> > > > >
> > > > > Hi Sachin,
> > > > >
> > > > > Anything written to the NIC TX queue is automatically put onto the
> > > > > wire, unless the NIC port is down or the wire is unplugged, etc.
> > > > > What is your use-case that you need to do this? I would suggest
> > > > > doing internal buffering in your application, as many DPDK example
> > > > > applications do, and then calling tx_burst to put your packets on
> > > > > the wire when you want this capability.
> > > > >
> > > > > Regards,
> > > > > /Bruce