DPDK patches and discussions
From: Bruce Richardson <bruce.richardson@intel.com>
To: Sachin Sharma <sharonsachin@gmail.com>
Cc: dev@dpdk.org
Subject: Re: [dpdk-dev] transmit functions of dpdk
Date: Wed, 10 Dec 2014 13:37:45 +0000	[thread overview]
Message-ID: <20141210133744.GA1632@bricha3-MOBL3> (raw)
In-Reply-To: <CAH-Vi3zGXu7_DXofCVADueDnhKWrAik3D=9H-ZfB6x8EKXG6gA@mail.gmail.com>

On Wed, Dec 10, 2014 at 01:09:32PM +0100, Sachin Sharma wrote:
> Hi Bruce,
> 
> >>>I'm not entirely clear on what you mean by filling one queue and
> emptying another. Is this just a form of buffering you are trying to
> implement?
> 
> Yes, you are right! I am implementing a buffering mechanism in which a node
> has three queues. It fills one queue with packets and, when that queue is
> full, transmits the packets from it to the wire. While one queue is being
> filled, it can transmit packets to the wire from another queue that is
> already full.
> 
> Thanks & Regards,
> Sachin.
> 
The standard sample applications shipped with DPDK use a simple buffering
scheme, where we buffer the packets until a full burst of 32 is ready for
sending. Once we have a full burst of packets - or a timeout occurs - we then
send that burst using the tx_burst function. Would such a scheme not work for you?

NOTE: the tx_burst function returns as soon as the packets are written to the
NIC's TX ring and the NIC's tail pointer is updated. It does not actually wait
until all the packets are transmitted onto the wire. This means that your core
is free to do other tasks during the actual packet transmission time, so you
don't need a buffer for new packets arriving at the core while the NIC is
physically transmitting data.

Regards,
/Bruce
> 
> 
> On Wed, Dec 10, 2014 at 12:45 PM, Bruce Richardson <
> bruce.richardson@intel.com> wrote:
> 
> > On Wed, Dec 10, 2014 at 12:31:27PM +0100, Sachin Sharma wrote:
> > > Hi Bruce,
> > >
> > > In my use case, I want to have three NIC TX queues per port, and want to
> > > fill one NIC TX queue while emptying another. Is this possible through
> > > tx_burst, or do I need to implement these queues in the application as
> > > you suggested before? In that case, I would have one NIC TX queue and
> > > three queues in the application, which actually transmits packets to
> > > this NIC TX queue. Am I right?
> > >
> >
> > For the suggestion I made, yes, you would have three software queues in
> > your application, and a single TX queue on the NIC - though you could also
> > have a 1:1 mapping of software to HW queues if you wanted.
> > However, I'm not entirely clear on what you mean by filling one queue and
> > emptying another. Is this just a form of buffering you are trying to
> > implement?
> >
> > > Thanks,
> > > Sachin.
> > >
> > > On Wed, Dec 10, 2014 at 12:22 PM, Bruce Richardson <
> > > bruce.richardson@intel.com> wrote:
> > >
> > > > On Wed, Dec 10, 2014 at 12:03:41PM +0100, Sachin Sharma wrote:
> > > > > Dear all,
> > > > >
> > > > >  In my algorithm, I am interested in performing two activities - (1)
> > > > > transmitting packets to a tx_queue and (2) transmitting packets from
> > > > > the tx_queue to the wire - separately. I have gone through the code
> > > > > by putting logs in the dpdk code and found that there is a function
> > > > > rte_eth_tx_burst which transmits packets to a specific queue.
> > > > > However, when I debugged further, I found that this function just
> > > > > calls eth_igb_xmit_pkts from librte_pmd_e1000, and that function
> > > > > directly writes the packets to the wire by writing all packets into
> > > > > registers. Could you please suggest how to implement these two
> > > > > functions if they are not implemented already in dpdk?
> > > > >
> > > > >
> > > > >
> > > > > Thanks & Regards,
> > > > > Sachin.
> > > >
> > > > Hi Sachin,
> > > >
> > > > anything written to the NIC TX queue is automatically put onto the
> > > > wire, unless the NIC port is down or the wire is unplugged etc. What
> > > > is your use-case that you need to do this? I would suggest doing
> > > > internal buffering in your application, as many DPDK example
> > > > applications do, and then calling tx_burst to put your packets on the
> > > > wire when you want this capability.
> > > >
> > > > Regards,
> > > > /Bruce
> > > >
> >


Thread overview: 7+ messages
2014-12-10 11:03 Sachin Sharma
2014-12-10 11:22 ` Bruce Richardson
2014-12-10 11:31   ` Sachin Sharma
2014-12-10 11:45     ` Bruce Richardson
2014-12-10 12:09       ` Sachin Sharma
2014-12-10 13:37         ` Bruce Richardson [this message]
2014-12-10 14:14           ` Sachin Sharma
