DPDK patches and discussions
From: Sachin Sharma <sharonsachin@gmail.com>
To: Bruce Richardson <bruce.richardson@intel.com>
Cc: dev@dpdk.org
Subject: Re: [dpdk-dev] transmit functions of dpdk
Date: Wed, 10 Dec 2014 15:14:13 +0100
Message-ID: <CAH-Vi3xYA16XR3cwkwcUc+cqReH6xAKk97Bqubzy0XOXDNHa1Q@mail.gmail.com> (raw)
In-Reply-To: <20141210133744.GA1632@bricha3-MOBL3>

Hi Bruce,
>>> The standard sample applications with DPDK use a simple buffering
>>> scheme, where we buffer the packets until a full burst of 32 are ready
>>> for sending. Once we have a full burst of packets - or a timeout
>>> occurs - we then send that burst of packets using the tx_burst
>>> function. Would such a scheme not work for you?

Perhaps this scheme would give me some timing issues, as timing is very
critical in my case. In your example, the NIC TX ring also does some kind
of buffering in an uncontrolled manner (i.e., packets are handed to it
without knowing whether it will buffer them or forward them directly onto
the wire). However, in my case I want to control both the buffering into
the NIC TX ring and the transmission onto the wire from my software.
Therefore, I think I need to create three NIC TX rings per port, rather
than three software queues, and transmit onto the wire from those rings
through my software code. So, is there any way I can control the
transmission of packets through these NIC TX rings, i.e., both the
buffering and the transmission onto the wire? Or is this not possible?

Thanks & kind regards,
Sachin.


On Wed, Dec 10, 2014 at 2:37 PM, Bruce Richardson <bruce.richardson@intel.com> wrote:

> On Wed, Dec 10, 2014 at 01:09:32PM +0100, Sachin Sharma wrote:
> > Hi Bruce,
> >
> > >>> I'm not entirely clear on what you mean by filling one queue and
> > >>> emptying another. Is this just a form of buffering you are trying
> > >>> to implement?
> >
> > Yes, you are right! I am implementing a buffering mechanism in which a
> > node has three queues: it fills one queue with packets, and when that
> > queue is full it transmits the packets from that queue onto the wire.
> > While one queue is being filled, it can transmit packets onto the wire
> > from another queue that is already full.
> >
> > Thanks & Regards,
> > Sachin.
> >
> The standard sample applications with DPDK use a simple buffering
> scheme, where we buffer the packets until a full burst of 32 are ready
> for sending. Once we have a full burst of packets - or a timeout occurs
> - we then send that burst of packets using the tx_burst function. Would
> such a scheme not work for you?
>
> NOTE: the tx_burst function returns as soon as the packets are written
> to the NIC's TX ring and the NIC's tail pointer is updated. It does not
> actually wait until all the packets are transmitted onto the wire. This
> means that your core is free to do other tasks during the actual packet
> transmission time, and you don't need a buffer for new packets arriving
> at the core while the NIC is physically transmitting data.
>
> Regards,
> /Bruce
> >
> >
> > On Wed, Dec 10, 2014 at 12:45 PM, Bruce Richardson <bruce.richardson@intel.com> wrote:
> >
> > > On Wed, Dec 10, 2014 at 12:31:27PM +0100, Sachin Sharma wrote:
> > > > Hi Bruce,
> > > >
> > > > In my use case, I want to have three NIC TX queues per port, and I
> > > > want to fill one NIC TX queue while emptying another. Is this
> > > > possible through tx_burst, or do I need to implement these queues
> > > > in the application as you suggested before? In that case, I would
> > > > have one NIC TX queue and three queues in the application, which
> > > > actually transmits packets to this NIC TX queue. Am I right?
> > > >
> > >
> > > For the suggestion I made, yes, you would have three software queues
> > > in your application and a single TX queue on the NIC - though you
> > > could also have a 1:1 mapping of software to HW queues if you wanted.
> > > However, I'm not entirely clear on what you mean by filling one queue
> > > and emptying another. Is this just a form of buffering you are trying
> > > to implement?
> > >
> > > > Thanks,
> > > > Sachin.
> > > >
> > > > On Wed, Dec 10, 2014 at 12:22 PM, Bruce Richardson <bruce.richardson@intel.com> wrote:
> > > >
> > > > > On Wed, Dec 10, 2014 at 12:03:41PM +0100, Sachin Sharma wrote:
> > > > > > Dear all,
> > > > > >
> > > > > > In my algorithm, I am interested in performing two activities
> > > > > > separately: (1) transmitting packets to a tx_queue, and (2)
> > > > > > transmitting packets from the tx_queue onto the wire. I have
> > > > > > gone through the DPDK code by adding logs and found that there
> > > > > > is a function, rte_eth_tx_burst, which transmits packets to a
> > > > > > specific queue. However, when I debugged further, I found that
> > > > > > this function just calls eth_igb_xmit_pkts from
> > > > > > librte_pmd_e1000, which writes the packets directly to the wire
> > > > > > by writing them into registers. Could you please suggest how to
> > > > > > implement these two functions if they are not already
> > > > > > implemented in DPDK?
> > > > > >
> > > > > >
> > > > > >
> > > > > > Thanks & Regards,
> > > > > > Sachin.
> > > > >
> > > > > Hi Sachin,
> > > > >
> > > > > anything written to the NIC TX queue is automatically put onto
> > > > > the wire, unless the NIC port is down or the wire is unplugged,
> > > > > etc. What is your use case that you need to do this? I would
> > > > > suggest doing internal buffering in your application, as many
> > > > > DPDK example applications do, and then calling tx_burst to put
> > > > > your packets on the wire when you want this capability.
> > > > >
> > > > > Regards,
> > > > > /Bruce
> > > > >
> > >
>

Thread overview: 7+ messages
2014-12-10 11:03 Sachin Sharma
2014-12-10 11:22 ` Bruce Richardson
2014-12-10 11:31   ` Sachin Sharma
2014-12-10 11:45     ` Bruce Richardson
2014-12-10 12:09       ` Sachin Sharma
2014-12-10 13:37         ` Bruce Richardson
2014-12-10 14:14           ` Sachin Sharma [this message]

