From: Peter Keereweer
To: "Wiles, Keith"
CC: "users@dpdk.org"
Subject: Re: [dpdk-users] What to do after rte_eth_tx_burst: free or send again remaining packets?
Date: Thu, 2 Feb 2017 06:29:57 +0000
List-Id: DPDK usage discussions
Hi Keith,

Thanks again for your response! Eventually we started the Load Balancer with these parameters, and it is working fine:

> sudo build/app/sessioniser -c 0xfff -n 4 -- --rx "(0,0,0)" --tx "(0,2)" --w "4" --bsz "(32,32),(64,64),(32,32)"

So we use a burst size of 32 in the RX / TX cores and a burst size of 64 in the worker cores.

Thank you for taking the time to answer my questions! I have asked a different question about pktgen in another message on users@dpdk.org (subject: pktgen: sending / capturing more packets than configured?). Maybe you can help me with that question as well. But this topic can be closed.

I appreciate all your help!

Peter

From: Wiles, Keith
Sent: Monday, 30 January 2017 21:55
To: Peter Keereweer
CC: users@dpdk.org
Subject: Re: [dpdk-users] What to do after rte_eth_tx_burst: free or send again remaining packets?

> On Jan 30, 2017, at 10:02 AM, Peter Keereweer wrote:
>
> Hi Keith,
>
> Thanks a lot for your response! Based on your information I have tested different burst sizes in the Load Balancer application (and left the TX ring size unchanged). The read / write burst sizes of the NIC and the software queues can be configured as a command line option; the default value of all burst sizes is 144. If I configure all read / write burst sizes as 32, every packet is transmitted by the TX core and no packets are dropped. But is this a valid solution? It seems to work, but it feels a little bit strange to decrease the burst size from 144 to 32.

I am going to guess that 32 is a better fit for the ring size and how the hardware handles the descriptors. Some hardware only frees its descriptors in bursts across the PCI bus; that is easier and faster for the hardware, as it does not need to hand every descriptor back one at a time. You may have just hit a sweet spot between the burst size and the hardware. Normally 8 descriptors on Intel hardware make up a cache line (64 bytes), and using a multiple of 8 gives the best performance: writing only part of a cache line causes a lot of extra PCI transactions to complete the partial line.
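To make that concrete, a tiny helper in this spirit (purely illustrative; it is not DPDK API or load balancer code) could clamp a configured burst size to a multiple of 8 descriptors:

#include <stdint.h>

/* Round a configured burst size down to a multiple of 8 descriptors,
 * per the cache-line argument above. Illustrative only. */
static inline uint16_t
burst_align8(uint16_t bsz)
{
        return (uint16_t)(bsz & ~(uint16_t)7);
}
/* burst_align8(144) == 144, burst_align8(36) == 32 */

Note that 144 is already a multiple of 8, so the improvement you saw at 32 is more plausibly the ring-size fit than cache-line alignment alone.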
> Another solution is implementing a while loop (like in _send_burst_fast in pktgen), so every packet will be transmitted. This solution seems to work too, but the same question applies: is this a valid solution? The strange feeling about this solution is that basically the same thing happens in the ixgbe driver code (ixgbe_rxtx.c):
>
> uint16_t
> ixgbe_xmit_pkts_simple(void *tx_queue, struct rte_mbuf **tx_pkts,
>                        uint16_t nb_pkts)
> {
>         uint16_t nb_tx;
>
>         /* Try to transmit at least chunks of TX_MAX_BURST pkts */
>         if (likely(nb_pkts <= RTE_PMD_IXGBE_TX_MAX_BURST))
>                 return tx_xmit_pkts(tx_queue, tx_pkts, nb_pkts);
>
>         /* transmit more than the max burst, in chunks of TX_MAX_BURST */
>         nb_tx = 0;
>         while (nb_pkts) {
>                 uint16_t ret, n;
>
>                 n = (uint16_t)RTE_MIN(nb_pkts, RTE_PMD_IXGBE_TX_MAX_BURST);
>                 ret = tx_xmit_pkts(tx_queue, &(tx_pkts[nb_tx]), n);
>                 nb_tx = (uint16_t)(nb_tx + ret);
>                 nb_pkts = (uint16_t)(nb_pkts - ret);
>                 if (ret < n)
>                         break;
>         }
>
>         return nb_tx;
> }
>
> To be honest, I don't know whether this piece of code is called when I use rte_eth_tx_burst, but I expect something similar happens when rte_eth_tx_burst dispatches to another transmit function in the ixgbe driver. This while loop in the ixgbe driver code does exactly the same as using a while loop around rte_eth_tx_burst. Yet if I don't use a while loop around rte_eth_tx_burst (with a burst size of 144), many packets are dropped, while with the while loop it seems to work…

I think it is always safe to do the looping in your application, as not all drivers do the looping for you. In the case above I think they were trying to optimize the transfers to the NIC to 32 packets or descriptors for best performance.
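For reference, a minimal sketch of such an application-level loop (the helper name and the give-up policy are assumptions; only rte_eth_tx_burst() and rte_pktmbuf_free() are DPDK API):

#include <rte_ethdev.h>
#include <rte_mbuf.h>

/* Keep calling rte_eth_tx_burst() until every packet has been handed to
 * the NIC, or give up after too many consecutive empty bursts and free
 * the remainder so the mbufs go back to the pool instead of leaking. */
static void
send_all_or_free(uint16_t port, uint16_t queue,
                 struct rte_mbuf **pkts, uint16_t cnt)
{
        unsigned int idle = 0;

        while (cnt > 0) {
                uint16_t sent = rte_eth_tx_burst(port, queue, pkts, cnt);

                if (sent == 0) {
                        if (++idle > 10000) {   /* assumed give-up bound */
                                uint16_t k;

                                for (k = 0; k < cnt; k++)
                                        rte_pktmbuf_free(pkts[k]);
                                return;
                        }
                        continue;
                }
                idle = 0;
                pkts += sent;
                cnt = (uint16_t)(cnt - sent);
        }
}

The bound keeps a wedged queue from stalling the TX core forever; dropping becomes an explicit last resort rather than the default, as it is in the load balancer.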
> I hope you can help me again with finding the best solution to this problem!
>
> Peter
>
> From: Wiles, Keith
> Sent: Saturday, 28 January 2017 23:43
> To: Peter Keereweer
> CC: users@dpdk.org
> Subject: Re: [dpdk-users] What to do after rte_eth_tx_burst: free or send again remaining packets?
>
>> On Jan 28, 2017, at 1:57 PM, Peter Keereweer wrote:
>>
>> Hi!
>>
>> Currently I'm running some tests with the Load Balancer Sample Application, sending packets to it with pktgen. My setup is 2 servers, each with an Intel 10GbE 82599 NIC, connected to each other. I have configured the Load Balancer application to use 1 RX core, 1 worker core and 1 TX core. The TX core sends all packets back to the pktgen application.
>>
>> With pktgen I send 1024 UDP packets to the Load Balancer. Every packet processed by the worker core is printed to the screen (code I added myself). If I send 1024 UDP packets, 1008 (= 7 x 144) packets are printed to the screen. This is correct, because the RX core reads packets with a burst size of 144. So if I send 1024 packets, I expect 1008 packets back in the pktgen application.
>>
>> But surprisingly I only receive 224 packets instead of 1008. After some research I found that 224 is not just a random number: it is 7 x 32. So if the RX core reads 7 x 144 packets, I get back 7 x 32 packets. After digging into the Load Balancer code I found this in the 'app_lcore_io_tx' function in 'runtime.c':
>>
>> n_pkts = rte_eth_tx_burst(
>>                 port,
>>                 0,
>>                 lp->tx.mbuf_out[port].array,
>>                 (uint16_t) n_mbufs);
>>
>> ...
>>
>> if (unlikely(n_pkts < n_mbufs)) {
>>         uint32_t k;
>>         for (k = n_pkts; k < n_mbufs; k ++) {
>>                 struct rte_mbuf *pkt_to_free = lp->tx.mbuf_out[port].array[k];
>>                 rte_pktmbuf_free(pkt_to_free);
>>         }
>> }
>> What I understand from this code is that n_mbufs packets are handed to 'rte_eth_tx_burst', which returns n_pkts, the number of packets actually sent. If the number actually sent is smaller than n_mbufs, all remaining (unsent) packets are freed. In the Load Balancer application n_mbufs is equal to 144, but in my case 'rte_eth_tx_burst' returns 32, not 144. So 32 packets are actually sent and the remaining packets (144 - 32 = 112) are freed. That is why I get back 224 (7 x 32) packets instead of 1008 (= 7 x 144).
>>
>> But the question is: why are the remaining packets freed instead of trying to send them again? If I look into 'pktgen.c', there is a function '_send_burst_fast' where the remaining packets are retried (in a while loop until they have all been sent) instead of freed (see code below):
>>
>> static __inline__ void
>> _send_burst_fast(port_info_t *info, uint16_t qid)
>> {
>>         struct mbuf_table *mtab = &info->q[qid].tx_mbufs;
>>         struct rte_mbuf **pkts;
>>         uint32_t ret, cnt;
>>
>>         cnt = mtab->len;
>>         mtab->len = 0;
>>
>>         pkts = mtab->m_table;
>>
>>         if (rte_atomic32_read(&info->port_flags) & PROCESS_TX_TAP_PKTS) {
>>                 while (cnt > 0) {
>>                         ret = rte_eth_tx_burst(info->pid, qid, pkts, cnt);
>>
>>                         pktgen_do_tx_tap(info, pkts, ret);
>>
>>                         pkts += ret;
>>                         cnt -= ret;
>>                 }
>>         } else {
>>                 while (cnt > 0) {
>>                         ret = rte_eth_tx_burst(info->pid, qid, pkts, cnt);
>>
>>                         pkts += ret;
>>                         cnt -= ret;
>>                 }
>>         }
>> }
>>
>> Why is this while loop (sending until all packets have been sent) not implemented in the 'app_lcore_io_tx' function of the Load Balancer application? That would make sense, right? It looks like the Load Balancer application assumes that if not all packets have been sent, the remaining packets failed during the send process and should be freed.

> The size of the TX ring on the hardware is limited, but you can adjust that size. In pktgen I attempt to send all packets requested to be sent, but in the load balancer the developer decided to just drop the packets that are not sent when the TX hardware ring (or even a SW ring) is full. This normally means the core is producing packets faster than the HW ring on the NIC can send them.
>
> It was just a choice of the developer to drop the packets instead of retrying until the packet array is empty. One possible way to fix this is to make the TX ring 2-4 times larger than the RX ring. This still does not truly solve the problem; it just moves it to the RX ring: if the NIC does not have a valid RX descriptor and a place to DMA the packet into memory, the packet gets dropped at the wire. BTW, increasing the TX ring size also means those packets are not returned to the free pool, and you can exhaust the packet pool: the packets sit on the TX ring as done because the threshold to reclaim the done packets is too high.
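The sort of change described above might look like this (a sketch only; the ring sizes and the tx_free_thresh value are assumptions, not values taken from the load balancer):

#include <rte_ethdev.h>

#define RX_RING_SIZE 512
#define TX_RING_SIZE (4 * RX_RING_SIZE) /* 2-4x the RX ring, per the advice above */

/* Set up a TX queue with a larger ring and a low free threshold, so
 * completed descriptors are reclaimed (and their mbufs returned to the
 * pool) sooner. */
static int
setup_tx_queue(uint16_t port, uint16_t queue, unsigned int socket)
{
        struct rte_eth_dev_info info;
        struct rte_eth_txconf txconf;

        rte_eth_dev_info_get(port, &info);
        txconf = info.default_txconf;
        txconf.tx_free_thresh = 64;     /* assumed value: reclaim done packets earlier */

        return rte_eth_tx_queue_setup(port, queue, TX_RING_SIZE, socket, &txconf);
}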
> Say you have a 1024-entry ring and the high watermark for flushing the done packets off the ring is 900 packets. Then if the packet pool is only 512 packets, when you send 512 packets they will all end up on the TX done queue, and now you are in a deadlock, unable to send a packet because they are all on the TX done ring. This normally does not happen, as the ring sizes are normally much smaller than the number of TX packets or even RX packets.
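In other words, the mbuf pool has to stay comfortably larger than everything the descriptor rings can park. A rough sizing sketch (the formula and every number here are assumptions for illustration):

#include <rte_mbuf.h>

#define RX_RING_SIZE  512
#define TX_RING_SIZE 1024
#define BURST_SIZE     32
#define NB_LCORES       4

/* Size the pool so every RX and TX descriptor can hold an mbuf while each
 * core still has a burst's worth of buffers in flight, plus headroom. */
static struct rte_mempool *
create_pkt_pool(int socket)
{
        unsigned int n = RX_RING_SIZE + TX_RING_SIZE +
                         NB_LCORES * BURST_SIZE + 1024; /* headroom */

        return rte_pktmbuf_pool_create("pkt_pool", n,
                                       256, /* per-lcore cache size */
                                       0,   /* private area size */
                                       RTE_MBUF_DEFAULT_BUF_SIZE,
                                       socket);
}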
> In pktgen I attempt to send all of the packets requested, as it does not make any sense for the user to ask to send 10000 packets and have pktgen send some smaller number because the sending core overran the TX queue at some point.
>
> I hope that helps.
>
>> I hope someone can help me with these questions. Thank you in advance!!
>>
>> Peter
>
> Regards,
> Keith

Regards,
Keith