* [dpdk-users] Defining multiple actions based on a field value for the same table lookup in ip_pipeline application
From: Nidhia Varghese @ 2017-05-18 12:23 UTC
To: users, dev

Hi all,

I am trying to design a pipeline where the table lookup has to perform the following actions:

  Lookup hit  -> forward to port
  Lookup miss -> add a table entry if the incoming port was 1;
                 drop if the incoming port was 2

Is it possible to implement this using the ip_pipeline application? If so, how can I do that?

Thanks for your reply and help.

Thanks,
Nidhia Varghese
* Re: [dpdk-users] [dpdk-dev] Defining multiple actions based on a field value for the same table lookup in ip_pipeline application
From: Shyam Shrivastav @ 2017-05-18 14:04 UTC
To: Nidhia Varghese; +Cc: users, dev

For each table, a lookup hit and a lookup miss action handler can be registered; have a look at rte_pipeline_run(struct rte_pipeline *p) for the semantics.

On Thu, May 18, 2017 at 5:53 PM, Nidhia Varghese <nidhiavarghese93@gmail.com> wrote:
> Hi all,
>
> I am trying to design a pipeline where the table lookup has to do the
> actions as follows:
>
> Lookup hit  -> FWD to port
> Lookup miss -> add a table entry if incoming port was 1
>                drop if incoming port was 2
> [...]
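The registration Shyam refers to happens at table creation time, via the f_action_hit/f_action_miss fields of the table parameters. A minimal sketch (assuming a hash table; p, hash_params, the handler names, and app_ctx are illustrative, not from the thread):

```c
#include <rte_pipeline.h>
#include <rte_table_hash.h>

/* Assumed to exist elsewhere in the application:
 *   p (struct rte_pipeline *), hash_params,
 *   app_hit_handler, app_miss_handler, app_ctx. */
struct rte_pipeline_table_params table_params = {
	.ops = &rte_table_hash_ext_ops,    /* any table type works */
	.arg_create = &hash_params,
	.f_action_hit = app_hit_handler,   /* invoked for lookup hits */
	.f_action_miss = app_miss_handler, /* invoked for lookup misses */
	.arg_ah = &app_ctx,                /* opaque arg passed to handlers */
	.action_data_size = 0,
};

uint32_t table_id;
int status = rte_pipeline_table_create(p, &table_params, &table_id);
```

rte_pipeline_run() then calls f_action_hit and f_action_miss on the hit and miss subsets of each input burst.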
* Re: [dpdk-users] [dpdk-dev] Defining multiple actions based on a field value for the same table lookup in ip_pipeline application
From: Nidhia Varghese @ 2017-05-19 5:23 UTC
To: Shyam Shrivastav; +Cc: users, dev

Thanks for your reply, Shyam.

Yes, I went through it. But my doubt is whether it is possible to define two different miss actions based on some decision criterion (the input port, in my case):

  Miss -> Case 1: add an entry to the table if port id = 1
       -> Case 2: drop if port id = 2

On Thu, May 18, 2017 at 7:34 PM, Shyam Shrivastav <shrivastav.shyam@gmail.com> wrote:
> For each table a lookup hit and miss function can be registered, have a
> look at rte_pipeline_run(struct rte_pipeline *p) for the semantics

Thanks,
Nidhia Varghese
* Re: [dpdk-users] [dpdk-dev] Defining multiple actions based on a field value for the same table lookup in ip_pipeline application
From: Shyam Shrivastav @ 2017-05-19 5:37 UTC
To: Nidhia Varghese; +Cc: users, dev

That can be done in the same miss function, as I understand it:

  if (condition1) { ... } else { ... }

On Fri, May 19, 2017 at 10:53 AM, Nidhia Varghese <nidhiavarghese93@gmail.com> wrote:
> Thanks for your reply Shyam.
> Yes, I went through it. But my doubt is whether it is possible to define
> two different miss actions based on some decision criteria (input port in
> my case).
>
> Miss -> Case 1: Add entry to table if portid = 1
>      -> Case 2: Drop if portid = 2
> [...]
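Shyam's "if/else in the same miss function" can be sketched as a miss action handler that branches on the mbuf's input port. This is an illustrative sketch against the librte_pipeline API, not code from the thread; the learn step is left as a comment because the key layout depends on the application's table type:

```c
#include <rte_pipeline.h>
#include <rte_mbuf.h>

/* Illustrative miss handler: learn on input port 1, drop on port 2.
 * pkts_mask has one bit set per valid packet in the burst. */
static int
app_miss_handler(struct rte_pipeline *p, struct rte_mbuf **pkts,
		 uint64_t pkts_mask, struct rte_pipeline_table_entry *entry,
		 void *arg)
{
	uint64_t drop_mask = 0;
	uint32_t i;

	(void)entry;
	(void)arg;

	for (i = 0; i < 64; i++) {
		if ((pkts_mask & (1LLU << i)) == 0)
			continue;

		if (pkts[i]->port == 1) {
			/* Build a key from pkts[i], then add an entry with
			 * rte_pipeline_table_entry_add(); details depend on
			 * the table type and key layout. */
		} else if (pkts[i]->port == 2) {
			drop_mask |= 1LLU << i;
		}
	}

	/* Remove the selected packets from the burst. */
	rte_pipeline_ah_packet_drop(p, drop_mask);
	return 0;
}
```

The handler is registered as f_action_miss in the table parameters, so it runs once per burst for the miss subset.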
* [dpdk-users] Rx Can't receive anymore packet after received 1.5 billion packet.
From: vuonglv @ 2017-07-17 2:03 UTC
Cc: users, dev

Hi DPDK team,

Sorry for sending this email to both the users and dev groups, but I have a big problem: the Rx core in my application cannot receive any more packets after a stress test (~1 day, during which the Rx core received ~1.5 billion packets). The Rx core is still alive, but it does not receive any packets and does not generate any log. Below is my system configuration:

- OS: CentOS 7
- Kernel: 3.10.0-514.16.1.el7.x86_64
- Huge pages: 32G (16384 pages of 2M)
- NIC: Intel 82599
- DPDK version: 16.11
- Architecture: Rx (lcore 1) receives packets and enqueues them to a ring;
  Worker (lcore 2) dequeues packets from the ring and frees them (using
  rte_pktmbuf_free()).
- Mempool creation:

    rte_pktmbuf_pool_create(
        "rx_pool",                 /* name */
        8192,                      /* number of elements in the mbuf pool */
        256,                       /* size of per-core object cache */
        0,                         /* size of application private area between
                                      rte_mbuf struct and data buffer */
        RTE_MBUF_DEFAULT_BUF_SIZE, /* size of data buffer in each mbuf
                                      (2048 + 128) */
        0                          /* socket id */
    );

If I change the number of elements in the mbuf pool from 8192 to 512, Rx shows the same problem after a shorter time (~30 s).

Please tell me if you need more information. I am looking forward to hearing from you.

Many thanks,
Vuong Le
* Re: [dpdk-users] [dpdk-dev] Rx Can't receive anymore packet after received 1.5 billion packet.
From: Dumitrescu, Cristian @ 2017-07-17 10:31 UTC
To: vuonglv; +Cc: users, dev

> Hi DPDK team,
> Sorry for sending this email to both the users and dev groups, but I have
> a big problem: the Rx core in my application cannot receive any more
> packets after a stress test (~1 day, ~1.5 billion packets received).
> [...]
> If I change the number of elements in the mbuf pool from 8192 to 512, Rx
> shows the same problem after a shorter time (~30 s).

Hi Vuong,

This is likely to be a buffer leakage problem. You might have a path in your code where you are not freeing a buffer; that buffer gets "lost", as the application can no longer use it since it is never returned to the pool, so the pool of free buffers shrinks over time until it eventually becomes empty and no more packets can be received.

You might want to periodically monitor the number of free buffers in your pool. If this is the root cause, you should see this number constantly decreasing until it hits flat zero; otherwise you should see the number of free buffers oscillating around an equilibrium point.

Since it takes a relatively large number of packets to hit this issue, it is likely that the code path with the problem is not executed very frequently: it might be a control plane packet that is not freed, an ARP request/reply packet, etc.

Regards,
Cristian
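Cristian's monitoring suggestion can be sketched with the mempool accounting API (available since DPDK 16.07; the helper below is illustrative, not from the thread):

```c
#include <stdio.h>
#include <rte_mempool.h>

/* Print free/in-use mbuf counts for a pool.  With a leak, the free
 * count decays toward zero over time; in a healthy application it
 * oscillates around an equilibrium.  Counts are approximate while
 * other lcores are enqueuing/dequeuing, since per-core caches are
 * scanned without locking. */
static void
log_pool_usage(const struct rte_mempool *mp)
{
	printf("pool %s: %u free, %u in use\n", mp->name,
	       rte_mempool_avail_count(mp),
	       rte_mempool_in_use_count(mp));
}
```

Calling this once per second from a management lcore is usually enough to tell a leak (monotonic decline) from normal operation.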
* Re: [dpdk-users] [dpdk-dev] Rx Can't receive anymore packet after received 1.5 billion packet.
From: vuonglv @ 2017-07-18 1:36 UTC
To: cristian.dumitrescu; +Cc: users, dev

On 07/17/2017 05:31 PM, cristian.dumitrescu@intel.com wrote:
> This is likely to be a buffer leakage problem. You might have a path in
> your code where you are not freeing a buffer [...]
> You might want to periodically monitor the number of free buffers in
> your pool [...]

Hi Cristian,

Thanks for your response; I am trying your idea. But let me show you another case I tested before. I changed the architecture of my application as follows:

- New architecture: Rx (lcore 1) receives packets and enqueues them to the
  ring; after that, Rx (lcore 1) itself dequeues the packets from the ring
  and frees them immediately.
  (The old architecture is as described above: a separate Worker lcore
  dequeues and frees.)

With the new architecture, Rx was still receiving packets after 2 days and everything looked good. Unfortunately, my application must run with the old architecture.

Any ideas for me?

Many thanks,
Vuong Le
* Re: [dpdk-users] [dpdk-dev] Rx Can't receive anymore packet after received 1.5 billion packet.
From: Dumitrescu, Cristian @ 2017-07-19 18:43 UTC
To: vuonglv; +Cc: users, dev

> Thanks for your response; I am trying your idea. But let me show you
> another case I tested before. I changed the architecture of my
> application: Rx (lcore 1) receives packets, enqueues them to the ring,
> and then dequeues and frees them immediately itself. With this new
> architecture, Rx was still receiving packets after 2 days.
> [...]
> Any ideas for me?

I am not sure I understand the old architecture and the new architecture you are referring to; can you please clarify them?

Regards,
Cristian
* Re: [dpdk-users] [dpdk-dev] Rx Can't receive anymore packet after received 1.5 billion packet.
From: vuonglv @ 2017-07-21 10:49 UTC
To: cristian dumitrescu; +Cc: users, dev

> I am not sure I understand the old architecture and the new architecture
> you are referring to; can you please clarify them?

Hi Cristian,

I have found my problem. It was caused by creating the mempool on socket 1 while creating the ring on socket 0 (I had not set up huge pages for socket 0). This was my mistake. But I don't understand why the ring was still created by the system on socket 0, where I had not set up huge pages.

Thanks for your support.

Many thanks,
Vuong Le
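The root cause Vuong describes — mempool and ring created on different NUMA sockets, one of which has no huge pages — can be avoided by passing the same socket id to both creation calls. A sketch under that assumption (sizes, names, and flags are illustrative, not from the thread):

```c
#include <stddef.h>
#include <rte_lcore.h>
#include <rte_mbuf.h>
#include <rte_ring.h>

/* Must run after rte_eal_init().  Create both objects on the same NUMA
 * socket -- here, the socket of the calling lcore -- so neither lands
 * on a node without huge-page memory. */
static int
setup_rx_resources(struct rte_mempool **pool, struct rte_ring **ring)
{
	int socket = rte_socket_id();

	*pool = rte_pktmbuf_pool_create("rx_pool", 8192, 256, 0,
					RTE_MBUF_DEFAULT_BUF_SIZE, socket);
	*ring = rte_ring_create("rx_ring", 4096, socket,
				RING_F_SP_ENQ | RING_F_SC_DEQ);

	/* Either call fails (returns NULL) if the chosen socket has no
	 * huge-page memory; rte_errno holds the reason. */
	return (*pool != NULL && *ring != NULL) ? 0 : -1;
}
```

Checking both return values at startup catches the socket/huge-page mismatch immediately instead of after a long stress run.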
end of thread, other threads: [~2017-07-21 10:48 UTC | newest]

Thread overview: 9+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2017-05-18 12:23 [dpdk-users] Defining multiple actions based on a field value for the same table lookup in ip_pipeline application Nidhia Varghese
2017-05-18 14:04 ` [dpdk-users] [dpdk-dev] " Shyam Shrivastav
2017-05-19  5:23   ` Nidhia Varghese
2017-05-19  5:37     ` Shyam Shrivastav
2017-07-17  2:03   ` [dpdk-users] Rx Can't receive anymore packet after received 1.5 billion packet vuonglv
2017-07-17 10:31     ` [dpdk-users] [dpdk-dev] " Dumitrescu, Cristian
2017-07-18  1:36       ` vuonglv
2017-07-19 18:43         ` Dumitrescu, Cristian
2017-07-21 10:49           ` vuonglv