Yes, the same NIC works on x86. By "seeing the link," I assume you mean the
ports that are used for TRex. If so, they are UP before I start TRex.

Thanks,
David

On Thu, Jan 14, 2021 at 1:11 AM Ajit Khaparde wrote:

> Does the same NIC work on x86?
> Also, do you see link before you start TRex? I am trying to see whether the
> port failed to come up successfully or whether there is a problem with the
> link. Can you check and tell?
>
> Thanks
> Ajit
>
> On Wed, Jan 13, 2021 at 4:20 PM David Liu wrote:
>
>> Hi Ajit,
>>
>> Thank you for helping out.
>>
>> We have it working on x86, but not on Arm.
>>
>> Thanks,
>> David
>>
>> On Wed, Jan 13, 2021 at 5:34 PM Ajit Khaparde wrote:
>>
>>> Hi David,
>>> I will take a look.
>>> Do you see similar issues on x86? I am asking because I would start
>>> with that to create a baseline and then attempt Arm.
>>>
>>> Thanks
>>> Ajit
>>>
>>> On Wed, Jan 13, 2021 at 12:08 PM David Liu wrote:
>>>
>>>> Hi Ajit,
>>>>
>>>> We have a 25G Broadcom NIC installed on an Arm machine, and I am
>>>> running into a problem when I try to run nic_single_core_perf on it.
>>>> Currently we are using TRex v2.86 and DTS from
>>>> http://git.dpdk.org/tools/dts/
>>>> All the NICs are up and running when testing.
>>>>
>>>> The problem is an error when I run the test case inside
>>>> nic_single_core_perf:
>>>>
>>>>> TestNicSingleCorePerf: Test running at parameters: framesize: 64,
>>>>> rxd/txd: 512
>>>>> dut.172.18.0.41:
>>>>> x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 32,33 -n 4
>>>>> --file-prefix=dpdk_15311_20210113190237 -a 0000:93:00.0 -a 0000:93:00.1
>>>>> -- -i --portmask=0x3 --txd=512 --rxd=512 --nb-cores=1
>>>>> dut.172.18.0.41: start
>>>>> TestNicSingleCorePerf: Test Case test_perf_nic_single_core
>>>>> Result ERROR: Traceback (most recent call last):
>>>>>   File "/opt/dts/framework/test_case.py", line 319, in _execute_test_case
>>>>>     case_obj()
>>>>>   File "tests/TestSuite_nic_single_core_perf.py", line 196, in test_perf_nic_single_core
>>>>>     self.perf_test(self.nb_ports)
>>>>>   File "tests/TestSuite_nic_single_core_perf.py", line 270, in perf_test
>>>>>     _, packets_received = self.tester.pktgen.measure_throughput(stream_ids=streams, options=traffic_opt)
>>>>>   File "/opt/dts/framework/pktgen_base.py", line 245, in measure_throughput
>>>>>     self._prepare_transmission(stream_ids=stream_ids)
>>>>>   File "/opt/dts/framework/pktgen_trex.py", line 778, in _prepare_transmission
>>>>>     self._conn.reset(ports=self._ports)
>>>>>   File "/opt/trex/v2.86/automation/trex_control_plane/interactive/trex/common/trex_api_annotators.py", line 51, in wrap2
>>>>>     ret = f(*args, **kwargs)
>>>>>   File "/opt/trex/v2.86/automation/trex_control_plane/interactive/trex/stl/trex_stl_client.py", line 339, in reset
>>>>>     self.clear_stats(ports)
>>>>>   File "/opt/trex/v2.86/automation/trex_control_plane/interactive/trex/common/trex_api_annotators.py", line 51, in wrap2
>>>>>     ret = f(*args, **kwargs)
>>>>>   File "/opt/trex/v2.86/automation/trex_control_plane/interactive/trex/stl/trex_stl_client.py", line 1492, in clear_stats
>>>>>     self._clear_stats_common(ports, clear_global, clear_xstats)
>>>>>   File "/opt/trex/v2.86/automation/trex_control_plane/interactive/trex/common/trex_client.py", line 2876, in _clear_stats_common
>>>>>     raise TRexError(rc)
>>>>> trex.common.trex_exceptions.TRexError: *** [RPC] - Failed to get server response from
>>>>> tcp://172.18.0.40:4501
>>>>
>>>> I also tried to run TRex in stateless mode with ./t-rex-64 -i --cfg
>>>> /etc/trex_cfg.yaml and to connect with ./trex-console.
>>>>
>>>> Then I start sending traffic with the command
>>>>
>>>>> trex>start -f stl/imix.py
>>>>
>>>> But this causes an error that shuts TRex down.
>>>>
>>>> trex>
>>>>> 2021-01-13 19:56:21 - [server][warning] - Server has been shutdown -
>>>>> cause: 'WATCHDOG: task 'Trex DP core 1' has not responded for more than
>>>>> 1.00135 seconds - timeout is 1 seconds
>>>>> *** traceback follows ***
>>>>> 1 0x561be173cf5a ./_t-rex-64(+0x19af5a) [0x561be173cf5a]
>>>>> 2 0x7feea3ac0980 /lib/x86_64-linux-gnu/libpthread.so.0(+0x12980) [0x7feea3ac0980]
>>>>> 3 0x561be1a05a2a rte_delay_us_block + 106
>>>>> 4 0x561be16ce874 CCoreEthIF::send_burst(CCorePerPort*, unsigned short, CVirtualIFPerSideStats*) + 3220
>>>>> 5 0x561be16abf57 CCoreEthIF::flush_tx_queue() + 31
>>>>> 6 0x561be171e3d9 CNodeGenerator::handle_maintenance(CFlowGenListPerThread*) + 265
>>>>> 7 0x561be171f7ec CNodeGenerator::handle_flow_sync(CGenNode*, CFlowGenListPerThread*, bool&) + 92
>>>>> 8 0x561be171fc98 CNodeGenerator::handle_slow_messages(unsigned char, CGenNode*, CFlowGenListPerThread*, bool) + 184
>>>>> 9 0x561be16cb5f1 int CNodeGenerator::flush_file_realtime<23, false>(double, double, CFlowGenListPerThread*, double&) + 881
>>>>> 10 0x561be1905212 TrexStatelessDpCore::start_scheduler() + 226
>>>>> 11 0x561be1883ae9 TrexDpCore::start() + 89
>>>>> 12 0x561be1714113 CFlowGenListPerThread::start(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >&, CPreviewMode&) + 115
>>>>> 13 0x561be16af8dd CGlobalTRex::run_in_core(unsigned char) + 487
>>>>> 14 0x561be16d11ad ./_t-rex-64(+0x12f1ad) [0x561be16d11ad]
>>>>> 15 0x561be1a1dfaa eal_thread_loop + 426
>>>>> 16 0x7feea3ab56db /lib/x86_64-linux-gnu/libpthread.so.0(+0x76db) [0x7feea3ab56db]
>>>>> 17 0x7feea2a8571f clone + 63
>>>>>
>>>>> *** addr2line information follows ***
>>>>> ??:0
>>>>> ??:0
>>>>> ??:0
>>>>> ??:0
>>>>> ??:0
>>>>> ??:0
>>>>> ??:0
>>>>> ??:0
>>>>> ??:0
>>>>> ??:0
>>>>> ??:0
>>>>> ??:0
>>>>> ??:0
>>>>> ??:0
>>>>> ??:0
>>>>> ??:0
>>>>> ??:0
>>>>> '
>>>>
>>>> I believe that if everything were working, this would not shut TRex down,
>>>> but please correct me if I am wrong.
>>>> Do you have any suggestions on how I can approach this issue?
>>>>
>>>> Best,
>>>> David
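
P.S. For the [RPC] timeout in the DTS run above, one way to separate the DTS
side from the TRex side is to drive the same interactive client that
/opt/dts/framework/pktgen_trex.py uses from a few standalone lines of Python.
This is only a sketch, not a verified recipe: the sys.path entry, server
address, and port IDs below are assumptions taken from the paths in the
traceback, not from a checked setup.

    import sys

    # TRex interactive client libraries; path assumed from the traceback above.
    sys.path.insert(0, '/opt/trex/v2.86/automation/trex_control_plane/interactive')

    from trex.stl.api import STLClient
    from trex.common.trex_exceptions import TRexError

    # Server address assumed from the DTS log; sync RPC port defaults to 4501.
    client = STLClient(server='172.18.0.40')

    try:
        client.connect()             # RPC handshake with the TRex server
        client.reset(ports=[0, 1])   # the same call that fails inside pktgen_trex.py
        print('RPC server reachable; ports 0 and 1 acquired and reset')
    except TRexError as err:
        print('TRex RPC failure:', err)
    finally:
        if client.is_connected():
            client.disconnect()

If this standalone reset also times out, the problem sits between the tester
and the TRex server (or in the server's DP cores) rather than in the DTS suite
itself.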
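For reference, a minimal /etc/trex_cfg.yaml for a two-port stateless setup
usually looks like the sketch below. The PCI addresses are the ones DTS passes
to testpmd (0000:93:00.0 / 0000:93:00.1); the IP and gateway values are
placeholders, not taken from the setup described above.

    - version: 2
      interfaces: ["93:00.0", "93:00.1"]   # PCI addresses of the two 25G ports
      port_limit: 2
      c: 1                                 # number of DP cores per port pair
      port_info:
        - ip: 1.1.1.1                      # placeholder addressing, adjust to the topology
          default_gw: 2.2.2.2
        - ip: 2.2.2.2
          default_gw: 1.1.1.1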