I am using the DPDK bonding PMD (rte_eth_bond_api) to receive packets from a bonded port made up of four 82599EB 10 Gbps ports, and then forward them to the network through another bonded port.
When the traffic reaches 20 Gbps (about 5 Gbps per physical port), some packets are consistently dropped by the hardware because no mbufs are available in the RX rings; these drops show up as the imissed counter in struct rte_eth_stats.
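This is roughly how I read the drop counters (a minimal sketch; bond_port_id is just a placeholder for my RX bonded port):

#include <inttypes.h>
#include <stdio.h>
#include <rte_ethdev.h>

/* Print RX drop counters for a port (bond_port_id is a placeholder). */
static void
print_rx_drops(uint16_t bond_port_id)
{
        struct rte_eth_stats stats;

        rte_eth_stats_get(bond_port_id, &stats);
        printf("ipackets=%" PRIu64 " imissed=%" PRIu64 " rx_nombuf=%" PRIu64 "\n",
               stats.ipackets, stats.imissed, stats.rx_nombuf);
}

ipackets keeps increasing, but imissed grows steadily as well once the load reaches about 20 Gbps.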
I use 4 cores with RSS, set nb_desc to the maximum ring size, and use the vector PMD (vPMD) for higher performance, but it does not help.
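For context, the RX bonded port and queues are set up roughly like this (a simplified sketch rather than my exact code: the bonding mode, device name, port IDs, descriptor count, socket ID and mbuf pool are illustrative):

#include <rte_ethdev.h>
#include <rte_eth_bond.h>
#include <rte_mempool.h>

#define NB_RX_QUEUES 4        /* one RSS queue per core */
#define NB_RXD       4096     /* illustrative; I use the maximum the NIC supports */

/* Simplified setup of the RX bonded port. Slave port IDs, socket ID and
 * mbuf pool are placeholders; the bonding mode may differ in my code. */
static int
setup_bond_rx(uint16_t slaves[4], struct rte_mempool *mbuf_pool)
{
        struct rte_eth_conf port_conf = {
                .rxmode = { .mq_mode = ETH_MQ_RX_RSS },
                .rx_adv_conf = {
                        .rss_conf = { .rss_hf = ETH_RSS_IP | ETH_RSS_TCP | ETH_RSS_UDP },
                },
        };
        int bond_port;
        uint16_t q;
        int i;

        bond_port = rte_eth_bond_create("net_bonding0", BONDING_MODE_BALANCE, 0);
        if (bond_port < 0)
                return -1;

        for (i = 0; i < 4; i++)
                rte_eth_bond_slave_add(bond_port, slaves[i]);

        rte_eth_dev_configure(bond_port, NB_RX_QUEUES, 1, &port_conf);

        for (q = 0; q < NB_RX_QUEUES; q++)
                rte_eth_rx_queue_setup(bond_port, q, NB_RXD, 0 /* socket */,
                                       NULL, mbuf_pool);

        rte_eth_tx_queue_setup(bond_port, 0, 512, 0, NULL);

        return rte_eth_dev_start(bond_port) == 0 ? bond_port : -1;
}

Each of the 4 cores then polls one RX queue of the bonded port in its main loop.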
How can I avoid these imissed errors? Could the DPDK bonding driver itself be affecting performance?
Thanks.