
Questions about imissed errors when receiving packets with DPDK using the bonding PMD.


I used the DPDK bonding PMD (rte_eth_bond_api) to receive packets from a bonded port made up of four 82599EB 10 Gbps ports and then sent them back out to the network through another bonded port.
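
Roughly, the bonded RX port is created with the bonding API as in the sketch below (simplified; the bonding mode, device name and port ids are illustrative placeholders, and the exact signatures vary slightly between DPDK versions):

#include <rte_ethdev.h>
#include <rte_eth_bond.h>

/* Simplified sketch: create a bonded device and attach the four physical
 * 82599EB ports to it. BONDING_MODE_BALANCE and the device name are only
 * examples; afterwards the bonded port is configured and started like any
 * other ethdev. */
static int setup_bonded_port(uint16_t slave_ids[4], uint8_t socket_id)
{
    int bond_port = rte_eth_bond_create("net_bonding_rx",
                                        BONDING_MODE_BALANCE, socket_id);
    int i;

    if (bond_port < 0)
        return bond_port;

    for (i = 0; i < 4; i++) {
        if (rte_eth_bond_slave_add(bond_port, slave_ids[i]) != 0)
            return -1;
    }
    return bond_port;
}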

 

When the traffic reached 20 Gbps (about 5 Gbps per physical port), I found that some packets were dropped by the hardware every time because there were no free mbufs in the RX rings; these drops are reported as imissed errors in struct rte_eth_stats.
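
The drops show up in the per-port statistics; I read the counter roughly as in the snippet below (illustrative, not my exact code):

#include <inttypes.h>
#include <stdio.h>
#include <rte_ethdev.h>

/* Illustrative: dump the imissed counter of the bonded port. imissed counts
 * packets dropped in hardware because the RX descriptor ring had no free
 * mbufs, i.e. the receive side was not refilled fast enough. */
static void print_imissed(uint16_t port_id)
{
    struct rte_eth_stats stats;

    if (rte_eth_stats_get(port_id, &stats) == 0)
        printf("port %u: ipackets=%" PRIu64 " imissed=%" PRIu64 "\n",
               port_id, stats.ipackets, stats.imissed);
}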

 

I use 4 cores with RSS, set nb_desc to the maximum ring length, and also use the vector PMD for high performance, but it does not help.
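
The RX configuration looks roughly like the sketch below (simplified, RX side only; the RSS hash fields are examples, nb_desc is taken from the device limit, which is 4096 on 82599/ixgbe, and newer DPDK releases prefix the ETH_* macros with RTE_):

#include <rte_ethdev.h>

#define NB_RX_QUEUES 4   /* one RSS queue per core */

/* Simplified sketch of the RX setup: 4 RSS queues, descriptor rings sized to
 * the device maximum. TX queue setup and error handling are omitted. */
static int configure_rx(uint16_t port_id, struct rte_mempool *mb_pool)
{
    struct rte_eth_conf port_conf = {
        .rxmode = { .mq_mode = ETH_MQ_RX_RSS },
        .rx_adv_conf.rss_conf = {
            .rss_hf = ETH_RSS_IP | ETH_RSS_TCP | ETH_RSS_UDP,
        },
    };
    struct rte_eth_dev_info dev_info;
    uint16_t nb_desc;
    int ret, q;

    rte_eth_dev_info_get(port_id, &dev_info);
    nb_desc = dev_info.rx_desc_lim.nb_max;   /* max ring size, e.g. 4096 */

    ret = rte_eth_dev_configure(port_id, NB_RX_QUEUES, NB_RX_QUEUES, &port_conf);
    if (ret < 0)
        return ret;

    for (q = 0; q < NB_RX_QUEUES; q++) {
        ret = rte_eth_rx_queue_setup(port_id, q, nb_desc,
                                     rte_eth_dev_socket_id(port_id),
                                     NULL, mb_pool);
        if (ret < 0)
            return ret;
    }
    return 0;
}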

 

How can I avoid imissed errors? Could the DPDK bonding driver have an influence on performance?

 

Thanks.

