There comes a time in programming when one has to start paying attention to performance. While this is true in many cases, two areas stand out as especially important: parallel processing and packet captures, and even more so when doing both at once. In this article we'll keep the latter in mind, working with jNetPcap, a Java wrapper for libpcap able to do around 60 Kpps per instance.

First of all, I found an excellent post on performance tuning jNetPcap. There is also a good implementation example for moving to the much faster ``JBufferHandler`` [1].

One should take note of the ring buffer, that is, how much memory you have available to temporarily store packets when there is a lot of traffic. Usually this may be e.g. 453k, while the maximum can be 4M (a ring size of 4078, as it was in my case). For tuning this on Red Hat one may inspect it with ``ethtool -g eth0`` and adjust it with ``ethtool -G eth0 rx 4078``. Larger buffers result in higher throughput, but also higher latency (which is not that important when doing packet captures). More on ethtool and ring buffer adjustments here.

When it comes to jNetPcap, the following is an example implementing it as an Apache Flume source [2]:

```java
@Override
public void start() {
    final ChannelProcessor channel = getChannelProcessor();

    JBufferHandler<ChannelProcessor> jpacketHandler = new JBufferHandler<ChannelProcessor>() {

        public void nextPacket(PcapHeader pcapHeader, JBuffer packet, ChannelProcessor channelProcessor) {
            // The JBuffer references the captured packet directly; copy the
            // raw bytes out of it before handing them on.
            int size = packet.size();
            JBuffer buffer = packet;
            byte[] packetBytes = buffer.getByteArray(0, size);

            // Wrap the bytes in a Flume event and pass it to the channel
            // (channelProcessor is the same object as the outer channel,
            // since it is passed as the user argument to loop() below).
            Event flumeEvent = EventBuilder.withBody(packetBytes);
            channel.processEvent(flumeEvent);
        }
    };

    super.start();
    // pcap is assumed to be a field opened with Pcap.openLive(); note that
    // loop(-1, ...) blocks until the capture is stopped.
    pcap.loop(-1, jpacketHandler, channel);
}
```

The above shows a slightly different version of the most well-documented example (``PcapHandler``) [3]. You should choose the one above since it is much faster due to the packet referencing: the handler is handed a buffer referencing the captured data rather than a fully decoded packet. I tested it at one site and performance increased drastically in terms of reduced packet loss on the software side of things.

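To show where ``pcap`` might come from and how the capture could be shut down cleanly, here is a minimal, self-contained sketch of a surrounding Flume source class. This is my own illustration rather than code from the post above: the class name, device name, snaplen and timeout are placeholders, the import paths are those of jNetPcap 1.x and Flume 1.x, and since ``pcap.loop()`` blocks, the sketch runs it on a separate thread and breaks out of it in ``stop()``.

```java
import org.apache.flume.Context;
import org.apache.flume.channel.ChannelProcessor;
import org.apache.flume.conf.Configurable;
import org.apache.flume.event.EventBuilder;
import org.apache.flume.source.AbstractSource;
import org.jnetpcap.JBufferHandler;
import org.jnetpcap.Pcap;
import org.jnetpcap.PcapHeader;
import org.jnetpcap.nio.JBuffer;

// Sketch only: names and capture parameters are illustrative placeholders.
public class PcapFlumeSource extends AbstractSource implements Configurable {

    private Pcap pcap;
    private Thread captureThread;

    @Override
    public void configure(Context context) {
        StringBuilder errbuf = new StringBuilder();
        // 64 KiB snaplen, promiscuous mode, 1 s read timeout -- placeholder values
        pcap = Pcap.openLive("eth0", 65536, Pcap.MODE_PROMISCUOUS, 1000, errbuf);
        if (pcap == null) {
            throw new IllegalStateException("Could not open capture device: " + errbuf);
        }
    }

    @Override
    public void start() {
        final ChannelProcessor channel = getChannelProcessor();

        // Condensed version of the JBufferHandler from the excerpt above
        final JBufferHandler<ChannelProcessor> handler = new JBufferHandler<ChannelProcessor>() {
            public void nextPacket(PcapHeader header, JBuffer packet, ChannelProcessor cp) {
                cp.processEvent(EventBuilder.withBody(packet.getByteArray(0, packet.size())));
            }
        };

        super.start();
        // loop() blocks, so run the capture on its own thread
        captureThread = new Thread(new Runnable() {
            public void run() {
                pcap.loop(-1, handler, channel);
            }
        });
        captureThread.start();
    }

    @Override
    public void stop() {
        pcap.breakloop();          // makes the blocking loop() call return
        try {
            captureThread.join();  // wait for the capture thread to finish
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        pcap.close();
        super.stop();
    }
}
```

In a real source you would of course read the device and capture parameters from the Flume ``Context`` instead of hard-coding them.
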
Last but not least, in order to do software-side performance monitoring, you might want to add statistics collection to your jNetPcap capture. This is mentioned in the jNetPcap forums as well [4]:

> You can also use PcapStat to see if libpcap is dropping any
> packets. If the buffer becomes full and libpcap can't store a
> packet, it will record it in statistics. This is different from
> the NIC dropping packets.

This may be implemented in the configuration as shown here:

```java
// device, SNAPLEN, timeout and errbuf are assumed to be defined elsewhere
PcapStat stats = new PcapStat();
pcap = Pcap.openLive(device.getName(), SNAPLEN, Pcap.MODE_PROMISCUOUS, timeout, errbuf);
// stats() fills the PcapStat object with the current capture counters
pcap.stats(stats);
```

You can get the stats with the following:

```java
System.out.printf("drop=%d, ifDrop=%d\n", stats.getDrop(), stats.getIfDrop());
```

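The counters are cumulative, so they are most useful when sampled repeatedly while the capture runs. As a small sketch of my own (not from the forum post), ``pcap.stats()`` could be polled periodically on a background thread, assuming ``pcap`` is the already opened handle from above:

```java
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

import org.jnetpcap.Pcap;
import org.jnetpcap.PcapStat;

// Sketch: print libpcap's capture counters every 10 seconds.
public final class CaptureStatsMonitor {

    public static ScheduledExecutorService start(final Pcap pcap) {
        final PcapStat stats = new PcapStat();
        ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();

        scheduler.scheduleAtFixedRate(new Runnable() {
            public void run() {
                pcap.stats(stats);  // refresh the counters from libpcap
                System.out.printf("drop=%d, ifDrop=%d%n", stats.getDrop(), stats.getIfDrop());
            }
        }, 10, 10, TimeUnit.SECONDS);

        return scheduler;  // shut this down when the capture stops
    }
}
```

If ``drop`` keeps climbing while ``ifDrop`` stays flat, the loss is happening in libpcap's buffer rather than on the NIC, which matches the distinction made in the forum quote above.
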
Hope this gets you up and running smoothly; tuning packet captures in combination with parallel computing is a challenge.

To get some more context, you may also like to have a look at the presentation that Cisco did on OpenSOC, which shows how this can be done at scale.

[1] http://jnetpcap.com/node/67
[2] http://flume.apache.org/
[3] http://jnetpcap.com/examples/dumper
[4] http://jnetpcap.com/node/704