
Life with ELK Packetbeat Part 1 Overview

We were using WSO2 DAS as our analytics tool and faced a lot of performance issues, which pushed us to look for an alternative. We settled on the Elastic Stack, which was free to use.

We were using WSO2 API Manager and WSO2 ESB to handle traffic between two external parties.

Our initial DAS implementation captured only responses, and our customers were asking for requests as well. We would have had a hell of a lot of work to overcome that problem if we were to stay with WSO2 DAS.

The Elastic Stack mainly consists of:
  • Filebeat for reading data from files/logs
  • Logstash for processing logs/data
  • Elasticsearch for storing data
  • Kibana for presenting data 
So our initial plan was to take whatever logs our application produced and send them to the Elasticsearch server using Filebeat. Then we realised our application doesn't print all the required data to its log files, so we would not be able to get enough data for analysis.
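For reference, that original plan would have looked roughly like the filebeat.yml sketch below; the log path is purely illustrative, not our actual one.

  filebeat.prospectors:                    # "prospectors" is the 6.2 name; later releases call these inputs
  - type: log
    paths:
      - /opt/wso2/repository/logs/*.log    # illustrative path to the application log files

  output.elasticsearch:
    hosts: ["localhost:9200"]              # ship the log lines straight to Elasticsearch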

Then we found another tool in the ELK family called Packetbeat, which can sniff network traffic and extract requests and responses. This seemed to be a very easy solution for the moment. The only drawback we found was that it doesn't support HTTPS; as a few of our customers work with plain HTTP traffic internally, we decided to go with it. It also has another important feature: it can correlate requests and responses automatically.

We picked the latest ELK products:
  • Packetbeat  6.2.4
  • Elasticsearch  6.2.4
  • Kibana  6.2.4
Since Packetbeat can send data directly to the Elasticsearch server, we decided to continue without Logstash. We may use Logstash later for advanced data transformations.
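As a rough sketch, the relevant pieces of packetbeat.yml for that setup look something like the following; the interface and port values are illustrative and should match where your ESB traffic actually flows.

  packetbeat.interfaces.device: any        # sniff all interfaces; pick a specific NIC in production

  packetbeat.protocols:
  - type: http
    ports: [8280]                          # illustrative: the plain-HTTP port the ESB listens on

  output.elasticsearch:
    hosts: ["localhost:9200"]              # straight to Elasticsearch, no Logstash in between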

Setting Up ELK

To be honest, this was the easiest software installation I have done, apart from something like a Microsoft Word installation. If you are setting up on a single machine, just extract the downloaded files and you are all set to go.

  • Start Elasticsearch first
  • Then Packetbeat
  • And then Kibana
Instantly you are in a running setup. Once you log into Kibana at localhost:5601 you will see the Kibana home page. Now go to the Discover tab; you may not see anything because you are yet to create an index pattern. Go to Management -> Index Patterns and click Create Index Pattern.

Enter packetbeat-* as the index pattern and click Next. Select @timestamp and click Create index pattern. Now go back to the Discover tab and select the created index pattern. You should see a lot of network data captured by Packetbeat.

Wow, now you have a working Elastic setup with almost zero changes or configuration done.

Going to Production

As the setup was very simple and everything was working smoothly, we decided to go to production without any hesitation. Now I will cover how I managed to fix the issues found in that environment.

High disk usage

The first problem we faced was very high disk usage by the Elasticsearch server. Initially we had to delete all the index data daily, as it started creating around 12GB per hour. Our setup is like this:
A -> B -> C
Capturing network latency
A sends a request to B, then B sends a request to C. C sends a response to B, and B sends a response to A. So basically there are two transactions, one between A and B and one between B and C. Packetbeat can correlate these 4 records and convert them into two events, so it shows two transactions for the 4 calls. Kibana displays the direction as "in" for A to B and "out" for B to C. This gave us another piece of information for tracking network latency between servers using the response time: the A-B response time can be considered the latency of server B, which is our ESB server, while the B-C response time can be categorised as the latency of C, which in our case is an operator.

Packetbeat Duplicates
When we were analysing the data looking for the reasons behind the space issue, we noticed data duplication. The reason was that we had Packetbeat running on both the APIM and the ESB: when the APIM emits an event as "out", the same event arrives at the ESB as "in", which duplicated every event. So we simply decided to run a single Packetbeat on the ESB only. This helped cut the size in half.

Reducing fields
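Packetbeat events carry a lot of fields that never get queried, so one way to shrink every document is the standard Beats drop_fields processor in packetbeat.yml; the field names below are only examples, keep whatever you actually query on.

  processors:
  - drop_fields:
      fields: ["http.request.headers", "http.response.headers", "params"]   # example fields to drop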


Changing index patterns to make hourly indices
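A sketch of what this looks like in packetbeat.yml: switch the index name to an hourly date pattern, which makes it easier to drop old data in finer chunks when disk space runs low. In 6.x, overriding the index name also means setting the template name and pattern explicitly.

  output.elasticsearch:
    hosts: ["localhost:9200"]
    index: "packetbeat-%{+yyyy.MM.dd.HH}"  # one index per hour instead of per day

  # required whenever the default index name is overridden
  setup.template.name: "packetbeat"
  setup.template.pattern: "packetbeat-*"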


Remove unwanted events
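A sketch using the standard Beats drop_event processor to throw away events before they are shipped; the condition here is purely illustrative (for example, health-check noise).

  processors:
  - drop_event:
      when:
        equals:
          path: "/health"                  # illustrative: drop monitoring/health-check calls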


Kibana Timeout
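When a Discover query over a large packetbeat index runs past Kibana's default 30-second limit, Kibana reports a request timeout; the usual knob for that is elasticsearch.requestTimeout in kibana.yml. The value below is just an example.

  # kibana.yml
  elasticsearch.requestTimeout: 120000     # in milliseconds; the default is 30000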


Download Requests & Responses 
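If the goal is to pull full request and response bodies back out of Elasticsearch, Packetbeat first has to be told to capture them; here is a sketch of the relevant http protocol options in packetbeat.yml (the port and content types are illustrative).

  packetbeat.protocols:
  - type: http
    ports: [8280]                          # illustrative ESB port
    send_request: true                     # store the raw request in the event
    send_response: true                    # store the raw response in the event
    include_body_for: ["application/json", "text/xml"]   # capture bodies only for these content types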







<Rest after having a coffee ...>
