Showing posts from 2018

Customise your Linux terminal prompt

This is how my final terminal prompt looks:

    nuwans:mediation-dep-gw (master) $

I have removed the lengthy host name and the very long directory path, and added the current git branch. This is done via the .bashrc file located in your home directory; in .bashrc the PS1 variable controls the terminal prompt.

    parse_git_branch() {
        git branch 2> /dev/null | sed -e '/^[^*]/d' -e 's/* \(.*\)/ (\1)/'
    }

    if [ "$color_prompt" = yes ]; then
        PS1='\[\033[01;32m\]\u:\[\033[01;34m\]\W\[\033[00m\]$(parse_git_branch) $ '
    else
        PS1='\u:\W\$ '
    fi

Paste the code above into .bashrc, replacing the existing PS1 block. What have I done?

    PS1='\[\033[01;32m\]\u:\[\033[01;34m\]\W\[\033[00m\]$(parse_git_branch) $ '

PS1 - sets the prompt pattern used by the shell
\[\033[01;32m\] - sets the colour (bold green here; 01;34 is bold blue, and 00 resets to the default)
\u - user name
\W - base name of the current folder
$(parse_git_branch) - reads the git branch info; outside a git repository an empty value is returned
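The change takes effect in new terminal sessions; run source ~/.bashrc to apply it to the current one. As for the sed expression: it deletes every line of git branch output that does not start with *, then wraps the remaining current-branch name in parentheses.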

Reverse proxy vs forward proxy

A proxy is a server that stands between the client and the origin server. It receives the request, sends a new request to the other end, retrieves the response, and creates a new response for the caller:

    Client -> Proxy -> Server

Forward proxy: the proxy acts as the client towards the server, so the server doesn't know the real client; the real client is hidden from the server.

Reverse proxy: the proxy acts as the server towards the client, so the real server is hidden from the client.
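To make the difference concrete, here is a small Java sketch from the client's point of view; the host names are hypothetical and not from the original post. With a forward proxy the client opts in explicitly and the origin server only ever sees the proxy's address; with a reverse proxy the client simply calls the public endpoint and never learns about the backend behind it.

    import java.net.HttpURLConnection;
    import java.net.InetSocketAddress;
    import java.net.Proxy;
    import java.net.URL;

    public class ProxyDemo {
        public static void main(String[] args) throws Exception {
            // Forward proxy: the client configures it explicitly.
            // The origin server sees proxy.example.com as the caller, not us.
            Proxy forward = new Proxy(Proxy.Type.HTTP,
                    new InetSocketAddress("proxy.example.com", 3128));
            HttpURLConnection viaForward = (HttpURLConnection)
                    new URL("http://example.com/").openConnection(forward);
            System.out.println("via forward proxy: " + viaForward.getResponseCode());

            // Reverse proxy: nothing to configure on the client side.
            // api.example.com is the reverse proxy; the real backend it
            // forwards to stays hidden from us.
            HttpURLConnection viaReverse = (HttpURLConnection)
                    new URL("http://api.example.com/").openConnection();
            System.out.println("via reverse proxy: " + viaReverse.getResponseCode());
        }
    }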

ELK Logstash Filter writing for response time, request and response

My ELK setup is now up and running: it reads a log file and shows the records on a Kibana dashboard, but basically no business logic was implemented yet. The new business requirement was to write a CSV file from the log file with each request, its response, and the response time, and also to show them on Kibana. The problem I faced here was that the request and the response arrive as two separate events in the log file, so the requirement can be split as below:

it should read the requests and responses from a file
correlate them, as they do not come in a single event
calculate the response time
create a CSV file with request, response and response time on the same line

As we have already configured our system to read log records from a file, there's no extra work to do to get both requests and responses into Elasticsearch. We decided to go with Logstash, as it's the tool recommended in the ELK stack to handle complex processing such as correlating events; a sketch of such a filter follows.
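As an illustration only, here is roughly what that correlation could look like with the logstash-filter-aggregate plugin. The log line format, the correlation_id field and the REQUEST/RESPONSE markers are assumptions of mine; the original post does not show its actual filter.

    filter {
      # Hypothetical log line: "<timestamp> <correlation id> REQUEST|RESPONSE <payload>"
      grok {
        match => { "message" => "%{TIMESTAMP_ISO8601:ts} %{DATA:correlation_id} %{WORD:direction} %{GREEDYDATA:payload}" }
      }

      if [direction] == "REQUEST" {
        # Remember the request payload and its arrival time under the correlation id.
        aggregate {
          task_id    => "%{correlation_id}"
          code       => "map['request'] = event.get('payload'); map['start'] = event.get('@timestamp').to_f"
          map_action => "create"
        }
      } else if [direction] == "RESPONSE" {
        # Pull the stored request back in and compute the response time.
        aggregate {
          task_id     => "%{correlation_id}"
          code        => "event.set('request', map['request']); event.set('response_time', event.get('@timestamp').to_f - map['start'])"
          map_action  => "update"
          end_of_task => true
          timeout     => 120
        }
      }
    }

    output {
      # Only completed response events carry all three fields on one CSV line.
      if [direction] == "RESPONSE" {
        csv {
          path   => "/tmp/request-response.csv"
          fields => ["request", "payload", "response_time"]
        }
      }
    }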

Life with ELK Packetbeat, Part 1: Overview

We were using WSO2 DAS as our analytics tool and faced a lot of performance issues, which made us look for an alternative; we settled on the Elastic Stack, which was free to use. We were using WSO2 API Manager and WSO2 ESB to handle traffic between two external parties. Our initial DAS implementation captured only responses, and our customers were asking for requests as well; we would have had a hell of a lot of work to overcome that problem had we stayed with WSO2 DAS.

The Elastic Stack mainly consists of:

Filebeat for reading data from files/logs
Logstash for processing logs/data
Elasticsearch for storing data
Kibana for presenting data

So our initial plan was to take whatever logs our application produced and send them to the Elasticsearch server using Filebeat. Then we realised our application doesn't print all the required data to its log file, so we would not be able to get enough data for analysis. Then we found another tool in the ELK family called Packetbeat, which can sniff network traffic directly instead of relying on log files.
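To give an idea of what that looks like, a minimal packetbeat.yml along these lines would capture HTTP traffic off the wire and ship it to Elasticsearch. The interface, ports and host below are assumptions for illustration, not the configuration from this series.

    # Sniff on all interfaces (hypothetical values; adjust to your environment)
    packetbeat.interfaces.device: any

    # Decode HTTP conversations seen on these ports
    packetbeat.protocols:
      - type: http
        ports: [80, 8080]

    # Ship the captured transactions to Elasticsearch
    output.elasticsearch:
      hosts: ["localhost:9200"]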

Java Collections Framework

The Collections Framework comes under java.util, from Java 1.2. Before collections, the Vector, Stack and Properties classes were used to manipulate groups of objects. Collections brought all of these under a unified theme, with the following goals:

1. High performance. The dynamic arrays, linked lists, trees and hash tables are highly efficient.
2. Different collections must work in a similar manner, with high interoperability.
3. Extending and adapting must be easy. So it is built on a few standard interfaces, with standard implementations such as LinkedList, HashSet and TreeSet; your own implementations are supported too.
4. Allowing integration of standard arrays into the collections framework.

Algorithms are defined as static methods to manipulate collections. Iterators allow a standard way of accessing collection elements one at a time (in the style of Enumeration). Spliterator, introduced in Java 8, provides support for parallel iteration; it has nested interfaces to support primitive types. JDK 5 added the following to collections:

Generics
Autoboxing/unboxing
The for-each loop

Generics added type safety to the collections framework.
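A small sketch of those JDK 5 additions working together (an illustrative example, not from the original post):

    import java.util.ArrayList;
    import java.util.List;

    public class CollectionsDemo {
        public static void main(String[] args) {
            // Generics: the compiler rejects anything that is not an Integer.
            List<Integer> numbers = new ArrayList<>();

            // Autoboxing: the int literals are wrapped into Integer objects.
            numbers.add(1);
            numbers.add(2);
            numbers.add(3);

            // For-each loop: iterates without an explicit Iterator,
            // auto-unboxing each element back to int.
            int sum = 0;
            for (int n : numbers) {
                sum += n;
            }
            System.out.println("sum = " + sum); // prints: sum = 6
        }
    }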

Multi IP Whitelisting and Advanced Throttling with WSO2 API Manager

After spending a few sleepless nights, I found a workaround to do multi-IP whitelisting and throttling on API Manager, as follows. Multi-IP whitelisting and throttling is not supported out of the box by the WSO2 API Manager pack. There are two options we have tried.

1. IP based throttling

Go to the API Manager admin portal https://13.58.109.76:9444/admin/api-policy-list
Go to throttling policies > advanced throttling > add tier
Set Request Count as the default limit
Set 1 minute as the unit time
Press "add condition group" > press "IP condition" > switch the IP Condition Policy "on"
Select "specific IP" as the IP condition type
Add an IP address to be whitelisted and throttled
Under Execution Policy, select Request Count and set the count as you like
Set the time to 1 minute
Press "add condition group" again to add the second IP and repeat the same steps
Add this to the API by selecting it under "Advanced Throttling Policies" on the API's Manage page in the Publisher

Pro