
Spring Microservices in Action - Summary

Java Spring Boot Microservices
..............................

Contents


1 ■ Welcome to the cloud, Spring 
2 ■ Building microservices with Spring Boot
3 ■ Controlling your configuration with Spring Cloud configuration server
4 ■ On service discovery
5 ■ When bad things happen: client resiliency patterns with
Spring Cloud and Netflix Hystrix
6 ■ Service routing with Spring Cloud and Zuul
7 ■ Securing your microservices
8 ■ Event-driven architecture with Spring Cloud Stream
9 ■ Distributed tracing with Spring Cloud Sleuth and Zipkin
10 ■ Deploying your microservices



























1. Welcome to the cloud, Spring



=====================================


Summary
     Microservices are extremely small pieces of functionality that are responsible for one specific area of scope.
     No industry standards exist for microservices. Unlike other early web service protocols, microservices take a principle-based approach and align with the concepts of REST and JSON.
     Writing microservices is easy, but fully operationalizing them for production requires additional forethought. We introduced several categories of microservice development patterns, including core development, routing patterns, client resiliency, security, logging, and build/deployment patterns.
     While microservices are language-agnostic, we introduced two Spring frameworks that significantly help in building microservices: Spring Boot and Spring Cloud.
     Spring Boot is used to simplify the building of REST-based/JSON microservices. Its goal is to make it possible for you to build microservices quickly with nothing more than a few annotations.
     Spring Cloud is a collection of open source technologies from companies such as Netflix and HashiCorp that have been “wrapped” with Spring annotations to significantly simplify the setup and configuration of these services.


In Detail..................
    Microservices are distributed, loosely coupled software services that carry out a small number of well-defined tasks.

What’s a microservice?
    Monolithic - every time an individual team needed to make a change, the entire application had to be rebuilt, retested, and redeployed.

    A microservice is a small, loosely coupled, distributed service.

    Microservices allow you to take a large application and decompose it into easy-to-manage components with narrowly defined responsibilities. Microservices help combat the traditional problems of complexity in a large code base by decomposing the large code base down into small, well-defined pieces.

    A microservice architecture has the following characteristics:
         Application logic is broken down into small-grained components with well-defined boundaries of responsibility that coordinate to deliver a solution.
         Each component has a small domain of responsibility and is deployed completely independently of the others. Microservices should have responsibility for a single part of a business domain. Also, a microservice should be reusable across multiple applications.
         Microservices communicate based on a few basic principles (notice I said principles, not standards) and employ lightweight communication protocols such as HTTP and JSON (JavaScript Object Notation) for exchanging data between the service consumer and service provider.
         The underlying technical implementation of the service is irrelevant because the applications always communicate with a technology-neutral protocol (JSON is the most common). This means an application built using a microservice architecture could be built with multiple languages and technologies.
         Microservices—by their small, independent, and distributed nature—allow organizations to have small development teams with well-defined areas of responsibility. These teams might work toward a single goal such as delivering an application, but each team is responsible only for the services on which they’re working.

What is Spring and why is it relevant to microservices?
    Spring Boot is a re-envisioning of the Spring framework. While it embraces core features of Spring, Spring Boot strips away many of the “enterprise” features found in Spring and instead delivers a framework geared toward Java-based, REST-oriented (Representational State Transfer) microservices. With a few simple annotations, a Java developer can quickly build a REST microservice that can be packaged and deployed without the need for an external application container.

    The Spring Cloud framework makes it simple to operationalize and deploy microservices to a private or public cloud. Spring Cloud wraps several popular cloud-management microservice frameworks under a common framework and makes the use and deployment of these technologies as easy to use as annotating your code.

Why change the way we build applications?
    Complexity has gone way up
    Customers want faster delivery
    Performance and scalability -  Applications need to scale up across multiple servers quickly and then scale back down when the volume needs have passed.
    Customers expect their applications to be available - Failures or problems in one part of the application shouldn’t bring down the entire application.

    If we “unbundle” our applications into small services and move them away from a single monolithic artifact, we can build systems that are
        Flexible - the smaller the unit of code, the less complicated it is and the less time it takes to test and deploy
        Resilient - Failures can be localized to a small part of the application and contained before the entire application experiences an outage
        Scalable - Scaling on small services is localized and much more cost-effective.

    Small, Simple, and Decoupled Services = Scalable, Resilient, and Flexible Applications

What exactly is the cloud?
    Think of cloud computing in terms of preparing a meal: the difference between the models below is who’s responsible for cooking the meal and where the meal is going to be cooked.
    three basic models exist in cloud-based computing. These are
         Infrastructure as a Service (IaaS) - go to the grocery store and buy a pre-made meal that you heat up and serve. The cloud vendor provides the basic infrastructure, but you’re accountable for selecting the technology and building the final solution.
         Platform as a Service (PaaS) - have a meal delivered to your house; you rely on a vendor to take care of the core tasks associated with preparing it.
         Software as a Service (SaaS) - eat at a restaurant.

Why the cloud and microservices?
    The advantage of cloud-based microservices centers around the concept of elasticity. Cloud service providers allow you to quickly spin up new virtual machines and containers in a matter of minutes. If your capacity needs for your services drop, you can spin down virtual servers without incurring any additional costs.

Microservices are more than writing the code
    Running and supporting a robust microservice application,
         Right-sized—How do you ensure that your microservices are properly sized so that you don’t have a microservice take on too much responsibility? Remember, a properly sized service allows you to quickly make changes to an application and reduces the overall risk of an outage to the entire application.
         Location transparent—How do you manage the physical details of service invocation when, in a microservice application, multiple service instances can quickly start and shut down?
         Resilient—How do you protect your microservice consumers and the overall integrity of your application by routing around failing services and ensuring that you take a “fail-fast” approach?
         Repeatable—How do you ensure that every new instance of your service brought up is guaranteed to have the same configuration and code base as all the other service instances in production?
         Scalable—How do you use asynchronous processing and events to minimize the direct dependencies between your services and ensure that you can gracefully scale your microservices?

    microservice patterns:
         Core development patterns - addresses the basics of building a microservice.
            1.Service granularity - Making a service too coarse-grained makes it difficult to maintain, while making it too fine-grained increases the overall complexity of the application
            2.Communication protocols - JSON is the ideal choice for microservices
            3.Interface design - A well-designed microservice interface makes using your service intuitive
            4.Configuration management of service - keep configuration separate from the deployed code so that changing it never requires changing the core application code
            5.Event processing between services - decouple your microservices using events so that you minimize hardcoded dependencies

         Routing patterns - how a client application that wants to consume a microservice discovers the location of the service and is routed over to it. Service discovery and routing answer the question, “How do I get my client’s request for a service to a specific instance of a service?”
            1.Service discovery - Service discovery abstracts away the physical location of the service from the client. New microservice instances can be added to scale up, and unhealthy service instances can be transparently removed from the service.
            2.Service routing - gives the microservice client a single logical URL to talk to and acts as a policy enforcement point for things like authorization, authentication, and content checking.

         Client resiliency patterns -prevent a problem in a single service (or service instance) from cascading up and out to the consumers of the service.
            1.Client-side load balancing - cache - The service client caches microservice endpoints retrieved from the service discovery and ensures that the service calls are load balanced between instances.
            2.Circuit breakers pattern - fail fast - The circuit breaker pattern ensures that a service client does not repeatedly call a failing service. Instead, a circuit breaker "fails fast" to protect the client.
            3.Fallback pattern - alternative path - When a client does fail, is there an alternative path the client can take to retrieve data from or take action with?
            4.Bulkhead pattern - separate environments - How do you segregate different service calls on a client to make sure one misbehaving service does not take up all the resources on the client?
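The circuit breaker idea above can be made concrete with a minimal plain-Java sketch. This is an illustration of the pattern only, not the Hystrix API: real breakers add call timeouts, a half-open state, and rolling failure windows.

```java
// Minimal circuit-breaker sketch: after `failureThreshold` consecutive
// failures the breaker opens and the client "fails fast" instead of
// calling the ailing service again.
public class SimpleCircuitBreaker {
    private final int failureThreshold;
    private int consecutiveFailures = 0;

    public SimpleCircuitBreaker(int failureThreshold) {
        this.failureThreshold = failureThreshold;
    }

    // Closed while failures stay under the threshold; open (reject) otherwise.
    public boolean allowRequest() {
        return consecutiveFailures < failureThreshold;
    }

    public void recordFailure() { consecutiveFailures++; }

    public void recordSuccess() { consecutiveFailures = 0; }
}
```

A client would check `allowRequest()` before each remote call, record the outcome, and take a fallback path while the breaker is open.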

         Security patterns
            1. Authentication - How do you determine the service client calling the service is who they say they are?
            2. Authorization - How do you determine whether the service client calling a microservice is allowed to undertake the action they’re trying to undertake?
            3. Credential management and propagation - prevent a service client from constantly having to present credentials. Token-based security standards such as OAuth2 and JSON Web Tokens (JWT) can be used to obtain a token that can be passed from service call to service call to authenticate and authorize the user.

         Logging and tracing patterns - when a monolithic application is broken down into small pieces, transactions become difficult to debug and trace
            1. Log correlation - All service log entries have a correlation ID that ties the log entry to a single transaction
            2. Log aggregation - how to pull together all of the logs produced by your microservices (and their individual instances) into a single queryable database.
            3. Microservice tracing - how to visualize the flow of a client transaction across all the services involved and understand the performance characteristics
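The log-correlation pattern can be sketched in a few lines of plain Java. The class below is hypothetical (Spring Cloud Sleuth does all of this automatically); it only illustrates the mechanics: mint a correlation ID at the edge of a transaction, carry it on every downstream call, and stamp it onto every log line.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.UUID;

// Sketch of log correlation: one ID per client transaction, carried on
// every downstream call, so the log entries of all participating services
// can be tied back to that single transaction.
public class CorrelationContext {
    // Header name used in the book's examples.
    public static final String CORRELATION_ID_HEADER = "tmx-correlation-id";

    // Reuse an incoming ID if present; otherwise this service is the edge
    // of the transaction and must mint a new one.
    public static String resolveCorrelationId(Map<String, String> incomingHeaders) {
        String id = incomingHeaders.get(CORRELATION_ID_HEADER);
        return (id != null) ? id : UUID.randomUUID().toString();
    }

    // Propagate the ID on the outbound request headers.
    public static Map<String, String> outboundHeaders(String correlationId) {
        Map<String, String> headers = new HashMap<>();
        headers.put(CORRELATION_ID_HEADER, correlationId);
        return headers;
    }

    // Prefix every log line with the ID so an aggregator can query by it.
    public static String logLine(String correlationId, String message) {
        return "[" + CORRELATION_ID_HEADER + "=" + correlationId + "] " + message;
    }
}
```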

         Build and deployment patterns  
            each instance of a microservice should be identical to all its other instances. You can’t allow “configuration drift” (something changes on a server after it’s been deployed) to occur, because this can introduce instability in your applications.

            no longer deploy software artifacts such as a Java WAR or EAR to an already-running piece of infrastructure.Instead, you want to build and compile your microservice and the virtual server image it’s running on as part of the build process.

            1. Build and deployment pipeline - one-button builds and deployment to any environment. - Everything starts with a developer checking in their code to a source control repository. This is the trigger to begin the build/deployment process.
            2. Infrastructure as code - treat the provisioning of your services as code that can be executed and managed under source control. When the microservice is compiled and packaged, we immediately bake and provision a virtual server or container image with the microservice installed on it.
            3. Immutable servers - Once a microservice image is created, how do you ensure that it’s never changed after it has been deployed? no developer or system administrator is allowed to make modifications to the servers. When promoting between environments, the entire container or image is started
            4. Phoenix servers - The longer a server is running, the more opportunity for configuration drift. How do you ensure that servers that run microservices get torn down on a regular basis and recreated off an immutable image? Because the actual servers are constantly being torn down as part of the continuous integration process, new servers are being started and torn down. This greatly decreases the chance of configuration drift between environments.


Using Spring Cloud in building your microservices
    The Spring team has integrated a wide number of battle-tested open source projects into a Spring subproject collectively known as Spring Cloud. Spring Cloud wraps the work of open source companies such as Pivotal, HashiCorp, and Netflix in delivering these patterns, and simplifies setting up and configuring these projects in your Spring application.

    Microservice patterns
        Development patterns
            Core microservice patterns - Spring Boot
            Configuration management - Spring Cloud Config
            Asynchronous messaging - Spring Cloud Stream              
        Routing patterns
            Service discovery patterns - Spring Cloud/Netflix Eureka
            Service routing patterns - Spring Cloud/Netflix Zuul          
        Client resiliency patterns
            Client-side load balancing - Spring Cloud/Netflix Ribbon
            Circuit breaker pattern - Spring Cloud/Netflix Hystrix
            Fallback pattern - Spring Cloud/Netflix Hystrix
            Bulkhead pattern - Spring Cloud/Netflix Hystrix
        Build/deployment patterns
            Continuous integration - Travis CI
            Infrastructure as code - Docker
            Immutable servers - Docker
            Phoenix servers - Travis CI/Docker
        Logging patterns
            Log correlation - Spring Cloud Sleuth
            Log aggregation - Spring Cloud Sleuth (with Papertrail)
            Microservice tracing - Spring Cloud Sleuth/Zipkin
        Security patterns
            Authorization - Spring Cloud Security/OAuth2
            Authentication - Spring Cloud Security/OAuth2
            Credential management and propagation - Spring Cloud Security/OAuth2/JWT

    Spring Boot - simplifies the core tasks of building REST-based microservices: mapping HTTP-style verbs (GET, PUT, POST, and DELETE) to URLs, serializing the JSON protocol to and from Java objects, and mapping Java exceptions back to standard HTTP error codes.

    Spring Cloud Config - handles the management of application configuration data through a centralized service, cleanly separated from your deployed microservice. This ensures that no matter how many microservice instances you bring up, they’ll always have the same configuration. It can integrate with Git, Consul, and Eureka.
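As a sketch, a service’s bootstrap configuration might point every instance at the same central Config Server. The service name and URL below are illustrative assumptions:

```yaml
# bootstrap.yml (illustrative): every instance of the service asks the same
# central Config Server for its configuration at startup, so all instances
# come up with identical settings.
spring:
  application:
    name: licensing-service        # key used to look up config in the server
  profiles:
    active: dev                    # environment-specific profile
  cloud:
    config:
      uri: http://localhost:8888   # assumed Config Server location
```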

    Spring Cloud service discovery - abstracts away the physical location (IP and/or server name) of where your servers are deployed from the clients consuming the service. Service consumers invoke business logic for the servers through a logical name rather than a physical location. It handles the registration and deregistration of service instances as they’re started up and shut down, and can be implemented using Consul or Eureka as its service discovery engine.
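The heart of the logical-name lookup can be sketched with a toy registry in plain Java. This is not the Eureka or Consul API; real discovery engines add heartbeats, health checks, and client-side caching.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Toy service registry: maps a logical service name to the physical
// endpoints currently registered for it. Consumers only ever see the
// logical name; the physical location is resolved at call time.
public class ToyServiceRegistry {
    private final Map<String, List<String>> instances = new HashMap<>();

    public void register(String serviceName, String endpoint) {
        instances.computeIfAbsent(serviceName, k -> new ArrayList<>()).add(endpoint);
    }

    public void deregister(String serviceName, String endpoint) {
        List<String> eps = instances.get(serviceName);
        if (eps != null) eps.remove(endpoint);
    }

    public List<String> lookup(String serviceName) {
        return instances.getOrDefault(serviceName, new ArrayList<>());
    }
}
```

Instances registering on startup and deregistering on shutdown is exactly what keeps the lookup transparent to clients as instances come and go.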

    Spring Cloud/Netflix Hystrix and Ribbon - Using the Netflix Hystrix libraries, you can quickly implement service client resiliency patterns such as the circuit breaker and bulkhead patterns. While the Netflix Ribbon project simplifies integrating with service discovery agents such as Eureka, it also provides client-side load-balancing of service calls from a service consumer. This makes it possible for a client to continue making service calls even if the service discovery agent is temporarily unavailable.
              
    Spring Cloud/Netflix Zuul - Spring Cloud uses the Netflix Zuul project (https://github.com/Netflix/zuul) to provide service routing capabilities for your microservice application. Zuul is a service gateway that proxies service requests and makes sure that all calls to your microservices go through a single “front door” before the targeted service is invoked. With this centralization of service calls, you can enforce standard service policies such as security (authorization and authentication), content filtering, and routing rules.

    Spring Cloud Stream - allows you to easily integrate lightweight message processing into your microservice using asynchronous events, and to quickly integrate your microservices with message brokers such as RabbitMQ.

    Spring Cloud Sleuth - allows you to integrate unique tracking identifiers into the HTTP calls and message channels. With Spring Cloud Sleuth, these trace IDs are automatically added to any logging statements you make in your microservice. You can combine it with Papertrail or Zipkin to aggregate logs and visualize the flow of your service calls.

    Spring Cloud Security - an authentication and authorization framework that allows services to communicate with one another through a token issued by an authentication server. It supports the JSON Web Token (JWT) standard.

    Provisioning - Spring Cloud doesn’t support the build and deployment process; you can use Travis CI and Docker for that.

Spring Cloud by example
    @HystrixCommand(threadPoolKey = "helloThreadPool")
    public String helloRemoteServiceCall(String firstName, String lastName)
    The @HystrixCommand annotation is doing two things. First, any time the helloRemoteServiceCall method is called, it won’t be directly invoked. Instead, the method will be delegated to a thread pool managed by Hystrix. If the call takes too long (default is one second), Hystrix steps in and interrupts the call. This is the implementation of the circuit breaker pattern. The second thing this annotation does is create a thread pool called helloThreadPool that’s managed by Hystrix. All calls to helloRemoteServiceCall method will only occur on this thread pool and will be isolated from any other remote service calls being made.

    The @EnableEurekaClient annotation tells Spring Boot that the application will use Eureka service discovery. A modified RestTemplate class will contact the Eureka service and look up the physical location of one or more instances of the named service.

    Also, the RestTemplate class is using Netflix’s Ribbon library. Ribbon will retrieve a list of all the physical endpoints associated with a service. Every time the service is called by the client, it “round-robins” the call to the different service instances on the client without having to go through a centralized load balancer. By eliminating a centralized load balancer and moving it to the client, you eliminate another failure point (load balancer going down) in your application infrastructure.
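Ribbon’s round-robin behavior described above can be sketched in plain Java. This is a simplification of the idea, not the Ribbon API: Ribbon also refreshes its endpoint list from Eureka and supports other balancing rules.

```java
import java.util.List;
import java.util.concurrent.atomic.AtomicInteger;

// Client-side round-robin: the client holds the endpoint list itself and
// rotates through it, so no centralized load balancer sits in the call
// path (and cannot become a single point of failure).
public class RoundRobinBalancer {
    private final List<String> endpoints;
    private final AtomicInteger position = new AtomicInteger(0);

    public RoundRobinBalancer(List<String> endpoints) {
        this.endpoints = endpoints;
    }

    // Each call hands back the next endpoint in rotation.
    public String next() {
        int index = Math.floorMod(position.getAndIncrement(), endpoints.size());
        return endpoints.get(index);
    }
}
```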


























      
      












Building microservices with Spring Boot
=============================================================================================================================================

Summary
     To be successful with microservices, you need to integrate the architect’s, software developer’s, and DevOps perspectives.
     Microservices, while a powerful architectural paradigm, have their benefits and tradeoffs. Not all applications should be microservice applications.
     From an architect’s perspective, microservices are small, self-contained, and distributed. Microservices should have narrow boundaries and manage a small set of data.
     From a developer’s perspective, microservices are typically built using a REST style of design, with JSON as the payload for sending and receiving data from the service.
     Spring Boot is the ideal framework for building microservices because it lets you build a REST-based JSON service with a few simple annotations.
     From a DevOps perspective, how a microservice is packaged, deployed, and monitored is of critical importance.
     Out of the box, Spring Boot allows you to deliver a service as a single executable JAR file. An embedded Tomcat server in the produced JAR file hosts the service.
     Spring Actuator, which is included with the Spring Boot framework, exposes information about the operational health of the service along with information about the service’s runtime.

In Detail..................
    Applications built with traditional waterfall methodologies and coarse software granularity tend to be
        Tightly coupled
        Leaky - e.g., a team from one domain directly accesses the data that belongs to another team
        Monolithic - large changes become nearly impossible
    A microservice-based architecture,
        Constrained - single set of responsibilities and are narrow in scope.
        Loosely coupled - small services that only interact with one another through non-implementation-specific interfaces (HTTP, JSON, REST)
        Abstracted - Microservices completely own their data structures and data sources   
        Independent - can be compiled and deployed independently

    microservice for cloud-based development,
        A large and diverse user base
        Extremely high uptime requirements
        Uneven volume requirements -  easier to focus on the components that are under load and scale  

The architect’s story: designing the microservice architecture  

    1 Decomposing the business problem
        How the two different parts of the business transaction interact usually becomes the service interface for the microservices. Use the following guidelines for identifying and decomposing a business problem into microservice candidates:
        1 Describe the business problem, and listen to the nouns you’re using to describe the problem.
        2 Pay attention to the verbs.- usually indicates that multiple services are at play.
        3 Look for data cohesion. - Microservices should completely own their data

        The architect breaks the business problem into chunks that represent discrete domains of activity. These chunks encapsulate the business rules and the data logic associated with a particular part of the business domain.
    2 Establishing service granularity
        The goal is to take these major pieces of functionality and extract them into completely self-contained units that can be built and deployed independently of each other.
        A microservice that’s too coarse- or fine-grained will have a number of telltale attributes. use the following concepts to determine the correct solution
            1 It’s better to start broad with your microservice and refactor to smaller services
            2 Focus first on how your services will interact with one another
            3 Service responsibilities will change over time as your understanding of the problem domain grows

        If a microservice is too coarse-grained, you’ll likely see the following:  
            A service with too many responsibilities
            The service is managing data across a large number of tables
            Too many test cases
        What about a microservice that’s too fine-grained
            microservices are heavily interdependent on one another
            microservices become a collection of simple CRUD services; if your microservices do nothing but CRUD-related logic, they’re probably too fine-grained
        better to start with your first set of services being more coarse-grained than fine-grained.  
    3 Defining the service interfaces
        Talking to one another: service interfaces
        general  guidelines
            Embrace the REST philosophy - GET, PUT, POST, and DELETE
            Use URIs to communicate intent - the URI should describe the different resources in your domain and a mechanism to access them.
            Use JSON for your requests and responses - an extremely lightweight data-serialization protocol that is much easier to consume than XML.
            Use HTTP status codes to communicate results - to indicate the success or failure of a service
        service interfaces must be easy to understand and consumable.  

When not to use microservices
    Complexity of building distributed systems - automation and operational work (monitoring, scaling) that a highly distributed application needs      
    Server sprawl - the number of servers, and the cost required to run them, grows with the number of microservices
    Type of application - useful for building large applications that need to be highly resilient and scalable. not for small applications.
    Data transformations and consistency - for example:
        no standard exists for performing transactions across microservices.
        microservices can communicate amongst themselves by using messages, but messaging introduces latency in data updates.
The developer’s tale: building a microservice with Spring Boot and Java  
    Your goal is to get a simple microservice up and running in Spring Boot and then iterate on it to deliver functionality. To this end, you need to create two classes in your microservice application
        - A Spring Bootstrap class that will be used by Spring Boot to start up and initialize the application
        - A Spring Controller class that will expose the HTTP endpoints that can be invoked on the microservice
    Building the doorway into the microservice: the Spring Boot controller  
        exposes the service’s endpoints and maps the data from an incoming HTTP request to a Java method that will process the request.
        @RestController - tells the Spring container that this Java class is going to be used for a REST-based service. It automatically handles the serialization of data passed into the service as JSON or XML.
        @RequestMapping annotation - is used to tell the Spring container the HTTP endpoint that the service is going to expose to the world.

    Why JSON for microservices
        extremely lightweight
        easily read and consumed by a human being
        is the default serialization protocol used in JavaScript
        If you need to minimize the size of the data you’re sending across the wire, consider the Apache Thrift or Avro protocols
    Endpoint names matter
        establish standards for the endpoints that will be exposed via your services
        Use clear URL names & Be consistent in your naming conventions
        Use the URL to establish relationships between resources
        Establish a versioning scheme for URLs early
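As a hypothetical illustration of these conventions, a licensing domain might expose versioned, relationship-revealing URIs like:

```
GET    /v1/organizations/{organizationId}                      # fetch one organization
GET    /v1/organizations/{organizationId}/licenses             # licenses owned by it
GET    /v1/organizations/{organizationId}/licenses/{licenseId} # one specific license
POST   /v1/organizations/{organizationId}/licenses             # create a license
```

The nesting communicates the organization-to-license relationship, and the `/v1/` prefix leaves room to evolve the interface without breaking existing consumers.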

The DevOps story: building for the rigors of runtime
    microservice development has four principles
        1. self-contained and independently deployable - a single software artifact from which multiple instances can be started
        2. configurable - no human intervention should be required to configure the service; it should read its configuration from a central location
        3. transparent - the location of a service needs to be transparent to the client. The client should never know the exact location of a service. Instead, a microservice client should talk to a service discovery agent that will allow the application to locate an instance of a microservice without having to know its physical location.
        4. communicate - a microservice should communicate its health. This is a critical part of your cloud architecture. Microservice instances will fail and clients need to route around bad service instances.
    microservice DevOps has four principles
        1. Service assembly—How do you package and deploy your service to guarantee repeatability and consistency so that the same service code and runtime is deployed exactly the same way?
        2. Service bootstrapping—How do you separate your application and environment-specific configuration code from the runtime code so you can start and deploy a microservice instance quickly in any environment without human intervention to configure the microservice?
        3. Service registration/discovery—When a new microservice instance is deployed, how do you make the new service instance discoverable by other application clients?
        4. Service monitoring—In a microservices environment it’s extremely common for multiple instances of the same service to be running due to high availability needs. From a DevOps perspective, you need to monitor microservice instances and ensure that any faults in your microservice are routed around and that ailing service instances are taken down.
    12 best practices for building a microservice
        1. Codebase - All application code and server provisioning information should be in version control. Each microservice should have its own independent code repository within the source control systems.    
        2. Dependencies - Explicitly declare the dependencies your application uses through build tools such as Maven (Java). Third-party JAR dependencies should be declared using their specific version numbers. This allows your microservice to always be built using the same version of libraries.
        3. Config - Store your application configuration (especially your environment-specific configuration) independently from your code. Your application configuration should never be in the same repository as your source code.
        4. Backing services - Your microservice will often communicate over a network to a database or messaging system. When it does, you should ensure that at any time, you can swap out your implementation of the database from an in-house managed service to a third-party service. In chapter 10, we demonstrate this when you move your services away from a locally managed Postgres database to one managed by Amazon.
        5. Build, release, run - Keep your build, release, and run pieces of deploying your application completely separate. Once code is built, the developer should never make changes to the code at runtime. Any changes need to go back to the build process and be redeployed. A built service is immutable and cannot be changed.
        6. Processes—Your microservices should always be stateless. They can be killed and replaced at any time without the fear that the loss of a service instance will result in data loss.
        7. Port binding—A microservice is completely self-contained with the runtime engine for the service packaged in the service executable. You should run the service without the need for a separated web or application server. The service should start by itself on the command line and be accessed immediately through an exposed HTTP port.
        8. Concurrency—When you need to scale, don’t rely on a threading model within a single service. Instead, launch more microservice instances and scale out horizontally. This doesn’t preclude using threading within your microservice, but don’t rely on it as your sole mechanism for scaling. Scale out, not up.
        9. Disposability—Microservices are disposable and can be started and stopped on demand. Startup time should be minimized and processes should shut down gracefully when they receive a kill signal from the operating system.
        10. Dev/prod parity—Minimize the gaps that exist between all of the environments in which the service runs (including the developer’s desktop). A developer should develop the service locally on the same infrastructure that the actual service will run on. It also means that the amount of time between a service being deployed to different environments should be hours, not weeks. As soon as code is committed, it should be tested and then promoted as quickly as possible from Dev all the way to Prod.
        11. Logs—Logs are a stream of events. As logs are written out, they should be streamable to tools, such as Splunk (http://splunk.com) or Fluentd (http://fluentd.org), that will collate the logs and write them to a central location. The microservice should never be concerned with the mechanics of how this happens; the developer should be able to view the logs via STDOUT as they’re being written out.
        12. Admin processes—Developers will often have to do administrative tasks against their services (data migration or conversion). These tasks should never be ad hoc and instead should be done via scripts that are managed and maintained through the source code repository. These scripts should be repeatable and non-changing (the script code isn’t modified for each environment) across each environment they’re run against.
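The Config factor above can be sketched in plain Java: environment-specific values come from environment variables with local defaults, so the same immutable artifact runs in every environment. The variable names and default values here are illustrative assumptions, not ones from the book's code.

```java
import java.util.Optional;

// Twelve-factor "Config" sketch: configuration is injected from the
// environment at startup, never baked into the deployed artifact.
public class ServiceConfig {
    private final String dbUrl;
    private final int port;

    public ServiceConfig() {
        // Fall back to sane defaults so the service still starts locally.
        this.dbUrl = Optional.ofNullable(System.getenv("DB_URL"))
                             .orElse("jdbc:postgresql://localhost:5432/licensingdb");
        this.port = Integer.parseInt(
                Optional.ofNullable(System.getenv("SERVER_PORT")).orElse("8080"));
    }

    public String dbUrl() { return dbUrl; }
    public int port() { return port; }
}
```

Promoting the artifact from Dev to Prod then means changing only the environment, never the code.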
    Service assembly: packaging and deploying your microservices
        From a DevOps perspective, one of the key concepts behind a microservice architecture is that multiple instances of a microservice can be deployed quickly in response to a changed application environment (for example, a sudden influx of user requests, problems within the infrastructure, and so on).
        To support this, a microservice needs to be packaged and installable as a single artifact with all of its dependencies defined within it.
        This artifact also includes the runtime engine (for example, an HTTP server or application container) that will host the microservice.
        Packaging everything into a single deployable artifact eliminates many of the opportunities for configuration drift.
    Service bootstrapping: managing configuration of your microservices  
        This perspective covers what happens when a microservice starts up for the first time, and how the runtime behavior of the application is made configurable.
        Storing the data in a data store external to the service solves this problem, but microservices in the cloud offer a set of unique challenges:
            Configuration data is simple in structure, usually read frequently and written infrequently. Relational databases are overkill.
            The data must be readable with a low level of latency
            The data store has to be highly available
    Service registration and discovery: how clients communicate with your microservices  
        From a microservice consumer perspective, a microservice should be location-transparent, because in a cloud-based environment, servers are ephemeral. Ephemeral means the servers that a service is hosted on usually have shorter lives than a server running in a corporate data center. Cloud-based services can be started and torn down quickly, with an entirely new IP address assigned to the server on which the services are running.
        With services constantly coming up and down, managing a large pool of ephemeral services manually is an invitation to an outage.
        service discovery: A microservice instance needs to register itself with the third-party agent; this registration process is the entry point to service discovery. When registering, the instance provides
            1. The physical IP address or domain address of the service instance
            2. A logical name that an application can use to look up the service
            3. Optional - a URL back to the registering service that can be used by the service discovery agent to perform health checks
        The service client then communicates with the discovery agent to look up the service’s location.
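The registration-and-lookup flow described above can be sketched with a deliberately naive in-memory registry; real agents such as Eureka or Consul add health checking, peer replication, and client-side caching. All names here are illustrative.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Minimal service-discovery sketch: instances register a physical address
// under a logical service ID; clients look up by the logical ID only.
public class ServiceRegistry {
    private final Map<String, List<String>> registry = new ConcurrentHashMap<>();

    // An instance registers its physical location under a shared service ID.
    public void register(String serviceId, String hostAndPort) {
        registry.computeIfAbsent(serviceId, k -> new ArrayList<>()).add(hostAndPort);
    }

    // Clients resolve the logical name to the pool of live instances.
    public List<String> lookup(String serviceId) {
        return registry.getOrDefault(serviceId, List.of());
    }

    // The agent removes instances that fail their health checks.
    public void deregister(String serviceId, String hostAndPort) {
        registry.getOrDefault(serviceId, new ArrayList<>()).remove(hostAndPort);
    }
}
```

Note how every instance of the same service registers under one service ID, matching the description above.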
    Communicating a microservice’s health
        The service discovery agent monitors the health of each service instance registered with it and removes any failed service instances from its routing tables, to ensure that clients aren’t routed to a service instance that has failed.
        Spring Boot Actuator provides out-of-the-box operational endpoints that will help you understand and manage the health of your service.

Pulling the perspectives together
    Microservices in the cloud seem deceptively simple. But to be successful with them, you need an integrated view that pulls the perspectives of the architect, the developer, and the DevOps engineer together into a cohesive vision.
  







































































Controlling your configuration with Spring Cloud configuration server
==================================================================================================

Cloud-based microservices development emphasizes
    1 Completely separating the configuration of an application from the actual code being deployed
    2 Building the server and the application into an immutable image that never changes as it’s promoted through your environments
    3 Injecting any application configuration information at startup time of the server through either environment variables or a centralized repository that the application’s microservices read on startup


configuration management
    1. Segregate—Application configuration shouldn’t be deployed with the service instance. Instead, configuration information should either be passed to the starting service as environment variables or read from a centralized repository when the service starts.
    2. Abstract—Abstract the access of the configuration data behind a service interface. Rather than writing code that directly accesses the service repository (that is, reading the data out of a file or a database using JDBC), have the application use a REST-based JSON service to retrieve the configuration data.
    3. Centralize—Because a cloud-based application might literally have hundreds of services, it’s critical to minimize the number of different repositories used to hold configuration information. Centralize your application configuration into as few repositories as possible.
    4. Harden—Because your application configuration information is going to be completely segregated from your deployed service and centralized, it’s critical that whatever solution you utilize can be implemented to be highly available and redundant.

Your configuration management architecture

    1 When a microservice instance comes up, it’s going to call a service endpoint to read its configuration information that’s specific to the environment it’s operating in. The connection information for the configuration management (connection credentials, service endpoint, and so on) will be passed into the microservice when it starts up.
    2 The actual configuration will reside in a repository. Based on the implementation of your configuration repository, you can choose to use different implementations to hold your configuration data. The implementation choices can include files under source control, a relational database, or a key-value data store.
    3 The actual management of the application configuration data occurs independently of how the application is deployed. Changes to configuration management are typically handled through the build and deployment pipeline, where changes to the configuration can be tagged with version information and deployed through the different environments.
    4 When a configuration management change is made, the services that use that application configuration data must be notified of the change and refresh their copy of the application data.




Configuring the licensing service to use Spring Cloud Config
    In a Spring Boot service that uses Spring Cloud Config, configuration information can be set in one of two configuration files: bootstrap.yml and application.yml. The bootstrap.yml file reads the application properties before any other configuration information is used. In general, the bootstrap.yml file contains the application name for the service, the application profile, and the URI to connect to a Spring Cloud Config server. Any other configuration information that you want to keep local to the service (and not stored in Spring Cloud Config) can be set locally in the application.yml file. Usually, the information you store in application.yml is configuration data that you might want to have available to a service even if the Spring Cloud Config service is unavailable. Both the bootstrap.yml and application.yml files are stored in a project’s src/main/resources directory.
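A minimal bootstrap.yml along these lines might look like the following sketch; the service name, profile, and config-server URI are illustrative values (a locally running Config server is assumed):

```yaml
# bootstrap.yml -- read before application.yml (illustrative values)
spring:
  application:
    name: licensingservice        # name the Config server uses to locate properties
  profiles:
    active: default               # environment profile to pull (dev, prod, ...)
  cloud:
    config:
      uri: http://localhost:8888  # where the Spring Cloud Config server lives
```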

Refreshing your properties using Spring Cloud configuration server
    - Spring Boot Actuator does offer a @RefreshScope annotation that will allow a development team to access a /refresh endpoint that will force the Spring Boot application to reread its application configuration.
    - Spring Cloud configuration service does offer a “push”-based mechanism called Spring Cloud Bus that allows the Spring Cloud configuration server to publish to all the clients using the service that a change has occurred. Spring Cloud configuration requires an extra piece of middleware running (RabbitMQ). This is an extremely useful means of detecting changes, but not all Spring Cloud configuration backends support the “push” mechanism (the Consul server, for example, doesn’t).


Protecting sensitive configuration information

    - Spring Cloud Config supports using both symmetric (shared secret) and asymmetric encryption (public/private key).
        The symmetric encryption key is nothing more than a shared secret that’s used by the encrypter to encrypt a value and the decrypter to decrypt a value
    - Spring Cloud configuration server requires all encrypted properties to be prepended with a value of {cipher}. The {cipher} value tells Spring Cloud configuration server it’s dealing with an encrypted value.
    - By default, Spring Cloud Config will do all the property decryption on the server and pass the results back to the applications consuming the properties as plain, unencrypted text. However, you can tell Spring Cloud Config to not decrypt on the server and make it the responsibility of the application retrieving the configuration data to decrypt the encrypted properties.
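As a sketch, an encrypted property in the configuration repository looks something like the following; the ciphertext shown is a placeholder, not real output of a Config server’s encryption:

```yaml
# Property file entry on the config server (placeholder ciphertext)
spring.datasource.password: "{cipher}placeholder0123456789abcdef"
```

The {cipher} prefix is what tells the server (or, if server-side decryption is disabled, the client) to decrypt the value before use.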

Controlling your configuration with Spring Cloud configuration server Summary
     Spring Cloud configuration server allows you to set up application properties with environment-specific values.
     Spring uses Spring profiles at service launch to determine which environment properties are to be retrieved from the Spring Cloud Config service.
     Spring Cloud configuration service can use a file-based or Git-based application configuration repository to store application properties.
     Spring Cloud configuration service allows you to encrypt sensitive properties using symmetric and asymmetric encryption.


On service discovery
======================================================================================================

Summary
     The service discovery pattern is used to abstract away the physical location of services.
     A service discovery engine such as Eureka can seamlessly add and remove service instances from an environment without the service clients being impacted.
     Client-side load balancing can provide an extra level of performance and resiliency by caching the physical location of a service on the client making the service call.
     Eureka is a Netflix project that, when used with Spring Cloud, is easy to set up and configure.
     You used three different mechanisms in Spring Cloud, Netflix Eureka, and Netflix Ribbon to invoke a service. These mechanisms included
    – Using a Spring Cloud service DiscoveryClient
    – Using Spring Cloud and Ribbon-backed RestTemplate
    – Using Spring Cloud and Netflix’s Feign client

--------------

    - In any distributed architecture, we need to find the physical address of where a machine is located. This concept has been around since the beginning of distributed computing and is known formally as service discovery.
    - Service discovery is critical to microservice, cloud-based applications for two key reasons. First, it offers the application team the ability to quickly scale the number of service instances running in an environment horizontally up and down. The service consumers are abstracted away from the physical location of the service via service discovery. Because the service consumers don’t know the physical location of the actual service instances, new service instances can be added or removed from the pool of available services.
    - The second benefit of service discovery is that it helps increase application resiliency. When a microservice instance becomes unhealthy or unavailable, most service discovery engines will remove that instance from their internal list of available services. The damage caused by a down service will be minimized because the service discovery engine will route traffic around the unavailable service.

Why a traditional load balancer isn't enough
    1. Single point of failure — If the load balancer goes down, every application relying on it goes down too.
    2. Limited horizontal scalability — By centralizing your services into a single cluster of load balancers, you have limited ability to horizontally scale your load-balancing infrastructure across multiple servers.
    3. Statically managed — Most traditional load balancers aren’t designed for rapid registration and de-registration of services.
    4. Complex — Because a load balancer acts as a proxy to the services, service consumer requests have to be mapped to the physical services. This translation layer often adds complexity to your service infrastructure because the mapping rules for the service have to be defined and deployed by hand. In a traditional load balancer scenario, this registration of new service instances was done by hand and not at startup time of a new service instance.

service-discovery mechanism has,
     Highly available—Service discovery needs to be able to support a “hot” clustering environment where service lookups can be shared across multiple nodes in a service discovery cluster. If a node becomes unavailable, other nodes in the cluster should be able to take over.
     Peer-to-peer—Each node in the service discovery cluster shares the state of a service instance.
     Load balanced—Service discovery needs to dynamically load balance requests across all service instances to ensure that the service invocations are spread across all the service instances managed by it. In many ways, service discovery replaces the more static, manually managed load balancers used in many early web application implementations.
     Resilient—The service discovery’s client should “cache” service information locally. Local caching allows for gradual degradation of the service discovery feature, so that if the service discovery service does become unavailable, applications can still function and locate the services based on the information maintained in their local caches.
     Fault-tolerant—Service discovery needs to detect when a service instance isn’t healthy and remove the instance from the list of available services that can take client requests. It should detect these faults with services and take action without human intervention.

architecture of service discovery
    Service registration—How does a service register with the service discovery agent?
    Client lookup of service address—What’s the means by which a service client looks up service information?
    Information sharing—How is service information shared across nodes?
    Health monitoring—How do services communicate their health back to the service discovery agent?

    As service instances start up, they’ll register their physical location, path, and port that they can be accessed by with one or more service discovery instances. While each instance of a service will have a unique IP address and port, each service instance that comes up will register under the same service ID. A service ID is nothing more than a key that uniquely identifies a group of the same service instances.

    A service will usually only register with one service discovery service instance. Most service discovery implementations use a peer-to-peer model of data propagation where the data around each service instance is communicated to all the other nodes in the cluster. Depending on the service discovery implementation, the propagation mechanism might use a hard-coded list of services to propagate to or use a multi-casting protocol like the “gossip” or “infection-style” protocol to allow other nodes to “discover” changes in the cluster.

    Finally, each service instance will push its status to, or have its status pulled by, the service discovery service. Any services failing to return a good health check will be removed from the pool of available service instances.

    A client can rely solely on the service discovery engine to resolve service locations each time a service is called. But this approach is brittle because the service client is completely dependent on the service discovery engine to be running to find and invoke a service. A more robust approach is to use what’s called client-side load balancing.

    client-side load balancing - caches the location of the services so that the service client doesn’t have to contact service discovery on every call.
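A minimal sketch of that caching idea, assuming the instance list is refreshed periodically from the discovery agent; this is an illustration of the pattern, not Ribbon's actual implementation.

```java
import java.util.List;
import java.util.concurrent.atomic.AtomicInteger;

// Client-side load balancing sketch: cache the instance list locally and
// round-robin across it without contacting service discovery per call.
public class ClientSideLoadBalancer {
    private volatile List<String> cachedInstances; // refreshed on a schedule
    private final AtomicInteger counter = new AtomicInteger();

    public ClientSideLoadBalancer(List<String> initialInstances) {
        this.cachedInstances = initialInstances;
    }

    // Called periodically (e.g. every 30s), not on every request.
    public void refresh(List<String> instancesFromDiscovery) {
        this.cachedInstances = instancesFromDiscovery;
    }

    // Pick the next instance from the local cache -- no discovery call needed.
    public String choose() {
        List<String> instances = cachedInstances;
        int idx = Math.floorMod(counter.getAndIncrement(), instances.size());
        return instances.get(idx);
    }
}
```

If a chosen instance misbehaves, a fuller implementation would also evict it from the cache, as described above.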

Registering services with Spring Eureka

    The eureka.client.fetchRegistry attribute is used to tell the Spring Eureka Client to fetch a local copy of the registry. Setting this attribute to true will cache the registry locally instead of calling the Eureka service with every lookup. Every 30 seconds, the client software will re-contact the Eureka service for any changes to the registry.
    When a service registers with Eureka, Eureka will wait for three successive health checks over the course of 30 seconds before the service becomes available via Eureka.
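These client settings typically live in the service's configuration; the following sketch uses commonly seen Spring Cloud Netflix (1.x-era) property names and assumes a Eureka server running locally on the conventional port:

```yaml
# Illustrative Eureka client settings
eureka:
  instance:
    preferIpAddress: true        # register the instance IP rather than the hostname
  client:
    registerWithEureka: true     # register this service with Eureka
    fetchRegistry: true          # pull and cache a local copy of the registry
    serviceUrl:
      defaultZone: http://localhost:8761/eureka/
```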





















When bad things happen: client resiliency patterns with Spring Cloud and Netflix Hystrix
===============================================================================================================================================

Summary
     When designing highly distributed applications such as a microservice-based application, client resiliency must be taken into account.
     Outright failures of a service (for example, the server crashes) are easy to detect and deal with.
     A single poorly performing service can trigger a cascading effect of resource exhaustion as threads in the calling client are blocked waiting for a service to complete.
     Three core client resiliency patterns are the circuit breaker pattern, the fallback pattern, and the bulkhead pattern.
     The circuit breaker pattern seeks to kill slow-running and degraded system calls so that the calls fail fast and prevent resource exhaustion.
     The fallback pattern allows you as the developer to define alternative code paths in the event that a remote service call fails or the circuit breaker for the call fails.
     The bulkhead pattern segregates remote resource calls away from each other, isolating calls to a remote service into their own thread pools. If one set of service calls is failing, its failures shouldn’t be allowed to eat up all the resources in the application container.
     Spring Cloud and the Netflix Hystrix libraries provide implementations for the circuit breaker, fallback, and bulkhead patterns.
     The Hystrix libraries are highly configurable and can be set at global, class, and thread pool levels.
     Hystrix supports two isolation models: THREAD and SEMAPHORE.
     Hystrix’s default isolation model, THREAD, completely isolates a Hystrix-protected call, but doesn’t propagate the parent thread’s context to the Hystrix-managed thread.
     Hystrix’s other isolation model, SEMAPHORE, doesn’t use a separate thread to make a Hystrix call. While this is more efficient, it also exposes the service to unpredictable behavior if Hystrix interrupts the call.
     Hystrix does allow you to inject the parent thread context into a Hystrix-managed thread through a custom HystrixConcurrencyStrategy implementation.
Detail.............

when a service is running slow, detecting that poor performance and routing around it is extremely difficult because
    Degradation of a service can start out as intermittent and build momentum
    Calls to remote services are usually synchronous and don’t cut short a long-running call
    Applications are often designed to deal with complete failures of remote resources, not partial degradations

What are client-side resiliency patterns?
    Client resiliency software patterns are focused on protecting the client of a remote resource (another microservice call or database lookup) from crashing when the remote resource is failing because that remote service is throwing errors or performing poorly.
    The goal of these patterns is to allow the client to “fail fast,” not consume valuable resources such as database connections and thread pools, and prevent the problem of the remote service from spreading “upstream” to consumers of the client.
    There are four client resiliency patterns:
        1 Client-side load balancing - The service client caches microservice endpoints retrieved during service discovery.
        2 Circuit breakers - The circuit breaker pattern ensures that a service client does not repeatedly call a failing service.
        3 Fallbacks - When a call does fail, fallback asks if there’s an alternative that can be executed.
        4 Bulkheads - The bulkhead segregates different service calls on the service client to ensure a poor-behaving service does not use all the resources on the client.

Client-side load balancing
    Client-side load balancing involves having the client look up all of a service’s individual instances from a service discovery agent (like Netflix Eureka) and then caching the physical location of said service instances.
    Whenever a service consumer needs to call that service instance, the client-side load balancer will return a location from the pool of service locations it’s maintaining.
    Because the client-side load balancer sits between the service client and the service consumer, the load balancer can detect if a service instance is throwing errors or behaving poorly. If the client-side load balancer detects a problem, it can remove that service instance from the pool of available service locations and prevent any future service calls from hitting that service instance.
    - Netflix’s Ribbon does this

Circuit breaker
    With a software circuit breaker, when a remote service is called, the circuit breaker will monitor the call. If the calls take too long, the circuit breaker will intercede and kill the call. In addition, the circuit breaker will monitor all calls to a remote resource, and if enough calls fail, the circuit breaker implementation will pop, failing fast and preventing future calls to the failing remote resource.
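The behavior described above can be sketched with a deliberately simplified breaker that trips after a threshold of consecutive failures; real implementations such as Hystrix add rolling statistics windows, half-open retry states, and call timeouts. All names here are illustrative.

```java
import java.util.function.Supplier;

// Simplified circuit breaker: after enough consecutive failures the circuit
// opens, and further calls fail fast to the fallback without touching the
// remote resource.
public class SimpleCircuitBreaker {
    private final int failureThreshold;
    private int consecutiveFailures = 0;
    private boolean open = false;

    public SimpleCircuitBreaker(int failureThreshold) {
        this.failureThreshold = failureThreshold;
    }

    public <T> T call(Supplier<T> remoteCall, Supplier<T> fallback) {
        if (open) {
            return fallback.get();          // fail fast: skip the remote call
        }
        try {
            T result = remoteCall.get();
            consecutiveFailures = 0;        // a healthy call resets the count
            return result;
        } catch (RuntimeException e) {
            if (++consecutiveFailures >= failureThreshold) {
                open = true;                // trip the breaker
            }
            return fallback.get();
        }
    }

    public boolean isOpen() { return open; }
}
```

A production breaker would also periodically probe the remote resource and close the circuit again once it recovers.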

Fallback processing
    With the fallback pattern, when a remote service call fails, rather than generating an exception, the service consumer will execute an alternative code path and try to carry out an action through another means. This usually involves looking for data from another data source or queueing the user’s request for future processing. The user’s call will not be shown an exception indicating a problem, but they may be notified that their request will have to be fulfilled at a later date.  

Bulkheads
    - if the ship’s hull is punctured, because the ship is divided into watertight compartments (bulkheads), the bulkhead will keep the water confined to the area of the ship where the puncture occurred and prevent the entire ship from filling with water and sinking.
    - break the calls to remote resources into their own thread pools and reduce the risk that a problem with one slow remote resource call will take down the entire application. The thread pools act as the bulkheads for your service. Each remote resource is segregated and assigned to its own thread pool. If one service is responding slowly, the thread pool for that one type of service call will become saturated and stop processing requests. Service calls to other services won’t become saturated because they’re assigned to other thread pools.

Why client resiliency matters
    The key thing the circuit breaker pattern offers is the ability for remote calls to
    1 Fail fast—When a remote service is experiencing a degradation, the application will fail fast and prevent resource exhaustion issues that normally shut down the entire application. In most outage situations, it’s better to be partially down rather than completely down.
    2 Fail gracefully—By timing out and failing fast, the circuit breaker pattern gives the application developer the ability to fail gracefully or seek alternative mechanisms to carry out the user’s intent. For instance, if a user is trying to retrieve data from one data source, and that data source is experiencing a service degradation, then the application developer could try to retrieve that data from another location.
    3 Recover seamlessly—With the circuit-breaker pattern acting as an intermediary, the circuit breaker can periodically check to see if the resource being requested is back on line and re-enable access to it without human intervention.

Enter Hystrix
    Building implementations of the circuit breaker, fallback, and bulkhead patterns yourself requires intimate knowledge of threads and thread management. Instead, you can use Spring Cloud and Netflix’s Hystrix library, which provide these patterns for you.

Fallbacks
    Here are a few things to keep in mind as you determine whether you want to implement a fallback strategy:
        - 1 Fallbacks are a mechanism to provide a course of action when a resource has timed out or failed. If you find yourself using fallbacks to catch a timeout exception and then doing nothing more than logging the error, then you should probably use a standard try..catch block around your service invocation, catch the HystrixRuntimeException, and put the logging logic in the try..catch block.
        - 2 Be aware of the actions you’re taking with your fallback functions. If you call out to another distributed service in your fallback service you may need to wrap the fallback with a @HystrixCommand annotation. Remember, the same failure that you’re experiencing with your primary course of action might also impact your secondary fallback option. Code defensively. I have been bitten hard when I failed to take this into account when using fallbacks.

Implementing the bulkhead pattern
    Hystrix uses a thread pool to which it delegates all requests to remote services. By default, all Hystrix commands will share the same thread pool to process requests. This thread pool will have 10 threads in it to process remote service calls, and those remote service calls could be anything, including REST-service invocations, database calls, and so on.

Thread context and Hystrix
    - When an @HystrixCommand is executed, it can be run with two different isolation strategies: THREAD and SEMAPHORE. By default, Hystrix runs with a THREAD isolation. Each Hystrix command used to protect a call runs in an isolated thread pool that doesn’t share its context with the parent thread making the call. This means Hystrix can interrupt the execution of a thread under its control without worrying about interrupting any other activity associated with the parent thread doing the original invocation.
    - With SEMAPHORE-based isolation, Hystrix manages the distributed call protected by the @HystrixCommand annotation without starting a new thread and will interrupt the parent thread if the call times out. In a synchronous container server environment (Tomcat), interrupting the parent thread will cause an exception to be thrown that cannot be caught by the developer. This can lead to unexpected consequences for the developer writing the code because they can’t catch the thrown exception or do any resource cleanup or error handling.
    - By default, the Hystrix team recommends you use the default isolation strategy of THREAD for most commands. This keeps a higher level of isolation between you and the parent thread. THREAD isolation is heavier than using the SEMAPHORE isolation. The SEMAPHORE isolation model is lighter-weight and should be used when you have a high-volume on your services and are running in an asynchronous I/O programming model (you are using an asynchronous I/O container such as Netty).
    - Hystrix, by default, will not propagate the parent thread’s context to threads managed by a Hystrix command. For example, any values set as ThreadLocal values in the parent thread will not be available by default to a method called by the parent thread and protected by the @HystrixCommand object. (Again, this is assuming you are using a THREAD isolation level.)
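This loss of context is easy to demonstrate with plain java.util.concurrent, independent of Hystrix: a ThreadLocal set on the calling thread is invisible to a task running in a separately pooled thread, which is essentially what happens under THREAD isolation. The names here are illustrative.

```java
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

// Demonstrates why THREAD isolation loses the parent's context: a ThreadLocal
// set on the calling thread is not visible inside a separate pooled thread.
public class ThreadContextDemo {
    private static final ThreadLocal<String> CORRELATION_ID = new ThreadLocal<>();

    public static void setCorrelationId(String id) { CORRELATION_ID.set(id); }

    public static String parentValue() { return CORRELATION_ID.get(); }

    public static String readFromPooledThread() {
        ExecutorService pool = Executors.newSingleThreadExecutor();
        try {
            // The pooled thread has its own ThreadLocal map -- the value is null.
            Callable<String> task = () -> String.valueOf(CORRELATION_ID.get());
            return pool.submit(task).get();
        } catch (Exception e) {
            throw new RuntimeException(e);
        } finally {
            pool.shutdown();
        }
    }
}
```

A custom HystrixConcurrencyStrategy, as mentioned above, works around this by explicitly copying such values into the Hystrix-managed thread.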



















Service routing with Spring Cloud and Zuul
=======================================================================================================

Summary
     Spring Cloud makes it trivial to build a services gateway.
     The Zuul services gateway integrates with Netflix’s Eureka server and can automatically map services registered with Eureka to a Zuul route.
     Zuul can prefix all routes being managed, so you can easily prefix your routes with something like /api.
     Using Zuul, you can manually define route mappings. These route mappings are manually defined in the application’s configuration files.
     By using Spring Cloud Config server, you can dynamically reload the route mappings without having to restart the Zuul server.
     You can customize Zuul’s Hystrix and Ribbon timeouts at global and individual service levels.
     Zuul allows you to implement custom business logic through Zuul filters. Zuul has three types of filters: pre-, post-, and routing filters.
     Zuul pre-filters can be used to generate a correlation ID that can be injected into every service flowing through Zuul.
     A Zuul post filter can inject a correlation ID into every HTTP service response back to a service client.
     A custom Zuul route filter can perform dynamic routing based on a Eureka service ID to do A/B testing between different versions of the same service.
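The /api prefix and manual route mappings mentioned in the summary might be configured along these lines; this is a sketch using Spring Cloud Netflix 1.x-era property names, and the service routes are illustrative:

```yaml
# Illustrative Zuul route configuration
zuul:
  prefix: /api                             # prefix all routes Zuul manages
  routes:
    licensingservice: /licensing/**        # /api/licensing/** -> licensingservice
    organizationservice: /organization/**  # /api/organization/** -> organizationservice
```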

In Detail..........
    In a distributed architecture like a microservices one, there will come a point where you’ll need to ensure that key behaviors such as security, logging, and tracking of users across multiple service calls occur.

    To solve this problem, you need to abstract these cross-cutting concerns into a service that can sit independently and act as a filter and router for all the microservice calls in your application. This cross-cutting concern is called a services gateway. Your service clients no longer directly call a service. Instead, all calls are routed through the service gateway, which acts as a single Policy Enforcement Point (PEP), and are then routed to a final destination.


    we’re going to see how to use Spring Cloud and Netflix’s Zuul to implement a services gateway. Zuul is Netflix’s open source services gateway implementation. Specifically, we’re going to look at how to use Spring Cloud and Zuul to
         Put all service calls behind a single URL and map those calls using service discovery to their actual service instances
         Inject correlation IDs into every service call flowing through the service gateway
         Inject the correlation ID into every HTTP response sent back to the service client
         Build a dynamic routing mechanism that will route specific individual organizations to a service instance endpoint that’s different than what everyone else is using

What is a services gateway?
    The service gateway sits as the gatekeeper for all inbound traffic to microservice calls within your application. With a service gateway in place, your service clients never directly call the URL of an individual service, but instead place all calls to the service gateway

    Because a service gateway sits between all calls from the client to the individual services, it also acts as a central Policy Enforcement Point (PEP) for service calls. The use of a centralized PEP means that cross-cutting service concerns can be implemented in a single place without the individual development teams having to implement these concerns. Examples of cross-cutting concerns that can be implemented in a service gateway include
         Static routing—A service gateway places all service calls behind a single URL and API route. This simplifies development as developers only have to know about one service endpoint for all of their services.
         Dynamic routing—A service gateway can inspect incoming service requests and, based on data from the incoming request, perform intelligent routing based on who the service caller is.
         Authentication and authorization—Because all service calls route through a service gateway, the service gateway is a natural place to check whether the caller of a service has authenticated themselves and is authorized to make the service call.
         Metric collection and logging—A service gateway can be used to collect metrics and log information as a service call passes through the service gateway. You can also use the service gateway to ensure that key pieces of information are in place on the user request to ensure logging is uniform.

    Keep the code you write for your service gateway light. The service gateway is the “chokepoint” for all service invocations, so complex code with multiple database calls can be the source of difficult-to-track-down performance problems there. Otherwise, the service gateway becomes a single point of failure and a potential bottleneck.

    Spring Cloud integrates with the Netflix open source project Zuul. Zuul is a services gateway that’s extremely easy to set up and use via Spring Cloud annotations.

    Zuul at its heart is a reverse proxy. A reverse proxy is an intermediate server that sits between the client trying to reach a resource and the resource itself. The client has no idea it’s even communicating to a server other than a proxy. The reverse proxy takes care of capturing the client’s request and then calls the remote resource on the client’s behalf.

     Zuul will do all its routing based on the mapping definitions you saw earlier in the chapter. However, by building a Zuul route filter, you can add intelligence to how a service client’s invocation will be routed.
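The pre/route/post filter flow above can be sketched without any framework at all. The following plain-Java simulation (all class, method, and header names are invented for illustration, not the Netflix Zuul API) shows a pre-filter generating a correlation ID when one is missing and a post filter copying it onto the outgoing response:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.UUID;

// Framework-free sketch of a gateway's pre/route/post filter chain.
public class GatewayFilterChain {
    static final String CORRELATION_ID = "tmx-correlation-id";

    // One shared "request context" per call, analogous to Zuul's RequestContext.
    static Map<String, String> handle(Map<String, String> requestHeaders) {
        Map<String, String> ctx = new HashMap<>(requestHeaders);

        // Pre-filter: ensure every call flowing through carries a correlation ID.
        ctx.computeIfAbsent(CORRELATION_ID, k -> UUID.randomUUID().toString());

        // Route filter: forward the call to the target service (simulated here).
        String body = "serviced:" + ctx.get(CORRELATION_ID);

        // Post filter: inject the correlation ID into the HTTP response.
        Map<String, String> response = new HashMap<>();
        response.put("body", body);
        response.put(CORRELATION_ID, ctx.get(CORRELATION_ID));
        return response;
    }

    public static void main(String[] args) {
        Map<String, String> resp = handle(new HashMap<>());
        System.out.println("correlation id echoed: " + resp.get(CORRELATION_ID));
    }
}
```

A real Zuul filter implements filterType()/run() and is registered as a Spring bean; the point here is only the shape of the flow.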

Repeated code vs. shared libraries
    The subject of whether you should use common libraries across your microservices is a gray area in microservice design. Microservice purists will tell you that you shouldn’t use a custom framework across your services because it introduces artificial dependencies: a change in business logic or a bug fix can force wide-scale refactoring of all your services. Other microservice practitioners will say that a purist approach is impractical because certain situations exist (like the previous UserContextFilter example) where it makes sense to build a common library and share it across services.






















Securing your microservices
===================================================================================================

    To implement authorization and authentication controls, you’re going to use Spring Cloud Security and the OAuth2 (Open Authorization) standard to secure your Spring-based services. OAuth2 is a token-based security framework that allows a user to authenticate themselves with a third-party authentication service. If the user successfully authenticates, they’re presented with a token that must be sent with every request. The token can then be validated back to the authentication service. The main goal behind OAuth2 is that when multiple services are called to fulfill a user’s request, the user can be authenticated by each service without having to present their credentials to each service processing the request. Spring Boot and Spring Cloud each provide an out-of-the-box implementation of an OAuth2 service and make it extremely easy to integrate OAuth2 security into your services.

Summary
     OAuth2 is a token-based authentication framework to authenticate users.
     OAuth2 ensures that each microservice carrying out a user request doesn’t need to be presented with user credentials with every call.
     OAuth2 offers different mechanisms for protecting web service calls. These mechanisms are called grants.
     To use OAuth2 in Spring, you need to set up an OAuth2-based authentication service.
     Each application that wants to call your services needs to be registered with your OAuth2 authentication service.
     Each application will have its own application name and secret key.
     User credentials and roles are kept in memory or a data store and accessed via Spring Security.
     Each service must define what actions a role can take.
     Spring Cloud Security supports the JSON Web Token (JWT) specification.
     JWT defines a signed, JSON-based standard for generating OAuth2 tokens.
     With JWT, you can inject custom fields into the token.
     Securing your microservices involves more than just using OAuth2. You should
     Use HTTPS to encrypt all calls between services.
     Use a services gateway to narrow the number of access points through which a service can be reached.
     Limit the attack surface for a service by limiting the number of inbound and outbound ports on the operating system that the service is running on.


Introduction to OAuth2
    OAuth2 is a token-based security authentication and authorization framework that breaks security down into four components:
        1. Protected resource - the resource (your microservice) that only authenticated users with the proper authorization can access.
        2. Resource owner - a resource owner defines what applications can call their service, which users are allowed to access the service, and what they can do with it.
        3. Application - the application that’s going to call the service on behalf of a user.
        4. OAuth2 authentication server - the intermediary between the application and the services being consumed. The OAuth2 server allows the user to authenticate themselves without having to pass their user credentials down to every service the application is going to call on behalf of the user.

    The user only has to present their credentials once.
    If they successfully authenticate, they’re issued an authentication token that can be passed from service to service.
    The token can be presented every time a service used by the user’s application tries to access a protected resource.
    The protected resource can then contact the OAuth2 server to determine the validity of the token and retrieve the roles assigned to the user.
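The flow above can be sketched in plain Java. Everything here (class names, the in-memory token store, the hard-coded password) is invented for illustration; a real system would use Spring Security OAuth against a proper credential store:

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.UUID;

// Framework-free sketch of the OAuth2 dance: authenticate once, receive a
// token, and let each protected resource validate that token with the server.
public class OAuth2FlowSketch {
    // token -> roles granted to the authenticated user (stands in for the auth server's store)
    static final Map<String, List<String>> issuedTokens = new HashMap<>();

    // The user presents credentials once and is issued a token.
    static String authenticate(String user, String password) {
        if (!"s3cret".equals(password)) throw new SecurityException("bad credentials");
        String token = UUID.randomUUID().toString();
        issuedTokens.put(token, List.of("ROLE_USER"));
        return token;
    }

    // A protected resource asks the auth server whether the token is valid
    // and which roles the user holds -- no credentials change hands.
    static List<String> validate(String token) {
        List<String> roles = issuedTokens.get(token);
        if (roles == null) throw new SecurityException("invalid token");
        return roles;
    }

    public static void main(String[] args) {
        String token = authenticate("john", "s3cret");
        System.out.println("roles: " + validate(token)); // prints: roles: [ROLE_USER]
    }
}
```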

Grants
    OAuth2 allows you to protect your REST-based services across these different scenarios through different authentication schemes called grants.
    Types of grants
         Password
         Client credential
         Authorization code
         Implicit

Response for a token call
     access_token - the OAuth2 token that will be presented with each service call the user makes to a protected resource.
     token_type - the type of token. The most common token type used is the bearer token.
     refresh_token - contains a token that can be presented back to the OAuth2 server to reissue a token after it has expired.
     expires_in - the number of seconds before the access token expires.
     scope - the scope for which the access token is valid.
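An illustrative token response tying these fields together (all values fabricated):

```json
{
  "access_token": "b7ff58d0-ffc4-4d0c-b35c-8c0e3a2b9f11",
  "token_type": "bearer",
  "refresh_token": "9f2d6c1e-5a3b-4e7d-8c21-0d4f6a8b2e55",
  "expires_in": 43199,
  "scope": "webclient"
}
```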

JSON Web Tokens and OAuth2
    OAuth2 is a token-based authentication framework, but ironically it doesn’t provide any standards for how the tokens in its specification are to be defined. To rectify the lack of standards around OAuth2 tokens, a standard called JSON Web Tokens (JWT) has emerged.
    JWT tokens are
         Small - JWT tokens are Base64-encoded and can be easily passed via a URL, HTTP header, or HTTP POST parameter.
         Cryptographically signed - a JWT token is signed by the authenticating server that issues it. This means you can be guaranteed that the token hasn’t been tampered with.
         Self-contained - because a JWT token is cryptographically signed, the microservice receiving the token can be guaranteed that its contents are valid. There’s no need to call back to the authenticating service to validate the contents of the token, because the signature can be verified and the contents (such as the expiration time of the token and the user information) can be inspected by the receiving microservice.
         Extensible - when an authenticating service generates a token, it can place additional information in the token before the token is sealed. A receiving service can decode the token payload and retrieve that additional context from it.

    The JWT specification does allow you to extend the token and add additional information to it.
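The "self-contained" property is easy to see in code: a JWT is three Base64URL-encoded segments (header.payload.signature), and any receiving service can decode the middle segment locally. The sketch below fabricates a demo token and decodes its payload; it deliberately does not verify the signature, which a real service must do before trusting any claims:

```java
import java.util.Base64;

// Sketch: a JWT's payload is just Base64URL-encoded JSON that any service
// can decode and inspect without a round trip to the auth server.
public class JwtPayloadSketch {

    // Build a fake, unsigned demo token (claims and signature are fabricated).
    static String buildDemoToken() {
        Base64.Encoder enc = Base64.getUrlEncoder().withoutPadding();
        String header  = enc.encodeToString("{\"alg\":\"HS256\",\"typ\":\"JWT\"}".getBytes());
        String payload = enc.encodeToString("{\"user_name\":\"john\",\"exp\":1617181920}".getBytes());
        return header + "." + payload + ".fake-signature";
    }

    // Decode the payload (middle) segment back into its JSON text.
    static String decodePayload(String jwt) {
        String[] parts = jwt.split("\\.");   // header.payload.signature
        return new String(Base64.getUrlDecoder().decode(parts[1]));
    }

    public static void main(String[] args) {
        System.out.println(decodePayload(buildDemoToken()));
    }
}
```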

As you build your microservices for production use, you should be building your microservices security around the following practices:
    1 Use HTTPS/Secure Sockets Layer (SSL) for all service communication.
    2 All service calls should go through an API gateway.
    3 Zone your services into a public API and private API.
    4 Limit the attack surface of your microservices by locking down unneeded network ports



















Event-driven architecture with Spring Cloud Stream
=============================================================================================================================
Summary
     Asynchronous communication with messaging is a critical part of microservices architecture.
     Using messaging within your applications allows your services to scale and become more fault tolerant.
     Spring Cloud Stream simplifies the production and consumption of messages by using simple annotations and abstracting away platform-specific details of the underlying message platform.
     A Spring Cloud Stream message source is an annotated Java method that’s used to publish messages to a message broker’s queue.
     A Spring Cloud Stream message sink is an annotated Java method that receives messages off a message broker’s queue.
     Redis is a key-value store that can be used as both a database and a cache.

In Detail
     Using asynchronous messages to communicate between applications isn’t new. What’s new is the concept of using messages to communicate events representing changes in state. This concept is called Event Driven Architecture (EDA). It’s also known as Message Driven Architecture (MDA). What an EDA-based approach allows you to do is to build highly decoupled systems that can react to changes without being tightly coupled to specific libraries or services. When combined with microservices, EDA allows you to quickly add new functionality into your application by merely having the service listen to the stream of events (messages) being emitted by your application.

     The Spring Cloud project has made it trivial to build messaging-based solutions through the Spring Cloud Stream sub-project.

The case for messaging, EDA, and microservices
    In a synchronous request-response model, tightly coupled services introduce complexity and brittleness.
    Using messaging to communicate state changes between services, you’re going to inject a queue in between. (Your service monitors the queue for any messages published by the backend/other service and can invalidate the cache data as needed.)
    This approach offers four benefits:
         Loose coupling - the services know nothing about each other.
         Durability - the message will be delivered even if the consumer of the service is down.
         Scalability - it’s trivial to spin up new instances of a microservice and have each additional instance process work off the message queue holding the messages.
         Flexibility - the sender of a message has no idea who is going to consume it, so you can easily add new message consumers (and new functionality) without impacting the original sending service.
  
    Traditional scaling mechanisms for reading messages off a queue involved increasing the number of threads that a message consumer could process at one time. Unfortunately, with this approach, you were ultimately limited by the number of CPUs available to the message consumer. A microservice model doesn’t have this limitation because you’re scaling by increasing the number of machines hosting the service consuming the messages  

    Downsides of a messaging architecture
        A messaging-based architecture can be complex and requires the development team to pay close attention to several key things, including
             Message handling semantics - what happens if a message is processed out of order? If a message fails, do you retry processing it or do you let it fail? How do you handle future messages related to that customer if one of the customer’s messages fails?
             Message visibility - The asynchronous nature of messages means they might not be received or processed in close proximity to when the message is published or consumed.
             Message choreography - more difficult to reason through the business logic of their applications because their code is no longer being processed in a linear fashion

Introducing Spring Cloud Stream
    Spring Cloud makes it easy to integrate messaging into your Spring-based microservices. Multiple message platforms can be used with Spring Cloud Stream (including the Apache Kafka project and RabbitMQ)

    four components are involved in publishing and consuming the message:
         Source - A source takes the message, serializes it (the default serialization is JSON), and publishes the message to a channel.
         Channel - A channel is an abstraction over the queue, can switch queues without changing the code
         Binder - talks to a specific message platform, without knowing internals of the specific messaging platform
         Sink -  A sink listens to a channel for incoming messages and de-serializes the message back into a plain old Java object.
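The four pieces can be sketched with an in-memory stand-in for the message broker. None of the names below are Spring Cloud Stream APIs; the "binder" here is just a map of channel names to queues, where a real binder talks to Kafka or RabbitMQ:

```java
import java.util.Map;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.LinkedBlockingQueue;

// In-memory sketch of source -> channel -> binder -> sink.
public class StreamSketch {
    // Binder: resolves an abstract channel name to a concrete queue.
    static final Map<String, BlockingQueue<String>> binder = new ConcurrentHashMap<>();

    // Channel: an abstraction over the queue; callers never name the queue directly.
    static BlockingQueue<String> channel(String name) {
        return binder.computeIfAbsent(name, k -> new LinkedBlockingQueue<>());
    }

    // Source: serializes the payload (crude JSON here) and publishes it to a channel.
    static void source(String channelName, String orgId) {
        channel(channelName).add("{\"organizationId\":\"" + orgId + "\"}");
    }

    // Sink: listens to the channel and "de-serializes" the message back into a value.
    static String sink(String channelName) {
        String json = channel(channelName).poll();
        return json == null ? null : json.replaceAll(".*:\"(.*)\".*", "$1");
    }

    public static void main(String[] args) {
        source("orgChangeTopic", "42");
        System.out.println("sink received organizationId=" + sink("orgChangeTopic"));
    }
}
```

Swapping the broker means swapping the binder; the source and sink code is untouched, which is exactly the decoupling the channel abstraction buys you.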

Writing a simple message producer and consumer
    The consumer group guarantees a message will only be processed once by a group of service instances.
    The concept of a consumer group is this: You might have multiple services with each service having multiple instances listening to the same message queue. You want each unique service to process a copy of a message, but you only want one service instance within a group of service instances to consume and process a message. The group property identifies the consumer group that the service belongs to. As long as all the service instances have the same group name, Spring Cloud Stream and the underlying message broker will guarantee that only one copy of the message will be consumed by a service instance belonging to that group. In the case of your licensing service, the group property value will be called licensingGroup
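Consumer-group semantics can be illustrated with a small in-memory simulation: every group gets its own copy of a published message, but instances within a group share one queue, so only one of them consumes each copy. The broker behavior is heavily simplified and all names are invented:

```java
import java.util.Map;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.LinkedBlockingQueue;

// Sketch of consumer groups: one queue per group, shared by that group's instances.
public class ConsumerGroupSketch {
    static final Map<String, BlockingQueue<String>> groups = new ConcurrentHashMap<>();

    static void registerGroup(String group) {
        groups.putIfAbsent(group, new LinkedBlockingQueue<>());
    }

    // Publishing fans the message out: each registered group receives a copy.
    static void publish(String message) {
        groups.values().forEach(q -> q.add(message));
    }

    // Any instance of a group polls the group's shared queue; each copy is handed out once.
    static String consume(String group) {
        return groups.get(group).poll();
    }

    public static void main(String[] args) {
        registerGroup("licensingGroup");
        registerGroup("auditGroup");
        publish("ORG_CHANGE:42");
        System.out.println("instance A got: " + consume("licensingGroup"));
        System.out.println("instance B got: " + consume("licensingGroup")); // null: already consumed
        System.out.println("auditGroup got: " + consume("auditGroup"));     // its own copy
    }
}
```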

    Spring Cloud Stream is acting as the middleman for these services. From a messaging perspective, the services know nothing about each other. They’re using a messaging broker to communicate as an intermediary and Spring Cloud Stream as an abstraction layer over the messaging broker

A Spring Cloud Stream use case: distributed caching
    You’ll have the licensing service always check a distributed Redis cache for the organization data associated with a particular license. If the organization data exists in the cache, you’ll return the data from the cache. If it doesn’t, you’ll call the organization service and cache the results of the call in a Redis hash. When data is updated in the organization service, the organization service will issue a message to Kafka. The licensing service will pick up the message and issue a delete against Redis to clear out the cache.

    We build our solution using Amazon Web Services (AWS) and are heavy users of Amazon’s DynamoDB. We also use Amazon’s ElastiCache (Redis) to
        1. Improve performance for lookups of commonly held data - using Redis and caching to avoid reads out to Dynamo.
        2. Reduce the load (and cost) on the Dynamo tables holding our data - a Redis read by primary key is significantly cheaper than a Dynamo read.
        3. Increase resiliency so that our services can degrade gracefully if our primary data store (Dynamo) is having performance problems - a caching solution can help reduce the number of errors you get from hitting your data store.

    Redis is a key-value store data store that acts like a big, distributed, in-memory HashMap. In the simplest case, it stores data and looks up data by a key. It doesn’t have any kind of sophisticated query language to retrieve data. Its simplicity is its strength and one of the reasons why so many projects have adopted it for use in their projects.  
    If the organization object in question is not in Redis, the code will return a null value. If a null value is returned from the checkRedisCache() method, the code will invoke the organization service’s REST endpoint to retrieve the desired organization record. If the organization service returns an organization, the returned organization object will be cached using the cacheOrganizationObject() method.

    caching is meant to help improve performance and the absence of the caching server shouldn’t impact the success of the call.
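The cache-aside flow described above, including graceful degradation when the cache misbehaves, can be sketched in plain Java. The method names checkRedisCache() and cacheOrganizationObject() from the text appear only as comments; everything else (class name, record format, counter) is invented for illustration:

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of cache-aside with graceful degradation: a cache failure is
// treated as a miss so the overall call still succeeds.
public class CacheAsideSketch {
    static final Map<String, String> redisLikeCache = new HashMap<>();
    static int serviceCalls = 0;   // counts trips to the "organization service"

    // Stand-in for the organization service's REST endpoint.
    static String restCall(String orgId) {
        serviceCalls++;
        return "org-record-" + orgId;
    }

    static String getOrganization(String orgId) {
        String cached = null;
        try {
            cached = redisLikeCache.get(orgId);      // checkRedisCache()
        } catch (RuntimeException e) {
            // Cache trouble must not fail the call; fall through to the service.
        }
        if (cached != null) return cached;

        String fresh = restCall(orgId);              // organization service REST call
        try {
            redisLikeCache.put(orgId, fresh);        // cacheOrganizationObject()
        } catch (RuntimeException e) { /* best effort only */ }
        return fresh;
    }

    public static void main(String[] args) {
        getOrganization("42");   // miss -> one service call
        getOrganization("42");   // hit  -> no additional service call
        System.out.println("service calls: " + serviceCalls); // prints: service calls: 1
    }
}
```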

    Previously you built your messaging integration between the licensing and organization services to use the default output and input channels that come packaged with the Source and Sink interfaces in the Spring Cloud Stream project. However, if you want to define more than one channel for your application or you want to customize the names of your channels, you can define your own interface and expose as many input and output channels as your application needs
























Distributed tracing with Spring Cloud Sleuth and Zipkin
========================================================================================================================================

Summary
     Spring Cloud Sleuth allows you to seamlessly add tracing information (correlation IDs) to your microservice calls.
     Correlation IDs can be used to link log entries across multiple services. They allow you to see the behavior of a transaction across all the services involved in a single transaction.
     While correlation IDs are powerful, you need to partner this concept with a log aggregation platform that will allow you to ingest logs from multiple sources and then search and query their contents.
     While multiple on-premise log aggregation platforms exist, cloud-based services allow you to manage your logs without having to have extensive infrastructure in place. They also allow you to easily scale as your application logging volume grows.
     You can integrate Docker containers with a log aggregation platform to capture all the logging data being written to the containers’ stdout/stderr. In this chapter, you integrated your Docker containers with Logspout and an online cloud logging provider, Papertrail, to capture and query your logs.
     While a unified logging platform is important, the ability to visually trace a transaction through its microservices is also a valuable tool.
     Zipkin allows you to see the dependencies that exist between services when a call to a service is made.
     Spring Cloud Sleuth integrates with Zipkin. Zipkin allows you to graphically see the flow of your transactions and understand the performance characteristics of each microservice involved in a user’s transaction.
     Spring Cloud Sleuth will automatically capture trace data for an HTTP call and for the inbound/outbound message channels used within a Spring Cloud Sleuth-enabled service.
     Spring Cloud Sleuth maps each service call to the concept of a span. Zipkin allows you to see the performance of a span.
     Spring Cloud Sleuth and Zipkin also allow you to define your own custom spans so that you can understand the performance of non-Spring-based resources (a database server such as Postgres or Redis).

Detail.................

    The microservices architecture is a powerful design paradigm for breaking down complex monolithic software systems into smaller, more manageable pieces. These manageable pieces can be built and deployed independently of each other; however, this flexibility comes at a price: complexity. Because microservices are distributed by nature, trying to debug where a problem is occurring can be maddening. The distributed nature of the services means that you have to trace one or more transactions across multiple services, physical machines, and different data stores, and try to piece together what exactly is going on.

     we look at the following:
         Using correlation IDs to link together transactions across multiple services - Spring Cloud Sleuth is a Spring Cloud project that instruments your HTTP calls with correlation IDs
         Aggregating log data from multiple services into a single searchable source - Papertrail is a cloud-based service (freemium-based) that allows you to aggregate logging data from multiple sources into a single searchable database.
         Visualizing the flow of a user transaction across multiple services and understanding the performance characteristics of each part of the transaction - Zipkin is an open source data-visualization tool that can show the flow of a transaction across multiple services. Zipkin allows you to break a transaction down into its component pieces and visually identify where there might be performance hotspots.


Spring Cloud Sleuth and the correlation ID
    A correlation ID is a randomly generated, unique number or string that’s assigned to a transaction when a transaction is initiated. As the transaction flows across multiple services, the correlation ID is propagated from one service call to another.

    With Spring Cloud Sleuth if you use Spring Boot’s logging implementation, you’ll automatically get correlation IDs added to the log statements you put in your microservices.

    By adding Spring Cloud Sleuth to your Spring microservices, you can
        Create and inject a correlation ID into your service calls
        Manage the propagation of the correlation ID to outbound service calls, so the correlation ID for a transaction is automatically added to those calls
        Add the correlation information to Spring’s MDC logging so that the generated correlation ID is automatically logged by Spring Boot’s default SLF4J and Logback implementation
        Publish the tracing information in the service call to the Zipkin distributed tracing platform
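What the MDC integration buys you can be sketched without Spring: stash a trace ID in a per-transaction context and prefix every log line with it, so grepping one ID reconstructs the whole transaction. The map below is a plain-Java stand-in for SLF4J's MDC; service and field names are invented:

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.UUID;

// Sketch of trace-ID logging: one ID per transaction, stamped on every line.
public class TraceLoggingSketch {
    static final Map<String, String> mdc = new HashMap<>();  // stand-in for org.slf4j.MDC
    static final List<String> logLines = new ArrayList<>();

    // Generate the correlation/trace ID once, when the transaction starts.
    static void startTransaction() {
        mdc.put("traceId", UUID.randomUUID().toString().substring(0, 8));
    }

    // Every log call picks the trace ID up from the shared context.
    static void log(String service, String message) {
        logLines.add("[" + service + ",trace=" + mdc.get("traceId") + "] " + message);
    }

    public static void main(String[] args) {
        startTransaction();
        log("licensingservice", "looking up license");
        log("organizationservice", "returning org record"); // same trace ID, different service
        logLines.forEach(System.out::println);
    }
}
```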

Log aggregation and Spring Cloud Sleuth
    Debugging a problem across distributed servers is ugly work and often significantly increases the amount of time it takes to identify and resolve an issue. A much better approach is to stream, real-time, all the logs from all of your service instances to a centralized aggregation point where the log data can be indexed and made searchable.
        Each individual service is producing logging data.
        An aggregation mechanism collects all of the data and funnels it to a common data store.  
        As data comes into a central data store, it is indexed and stored in a searchable format.
        You can query the log data to find individual transactions. The trace IDs from Spring Cloud Sleuth log entries allow you to tie log entries together across services.

    In Docker, every container’s standard output is captured by the Docker daemon, which exposes it through a Unix socket called docker.sock. Tools can plug into docker.sock like a pipe and capture the overall activities going on within the Docker runtime environment on the virtual server where the Docker daemon is running.

    Logspout lets you
         Send log data to multiple endpoints at once
         Centralize configuration
         Route specific log messages to a specific downstream log aggregation platform, with custom HTTP routes that let applications write log information via specific HTTP endpoints
         Integrate with protocols beyond syslog: Logspout can also send messages via the UDP and TCP protocols

     In a service call made with Spring Cloud Sleuth, the trace ID used in the call is never returned in the HTTP response headers, so a much simpler solution is to write a Zuul post filter that injects the trace ID into the HTTP response.

Distributed tracing with Open Zipkin
    In this section we look at how to visualize the flow of transactions as they move across different microservices.
    Distributed tracing involves providing a visual picture of how a transaction flows across your different microservices.
    Distributed tracing tools will also give a rough approximation of individual microservice response times.

    Zipkin (http://zipkin.io/) is a distributed tracing platform that allows you to trace transactions across multiple service invocations. Zipkin allows you to graphically see the amount of time a transaction takes and breaks down the time spent in each microservice involved in the call.

    Zipkin supports four different back end data stores.
        1 In-memory data
        2 MySQL: http://mysql.com
        3 Cassandra: http://cassandra.apache.org
        4 Elasticsearch: http://elastic.co

















Deploying your microservices
==========================================================================================

Summary
     The build and deployment pipeline is a critical part of delivering microservices. A well-functioning build and deployment pipeline should allow new features and bug fixes to be deployed in minutes.
     The build and deployment pipeline should be automated with no direct human interaction to deliver a service. Any manual part of the process represents an opportunity for variability and failure.
     The build and deployment pipeline automation does require a great deal of scripting and configuration to get right. The amount of work needed to build it shouldn’t be underestimated.
     The build and deployment pipeline should deliver an immutable virtual machine or container image. Once a server image has been created, it should never be modified.
     Environment-specific server configuration should be passed in as parameters at the time the server is set up.


In Detail.........
    Deployment time should be fast, so the code’s build and deploy process needs to be
        - Automated - The process of building the software, provisioning a machine image, and then deploying the service should be automated and should be initiated by the act of committing code to the source repository.
        - Repeatable - The process you use to build and deploy your software should be repeatable so that the same thing happens every time a build and deploy kicks off.
        - Complete - The outcome of your deployed artifact should be a complete virtual machine or container image (Docker) that contains the “complete” run-time environment for the service.
        - Immutable - runtime configuration of the image should not be touched or changed after the image has been deployed. Runtime configuration changes should be passed as environment variables to the image while application configuration should be kept separate from the container (Spring Cloud Config).
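A minimal sketch of the "immutable image, run-time configuration" idea. The JAR name, paths, and environment variable names below are illustrative, not from the book's build:

```dockerfile
# The image is baked once and never modified afterward.
FROM openjdk:8-jdk-alpine
ADD licensing-service-0.0.1-SNAPSHOT.jar /usr/local/licensingservice/
# No environment-specific values are baked in; they arrive at start-up.
CMD java -jar /usr/local/licensingservice/licensing-service-0.0.1-SNAPSHOT.jar
```

Environment-specific values are then passed when the container starts, for example: docker run -e SPRING_PROFILES_ACTIVE=prod -e CONFIGSERVER_URI=http://configserver:8888 licensing-service (the variable names are illustrative; application configuration itself stays in Spring Cloud Config).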

    For this chapter, we’re going to see how to implement a build and deployment pipeline using a number of non-Spring tools. You’re going to take the suite of microservices you’ve been building for this book and do the following:
        1 Integrate the Maven build scripts you’ve been using into a continuous integration/deployment cloud-tool called Travis CI
        2 Build immutable Docker images for each service and push those images to a centralized repository
        3 Deploy the entire suite of microservices to Amazon’s Cloud using Amazon’s EC2         Container Service (ECS)
        4 Run platform tests that will test that the service is functioning properly


setting up your core infrastructure in the cloud
    You’re going to change that now by separating your database server (PostgreSQL) and caching server (Redis) away from Docker into Amazon’s cloud. All the other services will remain running as Docker containers running inside a single-node Amazon ECS cluster.
        1 All your EagleEye services (minus the database and the Redis cluster) are going to be deployed as Docker containers running inside of a single-node ECS cluster.
        2 With the deployment to the Amazon cloud, you’re going to move away from using your own PostgreSQL database and Redis server and instead use the Amazon RDS and Amazon ElastiCache services.
        3 All traffic for the server will go through your Zuul API gateway.
        4 You’ll still use Spring’s OAuth2 server to protect your services.
        5 All your servers, including your Kafka server, won’t be publicly accessible to the outside world via their exposed Docker ports.
    Because you’re running Zuul, you want all traffic to flow through a single port, port 5555.  


Beyond the infrastructure: deploying EagleEye
    I’m using environment variables ($AWS_ACCESS_KEY and $AWS_SECRET_KEY) to hold my Amazon access and secret key.  
    If you have problems with an ECS deployed service starting or staying up, you’ll need to SSH onto the ECS cluster to look at the Docker logs. To do this you need to add port 22 to the security group that the ECS cluster runs with.

The architecture of a build/deployment pipeline
    Continuous Delivery (CD)
        1 A developer commits their service code to a source repository       
        2 A build/deploy engine monitors the source code repository for changes, checks out the code, and runs the code’s build scripts.
        3 The build compiles the code, runs its unit and integration tests, and then packages the service into an executable artifact. Because your microservices are built using Spring Boot, your build process will create an executable JAR file that contains both the service code and a self-contained Tomcat server.
        4 This is where your build/deploy pipeline begins to deviate from a traditional Java CI build process. After your executable JAR is built you’re going to “bake” a machine image with your microservice deployed to it.
        5 Before you officially deploy to a new environment, the machine image is started and a series of platform tests are run against the running image to determine if everything is running correctly. If the platform tests pass, the machine image is promoted to the new environment and made available for use.
        6 Before a service is promoted to the next environment, the platform tests for the environment must be run. The promotion of the service to the new environment involves starting up the exact machine image that was used in the lower environment to the next environment.
    Testing
        Unit tests - Run during the compilation of the service code, before it’s deployed to an environment. They’re designed to run in complete isolation, with each unit test being small and narrow in focus. A unit test should have no dependencies on third-party infrastructure (databases, services, and so on). Usually a unit test’s scope encompasses the testing of a single method or function.
        Integration tests - Test an entire workflow or code path. Third-party dependencies are mocked or stubbed; calls that would invoke a remote service never leave the build server.
        Platform tests - Run against a live environment to catch integration problems with third-party services that would normally go undetected when those services are stubbed out during an integration test.
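    The unit/integration distinction above can be sketched in plain Java. This is a hedged illustration, not the book's code: the remote dependency is injected so a unit test can stub it out, and no call leaves the build machine (class and method names are hypothetical; a real project would use JUnit and a mocking library).

    ```java
    import java.util.function.Function;

    // Hypothetical service under test. The external organization lookup
    // (a REST call in production) is injected, so a unit test can stub it out.
    class LicenseService {
        private final Function<String, Double> orgDiscountLookup;

        LicenseService(Function<String, Double> orgDiscountLookup) {
            this.orgDiscountLookup = orgDiscountLookup;
        }

        double calculateFee(String orgId, double baseFee) {
            double discount = orgDiscountLookup.apply(orgId); // stubbed in the test below
            return baseFee * (1.0 - discount);
        }
    }

    class LicenseServiceUnitTest {
        public static void main(String[] args) {
            // The "remote" lookup is a stub, so the test is isolated and fast.
            LicenseService service = new LicenseService(orgId -> 0.10); // always 10% discount
            double fee = service.calculateFee("org-1", 100.0);
            if (Math.abs(fee - 90.0) > 1e-9) throw new AssertionError("expected 90.0, got " + fee);
            System.out.println("unit test passed: fee=" + fee);
        }
    }
    ```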
    This build/deploy process is built on four core patterns.
        Continuous Integration/Continuous Delivery (CI/CD) - if the code passes its unit, integration, and platform tests, it should be immediately promoted to the next development environment.  
        Infrastructure as code - After CI/CD, provisioning of the machine image occurs through a series of scripts that are run with each build. No human hands should ever touch the server after it’s been built.
        Immutable servers - Once a server image is built, the configuration of the server and microservice is never touched after the provisioning process. Any change triggers a new build.
    Phoenix server pattern
        A server should have the option to be killed and restarted from the machine image without any change in the service’s or microservice’s behavior. When the old server is killed, the new server should rise from the ashes. This promotes:
            consistency - it exposes and drives configuration drift out of your environment.
            resiliency - it helps find situations where a server or service isn’t cleanly recoverable after being killed and restarted.
        Netflix’s Chaos Monkey - randomly selects and kills servers. When a new server is started, it should behave in the same fashion as the server that was killed.

Your build and deployment pipeline in action
    pipeline
        GitHub
        Travis CI
        Maven/Spotify Docker Plugin -  allows us to kick off the creation of a Docker build right from within Maven.
        Docker
        Docker Hub
        Python - Used to write the platform tests that are executed before a Docker image is deployed.
        Amazon’s EC2 Container Service (ECS)—The final destination for our microservices will be Docker instances deployed to Amazon’s Docker platform.

Beginning your build deploy/pipeline: GitHub and Travis CI
    The last thing that happens in the Maven build is the creation of a Docker container image that’s pushed to the local Docker repository running on your Travis build machine. The creation of the Docker image is carried out using the Spotify Docker plugin
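    A typical plugin entry looks roughly like the following pom.xml fragment (the version, image name, and paths below are illustrative, not taken from the book's repository):

    ```xml
    <plugin>
      <groupId>com.spotify</groupId>
      <artifactId>docker-maven-plugin</artifactId>
      <version>0.4.13</version>
      <configuration>
        <!-- Illustrative values: adjust the repository/image name to your own -->
        <imageName>yourrepo/${project.artifactId}</imageName>
        <dockerDirectory>${basedir}/target/docker</dockerDirectory>
        <resources>
          <resource>
            <targetPath>/</targetPath>
            <directory>${project.build.directory}</directory>
            <include>${project.build.finalName}.jar</include>
          </resource>
        </resources>
      </configuration>
    </plugin>
    ```

    With a configuration like this in place, `mvn clean package docker:build` builds the JAR and then the Docker image in one pass.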

    Python scripts that test the Spring Cloud Config server, the Eureka server, and the Zuul server.

Closing thoughts on the build/deployment pipeline
    Many shops will use provisioning tools like Ansible (https://github.com/ansible/ansible), Puppet (https://github.com/puppetlabs/puppet), or Chef (https://github.com/chef/chef) to install and configure the operating systems onto the virtual machine or container images being built














Other Notes
---------------------------






microservice pattern categories
 Core development patterns
 Routing patterns
 Client resiliency patterns
 Security patterns
 Logging and tracing patterns
 Build and deployment patterns



Service granularity
    1. Service granularity: What is the right level of responsibility the service should have?
    2. Communication protocols: How your client and service communicate data back and forth
    3. Interface design: How you are going to expose your service endpoints to clients
    4. Configuration management: How your services manage their application-specific configuration so that the code and configuration are independent entities
    5. Event processing: How you can use events to communicate state and data changes between services

Routing patterns
    1. Service routing gives the microservice client a single logical URL to talk to and acts as a policy enforcement point for things like authorization, authentication, and content checking.
    2. Service discovery abstracts away the physical location of the service from the client. New microservice instances can be added to scale up, and unhealthy service instances can be transparently removed from the service.

Client resiliency patterns
    1. Client-side load balancing — How do you cache the location of your service instances on the service client so that calls to multiple instances of a microservice are load balanced across all the healthy instances of that microservice?
    2. Circuit breaker pattern — How do you prevent a client from continuing to call a service that’s failing or suffering performance problems? When a service is running slowly, it consumes resources on the client calling it. You want failing microservice calls to fail fast so that the calling client can quickly respond and take appropriate action.
    3. Fallback pattern — When a service call fails, how do you provide a “plug-in” mechanism that will allow the service client to try to carry out its work through alternative means other than the microservice being called?
    4. Bulkhead pattern — Microservice applications use multiple distributed resources to carry out their work. How do you compartmentalize these calls so that the misbehavior of one service call doesn’t negatively impact the rest of the application?
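    As a rough sketch of the circuit breaker and fallback patterns (in the book these come from Netflix Hystrix; the class below is a hypothetical, deliberately simplified stand-in): after a threshold of consecutive failures the circuit opens, and subsequent calls fail fast to the fallback without touching the remote service.

    ```java
    import java.util.function.Supplier;

    // Simplified circuit breaker: opens after `threshold` consecutive failures,
    // after which calls fail fast to the fallback (no half-open state, no timers).
    class CircuitBreaker<T> {
        private final int threshold;
        private int consecutiveFailures = 0;

        CircuitBreaker(int threshold) { this.threshold = threshold; }

        boolean isOpen() { return consecutiveFailures >= threshold; }

        T call(Supplier<T> remoteCall, Supplier<T> fallback) {
            if (isOpen()) return fallback.get();   // fail fast: don't tie up client resources
            try {
                T result = remoteCall.get();
                consecutiveFailures = 0;           // a success closes the circuit
                return result;
            } catch (RuntimeException e) {
                consecutiveFailures++;
                return fallback.get();             // fallback pattern: alternative means
            }
        }

        public static void main(String[] args) {
            CircuitBreaker<String> breaker = new CircuitBreaker<>(3);
            Supplier<String> failing = () -> { throw new RuntimeException("service down"); };
            for (int i = 0; i < 3; i++) breaker.call(failing, () -> "cached-response");
            System.out.println("circuit open: " + breaker.isOpen()); // prints "circuit open: true"
        }
    }
    ```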

Security patterns
    1. Authentication — How do you determine the service client calling the service is who they say they are?
    2. Authorization — How do you determine whether the service client calling a microservice is allowed to undertake the action they’re trying to undertake?
    3. Credential management and propagation — How do you prevent a service client from constantly having to present their credentials for service calls involved in a transaction? Specifically, we’ll look at how token-based security standards such as OAuth2 and JSON Web Tokens (JWT) can be used to obtain a token that can be passed from service call to service call to authenticate and authorize the user.
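    The reason a JWT can be propagated from call to call is that its claims travel inside the token itself, as a Base64URL-encoded JSON payload. A minimal sketch (it only decodes the payload; a real service must also verify the signature, which is omitted here, and the token contents are made up):

    ```java
    import java.util.Base64;

    // A JWT has three dot-separated segments: header.payload.signature.
    // The middle segment is Base64URL-encoded JSON carrying the user's claims.
    class JwtPayloadReader {
        static String decodePayload(String token) {
            String[] parts = token.split("\\.");
            if (parts.length < 2) throw new IllegalArgumentException("not a JWT");
            return new String(Base64.getUrlDecoder().decode(parts[1]));
        }

        public static void main(String[] args) {
            String claims = "{\"sub\":\"john.doe\",\"scope\":\"read\"}";
            // Illustrative token: a real header and signature come from the OAuth2 server.
            String token = "eyJhbGciOiJIUzI1NiJ9."
                    + Base64.getUrlEncoder().withoutPadding().encodeToString(claims.getBytes())
                    + ".signature-goes-here";
            System.out.println(decodePayload(token)); // prints the claims JSON
        }
    }
    ```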

Microservice logging and tracing patterns
    1. Log correlation: All service log entries have a correlation ID that ties the log entry to a single transaction.
    2. Log aggregation: An aggregation mechanism collects all of the logs from all the service instances.
    3. Microservice transaction tracing: The development and operations teams can query the log data to find individual transactions. They should also be able to visualize the flow of all the services involved in a transaction.

Build and deployment patterns
    1. Infrastructure as code: We build our code and run our tests for our microservices. However, we also treat our infrastructure as code. When the microservice is compiled and packaged, we immediately bake and provision a virtual server or container image with the microservice installed on it.
    2. Immutable servers: The moment an image is baked and deployed, no developer or system administrator is allowed to make modifications to the servers. When promoting between environments, the entire container or image is started with environment-specific variables that are passed to the server when the server is first started.
    3. Phoenix servers: Because servers are constantly being torn down as part of the continuous integration process, new servers are constantly being started. This greatly decreases the chance of configuration drift between environments.


best practices
    Use roughly no more than four tables per service, to keep the service from growing too big.
    If a microservice is only doing database CRUD operations, it’s too small; it should own business logic.
    Version the URI.

    Remember, cloud-based servers are ephemeral. Don’t be afraid to start new instances of a service with their new configuration, direct traffic to the new services, and then tear down the old ones

service interfaces
    Microservices talk to one another using REST, URIs, JSON, and HTTP status codes.

DevOps story, microservices
    should be
        1. self-contained and independently deployable with multiple instances of the service being started up and torn down with a single software artifact.
        2. should be configurable. When a service instance starts up, it should read the data it needs to configure itself from a central location or have its configuration information passed on as environment variables. No human intervention should be required to configure the service
        3. Service instances should be located by clients through service discovery tools, not by their exact physical location.
        4. It should communicate its health.
    principles
        1. Service assembly - guarantee repeatability and consistency, so deployed exactly the same way
        2. Service bootstrapping - deploy an instance quickly in any environment
        3. Service registration/discovery - how do you make the new service instance discoverable by other application clients?
        4. Service monitoring— monitor and ensure that any faults in your microservice are routed around and that ailing service instances are taken down.

Building the Twelve-Factor microservice service application
    I. Codebase -     Each microservice should have its own independent code repository within the source control systems.
    II. Dependencies -    Explicitly declare and isolate dependencies. This allows the service always to be built with the same versions of its libraries.
    III. Config -    Store config in the environment. Configuration should never live in the same repository as the source code.
    IV. Backing services -    Treat backing services as attached resources. can swap databases etc easily.
    V. Build, release, run -    Strictly separate build and run stages. A built service is immutable and cannot be changed.
    VI. Processes -    Execute the app as one or more stateless processes.  microservices should always be stateless. They can be killed and replaced without data loss.
    VII. Port binding -    Export services via port binding.
    VIII. Concurrency -    Scale out via the process model. scale out horizontally
    IX. Disposability -    Maximize robustness with fast startup and graceful shutdown
    X. Dev/prod parity -    Keep development, staging, and production as similar as possible
    XI. Logs -    Treat logs as event streams
    XII. Admin processes -    Run admin/management tasks as one-off processes



Service assembly: packaging and deploying your microservices
    - From a DevOps perspective, one of the key concepts behind a microservice architecture is that multiple instances of a microservice can be deployed quickly in response to a changing application environment (for example, a sudden influx of user requests, problems within the infrastructure, and so on)
    - To accomplish this, a microservice needs to be packaged and installable as a single artifact with all of its dependencies defined within it. This artifact can then be deployed to any server with a Java JDK installed on it. These dependencies will also include the runtime engine (for example, an HTTP server or application container) that will host the microservice.


Service bootstrapping: managing configuration of your microservices
    When a microservice starts, any environment-specific information or application configuration information data should be
        • Passed into the starting service as environment variables
        • Read from a centralized configuration management repository  
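    That precedence can be sketched as follows. This is illustrative only (the variable names are made up, and Spring's own property resolution is far richer): an environment variable wins, otherwise the value fetched from the central configuration repository is used.

    ```java
    // Illustrative only: resolve a setting from an environment variable first,
    // falling back to a value that would come from a central config server.
    class ServiceConfig {
        static String resolve(String envVar, String centralValue) {
            String fromEnv = System.getenv(envVar);
            return (fromEnv != null && !fromEnv.isEmpty()) ? fromEnv : centralValue;
        }

        public static void main(String[] args) {
            // EXAMPLE_DATABASE_URL is presumably unset here, so the central value is used.
            System.out.println(resolve("EXAMPLE_DATABASE_URL",
                    "jdbc:postgresql://db.example.internal:5432/eagle_eye"));
        }
    }
    ```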

Service registration and discovery: how clients communicate with your microservices      
    By insisting that services are treated as short-lived, disposable objects, microservice architectures can achieve a high degree of scalability and availability by having multiple instances of a service running. Service demand and resiliency can be managed as quickly as the situation warrants. Each service has a unique, non-permanent IP address assigned to it. The downside of ephemeral services is that with services constantly coming up and down, managing a large pool of them manually is an invitation to an outage.

    A microservice instance needs to register itself with a third-party agent. This registration process is part of service discovery. When a microservice instance registers with a service discovery agent, it tells the agent two things: the physical IP address or domain address of the service instance, and a logical name that an application can use to look up the service.
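    What the discovery agent stores can be sketched as a map from logical name to the addresses of live instances; clients resolve the logical name and rotate across instances. This is a toy in-memory stand-in for illustration, not how Eureka is implemented:

    ```java
    import java.util.ArrayList;
    import java.util.HashMap;
    import java.util.List;
    import java.util.Map;
    import java.util.NoSuchElementException;

    // Toy registry: logical service name -> physical addresses of live instances.
    class ServiceRegistry {
        private final Map<String, List<String>> instances = new HashMap<>();
        private final Map<String, Integer> cursor = new HashMap<>();

        void register(String logicalName, String address) {
            instances.computeIfAbsent(logicalName, k -> new ArrayList<>()).add(address);
        }

        void deregister(String logicalName, String address) { // e.g. after a failed health check
            List<String> addrs = instances.get(logicalName);
            if (addrs != null) addrs.remove(address);
        }

        String lookup(String logicalName) {                   // round-robin across instances
            List<String> addrs = instances.get(logicalName);
            if (addrs == null || addrs.isEmpty()) throw new NoSuchElementException(logicalName);
            int i = cursor.merge(logicalName, 1, Integer::sum) % addrs.size();
            return addrs.get(i);
        }

        public static void main(String[] args) {
            ServiceRegistry registry = new ServiceRegistry();
            registry.register("organizationservice", "10.0.0.1:8080");
            registry.register("organizationservice", "10.0.0.2:8080");
            // Successive lookups alternate between the two instances.
            System.out.println(registry.lookup("organizationservice"));
            System.out.println(registry.lookup("organizationservice"));
        }
    }
    ```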


Communicating a microservice’s health
    - The service discovery agent monitors the health of a service instance. If the instance fails, the health check removes it from the pool of available instances.
    - If the service discovery agent discovers a problem with a service instance, it can take corrective action such as shutting down the ailing instance or bringing additional service instances up. In a microservices environment that uses REST, the simplest way to build a health check interface is to expose an HTTP endpoint that can return a JSON payload and HTTP status code.
    - Spring Actuator provides out-of-the-box operational endpoints that will help you understand and manage the health of your service.
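    The contract such a health endpoint exposes can be sketched as a status code plus a JSON body. Actuator's /health endpoint provides this for you; the model below is a deliberate simplification for illustration:

    ```java
    // Simplified model of a health endpoint's response: HTTP 200 with {"status":"UP"}
    // when dependencies are reachable, HTTP 503 with {"status":"DOWN"} otherwise.
    class HealthCheck {
        static int statusCode(boolean dependenciesUp) {
            return dependenciesUp ? 200 : 503;
        }

        static String body(boolean dependenciesUp) {
            return "{\"status\":\"" + (dependenciesUp ? "UP" : "DOWN") + "\"}";
        }

        public static void main(String[] args) {
            System.out.println(statusCode(true) + " " + body(true));   // healthy: stays in the pool
            System.out.println(statusCode(false) + " " + body(false)); // unhealthy: removed from the pool
        }
    }
    ```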




















A Running a cloud on your desktop
==================================================================================================

    Technology and patterns used throughout
        1 All projects use Apache Maven (http://maven.apache.org) as the build tool for the chapters.
        2 All services developed in the chapter compile to a Docker (http://docker.io) container image. Docker is an amazing runtime virtualization engine that runs on Windows, OS X, and Linux. Using Docker, I can build a complete runtime environment on the desktop that includes the application services and all the infrastructure needed to support them. Also, Docker, unlike more proprietary virtualization technologies, is easily portable across multiple cloud providers. I’m using Spotify’s Docker Maven plugin (https://github.com/spotify/docker-maven-plugin) to integrate the building of Docker containers with the Maven build process.
        3 To start the services after they’ve compiled into Docker images, I use Docker Compose to start the services as a group. I’ve purposely avoided more sophisticated Docker orchestration tools such as Kubernetes (https://github.com/kubernetes/kubernetes) or Mesos (http://mesos.apache.org/) to keep the chapter examples straightforward and portable.
    software
        Apache Maven
        Docker
        Git Client

    Every service directory in a chapter is structured as a Maven-based build project. Inside each project is a src/main directory with the following sub-directories:       
        1. java - Java source code used to build the service.
        2. docker - files needed to build a Docker image. The first file will always be called Dockerfile and contains the step-by-step instructions used by Docker to build the Docker image. The second file, run.sh, is a custom Bash script that runs inside the Docker container. This script ensures that the service doesn’t start until certain key dependencies (for example, the database being up and running) become available.
        3. resources - contains all the services’ application.yml files. While application configuration is stored in the Spring Cloud Config, all services have configuration that’s stored locally in the application.yml. Also, the resources directory will contain a schema.sql file containing all the SQL commands used to create the tables and pre-load data for the services into the Postgres database.

    Building and compiling the projects   
        This will execute the Maven pom.xml file in each of the service directories. It will also build the Docker images locally   
        mvn clean package docker:build

    Building the Docker image
        carried out by the Spotify Maven plugin, it
            1. It copies the executable jar for the service, along with the contents of the src/main/docker directory, to target/docker
            2. It executes the Dockerfile defined in the target/docker directory. The Dockerfile is a list of commands that are executed whenever a new Docker image for that service is provisioned.
            3. It pushes the Docker image to the local Docker image repository that’s installed when you install Docker.
        installation
            - Use Alpine Linux image which already has Java JDK installed on it.   
            - nc command tool is used to ping a server and see if a specific port is online. to ensure that before you launch your service, all its dependent services started.

Launching the services with Docker Compose       
    Docker Compose is a service orchestration tool that allows you to define services as a group and then launch them together as a single unit. Docker Compose also includes capabilities for defining environment variables for each service.
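    A docker-compose.yml for such a group looks roughly like this (the image names, ports, and environment variables below are illustrative, not the book's actual file):

    ```yaml
    version: '2'
    services:
      database:
        image: postgres:9.5
        ports:
          - "5432:5432"
        environment:
          POSTGRES_PASSWORD: "p0stgr@s"
      licensingservice:
        image: yourrepo/licensing-service
        ports:
          - "8080:8080"
        environment:
          PROFILE: "default"
          DATABASESERVER_PORT: "5432"
    ```

    Running `docker-compose up` then launches the whole group, and `docker-compose logs -f` tails the combined output of all the services.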










OAuth2 grant types
================================================================================================================
    OAuth2 is a flexible authorization framework that provides multiple mechanisms for applications to authenticate and authorize users without forcing them to share credentials. This flexibility, unfortunately, is also one of the reasons why OAuth2 is considered complicated. These mechanisms are called authentication grants. OAuth2 has four grant types that client applications can use to authenticate users, receive an access token, and then validate that token. These grants are
         Password
            1. Application owner registers application name with OAuth2 service, which provides a secret key
            2. User logs into EagleEye, which passes user credentials with application name and key to OAuth2 service
            3. OAuth2 authenticates user and application and provides access token
            4. EagleEye attaches access token to any service calls from user
            5. Protected services call OAuth2 to validate access token
         Client credential grants
            Used when an application needs to access an OAuth2-protected resource, but no human being is involved in the transaction. The OAuth2 server authenticates based on the application name and the secret key provided by the owner of the resource.
            1 The resource owner registers the EagleEye data analytics application with the OAuth2 service. The resource owner will provide the application name and receive back a secret key.
            2 When the EagleEye data analytics job runs, it will present its application name and secret key provided by the resource owner.
            3 The EagleEye OAuth2 service will authenticate the application using the application name and the secret key provided and then return back an OAuth2 access token.
            4 Every time the application calls one of the EagleEye services, it will present the OAuth2 access token it received with the service call.
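    At the wire level, step 2 amounts to an HTTP Basic Authorization header built from the application name and secret key, sent with a grant_type=client_credentials form body to the OAuth2 token endpoint. A sketch of just the header construction (the names and values are made up, and the endpoint path varies by setup):

    ```java
    import java.util.Base64;

    // Client credentials grant: the application name and secret key travel as
    // an HTTP Basic Authorization header on the token request.
    class ClientCredentialsRequest {
        static String basicAuthHeader(String appName, String secretKey) {
            String raw = appName + ":" + secretKey;
            return "Basic " + Base64.getEncoder().encodeToString(raw.getBytes());
        }

        public static void main(String[] args) {
            // The full request would be a POST to the OAuth2 token endpoint with
            //   Authorization: <header below>
            //   body: grant_type=client_credentials
            System.out.println(basicAuthHeader("eagleeye-analytics", "s3cret"));
        }
    }
    ```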
         Authorization code
            1 The EagleEye user logs in to EagleEye and generates an application name and application secret key for their Salesforce application. As part of the registration process, they’ll also provide a callback URL back to their Salesforce-based application. This callback URL is a Salesforce URL that will be called after the EagleEye OAuth2 server has authenticated the user’s EagleEye credentials.
            2 The user configures their Salesforce application with the following information:
                – The application name they created for Salesforce
                – The secret key they generated for Salesforce
                – A URL that points to the EagleEye OAuth2 login page
            Now when the user tries to use their Salesforce application and access their EagleEye data via the organization service, they’ll be redirected to the EagleEye login page via the URL above. The user provides their EagleEye credentials. If the credentials are valid, the EagleEye OAuth2 server generates an authorization code and redirects the user back to Salesforce via the callback URL provided in step 1. The authorization code is sent as a query parameter on the callback URL.
            3 The custom Salesforce application will persist this authorization code. Note: this authorization code isn’t an OAuth2 access token
            4 Once the authorization code has been stored, the custom Salesforce application can present the secret key it generated during the registration process and the authorization code back to the EagleEye OAuth2 server. The EagleEye OAuth2 server validates that the authorization code is valid and then returns an OAuth2 token to the custom Salesforce application. This authorization code is used every time the custom Salesforce application needs to authenticate the user and get an OAuth2 access token.
            5 The Salesforce application will call the EagleEye organization service, passing an OAuth2 token in the header.
            6 The organization service will validate the OAuth2 access token passed in to the EagleEye service call with the EagleEye OAuth2 service. If the token is valid, the organization service will process the user’s request.

         Implicit
            All the service interaction happens directly from the user’s client. Because the access token is directly exposed to a public client, it’s more vulnerable to attack and misuse, so it should be short-lived (1-2 hours). There is no concept of a refresh token.
            1 The owner of the JavaScript application has registered the application with the EagleEye OAuth2 server. They’ve provided an application name and also a callback URL that will be redirected with the OAuth2 access token for the user.
            2 The JavaScript application will call to the OAuth2 service. The JavaScript application must present a pre-registered application name. The OAuth2 server will force the user to authenticate.
            3 If the user successfully authenticates, the EagleEye OAuth2 service won’t return a token, but instead redirect the user back to a page the owner of the JavaScript application registered in step one. In the URL being redirected back to, the OAuth2 access token will be passed as a query parameter by the OAuth2 authentication service.
            4 The application will take the incoming request and run a JavaScript script that will parse the OAuth2 access token and store it (usually as a cookie)
            5 Every time a protected resource is called, the OAuth2 access token is presented to the calling service.
            6 The calling service will validate the OAuth2 token and check that the user is authorized to do the activity they’re attempting to do.

How tokens are refreshed
    When an OAuth2 access token is issued, it’s valid for a limited amount of time and will eventually expire. When the token expires, the calling application (and user) needs to re-authenticate with the OAuth2 service. However, in most of the OAuth2 grant flows, the OAuth2 server issues both an access token and a refresh token. A client can present the refresh token to the OAuth2 authentication service, and the service will validate the refresh token and then issue a new OAuth2 access token.

    1 The user’s token has expired.
    2 The application passes the expired token to the organization service.
    3 The organization service tries to validate the token with the OAuth2 service, which returns an HTTP status code 401 (unauthorized) and a JSON payload indicating that the token is no longer valid. The organization service returns an HTTP 401 status code to the calling application.
    4 The EagleEye application gets the 401 HTTP status code and the JSON payload indicating why the call failed from the organization service. The EagleEye application then calls the OAuth2 authentication service with the refresh token. The OAuth2 authentication service validates the refresh token and sends back a new access token.
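    The four steps above boil down to "on 401, refresh and retry once". A sketch with stubbed collaborators (the interfaces and names here are invented for illustration, not the book's API):

    ```java
    import java.util.function.Supplier;

    // On a 401 from a downstream service, exchange the refresh token for a new
    // access token (refreshFlow) and retry the call once with the fresh token.
    class TokenRefreshingClient {
        interface Api { int call(String accessToken); } // returns the HTTP status code

        private String accessToken;
        private final Supplier<String> refreshFlow;     // would call the OAuth2 service

        TokenRefreshingClient(String accessToken, Supplier<String> refreshFlow) {
            this.accessToken = accessToken;
            this.refreshFlow = refreshFlow;
        }

        int call(Api api) {
            int status = api.call(accessToken);
            if (status == 401) {                        // access token expired
                accessToken = refreshFlow.get();        // present the refresh token
                status = api.call(accessToken);         // retry with the new access token
            }
            return status;
        }

        public static void main(String[] args) {
            TokenRefreshingClient client = new TokenRefreshingClient("expired", () -> "fresh");
            // Stubbed organization service: rejects the expired token, accepts the fresh one.
            System.out.println(client.call(token -> "fresh".equals(token) ? 200 : 401));
        }
    }
    ```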












