The Case for Microservices Architecture

Microservice architecture is a software design pattern that structures an application as a collection of loosely coupled services rather than one monolithic application. It makes large-scale applications easier to build by breaking them into smaller components that can be designed and developed by smaller teams, and it fosters application portability and agility. Because different teams are responsible for different services, different parts of the application can be changed independently and in parallel.

What is Microservices Architecture?

Microservices architecture emerged in the early 2010s as a way to scale out applications and integrate multiple product lines within the same company. By embracing agile development methodologies, the approach, originally popularized by software-as-a-service (SaaS) applications, has since been applied successfully to traditional enterprise applications as well. According to a 2017 survey by CollabNet, enterprise adoption was still in its infancy at the time, with only 4.9% of respondents reporting that their organizations were running a microservices architecture in production.

Advantages of Microservices Architecture

  • Every service has a clear purpose
  • A single service can be replaced with another
  • Applications are easier to deploy
  • Microservice architecture goes hand in hand with DevOps
  • With a small team, it is easy to operate
  • It allows you to scale quickly
  • It offers quick responses to issues
  • Automation comes standard with microservices
  • It is a manageable architectural pattern
  • It is very flexible
  • It is easier to test and deploy
  • It is less prone to errors
  • It allows you to modularize your code
  • It is often cost-effective

Microservice architecture also simplifies maintaining your application: there are thousands of things that can go wrong with your app, so your developers need all the tools they can get their hands on.

Disadvantages of Microservices Architecture

There are several downsides to microservice architecture that are much less of a problem with monoliths. The biggest is the set of misconceptions about how services are created. Developers often don't understand how services should be built: when services are assembled from well-understood libraries, the development process is simple, but when a company moves to a microservice architecture it is not obvious how to carve out and build the services. Development teams must either rely on existing libraries or learn how to build the services themselves. A further drawback is that individual developers don't have the authority to shut down services.

How to implement a Microservices architecture

One of the biggest benefits of a microservices architecture is that a small team can own the design and evolution of an application. Since each service can be written in any language or framework, you can choose the tools best suited to the functionality you want to expose to your users. Microservice architecture can also help scale the application: services stay small, and components can be reused more easily, keeping each deployable unit as small as possible.
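
To make this concrete, here is a minimal sketch of what a single-purpose service might look like, using only Python's standard library. The endpoint, port, and in-memory data are illustrative assumptions, not details from any particular system.

```python
# Minimal single-purpose "stock" service (illustrative sketch).
# Uses only the Python standard library; the /stock/<sku> endpoint,
# port, and in-memory data are assumptions for demonstration.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

STOCK = {"sku-1": 12, "sku-2": 0}  # stand-in for this service's own datastore


class StockHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Expect paths like /stock/sku-1
        parts = self.path.strip("/").split("/")
        if len(parts) == 2 and parts[0] == "stock" and parts[1] in STOCK:
            body = json.dumps({"sku": parts[1], "quantity": STOCK[parts[1]]}).encode()
            self.send_response(200)
        else:
            body = json.dumps({"error": "not found"}).encode()
            self.send_response(404)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)


if __name__ == "__main__":
    # Each microservice runs as its own process and owns its own data.
    HTTPServer(("0.0.0.0", 8080), StockHandler).serve_forever()
```

Because the service owns a single responsibility and its own data, a small team can rewrite or replace it without touching the rest of the application.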

Microservices and Continuous Integration

Many large projects have struggled to build big applications and push them into production because of the challenges involved in developing the application and releasing it. In some ways, the monolithic application was easier to develop, maintain, and deliver; however, this approach increased the risk of delivering late and left the whole team exposed when it was too late to respond to issues in the system. A monolithic application is only fully validated once it is deployed to production, and as soon as the team is satisfied with the application and the platform, it effectively has to start the whole cycle over again to deliver the next feature.

Microservices and Continuous Deployment

Migrating to a microservice architecture is a big improvement for developers because microservices let them easily deploy and upgrade components onto new platforms. Closely related is continuous delivery: a methodology that guarantees a software update or application patch can be made available to customers at all times, or at the very least within a defined amount of time. Continuous delivery is often associated with migrating to a microservices architecture, and the initial worry is that it is hard to integrate more than a few services; continuous delivery itself, however, has nothing to do with how many teams work on how many services.

How to make a Microservices Architecture scalable

One of the main goals of a microservice architecture is to keep your applications scalable. When you break up an application, the number of services does not necessarily need to shrink; in fact, you may want to increase it if the application needs more power and scalability. To scale a microservice architecture you increase the amount of work your services can handle in parallel. For instance, if your application needs to process 10,000 transactions per second, a handful of instances doing all the work will not be enough; you scale out by running many independent instances of the relevant services. This scale is usually achieved by having the microservices in your architecture communicate with each other through a message queue.
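
As a toy illustration of the queue-based scaling idea, the sketch below has identical workers draining a shared queue, so throughput is raised by adding worker instances rather than new kinds of services. In a real deployment the in-process queue would be an external message broker (for example RabbitMQ or Kafka); that choice, the worker count, and the workload are assumptions for illustration only.

```python
# Toy sketch of queue-based scaling: identical workers drain a shared queue, so
# throughput is raised by adding worker instances rather than new kinds of services.
# In a real deployment the in-process queue would be an external message broker;
# the worker count and workload here are illustrative assumptions.
import queue
import threading

work_queue = queue.Queue()


def worker():
    while True:
        txn = work_queue.get()
        if txn is None:              # sentinel: shut this worker down
            work_queue.task_done()
            break
        # ... process the transaction here ...
        work_queue.task_done()


NUM_WORKERS = 4                      # scale out by raising this (or adding hosts)
threads = [threading.Thread(target=worker) for _ in range(NUM_WORKERS)]
for t in threads:
    t.start()

for txn in range(10_000):            # simulate 10,000 incoming transactions
    work_queue.put(txn)

work_queue.join()                    # wait until every transaction is processed
for _ in threads:
    work_queue.put(None)             # tell each worker to stop
for t in threads:
    t.join()
```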

What is a good testing strategy for Microservice architecture?

Every team should develop a test strategy that aligns with its agile development methodology. When a team has a robust test strategy in place, its members can more easily identify and solve problems as they arise. Docker (and Kubernetes) are also part of the picture: for many teams, Docker is a critical piece of the microservice architectural puzzle. Docker helps companies be more agile because it makes environments portable, and it allows teams to run more tests easily and quickly. A container on its own, however, does not protect against downtime when it stops working, which is why Kubernetes, which restarts and reschedules failed containers, is becoming a popular addition for microservices teams.
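
At the base of such a strategy sit small, fast unit tests that each team can run on every change. The sketch below shows the idea with Python's standard unittest module; the calculate_order_total function is a hypothetical example, not something from the article.

```python
# Unit-level test sketch using the standard-library unittest module.
# calculate_order_total is a hypothetical function used only for illustration.
import unittest


def calculate_order_total(prices, discount=0.0):
    """Business logic owned by a single (hypothetical) order service."""
    if not 0.0 <= discount < 1.0:
        raise ValueError("discount must be in [0, 1)")
    return round(sum(prices) * (1.0 - discount), 2)


class OrderTotalTests(unittest.TestCase):
    def test_total_without_discount(self):
        self.assertEqual(calculate_order_total([10.0, 5.5]), 15.5)

    def test_total_with_discount(self):
        self.assertEqual(calculate_order_total([100.0], discount=0.25), 75.0)

    def test_invalid_discount_is_rejected(self):
        with self.assertRaises(ValueError):
            calculate_order_total([10.0], discount=1.5)


if __name__ == "__main__":
    unittest.main()
```

Tests like these run in milliseconds, so they fit naturally into the frequent builds that containerized microservice teams rely on.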

Conclusion

The combination of RESTful web services and microservices makes this an interesting design pattern for MVC web applications. It introduces a new way for a web application to process a user's request, and it improves the responsiveness of the user experience because response times become less dependent on the overall complexity of the application. It is hard to get lost in an application built this way, and there are all kinds of interesting connections to be made with web services and RESTful APIs. This is just a first step in a long journey.

All of this has made microservice architecture extremely popular in recent years. With more than 25% of all new applications being built with microservices, the business value it brings has finally begun to be appreciated. Even so, there are still some missing elements in the architecture that need to be addressed. The first is modular and maintainable services: while many successful microservices are designed in a functional programming style, the majority are implemented in object-oriented languages, which shows that we still need to work on the design and maintenance of services to avoid big bugs in the pipeline. In other words, there are a lot of little things that still need to be improved in the design. The second is time to market.

What are microservices? Tips for successful implementation

What are Microservices?

Software always keeps evolving, both in functionality and in architecture. A decade or so ago we built software using n-tier technology; that later led to Service Oriented Architecture and now to microservices. This does not mean the earlier technologies were inferior, but rather that the way we build software has changed, and so have expectations.

Microservices are an evolution of service oriented architecture in which the services are designed to be smaller: complex capabilities are composed of smaller services that work cohesively together. They allow multiple languages and frameworks to be used, as long as the services adhere to HTTP REST.

Microservices were conceived to solve the problems associated with frequent releases. They don't need to be designed for maximum capacity up front: because the system is composed of smaller services, elastic scalability (scale on demand) can be leveraged, possibly using virtualization and rapid deployment.

Microservices Architecture

The shift from monolithic architectures to more loosely coupled services came from the challenges faced when frequent releases were sought. Monolithic applications brought about coupling, as the code base comprises multiple horizontal and vertical layers, which in turn created code-reuse challenges. The increased complexity of the system, along with long build, test, and deployment cycles, amounted to considerable lead time, and this was true even for small changes because they required complete deployments.

Microservices are designed to be domain specific, built around problem areas referred to as bounded contexts. They can evolve independently and can be built using different architectures as long as they communicate over HTTP REST. This allows for agility and also makes cloud-native designs easier to implement.

Building Microservices

In contrast to monolithic systems, microservices are composed of multiple services that each represent a domain. The first task is therefore to identify domains; this can be done by investigating the working system, decomposing it into smaller pieces, and defining the relationships between these services. The guideline for microservice design is that each service should contain a set of related functionality with little or no cross-domain operations.

Not only are microservices required to enable rapid deployments and agility, they also need rapid deployments and agility to work. This is due to the increased number of services, which in turn need frequent deployments as they evolve independently. Test automation should accompany the deployments to quickly identify issues whenever a new deployment occurs.
When designing the services, the key is to provide proper versioning and the ability to function at reduced capacity when a non-critical microservice fails. There should also be instrumentation, such as log aggregators and trace-id implementations, that lets us inspect the system as a whole.

Advantages of Microservices

Microservices don't make traditional services obsolete; they do, however, try to solve the problems associated with rapid deployments and elastic scalability. The following are a few advantages:

  • Microservices are protocol aware, that is, they leverage HTTP REST for their communication
  • Microservices allow for heterogeneous interoperability (also referred to as polyglot development): you can pick your programming language of choice as long as the service communicates over HTTP REST
  • Microservices allow each unit of work to be called from any other unit of work within the system; though this is an advantage, guidelines sometimes need to be established to define a hierarchy of invocations and prevent circular references
  • Microservices enable agility and quick deployments, though they also need quick deployments to be successful
  • Microservices use distribution (network calls for all interactions), so scale on demand (elastic scalability) is possible by adding service instances when the system is under load
  • Microservices present a well-defined service boundary; however, they should be properly versioned to allow backward and possibly forward compatibility (see the versioning sketch after this list)
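
The versioning point deserves a concrete sketch. One common approach, assumed here purely for illustration, is to expose versioned URL paths so that old clients keep their response shape while new clients opt into the new one; the routes and field names below are made up.

```python
# Sketch of URL-based API versioning so old clients keep working while new
# clients get the richer payload; routes and field names are illustrative assumptions.
def get_customer_v1(customer_id):
    # Original contract: flat name field, kept for backward compatibility.
    return {"id": customer_id, "name": "Ada Lovelace"}


def get_customer_v2(customer_id):
    # Newer contract: structured name; v1 clients are unaffected.
    return {"id": customer_id, "name": {"first": "Ada", "last": "Lovelace"}}


ROUTES = {
    "/v1/customers": get_customer_v1,
    "/v2/customers": get_customer_v2,
}


def handle(path, customer_id):
    handler = ROUTES.get(path)
    if handler is None:
        return 404, {"error": "unknown version"}
    return 200, handler(customer_id)


print(handle("/v1/customers", 42))   # old clients keep their response shape
print(handle("/v2/customers", 42))   # new clients opt into the new shape
```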

Disadvantages of Microservices

Microservices definitely offer a few advantages over traditional monolithic designs. They bring the agility and scalability that previous designs lacked; however, they also bring challenges around increased complexity and the need for a robust DevOps strategy.

The following are the challenges faced; however, these can be mitigated as described in the next section.

  • Microservices bring increased complexity due to the increased number of services; this is aggravated when an overly fine-grained service decomposition is used.
  • Microservices increase deployment costs owing to the increased number of services.
  • Microservices bring increased distribution cost owing to the increased network communication between services. This includes the cost of connection setup and tear-down, and too many network calls will quickly degrade the system.
  • Microservices can reduce the reliability of the system owing to the increased number of services, each of which is a potential point of failure.

Recommended practices

Microservices enable us to build scalable applications; however, with the paradigm shift we need to adopt practices that will make the adoption successful.

Design considerations must account for what happens when one or more services fail. The increased complexity due to the number of services also calls for a comprehensive strategy for monitoring, deployment, and test execution.

Design Considerations

  • Use a circuit breaker when latency increases, and prefer degraded functionality over outright failure (a minimal sketch follows this list)
  • Consider a hybrid architecture (debated) that uses hierarchy and service-based rules to dictate which services can invoke which
  • Practice domain-driven design
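
Here is a minimal circuit-breaker sketch for the first point above, assuming a simple failure-count threshold and a cool-down period; the thresholds, timings, fallback value, and the failing downstream call are illustrative, and production-grade libraries are considerably more involved.

```python
# Minimal circuit-breaker sketch: after too many consecutive failures the call
# is short-circuited and a degraded fallback is returned until a cool-down passes.
# Thresholds, timings, and the fallback value are illustrative assumptions.
import time


class CircuitBreaker:
    def __init__(self, max_failures=3, reset_seconds=30.0):
        self.max_failures = max_failures
        self.reset_seconds = reset_seconds
        self.failure_count = 0
        self.opened_at = None        # None means the circuit is closed (healthy)

    def call(self, func, fallback):
        if self.opened_at is not None:
            if time.monotonic() - self.opened_at < self.reset_seconds:
                return fallback      # fail fast with degraded functionality
            self.opened_at = None    # cool-down elapsed: try the service again
            self.failure_count = 0
        try:
            result = func()
        except Exception:
            self.failure_count += 1
            if self.failure_count >= self.max_failures:
                self.opened_at = time.monotonic()
            return fallback
        self.failure_count = 0
        return result


def fetch_recommendations():
    # Hypothetical downstream call; simulate a slow/failing dependency.
    raise TimeoutError("downstream service is slow")


breaker = CircuitBreaker()
print(breaker.call(fetch_recommendations, fallback=[]))  # -> [] (degraded, not broken)
```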

Monitoring Microservices

The increased complexity due to the number of services makes monitoring imperative. The agility of the services calls for instrumentation such as trace ids and unified log aggregators to track and monitor the system.

  • With trace ids, a unique key is generated for each request and propagated across services, so an entire interaction can be traced and exception scenarios can be investigated (see the sketch after this list).
  • Unified log aggregators give us a dashboard view of the entire system; we can identify which service is degraded and take the necessary steps to either scale it or roll back a deployment.
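
A minimal sketch of trace-id propagation, assuming a hypothetical X-Trace-Id header and log format: the same id is attached to every log line and passed along to downstream calls, so a log aggregator can stitch one interaction back together across services.

```python
# Sketch of trace-id propagation: a unique id is attached to every log line so a
# log aggregator can stitch one user interaction back together across services.
# The header name and log format are assumptions for illustration.
import logging
import uuid

logging.basicConfig(
    format="%(asctime)s %(levelname)s trace=%(trace_id)s %(message)s",
    level=logging.INFO,
)
log = logging.getLogger("orders")


def handle_request(incoming_headers):
    # Reuse the caller's trace id if present, otherwise start a new trace.
    trace_id = incoming_headers.get("X-Trace-Id", str(uuid.uuid4()))
    extra = {"trace_id": trace_id}
    log.info("order received", extra=extra)
    # Pass the same id along when calling the next service.
    outgoing_headers = {"X-Trace-Id": trace_id}
    log.info("calling payment service", extra=extra)
    return outgoing_headers


handle_request({})                          # starts a new trace
handle_request({"X-Trace-Id": "abc-123"})   # continues an existing trace
```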

Microservices Deployment

The increased complexity due to the number of services brings deployment challenges. These can be mitigated using a DevOps (continuous deployment) strategy to systemize the process.

Continuous deployment is key, since both the number of moving parts and the frequency of deployments increase as each service evolves independently.

Testing Microservices

The increased complexity due to the number of services makes it crucial to adopt a continuous testing strategy as part of the DevOps (continuous deployment) strategy.

End-to-end and system integration tests should be run after each deployment.
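
A post-deployment smoke test can be as simple as checking a health endpoint on every service. The sketch below uses only the Python standard library; the service names and health-check URLs are assumptions to adapt to your own system.

```python
# Post-deployment smoke-test sketch using only the standard library.
# Service names and health endpoint URLs are assumptions; point them at your own services.
import unittest
import urllib.request

SERVICES = {
    "orders": "http://localhost:8080/health",
    "payments": "http://localhost:8081/health",
}


class SmokeTests(unittest.TestCase):
    def test_every_service_reports_healthy(self):
        for name, url in SERVICES.items():
            with self.subTest(service=name):
                with urllib.request.urlopen(url, timeout=5) as response:
                    self.assertEqual(response.status, 200)


if __name__ == "__main__":
    unittest.main()
```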

Conclusion

Microservices are the next iteration of Service Oriented Architecture. They emphasize HTTP REST interaction between services and focus on the agility of the services, using a DevOps continuous deployment strategy.

The architecture keeps agility at the forefront and allows services to evolve independently. It also accounts for the fact that having many services can decrease overall reliability, and therefore aims to build systems that can scale or work in an acceptably degraded fashion instead of failing outright.

Vagrant Provisioning – Setting up the Environment

Provisioning is the process of setting up the environment: installing software and applying configuration. It happens as part of the 'vagrant up' process, which starts and provisions the environment.

The default Vagrant boxes are usually generic and most likely lack the specific configuration our environment needs. One way to customize the environment is to ssh into the machine using 'vagrant ssh' and install software ad hoc. However, the recommended way of provisioning is to define a repeatable process; this way we can build environments that are automatically provisioned and consistent.

Vagrant offers multiple options for provisioning the machine, ranging from the shell scripts that most Linux users and sysadmins prefer to industry-standard configuration management tools, including (but not limited to) Ansible, Chef, Puppet, Salt, and CFEngine.
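
As a small illustration, the Vagrantfile fragment below (Vagrant's Ruby-based configuration format) wires in a repeatable shell provisioner; the box name and the installed package are assumptions, not recommendations.

```ruby
# Vagrantfile sketch: a repeatable shell provisioner.
# The box name and the installed package are illustrative assumptions.
Vagrant.configure("2") do |config|
  config.vm.box = "ubuntu/focal64"

  # Runs on the first `vagrant up` (or when provisioning is forced with --provision).
  config.vm.provision "shell", inline: <<-SHELL
    apt-get update
    apt-get install -y nginx
  SHELL
end
```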

When does Vagrant Provisioning happen?

Vagrant has a lifecycle in which a virtual machine is created, suspended, destroyed, and so on. The 'vagrant up' command is responsible for bringing the system up regardless of its current state.

Provisioning automatically happens the first time 'vagrant up' is executed. During this run Vagrant checks for the existence of the box and also validates whether any updates need to be applied. The next step is to apply the customizations defined in the configuration file and bring up the machine.

However, since the same 'vagrant up' command can also be used to wake up or boot a virtual machine that has already been created, Vagrant must be told explicitly to run the provisioners again by passing the '--provision' flag.

Vagrant also allows the behavior during the machine creation phase to be customized if we don't want provisioning to happen: issuing the '--no-provision' flag will skip provisioning.

Reference

Official Provisioning Documentation: https://www.vagrantup.com/docs/provisioning/

Vagrantfile – Defining the Virtual Machines

The Vagrantfile describes the virtual machine and how to configure and provision it. There is one Vagrantfile per project, and it is an asset that can and should be committed to source control. The file is then available for team members to download so they can create environments that are identical to each other.

Upon issuing a 'vagrant up' command, Vagrant will set up the machine as described in the Vagrantfile. The Vagrantfile uses the Ruby language for its definition; a working knowledge of Ruby is beneficial but not necessary, as most changes require only simple variable assignments.
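
For orientation, here is a minimal Vagrantfile sketch showing the kind of simple assignments referred to above; the box name, forwarded port, and resource sizes are illustrative assumptions.

```ruby
# Minimal Vagrantfile sketch: mostly simple variable assignments.
# Box name, forwarded port, and resource sizes are illustrative assumptions.
Vagrant.configure("2") do |config|
  config.vm.box = "ubuntu/focal64"
  config.vm.hostname = "dev-box"
  config.vm.network "forwarded_port", guest: 80, host: 8080

  config.vm.provider "virtualbox" do |vb|
    vb.memory = 2048
    vb.cpus = 2
  end
end
```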

Vagrantfile loading order – Loading & Merging

Like most environments, Vagrant allows configuration to be defined at different levels. These are loaded in a specific order and merged (i.e. overridden) along the way, allowing project-level settings to take precedence over generic settings defined at the system level.

Vagrantfile – Describing your Virtual Machine

The following is the load order for Vagrantfiles; if a Vagrantfile is not defined at a given location, Vagrant continues to the next step.

  • Vagrantfile from the gem directory
  • Vagrantfile that comes packaged with the box
  • Vagrantfile in the Vagrant home directory (~/.vagrant.d)
  • Vagrantfile from the project directory
  • Multi-machine overrides if defined. (Configurations where a single Vagrantfile defines multiple guest machines where the virtual machines work together or are associated with each other)
  • Provider-specific overrides if defined (Configuration options defined by providers to expose distinct functionality that is applicable to the provider)

Official documentation –

https://www.vagrantup.com/docs/vagrantfile/

Docker vs. Vagrant – How Do They Stack Up?

In short

Vagrant is a tool geared towards administering a consistent development environment workflow spanning various operating systems. Docker is a container management tool that can consistently run software provided that a containerization system is present.

There are benefits and drawbacks for each type of virtualized system. If one desires total isolation with guaranteed resources, a full VM would be the strategy to use. For those who only desire to isolate processes from each other and wish to operate a lot of them using a moderately sized host, then Docker/LXC/runC is definitely the strategy to use.

Technical Considerations

  • Vagrant is easier to understand and easier to get up and running, but it can be very resource intensive (in terms of RAM and disk space).
  • Docker's architecture is harder to understand and can be harder to get up and running, but it is much faster, uses much less CPU and RAM, and potentially uses much less disk space than Vagrant VMs.

How does Docker work?

Containerization

Docker makes use of containers that include your application as well as its dependencies but share the kernel (operating system) with other containers. Containers run as isolated processes on the host operating system and are not tied to any specific infrastructure (they are able to run on any computer). Containers are typically more lightweight than virtual machines, so starting and stopping them is exceedingly fast. Development machines usually don't have a containerization system built in, so on those platforms Docker runs a virtual machine with Linux installed to provide one.

Docker is a Linux-only virtual environment (VE) tool, as opposed to a VM tool. It builds on LXC (Linux Containers), which uses the kernel's cgroups functionality to allow the creation and running of multiple isolated Linux virtual environments on a single control host. In contrast to a VM, a VE like Docker doesn't create its own virtual computer with a distinct OS, processors, and hardware emulation. A VE is VM-lite: it rides on the existing kernel's view of the underlying hardware and merely creates a container in which to run your apps, and it can even recreate the OS if desired, since the OS is merely another application running on the kernel. It places only a little additional load on the system, so in contrast to a traditional VM there is very little overhead when using Docker. Because of the shared kernel, Docker's isolation isn't as strong as a full VM's, but it suits many scenarios just fine.

How does Vagrant work?

Virtualization

Vagrant uses virtual machines to run environments independent of the host machine. This is accomplished through virtualization software such as VirtualBox or VMware. Each environment has its own virtual machine and is configured by means of a Vagrantfile. The Vagrantfile tells Vagrant how to set up the virtual machine and which scripts to run in order to provision the environment. The downside of this approach is that each virtual machine includes not only your application and all of its libraries but the entire guest operating system as well, which can significantly add to the size of the image.

Vagrant lets you script and package the VM configuration along with the provisioning setup. It is engineered to run on top of nearly every VM tool; however, default support is only included for VirtualBox (others are supported through plugins). Vagrant also integrates with configuration management tools such as Puppet and Chef to provision VM setups and configs.

Where will Docker and Vagrant Shine?

If you need a higher level of separation of hardware resources, then you should use virtualization (i.e. VMs). The ideal use case is public cloud solutions, which demand stringent resource separation between VMs running on the same hardware. The implication is that resources are guaranteed at the hardware level, but at the cost of heavier images and longer startup times. You also get support for more OS platforms, such as Linux, Unix, and Windows.

If you do not need strict resource separation and want your application bundled with its user-space dependencies, then containers are ideal. The implications are faster startup times and very lightweight images, with weaker isolation and no guaranteed resources at the hardware level. Also, the supported OS platform is Linux only.

In Conclusion

Although Vagrant and Docker appear to be competitors with an overlapping feature set, they can be used together so that their functionality complements each other. In such a scenario, Vagrant creates a base VM, and when you want different configurations that all build on this base VM, Docker is used to provision and create different lightweight versions. In other words, Vagrant abstracts the machine whereas Docker abstracts the application.

Additional Resources

https://stackoverflow.com/questions/16047306/how-is-docker-different-from-a-normal-virtual-machine?rq=1

Using DevOps to help increase Systems of Engagement

Why Is DevOps Needed for Systems of Engagement?

Making any kind of change in business is often hard and typically requires an investment. Anytime an organization adopts any kind of new technology, methodology, or approach, that adoption ought to be driven by a business need. To build a business case for adopting DevOps, one must understand the business need for it, which includes the challenges that it addresses.

The Business Need

Organizations strive to create innovative applications or services to solve business problems, either to address internal business needs or to provide services that reach their clients or end users. A majority of organizations have trouble undertaking software projects successfully, and their failures are often associated with challenges in software development and delivery. Though most enterprises consider software development and delivery critical, only a tiny percentage feel that their teams are effective. This execution gap leads to missed business opportunities.

This problem has been further amplified by a significant shift in the kinds of applications that businesses are expected to deliver, from systems of record to systems of engagement:

Systems of record:

Conventional software applications are often large systems that function as systems of record, which include things like massive amounts of data and/or transactions and are intended to be highly reliable and stable. As these applications don’t need to change often, organizations will often meet the needs of their customers and their own business needs by delivering only one or two significant new releases a year.

Systems of engagement:

With the advent of mobile communications and the maturity of web applications, systems of record are being supplemented by systems of engagement, which customers can access directly and use to interact with the business. Such applications must be easy to use, high performing, and capable of rapid change to address customers’ changing behavior and evolving market forces.

Because systems of engagement are utilized directly by customers, they demand intense focus on user experience, speed of delivery, and agility – in other words, a DevOps approach.

Systems of engagement aren’t isolated islands and are often tied to systems of record, so rapid changes to systems of engagement bring about changes to systems of record. Any kind of system that needs rapid delivery of innovation requires DevOps. Such innovation is driven primarily by emerging technology trends such as cloud computing, mobile applications, Big Data, and social media, which may affect all types of systems.

Recognizing the Business Value

DevOps applies agile and lean principles across the entire software supply chain. It enables a business to maximize the speed of its delivery of a product or service, from initial idea to production release to customer feedback to enhancements based on that feedback.

Because DevOps improves the way that a business delivers value to its customers, suppliers, and partners, it’s an essential business process, not just an IT capability.

DevOps provides significant return on investment in three areas:

  • Enhanced customer experience
  • Increased capacity to innovate
  • Faster time to value

In conclusion, DevOps helps deliver better systems of engagement and helps businesses reach their customers by adapting to their changing behavior and keeping them engaged.

Service Oriented Architecture vs Microservices Architecture

What is Service Oriented Architecture (SOA)?

Service Oriented Architecture (SOA) is a software structuring principle whose premise is that software systems are built around services, which can then be consumed by applications built independently around those services. Service providers publish a service description that is available to consumers, and consumers interact with the services using those descriptions (for example, WSDL).

In a nutshell, the goal of SOA is to decouple systems by allowing the service and the client to evolve independently. With proper versioning, services can provide newer capabilities for new clients while legacy clients remain operational and upgrade on their own schedules. The following are the attributes that SOA services adhere to:

  • Boundaries are explicit
  • Service compatibility is based on policy
  • Services are autonomous
  • Services share schema and contract

SOA or Tiered Architecture

Microservices Architecture (MSA)

While SOA defines traits that services need to adhere to, the definition of microservices is less well established. Microservices are an evolution of SOA with the aim of creating modular services; the primary goal of microservices is to evolve independently with a single application focus. An application can be composed of hundreds of microservices, each driving an independent feature, which allows them to be built on different platforms and deployed independently. Scalability is improved by allowing a service to span multiple instances as demand increases, and fault tolerance improves as well, since a single service failing to load will not cause the whole application to fail.

Microservices have gained popularity with teams adopting Continuous Integration and Continuous Delivery; using rapid deployment techniques, an application composed of hundreds of microservices can have multiple deployments during the day. This is a big shift from SOA, where large, infrequent deployments were prevalent. The downside is that although microservices can be scaled easily, an in-memory function call is always less resource intensive than an out-of-process request.

Microservice Architecture

Conclusion

SOA came about to decouple monolithic applications and encourage the use of contracts to drive application development. This brought about the use of WSDL as a service description along with standardized communication protocols, allowing clients to consume a service and evolve at their own pace. The downsides of SOA were that deployments took longer as services grew in size, and with a larger footprint scalability became a challenge. Enter microservices, which take advantage of Continuous Integration and Continuous Delivery pipelines and, because they are feature focused, can be scaled more easily and are more fault tolerant.

Microservices vs SOA

See also

https://docs.microsoft.com/en-us/azure/service-fabric/service-fabric-overview-microservices