One of the most common problems you may face as a project grows is the need to maintain multiple environments. This may be essential to get enough capacity to serve all customers or, for example, to run different versions of an application side by side. In these circumstances, you will likely encounter the problem of properly distributing traffic between these project copies, which brings a number of challenges, such as choosing a correct routing method and balancing the load across servers, even for experienced developers.
To make these problems easier to solve, MilesWeb Cloud hosting offers a completely free and easy-to-use solution based on an automatically configured load balancer. It is delivered as a special Traffic Distributor add-on, available for quick one-click installation through the MilesWeb Cloud Marketplace, and provides intelligent traffic routing based on your needs.
With this solution, you can configure intelligent load balancing between a pair of hosts and benefit from the following features and opportunities it provides:
Compared to running a single server, the Traffic Distributor speeds up request processing, reduces user response latency and, in general, handles more simultaneous connections without failures.
With the MilesWeb Cloud Traffic Distributor solution, you can choose from three different routing methods to find the one that best fits your needs. Each of the available options has its own specifics and usage purposes, which should be taken into account during selection:
Round Robin - The simplest and most widely used routing method, which distributes traffic evenly across your environment by directing each request to the backends in rotation (i.e. one by one), according to the configured backend weights.
Note : To use this option, you must provide identical content on both of your backends (since the data requested by users will be loaded from both of them).
Sticky Sessions - This type of routing is based on "sticking" each user to a particular backend (according to the set server weights), which will process all of that user's requests until the corresponding session, created on the first visit to the app, expires.
Failover - This type of traffic routing allows you to configure a backup of your primary server and keep it in standby mode (i.e. in reserve) until the first server fails. If there is a problem with the main backend, all requests are automatically redirected to the operating server, so your users will probably not notice any interruption in the application's work.
To get a Traffic Distributor, you fill out a form with a series of key parameters (such as the hosts to route requests between, the routing type, the traffic distribution ratio, etc.) and start the installation process with a single click. After creation, the Traffic Distributor is represented as a separate environment with an NGINX load-balancing server (with a predefined number of default nodes) and a special add-on installed on top of it.
During the installation, you also define an entry point for it, i.e. whether requests will be processed through the shared load balancer or via the public IP addresses attached to each of the load balancer nodes.
Tip : Traffic Distributor works over the widely used HTTP and HTTPS protocols, but is also suitable for any other protocol that runs on top of them (including WebSockets). In that case, load balancing is performed only during the HTTP handshake, after which a persistent WebSocket connection to the backend is established.
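To illustrate the WebSocket behavior mentioned in the tip above, here is a minimal NGINX sketch of how such traffic is typically proxied: the load balancer only needs to forward the HTTP Upgrade handshake to the backend. The upstream name `distributed_app` and the addresses are placeholders for illustration, not the actual configuration generated by the add-on.

```nginx
# Hypothetical sketch: backend addresses are placeholders.
upstream distributed_app {
    server 10.0.0.1;
    server 10.0.0.2;
}

server {
    listen 80;

    location / {
        proxy_pass http://distributed_app;

        # Forward the HTTP Upgrade handshake so that a persistent
        # WebSocket connection to the chosen backend can be established;
        # after the handshake, no further balancing takes place for
        # that connection.
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
    }
}
```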
This way, you get an extremely flexible traffic distribution tool in the cloud, designed to help you reach multiple goals - from simple routing between identically loaded servers to much more complex scenarios such as blue-green deployment for installing application updates with zero downtime, continuous A/B testing, advanced fault protection, and so on.
Round Robin is the simplest and most commonly used routing method for the Traffic Distributor. It directs requests to the backends in rotation, according to the specified server weights, which provides high availability for the distributed application and lets you easily balance the load across servers.
Note : This method should be chosen only when you have the same content on your backends, since data requested by users will be loaded from both of them.
Thus, each backend is accessed according to its predefined priority level which, when using the Traffic Distributor, is defined as a percentage of all incoming requests, for example:
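As a rough sketch, the weighted round-robin distribution described above corresponds to an NGINX upstream configuration along these lines (the addresses and the 75/25 split are illustrative placeholders; the add-on generates the actual configuration for you):

```nginx
# Hypothetical weighted round-robin setup: requests rotate between
# the two backends in proportion to their weights.
upstream distributed_app {
    server 10.0.0.1 weight=3;   # receives roughly 75% of requests
    server 10.0.0.2 weight=1;   # receives roughly 25% of requests
}

server {
    listen 80;
    location / {
        proxy_pass http://distributed_app;
    }
}
```

Setting both weights equal would yield an even 50/50 split, matching the default round-robin behavior.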
This technique is used to achieve server affinity by sticking each user to a particular backend, which lets that user work with a single version of the application. On the first visit, the customer is routed according to the server weights and the assigned backend is remembered, ensuring that all subsequent requests from that user are directed to the same environment.
Usually, this is implemented by remembering the client's IP address, which is not optimal, as there could be many customers behind a single proxy, resulting in skewed balancing. Therefore, MilesWeb Cloud uses an advanced solution based on session cookies to make routing persistent: each browser becomes a unique "user", which makes the balancing more consistent.
In this way, with Sticky Sessions the distribution of new users is similar to the Round Robin method and is performed according to the preset priorities. For example, setting a 50% to 50% ratio will cause both versions of the application to be visited by the same number of unique users, which is useful for A/B testing. However, regardless of server weights, requests from "old" users will always be routed to their assigned host until the session expires or the cookie is deleted.
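The cookie-based stickiness described above can be sketched with NGINX's `sticky cookie` directive. Note the hedge: this directive is part of the commercial NGINX Plus build (the open-source build would need `ip_hash` or a third-party module instead), and all names here are illustrative placeholders rather than the add-on's actual configuration.

```nginx
# Hypothetical sticky-session setup (requires NGINX Plus).
upstream distributed_app {
    server 10.0.0.1 weight=1;
    server 10.0.0.2 weight=1;

    # On the first visit, the balancer picks a backend by weight and
    # issues a session cookie; subsequent requests carrying that cookie
    # are routed to the same backend until the cookie expires.
    sticky cookie srvid expires=1h path=/;
}
```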
Failover is a distinct routing method that protects your project by keeping a fully functional copy of the environment in reserve. In other words, you have a primary server and a backup one - all requests are forwarded to the first backend, while the second is kept on standby and used only if the primary server goes down.
For this type of distribution, you obviously cannot configure a traffic ratio other than 100 to 0; you can only select which of your servers is the primary and which is the backup. Thus, all incoming requests go to a single server at a time - either the primary server or, if it is unavailable, the secondary one. Additionally, placing your backends in different environment regions lets you easily overcome hardware-dependent failures.
In general, client requests are automatically redirected to the operating server, so even in case of failure, users will not notice any interruption in the application's work.
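The failover scheme described above maps directly onto NGINX's `backup` parameter. A minimal sketch, again with placeholder addresses rather than the add-on's real configuration:

```nginx
# Hypothetical failover setup: the standby backend receives traffic
# only when the primary is unavailable.
upstream distributed_app {
    server 10.0.0.1;           # primary backend: receives all requests
    server 10.0.0.2 backup;    # standby: used only if the primary fails
}

server {
    listen 80;
    location / {
        proxy_pass http://distributed_app;
    }
}
```

When NGINX detects that the primary backend is down, it transparently retries the request against the backup server, which is why users typically notice no interruption.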