Using DigitalOcean Load Balancers Between Virtual Machines



As many system administrators have learned over the years, keeping a system highly available is critical to any production operation. Managing and maintaining a load balancer yourself can often be a difficult task. DigitalOcean offers a load balancer product for only $10 per month that makes running a load balancer much easier.

What do DigitalOcean Load Balancers offer? There are many options that affect how the load balancer behaves and performs.

  • Redundant load balancers, configured with automatic failover
  • Add resources by name or by tag
  • Supported protocols: HTTP(S), HTTP/2, TCP
  • Let’s Encrypt SSL certificates (if DigitalOcean is your DNS provider)
  • Support of the PROXY protocol
  • Sticky sessions via cookies
  • Configurable backend droplet health checks
  • Algorithm: Round Robin or Least Connections
  • SSL redirection to force all HTTP to HTTPS
  • Backend keepalive for performance

Limits of DigitalOcean Load Balancers

There are a number of restrictions on DigitalOcean Load Balancers to keep in mind.

  • Incoming connections support only TLS 1.2, but connections to the droplets support TLS 1.1 and TLS 1.2
  • No support for IPv6
  • SSL passthrough does not support headers such as X-Forwarded-Proto or X-Forwarded-For; these only work with HTTP, or with HTTPS terminated by a certificate
  • Sticky sessions are not visible behind the load balancer; the cookies are set and stripped at the edge and are not passed on to the droplets
  • When keepalive is enabled, there is a 60-second timeout
  • Load balancers support 10,000 simultaneous connections spread across all resources (for example, 5,000 each to two different droplets)
  • Health checks are sent as HTTP 1.0
  • Floating IP addresses cannot be assigned to load balancers
  • Ports 50053 to 50055 are reserved for the load balancer
  • Let’s Encrypt is only supported when DigitalOcean is used as the DNS provider
  • Let’s Encrypt certificates on load balancers do not support wildcards

Create a load balancer

When creating a new load balancer, you need to select the region where it will be created, which should be the same region as the droplets it will balance. Load balancing does not work across data center regions, so all droplets must be located together with the load balancer.

Next, we need to define the resources to add to the load balancer. The best way to do this is with tags, since every newly tagged resource is then added to the load balancer automatically. Only 10 droplets can be added individually, so tags are also a good way around that limit: there is no cap on the number of droplets a tag can bring in.
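For reference, the same choice exists in the DigitalOcean API: a load balancer definition accepts either an explicit list of droplet IDs or a single tag. The following is a minimal sketch; the tag name and droplet IDs are placeholders, not values from this article.

    # Two ways to attach backends to a load balancer in the DigitalOcean API (v2).
    # The tag name and droplet IDs below are placeholders.

    # Option 1: add droplets individually (capped at 10 droplets)
    backends_by_id = {"droplet_ids": [11111111, 22222222]}

    # Option 2: add droplets by tag (no cap; newly tagged droplets join automatically)
    backends_by_tag = {"tag": "web"}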

After adding resources, you need to create the rules for routing traffic. In this example we are only running a standard web server with non-SSL traffic, so a single rule forwarding port 80 is enough.
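In API terms, that rule maps HTTP on port 80 at the load balancer to HTTP on port 80 on the droplets. A minimal sketch of how that single rule would appear in a request body, with field names taken from the public API documentation:

    # The single forwarding rule used in this example, expressed as a
    # DigitalOcean API "forwarding_rules" entry.
    forwarding_rules = [
        {
            "entry_protocol": "http",   # protocol clients use to reach the load balancer
            "entry_port": 80,
            "target_protocol": "http",  # protocol used to reach the backend droplets
            "target_port": 80,
        }
    ]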

You can also adjust the advanced settings, though these can be changed later if necessary. They cover the balancing algorithm, sticky sessions, health checks, SSL redirection, PROXY protocol support, and whether backend keepalive is enabled.
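These options also map onto fields in the load balancer definition. The sketch below is illustrative only; the cookie name and TTL are placeholder values, and the field names follow the public API documentation:

    # Rough sketch of the advanced settings as API fields (example values only).
    advanced_settings = {
        "algorithm": "round_robin",        # or "least_connections"
        "sticky_sessions": {
            "type": "cookies",             # cookie is set and stripped at the edge
            "cookie_name": "DO-LB",        # placeholder cookie name
            "cookie_ttl_seconds": 300,     # placeholder TTL
        },
        "redirect_http_to_https": False,   # SSL redirection
        "enable_proxy_protocol": False,    # PROXY protocol support
        "enable_backend_keepalive": True,  # backend keepalive for performance
    }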

Finally, choose a name for the load balancer and click Create Load Balancer.
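The same step can also be scripted against the DigitalOcean API rather than the control panel. The following is a minimal sketch, assuming a personal access token in the DIGITALOCEAN_TOKEN environment variable; the name, region, and tag are placeholders:

    # Create the load balancer through the DigitalOcean API (v2).
    # Assumes a valid API token in DIGITALOCEAN_TOKEN; name, region, and tag are placeholders.
    import os
    import requests

    token = os.environ["DIGITALOCEAN_TOKEN"]
    payload = {
        "name": "lc-test-lb",     # placeholder name
        "region": "nyc3",         # must match the droplets' region
        "tag": "web",             # backends attached by tag
        "forwarding_rules": [
            {"entry_protocol": "http", "entry_port": 80,
             "target_protocol": "http", "target_port": 80},
        ],
    }

    resp = requests.post(
        "https://api.digitalocean.com/v2/load_balancers",
        json=payload,
        headers={"Authorization": f"Bearer {token}"},
    )
    resp.raise_for_status()
    print(resp.json()["load_balancer"]["id"])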

After the load balancer is created, you can navigate to view the resources that have been allocated and their status. If you have firewalls applied to the droplets, make sure you have the correct inbound ports open for the health checks to work.
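If you use DigitalOcean Cloud Firewalls, one way to keep the forwarded port and health checks open is an inbound rule that allows traffic from the load balancer itself. A minimal sketch, assuming DIGITALOCEAN_TOKEN is set; the tag and load balancer ID are placeholders:

    # Create a Cloud Firewall that lets the load balancer reach the tagged
    # droplets on port 80, so forwarded traffic and health checks get through.
    # The tag and load balancer ID are placeholders.
    import os
    import requests

    token = os.environ["DIGITALOCEAN_TOKEN"]
    firewall = {
        "name": "allow-lb-http",
        "tags": ["web"],  # apply the firewall to the tagged backend droplets
        "inbound_rules": [
            {
                "protocol": "tcp",
                "ports": "80",
                "sources": {"load_balancer_uids": ["<load-balancer-id>"]},
            }
        ],
    }

    resp = requests.post(
        "https://api.digitalocean.com/v2/firewalls",
        json=firewall,
        headers={"Authorization": f"Bearer {token}"},
    )
    resp.raise_for_status()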

To test this, first navigate to the IP of each individual droplet. In this case, we simply installed Nginx and created an index.html file in /var/www/html with identifying text for each server.
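To make the two backends distinguishable, each droplet only needs an index page with some identifying text. A minimal sketch of that step, assuming Nginx is already installed and serves /var/www/html:

    # Write an identifying index.html on a backend droplet.
    # Assumes Nginx serves /var/www/html and the script has write permission there.
    import socket
    from pathlib import Path

    page = f"<h1>Served by {socket.gethostname()}</h1>\n"
    Path("/var/www/html/index.html").write_text(page)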

As you can see, each server displays the identifying text we would expect. Now let’s test what happens when we go to the IP address of the load balancer itself. After several reloads you will see both pages served from the same IP address as connections are forwarded between the assigned droplets.
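The same round-robin behavior can be observed from a terminal instead of reloading the browser. A minimal sketch; the load balancer IP below is a placeholder:

    # Request the load balancer IP a few times and print which backend answered.
    # The IP address is a placeholder for your load balancer's public IP.
    import requests

    LB_IP = "203.0.113.10"

    for i in range(6):
        body = requests.get(f"http://{LB_IP}/", timeout=5).text.strip()
        print(f"request {i + 1}: {body}")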

Backend connection status

Health checks run continuously on the schedule you configure. As soon as a backend droplet is determined to be down, the load balancer stops directing connections to that droplet.
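The check interval and failure thresholds determine how quickly a failing droplet is pulled out of rotation. In the API the health check is its own object; the values below are illustrative examples, not recommendations:

    # Rough sketch of a health check definition. With these example values a
    # droplet is marked unhealthy after 3 failed checks spaced 10 seconds apart.
    health_check = {
        "protocol": "http",
        "port": 80,
        "path": "/",
        "check_interval_seconds": 10,
        "response_timeout_seconds": 5,
        "unhealthy_threshold": 3,   # failures before the droplet is pulled from rotation
        "healthy_threshold": 5,     # successes before it is added back
    }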

As you can see in the following screenshot, after shutting down lc-test-02, the load balancer stopped directing connections to it. After refreshing the page, you will only receive the page from test server 1.

Conclusion

As you can see, DigitalOcean Load Balancers are an incredibly useful and inexpensive way to balance connections across any number of droplets. With HTTP/2 support, SSL passthrough and termination, and Let’s Encrypt integration, DigitalOcean Load Balancers make it easy to add high availability and load balancing to many applications.

