
How much power does your cloud server really need? – CloudSavvy IT



  CPU design illustration (agsandrew / Shutterstock)

Most cloud providers split their offerings by the number of CPU cores and the amount of RAM. Do you need one large multicore server or an entire fleet of them? Learn how to measure your server's actual performance.

Does your application need to "scale"?

Tech startups are often drawn to "scalable" architecture, which means building your server infrastructure so that every component can be scaled up to meet any demand.

That's great and all, but if you don't actually see that much traffic, it can be overkill (and more expensive) to build an architecture designed to scale to a million users when you only ever serve a few thousand.

You should prioritize building a good app over building exceptional infrastructure. Most applications run surprisingly well on just a few easy-to-manage standard servers. And if your app ever does get big, that growth will likely play out over a few months, giving you enough time (and money) to work on your infrastructure.

Scalable architecture is still a good thing to have, especially with services like AWS, where automatic scaling can be used to scale down and save money during off-peak hours.

RELATED: How to Speed Up a Slow Website

You need to plan for peak load

The important thing is to plan for your peak load, not your average load. If your servers can't handle the peak at noon, they haven't served their purpose. You need to measure and understand your server's usage over time, rather than just looking at CPU usage at a single moment.

Scalable architecture is useful here. Being able to quickly launch a spot instance (which is often much cheaper) to take pressure off the main servers is a very good design pattern and lets you cut costs significantly. If you only need two servers for a few hours a day, why pay to run them overnight?
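For example, a one-off spot instance can be requested directly from the AWS CLI. This is only a rough sketch; the AMI ID, instance type, and key name below are placeholders you'd swap for your own:

  # Request a spot instance instead of an on-demand one (placeholder IDs throughout).
  aws ec2 run-instances \
      --image-id ami-0123456789abcdef0 \
      --instance-type t3.medium \
      --key-name my-key \
      --instance-market-options 'MarketType=spot'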

Most major cloud providers also offer managed container solutions built on tools like Docker, which make it much easier for your infrastructure to scale automatically.

RELATED: What Does Docker Do, and When Should You Use It?

How much performance does your server offer?

This is a difficult question to answer; every application and website is different, and every hosting setup serves a different set of users. We can't give you an exact answer as to which server best suits your use case.

What we can do is show you how to run your own experiments to find out what works best for your particular application. To do this, you need to run your application under real-world conditions and measure a few key factors to determine whether you're over- or under-provisioned.

If your application is overloaded, you can spin up a second server and use a load balancer, such as AWS Elastic Load Balancer or Fastly's Load Balancing service, to spread traffic between them. If it's heavily underutilized, you might be able to save a few dollars by renting a cheaper server.
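As a rough sketch of the AWS route, an Application Load Balancer can be created from the CLI and pointed at two instances. Every ID and name below is a placeholder, and the ARNs returned by the earlier commands have to be substituted into the later ones:

  # Create a target group, register both servers, and front them with a load balancer.
  aws elbv2 create-target-group --name app-servers --protocol HTTP --port 80 --vpc-id vpc-0abc1234
  aws elbv2 register-targets --target-group-arn <target-group-arn> --targets Id=i-0aaa1111 Id=i-0bbb2222
  aws elbv2 create-load-balancer --name app-lb --subnets subnet-0aaa1111 subnet-0bbb2222
  aws elbv2 create-listener --load-balancer-arn <lb-arn> --protocol HTTP --port 80 \
      --default-actions Type=forward,TargetGroupArn=<target-group-arn>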

CPU Usage

CPU usage is probably the most useful metric to track. It gives you a general overview of how loaded down your server is, and if CPU usage climbs too high, server operations can grind to a halt.

CPU usage is visible in top, which also shows the load averages for the last 1, 5, and 15 minutes. This data comes from /proc/loadavg, so you can log it to a CSV file and graph it in Excel if you want.
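If you'd like to roll your own logging, a minimal sketch is a cron entry that appends a timestamped sample to a CSV file every minute (the output path is just an example):

  # Log the 1-, 5-, and 15-minute load averages as CSV once per minute.
  # In a crontab, % must be escaped as \%.
  * * * * * echo "$(date +\%s),$(cut -d' ' -f1-3 /proc/loadavg | tr ' ' ',')" >> /var/log/loadavg.csv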

However, most cloud providers have much better graphs for this. AWS has CloudWatch, which shows CPU usage for each instance under the EC2 metrics:

 Chart of CPU usage.

The Google Cloud Platform shows a nice diagram under the "Monitoring" tab in the instance information:

 Diagram of the CPU usage under the "Monitoring" tab in the instance information.

In both graphs, you can adjust the time scale to show CPU usage over time. If the chart is constantly hitting 100%, you should consider upgrading.

Note, however, that on a multi-core server, the CPU may still be overloaded even though the graph is nowhere near 100%. If your CPU usage sits near 50% on a dual-core server, your application is likely mostly single-threaded and isn't getting any benefit from the extra core.
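One way to check for this (assuming the sysstat package installed later in this article) is to watch per-core usage with mpstat; if one core sits near 100% while the others idle, the workload is effectively single-threaded:

  # Show per-core CPU usage, sampled once per second for five seconds.
  mpstat -P ALL 1 5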

RAM Utilization

RAM utilization is less likely to fluctuate widely, since it's mostly a question of whether or not you have enough memory for a given task.

You can quickly view memory usage in top, which shows the currently allocated memory for each process in the "RES" column and its usage as a percentage of total memory in the "%MEM" column.

 Currently allocated memory for each process in the RES column.

You can press Shift+M to sort by %MEM, which lists the most memory-intensive processes first.
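Outside of top, a couple of standard one-liners give a quick snapshot of memory usage; these are just conveniences, not the only way to do it:

  # Overall memory and swap usage in human-readable units.
  free -h

  # The ten most memory-hungry processes (plus the header row).
  ps aux --sort=-%mem | head -n 11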

Note: Memory speed affects CPU performance to some extent, but it's unlikely to be the limiting factor unless you're running a bare-metal application that needs every last bit of speed.

Disk Space

If your server does not have enough disk space, certain processes can crash. You can check the disk usage with:

  df -H 

This shows a list of all devices attached to your instance, some of which may not be relevant. Find the largest one (probably /dev/sda1) and you can see how much space is currently in use.
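To see what is actually eating the space on that device, du can break usage down by top-level directory. A simple sketch that stays on the root filesystem:

  # Summarize usage per top-level directory, staying on one filesystem (-x),
  # sorted smallest to largest.
  sudo du -xh --max-depth=1 / 2>/dev/null | sort -h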

 Storage space currently in use.

You should make sure log rotation is working effectively and that nothing is generating excess files on your system. If something is, you may want to keep only the most recent files. You can delete old files using find with time parameters, attached to a cron job that runs once an hour:

  0 * * * * find ~/backups/ -type f -mmin +90 -exec rm -f {} \;

This removes all files in the ~/backups/ folder that are older than 90 minutes (I used it for a Minecraft server that creates 1 GB+ backups every 15 minutes and was filling up a 16 GB SSD). You can also use logrotate, which achieves the same effect more elegantly than this hastily written command.
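As a sketch of the logrotate approach, a drop-in file under /etc/logrotate.d/ can rotate an application's logs daily and keep a week's worth of them; the application name and log path here are hypothetical:

  # /etc/logrotate.d/myapp (hypothetical example)
  /var/log/myapp/*.log {
      daily
      rotate 7
      compress
      missingok
      notifempty
  }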

If you're storing a lot of files, you may want to move them to a managed storage service like S3. It's cheaper than attaching more drives to your instance.
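Assuming the AWS CLI is installed and configured, moving those files can be as simple as a sync; the bucket name below is a placeholder:

  # Copy the local backups folder to an S3 bucket (placeholder bucket name).
  aws s3 sync ~/backups/ s3://my-backup-bucket/backups/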

RELATED: How to Set Up Logrotate on Linux (So Your Server Doesn't Run Out of Space)

Network Speed

There's no good way to monitor this natively, so if you want good command-line output, install sar from the sysstat package:

  sudo apt-get install sysstat

Activate it by editing /etc/default/sysstat and setting "ENABLED" to "true".
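If you'd rather do that from the command line, a sed one-liner flips the flag and restarts the collector (paths as shipped by the Debian/Ubuntu package):

  # Enable sysstat data collection and restart the service.
  sudo sed -i 's/ENABLED="false"/ENABLED="true"/' /etc/default/sysstat
  sudo systemctl restart sysstat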

This will monitor your system and generate a report every 10 minutes, which is rotated out once a day. You can change this behavior by editing sysstat's crontab at /etc/cron.d/sysstat.

You can view average network traffic with the -n flag:

  sar -n DEV 1 6 

Pipe it to tail for nicer output:

  sar -n DEV 1 6 | tail -n3

This displays the average packets and kilobytes sent per second on each network interface.

However, it's easier to use a GUI for this. CloudWatch has "NetworkIn" and "NetworkOut" metrics for each instance:

 The CloudWatch statistics "NetworkIn" and "NetworkOut" for each instance.

You can add a dynamic label with a SUM function that shows the total network traffic in bytes over a period of time.
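The same number can be pulled from the CLI if you prefer; this sketch sums NetworkOut in hourly buckets over the last 24 hours, with a placeholder instance ID:

  # Sum of bytes sent (NetworkOut) per hour for one instance over the last day.
  aws cloudwatch get-metric-statistics \
      --namespace AWS/EC2 \
      --metric-name NetworkOut \
      --dimensions Name=InstanceId,Value=i-0123456789abcdef0 \
      --statistics Sum \
      --period 3600 \
      --start-time "$(date -u -d '24 hours ago' +%Y-%m-%dT%H:%M:%SZ)" \
      --end-time "$(date -u +%Y-%m-%dT%H:%M:%SZ)"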

It's hard to judge whether or not you're overloading your network. In most cases, you'll be limited by other factors, such as whether your server can keep up with requests, long before you need to worry about bandwidth usage.

If you're really worried about traffic, or you want to host large files, you should consider getting a CDN. A CDN can take the load off your server and let you serve static media very efficiently.

