
Calculation Method for Data Center Resources – Power, Cooling, and Capacity Planning

At work I'm often asked, "How effective is your backup software's deduplication? How much space does the backup data take up?" In truth, effectiveness depends heavily on the type of data and how it is stored; different methods and different types of equipment can produce vastly different results, so the question is hard to answer in a single sentence. Regardless, backup data requires disk space, and the capacity of the backup repository must be designed during the planning phase of a backup project. That design directly affects the user's storage costs, storage efficiency, and backup availability, making it a critical part of the project: sizing the storage capacity and bandwidth as accurately as possible is directly tied to the project's success or failure.

Introduction

Data centers are the backbone of modern IT, but managing resources like power, cooling, and capacity requires precise calculations. Without proper planning, businesses risk downtime, inefficiency, and high costs.

This guide explains the calculation methods for data center resources, helping IT administrators achieve the right balance between performance, efficiency, and scalability.

Here, I will use a typical virtualized environment as an example to illustrate how this calculation should be performed.

Environment information:

ESXi Hosts: 25

VMs: 500

Average disk capacity per VM: 200GB

Total Datastore used capacity: 100TB


Bandwidth Design


Typically, the backup process involves two modes: full backup and incremental backup. The first backup is generally a full backup, which transfers all data from the virtualized environment to the backup storage device, so the amount of data transferred is almost the entire used capacity of the Datastore. All subsequent runs are incremental backups, which transfer only the data that has changed in the virtualized environment, commonly expressed as the daily change rate. This article uses the daily change rate as the basis for the calculation.


For each environment, the daily change rate can be obtained from the Veeam ONE change assessment report, which provides a relatively accurate value. I will assume a change rate of 7%, which gives the following values:


Initial transfer data volume: 100TB

Daily incremental transfer data volume: 7TB


After enabling optimized compression and deduplication, and assuming a typical data reduction ratio, the actual data transferred is 50% of the Datastore capacity:


Actual initial transfer data volume: 50TB

Actual daily transfer data volume: 3.5TB
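
As a quick sanity check, here is a minimal Python sketch of the volume arithmetic above; the 7% change rate and the 50% reduction factor are the assumptions stated earlier, not measured values:

```python
# Estimated transfer volumes for this example environment.
datastore_used_tb = 100.0    # total used Datastore capacity (TB)
daily_change_rate = 0.07     # assumed, per the Veeam ONE change report
reduction_factor = 0.5       # assumed compression/deduplication effect

# The initial full backup moves the whole Datastore, reduced by dedup/compression.
initial_transfer_tb = datastore_used_tb * reduction_factor
# Each incremental moves only the daily changed data, similarly reduced.
daily_transfer_tb = datastore_used_tb * daily_change_rate * reduction_factor

print(f"Actual initial transfer: {initial_transfer_tb:.1f} TB")  # 50.0 TB
print(f"Actual daily transfer:   {daily_transfer_tb:.1f} TB")    # 3.5 TB
```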


The required bandwidth can now be calculated. Assume the initial full backup runs continuously for 24 hours on Saturday, and subsequent incremental backups run during the daily business idle window from 8:00 PM to 6:00 AM. Accounting for the backup job's basic setup time, we estimate that actual data transfer takes about 80% of the total time: of a 10-hour backup window, 2 hours go to job setup and waiting, and 8 hours to actual data transfer. A simple calculation example follows:


Full backup required bandwidth: 50TB * 1024 * 8 / (24 * 3600 * 80%) ≈ 5.93 Gbps (1 GB/s = 8 Gbps)


Incremental backup required bandwidth: 3.5TB * 1024 * 8 / (10 * 3600 * 80%) ≈ 1 Gbps (1 GB/s = 8 Gbps)
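
The same two formulas expressed as a small Python function; the window lengths and the 80% effective-transfer ratio follow the assumptions above:

```python
def required_bandwidth_gbps(data_tb: float, window_hours: float,
                            efficiency: float = 0.8) -> float:
    """Bandwidth (Gbps) needed to move data_tb within window_hours.

    Uses 1 TB = 1024 GB and 1 GB/s = 8 Gbps; only `efficiency` of the
    window is assumed to be spent actually transferring data.
    """
    gigabits = data_tb * 1024 * 8
    transfer_seconds = window_hours * 3600 * efficiency
    return gigabits / transfer_seconds

print(f"Full backup:        {required_bandwidth_gbps(50, 24):.2f} Gbps")   # ~5.93
print(f"Incremental backup: {required_bandwidth_gbps(3.5, 10):.2f} Gbps")  # ~1.00
```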


From the above we get a general picture. Network and disk read/write throughput can be planned from these figures, configuring the appropriate number of NICs/HBAs to achieve the required backup throughput.


Capacity Design


Capacity design depends on the backup mode, and it is easiest to calculate when the backup storage has no deduplication technology. The following uses the most common mode, conventional incremental backup, as an example to explain the calculation, which is also relatively simple.


Keep at least 14 backup copies, perform one full backup per week, and perform one incremental backup per day.


Per week: Full backup 50TB * 1, Incremental backup 3.5TB * 6, Total: 71TB

Per month: 4 weeks, Total: 71TB * 4 = 284TB


Adding a further 15% of free-space headroom, the total estimated capacity is: 284TB * 1.15 ≈ 326.6TB
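
A minimal sketch of the same capacity arithmetic; the weekly cycle (one full plus six incrementals) and the 15% headroom are the assumptions above:

```python
full_backup_tb = 50.0    # one full backup per week
incremental_tb = 3.5     # one incremental backup per day
weeks_retained = 4       # one month of weekly cycles
free_reserve = 0.15      # extra free-space headroom

weekly_cycle_tb = full_backup_tb + incremental_tb * 6  # 71.0 TB per week
monthly_tb = weekly_cycle_tb * weeks_retained          # 284.0 TB
total_capacity_tb = monthly_tb * (1 + free_reserve)    # 326.6 TB

print(f"Estimated repository capacity: {total_capacity_tb:.1f} TB")
```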


This is the conventional way to design backup capacity. I also have an excellent tool to recommend: the Veeam Backup Repository Capacity Calculator, created by Veeam. It offers a more comprehensive and detailed calculation, allowing you to input more data based on your actual situation. The online calculator is available at:

https://www.veeam.com/calculators/simple/vbr/machines/vm

[Screenshot: Veeam Backup Repository Capacity Calculator]

👉 Related reading: How to Solve the Problem of ESXi 7.0 System Storage Occupying Space