Modular data centers: overview of solutions

Autonomous data center infrastructure based on a prefabricated frame module with a patented, unique design. Year-round on-site installation from large factory-produced components; no concrete base required. A data center of any size, in any location, within 4 months “from contract”. Operating conditions: -60°C to +45°C. Precision cooling and humidity maintenance in a closed loop.
A prefabricated data center transported in ISO containers or by conventional transport. Choose from ready-made solutions or configure a data center with any characteristics. A wide selection of facade types and colors, from armored panels to natural materials such as stone and wood.

Typical configurations


Basic properties

    The total area of one module is from 6.2 to 124.74 sq.m.

    One module accommodates from 2 to 64 cabinets with footprints from 600 × 1070 mm to 800 × 1200 mm and heights of up to 50U.

    Several ways to scale floor area, thanks to the use of a patented load-bearing panel (utility model patent No. 138677).

    Possibility of creating MDC clusters from any number of modules, of any capacity.

    It is assembled from standard components produced at the factory, which are transported in an ISO container and installed without the use of heavy lifting mechanisms (special equipment).

    Ability to provide any required IT load on racks (from 5kW to 40kW).

    The ability to provide any level of reliability of the engineering infrastructure (Tier 1, 2, 3, 4), as well as change these parameters in the future.

    Does not require a foundation.

    It is not a capital construction project.

    Wide temperature range for start-up and operation.

Areas of use

    Creation of distributed networks of data processing centers.

    Construction of large modular data centers.

    Creation of a “private cloud”, “hybrid cloud”, “EDGE Computing, IoT”.

    Creation of a “temporary” IT infrastructure.

    Framework for leasing private infrastructure.

Benefits of the solution

Compared to traditional in-building data centers:

    Reduced capital costs, owing among other things to the absence of architectural and construction design, lower engineering-infrastructure design costs thanks to standardized systems, no foundations, etc.

    Saving time and money due to the absence of the need to obtain a permit for capital construction.

    Possibility of expanding infrastructure as business (needs) develops.

    Significant reduction in design and implementation time.

    Freeing up space in the building for its intended use.

    Reduced energy costs thanks to efficient cooling.

    Reusable.

Compared to container data centers:

    Cheaper than container solutions, including significantly reduced transportation costs.

    No need for a foundation.

    The ability to place any required amount of equipment with any load and level of reliability of the engineering infrastructure.

    Flexible scaling (in rack increments, not container increments).

    Reduced energy costs due to efficient cooling.

    The service space inside the MDC complies with the requirements of the Rules for the Operation of Electrical Installations.

    Improved air-tightness of the data center due to the absence of additional service openings.

As always, we tried to make the model problem as close as possible to real projects. Our fictitious customer had planned to build a new corporate data center in 2015 using the traditional approach (starting with capital construction or reconstruction of a building). However, the worsening economic situation forced it to reconsider: large-scale construction was abandoned and, to address its current IT needs, the customer decided to set up a small modular data center in a yard on its own premises.

Initially, the data center must house 10 racks of IT equipment with a power of up to 5 kW each. At the same time, the solution must be scalable along two parameters: the number of racks (in the future, given a favorable economic situation and the growth of the customer’s business, the data center should accommodate up to 30 racks) and the power of an individual rack (20% of all racks should support equipment drawing up to 20 kW per rack). The preferred “quantum” of expansion is 5 racks. For more details, see the “Task” sidebar.

TASK

The fictitious customer had planned to build a new corporate data center in 2015 using the traditional approach (starting with a major reconstruction of the building). However, the worsening economic situation forced it to reconsider. It was decided to abandon large-scale construction and, to address current IT needs, to organize a small modular data center.

Task. The customer intends to deploy a small but highly scalable data center on its industrial site. Initially, 10 racks of IT equipment must be installed, each with a power of up to 5 kW. The solution must be scalable in two ways:

  • by the number of racks: in the future, given a favorable economic situation and the development of the customer’s business, the data center will have to accommodate up to 30 racks;
  • by power of an individual rack: 20% of all racks should support equipment with a power of up to 20 kW (per rack).

It is advisable to invest in data center development as needed. The preferred “quantum” of expansion is 5 racks.

Features of the site. The site is provided with electricity (maximum supplied power - 300 kW) and communication channels. The size of the site is sufficient to accommodate any external equipment, such as diesel generator sets, chillers, etc. The site is located within the city, which imposes additional conditions on the noise level from the equipment. In addition, the location of a data center in a city implies a high level of air pollution.

Engineering systems. Fault tolerance level - Tier II.

Cooling. The customer did not specify a specific cooling technology (freon, chiller, adiabatic, natural cooling) - its choice is at the discretion of the designers. The main thing is to ensure efficient heat removal at the rack capacities specified in the problem. Minimizing energy consumption is encouraged - it is necessary to meet the limitations on the supplied power.

Uninterruptible power supply. The specific technology is likewise not specified. The autonomous operation time in the event of a power failure must be at least 10 minutes on batteries, with a transition to operation from the diesel generator set. The fuel reserve is at the discretion of the designers.

Other engineering systems. The project should include the following systems:

  • automatic gas fire extinguishing installation (AUGPT);
  • access control and management system (ACS);
  • structured cabling system (SCS);
  • raised floor and other systems are at the discretion of the designers.

The data center must have an integrated management system providing monitoring and control of all major systems: mechanical, electrical, fire, security, etc.

Additionally:

  • the customer asks to indicate the deadline for the implementation of the data center and the specifics of equipment delivery (access roads, special mechanisms for unloading/installation);
  • If the designer considers it necessary, IT equipment for the data center can be immediately offered.

We approached leading modular data center vendors with a request to develop projects for this customer and received 9 detailed solutions in response. Before looking at them, note that the Journal of Network Solutions/LAN has been tracking trends and categorizing modular data centers for many years; these issues are addressed, for example, in the author’s earlier articles (LAN, No. 07–08, 2011, and No. 07, 2014). We will not repeat that trend analysis here, but refer interested readers to those publications.

(NOT)CONSTRUCTIVE APPROACH

Very often, membership in the “Modular Data Center” category is determined by the type of design. At the first stage of the development of this market segment, data centers based on standard ISO containers were usually called “modular”. But in the last year or two, container data centers have come under fire, especially from new wave solution providers who point out that standard containers are not optimized for hosting IT equipment.

It is incorrect to categorize a data center as modular based on its construction type alone. “Modularity” should be defined by the possibility of flexible scaling through a coordinated increase in the number of racks and in the capacity of uninterruptible and guaranteed power supply, cooling and other systems. The construction itself may vary: our customer received proposals based on standard ISO containers, specialized containers and/or blocks, modular rooms, and structures assembled on site (see table).

Huawei specialists, for example, chose a solution based on standard ISO containers for our customer, although the company's portfolio also includes modular data centers based on other types of construction. Mikhail Salikov, director of the company's data center department, explained this choice by the fact that the traditional container solution is, firstly, cheaper, secondly, faster to implement and, thirdly, less demanding in terms of site preparation. In his opinion, under the current conditions and given the customer's requirements, this option may prove optimal.

Several proposals received by the customer are based on containers, but not standard ones (ISO), but specially designed for building modular data centers. Such containers can be joined together to form a single machine room space (see, for example, the project by Schneider Electric).

One of the features of container-based options is more stringent requirements for means of transportation and installation. For example, to deliver NON-ISO25 container modules from Schneider Electric, a low-platform trailer with a useful length of at least 7.7 m is required. For unloading and installation, a crane with a lifting capacity of at least 25 tons is required (see Fig. 1).

When using smaller modules, the requirements are more relaxed. For example, CommScope DCoD modules are transported in a standard Eurotruck, and their unloading and installation can be carried out by a conventional forklift with a lifting capacity of at least 10 tons (see Fig. 2).

Finally, structures assembled quickly on site, such as the NOTA MDC from Utilex, require no heavy lifting machinery at all for installation - a loader with a lifting capacity of 1.5 tons is sufficient.

All components of this MDC are packed, disassembled, into a standard ISO container, which, according to Utilex representatives, significantly reduces the cost and time of delivery to the installation site. This is especially important for remote locations without developed transport infrastructure. As an example, they cite a distributed data center project in the Magadan region for the Polyus Gold company: the total delivery route exceeded 8900 km (5956 km by rail, 2518 km by sea and 500 km by road).

POWER SUPPLY

As for the guaranteed and uninterruptible power supply system, the choice of options for a project of this size is small - only the classic “trio”: static UPS, batteries and a diesel generator set (DGS). At such capacities it is not practical to consider dynamic UPSs as an alternative.

Most projects use modular UPSs, whose power is increased by adding power modules. Monoblock UPSs are also offered - in this case, system power is increased by connecting UPS units in parallel. Some companies indicated specific UPS models, others limited themselves to general recommendations.

The situation is similar with diesel generator sets: some companies recommended specific models, others limited themselves to power calculations. More detailed information on the DGS is given in the full project descriptions available on the site.

COOLING

Cooling options, by contrast, abound. Most of the projects are based on freon air conditioners - not the most energy-efficient, but the cheapest solution. Two companies - Huawei and Schneider Electric - placed their bets on chiller systems. As Denis Sharapov, business development manager at Schneider Electric, explains, a cooling system based on freon air conditioners is cheaper, but to ensure fault-tolerant operation a much larger UPS would be needed (to power the compressors), so the cost of the projects ends up roughly equal. Based on the calculations carried out, the Schneider specialist therefore opted for the chiller option as more reliable and functional. (For emergency cooling in chiller systems, a storage tank with cold coolant is used, so the need for uninterruptible power is much lower - it is enough to power only the coolant pumps from the UPS.)

The CommScope solution recommended to the customer by Croc specialists uses direct free cooling with adiabatic cooling. As Alexander Lasy, technical director of the department of intelligent buildings at Croc, notes, the use of direct free cooling almost all year round provides significant energy savings, and the proposed (in the CommScope solution) adiabatic cooling system does not require serious water preparation, since humidification occurs using inexpensive replaceable elements. The costs of replacing them are not comparable with the costs of deep water treatment. For post-cooling and backup cooling of the data center in this case, according to a Croc specialist, it is most advisable to use a freon direct evaporation (DX) system. In the presence of free cooling and adiabatic cooling, the total time of use of the backup/additional system during the year is extremely small, so there is no point in striving for its high energy efficiency.

The LANIT-Integration company proposed an indirect free-cooling system with adiabatic cooling. In such a system, there is no mixing of external and internal air flows, which is especially important when locating a data center within a city due to high air pollution. A chiller system was chosen as an additional one in this project - it will be turned on only during the most unfavorable periods for the main system.

MISSION IMPOSSIBLE?

There was a hidden catch in our task that most of the participants did not notice (or rather, did not want to notice). The total IT equipment power (240 kW) and the maximum supplied power (300 kW) specified by the customer leave only 60 kW for engineering and other auxiliary systems. Meeting this condition requires very energy-efficient engineering systems: a PUE of no more than 1.25 (300/240).
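To make the constraint explicit, here is a minimal back-of-the-envelope sketch (in Python, for illustration only) using the 300 kW and 240 kW figures stated in the task:

    # Back-of-the-envelope check of the power budget stated in the task (not a vendor calculation).
    supplied_power_kw = 300.0          # maximum power available at the site
    it_load_kw = 24 * 5 + 6 * 20       # 24 racks at 5 kW + 6 racks at 20 kW = 240 kW

    overhead_budget_kw = supplied_power_kw - it_load_kw   # what is left for cooling, UPS losses, etc.
    max_pue = supplied_power_kw / it_load_kw              # PUE ceiling implied by the site limit

    print(f"Left for engineering systems: {overhead_budget_kw:.0f} kW")  # 60 kW
    print(f"Implied PUE ceiling: {max_pue:.2f}")                         # 1.25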

Theoretically, achieving such a figure is quite possible with the direct free cooling and adiabatic cooling proposed by Croc and CommScope; CommScope has examples of projects abroad with even lower PUE values. But the effectiveness of these cooling technologies depends heavily on the level of air pollution and the climatic parameters at the site. The task did not provide enough data on these, so at this stage it is impossible to say definitively whether direct free cooling will help meet the limit on supplied power.

The indirect free cooling solution proposed by LANIT-Integration has a calculated PUE of 1.25–1.45, so this project does not reliably fit within the given constraint. As for the solutions with chiller systems and freon air conditioners, their energy efficiency is lower still, which means the overall consumption of the data center is higher.

For example, according to the calculations of Denis Sharapov, the peak energy consumption of the solution proposed by Schneider Electric is 429.58 kW - at maximum IT load (240 kW), ambient temperature above 15°C, at the moment the UPS batteries are charging, taking into account all consumers (including monitoring systems, gas fire extinguishing, internal lighting and access control). This is 129 kW more than the supplied power. As a way out, he suggested either excluding the highly loaded (20 kW) racks from the configuration or ensuring a regular fuel supply for continuous operation of a diesel generator set with a power of at least 150 kW.

Of course, operating a data center at 100% utilization is very unlikely - in practice this almost never happens. Therefore, to determine the total power consumption of the MDC, GreenMDC specialists proposed taking the demand coefficient for the total IT equipment power as 0.7. (According to them, for corporate data centers this coefficient lies in the range from 0.6 to 0.9.) With this assumption, the estimated power for the machine room is 168 kW (24 cabinets of 5 kW; 6 cabinets of 20 kW; demand coefficient 0.7: (24 × 5 + 6 × 20) × 0.7 = 168).
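The same estimate, restated as a short illustrative sketch; the 0.7 coefficient and the 0.6–0.9 range are the GreenMDC figures quoted above:

    # GreenMDC-style estimate of the expected machine-room load using a demand coefficient.
    racks_5kw, racks_20kw = 24, 6
    demand_coefficient = 0.7                                  # GreenMDC assumption; typical range 0.6-0.9

    nameplate_it_kw = racks_5kw * 5 + racks_20kw * 20         # 240 kW nameplate load
    expected_it_kw = nameplate_it_kw * demand_coefficient     # 168 kW expected draw

    print(f"Nameplate IT load: {nameplate_it_kw} kW")
    print(f"Expected IT load at k = {demand_coefficient}: {expected_it_kw:.0f} kW")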

ADJUSTMENT OF THE PROBLEM

Even allowing for the above comment about partial load, to guarantee reliable power supply the customer will apparently have to increase the power supplied to the site and give up the dream of a data center with a PUE on a par with the world's best facilities of the Facebook and Google scale. The average corporate customer does not need such energy efficiency anyway - especially given the low cost of electricity.

In Alexander Lasy's view, since the power of this data center is very small, it makes little sense to focus heavily on energy efficiency: the cost of solutions capable of significantly reducing energy consumption may well exceed the savings over the data center's life cycle.

So, lest the customer be left without a data center at all, we remove the limit on supplied power. What other adjustments are possible? Note that many companies scrupulously fulfilled the customer's wishes regarding the number and power of racks and the expansion “quantum” of 5 racks, while Croc and CommScope offered an even smaller “quantum” - modules of 4 racks.

However, a number of companies, relying on the standard designs in their portfolios, deviated somewhat from the conditions of the task. For example, as Alexander Perevedentsev, chief specialist of the sales support department at Technoserv, notes, “according to our experience, what is currently in demand is the development (scaling) of data centers in clusters with a step of 15–20 racks and an average power per rack of at least 10 kW.” Technoserv therefore proposed a solution for 36 racks of 10 kW each with an expansion increment of 18 racks. Containers for 18 racks also appear in the Huawei project, so the customer can ultimately receive 6 more racks than requested.

SITE PREPARATION

The site for installing the data center should be prepared - at least leveled. For the data center proposed to the customer by Croc, it is necessary to have a concrete foundation in the form of a load-bearing slab designed for the appropriate load.

In addition, the site must be equipped with gutters and drains to carry away rain and melt water. “Since there is sometimes quite a lot of snow in our country, in order to avoid breakdowns and leaks it is advisable to equip the site with easily assembled sheds,” recommends Alexander Lasy.

Construction of a concrete foundation on site is provided for in most projects. At the same time, a number of experts pointed out that the cost of a foundation is small compared with the total cost of the project, so it is not worth economizing on it.

Utilex's NOTA MDC stands apart: it requires no foundation. Such a data center can be installed and launched on any level site with a slope of no more than 1–2%. Details are given below.

TIME…

As Alexander Lasy notes, the most important and time-consuming process is the design of the data center, since errors made at the design stage are extremely difficult to eliminate once the modules leave production. According to him, it usually takes 2 to 4 months to prepare and approve the technical specifications and to design, coordinate and approve the project. If the customer chooses standard solutions, the process can be shortened to 1–2 months.

Should the customer select a CommScope solution, production and delivery of pre-assembled modules and all associated equipment will take 10-12 weeks for starter kits and 6-8 weeks for add-on modules. Assembling a starter kit for a line of 5 modules on a prepared site takes no more than 4–5 days, commissioning work takes 1–2 weeks. Thus, after agreeing and approving the project, concluding a contract and paying the necessary advance (usually 50–70%), the data center will be ready for operation in 12–14 weeks.

The project implementation period based on the LANIT-Integration solution is about 20 weeks. In this case, design will require about 4 weeks, production (including cooling systems) - 8 weeks, delivery - 4 weeks, installation and launch - another 4 weeks.

Other companies indicate similar timeframes, and it makes little difference whether the assembly facilities are located abroad or in Russia. For example, the average turnkey delivery time for a Schneider Electric MDC is 18–20 weeks (excluding design). For the Utilex NOTA MDC, the main stages (installation and turnkey delivery of the first and subsequent modules) take from 12 weeks, and the stages associated with expanding each module (adding 5 racks and the corresponding engineering systems) take from 8 weeks.

Obviously, the dates indicated are approximate. Much depends on the geographical location of the customer’s site, the quality of work organization and the interaction of all participants in the process (customer, integrator and manufacturer). But in any case, it takes only six months or a little more from the idea to the full implementation of the project. This is 2–3 times less than when constructing a data center using traditional methods (with capital construction or reconstruction of the building).

…AND MONEY

We did not receive information about the cost from all participants. Nevertheless, the data obtained allow us to get an idea of ​​the structure and approximate amount of costs.

Alexander Lasy described to the customer in detail the structure and sequence of costs for the Croc/CommScope solution: 10% goes to design, 40% to launching the first line's starter kit (12 IT racks), 5% to completing the first line (+4 IT racks), 35% to launching the second line's starter kit (8 IT racks), 5% to its expansion (+4 IT racks) and another 5% to further expansion (+4 IT racks). As you can see, the costs are spread out over the stages - exactly what the customer had in mind when considering a modular data center.
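These stages can be tallied in a quick illustrative check (the percentages and rack counts are those quoted above; the stage names are ours):

    # Staged cost and rack breakdown of the Croc/CommScope proposal, per the figures quoted above.
    stages = [
        ("Design",                  10,  0),   # (stage, cost share %, IT racks added)
        ("Line 1 starter kit",      40, 12),
        ("Line 1 completion",        5,  4),
        ("Line 2 starter kit",      35,  8),
        ("Line 2 expansion",         5,  4),
        ("Line 2 further expansion", 5,  4),
    ]

    total_share = sum(share for _, share, _ in stages)
    total_racks = sum(racks for _, _, racks in stages)
    print(f"Total cost share: {total_share}%")   # 100%
    print(f"Total IT racks:   {total_racks}")    # 32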

Many customers believe that, after the recent fluctuations in the ruble exchange rate, localization of the proposed solutions is important for reducing and/or fixing costs. According to Alexander Perevedentsev, Technoserv staff have spent the last six months actively searching for domestic equipment manufacturers, particularly for engineering systems. At present, the degree of localization of engineering systems in their data center projects is 30%, and in high-availability solutions, which include the latest version of the IT Crew modular data center, it is at least 40%. The cost of the IT Crew MDC for 36 racks of 10 kW each, with the possibility of further expansion and a Tier III fault tolerance level, ranges from 65 to 100 thousand dollars per rack. Converted at 55 rubles to the dollar, that is roughly 3.5 to 5.5 million rubles per rack. But this, we emphasize, is a Tier III solution, while most other companies offered Tier II, as the customer requested.

While Technoserv made a point of its 40% localization level, Utilex representatives modestly omitted to mention that in their solution the figure is significantly higher, since all the main engineering equipment, including the air conditioners and UPSs, is domestically produced. The cost of the Utilex NOTA MDC (for 24 racks of 5 kW and 6 racks of 20 kW) is $36,400 per rack; at 55 rubles to the dollar, that is about 2 million rubles per rack. This amount includes all delivery costs within the Russian Federation and all installation and commissioning work, including travel and overhead costs. As Utilex representatives note, the total cost of the MDC can be reduced by housing the entire infrastructure in one large NOTA module (5.4 × 16.8 × 3 m): 10 racks at the first stage, with 5 racks added at each subsequent stage. But in that case capital costs at the first stage would have to be higher, and if the customer decided not to scale the MDC after all, the money would be spent inefficiently.

Dmitry Stepanov, business unit director at the President-Neva Energy Center company, estimated the cost of one module (container) for 10–12 racks at 15–20 million rubles. This amount includes all the engineering systems inside the module, including the air conditioners and UPS, but not the diesel generator set. According to him, if cheaper engineering components are chosen instead of the Emerson Network Power UPSs and air conditioners, the cost can be reduced to 11–12 million rubles.

The cost of the GreenMDC project at the first stage is approximately 790 thousand euros (site preparation - 130 thousand euros; the first stage of the MDC - 660 thousand euros). The total cost of the whole project for 32 racks, including the diesel generator set, is 1.02 million euros. At the exchange rate at the time of writing (59 rubles per euro), that works out to about 1.88 million rubles per rack - one of the most attractively priced options.
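For a rough comparison, the per-rack quotes above can be reduced to a common rouble basis. This is an illustrative recalculation at the exchange rates mentioned in the text; the quotes differ in scope and Tier level, so they are not directly comparable:

    # Illustrative per-rack cost comparison using the vendor figures and exchange rates quoted above.
    usd_rub, eur_rub = 55.0, 59.0

    technoserv_rub = (65_000 * usd_rub, 100_000 * usd_rub)   # IT Crew, 36 racks, Tier III
    utilex_rub = 36_400 * usd_rub                            # NOTA, 30 racks, Tier II
    greenmdc_rub = 1_020_000 * eur_rub / 32                  # whole project incl. DGS, 32 racks

    print(f"Technoserv: {technoserv_rub[0] / 1e6:.2f}-{technoserv_rub[1] / 1e6:.2f} mln RUB per rack")
    print(f"Utilex:     {utilex_rub / 1e6:.2f} mln RUB per rack")
    print(f"GreenMDC:   {greenmdc_rub / 1e6:.2f} mln RUB per rack")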

In general, for a solution based on freon air conditioners, the “starting point” for the customer is about 2 million rubles per rack; the chiller-based option is roughly 1.5 times more expensive. The cost of “Russian” and “imported” solutions is unlikely to differ much, since even the domestic air conditioners (Utilex) use imported components: unfortunately, modern compressors are not produced in Russia, and there is nothing to be done about it.

FEATURES OF THE PROPOSED SOLUTIONS

"GrandMotors"

The main IT module (“Machine Room”) is based on an all-metal container block and consists of a hardware compartment and a vestibule. To provide the specified “quantum” of growth, 4 racks of up to 5 kW (cooled by in-row air conditioners) and 1 rack of up to 20 kW (cooled by a similar air conditioner docked directly to it) are installed in the hardware compartment of each IT module (see Fig. 3). The UPS with batteries is located in the same compartment. As air conditioners, the customer is offered a choice of Emerson CRV or RC Group COOLSIDE EVO CW units, and as UPSs, devices from the GMUPS Action Multi or ABB Newave UPScale DPA series.

The vestibule houses electrical panels and gas fire extinguishing system equipment. If necessary, it is possible to organize a workplace for a dispatcher or duty personnel in the vestibule.

As Daniil Kulakov, technical director of GrandMotors, notes, the use of in-row air conditioners is dictated by the presence of racks of different power ratings. With racks of equal power, outdoor air conditioners (for example, Stulz Wall-Air) can be used to fill the machine room area more efficiently - such a solution also reduces the energy consumption of the cooling system through the use of free-cooling mode.

At the first stage, it is enough to install two “Machine Room” modules and one “Power Module” on the site; the latter houses a GMGen Power Systems (SDMO Industries) diesel generator set with a capacity of 440 kVA. Subsequently, the available computing capacity of the data center can be increased in increments of one “Machine Room” module (5 racks), while the installed capacity of the diesel generator set is designed for 6 such modules. The monitoring and control system provides access to each piece of equipment and is implemented at the top level on the basis of a SCADA system.

All provided engineering solutions provide N+1 redundancy (with the exception of diesel generator sets - if necessary, a backup generator can be installed). The proposed architecture for building engineering systems makes it possible to increase the level of redundancy already during the creation of a data center.

"Krok"/CommScope

CommScope, the world's leading provider of cable infrastructure solutions, entered the modular data center market relatively recently, which allowed it to address many of the shortcomings of first-generation products. It proposes building data centers from pre-assembled, highly prefabricated modules, calling its solution Data Center On Demand (DCoD). These solutions have already been installed and are successfully used in the USA, Finland, South Africa and other countries. In Russia, three DCoD projects are currently under development with Croc's participation.

DCoD base modules are designed to accommodate 1, 4, 10, 20 or 30 racks of IT equipment. Alexander Lasy, technical director of the intelligent buildings department at Croc, who presented the project to our customer, suggested assembling the solution from standard DCU-4 modules designed for 4 racks (non-standard ones can be ordered, but this increases the cost of the project by about 20%). Such a module includes the cooling systems (with control automation) and primary power distribution and switching.

The starting block will consist of 4 standard modules (see Fig. 4). Its total capacity is 16 racks, which leaves 6 spare rack spaces for a UPS of the required power with batteries, as well as equipment for the security and monitoring systems. The final configuration of the “machine room” (line) will consist of 5 modules of 4 racks each, that is, 20 racks in total, 2 of which can carry IT equipment with a power of up to 20 kW. In this configuration, 1 module (4 racks) will be allocated to the uninterruptible power supply system (UPS) and can be separated from the main machine room by a partition to restrict access for maintenance personnel.

The second line is completely similar in design to the first, but its starter kit consists not of 4, but of 3 modules for 4 racks. It will immediately allocate space for UPS, security and monitoring systems. However, as Alexander Lasy notes, as a result of gaining experience in operating the first line, adjustments may be made to the initial configuration plan.

The IT load of the first line in its initial configuration (12 racks) will be about 60 kW, and after expansion (16 racks) about 90 kW, including the providers' channel equipment and the data center's core switches. To keep the data center operational during an external power failure, uninterrupted power must also be supplied to the fans and controllers of the cooling system, which draw approximately 10 kW. In total, the UPS capacity will exceed 100 kW. A modular UPS for such a load, with an autonomy time of at least 10 minutes, together with a power switching cabinet, will occupy approximately 4 rack spaces.
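The sizing logic can be restated as a simple illustrative sketch using the figures Croc gives above:

    # Rough UPS sizing for the first line of the Croc/CommScope proposal, per the figures above.
    it_load_kw = 90.0        # 16 racks after expansion, incl. providers' channel equipment and core switches
    cooling_kw = 10.0        # cooling-system fans and controllers that must also stay on the UPS

    required_ups_kw = it_load_kw + cooling_kw   # just over 100 kW, matching the article's estimate
    print(f"Required UPS capacity: at least {required_ups_kw:.0f} kW")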

As mentioned above, the highlight of the proposed solution is the use of direct free cooling with adiabatic humidification. For after-cooling and backup cooling, a freon (DX) system is proposed.

"LANIT-Integration"

The customer was offered a solution based on the SME E-Module modular physical protection room (see Fig. 5): from standard components, protected rooms of any geometry can be quickly erected for a data center with an area from 15 to 1000 m². For our task, a room of at least 75 m² is required. Design features include a high level of physical protection from external influences (thanks to a steel beam-and-column supporting frame and reinforced structural panels), fire resistance (60 min according to EN 1047-2) and dust and moisture protection (IP65). Good thermal insulation properties help optimize cooling and heating costs.

The SME FAC (Fresh Air Cooling) indirect free-cooling system with adiabatic humidification was chosen for IT equipment cooling. In such a system, there is no mixing of external and internal air flows. SME FAC blocks were selected for the project, allowing for the removal of up to 50 kW of heat each (for 10 racks of 5 kW each). They will be added as needed - when the number of racks increases, as well as when the load on individual racks increases to 20 kW.

Around the E-Module room, an area at least 3 m wide around the perimeter must be provided for the installation and maintenance of the SME FAC cooling units. In addition, the project includes a chiller system, but it will be used only during the periods most unfavorable for the main system. According to LANIT-Integration estimates, the data center will be able to operate without turning on the chillers up to 93% of the year (at outside air temperatures up to +22°C).

Cold air will be distributed under the raised floor. If necessary, the MDC can be equipped with the SME Eficube cold aisle containment system.

The uninterruptible power supply is based on monoblock Galaxy 5500 UPSs from Schneider Electric. At the first stage, two 80 kVA Galaxy 5500 units will be installed; subsequently, the required number of UPSs will be added in parallel. The parallel system can contain up to 6 such units and is therefore capable of supporting a load of 400 kVA in N+1 mode.
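A quick check of the parallel-system arithmetic, assuming (as stated above) 80 kVA units and a maximum of six units in parallel:

    # N+1 capacity of the parallel Galaxy 5500 system, per the figures quoted above.
    unit_kva = 80       # rating of one Galaxy 5500 unit
    max_units = 6       # maximum units in one parallel system

    n_plus_1_capacity_kva = (max_units - 1) * unit_kva   # one unit is held in reserve
    print(f"Supported load in N+1 mode: {n_plus_1_capacity_kva} kVA")   # 400 kVA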

The monitoring and environmental security solution is also based on a Schneider Electric product - the InfraStruXure Central system.

LANIT-Integration's proposal looks like one of the most solid, especially in terms of the level of room security and the composition of the cooling system. However, a number of points - for example, the proposal to deploy a full-fledged chiller system that will be used only a small part of the year - suggest a high cost of this project. Unfortunately, we were not provided with cost information.

"President-Neva"

The company offered the customer a solution based on a “Sever”-class block container. Thanks to this design, the solution can operate at temperatures down to –50°C, as confirmed by the positive experience of operating three Gazpromneft-Khantos MDCs at the Priobskoye field in the Far North (in total, President-Neva has built 22 MDCs).

The container has transverse partitions dividing it into three rooms - a vestibule and two rooms for cabinets with IT equipment - as well as a partition between the cold and hot aisles inside the IT rooms (see Fig. 6). One room will hold 8 racks of 5 kW, the other 2 high-load racks of 18 kW. Roller doors are installed between the rooms; when opened, they form continuous cold and hot aisles in the equipment compartment.

The engineering core of the data center is Emerson Network Power equipment. The room with 5 kW racks is cooled by Liebert HPSC14 ceiling air conditioners (cooling capacity 14.6 kW) with free cooling support; four such units are installed there. Another Liebert HPSC14 is installed in the room with the heavily loaded racks and the UPS system (UPS and batteries), but it serves only as a supplementary unit, with the main load handled by more powerful Liebert CRV CR035RA in-row air conditioners. Thanks to the emergency free cooling function, the Liebert HPSC14 can provide cooling during the short interval when mains power is lost and the diesel generator set is starting up.

The UPS consists of a modular Emerson APM series UPS (N+1 redundancy) and a battery pack. An external UPS bypass is provided in the input distribution device (IDU), located in the vestibule. An FG Wilson unit with a capacity of 275 kVA is proposed as a diesel generator set.

The customer can start the development of his data center with a solution for 5 kW racks, and then, when the need arises, the container will be retrofitted with CRV air conditioners and UPS modules necessary to operate high-load racks. When the first container is filled, a second one will be installed, then a third one.

Although this solution offers the customer Emerson Network Power engineering systems, as Dmitry Stepanov emphasizes, President-Neva MDCs are also implemented using equipment from other manufacturers - in particular, companies such as Eaton, Legrand and Schneider Electric. This takes into account the customer’s corporate standards for engineering subsystems, as well as all the manufacturer’s technical requirements.

"Technoserv"

The IT Crew modular data center is a comprehensive solution based on prefabricated structures with factory pre-installed engineering systems. One data center module includes a server block for installing IT equipment and an engineering block that ensures its uninterrupted operation. The blocks can be installed side by side on the same level (horizontal topology) or on top of each other (vertical topology): the engineering block at the bottom, the server block on top. A vertical topology was chosen for our customer.

Two options for implementing the IT Crew MDC are presented for the customer's consideration: the “Standard” and “Optimum” configurations. Their main difference lies in the air conditioning system: the “Standard” version uses precision air conditioning with chilled water, while the “Optimum” version uses freon as the coolant.

The server block is the same in both configurations: up to 18 server racks for active equipment can be installed in one block. The machine room is formed from 2 server blocks. The server blocks and a corridor block are located on the second level and are combined into a single technological space (machine room) by removing the partitions (see Fig. 7).

In the “Standard” MDC, air conditioners (chilled water) are installed in the space under the raised floor of the server block. In the “Optimum” MDC, the IT equipment is cooled by precision air conditioners located in the engineering block. The air conditioning systems are redundant according to the 2N scheme.

The engineering block is divided into 3 compartments: for the UPS system, for refrigeration equipment and (optionally) for a dry cooler. The UPS compartment contains a cabinet with the UPS, batteries and switchboard equipment. The UPS has N+1 redundancy.

In the refrigeration compartment of the “Standard” MDC, two chillers, a hydraulic module and storage tanks are installed. The cooling system makes it possible to achieve a redundancy level of 2N within one data center module and 3N within two adjacent modules, by interconnecting the cooling circuits. The refrigeration system has a free-cooling function, which can significantly reduce operating costs and extend the service life of the refrigeration machines.

The high (40 percent) level of localization of the IT Crew MDC's engineering systems was already mentioned above. However, Technoserv representatives did not disclose whose air conditioners and UPSs they use; apparently, the customer will be able to learn this secret during a more detailed study of the project.

"Utilex"

The NOTA modular data center is built on a prefabricated, easily dismantled frame structure that is transported in a standard ISO container. Its basis is a metal frame assembled at the installation site from ready-made elements. The base consists of sealed load-bearing panels mounted on a level platform of Penoplex thermal insulation boards laid in 3 layers in a staggered pattern. No foundation is required. The walls and roof of the MDC are made of sandwich panels, and a raised floor is installed for mounting the equipment.

The project is proposed to be implemented in 5 stages. At the first stage, an MDC infrastructure is created to accommodate 10 racks with an average consumption of 5 kW per rack and the option of raising the consumption of 2 racks to 20 kW each (see Fig. 8). The uninterruptible power supply is based on a modular UPS manufactured by Svyaz Engineering and batteries providing 15 minutes of autonomous operation (taking into account the consumption of auxiliary equipment - air conditioners, lighting, access control, etc.). If an IT load of 20 kW in 2 racks needs to be supported, power modules and additional batteries are added to the UPS.

The air conditioning system is based on 4 precision in-row Clever Breeze air conditioners produced by Utilex, with a cooling capacity of 30 kW each; the external units are located on the end wall of the MDC. The controllers built into the Clever Breeze air conditioners serve as the basis of a remote monitoring and control system, which monitors the temperature of the cold and hot aisles, the state of the UPS and the room humidity; analyzes the condition and cooling capacity of the air conditioners; and performs the functions of an access control and management system for the premises.

At the second stage, a second MDC module is installed for 5 racks with a load of 5 kW (in this and subsequent expansion “quanta” it is possible to increase the load per rack to 20 kW). At the same time, the module area has a reserve for accommodating 5 more racks (see Fig. 9), which will be installed in the third stage. The fourth and fifth stages of data center development are similar to the second and third. At all stages, the above-mentioned elements of the engineering infrastructure (UPS, air conditioners) are used, which are expanded as needed. The diesel generator set is installed to accommodate the entire planned capacity of the data center at once - Utilex specialists chose the WattStream WS560-DM unit (560 kVA/448 kW), placed in its own climatic container.

According to Utilex specialists, the NOTA MDC infrastructure allows the module area to be increased as the number of racks grows, by adding the required number of load-bearing panels to the base. This makes it possible to spread the capital costs of creating the modules evenly across the stages. In this option, however, building a smaller module (5.4 × 4.2 × 3 m) is not economically justified, since the slight reduction in capital costs for the module would be offset by the need to invest twice in the fire extinguishing system (at the second and third stages).

An important advantage of the Utilex NOTA data center, from the standpoint of import substitution, is that most of its subsystems are produced in Russia. The stand-alone infrastructure of the NOTA MDC, the racks, the air conditioning systems and the remote monitoring and control systems are produced by Utilex itself; the uninterruptible power supply system by Svyaz Engineering; and the automatic gas fire extinguishing system by Pozhtekhnika. The computing and network infrastructure of the data center can also be implemented on Russian-made solutions from ETegro Technologies.

GreenMDC

The customer was offered a Telecom Outdoor NG modular data center consisting of Telecom modules (which accommodate the racks with IT equipment), a Cooling module (cooling systems) and an Energy module (uninterruptible power supply equipment and electrical panels). All modules are manufactured at GreenMDC and, before being shipped to the customer, the MDC is assembled at a test site where full testing is carried out. The MDC is then disassembled and the modules are transported to their destination. The individual modules are metal structures with walls already installed, prepared for transportation (see Fig. 10).

Heat removal is based on HiRef precision cabinet air conditioners supplying cold air under the raised floor; the uninterruptible power supply is based on Delta modular UPSs. The electrical distribution network uses redundant busbars - the racks are connected by installing tap-off boxes with circuit breakers on the busbars.

The first stage of the project covers site preparation, installation of a diesel generator set and of 3 data center modules (see Fig. 10): a Telecom module with 16 rack spaces (10 racks of 5 kW each; 6 rack spaces remain unfilled), a Cooling module with two 60 kW air conditioners, and an Energy module with 1 modular UPS. The UPS consists of a 200 kVA chassis and three 25 kW modules (for the Delta DPH 200 UPS, the kilowatt and kilovolt-ampere ratings are identical). To ensure uninterrupted operation of the engineering systems, 1 more modular UPS is installed: an 80 kVA chassis with two 20 kW modules.
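A quick illustrative check of the first-stage UPS capacities against the initial IT load (module ratings as listed above; the 10 × 5 kW load is the customer's starting requirement):

    # First-stage UPS capacity vs. initial IT load in the GreenMDC proposal, per the figures above.
    it_ups_kw = 3 * 25            # Delta DPH 200: chassis with three 25 kW power modules
    eng_ups_kw = 2 * 20           # engineering-systems UPS: chassis with two 20 kW modules
    initial_it_load_kw = 10 * 5   # ten 5 kW racks installed at stage one

    print(f"IT UPS capacity: {it_ups_kw} kW vs. initial IT load: {initial_it_load_kw} kW")
    print(f"Engineering-systems UPS capacity: {eng_ups_kw} kW")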

At the second stage, expansion takes place within the existing MDC modules: 6 more racks are added to the Telecom module. Electrical power is increased by adding power modules to the modular UPSs, and cooling capacity by increasing the number of air conditioners. All utilities (freon piping, electrical) for the air conditioners are laid at the first stage. If a high-consumption rack needs to be installed, a similar power-increase procedure is applied; in addition, the cold aisle is contained and, if necessary, active raised-floor tiles are installed.

At the third stage, a second Telecom module is installed, allowing up to 32 racks in the MDC. The capacity of the engineering systems is increased in the same way as at the second stage, up to the maximum design capacity of the MDC. Data center expansion is carried out by the supplier's staff without interrupting the operation of the installed modules. Production and delivery of the additional Telecom module take 10 weeks, and its installation takes no more than one week.

Huawei

The Huawei IDS1000 mobile data center consists of several standard 40-foot containers: one or more containers for IT equipment, a container for the UPS system and a container for the refrigeration system. If necessary, a container for duty personnel and storage is also supplied.

For our project, an IT container with 18 racks (6 rows of 3 racks each) was chosen (see Fig. 11). At the first stage, it is proposed to place 10 racks of IT equipment, 7 in-row air conditioners, power distribution cabinets, gas fire extinguishing cylinders and other auxiliary equipment in it. At the second and subsequent stages, 8 racks and 5 in-row air conditioners are added to the first container, a second container is installed, and the required number of racks with air conditioners is placed in it.

The maximum power of one IT container is 180 or 270 kW, which means that up to 10 or 15 kW, respectively, can be allocated to each rack. Huawei's proposal does not specifically address the placement of 20 kW racks, but since the allowable power per rack is higher than the 5 kW requested by the customer, the load can be redistributed.
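The per-rack figures follow directly from the container ratings, as this trivial illustrative check shows:

    # Per-rack power budget of the Huawei IDS1000 IT container, per the figures quoted above.
    racks_per_container = 18
    for container_power_kw in (180, 270):
        per_rack_kw = container_power_kw / racks_per_container
        print(f"{container_power_kw} kW container -> {per_rack_kw:.0f} kW per rack")   # 10 and 15 kW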

The in-row air conditioners are supplied with cold from a chiller system mounted on the frame of a separate container. The UPS system (UPS and batteries) is also housed in a separate container. The UPS is configured for the calculated load in 40 kW increments (up to a maximum of 400 kVA).

Huawei included its new-generation NetEco management system in the solution, which monitors and controls all major systems: power supply (DGS, UPS, PDUs, batteries, distribution boards and automatic transfer switches), cooling (air conditioners) and security (access control, video surveillance). In addition, it can monitor environmental conditions using various sensors: smoke, temperature, humidity and water leakage.

The main feature and advantage of Huawei's offer is that all the main components, including air conditioners and UPSs, are from one manufacturer - Huawei itself.

Schneider Electric

The proposed MDC is based on standard NON-ISO25 all-weather modules with high fire-resistant characteristics (EI90 according to EN-1047-2). Access is controlled by an access control system with a biometric sensor. The modules are delivered to the installation site independently, then joined to form a single structure.

The first stage is the most extensive and costly: the base is leveled and reinforced (a screed is poured); two chillers (Uniflair ERAF 1022A units with a free cooling function), a diesel generator set, an energy module and one IT module (10 racks) are installed; and all the piping and wiring are laid. The energy module and the IT module form three rooms: the energy center, the machine room, and the entrance gateway/vestibule (see Fig. 12).

The energy module room houses an APC Symmetra 250/500 UPS (with a set of batteries and a bypass), switchboard equipment, a gas fire extinguishing system, main and emergency lighting, and a cooling system (two SE AST HCX CW 20 kW air conditioners in an N+1 configuration). The UPS powers the IT equipment, air conditioners and pumps. The stage 1 IT module comes pre-installed with 10 racks (AR3100), PDUs, a busbar, hot aisle containment and a cooling system based on SE AST HCX CW 40 kW units (2+1). These air conditioners are designed specifically for modular data centers: they are mounted above the racks and, unlike, say, in-row air conditioners, do not take up useful space in the IT module.

At the second stage, a second IT module is docked to the first; it comes with 5 racks and is otherwise equipped in the same way as the first (at this stage only 2 of its 3 air conditioners are activated, in a 1+1 configuration). In addition, another chiller is installed, and the modular UPS is fitted with the necessary batteries and inverter modules. The third stage boils down to adding 5 racks to the second IT module and activating its third air conditioner. The fourth and fifth stages are similar to the second and third.

The StruxureWare DCIM suite is proposed as the management system.

As a result, the customer will receive a solution that fully meets his requirements from one of the world leaders in the field of modular data centers. At this stage, Schneider Electric has not provided information on the cost of the solution.

CONCLUSION

The main design features of the proposed data centers, as well as their cooling and power supply systems, were briefly discussed above. These systems are presented in more detail in the project descriptions available on the website, which also covers other systems, including the gas fire extinguishing equipment, the access control and management system (ACS), the integrated management system, etc.

The customer received 9 projects that largely met its requirements. The descriptions and characteristics presented convinced it of the main thing: modern modular solutions make it possible to obtain a fully functional data center in a fraction of the time required by the traditional approach (capital construction or reconstruction of a building), and funds can be invested gradually as needs grow and the data center is scaled accordingly. The latter point is extremely important in the current conditions.

Alexander Barskov is a leading editor of the Journal of Network Solutions/LAN.

ADM Partnership has distilled its many years of experience in the design and construction of data processing centers (DPCs) into unified modular solutions capable of meeting any Customer request.

VARIETIES OF MODULAR DATA CENTERS

  • Container data centers (ISO)
  • Modular data centers OUTDOOR S and OUTDOOR L
  • Modular data centers INDOOR
  • Non-standard data centers

CONTAINER DATA CENTERS (ISO)

Installation is possible both inside existing buildings and in open areas. Depending on the task, the standard solution is modified to meet specific needs.


The main advantages of container solutions:

  • low capital costs;
  • a container data center is not a capital construction project and does not require obtaining appropriate permits;
  • reduced production time compared to other types of data centers and the possibility of subsequent relocation;
  • delivery of a container data center does not require special permits and can be carried out by standard vehicles - road, rail, sea;
  • Commissioning of the data center is carried out at the factory. The Customer can evaluate the achievement of performance indicators even before installing the data center on the site;
  • simplicity and speed of commissioning. Installation of the container on the site takes no more than 2 days, provided that communications have been installed in advance.

Disadvantages of container solutions:

  • limited choice of brands of engineering equipment;
  • limited space for equipment maintenance;
  • limitation in equipment modernization in the future;
  • climate restrictions;
  • limitation on equipment power and number of racks;
  • construction of external infrastructure is required (DGS, BKTP, etc.).

MODULAR DATA CENTERS OUTDOOR S

The Outdoor S data center is designed for installation in an open area. It is mounted from modules pre-assembled at the factory, manufactured on the basis of a standard version, or developed individually taking into account the Customer’s requirements.


The data center structure is assembled at our factory site, with the maximum possible amount of equipment and systems installed. When ready, the modules, with the equipment inside, are transported to the Customer’s site.

As part of the contract, we prepare the site and utilities at the Customer’s location. The systems are assembled and installed in a short time without extensive construction and installation work, after which final commissioning is carried out.

Installation of a data center with 60 racks on site takes no more than 1 week, taking into account pre-installed communications and prepared infrastructure.

The main advantages of Outdoor S solutions:

  • modernization of engineering infrastructure as necessary;
  • construction of external infrastructure using similar modular technology (Administrative buildings, checkpoints, warehouses, transformer substations, diesel generator sets and DIBP).

MODULAR DATA CENTERS OUTDOOR L


The Outdoor L data center is designed for installation in an open area. It is mounted from modules pre-assembled at the factory, manufactured on the basis of a standard solution, or developed individually taking into account the Customer’s requirements.

Main advantages of Outdoor L solutions:

  • the ability to create an individual layout;
  • the ability to choose a vendor of engineering equipment;
  • the ability to “hot” increase IT capacity as needed;
  • the possibility of subsequently changing the data center location with the involvement of ADM Partnership specialists;
  • construction of external infrastructure using similar modular technology (administrative buildings, checkpoints, warehouses, transformer substations, diesel generator sets and DIBP).

MODULAR DATA CENTERS INDOOR


The Indoor data center is designed for installation inside existing industrial buildings. It is assembled from standard or individually designed modules taking into account the Customer’s technical specifications.

Machine room modules and data center refrigeration units are manufactured at the factory with the maximum possible amount of equipment and systems installed. When ready, the modules, with the equipment inside, are transported to the Customer’s site, where assembly and installation of the remaining systems are completed in the shortest possible time. The Indoor solution requires reconstruction of the building's engineering infrastructure and an individual solution for the location of transformer substations, DGU/DIUPS rooms and fuel supply systems. A design solution for the placement of the external units of air conditioners/cooling towers must also be developed. As part of the contract, we carry out a technical inspection of the building and prepare the site and utilities.


Main advantages of Indoor solutions:

  • rapid deployment of data centers using pre-assembled modules;
  • the ability to choose a vendor of engineering equipment;
  • modernization of engineering infrastructure without stopping the data center;
  • possibility of flexible expansion of data center capacity as needed.

Disadvantages of INDOOR solutions:

  • restrictions associated with the capabilities of the building and its engineering infrastructure;
  • the need for individual design of power supply systems.

NON-STANDARD DATA CENTERS





The need to locate a data center in an existing building, special planning requirements, or restrictions on the territory or engineering infrastructure - all this leads to the need for an individual project that takes into account all of the Customer’s requirements.

Creating a non-standard turnkey data center is a complex, multifaceted task that involves solving organizational, technological and engineering issues. First of all, the construction of such a facility requires the development of a unique concept to increase the investment attractiveness of the project, as well as the collection and approval of a set of initial permitting documentation to provide the site with the necessary capacities and communications. Solving this problem requires the involvement of high-level specialists with experience in similar work. As a rule, when creating a non-standard data center in an existing building, the building has to be reconstructed, and the location of transformer substations, rooms for diesel generator sets / power distribution units, and fuel supply systems has to be considered separately. As part of the contract, we carry out a technical survey of the structure, organization of the site and the technical part of the work.

Advantages of non-standard data centers:

  • the ability to create a data center complex of any complexity;
  • flexibility in choosing solutions and vendors;
  • open plan;
  • the ability to create unique objects and use innovative, advanced solutions.

Flaws:

  • the longest timeframe for commissioning a data center compared to standard and modular solutions;
  • high capital costs;
  • the need for individual design of all systems;
  • the risk of unexpected costs and additional work.

The main criterion for evaluating data centers is reliability. Services hosted on a data center’s computing capacity must operate around the clock, 365 days a year, without failures. For many companies, service downtime means multimillion-dollar losses, so to attract customers the data center owner should not skimp on the redundancy and operational characteristics of all critical data center systems.


Modular data centers can significantly reduce investments in the construction of data centers, primarily due to the absence of the need to build a permanent building and the ability to expand the site as needed. Savings on capital costs can reach 50%. In the context of an economic recession, the demand for such solutions is growing.

The concept of a modular data center (MDC) is built on the idea of a standard solution - a module with a given set of power supply, cooling and physical security systems. Most of the components of such a block are already pre-assembled and pre-installed by the manufacturer, which makes it possible to significantly reduce the work required to install a ready-made solution on the customer’s side. In addition, individual modules are independent in terms of life support, which, combined with short deployment times, allows you to increase the power of the data center as needed.

“Thanks to scalability, the possibility of phased commissioning, as well as the independence of each module for critical life support systems, the customer has the opportunity to gradually invest only in the volume of the data center required at the moment. This fully corresponds to the current economic situation of many companies, in addition, in this way the problem of underutilization and idle capacity is solved, which ultimately significantly reduces the payback period of the data center,” says Stanislav Tereshkin, head of the data center department of the Asteros group.

What are the benefits of the MDC?

The economic feasibility of using modular technologies covers four aspects: reduced capital investment, lower operating costs, no downtime due to capacity expansion as needed, and a shorter return on investment due to faster site commissioning.

To launch a modular data center, it takes 4–6 months from the start of design, which is 2–3 times less than for a conventional data center. “The accelerated commissioning of modular data centers is also due to the fact that the modules, along with elements of the engineering infrastructure, are assembled directly at the vendor’s plant, where all engineering and installation resources are concentrated. This is much faster than performing this work at the customer’s site,” explains Vsevolod Vorobiev, head of the data center department of the Jet Infosystems network solutions center.

An important factor in reducing time is that modular solutions do not require the construction of a permanent concrete building with thick walls and ceilings. As a rule, modular technologies require only a “prefabricated” room that can be placed either inside another building or in an “open field”. “When asked what a data center should be like, we often come across the stereotype of a concrete building with strong walls and ceilings,” shares Alexander Perevedentsev, Deputy Head of the Engineering Systems Sales Support Department at Technoserv. “However, in reality, such a data center has no advantages over a modular one. Usually the desire for monumentality fades when the customer compares the construction time and budget with the costs of constructing a modular structure.”

“With proper planning, the design and commissioning period can be cut by a factor of two or more due to the virtual absence of capital construction (including design, project examination, obtaining construction permits and the construction itself),” comments NVision Group expert Arseny Fomin.

In terms of capital costs, certain savings can be achieved through the use of ready-made standard products, which are cheaper than solutions developed to order. However, an even greater role is played by the ability to commission individual units in stages, thereby reducing the size of the initial investment. “Constructing a data center based on the principle of modularity makes it possible to invest money in stages, creating operational fault-tolerant clusters that allow you to simultaneously solve the business tasks assigned to them (make a profit) and return the investment. The initial investment of the project is 20%–30%, the rest is distributed over time in accordance with the stages of putting the data center into operation,” explains Pavel Dmitriev, Deputy Director of the Department of Intelligent Buildings at Croc.
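
To make the phased-investment idea concrete, here is a minimal sketch; only the 20–30% initial share comes from the quote above, while the total budget ($10 million) and the number of later stages (three) are hypothetical values chosen purely for illustration:

    # Hypothetical phased-investment sketch. Only the 20-30% initial share is
    # taken from the quote above; the budget and number of stages are assumed.
    total_budget = 10_000_000       # assumed total data center budget, USD
    initial_share = 0.25            # within the quoted 20-30% range
    later_stages = 3                # assumed number of later expansion stages

    initial_investment = total_budget * initial_share
    per_stage = (total_budget - initial_investment) / later_stages
    print(f"Initial: ${initial_investment:,.0f}; each later stage: ${per_stage:,.0f}")
    # -> Initial: $2,500,000; each later stage: $2,500,000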

The overall CAPEX savings when using modular solutions can reach 30%, according to research company 451 Research. System integrators surveyed by CNews cite figures ranging from 15% to 50% savings on capital costs, depending on the design features of the facility.

In the area of operating costs, savings come from the unification of engineering systems, which reduces the costs of maintenance personnel, spare parts and repairs. In addition, it becomes possible to reduce energy bills. According to Schneider Electric, the modular approach can reduce the total cost of ownership (Total Cost of Ownership, TCO) by $2–7 per 1 W of data center power. “Electricity savings come from the fact that modular data centers are usually more compact than “classic” ones. The volume of air that needs to be cooled is smaller, so air conditioning systems consume less electricity,” explains Valentin Foss, a representative of the Utilex company.
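
For a sense of scale, the per-watt figure can be converted into absolute savings; the sketch below assumes a hypothetical 1 MW IT load (a size not taken from the article) and uses only the quoted $2–7 per watt range:

    # Rough conversion of the quoted TCO saving ($2-7 per watt) into absolute
    # terms. The 1 MW IT load is an assumed example size, not from the article.
    it_load_w = 1_000_000                        # assumed load: 1 MW
    saving_per_w_low, saving_per_w_high = 2, 7   # USD per watt (quoted range)

    low = it_load_w * saving_per_w_low
    high = it_load_w * saving_per_w_high
    print(f"Estimated TCO reduction: ${low:,} - ${high:,}")
    # -> Estimated TCO reduction: $2,000,000 - $7,000,000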

However, not all market players agree that the savings on OPEX are significant: “Operating costs will be approximately the same compared to a non-modular data center,” believes Arseny Fomin. However, he stipulates that “savings can be achieved due to the absence of the need to maintain and repair a capital building.”

Examples of modular data center projects in 2015–2016.

Cherkizovo Group (system integrator: Jet Infosystems; solution: Contain-RZ). The modular data center was built taking into account the requirements of the international standard TIA-942; the reliability level of the engineering systems corresponds to Tier II, with a fault-tolerance (availability) coefficient of 99.749%. The area is about 200 sq. m. The data center contains 32 high-load racks, which house the core of the corporate network and server platforms for all major business applications (SAP, 1C, CSB, corporate mail, etc.). The MSDC consists of three modules corresponding in size to a sea container, which made it possible to avoid additional costs for transporting the modules from the manufacturer to the site. In addition, the modular solution eliminates risks such as design and installation errors.

LinxDataCenter (system integrator: Insystems; solution: n/a). In 2016, the expansion of the LinxDataCenter data center by 265 rack spaces was completed. The project was carried out in a live data center, in which three additional modules had to be built. A characteristic feature of the project was a large volume of general construction work (excavating soil, pouring reinforced concrete floor slabs, installing columns and beams of metal structures) performed while the existing data center modules continued to operate without interruption and remained accessible to clients. Newly created modules were put into operation as soon as they were ready; accordingly, the entire complex of engineering systems was created and commissioned in stages, with the full set of intermediate tests and commissioning work.

Technopark “Zhigulevskaya Valley” (system integrator: Lanit-Integration; solution: Smart Shelter/AST Modular). The data center has six computer rooms with a total area of 843 sq. m and accommodates 326 racks with power from 7 to 20 kW each. A cooling system from AST Modular, which includes 44 Natural Free Cooling external air cooling modules, is deployed in four machine rooms on the second floor. The uninterruptible and guaranteed power supply system is built on Piller diesel-dynamic uninterruptible power supplies. The building is controlled using solutions built on Schneider Electric equipment. The IT equipment is assembled in a fault-tolerant configuration and placed in Smart Shelter modular rooms, which protect the data center from fire, moisture, vibration and other external influences.

Aeroflot (system integrator: Technoserv; solution: IT Crew). The computer room houses 78 server racks with an electrical power of 10 kW each. The total area of the two machine rooms is 175 sq. m, which ensures ease of transportation and maintenance of equipment. The server and engineering blocks are arranged vertically, in two tiers: the main engineering equipment is located on the first tier and the computing infrastructure on the second. When the server blocks are joined, a single technological room is formed for installing active equipment. The engineering systems of Aeroflot’s new modular data center are designed for round-the-clock uninterrupted operation; they are pre-installed, adjusted and tested.

Krasnoyarsk hydroelectric power station (system integrator: Utilex/Lanit-Sibir; solution: Nota). As part of the project, a modular data center with internal dimensions of 10.5 × 3 × 3.2 m was installed on a site located at a safe level outside the body of the hydroelectric dam. The data center is fully equipped with all the necessary engineering systems: distributed power supply, air conditioning and humidity maintenance, video surveillance, uninterruptible power supply, monitoring and access control, automatic fire extinguishing, fire and security alarms, and a structured cabling system. The technical difficulty of the project was that the work was carried out under a live 500 kV power transmission line, and the construction of the data center itself was constrained by time: during the assembly of the frame structure, the work had to fit into a tight, strictly regulated schedule of power line outages.
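
To put the availability figure quoted for the Cherkizovo project into perspective, 99.749% corresponds to roughly 22 hours of allowable downtime per year; a minimal conversion sketch:

    # Converts the 99.749% availability quoted for the Cherkizovo project into
    # approximate allowable downtime per year; no other assumptions are made.
    availability = 0.99749
    hours_per_year = 365 * 24                      # 8,760 hours
    downtime_hours = (1 - availability) * hours_per_year
    print(f"Allowable downtime: ~{downtime_hours:.0f} hours per year")
    # -> Allowable downtime: ~22 hours per year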

A modular data center is a data processing center composed of standard system elements - modules. A single module is itself a mobile data center; when several such mobile cells are combined, the result is a modular system. Modules may contain racks, UPS units or various auxiliary equipment, and they can also be installed indoors. New units are added as needs grow.

A modular, containerized data center includes IT equipment, power and cooling systems. Once it became possible to combine all of this into one container, proprietary modules from different vendors became increasingly common.

Modular data centers are a technology whose main goal is to reduce and optimize the costs of building IT infrastructure. The way infrastructure resources are distributed in this model is in many ways similar to the distribution of computing power in the cloud. The client pays exactly for the amount of capacity that he uses, and if necessary, he can relatively easily increase or reduce resources. The same is true with the modular construction of data centers: according to the customer’s decision, resources are added or subtracted quite quickly and relatively inexpensively.

Benefits of Mobile Data Centers

Mobile data centers have a number of advantages over stationary data centers. First of all, the advantages of modular data centers were appreciated by those who are looking for disaster-resistant solutions that can work in any conditions - mobile operators, the raw materials sector.

Secondly, these are companies for which business continuity is important, and the cost of downtime of IT equipment is extremely high, for example, financial and insurance companies, banks.

And thirdly, these are IT service providers who are looking for ways to reduce capital costs for construction.

Everything is learned by comparison, and the low cost of modular data centers becomes obvious when they are weighed against traditional ones. Firstly, modular data centers do not require capital construction or expensive refurbishment of existing premises. This advantage is so strong that it easily outweighs any significant disadvantages. Neither the builder nor the customer needs to invest, as they say, “in concrete”, and capital construction can easily consume half the budget for deploying a data center.

In addition, a constructed building, or a data center located in rented premises, is not mobile. If the client’s lease expires or the need arises to move to another office, then in the case of a capital building this turns into an investment disaster: funds comparable in size to the construction of another data center have to be invested. A modular data center, by contrast, can simply be disassembled and redeployed in any other space. Moreover, it is not tied to the volume of the premises, whereas it is often precisely the volume of available commercial real estate that becomes an obstacle to expanding the IT infrastructure.

The similarity with the cloud model mentioned above is that a modular data center is easily scalable. For example, suppose you have an open area of a thousand square meters. You can build a modular data center of sufficient capacity on 300 sq. m, and the rest of the area will be filled with new modules as business needs grow. This incremental scaling allows for significant reductions in both capital and operating expenses.

MDCs are more efficient: they do not require special infrastructure, the construction period is about 2–3 months, they are mobile and can be located near sources of cheap energy. In addition, they can be used to expand the capacity of stationary data centers.

If you conduct a comparative analysis of mobile and stationary data centers, it will look like this:

Stationary data center:

1. Deployment requires a special room.
2. Limited scalability.
3. Difficult to relocate.
4. Project implementation period – 1 year.
5. Installation and commissioning are significantly more expensive.
6. Operating costs are higher than for a mobile center.

Mobile data center:

1. No dedicated building is required to deploy an MSDC, so start-up and operating costs are significantly reduced.
2. Allows IT infrastructure to be deployed at a remote site.
3. Suitable for mobile offices or mobile facilities.
4. Can be used to expand the IT infrastructure (together with a stationary center).
5. Can be located close to cheap energy sources, without paying rent for premises, preserving the investment when moving or relocating an office.
6. Project implementation period – 2–3 months.

Wherever your business is located, you can always install a mobile center there. The availability of ready-made block-container solutions, the elimination or significant reduction of stages such as infrastructure design for stationary data centers, coordination of design solutions with supervisory authorities and construction, together with minimal site preparation requirements, lead to a significant reduction in the delivery and installation time of MSDCs compared to stationary centers.

The topic of mobile data centers (Mobile Data Center, MDC) is often covered in the press, both foreign and Russian, and attracts the attention of domestic entrepreneurs and IT industry specialists.

Mobile data centers can solve a number of problems that Russian enterprises and organizations working in the field of information technology often face: the need to deploy new racks in the shortest possible time, or to create additional rack capacity for a short period on the basis of existing primary and backup power and engineering infrastructure, when there are no suitable premises or the existing premises do not meet data center requirements.

In addition, MDCs allow you to create temporary server premises at a time when the permanent one has not yet been completed or put into operation. Equipment that is necessary for the functioning of the business during this period can be installed in the temporary premises.

Also, if there is a lack of specialized premises to accommodate a data center, or specialists with the necessary qualifications, or stationary data processing centers that provide the required level of reliability, an MSDC may be the optimal solution. The mobile data center can be used to expand the IT infrastructure. During active expansion into regions or acquisition of other companies, branches are opened without delay, and acquired assets are promptly integrated into a single information infrastructure.

Factors driving demand for mobile data centers include increased Internet traffic, the importance of uptime for Internet applications, and integration with the global economy. Thus, mobile data centers have ceased to be a niche offer, and have become in demand not only in the field, but also in megacities where there is not enough free space.

"Mobile Data Center": do you need it?

Industry confusion over the term “mobile data center” can not only confuse CIOs, but also lead them to choose an ineffective solution.

Origin of the term

The term “mobile” originates from solutions used by the American military. Those data centers really could be moved together with the equipment installed inside and quickly redeployed to a new location. They were usually housed in containers, which subsequently led to a confusion of definitions: container data centers began to be identified with mobile and modular ones. However, existing “container solutions” have little in common with their military predecessors - most of them cannot be transported with not only the IT equipment but even the engineering equipment installed inside (especially on Russian roads).

Who needs it?

Data center relocation is a very specific need, and the likelihood that a company will face it is low. And if such a move did occur, it would hardly take the form of a military special operation.

Among the characteristics of “mobile” data centers, most clients consider the following to be key:

  • possibility of delivery to remote places;
  • the ability to choose a location (closer to users, sources of electricity, near the building);
  • reduction of implementation time;
  • the possibility of a phased increase in capacity (on demand);
  • the possibility of moving in the future to a new place of operation.

It is these options that manufacturers of “mobile” data centers based on ISO containers point to as their advantages.

Pre-fabricated data center or mobile?

It is important to understand that, in addition to container (mobile) solutions, the market also offers pre-fabricated data centers (assembled from prefabricated structures) that not only provide customers with the same capabilities but also surpass “container data centers” in a number of key characteristics.

Pre-fabricated data centers allow more flexible scaling (the scaling increment is a rack rather than a whole module). A pre-fabricated data center does not suffer from the lack of service space typical of “container” data centers. At the same time, some prefabricated modular data centers offer lower transportation costs, greater flexibility and even faster delivery than “mobile” ones.

History

2005-2006: Google and Sun create the first containerized data centers

Previously, it was believed that Sun Microsystems pioneered mobile data centers by releasing the first Blackbox product in 2006. However, as it later became known, the pioneer was Google, which in the fall of 2005 built its first data center from 45 containers, each containing 1,160 servers and consuming 250 kW of electricity. The power density was 780 watts per square foot, and the cold-aisle temperature did not exceed 27.2°C.
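
For context, a couple of derived figures follow directly from the numbers above; this is a minimal arithmetic sketch using only the values quoted in the paragraph:

    # Simple arithmetic on the figures quoted above for Google's 2005 deployment;
    # all inputs come directly from the paragraph, nothing is added.
    containers = 45
    servers_per_container = 1_160
    power_per_container_kw = 250

    total_power_mw = containers * power_per_container_kw / 1_000
    watts_per_server = power_per_container_kw * 1_000 / servers_per_container
    print(f"Total: ~{total_power_mw:.2f} MW; ~{watts_per_server:.0f} W per server")
    # -> Total: ~11.25 MW; ~216 W per server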

2010: Only 80 containers sold worldwide

Despite the rapid growth of MSDC offerings from leading manufacturers, this market segment, according to IDC estimates, remains small, although fast-growing. In 2010, just over 80 containers were sold worldwide, which is, however, twice as many as in 2009.

Analysts from J’son & Partners (J&P) name the integration of Russian business into the global economy, the growth of Internet traffic, national projects and government programs, and business continuity as a competitive advantage as the main drivers of the commercial data center market in Russia. Although demand for MDCs is not yet high (no more than a dozen projects have been implemented so far), more than seven leading Russian IT companies have already developed and offered their own options for building mobile data centers.

At this time, MSDCs are in demand around the world by companies in the communications industry, oil and gas sector, large industrial and manufacturing holdings, metallurgical and energy enterprises.

Some integrator companies in Russia continue to work on technological solutions in this area:

  • Stack Group, offering the mobile solution Stack.Cube.

Companies such as Cherus and NVision Group also had a number of projects in this direction.

One of the reasons for choosing mobile data centers is the ability to connect them in a short time. On average, the production of an MSDC requires from 3 to 6 months, and the installation and connection time is limited to two weeks.

2011: Study of the efficiency of modular container data centers

The Green Grid consortium published in November 2011 the results of a study on the efficiency of data centers assembled from ready-made modular blocks that already contain the necessary equipment and programs. Similar modules are produced by a number of companies, including Dell, SGI, Capgemini and Microsoft.

Modular centers cost less to build due to reduced design costs. Modules tend to be denser than conventional centers, take up less space, and are often more productive because design flaws are identified during the module design stage. Additionally, modular data centers are typically optimized for energy consumption and cooling efficiency, and are designed to operate over wide temperature and humidity ranges. A common challenge for modular centers is reliability, the Green Grid report notes: the level of redundancy, robustness and guaranteed runtime of the different components of a module can vary greatly, so the reliability of the entire module can end up depending on a single element. Many are also concerned about the physical security of the modules.

At this time, no more than ten mobile data centers (MDCs) operate in Russia. Growth in this area was expected: more than seven companies are already offering projects to create container data centers. All of these companies note an increase in demand for mobile data centers: in 2011, the number of orders for mobile center projects exceeded the total for 2009–2010. Large companies such as VimpelCom and the aircraft manufacturing concern Irkut have already appreciated the advantages of mobile data centers, which indicates the growing popularity of MSDCs in the Russian market.

Customers of these projects understand that by packaging a data center in a container they receive a completely finished turnkey product with minimal financial costs and a high return on investment. An MSDC can serve both as the main data storage and processing center and as a backup one.

2012: APC study by Schneider Electric

According to research from APC by Schneider Electric, modular containerized Data Centers are faster to deploy, more reliable to operate, and have a lower total cost of ownership than data centers built independently from various components.

In 2012, APC by Schneider Electric specialists conducted a detailed study of the problem of power and cooling of modular containerized data centers and a comparative analysis with self-built data centers.

Key issues addressed
