Why Hyperscale Modular Data Centers Improve Efficiency

This article is part of a special issue of VB. Read the full series here: Smart Sustainability.

As the world transitions from Web 2.0 to Web3, which is expected to take shape later this decade, the data centers that will provide new and expanded services are undergoing major upgrades to handle everything users will need. They will deliver more bandwidth than we have ever seen before, yet draw less power from the wall.

How is this possible? Because data centers are going modular: individual parts of a facility can be replaced far faster and more efficiently than in previous years. Data bottlenecks are also much less common than they once were, thanks to more efficient network pipelines, better and lighter software, more solid-state storage, newer processors that run faster and cooler, and a couple dozen other improvements.

Any of these components can now be swapped into or out of a data center as soon as it stops doing its job; previously, hardware upgrades or enhancements took weeks or months to complete. The result is that the best and fastest available components can be kept running in the data center at all times.

New super data centers and telecom interconnects are also replacing entire first-generation facilities at an increasing rate. Some model data centers stand out as early examples of scalable energy use, lower power consumption, a small carbon footprint, and carefully planned sustainability built on natural energy sources. Data center builders can learn a lot from these installations about how to provide great computing power while respecting the environment.

Much more power, bandwidth will be needed for Web3

We will need much more power and bandwidth to run Web3 and metaverse-like applications, which carry far higher power envelopes; these include cryptocurrency, high-end gaming, big data analysis and machine learning, 3D video and imagery, and augmented reality.

AWS, Google, Alibaba, IBM, Microsoft, Dell EMC, Apple, Facebook, VMware, Oracle, AT&T, Verizon and other industry leaders are building new large-scale modular data centers around the world to provide essential computing power for future needs. They all follow new federal and state energy consumption guidelines, publish carbon footprint metrics, and draw on natural energy sources (primarily hydroelectric, wind, and solar). They all have exemplary PUE (power usage effectiveness) ratings.

PUE is a metric (or score) used to gauge the energy efficiency of a data center; it is calculated by dividing the total amount of energy entering the facility by the energy used to operate the IT equipment inside it, so a perfect score of 1.0 would mean every watt goes to the IT gear. For example, Facebook's data center in Prineville, Oregon ran an exemplary PUE of 1.078, and Google's many data centers average less than 1.20 across its entire global fleet. Generally, a PUE below 1.50 is considered high-end.
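To make the arithmetic concrete, here is a minimal sketch of the PUE calculation in Python. The energy figures are hypothetical monthly numbers chosen only to illustrate the ratio; they are not measurements from any of the facilities mentioned above.

```python
def pue(total_facility_energy_kwh: float, it_equipment_energy_kwh: float) -> float:
    """Power usage effectiveness: total facility energy divided by IT equipment energy.

    A score of 1.0 would mean every kilowatt-hour entering the building reaches
    the IT equipment; lower values indicate a more efficient facility.
    """
    if it_equipment_energy_kwh <= 0:
        raise ValueError("IT equipment energy must be positive")
    return total_facility_energy_kwh / it_equipment_energy_kwh

# Hypothetical monthly figures for a single facility (illustrative only):
total_energy_kwh = 10_780_000  # everything entering the building: IT load, cooling, lighting, losses
it_energy_kwh = 10_000_000     # energy consumed by servers, storage and network gear

print(f"PUE = {pue(total_energy_kwh, it_energy_kwh):.3f}")  # prints: PUE = 1.078
```

In this illustrative example, the overhead of cooling, power conversion, and lighting amounts to less than 8% of the IT load, which is the kind of margin that puts hyperscale facilities well below the 1.50 threshold noted above.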

A conventional data center can take about two years to set up, from conceptualization through deployment to operational use. By contrast, implementing a modular data center is much faster, often taking 50-75% less time (roughly six months to a year instead of two years) – and, as CFOs like to...
