Admit It, You Don’t Really Know What A Data Center Is

For commercial real estate professionals, data center jargon can seem to belong more to the realm of silicon chips than steel girders.

Until recently, the data center sector was a relatively siloed corner of commercial real estate — a niche for investors focused primarily on tech and digital infrastructure. But now, a wave of new investors is flooding into the data center space.

It’s easy to see why. While the coronavirus pandemic created uncertainty for traditionally stable asset classes like downtown office space and hotels, it simultaneously supercharged the growth of the internet and accelerated the pace of digital transformation for companies around the world.

The more internet users — and the more data that is collected and created by companies, individuals and devices — the more data center space is needed to process and store that data. As the internet grows, so grows the demand for data centers.

All of a sudden, data centers looked like the safest bet out there. But newcomers to data center investment face something akin to a language barrier. Even for a seasoned CRE pro, discussions around digital infrastructure are mired in sector-specific jargon that can be more tech than bricks-and-mortar.

Here’s a basic guide to demystifying some key terms and concepts around data centers.

Data Center

OK, what exactly is a “data center,” anyway? At its most basic, a data center is simply a place housing computing hardware and the equipment needed to connect to a network. But more broadly, data centers are the physical home of the internet — the central nervous system for the applications that are increasingly at the “center” of everyday life. They are essential pieces of infrastructure at the heart of the global economy.

From a real estate perspective, understanding the IT equipment housed within data centers is less important than understanding what’s needed to keep that equipment running. All data centers, regardless of their use or location, provide three basic things: power, network connectivity and cooling.

Data centers are mission-critical facilities — meaning they are expected to remain operational at all times, even during a natural disaster or any other event that causes widespread disruption. With no tolerance for downtime, data centers are built with redundancy across all of their critical infrastructure. Power comes from multiple sources, with backup from on-site generators and high-capacity battery units. Data centers also need multiple connections to the broader network, primarily through optical fiber cabling operated by internet service providers.
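
To see why that redundancy matters, consider a quick, illustrative calculation (the uptime targets below are common industry benchmarks, not figures from this article): even a 99.99% uptime guarantee permits less than an hour of downtime per year.

```python
# Illustrative arithmetic, not from the article: translate an uptime
# percentage into the downtime it allows per year, to show why
# redundant power and connectivity matter for mission-critical facilities.
HOURS_PER_YEAR = 24 * 365

for availability in (0.99, 0.999, 0.9999):
    downtime_hours = HOURS_PER_YEAR * (1 - availability)
    print(f"{availability:.2%} uptime allows ~{downtime_hours:.1f} hours of downtime per year")
```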

Imagine a company or institution with significant computing needs. It could be a tech company with a web-based product, a university or other research institution — or perhaps a manufacturer using computer-assisted design and automation to coordinate production with shipping companies and retailers. Such an organization has several options for where to locate the infrastructure behind those computing needs.

One option is for the organization to operate a data center itself, often referred to as an “on-prem” data center. The company could also lease server space at a “colocation” data center — a larger facility run by a third party that also houses servers for other users. Companies operating colocation facilities are at the heart of commercial real estate investment in data centers. A significant majority of existing data center assets are colocation facilities operated by publicly traded REITs like Equinix and Digital Realty Trust.

The hypothetical company could also meet its computing needs using the “cloud.” Instead of worrying about physical servers and hardware or where to house them, data processing and storage capacity can be purchased as a service. How that computing power is delivered is fully outsourced to providers like Google, AWS and Microsoft. Cloud-based data infrastructure services are used to some degree by nearly all major organizations, largely because of their simplicity and scalability: A company can purchase additional computing power and have it available immediately, rather than waiting to build out physical hardware on-premises or at a colocation facility. In reality, most large companies use more than one of these infrastructure solutions to meet their data processing and storage needs.
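
As a rough illustration of what “purchased as a service” means in practice, the sketch below uses boto3, AWS’s Python SDK, to request a new virtual server with a single API call; the machine image ID and instance type are placeholders for illustration, not recommendations.

```python
# A minimal sketch of cloud provisioning using boto3 (AWS's Python SDK).
# The ImageId below is a placeholder; a real call would use a valid
# machine image ID for the chosen region.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")  # the Northern Virginia region

# One API call provisions a new virtual server: no hardware to buy,
# rack, power or cool. Capacity is typically available within minutes.
response = ec2.run_instances(
    ImageId="ami-00000000",   # placeholder machine image ID
    InstanceType="t3.large",  # a modest general-purpose instance type
    MinCount=1,
    MaxCount=1,
)
print("Launched instance:", response["Instances"][0]["InstanceId"])
```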

Hyperscalers

These are the heavyweights. The term “hyperscalers” refers to the handful of companies that use the majority of data center space in the U.S., particularly cloud providers like Google, AWS, Microsoft and Oracle. Companies like Apple and Facebook, although they do not offer commercial cloud or infrastructure-as-a-service products, are also lumped into this category because of their massive data center footprints. Hyperscalers are increasingly developing their own massive data center campuses, often measured in millions of square feet. Hyperscalers also account for the majority of colocation tenancy.

Latency

Latency refers to the delay between a user’s input and a response from an application, such as the time between clicking on a link and the website appearing in the browser. The most significant factor in determining latency is the physical distance between the user and the server where the computing takes place. Even with data traveling at the speed of light, a greater geographic distance means a longer delay between input and response. Congestion, the amount of data moving through the network at any given time, can also affect latency.
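
To make the role of distance concrete, here is a back-of-the-envelope sketch (the city pairs and distances are illustrative assumptions, not from the article). Light in optical fiber travels at roughly two-thirds of its vacuum speed, about 200,000 km per second, which puts a hard floor on round-trip time:

```python
# Back-of-the-envelope latency floor: light in optical fiber travels at
# roughly two-thirds the speed of light in a vacuum, about 200 km per
# millisecond. Distance alone therefore sets a minimum round-trip delay,
# before any routing, switching or server processing time is added.
SPEED_IN_FIBER_KM_PER_MS = 200.0

def min_round_trip_ms(distance_km: float) -> float:
    """Theoretical minimum round-trip time over a straight fiber run."""
    return 2 * distance_km / SPEED_IN_FIBER_KM_PER_MS

# Illustrative distances (approximate, straight-line):
routes = [
    ("Within the same metro area", 50),
    ("New York to Northern Virginia", 400),
    ("Los Angeles to Northern Virginia", 3_700),
]
for name, km in routes:
    print(f"{name}: at least ~{min_round_trip_ms(km):.1f} ms round trip")
```

Real network paths are longer than straight lines and add switching and processing delays, so observed latencies run higher than these floors.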

While small differences in latency may not be consequential when browsing the internet, those fractions of a second can have serious implications for certain applications. The ability to fly a drone via a web-based application, or the effectiveness of financial software that depends on split-second market reactions, hinges on low latency. This has significant consequences for data center investment and operations: Operators must weigh a facility’s proximity to the users who need their data processed fastest.

The Edge

Most simply, “The Edge” refers to computing done physically near the end user. To understand the concept, it helps to picture internet infrastructure as a hub-and-spoke model, with a data center at the center and end users around the outside. This is actually a fairly accurate representation of the geography of digital infrastructure in the U.S.: Most data centers are clustered in just a handful of locations, particularly Loudoun County, Virginia.

In 2020, Northern Virginia accounted for 61% of all new data center construction, according to CBRE. This lack of geographic diversity creates problems for users who need low latency but are far from existing data centers, whether in rural areas or in dense urban areas where high power costs and land values discourage nearby data center development. Think of an industrial control system using real-time monitoring at a rural factory, trading software at a firm in a Northeast banking hub or systems to integrate self-driving cars in a small city.

Addressing this problem generally requires deploying some sort of data center infrastructure physically near these end users at the edge. So-called edge data centers can refer to anything from facilities taking up entire buildings to modular units the size of shipping containers. The largest cloud providers are also trying to offer lower latency through services like Amazon’s Cloud Regions, which allow enterprise customers to designate which data centers handle their computing needs.

But discussions about the edge can also refer to the broader shift toward a decentralized data ecosystem, as large cloud providers build out regional data center clusters. This decentralization could accelerate if a planned federal expansion of fiber optic infrastructure comes to fruition, potentially opening up new areas for data center construction.

Cooling

Packed tight with high-performance servers, data centers produce heat — a lot of it. Keeping a facility’s temperature within operating limits is one of the fundamental challenges of running a data center. Most existing data centers are cooled using either air systems or a mix of water and liquid refrigerant, with the latter used particularly in hot climates. Controversy over the amount of water required to cool large data center developments has made headlines in recent weeks, particularly in the drought-stricken western U.S.

Cooling technology is perhaps the primary target of innovation in data center design. Operators routinely use predictive technologies to model the flow of heat through their facilities. Some companies have taken creative approaches to incorporating cooling into their designs, from building underground in abandoned mine shafts to placing data centers under the ocean.


Source: Bisnow