"How to Evaluate a Building for Data Center Conversion"

Earlier this week I shared how Chicago developers turned a $12 million office building into a $40 million data center in 15 months. Today, let's talk about what to look for.

The Five Critical Factors:

1. Power Infrastructure
This is the dealbreaker. Can you increase capacity to 30-50 megawatts? Are there existing transformers? Proximity to substations? The Chicago building had substantial electrical infrastructure from its trading-floor days. Without power capacity, you don't have a deal.

2. Building Structure
You need wide, column-free floors; high ceilings for cooling; floor load capacity for server weight; and cavernous layouts. The Cboe building was designed for trading floors, which converts perfectly to data centers.

3. Existing Connectivity
"This building is very heavily wired from its time as a trading platform," said buyer Daniel English. Look for heavy wiring, fiber proximity, and urban locations near connectivity hubs.

4. Cooling Potential
CRE Daily reports that liquid cooling is becoming standard as power densities jump from 120 kW per rack today to 600 kW by 2027. Can the building support liquid cooling systems and upgraded HVAC?

5. Urban Location Advantage
English explained why urban conversions command premiums: "Just like Amazon last-mile delivery, data centers take less time to deliver when they're close." Low-latency applications (trading, streaming, gaming) pay premiums for urban proximity.

The Best Candidates: former trading floors, financial services buildings, telecom facilities, and heavy industrial sites with power infrastructure.

My Take: The Chicago flip proves it: the biggest returns aren't in greenfield development. They're in buying assets where someone else already solved the hard problems and the market hasn't caught up.

What building in your market has these five factors? Because while everyone else sees obsolete real estate, you might be looking at a 233% return in 15 months.
What are you seeing that others are missing?

Sources: "Flip of former Cboe Global Markets headquarters in Chicago shows soaring data storage values" by Ryan Ori, CoStar News, October 23, 2025; "Data Centers Driving Growth in AI and Real Estate," CRE Daily; PrincipalAM research
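As a quick sanity check on the headline numbers above (a $12 million purchase resold at $40 million), here is a minimal sketch of the total-return arithmetic behind the 233% figure; the function name is my own, only the dollar amounts come from the post.

```python
def total_return_pct(buy_price: float, sell_price: float) -> float:
    """Simple (unannualized) total return on a flip, as a percentage."""
    return (sell_price - buy_price) / buy_price * 100

# The Chicago Cboe-building flip cited in the post:
chicago_return = total_return_pct(12_000_000, 40_000_000)
print(f"{chicago_return:.0f}%")  # → 233%
```

Note this is total return over the 15-month hold, not an annualized rate.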
Data Center Operations
-
Underwater Data Centres - Using The Oceans As A Natural Heat Sink

"A two-year follow-up experiment began in 2018. A total of 864 servers, in a 12m by 3m tubular structure, were sunk 35m deep off the Orkney Islands in Scotland.

Microsoft is not the only company experimenting with moving data underwater. Subsea Cloud is another American company doing so. China's Shenzhen HiCloud Data Center Technology has deployed centres in tropical waters off the coast of Hainan Island.

Underwater data centres promise several advantages over their land-locked cousins. The primary benefit is a significant cut in electricity consumption. According to the International Energy Agency, data centres consume around 1 to 1.5 per cent of global electricity, of which about 40 per cent is used for cooling. Data centres in the ocean can dissipate heat into the surrounding water. Microsoft's centre uses a small amount of electricity for cooling, while Subsea Cloud's design has an entirely passive cooling system.

The Microsoft experiment also found the underwater centre had a boost in reliability. When it was brought back to shore in 2020, the rate of server failures was less than 20 per cent that of land-based data centres. This was attributed to the stable temperature on the sea floor and the fact that oxygen and humidity had been removed from the tube, which likely decreased corrosion of the components. The air inside the tube had also been replaced with nitrogen, making fires impossible. Another reason for the increased reliability may have been the complete absence of humans, which prevents the possibility of human error impacting the equipment."

https://lnkd.in/g-bMjpX2
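The electricity figures quoted above imply a simple upper bound worth making explicit: if subsea deployment made cooling essentially free, the savings could be at most the cooling share of the data-centre load. This back-of-envelope sketch uses only the 1-1.5% and 40% figures from the post; everything else is illustrative.

```python
def cooling_savings_share(dc_share_of_grid: float,
                          cooling_fraction: float = 0.40) -> float:
    """Upper bound on the share of *global* electricity recoverable
    if data-centre cooling energy went to (near) zero."""
    return dc_share_of_grid * cooling_fraction

# The IEA range quoted in the post: data centres at 1% and 1.5% of
# global electricity, with ~40% of that spent on cooling.
for dc_share in (0.01, 0.015):
    saved = cooling_savings_share(dc_share)
    print(f"DC share {dc_share:.1%} -> up to {saved:.2%} of global electricity")
```

In practice passive designs like Subsea Cloud's approach this bound, while pumped designs like Microsoft's capture only part of it.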
-
Three strikes. Three AWS data centers. Two availability zones down — the redundancy model AWS built to survive any single failure, gone in one attack.

AWS confirmed it in their own health dashboard: attacks "disrupted power delivery to infrastructure." Not servers — power. A drone doesn't need to hit the building. The substation outside is enough.

This isn't a war story. It's a structural vulnerability hiding in plain sight across the entire industry. 200+ data centers across the Middle East. Yemen, Iraq, Iran, the Red Sea — regions where drones are already a standard tool of pressure. Drones are getting cheaper. Data centers are getting more expensive. That asymmetry is only going to widen.

The industry has no answer — because no one ever asked the question. Tier III/IV, BICSI, Uptime Institute, EN 50600: not one standard contains the word "drone." They were written for a world where threats arrive on foot.

The solutions exist — they're standard practice in military infrastructure:
→ Underground cable entries — you can't hit what you can't see
→ 3 independent power feeds from different directions — one strike doesn't take the site down
→ BESS — keeps the facility alive while power is restored
→ Hardened substations — reinforced concrete instead of an open yard
→ Anti-drone EW systems (Dedrone, Aaronia) — jam GPS guidance up to 3 km out

Cost: from $200K. The cost of two AZs of downtime: orders of magnitude higher.

AWS was the first confirmed case. The precedent is set. Which data centers are next depends on who prepares first.

Have you already seen drone defense requirements appear in data center RFPs or site specs?

Photo credit: Wikimedia Commons (AWS us-west-2, Oregon). A typical hyperscale data center campus: open substations, exposed power infrastructure, no standoff defense. In geopolitically stable regions, not a concern. In conflict zones, a potential single point of failure.

#DataCenters #CriticalInfrastructure #PhysicalSecurity #EnergyResilience #AWS
-
Systems don't fail because something went wrong - they fail because nothing was prepared to handle what went wrong. That's why failure-handling patterns are a core part of system design.

This visual breaks down 12 essential techniques engineers use to build resilient, fault-tolerant systems that stay reliable under real-world pressure:

- Retry: Reattempt failed operations to handle temporary network or service glitches. Used in API calls, database queries, and distributed requests.
- Circuit Breaker: Stops calls to unhealthy services to prevent cascading failures. Common in microservices communication.
- Bulkhead: Isolates failures so one overloaded component doesn't crash the entire system. Used with thread pools and microservice resource isolation.
- Fallback: Provides a degraded or cached response when a dependency fails. Keeps the user experience smooth with static data or defaults.
- Timeouts: Prevent waiting forever for slow or stuck services. Critical for APIs, databases, and distributed systems.
- Dead Letter Queue (DLQ): Captures failed messages for later inspection or reprocessing. A staple in message queues and event-driven architectures.
- Rate Limiting: Protects systems from abuse or overload by restricting excessive requests. Used widely in public APIs and authentication services.
- Load Shedding: Drops non-critical traffic during peak load to keep core functions alive. Common in high-traffic or real-time systems.
- Graceful Degradation: Reduces functionality instead of failing completely. Used in dashboards, e-commerce platforms, and streaming apps.
- Redundancy: Duplicates critical components to eliminate single points of failure. Standard practice for databases, servers, and networks.
- Health Checks: Detect unhealthy services and remove them from rotation. Used by load balancers and orchestration tools.
- Failover: Automatically switches to a backup system when the primary one fails. Essential for multi-region deployments and database clusters.
Mastering these techniques is what separates systems that work in theory from systems that work in production. Which ones have you used in your architecture?
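To make the first pattern on the list concrete, here is a minimal sketch of Retry with exponential backoff and jitter. The `retry` helper and `flaky_fetch` stand-in are my own illustrations, not from any particular library; production code would typically reach for a library such as tenacity or resilience4j instead.

```python
import random
import time

def retry(op, attempts=4, base_delay=0.1, exc=(ConnectionError,)):
    """Run `op`, retrying on the given exceptions with exponential backoff."""
    for attempt in range(attempts):
        try:
            return op()
        except exc:
            if attempt == attempts - 1:
                raise  # out of attempts: surface the failure to the caller
            # exponential backoff plus jitter, so many clients retrying at
            # once don't hammer the recovering service in lockstep
            time.sleep(base_delay * (2 ** attempt) * random.uniform(0.5, 1.5))

# Stand-in for any call prone to transient failures: fails twice, then works.
calls = {"n": 0}
def flaky_fetch():
    calls["n"] += 1
    if calls["n"] < 3:
        raise ConnectionError("transient glitch")
    return "payload"

print(retry(flaky_fetch))  # → payload
```

The same wrapper shape extends naturally toward a circuit breaker: track consecutive failures across calls and refuse to invoke `op` at all once a threshold is crossed.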
-
🔥 #AI #datacenters are being treated like "just another big load." That's a dangerous planning assumption.

Most of the power they draw isn't flexible by default - it's reliability-driven and must stay on to keep compute running. Backup systems aren't demand response; they are continuity systems. And GPUs don't pull smooth power - they fluctuate in ways grids were not designed for.

But here's where the story has the potential to shift 👇

📌 Batteries and energy storage aren't just backup anymore - they can make large power users behave like flexible grid assets. With the right controls, storage can charge when the grid is abundant and discharge when it's stressed, helping balance supply and demand and supporting frequency and stability, all while keeping compute running. This is backed by recent grid research on dispatch and optimal BESS use.

📌 Growing work on grid-interactive UPS and storage systems shows that data centers can participate in ancillary markets and provide services like fast frequency response and other flexibility, if designed and governed with that intent.

In #Europe, this is already moving from theory to planning reality ⚡ Reports show that grid congestion and connection constraints are now influencing where data centres are built, with utilities reassessing connection rules and flexibility incentives as grid capacity becomes a decisive factor in investment decisions.

So the real shift isn't debating whether AI loads are "flexible" - it's engineering them to be grid-interactive assets, not inflexible liabilities.

👉 If we plan for them as firm loads PLUS intentional, contracted flexibility, we unlock new options for reliability, carbon goals, and grid stability. This isn't future talk - credible research and emerging deployments are already pointing toward hybrid storage, smarter dispatch, and real grid value. In our work helping design the control centers of the future for utilities and system operators, this needs to be part of the discussion.
https://lnkd.in/dXDvf3BE https://lnkd.in/dnMXjKzh ⚡ #GridPlanning #AIInfrastructure #DataCenters #EnergyStorage #BESS #GridFlexibility #EnergyTransition photo: Interactive map of data centre hubs alongside associated power and digital infrastructure // IEA's Energy and AI Observatory
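The charge-when-abundant, discharge-when-stressed rule described above can be sketched in a few lines. This is purely illustrative (not any vendor's control logic): the stress signal, power rating, and state-of-charge limits are made-up parameters, and the key point is that the IT load itself never changes, only what the grid sees.

```python
def grid_draw_mw(it_load_mw, grid_stress, soc,
                 rate_mw=20.0, soc_min=0.3, soc_max=0.9):
    """Return (net grid draw in MW, battery action: -1 discharge / +1 charge / 0 idle).

    grid_stress: 0..1 signal, e.g. a scarcity-price percentile.
    soc: battery state of charge, 0..1; soc_min reserves energy so the
    battery can still do its primary job as backup.
    """
    if grid_stress > 0.8 and soc > soc_min:
        return it_load_mw - rate_mw, -1   # discharge: grid sees a lighter load
    if grid_stress < 0.2 and soc < soc_max:
        return it_load_mw + rate_mw, +1   # charge: soak up abundant supply
    return it_load_mw, 0                  # otherwise behave as a firm load

# 100 MW of compute during a grid stress event, battery at 80% charge:
print(grid_draw_mw(100.0, grid_stress=0.9, soc=0.8))  # → (80.0, -1)
```

Real dispatch would be an optimization over forecasts and market prices rather than two thresholds, but the contracted-flexibility idea is the same.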
-
The most stressful seconds in a Data Center project.

We spend months testing chillers, UPS units, and generators individually. They all pass their checklists (Level 3 & Level 4 Commissioning). But individual success doesn't guarantee system resilience.

That is why the Integrated Systems Test (IST) is the single most critical milestone in Data Center delivery. It leads up to the "Black Building" / "Blackout Test," where we physically cut the utility power to the facility. No simulation. No software override. We just pull the plug.

For a few heart-stopping seconds, the facility relies entirely on physics and logic:
1- The Ride-Through: The UPS batteries must bridge the power gap with zero interruption to the IT load.
2- The Transfer: The generators must start, synchronize, and accept the block load instantly.
3- The Restabilization: The mechanical cooling must restart and normalize temperatures before thermal limits are breached.

The IST reveals hidden flaws that individual testing misses:
1- Breaker coordination acting slower than the sensitive IT threshold.
2- BMS latency causing "chatter" during the transfer switch.
3- Harmonic distortion that only appears when the entire infrastructure runs on backup power.

A Data Center hasn't truly been commissioned until it has survived the dark.

#DataCenters #IST #MissionCritical #Commissioning #Engineering #Resilience
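The ride-through step above comes down to simple arithmetic that is worth checking long before anyone pulls the plug: stored UPS energy divided by the IT load must comfortably exceed the generator start-and-transfer window. The figures below are illustrative assumptions, not from the post.

```python
def ride_through_ok(battery_kwh: float, it_load_kw: float,
                    gen_start_s: float, margin: float = 2.0) -> bool:
    """True if usable battery energy covers the utility-to-generator
    transfer gap with a safety margin (for a failed first start attempt,
    aged cells, etc.)."""
    runtime_s = battery_kwh / it_load_kw * 3600  # hours -> seconds
    return runtime_s >= gen_start_s * margin

# 500 kWh of usable battery, 2 MW of IT load, generators on load in 30 s:
print(ride_through_ok(500, 2000, 30))  # 900 s of runtime vs a 60 s requirement
```

Real commissioning of course verifies this with the actual discharge curve at the actual block load, which is exactly what the IST is for.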
-
The rapid growth of digital infrastructure has intensified the demand for reliable, efficient, and sustainable data center power systems. With AI workloads, high-density computing, and real-time digital services scaling at an unprecedented pace, enterprises and hyperscalers must operate without compromise or downtime. Modern architectures now integrate modular UPS, intelligent PDUs, and advanced energy storage, enabling scalable capacity, improved efficiency, and seamless operational continuity.

⭐ Core Attributes of Next-Gen Data Center Power Solutions
🔹 Redundancy & Resilience: Multiple failover paths (N, N+1, 2N, 2N+1) eliminating single-point failures.
🔹 High Availability / Uptime: Designed for "always-on" performance with near-zero unplanned outages.
🔹 Energy Efficiency: High-efficiency UPS, optimized power paths, reduced conversion losses.
🔹 Scalability: Modular architecture allowing incremental expansion as demand grows.
🔹 Power Quality & Conditioning: Harmonic filtering, surge suppression, and voltage regulation to protect loads.
🔹 Monitoring & Smart Control: Real-time analytics, predictive alarms, and complete DCIM integration.
🔹 Fast Transfer & Response Time: Instant source switching (grid → UPS → generator → ESS) without service impact.
🔹 Sustainability & Green Energy Integration: Renewables, battery technology, carbon tracking, microgrid compatibility.
⭐ Why Data Center Power Matters
✔ Critical backbone of global digital infrastructure
✔ Supports exponential growth in data, cloud, and AI loads
✔ Reduces operational risk and downtime exposure
✔ Enables energy efficiency and sustainability outcomes
✔ Drives cost optimization and technology innovation

⭐ Key Technologies Shaping the Future
• SiC- and GaN-based high-efficiency power semiconductors
• Solid-state circuit breakers (SSCBs)
• Small Modular Reactors (SMRs) for hyperscale power resilience
• Digital twins for power-flow modelling & predictive maintenance
• DCIM- and EMS-driven energy management platforms
• Edge- and cloud-based remote monitoring
• Microgrids with dynamic demand response
• Grid-interactive UPS & power-as-a-service models

#datacenterpower #DataCenter #AIDataCenter #CloudComputing #Infrastructure #DataCenterManagement #ITInfrastructure #DataCenterDesign #Colocation #Virtualization #NetworkInfrastructure #DataCenterOptimization #GreenDataCenters #EdgeComputing #DataCenterSecurity #ITStrategy #DigitalTransformation #ModularDataCenter #DataInfrastructure
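The redundancy levels listed above (N, N+1, 2N) have concrete availability math behind them that this sketch makes explicit, assuming identical, independent modules. The 0.999 per-module availability is a made-up figure for illustration only; real failure modes are rarely fully independent.

```python
from math import comb

def k_of_n_availability(k: int, n: int, a: float) -> float:
    """P(at least k of n identical, independent modules are up)."""
    return sum(comb(n, i) * a**i * (1 - a)**(n - i) for i in range(k, n + 1))

a = 0.999  # assumed availability of one UPS module

# Suppose the load needs 4 modules (N = 4):
print(f"N    (4 of 4 up): {a**4:.6f}")                        # ~0.996006
print(f"N+1  (4 of 5 up): {k_of_n_availability(4, 5, a):.6f}")  # ~0.999990
# 2N: two fully independent 4-module paths, either one can carry the load:
print(f"2N   (either path): {1 - (1 - a**4)**2:.6f}")           # ~0.999984
```

Even one spare module moves four nines of shortfall into six-nines territory, which is why N+1 is the usual baseline and 2N is reserved for the highest tiers (where it also buys concurrent maintainability, not just the probability shown here).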
-
🚨 Downtime in a data center doesn't start with failure. It starts with a design decision.

🔍 A poorly designed power architecture can look perfectly fine on paper… until the first overload, fault, or maintenance window exposes its weakness. In data centers, reliability isn't added later — it is engineered from Day 1.

Here are 10 critical Do's & Don'ts every engineer should consider when designing data center power systems:

⚡ 1️⃣ Redundancy Strategy
❌ Don't design a single power path.
✅ Do implement N+1, 2N, or 2N+1 redundancy aligned with your Tier target.

📈 2️⃣ Load Forecasting
❌ Don't size only for today's IT load.
✅ Do plan for 20-30% growth and future rack-density increases.

🔋 3️⃣ UPS Architecture
❌ Don't oversize monolithic UPS systems.
✅ Do use modular UPS for scalability and higher part-load efficiency.

⚡ 4️⃣ Short-Circuit Analysis
❌ Don't ignore fault-level calculations.
✅ Do perform full short-circuit and equipment-rating verification.

🔀 5️⃣ A & B Path Separation
❌ Don't route redundant feeds together.
✅ Do maintain physical and electrical separation of A/B paths.

🔄 6️⃣ Generator Coordination
❌ Don't assume generators will seamlessly handle load transitions.
✅ Do verify synchronization timing and step-load acceptance.

🛑 7️⃣ Selective Coordination
❌ Don't allow upstream breakers to trip before downstream ones.
✅ Do perform protection coordination studies to ensure selectivity.

🌍 8️⃣ Grounding & Bonding
❌ Don't treat grounding as an afterthought.
✅ Do design robust grounding for safety, fault clearing, and EMI control.

📊 9️⃣ Monitoring & Visibility
❌ Don't operate without real-time power visibility.
✅ Do integrate DCIM, BMS, and branch-level monitoring.

🔧 🔟 Maintainability
❌ Don't design systems that require a full shutdown for maintenance.
✅ Do ensure concurrent maintainability with bypass paths.
🎯 In data centers:
Seconds of downtime = massive financial loss
Poor coordination = cascading failures
Lack of redundancy = single point of failure

Power architecture is not just distribution — it's risk management engineered into copper and steel.

💬 What's the most common design mistake you've seen in data center projects? Let's share insights and improve how we design mission-critical systems.

♻️ Repost to share with your network if you find this useful
🔗 Follow Ashish Shorma Dipta for more posts like this

#DataCenter #PowerSystems #ElectricalEngineering #MissionCritical #InfrastructureDesign
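The load-forecasting rule in item 2 above (size for today's IT load plus 20-30% growth) reduces to a short calculation. This sketch is illustrative only: the 25% growth factor and the mechanical-overhead figure are assumptions of mine, not numbers from the post, and real sizing studies use measured load profiles.

```python
def design_capacity_kw(it_load_kw: float,
                       growth: float = 0.25,
                       mech_overhead: float = 0.35) -> float:
    """Day-one plant capacity: projected IT load with growth headroom,
    plus an allowance for cooling, fans, and distribution losses."""
    future_it = it_load_kw * (1 + growth)       # e.g. 20-30% growth headroom
    return future_it * (1 + mech_overhead)      # mechanical/electrical overhead

# 1 MW of IT load today, sized per the rule above:
print(f"{design_capacity_kw(1000):.0f} kW")
```

Sizing the monolithic plant for that full figure on day one is exactly the trap item 3 warns about, which is why the two do's pair naturally: forecast the ceiling, then reach it with modular UPS increments.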
-
Think Fire Can't Happen in a Tier IV Data Center? Think Again.

Inside a data center, fire isn't just a safety hazard; it's a nightmare that can erase uptime and trust in seconds. A single spark near a cable tray or UPS battery can trigger disaster. That's why fire protection isn't optional: it's your final barrier between continuity and collapse. (Uptime Institute, 2023)

Why Fire Protection Matters
Even a few seconds of uncontrolled fire can destroy Tier IV uptime goals. One incident may cost over $8 million in damage, plus reputation loss. From lithium-ion battery fires to short circuits, modern data centers need instant detection and clean suppression. (Data Center Dynamics, 2024)

From Spark to Suppression
The process begins with VESDA (Very Early Smoke Detection Apparatus) detecting smoke before a flame appears. Then the Fire Alarm Control Panel (FACP) triggers alerts, shuts down CRAC units, and isolates affected zones. If heat sensors confirm fire, clean-agent gas releases in under 10 seconds, extinguishing flames in less than 30 without harming equipment. (NFPA 75 & 2001, 2024)

The Core Components:
1. Detection: VESDA, smoke, and heat sensors
2. Control: Fire panel & BMS link
3. Suppression: Cylinders, nozzles, piping
4. Shutdown: HVAC & power isolation
5. Monitoring: DCIM integration
(FSSA, 2024)

The Gases That Save Data:
FM-200 (HFC-227ea): Reliable but high GWP
Novec 1230 (FK-5-1-12): Clean, eco-safe, zero residue
Inert gases (IG-541, IG-100): Reduce oxygen safely
Next-gen agents (FK-5112, BlueSky): Low-impact and sustainable
(3M Fire Protection Report, 2024)

When Systems Fail:
In 2022, a lithium-battery fire at the Kakao Data Center (South Korea) disrupted national digital services after the automatic suppression response failed. Losses reached millions and shook public confidence in digital infrastructure.
(International Fire & Safety Journal, 2023)

Lessons for Every Data Center Engineer:
Test systems quarterly
Integrate suppression with DCIM
Use eco-safe gases
Prioritize cable management & ventilation
(NFPA & Uptime Institute, 2024)

💬 Question: What fire suppression system protects your data hall — FM-200, Novec 1230, or inert gas?