This EY incident underscores a truth we often overlook: the most common cloud vulnerability isn't a zero-day exploit; it's a configuration oversight. A single misstep in cloud storage permissions turned a database backup into a public-facing risk. These files often hold the "keys to the kingdom", i.e. credentials, API keys, and tokens that can lead to a much wider breach. How do we protect ourselves against these costly mistakes? Suggestions:
1. Continuous Monitoring: Implement a CSPM (Cloud Security Posture Management) tool for 24/7 configuration scanning. CSPM tools continuously monitor cloud environments for misconfigurations, vulnerabilities, and compliance violations, providing visibility, threat detection, and remediation workflows across multi-cloud and hybrid setups, including SaaS, PaaS, and IaaS services.
2. Least Privilege Access: Default to private. Grant access sparingly.
3. Data Encryption: For data at rest and in transit.
4. Automated Alerts: The moment something becomes public, you should know.
5. Regular Audits: Regularly review access controls and rotate secrets.
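The "automated alerts" idea (suggestion 4) can be sketched in a few lines. This is a minimal illustration over a hypothetical bucket inventory; a real CSPM tool would pull this data from the cloud provider's API (S3, GCS, Azure Blob, etc.) rather than a hard-coded list:

```python
# Minimal sketch of misconfiguration alerting over a hypothetical inventory.
# Field names ("acl", "block_public_access") are illustrative assumptions.

def find_public_buckets(inventory):
    """Return names of buckets that are public or lack a public-access block."""
    return [b["name"] for b in inventory
            if b.get("acl") == "public-read" or not b.get("block_public_access", False)]

inventory = [
    {"name": "app-assets", "acl": "public-read", "block_public_access": False},
    {"name": "db-backups", "acl": "private",     "block_public_access": False},  # the risky backup case
    {"name": "audit-logs", "acl": "private",     "block_public_access": True},
]

for bucket in find_public_buckets(inventory):
    print(f"ALERT: bucket '{bucket}' may be publicly accessible")
```

Note that `db-backups` is flagged even though its ACL is private: defense in depth means alerting on any bucket without an explicit public-access block, not just ones already exposed.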
Cloud Migration Challenges and Solutions
-
-
Here are the most expensive Kubernetes mistakes (that nobody talks about). I’ve spent 12+ years in DevOps and I’ve seen K8s turn into a money pit when engineering teams don’t understand how infra decisions hit the bill. Not because the team is bad, but because Kubernetes makes it way too easy to burn cash silently. 𝐇𝐞𝐫𝐞 𝐚𝐫𝐞 𝐭𝐡𝐞 𝐫𝐞𝐚𝐥 𝐦𝐢𝐬𝐭𝐚𝐤𝐞𝐬 that don’t show up in your monitoring tools:
1. 𝐎𝐯𝐞𝐫𝐩𝐫𝐨𝐯𝐢𝐬𝐢𝐨𝐧𝐞𝐝 𝐧𝐨𝐝𝐞𝐬 "𝐣𝐮𝐬𝐭 𝐢𝐧 𝐜𝐚𝐬𝐞". Engineers love to play it safe, so they add buffer CPU and memory for traffic spikes that rarely happen. ☠️ What you get: idle nodes running 24/7, racking up your cloud bill. ✓ 𝐅𝐢𝐱: Use vertical pod autoscaling and limit ranges properly. Educate teams on real usage patterns vs. “just in case” setups.
2. 𝐏𝐞𝐫𝐬𝐢𝐬𝐭𝐞𝐧𝐭 𝐯𝐨𝐥𝐮𝐦𝐞𝐬 𝐭𝐡𝐚𝐭 𝐧𝐞𝐯𝐞𝐫 𝐝𝐢𝐞. You delete the app, but the storage stays. Forever. Cloud providers won’t remind you; they’ll just keep billing you. ✓ 𝐅𝐢𝐱: Use “reclaimPolicy: Delete” where safe, and audit your PVs like your AWS bill depends on it. Because it does.
3. 𝐋𝐨𝐠𝐠𝐢𝐧𝐠 𝐞𝐯𝐞𝐫𝐲𝐭𝐡𝐢𝐧𝐠... 𝐚𝐭 𝐞𝐯𝐞𝐫𝐲 𝐥𝐞𝐯𝐞𝐥. Verbose logging might help you debug, but writing 1TB+ of logs daily to expensive storage is just bad economics. ✓ 𝐅𝐢𝐱: Route logs smartly. Don’t store what you won’t read. Consider tiered logging or low-cost storage for historical data.
4. 𝐔𝐬𝐢𝐧𝐠 𝐒𝐒𝐃𝐬 𝐰𝐡𝐞𝐫𝐞 𝐇𝐃𝐃𝐬 𝐰𝐨𝐮𝐥𝐝 𝐝𝐨. Yes, SSDs are fast. But do you really need them for staging environments or batch jobs? ✓ 𝐅𝐢𝐱: Use storage classes wisely. Match performance to actual workload needs, not just default configs.
5. 𝐈𝐠𝐧𝐨𝐫𝐢𝐧𝐠 𝐢𝐧𝐭𝐞𝐫𝐧𝐚𝐥 𝐭𝐫𝐚𝐟𝐟𝐢𝐜 𝐞𝐠𝐫𝐞𝐬𝐬. You’re not just paying for internet egress. Internal service-to-service comms can spike costs, especially in multi-zone clusters. ✓ 𝐅𝐢𝐱: Optimize service placement. Use node affinity and avoid chatty microservices spraying traffic across zones.
6. 𝐍𝐞𝐯𝐞𝐫 𝐫𝐞𝐯𝐢𝐬𝐢𝐭𝐢𝐧𝐠 𝐲𝐨𝐮𝐫 𝐚𝐮𝐭𝐨𝐬𝐜𝐚𝐥𝐞𝐫 𝐜𝐨𝐧𝐟𝐢𝐠𝐬. Initial HPA/VPA configs get set and never touched again. Meanwhile, your workloads have changed completely. ✓ 𝐅𝐢𝐱: Treat autoscaling like code. Revisit, test, and tune configs every sprint.
The truth is, most K8s cost overruns aren't infra problems. They're visibility problems. And cultural ones. If your engineering teams aren’t accountable for infra spend, it’s just a matter of time before you’re bleeding cash.
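Fixes #1 and #2 above can be sketched as Kubernetes manifests. These are illustrative fragments only; the names, namespace, provisioner, and resource values are assumptions to adapt to your own cluster:

```yaml
# Fix #2 sketch: a StorageClass whose dynamically provisioned volumes are
# removed when their claim is deleted. reclaimPolicy: Retain keeps the disk
# (and the bill) after the claim is gone; Delete cleans it up.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: scratch-ssd          # hypothetical name
provisioner: ebs.csi.aws.com # example CSI driver; use your cloud's
reclaimPolicy: Delete
volumeBindingMode: WaitForFirstConsumer
---
# Fix #1 sketch: a LimitRange that sets sane default requests so
# "just in case" buffers can't silently overprovision a namespace.
apiVersion: v1
kind: LimitRange
metadata:
  name: container-defaults
  namespace: staging         # hypothetical namespace
spec:
  limits:
    - type: Container
      defaultRequest:
        cpu: 250m
        memory: 256Mi
      max:
        cpu: "2"
        memory: 2Gi
```

Use `Delete` only for genuinely disposable data; anything you'd regret losing belongs on a `Retain` class that you actively audit.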
-
Lift and shift is the most expensive way to avoid real cloud transformation. Moving your mess to the cloud just gives you an expensive mess. At Mayfair IT, we have built cloud platforms using fundamentally different approaches. The difference in outcomes is dramatic. Lift and shift is seductive. Take existing servers, virtualise them, run them in Azure or AWS. Call it cloud migration. Declare victory. The infrastructure is now in the cloud. The problems are unchanged. Applications still assume they run on dedicated hardware. Scaling requires manual intervention. Failures cascade because nothing was designed for distributed failure. You pay cloud prices for on-premises architecture. What cloud native actually means: we have built greenfield platforms on Azure designed from the beginning for cloud. Platform as a Service and Software as a Service components doing what they do best. Azure Data Factory orchestrating data pipelines instead of custom ETL running on virtual machines. Cosmos DB providing distributed databases instead of clustered SQL servers. Serverless functions handling event-driven workloads instead of always-on application servers. The difference is economic and operational. What changes with cloud native architecture:
→ Scaling happens automatically based on demand, not manual capacity planning
→ Failures in individual components do not bring down entire services
→ You pay only for resources actually used, not capacity provisioned for peak load
→ Updates deploy without downtime because architecture assumes continuous change
We have also migrated legacy systems to cloud where complete refactoring was not feasible. The challenge is knowing which approach fits which situation. Greenfield builds should always be cloud native. Legacy migrations require honest assessment of whether lift and shift provides enough value to justify the effort. Sometimes the answer is yes.
Moving a stable system with known workloads to cloud can reduce operational overhead even without refactoring. But presenting lift and shift as cloud transformation is dishonest. You moved the location. You did not change the architecture. The organisations getting real cloud value are the ones willing to rebuild applications to use cloud capabilities properly. How much of your cloud spending is on virtualised servers that could be replaced by managed services? #CloudNative #Azure #DigitalTransformation
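The "pay only for resources actually used" point is easy to make concrete with back-of-envelope numbers. Everything below is hypothetical (the workload shape and the per-core-hour rate are assumptions), but the structure of the comparison holds:

```python
# Back-of-envelope: always-on VMs sized for peak vs. autoscaled/serverless
# capacity that tracks actual demand. All numbers are hypothetical.
HOURS_PER_MONTH = 730
PRICE_PER_CORE_HOUR = 0.04   # assumed rate, USD

peak_cores = 8               # capacity needed at the busiest moment
avg_cores = 1                # average utilisation over the month

lift_and_shift = peak_cores * HOURS_PER_MONTH * PRICE_PER_CORE_HOUR
cloud_native = avg_cores * HOURS_PER_MONTH * PRICE_PER_CORE_HOUR

print(f"Provisioned for peak: ${lift_and_shift:,.2f}/month")
print(f"Pay for actual use:   ${cloud_native:,.2f}/month")
```

With these assumptions the always-on setup costs 8x as much, which is exactly the peak-to-average ratio; the spikier the workload, the bigger the gap.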
-
📌 The Communications Security Establishment Canada | Centre de la sécurité des télécommunications Canada and Canadian Centre for Cybersecurity have issued the "Roadmap for the migration to postquantum cryptography for the Government of Canada". The publication outlines the Cyber Centre’s recommended roadmap for the Government of Canada (GC) to migrate non-classified IT systems to use #PQC, including milestones, deliverables, and guidance for departmental planning and execution. While the document is quite concise, it covers some important topics in detail. I particularly like the organizational advice:
👉 Responsibility is delegated to the departments owning the systems.
👉 They are recommended to establish a committee and a dedicated migration lead.
👉 The committees should include at least one member from senior management to ensure executive buy-in and support.
👉 Other non-technical areas like finance, procurement, asset management, etc. should also participate.
👉 It places special focus on financial planning, education, and procurement policies.
This detail is not present in other recommendations, and the guidance provided here is really useful; if adopted successfully, it may solve some typical issues found in these early stages. On milestones and deliverables:
📅 April 2026: Develop an initial departmental PQC migration plan
📅 Beginning April 2026 and annually after: Report on PQC migration progress
📅 End of 2031: Completion of PQC migration of high-priority systems
📅 End of 2035: Completion of PQC migration of remaining systems
It highlights that:
👉 The milestones mean that, rather than just supporting PQC, the quantum risk has been mitigated.
👉 Departments and agencies are encouraged to migrate systems as early as possible to meet the milestone dates.
The Treasury Board of Canada Secretariat | Secrétariat du Conseil du Trésor du Canada will track and report on the overall process. Within a few days we've had both the EU and Canada roadmaps published. Clearly, awareness is turning into engagement and execution. I still have some backlog to analyze all these recommendations and extract what they have in common and where they differ. I'll be back on that. Canadian roadmap: https://lnkd.in/dF5CYAdC #cryptography #cybersecurity #quantum
-
𝗪𝗵𝘆 𝗱𝗼 𝘀𝗼 𝗺𝗮𝗻𝘆 𝗘𝗥𝗣 𝗺𝗶𝗴𝗿𝗮𝘁𝗶𝗼𝗻𝘀 𝗳𝗮𝗶𝗹? 𝗕𝗲𝗰𝗮𝘂𝘀𝗲 𝗰𝗼𝗺𝗽𝗮𝗻𝗶𝗲𝘀 𝘁𝗿𝗲𝗮𝘁 𝗶𝘁 𝗹𝗶𝗸𝗲 𝗮 𝘀𝗶𝗺𝗽𝗹𝗲 𝘀𝗼𝗳𝘁𝘄𝗮𝗿𝗲 𝗽𝗮𝘁𝗰𝗵, not the business transformation it truly is. Listening to my network, there seems to be a rush to complete ERP migrations as fast as possible, with SAP S/4HANA plans driving most of it. But an ERP system is more than just an IT upgrade. It’s a chance to redesign how your business operates and build a solution architecture that supports agility and innovation. While necessary, these migrations often become redundant without proper alignment to business goals. Something I've seen happen! Here are some "get rights" to consider:
◉ 𝗔𝗹𝗶𝗴𝗻 𝗯𝘂𝘀𝗶𝗻𝗲𝘀𝘀 𝗮𝗻𝗱 𝘁𝗲𝗰𝗵 𝗴𝗼𝗮𝗹𝘀 Ensure that IT and business leaders are on the same page. ERP systems serve broader business objectives, such as innovation, improving procurement strategies, and enhancing supplier relationships.
◉ 𝗙𝗼𝗰𝘂𝘀 𝗼𝗻 𝗼𝘂𝘁𝗰𝗼𝗺𝗲𝘀, 𝗻𝗼𝘁 𝗷𝘂𝘀𝘁 𝘁𝗼𝗼𝗹𝘀 Instead of getting caught up in the technology itself, be clear about the business benefits you'd like to achieve. New ERP functionality can support goals like efficiency, cost reduction, and agility.
◉ 𝗦𝗶𝗺𝗽𝗹𝗶𝗳𝘆 𝘄𝗼𝗿𝗸𝗳𝗹𝗼𝘄𝘀 𝗮𝗻𝗱 𝗽𝗿𝗼𝗰𝗲𝘀𝘀𝗲𝘀 𝗲𝗻𝗱-𝘁𝗼-𝗲𝗻𝗱 Don't just migrate complex, outdated processes; streamline them end-to-end. Reevaluate processes for efficiency and desired outcomes.
◉ 𝗜𝗻𝘃𝗲𝘀𝘁 𝗶𝗻 𝗰𝗵𝗮𝗻𝗴𝗲 𝗺𝗮𝗻𝗮𝗴𝗲𝗺𝗲𝗻𝘁 - 𝗻𝗼𝘁 𝗷𝘂𝘀𝘁 𝗶𝗻 𝘁𝗿𝗮𝗶𝗻𝗶𝗻𝗴 ERP migrations often fail due to poor user adoption. Beyond training, invest in communication and ongoing support that shows users the value and relevance of the system.
◉ 𝗜𝗻𝘃𝗼𝗹𝘃𝗲 𝗰𝗿𝗼𝘀𝘀-𝗳𝘂𝗻𝗰𝘁𝗶𝗼𝗻𝗮𝗹 𝘁𝗲𝗮𝗺𝘀 ERP impacts every area of the business, so cross-team collaboration is essential. Involving stakeholders from finance, procurement, IT, and operations ensures the system meets everyone’s needs.
◉ 𝗙𝗼𝗰𝘂𝘀 𝗼𝗻 𝗱𝗮𝘁𝗮 𝗾𝘂𝗮𝗹𝗶𝘁𝘆 - 𝘄𝗶𝘁𝗵𝗼𝘂𝘁 𝗰𝗼𝗺𝗽𝗿𝗼𝗺𝗶𝘀𝗲 An ERP system is only as good as the data it processes. Ensure that data is clean, consistent, and reliable before migration. Dirty or incomplete data is one of the biggest challenges post-go-live.
◉ 𝗣𝗿𝗶𝗼𝗿𝗶𝘁𝗶𝘀𝗲 𝗦𝘆𝘀𝘁𝗲𝗺 𝗳𝗹𝗲𝘅𝗶𝗯𝗶𝗹𝗶𝘁𝘆 𝗮𝗻𝗱 𝗖𝗼𝗺𝗽𝗼𝘀𝗮𝗯𝗶𝗹𝗶𝘁𝘆 Choose an architecture that allows for future-proofing, scalability, and integration of new features. Business models evolve, and your ERP must evolve with them.
◉ 𝗦𝗲𝘁 𝗿𝗲𝗮𝗹𝗶𝘀𝘁𝗶𝗰 𝘁𝗶𝗺𝗲𝗹𝗶𝗻𝗲𝘀 - 𝗶𝘁'𝘀 𝗻𝗼𝘁 𝗴𝗼𝗶𝗻𝗴 𝘁𝗼 𝗯𝗲 𝗾𝘂𝗶𝗰𝗸 𝗶𝗳 𝘁𝗿𝗮𝗻𝘀𝗳𝗼𝗿𝗺𝗮𝘁𝗶𝘃𝗲 Don’t rush an implementation. ERP migrations are complex and require time to integrate properly. A phased approach allows for troubleshooting and mitigates the risk of failure.
❓ Any other "get rights" I missed that you would add from your experience?
#erp #businesstransformation #migration #sap4hana
-
On International Data Centre Day, my hope is that the "rest of Africa" doesn't get left behind in the AI investment boom. It's critical for giving millions of Africans opportunities to progress and lead better lives. A few years back, I worked on building a data center real estate business at Agility with multiple data center ready sites in Africa and engaged with a large range of data center operators and hyperscalers. Some thoughts: 1. The lion's share of investment in data centers is (still) in South Africa and four other countries (Nigeria, Kenya, Morocco & Egypt). The "rest of Africa" has very little data center capacity and investment - risking leaving those economies and societies behind and disadvantaged. The African continent only accounts for 0.6% of global data center capacity according to the Africa Data Centres Association. 2. Demand for capacity is expected to rise from about 0.4 GW today to 1.5 to 2.2 GW by 2030 according to McKinsey & Company research by Kartik Jayaram, Luca Bennici & colleagues. It will require $10 billion to $20 billion in new investment to unlock an estimated revenue pool of $20-30 billion across the value chain by 2030. What will be critical to unlocking that demand is the pace of AI adoption and large-scale digitalization by the public sector / governments and by enterprises, enterprise cloud adoption and consumer growth demand aggregation, investable sites, reducing the cost of capital and affordable power. 3. From my experience, multiple challenges exist to greenfield development in Africa, including land acquisition, power and fiber connectivity (problems I was working on solving) and regulatory environments. 
The war stories I have heard from others and seen directly show that data center development in Africa requires a different level of grit and commitment - a lot of that will come from great entrepreneurs that I have had the opportunity of knowing and learning from, including Amine K., Ayotunde (Tunde) Coker, Ike Nnamani, Ranjith Cherickel, Robert Mullins and others like Strive Masiyiwa and Funke Opeke - and hopefully many more! It's good to see global giants like Digital Realty & Equinix also expand on the continent. --- The video clip below is a throwback to a conversation I had with Andy Davis on the Inside Data Centre Podcast a few years back - link in the comments. Africa Data Centres Association | DIGITAL COUNCIL AFRICA
-
Everyone's chasing data center land. Almost everyone is missing the real constraint. It's not fiber. It's not even land. It's power. U.S. Interior Secretary Doug Burgum said at the Prologis conference: "To win the AI arms race against China, we've got to figure out how to build these artificial intelligence factories close to where the power is produced, and just skip the years of trying to get permitting for pipelines and transmission lines." Translation: The next generation of data centers won't be built where the land is cheap. They'll be built where the power is available. Three implications for dirt investors: 1. Nuclear Proximity = New Premium: Amazon already signed deals with Dominion Energy near the North Anna nuclear power station in Virginia and expanded partnerships with Talen Energy at the Susquehanna nuclear plant. Sites within transmission distance of existing nuclear facilities just became exponentially more valuable. 2. Warehouse Conversions Accelerate: If Prologis is eyeing their 6,000 buildings for data center conversion, every industrial site with surplus power capacity needs re-evaluation. What looks like a struggling warehouse today might be a data center tomorrow. 3. Grid Capacity > Geographic Desirability: Constellation Energy CEO Joseph Dominguez noted that data economy customers "want to run their systems 24-7" with "firm pricing so that they know the price for energy for 20 years". Long-term power contracts are becoming the new land entitlements. But here's what nobody's talking about: The same power constraints driving this opportunity are also creating massive project risks. According to a recent CoStar analysis, data centers will account for up to 60% of total power load growth through 2030. But there's a timing mismatch: data centers take 2-3 years to build, while power system upgrades take 8 years. That gap is forcing developers to either wait or find sites with existing capacity. 
The Community Resistance Factor: Data Center Watch estimates $64 billion in data center projects were blocked or delayed over a recent two-year period. There are now 142 activist groups across 24 states organizing against data center development. Northern Virginia alone, the nation's largest data center market, has 42 activist groups fighting projects. Reasons cited: water consumption, higher utility bills, noise, decreased property values, loss of open space. Translation for land investors: Sites with existing power capacity + community support just became exponentially more valuable than sites with just land and zoning. The power infrastructure thesis isn't just about finding available capacity. It's about finding that capacity in counties that actually want data centers. Not every market will roll out the welcome mat. Are you evaluating community sentiment alongside power infrastructure access?
-
Thailand plans dozens of data centers. Locals ask: where will the water come from? Thailand’s eastern seaboard is becoming a focal point for the global expansion of data centers, reports Gerry Flynn. Developers are planning dozens of facilities in Chonburi and neighboring Rayong province as the country seeks to position itself as a regional hub for artificial intelligence infrastructure. Investment has accelerated rapidly. In 2025 alone, Thailand’s Board of Investment approved more than $23 billion in data-center projects. Many of the new facilities are concentrated in the Eastern Economic Corridor, a special economic zone southeast of Bangkok established to modernize Thailand’s industrial base. Petrochemicals, automobile assembly and electronics manufacturing already dominate the region. Data centers represent a different type of industry. Their physical footprint is modest compared with factories, but their demand for electricity and water can be substantial. One example is a hyperscale facility known as QHI01, now under construction in Chonburi province. Developers say the project will draw about 3.3 million cubic meters of water each year to cool computer processors. That volume is roughly equivalent to the annual water consumption of tens of thousands of residents. Contractors working on related infrastructure have suggested the facility’s eventual demand could be higher. Water availability is already a concern in the corridor. Reservoir levels have fluctuated in recent years, and waterways have long absorbed wastewater from surrounding industrial estates. Local activists say little information has been released about how much water new data centers will use or how wastewater from cooling systems will be treated. Many developers declined to answer questions about environmental assessments or resource consumption. Officials maintain that existing infrastructure can handle additional demand. 
Provincial water authorities note that treatment plants in parts of the corridor still operate below capacity. Industry groups emphasize the economic benefits of the sector, including investment and high-skilled jobs. The rapid expansion nevertheless raises broader questions about how resource-intensive digital infrastructure will fit into regions already shaped by decades of industrial development. Data centers may occupy less land than traditional factories, but the scale of their energy and water needs suggests they could become a significant new pressure on local systems. ⚡ The investigation: https://lnkd.in/gZ7w_8PK
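The "tens of thousands of residents" comparison for QHI01's reported 3.3 million cubic meters a year is easy to sanity-check. The per-capita figure below is an assumption (roughly 200 litres per person per day is a common ballpark for urban domestic use); the article gives only the facility total:

```python
# Rough check: how many residents' annual water use equals the
# facility's reported cooling draw? Per-capita rate is assumed.
annual_draw_m3 = 3_300_000          # QHI01's reported annual demand
litres_per_person_per_day = 200     # assumed urban domestic use

annual_litres = annual_draw_m3 * 1_000          # 1 m^3 = 1,000 L
residents_equivalent = annual_litres / (litres_per_person_per_day * 365)
print(f"~{residents_equivalent:,.0f} residents' annual water use")
```

Under these assumptions that works out to roughly 45,000 people, consistent with the article's "tens of thousands"; a lower per-capita figure would push the equivalent even higher.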
-
“The cloud is just someone else’s computer… sitting on someone else’s land, drinking someone else’s water.” Google’s decision to withdraw its $2 billion data centre project from Indianapolis stayed with me. Not because projects get cancelled, but because of what it revealed. Digital convenience has a physical footprint. The cloud may feel weightless. Its infrastructure is anything but. Local reporting pointed to environmental concerns from water usage, electricity demand, & community pushback. Even one of the world’s most efficient technology companies could not make the economic, environmental, & social math add up. I am not anti-data centre. I am thinking aloud about the scale, limits, & trade-offs we gloss over when we talk about “digital” growth. Take water. Data centres need intensive cooling. Water-cooled systems are more energy efficient than air cooling, but the numbers are sobering. A single hyperscale facility can consume three to five million gallons a day, roughly what a small town uses. In drought-prone regions, this has already triggered conflict. The question sharpens quickly: scarce water for servers, or for citizens? Then there is energy. The IEA estimates global data centre electricity use could double by 2026, driven by AI workloads. A hyperscale facility can draw as much power as a large industrial plant. In India, where grids already juggle agricultural, industrial, & urban demand, this is not abstract. Add capacity without planning, & we risk instability or deeper dependence on coal. There is also heat. Data centres do not just consume energy; they expel it. In warmer geographies, this becomes a liability. Systems designed for “cool efficiency” often end up warming neighbourhoods. Land adds another layer. Data centres promise jobs but create few permanent ones relative to the land they occupy. Communities are questioning what they give up (farmland, housing, green space) in exchange for high-security campuses with limited spillover benefits.
India is one of the fastest growing data centre markets, fuelled by AI, fintech, gaming, & digital public infrastructure. These questions are urgent, not theoretical. Where will the water come from? Can we meet power demand sustainably? Will communities benefit meaningfully? This is not about slowing ambition. It is about aligning ambition with ecology. Google walking away feels less like a corporate decision & more like a signal. The digital world is hitting physical limits. Every message leaves a trace. The cloud is not magical. It is material. Sharing this as part of my thinking aloud series, questions, not conclusions. Where are we underestimating the real costs of “digital” growth? What trade offs are we still unwilling to name? #Cloud #Data #Technology #Innovation #Ai
-
📣New NCSC Guidance on PQC Migration Timelines The UK’s National Cyber Security Centre (NCSC) just released a new publication to help organizations prepare for the shift to post-quantum cryptography (PQC). This 16-page paper outlines the key steps for migration, how different sectors may need to adapt, and timelines for navigating this multi-year transition. “Timelines for Migration to Post-Quantum Cryptography” breaks down key activities and recommended milestones to guide long-term planning: 📌 By 2028 • Complete discovery of crypto dependencies • Create your initial PQC migration plan 📌 By 2031 • Migrate highest-priority systems • Refine your roadmap based on ecosystem maturity 📌 By 2035 • Complete full PQC migration across your estate “It will not be possible to avoid PQC migration, so preparing and planning now will mean you can migrate securely and in an orderly fashion.” 💬 Link to the guidance in the comments. #technology #innovation #informationsecurity
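The 2028 "discovery of crypto dependencies" milestone ultimately boils down to building an inventory and classifying what you find. The sketch below is a toy illustration of that triage step; the inventory entries and category sets are assumptions, and a real discovery exercise would scan certificates, TLS configurations, code, and vendor documentation rather than a hand-written list:

```python
# Toy triage of a crypto inventory for PQC migration planning.
QUANTUM_VULNERABLE = {"RSA", "ECDSA", "ECDH", "DSA", "DH"}  # broken by Shor's algorithm
PQC_READY = {"ML-KEM", "ML-DSA", "SLH-DSA"}                 # NIST-standardised PQC families

def classify(algorithm: str) -> str:
    """Bucket an algorithm label into migrate / ok / review."""
    if any(algorithm.startswith(v) for v in QUANTUM_VULNERABLE):
        return "migrate"
    if any(algorithm.startswith(p) for p in PQC_READY):
        return "ok"
    # e.g. symmetric ciphers and hashes, which mainly need larger key/output sizes
    return "review"

inventory = ["RSA-2048", "ECDSA-P256", "ML-KEM-768", "AES-128"]
print({alg: classify(alg) for alg in inventory})
```

Even a crude classifier like this makes the prioritisation conversation concrete: "migrate" items feed the 2031 highest-priority list, "review" items need a sizing decision rather than a replacement.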