Data centres: presumed sovereignty isn't true data sovereignty
Data centres need to be ready with scalable, energy-efficient solutions that support enterprises as they control and manage an ever-increasing flow of data.
Data centres have evolved, but AI is evolving faster
The first data centre was built in 1945 to house large mainframes. But it wasn’t until computers got smaller and PCs began to connect to servers in the early 1990s that the client-server model emerged. That connection – that server – was ground zero for the first modern data centre. The dot-com boom created the need for more data centre facilities and, in 1999, VMware brought virtualisation to x86 computing. By 2012, 38% of companies were using cloud services.
Now, twelve years later, artificial intelligence (AI) is driving the next generation of business growth. Organisations are expected to spend more than $500 billion on AI solutions in 2027 as they look for new ways to increase profits, improve efficiencies and attract new customers.
AI’s growth and advancement are outpacing data centres’ ability to keep up. Enterprises use only 32% of the data available to them, and only 2% of the data generated is ever saved; most organisations then spread that saved data across five or more platforms, which makes it even harder to locate their data and control access to it.
The fact is that data centres can adapt quickly to changing technologies like 5G, GPUs and cloud as the pace of change accelerates, but they are constrained by their mechanical and electrical suppliers in how efficient their components and systems can become. A backup generator, for instance, is still a diesel engine – far more efficient than 20 years ago, but a diesel engine nonetheless – and UPS systems still require batteries.
A presumption of sovereignty can cause unforeseen problems
Data storage was initially on-premises, with servers and network equipment kept on-site because IT directors assumed this was the safest avenue – despite the multitude of risks. In reality, these facilities frequently lacked sufficient floor loading and backup generators, as well as adequate security, fire suppression and cooling capacity. Even worse, data backups were typically left in the same place as the data centre, defeating their purpose, or taken home, creating a farrago of security concerns.
This has changed with the advent of cloud computing, particularly with the growth of hyperscalers, which build their own data centres and commonly include a mix of third-party storage. But hyperscalers can’t provide assurances of where their customers’ data resides, eliminating any hope of achieving data sovereignty. Without data sovereignty, companies can’t stay in compliance with the data privacy laws and regulations of the countries in which they operate.
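Enterprises don’t have to take residency claims entirely on faith, though: where data sits in provider-managed storage can at least be audited. Below is a minimal sketch using the AWS SDK for Python (boto3); the bucket names and the approved-region policy are hypothetical, and other providers expose similar location metadata.

```python
# A minimal data-residency audit sketch, assuming AWS S3 via boto3.
# Bucket names and the approved-region list are illustrative placeholders.
import boto3

APPROVED_REGIONS = {"eu-west-2"}  # e.g. a UK-only sovereignty policy
BUCKETS = ["customer-records", "customer-records-backup"]

s3 = boto3.client("s3")

for bucket in BUCKETS:
    # S3 reports None for the us-east-1 region, a region string otherwise
    region = s3.get_bucket_location(Bucket=bucket)["LocationConstraint"] or "us-east-1"
    status = "OK" if region in APPROVED_REGIONS else "VIOLATION"
    print(f"{bucket}: {region} -> {status}")
```

Note the limitation: a check like this shows where a bucket is provisioned, not where the provider replicates or backs up its contents behind the scenes – which is exactly the assurance gap described above.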
This problem is compounded by new laws and regulations related to AI. They’re not here yet, but they’re coming, and they will have a significant impact on the way AI is developed, updated and ultimately used to propel business outcomes.
Storing data
Data sovereignty – where data physically resides – has become an important topic for governments and businesses, because key government data could be stored in a compliant country but backed up in another country where it is not secure. This has led many governments and regulated businesses to bring back data that was held offshore, restoring confidence about who has access to it.
Confidential data is stored within the data centre facility on customer servers, and the data centre has to ensure that the physical and logical security, policies, processes and procedures are in place to protect the server environment. But it is still the customer’s responsibility to ensure they have adequate systems, policies, processes and procedures in place to prevent data breaches.
The tipping point
If an enterprise is going to comply with local, national or international data privacy laws and regulations as its business grows, it needs to store its information in a hyper-sovereign environment that can scale as its data volume increases. And these next-generation data centres must adjust now as they prepare for a future of massive data volumes:
- Data centres must be able to confirm where customer data is stored, transported and backed up, giving customers confidence that their data is secure and not subject to external laws.
- Data centres will need to develop and adopt international standards for energy efficiency and sustainability, and use high-density storage solutions to improve both speed and efficiency.
- According to Goldman Sachs Research, AI will drive a 160% increase in the amount of energy needed to power data centres by 2030. This means companies will need to establish net-zero targets within the next 20 years, strive for an overall PUE (power usage effectiveness) of less than 1.1 – see the worked example after this list – and consider alternative, sustainable power supplies to reduce their growing carbon footprint.
- Data centres need to implement data deduplication, compression and other data-reduction technologies to maximise storage capacity (a minimal sketch also follows this list), as well as research and develop new technologies and methodologies to improve their operations.
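To make the PUE target concrete: PUE is total facility energy divided by the energy consumed by the IT equipment alone, so a PUE of 1.1 means just 10% overhead for cooling, power distribution and everything else. The figures below are purely illustrative.

```python
# PUE (power usage effectiveness) = total facility energy / IT equipment energy.
# The numbers below are illustrative, not measurements from any real facility.
def pue(total_facility_kwh: float, it_equipment_kwh: float) -> float:
    return total_facility_kwh / it_equipment_kwh

print(f"{pue(10_900, 10_000):.2f}")  # 1.09 -> inside a sub-1.1 target
print(f"{pue(16_000, 10_000):.2f}")  # 1.60 -> common in older facilities
```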
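And to illustrate the data-reduction point, here is a minimal sketch of content-addressed deduplication combined with compression: each fixed-size chunk is hashed, stored only once, and compressed. It is a simplified illustration of the general technique, not any particular vendor’s implementation.

```python
# A minimal sketch of block-level deduplication plus compression.
import hashlib
import zlib

CHUNK_SIZE = 4096  # bytes; real systems often use variable-size chunking

def dedupe_and_compress(data: bytes) -> tuple[dict[str, bytes], list[str]]:
    store: dict[str, bytes] = {}   # chunk hash -> compressed chunk
    recipe: list[str] = []         # ordered hashes needed to reassemble the data
    for i in range(0, len(data), CHUNK_SIZE):
        chunk = data[i:i + CHUNK_SIZE]
        digest = hashlib.sha256(chunk).hexdigest()
        if digest not in store:    # store each unique chunk exactly once
            store[digest] = zlib.compress(chunk)
        recipe.append(digest)
    return store, recipe

def restore(store: dict[str, bytes], recipe: list[str]) -> bytes:
    return b"".join(zlib.decompress(store[h]) for h in recipe)

# Highly redundant input: 100 identical chunks reduce to one stored chunk
data = b"A" * (CHUNK_SIZE * 100)
store, recipe = dedupe_and_compress(data)
assert restore(store, recipe) == data
print(f"{len(data)} bytes in, {sum(len(c) for c in store.values())} bytes stored")
```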
These steps are essential as data volumes increase and as AI advances in new and unexpected ways. The coming years will mark a turning point for businesses of all sizes as they fight for market share with a number of new technologies at their side.
Data will be at the heart of the entire digital infrastructure ecosystem, informing decisions and training algorithms that will impact virtually every product and service in existence. It is incumbent upon data centres to take the necessary actions now to ensure that enterprises are not hindered in their AI and data-driven efforts.