To anyone watching the big picture it is clear: new data-driven applications are moving to, or starting life in, the cloud.
Given the maturity of cloud platforms and services, and the increasing automation and resilience of cloud infrastructure, the cloud is an ideal environment for running modern data pipelines. For many companies and institutions, a cloud-first strategy is becoming a cloud-only strategy.
Cloud-native online businesses such as Netflix, as well as mainstream enterprises such as Capital One, have built multibillion-dollar valuations with almost no physical data centres of their own by taking this approach, and they are not the only ones.
In January this year, global market intelligence provider IDC raised its forecast for total spending on cloud IT infrastructure in 2018 to $65 billion, year-over-year growth of 37 per cent. It also reported that quarterly spending on public cloud IT infrastructure had more than doubled in the past two years, reaching $12 billion in the third quarter of 2018, up 56 per cent year-over-year.
All indications point to a massive shift of data deployments to the cloud, but unknowns around cost, visibility, and dependency risks have prevented the transition from happening more quickly.
IT teams want to remove the risks of migration, such as disruption to availability, lost data, and reduced visibility and control, by predicting what lies ahead. In effect, they smooth their path to the cloud once they have run the numbers and confirmed that running data services in the cloud makes financial and operational sense for their business use case. Some companies have found they can make those numbers look even better: at the heart of the solution is having the intelligence and visibility to maximise the benefits of the enterprise cloud investment, while reducing the friction of migration and minimising resource usage costs.
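As a rough illustration of what "running the numbers" can look like, the sketch below compares a steady-state on-premises budget with an elastic cloud estimate. Every figure is a placeholder assumption, and a real business case would also fold in migration, egress, and staffing costs.

```python
# Placeholder annual figures (USD), purely to show the shape of the comparison.
on_prem = {
    "hardware_amortisation": 180_000,
    "datacentre_and_power": 60_000,
    "ops_staff": 150_000,
}

cloud_hourly_rate = 14.0           # assumed blended cost of the cluster per running hour
hours_needed_per_year = 24 * 160   # elastic usage: clusters only run when pipelines do
cloud_managed_services = 90_000    # assumed managed storage, monitoring, and support

on_prem_total = sum(on_prem.values())
cloud_total = cloud_hourly_rate * hours_needed_per_year + cloud_managed_services

print(f"on-prem: ${on_prem_total:,.0f}  cloud estimate: ${cloud_total:,.0f}")
# The elastic estimate only holds if real workload patterns match the assumed hours.
```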
Plan with predictive analytics
It’s important to ‘look before you leap’ into the cloud, and AI-driven insights help teams make the right choices for a migration. Technologies born in the cloud era are designed to provide the data-driven intelligence and recommendations needed to optimise compute, memory, and storage resources. These are the tools DevOps and DataOps teams should be selling into the wider business to make the transition smooth and cost-effective.
Such tools help the IT team identify which applications are the best candidates for migration, and can provide detailed dependency maps so that all stakeholders understand the resource requirements before the migration kicks off.
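As a simple illustration of the dependency mapping such tools automate, the sketch below uses invented application names and a hand-written `depends_on` inventory (standing in for automated discovery) to rank applications by how loosely coupled they are, so the easiest first moves surface at the top.

```python
from collections import defaultdict

# Hypothetical inventory: each application and the services it depends on.
# In practice this would come from automated discovery, not a hand-written dict.
apps = {
    "reporting-etl":   {"depends_on": ["warehouse", "auth-service"]},
    "recommendations": {"depends_on": ["clickstream", "feature-store", "auth-service"]},
    "log-archiver":    {"depends_on": []},
    "billing-batch":   {"depends_on": ["warehouse", "reporting-etl"]},
}

def rank_migration_candidates(apps):
    """Rank applications by total coupling (fewest dependencies first)."""
    # Count how many other apps depend on each application.
    dependants = defaultdict(int)
    for meta in apps.values():
        for dep in meta["depends_on"]:
            dependants[dep] += 1

    scores = []
    for name, meta in apps.items():
        coupling = len(meta["depends_on"]) + dependants[name]
        scores.append((coupling, name))
    return [name for coupling, name in sorted(scores)]

print(rank_migration_candidates(apps))
# Loosely coupled apps such as "log-archiver" surface first as easier early moves.
```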
It’s a real force multiplier when the IT team can see, for example, the seasonality and the ideal time of day to get the best prices on cloud services, and can combine that with spot instances, autoscaling, and other tactics that make the most of their resources, be it time, money, or skills. On the cost side, the team should look to enable automatic application speed-up, optimised resource usage, and intelligent data tiering as part of the migration tool chest.
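A minimal sketch of that kind of time-of-day analysis follows. It assumes hourly price samples are already being exported (the numbers here are invented) and simply averages them per hour of day to find the cheapest recurring window for flexible batch workloads; real tooling would draw on far richer telemetry.

```python
from statistics import mean

# Invented samples: (hour_of_day, observed_spot_price_per_hour_usd) over several days.
samples = [
    (2, 0.11), (2, 0.12), (3, 0.10), (3, 0.11),
    (9, 0.29), (9, 0.31), (14, 0.34), (14, 0.33),
    (20, 0.22), (20, 0.21), (23, 0.13), (23, 0.14),
]

def cheapest_hours(samples, top_n=3):
    """Average the observed price per hour of day and return the cheapest hours."""
    by_hour = {}
    for hour, price in samples:
        by_hour.setdefault(hour, []).append(price)
    averaged = {hour: mean(prices) for hour, prices in by_hour.items()}
    return sorted(averaged, key=averaged.get)[:top_n]

print(cheapest_hours(samples))  # e.g. [3, 2, 23]: candidate windows for flexible batch jobs
```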
Effective migration relies on validating the decision by baselining performance before the move, comparing how applications perform after the transition, and optimising them for the new cloud runtime environment. Teams under pressure, particularly those who maintain essential services for enterprises with a large consumer customer base, will look to AI for guidance on improving the performance, scalability, and reliability of their applications once they’re in the cloud.
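The before-and-after comparison at the heart of that baselining can be as simple as the sketch below. The runtimes and the 10 per cent tolerance are assumptions for illustration; the point is to flag workloads that regress after the move so they can be tuned for the new environment.

```python
# Hypothetical per-application average runtimes in minutes, before and after migration.
baseline = {"reporting-etl": 42.0, "billing-batch": 18.5, "recommendations": 65.0}
post_migration = {"reporting-etl": 39.0, "billing-batch": 27.0, "recommendations": 66.0}

def flag_regressions(before, after, tolerance=0.10):
    """Return apps whose runtime grew by more than `tolerance` after migration."""
    flagged = {}
    for app, old in before.items():
        new = after.get(app)
        if new is None:
            continue  # app not yet migrated or not measured
        change = (new - old) / old
        if change > tolerance:
            flagged[app] = round(change * 100, 1)  # percentage slowdown
    return flagged

print(flag_regressions(baseline, post_migration))
# {'billing-batch': 45.9}: a candidate for tuning in the new cloud runtime
```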
Ensure the plan continues post-migration
The hard work does not stop when the migration is successfully completed. DevOps and DataOps teams will immediately focus on securing the insight needed to demonstrate, and increase, the return on investment.
However big and complicated the infrastructure, these teams will want hard facts, clear metrics, and informed forecasting to meet enterprise needs. This is where AI stays relevant, providing fresh intelligence and recommendations that keep performance consistent and cost-effective over the long haul and maintain the ongoing health of enterprise service delivery.
AI can digest data, learn from it, and identify which users, applications, and projects are having the biggest impact, with chargeback and showback capabilities. It’s much easier for the team to break down costs by CPU, memory, I/O, and storage, and to get recommendations for potential savings, when they don’t have to sift through the whole range of cloud metrics to find the key ones and put them in the wider service context. AI removes the tedium and massively increases the speed at which the DevOps team can detect and mitigate inefficient use of resources by application, including CPU, memory, containers, caching, and nodes.
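As a rough illustration of showback, the sketch below rolls invented per-job usage records up into an itemised cost per project across CPU, memory, and storage. The unit rates are placeholders; real chargeback would be driven by the provider’s actual billing data.

```python
from collections import defaultdict

# Placeholder unit rates (USD); real chargeback would come from the cloud bill.
RATES = {"cpu_hours": 0.05, "gb_ram_hours": 0.01, "gb_storage_days": 0.002}

# Invented usage records: one per job run.
usage = [
    {"project": "marketing", "cpu_hours": 120, "gb_ram_hours": 480, "gb_storage_days": 2000},
    {"project": "finance",   "cpu_hours": 40,  "gb_ram_hours": 160, "gb_storage_days": 900},
    {"project": "marketing", "cpu_hours": 60,  "gb_ram_hours": 240, "gb_storage_days": 500},
]

def showback(usage, rates):
    """Aggregate cost per project, itemised by resource type."""
    totals = defaultdict(lambda: defaultdict(float))
    for record in usage:
        for resource, rate in rates.items():
            totals[record["project"]][resource] += record[resource] * rate
    return {project: dict(items) for project, items in totals.items()}

for project, items in showback(usage, RATES).items():
    print(project, items, "total:", round(sum(items.values()), 2))
```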
About the Author
Kunal Agarwal, CEO, Co-Founder, Unravel Data. Unravel radically simplifies the way businesses understand and optimize the performance of their modern data applications – and the complex pipelines that power those applications. Providing a unified view across the entire stack, Unravel’s data operations platform leverages AI, machine learning, and advanced analytics to offer actionable recommendations and automation for tuning, troubleshooting, and improving performance.
