How to navigate your cloud migration strategy
In today’s cloud-ubiquitous world, it’s no surprise that when thinking about a move to the cloud, a migration may seem like a straightforward affair. After all, everybody’s doing it, so it can’t be that difficult, can it?
The answer is yes and no. Or perhaps better put: difficult? Not necessarily; detailed? Extremely. And that’s not a bad thing – paying attention to the details from the start will save you headaches down the road when you need your cloud to adapt to your evolving needs.
If you embark on your cloud journey with one objective in mind – to build and maintain a well-architected cloud – your cloud migration will be a successful and fruitful exercise, giving you an environment that can progress with your business and evolve with its changing needs. If you don’t, however, you risk an infrastructure that fails to meet your requirements and may open you up to compliance questions or security risks. It will also likely cost you a great deal more in the long run, as you find yourself having to roll back previous work or, even worse, bolt on features that could have been built in from the start.
The key pillars of a well-architected cloud are simple and hold true for every foundational cloud build. A clear focus on operational excellence, security, reliability, performance, cost, and potential future requirements will enable you to achieve success and repeat it for future cloud deployments; some of these principles are highlighted on the AWS Well-Architected site.
An excellent starting point is an in-depth interview between the customer and the service provider (or whoever is designing your cloud architecture) to ascertain specifically what is required both now and in the future. These requirements should be grounded in business outcomes, with technology decisions then shaping the environment. This will help establish the most basic necessities of a well-architected cloud: the use case for, and the outcome desired from, the cloud environment.
And it doesn’t hurt to think even bigger and understand what these requirements will be in the future. Don’t underestimate how important it is to consider future needs during a cloud build. One of the biggest pitfalls during the early stages of a cloud architecture setup is failing to factor this in. It can be very difficult to go back and rebuild part of the foundation. Not impossible, but there is rework and rearchitecture that will have to be done, both of which could, and should, be avoided. Security is a perfect example. It costs nothing extra to encrypt everything, but if you have to go back and retrofit encryption after the fact, you are accruing technical debt. To avoid this, we recommend encrypting everything. Create that security layer right away: protect data at rest, protect it in transit. It costs you nothing to do, it’s a best practice, and it’s something you should do out of the gate.
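To make the “encrypt from day one” point concrete, here is a minimal sketch in Terraform, assuming an AWS S3 bucket as the storage layer. The bucket and resource names are illustrative, not from this article; the same principle applies to any data store.

```hcl
# Minimal sketch: declare default encryption at rest up front,
# so it never has to be retrofitted later.
# Bucket name "example-company-data" is hypothetical.

resource "aws_s3_bucket" "data" {
  bucket = "example-company-data"
}

# Apply server-side encryption by default so every object written
# to the bucket is encrypted at rest, with no per-upload effort.
resource "aws_s3_bucket_server_side_encryption_configuration" "data" {
  bucket = aws_s3_bucket.data.id

  rule {
    apply_server_side_encryption_by_default {
      sse_algorithm = "aws:kms"
    }
  }
}
```

Because this lives in the foundation template from the start, every environment deployed from it inherits encryption at rest automatically.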
Another common mistake is building out an initial cloud architecture without factoring in regulatory issues that may arise down the road. If this happens and you find yourself having to go back and retrofit, or worse, tear down your initial architecture and redeploy a new one, you will be faced with wasted time and unnecessary cost.
Topping all of this is the biggest pitfall: customers not treating cloud as development code. What does this mean? Within any of the hyperscale cloud providers, you have a console, a web-based GUI where you click to select and create whatever service you want to deploy. But that process is not repeatable, and you can experience technical drift, meaning everything you deploy may be different because the person clicking around the console didn’t do it the same way every time.
We (and many others in the cloud industry) prescribe infrastructure as code: treating infrastructure, your cloud foundation, as part of a code repository with QA and version control. That allows you to deploy many times, in the same exact pattern as before, which makes you more secure and more reliable, and creates better efficiency. Think of it as a foundational template that allows you to deploy your infrastructure no matter what, in any region and potentially on any cloud.
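As a sketch of what such a foundational template can look like, the Terraform fragment below parameterizes the target region so the same pattern deploys anywhere. The variable name, CIDR range, and tags are hypothetical examples, not prescribed by the article.

```hcl
# Hypothetical infrastructure-as-code sketch: one template,
# deployable to any region in the same exact pattern.

variable "region" {
  description = "Target AWS region; change it to redeploy the same pattern elsewhere."
  type        = string
  default     = "us-east-1"
}

provider "aws" {
  region = var.region
}

# The foundation lives in version control alongside application code,
# so every deployment is reviewable and repeatable.
resource "aws_vpc" "foundation" {
  cidr_block = "10.0.0.0/16"

  tags = {
    Environment = "production"
    ManagedBy   = "terraform"
  }
}
```

Because the template is plain text in a repository, it goes through the same QA, review, and version control as any other code, which is exactly what makes the deployments repeatable.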
Of note, automation such as this allows you to replicate your topology, or your infrastructure, wherever it may be, and it avoids the expense of manual effort. Whenever you are doing manual work that could be automated, you’re doing it wrong. Automation creates better auditability and enables you to revert if there’s a problem. For example, if you make a change in your production environment and it’s in a code base, you can check it, pull it back, and redeploy very quickly, without having to figure out where you made that change in the console or what you did. It’s much more effective to do things programmatically, through the command line, through development.
One other crucial factor to consider when building a well-architected cloud – and the age-old rule of any project – is preparation. Take all the requirements, validate them, create high-level diagrams and architectures of what you’re looking to do, and then turn those high-level diagrams into code. This creates operational readiness by producing templates that can then be deployed and iterated on.
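As an illustration of turning a high-level diagram into code, suppose the diagram shows a familiar three-part layout: network, web tier, database. In Terraform that might be sketched as composable modules; the module paths, input names, and outputs below are entirely hypothetical placeholders for whatever your own templates define.

```hcl
# Hypothetical sketch: a high-level diagram
# (network -> web tier -> database) expressed as modules
# that can be deployed and iterated on.

module "network" {
  source     = "./modules/network" # hypothetical local module path
  cidr_block = "10.0.0.0/16"
}

module "web_tier" {
  source    = "./modules/web"
  subnet_id = module.network.public_subnet_id # assumed module output
}

module "database" {
  source    = "./modules/database"
  subnet_id = module.network.private_subnet_id # assumed module output
}
```

Each box in the diagram becomes a module that can be validated, versioned, and redeployed independently, which is the operational readiness the preparation step is aiming for.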
And lastly, don’t forget that there is no substitute for experience. Work with partners – the hyperscale cloud providers, managed service providers, cloud service providers – who will validate your architecture every single time, because they want to make sure you’re successful. Bring your planned deployment to someone with experience to ensure you’re on the right path. Better to find out now than when you are so far down the line that you find yourself unable to overcome one of the many potential pitfalls of building a well-architected cloud.
This article was contributed by Chris Resch, EVP of Cloud Solutions & Sales at 2nd Watch.