Public cloud from a DevOps point of view

or: how efficiently can you build and deploy your solutions

Why does efficiency matter?

There are several factors that decide which public cloud vendor you will use on your (next) project, such as:

  • Functionality
    Part of choosing a particular public cloud vendor is of course the services they offer: if you need massive CPU/GPU clusters for your machine learning project, make sure they are available in your region(s) right now

  • Price
    Sometimes the only factor that matters

  • Size

  • Efficiency
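
    On the functionality point, availability can be verified up front instead of discovered mid-project. A minimal sketch using the Azure CLI (the region `westeurope` and the `Standard_NC` GPU size family are placeholder examples; assumes `az` is installed and you are logged in, and skips the live check otherwise):

```shell
# Check whether a VM size family (here: NC-series GPU machines) is
# actually offered in a given region before committing to it.
# "westeurope" and "Standard_NC" are placeholder examples.
CHECKED=no
if command -v az >/dev/null 2>&1 && az account show >/dev/null 2>&1; then
  # Lists the SKUs of that size family available in the region
  az vm list-skus --location westeurope --size Standard_NC --output table
else
  echo "az CLI not available or not logged in; skipping live check"
fi
CHECKED=yes
```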

In my experience it comes down to the efficiency of building and deploying the solution: the vendor can be hip & cool or cheap, but if it takes weeks longer to deliver a working solution and leaves you with a demotivated team, tired of fighting battles and bugs that do not add value (you have been there), then this project, the next one, or your whole cloud journey is not in good shape.

In the next couple of weeks I'll review the three biggest (most popular) public cloud vendors from the efficiency perspective: what can potentially slow your team down (a Bump), and what will give you a Boost.

Starting with Azure this week, in no particular order.

Azure

Boost

  • AD integration and its concepts transfer nicely

Bump

  • renaming resources is not an option: resources are referred to by name, not by ID. This implies that you need to recreate resources if you want to rename something
  • the PaaS and IaaS services are not really integrated: PaaS services must be addressed by public IPs
  • VM names are limited to 15 characters and must be unique; an inheritance from the NetBIOS days
  • you need to create and destroy a VM and its default components (disk, NIC) separately. There is no such thing as deleting a complete VM: the disk and NIC happily remain
  • the official documentation is not in sync with reality. This goes for ARM, the CLI, and extensions: documented features are usually for upcoming versions, not for what you can use right now in your region
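
The orphaned disk and NIC mentioned above have to be cleaned up by hand. A sketch with the Azure CLI (`my-rg`, `my-vm`, and the derived disk/NIC names are placeholders; the auto-generated names in your subscription will differ, so look them up first with `az disk list` / `az network nic list`):

```shell
# Deleting a VM removes only the compute resource; the OS disk and the
# NIC it was created with remain behind and must be deleted separately.
# my-rg / my-vm and the derived disk/NIC names are placeholders.
RG=my-rg
VM=my-vm

if command -v az >/dev/null 2>&1 && az account show >/dev/null 2>&1; then
  az vm delete --resource-group "$RG" --name "$VM" --yes

  # These survive the VM deletion; an unattached managed disk
  # even keeps costing money until it is removed:
  az disk delete --resource-group "$RG" --name "${VM}-osdisk" --yes
  az network nic delete --resource-group "$RG" --name "${VM}-nic"
else
  echo "az CLI not available or not logged in; commands shown as a sketch only"
fi
```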

AWS

GCP