Had the pleasure of attending an Azure user group tonight focused on the up-and-coming container technology. This is pretty foreign stuff to me, but it was a good presentation, so I was able to follow along and learn a few things.

More and more applications today are built for a world powered by the cloud. Developers are now required to build with agility, hyper-scale, and availability - not an easy task. On top of that, they need to make the app flexible and portable - also not an easy task. Hence containers were created.

So how does a container work? It's similar to a virtual machine. A key difference is that VMs contain their own guest OS, whereas containers share the host OS kernel. This architecture makes containers very lightweight. One drawback of the shared-OS model is security: if the host OS becomes compromised, the containers are at increased risk.
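One quick way to see the shared-kernel model for yourself - a sketch, assuming Docker is installed on a Linux host - is to compare the kernel version reported inside a container with the host's:

```shell
# Containers share the host OS kernel, so a container reports the
# host's kernel version; a VM would boot and report its own guest kernel.
uname -r                          # kernel version on the host
docker run --rm alpine uname -r   # same kernel version, seen from inside a container
```

Both commands print the same version string, which is exactly why containers start so fast - there is no second kernel to boot.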

How do containers make it easier? The presenter gave a nice demo to answer. He pulled a container from a registry, wrote a PowerShell script on the machine, created an image of the machine, and stored it in the registry. Then, within about five seconds, he started a new session, pulled the newly created image, and had the PowerShell script running. This illustrated nicely how containers translate into rapid deployment and fast iterations.
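The demo's workflow can be sketched with the Docker CLI - the image and registry names below are hypothetical placeholders, and I'm assuming a Docker-style registry like the one the presenter used:

```shell
# 1. Pull a base image from a registry (name is a placeholder)
docker pull mcr.microsoft.com/windows/servercore:ltsc2022

# 2. Run it and drop a PowerShell script inside the container
docker run --name demo mcr.microsoft.com/windows/servercore:ltsc2022 \
    powershell -Command "Set-Content C:\hello.ps1 'Write-Output Hello'"

# 3. Capture the modified container as a new image
docker commit demo myregistry.example.com/demo/hello:v1

# 4. Store the image back in the registry so any host can pull it
docker push myregistry.example.com/demo/hello:v1

# 5. From a new session: pull the image and run the script in seconds
docker run --rm myregistry.example.com/demo/hello:v1 \
    powershell -File C:\hello.ps1
```

The key point is step 4: once the image is in the registry, spinning up an identical environment anywhere is a single `docker run`, which is where the rapid deployment comes from.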

As the world moves toward faster development cycles and cloud-first technology, it's important for data pros to at least be aware of what's on the horizon.