My code can be found here on GitHub
Here's a quick walkthrough of my final solution:
My new role at Microsoft is DevOps Architect on a brand new team whose goal is to drive VSTS and ultimately Azure adoption, in this case by focusing on helping (large) customers apply effective DevOps practices. The idea is to build relationships with Microsoft's biggest customers and act in an advisory role to help them be successful in deploying to Azure and/or adopting VSTS. One of our first activities will be providing the Enterprise DevOps Accelerator benefit that was announced at the VS 2017 launch. This is a free 2-week engagement, provided by Microsoft or one of our partners, for customers that purchase 50+ VS Enterprise licenses. The goal is to help a customer migrate an application to Azure using DevOps practices and tools (i.e. VSTS). My team will be responsible for those engagements: determining what we can do to add the most value in 2 weeks, delivering some of them ourselves, working with partners to deliver others, and evolving the offering as we learn.
What I find most exciting is that it's a brand new team - with the broad goal of helping drive Azure/VSTS adoption via good DevOps - and I (and my team) will have a large amount of autonomy to decide exactly how we go about achieving that goal. On that note, my first order of business will be building a team. Specifically, I'm looking for a world-class DevOps expert to work alongside me to build the team and help determine our direction and strategy. If you or somebody you know is a world-class DevOps expert, please get in touch with me and let's chat! For now the best way to reach me is dylansmith256@hotmail.com
PS - It will also be nice to no longer be a billable consultant, always having to sell and convince customers to give us more business.
We came up with an idea: what if we could synchronize the GitHub repository to a VSTS repository? Then VSTS could see the commits and create the work item links. So that's what we did, and this blog post explains how you can do the same thing.
First things first, you need to create a VSTS build that points to your GitHub repo. That's easy enough, as support for this is built right into VSTS builds.
Alright, now we have a build that triggers on every GitHub commit/push and will download the GitHub repo to the build agent. The next step is to make it push any and all changes into the VSTS repo. To do this I shamelessly copied a snippet of bash from Stack Overflow. There is a built-in build task to run a bash script - Shell Script - but that requires you to point it to a script inside your repo, which is more work than I wanted. I just want to write the few lines of bash directly in the build.
Fortunately there is a VSTS extension in the marketplace that lets us do exactly this: https://marketplace.visualstudio.com/items?itemName=tsuyoshiushio.shell-exec
Once that extension is installed into my VSTS account, I can add it as a task to my build and tell it the bash script I want it to run:
Those 3 lines of bash using the git command line are all it takes. The one tricky bit to figure out was how to make sure it synchronized all branches - even newly created ones - from GitHub into VSTS. The trickery in lines 1 and 3 does that.
```shell
git branch -r | grep -v '\->' | while read remote; do git branch --track "${remote#origin/}" "$remote"; done
```
That script is doing a few things:
The stuff with $SYSTEM_ACCESSTOKEN in line 3 accesses an environment variable that contains an OAuth token that can be used to communicate with VSTS. This works because, in a previous step, we set the option in the VSTS build to make the OAuth token available to scripts.
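Only line 1 of the script survives above, but the overall shape can be reproduced end-to-end with plain local repositories, which makes it easy to see what each line contributes. Everything below is a toy stand-in: the repo names and paths are made up, and the middle `git fetch` is my guess at the unshown line 2.

```shell
set -e
# "github" is the source repo, "vsts" is the mirror target, and "agent"
# plays the role of the build agent's working copy.
tmp=$(mktemp -d)
cd "$tmp"

# Source repo with a default branch plus a feature branch.
git init -q github
git -C github -c user.email=t@t -c user.name=t commit -q --allow-empty -m init
git -C github branch feature/topic

# Empty bare repo standing in for the VSTS repo.
git init -q --bare vsts

# The build agent checks out the source...
git clone -q github agent
cd agent
# ...line 1: create a local tracking branch for every remote branch
# (skipping the symbolic origin/HEAD entry and the branch already checked out)...
git branch -r | grep -v '\->' | while read remote; do
  git branch --track "${remote#origin/}" "$remote" 2>/dev/null || true
done
# ...stand-in for line 2: make sure everything is up to date...
git fetch -q --all
# ...line 3: push all branches to the mirror (the real script built an
# authenticated VSTS URL from $SYSTEM_ACCESSTOKEN here).
git push -q ../vsts --all

git -C ../vsts branch   # feature/topic now exists in the mirror
```

Because the loop in line 1 turns every remote branch into a local branch, `git push --all` carries newly created branches across too, which is the behaviour the original script needed.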
There's one thing left to do to make this all work - we need to grant the build service account access to the VSTS repo. We can do this in the repo security screen like so:
Now you can push some commits to GitHub and/or create a new branch, and the VSTS build should automatically trigger and sync the VSTS repo almost immediately. If everything is working you should see build output that looks something like this:
If you subscribe to the lean software development way of thinking, you think about a pipeline of value that results in working software. For example, this might be: Analysis -> Dev -> Test -> Deploy -> Monitor
As with any pipeline, there is likely a bottleneck somewhere that restricts the flow of value. Lean is all about identifying and attacking these bottlenecks. Ten years ago - before Agile - the bottleneck was probably Analysis, or maybe Test. As Agile development became mainstream over the last decade, it did a pretty good job of attacking those bottlenecks: analysis and test became more just-in-time, spread out over the course of a project, and embedded in the regular dev workflows. They are no longer the bottleneck.
A new bottleneck has arisen at the boundary between the dev/test team and the operations team. These tend to be very separate teams, with a clear handoff between them. This results in friction, and a bottleneck in the flow of value.
So back to my definition of DevOps:
This bottleneck manifests itself in a number of ways; here are a few common ones:
These are common causes of friction within development organizations. I could go into details, but I expect just reading the points above you can identify with at least some of these concerns.
When I hear people talk about DevOps, I often hear 3 common approaches:
Although #2 is the approach I see most often, I believe all 3 approaches are perfectly valid ways to attack the problem.
If you want help adopting DevOps practices or technologies, my company Imaginet is always happy to help. Check out our DevOps offerings here: http://www.imaginet.com/devops-as-a-service/
We still can't do actual snapshots, but I've written some PowerShell that achieves the same goal. It will grab a copy of the VHD file (this acts as the snapshot); then, when we want to restore to the snapshot, we'll just swap out the VHDs. The trick here is that Azure won't let you swap out the VHD for an existing VM, so we need to actually destroy the VM, swap out the VHDs, then recreate the VM using the existing VHD.
Note: This is all done using ARM (Azure Resource Manager) style VMs.
If you want to try it out, here's a step-by-step guide:
First create a VM in Azure, and be sure to select Resource Manager as the deployment model:
After it's done processing we'll have a new Resource Group with 6 resources included:
One thing I like to do - that the wizard doesn't let you specify - is to assign a DNS name to the public IP that I'll use to connect to my VM. You can set this in the configuration page for the Public IP resource.
Lastly, I need to copy the Storage Account Access Key for use in the PowerShell.
Now all we need to do is take the PowerShell below and modify the values of the variables at the start to match the names you used when you created the VM/Resource Group.
The first time you run the script it will create the initial snapshot. All subsequent times you run the script it will reset it to that snapshot.
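That first-run/subsequent-run behaviour boils down to "if no snapshot copy exists, take one; otherwise copy it back over the disk". Here's that pattern in miniature with plain files instead of Azure blobs (all file names are made up; no Azure calls involved):

```shell
set -e
work=$(mktemp -d)
vhd="$work/myvm-osdisk.vhd"            # stands in for the VHD blob in the storage account
snapshot="$work/myvm-osdisk.snap.vhd"  # stands in for the snapshot copy

echo "clean OS state" > "$vhd"         # the freshly built VM's disk

# First run: no snapshot yet, so take one (a blob copy in the real script).
if [ ! -f "$snapshot" ]; then
  cp "$vhd" "$snapshot"
fi

echo "polluted OS state" > "$vhd"      # the VM drifts over time

# Subsequent run: in Azure this is delete the VM, overwrite the VHD blob
# from the snapshot copy, then recreate the VM attached to the restored
# VHD. With files, the middle step is just a copy back.
cp "$snapshot" "$vhd"
cat "$vhd"                             # prints: clean OS state
```

The extra ceremony in the real script (destroying and recreating the VM) exists only because Azure won't let you swap the VHD on a live VM; the snapshot/restore logic itself is exactly this copy-in, copy-out.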
```powershell
# Set variable values
```
We decided to use Jekyll to host our blog, which most of us had never used before. All of us need a way to fire up a Jekyll instance to test our changes (even simple things like how a new post will render). Jekyll is really made for Linux, and most of us run Windows. Although Jekyll can run on Windows in theory, we have struggled to get it to work. Amir came to the rescue and created a Docker image that includes Jekyll configured according to our needs.
Now we have a new problem - most of us haven't used Docker before. We had some struggles; just getting Docker up and running and configured on Windows took a little bit of work for those of us who hadn't used it before. Those of us using Windows 10 discovered there were [additional challenges getting Docker running](Docker on Windows 10 Problems). And for me personally, I do all my work in VMs (either local VMs or Azure VMs), I didn't want to install Docker/VirtualBox on my host OS, and I discovered that you can't install Docker/VirtualBox inside a Hyper-V Windows VM.
I decided I was going to get something running in Azure, and I had an additional goal of making what I did repeatable and automated so that my fellow Western Devs could easily do what I did. I’ve been doing a lot of work with Azure ARM Templates lately, so that was the approach I took. I noticed there is a pre-existing image with Ubuntu available, and there is a Docker VM Extension that you can apply during provisioning that will install/configure Docker, and it can use Docker Compose to spin up one or more containers too.
I created the JSON ARM template and a simple PowerShell script to deploy it. Now any of my fellow Western Devs can simply run a PS1 script, get prompted for a few pieces of info (Azure credentials, Azure subscription, GitHub branch name, resource group name), and ~10 mins later they will have a new VM in Azure with Docker installed, our WesternDevs image deployed, and our Jekyll site up and running with the code from their branch. Then they can bring it up in a web browser and test their changes before merging with master. When they're done they can delete the Azure resource group if they wish, or keep it around for future testing.
If you’ve never used Azure ARM Templates before, it’s a JSON file that describes a set of Azure Resources and their configurations. You can use a PowerShell cmdlet to give the JSON to Azure, and it will spin up a new Resource Group and a bunch of new resources based on what is described in the JSON. For the WesternDevs template the JSON describes the following resources:
The full JSON file is included at the end of this post. It can also be found on GitHub. Some of the configuration that is described in the JSON template includes:
VM Extensions are additional components that can be applied to your VM as part of the provisioning process. There is an extension available called DockerExtension that will install and configure Docker for you as part of the provisioning process. Here is the relevant part of the template:
```json
{
```
This tells it to apply the DockerExtension to the VM previously created. Additionally, it uses Docker Compose to let you specify one or more Docker containers that it will pull down from Docker Hub and deploy into Docker, along with configuration such as ports to map to the host and command(s) to run on the Docker image.
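The full template is at the end of the post; as a rough, hedged sketch, a DockerExtension resource with a compose section typically looks like the following. The API version, service name (`jekyll`), and the command string here are placeholder assumptions, not values copied from the original template:

```json
{
  "type": "Microsoft.Compute/virtualMachines/extensions",
  "name": "[concat(parameters('vmName'), '/DockerExtension')]",
  "apiVersion": "2015-06-15",
  "location": "[resourceGroup().location]",
  "dependsOn": [
    "[concat('Microsoft.Compute/virtualMachines/', parameters('vmName'))]"
  ],
  "properties": {
    "publisher": "Microsoft.Azure.Extensions",
    "type": "DockerExtension",
    "typeHandlerVersion": "1.0",
    "autoUpgradeMinorVersion": true,
    "settings": {
      "compose": {
        "jekyll": {
          "image": "abarylko/western-devs:v1",
          "ports": [ "4000:4000" ],
          "command": "<clone repo, checkout branch, update _config.yml, bundle install, run jekyll>"
        }
      }
    }
  }
}
```

The `compose` block inside `settings` is just Docker Compose YAML expressed as JSON, which is how the extension lets one ARM template describe both the VM and the containers that run on it.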
In the template above we tell it to grab the Docker image abarylko/western-devs:v1, which was created by my friend Amir Barylko and already has Jekyll installed. Then we tell it to map port 4000 from the Docker container to port 4000 on the host Linux VM. Lastly, we give it a few bash commands to run on the Docker container when it starts up:
These commands clone the GitHub repo into the Docker container; check out the branch we want to test (the branch name is passed as a parameter into the ARM template, as we'll see below in the PowerShell); update the _config.yml file (a config file Jekyll uses) to replace the public URL with the URL of our Azure VM (so that when we test the site, the links all point back to the same testing site URL); use Bundler to install all our gems; and then fire up Jekyll to run our site.
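The _config.yml rewrite in the middle of that chain is easy to sanity-check on its own. Here's a toy version: the file contents and the VM DNS name are made up, and I'm using a simple `sed` substitution as a stand-in for whatever the real startup command does:

```shell
set -e
work=$(mktemp -d)
# Hypothetical minimal _config.yml; the real Western Devs one has more keys.
printf 'title: Western Devs\nurl: http://www.westerndevs.com\n' > "$work/_config.yml"

# Rewrite the public url to point at the test VM, as the container's
# startup command does (the DNS name below is a placeholder).
vmurl="http://mytestvm.westus.cloudapp.azure.com:4000"
sed -i "s|^url: .*|url: $vmurl|" "$work/_config.yml"
cat "$work/_config.yml"
```

After this runs, Jekyll generates every link against the test VM's URL instead of www.westerndevs.com, which is why clicking around the test site keeps you on the test site.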
Now that we have a JSON file that describes our Azure resources, we need a way to deploy it. This is a simple bit of PowerShell. My goal here was to make this as simple as possible for somebody to use, even if they aren't comfortable with PowerShell/Azure/Docker/Linux/Jekyll/etc. It's as simple as running the PS1, being prompted for 4 things (new resource group name, GitHub branch name, Azure login, Azure subscription), then waiting ~10 mins for Azure to do its thing.
```powershell
$EnvName = Read-Host "Name for Azure Resource Group (must be globally unique)?"
```
The interesting line here is the one that calls New-AzureResourceGroup. That passes the JSON template to Azure and tells it to create a new resource group and provision the resources described in the template. We also tell it the Azure datacenter location where everything should be created, and pass in a parameter that contains the branch name.
The rest of the script just collects some values from the user; at the end it will launch your browser to the newly created site and give you the option to delete all the Azure resources just created, if you wish.
You can easily give this a try yourself.
```json
{
```