For others, in-person training is the most effective way to dive into new content. Having someone in the same room who knows how to navigate a new release of software, frameworks and tooling - and all the related changes - is a powerful asset while you learn.
With that in mind, we are pleased to announce our first Monsters workshop in Calgary, Alberta. Please join us in Calgary as we mash on changes, approaches, caveats and wins for all things ASP.NET Core.
Already interested? You can sign up today and join us in February from the 22nd to the 24th.
My good friends Dave, Simon and I have been mashing on ASP.NET Core since its inception. This workshop is the culmination of what we have learned along the way and applied in our projects, samples and through our videos on Microsoft's Channel 9. We're taking you deep into three fully-packed days that walk you through various stages of application development. Our number one priority is to equip you with the skills you need to start on a Core MVC project and transition your existing skills to the new tooling.
We expect you to be familiar with web technologies and to be comfortable in Visual Studio. Beyond that, here is some of what you can expect:
Be sure to check out our training site to view the full curriculum.
Calgary and the surrounding area are home to some of the most beautiful sights in Canada, with a mountain range full of winter sports about an hour away, skiing at Calgary's Olympic Park, and NHL and WHL hockey on the edge of downtown. There are great restaurants, museums, art exhibits and theatre, along with a great nightlife including brew pubs, world-famous Canadian poutine and an assortment of comedy clubs.
If you're joining us from outside the area, we highly recommend adding on a few days to your trip so that you can explore the area. If you are from outside of Canada, you will need to get a valid International Driver's Permit from your country before you leave if you wish to rent a car when you're here (handy for exploring!).
Can't join us in Calgary? No problem. Just hit the registration page and sign up for our email list to be notified of other upcoming training cities.
Happy New Year, and happy coding!
Let's have a look at what it takes to allow users to authenticate in our application using GitHub as the login source, and you can check out the Monsters video take of this on Channel 9.
OAuth has a reputation as a complicated spec to adhere to, and this is further perpetuated by the fact that while much of the mechanics are the same among authentication providers, the implementation of how one retrieves information about the logged-in user differs from source to source.
The security repo for ASP.NET gives us some pretty good options for the big, wider-market plays like Facebook and Twitter, but there aren't - nor can or should there be - packages for every provider. GitHub is appealing as a source when we target other developers, and while it lacks a package of its own, we can leverage the raw OAuth provider and implement the user profile loading details on our own.
In short, the steps are as follows:
- add the `Microsoft.AspNet.Authentication.OAuth` package

Okay, now let's dive into the nitty-gritty of it.
First step is a gimme. Just head into your `project.json` and add the package to the list of dependencies in your application.
```json
"Microsoft.AspNet.Authentication.OAuth": "1.0.0-rc1-final",
```
You can see here that I am on RC1, so assume there may still be some changes to the naming and, obviously, the version of the package you'll want to use.
Pull down the user account menu from your avatar in the top-right corner of GitHub, then select Settings. Next, go to the OAuth Applications section and create a new application. This is pretty straightforward, but it's worth pointing out a few things.
First, you'll need to note your client ID and secret, or minimally, you'll want to leave the browser window open.
Second, you'll see that I have an authorization callback set up in the app as follows:
https://localhost:44363/signin-github
This is important for two reasons: GitHub will only redirect back to the callback URL you register here, and the `signin-github` bit will need to be configured in our middleware. If you want better control over how that is configured in your application, you can incorporate the appropriate settings into your configuration files, but you'll also need to update your GitHub app. This process is still relevant - you'll likely want something to test with locally without having to deploy your application.
For production applications you'll be fine to set environment variables or configure application settings in Azure (which are loaded as env vars), but locally you'll want access to the config as well. You can set up user secrets via the command line, or you can just right-click on your project in Visual Studio 2015 and select "Manage User Secrets". From there, you set it up like so:
```json
{
```
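The file itself is small; as a sketch, it might look like this, assuming key names (`GitHub:ClientId`, `GitHub:ClientSecret`) that line up with what the middleware options read from configuration:

```json
{
  "GitHub": {
    "ClientId": "[clientid]",
    "ClientSecret": "[clientsecret]"
  }
}
```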
In the above code we also wired up some code to fire during the `OnCreatingTicket` event, so let's implement that next. To do this, we'll add the middleware to the `Configure` method in our startup class, and add a property to the class to expose our desired settings.
The middleware call is like so:

```csharp
app.UseOAuthAuthentication(GitHubOptions);
```
And we create the property as such:
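The property listing didn't survive in this copy, so here is a sketch of what an RC1-era options property could look like. The endpoint URLs are GitHub's documented OAuth endpoints; the `CreatingGitHubAuthTicket` helper name is an assumption (it stands in for the handler we implement below), and the exact option names shifted between pre-release versions:

```csharp
// Sketch only: a property on the Startup class exposing our OAuth settings.
// Option and type names follow the RC1-era Microsoft.AspNet.Authentication.OAuth
// package and may differ in other versions.
private OAuthOptions GitHubOptions => new OAuthOptions
{
    AuthenticationScheme = "GitHub",
    DisplayName = "GitHub",
    ClientId = Configuration["GitHub:ClientId"],
    ClientSecret = Configuration["GitHub:ClientSecret"],
    CallbackPath = new PathString("/signin-github"),
    AuthorizationEndpoint = "https://github.com/login/oauth/authorize",
    TokenEndpoint = "https://github.com/login/oauth/access_token",
    UserInformationEndpoint = "https://api.github.com/user",
    Events = new OAuthEvents
    {
        OnCreatingTicket = context => CreatingGitHubAuthTicket(context)
    }
};
```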
Remember that callback path we set up on GitHub? You'll see it again in our settings above. You'll also note that we're retrieving our client ID and secret from our configuration, and that we're setting up a handler for when the auth ticket is created so that we can go fetch additional details about the authenticating party.
We'll have to call back out to GitHub to get the user's details; they don't come back with the base calls for authentication. This is the part that is different for each provider, and thus the part you'll need to write yourself if you wish to use an alternate source for authentication.
We will add two parts to this; the first will call out to get the information about the user, the second will parse the result to extract the claims. Both of these can live in your `startup.cs` class.
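As a sketch of those two parts (method names, claim choices and even the context type are assumptions based on the RC1-era APIs, which changed between pre-releases):

```csharp
// Sketch of the two helpers described above; not the exact code from the post.
private async Task CreatingGitHubAuthTicket(OAuthCreatingTicketContext context)
{
    // Call back out to GitHub's user endpoint with the access token we received
    var request = new HttpRequestMessage(HttpMethod.Get, context.Options.UserInformationEndpoint);
    request.Headers.Accept.Add(new MediaTypeWithQualityHeaderValue("application/json"));
    request.Headers.Authorization = new AuthenticationHeaderValue("Bearer", context.AccessToken);

    var response = await context.Backchannel.SendAsync(request, context.HttpContext.RequestAborted);
    response.EnsureSuccessStatusCode();

    var user = JObject.Parse(await response.Content.ReadAsStringAsync());
    AddClaims(context, user);
}

private static void AddClaims(OAuthCreatingTicketContext context, JObject user)
{
    // Pull the basics out of the payload and turn them into claims
    var identifier = user.Value<string>("id");
    if (!string.IsNullOrEmpty(identifier))
        context.Identity.AddClaim(new Claim(ClaimTypes.NameIdentifier, identifier));

    var userName = user.Value<string>("login");
    if (!string.IsNullOrEmpty(userName))
        context.Identity.AddClaim(new Claim(ClaimsIdentity.DefaultNameClaimType, userName));
}
```

A snippet like this also needs usings for `System.Security.Claims`, `System.Net.Http`, `System.Net.Http.Headers` and `Newtonsoft.Json.Linq`.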
Unfortunately, the base implementation of the OAuth provider does not support requesting additional fields for the user; I'll take a look at that in a future post. All you're going to get are the basics with the above - none of the account details beyond the email address, nor the ability to work with their repos/issues/PRs.
There you have it. All the chops you need to start exercising your OAuth muscle, and a basic implementation that you can leverage as a starting point. Trying this out will take you about 15 minutes, start to finish, provided you already have a GitHub account.
Finally, check out the Monsters' video on Channel 9 where I code this live.
Happy Coding!
In a recent ASP.NET Community Standup, the team quickly ran through a list of things that you can do to make sure that your environment is in check for building as quickly as possible and running a stable version of Visual Studio. These tips included:
Update: I've gotten a few great ideas in the comments below and would love to hear more! Please feel free to add your thoughts in the comments!
This is an easy tip to try out and has no impact on your normal dev environment. It's not as destructive as, say, resetting Visual Studio and nuking all your plugins. Just open a Visual Studio command prompt (I typically do so as admin) and launch the IDE like so:
```shell
devenv /SafeMode
```
Many times extensions crap out and slow you down. There are some great ones out there, so I would never suggest removing them all. I have a love-hate relationship with ReSharper and often disable it, especially when I'm travelling and can't be plugged in to step up my CPU speed.
This was a game changer for me: it was a tough pill to swallow, but going from hundreds of gigs of space down to ~40 on my first SSD was so worth it. Over the last couple of years I've upgraded along the way and currently have two that I sit on (on my two different rigs).
If you can hear your hard drive, you are going to be an unhappy individual in your life. - Scott Hanselman
I recommend this one or this one and can vouch for your `_happy++;` should you make the switch. The thing I love about the Samsungs is the little Magician software they bundle with the drives so that you can easily drop your HDD and move all your data over to the new, faster kit.
I want to make it perfectly clear that while I agree with this tip and do this myself, it's not one that you should take lightly as you're removing a layer of protection from your computer. So don't do it. Unless you want to run faster, in which case, exclude these guys from your real-time scan:
I'm running McAfee, which looks like this when you drill in from the dashboard:
Pretty easy to set up, and you'll get some of your day back. But I told you not to do it.
There isn't enough gain for me to recommend this one. I actually tried it a couple of years ago when I got my first SSD and the speed wasn't greatly improved. On top of that, it required slicing out RAM, running scripts to mirror or copy over the code and ran the risk of data loss if the computer freezes. If you can drop $100 or less on an SSD, it's just not worth it to run a RAM disk. Some folks argue that it's 10x the speed (or more) than an SSD, but I wasn't sold on that sales pitch.
That said, if you're still on an HDD, I can confirm that running on a RAM disk will be a Godsend. Most recently I have used this one but I haven't tried it on Win 10. There's also a commercial one that a friend of mine swears by.
There's a great tool that most devs I know run on their machine, the much-improved version of process monitor from sysinternals:
SIPM will reveal everything that gets logged out by any processes that are running, from disk reads/writes to thread allocation to network activity and more. If you ever figured there wasn't a lot to do when you "just wanted to build", you'll be quite surprised when you build your project and see tens of thousands of events drop in milliseconds. Computers are awesome.
Simply start up Process Monitor, then start Visual Studio and watch for events. You can start homing in on what is causing your grief. I find the best way to get at things is by excluding the bits that you know are not the problem, like explorer.exe, and then narrowing further by excluding things like reads from the registry.
Things like IntelliTrace offer great benefits, but if you're in power-saver mode you're going to find yourself crying for processor cycles. I notice when travelling, when I often find myself not plugged in, that builds can drag on and debugging can be brutal if you have certain features on. Check to see if you have anything running that you don't need.
You can watch the original video below (jump to the 36:00 mark), or hit it out on the YouTubes.
A huge thanks to the leaders there on the ASP.NET team who do the weekly standup and share their insight in this area.
Do you have any tips for others using Visual Studio? Any tricks you think have helped you reach performance nirvana? Please share your thoughts below!
Happy Coding!
Finding success as a remote worker is pretty darn hard. Unless you're a complete natural, you will need to have the perfect combination of environment, corporate trust, family time and personal time. Failing being perfect, you can just take the route I took and try your best to follow a few practices that can help you disconnect at the end of the day.
To that end, corporate trust is just as tricky a puzzle as the rest of the challenges for work-at-homes; however, provided you are working at or moving to a company that understands remote work and empowers you to succeed, there are things you need to be doing to build and maintain that trust.
When you work in an office, chances are you have some form of a daily ritual that you partake in. You have things you do when you get up as you prepare for your workday, some explicitly and maybe some that happen subconsciously. You make some eggs for your husband or some bacon for your wife while they get the kids ready for school. Maybe you spend time in some personal study. Eventually you start to make your way to the office on foot, by bike, in your vehicle or perhaps on some form of mass public transit. Some poor saps have even parted ways with their cash to buy Segways and, sadly, they use those to get where they're going. You swing by your favorite coffee joint, or meet up with co-workers in the lobby and chat on the elevator ride and at some point you find yourself at your desk.
The point is simply this: getting from point A to point B is part of the routine. You start thinking about work, your schedule, what happened last week. If your transportation is "hands off" then you can catch up on emails or start to plan your day out. The transition is more important than the destination in this case, but this is unfortunately the very thing that most remote workers will omit.
For me, I love walking to and from work. Yes, I work from home, but I've made the walk part of my ritual. In the morning it lets me tune into my work day and in the late afternoon when I'm leaving work I can reset and get ready to enjoy my family.
Moving directly from a work context into a home context not only blurs the lines, but it discounts the fact that you need a different mindspace to manage your work than you do to manage your home.
You need a line to say, "this is where work stops and my personal life begins."
You have an agreement with your employer to perform certain units of work, but it's your responsibility to set expectations accordingly. That means that you should engage in practices that allow you to define a clean break from the work day.
One of the ways that I do this is by virtue of leaving my contact points at work. I have a Skype number that is associated with my work duties that I don't answer during evenings or on weekends. When I'm getting ready to leave work, I change my email settings on my phone to only check for new mail on demand.
Another way to set boundaries is to actively engage in other things in the non-work times. That means dedicating time to your family, and if you must, signing your family up for things that happen during those evening and weekend hours, or getting involved with a group of friends that are active and motivated to do non-work-like things. This could be snowboarding, gaming, camping, playing music or hanging out for wings. If you make commitments that take you away from work, well, you can't work.
Let's also note that the reverse must also hold true; if you are going to be setting limits on when you can and can't access work, you must also set limits on when you can and can't access "home". I have done this in a number of ways:
There are, of course, some exceptions to these rules and there must be. My oldest son lives with Type 1 Diabetes, so when my wife is unavailable to assist, I bring the house phone into my workspace for emergencies. When I am expecting a package or someone to swing by I will indeed answer the door. But these are things that you can also make your co-workers aware of so that the disruption is not something that derails your work and they know that it may be coming. Emergencies, on the other hand, are emergencies and I don't think those should be viewed any differently by your team and management than they would if you were in the office.
Here are a few tips to help you close things off at the end of the day.
With practices like these in place, you'll find that you actually have some peace of mind through the evening, knowing that you've tied things off at the end of the day. You don't have to worry about lost work, or missed follow-ups.
By the way, the same is also true when starting your day: you should have an easy way to get going in the morning so that as soon as you sit down at your computer you are ready to be productive and get into your daily flow. As a software developer, I actually have a script that I run that spins up my tools and opens the folders that I need so that everything I'll be working on is in front of me. Even if my computer has rebooted or applied updates, I am ready to rock out on my project in just a few seconds. If you're interested, here's an example of a script I use to get my day started.
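My script isn't reproduced here, but the idea is tiny; here is a sketch in Windows batch, where every path and tool name is a placeholder you'd swap for your own:

```shell
:: day-start.cmd - placeholder paths and tools; swap in your own
cd /d C:\src\current-project
git pull
start "" "C:\Program Files (x86)\Microsoft Visual Studio 14.0\Common7\IDE\devenv.exe" CurrentProject.sln
start "" explorer.exe C:\src\current-project
```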
Finally, wind down! I walk home on most days to disconnect from work. I have a friend who hits the gym, and others for whom the timing works to break off at the end of the day and go pick up the kids.
If you are currently working at home and don't have these practices in place, it will take some time to work up to them. While they mostly seem simple on the surface, some habits are hard to break, and changing expectations is even more difficult if you've previously let work creep out of your home office.
As I always say when talking with folks about this, you really need to experiment and find the things that work for you for where you're at and keep evaluating if there are tweaks or corrections you need to make along the way.
Any effort to help you move away from the feeling of constantly being connected to work will help you better enjoy your evenings and weekends. I hope you find a few gems in here that encourage you to work towards that goal.
Happy relaxing! (Now, go spend some time with your family or friends!)
I contribute to an open source project called AllReady from the Humanitarian Toolbox. One of the things that we do on the project is use Azure Storage Queues to send and process messages in a different execution context to keep our main application moving along nicely. In order to do this, I added some properties to the configuration file under a storage node:
```json
"Data": {
```
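Only the opening line of that snippet survives above; the shape, as a sketch (property names here are assumptions chosen to line up with the settings class we build next):

```json
"Data": {
  "Storage": {
    "AzureStorageAccountName": "[accountname]",
    "AzureStorageAccountKey": "[storagekey]",
    "EnableAzureQueueService": "true"
  }
}
```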
Obviously, `[storagekey]` is not a valid key to access a storage account in Azure, but you'll notice that I also have a flag in there to enable/disable the queue service. By putting this in place, we can toggle the service used at dev time and, rather than writing to the queue, we can instead write to the local console. Of course, we have the proper key set in our Azure Web App so that it's loaded and overridden at run time with the correct value. I discussed the nomenclature of the keys you'd use in my post on JSON configuration.
Now, to actually put the storage settings from our config in play, we're going to create a class to contain the properties that we will need to inspect at runtime.
```csharp
public class AzureStorageSettings
```
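Filled out, the class is just a plain set of properties; a sketch, with names assumed to mirror the keys in the `Storage` section:

```csharp
// One-to-one mapping of the "Storage" configuration section;
// property names are assumptions matching the JSON keys.
public class AzureStorageSettings
{
    public string AzureStorageAccountName { get; set; }
    public string AzureStorageAccountKey { get; set; }
    public string EnableAzureQueueService { get; set; }
}
```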
This class is a one-to-one mapping of the values we put in our `Storage` section. All that's left is to get the values from our configuration in there.
Originally I was loading up these properties one by one, line after line of reading from the config and assigning the values to the instance of the `AzureStorageSettings` class. But in the fall I had the opportunity to work with Ryan Nowak of the ASP.NET team and he showed me a much better approach, what the ASP.NET team refers to as the options pattern. It's basically closing the loop on the work we have above and giving us the ability to get at our configuration with strongly-typed objects.
As a reminder, our `Configuration` property back in `startup.cs` is an instance of an `IConfiguration`, built from the `ConfigurationBuilder` in our constructor. It contains all the data that we've added in key-value pairs, and we can now use that object to expose the information we need through our IoC container when we're configuring our services.
```csharp
public void ConfigureServices(IServiceCollection services)
```
What we have to do is call the `GetSection` method along with the corresponding path to where the object instance's properties will be loaded from. Our `Storage` information was in the `Data` property at the root of the document, so we pack it in as `Data:Storage` as the parameter to `GetSection`.
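Put together, the registration in `ConfigureServices` looks something like this sketch (the `Configure<T>` extension is the RC1-era options API):

```csharp
public void ConfigureServices(IServiceCollection services)
{
    // Make the options services available, then bind our settings class
    // to the Data:Storage section of the configuration.
    services.AddOptions();
    services.Configure<AzureStorageSettings>(Configuration.GetSection("Data:Storage"));
}
```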
Now I've got configuration in my IoC container and a class that represents the slice of configuration that I'm interested in. I want to mux those up and use the settings in my service (or controller, or anything that is spun up with IoC). To do that, I simply inject them into my constructor like so:
```csharp
public QueueStorageService(IOptions<AzureStorageSettings> options)
```
By simply accepting a parameter of type `IOptions<AzureStorageSettings>` in the constructor of my service, the appropriate configuration elements are parsed out and provided to me in the `Value` property as an instance of my `AzureStorageSettings` class.
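A sketch of the consuming service (the field name is illustrative):

```csharp
public class QueueStorageService
{
    private readonly AzureStorageSettings _storageSettings;

    public QueueStorageService(IOptions<AzureStorageSettings> options)
    {
        // The bound settings instance is surfaced on the Value property
        _storageSettings = options.Value;
    }
}
```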
Note: you'll have to add a using statement to your controller or service for the `IOptions` interface:

```csharp
using Microsoft.Extensions.OptionsModel;
```
So to review, there are a couple of things we need to do:
- register our configuration section with the services container in `ConfigureServices`
- use `IOptions<>` to inject the settings into our constructor

As you can see, this is a powerful and efficient way to create strongly-typed configuration objects in your ASP.NET Core MVC projects. It takes a minute to wrap your head around the pieces that are in play, but we can do away with the old method of custom configuration sections and simply represent our configuration data as JSON.
Happy coding!
You can see from the document snippet above, taken from the default project template, that we can easily achieve a well-structured, human-readable set of data. Where we used to do something like the following:
```xml
<appSettings>
```
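That old `<appSettings>` block would look something like this (the keys and values here are illustrative, not from the original post):

```xml
<!-- The old way: flat keys in web.config; entries are illustrative -->
<appSettings>
  <add key="LoggingLevel" value="Verbose" />
  <add key="DefaultConnection" value="Server=(localdb)\mssqllocaldb;..." />
</appSettings>
```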
Our other option, of course, is going the custom object route, but that has always been a pain in the rear. Today we can do this:
```json
"Logging": {
```
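The `Logging` section from the default template looked roughly like this in the RC1 timeframe (treat the exact values as approximate):

```json
"Logging": {
  "IncludeScopes": false,
  "LogLevel": {
    "Default": "Verbose",
    "System": "Information",
    "Microsoft": "Information"
  }
}
```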
Now the data that we have related to logging can be grouped into a logical fragment of the configuration file and can grow as required.
This organization is great and comes with the benefit of being collapsible into key-value pairs. We see evidence of this in the connection string, which is also located in the `appsettings.json` file:
```json
"Data": {
```
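Sketched out in full, with the database name as a placeholder:

```json
"Data": {
  "DefaultConnection": {
    "ConnectionString": "Server=(localdb)\\mssqllocaldb;Database=[mydatabase];Trusted_Connection=True;"
  }
}
```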
And when you want to pull it out of the stored configuration, you do so like this example from `startup.cs`:
```csharp
services.AddEntityFramework()
```
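The fuller form of that call, as it appeared in RC1-era templates (the context class name is whatever your project uses):

```csharp
// Register EF7 with SQL Server and point the DbContext at the
// connection string stored under Data:DefaultConnection:ConnectionString.
services.AddEntityFramework()
    .AddSqlServer()
    .AddDbContext<ApplicationDbContext>(options =>
        options.UseSqlServer(Configuration["Data:DefaultConnection:ConnectionString"]));
```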
Notice how the value of the `ConnectionString` property of the `DefaultConnection` object within the `Data` object at the root was stored as the key `Data:DefaultConnection:ConnectionString`. This is perfect for allowing overrides, such as using environment variables. This is further made handy by the fact that your settings in Azure are automatically loaded as environment variables into your application execution process at startup.
In your Azure Web App configuration, you would simply need to add a key named `Data:DefaultConnection:ConnectionString` and set the value accordingly. This means that developers can use LocalDB locally, and the app automatically lights up in the cloud with the real database.
These key-value pairs are great, but in your application it would be a bother to have to load out each property by hand. In my next post I'm going to show you how to take a configuration section and turn it into a set of typed configuration options that can be used throughout your project.
Happy coding!
ASP.NET Core MVC introduces a new configuration system that adds flexibility and simultaneously enables cross-platform support (in a way that makes sense on other platforms). In this post we're going to cover the basics of configuration and what you can expect as you look at the project template from File -> New Project in Visual Studio 2015.
ASP.NET Core was previously called ASP.NET 5, and before that ASP.NET vNext. ASP.NET Core MVC is what was referred to as MVC 6. The tooling and the branding will change in the weeks and months ahead, but the basics of configuration I detail here should remain relatively intact.
In earlier versions of MVC it is true that the configuration was loaded very early in the process. If you had values in your App.Config they got gobbled up at startup. The problem was, you didn't have a chance to really interact with the configuration system - it just was what it was. This usually meant that we would create our own systems for loading the values, we'd get creative in how we balanced config-time and run-time values and, in short, we'd have to do the heavy-lifting ourselves.
ASP.NET Core lets us be much more opinionated about what goes on while registering the configuration values. Sure, it does and should still load configuration pre-startup, but now we can play a role in the process.
```csharp
public Startup(IHostingEnvironment env)
```
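Only the signature survives above; here is a sketch of the rest of that constructor, modeled on the RC1 default template (file names and ordering are the template's, not necessarily the original post's listing):

```csharp
public Startup(IHostingEnvironment env)
{
    var builder = new ConfigurationBuilder()
        .AddJsonFile("appsettings.json")
        .AddJsonFile($"appsettings.{env.EnvironmentName}.json", optional: true);

    if (env.IsDevelopment())
    {
        // Local-only values that never get checked in
        builder.AddUserSecrets();
    }

    // Environment variables load last so they can override the JSON values
    builder.AddEnvironmentVariables();
    Configuration = builder.Build();
}
```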
As you can see above, the first bit of code our application runs contains what is needed to load our configuration.
More importantly, we get a say in how and where the configuration is loaded from. A great example of this is that we can load a JSON file for the default config and then later use environment variables to overload those defaults.
That code above is the `Startup` method of the `Startup` class, and we're very certain about when the config is loaded and where from. We even get to test if we're in the development environment.
This comes in handy when you're deploying to Azure or would like to test with your own values instead of making changes to the JSON config file that would otherwise be checked in with the project.
Speaking of which, you're likely going to need to store some values in there that you will never want to share, and that you'll never want to check into your repo - things like API tokens for integration into third-party services (think SendGrid, Twilio or PayPal).
And that brings us to user secrets. It's still not clear how these are going to shake out - there's still active discussion about how they should be named and stored - but the idea is straightforward and lets you work locally with sensitive data without having to modify your config. You can think of them as "environment variables for your project".
There is pretty basic tooling from the command line:
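The commands shifted between betas, so treat these as an approximation of the RC1-era tooling rather than exact syntax:

```shell
dnu commands install Microsoft.Extensions.SecretManager
user-secret set GitHub:ClientId "[clientid]"
user-secret list
```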
The secrets are stored in your user data here:

```
%APPDATA%\microsoft\UserSecrets\
```
If you've used secrets, there will be a sub-folder here for each project you've created. Depending on where they land, the folder name will likely be a combination of the project name and a GUID, but you can set this yourself in your `project.json`.
I'll do a follow-up post on user secrets and demonstrate in greater detail how to leverage it in your projects.
We're still in an RC period (should it be called beta?) and there are naming pieces yet to come, but there is nothing stopping you from learning about the configuration system in ASP.NET Core today. Grab a copy of Visual Studio 2015 - hey, it's free! - and start experimenting with the bits. Be sure to check back in the weeks ahead for more information about configuration in ASP.NET Core and Core MVC.
Happy coding!
The day the announcement was made for Band 2, I was watching the keynote and keeping my finger on F5, repeatedly refreshing my browser and eagerly waiting for the Band 2 "Coming Soon" page to turn over to an "Order Now" page. The week before launch, I added my credit card as a saved card on the Microsoft store. As soon as the page flipped over, I pulled the trigger and reserved my new edition.
I've worn it every day since November 2, 2015 and here's what I've learned about the Microsoft Band 2 battery life, including tips on how to keep yours running all week long.
The Band is easily charged by unhinging the clasp and sliding the device off your wrist, then attaching the magnetized charger, which snaps automatically in place. A full charge for a fully depleted battery should run you less than two hours.
Under normal usage, your Band will deplete about 30% per day. This includes getting notifications, syncing with your phone, controlling music, setting timers and alarms, using the daily heart rate tracking and buying coffee at Starbucks, should you be so inclined.
Adding workouts to your day will drive the battery down a little more quickly, and I seem to burn about 10% of my battery when I go for a 30 min run with GPS turned on.
The "bottom half" of the battery seems to charge a little quicker than the "top half". What I mean is that going from 0% to 50% seems to take about 40 minutes or so whereas from the 50% mark and higher, the Band 2 charges at a rate of about 1% per minute.
The battery on my Band 2 lasts me through the week. While it depletes every day about 30%, I also charge it every day during my daily routine and after workouts while I shower. Here's a log from my last week of use starting with a full charge on Saturday:
Basically, I'm charging it when I get ready in the AM, which doesn't quite catch it up from what it lost over the previous day. However, when I work out, I typically shower afterwards and this gives me an extra 15-20 minutes to charge it again. So, interestingly, I actually use up the Band 2 battery less when I'm working out more.
Most Saturdays I actually wear my Band until the battery warning goes off, then I plug it in until fully charged. I called into the Microsoft support line and asked if this was a good practice for using and charging the Band 2 - essentially letting it run down through the week and then giving it a complete charge on the weekend. The support technician on the call agreed that this was a good strategy and commented that she has a similar routine. In the two-and-a-half months of use, I have not seen depreciating performance on the battery life.
Here are a few things you can do to keep your Band 2 running all week:
Happy fitnessing!
These are difficult things to accept, especially if you're in competition for advancement or your employer is challenging your boundaries on a regular basis, but just remember that resetting expectations is much more difficult than setting them in the first place. Remember that every action you take (or don't) defines how people will expect you to act in the future.
I am currently an employee and likely will be for the foreseeable future. My career aspirations do not include management despite the fact that I love to lead teams (there is a significant difference between the two, but I'll save that for a different post). I firmly believe that you can lead a work life and a personal life that are largely disconnected and be quite successful doing so.
As someone who has found incredible happiness in the balance between a happy home life and spending an appropriate amount of time on my career, I recently read a post that did not sit well with me at all. In "distilling" everything down into what was needed in order to be productive, the author lists a set of "rules" that include this:
Embrace the fact that work and life are intricately intertwined
I'm going to tell you right now: there are a lot of things wrong with this rule. The rest of that post is laden with info from other sources and a ton of exercises that I can't see many people filling out, but the idea that I should somehow lose myself in dedication to work is terribly misguided.
Let's break it down.
I want to add some clarity to my thought process here. The most confusing aspect of the aforementioned rule is that the terms used are not well-defined. I will do that here, defining what "work" actually means, and how it is a fundamentally different concept from your job or your career.
First, let's talk about your work. Work is the list of assigned duties that you carry out, which often vary from day to day and may be transient in nature; you may be asked to perform a set of work for a prescribed period of time and later be assigned different work. For some people, their work will be consistent for the entirety of their employ; for others it may change from month to month or day to day. But this is a good segue, as there is a difference between your work and your job.
Your job is the collection of work and tasks you perform in exchange for a pay cheque. Your job helps you meet your financial obligations and contributes to your household income. The tasks assigned are usually out of your control, though many forward-thinking organizations offer some freedom over long-term assignments and let you speak into the types of tasks you take on. Your experience in previous jobs will open doors for you to take on greater challenges in your next job, which may be within the same organization. Again, organizations that get this will help you form a path, leading you through more complicated work and increasingly important tasks en route to more senior roles. Admittedly, not all career paths provide this opportunity, and that's okay, too; the important thing is finding a job that supports your career.
Which leads me to the definition of career. Unlike the tasks you perform or the role you assume at a company, your career will never be fully articulated until you retire. That is, if retirement is indeed your endgame. If someone asks you what your career is today, your answer will likely be a point-in-time reflection of how you got to where you are. It is defined by the achievements you capture and the challenges you overcome. For some it will be a story of creativity and expression, for others it will be about dedication or service.
Others still will explain their career as the vehicle they used to chase a passion, rather than something they were passionate about. As a concrete example of that, I've demonstrated capabilities in software development and have been on this career path for 20 years now, writing code and leading teams. However, as much as I like writing code, my passion is actually learning, mentoring, sharing my work and teaching others. My career has allowed me to access my passion, and today I get to speak at conferences across North America and volunteer at the computer labs at local middle and high schools.
Okay, so if you're not going to embrace some kind of intertwined reality, what should you be chasing?
I believe the answer lies in sorting your crap out and remembering that the three things we talked about are tangible, distinct and sometimes disconnected or misaligned. Sometimes you won't like the work you're doing, but you will be completely happy with your job. Sometimes you take a job because you know you will enjoy the work for a period of time, but the job may not be helpful in advancing your career.
These things are okay, at least for a time. What you need to embrace is the fact that finding the perfect combination is very difficult, especially over the long haul. As your career objectives change, the job may no longer work for you as a tool to move down your career path. Sometimes you'll have an incredible job - great employer, solid pay and awesome co-workers - but the work you're assigned isn't what you like to be doing. Some people – it's happened to me – will advance through an organization based on their performance into a role that they are not suited for and will not be successful in (this is known as the Peter Principle).
Rather than thinking of things in rules, let's instead think of things in truths. It is true that:
I don't want to constantly attend to the negative; I want to focus on the things that are going right and look for signals that I am on the correct path. These green flags tell me that I am moving in the direction I want to go, and help me keep a healthy balance between my personal life and what I do in my job. When pondering where I'm at with my work, job and career, I ask myself questions from this reflection list:
If I've got positive answers for several of these, I know I'm in a good spot. On the other hand, failing to meet my criteria here on a couple of points could be a sign that there needs to be a change of season. These questions may not be exactly what you need, but the exercise is what I believe we all need to embrace; the answers are dynamic and are going to change over time, so you need to find questions that help you identify your measure of happiness.
In its simplest terms, I believe that an employer's job begins with the assignment of meaningful, relevant tasks and ends with a paycheque. In the space between, there is opportunity to challenge an employee, to contribute to their growth and to provide guidance on how to develop their skills in such a way that it serves the company and, wherever possible, helps to realize the goals of the individual.
As employees we have to concede that an employer is concerned with generating income in greater magnitude than expense as they execute the services their clients depend on. Even when we're at a job we enjoy we will likely be tasked with actions we would not choose for ourselves. We need to be clear about our broader goals and, when appropriate, be honest when our work or our job is not checking off things from our reflection list.
If an employer is mentoring you to make your work part of your life, I boldly challenge you to push back and define strong bounds through which your work cannot cross. Someone in a mentorship role who guides employees with banter of blending work and life is clearly not interested in the career of the employee, and places higher weight on the importance of completed work than on the individual. I don't want to work there.
I will yield that the original post did not prescribe an explanation of "work", which is why I have above, so I'll argue this from the perspective of both "work" and "job" as I've defined in this article.
The separation of work is easy; work is a task and usually requires the context of your work environment. An engineer can't complete blueprints without the requisite software, a counsellor cannot complete an evaluation without a patient and someone in janitorial cannot wax the floors without the buffing machine. And the floor. These are the types of things that can easily be slotted into your work schedule and, when you are good with time management, need not spill into your personal life on any regular frequency.
The separation of job and your life is a little less trivial. To avoid carrying the stress of the day home, you need to establish some good practices around "putting your tools down", disconnecting from work on your way out of the office. This is going to be something different for everybody; for me, it involves tearing down my workspace, closing applications and checking in code. To prevent bigger-picture concerns, such as the economy or the sustainability of the company you work at, from affecting your home life, you need to regard your employment in the correct light: namely, that it is part of your career but likely doesn't define it. And when your employer puts requirements into your job that breach your personal time, you will have some hard decisions to make.
Wherever you land, you need time to recharge. I agree with the sentiment from the original post that suggests multitasking can have a negative effect on you. How, then, can you intertwine work with playing with your kids? How do you answer emails when on a secluded retreat with your spouse? If getting in the zone is as easy as taking 10-15 minutes of focus, then do that during the workday. The important thing is to define the bounds of that workday and maintain them.
I know some incredible folks who have taken entirely different walks of life than I. They have found success in ways that would not work for me, as I have found success in ways that may not work for you. Here are some scenarios where your workday may bleed more frequently into your personal life.
The Self-Employed Running your own business is hard work. You may be working across time zones, you may have travel considerations and you need to react quickly to clients in order to collect the revenues you need to stay afloat.
Family Businesses While less common these days, the daily topics of family-run businesses will naturally find their way into conversations that happen outside of the work day. Reminders, follow-ups and even stress relief may happen when you find that every supper doubles as a staff party.
Those Without Family I have friends who are not into the family scene and are living a single life. They have found that the cross-over tends to happen more naturally, but have also noted that they prefer to spend their time with friends or working on their career (versus servicing requests from work).
When Travel is Required Travel is a tricky beast, and one that raises the need to really define when work starts and stops. Because it is an obvious impediment to splitting out a normal work day, work-related travel increases the importance of strong boundaries when you are at home.
Crunch Times These are realities for most folks in most fields: as a project closes, an emergency arises or a deadline approaches, a little extra effort is going to be required. You'll need to step up to successfully complete the tasks and stay in good standing with your employer.
In spite of these, I still believe that the cross-over time can be mitigated to a large degree. The important thing to do in these cases is to be more effective at communicating when the windows of work will be and ensuring that you have the support of those around you to help enforce it. Your husband or wife won't know that you are expecting a call unless you've found a way to share it with them.
First of all, don't sell the farm. If you're finding that something in your career, your job or your work is unsettling, make sure you have a set of reasonable questions you can ask yourself to find out why. Talk with your employer about ways to make it right and, if needed, start to explore alternate work arrangements inside your organization, or outside of it.
One thing you can do, immediately, is to start blocking off time for your family and for you personally. This will help to give you time to connect with those that are important to you and reflect on what is becoming of your career. When I did this, I started to see - almost immediately - how I needed the time away from work in order to concentrate when I was there. It also paved the way for positive, sweeping changes in the time I spend with my wife and kids.
It would be remiss of me to omit some of the other beliefs that I have, namely that I put my family above my job, my ethics ahead of my work and my responsibility to my family's obligations ahead of my personal interests. This means that I have had to make difficult decisions at times in order to maintain integrity and, to be honest, I suppose I'm still a long way out from knowing whether or not those were the right decisions. The best I can hope for is that hindsight reveals that the decisions were right at that time for who I was.
Here is the blog post I referenced, if you're interested.
I usually close by saying, "happy coding!" but in this case, this might be more appropriate: Happy career! :)
But this year, the MVP Summit was trumped in awesomeness as quickly as it came to a close, as the very next morning the code-a-thon for the Humanitarian Toolbox kicked into high gear.
Want to join the cause? The easiest way to get started is to join our weekly Saturday morning call. We are online from 10AM CST to Noon CST every Saturday. Watch Twitter –> for the link just before 10AM.
There are a lot of great projects out there. AllReady is great software with great purpose as well.
Whenever disaster strikes a community – a forest fire, a tsunami, an earthquake – lives are impacted. Sadly, those with the fewest resources are often the ones at most risk after the disaster.
From November 6th to the 8th I was privileged to join in with about twenty other individuals from around the world to work on AllReady, an open source project that is curated by the Humanitarian Toolbox. AllReady is software that helps communities organize and execute efforts in preparedness so that those who are at risk are better equipped to make it out of a disaster in the best shape possible.
The group of us descended on the Garage at Building 27 on the Microsoft campus. We hunkered down and plowed through hundreds of commits and many dozens of issues and pull requests.
It was an amazing experience. It was a group of really smart people, supported by folks on the ASP.NET team, building software that is going to change lives.
To find out more about the awesome work that The Humanitarian Toolbox is doing, please visit their site.
There is a huge draw to dive in and help with a project that can affect so many people and thwart the negative impact of unfortunate conditions. Preparedness is so much more effective than disaster recovery.
So...it's a good reason to get involved. But if that's not enough, check out this tech stack:
I mean, just look at that list. That's like…all the buzzwords. And jumping in to help on this project is also jumping in to learn. This is an opportunity to work with world-class developers on a project that is striving to have great architecture. It runs on the cloud in cloud-like ways and uses technology that is going to be used for the next 5-10 years and beyond.
After the weekend, we drew to a close by having a retrospective where we worked through the next steps and where this project is headed. It's exciting to see the momentum building as more community members come on board and start making commits.
We've got a lot done in just a few weeks, and I'm excited to see it moving forward daily.
The best part about the software is that everyone can contribute. I'm not going to lie, there are some advanced aspects of the project that will be hard to work through for junior developers. There are more aspects, still, that need the love of some senior developers. Regardless of where you are in the world or in your career, there is likely a task where you can get started.
If you have questions, reach out to me on Twitter and I'll help to get you started.
Happy coding!
TL;DR: Going forward, you're going to inherit from Controller instead of ApiController, or from nothing at all.
This is pretty much the bread and butter of a new controller in an old Web API 2.0 project:
```csharp
public class ValuesController : ApiController
```
Nothing really too interesting here. We're inheriting from a base class so we get some methods to leverage for return types, we can access the identity of the user through an IPrincipal and we have an HttpContext available to inspect the request and modify the response.
In ASP.NET 5 we don't have the ApiController to inherit from, at least not out of the box. Instead we inherit from the Controller class.
```csharp
public class ValuesController : Controller
```
Pretty easy, right? We actually have three fewer characters. Some pieces have moved around – the Request and Response objects now live as properties at the class level, and our User is now a ClaimsPrincipal instead of an IPrincipal. You'll also find a host of other things that, at first glance, don't seem really relevant to Web API (things like the service resolver and TempData).
These extra bits are peripheral, however; the takeaway is actually that we no longer have two separate sets of classes that represent concerns like controllers or routing, and we can go about getting at the important parts of the request in the same way from both types of controllers – there really is just one now.
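To make that concrete, here's a quick sketch of a single unified controller handling an API-style action in MVC 6 – the route template and sample values are my own, not from the original post:

```csharp
using Microsoft.AspNet.Mvc;

// One controller base class now covers both MVC and Web API scenarios.
public class ValuesController : Controller
{
    // Returns JSON, as an ApiController action would have
    [HttpGet("api/values")]
    public IActionResult Get()
    {
        return Json(new[] { "value1", "value2" });
    }
}
```

The same class could just as easily return a View from another action; there is no second controller hierarchy to choose between.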
There are perfectly good reasons to keep using the old format; perhaps you're at the start of a porting project or have some other reason to stay as-was. No problem, you're just going to have to pull in another package, as it's not part of your project template by default. Simply edit your project.json to include the following package:
Microsoft.AspNet.Mvc.WebApiCompatShim
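In project.json, that dependency entry might look something like this – the version shown is illustrative, so match it to the beta you actually have installed:

```json
"dependencies": {
  "Microsoft.AspNet.Mvc": "6.0.0-beta8",
  "Microsoft.AspNet.Mvc.WebApiCompatShim": "6.0.0-beta8"
}
```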
While this is here and you can use it, it's also likely a good time to evaluate whether you need to use it. There are only a small set of refactorings required in order to use the unified interface, and you can likely make the switch in short order.
Make sure you've got Visual Studio 2015, you have the latest beta installed (at time of writing, beta 8), and give it a try. Happy coding!
In my previous post on dnx commands I showed how you could create your own command as part of your project that could be invoked via the .NET Execution Environment, a.k.a. dnx. While this works fine in simple scenarios, chances are you might need more than one "command" embedded in your tooling. Right away you have concerns for parsing the arguments and options that are passed in, which will quickly lead to a more complex application than you originally intended.
Important Note: I am building the samples in this post on Beta 6, knowing that two changes are coming. The first is that the project path argument to dnx (the period, or "current directory") is being dropped, and the second is the high likelihood that there will continue to be refinements in the namespaces of these libraries. I'll update these when I complete my upgrade to Beta 7.
Consider Entity Framework, which provides tooling to your application by making a number of commands – related to your project, your entities, your database and your context – available from the command line. This is great, because it also means that you can use it in automation tasks.
Here's the command as executed from the command line, followed by a call to get the help on a specific command, migration:
```
dnx . ef
```
So, think about those switches for a second, and the mistakes and string manipulation you'd need to do to pull that all together. What about supporting help and organizing your commands? Being able to accept different options and arguments can grow to be an exhausting exercise in bloat…
…unless, of course, you had an abstraction over those parsing bits to work with. Quite wonderfully, Microsoft has made available the bits you need to take away those pains, and it all starts with the following package (and a bit of secret sauce):
```
Microsoft.Framework.CommandLineUtils.Sources
```
And here's the secret sauce…instead of using something like "1.0.0-*" for your version, use this instead: { "version": "1.0.0-*", "type": "build" }. This notation bakes the abstractions into your application so that you don't have to bundle and distribute multiple DLLs/dependencies when you author and share commands.
The full version of the final, working project in this post is available on GitHub. Feel free to pull down a copy and try this out for yourself!
Let's get started.
As previously covered, creating an ASP.NET 5 command line app is all that is required to get started with creating your commands. We have to add that package as a dependency as well, which should look like this in its entirety in your project.json:
```json
"dependencies": {
  "Microsoft.Framework.CommandLineUtils.Sources": {
    "version": "1.0.0-*",
    "type": "build"
  }
}
```
Next, we need to make sure that our command is available and named as we'd like it to be called, which is also done in the project.json.
You can imagine, of course, that it will be invoked much like Entity Framework, but with "sample-fu" instead of "ef". Feel free to name yours as you wish. With that out of the way, we can start to do the heavy lifting in getting our commands exposed to external tooling.
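Assuming the project itself is named SampleFu – substitute your own project/assembly name – the mapping in project.json might look like:

```json
"commands": {
  "sample-fu": "SampleFu"
}
```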
Here is a bare-bones application that just displays its own help message:
```csharp
public int Main(string[] args)
```
You can see that our Main method is basically creating an instance of the CommandLineApplication class, initializing some properties and finally wiring up a Func to be executed at some point in the future. Main returns the result of app.Execute, which in turn handles the processing of anything passed in and itself returns the appropriate value (0 for success, anything else for non-success). Here it is in action (the completed version), simply by typing dnx . sample-fu at the commandline:
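As a sketch of what that Main might look like – note that the namespace for CommandLineApplication moved around during the betas, so treat the using statement as an approximation:

```csharp
using Microsoft.Framework.Runtime.Common.CommandLine;

public class Program
{
    public int Main(string[] args)
    {
        var app = new CommandLineApplication
        {
            Name = "sample-fu",
            Description = "Sample commands exposed through dnx."
        };
        app.HelpOption("-?|-h|--help");

        // if no recognized command is passed in, just show the help
        app.OnExecute(() =>
        {
            app.ShowHelp();
            return 0;
        });

        return app.Execute(args);
    }
}
```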
A quick note here as well…the OnExecute() is called if no other command turns out to be appropriate to run, as determined by the internal handling in CommandLineApplication. In effect, we're saying, "If the user passes nothing in, show the help." Help is derived from the configuration of commands, so to illustrate that, we need to add one.
Now we get into the fun stuff. Let's write a command that takes a string as an argument and echos it right back out, and add an option to reverse the string.
```csharp
app.Command("display", c =>
```
Command takes a name and an action in which we can add our options and arguments and process the input as required. We write a Func for OnExecute here as well, which will be called if the user types the command "display". The option is implemented as a "NoValue" option type, so the parser is not expecting any value…it's either on the command line or it isn't.
The order of args is important, using the pattern:
COMMAND OPTIONS ARGUMENTS
You'll get some errors if you don't follow that order (and there are some open GitHub issues to help make better parsing and error messages available).
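Here's a sketch of the display command along those lines – the option names, descriptions and messages are my own:

```csharp
app.Command("display", c =>
{
    c.Description = "Echoes the text you pass in.";
    c.HelpOption("-?|-h|--help");

    // NoValue: the option is either present on the command line or it isn't
    var reverse = c.Option("-r|--reverse", "Reverse the text before displaying it",
        CommandOptionType.NoValue);
    var text = c.Argument("[text]", "The text to display");

    c.OnExecute(() =>
    {
        var output = text.Value ?? string.Empty;
        if (reverse.HasValue())
        {
            // requires a using for System.Linq
            output = new string(output.Reverse().ToArray());
        }
        Console.WriteLine(output);
        return 0;
    });
});
```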
Next up, let's implement a command that can do one of two operations based on the option specified, and that takes two values for an argument. Here is a basic implementation of a calc command, supporting addition and multiplication:
```csharp
// the "calc" command
```
Of note are the differences between the options and the arguments versus the first command. The option accepts one of two values, and the argument can accept exactly two values. We have to do a bit of validation on our own here, but these are the basic mechanics of getting commands working.
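A sketch of that calc command might look like the following – the validation and messages are my own:

```csharp
app.Command("calc", c =>
{
    c.Description = "Adds or multiplies two numbers.";
    c.HelpOption("-?|-h|--help");

    // SingleValue: we expect exactly one value, either add or multiply
    var operation = c.Option("-o|--operation <add|multiply>",
        "The operation to perform", CommandOptionType.SingleValue);

    // multipleValues lets the argument collect both numbers
    var values = c.Argument("[values]", "The two numbers to operate on",
        multipleValues: true);

    c.OnExecute(() =>
    {
        // we're responsible for our own validation here
        if (values.Values.Count != 2)
        {
            Console.WriteLine("Please supply exactly two numbers.");
            return 1;
        }

        var left = double.Parse(values.Values[0]);
        var right = double.Parse(values.Values[1]);

        var result = operation.Value() == "multiply"
            ? left * right
            : left + right;

        Console.WriteLine(result);
        return 0;
    });
});
```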
Taking it to the next level, you may wish to encapsulate your code in a class, or leverage the fact that DNX (and thus, your commands) are aware of the project context that you are running in…remember that if you are running in a project directory, you have the ability to read from the project.json.
Be sure to grab Visual Studio 2015 and then start experimenting with commands. You can have a look at some of the other repos/projects that leverage CommandLineUtils, or check out the completed project from this post on GitHub.
Happy Coding!
Waiting for updates is no fun. Let's hack a little.
For me, the primary motivator was the path-length limitation in Windows. Nested node_modules folders buried 19 levels deep are no fun when you hit the max path length. I was trying to share the files on OneDrive and hit 255 characters pretty quickly.
Older versions of npm resolved package dependencies by pulling in a package, creating a node_modules folder inside of it, then putting all the packages in there. Except, of course, if one of those packages contained more dependencies, then we were into the recursive bits of package resolution and very deep paths, ultimately toppling a lot of Windows tooling.
The latest major version of npm – version 3.0.x and above – creates a flat store of packages (very similar to what we know in NuGet) and only pulls one copy of each required version of each required package. Much nicer. So, back to the dicing!
These are pretty straightforward, once you find them. For me, they were located in the following directory:
C:\Program Files (x86)\Microsoft Visual Studio 14.0\Common7\IDE\Extensions\Microsoft\Web Tools\External
For example, here the entire contents of npm.cmd:
@"%~dp0\node\node" "%~dp0\npm\node_modules\npm\bin\npm-cli.js" %*
The %~dp0 is the old command line way of bringing the current drive letter (the d in the command), the path (the letter p here) and the current directory of the executing script (represented by 0) into context. So, basically, "start from where you're running". It's a very hard-to-read version of "." in most other notations. So, the command is running node (which is an exe), passing in the VS version of npm, and pushing into it the rest of the parameters that were passed along. So, when VS issues an "npm install", this command kicks in, runs npm via node and passes "install" as the command to npm.
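If you want to see this for yourself, a throwaway batch file makes the expansion obvious (the path shown is just an example):

```bat
@echo off
REM If this script is saved as C:\Tools\whereami.cmd and run from anywhere,
REM %~dp0 expands to the drive and path of the script itself: C:\Tools\
echo Script lives in: %~dp0
```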
With that knowledge, we can simply update the call that is proxied through to our current version. I installed node (which includes npm), then updated npm to the latest version (thanks to this module) and updated my npm.cmd to the following:
@"C:\Program Files (x86)\nodejs\node.exe" "C:\Program Files (x86)\nodejs\node_modules\npm\bin\npm-cli.js" %*
Of course, here be dragons: I have no idea how stable this will be across updates to VS, or how badly you may cripple features if you mess this up. So, make sure you take a backup of your scripts before modifying them. This approach is super-handy if you have some other requirement – like when the order of params in the tooling changes – but otherwise likely isn't needed. Thankfully, there is a UI way of doing this, too.
Probably a more pleasing solution for your boss.
This one is pretty straightforward as well, and can be done by right-clicking on the "Dependencies" node in Solution Explorer, or by typing "external web tools" in the QuickLaunch bar.
From here, just add a new entry and move it to the top. For me, npm is located in the nodejs install directory, and this is good enough to get VS to see it first.
Note, I did seem to have some issues with caching and/or gremlins here, so you may need to restart Visual Studio for the tooling paths to be picked up.
A couple of things here that I don't care for:
Not too much to do, but if you run into long paths, nested node_modules kicking your butt or other out-of-date tooling, this should get you on your way.
Make sure you grab your copy of VS 2015 and start diving into the next phase of our careers!
Happy coding!
DNX has the ability to scan a project.json and look for commands that you install as packages or that you create yourself. If you've started following the examples of the MVC Framework, or perhaps Entity Framework, you may have seen things like this in your project.json:
```json
"commands": {
  "web": "Microsoft.AspNet.Hosting --config hosting.ini",
  "ef": "EntityFramework.Commands"
}
```
These entries are here so that DNX understands the alias you assign (such as "web" or "ef") and how it maps to an assembly that you've created or taken on as a dependency. The EF reference is quite straightforward above, simply saying that any call to "ef" via DNX will go into the entry point in EntityFramework.Commands. You would invoke that as follows from the directory of your project:
dnx . ef
All parameters that are passed in are available to you as well, so if you were to instead use:
dnx . ef help migration
Then EF would be getting the params "help migration" to parse and process. As can be clearly seen in the "web" alias, you can also specify defaults that get passed into the command when it is executed; thus, the call to web in the above project.json passes in the path and filename of the configuration file to be used when starting IIS Express.
There is no special meaning to "ef" or "web". These are just names that you assign so that the correct mapping can be made. If you changed "ef" to "right-said-fred" you would be able to run migrations from the command line like so:
dnx . right-said-fred migration add too-sexy
Great! So you can create commands, pass in parameters and share these commands through the project.json file. But now what?
I'm so glad you asked!
So far things really aren't too different from any other console app you might create. I mean, you can parse args and do whatever you like in those apps as well.
But here's the winner-winner-chicken-dinner bits: did you notice the "." that is passed into DNX? That is actually the path to the project.json file, and this is important.
Important Note: From beta 7 onward (or already if you're on the nightly builds) DNX will implicitly run with an appbase of the current directory, removing the need for the "." in the command. I'll try to remember to come back to this post to correct that when beta 7 is out in the wild. Read more about the change on the ASP.NET Announcement repo on GitHub.
DNX doesn't actually do a lot on its own, other than providing an execution context under which you can run your commands. But this is a good thing! By passing in the path to a project.json, you feed DNX the command mappings that you want to use, and in turn, DNX provides you with all of the benefits of running inside of the ASP.NET 5.0 bits. Your console app just got access to Dependency Injection as a first-class citizen in your project, with access to information about whichever app contained that project.json file.
Consider the EF command mapping again for migrations for a second: what is going on when you tell it to add a migration? It goes something like this:
This is actually super easy! Here's what you need to do:
From there, you can drop to a command line and run your command. That's it!
Pro Tip You can easily get a command line in your project folder by right-clicking on the project in Solution Explorer and selecting "Open Folder in File Explorer". When File Explorer opens, simply type in "cmd" or "powershell" in the location bar and you'll get your shell.
The secret as to why it works from the console can be found in your project.json: when you create a console app from the project templates, the command alias mapping for your project is automatically added. In the same way, other projects can now consume your command by referencing your new command project.
It is far more likely that you're going to need to do something in the context of the project which uses your command. Minimally, you're likely going to need some configuration drawn in as a default or as a parameter in your command. Let's look at how you would take that hello world app you created in three steps and do something a little more meaningful with it.
First, let's add some dependencies to your project.json.
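The exact package names shifted between betas, but in the Beta 6 timeframe the additions might look something like this (versions are illustrative):

```json
"dependencies": {
  "Microsoft.Framework.ConfigurationModel": "1.0.0-beta6",
  "Microsoft.Framework.ConfigurationModel.Json": "1.0.0-beta6"
}
```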
Now let's add a new JSON file to our project called config.json.
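Based on the command-text key the command reads later in the post, its contents might be as simple as this (the value is just a sample):

```json
{
  "command-text": "Hello from config.json!"
}
```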
Getting there. Next, let's bulk up the constructor of the Program class, add a private member and a Configuration property:
```csharp
private readonly IApplicationEnvironment _appEnv;
```
We also need to add a method to Program that handles loading the config, taking in what it can from the config file, but loading on top of that any arguments passed in from the console:
```csharp
private void BuildConfiguration(string[] args)
```
Finally, we'll add a little more meat to our Main method:
```csharp
public void Main(string[] args)
```
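Pieced together, those members might look like the following sketch. I'm assuming the Beta-era Microsoft.Framework.ConfigurationModel API here; type and method names shifted between betas, so treat this as an approximation:

```csharp
using System;
using Microsoft.Framework.ConfigurationModel;
using Microsoft.Framework.Runtime;

public class Program
{
    private readonly IApplicationEnvironment _appEnv;

    public IConfiguration Configuration { get; set; }

    // DNX injects IApplicationEnvironment, which tells us where the
    // hosting project lives on disk
    public Program(IApplicationEnvironment appEnv)
    {
        _appEnv = appEnv;
    }

    public void Main(string[] args)
    {
        BuildConfiguration(args);

        // read the value from config.json, or from the command line
        // if it was supplied as an argument
        Console.WriteLine(Configuration.Get("command-text"));
    }

    private void BuildConfiguration(string[] args)
    {
        // start with config.json from the project directory, then
        // layer any command line arguments over top
        Configuration = new Configuration(_appEnv.ApplicationBasePath)
            .AddJsonFile("config.json")
            .AddCommandLine(args);
    }
}
```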
The above sample can now be executed as a command. I've got the following command mapping in my project.json file (yes, the same project you use to create the command can also expose the command):
```json
"commands": {
  "DnxCommands": "DnxCommands"
}
```
This means that from the console in the dir of my project I can just type in the following:
dnx . DnxCommands
I can also now reference this project from any other project (or push my bits to NuGet and share them to any project) and use the command from there. Other projects can add the "command-text" key to their config.json files and specify their own value, or they can feed in the parameter as an arg to the command:
dnx . DnxCommands command-text="'Pop!' goes the weasel"
In my sample solution on GitHub, I also have a second project which renames the alias and has its own config file that is read in by the command.
All of this opens the doors for some pretty powerful scenarios. Think about what you can do in your build pipeline without having to write, expose and consume custom msbuild targets. You can create commands that are used to build up local databases for new environments or automate the seeding of tables for integration tests. You could add scaffolders and image optimizers and deployment tools and send text messages to your Grandma.
What you should do next is to look at the kinds of things you do when you're working on your solution – not in it – and think about how you might be able to simplify those tasks. If there are complex parts of your build scripts that you encounter from one project to the next, perhaps you can abstract some of those bits away into a command and then shift to using simplified build scripts that invoke your commands via DNX.
To get some inspiration, check out my sample project on GitHub and the DNX commands for other libraries (such as EF or xUnit), and try writing a few of your own.
Happy coding!
In this series we're working through the conversion of Clear Measure's Bootcamp MVC 5-based application and migrating it to MVC 6. You can track the entire series of posts from the intro page.
As of right now, there are no tools in place that would support an in-place migration from the old project system to the new one. Because we wanted to preserve project naming and namespaces, I copied everything out into a new directory – the solution and the projects – and rebuilt the solution from scratch.
I would anticipate a project conversion process at some point, even one that was able to provide the basics (like moving package dependencies to project.json) and guidance on the remaining pieces (like why part of the project wasn't able to convert, and how you might approach it). This post will walk through those steps of the conversion, but it will be done manually.
I wanted to maintain all the same names of the assemblies, namespaces and outputs, and the only way to currently do this is to clear out the src folder and start over. Don't worry, our code is still good, we just have to wrangle it into new containers.
One of the first changes that I made was a reorganization of the tooling that is used to support the build. Some of the build script relied on packages existing on disk (NUnit's console runner, AliaSql) but this is an order-of-operations problem. When you grab the solution from the repo, you're not actually able to build it until you restore the packages. Further, these assets are **solution-level** concerns, not project-level concerns, so which project do you install them into? NuGet does not have the concept of solution-level packages that apply to the solution itself, so while it works perfectly well for projects, NuGet is inherently not ideal for incorporating solution dependencies.
To remedy this, I have moved these types of assets into a tools folder and updated the build scripts accordingly. This approach is likely a matter of opinion more than anything, but the reality is that we want the directory structure to reflect which concerns are in the solution versus which concerns work on the solution.
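For reference, the resulting layout looks something like this (the root folder name is illustrative; the tools subfolders are the packages named above):

```
Bootcamp/
  src/             <- projects that are *in* the solution
  tools/           <- assets that work *on* the solution
    NUnit.Runners/
    AliaSql/
  build.ps1
```

The split makes the in/on distinction visible at a glance, and nothing under tools/ depends on a package restore having already happened.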
I would like to note that there are still improvements to be made here – for instance, I know many teams actually have build scripts that are capable of not only restoring packages, but have the ability to go and grab NuGet itself – so expect some more changes as we continue to move through this migration. Automation is awesome.
Our Core project was a breeze to port because it's at the heart of the system in an Onion Architecture and takes on very few dependencies. I started the conversion by going through the motions of creating a new Core project, using the DLL project from the "Web Templates" part of the dialog. The first project also creates the solution, and the convention for the way the solutions are laid out on disk has changed.
So…the build broke.
Thankfully, this was easy to resolve with just a couple of quick fixes, but you'll likely have to take similar steps on your project:
We can't run unit tests quite yet (we need to convert those projects as well), but we can make sure that the project is building correctly.
We're not modifying code at this point, so provided we can get the solution building we can have a good level of confidence – but not a guarantee – that our code is still in good shape. We want those tests back online before we merge this branch back to develop.
With the build running, I was able to jump back into Visual Studio and start adding back the code. In my case, nearly everything worked just by copying in the files from my backup location and pasting them into the project. It's a bit tedious, but it's by no means difficult or complicated.
The only package that I had to add at this point was a legacy dependency from NHibernate, namely the Iesi.Collections package. This is done by opening up the project.json for Core and updating the "dependencies" part of the project file. As soon as you save the file out, Visual Studio goes off and runs a background install of the packages that it finds you've put in there, along with any dependencies of those packages.
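The edit itself is small; something like this in Core's project.json (the version number is illustrative; use whatever the in-line search suggests):

```json
{
  "dependencies": {
    "Iesi.Collections": "4.0.0"
  }
}
```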
Finding the right package and most recent version is quite easy in the project.json world. As you start typing a package name, in-line search kicks in and starts suggesting matches. Available versions of the packages are displayed, and VS indicates if those packages are available locally in a cache or found on a remote NuGet repository, indicated by the icon you see. All package sources are queried for package information, so you can get packages and their version information from private repositories as well.
Once the packages were restored the solution built fine in Visual Studio 2015 and I was able to return to my console to run the build script.
Other than the fact that Data Access has a few more dependencies, it was really more of the same to get the Data Access project online and building through our script. I added another DLL to the solution, added the source files and installed the dependencies via project.json.
When I compiled the project at this point, some of the changes in the .NET Framework and the .NET team's strategy started to surface. For instance, you might typically find a reference to System.Data from your GAC in a project; however, in the new cross-platform project system (under the assumption that you may not have a GAC at all), the .NET folks have taken the mantra of "NuGet all the things." To get access to the System.Data namespace and the IDataReader interface used in the DataAccess project, I had to add a reference to System.Data version 4.0.0 from NuGet (via project.json).
Other projects will have similar hits on moved packages. It is likely safe to use the GAC in situations where you know what the build environment looks like and are sure that build agents and other developers will have access to the required dependencies. But it is a more stable approach, and gives us a better chance of successfully compiling our application, to instead reference those binaries from a package repository.
The other notable piece was in how we reference other projects in our own solution; today they look a lot like referencing other packages. Whether you go through the Add Reference dialog or if you prefer to edit the project file by hand, you're going to also need to introduce a dependency on Core, which is done simply by adding the following line to the dependencies:
"Core": "1.0.0-*"
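In context, the project reference sits right alongside package dependencies in the DataAccess project.json. A sketch, with versions illustrative and the remaining packages omitted:

```json
{
  "dependencies": {
    "Core": "1.0.0-*",
    "System.Data": "4.0.0"
  }
}
```

Note there's nothing in the syntax to distinguish the project reference from a package; the tooling resolves "Core" to the sibling project first.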
Excellent! Almost ready to build!
Just a couple of other notes that I took and a couple of tips I've learned as I created these projects:
You're also in charge of wiring up any dependencies your modules need where they aren't satisfied with a single package for all output types. For instance, when I tried a small gamut of output targets I ran into this problem:
The new .NET Platform (the base for Windows, web, mobile and x-plat) was not supported given the dependencies I have listed in my project; namely, the Iesi.Collections package is the problem here. Ideally, you want to be able to support as many runtimes as possible, so you want to target the lowest common denominator. That is likely going to be "dotnet" going forward (which could in turn be used to build up applications for web, Windows or phone), but more realistically things like "net46", which is just the 4.6 version of .NET, or "dnx46", which is the new bits (think MVC Framework) running on top of .NET 4.6. In cases where you don't have a package that matches the target you need, you have a couple of choices, listed in order of easiest to most difficult:
Sadly, that last one is likely the way we're going to need to go, especially if we want to target x-plat development. This is not an easy task, but getting to this point in the migration is, and it only takes a couple of hours. If you haven't done this sanity check in your project to identify packages that may cause issues during migrations, I would suggest that your assessment is not complete.
For the time being, we are concerned about supporting .NET 4.6 and DNX running on 4.6 for our project, so that is where I have left things. This is a reasonable compromise allowing continued development in web and Windows.
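That compromise is expressed in the "frameworks" section of project.json; ours looks roughly like this:

```json
{
  "frameworks": {
    "net46": {},
    "dnx46": {}
  }
}
```

Each moniker gets its own entry (and can carry target-specific dependencies inside the braces), and the compiler builds the project once per listed framework.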
The main tenets of our application are now alive and kicking in our Visual Studio 2015 solution with the new project system in place. In the next post in this series we'll have a look at getting the tests online and updating the build script to execute our tests.
If you'd like to follow along with the progression as we get this fully converted you can check out the branch on GitHub.
Happy coding!
The current runtime target framework is not compatible with 'YourWebApplication'.
Current runtime Target Framework: 'DNX,Version=v4.5.1 (dnx451)'
Type: CLR
Architecture: x64
Version: 1.0.0-beta6-12256
If you're instead running with a debugger attached, you won't hit a breakpoint; you'll only get a 500. It doesn't matter which framework runtimes you have installed on your machine, and it doesn't matter what your global.json says or what dependencies or frameworks you take or specify in project.json.
This is because the default runtime for launching IIS Express from Visual Studio is indeed dnx451. You can get around this in one of two ways:
A huge thanks goes out to Andrew Nurse for providing a resolution on this matter and responding to my issue on GitHub.
dnvm upgrade
After that, a "dnvm list" command will give you the following:
You can also upgrade dnvm itself with the following command:
dnvm update-self
Which will get you up to the beta 7 version (build 10400) of DNVM.
You'll also need the updated VS 2015 tooling, which is available here (along with the DNVM update tools if you want them separately): Microsoft Visual Studio 2015 Beta 6 Tooling Download (no longer active).
As part of my progression in porting an MVC 5 app to MVC 6, one scenario that I needed support for was to have libraries targeting .NET 4.6 reference-able from a DNX project. MVC 6, up to this point, only supported 4.5.1, which meant that you'd have to roll back your targeting if you were on 4.5.2 or 4.6.
Of course, multi-targeting is a better option, but requires the time and capacity to either slave over the old code base and NuGet packaging nuances, or port to the new project format where you have much greater in-project support for targeting multiple frameworks.
As previously detailed by Damien Edwards, there are bug fixes, features and improvements in the following areas: Runtime, MVC, Razor, Identity. In addition to supporting .NET 4.6 in DNX, they have also added localization and have been working on other things like distributed caching, which you can read about here.
This is still a beta, and there are many moving parts.
Be sure to check out the community standup today and head over to GitHub for the announcements on breaking changes.
Happy coding!
In this series we're working through the conversion of an MVC 5-based application and migrating it to MVC 6. You can track the entire series of posts from the intro page.
While the explicit modification of your projects may not be required to gain some of the 4.6 benefits, there may be other organizational factors that lead you down that path. We'll work through the mechanics of the upgrade to 4.6 in this post.
UPDATE: July 28, 2015 There is a known issue with certain 64bit applications running on .NET 4.6, under certain circumstances, with certain parameter types and sizes. You can read more about the bug finding here and the issue is being tracked on GitHub, followed by Microsoft's response and recommendation.
For this reason I am not recommending an upgrade to 4.6 unless you understand the implications and how to properly vet the scenarios described in your environment.
Every project we create references a specific version of the .NET Framework. This has been true throughout the history of .NET, and though the way we will do it in the future will change with the new project system, the premise remains the same.
For now, you can simply open the properties tab for your project and change the target Framework.
You will be prompted with a notice that some changes may be required.
Note that in my case, I had 7 projects with varying types of references and dependencies, and no modifications were required to the code. Your mileage may vary, of course, but this is a simple change and one that you can test quickly. With proper source control in place, this is a zero-risk test that should take only a moment or two.
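Under the hood, the properties page is flipping a single element in each .csproj; you can make the same change by hand if you prefer. Roughly, it looks like this (the rest of the PropertyGroup is unchanged):

```xml
<PropertyGroup>
  <!-- was v4.5.1 before the upgrade -->
  <TargetFrameworkVersion>v4.6</TargetFrameworkVersion>
</PropertyGroup>
```

Editing the files directly can be handy when you have many projects and want to make the change in one scripted pass.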
Now, if you were to try to build the Bootcamp project when you're only partway through the upgrade, you'd see something similar to the following:
With a message that reads:
The primary reference "x" could not be resolved because it was built against the ".NETFramework,Version=v4.6" framework. This is a higher version than the currently targeted framework ".NETFramework,Version=v4.5.1".
You may run into this in other scenarios as well, especially if you have references to packages or libraries that get out of sync in your upgrade process. A project that takes on dependencies must be at (or higher than) the target framework of the compiled dependencies. To remedy this, we simply need to complete the upgrade process on the rest of the projects.
This was pretty painless.
Moving from 4.5.x to 4.6 is not a required step in our conversion to an MVC 6 project; in fact, MVC 6 runs on a different framework altogether. That said, any environment where you have 4.6 installed will "pull up" other assemblies, because it is a drop-in replacement for previous versions.
Perhaps your primary motivator to move to 4.6 is the perf bump in the runtime, or it might be the new language features (which only require a framework install, not a version bump in your target). But it also ensures we're compatible with other projects in our organization, particularly when we consider the default target for new projects in VS 2015 is against 4.6. If we want to leverage these from other projects in our organization, we want to make sure that we're not the lowest common denominator in the mix.
However, there are a couple of other points that we should note, namely that our compiler isn't tied to the installed framework or the runtime; it's just used to target a specific set of instructions that the runtime can digest. So, if our MVC 6 app will be running on DNX46, or we have other .NET 4.6 projects, we're all set (though we'd have to use DNU wrap to consume our library at this point in DNX).
But what if we have different projects across our organization, or we have external teams using our libraries? The answer lies in multi-targeting, which is what we'll address in the next post in this series.
Until then, happy coding!
In this series we're working through the conversion of an MVC 5-based application and migrating it to MVC 6. You can track the entire series of posts from the intro page.
For the purpose of this exercise, we're using TeamCity to run our builds based on a VCS check-in. We'll get TeamCity prepped to run our build and then update our repository so that we show our build status indicator on the readme home page.
Here are the basics of what was required to get the builds back online:
I engaged my teammate James Allen here to help with some best practices, namely getting the server backed up. You can either back up the TeamCity data from the web interface or one of the other recommended approaches, or you could snapshot your server for a reset should one be required. During this process, it's a good idea to spin down your build agents so that you're not wrecking anyone's builds.
Next, we needed to move to version 9.1 of TeamCity, so we ran the upgrade process via the web site. This is a painless task and takes only a fraction of the time it took to back up the data. Barring any troubles (we saw none), your build server should be back online in no time; our build agents were notified (and complied!) and updated themselves as well.
Next, I downloaded and installed the .NET 4.6 installer and the VS 2015 tooling, which can be found on the VS 2015 download page. You'll need to explore through the available downloads on the page, as you can see on this screenshot, to grab the relevant files.
These installs will need to be run on every build agent.
One thing to note was that my original attempt to get the build running failed because of missing build targets at an expected location. I ended up having to copy the files from C:\Program Files (x86)\MSBuild\Microsoft\VisualStudio\v14.0 on my local machine, where Visual Studio 2015 is installed, to the same path on the build server.
I don't believe this is the best approach to getting the build targets on the build server. I will update this post if I find a better solution.
You'll know the tools have been installed correctly if you return to the build configuration settings and add a new build step for msbuild (you don't have to save it). You'll see that you'll have the new options in place:
The build server should be good to go now! For us, we're not using an MSBuild build runner; our application is built with a PowerShell script via a batch file. This allows our build to be executed locally with only a small parameter change, and the CI process is entirely encapsulated in code (and under source control).
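The batch file in this kind of setup can stay tiny, acting only as a shim over the PowerShell script. A common shape for the pattern (file names are illustrative, not the exact contents of our repo):

```bat
@echo off
rem Hand everything off to the PowerShell build script,
rem forwarding any arguments passed to this batch file.
powershell -NoProfile -ExecutionPolicy Bypass -File "%~dp0build.ps1" %*
```

Because the batch file only forwards arguments, the build logic lives in one place and runs identically on a developer machine and a build agent.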
Provided your project is pointing at the repository, you'll have a good shot at running the build at this point. For our project, everything worked as expected.
Now it's time to beef up our repo, at least a little. What I'm talking about is wearing our CI on our sleeve, letting everyone on the team (or other watchers of the repository) know that our builds are healthy or, perhaps, needing some love; let's display the build status indicator on our readme, like this:
First, drill into the build configuration and locate the advanced options under "General Settings". You need to enable the status widget.
Also, from this screen, take note of your Build Configuration ID. This is important because you'll need to include it in the server request to generate the badge.
Finally, include the following markdown, which is essentially a formatted link with an image inside of it:
Current Build Status [![](http://YOUR_SERVER/app/rest/builds/buildType:(id:YOUR_BUILD_CONFIGURATION_ID)/statusIcon)](http://YOUR_SERVER/viewType.html?buildTypeId=YOUR_BUILD_CONFIGURATION_ID&guest=1)
Be sure to replace the obvious placeholder tokens with your own information.
With our build server updated and our builds back online, it's time to start shifting our targets. In the next post, we're going to update our projects and recover from any errors/challenges we may discover along the way.
Happy coding!
Moving to MVC 6 is going to be a big shift for a lot of development teams, but that doesn't mean it needs to be scary, complicated or introduce instability into your project.
It does, however, mean that you're going to need an attitude of learning, that you'll pick up some new tooling, you'll have to brush up on your JavaScript and work with some new concepts.
I'm super excited to now be part of the excellent crew at Clear Measure, where this type of attitude seems to be fostered, encouraged and embodied by other members of the team and, more importantly, the management.
We're now undertaking the process of converting from MVC 5 => MVC 6 with our Bootcamp workshop project, and I have the privilege of blogging my experience with it as I go. We're going to keep the project building and operable throughout, such that at any point it can be shipped to production or branched for feature development. We'll be using GitFlow, feature branches, continuous integration and continuous deployment. Our check-ins will be code that builds cleanly with passing tests.
And, for those of you who come join us in our MVC Masters Bootcamp sessions, you'll also get to work on this code base with all the tools, exposure to pair programming, a dedicated product owner and 3 days of intense coding.
Shameless plug: If you want to level up your team of developers, please reach out to Gina Hollis at Clear Measure to plan an on- or off-site event. We promise to melt your minds.
Well, to start things off, our initial commit is the MVC 5 project Jeffrey Palermo has been using in the Masters Bootcamp for some time.
The application is hosted on GitHub and you can see the issues that we're identifying and working through. We're doing the whole thing as open source in hopes that other teams can learn from what we learn in the process.
And, as I knock items off the issue list I'll be posting about them here, covering the challenges, pitfalls and wins we encounter along the way. You can bookmark this post for updates in the project. Feel free to ask questions on the issues in the repository, or ping me on Twitter (@CanadianJames).
Stay tuned!