Today my problem was that running a docker build wasn't copying the files I was expecting it to. In particular I had a themes directory which was not ending up in the image, and in fact the build was failing with something like
ERROR: failed to solve: failed to compute cache key: failed to calculate checksum of ref b1f3faa4-fdeb-41ed-b016-fac3862d370a::pjh3jwhj2huqmcgigjh9udlh2: "/themes": not found
I was really confused because themes absolutely did exist on disk. It was as if it wasn't being added to the build context. In fact it wasn't being added and, as it turns out, this was because my .dockerignore file contained
**
This ignores everything in the local directory. That seemed a bit extreme so I changed it to
**
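For reference, the general pattern here is to keep the broad ignore but allow-list the paths the image build actually needs. The entries below are purely illustrative, not my actual file:

```
# Ignore everything by default...
**
# ...then re-include only the paths the Dockerfile needs (example paths)
!themes
!src
!package.json
```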
With this in place the build worked as expected.
We run our builds inside a multi-stage docker build, so we actually need to have a build container communicate with the database container during the build phase. This is easy enough in the run phase, but in the build phase there is just a flag you can pass to the build called --network, which takes an argument, and the arguments it can take don't appear to be documented anywhere. After significant trial and error I found that the argument we want is host. This will build the container using the host networking. As we surfaced the ports for postgres in our workflow file like so
postgres:
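A service definition along these lines, with the port published to the runner host, is what makes that possible (the image tag and credentials here are placeholders, not our real values):

```yaml
services:
  postgres:
    image: postgres:14
    env:
      POSTGRES_PASSWORD: postgres
    ports:
      # Publishing the port is what makes the database reachable from the
      # host network during the docker build
      - 5432:5432
```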
We are able to access the database from the build context at 127.0.0.1. So we can pass in a variable to our container build
docker build --network=host . --tag ${{ env.DOCKER_REGISTRY_NAME }}/${{ env.DOCKER_IMAGE_NAME }}:${{ github.run_number }} --build-arg 'DATABASE_CONNECTION_STRING=${{ env.DATABASE_CONNECTION_STRING }}'
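On the Dockerfile side, the build argument is consumed with an ARG and the tests run as a build step. This is a hedged sketch rather than our actual Dockerfile; the base image and test command are assumptions:

```dockerfile
FROM mcr.microsoft.com/dotnet/sdk:7.0 AS build
# Receives the value passed via --build-arg
ARG DATABASE_CONNECTION_STRING
ENV DATABASE_CONNECTION_STRING=${DATABASE_CONNECTION_STRING}
WORKDIR /src
COPY . .
# With --network=host, this step can reach postgres on 127.0.0.1:5432
RUN dotnet test
```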
With all this in place the tests run nicely in the container during the build. Phew.
Next, in Keycloak, set the credentials up in the realm settings under Email. You'll want the host to be smtp.mailgun.org and the port to be 465. Enable all the encryption options and use the full email address as the username.
Check both the SSL boxes and give it port 465.
We have an API that we call which is super slow and super fragile. We were recently told by the team that maintains it that they'd made improvements and increased our rate limit from something like 200 requests per minute to 300 and could we test it. So sure, I guess we can do your job for you. For this we're going to use the load testing tool artillery.
Artillery is a node based tool so you'll need to have node installed. You can install artillery with npm install -g artillery.
You then write a configuration file to tell artillery what to do. Here's the one I used for this test (with the names of the guilty redacted):
config:
As you can see this is graphql and it is a private API so we need to pass in a bearer token. The body I just stole from our postman collection so it isn't well formatted.
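The overall shape of that config is roughly the following; the target, token, and query here are placeholders rather than the real (redacted) values:

```yaml
config:
  target: "https://api.example.com"
  phases:
    - duration: 1
      arrivalRate: 1
  defaults:
    headers:
      Authorization: "Bearer <token>"
scenarios:
  - flow:
      - post:
          url: "/graphql"
          json:
            query: "query { __typename }"
```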
Running this is as simple as running artillery run <filename>.
At the top you can see arrival rates and duration. This is saying that we want to ramp up to 1 request per second over the course of 1 second. So basically this is just proving that our request works. The first time I ran this I only got back 400 errors. To get the body of the response, so I could see why I was getting a 400, I set
export DEBUG=http,http:capture,http:response
Once I had the simple case working I was able to increase the rates to higher levels. To do this I ended up adjusting the phases to look like
phases:
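A phases block matching that description looks something like this (the exact duration is arbitrary, as explained below):

```yaml
phases:
  - duration: 300      # deliberately long; maxVusers caps concurrency anyway
    arrivalRate: 30    # provision 30 new virtual users per second
    maxVusers: 150     # never exceed 150 concurrent virtual users
```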
This provisions 30 users a second up to a maximum of 150 users - so that takes about 5 seconds to saturate. I left the duration higher because I'm lazy and artillery is smart enough to not provision more. Then to ensure that I was pretty constantly hitting the API with the maximum number of users I added a loop to the scenario like so:
scenarios:
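The scenario with the loop looks roughly like this (the query and the count value are illustrative):

```yaml
scenarios:
  - flow:
      - loop:
          - post:
              url: "/graphql"
              json:
                query: "query { __typename }"
        count: 100   # each virtual user repeats the request this many times
```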
Pay attention to that count at the bottom.
I was able to use this to fire thousands of requests at the service and prove out that our rate limit was indeed higher than it was before and we could raise our concurrency.
One of my clients has a fair bit of data stored in a file share hosted in Azure Storage. They do nightly processing on this data using a legacy IaaS system. We were concerned that we might saturate the storage account with our requests. Fortunately, there are metrics we can use to understand what's going on inside the storage account. Nobody wants to monitor these all the time, so we set up some alerting rules for the storage account.
Alert rules can easily be created by going to the file share in the storage account and clicking on Metrics. Then in the top bar click on New Alert Rule.
The typical rules we applied were
However there was one additional metric we wanted to catch: when we have hit throttling. This was a bit trickier to set up because we've never actually hit this threshold. This means that the dimensions to filter on don't actually show up in the portal. They must be entered by hand.
These are the normal values we see.
By clicking on Add custom value we were able to add 3 new response codes.
With these in place we can be confident that, should these responses ever occur, we'll be alerted to them.
<script type="importmap">
The import map is a way to map a module name to a URL. This is necessary because the Vuetify ESM module imports from Vue. Don't forget you'll also need to add in the CSS for Vuetify.
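Putting that together, a minimal page setup looks something like the following; the CDN URLs and versions are assumptions, so adjust them to whatever you're actually pinning:

```html
<link rel="stylesheet" href="https://unpkg.com/vuetify@3/dist/vuetify.min.css">
<script type="importmap">
  {
    "imports": {
      "vue": "https://unpkg.com/vue@3/dist/vue.esm-browser.prod.js",
      "vuetify": "https://unpkg.com/vuetify@3/dist/vuetify.esm.js"
    }
  }
</script>
```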
Error: creating Service Plan: (Serverfarm Name "***devplan" / Resource Group "***_dev"): web.AppServicePlansClient#CreateOrUpdate: Failure sending request: StatusCode=401 -- Original Error: Code="Unauthorized" Message="This region has quota of 0 instances for your subscription. Try selecting different region or SKU."
This was a pretty simple deployment to an S1 app service plan. I've run into this before and it's typically easy to request a bump in quota in the subscription. My problem today was that it isn't obvious what CPU quota I need to request. I Googled around and found some suggestion that S1 ran on A series VMs but that wasn't something I had any limits on.
Creating in the UI gave the same error.
I asked around and eventually somebody in the know was able to look into the consumption in that region. The cloud was full! Well not full but creation of some resources was restricted. Fortunately this was just a dev deployment so I was able to move to a different region and get things working. It would have been pretty miserable if this was a production deployment or if I was adding onto an existing deployment.
The managed identity on the app service had only GET access to the key vault. I added LIST access and the reference started working. I'm not sure why this is, but I'm guessing the reference does a LIST to find the secret and then a GET to retrieve its value.
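If you're using access policies, granting both permissions to the identity is a one-liner with the Azure CLI; the vault name and principal ID below are placeholders:

```bash
az keyvault set-policy \
  --name my-vault \
  --object-id <managed-identity-principal-id> \
  --secret-permissions get list
```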
The content looked something like
{
This was going to an ExpressJS application which was parsing the body using body-parser. These days we can just use express.json() and avoid taking on that additional dependency. The JSON parsing in both these is too strict to allow for comments. Fortunately, we can use middleware to resolve the issue. There is a swell package called strip-json-comments which does the surprisingly difficult task of stripping comments. We can use that.
The typical JSON parsing middleware looks like
app.use(express.json())
Instead we can do
import stripJsonComments from 'strip-json-comments';
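A sketch of the full middleware, assuming we let express.text() do the heavy lifting and then parse the cleaned-up string ourselves:

```js
import express from 'express';
import stripJsonComments from 'strip-json-comments';

const app = express();

// express.text() still handles decompression and character encoding for us;
// it just leaves the body as a string instead of parsing it.
app.use(express.text({ type: 'application/json' }));

// Strip the comments, then parse the JSON ourselves.
app.use((req, res, next) => {
  if (typeof req.body === 'string' && req.body.length > 0) {
    try {
      req.body = JSON.parse(stripJsonComments(req.body));
    } catch (err) {
      return next(err);
    }
  }
  next();
});
```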
This still allows us to take advantage of the compression and character encoding facilities in the original parser while also intercepting and cleaning up the JSON payload.
https://github.com/zdavatz/spreadsheet/
As the name suggests this library deals with Excel spreadsheets. It is able to both read and write them by using Spreadsheet::Excel Library and the ParseExcel Library. However it only supports the older XLS file format. While this is still widely used it is not the default format for Excel 2007 and later. I try to stay clear of the format as much as possible. There have not been any releases of this library in about 18 months but there haven't been any releases of the XLS file format for decades so it doesn't seem like a big deal.
The library can be installed using
gem install spreadsheet
Then you can use it like so
require 'spreadsheet'
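A fuller sketch of reading and writing with it (file names and cell values are made up):

```ruby
require 'spreadsheet'

# Read an existing XLS workbook
book = Spreadsheet.open('input.xls')
sheet = book.worksheet(0)
sheet.each { |row| puts row[0] }

# Write a new XLS workbook
out = Spreadsheet::Workbook.new
report = out.create_worksheet(name: 'Report')
report.row(0).concat ['Name', 'Total']
report.row(1).concat ['Widgets', 42]
out.write('output.xls')
```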
There are some limitations around editing files such as cell formats not updating but for most things it should be fine.
https://github.com/weshatheleopard/rubyXL
This library works on the more modern XLSX file formats. It is able to read and write files with modifications. However there are some limitations, such as being unable to insert images.
require 'rubyXL'
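A small sketch of the read-modify-write flow (paths and values are placeholders):

```ruby
require 'rubyXL'

workbook = RubyXL::Parser.parse('input.xlsx')
sheet = workbook[0]

puts sheet[0][0].value            # read the first cell
sheet.add_cell(0, 1, 'Updated')   # add a cell alongside it
workbook.write('output.xlsx')     # save the modified workbook
```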
https://github.com/caxlsx/caxlsx
This library is the community supported version of AXLSX. It is able to generate XLSX files but not read or modify them. There is rich support for charts, images and other more advanced Excel features.
Install using
gem install caxlsx
And then a simple example looks like
require 'axlsx'
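Something along these lines generates a basic workbook (the sheet name and rows are made up):

```ruby
require 'axlsx'

package = Axlsx::Package.new
package.workbook.add_worksheet(name: 'Report') do |sheet|
  sheet.add_row ['Name', 'Total']
  sheet.add_row ['Widgets', 42]
end
package.serialize('report.xlsx')
```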
Of all the libraries mentioned here the documentation for this one is the best. It is also the most actively maintained. The examples directory https://github.com/caxlsx/caxlsx/tree/master/examples gives a plethora of examples of how to use the library.
https://github.com/Paxa/fast_excel
This library focuses on being the fastest Excel library for Ruby. It is actually written in C to speed it up, so it comes with all the caveats about running native code. Similar to caxlsx, it is only able to write files, not read or modify them.
require 'fast_excel'
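A sketch of the write-only, append-heavy style it's built for (the file name and rows are illustrative):

```ruby
require 'fast_excel'

workbook = FastExcel.open('report.xlsx', constant_memory: true)
sheet = workbook.add_worksheet('Data')

sheet.append_row(['Name', 'Total'])
1_000.times { |i| sheet.append_row(["Item #{i}", i]) }

workbook.close
```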
As you can see here the library really excels at adding consistently shaped rows. You're unlikely to get a complex spreadsheet with headers and footers built using this tooling.
I know this, because I am a developer. My heart constantly wants to code up the solution to...well, anything. What I have learned over years of developing and architecting enterprise software solutions, and as the solo developer of my website project, is how this love of code can actually slow down and sometimes halt the development of a project or feature: we get so caught up in the tech that we don't take the time to reflect on and solve the actual problem.
How do you fix this habit? Before you start coding up a solution, make sure you understand the problem you are trying to solve. Seems simple enough yet developers (like me) have the habit of jumping right into the code before they even really know what they are trying to solve.
Through my years of experience solving problems with technology, I have a couple of steps I go through to help inform my solution design for a variety of problems. I apply these steps when I am trying to figure out how to integrate two enterprise systems and when I'm trying to figure out the best way to implement a new feature on my website.
The steps are the same, although the effort required will vary.
And I don't mean coding problem.
I mean business problem or real life problem or whatever you want to call it, but it's not a code problem. Never have I ever been asked by a client to "implement a binary tree" or "write a sorting algorithm for sorting an array". That's not to say those aren't problems, but they aren't business problems. These are technical problems, and they are fun to work on...sometimes. 😅
Business problems are the reason clients engage with software developers. The client wants software to fix their problem, and they seem to think that software is the solution. Before you code anything, take a few moments to answer the following about the problem you're preparing to solve with code.
I am not suggesting you second guess the client, but rather try and empathize with your client and really understand why their problem is what it is. This is where you can start to understand whether or not software development fits into the solution to the problem. I have come across this many times, where after revisiting the problem with the client, we found the best solution was a change in their business process rather than adding tools to it.
Let's assume, for the sake of this post, that you see where software can help play a role in solving the problem.
Sounds silly, I know, but doing nothing is always an option and people do it all the time. But why would someone choose to do nothing? Because the risk doesn't outweigh the reward.
By answering this question with your client, you get to understand the risks associated with the problem. This will inform your solution design, as if the risks are high you may want to invest more time and effort into parts of the design than others. It will also give you context on the priority of your solution in the mind of your client.
The last thing I do is try to pull any key performance indicators (KPIs) or metrics that will help define success for the solution. I find that most of the time, this is about turning qualitative terms and statements into quantitative ones.
For example, "We need to process these forms faster" should change to something like "We should be able to process at least 100 forms an hour". See the difference?
You are adding clear, measurable success criteria for your solution. The terms "these forms" and "faster" are too vague to build on. Maybe fast enough to you is 1 form a day, or maybe 1 form a second. Your client is the expert in their business, so you should ask them so you can understand the goals and potential constraints your solution needs to address.
I know-- your hands are itchy from not coding, but assuming you took the time to understand the problem, the next step is to confirm your new-found knowledge. The easiest way to do that is by explaining it to someone else, like your client. If your client agrees you nailed it, you nailed it, and now you're ready to start designing (not coding) your solution.
One thing that is not uncommon is that your definition of the problem may sound different than the problem your client originally described. This is normal, as you are the technology problem solving expert.
The fact that your definition of the problem differs from your client's isn't necessarily a bad thing either. Many times, I have found that through my problem definition process, the client gains a better understanding of the root cause of their problem, and their mind will shift from their presumed solution to something else.
Let me walk you through the process on something not so enterprise-level, but small scale, like a solo-developed website project.
I hit a problem planning the next release of my website where I realized that it was going to be very complicated and cumbersome to add non-blog content to my website, such as the presentation materials from Prairie Dev Con here and here. At this point, here is what we know:
- Client = Me
- Problem = Adding non-blog content to the website is difficult.
Like a good developer, I immediately started down the path of designing a custom application that would automate all the things that make adding content difficult. It was very fun, but after a couple of hours, I caught myself and took a step back and applied my problem definition process.
Let's go through it, and we start by understanding the problem.
It is a problem because I want to continue to add different types of content to the website. The whole purpose of the site is to create a central hub for all my work, almost like a portfolio, but more like a "hub" for all things I create and share. The website is built to handle blog posts or document-style content, but when you add more complicated content that is made up of more than just an article or webpage, you need to add links to other data (like files), which is a manual process and is error prone.
In short, it is a problem because maintaining non-article data will be difficult.
You can see in the talks page I have already added some non-article data, which is all currently managed through a JSON file that the website generator picks up and creates pages for. I also needed to upload the files to a public storage host (Azure Blob Storage) and copy and paste the links into the JSON, which I messed up a few times.
This was my first attempt at "doing nothing" for this problem, and it was difficult. The plan is to add the back catalogue of presentations I have done over the past 10 years (or more probably), which will make that JSON file exceptionally difficult to manage.
When you frame it in the context of risk: doing nothing will very likely result in a massive increase in the number of errors in the data.
If we look at the original problem statement "Adding non-blog content to the website is difficult", we need to translate the term "difficult" into a quantitative one. This would give us a measure to determine how much easier it is to add new content.
Pulling from the answer to question 2, it's really managing the JSON file that makes things difficult. And so I asked myself (the client), what makes managing a JSON file so difficult? There are plenty of tools for that already. And this is where the real problem revealed itself.
The relationships between the data leads to errors. Maintaining these relationships manually is exceptionally difficult, and we only have two relationships so far: presentation to event, and presentation to the presentation materials.
Now that we know the real problem, we can redefine the problem:
Problem = The process of manually managing the relationships between content types and data is exceptionally error prone and not scalable.
This updated problem is one that will inform the solution design moving forward. If you want to get specific about the tech needed, we have a very powerful and mature tool that will help solve data relationships: a relational database. How it informs the solution is a whole other blog post (or posts), but at least now we know what we are trying to solve and can use our technical expertise to solve it.
Before you start designing solutions or coding, take the time to clearly define the problem you are working to solve with your client (which can be you, if it's your own project). To define the problem, answer these questions first:
Once you have that, redefine the problem by wording it in a way that highlights the root issue to solve, along with the way to measure success. Assuming the client agrees with your redefined problem, you are ready to start using the big, beautiful brain of yours and start solution-ing!
Thanks for playing.
~ DW
When trying to add a key using apt-key on a Debian 11 docker image, the step seems to run infinitely.
The screenshot below highlights this problem when adding a key that is necessary to validate the mono-complete package.
I set up a DevContainer to build Inky, an interactive fiction editor I like for game projects, without having to install all the build dependencies on my local machine. The Docker container build worked on my Linux machine, but would hang on my Windows 11 box, using Docker Desktop with WSL2. More specifically, it would run forever on the apt-key command, as specified by the mono install instructions.
If you need an example, take a look at my Inky repository fork at that specific point.
The issue was that the command specifically references port 80 in the URL to the keyserver. In the end, I changed:
sudo apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 --recv-keys 3FA7E0328081BFF6A14DA29AA6A19B38D3D831EF
to
sudo apt-key adv --keyserver hkp://keyserver.ubuntu.com --recv-keys 3FA7E0328081BFF6A14DA29AA6A19B38D3D831EF
You can see the specifics in the next commit in my example repository, here.
I was put on the right track by a Stack Overflow post trying to solve a similar issue with apt-key. Scrolling through the answers, I found this one: LINK
apt-key Deprecation Notice
If you look at the Debian documentation for apt-key
or try running the command yourself, you might notice the deprecation warning. Underneath the hood, it runs the appropriate command in Debian 11, but will be gone after Debian 11 and Ubuntu 22.04.
Just something to note for those looking over this solution in the future.
I needed to remove the port number from the keyserver URL used in my apt-key command.
Thanks for playing.
~ DW
Don't worry-- it's not all feelings. It's data too. All my observed behaviours relate to the projects I worked on throughout the year on GitHub, which provides great insights into my contributions. I'll be using my GitHub contributions for 2022 to highlight the spots where I can identify the behaviour.
Why do this? Because I want to remind myself and others that if you feel like you are stuck, you are better off finding the source of the problem-- even if it makes you face some hard truths. By understanding the root problem, you can work at resolving it, even if it involves changing what you believe is your best approach to work.
At the end of 2021, I started looking at the job market and started to notice that the jobs I wanted (or thought I wanted) relied on skills that I have not been able to practice as part of my day job. Coding is no longer one of my responsibilities, only planning, designing, and providing oversight. This sparked the urge to refresh my skills and prove to myself that I could do these jobs, and that all I had to do was put in the time.
And so began a series of LeetCode challenges, learning exercises, and review of various problems so that I could skill up and strengthen those coding muscles again. This is what you see in the contribution graph for the first 2-3 months of 2022.
Although this spark eventually faded, as it does, I realized something about myself. I realized that it's not the code I love, but learning about code and how to apply code in various ways. New languages, patterns and practices, solution architecture, whatever-- if it involves coding something, you can count on me being interested.
This example highlights Q1 of 2022, yet there are plenty of other times where I spent time learning new tech: experimenting with Go and Rust as part of my VGL project (more about that later), a brief experiment with Q# back in 2019 and early 2020, and my continual urge to learn C/C++ along with the DevOps tools around it. These are all things that have sparked that love of learning code over the past few years, and each time it's the same pattern: a spark of interest, a deep dive into the learning, then burnout because you don't know where to go with the knowledge.
Which brings me to the lesson learned: I need to direct my learning energy towards a goal. This way, when the excitement of learning something new fades I will still have a goal in my sights and continue to channel that energy towards something, rather than letting it fade out.
At the end of the year, I looked at my GitHub Unwrapped video and was surprised by my top languages for 2022.
I was trying to figure out where I had written so much TypeScript, considering that for the past few months I have been living in JavaScript and HTML. Again, going back to my contribution graph I noticed another spike in activity in May.
I remembered that I decided to repurpose my learning strategy, and rather than just doing LeetCode exercises and textbook studying to strengthen my atrophied coding muscles, I would study by building something. Something that I found useful, all while further strengthening my skills! This was the beginning of the Video Game Library or "VGL" project where I spent time building a TypeScript-React project and included some experiments with both Go and Rust to determine which language allowed me to leverage WASM (which was yet another rabbit hole I became excited about).
In the end I shelved the project because I was letting my learning drive the project. This meant that anything I wanted to learn, I added to the project's scope. In the end, it became too big and my original vision was lost, but the urge to build never fades, only the "something" that I am building.
Looking back beyond 2022-- the idea of building something has always driven me. Building a business, a video game, or a product. It doesn't matter, as long as I am building it.
Where it falls over is when the scope gets too large and overwhelming. This is not uncommon amongst creative types (just ask any game developer) but building something, ideally out of code, is something that drives me. If I can channel that excitement and passion on something I believe is worth it, I think I could produce and finish something I could be proud of.
I started to make this realization about myself and my drive to build things later in the year. This is why I came back to building my website, which I had let fall dormant. I wanted to channel that excitement, energy, and knowledge into something I found valuable. My website is something I have talked about improving for years and started redoing countless times. Looking back at the contribution graph, this represents a large part of the contributions for October, November, and December of 2022. This is further evidenced by the releases of the website I published through the same time period.
Reflecting on my behaviour during the VGL project in May and the website work in the last quarter of the year-- the behaviour and drive was the same. I loved building something, yet the VGL project went onto the shelf, and the website finally managed to get some traction.
The difference was in my approach.
For the Video Game Library project, I let the excitement of learning drive its development, which led to scope creep and dilution of the original project vision. With the website I took the time to plan and force myself to complete releases-- no matter how small.
This change in approach resulted in a longer focus on a single project. Ultimately, that kept my excitement for my website project going longer and I kept coming back to it over and over again to make small (or sometimes larger) improvements. In fact, I am continuing those improvements today as the website is far from complete-- but it's starting to represent the vision I started with.
I have tried sharing and producing content in various forms over the years, but with Prairie Dev Con returning in 2022, I thought I would focus some energy into preparing and sharing content like I used to in my Microsoft MVP days. This meant lecture-style presentations and blogging.
After three live events in 2022, and almost a blog post a week since mid-October, I realized that I don't love sharing like I used to...rather I only like it. It's a subtle difference, but it is definitely different than it once was.
I like it because it is a practical way to document my work. I love learning and building things, and sharing those things is an easy way to document my progress for others-- but more importantly myself. With the blog posts, I documented things I learned for my website like the Open Graph protocol or my implementation of GDPR compliance. For the presentations, I focused on what I knew and delivered two original sessions: one about my day job and what it means to be an IT Architect, and the other a case study on how to do my day job.
Through this experience this year, I found that I liked the process-- but didn't love it like I used to. To me, the presentations and blog posts were necessary for other outcomes. More specifically, the presentations were my ticket to touch base with other real-life speakers and tech professionals after a multi-year hiatus. The blog posts were my way of documenting, analyzing, and appreciating my own effort into my various side projects.
In the past, with the MVP program, I blogged and shared to receive validation from my peers and the MVP program itself. Those goals are not bad ones by any stretch, but since I don't have the MVP program pushing me, I need something else to help push me. That "something" is myself, and the outcomes I mentioned previously. Personally, I think that means I've grown quite a bit since I was an MVP, and it is a great example of how 2022 has been a year filled with huge change for myself and my attitude towards work.
I have mentioned the good things, the changed things, and now I will go over the things I need to improve (in my opinion).
Everybody is different and brings different value to the table. I have led a very privileged career and have had massive success in many different areas, yet for years I have rarely taken the time to appreciate those accomplishments.
Instead, I would get caught up in comparing myself to others and what I couldn't do, rather than what I could do. I would dwell on my lack of recent coding experience, rather than celebrate the time I've spent migrating legacy systems into the cloud. I would focus on the jobs I did not qualify for, rather than the ones that I did qualify for.
This cycle of focusing on what is missing is a lose-lose situation. There will never be enough success. The grass will always be greener on the other side of the fence, no matter how many times I jump over it.
I need to remind myself of this moving forward, and hopefully you can remember that for yourself as well.
People refer to me as "a talker", as in, I like to talk and I'm pretty good at it.
I leverage my talking skill in my day-to-day job, but when it comes to what I am trying to build for myself I need to focus on doing the work rather than talking about it.
It might be cliché, but "talk is cheap" and I need to talk less and do more. Plain and simple.
In short, I identified cyclical behaviours and patterns in myself that relate to the work I put into my various side projects and personal (and professional) development. In 2022, I noticed the following about myself:
The first two are my way of channelling creativity, which is why I love them so much. Although I used to love sharing my knowledge, at this point in my career and life, I like it as it is a practical way for me to document things as I discover them and connect with others, rather than as a method to be validated and rewarded.
In terms of how I can improve:
I need to accept and embrace my current skills and abilities, rather than focusing on what I think I am lacking. I also need to focus more on implementing my ideas rather than talking about them. Once I have something built, then I can talk more about it-- but until it's built, I need to focus my energy and excitement on the build rather than the talk.
Thanks for playing.
~ DW
'Twas the night before Christmas, and all through the land
The Western Devs group was busily at hand
Working on projects, coding with care
In hopes that the deadlines would soon be met, with flair

The computers were humming, the monitors aglow
As the developers worked on code, to and fro
Some were debugging, some were designing
All with the goal of creating something exciting

But as the night wore on, and the work was almost through
The developers stopped, and a cheer rose anew
They had made it, they had finished the task
Now it was time to put down the code and unwind at last

So they gathered around, for the holiday was here
Raised a glass to a year of hard work and good cheer
Here's to the Western Devs group, may your future be bright
Merry Christmas to all, and to all a good night!
Have yourselves a great holiday season and new year!
Cheers!
When the call for speakers opens up, you are required to submit a summary of your talk and yourself. I call this the pitch process, as your submission is your moment to convince the event organizers you are worth betting on.
It might sound stressful, but it's not. It's a pretty low-key process considering you are just filling out a form, and it's low stakes. If you don't make the cut, you can try again next time.
The point is that you need to take the time to think about why you're worth the effort, because you are definitely worth it! You know it, so now is your chance to practice.
Once you're accepted, you get a chance to connect with other speakers. These folks are like minded people who are willing to spend their time sharing their experiences and expertise. Sit with people you don't know and have conversations. Introduce yourself. Talk about what you do and listen to what they do. When you're done, find them on LinkedIn and remind them where you met them.
I have met some of the best people this way and have continued to stay connected beyond the conference (shout out to the WesternDevs).
As much as I appreciate livestreaming and virtualized meetings, speaking in the same room as other humans is very different and definitely develops a different set of skills and strengths. The interaction you get with your audience during and after you deliver your session is something I have not been able to replicate in the digitally transformed world we live in today, in 2022.
Just to be clear, something will go wrong...and that's okay.
No matter how much you prep, something will go wrong. A demo will fail, a slide will be out of order, a question will be asked that you don't have the answer to. The key is in how you react and respond to the situation. These "mistakes" are what has made me a better presenter in my day job. It has also helped me learn to stay calm and collected when pressure is being applied.
As an aside, I wanted to note that not all conferences are created equal.
Before you submit your session take note on what the conference does to support their speakers. A few questions to ask yourself before you commit your time and effort to a conference:
There are no right or wrong answers to these questions, but you should consider what you're getting out of the deal when you submit sessions to a conference beyond professional development.
Just remember that the speakers are the talent that makes a conference possible. Your work is valuable, and the conference team should ensure you feel appreciated, one way or another.
Speaking at in-person events, like tech conferences and user groups, is a great way to grow as a professional. Key benefits are:
Thanks for playing.
~ DW
To be fair, I should highlight that this is definitely a self-induced problem. The Docker Engine prerequisite is listed right on the README for nektos/act, and had I reviewed the documentation I probably would have saved myself the trouble. Still, in my web sleuthing for solutions to the problem I created for myself, I found others had hit similar issues, hence this post.
I discovered the problem when I attempted to test my GitHub Workflows locally using nektos/act, which is a tool I have been using for the past few years in my software development. It works by pulling down a docker image that simulates the GitHub runner and running the workflow in that Docker container. I have done this a few times over, so I went to one of my older projects where I had set this up and pulled in the code to get it running.
Being that this was a fresh Linux install, I had not installed Docker yet. When I searched out the installation instructions for Docker on Linux, I was greeted with this announcement:
I have been using Docker for Desktop on Windows for a while now, and I am always happy to have software that exists across my Windows-Linux development environment ecosystem, and so I went about installing Docker for Desktop as my new Docker install.
After testing my new and shiny Docker (for Desktop) installation with the standard docker run hello-world, I was ready to get back to coding!
Or so I thought...
This is where things went sideways and the problem appeared. I ran act -j build to run my build job in a workflow I know has worked previously, and was greeted with the following error message:
Cannot connect to Docker daemon. Is the docker daemon running?
Not what I expected, considering I just tested out my fresh Docker install, but I tried pulling the image down myself with the docker pull command just to make sure things didn't break, and everything worked as expected.
With a bit of web sleuthing, I came across others who reported the same issue and noticed this link in particular:
You could check if /var/run actually contains docker.sock
When checking this, I found that docker.sock was in fact NOT present. I immediately associated it with the Docker for Desktop installation, as that was the only new variable from my previous development environment.
This is the part where I wasted my time trying to figure out why Docker for Desktop did not install docker.sock, rather than figuring out how to install the Docker components that were missing.
Although I am no Docker expert, my understanding is that Docker for Desktop runs docker inside a VM rather than on the system itself, unlike Docker Engine. In fact, you can see a separate Docker context when you list out the contexts.
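Listing the contexts makes the split obvious; the output below is approximately what Docker for Desktop on Linux shows, with the endpoint paths being the giveaway:

```bash
docker context ls
# NAME            DESCRIPTION                               DOCKER ENDPOINT
# default *       Current DOCKER_HOST based configuration   unix:///var/run/docker.sock
# desktop-linux   Docker Desktop                            unix:///home/<user>/.docker/desktop/docker.sock
```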
It should be noted that the default context for Docker was listed, even though I had not installed Docker Engine yet. This led me to believe something I installed was incorrectly configured, but really it was the fact that I had not installed the software I needed.
As technical as I made it sound, the real problem was that I was missing software. Specifically I was missing "docker" on my Linux machine, even though I installed Docker for Desktop. 😊
Well, if the problem is that I am missing software, then the solution must be to install the software. That software is Docker Engine, which sets up the Docker API right on the machine rather than through a VM like Docker for Desktop (as far as I understand it).
In conclusion, install the software dependencies your tools need. If you're running a Linux distro, as great as Docker for Desktop is-- you may still want to install Docker Engine. You can always switch which context your own docker commands run against with the docker context use command, but it's worth double checking to make sure the tool you are using supports Docker for Desktop on Linux platforms.
Thanks for playing.
~ DW
Joel did a great session about API first design. It was a very dense session, but he delivered the content in a way that was very approachable and allowed me to think about the benefits of doing API first design with tools like Swagger.io and OpenAPI.
It was great seeing the value of these tools, and hearing about the patterns and practices experienced API developers like Joel use to implement consistent and secure APIs.
I went into this session thinking I was going to be fascinated with the subject, but that the concept would apply only to development leads or possibly coders, rather than an architect like me.
I was wrong.
The Developer Velocity Index (DVI) is a way for any team (even a one-person team, like me on my side projects) to frame up and scope the abstract problem of figuring out how to deliver more value.
I plan on applying the DVI to my side project adventures, self-development, and my enterprise day-job efforts as soon as possible.
Although Dave and Lavanya delivered two completely separate sessions related to testing, the content they delivered worked together in a very interesting way.
Dave demonstrated and discussed Playwright, an end-to-end testing framework that resolves or improves the problems we commonly see with end-to-end testing. Lavanya demonstrated how someone should apply proper code management and development techniques when creating test code using a framework like Playwright.
For me, together they demonstrated why the test recorder features of end-to-end frameworks are not the "best approach" to creating tests, but rather only the first step.
I feel that these ideas will be seeping into both my day-job and side projects in the very near future.
Adam closed the Prairie Dev Con season with his session, and managed to leave me with a lot of ideas and helped me identify gaps that I have been living with as a developer and as a solution architect.
Ensuring that developers are security-aware is something I didn't realize I have been missing in my own skills, but also should be looking for in the implementation of my solution designs.
Rod delivered a keynote in both Regina and Winnipeg, and each time I walked away with a positive outlook on my own professional and personal growth, but also with the reminder: A Deal Is A Deal.
Sounds simple enough, but in the past I have frequently found myself regretting decisions or deals I had made with myself or others. But, a deal is a deal, and even if you don't like it or regret it, you need to take a moment to learn from it and ensure the next deal is one you won't regret.
In short, there were a lot of good ideas at Prairie Dev Con 2022. These are the ones that stood out to me the most:
Thanks for playing.
~ DW
With my recent adventures in reimplementing my website, I wanted to leverage this on pages and posts, specifically with LinkedIn, and it took a little more research to get it working right. So, for the web nerds like me looking to implement OGP on their projects, I wanted to share the resources I found useful, to hopefully save them some time.
ogp.me
I am calling this the specification, or "spec", and it is probably the most important resource. The best part about this site is how approachable it is.
There are code snippets, explanations of all the object types and their properties, and its own list of tools (although they differ from the ones I am including on this list).
If you take one thing away from this post for your work with OGP, take this one.
Both Facebook and LinkedIn provide a developer tool to analyze and verify your implementation of OGP, with the added feature of busting whatever the social networks have cached for the pages you share.
These tools are great for triaging or assessing publicly shared pages, but not so much when it comes to local development. That is where the next tool comes into play.
Available for both Chromium browsers and Firefox, this web extension allows you to simulate what should appear for any page loaded up in your browser.
This tool saved me from having to continually publish the content to a public location for the post inspector, but note that it is just a simulation of what the tool thinks should appear. It does not replace the post inspector or proper testing on the site you are looking to share to.
If you are reading the post, then this one is an obvious one-- but sometimes we (like me) get so caught up in exploring new ways to solve a problem that we forget about the obvious ones.
OGP tags live in the <head> of your HTML page. If you are unsure why things are not working, run your browser dev tools of choice, check the <head> of the document, and make sure the OGP tags you are expecting appear where they should be.
It seems simple, but depending on what tool, engine, or framework you use to output HTML, you may be surprised what shows up.
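For reference, the tags you're looking for sit in the <head> and look roughly like this (the values are placeholders for your own page):

```html
<head>
  <meta property="og:title" content="My Post Title" />
  <meta property="og:type" content="article" />
  <meta property="og:url" content="https://example.com/posts/my-post" />
  <meta property="og:image" content="https://example.com/images/my-post.png" />
  <meta property="og:description" content="A one-sentence summary of the post." />
</head>
```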
Open this post on a desktop browser and press the key combination Ctrl + Shift + i and you should see your browser dev tools pop open for the site.
Read the approachable spec document. That is the most important takeaway from my OGP implementation. It is very approachable and gives you a strong foundation to work from as you use other tools to triage and assess your implementation.
These are the tools I used to implement LinkedIn support, along with my browser dev tools.
Ctrl + Shift + i on your desktop browser
Thanks for playing.
~ DW
What I found odd was that all the links and articles I came across seemed to talk about things at a high level (e.g. defining GDPR) or assumed I was working at a large scale (e.g. enterprise software), but nothing addressed small projects like my personal website.
Still, I managed to draw some of my own conclusions on how to handle GDPR for my personal website and wanted to document them somewhere.
I am not a lawyer, so this is just an opinion from a developer. As a rule of thumb, I avoid taking legal advice from random folks on the internet. If you take advice from this article, take that bit and keep it.
I hope others (like you) use this post to draw your own conclusions or how you want to proceed with your own plan for handling GDPR.
But if you want real advice, get a lawyer and talk to them.
Yes, it does apply to your personal website if you are tracking information about your users and you are developing your own website or application.
I mean developing as in coding it, publishing that code, and hosting it somewhere like Microsoft Azure or GitHub Pages. If you are publishing your own code, GDPR may apply to you.
If you are using a third party tool or platform, like Facebook or LinkedIn to host your blog posts-- you appear to be in the clear. When you use a third-party platform, the platform, not you, is responsible for GDPR compliance.
Even if you think you are clear of GDPR responsibility, make sure that you trust your chosen platform to comply with GDPR and the other regulatory bodies out there, as your site depends on it.
The GDPR is all about protecting personal information and giving control back to people navigating the internet. GDPR is not the only set of laws in play, as California, Brazil, and Canada have their own versions of similar legislation, but many of these laws seem to have been inspired by GDPR and why I tend to focus on it.
At the personal website level, you need to consider whether or not you are collecting personal information from your users. This includes things like IP addresses or cookie identifiers.
If you are NOT collecting information like that, you are good to go! Just remember that services like Google Analytics or Disqus Comments use personally identifiable information to operate, so if you have decided to include one of those services on your site then you need to think about GDPR compliance.
I concluded that GDPR-like laws apply to my personal website if I want to do any kind of usage tracking and understand how users are using my site. This means it needs to be an opt-in policy that gives the user the option to do just that: opt in.
The dialogue above is the only real visual evidence on the site now. As simple as that looks, a lot of thought went into it prior to implementation. Rather than doing a complete code review, I figured I would share the highlights.
My default would just be to include something like Google Analytics and be done with it, but with EU regulators ruling against certain uses of GA and more countries creating their own GDPR-like legislation, I thought I would stay away from it and try something different.
I chose Application Insights and took the time to learn how it handles data privacy and retention and how the JavaScript SDK uses cookies.
Regardless of what you choose for your analytics or tracking tool, the important part is that you understand how the tools are GDPR compliant and how the tracking technology works.
You've seen millions of them already, but those cookie banners have a purpose. The GDPR website outlines the requirements around using cookies, and many tools use them. The important thing is that you know how your website works, along with all the dependencies you choose to include.
In my case, the cookie banner enables cookies in Application Insights, which in turn enables usage data collection, but only if the user clicks "Accept".
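The wiring behind that is roughly the following, using the Application Insights JavaScript SDK; the connection string and button selector are placeholders, and this is a sketch of the approach rather than my exact code:

```js
import { ApplicationInsights } from '@microsoft/applicationinsights-web';

// Start with cookies disabled so nothing is set before consent.
const appInsights = new ApplicationInsights({
  config: {
    connectionString: 'InstrumentationKey=00000000-0000-0000-0000-000000000000',
    disableCookiesUsage: true,
  },
});
appInsights.loadAppInsights();

// Only enable cookies (and start collecting usage data) after "Accept".
document.querySelector('#accept-cookies').addEventListener('click', () => {
  appInsights.getCookieMgr().setEnabled(true);
  appInsights.trackPageView();
});
```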
This last point is less technical, and more about design. I am designing with transparency at the front of my mind. I added a privacy statement to my about page to explain the "why" around using Application Insights, and will share more specifics and document them accordingly.
GDPR and the various GDPR-like laws definitely apply to you and your personal website or app project if you are building the code yourself, assuming you want to track information about your users.
The short story on this is that you need to draw your own conclusions and take responsibility for what you include in your website. If you are developing something to share outward into the world, you need to take the time to understand how the various tools you include (such as Google Analytics or Application Insights) work, as well as the requirements for compliance.
Two resources I found useful in explaining GDPR requirements are provided on the site GDPR.eu. If you are looking for more information, I definitely suggest checking out these links:
Thanks for playing.
~ DW
Microsoft.SqlServer.Types
This package is owned by the SQL Server team so, as you'd expect, it is ridiculously behind the times. Fortunately, they are working on updating it and it is now available for .NET Standard 2.1 in preview mode. The steps I needed to take to update the app were:
Microsoft.SqlServer.Types
After that the tests we had for inserting polygons worked just great. This has been a bit of a challenge over the years but I'm delighted that we're almost there. We just need a non-preview version of the types package and we should be good to go.
When I'd only done step 1 I ran into errors like
System.InvalidOperationException : The given value of type SqlGeometry from the data source cannot be converted to type udt of the specified target column.
I went down a rabbit hole on that one before spotting a post from MVP Erik Jensen https://github.com/ErikEJ/EntityFramework6PowerTools/issues/103 which sent me in the right direction.