Thoughts on “Turning Tech Hobbies into Side Hustle”

Last month I read Turning Tech Hobbies into Side Hustle by Erik Dietrich, and it led me to analyse what I am doing and whether it is productive or a hobby.

Initially I thought the message of the post was extreme, but after consideration I changed my mind. The message I initially took away was to aim for a direct reward from hobbies, Erik’s example being book sales from writing a book on F#. After further consideration I found a better takeaway: be honest about how you are spending your time. If you are learning something to satisfy your curiosity, don’t count that as productive time or career development. Later, if you find that you are not happy with your career progress, you can evaluate whether that time is unproductive and could be better spent. From this perspective, learning a new language is no different to any other hobby; you just need to be honest that it is a hobby.

With that, I decided to take a look at myself and ensure I was being honest.

Coincidentally, and maybe why I found the post extreme initially, something I have wanted to do is learn a functional language. Being honest with myself, I want to do this to improve my software development, not for my own vanity. Functional programming concepts are making their way into a lot of languages; tuples and pattern matching are among the features in the latest C# release. Erik is correct that just learning a language does not have quantifiable value. When I do learn a functional language, following his suggestion, or at least blogging about it, will produce more value. Learning a functional language is not high on my list of priorities, which I’ll come back to later.

Speaking at a meetup is something that is partly motivated by vanity, but not entirely so. I think it would be beneficial, for myself and for One Model, to get a bit of awareness from me presenting. It may also be possible to quantify the value of speaking. The main outcome I’m after is more candidates for recruitment, which is measurable. So while there is some vanity behind the motivation, there are valid reasons too. By waiting until I have something worth presenting, instead of just getting up there for the sake of speaking, I think I am approaching it correctly.

I don’t have a quantifiable outcome in mind for blogging. I wonder if Erik did when he started? I doubt he envisioned that he’d be getting paid to write blog posts for other sites/companies. I think I could get more out of this blog if I target a more specific audience. This blog promotes me, but I could be promoting One Model more with it.

This brings me to the last point I want to write about. I don’t think I’m the target audience for Erik’s post. I am a founder and partial owner of One Model and by working to increase the value of the company, I increase the value of the shares I own. I actually get maximum value from my personal time by spending it in ways that benefit both myself and One Model, and most of the time that is what I do. When I was researching Selenium, I was doing it so that I could create automated UI tests for One Model.


Well, this took me a lot longer to complete than it should have. April was a busy month for me, but that’s not the real reason this took so long. Writing up the analysis of the post and of myself ended up being more difficult than I anticipated, and I let that drain my motivation. I’m disappointed I broke my weekly post streak, and that I have let a month pass without adding value to my blog. On the positive side, I have finished this now and feel recharged. Time to get back into regular blogging.


Prelude to Thoughts on “Turning Tech Hobbies into Side Hustle”

This is a bit of inception. A post on a post on a post.

For a while, I have been thinking that it might be worthwhile to try writing some shorter posts inspired by other blog articles I have read. During the week I read Erik Dietrich’s post Turning Tech Hobbies into Side Hustle and later sat down to write some thoughts it had inspired. I wrote 403 words, but they didn’t come easily, lacked purpose, and in the end I didn’t think they were worth sharing.

I think I’m still finding a balance between planning and writing off the cuff. Hopefully, as I go, I am reducing the amount of planning required for a good post. Eventually I realised that what might be worth writing about is analysing whether what I have done recently has been productive or a hobby.

So I am sitting down again to try and write something based on that. Fingers crossed it goes better than my last attempt. I am not sure that I will post that article tonight, which is disappointing as I was hoping to post something earlier than Sunday night this week and maybe even get two posts out.


Some reflection

I don’t have a specific topic to blog about this week, but I wanted to continue with publishing a post weekly, so this week I am writing a reflective piece.

When I set myself the goal of posting weekly I was aiming to

  • increase the quality of my writing
  • reduce the time it takes for me to write a post
  • share more information

Weekly blogging has been enjoyable, but when I think about what I am aiming to achieve I am not sure that I am progressing as well as I could be. Maybe that is because I didn’t specifically articulate the three points.

Judging for myself whether my writing has improved is difficult. To improve the quality, I think I should start getting regular review and feedback on my posts, and there are two types of feedback I should look for. The first is general feedback on my writing, e.g. spelling and grammar. The second, and probably more important, is whether my posts are clear and easy to understand and follow.

I haven’t been accurately tracking the time it takes me to blog, but I do think I have gotten faster at writing posts. The slow parts have been the researching and learning when I am writing about something new, and I learnt during the Selenium series that I should not try to pre-empt the posts, but instead learn first and then write about what I learnt.

The last point is a bit vague, and while it seems that I am sharing more information by blogging weekly, that is not how I was thinking about it. So far most of my blog posts have been technically oriented and describe the solution to a problem. What I want to get better at sharing are the things that do not have a solution or a right answer, e.g. how I’ve hired developers, how I manage my team and how I build the product. I have been working on posts for those topics alongside other things and have found them harder to write. They take more thought to ensure the posts are clear and not just some ramblings. So while the aim is vague, once I get some of those topics on the blog I will be happier about progressing towards it.

Overall, while this has turned out to be a bit of a critical post (when is reflection not critical?), I am happy with how it has been going. There is satisfaction in seeing the number of daily views my blog gets steadily increasing as I add more content. That said, if I were just after views, I’d be blogging entirely about JavaScript, probably specifically React and Webpack. My most popular post is far and away TypeScript to ES2015 (ES6) to ES5 via Babel 6, which I realise now I wrote a whole year ago. It must be extremely out of date by now!


Excluding node_modules folder from ASP.NET compilation

We started using GitHub URLs as npm dependencies (make sure to reference specific commits to ensure repeatable builds) to replace some JavaScript files that we had manually added to the project. Unfortunately, one of the repositories we added contains an invalid Razor view file (.cshtml), which causes Razor view compilation to fail with the following error.

/temp/node_modules/ace-builds/demo/kitchen-sink/docs/razor.cshtml(6): error ASPPARSE: Encountered end tag "a" with no matching start tag.  Are your start/end tags properly balanced?

Razor view compilation failing because of the node_modules folder is a bit of a recurring issue for me. In the past I had been able to solve the problem by upgrading npm, but that was not going to work this time. What I needed to do was get the ASP.NET compiler to ignore the node_modules folder.

The view compilation was configured in the .csproj file using the AspNetCompiler task e.g.
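The snippet from the original post is not in this copy, so here is a sketch of what a typical view-compilation target using the AspNetCompiler task looks like. The target name, condition, and properties are assumptions based on the standard ASP.NET MVC project template; the virtual path matches the command shown below.

```xml
<!-- Sketch only: names and condition assumed from the default ASP.NET MVC
     template, not taken from the original post. -->
<Target Name="MvcBuildViews" AfterTargets="AfterBuild" Condition="'$(MvcBuildViews)'=='true'">
  <AspNetCompiler VirtualPath="temp" PhysicalPath="$(WebProjectOutputDir)" />
</Target>
```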

This results in a command like the following being executed during the build process.

C:\Windows\Microsoft.NET\Framework\v4.0.30319\aspnet_compiler.exe -v temp -p D:\repos\onemodel\webapp

If you run aspnet_compiler.exe with the help flag you will see that there is a flag (-x) that can be used to exclude directories. Frustratingly, there is no way to provide values for this flag using the provided AspNetCompiler task. Instead you need to use the Exec task to call aspnet_compiler.exe directly so you can pass in a value for node_modules. Replacing the AspNetCompiler task with Exec ends up like the following example.
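The replacement snippet is also missing from this copy, so here is a hedged sketch of the equivalent Exec call. The hard-coded compiler path comes from the command shown earlier; the -x value is an assumption for illustration and may need adjusting to match your virtual path.

```xml
<!-- Sketch only: call aspnet_compiler.exe via Exec so the -x (exclude) flag
     can be passed; the -x virtual path here is an assumption. -->
<Target Name="MvcBuildViews" AfterTargets="AfterBuild" Condition="'$(MvcBuildViews)'=='true'">
  <Exec Command="C:\Windows\Microsoft.NET\Framework\v4.0.30319\aspnet_compiler.exe -v temp -p &quot;$(WebProjectOutputDir)&quot; -x temp/node_modules" />
</Target>
```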

The downside of this solution is that it relies on the aspnet_compiler executable always being in the same location; however, that could probably be determined dynamically with some more MSBuild configuration.


Making updates to citibank-statement-to-sheets

For months now I have been using my app to import Citibank statements into Google Sheets without any issues. This weekend, however, I imported my latest statement and opened the sheet to discover something had gone wrong and the import had put values into the wrong cells. “This should be pretty straightforward to fix,” I thought. “I just need to add a test for this case.” Of course it wasn’t quite as simple as I hoped.

Updating to TypeScript version 2

I opened Visual Studio Code and the first thing I saw was the message

Version mismatch! global tsc (1.8.10) != VS Code's language service (2.2.1). Inconsistent compile errors might occur.

While only a warning, this did prompt me to update the version of TypeScript used in the project and also to question whether I needed to have it installed globally.

Since the application is compiled with webpack and I don’t run the tsc command directly anymore, I didn’t see a reason to have it installed globally; however, the official docs always seem to provide the install command with the global flag, so I was unsure if it would work. I uninstalled it globally and had no issues from doing so.

With TypeScript only installed locally, I then updated it by uninstalling it locally, increasing the version in my package.json file to the latest version, and running npm install again. As part of the upgrade I changed the target in my tsconfig.json from “es6” to “es2015”. It did appear to work as “es6”, but according to the documentation that is not a supported value.
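For reference, the relevant tsconfig.json change is just the target value. This minimal fragment is illustrative only; a real config will contain more compiler options.

```json
{
  "compilerOptions": {
    "target": "es2015"
  }
}
```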

Migrating from typings to @types

Now that I was using TypeScript version 2, I could add definition files directly from npm. I uninstalled the typings package and removed it from my package.json. I also deleted the typings.json file and all the typings I had imported with typings. Then I installed the definition files through npm like so “npm install @types/pdf --save-dev”.

After doing this I was getting TypeScript compile errors when building the app where it was complaining that it could not find types that were in the newly imported definition files e.g.

ERROR in ./src/PdfScraping/PdfScraper.ts
(34,38): error TS2304: Cannot find name 'PDFDocumentProxy'.

A hunch told me that I probably needed to update ts-loader, and a Google search found this StackOverflow question that confirmed it. Updating fixed the error and the application was compiling once again.

I did find that VS Code works better with the latest TypeScript version and @types. Before, it wouldn’t always find definitions and would then incorrectly display errors in the UI, but that hasn’t happened since updating. I did need to close and reopen VS Code to get it to pick up new definition files as I added them.

Sidetracked by a Chrome behaviour change

After getting the application compiling again, I ran the tests to check that everything still behaved correctly. They failed!

This stumped me for a long time. The output I was getting from karma contained

Chrome 56.0.2924 (Windows 10 0.0.0): Executed 0 of 0 ERROR (0.002 secs / 0 secs)

which showed that it wasn’t picking up the tests. Because I had not run the tests before making any changes, I thought it was something I had changed that was causing this. It was a while before I noticed this message in Chrome’s console when running the tests:

Refused to execute script from 'http://localhost:3334/base/src/Statements/Parsing/StatementParserTests.ts' because its MIME type ('video/mp2t') is not executable.

This appears to be a change in Chrome’s behaviour, and jtson provided a fix in this GitHub issue. Adding

mime: {
  "text/x-typescript": ["ts"]
}

into karma.conf.js fixed the issue. This was not the last frustrating problem I encountered.

Could not test locally

I also had issues setting up localhost as an authorised JavaScript origin in the Google Developer Console for the application. This was annoying as it prevented me from running the application locally to test. I attempted to get it to work for a while, but in the end gave up and decided to release it and test it live, hoping that the updates hadn’t caused any issues.

Unfortunately those plans never work out and the application is currently broken with a script error. I will have to work out how to get localhost working again so that I can test and fix the issue locally.

Update: A day later

Well, it turns out I simply didn’t wait long enough for the new authorised origin to be accepted. I set it again, waited a bit longer, and then testing locally worked. The application itself worked again too, and so did the deployed version!

I’m not sure what happened, maybe it was tiredness (it was late at night) causing me to not pay enough attention. Oh well, the upside is it is working and I don’t need to make any further changes. I will have to keep an eye out and see if it plays up again.


Interlude and reflection on Selenium

I am putting my UI testing with Selenium series on hold for the moment. My goal at the start of the series was to be able to create UI tests and as I worked through learning how to, I blogged about what I was learning. My initial goal has been achieved and I can now write UI tests.

Before I continue the series I want to gain more experience UI testing, by writing tests for more advanced scenarios and allowing time for the way I write the tests to evolve. I also want to find out how robust (or brittle) the tests I create are when UI changes are made.

I think the blog series has been good. When I set out to learn Selenium, I personally found the available information confusing and had questions such as “which nuget packages do I actually need?”. I answered these questions and hopefully the series can help others who are looking to start using Selenium.

However, I do think the posts could have been better. It was not a conscious approach, but what I ended up doing was premeditating what I was going to blog about and then spending the week learning about that so I could write the post. Deciding the goal in advance often didn’t work out, and it ended up splitting the blogging and learning into almost two separate activities.

I think a better approach would have been to just blog as I went about learning Selenium. For example, something with a flow like the following bullet points.

  • I want to learn how to UI test with Selenium.
  • I can’t find any good guides for getting started.
  • I am attempting to create a UI test, but am confused about all the different nuget packages available.
  • Learn and describe the differences between them.
  • Create a simple project with only the nuget packages required.

One of the things I struggled with was the structure of the posts and how to frame the information I was learning. If I had written the posts as I went, I think much of the background on why the information is important would have been taken care of. While writing the last post I started to make some changes to the example tests and the UI. Whilst making the changes I realised that I should add a page object to reduce code duplication and simplify any further changes. This led to me talking about page objects at the start of the last post, but I lost the context of why I introduced them, which would have helped to demonstrate and explain the reason for the pattern.

I also think the posts need more code samples in the post itself. It would have been easier to add code snippets if I was describing what I did in a more step-by-step manner.

My plans for the series are to look at libraries such as Seleno that simplify the UI tests, but as mentioned before I want to gain some more experience so that I can appreciate the benefits they provide. When I am ready to continue I will take the new approach to writing the blog posts.


ASP.NET UI Testing With Selenium in 2017: 3. Page Objects and using WebDriver

This is the third post in a series on creating automated UI tests for an ASP.NET app using C# and Selenium. My goal is to highlight and explain the key pieces required to get started with Selenium WebDriver in 2017.

Last post I took a bit of a tangent from WebDriver and instead looked at starting the application automatically when tests are run. This time I have added some more realistic UI tests on a mock login page in the web app in the SeleniumExamples repo. To reduce code duplication and make the tests less brittle I have used the page object pattern, and to wait for the login request/response I used the WebDriverWait class.

Up until now I have not required the Selenium.Support NuGet package, but this week I have added it as it provides classes for waiting and for creating page objects.

Page Objects

Page objects are not specific to Selenium; they are a pattern for decoupling the logic for interacting with a UI element from your tests. This reduces the brittleness of UI tests, making it quicker to update your tests when the UI changes.

There are plenty of good resources available explaining page objects. Martin Fowler’s bliki is a must read, and the Selenium wiki has a good explanation and example.

In the PageObjects namespace in Selenium.Support there are classes to help simplify page object creation. Unfortunately there does not seem to be a lot of information on them (the page in the Selenium wiki is not relevant for .NET). I found the best place to learn and get some examples was the test classes in the Selenium source code, but hopefully the page object class for the login page is a better example.
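As a concrete illustration (not the repo’s actual code), a login page object using the FindsBy attributes and PageFactory.InitElements from Selenium.Support might look like this; the element IDs and member names are assumptions.

```csharp
// Hypothetical sketch of a login page object; element IDs are assumptions.
using OpenQA.Selenium;
using OpenQA.Selenium.Support.PageObjects;

public class LoginPage
{
    [FindsBy(How = How.Id, Using = "username")]
    private IWebElement UsernameField { get; set; }

    [FindsBy(How = How.Id, Using = "password")]
    private IWebElement PasswordField { get; set; }

    [FindsBy(How = How.Id, Using = "login")]
    private IWebElement LoginButton { get; set; }

    public LoginPage(IWebDriver driver)
    {
        // Wires the [FindsBy] members up to lazy element lookups.
        PageFactory.InitElements(driver, this);
    }

    public void LogIn(string username, string password)
    {
        UsernameField.SendKeys(username);
        PasswordField.SendKeys(password);
        LoginButton.Click();
    }
}
```

Tests then interact with LogIn rather than locating elements themselves, so a changed element ID only needs fixing in one place.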


Waiting
When UI testing with Selenium there will be times when you need to wait, for things such as responses to requests or animations completing. A basic, but inefficient, approach would be to use Task.Delay (or an equivalent) to make the test wait a fixed time. WebDriver provides a better approach: you set a maximum timeout and it polls a condition until the condition is met or the time expires.

For .NET there are two types of waits available in WebDriver: implicit and explicit. You may see another type, fluent waits, mentioned around the internet, but this is not available in .NET. As far as I can tell, fluent waits behave the same as explicit waits, but are declared in a fluent manner.

The Selenium docs provide some details on implicit and explicit waits, but I found this answer on StackOverflow a better source of information. To summarise: implicit waits are a one-size-fits-all solution that only applies to finding elements, while explicit waits provide more flexibility and allow waiting for any condition to be met. As the official documentation mentions, implicit and explicit waits should not be mixed as they can cause unreliable wait times.
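For illustration, an explicit wait using WebDriverWait and the ExpectedConditions helpers from Selenium.Support (as they stood in 2017) looks roughly like this; the element ID and timeout are assumptions.

```csharp
// Sketch of an explicit wait: poll for up to 10 seconds until the
// (hypothetical) login result element becomes visible, then fail if it never does.
using System;
using OpenQA.Selenium;
using OpenQA.Selenium.Support.UI;

var wait = new WebDriverWait(driver, TimeSpan.FromSeconds(10));
IWebElement result = wait.Until(
    ExpectedConditions.ElementIsVisible(By.Id("login-result")));
```

If the condition is never met, Until throws a WebDriverTimeoutException, which surfaces as a clear test failure rather than a silent fixed delay.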

I used explicit waiting in the login page object to wait for the simulated login request/response to complete. I haven’t tried it, but using implicit waits with the page object pattern seems like it would be difficult, and I think having the explicit waits in the code to document where a delay is expected is beneficial.

Wrap up

That’s it for this week. From here the plan is to go and create some tests and then look at libraries like Seleno to see how they make it easier.
