<![CDATA[Gary Cheetham's Software Blog]]>
https://glcheetham.name/

<![CDATA[Mocking UmbracoHelper and using it with Dependency Injection]]>
https://glcheetham.name/2017/01/29/mocking-umbracohelper-using-dependency-injection-the-right-way/
Sun, 29 Jan 2017 11:51:09 GMT

The Umbraco APIs are full of examples where the developers have made tough design choices. The UmbracoHelper is the one, from my own experience at least, in which the reasoning behind the design is not immediately obvious.

In many cases, the way the UmbracoHelper was designed is a blessing - enabling seamless use in Razor views and providing convenient self-documentation of the most important Umbraco APIs for heavy IntelliSense users. However, once you start using UmbracoHelper in your own controllers and services, and then start trying to write unit tests for that code and use dependency injection, you may be tempted to view reliance on this ubiquitous class as a hindrance. The way to decouple your code from this dependency is hard to find through IntelliSense or through documentation.

But don't despair just yet: there is a little-known trick which makes unit testing code that uses the UmbracoHelper APIs both convenient and laughably easy - without endless fiddling trying to mock UmbracoContext in your test fixtures.

And there's no need to spend your precious time trying to re-implement UmbracoHelper in your own services, especially because UmbracoHelper was written by people who know what they are doing - hidden in those method calls are often many layers of caching which are hard to get right unless you know Umbraco inside-out.

The trick: UmbracoHelper uses Interface Segregation

My pain with UmbracoHelper came from my expectation that there would be an IUmbracoHelper interface, which I could depend on in my controllers and services and easily mock in my tests.

But what I didn't know is that, since 2014, there has been a set of interfaces which together make up the full UmbracoHelper API. Using one interface for the entire class wouldn't be ideal, because there are just so many methods in there; instead, your code can depend on just the interfaces it actually uses, enabling you to mock out those dependencies with well-known techniques and mocking libraries. Here they are in full:

  • ITagQuery
  • ITypedPublishedContentQuery
  • IDynamicPublishedContentQuery
  • IUmbracoComponentRenderer
  • MembershipHelper
  • UrlProvider
  • IDataTypeService
  • ICultureDictionary

Now, this isn't interface segregation by any strict definition, because the UmbracoHelper class doesn't implement these interfaces directly. Rather, UmbracoHelper exposes these bits of its API as object properties. The code example below should make the use of this technique a little clearer.

Writing decoupled code which uses UmbracoHelper

Here's a code snippet from one of my projects. It uses TypedContentAtXPath, a method from UmbracoHelper, to get every content node with a specific document type, and then uses Ditto to convert those IPublishedContent objects into my own model POCOs.
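A trimmed-down sketch of that class (the Product model and the "product" doctype alias are stand-ins for my real ones):

    using System.Collections.Generic;
    using System.Linq;
    using Our.Umbraco.Ditto;
    using Umbraco.Web;

    public class ProductService
    {
        private readonly ITypedPublishedContentQuery _contentQuery;

        public ProductService(ITypedPublishedContentQuery contentQuery)
        {
            _contentQuery = contentQuery;
        }

        public IEnumerable<Product> GetAllProducts()
        {
            // Fetch every node with the "product" document type alias...
            return _contentQuery.TypedContentAtXPath("//product")
                // ...and let Ditto map each IPublishedContent to a POCO
                .Select(content => content.As<Product>());
        }
    }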

The special thing about this code is that it doesn't even know UmbracoHelper exists. It depends instead on the interface ITypedPublishedContentQuery, which defines the bit of the API that it actually uses.

Now, how easy is it to instantiate this class, passing in the required dependency? As I've said above - laughably easy. Here's the relevant line from the AutoFac startup configuration for this same project. UmbracoHelper just exposes, as properties, objects which implement these interfaces - in this case ITypedPublishedContentQuery.
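A sketch of that registration (I'm assuming UmbracoHelper's ContentQuery property here, which is the object implementing ITypedPublishedContentQuery):

    // Hand out UmbracoHelper's ContentQuery whenever ITypedPublishedContentQuery is requested
    builder.Register(c => new UmbracoHelper(UmbracoContext.Current).ContentQuery)
           .As<ITypedPublishedContentQuery>()
           .InstancePerRequest();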

If you're not familiar with AutoFac, that's fine - it's not essential here. AutoFac just does the job of passing dependencies into my constructors for me.

You could just as easily instantiate this same class like this:
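For example, using the hypothetical ProductService from above:

    var umbracoHelper = new UmbracoHelper(UmbracoContext.Current);
    var productService = new ProductService(umbracoHelper.ContentQuery);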

Conclusion

After trying unsuccessfully to decouple code from the UmbracoHelper in a graceful way in many projects, this discovery was a real "eureka" moment for me. According to Shannon, not all features of the API are available through these interfaces yet, but most of the work I do with UmbracoHelper is covered by the APIs provided in ITypedPublishedContentQuery, making this interface a close friend to my Umbraco codebases.

One more caveat is the necessity to refactor your code to depend on these interfaces, instead of UmbracoHelper directly or your own home-grown interfaces - but hey - who doesn't like refactoring?

If you've found this article useful, or if you have any feedback, I invite you to get in touch with me on Twitter @glcheetham. Thanks for your time and I hope you enjoyed the read :-)

]]>
<![CDATA[Dynamic Umbraco Sitemap.xml Without Plugins]]>
https://glcheetham.name/2016/10/12/dynamic-umbraco-sitemap-xml/
Wed, 12 Oct 2016 16:46:48 GMT

It's easily possible to create a proper sitemap.xml, with the right URL, in a vanilla Umbraco installation with these three easy steps:

  1. Create a "XML sitemap" document type and template in the backoffice
  2. Add a line in the template for the "XML sitemap" doctype so that Umbraco serves it as XML rather than HTML
  3. Configure Umbraco's built-in URL rewriting module to handle a request that ends in ".xml"

If you've read my previous post on creating a robots.txt in Umbraco, you'll notice that this tutorial is almost the same. Here's what we're going to achieve, with the example taken from a site which I work on. This sitemap is dynamically generated by Umbraco.

Example of an Umbraco sitemap.xml

1) Creating the document types

There are two document types you should create for this sitemap.xml.

  • An XML sitemap document type (with template)
  • (optional) An "XML Sitemap Settings" document type, without template, which you can compose into other document types to implement optional sitemap settings like change frequency.

First you'll need to create the XML Sitemap document type. No extra properties are needed here, since everything the sitemap needs is generated dynamically from the rest of your site's content tree.

Example XML sitemap document type in the Umbraco backoffice

The next, optional step, involves creating a document type without a template, which you can use in document type compositions to implement the optional properties as defined in the sitemap.xml standard (See here for a full list: http://www.sitemaps.org/protocol.html#xmlTagDefinitions). Here's an example from one of my sites, with sitemap-relevant settings skillfully highlighted:

Example sitemap settings document type with XML-related settings highlighted

2) Write the razor template for the XML sitemap

You can get a little creative here so that you end up with the right solution for your site (e.g. if you need to split the sitemap up into multiple files), but I'll first provide my example code for you to read, then explain the interesting features.
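Here's a sketch of that template. It's trimmed down from my production code, so the line numbers quoted under "Interesting Features" below refer to the original and may sit a line or two off here:

    @inherits Umbraco.Web.Mvc.UmbracoTemplatePage
    @{
        Layout = null;
        // Serve this template as XML rather than the default text/html
        Response.ContentType = "text/xml";
    }
    @Html.Raw("<?xml version=\"1.0\" encoding=\"UTF-8\"?>")
    <urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
    @foreach (var rootNode in Umbraco.TypedContentAtRoot())
    {
        <url>
            <loc>@rootNode.UrlAbsolute()</loc>
            @if (rootNode.HasValue("updateFrequency")) {
                <changefreq>@rootNode.GetPropertyValue("updateFrequency")</changefreq>
            }
        </url>
        foreach (var node in rootNode.Descendants())
        {
            <url>
                <loc>@node.UrlAbsolute()</loc>
                @if (node.HasValue("updateFrequency")) {
                    <changefreq>@node.GetPropertyValue("updateFrequency")</changefreq>
                }
            </url>
        }
    }
    </urlset>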

I stress that this is an example - however, you could use this code as-is, and it will correctly handle a site that has multiple root nodes. On my production version of this I have an extension method which I use in place of IPublishedContent.UrlAbsolute() to ensure absolute URLs are rendered correctly when SSL is provided by my CDN.

Interesting Features

Line 5: This line instructs Umbraco to serve this page with the content type text/xml (instead of the default text/html), ensuring that browsers and crawlers understand that this is an XML page.

Line 9: The call to Umbraco.TypedContentAtRoot() and the two foreach loops are needed to ensure that sites with multiple root nodes have all of their content nodes listed in the sitemap.

Lines 12 & 23: The version of this code which is closer to what I have in production, available here, has a line handling pages which have a canonicalUrl property. This code, however, just calls UrlAbsolute() which is fine for most cases.

Lines 14 & 25: This is where the optional settings mentioned in section 1 come into play, with some pages having an updateFrequency property which tells Google that our page has frequently changing content. This property is, however, completely optional.

3) Use Umbraco's built-in URL rewriting module to give our sitemap the right URL

Did you know that Umbraco has a built-in URL rewriting module? It takes one line of XML to configure it to handle requests to /sitemap.xml, giving us the standard sitemap URL and keeping Google happy.

The config file is located at ~\Config\UrlRewriting.config.

First, after you've created a page with the XML Sitemap document type, you'll need to find the URL that Umbraco has generated for you. This is found under the properties tab of your page:

Umbraco sitemap xml with auto-generated URL highlighted

And then, all you have to do is add one line of XML configuration to rewrite the URL:

    <add name="sitemap-rewrite"
      virtualUrl="^~/sitemap.xml"
      destinationUrl="~/sitemapxml"
    />

And now your sitemap will be correctly configured and ready for primetime. I also have a tutorial which might interest you which demonstrates the application of this same technique to the robots.txt.

]]>
<![CDATA[How to create a robots.txt in Umbraco and edit it from the backoffice]]>
https://glcheetham.name/2016/10/02/robotstxt-umbraco/
Sun, 02 Oct 2016 13:31:17 GMT

It's very easy to create a robots.txt in Umbraco which you can edit from the backoffice. You can achieve this natively without installing any packages or writing custom code with these simple steps:

  1. Create a "txt file" document type in the backoffice
  2. Add a line in the template for the "txt file" doctype so that Umbraco serves it as text rather than HTML
  3. Configure Umbraco's built-in URL rewriting module to handle a request that ends in ".txt"

Here's what we're going to achieve:

Editing the robots.txt in the Umbraco backoffice

This file will be accessible to the crawlers at www.mywebsite.co.uk/robots.txt

1. Create a "txt file" document type

We need to create a document type for ".txt" files. I refrain from calling it "robots.txt doctype" or similar, because there's no reason this document type couldn't be used again for another txt-file web standard, such as humans.txt.

Showing the creating process of the robots.txt document type

This is easy. All we need is a text area for the file content. Make sure that the document type is created with an accompanying template and that all your permissions are set up correctly.

2. Write the txt file template

Again, this is very simple. We need one line of Razor code to take the string from our text area and render it in the template.
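Something like this, assuming your text area's property alias is fileContent:

    @inherits Umbraco.Web.Mvc.UmbracoTemplatePage
    @{
        Layout = null;
        // Serve this page as plain text (explained below)
        Response.ContentType = "text/plain";
    }
    @Model.Content.GetPropertyValue("fileContent")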

Line no. 5, however, may take some explaining. This line tells Umbraco to set this page's content type header to plain text. Web servers generally send these extra bits of information with the web page to tell the user's browser what type of content it's dealing with, so it can be rendered properly. For instance, it would obviously be incorrect for the browser to assume that HTML files and PNG images should be rendered in the same way!

Umbraco's default content type header is text/html, so we need to change it to text/plain so that our clients know they're dealing with a plain text file.

Now we can create the robots.txt file in our content tree and add our content to it.

3. Configure Umbraco to recognise the "robots.txt" URL

Once you've created a "robots.txt" file with your new document type in the backoffice, and you try to access it on www.mywebsite.com/robots.txt, you may see a 404 page, a blank page, or something else depending on how your web server is configured. This is because Umbraco doesn't intercept URLs with extensions like .txt by default. You'll need to configure your site to intercept the request to /robots.txt and send it to your content node.

The good news is that Umbraco provides an out-of-the-box solution for this. Did you know that Umbraco comes with a URL rewriting module? You can easily configure it to intercept a URL with one line of XML configuration.

First you'll need to find the URL that Umbraco will have generated for you by clicking on your content node and going to the "Properties" tab.

Showing Umbraco's automatically generated URL in the backoffice

You see here that Umbraco's auto-generated URL for me is /robotstxt. You'll notice that even though there's a dot in the name of this content node, Umbraco doesn't add one to the URL for me.

Now, we need to open up Umbraco's URL rewriting config in our editor and add a line to rewrite the /robots.txt URL.

The config file is located at ~\Config\UrlRewriting.config. You can find an example from a live Umbraco site here.

And here's the line that we have to add:
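Based on the same pattern as my sitemap.xml rule (your destinationUrl may differ - it should match Umbraco's auto-generated URL for your node):

    <add name="robotstxt-rewrite"
      virtualUrl="^~/robots.txt"
      destinationUrl="~/robotstxt"
    />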

Name can be set to whatever you want, as long as it's unique in the file. virtualUrl is the URL people will enter to get to your page (represented as a regular expression). destinationUrl is the URL we're rewriting to (Umbraco's auto-generated URL from the properties tab in the backoffice).

Et voilà

A robots.txt file shown in a web browser

]]>
<![CDATA[Import any CSV File into Umbraco with the Umbraco LINQPad Driver]]>
https://glcheetham.name/2016/07/23/import-any-csv-file-into-umbraco-with-the-umbraco-linqpad-driver/
Sat, 23 Jul 2016 11:59:15 GMT

I've recently successfully imported data from a large CSV file into Umbraco using Shannon Deminick's useful Umbraco LINQPad driver. Here's a high-level overview of how I did it:

Flow chart of the Umbraco CSV Import Process

You can import any valid CSV file through this process and, in fact, it would work with any data format parseable in .NET - LINQPad provides a full scriptable .NET environment.

Setting up LINQPad

You'll need to install the Umbraco LINQPad driver so that your local LINQPad can connect to your Umbraco installation. Note that you can only connect to a local Umbraco installation, not a copy running on a remote server. You'll have to deploy your data using an Umbraco package.

  1. Download and install LINQPad from http://www.linqpad.net/.
  2. Download UmbracoLinqPad.lpx from GitHub.

Now, you should be able to follow Shannon's documentation to get LINQPad set up (taken from the README.md in the GitHub repository):

  • In LinqPad, Click "Add Connection"
    Location of the Add Connection button in LINQPad
  • Click "View more drivers"
  • Select "Browse"
  • Add the UmbracoLinqPad.lpx file that you've downloaded
  • Choose your Umbraco installation root folder

LINQPad should then have a connection available for your local Umbraco installation, as shown in the screenshot above.

Writing the Import Script

You'll have to write a little custom .NET code in LINQPad to take your CSV file and create the content nodes in Umbraco using the Umbraco API (specifically the content service).

LINQPad provides a full scriptable .NET environment for you to play with. If you've ever done this kind of thing in .NET or worked with LINQPad before, you should be able to improvise.

Though I can't provide a one-size-fits-all solution here, here's a little example code to get you started. Playing with code and your particular CSV file's schema should yield some quick results for you.

Make sure the "language" is set to "C# Program" in the dropdown list at the top of the editor in LINQPad. This will enable you to write a full program instead of a single C# statement inside LINQPad.
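Here's the kind of skeleton I'd start from - the CSV layout, property alias, parent node ID and doctype alias below are all stand-ins, and I'm assuming the driver has booted Umbraco's ApplicationContext so the ContentService is reachable in the usual way:

    void Main()
    {
        // Stand-in values - adjust for your own content tree and CSV schema
        var csvPath = @"C:\data\products.csv";
        var parentId = 1063;              // node to create the new content under
        var contentTypeAlias = "product"; // your document type alias

        // Assumption: the LINQPad driver has booted the ApplicationContext
        var contentService = ApplicationContext.Current.Services.ContentService;

        // Skip the header row, then create one content node per CSV line
        foreach (var line in File.ReadAllLines(csvPath).Skip(1))
        {
            var fields = line.Split(',');
            var node = contentService.CreateContent(fields[0], parentId, contentTypeAlias);
            node.SetValue("price", fields[1]);
            contentService.SaveAndPublishWithStatus(node);
        }
    }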

Deploying from your local Umbraco to a live environment

Now, because LINQPad can't connect to a live Umbraco environment, you'll have to deploy the new content nodes you've created to your live environment manually. I did this with a package.

  1. Open your local Umbraco backoffice. Expand Developer > Packages
  2. Right-click on "Created Packages" and click "Create Package". Fill in the package meta-data like Name, Version, License, etc. (there are required fields)
  3. Select the content nodes you'd like to deploy in the right-hand pane.

Picture of the Umbraco package creation interface

  • Make sure you include any document types the new content nodes depend on when you're creating the package.

  4. Click "save", and then click "publish"
  5. A download link will now be available in the "Package Properties" tab. Download the package and remember where you left it.

Installing the package on the server

This should be as easy as installing any other package.

  1. Go to the backoffice in your live environment and expand Developer > Packages
  2. Click on "Install Local Package" and follow the on-screen instructions

Picture of the Umbraco package install process

Conclusion

This process is very flexible and has allowed me to import a diverse range of information into Umbraco. There are some drawbacks, however:

Umbraco LINQPad driver has Certain Limitations

The Umbraco LINQPad driver is very handy, but certain parts of the Umbraco API depend on a WebContext and an HTTP/web server environment which the LINQPad driver cannot provide.

Drawbacks of the Umbraco LINQPad driver

The Umbraco Packaging Process is Slow and Unreliable with Large Amounts of Data

I think my package was on the brink of being too big, because the package install and content publish process took aaages. At one point I thought it had hung and refreshed the screen - a mistake, because the install and publish process relies on the browser session not being refreshed halfway through.

Thanks for reading! If you enjoyed this post and would like to get in touch, find me on Twitter @glcheetham.

]]>
<![CDATA[Introducing Captain Botbeard - A Swashbuckling Twitter Bot]]>
https://glcheetham.name/2016/07/10/introducing-captain-botbeard-a-swashbuckling-twitter-bot/
Sun, 10 Jul 2016 10:36:09 GMT

@glcheetham Learnin' without thought be labor lost; thought without learnin' be perilous - Confucius

— Captain Botbeard (@CaptainBotBeard) 5 July 2016

I've recently become interested in how readily users would interact with my software via social media platforms. Why? Because that's how most people consume and create content now - through a social media app on their smartphone or tablet. So I created Captain Botbeard, a simple Twitter bot who will take whatever you tweet him, translate it into "pirate-speak", and then tweet it back to you.

I'm beginning to understand that my users are now almost all on Facebook or Twitter. I saw this presentation the other week and it makes the facts abundantly clear.

  • Soon, the smartphone will be the universal device. Every human will have one.
  • Internet users now spend more than half of their time online inside mobile apps.
  • A third of mobile app use is Facebook.

Bar chart showing distribution of internet time between mobile app, desktop, and browser

So, if I really want to engage with people, the browser isn't the place to do it. I should be doing it inside of a social network.

Enter Botbeard. Here are two reasons why he's better as a Twitter bot than as a web app:

  • He's inherently viral: If somebody uses him, all their friends are notified via their timeline.
  • He's easy to interact with: People can interact with Botbeard without leaving the context of their social media app, and they already know how to interact with him because they do so via tweet.

I'm still in the process of figuring out how I can apply any of this to my day-to-day work, but I can't wait until Talk like a Pirate Day, because Botbeard will go down a treat!

Oh, and you can see his source code on GitHub:

https://github.com/glcheetham/pirate-translator
https://github.com/glcheetham/captain-botbeard

]]>
<![CDATA[What I'm Looking to Learn at Umbraco CodeGarden 2016]]>
https://glcheetham.name/2016/06/04/what-im-looking-to-learn-at-umbraco-codegarden-2016/
Sat, 04 Jun 2016 15:12:08 GMT

Guess what? I have a ticket to Code Garden 2016. Jealous?

I've got a ticket to CodeGarden!

What? no?!? Wait, what - You don't even know what that is?!? What's that? - Sit back down sir or I'll stop the bus and ask you to leave?!?!?

*Ahem*

Well, in just about a week, I'll be flying out to Denmark to meet the Umbraco team at their HQ for the "infamous" yearly Umbraco developer conference, Code Garden.

There's a lot to be excited about. Spread over a 3-day conference, there are 27 (!!!) sessions, 3 full workshops, "all the coffee you can handle", the intriguingly-named Umbraco Bingo (described as "infamous", "unmissable") and, of course, everyone who's anyone in the Umbraco community will be handily located nearby, easily accessible for me to annoy with my endless questions.

That sounds awesome.

My team's been using Umbraco now for around a year and a half. We've chosen it for the rebuild of some of our biggest websites - and are even considering doing our company's colossal CRM rebuild in it.

We've definitely invested a lot in Umbraco - but there are still some big question marks in our heads concerning its use and how it will fit into our workflow as we continue to build projects with it.

So it's my job at Code Garden - between all the bingo, socialising, and 'pull requests' - to find answers to these questions.

  • How would Umbraco fit into our current CI/deployment workflow? We're using AppVeyor CI and WebDeploy on non-Umbraco projects, but Umbraco document types/media types/etc. aren't recorded in the source code, so they can't be turned into database records by the CI server.

We've been deploying to our servers with packages for now, but for us to use Umbraco on some of our bigger projects we really need a way to use the CI server.

  • How can we make sure new clones have a working copy of the site?
    We generally like to ensure fresh clone-ers already have everything they need to get a working copy of the site - but again because we can't commit the document types/media types/etc in Umbraco, new clones must be provided with a package or a .sql file before their copy will work.

  • How can we avoid committing Umbraco into source control? We're using the Umbraco NuGet package, but when a package restore is run on a new clone, a PowerShell script is run that generates a load of config files (including Web.config). We could just use the Umbraco.Core package and commit the rest of the CMS, but we'd really prefer not to.

These problems must already have been solved - but we're either just too stupid to figure it out or too introverted to have met the right people who can tell us how to solve them.

See you at Code Garden!

]]>
<![CDATA[Comparing Objects in Chai Doesn't Work as Expected?]]>
https://glcheetham.name/2016/05/30/comparing-objects-in-chai-doesnt-work-as-expected/
Mon, 30 May 2016 09:37:05 GMT

Here's the gotcha of all gotchas.

If you compare two objects in Chai with .equal, you're not going to get what you expected.

These two objects are clearly equal; however, the test fails:
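For example (the objects here are stand-ins):

    var expect = chai.expect

    var expected = { name: "Fluffy", species: "cat" }
    var actual   = { name: "Fluffy", species: "cat" }

    it("should equal", function() {
        // Fails: .equal asserts strict (===) equality, and these are
        // two different object references
        expect(actual).to.equal(expected)
    })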

× should equal
  PhantomJS 2.1.1 (Windows 8 0.0.0)
expected { Object (name, species, ...) } to equal { Object (name, species, ...) }

What's going wrong here? Use to.deep.equal. An equivalent method is to.eql, but I personally find the latter more readable.

When comparing objects, Chai needs to know that it must traverse the objects and compare nested properties. That's why the deep flag is needed for object comparison.
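Continuing the example above, either of these assertions passes:

    expect(actual).to.deep.equal(expected) // passes
    expect(actual).to.eql(expected)        // also passes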

I hope you enjoyed this article. If you have anything you'd like to add, or would otherwise like to get in touch, you can do so on Twitter @glcheetham.

]]>
<![CDATA[Implementing IOC in Umbraco 7 - Inversion of Control Like a Boss]]>
https://glcheetham.name/2016/05/27/implementing-ioc-in-umbraco-unit-testing-like-a-boss/
Fri, 27 May 2016 21:45:24 GMT

I've devoted a significant amount of time this week to finding the best way to implement inversion of control for proper dependency injection in Umbraco.

After much brow-furrowing and much to-ing and fro-ing with some of the Umbraco devs (shout out to @clausjnsn for all the help) I think I have it.

I'm using Autofac because I've used it before, and @shazwazza's original IOC documentation on our.umbraco used it. If you're confident with your container's API, you should be able to follow my instructions with any container you like.

Step 1: Setup

Step one is to add some custom startup code to Umbraco in an OnApplicationStarted method. For those familiar with dependency-injection lingo, this is our application's composition root - where we will register the dependencies with our container. My preferred way to do this is by implementing IApplicationEventHandler. The mere presence of such a class somewhere in your code should cause your OnApplicationStarted method to be run, well... on application started. If that makes any grammatical sense.
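A skeleton of that event handler (the class name is arbitrary):

    using Umbraco.Core;

    public class MyAppStartupHandler : IApplicationEventHandler
    {
        public void OnApplicationInitialized(UmbracoApplicationBase umbracoApplication, ApplicationContext applicationContext) { }

        public void OnApplicationStarting(UmbracoApplicationBase umbracoApplication, ApplicationContext applicationContext) { }

        public void OnApplicationStarted(UmbracoApplicationBase umbracoApplication, ApplicationContext applicationContext)
        {
            // Composition root: we'll register our dependencies here in step 2
        }
    }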

Step 2: Injecting an IContentService

Let's suppose we're doing some fancy stuff with Umbraco and have written a class that depends on ContentService. Nothing too fancy that the docs won't be able to help you with.

ApplicationContext.Current.Services.ContentService.CreateContent(name, parentID, contentTypeAlias);

Your trusty copy of Mark Seemann's Dependency Injection in .NET should have just dislodged itself from the shelf, wiggled free, and hurled itself at the back of your head at the sight of the above code. I hope it hurt. How could you be so naive? You've just gone and created an explicit dependency on ApplicationContext.Current.Services.ContentService. How are you going to unit test that? What happens when your boss tells you to rewrite the whole thing with EntityFramework and you have explicit references to Umbraco services all over the place?

Since the content service implements IContentService anyway, you should be writing your own wrappers around Umbraco's core services, and then using your dependency injection mojo to inject an instance of IContentService into them. Here's an example of a "wrapper" class.
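Something like this - trimmed to a single method for illustration:

    using Umbraco.Core.Models;
    using Umbraco.Core.Services;

    public class MyAppContentService
    {
        private readonly IContentService _contentService;

        // The dependency arrives through the constructor, so a unit test
        // can hand in a mocked IContentService instead of the real thing
        public MyAppContentService(IContentService contentService)
        {
            _contentService = contentService;
        }

        public IContent CreatePage(string name, int parentId, string contentTypeAlias)
        {
            return _contentService.CreateContent(name, parentId, contentTypeAlias);
        }
    }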

Notice how it doesn't depend on Umbraco's ContentService directly? It only depends on the interface IContentService. Providing our own implementation of IContentService for unit testing should now be a piece of cake. Depending on this class will appease the SOLID gods and guarantee you a place in programmer paradise.

We just need to make sure that our IContentService is being injected properly at our app's composition root, which takes only a few lines of code in our IApplicationEventHandler implementation.
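Inside the OnApplicationStarted method from step 1, that looks roughly like:

    var builder = new ContainerBuilder();

    // Hand out Umbraco's own ContentService whenever a constructor
    // asks for an IContentService
    builder.RegisterInstance(applicationContext.Services.ContentService)
           .As<IContentService>();
    builder.RegisterType<MyAppContentService>();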

This shouldn't be spectacular to you, but all we're doing is telling Autofac to use applicationContext.Services.ContentService, the instance of IContentService that's helpfully setup by Umbraco, whenever one of our classes asks for an IContentService as a dependency, like our MyAppContentService wrapper does in the constructor.

Lastly, we tell ASP.NET MVC to use Autofac as a dependency resolver.

    // Set up MVC to use Autofac as a dependency resolver
    var container = builder.Build();
    System.Web.Mvc.DependencyResolver.SetResolver(new AutofacDependencyResolver(container));

Now you can live out the rest of your days in the endless bliss of dependency injection paradise.

OR CAN YOU?

Step 3: Trouble in Para-DI-se

If you're smart, you will have noticed that the above code has broken the Umbraco backoffice. If you're not smart, you will have shipped your code and upset your colleagues/clients/boss/deity. Well done. Yes, I'll have fries with that.

It looks like the code that sets the DependencyResolver for ASP.NET MVC causes the Umbraco backoffice to freak out when it tries to work with API controllers. Or something like that.

Happily, we can resolve this issue with very few lines of code by registering all the MVC controllers and Web API controllers in Umbraco's assembly with our IOC container. Autofac has quite a slick API for doing this.

In case you missed it, we added two LOC:

    builder.RegisterControllers(typeof(Umbraco.Web.UmbracoApplication).Assembly);
    builder.RegisterApiControllers(typeof(Umbraco.Web.UmbracoApplication).Assembly);

Step 4: .... Profit?

That's all there is to it, folks: IOC in Umbraco. I've experienced the pain of the many gotchas during this process so that you don't have to.

Thanks for reading! If you enjoyed this post, have anything to add, or would otherwise like to get in touch you can do so on Twitter @glcheetham.

]]>
<![CDATA[Killing Switch Statements in React with the Strategy Pattern]]>
https://glcheetham.name/2016/05/20/killing-switch-statements-in-react-with-the-strategy-pattern/
Fri, 20 May 2016 19:29:36 GMT

I saw this article about building multi-step forms in React recently - and although it's a very well-written and organised article, I was concerned about the use of the switch statement in the render() method of a React component here:

// file: Registration.jsx

var React         = require('react')
var AccountFields = require('./AccountFields')
var SurveyFields  = require('./SurveyFields')
var Confirmation  = require('./Confirmation')
var Success       = require('./Success')

var Registration = React.createClass({
	getInitialState: function() {
		return {
			step: 1
		}
	},

	render: function() {
		switch (this.state.step) {
			case 1:
				return <AccountFields />
			case 2:
				return <SurveyFields />
			case 3:
				return <Confirmation />
			case 4:
				return <Success />
		}
	}
})

module.exports = Registration

It's entirely possible that the writer of this article intended the switch statement here as an example, or pseudo-code. However, as proven by reams of Stack Overflow questions, many programmers are quite happy to copy-paste code directly out of internet tutorials and into their production software. For this reason, I feel it's necessary to bring design patterns to the table when talking about code on the internet.

What's wrong with the switch statement?

I'm of the opinion that the presence of a switch statement, or an if-else, is a very pungent code smell. Why? A programmer who uses an if-else or a nested conditional is attempting to model a functional problem in an imperative style.

Anything that could be represented by a flow chart, like our conditional rendering of different React components in the prior example, should be approached as a functional problem. This is because every "step" in a flow chart is the representation of a function, or an "operation".

Anything that could be represented as a JSON string or an XML document (or a POCO, POJO, whatever your language calls them) should be approached through traditional object-oriented, imperative means. The two styles, like oil and water, don't mix very well.

Ever seen arrowhead code? If so, you know what happens when you try to mix the styles. Brittle code. Crossed eyes. Missed deadlines. Bad stuff.

Fixing the switch in the example

Yeah? If you're so smart, what do you use instead of a switch statement?

Answer: The Strategy Pattern

Here's a robust design pattern that can completely circumvent the need for a switch statement in most everyday cases.
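Here's a deliberately contrived sketch of the pattern in plain JavaScript:

var shippingStrategies = {
	// Each key maps to a function implementing one case
	standard: function(order) { return 3.99 },
	express:  function(order) { return 9.99 },
	free:     function(order) { return 0 }
}

// No switch statement - look the strategy up and call it
function getShippingCost(strategyName, order) {
	return shippingStrategies[strategyName](order)
}

getShippingCost('express', {}) // => 9.99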

If you can't immediately see the benefits of the strategy pattern from this pointless, contrived example, then try it next time you have to deal with some hardcore conditional logic. You'll be impressed.

Applying the strategy pattern to a React component

I'm not going to make an assumption about how you manage your state here, but I'm going to present it as a plain old JavaScript object to make it easier to see what I'm getting at.
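A sketch, reusing the components from the earlier example and holding the state in a plain object:

var state = {
	// The pages array replaces the switch statement's cases
	pages: [AccountFields, SurveyFields, Confirmation, Success],
	// ActiveComponent always points at one of those component functions
	ActiveComponent: AccountFields
}

var Registration = React.createClass({
	render: function() {
		return <state.ActiveComponent />
	}
})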

Do you see how much clearer the component is? All we're doing is calling state.ActiveComponent, which is actually a function that generates React elements for us.

All you have to do, dear reader, is figure out how to iterate the index of the page array and apply the changes to your state, which I couldn't do without getting too opinionated about your state management framework.

Hopefully I've now provided you with everything you need to nuke switch statements in your code and start filling the internet with more maintainable, readable code.

]]>
<![CDATA[How to Notify New Relic of Deployments From AppVeyor]]>
https://glcheetham.name/2016/05/07/how-to-notify-new-relic-of-deployments-from-appveyor/
Sat, 07 May 2016 09:58:24 GMT

During app deployments, previous company policy required us to prepare a sacrifice so that the favour of the gods might shine upon us. I was first asked to see if this process could be automated to save overhead - but I soon realised that we could ensure deployments went smoothly without paying the salaried workers in the embalming-and-sacrificing department at all!

Now we use New Relic to get error-rate metrics, and with a POST request to the New Relic API we can even track error rates between deployments, and since the last one. It's much easier to discern our fortunes this way than with haruspicy.

To set up deployment tracking in New Relic when you deploy with AppVeyor, you just need to add this POST request into a post-deploy PowerShell script.

To use the API, you'll need to first generate an API key by going to Account Settings -> Integrations -> API Keys in the New Relic dashboard.

Here's the PowerShell one-liner that we ended up using to send the POST request

Invoke-RestMethod -Uri https://api.newrelic.com/deployments.xml -Method Post -Headers @{"x-api-key"="YOUR_API_KEY"} -Body @{"deployment[application_id]"="YOUR_APPLICATION_ID"; "deployment[description]"="Your Application Deployment"}

The application id is generally just the name of the application in New Relic.

This is very simple, and will work just fine when added as a post-deploy script in AppVeyor. Here's an example appveyor.yml with the post-deploy script added.

branches:
  only:
  - master
  - develop
configuration:
- Release
force_https_clone: true
build:
  # Your build configuration
deploy:
- provider: WebDeploy
  # Your deploy configuration
after_deploy:
- ps: Invoke-RestMethod -Uri https://api.newrelic.com/deployments.xml -Method Post -Headers @{"x-api-key"="YOUR_API_KEY"} -Body @{"deployment[application_id]"="YOUR_APPLICATION_ID"; "deployment[description]"="Your Application Deployment"}

Notice how the PowerShell script is added under after_deploy?

But wait! It gets much better. We can take advantage of AppVeyor's built-in environment variables to notify New Relic of the build number along with the deployment.

It just takes a little of PowerShell's easy string interpolation and we can inline the environment variables provided by AppVeyor into the deployment description.

Invoke-RestMethod -Uri https://api.newrelic.com/deployments.xml -Method Post -Headers @{"x-api-key"="YOUR_API_KEY"} -Body @{"deployment[application_id]"="YOUR_APPLICATION_ID"; "deployment[description]"="Build $env:APPVEYOR_BUILD_ID commit ID $env:APPVEYOR_REPO_COMMIT"}
]]>
<![CDATA[Reconfigure a Project to Use IIS Express in Visual Studio]]>
https://glcheetham.name/2016/04/29/reconfigure-a-project-to-use-iis-express-in-visual-studio/
Fri, 29 Apr 2016 14:38:00 GMT

Apparently, back in the day, they had enough spare RAM on their development machines to run a full copy of IIS.

We're not so privileged today, but if you inherit one of these projects, you may end up looking at error messages like this:

MyProject.csproj : error  : The Web Application Project MyProject is configured to use IIS.  Unable to access the IIS metabase. You do not have sufficient privilege to access IIS web sites on your machine.

Or this:

MyProject.csproj : error  : The Web Application Project MyProject is configured to use IIS.  The Web server 'http://www.myproject.local/' could not be found.

Because on your weedy i7 CPU, bogged down by too many background processes, you've opted not to install a full copy of IIS and use IIS Express instead.

Projects that are configured to use IIS won't even load on machines that don't have it installed, but fortunately, it's easy to open up the XML in the csproj file and make the changes that you need to reconfigure the project to use IIS Express instead.

First, you'll need to add the UseIISExpress property and set it to true in the first PropertyGroup in your csproj.

<UseIISExpress>true</UseIISExpress>

Then, under Project Extensions -> VisualStudio -> FlavorProperties -> WebProjectProperties, make the following changes.

<WebProjectProperties>
    <UseIIS>False</UseIIS>
    <AutoAssignPort>True</AutoAssignPort>
    <DevelopmentServerPort>3000</DevelopmentServerPort>
    <DevelopmentServerVPath>/</DevelopmentServerVPath>
    <IISUrl>http://localhost:3000</IISUrl>
    <NTLMAuthentication>False</NTLMAuthentication>
    <UseCustomServer>False</UseCustomServer>
    <CustomServerUrl></CustomServerUrl>
    <SaveServerSettingsInUserFile>False</SaveServerSettingsInUserFile>
</WebProjectProperties>

Clicking 'reload project' in your solution explorer will now load the project for you. Of course, you can change the port number to anything you want (just make sure that they match under both DevelopmentServerPort and IISUrl)

]]>
<![CDATA[Compile TypeScript and Package with Browserify in a Single Gulp Task]]>
https://glcheetham.name/2016/04/25/compile-typescript-package-browesrify-gulp/
Mon, 25 Apr 2016 13:11:35 GMT

OK - the topic of packaging your js with browserify using Gulp has been done to death. However, integrating TypeScript into this workflow can be a little bit of a challenge, because browserify doesn't take a regular gulp file stream as input.

tldr;

Just want to copy and paste some code and get on with your life? Here's the task that I ended up writing:

var gulp = require('gulp');
var buffer = require('vinyl-buffer');
var source = require('vinyl-source-stream');
var browserify = require('browserify');
var tsify = require('tsify');

gulp.task('build:ts', function () {
    return browserify()
        .add('./my-app/App.ts')
        .plugin(tsify)
        .bundle()
        .on('error', function (error) { console.error(error.toString()); })
        .pipe(source('bundle.js'))
        .pipe(buffer())
        .pipe(gulp.dest('./ReviewForm/'));
});

This method relies on a few plugins: tsify, vinyl-buffer, and vinyl-source-stream.

Explanation

If you're keen-eyed (or didn't install the dependencies after you copied and pasted the code) you'll spot the three things of interest in this gulp task: tsify, buffer(), and source().

tsify

The TypeScript gets compiled in this recipe using a browserify plugin, which hooks into the browserify packaging process - in this case, to compile TypeScript. Luckily, there exists a pre-written TypeScript plugin for browserify called tsify.

vinyl-source-stream

Gulp can perform operations on special types of node streams called vinyl file streams. Essentially, vinyl is a format for describing files (more info here). However, browserify().bundle() returns a regular text-based node stream. vinyl-source-stream takes one of these conventional node text streams and returns a vinyl file stream (that gulp can deal with).

vinyl-buffer

Technically, in this task, vinyl-buffer isn't needed. However, if you'd like to pipe your browserified stream into another plugin such as uglify(), you'll need to run it through vinyl-buffer first. Otherwise, you'll find yourself at the mercy of an unhandled error.

]]>
<![CDATA[How to Get Syntax Highlighting for TypeScript in GVIM on Windows]]>
https://glcheetham.name/2016/04/06/syntax-highlighting-typescript-vim-windows/
Wed, 06 Apr 2016 12:06:57 GMT

Typescript code in vim without syntax highlighting
urghh... Looks terrible, doesn't it? It's not provided by default, but if you need to work with a TypeScript project in vim then you'll certainly need to set up syntax highlighting.

The good news is that it's very easy to get going with a vim addon called typescript-vim. This link is all you need if you already have a vim addon manager, but otherwise I'd recommend you install pathogen.

To install pathogen on Windows, you'll need to download pathogen.vim from https://github.com/tpope/vim-pathogen/ and place it under %USERPROFILE%\vimfiles\autoload (for me, that was C:\Users\glcheetham\vimfiles\autoload\pathogen.vim). The easiest way to do this is to execute the following in a shell:

cd %USERPROFILE%
git clone https://github.com/tpope/vim-pathogen/
mkdir vimfiles\autoload
copy vim-pathogen\autoload\pathogen.vim vimfiles\autoload\

Next, to get pathogen to load on vim startup, you'll need to place the vimscript code at the top of your vimrc (or %USERPROFILE%\_gvimrc for me)

execute pathogen#infect()
filetype off
syntax on
filetype plugin indent on

Don't mind the microbe metaphors, that's just the author's sense of humour.

Now, to install the syntax highlighting plugin, you'll need to place the files from https://github.com/leafgarland/typescript-vim into %USERPROFILE%\vimfiles\bundle. Again, the easiest way to do this is git clone:

cd %USERPROFILE%\vimfiles\bundle
git clone https://github.com/leafgarland/typescript-vim

And... that's it! Pathogen should now helpfully load the typescript-vim package for you whenever vim starts up - and you'll be able to carry on working without going cross-eyed.

Typescript code in vim with syntax highlighting

]]>
<![CDATA[How To Test JsonResult in ASP.NET MVC]]>
https://glcheetham.name/2016/03/28/test-jsonresult-asp-net-mvc/
Mon, 28 Mar 2016 16:00:40 GMT

Here's a quick NuGet of wisdom! (Haha, just kidding, there is no wisdom in NuGet)

Writing a unit test for an action that returns Json() is a bit of a head-scratcher for some, but all it requires is a simple cast operation.

Example controller:

    public ActionResult Index()
    {
        return Json(productsData, JsonRequestBehavior.AllowGet);
    }

Example test: (NUnit 3.x)

    [Test]
    public void GetActionReturnsProductsAsJson()
    {
        var mockProductsData = new List<IProduct> { /* ... */ };
        productsController.setData(mockProductsData);
        JsonResult result = productsController.Index() as JsonResult;
        Assert.That(
            result.Data as List<IProduct>,
            Is.EqualTo(mockProductsData));
    }

There are two casts going on here:

  • The first cast (productsController.Index() as JsonResult) makes sure that we're dealing with a JsonResult in the test and not an ActionResult.
  • The second cast result.Data as List<IProduct> takes the Data property of the JsonResult and casts it to a List of IProduct that we can compare against our mock data.

Note that you could have achieved the same thing using the alternative cast syntax:

JsonResult result = (JsonResult)productsController.Index();

But the as keyword is preferred, because it will return null instead of throwing an exception if the cast is not possible.

]]>
<![CDATA[Book Review: Magicians Of The Gods by Graham Hancock]]>
https://glcheetham.name/2016/03/24/magicians-of-the-gods-review/
Thu, 24 Mar 2016 19:03:00 GMT

The hardback edition of Magicians of The Gods next to a cup of tea

This is the first book on "alternative" history that I've read, and it's proven to be a compelling and rewarding read.

I picked up this first-edition hardback copy of Magicians Of the Gods not long after it first went to press. I had seen one of the author's presentations on YouTube and been intrigued: his theories were not only attractive to the curious and open mind, but also unusually well-reasoned and internally consistent for a supposed "fringe" scientist.

The proposal is that a technologically advanced "ancestor" civilisation developed and flourished during the end of the last Ice Age 12,000 years ago - a time when, according to the present school of thought, our hunter-gatherer ancestors' most accomplished feats were little more sophisticated than cave paintings. (For a comparative timescale, the earliest Egyptian dynastic period started in 3150 BC).

This civilisation was then purportedly wiped out during the cataclysm following multiple comet fragments impacting the North American ice sheet, causing global temperatures to plummet (starting the geological period known as the Younger Dryas) and a dramatic, almost instantaneous rise in sea levels due to the vast amount of ice meltwater.

According to Hancock's research, the survivors of this ancient, advanced civilisation then travelled from their sunken homeland and settled amongst hunter-gatherer tribes at several key locations (e.g. Egypt, Mesopotamia, Peru) in an attempt to spread the "seed" of civilisation and ensure the survival of their knowledge and culture. Their legacy is remembered in numerous myths and legends, and they were, according to the author, involved directly in the construction of several surviving ancient monuments and megalithic sites around the world.

It is firstly important to note that Graham Hancock is not a scientist, archaeologist, or geologist - his lifelong profession is journalism. He does not follow the established scientific process of publishing peer-reviewed material in scientific journals, and as a result, many mainstream scientists and media commentators are quick to dismiss anything produced by Hancock as pseudo-science.

However, within minutes of opening the book, the level of the author's commitment to and love for his work, which could easily rival that of any decorated scientist, is made glaringly apparent to the reader. Hancock has, in preparation for this book, personally travelled to ancient sites on almost every continent of the world. He has interviewed venerated archaeologists and geologists to build his case in support of the ancient "ancestor" civilisation. While being viewed as a "crackpot" by the archaeologists at work excavating the sites he visits, Hancock appraises and catalogues the structures and reliefs in minute detail - whilst, through the words he has chosen to commit to the page, communicating a strong sense of wonder and respect for the ancient craftsmen whose work he is studying.

In fact, Hancock's lack of any formal initiation into the scientific community has perhaps given him a strength where many of his critics see a debilitating weakness - a strength that I can personally identify with, being a self-taught software developer without any formal qualifications, and with only the passion and dedication I have invested in my past body of work available to demonstrate my efficacy.

What Hancock has presented here is not a cold analysis of the factual evidence. He is a storyteller. He uses colourful language to elucidate his arguments. Skills he has obviously acquired during his journalistic career have surfaced as tools to help organise and present information from the disparate sources that make up the bulk of the evidence cited in the book. The myths, oral traditions, and architecture left by the world's most ancient cultures are masterfully woven together to represent a single continuity that Hancock recognises as the legacy of his theorised ancestor civilisation.

He discusses his beliefs that the civilising figures remembered by our ancestors as the "sages" or "heavenly teachers" who came after the "great flood" are in fact the last survivors of a technologically advanced civilisation who guided our primitive ancestors in the construction of great monuments, such as the Pyramids at Giza and the megalithic sites recently uncovered at Göbekli Tepe in Turkey.

By looking at the many photographs taken by his wife, Santha, of the author gazing in awe at the immense blocks of stone left as parts of these great structures, it is easy to imagine a dialogue with the "sages" whose work he is appraising. He seems to regard it as his spiritual, almost divinely-ordained mission to decrypt the messages left for us by this extinct civilisation, and puts forward compelling evidence that such monuments encode, through their position on the ground relative to the arrangement of certain constellations in the night sky, the date at which the fatal comet impacted the earth.

When reading a book like this, I'm inclined to maintain constant vigilance against dubious and unsubstantiated claims. The author, however, in true journalistic style, provides through a generous scattering of footnotes an external reference for almost every claim he makes. Theories such as the Younger Dryas Comet Impact Hypothesis and the Orion Correlation Theory, which Hancock draws upon extensively, are currently debated openly within the mainstream scientific community. And where Hancock does ask the reader to take a small leap of faith, he presents his motion as an open-ended question rather than as a factual truth. It is this that I greatly admire about the author, and it is what makes the attacks that are often made upon him seem grossly unjustified.

Regardless, beyond the simple black-and-white delineations of "truth" and "falsehood" that are so endlessly debated, there seems to be something more to this argument that Hancock is presenting. There's a prevailing sense that the "establishment" overshadows all scientific work, and that real discovery only ever happens when you willingly fly in the face of it. We can draw parallels with the life of the meteorologist Alfred Wegener and his now-accepted early twentieth-century theory of continental drift, which was ridiculed by the scientific establishment of the time. It seems that for all of the manpower and funding possessed by our most powerful institutions, we see time and again that it's the lone visionaries who push our understanding of the world forward.

It's precisely because of those who love their work enough to risk humiliation at the hands of their peers that I'm confident we'll one day know the truth about our past. After all, discovery is fuelled not by the constancy of bureaucratic organisation, but by the unbridled power of human curiosity.

]]>