Every now and then I get questions about build/deploy/test automation. Often the questions are asked by the project lead or the customer and contain something like "how much money will it cost?" or "how much time will it consume?". My answer is usually something like…

It will depend on the complexity of the project… What kind of access does the build server have to the environments? What kind of test automation should we implement: unit tests, integration tests? How complex is the project to build? Do we need a lot of configuration transformation and similar things?

Yes, there will be an initial cost, but we will get the money back: it will make our lives easier, minimize the risk of faulty deploys to production, make the release process person-independent and make the project easier to maintain!

The last line, about an easier life and easier maintenance, usually sells the concept. In this post I will show how we can implement deploy automation using products that are open source and easy to configure to fit most projects. The post will mainly focus on how to implement it, so you have a good example to base your arguments on when talking to your project lead or your customer.


If you have read my previous post about continuous integration, you know that I have written about some specific steps in a deploy process, like compiling and moving files over FTP. This post will try to put it all together into a complete example. The example requires that the build server has PowerShell, Robocopy (included in Windows Server 2008 R2) and EPiServer 7 installed, as well as FTP access to the target environment.

Installing the build server

We need a build server which can:

  • Execute our build automatically or manually whenever we want.
  • Create reports about the builds.
  • Notify the project team about faulty builds.

For that we will use Jenkins. Jenkins is open source and installs on most platforms; we will install it on Windows because we will be deploying an EPiServer 7 application. Jenkins is built on Java, so your build server needs to have Java installed. If it doesn't already, download and install it. Make sure the path to the Java executable is added to the system environment variables. Verify that everything went fine by typing:

PS C:\> java -version  
java version "1.7.0_05"  
Java(TM) SE Runtime Environment (build 1.7.0_05-b06)  
Java HotSpot(TM) Client VM (build 23.1-b03, mixed mode, sharing)  

After you have finished the installation of Java, download the Jenkins installation zip file, extract it, go to the extracted folder and execute setup.exe.

PS C:\jenkins-1.476> .\setup.exe  

After the installation wizard finishes, visit http://localhost:8080/ and you will see something like this:

Jenkins is now installed as a Windows service and running on your server. This is a new, even simpler installation process than before. If you have installed previous versions of Jenkins you know that you first had to run it from a war file and then install it as a Windows service through the Jenkins UI. That step seems to be included in the installation program nowadays!
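If you want to confirm the service from PowerShell, something like this should do (I'm assuming the installer registered the service under the name "Jenkins"; the exact name may vary between versions):

```powershell
# List the Jenkins Windows service and its current status.
# -ErrorAction SilentlyContinue keeps the command quiet if the name doesn't match.
Get-Service -Name 'Jenkins' -ErrorAction SilentlyContinue |
    Select-Object Name, Status
```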

Getting the source code

Jenkins has a lot of plugins, and some of them are for getting source code from different source control systems like SVN, CVS, TFS and of course Git. The EPiServer project for this post is hosted at GitHub, and fortunately Jenkins has a GitHub plugin! Install it through the plugin manager: search for "github plugin", check the checkbox, hit "install without restart" and then check "Restart Jenkins when installation is complete and no jobs are running".

Configure the GitHub plugin

Go to “Manage Jenkins” and add the path to git.cmd.

This project is hosted in a public repository at GitHub, so we do not need to set up deploy keys, which can be kind of painful but is possible; this post doesn't cover that topic, but there are a lot of great articles about it out on the internet. Click on "new job", set the job name and choose "Build a free-style software project".

The repository settings for my project look like this:

Try your settings by going back to the start page and clicking "schedule a build" (the clock icon to the right). If everything is set up correctly you should see something like this (with a nice sunny sun :)) after running the build:

The build system

Now we have a build server that gets the source code whenever we want. In most cases we probably want to do something with the fetched source code. There are a lot of solutions that can customize your build process, like Phantom, Psake or NAnt. I can't tell you which one is the best, but I like Psake, maybe because I'm a PowerShell fan and don't like to mess around with XML configuration files. For this application we will use Psake to do the following steps:

  1. Some initial setup: copying and creating needed files and folders.
  2. Compile C# and client script, and minify it.
  3. Run unit tests and break the build if any test fails.
  4. Copy the files needed for running the application.
  5. Transform configuration files for the production environment.
  6. Back up the current application at the production environment (FTP).
  7. Redeploy the application (FTP).
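Before looking at the real scripts, here is a minimal sketch of what a Psake build script looks like: it is plain PowerShell where each step is declared as a named task and -depends chains them together (the task names and bodies here are just illustrative, and the psake module must be loaded for it to run):

```powershell
# Minimal Psake-style script (requires the psake module):
# 'task' declares a named build step, '-depends' declares its prerequisites.
task default -depends test

task compile {
    Write-Host "compiling..."
}

# 'test' runs only after 'compile' has succeeded
task test -depends compile {
    Write-Host "running tests..."
}
```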

In the root of the project I have created a PowerShell script called Build.ps1. Jenkins will run this script after it has pulled the source from GitHub.

param(
    $Environment = 'debug',
    $DeployToFtp = $true
)

function Build() {  
    Try {
        if ($Environment -ieq 'debug') {
            .\Web\EPiBooks\Tools\psake.ps1 ".\Web\EPiBooks\BuildScripts\Deploy.ps1" -properties @{ config='debug'; environment="$Environment" }
        }
        if ($Environment -ieq 'production') {
            .\Web\EPiBooks\Tools\psake.ps1 ".\Web\EPiBooks\BuildScripts\Deploy.ps1" -properties @{ config='release'; environment="$Environment"; deployToFtp = $DeployToFtp } "production"
        }
        Write-Host "$Environment build done!"
    }
    Catch {
        throw "build failed"
    }
    Finally {
        if ($psake.build_success -eq $false) {
            exit 1
        } else {
            exit 0
        }
    }
}

# kick off the build
Build

The script is a wrapper around executing Psake and supplying Psake with our build script. The script takes two parameters, Environment and DeployToFtp. These parameters are set by Jenkins to let Psake and our build script know how to build and deploy the application. As you can see, the execution is wrapped in a try-catch-finally statement; this is just to have better control over which exit codes to return to Jenkins to break or succeed the deploy.
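To make the exit-code contract concrete: a Jenkins "Execute Windows batch command" step is marked as failed when the process it launches exits with a nonzero code, so everything the wrapper knows is funneled into that single number. A small illustration of the pattern (Get-StepExitCode is a made-up name for this sketch, not part of the real scripts):

```powershell
# Hypothetical illustration of the wrapper's exit-code logic:
# run a step, translate success into 0 and any failure into 1,
# which is all Jenkins needs to pass or fail the build.
function Get-StepExitCode([scriptblock]$step) {
    try {
        $null = & $step   # discard the step's output; we only care whether it threw
        return 0          # Jenkins treats exit code 0 as a successful build step
    }
    catch {
        return 1          # any nonzero exit code fails the build
    }
}
```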


Deploy.ps1 is our build/deploy script that does the actual building/deploying. I have tried to put comments into the script to make it more understandable.

# properties that are used by the script
properties {  
    $dateLabel = ([DateTime]::Now.ToString("yyyy-MM-dd_HH-mm-ss"))
    $baseDir = resolve-path .\..\..\..\
    $sourceDir = "$baseDir\Web\"
    $toolsDir = "$sourceDir\EPiBooks\Tools\"
    $deployBaseDir = "$baseDir\Deploy\"
    $deployPkgDir = "$deployBaseDir\Package\"
    $backupDir = "$deployBaseDir\Backup\"
    $testBaseDir = "$baseDir\EPiBooks.Tests\"
    $config = 'debug'
    $environment = 'debug'
    $ftpProductionHost = ''
    $ftpProductionUsername = 'anton'
    $ftpProductionPassword = 'anton'
    $ftpProductionWebRootFolder = "www"
    $ftpProductionBackupFolder = "backup"
    $deployToFtp = $true
}
# the default task that is executed if no task is defined when calling this script
task default -depends local  
# task that is used when building the project at a local development environment, depending on the mergeConfig task
task local -depends mergeConfig  
# task that is used when building for production, depending on the deploy task
task production -depends deploy

# task that sets up needed stuff for the build process
task setup {  
    # remove the ftp module if it's imported
    remove-module [f]tp
    # import the ftp module from the tools dir
    import-module "$toolsDir\ftp.psm1"

    # remove and recreate the folders needed for the build: the deploy package dir and a dated backup dir
    Remove-ThenAddFolder $deployPkgDir
    Remove-ThenAddFolder $backupDir
    Remove-ThenAddFolder "$backupDir\$dateLabel"

    <#
        check if any episerver dlls exist in the Libraries folder. This requires that the build server has episerver 7 installed.
        For this application the episerver dlls are not pushed to source control; if they had been, this would not be necessary.
    #>
    $a = Get-ChildItem "$sourceDir\Libraries\EPiServer.*"
    if (-not $a.Count) {

        # if no episerver dlls are found, copy the episerver cms dlls with robocopy from the episerver installation dir
        robocopy "C:\Program Files (x86)\EPiServer\CMS\7.0.449.1\bin" "$sourceDir\Libraries" EPiServer.*

        # check the last exit code. robocopy returns a number greater than 1 if something went wrong
        if($LASTEXITCODE -gt 1) {
            throw "robocopy command failed"
            exit 1
        }

        # we also need to copy the episerver framework dlls
        robocopy "C:\Program Files (x86)\EPiServer\Framework\7.0.722.1\bin" "$sourceDir\Libraries" EPiServer.*
        if($LASTEXITCODE -gt 1) {
            throw "robocopy command failed"
            exit 1
        }
    }
}

# compile csharp and client script with bundler
task compile -depends setup {  
    # execute msbuild to compile the project
    exec { msbuild  $sourceDir\EPiBooks.sln /t:Clean /t:Build /p:Configuration=$config /v:q /nologo }

    <#
        execute Bundle.ps1; Bundle.ps1 is a wrapper around bundler that compiles client script.
        The wrapper is also executed as a post-build script when compiling in debug mode.
    #>
    # check that the last exit code is ok, else break the build
    if($LASTEXITCODE -ne 0) {
        throw "Failed to bundle client scripts"
        exit 1
    }
}

# run unit tests
task test -depends compile {  
    # execute mspec, supplying the test assembly
    &"$sourceDir\packages\Machine.Specifications.0.5.7\tools\mspec-clr4.exe" "$testBaseDir\bin\$config\EPiBooks.Tests.dll"
    # check that the last exit code is ok, else break the build
    if($LASTEXITCODE -ne 0) {
        throw "Failed to run unit tests"
        exit 1
    }
}

# copy the deployment package
task copyPkg -depends test {  
    # robocopy has some issue with a trailing slash in the path (or it's by design, I don't know), so let's remove that slash
    $deployPath = Remove-LastChar "$deployPkgDir"
    # copy the files required for the deploy package to the deploy folder created at setup
    robocopy "$sourceDir\EPiBooks" "$deployPath" /MIR /XD obj bundler Configurations Properties /XF *.bundle *.coffee *.less *.pdb *.cs *.csproj *.csproj.user *.sln .gitignore README.txt packages.config
    # check that the last exit code is ok, else break the build (robocopy returns greater than 1 on failure)
    if($LASTEXITCODE -gt 1) {
        throw "robocopy command failed"
        exit 1
    }
}

# merge and do config transformations
task mergeConfig -depends copyPkg {  
    # only for production
    if($environment -ieq "production") {
        # first remove the files that will be transformed
        Remove-IfExists "$deployPkgDir\Web.config"
        Remove-IfExists "$deployPkgDir\episerver.config"

        # do the transformation for Web.config using the Config Transformation Tool
        &"$toolsDir\Config.Transformation.Tool.v1.2\ctt.exe" "s:$sourceDir\EPiBooks\Web.config" "t:$sourceDir\EPiBooks\ConfigTransformations\Production\Web.Transform.Config" "d:$deployPkgDir\Web.config"
        # check that the last exit code is ok, else break the build
        if($LASTEXITCODE -ne 0) {
            throw "Config transformation command failed"
            exit 1
        }

        # do the transformation for episerver.config
        &"$toolsDir\Config.Transformation.Tool.v1.2\ctt.exe" "s:$sourceDir\EPiBooks\episerver.config" "t:$sourceDir\EPiBooks\ConfigTransformations\Production\episerver.Transform.Config" "d:$deployPkgDir\episerver.config"
        # check that the last exit code is ok, else break the build
        if($LASTEXITCODE -ne 0) {
            throw "Config transformation command failed"
            exit 1
        }
    }
}

# deploy the package
task deploy -depends mergeConfig {  
    # only if production and the deployToFtp property is set to true
    if($environment -ieq "production" -and $deployToFtp -eq $true) {
        # set up the connection to the production ftp
        Set-FtpConnection $ftpProductionHost $ftpProductionUsername $ftpProductionPassword

        # back up before deploy => by downloading and re-uploading the current web application at the production environment
        $localBackupDir = Remove-LastChar "$backupDir"
        Get-FromFtp "$backupDir\$dateLabel" "$ftpProductionWebRootFolder"
        Send-ToFtp "$localBackupDir" "$ftpProductionBackupFolder"

        # redeploy the application => by removing the existing application and uploading the new one
        Remove-FromFtp "$ftpProductionWebRootFolder"
        $localDeployPkgDir = Remove-LastChar "$deployPkgDir"
        Send-ToFtp "$localDeployPkgDir" "$ftpProductionWebRootFolder"
    }
}

# helper methods
function Remove-IfExists([string]$name) {  
    if ((Test-Path -path $name)) {
        dir $name -recurse | where {!@(dir -force $_.fullname)} | rm
        Remove-Item $name -Recurse
    }
}

function Remove-ThenAddFolder([string]$name) {  
    Remove-IfExists $name
    New-Item -Path $name -ItemType "directory"
}

function Remove-LastChar([string]$str) {  
    # strip the last character (used above to remove trailing slashes for robocopy)
    return $str.Substring(0, $str.Length - 1)
}
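A note on those repeated robocopy checks: robocopy's exit code is a bit mask, not a plain status. 1 means files were copied, 2 means extra files were detected, 4 means mismatches, and 8 or 16 indicate real failures, so the script's greater-than-1 check is a deliberately strict reading of those flags. The repeated checks could be collapsed into a helper like this (Test-RobocopyOk is a name I made up for this sketch):

```powershell
# Hypothetical helper mirroring the script's convention:
# robocopy exit codes 0 (nothing to do) and 1 (files copied) count as success,
# anything above 1 breaks the build.
function Test-RobocopyOk([int]$exitCode) {
    return $exitCode -le 1
}
```

With that in place, each check becomes if (-not (Test-RobocopyOk $LASTEXITCODE)) { throw "robocopy command failed" }.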

Now we have the complete build/deploy script in place, and we need Jenkins to execute it after fetching the source. Of course we have some issues with permissions and execution policies; because of that we need to start a PowerShell process that allows execution of Psake and our build/deploy script. Go to the configuration page, "add a build step", choose "Execute Windows batch command" and enter:

Powershell.exe -noprofile -executionpolicy Bypass -file .\Build.ps1 -env "production"  

Like this:

Now the configuration is done. Go back to the start page, hit "schedule a build" and you should see that sunny sun again!


Jenkins and the GitHub plugin have some nice scheduling features. We can, for example, build to our (imaginary) test environment every time something is pushed to the repository, or we can just set up a schedule for when to build. The syntax is cron-like; 0 * * * *, for example, means every hour.

More information on the scheduling interval syntax can be found by clicking the “?” to the right of the “Schedule”-field.


Jenkins, GitHub and Psake play pretty well together. I had some issues with permissions and execution policies, but it wasn't too hard to figure out. Hopefully it will not be too hard to configure deployment keys for private GitHub repositories either.

I know the example has room for some improvement, like creating and emailing build reports (which can be done with the Email-ext plugin). The FTP deployment task is not optimal, but it works. To get a more reliable solution we could deploy using Web Deploy. The problem with our FTP deployment is that it can be interrupted in the middle of a deploy, leaving the application broken (if the network or FTP server goes down). With Web Deploy, the target machine has a local service that performs the deploy, which minimizes the risk of a deploy being interrupted in the middle of the installation.
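For comparison, a Web Deploy sync could look something like the sketch below. The server address, site name and credentials are placeholders, the target machine would need the Web Deploy agent or handler installed, and I haven't wired this into the Psake script:

```powershell
# Hypothetical Web Deploy call: msdeploy syncs the deploy package to a remote
# IIS site in one operation instead of many individual ftp transfers.
& "C:\Program Files\IIS\Microsoft Web Deploy V3\msdeploy.exe" `
    -verb:sync `
    -source:contentPath="$deployPkgDir" `
    -dest:contentPath='ExampleSite',computerName='https://example-server:8172/msdeploy.axd',userName='deployUser',password='secret',authType='Basic' `
    -allowUntrusted
```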

I think we should consider building and doing deploys using Jenkins and Psake. I like that we have almost full control over the process and that it's almost XML free!

If you are still reading, I think you found this topic interesting; I know I do. Please let me know if you have any thoughts. Thanks for reading!

Full source can be found here.
