Wednesday, April 23rd, 2014

This post provides a guideline for migrating existing jQuery code to newer versions. The newer versions of jQuery contain many breaking changes; the main ones are highlighted below along with possible solutions.

jQuery 1.7

  • jQuery.isNaN() has been removed.

jQuery UI (calendar, date picker) depends on this method.

Solution: Use jQuery.isNumeric() instead.
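For context, jQuery.isNumeric() accepts anything that can be parsed as a finite number. A rough standalone sketch of the same check (illustrative only, not the actual jQuery source):

```javascript
// Illustrative stand-in for jQuery.isNumeric(): true for finite number-like values.
function isNumericLike(value) {
  // Only numbers and numeric strings qualify; reject "", NaN, Infinity, null, arrays, etc.
  return (typeof value === "number" || typeof value === "string") &&
         value !== "" && !isNaN(value) && isFinite(value);
}

console.log(isNumericLike("42"));  // true
console.log(isNumericLike(3.14));  // true
console.log(isNumericLike("abc")); // false
console.log(isNumericLike(NaN));   // false
```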

jQuery 1.8

  • $.curCSS removed: this method was simply an alias for jQuery.css().

jQuery UI depends on this method.

Solution: Use jQuery.css() directly.

jQuery 1.9

  • .toggle(function, function, …) removed

This is the "click an element to run the specified functions" signature of .toggle(); the show/hide signature of .toggle() is unaffected.

No impact on our code.

  • .live() removed

The .live() method has been deprecated since jQuery 1.7 and has been removed in 1.9.

There are some 96 places in TalentSuite where we refer to the .live() method.

Solution: Use the .on() method with a delegated handler, e.g. change $(".item").live("click", handler) to $(document).on("click", ".item", handler).

  • .die() removed

The .die() method has been deprecated since jQuery 1.7 and has been removed in 1.9.

There are references to this method in HCM.

Solution: Use the .off() method.

  • .after(), .before(), and .replaceWith() with disconnected nodes

Prior to 1.9, .after(), .before(), and .replaceWith() would attempt to add or change nodes in the current jQuery set if the first node in the set was not connected to a document, and in those cases return a new jQuery set rather than the original set. This created several inconsistencies and outright bugs: the method might or might not return a new result depending on its arguments. As of 1.9, these methods always return the original, unmodified set, and attempting to use .after(), .before(), or .replaceWith() on a node without a parent has no effect; that is, neither the set nor the nodes it contains are changed.

  • .appendTo(), .insertBefore(), .insertAfter(), and .replaceAll()

As of 1.9, these methods always return a new set, making them consistently usable with chaining and the .end() method. Prior to 1.9, they would return the old set only if there was a single target element. Note that these methods have always returned the aggregate set of all elements appended to the target elements. If no elements are selected by the target selector (e.g., $(elements).appendTo("#not_found")), the resulting set will be empty.

jQuery UI may be affected by this change.

  • .add() method behavior changed

The .add() method is always supposed to return its results in document order. Prior to 1.9, .add() would not sort nodes in document order if either the context or the input set started with a disconnected node, i.e. one that is not in a document. Now, nodes are always returned in document order and disconnected nodes are placed at the end of the set.

  • Global Ajax events (ajaxStart, ajaxStop, ajaxSend, ajaxComplete, ajaxError, and ajaxSuccess) are now triggered only on the document element. Code has to be changed to listen for these Ajax events on the document. For example, if the code currently looks like this:

$("#AjaxLoaderStatus").ajaxStart(function(){ $(this).text("Ajax started"); });

change it to:

$(document).ajaxStart(function(){ $("#AjaxLoaderStatus").text("Ajax started"); });


jQuery 1.10 and 1.11

There are no considerable breaking changes in these versions of jQuery.

Tuesday, April 15th, 2014

Let us look at the formal definition of the builder pattern:

Separate the construction of a complex object from its representation so that the same construction process can be used to create different representations.

In other words, when we need to create different versions of a complex object and there are many ways in which it can be created, the builder pattern lets us create the complex object in a structured and maintainable way.

Let us take an example. The object/product we want to build is Kurkure. This product has many versions, like Masala Munch, Chilli Chatka, Green Chutney, and so on.


So, let's start putting a few things in code.

The Product Kurkure has the following attributes.

public class Kurkure
{
    public bool PotatoCleaned { get; set; }

    public bool PotatoPealed { get; set; }

    public SnackShape SnackShape { get; set; }

    public SnackSize SnackSize { get; set; }

    public Spice Spice { get; set; }

    public PackageType Package { get; set; }

    public BrandType BrandType { get; set; }

    public void Display()
    {
        Console.WriteLine("Kurkure: {0} | Spice: {1} | Package: {2}", BrandType.ToString(), Spice, Package);
    }
}

public enum SnackSize { Small, Medium }

public enum BrandType { MasalaMunch, ChilliChatka }

public enum SnackShape { Straight, Random }

public enum Spice { Masala, GreenChilli }

public enum PackageType { LightOrange, DarkGreen }

If we look at it closely, all the versions of Kurkure are just different versions of potato chips: potato chips cut into different shapes and sizes, with different spices applied. The way to prepare each of them is similar:

  • Clean the potatoes
  • Peel and cut them into the required shape and size
  • Add spices
  • Package them


Let us prepare Kurkure using the builder pattern. We will start putting all the common behavior in a base class.

public abstract class KurkureBuilder
{
    protected Kurkure kurkure;

    public Kurkure GetKurkure()
    {
        return kurkure;
    }

    public void CreateKurkure()
    {
        kurkure = new Kurkure();
    }

    public void PreparePotatos()
    {
        kurkure.PotatoCleaned = true;
        kurkure.PotatoPealed = true;
    }

    public abstract void CutPotatoes();

    public abstract void ApplySpices();

    public abstract void Package();
}

Let us create the Masala munch and Chilli Chatka kurkure!

public class MasalaMunchBuilder : KurkureBuilder
{
    public override void CutPotatoes()
    {
        kurkure.SnackSize = SnackSize.Medium;
        kurkure.SnackShape = SnackShape.Random;
    }

    public override void ApplySpices()
    {
        kurkure.Spice = Spice.Masala;
    }

    public override void Package()
    {
        kurkure.BrandType = BrandType.MasalaMunch;
        kurkure.Package = PackageType.LightOrange;
    }
}

public class ChilliChatkaBuilder : KurkureBuilder
{
    public override void CutPotatoes()
    {
        kurkure.SnackSize = SnackSize.Small;
        kurkure.SnackShape = SnackShape.Straight;
    }

    public override void ApplySpices()
    {
        kurkure.Spice = Spice.GreenChilli;
    }

    public override void Package()
    {
        kurkure.BrandType = BrandType.ChilliChatka;
        kurkure.Package = PackageType.DarkGreen;
    }
}

With all this in place, let us now create a Kurkure maker which actually creates the Kurkure. The maker knows the sequence in which the product has to be created.


public class KurkureMaker
{
    private readonly KurkureBuilder builder;

    public KurkureMaker(KurkureBuilder builder)
    {
        this.builder = builder;
    }

    public void CreateKurkure()
    {
        // The maker drives the build steps in the right order.
        builder.CreateKurkure();
        builder.PreparePotatos();
        builder.CutPotatoes();
        builder.ApplySpices();
        builder.Package();
    }

    public Kurkure GetKurkure()
    {
        return builder.GetKurkure();
    }
}

Now, let us use the Kurkure maker to create some kurkures!


static void Main(string[] args)
{
    var kurkureMaker = new KurkureMaker(new MasalaMunchBuilder());
    kurkureMaker.CreateKurkure();
    var masalaMunchKurkure = kurkureMaker.GetKurkure();
    masalaMunchKurkure.Display();

    kurkureMaker = new KurkureMaker(new ChilliChatkaBuilder());
    kurkureMaker.CreateKurkure();
    var chilliChatkaKurkure = kurkureMaker.GetKurkure();
    chilliChatkaKurkure.Display();
}


Given below is the formal representation of the players in the builder pattern.

Product

What is being built. In our case, Kurkure.

Director

The director in our case is the KurkureMaker. The director uses the concrete builders (MasalaMunchBuilder and ChilliChatkaBuilder in our example) and knows the sequence or order in which to build. It is used directly by the client.

Builder

Abstract class or interface. The KurkureBuilder in our case. It defines the build steps and holds an instance of the end product.

Concrete Builders

Implement the interface defined by the builder. MasalaMunchBuilder and ChilliChatkaBuilder are the concrete builders in our example.

Monday, April 14th, 2014

Before starting with the details, a quick highlight of SpecFlow and SpecRun:

  • SpecFlow – an open-source, pragmatic BDD framework for .NET, part of the Cucumber family.
  • SpecRun – an integration test runner for SpecFlow.

SpecRun by default has a sequence of test execution steps and customizable options for parallel execution. It also has extended options for controlling the sequence of test execution.

Selenium Grid: Selenium Grid is a framework that allows faster execution of tests by distributing them across machines and balancing the entire test volume among threads.
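For context, the hub (master) side of the grid is started by pointing Java at the Selenium standalone server jar in the hub role; the jar version below is illustrative, use whatever version you downloaded:

```shell
# Start the grid hub (master). 4444 is the default hub port.
java -jar selenium-server-standalone-2.40.0.jar -role hub -port 4444
```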


Integrating Grid with SpecRun

1. Add the following keys to the appSettings section of the App.config file:

<add key="MasterHub" value="http://gridMasterIP:4444"/>
<add key="Grid" value="False"/>

2. Create a new profile, e.g. Grid_Parallel.srprofile, and modify the execution settings for the test profile:

<Execution retryFor="None" testThreadCount="20" testSchedulingMode="Sequential"/>

Add a Target as follows:

<Target name="GridMode">
  <ConfigFileTransformation configFile="App.config">
    <![CDATA[<?xml version="1.0" encoding="utf-8"?>
    <configuration xmlns:xdt="http://schemas.microsoft.com/XML-Document-Transform">
      <appSettings>
        <add key="Grid" value="True" xdt:Locator="Match(key)" xdt:Transform="SetAttributes(value)"/>
      </appSettings>
    </configuration>]]>
  </ConfigFileTransformation>
</Target>

Add a deployment transformation step:

<Relocate targetFolder="%TMP%\WBdriver"/>
<RelocateConfigurationFile target="CustomConfig_{TestThreadId}_{Target}.config"/>


What the above snippets do: they create a Target to be run. When the run starts, a config file is created per test thread id, the app settings are scanned for the key Grid, and its value is set to true when this particular profile is used.

3. Add a class for a remote driver with screenshot capability:

// Derives from RemoteWebDriver (assumed base class: the snippet calls Execute and the base(uri, capabilities) constructor)
public class RemoteScreenShotDriver : RemoteWebDriver, ITakesScreenshot
{
    public RemoteScreenShotDriver(Uri remoteAddress, ICapabilities capabilities)
        : base(remoteAddress, capabilities)
    {
    }

    public Screenshot GetScreenshot()
    {
        Response screenshotResponse = this.Execute(DriverCommand.Screenshot, null);
        string base64 = screenshotResponse.Value.ToString();
        return new Screenshot(base64);
    }
}

4. Add a check in the step definitions (the place where the driver is created) so local runs use Firefox and grid runs use the remote driver:

if (ConfigurationManager.AppSettings["Grid"] == "False")
{
    _driver = new FirefoxDriver(cockPitFFProfile);
}
else
{
    capability = DesiredCapabilities.Firefox();
    string masterHub = ConfigurationManager.AppSettings["MasterHub"];
    _driver = new RemoteScreenShotDriver(new Uri(masterHub + "/wd/hub"), capability);
}

5. Add a check for test failure and take screenshots from the remote driver:

if (ScenarioContext.Current.TestError != null)
{
    Screenshot screenshot = ((ITakesScreenshot)_driver).GetScreenshot();
    screenshot.SaveAsFile("filename.png", ImageFormat.Png);
}

Done! Invoke the profile from the command prompt and the grid launches the tests.

Considering there are 4 machines connected to the master hub, if the profile launches 20 tests, the Selenium hub distributes 5 tests to each machine and the execution time comes down drastically. Further, any test failures are tracked, and screenshots are taken remotely and stored on the master machine.

Friday, April 11th, 2014

Microsoft recently acquired InCycle’s “InRelease” software, now called Release Management (RM), and integrated it with VS 2013. Release Management fully supports TFS 2010, 2012 and 2013.

Before we look into details of Release Management, let’s look at what Continuous Delivery means.

What is CD?

Continuous Delivery is the capability to automatically deploy components to various servers in different environments. This typically involves configuration management of the different environments and the ability to define and customize a deployment workflow, driven by the business and involving multiple roles in the organization.

Why do we need it?

Well, DevOps is the talk of the town. If you want to be a cool kid (team), you gotta know/implement CD. Apart from the cool factor, CD brings the following advantages to the dev team and the business:

  • Develop and deploy quality applications at a faster pace.
  • Improve the value of delivery by reducing cycle time.
  • Enable the same deployment package to traverse various environments, as opposed to rebuilding for each environment.
  • Manage all configuration information in a centralized location.
  • Have repeatable, visible and more efficient releases.
  • Align deployments with business processes.
  • Adhere to any regulatory requirements during the deployment process.

What is Release Management?

Release Management is a continuous delivery solution for .NET teams that automates deployments through every environment, from Team Foundation Server (TFS) to production. RM also allows you to define release paths that include approvals from business and other departments (such as ops) when required. RM enables you to assemble all the components of your application, copy them to the required target servers and install all of them in one transaction. QA checks such as automated tests, data generation scripts, configuration changes, etc. are all handled by RM. Release Management also handles rollback in required scenarios.

Release Management Components:

The following diagram shows the main components of Release Management.

Release Management Components

Client: There are two client components. The Windows client is a Windows Presentation Foundation (WPF) application that serves as the main interface point for managing release information. The Web client is used to act on approval requests; this is the interface to which users are directed when following links in e-mail notifications. The client is used by both business and development teams to provide the necessary approvals when required.

RM Server: The Server component is the heart of Release Management. It is a combination of Web and Windows services that expose contracts used by all other components. The server component also contains a SQL Server database. Typically, the RM server is installed on the TFS server and can share the same SQL Server.

RM Deployer: The Deployer component is a Windows service that lives on the target servers (such as Dev, QA, Prod, etc.) where your application components need to be installed.

Tools: The Tools are components that help in the deployment of various components to different servers, configuration changes, etc. A few of them are given below:

  • Installing a version of a component to a specific environment.
  • Deploying to Azure.
  • Uninstalling a previous version of a component before a re-deployment.
  • Deploying reports to Microsoft SQL Reporting Services.
  • Running SQL scripts on a database server.

In the next blog, I’ll write about configuring Release Management.

Reference material:

Channel 9 Video

Visual Studio 2013 ALM VM – Hands on lab

InRelease User guide

Thursday, April 03rd, 2014
Developers can configure Visual Studio to run tests remotely and concurrently on a configured test controller and agent groups. The architecture consists of developer machines with VS 2013 and at least one test controller with test agents.
The test controller should be installed and configured with a test account that is used to log in to the controller service. The test controller manages a set of test agents and communicates with them to start and stop tests and collect test execution results.
Similarly, each agent runs as a service that listens for requests from the test controller to start a new test. When a request is received, the test agent service starts a process in which to run the tests. Each test agent runs the same test. As part of agent configuration it is important to register the agent with a controller.
To set up remote execution of tests from Visual Studio, you need to add a new test settings file and set the test execution method to ‘Remote execution’.


After setting the execution mode, you can associate the tests with a controller by managing controllers and agents from the settings window, and start execution of the tests on the remote agent.
Wednesday, April 02nd, 2014
DevOps Deployment Workbench offers customized XAML workflow templates designed to encapsulate many smaller individual deployment steps into a single consistent workflow. Using the DevOps workbench you can create deployment activities composed of many smaller coded XAML activities that perform distinct build or deployment tasks, such as updating a value in an XML configuration document. All these activities can later be executed from the workbench UI to perform a deployment.
In this article we’ll use the workbench UI to create a deployment orchestration and use it to perform a deployment.
  • From the DevOps workbench UI, create a new Deployment Orchestration as given below.
  • This creates a master deploy sequence, which you can customize and extend by adding custom activities that describe your deployment process.
  • From the properties box, change the master deploy sequence display name to your project deployment sequence name.
  • The deployment toolbox contains common deployment activities, which you can use in your deployment sequence by dropping them onto your activity.
  • To start with the sample deployment scenario, we’ll add a pre-check for a valid operating system before copying the assemblies to the server.
  • From the toolbox, drag and drop the CheckOSName activity onto the deployment sequence.
  • From the toolbox, drag and drop the .NET application deployment activity onto the sequence and provide details for source and target as given below.
  • Save your orchestration to disk.
  • We’ll now use this initial deployment sequence to deploy to a test server from the workbench UI.
  • To add a new target server connection, choose the Target Servers option from the menu.
  • Click Refresh to see the target servers for deployment.
  • Once you have configured the target servers, you can deploy the application from the workbench UI by clicking Deploy to servers from the menu.
  • Choose your build, package name and target server details to deploy the application from the workbench.
  • Click Start to begin the deployment process.



Wednesday, April 02nd, 2014

Setting up a Selenium Grid takes a lot of time, especially the virtual machines, and being a QA I always wondered why there was no installer available for this. To address this, after a bit of research I found an easy way to create an installer that can create/configure Selenium Grid agent machines with a few clicks.


For a Selenium Grid setup, we need multiple physical/virtual machines, out of which one plays the role of the hub, while the others are agent machines. Though setting up the grid hub has to be done manually, configuring agent machines can be automated and all required artifacts can be bundled into an installer. All one needs to do is configure the installer with the hub IP and the ports to be registered, and package the installer.

Steps to create the installer :-

  1. Download the Clickteam Install Creator Pro installer.
  2. Put the files required on the agent machines for a test run in a folder:
  • chromedriver.exe
  • IEDriverServer.exe
  • jre (bundle the Java installer in case Java is not installed on the agent machine)
  • The Selenium standalone server jar (take the latest)
  • Ignore the .bat file (in the screenshot) for now

agent machine files

3. Go to the Options tab. Select the last option (All Users Directory) plus the desired folder name. You could select any other option as well, depending on which the next step will vary.

options tab

4. Install Info – we need to generate a batch file that does the necessary configuration, like registering the port and configuring system properties for the platforms/role and hub details.

 installer info

  • Create a .bat file (TriggerAgent.bat in the screenshot) in the same directory.
  • Under the “Write strings into files” section, click Add.

Enter the following commands in the “Strings to write” section (adjust the jar version, hub IP and the port to be registered for your setup):

cd #Installation Directory#\Folder name
java -Dwebdriver.chrome.driver="#Installation Directory#\chromedriver.exe" -Dwebdriver.ie.driver="#Installation Directory#\IEDriverServer.exe" -jar selenium-server-standalone-$$$(latest jar).jar -role webdriver -hub http://ip for hub/grid/register -port 5555 -browser "browserName=chrome,maxInstances=5" -browser "browserName=internet explorer,version=ANY,platform=WINDOWS"


5. Go to Build and click Build. That’s all, it’s done! It will create an installer with the desired name.


All you need to do now is set up the grid master machine. Then go to any fresh machine that you want to configure as an agent machine, get this installer, and run it in any location.

It will dump the necessary files and auto-generate the .bat file. This batch file launches the agent machine and does the port registration and configuration by itself.

NOTE – For machines with different configurations, like Windows/Linux, you will need to create different installers.

Wednesday, April 02nd, 2014

If you’re using CodedUI for web automation, you’d typically write code like this:

BrowserWindow browser = BrowserWindow.Launch();
HtmlButton btnSubmit = new HtmlButton(browser);
btnSubmit.SearchProperties.Add(HtmlControl.PropertyNames.Id, "submitbutton");
Mouse.Click(btnSubmit);

In the code above, I am looking for a button with ID “submitbutton” and clicking it. That seems like too much code to do a simple thing.
Also note that we hardcoded HtmlButton; if I wanted to click a checkbox, I would have to repeat this snippet with HtmlCheckBox.

Wouldn’t it be nice if we could just do this:

Mouse.Click(browser.GetById("submitbutton"));

We can do this by just adding an extension method to the BrowserWindow class:

public static class BrowserWindowExtension
{
    public static HtmlControl GetById(this BrowserWindow browser, string id)
    {
        return browser.GetByAttribute<HtmlControl>("Id", id);
    }

    public static T GetByAttribute<T>(this BrowserWindow browser, string attribute,
        string value) where T : HtmlControl
    {
        var type = typeof (T);
        var ele = (T) Activator.CreateInstance(type, new object[] {browser});
        ele.SearchProperties.Add(attribute, value, PropertyExpressionOperator.Contains);
        return ele;
    }
}

That’s it. You can now use the browser.GetById() method for quick access by ID.
You’ll also notice above that I added another useful extension method, GetByAttribute<T>(), which I used in GetById().

In part 2, we’ll see extension methods for the HtmlControl class that make it easy to work with different types of HTML elements.

-Anand Gothe

Tuesday, April 01st, 2014
Building and maintaining a local NuGet gallery helps you facilitate your development process with local packages that you do not want to publish publicly. It also helps you maintain and publish stable versions of your product so they are accessible to all developers in your company or team. You can also integrate package publishing with your TFS build to automatically publish stable versions of a package to your local NuGet gallery.
In this post, we’ll explore the steps necessary to create and consume your own NuGet gallery.
Before you can get NuGet up and running you’ll need some prerequisite software.
  • Visual Studio 2013
  • PowerShell 2.0
  • NuGet
  • Windows Azure SDK v2.2
  • xUnit for Visual Studio 2013
Setting up local NuGet gallery
Once you have ensured all the prerequisite software is installed, you need to clone the NuGet Gallery repository from GitHub.
After successfully cloning the repository, open the command prompt and run the build command to build the project.
The next step is to setup the website in IIS express.
  • Open the Windows PowerShell console as Administrator and ensure that you have the execution policy set to Unrestricted.
  • In the PowerShell console, navigate to the local NuGet gallery solution folder and run the Ensure-LocalTestMe.ps1 script from the tools folder.
  • After setting up the IIS website, you need to create the database for the local NuGet gallery.
  • Open the NuGetGallery.sln solution from the root of the repository.
  • Set the startup project of the solution to NuGetGallery.
  • Open the Package Manager Console window from Visual Studio 2013 and ensure that the Default Project is set to NuGetGallery.
  • Run the Update-Database command in the Package Manager Console.
  • Change the ConfirmEmailAddress setting value in the Web.config file of the NuGetGallery project to false; this disables confirmation of the email address when uploading packages to the local gallery.
  • Press Ctrl+F5 to run the website in IIS Express.
Configure Package manager settings in Visual Studio
After setting up your private repository, you need to configure Visual Studio to add your repository to the package sources; it then works just like the public repository.
  • Open the Package Manager settings
  • Go to the Package Sources section.
  • Click the plus button to add a new source.
  • Specify a name and source of the new gallery
  • Click OK
Creating and uploading a NuGet package
  • To create a package, launch Package Explorer and select the File > New menu option.
  • Then select the Edit > Edit Package Metadata menu option to edit the package metadata.
  • The final step is to drag the contents of your package into the Package contents pane. Package Explorer will attempt to infer where the content belongs and prompt you to place it in the correct directory within the package. For example, if you drag an assembly into the Package contents window, it will prompt you to place the assembly in the lib folder.
  • Save your package via the File > Save menu option.
  • After creating a package, use the local NuGet gallery website to upload it.
  • After uploading, you can use the local gallery site to verify the uploaded package in the packages list.
  • Once the package is available in the local NuGet store, you can install it from Visual Studio.
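As an alternative to uploading through the website, a package can also be pushed from the command line with the NuGet CLI; the package name, gallery URL and API key below are placeholders for your own values:

```shell
# Push a package to the local gallery (package name, URL and key are illustrative placeholders)
nuget push MyPackage.1.0.0.nupkg -Source http://localhost:5000/api/v2/package -ApiKey <your-api-key>
```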
Tuesday, April 01st, 2014
While creating a build agent in the TFS admin console, you can define the agent name and the tags using the Team Foundation Administration Console and edit the agent properties as shown below. You can also do this from inside Visual Studio by right-clicking the Builds node in Team Explorer, choosing Manage Build Controllers, and double-clicking a particular agent.

The “Agent Settings” section under the “Advanced” section in the Process tab of the build definition can be used to influence agent reservation during the build process. Under that section, you can specify which agent should run a build for that definition. The property is “Name Filter” and it has an asterisk by default, which means that any available agent may be used; you can change that to any of the agents on the drop-down list. This ensures that only the selected agent gets picked for that build. You can also redirect specialized builds to specialized build agents by tweaking the build agent reservation properties of the default build process template.

When we schedule a build, we can define an agent name filter, a set of tags and a tag matching criterion. This controls the way the Team Foundation Build Controller selects and reserves a Team Foundation Build Agent to perform our build.