Archive for the Category ◊ .Net ◊

Friday, April 10th, 2015

The previous post covered some basic information about TypeScript. On compilation, TypeScript produces a readable, standards-compliant JavaScript file.
Let me write a simple class using TypeScript:

[Image: class1]
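Since the screenshot may not render here, below is a sketch of the kind of class such a figure typically shows (the class name and members are illustrative, not the original code):

```typescript
// A simple TypeScript class; note how close the syntax is to C#.
class Student {
    fullName: string;

    // "public" on constructor parameters automatically creates the fields
    constructor(public firstName: string, public lastName: string) {
        this.fullName = firstName + " " + lastName;
    }

    greet(): string {
        return "Hello, " + this.fullName;
    }
}

const student = new Student("John", "Doe");
console.log(student.greet()); // prints "Hello, John Doe"
```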

From the figure above it is clear that TypeScript is easy to learn and understand; it is similar to C#, and writing a class in TypeScript is as easy as in C#.
Check out the screen below, which shows the code snippet generated in the JS file when the TS file is compiled.
[Image: jsclass]

Now, to play around with the classes we have written, I've designed a simple HTML page with a few textboxes and a button that performs some actions.

[Image: design]
[Image: clickme]
Find the output below:
[Image: output]

An example of callback functionality within TypeScript:
[Image: callcode]
[Image: call]
[Image: output]
[Image: callback]
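In case the screenshots above don't render, a callback example along these lines (names are illustrative) is:

```typescript
// A function that accepts a callback and invokes it with its result.
function buildGreeting(name: string, callback: (message: string) => void): void {
    const message = "Hello " + name;
    callback(message); // hand the result back to the caller
}

buildGreeting("TypeScript", (msg: string) => {
    console.log(msg); // prints "Hello TypeScript"
});
```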

Can we include other libraries in TypeScript?
We can include references to jQuery, AngularJS, or any other library in TypeScript. I've tried it by taking a reference to jquery.d.ts (the DefinitelyTyped jQuery declaration file), which makes the jQuery library usable from TypeScript.

How to use the DefinitelyTyped jQuery declarations?
Include a new file in your project, rename it "jquery.d.ts", and copy the code from
http://typescript.codeplex.com/SourceControl/changeset/view/92d9e637f6e1#typings/jquery.d.ts
Then add a reference to this file in your TypeScript file as shown below.
[Image: ref]
A code snippet that uses jQuery:
[Image: timer]
Output:
[Image: timeout]
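Since the snippet above is a screenshot, here is a runnable sketch of the same idea. In the real code the file would start with the /// <reference path="jquery.d.ts" /> directive and call the real $; below, a tiny stand-in $ (an assumption for illustration) lets the pattern run outside a browser:

```typescript
// Minimal stand-in for a jQuery element wrapper, just enough for .text().
interface JQueryLike { text(value: string): void; }

const pageState: { [selector: string]: string } = {};

// Stand-in $ function; in a real page this would be jQuery itself.
function $(selector: string): JQueryLike {
    return { text: (value: string) => { pageState[selector] = value; } };
}

// The pattern from the screenshot: update an element after a timeout.
$("#result").text("waiting...");
setTimeout(() => {
    $("#result").text("Timer fired!");
}, 100);
```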

Monday, March 23rd, 2015

Microsoft has introduced an open-source superset of JavaScript, known as "TypeScript", which can be used for cross-browser development and combines static analysis, explicit interfaces, and type checking.
TypeScript can be installed into Visual Studio in two ways:
-Using the Node.js Package Manager (npm)
-Using an MSI that integrates with Visual Studio.

After successful installation, the TypeScript compiler is installed in the location below by default.
[Image: Capture1]

Sometimes the TypeScript template may not appear in Visual Studio 2012. All we need to do is extract the MSI package, find "TypeScriptLanguageService.vsix_File", remove the trailing "_File", and install that VSIX file. This gives us not only the template but also IntelliSense, code highlighting, etc.
[Image: templa]

In the figure above you can see the new template added to my Visual Studio, so let's use this template to create a project.
In the figure below, we can see that a file with a .js extension is also added, because TypeScript internally generates JavaScript on compilation; the TypeScript compiler is what compiles the TypeScript.
[Image: ts2js]

Code snippet in the JS file:
[Image: jsCode]

The same code snippet in the TS file:
[Image: tsCode]

In the JS code we initialized the variable x = 10 and then assigned x = "some string", and both alert statements give the expected output. However, this can lead to serious issues, and such code can become hard to maintain.
TypeScript helps us declare variables with specific data types like number, string, and bool,
as well as with object types, which include classes, interfaces, etc.
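A small example of the difference: the reassignment that was legal in the JS snippet becomes a compile-time error once a type is declared (a sketch; the original variable was x):

```typescript
let x: number = 10;
// x = "some string"; // compile error: Type 'string' is not assignable to type 'number'
x = 20;               // reassigning another number is fine
console.log(x); // prints 20
```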

I will explain more about TypeScript in upcoming posts.

Friday, March 06th, 2015

Many times we make mistakes in Views, such as not closing braces or other syntax errors. We don't get to know about these errors until we run the application and navigate to that screen.

How do we find such errors at compile time?

Here comes our rescue: the powerful ASP.NET Compilation Tool (aspnet_compiler), which can be used with a few arguments to find errors in a precompiled application locally, at an earlier stage, even before we check in our code.

Some of the arguments available for this tool are:

aspnet_compiler [-?]
                [-m metabasePath | -v virtualPath [-p physicalPath]]
                [[-u] [-f] [-d] [-fixednames] targetDir]
                [-c]
                [-errorstack]
                [-nologo]
                [[-keyfile file | -keycontainer container] [-aptca] [-delaysign]]

For our scenario, we will focus on a few of these options:

  1. Open visual studio command prompt
  2. Navigate to the location where source exist
  3. Run the command

a. aspnet_compiler.exe -c -v temp -p ProjectName (for example: MVC.Web)

It will display the list of warnings and errors, if any, in the specified project.

[Image: clip_image001]

Now you can go ahead and fix those.

[Image: clip_image003]

In my next blog, let's see how to integrate this into our build process.

Category: .Net  | One Comment
Sunday, March 01st, 2015

 

Let's consider the two situations mentioned below, which define the problem. I guess a lot of you will be familiar with the first one.

Problem

Situation 1 :

You start with a project which has a repository, i.e. you have the repository pattern implemented. Initially the situation would be something like shown below:


public class ProductRepository : IProductRepository
    {
        public Product GetProduct(int Id)
        {
            //logic
        }
        public bool UpdateProduct(Product c)
        {
            //logic
        }

        public bool InsertProduct(Product c)
        {
            //logic
        }

        public bool DeleteProduct(Product c)
        {
            //logic
        }
    }

After 1 Year…



public class ProductRepository
    {
        public Product GetProduct(int Id)
        {
            //logic
        }
        public bool UpdateProduct(Product c)
        {
            //logic
        }

        public bool InsertProduct(Product c)
        {
            //logic
        }

        public bool DeleteProduct(Product c)
        {
            //logic
        }

        public List<Product> GetProductsByCategory()
        {
            //logic
        }

        public List<Product> GetProductsByDesignNo()
        {
            //logic
        }
        public List<Product> GetAllProductsWithComplaints()
        {
            //logic
        }
        public List<Product> GetProductsForCustomerWithComplaints(int customerId)
        {
            //logic
        }
        //////////////////////
        /// AND MANY MORE SUCH VARIATIONS OF GETTING PRODUCTS WITH DIFFERENT CRITERIA
        //////////////////////
    }

After 2 years… you can imagine.

Situation 2

You are developing a product with, say, advanced search functionality, and alongside your database you want to use a search server with its own query language. The only issue is that you want this search functionality to be loosely coupled, as your architects and management are not sure about the product and want to be able to replace it with a different search server in the future. So the API for interacting with this search functionality should completely abstract away the specifics and should not leak any search-product-specific code into the application.


The problem is that such products have their own specific query language and/or API. If I have to implement a SearchProducts method, how do I abstract the input query parameters/objects so that I can present a uniform interface to the code that uses this functionality? E.g. Elasticsearch provides an API which encapsulates a search request in classes implementing the ISearchRequest interface, whereas Microsoft's FAST provides a very elaborate REST API.

I have faced this situation in two of the projects I worked on, and I am sure a lot of people have faced it too. That's what pointed me to query objects (or the Query Object pattern).

Query Objects

This link has a brief explanation of query objects by Martin Fowler.

Query objects are basically named queries, or queries represented by objects. They are the query-side equivalent of the Command pattern.

Let's see a simple implementation of query objects targeting Situation 2 above (the same approach can be applied to Situation 1).

First, define a query interface which all our query classes will implement (below is an example of a query that searches products by design number).


public interface ISearchQuery<T>
    {
        IEnumerable<T> Execute();
    }


public class FAST_ProductsByDesignNoQuery : ISearchQuery<Product>
    {
        private string _designNo;
        public FAST_ProductsByDesignNoQuery(string designNo)
        {
            _designNo = designNo;
        }

        public IEnumerable<Product> Execute()
        {
            //Specific query logic goes here
        }
    }

If we move to another search server, we will define a new query class for the same query, implementing the ISearchQuery interface.

Below is how our search API will look.


public interface ISearch
    {
        IEnumerable<Product> SearchProducts(ISearchQuery<Product> query);
		 IEnumerable<Customer> SearchCustomers(ISearchQuery<Customer> query);
    }
public class SearchAPI : ISearch
    {

        public IEnumerable<Product> SearchProducts(ISearchQuery<Product> query)
        {
            return query.Execute();
        }
		 public IEnumerable<Customer> SearchCustomers(ISearchQuery<Customer> query)
        {
            return query.Execute();
        }
    }

This makes my search API totally independent of the search server I am using.

The sample given above is the simplest example of query objects, but there is much more that can be done.

Going Further…

There are a couple of things we can do to make our query objects more sophisticated. We can have a base class which adds paging support through properties for page size, page number, sorting, etc.
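For instance, a paging-aware base query might look like this (a minimal sketch in TypeScript for brevity; in the C# version these would be properties on an abstract base class implementing ISearchQuery<T>):

```typescript
// Base query object that adds paging on top of a subclass-provided result set.
abstract class PagedQuery<T> {
    pageSize = 2;
    pageNumber = 1; // 1-based
    sortBy?: string;

    // Subclasses supply the full, data-source-specific result.
    protected abstract executeAll(): T[];

    execute(): T[] {
        const start = (this.pageNumber - 1) * this.pageSize;
        return this.executeAll().slice(start, start + this.pageSize);
    }
}

// Illustrative concrete query with in-memory data.
class ProductsByDesignNoQuery extends PagedQuery<string> {
    protected executeAll(): string[] {
        return ["P1", "P2", "P3", "P4", "P5"];
    }
}

const pagedQuery = new ProductsByDesignNoQuery();
pagedQuery.pageNumber = 2;
console.log(pagedQuery.execute()); // page 2 → ["P3", "P4"]
```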

Taking it to the extreme, we can define generic query objects where queries are expressed in a project-specific language, rather than as separate classes for each query, and use an interpreter to translate them into data-source-specific queries. And yes, you are right: expression trees in .NET are a good example of query objects (provided you have an interpreter to translate the expression tree into your data-source-specific query). Another example is Hibernate's Criteria API.

The only thing to be careful about with the above approach is how complex you let it become. For example, having a set of classes which define your project-specific query language, or writing a custom interpreter for expression trees, is quite complex and does not make sense unless you are working on a very big project implemented by multiple teams.

Category: .Net | Tags: ,  | Leave a Comment
Monday, February 23rd, 2015

There are numerous blogs about getting code coverage in your TFS build process and failing the build when it’s below certain percentage. Finding out code coverage in your build process is such a common action that there is a readymade activity (GetCodeCoverageTotal) for it in the TfsBuildExtensions (https://github.com/tfsbuildextensions/CustomActivities).
If you can, go ahead and use the TfsBuildExtensions GetCodeCoverageTotal activity as described by Colin Dembovsky here: https://tfsbuildextensions.codeplex.com/wikipage?title=Failing%20Builds%20based%20on%20Code%20Coverage. However, there are three scenarios where you will need a different approach:

  1. You do not have access to TFS build servers, and your admin doesn't allow adding custom DLLs to build servers. You need to copy the TfsBuildExtensions DLL to all the build servers to use the custom activities defined therein.
  2. You have all the access you need, but getting everything to work as mentioned in the blog above takes too much time. You need to download the source and compile it locally to get the build activities to show up in your Visual Studio Toolbox and to be able to drag-drop them in the workflow designer (for example, it took me several hours to figure out why Visual Studio kept complaining about a missing IonicZip.dll even though it was present in my GAC).
  3. You have access to TFS build servers and tried all the steps mentioned in the blog above, but could not get it to work for some reason (in my case, GetCodeCoverageTotal kept throwing a NullReferenceException without any explanation).

I needed an approach that doesn't involve deploying any DLLs to the build server. So I took a look at the code for the GetCodeCoverageTotal activity in the TfsBuildExtensions source code (the relevant lines are quoted step by step below).

There isn't much code there, which got me thinking: could I implement this logic in my build template's workflow designer using primitive activities? I would just need Assign, ForEach and If activities.

Here’s how to do it:

1) Open your build template xaml in Visual Studio to bring up the workflow designer

2) Locate the If CompilationStatus = Unknown activity.

3) Add a Sequence activity after the If CompilationStatus = Unknown activity and name it “Sequence – Check Coverage”

4) Double click to expand the sequence. First thing we need to do is add a Delay activity. As Colin mentioned in his blog, the coverage results are uploaded to TFS asynchronously. So a delay will give enough time for that to complete. Set the Duration property to TimeSpan.FromSeconds(10)

(All the actions so far are similar to what Colin’s blog mentions)

5) Now we are ready to add activities that implement the code from GetCodeCoverageTotal.cs. The first two lines of code are as follows:

var buildDetail = this.BuildDetail.Get(this.ActivityContext);
var testService = buildDetail.BuildServer.TeamProjectCollection.GetService<ITestManagementService>();

We basically need to get an instance of ITestManagementService. First, let's add a variable to hold this instance. Click the Variables tab at the bottom of the screen and add a variable named MyTestService; for Variable type, click Browse for Types… and select ITestManagementService from the Browse and Select a .Net Type popup. Make sure the Scope is set to your sequence (in our case Sequence – Check Coverage).

Now that we have created a variable, it's time to assign it. Add an Assign activity after the Delay activity. Set the To property to MyTestService and the Value property to BuildDetail.BuildServer.TeamProjectCollection.GetService(Of ITestManagementService)()

If you see any exclamation marks or warnings on the Assign activity, just save the xaml, close it, and reopen it. Sometimes a reopen is required for new variable types to be recognized.

6) The next line in GetCodeCoverageTotal.cs is

var project = testService.GetTeamProject(buildDetail.TeamProject);

This will be similar to the step above. Add a variable called MyTestProject and choose the type ITestManagementTeamProject.

Then we’ll add an Assign activity and set MyTestProject to MyTestService.GetTeamProject(BuildDetail.TeamProject)

7) Next, let’s code this line: (we skipped a line about TestRuns, we’ll cover it in next step)

var covManager = project.CoverageAnalysisManager;

Create a variable named MyCoverageAnalyser of type ICoverageAnalysisManager. Then add an Assign activity to set MyCoverageAnalyser to MyTestProject.CoverageAnalysisManager

8) Now consider the following line that gets test runs:

var runs = project.TestRuns.ByBuild(buildDetail.Uri);

And the loop that goes through each test run and queries for coverage:

var totalBlocksCovered = 0;
var totalBlocksNotCovered = 0;
foreach (var run in runs)
{
  var coverageInfo = covManager.QueryTestRunCoverage(run.Id, CoverageQueryFlags.Modules);
  totalBlocksCovered += coverageInfo.Sum(c => c.Modules.Sum(m => m.Statistics.BlocksCovered));
  totalBlocksNotCovered += coverageInfo.Sum(c => c.Modules.Sum(m => m.Statistics.BlocksNotCovered));
}

We can do this using a ForEach<T> activity. But first let's create three Decimal variables called MyTotalCovered, MyTotalNotCovered and MyTotalBlocks (these names are reused in the Assign activities below).


Now add a ForEach<T> activity to your sequence. For TypeArgument property of ForEach, Browse and select ITestRun. And for the Values property type in MyTestProject.TestRuns.ByBuild(BuildDetail.Uri)


Now we have to implement the body of the foreach loop. Double click on ForEach<ITestRun> and then add a Sequence activity in the Body section.

Double-click the sequence. We need to create some variables in this sequence first. Create a variable called MyCoverageInfo of type Array of [T], then browse and select ITestRunCoverage. Also add two Decimal variables, TempCovered and TempNotCovered.


Now add an Assign activity to the Sequence. Assign MyCoverageInfo with MyCoverageAnalyser.QueryTestRunCoverage(item.Id, CoverageQueryFlags.Modules)


Next add another Assign, TempCovered = MyCoverageInfo.Sum(Function(c) c.Modules.Sum(Function(m) m.Statistics.BlocksCovered))


Next add another Assign, TempNotCovered = MyCoverageInfo.Sum(Function(c) c.Modules.Sum(Function(m) m.Statistics.BlocksNotCovered))


Next add another Assign, MyTotalCovered = MyTotalCovered + TempCovered


Next add another Assign, MyTotalNotCovered = MyTotalNotCovered + TempNotCovered


That’s it for the ForEach loop. Now come back to the “Sequence – Check Coverage”

9) Add an Assign activity after the ForEach<ITestRun> activity. Set MyTotalBlocks to MyTotalCovered + MyTotalNotCovered

This basically represents the following line from GetCodeCoverageTotal.cs
var totalBlocks = totalBlocksCovered + totalBlocksNotCovered;

10) Now let’s do the last bit of code.

if(totalBlocks == 0)
{
 return 0;
}
return (int)(totalBlocksCovered * 100d / totalBlocks);
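The arithmetic the workflow reproduces is small enough to sanity-check by hand; here it is as a standalone sketch (in TypeScript, with made-up run data standing in for the ITestRun statistics):

```typescript
// Each test run contributes per-module block statistics.
interface ModuleStats { blocksCovered: number; blocksNotCovered: number; }

const runs: ModuleStats[][] = [
    [{ blocksCovered: 120, blocksNotCovered: 30 }],
    [{ blocksCovered: 60, blocksNotCovered: 15 }],
];

let totalCovered = 0;
let totalNotCovered = 0;
for (const modules of runs) {
    totalCovered += modules.reduce((sum, m) => sum + m.blocksCovered, 0);
    totalNotCovered += modules.reduce((sum, m) => sum + m.blocksNotCovered, 0);
}

const totalBlocks = totalCovered + totalNotCovered;
const coverage = totalBlocks === 0 ? 0 : (totalCovered * 100) / totalBlocks;
console.log(coverage); // 180 covered of 225 total blocks → 80
```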

We will of course not return anything, instead we can check the value with the desired coverage we want. For example, let’s say we want to fail the build if coverage is less than 90%.
Add an If activity to Sequence – Check Coverage. Set the Condition property to MyTotalBlocks > 0

Then double click the If activity, and add a Sequence in the Then section

Then double click the Sequence. In the Sequence, create a Decimal variable called TotalCoverageActual

Add an Assign activity to set TotalCoverageActual to
MyTotalCovered * (100D / MyTotalBlocks)

Then add an If activity with condition TotalCoverageActual < 90

Double-click the If activity and add a Sequence in the Then section

Double-click the Sequence, add a WriteBuildWarning activity, and set the Message property to "Coverage too low " + TotalCoverageActual.ToString("##.##")

Then add a SetBuildProperties activity and set the PropertiesToSet field to TestStatus and Status. Set the Status property to BuildStatus.PartiallySucceeded (or to Failed if you like) and set the TestStatus property to BuildPhaseStatus.Failed

That’s it. Save the build template, and check it in.

To test it out, create a build definition that uses this build template. Or, if you already have a build definition, just edit it and refresh the template in the Process section.

That's it, folks. Your build should now fail if coverage is less than 90%. By the way, instead of hard-coding 90, you can add an Argument to the build template so that you can set it from the build definition.
If you don't get the expected results, add WriteBuildWarning activities in different places to see what's going on.

If you have questions or face problems, drop me a line below.

Cheers!

-Anand Gothe

Category: .Net, ALM, TFS  | Leave a Comment
Wednesday, January 21st, 2015

The authorization manager helps control the execution of commands in the runspace. When you execute a PowerShell script from C# and haven't changed PowerShell's default execution policy, scripts are executed under the execution policy set on the machine. If you want tests executed from C# to bypass the default security policy, you need to either use a null AuthorizationManager implementation for the runspace, or create a custom implementation of AuthorizationManager and override the policy based on any condition you have. Deriving from the AuthorizationManager class allows you to override the ShouldRun method and add logic specific to your needs, like setting the reason parameter to a custom exception with a proper explanation and details on why a command was blocked, etc.

In the testing framework, I decided to use the second approach and created the custom authorization manager implementation as

internal class TestContextAuthorizationManager : AuthorizationManager

{
    public TestContextAuthorizationManager(string shellId) : base(shellId)
    {

    }
    protected override bool ShouldRun(CommandInfo commandInfo, CommandOrigin origin, PSHost host, out Exception reason)
    {
        base.ShouldRun(commandInfo, origin, host, out reason);
        return true;
    }
}

In the LoadPSTestHost method you can now use this implementation instead of the default AuthorizationManager as

var state = InitialSessionState.CreateDefault2();
state.AuthorizationManager = new TestContextAuthorizationManager("VSTestShellId");

Monday, January 19th, 2015

PowerShell cmdlets and modules can report two kinds of errors: terminating and non-terminating. Terminating errors are errors that cause the pipeline to be terminated immediately, or errors that occur when there is no reason to continue processing. Non-terminating errors report a current error condition, but the cmdlet can continue to process input objects. With non-terminating errors, the user is typically notified of the problem, but the cmdlet continues to process the next input object. Terminating errors are reported by throwing exceptions or by calling the ThrowTerminatingError method, while non-terminating errors are reported by calling the WriteError method, which in turn sends an error record to the error stream.

To capture all the non-terminating errors, you have to probe the PowerShell.Streams.Error collection and collect the details of the errors. Terminating errors are thrown as a RuntimeException and can be handled in a catch block.

In our framework, I’ve extended the FunctionInfo object to expose a property to capture non-terminating errors and also provided an option to expose the non-terminating error as a RuntimeException if needed by using the FailOnNonTerminatingError method.

public PsHost FailOnNonTerminatingError()
{
    _failOnNonTerminatingError = true;
    return this;
}

The implementation that handles the errors looks like:

private string HandleNonTerminatingErrors(System.Management.Automation.PowerShell shell)
{
    var errors = shell.Streams.Error;
    if (errors == null || errors.Count <= 0) return String.Empty;
    var errorBuilder = new StringBuilder();
    foreach (var err in errors)
    {
        errorBuilder.AppendLine(err.ToString());
    }
    if (_failOnNonTerminatingError)
    {
        throw new RuntimeException(errorBuilder.ToString());
    }
    return errorBuilder.ToString();
}

Now, in the code, you can use the test methods as:

[TestMethod]
[ExpectedException(typeof (RuntimeException))]
public void Tests_PsHost_FailOnNonTerminatingError_ThrowsNonTerminatingErrorsAsRuntimeExceptions()
{
    PsHost<TestModule>
        .Create()
        .FailOnNonTerminatingError()
        .Execute("Invoke-NonTerminatingError");
}

Next, we'll see how to overcome the execution policies in the unit test context without altering the PowerShell environment policies.
Sunday, January 18th, 2015
Before we proceed to how we can stub out commands in our test framework, we'll see how the AddScript method works and how to use it to execute a script block in PowerShell. The AddScript method adds a script to the end of the pipeline of the PowerShell object, which can then be invoked with the Invoke method. We'll use this method to add our dummy function as a script object to the pipeline, so that when this command is called later from a function, it calls the dummy function that was added using AddScript.
 
So our Stub() method is implemented as:
 
public PowerShellHost Stub(string method)
{
    if (_runspace == null)
    {
        throw new ArgumentException("The PowerShell host should be setup before invoking the methods");
    }
    var script = String.Format("Function {0} {{}}", method);
    _shell.AddScript(script).Invoke();
    return this;
}
You can also see that I've used the PowerShellHost as the return value of the method, so that I can use a fluent interface model in my test methods. A sample test method using Stub on the Write-Host command can be written as:

var psHost = new PowerShellHost<XModule>();
var actual = psHost
    .Stub("Write-Host")
    .Execute("Get-Greetings");
var result = actual.GetGreetings.Result.Select(psObject => psObject.BaseObject)
    .OfType<string>()
    .First();
Assert.AreEqual<string>(result, "Hello from VSTest");
Next, we'll see how exception handling is taken care of in our framework.
Sunday, January 18th, 2015
The Windows PowerShell engine can be hosted via the System.Management.Automation.PowerShell class. Using this class you can create, execute and manage commands in a Runspace. We'll use these features of the PowerShell class to load and execute our modules and interact with the PowerShell engine whenever needed in our unit test framework. When creating the PowerShell host for our test framework, it's good to define the Runspace, which is responsible for the operating environment of the command pipelines. In our framework I preferred to use a constrained runspace, which allows us to restrict the programming elements available to the user.
We'll later use this ability of constrained runspaces to simulate stub behavior in our test framework. To restrict the availability of aliases, applications, cmdlets, functions and scripts, we'll create an empty InitialSessionState and use that for creating a runspace. Later we'll use the AddPSModule, AddCommand and AddPSSnapin methods to include the required functionality in our runspace in a test context.
 
InitialSessionState has three different methods to create a container that holds commands
  • Create - Creates an empty container. No commands are added to this container.
  • CreateDefault - Creates a session state that includes all of the built-in Windows PowerShell
    commands on the machine. When using this API, all the built-in PowerShell commands are loaded as snapins.
  • CreateDefault2 - Creates a session state that includes only the minimal set of commands needed to host Windows PowerShell. When using this API, only one snapin – Microsoft.PowerShell.Core – is loaded.

We'll use the CreateDefault2 method to create the InitialSessionState instance.

 
private InitialSessionState CreateSessionState(string path)
{
    var state = InitialSessionState.CreateDefault2();
    if (!String.IsNullOrEmpty(path))
    {
        state.ImportPSModulesFromPath(path);
    }
    return state;
}
 
In the CreateSessionState method, we use the Path property of the PsModuleAttribute created in part 1 of this series to load all modules from the provided path into the InitialSessionState.
 
Once we have the InitialSessionState, we’ll use this container to create the Runspace and then the PowerShell host
 
_runspace = RunspaceFactory.CreateRunspace(state);
_runspace.Open();
_shell = PowerShell.Create();
_shell.Runspace = _runspace;
 
We'll use this PowerShell instance to execute the commands needed from the module. In the framework, my Execute method is created as:
 
public TModule Execute(string method)
{
    if (_shell == null)
    {
        throw new ArgumentException("The PowerShell host should be setup before invoking the methods");
    }
    _shell.AddCommand(_moduleInfo.Name + @"\" + method);
    var methodProperties = typeof(TModule).GetProperties()
        .Where(prop => prop.IsDefined(typeof(PsModuleFunctionAttribute), false)).ToList();
    var property = methodProperties.First(p => p.GetCustomAttribute<PsModuleFunctionAttribute>().Name == method);
    var commandInfo = property.GetValue(_module) as PsCommandInfo;
    var parameters = commandInfo.Parameters;
    if (parameters != null)
    {
        _shell.AddParameters(parameters);
    }
    var results = _shell.Invoke();
    commandInfo.Result = results;
    property.SetValue(_module, commandInfo);
    DisposeContext();
    return _module;
}
 
As you can see, we use reflection over the metadata defined in the attributes to add the commands and parameters to the shell before invoking it. The results are set back as property values on the module object and returned to the test to assert conditions.
 
A simple test using the framework looks like:
 
var psHost = new PowerShellHost<XModule>();
var actual = psHost
    .Execute("Get-Greetings");
var result = actual.GetGreetings.Result.Select(psObject =>
    psObject.BaseObject).OfType<string>().First();
Assert.AreEqual<string>(result, "Hello from VSTest");
 

Next in the series, we'll see how to make use of our framework to stub methods/cmdlets in the execution pipeline.

Saturday, January 17th, 2015
Recently I started working on a lightweight unit testing framework for my PowerShell modules. There are a lot of testing frameworks for PowerShell that can be executed as script files from a PS host, but not many allow you to integrate with VSTest and write test methods and classes in C#.
The following posts in this series are about how you can create a unit testing framework for Windows PowerShell and use it.
 
Being a big fan of the Page Object pattern, and having seen how easily you can model a page and reduce the amount of duplicate code when creating UI tests for websites, I wanted to do something similar for PowerShell modules. So when I started writing the framework, one of the main considerations was to simplify my testing process by modelling a PowerShell module in code.
 
I also wanted to follow a more declarative approach to defining the metadata needed to provide inputs for my tests and model, so I started to think about the attributes I would need to model a PowerShell module.
 
To define a module name and the location of the module, I created the PsModuleAttribute with a Name and a Path property, so that I can use this attribute on my PSModule model for the ModuleObject pattern implementation.
 
[AttributeUsage(AttributeTargets.Class)]
public class PsModuleAttribute : Attribute
{
    public PsModuleAttribute(string name)
    {
        Name = name;
    }
    [Required(AllowEmptyStrings = false, ErrorMessage = "A PS module name should be provided to test a module")]
    public string Name { get; set; }
    public string Path { get; set; }
}
Next, I wanted to define the functions and the parameters for these functions in my model. The functions in the PowerShell module can be simulated as properties on the module object. Once you have these properties defined, you can use the same approach of using attributes to define the name, parameters, return value, etc.
 
[AttributeUsage(AttributeTargets.Property)]
public class PsModuleFunctionAttribute : Attribute
{
    [Required(AllowEmptyStrings = false, ErrorMessage = "A module should have a name")]
    public string Name { get; set; }
    public PsModuleFunctionAttribute(string name)
    {
        Name = name;
    }
}
[AttributeUsage(AttributeTargets.Property, AllowMultiple = true)]
public class PsModuleParameterAttribute : Attribute
{
    [Required(AllowEmptyStrings = false, ErrorMessage = "A parameter should have a name")]
    public string Key { get; set; }
    [Required(AllowEmptyStrings = false, ErrorMessage = "A parameter should have a value")]
    public string Value { get; set; }
    public PsModuleParameterAttribute(string key, string value)
    {
        Key = key;
        Value = value;
    }
}
 
In the framework, I created the PsCommandInfo object to wrap these values in properties.
 
public class PsCommandInfo
{
    public string Name { get; set; }
    public IDictionary Parameters { get; set; }
    public Collection<PSObject> Result { get; set; }
}
 
The final implementation of the module object should look like:
[PsModule("xModule", Path = @"E:\MyModules\xModule")]
public class XModule
{
    [PsModuleFunction("Get-HelloMessage")]
    [PsModuleParameter("context", "VSTest")]
    public PsCommandInfo GetGreetings { get; set; }
}
 
Next, we'll see how to extract this information in a unit test context and execute it.