Archive for the Category ◊ .Net ◊

Wednesday, January 21st, 2015

The authorization manager helps control the execution of commands for the runspace. When you try to execute a PowerShell script from C# and haven't changed PowerShell's default execution policy, the scripts are executed under the execution policy set on the machine. If you want the tests executed from C# to bypass the default security policy, you need to either use a null AuthorizationManager implementation for the runspace or create a custom implementation of AuthorizationManager and override the policy based on any condition you have. Deriving from the AuthorizationManager class allows you to override the ShouldRun method and add logic specific to your needs, such as setting up the reason parameter with a custom exception that explains in detail why a command was blocked.

In the testing framework, I decided to use the second approach and created the custom authorization manager implementation as

internal class TestContextAuthorizationManager : AuthorizationManager
{
    public TestContextAuthorizationManager(string shellId) : base(shellId)
    {
    }

    protected override bool ShouldRun(CommandInfo commandInfo, CommandOrigin origin, PSHost host, out Exception reason)
    {
        base.ShouldRun(commandInfo, origin, host, out reason);
        return true;
    }
}

In the LoadPSTestHost method you can now use this implementation instead of the default AuthorizationManager as

var state = InitialSessionState.CreateDefault2();
state.AuthorizationManager = new TestContextAuthorizationManager("VSTestShellId");
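If you prefer the first option mentioned above, a minimal sketch would be a no-op manager: the base AuthorizationManager.ShouldRun allows every command, so assigning a plain instance to the session state effectively bypasses the execution policy check (variable names here are only illustrative).

// requires System.Management.Automation and System.Management.Automation.Runspaces
var permissiveState = InitialSessionState.CreateDefault2();
permissiveState.AuthorizationManager = new AuthorizationManager("VSTestShellId");
// the post above also mentions assigning null, which removes the check entirely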

Monday, January 19th, 2015

PowerShell cmdlets and modules can report two kinds of errors: terminating and non-terminating. Terminating errors cause the pipeline to be terminated immediately, or occur when there is no reason to continue processing. Non-terminating errors report a current error condition, but the cmdlet can continue to process input objects; the user is typically notified of the problem, and the cmdlet moves on to the next input object. Terminating errors are reported by throwing exceptions or by calling the ThrowTerminatingError method, while non-terminating errors are reported by calling the WriteError method (Write-Error from script), which in turn sends an error record to the error stream.

To capture all the non-terminating errors you have to probe the PowerShell.Streams.Error collection and collect the details of the errors. Terminating errors, on the other hand, are thrown as a RuntimeException and can be handled in a catch block.
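For example, a terminating error raised by a script surfaces as an exception on the Invoke call itself (a minimal sketch; the variable name and script text are only illustrative):

try
{
    shell.AddScript("throw 'something fatal happened'").Invoke();
}
catch (RuntimeException ex)
{
    // a terminating error from the pipeline lands here
    Console.WriteLine(ex.Message);
}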

In our framework, I've extended the FunctionInfo object to expose a property that captures non-terminating errors, and also provided an option to surface non-terminating errors as a RuntimeException if needed, via the FailOnNonTerminatingError method.

public PsHost FailOnNonTerminatingError()
{
    _failOnNonTerminatingError = true;
    return this;
}

The implementation for handling these errors looks like

private string HandleNonTerminatingErrors(System.Management.Automation.PowerShell shell)
{
    var errors = shell.Streams.Error;
    if (errors == null || errors.Count <= 0) return String.Empty;
    var errorBuilder = new StringBuilder();
    foreach (var err in errors)
    {
        errorBuilder.AppendLine(err.ToString());
    }
    if (_failOnNonTerminatingError)
    {
        throw new RuntimeException(errorBuilder.ToString());
    }
    return errorBuilder.ToString();
}
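In the execution path this check runs right after the shell is invoked; roughly (a sketch using the names from the Execute method elsewhere in this series):

var results = _shell.Invoke();
var errorText = HandleNonTerminatingErrors(_shell);
// errorText is empty when nothing was written to the error stream;
// with FailOnNonTerminatingError() set, the call above throws a RuntimeException instead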

Now in the code, you can use the test methods as below.

[TestMethod]
[ExpectedException(typeof (RuntimeException))]
public void Tests_PsHost_FailOnNonTerminatingError_ThrowsNonTerminatingErrorsAsRuntimeExceptions()
{
    PsHost<TestModule>
        .Create()
        .FailOnNonTerminatingError()
        .Execute("Invoke-NonTerminatingError");
}

Next we’ll see how to overcome the execution policies in the unit test context without altering the PowerShell environment policies.
Sunday, January 18th, 2015
Before we proceed into how we can stub out commands in our test framework, we'll see how the AddScript method works and how to use it to execute a script block in PowerShell. The AddScript method adds a script to the end of the pipeline of the PowerShell object, which can then be run with the Invoke method. We'll use this method to add our dummy function as a script object to the pipeline, so that when the command is called later from a function, it ends up calling the dummy function that was added using AddScript.
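To make that concrete, here is a minimal sketch of AddScript in isolation (needs System.Management.Automation; the function name and the string it returns are purely illustrative):

using (var shell = PowerShell.Create())
{
    // define a dummy function in the runspace
    shell.AddScript("Function Get-Greetings { 'stubbed greeting' }").Invoke();

    // later invocations against the same runspace now hit the dummy definition
    shell.Commands.Clear();
    var results = shell.AddCommand("Get-Greetings").Invoke();
    // results[0].BaseObject is "stubbed greeting"
}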
 
So our Stub() is implemented as.
 
public PowerShellHost Stub(string method)
{
    if (_runspace == null)
    {
        throw new ArgumentException("The PowerShell host should be setup before invoking the methods");
    }
    var script = String.Format("Function {0} {{}}", method);
    _shell.AddScript(script).Invoke();
    return this;
}
You can also see that I've used the return value of the method as the PowerShellHost, so that I can use a fluent interface model for my test methods. A sample test method that stubs the Write-Host command can be written as
var psHost = new PowerShellHost<XModule>();
var actual = psHost
    .Stub("Write-Host")
    .Execute("Get-Greetings");
var result = actual.GetGreetings.Result
    .Select(psObject => psObject.BaseObject)
    .OfType<string>()
    .First();
Assert.AreEqual<string>(result, "Hello from VSTest");
Next we'll see how exception handling can be taken care of in our framework.
Sunday, January 18th, 2015
The Windows PowerShell engine can be hosted in the System.Management.Automation.PowerShell class. Using this class you can create, execute and manage commands in a runspace. We'll use these features of the PowerShell class to load and execute our modules and interact with the PowerShell engine whenever needed in our unit test framework. While creating the PowerShell host for our test framework, it's good to define the runspace that provides the operating environment for command pipelines. In our framework I preferred to use a constrained runspace, which allows us to restrict the programming elements available to the user.
We'll later use this ability of constrained runspaces to simulate stub behavior in our test framework. To restrict the availability of aliases, applications, cmdlets, functions and scripts, we'll create an empty InitialSessionState and use that for creating a runspace. Later we'll use the AddPSModule, AddCommand and AddPSSnapin methods to include the required functionality in our runspace in a test context.
 
InitialSessionState has three different methods to create a container that holds commands
  • Create - Creates an empty container. No commands are added to this container.
  • CreateDefault - Creates a session state that includes all of the built-in Windows PowerShell
    commands on the machine. When using this API, all the built-in PowerShell commands are loaded as snapins.
  • CreateDefault2 - Creates a session state that includes only the minimal set of commands needed to host Windows PowerShell. When using this API, only one snapin – Microsoft.PowerShell.Core – is loaded.

We'll use the CreateDefault2 overload to create the InitialSessionState instance.

 
private InitialSessionState CreateSessionState(string path)
{
    var state = InitialSessionState.CreateDefault2();
    if (!String.IsNullOrEmpty(path))
    {
        state.ImportPSModulesFromPath(path);
    }
    return state;
}
 
In the CreateSessionState method, we use the Path property of the PsModuleAttribute created in part 1 of this series to load all modules from the provided path into the InitialSessionState.
 
Once we have the InitialSessionState, we’ll use this container to create the Runspace and then the PowerShell host
 
_runspace = RunspaceFactory.CreateRunspace(state);
_runspace.Open();
_shell = PowerShell.Create();
_shell.Runspace = _runspace;
 
We’ll use this PowerShell instance to execute the commands needed from the module. In the framework, I have my execute method created as
 
public TModule Execute(string method)
{
    if (_shell == null)
    {
        throw new ArgumentException("The PowerShell host should be setup before invoking the methods");
    }
    _shell.AddCommand(_moduleInfo.Name + @"\" + method);
    var methodProperties = typeof(TModule).GetProperties()
        .Where(prop => prop.IsDefined(typeof(PsModuleFunctionAttribute), false)).ToList();
    var property = methodProperties.First(p => p.GetCustomAttribute<PsModuleFunctionAttribute>().Name == method);
    var commandInfo = property.GetValue(_module) as PsCommandInfo;
    var parameters = commandInfo.Parameters;
    if (parameters != null)
    {
        _shell.AddParameters(parameters);
    }
    var results = _shell.Invoke();
    commandInfo.Result = results;
    property.SetValue(_module, commandInfo);
    DisposeContext();
    return _module;
}
 
As you can see from the code above, we add the commands and the parameters to the shell using reflection, based on the metadata defined in the attributes, and then invoke the shell. The results are set back as property values on the module object and returned to the test to assert the conditions.
 
A simple test using the framework can be written like
 
var psHost = new PowerShellHost<XModule>();
var actual = psHost
    .Execute("Get-Greetings");
var result = actual.GetGreetings.Result
    .Select(psObject => psObject.BaseObject)
    .OfType<string>()
    .First();
Assert.AreEqual<string>(result, "Hello from VSTest");
 

Next in the series, we'll see how to make use of our framework to stub methods/cmdlets in the execution pipeline.

Saturday, January 17th, 2015
Recently I started working on a lightweight unit testing framework for my PowerShell modules. There are a lot of testing frameworks for PowerShell that can be executed as script files from a PS host, but not many allow you to integrate with VSTest and write test methods and classes in C#.
The following posts in this series are about how you can create a unit testing framework for Windows PowerShell and use it.
 
Being a big fan of the Page Object pattern, and having seen how easily you can model a page and reduce the amount of duplicate code when creating UI tests for websites, I wanted to do something similar for PowerShell modules. So when I started writing the framework, one of the main considerations was to simplify my testing process by modelling a PowerShell module in code.
 
I also wanted to follow a more declarative approach to defining the metadata needed to provide inputs for my tests and model, so I started to think about the attributes I need to model a PowerShell module.
 
To define a module name and the location of the module, I created the PsModuleAttribute with a Name and a Path property, so that I can use this attribute on my PSModule model for the ModuleObject pattern implementation.
 
[AttributeUsage(AttributeTargets.Class)]
public class PsModuleAttribute : Attribute
{
    public PsModuleAttribute(string name)
    {
        Name = name;
    }
    [Required(AllowEmptyStrings = false, ErrorMessage = "A PS module name should be provided to test a module")]
    public string Name { get; set; }
    public string Path { get; set; }
}
Next I wanted to define the functions and the parameters for these functions in my model. The functions in the PowerShell module can be simulated as properties in the ModuleObject. Once you have these properties defined, you can use the same approach of using attributes to define name, parameters, return value etc. 
 
[AttributeUsage(AttributeTargets.Property)]
public class PsModuleFunctionAttribute : Attribute
{
    [Required(AllowEmptyStrings = false, ErrorMessage = "A module should have a name")]
    public string Name { get; set; }
    public PsModuleFunctionAttribute(string name)
    {
        Name = name;
    }
}            
[AttributeUsage(AttributeTargets.Property, AllowMultiple = true)]
public class PsModuleParameterAttribute : Attribute
{
    [Required(AllowEmptyStrings = false, ErrorMessage = "A parameter should have a name")]
    public string Key { get; set; }
    [Required(AllowEmptyStrings = false, ErrorMessage = "A parameter should have a value")]
    public string Value { get; set; }
    public PsModuleParameterAttribute(string key, string value)
    {
        Key = key;
        Value = value;
    }
}
 
In the framework, I created the PsCommandInfo object to wrap these values in properties.
 
public class PsCommandInfo
{
    public string Name { get; set; }
    public IDictionary Parameters { get; set; }
    public Collection<PSObject> Result { get; set; }
}
 
The final implementation of the ModuleObject should look like.
[PsModule("xModule", Path = @"E:\MyModules\xModule")]
public class XModule
{
    [PsModuleFunction("Get-HelloMessage")]
    [PsModuleParameter("context", "VSTest")]
    public PsCommandInfo GetGreetings { get; set; }
}
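To give a rough idea of what that extraction looks like, the parameter attributes on a model property can be folded into a dictionary with plain reflection (a sketch only; requires System.Reflection and System.Linq, and the actual wiring is covered in the next posts):

var property = typeof(XModule).GetProperty("GetGreetings");
var parameters = property
    .GetCustomAttributes<PsModuleParameterAttribute>()
    .ToDictionary(attr => attr.Key, attr => (object)attr.Value);
// parameters now holds { "context" = "VSTest" }, ready to be passed to the shell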
 
Next we’ll see how to extract this information in a unit test context to execute it.
Monday, January 12th, 2015
With ASP.NET Application Insights it's now very easy to collect operation, performance and usage information in your ASP.NET web applications. With the data provided by Application Insights you can quickly detect and act on performance and availability issues.
You can add Application Insights to your project from Visual Studio in combination with an Azure account. The Application Insights option can be selected when you create a project, as shown below (you'll need to sign up using a Microsoft Azure account).
 
For existing web applications, you can also choose to add the script to enable insights at a later point in time. Follow the steps given below to get the code to add insights.
  • Log in to the Azure portal and choose the add Application Insights option
 
  • Provide the name of the site and configure the resource group and subscription details.
 
  • Choose the Add code to monitor web pages option
 
  • Get the script and insert the code in your Layout page or master page as given below.
 
  • Open the portal to see the insights in your dashboard.
 

 

Friday, January 02nd, 2015

If you are building and deploying public-facing web applications, security has to be one of your key considerations. Whenever a browser makes an HTTP request to a web server, it sends along several HTTP headers. These headers provide the web server with information to assist in handling the request. While certain HTTP headers are necessary, the web server's identifying headers are not, and providing identifying information can pose a security risk: an attacker who knows of a vulnerability in a particular web server and ASP.NET version combination could hunt for targets by making HTTP requests to many different servers and flagging those that return that particular web server/ASP.NET version combination.

By default the following identifying headers are included in IIS7:

  1. Server
  2. X-Powered-By
  3. X-AspNet-Version
  4. X-AspNetMvc-Version

Observe the above headers in the following sample HTTP Response:

HTTP/1.1 200 OK
Cache-Control: private
Content-Length: 30345
Content-Type: text/html; charset=utf-8
Vary: Accept-Encoding
Server: Microsoft-IIS/7.0
P3P: CP="ALL IND DSP COR ADM CONo CUR CUSo IVAo IVDo PSA PSD TAI TELo OUR SAMo CNT COM INT NAV ONL PHY PRE PUR UNI"
X-AspNetMvc-Version: 5.1
X-Frame-Options: SAMEORIGIN
X-AspNet-Version: 4.0.30319
X-Powered-By: ASP.NET
X-Instance: CO102
X-UA-Compatible: IE=edge
X-Powered-By: ARR/2.5
P3P: CP="ALL IND DSP COR ADM CONo CUR CUSo IVAo IVDo PSA PSD TAI TELo OUR SAMo CNT COM INT NAV ONL PHY PRE PUR UNI"
X-Powered-By: ASP.NET
X-Instance: CO102
Date: Fri, 02 Jan 2015 11:16:21 GMT

This identifying information is not used by the browser in any way, and can safely be removed. Let’s see how to remove these HTTP Headers.

1.    Removing the X-AspNet-Version HTTP Response Header:

You can turn off the X-AspNet-Version header by changing the following configuration in your Web.Config:

<system.web>
  <httpRuntime enableVersionHeader="false" />
</system.web>

2.    Removing the X-AspNetMvc-Version HTTP Response Header:

You can remove the X-AspNetMvc-Version header by altering your Global.asax.cs as follows:

protected void Application_Start()
{
    MvcHandler.DisableMvcResponseHeader = true;
}

3.    Removing the X-Powered-By HTTP Response Header:

You can turn off the X-Powered-By header by following the below steps:

  1. Launch the Internet Information Services (IIS) Manager
  2. Expand the Sites folder
  3. Select the website to modify and double-click the HTTP Response Headers section in the IIS grouping.
  4. Each custom header is listed here, as the screen shot below shows. Select the header to remove and click the Remove link in the right-hand column.

 

Removing X-Powered-By HTTP Response Header


4.    Removing the Server HTTP Response Header:

The Server HTTP header can be removed by using a custom module that is injected into the IIS 7 pipeline. Such a module can be developed using either managed or unmanaged code.

Here is a sample .Net Module which replaces the Server HTTP header with a custom header:

using System;
using System.Text;
using System.Web;

namespace Sample.ServerModules
{
    public class CustomServerHeaderModule : IHttpModule
    {
        public void Init(HttpApplication context)
        {
            context.PreSendRequestHeaders += OnPreSendRequestHeaders;
        }

        public void Dispose()
        { }

        void OnPreSendRequestHeaders(object sender, EventArgs e)
        {
            // modify the "Server" HTTP header
            HttpContext.Current.Response.Headers.Set("Server", "YourWebServerName");
        }
    }
}

When building this module, ensure you strong-name it, as it needs to be placed into the global assembly cache in order to allow IIS 7 to use it. To add the module to IIS 7, use the "Modules" configuration option on the server, choose "Add Managed Module" and select the module from the list of available modules.
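If you'd rather not register a separate module, a similar handler can be hooked up from Global.asax instead (a sketch, assuming the IIS integrated pipeline; this one removes the header rather than replacing it):

protected void Application_PreSendRequestHeaders(object sender, EventArgs e)
{
    // strip the identifying Server header before the response goes out
    HttpContext.Current.Response.Headers.Remove("Server");
}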

Removing identifying HTTP Response Headers has two benefits:

  • It slims down the quantity of data transmitted from the web server back to the browser, and
  • It makes it a bit harder for attackers to determine the software (and their versions) that are powering the web server.


Thursday, January 01st, 2015
With your VSO account and team projects created using the account, it's very easy to have full ALM capabilities integrated into your project. Once you have chosen the version control for your project you can easily hook up a build definition and create CI builds using the hosted build controller provided as part of VS Online.
If you are using the hosted build controller, there is no additional hardware that needs to be installed or configured to set up a CI build.
From VS online open the project in Visual Studio and then connect to the team project by selecting the VS online account server as shown below.
Choose Builds and add a new build definition for the project.
On the new build definition screen, give an appropriate name for the build definition.
On the Trigger tab, choose Continuous Integration as the trigger.
Set up the source location based on the type of version control used. In my example, we have used Git as the version control system.
On the build controller screen, choose the Hosted Build Controller option. This is the build controller provided as part of VS online
and can be used for most of the common scenarios. Refer to this link to see if you need a separate build controller instead of the hosted build controller.
Configure the project to build and other options in the process tab as given below.
Save and close the build definition.
Open the solution in Visual Studio and make some changes to the project.
Commit the changes to the master repository (or check in, in the case of TFS).
After committing, you need to sync the changes to the Git server.
On your VS Online account build page or in Team Explorer, you can see the CI build queued as soon as your check-in is done.
Once the build is completed you can check the status and details of the build from the completed tab as given below.

 

Next we’ll see how to setup your build server on an Azure VM and use that build controller instead of the hosted build controller.
Tuesday, December 30th, 2014

Model conventions are another goodie from Entity Framework Code First 6.x. They let you interfere with the way EF builds your model, i.e. the in-memory model commonly referred to as 'CSpace' (Conceptual Space) and the storage model referred to as 'SSpace' (Storage Space). Apart from lightweight conventions that do little things, there are two kinds of model conventions: IConceptualModelConvention and IStoreModelConvention. In this blog post, we see what IConceptualModelConvention is all about.

IConceptualModelConvention is an interface exposed by EF that lets us control how EF interprets the conceptual model, i.e. it affects the conventions and rules applied to the conceptual (in-memory) model during OnModelCreating(..). It's a very rare situation where we implement this interface to create our own conceptual model convention; EF has many CSpace conventions out of the box. The generic type parameter 'T' in IConceptualModelConvention could be MetadataItem, which is used to filter the data type that the convention applies to. There are many CSpace conventions (abstract classes) that we can inherit from and modify, as listed below. Remember, the order in which we add conventions is very important if we have two conventions modifying the same property.

  • AssociationInverseDiscoveryConvention
  • ComplexTypeDiscoveryConvention
  • DecimalPropertyConvention
  • DeclaredPropertyOrderingConvention
  • ForeignKeyAssociationMultiplicityConvention
  • ForeignKeyDiscoveryConvention (abstract)
  • ForeignKeyNavigationPropertyAttributeConvention
  • IdKeyDiscoveryConvention
  • KeyDiscoveryConvention (abstract)
  • NavigationPropertyNameForeignKeyDiscoveryConvention
  • OneToManyCascadeDeleteConvention
  • OneToOneConstraintIntroductionConvention
  • PluralizingEntitySetNameConvention
  • PrimaryKeyNameForeignKeyDiscoveryConvention
  • PropertyMaxLengthConvention
  • SqlCePropertyMaxLengthConvention
  • StoreGeneratedIdentityKeyConvention
  • TypeNameForeignKeyDiscoveryConvention
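To see the shape of the interface itself, here is a bare-bones sketch of implementing IConceptualModelConvention directly (purely illustrative; it just visits every EntityType in CSpace):

using System.Data.Entity.Core.Metadata.Edm;
using System.Data.Entity.Infrastructure;
using System.Data.Entity.ModelConfiguration.Conventions;
using System.Diagnostics;

// A do-nothing convention that inspects each conceptual-model entity type.
public class LoggingEntityConvention : IConceptualModelConvention<EntityType>
{
    public void Apply(EntityType item, DbModel model)
    {
        Debug.WriteLine("CSpace entity: " + item.Name);
    }
}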

Now for a problem: I want all string properties named 'Key' to be treated as the primary key. Let me solve this using KeyDiscoveryConvention (from the list above).

public class CustomPKConvention : KeyDiscoveryConvention
    {
        private string pkName = "Key";

        protected override IEnumerable<EdmProperty> MatchKeyProperty(
            EntityType entityType, IEnumerable<EdmProperty> primitiveProperties)
        {
            var matches = primitiveProperties.Where(p => pkName.Equals(p.Name, StringComparison.OrdinalIgnoreCase) && p.PrimitiveType == PrimitiveType.GetEdmPrimitiveType(PrimitiveTypeKind.String)).ToList();
            
            if (matches.Count > 1)
            {
                throw new InvalidOperationException("Multiple Keys Found");
            }

            return matches;
        }
    }

Now that we have our convention in place, we have to add it to the modelBuilder.Conventions list as below.

public class PersonContext : DbContext
    {
        protected override void OnModelCreating(DbModelBuilder modelBuilder)
        {
            modelBuilder.Conventions.AddBefore<IdKeyDiscoveryConvention>(new CustomPKConvention());
            modelBuilder.Conventions.Remove<IdKeyDiscoveryConvention>();

            base.OnModelCreating(modelBuilder);
        }
    }
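With the convention registered, an entity shaped like the following (a hypothetical example) gets its string Key property discovered as the primary key:

public class Person
{
    // picked up by CustomPKConvention as the primary key
    public string Key { get; set; }
    public string Name { get; set; }
}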
Tuesday, December 30th, 2014
One of the most interesting new features of Visual Studio Online is "Monaco", the online code editor. With Monaco, you get a free, lightweight online code editor that is accessible from any device or platform and allows you to perform almost any task that can be done from the VS IDE.
To access Monaco follow the steps given below.
  • Open the Azure management portal
  • Click on Websites and choose the website you want to edit using Monaco
  • Scroll down till you find the Extensions tile in the website configuration section.
  • Click on the extensions tile and choose to add the Visual Studio Online extension
  • After adding the extension, browse to see the code on the portal
  • On the left navigation panel, choose the Git icon to clone the repository from the Git server
  • Start editing the code from the browser.
  • After editing you can see the Git notifications for the changed files.
  • Click on this icon and commit the changes
  • Publish the changes and access the website to see your changes live!