Archive for the Category: .Net

Sunday, March 01st, 2015

 

Let's consider the two situations described below, which define the problem. I guess a lot of you will be familiar with the first one.

Problem

Situation 1:

You start with a project which has a repository, i.e. you have the repository pattern implemented. Initially, the situation would be something like this:


public class ProductRepository : IProductRepository
{
    public Product GetProduct(int id)
    {
        //logic
    }

    public bool UpdateProduct(Product c)
    {
        //logic
    }

    public bool InsertProduct(Product c)
    {
        //logic
    }

    public bool DeleteProduct(Product c)
    {
        //logic
    }
}

After 1 Year…



public class ProductRepository
{
    public Product GetProduct(int id)
    {
        //logic
    }

    public bool UpdateProduct(Product c)
    {
        //logic
    }

    public bool InsertProduct(Product c)
    {
        //logic
    }

    public bool DeleteProduct(Product c)
    {
        //logic
    }

    public List<Product> GetProductsByCategory()
    {
        //logic
    }

    public List<Product> GetProductsByDesignNo()
    {
        //logic
    }

    public List<Product> GetAllProductsWithComplaints()
    {
        //logic
    }

    public List<Product> GetProductsForCustomerWithComplaints(int customerId)
    {
        //logic
    }

    //////////////////////
    /// AND MANY MORE SUCH VARIATIONS OF GETTING PRODUCTS WITH DIFFERENT CRITERIA
    //////////////////////
}

After 2 years… you can imagine.

Situation 2

You are developing a product with, say, advanced search functionality, and alongside your database you want to use a search server with its own query language. The only issue is that you want this search functionality to be loosely coupled, because your architects and management are not sure about the product and want to be able to replace it with a different search server in the future. So the API for interacting with this search functionality should completely abstract away the specifics, and should not leak any search-product-specific code into the application.


The problem is that such products have their own specific query language and/or API. If I have to implement a SearchProducts method, how do I abstract the input query parameters/objects so that I can present a uniform interface to the code that uses this functionality? E.g. Elasticsearch provides an API which encapsulates a search request in classes implementing the ISearchRequest interface, whereas Microsoft's FAST provides a very elaborate REST API.

I have faced this situation in two of the projects I worked on, and I am sure a lot of people have faced it too. That's what pointed me to query objects (or the Query Object pattern).

Query Objects

This link has a brief explanation of query objects by Martin Fowler.

Query objects are basically named queries, or queries represented by objects. The pattern is the equivalent of the Command pattern for a query.

Let's see a simple implementation of query objects targeting Situation 2 above (the same approach can be applied to Situation 1).

First, define a query interface which all our query classes will implement (below is an example of a query for searching products by design number).


public interface ISearchQuery<T>
{
    IEnumerable<T> Execute();
}


public class FAST_ProductsByDesignNoQuery : ISearchQuery<Product>
{
    private readonly string _designNo;

    public FAST_ProductsByDesignNoQuery(string designNo)
    {
        _designNo = designNo;
    }

    public IEnumerable<Product> Execute()
    {
        //FAST-specific query logic goes here
    }
}

If we move to another search server, then we define a new query class for the same query, again implementing the ISearchQuery interface.
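For illustration, a hypothetical Elasticsearch counterpart of the same query might look like this (the class name and body are placeholders of mine, not the Elasticsearch client API):

public class Elastic_ProductsByDesignNoQuery : ISearchQuery<Product>
{
    private readonly string _designNo;

    public Elastic_ProductsByDesignNoQuery(string designNo)
    {
        _designNo = designNo;
    }

    public IEnumerable<Product> Execute()
    {
        //Elasticsearch-specific query logic would go here, e.g. building
        //a search request and mapping the hits back to Product.
        throw new NotImplementedException();
    }
}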

Below is what our search API will look like.


public interface ISearch
{
    IEnumerable<Product> SearchProducts(ISearchQuery<Product> query);
    IEnumerable<Customer> SearchCustomers(ISearchQuery<Customer> query);
}

public class SearchAPI : ISearch
{
    public IEnumerable<Product> SearchProducts(ISearchQuery<Product> query)
    {
        return query.Execute();
    }

    public IEnumerable<Customer> SearchCustomers(ISearchQuery<Customer> query)
    {
        return query.Execute();
    }
}

This makes my search API totally independent of the search server I am using.
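A quick usage sketch (the design number here is made up):

var searchApi = new SearchAPI();
var productsByDesign = searchApi.SearchProducts(new FAST_ProductsByDesignNoQuery("DSN-1029"));

The calling code never sees anything FAST-specific; swapping search servers means swapping the query class.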

The sample given above is the simplest example of Query objects but there is much more that can be done.

Going Further…

There are a couple of things we can do to make our query objects more sophisticated. We can have a base class which adds paging support through properties for page size, page number, sorting and so on, as sketched below.
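A minimal sketch of such a base class (the names here are my own, not from any library):

public abstract class PagedSearchQuery<T> : ISearchQuery<T>
{
    protected PagedSearchQuery()
    {
        PageNumber = 1;
        PageSize = 20;
    }

    public int PageNumber { get; set; }
    public int PageSize { get; set; }
    public string SortBy { get; set; }
    public bool SortDescending { get; set; }

    //Concrete queries translate these properties into the
    //data-source-specific paging and sorting constructs.
    public abstract IEnumerable<T> Execute();
}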

Taking it to the extreme, we can define generic query objects, where queries are expressed in a project-specific language rather than as a separate class per query, and an interpreter translates them into data-source-specific queries. And yes, you are right: expression trees in .NET are a good example of query objects (provided you have an interpreter to translate the expression tree into your data-source-specific query). Another example is Hibernate's Criteria API.
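For a taste of the generic flavour, a query object built on an expression tree might look like the sketch below; the interpreter that translates the expression into your data source's query language is the hard part, and is omitted here.

public class ExpressionSearchQuery<T> : ISearchQuery<T>
{
    private readonly Expression<Func<T, bool>> _criteria;

    public ExpressionSearchQuery(Expression<Func<T, bool>> criteria)
    {
        _criteria = criteria;
    }

    public IEnumerable<T> Execute()
    {
        //An ExpressionVisitor would walk _criteria here and emit
        //the data-source-specific query.
        throw new NotImplementedException();
    }
}

A caller could then write new ExpressionSearchQuery<Product>(p => p.DesignNo == "DSN-1029") (assuming Product has a DesignNo property) without knowing anything about the underlying search server.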

The only thing to be careful about with the above approach is how complex you let it become. Having a set of classes that define a project-specific query language, or writing a custom interpreter for expression trees, is quite complex and does not make sense unless you are working on a very big project implemented by multiple teams.

Monday, February 23rd, 2015

There are numerous blogs about getting code coverage in your TFS build process and failing the build when it's below a certain percentage. Finding out code coverage in your build process is such a common action that there is a ready-made activity (GetCodeCoverageTotal) for it in the TfsBuildExtensions (https://github.com/tfsbuildextensions/CustomActivities).
If you can, go ahead and use the TfsBuildExtensions GetCodeCoverageTotal activity as described by Colin Dembovsky here: https://tfsbuildextensions.codeplex.com/wikipage?title=Failing%20Builds%20based%20on%20Code%20Coverage. However, there are three scenarios where you will need a different approach:

  1. You do not have access to the TFS build servers, and your admin doesn't allow adding custom DLLs to them. You need to copy the TfsBuildExtensions DLL to all the build servers to use the custom activities defined therein.
  2. You have all the access you need, but getting everything to work as described in the blog above takes too much time: you need to download the source and compile it locally to get the build activities to show up in your Visual Studio Toolbox and be able to drag and drop them in the workflow designer (e.g. it took me several hours to figure out why Visual Studio kept complaining about a missing IonicZip.dll even though it was present in my GAC).
  3. You have access to the TFS build servers and tried all the steps mentioned in the blog above, but could not get it to work for some reason (in my case GetCodeCoverageTotal kept throwing NullReferenceException without any explanation).

I needed an approach that doesn't involve deploying any DLLs to the build server. So I took a look at the code for the GetCodeCoverageTotal activity in the TfsBuildExtensions source code.
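Here's the gist of it, pieced together from the snippets we'll walk through below:

var buildDetail = this.BuildDetail.Get(this.ActivityContext);
var testService = buildDetail.BuildServer.TeamProjectCollection.GetService<ITestManagementService>();
var project = testService.GetTeamProject(buildDetail.TeamProject);
var runs = project.TestRuns.ByBuild(buildDetail.Uri);
var covManager = project.CoverageAnalysisManager;

var totalBlocksCovered = 0;
var totalBlocksNotCovered = 0;
foreach (var run in runs)
{
  var coverageInfo = covManager.QueryTestRunCoverage(run.Id, CoverageQueryFlags.Modules);
  totalBlocksCovered += coverageInfo.Sum(c => c.Modules.Sum(m => m.Statistics.BlocksCovered));
  totalBlocksNotCovered += coverageInfo.Sum(c => c.Modules.Sum(m => m.Statistics.BlocksNotCovered));
}

var totalBlocks = totalBlocksCovered + totalBlocksNotCovered;
if (totalBlocks == 0)
{
  return 0;
}
return (int)(totalBlocksCovered * 100d / totalBlocks);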

As you can see, there isn't much code here, which got me thinking: could I implement this logic in my build template using primitive activities in the workflow designer? I would just need Assign, ForEach and If activities.

Here’s how to do it:

1) Open your build template xaml in Visual Studio to bring up the workflow designer

2) Locate the If CompilationStatus = Unknown activity.

3) Add a Sequence activity after the If CompilationStatus = Unknown activity and name it “Sequence – Check Coverage”

4) Double click to expand the sequence. First thing we need to do is add a Delay activity. As Colin mentioned in his blog, the coverage results are uploaded to TFS asynchronously. So a delay will give enough time for that to complete. Set the Duration property to TimeSpan.FromSeconds(10)

(All the actions so far are similar to what Colin’s blog mentions)

5) Now we are ready to add the activities that implement the code from GetCodeCoverageTotal.cs. The first two lines of code are as follows:

var buildDetail = this.BuildDetail.Get(this.ActivityContext);
var testService = buildDetail.BuildServer.TeamProjectCollection.GetService<ITestManagementService>();

We basically need to get an instance of ITestManagementService. First let's add a variable to hold this instance. Click the Variables tab at the bottom of the screen and add a variable named MyTestService; for Variable type, click Browse for Types… and select ITestManagementService from the Browse and Select a .Net Type popup. Make sure the Scope is set to your sequence (in our case Sequence – Check Coverage).

Now that we have created a variable, it's time to assign it. Add an Assign activity after the Delay activity. Set the To property to MyTestService and the Value property to BuildDetail.BuildServer.TeamProjectCollection.GetService(Of ITestManagementService)()

If you see any exclamation marks or warnings on the Assign activity, just save the xaml, then close and reopen it. Sometimes it takes a reopen for new variable types to be recognized.

6) The next line in GetCodeCoverageTotal.cs is

var project = testService.GetTeamProject(buildDetail.TeamProject);

This will be similar to the step above. Add a variable called MyTestProject and choose the type ITestManagementTeamProject.

Then we’ll add an Assign activity and set MyTestProject to MyTestService.GetTeamProject(BuildDetail.TeamProject)

7) Next, let's code this line (we skipped a line about TestRuns; we'll cover it in the next step):

var covManager = project.CoverageAnalysisManager;

Create a variable named MyCoverageAnalyser of type ICoverageAnalysisManager. Then add an Assign activity to set MyCoverageAnalyser to MyTestProject.CoverageAnalysisManager

8) Now consider the following line that gets test runs:

var runs = project.TestRuns.ByBuild(buildDetail.Uri);

And the loop that goes through each test run and queries for coverage:

var totalBlocksCovered = 0;
var totalBlocksNotCovered = 0;
foreach (var run in runs)
{
  var coverageInfo = covManager.QueryTestRunCoverage(run.Id, CoverageQueryFlags.Modules);
  totalBlocksCovered += coverageInfo.Sum(c => c.Modules.Sum(m => m.Statistics.BlocksCovered));
  totalBlocksNotCovered += coverageInfo.Sum(c => c.Modules.Sum(m => m.Statistics.BlocksNotCovered));
}

We can do this using a ForEach<T> activity. But first let's create three Decimal variables called MyTotalCovered, MyTotalNotCovered and MyTotalBlocks.


Now add a ForEach<T> activity to your sequence. For the TypeArgument property of the ForEach, browse and select ITestRun. For the Values property, type in MyTestProject.TestRuns.ByBuild(BuildDetail.Uri)


Now we have to implement the body of the foreach loop. Double click on ForEach<ITestRun> and then add a Sequence activity in the Body section.

Double-click the sequence. We need to create some variables in this sequence first. Create a variable called MyCoverageInfo of type Array of [T], then browse and select ITestRunCoverage. Also add two Decimal variables, TempCovered and TempNotCovered.


Now add an Assign activity to the Sequence. Assign MyCoverageInfo with MyCoverageAnalyser.QueryTestRunCoverage(item.Id, CoverageQueryFlags.Modules)


Next add another Assign, TempCovered = MyCoverageInfo.Sum(Function(c) c.Modules.Sum(Function(m) m.Statistics.BlocksCovered))


Next add another Assign, TempNotCovered = MyCoverageInfo.Sum(Function(c) c.Modules.Sum(Function(m) m.Statistics.BlocksNotCovered))


Next add another Assign, MyTotalCovered = MyTotalCovered + TempCovered


Next add another Assign, MyTotalNotCovered = MyTotalNotCovered + TempNotCovered


That's it for the ForEach loop. Now come back to the "Sequence – Check Coverage" sequence.

9) Add an Assign activity after the ForEach<ITestRun> activity. Set MyTotalBlocks to MyTotalCovered + MyTotalNotCovered

This basically represents the following line from GetCodeCoverageTotal.cs
var totalBlocks = totalBlocksCovered + totalBlocksNotCovered;

10) Now let’s do the last bit of code.

if(totalBlocks == 0)
{
 return 0;
}
return (int)(totalBlocksCovered * 100d / totalBlocks);

We will of course not return anything; instead we can check the value against the desired coverage. For example, let's say we want to fail the build if coverage is less than 90%.
Add an If activity to Sequence – Check Coverage. Set the Condition property to MyTotalBlocks > 0

Then double click the If activity, and add a Sequence in the Then section

Then double click the Sequence. In the Sequence, create a Decimal variable called TotalCoverageActual

Add an Assign activity to set TotalCoverageActual to MyTotalCovered * (100D / MyTotalBlocks)

Then add an If activity with condition TotalCoverageActual < 90

Double-click the If activity and add a Sequence in the Then section

Double-click the Sequence and add a WriteBuildWarning activity; set the Message property to "Coverage too low " + TotalCoverageActual.ToString("##.##")

Then add a SetBuildProperties activity and set the PropertiesToSet field to TestStatus and Status. Set the Status property to BuildStatus.PartiallySucceeded (or to Failed if you like) and set the TestStatus property to BuildPhaseStatus.Failed.

That’s it. Save the build template, and check it in.

To test it out, create a build definition that uses this build template. Or if you already have a build definition, just edit it and refresh the template in the Process section.

That's it, folks. Your build should now fail if the coverage is less than 90%. By the way, instead of hard-coding 90, you can add an Argument to the build template so that you can set it from the build definition.
If you don't get the expected results, add WriteBuildWarning activities at different places to see what's going on.

If you have questions or face problems, drop me a line below.

Cheers!

-Anand Gothe

Wednesday, January 21st, 2015

The authorization manager helps control the execution of commands in a runspace. When you execute a PowerShell script from C# and haven't changed PowerShell's default execution policy, scripts are executed under the execution policy set on the machine. If you want the tests executed from C# to bypass the default security policy, you need to either use a null AuthorizationManager implementation for the runspace or create a custom implementation of AuthorizationManager and override the policy based on whatever condition you have. Deriving from the AuthorizationManager class allows you to override the ShouldRun method and add logic specific to your needs, such as setting the reason parameter to a custom exception with a proper explanation and details on why a command was blocked.

In the testing framework, I decided to use the second approach and created the custom authorization manager implementation as follows:

internal class TestContextAuthorizationManager : AuthorizationManager
{
    public TestContextAuthorizationManager(string shellId) : base(shellId)
    {
    }

    protected override bool ShouldRun(CommandInfo commandInfo, CommandOrigin origin, PSHost host, out Exception reason)
    {
        base.ShouldRun(commandInfo, origin, host, out reason);
        return true;
    }
}

In the LoadPSTestHost method you can now use this implementation instead of the default AuthorizationManager:

var state = InitialSessionState.CreateDefault2();
state.AuthorizationManager = new TestContextAuthorizationManager("VSTestShellId");
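For context, here is roughly how that state feeds into the rest of the host setup (mirroring the runspace creation shown earlier in this series):

var runspace = RunspaceFactory.CreateRunspace(state);
runspace.Open();
var shell = PowerShell.Create();
shell.Runspace = runspace;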

Monday, January 19th, 2015

PowerShell cmdlets and modules can report two kinds of errors: terminating and non-terminating. Terminating errors are errors that cause the pipeline to be terminated immediately, or errors that occur when there is no reason to continue processing. Non-terminating errors are errors that report a current error condition but let the cmdlet continue to process input objects. With non-terminating errors, the user is typically notified of the problem, but the cmdlet continues to process the next input object. Terminating errors are reported by throwing exceptions or by calling the ThrowTerminatingError method, while non-terminating errors are reported by calling the WriteError method, which in turn sends an error record to the error stream.
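As an illustration only (this cmdlet is hypothetical, not part of the framework), a cmdlet written in C# would report the two kinds like this:

[Cmdlet(VerbsDiagnostic.Test, "ErrorKinds")]
public class TestErrorKindsCommand : PSCmdlet
{
    protected override void ProcessRecord()
    {
        // Non-terminating: report the problem and keep processing
        // the remaining input objects.
        WriteError(new ErrorRecord(
            new InvalidOperationException("Recoverable problem"),
            "RecoverableProblem",
            ErrorCategory.InvalidOperation,
            null));

        // Terminating: stop the pipeline immediately.
        ThrowTerminatingError(new ErrorRecord(
            new InvalidOperationException("Fatal problem"),
            "FatalProblem",
            ErrorCategory.InvalidOperation,
            null));
    }
}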

To capture all the non-terminating errors, you have to probe the PowerShell.Streams.Error collection and collect the details of the errors. Terminating errors, on the other hand, are thrown as RuntimeException and can be handled in a catch block.

In our framework, I've extended the FunctionInfo object to expose a property that captures non-terminating errors, and also provided the option to surface non-terminating errors as a RuntimeException when needed, via the FailOnNonTerminatingError method.

public PsHost FailOnNonTerminatingError()
{
    _failOnNonTerminatingError = true;
    return this;
}

The implementation that handles these errors looks like this:

private string HandleNonTerminatingErrors(System.Management.Automation.PowerShell shell)
{
    var errors = shell.Streams.Error;
    if (errors == null || errors.Count <= 0) return String.Empty;
    var errorBuilder = new StringBuilder();
    foreach (var err in errors)
    {
        errorBuilder.AppendLine(err.ToString());
    }
    if (_failOnNonTerminatingError)
    {
        throw new RuntimeException(errorBuilder.ToString());
    }
    return errorBuilder.ToString();
}

Now, in your test code, you can use it as follows:

[TestMethod]
[ExpectedException(typeof(RuntimeException))]
public void Tests_PsHost_FailOnNonTerminatingError_ThrowsNonTerminatingErrorsAsRuntimeExceptions()
{
    PsHost<TestModule>
        .Create()
        .FailOnNonTerminatingError()
        .Execute("Invoke-NonTerminatingError");
}

Next we’ll see how to overcome the execution policies in the unit test context without altering the PowerShell environment policies.
Sunday, January 18th, 2015
Before we proceed to how we can stub out commands in our test framework, we'll see how the AddScript method works and how to use it to execute a script block in PowerShell. The AddScript method adds a script to the end of the pipeline of the PowerShell object; the script runs when the Invoke method is called. We'll use this method to add our dummy function as a script object to the pipeline, so that when the command is called later from a function, it calls the dummy function that was added via AddScript.
 
So our Stub() is implemented as:
 
public PowerShellHost Stub(string method)
{
    if (_runspace == null)
    {
        throw new ArgumentException("The PowerShell host should be setup before invoking the methods");
    }
    var script = String.Format("Function {0} {{}}", method);
    _shell.AddScript(script).Invoke();
    return this;
}
You can also see that the method returns the PowerShellHost itself, so that I can use a fluent interface model in my test methods. A sample test method that stubs the Write-Host command can be written as:

var psHost = new PowerShellHost<XModule>();
var actual = psHost
    .Stub("Write-Host")
    .Execute("Get-Greetings");
var result = actual.GetGreetings.Result.Select(psObject => psObject.BaseObject)
    .OfType<string>()
    .First();
Assert.AreEqual<string>("Hello from VSTest", result);

Next we'll see how exception handling can be taken care of in our framework.
Sunday, January 18th, 2015
The Windows PowerShell engine can be hosted via the System.Management.Automation.PowerShell class. Using this class you can create, execute and manage commands in a runspace. We'll use these features of the PowerShell class to load and execute our modules, and to interact with the PowerShell engine whenever needed in our unit test framework. When creating the PowerShell host for our test framework, it's good to define the runspace that provides the operating environment for command pipelines. In our framework I preferred to use a constrained runspace, which allows us to restrict the programming elements that are available to the user.
We'll later use this ability of constrained runspaces to simulate stub behavior in our test framework. To restrict the availability of aliases, applications, cmdlets, functions and scripts, we'll create an empty InitialSessionState and use that to create the runspace. Later we'll use the AddPSModule, AddCommand and AddPSSnapin methods to include the required functionality in our runspace in a test context.
 
InitialSessionState has three different methods to create a container that holds commands
  • Create - Creates an empty container. No commands are added to this container.
  • CreateDefault - Creates a session state that includes all of the built-in Windows PowerShell
    commands on the machine. When using this API, all the built-in PowerShell commands are loaded as snapins.
  • CreateDefault2 - Creates a session state that includes only the minimal set of commands needed to host Windows PowerShell. When using this API, only one snapin – Microsoft.PowerShell.Core – is loaded.

We'll use the CreateDefault2 method to create the InitialSessionState instance.

 
private InitialSessionState CreateSessionState(string path)
{
    var state = InitialSessionState.CreateDefault2();
    if (!String.IsNullOrEmpty(path))
    {
        state.ImportPSModulesFromPath(path);
    }
    return state;
}
 
In the CreateSessionState method, we use the Path property of the PsModuleAttribute created in part 1 of this series to load all modules from the provided path into the InitialSessionState.
 
Once we have the InitialSessionState, we’ll use this container to create the Runspace and then the PowerShell host
 
_runspace = RunspaceFactory.CreateRunspace(state);
_runspace.Open();
_shell = PowerShell.Create();
_shell.Runspace = _runspace;
 
We'll use this PowerShell instance to execute the commands needed from the module. In the framework, my Execute method is implemented as follows:
 
public TModule Execute(string method)
{
    if (_shell == null)
    {
        throw new ArgumentException("The PowerShell host should be setup before invoking the methods");
    }
    _shell.AddCommand(_moduleInfo.Name + @"\" + method);
    var methodProperties = typeof(TModule).GetProperties()
        .Where(prop => prop.IsDefined(typeof(PsModuleFunctionAttribute), false)).ToList();
    var property = methodProperties.First(p => p.GetCustomAttribute<PsModuleFunctionAttribute>().Name == method);
    var commandInfo = property.GetValue(_module) as PsCommandInfo;
    var parameters = commandInfo.Parameters;
    if (parameters != null)
    {
        _shell.AddParameters(parameters);
    }
    var results = _shell.Invoke();
    commandInfo.Result = results;
    property.SetValue(_module, commandInfo);
    DisposeContext();
    return _module;
}
 
As you can see from the code, we add the command and its parameters to the shell using reflection, based on the metadata defined in the attributes, and then invoke the shell. The results are set back as property values on the ModuleObject and returned to the test to assert the conditions.
 
A simple test using the framework can be written like this:
 
var psHost = new PowerShellHost<XModule>();
var actual = psHost
    .Execute("Get-Greetings");
var result = actual.GetGreetings.Result.Select(psObject => psObject.BaseObject)
    .OfType<string>()
    .First();
Assert.AreEqual<string>("Hello from VSTest", result);
 

Next in the series, we'll see how to make use of our framework to stub methods/cmdlets in the execution pipeline.

Saturday, January 17th, 2015
Recently I started working on a lightweight unit testing framework for my PowerShell modules. There are a lot of testing frameworks for PowerShell that can be executed as script files from a PS host, but not many allow you to integrate with VSTest and write test methods and classes in C#.
The following posts in this series are about how you can create a unit testing framework for Windows PowerShell and use it.
 
Being a big fan of the Page Object pattern, and having seen how easily you can model a page and reduce the amount of duplicate code when creating UI tests for websites, I wanted to do something similar for PowerShell modules. So when I started writing the framework, one of the main considerations was to simplify my testing process by modelling a PowerShell module in code.
 
I also wanted to follow a more declarative approach to defining the metadata needed to provide inputs for my tests and model, so I started thinking about the attributes I would need to model a PowerShell module.
 
To define a module name and the location of the module, I created the PsModuleAttribute with a Name and a Path property, so that I can use this attribute on my PSModule model for the ModuleObject pattern implementation.
 
[AttributeUsage(AttributeTargets.Class)]
public class PsModuleAttribute : Attribute
{
    public PsModuleAttribute(string name)
    {
        Name = name;
    }

    [Required(AllowEmptyStrings = false, ErrorMessage = "A PS module name should be provided to test a module")]
    public string Name { get; set; }

    public string Path { get; set; }
}
Next I wanted to define the functions, and the parameters of those functions, in my model. The functions in the PowerShell module can be simulated as properties in the ModuleObject. Once you have these properties defined, you can use the same attribute-based approach to define the name, parameters, return value etc.
 
[AttributeUsage(AttributeTargets.Property)]
public class PsModuleFunctionAttribute : Attribute
{
    [Required(AllowEmptyStrings = false, ErrorMessage = "A module function should have a name")]
    public string Name { get; set; }

    public PsModuleFunctionAttribute(string name)
    {
        Name = name;
    }
}

[AttributeUsage(AttributeTargets.Property, AllowMultiple = true)]
public class PsModuleParameterAttribute : Attribute
{
    [Required(AllowEmptyStrings = false, ErrorMessage = "A parameter should have a name")]
    public string Key { get; set; }

    [Required(AllowEmptyStrings = false, ErrorMessage = "A parameter should have a value")]
    public string Value { get; set; }

    public PsModuleParameterAttribute(string key, string value)
    {
        Key = key;
        Value = value;
    }
}
 
In the framework, I created the PsCommandInfo object to wrap these values in properties:
 
public class PsCommandInfo
{
    public string Name { get; set; }
    public IDictionary Parameters { get; set; }
    public Collection<PSObject> Result { get; set; }
}
 
The final implementation of the ModuleObject looks like this:

[PsModule("xModule", Path = @"E:\MyModules\xModule")]
public class XModule
{
    [PsModuleFunction("Get-Greetings")]
    [PsModuleParameter("context", "VSTest")]
    public PsCommandInfo GetGreetings { get; set; }
}
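As a quick, hypothetical illustration of what the next post builds on, this metadata can be read back via reflection:

var moduleAttribute = typeof(XModule).GetCustomAttribute<PsModuleAttribute>();
// moduleAttribute.Name is "xModule"; moduleAttribute.Path is @"E:\MyModules\xModule"
var functionProperties = typeof(XModule).GetProperties()
    .Where(p => p.IsDefined(typeof(PsModuleFunctionAttribute), false));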
 
Next we’ll see how to extract this information in a unit test context to execute it.
Monday, January 12th, 2015
With ASP.NET Application Insights it's now very easy to collect operational, performance and usage information in your ASP.NET web applications. With the data provided by Application Insights you can quickly detect and act on performance and availability issues.
You can add Application Insights to your project from Visual Studio in combination with an Azure account. The Application Insights option can be selected when you create a project, as shown below (you'll need to sign up using a Microsoft Azure account).
 
For existing web applications, you can also choose to add the script that enables insights at a later point in time. Follow the steps given below to get the code to add insights.
  • Log in to the Azure portal and choose the Add Application Insights option.
  • Provide the name of the site and configure the resource group and subscription details.
  • Choose the Add code to monitor web pages option.
  • Get the script and insert it into your Layout page or master page.
  • Open the portal to see the insights on your dashboard.
 

 

Friday, January 02nd, 2015

If you are building and deploying public-facing web applications, security has to be one of your key considerations. Whenever a browser makes an HTTP request to a web server, it sends along several HTTP headers. These headers provide the web server with information to assist with handling the request. While certain HTTP headers are necessary, the web server's identifying headers are not. Providing identifying information can pose a security risk: an attacker who knows of a vulnerability in a particular web server and ASP.NET version combination can make HTTP requests to many different servers and flag those that return that particular web server/ASP.NET version number.

By default the following identifying headers are included in IIS7:

  1. Server
  2. X-Powered-By
  3. X-AspNet-Version
  4. X-AspNetMvc-Version

Observe the above headers in the following sample HTTP Response:

HTTP/1.1 200 OK
Cache-Control: private
Content-Length: 30345
Content-Type: text/html; charset=utf-8
Vary: Accept-Encoding
Server: Microsoft-IIS/7.0
P3P: CP="ALL IND DSP COR ADM CONo CUR CUSo IVAo IVDo PSA PSD TAI TELo OUR SAMo CNT COM INT NAV ONL PHY PRE PUR UNI"
X-AspNetMvc-Version: 5.1
X-Frame-Options: SAMEORIGIN
X-AspNet-Version: 4.0.30319
X-Powered-By: ASP.NET
X-Instance: CO102
X-UA-Compatible: IE=edge
X-Powered-By: ARR/2.5
P3P: CP="ALL IND DSP COR ADM CONo CUR CUSo IVAo IVDo PSA PSD TAI TELo OUR SAMo CNT COM INT NAV ONL PHY PRE PUR UNI"
X-Powered-By: ASP.NET
X-Instance: CO102
Date: Fri, 02 Jan 2015 11:16:21 GMT

This identifying information is not used by the browser in any way, and can safely be removed. Let’s see how to remove these HTTP Headers.

1.    Removing the X-AspNet-Version HTTP Response Header:

You can turn off the X-AspNet-Version header by changing the following configuration in your Web.Config:

<system.web>
  <httpRuntime enableVersionHeader="false" />
</system.web>

2.    Removing the X-AspNetMvc-Version HTTP Response Header:

You can remove the X-AspNetMvc-Version header by altering your Global.asax.cs as follows:

protected void Application_Start()
{
    MvcHandler.DisableMvcResponseHeader = true;
}

3.    Removing the X-Powered-By HTTP Response Header:

You can turn off the X-Powered-By header by following the below steps:

  1. Launch the Internet Information Services (IIS) Manager
  2. Expand the Sites folder
  3. Select the website to modify and double-click the HTTP Response Headers section in the IIS grouping.
  4. Each custom header is listed here, as the screen shot below shows. Select the header to remove and click the Remove link in the right-hand column.

 

Removing X-Powered-By HTTP Response Header

4.    Removing the Server HTTP Response Header:

The Server HTTP header can be removed by using a custom module injected into the IIS 7 pipeline. Such a module can be developed using managed or unmanaged code.

Here is a sample .NET module which replaces the Server HTTP header with a custom header:

using System;
using System.Text;
using System.Web;

namespace Sample.ServerModules
{
    public class CustomServerHeaderModule : IHttpModule
    {
        public void Init(HttpApplication context)
        {
            context.PreSendRequestHeaders += OnPreSendRequestHeaders;
        }

        public void Dispose()
        {
        }

        void OnPreSendRequestHeaders(object sender, EventArgs e)
        {
            // replace the "Server" HTTP header with a custom value
            HttpContext.Current.Response.Headers.Set("Server", "YourWebServerName");
        }
    }
}

When building this module, make sure to strong-name it, as it needs to be placed into the global assembly cache in order for IIS 7 to use it. To add the module to IIS 7, use the "Modules" configuration option on the server, choose "Add Managed Module" and select the module from the list of available modules.
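Alternatively, the module can be registered in the site's web.config (a sketch, assuming the assembly above is named Sample.ServerModules):

<system.webServer>
  <modules>
    <add name="CustomServerHeaderModule" type="Sample.ServerModules.CustomServerHeaderModule, Sample.ServerModules" />
  </modules>
</system.webServer>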

Removing identifying HTTP Response Headers has two benefits:

  • It slims down the quantity of data transmitted from the web server back to the browser, and
  • It makes it a bit harder for attackers to determine the software (and the versions) powering the web server.

Thursday, January 01st, 2015
With your VSO account and team projects created under that account, it's very easy to have full ALM capabilities integrated into your project. Once you have chosen the version control for your project, you can easily hook up a build definition and create CI builds using the hosted build controller provided as part of VS Online. If you are using the hosted build controller, no additional hardware needs to be installed or configured to set up a CI build.

1) From VS Online, open the project in Visual Studio and connect to the team project by selecting the VS Online account server.

2) Choose Builds and create a new build definition for the project.

3) On the new build definition screen, give the build definition an appropriate name.

4) On the Trigger tab, choose Continuous Integration as the trigger.

5) Set up the source location based on the type of version control used. In my example, we have used Git as the version control system.

6) On the build controller screen, choose the Hosted Build Controller option. This is the build controller provided as part of VS Online and can be used for most common scenarios. Refer to this link to see if you need a separate build controller instead of the hosted build controller.

7) Configure the project to build, and the other options, on the Process tab.

8) Save and close the build definition.

9) Open the solution in Visual Studio and make some changes to the project. Commit the changes to the master repository (in case of TFVC, you can check in). After committing, you need to sync the changes to the Git server.

10) On your VS Online account build page, or in Team Explorer, you can see the CI build queued as soon as your check-in is done. Once the build completes, you can check its status and details on the Completed tab.

 

Next we'll see how to set up your build server on an Azure VM and use that build controller instead of the hosted build controller.