Archive for the Category ◊ .Net ◊

Tuesday, April 15th, 2014

Let us look at the formal definition of the builder pattern:

Separate the construction of a complex object from its representation so that the same construction process can be used to create different representations.

In other words, when we need to create different versions of a complex object, and there are many steps involved in creating it, the builder pattern lets us construct the object in a structured and maintainable way.

Let us take an example. The object/product which we want to build is Kurkure. This product has many versions, like Masala Munch, Chilli Chatka, Green Chutney and so on.


So, let’s start putting a few things into code.

The Product Kurkure has the following attributes.

public class Kurkure
{
    public bool PotatoCleaned { get; set; }
    public bool PotatoPealed { get; set; }
    public SnackShape SnackShape { get; set; }
    public SnackSize SnackSize { get; set; }
    public Spice Spice { get; set; }
    public PackageType Package { get; set; }
    public BrandType BrandType { get; set; }

    public void Display()
    {
        Console.WriteLine("Kurkure: {0} | Spice: {1} | Package: {2}", BrandType, Spice, Package);
    }
}

public enum SnackSize
{
    Big,
    Small,
    Medium
}

public enum BrandType
{
    MasalaMunch,
    ChilliChatka,
    GreenChutney,
    Hyderabadi,
    GreenChilli
}

public enum SnackShape
{
    Diagonal,
    Straight,
    Triangle,
    Random
}

public enum Spice
{
    Masala,
    GreenChilli,
    GreenChutney,
    Hyderabadi,
    Tomato
}
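The Kurkure class above also references a PackageType enum that is not shown in the listing. A minimal definition can be inferred from the concrete builders later in the post, which assign PackageType.LightOrange and PackageType.DarkGreen; anything beyond those two members would be an assumption.

```csharp
// Inferred from the concrete builders below, which use
// PackageType.LightOrange and PackageType.DarkGreen.
public enum PackageType
{
    LightOrange,
    DarkGreen
}
```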

If we look at it closely, all the versions of Kurkure are just different versions of potato chips: potato chips cut into different shapes and sizes, with different spices applied. The way to prepare each of them is similar.

 

- Clean the potatoes.
- Peel and cut them into the required shape and size.
- Add spices.
- Package them.

 

Let us prepare Kurkure using the builder pattern. We will start by putting all the common behavior in a base class.

public abstract class KurkureBuilder
{
    protected Kurkure kurkure;

    public Kurkure GetKurkure()
    {
        return kurkure;
    }

    public void CreateKurkure()
    {
        kurkure = new Kurkure();
    }

    public void PreparePotatos()
    {
        kurkure.PotatoCleaned = true;
        kurkure.PotatoPealed = true;
    }

    public abstract void CutPotatoes();
    public abstract void ApplySpices();
    public abstract void Package();
}

 

 

Let us create the Masala Munch and Chilli Chatka Kurkure!

public class MasalaMunchBuilder : KurkureBuilder
{
    public override void CutPotatoes()
    {
        kurkure.SnackSize = SnackSize.Medium;
        kurkure.SnackShape = SnackShape.Random;
    }

    public override void ApplySpices()
    {
        kurkure.Spice = Spice.Masala;
    }

    public override void Package()
    {
        kurkure.BrandType = BrandType.MasalaMunch;
        kurkure.Package = PackageType.LightOrange;
    }
}

public class ChilliChatkaBuilder : KurkureBuilder
{
    public override void CutPotatoes()
    {
        kurkure.SnackSize = SnackSize.Small;
        kurkure.SnackShape = SnackShape.Straight;
    }

    public override void ApplySpices()
    {
        kurkure.Spice = Spice.GreenChilli;
    }

    public override void Package()
    {
        kurkure.BrandType = BrandType.ChilliChatka;
        kurkure.Package = PackageType.DarkGreen;
    }
}

 

With all this in place, let us now create a Kurkure maker which actually creates the Kurkure. The maker knows the sequence in which the product has to be created.

 

public class KurkureMaker
{
    private readonly KurkureBuilder builder;

    public KurkureMaker(KurkureBuilder builder)
    {
        this.builder = builder;
    }

    public void CreateKurkure()
    {
        builder.CreateKurkure();
        builder.PreparePotatos();
        builder.CutPotatoes();
        builder.ApplySpices();
        builder.Package();
    }

    public Kurkure GetKurkure()
    {
        return builder.GetKurkure();
    }
}

 

 

Now, let us use the Kurkure maker to create some kurkures!

 

static void Main(string[] args)
{
    var kurkureMaker = new KurkureMaker(new MasalaMunchBuilder());
    kurkureMaker.CreateKurkure();
    var masalaMunchKurkure = kurkureMaker.GetKurkure();
    masalaMunchKurkure.Display();

    kurkureMaker = new KurkureMaker(new ChilliChatkaBuilder());
    kurkureMaker.CreateKurkure();
    var chilliChatkaKurkure = kurkureMaker.GetKurkure();
    chilliChatkaKurkure.Display();

    Console.ReadKey();
}

       

Given below is the formal representation of the players in the builder pattern.


 

Product:

What is being built; in our case, the Kurkure.

Director:

The KurkureMaker in our case. The director uses a concrete builder (MasalaMunchBuilder or ChilliChatkaBuilder in our example) and knows the sequence, or order, in which the product has to be built. Used directly by the client.

Builder:

An abstract class or interface; the KurkureBuilder in our case. Defines the building steps and holds an instance of the end product.

Concrete Builders:

Implement the interface defined by the builder. MasalaMunchBuilder and ChilliChatkaBuilder are the concrete builders in our example.

Friday, April 11th, 2014

Microsoft recently acquired InCycle’s “InRelease” software, now called Release Management (RM), and integrated it with VS 2013. Release Management fully supports TFS 2010, 2012, and 2013.

Before we look into details of Release Management, let’s look at what Continuous Delivery means.

What is CD?

Continuous Delivery is the capability of automatically deploying components to various servers in different environments. This typically involves configuration management of the different environments and the ability to define and customize a business-driven deployment workflow involving multiple roles in the organization.

Why do we need it?

Well, DevOps is the talk of the town. If you want to be a cool kid (team), you gotta know and implement CD. Apart from the cool factor, CD brings the following advantages to the dev team and the business.

- Develop and deploy quality applications at a faster pace.
- Improve the value of delivery by reducing cycle time.
- Enable the same deployment package to traverse various environments, as opposed to rebuilding for each environment.
- Manage all configuration information in a centralized location.
- Have repeatable, visible, and more efficient releases.
- Align deployments with business processes.
- Adhere to any regulatory requirements during the deployment process.

What is Release Management?

Release Management is a continuous delivery solution for .NET teams that automates deployments through every environment, from Team Foundation Server (TFS) to production. RM also allows you to define release paths that include approvals from the business and other departments (such as ops) when required. RM enables you to assemble all the components of your application, copy them to the required target servers, and install all of them in one transaction. QA checks such as automated tests, data generation scripts, and configuration changes are all handled by RM. Release Management also handles rollback in the required scenarios.

Release Management Components:

The following diagram shows the main components of Release Management.

Release Management Components

Client: There are two client components. The Windows client is a Windows Presentation Foundation (WPF) application that serves as the main interface point to manage release information. The Web client is used to act on approval requests; this is the interface to which users are directed when following links in e-mail notifications. The client is used by both business and development teams to provide the necessary approvals when required.

RM Server: The Server component is the heart of Release Management. It is a combination of Web and Windows Services that expose contracts used by all other components. The server component also contains a SQL Server database. Typically, the RM server is installed on the TFS server and can share the same SQL Server.

RM Deployer: The Deployer component is a Windows service that lives on the Target Servers (such as Dev, QA, Prod etc) where your application components need to be installed.

Tools: The Tools are components that help in the deployment of various components to different servers, configuration changes, and so on. A few of them are given below.

-          Installing a version of a component to a specific environment.

-          Deployments to Azure

-          Uninstalling a previous version of a component before a re-deployment

-          Deploying reports to Microsoft SQL Reporting Services

-          Running SQL scripts on a database server etc.

In the next blog, I’ll write about configuring Release Management.

Reference material:

Channel 9 Video

Visual Studio 2013 ALM VM – Hands on lab

InRelease User guide

Wednesday, April 02nd, 2014

If you’re using CodedUI for web automation, you’d typically write code like this:

BrowserWindow browser = BrowserWindow.Launch(); 
HtmlButton btnSubmit = new HtmlButton(browser); 
btnSubmit.SearchProperties[HtmlButton.PropertyNames.Id]="submitbutton"; 
btnSubmit.Click();

In the code above, I am looking for a button with ID “submitbutton” and clicking it. That seems like too much code for a simple task.
Also note that we hardcoded it as HtmlButton; if I wanted to click a checkbox, I would have to repeat this code snippet with HtmlCheckBox.

Wouldn’t it be nice if we could just do this:

browser.GetById("submitbutton").Click();

We can do it by just adding an extension method to the BrowserWindow class:

public static class BrowserWindowExtension
{
    public static HtmlControl GetById(this BrowserWindow browser, string id)
    {
        return browser.GetByAttribute<HtmlControl>("Id", id);
    }

    public static T GetByAttribute<T>(this BrowserWindow browser, string attribute,
        string value) where T : HtmlControl
    {
        var type = typeof(T);
        var ele = (T)Activator.CreateInstance(type, new object[] { browser });
        ele.SearchProperties.Add(attribute, value, PropertyExpressionOperator.Contains);
        return ele;
    }
}

That’s it. You can now use the browser.GetById() method for quick access by ID.
You’ll also notice above that I added another useful extension method, GetByAttribute(), which I used in the GetById().
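Because GetByAttribute<T>() is generic, the checkbox case mentioned earlier no longer needs its own copy of the search code. A small sketch, where "agree" is a hypothetical element ID used purely for illustration:

```csharp
// Works for any HtmlControl subclass; no per-control copy-paste needed.
// "agree" is a hypothetical element ID for illustration.
HtmlCheckBox chkAgree = browser.GetByAttribute<HtmlCheckBox>("Id", "agree");
chkAgree.Checked = true;
```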

In part 2, we’ll see extension methods for HtmlControl class that make it easy to work with different types of html elements.

-Anand Gothe

Tuesday, April 01st, 2014
Building and maintaining a local NuGet gallery helps you facilitate your development process with local packages that you do not want to publish publicly to NuGet.org. It also helps you maintain and publish stable versions of your product that are accessible to all developers in your company or team. You can also integrate package publishing into your TFS build to automatically publish stable versions of a package to your local NuGet gallery.
 
In this post, we’ll explore the steps necessary to create and consume your own NuGet gallery.
Prerequisites
Before you can get NuGet up and running you’ll need some prerequisite software.
  • Visual Studio 2013
  • PowerShell 2.0
  • NuGet
  • Windows Azure SDK v2.2
  • xUnit for Visual Studio 2013
Setting up local NuGet gallery
Once you have ensured all the prerequisite software is installed, you need to clone the NuGet Gallery repository from GitHub.
 
After successfully cloning the repository, open the command prompt and run the build command to build the project
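The clone-and-build step can be sketched as below. The URL is the public NuGet Gallery repository on GitHub; the build script name may differ between versions of the repository, so treat this as a sketch rather than the exact commands from the post.

```shell
# Clone the NuGet Gallery repository and build it from a command prompt.
git clone https://github.com/NuGet/NuGetGallery.git
cd NuGetGallery
build.cmd
```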
 
The next step is to setup the website in IIS express.
  • Open the Windows PowerShell console as Administrator and ensure that you have the execution policy set to Unrestricted.
  • In the PowerShell console, navigate to the local NuGet gallery solution folder and run the Ensure-LocalTestMe.ps1 script from the tools folder.
 
  • After setting up the IIS website, you need to create the database for the local NuGet gallery.
  • Open the NuGetGallery.sln solution from the root of the repository.
  • Set the startup project of the solution to NuGetGallery.
  • Open the Package Manager Console window in Visual Studio 2013.
  • Ensure that the Default Project is set to NuGetGallery.
  • Run the Update-Database command in the Package Manager Console.
  • Change the ConfirmEmailAddress setting in the Web.config file of the NuGetGallery project to false; this disables confirmation of the email address when uploading packages to the local gallery.
  • Press Ctrl+F5 to run the website in IIS Express.
Configure Package manager settings in Visual Studio
After setting up your private repository, you need to configure Visual Studio to add it to the local package sources, where it works just like the public repository.
  • Open the Package Manager settings
 
  • Go to the Package Sources section.
  • Click the plus button to add a new source.
 
  • Specify a name and source of the new gallery
 
  • Click OK
Creating and uploading a NuGet package
  • To create a package, launch Package Explorer and select File > New menu option
  • Then select the Edit > Edit Package Metadata menu option to edit the package metadata.
  • The final step is to drag in the contents of your package into the Package contents pane. Package Explorer will attempt to infer where the content belongs and prompt you to place it in the correct directory within the package. For example, if you drag an assembly into the Package contents window, it will prompt you to place the assembly in the lib folder.
 
  • Save your package via the File > Save menu option.
 
  • After creating a package, use the local NuGet gallery website to upload it.
  • After uploading, you can use the local gallery site to verify that the package appears in the site’s package list.
  • Once the package is available in the local NuGet store, you can install it from Visual Studio.
Tuesday, April 01st, 2014
Gated Check-in builds help teams prevent broken builds: pending changes are not automatically committed to the repository; instead, the system creates a separate shelveset that is picked up by the gated check-in build. The build itself decides whether the pending changes should be committed to the repository, based on the applied quality gates. In large teams, the main branch is normally gated, and DEV/feature branches are gated to offer more protection for the branch.
A gated check-in means that a build is performed with one set of changes from one developer, and those changes are checked in only if the build passes. This means that breaking changes never get into the code base, and it limits the pain to the developer checking in, rather than sharing that pain with the entire team when somebody makes a mistake.
Setting up a gated check in
  1. In Team Explorer, make sure you are connected to the team project and then open the Builds page.
  2. Choose the New Build Definition link or select a build, open its shortcut menu
  3. Choose Edit Build Definition.
  4. On the Trigger tab: Choose Gated Check-in.
 
When a user checks in some code, Visual Studio prompts to validate the build.
On selecting the Build Changes option, the Team Explorer window shows the status of the build validation.
On a successful build, TFS shows the user the status of the check-in.
Clicking the “Reconcile …” button will force an undo in the local workspace and pull the changes that were committed by the gated check-in build.
Category: .Net, Agile/Scrum, ALM, DevOps, TFS
Thursday, March 27th, 2014

Windows Services were previously called NT Services. The advantage of a Windows Service application, compared to other application types, is that it can be made to run in the security context of a specific user account and continues running in the background even when no user is logged on.

Basically, two types of services can be created in the .NET Framework:

- Own Process services (Win32OwnProcess)
- Share Process services (Win32ShareProcess)

Occasionally used service types also exist for the file system, hardware, and kernel drivers.

In any type of application, the Main method is the entry point. In a service, this role is taken by the OnStart() method, which is called when the service starts.

Windows Service application development involves two phases: developing the service functionality, and developing the installer.

The three main classes involved in service development are:

  • System.ServiceProcess.ServiceBase
  • System.ServiceProcess.ServiceProcessInstaller
  • System.ServiceProcess.ServiceController

System.ServiceProcess.ServiceBase is the class whose methods we override for the implementation. The methods that define this behavior are as follows.

Method: Override to:

OnStart: Indicate what actions should be taken when your service starts running. You must write code in this method for your service to perform useful work.
OnPause: Indicate what should happen when your service is paused.
OnStop: Indicate what should happen when your service stops running.
OnContinue: Indicate what should happen when your service resumes normal functioning after being paused.
OnShutdown: Indicate what should happen just prior to the system shutting down, if your service is running at that time.
OnCustomCommand: Indicate what should happen when your service receives a custom command. For more information on custom commands, see MSDN online.
OnPowerEvent: Indicate how the service should respond when a power management event is received, such as a low battery or suspended operation.
Run: The main entry point for the service.

 

Steps to create a self-installing Windows Service: by default, a Windows Service project adds the following references.

System.Configuration.Install

System.ServiceProcess

 

// Specify the service name
private static string ServiceName = "FileListingService";

protected override void OnStart(string[] args)
{
    // Start any threads, HTTP listeners, etc., or perform any task
}

protected override void OnStop()
{
    // Stop any threads, HTTP listeners, etc., that were started in OnStart
}

protected override void Dispose(bool disposing)
{
    // Clean up your resources if you have to
    base.Dispose(disposing);
}

The ServiceController class has two methods, GetDevices() and GetServices(). To list all the installed services, we can call GetServices() and query the result.

private static bool IsServiceInstalled()
{
    return ServiceController.GetServices().Any(service => service.ServiceName == ServiceName);
}

private void InstallService()
{
    if (!IsServiceInstalled())
    {
        ManagedInstallerClass.InstallHelper(new string[] { "/I", Assembly.GetExecutingAssembly().Location });
    }
}
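Putting the pieces together, a minimal sketch of the entry point for a self-installing service might look like the following. This assumes the Main method lives inside the ServiceBase-derived class (here called FileListingService) so it can call the private InstallService() helper above; the "-install" command-line switch is an illustrative convention, not something from the original post.

```csharp
// Inside the hypothetical FileListingService : ServiceBase class.
static void Main(string[] args)
{
    if (args.Length > 0 && args[0] == "-install")
    {
        // Self-install on demand, using the InstallService() helper shown above.
        new FileListingService().InstallService();
    }
    else
    {
        // Normal startup: hand control to the Service Control Manager,
        // which will call OnStart().
        ServiceBase.Run(new FileListingService());
    }
}
```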

 

We can use the following command-line switches:

- /I: installs the service
- /U: uninstalls the service

After the service is installed, you can verify it in services.msc.

Category: .Net
Tuesday, March 25th, 2014
The Law of Demeter is a design guideline for developing software, particularly object-oriented programs. In its general form, the law is a specific case of loose coupling. The fundamental notion of the law is that a given object should assume as little as possible about the structure or properties of anything else (including its subcomponents), in accordance with the principle of “information hiding”.
For example, the code given below shows the client calling a method on a dependency of its collaborator rather than on the collaborator itself. This increases the coupling between classes.

var manager = john.GetDepartment().GetManager();
The Hide Delegate refactoring can be applied to this design to solve the issue:

public Manager GetManager()
{
    return Department.GetManager();
}
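To make the Hide Delegate refactoring concrete, here is a minimal sketch. The Employee, Department, and Manager class names are assumptions for illustration (john suggests an employee object); only the delegating GetManager() method itself comes from the original snippet.

```csharp
public class Employee
{
    // The delegate is now private: callers cannot reach through it.
    private Department Department { get; set; }

    // Hide Delegate: Employee exposes GetManager() itself, so the client
    // no longer needs to know that a Department sits in between.
    public Manager GetManager()
    {
        return Department.GetManager();
    }
}

// Client code after the refactoring:
// var manager = john.GetManager();
```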
Thursday, March 13th, 2014

(Or: how to access an Azure website hosted on the local Windows Azure Compute Emulator using the host machine’s IP address.)

Scenario: you are testing a website in your local development environment, hosted on the Windows Azure Emulator. By default it binds to your loopback address (localhost/127.0.0.1). What this means is that if you are on a domain or network and want to access this service/website from another computer on the network, you basically can’t.

 

Ideal Workaround: an automated approach is described using rinetd, with or without ServiceEx, in detail here: http://blog.sacaluta.com/2012/03/windows-azure-dev-fabric-access-it.html

Zappy Workaround: Here I will show you a very primitive way of working around the problem. But this is manual and you have to do it every time you start and stop the Azure emulator or the Azure role.

If the role you are trying to access from another computer on a domain is a website or uses IIS (most likely), then go ahead and edit the bindings, just as you would for a normal website hosted in IIS. Right-click the website and select Edit Bindings. Add the IP addresses and host names (with an available port number) that you want to bind to the website. That’s it, you are done.

For instance, in the screenshot described below, my service deployment was deployment22(44), as shown in the Windows Azure Compute Emulator, so my website in IIS was named deployment22(44).xxxxxx. The website created in IIS is purged every time you start and stop an Azure service; that’s why I prefer the “Ideal Workaround”. But this post shows you yet another simple way to do it without tools.

All-in-one screenshot: my csdef file, the Windows Azure Compute Emulator, the IIS site bindings, Internet Explorer successfully navigating to the website with all available bindings, and a Linux VM in VirtualBox accessing the website via my IP address.


 


Wednesday, March 12th, 2014

Earlier last year, Sachin blogged about Using Httpmodule to secure your web application. While the concept of having a gatekeeper for every request still applies for authorization and authentication, the solution for ASP.Net MVC framework may be a bit different from ASP.Net WebForms, especially when you are looking for Controller level access control mechanisms.

The HttpModule is the gatekeeper for ASP.Net; one level down are the action filters for ASP.Net MVC. While managing large-scale applications, it does not always seem rational to create new controllers for every piece of functionality. You may also want to restrict access to specific controllers or specific action methods, and if you worked it through you would end up with a code snippet like the one below: an if-else condition everywhere you wanted access control.

        [HttpGet]
        public ActionResult CustomizeEmails()
        {
            if (Context.Login.IsAdministrator)
            {
                var viewModel = new CustomizeEmailViewModel();
                return View(viewModel);
            }
            else
            {
                return AccessDeniedView();
            }
        }

        [HttpGet]
        public ActionResult CustomizeUserHomePage()
        {
            if (Context.Login.IsAdministrator)
            {
                var viewModel = new CustomizeUserHomePageViewModel();
                return View(viewModel);
            }
            else
            {
                return AccessDeniedView();
            }
        }


Which is obviously redundant and does not reflect the code reusability principle. So you may choose to create a custom HttpModule for access control during the initial ASP.Net request pipeline, or if that is not a possible solution in your case (as in the ASP.Net MVC example above), then you should be looking at building a custom action filter. Once you have it in place, you can decorate the required action methods with your access control filter, or the entire controller, or register it as a global action filter (post ASP.Net MVC 3) so that it gets invoked on every controller in the application.

Below is the code snippet showing the bare minimal implementation of a custom action filter for access control. In case the current request does not come from an Administrator, then it redirects him to an AccessDenied action method in the CompanyController.

using System;
using System.Collections.Generic;
using System.Linq;
using System.Web;
using System.Web.Mvc;

namespace WebClient.Filters
{
    public class AdminOnlyAction : ActionFilterAttribute
    {
        public override void OnActionExecuting(ActionExecutingContext filterContext)
        {
            //Cast the filterContext.Controller to the controller that has the access control information 
            //In my case it happened to be a BaseController
            var baseController = ((BaseController)filterContext.Controller);

            if (!baseController.Context.Login.IsAdministrator)
            {
                filterContext
                    .HttpContext
                    .Response
                    .RedirectToRoute(new { controller = "Company", action = "AccessDenied" });
            }

            base.OnActionExecuting(filterContext);
        }
    }
}
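If you want the global action filter option mentioned above, registering the AdminOnlyAction filter for every controller is a one-liner in Application_Start. A sketch, assuming a standard MVC Global.asax.cs:

```csharp
// In Global.asax.cs
protected void Application_Start()
{
    // Register AdminOnlyAction for every action in the application
    // (available since ASP.Net MVC 3).
    GlobalFilters.Filters.Add(new WebClient.Filters.AdminOnlyAction());

    // ...other startup code (routes, bundles, etc.) goes here.
}
```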

 

The if-else statements in the first snippet now take a more elegant and neat form.

        [HttpGet]
        [AdminOnlyAction]
        public ActionResult CustomizeEmails()
        {
                var viewModel = new CustomizeEmailViewModel();
                return View(viewModel);
        }

        [HttpGet]
        [AdminOnlyAction]
        public ActionResult CustomizeUserHomePage()
        {
                var viewModel = new CustomizeUserHomePageViewModel();
                return View(viewModel);
        }


Thus you have a simple, elegant, and powerful access control mechanism via a custom action filter. If you like this kind of cleanliness in non-MVC projects, please take a look at PostSharp as well.

Wednesday, March 05th, 2014

What I have noticed in most of the projects I have been part of is that performance is treated as a lowest-priority feature. Well, it’s definitely not a feature: performance should be built into each feature and measured as early and as frequently as possible.

I have collated a few key points to fine-tune the performance of any web application (targeting .NET and IIS, but not limited to them).

1. Use a CDN (Content Delivery Network): All 3rd-party JavaScript files such as jQuery and Knockout should always be served from a CDN instead of your web application server. CDN servers are dedicated to delivering static content and are almost always faster than your own host.

There is a very high probability that the client (browser) has already cached the JavaScript as part of another web application, since most of them use the same CDN URL. You can read more about the benefits of CDNs here.

2. Use Bundling and Minification: The custom CSS files and the JavaScript files should be bundled into a single large file (reduces the number of HTTP requests) and also minified (reduces the size of the data transferred over the wire).

How to enable bundling and minification in MVC: http://www.asp.net/mvc/tutorials/mvc-4/bundling-and-minification
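A minimal BundleConfig sketch for the bundling step above; the virtual paths and file names are illustrative assumptions, not from the original post:

```csharp
using System.Web.Optimization;

public class BundleConfig
{
    public static void RegisterBundles(BundleCollection bundles)
    {
        // One script bundle: fewer HTTP requests, minified in release builds.
        // "~/Scripts/app.js" is a hypothetical custom script.
        bundles.Add(new ScriptBundle("~/bundles/site").Include(
            "~/Scripts/jquery-{version}.js",
            "~/Scripts/app.js"));

        // One style bundle for the custom CSS.
        bundles.Add(new StyleBundle("~/Content/css").Include(
            "~/Content/site.css"));
    }
}
```

RegisterBundles is typically called from Application_Start via BundleConfig.RegisterBundles(BundleTable.Bundles).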

3. Use static content caching: Always set static content (JavaScript, CSS, images, etc.) to be cached on the client side. Most modern browsers will cache static content themselves. Use a “never expires” policy to ensure that most of it needn’t be re-fetched.

Note: This could also lead to the client not getting the latest updates when something changes; include the version in the file name, and when you update the file, also change the version number.

<configuration>
  <system.webServer>
    <staticContent>
      <clientCache cacheControlMode="UseExpires" httpExpires="Tue, 19 Jan 2038 03:14:07 GMT" />
    </staticContent>
  </system.webServer>
</configuration>

You can read more about client cache here.

4. Always keep CSS and JavaScript external: Never put JavaScript or inline style information within the views. Doing so would regenerate the view each time, and you would miss out on the benefits above. Hence, always keep JS and CSS as separate files and add them as links in the view.

Note: Best practice is to add the link to the style file at the top of the view and the JS at the bottom of the view file.

5. Use URL compression: IIS 7+ allows an easy way of compressing the response using gzip; the browser decompresses the response on the client side. This considerably reduces network latency while transporting data.

There are two types of compression, static and dynamic, based on the content. JS, CSS, images, and other static content fall under static compression, while views and data come under dynamic compression. You can enable them using the following settings in the configuration file.

<urlCompression doStaticCompression="true" doDynamicCompression="false" />

In a simple request to a web server that implements dynamic compression, the data transferred over the wire was 4.7 KB; after decompression on the client side, the data was 45 KB.

Note: Dynamic compression puts load on the server, as each response has to be compressed, so use it wisely.

You can read more about setting up the compression here

6. Use output caching: Use output caching for regularly used views or pages that have no dynamic updates. In MVC, this can be done by applying the OutputCache attribute to the action.

More reading here.
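A sketch of the OutputCache attribute on an MVC action; the duration and parameters are illustrative choices, not values from the post:

```csharp
// Cache the rendered output of this action for 60 seconds.
// VaryByParam = "none" keeps a single cached copy regardless of parameters.
[OutputCache(Duration = 60, VaryByParam = "none")]
public ActionResult Index()
{
    return View();
}
```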

7. Use data caching: Reduce database or disk I/O by caching regularly used data in an in-memory cache. There are many providers on the market, and a default one is available with IIS.

8. ASP.NET pipeline optimization: ASP.NET has many HTTP modules waiting to process a request, and a request goes through the entire pipeline even if a module is not needed by your application.

All the default modules are added in machine.config, located in the “$WINDOWS$\Microsoft.NET\Framework\$VERSION$\CONFIG” directory. You can improve performance by removing the modules you don’t require.

<httpModules>
  <!-- Remove unnecessary HTTP modules for a faster pipeline -->
  <remove name="Session" />
  <remove name="WindowsAuthentication" />
  <remove name="PassportAuthentication" />
  <remove name="AnonymousIdentification" />
  <remove name="UrlAuthorization" />
  <remove name="FileAuthorization" />
</httpModules>

9. Avoid session state: Session state should always be kept really small in size. If it cannot be avoided, use a distributed in-memory cache for session storage, and never use a database-backed session provider.

10. Remove unnecessary HTTP headers: ASP.NET adds headers that aren’t really necessary to transmit over the wire, such as ‘X-AspNet-Version’ and ‘X-Powered-By’.

11. Compile in release mode: Always set the build configuration to release mode for the website, for obvious reasons.

12. Turn tracing off: Tracing is useful functionality, but it adds overhead to every request; use an asynchronous logging mechanism instead.

13. Async and await: Async controllers have been available since MVC 3.0, so we can have non-blocking requests to the web server, which improves request throughput. You can read more about this at http://www.campusmvp.net/blog/async-in-mvc-4.

14. HTTP connection limits: By default, the HTTP protocol doesn’t allow more than two concurrent requests to the same host from the same user, and those requests are also limited by the browser:

Firefox 2:  2
Firefox 3+: 6
Opera 9.26: 4
Opera 12:   6
Safari 3:   4
Safari 5:   6
IE 7:       2
IE 8:       6
IE 10:      8
Chrome:     6

But in scenarios where your web server connects to a web service and requests data frequently, this restriction can degrade performance. .NET has a way to overcome the restriction and allow multiple concurrent calls to the service.

<system.net>
  <connectionManagement>
    <!-- Add addresses from trusted connections only -->
    <add address="*" maxconnection="100" />
  </connectionManagement>
</system.net>

Some of the tools worth mentioning

a. YSlow2: Yahoo’s add-in, available for most non-IE browsers. It analyzes web pages against a default set of rules; you can look at the rules here.


b. Chrome Inspector: the Chrome browser’s Audit tab lets you run performance checks.


c. Firebug for Firefox, IE’s F12 window, and Chrome’s element inspector can be used to track network utilization and all the HTTP requests made, to gauge which ones need your attention.

d. .NET Profiler: available with Visual Studio Ultimate.

e. ANTS Profiler.

f. For Entity Framework, there is Ayende’s EF profiler: http://hibernatingrhinos.com/Products/EFProf

Along with the checklist and recommendations above, one should always ensure that the best practices for each language (C#, VB.NET, JavaScript) are followed for the optimum performance of any application.