Archive for the Category ◊ .Net ◊

Monday, September 29th, 2014

Including Windows PowerShell scripts as part of your build and deployment process gives you the flexibility to easily and effectively customize your packaging and deployment. With the proper combination of environment configuration files (XML) and PowerShell scripts you can automate almost anything. This post will show you how to run Windows PowerShell scripts remotely from a TFS build process.

Using CredSSP for second-hop remoting

One common issue with PowerShell remoting is the “double hop” problem. When a script is executed remotely on Server A and then tries to connect from Server A to Server B, the second connection cannot forward the credentials to that server. As a result, the second server fails to authenticate the request and rejects the connection. To get around this issue you need to use the CredSSP authentication mechanism in PowerShell.
Credential Security Service Provider (CredSSP) is a security service provider that is available through the Security Support Provider Interface (SSPI) in Windows. CredSSP enables an application to delegate the user’s credentials from the client (by using the client-side SSP) to the target server (through the server-side SSP). To set up the machines for CredSSP, follow these steps:
On the local computer that needs to connect to the remote server, run the command Enable-WSManCredSSP -Role Client -DelegateComputer *. This configures the local computer as a CredSSP client so that it can delegate credentials to the remote computer, which acts as the server.

On the remote server run the command Enable-WSManCredSSP -Role server.

Remote execution of the PowerShell script from TFS build

For executing PowerShell scripts from C# code you need to use the APIs in the System.Management.Automation.dll assembly. You can use a PowerShell instance directly, but a better way is to create a Runspace instance. Every PowerShell instance works in a Runspace, and you can have multiple Runspaces connecting to different remote hosts at the same time. To get remote execution working properly you first need to construct an instance of WSManConnectionInfo and pass it to the RunspaceFactory.CreateRunspace(..) method. You can use the following code snippet to construct the connection object.

private PSCredential BuildCredentials(string username, string domain, string password)
{
    PSCredential credential = null;
    if (String.IsNullOrWhiteSpace(username))
    {
        return credential;
    }
    if (!String.IsNullOrWhiteSpace(domain))
    {
        username = domain + "\\" + username;
    }
    var securePassword = new SecureString();
    if (!String.IsNullOrEmpty(password))
    {
        foreach (char c in password)
        {
            securePassword.AppendChar(c);
        }
    }
    securePassword.MakeReadOnly();
    credential = new PSCredential(username, securePassword);
    return credential;
}
private WSManConnectionInfo ConstructConnectionInfo()
{
    //The connection info values are based on the output of the following command on the target machine:
    //    dir WSMan:\localhost\Listener\Listener_*\*
    //
    //Output:
    //  Name                      Value                                                             Type
    //  ----                      -----                                                             ----
    //  Address                   *                                                                 System.String
    //  Transport                 HTTP                                                              System.String
    //  Port                      5985                                                              System.String
    //  URLPrefix                 wsman                                                             System.String

    const string SHELL_URI = "http://schemas.microsoft.com/powershell/Microsoft.PowerShell";

    var credentials = BuildCredentials("username", "domain", "password");
    var connectionInfo = new WSManConnectionInfo(false, "remoteserver", 5985, "/wsman", SHELL_URI, credentials);
    connectionInfo.AuthenticationMechanism = AuthenticationMechanism.Credssp;

    return connectionInfo;
}
Next, you can pass this connection object to the RunspaceFactory.CreateRunspace method and invoke the PowerShell script as given below.
var scriptOutput = new ScriptOutput();
var tfsBuildHost = new TFSBuildHost(scriptOutput);
var connection = ConstructConnectionInfo();
using (var runspace = RunspaceFactory.CreateRunspace(tfsBuildHost, connection))
{
    runspace.Open();
    InvokePSScript(runspace, scriptOutput);
    runspace.Close();
}
private void InvokePSScript(Runspace runspace, ScriptOutput scriptOutput)
{
    using (var ps = PowerShell.Create())
    {
        ps.Runspace = runspace;
        var commandToExecute = ConstructScriptExecutionCommand("path to script file", "parameter1", "parameter2");
        ps.AddScript(commandToExecute);
        try
        {
            var results = ps.Invoke();
            foreach (var result in results)
            {
                if (result.BaseObject != null)
                {
                    scriptOutput.WriteResults(result.BaseObject.ToString());
                }
            }
        }
        catch(Exception)
        {
            if (ps.InvocationStateInfo.State != PSInvocationState.Failed)
            {
                return;
            }
            scriptOutput.WriteError(ps.InvocationStateInfo.Reason.Message); 
        }
    }
}
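
The ConstructScriptExecutionCommand helper used above is not shown in this post. A minimal sketch, assuming it only needs to build the command text that runs the script file with its parameters, could look like this:

//A hypothetical sketch of the helper referenced above (not part of the original post):
//it builds the command text that executes the script file with the given parameters.
private string ConstructScriptExecutionCommand(string scriptFilePath, params string[] parameters)
{
    return string.Format("& '{0}' {1}", scriptFilePath, string.Join(" ", parameters));
}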
You will also notice the TFSBuildHost object in the code above. This class provides the communication between the Windows PowerShell engine and the user. The implementation details are given below.
internal class TFSBuildHost : PSHost
{
    private ScriptOutput _output;
    private CultureInfo originalCultureInfo = System.Threading.Thread.CurrentThread.CurrentCulture;
    private CultureInfo originalUICultureInfo = System.Threading.Thread.CurrentThread.CurrentUICulture;
    private Guid _hostId = Guid.NewGuid();
    private TFSBuildUserInterface _tfsBuildInterface;
    public TFSBuildHost(ScriptOutput output)
    {
        this._output = output;
        _tfsBuildInterface = new TFSBuildUserInterface(_output);
    }
    public override System.Globalization.CultureInfo CurrentCulture
    {
        get { return this.originalCultureInfo; }
    }
    public override System.Globalization.CultureInfo CurrentUICulture
    {
        get { return this.originalUICultureInfo; }
    }
    public override Guid InstanceId
    {
        get { return this._hostId; }
    }
    public override string Name
    {
        get { return "TFSBuildPowerShellImplementation"; }
    }
    public override Version Version
    {
        get { return new Version(1, 0, 0, 0); }
    }
    public override PSHostUserInterface UI
    {
        get { return this._tfsBuildInterface; }
    }
    public override void NotifyBeginApplication()
    {
        _output.Started = true;
    }
    public override void NotifyEndApplication()
    {
        _output.Started = false;
        _output.Stopped = true;
    }
    public override void SetShouldExit(int exitCode)
    {
        this._output.ShouldExit = true;
        this._output.ExitCode = exitCode;
    }
    public override void EnterNestedPrompt()
    {
        throw new NotImplementedException(
                "The method or operation is not implemented.");
    }
    public override void ExitNestedPrompt()
    {
        throw new NotImplementedException(
                "The method or operation is not implemented.");
    }
} 
internal class TFSBuildUserInterface : PSHostUserInterface
{
    private ScriptOutput _output;
    private TFSBuildRawUserInterface _tfsBuildRawUi = new TFSBuildRawUserInterface();
    public TFSBuildUserInterface(ScriptOutput output)
    {
        _output = output;
    }
    public override PSHostRawUserInterface RawUI
    {
        get { return this._tfsBuildRawUi; }
    }
    public override string ReadLine()
    {
        return Environment.NewLine;
    }
    public override void Write(string value)   
    {
        _output.Write(value);
    }
    public override void Write(
                                ConsoleColor foregroundColor,
                                ConsoleColor backgroundColor,
                                string value)
    {
        //Ignore the colors for TFS build process.
        _output.Write(value);
    }
    public override void WriteDebugLine(string message)
    {
        Debug.WriteLine(String.Format("DEBUG: {0}", message));
    }
    public override void WriteErrorLine(string value)
    {
        _output.WriteError(value);
    }
    public override void WriteLine()
    {
        _output.WriteLine();
    }
    public override void WriteLine(string value)
    {
        _output.WriteLine(value);
    }
    public override void WriteLine(ConsoleColor foregroundColor, ConsoleColor backgroundColor, string value)
    {
        //Ignore the colors for TFS build process.
        _output.WriteLine(value);
    }
}
internal class TFSBuildRawUserInterface : PSHostRawUserInterface
{
    private ConsoleColor _backColor = ConsoleColor.Black;
    private ConsoleColor _foreColor = ConsoleColor.White;
    private Size _bufferSize = new Size(300, 900);
    private Size _windowSize = new Size(100, 400);
    private Coordinates _cursorPosition = new Coordinates { X = 0, Y = 0 };
    private Coordinates _windowPosition = new Coordinates { X = 50, Y = 10 };
    private int _cursorSize = 1;
    private string _title = "TFS build process"; 
    public override ConsoleColor BackgroundColor
    {
        get { return _backColor; }
        set { _backColor = value; }
    }
    public override Size BufferSize
    {
        get { return _bufferSize; }
        set { _bufferSize = value; }
    }
    public override Coordinates CursorPosition
    {
        get { return _cursorPosition; }
        set { _cursorPosition = value; }
    }
    public override int CursorSize
    {
        get { return _cursorSize; }
        set { _cursorSize = value; }
    }
    public override ConsoleColor ForegroundColor
    {
        get { return _foreColor; }
        set { _foreColor = value; }
    }
    public override bool KeyAvailable
    {
        get { return false; }
    }
    public override Size MaxPhysicalWindowSize
    {
        get { return new Size(500, 2000); }
    }
    public override Size MaxWindowSize
    {
        get { return new Size(500, 2000); }
    }
    public override Coordinates WindowPosition
    {
        get { return _windowPosition; }
        set { _windowPosition = value; }
    }
    public override Size WindowSize
    {
        get { return _windowSize; }
        set { _windowSize = value; }
    }
    public override string WindowTitle
    {
        get { return _title; }
        set { _title = value; }
    }
}
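
For completeness, the ScriptOutput object passed around in the snippets above is also not shown in the post. A minimal sketch that satisfies the members used here (the simple string-buffer strategy is an assumption) could be:

//A hypothetical sketch of the ScriptOutput helper used throughout this post; requires System.Text.
internal class ScriptOutput
{
    private readonly StringBuilder _buffer = new StringBuilder();
    public bool Started { get; set; }
    public bool Stopped { get; set; }
    public bool ShouldExit { get; set; }
    public int ExitCode { get; set; }
    public string Text { get { return _buffer.ToString(); } }
    public void Write(string value) { _buffer.Append(value); }
    public void WriteLine() { _buffer.AppendLine(); }
    public void WriteLine(string value) { _buffer.AppendLine(value); }
    public void WriteResults(string value) { _buffer.AppendLine(value); }
    public void WriteError(string value) { _buffer.AppendLine("ERROR: " + value); }
}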

Thursday, August 28th, 2014

Entity Framework is arguably the most powerful and heavyweight Object Relational Mapper when it comes to actually implementing the object-relational patterns described by Martin Fowler in his groundbreaking book Patterns of Enterprise Application Architecture (2002). In this blog post, I will give you insight into a problem that I ran into when I started using EF.
Choosing the right data type instead of a compatible data type – casting varchar to nvarchar under the covers.
If your code-first model defines a string property, Entity Framework presumes the column is nvarchar. But if the database column is actually varchar, Entity Framework coerces the type under the covers, and that can be very expensive. Let’s look at our model.

 public class Student  
   {  
     [DatabaseGenerated(DatabaseGeneratedOption.Identity)]  
     public int StudentId { get; set; }  
     public string Name { get; set; }  
     public int Age { get; set; }  
     public Gender Gender { get; set; }  
   }  
 public enum Gender  
   {  
     None = 0,  
     Male = 1,  
     Female = 2  
   }  

And this is our schema. For all practical purposes, let us assume we are working on a legacy codebase where the database has already been created.
Schema_28082014
And this is our code which talks to the context:

       IDatabaseInitializer<StudentContext> initializer = null;  
       Database.SetInitializer(initializer);  
       using (var context = new StudentContext())  
       {  
         var query = context.Students.Where(x => x.Name == "Raghav").ToString();  
       }  

and the generated query will be:

 SELECT   
 [Extent1].[StudentId] AS [StudentId],   
 [Extent1].[Name] AS [Name],   
 [Extent1].[Age] AS [Age],   
 [Extent1].[Gender] AS [Gender]  
 FROM [dbo].[Students] AS [Extent1]  
 WHERE N'Raghav' = [Extent1].[Name]  

If you look closely at the above query, Entity Framework assumed the column is nvarchar instead of varchar (note the N'Raghav' Unicode literal). You might say that .NET strings are Unicode, and with a small set of data this does not look like a problem. But if you have a large amount of data residing in the database, this implicit conversion can be a real performance problem. Below is the execution plan of the above query; SQL Server reports a warning for it.

Query_2_28082014

One way to get away from this problem is to follow the rule “the model must be in sync with your schema”, i.e. in our case decorating the Name property with the Column attribute as done below:

 [Column(TypeName = "varchar")]  
 public string Name { get; set; }  
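
If you prefer not to decorate the model with attributes, the same mapping can be expressed with the fluent API. This is a minimal sketch, assuming StudentContext is the post’s DbContext; the OnModelCreating override is standard EF Code First:

 public class StudentContext : DbContext
   {
     public DbSet<Student> Students { get; set; }

     protected override void OnModelCreating(DbModelBuilder modelBuilder)
     {
       //Tell EF the column is varchar so it emits non-Unicode literals and parameters
       modelBuilder.Entity<Student>()
                   .Property(s => s.Name)
                   .HasColumnType("varchar");
       //Alternatively: .IsUnicode(false);
     }
   }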

It’s very rare that we get into this problem, but it’s very important to understand the implications. Another way to get rid of it completely is to use Migrations. In that case, you give EF the responsibility of generating the database, keeping the database in sync with the code. In my next article, I will talk about another interesting problem with Entity Framework Code First.

Category: .Net
Tuesday, August 26th, 2014

Static constructors! Yes, just like we have instance constructors – be it the default constructor that the runtime provides or a parameterized constructor that we explicitly create – we can also have static constructors; the constructor is simply marked static. Static constructors are also called class constructors, because they operate at the type level. You can control access to instance constructors, but not to static constructors, because static constructors cannot have access modifiers. You also cannot invoke a static constructor manually; the .NET runtime handles that, and it ensures that a static constructor is not invoked more than once.

 namespace ConsoleApplication8  
 {  
   using System;  
   class Parent  
   {  
     static Parent() { Console.WriteLine("Parent cctor"); }  
     public Parent() { Console.WriteLine("Parent ctor"); }  
   }  
   class Child : Parent  
   {  
     static Child() { Console.WriteLine("Child cctor"); }  
     public Child() { Console.WriteLine("Child ctor"); }  
   }  
   class GrandChild : Child  
   {  
     static GrandChild() { Console.WriteLine("GrandChild cctor"); }  
     public GrandChild() { Console.WriteLine("GrandChild ctor"); }  
   }  
   class Program  
   {  
     static void Main()  
     {  
       new GrandChild();  
       Console.ReadLine();  
     }  
   }  
 }  

In the above program, we notice a static constructor participating at every level of the inheritance hierarchy. We know the order in which the instance constructors will be invoked; with that in mind, let’s look at the output of the above program!
Static Constructors - Inheritance Graph
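
In case the screenshot does not load, the console output (consistent with the invocation order explained below) is:

GrandChild cctor
Child cctor
Parent cctor
Parent ctor
Child ctor
GrandChild ctor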

It’s surprising, right, when it comes to the order of invocation of the static constructors? Let’s see why.

  • The Main function invokes GrandChild’s instance constructor on a new instance of GrandChild.
  • The .NET runtime detects that GrandChild’s default constructor is about to be invoked, so it first invokes GrandChild’s static constructor.
  • GrandChild’s instance constructor invokes Child’s instance constructor; the runtime detects that Child’s instance constructor is about to be invoked, so it invokes the static constructor on the Child class, and this continues until the Parent instance constructor is invoked after the Parent static constructor.

Here is the invocation graph of the static constructors, and you will notice it is always the static constructor of the most derived class that executes first!

GrandChild Static Constructor -> Child Static Constructor -> Parent Static Constructor

Tip: If you inspect the IL emitted from a class which has a static constructor, you will see a member named .cctor (indicating a class/static constructor), and for an instance constructor you will notice a member named .ctor (indicating an instance constructor).
IL_StaticConstructors

Thanks to the following technical expert for reviewing this article: Maran

Category: .Net
Monday, August 25th, 2014

Serializing objects is a very common operation these days, to enable data transfer over the wire and to let our applications consume data in an understandable way, i.e. in the form of JSON, XML, RSS, RDF or OData, to name a few. Sometimes you may be working on a legacy codebase, or you may be consuming entities whose codebase you don’t control, and you want only a few properties to participate in serialization. In this article we will see how we can use Json.NET to solve the problem. So here is the problem: I want to serialize only StudentId, Name and Age on the Student type defined below.

   public class Student  
   {  
     public int StudentId { get; set; }  
     public string Name { get; set; }  
     public int Age  
     {  
       get  
       {  
         var age = DateTime.Today.Year - DateOfBirth.Year;  
         if (DateTime.Today < DateOfBirth.AddYears(age)) { age--; }  
         return age;  
       }  
     }  
     public DateTime DateOfBirth { get; set; }  
     public Gender Gender { get; set; }  
   }  
   public enum Gender  
   {  
     Male,  
     Female  
   }  

There are a few ways you can get around this problem.
1) Using anonymous types

       Student student = new Student()  
       {  
         StudentId = 1,  
         DateOfBirth = new DateTime(  
           1996, 03, 02),  
         Name = "Student1",  
         Gender = Gender.Male  
       };  
       var studentType = new { student.StudentId, student.Name, student.Age };  
       string jsonString = JsonConvert.SerializeObject(studentType);  
       Console.WriteLine(jsonString);  

This will print the output as
image_1
This approach is good, except that we created an anonymous type just for the sake of serialization. Let’s look at the next option.
2) To avoid creating an anonymous type, we can write our own contract resolver which inherits from DefaultContractResolver and overrides the CreateProperties() method as below:

 public class RequiredPropertiesResolver : DefaultContractResolver  
   {  
     private readonly IList<string> _properties;  
     public RequiredPropertiesResolver(IList<string> propertiesToSerialize)  
     {  
       _properties = propertiesToSerialize;  
     }  
     protected override IList<JsonProperty> CreateProperties(Type type, MemberSerialization memberSerialization)  
     {  
       IList<JsonProperty> properties = base.CreateProperties(type, memberSerialization);  
       return properties.Where(p => _properties.Contains(p.PropertyName)).ToList();  
     }  
   }  

And consuming the above resolver during serialization:

List<string> propertiesList = new List<string>(3) { "StudentId", "Age", "Name" };
string json = JsonConvert.SerializeObject(student, Formatting.Indented, new JsonSerializerSettings() { ContractResolver = new RequiredPropertiesResolver(propertiesList) });
Console.WriteLine(json);

And this would produce the result

image_2

Category: .Net
Monday, August 25th, 2014

Table-valued parameters are a feature introduced in SQL Server 2008. In earlier versions of SQL Server it was not possible to pass a table variable to a stored procedure as a parameter. SQL Server 2008 allows us to use a table-valued parameter to send multiple rows of data to a stored procedure or a function without creating a temporary table or passing a long list of parameters.

Many of us may come across a situation where we pass a huge comma- or semicolon-separated string as a parameter to a stored procedure, split that string using some function, and insert the values into a temp table for further use in the stored procedure.

We can overcome such hassles by using a table valued parameter. We need to follow the below steps in order to use a table valued parameter:

  • Declare the table and user defined table type
  • Create a stored procedure which accepts user defined table type as a parameter
  • Create a DataTable structured the same as your table-valued parameter. Remember that the columns of the DataTable must match the columns of the table type
  • Pass the data to the stored procedure as a SqlParameter whose type corresponds to the table type

EXAMPLE:

Below are a few sample code snippets which illustrate creating a table-valued parameter and passing it to a stored procedure:

Declare the table and user defined table type

CREATE TYPE dbo.TEMP_ORGANIZATIONS AS TABLE
(
OrganizationId INT NOT NULL,
OrganizationName VARCHAR (MAX),
PRIMARY KEY (OrganizationId)
)

Create a stored procedure which accepts user defined table type as a parameter

CREATE PROC proc_ADD_ORGANIZATION
(@TableParameter AS dbo.TEMP_ORGANIZATIONS READONLY)
AS
BEGIN
INSERT INTO ORGANIZATIONS (OrganizationId , OrganizationName)
SELECT OrganizationId, OrganizationName  FROM @TableParameter
END

Create a DataTable structured the same as your table-valued parameter; remember that its columns must match the columns of the table type:

DataTable CreateTable()
{
    var dt = new DataTable();
    dt.Columns.Add("OrganizationId", typeof(Int32));
    dt.Columns.Add("OrganizationName", typeof(string));
    return dt;
}

Pass the data to the stored procedure as a SqlParameter; the type of this parameter must be Structured, corresponding to the table type:

//Create table
DataTable dtOrganization = CreateTable();

//Add new rows to the table
dtOrganization.Rows.Add(1, "Prowareness");
dtOrganization.Rows.Add(2, "Airtel");
dtOrganization.Rows.Add(3, "Vodafone");

using (var connection = new SqlConnection(connectionString))
{
    connection.Open();
    using (var cmd = new SqlCommand("proc_ADD_ORGANIZATION", connection))
    {
        cmd.CommandType = CommandType.StoredProcedure;

        //Pass the table-valued parameter to the stored procedure
        SqlParameter sqlParam = cmd.Parameters.AddWithValue("@TableParameter", dtOrganization);
        sqlParam.SqlDbType = SqlDbType.Structured;
        cmd.ExecuteNonQuery();
    }
}

Category: .Net, Databases
Thursday, August 21st, 2014

Release Management – Things to consider when working with multiple VMs

It is not very simple to communicate between different virtual machines, especially when you have to deploy applications or copy files and folders. The main problem here is that we need to build trust between the different systems/machines.

We recently set up continuous deployment to an Azure cloud service and ran into a few issues. In this blog I am listing those workarounds/solutions, hoping they will help someone in need.

We are using Release Management (Update 3) from Microsoft as the CD tool, TFS 2013 as our source control, and our VMs run Windows Server 2012. Our VMs are running in the same cloud service.

When your RM server is on one machine and you are trying to configure your deployment agent from another machine, you might run into a few issues:
Error 1: “The specified service user either does not exist or is not configured as service user in Release Management server….”
Error 2: “The remote server returned an error: (401) Unauthorized”

There are multiple things which you should keep in mind here.

1. Version of client, server and deployment agent
For example, if you are running RM Server Update 2 and your deployment agent / RM client is on Update 3, you might face some issues. It will work in most cases, but make sure all of them are running the same version.

2. Creating ghost/shadow users
You need to create a user on both machines (the user names and passwords must match on both machines).
You need to add the ghost user to the administrators group on the machine running your deployment agent (because it needs to drop files and directories, create app pools, etc.), and it should also have access to the drop location.

3. Adding users to RM
Add the ghost user to RM (RM client -> Administration -> Manage Users).
In some cases you also need to add a user with the machine name (for example, if the machine name is VMDeploymentAgent1 and the user name is Ghost, add a user named VMDeploymentAgent1\Ghost).
This user need not exist in your Windows local users group (unlike the two ghost users you added to the Windows users list); RM will still add it without any errors.
Make both these users service users.

4. Deployment agent service account
You should configure your agent to run as the ghost user you created in step 2.

If you have taken care of the above, you should now be able to successfully configure your deployment agent.

The next thing you would do is find your deployment agent in the RM client.
In some cases, when you click “Scan for New” to add the agent in the Servers tab of your RM client, your deployment agent will not be listed. To solve this, make both the users you added in step 3 Release Managers.
Once you have found and added your agent, you can change both those users back to service users only; things will still work as expected!

To solve Error 2:
Log in to the machine where your deployment agent resides as the ghost user you created in step 2, then configure your deployment agent keeping the above points in mind.
You are good to go!

Category: .Net, ALM
Wednesday, August 20th, 2014

The ASP.NET request pipeline loads session information for each request before any of your code is executed. This is convenient, because your session variables are loaded and available for use (e.g. Session["user"] in a page).

However, depending on how you’ve configured your session state management, this can be an overhead for pages that don’t need to read anything from session. If you’ve configured your session state as OutProc, there is the additional overhead of deserializing the session information, and if you use SQL Server for storing sessions, there is also the overhead of SQL queries.

Fortunately, there is a way to tell the ASP.NET request pipeline not to load session information.
If a page in your application doesn’t read or write anything from session, mark it as follows in the Page directive:

<%@ Page EnableSessionState="false" %>
By setting the EnableSessionState attribute to false, you are instructing ASP.NET not to load session information when this page is requested.

If most pages in your application do not need session, you can also apply this setting globally for all pages in web.config:
<system.web>
  <pages enableSessionState="false" />
</system.web>

Then you can selectively turn session back on for pages that do need it with:
<%@ Page EnableSessionState="true" %>

By turning off session state for pages that don’t need it, you will avoid the overhead and get better response times.
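
Page directives only cover .aspx pages. For handlers or routed requests, you can make the same decision per request from an HTTP module. This is a minimal sketch, assuming ASP.NET 4.0 or later; the module name and the path check are hypothetical:

using System;
using System.Web;
using System.Web.SessionState;

public class SessionOptimizationModule : IHttpModule
{
    public void Init(HttpApplication application)
    {
        //SetSessionStateBehavior must be called before AcquireRequestState,
        //so PostMapRequestHandler is a safe place to hook in.
        application.PostMapRequestHandler += (sender, e) =>
        {
            var context = ((HttpApplication)sender).Context;
            //Hypothetical rule: requests under /nosession never touch session state.
            if (context.Request.Path.StartsWith("/nosession", StringComparison.OrdinalIgnoreCase))
            {
                context.SetSessionStateBehavior(SessionStateBehavior.Disabled);
            }
        };
    }

    public void Dispose() { }
}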

-Anand

Monday, August 18th, 2014
Microsoft Application Request Routing (ARR) is a proxy-based routing module that forwards HTTP requests to application servers based on HTTP headers, server variables, and load balancing algorithms. With ARR you can increase application availability and scalability through better utilization of content server resources and lower management cost, creating opportunities for shared hosting environments.
ARR relies on the URL rewrite module to inspect incoming HTTP requests to make the routing decisions. Therefore, the URL rewrite module is required to enable ARR features. Using ARR in a shared hosting environment introduces a new deployment architecture that provides additional benefits and opportunities for shared hosting. This scenario is enabled by a feature called host name affinity. The host name affinity feature in Application Request Routing enables shared hosting to rethink how sites are deployed. Application Request Routing affinitizes the requests, regardless of whether they are made from one client or multiple clients, to one server behind ARR, ensuring that a given site is consuming resources only on one of the servers.
For example, in the diagram the ARR server load balances the request to one of the servers and affinitizes requests for a DNS name to the same server for the lifespan of the corresponding worker process. Requests are sent to that one server and responses are returned for them. As you can see, the load balancing is handled dynamically by ARR, which helps administrators scale the environment horizontally by adding new servers without predefined site allocations, with the flexibility of managing the shared configuration in a single place.
In this post we’ll see how we can configure ARR in a shared hosting environment and make use of its features for a zero-downtime deployment scenario when doing CD in your organization.
The first step is to create a server farm with the application servers that are configured with shared configuration and content. The sites on the application server should be using host name binding as follows:
  • Open Server Affinity for the server farm in IIS.
  • Enable host name affinity by using the host name and click Apply.

There are two providers available for this option.

  1. Microsoft.Web.Arr.HostNameRoundRobin tries to evenly distribute the affinitized host names in round-robin fashion. Using this provider has no requirements on the application servers.
  2. Microsoft.Web.Arr.HostNameMemory tries to distribute the affinitized host names based on the amount of available memory on the application servers, where the server with the most available memory is assigned the next host name.
  • Click Apply in the Actions pane on the right side to save the changes.
  • Next, to specify the number of servers to be used per host name, select Advanced Settings on the Server Affinity page and enter values for the host name and the number of allocated servers.
  • Click OK to save the changes. Now you have successfully configured the host name affinity feature in ARR for a shared hosting scenario.
    SSL offloading is enabled by default in the Routing rules section. You can disable it so that all communication between the servers remains encrypted. To disable SSL offloading, uncheck the Enable SSL offloading checkbox on the Routing rules page.
  • To set up your website for a zero-downtime requirement, first create an ARR site that the users connect to. To create the ARR site, create an empty website in IIS, put it in its own application pool, and set the no-timeout requirements on the app pool.
  • Create two websites that will actually host the content. Make sure that these sites have a different IP address and are on a different port than the ARR site. Assign these sites to their own application pools and use different virtual directories for the content.
  • On the server farm that was configured in step 1, add the servers you created and change the HTTP port to the port of the websites in the advanced settings as given below.
  • In the IIS manager, open URL rewrite rules and click on the ARR rule created for your configuration
  • Add a new condition for the {SERVER_PORT} does not match <<PORT NUMBER OF WEBSITES>> and click on OK. Apply your changes to the server.
  • That completes the configuration of ARR and the URL rewrite for your website from the ARR site. You can now use this configuration to deploy a new version of your application to the servers by drain-stopping one of them and then warming up the offline site after deployment by using its alternate IP address. Repeat the process for the second server to complete the deployment without any downtime.
Thursday, August 14th, 2014

We faced a scenario where we had a dataset (an EF LINQ query) which needed to be sorted based on the user’s selection (the user was shown the table columns’ display names), and we ended up writing an extension method which anyone can use to dynamically select the sorting key.

We used LINQ expressions and reflection to create the parameter and property selector from the column name. Below is the code.

public static IQueryable<T> Sort<T>(this IQueryable<T> source, string propertyName, bool isDescending)
{
    if (source == null) return null;
    var entityType = typeof(T);
    var property = GetPropertyType(entityType, propertyName);
    if (property == null)
    {
        return source;
    }
    var sortOrder = isDescending ? "OrderByDescending" : "OrderBy";
    var linqParamExpression = Expression.Parameter(entityType, "param");
    var propertyExpression = GetPropertyExpression(propertyName, linqParamExpression);
    var delegateType = typeof(Func<,>).MakeGenericType(typeof(T), propertyExpression.Type);
    var expression = Expression.Lambda(delegateType, propertyExpression, linqParamExpression);
    var orderByExpression = Expression.Call(typeof(Queryable), sortOrder,
        new[] { source.ElementType, propertyExpression.Type },
        source.Expression, expression);
    return source.Provider.CreateQuery<T>(orderByExpression);
}

GetPropertyType gets the property of the class whose name is specified and also allows navigating to a child object’s property, e.g. “AssignedToUser.FirstName” would select the child object AssignedToUser’s FirstName property for the sorting.

private static PropertyInfo GetPropertyType(Type entityType, string propertyName)
{
    var splitStringOnDot = propertyName.Split(new[] { '.' }, StringSplitOptions.RemoveEmptyEntries);
    switch (splitStringOnDot.Length)
    {
        case 2:
        {
            var foreignKeyProperty = entityType.GetProperty(splitStringOnDot[0]);
            if (foreignKeyProperty != null)
            {
                var selectedProperty = foreignKeyProperty.PropertyType.GetProperty(splitStringOnDot[1]);
                return selectedProperty;
            }
        }
        break;
        case 1:
            return entityType.GetProperty(propertyName);
        default:
            break;
    }
    return null;
}
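
The Sort method above also calls a GetPropertyExpression helper that is not shown in this post. A minimal sketch of what it needs to do (build a, possibly nested, property accessor from the dot-separated name) could look like this:

private static Expression GetPropertyExpression(string propertyName, ParameterExpression parameter)
{
    //Walk the dot-separated path (e.g. "AssignedToUser.FirstName") building nested property accessors
    Expression body = parameter;
    foreach (var member in propertyName.Split(new[] { '.' }, StringSplitOptions.RemoveEmptyEntries))
    {
        body = Expression.Property(body, member);
    }
    return body;
}

With that in place, the extension method can be called as, for example, query.Sort("AssignedToUser.FirstName", true).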

And thank you, Raghav, for giving an example of creating delegates using reflection!

Happy Coding!!!

Category: .Net
Friday, July 04th, 2014
Exporting DataSet to Excel using NetOffice

The goal of this blog is to export a DataSet to Microsoft Excel using NetOffice.

You can read more about NetOffice here

As a first step, include the NuGet package “NetOffice.Excel”: type “netoffice” in the NuGet package manager in Visual Studio and the first one in the list will be “NetOffice.Excel”.

We have a DataSet which includes a DataTable named “Users”.

Complete code is listed below.

References to include

 
               
using System.Data;
using System.Reflection;
using NetOffice.ExcelApi.Enums;
using Excel = NetOffice.ExcelApi;

        public void GetExcel(DataSet dataset)
        {
		//Create excel application
		var application = new Excel.Application();

		//Add workbook
		application.Workbooks.Add();

		application.DisplayAlerts = false;

		//Get all data into an array
		var tempArray = new object[dataset.Tables["Users"].Rows.Count,                                                     dataset.Tables["Users"].Columns.Count];
		for (var r = 0; r < dataset.Tables["Users"].Rows.Count; r++)
		{
			for (var c = 0; c < dataset.Tables["Users"].Columns.Count; c++)
				tempArray[r, c] = dataset.Tables["Users"].Rows[r][c];
		}

		//Get column names into an array
		var tempHeadingArray = new object[dataset.Tables["Users"].Columns.Count];
		for (var i = 0; i < dataset.Tables["Users"].Columns.Count; i++)
		{
			tempHeadingArray[i] = dataset.Tables["Users"].Columns[i].ColumnName;
		}

		//Create style used for displaying column names
		var style = application.ActiveWorkbook.Styles.Add("NewStyle");
		style.Font.Name = "Verdana";
		style.Font.Size = 10;
		style.Font.Color = System.Drawing.ColorTranslator.ToOle(System.Drawing.Color.Black);
		style.Font.Bold = true;

		//Get active worksheet
		var sheet = (Excel.Worksheet)application.ActiveSheet;

		AddColumnNames(sheet, tempHeadingArray);

		AddExcelHeadingText(sheet);

		AddDataRows(sheet, dataset, tempArray);

		sheet.Columns.AutoFit();

		application.ActiveWorkbook.SaveAs("E:\\Sampledocument",  Missing.Value, Missing.Value,                                 Missing.Value, false,
										  false, XlSaveAsAccessMode.xlExclusive);

		//CleanUp
		application.ActiveWorkbook.Close();
		application.Quit();
		application.Dispose();
	}

                

Helper methods are listed below

                
            
	 private static void AddDataRows(Excel.Worksheet sheet, DataSet dataset, object[,] tempArray)
	 {
		var range = sheet.Range(sheet.Cells[4, 1],
						sheet.Cells[(dataset.Tables["Users"].Rows.Count), (dataset.Tables["Users"].Columns.Count)]);
		sheet.Name = "Sample excel....";
		range.Value = tempArray;
	 }

	private static void AddColumnNames(Excel.Worksheet sheet, object[] tempHeadingArray)
	{
		var columnNameRange = sheet.get_Range(sheet.Cells[3, 3], sheet.Cells[3, tempHeadingArray.Length + 2]);
		columnNameRange.Style = "NewStyle";
		columnNameRange.Value = tempHeadingArray;
		columnNameRange.UseStandardWidth = true;
	}

	private static void AddExcelHeadingText(Excel.Worksheet sheet)
	{
		//Apply styling to heading text
		sheet.Cells[1, 1].Value = "Excel heading text";
		sheet.Cells[1, 1].Font.Name = "Verdana";
		sheet.Cells[1, 1].Font.Italic = true;
		sheet.Cells[1, 1].Font.Underline = true;
		sheet.Cells[1, 1].Font.Size = 14;
		sheet.Cells[1, 1].Font.Color = System.Drawing.ColorTranslator.ToOle(System.Drawing.Color.DarkRed);
	}
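
For completeness, here is a hypothetical caller (the ExcelExporter class name is made up here) just to show the shape of the input the export expects:

var dataset = new DataSet();
var users = dataset.Tables.Add("Users");
users.Columns.Add("UserId", typeof(int));
users.Columns.Add("UserName", typeof(string));
users.Rows.Add(1, "John");
users.Rows.Add(2, "Jane");

new ExcelExporter().GetExcel(dataset);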

     

When you are deploying this to IIS, make sure you give the IIS user sufficient permissions on the DCOM object (Microsoft Excel Application).

Steps to follow to achieve this:

1. Go to component services

2. Expand DCOM Config

3. Right click on ‘Microsoft Excel Application’

4. Select properties

5. Go to security tab

6. Select Customize and click on Edit(for all three permission types)

7. Add respective user and give sufficient permission

That’s all folks! Happy exporting!

 

Category: .Net