Archive for the Category ◊ .Net ◊

Thursday, February 11th, 2016

Unit test your MVC views using Razor Generator

Yes, we can unit test our MVC views. This is possible using RazorGenerator, which generates classes that can then be used to instantiate views and unit test them. The generated classes mean the views are precompiled.

Razor Generator is a Custom Tool for Visual Studio that allows processing Razor files at design time instead of runtime, allowing them to be built into an assembly for simpler reuse and distribution.

But what’s the advantage of unit testing MVC views?

If I (or another dev) remove some field from the view, I don’t want to wait for an acceptance test to discover this. I want it to be discovered and flagged by a unit test that runs multiple times daily.

If we are setting up a CD pipeline, we generally have tests configured at different stages. So unit tests of the views can cover all positive and negative scenarios, while acceptance tests can be configured with a few scenarios to check the basic overall functionality.

 

Steps to Write Unit Tests:

1. Install the Razor Generator extension in Visual Studio.

2. Create an MVC project using the ‘Internet Application’ template, including the unit test project.

3. Use NuGet to install the RazorGenerator.Mvc package in your MVC project.

4. Set the custom tool on Views\Home\Index.cshtml to ‘RazorGenerator’ and press Enter; Index.generated.cs is generated nested under the view.


5. Use NuGet again to add the RazorGenerator.Testing package to the unit test project. And that’s all it takes to get set up!

Below is a sample unit test. First, replace the default markup in Index.cshtml with the line below.

<h2>@ViewBag.Message</h2>

Now we can write a unit test for our precompiled Index.cshtml view to check that our value is rendered in the HTML document as expected.

e.g. create a Views\HomeViewsTest.cs (in the unit test project):

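A minimal sketch of what that test might look like. The names here are assumptions (the MVC project's root namespace is taken to be MvcApplication1), and the exact way to seed ViewBag may vary; RenderAsHtml is the extension method from RazorGenerator.Testing and returns an HtmlAgilityPack HtmlDocument:

```csharp
using HtmlAgilityPack;
using Microsoft.VisualStudio.TestTools.UnitTesting;
using RazorGenerator.Testing;

[TestClass]
public class HomeViewsTest
{
    [TestMethod]
    public void Index_Renders_ViewBag_Message()
    {
        // Instantiate the precompiled view class directly -- no web server needed.
        var view = new MvcApplication1.Views.Home.Index();
        view.ViewBag.Message = "Hello from a unit test";

        // Render the view and parse the resulting markup.
        HtmlDocument doc = view.RenderAsHtml();

        var h2 = doc.DocumentNode.SelectSingleNode("//h2");
        Assert.AreEqual("Hello from a unit test", h2.InnerText);
    }
}
```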

And in this way we can write multiple cases to test our views.

References :

http://razorgenerator.codeplex.com/

http://blogs.planbsoftware.co.nz/?p=469

Category: .Net  | Leave a Comment
Sunday, January 31st, 2016

If you write unit tests or follow a TDD approach in your projects, it is absolutely essential that your unit tests are readable and clearly outline the exact intent of the test. Your unit tests should also be able to act as documentation for your application logic.

You can always achieve these goals with the usual asserts, but it is difficult to keep them readable, and very easy to mess them up to the extent that nobody can understand what the test is trying to do.

To address these problems there are a couple of fluent assertion frameworks which provide fluent APIs for doing asserts. One such framework which I found useful for .NET is Fluent Assertions.

Below is an Account class representing a bank account which I will use to demonstrate the Fluent Assertions API. Some of the code is intentionally incorrect so that we can fail the tests.


 public enum TransactionType
    {
        Credit,
        Debit
    }
    public class BankTransaction
    {
        public BankTransaction(TransactionType type, double amount,DateTime transactionDate)
        {
            TypeOfTransaction = type;
            Amount = amount; 
            TransactionDate = transactionDate;
        }
        public TransactionType TypeOfTransaction { get;  }
        public double Amount { get;  }
        public DateTime TransactionDate { get;  }

        public override bool Equals(object obj)
        {
            
            var transaction = obj as BankTransaction;
            if (transaction == null) return false;

            return this.Amount == transaction.Amount &&
                this.TypeOfTransaction == transaction.TypeOfTransaction &&
                this.TransactionDate == transaction.TransactionDate;



        }
        public override int GetHashCode()
        {
           return  RuntimeHelpers.GetHashCode(this);
        }
    }
    public class Account
    {
        private string customerName;
        private double balance;
        private Account()
        {
        }

        public Account(string customerName, double balance)
        {
            this.customerName = customerName;
            this.balance = balance;
            Transactions = new List<BankTransaction>();
        }

        public string CustomerName
        {
            get { return customerName; }
        }

        public double Balance
        {
            get { return balance; }
        }

        public List<BankTransaction> Transactions { get; }

        public void Debit(double amount)
        {
            if (amount > balance)
            {
                throw new ArgumentOutOfRangeException("Invalid Amount");
            }

            if (amount < 0)
            {
                throw new ArgumentOutOfRangeException("Invalid Amount");
            }

            balance += amount; // INCORRECT CODE

            Transactions.Add(new BankTransaction(TransactionType.Debit, amount,DateTime.Now));
        }

        public void Credit(double amount)
        {

            if (amount < 0)
            {
                throw new ArgumentOutOfRangeException("Invalid Amount");
            }

            balance += amount;
            Transactions.Add(new BankTransaction(TransactionType.Debit, amount,DateTime.Now)); // INCORRECT CODE
        }



    }

The first step is to install the NuGet package for the Fluent Assertions library (called FluentAssertions).


To start with, let's see what a simple assertion and its failure message look like.


[TestMethod]
        public void DebitAmount_ShouldBe_DeductedFrom_Overall_Balance()
        {
            Account acc = new Account("Jack", 1000);
            acc.Debit(50);

            //usual Notation 
            //Assert.AreEqual(950, acc.Balance);

            //Fluent assertions.
            acc.Balance.Should().Be(950); //SHOULD FAIL

        }

 

Below is the result you get when the above test fails.

Usual Assert: something like "Assert.AreEqual failed. Expected:<950>. Actual:<1050>."

Fluent Assertion: something like "Expected value to be 950.0, but found 1050.0."

You see how much more descriptive the fluent assertion statement and its failure message are.

Let's see how you can assert on collections.


 [TestMethod]
        public void AllTransaction_Should_Be_Recorded()
        {
            Account acc1 = new Account("Jack", 1000);
            acc1.Credit(50);
            acc1.Debit(50);

            Account acc2 = new Account("Matt", 1000);
            acc2.Credit(50);
            acc2.Debit(150);

            acc1.Transactions.Should().BeInAscendingOrder(x => x.TransactionDate);
            acc2.Transactions.Should().BeInAscendingOrder(x => x.TransactionDate);
            //Test each member to see it exists in other collection.Uses overridden equals method
            acc1.Transactions.Should().BeEquivalentTo(acc2.Transactions); //SHOULD FAIL

        }

The last assertion fails, and the failure output describes exactly which items differ between the two collections.

This shows how powerful such an API can be when debugging failing tests.

Below is one more example, asserting on an exception that should be thrown.


  [TestMethod]
        public void Credit_Amount_LessThanZero_ShouldNotBeAllowed_AndThrowsException()
        {
            Account acc1 = new Account("Jack", 1000);
            
            Action creditAction = () => acc1.Credit(-50);

            //Test the type of exception and message (using wildcard)
            creditAction.ShouldThrow<ArgumentOutOfRangeException>().WithMessage("*Invalid Amount*");

        }

Fluent Assertions is very well documented and has a vast API covering almost every scenario you might encounter in your tests. Here is the link to the documentation wiki for the Fluent Assertions library.

This is by no means the only library available. There are other fluent assertion libraries like AssertJ (for Java), NFluent, Shouldly, etc.

In the end, which library you use is your choice, but with this post I want to highlight the fluent assertion approach, which can go a long way in keeping your test code maintainable.

Wednesday, January 20th, 2016

How often have we seen settings defined in the config file like this:
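In practice that usually means a flat list of appSettings entries; a hypothetical example:

```xml
<appSettings>
  <add key="MailServerName" value="smtp.example.com" />
  <add key="MailFromAddress" value="noreply@example.com" />
  <add key="MailToAddresses" value="reciever1@prowareness.com;reciever2@prowareness.com" />
</appSettings>
```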

 

The problem starts when there are a lot of them and there is no organized, logical way of putting them in the config file and reading them in the application.

The more organized way of implementing custom app settings is via a custom configSection.

Let’s first have a glimpse of the config file to see what we want to achieve.

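Something along these lines (the values are illustrative):

```xml
<settings>
  <mailSetting>
    <name>default</name>
    <description companyName="Prowareness" />
    <to>
      <email description="stakeholder" value="reciever1@prowareness.com" />
      <email description="security admin" value="reciever2@prowareness.com" />
    </to>
  </mailSetting>
</settings>
```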

As is now apparent, we have structured our mail-server-related settings in a much more organized way. By creating a strongly typed class for these settings, we can read them directly via our custom type. How? Let’s understand this process step by step.

Step 1.) First we need to create a custom SettingsConfiguration class (or any name you like) that implements the IConfigurationSectionHandler interface. This is the class we will use to register our custom config section in the config file. For now just add a method called Create, as shown below. We will discuss the rest of the code in subsequent steps.

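A minimal sketch of such a handler (the simplest possible implementation just hands the raw section node back; we deserialize it ourselves later in Step 4):

```csharp
using System.Configuration;
using System.Xml;

public partial class SettingsConfiguration : IConfigurationSectionHandler
{
    // Invoked by the runtime the first time the "settings" section is read.
    public object Create(object parent, object configContext, XmlNode section)
    {
        // Return the raw XmlNode; we deserialize it ourselves in Step 4.
        return section;
    }
}
```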

Step 2.) Now we will register this type in our config file. How? Here is the way:

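The registration might look like this (the namespace and assembly name are placeholders):

```xml
<configSections>
  <section name="settings"
           type="MyApp.Configuration.SettingsConfiguration, MyApp.Configuration" />
</configSections>
```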

Here the name “settings” should be the same as that of our mailSetting’s parent node. In the type attribute, the first part is our config class name with its full namespace, and the second part is the name of the assembly in which this config class is defined.


So our basic config skeleton is ready. Let’s fill it with the mailSetting we want to achieve.

Step 3.) As seen earlier, we already have our mailSetting defined in the config file, and to read it via XmlSerializer we need to create mapping classes for each node [if the node contains other node(s)].

So here are our classes.

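A sketch of what those mapping classes might look like. The class and property names here are my own; only the names given in the Xml* attributes must match the config file:

```csharp
using System.Xml.Serialization;

// The class name is free, but the XmlRoot name must match the node
// name used in the config file ("mailSetting").
[XmlRoot("mailSetting")]
public class MailSettingElement
{
    // <name>default</name> maps to a plain XmlElement property.
    [XmlElement("name")]
    public string Name { get; set; }

    // <description companyName="..." /> maps to a separate class whose
    // property is decorated with XmlAttribute instead of XmlElement.
    // (The <to> collection from case C below is mapped the same way,
    // via a list property.)
    [XmlElement("description")]
    public Description Description { get; set; }
}

public class Description
{
    [XmlAttribute("companyName")]
    public string CompanyName { get; set; }
}
```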

Here we can have a different class name, but the name defined in XmlRoot should match what is defined in the config file.

Now, there are different ways of defining the classes and their members based on their representation:

A.)   <name>default</name>: in this case we can have a property decorated with the XmlElement attribute, where the name in the attribute should match the name defined in the config file.

B.)    <description companyName="Prowareness" />: for mapping this kind of node we need a separate class, where CompanyName will be decorated with the XmlAttribute attribute instead of XmlElement.


C.)    <to>
         <email description="stakeholder" value="reciever1@prowareness.com"/>
         <email description="security admin" value="reciever2@prowareness.com"/>
       </to>
Now it’s a little tricky: what if we want to define an email collection like the above in our config file?

Here is the solution:

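One way to map that collection (a sketch; again, only the names in the Xml* attributes must match the config file):

```csharp
using System.Collections.Generic;
using System.Xml.Serialization;

public class To
{
    // Each <email .../> child element becomes one item in the list.
    [XmlElement("email")]
    public List<Email> Emails { get; set; }
}

public class Email
{
    [XmlAttribute("description")]
    public string Description { get; set; }

    [XmlAttribute("value")]
    public string Value { get; set; }
}
```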

In this case List<Email> will be our XmlElement, and the Email class can be defined as above.

So now all our mapping classes are ready. Let’s see how to read them in our application.

Step 4.) Now let’s go back to our SettingsConfiguration class’s Read method.

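A plausible shape for that method, as a sketch: it assumes the Create handler from Step 1 returned the raw XmlNode, and that MailSettingElement is the mapping class from Step 3:

```csharp
using System.Configuration;
using System.IO;
using System.Xml;
using System.Xml.Serialization;

public partial class SettingsConfiguration
{
    public static MailSettingElement Read(MailSettingElement defaultValue)
    {
        // GetSection invokes our Create handler, which returned the raw node.
        var sectionNode = ConfigurationManager.GetSection("settings") as XmlNode;
        if (sectionNode == null)
        {
            // Section missing from the config file: fall back to a default.
            return defaultValue;
        }

        XmlNode mailNode = sectionNode.SelectSingleNode("mailSetting");
        var serializer = new XmlSerializer(typeof(MailSettingElement));
        using (var reader = new StringReader(mailNode.OuterXml))
        {
            return (MailSettingElement)serializer.Deserialize(reader);
        }
    }
}
```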

Here we read our custom config section named “settings”, which we registered in step 2. We get our “mailSetting” node using the SelectSingleNode method, and then using XmlSerializer we deserialize it into our MailSettingElement class. From then on we can use this class to get the settings from the config file and apply them wherever needed.

So what advantages do I get out of this?

As we can see, even if sectionNode is null in the above code (meaning you missed defining the settings in the config file for whatever reason; maybe you are not sure of their value in the development or test environment), you can still pass some default value in code, as shown above.

By using this approach we can logically organize our settings into different sections, which can be reused across applications if this class is developed as a separate assembly.

And here is the way to read/use these settings:

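A sketch of consuming the settings, assuming the Read method from Step 4 and the mapping classes from Step 3:

```csharp
using System;

class Demo
{
    static void Main()
    {
        // Pass null (or any default) to fall back on if the section is missing.
        MailSettingElement mailSetting = SettingsConfiguration.Read(null);

        Console.WriteLine(mailSetting.Name);
        Console.WriteLine(mailSetting.Description.CompanyName);
    }
}
```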

That’s all folks for this article. Please let me know your views/comments.

 

Friday, December 11th, 2015

A data-tier application (DAC) is a logical database management entity that defines all SQL Server objects – such as tables, views, and instance objects associated with a user’s database. It is a self contained unit of SQL Server database deployment that enables data-tier developers and DBAs to package SQL Server objects into a portable artifact called a DAC package, or in other words – a .dacpac file.

A DACPAC is a Windows file with a .dacpac extension. The file supports an open format consisting of multiple XML sections representing details of the DACPAC origin, the objects in the database, and other characteristics.

The life cycle of database development involves a lot of script exchange among team members, or some ad-hoc arrangement. The advantage of a DAC over scripts is that it helps in identifying and validating behaviors across different source and target databases.

“DAC operations were introduced in SQL Server 2008 R2.”

In this article, we will see an example of a database deployment from a .dacpac file programmatically using the APIs, along with the relevant permissions needed to get this to work. Using a dacpac file, the following operations are possible:

  • Extract: A user can extract a database into a dacpac file.
  • Deploy: A user can deploy a database from an extracted dacpac file.
  • Upgrade: An existing database can be upgraded from an extracted dacpac file.

The user must be a member of the dbmanager role or assigned CREATE DATABASE permissions to create a database, including creating a database by deploying a DAC package. The user must be a member of the dbmanager role, or have been assigned DROP DATABASE permissions to drop a database.

To extract and deploy a database using a dacpac file programmatically, we need to add the Microsoft.SqlServer.Dac  assembly reference to the *.csproj file. Use Manage Nuget packages to add “Microsoft.SqlServer.Dac” reference.


DacServices class is used to extract and deploy a database from a dacpac file.

               

DacServices dacService = new DacServices("Connection String");

During extraction, we can create an instance of DacExtractOptions wherein we can specify what to include, exclude and validate. A sample code snippet is given below:

               

DacExtractOptions dacExtractOptions = new DacExtractOptions
{
	ExtractApplicationScopedObjectsOnly = true,
	ExtractReferencedServerScopedElements = true,
	VerifyExtraction = false,
	Storage = DacSchemaModelStorageType.Memory,
	IgnoreExtendedProperties = true,
	ExtractAllTableData = false,
	IgnoreUserLoginMappings = true,
	IgnorePermissions = false
};

Various options are available in DacExtractOptions. For instance, the content of the extracted dacpac file can be verified by setting the VerifyExtraction property to true; extraction will then fail if an issue is found. Similarly, you can include or exclude permissions, user login mappings, all table data, etc. during the extraction process if you so desire. There are a lot of other options for excluding database objects as well.

While deploying the database using the extracted dacpac file, you can use an instance of DacDeployOptions to exclude database objects if you wish to. Sample code snippet is given below:

               

DacDeployOptions dacDeployOptions = new DacDeployOptions
{
	CreateNewDatabase = true,
	ExcludeObjectTypes = new[]
	{
		ObjectType.Users, ObjectType.Permissions, ObjectType.StoredProcedures,
		ObjectType.RoleMembership, ObjectType.Aggregates, ObjectType.ExtendedProperties
	},
	BlockOnPossibleDataLoss = true,
	IgnoreFileAndLogFilePath = true
};

As you can see from the above example, all the object types that are included in the ExcludeObjectTypes array will be excluded during the deployment process.

You can also hook up to events like ProgressChanged and Message on the DacServices instance to get progress updates and log messages for all the operations that occur during the extract and deployment processes.

Here is the sample code for extracting and deploying the database from a dacpac file:

               


using System;
using System.Configuration;
using System.IO;
using Microsoft.SqlServer.Dac;

namespace Test
{
	class Program
	{
		static void Main(string[] args)
		{
			Extract();
			Deploy();
		}

		#region Private Declarations

		private const string sourceDatabaseName = "SourceDatabaseName";

		private const string targetDatabaseName = "TargetDatabaseName";

		#endregion

		#region Private Properties

		private static string FilePath
		{
			get
			{
				return Path.Combine(Environment.CurrentDirectory, "TestDatabase.dacpac");
			}
		}

		#endregion

		#region Public Methods

		public static void ExtractAndDeployDataTierApplicationPack()
		{
			Extract();
			Deploy();
		}

		#endregion

		#region Private Methods

		private static void Extract()
		{
			string sourceConnectionString = ConfigurationManager.ConnectionStrings["SourceDatabaseConnection"].ConnectionString;

			DacServices dacService = new DacServices(sourceConnectionString);

			DacExtractOptions dacExtractOptions = new DacExtractOptions
			{
				ExtractApplicationScopedObjectsOnly = true,
				ExtractReferencedServerScopedElements = true,
				VerifyExtraction = false,
				Storage = DacSchemaModelStorageType.Memory,
				IgnoreExtendedProperties = true,
				ExtractAllTableData = false,
				IgnoreUserLoginMappings = true,
				IgnorePermissions = false
			};

			dacService.Extract(FilePath, sourceDatabaseName, null, new Version(1, 0, 0), null, null, dacExtractOptions);
		}

		private static void Deploy()
		{
			string targetConnectionString = ConfigurationManager.ConnectionStrings["TargetDatabaseConnection"].ConnectionString;
			DacServices dacServices = new DacServices(targetConnectionString);
			DacDeployOptions dacDeployOptions = new DacDeployOptions
			{
				CreateNewDatabase = true,
				ExcludeObjectTypes = new[]
				{
					ObjectType.Users, ObjectType.Permissions, ObjectType.StoredProcedures,
					ObjectType.RoleMembership, ObjectType.Aggregates, ObjectType.ExtendedProperties
				},
				BlockOnPossibleDataLoss = true,
				IgnoreFileAndLogFilePath = true
			};

			using (DacPackage dacpac = DacPackage.Load(FilePath))
			{
				dacServices.Deploy(package: dacpac, targetDatabaseName: targetDatabaseName, upgradeExisting: true, options: dacDeployOptions);
			}
		}

		#endregion
	}
}
Sunday, July 05th, 2015

In almost all .NET applications you encounter a lot of boilerplate code which, although it has a pretty standard implementation, has to be written repeatedly in the application context. Common examples are the overridden ToString and Equals methods of your model classes, or implementations of IDisposable or INotifyPropertyChanged. In all these cases most of the code is pretty standard.

One way of automating this code is to inject it automatically into your generated MSIL (Microsoft Intermediate Language). This process of injecting code post-compilation, directly into the generated intermediate language, is known as .NET assembly weaving or IL weaving (similar to bytecode weaving in Java).

If you do this before compilation, i.e. add source code lines before compilation, it is known as source code weaving.

What is Fody and how does it work?

Fody is an extensible library for weaving .NET assemblies, written by Simon Cropp. It adds a post-build task to the MSBuild pipeline to manipulate the generated IL. Doing this usually requires a lot of plumbing code, which is exactly what Fody provides. Fody offers an extensible add-in model where anybody can take the core Fody package (which provides the basic code for adding the post-build task and manipulating IL) and create their own specific add-ins. E.g. Equals.Fody generates Equals and GetHashCode implementations, and ToString.Fody generates a ToString implementation for your classes.

In short, MSBuild compiles your code as usual, and Fody's post-build task then rewrites the generated IL before the final assembly is written out.

You can also use the same technique for implementing AOP (e.g. this simple implementation by Simon Cropp), logging, profiling methods, etc. Here is a comprehensive list of the already available Fody add-ins.

Fody uses Mono.Cecil, a library for manipulating intermediate language which in usage feels quite similar to the .NET reflection APIs. As far as IL manipulation goes, Mono.Cecil seems like the only game in town; chances are high that if you have used a plug-in for IL manipulation, it was using Mono.Cecil internally.
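To give a feel for Cecil's API, here is a small sketch (assuming a compiled Sample.dll sits next to the executable) that loads an assembly and walks its types and method bodies, much as a weaver would before rewriting them:

```csharp
using System;
using Mono.Cecil;

class CecilDemo
{
    static void Main()
    {
        // Load the compiled assembly, exactly as a Fody weaver would
        // before rewriting method bodies.
        ModuleDefinition module = ModuleDefinition.ReadModule("Sample.dll");

        foreach (TypeDefinition type in module.Types)
        {
            foreach (MethodDefinition method in type.Methods)
            {
                // Each method body is a plain list of IL instructions
                // that can be inspected or rewritten.
                Console.WriteLine("{0}.{1}: {2} instructions",
                    type.Name, method.Name,
                    method.HasBody ? method.Body.Instructions.Count : 0);
            }
        }
    }
}
```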

Let's try one of the Fody add-ins.

For the purpose of this blog I will demonstrate one very useful Fody add-in called NullGuard. As the name suggests, this add-in automatically adds code for raising exceptions when a null value is encountered in a property or a method parameter. Below are the steps for adding the plug-in and using it.

  • Create a sample project and add the NullGuard.Fody NuGet package.


  • Now by default all class properties, method parameters and return types will have null checks injected, i.e. if any of these has a null value, an exception will be thrown. But you can still allow nulls wherever required by using the [AllowNull] attribute, as shown below.

class Program
    {
        static void Main(string[] args)
        {

            NullGaurdTestClass test = new NullGaurdTestClass();
            test.TestMethod(null, null);
        }
    }
class NullGaurdTestClass
    {
        [AllowNull]
        public string PropertyThatAllowsNull { get; set; }
        public string PropertyThatDoesNotAllowsNull { get; set; }

        public void TestMethod([AllowNull]string allowsNull, string doesNotAllowNull)
        {
            
        }
        public string TestMethodDoesNotAllowNullReturnValue()
        {
            return null;
        }
    }

  • Running this code results in an ArgumentNullException for the doesNotAllowNull parameter, with the message "[NullGuard] doesNotAllowNull is null."

  • You can control the behavior of the add-in by applying an assembly-level attribute, e.g. [assembly: NullGuard(ValidationFlags.All)]. ValidationFlags.None stops the null checks from being added, while ValidationFlags.All adds the behavior for everything.

Let's decompile the generated assembly using ILSpy to see what code the add-in has injected. Below is what you get; this is the magic of IL weaving.


namespace NullGaurdFodySample
{
	internal class NullGaurdTestClass
	{
		public string PropertyThatAllowsNull
		{
			get;
			set;
		}
		public string PropertyThatDoesNotAllowsNull
		{
			[CompilerGenerated]
			get
			{
				string text = this.<PropertyThatDoesNotAllowsNull>k__BackingField;
				string expr_0A = text;
				Debug.Assert(expr_0A != null, "[NullGuard] Return value of property 'System.String NullGaurdFodySample.NullGaurdTestClass::PropertyThatDoesNotAllowsNull()' is null.");
				if (expr_0A == null)
				{
					throw new InvalidOperationException("[NullGuard] Return value of property 'System.String NullGaurdFodySample.NullGaurdTestClass::PropertyThatDoesNotAllowsNull()' is null.");
				}
				return expr_0A;
			}
			[CompilerGenerated]
			set
			{
				Debug.Assert(value != null, "[NullGuard] Cannot set the value of property 'System.String NullGaurdFodySample.NullGaurdTestClass::PropertyThatDoesNotAllowsNull()' to null.");
				if (value == null)
				{
					throw new ArgumentNullException("value", "[NullGuard] Cannot set the value of property 'System.String NullGaurdFodySample.NullGaurdTestClass::PropertyThatDoesNotAllowsNull()' to null.");
				}
				this.<PropertyThatDoesNotAllowsNull>k__BackingField = value;
			}
		}
		public void TestMethod(string allowsNull, string doesNotAllowNull)
		{
			Debug.Assert(doesNotAllowNull != null, "[NullGuard] doesNotAllowNull is null.");
			if (doesNotAllowNull == null)
			{
				throw new ArgumentNullException("doesNotAllowNull", "[NullGuard] doesNotAllowNull is null.");
			}
		}
		public string TestMethodDoesNotAllowNullReturnValue()
		{
			string text = null;
			string expr_06 = text;
			Debug.Assert(expr_06 != null, "[NullGuard] Return value of method 'System.String NullGaurdFodySample.NullGaurdTestClass::TestMethodDoesNotAllowNullReturnValue()' is null.");
			if (expr_06 == null)
			{
				throw new InvalidOperationException("[NullGuard] Return value of method 'System.String NullGaurdFodySample.NullGaurdTestClass::TestMethodDoesNotAllowNullReturnValue()' is null.");
			}
			return expr_06;
		}
	}
}

Getting started with your own Fody Add-in

Creating your own Fody add-in is not as easy as using one, but it is not as difficult as some of you may think. If you have ever used reflection in .NET, you already know a little about how it feels. If you are interested, below are a few resources to get you started.

Wednesday, July 01st, 2015

It may come as a surprise to many of us that table-valued functions are actually not supported in Entity Framework. However, there is a workaround to embrace their power. Let's get started!

Here is my domain model: a Student/School mapping in which both entities carry an "Address", which is a complex type, as defined below.

public class StudentContext : DbContext
    {
        protected override void OnModelCreating(DbModelBuilder modelBuilder)
        {
            modelBuilder.RegisterEntityType(typeof(Student));
            modelBuilder.RegisterEntityType(typeof(School));

            //Complex Type Configuration
            modelBuilder.ComplexType<Address>()
                        .Property(x => x.AddressLine1)
                        .HasColumnName("AddressLine1");
            modelBuilder.ComplexType<Address>()
                        .Property(x => x.AddressLine2)
                        .HasColumnName("AddressLine2");
            modelBuilder.ComplexType<Address>()
                        .Property(x => x.AddressLine3)
                        .HasColumnName("AddressLine3");
            modelBuilder.ComplexType<Address>()
                        .Property(x => x.ZipCode)
                        .HasColumnName("ZipCode");

            base.OnModelCreating(modelBuilder);
        }
    }

    public class Student
    {
        public int Id { get; set; }
        public string Name { get; set; }
        public DateTime DateOfBirth { get; set; }
        public Address Address { get; set; }
        public School School { get; set; }
    }

    public class School
    {
        public int SchoolId { get; set; }
        public string Name { get; set; }
        public Address Address { get; set; }
    }

    public class Address
    {
        public string AddressLine1 { get; set; }
        public string AddressLine2 { get; set; }
        public string AddressLine3 { get; set; }
        public string ZipCode { get; set; }
    }

Since a table-valued function returns a table, and a table is analogous to a class in the OOP world, we define below a table-valued function which returns each student's name along with the school the student is studying in. So let's first define the .NET type for the returned table.

[AttributeUsage(AttributeTargets.Class, AllowMultiple = false, Inherited = false)]
    public class TableValuedFunctionAttribute : Attribute
    {
        private readonly string _tvfDefinition;
        public TableValuedFunctionAttribute(string tvfDefinition)
        {
            _tvfDefinition = tvfDefinition;
        }

        public string Definition
        {
            get { return this._tvfDefinition; }
        }
    }

    [TableValuedFunction("[dbo].[GetStudentsAndTheirSchool]")]
    public class StudentSchoolDetail
    {
        public int Id { get; set; }
        public string Name { get; set; }
        public string School { get; set; }
    }

Now that we have defined the TVF on the .NET side, let's actually create the function which returns the table, as below.

CREATE FUNCTION [dbo].[GetStudentsAndTheirSchool] ()
RETURNS @ReturnTable TABLE (
	Id INT
	,NAME NVARCHAR(50)
	,School NVARCHAR(50)
	)
AS
BEGIN
	INSERT @ReturnTable
	SELECT s.Id
		,s.NAME
		,s2.NAME
	FROM dbo.Students s
	INNER JOIN dbo.Schools s2 ON s2.SchoolId = s.School_SchoolId

	RETURN
END

I have a little extension method to enforce basic rules and get the materialized values back, as below.

public static class DbExtensions
    {
        private const string TvfPlaceHolder = "SELECT * FROM {0} ()";

        public static IEnumerable<T> ExecuteTableValuedFunction<T>(this DbContext context) where T:class 
        {
            var attributes = typeof (T).GetCustomAttributes(typeof (TableValuedFunctionAttribute), false);
            if(attributes.Length == 0) throw new InvalidOperationException(string.Format("Cannot operate TVF on type:{0}",typeof(T)));
            var attribute = attributes[0] as TableValuedFunctionAttribute;
            return context.Database.SqlQuery<T>(string.Format(TvfPlaceHolder, attribute.Definition));
        }
    }

That is it! We are done. Let's consume it:

    class Program
    {
        static void Main(string[] args)
        {
            using (var ctx = new StudentContext())
            {
                ctx.Database.CreateIfNotExists();
                var allStudents = ctx.ExecuteTableValuedFunction<StudentSchoolDetail>().ToList();

                for (int i = 0; i < allStudents.Count; i++)
                {
                    Console.WriteLine("Student Name:{0} School:{1}",allStudents[i].Name, allStudents[i].School);
                }
            }

            Console.ReadLine();
        }
    }
Friday, April 10th, 2015

In the previous blog there was some basic information regarding TypeScript. On compilation, TypeScript produces a readable, standard JS file.
Let me write a simple class using TypeScript.

Writing a class in TypeScript is as easy as in C#: the syntax is similar to C# and easy to learn and understand. Compiling the .ts file produces the equivalent class in the generated .js file.

To play around with the classes written, I designed a simple HTML page with some textboxes and a button which performs some actions, and also tried out TypeScript's function callback functionality.

Can we include other libraries in TypeScript?
We can include references to jQuery, AngularJS or any other library in TypeScript. I tried it by taking a reference to jquery.d.ts (the DefinitelyTyped jQuery definitions), a declaration file that makes the library usable from TypeScript.

How to use the DefinitelyTyped jQuery lib?
Add a new file to your project, rename it "jquery.d.ts" and copy in the code from
http://typescript.codeplex.com/SourceControl/changeset/view/92d9e637f6e1#typings/jquery.d.ts
Then add a reference to this file in your TypeScript file, after which jQuery calls compile and run as expected.

Monday, March 23rd, 2015

Microsoft has introduced a superset of JavaScript, known as "TypeScript" (an open source project), which can be used for cross-browser development and combines static analysis, explicit interfaces and type checking.
TypeScript can be installed into Visual Studio in two ways:
-Using the Node.js Package Manager (npm)
-Using an MSI that integrates with Visual Studio.

After successful installation, the TypeScript compiler is installed in a default location.

Sometimes we may not find the TypeScript template in Visual Studio 2012. All we need to do is extract the MSI package, find the "TypeScriptLanguageService.vsix_File", remove the trailing "_File" and install that .vsix file, which gives us not only the template but also IntelliSense, code highlighting, etc.

With the new template added to Visual Studio, let's use it to create a project. We can see that a file with a .js extension is also added to the project, because TypeScript internally generates the JavaScript on compilation; the TypeScript compiler is used to compile the TypeScript.

Code Snippet in JS file :
jsCode

Same Code Snippet in Ts file :
tsCode

In the JavaScript code we initialized the variable x = 10 and then assigned x = "some string", and both alert statements give the output as expected. However, this can lead to serious issues, and such code is sometimes hard to maintain.
TypeScript helps us declare variables with specific data types like number, string, and bool,
as well as object types, which include classes, interfaces, etc.
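A small sketch of such annotations (the variable and interface names are illustrative; modern TypeScript spells the boolean type `boolean` rather than the early `bool`):

```typescript
// With a type annotation, reassigning a value of a different type is a
// compile-time error instead of a silent runtime surprise.
let count: number = 10;
// count = "some string";  // error: Type 'string' is not assignable to type 'number'

let inStock: boolean = true;

// Object types can be described with an interface:
interface Product {
    id: number;
    name: string;
}
const widget: Product = { id: 1, name: "widget" };
```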

I will explain more about TypeScript in upcoming blogs.

Friday, March 06th, 2015

Many a time, we make mistakes in Views like not closing braces or other syntax errors. We do not get to know about these errors until we run the application and navigate to that screen.

How to find out such errors at compile time?

Here comes the rescue: the powerful ASP.NET Compilation tool, which can be used with some arguments to find the errors in a precompiled application at an early stage, locally, even before we check in our code.

Some of the arguments available for this tool are:

aspnet_compiler [-?]
                [-m metabasePath | -v virtualPath [-p physicalPath]]
                [[-u] [-f] [-d] [-fixednames] targetDir]
                [-c]
                [-errorstack]
                [-nologo]
                [[-keyfile file | -keycontainer container] [-aptca] [-delaysign]]

For our scenario, we will focus on a few of these.

  1. Open the Visual Studio command prompt
  2. Navigate to the location where the source exists
  3. Run the command

a. aspnet_compiler.exe -c -v temp -p ProjectName (e.g. MVC.Web)

It will display the list of warnings and errors, if any, in the specified project.

[Image: aspnet_compiler output listing warnings and errors]

Now you can go ahead and fix those.

[Image: aspnet_compiler output]

In my next blog, let's see how to integrate this into our build process.

Sunday, March 01st, 2015

 

Let's consider the two situations mentioned below, which define the problem. I guess a lot of you will be familiar with the first one.

Problem

Situation 1 :

You start with a project which has a repository, i.e. you have the Repository pattern implemented. Initially the situation would be something like that shown below:


public class ProductRepository : IProductRepository
    {
        public Product GetProduct(int Id)
        {
            //logic
        }
        public bool UpdateProduct(Product c)
        {
            //logic
        }

        public bool InsertProduct(Product c)
        {
            //logic
        }

        public bool DeleteProduct(Product c)
        {
            //logic
        }
    }

After 1 Year…



public class ProductRepository : IProductRepository
    {
        public Product GetProduct(int Id)
        {
            //logic
        }
        public bool UpdateProduct(Product c)
        {
            //logic
        }

        public bool InsertProduct(Product c)
        {
            //logic
        }

        public bool DeleteProduct(Product c)
        {
            //logic
        }

        public List<Product> GetProductsByCategory()
        {
            //logic
        }

        public List<Product> GetProductsByDesignNo()
        {
            //logic
        }
        public List<Product> GetAllProductsWithComplaints()
        {
            //logic
        }
        public List<Product> GetProductsForCustomerWithComplaints(int customerId)
        {
            //logic
        }
        //////////////////////
        /// AND MANY MORE SUCH VARIATIONS OF GETTING PRODUCTS WITH DIFFERENT CRITERIA
        //////////////////////
    }

After 2 years… you can imagine.

Situation 2

You are developing a product with, say, advanced search functionality, and alongside your database you want to use a search server with its own query language. The only issue is that you want this search functionality to be loosely coupled, as your architects and management are not sure about the product and want to be able to replace it with a different search server in the future. So the API to interact with this search functionality should completely abstract the specifics and should not leak any search-product-specific code into the application.


The problem is that such products have their own specific query language and/or API. If I have to implement a SearchProducts method, how do I abstract the input query parameters/objects so that I can present a uniform interface to the code that uses this functionality? E.g. Elasticsearch provides an API which encapsulates a search request in classes implementing the ISearchRequest interface, whereas Microsoft's FAST provides a very elaborate REST API.

I have faced this situation in two of the projects I worked on, and I am sure a lot of people have faced it too. That's what pointed me to Query Objects (or the Query Object pattern).

Query Objects

This link has a brief explanation of query objects by Martin Fowler.

Query objects are basically named queries, or queries represented by objects. It is the equivalent of the Command pattern for a query.

Let's see a simple implementation of query objects targeting Situation 2 above (the same approach can be applied to Situation 1).

First define a query interface which all our query classes will implement (below is an example of a query to search products by design number).


public interface ISearchQuery<T>
    {
        IEnumerable<T> Execute();
    }


public class FAST_ProductsByDesignNoQuery : ISearchQuery<Product>
    {
        private string _designNo;

        public FAST_ProductsByDesignNoQuery(string designNo)
        {
            _designNo = designNo;
        }

        public IEnumerable<Product> Execute()
        {
            //FAST-specific query logic goes here
        }
    }

If we move to another search server, we will define a new query class for the same query, implementing the same ISearchQuery interface.

Below is how our Search API will look.


public interface ISearch
    {
        IEnumerable<Product> SearchProducts(ISearchQuery<Product> query);
        IEnumerable<Customer> SearchCustomers(ISearchQuery<Customer> query);
    }

public class SearchAPI : ISearch
    {
        public IEnumerable<Product> SearchProducts(ISearchQuery<Product> query)
        {
            return query.Execute();
        }

        public IEnumerable<Customer> SearchCustomers(ISearchQuery<Customer> query)
        {
            return query.Execute();
        }
    }

This makes my search API totally independent of the search server I am using.

The sample given above is the simplest example of Query objects but there is much more that can be done.

Going Further…

There are a couple of things we can do to make our query objects more sophisticated. We can have a base class which adds paging support through properties for page size, page number, sorting, etc.
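A rough sketch of such a paging base class, written in TypeScript for brevity (the names PagedQuery and fetchAll are illustrative, not from this post; the same shape works in C# with an abstract class):

```typescript
interface ISearchQueryT<T> {
    execute(): T[];
}

// Base class holding the paging properties; concrete queries only supply
// their data-source-specific logic.
abstract class PagedQuery<T> implements ISearchQueryT<T> {
    constructor(
        public pageSize: number = 10,
        public pageNumber: number = 1,  // 1-based page index
        public sortBy?: string,
    ) {}

    // Each concrete query supplies its full result set...
    protected abstract fetchAll(): T[];

    // ...and the base class applies the paging uniformly.
    execute(): T[] {
        const start = (this.pageNumber - 1) * this.pageSize;
        return this.fetchAll().slice(start, start + this.pageSize);
    }
}
```

A concrete query would then override fetchAll() with its data-source-specific logic and inherit the paging behaviour unchanged.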

Taking it to the extreme, we can define generic query objects where queries are expressed in a project-specific language rather than as separate classes for each query, and use an interpreter to translate them into data-source-specific queries. And yes, you are right: expression trees in .NET are a good example of query objects (provided you have an interpreter to translate the expression tree into your data-source-specific query). Another example is Hibernate's Criteria API.
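As a toy illustration of that idea, here is a hypothetical criteria object with a per-data-source interpreter, sketched in TypeScript (the Criteria shape and toSqlWhere are inventions for this sketch, not a real API):

```typescript
// A generic query object: criteria described as data, not as code.
type Criterion = { field: string; op: "eq" | "gt" | "lt"; value: string | number };

interface Criteria {
    where: Criterion[];
}

// One interpreter per data source: this one renders a SQL-ish WHERE clause;
// an Elasticsearch interpreter would walk the same Criteria differently.
function toSqlWhere(c: Criteria): string {
    const ops = { eq: "=", gt: ">", lt: "<" };
    return c.where
        .map(w =>
            `${w.field} ${ops[w.op]} ${typeof w.value === "string" ? `'${w.value}'` : w.value}`)
        .join(" AND ");
}
```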

The only thing to be careful about with the above approach is how complex you want it to become. For example, having a set of classes which define your project-specific query language, or writing a custom interpreter for expression trees, is quite complex and does not make sense unless you are working on a very big project implemented by multiple teams.
