Archive for the Category ◊ .Net ◊

Tuesday, July 05th, 2016

What is a Deadlock?

You may have encountered deadlock-like situations in real life, for example:

  • It takes money to make money, or
  • You can’t get a job without experience, and you can’t get experience without a job.

Similarly, in concurrency, when two or more competing actions wait for each other to finish and neither ever does, the result is a deadlock.

There is a common misconception that deadlock is an issue that occurs only in multiprocessing systems, parallel computing and distributed systems involving many processes and resources. In reality, a deadlock occurs whenever one process holds a resource (e.g. R1) and waits for another (e.g. R2) to finish its task, while another process holds that resource (R2) and waits for R1, and so on.

 

[Diagram: resource allocation graph showing two processes deadlocked over resources R1 and R2]

Example of Deadlock

  • Deadlock in SQL Server

We sometimes observe that transactions take a long time to complete when there are a large number of records in the database.

For example, let’s assume transaction B is in the process of modifying a row in table T, which places an Intent-Exclusive lock on both the table and the page that contains the row.

Simultaneously, process A needs to read a few pages of the same table. However, process B already holds a lock on one of the pages containing rows that process A needs. Both processes end up waiting on each other, resulting in a deadlock.

 

Necessary Conditions for a Deadlock

Coffman identified four conditions that must hold simultaneously for a deadlock to occur in a system.

  • Mutual Exclusion

A resource held by one process cannot be shared by another process.

e.g.: if one process acquires a region of memory, no other process can acquire it until the first process releases it.

  • Hold and wait

A process holds at least one resource while waiting to acquire another.

  • No Preemption

A resource locked by a process cannot be forcibly taken away by another process; it is released only when the holding process is finished with it.

  • Circular wait

Suppose we have a set of processes {P0, P1, P2, …, Pn} such that P0 is waiting for a resource held by P1, P1 is waiting for a resource held by P2, and so on, with Pn waiting for a resource held by P0.

This creates a circular wait in which every process in the chain is waiting for a resource held by the next process in the chain.

 

Referring to the deadlock diagram shown earlier:

  • Mutual Exclusion

R1 and R2 are non-sharable

  • No Preemption

The processes cannot be forced to release their locks on resources R1 and R2.

  • Hold and wait

Process T1 is holding R1 and waiting for R2, while process T2 is holding R2 and waiting for R1.

  • Circular wait

Together, these conditions create a circular wait: T1 waits for T2 and T2 waits for T1 (see the sketch below).
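
To make the four conditions concrete, here is a minimal C# sketch (not from the original post) in which two threads acquire two locks in opposite order; lockA and lockB stand in for R1 and R2, and the two threads for T1 and T2. Run it a few times and it will usually hang in a deadlock.

using System;
using System.Threading;

class DeadlockDemo
{
    private static readonly object lockA = new object(); // plays the role of R1
    private static readonly object lockB = new object(); // plays the role of R2

    static void Main()
    {
        // T1: holds R1, then waits for R2.
        var t1 = new Thread(() =>
        {
            lock (lockA)
            {
                Thread.Sleep(100);                  // give T2 time to grab lockB
                lock (lockB) { Console.WriteLine("T1 finished"); }
            }
        });

        // T2: holds R2, then waits for R1 -> circular wait.
        var t2 = new Thread(() =>
        {
            lock (lockB)
            {
                Thread.Sleep(100);
                lock (lockA) { Console.WriteLine("T2 finished"); }
            }
        });

        t1.Start();
        t2.Start();
        t1.Join();
        t2.Join();  // never returns once the deadlock occurs
    }
}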

 

Handling Deadlock

There are three ways in which we can handle deadlocks:

  • Deadlock Detection

In this approach we allow the system to enter a deadlock state and then recover from it. There are different ways to detect a deadlock; in SQL Server, one of them is to enable a deadlock trace flag (such as trace flag 1222).

Below is the deadlock information captured once the trace flag was turned on:

[Screenshot: deadlock report written to the error log with the trace flag enabled]

When the deadlock is detected, the database engine ends it by choosing one of the threads as the deadlock victim.

  •   Deadlock Prevention

Ensure that the system never enters a deadlock state. This can be achieved by preventing any one of the four conditions required for a deadlock.

For example:

Hold and Wait:

Guarantee that whenever a process requests resources, it is not holding any other resources.

No Preemption:

A process should release the resources it holds if it cannot obtain an additional resource. The preempted resources are added to the list of resources the process is waiting for, and the process is restarted only when it can regain its old resources as well as the new one it is requesting.

Circular Wait

Impose an ordering on all resource types in the system; a process can then request resources only in increasing order of that numbering.

E.g. assign an order to the hardware resource types in a computer system:

1 – Card Reader
2 – Printer
3 – Plotter
4 – Tape Drive

A process may now request the printer and then the tape drive, but it cannot request the tape drive and then the printer (see the sketch below).
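
As a sketch (not part of the original article), the same idea applied to the earlier C# example: if every thread acquires lockA before lockB, the circular wait can never form.

using System;
using System.Threading;

class LockOrderingDemo
{
    // Fixed global order: lockA is resource #1, lockB is resource #2.
    private static readonly object lockA = new object();
    private static readonly object lockB = new object();

    static void DoWork(string name)
    {
        lock (lockA)          // every thread takes #1 first...
        {
            lock (lockB)      // ...and only then #2, so no circular wait can form
            {
                Console.WriteLine("{0} acquired both locks", name);
            }
        }
    }

    static void Main()
    {
        var t1 = new Thread(() => DoWork("T1"));
        var t2 = new Thread(() => DoWork("T2"));
        t1.Start(); t2.Start();
        t1.Join(); t2.Join();   // always completes
    }
}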

  • Deadlock avoidance

The deadlock avoidance approach handles deadlocks before they occur.

Methods like the wait-for graph can be used to detect a potential deadlock in advance.

Wait-for graph:

 

In this method, a node is created for each transaction entering the system. Suppose transaction T1 requests a lock on a resource X that is currently held by another transaction T2: a directed edge is created from T1 to T2 (T1 waits for T2). When T2 releases the resource, the edge is dropped.

[Figure: wait-for graph]

The system maintains the wait-for graph for all transactions waiting on resources and keeps checking whether the graph contains a cycle (a sketch of this check follows below).
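
As an illustration (a simplified sketch, not the database engine's actual implementation), cycle detection in a wait-for graph can be done with a depth-first search:

using System;
using System.Collections.Generic;

class WaitForGraph
{
    // edges[t] = list of transactions that t is waiting for
    private readonly Dictionary<string, List<string>> edges = new Dictionary<string, List<string>>();

    public void AddWait(string waiter, string holder)
    {
        if (!edges.ContainsKey(waiter))
            edges[waiter] = new List<string>();
        edges[waiter].Add(holder);
    }

    public bool HasCycle()
    {
        var visiting = new HashSet<string>();
        var done = new HashSet<string>();
        foreach (var node in edges.Keys)
            if (Dfs(node, visiting, done)) return true;
        return false;
    }

    private bool Dfs(string node, HashSet<string> visiting, HashSet<string> done)
    {
        if (visiting.Contains(node)) return true;   // back edge => cycle => deadlock
        if (done.Contains(node)) return false;
        visiting.Add(node);
        List<string> next;
        if (edges.TryGetValue(node, out next))
            foreach (var n in next)
                if (Dfs(n, visiting, done)) return true;
        visiting.Remove(node);
        done.Add(node);
        return false;
    }
}

// Usage: T1 waits for T2 and T2 waits for T1 -> deadlock.
// var g = new WaitForGraph();
// g.AddWait("T1", "T2");
// g.AddWait("T2", "T1");
// Console.WriteLine(g.HasCycle());   // True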

We can use the following approaches to avoid deadlock:

  • Do not allow a transaction to request a resource that is already acquired by another transaction. One way to accomplish this is to schedule the transactions so that none of them ever has to enter a waiting state.
  • Another way is to roll back one of the transactions; it is usually preferable to roll back the younger transaction. This is generally done automatically by the database engine when a deadlock is detected.
Category: .Net, General  | One Comment
Tuesday, June 07th, 2016

 

Introduction

When we are trying to solve a large business problem, a good software architecture helps us focus on the real business problem rather than on the technical complexities caused by bad design.

In this article we will look at the DDD approach for solving a big business problem. There is a very good article on CodeProject about what DDD is and some of its basics: http://www.codeproject.com/Articles/339725/Domain-Driven-Design-Clear-Your-Concepts-Before-Yo

We will use that article as a reference and see how we can tackle the technical challenges, looking at how to implement the different entities it explains.

We will be using a framework called Highway.Data, one of the fastest and smoothest ways to a great architecture, especially when working with DDD.

The first step in the design is to define the domain(s). If it’s a very small application, a single domain will suffice, but the real power of DDD comes when we divide a large application into many independent, segregated smaller domains. Each domain uses its own DataContext, entities, value objects, repositories, and domain services.

 

Problem Statement

Let’s look at a real-world business problem and try to solve it with our approach. Let’s take the example of:

“An e-commerce website where multiple consumers can browse through different products, add them to a cart, purchase them, and then have the order shipped”

When we analyse the above business problem, the domain that first comes to mind is “Ecommerce”. But if we start our application with “Ecommerce” as the single domain, we might end up with a huge, very complex, unstructured application. What if we could instead divide the application into smaller domains, for example:

  • Users – the end consumers of the application
  • Login/Authentication – logging in to the application
  • Products – products listed for purchase
  • Order-Management – creating the order from the user’s cart
  • Payment – handles payment for the orders, discounts and other business logic
  • Shipping – once an order is completed, handles the shipping and order tracking

Each of these domains can be independently developed, changed and maintained as a separate project.

For this article we will look at one domain (the Users domain) in depth and see how we can implement it end to end.

 

Application domain

Since each domain is an application in our context, let’s create a domain for Users called UserAppDomain. Let’s start by installing the Highway.Data NuGet package.

 

[Screenshot: installing the Highway.Data NuGet package]

Depending on which data access technology we want to use, select the corresponding package. In this article we will be using Entity Framework for our example.

[Screenshot: installing the Entity Framework-specific Highway.Data package]

 

Entities & Value objects

When working with DDD it makes sense to define the entities for each domain. There will be situations where common entities could be shared across domains, but it’s not a good idea to do so. Let’s discuss why not.

Let’s take an example,

A User entity (the end consumer) would carry all the information such as username, full name, address, phone, age, sex, etc. The Login domain, which is responsible for authentication, also uses a User entity, but it needs nothing beyond a few basic properties such as username, full name and, say, a password. Hence it makes sense to create a User entity specific to the Login domain with just these few properties.

 

[Code screenshot: the domain-specific User entity]
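
To illustrate the point, here is a sketch (property and namespace names are assumptions, not the code from the original screenshot): the Users domain keeps a full UserEntity, while the Login domain keeps its own slim User.

namespace UserAppDomain.Entities
{
    // Users-domain entity: carries the full consumer profile.
    public class UserEntity
    {
        public int Id { get; set; }
        public string Name { get; set; }
        public string Address { get; set; }
        public string Phone { get; set; }
        public int Age { get; set; }
    }
}

namespace LoginAppDomain.Entities
{
    // Login-domain entity: only what authentication needs.
    public class User
    {
        public int Id { get; set; }
        public string UserName { get; set; }
        public string FullName { get; set; }
        public string PasswordHash { get; set; }
    }
}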

 

Depending on the ORM we need to map the entity to the corresponding table in DB.

[Code screenshot: mapping the entity to its table]
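
With Entity Framework this mapping is typically an EntityTypeConfiguration; a hedged sketch follows (the table and column settings are assumptions):

using System.Data.Entity.ModelConfiguration;
using UserAppDomain.Entities;

namespace UserAppDomain.Mappings
{
    // Maps the Users-domain UserEntity onto its table; table/column names are assumed.
    public class UserEntityMap : EntityTypeConfiguration<UserEntity>
    {
        public UserEntityMap()
        {
            ToTable("Users");
            HasKey(u => u.Id);
            Property(u => u.Name).IsRequired().HasMaxLength(200);
            Property(u => u.Address).HasMaxLength(500);
        }
    }
}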

 

Now that we have our entity and it is mapped to the respective table, we can move on to the next step.

 

Data Domain

So now we create a data domain through which we can access the data across the application

[Code screenshot: the data domain / DataContext definition]

 

Repository

Highway.Data provides a generic repository which can be used to execute typed queries. Let us look at a query first before we use the repository.

Highway.Data ships with a few default queries which cover most data access needs.

E.g.:

var users = _repository.Find(new FindAll<UserEntity>()).ToList();

var user = _repository.Find(new GetById<UserEntity>(10));

Or we can create custom queries for more complex querying.

E.g.:

[Code screenshot: a custom query class]
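
One possible shape for such a query, assuming Highway.Data's Query<T> base class exposes a ContextQuery delegate (a hedged sketch; the filter logic and property names are illustrative, not the code from the screenshot):

using System.Linq;
using Highway.Data;
using UserAppDomain.Entities;

// Finds users whose name and address match the supplied fragments.
public class UserComplexQuery<T> : Query<T> where T : UserEntity
{
    public UserComplexQuery(string name, string addressPart)
    {
        // ContextQuery is executed against the data context when the repository runs the query.
        ContextQuery = context => context.AsQueryable<T>()
            .Where(u => u.Name.Contains(name) && u.Address.Contains(addressPart));
    }
}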

 

And then use it like

var users = _repository.Find(new UserComplexQuery<UserEntity>("john", "1st street"));

The repository can be directly injected into any controller or class where we wish to query data. E.g.:

 

[Code screenshot: injecting the repository into a controller]
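
For instance, a sketch assuming constructor injection of Highway.Data's IRepository and the hypothetical UserComplexQuery above:

using System.Linq;
using System.Web.Mvc;
using Highway.Data;
using UserAppDomain.Entities;

public class UsersController : Controller
{
    private readonly IRepository _repository;

    // The repository is supplied by the IoC container.
    public UsersController(IRepository repository)
    {
        _repository = repository;
    }

    public ActionResult Index(string name, string street)
    {
        var users = _repository.Find(new UserComplexQuery<UserEntity>(name, street)).ToList();
        return View(users);
    }
}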

 

 

Friday, April 01st, 2016

C# Interactive!! Seems interesting. What is that?

The C# Interactive window provides an easy way to experiment with code, APIs, classes and external DLLs without writing code in a .cs file. Instead, you run the code directly in this window. It supports IntelliSense/code completion.

Where and how can I find the C# Interactive window?

Well, either install the VS 2015 update from here:
https://www.visualstudio.com/downloads/download-visual-studio-vs
or, if you are still using VS 2012, you can install the Microsoft “Roslyn” CTP from here:
https://www.microsoft.com/en-in/download/details.aspx?id=34685

[Screenshot: the C# Interactive window in Visual Studio]

 

I just read “Roslyn”. What is that?

The .NET Compiler Platform (“Roslyn”) provides open-source C# and Visual Basic compilers with rich code analysis APIs.

What can I do with the C# Interactive window?

1.)    Run simple code :

[Screenshot: running simple code]

2.)    Run Multi Line Code:

To run multi-line code where each line is itself complete, just press Ctrl+Enter; otherwise press Shift+Enter and continue writing code on the subsequent line. Keep doing that until the code is complete, and finally press the Enter key to get the output.

[Screenshot: running multi-line code]

3.)    Define types and call their methods:

[Screenshot: defining custom types and calling their methods]

4.)    Test LINQ queries:

[Screenshot: testing LINQ queries]

5.) Reference External dlls:

[Screenshot: referencing an external DLL]

Anything else ?

Yes, there are also a few commands to help you out:

Commands:

#help – if you need any help, just type #help and it will show all the available options.

#reset – resets the execution environment to its initial state.
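
For example, a short session in the window might look like this (the referenced DLL path, namespace and type are hypothetical; #r, #help and #reset are the window's own directives):

> 1 + 2
3
> #r "C:\libs\MyMathLib.dll"      // hypothetical assembly
> using MyMathLib;                // hypothetical namespace
> Calculator.Add(40, 2)           // hypothetical type
42
> #reset                          // clears everything defined so far in the session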

 

That’s all folks. Thanks for reading. Hope you enjoyed it.

Sunday, March 06th, 2016

Pipes are a mechanism for inter-process communication in Windows. Pipes come in two varieties, viz. anonymous pipes and named pipes. Anonymous pipes, as the name suggests, do not have a name and can be used to communicate between threads or between two related processes, i.e. when one process initiates another process.

Below are major differences between named and anonymous pipes.

Anonymous pipes:

  • Communicate between threads or related processes (i.e. parent–child processes) on the same machine.
  • One-way communication (half-duplex), i.e. either server to client or client to server.*
  • Byte based, i.e. data is written and read as a stream of bytes (or characters).

Named pipes:

  • Communicate between any kind of processes (related or unrelated), on the same machine or over the network.
  • Two-way communication (duplex).
  • Message based, i.e. data is written and read as a stream of messages (or strings).

 * You can always use two anonymous pipes for bi-directional communication.

In this blog post I will talk about anonymous pipes and in upcoming post will be talking more about named pipes.

To communicate using anonymous pipes, whether within a process or between processes, we need a mechanism to pass the pipe handle to the client so that it can connect to the pipe created by the server. Below are the steps.

  1. The parent creates the pipe, hence it is called the pipe server.
  2. The parent gets the handle for the pipe.
  3. The parent creates the child and passes the pipe handle to it.
  4. The child uses the handle to connect to the pipe.

Now communication can start. Based on the communication direction the server and client specified, one end of the pipe is considered the writer end and the other the reader end.

Let’s see how this works in two different scenarios.

Intra-Process Communication

You can use anonymous pipes to communicate between two threads in the same process. Below is the code for the same.


using System;
using System.IO;
using System.IO.Pipes;
using System.Threading;

namespace AnonymousPipesIntraProcess
{
    class Program
    {
        
        static void Main(string[] args)
        {
            using (AnonymousPipeServerStream pipedServer = new AnonymousPipeServerStream(PipeDirection.Out))
            {

                Thread child = new Thread(new ParameterizedThreadStart(childThread));
                child.Start(pipedServer.GetClientHandleAsString());
                
                using (StreamWriter sw = new StreamWriter(pipedServer))
                {
                    var data = string.Empty;
                    sw.AutoFlush = true;
                    while (!data.Equals("quit", StringComparison.InvariantCultureIgnoreCase))
                    {
                        pipedServer.WaitForPipeDrain();
                        Console.WriteLine("SERVER : ");
                        data = Console.ReadLine();
                        sw.WriteLine(data);
                    }


                }

            }
        }

        public static void childThread(object parentHandle)
        {
            using (AnonymousPipeClientStream pipedClient = new AnonymousPipeClientStream(PipeDirection.In, parentHandle.ToString()))
            {
                using (StreamReader reader = new StreamReader(pipedClient))
                {
                    var data = string.Empty;
                    while ((data = reader.ReadLine()) != null)
                    {
                        Console.WriteLine("CLIENT:" + data.ToString());
                    }
                    Console.Write("[CLIENT] Press Enter to continue...");
                    Console.ReadLine();
                }
            }
        }

       
    }
}

The code is self-explanatory; the only thing to note is that when we define the pipe server and pipe client we specify a direction, i.e. In or Out, and further communication can take place only in that one specified direction.

Inter-Process Communication

A little variation of the same code can be used to communicate between two related processes, i.e. when one process creates another process.

Server code:


using System;
using System.Diagnostics;
using System.IO;
using System.IO.Pipes;

namespace AnonymousPipesServer
{
    class Program
    {
        static void Main(string[] args)
        {
            
            
            using (AnonymousPipeServerStream pipedServer = new AnonymousPipeServerStream(PipeDirection.Out, HandleInheritability.Inheritable))
            {
                var startInfo = new ProcessStartInfo(@"AnonymousPipesClient.exe");
                startInfo.UseShellExecute =false;
                
                startInfo.Arguments = pipedServer.GetClientHandleAsString();
                var client = Process.Start(startInfo);
                pipedServer.DisposeLocalCopyOfClientHandle();
                using (StreamWriter sw = new StreamWriter(pipedServer))
                {
                    var data = string.Empty;
                    sw.AutoFlush = true;
                    while (!data.Equals("quit", StringComparison.InvariantCultureIgnoreCase))
                    {
                        pipedServer.WaitForPipeDrain();
                        data = Console.ReadLine();
                        sw.WriteLine(data);
                    }
                }
            }
        }
    }
}

Client Code:


using System;
using System.Collections.Generic;
using System.IO;
using System.IO.Pipes;
using System.Linq;
using System.Text;
using System.Threading;
using System.Threading.Tasks;

namespace AnonymousPipesClient
{
    class Program
    {
        static void Main(string[] args)
        {
            
            var parentHandle = args[0];
            using (AnonymousPipeClientStream pipedClient = new AnonymousPipeClientStream(PipeDirection.In, parentHandle.ToString()))
            {
                using (StreamReader reader = new StreamReader(pipedClient))
                {
                   
                    var data = string.Empty;
                    while ((data = reader.ReadLine()) != null)
                    {
                        Console.WriteLine("CLIENT:" + data.ToString());
                    }
                    Console.Write("[CLIENT] Press Enter to continue...");
                    Console.ReadLine();
                }
            }

        }
    }
}

Two changes in this code are use of handle inheritance and DisposeLocalCopyOfClientHandle.

While creating the AnonymousPipeServerStream object, the second parameter passed to the constructor indicates whether or not the child process can inherit handles from the parent process. This value should be HandleInheritability.Inheritable so that the child process can use the pipe handle we are passing. Refer to this link to learn more about the handle inheritance concept.

Also, in the case of inter-process communication we need to call the DisposeLocalCopyOfClientHandle method after passing the handle to the client, as this makes sure the server gets a notification when the client disposes its PipeStream object. Suppose you don’t call this method and the client disposes its PipeStream object: when the server then sends a message to the client, the server process will hang indefinitely. By calling this method we ensure that the server will instead throw an appropriate exception.

Conclusion

Anonymous pipes provide an easy way of communicating between related processes without as much overhead as named pipes. There are also ways to make them work for unrelated processes, but that is out of scope for this post, and in such cases it is better to use named pipes anyway.

Thursday, February 11th, 2016

Unit test your MVC views using Razor Generator

Yes, we can unit test our MVC views. This is possible using RazorGenerator, which generates classes that can then be used to instantiate views and unit test them. The generated classes also mean the views are precompiled.

Razor Generator is a Custom Tool for Visual Studio that allows processing Razor files at design time instead of runtime, allowing them to be built into an assembly for simpler reuse and distribution.

But what’s the advantage of unit testing MVC views?

If I (or another dev) remove some field from the view, I don’t want to wait for an acceptance test to discover this. I want it discovered and flagged by a unit test that runs multiple times daily.

If we are setting up a CD pipeline, we generally have tests configured at different stages. So we can have unit tests of the views covering all positive and negative scenarios, while the acceptance tests are configured with a few scenarios to check the basic overall functionality.

 

Steps to Write Unit Tests:

1. Install the RazorGenerator extension in Visual Studio.

2. Create an MVC project using the ‘Internet Application’ template, including the unit test project.

3. Use NuGet to install the RazorGenerator.Mvc package in your MVC project.

4. Set the Custom Tool on Views\Home\Index.cshtml to ‘RazorGenerator’ and press Enter; Index.generated.cs is generated under it. Please check the image below for reference.

[Screenshot: Index.generated.cs nested under Index.cshtml]

5. Use NuGet again to add the RazorGenerator.Testing package to the unit test project. And that’s all it takes to get set up!

Below is a sample unit test.

Replace the default markup in Index.cshtml with the line below:

<h2>@ViewBag.Message</h2>

Now we can write a unit test for our precompiled Index.cshtml view to check that our value is rendered in the HTML document as expected.

e.g. create a Views\HomeViewsTest.cs (in the unit test project):

[Screenshot: sample unit test for the Index view]
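
A test along these lines might look as follows (a hedged sketch: it assumes the RenderAsHtml() extension from RazorGenerator.Testing and that the class generated for Views\Home\Index.cshtml lands in a Views.Home namespace; adjust the names to what RazorGenerator produces in your project):

using HtmlAgilityPack;
using Microsoft.VisualStudio.TestTools.UnitTesting;
using RazorGenerator.Testing;
using MyMvcApp.Views.Home;   // assumed namespace of the generated Index view class

[TestClass]
public class HomeViewsTest
{
    [TestMethod]
    public void Index_Renders_ViewBag_Message()
    {
        // Instantiate the precompiled view generated by RazorGenerator.
        var view = new Index();
        view.ViewBag.Message = "Welcome!";

        // RenderAsHtml renders the view into an HtmlAgilityPack document.
        HtmlDocument doc = view.RenderAsHtml();

        // Verify the <h2> contains the ViewBag value.
        HtmlNode node = doc.DocumentNode.Element("h2");
        Assert.AreEqual("Welcome!", node.InnerHtml.Trim());
    }
}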

And this is how we can write multiple cases to test our views.

References :

http://razorgenerator.codeplex.com/

http://blogs.planbsoftware.co.nz/?p=469

Category: .Net  | One Comment
Sunday, January 31st, 2016

If you write unit tests or follow a TDD approach in your projects, it is absolutely essential that your unit tests are readable and clearly outline the exact intent of the test. Your unit tests should also be able to act as documentation for your application logic.

You can always achieve these goals with the usual asserts, but it is difficult to keep them readable and very easy to mess them up to the extent that nobody can understand what the test is trying to do.

To address these problems there are a couple of fluent assertion frameworks which provide fluent APIs for writing asserts. One such framework which I found useful for .NET is Fluent Assertions.

Below is an Account class representing a bank account, which I will use to demonstrate the Fluent Assertions API. Some of the code is intentionally incorrect so that we can make the tests fail.


 public enum TransactionType
    {
        Credit,
        Debit
    }
    public class BankTransaction
    {
        public BankTransaction(TransactionType type, double amount,DateTime transactionDate)
        {
            TypeOfTransaction = type;
            Amount = amount; 
            TransactionDate = transactionDate;
        }
        public TransactionType TypeOfTransaction { get;  }
        public double Amount { get;  }
        public DateTime TransactionDate { get;  }

        public override bool Equals(object obj)
        {
            
            var transaction = obj as BankTransaction;
            if (transaction == null) return false;

            return this.Amount == transaction.Amount &&
                this.TypeOfTransaction == transaction.TypeOfTransaction &&
                this.TransactionDate == transaction.TransactionDate;



        }
        public override int GetHashCode()
        {
           return  RuntimeHelpers.GetHashCode(this);
        }
    }
    public class Account
    {
        private string customerName;
        private double balance;
        private Account()
        {
        }

        public Account(string customerName, double balance)
        {
            this.customerName = customerName;
            this.balance = balance;
            Transactions = new List<BankTransaction>();
        }

        public string CustomerName
        {
            get { return customerName; }
        }

        public double Balance
        {
            get { return balance; }
        }

        public List<BankTransaction> Transactions { get; }

        public void Debit(double amount)
        {
            if (amount > balance)
            {
                throw new ArgumentOutOfRangeException("Invalid Amount");
            }

            if (amount < 0)
            {
                throw new ArgumentOutOfRangeException("Invalid Amount");
            }

            balance += amount; // INCORRECT CODE

            Transactions.Add(new BankTransaction(TransactionType.Debit, amount,DateTime.Now));
        }

        public void Credit(double amount)
        {

            if (amount < 0)
            {
                throw new ArgumentOutOfRangeException("Invalid Amount");
            }

            balance += amount;
            Transactions.Add(new BankTransaction(TransactionType.Debit, amount,DateTime.Now)); // INCORRECT CODE
        }



    }

The first step is to install the NuGet package for the Fluent Assertions library (called FluentAssertions).

[Screenshot: installing the FluentAssertions NuGet package]

To start with, let’s see what a simple assertion and its failure message look like.


[TestMethod]
        public void DebitAmount_ShouldBe_DeductedFrom_Overall_Balance()
        {
            Account acc = new Account("Jack", 1000);
            acc.Debit(50);

            //usual Notation 
            //Assert.AreEqual(950, acc.Balance);

            //Fluent assertions.
            acc.Balance.Should().Be(950); //SHOULD FAIL

        }

 

Below is the result you get when the above test fails.

Usual Assert:

[Screenshot: failure message from the usual Assert]

Fluent Assertion:

[Screenshot: failure message from the fluent assertion]

You see how the assertion statement and failure message are much more descriptive.

Let’s see how you can assert on collections.


 [TestMethod]
        public void AllTransaction_Should_Be_Recorded()
        {
            Account acc1 = new Account("Jack", 1000);
            acc1.Credit(50);
            acc1.Debit(50);

            Account acc2 = new Account("Matt", 1000);
            acc2.Credit(50);
            acc2.Debit(150);

            acc1.Transactions.Should().BeInAscendingOrder(x => x.TransactionDate);
            acc2.Transactions.Should().BeInAscendingOrder(x => x.TransactionDate);
            //Test each member to see it exists in other collection.Uses overridden equals method
            acc1.Transactions.Should().BeEquivalentTo(acc2.Transactions); //SHOULD FAIL

        }

The last assertion fails, and below is the description that comes out.

[Screenshot: failure output for the collection assertion]

This shows how powerful this kind of API can be when debugging failing tests.

Below is one more example of asserting on exception that should be thrown.


  [TestMethod]
        public void Credit_Amount_LessThanZero_ShouldNotBeAllowed_AndThrowsException()
        {
            Account acc1 = new Account("Jack", 1000);
            
            Action creditAction = () => acc1.Credit(-50);

            //Test the type of exception and message (using wildcard)
            creditAction.ShouldThrow<ArgumentOutOfRangeException>().WithMessage("*Invalid Amount*");

        }

Fluent Assertions is very well documented and has a vast API covering practically every scenario you might encounter in your tests. Here is the link to the documentation wiki for the Fluent Assertions library.

This is by no means the only library available. There are other fluent assertion libraries like AssertJ (for Java), NFluent, Shouldly, etc.

In the end, the library you use is your choice, but through this post I wanted to highlight the fluent-assertion approach, which can go a long way in keeping your test code maintainable.

Wednesday, January 20th, 2016

How often we have seen settings defined in config file like this :

 

The problem starts when there are a lot of them and there is no organized or logical way of grouping them in the config file and reading them in the application.

A more organized way of implementing custom app settings is via a custom configSection.

Let’s first have a glimpse of the config file to see what we want to achieve.

[Screenshots: the flat appSettings style vs. the organized mailSetting section we want]

As is apparent, we have now grouped our mail-server-related settings in a more organized way. By creating a strongly typed class for these settings we can read them directly through our custom type. How? Let’s understand the process step by step.

Step 1.) First we need to create a custom SettingsConfiguration class (or any name you like) that implements the IConfigurationSectionHandler interface. This is the class we will register for our custom config section in the config file. For now just add a method called Create as highlighted below; we will discuss the rest of the code in the subsequent steps.

[Code screenshot: the SettingsConfiguration class implementing IConfigurationSectionHandler]
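
A minimal sketch of such a class (the class and namespace names are illustrative; the interface is System.Configuration.IConfigurationSectionHandler):

using System.Configuration;
using System.Xml;

namespace MyApp.Configuration
{
    // Handler for the custom <settings> section; registered in the config file as:
    // <configSections>
    //   <section name="settings" type="MyApp.Configuration.SettingsConfiguration, MyApp" />
    // </configSections>
    public class SettingsConfiguration : IConfigurationSectionHandler
    {
        public object Create(object parent, object configContext, XmlNode section)
        {
            // Filled in later (Step 4): deserialize the mailSetting node.
            return null;
        }
    }
}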

Step 2.) Now we will register this type in our config file. Here is the way:

[Config screenshot: registering the custom section under configSections]

Here the name “settings” should be the same as the parent node that wraps our mailSetting section. In the type attribute, the first argument is our config class name with its full namespace, and the second is the name of the assembly in which this config class is defined.

[Config screenshot: the registered settings section with an empty mailSetting skeleton]

So our basic config skeleton is ready. Let’s fill it with the mailSetting content we want.

Step 3.) As seen earlier, we already have our mailSetting defined in the config file; to read it via XmlSerializer we need to create mapping classes for each node (if the node contains any other node(s)).

So here are our classes.

[Code screenshot: the MailSettingElement mapping class]

Here we can use a different class name, but the name defined in XmlRoot should match what is defined in the config file.

There are different ways of defining the classes and their members based on how the nodes are represented.

A.)   <name>default</name>: in this case we define a property decorated with the XmlElement attribute, where the name in the attribute should match the name defined in the config file.

B.)    <description companyName="Prowareness" />: for mapping this kind of node we need a separate class whose CompanyName property is decorated with the XmlAttribute attribute instead of XmlElement.

[Code screenshot: the Description class with an XmlAttribute-decorated CompanyName property]

C.)    An email collection such as:

<to>
  <email description="stakeholder" value="reciever1@prowareness.com"/>
  <email description="security admin" value="reciever2@prowareness.com"/>
</to>

It gets a little tricky when we want to define a collection like this in our config file.

Here is the solution:

[Code screenshot: mapping the email collection]

In this case List<Email> is our XmlElement, and the Email class is defined with XmlAttribute-decorated properties as described above (see the combined sketch below).
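
Putting the pieces together, the mapping classes might look like this (a sketch based on the nodes described above; the class names are assumptions):

using System.Collections.Generic;
using System.Xml.Serialization;

namespace MyApp.Configuration
{
    [XmlRoot("mailSetting")]
    public class MailSettingElement
    {
        [XmlElement("name")]
        public string Name { get; set; }              // maps <name>default</name>

        [XmlElement("description")]
        public Description Description { get; set; }  // maps <description companyName="..." />

        [XmlElement("to")]
        public To To { get; set; }                    // maps the <to> collection wrapper
    }

    public class Description
    {
        [XmlAttribute("companyName")]
        public string CompanyName { get; set; }
    }

    public class To
    {
        [XmlElement("email")]
        public List<Email> Emails { get; set; }        // one Email per <email> node
    }

    public class Email
    {
        [XmlAttribute("description")]
        public string Description { get; set; }

        [XmlAttribute("value")]
        public string Value { get; set; }
    }
}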

So now all our mapping classes are ready. Let’s see how to read the settings in our application.

Step 4.)  Now let’s go back to the Create method of our SettingsConfiguration class.

[Code screenshot: the Create method reading the settings section]

Here we are reading our custom config section named “settings”, which we registered in Step 2. We get our “mailSetting” node using the SelectSingleNode method and then deserialize it into our MailSettingElement class using XmlSerializer. From then on we can use this class to get the settings from the config file and apply them wherever needed.
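
A possible implementation of Create along those lines, inside the SettingsConfiguration class sketched earlier (requires System.Xml and System.Xml.Serialization; the default value is illustrative):

public object Create(object parent, object configContext, XmlNode section)
{
    // 'section' is the <settings> node registered in configSections.
    XmlNode sectionNode = section.SelectSingleNode("mailSetting");
    if (sectionNode == null)
    {
        // No mailSetting defined: fall back to a sensible default.
        return new MailSettingElement { Name = "default" };
    }

    var serializer = new XmlSerializer(typeof(MailSettingElement));
    using (var reader = new XmlNodeReader(sectionNode))
    {
        return (MailSettingElement)serializer.Deserialize(reader);
    }
}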

So what advantages am I getting out of this?

As we can see, even if sectionNode is null in the above code (meaning you missed defining the settings in the config file for some reason, perhaps because you are not sure of the value in a development or test environment), you can still fall back to some default value in code, as shown above.

By using this approach we can logically organize our settings in different sections which can be reused across applications if this class is developed as an assembly.

And here is the way to read/use these settings:

[Code screenshot: reading the settings via ConfigurationManager.GetSection]
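
Reading the settings then boils down to one GetSection call (names as in the sketches above; requires System and System.Configuration):

// ConfigurationManager.GetSection invokes SettingsConfiguration.Create
// and hands back the deserialized MailSettingElement.
var mailSettings = (MailSettingElement)ConfigurationManager.GetSection("settings");

Console.WriteLine(mailSettings.Name);
foreach (var email in mailSettings.To.Emails)
{
    Console.WriteLine("{0}: {1}", email.Description, email.Value);
}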

That’s all folks for this article. Please let me know your views/comments.

 

Friday, December 11th, 2015

A data-tier application (DAC) is a logical database management entity that defines all SQL Server objects – such as tables, views, and instance objects – associated with a user’s database. It is a self-contained unit of SQL Server database deployment that enables data-tier developers and DBAs to package SQL Server objects into a portable artifact called a DAC package, or in other words, a .dacpac file.

A DACPAC is a Windows file with a .dacpac extension. The file supports an open format consisting of multiple XML sections representing details of the DACPAC origin, the objects in the database, and other characteristics.

The life cycle of database development involves a lot of script exchange among team members, or some ad-hoc arrangement. The advantage of a DAC over scripts is that it helps in identifying and validating behaviors across different source and target databases.

“DAC operations were introduced in SQL Server 2008 R2.”

In this article, we will see an example of deploying a database programmatically from a .dacpac file using the APIs, and the relevant permissions needed to get this to work.

  • Extract: A user can extract a database into a dacpac file.
  • Deploy: A user can deploy a database from an extracted dacpac file.
  • Upgrade: An existing database can be upgraded from an extracted dacpac file.

The user must be a member of the dbmanager role or assigned CREATE DATABASE permissions to create a database, including creating a database by deploying a DAC package. The user must be a member of the dbmanager role, or have been assigned DROP DATABASE permissions to drop a database.

To extract and deploy a database using a dacpac file programmatically, we need to add the Microsoft.SqlServer.Dac assembly reference to the *.csproj file. Use Manage NuGet Packages to add the “Microsoft.SqlServer.Dac” reference.

[Screenshot: adding the Microsoft.SqlServer.Dac NuGet package]

DacServices class is used to extract and deploy a database from a dacpac file.

               

DacServices dacService = new DacServices("Connection String");

During extraction, we can create an instance of DacExtractOptions wherein we can specify which objects to include, exclude, and validate. A sample code snippet is given below:

               

DacExtractOptions dacExtractOptions = new DacExtractOptions
{
	ExtractApplicationScopedObjectsOnly = true,
	ExtractReferencedServerScopedElements = true,
	VerifyExtraction = false,
	Storage = DacSchemaModelStorageType.Memory,
	IgnoreExtendedProperties = true,
	ExtractAllTableData = false,
	IgnoreUserLoginMappings = true,
	IgnorePermissions = false
};

Various options are available on DacExtractOptions. For instance, the content of the extracted dacpac file can be verified by setting the VerifyExtraction property to true; extraction will then fail if an issue is found. Similarly, you can include or exclude permissions, user login mappings, table data (ExtractAllTableData), etc. during the extraction process if you so desire. There are a lot of other options to exclude database objects as well.

While deploying the database using the extracted dacpac file, you can use an instance of DacDeployOptions to exclude database objects if you wish to. Sample code snippet is given below:

               

DacDeployOptions dacDeployOptions = new DacDeployOptions
{
	CreateNewDatabase = true,
	ExcludeObjectTypes = new[]
	{
		ObjectType.Users, ObjectType.Permissions, ObjectType.StoredProcedures,
		ObjectType.RoleMembership, ObjectType.Aggregates, ObjectType.ExtendedProperties
	},
	BlockOnPossibleDataLoss = true,
	IgnoreFileAndLogFilePath = true
};

As you can see from the above example, all the object types that are included in the ExcludeObjectTypes array will be excluded during the deployment process.

You can also hook up to events like ProgressChanged and Message on the DacServices instance to get progress updates and log messages for all the operations that occur during the extract and deploy processes.
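
For example, a hedged sketch of wiring those events (the event-args property names may differ slightly across Dac framework versions):

DacServices dacService = new DacServices("Connection String");

// Progress notifications raised while Extract/Deploy run.
dacService.ProgressChanged += (sender, e) =>
    Console.WriteLine("[{0}] {1}", e.Status, e.Message);

// Informational, warning and error messages raised by the operation.
dacService.Message += (sender, e) =>
    Console.WriteLine(e.Message.ToString());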

Here is the sample code for extracting and deploying the database from a dacpac file:

               


using System;
using System.Configuration;
using System.IO;
using Microsoft.SqlServer.Dac;

namespace Test
{
	class Program
	{
		static void Main(string[] args)
		{
			Extract();
			Deploy();
		}

		#region Private Declaraion

		private const string sourceDatabaseName = "SourceDatabaseName";

		private const string targetDatabaseName = "TargetDatabaseName";

		#endregion

		#region Private Properties

		private static string FilePath
		{
			get
			{
				return Path.Combine(Environment.CurrentDirectory, "TestDatabase.dacpac");
			}
		}

		#endregion

		#region Public Methods

		public static void ExtractAndDeployDataTierApplicationPack()
		{
			Extract();
			Deploy();
		}

		#endregion

		#region Private Methods

		private static void Extract()
		{
			string sourceConnectionString = ConfigurationManager.ConnectionStrings["SourceDatabaseConnection"].ConnectionString;

			DacServices dacService = new DacServices(sourceConnectionString);

			DacExtractOptions dacExtractOptions = new DacExtractOptions
			{
				ExtractApplicationScopedObjectsOnly = true,
				ExtractReferencedServerScopedElements = true,
				VerifyExtraction = false,
				Storage = DacSchemaModelStorageType.Memory,
				IgnoreExtendedProperties = true,
				ExtractAllTableData = false,
				IgnoreUserLoginMappings = true,
				IgnorePermissions = false
			};

			dacService.Extract(FilePath, sourceDatabaseName, null, new Version(1, 0, 0), null, null, dacExtractOptions);
		}

		private static void Deploy()
		{
			string targetConnectionString = ConfigurationManager.ConnectionStrings["TargetDatabaseConnection"].ConnectionString;
			DacServices dacServices = new DacServices(targetConnectionString);
			DacDeployOptions dacDeployOptions = new DacDeployOptions
			{
				CreateNewDatabase = true,
				ExcludeObjectTypes = new[]
				{
					ObjectType.Users, ObjectType.Permissions, ObjectType.StoredProcedures,
					ObjectType.RoleMembership, ObjectType.Aggregates, ObjectType.ExtendedProperties
				},
				BlockOnPossibleDataLoss = true,
				IgnoreFileAndLogFilePath = true
			};

			using (DacPackage dacpac = DacPackage.Load(FilePath))
			{
				dacServices.Deploy(package: dacpac, targetDatabaseName: targetDatabaseName, upgradeExisting: true, options: dacDeployOptions);
			}
		}

		#endregion
	}
}
Category: .Net | Tags: ,  | Leave a Comment
Sunday, July 05th, 2015

In almost all .NET applications you encounter a lot of boilerplate code which, although it has a pretty standard implementation, has to be written repeatedly across the application. Common examples are overriding the ToString and Equals methods on your model classes, or implementing IDisposable or INotifyPropertyChanged. In all these cases most of the code is pretty standard.

One way of automating this code is to inject it automatically into your generated MSIL (Microsoft intermediate language). This process of injecting code after compilation, directly into the generated intermediate language, is known as .NET assembly weaving or IL weaving (similar to bytecode weaving in Java).

If you do this before compilation, i.e. add source code lines before compilation, it is known as source code weaving.

What is Fody and how does it work?

Fody is an extensible library for weaving .NET assemblies, written by Simon Cropp. It adds a post-build task to the MSBuild pipeline to manipulate the generated IL. Doing this usually requires a lot of plumbing code, which is exactly what Fody provides. Fody offers an extensible add-in model where anybody can use the core Fody package (providing the basic code for adding the post-build task and manipulating IL) and create their own specific add-ins. E.g. Equals.Fody generates Equals and GetHashCode implementations, and ToString.Fody generates a ToString implementation for your classes.

Below figure shows how the whole process works.

[Diagram: how Fody plugs a post-build task into the MSBuild pipeline to weave the generated IL]

You can also use the same technique for implementing AOP (e.g. this simple implementation by Simon Cropp), logging, method profiling, etc. Here is a comprehensive list of already available Fody add-ins.

Fody uses Mono.Cecil, which is a library for manipulating intermediate language and in usage feels quite similar to the .NET reflection APIs. As far as IL manipulation goes, Mono.Cecil seems like the only game in town; chances are high that if you have used a plug-in for IL manipulation, it uses Mono.Cecil internally.

Let’s try one of the Fody add-ins…

For the purpose of this blog I will demonstrate one very useful Fody add-in called NullGuard. As the name suggests, this add-in automatically adds code that raises exceptions when a null value is encountered in a property or a method parameter. Below are the steps for adding the plug-in and using it.

  • Create a sample project and add the NullGuard.Fody NuGet package.

[Screenshot: adding the NullGuard.Fody NuGet package]

  • By default, all class properties, method parameters and return types now have null checks injected, i.e. if any of these has a null value, an exception is thrown. You can still allow nulls wherever required by using the [AllowNull] attribute as shown below.

class Program
    {
        static void Main(string[] args)
        {

            NullGaurdTestClass test = new NullGaurdTestClass();
            test.TestMethod(null, null);
        }
    }
class NullGaurdTestClass
    {
        [AllowNull]
        public string PropertyThatAllowsNull { get; set; }
        public string PropertyThatDoesNotAllowsNull { get; set; }

        public void TestMethod([AllowNull]string allowsNull, string doesNotAllowNull)
        {
            
        }
        public string TestMethodDoesNotAllowNullReturnValue()
        {
            return null;
        }
    }

  • Running this code gives the following result:

[Screenshot: the exception thrown when a null value is passed to the guarded parameter]

  • You can control the behavior of the add-in by applying an assembly-level attribute as shown below. Basically, ValidationFlags.None stops adding null checks and ValidationFlags.All adds the behavior for everything.

[Screenshot: the assembly-level NullGuard ValidationFlags attribute]

Let’s decompile the generated assembly using ILSpy to see what code the add-in has injected. Below is what you get. And that is the magic of IL weaving.


namespace NullGaurdFodySample
{
	internal class NullGaurdTestClass
	{
		public string PropertyThatAllowsNull
		{
			get;
			set;
		}
		public string PropertyThatDoesNotAllowsNull
		{
			[CompilerGenerated]
			get
			{
				string text = this.<PropertyThatDoesNotAllowsNull>k__BackingField;
				string expr_0A = text;
				Debug.Assert(expr_0A != null, "[NullGuard] Return value of property 'System.String NullGaurdFodySample.NullGaurdTestClass::PropertyThatDoesNotAllowsNull()' is null.");
				if (expr_0A == null)
				{
					throw new InvalidOperationException("[NullGuard] Return value of property 'System.String NullGaurdFodySample.NullGaurdTestClass::PropertyThatDoesNotAllowsNull()' is null.");
				}
				return expr_0A;
			}
			[CompilerGenerated]
			set
			{
				Debug.Assert(value != null, "[NullGuard] Cannot set the value of property 'System.String NullGaurdFodySample.NullGaurdTestClass::PropertyThatDoesNotAllowsNull()' to null.");
				if (value == null)
				{
					throw new ArgumentNullException("value", "[NullGuard] Cannot set the value of property 'System.String NullGaurdFodySample.NullGaurdTestClass::PropertyThatDoesNotAllowsNull()' to null.");
				}
				this.<PropertyThatDoesNotAllowsNull>k__BackingField = value;
			}
		}
		public void TestMethod(string allowsNull, string doesNotAllowNull)
		{
			Debug.Assert(doesNotAllowNull != null, "[NullGuard] doesNotAllowNull is null.");
			if (doesNotAllowNull == null)
			{
				throw new ArgumentNullException("doesNotAllowNull", "[NullGuard] doesNotAllowNull is null.");
			}
		}
		public string TestMethodDoesNotAllowNullReturnValue()
		{
			string text = null;
			string expr_06 = text;
			Debug.Assert(expr_06 != null, "[NullGuard] Return value of method 'System.String NullGaurdFodySample.NullGaurdTestClass::TestMethodDoesNotAllowNullReturnValue()' is null.");
			if (expr_06 == null)
			{
				throw new InvalidOperationException("[NullGuard] Return value of method 'System.String NullGaurdFodySample.NullGaurdTestClass::TestMethodDoesNotAllowNullReturnValue()' is null.");
			}
			return expr_06;
		}
	}
}

Getting started with your own Fody add-in

Creating your own Fody add-in is not as easy as using one, but it is not as difficult as some of you may think. If you have ever used reflection in .NET, you already know a little bit about how it feels. If you are interested, below are a few resources to get you started.

Category: .Net | Tags: ,  | Leave a Comment
Wednesday, July 01st, 2015

It may come as a surprise to many of us that table-valued functions are not actually supported in Entity Framework Code First. However, there is a workaround that lets us embrace their power. Let’s get started!

Here is my domain model
[Diagram: Student–School domain model]

As you can see in the above diagram, I have an Address entity, which is a complex type, as defined below.

public class StudentContext : DbContext
    {
        protected override void OnModelCreating(DbModelBuilder modelBuilder)
        {
            modelBuilder.RegisterEntityType(typeof(Student));
            modelBuilder.RegisterEntityType(typeof(School));

            //Complex Type Configuration
            modelBuilder.ComplexType<Address>()
                        .Property(x => x.AddressLine1)
                        .HasColumnName("AddressLine1");
            modelBuilder.ComplexType<Address>()
                        .Property(x => x.AddressLine2)
                        .HasColumnName("AddressLine2");
            modelBuilder.ComplexType<Address>()
                        .Property(x => x.AddressLine3)
                        .HasColumnName("AddressLine3");
            modelBuilder.ComplexType<Address>()
                        .Property(x => x.ZipCode)
                        .HasColumnName("ZipCode");

            base.OnModelCreating(modelBuilder);
        }
    }

    public class Student
    {
        public int Id { get; set; }
        public string Name { get; set; }
        public DateTime DateOfBirth { get; set; }
        public Address Address { get; set; }
        public School School { get; set; }
    }

    public class School
    {
        public int SchoolId { get; set; }
        public string Name { get; set; }
        public Address Address { get; set; }
    }

    public class Address
    {
        public string AddressLine1 { get; set; }
        public string AddressLine2 { get; set; }
        public string AddressLine3 { get; set; }
        public string ZipCode { get; set; }
    }

Since a table-valued function returns a table, which is analogous to a class in the OOP world, we define a table-valued function below that returns each student’s name along with the school the student studies in. So let’s first define the table definition on the .NET side.

[AttributeUsage(AttributeTargets.Class, AllowMultiple = false, Inherited = false)]
    public class TableValuedFunctionAttribute : Attribute
    {
        private readonly string _tvfDefinition;
        public TableValuedFunctionAttribute(string tvfDefinition)
        {
            _tvfDefinition = tvfDefinition;
        }

        public string Definition
        {
            get { return this._tvfDefinition; }
        }
    }

    [TableValuedFunction("[dbo].[GetStudentsAndTheirSchool]")]
    public class StudentSchoolDetail
    {
        public int Id { get; set; }
        public string Name { get; set; }
        public string School { get; set; }
    }

Now that we have defined the TVF from the .NET side, let’s actually create the function that returns the table, as below.

CREATE FUNCTION [dbo].[GetStudentsAndTheirSchool] ()
RETURNS @ReturnTable TABLE (
	Id INT
	,NAME NVARCHAR(50)
	,School NVARCHAR(50)
	)
AS
BEGIN
	INSERT @ReturnTable
	SELECT s.Id
		,s.NAME
		,s2.NAME
	FROM dbo.Students s
	INNER JOIN dbo.Schools s2 ON s2.SchoolId = s.School_SchoolId

	RETURN
END

I have a little extension method to enforce the basic rules and get the materialized values back, as below.

public static class DbExtensions
    {
        private const string TvfPlaceHolder = "SELECT * FROM {0} ()";

        public static IEnumerable<T> ExecuteTableValuedFunction<T>(this DbContext context) where T:class 
        {
            var attributes = typeof (T).GetCustomAttributes(typeof (TableValuedFunctionAttribute), false);
            if(attributes.Length == 0) throw new InvalidOperationException(string.Format("Cannot operate TVF on type:{0}",typeof(T)));
            var attribute = attributes[0] as TableValuedFunctionAttribute;
            return context.Database.SqlQuery<T>(string.Format(TvfPlaceHolder, attribute.Definition));
        }
    }

That is it! We are done. Let’s consume it:

    class Program
    {
        static void Main(string[] args)
        {
            using (var ctx = new StudentContext())
            {
                ctx.Database.CreateIfNotExists();
                var allStudents = ctx.ExecuteTableValuedFunction<StudentSchoolDetail>().ToList();

                for (int i = 0; i < allStudents.Count; i++)
                {
                    Console.WriteLine("Student Name:{0} School:{1}",allStudents[i].Name, allStudents[i].School);
                }
            }

            Console.ReadLine();
        }
    }
Category: .Net | Tags:  | Leave a Comment