Archive for the Category ◊ Databases ◊

Sunday, March 30th, 2014

One of the cool features of SQL and LINQ that makes a developer's life simpler:

  •   Row number: Everyone has encountered the situation of having to filter records based on one particular column whose values are not distinct.

In such cases, ROW_NUMBER in SQL comes in handy to filter out the data of our choice.

Syntax : ROW_NUMBER ( )
OVER ( [ PARTITION BY value_expression , ... [ n ] ] order_by_clause )

For example, let's say my dataset is as follows:
Name           Category    MutationDate   ReviewId
Communication  CoreValue   1/1/2012       111
Communication  CoreValue   2/1/2012       111
Communication  CoreValue   3/1/2012       111
Soft Skills    CoreValue   2/1/2012       111

If I need the latest record per Name, i.e. the row with the greatest MutationDate (here <= 3/1/2012), the desired result is:
Name           Category    MutationDate   ReviewId
Communication  CoreValue   3/1/2012       111
Soft Skills    CoreValue   2/1/2012       111

I can achieve this in SQL by simply using

select Row_Number() over (Partition by Name order by MutationDate desc) as rownumber, * from tbl

which results in
rownumber  Name           Category   MutationDate  ReviewId
1          Communication  CoreValue  3/1/2012      111
2          Communication  CoreValue  2/1/2012      111
3          Communication  CoreValue  1/1/2012      111
1          Soft Skills    CoreValue  2/1/2012      111

(Note the partition is by Name, ordered by MutationDate descending, so the latest row in each partition gets row number 1.)

The next step is to filter on the row number (the derived table needs an alias):

select * from (select Row_Number() over (Partition by Name order by MutationDate desc) as rownumber, * from tbl) t
where rownumber = 1

which gives the desired result:

rownumber  Name           Category   MutationDate  ReviewId
1          Communication  CoreValue  3/1/2012      111
1          Soft Skills    CoreValue  2/1/2012      111

These queries are simple when working directly in SQL against the database.
The same can be achieved when using Entity Framework, for which we need LINQ instead of SQL.

Syntax in LINQ

var result = tbl
    .GroupBy(x => x.Name)
    .SelectMany(g => g.OrderByDescending(x => x.MutationDate)
        .Zip(Enumerable.Range(1, g.Count()),
             (j, i) => new { j.Name, j.Category, j.MutationDate, j.ReviewId, rn = i }));

Based on that result, we can filter on the row number:
result.Where(x => x.rn == 1);
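The keep-the-latest-row-per-partition logic can be sanity-checked outside the database. Below is a small plain-JavaScript sketch of the same idea (not EF or SQL code; the field names and the sortable ISO date format are my own):

```javascript
// Sketch of ROW_NUMBER() OVER (PARTITION BY Name ORDER BY MutationDate DESC),
// keeping only rownumber = 1 in each partition
const rows = [
  { name: "Communication", category: "CoreValue", mutationDate: "2012-01-01", reviewId: 111 },
  { name: "Communication", category: "CoreValue", mutationDate: "2012-02-01", reviewId: 111 },
  { name: "Communication", category: "CoreValue", mutationDate: "2012-03-01", reviewId: 111 },
  { name: "Soft Skills",   category: "CoreValue", mutationDate: "2012-02-01", reviewId: 111 },
];

function latestPerName(rows) {
  // PARTITION BY Name
  const groups = new Map();
  for (const r of rows) {
    if (!groups.has(r.name)) groups.set(r.name, []);
    groups.get(r.name).push(r);
  }
  const result = [];
  for (const grp of groups.values()) {
    // ORDER BY MutationDate DESC within each partition...
    grp.sort((a, b) => b.mutationDate.localeCompare(a.mutationDate));
    // ...then keep only the row with rownumber = 1
    result.push(grp[0]);
  }
  return result;
}
```

Running `latestPerName(rows)` yields one row per Name, each carrying the greatest MutationDate, mirroring the SQL result above.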

Some more cool features in my next blog….



Category: Databases
Thursday, February 27th, 2014

We all know ReportViewer Web Server controls display reports that you create locally in Visual Studio as well as remote reports hosted in SQL Server 2008 Reporting Services.


[Figure: ReportViewer flow diagram]


For example, we have a report that takes 120 seconds to fetch the data and render it. It works fine when we view the report directly in SQL Server Business Intelligence Development Studio.



When we browse the same report from an ASP.NET web application, it starts rendering by displaying the Loading… icon on the screen. But after some time it stops without showing any information. The problem is visible in the web browser's developer tools, on the console screen: “The server request timed out”.



The ReportViewer web server control is an ASP.NET AJAX control used to host reports in ASP.NET projects. You can refer to ReportViewer Controls (Visual Studio) to read more about this control and how to use it in your projects. We use the ReportViewer web server control along with the AJAX ScriptManager, which is required to host it. The error above in the browser is caused by the ScriptManager's AsyncPostBackTimeout, which is 90 seconds by default.


To fix it, change the AsyncPostBackTimeout property to 0 or another appropriate value; 0 means infinite.

<asp:ScriptManager ID="ScriptManager1" runat="server" AsyncPostBackTimeout="0"></asp:ScriptManager>


[Note: This is one way to avoid the timeout, but it does not always hold good. We should also tune the report queries so the data comes back quickly.]

Thursday, February 06th, 2014

Problem statement:

You are doing a lot of data fixes on a corrupted database. When you execute your ‘salvage script’ to restore the lost or corrupted data, triggers are among the events that fire during DML statement execution. Sometimes you may find the trigger execution disturbing and want to disable it. You never know, the code snippets below might come in handy to enable and disable all triggers in a database.

Solution: [Ctrl+C, Ctrl+V style]

Copy, paste, and run the below.

Enable all triggers in a Microsoft SQL Server database

DECLARE @EnableTriggerCommand NVARCHAR(4000)
DECLARE @TriggerTableName NVARCHAR(500)

DECLARE EnableTriggerCursor CURSOR FOR
		SELECT [name]
		FROM sysobjects
		WHERE id IN (
				SELECT parent_obj
				FROM sysobjects
				WHERE xtype = 'TR'
				)

OPEN EnableTriggerCursor

FETCH NEXT FROM EnableTriggerCursor
INTO @TriggerTableName

WHILE @@fetch_status = 0
BEGIN
	SET @EnableTriggerCommand = N'ALTER TABLE ' + @TriggerTableName + ' ENABLE TRIGGER ALL'

	PRINT 'Executing:   ' + @EnableTriggerCommand + CHAR(13)

	EXECUTE sp_executesql @EnableTriggerCommand

	FETCH NEXT FROM EnableTriggerCursor
	INTO @TriggerTableName
END

CLOSE EnableTriggerCursor

DEALLOCATE EnableTriggerCursor


Disable all triggers in a Microsoft SQL Server database 

DECLARE @DisableTriggerCommand NVARCHAR(4000)
DECLARE @TriggerTableName NVARCHAR(500)

DECLARE DisableTriggerCursor CURSOR FOR
		SELECT [name]
		FROM sysobjects
		WHERE id IN (
				SELECT parent_obj
				FROM sysobjects
				WHERE xtype = 'TR'
				)

OPEN DisableTriggerCursor

FETCH NEXT FROM DisableTriggerCursor
INTO @TriggerTableName

WHILE @@fetch_status = 0
BEGIN
	SET @DisableTriggerCommand = N'ALTER TABLE ' + @TriggerTableName + ' DISABLE TRIGGER ALL'

	PRINT 'Executing:   ' + @DisableTriggerCommand + CHAR(13)

	EXECUTE sp_executesql @DisableTriggerCommand

	FETCH NEXT FROM DisableTriggerCursor
	INTO @TriggerTableName
END

CLOSE DisableTriggerCursor

DEALLOCATE DisableTriggerCursor



Wednesday, December 18th, 2013

As we all know, a view is just a virtual table consisting of only the rows and columns specified in the CREATE VIEW statement. Even though a view doesn’t store any data by itself, it’s still possible to use a view in an INSERT, UPDATE, or DELETE statement.

The requirements for creating an updatable view are as follows:
1. The SELECT statement can’t include a DISTINCT clause.
2. The SELECT statement can’t include aggregate functions.
3. The SELECT statement can’t include a GROUP BY or HAVING clause.
4. The view can’t include the UNION or INTERSECT operator.

Below is an example of an updatable view:

CREATE OR REPLACE VIEW balance_due_view AS
SELECT vendor_name, invoice_number,
       invoice_total, payment_total, credit_total,
       invoice_total - payment_total - credit_total AS balance_due
FROM vendors JOIN invoices ON vendors.vendor_id = invoices.vendor_id
WHERE invoice_total - payment_total - credit_total > 0;

An UPDATE statement that uses the view to update the data:

UPDATE balance_due_view
SET invoice_total = 0
WHERE invoice_number = 100;

If we want to make the above view read only, we can use the WITH READ ONLY clause, as shown below.

CREATE OR REPLACE VIEW balance_due_view AS
SELECT vendor_name, invoice_number,
       invoice_total, payment_total, credit_total,
       invoice_total - payment_total - credit_total AS balance_due
FROM vendors JOIN invoices ON vendors.vendor_id = invoices.vendor_id
WHERE invoice_total - payment_total - credit_total > 0
WITH READ ONLY;

Similarly, we can perform DELETE and INSERT operations through the view.

Monday, December 02nd, 2013

We all want our applications to be fast and responsive, and several factors affect their performance. This article explains how to tune a MongoDB application using indexes.

Let’s create a collection called users and insert 10 million records into it. I am using JavaScript in the mongo shell to insert the data, as follows.

for (i = 0; i < 10000000; i++) {
    db.users.insert({
        "i" : i,
        "username" : "user" + i,
        "age" : Math.floor(Math.random() * 120),
        "created" : new Date()
    });
}

Now, execute a simple find query and see how much time it takes to retrieve one record from the collection. To examine this, append explain() to the end of the query. For the time being, we can treat explain() as a diagnostic tool that provides information about how a query was executed. Two fields of its output are of particular interest:


Here, ‘nscannedObjects’ indicates the number of documents scanned, and ‘millis’ indicates how long the query ran (i.e. the time taken to get a response from the server).

Now, create an index on the username field:

>db.users.ensureIndex({“username” : 1})

Let us run the same query that we executed before creating the index.


The number of documents scanned is now 1, and the time taken is 21 milliseconds, which is about 174 times faster than before.
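Why does the index help so much? An index is an ordered structure, so the server can home in on a key instead of scanning every document. The effect is easy to demonstrate in plain JavaScript (an illustration of ordered-vs-unordered lookup, not of MongoDB's actual B-tree internals):

```javascript
// Full scan: probe every element until the target is found
function linearFind(arr, target) {
  let comparisons = 0;
  for (let i = 0; i < arr.length; i++) {
    comparisons++;
    if (arr[i] === target) return { index: i, comparisons };
  }
  return { index: -1, comparisons };
}

// Ordered lookup: halve the search range on every probe (like an index)
function binaryFind(arr, target) {
  let lo = 0, hi = arr.length - 1, comparisons = 0;
  while (lo <= hi) {
    comparisons++;
    const mid = (lo + hi) >> 1;
    if (arr[mid] === target) return { index: mid, comparisons };
    if (arr[mid] < target) lo = mid + 1; else hi = mid - 1;
  }
  return { index: -1, comparisons };
}

// A million sorted keys, standing in for an indexed field
const keys = Array.from({ length: 1000000 }, (_, i) => i);
```

Looking up the worst-case key 999999 costs a million probes with the scan but only about 20 with the ordered lookup, which is the same shape of win the index gives the query above.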

To know more about how indexing works, check the MongoDB documentation on indexes.

Have fun!
-Rakesh SS

Thursday, July 18th, 2013

Schema Design in mongoDB:

In a traditional database we normally link two entities through joins, but in MongoDB we usually don't use joins; rather, we embed one entity into another. Let me explain through an example:

Let us consider an employee document, which has fields like employee id, name, date of birth, joining date, etc. Suppose we need a report that indicates the skill set rating of an employee; we can design the employee schema as shown in the diagram below.

[Figure: Schema design]

Employee document(collection):

In the above example, skill_set_rating is a sub-document that we embed into the employee document. We could also achieve the same with separate collections like employee, skill_set, etc., which is similar to the RDBMS approach. We would follow the second approach only if the document size exceeds the 16 MB limit; otherwise we always follow the first approach, where reads are much faster.
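As a sketch, such an embedded document might look like the following (plain JavaScript object notation; all field names and values here are illustrative, not taken from a real schema):

```javascript
// Hypothetical employee document with skill_set_rating embedded as a sub-document array
const employee = {
  employee_id: 1001,
  name: "Jane Doe",
  date_of_birth: "1985-06-01",
  joining_date: "2010-04-12",
  skill_set_rating: [
    { skill: "SQL",     rating: 4 },
    { skill: "MongoDB", rating: 3 },
  ],
};

// The skill-rating report needs no join: just read the embedded array
const report = employee.skill_set_rating.map(s => s.skill + ": " + s.rating);
```

With the separate-collection (RDBMS-style) design, producing the same report would require a second query per employee to fetch the ratings.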

Finally, schema design in MongoDB depends on how we need to present the data in our UI, reports, etc. Based on the requirement we can use key-value pairs, arrays, and sub-documents accordingly.

Happy mongoing :)

Thursday, June 20th, 2013

Cursor is a control structure that enables traversal over the records in a database. Cursors facilitate subsequent processing in conjunction with the traversal, such as retrieval, addition and removal of database records. – Wikipedia

When we execute the find() method (which is similar to a SELECT statement in SQL), a cursor is constructed and returned to the mongo shell. This cursor can be assigned to a variable, which lets us access and manipulate it.


cursor.hasNext() returns true if a next document is present in the cursor, and cursor.next() returns the next document the cursor points to. Since the mongo shell is an interactive JavaScript interpreter, we can iterate the cursor just as we do in most programming languages. We can also sort the documents in ascending or descending order using the cursor's sort() method.
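The hasNext()/next() contract can be mimicked in a few lines of plain JavaScript (illustrative only; this is not how the real mongo cursor is implemented):

```javascript
// Minimal mimic of the shell cursor's hasNext()/next() behaviour
function makeCursor(docs) {
  let pos = 0;
  return {
    hasNext: () => pos < docs.length,   // is there another document?
    next:    () => docs[pos++],         // return it and advance
  };
}

const cursor = makeCursor([{ name: "a" }, { name: "b" }]);
const seen = [];
while (cursor.hasNext()) {
  seen.push(cursor.next().name);
}
```

The while loop above is exactly the iteration pattern used in the mongo shell: keep calling next() for as long as hasNext() reports more documents.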



We can use the limit() method on a cursor to specify the maximum number of documents the cursor should return.


At this point we know how cursors and their methods work. Let's combine these concepts and write queries we can apply to real-world scenarios.

Scenario 1: Get the top three students of class 5 by score.
Solution: db.student.find({ class: 5 }).sort({ score: -1 }).limit(3)

Scenario 2: Get the three students after the topper (i.e. skip the top scorer).
Solution: db.student.find({ class: 5 }).sort({ score: -1 }).limit(3).skip(1)

Scenario 3: We can use the find, limit, and skip methods to fetch a result set for implementing pagination.
function printStudents(pageNumber, nPerPage) {
    print("Page: " + pageNumber);
    db.students.find()
      .skip((pageNumber - 1) * nPerPage)
      .limit(nPerPage)
      .forEach(function(student) { print(student.name + "<p>"); }); // field name illustrative
}
We have to be a little cautious when using the skip method, since it is expensive and I/O bound: the server must walk from the beginning of the result set to the skip position before returning any results.
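One common way to avoid skip() altogether is range-based paging: remember the last key of the previous page and resume from there using an indexed field. Sketched here with a plain array standing in for the collection (the field name `id` is illustrative):

```javascript
// Range-based paging sketch: instead of skipping N documents,
// resume after the last key seen on the previous page
const docs = Array.from({ length: 10 }, (_, i) => ({ id: i }));

function pageAfter(docs, lastId, pageSize) {
  // with an index on id, the server can seek straight to id > lastId
  // rather than walking past all the skipped documents
  return docs.filter(d => d.id > lastId).slice(0, pageSize);
}
```

In mongo terms this corresponds to a query of the shape `find({ id: { $gt: lastId } }).limit(pageSize)`, whose cost stays flat no matter how deep into the result set you page.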

Happy Mongoing :)

Tuesday, June 18th, 2013

You : “Deadly Injection ? Do we have such one ?”
“Yeah..Here I am.. And at your Service” says “SQL Injection”

So What is… SQL Injection?
SQL injection is a technique for maliciously exploiting applications that use client-supplied data in SQL statements, thereby gaining unauthorized access to a database in order to view or manipulate restricted data.

SQL injection techniques differ, but they all exploit a single vulnerability: string input is not correctly validated and is concatenated into a dynamic SQL statement.

SQL injection attacks are classified as below:

1. Statement Modification
2. Statement Injection
3. Data Type Conversion

What can you do in Statement Modification?
Statement modification means deliberately altering a dynamic SQL statement so that it executes in a way unintended by the application developer. Typically, the user retrieves unauthorized data by changing the WHERE clause of a SELECT statement or by inserting a UNION ALL clause. The classic example of this technique is bypassing password authentication by making a WHERE clause always TRUE.
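The always-TRUE WHERE clause is easy to reproduce. Here is a small JavaScript sketch of the vulnerable string concatenation (table and column names are made up):

```javascript
// Vulnerable pattern: client-supplied input concatenated straight into SQL text
function buildQueryUnsafe(username) {
  return "SELECT * FROM users WHERE name = '" + username + "'";
}

// An attacker supplies a value that closes the string and adds OR '1'='1'
const malicious = "x' OR '1'='1";
const query = buildQueryUnsafe(malicious);
// query is now: SELECT * FROM users WHERE name = 'x' OR '1'='1'
// The WHERE clause is always TRUE, so every row is returned.
```

A bound parameter would instead treat the entire input as a single string value, which is exactly the defence described under "Using Bind Arguments" below.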

What can you do in Statement Injection?
Statement injection means that a user appends one or more new SQL statements to a dynamic SQL statement. Anonymous PL/SQL blocks are vulnerable to this technique.

What can you do with Data Type Conversion?

A lesser-known SQL injection technique uses NLS session parameters to modify or inject SQL statements.

A datetime or numeric value that is concatenated into the text of a dynamic SQL statement must be converted to the VARCHAR2 data type. The conversion can be either implicit (when the value is an operand of the concatenation operator) or explicit (when the value is the argument of the TO_CHAR function). This data type conversion depends on the NLS settings of the database session that executes the dynamic SQL statement. The conversion of datetime values uses format models specified in the parameters NLS_DATE_FORMAT, NLS_TIMESTAMP_FORMAT, or NLS_TIMESTAMP_TZ_FORMAT, depending on the particular datetime data type. The conversion of numeric values applies decimal and group separators specified in the parameter NLS_NUMERIC_CHARACTERS.

One datetime format model is “text”. The text is copied into the conversion result. For example, if the value of NLS_DATE_FORMAT is ‘”Month:” Month’, then in June, TO_CHAR(SYSDATE) returns ‘Month: June’

Enough of Injecting ! Let’s see the cure

Guarding Against SQL Injection

If you use dynamic SQL in your PL/SQL applications, you must check the input text to ensure that it is exactly what you expected. You can use the following techniques:

Using Bind Arguments to Guard Against SQL Injection

The most effective way to make your PL/SQL code invulnerable to SQL injection attacks is to use bind arguments. The database uses the values of bind arguments exclusively and does not interpret their contents in any way. (Bind arguments also improve performance.)

Using Validation Checks to Guard Against SQL Injection
Always have your program validate user input to ensure that it is what is intended. For example, if the user is passing a department number for a DELETE statement, check the validity of this department number by selecting from the departments table. Similarly, if a user enters the name of a table to be deleted, check that this table exists by selecting from the static data dictionary view ALL_TABLES.
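The same whitelist idea can be sketched in a few lines of JavaScript (the column names are made up; in PL/SQL you would select from a data dictionary view such as ALL_TABLES or USER_TAB_COLS instead):

```javascript
// Whitelist validation sketch: only known identifiers may enter dynamic SQL
const allowedColumns = new Set(["department_id", "employee_id", "manager_id"]);

function checkColumn(name) {
  // reject anything that is not on the approved list
  if (!allowedColumns.has(String(name).toLowerCase())) {
    throw new Error("Invalid column: " + name);
  }
  return name;
}
```

Because the check compares against a closed list rather than trying to spot "bad" characters, an injected payload simply fails to match and is rejected outright.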

DBMS_ASSERT – Sanitize User Input to Help Prevent SQL Injection

The DBMS_ASSERT package contains a number of functions that can be used to sanitize user input and help guard against SQL injection in applications that don't use bind variables.


CREATE OR REPLACE PROCEDURE raise_emp_salary (
  column_value NUMBER,
  emp_column   VARCHAR2,
  amount       NUMBER )
IS
  v_column VARCHAR2(30);
  sql_stmt VARCHAR2(200);
BEGIN
  -- Check validity of column name that was given as input:
  SELECT column_name INTO v_column
  FROM USER_TAB_COLS
  WHERE TABLE_NAME = 'EMPLOYEES'
  AND COLUMN_NAME = emp_column;

  sql_stmt := 'UPDATE employees SET salary = salary + :1 WHERE '
    || DBMS_ASSERT.ENQUOTE_NAME(v_column, FALSE) || ' = :2';
  EXECUTE IMMEDIATE sql_stmt USING amount, column_value;

  -- If column name is valid:
  DBMS_OUTPUT.PUT_LINE('Salaries were updated for: '
    || emp_column || ' = ' || column_value);
EXCEPTION
  -- If column name is not valid:
  WHEN NO_DATA_FOUND THEN
    DBMS_OUTPUT.PUT_LINE('Invalid Column: ' || emp_column);
END raise_emp_salary;
/

DECLARE
  plsql_block VARCHAR2(500);
BEGIN
  -- Invoke raise_emp_salary from a dynamic PL/SQL block:
  plsql_block := 'BEGIN raise_emp_salary(:cvalue, :cname, :amt); END;';
  EXECUTE IMMEDIATE plsql_block
    USING 110, 'DEPARTMENT_ID', 10;  -- bind values are illustrative

  -- Invoke raise_emp_salary from a dynamic SQL statement:
  EXECUTE IMMEDIATE 'BEGIN raise_emp_salary(:cvalue, :cname, :amt); END;'
    USING 110, 'DEPARTMENT_ID', 10;  -- bind values are illustrative
END;
/

Using Explicit Format Models to Guard Against SQL Injection

If you use datetime and numeric values that are concatenated into the text of a SQL or PL/SQL statement, and you cannot pass them as bind variables, convert them to text using explicit format models that are independent of the values of the NLS parameters of the executing session. Ensure that the converted values have the format of SQL datetime or numeric literals. Using explicit locale-independent format models to construct SQL is recommended not only from a security perspective, but also to ensure that the dynamic SQL statement runs correctly in any globalization environment.


Monday, April 22nd, 2013

NULL Polymorphism

We might be using several features of a programming language in our daily routine of work.

But how many of us care to appreciate the concepts behind those features?

The following example is only to appreciate the concept of polymorphism and does not have a major impact on real-world usage.

In PL/SQL, the syntax for a basic IF-THEN is as follows:

IF (expression) THEN
   statement_1;
   local_variable := rvalue;
END IF;

The above branching-statement signature is enough to appreciate the polymorphism implemented through “NULL”.


IF (condition_is_true) THEN
    NULL; -- NULL acts as a statement
END IF;


IF (condition_is_true) THEN
    local_variable := NULL; -- NULL acts as an rvalue
    execute_this_statement;
END IF;


IF (NULL) THEN -- NULL acts as an expression
    DBMS_OUTPUT.PUT_LINE('I am Null');
ELSE
    DBMS_OUTPUT.PUT_LINE('I am not Null');
END IF;

If the above anonymous block is executed, it will always produce the output: I am not Null. A NULL condition is neither TRUE nor FALSE, so the THEN branch is skipped and the ELSE branch runs.

Tuesday, February 12th, 2013

Sometimes we need to write queries that retrieve data from various tables based on some condition. These queries might be simple or complex depending on the need. Queries that involve many joins are difficult to read and hard to maintain.
For ex: select * from <t1> join <t2> on <t1>.<column> = <t2>.<column> join <t3>……

In such cases, a CTE comes in handy. This feature was introduced in SQL Server 2005.

A CTE (Common Table Expression) is a powerful technique that is often used to improve query speed for complex queries. Common Table Expressions offer the same functionality as a view, but are ideal for one-off usages where you don’t necessarily need a view defined for the system. A CTE is also useful when a subquery would otherwise be executed multiple times, and for recursive queries.

A Common Table Expression contains three core parts:

The CTE name – follows the WITH keyword.
The column list (optional)
The query (appears within parentheses after the AS keyword)

WITH <CTE_Name> (<Column_Name_1>, <Column_Name_2>, ... <Column_Name_n>)
AS
(
    -- CTE query definition
)
-- T-SQL statement which uses the CTE
SELECT * FROM <CTE_Name>;

The query using the CTE must be the first query appearing after the CTE.

A CTE can be used to:
1) Create a recursive query.
2) Substitute for a view when the general use of a view is not required; that is, you do not have to store the definition in metadata.
3) Enable grouping by a column that is derived from a scalar subselect.
4) Reference the resulting table multiple times in the same statement.
5) Improve readability.
6) Ease maintainability.

The following clauses cannot be used in the CTE_query_definition:
ORDER BY (except when a TOP clause is specified)
OPTION clause with query hints
