Archive for the Category ◊ General ◊

Wednesday, March 05th, 2014

What I have noticed in most of the projects I have been part of is that performance is treated as the lowest-priority feature. Well, it's definitely not a feature: performance should be built into each feature and measured as early and as frequently as possible.

I have collated a few key points to fine-tune the performance of any web application (targeting .NET and IIS, but not limited to them).

1. Use a CDN (Content Delivery Network): Third-party JavaScript files such as jQuery and Knockout should always be served from a CDN instead of your web application server. CDN servers are dedicated to delivering static content and are almost always faster than your own host.

There is a very high probability that the client (browser) has already cached the JavaScript file while visiting some other web application, since most of them use the same CDN URL. You can read more about the benefits of CDNs here.

2. Use bundling and minification: Custom CSS and JavaScript files should be bundled into a single large file (which reduces the number of HTTP requests) and minified (which reduces the amount of data transferred over the wire).

How to enable bundling and minification in MVC: http://www.asp.net/mvc/tutorials/mvc-4/bundling-and-minification
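In MVC 4 this is configured through the System.Web.Optimization bundle tables; a minimal sketch might look like the following (the bundle names and file paths are illustrative, not from a real project):

```csharp
using System.Web.Optimization;

public class BundleConfig
{
    public static void RegisterBundles(BundleCollection bundles)
    {
        // One script bundle and one style bundle; each becomes a single HTTP request.
        bundles.Add(new ScriptBundle("~/bundles/site")
            .Include("~/Scripts/site.js", "~/Scripts/helpers.js"));

        bundles.Add(new StyleBundle("~/Content/css")
            .Include("~/Content/site.css"));

        // Forces bundling/minification even when debug="true" in web.config.
        BundleTable.EnableOptimizations = true;
    }
}
```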

3. Use static content caching: Always set static content (JavaScript, CSS, images, etc.) to be cached on the client side. Most modern browsers will cache static content on their own. Use a "never expires" policy so that most of these files need not be fetched again.

Note: This can also lead to clients not getting the latest updates when something changes, so include a version in the file name and change the version number whenever you update the file.

<configuration>
  <system.webServer>
    <staticContent>
      <clientCache cacheControlMode="UseExpires" httpExpires="Tue, 19 Jan 2038 03:14:07 GMT" />
    </staticContent>
  </system.webServer>
</configuration>

You can read more about client cache here.
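The version-in-the-file-name note above can be sketched as a tiny helper; the naming scheme (a ".vN" segment before the extension) is just an assumption, not a standard:

```javascript
// Hypothetical helper: embed a version number in a static file name so a
// "never expires" cache policy still picks up new releases when the name changes.
function versionedAsset(path, version) {
  var dot = path.lastIndexOf('.');
  return path.slice(0, dot) + '.v' + version + path.slice(dot);
}

console.log(versionedAsset('site.css', 3)); // site.v3.css
```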

4. Always keep CSS and JavaScript external: Never embed JavaScript or inline style information within the views. Inline script and style are sent again with every view and cannot be cached independently, so you would miss out on all the benefits above.

Hence, always keep JS and CSS in separate files and reference them as links in the view.

Note: The best practice is to link the style sheet at the top of the view and the JS at the bottom of the view file.
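A minimal layout sketch of that placement (the file names are illustrative):

```html
<html>
<head>
    <!-- Styles load first so the page renders with correct styling -->
    <link rel="stylesheet" href="/Content/site.css" />
</head>
<body>
    <!-- page content -->

    <!-- Scripts last, so they don't block rendering of the content above -->
    <script src="/Scripts/site.js"></script>
</body>
</html>
```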

5. Use URL compression: IIS 7+ provides an easy way to compress the response using gzip, which the browser decompresses on the client side. This can considerably reduce the time spent transporting data over the network.

There are two types of compression, static and dynamic, based on the content. JS, CSS, images and other static content fall under static compression, while views and data come under dynamic compression. You can control both with the following setting in the configuration file.

<urlCompression doStaticCompression="true" doDynamicCompression="false" />

[Screenshot: a network request against a server with dynamic compression enabled]

The picture above shows a simple request to a web server that implements dynamic compression: 4.7 KB is transferred over the wire, which decompresses to 45 KB on the client side.

Note: Dynamic compression puts extra load on the server, as each response has to be compressed on the fly, so use it wisely.

You can read more about setting up the compression here.

6. Use output caching: Use output caching for frequently requested views or pages that have no dynamic updates. In MVC this can be done by adding the OutputCache attribute to the action.

More reading here.
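A minimal sketch of such an action (the controller name and cache duration are illustrative):

```csharp
using System.Web.Mvc;

public class HomeController : Controller
{
    // Caches the rendered view for 10 minutes; repeat requests within that
    // window are served from the output cache without re-running the action.
    [OutputCache(Duration = 600, VaryByParam = "none")]
    public ActionResult About()
    {
        return View();
    }
}
```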

7. Use data caching: Reduce database or disk I/O by keeping regularly used data in an in-memory cache. There are many providers on the market, and a default one is available with IIS.
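As a sketch, using the MemoryCache class that ships with .NET 4 (the cache key, the five-minute expiry and the LoadProductsFromDatabase call are all illustrative):

```csharp
using System;
using System.Collections.Generic;
using System.Runtime.Caching;

ObjectCache cache = MemoryCache.Default;

var products = cache["products"] as List<Product>;
if (products == null)
{
    // The expensive I/O now happens only once per expiry window.
    products = LoadProductsFromDatabase();
    cache.Set("products", products, new CacheItemPolicy
    {
        AbsoluteExpiration = DateTimeOffset.Now.AddMinutes(5)
    });
}
```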

8. ASP.NET pipeline optimization: ASP.NET has many HTTP modules waiting to process the request, and the request goes through the entire pipeline even if a module isn't configured for your application.

All the default modules are added in machine.config, located in the "$WINDOWS$\Microsoft.NET\Framework\$VERSION$\CONFIG" directory. You can improve performance by removing the modules you don't require.

<httpModules>
  <!-- Remove unnecessary HTTP modules for a faster pipeline -->
  <remove name="Session" />
  <remove name="WindowsAuthentication" />
  <remove name="PassportAuthentication" />
  <remove name="AnonymousIdentification" />
  <remove name="UrlAuthorization" />
  <remove name="FileAuthorization" />
</httpModules>

9. Avoid session state: Session state should always be kept really small in size. If you cannot avoid it, use a distributed in-memory cache as the session store, and never use a database-backed session provider.
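If the application does not use session at all, it can be switched off entirely in web.config:

```xml
<system.web>
  <!-- No session module runs in the pipeline when mode is Off -->
  <sessionState mode="Off" />
</system.web>
```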

10. Remove unnecessary HTTP headers: ASP.NET adds headers, such as 'X-AspNet-Version' and 'X-Powered-By', that don't really need to be transmitted over the wire.
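Those two headers can be suppressed in web.config, as a sketch:

```xml
<configuration>
  <system.web>
    <!-- Drops the X-AspNet-Version header -->
    <httpRuntime enableVersionHeader="false" />
  </system.web>
  <system.webServer>
    <httpProtocol>
      <customHeaders>
        <!-- Drops the X-Powered-By header added by IIS -->
        <remove name="X-Powered-By" />
      </customHeaders>
    </httpProtocol>
  </system.webServer>
</configuration>
```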

11. Compile in release mode: Always set the build configuration to release mode for the website, for obvious reasons.

12. Turn tracing off: Tracing is useful functionality, but every trace call adds overhead; use an asynchronous logging mechanism instead.

13. Async and await: Async controllers have been available since MVC 3.0, so we can make non-blocking requests to the web server, which improves request throughput. You can read more about this at http://www.campusmvp.net/blog/async-in-mvc-4.
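A minimal sketch of an async action (the controller name and service URL are illustrative):

```csharp
using System.Net.Http;
using System.Threading.Tasks;
using System.Web.Mvc;

public class ReportsController : Controller
{
    // The request thread is returned to the pool while the downstream
    // call is in flight, so it can serve other requests in the meantime.
    public async Task<ActionResult> Index()
    {
        using (var client = new HttpClient())
        {
            var data = await client.GetStringAsync("http://example.com/api/report");
            return Content(data);
        }
    }
}
```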

14. HTTP limitations: By default the HTTP specification recommends no more than two concurrent connections to the same host, and browsers impose their own limits:

Firefox 2:  2
Firefox 3+: 6
Opera 9.26: 4
Opera 12:   6
Safari 3:   4
Safari 5:   6
IE 7:       2
IE 8:       6
IE 10:      8
Chrome:     6

But there are scenarios, such as a web server connecting to a web service to request data frequently, where this restriction can degrade performance. .NET has a way to overcome this restriction and allow multiple concurrent calls to the service.

<system.net>
  <connectionManagement>
    <!-- Add addresses from trusted connections only -->
    <add address="*" maxconnection="100" />
  </connectionManagement>
</system.net>

Some of the tools worth mentioning:

a. YSlow2 – Yahoo's add-in, available for most non-IE browsers. It analyses web pages against a default set of rules. You can look at the rules here.


b. Chrome inspector – The Audits tab in Chrome's developer tools lets you run performance checks.


c. Firebug for Firefox, IE's F12 window and Chrome's element inspector can be used to monitor network utilization, track every HTTP request made and gauge which ones need your attention.

d. .NET Profiler – available with Visual Studio Ultimate.

e. ANTS Profiler.

f. For Entity Framework there is Ayende's profiler: http://hibernatingrhinos.com/Products/EFProf

Along with the above checklist/recommendations, one should always ensure that the best practices for the languages involved (C#, VB.NET, JavaScript) are followed for optimum performance of any application.

Friday, February 28th, 2014

BDD, or behavior-driven development, is an extremely powerful and efficient way of developing software. Since Scrum helps in identifying areas of improvement, thereby helping you be more efficient, BDD is apt for a Scrum/XP hybrid. It is a way of breaking complex business scenarios down into simpler user scenarios, in a very readable, English-like language.

The process kick-starts as early as the product owner defining the business requirement. Since the product owner himself may not be technically inclined, BDD helps him define scenarios that the development team can use to churn out quality without much hassle. It keeps the developers from keying in unnecessary code: what you need is what you write. Developers can use it for TDD, and QAs can use it for creating test automation.

For instance, look at a simple example of a SpecFlow feature file below:

[Screenshot: a SpecFlow feature file]

The language used here is Gherkin. The Gherkin language defines the structure and a basic syntax for describing these tests.

If we observe the syntax here, we see that there's a feature element which provides the header for a feature file. Below this, we have some free text which gives a high-level description of the feature under test. A feature may comprise multiple scenarios.

@mytag gives us a way to group classes of behaviors. You can tag scenarios for different purposes, for example generating categories out of the tags in order to run only selected scenarios.

Scenario is the actual acceptance test. It has some free text describing the scenario, with the steps for it below.

Scenario steps describe the precondition (by means of Given), action (When) and verification (Then) steps needed for the acceptance test.
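Putting those pieces together, a minimal feature file might look like this (the feature and its steps are illustrative, not taken from the screenshot):

```gherkin
Feature: Account login
  As a registered user
  I want to log in with my credentials
  So that I can access my account

@mytag
Scenario: Successful login
  Given a registered user with username "alice"
  When the user logs in with the correct password
  Then the account dashboard is displayed
```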

The developer/QA can now get help generating the code that will test this feature/scenario. When we right-click on a scenario step, we get a context menu that shows the option "Generate Step Definitions".

[Screenshot: the "Generate Step Definitions" context menu]

Initially the steps are shown in one color (purple in my case, as I use the dark theme in Visual Studio). When step definitions are bound to a step, the text changes color; in my case, it goes to white. This helps in identifying the steps which do not have an associated definition.

Clicking this option brings up another pop-up:

[Screenshot: the "Generate Step Definitions" dialog]

Here the user can choose the style, preview it, copy the methods to the clipboard or generate them directly. Generating means creating a new file with a name like SpecFlowFeatureSteps.cs. You may also choose to add a SpecFlow step definition file on your own and copy/paste the generated code there.

The code would look something like this:

[Screenshot: the generated step definition code]

This is the result when I chose the underscore style.

What we need to do next is just key in the code for each of these methods. The result is code that is very easy to read and manage. It also serves as living documentation for all the features developed.

Category: General
Wednesday, September 04th, 2013

Hi All,

I tried a new approach to controlling your system with your phone from a remote location, which I would like to share with you all:

We can start any application, or shut the system down, from a phone at any remote location.

So first I will share the method, and then some of its benefits.

I will take shutting down the system from the phone as the example:

Steps:
1. Open Notepad, type "shutdown -s -t 100" and save the file as Shutdown.bat
2. In Outlook, click Rules -> Manage Rules -> New Rule
3. Click "Apply rule on messages I receive" under "Start from a blank rule"
4. Click Next, check "with specific words in the body", then click "specific words" at the bottom and type the word you want to match (I will use "Shutdown" for this example); click Next
5. On the next screen, check "start application", then click "start application" at the bottom, browse to your .bat file, and also check "delete it"
6. Click Finish

How to use it from the phone:

We have created a rule in Outlook, and it is triggered only by email.

So if you have internet connectivity on the phone:

Just send an email, with "Shutdown" as its content, to the account that is configured in Outlook.

If there is no internet connectivity:

Send a multimedia SMS from your phone with "Shutdown" as the text.

If the phone doesn't support multimedia messages either, send:

<email address> <space> <message>

To sms2mail gateway number…

07766 40 41 42

What if you don’t have an “@” symbol or an “_” symbol in phone?

Most mobiles support both “@” and “_” symbols, but if not you can use…

??  instead of  @

!!  instead of  _

Benefits:

1. You can now shut down your system from any remote location, with any phone.
2. The same process can be used for any other application too: say you need to start TeamViewer remotely and then access your system as you want.
3. You can try it with any application you like.

Please post your comments and feedback… will write more articles on this soon.

Wednesday, May 01st, 2013

In this post I will present my views on various aspects of software architecture. These views are not limited to any specific technology.

What is Software Architecture?

A software architecture is typically a set of design decisions that address the various non-functional requirements and attributes of a software system/application. It primarily focuses on aspects such as performance, reliability, scalability, testability, maintainability and various other attributes, which can be key to a software system both structurally and behaviorally.

Architecture vs. design

All architecture is design, but not all design is architecture

Developing a new software product requires a strong focus on some of these non-functional attributes, based on the domain and the nature of the application. Architecture establishes the context for design and coding. Example: a banking application may require special attention to the security and availability of the application. Architectural decisions have to be well thought out, with a strong base and a long-term view in mind; it's not only very difficult to change these decisions later on, but doing so will have significant ripple effects on the software.

Some of the things to keep in mind when architecting a software solution are:

  1. A do-not-reinvent-the-wheel policy – The best architects copy solutions that are already proven in the software world. Architects are the ones who adapt them to the current context, improve upon their weaknesses and design in a way that enables incremental improvement.
  2. The simplest architectures are the best – Always remember that there will be trade-offs. Example: for a highly scalable system you will trade off cost. By attempting to put in place an architecture that addresses every non-functional attribute, you are asking for trouble: you not only build complexity into the system but make things extremely difficult for the design and implementation teams.
  3. A demo-able architectural blueprint – A software architecture blueprint should ultimately be converted into executable code. It should not be a mere sketch on paper or a soft copy of the model that the software designers find impossible to visualize and construct.
  4. Key responsibilities – Define and validate the system's architecture, maintain the conceptual integrity of the system, assess and attack technical risks to the system, resolve the design and implementation teams' conflicts, and document the architecture.
  5. Keep the tool set up to date – Modern software requires architects to have up-to-date tool sets in their bags to meet various challenges, and in particular to know how to use them effectively. There are several out-of-the-box architectural tactics and patterns that help architects solve recurring problems.

I plan to cover some of these patterns and tactics in my next post.

 

Category: General  | Leave a Comment
Wednesday, May 01st, 2013

In most webpages we judiciously use jQuery and a few other JS plugins, plus some of our own JS files. We also have a bunch of CSS files and images. This means that when our webpage is loaded in the browser, it has to make several requests to the server to get all the additional files. Sure, the browser tries to fetch several files in parallel, but most browsers put an upper limit on how many parallel requests to make; in most cases this limit is 6. That means if your web page refers to more than 6 files, some files will have to wait until others have been downloaded. It's easy to examine this behavior using your browser's developer tools (Chrome developer tools, Firebug or IE's F12 window); on most browsers you can hit the F12 key to access them. You can find out which files are being requested by your browser, and in what order, from the Network tab. Here's an example of how it looks in Chrome:

chrome dev tools

As you can see from the picture, some files are loaded only after others have finished. Hence you can significantly reduce the load time of your page by referring to as few files as possible. There are also tools/libraries that can help you combine several JS files into one (CSS files too). If you are using images to achieve a gradient look, try using CSS gradients instead. Maybe some files are not needed on some pages. Anyway, you can figure out a suitable solution once you know what's going on, and the network monitoring feature of your browser can help you figure out just that.

Firefox users can use Firebug: http://getfirebug.com/network

IE users can use Internet Explorer Developer Tools: http://msdn.microsoft.com/en-us/library/gg130952(v=vs.85).aspx

Happy optimizing!

- Anand Gothe

Category: CSS3, General, HTML5
Saturday, March 23rd, 2013

Writing email is part of our day-to-day activities. If you work at an office, you write emails to your colleagues and clients. There are a few rules of email etiquette that help us communicate better via email.

1) Use appropriate Subject Lines

emailsubject


Do not leave the subject line blank. Keep the subject line simple and appropriate to the content of the email; the recipient will decide whether to read the email based on its subject line.

2) Always use “Reply to All”

reply to all


3) Be simple and to the point.

Email is harder to read than printed messages. An email that goes on and on is less likely to be read. If the email is just two or three sentences, maybe you should just pick up the phone and talk to the person.

4) Use proper grammar, punctuation and spelling.

A poorly written email with bad grammar reflects on you and your company. Almost every email tool has a spelling and grammar checker, so use it.

5) The 24 Hour Rule

Always reply to your email within 24 hours. This tells your customer that you are focused on prompt service.

6) Do not write in All Caps.

capsimages


Writing in all caps makes the email difficult to read. It also sends an unintended message that you are shouting.

 

7) Read your email before sending it.

Take your time and read your email before sending it.

8) Use a disclaimer at the bottom of the email.

The information transmitted, including attachments, is intended only for the person(s) or entity to which it is addressed and may contain confidential and/or privileged material. Any review, retransmission, dissemination or other use of, or taking of any action in reliance upon, this information by persons or entities other than the intended recipient is prohibited. If you received this in error, please contact the sender and destroy any copies of this information.

Most of you might think this takes up extra space in the email, but these words can save your company a lot of money whenever there are any legal issues.

 

Saturday, January 26th, 2013

A few months back Intel announced a "mini desktop motherboard + processor" combo, ambitiously named NUC – Next Unit of Computing. These are small 4×4-inch motherboards with processors baked in and lots of connection ports. Put one in a tiny box, add some RAM and a hard disk, and you've got yourself a really small (but powerful) PC. You can already buy devices based on NUC from vendors like Gigabyte and Velocity, and several more vendors will introduce similar devices in the coming months.

Now, there isn't anything new here; small form-factor PCs have been around for quite some time: Giada mini PCs, Zotac's ZBOX series and ASRock mini PCs. What's exciting about Intel's move is that it gives a huge push to the mini PC segment. All the models currently (Jan 2013) listed on Intel's website have a 3rd-generation Intel Core i3 processor. An i5 version is expected to come out soon; in fact, other vendors like Habey and Gigabyte have already announced i5 mini PCs based on Intel's NUC concept.

I'm really excited because these are powerful (yet tiny) desktop PCs that consume a fraction of the electricity of the huge box that sits under my desk. Some models will also support dual monitors out of the box, which is great for programming. They'll come with VESA mount support, so they'll fit right behind your monitor! I think NUCs will really catch on and, in a couple of years, completely replace the bulky desktop PCs.

Category: General
Thursday, January 03rd, 2013

We have a large TV in our front office. About a year back, we set it up to display Twitter feeds, weather, news and pictures. We did this by connecting it to a Windows 8 (preview) PC and setting up live tiles with apps like AccuWeather, FlipToast (Twitter), NDTV News and a picture gallery. It's been working just fine. However, one thing kept nagging me: I felt we were wasting a perfectly fine desktop PC just to drive a simple front-office display. That was until recently, when I found out about the Android mini PC!

The Android mini PC is a small device, the size of a pen drive, that you can plug directly into your TV and convert it into a large Android tablet! The model I bought was the Rikomagic 802 III, which has a dual-core processor, 1 GB RAM, 8 GB system memory and Wi-Fi, and runs Android 4.1. It worked right out of the box; all I had to do was plug it into the TV's HDMI port and USB port (for power). I also connected a wireless USB keyboard+mouse. It booted in a few seconds with a gorgeous Android desktop. All that was left was to download a few apps from the Play Store and set up some widgets on the home screen. I used Android Pro Widgets for the Twitter feed, AccuWeather for weather, the NDTV app for news and "Photo Frame Home Screen" for pictures. Here's a picture of how it turned out:

Have I told you the best part yet? The mini PC cost me only $70 (including shipping) and freed up the desktop PC that was connected to the TV.

Happy New Year!

- Anand Gothe

Tuesday, April 17th, 2012
Plates is the templating library in flatiron. Plates binds data to markup: it's JavaScript, markup and JSON. It works in the browser and in Node.js. All templates are valid HTML, with no special characters for value insertion. The relationship between tags and values is defined through object literals that are passed to Plates.bind:
You have to install plates before using it. Plates can then be used in a program by requiring it:
var http = require('http'),
    plates = require('../lib/plates');

var server = http.createServer(function (req, res) {
    res.writeHead(200);
    var userSpan = '<span id="spFirstName">Prajeesh</span>&nbsp;<span id="spLastName">Prathap</span>';
    var userHtml = '<div id="pnlLogin"></div>';
    var map = plates.Map();
    map.where('id').is('spFirstName').use('value').as('Prajeesh');
    var content = plates.bind(userHtml, { 'pnlLogin': userSpan });
    res.end(content);
});
server.listen(8083);


Category: General
Monday, April 16th, 2012
Director is a URL router module that comes as part of the flatiron framework. It works in the browser for single-page apps and in Node.js. It's not a plugin for another framework and it's not dependent on anything; it's a modern router designed from the ground up with JavaScript.
Installing flatiron
You can install flatiron using npm (npm install flatiron -g)
Director handles routing for HTTP requests similarly to journey or express:
var http = require('http');
var director = require('../lib/director');

function sayWelcome(route) {
    this.res.writeHead(200, { 'Content-Type': 'text/plain' });
    this.res.end('Welcome to flatiron routing sample');
}

var router = new director.http.Router({
    '/': {
        get: sayWelcome
    }
});
router.get('/index', sayWelcome);
router.get('/home', sayWelcome);

var server = http.createServer(function (req, res) {
    router.dispatch(req, res, function (err) {
        if (err) {
            res.writeHead(404);
            res.end('Error!!!');
        }
    });
});
server.listen(8083);
When developing large client-side or server-side applications it is not always possible to define routes in one location; usually, individual decoupled components register their own routes with the application router. Director supports this kind of ad hoc routing as well.
router.path(/\/customers\/(\w+)/, function () {
    this.post(function (id) {
        this.res.writeHead(200, { 'Content-Type': 'text/plain' });
        this.res.end('Creates a customer with Id = ' + id);
    });
    this.get(function (id) {
        this.res.writeHead(200, { 'Content-Type': 'text/plain' });
        this.res.end('Gets a customer with Id = ' + id);
    });
    this.get(/\/orders/, function (id) {
        this.res.writeHead(200, { 'Content-Type': 'text/plain' });
        this.res.end('Gets orders for a customer with Id = ' + id);
    });
});

Category: General