HTTP Headers Tutorial: Part 1 - The Basics

The Hypertext Transfer Protocol (HTTP) is the driving force behind the internet. It allows communication between browsers and servers. An important component of HTTP messages is the HTTP header. In this series of posts we’re going to take a deep dive to understand what headers are and how to use them.

This tutorial is composed of several posts:

  • Part 1 - The Basics
  • Part 2 - Authentication
  • Part 3 - Caching
  • Part 4 - Content Negotiation
  • Part 5 - Cookies
  • Part 6 - Redirects
  • Part 7 - Conditionals
  • Part 8 - Compression
  • Part 9 - Range Requests
  • Part 10 - Connection Management
  • Part 11 - Security

What are HTTP Headers?

Before we dive in, let’s understand what an HTTP header actually is. HTTP headers are part of the HTTP message sent between client and server. The message sent from the client is called the HTTP Request and the message returned from the server is the HTTP Response.

HTTP headers allow the client (browser) and the server to pass additional information with the request or the response. A request header consists of its case-insensitive name followed by a colon ‘:‘, then by its value (without line breaks). Leading white space before the value is ignored.
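To make the syntax concrete, here is a minimal sketch (Python, not tied to any HTTP library) of parsing a raw header line the way just described; the function name is my own:

```python
def parse_header_line(line):
    """Split 'Name: value' into a (name, value) pair.

    Header names are case-insensitive, so we normalize to lowercase;
    leading whitespace before the value is ignored.
    """
    name, _, value = line.partition(":")
    return name.strip().lower(), value.lstrip().rstrip("\r\n")

# parse_header_line("Content-Type:  text/html")
# yields ("content-type", "text/html")
```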

Types of Headers

  • General / Message header: Apply to both request and response messages and relate to the message itself rather than the entity body.
    • Headers related to intermediaries, including Cache-Control, Pragma, and Via
    • Headers related to the message, including Transfer-Encoding and Trailer
    • Headers related to the request, including Connection, Upgrade, and Date
  • Request header: Apply generally to the request message and not to the entity body, with the exception of the Range header.
    • Headers about the request, including Host, Expect, and Range
    • Headers providing information about the client, including User-Agent and From
    • Headers for content negotiation, including Accept, Accept-Language, and Accept-Encoding
    • Headers for conditional requests, including If-Match, If-None-Match, and If-Modified-Since
  • Response header: Apply to the response message and not the entity body. They include:
    • Headers for providing information about the target resource, including Allow and Server
    • Headers providing additional control data, such as Age and Location
    • Headers related to the selected representation, including ETag, Last-Modified, and Vary
    • Headers related to authentication challenges, including Proxy-Authenticate and WWW-Authenticate
  • Entity / Representation header: Apply generally to the request or response entity body (content). They include:
    • Headers about the entity body itself, including Content-Type, Content-Length, Content-Location, and Content-Encoding
    • Headers related to caching of the entity body, including Expires


When you type a URL in the address bar, your browser sends an HTTP message as shown below. The first line specifies the verb, URI, and HTTP version. Following that are the HTTP headers. The headers in this message read something like this:

  • Host: I would like to make a GET request to the root of the resources at this host, using HTTP/1.1
  • Accept: I (the browser) would like to negotiate the response that you (the server) send back. Here are my preferred choices (MIME types): html, xhtml+xml, any
  • Accept-Encoding: Here are the encoding types that I (the browser) understand: gzip, deflate, and br
  • Accept-Language: I prefer English to be sent back to me
  • DNT: I value my privacy, so please don’t track me
  • Upgrade-Insecure-Requests: Security is important to me, so upgrade our communication channel to use encryption
  • User-Agent: I’m making this request from the Chrome browser
GET / HTTP/1.1
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
Accept-Encoding: gzip, deflate, br
Accept-Language: en-US,en;q=0.9
DNT: 1
Upgrade-Insecure-Requests: 1
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64) Chrome Safari/537.36
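The q-values in the Accept header above are preference weights: an entry without one defaults to q=1.0, the highest. A small Python sketch (independent of any HTTP library; the function name is my own) that orders media types by those weights:

```python
def rank_accept(header_value):
    """Return media types from an Accept header, most preferred first.

    Each comma-separated entry may carry a quality weight like ';q=0.9';
    an entry without one defaults to q=1.0.
    """
    ranked = []
    for entry in header_value.split(","):
        parts = entry.strip().split(";")
        media_type = parts[0].strip()
        q = 1.0
        for param in parts[1:]:
            key, _, value = param.strip().partition("=")
            if key == "q":
                q = float(value)
        ranked.append((q, media_type))
    # Stable sort: ties keep their original order
    ranked.sort(key=lambda pair: -pair[0])
    return [media_type for _, media_type in ranked]

print(rank_accept("text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8"))
# text/html and application/xhtml+xml (q=1.0) rank ahead of application/xml (0.9) and */* (0.8)
```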

After that, your browser receives an HTTP response that may look like this.

HTTP/1.1 200 OK
Cache-Control: no-cache, must-revalidate, max-age=0
Connection: keep-alive
Content-Encoding: gzip
Content-Length: 38850
Content-Type: text/html; charset=UTF-8
Date: Wed, 13 Dec 2017 14:00:29 GMT
Expires: Wed, 11 Jan 1984 05:00:00 GMT
Link: <>; rel=""
Server: nginx/1.12.2
Vary: Accept-Encoding

The first line is the status line, followed by the HTTP headers. A variety of headers are sent back, which we will examine in detail in the upcoming posts.

How to View Headers

You can view headers in any modern browser by right-clicking a page and choosing Inspect. This opens the developer tools, where the Network tab lets you drill down to the headers associated with each HTTP request and response.



In our next post we’ll look at Authentication headers.

Uploading and deleting an entire directory in Amazon S3 using TransferUtility

Amazon S3 is a Swiss Army knife when it comes to cloud storage. There are simply a ton of ways you can use S3. To mention a few: data archiving, big data analytics, cloud storage, and backup and recovery. One of the most common is the static hosting of websites. I want to show you how you can programmatically upload and delete an entire directory using the .NET APIs for S3.

The short version

  1. Create a console application in Visual Studio
  2. Add the AWSSDK.S3 NuGet package
  3. Create a class (S3TransferUtility) to manage uploading and deleting directories
  4. Create a transfer request and call the UploadDirectory method using TransferUtility
  5. Use the File I/O APIs to delete the uploaded folder and files

Uploading directories to S3

The AWSSDK.S3 package comes with a great utility called TransferUtility. Install the AWSSDK.S3 NuGet package in your console application.

TransferUtility provides a simple API for uploading content to and downloading content from Amazon S3. It makes extensive use of Amazon S3 multipart uploads to achieve throughput, performance, and reliability. When uploading large files by specifying file paths instead of a stream, TransferUtility uses multiple threads to upload multiple parts of a single upload at once. When dealing with large content sizes and high bandwidth, this can increase throughput significantly.

To use the TransferUtility class, simply initialize a new instance with your AWS access key and secret key.

TransferUtility transferUtility = new TransferUtility("[ACCESSKEY]", "[SECRETKEY]", RegionEndpoint.USWest2);


Uploading directories can be done by simply creating a new upload request and calling the method UploadDirectory. You can set the ACL permissions to PublicRead if you want the contents of your folder to be public.

/// <summary>
/// Upload the specified directory to an S3 bucket
/// </summary>
/// <param name="uploadDirectory"></param>
/// <param name="bucket"></param>
/// <returns></returns>
public bool SaveAsset(string uploadDirectory, string bucket)
{
    try
    {
        TransferUtilityUploadDirectoryRequest request = new TransferUtilityUploadDirectoryRequest
        {
            BucketName = bucket,
            Directory = uploadDirectory,
            SearchOption = System.IO.SearchOption.AllDirectories,
            CannedACL = S3CannedACL.PublicRead
        };

        transferUtility.UploadDirectory(request);
        return true;
    }
    catch (Exception exception)
    {
        // Log exception
        return false;
    }
}
Deleting a directory from S3

The S3 SDK also provides another set of APIs called File I/O. These APIs are useful for applications that want to treat S3 as a filesystem. They do this by mimicking the .NET base classes FileInfo and DirectoryInfo with the new classes S3FileInfo and S3DirectoryInfo.


/// <summary>
/// Delete a directory from S3
/// </summary>
/// <param name="bucket"></param>
/// <param name="uploadDirectory"></param>
/// <returns></returns>
public bool DeleteAsset(string bucket, string uploadDirectory)
{
    try
    {
        S3DirectoryInfo directoryToDelete = new S3DirectoryInfo(_client, bucket, uploadDirectory);

        foreach (S3FileInfo file in directoryToDelete.EnumerateFiles())
        {
            S3FileInfo fileToDelete = new S3FileInfo(_client, bucket, file.FullName.Replace(bucket + ":\\", string.Empty));
            if (fileToDelete.Exists)
                fileToDelete.Delete();
        }

        if (directoryToDelete.Exists)
        {
            directoryToDelete.Delete();
            return true;
        }
        return false;
    }
    catch (Exception exception)
    {
        // Log error
        return false;
    }
}

class Program
{
    static void Main(string[] args)
    {
        var directoryToUpload = @"c:\Dev\site";
        var bucketName = "s3mediatransfers/transfers/site";

        // Upload directory
        S3AssetTransferUtility transferUtility = new S3AssetTransferUtility();
        var uploadStatus = transferUtility.SaveAsset(directoryToUpload, bucketName);
        Console.WriteLine(string.Format("Upload to S3 Succeeded: {0}", uploadStatus));

        // Delete directory
        var deleteStatus = transferUtility.DeleteAsset("s3mediatransfers", "transfers\\site");
        Console.WriteLine(string.Format("Directory Deletion from S3 Succeeded: {0}", deleteStatus));
    }
}

Full code sample can be found here:



5 Things Every .NET Developer Should Know About MSBuild

MSBuild (Microsoft Build Engine) is the magical orchestrator that jumps into action every time you hit F5 in Visual Studio. Its superpowers range from compiling your project into executables to transforming web.config files. In order to take advantage of the many features MSBuild provides, let’s review the basics.

MSBuild Overview

MSBuild is the underlying technology used by Visual Studio to build and compile projects and solutions. It comes packaged with the .NET Framework, so it’s very likely that you already have it on your machine.


MSBuild acts as an interpreter: it reads an XML file (*.csproj, *.sln, *.msbuild) and executes the instructions inside. It is available on the command line, in Visual Studio, and in TFS.


1. Characteristics of a build file

Firstly, a build file is nothing but a simple XML document. Each build file must have the Project root node with an xmlns (XML namespace) attribute.

<Project xmlns="">

The PropertyGroup node is usually declared next in a build file. PropertyGroups are containers for properties. Properties allow you to declare variables that can be used later in the file. The example below shows a variable called Name with the value Sam declared in a property group.
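A minimal sketch of such a PropertyGroup:

```xml
<PropertyGroup>
  <Name>Sam</Name>
</PropertyGroup>
```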


ItemGroups are containers for files, and they behave like an array. Items are like properties, but they also allow you to access metadata on each object. The example below shows a property, PicsPath, which points to the .jpg pictures in a folder; the item group then includes those files.

    <ItemGroup>
        <Pics Include="$(PicsPath)" />
    </ItemGroup>

A Target is a container for instructions. Each target has one or more associated tasks (functions to invoke). Each task is a .NET object which implements the interface ITask. An example of a task is displaying a message to the console.

<Target Name="HelloWorld">
   <Message Text="Hello $(Name)" />
</Target>


2. Hello World with MSBuild

Next is the basic Hello World example. The example below shows a simple build file which displays “Hello Sam”. A property Name is declared with the value Sam, and a HelloWorld target is created with a Message task.

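Putting the pieces together, a build file matching that description can be sketched as:

```xml
<Project xmlns="">
  <PropertyGroup>
    <Name>Sam</Name>
  </PropertyGroup>
  <Target Name="HelloWorld">
    <Message Text="Hello $(Name)" />
  </Target>
</Project>
```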


We can run the build file by running the following inside the Developer Command Prompt for VS 2017.

c:\Dev\Msbuild>msbuild HelloWorld.msbuild


3. Referencing Declared Properties

Another important tip is how to reference declared properties. A property is a scalar variable consisting of a key/value pair. Properties are always created inside PropertyGroups. After you declare a property, you can reference it in properties, item groups, targets, etc. by using the dollar-parenthesis notation.

Example: $(Name)

<PropertyGroup>
  <Name>Homer</Name>
  <FullName>$(Name) Simpson</FullName>
</PropertyGroup>

In the example above, we declare the name Homer and reference that property in the FullName property by using $(Name).

4. Referencing Declared Items

In addition, we can reference items and their associated metadata using the at-parenthesis notation.

Syntax: @(Pics) expands the item list, and %(Pics.Filename) accesses metadata on each item

5. Using a response file to pass in command line arguments to msbuild

Furthermore, MSBuild allows the use of numerous command-line arguments. Some of the most common ones are below:

  • /target:HelloWorld : Run the target HelloWorld when the build is run
  • /v:minimal : Set the logging to minimal
  • /p:Name=Lisa : Inject the value Lisa into the variable Name.

While it’s super convenient to be able to specify command-line arguments, typing them out each time is error prone and tedious. Response files allow you to place all your command arguments in a file and then just pass the name of the response file to MSBuild. Here is an example:

/target:HelloWorld
/v:minimal
/p:Name=Lisa

The contents above would be saved in a file called helloworld.rsp.

c:\Dev\Msbuild>msbuild HelloWorld.msbuild @helloworld.rsp


In conclusion, MSBuild is a great tool for build automation. Understanding how MSBuild works gives us the ability to be creative in build automation and continuous delivery.

Securing your local environment for Development

One of the most common tasks that developers face is to mimic production environments locally. When it comes to running your local app securely, most developers either just run regular “http” or create a self-signed certificate.

In this tutorial, I’m going to show you how to secure your local environment for development so you can run your application via HTTPS with no security warnings. We will use the tool makecert.exe to create a root X.509 certificate and then use that to sign our SSL certificates. You can download this tool here.

What you’ll need.

  • makecert.exe – The makecert tool is used to create a root x.509 certificate.
  • pvk2pfx.exe – Pvk2Pfx copies the public and private key information contained in .spc, .cer and .pvk files into the personal information exchange file (.pfx).

Setting up your environment

We’ll begin by setting up our local environment. Create an ASP.NET web application as shown below. Then modify your hosts file, found at c:\Windows\System32\drivers\etc\hosts, to map dev.local to localhost:       dev.local

Create your Root Certificate

First, use the makecert tool to create a root certificate. There are numerous parameters you can use when generating this certificate, but the most important ones are outlined in the code below. This certificate matters because it carries a private key, which we will use to sign our SSL certificate.

makecert.exe -r                         // self signed
             -n "CN=DevelopmentRoot"    // name
             -pe                        // exportable
             -sv DevelopmentRoot.pvk    // name of private key file
             -a sha1                    // hashing algorithm
             -len 2048                  // key length
             -b 01/21/2010              // valid from 
             -e 01/21/2030              // valid to
             -cy authority              // certificate type
             DevelopmentRoot.cer        // name of certificate file
--pvk2pfx copies public key and private key information in .cer & .pvk file to a personal information exchange
pvk2pfx.exe -pvk DevelopmentRoot.pvk    // Specifies the name of a .pvk file
            -spc DevelopmentRoot.cer    // Specifies the name and extension of the Software Publisher Certificate (SPC) file that contains the certificate
            -pfx DevelopmentRoot.pfx    // Specifies the name of a .pfx file.

Use the Root Certificate to Create a Self-Signed Certificate

makecert.exe -iv DevelopmentRoot.pvk    // file name of root priv key
             -ic DevelopmentRoot.cer    // file name of root cert
             -n "CN=dev.local"          // name
             -pe                        // mark as exportable
             -sv dev.local.pvk          // name of private key file
             -a sha1                    // hashing algorithm
             -len 2048                  // key length
             -b 01/21/2010              // valid from
             -e 01/21/2020              // valid to
             -sky exchange              // certificate type
             dev.local.cer              //name of certificate file
             -eku     // extended key usage

--pvk2pfx copies public key and private key information in .cer & .pvk file to a personal information exchange
pvk2pfx.exe -pvk dev.local.pvk         // Specifies the name of a .pvk file
            -spc dev.local.cer         // Specifies the name and extension of the Software Publisher Certificate (SPC) file that contains the certificate
            -pfx dev.local.pfx         // Specifies the name of a .pfx file.

Install Certificates onto computer

Run mmc at the command prompt to open the Microsoft Management Console.

In the console that appears, choose to add a snap-in and follow the prompts to select Certificates.

Right-click on Certificates under Trusted Root Certification Authorities and select the Import task.

Navigate to where your certificates were created and choose the DevelopmentRoot.cer file. Walk through the remaining steps and click Finish.

Now it’s time to install the dev.local certificate on your machine.

Go back to the management console and select Personal -> Certificates. Right-click on Certificates and select Import under All Tasks.

Next, follow the wizard and select the dev.local.pfx certificate.

At this point, we’re ready to associate the certificate with the site in IIS.

Using access tokens in Swagger with Swashbuckle

Securing access to your API using access tokens is common practice. In this post, we’ll learn how to call secure API endpoints documented with the Swagger specification, specifically using Swashbuckle (an implementation of Swagger for .NET).

Understanding the Swagger Schema:
This outline shows the basic structure of a Swagger specification document. The file is represented as JSON, which Swagger-UI in turn uses to display the interactive API documentation.
{
  "swagger": "2.0",
  "info": {
    "version": "v1",
    "title": ".NET Latest API",
    "description": ".NET Latest API",
    "termsOfService": "Some terms",
    "contact": {
      "name": "donetlatest Team",
      "email": ""
    }
  },
  "host": "",
  "schemes": [],
  "paths": {
    "/V1/api/Authentication": {},
    "/V1/api/Countries": {},
    "/V1/api/Clients": {}
  },
  "definitions": {
    "CountryDTO": {},
    "StateDTO": {},
    "ClientDTO": {}
  }
}

The Paths item object describes the operations on a single path. Each operation has a parameters object, which is a list of inputs for the given endpoint.
"/V1/api/LitmusClients": {
  "post": {
    "tags": [],
    "summary": "GET /api/clients\r\nGets an array of all clients",
    "operationId": "Clients_Index",
    "consumes": [],
    "produces": [],
    "parameters": [
      {
        "name": "Authorization",
        "in": "header",
        "description": "access token",
        "required": true,
        "type": "string"
      }
    ],
    "responses": {
      "200": {
        "description": "OK",
        "schema": {
          "type": "array",
          "items": {
            "$ref": "#/definitions/ClientDTO"
          }
        }
      }
    },
    "deprecated": false
  }
}

Types of Parameters

  • Path – Used together with path templating
  • Query – Parameters that are appended to the URL
  • Header – Custom headers that are expected as part of the request
  • Body – The payload that’s attached to the HTTP request
  • Form – Used to describe the payload of an HTTP request
The Swagger specification describes parameter types in detail and explains how you can configure them.
Extending Swagger to add a new parameter:
Swashbuckle’s implementation of Swagger reads XML code comments to generate the required specification. Unfortunately, if you require an authorization header (access token) to make requests, XML code comments cannot provide this info to Swashbuckle. You’ll have to manually inject this new parameter during specification generation.
Swashbuckle provides an interface called IOperationFilter to apply new parameters. Implementing this interface will look something like this.
public class AddAuthorizationHeaderParameterOperationFilter : IOperationFilter
{
    public void Apply(Operation operation, SchemaRegistry schemaRegistry, ApiDescription apiDescription)
    {
        var filterPipeline = apiDescription.ActionDescriptor.GetFilterPipeline();
        var isAuthorized = filterPipeline
                               .Select(filterInfo => filterInfo.Instance)
                               .Any(filter => filter is IAuthorizationFilter);

        var allowAnonymous = apiDescription.ActionDescriptor.GetCustomAttributes<AllowAnonymousAttribute>().Any();

        if (isAuthorized && !allowAnonymous)
        {
            operation.parameters.Add(new Parameter
            {
                name = "Authorization",
                @in = "header",
                description = "access token",
                required = true,
                type = "string"
            });
        }
    }
}
public class SwaggerConfig
{
    public static void Register()
    {
        var thisAssembly = typeof(SwaggerConfig).Assembly;

        GlobalConfiguration.Configuration
            .EnableSwagger(c =>
            {
                c.SingleApiVersion("v1", "Wordfly API")
                 .Description("An API for the wordfly messaging platform")
                 .TermsOfService("Some terms")
                 .Contact(cc => cc.Name("Wordfly Team"));
                c.OperationFilter(() => new AddAuthorizationHeaderParameterOperationFilter());
            });
    }
}
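Independent of Swashbuckle, the same injection can be sketched over a raw specification document: walk each operation under paths and append the header parameter (Python; the spec dict here is a made-up minimal example):

```python
def add_auth_header(spec):
    """Append an Authorization header parameter to every operation in a swagger spec dict."""
    auth_param = {
        "name": "Authorization",
        "in": "header",
        "description": "access token",
        "required": True,
        "type": "string",
    }
    for path in spec.get("paths", {}).values():
        for operation in path.values():
            # Create the parameters list if the operation has none yet
            operation.setdefault("parameters", []).append(dict(auth_param))
    return spec

spec = {"paths": {"/V1/api/Clients": {"post": {"operationId": "Clients_Index"}}}}
add_auth_header(spec)
```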


Avoiding Herd Mentality by Asking “Why?”

My two-year-old daughter is in a phase of her development where she questions everything she  doesn’t understand. She throws questions at us faster than a 90mph curve ball. I’ll admit there are times where the incessant “why is this blue?” and “why did you open the bottle?” become hard to tolerate but this is an important stage of her development which we really need to encourage and not suppress.

You see, her search for deeper understanding of things is helping to build her internal decision-making engine. She’ll be able to make better choices once she understands “the why”. I guess this is something that I learned from my father, who would hammer this point home over and over again: “If you don’t know why you’re doing something, then there is a strong likelihood that you’re making a bad choice”.



The field of social psychology presents us with a very potent example. Soccer fans often break out into fights before games for the silliest reasons. What originally started as a misunderstanding between two opposing fans might turn into a big riot. Why? Because people just jumped into the fight without understanding why they were even fighting. In psychology this is known as mob psychology or herd mentality.

As developers, we’re faced with similar choices. When you get that dream job you’ve been waiting for your whole life and you’re asked to build a new feature, we often don’t question why things are being done the way they are and jump in head first. Let’s be honest: it’s easy to follow existing conventions without asking questions.

The problem with this approach is that we become like the soccer fans, vandalizing existing structures without understanding why we’re doing it in the first place.

Questioning or seeking deeper understanding from your colleague, manager, wife, or friend can sometimes come across as rude or even disrespectful. When I first began my career, I used to harbor feelings of animosity toward our team lead, who would incessantly question my code choices. I later learned how beneficial this was in helping me make better choices.

Developing complex applications will always present tough challenges and choices. However, understanding why you’re choosing one development approach as opposed to another will definitely go a long way to enhance your chances of success. In code reviews it’s important that we not follow a “herd mentality” way of thinking and simply nod our heads. We must question and seek clarity in order to stay on the path to success.

Guidelines for unit testing

Have you ever wondered what it takes to build a commercial Jet? It often blows my mind to think of the hours engineers spend assembling components together to build the plane. Interestingly enough there are similarities between building software and assembling planes. The individual units for each part of the software application or plane must be thoroughly tested to ensure the overall functionality of the app or plane. The testing of these units is what has become known as unit testing.

Unit testing requires you to test the functionality of individual units/parts/sections of your application in isolation. Testing in isolation ensures that you can confidently pinpoint bugs in code and verify that they have been fixed.

Phases of a Test – Arrange, Act, Assert

There are 3 generally accepted phases for any unit test.

The Arrange phase is where you create an instance of the class you need to test and set up the initial state of any objects. The Act phase is where you call the functionality that represents the behavior being tested. Lastly, the Assert phase is where you check that what actually happened was expected.
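For illustration, the three phases look like this in practice (sketched in Python with a made-up Calculator class; the same shape applies in NUnit or any other framework):

```python
import unittest

class Calculator:
    def divide(self, dividend, divisor):
        # Floor division keeps the example to whole numbers
        return dividend // divisor

class CalculatorTests(unittest.TestCase):
    def test_divide(self):
        # Arrange: create the instance under test and any initial state
        sut = Calculator()
        # Act: call the behavior being tested
        result = sut.divide(6, 2)
        # Assert: check that what actually happened was expected
        self.assertEqual(result, 3)
```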

Tips for writing good unit tests

The primary objective of unit tests is to prove correctness and you can do that by following these simple guidelines.

Prove a contract has been implemented

This is the most basic form of unit testing: it verifies that the contract between the caller and the method is being adhered to. For example, this test validates a driver’s license number. Verifying that a method implements a contract is one of the weakest unit tests one can write.

[Test]
public void ShouldBeValidWhen8DigitsAndStartsWithLetter()
{
    var sut = new DriversLicenseValidator();
    const string driversLicenseNumber = "A5522123";
    Assert.That(sut.IsValid(driversLicenseNumber), Is.True);
}

Verify Computation Results

A stronger unit test involves verifying that the computation is correct. It is useful to categorize your methods into one of the two forms of computation:

  • Data Reduction: occurs when a test accepts multiple inputs and reduces them to one resulting output. For example, the division test below accepts 2 parameters and returns a single output.

[Test]
public void VerifyDivisionTest()
{
    Assert.IsTrue(Divide(6, 2) == 3, "6/2 should equal 3!");
}

  • Data Transformation: These tests operate on sets of values

Establish that a method correctly handles an external exception

When your code connects to an external service, it is important to verify that your code handles exceptions gracefully. Getting an external service to throw a specific error is tricky, so mocking tools help in this process.

Prove a Bug is Re-creatable

Tests should be repeatable in any environment. They should be able to run in production, QA or even on the bus.

Write positive and negative tests

Negative tests prove that something is repeatedly not working. They are important in understanding the problem and the solution. Positive tests prove that the problem has been fixed. They are important not only to verify the solution, but also for repeating the test whenever a change is made. Unit testing plays an important role when it comes to regression testing.

public void BadParameterTest() 
{
    Assert.Throws<DivideByZeroException>(() => Divide(5, 0));
}
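The negative/positive pairing can also be sketched in Python (divide here is a stand-in for the method above):

```python
import unittest

def divide(dividend, divisor):
    return dividend / divisor

class DivideTests(unittest.TestCase):
    def test_bad_parameter(self):
        # Negative test: division by zero should consistently raise
        with self.assertRaises(ZeroDivisionError):
            divide(5, 0)

    def test_good_parameter(self):
        # Positive test: valid input produces the expected quotient
        self.assertEqual(divide(6, 2), 3)
```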

Verify tests are independent

Tests should not depend on each other. One test should not set up the conditions for the next test.

These simple guidelines will set you off on the journey of unit testing. Feel free to share any ideas you might have stumbled on.

Viewing application logs in real time using Sentinel & NLog

We all know how important log files can be when troubleshooting issues in an application. While log files are great to have, sometimes you just want a live stream of information describing what is going on in your application. Sentinel and NLog provide a great way to achieve this.

Sentinel is a log viewer with configurable filtering and highlighting which can be used in conjunction with NLog to view an application’s log entries in real time.

NLog Quick Setup

You can download and install NLog manually, or just add it through NuGet in Visual Studio.

Configuration File

Add a config file called NLog.config to the root of your project.

If you have a separate config file, make sure “Copy to Output Directory” is set to “Copy Always” to avoid many tears wondering why the logging doesn’t work.

Sample Config File
<nlog xmlns="" xmlns:xsi="" autoReload="true">
    <variable name="log.dir" value="${basedir}" />
    <targets async="true">
      <!-- fileName values below are illustrative -->
      <target name="file" xsi:type="File" fileName="${log.dir}/log.txt"
              layout="[${longdate} | ${level}][${threadid}] ${message}${onexception:${newline}EXCEPTION OCCURRED: ${exception:format=tostring}${newline}${stacktrace:format=Raw}}" />
      <target name="errors" xsi:type="File" fileName="${log.dir}/errors.txt"
              layout="[${longdate} | ${level}][${threadid}] ${message}${onexception:${newline}EXCEPTION OCCURRED: ${exception:format=tostring}${newline}${stacktrace:format=Raw}}" />
      <target name="debug" xsi:type="File" fileName="${log.dir}/debug.txt"
              layout="[${longdate} | ${level}][${threadid}] ${message}${onexception:${newline}EXCEPTION OCCURRED: ${exception:format=tostring}${newline}${stacktrace:format=Raw}}" />
      <target name="viewer" xsi:type="NLogViewer"
              address="udp://" includeNLogData="false"/>
    </targets>
    <rules>
      <logger name="*" minlevel="Info" writeTo="file" />
      <logger name="*" minlevel="Error" writeTo="errors" />
      <logger name="*" minlevel="Debug" writeTo="debug" />
      <logger name="*" minlevel="Debug" writeTo="viewer" />
    </rules>
</nlog>
The most important part of the setup is the NLogViewer target, which is configured to push log entries over UDP to the address specified in the address attribute.
<target xsi:type="NLogViewer"
               address="udp://"  includeNLogData="false"/>

Sentinel Setup

You can download and install Sentinel from the project’s website. It comes with an easy-to-follow wizard which should be fairly straightforward to set up.

Start Logging

To illustrate how easy it is to stream your log data, create a simple console application. Make sure to add a reference to NLog and add the NLog.config file as illustrated above.

namespace Sentinel
{
    class Program
    {
        private static readonly Logger _log = LogManager.GetCurrentClassLogger();

        static void Main(string[] args)
        {
            try
            {
                _log.Debug("This is something new that I just added.");
                _log.Warn("Lets Go!!");
                throw new ApplicationException();
            }
            catch (ApplicationException e) { _log.ErrorException("Something went wrong...", e); }
        }
    }
}


Generating an iCalendar file

Situation: Generate an iCalendar file which will trigger a calendar application (e.g. Outlook) to open with an updated event.

The iCalendar file is a fairly common feature which developers add to enable users to add events to their personal calendars via their calendar application of choice.

Solution: Create a web handler which creates a plain-text file with the ‘.ics’ extension.

using System;
using System.Web;

namespace MyNamespace
{
    public class iCalendar : IHttpHandler
    {
        public bool IsReusable
        {
            get { return true; }
        }

        string DateFormat
        {
            get { return "yyyyMMddTHHmmssZ"; } // e.g. 20060215T092000Z
        }

        public void ProcessRequest(HttpContext context)
        {
            DateTime startDate = DateTime.Now.AddDays(5);
            DateTime endDate = startDate.AddMinutes(35);
            string organizer = "";
            string location = "My House";
            string summary = "My Event";
            string description = "Please come to\\nMy House"; // \n is an iCalendar-escaped newline

            context.Response.AddHeader("Content-disposition", "attachment; filename=appointment.ics");
            context.Response.Write("BEGIN:VCALENDAR\nBEGIN:VEVENT");
            context.Response.Write("\nORGANIZER:MAILTO:" + organizer);
            context.Response.Write("\nDTSTART:" + startDate.ToUniversalTime().ToString(DateFormat));
            context.Response.Write("\nDTEND:" + endDate.ToUniversalTime().ToString(DateFormat));
            context.Response.Write("\nLOCATION:" + location);
            context.Response.Write("\nUID:" + DateTime.Now.ToUniversalTime().ToString(DateFormat));
            context.Response.Write("\nDTSTAMP:" + DateTime.Now.ToUniversalTime().ToString(DateFormat));
            context.Response.Write("\nSUMMARY:" + summary);
            context.Response.Write("\nDESCRIPTION:" + description);
            context.Response.Write("\nEND:VEVENT\nEND:VCALENDAR");
        }
    }
}
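The same file body can be assembled in a few lines of Python (a sketch using only the standard library; the function name is my own and the field values mirror the handler above):

```python
from datetime import datetime, timedelta

DATE_FORMAT = "%Y%m%dT%H%M%SZ"  # e.g. 20060215T092000Z

def build_ics(start, end, location, summary, description):
    """Assemble a minimal VEVENT as plain text with CRLF line endings."""
    lines = [
        "BEGIN:VCALENDAR",
        "BEGIN:VEVENT",
        "DTSTART:" + start.strftime(DATE_FORMAT),
        "DTEND:" + end.strftime(DATE_FORMAT),
        "LOCATION:" + location,
        "SUMMARY:" + summary,
        "DESCRIPTION:" + description,
        "END:VEVENT",
        "END:VCALENDAR",
    ]
    return "\r\n".join(lines)

start = datetime(2006, 2, 15, 9, 20)
ics = build_ics(start, start + timedelta(minutes=35), "My House", "My Event", "Please come to My House")
```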

Geolocation using HTML5

Geolocation is a feature in HTML5 which enables browsers to determine the geographical position of a user. For privacy reasons, geolocation is disabled by default in all supporting browsers. Users have to explicitly give permission to the browser before their position can be determined.

The geolocation API is published through the navigator.geolocation object.

Retrieving the current position

To obtain the user’s current location, you can call the getCurrentPosition() method. This initiates an asynchronous request to detect the user’s position and queries the positioning hardware for up-to-date information. When the position is determined, the defined callback function is executed. You can optionally provide a second callback function to be executed if an error occurs.


In the following example, the Get Location button triggers a call to the getCurrentPosition() method. Once the call completes, longitude and latitude are updated on the form, and the View Map link will open a new window with Google Maps.

<!doctype html>
<html lang="en">
    <head>
        <link rel="stylesheet" href="/global.css" type="text/css"/>
    </head>
    <body>
        <!-- ids below match the script that follows -->
        <button id="getLocationButton">Show Position</button>
        <a id="mapLink" href="#">View Map</a>
        Latitude <input id="lat" type="text" readonly/>
        Longitude <input id="long" type="text" readonly/>
        <ul id="log"></ul>
        <script src="/scripts/Geolocation.js"></script>
    </body>
</html>

    $(function() {
        var mapLink = $("#mapLink");
        var log = $("#log");

        $("#getLocationButton").click(function() {
            navigator.geolocation.getCurrentPosition(showPosition, positionError);
        });

        function showPosition(position) {
            var coords = position.coords;
            $("#lat").val(coords.latitude);
            $("#long").val(coords.longitude);
            mapLink.attr("href", ""
                + $("#lat").val() + ",+"
                + $("#long").val() + "+(You+are+here!)&iwloc=A&hl=en");
        }

        function positionError(e) {
            switch (e.code) {
                case 0:
                    logMsg("The application has encountered an unknown error");
                    break;
                case 1:
                    logMsg("You chose not to allow this application to access your location");
                    break;
                case 2:
                    logMsg("The application was unable to determine your location");
                    break;
                case 3:
                    logMsg("The request to determine your location has timed out.");
                    break;
            }
        }

        function logMsg(msg) {
            log.append("<li>" + msg + "</li>");
        }
    });