Uploading and deleting an entire directory in Amazon S3 using TransferUtility

Amazon S3 is a Swiss Army knife when it comes to cloud storage. There are simply a ton of ways you can use S3: data archiving, big data analytics, cloud storage, backup and recovery, to mention a few. One of the most common is static hosting of websites. I want to show you how you can programmatically upload and delete an entire directory using the .NET APIs for S3.

The short version

  1. Create a console application in Visual Studio
  2. Add the AWSSDK.S3 NuGet package
  3. Create a class (S3AssetTransferUtility) to manage uploading and deleting directories
  4. Create a transfer request and call the UploadDirectory method using TransferUtility
  5. Use the File I/O APIs to delete the uploaded folder and files

Uploading directories to S3

The AWSSDK.S3 package comes with a great utility called TransferUtility. Install the package via the following command in the NuGet Package Manager Console of your console application:
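Install-Package AWSSDK.S3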

TransferUtility provides a simple API for uploading content to and downloading content from Amazon S3. It makes extensive use of Amazon S3 multipart uploads to achieve throughput, performance, and reliability. When uploading large files by specifying file paths instead of a stream, TransferUtility uses multiple threads to upload multiple parts of a single upload at once. When dealing with large content sizes and high bandwidth, this can increase throughput significantly.

To use the TransferUtility class, simply initialize a new instance with your AWS access key, secret key, and region.

TransferUtility transferUtility = new TransferUtility("[ACCESSKEY]", "[SECRETKEY]", RegionEndpoint.USWest2);

Uploading directories can be done by simply creating a new upload request and calling the UploadDirectory method. You can set the ACL permissions to PublicRead if you want the contents of your folder to be public.

/// <summary>
/// Upload the specified directory to an S3 bucket
/// </summary>
/// <param name="uploadDirectory">Local directory to upload</param>
/// <param name="bucket">Destination S3 bucket</param>
/// <returns>True if the upload succeeded</returns>
public bool SaveAsset(string uploadDirectory, string bucket)
{
    try
    {
        TransferUtilityUploadDirectoryRequest request = new TransferUtilityUploadDirectoryRequest
        {
            BucketName = bucket,
            Directory = uploadDirectory,
            SearchOption = System.IO.SearchOption.AllDirectories,
            CannedACL = S3CannedACL.PublicRead
        };
        _transferUtility.UploadDirectory(request);

        return true;
    }
    catch (Exception exception)
    {
        // Log the exception here so failures aren't silently swallowed
        return false;
    }
}

Deleting a directory from S3

The S3 SDK also provides another set of APIs called File I/O. These APIs are useful for applications that want to treat S3 as a file system. They do this by mimicking the .NET base classes FileInfo and DirectoryInfo with the new classes S3FileInfo and S3DirectoryInfo.
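The File I/O classes operate on a plain S3 client rather than the TransferUtility. The _client field referenced below isn't shown in the original snippet; it is assumed to be an AmazonS3Client initialized along these lines:

IAmazonS3 _client = new AmazonS3Client("[ACCESSKEY]", "[SECRETKEY]", RegionEndpoint.USWest2);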


/// <summary>
/// Delete a directory from S3
/// </summary>
/// <param name="bucket">S3 bucket containing the directory</param>
/// <param name="uploadDirectory">Directory (key prefix) to delete</param>
/// <returns>True if the directory was deleted</returns>
public bool DeleteAsset(string bucket, string uploadDirectory)
{
    try
    {
        S3DirectoryInfo directoryToDelete = new S3DirectoryInfo(_client, bucket, uploadDirectory);

        var directoryFiles = directoryToDelete.EnumerateFiles();
        foreach (S3FileInfo file in directoryFiles)
        {
            // FullName comes back as "bucket:\path", so strip the bucket prefix to get the key
            S3FileInfo fileToDelete = new S3FileInfo(_client, bucket, file.FullName.Replace(bucket + ":\\", string.Empty));
            if (fileToDelete.Exists)
            {
                fileToDelete.Delete();
            }
        }

        if (directoryToDelete.Exists)
        {
            directoryToDelete.Delete(false);
            return true;
        }
    }
    catch (Exception exception)
    {
        // Log the exception here so failures aren't silently swallowed
        return false;
    }
    return false;
}

Usage

using System;

class Program
{
    static void Main(string[] args)
    {
        var directoryToUpload = @"c:\Dev\site";
        var bucketName = "s3mediatransfers/transfers/site";

        //Upload Directory
        S3AssetTransferUtility transferUtility = new S3AssetTransferUtility();
        var uploadStatus = transferUtility.SaveAsset(directoryToUpload, bucketName);

        Console.WriteLine(string.Format("Upload to S3 Succeeded : {0}", uploadStatus));

        //Delete Directory
        var deleteStatus = transferUtility.DeleteAsset("s3mediatransfers", "transfers\\site");
        Console.WriteLine(string.Format("Directory Deletion from S3 Succeeded : {0}", deleteStatus));
    }
}

Full code sample can be found here: https://github.com/samuelmensah/S3TransferUtility

References

  • https://aws.amazon.com/blogs/developer/the-three-different-apis-for-amazon-s3/
  • https://docs.aws.amazon.com/sdkfornet1/latest/apidocs/html/T_Amazon_S3_Transfer_TransferUtility.htm
  • https://www.nuget.org/packages/AWSSDK.S3/

5 Things Every .NET Developer Should Know About MSBuild

MSBuild (Microsoft Build Engine) is the magical orchestrator that jumps into action every time you hit F5 in Visual Studio. Its superpowers range from compiling your project into executables to transforming web config files. In order to take advantage of the many features MSBuild provides, let's review the basics.

MSBuild Overview

MSBuild is the underlying technology used by Visual Studio to build and compile projects and solutions. It comes packaged with the .NET Framework, so it's very likely that you already have it on your machine.


MSBuild acts as an interpreter: it reads a build file (*.csproj, *.msbuild) or a solution (*.sln) and executes the instructions inside. It is available on the command line, in Visual Studio, and in TFS.


1. Characteristics of a build file

First, a build file is nothing but a simple XML document. Each build file must have a Project root node with an xmlns (XML namespace) attribute pointing to http://schemas.microsoft.com/developer/msbuild/2003.

<Project xmlns="http://schemas.microsoft.com/developer/msbuild/2003">
</Project>

The PropertyGroup node is usually declared next in a build file. PropertyGroups are containers for properties. Properties allow you to declare variables that can be used later in the file. The example below shows a variable called Name with the value Sam declared in a property group.

<PropertyGroup>
  <Name>Sam</Name>
</PropertyGroup>

ItemGroups are containers for files, and they behave like an array. Items are like properties, but they also let you access metadata on each object. The example below shows a property PicsPath which points to all the .jpg pictures in a folder. The item group then includes those files.

<PropertyGroup>
  <PicsPath>c:\temp\pics\*.jpg</PicsPath>
</PropertyGroup>
<ItemGroup>
  <Pics Include="$(PicsPath)" />
</ItemGroup>

A Target is a container for instructions. Each target invokes one or more tasks (functions). Each task is a .NET object which implements the ITask interface. An example of a task is displaying a message to the console.

<Target Name="HelloWorld">
  <Message Text="Hello $(Name)" />
</Target>

2. Hello World with MSBuild

Next is the basic Hello World example: a simple build file which displays "Hello Sam". A property Name is declared with the value Sam, and then a HelloWorld target is created with a Message task.

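Putting the pieces from section 1 together, the HelloWorld.msbuild file looks like this:

<Project xmlns="http://schemas.microsoft.com/developer/msbuild/2003">
  <PropertyGroup>
    <Name>Sam</Name>
  </PropertyGroup>
  <Target Name="HelloWorld">
    <Message Text="Hello $(Name)" />
  </Target>
</Project>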

Output

We can run the build file by running the following inside the Developer Command Prompt for VS 2017.

c:\Dev\Msbuild>msbuild HelloWorld.msbuild

The build output will include the message "Hello Sam".

3. Referencing Declared Properties

Another important tip is how to reference declared properties. A property is a scalar variable which consists of a key-value pair. Properties are always created inside of PropertyGroups. After you declare a property, you can reference that variable in properties, item groups, targets, etc. by using the dollar-parenthesis notation.

Syntax : $(PROPERTYNAME)
Example : $(Name)


<PropertyGroup>
  <Name>Homer</Name>
  <FullName>$(Name) Simpson</FullName>
</PropertyGroup>

In the example above, we declare the name Homer and reference that property in the FullName property by using $(Name).

4. Reference Declared Items

In addition, we can reference items and their associated metadata using the following notation:

Syntax : @(ITEMS->'%(METADATA)')
Example : @(Pics->'%(ModifiedTime)')
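As a sketch building on the Pics item group from section 1, a target can print each picture's metadata (FullPath and ModifiedTime are well-known item metadata that MSBuild populates automatically):

<Target Name="ShowPics">
  <Message Text="@(Pics->'%(FullPath) was modified %(ModifiedTime)')" />
</Target>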

5. Using a response file to pass in command-line arguments to MSBuild

Furthermore, MSBuild allows the use of numerous command-line arguments. Some of the most common ones are below, with an example invocation after the list.

  • /target:HelloWorld : Run the target HelloWorld when the build is run
  • /v:minimal : Set the logging verbosity to minimal
  • /p:Name=Lisa : Inject the value Lisa into the Name property
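For example, running the HelloWorld build file from earlier with the Name property overridden prints "Hello Lisa":

c:\Dev\Msbuild>msbuild HelloWorld.msbuild /target:HelloWorld /p:Name=Lisa /v:minimal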

While it’s super convenient to be able to specify command-line arguments, doing so is error prone and tedious. Response files allow you to place all your command-line arguments in a file and then just pass the name of the response file to MSBuild. Here is an example:

/target:HelloWorld,GoodbyeWorld
/v:diagnostic

The contents above would be saved in a file called helloworld.rsp.

c:\Dev\Msbuild>msbuild HelloWorld.msbuild @helloworld.rsp

Summary

In conclusion, MSBuild is a great tool for build automation. Understanding how MSBuild works gives us the ability to be creative in build automation and continuous delivery.

Securing your local environment for Development

One of the most common tasks developers face is mimicking production environments locally. When it comes to running your local app securely, most developers either just run plain “http” or create a self-signed certificate.

In this tutorial, I’m going to show you how to secure your local environment for development so you can run your application via HTTPS with no security warnings. We will use the makecert.exe tool to create a root X.509 certificate and then use that to sign our SSL certificate. Both tools below ship with the Windows SDK.

What you’ll need.

  • makecert.exe – Creates a root X.509 certificate.
  • pvk2pfx.exe – Copies the public and private key information contained in .spc, .cer, and .pvk files into a personal information exchange (.pfx) file.

Setting up your environment

We’ll begin by setting up our local environment. Create an ASP.NET web application. Then modify your hosts file, found at c:\Windows\System32\drivers\etc\hosts, so that dev.local maps to localhost (127.0.0.1):

127.0.0.1       dev.local

Create your Root Certificate

First, use the makecert tool to create a root certificate. There are numerous parameters you can use when generating this certificate; the most important ones are outlined in the code below. This certificate matters because it carries the private key we will use to sign our SSL certificate.

makecert.exe -r                         // self signed
             -n "CN=DevelopmentRoot"    // name
             -pe                        // exportable
             -sv DevelopmentRoot.pvk    // name of private key file
             -a sha1                    // hashing algorithm
             -len 2048                  // key length
             -b 01/21/2010              // valid from 
             -e 01/21/2030              // valid to
             -cy authority              // certificate type
             DevelopmentRoot.cer        // name of certificate file
             
-- pvk2pfx copies the public and private key information in the .cer & .pvk files into a personal information exchange (.pfx) file
pvk2pfx.exe -pvk DevelopmentRoot.pvk    // Specifies the name of a .pvk file
            -spc DevelopmentRoot.cer    // Specifies the name and extension of the Software Publisher Certificate (SPC) file that contains the certificate
            -pfx DevelopmentRoot.pfx    // Specifies the name of a .pfx file.

Use the Root Certificate to Create an SSL Certificate

makecert.exe -iv DevelopmentRoot.pvk    // file name of root priv key
             -ic DevelopmentRoot.cer    // file name of root cert
             -n "CN=dev.local"          // name
             -pe                        // mark as exportable
             -sv dev.local.pvk          // name of private key file
             -a sha1                    // hashing algorithm
             -len 2048                  // key length
             -b 01/21/2010              // valid from
             -e 01/21/2020              // valid to
             -sky exchange              // certificate type
             dev.local.cer              // name of certificate file
             -eku 1.3.6.1.5.5.7.3.1     // extended key usage (server authentication)

-- pvk2pfx copies the public and private key information in the .cer & .pvk files into a personal information exchange (.pfx) file
pvk2pfx.exe -pvk dev.local.pvk         // Specifies the name of a .pvk file
            -spc dev.local.cer         // Specifies the name and extension of the Software Publisher Certificate (SPC) file that contains the certificate
            -pfx dev.local.pfx         // Specifies the name of a .pfx file.

Install Certificates onto computer

Run mmc at the command prompt to open the Microsoft Management Console.

In the dialog box that appears, choose to add a snap-in and follow the prompts to select Certificates.

Right click on Certificates under Trusted Root Certification Authorities and select All Tasks > Import.

Navigate to where your certificates were created and choose the DevelopmentRoot.cer file. Walk through the remaining steps and click Finish.

Now it’s time to install the dev.local certificate on your machine.

Go back to the management console and select Personal > Certificates. Right click on Certificates and select All Tasks > Import.

Next, follow the wizard and select the dev.local.pfx certificate.

At this point, we’re ready to associate the certificate with the site in IIS.
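In IIS Manager that means adding an https binding for dev.local and selecting the dev.local certificate. If you're hosting outside IIS, a command-line sketch (not part of the original walkthrough) is to bind the certificate to a port with netsh; the certhash value is a placeholder for your certificate's actual thumbprint, and the appid can be any GUID:

netsh http add sslcert ipport=0.0.0.0:443 certhash=<your-dev.local-thumbprint> appid={00112233-4455-6677-8899-aabbccddeeff}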

Using access tokens in Swagger with Swashbuckle

Securing access to your API using access tokens is common practice. In this post, we’ll learn how to call secure API endpoints using the Swagger specification, specifically using Swashbuckle (an implementation of Swagger for .NET).

Understanding the Swagger schema:
This outline shows the basic structure of a Swagger specification document. The document is represented in JSON, which Swagger-UI in turn uses to display the interactive API documentation.
{
  "swagger": "2.0",
  "info": {
    "version": "v1",
    "title": ".NET Latest API",
    "description": ".NET Latest API",
    "termsOfService": "Some terms",
    "contact": {
      "name": "donetlatest Team",
      "email": "team@dotnetlatest.com"
    }
  },
  "host": "local.api.donetlatest.com:80",
  "schemes": [
    "http"
  ],
  "paths": {
    "/V1/api/Authentication": {},
    "/V1/api/Countries": {},
    "/V1/api/Clients": {}
  },
  "definitions": {
    "CountryDTO": {},
    "StateDTO": {},
    "ClientDTO": {}
  }
}

Parameters
The Path Item Object describes the operations available on a single path. Each operation has a parameters list, which describes the inputs for a given endpoint.
"/V1/api/LitmusClients": {
"post": {
"tags": [
"LitmusClients"
],
"summary": "GET /api/clientsrn Gets an array of all clients",
"operationId": "Clients_Index",
"consumes": [
],
"produces": [
"application/json",
"text/json"
],
"parameters": [
{
"name": "Authorization",
"in": "header",
"description": "access token",
"required": true,
"type": "string"
}
],
"responses": {
"200": {
"description": "OK",
"schema": {
"type": "array",
"items": {
"$ref": "#/definitions/ClientDTO"
}
}
}
},
"deprecated": false
}
}
}

Types of Parameters

  • Path – Used together with path templating
  • Query – Parameters that are appended to the URL
  • Header – Custom headers that are expected as part of the request
  • Body – The payload that’s appended to the HTTP request
  • Form – Used to describe the payload of an HTTP request sent as form data

The Swagger specification describes parameter types and how to configure them in detail: https://github.com/swagger-api/swagger-spec/blob/master/versions/2.0.md
Extending Swagger to add a new parameter:
Swashbuckle’s implementation of Swagger reads XML code comments to generate the required Swagger specification. Unfortunately, if you require an authorization header (access token) to make requests, the XML code comments cannot provide this info to Swashbuckle. You’ll have to manually inject the new parameter while the Swagger specification is generated.
Swashbuckle provides an interface called IOperationFilter for applying new parameters. Implementing this interface looks something like this:
public class AddAuthorizationHeaderParameterOperationFilter : IOperationFilter
{
    public void Apply(Operation operation, SchemaRegistry schemaRegistry, ApiDescription apiDescription)
    {
        // Only add the header to actions that actually run an authorization filter
        var filterPipeline = apiDescription.ActionDescriptor.GetFilterPipeline();
        var isAuthorized = filterPipeline
                               .Select(filterInfo => filterInfo.Instance)
                               .Any(filter => filter is IAuthorizationFilter);

        var allowAnonymous = apiDescription.ActionDescriptor.GetCustomAttributes<AllowAnonymousAttribute>().Any();

        if (isAuthorized && !allowAnonymous)
        {
            // parameters can be null when the action has no other inputs
            operation.parameters = operation.parameters ?? new List<Parameter>();
            operation.parameters.Add(new Parameter
            {
                name = "Authorization",
                @in = "header",
                description = "access token",
                required = true,
                type = "string"
            });
        }
    }
}
public class SwaggerConfig
{
    public static void Register()
    {
        var thisAssembly = typeof(SwaggerConfig).Assembly;

        GlobalConfiguration.Configuration
            .EnableSwagger(c =>
            {
                c.SingleApiVersion("v1", "Wordfly API")
                    .Description("An API for the wordfly messaging platform")
                    .TermsOfService("Some terms")
                    .Contact(cc => cc.Name("Wordfly Team")
                                     .Email("team@wordfly.com"));

                // Register the operation filter so the Authorization header shows up in Swagger-UI
                c.OperationFilter(() => new AddAuthorizationHeaderParameterOperationFilter());

                c.IncludeXmlComments(GetXmlCommentsPath());
            });
    }
}
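The GetXmlCommentsPath helper isn't shown above. A typical implementation, assuming your project writes its XML documentation file to the bin folder as MyApi.xml (both the location and file name here are hypothetical), looks like:

private static string GetXmlCommentsPath()
{
    // Points at the XML documentation file the compiler emits at build time
    return System.IO.Path.Combine(AppDomain.CurrentDomain.BaseDirectory, "bin", "MyApi.xml");
}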


Avoiding Herd Mentality by Asking “Why?”

My two-year-old daughter is in a phase of her development where she questions everything she doesn’t understand. She throws questions at us faster than a 90mph curveball. I’ll admit there are times when the incessant “why is this blue?” and “why did you open the bottle?” becomes hard to tolerate, but this is an important stage of her development which we really need to encourage, not suppress.

You see, her search for deeper understanding of things is helping to build her internal decision-making engine. She’ll be able to make better choices once she understands “the why”. I guess this is something I learned from my father, who would hammer this point home over and over again: “If you don’t know why you’re doing something, then there is a strong likelihood that you’re making a bad choice”.

The field of social psychology presents us with a very potent example. Soccer fans often break out into fights before games for the silliest reasons. What originally started as a misunderstanding between two opposing fans might turn into a big riot. Why? Because people jumped into the fight without understanding why they were even fighting. In psychology this is known as mob psychology or herd mentality.

As developers, we’re faced with similar choices. When you land that dream job you’ve been waiting for your whole life and you’re asked to build a new feature, it’s tempting not to question why things are being done the way they are and to jump in head first. Let’s be honest, it’s easy to follow existing conventions without asking questions.

The problem with this approach is that we become like the soccer fans, vandalizing existing structures without understanding why we’re doing it in the first place.

Questioning or seeking deeper understanding from your colleague, manager, wife or friend can sometimes come across as rude or even disrespectful. When I first began my career, I used to harbor feelings of animosity toward our team lead, who would incessantly question my code choices. I later learned how beneficial this was in helping me make better choices.

Developing complex applications will always present tough challenges and choices. However, understanding why you’re choosing one development approach over another will definitely go a long way toward enhancing your chances of success. In code reviews it’s important that we not follow a “herd mentality” way of thinking and simply nod our heads. We must question and seek clarity in order to stay on the path to success.

Guidelines for unit testing

Have you ever wondered what it takes to build a commercial jet? It often blows my mind to think of the hours engineers spend assembling components to build the plane. Interestingly enough, there are similarities between building software and assembling planes. The individual units for each part of the software application or plane must be thoroughly tested to ensure the overall functionality of the app or plane. The testing of these units is what has become known as unit testing.

Unit testing requires you to test the functionality of individual units/parts/sections of your application in isolation. Testing in isolation ensures that you can confidently pinpoint bugs in code and verify that they have been fixed.

Phases of a Test – Arrange, Act, Assert

There are 3 generally accepted phases for any unit test.

The Arrange phase is where you create an instance of the class you need to test and set up the initial state of any objects. The Act phase is where you invoke the functionality that represents the behavior being tested. Lastly, the Assert phase is where you check that what actually happened is what you expected.
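As a quick sketch, here is a test with the three phases marked. It reuses the DriversLicenseValidator from the next section and assumes a number shorter than eight characters is invalid:

[Test]
public void ShouldBeInvalidWhenFewerThan8Characters()
{
    // Arrange - create the system under test and its input
    var sut = new DriversLicenseValidator();
    const string driversLicenseNumber = "A55";

    // Act - invoke the behavior being tested
    var isValid = sut.IsValid(driversLicenseNumber);

    // Assert - check that what happened is what we expected
    Assert.That(isValid, Is.False);
}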

Tips for writing good unit tests

The primary objective of unit tests is to prove correctness, and you can do that by following these simple guidelines.

Prove a contract has been implemented

This is the most basic form of unit testing: verifying that the contract between the caller and the method is being adhered to. The example below validates a driver’s license number. Verifying that a method implements a contract is one of the weakest unit tests one can write.

[Test]
public void ShouldBeValidWhen8DigitsAndStartsWithLetter()
{
    var sut = new DriversLicenseValidator();
    const string driversLicenseNumber = "A5522123";
    Assert.That(sut.IsValid(driversLicenseNumber), Is.True);
}

Verify Computation Results

A stronger unit test involves verifying that the computation is correct. It is useful to categorize your methods into one of the two forms of computation:

  • Data Reduction: occurs when a method accepts multiple inputs and reduces them to one resulting output. For example, the division test below accepts 2 parameters and returns a single output.
  • Data Transformation: these tests operate on sets of values, transforming one set into another.

[Test]
public void VerifyDivisionTest()
{
    Assert.IsTrue(Divide(6, 2) == 3, "6/2 should equal 3!");
}

Establish that a method correctly handles an external exception

When your code connects to an external service, it is important to verify that your code will handle exceptions gracefully. Getting a real external service to throw a specific error is tricky, so mocking tools help in this process.
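Here is a sketch using Moq. The IWeatherService interface and WeatherReporter class are hypothetical stand-ins for your own external dependency and the code under test:

[Test]
public void ShouldReturnFallbackMessageWhenServiceTimesOut()
{
    // Arrange - configure the mocked external service to throw
    var service = new Mock<IWeatherService>();
    service.Setup(s => s.GetForecast("Seattle"))
           .Throws(new TimeoutException());
    var sut = new WeatherReporter(service.Object);

    // Act - the reporter should catch the exception and degrade gracefully
    var report = sut.GetReport("Seattle");

    // Assert
    Assert.That(report, Is.EqualTo("Forecast unavailable"));
}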

Prove a Bug is Re-creatable

Tests should be repeatable in any environment. They should be able to run in production, QA or even on the bus.

Write positive and negative tests

Negative tests prove that something is repeatedly not working. They are important for understanding the problem and the solution. Positive tests prove that the problem has been fixed. They are important not only to verify the solution, but also for repeating the test whenever a change is made. Unit testing plays an important role when it comes to regression testing.

[TestMethod]
[ExpectedException(typeof(ArgumentOutOfRangeException))]
public void BadParameterTest()
{
    Divide(5, 0);
}

Verify tests are independent

Tests should not depend on each other. One test should not set up the conditions for the next test.

These simple guidelines will set you off on the journey of unit testing. Feel free to share any ideas you might have stumbled on.

Viewing application logs in real time using Sentinel & NLog

We all know how important log files can be when troubleshooting issues in an application. While log files are great to have, sometimes you just want a live stream of information describing what is going on in your application. Sentinel and NLog provide a great way to achieve this.

Sentinel is a log viewer with configurable filtering and highlighting which can be used in conjunction with NLog to view an application's log entries in real time.

NLog Quick Setup

You can download and install NLog manually or just add it through NuGet in Visual Studio.
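From the NuGet Package Manager Console, that's:

Install-Package NLog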

Configuration File

Add a config file called NLog.config in the root of your project.

If you have a separate config file, make sure “Copy to Output Directory” is set to “Copy always” to avoid many tears wondering why the logging doesn’t work.

Sample Config File
<nlog xmlns="http://www.nlog-project.org/schemas/NLog.netfx40.xsd" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" autoReload="true">
    <variable name="log.dir" value="${basedir}" />
    <targets async="true">
      
      <target name="file" 
              xsi:type="File" 
              fileName="${log.dir}/log.txt" 
              archiveFileName="${log.dir}/log.{#}.txt" 
              archiveEvery="Day" 
              archiveNumbering="Rolling" 
              maxArchiveFiles="10" 
              layout="[${longdate} | ${level}][${threadid}] ${message}${onexception:${newline}EXCEPTION OCCURRED: ${exception:format=tostring}${newline}${stacktrace:format=Raw}}" />
      
      <target name="errors" 
              xsi:type="File" 
              fileName="${log.dir}/errors.txt"
              layout="[${longdate} | ${level}][${threadid}] ${message}${onexception:${newline}EXCEPTION OCCURRED: ${exception:format=tostring}${newline}${stacktrace:format=Raw}}" />

      <target name="debug"
             xsi:type="File"
             fileName="${log.dir}/debug.txt"
             layout="[${longdate} | ${level}][${threadid}] ${message}${onexception:${newline}EXCEPTION OCCURRED: ${exception:format=tostring}${newline}${stacktrace:format=Raw}}" />
           
      <target xsi:type="NLogViewer"
               name="viewer"
               address="udp://127.0.0.1:9999"  includeNLogData="false"/>
    </targets>
    <rules>
      <logger name="*" minlevel="Info" writeTo="file" />
      <logger name="*" minlevel="Error" writeTo="errors" />
      <logger name="*" minlevel="Debug" writeTo="debug" />
      <logger name="*" writeTo="viewer" minlevel="Debug" />
    </rules>
</nlog>
The most important part of the setup is the NLogViewer target, which is configured to push log entries to the address udp://127.0.0.1:9999:
<target xsi:type="NLogViewer"
               name="viewer"
               address="udp://127.0.0.1:9999"  includeNLogData="false"/>

Sentinel Setup

You can download and install Sentinel from http://sentinel.codeplex.com/. It comes with an easy-to-follow wizard and should be fairly straightforward to set up.

Start Logging

To illustrate how easy it is to stream your log data, create a simple console application. Make sure to add a reference to NLog and add the NLog.config file as illustrated above.

using System;
using NLog;

namespace Sentinel
{
    class Program
    {
        private static readonly Logger _log = LogManager.GetCurrentClassLogger();
        static void Main(string[] args)
        {
            try
            {
               _log.Debug("This is something new that I just added.");
                _log.Warn("Lets Go!!");
                throw new ApplicationException();
            }
            catch (ApplicationException e)
            {
                _log.ErrorException("Something went wrong...", e);
            }
        }
    }
}

Generating an iCalendar file

Situation: Generate an iCalendar file which will trigger a calendar application (e.g. Outlook) to open with an updated event.

The iCalendar file is a fairly common feature which developers add to enable users to add events to their personalized calendars via their calendar application of choice.

Solution: Create a web handler which writes a plain text file with the ‘ics’ extension.

using System;
using System.Web;

namespace MyNamespace
{
    public class iCalendar : IHttpHandler
    {
        public bool IsReusable
        {
            get { return true; }
        }

        string DateFormat
        {
            get { return "yyyyMMddTHHmmssZ"; } // 20060215T092000Z
        }

        public void ProcessRequest(HttpContext context)
        {
            DateTime startDate = DateTime.Now.AddDays(5);
            DateTime endDate = startDate.AddMinutes(35);
            string organizer = "foo@bar.com";
            string location = "My House";
            string summary = "My Event";
            string description = "Please come to\nMy House";

            context.Response.ContentType = "text/calendar";
            context.Response.AddHeader("Content-disposition", "attachment; filename=appointment.ics");

            // Each iCalendar property goes on its own newline-separated line
            context.Response.Write("BEGIN:VCALENDAR");
            context.Response.Write("\nVERSION:2.0");
            context.Response.Write("\nMETHOD:PUBLISH");
            context.Response.Write("\nBEGIN:VEVENT");
            context.Response.Write("\nORGANIZER:MAILTO:" + organizer);
            context.Response.Write("\nDTSTART:" + startDate.ToUniversalTime().ToString(DateFormat));
            context.Response.Write("\nDTEND:" + endDate.ToUniversalTime().ToString(DateFormat));
            context.Response.Write("\nLOCATION:" + location);
            context.Response.Write("\nUID:" + DateTime.Now.ToUniversalTime().ToString(DateFormat) + "@mysite.com");
            context.Response.Write("\nDTSTAMP:" + DateTime.Now.ToUniversalTime().ToString(DateFormat));
            context.Response.Write("\nSUMMARY:" + summary);
            context.Response.Write("\nDESCRIPTION:" + description);
            context.Response.Write("\nPRIORITY:5");
            context.Response.Write("\nCLASS:PUBLIC");
            context.Response.Write("\nEND:VEVENT");
            context.Response.Write("\nEND:VCALENDAR");
            context.Response.End();
        }
    }
}

Geolocation using Advanced HTML 5

Geolocation is a feature in HTML5 which enables browsers to determine the geographical position of a user. For privacy reasons, geolocation is disabled by default in all supporting browsers. Users have to explicitly give permission to the browser before their position can be determined.

The geolocation API is published through the navigator.geolocation object.

Retrieving the current position

To obtain the user’s current location, you can call the getCurrentPosition() method. This initiates an asynchronous request to detect the user’s position, and queries the positioning hardware to get up-to-date information. When the position is determined, the defined callback function is executed. You can optionally provide a second callback function to be executed if an error occurs.

Example

In the following example, the Get Location button triggers a call to the getCurrentPosition method. Once the call to getCurrentPosition() completes, longitude and latitude are updated on the form, and the View Map link will open Google Maps in a new window.

<!doctype html>
<html lang="en">
    <head>
        <title>Geolocation</title>
        <link rel="stylesheet" href="/global.css" type="text/css"/>
        <script src="../scripts/jquery-1.6.2.js"></script>
    </head>
    <body>
        <button id="getLocationButton">Show Position</button>
        <a id="mapLink" href="#" style="display: none;">View Map</a>
        Latitude <input id="lat" type="text"/>
        Longitude <input id="long" type="text"/>
        <ul id="log"></ul>
        <script src="/scripts/Geolocation.js"></script>
    </body>
</html>

    $(function() {
    
        var mapLink = $("#mapLink");
        var log = $("#log");
    
        $("#getLocationButton").click(function() {
            navigator.geolocation.getCurrentPosition(showPosition, positionError);
        });
    
        function showPosition(position) {
    
            var coords = position.coords;
    
            $("#lat").val(coords.latitude);
            $("#long").val(coords.longitude);
            
            mapLink.attr("href", "http://maps.google.com/maps?q="
            + $("#lat").val() + ",+" +
                $("#long").val() + "+(You+are+here!)&iwloc=A&hl=en"
            );
            mapLink.show();
    
        }
    
    function positionError(e) {
        switch (e.code) {
            case 0:
                logMsg("The application has encountered an unknown error");
                break;
            case 1:
                logMsg("You chose not to allow this application to access your location");
                break;
            case 2:
                logMsg("The application was unable to determine your location");
                break;
            case 3:
                logMsg("The request to determine your location has timed out.");
                break;
        }
    }
    
        function logMsg(msg) {
            log.append("<li>" + msg + "</li>");
        }
    })


Configuring Glimpse diagnostic tool

Glimpse is a free, open source diagnostic tool that can save you a lot of time when it comes to troubleshooting and diagnosing issues in your application. Over the years more and more developers have contributed to Glimpse, making it a must-have tool for daily .NET development.

Glimpse works by inspecting web requests as they come through the request pipeline. Each request is visually broken down into various information tabs, which in turn allows you to dive deeper to diagnose an issue. Each tab contains data specific to various server side concerns.

Installation

The official site has very good notes on how to install, so I won’t bore you with those details.

Features

The Glimpse.AspNet package adds these tabs to Glimpse, which can be used for diagnosing problems common to ASP.NET based frameworks:

Tabs

• Configuration – The Configuration tab displays web.config entries that could be helpful when debugging.
• Environment – The Environment tab displays information about the server that responded to the selected HTTP request.
• Request – The Request tab shows basic HTTP request information as the server received it.
• Routes – The Routes tab shows the routes of the web application, along with default values and constraints.
• Server – The Server tab shows all web server variables available for the request.
• Session – The Session tab shows the data that is associated with the current requestor’s session store.

Here are some of the awesome features Glimpse comes with:

• Visual Profiling – Glimpse profiles key server side activities and displays the timing of each in an easy to understand Gantt chart.
• Transparent Data Access – Out of process database calls are expensive. Glimpse lists each of them, so excessive or under-performant queries can be reined in.
• Server Configuration – Know everything necessary about a request’s origin server including: timezone, patch version, process ID and pertinent web.config entries.

Configuration Tips

NOTE: These changes are all made in your web.config file.

After installation, Glimpse will make several changes to your web.config file that you should be aware of.

<configuration>
  <configSections>
    <section name="glimpse" type="Glimpse.Core.Configuration.Section, Glimpse.Core" />
  </configSections>

  <!-- For IIS 7 & greater -->
  <system.webServer>
    <!-- The Glimpse.AspNet module will run on every request made to the application -->
    <modules>
      <add name="Glimpse" type="Glimpse.AspNet.HttpModule, Glimpse.AspNet" preCondition="integratedMode" />
    </modules>

    <handlers>
      <!-- The Glimpse.axd handler is used to turn Glimpse on and off -->
      <add name="Glimpse" path="glimpse.axd" verb="GET" type="Glimpse.AspNet.HttpHandler, Glimpse.AspNet" preCondition="integratedMode" />
    </handlers>
  </system.webServer>

  <!-- This is where you put custom configuration -->
  <glimpse defaultRuntimePolicy="On" endpointBaseUri="~/Glimpse.axd">
  </glimpse>
</configuration>
How to configure tabs

You can disable Glimpse tabs by instructing Glimpse to ignore their types:

<glimpse defaultRuntimePolicy="On" endpointBaseUri="~/Glimpse.axd">
  <tabs>
    <ignoredTypes>
      <add type="{Namespace.Type, AssemblyName}"/>
    </ignoredTypes>
  </tabs>
</glimpse>
How to configure runtime policy

Policies control what Glimpse is allowed to do on any given request. Policies can be disabled and customized to simplify some scenarios. For example, to run Glimpse on a remote server (like a server in Windows Azure), disable the LocalPolicy:

<glimpse defaultRuntimePolicy="On" endpointBaseUri="~/Glimpse.axd">
  <runtimePolicies>
    <ignoredTypes>
      <add type="Glimpse.AspNet.Policy.LocalPolicy, Glimpse.AspNet"/>
    </ignoredTypes>
  </runtimePolicies>
</glimpse>

Glimpse will never be allowed more permissions than the defaultRuntimePolicy allows. On and Off are the simplest configuration values.