Uploading and deleting an entire directory to Amazon S3 using TransferUtility

Amazon S3 is a Swiss Army knife when it comes to cloud storage. There are simply a ton of ways you can use S3: data archiving, big data analytics, backup and recovery, and, one of the most common, static hosting of websites. I want to show you how you can programmatically upload and delete an entire directory using the .NET APIs for S3.

The short version

  1. Create a console application in Visual Studio
  2. Add the AWSSDK.S3 NuGet package
  3. Create a class (S3AssetTransferUtility) to manage uploading and deleting directories.
  4. Create a transfer request and call the UploadDirectory method on TransferUtility.
  5. Use the File I/O APIs to delete the uploaded folder and files.

Uploading directories to S3

The AWSSDK.S3 package comes with a great utility called TransferUtility. Install it by running the following command in the Package Manager Console of your console application:

Install-Package AWSSDK.S3

TransferUtility provides a simple API for uploading content to and downloading content from Amazon S3. It makes extensive use of Amazon S3 multipart uploads to achieve throughput, performance, and reliability. When uploading large files by specifying file paths instead of a stream, TransferUtility uses multiple threads to upload multiple parts of a single upload at once. When dealing with large content sizes and high bandwidth, this can increase throughput significantly.
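You can tune that behavior through a TransferUtilityConfig passed to the TransferUtility constructor. A minimal sketch, assuming the Amazon.S3.Transfer namespace (the values below are arbitrary examples):

var config = new TransferUtilityConfig
{
    ConcurrentServiceRequests = 10,             // number of parts uploaded in parallel
    MinSizeBeforePartUpload = 16 * 1024 * 1024  // files below 16 MB skip multipart
};
// Pass config to a TransferUtility constructor overload, e.g. new TransferUtility(s3Client, config)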

To use the TransferUtility class, simply initialize a new instance with your AWS access key, secret key, and region.

TransferUtility transferUtility = new TransferUtility("[ACCESSKEY]", "[SECRETKEY]", RegionEndpoint.USWest2);
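The SaveAsset and DeleteAsset methods shown below live inside a small wrapper class (S3AssetTransferUtility in the usage section at the end). A minimal sketch of its skeleton, with credentials hard-coded purely for illustration:

using Amazon;
using Amazon.S3;
using Amazon.S3.Transfer;

public class S3AssetTransferUtility
{
    private readonly IAmazonS3 _client;
    private readonly TransferUtility _transferUtility;

    public S3AssetTransferUtility()
    {
        // Prefer the SDK credential chain (profiles, environment variables,
        // IAM roles) over hard-coded keys in real code.
        _client = new AmazonS3Client("[ACCESSKEY]", "[SECRETKEY]", RegionEndpoint.USWest2);
        _transferUtility = new TransferUtility(_client);
    }

    // SaveAsset and DeleteAsset (below) go here.
}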


Uploading a directory is done by simply creating a new upload request and calling the UploadDirectory method. You can set the CannedACL to PublicRead if you want the contents of your folder to be public.

/// <summary>
/// Upload the specified directory to an S3 bucket
/// </summary>
/// <param name="uploadDirectory">Local directory to upload</param>
/// <param name="bucket">Target S3 bucket</param>
/// <returns>True if the upload succeeded</returns>
public bool SaveAsset(string uploadDirectory, string bucket)
{
    try
    {
        TransferUtilityUploadDirectoryRequest request = new TransferUtilityUploadDirectoryRequest
        {
            BucketName = bucket,
            Directory = uploadDirectory,
            SearchOption = System.IO.SearchOption.AllDirectories,
            CannedACL = S3CannedACL.PublicRead
        };
        _transferUtility.UploadDirectory(request);

        return true;
    }
    catch (Exception exception)
    {
        // Log exception before returning failure
        return false;
    }
}

Deleting a directory from S3

The S3 SDK also provides another set of APIs called File I/O. These APIs are useful for applications that want to treat S3 as a filesystem, and they do this by mimicking the .NET base classes FileInfo and DirectoryInfo with the new classes S3FileInfo and S3DirectoryInfo.

/// <summary>
/// Delete a directory from S3
/// </summary>
/// <param name="bucket">S3 bucket containing the directory</param>
/// <param name="uploadDirectory">Directory (key prefix) to delete</param>
/// <returns>True if the directory was deleted</returns>
public bool DeleteAsset(string bucket, string uploadDirectory)
{
    try
    {
        S3DirectoryInfo directoryToDelete = new S3DirectoryInfo(_client, bucket, uploadDirectory);

        // EnumerateFiles returns S3FileInfo objects we can delete directly.
        // Note: without a SearchOption it only returns top-level files;
        // use directoryToDelete.Delete(true) to delete recursively instead.
        foreach (S3FileInfo file in directoryToDelete.EnumerateFiles())
        {
            if (file.Exists)
            {
                file.Delete();
            }
        }

        if (directoryToDelete.Exists)
        {
            directoryToDelete.Delete(false);
            return true;
        }
    }
    catch (Exception exception)
    {
        // Log error before returning failure
        return false;
    }
    return false;
}

Usage

class Program
{
    static void Main(string[] args)
    {
        var directoryToUpload = @"c:\Dev\site";
        var bucketName = "s3mediatransfers/transfers/site";

        // Upload directory
        S3AssetTransferUtility transferUtility = new S3AssetTransferUtility();
        var uploadStatus = transferUtility.SaveAsset(directoryToUpload, bucketName);

        Console.WriteLine(string.Format("Upload to S3 Succeeded : {0}", uploadStatus));

        // Delete directory
        var deleteStatus = transferUtility.DeleteAsset("s3mediatransfers", "transfers\\site");
        Console.WriteLine(string.Format("Directory Deletion from S3 Succeeded : {0}", deleteStatus));
    }
}

Full code sample can be found here: https://github.com/samuelmensah/S3TransferUtility

References

  • https://aws.amazon.com/blogs/developer/the-three-different-apis-for-amazon-s3/
  • https://docs.aws.amazon.com/sdkfornet1/latest/apidocs/html/T_Amazon_S3_Transfer_TransferUtility.htm
  • https://www.nuget.org/packages/AWSSDK.S3/

5 Things Every .NET Developer Should Know About MSBuild

MSBuild (Microsoft Build Engine) is the magical orchestrator that jumps into action every time you hit F5 in Visual Studio. Its superpowers range from compiling your project into executables to transforming web.config files. In order to take advantage of the many features MSBuild provides, let's review the basics.

MSBuild Overview

MSBuild is the underlying technology used by Visual Studio to build and compile projects and solutions. It comes packaged with the .NET Framework (msbuild.exe lives under C:\Windows\Microsoft.NET\Framework\<version>), so it's very likely that you already have it on your machine.

MSBuild acts as an interpreter which reads a build file (*.csproj, *.sln, *.msbuild) and executes the instructions inside. It is available from the command line, in Visual Studio, and in TFS.

1. Characteristics of a build file

A build file is nothing but a simple XML document. Each build file must have a Project root node with an xmlns (XML namespace) pointing to http://schemas.microsoft.com/developer/msbuild/2003.

<Project xmlns="http://schemas.microsoft.com/developer/msbuild/2003">
  
</Project>

The PropertyGroup node is usually declared next in a build file. PropertyGroups are containers for properties, which let you declare variables for use later in the file. The example below shows a variable called Name with the value Sam declared in a property group.

<PropertyGroup>
  <Name>Sam</Name>
</PropertyGroup>

ItemGroups are containers for files, and they behave like an array. Items are like properties, but they also let you access metadata on each item. The example below defines a property PicsPath that points at all the .jpg pictures in a folder; the item group then includes those files.

<PropertyGroup>
  <PicsPath>c:\temp\pics\*.jpg</PicsPath>
</PropertyGroup>
<ItemGroup>
  <Pics Include="$(PicsPath)" />
</ItemGroup>

A Target is a container for instructions. Each target invokes one or more tasks, and each Task is a .NET class that implements the ITask interface. An example of a task is displaying a message to the console.

<Target Name="HelloWorld">
  <Message Text="Hello $(Name)" />
</Target>

2. Hello World with MSBuild

Next is the basic Hello World example. The build file below displays “Hello Sam”: a property Name is declared with the value Sam, and a HelloWorld target is created with a Message task.

<Project xmlns="http://schemas.microsoft.com/developer/msbuild/2003">
  <PropertyGroup>
    <Name>Sam</Name>
  </PropertyGroup>
  <Target Name="HelloWorld">
    <Message Text="Hello $(Name)" />
  </Target>
</Project>

Output

We can execute the build file by running the following inside the Developer Command Prompt for VS 2017.

c:\Dev\Msbuild>msbuild HelloWorld.msbuild

Running it displays “Hello Sam” in the build output.

3. Referencing Declared Properties

Another important tip is how to reference declared properties. A property is a scalar variable consisting of a key/value pair. Properties are always created inside PropertyGroups. After you declare a property, you can reference it in properties, item groups, targets, etc. by using the dollar-parenthesis notation.

Syntax  : $(PROPERTYNAME)
Example : $(Name)


<PropertyGroup>
  <Name>Homer</Name>
  <FullName>$(Name) Simpson</FullName>
</PropertyGroup>

In the example above, we declare a name Homer and reference that property in the FullName property by using $(Name).

4. Referencing Declared Items

In addition, we can reference items and their associated metadata using the following notation:

Syntax  : @(ITEMS->'%(METADATA)')
Example : @(Pics->'%(ModifiedTime)')

5. Using a response file to pass command-line arguments to MSBuild

Furthermore, MSBuild accepts numerous command-line arguments. Some of the most common ones are below:

  • /target:HelloWorld : Run the HelloWorld target when the build is run.
  • /v:minimal : Set the logging verbosity to minimal.
  • /p:Name=Lisa : Inject the value Lisa into the property Name.

While it’s super convenient to be able to specify command-line arguments, it’s error-prone and tedious. Response files allow you to place all your command-line arguments in a file and then just pass the name of the response file to MSBuild. Here is an example:

/target:HelloWorld,GoodbyeWorld
/v:diagnostic

The contents above would be saved in a file called helloworld.rsp.

c:\Dev\Msbuild>msbuild HelloWorld.msbuild @helloworld.rsp

Summary

In conclusion, MSBuild is a great tool for build automation. Understanding how MSBuild works gives us the ability to be creative in build automation and continuous delivery.

Securing your local environment for Development

One of the most common tasks developers face is mimicking production environments locally. When it comes to running your local app securely, most developers either just run plain HTTP or create a self-signed certificate.

In this tutorial, I’m going to show you how to secure your local environment for development so you can run your application over HTTPS with no security warnings. We will use the tool makecert.exe to create a root X.509 certificate and then use that to sign our SSL certificates. Both tools used here ship with the Windows SDK.

What you’ll need.

  • makecert.exe – The makecert tool is used to create a root X.509 certificate.
  • pvk2pfx.exe – Pvk2Pfx copies the public and private key information contained in .spc, .cer, and .pvk files into a personal information exchange (.pfx) file.

Setting up your environment

We’ll begin by setting up our local environment. Create an ASP.NET web application. Then modify your hosts file, found at c:\Windows\System32\drivers\etc\hosts, to map dev.local to localhost (127.0.0.1):

127.0.0.1       dev.local

Create your Root Certificate

First, use the makecert tool to create a root certificate. There are numerous parameters you can use when generating this certificate, but the most important ones are outlined in the code below. This certificate is important because its private key is what we will use to sign our SSL certificate.

makecert.exe -r                         // self signed
             -n "CN=DevelopmentRoot"    // name
             -pe                        // exportable
             -sv DevelopmentRoot.pvk    // name of private key file
             -a sha1                    // hashing algorithm
             -len 2048                  // key length
             -b 01/21/2010              // valid from 
             -e 01/21/2030              // valid to
             -cy authority              // certificate type
             DevelopmentRoot.cer        // name of certificate file
             
// pvk2pfx copies the public and private key information in the .cer & .pvk files into a personal information exchange (.pfx) file
pvk2pfx.exe -pvk DevelopmentRoot.pvk    // Specifies the name of a .pvk file
            -spc DevelopmentRoot.cer    // Specifies the name and extension of the Software Publisher Certificate (SPC) file that contains the certificate
            -pfx DevelopmentRoot.pfx    // Specifies the name of a .pfx file.

Use the Root Certificate to Create the SSL Certificate

makecert.exe -iv DevelopmentRoot.pvk    // file name of root priv key
             -ic DevelopmentRoot.cer    // file name of root cert
             -n "CN=dev.local"          // name
             -pe                        // mark as exportable
             -sv dev.local.pvk          // name of private key file
             -a sha1                    // hashing algorithm
             -len 2048                  // key length
             -b 01/21/2010              // valid from
             -e 01/21/2020              // valid to
             -sky exchange              // key type (exchange)
             dev.local.cer              // name of certificate file
             -eku 1.3.6.1.5.5.7.3.1     // extended key usage (server authentication)

// pvk2pfx copies the public and private key information in the .cer & .pvk files into a personal information exchange (.pfx) file
pvk2pfx.exe -pvk dev.local.pvk         // Specifies the name of a .pvk file
            -spc dev.local.cer         // Specifies the name and extension of the Software Publisher Certificate (SPC) file that contains the certificate
            -pfx dev.local.pfx         // Specifies the name of a .pfx file.

Install the Certificates onto your computer

Run mmc at the command prompt to open the Microsoft Management Console.

In the console that appears, choose to add a snap-in and follow the prompts to select Certificates.

Right-click on Certificates under Trusted Root Certification Authorities and select All Tasks > Import.

Navigate to where your certificates were created and choose the DevelopmentRoot.cer file. Walk through the remaining steps and click Finish.

Now it’s time to install the dev.local certificate on your machine.

Go back to the management console and select Personal > Certificates. Right-click on Certificates and select All Tasks > Import.

Next, follow the wizard and select the dev.local.pfx certificate.

At this point, we’re ready to associate the certificate with the site in IIS.
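As a rough sketch of that final step (menu names approximate): in IIS Manager, select your site, open Bindings..., add an https binding with host name dev.local, and choose the dev.local certificate. Browsing to https://dev.local should then load without certificate warnings.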

Using access tokens in Swagger with Swashbuckle

Securing access to your API using access tokens is common practice. In this post, we’ll learn how to call secure API endpoints through the Swagger specification, specifically using Swashbuckle (an implementation of Swagger for .NET).

Understanding the Swagger Schema

This outline shows the basic structure of a swagger specification document. The document is represented in JSON, which is in turn used by Swagger-UI to display the interactive API documentation.
{
  "swagger": "2.0",
  "info": {
    "version": "v1",
    "title": ".NET Latest API",
    "description": ".NET Latest API",
    "termsOfService": "Some terms",
    "contact": {
      "name": "donetlatest Team",
      "email": "team@dotnetlatest.com"
    }
  },
  "host": "local.api.donetlatest.com:80",
  "schemes": [
    "http"
  ],
  "paths": {
    "/V1/api/Authentication": {},
    "/V1/api/Countries": {},
    "/V1/api/Clients": {}
  },
  "definitions": {
    "CountryDTO": {},
    "StateDTO": {},
    "ClientDTO": {}
  }
}

Parameters

The Path Item Object describes the operations on a single path. Each operation has a parameters list, which describes the inputs for a given endpoint.
"/V1/api/LitmusClients": {
"post": {
"tags": [
"LitmusClients"
],
"summary": "GET /api/clientsrn Gets an array of all clients",
"operationId": "Clients_Index",
"consumes": [
],
"produces": [
"application/json",
"text/json"
],
"parameters": [
{
"name": "Authorization",
"in": "header",
"description": "access token",
"required": true,
"type": "string"
}
],
"responses": {
"200": {
"description": "OK",
"schema": {
"type": "array",
"items": {
"$ref": "#/definitions/ClientDTO"
}
}
}
},
"deprecated": false
}
}
}

Types of Parameters

  • Path – Used together with Path Templating.
  • Query – Parameters that are appended to the URL.
  • Header – Custom headers that are expected as part of the request.
  • Body – The payload that’s appended to the HTTP request.
  • Form – Used to describe the payload of an HTTP request when submitting form data.
The swagger specification describes parameter types and how to configure them in detail: https://github.com/swagger-api/swagger-spec/blob/master/versions/2.0.md
Extending Swagger to Add a New Parameter

Swashbuckle's implementation of Swagger reads XML code comments to generate the required swagger specification. Unfortunately, if you require an authorization header (access token) to make requests, the XML code comments cannot provide this info to Swashbuckle. You'll have to manually inject the new parameter while the swagger specification is generated.

Swashbuckle provides an interface called IOperationFilter for adding new parameters. Implementing this interface looks something like this:
using System.Collections.Generic;
using System.Linq;
using System.Web.Http;
using System.Web.Http.Description;
using System.Web.Http.Filters;
using Swashbuckle.Swagger;

public class AddAuthorizationHeaderParameterOperationFilter : IOperationFilter
{
    public void Apply(Operation operation, SchemaRegistry schemaRegistry, ApiDescription apiDescription)
    {
        // Does the action's filter pipeline contain an authorization filter?
        var filterPipeline = apiDescription.ActionDescriptor.GetFilterPipeline();
        var isAuthorized = filterPipeline
            .Select(filterInfo => filterInfo.Instance)
            .Any(filter => filter is IAuthorizationFilter);

        var allowAnonymous = apiDescription.ActionDescriptor.GetCustomAttributes<AllowAnonymousAttribute>().Any();

        if (isAuthorized && !allowAnonymous)
        {
            // parameters can be null when the action declares no other parameters
            operation.parameters = operation.parameters ?? new List<Parameter>();
            operation.parameters.Add(new Parameter
            {
                name = "Authorization",
                @in = "header",
                description = "access token",
                required = true,
                type = "string"
            });
        }
    }
}
using System.Web.Http;
using Swashbuckle.Application;

public class SwaggerConfig
{
    public static void Register()
    {
        GlobalConfiguration.Configuration
            .EnableSwagger(c =>
            {
                c.SingleApiVersion("v1", "Wordfly API")
                    .Description("An API for the wordfly messaging platform")
                    .TermsOfService("Some terms")
                    .Contact(cc => cc.Name("Wordfly Team")
                                     .Email("team@wordfly.com"));

                c.OperationFilter(() => new AddAuthorizationHeaderParameterOperationFilter());

                c.IncludeXmlComments(GetXmlCommentsPath());
            });
    }
}


Microsoft embraces Open Source

Microsoft has made a dramatic shift by open-sourcing a number of its core technologies, with a central focus on community development. Here’s a brief summary of everything going on in the .NET world.

Visual Studio 2015 Preview, C# 6 & ASP.NET 5, .NET Core 5

.NET Core 5 is the new name for the cloud-optimized version of .NET. You can use it on Windows, Linux, or Mac.

.NET Core has two major components. It includes a small runtime that is built from the same codebase as the .NET Framework CLR. The .NET Core runtime includes the same GC and JIT (RyuJIT), but doesn’t include features like Application Domains or Code Access Security. The runtime is delivered via NuGet, as part of the ASP.NET 5 core package.

.NET Core also includes the base class libraries. These libraries are largely the same code as the .NET Framework class libraries, but have been factored (dependencies removed) to enable them to ship as a smaller set of libraries.

The focus and value of .NET Core is three-part:

  1. deployment,
  2. open source,
  3. cross-platform.

.NET Framework 4.6 is the next version of the .NET Framework. Some new features include:

  • WPF Improvements and Roadmap
  • Windows Forms High DPI
  • Next Generation JIT Compiler — RyuJIT
  • CLR Performance Improvements
  • Support for converting DateTime to or from Unix time
  • ASP.NET Model Binding supports Task returning methods

Visual Studio 2015 Preview – Visual Studio Community

There is now a new Visual Studio edition that is very similar to Pro and free for students, open source developers and many individual developers. It supports Visual Studio plugins like Xamarin or Resharper.

Performance Tips

The Visual Studio team has built something truly great for determining the performance characteristics of your code: PerfTips let you quickly and easily see performance bottlenecks as you are debugging your application. Other debugger and editor improvements include:

  • Intuitive Breakpoint Settings
  • Setting breakpoints on auto-implemented properties
  • Lambdas in the debugger windows
  • Core IDE and Editing Improvements

C# 6

Here’s the direct link to the C# changes:

https://t.co/7nU9UOjJLC
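For a quick taste, here is a small sketch of a few headline C# 6 features (the Person class is invented for illustration):

using System;

public class Person
{
    // Auto-property initializer
    public string Name { get; set; } = "Homer";

    // Expression-bodied member with string interpolation
    public string Greeting => $"Hello, {Name}!";

    public static void Main()
    {
        Person nobody = null;

        // Null-conditional operator avoids a NullReferenceException
        Console.WriteLine(nobody?.Name ?? "(no person)");

        // nameof produces a refactor-safe string for a symbol
        Console.WriteLine(nameof(Person.Greeting));

        Console.WriteLine(new Person().Greeting);
    }
}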

ASP.NET 5

ASP.NET 5 is the new Web stack for .NET. It unifies MVC, Web API and Web Pages into a single API called MVC 6. You can create ASP.NET 5 apps in Visual Studio 2015 Preview.

ASP.NET 5 has the following overall characteristics.

  • ASP.NET MVC and Web API, which have been unified into a single programming model.
  • A no-compile developer experience.
  • Environment-based configuration for a seamless transition to the cloud.
  • Dependency injection out-of-the-box.
  • NuGet everything, even the runtime itself.
  • Run in IIS, or self-hosted in your own process.
  • All open source through the .NET Foundation, and takes contributions in GitHub.
  • ASP.NET 5 runs on Windows with the .NET Framework or .NET Core.
  • .NET Core is a new cloud optimized runtime that supports true side-by-side versioning.
  • ASP.NET 5 runs on OS X and Linux with the Mono runtime.

Dependencies node for Bower and NPM dependencies

ASP.NET projects now integrate Bower and NPM into solution explorer, under a new dependencies node. You can uninstall a package through the context menu command, which will automatically remove the package from the corresponding JSON file.

Avoiding Herd Mentality by Asking “Why?”

My two-year-old daughter is in a phase of her development where she questions everything she doesn’t understand. She throws questions at us faster than a 90 mph curveball. I’ll admit there are times when the incessant “why is this blue?” and “why did you open the bottle?” become hard to tolerate, but this is an important stage of her development which we really need to encourage, not suppress.

You see, her search for a deeper understanding of things is helping to build her internal decision-making engine. She’ll be able to make better choices once she understands “the why”. I guess this is something I learned from my father, who would hammer this point home over and over again: “If you don’t know why you’re doing something, then there is a strong likelihood that you’re making a bad choice.”

The field of social psychology presents us with a very potent example. Soccer fans often break out into fights before games for the silliest reasons. What originally started as a misunderstanding between two opposing fans might turn into a big riot. Why? Because people jumped into the fight without understanding why they were even fighting. In psychology this is known as mob psychology or herd mentality.

As developers, we face similar choices. When we land that dream job we’ve been waiting for our whole lives and are asked to build a new feature, we often don’t question why things are being done the way they are and jump in head first. Let’s be honest: it’s easy to follow existing conventions without asking questions.

The problem with this approach is that we become like the soccer fans who start vandalizing existing structures without understanding why they’re doing it in the first place.

Questioning or seeking deeper understanding from your colleague, manager, wife or friend can sometimes come across as rude or even disrespectful. When I first began my career, I used to harbor feelings of animosity toward our team lead, who would incessantly question my code choices. I later learned how beneficial this was in helping me make better choices.

Developing complex applications will always present tough challenges and choices. However, understanding why you’re choosing one development approach over another will definitely go a long way toward enhancing your chances of success. In code reviews it’s important that we not follow a “herd mentality” way of thinking and simply nod our heads. We must question and seek clarity in order to stay on the path to success.

Guidelines for unit testing

Have you ever wondered what it takes to build a commercial jet? It often blows my mind to think of the hours engineers spend assembling components to build the plane. Interestingly enough, there are similarities between building software and assembling planes: the individual units of the application or the plane must be thoroughly tested to ensure the overall functionality of the whole. The testing of these units is what has become known as unit testing.

Unit testing requires you to test the functionality of individual units/parts/sections of your application in isolation. Testing in isolation ensures that you can confidently pinpoint bugs in code and verify that they have been fixed.

Phases of a Test – Arrange, Act, Assert

There are 3 generally accepted phases for any unit test.

The Arrange phase is where you create an instance of the class you need to test and set up the initial state of any objects. The Act phase is where you call the functionality that represents the behavior being tested. Lastly, the Assert phase is where you check that what actually happened is what you expected.
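Here is a minimal sketch of all three phases in an NUnit test (the Calculator class is invented for illustration):

using NUnit.Framework;

// Invented class under test, for illustration
public class Calculator
{
    public int Add(int a, int b) => a + b;
}

[TestFixture]
public class CalculatorTests
{
    [Test]
    public void AddShouldReturnSumOfTwoNumbers()
    {
        // Arrange: create the system under test and set up initial state
        var calculator = new Calculator();

        // Act: invoke the behavior being tested
        var result = calculator.Add(2, 3);

        // Assert: check that what actually happened is what we expected
        Assert.That(result, Is.EqualTo(5));
    }
}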

Tips for Writing Good Unit Tests

The primary objective of unit tests is to prove correctness, and you can do that by following these simple guidelines.

Prove a contract has been implemented

This is the most basic form of unit testing; it verifies that the contract between the caller and the method is being adhered to. For example, the test below validates a driver’s license number. Verifying that a method implements a contract is one of the weakest unit tests one can write.

[Test]
public void ShouldBeValidWhen8DigitsAndStartsWithLetter()
{
    var sut = new DriversLicenseValidator();
    const string driversLicenseNumber = "A5522123";
    Assert.That(sut.IsValid(driversLicenseNumber), Is.True);
}

Verify Computation Results

A stronger unit test involves verifying that the computation is correct. It is useful to categorize your methods into one of two forms of computation:

  • Data Reduction: occurs when a method accepts multiple inputs and reduces them to one resulting output. For example, the division test below accepts 2 parameters and returns a single output.

    [Test]
    public void VerifyDivisionTest()
    {
        Assert.IsTrue(Divide(6, 2) == 3, "6/2 should equal 3!");
    }

  • Data Transformation: these tests operate on sets of values, verifying that an input set is transformed into the expected output set (see the sketch after this list).
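As a hedged sketch of a data-transformation test (the TagNormalizer class is invented for illustration):

using System.Collections.Generic;
using System.Linq;
using NUnit.Framework;

// Invented class under test, for illustration
public class TagNormalizer
{
    // Transforms a set of raw tags into trimmed, lowercase tags
    public IEnumerable<string> Normalize(IEnumerable<string> tags) =>
        tags.Select(t => t.Trim().ToLowerInvariant());
}

[TestFixture]
public class TagNormalizerTests
{
    [Test]
    public void NormalizeShouldTrimAndLowercaseEveryTag()
    {
        var sut = new TagNormalizer();

        var result = sut.Normalize(new[] { "  Foo", "BAR  ", " Baz " }).ToArray();

        // The whole output set is verified, not just a single value
        CollectionAssert.AreEqual(new[] { "foo", "bar", "baz" }, result);
    }
}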

Establish That a Method Correctly Handles an External Exception

When your code connects to an external service, it is important to verify that your code handles exceptions gracefully. Getting an external service to throw a specific error on demand is tricky, so mocking tools help in this process.
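For example, here is a sketch using the Moq mocking library (IPaymentGateway and PaymentProcessor are invented for illustration):

using System;
using Moq;
using NUnit.Framework;

public interface IPaymentGateway
{
    void Charge(decimal amount);
}

public class PaymentProcessor
{
    private readonly IPaymentGateway _gateway;

    public PaymentProcessor(IPaymentGateway gateway) { _gateway = gateway; }

    // Returns false instead of letting a gateway failure bubble up
    public bool TryCharge(decimal amount)
    {
        try
        {
            _gateway.Charge(amount);
            return true;
        }
        catch (TimeoutException)
        {
            return false;
        }
    }
}

[TestFixture]
public class PaymentProcessorTests
{
    [Test]
    public void TryChargeShouldReturnFalseWhenGatewayTimesOut()
    {
        // Arrange: force the mocked external service to throw
        var gateway = new Mock<IPaymentGateway>();
        gateway.Setup(g => g.Charge(It.IsAny<decimal>())).Throws<TimeoutException>();

        // Act
        var result = new PaymentProcessor(gateway.Object).TryCharge(10m);

        // Assert: the exception was handled gracefully
        Assert.That(result, Is.False);
    }
}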

Prove a Bug is Re-creatable

Tests should be repeatable in any environment. They should be able to run in production, QA or even on the bus.

Write Positive and Negative Tests

Negative tests prove that something is repeatedly not working. They are important in understanding the problem and the solution. Positive tests prove that the problem has been fixed. They are important not only to verify the solution, but also for repeating the test whenever a change is made. Unit testing plays an important role when it comes to regression testing.

[TestMethod] 
[ExpectedException(typeof(ArgumentOutOfRangeException))] 
public void BadParameterTest() 
{ 
    Divide(5, 0);
}

Verify Tests Are Independent

Tests should not depend on each other; one test should not set up the conditions for the next test. A common way to keep tests isolated is shown below.
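One option, sketched here with NUnit, is to recreate shared state in a [SetUp] method so every test starts clean (the validator implementation is a stand-in invented for illustration):

using NUnit.Framework;

// Minimal invented implementation, standing in for the validator used earlier
public class DriversLicenseValidator
{
    public bool IsValid(string number) =>
        !string.IsNullOrEmpty(number) &&
        number.Length == 8 &&
        char.IsLetter(number[0]);
}

[TestFixture]
public class DriversLicenseValidatorTests
{
    private DriversLicenseValidator _sut;

    // Runs before every test, so no test inherits state from another
    [SetUp]
    public void CreateFreshValidator()
    {
        _sut = new DriversLicenseValidator();
    }

    [Test]
    public void ShouldBeValidWhen8DigitsAndStartsWithLetter()
    {
        Assert.That(_sut.IsValid("A5522123"), Is.True);
    }

    [Test]
    public void ShouldBeInvalidWhenEmpty()
    {
        Assert.That(_sut.IsValid(string.Empty), Is.False);
    }
}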

These simple guidelines will set you off on the journey of unit testing. Feel free to share any ideas you might have stumbled on.

Viewing application logs in realtime using Sentinel & NLog

We all know how important log files can be when troubleshooting issues in an application. While log files are great to have, sometimes you just want a real-time stream of information describing what is going on in your application. Sentinel and NLog provide a great way to achieve this.

Sentinel is a log viewer with configurable filtering and highlighting which can be used in conjunction with NLog to view an application's log entries in real time.

NLog Quick Setup

You can download and install NLog manually or just add it through NuGet in Visual Studio.
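If you use the Package Manager Console, the install command is:

Install-Package NLog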

Configuration File

Add a config file called NLog.config in the root of your project.

If you have a separate config file, make sure “Copy to Output Directory” is set to “Copy Always” to avoid many tears wondering why the logging doesn’t work.

Sample Config File
<nlog xmlns="http://www.nlog-project.org/schemas/NLog.netfx40.xsd" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" autoReload="true">
    <variable name="log.dir" value="${basedir}" />
    <targets async="true">
      
      <target name="file" 
              xsi:type="File" 
              fileName="${log.dir}/log.txt" 
              archiveFileName="${log.dir}/log.{#}.txt" 
              archiveEvery="Day" 
              archiveNumbering="Rolling" 
              maxArchiveFiles="10" 
              layout="[${longdate} | ${level}][${threadid}] ${message}${onexception:${newline}EXCEPTION OCCURRED: ${exception:format=tostring}${newline}${stacktrace:format=Raw}}" />
      
      <target name="errors" 
              xsi:type="File" 
              fileName="${log.dir}/errors.txt"
              layout="[${longdate} | ${level}][${threadid}] ${message}${onexception:${newline}EXCEPTION OCCURRED: ${exception:format=tostring}${newline}${stacktrace:format=Raw}}" />

      <target name="debug"
             xsi:type="File"
             fileName="${log.dir}/debug.txt"
             layout="[${longdate} | ${level}][${threadid}] ${message}${onexception:${newline}EXCEPTION OCCURRED: ${exception:format=tostring}${newline}${stacktrace:format=Raw}}" />
           
      <target xsi:type="NLogViewer"
               name="viewer"
               address="udp://127.0.0.1:9999"  includeNLogData="false"/>
    </targets>
    <rules>
      <logger name="*" minlevel="Info" writeTo="file" />
      <logger name="*" minlevel="Error" writeTo="errors" />
      <logger name="*" minlevel="Debug" writeTo="debug" />
      <logger name="*" writeTo="viewer" minlevel="Debug" />
    </rules>
  </nlog>
The most important thing to note in the setup is the NLogViewer target, which is set up to push log entries to the address “udp://127.0.0.1:9999”:
<target xsi:type="NLogViewer"
               name="viewer"
               address="udp://127.0.0.1:9999"  includeNLogData="false"/>

Sentinel Setup

You can download and install Sentinel from http://sentinel.codeplex.com/. It comes with an easy-to-follow wizard which should be fairly straightforward to set up.

Start Logging

To illustrate how easy it is to stream your log data, create a simple console application. Make sure to add a reference to NLog and add the NLog.config file as illustrated above.

using System;
using NLog;

namespace Sentinel
{
    class Program
    {
        private static readonly Logger _log = LogManager.GetCurrentClassLogger();
        static void Main(string[] args)
        {
            try
            {
               _log.Debug("This is something new that I just added.");
                _log.Warn("Lets Go!!");
                throw new ApplicationException();
            }
            catch (ApplicationException e)
            {
                _log.ErrorException("Something went wrong...", e);
            }
        }
    }
}


R Fact Sheet

R is a free, open source language for data analysis.

History

R (the language) was created in the early 1990s by Ross Ihaka and Robert Gentleman, then both working at the University of Auckland.

  • R is a dialect of the S language.
  • 1993: First announcement of R to the public.
  • 2000: R version 1.0.0 is released.
  • 2013: R version 3.0.2 is released.

What is R

R is an interpreted language (sometimes called a scripting language), which means that your code doesn’t need to be compiled before you run it.

R supports a mixture of programming paradigms. At its core, it is an imperative language (you write a script that does one calculation after another), but it also supports object-oriented programming (data and functions are combined inside classes) and functional programming (functions are first-class objects; you treat them like any other variable, and you can call them recursively).

Features

  • Runs on almost any standard computing platform (Mac / Linux /Windows)
  • Frequent releases and a lot of active development
  • Functionality is divided into modular packages
  • Sophisticated graphics capabilities
  • A great tool for interactive work