Saturday 25 August 2012

Upload very large files - Part 4

Earlier in this blog I talked about the general idea behind one solution for uploading very large files (I consider a file very large when it’s above 2GB), how the File API is becoming more popular, and the possible client side approaches. This time I’ll describe the Silverlight and JavaScript communication for the client implementation part of the solution. As a quick reminder, the client is responsible for:

  • Prompting the file dialog and storing locally the metadata related to the file to be uploaded.
  • Sending each packet with its id, the current packet number and the total number of packets.
  • Receiving the ACK for each packet and sending the last received + 1, unless it was the last one.

With the pure JavaScript client implementation in place, I proceeded to make a clone using Silverlight technology, as a fallback for when the browser doesn’t meet all the requirements for running the HTML5 implementation. I considered Flash, but as I mentioned earlier in this blog, I hate Flash; in addition, Silverlight has become more popular nowadays, and the project I was working on already included it. Anyway, if someone wants to make a Flash version, it will be welcome!

The visual elements are pretty simple (and rough): just a TextBox and a Button surrounded by a Grid layout. The goal is just to simulate the input file control: when the user clicks the button, a file dialog is opened in the same way the browser does it.

With this approach, all the code that manages sending data to and receiving data from the server was translated from JavaScript to C#; in order to simplify the code, I wrote a class named AsyncWebRequest. An interesting point of the Silverlight implementation is the communication with JavaScript: the usual way to do this is to annotate the public properties and methods with the [ScriptableMember] attribute and the class with [ScriptableType], so the client can interact through this interface.
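
For illustration, here is a minimal sketch of such a scriptable class, assuming the member names the JavaScript wrapper below expects (the real class lives in the FreshUpload source); registering an instance with HtmlPage.RegisterScriptableObject is the usual companion step, so JavaScript can reach it through the plugin’s Content property:

using System.Windows.Browser;

[ScriptableType]
public class Uploader
{
    [ScriptableMember]
    public bool HasFile() { /* ... */ return false; }

    [ScriptableMember]
    public string GetFileName() { /* ... */ return string.Empty; }

    [ScriptableMember]
    public double GetFileSize() { /* ... */ return 0; }

    [ScriptableMember]
    public void StartUpload() { /* ... */ }

    [ScriptableMember]
    public void CancelUpload() { /* ... */ }
}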

From the client side, first, I wanted to avoid carrying the Silverlight.js file around, so I had to inspect in depth what exactly that file does. Well, really not much: some helper functions that write the object tag with all the params, and something interesting that of course I copied for myself, the event setup. As far as my research went, there’s a param named onLoad to which we assign the name of the load function.

That works well if we have only one Silverlight object in the page, or if we have complete control over what is rendered in the page. My idea was that you can have as many FreshUpload instances as you want (you will see why). A little trick must be done: instead of expecting a function with a fixed name to exist, I add the load function to the window object (so it is visible to the Silverlight object), but with a random auto-incremented number appended to its name, so every newly created FreshUpload instance registers a load function with a completely different name. This function is implemented internally, and through its parameter we receive the host object and, with it, all the public methods and properties.

Once this key element is established, the code is little more than a wrapper around the internal Silverlight implementation, and it looks as follows:

prototype: {
    init: function () {
        // ...
    },
    getDOMElement: function () {
        return this.input.next();
    },
    hasFile: function () {
        return this.uploader.HasFile();
    },
    getFileName: function () {
        return this.uploader.GetFileName();
    },
    getFileSize: function () {
        return this.uploader.GetFileSize();
    },
    cancelUpload: function () {
        this.uploader.CancelUpload();
    },
    startUpload: function () {
        this.uploader.StartUpload();
    }
}
	

In the next post I’ll describe how this can be used seamlessly, without worrying about which implementation to use; I mean, some kind of “Factory” that determines which variant to use. In further posts I’ll also show some detailed examples on how to use it with either ASP.NET MVC or ASP.NET WebForms.

The whole source code can be found at https://bitbucket.org/abelperezok/freshupload.

Upload very large files - Part 3

Earlier in this blog I talked about the general idea behind one solution for uploading very large files (I consider a file very large when it’s above 2GB), how the File API is becoming more popular, and the possible client side approaches. This time I’ll describe the JavaScript client implementation part of the solution. As a quick reminder, the client is responsible for:

  • Prompting the file dialog and storing locally the metadata related to the file to be uploaded.
  • Sending each packet with its id, the current packet number and the total number of packets.
  • Receiving the ACK for each packet and sending the last received + 1, unless it was the last one.

The initial implementation, some kind of proof of concept, was made using HTML5 with the FileReader API, but since this cannot be the only implementation, it was necessary to leave open an “interface” so that further implementations in other technologies can be added without the risk of touching unrelated code fragments.

The object was named $.uploaderHtml5, as a jQuery extension, and the interface was defined as follows:

{    
    // initialization code
    init: function () { },
    
    // return the visual element associated
    getDOMElement: function () { },
    
    // return true if there's a selected file
    hasFile: function () { },
    
    // return the file name from the input dialog
    getFileName: function () { },
    
    // return the file size from the input dialog        
    getFileSize: function () { },
    
    // stop the current upload 
    cancelUpload: function () { },
    
    // start the current upload
    startUpload: function () { }
}
 

In the HTML5 implementation there are of course more internal functions that deal with the actual sending and receiving of packets, because it is responsible for all the verifications, such as parsing the ACK from the server and acting accordingly; the POST to the server must be done with all the required data so the server can interpret it and ask for a new piece.

The particularly interesting point here is how the File API is used to deal with the partial and vendor-dependent implementations: the “standard” slice method has three known variants (mozSlice, webkitSlice and slice), so there’s no choice but to check for each of them.

if ('mozSlice' in this.ufile) {
    // mozilla
    packet = this.ufile.mozSlice(startByte, endByte);
} else if ('webkitSlice' in this.ufile) {
    // webkit
    packet = this.ufile.webkitSlice(startByte, endByte);
} else {
    // IE 10
    packet = this.ufile.slice(startByte, endByte);
}
return packet;
 

Another HTML5 element used in this code is the FormData object, which is really easy to use for sending POST data to the server; in addition, the XMLHttpRequest object has the ability to upload files, which is very important in order to simplify the code; otherwise, an iframe would have had to be set up with a form inside containing an input type=file.

The whole source code can be found at https://bitbucket.org/abelperezok/freshupload.

Monday 2 July 2012

Upload very large files - Part 2

Earlier in this blog I talked about the general idea behind one solution for uploading very large files (I consider a file very large when it’s above 2GB), how the File API is becoming more popular, and the possible client side approaches. This time I’ll describe the server side part of the solution. As a quick reminder, the server is responsible for:

  • Determining if the request is to be handled by this mechanism; if not, just letting it go.
  • Determining whether the request is an initial packet or a data packet.
  • Handling each type of packet properly, giving the appropriate response so the client can continue.

This is part of a slightly more complex solution, named FreshUpload, which has the server component and client components using either JavaScript or Silverlight. The core element is the UploadHelper class; as the name indicates, it is a helper: it encapsulates the dirty logic of reading and processing the request data.

The methods ProcessInitialPacket and ProcessDataPacket do the actual work, but it’s necessary to know whether the request data corresponds to one or the other; that’s the job of the methods IsInitialPacket and IsDataPacket. The actual helper utility is another method, ProcessRequest, which acts as a façade to the whole mechanism.
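A minimal sketch of that dispatch, assuming the internal method names just mentioned (the real implementation lives in the FreshUpload source), could be:

public static object ProcessRequest(
    HttpRequestBase request, string filePath, int packetSize)
{
    if (IsInitialPacket(request))
        return ProcessInitialPacket(request, filePath);
    if (IsDataPacket(request))
        return ProcessDataPacket(request, filePath, packetSize);
    // not an upload request: return null so the caller can let it go
    return null;
}

An example of how to use it is as follows: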

[HttpPost]
public ActionResult Upload(FormCollection collection)
{
    var packetSize = 4 * 1024 * 1024; // default to 4 MB
    var filePath = Server.MapPath("~/_temp_upload/");

    var result = UploadHelper.ProcessRequest(
        Request, filePath, packetSize);

    if (result != null)
        return Json(result);
    return Content("");
}
 

This is using a controller action on ASP.NET MVC 3, where access to the request is quite easy and the conversion to JSON is stress-free too. The key points here are the packet size and the folder where the file is to be stored. The packet size must be exactly the same as the one configured on the client side in order to expect good behavior; I’ve used the 4 MB value because this is the default constraint imposed by ASP.NET, and of course this can be tweaked as you want in the web.config file.

Now let’s see how to use this element on ASP.NET WebForms. The idea I suggest here is to use a handler, whether a Generic Handler or an ASP.NET Handler; it could also be a WebForm, but that is not an elegant enough solution because WebForms are chained in a different pipeline and there can be a bit of noise, although it works as well. Here’s the code sample:

public void ProcessRequest(HttpContext context)
{
    var packetSize = 4 * 1024 * 1024; // default to 4 MB
    var filePath = context.Server.MapPath("~/_temp_upload/");

    var result = UploadHelper.ProcessRequest(
        context.Request.RequestContext.HttpContext.Request, 
        filePath, packetSize);

    if (result != null)
    {
        JsonResult(context.Response, result);
    }
    else
    {
        context.Response.Write("");
    }
}

private void JsonResult(HttpResponse response, object result)
{
    response.ContentType = "application/json";
    var serializer = new JavaScriptSerializer();
    var json = serializer.Serialize(result);
    response.Write(json);
}
 
 

Here we have a little more manual work, but not much: the JSON conversion and output are done completely by hand (if any WebForms people know how to achieve this with less code, please let me know, because I’m a little out of touch with this part of ASP.NET, although I support Scott Hanselman and his slogan that there is “Only One ASP.NET”). The other key point is how to get the Request as an instance of the HttpRequestBase class: the default context.Request is an HttpRequest instance, so after some minutes of try-error-retry I found this way to get the right value; again, if someone knows a better way to do it, you know where to find me.

In the next posts I’ll cover the other parts of this solution. The whole source code can be found at https://bitbucket.org/abelperezok/freshupload. I am still preparing the code samples, so in a few weeks they will be ready.

Wednesday 20 June 2012

Making easy asynchronous web requests

In web development it’s very common to communicate between applications via web requests, whether as a client of RESTful interfaces or APIs, or simply to send information to some server from any client technology. There are many ways to achieve this task in the .NET Framework; the most popular is probably the HttpWebRequest class, which supports both synchronous and asynchronous methods. The synchronous way is the simplest, but it brings the thread-blocking problem and wasted CPU time, and if we use it in a user interface it could reach the much hated Not Responding state. Without proper multi-threading awareness, this is not a good approach.
The asynchronous tactic can be used instead, but it’s a little hard to grasp at first glance: the same code I write in a few lines synchronously becomes at least three methods and many more lines of code. Just go to MSDN and search for HttpWebRequest.BeginGetRequestStream(...) and you’ll see sample code like this:
var request = WebRequest.CreateHttp("http://www.example.com/data/to/request");
request.Method = "POST";
request.BeginGetRequestStream(AsyncCallbackRequest, request);
(...)

private void AsyncCallbackRequest(IAsyncResult ar) 
{
    HttpWebRequest request = (HttpWebRequest)ar.AsyncState;
    Stream stream = request.EndGetRequestStream(ar);
 (...) 
 // manipulate the stream: write data or whatever
    request.BeginGetResponse(GetResponseCallback, request);
}

private void GetResponseCallback(IAsyncResult ar) 
{
    HttpWebRequest request = (HttpWebRequest)ar.AsyncState;
    HttpWebResponse response = (HttpWebResponse)request.EndGetResponse(ar);
    Stream streamResponse = response.GetResponseStream();
 (...) 
 // manipulate the response: read data and parse it 
}
In some scenarios, for example in a Silverlight application, we don’t have the whole set of .NET features available, and unless we use a third party library or another weird or abstract solution, this is the key to sending or receiving HTTP data across the network. If the application must do several types of requests involving different formats or data types, then the amount of asynchronous-logic methods might become larger than the actual business logic of our application, and that’s not good for anyone: maintenance is compromised.
So I’ve written a small class that encapsulates all this repetitive plumbing code; at the right moments it fires some custom events which can be listened to outside of the class, so the real logic is performed in those event listeners and the code stays clear and focused on the actual tasks to be done. I show the code here:
public class AsyncWebRequest
{
    private readonly string _uri;

    public AsyncWebRequest(string uri)
    {
        _uri = uri;
    }

    public void Start()
    {
        var request = WebRequest.CreateHttp(_uri);
        request.Method = "POST";

        var args = new PrepareDataEventArgs { Request = request };
        if (PrepareData != null)
        {
            PrepareData(this, args);
        }

        request.BeginGetRequestStream(AsyncCallbackRequest, request);
    }

    public event EventHandler<PrepareDataEventArgs> PrepareData;
    public event EventHandler<SendDataEventArgs> SendRequest;
    public event EventHandler<ReceiveDataEventArgs> ReceiveResponse; 

    private void AsyncCallbackRequest(IAsyncResult ar)
    {
        HttpWebRequest request = (HttpWebRequest)ar.AsyncState;

        var args = new SendDataEventArgs();

        // ask the listeners for the data to send
        if (SendRequest != null)
        {
            SendRequest(this, args);
        }            

        // only if there's data or data is not empty, write the actual data
        if (args.Data != null && args.Data.Length > 0)
        {
            Stream stream = request.EndGetRequestStream(ar);

            // Write to the request stream.
            stream.Write(args.Data, 0, args.Length);

            // Close the stream
            stream.Close();
        }
        // Start the asynchronous operation to get the response
        request.BeginGetResponse(GetResponseCallback, request);
    }

    private void GetResponseCallback(IAsyncResult ar)
    {
        HttpWebRequest request = (HttpWebRequest)ar.AsyncState;

        // End the operation
        HttpWebResponse response = (HttpWebResponse)request.EndGetResponse(ar);

        Stream streamResponse = response.GetResponseStream();
        StreamReader streamRead = new StreamReader(streamResponse);
        string responseString = streamRead.ReadToEnd();

        if (ReceiveResponse != null)
        {
            ReceiveResponse(this, new ReceiveDataEventArgs{ Data = responseString });
        }

        // Close the stream object
        streamResponse.Close();
        streamRead.Close();

        // Release the HttpWebResponse
        response.Close();
        //Synchronization.Set();
    }
}
This class fires three events. First, PrepareData is fired before the request is modified for getting the stream; this is a constraint imposed by .NET, because after this point the request cannot be modified, so this is the event used to set all the header information, such as the content type, among others. After the request stream has been obtained and before starting to get the response, the second event (SendRequest) is fired; here’s the place to set the actual data to send, if any. The length must be set too, because the buffer might be larger than the actual data to send. Finally, right after the data arrives, ReceiveResponse, the third event, allows us to set a listener in order to process the received data, for example to parse JSON data. An example of how to use it is shown as follows:
var asweb = new AsyncWebRequest(_postDataurl);
asweb.PrepareData += (o, args) => 
{
    args.Request.ContentLength = bytes.Length;
    args.Request.ContentType = "application/x-www-form-urlencoded";
};

asweb.SendRequest += (o, args) =>
{
    args.Data = bytes;
    args.Length = bytes.Length;
};

asweb.ReceiveResponse += (o, args) =>
{
    var json = JsonValue.Parse(args.Data);
    (...)
};

asweb.Start();
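For completeness, the event argument classes used above would look roughly like this (a sketch inferred from the usage; the definitive versions are in the source):

using System;
using System.Net;

public class PrepareDataEventArgs : EventArgs
{
    // the request is exposed so headers can be set before it is locked
    public HttpWebRequest Request { get; set; }
}

public class SendDataEventArgs : EventArgs
{
    // the buffer to send and how many bytes of it are meaningful
    public byte[] Data { get; set; }
    public int Length { get; set; }
}

public class ReceiveDataEventArgs : EventArgs
{
    // the raw response body as a string
    public string Data { get; set; }
}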
I hope this tiny class helps you deal with web requests while taking advantage of asynchrony.

Tuesday 1 May 2012

Upload very large files - Part 1

In my last post I talked about the settings we have to set server side on IIS in order to allow large uploads, and I also mentioned the security risks involved. The solution I’ll talk about this time is becoming more common as HTML5 is implemented more and more.

The main problem was related to the size of the post when uploading, but why not split the file into several small pieces and upload it piece by piece? Maybe some time ago it was crazy to think about manipulating a file on the client side from JavaScript, and the only ways were using Flash or Silverlight; now the File API is more popular and almost all modern browsers implement it, except IE, which is always late implementing the good stuff.

I’ve found some implementations such as plupload or http://hertzen.com/experiments/jsupload/ but I wasn’t 100% satisfied with them, so I took the ideas and got hands-on. In this first part I’ll explain how it works to achieve a bullet-proof mechanism; for example, the connection might be interrupted, or something can happen on the server and the worker thread gets shut down. The mechanism must be protected against these events.

I talked to a friend of mine, and after a brainstorming session I decided to imitate a kind of TCP connection with regard to sending packets and receiving the ACK (acknowledgement), that kind of thing. The idea is implemented as follows:

  • The browser opens the file dialog and a file is selected; the initial packet is prepared with the file name, the type (MIME) and the size.
  • On the server a new GUID is generated for this file; it will be the file id for the whole process.
  • A new file is created with this id; it will be the metadata file, where the information from the client is stored for further use.
  • A new blank file is created with this id and the extension of the original name; it will be the data file.
  • The reply to this initial packet is a JSON with the generated GUID.

The following steps on the client side are:

  • Read the file fragment according to the current packet number and send to the server the data packet with the id, the current packet number and the total number of packets.
  • Each data packet sent receives its ACK, which indicates the number of the last packet received OK; after that, the client will send the last received + 1, unless it was the last one.

The server processes the data packet as follows (see the sketch after this list):

  • If the metadata file or the actual file doesn’t exist, the result is an ACK with packet -1, which means the file must be uploaded from scratch.
  • If the file doesn’t have the required size, that is, the packet number minus 1 multiplied by the packet size, then return the required value so the client knows which is the correct fragment to resend.
  • If everything goes well, a success ACK with the current packet number is sent, encouraging the client to send the next packet.
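
To make this concrete, here is a minimal sketch of the data-packet handling in C#. The form field names, the .dat extension and the anonymous ACK object are all assumptions for illustration; the real logic lives in the FreshUpload source.

using System;
using System.IO;
using System.Web;

public static class DataPacketSketch
{
    public static object ProcessDataPacket(
        HttpRequestBase request, string filePath, int packetSize)
    {
        var id = request.Form["id"];
        var packet = int.Parse(request.Form["packet"]);

        var metadataPath = Path.Combine(filePath, id);
        var dataPath = Path.Combine(filePath, id + ".dat");

        // no trace of this upload: tell the client to start from scratch
        if (!File.Exists(metadataPath) || !File.Exists(dataPath))
            return new { ack = -1 };

        // size mismatch: tell the client which packet must be resent
        var expected = (long)(packet - 1) * packetSize;
        var actual = new FileInfo(dataPath).Length;
        if (actual != expected)
            return new { ack = actual / packetSize };

        // append this packet's bytes and acknowledge the current packet
        using (var stream = new FileStream(dataPath, FileMode.Append))
            request.Files[0].InputStream.CopyTo(stream);

        return new { ack = packet };
    }
}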

I’ve implemented this using HTML5 techniques, but not all browsers are capable of doing it; Modernizr could be a great helper here, since it does not do “HTML5 detection” but feature detection on demand. For now, the browser needs to be capable of using the File API, and if you want to support retrying after a page refresh, it should be capable of accessing localStorage, which allows storing data and retrieving it when necessary. The browser must also be able to use the XMLHttpRequest object for uploading files.

I haven’t implemented any fallback mechanism yet; it would be nice to have a third party technology such as Silverlight to substitute the missing browser HTML5 features. A good example of an implementation using Silverlight can be found at HSS.Interlink; they cover all the aspects of reading a fragment of a file and sending the request to the server.

I am planning to distribute my implementation via nuget.org, containing a js script and a dll with the classes to be called from any handler/controller, without having to deal with all the verifications mentioned above.

In the next posts I’ll cover the implementation of this solution and provide a code sample to understand it better.

Saturday 14 April 2012

Magic settings for upload large files under IIS and .NET

I remember when I started this blog, I talked about uploading large files using .NET and IIS. This time I’ll share with you some interesting points to be aware of when trying to upload large files. I consider large when the size is bigger than 2GB, because this is the actual limit in almost all web servers under Microsoft technology.
Based on my experience with this kind of situation, I can say that the web environment is not ready to transfer that much information. Remember the beginnings of the WWW, when everything that roamed through the wires was just text and small graphics; I’ll say more: remember what HTTP stands for (Hypertext Transfer Protocol), and what is hypertext?
However, transferring files to and from the server is something very common nowadays; therefore, it’s necessary to know how to handle it. For those who don’t need to upload files greater than 500 MB, I recommend the solution I presented in my first posts in this blog, even for somewhat bigger files; but when the size goes over 1 GB, it’s better to think about another solution.
The larger the sizes you allow to be uploaded, the bigger the security risk you take. Let me explain: solutions like NeatUpload and many others (open source or not) expect to make the upload in one big single request; although there are modules that copy block by block in order to be sure of not creating big memory peaks, the web server must allow a very large request, exposing it to a DoS attack, to give just one example. Another possible problem: if the client doesn’t have a fast connection (like me), there’s a (high) risk that a large upload gets interrupted, and then what? Start over again. On the server side the risk of execution timeout exists too (I’ve faced that), so the value of this setting must be increased.
If you wish to upload small or medium files there’s no problem using this kind of solution, but if you want to go over that size, I suggest another kind of workaround.
During my research for the best solution I’ve found the following links, which I hope will be useful to you: MSDN articles, Stack Overflow questions, among other very interesting sources written by the people most involved in this matter, where one can drink the honeys of knowledge.
As a quick conclusion, I can say the following in order to configure your application properly:
The default value for the request size in ASP.NET 2.0 and later is 4 MB. Under IIS6 the setting is configured via the httpRuntime node, for example:
 <configuration> 
  <system.web>
   <httpRuntime maxRequestLength="102400" executionTimeout="3600" />
  </system.web>
 </configuration> 
It is important to note here that the value of maxRequestLength is given in KB and executionTimeout is given in seconds, so the example above allows requests of up to 100 MB (102400 KB) running for up to one hour (3600 seconds).
Under IIS7+ the setting is configured via the requestLimits node, for example:
 <system.webServer>
  <security>
   <requestFiltering>
    <requestLimits maxAllowedContentLength="1024000000" />
   </requestFiltering>
  </security>
 </system.webServer>
In this case the value of maxAllowedContentLength is given in bytes, so the 1024000000 in the example is roughly 976 MB. This must be combined with the applicationHost.config file, usually located at C:\Windows\System32\inetsrv\config, whose requestFiltering section declaration should look like:
 <section name="requestFiltering" overrideModeDefault="Allow" />

As I said before, I hope this compilation saves you time of Google research. In a further post I’ll talk about another solution that I consider fits better in the web environment.

Friday 30 March 2012

Language selector in ASP.NET MVC 3 - Part 3

In the last post I talked about implementing a custom route in order to improve the route handling needed to support multiple languages in a web application. This custom route removes the need to duplicate the route (with and without the language parameter), and in the first part of this series I showed an example where the view rendered the language selector using ActionLink to generate the urls, with the collection to iterate expected to be part of the view model.

But this is not always possible to do; in fact, the most common scenarios I’ve dealt with involve putting this kind of component in a common place such as the Layout page, or in a partial which is included in the Layout page, and less frequently in a dedicated view. To achieve this separation, it’s common to use a Child Action to invoke a controller, which executes an action and usually returns a partial view.

If we take the solutions previously discussed and do a little refactoring, that is, create an action (LanguageList) in the languages controller and move the code that renders the links to a partial view (_langSelector), we can go to the layout page and invoke it as:

@Html.Action("LanguageList", "Langs")

Surely you are going to see the results. The controller action should be like this:

[ChildActionOnly]
public ActionResult LanguageList()
{
    var langs = LanguageProviders.Current.GetList();
    return PartialView("_langSelector", langs);
}

The HTML rendered looks like this:

<a href="/Langs/LanguageList">English</a> 
<a href="/es-ES/Langs/LanguageList">Espa&#241;ol</a> 
<a href="/fr-FR/Langs/LanguageList">Fran&#231;ais</a> 

As you can see, the urls always point to the controller Langs and the action LanguageList, instead of pointing to the current controller and action. Why does this happen?

When we invoke a child action, a new request to the target action is started; that means that when the route asks for the current request values, it obtains those of the new request. In other words, the “original” route values were lost at the moment of making the child request.

As these values were lost, my solution is simply to provide them via a parameter at invocation time. Once in the controller, the values must be passed to the view, maybe using the ViewBag; an even more sophisticated solution could involve the creation of a view model with this data ready to be read by the view.

Having said that, here is the quick solution:

The controller action

[ChildActionOnly]
public ActionResult LanguageList()
{
    ViewBag.CurrentValues = RouteData.Values["CurrentValues"];
    var langs = LanguageProviders.Current.GetList();
    return PartialView("_langSelector", langs);
}

At this point, and only to remark on it: the ChildActionOnly attribute is strongly recommended in order to prevent this action from being called directly, or even from an AJAX request, if that is not intended.

The invoke statement in the layout page.

@Html.Action("LanguageList", "Langs", 
 new { CurrentValues = ViewContext.RouteData.Values }
)

The goal of this new parameter is to pass the real current route values to the controller, so it will be able to grab them and pass them to the view.

The partial view

@{
    var currentValues = (RouteValueDictionary)ViewBag.CurrentValues;
    var tempValues = new RouteValueDictionary();
    foreach (var item in currentValues)
    {
        tempValues.Add(item.Key, item.Value);
    }
}

@foreach (var lang in Model)
{
    tempValues["lang"] = lang.Code;
    
    @Html.ActionLink(lang.Name, null, tempValues)
    @Html.Raw(" ")
}

At this point, I consider important the lines that make a copy (clone) of the route data received from the controller; otherwise the route values would be modified inside the loop that follows, and as a consequence the dictionary would affect the rest of the routes in the page, leaving the last lang.Code forever in the route values whenever this value was not supplied explicitly when generating the links.
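
As a side note, the manual copy loop above can probably be replaced with the copy constructor that RouteValueDictionary already provides, which clones the dictionary in one line:

var tempValues = new RouteValueDictionary(currentValues);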

Well, I hope this little grain of sand helps you in your day-to-day web programming tasks; any comments and ideas will be welcome.

Tuesday 20 March 2012

Language selector in ASP.NET MVC 3 - Part 2

A few weeks ago I talked about giving support to multilingual applications, or i18n for brevity. There I mentioned the strategy I followed using two routes (one including the language parameter and the other just the default), and I also promised to talk more about routes, especially routes whose goal is to match with or without the language parameter. Well, actually I wasn’t too happy with that implementation, but if it isn’t broken, don’t fix it. However, something deep in my mind suspected there must be another, more elegant solution. While doing Google research I found an interesting article on CodeProject, part 1 and part 2. Nice try! But my question was: why make a custom route class and still define the two different routes again at application startup?
I took it as a starting point and worked on it. The result: a custom route which is able to deal with the dirty stuff of properly prepending the language fragment and matching it from the request url.
The idea is to inherit from the Route class instead of RouteBase; I want to take advantage of the base implementation, which is responsible for doing the really dirty stuff, and I won’t reinvent the wheel. So the methods to override are GetRouteData and GetVirtualPath.
The custom route must know the default language; it receives it as a parameter. Some comments on the GetRouteData method. First, the goal of this method is to determine whether the current request must be parsed by us, or else let it go. Manually split the url by / and remove the ~ element; after that, if there is at least one segment and the first segment doesn’t match the regular expression ^[a-z]{2}(-[A-Z]{2})?$ (commented on in the earlier post), then ask the base implementation, and if it replies affirmatively, set the route value indexed by the language parameter to the default language. Otherwise, a little trick with the base implementation: save the current Url, prepend {lang}/ to it, ask the base implementation to check whether it is valid, and finally restore the Url value. As you can see, this is where I’ve done the work of two routes in one!
public override RouteData GetRouteData(HttpContextBase httpContext)
{
    var requestedURL = httpContext.Request.AppRelativeCurrentExecutionFilePath;
    var segments = requestedURL.Split('/').Where(x => x != "~").ToArray();

    // if request is not localized then call base
    if (segments.Length > 0 && !Regex.IsMatch(segments[0], @"^[a-z]{2}(-[A-Z]{2})?$"))
    {
        var result = base.GetRouteData(httpContext);
        if (result != null)
            result.Values[LangParam] = _defaultLang;
        return result;
    }

    // if request is localized then prepend the culture segment to the url
    var currentUrl = Url;
    Url = string.Format("{{" + LangParam + "}}/{0}", _url);
    var baseRoute = base.GetRouteData(httpContext);
    Url = currentUrl;
    // save and restore the current URL
    return baseRoute;
}
The next step to complete the route’s functionality is GetVirtualPath, which generates the url according to the route map data. Generating the url is a little harder than matching it, but not impossible. Let’s remember how url generation occurs: if the dictionary received as a parameter has all the necessary values for this route, the optional values may be taken from the current request (this gives us the idea of virtual directories commented on in an earlier post). So here the important parameter is the language: find it either in the current request or in the explicit values, and remove it (the actual route doesn’t have this parameter). Then call the base implementation and restore each case if necessary (this is important; otherwise the next routes generated would see altered values). Finally, prepend the language segment if necessary, and only if its value is different from the default language. I show the code here:
public override VirtualPathData GetVirtualPath(RequestContext requestContext, RouteValueDictionary values)
{
    string lang = "", lnRequest = "", lnValues = "";
    // look for the language param in current request context
    if (requestContext.RouteData.Values.ContainsKey(LangParam))
    {
        // if found then save it in lang variable
        lnRequest = lang = (string)requestContext.RouteData.Values[LangParam];
        // and remove it
        requestContext.RouteData.Values.Remove(LangParam);
    }

    // look for the language param in explicit values 
    if (values.ContainsKey(LangParam))
    {
        lnValues = lang = (string)values[LangParam];
        values.Remove(LangParam);
    }

    // call base method...
    var virtualPath = base.GetVirtualPath(requestContext, values);

    // restore from current request context if necessary
    if (!string.IsNullOrWhiteSpace(lnRequest))
        requestContext.RouteData.Values.Add(LangParam, lnRequest);
    // restore from explicit values if necessary
    if (!string.IsNullOrWhiteSpace(lnValues))
        values.Add(LangParam, lnValues);

    if (virtualPath == null) return null;

    // prepend language segment if necessary and only if different from the default language
    if (!string.IsNullOrWhiteSpace(lang) && !string.Equals(lang, _defaultLang, StringComparison.OrdinalIgnoreCase))
    {
        virtualPath.VirtualPath = lang + "/" + virtualPath.VirtualPath;
    }
    return virtualPath;
}
Now it’s time to invoke this brand-new route just implemented; I think it should be as similar as possible to use as the out-of-the-box route, like this:
routes.AddLocalizedRoute(
    "es-ES",                                                                        //The default Language 
    "{controller}/{action}/{id}",                                                   // URL with parameters (Without any sign of i18n)
    new { controller = "SimpleUser", action = "Index", id = UrlParameter.Optional } // Parameter defaults
);
In order to achieve this, I’ve implemented an extension method for the RouteCollection class, like the original MapRoute, which is an extension too; a minimal sketch is shown below.
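
A minimal sketch of that extension, assuming the custom route class shown above is named LocalizedRoute and takes the default language as its first constructor argument (the actual names in the source may differ):

using System.Web.Mvc;
using System.Web.Routing;

public static class RouteCollectionExtensions
{
    public static void AddLocalizedRoute(this RouteCollection routes,
        string defaultLang, string url, object defaults)
    {
        // same shape as the original MapRoute: wrap the defaults and add the route
        routes.Add(new LocalizedRoute(defaultLang, url,
            new RouteValueDictionary(defaults), new MvcRouteHandler()));
    }
}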

Tuesday 13 March 2012

Handling Excel spreadsheets XLS and XLSX with dot NET

More than once in my projects the format for input and/or output has been an Excel spreadsheet, either Excel 97-2003 (.xls) or Excel 2007 OOXML (.xlsx). The open source community offers several libraries; for legacy files the most famous is NPOI, which in fact is the .NET version of the POI Java project at http://poi.apache.org/. The results for the modern OOXML format were more diverse: ExcelPackage was the first, but that project was abandoned in late 2010. However, this code didn’t go to the trash: EPPlus is the evolved version based on it, strongly optimized by the new team.
If we want to use the two formats indiscriminately, it’s necessary to keep both libraries up to date, and since they are developed by two different teams, they have two completely different interfaces and ways of being used. Despite the fact that they are open source and the hard work is done, it’s a mess to deal with this difference in our code.
Starting to write if-else statements all over the code is not an option! That would lead us to the spaghetti code anti-pattern and the consequent difficulties in maintaining the resulting software. So, design patterns to the rescue. For the immediate needs, the quick solution is a common interface for basic operations on Excel files, with two different implementations behind it; the adapter pattern (aka wrapper) is the best match. The interfaces created are:
public interface IExcelHandler : IDisposable
{
    void LoadExelFile(string fileName);
    IExcelSheet GetSheet(int index);
    void Save();
    void Save(string filename);
}

public interface IExcelSheet
{
    string GetCellValue(int row, string column);
    void SetCellValue(int row, string column, object value);
}
 
The XML comments were removed for brevity, but the methods’ names are self-explanatory (or at least I think so); the only thing that might be confusing is the column parameter, which is defined as a string because it’s intended to be invoked with the actual Excel column name, such as A, B, etc. The operations defined above were enough to solve my needs; I know this might be extended for more complex scenarios, but this is a first version and I don’t like over-engineering, or as we say in my town: killing mosquitoes with cannons.
Having the common interface and all the implementations is not enough for a reusable component that abstracts the client from the concrete implementation: how are the instances of each implementation to be created according to the file format? Design patterns again: the Factory Method may be one solution. In fact, I made a simplified version using an interface that defines a Create method, which is responsible for creating the concrete IExcelHandler instance.
public interface IExcelHandlerFactory
{
    IExcelHandler Create(string filename);
}
 
This interface might be injected via your favorite IoC container and linked to the default implementation I provide in the ExcelHandlerFactory class, or, if you aren’t using dependency injection, it can be used as a singleton through the static Instance property.
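A minimal sketch of how such a factory can dispatch on the file extension follows; the concrete adapter class names (NpoiExcelHandler, EPPlusExcelHandler) are hypothetical, and the shipped ExcelHandlerFactory may differ in detail:

using System;
using System.IO;

public class ExcelHandlerFactory : IExcelHandlerFactory
{
    private static readonly ExcelHandlerFactory _instance = new ExcelHandlerFactory();

    // singleton access for clients not using dependency injection
    public static ExcelHandlerFactory Instance
    {
        get { return _instance; }
    }

    public IExcelHandler Create(string filename)
    {
        var extension = Path.GetExtension(filename);
        IExcelHandler handler;

        // dispatch on the file extension to the proper adapter
        if (".xls".Equals(extension, StringComparison.OrdinalIgnoreCase))
            handler = new NpoiExcelHandler();       // NPOI-backed adapter (assumed name)
        else if (".xlsx".Equals(extension, StringComparison.OrdinalIgnoreCase))
            handler = new EPPlusExcelHandler();     // EPPlus-backed adapter (assumed name)
        else
            throw new NotSupportedException("Unsupported Excel format: " + extension);

        // keep the interface's method name as declared above
        handler.LoadExelFile(filename);
        return handler;
    }
}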
Since NPOI and EPPlus are already packages in the NuGet Gallery, I’ve made a package with this code in order to contribute to the community and make it easier to bootstrap when handling Excel files. The package is available on the NuGet feed at https://nuget.org/packages/ExcelHandler; just execute the following command at the NuGet Package Manager Console:
Install-Package ExcelHandler
This will automatically add the NPOI and EPPlus packages as dependencies, with all the corresponding references in the project. An example of how to use it is as follows:
Console.WriteLine("Opening and modifying a file with Excel 2007+ format...");
using (var excelHandler = ExcelHandlerFactory.Instance.Create(@"..\..\modern_file.xlsx"))
{
    var sheet = excelHandler.GetSheet(1);
    Console.WriteLine(sheet.GetCellValue(1, "A"));
    sheet.SetCellValue(1, "A", "Test value in A1 cell");
    excelHandler.Save();
}
Console.WriteLine("... done.");
Console.WriteLine("Opening and modifying a file with Excel 97-2003 format...");
using (var excelHandler = ExcelHandlerFactory.Instance.Create(@"..\..\legacy_file.xls"))
{
    var sheet = excelHandler.GetSheet(1);
    Console.WriteLine(sheet.GetCellValue(1, "A"));
    sheet.SetCellValue(1, "A", "Test value in A1 cell");
    excelHandler.Save();
}
Console.WriteLine("... done."); 
 
This example just opens two Excel files, reads the content of cell A1 and sets some text in the same cell. Since the IExcelHandler interface inherits from IDisposable, it can be used in a using statement in order to simplify resource disposal. The source code is available at https://bitbucket.org/abelperezok/excelhandler.

Thursday 23 February 2012

Language selector in ASP.NET MVC 3

Developing web applications with multilingual (ML) support is always a very tricky affair; in the globalized world we live in nowadays, it’s almost impossible to keep developing web applications without giving the opportunity to visitors who speak languages other than ours.

I’ll attempt to share with the community some of my strategies for giving ML support using ASP.NET MVC 3.0, and I think the next versions will be backward compatible in these matters. Let’s start: one of the key points for ML is where to store the current language. There are some well-known places: in the session, in a cookie, or even in the url.

Each strategy has its own pros and cons; after digging into SEO and content indexing (here some SO guys talk about it), I decided to choose the url as the ideal place to insert the current language the web is going to be displayed in for the client. The general idea is to give the sense of “directories”, where the language is the first level and the other contents are the language directories’ “children” in the big web content tree.

Having said that, another problem comes up: the route. Although the routing system provided by ASP.NET MVC 3.0 is powerful and very flexible, I have to say it’s somewhat obscure to understand by definition (but not impossible), so let’s use an example of how the urls should look:

  • http://www.example.com/directory/content/params
  • http://www.example.com/es/directory/content/params
  • http://www.example.com/fr/directory/content/params

Note that the first sample doesn’t have the language explicitly indicated in the url; this will be the default language, defined by the site administrator, maybe. Even with this decision there are many things to think about. What to do when there is no language specified in the url? Maybe the easy way, assuming an internal (from configuration, or hardcoded) default language; or, for a better user experience, determining the user’s culture from the browser’s request; or even a nice solution a friend of mine implemented: look up the request’s IP address in a database table, determine the country, and display the page in that country’s language. The concrete strategy is up to you.

Whatever decision you take, it’ll be necessary to have a small (or medium-sized) element, usually at the top right of the page, which is the language selector. This fellow is responsible for rendering the links to the entry points in languages other than the current one. If you have a fixed number of languages (which I do not recommend, unless your client pays too little for the project; don’t blame me, it’s a joke, don’t do it), there’s not much difficulty, so it’s possible to implement a set of links with all the necessary dirty url construction.

But what if the number of languages varies, for example, because they come from a database? Then the routing system is here to save the world. At this point I’ll say that the whole explanation of how to get these routes is out of the scope of this post; I’ll go into more detail in a dedicated post. (Seriously, I promise I’ll do it!) Here are the route samples:

routes.MapRoute(
    "Default_ln", // Route name
    "{lang}/{controller}/{action}/{id}", // URL with parameters
    new { controller = "Home", action = "Index", 
        id = UrlParameter.Optional },  // Parameter defaults
    new { lang = "[a-z]{2}(-[A-Z]{2})?" }
);

routes.MapRoute(
    "Default", // Route name
    "{controller}/{action}/{id}", // URL with parameters
    new { controller = "Home", action = "Index", 
        id = UrlParameter.Optional }  // Parameter defaults
);
    

Well, these routes really do not differ too much from the default template in Global.asax.cs in Visual Studio 2010. The order is a key point for success, and the new parameter introduced here is lang in the first route. If the url contains the language, as in the second and third urls above, it will match the first route ("Default_ln"); if the url does not contain any information about the language, then it’ll match the second route ("Default").

Note the regular expression (regex) in the first route as a constraint; I mean, the first segment is a language if and only if the segment content matches the regex, which in fact verifies the culture format (such as es-ES or en-GB, or just es or en). You are free to modify it, and don’t be afraid: the regex doesn’t bite. A few quick checks are shown below.
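
A few quick checks of what that constraint accepts, just for illustration:

using System;
using System.Text.RegularExpressions;

var regex = new Regex("^[a-z]{2}(-[A-Z]{2})?$");
Console.WriteLine(regex.IsMatch("es"));    // True
Console.WriteLine(regex.IsMatch("en-GB")); // True
Console.WriteLine(regex.IsMatch("Home"));  // False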

The language selector is to be done so that the crawler (and the human visitors too) can find the content written in other languages. The first (and easiest) solution might be using static links pointing to the home page, followed by the language code, or empty for the default. Really, don’t do that! What if the user has navigated deeper into the content tree? In that case, the user would have to navigate the path again, and that’s not good. The link should point to the current content path, but as a child of the target language. The routing system allows us to do this just by setting the route values properly in any HTML extension method used in the views.

Assuming we have in our view model a property named Languages, which is a list of some class with Code and Name properties, a first implementation could be as follows:

foreach (var lang in Model.Languages)
{
    @Html.ActionLink(lang.Name, null, new {lang = lang.Code})
    @Html.Raw("&nbsp;")
}
    

The result is acceptable, but there’s a small glitch with url generation: the default language doesn’t match the initial definition about not explicitly showing the language; it generates the lang parameter for every language, even for the default one. What to do? Or better, why does this come about? The origin of this is that we are saying: for each language in the list, generate a link with the parameter lang equal to this language’s code. We could instead say: if this language is the default, then do not set the parameter lang, et c’est tout! As in this fragment:

foreach (var lang in Model.Languages)
{
    if (lang.Code == DefaultLangCode)
    {
        @Html.ActionLink(lang.Name, null, new { })
    }
    else
    {
        @Html.ActionLink(lang.Name, null, new { lang = lang.Code })
    }
    @Html.Raw("&nbsp;")
}
    

But that doesn’t work either. Why? If we are in the default language (/) everything appears to be ok, but if we change, for example, to Spanish (/es), then check out the link to our default: /es! Surprise! And it should be just /. The reason lies behind the scenes of the routing system: it “fills” the route parameters with the parameters explicitly passed, as in lang, and falls back to the current request context data; in this case the current request has the lang parameter that the explicit parameters did not pass, so the link ends up generated with the parameter lang equal to the current lang. Only after having understood this particular matter about the routing system are we able to fix the issue, as follows:

foreach (var lang in Model.Languages)
{
    if (lang.Code == DefaultLangCode)
    {
        var currentlang = ViewContext.RouteData.Values["lang"];
        ViewContext.RouteData.Values["lang"] = null;

        @Html.ActionLink(lang.Name, null, new {})

        ViewContext.RouteData.Values["lang"] = currentlang;
    }
    else
    {
       @Html.ActionLink(lang.Name, null, new {lang = lang.Code})
    }
    @Html.Raw("&nbsp;")
}
    

What’s the trick here? In the if branch for the default language we save the current value of the route parameter lang in a temporary variable, then set it to null (which has the same effect as removing it), generate the link without the lang garbage, and restore the value to the route data dictionary, so the rest of the languages generate their links with this parameter, which is necessary.

Sometimes I chat with my developer friends and I say: I hate web development, but at the same time I love it! Because most of the time I have to find the exact trick to achieve the expected result, and that’s the magic of software development!

Tuesday 21 February 2012

jQuery.NeatUpload – Examples refactored

jQuery.NeatUpload – Release 0.4

I’ve been using the jQuery plugin for NeatUpload and the bugs keep coming up; well, that’s the developer’s life. Catching bugs is an ability that every developer must have, and more than that: don’t be ashamed of them, be proud of fixing them.
This time I’ve improved the error handling for when the server sends an incomprehensible response. For instance, there is a limit set in web.config for the maximum file size to upload, and when the client attempts to send a file bigger than the limit, the server replies with some bizarre response.
As part of the mechanism implemented, it expects the HTML response in order to parse it and determine what to do; I mean, send information to the client indicating whether the request failed or not. Sometimes the HTML sent by the server was impossible to read because of a mysterious “Access denied” JavaScript error. I had no time to dig deeper into this matter, so I try-catch it, plus a little extra setup in order to be consistent and ready. Release 0.4 can be downloaded here.

jQuery.NeatUpload – Examples refactored

In order to prepare for launching a new release of NeatUpload including these JavaScript features, I’ve been doing some refactoring of the samples I published a few months ago. The first thing I’ll do is apologize to the WebForms people, because I left them on the road with the examples, covering just part 1. Now the updated versions have both implementations, WebForms and MVC3. Of course these new versions also have the bugfix mentioned above applied.
There’s something new in the web.config files: this time they are prepared for either IIS6 or IIS7+ and highly commented (well, this is questionable and relative to anyone’s needs), with every setting needed to set up NeatUpload, even clarifying the confusing and inconsistent size units that Microsoft has implemented in the two versions of IIS: they are in kilobytes and bytes respectively.
Before describing what’s in these examples, I want to alert you about something that I’ve dealt with many times and that has been the cause of mysterious errors in production scenarios. There is a setting in web.config that indicates the maximum time that IIS can wait before it shuts down the thread and returns a timeout error: it is the executionTimeout attribute under the httpRuntime node and it’s expressed in seconds. This is especially relevant when the files to be uploaded are really big.

What’s in the examples?

  • Default values: Shows how to use the minimum setup code and use the default values.
  • Changed Add and Upload texts: Shows how to change the texts for the elements that add and/or upload items, rather than the default internal texts.
  • Changed Add and Upload elements: Shows how to replace the default link for the add and/or upload element with custom elements like buttons.
  • Custom cancel elements. Part 1: Shows how to change the cancel element by creating a simple new element inline and binding it to the internal click event.
  • Custom cancel elements. Part 2: Shows how to change the cancel element by cloning a complex element and binding it to the internal click event.
  • Start Upload Automatically: Shows how to start the upload automatically, without using a queue and adding files one by one. This example uses just the add element.
  • Show Progress Bar: Shows how to provide nicer visual feedback by styling a progress bar that indicates the progress and changes color if an error occurs.
  • Validating Data: Shows how to intercept the action of sending the file to the server by performing validations on the associated data.
  • Sending extra Data: Shows how to send associated data along with the file by serializing it as JSON, and how to receive it properly on the server.
à la prochaine mes amis!

Monday 13 February 2012

Unobtrusive validation with dynamically created fields

Some time ago I started to take advantage of unobtrusive validation, one of the most important approaches for gaining cleanness in the resulting HTML. The way it works, in a nutshell, is as an adapter to the traditional jQuery validation plugin: it searches the HTML for every input element with the data-val HTML5 attribute set to true, then it reads every possible validation rule, creates the rules element and finally calls the validation plugin. This unobtrusive validation comes with the default template in MVC3 web projects using Visual Studio 2010.
Everything worked as expected when the entire HTML was generated from the server; I mean, as usual, using the HTML helpers such as @Html.TextBoxFor(...). Even when I worked with AJAX it behaved as expected, just using the little tweak I mentioned in this post.
Suddenly, a new scenario came to the table. This time it was necessary to validate elements created dynamically, client side only. When the page loads there are a lot of inputs with their data-* attributes properly rendered, but there is another set of input elements which will be created at runtime, and they must be validated as well. Up to here there is no problem, is there? Surprise: it doesn’t work at all; it simply ignores these new elements created at runtime. But what’s the difference? Previously I mentioned that it worked with AJAX, where new content is added to the DOM too. In fact, it does work with AJAX. So I dug a lot inside the source code and debugged, yes, debugging and tracing! God bless these modern tools for doing this kind of work in JavaScript!
Long story short, the root of the problem is inside the jQuery validation plugin, and I think it’s not really a bug, but something the plugin’s developer team did not think about. When the unobtrusive layer passes the data to the validator, the validator first checks its internal cache using jQuery.data(...), and if anyone has previously passed the validation rules for this particular form, it does nothing. What’s the problem? It is a good idea not to perform the same task more than once. When using AJAX, in every test I had done, an entirely new form arrived from the server; this new form was not in the cache, and that’s why it worked for me with AJAX. I show the exact code fragment where I found it:
 // check if a validator for this form was already created
 var validator = $.data(this[0], 'validator');
 if ( validator ) {
  return validator;
 }

 validator = new $.validator( options, this[0] );
 $.data(this[0], 'validator', validator);
 
But what if I add new elements dynamically to a form previously initialized with the validator plugin? That is not very bizarre, or maybe a little; anyway, I think it should be considered. I found a quick and dirty workaround to solve my particular case: reset the internal validator’s cache by hand and then command unobtrusive to re-parse the form, and voilà!
 $.removeData(form[0], 'validator');
 $.validator.unobtrusive.parse(form);
 

I have prepared an example in order to demonstrate that what I am saying is true. In the example I present a very simple login form using the jQuery validator and unobtrusive plugins, along with three buttons: create new field, reset Validator Cache, and Add new Form. How does it work? The initial form is completely normal: just click the Login button and everything works as expected; the validators pop out with their messages. The code is here.

Click the ‘create new field’ button and a new field appears; press Login again, and this new field is not part of the validation, despite the fact that its data-* attributes are properly configured.

Now click the ‘reset Validator Cache’ button and click the Login button again. Magic?! The new field is now part of the validation and we have a third validation message that wasn’t there before.

Finally, press the ‘Add a new Form’ button and an entirely new form is created below; this form works as it should, without removing anything from the cache.

I hope this experience, which took me hours, may help more people who want to develop rich client applications with a lot of JavaScript, as today’s web applications demand.

Monday 6 February 2012

I became a member of NeatUpload Open Source Project

I’ll start with a question: who hasn’t ever borrowed at least a small piece of code from the community? I think the number is very close to zero. And why? Is it something about not reinventing the wheel? And why is the community as huge as it is nowadays? The truth is that times have changed since the old days, when a small group of hackers had access to the technology, they were too jealous of their achievements, and there was too much selfishness in the air.

Now, as the internet and the communication media have globalized, exchange between developers around the world is something common, and probably that’s why you are reading this post. When you start learning a new technology/language/framework (the list might go on), you face many typical problems that a big number of developers around the world have faced too, and without the help of some of those experienced developers the work becomes tortuous.

I recently read this post from Scott Hanselman about getting involved in open source projects, where he explains how easy it is, even with an example. But why am I talking about all this? About a year ago I had to develop a jQuery plugin (which occupies several posts in this blog) and I decided to publish it for the community. After all, the problem I solved might be the same for other people around the world, so why not return the favor to the community? When I needed it, I had a lot of code available to just download and use. I know, the documentation is not always as good as I'd like it to be, but that's something I have to deal with.

A few weeks ago I asked myself: what about showing my work to the NeatUpload team? I value their opinion, so that's what I did. Surprise! I received an email from Dean Brettle and Joe Audette praising my work and inviting me to join the project.

My answer was very fast, and now I am a member of the NeatUpload project. My first contribution was the Spanish translations in the resources. According to Joe and Dean, the next release will include the plugin, documentation and examples mentioned earlier in this blog. And of course, further NeatUpload releases will include the latest version of jquery.neatupload.

In conclusion: contributing to open source costs nothing, is not hard to do, and the benefit you give to the community comes back to you whenever you search for an answer on Stack Overflow or just Google. So, contribute!

Wednesday 1 February 2012

jQuery.NeatUpload - Release 0.3

A new bug has been detected in the jquery.neatupload plugin. The issue occurred when more than one tab in the same browser window tried to upload files at overlapping times; the effect was progress bars going crazy with their progress indications.
The cause was a naming collision with the postbackId parameter, which identifies the current file being uploaded. The counter always started at 1, so if two requests from the same browser sent the same postbackId, the NeatUpload module couldn't tell who was who, and the polls that display the progress showed at moments the results of one upload and at moments the results of the other.
The solution: change the initialization value of the internal postbackId variable, this time seeded with a random number and rounded to eliminate the decimal separator (purely esthetic). This change is applied in version 0.3, which can be downloaded here.
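I won't reproduce the plugin code here, but the change is essentially one line; a sketch (the variable name postbackId is the real one, the exact expression is illustrative):

 // Before: every page started counting at 1, so two tabs could send the same id.
 // var postbackId = 1;

 // After: seed with a rounded random number so concurrent tabs almost never collide.
 var postbackId = Math.round(Math.random() * 100000);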

Tuesday 24 January 2012

Publish using MSBuild like VS2010

A few weeks ago I talked about automating the process of updating, compiling and deploying from source code to a running web application. I've made some improvements since then; the most relevant was applying the web.config transformation phase. I admit it: it's something I had read once at a glance and almost forgotten, partly because I didn't understand it very well on the fly. But this time I really needed it, so I gave it a second review and Voilà! (By the way, great job on the book "Pro ASP.NET MVC 3 Framework" from Apress.) All I needed was to invoke MSBuild with the appropriate arguments so that it does exactly the same as the Publish dialog in Visual Studio 2010.

After spending some time on Google research, and consequently Stack Overflowing a lot, I found that almost everybody agreed the solution was to modify the .csproj or .vbproj and include something like this:

<Target Name="PublishToFileSystem" DependsOnTargets="PipelinePreDeployCopyAllFilesToOneFolder">
    <Error Condition="'$(PublishDestination)'==''"
           Text="The PublishDestination property must be set to the intended publishing destination." />
    <MakeDir Condition="!Exists($(PublishDestination))"
             Directories="$(PublishDestination)" />
    <ItemGroup>
        <PublishFiles Include="$(_PackageTempDir)\**\*.*" />
    </ItemGroup>
    <Copy SourceFiles="@(PublishFiles)"
          DestinationFiles="@(PublishFiles->'$(PublishDestination)\%(RecursiveDir)%(Filename)%(Extension)')"
          SkipUnchangedFiles="True" />
</Target>
 

Then invoke MSBuild with /t:PublishToFileSystem and pass the property /p:PublishDestination with the location where the final compiled files should go. Everything looks great? But there is a small problem: I don't want to modify the .csproj file, I just want to do exactly what VS2010 does when you right-click and select Publish, as I said before. I want my script to be reusable, without having to remember to include obscure XML fragments in each project. So the quest begins.
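For reference, the invocation for that Stack Overflow approach would be a command line along these lines (the project name and destination folder are placeholders):

 msbuild MyWebApp.csproj /t:PublishToFileSystem /p:Configuration=Release /p:PublishDestination=C:\publish\MyWebApp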

I started to learn a little about MSBuild syntax and quickly went to the file Microsoft.WebApplication.targets, which lives very deep inside the MSBuild folder but is easily located by opening any .csproj file. Well, what's inside this odd file? It defines all the targets used by VS2010 when compiling or deploying a project. Since the new target defined by the stackoverflow.com people depends on the PipelinePreDeployCopyAllFilesToOneFolder target, I went hunting for it and discovered vital information about the parameters that target uses to copy the files. I started the experimentation phase, and finally the stars lined up and Voilà encore une fois! The magic combination is as follows:

MSBuild receives a positional parameter with the .csproj, no change there. The target, as mentioned before, is /t:PipelinePreDeployCopyAllFilesToOneFolder, and the properties are /p:Configuration=Release;BuildInParallel=true;PackageAsSingleFile=False;AutoParameterizationWebConfigConnectionStrings=false. All of this means: build in Release mode (optimal settings for a production server), take advantage of multi-core processing, do not package the result as a .zip file, and do not put replaceable placeholder garbage in web.config, which is unnecessary because I don't want to import the result with the IIS wizard. Another important property is /p:IntermediateOutputPath=..\TempObjWeb\, specified to avoid a mysterious compilation error after a deploy; it's related to the web.config remaining inside the project folder, where it can't be parsed because it's not inside a folder configured as a virtual directory or application in IIS. The most important property is /p:_PackageTempDir=<compilation target>, which I found by following the trace from target to dependent target inside the Microsoft.WebApplication.targets file; setting this property sets the destination path (like the dialog in VS2010) for the compiled web elements.
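Condensed into a single PowerShell invocation, the combination looks like this (paths are placeholders; the full script below builds the same argument list piece by piece):

$msbuild = "$env:SystemRoot\Microsoft.NET\Framework\v4.0.30319\MSBuild.exe"
& $msbuild "C:\src\MyWebApp\MyWebApp.csproj" `
    "/t:PipelinePreDeployCopyAllFilesToOneFolder" `
    "/p:Configuration=Release;BuildInParallel=true;PackageAsSingleFile=False;AutoParameterizationWebConfigConnectionStrings=false" `
    "/p:IntermediateOutputPath=..\TempObjWeb\" `
    "/p:_PackageTempDir=..\WebCompiled\"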

Other improvements I've made to my script are in the clean phase: this time all the temporary folders are deleted completely before the compilation begins. Continuing with the deletions, when I copy the final compilation results (using aspnet_compiler) I now perform a smart delete that excludes folders vital to my application, using -Exclude in PowerShell. I also added some new parameters to ease maintenance. I think there are still too many things to do, but as I am still learning I continuously improve my toolbox. Here is the entire script:

$aspnetcompiler = $env:SystemRoot + "\Microsoft.NET\Framework\v4.0.30319\aspnet_compiler.exe"
$msbuild = $env:SystemRoot + "\Microsoft.NET\Framework\v4.0.30319\MSBuild.exe"
$repo = "C:\Users\Administrator\Desktop\livost-repo"
$webapp = $repo + "\LivostWeb"
$webtarget = $repo + "\LivostWebPublish"
$fullcomptarget = $repo + "\WebCompiled"
$comptarget = "..\WebCompiled\"
$sitename = "IIS:\Sites\mvclivost"
$urltest = "http://localhost:8000/Home"
#$destiny = "Debug"
$destiny = "Release"

import-module .\PScommon\psake\psake.psm1
import-module .\PSCommon\WebAdministration
Set-Alias -Name ipsake -Value Invoke-psake

$old_pwd = pwd
cd $repo
hg pull
hg update

$msbuild_arg0 = $repo + "\Livost.sln"
$msbuild_arg1 = "/p:Configuration=$destiny;BuildInParallel=true"
$msbuild_arg2 = "/t:Clean"
$msbuild_arg3 = "/m"
$msbuild_arg4 = "/v:m"
$msbuild_arg5 = "/nologo"
$msbuild_arg6 = "/clp:Verbosity=minimal"
$msbuild_args = @($msbuild_arg0, $msbuild_arg1, $msbuild_arg2, $msbuild_arg3, 
$msbuild_arg4, $msbuild_arg5, $msbuild_arg6)

Write-Host "Cleaning the solution"
Write-Host "Executing $msbuild $msbuild_args"
& $msbuild $msbuild_args > out.txt

# removing temporary folders
$cleanwebtarget = $webtarget + "\*"
rm $cleanwebtarget -Force -Recurse
$cleanfullcomptarget = $fullcomptarget + "\*"
rm $cleanfullcomptarget -Force -Recurse
$cleantempobj = $repo + "\TempObjWeb\*"
rm $cleantempobj -Force -Recurse

# keep the rebuild step 
$msbuild_arg2 = "/t:Rebuild"
$msbuild_args = @($msbuild_arg0, $msbuild_arg1, $msbuild_arg2, $msbuild_arg3, 
$msbuild_arg4, $msbuild_arg5, $msbuild_arg6)

Write-Host "Rebluilding the solution"
Write-Host "Executing $msbuild $msbuild_args"
& $msbuild $msbuild_args >> out.txt


# modified here (publish instead of rebuild): this change allows applying the
# web.config transformation
$msbuild_arg0 = '"' + $webapp + '\LivostWeb.csproj"'
$msbuild_arg1 = "/t:PipelinePreDeployCopyAllFilesToOneFolder"
$msbuild_arg2 = "/p:Configuration=$destiny;BuildInParallel=true;PackageAsSingleFile=False;AutoParameterizationWebConfigConnectionStrings=false"
$msbuild_arg3 = "/p:IntermediateOutputPath=..\TempObjWeb\"
$msbuild_arg4 = "/p:_PackageTempDir=$comptarget"
$msbuild_arg5 = "/nologo"
$msbuild_arg6 = "/clp:Verbosity=minimal"
$msbuild_args = @($msbuild_arg0, $msbuild_arg1, $msbuild_arg2, $msbuild_arg3, 
$msbuild_arg4, $msbuild_arg5, $msbuild_arg6)

Write-Host "Building the Web Application"
Write-Host "Executing $msbuild $msbuild_args"
& $msbuild $msbuild_args >> out.txt

$anc_arg0 = "-v"
$anc_arg1 = "/"
$anc_arg2 = "-p"
$anc_arg3 = $fullcomptarget
$anc_arg4 = "-f"
$anc_arg5 = $webtarget
$anc_arg6 = "-c"

$asncargs = @($anc_arg0, $anc_arg1, $anc_arg2, $anc_arg3, $anc_arg4, $anc_arg5, 
$anc_arg6)

if (-not (Test-Path $webtarget)) {
    mkdir $webtarget > $null
}

Write-Host "Precompiling web application"
Write-Host "Executing $aspnetcompiler $asncargs"
& $aspnetcompiler $asncargs >> out.txt

$webSite = Get-Item $sitename

$poolName = $webSite.applicationPool
$pool = Get-Item "IIS:\AppPools\$poolName"
    
if ((Get-WebAppPoolState -Name $poolName).Value -ne "Stopped") {
    Write-Host "Stopping the Application Pool"
    $pool.Stop()
    Start-Sleep 3
}

Write-Host "Smart delete..."
$todelete = $webSite.physicalPath + "\*"
rm $todelete -Force -Recurse -Exclude @("Files","_temp_upload", "App_Data")

Write-Host "Copying files..."
$source = $webtarget + "\*"
cp $source $webSite.physicalPath -Force -Recurse

Write-Host "Waiting a few seconds..."
Start-Sleep 5
Write-Host "Starting the Application Pool"
$pool.Start()

& "${env:ProgramFiles(x86)}\Internet Explorer\iexplore.exe" $urltest

cd $old_pwd
 

Sunday 22 January 2012

Good practice for ASP.NET MVC Pager


When the data to be listed on a page becomes too long, everybody agrees it's necessary to split the long flow into several small pieces; that's known in the IT world as paging data, as you should know from day-to-day work on web applications. Every web technology gives us some way to do this without too much complexity, and ASP.NET MVC is no exception. After a quick Google research I found, of course, since this problem is so common, tons of implementations, but the most relevant (at least in my modest opinion) was the one at http://en.webdiyer.com/. There is another important implementation at https://github.com/martijnboland/MvcPaging, which also has a NuGet package to ease distribution. Almost all the implementations I found were inspired by ScottGu's PagedList idea.
I inspected many others, but one way or another the general idea was the same one mentioned before, over and over again. Finally I decided to grab the first one and start using it as it comes out of the box. It was really easy to use: just make sure your view model has an IPagedList object, then specify the controller, action, pagination options, AJAX options and custom HTML attributes, as in the sketch below.
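For instance, the controller side can be as simple as this sketch (assuming a PagedList-style ToPagedList extension method, as in the implementations mentioned above; the repository and Product types are invented for illustration, and whether the page index is zero- or one-based depends on the library):

public ActionResult Index(int page = 1)
{
    const int pageSize = 10;
    // Wrap the query results in an IPagedList so the view helper can render the pager.
    IPagedList<Product> model = repository.Products
        .OrderBy(p => p.Name)
        .ToPagedList(page - 1, pageSize);
    return View(model);
}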
Long story short, everything went well until I hit a scenario that needed a parameter combination the author did not foresee. Wow! I really appreciate the number of overloads the author wrote, but somehow it was not enough for my scenario, and that's where Open Source magic comes in. I downloaded the project and inspected it in more detail. It really covered the most common ways to build the pager, but I needed to specify the controller, action, route data and AJAX options all at the same time to accomplish my task. After all, I don't think this is such an uncommon scenario: I had a filter on my list, and of course when changing the current page the filter information must be kept between round trips to the server.
The solution was actually simple: just add another overload to the helper class, done. But I wanted to dig deeper into the project structure and implementation, and I discovered something I didn't like too much (something I've seen in almost every project in my research): the logic and the view were strongly coupled. I am not a software philosopher, but I like good practices, and life has shown me they are important. By strongly coupled I mean the same class is responsible for these two different tasks and concerns.
The relation between the classes is like this:
+-------------+    Render Pager    +--------------+
| PagerHelper |------------------->| PagerBuilder |
+-------------+                    +--------------+

The idea was to split that big class into two classes, PagerModel and PagerView (obvious naming?). PagerModel is responsible for all the logic involved, such as the tedious calculations of page numbers, number of pages, etc., and the model integrity validations, using a PagerItem class as transport; PagerItem holds data about each item to be shown in the pager but NOT how it is to be shown, and the model is only responsible for returning a valid list of PagerItem. This separation of concerns is not just about complying with the "holy bible" of good design practices; I did it because I foresaw that in the near future it was going to be useful, as the sketch after the diagram shows.
My refactoring looks like this:
+-------------+ Render Pager   +-----------+  Get Model +------------+
| PagerHelper |--------------->| PagerView |----------->| PagerModel |
+-------------+                +-----------+            +------------+
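A skeleton of that split, as I have described it (only PagerModel, PagerView and PagerItem are the real names; the members are illustrative):

using System;
using System.Collections.Generic;

// Transport class: holds WHAT each pager cell contains, never HOW it looks.
public class PagerItem
{
    public string Text { get; set; }
    public int PageIndex { get; set; }
    public bool IsCurrent { get; set; }
}

// All the logic: page math and integrity checks, not a single tag of markup.
public class PagerModel
{
    private readonly int totalItems, pageSize, currentPage;

    public PagerModel(int totalItems, int pageSize, int currentPage)
    {
        this.totalItems = totalItems;
        this.pageSize = pageSize;
        this.currentPage = currentPage;
    }

    public IList<PagerItem> GetItems()
    {
        int totalPages = (int)Math.Ceiling((double)totalItems / pageSize);
        var items = new List<PagerItem>();
        for (int page = 1; page <= totalPages; page++)
            items.Add(new PagerItem { Text = page.ToString(), PageIndex = page, IsCurrent = page == currentPage });
        return items;
    }
}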

That future moment arrived a few days ago, when I had to implement the same pager but with different markup: actually just HTML and links, with no AJAX compatibility, despite the fact that I'm using the unobtrusive scripts from the default template. With this new layout it was easy for me to create a new type of view, and, why not, make a little name change: PagerView became AjaxPagerView, and the new view was named HtmlPagerView. It looks this way:
            Render         +---------------+  Get Model             
       +------------------>| AjaxPagerView +-----------------+        
       |                   +---------------+                 V        
+------+------+                                         +------------+
| PagerHelper |                                         | PagerModel |
+------+------+                                         +------------+
       |                   +---------------+                 ^        
       +------------------>| HtmlPagerView +-----------------+        
            Render         +---------------+   Get Model             

This is my vision of reusable design. In this case, instead of writing a bunch of if-else or switch-case statements, I apply a mixture of the Strategy and Decorator design patterns, where every kind of view is a different rendering "strategy" that "decorates" the model. This allows me to implement many other types of view in the future without touching the core functionality and the actual logic: the model. The input options must be processed by the model class, which makes all the proper decisions and returns a data structure, in most cases called a view model, from which the view engine reads the data it needs. The view engine may depend in some way on the model, but the model can NEVER depend on the view engine.
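In code, the strategy side could be sketched like this (the interface and method names are illustrative, not necessarily the project's actual ones):

using System.Text;

// Every view is a rendering strategy over the same PagerModel.
public interface IPagerView
{
    string Render(PagerModel model);
}

public class HtmlPagerView : IPagerView
{
    public string Render(PagerModel model)
    {
        var sb = new StringBuilder();
        // Plain links only: the view decides the markup, the model decides the items.
        foreach (var item in model.GetItems())
            sb.AppendFormat("<a href=\"?page={0}\">{1}</a> ", item.PageIndex, item.Text);
        return sb.ToString();
    }
}

// AjaxPagerView renders the same PagerItem list, but with data-ajax attributes.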
I hope this little refactoring reflection has made you think a bit more about real code reuse and the importance of applying good practices and design patterns, not just following magic guidelines. I'd also like to point out the consequences of developing software without keeping these techniques in sight.
By the way, what do you think of my ASCII UML? Not bad, is it?

Thursday 12 January 2012

Extending Unobtrusive AJAX behavior

One of the nice features that comes with ASP.NET MVC 3.0 is, without a doubt, the addition of the scripts for unobtrusive behavior regarding AJAX and validation. In this case I'll focus on the AJAX side of this technique. It lets you write the least JavaScript code possible to achieve partial updates in a web page, and it has a big plus: if JavaScript is disabled in the browser (or anything else prevents the JS from executing), no problem, everything keeps working, just without partial updates.
In many scenarios this is good enough and that's all: write the server-side code in the controller and the appropriate HTML helpers using @Ajax, setting up the target element id to update after the remote call, maybe a "loading" element, or even a confirm dialog to prevent unwanted posts such as deleting elements. The most classic example is the typical administration backend of any website, where we have a list with a pager element at the bottom (usually a table) and, on another page, a form with the details for editing the values.
Now, if you want to do something a little different from the previous example, the trouble starts. Imagine this simple scenario: a form with several fields ready to accomplish some task; if the operation succeeds, a partial update is performed below the form with more data to be entered, and if it doesn't succeed, another partial update is performed above the form showing the server errors. After this slightly messy brief, the point is: there is more than one target element to update, and the decision is taken only on the server; there is no way to specify the target id just by declaring the HTML helper.
My solution for this issue: conventions to the rescue! I made my own convention in this case. A script is responsible for receiving every AJAX response, inspecting the content for the concrete target id, and placing the content inside the correct target. In detail: every partial view must have a hidden field with id = partialviewid whose value is the target id to be updated on the client. This solves the issue because we can have n partial views, each one referring to a different target, so in the controller it's possible to have any sort of if-else statements and return the appropriate partial view in each case.
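On the server side the convention could be used like this (a sketch; the action, model and view names are invented for illustration):

[HttpPost]
public ActionResult Save(OrderViewModel model)
{
    if (ModelState.IsValid)
    {
        // _NextStep contains a hidden partialviewid pointing below the form.
        return PartialView("_NextStep", BuildNextStep(model));
    }
    // _ServerErrors contains a hidden partialviewid pointing above the form.
    return PartialView("_ServerErrors", model);
}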
Let's talk about the script that parses the content. It consists of a function that resides in a JS file like any other and is included by the layout page after all the official unobtrusive scripts. It looks like this:
function receiveAjaxData(data) {

    var tempdiv = $("div[id=temporary-div-ajax]");
    if (!tempdiv.length) {
        $(document.body).append("<div id='temporary-div-ajax' style='display:none'></div>");
        tempdiv = $("div[id=temporary-div-ajax]");
    }

Here I ensure that temporary-div-ajax exists and is a child of body; this is where the content arriving from the server is placed.
    tempdiv.html(data.responseText);
    var hidden = $("input[type=hidden][id=partialviewid]", tempdiv);
    var loadfunc = hidden.attr("data-function-load");

Then I find the hidden field mentioned previously, and if a function to execute was defined, it is also grabbed here.
    if (hidden.length) {
        // Remove the hidden after to have taken its value
        var pvid = hidden.val();
        hidden.remove();

        // Place the content into the real target
        var destiny = $("#" + pvid);
        destiny.empty();
        destiny.append(tempdiv.children());
      
        // Re-parse the destiny node, in order to the validator work properly
        $.validator.unobtrusive.parse(destiny);

        // if function exists then evalute it
        if (loadfunc) {
            eval(loadfunc + "();");
        }
        return false;
    }
    return true;
}

If the hidden field exists, the content is placed at its real destination. As you can see, there is also the possibility of specifying the name of a function to be executed once the partial view is placed into the target, which is useful when we want to run view-specific startup logic. One important thing here is the return value: if it is true, the jquery.unobtrusive engine continues as usual, and if false, it stops and no more internal steps are executed.
How to use it? This question is becoming usual, eh?
Include this script after all the jquery.unobtrusive scripts, preferably in the layout page.
In every partial view include:
<input id="partialviewid" type="hidden" name="partialviewid" value="target-id" />
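If you also need view-specific startup logic, declare it on the same hidden field through the data-function-load attribute read by the script above (the function name is an example; it must be visible from the global scope):

<input id="partialviewid" type="hidden" name="partialviewid"
       value="target-id" data-function-load="initMyPartialView" />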

In every AJAX action link or form, specify the AjaxOptions parameter:
@Ajax.ActionLink("Create", "Create", null,
    new AjaxOptions { OnComplete = "receiveAjaxData"}
)

I hope this simple example proves useful in your day-to-day work without too much headache. Till the next!