Thursday, April 03, 2014

Two tier cache strategy


Application-level caching remains a common way to achieve high availability and low response times. There are various techniques for achieving this, and below is the technique used by an application I currently work on.
It employs a two-level cache where the first level (the Persistent Cache - not to be confused with a cache that resides in a database; both of these caches are in memory) is a long-term cache (with a 7-day expiry) serving end-user requests, while the second level (the Transient Cache) is a short-term cache (with a 30-second expiry) which feeds the long-term cache asynchronously, roughly as sketched below.
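To make the flow concrete, here is a minimal sketch of the idea. This is not our production code (which uses the Caching Application Block); the key scheme, the loadFromDb delegate and the refresh trigger are illustrative.

using System;
using System.Runtime.Caching;

public class TwoTierCache
{
    private static readonly MemoryCache Persistent = new MemoryCache("persistent"); // long term, 7 days
    private static readonly MemoryCache Transient = new MemoryCache("transient");   // short term, 30 seconds

    // End-user reads are always served from the long-term cache
    public T Get<T>(string key, Func<T> loadFromDb) where T : class
    {
        var value = (T)Persistent.Get(key);
        if (value != null) return value;

        value = loadFromDb();
        Set(key, value);
        return value;
    }

    // Invoked asynchronously (e.g. by a background timer), never by a user request:
    // once the transient entry expires, reload from the DB and refresh both levels
    public void RefreshIfStale<T>(string key, Func<T> loadFromDb) where T : class
    {
        if (Transient.Get(key) != null) return; // still fresh

        Set(key, loadFromDb());
    }

    private static void Set<T>(string key, T value)
    {
        Persistent.Set(key, value, DateTimeOffset.Now.AddDays(7));
        Transient.Set(key, value, DateTimeOffset.Now.AddSeconds(30));
    }
}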

Any object list or DB result which needs to be cached is cached in both the persistent and transient caches. The fact that the end user is fully isolated not only from the DB query but also from the cache-refresh mechanism works beautifully to make the application very responsive, irrespective of the inherent inefficiencies of an integrated Enterprise Environment.

The probability of stale data is quite low due to the high refresh rate of the transient cache, as well as the nature of the application's usage, which mostly consists of reading vast lists of data (cached) and then operating on one item (not cached).

The choice of cache manager, the Microsoft Caching Application Block, may be questioned. However, given the legacy nature of the application and the performance of the cache over the past few years, it's an interesting question whether it should be changed. In any case, I'm very interested in exploring alternatives, given that our application stack is ASP.Net MVC / Oracle.

Tuesday, December 31, 2013

New Year Resolutions

Another year has ended and it's time to look back and appreciate what you have done. Chances are, our attention will mostly be spent on the things we slipped up on, especially those dreaded new year resolutions made at the start of 2013 (if you still remember them, of course).

A new year resolution can be defined as an agreement you make with your current self to enforce a positive effect on your future self. In essence, it's a contract between current you and future you. We have all done this in the past and will continue to do so - lose weight, quit smoking, give up pornography, start exercising, learn swimming, work towards a promotion, and the list goes on. But time and time again we fail horribly. Can we rethink our strategy behind new year resolutions in order to improve the success rate? Or are new year resolutions just a big fat farce?

To analyse this problem better, let's turn our attention to the world of economics, where a similar idea can be found: the 'commitment device'. In the context of new year resolutions, let's shrink the scope of the commitment device to a personal level. In that sense, a commitment device is an effective way to force yourself into doing something that you want to do but aren't able to get yourself to do, or in most cases aren't persistent enough at. As with everything in economics, the critical factor here is the notion of 'incentive', or in this context 'penalty'. A commitment device should always be associated with a penalty.

There are several ways to establish a commitment device.
  • Establishing a (financial) penalty upon your (future) self in case you falter. The penalty should be sizeable enough to motivate you to stay on course.
  • Making the commitment public, thus putting your reputation on the line. With the advent of social media this is quite easy to implement now.
  • Creating a large obstacle to temptations, increasing the cost of succumbing to one. The penalty here could be time, effort, money etc.

One of these, or a combination, could work in a given situation. (There are also online services that help you do this - http://www.stickk.com/)

However, several factors influence which method has the higher probability of success.

One such factor is the nature of the action itself - i.e. whether it's refraining from an act or persisting with one. Quitting or cutting down on something - be it smoking, drinking or excessive junk food - is a preventive act; the obstacle-oriented or penalty-driven methods could work well here. If the commitment is about starting a new activity - exercising, say - putting your reputation on the line could be more effective.

In addition, the personality and social setup of a particular person can also make an impact. For example, a person who sits fairly high in the social pecking order (be it family or work) should opt for the reputation method, where the cost of failure would be high.

So can we expect the success rate of new year resolutions to improve by making sure they comply with this framework? Maybe... or maybe not. My personal opinion is that it will only improve the results for a certain class of personalities, and a small percentage at that.

So the more interesting question is: why doesn't it work? The analysis seems logical and rational enough. The catch is that although most classical schools of economic thinking assume humans behave rationally, we are awfully bad at it.

The trouble is that, deep down, people by nature really don't want to commit. No matter how cleverly the current self ties its future self to a commitment device, the future self will find a loophole (excuse) to breach it. Sometimes the problem with the commitment device is the non-linear pressure associated with it. As an example, the benefit of losing a few kilos (and possibly gaining them back) might not outweigh the stress someone goes through during the committed period; this is especially true of the reputation method. At the same time, we know very well that nothing gets done without a commitment.

“So what’s the sweet spot?” That's up to each individual to find out. At least by giving it a shot you can discover what doesn't work, so that when 2015 arrives, you'll hopefully be better at getting something done by your future self.

Happy New Year Everyone!

The post was inspired by this Freakonomics podcast.

Friday, November 15, 2013

Technical Disobedience

If you Google for “Technical Disobedience” you'll come across content featuring Ernesto Oroza, a Cuban industrial designer. For years, Ernesto studied and collected various improvised machines - mostly simple day-to-day gadgets that we generally take for granted - made by Cubans out of sheer necessity. The list spread from TV antennas to workplace machines, and included activities like repairing a machine to keep using it well past its intended lifetime, or using leftover parts of other 'dead' machines to build a new machine from scratch.


After the fall of the Soviet Union, Cuba went into a state of completely closed economy. The USA had already left Cuba, taking with it most of its investment, material resources and engineering intellect. Without the Soviet Union to help them out, the Cuban government was unable to provide for even basic needs.


Things that we take for granted, like a motorcycle, a TV antenna or an electric fan, were not to be found. Once the existing stock of items ran out of their lifetime, Cubans had to invent, and invent fast, to make use of what remained. Quite correctly, Ernesto sees the situation not as a form of imprisonment or constraint but rather as a form of liberation - freedom from the technical boundaries imposed by the objects. As an example, the casing of an electric fan is a form of boundary imposed by the product designer on the consumer; the consumer is not meant to violate this restriction. However, by opening up the casings of fans and using their internal parts for originally unintended purposes, the Cubans achieved a sort of technological freedom that the present-day consumer cannot even fathom.


We live in times when products are intentionally designed not to last long, and even before their intended short lifespans end, they are made obsolete by either a competing product or, most probably, the 'next generation' item of the same product line. (This philosophy actually extends to areas like food, where attributes like the 'best before' date push consumers to throw food away even when it is fit for human consumption.) So when a group of people - in this case a country - goes ahead and reuses products even after their natural death, it is quite extraordinary. It must be noted that this observation is not based on several one-off incidents: the Cuban government itself formally identified the phenomenon and fuelled the process by providing handbooks and guidelines encouraging people to invent more. After all, 'Worker, build your machine!' is claimed to have been said by none other than Ernesto 'Che' Guevara.

How does this realisation relate to our profession of building software? Are we trapped by the technical boundaries imposed by the languages, tools, frameworks or the formal education we have received?


I'd like to think of software engineering as a way of making a living out of repeatedly breaking boundaries, be they of process, technology or even people's behaviour. However, with the increasing productization of our field, we seem to be losing that incentive to go beyond the boundaries or to build things from scratch. The lack of popularity and understanding of what constitutes software engineering in broader society may well be another factor. After all, most people involved in software development are helping some big (or small) corporation grow its wealth, as opposed to trying to solve a burning problem of the society they live in.


On the other hand, for most users software is a black box: it does what it's supposed to do and is heavily locked down. Although it's true that more and more people are using software in their day-to-day lives, not many of them have a proper understanding of how it should be used, let alone of taking it beyond its original purpose.


One challenge the Cubans must have faced during their product-augmentation endeavours is scalability - the scalability of ideas. Suppose a couple of guys come up with an ingenious way to resurrect a machine; how can this be scaled across the country? With information technology infrastructure expanding every day, scalability should be the least of your problems if you want to break free. Sadly, this is not the case in most instances. Inherent issues with the way software is designed (client-server or binary delivery) and artificial constraints imposed by vendors' delivery mechanisms (the Apple App Store and its devices) make it quite hard to share an 'improved' software product, even if you are somehow able to produce one.


Fundamentally, it's even questionable whether software can be taken apart the way machines were in revolutionary Cuba. The Cubans pulled it off even though the original product designs had no intention of being improved upon by their users. Suppose one of our design requirements were to facilitate end-user tinkering - could we even design for it? And how can this idea coexist in a world where most innovations are based on ever more centralized software (SOA) and data (Big Data)?


I suppose this idea is only applicable to a certain class of software - for example, when the intended use is purely personal (or within a group of people). However, this class is ever-increasing with the popularization of mobile devices.


Maybe being 'disobedient' is not only cool, but can be useful or even revolutionary.



Thursday, September 19, 2013

Glimpse - Firebug for Server

Glimpse is a browser-based monitoring/analysis tool for your server-side code. If you have used Firebug (Firefox), Google Chrome dev tools or F12 in IE, you will feel right at home using Glimpse. After SignalR, this is the coolest piece of technology I've come across in the Microsoft web development space.


Glimpse comprises a server component and a client component. The server component collects server information and sends it as JSON to the client component, which renders it. One of the beauties of Glimpse is its extensibility. The developers (Anthony & Nick) have done a great job of keeping both the server and client sides as extensible as possible. Currently the server supports ASP.Net MVC and WebForms (WebApi and OWIN support is in the works), while the client supports any modern browser (yeah... not you, IE < 8).





What I want to note today is not why it's absolutely necessary for a Microsoft web developer to be aware of Glimpse, but rather a few properties of the Glimpse project that serve as pointers to a successful open source project. There seem to be a number of successful open source projects based on .Net (especially in the web domain), and Glimpse is a classic example. I will note down a few properties that I feel have contributed to its success (besides the cool way it solves a real problem).


Architecture that invites extension

Getting people to use your product is the best way of promoting it. A very close second is getting developers to extend it. Developer engagement enhances the application and expands its reach, sometimes into areas you never initially thought it would go.

The capability to write your own policies (which control Glimpse), extensions (which add capabilities), tabs (which add client views) and persistence models (the default persistence is in-memory with a limited-size queue) has really accelerated the growth of Glimpse. This is facilitated by a well-thought-out design of features and interfaces. I think extensibility should not be an afterthought but a prime goal in any open source project with considerable potential reach.



Code that’s easy to comprehend

Although the Glimpse documentation is still a work in progress, that hasn't stopped developers from getting engaged. The code is in a public GitHub repository and anyone can read or contribute to it. Notably, the code does not act as a barrier to understanding the design of the solution: it's quite easy to read and figure out what's going on without a lot of documentation.


Easy to integrate - Easier to upgrade

I've got one word for this: NuGet. No .Net open source project can ignore NuGet support. NuGet brings ease and consistency to the whole dependency-management business. It has done away with the configuration mess that we .Net developers used to have, and upgrades are seamless. Glimpse has gone the extra mile here: they show you a 'new version available' link in the client UI as and when new versions are released. All you have to do is run 'Update-Package' in the NuGet Package Manager Console in VS.
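For reference, the whole cycle is a couple of Package Manager Console commands (package names from memory; pick the Glimpse package matching your stack):

PM> Install-Package Glimpse.Mvc4    # initial install; pulls in the Glimpse core packages
PM> Update-Package Glimpse          # when the client flags a new version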


Convention over configuration

Another reason for user/developer delight with Glimpse is its preference for convention over configuration. There's very little configuration considering the extensibility of the product. All the extensions are interface-based, automatically discovered and bound at runtime, which gets rid of a lot of potential configuration. This, of course, enhances buy-in a great deal.


Even the tabs (the UI components which show information) are developed with convention in mind: typical JSON data structures are automatically rendered using grids. Without writing a single piece of UI code you can add a customised tab to the Glimpse client, as sketched below.
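As a hedged illustration (written from memory of the Glimpse 1.x extensibility API, so check the docs for the exact signatures), a minimal custom tab could look like this:

using System;
using System.Collections.Generic;
using Glimpse.Core.Extensibility;

// Discovered and bound automatically at runtime; no registration, no UI code
public class ServerClockTab : TabBase
{
    public override string Name
    {
        get { return "Server Clock"; }
    }

    public override object GetData(ITabContext context)
    {
        // a plain dictionary gets rendered as a grid by the client, by convention
        return new Dictionary<string, object>
        {
            { "Server time (UTC)", DateTime.UtcNow },
            { "Machine", Environment.MachineName }
        };
    }
}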


Dedicated developers

With Glimpse you've got two full-time developers paid by a company (RedGate) to work on the product. It's not common for someone to be able to work on their 'pet project' full time and get paid for it. The community should be grateful to RedGate.


Nick and Anthony provide quick and excellent support on various forums: GitHub, StackOverflow and Google Groups. Their presence helps filter and funnel concerns (e.g. from StackOverflow) to GitHub, which is where the actual product gets improved.


GitHub & low barrier for contribution

We've already mentioned that being open source with simple code and design encourages developers to take part. Glimpse does another simple yet effective thing: labelling selected GitHub issues as 'Jump In'. 'Jump In' tickets are chosen by the experienced developers to invite newcomers to the code base. This lowers the barrier a great deal for anyone who wants to contribute, or at least get their hands dirty and learn in the process.


A use case

I have integrated Glimpse into an Enterprise Web Application and use it for both development and user-support activities. The interesting use case came when Glimpse proved useful in troubleshooting a nasty problem we had.


The user base for our enterprise web app is geographically distributed across different domains and networks. Once in a while we get a user complaining about the application being 'slow' while the server logs indicate nothing out of the ordinary. Although we were convinced that the issue was either client (browser) related or, more likely, network related, it was hard to prove with solid evidence, given our lack of access to (and relationships with) the users and network operators.
Glimpse was used in this scenario as follows:
  • A Glimpse policy was written to allow different levels of access to users on demand. Glimpse allows request data to be captured without sending it back to the clients; we used this to capture end-user data. (A sketch of such a policy follows this list.)
  • The Glimpse configuration URL (glimpse.axd) was sent to the user, who was asked to click the 'Turn Glimpse On' button.
  • Developers were then able to watch the Glimpse traffic in their browsers, including the traffic of the user we were interested in.
  • When Glimpse broke down the response time between client, wire and server, it was clear who the culprit was.
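Here's roughly what such a policy looks like; a sketch from memory of the Glimpse 1.x IRuntimePolicy interface, with the user lookup as a placeholder for our real on-demand access list:

using Glimpse.AspNet.Extensions;
using Glimpse.Core.Extensibility;

public class CaptureForSelectedUsersPolicy : IRuntimePolicy
{
    public RuntimePolicy Execute(IRuntimePolicyContext policyContext)
    {
        var httpContext = policyContext.GetHttpContext();
        var userName = httpContext.User == null ? null : httpContext.User.Identity.Name;

        // PersistResults captures the request data server side
        // without sending anything back to the browser
        return IsBeingTroubleshot(userName) ? RuntimePolicy.PersistResults : RuntimePolicy.Off;
    }

    public RuntimeEvent ExecuteOn
    {
        get { return RuntimeEvent.EndRequest; }
    }

    private static bool IsBeingTroubleshot(string userName)
    {
        // placeholder: in reality this consults the on-demand list of users under investigation
        return userName == "jane.doe";
    }
}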



This gave our team a chance to present the business with proper evidence that the occasional performance issues were caused by intermittent network/client problems.

Friday, July 12, 2013

Managing Oracle Clients for .Net Apps


This is more of a self-reference note, and a bit of a victory celebration as well. After several attempts spanning a couple of years, I feel I have come to grips with managing Oracle clients in a Windows environment.


An MSSql developer (like I was 3 years ago) will have no idea how frustrated a .Net developer can get trying to manage driver/client requirements with an Oracle DB. In addition to the natively complicated way Oracle goes about doing things, the convoluted state of the Oracle drivers for .Net didn't help. At the time there were two commonly available client drivers, one by Microsoft and one by Oracle. Microsoft packaged their driver along with the .Net framework, so many ended up using that one, only to realise later that in a production environment you'd better have Oracle's own Oracle driver. They were also named quite ambiguously: System.Data.OracleClient and Oracle.DataAccess. At last Microsoft decided to deprecate their driver as of .Net 4.0, so the choice is quite obvious now. The Oracle driver for .Net is now commonly identified as ODP.Net (Oracle Data Provider for .Net).


ODP.Net comes in different versions, mainly targeting major Oracle releases like 10g, 11g etc. Oracle's documentation around installing and managing these drivers is quite confusing. Also, on a developer machine there's a high possibility of different versions coexisting. To make it worse, in the environment I currently work in, an outdated Oracle client is installed by default to support some legacy enterprise applications, so coexistence is inevitable.


When development of a new application comes about, we have to choose between continuing with the enterprise-approved legacy driver and a more modern, up-to-date driver, for which we would need to wrestle with the challenges (both technical and bureaucratic) posed by rolling out a new driver within the enterprise. Of course we chose the latter and went with Odp.Net 11.2.


So as developers we had to do several things:

1. Install the Oracle client (something like 11.2) from Oracle in both dev and server environments

2. Make sure the project references the Oracle client instead of the Microsoft client, both at compile time and at runtime

3. Make sure our application resolves to the new Oracle client


This is pretty much what we needed to get a web application up and running on a server that we manage. However, getting client applications (e.g. a WPF application) rolled out posed a far greater challenge, as I'll explain later.


Installing Oracle Client

This is not an exhaustive step-by-step instruction set, but rather some of the minor points that we as developers tend to step over and ignore, until we realise we've wasted a lot of time cursing Oracle.

  • There are two editions, 32-bit and 64-bit. You may want 32-bit for developer machines, but most of your servers will probably be 64-bit
  • For either edition you get one package bundled with Oracle Dev Tools and a slim version with just the client. I prefer the slim version (especially since I need to target servers later in the installation process)
  • After you unzip the package, read the readme file carefully. I found it useful on several occasions.
  • Make sure you have full admin rights on the box. Without admin rights, the installation will look like a success when it isn't. (See verifying the installation, the last bullet point)
  • Be ready to restart the server once done. (This can be problematic in a production environment, where a reboot needs to cut through a lot of red tape)
  • Make sure you name the installation folder appropriately, so it can be distinctly identified in case of multiple Oracle client installations
  • Verifying the installation
    • Use a tool like Oracle Locator Express to check whether the new Oracle client turns up
    • Check the installation folder for the availability of all installed components (like a folder for each .Net framework under the odp.net folder)
    • Check machine.config for an entry like the following:
<section name="oracle.dataaccess.client" type="System.Data.Common.DbProviderConfigurationHandler, System.Data, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089" />
(I found that when full admin rights are missing, this last step is not completed even though the two before appear successful)

Referencing the correct Oracle client from the project (Compile Time)

Once the installation is complete, it's easy to reference the client. The assembly of interest is Oracle.DataAccess.dll. Generally this is kept as a dependency (copied) in the current project directory, copied from the odp.net folder of the client installation. Make sure you copy the version matching the .Net framework in use.

Making sure that the application finds the new Oracle Client (Run Time)

In a runtime environment with just one Oracle client installation, this is quite easy. You just have to make sure that the ORACLE_HOME environment variable points to the correct folder. (Again, you can use Oracle Locator Express to find the Oracle home of a given client installation.)

However, in an environment with multiple Oracle clients, this can be tricky. To isolate your app from any other client installation you can set ORACLE_HOME for the application process alone, as sketched below. (This piece of code should be invoked at the start of the application, before anything to do with the Oracle DB.)
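The snippet itself is a one-liner; the installation path below is illustrative (in practice, read it from configuration):

using System;

// Point this process (and only this process) at the intended Oracle client
Environment.SetEnvironmentVariable(
    "ORACLE_HOME",
    @"C:\oracle\product\11.2.0\client_1",
    EnvironmentVariableTarget.Process);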



          
The good thing about this approach is that it doesn't change the environment variables for any other application running on the same machine.

The above steps are sufficient to get an application (web or Windows) running on a machine that has an Oracle client installation. However, there are instances where installing the client on every single machine that wants to run the application is not possible or productive; a self-contained Windows Forms or Silverlight app in an enterprise is a classic example. In these instances one can resort to installing the client in a centralized location and enabling the client apps to resolve it from that central location.

UPDATE:

A different type of error was encountered when installing the Oracle client on a Windows Server 2008 R2 box. The issue was caused by a permission problem on the $ORACLE_HOME folder: applications were not able to connect to the driver. This was fixed by granting the 'Authenticated Users' group Read & Execute permissions on the $ORACLE_HOME folder. For details refer to this link.

UPDATE: 
Another potential error comes from having residual (previous) Oracle installation directories in your PATH variable. Make sure the current installation directory appears before any other installation directory in the PATH. If your installation directory is c:\oracle, the PATH should contain c:\oracle and c:\oracle\bin.

Sunday, April 07, 2013

The converging future of Operating Systems

Recent News:

  • Opera starts using WebKit as its rendering engine - Feb 2013
  • Google forks WebKit, welcome Blink - March 2013
  • Opera follows Google and adopts Blink - March 2013

During a short period of one month, the web has experienced a significant turn of events. A useful coalition between Apple and Google (using WebKit as the rendering engine for their browsers) that accelerated the ever-increasing growth of the web has been parted. What does this mean for the web and for the main companies (Apple, Google, Mozilla and Microsoft) involved? Is this separation a signal of bigger things to come?
For Google this is all about gearing up for the next big computing revolution, i.e. the operating system on the web, namely Chrome OS. If you want to build an OS on the web, you have to have total control over your browser. Initially Google tried to achieve this by evolving the web towards its goals, complementing existing technologies (e.g. developing the V8 JavaScript engine to speed up JavaScript). However, it slowly realised it had to take drastic measures, and started creating the web tool stack from the ground up. Ditching JavaScript in favour of Dart (the all-new client-side language for the web, developed by the Chrome team) was a significant step in this direction. Now, by forking WebKit, it has taken another step the same way. I think Google will come up trumps here. I mean, who thought Chrome would make such an impact on the browser market? But they did it. Both politically and technically, Google is well positioned to deliver. I'm pretty confident that people will be talking about 'Chrome extensions' the way they currently talk about 'apps' in the mobile world.


For Apple this is not such a good thing. It's true that Apple (Safari) gains the same independence in WebKit development by not having to consider Google (Chrome) anymore. But Apple is not invested in the web to the degree Google is, which means it might not cash in on that advantage. In fact, Apple wants its iOS to trump the web as the ubiquitous platform across web, desktop and mobile. With the loss of Google's current contribution to WebKit (close to 50%), we'll probably see the rate of innovation in WebKit go down.


Opera counting on Blink is further proof of this assumption. A reasonable question is whether Blink will end up as another WebKit, with Opera and Google locking horns. I don't think this will be the case; my expectation is based on the relationship dynamics between the two, where Opera clearly has to follow in Google's shadow.


Fragmentation is a necessary evil


Another advantage of this move is the necessary fragmentation of a given technology (browsers / rendering engines in this case), which makes it more conducive to innovation and progress. Although it's an accepted norm that convergence of standards is a good thing, in a fairly fluid and fast-moving field like the web there needs to be a certain amount of duplication of tools and technologies to make sure that innovation is not suppressed by the need for compliance - compliance being driven by politics rather than technology. To put it bluntly: making sure that politics doesn't trump technology. In that light, Google and Apple each having their own way with their browsers is a good thing.

What’s with Mobile

With Android being the most common mobile platform, I'm hopeful that we'll see some good innovation in the mobile browser market as well. With Opera also on the same platform, this can work well.

And now to OS

Another speculation is the convergence of Android and Chrome OS into a single OS. Although Google has been criticised in the past for seemingly supporting two OSs for different platforms, its intention is quite clear: it wants these to converge into 'The OS' over time. The appointment of Sundar Pichai as the head of both Android and Chrome OS clearly shows Google's intention, if it wasn't evident already.

Somewhat similarly to Google, Microsoft is betting on a converging OS, but across desktop and mobile (instead of web and mobile); Windows 8 already shares a common kernel between the two. It's true that Microsoft is finding it difficult to penetrate the mobile market, especially the consumer market, and with Google potentially strengthening its position in mobile through these moves, Microsoft will find it even harder. However, they will still consider themselves to have a good chance in the enterprise mobile market, provided they can get the deployment model fixed.

So, as it stands, here's how the big three have placed their bets:

  • Google - web and mobile (Google will claim that the desktop will transition into the web with Chrome OS)
  • Microsoft - desktop and mobile (counting on Windows 8 to deliver)
  • Apple - mobile and desktop (counting on the consumer-market penetration of the i-devices)

This convergence is really good news for developers. We may not be too far from developing a single app and running it on the web, on mobile and, hopefully, on the desktop.
Microsoft is quite close to getting a Windows 8 app to work on both mobile and desktop. I'd imagine Google could get a 'Chrome OS' app to work on both web and mobile soon. Aren't we getting closer and closer to the single-platform dream world? Remember the 'write once, run anywhere' promise from the past?

Who will take us there this time?

Saturday, March 16, 2013

Simplifying design with Domain Events

The following post notes down how I used the concept of Domain Events to refactor the message-processing component of an Enterprise Web Application. The outcome was great: the main achievement is a code base that is easier to maintain. The 'all hell breaks loose' kind of classes are gone, and in their place a compact, self-explanatory, rich set of new classes was born.
The inspiration for the refactoring is this MSDN article by Udi Dahan.

Problem:
Managing a business process in which multiple domain object updates take place. Depending on the new state of the domain, there are additional tasks to be performed.
E.g. notifying stakeholders about a milestone being reached.

Goals:
Encapsulate business logic in the Domain Model as much as possible while:

  • Not increasing the complexity of the code
  • Keeping the code testable
  • Keeping the code robust for future changes

Existing Solution:

  • All of the domain object updates (domain logic) sit safely within the domain objects.
  • However, there was a lot of other business logic leaking out of the domain objects, mainly reactions to the new state of the domain. Most of these were cross-cutting operations that couldn't be attributed to a single domain object and thus ended up in the Service layer. To be honest, for simple cases this might not be a problem, but as your code base gets more complex this becomes the class everyone loves to hate. Have a look at the sketch below;
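Here is a hedged reconstruction of that kind of service (the Order entity and the helper interfaces are illustrative stand-ins, not our real code):

public interface IOrderRepository { Order Get(int orderId); }
public interface IEmailSender { void Send(string to, string subject); }
public interface IAuditLog { void Record(string eventName, int entityId); }

public class OrderService
{
    private readonly IOrderRepository orders;
    private readonly IEmailSender email;
    private readonly IAuditLog audit;

    public OrderService(IOrderRepository orders, IEmailSender email, IAuditLog audit)
    {
        this.orders = orders;
        this.email = email;
        this.audit = audit;
    }

    public void ConfirmOrder(int orderId)
    {
        var order = orders.Get(orderId);
        order.Confirm(); // the domain logic itself stays inside the entity...

        // ...but every reaction to the new state leaks into the service layer,
        // and this method grows with each new requirement
        email.Send(order.CustomerEmail, "Your order is confirmed");
        audit.Record("OrderConfirmed", order.Id);
    }
}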

Although the domain logic is nicely encapsulated in the domain object (Order), there are a few problems with this code.

  • The complexity of the Service layer will increase with time, making it harder and harder to introduce changes
  • The same business logic (i.e. what to do when an order is confirmed) may be required in another part of the application, paving the way for redundancy

Solution with Domain Events:
Rather than orchestrating the event-dispatch logic in the Service layer, the logic is modelled with events inside the domain itself. Each event and its handlers are separated into their own classes, which minimizes the complexity and transfers it from one single class/method to a collection. (Note: one event can have more than one handler.) All of the handlers implement a marker interface which is used by the dependency injection framework (in my case, Unity) to resolve the classes that handle a particular event.

Here's the base contract for all handlers.
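A minimal sketch, following the pattern from Udi Dahan's article (IDomainEvent and IHandle<T> are placeholder names of mine):

// marker interface for all domain events
public interface IDomainEvent { }

// contract implemented by every handler; the DI container discovers
// implementations of this interface and wires them to their events
public interface IHandle<T> where T : IDomainEvent
{
    void Handle(T args);
}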

The following set of classes shows how we can decompose different events and their respective handlers into their own classes.
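Continuing the illustrative Order example: the event is a small bag of facts about what happened, and each reaction becomes its own small handler class.

public class OrderConfirmed : IDomainEvent
{
    public int OrderId { get; private set; }
    public string CustomerEmail { get; private set; }

    public OrderConfirmed(int orderId, string customerEmail)
    {
        OrderId = orderId;
        CustomerEmail = customerEmail;
    }
}

public class NotifyCustomerOnOrderConfirmed : IHandle<OrderConfirmed>
{
    private readonly IEmailSender email;

    public NotifyCustomerOnOrderConfirmed(IEmailSender email)
    {
        this.email = email;
    }

    public void Handle(OrderConfirmed args)
    {
        email.Send(args.CustomerEmail, "Your order is confirmed");
    }
}

public class AuditOrderConfirmed : IHandle<OrderConfirmed>
{
    private readonly IAuditLog audit;

    public AuditOrderConfirmed(IAuditLog audit)
    {
        this.audit = audit;
    }

    public void Handle(OrderConfirmed args)
    {
        audit.Record("OrderConfirmed", args.OrderId);
    }
}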
The domain entities (or any other class, for that matter) should be able to raise these events easily in response to different business conditions. This is achieved by having a generic static class exposing a Raise method, as follows.
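A sketch along the lines of Udi Dahan's article, wired to Unity (whose ResolveAll returns every named registration of an interface):

using Microsoft.Practices.Unity;

public static class DomainEvents
{
    // assigned once at application start-up
    public static IUnityContainer Container { get; set; }

    public static void Raise<T>(T args) where T : IDomainEvent
    {
        if (Container == null) return;

        foreach (var handler in Container.ResolveAll<IHandle<T>>())
        {
            handler.Handle(args);
        }
    }
}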



Here's how the domain entity regains control of the situation by raising events in response to business conditions internal to it. At the same time, have a look at how bare-bones the service has become.
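Sticking with the illustrative Order example:

public class Order
{
    public int Id { get; private set; }
    public string CustomerEmail { get; private set; }
    public bool IsConfirmed { get; private set; }

    public void Confirm()
    {
        IsConfirmed = true;

        // the entity announces what happened; it neither knows nor cares who reacts
        DomainEvents.Raise(new OrderConfirmed(Id, CustomerEmail));
    }
}

// the service shrinks back to pure orchestration
public class OrderService
{
    private readonly IOrderRepository orders;

    public OrderService(IOrderRepository orders)
    {
        this.orders = orders;
    }

    public void ConfirmOrder(int orderId)
    {
        orders.Get(orderId).Confirm();
    }
}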

NOTE: Here's how you configure multiple implementations of the same interface using Unity.
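Roughly like this: each additional implementation of the same interface needs its own registration name, which is what ResolveAll later picks up (a sketch against the Unity 2.x/3.x API):

using Microsoft.Practices.Unity;

var container = new UnityContainer();

container.RegisterType<IHandle<OrderConfirmed>, NotifyCustomerOnOrderConfirmed>("notifyCustomer");
container.RegisterType<IHandle<OrderConfirmed>, AuditOrderConfirmed>("audit");

DomainEvents.Container = container;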


QUESTION: How can we extend this design to handle events that respond to business states depending on multiple domain objects? (E.g. the order is cancelled and the user is a premium customer.)