Thursday, August 16, 2007

Article on LoadRunner vs VSTS

Just came across this article comparing LoadRunner with Visual Studio Team System, by Scott Moore.

NOTE: the article in question is not totally impartial (who is?), as Scott Moore is a consultant for a company partnering with HP/Mercury. The same goes for this blog, leaning in the direction of Microsoft.

There are some valid points made regarding the versatility of LoadRunner - Microsoft is not going to get its hands on the innards of SAP and Oracle in the near(?) future, so it's unlikely to compete on that front. That being the case, Scott Moore is (IMO) spot on with his call that VSTS is not going to be a LoadRunner killer. ERP solutions will always need load testing, and LoadRunner will probably handle that admirably... but having tried both VSTS and LoadRunner, the ease with which a test can be recorded and parameterized without delving into script is pretty impressive. However, the lack of support for the Opera/Firefox browsers in VSTS is a fairly large shortcoming, as are the current limitations in data generation and binding.

Having read the article, I think I'll stand by my previous conclusion of horses for courses: ASP.NET or another web app, which can be tested off pure HTTP - use VSTS. Heavy duty requirements? Go with LoadRunner. Either option has a price tag beyond my personal reach, anyway...!

Unit Testing in Visual Studio 2008 ('Orcas') - Professional Edition

The good word is that unit testing will be available in the Professional edition of Visual Studio. This is a near-mandatory feature in an IDE today, one which most Java IDEs provide out of the box; TDD in .NET would have been further along the road today if VS2005 Pro had included the unit testing feature.

Let's just hope that they don't provide unit testing while leaving out the code coverage feature! :P

Load testing with VSTS and LoadRunner

I've not been able to post for a while - been knee deep in work and life, while standing on my head...

Anyway, I've been working with the QA team at my workplace on load testing some of our applications headed for Production over the last few days, getting a taste of both LoadRunner and VSTS.

LoadRunner is *the* heavy duty tool, supporting loads of protocols, including SAP and Siebel (amongst others), in addition to the basic requirement of HTTP. We were testing an ASP.NET application which came a cropper when ViewState values would not get replayed, but until then, recording a test, replaying it and configuring a load was fairly simple. Debugging is not A-grade AFAIK, but breakpoints and analysis of variables are provided. We tested this off a 10-day trial license, which restricts load to 10 virtual users; the real thing is *said* to be expensive (claims unverified).

On to Visual Studio Team System (VSTS) - Microsoft has done a good job of making it easy to set up a test, but the bias towards testing Microsoft technology is definitely visible (e.g. in browser support). Setting up a test was easy, as was parameterizing it. LoadRunner has lots of features to generate data, but VSTS 2005 required some effort to bind to a data source. Thankfully, VSTS 2008 ('Orcas') will have data binding to CSV/XML files built in; VSTS does provide for extensibility via Request/Test plugins, which can be used to supply data. I did hit an issue where, if an asp:Button has UseSubmitBehavior switched off, the script records the Cancel button's presence on the page as well, leading to the subsequent HTTP request not being handled correctly by the server. Tests were conducted on a 180-day trial license, without any feature restrictions.
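To illustrate the plugin extensibility point, here's a minimal sketch of a web test plugin that feeds data into the test context before each run, so recorded requests can bind to a context parameter without a full data source. The class and the user list are hypothetical; it assumes the 2005-era `WebTestPlugin` base class from `Microsoft.VisualStudio.TestTools.WebTesting`.

```csharp
using Microsoft.VisualStudio.TestTools.WebTesting;

// Hypothetical plugin: supplies a user name to the test context on each
// iteration, so recorded requests can bind to {{username}} without CSV/XML.
public class UserNamePlugin : WebTestPlugin
{
    private static readonly string[] Users = { "alice", "bob", "carol" };
    private int _index;

    public override void PreWebTest(object sender, PreWebTestEventArgs e)
    {
        // Round-robin through the canned list across test iterations.
        e.WebTest.Context["username"] = Users[_index++ % Users.Length];
    }
}
```

The plugin is attached to a web test via the test's Properties window, after which `{{username}}` can be used wherever a recorded value was parameterized.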

The comparison plays out like this - for basic web apps (definitely ASP.NET apps), VSTS should suffice. While not cheap, VSTS does not cost as much as LoadRunner. However, LoadRunner is a far more mature product, and versatile with it - having support for ERPs and protocols like Citrix and RDP. The VSTS feature of rigging unit tests and web tests into load tests was pretty impressive, and the .NET language + debugging features of the VS IDE outstripped LoadRunner by some distance.

Final word: select what you need for your specific requirement; but for the majority of small-medium business applications, VSTS should meet your need.

Tuesday, May 1, 2007

Anemic vs. Rich Domain Models

There's debate at almost religious intensity on some forums as to the type of domain model to be used in developing applications.

The rich Domain Model (proposed by Martin Fowler) suggests having e.g. Employee objects, with all operations on an Employee provided by the class itself - such as Employee.Save(). Fowler argues that this makes for better object orientation, as there is greater encapsulation and the opportunity for polymorphic use of objects.

The alternative is the Anemic Domain Model (the term was coined by Fowler himself; its negative connotation makes clear how much he dislikes the model), probably inspired by the design of stateless session beans from EJB. This design has (to follow the above example) Employee objects containing their own data, but EmployeeManager or EmployeeService classes containing the operations - such as EmployeeManager.Save(Employee).
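The two shapes can be sketched side by side; the class and member names here are just the Employee example from above, not code from any real system (and note that auto-properties only arrived with C# 3.0 - under VS2005 these would be fields with explicit property wrappers):

```csharp
// Rich Domain Model: the entity owns its data AND its operations.
public class Employee
{
    public int Id { get; set; }
    public string Name { get; set; }

    // Persistence (and other behavior) lives on the entity itself.
    public void Save() { /* ... persistence logic ... */ }
}

// Anemic Domain Model: the entity is a data bag; a service owns the operations.
public class EmployeeData
{
    public int Id { get; set; }
    public string Name { get; set; }
}

public class EmployeeManager
{
    // Persistence lives in a separate, typically stateless, service class.
    public void Save(EmployeeData employee) { /* ... persistence logic ... */ }
}
```

The rich model keeps behavior encapsulated with state; the anemic model trades that encapsulation for a service layer that is easy to expose (e.g. as a web service) and easy to generate from templates.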

Wikipedia labels the latter an anti-pattern; however, given how many proponents the Anemic Domain Model (ADM) has, labeling it an anti-pattern seems an extremist, dogmatic point of view. The pro-ADM argument is that breaking the functionality out into separate classes further enhances separation of concerns, which is good design practice.

Personally, I've found the ADM a simple one to develop, lending itself well to code generation via templates etc. Furthermore, with the whole SOA/Web Service buzz, services as per the ADM make a lot of sense; even applying the GoF Facade design pattern would probably lead to the same solution. In a Web Service application, say, a rich model would force the designer to make the Domain Model dependent on the relevant Web Service - in my book, placing Web Service calls behind Employee.Save() would be mixing up far too many concerns (i.e. business logic + web service invocation code).

Fowler's rich Domain Model is an ideal solution - I've used it successfully on several in-process solutions - and if it can be achieved without running into the pro-ADM arguments just outlined, I'm all for it. Maybe having the UI talk to a rich Domain Model (your formal API), which in turn talks to a back end comprising an ADM, would be the answer... however, this would mean repetitive code, and that much more room for defects to creep in.

It all comes down to a question of how pragmatic a software designer is willing to be. The only certainty is that this argument will not be going away anytime soon.

A few blogs which have opinions on this topic:
Finding references on this matter is not hard at all...
Google for rich Domain Model
Google for Anemic Domain Model

Sunday, February 11, 2007

Unit Testing Internal Types (2)

It works!

In a previous article, I discussed unit testing internal classes which are typically instantiated using Reflection and are accessed via a Facade.

Learnings: when TestDriven.NET runs NUnit tests through the IDE, configuration values seem to be pulled from testassembly.dll.config.temp, while the specified configuration file is used when running through the NUnit GUI.

These files would have to be replaced with our custom content in order to test the internal component through the Facade.

Anyway, I scraped together a helper class that handles copying of the configuration content (with overloads taking the config content as a string, or alternatively a source config file). The [TestFixtureTearDown] method restores the original configuration, while individual [Test]s use the helper to copy in the relevant config file before running.
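A minimal sketch of such a helper, assuming the config file is overwritable on disk (the class name, member names and backup scheme are my own invention, not the actual helper from the post):

```csharp
using System.IO;

// Hypothetical helper: swaps the test assembly's config content for the
// duration of a test and restores the original afterwards.
public class ConfigSwapper
{
    private readonly string _configPath;
    private readonly string _backupPath;

    public ConfigSwapper(string configPath)
    {
        _configPath = configPath;
        _backupPath = configPath + ".bak";
        // Keep the original safe before any test overwrites it.
        File.Copy(_configPath, _backupPath, overwrite: true);
    }

    // Overload 1: supply the config content directly as a string.
    public void UseContent(string configXml) => File.WriteAllText(_configPath, configXml);

    // Overload 2: supply a source config file to copy into place.
    public void UseFile(string sourceConfigFile) => File.Copy(sourceConfigFile, _configPath, overwrite: true);

    // Called from [TestFixtureTearDown] to put the original back.
    public void Restore() => File.Copy(_backupPath, _configPath, overwrite: true);
}
```

Each [Test] would call UseContent or UseFile before exercising the Facade, and the fixture's [TestFixtureTearDown] would call Restore.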

I'm glad that it worked, but given that it's a hacked-together solution, it could stop working with any forthcoming release of NUnit (e.g. if config files are locked for the duration of execution, etc.).

Steven Cohn (in his blog) made good sense about not modifying the 'System Under Test', in order to test it. The only thing I've got to say about 'pragmatic unit testing' is, given all the restrictions in testing internal types, the most pragmatic thing would be to stop unit testing... :P

You'd think I'd be happy at having solved the problem (hack or otherwise), but strangely, jumping through hoops does sweet nothing for the temper! ;-)

Saturday, February 10, 2007

Unit Testing Internal Types

Problem: .NET classes of 'internal' visibility to be unit tested (in this case, using NUnit).

Complication: These classes are part of a 'provider' model (pluggable components instantiated via a Factory Method implementation, based on configuration settings). Access to the functionality to be tested would be via a Facade class, which looks up configuration and instantiates the relevant type using Reflection.

Known Solutions:
  • Use of friend assemblies - however, to maintain security (ensuring only our unit test assembly invokes the internal types) we would need to strong-name the assembly under test. This has a cascading effect, as all its references then need to be strong-named too. Not an option when you consider that we might have a reference to an external vendor's component somewhere down the dependency chain.
  • Writing the unit tests in the same assembly - which clutters up the codebase in good style.
  • Write Reflection based wrappers (a la VSTS) - which the Professional edition of VS2005 does not have.
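For reference, the friend-assembly option from the first bullet boils down to a single attribute in the assembly under test (the test assembly name here is a made-up placeholder; with strong naming, the full public key must be appended to the name):

```csharp
// AssemblyInfo.cs of the assembly under test
using System.Runtime.CompilerServices;

// Unsigned form; if the assemblies are strong-named, this becomes
// InternalsVisibleTo("MyApp.UnitTests, PublicKey=<full public key blob>")
[assembly: InternalsVisibleTo("MyApp.UnitTests")]
```

It's that PublicKey requirement, cascading through every referenced assembly, which rules the option out here.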

Yet another solution?

I'm sure there is a downside to the following approach, which I'm going to find out about the hard way - but till then:

The complication states that the required values need to be pulled from configuration. So what price this: place the default configuration file in the execution path during the method marked [TestFixtureSetUp], then overwrite it at the beginning of each individual [Test] method with a separate configuration file containing the relevant settings... all of this is very much guesswork, depending on NUnit allowing the configuration file to be overwritten.

I'm thinking a feature in NUnit like the VSTS [DeploymentItem] attribute would have been very helpful in this scenario...

Will blog about whether this works... if it works!

Thursday, February 8, 2007

And about time too!

Continuous Deprioritization: the reason why I never got around to running a blog thus far. After deliberating and procrastinating for a couple of years, I finally got around to starting up this blog.

And just like *that*, the words run dry!