Friday 10 November 2006

Tech Ed - Unit Testing Best Practices with VSTS 2005

Went to a lecture by Mark Seemann (a senior Microsoft consultant) about unit testing with VSTS. It was a good lecture, both as an introduction to unit testing as a concept and as a tour of what VSTS gives us out of the box.

He started off by giving his opinion that a unit is a whole assembly, so he believes we should be thinking of testing an assembly as a whole, but always abstracting away from any volatile dependencies such as other changing components or the db. Interestingly, he was very adamant that we only test the public interface of the component. After he explained his reasons it gave me a much better picture of what exactly the aim of the game is :). We are really setting out to test the component from a black-box perspective and test the contract.

I really liked the agile-style, test-first approach that he demoed, where you create a test and then build the code to meet it. This gives you a clear sense of a goal and will keep you from going off on a tangent :)

With VSTS he showed us how to create a test project (he recommends a test project for each component you test) and then create the test methods within it which form our actual tests. We apply the declarative TestMethod attribute to a method to let VS know that it is a test. Within the methods we use the Assert class to check the results of calls etc. by comparing our expected result with the actual result. Once we have built up our tests and run them, we can use the test manager to check our results and the code coverage tool to see how much of our code we have tested.
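To make this concrete, a minimal VSTS test might look something like the sketch below. Calculator is a made-up class I've invented for illustration; the TestClass/TestMethod attributes and the Assert class are the real VSTS ones.

```csharp
using System;
using Microsoft.VisualStudio.TestTools.UnitTesting;

// Hypothetical component under test, shown inline to keep the sketch self-contained
public class Calculator
{
    public int Add(int a, int b) { return a + b; }
}

[TestClass]
public class CalculatorTests
{
    [TestMethod]
    public void Add_ReturnsSumOfArguments()
    {
        var calc = new Calculator();      // exercise only the public interface
        int actual = calc.Add(2, 3);
        Assert.AreEqual(5, actual);       // expected vs. actual
    }
}
```

The test project references the tested assembly, and the test manager picks the method up automatically thanks to the attributes.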

He mentioned the following best practices:

  • Always keep tests simple.
  • Aim to test all your code, targeting code coverage of around 90%.
  • All logic should be in components so it can be tested; don’t put any logic in the UI.
  • Test cases must be independent (each test should set up and clean down its own data)
  • Test cases must be deterministic (you should not do things like create random values)
  • Reproduce bugs as test cases
  • Place tests in separate projects
  • Have a test project per test target
  • Use source control on test projects

Thursday 9 November 2006

Tech Ed - Asynchronous ASP.NET Programming

Went to a lecture by Jeff Prosise about asynchronous ASP.NET programming which was really interesting. The first thing to note is that this subject is really under-documented considering it's such an important technique, as it can allow you to scale your site.

He explained that when IIS receives an ASP.NET request it is passed to the worker process, which allocates it a worker thread from its managed pool of available threads. This thread stays with the request for its entire lifetime. The worker process also manages an IO thread pool for allocating to any operations that need to carry out IO.

As there is a finite number of worker threads that can serve requests, a busy site can become saturated with requests that cannot be served, and clients will start to receive 503 errors. If this occurs we need to scale the site to serve more requests. One way is buying more hardware, but the better option is to utilise the threads better by writing asynchronous code. If our code is asynchronous, the worker thread can return to the pool to serve more requests whilst we wait for any long-running actions such as IO or db calls to complete.

He showed us a couple of ways to make our ASP.NET pages asynchronous. Both involve setting the Async attribute to true in the page directive and then doing one of the following:
Call AddOnPreRenderCompleteAsync in Page_Load to register our Begin and End delegates, which are then used by the page to do the long-running work.
or
Create a PageAsyncTask containing our Begin and End delegates and register it with RegisterAsyncTask. This method has the advantage of being able to maintain the thread context, create many tasks, and have a timeout value.

The async delegates are called just after the PreRender event.

We can use ADO.NET's BeginExecuteReader to get an IAsyncResult to return from our Begin method; the matching End method then calls EndExecuteReader to complete the operation.
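Putting those pieces together, the code-behind for an async page might look roughly like this (the page name, connection string and query are placeholders I've made up):

```csharp
// In the .aspx: <%@ Page Async="true" ... %>
using System;
using System.Data.SqlClient;
using System.Web.UI;

public partial class Orders : Page
{
    private SqlConnection _conn;
    private SqlCommand _cmd;

    protected void Page_Load(object sender, EventArgs e)
    {
        // Register the Begin/End pair; ASP.NET calls them after PreRender
        AddOnPreRenderCompleteAsync(BeginAsyncOperation, EndAsyncOperation);
    }

    IAsyncResult BeginAsyncOperation(object sender, EventArgs e,
                                     AsyncCallback cb, object state)
    {
        // Connection string needs Asynchronous Processing=true for async commands
        _conn = new SqlConnection("...placeholder...");
        _conn.Open();
        _cmd = new SqlCommand("SELECT ...placeholder...", _conn);

        // Hand ADO.NET's IAsyncResult straight back to ASP.NET -
        // the worker thread now returns to the pool
        return _cmd.BeginExecuteReader(cb, state);
    }

    void EndAsyncOperation(IAsyncResult ar)
    {
        using (SqlDataReader reader = _cmd.EndExecuteReader(ar))
        {
            // bind the reader to page controls here
        }
        _conn.Close();
    }
}
```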

I think if we make all of our IO methods asynchronous we could really improve the scalability of our sites – I will certainly be pushing to get some of these changes included in the next releases of my sites :)

He then spoke about creating HTTP handlers, which are just classes that implement the IHttpHandler interface. You can register these against specific file types (not that useful) or create an ashx file with a WebHandler directive and ASP.NET will automatically use the class when the file is requested by the client. Using this instead of a classic aspx file is great for requests that will not be returning form/ASP.NET-type data, such as images, as it does not have all the overhead of the pipeline that an aspx request moves through.

He showed us a demo of using an ashx handler to return pictures by having a normal img tag on a page with its src set to the ashx file plus a set of params. When the page renders, the browser calls off to the ashx file, which returns the graphic without having to go through the same pipeline that an aspx page would have. This results in a much quicker response for the client.
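The handler from that demo would look something like this sketch (the file name, query string param and path scheme are all invented for illustration; the ashx file itself would just carry a `<%@ WebHandler Language="C#" Class="ImageHandler" %>` directive):

```csharp
using System.Web;

public class ImageHandler : IHttpHandler
{
    public void ProcessRequest(HttpContext context)
    {
        // Page markup: <img src="ImageHandler.ashx?id=42" />
        string id = context.Request.QueryString["id"];

        context.Response.ContentType = "image/jpeg";
        // Made-up path scheme - map the id to an image on disc
        context.Response.WriteFile(
            context.Server.MapPath("~/images/" + id + ".jpg"));
    }

    // No per-request state, so one instance can serve many requests
    public bool IsReusable { get { return true; } }
}
```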

He explained that by default an HTTP handler is synchronous, but we can make it asynchronous by implementing IHttpAsyncHandler instead. You just leave ProcessRequest empty and fill in BeginProcessRequest and EndProcessRequest with the work. This is then called asynchronously by ASP.NET.
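A skeleton of that idea, fetching an image from a backend server the way the Virtual Earth demo did (the backend URL is a placeholder; WebRequest's own Begin/End pair supplies the IAsyncResult for us):

```csharp
using System;
using System.IO;
using System.Net;
using System.Web;

public class AsyncImageHandler : IHttpAsyncHandler
{
    private HttpContext _context;
    private WebRequest _request;

    // Never called - ASP.NET uses the async pair below instead
    public void ProcessRequest(HttpContext context)
    {
        throw new NotSupportedException();
    }

    public IAsyncResult BeginProcessRequest(HttpContext context,
                                            AsyncCallback cb, object state)
    {
        _context = context;
        // Placeholder backend image URL
        _request = WebRequest.Create("http://example.com/tile.jpg");
        // Worker thread goes back to the pool here
        return _request.BeginGetResponse(cb, state);
    }

    public void EndProcessRequest(IAsyncResult result)
    {
        using (WebResponse response = _request.EndGetResponse(result))
        using (Stream s = response.GetResponseStream())
        {
            _context.Response.ContentType = "image/jpeg";
            byte[] buffer = new byte[8192];
            int read;
            while ((read = s.Read(buffer, 0, buffer.Length)) > 0)
                _context.Response.OutputStream.Write(buffer, 0, read);
        }
    }

    // This instance holds per-request state, so don't reuse it
    public bool IsReusable { get { return false; } }
}
```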

He showed a good demo of a site pulling back images from virtual earth which render much quicker when using asynchronous calls.

He noted that more improvements can be made by editing the maximum number of concurrent connections in machine.config.

He also told us to avoid Thread.Start, ThreadPool.QueueUserWorkItem and asynchronous delegates, and to use a custom thread pool if necessary, as otherwise you can steal a thread from the very pool that serves ASP.NET requests.

Tech Ed - UK Country Drinks

We were invited to Shoko, a really cool far-east style contemporary lounge club with a terrace that overlooked the beach. It was a really good evening. We got on a coach from the main event to the club, but it turned out it was really close to our hotel and we probably could have walked it quicker and had a chance to get changed - but never mind :)
Once in it was free drinks on tap :) and loads of different tapas to try, and a yummy chocolate fountain :) This was a really good night, nice one M$ ;)

Tech Ed - Patterns & anti patterns with SOA


We went to this lecture by Ron Jacobs, who is fast becoming one of our favourite speakers; he is really interesting and engaging :)

Basically he was saying that using SOA technologies does not guarantee success, and there is never a single right answer because, as usual, everything has pros & cons.

The goal is friction-free interaction between systems, with no problems such as different file formats or transport methods.

He made an interesting point that SOA is not a noun; it’s a style of architecture which emphasises standards-based communication.

He highlighted that tightly coupled systems definitely have their place, as if everything is loosely coupled it's slow as hell :)

When designing for SOA we must aim for a good set of explicit behaviours, rather than implicit ones where the client has to ‘try things out’ to find out how things work.

He told us to think of service granularity at a business-process level, with each of these processes having its own interface.

As all boundaries should be explicit, he gave a great metaphor of an explicit boundary being an international border between countries: you know clearly where they are, and once you cross them you are no longer in control of anything. So when we are not in control of things such as the server or config, we know we have an explicit 'international' boundary that should be an interface to a service. As with international borders, we need to think carefully about how many we have and how we control them, as they are expensive and problematic if uncontrolled. Inside internal business boundaries you can do anything you want, and this includes tightly coupled objects to improve performance.

He spoke about Anti Patterns (patterns that show how to do things wrong so that you can make sure you don’t do the same). He discussed the following:

  • CRUDy interface - creating an interface with simple CRUD commands on it when it should be a full business process with logic.
  • Enumeration - a service should not have enumeration commands such as GetNext() that go against the atomic nature of a service and cause the server to hold a large amount of data whilst a client navigates it.
  • Chatty interface - bad when a service offers lots of methods that must be called in sequence by the client to carry out an operation. The client may call one command but never get to call the others, leaving the service in an inconsistent state. We should design larger web service methods and do all the steps inside them.
  • Loosey Goosey :) - where a service tries to be uber-generic, with a single command that takes a lump of XML and returns a lump of XML, and uses a Word doc to define the contract. This is hard to test and hard for the client to use as it may implicitly change. Sometimes this is done to avoid versioning problems; that is now easier with the serialization improvements in .NET 2.0, but the message is to “receive liberally and send explicitly”.

He explained that the best way to start a SOA design is to start with the process and understand it. Then create the contract by defining the messages and operations and grouping them.
Use portable types - returning DataSets is not good. They can be used internally, but for external services we should decouple internal and external objects by unloading one internal object into a separate external object.
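A tiny sketch of that decoupling idea (all the type names here are invented for illustration): the internal entity never crosses the boundary; instead we copy it into an explicit external message type that forms the contract.

```csharp
using System;

// Internal type - free to change, never serialised over the wire
class CustomerEntity
{
    public int Id;
    public string Name;
    public DateTime LastModified; // internal detail the outside world shouldn't see
}

// External contract - portable, explicit, versioned deliberately
public class CustomerMessage
{
    public int Id;
    public string Name;
}

static class CustomerMapper
{
    // Unload the internal object into the external one at the boundary
    public static CustomerMessage ToMessage(CustomerEntity entity)
    {
        return new CustomerMessage { Id = entity.Id, Name = entity.Name };
    }
}

class Program
{
    static void Main()
    {
        var entity = new CustomerEntity
        {
            Id = 7, Name = "Acme", LastModified = DateTime.Now
        };
        CustomerMessage msg = CustomerMapper.ToMessage(entity);
        Console.WriteLine(msg.Id + " " + msg.Name);
    }
}
```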

His advice is to think of moving bits of paper not calling methods.

Ron has some really good webcasts, including ones he showed at the event, which we can take a look at over at http://www.arcast.net/

Wednesday 8 November 2006

Tech Ed - Unified Process and VSTS

WOW we were so looking forward to seeing Ivar Jacobson, the legend!!!!
However, the seminar was just way too surreal.
Ivar has done a total U-turn ... moving from his strict methodology to now almost anything goes!!!! What was very clear is that Ivar's goal is simply to help people produce good software, and the means of getting there is fairly flexible. He admits that Agile has the correct emphasis on people rather than process and that its language is correct.

He also told us that he knows most developers just don't read books - they just buy them :)

His consultancy firm has developed a framework called the Unified Process model that allows you to use different processes from different methodologies to get the job done. There seems to be an interesting "game" you play in this model with activities!! Explanations of the different processes are displayed on small cards with further reading available. It was not totally clear how to start on this or how exactly the model works, but I'm sure Ivar will be writing a book on it :) More info at http://www.ivarjacobson.com/home.cfm

The whole integration with VSTS was the most confusing demo and piece of software we have ever seen. We really have no idea how to use it or how it works!!!!

Tech Ed - C# whiteboard session with Anders Hejlsberg

Lots of cool questions, but one interesting feature discussed was partial methods, where one part of a partial class declares and calls a partial method which can then be implemented in another part of the same partial class.
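As it later shipped in C# 3.0, the feature looks roughly like this sketch (the Order class and its hook are invented; partial methods must return void, and if nobody implements the declaration the compiler removes the call site entirely):

```csharp
using System;
using System.Collections.Generic;

// Part 1 - e.g. generated code declares and calls the hook
partial class Order
{
    public List<string> Log = new List<string>();

    partial void OnSaving(); // partial method: declaration only

    public void Save()
    {
        OnSaving(); // removed by the compiler if never implemented
        Log.Add("saved");
    }
}

// Part 2 - hand-written code chooses to implement the hook
partial class Order
{
    partial void OnSaving()
    {
        Log.Add("validated");
    }
}

class Program
{
    static void Main()
    {
        var order = new Order();
        order.Save();
        Console.WriteLine(string.Join(", ", order.Log.ToArray()));
    }
}
```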

Talking about what he would do differently with C#, Anders said:
  • He would have liked a clearer distinction between reference equality and value equality.
  • No goto.
  • Go straight to lambdas instead of using anonymous methods.

Tech Ed - Alternative .NET debugging facilities

Brian Long. http://blong.com Alternative .NET debugging facilities.

.NET-supplied console debugger - MDbg. The load switch shows all objects loaded for the application.

.NET-supplied GUI debugger - DbgCLR. This is the debug engine used in Visual Studio. Useful for server or client-side debugging.

Extra debugging tools for windows http://www.microsoft.com/whdc/devtools/debugging/default.mspx
  • Ntsd - uses the existing console.
  • Cdb - launches a new console.
  • Kd - kernel level!
  • WinDbg - GUI, and the recommended debugger. See the Debug menu and event filters; add a stop on .NET exceptions.

To fully maximise the debuggers, Microsoft's symbol server provides symbol files for all Microsoft's DLLs. Symbols are needed so the debugger can step into the call-stack operations. To let the debuggers find the symbols, set the _NT_SYMBOL_PATH machine environment variable to the UNC path for local symbols or the URL for the symbol server.
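Setting that up from a command prompt might look like this (c:\localsymbols is an arbitrary local cache directory I've picked; the srv* syntax tells the debugger to cache downloads there before falling back to Microsoft's public symbol server):

```shell
rem Cache symbols locally, falling back to Microsoft's public symbol server
setx _NT_SYMBOL_PATH "srv*c:\localsymbols*http://msdl.microsoft.com/download/symbols"
```

If setx is not available on your machine you can set the same variable via System Properties instead.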

Microsoft userdump http://www.microsoft.com/downloads/details.aspx?FamilyID=E089CA41-6A87-40C8-BF69-28AC08570B7E&displaylang=en - good tool for crash dump creation and extraction.

You can write debugger extensions - these must be unmanaged though. So you could write custom extensions specific to your complex application.

Furthermore, all these debuggers are unmanaged, so you must add the SOS (Son Of Strike - originally .NET was going to be called Lightning) debugger extension to take advantage of detailed .NET debugging. To add a debugger extension, set the _NT_DEBUGGER_EXTENSION_PATH machine environment variable to the path of the SOS dll, or you will have to type the full path every time in the debugger.

In Task Manager, add the virtual bytes column and use this rather than memory usage, as memory usage can be compressed.

Tech Ed - SQL SODA

Implementing Service Oriented Database Architecture (SODA) With SQL Server 2005. Bob Beauchemin

How has SODA come around? Well, this is to do with performance, and a good rule of thumb is that the second 10,000 users must perform as fast as the first 10,000. So you need scaling. Scale up is adding more power to the hardware; scale out is spreading the load and moving the pressure out of the database.

Two main problems with data access:
  • As data is stored over time you will always have to scale infinitely.
  • Sharing data across company boundaries.

Traditional database solutions:

  • Generally adopts a scale up approach.
  • Distributed transactions. Slow, long and susceptible to error.
  • Cache. Can end up with db in the cache.

Service oriented database solutions:

  • Generally adopts a scale out approach.
  • Parallel processing.
  • Smart cache.
  • The db contains services which receive an instruction, then separately do the processing, then separately raise an event when the criteria are met; these events can then be received by client applications.

OK, this is getting way too deep into DBA land. However, the SODA concept is moving away from a strict relational model to an object-centric model. Let's take a web site order example to demo the difference:

  • The relational model is heavy, so the order message will contain all the person details, the order details, the item details and the payment details. All will be processed at once. This kind of web site takes 2 or 3 minutes of processing before you get your order number. However, note your order is then fully processed and successful.
  • The SODA model is light, so while the order message will still contain all the person details, the order details, the item details and the payment details, only a skeleton order is created and this kind of site will provide your order number instantly. Then a series of events are fired by the database - full order and payment, for example. These will be received by other server applications to process, which could take anything from minutes to days. During this server processing time the user can (if the site provides it) follow the status of their order, which can still fail on stock or payment.

Tuesday 7 November 2006

Tech Ed - ADO present and future


Jackie Goldstein. Renaissance Computer Systems
When coding optimistic concurrency handling with a merge: fill a new DataTable with refreshed values reflecting what is now in the database, then call the original DataTable's Merge with the new DataTable as the table parameter and true for the preserveChanges parameter.
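A small self-contained sketch of that merge (the table and column names are invented, and the "refreshed" table is built by hand here rather than re-filled from a real database):

```csharp
using System;
using System.Data;

class Program
{
    internal static object MergedQty()
    {
        // Our working table, as originally filled from the db
        var current = new DataTable("Orders");
        current.Columns.Add("Id", typeof(int));
        current.Columns.Add("Qty", typeof(int));
        current.PrimaryKey = new[] { current.Columns["Id"] };
        current.Rows.Add(1, 10);
        current.AcceptChanges();

        current.Rows[0]["Qty"] = 99; // a local, uncommitted edit

        // A second table standing in for a fresh Fill from the database
        var refreshed = current.Clone();
        refreshed.Rows.Add(1, 25);   // someone else changed the row to 25
        refreshed.AcceptChanges();

        // preserveChanges = true keeps our local edit while refreshing
        // the row's original version underneath it
        current.Merge(refreshed, true);

        return current.Rows[0]["Qty"];
    }

    static void Main()
    {
        Console.WriteLine(MergedQty()); // our edit of 99 survives the merge
    }
}
```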

SqlDependency gives simple "the data result of a query has changed" event handling in Windows Forms applications.

Handle database independence using the System.Data.Common db factory classes.

The next version of ADO is the Entity Data Model. Its aim is to provide the client application with a conceptual schema rather than the physical database schema. This client view is achieved by using the new client-side map provider, which is an extension of ADO that provides the mapping at run time. The map provider returns DataSets, DataTables and DataRows. Pros: each application can have its own specific view of the data. Cons: the modelling is brought up to the client, so while the code works against objects, the developer still needs to know about the mapping.

To take the map provider further towards an object model there is Object Services, which sits on top of the map provider to generate the object classes to return.

Hopefully the map provider and Object Services will be usable within a dll to provide a common object model for clients to use.

Tech Ed - Visual Studio: The .NET Language Integrated Query (LINQ) Framework Overview


We attended a great lecture by the legend Anders Hejlsberg, who gave us an insight into LINQ and explained that LINQ will be included in C# 3.0 and VB.NET 9.0. More information and slides can be found at http://msdn.microsoft.com/data/ref/linq/

Here are some of our notes that we made whilst in the lecture:

He explained that we can use LINQ to query the following out of the box at RTM:
  • any in-memory objects that implement IEnumerable
  • DataSets
  • SQL
  • Entity objects

He then whipped up a demo that took in-memory objects with a composition relationship and queried them using LINQ, with both a lambda expression and the new extension methods.
He explained that the new var keyword gives a strongly typed variable whose type is inferred from the assigned expression at compile time.
He explained that we can use lambda expressions, or the gentler query syntax which the compiler converts into lambda expressions at compile time.
We can also use an anonymous type in the select to create a new type at compile time:
select new {c.companyname, c.phone}

He showed nested creation of new anonymous types within a select for a more hierarchical structure.
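From memory, the shape of that demo was something like the sketch below (the Customer/Order types and data are invented; note the query only actually runs when the foreach iterates it, which is the deferred execution he described later):

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

class Order { public string Product; public decimal Total; }
class Customer
{
    public string CompanyName;
    public string Phone;
    public List<Order> Orders;
}

class Program
{
    internal static int BigOrderCount()
    {
        var customers = new List<Customer>
        {
            new Customer
            {
                CompanyName = "Acme", Phone = "555-0100",
                Orders = new List<Order>
                {
                    new Order { Product = "Anvil",  Total = 100m },
                    new Order { Product = "Rocket", Total = 250m }
                }
            }
        };

        // Query syntax - compiled down to extension-method calls with lambdas
        var result = from c in customers
                     select new                      // anonymous type
                     {
                         c.CompanyName,
                         c.Phone,
                         Orders = from o in c.Orders // nested anonymous types
                                  where o.Total > 150m
                                  select new { o.Product, o.Total }
                     };

        return result.First().Orders.Count();
    }

    static void Main()
    {
        Console.WriteLine("Big orders: " + BigOrderCount());
    }
}
```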

He reminded us that as this is just C# you can do anything in your LINQ statement.

He discussed the deferred query execution model, where the LINQ query is built as a pipeline of separate query steps that are not actually executed until the results are iterated or a method is called on the results.

The LINQ to SQL API ships with a tool that you can point at a db and it will code-gen all the objects for querying; there will also be a WYSIWYG designer so you can drag tables over to create objects. This API also claims to create slim SQL - it will create select, insert and update statements automatically, which you can view by looking at the database context's Log object.

The LINQ to XML API also ships, which allows us to use XML more declaratively. It allows us to create and query XML in a way that is easier, faster and more functional than XQuery. We can create a new XElement by giving it a name and any IEnumerable object as params, and this will create an XML element with all the enumerated objects within it. We can also query the relational world to create XML using LINQ.
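That "functional construction" looks roughly like this (the element names and list are invented; the IEnumerable passed as content to the XElement constructor becomes a set of child elements):

```csharp
using System;
using System.Collections.Generic;
using System.Linq;
using System.Xml.Linq;

class Program
{
    internal static XElement Build()
    {
        var customers = new List<string> { "Acme", "Globex" };

        // The query passed as content is enumerated into child elements
        return new XElement("customers",
            from name in customers
            select new XElement("customer", new XAttribute("name", name)));
    }

    static void Main()
    {
        Console.WriteLine(Build()); // prints the nested <customers> XML
    }
}
```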

PLINQ is another project being developed to use this more declarative way of querying instead of many for loops etc.; the higher level of abstraction means queries can be run in parallel in a multi-processor environment.
We can check out more info on LINQ over here http://msdn.microsoft.com/data/ref/linq/

Tech Ed - Agile Methodology


Roy Osherove

So how can you define agile development?


  • Executable requirements. That is, something that can be measured.
  • Short iterations with simpler requirements. So 2 to 4 weeks in duration for each iteration, resulting in a shippable product (it may not actually be released). During an iteration the developers should be left alone with no interruptions.
  • Automated test and build tools help make short iterations possible.
  • Have team-based estimations.
  • Nothing wrong with change, so be adaptable.
  • Lots of verbal communication to define the requirement, and then a small concise document is written.
  • The customer has more responsibility - they can contact us at any time to talk about change / they are responsible for feature priority order / they are involved in testing.
  • A motto is "give value quick by priority".

All these definitions can be summarised into the agile manifesto which compares agile vs. standard methodologies:

  • Individuals and interactions vs. processes and tools.
  • Working software vs. documentation.
  • Customer collaboration vs. contract negotiation.
  • Responding to change vs. following a plan.


So with an agile approach, the team must always accept and prepare for things definitely changing by adopting an adaptive, people-oriented approach vs. a predictive, process-oriented one.

Extreme Programming (XP) and SCRUM are implementations of an agile approach;

  • As per the definitions of agile above.
  • A short daily meeting run by the team lead. Each developer has 5 mins to answer: What did you do yesterday? (for accurate estimations) / What are you going to do today? (for accurate estimations) / What is stopping you?
  • Developers work individually on feature design, but the whole team reviews designs and then codes in pairs.
  • Risk is reduced through the sharing of knowledge that comes from pair programming and short iterations which everyone can understand.

Problems:

  • Generally need experienced developers.
  • Always need an active customer.
  • Will require a few iterations to see benefits.

Remember it is just a mind set, be flexible yourselves and feel free to change / create your own agile approach that works for your own team.

Read the good agile / bad agile article.

Visual Studio Team System provides a SCRUM template.

Tech Ed - Developing Rich Web Applications with ASP.NET Ajax


We attended a really cool lecture by Shanku Niyogi that discussed the two different / complementary approaches to developing Ajax applications: server-centric and client-centric.

Here are some of our notes that we made whilst in the lecture:

He showed the UpdatePanel, and how using triggers (an AsyncPostBackTrigger or a PostBackTrigger) causes a partial or full postback respectively. He also showed a timer example, and wrapped a whole GridView in an UpdatePanel to allow it to update without a full postback.

The user experience can also be improved with an UpdateProgress control to show when an async update is in progress; this can be on a per-panel basis or cover any panel on the page. We can use the DisplayAfter property to only show the update information after a certain amount of time.

The Control Toolkit, which can be found at http://ajax.asp.net/default.aspx?tabid=47&subtabid=477, can add extra AJAX functionality really easily with little or no JavaScript. He showed how you can hide & show areas that are not always needed using the PopupControl extender, and how to make the popup disappear after its work is complete by using getProxyForCurrentPopup(this).Cancel()

He discussed the issue of how to handle state when using Ajax and recommended taking advantage of the profile store in SQL or AD or even SharePoint :) This was demoed using a custom statebag-type concept via JavaScript calls to the server side to store user state. He showed how this could be used anonymously with a non-persistent cookie, or by using a login to gain identity. He also touched on saving your state and accessing it via a URL. This potentially solves the problem of losing state when using the back button.

He then discussed how the MS AJAX library gives users an OO-style JavaScript pattern library that wraps things such as networking calls from the client to the server and back.

The networking stack, which builds on top of the web service architecture, allows you to return XML/strings and the library will convert them to JavaScript objects. It does all the serialization/deserialization of objects & conversion of native .NET objects.

He then demoed a chat conversation example which was first built as a normal web service, returning its heavyweight SOAP XML to the browser.
He then added the [ScriptMethod] attribute to his web methods. Once compiled, adding /js to the query string returned JavaScript instead: a client-side proxy object that handles all the networking. You can call the proxy directly from JavaScript via async methods.

He discussed the AJAX releases, which include:

  • An ASP.NET AJAX v1.0 core product release with client & server components.
  • More CTP features updated regularly, including the AJAX Control Toolkit.
  • RTM for the end of the year, which will run on ASP.NET 2.0.
  • Full integration into the version of VS code-named Orcas.

Have a look at the latest AJAX release and try stuff out over here http://ajax.asp.net/Default.aspx

Tech Ed - Key Note

Wow, what an audience - close to 4500 delegates are here at Tech Ed. The auditorium is huge and our early bird pass got us right to the front :) However, we were a little disappointed because for some reason we were expecting Bill Gates to be delivering the keynote … and he was not.

Instead it was Microsoft’s senior vice president … however he was a superb speaker and really got you fired up for the future of Microsoft through Vista and the Office 2007 suites. As we all know, they certainly look the part and judging by the keynote and the demonstrations the integration of Office 2007 products has taken on a new level … looking forward to seeing that in action.

One superb demo within the keynote was Language INtegrated Query (LINQ), whereby the chief architect did a live demo of a web-based resource task manager using LINQ against the OS and a DB. Furthermore, with a click on a menu item an RSS feed was created!!!! LINQ certainly appears to have created that single layer for all data sources we developers have previously dreamed of!!!

Monday 6 November 2006

Tech Ed - Software Architecture

Today we enjoyed a whole series of seminars from Ron Jacobs (Microsoft architect) and Scott Hanselman (Corillian architect). Both are superb speakers and clearly experts in the domain of software architecture.

The key points we took away about what is key to being an architect are:

Different lenses on documentation. It is vital the correct lens is applied to the current customer. For example the sales director does not care about how much money you have saved on servers or disc space … the sales director lens must be customised with sales information such as – this system will allow a transaction to complete 25% quicker. Whereas the IT director does not really care about the sales information so the IT director lens must be customized with IT information such as we can now decommission two other servers and save you £1500 a year in maintenance.

Executive buy-in. It is vital to have the "suits" buy in - to have them understand how investing in an architecture will bring long-term value to the business as a whole and to individual projects. Should management buy-in be failing … then maybe a sneaky shadow government could form … a team working in their own time which achieves a final product that now has metrics (see below) to present to the management. Obviously very risky, but it does show team commitment to a process.

Metrics are vital and play two key roles:
First, working towards executive buy-in: being able to show the sales director how sales can go through 20% quicker with this new technology, or show the IT director that 30% of development time was saved on project X because the architect already had a pattern for the main problem and some of the existing architecture was reused.
Second, and probably more important, demonstrating successful delivery of requirements both to the technical team and to the customer.

Testing. Implement automated testing and continuous integration, so each unit of work results in a complete set of test results and, on success, a complete build. To fully benefit from automated testing it is important to move as many "Word document" requirements as possible into the automated testing tools. For example, take the requirement that the home page must load in less than 3 seconds on a 512Kb connection: incorporate the load-time testing within the automated test tools, and then the only dependency on building is ensuring the tests pass, not someone remembering to run further manual tests.

Responsibility. During the Q&A session it was explained that neither Ron nor Scott has any management responsibilities. They have clear technical responsibility right up to top management, but with regard to holidays, careers and general HR stuff they are not involved. They noted this is on purpose and significant, in that the architect can focus on the technical goals of the company as a whole and get positive buy-in from the techies without any management politics becoming involved.

Methodologies. Tying in with the automated testing and continuous integration is agile. This is key when the team focuses on delivering business benefit in small, quick iterations. Worth looking at SCRUM.

On a more general note for very large scale smart client development Ron Jacobs points you to the CommSee case study.

We've arrived @ Tech Ed 2006


Got here nice and early and registered - got our cool bags and caps :)

Been to the "Introduction to Software Architecture" pre-conference seminars, which have been awesome so far...

Only down side is a dodgy wireless connection here - everyone seems to be having problems - I think the wireless routers are being well spanked :)

Tried to use the M3100 to blog - but blogger doesn't seem to work well with IEMobile :(

More updates to come :)

Wednesday 1 November 2006

Off to Tech Ed 2006 @ Barcelona




We are off to Tech Ed next week in Barcelona. Keep an eye on this blog for (sort of) up-to-date pics and comments.