Tuesday 5 December 2006

DDD4 Wrap up

Jeff
Attended my 3rd DDD day on Saturday and these days just keep getting bigger and better :) The only downside of this one was the lack of SWAG :( I think Richard and Dave had it all for their session on Tech Ed 2006!!!

Went this year with Ben, who is a Microsoft Student Partner for the University of Hertfordshire, where he is in the final year of his Computer Science degree :)

Arrived nice and early for the egg, bacon, and sausage baps - yummy :) Then headed into the first session I chose, entitled 'How to write crap code in C#' by Ben Lamb. Unfortunately this session was not as great as I expected; I think Ben has a lot of knowledge but came across quite nervous and was difficult to follow at times because of this. I feel for him though, because I would be nervous in front of 120 people!! He covered a few simple anti-patterns including string concatenation, threading, and throwing errors. He managed to dramatically slow processing down by concatenating strings instead of using a StringBuilder - there was a huge difference. So definitely use the StringBuilder if you are performing a lot of string concatenation.

Next I joined Dave Verwer for his presentation titled 'Ruby on Rails for .NET Developers'. This was a fantastic introduction to the dynamic Ruby language and the Rails framework. I definitely see this technology as the start of a trend towards more dynamic languages. I think the power and RAD-type capabilities of this language and framework are fantastic; he demonstrated how easy it is to create a database and web form in a matter of minutes :) I definitely see this complementing an agile-type methodology. I really liked the way it creates a strict model for you to develop within by creating template folders and files, leading you to develop in a tried and tested fashion. People may argue that this is too much hand-holding and forces you to develop in a certain way, however I believe the benefit of being able to pick up any Ruby project and understand the structure straight away is definitely a win :) I also like the way it encourages the separation of the dev, test and production environments. My only worries are its performance, its ability to create an n-tier architecture, and the fact that it does not fully support the Microsoft platform, which could be a show stopper for some companies heavily tied to MS technologies. I think it could be great to quickly knock something up in a virtual machine though :)

Next I went to a session called 'But it works on my PC! or continuous integration to improve software quality' by Richard Fennell. This was a great session that introduced the mindset of continuous integration, which I am totally a fan of and am currently trying to implement myself. He demoed CruiseControl, which looks like a fantastic piece of open source software that, with little configuration, will do a great job of handling your automated builds and testing. It works well with MS SourceSafe. He also showed how to set up automated builds in TFS, but this looked a little more clunky and is not supported out of the box. He mentioned a new TFS plugin for CruiseControl that will give the best of both worlds.

Next was lunch and time for the grok talks and Park Bench activities. These were great, but I think their popularity was underestimated as there were too many people for the area where the speakers were placed. I could hardly see what they were demoing, let alone hear what they were saying. I spoke to Mike Taulty whilst one speaker was on and he came up with a game where he would guess what the speakers were saying :) I think next time they should use a bigger area and a mic; I don't think a room would be a good idea as it would be too formal :)

I really liked the Park Bench concept: four people sit in front of a gathered crowd and take any questions. If you make a statement or think you have a better answer, one of the people on the bench has to get up and let you sit down to say your bit. This really lets everyone get involved and say their bit, a nice community-type discussion :)

After lunch I went into the session by Abid Quereshi, an 'Introduction to Aspect Oriented Programming', which was quite in depth and I must admit most of it went over my head - partly because we got some bad seats, I was feeling tired after lunch, and I think he placed too much emphasis on the theory. However it was a great session, and in a nutshell it was all about the declarative model of programming, which produces cleaner and more coherent code.

The final session was titled 'DataAccess Layers - Convenience vs. Control and Performance?' by Daniel Fisher and was the one session I was really looking forward to. Unfortunately Ed Gibson decided to do his usual FBI talk five minutes before the session was due to start - I didn't attend this as I have listened to him twice already this year, and once you've heard it you definitely don't want to hear him talk about small children again :) Then the projector wouldn't work properly, so the session was cut short to about 30 minutes, which meant he really had to rush it; a shame, as it looked like he had a good framework. He did say to email him if you would like a copy of the code, so I might just do that :)

Overall a great day though and I look forward to DDD5 :)

They did announce a WebDD day on the 3rd of February 2007, which is going to be a similar day focused on web development. I will definitely be attending this one. They also mentioned that MS will be doing a Vista launch on the 19th & 20th of January 2007 at Reading, so look out for this one as it's rumoured they will be giving away some copies of the OS :)

Friday 10 November 2006

Tech Ed - Unit Testing Best Practices with VSTS 2005

Went to a lecture by Mark Seeman (who is a senior MS consultant) about unit testing with VSTS. This was a good lecture as an introduction to unit testing as a concept, as well as to what VSTS gives us out of the box.

He started off by giving his opinion that a unit is a whole assembly, so he believes we should be thinking of testing an assembly as a whole, but always abstracting away from any volatile dependencies such as other changing components or the db. Interestingly, he was very adamant that we only test the public interface of the component. After explaining his reasons it really gave me a much better picture of exactly what the aim of the game is :) We are really setting out to test the component from a black box perspective and test the contract.

I really liked the agile-type testing that he demoed, where you create a test and then build the code to meet it; this really gives you a clear sense of a goal and will keep you from going off on a tangent :)

With VSTS he showed us how we can create a test project (he recommends a test project for each component you test) and then create test methods within it which form our actual tests. We apply a declarative TestMethod attribute to the methods to let VS know that the method is a test. Within the methods we can use the Assert class to test the results of calls etc. by comparing our expected result with the actual result. When we build up our tests and run them, we can use the test manager to check our results and the code coverage tool to see how much of our code we have tested.
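To illustrate the shape of such a test, here is my own sketch (PriceCalculator and its Total method are invented for the example, not from the talk):

using Microsoft.VisualStudio.TestTools.UnitTesting;

[TestClass]
public class PriceCalculatorTests
{
    [TestMethod]
    public void Total_AddsVatToNetPrice()
    {
        // Exercise the component only through its public interface.
        PriceCalculator calculator = new PriceCalculator();

        decimal actual = calculator.Total(100m);

        // Compare the expected result with the actual result.
        Assert.AreEqual(117.5m, actual);
    }
}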

He mentioned the following best practices:

  • Always keep tests simple.
  • Aim to test all your code, targeting code coverage of around 90%.
  • All logic should be in components so they can be tested; don't put any logic in the UI.
  • Test cases must be independent (should setup and clean down tests)
  • Test cases must be deterministic (you should not do things like create random values)
  • Reproduce bugs as test cases
  • Place tests in separate projects
  • Have a test project per test target
  • Use source control on test projects

Thursday 9 November 2006

Tech Ed - Asynchronous ASP.NET Programming

Went to a lecture by Jeff Prosise about asynchronous ASP.NET programming, which was really interesting. The first thing to note is that this subject is really under-documented considering it is such an important architecture to use, as it can allow you to scale your site.

He explained that when IIS receives an ASP.NET request it is passed to the worker process, which then allocates it a worker thread from its managed pool of available threads. This thread remains with the request for its entire lifetime. The worker process also manages an I/O thread pool for allocating to any operations that need to carry out I/O.

As there is a finite number of worker threads that can serve requests, it is possible for a busy site to become saturated with requests that cannot be served, and clients will start to receive 503 errors. If this occurs we need to scale the site out to serve more requests. One way of doing this is buying more hardware, but the better option is to write our code to make better use of the threads by writing asynchronous code. If our code is asynchronous, the worker thread can return to the pool to serve more requests whilst we wait for any long-running actions such as I/O or database calls to complete.

He showed us a couple of ways to make our ASP.NET pages asynchronous; both involve setting the Async attribute to true in the page directive and then doing one of the following:
Call AddOnPreRenderCompleteAsync in Page_Load to register our Begin and End delegates, which are then used by the page to do the long-running work.
or
Create a PageAsyncTask containing our Begin and End delegates and register it with RegisterAsyncTask. This method has the advantage of being able to maintain the thread context, create many tasks, and have a timeout value.

The async delegates will be called just after the PreRender event.

We can use ADO.NET's BeginExecuteReader to get the IAsyncResult object to return from our BeginAsyncOperation method; we then complete the operation in our End method with EndExecuteReader.
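A rough sketch of the first approach, assuming the page directive has Async="true" and assuming a connection string named "Db" and a GridView called OrdersGrid (this is my own reconstruction, not Jeff's demo code):

using System;
using System.Configuration;
using System.Data.SqlClient;
using System.Web.UI;

public partial class Orders : Page
{
    private SqlCommand _cmd;

    protected void Page_Load(object sender, EventArgs e)
    {
        // Register the Begin/End delegates for the async work.
        AddOnPreRenderCompleteAsync(new BeginEventHandler(BeginAsyncOperation),
                                    new EndEventHandler(EndAsyncOperation));
    }

    private IAsyncResult BeginAsyncOperation(object sender, EventArgs e, AsyncCallback cb, object state)
    {
        // "Asynchronous Processing=true" must be in the connection string for BeginExecuteReader.
        SqlConnection conn = new SqlConnection(ConfigurationManager.ConnectionStrings["Db"].ConnectionString);
        conn.Open();
        _cmd = new SqlCommand("SELECT OrderID, OrderDate FROM Orders", conn);

        // The worker thread goes back to the pool as soon as we hand back the IAsyncResult.
        return _cmd.BeginExecuteReader(cb, state);
    }

    private void EndAsyncOperation(IAsyncResult ar)
    {
        using (SqlDataReader reader = _cmd.EndExecuteReader(ar))
        {
            OrdersGrid.DataSource = reader;   // OrdersGrid is an assumed GridView on the page
            OrdersGrid.DataBind();
        }
        _cmd.Connection.Close();
    }
}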

I think if we do make all of our I/O methods asynchronous we could really improve the scalability of our sites - I will certainly be pushing to get some of these changes included in the next releases of my sites :)

He then spoke about creating HTTP handlers, which are just classes that implement the IHttpHandler interface. You can register these against specific file types (not that useful) or create an ashx file with a WebHandler directive and ASP.NET will automatically use this class when it is requested by the client. Using this instead of a classic aspx file is great for requests that will not be returning form/ASP.NET-type data, such as images, as it does not have all the overhead of the pipeline that an aspx request moves through.

He showed us a demo of using an ashx handler to return pictures by having a normal img tag on a page with its src set to the ashx file with a set of params. When the page renders it calls off to the ashx file, which returns the graphic without having to go through the same pipeline that an aspx page would. This results in a much quicker response for the client.

He explained that by default an HTTP handler is synchronous, but we can make it asynchronous by implementing IHttpAsyncHandler instead. You just leave the ProcessRequest method empty and do the work in BeginProcessRequest and EndProcessRequest. This is then called asynchronously by ASP.NET.
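A bare-bones sketch of what such a handler can look like (my own illustration, assuming an ashx file whose WebHandler directive points at this class; the image-serving details and query string parameter are made up, with no input validation):

using System;
using System.IO;
using System.Web;

public class AsyncImageHandler : IHttpAsyncHandler
{
    public bool IsReusable
    {
        get { return true; }
    }

    // Required by IHttpHandler but never called for an asynchronous handler.
    public void ProcessRequest(HttpContext context)
    {
        throw new InvalidOperationException();
    }

    public IAsyncResult BeginProcessRequest(HttpContext context, AsyncCallback cb, object extraData)
    {
        string path = context.Server.MapPath("~/images/" + context.Request.QueryString["name"]);
        FileStream fs = new FileStream(path, FileMode.Open, FileAccess.Read, FileShare.Read, 4096, true);
        byte[] buffer = new byte[(int)fs.Length];

        // Stash what the End method will need; the worker thread is free once BeginRead returns.
        context.Items["fs"] = fs;
        context.Items["buffer"] = buffer;
        return fs.BeginRead(buffer, 0, buffer.Length, cb, context);
    }

    public void EndProcessRequest(IAsyncResult result)
    {
        HttpContext context = (HttpContext)result.AsyncState;
        FileStream fs = (FileStream)context.Items["fs"];
        byte[] buffer = (byte[])context.Items["buffer"];

        fs.EndRead(result);
        fs.Close();
        context.Response.ContentType = "image/jpeg";
        context.Response.OutputStream.Write(buffer, 0, buffer.Length);
    }
}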

He showed a good demo of a site pulling back images from Virtual Earth which rendered much quicker when using asynchronous calls.

He noted that more improvements can be made by editing the maximum number of concurrent connections in machine.config.

He also told us to avoid Thread.Start, ThreadPool.QueueUserWorkItem and asynchronous delegates, and to use custom thread pools if necessary, as otherwise you can steal a thread from the same pool the ASP.NET worker process uses.

Tech Ed - UK Country Drinks

We were invited to Shoko, a really cool Far East-style contemporary lounge club with a terrace that overlooked the beach. It was a really good evening. We got on a coach from the main event to the club, but it turned out it was really close to our hotel and we probably could have walked it quicker and had a chance to get changed - but never mind :)
Once in it was free drinks on tap :) and loads of different tapas to try, and a yummy chocolate fountain :) This was a really good night, nice one M$ ;)

Tech Ed - Patterns & anti patterns with SOA


We went to this lecture by Ron Jacobs, who is fast becoming one of our favorite speakers; he is really interesting and engaging :)

Basically he was saying that using SOA technologies does not guarantee success and there is never one right answer, because as usual everything has pros and cons.

The goal of this is to have a friction free interaction between systems so there are no problems such as different file types or transportation methods.

He made an interesting point that SOA is not a noun; it's a style of architecture which emphasizes standards-based communication.

He highlighted that tightly coupled systems definitely have their place, as if everything is loosely coupled it's slow as hell :)

When designing SOA we must aim for a good set of explicit behaviors over implicit ones, where the client has to 'try things out' to find out how things work.

He told us to think of service granularity at a business-process level, and that each of these has its own interface.

As all boundaries should be explicit, he gave a great metaphor of an explicit boundary being an international border between countries: you know clearly where they are, and when you cross them you are not in control of anything. So when we are not in control of things such as the server or config, we know we have an explicit 'international' boundary that will be an interface to a service. As with international borders, we need to think carefully about how many we have and how we control them, as they are expensive and problematic if they are not controlled. Within internal business boundaries you can do anything you want, and this includes tightly coupling objects to improve performance.

He spoke about Anti Patterns (patterns that show how to do things wrong so that you can make sure you don’t do the same). He discussed the following:

  • CRUDy interface - when you create an interface with simple CRUD commands on it, when it should instead expose a full business process with logic.
  • Enumeration - a service should not have enumeration commands such as GetNext() that go against the atomic nature of a service and cause the server to hold a large amount of data whilst a client navigates it.
  • Chatty interface - bad when a service offers lots of methods that must be called in a sequence by the client to carry out an operation. The client may call one command but never get to call the others, and the service is left in an inconsistent state. We should design larger web service methods and do all the steps inside them.
  • Loosey Goosey :) - where a service tries to be uber-generic with a single command that takes a lump of XML and returns a lump of XML, and uses a Word doc to define the contract. This is hard to test and hard for the client to use as it may implicitly change. Sometimes this is done to avoid versioning problems; that is now easier with the serialization improvements in .NET 2.0, but the message is to "receive liberally and send explicitly".

He explained that the best way to start a SOA design is to start with the process and understand it. Then create the contract by defining the messages and operations and by grouping them.
Use portable types - returning DataSets is not good. They can be used internally, but for an external service we should decouple internal and external objects by unloading one internal object into another, external, object.
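As a rough illustration of that last point (my own sketch, all names invented), an external, versioned contract type is filled from the internal entity rather than the internal object or a DataSet being handed across the boundary:

// Internal domain types (illustrative only).
public class Customer { public string Name; }
public class Order { public int Id; public Customer Customer; public decimal Total; }

// External, versioned contract type exposed by the service.
public class OrderSummary
{
    public int OrderNumber;
    public string CustomerName;
    public decimal Total;
}

public static class OrderMapper
{
    // Unload the internal object into the portable external type at the boundary.
    public static OrderSummary ToContract(Order internalOrder)
    {
        OrderSummary summary = new OrderSummary();
        summary.OrderNumber = internalOrder.Id;
        summary.CustomerName = internalOrder.Customer.Name;
        summary.Total = internalOrder.Total;
        return summary;
    }
}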

His advice is to think of moving bits of paper not calling methods.

Ron has some really good webcasts, including the ones he showed at the event; we can take a look over at http://www.arcast.net/

Wednesday 8 November 2006

Tech Ed - Unified Process and VSTS

WOW, we were so looking forward to seeing Ivar Jacobson, the legend!!!!
However, the seminar was just way too surreal.
Ivar has done a total U-turn ... moving from his strict methodology to now almost anything goes!!!! What was very clear is that Ivar's goal is simply to help people produce good software, and the means of getting there is fairly flexible. He admits that Agile has the correct emphasis on people rather than process and that its language is correct.

He also told us that he knows most developers just don't read books - they just buy them :)

His consultancy firm has developed a framework called the Unified Process model that allows you to use different processes from different methodologies to get the job done. There seems to be an interesting "game" you play in this model with activities!! Explanations of the different processes are displayed on small cards with further reading available. It was not totally clear how to start with this or how exactly the model works, but I'm sure Ivar will be writing a book on it :) More info at http://www.ivarjacobson.com/home.cfm

The whole integration with VSTS was the most confusing demo and piece of software we have ever seen. We really have no idea how to use it or how it works!!!!

Tech Ed - C# whiteboard session with Anders Hejlsberg

Lots of cool questions, but an interesting feature discussed was partial methods, where a call is made in one partial class to a partial method; this partial method can then be implemented in another partial class.
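As a rough sketch of the idea (my own example of how partial methods ended up looking in C# 3.0, not the whiteboard code from the session):

using System;

// One part declares (and calls) the partial method...
partial class Order
{
    partial void OnValidate();   // implicitly private; the call below compiles away if never implemented

    public void Save()
    {
        OnValidate();
        // ... persist the order ...
    }
}

// ...another part (often hand-written alongside generated code) may implement it.
partial class Order
{
    public string CustomerName;

    partial void OnValidate()
    {
        if (string.IsNullOrEmpty(CustomerName))
            throw new InvalidOperationException("Order needs a customer");
    }
}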

Anders, talking about what he would do differently with C#, said:
  • He would have liked a better distinction between reference equality and value equality.
  • No goto.
  • Go straight to lambdas instead of using anonymous methods.

Tech Ed - Alternative .NET debugging facilities

Brian Long. http://blong.com Alternative .NET debugging facilities.

.NET-supplied console debugger - MDbg. The Load switch shows all objects loaded for the application.

.NET-supplied GUI debugger - DbgCLR. This is the debug engine used in Visual Studio. Useful for server or client-side debugging.

Extra debugging tools for windows http://www.microsoft.com/whdc/devtools/debugging/default.mspx
  • Ntsd - uses existing console.
  • Cdb - launches new console.
  • Kd - kernel level!
  • Windbg - gui. Recommended debugger. Debug menu, event filter, add stop on .NET exceptions.

To get the most out of the debuggers, Microsoft's symbol server provides the symbols for all of Microsoft's DLLs. Symbols are needed so the debugger can step into the call stack operations. To allow debuggers to use the symbols, add the _NT_SYMBOL_PATH machine environment variable with the UNC path for local symbols or the URL for the symbol server.

Microsoft userdump http://www.microsoft.com/downloads/details.aspx?FamilyID=E089CA41-6A87-40C8-BF69-28AC08570B7E&displaylang=en - good tool for crash dump creation and extraction.

You can write debugger extensions - these must be unmanaged though. So you could write custom extensions specific to your complex application.

Furthermore, all these debuggers are unmanaged, so you must add the SOS (Son of Strike - originally .NET was going to be called Lightning) debugger extension to take advantage of detailed .NET debugging. To add a debugger extension, add the _NT_DEBUGGER_EXTENSION_PATH machine environment variable with the UNC path for the SOS dll, or you will have to type the full path every time in the debugger.

In Task Manager add the Virtual Bytes column and use this rather than Memory Usage, as memory usage can be compressed.

Tech Ed - SQL SODA

Implementing Service Oriented Database Architecture (SODA) With SQL Server 2005. Bob Beauchemin

How has SODA come around? Well, this is to do with performance, and a good rule of thumb is that the second 10,000 users must perform as fast as the first 10,000. So you need scaling. Scaling up is adding more power to the machine; scaling out is moving the pressure out of the database across more machines.

Two main problems with data access:
  • As data is stored over time you will always have to scale infinitely.
  • Sharing data across company boundaries.

Traditional database solutions:

  • Generally adopts a scale up approach.
  • Distributed transactions. Slow, long and susceptible to error.
  • Cache. Can end up with db in the cache.

Service oriented database solutions:

  • Generally adopts a scale out approach.
  • Parallel processing.
  • Smart cache.
  • The db contains services which receive an instruction, then separately do processing, then separately raise an event when the criteria are met; then, separately, events can be received by client applications.

OK, this is getting way too deep into DBA land. However, the SODA concept is moving away from a strict relational model to an object-centric model. Let's take a web site order example to demo the difference:

  • The relational model is heavy, so the order message will contain all the person details, the order details, the item details and the payment details. All will be processed at once. This kind of web site takes 2 or 3 minutes of processing before you get your order number. However, note your order is now fully processed and successful.
  • The SODA model is light, so while the order message will still contain all the person details, the order details, the item details and the payment details, only a skeleton order is created and this kind of site will provide your order number instantly. A series of events is then fired by the database - full order and payment, for example. These will be received by other server applications to process, which could take anything from minutes to days. During this server processing time, if provided, the user can follow the status of their order, which can still fail on stock or payment.

Tuesday 7 November 2006

Tech Ed - ADO present and future


Jackie Goldstein. Renaissance Computer Systems
When coding optimistic concurrency handling with a merge, get a refreshed DataTable reflecting the new values in the database by filling a new DataTable. Then call the original DataTable's Merge method with the new DataTable as the table parameter and true for the preserveChanges parameter.
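In code that might look something like this (my own sketch; the adapter and table are assumed to already exist from earlier data access code):

using System.Data;
using System.Data.SqlClient;

static void RefreshPreservingEdits(SqlDataAdapter adapter, DataTable originalTable)
{
    // Re-read the current database values into a table with the same schema.
    DataTable refreshed = originalTable.Clone();
    adapter.Fill(refreshed);

    // preserveChanges = true keeps the user's pending edits on top of the refreshed rows.
    originalTable.Merge(refreshed, true);
}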

SqlDependency gives simple 'the data result of a query has changed' event handling in Windows Forms.

Handle database independence using the System.Data.Common db factory classes.
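A minimal sketch of that provider-independent style (my own example; the provider name, connection string name and query are assumptions):

using System.Configuration;
using System.Data.Common;

static int CountOrders()
{
    // GetFactory takes the provider's invariant name; swapping it (plus the connection
    // string) is all that ties this code to a particular database.
    DbProviderFactory factory = DbProviderFactories.GetFactory("System.Data.SqlClient");
    using (DbConnection conn = factory.CreateConnection())
    {
        conn.ConnectionString = ConfigurationManager.ConnectionStrings["Db"].ConnectionString;
        DbCommand cmd = conn.CreateCommand();
        cmd.CommandText = "SELECT COUNT(*) FROM Orders";
        conn.Open();
        return (int)cmd.ExecuteScalar();
    }
}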

The next version of ADO is the Entity Data Model. Its aim is to provide the client application with a conceptual schema rather than the physical database schema. This client view is achieved by using the new client-side map provider, which is an extension of ADO that provides the mapping at run time. The map provider returns DataSets, DataTables and DataRows. Pros: each application can have its own specific view of the data. Cons: the modelling is brought up to the client, so while the code is written against the mapped objects, the developer still needs to know about the mapping.

To take the map provider further to an object model there are Object Services, which sit on top of the map provider to generate the object classes to return.

Hopefully the map provider and Object Services will be usable within a dll to provide a common object model dll for clients to use.

Tech Ed - Visual Studio: The .NET Language Integrated Query (LINQ) Framework Overview


We attended a great lecture by the legend Anders Hejlsberg, who gave us an insight into LINQ and explained that LINQ will be included in C# 3.0 and VB.NET 9.0; more information and slides can be found at http://msdn.microsoft.com/data/ref/linq/

Here are some of our notes that we made whilst in the lecture:

He explained that we can use LINQ to query the following out of the box at RTM:
  • any in-memory objects that implement IEnumerable
  • Datasets
  • SQL
  • Entity Objects

He then whipped up a demo that took in-memory objects with a composition relationship, and queried them using LINQ with both lambda expressions and the new extension methods.
He explained that the new var keyword is still strongly typed - the compiler infers the variable's type from the expression it is assigned from.
He explained that we can use lambda expressions or the gentler query syntax, which is converted by the compiler into lambda expressions at compile time.
We can also use an anonymous type in the select to have an object created for us at compile time:
select new {c.companyname, c.phone}
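For illustration, here is my own version of that sort of query (assuming an in-memory list of customers with CompanyName, Phone and City members, and a using for System.Linq) in both the query syntax and the equivalent extension-method/lambda form:

var londonContacts =
    from c in customers
    where c.City == "London"
    select new { c.CompanyName, c.Phone };   // compiler-generated anonymous type

var sameQuery = customers
    .Where(c => c.City == "London")
    .Select(c => new { c.CompanyName, c.Phone });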

He showed nested creation of new anonymous types inside a select statement for a more hierarchical structure.

He reminded us that as this is just C# you can do anything in your LINQ statement.

He discussed the deferred query execution model, where the LINQ query is built as a pipeline of separate query steps that are not actually executed until the results are iterated or a method is called on the results.

The LINQ to SQL API ships with a tool that you can point at a db and it will code-gen all the objects for querying; there will also be a WYSIWYG designer so you can drag tables over to create objects. This API also claims to create slim SQL: it will create select, insert and update statements automatically, which you can view by looking at the generated data context's Log object.

The LINQ to XML API also ships, which allows us to use XML more declaratively. It allows us to create and query XML in a way that is easier, faster and more functional than XQuery. We can create new XElement objects by giving the constructor a name and any IEnumerable object as params, and this will create an XML element with all the IEnumerable's objects within it. We can also query the relational world to create XML using LINQ.
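For example (my own sketch, assuming the same in-memory 'customers' collection as above and a using for System.Xml.Linq):

XElement contactsXml =
    new XElement("Contacts",
        from c in customers
        select new XElement("Contact",
            new XAttribute("Company", c.CompanyName),   // each result becomes a child element
            new XElement("Phone", c.Phone)));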

PLINQ is another project being developed to exploit this more declarative way of querying instead of many for loops etc.; the higher level of abstraction means queries can be run in parallel in a multi-processor environment.
We can check out more info on LINQ over here http://msdn.microsoft.com/data/ref/linq/

Tech Ed - Agile Methodology


Roy Osherove

So how can you define agile development?


  • Executable requirements. That is something that can be measured.
  • Short iterations with simpler requirements. So 2 to 4 weeks in duration for each iteration, resulting in a shippable product (which may not actually be released). During this the developer should be left alone with no interruptions.
  • To help with short iterations, have automated test and build tools.
  • Have team based estimations.
  • Nothing wrong with change, so be adaptable.
  • Lots of verbal communications to define the requirement and then a small concise document is written.
  • Customer has more responsibility - They can contact us at any time to talk about change / They are responsible for feature priority order / Customer involved in testing.
  • A motto is "give value quick by priority".

All these definitions can be summarised into the agile manifesto which compares agile vs. standard methodologies:

  • Individuals and interactions vs. processes and tools.
  • Working software vs. documentation.
  • Customer collaboration vs. contract negotiation.
  • Responding to change vs. following a plan.


So with an agile approach, the team must always accept and prepare for things definitely changing by adopting an adaptive and more people oriented vs. a predictive and process oriented approach.

Extreme Programming (XP) and SCRUM are implementations of an agile approach;

  • As per the definitions of agile above.
  • Short daily meeting led by the team lead. Each developer has 5 mins to answer: What did you do yesterday? (for accurate estimations) / What are you going to do today? (for accurate estimations) / What is stopping you?
  • Developers work individually on feature design but whole team reviews designs and then code in pairs.
  • Helps reduce risk, as knowledge is shared through paired coding and short iterations which everyone can understand.

Problems:

  • Generally need experienced developers.
  • Always need an active customer.
  • Will require a few iterations to see benefits.

Remember it is just a mind set, be flexible yourselves and feel free to change / create your own agile approach that works for your own team.

Read the good agile / bad agile article.

Visual Studio Team System provides a SCRUM template.

Tech Ed - Developing Rich Web Applications with ASP.NET Ajax


We attended a really cool lecture by Shanku Niyogi that discussed the two different/complementary approaches to developing Ajax applications: server-centric and client-centric.

Here are some of our notes that we made whilst in the lecture:

He showed the UpdatePanel and how using triggers - an AsyncPostBackTrigger or a PostBackTrigger - causes a partial or full postback respectively. He also showed a timer example, and wrapping a whole GridView in an UpdatePanel to allow it to update without a full postback.

The user experience can also be improved with an UpdateProgress control to show when an async update is in progress; this can be on a per-panel basis or for any panel on a page. We can use the DisplayAfter property to only show the update information after a certain amount of time.

The Control Toolkit, which can be found at http://ajax.asp.net/default.aspx?tabid=47&subtabid=477, can add extra AJAX functionality really easily with little or no JavaScript. He showed how you can hide & show areas not always needed using the PopupControl extender, and how we can make this disappear after the popup work has completed by using getProxyForCurrentPopup(this).Cancel()

He discussed the issue of how to handle state when using Ajax and recommended taking advantage of the profile store in SQL or AD or even SharePoint :) This was demoed using a custom state-bag type concept via JavaScript calls to the server side to store user state. He showed how this could be used anonymously with a non-persistent cookie or by using a login to gain identity. He also touched on saving your state and accessing it via a URL. This potentially solves the problem of losing state when using the back button.

He then discussed how the MS AJAX library gives users an OO-style JavaScript pattern library that wraps things such as networking calls from the client to the server and back.

The networking stack which builds on top of web service architecture, allows you to return xml/strings and the library will convert to JavaScript objects. It will do all serialization/deserialization of objects & conversion of native .net objects.

He then demoed a cat conversation example which was first built as a normal web service which returns to a browser its heavy weight soap xml.
He then changed his web methods to add an extra [ScriptMethod] decoration. Once compiled, it returned a JavaScript object when you added /js to the query string. This JavaScript is a client-side proxy object that handles all the networking. You can call the proxy directly from JavaScript via async methods.

He discussed AJAX releases which included:

  • ASP.NET AJAX v1.0 core product release of client & server components.
  • More CTP features will continue to be updated regularly, incl. the AJAX Control Toolkit.
  • RTM at the end of the year, which will run on ASP.NET 2.0.
  • Will be fully integrated into the version of VS code-named Orcas.

Have a look at the latest AJAX release and try stuff out over here http://ajax.asp.net/Default.aspx

Tech Ed - Key Note

Wow, what an audience - close to 4,500 delegates are here at Tech Ed. The auditorium is huge and our early bird pass got us right to the front :) However, we were a little disappointed because for some reason we were expecting Bill Gates to be giving the keynote ... and he was not.

Instead it was a Microsoft senior vice president ... however, he was a superb speaker and really got you fired up for the future of Microsoft through Vista and the Office 2007 suite. As we all know, they certainly look the part, and judging by the keynote and the demonstrations the integration of the Office 2007 products has taken on a new level ... looking forward to seeing that in action.

One superb demo within the keynote was Language Integrated Query (LINQ), whereby the chief architect did a live demo of a web-based resource task manager using LINQ against the OS and a DB. Furthermore, with the click of a menu item an RSS feed was created!!!! LINQ certainly appears to have created that single layer over all data sources we developers have previously dreamed of!!!

Monday 6 November 2006

Tech Ed - Software Architecture

Today we enjoyed a whole series of seminars from Ron Jacobs (Microsoft Architect) and Scott Hanselman (Corrillian Architect). Both are superb speakers and clearly experts in the domain of software architecture.

The key points we took away as being key to an architect are:

Different lenses on documentation. It is vital the correct lens is applied to the current customer. For example, the sales director does not care about how much money you have saved on servers or disk space ... the sales director's lens must be customised with sales information such as: this system will allow a transaction to complete 25% quicker. Whereas the IT director does not really care about the sales information, so the IT director's lens must be customised with IT information such as: we can now decommission two other servers and save you £1,500 a year in maintenance.

Executive buy-in. It is vital to have "suits" buy in, to have them understand the benefits of how investing in an architecture will bring long-term value to the business as a whole and to individual projects. Should management buy-in be failing ... then maybe a sneaky shadow government could form ... a team working in their own time to achieve a final product which now has metrics (see below) to present to the management. Obviously very risky, but it does show team commitment to a process.

Metrics are vital and play two key roles.
First, working towards executive buy-in: being able to show the sales director how sales can go through 20% quicker with this new technology, or show the IT director that 30% of development time was saved on project X because the architect already had a pattern for the main problem and some of the existing architecture was reused.
Second, and probably more important, demonstrating successfully delivered requirements, both to the technical team and to the customer.

Testing. Implement automated testing and continuous integration, so that each unit of work results in a complete set of test results and, on success, a complete build. To fully benefit from automated testing it is important to remove as many "Word" document requirements as possible and include them within the automated testing tools. For example, the requirement that the home page must load in less than 3 seconds on a 512k connection - incorporate the load-time testing within the automated test tools, and then the only dependency on the build is ensuring the tests pass, not someone remembering to run further manual tests.

Responsibility. During the Q&A session it was explained that neither Ron nor Scott had any management responsibilities. They have a clear technical responsibility right to the top management but in regards to holidays, career and general HR stuff they are not involved. They noted this is on purpose and significant in that now the architect can focus on the technical goals of the company as a whole and get the positive buy in from the techies without any management politics becoming involved.

Methodologies. Tying in with the automated testing and continuous integration is agile. This is key when the team focus on delivering business benefit in small quick iterations. Worth looking at SCRUM.

On a more general note for very large scale smart client development Ron Jacobs points you to the CommSee case study.

We've arrived @ Tech Ed 2006


Got here nice and early and registered - got our cool bags and caps :)

Been to "Introduction to Software Architect" pre-conference seminars which have been awsome so far...

Only down side is a dodgy wireless connection here - everyone seems to be having problems - I think the wireless routers are being well spanked :)

Tried to use the M3100 to blog - but blogger doesn't seem to work well with IEMobile :(

More updates to come :)

Wednesday 1 November 2006

Off to Tech Ed 2006 @ Barcelona




We are off to Tech Ed next week in Barcelona. Keep an eye on this blog for (sort of) up-to-date pics and comments.

Tuesday 31 October 2006

Cookie Timeout Problem

Jeff
I recently had a problem where, for some reason, my cookies were timing out before the time I set in the forms timeout tag.

Background

ASP.NET 2.0 site with forms auth using Active Directory Membership provider and the ASP Login Control. IIS 6.0 with separate App Pool being run by a custom domain account.

The following is in the Web.Config:

<membership defaultProvider="MyADMembershipProvidor">
<providers>
<add name="MyADMembershipProvidor" type="System.Web.Security.ActiveDirectoryMembershipProvider, System.Web, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a" connectionStringName="ADConnectionString" attributeMapUsername="sAMAccountName" enableSearchMethods="true" />
</providers>
</membership>

<authentication mode="Forms">
<forms name=".ADAuthCookie" timeout="50000000" slidingExpiration="true" loginUrl="FormsLogin.aspx" />
</authentication>

<sessionState timeout="30">
</sessionState>

Problem

Everything worked fine, so I could authenticate using forms auth on the site and this used AD fine. However, once I left the page idle for 20 minutes and then clicked on a link, I would be redirected back to the login page to authenticate. This was the same problem for both persistent and non-persistent cookies.

Solution

After spending some time thinking I was going mad, I created a simple test harness. I used this to play around with the forms timeout, session timeout and roles cookie timeout. If I used a forms timeout value of less than 20 minutes all would work as expected; however, using a value greater than 20 minutes would not work, and after 20 minutes I would still be required to log in again.

After about a day of debugging I finally tracked the problem down to the worker process shutting down after 20 minutes of idle time. This config setting is found in the properties of the App Pool, under Performance. If I unchecked this, everything worked as expected. So the issue is around App Pool recycling.

I found this article about invalid viewstate after an App Pool recycling when the identity is not Network Service.

So there is a known ASP.NET issue: the auto-generated decryption and validation keys used for encryption are not maintained across App Pool recycles if the identity is not Network Service. So any encryption performed using these keys will not be valid after the App Pool is recycled, and this includes any encrypted cookies.

Finally I had found the problem - when the App Pool recycles, the keys are not maintained and new ones are generated. This means any encrypted cookies, including the forms auth cookie, cannot be decrypted on subsequent requests from the browser, and so they are discarded.

To resolve this I edited the machine.config with a static decryption and validation key using this console app.
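For reference, a small console app along these lines can generate the random key material (this is my own sketch, not the exact tool linked above; the key lengths are the common SHA1/3DES sizes):

using System;
using System.Security.Cryptography;

class MachineKeyGenerator
{
    static string MakeHexKey(int byteLength)
    {
        byte[] bytes = new byte[byteLength];
        new RNGCryptoServiceProvider().GetBytes(bytes);   // cryptographically strong random bytes
        return BitConverter.ToString(bytes).Replace("-", "");
    }

    static void Main()
    {
        Console.WriteLine("validationKey=\"{0}\"", MakeHexKey(64));   // 64 bytes for SHA1 validation
        Console.WriteLine("decryptionKey=\"{0}\"", MakeHexKey(24));   // 24 bytes for 3DES decryption
    }
}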

Everything is working fine now :)

Sunday 22 October 2006

ASP.NET Build numbers

Jeff
Whilst building my website for deployment I did some research on creating build numbers.
I wanted to use the Major.Minor.Build.Revision format where :
Major = the major release version. This is only changed when major changes are made to the application.
Minor = the minor release version. This is changed for small changes such as user requests, bug fixes.
Build = the build number. This is constructed from the date of the build, e.g. if a build is completed on 1st August 2006 the build number would be 60801, that is YYMMDD.
Revision = the revision of the build. This is incremented every time a build is carried out on the same day. So when the first build of the day is performed it will be a 0, then if another build is made on the same day it will increment to 1 and so on.
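Just to illustrate the Build part of the scheme (my own snippet, not part of the build task itself):

DateTime today = DateTime.Today;
int buildNumber = (today.Year % 100) * 10000 + today.Month * 100 + today.Day;
// e.g. a build on 1st August 2006 gives (6 * 10000) + (8 * 100) + 1 = 60801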

Obviously I wanted to automate this using a build task as part of my Web Deployment Project.
For class libraries this is quite easily done by using the AssemblyInfoTask module written by the MSBuild team. Once installed, a help document for the module can be found at [Program Files]\MSBuild\Microsoft\AssemblyInfoTask and this explains quite well how to use the module with your build.
For the Web site build I followed this blog which seems to work well.

Transparent png's IE6

Jeff
Wow, what a messy job IE6 does with transparent PNG images!!! I hadn't come across this before, but after putting it into Google I noticed this is a serious flaw in IE6.
I won't go into the exact issues as there is loads of info on the web.
However I thought I would just post some good links I found that helped me overcome this major issue. I warn you now there is no 'nice' solution to this problem, as it is an issue with IE6's outdated graphics rendering engine. However, I recommend using Conditional Comments to include a new style sheet for IE6; this issue has been fixed in IE7, so at least the nasty bit is isolated.

This is how I overcame it
This is similar
I had this issue too and it is a good description of the problem.

UPDATE: This is now the easiest way to fix the transparent PNG issue. Full credit goes to Angus Turnbull :)

Favicon

Jeff
Just for my own reference really, I thought I would document how to add a favicon to a site.

Creating the icon


Used this as a guide to creating my icon. I used photoshop with the mentioned Plugin from telegraphics

Adding to site


You can either add the new favicon.ico to the home directory of your website, and this will be picked up automatically by the browser,
or
(This is the option I chose) You can add a link to every page header (or the master page) like the following:
<link rel="shortcut icon" href="images/favicon.ico" type="image/x-icon" />
This allows you to add the favicon to a folder such as images along with all your other graphical content.

Busy Busy

Jeff
Been really busy lately with my first release at RVC. All settled down a bit now so I have a few blogs to do....

Sunday 15 October 2006

Hackers Vs Crackers

Scott
The terms hackers and crackers are so regularly used incorrectly, and while reading The Hacker Ethic by Pekka Himanen I came across a superb definition.

At the core of our technological time stands a fascinating group of people who call themselves hackers. They are not TV celebrities with wide name recognition, but everyone knows their achievements, which form a large part of our new, emerging society's technological basis: The internet and the Web, the personal computer, and an important portion of the software used for running them. The hackers' "jargon file," compiled collectively on the Net, defines them as people who "program enthusiastically" and who believe that "information-sharing is a powerful positive good, and that it is an ethical duty of hackers to share their expertise by writing free software and facilitating access to information and to computing resources wherever possible."

This has been the hacker ethic ever since a group of MIT's passionate programmers started calling themselves hackers in the early sixties. Later, in the mid-eighties, the media started applying the term to computer criminals. In order to avoid the confusion with virus writers and intruders into information systems, hackers began calling these destructive computer users crackers.

Observe the distinction between hackers and crackers :)

Monday 2 October 2006

Radio Buttons / UltraOptionSet

Scott
I am currently using data binding extensively in a winforms application. This has proved to be very successful until radio buttons were required. The radio button control does have a DataBindings property, but this means each radio button itself would have to be created and data bound to, which is no good when the options are dynamic :(

Dynamic databound radio buttons were successfully achieved using the Infragistics UltraOptionSet. This control allows the DisplayMember, ValueMember and DataBindings properties to be set, and from this a dynamic number of correctly labelled radio buttons are created and, on top of that, they are all databound :)

Winforms DateTime Databinding

Scott
Winforms databinding has significantly improved in .NET 2.0, and a line of code such as - txtName.DataBindings.Add("Text", DataObject, "ClientName"); - just works with both data and nulls :)

However, the DateTimePicker and its ability to handle nulls is not quite as simple. To handle nulls with the DateTimePicker a few extra lines of code are needed.

The format property of the DateTimePicker control needs to be set. I have chosen Custom and as such the custom format is also set.
dtpAppointment.Format = DateTimePickerFormat.Custom;
dtpAppointment.CustomFormat = "dd/MMM/yyyy HH:mm";


This next line is really the most crucial. Notice the only difference from the simple single data binding line above is that there is now a true parameter at the end. This is the formattingEnabled argument, and without formattingEnabled set to true the handling of nulls just does not work!!!
dtpAppointment.DataBindings.Add("Value", DataObject, "AppointmentDate", true);

Happy binding :)

Saturday 30 September 2006

Caricatures

Scott
We wanted to create a fun and geeky theme and DimpleArt supplied the goods :)

Thursday 24 August 2006

'Mashing up' Windows AND Forms Authentication

Jeff

I had a classic requirement that a website must automatically log in users that have been authenticated against its local domain controller (Windows authentication). Any users who have not been authenticated by its DC will need to log in using a web-based login form, which will then authenticate them against the DC using the ActiveDirectoryMembershipProvider.

I have used these resources to tackle this requirement:

http://msdn.microsoft.com/library/default.asp?url=/library/en-us/dnaspp/html/MixedSecurity.asp - Paul Wilson's MSDN article titled "Mixing Forms and Windows Security in ASP.NET"

http://aspadvice.com/blogs/rjdudley/archive/2005/03/10/2562.aspx - Richard Dudley's blog about how he modified Paul's method to stop the browser prompting for credentials for remote 'internet' users.

I'm just going to walk through my solution for my own reference and for anyone else with this requirement.

1. Make sure the whole website has the 'Enable Anonymous Access' checkbox ticked under IIS->Website->Properties->Directory Security->Edit->Enable Anonymous Access.
Note: The Integrated Windows authentication check box, under the Authenticated access, may also be selected as this is required to debug in VS.
2. Create both WinLogin.aspx and FormsLogin.aspx pages.
3. Create a Redirect401.htm file.
4. In the web.config file I have the following:

<configuration xmlns="http://schemas.microsoft.com/.NetConfiguration/v2.0">
...
<location path="FormsLogin.aspx">
<system.web>
<authorization>
<allow users="?,*" />
</authorization>
</system.web>
</location>

<location path="WinLogin.aspx">
<system.web>
<authorization>
<allow users="?,*" />
</authorization>
</system.web>
</location>

<appSettings>
...
<add key="LanIPMask" value="192.168.\d{1,3}\.\d{1,3}"/>
...
</appSettings>


...
<system.web>
...
<authentication mode="Forms">
<forms name=".ADAuthCookie"
slidingExpiration="true" loginUrl="FormsLogin.aspx"/>
</authentication>
...
</system.web>
</configuration>

5. The FormsLogin page just has the ASP.NET Login control, and in its code-behind I have the following:

protected override void OnPreRender(EventArgs e)
{
base.OnPreRender(e);

// Is this a postback?
if (!Page.IsPostBack)
{
// NO - this is not a post back.

// Try to authenticate the user via windows Auth.
AttemptWindowsAuth();

// The user must authenticate using Forms.
}
}

private void AttemptWindowsAuth()
{
// Is the user not using internet explorer?
if (!Request.Browser.IsBrowser("IE"))
{
// NO - the user is not using IE and therefore can not perform windows authentication, don't redirect them.
return;
}

// Is the user on a mobile device?
if (Request.Browser.IsMobileDevice)
{
// YES - the user is on a mobile device, don't use windows auth.
return;
}

// Has the user already had a failed login?
if (Request.QueryString["failedlogin"] != null)
{
// YES - the user has already had a failed login, don't redirect them again.
return;
}
// Is the user on the local Lan?
if (Regex.IsMatch(this.Request.UserHostAddress, ConfigurationManager.AppSettings["LanIPMask"]))
{
// YES - the user is on the local lan so redirect them to the windows page for windows Auth.
RedirectToWinAuth();
}

// Is the user on the local server?
if (this.Request.UserHostAddress.Equals("127.0.0.1") || this.Request.UserHostName.ToLower().Equals("localhost"))
{
// YES - the user is on the local server so redirect them to the windows page for windows Auth.
RedirectToWinAuth();
}
}

private void RedirectToWinAuth()
{
// Transfer to the windows login page.
Response.Redirect("WinLogin.aspx?" + Request.QueryString.ToString(), true);
}

6. In IIS make sure the WinLogin.aspx does NOT allow anonymous access and only uses Integrated Windows authentication to authenticate access. This can be set by navigating to IIS->Website->WinLogin.aspx->Properties->Directory Security->Edit
7. Whilst you are in IIS, navigate to IIS->Website->WinLogin.aspx->Properties->Custom Errors and change all the 401 errors to point to the Redirect401.htm file you created earlier.
8. The WinLogin.aspx is an empty page and in the code-behind has the following:

protected override void OnLoad(EventArgs e)
{
base.OnLoad(e);
int start = this.Request.ServerVariables["LOGON_USER"].LastIndexOf('\\');
string userName = this.Request.ServerVariables["LOGON_USER"].Substring(start + 1);
FormsAuthentication.RedirectFromLoginPage(userName, false);
}

9. The Redirect401.htm has the following html:

<html xmlns="http://www.w3.org/1999/xhtml">
<head>
<title>Redirect 401</title>
<script type="text/javascript" language="javascript">
window.location = "FormsLogin.aspx?failedlogin=1"
</script>
</head>
<body>
<p>
If you are not automatically redirected please click <a href="FormsLogin.aspx?failedlogin=1">here</a>
</p>
</body>
</html>

So the user always hits the FormsLogin page for authentication. It will check to see if their IP address matches the RegEx expression in the web.config (which is the mask for local IP addresses); if it does, they are redirected to the Windows login page, which will cause IIS to authenticate them using Windows authentication. If this is successful they will be redirected via forms authentication, giving them a forms authentication ticket :-) They are now free to move around the site.
If they are not in the local IP range, they will be shown the forms login page to enter their details, and the ActiveDirectoryMembershipProvider is used to authenticate them against Active Directory.
If the user has just 'plugged into' the local domain and has received an IP address via DHCP, when they visit the site they will be pushed to the Windows authentication page, and as they have not been authenticated by the DC they will be prompted with the browser's credential request box. If they cancel this or enter an invalid username and password combination, a 401 error will be raised and handled by our custom page, which will redirect them back to the FormsLogin page. The only way for them to gain access to the system is to enter a valid username and password that is stored in Active Directory.
I also use the SQLRolesProvider within this web application and it works fine with this solution.

Friday 18 August 2006

Winforms visibility and event firing

Scott
I am currently working with an MDI application, and at times MDI child forms toggle visibility for various reasons which I won't bore you with ;) With events being used quite extensively in the design, I just wanted to share this little snippet ... when a winform has its Visible property set to false, the form's events do not fire.

Winforms Combo Box and Sorted Property

Scott

Had a very strange one the other day ... the conclusion is that the Sorted property of the Forms.ComboBox control should never be true when the combo box is using a DataSource.

The combo box was set up no differently - set the DataSource with a DataRow[], set the DisplayMember and ValueMember. The DataRow[] was already in the desired Description ASC sort order:

ID    Description
302   Alopecia
345   Appetite Decreased
346   Appetite Increased
303   Behavioural Abnormality
304   Bleeding
313   Dystocia

So with the DisplayMember = Description column, the ValueMember = ID column and the Sorted property set to false.

As expected the SelectedValue returns the expected ID value.

However, with the DisplayMember = Description column, the ValueMember = ID column and the Sorted property set to true, the combo box data actually becomes:

ID    Description
302   Alopecia
303   Appetite Decreased
304   Appetite Increased
305   Behavioural Abnormality
306   Bleeding
307   Dystocia

So the ValueMember has started from the first ID value and simply incremented as each item was bound.

In this case the use of SelectedItem.ID returns the correct ID value.

So just my experience ... maybe obvious ... but I have not seen this documented anywhere and surely I can't be the first person to have experienced it ?!?!?!!?

Thursday 17 August 2006

Only validate when visible

Jeff

I hit a bit of a problem today with an ASP.NET validator control, specifically the RequiredFieldValidator. I had a dropdown with the classic 'other' option, which when selected shows a textbox for the user to specify the 'other'. I only wanted the RequiredFieldValidator to fire when the textbox is visible!! This isn't built into the control, and it was firing even when the textbox wasn't visible for the user to enter anything into!! After a few hours of searching the web and going round in circles, I finally came up with a solution :-)

I modified the JavaScript that shows/hides the textbox to include the action of disabling the validator too!! Here is my script:

// Will show/hide an object depending on a selection on a dropdown.
function ShowHideObject(dropDownID, dropDownShowString, objectToHideID, validatorToDisable)
{
// Get the selected text from the dropdown.
var selectedText = document.getElementById(dropDownID).options[document.getElementById(dropDownID).selectedIndex].text;
// Set the object to hide to hidden.
document.getElementById(objectToHideID).style.visibility='hidden';
// Disable the validator.
document.getElementById(validatorToDisable).enabled=false;
// Does the selected text match the supplied dropDownShowString.
if (selectedText.match(dropDownShowString.toString()))
{
// YES - The selected text matches the supplied dropDownShowString.
// Show the object and enable the validator.
document.getElementById(objectToHideID).style.visibility='visible';
document.getElementById(validatorToDisable).enabled=true;
}
}

I then had to check on the server, before I called Page.IsValid, whether the 'other' option was selected; if not, then disable the validator.

All seems to be working cooool now :-)

Monday 14 August 2006

My RVC office

Jeff

Thought I would try out the 'Insert Map' function in the new Windows Live Writer. It uses Windows Live Local and Virtual Earth, which now seems to include the UK :-) You can zoom in really quite far, and add pushpins. Anyway, it's quite a cool way to add maps to blogs, not sure how much I will use it though!! Anyway, this is my office; Scott also lives here :-)


Windows Live Writer

Jeff
This is my first blog using the new free Windows Live Writer (Beta) :-) You can download it here: http://windowslivewriter.spaces.live.com/ It's pretty good, configured straight away with Blogger and pulls down all your styles. It has a nice and simple WYSIWYG interface, and it's easy to insert photos and even maps - not that I've tried that yet!!

Wednesday 9 August 2006

Anyone for some Security Trimmings?

Jeff
I have used a SiteMap in my website to give a central repository of the site structure. I have also created a ul menu using a repeater control as shown here, as again the ASP.NET menu control uses tables to perform its layout.
What I really wanted was for the menu items to only be shown to users who are authorized to view that page. I could do this in the code-behind by checking the Roles.IsUserInRole() method, but I wanted a more declarative method using my sitemap and role provider. This can be achieved by using security trimming. Enabling this feature on the SiteMap provider results in all the URLs being checked against the URL authorization rules. If the current user is not authorized to view a page, it will not be included in the SiteMap when it is used at runtime as a datasource. This results in my menu not rendering the link if the user is not authorized to view the page. Coooool :-)

ASP.NET 2.0 Roles, Forms Auth and Membership

Jeff
I'm not going to attempt to blog about how to setup ASP.NET 2.0 security, there are more than enough good blogs and How To's to get it working. The best place to start is from the awesome blogs of Scott Gu here.

What I do want to blog about is my experience of setting it up.
My Scenario
I have a website that will be deployed on the WWW but will only be accessed by users contained in an AD (Active Directory). There will be two levels of users, but not all users in the AD will have access.
My Solution
Ok first of all I would like to say that the Provider model used in ASP.NET 2.0 is spot on :-) It really does allow for less code, more productivity and a neat design.
First of all I am using Forms authentication with Active Directory as my membership provider. A great How To here. This provider, along with the new Login control, authenticates users against an LDAP store. I found the Login control really useful and nicely customizable; the only downside is that the control renders in tables, which is a bit annoying for styling as you can't get full control over the markup. Other than that, authenticating users via Forms auth is a lot easier than in .NET 1.1.
For my role management I originally wanted to use the AD roles, but discovered that there is not yet an AD role provider, and I didn't really have the time to look into creating one!! So I opted for the SqlRoleProvider to manage my roles. There is a great Role Manager How To over here. This provider uses the aspnetdb database to store the roles and integrates well with the AD membership provider by using the AD usernames. New roles can be created and managed using the ASP.NET Web Site Administration Tool. I will have two roles, Admin and User, and will add only the AD users who actually use the website to these roles. After using the SqlRoleProvider I have realized that it may be a neater place to store role information than the AD, as it can all be kept in a central database with no replication problems. Roles can be made application specific by setting the application name on the provider in the web.config, under the roleManager tag. Here is my config:

<roleManager enabled="true" defaultProvider="SqlRoleManager" cacheRolesInCookie="true">
  <providers>
    <add name="SqlRoleManager"
         type="System.Web.Security.SqlRoleProvider"
         connectionStringName="SqlRoleManagerConnectionString"
         applicationName="MyAppName" />
  </providers>
</roleManager>

I then set the authorization tag in my web.config to allow only the users in my User and Admin roles and deny everyone else. This ensures that only authenticated users in my roles can access the web site.

<authorization>
<allow roles="User" />
<allow roles="Admin" />
<deny users="*" />
<deny users="?" />
</authorization>

Deploying my role setup is the next issue I face. My plan is to run the Aspnet_regsql.exe tool to setup the aspnetdb database and then run SQL scripts to add the two roles. I have then created an admin page within my site which will add/remove users to these roles. Obviously the first time the web site is accessed no users will be in any roles and everyone will be locked out!! So I will amend the authorization tag in the web.config to allow the administrator user:

<authorization>
<allow users="Administrator" />
<allow roles="User" />
<allow roles="Admin" />
<deny users="*" />
<deny users="?" />
</authorization>

I have also set up the following authorization for the admin page itself:

<location path="admin.aspx">
<system.web>
<authorization>
<allow users="Administrator" />
<allow roles="Admin" />
<deny users="*" />
</authorization>
</system.web>
</location>

This will allow the administrator user to access the site and the admin page to add all the users to the roles the first time the website is used.
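The admin page itself then just calls the Role Manager API. A rough sketch of what I mean (the txtUserName and ddlRole controls and the button handlers are just examples, not the actual page):

protected void btnAddUser_Click(object sender, EventArgs e)
{
    // Roles lives in System.Web.Security.
    string userName = txtUserName.Text;   // e.g. an AD username

    // Create the two roles if they don't already exist (this could equally be done by the SQL scripts).
    if (!Roles.RoleExists("Admin")) Roles.CreateRole("Admin");
    if (!Roles.RoleExists("User")) Roles.CreateRole("User");

    // Add the user to the selected role...
    Roles.AddUserToRole(userName, ddlRole.SelectedValue);
}

// ...and the remove button just does the reverse.
protected void btnRemoveUser_Click(object sender, EventArgs e)
{
    Roles.RemoveUserFromRole(txtUserName.Text, ddlRole.SelectedValue);
}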

Roll Over images

Jeff
I wanted to have my image buttons change on roll over on a website and really thought the ASP.NET ImageButton would have had a RollOverImageUrl property on it that automatically changed the image onmouseover and onmouseout. However, I soon found out there is no such functionality :-(

So I extended the ImageButton and the ButtonField controls to have this property and handle the rollover. It's simple to do: just set the onmouseover and onmouseout attributes of the ImageButton to toggle the src to the correct image URL. You can find the source here.
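The download has the full source; the gist of the ImageButton part is something like this (a minimal sketch of the idea, not necessarily line-for-line what is in the download):

using System;
using System.Web.UI.WebControls;

public class RollOverImageButton : ImageButton
{
    // The image to swap to when the mouse is over the button.
    public string RollOverImageUrl
    {
        get { return (string)(ViewState["RollOverImageUrl"] ?? string.Empty); }
        set { ViewState["RollOverImageUrl"] = value; }
    }

    protected override void OnPreRender(EventArgs e)
    {
        base.OnPreRender(e);

        if (!string.IsNullOrEmpty(RollOverImageUrl))
        {
            // Toggle the src attribute on the client between the two images.
            Attributes["onmouseover"] = "this.src='" + ResolveClientUrl(RollOverImageUrl) + "';";
            Attributes["onmouseout"] = "this.src='" + ResolveClientUrl(ImageUrl) + "';";
        }
    }
}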

Mr Designer meets Mr Coder

Jeff
Been working with a web designer lately, something I have never done before. It's been very interesting seeing the two worlds meet. Mr Designer is an expert in HTML and CSS but knows little about the ASP.NET world.

The interesting areas have been the line between ASP.NET Skins and CSS, and ASP.NET controls rendering in tables. I have decided that Skins vs CSS is not so much about which one is best, more about which is most suitable when. I found Skins very powerful for setting properties on server controls, and CSS great for layout and styling of HTML controls.

Mr Designer is obviously keen to give me his CSS to apply to my site, but sometimes I needed to adapt it to fit. I had to set the CssClass on many controls to hook into the classes in the CSS, and sometimes had to take parts out and just use Skins to set the styles on ASP.NET controls.

It's quite frustrating that many of the ASP.NET controls render in tables, as these are HTML heavy and not as flexible to style as DIVs. Scott Gu has an interesting post here about the CSS Control Adapter Toolkit for ASP.NET 2.0, which looks like it could be an interesting way of solving this problem. When I get a chance I will have a play :-)

I still think there is a big gap between a pure web designer and a developer. It will be interesting to see if the Expression products with WPF will help to shrink the gap :-)

Escape Characters

Jeff
Had a bit of an escape character jungle experience the other day. Had to use some HTML, JavaScript and C# escape characters all in the same day :-) So just thought I would record the links for future reference and for anyone else.

HTML escape characters

JavaScript escape characters

C# escape characters

Custom DateTime Format Strings

ScottSo simple, yet so easy to get mixed up :)
MSDN2 Custom DateTime Format Strings
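A couple of quick examples to jog the memory (the format strings here are picked arbitrarily):

DateTime now = DateTime.Now;

// e.g. "05/12/2006 14:30"
string shortStyle = now.ToString("dd/MM/yyyy HH:mm");

// e.g. "Tuesday 05 December 2006"
string longStyle = now.ToString("dddd dd MMMM yyyy");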

Expandable gridview

Jeff

Been working on a project where I needed a grid which could have rows that expand with another grid within it for more details. I found this cool gridview over at The Code Project.

This control is great, it does exactly what it says and all credit to its authors. However I couldn't resist having a tinker with it :-) I have ended up amending it slightly.

The part I wanted to improve was that the client had to handle the RowCreated event in order to bind data to the child control in the item template, and also had to set whether each row should expand or not. I just felt this could be encapsulated by handling it internally in the control.

I set about solving this by adding the following extra properties to the control. These properties are designed around the nested object being another gridview, or expandable gridview, with some relationship between the two:

  • NestedGridName (string) - The name of a gridview which is nested.

  • NestedGridDataHandlerName (string) - The name of a data handler object that is used to retrieve data for the nested grid; this may be a TableAdapter or a custom data object.

  • NestedGridDataHandlerMethodName (string) - The name of the method used to extract data on the data handler object.

  • NestedGridDataHandlerSingleton (bool) - Whether the data handler object is a singleton.

  • NestedGridDataHandlerInstanceProperty (string) - If the data handler is a singleton then this field must be set with the name of the property to access the instance.

  • NestedGridForeignDataKeyNames (string[]) - An array of foreign key column names on the containing grid used to reference the nested table.

  • ExpandCollapseCellPosition (int) - The position to add the cell which contains the Expand/Collapse button.


With these extra properties I was able to use reflection to call the data handler for the nested grid, get the data and bind it. I could also determine whether the row needed to be expanded. I also added the cell position property so that the expand/collapse button can be placed anywhere rather than always at position 0.
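The reflection part is roughly this (a minimal sketch using the property names from the list above; the real code in the download is a bit more involved):

using System;
using System.Reflection;

// Resolve the data handler, call the named method with the foreign key values
// and return the result to use as the nested grid's data source.
private object GetNestedData(object[] foreignKeyValues)
{
    Type handlerType = Type.GetType(NestedGridDataHandlerName, true);

    object handler;
    if (NestedGridDataHandlerSingleton)
    {
        // Singleton data handler: read the instance from the named static property.
        PropertyInfo instanceProperty = handlerType.GetProperty(NestedGridDataHandlerInstanceProperty);
        handler = instanceProperty.GetValue(null, null);
    }
    else
    {
        handler = Activator.CreateInstance(handlerType);
    }

    MethodInfo method = handlerType.GetMethod(NestedGridDataHandlerMethodName);
    return method.Invoke(handler, foreignKeyValues);
}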

Now you can use the control by dragging it onto the page and setting a few extra properties, but you can write zero code and it just works :-) There are a few assumptions in here, such as the data handler method taking the foreign key fields as arguments to retrieve the data for the nested grid, and the nested object being a grid. However, if you don't set the data handler name then the grid can be used exactly as before, by adding whatever you like into the ItemTemplate.

The other change I made was to make it XHTML compliant. The original version added a few attributes (expandClass, expandText, collapseClass, collapseText) to the grid, which were picked up in the JavaScript to perform the expand/collapse function. However, these non-standard attributes made it fail XHTML validation. I changed this by adding the values into the JavaScript when it is built up in the OnInit method of the ExtGridView class. Now it has a big green XHTML pass :-)

You can download my modified version here.

Tuesday 8 August 2006

System.Windows.Forms.Keys.Return & beep

ScottWhen using Keys.Return you will automatically be provided with the classic PC beep. If this is what you want then you are on a winner. However, if it is not what you want then here is the gem to turn the beep off.
Within the KeyPress event, where e is the KeyPressEventArgs:
e.Handled = true;
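In context, a minimal WinForms sketch (the textBox1 name is just an example):

private void textBox1_KeyPress(object sender, KeyPressEventArgs e)
{
    if (e.KeyChar == (char)Keys.Return)
    {
        // React to the Enter key here...

        // Marking the event as handled suppresses the default beep.
        e.Handled = true;
    }
}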

Friday 4 August 2006

DataSet Relationships

ScottBeen working quite heavily with strongly typed DataSets at the moment. I keep getting asked the same question about the difference between relationship types. So, in a simple nutshell:

Imagine two tables "Father" and "Son" with a relationship between the two.

Relationship type - Relation Only. Will generate intellisense method Son.FatherRow

Relationship type - Foreign Key Constraint Only. Will enforce relational integrity. No intellisense method as in Relation Only.

Relationship type - Both Relation and Foreign Key Constraint. Provides both the intellisense method and relational integrity.
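To illustrate, here is a sketch assuming a generated typed DataSet called FamilyDataSet with Father and Son tables, each with a Name column (the names are hypothetical; the generated members depend on your actual schema):

// Relation Only (or Both) generates typed accessors that walk the relation.
FamilyDataSet ds = new FamilyDataSet();

FamilyDataSet.FatherRow dad = ds.Father.NewFatherRow();
dad.Name = "Homer";
ds.Father.AddFatherRow(dad);

FamilyDataSet.SonRow kid = ds.Son.NewSonRow();
kid.Name = "Bart";
kid.FatherRow = dad;                              // set the parent via the generated relation property
ds.Son.AddSonRow(kid);

FamilyDataSet.FatherRow parent = kid.FatherRow;   // ...and read it back

// Foreign Key Constraint Only enforces integrity instead: a Son row pointing at a
// non-existent Father is rejected while constraints are enforced.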

Thursday 3 August 2006

Welcome

Welcome to our first post on our new blog site :-)

We are Jeff and Scott and just to be different we thought we would do a joint blog. We are both software developers currently working at the Royal Veterinary College in London. Between us we specialise in ASP.NET, C#, WinForms, enterprise level design and anything else we come across :-) We are often off on tangents trying things out so we thought we would record them here.

We are currently redeveloping our main site here. We are going to include this blog in our site along with all our favourite links, pictures and projects. We hope to get it finished soon :-)

Jeff Scott