Is Agile that good or that bad?

I don't often write opinion pieces, and in fact I'll try to keep the opinion in this post to a minimum. I don't intend this as link bait; it's my way of rubber duck debugging: gathering my thoughts, getting my observations out, and hopefully hearing the community at large voice its opinions.

I need to start by saying I've never worked in an Agile shop. I've never seen it in practice, never seen how truly effective it can be (if it can be). To me, Agile feels the way MVVM did before I started working with it. Though I understood the principles of MVVM, it felt bloated and impractical, too much process in the practice. Eventually, though, as you trudge through your first application, the blinds get pulled back a little and you start to see some of the benefits: everything wires up easily, adding new features and fixing bugs is easier, unit tests hook in more naturally, and you end up with a solid n-tiered architecture.

In the end, virtually all your code is watched over by unit tests because they work directly against your view models, and you no longer worry about code-behind causing problems. But I don't think I really understood the benefit until I had put MVVM into practice. It's easy to dismiss, easy to point out that there is significant up-front overhead in developing an MVVM-based application (which is why MVVM should not be used for simple projects). You have to have management that understands the long-term benefits in the maintenance tail, and that doesn't always happen. But Agile isn't an architecture for software development; it's a set of development practices.

Iterative Waterfall

I've always used iterative waterfall (aka iterfall) as the basis for my development. Start with as thorough an understanding of the requirements as possible, and make sure they're all written down and agreed upon. Move on to design. Do your ERDs and XSDs and class diagrams and wireframes. Take those designs to the client. The wireframes are especially important for the client. Once they understand what the application is going to look like and how it's intended to work, there will be requirements changes. No biggie: go back to requirements, fix the problems, modify the design documents, and then get approval again. Once design is done, start implementing the application. You may run into a problem with the design, so go back to design; if it's going to be a problem there, then go back to requirements. Does it need to be a requirement? If yes, then move back down and, though it may be painful, fix it in design and get approval from the customer. Then fix it in implementation.

The key to iterative waterfall is never being afraid to move back a step. Unlike a traditional waterfall, never feel like you're locked into a step. Just because you've got sign-off on requirements doesn't mean they won't change. And that's okay. The other thing I like about iterative waterfall is that as you sit down to do your project plan and lay out how the application is going to happen, each task becomes its own iterative waterfall. Then, when you get into the maintenance tail, the iterative waterfall starts all over again.

This is still, however, a "Big Design Up Front" approach and is distinct from the iterative and incremental development practices of some Agile processes, because rather than working each cycle completely, in iterative waterfall you move forwards or backwards along a path. So is this bad?

Agile Practices

There are some aspects of the Agile process I have seen as beneficial and attempted to incorporate into my own process when I worked as a project lead. One of the best, with a major caveat, is pair programming. That caveat is that you have to be very selective about who you pair up, and it's not for everyone. Where it made sense I did pair programming, working with another software engineer side by side. We would discuss our thoughts on what we were implementing and potential issues and conflicts with other aspects of the application. It was quality collaboration that produced better code.

But I couldn't work with some engineers like this. One engineer would just shut down. He would mindlessly type whatever I was saying. When I was at the keyboard, it was like pulling teeth to get him to contribute. It worked better to step away from the computer, have a more casual conversation about what he was working on, and then let him go forward. Then there are the engineers who fit the software engineer stereotype. They tend to be introverted. They don't talk a lot. It's incredibly hard to get them to contribute at meetings. But let them sit alone in front of a computer with an idea of what you need from them and you'll get 12 hours' worth of work in 2. All I'd get from engineers like this was uncomfortable squirming. They don't want you there, they don't want you on their computer, they don't want to be on your computer. They just want to be left alone to work. They have their environment set up just right and that's how they like it. So pair programming works, just in a limited scope with limited participants. That doesn't make it bad; it just needs to be applied within the scope of the engineers assigned to a project.

Now I do have to say I'm not a fan of the "Daily Scrum" idea. What I found most effective was to meet with each of the engineers (or, on a larger project, I imagine with each team lead) and see where they were at. I had a set time, 9 AM, because by then everybody would be in, when I would walk around to each engineer and ask what they were working on, how things were going, and what problems they were having. Now, the largest team I've ever led had only 4 engineers other than myself. So maybe, just as simple projects don't benefit from MVVM, I've never worked on a project large enough to benefit from Agile.

Continuous integration is another aspect of Agile I think is necessary to any development methodology. With so many tools out there it should just be the norm. We use CruiseControl at work, but Scott Hanselman just did a great article on AppVeyor for continuous integration.

Use cases and user stories are fundamental to the design of an application. Oddly enough, they're listed as an "Agile practice," but I always thought they were crucial to the design phase of a waterfall.

I like the idea of Sprints post-implementation. When users start using the application and the bug reports and feature requests come in, it makes sense to go with a more rapid release cycle until things settle down, and then move out to a quarterly release cycle (except for high-priority bugs). But does this make sense during initial application development? Maybe it's my own ignorance. Maybe I don't understand what a Sprint is. So here is the definition from the creators:

The Sprint

The heart of Scrum is a Sprint, a time-box of one month or less during which a “Done”, useable, and potentially releasable product Increment is created. Sprints best have consistent durations throughout a development effort. A new Sprint starts immediately after the conclusion of the previous Sprint.

The Scrum Guide™: The Definitive Guide to Scrum: The Rules of the Game, Ken Schwaber and Jeff Sutherland

If you read the above reference you get a better picture. It just means that within the timeframe of the sprint you have to have your code ready for production. It doesn't mean the application as a whole needs to be ready, but that the code you are responsible for is ready for production. And there is a lot more to Scrum. (But isn't this really just milestones?)

Agile Methodologies

But is Scrum Agile? Well, I mean, it’s a part of Agile, right? Ken Schwaber and Jeff Sutherland were two people who signed the Agile Manifesto. The Agile Manifesto reads, in its entirety, as follows:

We are uncovering better ways of developing software by doing it and helping others do it. Through this work we have come to value:
Individuals and interactions over Processes and tools
Working software over Comprehensive documentation
Customer collaboration over Contract negotiation
Responding to change over Following a plan
That is, while there is value in the items on the right, we value the items on the left more.

Kent Beck James Grenning Robert C. Martin
Mike Beedle Jim Highsmith Steve Mellor
Arie van Bennekum Andrew Hunt Ken Schwaber
Alistair Cockburn Ron Jeffries Jeff Sutherland
Ward Cunningham Jon Kern Dave Thomas
Martin Fowler Brian Marick

© 2001, the above authors. This declaration may be freely copied in any form, but only in its entirety through this notice.

These sound like noble goals. These are all things we want. (And hey, Uncle Bob's in there! 🙂) So what is Agile? Well, it seems to be a bunch of software development methodologies and practices that people wanted to lump together and call Agile. The best I can get out of all the websites and blogs and wiki pages on Agile is that the only thing tying them together is that they're not "classic" waterfall. One of the points made about the differences is that Agile moves testing to a different place than classic waterfall. In classic waterfall you don't test anything until you are all done. Test Driven Development (TDD) moves testing to the very front of development. As part of iterative development, having continuous cycles means that you are testing at each cycle. But this doesn't apply when looking at the iterative waterfall. As I discussed above, each task during the implementation of an iterative waterfall is itself an iterative waterfall. That makes it sound a lot like iterative development, but the process itself is fundamentally different. You have to ask yourself: do I have the requirements to implement this? Are the designs done? Do I need to modify them? How do I implement this? Now let's test my code (i.e. write a unit test). And now we're in the maintenance phase of this task. Iterative waterfall is an incredibly recursive process.

So if I utilize the Agile practices without utilizing the Agile methodologies, am I doing Agile development? A lot of the Agile practices are really just good development practices that happen to run parallel to the purpose of Agile. Why does that make them Agile practices? And of the 10 or so listed Agile methodologies, which ones produce the best output? The Agile methodologies themselves seem so different. I mean, if I were to spin up a new project, would I use Extreme Programming or Scrum? Are there aspects of each that can be commingled? Why would I choose one over the other? And then there's method tailoring, which seems to be about adapting the methodology and practices to suit your needs. But there is no guidance on which situations work better for which methodologies.

But Agile is so Damn Profitable

There are a lot of books about how each of the Agile methodologies is better than classic waterfall, but there doesn't seem to be anything on how and where one methodology would be better than another. I see a lot of "Buy this Agile book" and "Hire me as your Agile consultant" and "Pay a bunch of money to become a Professional Scrum Master™" (scrum.org, and by extension EBMgt, are really good at this). Believe it or not, I'm actually okay with this. If there is truly value to be gained then it makes sense to pay for it. Agile (writers, teachers, consultants) seems to have truly grasped the concept of capitalism and run with it. But more to my point, I have yet to see someone from the Agile community provide some sort of guide on where and how the Agile methodologies are better or worse. I've read a lot of blogs and websites touting Agile from people much smarter than me.

Wonderful, I'm sold. So should I go with Adaptive Software Development or Agile Modeling or Agile Unified Process or Crystal Methods (Crystal Clear) or Disciplined Agile Delivery or Dynamic Systems Development Method or Extreme Programming or Feature Driven Development or Lean software development or Kanban or Scrum or Scrum-ban?

And what if I'm not sold? The Editor in Chief of drdobbs.com, Andrew Binstock, had a great article titled The Corruption of Agile. It seems to me that one of the key issues with Agile is understanding in which scenarios to apply which methodologies and practices, and then applying method tailoring to your specific needs. To paraphrase, I read the article as saying, "People have so integrated Agile practices into their culture that they no longer apply method tailoring." That's just my interpretation. This all concludes in Andrew's response to the responses, titled Addressing the Corruption of Agile, which links to responses from Rob Myers of the Agile Institute and Uncle Bob.

So there are a lot of people who love Agile. And a lot of people who hate it. It seems absurd to me that this should be such a polarizing issue. And I sit right in the middle. Is Agile that good or that bad? It troubles me that, other than hating on classic waterfall (which makes sense to me), the various methodologies that make up Agile don't defend why and when they are better than the other Agile methodologies. Every method has its good points and bad points and shouldn't be universally applied. Even the iterative waterfall I mentioned above. It struggles on really large applications when resources are immediately available: when we're still defining requirements but we have engineers available to begin development, there's iterative waterfall's downfall. You end up with idle resources. The easy way around that is to start with a smaller team and do frequent iterations between requirements and design.

Anyway, this was just my place to get out some of my thoughts on Agile. I freely admit that I'm ignorant of Agile. I'm just not sure how to move forward, so for now I won't; I want that forward movement to be meaningful.

Thanks for reading,
Brian

This continues my series on ways you’ve probably used design patterns in real-life and may not have even known it. The previous post was on the Adapter Design Pattern.
This is a kind of “catch-all” post where I want to talk not only about the Iterator Design Pattern but also custom enumerators for Parallel.ForEach and ensuring you give your threads enough work.

The iterator pattern is a way to move through a group of objects without having to understand the internals of the container of those objects. Anything in .NET that implements IEnumerable or IEnumerable<T> provides an iterator to move over the values. List<T> and Dictionary<TKey, TValue> are good examples.
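
The pattern itself is tiny. Here's a minimal sketch (the class and its contents are invented for illustration) of a container that lets callers iterate over it without knowing it stores its items in an array:

public class PlaylistSample : IEnumerable<string>
{
	// internal storage the caller never needs to see
	private readonly string[] tracks = { "Intro", "Verse", "Outro" };

	public IEnumerator<string> GetEnumerator()
	{
		// yield return builds the iterator for us
		for (int i = 0; i < tracks.Length; i++)
			yield return tracks[i];
	}

	System.Collections.IEnumerator System.Collections.IEnumerable.GetEnumerator()
	{
		return GetEnumerator();
	}
}

Anything you can foreach over (foreach (string track in new PlaylistSample())) is using this pattern, which is exactly why List<T> and Dictionary<TKey, TValue> qualify.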

If we look at my TPL sampler in my GreyScaleParallelSample we have the following code:

System.Drawing.Imaging.BitmapData bmData = bmp.LockBits(new System.Drawing.Rectangle(0, 0, bmp.Width, bmp.Height), System.Drawing.Imaging.ImageLockMode.ReadWrite, System.Drawing.Imaging.PixelFormat.Format24bppRgb);
int stride = bmData.Stride;
unsafe
{
	byte* start = (byte*)(void*)bmData.Scan0;

	int height = bmp.Height;
	int width = bmp.Width;

	Parallel.For(0, height, y =>
	{
		byte* p = start + (y * stride);
		for (int x = 0; x < width; ++x)
		{
			byte blue = p[0];
			byte green = p[1];
			byte red = p[2];

			p[0] = p[1] = p[2] = (byte)(.299 * red
				+ .587 * green
				+ .114 * blue);

			p += 3;
		}
	});
}
bmp.UnlockBits(bmData);

This code is very similar to code I used in some image manipulation I had to implement. Here, however, all we're doing is setting each pixel to grey scale (I'm not sure why, but for some reason I use the British spelling of grey). Looking at it, we're iterating over the height and then over the width. But an image is really just a byte array where every three bytes identify the blue, green and red values for a given pixel. We don't need to treat it like a map with height and width.

Now to do this we'll need a custom iterator (see? I brought it back to the purpose of this post 🙂). Fortunately, Parallel.ForEach allows you to supply an IEnumerable so that you can customize how it iterates over the values. We can just set up a simple for loop and yield on each value.

public static IEnumerable<int> ByVariable(int max, int increment)
{
	for (int i = 0; i < max; i += increment)
		yield return i;
}

What this does is allow you to iterate over a Parallel.ForEach by some increment up to some supplied maximum. I've added a new sample to my TPLSampler called GreyScaleBySingleParallelSample that uses this.

System.Drawing.Imaging.BitmapData bmData = bmp.LockBits(new System.Drawing.Rectangle(0, 0, bmp.Width, bmp.Height), System.Drawing.Imaging.ImageLockMode.ReadWrite, System.Drawing.Imaging.PixelFormat.Format24bppRgb);
int stride = bmData.Stride;
System.IntPtr Scan0 = bmData.Scan0;
unsafe
{
	byte* start = (byte*)(void*)Scan0;

	Parallel.ForEach(ByVariable(bmp.Height * bmp.Width * 3, 3), i =>
	{
		byte* p = (start + i);
		byte blue = p[0];
		byte green = p[1];
		byte red = p[2];

		p[0] = p[1] = p[2] = (byte)(.299 * red
					+ .587 * green
					+ .114 * blue);
	});
}
bmp.UnlockBits(bmData);

The max value passed to ByVariable is the height of the image times the width times 3 (since each pixel is made up of three bytes, one per color), and the increment is 3. This way we can move through the byte array 3 bytes (or 1 pixel) at a time.

So this is awesome, right? We’ll spin off a bunch of threads and this will crank through a big image in no time. So let’s run this against an 8 MB image and compare it to the first method.

Reseting Image
Starting Grey Scale Parallel Sample
Completed Grey Scale Parallel Sample
Grey Scale Parallel Sample ran in 00:00:00.1700515

Reseting Image
Starting Grey Scale By Single Parallel Sample
Completed Grey Scale By Single Parallel Sample
Grey Scale By Single Parallel Sample ran in 00:00:01.5654025

Wait, what? This second method runs significantly slower (and yes, "Reseting" is spelled wrong in the output). As I've mentioned in the past, when you can't give your threads enough work to overcome the cost of spinning up and/or setting up the thread, you just end up wasting time. If you've read my past posts on this, I know I may seem to keep harping on it, but it is important. I've seen quite a few cases where people think the solution to a long-running process is just to throw more threads at it. That may very well be a solution, but you need to understand what your code is doing. When optimizing code, it doesn't make sense to just throw everything against the wall and see what sticks.

That being said, there are times when using the "ByVariable" enumerable is helpful. There is an interface I interact with that returns a string array where the values are grouped as (value, unit, error). I have to do a bunch of handling and work on the values returned in that array. In that case it makes sense.
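
For example, something like the following (a rough sketch; the array contents and ProcessMeasurement are made up) processes a flat array three entries at a time:

// The array layout (value, unit, error) repeats every three entries, so
// ByVariable(results.Length, 3) hands each triple's starting index to the loop body.
string[] results = { "9.81", "m/s^2", "0.02", "101.3", "kPa", "0.5" };

Parallel.ForEach(ByVariable(results.Length, 3), i =>
{
	string value = results[i];
	string unit = results[i + 1];
	string error = results[i + 2];
	ProcessMeasurement(value, unit, error); // hypothetical handling of one measurement
});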

So what have we covered?

  1. What the Iterator Design Pattern is.
  2. Its implementation in .NET.
  3. How to use a custom iterator in a Parallel.ForEach.
  4. Making sure to give each thread in a Parallel.For/Each enough work.

Thanks,
Brian

Globalization and Localization in WPF – Some thoughts

A part of my responsibilities is globalization of our primary application. The application consists of a mix of WinForms and WPF. Localization of WinForms is done using the built-in mechanism provided, which sucks. That's right, I said it: it sucks. You mark the window/control as localizable and then pick a language other than the default. This creates a resource file for you for the chosen language. Any time you change values while the window/control is selected with a non-default language chosen, those values are moved out of the default Designer.cs (if they haven't already been moved) into a default resource file, and your new values are added to the resource file for the selected language. This sounds well and good, like everything should be sunshine and rainbows. But it often moves in a ton of stuff that wasn't changed, sometimes doesn't move things correctly, and sometimes wipes stuff from the Designer.cs and then somehow misses adding it to the resource file. Having a Designer.cs is a serious pain to begin with, but having it and the resource files messed up is considerably worse.

So I was hoping that when I got to WPF things would get easier. But Microsoft doesn't seem to have planned out globalization for WPF. That doesn't mean they didn't think about it; it just doesn't seem like developer experience was part of the thought process. If you want to read what Microsoft has to say on the subject, check out Globalization and Localization, but I'll give a quick summary below.

The first method they give uses msbuild to add x:Uid attributes to all your xaml and another tool, the LocBaml Tool, to generate satellite assemblies that contain the translated resources. If you have names on all your elements this might be nice. You'd have to set up a post-build process to generate the satellite assemblies, but it's not too bad. The problem, however, is if you're doing MVVM development. Since we rarely name our controls, the auto-generated uids look like x:Uid="TextBlock_1", which is pretty ugly, and if you use the LocBaml tool to generate your file for translation, that uid is meaningless. I'm surprised this is the first method, as it is the ugliest. They go into quite a bit of detail on it and leave the second and third methods nearly empty.

The second method they give uses a ResourceDictionary. As mentioned, the examples are pretty anemic, and there is a better sample over at CodeProject called Globalization in WPF using ResourceDictionary. This is actually pretty nice. The best part is that because you are swapping the dictionary on the fly, your values can change on the fly too if the user chooses a different language.
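
The core of that approach is just swapping the merged dictionary at runtime. A minimal sketch, assuming you keep one string dictionary per culture (the file names here are made up):

private void SetLanguage(string cultureName)
{
	// Assumes Resources/Strings.en-US.xaml, Resources/Strings.es-ES.xaml, etc. exist in the project.
	var dict = new ResourceDictionary
	{
		Source = new Uri("Resources/Strings." + cultureName + ".xaml", UriKind.Relative)
	};

	// Replace the current string dictionary; DynamicResource references pick up the change immediately.
	Application.Current.Resources.MergedDictionaries.Clear();
	Application.Current.Resources.MergedDictionaries.Add(dict);
}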

The third method, and the one I ultimately chose, is to use resource files with static references to the resource properties. The single example Microsoft gives is useless, so check out WPF Localization for Dummies. I chose this method only because that is how localization is done in WinForms, and since our application is a blend of both WinForms and WPF it made sense to be consistent.

Now obviously, for any project of real size, just using the resource file under Properties isn't going to help. You'll end up with a ton of strings and images in there that starts looking horrible and becomes hard to manage. I've thrown together a sample project that shows using a resource file for English and Spanish. Note that I've directly modified the project file to identify that the resource files are dependent upon MainWindow.xaml. I'm not sure if there's an easier way to do this; I just add the DependentUpon element in the project xml. This just makes the solution a little bit cleaner. Also notice that I named it MainWindowResx; this is to prevent class name conflicts, since the name of the resource class will be the name of the file.

It is important to remember, once you add the resource file, to open it in the designer and change the "Access Modifier" from Internal to Public. The class defaults to internal, which does not allow xaml to find the value. Until you make it public you will get a "StaticExtension value cannot be resolved to an enumeration, static field, or static property." exception.
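
One more note on the resx route (a quick sketch of my own, not something from the sample): the generated resource class exposes a static Culture property, so you can force a particular language at application startup, before any windows are created, and every static reference will pull from that culture's resource file:

protected override void OnStartup(StartupEventArgs e)
{
	// MainWindowResx is the generated resource class described above; setting its Culture
	// means the es-ES resource file is used for every lookup from this point on.
	MainWindowResx.Culture = new System.Globalization.CultureInfo("es-ES");
	base.OnStartup(e);
}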

So after doing all this work to get things ready for globalization, I began to think there had to be a better way. I'm not sure if the better path is a Visual Studio extension or a custom tool that sits outside and modifies the project file. This is something I'll write about on and off as I work on the project in my personal time. If you have any feedback as I go through the process, please feel free to leave comments or email me.

Thanks,
Brian

In the Prism 4.1 Developer's Guide there was a multi-purpose object, DomainObject, that implemented INotifyPropertyChanged and INotifyDataErrorInfo. It was a nice generalized object to inherit from in MVVM. In models that need to be serialized it was handy because I could just throw on the [DataContract(IsReference = true)] attribute and serialize away.

The problem is that DomainObject uses ValidateProperty("PropertyName") and RaisePropertyChanged("PropertyName") instead of the SetProperty(ref _holder, value) and OnPropertyChanged(() => Property) that BindableBase uses. To start with, I despise strings in code unless they're explicitly intended to be text the user sees. I'd also like to make everything consistent. It would be nice to just extend BindableBase, but since it isn't marked with the DataContract attribute I can't. Not only that, but I want to add validation and implement INotifyDataErrorInfo.

Fortunately, since Prism is open-source we can dive into the code and figure out how they implemented BindableBase and add that code to our own DomainObject while still implementing INotifyDataErrorInfo.

[DataContract(IsReference = true)]
public abstract class DomainObject : INotifyPropertyChanged, INotifyDataErrorInfo
{
	protected DomainObject() { }

	#region INotifyPropertyChanged Members

	public event PropertyChangedEventHandler PropertyChanged;

	/// <summary>
	/// sets the storage to the value
	/// </summary>
	/// <typeparam name="T"></typeparam>
	/// <param name="storage"></param>
	/// <param name="value"></param>
	/// <param name="propertyName"></param>
	/// <returns>True if the value was changed, false if the existing value matched the desired value.</returns>
	protected bool SetProperty<T>(ref T storage, T value, [CallerMemberName] string propertyName = null)
	{
		ValidateProperty(propertyName, value);

		if (object.Equals(storage, value)) return false;

		storage = value;
		OnPropertyChanged(propertyName);

		return true;
	}

	/// <summary>
	/// Notifies listeners that a property value has changed.
	/// </summary>
	/// <param name="propertyName">Name of the property used to notify listeners. This
	/// value is optional and can be provided automatically when invoked from compilers
	/// that support <see cref="CallerMemberNameAttribute"/>.</param>
	protected void OnPropertyChanged(string propertyName)
	{
		var eventHandler = this.PropertyChanged;
		if (eventHandler != null)
		{
			eventHandler(this, new PropertyChangedEventArgs(propertyName));
		}
	}

	/// <summary>
	/// Raises this object's PropertyChanged event.
	/// </summary>
	/// <typeparam name="T">The type of the property that has a new value</typeparam>
	/// <param name="propertyExpression">A Lambda expression representing the property that has a new value.</param>
	protected void OnPropertyChanged<T>(Expression<Func<T>> propertyExpression)
	{
		var propertyName = Microsoft.Practices.Prism.Mvvm.PropertySupport.ExtractPropertyName(propertyExpression);
		this.OnPropertyChanged(propertyName);
	}

	#endregion

	#region INotifyDataErrorInfo Members

	private ErrorsContainer<string> errorsContainer;

	public event EventHandler<DataErrorsChangedEventArgs> ErrorsChanged = delegate { };

	protected ErrorsContainer<string> ErrorsContainer
	{
		get
		{
			if (errorsContainer == null)
			{
				errorsContainer =
					new ErrorsContainer<string>(pn => OnErrorsChanged(pn));
			}

			return this.errorsContainer;
		}
	}

	public IEnumerable GetErrors(string propertyName)
	{
		return ErrorsContainer.GetErrors(propertyName);
	}

	public bool HasErrors
	{
		get { return ErrorsContainer.HasErrors; }
	}

	protected virtual void OnErrorsChanged(string propertyName)
	{
		var eventHandler = this.ErrorsChanged;
		if (eventHandler != null)
		{
			eventHandler(this, new DataErrorsChangedEventArgs(propertyName));
		}
	}

	protected void OnErrorsChanged<T>(Expression<Func<T>> propertyExpression)
	{
		var propertyName = Microsoft.Practices.Prism.Mvvm.PropertySupport.ExtractPropertyName(propertyExpression);
		OnErrorsChanged(propertyName);
	}

	#endregion

	#region property validation

	protected void ValidateProperty(object value, [CallerMemberName] string propertyName = null)
	{
		ValidateProperty(propertyName, value);
	}

	protected virtual void ValidateProperty(string propertyName, object value) { }

	#endregion
}

So let's look at this wall of code. First, SetProperty is implemented almost identically to how BindableBase in Prism 5 does it. The awesome thing is the use of the caller information attribute, CallerMemberName, which was added in C# 5. When SetProperty is called from a property getter or setter, CallerMemberName resolves to the name of that property. This makes the boilerplate code a lot cleaner, as noted in my previous post on upgrading to Prism 5. The only real difference here is that I call ValidateProperty, which is optionally overridden in the models that extend DomainObject. Most of the time nothing happens, but when a model does override ValidateProperty, errors can get added to the ErrorsContainer.

The other cool thing from BindableBase is the use of an Expression<Func<T>> propertyExpression from which the property name is extracted. This makes the code a lot cleaner since you can just pass in () => Property. The original DomainObject used slightly different terminology with RaisePropertyChanged and RaiseErrorChanged; in the code above I've renamed the methods OnPropertyChanged and OnErrorsChanged to be consistent with BindableBase. Again, this is all so I can get the benefits of INotifyPropertyChanged and INotifyDataErrorInfo while still being able to use DataContract. Your situation may be different from mine, but this provides a generalized class that has made MVVM easier for me.
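
To see how a model actually uses this class, here is a rough sketch (invented for illustration, not from a real project) of a DomainObject-derived model with one validated property:

[DataContract(IsReference = true)]
public class PersonModel : DomainObject
{
	[DataMember]
	private string name;

	public string Name
	{
		get { return name; }
		// CallerMemberName supplies "Name"; SetProperty validates, assigns and raises PropertyChanged
		set { SetProperty(ref name, value); }
	}

	protected override void ValidateProperty(string propertyName, object value)
	{
		if (propertyName == "Name")
		{
			if (string.IsNullOrWhiteSpace(value as string))
				ErrorsContainer.SetErrors(propertyName, new[] { "Name is required." });
			else
				ErrorsContainer.ClearErrors(propertyName);
		}
	}
}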

Thanks,
Brian

Adapter Design Pattern – A Real World Example

I think the simplest design pattern we've all used without really calling it a pattern, other than observer, is the adapter design pattern. The adapter design pattern is known more colloquially as a wrapper: you wrap functionality from one or more classes in a single class because of incompatibilities between the interfaces.

In my post, Observer and Command Design Patterns – A Real World Example I discussed the need for the interfaces:

public interface IOpenFileDialog
{
	string Filter { get; set; }
	string FileName { get; set; }
	string[] FileNames { get; set; }
	bool Multiselect { get; set; }
	bool? ShowDialog();
}

public interface ISaveFileDialog
{
	string Filter { get; set; }
	string FileName { get; set; }
	string[] FileNames { get; set; }
	bool? ShowDialog();
}

These are modified a bit from the original interfaces I showed but you get the idea.

See, the problem comes when running your unit tests. I had a need in my view models to open files and save files. But if I had used the standard dialogs from the Microsoft.Win32 namespace, things would just not have worked when I ran my unit tests. I mean, how do I show a SaveFileDialog in a unit test? There is some stuff I could have done with binding and listening for a property changed event in my view, but that would have been an ugly, unnecessary hack.

This is all made easier just by wrapping the SaveFileDialog and OpenFileDialog classes.

public class OpenFile : IOpenFileDialog
{
	#region IOpenFileDialog Members

	public string Filter { get; set; }
	public string FileName { get; set; }
	public string[] FileNames { get; set; }
	public bool Multiselect { get; set; }

	public bool? ShowDialog()
	{
		OpenFileDialog ofd = new OpenFileDialog();
		ofd.Filter = Filter;
		ofd.FileName = FileName;
		ofd.Multiselect = Multiselect;

		bool? result = ofd.ShowDialog();
		if (result == true)
		{
			FileName = ofd.FileName;
			FileNames = ofd.FileNames;
		}

		return result;
	}

	#endregion
}

public class SaveFile : ISaveFileDialog
{
	#region ISaveFileDialog Members

	public string Filter { get; set; }
	public string FileName { get; set; }
	public string[] FileNames { get; set; }

	public bool? ShowDialog()
	{
		SaveFileDialog sfd = new SaveFileDialog();
		sfd.Filter = Filter;
		sfd.FileName = FileName;

		bool? result = sfd.ShowDialog();
		if (result == true)
		{
			FileName = sfd.FileName;
			FileNames = sfd.FileNames;
		}

		return result;
	}

	#endregion
}

Since opening the dialogs directly in the view model isn't an option (we couldn't unit test it), by utilizing the above classes (which wrap the standard dialogs) we abstract those dependencies away from the view model and into the classes that implement the interfaces. Now all I have to do is register the types with my Unity container:

container.RegisterType<ISaveFileDialog, SaveFile>();
container.RegisterType<IOpenFileDialog, OpenFile>();

And in my view models when I use it:

IOpenFileDialog ofd = container.Resolve<IOpenFileDialog>();
ofd.Filter = "Xml Files (*.xml)|*.xml";
if (ofd.ShowDialog() != true)
	return;

But in my mind there is an even cooler thing. In my NUnit project I have the two following classes in my Mocks directory:

public class OpenFileForUnitTest : IOpenFileDialog
{
	public bool? ShowDialogShouldReturn { get; set; }

	#region IOpenFileDialog Members

	public string Filter { get; set; }
	public string FileName { get; set; }
	public string[] FileNames { get; set; }
	public bool Multiselect { get; set; }

	public bool? ShowDialog()
	{
		return ShowDialogShouldReturn;
	}
	
	#endregion

	public OpenFileForUnitTest()
	{
		Flush();
	}

	public void SetFileNames(params string[] FileNamesToAdd)
	{
		FileNames = FileNamesToAdd;
		if (FileNamesToAdd == null || FileNamesToAdd.Length == 0)
		{
			FileName = null;
		}
		else
		{
			FileName = FileNamesToAdd[0];
		}
	}

	public void Flush()
	{
		Filter = null;
		FileName = null;
		FileNames = new string[0];
		Multiselect = false;
		ShowDialogShouldReturn = true;
	}
}

public class SaveFileForUnitTest : ISaveFileDialog
{
	public bool? ShowDialogShouldReturn { get; set; }
	public bool IgnoreFileNameSet { get; set; }
	
	#region ISaveFileDialog Members

	public string Filter { get; set; }
	
	string fileName;
	public string FileName
	{
		get { return fileName; }
		set
		{
			if (IgnoreFileNameSet)
				return;

			fileName = value;
		}
	}
	
	public string[] FileNames { get; set; }

	public bool? ShowDialog()
	{
		return ShowDialogShouldReturn;
	}

	#endregion

	public SaveFileForUnitTest()
	{
		Flush();
	}

	/// <summary>
	/// will override IgnoreFileNameSet to set FileName and then restore it
	/// </summary>
	/// <param name="FileNamesToAdd"></param>
	public void SetFileNames(params string[] FileNamesToAdd)
	{
		FileNames = FileNamesToAdd;
		bool ignore = IgnoreFileNameSet;
		IgnoreFileNameSet = false;
		if (FileNamesToAdd == null || FileNamesToAdd.Length == 0)
		{
			FileName = null;
		}
		else
		{
			FileName = FileNamesToAdd[0];
		}
		IgnoreFileNameSet = ignore;
	}

	public void Flush()
	{
		Filter = null;
		FileName = null;
		FileNames = new string[0];
		ShowDialogShouldReturn = true;
		IgnoreFileNameSet = false;
	}
}

And in my unit tests similar to how it’s used:

var ofd = new OpenFileForUnitTest();
container.RegisterInstance<IOpenFileDialog>(ofd);
ofd.SetFileNames(pathToTestFile);

var myVM = new myVM();
myVM.MethodToTest();

And auto-magically the view model gets the path to the test file for the unit test. Following this is a bunch of asserts to ensure that the state and properties of the view model loaded as I expected from the test file.
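
Those asserts look something like the following (the property names here are hypothetical; yours will match whatever the method under test actually populates):

// Hypothetical follow-up asserts: the fake dialog reported pathToTestFile,
// so the view model should now reflect the contents of that file.
Assert.IsTrue(myVM.LoadSucceeded);
Assert.AreEqual(pathToTestFile, myVM.CurrentFilePath);
Assert.AreEqual(3, myVM.Items.Count);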

That’s it for this week.

Thanks,
Brian

Upgrading to Prism 5.0 – BindableBase

I recently upgraded my TPL Samples solution to the latest Prism libraries. I've removed the references to the "lib" directory and added Prism as a NuGet package. The first thing you'll notice is that NotificationObject has been deprecated and replaced with Microsoft.Practices.Prism.Mvvm.BindableBase. This makes things really nice, as we no longer have such horrible boilerplate code.

What used to be

private string currentState;
public string CurrentState
{
	get { return this.currentState; }
	set
	{
		if (this.currentState != value)
		{
			this.currentState = value;
			this.RaisePropertyChanged(() => this.CurrentState);
		}
	}
}

Now looks like:

private string currentState;
public string CurrentState
{
	get { return this.currentState; }
	set { SetProperty(ref this.currentState, value); }
}

You can see how much cleaner this code is. The SetProperty method in BindableBase will take care of firing any RaisePropertyChanged events for you as well as take care of any needed validation. Check out Upgrading from Prism Library 4.1, which is Microsoft’s guide on upgrading.

Thanks,
Brian

 

Update 05/15/14:

Well, after using this I’ve discovered validation still has to be implemented manually.  I’ll do a future post regarding this.

Brian

FormatException.com – Now on Azure

For the past year this blog has been getting slower and slower.  I was hosted on Bluehost, which had been my hosting provider since I first started this blog six years ago.  Now, I never contacted Bluehost to see if they could resolve the slow response issues.  For the five years prior to the slowdown they were a wonderful, cheap provider.  And maybe if I had sent an email their way they could have helped me.  But I figured as a .NET developer maybe it was time to go Azure.  I have a few web projects in the planning stages and wanted a low-cost entry point into Azure.  The biggest issue I had in the port over to Azure was a problem with Azure detecting that I had changed my cname entry at my domain provider.  It took roughly 24 hours for it to pick up the change.

Now, everybody warns you that when you make changes to your DNS entries it could take as many as 3 days to propagate.  In reality, however, I've never had it take more than an hour, until now.  So that's why I've been down for 24 hours.  But I'm back with a new, cooler look.  I'm still playing around with the CSS a bit, but I like the new look and feel.

The cname taking a while to update is an issue other people have had, with some reports I saw taking as much as 4 or 5 days.  Scott Hanselman has been a very vocal proponent of Azure and what it can provide for you.  He did a great session with us here at the Tucson .NET Users Group.  I let him know of my troubles.  They may seem rather mild, but I couldn't imagine being down even longer than I was.

So thanks for your continued patronage and here’s to another six years,

Brian

Observer and Command Design Patterns – A Real World Example

Up next in my series on ways you've probably used design patterns in real life and may not have even known it: the Observer and Command design patterns. This continues on from my post Composite and Memento Design Patterns – A Real World Example. The command pattern is meant to decouple the GUI from the back-end code. It may seem like using an event and executing a command are the same thing, and as we are using them here they are pretty close.

XAML for this post

<Window x:Class="CompositeMementoSample.MainWindow"
        xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
        xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
        Title="MainWindow" Height="350" Width="525">
    <Grid>
        <Grid.RowDefinitions>
            <RowDefinition Height="*" />
            <RowDefinition Height="Auto" />
        </Grid.RowDefinitions>
        <ListView Grid.Row="0" MinWidth="150" ItemsSource="{Binding CompositeList}" />
        <StackPanel Grid.Row="1" Orientation="Horizontal">
            <Button Click="AddFiles">Add Files</Button>
            <Button Command="{Binding SaveXmlCommand}">Save XML</Button>
            <Button Command="{Binding LoadXmlCommand}">Load XML</Button>
        </StackPanel>
    </Grid>
</Window>

Looking at the button where we add files, you can clearly see the "Click" event. Again, what that does is register our code-behind as a listener, so that when the click event fires on the button our handler is called. But how do we write unit tests for this? Well, let's look at the event.

Click event for AddFiles in the code-behind

private void AddFiles(object sender, RoutedEventArgs e)
{
	FileGroupViewModel vm = (FileGroupViewModel)DataContext;
	OpenFileDialog ofd = new OpenFileDialog();
	ofd.Filter = vm.Model.OpenFileDialogFilter;
	ofd.Multiselect = true;

	if (ofd.ShowDialog() != true)
		return;

	vm.AddFiles(ofd.FileNames);
}

We can see here that the AddFiles method exists in the view model and the click handler provides a way for us to get from the view to the view model.  In standard GUI code-behind we create the button and add a "Click" event. This is part of the observer design pattern: the code-behind is registered as a listener and gets notified any time the button is clicked.  The observer pattern is kind of like, "Tell me what to call (aka notify me) when I've been clicked and I'll do it."

Okay, cool, but, well, this isn't very MVVMy. MVVM likes to bind everything to properties. If you look at the "Save XML" button and the "Load XML" button, that is exactly what we are doing.  The command pattern works by saying, "Hey, you, command there, when I get clicked, you execute whatever it is you are supposed to do."

Notice that this is different from observer.  I know the differences may seem pretty minor, and they really are, but they are also important. In the code-behind above there is still a dependency between the view and the view model. There still has to be an AddFiles event handler that calls the view model's AddFiles. That means there is code we can't test without instantiating the view. In this example that bit of code is fairly small, but it is still code that isn't being covered by a unit test. With a command, because the command is looking for the action to happen rather than waiting to be told that it has happened, the dependencies point the other way.

Now we have to get a bit off track as we look at the SaveXmlCommand and the LoadXmlCommand.  We don't want to create a SaveFileDialog and an OpenFileDialog in our view model. The view model is not a view, so we shouldn't be creating UI elements there. So I have two separate classes, which are part of my views, that implement the ISaveDialog and IOpenDialog interfaces below.  Then my constructor takes the services that implement those interfaces and uses them when the commands are called.

public interface ISaveDialog
{
	string Filter { get; set; }
	string SaveFileName { get; set; }
	bool? ShowSaveFileDialog();
}

public interface IOpenDialog
{
	string Filter { get; set; }
	string OpenFileName { get; set; }
	bool? ShowOpenFileDialog();
}

And finally we get to the guts of my FileGroupViewModel

IOpenDialog OpenDialogService;
ISaveDialog SaveDialogService;

public FileGroupViewModel(IOpenDialog OpenDialogService, ISaveDialog SaveDialogService)
{
	this.Model = new FileGroupModel();
	this.OpenDialogService = OpenDialogService;
	this.SaveDialogService = SaveDialogService;

	OpenDialogService.Filter = Model.OpenFileDialogFilter;
	SaveDialogService.Filter = Model.OpenFileDialogFilter;

	saveXmlCommand = new DelegateCommand(SaveXml, CanSaveXml);
	loadXmlCommand = new DelegateCommand(LoadXml);
}

public void AddFiles(string[] FileNames)
{
	foreach (string path in FileNames)
	{
		if (path.EndsWith(Model.SerializedExtension))
		{
			try
			{
				FileGroupModel fgm = FileGroupModel.ReadFromFile(path);
				if (fgm != null)
				{
					if (string.IsNullOrEmpty(fgm.Name))
					{
						fgm.Name = System.IO.Path.GetFileNameWithoutExtension(path) + " (" + path + ")";
					}
					Model.Groups.Add(fgm);
					continue;
				}
			}
			//if we get an exception assume it's not a FileGroupModel and add as a regular file
			catch { }
		}
		Model.Files.Add(new System.IO.FileInfo(path));
	}
}

DelegateCommand saveXmlCommand;
public ICommand SaveXmlCommand
{
	get { return saveXmlCommand; }
}

bool CanSaveXml()
{
	return Model.CompositeList.Count > 0;
}

void SaveXml()
{
	if (SaveDialogService.ShowSaveFileDialog() != true)
		return;

	Model.WriteToFile(SaveDialogService.SaveFileName);
}

DelegateCommand loadXmlCommand;
public ICommand LoadXmlCommand
{
	get { return loadXmlCommand; }
}

void LoadXml()
{
	if (OpenDialogService.ShowOpenFileDialog() != true)
		return;

	FileGroupModel newModel = FileGroupModel.ReadFromFile(OpenDialogService.OpenFileName);
	if (newModel == null)
		return;

	Model = newModel;
}

What we can do here is, when creating our unit tests, fake an ISaveDialog and an IOpenDialog that return true or false from their respective show methods depending on what we are testing.
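
A minimal hand-rolled fake for the open side might look something like this (just a sketch of the idea; the class name and ResultToReturn property are mine):

// Test double: ShowOpenFileDialog never opens a window, it just reports
// whatever result the test told it to report.
public class OpenDialogFake : IOpenDialog
{
	public string Filter { get; set; }
	public string OpenFileName { get; set; }
	public bool? ResultToReturn { get; set; }

	public bool? ShowOpenFileDialog()
	{
		return ResultToReturn;
	}
}

// in a test, the view model never knows it isn't talking to a real dialog:
// var vm = new FileGroupViewModel(
//     new OpenDialogFake { ResultToReturn = true, OpenFileName = "TestData.fxml" },
//     saveDialogFake);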

Now, I have to wade a bit further into the weeds here. This isn’t really the way I would implement this in a production application. One of the problems with MVVM is that views and view models tend to be classes unto themselves that don’t deal with anything else, just themselves. Models will get referenced all over but not necessarily views and view models.

A lot of blogs, mine included, sometimes do things a bit off in the interest of brevity. A lot of what is in this post, if you've read my past series on MVVM, you've already seen. The purpose here was to talk about the differences between the Command and Observer design patterns. The differences are important, and I hope I've gotten across why it is best to use Command in an MVVM development environment.

How to do it more right

So how would I implement something like this in a production application? Think about the Cut command (of Cut/Paste fame). Anywhere in a UI I should be able to add a cut command: I want it in my Edit menu, I want it in my context menu, I want it as a button on my toolbar, and finally I want to bind a key command to it. In standard code-behind, each button click would have to implement some click event that then has to figure out what to do. There is huge potential for duplicated code, as well as code that may not be covered by a unit test. The solution in MVVM is to use a library like Unity where you can register global commands to handle this exact situation, as sketched below. Unity is outside the scope of this post, but I encourage you to find out more about it. I may even do a series on Unity.
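
As a rough sketch of that idea (the registration name, ExecuteCut and CanCut are all placeholders), a single command instance can be registered once and resolved everywhere a Cut action appears:

// One shared command registered with Unity under a well-known name. Every menu item,
// toolbar button and key binding resolves the same instance, so the logic and
// CanExecute state live in exactly one place.
var cutCommand = new DelegateCommand(ExecuteCut, CanCut);
container.RegisterInstance<ICommand>("CutCommand", cutCommand);

// later, in any view model that needs Cut:
ICommand cut = container.Resolve<ICommand>("CutCommand");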

I know at first MVVM seems like a massive headache (and it can be). It seems like there is so much extra crap you have to add on. But if you can hang on long enough to get through the crap, I think you'll find that you end up with better, sounder, more stable applications.

Thanks,
Brian

Image Credits:
Observer
Command

Icons in Modern UI (with a nod to UX)

I was battling with some icons this week, so I figured I'd pass on some of my general experience in this area. I'll hold off on the sample of the command pattern until next week.

Let's make sure we get the terminology correct: what Microsoft once labeled "Metro UI" is now called "Modern UI", which is the Windows 8/Surface UI for Windows Runtime (aka Windows RT) apps. It was a lot easier when they just called everything Metro UI.

First off, I'm not a graphic designer or UX expert. I'm a software engineer. My degree is in computer science, not graphic design. As a whole, we software engineers (and yes, I'm sure I probably mean you too) design user interfaces poorly. Sometimes downright horribly. But not all software companies have graphic designers or UX experts on staff (which I consider a shame). Not all clients want to pay for UX work (which is doing themselves a disservice). So I've had to fill the role of both graphic designer and UX expert many times.

I try to keep up with the latest releases in the Microsoft ecosystem as they relate to .NET, but I also try to round off my rough edges with some UX reading. I know this doesn't make me a UX expert by any stretch of the imagination, but it does make me slightly more competent in this area. Also, since I do a lot of UI work, I feel that having a general understanding of UX is necessary to competently do my job. Little things like "error messages should be apologetic" and "where there is both an icon and text, the icon should come first" end up making a big difference to a good UI.

Getting Started in UX

If you are interested in UX I would recommend starting with the Microsoft UX guidelines for Windows Runtime apps. After you’ve digested that thriller, try doing some reading on ux.stackexchange.com. After that you can start in on some industry sites like UX Magazine.

On To Icons

The Microsoft UX guidelines above recommend using the Segoe UI font for Windows 8 development, but there is a companion font, Segoe UI Symbol, that provides a huge number of potential icons. To browse all the images I would recommend using Word's Insert Symbol tool if possible, as there seem to be quite a few more symbols available there than in Character Map.

Choosing the magnifying glass you can use the following code for a search button:

<Button Margin="0 0 5 0" Width="16" Height="16" FontFamily="Segoe UI Symbol" Opacity=".75" Command="{Binding SearchCommand}" ToolTip="Search" Style="{DynamicResource SearchStyle}" Content="🔎" />

Note that if you are in Firefox or IE the magnifying glass should show up correctly, but in Chrome it shows up as a box. These icons are really just Unicode characters, but with Segoe UI Symbol they look like they are part of Modern UI.

My final search control ended up looking great, since by using the font directly you get to take advantage of the TrueType nature of the font, which means it's vector based and always scales cleanly.

Okay, hundreds of icons at your fingertips, but what if that's not enough?  What do you do if you can't find what you're looking for?  Head over to Modern UI Icons. As of right now there are 1229 icons, nearly all of which are Creative Commons, that you can use however you want. Make sure you read the license, as not all of them are CC, but by far the vast majority of them are. And they look like icons you would use in Modern UI, just like they should.

All the icons are available as images in both light and dark themes, as well as xaml and svg so you can use them as vectors.

appbar.input.keyboard
<?xml version="1.0" encoding="utf-8"?>
<Canvas xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation" xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml" x:Name="appbar_input_keyboard" Width="76" Height="76" Clip="F1 M 0,0L 76,0L 76,76L 0,76L 0,0">
	<Path Width="50.6667" Height="28.5" Canvas.Left="12.6667" Canvas.Top="23.75" Stretch="Fill" Fill="#FF000000" Data="F1 M 15.8333,23.75L 60.1667,23.75C 61.9156,23.75 63.3333,25.1678 63.3333,26.9167L 63.3333,49.0833C 63.3333,50.8322 61.9156,52.25 60.1667,52.25L 15.8333,52.25C 14.0844,52.25 12.6667,50.8322 12.6667,49.0833L 12.6667,26.9167C 12.6667,25.1678 14.0844,23.75 15.8333,23.75 Z M 17.4167,28.5L 17.4167,47.5L 58.5833,47.5L 58.5833,28.5L 17.4167,28.5 Z M 20.5833,30.0834L 28.5,30.0833L 28.5,38L 20.5833,38L 20.5833,30.0834 Z M 30.0833,30.0833L 36.4166,30.0834L 36.4166,38L 30.0833,38L 30.0833,30.0833 Z M 20.5833,39.5834L 28.5,39.5833L 28.5,45.9167L 20.5833,45.9167L 20.5833,39.5834 Z M 30.0833,39.5833L 45.9167,39.5834L 45.9167,45.9167L 30.0833,45.9167L 30.0833,39.5833 Z M 38,30.0834L 45.9167,30.0833L 45.9167,38L 38,38L 38,30.0834 Z M 47.5,30.0833L 55.4167,30.0833L 55.4167,38L 47.5,38L 47.5,30.0833 Z M 47.5,39.5834L 55.4167,39.5833L 55.4167,45.9167L 47.5,45.9167L 47.5,39.5834 Z "/>
</Canvas>

That’s it for now.  Thousands of icons you may not have known you had easy (and free) access to.

 

Thanks,
Brian

 

Image Source:
appbar.input.keyboard, Modern UI Icons

So you don't need to know software design patterns. But, as I hope I got across in my post, knowing and understanding patterns can only benefit you. I wanted to put together a real-life example of some of the instances where I've used patterns, even if the use of the patterns was unintentional. Hopefully you will get some use out of them.

I had a requirement where I had to track a list of files and needed to be able to save that list. But the list needed to support not only a list of files but also a list of lists. The use case was that the user could define a list of files, say "My files from client A." The user could then put together a list of lists defined as "My clients from the east coast," which would be comprised of the lists from any clients on the east coast.

Of course this describes the composite pattern. The composite pattern is just an object that contains zero or more objects of its own type. The two most obvious classes in .NET that use this are TreeViewItem and MenuItem. A MenuItem contains its own content but also contains child MenuItems. In my case my class, FileGroupModel, has a list of files (think of that list as the content of the class) as well as a list of FileGroupModels, which is exactly what the composite design pattern is.

Now to facilitate this I obviously need to save out the FileGroupModel, which is where the memento pattern comes in. As you'll see in the code, I went with a DataContract to save out the data.

using CompositeMementoSample.Infrastructure;
using System;
using System.Collections.Generic;
using System.Collections.ObjectModel;
using System.IO;
using System.Linq;
using System.Runtime.Serialization;
using System.Text;
using System.Threading.Tasks;

namespace CompositeMementoSample.Models
{
    [DataContract(IsReference = true)]
    public class FileGroupModel : DomainObject
    {
        public virtual string OpenFileDialogFilter 
        {
            get { return "Files & Groups (*.*)|*.*|File Groups (*.fxml)|*.fxml"; }
        }
        public virtual string SerializedExtension 
        {
            get { return ".fxml"; }
        }

        [DataMember]
        private string name;
        public string Name
        {
            get { return name; }
            set
            {
                if (this.name != value)
                {
                    this.ValidateProperty("Name", value);
                    this.name = value;
                    this.RaisePropertyChanged("Name");
                }
            }
        }

        [DataMember]
        private ObservableCollection<FileInfo> files;
        public ObservableCollection<FileInfo> Files
        {
            get
            {
                return files;
            }
        }

        [DataMember]
        private ObservableCollection<FileGroupModel> groups;
        public ObservableCollection<FileGroupModel> Groups
        {
            get
            {
                return groups;
            }
        }

        public ReadOnlyCollection<object> CompositeList
        {
            get
            {
                List<object> allItems = new List<object>();
                allItems.AddRange(files);
                allItems.AddRange(groups);
                return new ReadOnlyCollection<object>(allItems);
            }
        }

        public FileGroupModel()
        {
            files = new ObservableCollection<FileInfo>();
            files.CollectionChanged += child_CollectionChanged;
            groups = new ObservableCollection<FileGroupModel>();
            groups.CollectionChanged += child_CollectionChanged;
        }

        void child_CollectionChanged(object sender, System.Collections.Specialized.NotifyCollectionChangedEventArgs e)
        {
            this.RaisePropertyChanged("CompositeList");
        }

        public void WriteToFile(string Path)
        {
            using (MemoryStream memStm = new MemoryStream())
            using (StreamWriter outfile = new StreamWriter(Path))
            {
                DataContractSerializer ser = new DataContractSerializer(typeof(FileGroupModel));
                ser.WriteObject(memStm, this);
                memStm.Seek(0, SeekOrigin.Begin);
                string result = new StreamReader(memStm).ReadToEnd();
                outfile.Write(result);
            }
        }

        public static FileGroupModel ReadFromFile(string Path)
        {
            string contents = System.IO.File.ReadAllText(Path);
            using (Stream stream = new MemoryStream())
            {
                byte[] data = System.Text.Encoding.UTF8.GetBytes(contents);
                stream.Write(data, 0, data.Length);
                stream.Position = 0;
                DataContractSerializer deserializer = new DataContractSerializer(typeof(FileGroupModel));
                object o = deserializer.ReadObject(stream);
                return o as FileGroupModel;
            }
        }

        public override string ToString()
        {
            return Name;
        }
    }
}

To start with, FileGroupModel extends DomainObject, which in turn implements INotifyPropertyChanged and INotifyDataErrorInfo. This is done so that the model can integrate into MVVM. DomainObject can be found in my series on MVVM. With the MVVM stuff out of the way, we can get on to the composite design pattern.

Implementing Composite

Looking at the Files property you can see the observable collection that contains our FileInfos, and in the Groups property you can see the observable collection that contains our FileGroupModels, which makes this the composite pattern.  That's it.  The composite pattern is just about implementing tree-structured data.

But we need to use this in MVVM, and we can only bind an ItemsSource to one list. That is the purpose of CompositeList. By hooking into the CollectionChanged event of the two observable collections, any time files or file group models are added to either list we can raise a property changed event, so the view way down the line (hooked up to the model via the view model) gets notified of the change to the collection.  My naming the combined list "CompositeList" is a bit unfortunate, since I mean it to indicate a combined list rather than anything to do with the composite pattern itself.

Implementing Memento

The last requirement is to be able to save out the state of the object (i.e. the memento design pattern). In .NET, arguably, the two easiest ways to write out an object are BinaryFormatter and DataContractSerializer. I tend to only use the binary formatter if there is proprietary data that needs to be stored; when that's the case I use the BinaryFormatter with a CryptoStream to ensure the security of the data. Most of the time, however, I try to use a DataContractSerializer. It's a bit more set-up, having to define the DataContract and DataMember attributes, but on the whole everything seems a bit cleaner. That way, if I need to, I can read the XML directly and see what's going on with the data.
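
For the proprietary-data case, the combination looks roughly like this (a sketch only: path and someSerializableObject are placeholders, real key/IV management is omitted, and the types live in System.IO, System.Security.Cryptography and System.Runtime.Serialization.Formatters.Binary):

// The serialized object is encrypted on its way to disk.
using (var aes = Aes.Create())
using (var fileStream = new FileStream(path, FileMode.Create))
using (var crypto = new CryptoStream(fileStream, aes.CreateEncryptor(), CryptoStreamMode.Write))
{
	new BinaryFormatter().Serialize(crypto, someSerializableObject);
}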

Now the tough part is that if you are going to use multiple classes that extend from a base class, you have to violate the Open-Closed Principle. In my production code I actually extend FileGroupModel (which is abstract there) to limit the types of files that are embedded. The problem with this approach is that you have to define a KnownType attribute on the base class so that when you deserialize the object the DataContractSerializer knows what to do with it. This means that every time you add a class that extends the base class, you have to add a KnownType for that class on the base, as in the sketch below. See? An obvious violation of the OCP, but definitely a situation where we can ignore the rules on the paint can.
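
To illustrate (the derived class here is invented for the example), every new subclass forces a change back in the base:

// Adding ImageFileGroupModel means touching FileGroupModel again.
[DataContract(IsReference = true)]
[KnownType(typeof(ImageFileGroupModel))] // one entry per derived class, or deserialization fails
public class FileGroupModel : DomainObject
{
	// ... members as shown above ...
}

[DataContract(IsReference = true)]
public class ImageFileGroupModel : FileGroupModel
{
	public override string OpenFileDialogFilter
	{
		get { return "Images (*.png;*.jpg)|*.png;*.jpg|File Groups (*.fxml)|*.fxml"; }
	}
}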

Next week I’ll follow up this post with an MVVM sample that is a bit closer to how I actually use it. I’ll show a sample extending FileGroupModel so you can get a better idea of using DataContractSerializer, but this also leads into using the command design pattern.

Thanks,
Brian