March 1st, 2010
For the first ~18 months of Glo’s development, the app self-hosted a fairly large set of WCF services through which we pumped the data coming from SQL Compact over named pipes. The data-access and entity-mapping layers sat behind five WCF endpoints that the Presenters used for data retrieval.
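For readers unfamiliar with the pattern, the setup looked roughly like this minimal self-hosting sketch. The contract and service names here (IVerseService, VerseService) and the pipe URI are invented for illustration, not Glo’s actual code:

```csharp
using System;
using System.ServiceModel;

// Hypothetical contract standing in for one of the five real endpoints.
[ServiceContract]
public interface IVerseService
{
    [OperationContract]
    string GetVerseText(int verseId);
}

public class VerseService : IVerseService
{
    public string GetVerseText(int verseId)
    {
        // In the real app this would go through the data-access and
        // entity-mapping layers backed by SQL Compact.
        return "...";
    }
}

class Program
{
    static void Main()
    {
        // Self-host the service inside the desktop process over named pipes.
        var host = new ServiceHost(typeof(VerseService),
            new Uri("net.pipe://localhost/glo"));
        host.AddServiceEndpoint(typeof(IVerseService),
            new NetNamedPipeBinding(), "verses");
        host.Open();

        Console.WriteLine("Service listening; press Enter to exit.");
        Console.ReadLine();
        host.Close();
    }
}
```

The Presenters would then talk to this endpoint through a ChannelFactory, and in theory a Silverlight build could point the same client code at an HTTP URI instead.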
The reasoning behind this architecture was simple: it would be nice to leverage the same code for both the desktop and Silverlight apps, so let’s build it as a WCF service, then just point to a different URI to connect to the service based on the platform.
This worked fine until we started hitting WCF limits. Of course it’s simple to adjust the MaxReceivedMessageSize, MaxItemsInObjectGraph, MaxArrayLength, etc. defaults, but we also saw we needed to code around perf issues when returning big chunks of data. I dealt with tweaking limits for a while, but after failing to fix WCF errors saying I had exceeded MaxNameTableCharCount, I started thinking about bailing on WCF altogether. The services appeared to be too heavyweight & had caused me one too many headaches.
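For reference, the quota-tweaking dance looked roughly like the config fragment below. The binding/behavior names and the specific values are illustrative, not what Glo shipped; note that the reader quotas live on the binding while MaxItemsInObjectGraph is a serializer behavior, which is part of what makes chasing these limits tedious:

```xml
<system.serviceModel>
  <bindings>
    <netNamedPipeBinding>
      <!-- Hypothetical binding with raised message/reader quotas -->
      <binding name="bigPayloads" maxReceivedMessageSize="67108864">
        <readerQuotas maxArrayLength="16777216"
                      maxNameTableCharCount="1048576" />
      </binding>
    </netNamedPipeBinding>
  </bindings>
  <behaviors>
    <endpointBehaviors>
      <!-- MaxItemsInObjectGraph is configured separately, on the serializer -->
      <behavior name="bigGraphs">
        <dataContractSerializer maxItemsInObjectGraph="1000000" />
      </behavior>
    </endpointBehaviors>
  </behaviors>
</system.serviceModel>
```

Every new limit you trip means finding which of these knobs it maps to, on both the service and client sides.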
I ran tests to gauge the potential benefit of eliminating WCF, and was somewhat amazed to see the overhead the services had imposed. Serialization & transport issues aside, I had attended Juval Lowy’s lecture on “Every Class a WCF Service” and came away thinking perf would not be terribly impacted.
So I spent a couple of days replacing the services with a layer that loads data entities directly. This boosted backend performance by over 30%, and naturally eliminated the associated headaches.
Moral of the story: even if I have to replicate Glo’s data layer as online WCF services for a web version, removing them from the desktop version was definitely worth it. LESSON LEARNED.