When I use the DDS I always use a pattern where I have a public static Items implementation that I can run my queries against. This logic is placed inside a common base class that all my DDS tables inherit from.
So when I needed to speed things up a bit, I changed my Items implementation to instead return an in-memory list of all my items. Since all my DDS classes inherit from the same base, I made the change so that the memory cache can be turned on and off in appSettings.
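A minimal sketch of what such a base class could look like (the class name, the `DDSMemoryCache` appSettings key, and the member names are my own hypothetical choices, not the author's actual code; `DynamicDataStoreFactory` and `Items<T>()` are EPiServer's DDS API):

```csharp
using System.Collections.Generic;
using System.Configuration;
using System.Linq;
using EPiServer.Data.Dynamic;

// Hypothetical common base class that all DDS table classes inherit from.
public abstract class CachedDataStoreBase<T> where T : CachedDataStoreBase<T>
{
    // One cached list per DDS type; null until first use or after invalidation.
    private static List<T> _cache;
    private static readonly object _lock = new object();

    // Cache toggle read from appSettings,
    // e.g. <add key="DDSMemoryCache" value="true" />
    private static bool CacheEnabled =>
        bool.TryParse(ConfigurationManager.AppSettings["DDSMemoryCache"],
                      out var on) && on;

    protected static DynamicDataStore Store =>
        DynamicDataStoreFactory.Instance.GetStore(typeof(T));

    // The public static Items implementation that all queries go through.
    public static IEnumerable<T> Items
    {
        get
        {
            if (!CacheEnabled)
                return Store.Items<T>(); // fall back to querying the store

            if (_cache == null)
            {
                lock (_lock)
                {
                    if (_cache == null)
                        _cache = Store.Items<T>().ToList(); // read everything once
                }
            }
            return _cache;
        }
    }
}
```

Because Items returns the same object instances that the cache holds, callers query the list with ordinary LINQ and any in-place change to an object is immediately visible to subsequent queries.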
When I retrieve an object and want to change it, I just make the change and save it using my LazyDDSSave class. Even before the item is saved, all new queries will see the changed object, since it is the same object instance. One could make a CreateWritableClone implementation that updates the cache once the save is done, but I didn't.
It's only when we create a new object or delete one that the number of elements in the memory cache changes. I have opted for a full reread from the data store when I delete or add an item, but this could also be changed to just add or remove the single object in the memory list.
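The add/delete handling could be sketched like this, assuming a base class along the lines described above with a static `_cache` list (again, the names are hypothetical; `Store.Save` and `Store.Delete` are EPiServer DDS calls):

```csharp
// Hypothetical helpers in the common base class: adding or deleting an item
// clears the cached list, so the next access to Items does a full reread
// from the data store.
public static T AddNew(T item)
{
    Store.Save(item);
    _cache = null;            // force full reread on next access to Items
    return item;
}

public static void Delete(T item)
{
    Store.Delete(item);
    _cache = null;            // force full reread on next access to Items
}

// The alternative mentioned above would patch the list in place instead:
//   _cache?.Add(item);   or   _cache?.Remove(item);
```

The full reread is the simpler and safer choice when adds and deletes are rare, which matches the workload described below.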
If you are in an enterprise load-balanced server situation, you could either implement an event-based reload or reload the memory list after a fixed amount of time.
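For the timed-reload variant, one way to sketch it (my own illustration, with an assumed five-minute interval, not the author's uploaded code) is to stamp the cache with a load time and drop it once it is too old:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;
using EPiServer.Data.Dynamic;

// Sketch of a time-based reload: each server rereads its memory list when the
// cached copy is older than a fixed interval, so changes made on other servers
// in the load-balanced farm show up within that window.
public abstract class TimedCachedStoreBase<T> where T : TimedCachedStoreBase<T>
{
    private static readonly TimeSpan MaxAge = TimeSpan.FromMinutes(5); // assumed
    private static List<T> _cache;
    private static DateTime _loadedAtUtc;
    private static readonly object _lock = new object();

    public static IEnumerable<T> Items
    {
        get
        {
            if (_cache == null || DateTime.UtcNow - _loadedAtUtc > MaxAge)
            {
                lock (_lock)
                {
                    if (_cache == null || DateTime.UtcNow - _loadedAtUtc > MaxAge)
                    {
                        var store = DynamicDataStoreFactory.Instance
                                                           .GetStore(typeof(T));
                        _cache = store.Items<T>().ToList();
                        _loadedAtUtc = DateTime.UtcNow;
                    }
                }
            }
            return _cache;
        }
    }
}
```

The trade-off is staleness: each server can serve data up to MaxAge old, which is usually acceptable for mostly-read tables but not for data that must be consistent across the farm immediately.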
I gained a lot of performance just by using the implementation above in a project I worked on. It had many updates to objects, and not very many deletes or new ones.
Since I then cache all (or most of) my DDS tables, I don't need to cache the results of queries against them. So when updates are done, I don't need to invalidate any aggregate cache. That saves me a lot of worries.
I have uploaded the base class in the code section here.