...when building a GREAT Episerver solution. These are my favorite areas, often forgotten, that separate a great solution from a decent one. (Sorry for the caps earlier. I tend to get excited when talking about this area.)
You can start from an empty piece of paper if you wish. But should you? I actually like the Alloy site way of doing things, and trust me, I've seen quite a few attempts at making a better architecture. There is also value in standardization: new Episerver developers will need zero additional ramp-up time when entering the project. Even if your architecture is potentially a few percent better, it's seldom worth it. You have support for IoC, custom view models and an easy way to handle the current page model without additional mappings between models. It's a nice balance between rigid design patterns and efficient development in my eyes. What's not to like? Unless you have a very specific reason, stick to the Alloy way of doing things. Beyond that, the SOLID principles are a good general guide for architecture. Kudos to those involved in making the Alloy template site, and to Episerver for making it, btw. I like it!
Episerver is great at caching normal GetChildren / GetPage calls, so there is normally no need to add an extra layer of caching on top of it. Extra complexity in a solution hurts in the long run too. Don't cache if you don't need to. You can get funny bugs with access rights if you cache menus, for instance. If you do decide to cache menus, only cache them for the anonymous user.
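To make that last point concrete, here is a minimal sketch of the anonymous-only pattern. It assumes MVC with plain MemoryCache; MenuItem and BuildMenu are hypothetical names, where BuildMenu would walk the tree with access-right filtering.

```csharp
using System;
using System.Collections.Generic;
using System.Runtime.Caching;
using System.Web;
using EPiServer.Core;

public IEnumerable<MenuItem> GetMenu(ContentReference root)
{
    var user = HttpContext.Current?.User;
    if (user != null && user.Identity.IsAuthenticated)
    {
        // Logged-in users may have different access rights:
        // never serve them a shared cached menu.
        return BuildMenu(root);
    }

    // Anonymous visitors all see the same menu, so caching is safe.
    var key = "menu:anon:" + root.ID;
    var cached = MemoryCache.Default.Get(key) as IEnumerable<MenuItem>;
    if (cached == null)
    {
        cached = BuildMenu(root);
        MemoryCache.Default.Set(key, cached, DateTimeOffset.Now.AddMinutes(5));
    }
    return cached;
}
```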
Blocks in blocks in blocks works from a technical point of view. It's hell for editors, though. I normally stick to a single layer of blocks if at all possible. Avoid building too generically just because you can; you will regret it. YAGNI.
Normal blocks don't need a controller, btw. Use just a partial view if possible.
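As a sketch (the names are illustrative, not from this article): with a partial view at Views/Shared/TeaserBlock.cshtml, Episerver renders this block without any controller class at all.

```csharp
using System;
using EPiServer.Core;
using EPiServer.DataAnnotations;

[ContentType(DisplayName = "Teaser",
    GUID = "11111111-1111-1111-1111-111111111111")] // example GUID
public class TeaserBlock : BlockData
{
    public virtual string Heading { get; set; }
    public virtual XhtmlString Text { get; set; }
}
```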
Sometimes a normal page is a better solution. You can render pages in a content area just as well, you know, using a partial view. Great for teasers for article pages, for instance.
Oh, didn't the property name ListItemMaxLimit make sense to you? As long as you stick to Episerver's default rendering of properties you can build a great editor experience with very little effort. Use Html.PropertyFor instead of custom rendering if possible, or hook it up yourself using EditorHints.
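In a page view this is a one-liner per property (a sketch assuming an Alloy-style view model; the property names are examples). PropertyFor renders the value on the live site and makes the property editable on-page in edit mode with no extra wiring:

```csharp
// In e.g. Views/ArticlePage/Index.cshtml:
@Html.PropertyFor(x => x.CurrentPage.Heading)
@Html.PropertyFor(x => x.CurrentPage.MainBody)
```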
If you do any recursive GetChildren / GetDescendants / FetchPagesByCriteria calls where you traverse a large part of the tree structure to find something, the result normally needs to be cached to get decent performance. Although Episerver caches very well by itself, you will likely run into issues with this part otherwise. A common one is that when an editor publishes, the site stops responding for a while. If you are just looking for a single page of a specific type you can also use the ContentModelUsage class. Episerver Find is the next step if you still can't fix the performance...
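One way to sketch this: cache the traversal result and evict it when an editor publishes. This assumes Episerver's IContentEvents service; FindArticlePages is a hypothetical helper doing the expensive recursive traversal.

```csharp
using System;
using System.Collections.Generic;
using System.Runtime.Caching;
using EPiServer.Core;

public class ArticleIndex
{
    private const string CacheKey = "articles:all";

    public ArticleIndex(IContentEvents events)
    {
        // Any publish invalidates the cached traversal, so editors
        // see their changes without the site grinding to a halt.
        events.PublishedContent += (s, e) => MemoryCache.Default.Remove(CacheKey);
    }

    public IList<PageData> GetAll(ContentReference root)
    {
        var cached = MemoryCache.Default.Get(CacheKey) as IList<PageData>;
        if (cached != null) return cached;

        var pages = FindArticlePages(root); // the expensive recursive part
        MemoryCache.Default.Set(CacheKey, pages, DateTimeOffset.Now.AddHours(1));
        return pages;
    }
}
```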
DO build a repository type of class to encapsulate external calls. DO add caching to any relevant calls. I've never regretted building this part yet. My favorite problem site was an intranet with so many uncached external calls on the start page that with more than 10 users it wouldn't even start. Prefix is a great tool to check how many db/external calls a page makes. Kill em! Kill em all!
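A minimal sketch of the shape I mean: hide the external service behind an interface, then add caching in a decorator so callers never hit the wire more than necessary. All names here are illustrative.

```csharp
using System;
using System.Runtime.Caching;

public interface IWeatherService
{
    Forecast GetForecast(string city);
}

public class CachedWeatherService : IWeatherService
{
    private readonly IWeatherService _inner;

    public CachedWeatherService(IWeatherService inner)
    {
        _inner = inner;
    }

    public Forecast GetForecast(string city)
    {
        var key = "weather:" + city;
        var cached = MemoryCache.Default.Get(key) as Forecast;
        if (cached != null) return cached;

        var forecast = _inner.GetForecast(city); // the real external call
        MemoryCache.Default.Set(key, forecast, DateTimeOffset.Now.AddMinutes(15));
        return forecast;
    }
}
```

The nice part of the decorator is that the rest of the solution only ever sees IWeatherService, so you can add, tune or remove the caching without touching the callers.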
Use interfaces instead of base classes. Want most of your pages to have SEO information? Make an interface and implement it, then build the functionality around the interface. A base class or two is fine; 5 or 6 is too many. There is a difference between "has" and "is" in OOP.
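A sketch of the SEO example (the names are mine, not a standard API):

```csharp
using EPiServer.Core;

public interface ISeoPage
{
    string MetaTitle { get; }
    string MetaDescription { get; }
}

public class ArticlePage : PageData, ISeoPage
{
    public virtual string MetaTitle { get; set; }
    public virtual string MetaDescription { get; set; }
}

// The functionality targets the interface, not any base class:
// if (CurrentPage is ISeoPage seo) { RenderMetaTags(seo); }
```

Any page type that "has" SEO information opts in by implementing the interface, without being forced into a deep inheritance chain.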
Boring? Yes, maybe, but in any project of size with a number of integrations you will spend most of the time figuring out what is sent back and forth between systems rather than building blocks etc. Build logging early. Log everything to and from external calls: execution times, parameters, results etc. Make sure it's possible to turn on and off in production. Log4net works great. Add logging when blocks throw errors as well. Those errors are usually swallowed by the ContentAreaRenderer and never seen, making it difficult to understand what happened. Also check out this link for adding detailed logging.
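A sketch of what that looks like around a single external call, using log4net (the client and type names are illustrative):

```csharp
using System;
using System.Diagnostics;
using log4net;

public class OrderClient
{
    private static readonly ILog Log = LogManager.GetLogger(typeof(OrderClient));

    public Order GetOrder(int orderId)
    {
        var watch = Stopwatch.StartNew();
        try
        {
            var order = _client.FetchOrder(orderId); // the external call
            Log.DebugFormat("GetOrder({0}) succeeded in {1} ms",
                orderId, watch.ElapsedMilliseconds);
            return order;
        }
        catch (Exception ex)
        {
            // Parameters and timing in the message, full stack trace attached.
            Log.Error(string.Format("GetOrder({0}) failed after {1} ms",
                orderId, watch.ElapsedMilliseconds), ex);
            throw;
        }
    }
}
```

With the log level controlled from log4net's configuration, you can turn the detailed logging on in production only when you need it.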
At least glance at the OWASP list, or go through this one.
Customers tend to get cranky if they get hacked. Just saying it wasn't part of your requirements won't save you. :)
If you followed the advice above about caching external calls and tree traversals, you should be pretty fine on the backend. For the frontend I recommend minifying and bundling scripts and CSS to lower the number of calls. Checking the size of images never hurts; get rid of heavy PNGs you don't need. Add some gzip compression of content on the IIS and you should be pretty good to go. If you have any kind of lists on your site, check that they work decently when you have 1000s of objects, and add paging. I had a nice little site that went down for 10 minutes when someone searched for the letter 'a', because it rendered 4 MB of HTML. That's a lot of tags...! IIS logs and LogParser 2.0 are great for finding bottlenecks. Check the worst URLs by time taken, multiply by the number of views, and you get a number that tells you which pages to optimize first.
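The paging part is a one-liner with LINQ; a sketch (SearchHit and the page size are made-up examples):

```csharp
using System.Collections.Generic;
using System.Linq;

private const int PageSize = 20;

// Render at most one page of results, never the full result set.
public IEnumerable<SearchHit> GetPage(IEnumerable<SearchHit> allHits, int pageIndex)
{
    return allHits.Skip(pageIndex * PageSize).Take(PageSize);
}
```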
There are also a few browser plugins that will give you hints if you have done bad things with your frontend.
I normally check with https://www.webpagetest.org/ when I have a publicly available solution, and aim for grade A.
Hope that helps someone out there! Feel free to disagree or add your top do/don't in comments below!