<h1>Blog posts by Dan Matthews</h1>
<h2>QuickStart for Epi, Azure AD and WS-Federation</h2>
<p><em>2019-01-25</em></p>
<p>On any project, the concept of identity can be one very hot potato. Who needs to access the system? How do you know who they are? How do you know what they can do? For greenfield sites, you can often use the standard ASP.Net Identity SQL-stored user management to authenticate and authorise users and keep everything within the application itself. However, if you're working with existing sites or larger enterprises, it's highly unlikely that the standard OOTB user management will be enough and you'll probably need to connect to something external.</p>
<p>Ideally, projects that require an external identity management system would use a lovely, rich SSO provider like <a href="https://auth0.com/">Auth0</a> or <a href="https://www.okta.com">Okta</a>. Unfortunately, that's not a luxury that we always have available. In the case of corporates, it's highly likely they are still running on-premise Active Directory and authenticating with vanilla LDAP-based AD. In that case, you might find it easiest just to use the <a href="/link/f799727c412744ff922950542ab2e70c.aspx">OOTB membership provider for Active Directory</a>. If you're running in the <a href="/link/5d17930c8cda4653b6d50a78c589a51e.aspx">Episerver DXC Cloud Service</a> that would really suck though, as a VPN would be needed to tunnel to the on-premise AD (along with all the paperwork that would bring).</p>
<p>This is where both ADFS and Azure AD come in. Before going further, I should probably explain what they both are, what the difference is and where the overlap is. ADFS is a Security Token Service (STS). This means that it is simply a way of serving tokens to identified users. It can't exist on its own, but uses other systems to authenticate the users against some identity management system (such as Active Directory). Azure AD is an entire identity and access management system (IdAM) that <em>includes</em> an STS. In this sense it's more 'standalone' than ADFS. The difference is therefore that ADFS is a highly flexible STS designed to be customised and backed by full AD, whereas Azure AD is a cloud-based solution that is somewhat more limited on the STS part than ADFS but provides many efficiencies by being a 'one stop shop'.</p>
<p>The commonality between them is that both ADFS and Azure AD support integration via a standard called WS-Federation. This means that using WS-Federation I can request that ADFS/Azure AD authenticates me and issues me a token. From my application's perspective, I don't really care about the IdAM that is 'backing' the STS. I just care that I can ask for a token, something authenticates me and then I get issued a token.</p>
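<p>To make that concrete, a WS-Federation sign-in is just a browser redirect to the STS carrying a few well-known query parameters. The shape below is illustrative only (the parameter names come from the WS-Federation spec; the bracketed placeholders are yours to fill in):</p>
<pre class="language-markup"><code>GET https://[your STS]/?wa=wsignin1.0
    &wtrealm=[identifier of your application/site]
    &wreply=[URL the signed token should be posted back to]</code></pre>
<p>The STS authenticates the user however it likes, then posts a signed token back to the <em>wreply</em> address, where the application validates it and signs the user in.</p>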
<h3>I thought you said it was a QuickStart?</h3>
<p>Okay, I'll get to some steps now. I just needed to cover the intro so we understood what we were doing and why! In this QuickStart, I'm going to use Azure AD as my STS. As stated before, the approach to ADFS and Azure AD is nearly identical as far as my application is concerned, because the bit we talk to is just a WS-Federation compliant STS. However, by using Azure AD it's going to be much easier (and cheaper!) to set up. Setting up ADFS is rather tedious (you need Active Directory domain controllers, ADFS servers, ADFS Web Application Proxy or 'WAP' servers and a pile of configuration) and I wouldn't go there unless you need to.</p>
<p>To start, let's set up our Episerver site. Simply create an Alloy MVC site using <a href="https://marketplace.visualstudio.com/items?itemName=EPiServer.EpiserverCMSVisualStudioExtension">Episerver's Visual Studio extension</a>. It's a good idea to spin it up and create a username and password, just in case you need to troubleshoot later. If you open up the code then you'll find that Startup.cs contains code to talk to the built-in user and role managers. We'll replace those later.</p>
<h3>Setting up Azure AD</h3>
<p>Now we need to set up our Azure AD. Log onto the <a href="https://portal.azure.com">Azure Portal</a> and select the 'Azure Active Directory' option on the left-hand navigation. You should have one already provisioned, even if you're logging in with a Hotmail account or similar. In that case, it will be an Azure AD with just you in it. In order to be able to integrate with Azure AD, we need to create an <em>application</em>. This is effectively the 'vehicle' that we use to communicate with the STS and obtain tokens. To create this, do the following:</p>
<ul>
<li>Select 'Application Registrations'</li>
<li>Select 'New Application Registration' (at the top)</li>
<li>Give the new application a Name and for the Sign-on URL, use the URL of your new Episerver site (e.g. <a href="http://localhost:59574/">http://localhost:59574/</a>) - leave the application type as Web app / API</li>
<li>Click 'Create'</li>
</ul>
<p>By this point you should have a screen that looks something like this:</p>
<p><img src="/link/df6c33d061a34663891413dde93d0b38.aspx" /></p>
<p>Our application is now created, but we need to configure a few settings on it. Click the 'Settings' option and set the following:</p>
<ul>
<li>Properties -> App ID URL -> [set this to the URL of your site, e.g. <a href="http://localhost:59574/">http://localhost:59574/</a>]</li>
<li>Reply URLs -> [add one for your Episerver CMS login, e.g. <a href="http://localhost:59574/episerver/cms">http://localhost:59574/episerver/cms</a>]</li>
<li><span>Required Permissions -> (Add) -> Select an API -> [select 'Microsoft Graph' then choose the two permissions: 'Read all users full profiles', 'Read directory data']</span></li>
</ul>
<p>Quick gotcha... if you've left this screen and go back to 'App registrations', you probably won't see the app you created. That's because you need to change the filter from 'my apps' to 'all apps':</p>
<p><img src="/link/248a7ac5873048ee9b451ac553b6d124.aspx" /></p>
<p>Of the three settings we just changed, the third one (permissions) isn't actually necessary for <em>authentication</em>. We can log on users fine without it. But if we want to <em>authorise</em> them to do things, like log on to Edit mode, then we need to assign them to roles. For this, we need to allow the application to read roles from Azure AD. There are two kinds of role we could read: Azure AD groups or Role Based Access Control (RBAC) roles. The first is the more traditional AD-like groups. However, I've found it much harder to get these as role claims through to our website; the permissions and mechanisms for pulling Azure AD groups through as application roles are not trivial. The simpler option is to use RBAC. In this case, the roles are held within the application itself, and the permissions and mechanism become much simpler. Let's set up an RBAC role for 'WebAdmins' and add our user to it now.</p>
<ul>
<li>Open your application in App Registrations (if it's not open already)</li>
<li>Select 'Manifest'</li>
<li>In the displayed JSON file, find the 'appRoles' element (near the start) and replace it with the following - note that you can create your own GUID if you prefer, and make sure that there is a comma at the end before the next item in the JSON file!<br />
<pre class="language-markup"><code>"appRoles": [
  {
    "allowedMemberTypes": [
      "User"
    ],
    "displayName": "WebAdmins",
    "id": "814a5ee4-b1a1-44f7-b509-23e1889ec119",
    "isEnabled": true,
    "description": "Web Administrators.",
    "value": "WebAdmins"
  }
]</code></pre>
</li>
<li>Save the manifest</li>
<li>Go to Azure Active Directory -> Enterprise Applications -> [your application] -> Users and Groups</li>
<li>Select 'Add user'</li>
<li>Choose one or more users that you want to use to log in as a web admin</li>
<li>Choose the WebAdmins role (if this is the only role, it will be preselected and greyed out - that's okay)</li>
<li>Click 'Assign'</li>
</ul>
<p>We have now added the user(s) into that RBAC group, and when they log on, they will get the role claim sent to your website to be used as an Episerver role. We're nearly there on the Azure AD side of things, there is just one more security check we need to do. The permissions we added require an administrator to confirm them:</p>
<ul>
<li>Open your application in Enterprise Applications (if it's not open already)</li>
<li>Click 'Permissions'</li>
<li>Click the button called 'Grant admin consent for ...' with the name of your organisation</li>
<li>You will be prompted for an admin user login; log in and accept the permissions request</li>
</ul>
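<p>As a quick sanity check once one of the users we just assigned has logged in and been synchronised: the WebAdmins role claim behaves like any other Episerver role, so a standard ASP.NET role check should pass. This snippet is purely illustrative:</p>
<pre class="language-csharp"><code>// Illustrative only - the 'WebAdmins' app role assigned in Azure AD arrives
// as a role claim and works with the standard role APIs
bool isWebAdmin = System.Web.HttpContext.Current.User.IsInRole("WebAdmins");</code></pre>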
<p>The last thing we want to do is get our Azure AD endpoint, as we'll need it shortly. You can find this here:</p>
<ul>
<li>Azure Active Directory -> App registrations -> Endpoints -> Federation Metadata Document</li>
</ul>
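<p>The value you want is the Federation Metadata Document URL. For an Azure AD tenant it typically follows the pattern below (the tenant ID being the GUID of your directory) - copy the exact value from the portal rather than constructing it by hand:</p>
<pre class="language-markup"><code>https://login.microsoftonline.com/[your tenant ID]/federationmetadata/2007-06/federationmetadata.xml</code></pre>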
<h3>Setting up the website</h3>
<p>Now we are ready to flip the code over in our website to use Azure AD. Go to the project in Visual Studio and do the following:</p>
<ul>
<li>Add the nuget package <strong>Microsoft.Owin.Security.WsFederation</strong></li>
<li>Comment out the contents of the Configuration method of the Startup.cs file and paste the following there instead, replacing the two settings in square brackets as needed (with thanks to articles from <a href="/link/31fe81f94c5b4d8ca67478b1ee44c6db.aspx">Episerver World</a> and the <a href="/link/ec8fa9e14f1f4cf89d1bbe8ae3f3752d.aspx">Episerver Forums</a> on which code this is based):<br />
<pre class="language-csharp"><code>app.SetDefaultSignInAsAuthenticationType(CookieAuthenticationDefaults.AuthenticationType);

app.UseCookieAuthentication(new CookieAuthenticationOptions());

app.UseWsFederationAuthentication(
    new WsFederationAuthenticationOptions
    {
        MetadataAddress = "[your Azure AD metadata endpoint]",
        Wtrealm = "[your site URL, e.g. http://localhost:59574/]",
        Notifications = new WsFederationAuthenticationNotifications()
        {
            RedirectToIdentityProvider = (ctx) =>
            {
                // An authenticated user who lacks access should get a 403,
                // not be bounced back to the identity provider
                if (ctx.OwinContext.Response.StatusCode == 401 && ctx.OwinContext.Authentication.User.Identity.IsAuthenticated)
                {
                    ctx.OwinContext.Response.StatusCode = 403;
                    ctx.HandleResponse();
                }

                ctx.ProtocolMessage.Wreply = SiteDefinition.Current.SiteUrl.ToString();

                return Task.FromResult(0);
            },
            SecurityTokenValidated = (ctx) =>
            {
                var redirectUri = new Uri(ctx.AuthenticationTicket.Properties.RedirectUri, UriKind.RelativeOrAbsolute);

                if (redirectUri.IsAbsoluteUri)
                {
                    ctx.AuthenticationTicket.Properties.RedirectUri = redirectUri.PathAndQuery;
                }

                // Sync the user and role claims into Episerver
                ServiceLocator.Current.GetInstance<ISynchronizingUserService>().SynchronizeAsync(ctx.AuthenticationTicket.Identity);

                return Task.FromResult(0);
            },
            AuthenticationFailed = (ctx) =>
            {
                throw new Exception(ctx.Exception.ToString());
            }
        }
    });

app.UseStageMarker(PipelineStage.Authenticate);

app.Map("/util/logout.aspx", map =>
{
    map.Run(ctx =>
    {
        ctx.Authentication.SignOut();
        return Task.FromResult(0);
    });
});

AntiForgeryConfig.UniqueClaimTypeIdentifier = ClaimTypes.Name;</code></pre>
</li>
<li>Resolve any references as needed</li>
<li>Build and run the project</li>
</ul>
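<p>For reference, 'resolving the references' typically means adding using directives along these lines. The namespaces below are my best recollection of where these types live in the OWIN and Episerver packages - let Visual Studio's quick actions confirm them for your installed versions:</p>
<pre class="language-csharp"><code>using System;
using System.Security.Claims;               // ClaimTypes
using System.Threading.Tasks;               // Task
using System.Web.Helpers;                   // AntiForgeryConfig
using EPiServer.Security;                   // ISynchronizingUserService
using EPiServer.ServiceLocation;            // ServiceLocator
using EPiServer.Web;                        // SiteDefinition
using Microsoft.Owin.Extensions;            // UseStageMarker, PipelineStage
using Microsoft.Owin.Security.Cookies;      // CookieAuthenticationDefaults
using Microsoft.Owin.Security.WsFederation; // WsFederationAuthenticationOptions
using Owin;</code></pre>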
<p>It's that easy! When you now go to your site /episerver/cms, you should get taken to Azure AD for authentication, something like this:</p>
<p><img src="/link/4b07be93b1b04aedb51dd82312dadb0e.aspx" /></p>
<p>Pick your account that you assigned the WebAdmins group to, and you should be able to get into the Episerver CMS edit mode successfully!</p>
<h3>Conclusion</h3>
<p>In this QuickStart we walked through setting up a new application in Azure AD, configuring it and then switching our Alloy site to use that application configuration and authenticate to Azure AD. The setup you've gone through here supports SSL and multi-site, both of which you can set up on IIS Express (maybe the subject of another post?!).</p>
<p>I hope this helps you get started using WS-Federation and Episerver. Some of the screens and things may change over time, so I don't know how long this QuickStart will be perfectly valid, but the concepts inside should be around for some time to come.</p>Vulcan gets Parallel Indexing and Always-On features/blogs/Dan-Matthews/Dates/2018/5/vulcan-gets-parallel-indexing-and-always-on-features/2018-05-22T16:16:54.0000000Z<p>As I move around in Episerver circles, I get many questions and requests about Vulcan, <a href="/link/e865faa3858d4e5099ced7f03a3ac221.aspx">the lightweight ElasticSearch client for Episerver</a>. Two of the most asked-for features are the ability to do Parallel Indexing and have an Always-On feature so that the search is still available even during a reindex.</p><p>I’m glad to announce that we currently have both those features in test! You can grab the nuget packages already from <a href="https://ci.appveyor.com/project/dan-matthews/vulcan/build/1.0.81/artifacts">appveyor</a> and drop them into a local package source if you just can’t wait to test, or you can wait until they drop into the main <a href="https://nuget.episerver.com/">Episerver nuget feed</a>. Just remember that they are pre-release at the moment so you’ll have to check the ‘Include prerelease’ checkbox in Nuget Package Manager. So what’s new in Vulcan, and how do you use them?</p><p>Firstly, parallel indexing. <a href="https://www.wsol.com/brad-mcdavid/">Brad McDavid</a> did some of the ground work on this one already, and all that remained was to finish off the implementation. It’s off by default, but you can turn it on by simply enabling it in your <font face="Courier New">IVulcanIndexContentJobSettings</font> implementation. 
Here is an example:</p>
<pre class="language-csharp"><code>[ModuleDependency(typeof(ServiceContainerInitialization))]
public class VulcanParallel : IConfigurableModule
{
    public void Initialize(InitializationEngine context)
    {
    }

    public void Uninitialize(InitializationEngine context)
    {
    }

    public void ConfigureContainer(ServiceConfigurationContext context)
    {
        context.Services.AddSingleton<IVulcanIndexContentJobSettings, ParallelIndexing>();
    }
}

public class ParallelIndexing : IVulcanIndexContentJobSettings
{
    public bool EnableParallelIndexers => true;
    public bool EnableParallelContent => true;
    public bool EnableAlwaysUp => true;
    public int ParallelDegree => 4;
}</code></pre>
<p>Once you’ve switched the parallel indexing on, the ‘ParallelDegree’ number can be used to choose just how parallel you go. The higher the number, the more threads it will spin off. Set it to -1 and it will grab all the capacity it can. If you’re running a nice multi-core, maybe you can go large! The default is 4, which isn’t very parallel at all, but the problem with going too parallel too quickly is that you’ll swallow the server up. This might not be such an issue if you’re running the indexing scheduled job on a back-end server, but if this is one of your public facing servers then you don’t really want to kill it with an indexing job. Try changing the number until you find a balance that works for you.</p><p>The second feature is Always-On. This means that, quite simply, your index is still available during a reindex. This is achieved using ElasticSearch aliases. By default it’s off, but simply turn it on (also in the example above) and you’re good to go! Just be aware of a couple of side effects of this – firstly, while the indexing is happening there will be an additional set of indexes on your ElasticSearch server. You can see this in the example pic below (you’ll also notice the aliases that are now used by Vulcan):</p><p><a href="/link/3fc2b18ea35445d2b8747e9afb4af7e8.aspx"><img width="589" height="145" title="image" style="display: inline; background-image: none;" alt="image" src="/link/c6d1a352908443949545f2111a88a7be.aspx" border="0" /></a></p><p>They are cleared down when the job successfully completes, but make sure your ElasticSearch has capacity for the extra indexes. Secondly, we’ve had to change the naming conventions of the indexes that Vulcan creates. You may well want to clear the old indexes up. If you have access to your ElasticSearch server you can do this yourself, but there’s a new scheduled job called ‘Vulcan Index Clear’ that wipes ALL the Vulcan indexes for your site.
Just remember to reindex your Vulcan content once you’ve wiped everything out!</p><p><a href="/link/b1b0a39b5f7340028e7f1fc950e2ed95.aspx"><img width="180" height="49" title="image" style="margin: 0px; display: inline; background-image: none;" alt="image" src="/link/e86e8e190db5429b857597fc6cc247a4.aspx" border="0" /></a></p><p>Another nice thing about this new Always-On feature is that you can even use it to ‘segment’ your Vulcan data, if you want to. When you call <font face="Courier New">GetClient</font> on the <font face="Courier New">IVulcanHandler</font>, it now takes an ‘alias’ parameter. It defaults to null, so your code will work as-is, but if you put something in there then it will get stored in a separate Vulcan index with its own alias. Just remember that anything you put in there is your job to update, reindex and clear down, as the default Vulcan index job won’t know about things you put in your own aliased Vulcan clients.</p><p>So there are two of the most asked-for features ready to go. As always, please do test it and give us feedback – good and bad – and why not think about contributing? After all, <a href="https://github.com/TCB-Internet-Solutions/vulcan">Vulcan is Open Source</a>!</p><p><strong>DISCLAIMER:</strong> This project is in no way connected with or endorsed by Episerver. It is being created under the auspices of a South African company and is entirely separate to what I do as an Episerver employee.</p>
<h2>Vulcan + Epi Commerce + Google Merchant = Happy</h2>
<p><em>2017-07-27</em></p>
<p>I recently had to get to grips with the <a href="https://support.google.com/merchants/answer/7052112?hl=en">Google Product Feed</a> on an <a href="http://www.episerver.com/products/platform/ecommerce-platform/">Episerver Commerce</a> site. This potentially requires some fairly heavy lifting.
This is probably why the awesome dudes over at <a href="https://getadigital.com/">Geta</a> made a nice <a href="https://getadigital.com/blog/google-product-feed-for-episerver/">product feed tool</a> that uses the Dynamic Data Store as a sort of cache. It’s probably a great tool, but I couldn’t get it working and it didn’t do quite what I wanted. I could have persisted, but I actually had a great opportunity in that the site I was working on was already using <a href="https://github.com/TCB-Internet-Solutions/vulcan">Vulcan, the lightweight ElasticSearch client for Episerver</a>. With Vulcan, we can make the most of the high-performance search to do much of the heavy lifting, like the price handling. Because this code could be useful to others, I decided to make my product feed / Vulcan implementation generic, open source and on the Episerver Nuget feed. To install it, simply find it on nuget as <a href="http://nuget.episerver.com/en/OtherPages/Package/?packageId=TcbInternetSolutions.Vulcan.Commerce.GoogleProductFeed">TcbInternetSolutions.Vulcan.Commerce.GoogleProductFeed</a>.</p> <p>Once installed, you can hit the default product feed straight away. Simply go to <font color="#0000ff">http://yoursite.com</font><font color="#ff0000">/GoogleProductFeed/Default</font>. This is the default feed, which for some territories may be enough. However, you may well want to specify additional properties to be included. For example, you may want to set a GTIN on your Variants and have that included. To configure that, create an InitializationModule and create a feed in the Initialize method. For example:</p><pre class="language-csharp"><code> var feed = ServiceLocator.Current.GetInstance<IGoogleProductFeedService>().CreateFeed<SiteVariationBase>("MyFeed");
feed.BrandSelector = p => p.Brand;
feed.DescriptionSelector = p => p.ShortDescription;
feed.GTINSelector = p => p.GTIN;
</code></pre>
<p>This will create a feed at <font color="#0000ff"><a href="http://yoursite.com/GoogleProductFeed/MyFeed">http://yoursite.com</a></font><font color="#ff0000">/GoogleProductFeed/MyFeed </font><font color="#000000">and will use the specified properties from your Variant type in the appropriate fields. If you don’t have the properties, then add them to whatever your base class for Variants is. You can also specify the Query to be passed through to Vulcan if you want by setting the Query property on the feed – this could do things like filter out certain products, or maybe you could create multiple feeds for different product categories. The URL for the feed is also very powerful, and uses the following segments:</font></p>
<p><font face="Courier New">GoogleProductFeed/[YOUR FEED NAME]/{market}/{language}/{currency}</font></p>
<p>If you skip the additional parameters, it will use the default market, language and currency. If you specify them, use short names, such as: </p>
<p><font face="Courier New">GoogleProductFeed/MyFeed/SouthAfrica/en-za/zar</font></p>
<p>With this technique, you can register multiple product feeds in your Google Merchant Centre for your different territories. Note that some territories have mandatory requirements on shipping, brand etc. Make sure that your feeds are correctly configured!</p>
<p><strong>DISCLAIMER:</strong> This project is in no way connected with or endorsed by Episerver. It is being created under the auspices of a South African company and is entirely separate to what I do as an Episerver employee.</p>Vulcan gets Commerce Manager search provider/blogs/Dan-Matthews/Dates/2017/3/vulcan-gets-commerce-manager-search-provider/2017-03-14T11:50:49.0000000Z<p>I was contacted by one of our partners creating an <a href="http://www.episerver.com/ecommerce-platform/">Episerver Commerce</a> site and using Vulcan, <a href="/link/e865faa3858d4e5099ced7f03a3ac221.aspx">the lightweight Elasticsearch client for Episerver.</a> They had come across an interesting bug where some features in the Commerce Manager UI were throwing error 500’s… specifically, any time they were looking for a product (catalog search, add line item etc.)</p> <p><a href="/link/cd237d0e92f4476b80899f5fe20700a0.aspx"><img title="image" style="border-top: 0px; border-right: 0px; background-image: none; border-bottom: 0px; padding-top: 0px; padding-left: 0px; border-left: 0px; display: inline; padding-right: 0px" border="0" alt="image" src="/link/aeca24d8ffdd49f986a06d7b705c3600.aspx" width="533" height="363" /></a></p> <p>Looking at it, the errors were being thrown by the Lucene search provider. Unsurprising, really, as the Lucene search provider was the one configured, but the Lucene index wasn’t even being built – the site was totally running on Vulcan. This causes a bit of an issue because the Commerce Manager doesn’t use the shared search provider model yet, it still runs a legacy system. The best way to fix this was to create a lightweight Commerce Manager compatible search provider for Vulcan. This did require a <strong>breaking change to the Vulcan Core</strong>. 
Unfortunately, the Vulcan Core was dependent on the CMS UI packages for the shared search provider capability, and these in turn break the Commerce Manager when they are pulled in during build (at runtime, you get nasty errors about Episerver Shell). So we had to break that dependency and created a new package that contains the core search providers called <strong>TcbInternetSolutions.Vulcan.Core.SearchProviders</strong>. This frees up the main Core package to be able to be used anywhere.</p> <p>What you will now find in the <a href="http://nuget.episerver.com/en/">Episerver Nuget feed</a> are six packages for Vulcan, as below:</p> <ul> <li><strong>TcbInternetSolutions.Vulcan.Core</strong> (the core package, can be deployed to any Episerver CMS front-end / commerce front-end / commerce manager site)</li> <li><strong>TcbInternetSolutions.Vulcan.Commerce</strong> (should only be deployed to an Episerver commerce front-end site)</li> <li><strong>TcbInternetSolutions.Vulcan.UI</strong> (should only be deployed to an Episerver CMS front-end / commerce front-end site)</li> <li><strong>TcbInternetSolutions.Vulcan.AttachmentIndexer</strong> (should only be deployed to an Episerver CMS front-end / commerce front-end site)</li> <li><strong>TcbInternetSolutions.Vulcan.Core.SearchProviders</strong> (should only be deployed to an Episerver CMS front-end / commerce front-end site)</li> <li><strong>TcbInternetSolutions.Vulcan.Commerce.SearchProviders</strong> (can be deployed to an Episerver commerce front-end site, but really intended only for a commerce manager site)</li></ul> <p>If you’re running Vulcan already, you’ll probably just want to update the core and commerce packages and add the <strong>TcbInternetSolutions.Vulcan.Core.SearchProviders</strong> nuget package to your project. Then you’re back as you were before. 
If you’re working with commerce, you can also add the <strong>TcbInternetSolutions.Vulcan.Commerce.SearchProviders</strong> nuget package to your commerce manager (note, this will now pull in the core package to your commerce manager too). If you do this, you’ll also need to change your Commerce Manager search configuration to use Vulcan and update your <strong>web.config</strong> with the same Vulcan configuration you used in the CMS project. The first part of this configuration you will find in the file <strong>Configs/Mediachase.Search.config</strong> inside your commerce manager project. Simply add the new Vulcan search engine to the configuration and make it the default. Here is an example of that file:</p><pre class="language-html"><code><?xml version="1.0" encoding="utf-8"?>
<Mediachase.Search>
  <SearchProviders defaultProvider="<strong><em>VulcanSearchProvider</em></strong>">
    <providers>
      <add name="SolrSearchProvider" type="Mediachase.Search.Providers.Solr.SolrSearchProvider, Mediachase.Search.SolrSearchProvider" queryBuilderType="Mediachase.Search.Providers.Solr.SolrSearchQueryBuilder, Mediachase.Search.SolrSearchProvider" url="http://localhost:8080/solr" shareCores="true" />
      <add name="Solr35SearchProvider" type="Mediachase.Search.Providers.Solr35.SolrSearchProvider, Mediachase.Search.Solr35SearchProvider" queryBuilderType="Mediachase.Search.Providers.Solr35.SolrSearchQueryBuilder, Mediachase.Search.Solr35SearchProvider" url="http://localhost:8080/solr" shareCores="true" facetLocalizedFieldValuesOnly="true" commitWithin="10000" maximumBatchSize="50" />
      <add name="LuceneSearchProvider" type="Mediachase.Search.Providers.Lucene.LuceneSearchProvider, Mediachase.Search.LuceneSearchProvider" queryBuilderType="Mediachase.Search.Providers.Lucene.LuceneSearchQueryBuilder, Mediachase.Search.LuceneSearchProvider" storage="[appDataPath]\Search\ECApplication\" simulateFaceting="true" />
      <add name="FindSearchProvider" type="EPiServer.Commerce.FindSearchProvider.FindSearchProvider, EPiServer.Commerce.FindSearchProvider" serviceUrl="http://localhost:9200" defaultIndex="myindex" />
      <strong><em><add name="VulcanSearchProvider" type="TcbInternetSolutions.Vulcan.Commerce.SearchProviders.VulcanSearchProvider, TcbInternetSolutions.Vulcan.Commerce.SearchProviders" /></em></strong>
    </providers>
  </SearchProviders>
  <Indexers basePath="[appDataPath]\Search\ECApplication\">
    <add name="catalog" type="Mediachase.Search.Extensions.Indexers.CatalogIndexBuilder, Mediachase.Search.Extensions" />
  </Indexers>
</Mediachase.Search></code></pre>
<p>As for the web.config, you just need to add the Vulcan settings to your appSettings. Here is an example snippet:</p>
<p>…</p><pre class="language-html"><code><appSettings>
  <add key="aspnet:UseTaskFriendlySynchronizationContext" value="true" />
  <add key="CommerceManagerLink" value="http://localhost:62549/" />
  <add key="episerver:SkipCatalogContentModelCheck" value="true" />
  <add key="ShellFirstPageUrl" value="~/Apps/Shell/Pages/ContentFrame.aspx" />
  <add key="AppsDir" value="~/Apps" />
  <add key="ValidationSettings:UnobtrusiveValidationMode" value="None" />
  <add key="owin:AutomaticAppStartup" value="false" />
  <em><strong><add key="VulcanUrl" value="http://localhost:9200" />
  <add key="VulcanIndex" value="mycommercesite" /></strong></em>
</appSettings></code></pre>
<p>…</p>
<p>Once you’re all done, you’ll see that you can now search for products/variants etc. throughout the commerce manager, and it will use your Vulcan index. The search is still somewhat limited – it’s simply to get this functionality working but we’re not intending to give commerce manager a lot of Vulcan love. If you have enhancements you’d like for it, <a href="https://github.com/TCB-Internet-Solutions/vulcan">the source is on GitHub</a>!</p>
<p><strong>DISCLAIMER:</strong> This project is in no way connected with or endorsed by Episerver. It is being created under the auspices of a South African company and is entirely separate to what I do as an Episerver employee.</p>Moving Commerce content in Vulcan/blogs/Dan-Matthews/Dates/2017/2/moving-commerce-content-in-vulcan/2017-02-06T16:02:00.0000000Z<p>I’ve just released a minor update of Vulcan that addresses two quite significant issues.</p> <ol> <li>Reindexing commerce content if a product/variant is moved between categories or, more to the point, the categories change by being modified, added to or removed.</li> <li>Indexing all the categories of a product/variant if it belongs to multiple, rather than just the first one.</li></ol> <p>Note this has also introduced a breaking change if you created custom <strong>IVulcanIndexingModifier</strong> classes.</p> <p>The first ‘bug’ occurred because although we were listening to the move of content on <strong>IContentEvents</strong> and updating the index appropriately, moving commerce content doesn’t actually fire a move event of the content itself on <strong>IContentEvents</strong>. This is correct as, technically, it’s not being moved. It’s simply changing the relations, and we weren’t picking that event up. To replicate the issue, cut-paste a variant/product to a different category and you’ll see the ‘ancestors’ in ElasticSearch don’t change for an item. I’ve updated the Vulcan Commerce package to listen for these relation changes and index appropriately.</p> <p>The second ‘bug’ occurred because of a limitation of the <strong>GetAncestors</strong> extension API call for <strong>IContentLoader</strong>. It only understands one ‘parent’ but with Commerce content you can have multiple ‘parents’ as it can belong to many categories. I’ve had to rewrite a commerce-specific version of it to work nicely with variants/products. 
This implementation can be found in the Vulcan Commerce library codebase inside <strong>VulcanCommerceIndexingModifier</strong> for the curious. To replicate the issue, add another category to a variant/product and you’ll see the ‘ancestors’ in ElasticSearch don’t change for an item. In fixing this, I’ve taken the opportunity to add the ability to customise the ‘ancestors’ that are picked up when an item is indexed. The <strong>IVulcanIndexingModifier</strong> interface now has a <strong>GetAncestors</strong> method. In your custom indexer classes you can leave it throwing a not implemented exception, that’s OK. However, the OOTB CMS and Commerce indexers will now return ancestors as appropriate which will get indexed with the item. (This is the breaking change, as your custom classes WILL need to implement the method, even if it just throws a not implemented exception.)</p> <p>Side note: in the bugged version, manually firing a re-index is a workaround for issue (1) but it still only picks up the <em>first</em> category that something belongs to as it won’t resolve issue (2). If your products/variants only ever belong to one category then manual reindexing would be a workaround until you can update to the latest Vulcan packages.</p> <p><strong>DISCLAIMER:</strong> This project is in no way connected with or endorsed by Episerver. It is being created under the auspices of a South African company and is entirely separate to what I do as an Episerver employee.</p>
<h2>Vulcan comes of age</h2>
<p><em>2017-01-16</em></p>
<p>Vulcan, <a href="/link/e865faa3858d4e5099ced7f03a3ac221.aspx">the lightweight ElasticSearch client for Episerver</a>, has been around for almost a year and is being used on various projects in all sorts of places – the US, South Africa, Scandinavia.
It’s been surprisingly stable (considering I wrote the core codebase) and we’ve been able to add some quite cool features – like a simple UI, Commerce support and a POCO indexer. The project has also been driven forward <em>massively</em> by <a href="/link/e2b26bb15e814dde81ce11632b89470a.aspx">Brad McDavid</a> from Episerver partner <a href="https://www.wsol.com/">WSOL</a>, who has been an absolute legend in adding new features as well as bug fixing, helping with deployments and generally being an all-round top geezer. Coming into 2017, we need to bring Vulcan up to speed with the latest developments, and so we’ve released a new version of Vulcan in the <a href="https://nuget.episerver.com/">Episerver Nuget feed</a> with the following features:</p> <ul> <li>Support (in fact, a requirement) for Epi 10</li> <li>Support for controlling what gets indexed</li> <li>Support for automatically re-indexing Commerce variants on price change</li> <li>Some bug fixes</li></ul> <p>In addition, we’ve moved the codebase from <a href="https://gitlab.com/DataVenia/Vulcan">GitLab</a> to <a href="https://github.com/TCB-Internet-Solutions/vulcan">GitHub</a>, simply because it seems to be more familiar and provides plenty of third party tools/integrations. If you want to use Vulcan with Episerver 9 or earlier, we suggest forking from the old GitLab repo and fixing up / working with that as you need to – we aren’t intending to actively do anything to the older codebase from this point on as Epi 10 did introduce breaking changes that we needed to pick up too.
By default, everything is indexed, but if you specify a content type to the handler, you can control whether content of that type should be indexed. It does support inheritance, so at the simplest level you could specify IContent and restrict that somehow! In reality you’re more likely to want to handle specific types. The instruction itself is a simple lambda expression - here is an example of restricting certain products in a commerce site from being indexed if they have specific parents:</p><pre class="language-csharp"><code>VulcanHandler.Service.AddConditionalContentIndexInstruction<GeneralVariation>(v => !excludeReferences.Contains(v.ParentLink.ToReferenceWithoutVersion()));</code></pre>
<p>Normally you’d put this in an Initialization Module so that it runs when the website starts up. In this case, I’m using property injection for the Vulcan Handler, but you can resolve it another way if you consider property injection an anti-pattern.</p>
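<p>For illustration, a minimal initialization module along these lines might look like the following. This is a sketch rather than the definitive wiring: the <strong>excludeReferences</strong> list is hypothetical, and I’m assuming the handler is exposed via an <strong>IVulcanHandler</strong> interface that can be property-injected.</p>
<pre class="language-csharp"><code>[InitializableModule]
[ModuleDependency(typeof(EPiServer.Web.InitializationModule))]
public class VulcanIndexRulesInitialization : IInitializableModule
{
    // Property injection for the Vulcan handler
    public Injected<IVulcanHandler> VulcanHandler { get; set; }

    public void Initialize(InitializationEngine context)
    {
        // Hypothetical set of category links whose children should never be indexed
        var excludeReferences = new List<ContentReference>();

        VulcanHandler.Service.AddConditionalContentIndexInstruction<GeneralVariation>(
            v => !excludeReferences.Contains(v.ParentLink.ToReferenceWithoutVersion()));
    }

    public void Uninitialize(InitializationEngine context) { }
}</code></pre>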
<p>Where to from here? We want to move to the latest version of the Elasticsearch .Net library and API (NEST). This will probably be the next major Vulcan release. Other than that, we’re always on the lookout for things that would make Vulcan even better… features or enhancements. And while we’re on the subject, we’d also welcome people contributing features, bug fixes, clean-ups and enhancements to this module as well. It’s become very apparent that there’s a gap between DIY Lucene and full <a href="http://www.episerver.com/services/cloud-service/episerver-find/">Episerver Find</a> where Vulcan is ideal. If you want to use it, consider contributing to make it even more awesome for everyone!</p>
<p><strong>DISCLAIMER:</strong> This project is in no way connected with or endorsed by Episerver. It is being created under the auspices of a South African company and is entirely separate to what I do as an Episerver employee.</p>Vulcanised enhancements/blogs/Dan-Matthews/Dates/2016/5/vulcanised-enhancements/2016-05-31T16:03:05.0000000Z<p>These weekly updates are becoming a habit! In this edition, I’m going to share with you a couple of significant enhancements I’ve made to <a href="/link/e865faa3858d4e5099ced7f03a3ac221.aspx">Vulcan</a>, the lightweight Elasticsearch client for Episerver. The most obvious and most generally useful enhancement is the addition of an optional parameter to the <strong>SearchContent</strong> method that takes a <strong>ContentReference</strong> to search beneath. This was possible before but only by doing funky stuff like getting a list of the ancestors of a page and sending them all over to Elasticsearch as a search filter (ick!). Now it’s all neatly done by Vulcan for you.
For example, if I wanted to search below the current content item my search might look like this (commerce example below but works just the same for CMS… note that I’m sending in <strong>null</strong> as the search query here because I don’t actually want to do a search, just pull back the contents):</p> <div id="codeSnippetWrapper"><pre class="language-csharp"><code>model.Products = VulcanHandler.Service.GetClient().SearchContent<ProductBase>(null, false, currentContent.ContentLink).GetContents<ProductBase>();</code></pre>In order to support this, Vulcan adds an additional field to the content as it indexes it (more about that field later) so that this filter can run much faster and more efficiently. You don’t really need to think about the property much… although if you look in the index, you’ll see it there something like this (again, example is for a bit of commerce content):</div>
<div> </div>
<div><a href="/link/6dd209205df040848dfffa51147d7e1e.aspx"><img title="image" style="border-top: 0px; border-right: 0px; background-image: none; border-bottom: 0px; padding-top: 0px; padding-left: 0px; margin: 0px; border-left: 0px; display: inline; padding-right: 0px" border="0" alt="image" src="/link/290a876433f34fc985e58cca0a9d5c58.aspx" width="258" height="118" /></a></div>
<div id="codeSnippetWrapper"> </div>
<div>Rather than hardcode this extra bit of indexing logic, I made this mechanism generic so that it opens up some cool options for you. One thing you may well want to do is customise the object being indexed. Normally this is a bit of a black box, but I’ve added a hook into the JSON serialisation process so that you can add in custom fields as needed. Simply create a class that implements the <strong>IVulcanIndexingModifier</strong> interface. It just has one method, <strong>ProcessContent</strong>, that receives whatever bit of content is being indexed and the outgoing stream of JSON. Simply put whatever logic you need into there and, if needed, spit out JSON properties into the stream. This method will be automatically found and used when indexing content. For an example of how this works, see the two built-in indexing modifiers for <a href="https://gitlab.com/DataVenia/Vulcan/blob/master/TcbInternetSolutions.Vulcan.Core/Implementation/VulcanCmsIndexingModifier.cs">CMS</a> and <a href="https://gitlab.com/DataVenia/Vulcan/blob/master/TcbInternetSolutions.Vulcan.Commerce/VulcanCommerceIndexingModifier.cs">Commerce</a> that add ancestor and pricing properties. The joys of open source! You can check that your indexing modifier has been located and is being used by checking the Vulcan UI (here you’ll see the two built-in indexing modifiers have been detected and are available):</div>
<div> </div>
<div><a href="/link/d85357dc5a6f4a6c8d93db64d3e1d29e.aspx"><img title="image" style="border-top: 0px; border-right: 0px; background-image: none; border-bottom: 0px; padding-top: 0px; padding-left: 0px; border-left: 0px; display: inline; padding-right: 0px" border="0" alt="image" src="/link/eacf2be273194f5db221e0cf16cf2f3d.aspx" width="524" height="225" /></a></div>
<div> </div>
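<div>To make the modifier idea concrete, here’s a skeleton of a custom indexing modifier. Treat it as a sketch under assumptions: the exact <strong>ProcessContent</strong> signature and how the JSON fragment is delimited within the serialised object are best checked against the built-in CMS and Commerce modifiers linked above, and the field name here is made up.</div>
<div> </div>
<div id="codeSnippetWrapper"><pre class="language-csharp"><code>public class MyIndexingModifier : IVulcanIndexingModifier
{
    public void ProcessContent(IContent content, Stream writableStream)
    {
        // Append a hypothetical extra field to the JSON document being indexed.
        // See the built-in VulcanCmsIndexingModifier for the exact pattern of
        // writing property fragments into the outgoing stream.
        var writer = new StreamWriter(writableStream);
        writer.Write(",\"myCustomField\":" + JsonConvert.SerializeObject("some value"));
        writer.Flush();
    }
}</code></pre></div>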
<div>Using this new technique of modifying content as it’s indexed, I’ve also added some additional properties for pricing to commerce content. For variants, I’ve added the <a href="/link/77224e3bcd7b4425af388ea9e3a76c05.aspx">default price</a> for the various markets and currencies. For products, the variants could be priced differently, so there you’ll find two properties, price low and price high, showing the bracket of prices of the variants of that product. Typically, you would aggregate these for facets and this gives you the ability to do that on products – low price or high price is up to you! Again, you shouldn’t need to worry too much about the exact implementation so I’ve added a few helper methods in a <strong>VulcanFieldHelper</strong> class (part of the Vulcan commerce package) which will give you the field name you need. For example, if I wanted to get all the variants below my current node and create a price facet, I could use the following:</div>
<div> </div>
<div id="codeSnippetWrapper"><pre class="language-csharp"><code>model.SearchResponse = VulcanHandler.Service.GetClient().SearchContent<EPiServer.Reference.Commerce.Site.Features.Product.Models.FashionVariant>(
    q => q.Aggregations(a => a
        .Filter("this_node", cm => cm
            .Filter(f => f
                .Bool(b => b
                    .Must(m => m
                        .Term(TcbInternetSolutions.Vulcan.Core.VulcanFieldConstants.Ancestors, currentContent.ContentLink.ToReferenceWithoutVersion().ToString()))))
            .Aggregations(agg => agg
                .Terms("prices", t => t
                    .Field(VulcanFieldHelper.GetPriceField()))))));</code></pre></div>
<p>Note that in this case I am having to manually specify the query to narrow down the aggregate results to this node. The reason is that you might not want the aggregation to do this, so it’s better that you can choose yourself whether or not you want it to. In effect, it’s doing pretty much the same kind of filter as the main search does to narrow down search results to a node. You will see that the prices aggregation is being done on a field retrieved from the <strong>VulcanFieldHelper</strong>. In this case, it’s going to use the current market and the current market’s default currency. You can override that though by passing in parameters to get the price field for another market or currency. If you really do want to see what this looks like in the index, here’s a sample of a product showing the price brackets for the variants (in this case, it seems like the variants are all the same price, so the low and high values are all the same):</p>
<p><a href="/link/c55fac72239c4af8a547f3de89d3e7af.aspx"><img title="image" style="border-top: 0px; border-right: 0px; background-image: none; border-bottom: 0px; padding-top: 0px; padding-left: 0px; border-left: 0px; display: inline; padding-right: 0px" border="0" alt="image" src="/link/99f5262649f843b8a13e8487c0c162a3.aspx" width="176" height="490" /></a></p>
<p>One fairly major update in this version is the ability to play nicely with other shell modules such as Episerver Google Analytics and Episerver Forms. Previously, a bug in the code meant that it didn’t… somewhat hampering its usefulness! So how do you get all these Vulcan goodies? Simply update to the latest package in the <a href="http://nuget.episerver.com/en/">Episerver Nuget</a> feed and you should be good to go! As usual, all feedback and comments are appreciated and if you’d like to contribute, simply request developer access to <a href="https://gitlab.com/DataVenia/Vulcan">the Vulcan project on GitLab</a>.</p>
<p><strong>DISCLAIMER:</strong> This project is in no way connected with or endorsed by Episerver. It is being created under the auspices of a South African company and is entirely separate to what I do as an Episerver employee.</p>Vulcan goes Nuggety and gets a UI/blogs/Dan-Matthews/Dates/2016/5/vulcan-goes-nuggety-and-gets-a-ui/2016-05-24T14:40:09.0000000Z<p>Today marks a significant milestone in the journey of <a href="/link/e865faa3858d4e5099ced7f03a3ac221.aspx">Vulcan</a>, the lightweight Elasticsearch client for Episerver. It’s going onto the <a href="https://nuget.episerver.com/">Episerver Nuget Feed</a> which will make it more readily available to more people, and it now has a UI. It’s still very lightweight, but it now supports index-time <a href="https://www.elastic.co/guide/en/elasticsearch/guide/current/using-synonyms.html">synonyms</a>. Unlike <a href="http://www.episerver.com/products/add-on-store/episerver-find/">Episerver Find</a>, which supports more dynamic synonyms, Vulcan processes synonyms on objects at index time and, as Vulcan manages your objects for you, that means there has to be a way to register synonyms with it. For now, that’s all the UI does although maybe it will be extended later. Note that once you’ve added/removed synonyms, you’ll need to re-index your content with the scheduled job. And yes… I’m useless at user interfaces, and so it’s basic. Very basic. But it does the job! Enter the synonyms comma-delimited. I also added a small count of types just so that you can see what each of your language clients contains.</p> <p><a href="/link/d10ad703a31f4ca5bdb248574d6121fe.aspx"><img title="image" style="border-top: 0px; border-right: 0px; background-image: none; border-bottom: 0px; padding-top: 0px; padding-left: 0px; border-left: 0px; display: inline; padding-right: 0px" border="0" alt="image" src="/link/ccdacfa164364942a8680e4610a62110.aspx" width="639" height="425" /></a></p> <p>So how do you install Vulcan from the Nuget feed?
In Visual Studio, browse the Episerver Nuget feed (or search it for ‘Vulcan’) and you’ll see three Nuget packages: <strong>TcbInternetSolutions.Vulcan.Core</strong>, <strong>TcbInternetSolutions.Vulcan.UI</strong> and <strong>TcbInternetSolutions.Vulcan.Commerce</strong>.</p> <p><a href="/link/34883414384a4a7a916f9bdbc177e5a0.aspx"><img title="image" style="border-top: 0px; border-right: 0px; background-image: none; border-bottom: 0px; padding-top: 0px; padding-left: 0px; border-left: 0px; display: inline; padding-right: 0px" border="0" alt="image" src="/link/13b3214616d34eaabc194f6498a5c2c5.aspx" width="536" height="286" /></a></p> <p>The Core package contains the core of Vulcan and the content indexer. The UI package contains a shell module that adds the UI for managing synonyms. The Commerce package is only required for an Episerver Commerce site – it simply ensures that Vulcan will index the product catalogue as well as the CMS content. The dependencies are set between them so if you install the UI or Commerce package it will pull the Core one down automatically too. Once you’ve installed Vulcan, look in the <strong>web.config</strong> for the text ‘<strong>SET THIS</strong>’ and set your Elasticsearch Url and Index name. Optionally, depending on where you are connecting to, you may need to add the <strong>VulcanUsername</strong> and <strong>VulcanPassword</strong> app settings keys as well (they aren’t automatically added because you might well be using Elasticsearch locally, for example). Then run your <strong>Vulcan Index Content</strong> scheduled job (if you go to the UI before you do this, it won’t find any clients.) Now you should be good to search and use the UI. Enjoy!</p> <p><strong>DISCLAIMER:</strong> This project is in no way connected with or endorsed by Episerver.
It is being created under the auspices of a South African company and is entirely separate to what I do as an Episerver employee.</p>Vulcan fires up its language and commerce engines/blogs/Dan-Matthews/Dates/2016/5/vulcan-fires-up-its-language-and-commerce-engines/2016-05-05T12:25:49.0000000Z<p>It’s been over a week since I first launched the alpha of <a href="/link/e865faa3858d4e5099ced7f03a3ac221.aspx">Vulcan, the lightweight Elasticsearch client for Episerver</a>. Since then I’ve been working hard to test it, stretch it and improve it. Some of it has simply been bug fixes and simplifications but most of the effort has gone into handling analysis of textual content. The issue is that we want to analyse ‘free text’ content as language, but things like product codes shouldn’t be analyzed at all. Fortunately, Elasticsearch has a great feature called <a href="https://www.elastic.co/guide/en/elasticsearch/reference/current/_multi_fields.html">Multi-Fields</a>. This enables us to deal with a field as non-analyzed, but also analyze and store a copy of it so that we can do free-text queries against it. So what has changed generally in Vulcan, and how do you use the new language handling?</p> <p>Before we start, just one important note. <strong>I recommend you use an Elasticsearch 2.x cluster</strong>. I found that the 1.x clusters I was testing on didn’t work so nicely with the <a href="https://www.elastic.co/guide/en/elasticsearch/client/net-api/current/index.html">latest version of NEST</a>, which kind of expects a 2.x cluster. I did get the Vulcan core running fine against a 1.x index, but I can’t guarantee that your queries will work as expected. You may get some random 400 bad requests as the NEST client creates 2.x compatible queries and tries to pass them to the 1.x index. For that reason, if you are testing then I suggest you use a 2.x cluster.
I found a free one you can use in the cloud from <a href="https://bonsai.io/">Bonsai</a>, or you can of course <a href="https://www.elastic.co/guide/en/elasticsearch/reference/current/setup-service-win.html">host your own</a>.</p> <p>Other than that, the most significant change is something that you won’t see at first glance. I’ve split the index across multiple language-based indexes. This is <a href="https://www.elastic.co/guide/en/elasticsearch/guide/current/one-lang-docs.html">Elasticsearch recommended best practice</a>, so it seemed the right thing to do. When you now run the <strong>Vulcan Index Content</strong> scheduled job, you’ll see an index created per language, with each index name starting with the name you set in the <strong>web.config</strong>. So, for example, if you set a Vulcan index name of ‘vulcan’, then you might see indexes called ‘vulcan_en’, ‘vulcan_de’, ‘vulcan_invariant’ and so forth. That last one – the invariant index – is particularly interesting as it’s where all the content is stored that is not localizable. You can get a handle to it by getting a Vulcan client for <strong>CultureInfo.InvariantCulture</strong>:</p> <div><pre class="language-csharp"><code>var client = VulcanHandler.Service.GetClient(CultureInfo.InvariantCulture);</code></pre></div>
<div> </div>
<div>Note that passing in <strong>null</strong> is NOT the same as passing in the invariant culture. Passing in <strong>null</strong> is just a shortcut to whatever your current UI culture is. Note also that I’ve changed the <strong>Client</strong> property of the <strong>VulcanHandler</strong> to a <strong>GetClient()</strong> method so that you can specify what culture you want to handle with your calls. Most of the other calls you make with Vulcan (indexing, deleting etc.) are now also overloaded to take a <strong>CultureInfo</strong> parameter (or null for the current UI culture).</div>
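<div>To recap those overloads in code (a short sketch; the index names assume a <strong>web.config</strong> index name of ‘vulcan’ as in the example above):</div>
<div> </div>
<div id="codeSnippetWrapper"><pre class="language-csharp"><code>// Current UI culture – passing null (or nothing) is a shortcut for this
var currentClient = VulcanHandler.Service.GetClient();

// A specific language index, e.g. ‘vulcan_en’
var englishClient = VulcanHandler.Service.GetClient(CultureInfo.GetCultureInfo("en"));

// The invariant index (‘vulcan_invariant’), holding non-localizable content
var invariantClient = VulcanHandler.Service.GetClient(CultureInfo.InvariantCulture);</code></pre></div>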
<div> </div>
<div>So once we have our client, how do we run a query? For non-free-text queries (such as <a href="https://www.elastic.co/guide/en/elasticsearch/client/net-api/current/term-query-usage.html">term queries</a> or any queries on non-string fields) you just query like you always would. If you want to do a free-text based query, you need to specify that you want the query to run against the <strong>analyzed</strong> version of the fields. In practice, that means adding one little call to our fluent query DSL. For example, the following query is from my version of the Alloy search page and looks for the query as free-text, along with some hit highlighting and aggregation:</div>
<div> </div>
<div id="codeSnippetWrapper"><pre class="language-csharp"><code>model.ContentHits = VulcanHandler.Service.GetClient().SearchContent<IContent>(d => d
    .Query(query => query.SimpleQueryString(sq => sq.Fields(fields => fields.Field("*.analyzed")).Query(q)))
    .Highlight(h => h.Encoder("html").Fields(f => f.Field("*")))
    .Aggregations(agg => agg.Terms("types", t => t.Field("_type"))));</code></pre></div>
<div>You’ll notice the <strong>*.analyzed</strong> instruction on the query that tells Elasticsearch to look at the analyzed version of the fields. You can specify exact fields if you prefer (such as <strong>mainBody.analyzed</strong>) but in most circumstances you’ll run a free-text query against all fields. So when would you use a non-free-text query on a string field? Usually that would be when you are doing filters and aggregations. For example, in a commerce environment you may well want to filter based on market. Let’s say that we want to aggregate the prices and then show them on the front end as a facet. We would want to filter the prices to the current market first.</div>
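<div>For the curious, the Multi-Fields setup behind those <strong>*.analyzed</strong> sub-fields looks roughly like this simplified Elasticsearch 2.x mapping fragment. This is illustrative only – Vulcan generates the real mapping for you, the field name here is just an example, and the analyzer will vary per language index:</div>
<div> </div>
<div id="codeSnippetWrapper"><pre class="language-json"><code>{
  "mainBody": {
    "type": "string",
    "index": "not_analyzed",
    "fields": {
      "analyzed": {
        "type": "string",
        "analyzer": "english"
      }
    }
  }
}</code></pre></div>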
<div> </div>
<div>Let’s look at this in two parts. Firstly, let’s get the price indexed. By default, there’s no property on a variation for that, so we’ll add one. In theory you could use any kind of object to hold that price, but for clarity I’m going to use a little construct:</div>
<div> </div>
<div id="codeSnippetWrapper"><pre class="language-csharp"><code>public class PriceConstruct
{
    public string MarketId { get; set; }

    public Money Price { get; set; }
}</code></pre></div>
<div>Now we can get a list of these by adding a property called <strong>Price</strong> to the variant type:</div>
<div> </div>
<div id="codeSnippetWrapper">
<div id="codeSnippetWrapper"><pre class="language-csharp"><code>public IEnumerable<PriceConstruct> Price
{
    get
    {
        var prices = new List<PriceConstruct>();

        foreach (var market in ServiceLocator.Current.GetInstance<IMarketService>().GetAllMarkets())
        {
            var variantPrices = this.GetPrices(market.MarketId, Mediachase.Commerce.Pricing.CustomerPricing.AllCustomers);

            if (variantPrices != null)
            {
                foreach (var price in variantPrices)
                {
                    if (price.MinQuantity == 0 && price.CustomerPricing == Mediachase.Commerce.Pricing.CustomerPricing.AllCustomers) // this is a default price
                    {
                        prices.Add(new PriceConstruct() { MarketId = market.MarketId.Value, Price = price.UnitPrice });

                        break;
                    }
                }
            }
        }

        return prices;
    }
}</code></pre>All this does is loop through the prices and try to get the default price for each market. You could of course make this more robust, like checking currency, but this is just a simple example.
Now that we have this property, when we run our index job it will get persisted into Elasticsearch. We can now query it with Vulcan something like this (this is from my Quicksilver demo that I’ve updated to use Vulcan):</div>
<div> </div>
<div id="codeSnippetWrapper"><pre class="language-csharp"><code>model.SearchResponse = VulcanHandler.Service.GetClient().SearchContent<EPiServer.Reference.Commerce.Site.Features.Product.Models.FashionVariant>(
    q => q.Aggregations(a => a
        .Filter("current_market", cm => cm
            .Filter(f => f
                .Term(p => p
                    .Price.First().MarketId, CurrentMarket.Service.GetCurrentMarket().MarketId.Value))
            .Aggregations(agg => agg
                .Terms("prices", t => t
                    .Field(fld => fld.Price.First().Price.Amount))))));</code></pre></div>In this particular case, we are using a filter aggregation to narrow down to the current market, and then using a child aggregation to get the prices. In reality, you probably wouldn’t use a Terms aggregation for prices. You’d probably use a <a href="https://www.elastic.co/guide/en/elasticsearch/client/net-api/current/range-aggregation-usage.html">Range aggregation</a>.</div>
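<div>As a sketch of that Range aggregation alternative (the bucket boundaries below are arbitrary, and the market filter mirrors the query above), it might look something like this:</div>
<div> </div>
<div id="codeSnippetWrapper"><pre class="language-csharp"><code>model.SearchResponse = VulcanHandler.Service.GetClient().SearchContent<EPiServer.Reference.Commerce.Site.Features.Product.Models.FashionVariant>(
    q => q.Aggregations(a => a
        .Filter("current_market", cm => cm
            .Filter(f => f
                .Term(p => p.Price.First().MarketId, CurrentMarket.Service.GetCurrentMarket().MarketId.Value))
            .Aggregations(agg => agg
                // Bucket prices into three illustrative ranges: <50, 50-100, >100
                .Range("price_ranges", r => r
                    .Field(fld => fld.Price.First().Price.Amount)
                    .Ranges(
                        rng => rng.To(50),
                        rng => rng.From(50).To(100),
                        rng => rng.From(100)))))));</code></pre></div>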
<div> </div>
<div>Lastly, just some housekeeping. Some hosted clusters require a username and password to access them, so I’ve added support for this to the web.config. For example, here is my configuration talking to Bonsai:</div>
<div> </div>
<div id="codeSnippetWrapper"><pre class="language-xml"><code>&lt;add key="VulcanUrl" value="https://vulcancluster-452277433331.eu-west-1.bonsai.io/" /&gt;
&lt;add key="VulcanUsername" value="jkda99asdk" /&gt;
&lt;add key="VulcanPassword" value="r9088fsaff" /&gt;
&lt;add key="VulcanIndex" value="vulcan_quicksilverdemo" /&gt;</code></pre></div>
<div>I’m very open to ideas and suggestions on how to drive Vulcan forward, particularly on Episerver Commerce projects. I’m thinking of trying to somehow generalise the price management, maybe do market handling in a nice way too. If you have any feedback, do let me know on here or on my email at <a href="mailto:firstname.lastname@episerver.com">firstname.lastname@episerver.com</a>.</div>
<p><strong>DISCLAIMER:</strong> This project is in no way connected with or endorsed by Episerver. It is being created under the auspices of a South African company and is entirely separate to what I do as an Episerver employee.</p>Introducing Vulcan, the lightweight Elasticsearch client for Episerver/blogs/Dan-Matthews/Dates/2016/4/introducing-vulcan-the-lightweight-elasticsearch-client-for-episerver/2016-04-26T13:54:15.0000000Z<p>I like <a href="http://find.episerver.com/">Episerver Find</a>. No, more than that… I LOVE it. Fast, configurable, powerful, reliable, scalable… it’s everything you need in search and SO much more. I try and throw all the site data I can into it, and I’ve never been sorry yet. If you are looking at an Episerver project, I’d suggest you just factor in Find right up front and make sure you budget for it (and, if you go <a href="http://www.episerver.com/cloud-platform/">Episerver Cloud</a>, there’s a good chance you will get it bundled anyway).</p> <p>Having said all that, Episerver are fully aware that for much smaller sites that are licensed on-premise and trying to manage costs tightly, Find may be an expense too far. Even more than that, here in South Africa not only are costs very constrained (go see the value of the Rand for a clue) but Find isn’t available on a convenient node for Sub-Saharan Africa, and so latency is a little higher. For that reason, Episerver still provides <a href="/link/77de2b135d6740fdb7124b4821cbce40.aspx">Episerver Search</a> as an option, and there are various other providers you can use, some of whom you’ll find in the <a href="http://www.episerver.com/add-on-store/">Add-On Store</a>. However, Episerver Search is somewhat limited (an outdated port of Lucene.Net, limited capabilities when it comes to features such as faceting and scalability, and a UI that is… well… non-existent) and the 3rd party options are, again, paid-for.
So what does that leave us with?</p> <p>This is the (admittedly small) niche where Vulcan could be useful. Firstly, what is it not: it’s NOT Episerver Find and it’s NOT supported – not by Episerver, by me or anyone else. If you want a proper, enterprise-level, supported product with fabulous UI and integration, go dig a little in your pockets and get Find. So what IS Vulcan? It is a small, lightweight wrapper around <a href="https://www.elastic.co/guide/en/elasticsearch/client/net-api/current/index.html">Elasticsearch’s NEST client</a> that provides helpers and tools to index and search for CMS and Commerce content. As of now, there is no UI (other than an indexing scheduled job) and configuration is fairly limited. That said, it’s simple and as it’s Open Source, you can do what you like with it when it comes to extending and customising it. You can even host your own Elasticsearch instance, so it could be very cost effective! I just have one request – if you add a cool feature or fix a bug, be prepared to commit it back into the repo so that we can all benefit. Sound fair?</p> <p>I’m also fairly inexperienced with Elasticsearch and NEST, and the documentation isn’t fabulous, which doesn’t help, so if you think I’m barking up the wrong tree then I’m all ears. Maybe I’ve spectacularly missed the point or done things in a much harder way than necessary. So, how do you get Vulcan and use it? I’m hoping at some point to make it available as a Nuget package, but for now you can pull the latest code from here:</p> <p><a href="https://gitlab.com/DataVenia/Vulcan.git">Vulcan repo on GitLab</a></p> <p>Note that it’s a PRIVATE repo and you’ll need to log in and then request access. I’ve done this simply because it’s very much in alpha and I don’t want it grabbed by just anyone. I’d rather know where it’s going.
Once you’ve downloaded the repo you’ll see two projects:</p> <p><strong>TcbInternetSolutions.Vulcan.Core</strong> – this is the main project including an implementation</p> <p><strong>TcbInternetSolutions.Vulcan.Commerce</strong> – this adds Commerce support</p> <p>Compile them, then drop the appropriate assemblies into your CMS/Commerce project’s bin folder (or just add the projects to your solution and reference them if you prefer). For CMS, you just need the Core one. For Commerce, you’ll need Core and Commerce. Note that I’ve built against the very latest version of Episerver CMS and Commerce. No reason for that really, and I’ll probably refactor to an older set of packages at some point, but it was just easiest at the time. Once that’s done there is just one last thing to do – get an Elasticsearch instance somewhere. I’ve found that <a href="https://facetflow.com/">FacetFlow</a> is awesome… they give you a fairly sizable index for free. Once you have registered and gotten yourself a URL, you need to plug that into the appSettings of your web.config along with whatever your index name should be (you can pick your own). 
For example:</p> <p> </p> <div><pre id="codeSnippet" style="border-top-style: none; overflow: visible; font-size: 8pt; font-family: 'Courier New', courier, monospace; width: 100%; border-bottom-style: none; color: black; padding-bottom: 0px; direction: ltr; text-align: left; padding-top: 0px; border-right-style: none; padding-left: 0px; margin: 0em; border-left-style: none; line-height: 12pt; padding-right: 0px; background-color: #f4f4f4">&lt;add key="VulcanUrl" value="https://yourkey:@somesite.azr.facetflow.io" /&gt;
&lt;add key="VulcanIndex" value="vulcan" /&gt;</pre></div>
<div> </div>
<div>You’re now ready to start playing. If you go into the admin mode you’ll see a <strong>Vulcan Index Content</strong> job. This will try to index all the content on your site. Yeah, everything. Later I might add restrictions, but right now it will do the whole darn lot. If you have the Commerce assembly added too that will also cause the Commerce content to be indexed. Check your logs for warnings and errors if it doesn’t run nicely. There is a ‘listener’ in the project that will track content being published/deleted/moved and so it should be kept in sync fairly well, but it’s probably not a bad idea to run the indexing job regularly. Once the job completes, you should be able to browse/search your index manually if you know how to use the Elasticsearch syntax in your browser address bar (e.g. mysite/myindex/_search). But we want to do code, so let’s go there.</div>
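<div>For reference, browsing the index manually is just a normal Elasticsearch request. A minimal sketch of a request body you could POST to <strong>yourindex/_search</strong> (the host and index name come from your own appSettings; nothing here is Vulcan-specific, and the values are placeholders):</div>

```json
{
  "query": { "match_all": {} },
  "size": 5
}
```

<div>For quick checks you can also use the URI form in the browser, e.g. <strong>yourindex/_search?q=alloy&amp;size=5</strong>.</div>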
<p>All of the Vulcan features are exposed via the <strong>IVulcanHandler</strong> interface, which is registered with the IoC container. You can therefore inject it, constructor it or service locate it… whatever flavour floats your boat. Once you have it, you can then use it like any other NEST client, but I have added some helper methods:</p>
<p> </p>
<div><pre id="codeSnippet" style="border-top-style: none; overflow: visible; font-size: 8pt; font-family: 'Courier New', courier, monospace; width: 100%; border-bottom-style: none; color: black; padding-bottom: 0px; direction: ltr; text-align: left; padding-top: 0px; border-right-style: none; padding-left: 0px; margin: 0em; border-left-style: none; line-height: 12pt; padding-right: 0px; background-color: #f4f4f4">ISearchResponse<IContent> SearchContent<T>(Func<SearchDescriptor<T>, SearchDescriptor<T>> searchDescriptor = <span style="color: #0000ff">null</span>) <span style="color: #0000ff">where</span> T : <span style="color: #0000ff">class</span>, IContent;<br /><br /><span style="color: #0000ff">void</span> IndexContent(IContent content);<br /><br /><span style="color: #0000ff">void</span> DeleteContent(IContent content);<br /></pre></div>
<div> </div>
<div>You can call IndexContent and DeleteContent yourself, but mostly they are there for the internal implementation. SearchContent is the one that gets tasty. At the simplest level, you can just call it with some kind of content type, e.g. (using property injection syntax):</div>
<div> </div>
<div>
<div><pre id="codeSnippet" style="border-top-style: none; overflow: visible; font-size: 8pt; font-family: 'Courier New', courier, monospace; width: 100%; border-bottom-style: none; color: black; padding-bottom: 0px; direction: ltr; text-align: left; padding-top: 0px; border-right-style: none; padding-left: 0px; margin: 0em; border-left-style: none; line-height: 12pt; padding-right: 0px; background-color: #f4f4f4">var searchResponse = VulcanHandler.Service.Client.SearchContent<StandardPage>();<br /></pre></div>
<div> </div>
<div>This will get all of the StandardPage content in the index. Vulcan does support inheritance, and so anything that inherits from StandardPage will also be returned. Normally you would then look at the <strong>Documents</strong> property of the response to get the content. You can still do that, but you’ll find that inside there you will have a set of Vulcan content constructs. This is done for performance… they inherit from <strong>IContent</strong> and you can use them directly if you like for the common properties, but probably you want the actual content hits. There are a couple of extension methods that I’ve created that can help you here. Simply add a ‘using’ to <strong>TcbInternetSolutions.Vulcan.Core.Extensions</strong> and then you can directly get the content by using the <strong>GetContents()</strong> extension method on the search response. This will give an <strong>IEnumerable</strong> of <strong>IContent</strong>.</div>
<div> </div>
<div><pre id="codeSnippet" style="border-top-style: none; overflow: visible; font-size: 8pt; font-family: 'Courier New', courier, monospace; width: 100%; border-bottom-style: none; color: black; padding-bottom: 0px; direction: ltr; text-align: left; padding-top: 0px; border-right-style: none; padding-left: 0px; margin: 0em; border-left-style: none; line-height: 12pt; padding-right: 0px; background-color: #f4f4f4">var contents = VulcanHandler.Service.Client.SearchContent<StandardPage>().GetContents();<br /></pre></div>
<div> </div>
<div>You may also want to get the content along with the ‘hits’ that correspond to them. Typically, this would be for something like hit highlighting. Here is a Razor snippet that shows the extension method <strong>GetHitContents()</strong> which returns an <strong>IDictionary</strong> of the <strong>IHit</strong> and the <strong>IContent</strong> (the model is passing the search response in the <strong>ContentHits</strong> property):</div>
<div> </div>
<div id="codeSnippetWrapper"><pre id="codeSnippet" style="border-top-style: none; overflow: visible; font-size: 8pt; font-family: 'Courier New', courier, monospace; width: 100%; border-bottom-style: none; color: black; padding-bottom: 0px; direction: ltr; text-align: left; padding-top: 0px; border-right-style: none; padding-left: 0px; margin: 0em; border-left-style: none; line-height: 12pt; padding-right: 0px; background-color: #f4f4f4">@<span style="color: #0000ff">foreach</span> (var hit <span style="color: #0000ff">in</span> Model.ContentHits.GetHitContents())<br />{<br /> <div <span style="color: #0000ff">class</span>=<span style="color: #006080">"listResult"</span>><br /> <h3><a href=<span style="color: #006080">"@Url.ContentUrl(hit.Value.ContentLink)"</span>>@hit.Value.Name</a></h3><br /> @<span style="color: #0000ff">if</span> (hit.Key.Highlights != <span style="color: #0000ff">null</span>)<br /> {<br /> <span style="color: #0000ff">foreach</span> (var highlight <span style="color: #0000ff">in</span> hit.Key.Highlights)<br /> {<br /> <p>@Html.Raw(<span style="color: #0000ff">string</span>.Join(<span style="color: #006080">","</span>, highlight.Value.Highlights))</p><br /> }<br /> }<br /> <hr /><br /> </div><br />}</pre><br /></div>
<div>Now you’ve seen the basics, you can make it useful by using the standard NEST Query DSL and Aggregations to do funky searches, facets etc. For example:</div>
<div> </div>
<div id="codeSnippetWrapper"><pre id="codeSnippet" style="border-top-style: none; overflow: visible; font-size: 8pt; font-family: 'Courier New', courier, monospace; width: 100%; border-bottom-style: none; color: black; padding-bottom: 0px; direction: ltr; text-align: left; padding-top: 0px; border-right-style: none; padding-left: 0px; margin: 0em; border-left-style: none; line-height: 12pt; padding-right: 0px; background-color: #f4f4f4">model.ContentHits = VulcanHandler.Service.Client.SearchContent<IContent>(d => d.Query(query => query.SimpleQueryString(sq => sq.Query(q)))<br /> .Highlight(h => h.Encoder(<span style="color: #006080">"html"</span>).Fields(f => f.Field(<span style="color: #006080">"*"</span>)))<br /> .Aggregations(agg => agg.Terms(<span style="color: #006080">"types"</span>, t => t.Field(<span style="color: #006080">"_type"</span>))));</pre><br /></div>
<div>This snippet passes a simple text string <strong>q</strong> into the query, then hit-highlights on all fields (HTML-encoded so that it’s neat) and facets the results on the type of content. Put it all together and you get something like this (I threw together a little test in the Alloy templates):</div>
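<div>To actually render those facets you read the terms buckets back off the search response. The following is a sketch only – it assumes the NEST 1.x aggregation API that Vulcan wraps, and the variable names are mine, not Vulcan’s:</div>

```csharp
// Sketch (assumes NEST 1.x): read the "types" terms aggregation
// defined in the search above back off the response
var facet = model.ContentHits.Aggs.Terms("types");

if (facet != null)
{
    foreach (var bucket in facet.Items)
    {
        // bucket.Key is the indexed _type name,
        // bucket.DocCount is the number of hits for that type
        Console.WriteLine("{0} ({1})", bucket.Key, bucket.DocCount);
    }
}
```

<div>Each bucket maps naturally to one facet link in the UI, with the doc count shown alongside it.</div>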
<div> </div>
<div><a href="/link/f8d1f9363b8543a7ae0356d5c3dd0528.aspx"><img title="image" style="border-left-width: 0px; border-right-width: 0px; background-image: none; border-bottom-width: 0px; padding-top: 0px; padding-left: 0px; display: inline; padding-right: 0px; border-top-width: 0px" border="0" alt="image" src="/link/d435a136b6c345e593fe230175b05ade.aspx" width="481" height="517" /></a><br /></div>
<div>So that’s Vulcan! A lightweight Elasticsearch client library to drive search on an Episerver website. And it’s Open Source. Which is nice. If you have questions, comments or queries let me know… post here or you can email me on <a href="mailto:firstname.lastname@episerver.com">firstname.lastname@episerver.com</a>.</div>
<div> </div>
<div><strong>DISCLAIMER:</strong> This project is in no way connected with or endorsed by Episerver. It is being created under the auspices of a South African company and is entirely separate to what I do as an Episerver employee.<br /></div>
<div><br /></div></div>The Art of Snuggling/blogs/Dan-Matthews/Dates/2016/2/the-art-of-snuggling/2016-02-17T17:26:34.0000000Z<p>I gave a technical demonstration at the Africa e-commerce show in Cape Town all about snuggling… how to build lasting relationships with your website visitors. The first half of my demo was actually a short presentation about getting to know your visitors, and I thought it would be nice to share here. <p>These are the seven steps to a good snuggle :) <p><strong>1) Nothing is worse when snuggling than using the wrong name.</strong></p> <p>How often have you been greeted with a 'hello Mr. User' or had a mail addressed to Dear ListMember5575? When 'talking' to the customer, whether on-screen or via direct marketing, either use the right info or nothing at all.</p> <p><strong>2) When snuggling, keep your hands to yourself until invited.</strong></p> <p>Isn't it annoying when you register on a website and immediately it's inviting you to newsletters or showing you special ads? It’s important to give people personal space. A good tip is to set triggers in the site to go deeper with your visitors, and only bother them when the relationship is established. </p> <p><strong>3) Snuggling is best done on cold days.</strong></p> <p>Understanding the context of an engagement is vitally important. How often have you gone to a website and found something irrelevant to your country or season displayed? Make sure that your ads, banners and teasers are relevant to the person AND the context. Remember, maybe they don't want to snuggle today so hold the ads and banners until next time. </p> <p><strong>4) The best snugglers know what each other likes without asking.</strong></p> <p>Giving the best snuggling experience should be something that your website does without explicitly being asked. For example, how about someone using a mobile phone? Do we want to bump them off to a .mobi site? Of course not. We want to understand what they like. 
As an editor, I need to be able to see it for myself as well, so I understand what they like. </p> <p><strong>5) A good snuggle can turn into something else.</strong></p> <p>Too often sites have three page sign ups or four page checkouts. Find the easiest path to engaging / buying something and remove roadblocks. Remove information and barriers that are non critical. </p> <p><strong>6) Snuggling is private (and probably between only 2)</strong></p> <p>The visitor wants to engage with you. Not you and your friends. Don't push them off to third parties like subdomains. Keep payment providers 'inline' where possible. Make sure they know you don't share their details and they know they are secure. Keep it cozy.</p> <p><strong>7) If your snuggling is rejected, find out why.</strong></p> <p>Sometimes your visitor doesn't want to snuggle, and that's okay. But you have already gathered some profile about them... You might know what pages they've looked at, what products they had in their cart, at what point they abandoned their purchase. You might even have captured contact details. Look at the stats, learn from them, and if you think it's worth it, pursue them!</p> <p>I hope that these seven simple steps have given you some food for thought. If you have comments, feedback or suggestions on what I might have missed, do let me know! </p></p></p>Database failures when creating a site in Visual Studio/blogs/Dan-Matthews/Dates/2014/10/Database-failures-when-creating-a-site-in-Visual-Studio/2014-10-05T14:55:00.0000000Z<p>I’ve seen a few cases where Visual Studio is not able to create the database successfully when using the template to create a new EPiServer site. The MDF file is in the correct location, but either there are errors in the Visual Studio console window saying that the database could not be connected to, or when the site runs you get an error about missing schema or ‘Could not find stored procedure 'sp_DatabaseVersion'’. 
There are a number of reasons why this could happen – funny Local DB configurations, odd permissions issues, SQL Express version issues – but the one thing I’ve found is that troubleshooting can be long and tedious. For this reason, I’ve found that often the easiest thing to do is to set up the database myself following a few simple steps.</p> <p>Firstly, run the aspnet_regsql wizard in your .Net 4 framework folder. Connect to your Local DB instance and choose your EPiServer database (it will be named the same as the MDF file). If you are unsure of the instance name, grab it from the connection strings file in your project. I have found that sometimes the Local DB isn’t started properly… if this is the case then you can start an instance using <a href="http://msdn.microsoft.com/en-us/library/hh212961.aspx">these instructions</a>, and try again. Follow the wizard through and it will create all your ASP.NET support in the database (needed for user management, among other things).</p> <p>Next, open SQL Server Management Studio (<a href="http://msdn.microsoft.com/en-us/evalcenter/dn434042.aspx">download the latest version for free if you need to</a> – it supports all versions of Local DB and you only need to download the management studio package, not the database engine itself or any of the advanced services). You should see your database listed. Open and run the following scripts against your database, in this order:</p> <ul> <li>%systemroot%\Microsoft.NET\Framework64\v4.0.30319\SQL\en\SqlPersistenceService_Schema.sql </li> <li>%systemroot%\Microsoft.NET\Framework64\v4.0.30319\SQL\en\SqlPersistenceService_Logic.sql </li> <li>[your site installation folder]\packages\EPiServer.CMS.Core.7.7.1\tools\EPiServer.Cms.Core.sql </li> </ul> <p>The first two add .Net 3.5 Workflow support to the site – needed because the workflow subsystem in EPiServer currently runs the older workflow engine using the backwards-compatibility in .Net 4. 
The third script is the EPiServer schema.</p> <p>Once done, try and spin your site up again and you should be good to go!</p>Mixing Forms and Windows Authentication/blogs/Dan-Matthews/Dates/2014/8/Mixing-Forms-and-Windows-Authentication/2014-08-27T17:10:52.0000000Z<p>Recently I worked on a project where the client had both internal (Active Directory) and external (database-stored) users and wanted to authenticate both against the website. In itself, that’s very straightforward in EPiServer – we can just use the multiplexing authentication and role providers. However, there was a twist. This client wanted to authenticate external users using a clean, external-friendly form, and they wanted internal users to authenticate automatically by passing their AD credentials directly to the site via the Intranet Sites zone. This is where it gets interesting. To explain the issue and how we solved it, we need to take a step back and understand how authentication works. For the sake of clarity, we’ll consider just three aspects of security. There are more in play here, but they aren’t core to what we are looking at:</p> <ul> <li>Authentication Method </li> <li>Authentication Provider </li> <li>Role Provider </li> </ul> <p>If the end user requests a resource to which they do not have access, then IIS will trigger the configured authentication method to capture credentials from the user. This could be forms authentication (the website captures the information in a plain-text HTML form, which should be served over HTTPS), basic authentication (the browser captures credentials and sends them in plain text, not secure), Windows authentication (the browser captures credentials and – via one of several mechanisms – passes them via simple encryption/hashing to the website, semi-secure) or one of various other methods. The site will then take those credentials and pass them to the authentication provider for authentication. 
In the case of the multiplexing authentication provider, that may in turn call multiple other providers until the user is authenticated or all attempts fail. If the user can be authenticated, the site will set the HttpContext.Current.User property which is an IPrincipal object. This IPrincipal can be a built-in principal type or a custom one, but whatever type it is, it will have an Identity property. This stores information for the authentication method, and will be an IIdentity object. If you are using Forms Authentication, this will be a FormsIdentity object which contains various information about the forms ticket. If you are using Windows authentication, it will be a WindowsIdentity with various IDs etc. related to Windows Authentication. Note that this does NOT equate to whether the user is a Windows (or AD) user or not! You can use the Windows Authentication Method to authenticate both internal and external users – it’s simply the mechanism by which credentials are gathered.</p> <p>At this point we have an authenticated user along with the mechanism used to collect those credentials. The roles that are available to the user can now be identified by calling the role provider associated with that user. This can be done directly on the Principal using a method, or indirectly via the role manager. We now know who someone is, how we authenticated them, and what they can do. In a simple scenario, this is all we need.</p> <p>One of the features of Windows is that you can add sites to the ‘Intranet Sites’ zone and enable an option to try and automatically authenticate any sites in that zone using your Windows credentials. For an end user, that means that they can visit the site and the same credentials they log on to their PC with will be passed to the site to try and authenticate – typically Active Directory credentials. 
This only works when the site is configured for the Windows authentication method, as otherwise the site won’t send the browser the right challenge to capture the credentials.</p> <p>The problem on this project was that we wanted a combination of two different authentication methods. For external users, we wanted to use forms authentication. For internal users, we wanted to use the Windows authentication method so that we could try and log them on automatically. Unfortunately, IIS only allows us to configure a single authentication method. At least – you can enable both forms and Windows authentication for the website in IIS, ignoring the error message, but you can still only configure one in your Web.Config. Either you choose forms, and specify the logon form URL, or you choose Windows (and optionally set the type of credential exchange that Windows will use). In our project, if we choose forms then we can’t log on Intranet users automatically, and if we choose Windows then external users will get a browser-triggered popup rather than the clean form we created on the site. At this point, we therefore need to work around the limitations that ASP.NET imposes on us. To see how we do that, we need to understand the difference between forms and Windows authentication.</p>
In that way, once logged on the user will stay logged in for that session or – with a permanent cookie - until the cookie expires or they log out and the cookie is deleted.</p> <p>With Windows authentication, when a website receives a request to which the anonymous or authenticated user does not have access then it will send a 401 challenge back to the browser. Note that this is not a 403 forbidden, but rather a challenge to see if the user can authenticate. At this point, when the browser picks up a 401, then if the site is in the Intranet Sites zone and the automatic logon option is enabled, then the browser will silently try to negotiate a logon with the website using the currently logged on user’s credentials. If that doesn’t work (or the site is not in the Intranet zone and/or the automatic logon is not enabled), a logon popup is displayed. Note that when using the Windows authentication method, both the browser and the server need to keep track together of the user logon for the duration of the browser session. Every resource request needs to share this negotiated logon.</p> <p>Because these two methods send back totally different HTTP statuses, 302 or 401, they are fundamentally incompatible. Even more than that, unless the Windows authentication method is configured in the Web.Config, then any 401 challenge/response based user logon will not be negotiated for the ongoing session. You could get them to log on for the first request, but every subsequent request will have ‘forgotten’ the login.</p> <p>So how do we get this to work?</p> <p>We need to make sure that we have both forms and Windows authentication methods enabled in IIS, and the Web.Config needs to be configured for forms authentication. After that, the first trick is that we switch between 302 and 401 responses based on some criteria on the incoming request. 
You can use anything to do this, for example you could pick up a specific referrer, requests coming from a specific IP subnet or requests with a specific QueryString. There are various places that you can switch this, but probably the easiest is in your Global.asax file in the Application_EndRequest method. An example of how this could look is below.</p> <div id="scid:9D7513F9-C04C-4721-824A-2B34F0212519:ffefd229-21e5-48e9-bd08-a1e5cc630310" class="wlWriterEditableSmartContent" style="float: none; padding-bottom: 0px; padding-top: 0px; padding-left: 0px; margin: 0px; display: inline; padding-right: 0px"><pre style=" width: 745px; height: 281px;background-color:White;overflow: auto;">protected void Application_EndRequest(object sender, EventArgs e)
{
    // we only want 302 redirects if they are for login purposes
    if (this.Response.StatusCode == 302 &amp;&amp; this.Response.RedirectLocation.Contains("/login"))
    {
        // look for a setting on the QueryString to trigger a challenge
        if (!string.IsNullOrEmpty(Request.QueryString["internal"]))
        {
            this.Response.StatusCode = 401;
            // note that the following line is .NET 4.5 or later only
            // otherwise you have to suppress the return URL etc manually!
            this.Response.SuppressFormsAuthenticationRedirect = true;
        }
    }
}</pre></div>
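<p>For completeness, the Web.Config side of this hybrid stays on plain forms authentication; a minimal sketch might look like the following (the loginUrl, cookie name and timeout are placeholders – use whatever your own login page and policy require):</p>

```xml
<!-- forms authentication remains the configured method in Web.Config; -->
<!-- Windows authentication is additionally enabled only in IIS itself -->
<system.web>
  <authentication mode="Forms">
    <forms loginUrl="/login" name=".ASPXAUTH" timeout="120" />
  </authentication>
</system.web>
```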
<p>So far so good, and if you try and hit your website with the specified QueryString you will get a 401 challenge returned – you will either be auto-logged on or prompted depending on your configuration described earlier. Otherwise, forms login will work just as before. However, you’ll notice that if you use the QueryString method to trigger a 401, then other secured resources such as images may not load. The reason for this is that the 401 challenge worked for the initial request, but because your site is not configured for Windows authentication, it is not retaining the logon credentials through the session. Effectively, you’re not logged in for all the other resource requests. You can see this because if you try and access another page without your QueryString, then you won’t be logged in. We therefore need to do our second trick.</p>
<p>One of the nice things about forms authentication is that we can log someone on programmatically and write an authentication cookie. The trick, therefore, is that when the response comes back from our initial 401 challenge, we pick it up and write a forms authentication cookie that matches the username and details logged on using Windows authentication. As far as the site is concerned, the user has then been logged on using forms authentication and because the cookie comes back on each request, the user is logged on for all resources. Again, we can do this in the Global.asax file. This time we use the Application_AuthenticateRequest method, and it could look something like this:</p>
<div id="scid:9D7513F9-C04C-4721-824A-2B34F0212519:745cdc9c-c153-4d45-9107-1033e105b5bd" class="wlWriterEditableSmartContent" style="float: none; padding-bottom: 0px; padding-top: 0px; padding-left: 0px; margin: 0px; display: inline; padding-right: 0px"><pre style=" width: 745px; height: 281px;background-color:White;overflow: auto;"><div><!--
Code highlighting produced by Actipro CodeHighlighter (freeware)
http://www.CodeHighlighter.com/
--><span style="color: #0000FF;">protected</span><span style="color: #000000;"> </span><span style="color: #0000FF;">void</span><span style="color: #000000;"> Application_AuthenticateRequest(</span><span style="color: #0000FF;">object</span><span style="color: #000000;"> sender, EventArgs e)
{
</span><span style="color: #0000FF;">if</span><span style="color: #000000;"> (Request.IsAuthenticated </span><span style="color: #000000;">&&</span><span style="color: #000000;"> HttpContext.Current.User.Identity </span><span style="color: #0000FF;">is</span><span style="color: #000000;"> WindowsIdentity)
{
</span><span style="color: #008000;">//</span><span style="color: #008000;"> note that we will be stripping the domain from the username as forms authentication doesn't capture this anyway
</span><span style="color: #008000;">//</span><span style="color: #008000;"> create a temp cookie for this request only (not set in response)</span><span style="color: #008000;">
</span><span style="color: #000000;"> var tempCookie </span><span style="color: #000000;">=</span><span style="color: #000000;"> FormsAuthentication.GetAuthCookie(Regex.Replace(HttpContext.Current.User.Identity.Name, </span><span style="color: #800000;">"</span><span style="color: #800000;">.*\\\\(.*)</span><span style="color: #800000;">"</span><span style="color: #000000;">, </span><span style="color: #800000;">"</span><span style="color: #800000;">$1</span><span style="color: #800000;">"</span><span style="color: #000000;">, RegexOptions.None), </span><span style="color: #0000FF;">false</span><span style="color: #000000;">);
</span><span style="color: #008000;">//</span><span style="color: #008000;"> set the user based on this temporary cookie - just for this request
</span><span style="color: #008000;">//</span><span style="color: #008000;"> we grab the roles from the identity we are replacing so that none are lost</span><span style="color: #008000;">
</span><span style="color: #000000;"> HttpContext.Current.User </span><span style="color: #000000;">=</span><span style="color: #000000;"> </span><span style="color: #0000FF;">new</span><span style="color: #000000;"> GenericPrincipal(</span><span style="color: #0000FF;">new</span><span style="color: #000000;"> FormsIdentity(FormsAuthentication.Decrypt(tempCookie.Value)), (HttpContext.Current.User.Identity </span><span style="color: #0000FF;">as</span><span style="color: #000000;"> WindowsIdentity).Groups.Select(group </span><span style="color: #000000;">=></span><span style="color: #000000;"> group.Value).ToArray());
</span><span style="color: #008000;">//</span><span style="color: #008000;"> now set the forms cookie</span><span style="color: #008000;">
</span><span style="color: #000000;"> FormsAuthentication.SetAuthCookie(HttpContext.Current.User.Identity.Name, </span><span style="color: #0000FF;">false</span><span style="color: #000000;">);
}
}</span></div></pre></div>
<p>Now when an internal user authenticates using Windows authentication either automatically or via popup, they will end up being a forms-authenticated user on the site, just like the external users that came through the forms authentication logon form.</p>
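To illustrate the domain-stripping regex used in the snippet above: the post’s code is C#, but the pattern behaves the same in any PCRE-style engine, so here is a minimal Python sketch (the account names are made up for illustration):

```python
import re

def strip_domain(account_name):
    # Same idea as the C# snippet: greedily match everything up to the
    # last backslash and keep only what follows it ("DOMAIN\\user" -> "user").
    return re.sub(r".*\\(.*)", r"\1", account_name)

print(strip_domain("CORP\\jsmith"))  # jsmith
print(strip_domain("jsmith"))        # jsmith - no backslash, passes through unchanged
```

Note that a name with no domain prefix fails to match the pattern and passes through untouched, which is exactly what you want for users who arrived via the forms login instead.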
<p>This is not necessarily the cleanest way to handle this – there are some funky ways to do this I’ve seen with HTTP modules and subsites with different Web.Config files, but I think this is probably one of the easiest ways to implement it, and it’s fairly versatile. I hope it helps someone out who finds themselves with this interesting edge case!</p>Auto-translate using the EPiServer languages Add-On/blogs/Dan-Matthews/Dates/2014/4/Auto-translate-using-the-EPiServer-languages-Add-On/2014-04-24T17:50:53.0000000Z<p>When working with multiple languages on your EPiServer website, you have three options when it comes to translating your content:</p> <ol> <li>Do it manually</li> <li>Do it with a translation agency</li> <li>Do it automagically</li> </ol> <p>Option (1) is simple enough: you create a blank version of the page in a new language and enter your content. With the EPiServer languages Add-On you can even duplicate the content of an existing language and edit that, which might make it even easier. This works well if you have an internal team to do the translation, or you’re comfortable giving a third party access to your site to do the translations. Option (2) is good if you want to outsource the translation and you don’t mind paying for it. You get quality translation by professionals, and they use their own internal tools for the translation – they don’t need to have access to your site itself. The process is seamless and efficient. There are a number of Add-Ons from various translation vendors that you could use – I won’t name them for the sake of avoiding any favouritism, but they work well. However, what about option (3)… the automagic option? If you are happy with a quick and dirty automatic translation, maybe you want to go Google Translate style? The EPiServer languages Add-On will do this for you as well… albeit with the Microsoft alternative to Google Translate, called Bing Translate. 
I believe it’s been chosen for the API and the more favourable restrictions, but it does a very good job with the common language translations.</p> <p>In order to make it easy for you to try it out, this is a step-by-step guide to adding and setting up the Add-On for an EPiServer 7.5 site. Firstly, download the Add-On itself from the Add-On store. At time of writing, the latest version was 1.1.0.75 and it was in the EPiServer Beta section of the Add-On store. Simply click the ‘Install’ button and follow the instructions, remembering to restart your site. (In my screenshot, I’ve already installed it so you’ll see the ‘Install’ button is greyed out.)</p> <p><a href="/link/694c4d0117834ebfb06cbac56160ae31.png"><img title="image" style="border-top: 0px; border-right: 0px; background-image: none; border-bottom: 0px; padding-top: 0px; padding-left: 0px; border-left: 0px; display: inline; padding-right: 0px" border="0" alt="image" src="/link/c02118a9b65649b3b68d0c6f309cd6d3.png" width="684" height="508" /></a></p> <p>Next we need to add the languages gadget to our website. 
Go to the edit mode and in one of your panels (I’d suggest the assets panel on the right) choose to add a gadget:</p> <p><a href="/link/87cc166245a2408ebc5b74a92d0a3e0f.png"><img title="image" style="border-top: 0px; border-right: 0px; background-image: none; border-bottom: 0px; padding-top: 0px; padding-left: 0px; border-left: 0px; display: inline; padding-right: 0px" border="0" alt="image" src="/link/a904357cabfa4e96a61eb72194be07f7.png" width="682" height="507" /></a></p> <p>In the dialog that’s shown, just click the Languages item once (you may not notice, but in the background it is added to the assets panel):</p> <p><a href="/link/367375e03b5c4f0f8fd55c940c19eafe.png"><img title="image" style="border-top: 0px; border-right: 0px; background-image: none; border-bottom: 0px; padding-top: 0px; padding-left: 0px; border-left: 0px; display: inline; padding-right: 0px" border="0" alt="image" src="/link/55c1405c29ba475dadb295dda00f8785.png" width="491" height="368" /></a></p> <p>Before we go any further, make sure your site is configured for multiple languages. 
This is a much wider topic, but at a simple level you can go to your root page and edit its language settings:</p> <p><a href="/link/febfacb1a7a14533a1730890dc90bcfb.png"><img title="image" style="border-top: 0px; border-right: 0px; background-image: none; border-bottom: 0px; padding-top: 0px; padding-left: 0px; border-left: 0px; display: inline; padding-right: 0px" border="0" alt="image" src="/link/eb2844433d124a3f984ac5359667b5c0.png" width="672" height="500" /></a></p> <p>Now make sure that under ‘Settings for Editors’ you have at least two languages ticked:</p> <p><a href="/link/056b4a61dd1e48fa802d1e1148515324.png"><img title="image" style="border-top: 0px; border-right: 0px; background-image: none; border-bottom: 0px; padding-top: 0px; padding-left: 0px; border-left: 0px; display: inline; padding-right: 0px" border="0" alt="image" src="/link/b88fa1f20cdc4718856ddc4d35e44f93.png" width="435" height="481" /></a></p> <p>Now that we have enabled multiple languages and added our languages Add-On, we are almost ready to go. 
We can use the Add-On already, but we haven’t yet configured Bing Translate and so if we look at the gadget, we’ll see the auto translate option is greyed out:</p> <p><a href="/link/fb2998b88e094c70a9ddcb758b2dc8f5.png"><img title="image" style="border-top: 0px; border-right: 0px; background-image: none; border-bottom: 0px; padding-top: 0px; padding-left: 0px; border-left: 0px; display: inline; padding-right: 0px" border="0" alt="image" src="/link/ffc65a900c2c4f3b9d4549223eab1d86.png" width="676" height="504" /></a></p> <p>To set this up, click the settings cog in the bottom-right hand corner of the gadget and choose ‘Manage Add-On Settings’:</p> <p><a href="/link/c1790624fe0245899d7d233d481e5f23.png"><img title="image" style="border-top: 0px; border-right: 0px; background-image: none; border-bottom: 0px; padding-top: 0px; padding-left: 0px; border-left: 0px; display: inline; padding-right: 0px" border="0" alt="image" src="/link/9d95a261256e4d06be5f6ee0329d8f12.png" width="384" height="325" /></a></p> <p><a href="/link/ab237c58b08045d6ae010e9b8987e25a.png"><img title="image" style="border-top: 0px; border-right: 0px; background-image: none; border-bottom: 0px; padding-top: 0px; padding-left: 0px; border-left: 0px; display: inline; padding-right: 0px" border="0" alt="image" src="/link/59709a03256849eea36d1fea97802af7.png" width="696" height="517" /></a></p> <p>You’ll see that we have some of the Bing Translation already set up, but we are missing two things; a consumer key and a consumer secret. These we need to get from Microsoft, and Microsoft will use them to track our usage of the translation API to make sure we are not abusing it. To get these, log on to the Azure Marketplace: <a title="http://datamarket.azure.com/" href="http://datamarket.azure.com/">http://datamarket.azure.com/</a>. You may need to register for it with a Windows Live account if you haven’t done so before. 
Once logged on, select the ‘My Account’ option and then choose the ‘Developers’ link in the subnavigation. Unless you’ve been here before for some reason, it will say that you have no applications registered, so click the ‘Register’ button to create a new application. You will need to give it a unique Client ID and a name. Make sure you copy down the Client ID and the auto-generated Client Secret. You don’t need a Redirect URI for translations, so you can put anything you like in there. Finally, click create. (If it says something about HTTPS and the redirect URI, you can just click create again to ignore that warning and create your new application anyway). We can now go back to our website and plug the Client ID into the Consumer Key field, and the Client Secret into the Consumer Secret field. Click ‘save’ and we’re done!</p> <p>Now you’ll see the auto-translate option is enabled:</p> <p><a href="/link/d397cab6728d427fb19051564e22dcf7.png"><img title="image" style="border-top: 0px; border-right: 0px; background-image: none; border-bottom: 0px; padding-top: 0px; padding-left: 0px; border-left: 0px; display: inline; padding-right: 0px" border="0" alt="image" src="/link/fe71668bcc1a460c9f366aaf599c2d5a.png" width="696" height="517" /></a></p> <p>If we choose it then, voila! 
We have an auto-translated page.</p> <p><a href="/link/aba4d8264316435999c2fe7d6d09c63e.png"><img title="image" style="border-top: 0px; border-right: 0px; background-image: none; border-bottom: 0px; padding-top: 0px; padding-left: 0px; border-left: 0px; display: inline; padding-right: 0px" border="0" alt="image" src="/link/b430d51bae0f4dc0b90680f4837f7173.png" width="696" height="517" /></a></p> <p>I hope this helps you get up and running with automagic translations.</p>Cookies missing on XForm submission/blogs/Dan-Matthews/Dates/2014/1/Cookies-missing-on-XForm-submission/2014-01-02T20:52:59.0000000Z<p>I came across a nasty little gotcha when working with XForms recently, and although I grant it is somewhat of an edge case, the cause is a bit obscure, so I thought it worth blogging about in case anyone else happens across it. The cause will also have other side effects that haven’t affected me but might get you! So, what’s the problem? If you send an XForm to a custom URL (quite a common scenario when storing your XForm submitted data somewhere other than the standard EPiServer storage), then if you need to pull any information from the HTTP Request headers you may find it missing or incorrect, including cookies. If you check the request on form submission with Fiddler you’ll see all the request data including cookies there, but by the time your page/controller which handles the submission is called, it’s all disappeared or changed.</p> <p>The reason for this is that the actual request from the browser is not the HTTP Post to your handler page/controller. It’s a postback to the server, which then prepares the XForm submission and makes a SECOND call to your handler. This second call does not pull the request headers and cookies across from the initial call, and so you’ll find them missing. (Side note – be aware that when sending to a custom page, you need a full URL for this very reason. A relative URL will not work.) 
In my particular case I needed a cookie setting from the page, but by the time my handler was called it was of course missing.</p> <p>There are different ways to solve this. In my case I was only concerned with one particular value and so the easiest thing was to intercept the BeforeSubmitPostedData event of the XForm Control (you can put this in an initializable module) and manipulate my URL to add the value I needed to it. At this point I’m still in the ‘first call’ and so I have my cookies – I could simply pull out my value and inject it into my second request in the way I wanted to. If your handler has the SaveFormDataEventArgs called ‘e’ then you’ll find the URL that you need to change in e.FormData.CustomUrl. You could manipulate the URL to inject a URL segment with your value which matches a custom route – ideal if sending to an MVC controller which is what I was doing – or you could add a querystring value which is probably better if you are sending to a WebForm.</p> <p>I hope that if anyone else comes across a related problem with XForm submission to a custom URL, this little post can help them out!</p>Restricting available page types for Root page/blogs/Dan-Matthews/Dates/2013/11/Restricting-available-page-types-for-Root-page/2013-11-20T08:18:51.0000000Z<p>Now that we have strongly typed page types in EPiServer 7, we wonder how we ever lived without it. Actually… we just used the superb <a href="http://pagetypebuilder.codeplex.com/">Page Type Builder</a>, but that’s not the point. We now want to do everything in code rather than in the Admin mode, and we nearly can. However, there are a couple of little things that we can’t quite do yet, and restricting the available page types for the Root page is one of them. Typically, you will only ever be creating ‘start pages’ under the Root page, and you’ll probably have a specific type for the start page. If we do a ‘New page’ under the Root though, we get all of our page types listed! That’s a bit messy. 
So what are our options? Well, the Root page is in Admin mode; it’s called ‘Welcome page in Edit Mode’ and its type is ‘SysRoot’. We could restrict available page types there. But that’s going back to the dark ages. You can’t commit that to source control like other code and have it picked up by everyone else. A much nicer way would be to use the AvailablePageTypes attribute. No luck there though – the SysRoot has no code definition that I can find (according to Admin mode it doesn’t come from code) and so you can’t use it with the strongly-typed AvailablePageTypes attribute. Possibly you could create a definition for the SysRoot in code, but that’s a pretty scary thing to do and I personally wouldn’t want to go there – I used to do it in Page Type Builder but I wouldn’t want to in EPiServer 7.</p> <p>So what do we do? I was asked that by a student in a training course recently, and so I had to come up with an answer. Actually, it’s pretty easy. We use an Initialization Module and the API to do what Admin mode is doing, but automatically and in code. The code to achieve this is shown below. Note that in this example the start page type is ‘StartPage’.</p> <div class="csharpcode"> <pre class="language-csharp"><code><span class="kwrd">using</span> System;</code></pre>
<pre class="language-csharp"><code><span class="kwrd">using</span> System.Collections.Generic;</code></pre>
<pre class="language-csharp"><code><span class="kwrd">using</span> System.Linq;</code></pre>
<pre class="language-csharp"><code><span class="kwrd">using</span> System.Web;</code></pre>
<pre class="language-csharp"><code><span class="kwrd">using</span> EPiServer.Framework;</code></pre>
<pre class="language-csharp"><code><span class="kwrd">using</span> EPiServer.ServiceLocation;</code></pre>
<pre class="language-csharp"><code><span class="kwrd">using</span> EPiServer.DataAbstraction;</code></pre>
<pre class="language-csharp"><code><span class="kwrd">using</span> EPiServerZA.Models.Pages;</code></pre>
<pre class="language-csharp"><code> </code></pre>
<pre class="language-csharp"><code><span class="kwrd">namespace</span> EPiServerZA.Business</code></pre>
<pre class="language-csharp"><code>{</code></pre>
<pre class="language-csharp"><code> [InitializableModule]</code></pre>
<pre class="language-csharp"><code> [ModuleDependency(<span class="kwrd">typeof</span>(EPiServer.Data.DataInitialization))]</code></pre>
<pre class="language-csharp"><code> <span class="kwrd">public</span> <span class="kwrd">class</span> RestrictRootPages : IInitializableModule</code></pre>
<pre class="language-csharp"><code> {</code></pre>
<pre class="language-csharp"><code> <span class="kwrd">public</span> <span class="kwrd">void</span> Initialize(EPiServer.Framework.Initialization.InitializationEngine context)</code></pre>
<pre class="language-csharp"><code> {</code></pre>
<pre class="language-csharp"><code> var sysRoot = ServiceLocator.Current.GetInstance<IContentTypeRepository>().Load(<span class="str">"SysRoot"</span>) <span class="kwrd">as</span> PageType;</code></pre>
<pre class="language-csharp"><code> var startPage = ServiceLocator.Current.GetInstance<IContentTypeRepository>().Load(<span class="kwrd">typeof</span>(StartPage));</code></pre>
<pre class="language-csharp"><code> </code></pre>
<pre class="language-csharp"><code> var setting = <span class="kwrd">new</span> EPiServer.DataAbstraction.PageTypeAvailability.AvailableSetting();</code></pre>
<pre class="language-csharp"><code> </code></pre>
<pre class="language-csharp"><code> setting.Availability = EPiServer.DataAbstraction.PageTypeAvailability.Availability.Specific;</code></pre>
<pre class="language-csharp"><code> setting.AllowedPageTypeNames.Add(startPage.Name);</code></pre>
<pre class="language-csharp"><code> </code></pre>
<pre class="language-csharp"><code> ServiceLocator.Current.GetInstance<EPiServer.DataAbstraction.PageTypeAvailability.IAvailableSettingsRepository>().RegisterSetting(sysRoot, setting);</code></pre>
<pre class="language-csharp"><code> }</code></pre>
<pre class="language-csharp"><code> </code></pre>
<pre class="language-csharp"><code> <span class="kwrd">public</span> <span class="kwrd">void</span> Preload(<span class="kwrd">string</span>[] parameters)</code></pre>
<pre class="language-csharp"><code> {</code></pre>
<pre class="language-csharp"><code> </code></pre>
<pre class="language-csharp"><code> }</code></pre>
<pre class="language-csharp"><code> </code></pre>
<pre class="language-csharp"><code> <span class="kwrd">public</span> <span class="kwrd">void</span> Uninitialize(EPiServer.Framework.Initialization.InitializationEngine context)</code></pre>
<pre class="language-csharp"><code> {</code></pre>
<pre class="language-csharp"><code> </code></pre>
<pre class="language-csharp"><code> }</code></pre>
<pre class="language-csharp"><code> }</code></pre>
<pre class="language-csharp"><code>}</code></pre>
</div>
<style type="text/css">
.csharpcode, .csharpcode pre
{
font-size: small;
color: black;
font-family: consolas, "Courier New", courier, monospace;
background-color: #ffffff;
/*white-space: pre;*/
}
.csharpcode pre { margin: 0em; }
.csharpcode .rem { color: #008000; }
.csharpcode .kwrd { color: #0000ff; }
.csharpcode .str { color: #006080; }
.csharpcode .op { color: #0000c0; }
.csharpcode .preproc { color: #cc6633; }
.csharpcode .asp { background-color: #ffff00; }
.csharpcode .html { color: #800000; }
.csharpcode .attr { color: #ff0000; }
.csharpcode .alt
{
background-color: #f4f4f4;
width: 100%;
margin: 0em;
}
.csharpcode .lnum { color: #606060; }</style>
<p>The code simply grabs the SysRoot and StartPage types from the content type repository then makes the StartPage the only available page type on the SysRoot. There is a dependency on DataInitialization so that the repositories should be ready to use when this runs. Any clarifications or easier ways to achieve this are welcome <img class="wlEmoticon wlEmoticon-smile" style="border-top-style: none; border-left-style: none; border-bottom-style: none; border-right-style: none" alt="Smile" src="/link/618d58066f054894a881253fbb0af649.png" /></p>Avoiding spam with XForms/blogs/Dan-Matthews/Dates/2013/9/Avoiding-spam-with-XForms/2013-09-13T10:10:27.0000000Z<p><em>Simple instructions</em>: If you are using a WebForms based EPiServer 7 site, Install <a href="https://dl.dropboxusercontent.com/u/3400242/EPiServerZA.AddOns.XPathMaths.1.0.0.1.nupkg">this</a> AddOn then add a text box to your form of type ‘Maths Problem’. Save your form and you should be up and running!</p> <p><em>More detail</em>:-</p> <p>We all have a love-hate relationship with XForms in EPiServer. They are quick, easy, standard… but not very flexible. One such issue with flexibility is the need to put some kind of spam-catching filter onto forms. Traditionally, we’d use a <a href="http://www.captcha.net/">CAPTCHA</a> or <a href="http://www.google.com/recaptcha">reCAPTCHA</a>, but using this with XForms has three drawbacks:</p> <ul> <li>You need to put in on the page or block that embeds the form, so it’s not very flexible as to when you show it (although you could add a ‘show spam catching’ flag if you wanted – it’s a bit of work)</li> <li>Because it’s not part of the XForm, you need to put it above your form or at the bottom below your submit, which is ugly</li> <li>It’s hard to do client-side validation with a reCAPTCHA, so you have to do it server-side which is a nuisance</li> </ul> <p>As using these is quite a heavy thing to implement, I decided to find a simpler solution. 
One alternative that is cropping up in a few places is the ‘maths problem’ approach. It’s a very simple sum (one that even my six year old son could do easily) but one that the spammers haven’t fully exploited yet with an automatic solver. I’m sure they will, but right now if you avoid the big company implementations which are targeted for cracking, then you can avoid the worst of the bot-based spam form submissions. Because it’s so simple, it’s easy to write an AddOn that intercepts the XForm calls and sets up a sum to solve.</p> <p>This implementation adds a new XForm data type of ‘Maths Problem’ which you can put on your form:</p> <p><a href="/link/0803b91df1e5438d8ca53dccdbe52d4b.png"><img style="background-image: none; border-bottom: 0px; border-left: 0px; padding-left: 0px; padding-right: 0px; display: inline; border-top: 0px; border-right: 0px; padding-top: 0px" title="image" border="0" alt="image" src="/link/caeb4d69b72d444d941d51967f26e37d.png" width="327" height="206" /></a></p> <p>When the form is rendered, some code will intercept this field and add a sum to it, along with validators to ensure the sum is filled in correctly:</p> <p><a href="/link/f6ba656822fa429395e31ba1cf6656f5.png"><img style="background-image: none; border-bottom: 0px; border-left: 0px; padding-left: 0px; padding-right: 0px; display: inline; border-top: 0px; border-right: 0px; padding-top: 0px" title="image" border="0" alt="image" src="/link/c14419c3046648ad907d776d1deb80de.png" width="327" height="81" /></a></p> <p><a href="/link/c67e5602469b4b7a95f487c33e836094.png"><img style="background-image: none; border-bottom: 0px; border-left: 0px; padding-left: 0px; padding-right: 0px; display: inline; border-top: 0px; border-right: 0px; padding-top: 0px" title="image" border="0" alt="image" src="/link/b577302560514ca2bd3280df13bc5b02.png" width="325" height="79" /></a></p> <p>With this solution, you can decide where your maths problem goes and how it is styled.</p> <p><em>Known 
issues</em>:-</p> <ul> <li>Will probably only work with WebForms, haven’t tried with MVC yet</li> <li>Currently the validation error is fixed and only in English</li> </ul> <p><em>Disclaimer</em>:-</p> <p>This AddOn is provided As-Is. I haven’t tested it fully and it was a quick throw-together for another project I’m working on. Use it at your own risk!</p>The first, easiest (and worst) decision your tech start-up will make/blogs/Dan-Matthews/Dates/2013/5/The-first-easiest-and-worst-decision-your-tech-start-up-will-make/2013-05-24T10:35:24.0000000Z<p>Yesterday I was at the premier conference for technology entrepreneurs and start-ups in South Africa, <a href="http://www.netprophet.org.za">Net Prophet</a>. I had a chance to mingle with some of the fantastic talent we have here in Africa and there is one thing that I kept hearing in conversations, and it went something like this:</p> <p><em>“We had to keep costs low, so we built our own platform”</em></p> <p>Or, sometimes, a variation:</p> <p><em>“We had to keep costs low, so we chose an Open Source platform”</em></p> <p>On the face of it, that makes perfect sense. For a technology start-up you start with an idea for a product or service, and the first decision you need to make is what platform to use. Apparently, it’s a no brainer. An easy decision that you can’t get wrong. Or can you?</p> <p>Let’s take a step back and see what you actually need for a start-up. Yes, you need that great idea or service that someone wants to buy from you, but what you then need is capital. Money. Greenbacks. Moolah. There are various ways of raising it – running a start-up in parallel with your day job, selling your house and car as capital and living out of a cardboard box, getting an angel investor, joining an accelerator program… but whatever option you choose, your start-up needs to be as lean and mean as it can. 
This is something we all know to be self-evident; low overheads mean better margins and higher profit or – if required – the same margins and lower end cost to customer, increasing competitiveness.</p> <p>This, then, is when we come to our first decision. We need a platform, we need to keep costs low. If we custom build or go Open Source, our cost is zero. Lean. Mean. Profitable. Successful.</p> <p>But wait a moment… what many start-ups fail to realise is that the art of keeping costs low has two distinct and very important parts: <strong>capital costs</strong> and <strong>running costs</strong>. What is the distinction? Capital costs are the costs incurred by a business to get to an operational state, and running costs are the costs to operate the business. For our start-up, we need to consider both. Remember, time is money. Every minute you spend before you are operational is a capital cost. That includes time developing or extending your platform. Somehow, you need to pay for your time to do that. Just because the platform is free doesn’t mean it doesn’t cost you anything. Then, once you are live, you need to consider how much time you spend maintaining that platform. Something that was free might actually incur far higher operational cost than you expect, both in time and hard cash.</p> <p>On top of all this, we have a golden rule for start-ups… get to market as quickly as possible. Why? Because then you can start making revenue, and that makes a successful business. 
That’s a lot to think about, so let’s summarise our options.</p> <h3>Option One – Build It Yourself</h3> <p>Pros</p> <ul> <li>Exactly what you need</li> <li>No initial license cost</li> </ul> <p>Cons</p> <ul> <li>Long time to develop causes high cost in man hours (capital cost)</li> <li>Long time to develop causes slower time to market</li> <li>Continuous maintenance required</li> <li>Difficult transfer of skills</li> <li>Limited to knowledge of developers (if you didn’t know something, how could you have built it?)</li> </ul> <p>Summary</p> <ul> <li>Capital cost HIGH</li> <li>Operational cost HIGH</li> <li>Time to market LONG</li> </ul> <p>Best when…</p> <ul> <li>You are happy to take a long time to go live</li> <li>You understand your market intimately</li> <li>You have another steady revenue stream</li> <li>You have an extremely knowledgeable development team</li> <li>You can guarantee your development team’s availability long term</li> <li>You have very cheap operational resources</li> <li>You can afford to burn time</li> </ul> <h3>Option Two – Open Source</h3> <p>Pros</p> <ul> <li>No initial license cost</li> <li>Existing platform reduces effort required (capital cost)</li> <li>Quicker time to market than build-it-yourself</li> <li>Paid-for-maintenance available</li> <li>Skills can be transferred</li> </ul> <p>Cons</p> <ul> <li>Technically oriented (designed for developers by developers)</li> <li>Often disjointed and inconsistent as it’s built by a disparate team with different needs</li> <li>Still requires heavy development (although less than build-it-yourself)</li> <li>Lock-in to a product that is potentially transient</li> <li>Community support often lacking</li> <li>Additional modules can cause ‘patchwork’ software of varying quality</li> <li>Requires continual attention and maintenance</li> <li>Vendor support (if it exists) can be expensive</li> </ul> <p>Summary</p> <ul> <li>Capital cost AVERAGE</li> <li>Operational cost AVERAGE</li> <li>Time to market 
AVERAGE</li> </ul> <p>Best when…</p> <ul> <li>You have a very small initial investment</li> <li>You need to get live fairly quickly</li> <li>You have a reasonable development team</li> <li>You want medium-term future proofing</li> <li>You want some 3rd party support options</li> </ul> <h3>Option Three – Paid for Product</h3> <p>Pros</p> <ul> <li>Fully Supported</li> <li>Very quick time to market</li> <li>Proven scalability</li> <li>Powerful feature set</li> <li>Consistent experience</li> <li>Dedicated development team</li> <li>Business oriented</li> <li>Skills transferable</li> <li>Migration options</li> </ul> <p>Cons</p> <ul> <li>License cost (capital cost)</li> </ul> <p>Summary</p> <ul> <li>Capital cost HIGH</li> <li>Operational cost LOW</li> <li>Time to market SHORT</li> </ul> <p>Best when…</p> <ul> <li>You have raised a decent amount of initial investment (yourself or via investors)</li> <li>You want to get to market fast</li> <li>You want long-term profitability with low operational cost</li> <li>You want to be long-term future proofed</li> <li>You want the security of dedicated support</li> </ul> <h3>So… where to from here?</h3> <p>Ultimately, which option you choose depends very much on where you want to go with your start-up. If you have a great idea you want to develop yourself and time is no consideration, build it yourself. If you are really, really squeezed for initial capital but you can’t take the risk to build it yourself, go Open Source. And if you can raise the capital to go with a paid-for platform up front, you will get to market and profitability far quicker with lower operational costs. In real-world experience, successful start-ups that chose build-it-yourself or Open Source tend to re-platform within a couple of years to paid-for, and at a significant migration cost. 
Those start-ups that started with paid-for can just continue making money with their scalable platforms and don’t need to hit that speed bump on their journey. If you can raise the capital, it’s a long term win to start that way.</p> <p>Here’s a question to finish up. Do you think that an angel investor or venture capitalist will be impressed if you tell them about how you’re going to take ages to get to market because you’re keeping capital costs low? Or do you think they care about fast time to market, quality of platform and operational costs? They have money to place where they think it will be effective, and for them the capital cost is the small part of the picture. If you pick a paid-for platform and partner with them, you already have a team on your side, you have credibility, and you have a product you can be proud of. That makes capital raising for your business a much easier affair. It shows you’re serious, and you’ll find investors are likely to reward your long-term vision.</p> <p>So your first decision, your platform, does require serious thought. 
Make a wise decision, and a good one.</p> <div style="padding-bottom: 0px; margin: 0px; padding-left: 0px; padding-right: 0px; display: inline; float: none; padding-top: 0px" id="scid:0767317B-992E-4b12-91E0-4F059A8CECA8:900f8357-3f9f-449a-9e5b-3e786ed73524" class="wlWriterEditableSmartContent">Technorati Tags: <a href="http://technorati.com/tags/Net+Prophet" rel="tag">Net Prophet</a>,<a href="http://technorati.com/tags/Start-ups" rel="tag">Start-ups</a>,<a href="http://technorati.com/tags/EPiServer" rel="tag">EPiServer</a>,<a href="http://technorati.com/tags/Open+Source" rel="tag">Open Source</a>,<a href="http://technorati.com/tags/Profit" rel="tag">Profit</a>,<a href="http://technorati.com/tags/Entrepreneur" rel="tag">Entrepreneur</a>,<a href="http://technorati.com/tags/South+Africa" rel="tag">South Africa</a></div>EPiServer 7: Strongly typed page types in FPWC/blogs/Dan-Matthews/Dates/2013/5/EPiServer-7-Strongly-typed-page-types-in-FPWC/2013-05-07T15:48:59.0000000Z<p>No, not really :) FindPagesWithCriteria (FPWC) is still the same beast it always was. For those of us who used PageTypeBuilder a lot, you will have worked with the page type resolver that allowed us to – at the very least – turn a strongly typed page type into a page type ID and feed it into FPWC.</p> <p>EPiServer 7 provides the same feature as well. For example, let’s say we have a method to get all children of type ‘StandardPage’. We use the content type repository to ‘resolve’ the page type ID:</p> <pre class="language-csharp"><code><span style="color: blue">private </span><span style="color: #2b91af">IEnumerable</span><<span style="color: #2b91af">StandardPage</span>> GetPosts()
{
    <span style="color: #2b91af">PropertyCriteriaCollection </span>criterias = <span style="color: blue">new </span><span style="color: #2b91af">PropertyCriteriaCollection</span>();
    <span style="color: #2b91af">PropertyCriteria </span>criteria = <span style="color: blue">new </span><span style="color: #2b91af">PropertyCriteria</span>();
    criteria.Condition = <span style="color: #2b91af">CompareCondition</span>.Equal;
    criteria.Name = <span style="color: #a31515">"PageTypeID"</span>;
    criteria.Type = <span style="color: #2b91af">PropertyDataType</span>.PageType;
    criteria.Value = Locate.ContentTypeRepository().Load<<span style="color: #2b91af">StandardPage</span>>().ID.ToString();
    criteria.Required = <span style="color: blue">true</span>;
    criterias.Add(criteria);
    <span style="color: blue">var </span>posts = Locate.PageCriteriaQueryService().FindPagesWithCriteria(CurrentPage.ContentLink <span style="color: blue">as </span><span style="color: #2b91af">PageReference</span>, criterias).Cast<<span style="color: #2b91af">StandardPage</span>>();
    <span style="color: blue">return </span>EPiServer.Filters.<span style="color: #2b91af">FilterForVisitor</span>.Filter(posts).Cast<<span style="color: #2b91af">StandardPage</span>>();
}</code></pre>
<p>This works, and is as performant as FPWC is, but we have a problem with inheritance. What if we had a ‘DetailPage’ that is inherited from ‘StandardPage’? Because FPWC requires a page type ID, this won’t help us. We’ll only ever get StandardPage items back, never DetailPage ones. We could do two FPWC calls of course, but now we’re getting messy.</p>
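<p>To see the mess concretely, here is a hypothetical sketch of the two-call workaround – the ‘DetailPage’ type and the ‘FindByPageTypeId’ helper are illustrative, reusing the same FPWC plumbing as the example above:</p>
<pre class="language-csharp"><code>private IEnumerable<StandardPage> GetPostsWithDetailPages()
{
    // One FPWC call per concrete page type ID - this is the messy part
    var standardPages = FindByPageTypeId(Locate.ContentTypeRepository().Load<StandardPage>().ID);
    var detailPages = FindByPageTypeId(Locate.ContentTypeRepository().Load<DetailPage>().ID);

    // Merge both result sets, then filter them for the visitor as before
    var posts = standardPages.Cast<StandardPage>().Concat(detailPages.Cast<StandardPage>());
    return EPiServer.Filters.FilterForVisitor.Filter(posts).Cast<StandardPage>();
}

private PageDataCollection FindByPageTypeId(int pageTypeId)
{
    var criterias = new PropertyCriteriaCollection();
    criterias.Add(new PropertyCriteria
    {
        Condition = CompareCondition.Equal,
        Name = "PageTypeID",
        Type = PropertyDataType.PageType,
        Value = pageTypeId.ToString(),
        Required = true
    });
    return Locate.PageCriteriaQueryService().FindPagesWithCriteria(CurrentPage.ContentLink as PageReference, criterias);
}</code></pre>
<p>Two round trips to FPWC, duplicated criteria plumbing and a manual merge – workable, but you can see why it doesn’t scale to deeper type hierarchies.</p>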
<p>So what’s the alternative? In old-style code the temptation would be to grab all the descendants then filter them ‘after the event’. You can do this using LINQ (bearing in mind that it’s actually still enumerating behind the scenes – this is not a highly performant way of doing things). At least the code is clean and we are dealing with real strong types so we respect inheritance – no need for page type IDs.</p>
<pre class="language-csharp"><code><span style="color: blue">private </span><span style="color: #2b91af">IEnumerable</span><<span style="color: #2b91af">StandardPage</span>> GetPosts()
{
<span style="color: blue">return </span>EPiServer.Filters.<span style="color: #2b91af">FilterForVisitor</span>.Filter(Locate.ContentRepository().GetDescendents(CurrentPage.ContentLink).Select(pageRef => GetPage(pageRef <span style="color: blue">as </span><span style="color: #2b91af">PageReference</span>)).Where(page => page <span style="color: blue">is </span><span style="color: #2b91af">StandardPage</span>)).Cast<<span style="color: #2b91af">StandardPage</span>>();
}</code></pre>
<p>However, there is now a better and badder way to do this. Although the Get<T> method doesn’t like you passing the incorrect page type as the generic type, the GetChildren<T> does automagical filtering for you to get the right type (including inherited types that can cast back to it). In essence, it’s doing pretty much what our code above does, but in less code and – probably – more efficiently:</p>
<pre class="language-csharp"><code><span style="color: blue">private </span><span style="color: #2b91af">IEnumerable</span><<span style="color: #2b91af">StandardPage</span>> GetPosts()
{
<span style="color: blue">return </span>EPiServer.Filters.<span style="color: #2b91af">FilterForVisitor</span>.Filter(GetChildren<<span style="color: #2b91af">StandardPage</span>>(CurrentPage.ContentLink)).Cast<<span style="color: #2b91af">StandardPage</span>>();
}</code></pre>
<p>For more detail on this, you can see the <a href="http://world.episerver.com/Blogs/Johan-Bjornfot/Dates1/2012/8/EPiServer7-Working-with-IContentRepositoryDataFactory/">related article by Johan Björnfot on EPiServer World</a>. Note that in all my examples I’m filtering the results for the visitor – this is an often-overlooked and crucial step when working via the API! Using a PageList or similar web control, if using Web Forms, will do this filtering for you if you treat it nicely.</p>EPiServer 7 and Live Monitor/blogs/Dan-Matthews/Dates/2013/3/EPiServer-7-and-Live-Monitor/2013-03-20T15:32:09.0000000Z<p>Many of you have probably used the funky Live Monitor with EPiServer CMS 6 (otherwise known as EPiServer Trace). If you come to deploy it in EPiServer 7 and you are creating your own site using the web site template rather than starting with the Alloy templates, you may have a little difficulty in getting it running. It spins up, but no visits are ever tracked. Why is this? Well, Live Monitor works by injecting some tracking javascript into the page which, in turn, calls a handler URL to record the ‘visit’. To inject that tracking code, you need to use a new EPiServer feature called ‘Required Client Resources’. In the Alloy templates, this is done for you. 
However, if you started from scratch with the EPiServer web site template, you won’t have this and will need to add it yourself.</p> <p>To get it all working, take a look at the following article on the SDK:</p> <p><a title="http://sdkbeta.episerver.com/SDK-html-Container/?path=/SdkDocuments/CMS/7/Knowledge%20Base/Developer%20Guide/Configuration/Configuring%20Live%20Monitor.htm&vppRoot=/SdkDocuments//CMS/7/Knowledge%20Base/Developer%20Guide/" href="http://sdkbeta.episerver.com/SDK-html-Container/?path=/SdkDocuments/CMS/7/Knowledge%20Base/Developer%20Guide/Configuration/Configuring%20Live%20Monitor.htm&vppRoot=/SdkDocuments//CMS/7/Knowledge%20Base/Developer%20Guide/">Live Monitor Configuration in EPiServer 7 (SDK)</a></p> <p>Annoyingly, if you followed the instructions in the documentation on World rather than the SDK, then you’re probably tying yourself in knots and not getting anywhere – it’s not updated yet. If you’re interested in the background…</p> <p>In EPiServer 6, getting that tracking javascript code injected could be troublesome because it requires EPiServer to be able to intercept, parse and modify the pages coming through. In an ideal world, the tracking code should be automatically inserted by EPiServer hooks that are added when you deploy Live Monitor. However, this often didn’t work because of the parsing requirements. The workaround was therefore to insert the tracking code manually by adding an ASP.NET control to the page called ‘VisitTracker’. 
Indeed, this is what the CMS 7 documentation on World currently tells you to do:</p> <p><a title="http://world.episerver.com/Documentation/HTML-Documentation/?path=/SdkDocuments/cms/7/Knowledge%20Base/Developer%20Guide/Configuration/Configuring%20Live%20Monitor.htm&vppRoot=/SdkDocuments//cms/7/Knowledge%20Base/Getting%20Started/" href="http://world.episerver.com/Documentation/HTML-Documentation/?path=/SdkDocuments/cms/7/Knowledge%20Base/Developer%20Guide/Configuration/Configuring%20Live%20Monitor.htm&vppRoot=/SdkDocuments//cms/7/Knowledge%20Base/Getting%20Started/">Live Monitor Configuration in EPiServer 7 (World) - DO NOT USE THIS!</a></p> <p>Unfortunately, if you try to do this in EPiServer 7 then you’ll soon find out that ‘VisitTracker’ no longer exists (resulting in various missing tag prefix errors at runtime). The reason is that because EPiServer 7 uses the new Required Client Resources technique, there is now a much better way to inject the needed tracker javascript, and it doesn’t have anything to do with manually adding controls.</p> <p>Simply put, this new technique will scan all assemblies for classes that are marked with an attribute for required client resources, then it will call the classes to generate the resources (it’s an interface it calls) and insert the resulting resource references at a defined point on the page. Where these resources are inserted depends on an ASP.NET control called, unsurprisingly, ‘RequiredClientResources’. In Alloy, this is already set up for you in Root.Master:</p> <p><font face="Courier New"><EPiServer:RequiredClientResources RenderingArea="Header" ID="RequiredResourcesHeader" runat="server" /></font></p> <p>The solution is simple, then: include the RequiredClientResources controls where needed. You should add two – one in the Header and one in the Footer. The SDK link at the top of this article will take you through the rest of the setup.</p>
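<p>To give a feel for the attribute-plus-interface pattern, here is a rough sketch of such a provider class. A big caveat: the type and member names below (the attribute, the interface, the resource class and the script path) are assumptions written from memory and not verified against the EPiServer 7 SDK, so treat this purely as an illustration of the mechanism and check the SDK article linked above for the real signatures:</p>
<pre class="language-csharp"><code>// Illustrative sketch only - names and path are assumed, verify against the EPiServer 7 SDK
[ClientResourceProvider]
public class VisitTrackerResourceProvider : IClientResourceProvider
{
    public IEnumerable<ClientResource> GetClientResources()
    {
        // The assembly scan discovers this class via the attribute, calls this
        // method, and renders a reference to the returned script wherever a
        // matching RequiredClientResources control sits in the page
        yield return new ClientResource
        {
            Name = "livemonitor.visittracker",
            Path = "/Util/javascript/visittracker.js", // hypothetical path
            ResourceType = ClientResourceType.Script
        };
    }
}</code></pre>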