<h1>Upgrading Optimizely CMS 11 to 12 and Commerce 13 to 14</h1>
<p>A blog post by Drew Null, published on Optimizely World, 2022-05-07.</p>
<h2>Background</h2>
<p>There are many great resources for learning how to build a new solution using CMS 12 and Commerce 14. The official <a>developer documentation</a> has been updated, the official <a href="https://webhelp.optimizely.com">user guide</a> has been updated, and an excellent <a href="https://www.optimizely.com/support/education/product/migrating-to-optimizely-cms-12-and-commerce-14">masterclass</a> is hosted by Mark Price and Scott Reed (to name a few). But there isn't much information on how to take an existing CMS 11 / Commerce 13 solution and upgrade it to .NET 5+.</p>
<p>As described in the <a href="https://docs.developers.optimizely.com/content-cloud/v11.0.0-content-cloud/docs/upgrading-to-content-cloud-cms-12">official documentation</a>, there are three "phases" to upgrading from CMS 11 to 12:</p>
<ol>
<li>Run Upgrade-Assistant</li>
<li>Fix code issues</li>
<li>Upgrade service environment</li>
</ol>
<p>This blog post will walk through the first two phases in detail and provide a starting point for the third. This is not intended to be the definitive guide to upgrading an existing solution to .NET 5+, but rather a collection of learnings from misadventures in upgrading two Commerce 13 solutions to date.</p>
<h2>Prerequisites</h2>
<p>Before we get started, make sure that your solution is ready to upgrade.</p>
<h3>1. Read the official documentation</h3>
<ol>
<li><a href="https://docs.developers.optimizely.com/content-cloud/v11.0.0-content-cloud/docs/upgrading-to-content-cloud-cms-12">Upgrading to Content Cloud (CMS 12)</a></li>
<li><a href="https://docs.developers.optimizely.com/content-cloud/v11.0.0-content-cloud/docs/breaking-changes-in-content-cloud-cms-12">Breaking changes in Content Cloud (CMS 12)</a></li>
<li><a href="https://docs.developers.optimizely.com/content-cloud/v12.0.0-content-cloud/docs/system-requirements-for-optimizely">System requirements for Optimizely (CMS 12)</a></li>
</ol>
<h3>2. Be on .NET Framework 4.7.2 or higher</h3>
<p>CMS 11 only <a href="https://docs.developers.optimizely.com/content-cloud/v11.0.0-content-cloud/docs/system-requirements-for-optimizely">requires</a> .NET Framework 4.6.1, but Microsoft <a href="https://docs.microsoft.com/en-us/dotnet/core/porting/premigration-needed-changes">recommends</a> being on 4.7.2 or higher when using Upgrade-Assistant.</p>
<h3>3. Update to the latest version of CMS 11 (Commerce 13) before upgrading</h3>
<p>The official documentation doesn't explicitly say to do this, but is there any reason <em>not</em> to?</p>
<h3>4. Check the status of add-on packages</h3>
<p>Optimizely maintains a list of the .NET 5 migration status of the official platform and add-on NuGet packages:</p>
<ul>
<li><a href="https://docs.developers.optimizely.com/integrations/v1.1.0-apps-and-integrations/docs/add-ons-platform-compatibility">Add-ons platform compatibility (Optimizely Developer Docs)</a></li>
<li><a href="/link/42b7913c321d4886b468000231d9baa4.aspx">Add-Ons Status (Optimizely.com)</a></li>
</ul>
<p>No such list exists for unofficial add-ons (as of this writing). So, when planning the upgrade, give yourself time to check the status of your favorite third party add-ons. Having no workaround for unsupported add-ons could derail your whole upgrade project. Know what you’re getting into.</p>
<p>Note that some old .NET Framework add-ons will still work, just with a warning. For example, there is no .NET Core package for Authorize.Net, but your .NET 5 solution will still compile and run with it installed.</p>
<h2>Phase 1: Upgrade-Assistant</h2>
<p>Once you have reviewed the prerequisites and your solution is ready, it's time to start making changes. The <a href="https://dotnet.microsoft.com/en-us/platform/upgrade-assistant">.NET Upgrade Assistant</a> is Microsoft's CLI tool for upgrading .NET Framework solutions to .NET 5+.</p>
<p>Read and bookmark the official Optimizely documentation: <a href="https://docs.developers.optimizely.com/content-cloud/v12.0.0-content-cloud/docs/upgrade-assistant">Upgrade Assistant</a></p>
<p><strong>Important</strong>: The following steps, under this Upgrade-Assistant heading, should be performed in the sequence in which they are listed.</p>
<h3>5. Delete Commerce Manager</h3>
<p>As of Commerce 14, Commerce Manager is no more. 👏</p>
<p>Remove the Commerce Manager project before getting started with Upgrade-Assistant. Take note that some of Commerce Manager’s functionality hasn’t been ported over to the CMS yet, so the following can only be done with APIs:</p>
<ul>
<li>Importing and exporting catalogs</li>
<li>Adding countries and regions</li>
<li>Adding currencies</li>
<li>Working with business objects</li>
<li>Working with catalog and order meta classes and fields</li>
</ul>
<h3>6. Delete <code>node_modules</code></h3>
<p>As a first step, Upgrade-Assistant copies all files in your solution/project folder into a Backup directory. If you have NPM's <code>node_modules</code> directory in the solution you are upgrading, you <em>probably</em> want to delete it first so you don't have to sit around waiting for it to get backed up. Upgrade-Assistant's backup step can be disabled, but to play it safe, delete your <code>node_modules</code> folder before moving forward.</p>
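<p>From a terminal at the solution root, that can be as simple as the following (the project path is a placeholder for your own layout):</p>
<pre># bash syntax; in cmd, use: rmdir /s /q MySolution.Web\node_modules
rm -rf MySolution.Web/node_modules</pre>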
<h3>7. Use Opti’s Upgrade-Assistant-Extensions</h3>
<p>Upgrade-Assistant can be extended to automatically execute additional commands. Optimizely has a public GitHub repo for their own Upgrade-Assistant extensions which provide some Opti-specific capabilities:</p>
<ul>
<li>String Replacement</li>
<li>Remove Default Argument for the TemplateDescriptor Attribute</li>
<li>Base Class Mapping</li>
<li>Replace IFindUIConfiguration with FindOptions</li>
<li>Remove PropertyData ParseToObject method overrides</li>
<li>Remove obsolete using statements like Mediachase.BusinessFoundation</li>
<li>Type Mapping like EPiServer.Web.Routing to EPiServer.Core.Routing</li>
</ul>
<p>Additionally, NuGet packages can be specified, and templates for <code>Program.cs</code> and <code>Startup.cs</code> (required by .NET 5+) can be added as well.</p>
<p>Read how it works on GitHub (there are a couple gotchas): <a href="https://github.com/episerver/upgrade-assistant-extensions">Upgrade Assistant Extensions</a>. Check the <a href="https://github.com/episerver/upgrade-assistant-extensions/releases">Releases</a> page to learn what the configuration options are and how to use them.</p>
<p>Note that, although Upgrade-Assistant-Extensions will do some nice things for you out of the box (e.g., replace <code>BlockController</code>s with <code>BlockComponent</code>s), <em>do</em> expect to spend time customizing the config for string/type/class replacements.</p>
<p>How to get it ready:</p>
<ol>
<li>Download the latest release (Epi.Source.Updater.X.Y.Z.zip): <a href="https://github.com/episerver/upgrade-assistant-extensions/releases">Releases</a></li>
<li>Unzip it to your local file system, such as <code>C:\Temp\Epi.Source.Updater\</code>.</li>
<li>Make your preferred configuration changes.</li>
</ol>
<h3>8. Make a plan-of-attack before running Upgrade-Assistant</h3>
<p>Upgrade-Assistant can run against a Solution file (<code>.sln</code>) or Project file (<code>.csproj</code>). If you run it against the Solution, it is smart enough to analyze your project dependency tree and execute against one project at a time, in order, starting with those that have no project dependencies themselves.</p>
<p>For example, consider a fictitious onion architecture inspired <code>MySolution.sln</code>. If you run Upgrade-Assistant against the <code>.sln</code>, it will execute against each project in this order:</p>
<ol>
<li><code>MySolution.Domain.csproj</code>
<ul>
<li>Depends on nothing</li>
</ul>
</li>
<li><code>MySolution.Application.csproj</code>
<ul>
<li>Depends on Domain</li>
</ul>
</li>
<li><code>MySolution.Web.csproj</code>
<ul>
<li>Depends on Application, which depends on Domain</li>
</ul>
</li>
</ol>
<h3>9. Consider doing one CSPROJ at a time</h3>
<p>Upgrade-Assistant will track progress and start where it left off if you cancel it at any time. But—do figure out the dependency sequence first and consider running UA manually against each Project. This will allow you to resolve code issues in isolation on a per-Project basis without getting confused about where you are with UA. Especially if you find yourself mindlessly jamming that Enter key while it runs.</p>
<p>For example,</p>
<ol>
<li><code>MySolution.Domain.csproj</code>
<ol>
<li>Run Upgrade-Assistant</li>
<li>Fix code issues</li>
<li>Commit to source control</li>
</ol>
</li>
<li><code>MySolution.Application.csproj</code>
<ol>
<li>Run Upgrade-Assistant</li>
<li>Fix code issues</li>
<li>Commit to source control</li>
</ol>
</li>
<li><code>MySolution.Web.csproj</code>
<ol>
<li>Run Upgrade-Assistant</li>
<li>Fix code issues</li>
<li>Commit to source control</li>
</ol>
</li>
</ol>
<h3>10. Think about which flags to use</h3>
<p>Upgrade-Assistant has a number of flags that can modify execution behavior.</p>
<p>The basic UA syntax, if your terminal is at the solution root, is the following:</p>
<pre>upgrade-assistant upgrade MySolution.Web/MySolution.Web.csproj --flags-go-here</pre>
<p>Consider using the following flags:</p>
<p><code>--extension "c:\temp\epi.source.updater"</code><br /><code>--target-tfm-support LTS</code> <br />These two flags enable Opti’s Upgrade-Assistant-Extensions.</p>
<p><code>--ignore-unsupported-features</code> <br />This is required for upgrading the web application CSPROJ.</p>
<p><code>--skip-backup</code> <br />Without this, UA will copy all solution files into <code>/Backup/</code> first (RE: deleting <code>node_modules</code>). But... don’t you have source control?</p>
<p><code>--non-interactive</code> <br />Officially: Microsoft’s documentation says that Upgrade-Assistant is meant to be interactive, and that you should think twice about using this flag. <br />Unofficially: If you don’t use this flag, you will be sitting at your keyboard, pressing Enter repeatedly, for hours.</p>
<h3>11. Install and update Upgrade-Assistant</h3>
<p>To install Upgrade-Assistant globally on your local machine, open a terminal from anywhere and enter the following:</p>
<pre>dotnet tool install -g upgrade-assistant<br />dotnet tool update -g upgrade-assistant
</pre>
<h3>12. Run Upgrade-Assistant</h3>
<p>If you've made it this far, you're finally ready to run Upgrade-Assistant.</p>
<p>From a terminal in your solution root (recommended):</p>
<div>
<pre>set DefaultTargetFrameworks__LTS=net5.0
</pre>
</div>
<p>This ☝ is required by Upgrade-Assistant-Extensions. (The <code>set</code> syntax above is for cmd; in PowerShell, use <code>$env:DefaultTargetFrameworks__LTS = "net5.0"</code> instead.)</p>
<p>Then, with the framework set, enter:</p>
<div>
<pre>upgrade-assistant upgrade MySolution.Web/MySolution.Web.csproj<br /> --ignore-unsupported-features<br /> --skip-backup<br /> --non-interactive<br /> --extension "c:\temp\epi.source.updater"<br /> --target-tfm-support LTS
</pre>
</div>
<p>This ☝ is written on multiple lines for readability, not for copy-paste.</p>
<p>At this point, Upgrade-Assistant starts doing its thing.</p>
<h3>13. Wait</h3>
<p>Upgrade-Assistant can take anywhere from several minutes to several hours, depending on the size of your solution.</p>
<h3>14. Review the code changes</h3>
<p>Here are some of the changes you should expect to see when upgrading your web application solution project.</p>
<p><code>+ Properties/launchSettings.json</code> <br />Local server/IIS Express settings. Note that .NET 5+ runs on HTTPS by default!</p>
<p><code>+ appsettings.Development.json</code> <br /><code>+ appsettings.json</code> <br />Where your Web.config appSettings and connectionStrings went. TBD on guidance from the DXP team...</p>
<p><code>- packages.config</code> <br />Packages are now referenced in the CSPROJ files.</p>
<p><code>+ Program.cs</code> <br /><code>+ Startup.old.cs</code> <br />Program.cs and Startup.cs will need to be ported over. Look at Foundation for inspiration: <a href="https://github.com/episerver/Foundation/tree/main/src/Foundation">GitHub</a>.</p>
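<p>For reference, a minimal CMS 12 <code>Program.cs</code> looks roughly like this (a sketch based on the Optimizely CMS 12 templates; adjust namespaces and the <code>Startup</code> class name to your own solution):</p>
<div class="highlight highlight-source-cs position-relative overflow-auto">
<pre class="language-csharp"><code>// Program.cs (sketch; assumes the EPiServer.CMS.AspNetCore packages are installed)
using Microsoft.AspNetCore.Hosting;
using Microsoft.Extensions.Hosting;

public class Program
{
    public static void Main(string[] args) =&gt;
        Host.CreateDefaultBuilder(args)
            .ConfigureCmsDefaults() // Optimizely's host defaults for CMS 12
            .ConfigureWebHostDefaults(webBuilder =&gt; webBuilder.UseStartup&lt;Startup&gt;())
            .Build()
            .Run();
}</code></pre>
</div>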
<h3>15. Commit the broken code</h3>
<p>Be sure to commit the code at this stage, even though it is broken. This way, if your code fixes go sideways, you can easily go back to the state immediately after running the Upgrade-Assistant.</p>
<p>Do check in <code>.upgrade-assistant</code>. This is where UA internally tracks its own progress, allowing it to pick up where it left off if you need to shut down along the way. Jot down a reminder to delete this file once the upgrade is complete. It is not needed by the solution in any way.</p>
<p>Make a mental note to commit frequently from this point on. Committing progress on code fixes along the way can be a lifesaver.</p>
<h3>16. Delete leftover assembly dependencies</h3>
<p>Some .NET Framework System assemblies will not have corresponding packages and will get orphaned in the Dependencies > Assemblies node. Unless any of these were explicitly added by your implementation, you should be free to delete them.</p>
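<p>After the upgrade, these orphans typically show up in the CSPROJ as bare <code>&lt;Reference&gt;</code> elements, which you can delete directly (the assembly name below is just an example):</p>
<pre>&lt;ItemGroup&gt;
  &lt;Reference Include="System.Web" /&gt;
&lt;/ItemGroup&gt;</pre>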
<h3>17. Uninstall obsolete NuGet packages</h3>
<p>Some <code>EPiServer</code> packages will need to be replaced entirely (i.e., removed and replaced with something else). These are listed in the official documentation: <a href="https://docs.developers.optimizely.com/content-cloud/v12.0.0-content-cloud/docs/breaking-changes-in-content-cloud-cms-12">Breaking changes in Content Cloud (CMS 12)</a>.</p>
<p>In summary, the following <code>EPiServer</code> packages must be uninstalled:</p>
<ul>
<li><code>EPiServer.CMS.AspNet</code></li>
<li><code>EPiServer.Framework.AspNet</code></li>
<li><code>EPiServer.ServiceLocation.StructureMap</code></li>
<li><code>EPiServer.Logging.Log4Net</code></li>
</ul>
<p>NuGet Package Manager reveals, to a sharp eye, which packages must be removed. For example:</p>
<ul>
<li>The latest version of <code>EPiServer.CMS.AspNet</code> is <code>11.x</code>, so you know this one must be replaced.</li>
<li>But the latest version of, say, <code>EPiServer.CMS.UI.AspNetIdentity</code> is <code>12</code>+, so you know this can be updated.</li>
</ul>
<h3>18. Manually resolve package errors</h3>
<p>If you've made it this far, the following error has probably started plaguing your attempts to build the solution:</p>
<p><em>NU1107: Version conflict detected for Xyz. Install/reference Xyz 1.2.3 directly to project MySolution.Web to resolve this issue.</em></p>
<p>Here is what Microsoft says about this error: <a href="https://docs.microsoft.com/en-us/nuget/reference/errors-and-warnings/nu1107">NuGet Error NU1107</a>.</p>
<p><em><strong>Issue</strong></em><br /><em>Unable to resolve dependency constraints between packages. Two different packages are asking for two different versions of 'PackageA'. The project needs to choose which version of 'PackageA' to use.</em><br /><br /><em><strong>Solution</strong></em><br /><em>Install/reference 'PackageA' directly (in the project file) with the exact version that you choose. Generally, picking the higher version is the right choice.</em></p>
<p>In other words, this error can be addressed by doing the following for each package that Visual Studio complains about:</p>
<ol>
<li>Open your new CSPROJ file (double-click the project in Solution).</li>
<li>Find where all the <code>&lt;PackageReference /&gt;</code> elements are.</li>
<li>Manually add the package reference it is complaining about, e.g., <br /><code>&lt;PackageReference Include="Xyz" Version="1.2.3" /&gt;</code></li>
</ol>
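<p>Afterward, the relevant part of the CSPROJ might look something like this (package names and versions are placeholders):</p>
<pre>&lt;ItemGroup&gt;
  &lt;PackageReference Include="EPiServer.CMS" Version="12.0.3" /&gt;
  &lt;!-- Added directly to resolve the version conflict --&gt;
  &lt;PackageReference Include="Xyz" Version="1.2.3" /&gt;
&lt;/ItemGroup&gt;</pre>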
<p>This manual process might result in your Project(s) taking on dependencies that aren't actually needed. When the upgrade is complete, go through each Project's dependencies and clean out the ones that are unused.</p>
<p><a href="https://www.jetbrains.com/resharper/">ReSharper</a> has an Optimize References tool that can help you explore whether and how each dependency is used. Right-click on a Project's Packages node (under its Dependencies node) to access this tool.</p>
<p>When doing this, be careful not to delete the central <code>EPiServer</code> product packages from your web application Project, such as <code>EPiServer.CMS</code>, <code>EPiServer.Commerce</code>, <code>EPiServer.Find</code>, <code>EPiServer.Forms</code>, etc.</p>
<h3>19. Update NuGet packages</h3>
<p>At this point, the Project should be ready for updating its NuGet packages.</p>
<p>As mentioned above, there are a couple <code>EPiServer</code> packages that must be <em>replaced</em>:</p>
<ul>
<li><code>EPiServer.Framework.AspNet</code> should be replaced with <code>EPiServer.Framework.AspNetCore</code>.</li>
<li><code>EPiServer.CMS.AspNet</code> should be replaced with the following:
<ul>
<li><code>EPiServer.CMS.AspNetCore</code></li>
<li><code>EPiServer.CMS.AspNetCore.Templating</code></li>
<li><code>EPiServer.CMS.AspNetCore.Routing</code></li>
<li><code>EPiServer.CMS.AspNetCore.Mvc</code></li>
<li><code>EPiServer.CMS.AspNetCore.HtmlHelpers</code></li>
</ul>
</li>
</ul>
<h3>20. Address known breaking changes</h3>
<p>There are too many <code>EPiServer</code> breaking changes to list here. If you haven't already, go through the official breaking changes documentation and make sure each is taken care of before moving on: <a href="https://docs.developers.optimizely.com/content-cloud/v11.0.0-content-cloud/docs/breaking-changes-in-content-cloud-cms-12">Breaking Changes in Content Cloud (CMS 12)</a>. It is a dense read, but worth it.</p>
<h2>Phase 2: Code Fixes</h2>
<p>The following section is a list of commonly-encountered code fixes. <strong>This is not an exhaustive list</strong> (obviously). Much of this content is about replacing <code>System.Web</code>, which was removed in .NET Core. But there are some Opti-specific topics too.</p>
<h3>21. Replace <code>HttpContextHelper</code> with <code>IHttpContextAccessor</code></h3>
<p><code>HttpContext.Current</code>, which <code>EPiServer</code> solutions tend to use liberally, was removed in .NET Core. Because this is such a common issue, Upgrade-Assistant automatically creates and adds an <code>HttpContextHelper</code> static helper class, which provides a static means of accessing the current <code>HttpContext</code>.</p>
<p>Although better than nothing, this does not obey <a href="https://en.wikipedia.org/wiki/SOLID">SOLID</a>. In .NET Core, the <code>IHttpContextAccessor</code> abstraction was introduced, which can be injected by default in ASP.NET Core, and has an <code>HttpContext</code> member property that provides best-practice access to the current request's context.</p>
<p>For example:</p>
<pre class="language-csharp"><code>// .NET Framework
string myCookie = HttpContext.Current.Request.Cookies["MyCookie"]?.Value;
// .NET Core
string myCookie = _httpContextAccessor.HttpContext?.Request.Cookies["MyCookie"];</code></pre>
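<p>For this to work, <code>IHttpContextAccessor</code> must be registered and then injected. A minimal sketch (the class and field names are illustrative):</p>
<div class="highlight highlight-source-cs position-relative overflow-auto">
<pre class="language-csharp"><code>// In Startup.ConfigureServices: register the accessor
// (not registered by default in every project template)
services.AddHttpContextAccessor();

// Then inject it wherever the current HttpContext is needed
public class CookieReader
{
    private readonly IHttpContextAccessor _httpContextAccessor;

    public CookieReader(IHttpContextAccessor httpContextAccessor) =&gt;
        _httpContextAccessor = httpContextAccessor;

    public string GetCookie(string name) =&gt;
        _httpContextAccessor.HttpContext?.Request.Cookies[name];
}</code></pre>
</div>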
<h3>22. Give yourself time to replace HttpRequest</h3>
<p>Microsoft reimagined the <code>HttpRequest</code> concept in .NET Core. Most of the legacy <code>System.Web</code> capability is still present, but in many cases has been reorganized to better conform to web and HTTP standards. Because use of the <code>HttpRequest</code> object is critical to ASP.NET solutions, expect to spend a nontrivial amount of time fixing compiler errors due to <code>HttpRequest</code> changes.</p>
<p>For example:</p>
<div class="highlight highlight-source-cs position-relative overflow-auto">
<pre class="language-csharp"><code>// .NET Framework
string userIp = httpRequest.ServerVariables["HTTP_X_FORWARDED_FOR"]
?? httpRequest.UserHostAddress;
string userAgent = httpRequest.UserAgent;
string host = httpRequest.Url.Host;
string url = httpRequest.Url.ToString();
string anonymousId = httpRequest.AnonymousID;
// .NET Core
string userIp = httpRequest.HttpContext.GetServerVariable("HTTP_X_FORWARDED_FOR")
    ?? httpRequest.HttpContext.Connection.RemoteIpAddress?.ToString();
string userAgent = httpRequest.Headers["User-Agent"];
string host = httpRequest.Host.ToString();
string url = httpRequest.GetDisplayUrl(); // or GetEncodedUrl()
// There is no AnonymousID. Roll your own!</code></pre>
</div>
<h3>23. Use <code>IHttpClientFactory</code></h3>
<p>We need to talk about <code>HttpClient</code>.</p>
<p>Managing the lifecycle of <code>HttpClient</code> in the .NET Framework was always a pain. The central point of confusion is that <code>HttpClient</code> implements <code>IDisposable</code>, but putting it in a <code>using</code> statement can lead to SNAT port exhaustion (i.e., when your web server runs out of outgoing connections and stops processing incoming requests until connections free up) and bring your entire application to its knees.</p>
<p>Much has been written on this:</p>
<ul>
<li><a href="https://www.aspnetmonsters.com/2016/08/2016-08-27-httpclientwrong/">You're using HttpClient wrong and it is destabilizing your software</a></li>
<li><a href="https://josef.codes/you-are-probably-still-using-httpclient-wrong-and-it-is-destabilizing-your-software/">You're (probably still) using HttpClient wrong and it is destabilizing your software</a></li>
<li><a href="http://byterot.blogspot.com/2016/07/singleton-httpclient-dns.html">Singleton HttpClient? Beware of this serious behaviour and how to fix it</a></li>
<li><a href="https://docs.microsoft.com/en-us/dotnet/architecture/microservices/implement-resilient-applications/use-httpclientfactory-to-implement-resilient-http-requests">Issues with the original HttpClient class available in .NET</a> (Microsoft)</li>
</ul>
<p>In practice, most .NET Framework solutions that use <code>HttpClient</code> either new up <code>HttpClient</code>s on-demand, or <em>carefully</em> roll their own DI-friendly management of the <code>HttpClient</code> lifecycle (or, more specifically, the underlying request handler which is the true source of the problem).</p>
<p>Fortunately, .NET Core introduced <code>IHttpClientFactory</code>, which makes these problems go away: <a href="https://docs.microsoft.com/en-us/dotnet/architecture/microservices/implement-resilient-applications/use-httpclientfactory-to-implement-resilient-http-requests">Use IHttpClientFactory to implement resilient HTTP requests</a>. Once configured, <code>IHttpClientFactory</code> can be injected and used to access a safe and reliable instance of <code>HttpClient</code>. Multiple <code>HttpClient</code>s can be registered per application by giving them names.</p>
<p>This is a two-step process:</p>
<ol>
<li>Register the <code>HttpClient</code>(s) as application middleware in <code>Startup.cs</code></li>
<li>Inject <code>IHttpClientFactory</code> where ever <code>HttpClient</code> is needed</li>
</ol>
<p>Example: Say we depend on a custom API that requires a client certificate...</p>
<p>In .NET Framework, the <code>HttpClient</code> might be newed-up on demand, like this:</p>
<div class="highlight highlight-source-cs position-relative overflow-auto">
<pre class="language-csharp"><code>// .NET Framework
public static HttpClient GetHttpClientForCustomApi()
{
var certificate = LoadX509Certificate2ForCustomApi(); // from file, blob, etc.
var requestHandler = new WebRequestHandler();
requestHandler.ClientCertificates.Add(certificate);
var httpClient = new HttpClient(requestHandler);
return httpClient;
}</code></pre>
</div>
<p>But in .NET Core, one must first register the <code>HttpClient</code> as middleware, and then access it via <code>IHttpClientFactory</code>, like this:</p>
<div class="highlight highlight-source-cs position-relative overflow-auto">
<pre class="language-csharp"><code>// .NET Core
// Register a named HttpClient as middleware in Startup.cs:
public static void AddHttpClientForCustomApi(this IServiceCollection services)
{
var certificate = LoadX509Certificate2ForCustomApi(); // from file, blob, etc.
var handler = new HttpClientHandler();
handler.ClientCertificates.Add(certificate);
services.AddHttpClient("CustomAPI", httpClient => { })
.ConfigurePrimaryHttpMessageHandler(() => handler);
}
// Then you can get the custom API HttpClient by name:
public static HttpClient GetHttpClientForCustomApi(IHttpClientFactory factory) =>
factory.CreateClient("CustomAPI");</code></pre>
</div>
<h3>24. Replace <code>HostingEnvironment</code> with <code>IWebHostEnvironment</code></h3>
<p><code>IWebHostEnvironment</code>, introduced in .NET Core, should be used to replace <code>HostingEnvironment</code>. One common use case is for accessing the file system:</p>
<div class="highlight highlight-source-cs position-relative overflow-auto">
<pre class="language-csharp"><code>// .NET Framework
string myFilePath = HostingEnvironment.MapPath("~/App_Data/MyFile.zip");
// .NET Core
string myFilePath = Path.Combine(_webHostEnvironment.ContentRootPath, "App_Data/MyFile.zip");</code></pre>
</div>
<h3>25. Replace Output Caching</h3>
<p>Microsoft's <code>[OutputCache]</code> and Optimizely's <code>[ContentOutputCache]</code> have been removed. Opti no longer has its own wrapper around output caching, and instead recommends to use the new .NET Core types: <a href="https://docs.developers.optimizely.com/content-cloud/v12.0.0-content-cloud/docs/caching#output-caching">Output Caching</a>.</p>
<p>It is best practice to replace .NET Framework output caching with the server-side Response Caching Middleware that is new in ASP.NET Core: <a href="https://docs.microsoft.com/en-us/aspnet/core/performance/caching/middleware">Response Caching Middleware in ASP.NET Core</a>. The <code>[ResponseCache]</code> attribute should feel familiar.</p>
<ul>
<li>Note that this is different than .NET Core's plain-vanilla "Response Caching" concept.</li>
<li>RCM does not account for whether the user is authenticated, unlike .NET Framework output caching. This is a significant departure from how output caching worked before. Do consider this when rendering content that could vary by user.</li>
<li>Be careful about the sequence in which you call <code>app.UseResponseCaching()</code>. It cannot be invoked before <code>app.UseCors()</code>.</li>
<li>ASP.NET Core introduces a new caching tool called <a href="https://docs.microsoft.com/en-us/aspnet/core/mvc/views/tag-helpers/built-in/distributed-cache-tag-helper">Cache Tag Helpers</a>. This is represented as two new Razor elements, <code>&lt;cache&gt;</code> and <code>&lt;distributed-cache&gt;</code>, which facilitate the caching of server-rendered markup from within Razor, and can be used to achieve donut caching. Although Optimizely does not have official helpers to manage <code>vary-by</code>, it's easy to imagine an extension for <code>&lt;distributed-cache&gt;</code> that uses Opti's <code>ISynchronizedObjectInstanceCache</code> under the hood.</li>
</ul>
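<p>A minimal sketch of Response Caching Middleware in use (the duration and vary-by key are illustrative):</p>
<div class="highlight highlight-source-cs position-relative overflow-auto">
<pre class="language-csharp"><code>// Startup.Configure: register the middleware (after app.UseCors(), if present)
app.UseResponseCaching();

// Controller: cache this action's response for 60 seconds,
// varying by the "page" query string parameter
[ResponseCache(Duration = 60, VaryByQueryKeys = new[] { "page" })]
public IActionResult Index()
{
    return View();
}</code></pre>
</div>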
<h3>26. Replace <code>RouteTable</code> with <code>UseEndpoints</code></h3>
<p><code>RouteTable</code> is from <code>System.Web</code> and no longer exists. Optimizely controllers are automatically routed by the CMS, and custom API controllers <em>should</em> use attribute routing. But there are some scenarios where custom routes will need to be manually registered. ASP.NET Core introduces <code>app.UseEndpoints()</code> to register custom routes as middleware.</p>
<p>Example:</p>
<div class="highlight highlight-source-cs position-relative overflow-auto">
<pre class="language-csharp"><code>// .NET Framework
RouteTable.Routes.MapRoute(
"RobotsTxtRoute", "robots.txt",
new { controller = "RobotsTxt", action = "Index" });
// .NET Core
app.UseEndpoints(endpoints =>
{
endpoints.MapControllerRoute(
"RobotsTxtRoute", "robots.txt",
new { controller = "RobotsTxt", action = "Index" });
});</code></pre>
</div>
<h3>27. Use async controller methods</h3>
<p>Although there probably aren't many use cases for it, <code>PageController</code>s and <code>ContentController</code>s (Commerce) can now be <em>async all the way</em>. Note that partial content controllers, such as <code>BlockComponent</code>s (f.k.a. <code>BlockController</code>s), must still be synchronous.</p>
<div class="highlight highlight-source-cs position-relative overflow-auto">
<pre class="language-csharp"><code>// .NET Framework
public ActionResult Index(HomePage currentPage) {}
public ActionResult Index(StandardProduct currentContent) {}
// .NET Core
public async Task&lt;ActionResult&gt; IndexAsync(HomePage currentPage) {}
public async Task&lt;ActionResult&gt; IndexAsync(StandardProduct currentContent) {}</code></pre>
</div>
<h3>28. Delete <code>SessionStateBehavior.Disabled</code></h3>
<p>In the .NET Framework, MVC controllers would—by default—handle multiple incoming requests within a single session synchronously. That is, if a user’s browser were to issue, say, 3 requests at the same time, ASP.NET would execute them one at a time in a FIFO sequence. At scale, this would lead to performance issues because the browser would be held in limbo as ASP.NET took its time processing one request at a time.</p>
<p>This could be mitigated with the <code>SessionState</code> attribute on controllers, which tells ASP.NET to disable session state for that controller and therefore process its requests concurrently:</p>
<div class="highlight highlight-source-cs position-relative overflow-auto">
<pre class="language-csharp"><code>[SessionState(SessionStateBehavior.Disabled)]
public class MyController : Controller</code></pre>
</div>
<p>In .NET Core, however, this concurrent behavior is the default. So, the <code>SessionState</code> attribute is no longer needed—in fact, it no longer exists—and must be removed.</p>
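<p>Relatedly, session state itself is opt-in in ASP.NET Core. If any code still depends on <code>HttpContext.Session</code>, it must be registered explicitly; a minimal sketch in <code>Startup</code> (default in-memory store, options omitted):</p>
<div class="highlight highlight-source-cs position-relative overflow-auto">
<pre class="language-csharp"><code>public void ConfigureServices(IServiceCollection services)
{
    // Session requires a cache-backed store and explicit registration in ASP.NET Core
    services.AddDistributedMemoryCache();
    services.AddSession();
}

public void Configure(IApplicationBuilder app)
{
    // Must be registered before any middleware that reads HttpContext.Session
    app.UseSession();
}</code></pre>
</div>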
<h3>29. Take care when replacing <code>Newtonsoft.Json</code> with <code>System.Text.Json</code></h3>
<p>.NET Core introduced a performant JSON toolkit in <code>System.Text.Json</code>. It isn't as feature-rich as Newtonsoft, but it is the default (and preferred) JSON de/serializer in ASP.NET Core. That said, take care not to blindly migrate serializable types over to STJ: Optimizely Search & Navigation still uses Newtonsoft under the hood, STJ ships with a new set of attributes, and the default serialization settings are not the same.</p>
<p>Some areas to test when migrating to STJ:</p>
<ul>
<li>API controller requests and response models</li>
<li>Anything that is indexed or projected with Opti Search & Nav</li>
<li>External API client requests and responses</li>
</ul>
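<p>As one concrete example of the attribute differences, a type serialized by both libraries needs both sets of attributes. The DTO and property names below are illustrative:</p>
<div class="highlight highlight-source-cs position-relative overflow-auto">
<pre class="language-csharp"><code>using Newtonsoft.Json;
using System.Text.Json.Serialization;

public class ProductDto
{
    // Honored by Newtonsoft.Json (e.g., Search & Navigation indexing)
    [JsonProperty("displayName")]
    // Honored by System.Text.Json (e.g., ASP.NET Core API controllers)
    [JsonPropertyName("displayName")]
    public string DisplayName { get; set; }
}</code></pre>
</div>
<p>Also note that ASP.NET Core serializes JSON responses in camelCase by default, whereas Newtonsoft preserves property names unless a contract resolver says otherwise.</p>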
<h3>30. Don't get confused by authorization action filters</h3>
<p>Authorization filters did not receive a major reworking between .NET Framework and .NET Core. They remain useful for triggering custom behavior when an authentication check succeeds or fails, but best-practice code examples can be difficult to find on the web due to the many versions and variations in circulation.</p>
<p>Do implement both <code>ActionFilterAttribute</code> and <code>IAuthorizationFilter</code>:</p>
<div class="highlight highlight-source-cs position-relative overflow-auto">
<pre class="language-csharp"><code>public class AuthenticationRequiredAttribute
: ActionFilterAttribute, IAuthorizationFilter {}</code></pre>
</div>
<p>Note that the <code>OnAuthorization</code> signature changed slightly:</p>
<div class="highlight highlight-source-cs position-relative overflow-auto">
<pre class="language-csharp"><code>// .NET Framework
public void OnAuthorization(AuthorizationContext filterContext) {}
// .NET Core
public void OnAuthorization(AuthorizationFilterContext filterContext) {}</code></pre>
</div>
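<p>Putting the two together, a minimal sketch might look like this. The unauthenticated-user check and the <code>UnauthorizedResult</code> short-circuit are illustrative choices, not the only options:</p>
<div class="highlight highlight-source-cs position-relative overflow-auto">
<pre class="language-csharp"><code>using Microsoft.AspNetCore.Mvc;
using Microsoft.AspNetCore.Mvc.Filters;

public class AuthenticationRequiredAttribute
    : ActionFilterAttribute, IAuthorizationFilter
{
    public void OnAuthorization(AuthorizationFilterContext filterContext)
    {
        // Short-circuit the pipeline when the user is not authenticated
        if (filterContext.HttpContext.User?.Identity?.IsAuthenticated != true)
        {
            filterContext.Result = new UnauthorizedResult();
        }
    }
}</code></pre>
</div>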
<h3>31. Check virtual roles in appsettings.json</h3>
<p>If administrators are unable to assign users to the WebAdmins or Administrators user groups in CMS Admin, try explicitly setting the mapped roles in <code>appsettings.json</code>.</p>
<p>Example (note that this is <em>not</em> exhaustive):</p>
<div class="highlight highlight-source-json position-relative overflow-auto">
<pre class="language-javascript"><code>{
  "EPiServer": {
    "Cms": {
      "MappedRoles": {
        "Items": {
          "CmsAdmins": {
            "MappedRoles": ["WebAdmins", "Administrators"],
            "ShouldMatchAll": "false"
          },
          "CmsEditors": {
            "MappedRoles": ["WebEditors"],
            "ShouldMatchAll": "false"
          },
          "CommerceAdmins": {
            "MappedRoles": ["WebAdmins", "Administrators"],
            "ShouldMatchAll": "false"
          }
        }
      }
    }
  }
}</code></pre>
</div>
<h3>32. Remove VisitorGroupHelper</h3>
<p>This is not defined in the official list of <a href="https://docs.developers.optimizely.com/content-cloud/v11.0.0-content-cloud/docs/breaking-changes-in-content-cloud-cms-12">breaking changes</a>, but <code>VisitorGroupHelper</code> was removed in CMS 12. One of its use cases was checking to see whether a user meets the criteria of a Visitor Group. This can now be done with only a dependency on <code>IVisitorGroupRepository</code>.</p>
<p>Example:</p>
<div class="highlight highlight-source-cs position-relative overflow-auto">
<pre class="language-csharp"><code>// CMS 11
public IEnumerable<string> GetVisitorGroupIds()
{
    var helper = new VisitorGroupHelper(_visitorGroupRoleRepository);
    foreach (var visitorGroup in _visitorGroupRepository.List())
    {
        if (visitorGroup != null
            && helper.IsPrincipalInGroup(PrincipalInfo.CurrentPrincipal, visitorGroup.Name))
        {
            yield return visitorGroup.Id.ToString();
        }
    }
}

// CMS 12
public IEnumerable<string> GetVisitorGroupIds()
{
    foreach (var visitorGroup in _visitorGroupRepository.List())
    {
        _visitorGroupRoleRepository.TryGetRole(visitorGroup.Name, out var visitorGroupRole);
        if (visitorGroupRole != null
            && visitorGroupRole.IsMatch(PrincipalInfo.CurrentPrincipal, _httpContextAccessor.HttpContext))
        {
            yield return visitorGroup.Id.ToString();
        }
    }
}</code></pre>
</div>
<h3>33. Replace ImageProcessor with ImageSharp</h3>
<p><code>ImageProcessor</code> was, for many <code>EPiServer</code> solutions, <em>the</em> go-to NuGet package for conducting dynamic image manipulation at runtime. It even had a cache-to-Azure-blob-storage add-on developed by the Optimizely community.</p>
<p><code>ImageProcessor</code> was, however, developed only for the .NET Framework and was never ported to .NET Core. Its successor, <code>ImageSharp</code>, also developed by Six Labors, has inherited this role and even ships as a dependency of <code>EPiServer.CMS.Core</code> 12+.</p>
<p><code>SixLabors.ImageSharp.Web</code> is the NuGet package that adds runtime dynamic image manipulation, such as resizing, for the web and is necessary for using <code>ImageSharp</code> to conduct image optimization. It depends on <code>SixLabors.ImageSharp</code>, which is, conveniently, a dependency for <code>EPiServer.CMS.Core</code>.</p>
<p>Side note: Vincent Baaij created an add-on that adds Azure blob caching support for <code>ImageSharp.Web</code>: <a href="https://github.com/vnbaaij/Baaijte.Optimizely.ImageSharp.Web">GitHub</a>.</p>
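<p>Wiring <code>ImageSharp.Web</code> into an upgraded solution is brief; a minimal <code>Startup</code> sketch with default options (caching and processor configuration omitted):</p>
<div class="highlight highlight-source-cs position-relative overflow-auto">
<pre class="language-csharp"><code>using SixLabors.ImageSharp.Web.DependencyInjection;

public void ConfigureServices(IServiceCollection services)
{
    // Registers the ImageSharp.Web middleware services with default options
    services.AddImageSharp();
}

public void Configure(IApplicationBuilder app)
{
    // Must run before UseStaticFiles so that image requests are intercepted
    app.UseImageSharp();
}</code></pre>
</div>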
<p>In <code>EPiServer.CMS.Core</code> 12.4.2, Opti upgraded its dependency on <code>ImageSharp</code> from version 1 to version 2. But <code>ImageSharp.Web</code> version 1 is incompatible with <code>ImageSharp</code> 2, and <code>ImageSharp.Web</code> 2 is only compatible with .NET 6, <em>not</em> .NET 5. This means that Optimizely inadvertently broke the implicit dependency on <code>ImageSharp.Web</code> when it upgraded <code>EPiServer.CMS.Core</code> from version 12.4.1 to 12.4.2. This is a significant problem because there aren't many alternatives to <code>ImageSharp</code> in the .NET 5 ecosystem, and none that are already a dependency of <code>EPiServer</code>.</p>
<p>As of this writing—May 2022—it is unclear which <code>EPiServer</code> packages <a href="/link/e03e6d0cfc68494081ab90db55640907.aspx">officially support .NET 6</a>. Because of this, there are presently two options for developers to choose from:</p>
<ol>
<li>Target .NET 5, upgrade <code>EPiServer.CMS.Core</code> no further than 12.4.1, and use <code>ImageSharp.Web</code> version 1.</li>
<li>Target .NET 6, upgrade <code>EPiServer.CMS.Core</code> to 12.4.2 or later, and use <code>ImageSharp.Web</code> version 2.</li>
</ol>
<p>Option #1 is the stable, officially supported option. Option #2 is a dice roll but doesn't get you stuck on CMS 12.4.</p>
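<p>For option #1, the pinning can be expressed directly in the project file. A sketch of the relevant entries (verify the exact version numbers against the NuGet feed before committing to them):</p>
<div class="highlight position-relative overflow-auto">
<pre class="language-markup"><code><Project Sdk="Microsoft.NET.Sdk.Web">
  <PropertyGroup>
    <TargetFramework>net5.0</TargetFramework>
  </PropertyGroup>
  <ItemGroup>
    <!-- Stay at or below 12.4.1 to keep the ImageSharp 1.x dependency -->
    <PackageReference Include="EPiServer.CMS.Core" Version="[12.0.0,12.4.1]" />
    <PackageReference Include="SixLabors.ImageSharp.Web" Version="1.*" />
  </ItemGroup>
</Project></code></pre>
</div>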
<p>More information can be found at the following forum post: <a href="/link/78f3246b2ca840deabf3a94028d460f9.aspx">EPiServer.CMS.Core 12.4.2 breaks ImageSharp</a>.</p>
<h3>34. To be continued</h3>
<p>As mentioned above, this list of code fixes is a brief subset of the issues that developers are likely to encounter when upgrading CMS 11 to 12. Do leverage the official Optimizely developer community as new problems arise. Ask questions and share your answers. Be a good Optimizely citizen!</p>
<h2>Phase 3: CMS 12 on DXP</h2>
<p>Official DXP support has not yet been announced for solutions that were upgraded from CMS 11 to 12. Optimizely hosts CMS 12+ sites on Linux, so developers can expect their DXP upgrade(s) to involve standing up an entirely new environment, rather than deploying new packages to existing environments like typical version upgrades.</p>
<p>The <a href="https://docs.developers.optimizely.com/content-cloud/v11.0.0-content-cloud/docs/upgrading-to-content-cloud-cms-12">official documentation</a> has this to say:</p>
<p><em>Once the codebase is upgraded to .NET 5, and everything works locally, DXP customers will need to migrate their service environment to the latest version using migration tool that will soon be available in <a href="https://paasportal.episerver.net">the portal</a> (paasportal.episerver.net).</em></p>
<p>To my knowledge, no CMS 11-to-12 solutions have been deployed to DXP yet. If you know of any, or are preparing one yourself, please share your experience(s) in the comments below. Thank you!</p>Product and category URLs without the catalog slug in Commerce 14 (.NET 5)/blogs/drew-null/dates/2021/12/product-and-category-urls-without-the-catalog-slug-in-commerce-14--net-5---take-2/2022-02-06T02:50:52.0000000Z<p>This one stumped me the other day, and I couldn't find anything by doing a web or World search.</p>
<h3>Problem: How do we exclude the catalog route segment from the URLs of categories and products?</h3>
<p>Say you have the following catalog structure:</p>
<pre class="language-markup"><code>Catalog Root
    My Catalog          // CatalogContent
        My Category     // NodeContent
            My Product  // ProductContent
                My Variant  // VariationContent</code></pre>
<p>By default, the URL to My Variant will look like this:</p>
<pre class="language-markup"><code>/my-catalog/my-category/my-variant</code></pre>
<p>But <span style="font-family: terminal, monaco, monospace;">/my-catalog</span> won't resolve to an actual page, so we want to remove it from the URL. Which would render like this:</p>
<pre class="language-markup"><code>/my-category/my-variant</code></pre>
<p>Much better. But how do we do this in Commerce 14?</p>
<p>Prior to Commerce 14, this could be done using <span style="font-family: terminal, monaco, monospace;">System.Web.Routing</span>'s <span style="font-family: terminal, monaco, monospace;">RouteTable</span>. But that was killed off in ASP.NET Core, so we need another way.</p>
<h3><span style="font-family: terminal, monaco, monospace;">PartialRouteHandler </span>to the rescue.</h3>
<p>This is pretty simple: Create an initialization module and use <span style="font-family: terminal, monaco, monospace;">PartialRouteHandler </span>to register a <span style="font-family: terminal, monaco, monospace;">PageData</span>-to-<span style="font-family: terminal, monaco, monospace;">CatalogContentBase</span> partial router. No custom implementation needed.</p>
<p>Note that how you, say, resolve a catalog to a site, could vary based on the needs of your project. In the example below, we resolve all catalogs to all sites. </p>
<pre class="language-csharp"><code>using EPiServer;
using EPiServer.Commerce.Catalog.ContentTypes;
using EPiServer.Core;
using EPiServer.Core.Routing;
using EPiServer.Framework;
using EPiServer.Framework.Initialization;
using EPiServer.ServiceLocation;
using Mediachase.Commerce.Catalog;

[InitializableModule]
[ModuleDependency(typeof(EPiServer.Web.InitializationModule))]
[ModuleDependency(typeof(EPiServer.Commerce.Initialization.InitializationModule))]
public class CustomHierarchicalCatalogPartialRouterInitialization : IInitializableModule
{
    public void Initialize(InitializationEngine context)
    {
        /* MapDefaultHierarchialRouter [sic] is no longer needed... */
        ////CatalogRouteHelper.MapDefaultHierarchialRouter(false);

        var referenceConverter = ServiceLocator.Current.GetInstance<ReferenceConverter>();
        var contentLoader = ServiceLocator.Current.GetInstance<IContentLoader>();
        var partialRouteHandler = ServiceLocator.Current.GetInstance<PartialRouteHandler>();
        var catalogs = contentLoader.GetChildren<CatalogContentBase>(referenceConverter.GetRootLink());
        foreach (var catalog in catalogs)
        {
            // This implementation registers all catalogs for all sites;
            // if you want it to work differently, do so here
            partialRouteHandler.RegisterPartialRouter(
                new PartialRouter<PageData, CatalogContentBase>(
                    new HierarchicalCatalogPartialRouter(() => ContentReference.StartPage, catalog, false)));
        }
    }

    public void Uninitialize(InitializationEngine context)
    {
    }
}</code></pre>
<p>And that's it.</p>
<p>Your category, product, and variant pages (and bundles/packages) should resolve without including the catalog slug in the URL. And <span style="font-family: terminal, monaco, monospace;">UrlResolver </span>should now give you the catalog-less URL on the frontend and in the CMS backoffice.</p>Episerver Find Wildcard Queries and Best Bets/blogs/drew-null/dates/2018/4/find-wildcardquery-and-best-bets---a-workaround/2018-04-15T21:05:02.0000000Z<!DOCTYPE html>
<html>
<head>
</head>
<body>
<p>I have used the approach detailed in Joel Abrahamsson's 2012 blog post, <a href="http://joelabrahamsson.com/wildcard-queries-with-episerver-find/">Wildcard Queries with Episerver Find</a>, for quite a while. The Episerver Find built-in WildcardQuery has some important advantages. Notably, it provides a means to boost results that have wildcard search hits against a specific field or set of fields. But, in practice, wildcards are only one piece in the puzzle of constructing a good search experience for the user. </p>
<p>The purpose of this blog post is to address some of the challenges that come up when using WildcardQuery: </p>
<ul>
<li>Best Bets</li>
<li>Multiple Fields</li>
<li>Multiple Words</li>
<li>Apostrophes</li>
</ul>
<h3>Getting Started</h3>
<p>The code block below is the base query that we'll be working with. For the uninitiated, I've taken Joel's extension method and made one key update: asterisks are added to the query string within the method itself. </p>
<pre class="language-csharp"><code>public static ITypeSearch<T> WildcardSearch<T>(this ITypeSearch<T> search,
    string query, Expression<Func<T, string>> fieldSelector, double? boost = null)
{
    query = query?.ToLowerInvariant();
    query = WrapInAsterisks(query);
    var fieldName = search.Client.Conventions
        .FieldNameConvention
        .GetFieldNameForAnalyzed(fieldSelector);
    var wildcardQuery = new WildcardQuery(fieldName, query)
    {
        Boost = boost
    };
    return new Search<T, WildcardQuery>(search, context =>
    {
        if (context.RequestBody.Query != null)
        {
            var boolQuery = new BoolQuery();
            boolQuery.Should.Add(context.RequestBody.Query);
            boolQuery.Should.Add(wildcardQuery);
            boolQuery.MinimumNumberShouldMatch = 1;
            context.RequestBody.Query = boolQuery;
        }
        else
        {
            context.RequestBody.Query = wildcardQuery;
        }
    });
}

public static string WrapInAsterisks(string input)
{
    return string.IsNullOrWhiteSpace(input) ? "*" : $"*{input.Trim().Trim('*')}*";
}</code></pre>
<p>In Joel's version, the asterisks were added by the consuming code. But here, if the query "viol" is passed, it will convert it to "*viol*" itself, which will match against both of the words "violin" and "viola". </p>
<p>This extension method can be called as follows: </p>
<pre class="language-csharp"><code>string query = "viol";
double pageNameBoost = 1.5;
var result = SearchClient.Instance.Search<PageData>()
    .WildcardSearch(query, x => x.PageName, pageNameBoost)
    .GetPagesResult();</code></pre>
<h3>Best Bets</h3>
<p>One of the challenges of using wildcards is getting them to work with Episerver Find's Best Bets. Because wildcard queries use query strings with asterisks, best bets do not match. Consider the following example...</p>
<p>Say we have defined a Best Bet with the phrases "violin", "viola", and "viol", pointing to a music teacher profile page: "Chen, L.", our primary music teacher for violins and violas. Whenever a user searches for "viol", the Best Bet is found, and the "Chen, L." teacher profile page appears at the top of the results.</p>
<p>But our site requirements also state that search should support partial word matches. Which leads us to use the WildcardSearch method defined above.</p>
<p>This is a problem because Best Bets are not wildcard enabled. Best Bet lookup doesn't treat an asterisk any differently than, say, an "a" or a "3". So when our WildcardSearch() method passes the phrase "*viol*" to Find, the string doesn't match on any Best Bet, and the "Chen, L." teacher profile page does not (necessarily) appear at the top of the results.</p>
<p>Note that the Find admin UI does not permit special characters, so even if we wanted to add a best bet for "*viol*" -- not that we should -- the system wouldn't allow it.</p>
<p>Fortunately, Best Bets can be added by chaining a plain vanilla For() to the search object. In our consuming code: </p>
<pre class="language-csharp"><code>string query = "viol";
double pageNameBoost = 1.5;
var result = SearchClient.Instance.Search<PageData>()
    .For(query)
    .InField(x => x.PageName)
    .ApplyBestBets()
    .WildcardSearch(query, x => x.PageName, pageNameBoost)
    .GetPagesResult();</code></pre>
<p>Although repetitive, this works because WildcardSearch() ORs the query generated by For() with the WildcardQuery it uses under the hood. Which is the purpose of BoolQuery and this line: </p>
<pre class="language-csharp"><code>boolQuery.Should.Add(context.RequestBody.Query);</code></pre>
<p>InField() ensures that we only search against the field we are passing to WildcardSearch(), and avoid false positives from searching against the built-in All field.</p>
<p>We can tighten up reusability by putting these additional chains into another extension method:</p>
<pre class="language-csharp"><code>public static ITypeSearch<T> ForWithWildcards<T>(this ITypeSearch<T> search,
    string query, Expression<Func<T, string>> fieldSelector, double? boost = null)
{
    return search
        .For(query)
        .InField(fieldSelector)
        .ApplyBestBets()
        .WildcardSearch(query, fieldSelector, boost);
}</code></pre>
<p>Which would be called by the following code: </p>
<pre class="language-csharp"><code>string query = "viol";
double pageNameBoost = 1.5;
var result = SearchClient.Instance.Search<PageData>()
    .ForWithWildcards(query, x => x.PageName, pageNameBoost)
    .GetPagesResult();</code></pre>
<p>I like to keep WildcardSearch() separate from ForWithWildcards() for situations where I need to provide my own sort order instead of sorting by score. Since Best Bets are irrelevant without score, I can spare Find the load of processing the QueryStringQuery created in For().</p>
<p><strong>Side note</strong>: When the requirements call for Best Bets to appear at the top of a <em>custom-sorted</em> set of results, you can retrieve Best Bets from <em>BestBetRepository</em>, which lives in the EPiServer.Find.Framework.BestBets namespace and can be injected (or service-located) into your consuming service.</p>
<h3>Multiple Fields</h3>
<p>With some minor refactoring, ForWithWildcards() and WildcardSearch() can accept multiple fields. In C# 7, <em>System.ValueTuple</em> -- which you can install from NuGet -- makes this a trivial effort:</p>
<pre class="language-csharp"><code>public static ITypeSearch<T> ForWithWildcards<T>(this ITypeSearch<T> search,
    string query, params (Expression<Func<T, string>>, double?)[] fieldSelectors)
{
    return search
        .For(query)
        .InFields(fieldSelectors.Select(x => x.Item1).ToArray())
        .ApplyBestBets()
        .WildcardSearch(query, fieldSelectors);
}

public static ITypeSearch<T> WildcardSearch<T>(this ITypeSearch<T> search,
    string query, params (Expression<Func<T, string>>, double?)[] fieldSelectors)
{
    query = query?.ToLowerInvariant();
    query = WrapInAsterisks(query);
    var wildcardQueries = new List<WildcardQuery>();
    foreach (var fieldSelector in fieldSelectors)
    {
        string fieldName = search.Client.Conventions
            .FieldNameConvention
            .GetFieldNameForAnalyzed(fieldSelector.Item1);
        wildcardQueries.Add(new WildcardQuery(fieldName, query)
        {
            Boost = fieldSelector.Item2
        });
    }
    return new Search<T, WildcardQuery>(search, context =>
    {
        var boolQuery = new BoolQuery();
        if (context.RequestBody.Query != null)
        {
            boolQuery.Should.Add(context.RequestBody.Query);
        }
        foreach (var wildcardQuery in wildcardQueries)
        {
            boolQuery.Should.Add(wildcardQuery);
        }
        boolQuery.MinimumNumberShouldMatch = 1;
        context.RequestBody.Query = boolQuery;
    });
}</code></pre>
<p>The calling code would then look something like this (depending on which fields you want to search against): </p>
<pre class="language-csharp"><code>var result = SearchClient.Instance.Search<PageData>()
    .ForWithWildcards("viol",
        (x => x.PageName, 1.5),
        (x => x.SearchText(), null));</code></pre>
<p>ValueTuple can, of course, be replaced with your own strongly typed class, but I have used it here for brevity.</p>
<h3>Multiple Words and Apostrophes</h3>
<p>In our example above, we used the query string "viol", which WildcardSearch() mutates into "*viol*". But what if the user searches for, say, "viol lessons"? In the code above, this will become "*viol lessons*", which will not match against "violin" or "viola".</p>
<p>I like to solve this problem by splitting the query string, by whitespace, into an array, and then ORing a separate WildcardQuery per word. This is done in our WildcardSearch()... </p>
<pre class="language-csharp"><code>var words = query.Split(new[] { " " }, StringSplitOptions.RemoveEmptyEntries)
    .Select(WrapInAsterisks)
    .ToList();
...
foreach (var word in words)
{
    wildcardQueries.Add(new WildcardQuery(fieldName, word)
    {
        Boost = fieldSelector.Item2
    });
}</code></pre>
<p>Another challenge is presented by apostrophes. The Find (Elasticsearch) standard analyzer interprets apostrophes as whitespace. So the phrase, "Chen's" is indexed as "Chen s". This works with both plurals -- thanks to stemming -- and possessives, but causes trouble with other words that contain apostrophes.</p>
<p>For example, the name "O'Reilly Books" is indexed as "O Reilly Books". This presents a pattern-matching issue for our WildcardSearch() -- and Find in general -- because the code above will mutate "O'Reilly Books" into "*o'reilly*" and "*books*", which Find will then interpret as "*o reilly*" and "*books*". If the user searches for "O'Reilly", then "O'Whatever" will also appear in the result list.</p>
<p>To address this scenario, I like to convert apostrophes into asterisks. Each word of "O'Reilly Books" becomes "*o*reilly*" and "*books*" (note that there are no spaces in "*o*reilly*"). Searches for "O'Reilly Books" do match "O'Reilly", do not match "O'Whatever", and don't interfere with plurals or possessives.</p>
<pre class="language-csharp"><code>query = query.ToLowerInvariant().Replace('\'', '*');</code></pre>
<p>With multiple words and apostrophes accounted for, the final extension method code is the following: </p>
<pre class="language-csharp"><code>public static ITypeSearch<T> ForWithWildcards<T>(this ITypeSearch<T> search,
    string query, params (Expression<Func<T, string>>, double?)[] fieldSelectors)
{
    return search
        .For(query)
        .InFields(fieldSelectors.Select(x => x.Item1).ToArray())
        .ApplyBestBets()
        .WildcardSearch(query, fieldSelectors);
}

public static ITypeSearch<T> WildcardSearch<T>(this ITypeSearch<T> search,
    string query, params (Expression<Func<T, string>>, double?)[] fieldSelectors)
{
    if (string.IsNullOrWhiteSpace(query))
        return search;
    query = query.ToLowerInvariant().Replace('\'', '*');
    var words = query.Split(new[] { " " }, StringSplitOptions.RemoveEmptyEntries)
        .Select(WrapInAsterisks)
        .ToList();
    var wildcardQueries = new List<WildcardQuery>();
    foreach (var fieldSelector in fieldSelectors)
    {
        string fieldName = search.Client.Conventions
            .FieldNameConvention
            .GetFieldNameForAnalyzed(fieldSelector.Item1);
        foreach (var word in words)
        {
            wildcardQueries.Add(new WildcardQuery(fieldName, word)
            {
                Boost = fieldSelector.Item2
            });
        }
    }
    return new Search<T, WildcardQuery>(search, context =>
    {
        var boolQuery = new BoolQuery();
        if (context.RequestBody.Query != null)
        {
            boolQuery.Should.Add(context.RequestBody.Query);
        }
        foreach (var wildcardQuery in wildcardQueries)
        {
            boolQuery.Should.Add(wildcardQuery);
        }
        boolQuery.MinimumNumberShouldMatch = 1;
        context.RequestBody.Query = boolQuery;
    });
}

public static string WrapInAsterisks(string input)
{
    return string.IsNullOrWhiteSpace(input) ? "*" : $"*{input.Trim().Trim('*')}*";
}</code></pre>
<p>Enjoy!</p>
</body>
</html>Deploying to DXC from VSTS Release Management (with FTP)/blogs/drew-null/dates/2018/3/deploying-to-dxc-from-vsts-release-management-with-ftp/2018-03-28T16:20:48.0000000Z<!DOCTYPE html>
<html>
<head>
</head>
<body>
<p>The purpose of this blog post is to walk through the steps for configuring continuous deployments to an Episerver Digital Experience Cloud (DXC) integration environment using the Release Management feature of Microsoft Visual Studio Team Services (VSTS).</p>
<p>Basic deployments from VSTS to DXC are not difficult to configure. When your DXC account is created, Episerver Support provides a Visual Studio publish profile which can be executed from an MSBuild step in your VSTS build definition. Although easy to get started with, this unfortunately sidesteps the entire<span> </span><a href="https://docs.microsoft.com/en-us/vsts/build-release/concepts/definitions/release/what-is-release-management?view=vsts">Release Management</a><span> </span>feature of VSTS.</p>
<p>I work on projects in which we must continuously deploy Episerver CMS and Commerce applications to DXC as part of a larger solution. In these scenarios, Episerver may share dependencies with point-of-sale systems, mobile apps, identity servers, many flavors of middleware, and more. It is important that the release strategy accounts for all of these interconnected systems, and VSTS has just the tool for the job.</p>
<p>The following walkthrough assumes that you are building an Episerver solution, within an existing VSTS team project, for deployment to DXC. It also assumes that you are using Git for version control and have some familiarity with<span> </span><a href="http://nvie.com/posts/a-successful-git-branching-model/">GitFlow</a>.</p>
<p><em>Update (2018-03-31): It should be noted that this walkthrough uses FTP to conduct the actual deployment. But an arguably better approach is to use Azure's App Service Deploy. For this you will need to submit a ticket to Episerver Support and request the appropriate access to the Azure subscription. Thanks to <a href="/link/5341f632537c4b0ab6b8fb651bd310f8.aspx?userid=b8a75950-1020-e111-9d3b-0050568d002c">Scott Reed</a> for the reminder!</em></p>
<h3>Background: DXC and VSTS</h3>
<p>DXC is one of Episerver's differentiator offerings, and I have the pleasure of working with it on nearly all of my projects. It provides, out of the box, the benefits of Azure, Find, New Relic, official Episerver support, and much more. So my team can focus on feature development and our clients can enjoy infrastructure peace of mind. If you are unfamiliar with DXC, or are on the fence about it, take it from someone who has been using it since the beginning: You will love it.</p>
<p>I also work with VSTS on most of my projects. If you haven't tried VSTS, or haven't used it in a little while, I encourage you to<span> </span><a href="https://www.visualstudio.com/team-services/">check it out</a>. Much more than just a cloud source code repository, VSTS provides work item management, team management, build agents, deployment options, dashboarding, test plans, and integrations with many 3rd party systems, and many other features. It seamlessly relates each of these concepts so that you can follow, for example, a<span> </span><em>user story</em><span> </span>to a<span> </span><em>team member</em><span> </span>to a<span> </span><em>commit</em><span> </span>to a<span> </span><em>branch</em><span> </span>to a<span> </span><em>build</em><span> </span>to a<span> </span><em>release</em>. And Microsoft regularly adds new features, so it keeps getting better and better.</p>
<h3>Prerequisites</h3>
<p>Before getting started, you'll need each of the following ready to go:</p>
<ul>
<li>A VSTS Git branch with your Episerver CMS Visual Studio solution, ready to deploy</li>
<li>A provisioned DXC instance and<span> </span><code>.ProfileSettings</code><span> </span>file provided by Episerver Support or downloaded from the Azure Portal</li>
</ul>
<h3>Part 1: Build Definition</h3>
<ol>
<li>
<p>In VSTS, start by creating a new build definition:<span> </span><em>Build and Release</em><span> </span>><span> </span><em>Builds</em><span> </span>><span> </span><em>New</em>.</p>
</li>
<li>
<p>In the<span> </span><em>Select your repository</em><span> </span>step, use<span> </span><em>VSTS Git</em><span> </span>as the selected source and choose the branch from which you want to continuously deploy.</p>
<p><img src="https://i.imgur.com/h1vedH5.jpg" alt="New build definition - Select your repository" /></p>
<p>Hit<span> </span><strong>Continue</strong>.</p>
</li>
<li>
<p>In the<span> </span><em>Choose a template</em><span> </span>step, under<span> </span><em>Featured</em>, select<span> </span><strong>ASP.NET</strong><span> </span>(hit<span> </span><strong>Apply</strong>).</p>
<p><img src="https://i.imgur.com/VvE0RSO.jpg" alt="New build definition - Choose a template" /></p>
</li>
<li>
<p>The next screen is the<span> </span><em>Process</em><span> </span>view, under the<span> </span><em>Tasks</em><span> </span>tab of the new build definition.</p>
<p>If you don't know which agent queue to use, select<span> </span><em>Hosted VS2017</em>. Or, select<span> </span><em>Hosted</em><span> </span>if you are using a version of Visual Studio earlier than 2017.</p>
<p>If your repository has multiple VS solutions, use the<span> </span><em>Path to solution or packages.config</em><span> </span>browse button, under<span> </span><em>Parameters</em>, to select the<span> </span><code>.sln</code><span> </span>file that you want to build and deploy.</p>
<p>Hit<span> </span><strong>Save</strong><span> </span>from the<span> </span><em>Save & queue</em><span> </span>dropdown.</p>
<p><img src="https://i.imgur.com/NGsHQoj.jpg" alt="Sample build definition - Process settings" /></p>
</li>
<li>
<p>Out of the box, the<span> </span><em>ASP.NET</em><span> </span>build definition template gives six steps, which are all pretty self-explanatory. Take some time now to add your own steps, such as npm or Gulp script execution (whatever your project needs). But before we move on, let's look closely at the<span> </span><em>Build solution</em><span> </span>and<span> </span><em>Publish artifact</em><span> </span>steps.</p>
<p>Don't forget to hit<span> </span><strong>Save</strong><span> </span>when you're done.</p>
<strong>Build solution</strong>
<p>The<span> </span><em>MSBuild Arguments</em><span> </span>setting should have the following value:</p>
<p><code>/p:DeployOnBuild=true /p:WebPublishMethod=Package /p:PackageAsSingleFile=true /p:SkipInvalidConfigurations=true /p:PackageLocation="$(build.artifactstagingdirectory)\\"</code></p>
<p>These arguments tell MSBuild to publish the compiled solution to the agent's artifact staging directory. This is where the build agent places all files that will be deployed to the target system.</p>
<p>Typically, it is here that we would specify a publish profile — the one provided by Episerver Support — that MSBuild can use to build the solution and then, as a single step, deploy it to DXC. But we will do that ourselves through Release Management, so there are no changes to make here.</p>
<p><img src="https://i.imgur.com/ahexHRc.jpg" alt="Sample build definition - Build Solution step settings" /></p>
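<p>To make the intent of these switches concrete, here is a minimal Python sketch (a hypothetical helper, not part of the build definition itself) that assembles the same argument string from its individual properties. The <code>staging_dir</code> parameter stands in for the <code>$(build.artifactstagingdirectory)</code> variable:</p>

```python
# Hypothetical sketch: assemble the MSBuild publish arguments shown above.
def msbuild_publish_args(staging_dir):
    props = {
        "DeployOnBuild": "true",                 # run the publish pipeline as part of the build
        "WebPublishMethod": "Package",           # publish to a Web Deploy package
        "PackageAsSingleFile": "true",           # emit one .zip instead of a folder tree
        "SkipInvalidConfigurations": "true",     # don't fail projects lacking this configuration
        "PackageLocation": f'"{staging_dir}\\\\"',  # where the package lands
    }
    return " ".join(f"/p:{key}={value}" for key, value in props.items())

print(msbuild_publish_args("$(build.artifactstagingdirectory)"))
```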
<strong>Publish Artifact</strong>
<p><em>Path to publish</em><span> </span>is set using the built-in<span> </span><code>$(build.artifactstagingdirectory)</code><span> </span>variable, which represents the directory to which the build agent publishes files after building the solution.</p>
<p><em>Artifact name</em><span> </span>(<code>drop</code>) specifies the name of the artifact so that we can reference it in our release definition.</p>
<p><em>Artifact publish location</em><span> </span>is set to<span> </span><strong>Visual Studio Team Services/TFS</strong>, which makes the artifact available to our release definition. We'll get to that in a moment.</p>
<p><img src="https://i.imgur.com/76Ba2Mp.jpg" alt="Sample build definition - Publish Artifact settings" /></p>
</li>
<li>
<p>Navigate to the<span> </span><em>Triggers</em><span> </span>tab in your build definition.</p>
<p>On the right, check<span> </span><em>Enable continuous integration</em>. This will instruct the agent to automatically conduct a build when any commit is pushed to the specified branch.</p>
<p>The<span> </span><em>Branch filters</em><span> </span>settings appear. The branch that was specified when we first created the build definition should be automatically included.</p>
<p>Hit<span> </span><strong>Save and queue</strong><span> </span>to save the build definition and queue a build. We want to conduct a build now to create an initial artifact that we can select in our release definition deployment process later on (Part 3).</p>
<p><img src="https://i.imgur.com/VMbJ3TX.jpg" alt="Enable continuous integration" /></p>
</li>
</ol>
<h3>Part 2: Release Definition</h3>
<ol>
<li>
<p>Now that our build is ready to go, let's create a new release definition:<span> </span><em>Build and Release</em><span> </span>><span> </span><em>Releases</em><span> </span>><span> </span><em>+</em><span> </span>><span> </span><em>Create release definition</em>.</p>
</li>
<li>
<p>A<span> </span><em>New Release Definition</em><span> </span>should open and the<span> </span><em>Select a Template</em><span> </span>menu should appear.</p>
<p>Select<span> </span><strong>Empty process</strong><span> </span>to start with a blank release definition.</p>
</li>
<li>
<p>The<span> </span><em>Environment</em><span> </span>menu should automatically appear.</p>
<p>Rename<span> </span><code>Environment 1</code><span> </span>to<span> </span><code>DXC</code>, which is much more meaningful, then close the<span> </span><em>Environment</em><span> </span>menu (it saves by itself).</p>
<p><img src="https://i.imgur.com/PJlEZ9S.jpg" alt="Sample release definition - Environment settings" /></p>
</li>
<li>
<p>Under the<span> </span><em>Pipeline</em><span> </span>tab, click<span> </span><strong>Add artifact</strong><span> </span>in the<span> </span><em>Artifacts</em><span> </span>column.</p>
<p>In the<span> </span><em>Add artifact</em><span> </span>menu, the<span> </span><em>Source type</em><span> </span>(<strong>Build</strong>) and<span> </span><em>Project</em><span> </span>should be automatically populated.</p>
<p>Under<span> </span><em>Source (Build definition)</em>, select the new build you created in Part 1.</p>
<p>Leave the defaults for<span> </span><em>Default version</em><span> </span>(<strong>Latest</strong>) and<span> </span><em>Source alias</em>, and click<span> </span><strong>Add</strong>.</p>
<p><img src="https://i.imgur.com/TEckp9W.jpg" alt="Sample release definition - Add Artifact settings" /></p>
</li>
<li>
<p>Back on the<span> </span><em>Pipeline</em><span> </span>tab, click the<span> </span><em>Continuous deployment trigger</em><span> </span>button on your newly added artifact (the button looks like a lightning bolt).</p>
<p>The<span> </span><em>Continuous deployment trigger</em><span> </span>menu opens. Click the toggle to set CD to<span> </span><strong>Enabled</strong>.</p>
<p>The<span> </span><em>Build branch filters</em><span> </span>input appears. Click the chevron button to the right of<span> </span><em>Add</em>, and select<span> </span><strong>Build Definition's default branch</strong><span> </span>from the dropdown that appears.</p>
<p>The default branch will be added to the list of filters. Close the<span> </span><em>Continuous deployment trigger</em><span> </span>menu (it saves automatically).</p>
<p><img src="https://i.imgur.com/FZmHk7L.jpg" alt="Sample release definition - Continuous deployment trigger" /></p>
</li>
<li>
<p>Lastly, let's rename the release definition.</p>
<p>Near the top, in large letters, click<span> </span><code>New Release Definition</code><span> </span>and give it a more meaningful name:<span> </span><code>DXC Release Definition</code>.</p>
<p><img src="https://i.imgur.com/EGkm6nU.jpg" alt="Sample release definition - Name" /></p>
</li>
</ol>
<h3>Part 3: Deployment Process</h3>
<ol>
<li>
<p>Still in our release definition, navigate to the<span> </span><em>Tasks</em><span> </span>tab.</p>
<p>The<span> </span><em>DXC</em><span> </span>deployment process that we named in Part 2 should be selected by default.</p>
</li>
<li>
<p>To the left of the<span> </span><em>Agent phase</em>, click the<span> </span><strong>+</strong><span> </span>(<em>Add task to the phase</em>) button.</p>
<p>The<span> </span><em>Add tasks</em><span> </span>menu should appear.</p>
</li>
<li>
<p>In the<span> </span><em>Search</em><span> </span>textbox, type:<span> </span><code>FTP Upload</code>.</p>
<p>The result list should filter with the<span> </span><em>FTP Upload</em><span> </span>task at the top.</p>
<p>Add it to the agent phase by clicking its<span> </span><strong>Add</strong><span> </span>button.</p>
<p><img src="https://i.imgur.com/bG114Eq.jpg" alt="Sample deployment process - Add FTP Upload task" /></p>
<p>The<span> </span><em>FTP Upload</em><span> </span>step is added to the agent phase, and its settings menu appears.</p>
</li>
<li>
<p>There are two options for the<span> </span><em>Authentication Method</em>. I like to define an<span> </span><em>FTP Service Endpoint</em><span> </span>for reuse with other deployments. But for the sake of simplicity, let's define the FTP settings here.</p>
<p>Select<span> </span><strong>Enter Credentials</strong>.</p>
</li>
<li>
<p>Next we need to mine our FTP credentials from the DXC Integration environment<span> </span><code>.PublishSettings</code><span> </span>file provided by Episerver Support (or downloaded from the Azure Portal).</p>
<p>Since it is just an XML file, open it with your favorite XML code editor (I use Visual Studio Code).</p>
<p>Look for the<span> </span><code>&lt;publishProfile publishMethod="FTP"&gt;</code><span> </span>element.</p>
<ul>
<li>The scheme and host of the<span> </span><code>publishUrl</code><span> </span>attribute make up the FTP<span> </span><em>Server URL</em>.</li>
<li>Our DXC integration environment name (e.g.,<span> </span><code>qjet123456inte</code>) is the FTP<span> </span><em>Username</em>.</li>
<li>The<span> </span><code>userPWD</code><span> </span>attribute is our FTP<span> </span><em>Password</em>.</li>
</ul>
<p>Enter all of these values into the<span> </span><em>FTP Upload</em><span> </span>settings.</p>
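<p>Because a <code>.PublishSettings</code> file is plain XML, the credential mining can also be scripted. Here is a hedged Python sketch (a hypothetical helper; the attribute names match what the Azure Portal emits, but verify them against your own file):</p>

```python
# Hypothetical sketch: extract the FTP credentials from a .PublishSettings file.
import xml.etree.ElementTree as ET
from urllib.parse import urlsplit

def ftp_credentials(publish_settings_xml):
    root = ET.fromstring(publish_settings_xml)
    # Find the FTP profile among the publish profiles in the file.
    profile = root.find(".//publishProfile[@publishMethod='FTP']")
    url = urlsplit(profile.get("publishUrl"))
    return {
        "server_url": f"{url.scheme}://{url.hostname}",  # FTP "Server URL"
        "username": profile.get("userName"),             # FTP "Username"
        "password": profile.get("userPWD"),              # FTP "Password"
    }
```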
</li>
<li>
<p>In the<span> </span><em>Root Folder</em><span> </span>field, click the browse (<strong>...</strong>) button and drill down to select the<span> </span><strong>drop</strong><span> </span>artifact built in Part 1.</p>
<p>If your artifact is not in the list, either your build hasn't finished yet or it failed. Troubleshoot, queue a new build if you need to, and wait for it to complete successfully before moving on. The<span> </span><strong>drop</strong><span> </span>artifact will appear in the<span> </span><em>Root folder</em><span> </span>tree once the build completes.</p>
</li>
<li>
<p>Next we need to specify the<span> </span><em>Remote directory</em>. I recommend using<span> </span><code>/temp/</code><span> </span>for now. This will cause the release to be deployed to a<span> </span><code>/temp/</code><span> </span>folder within the DXC integration environment Azure App Service. Once our release definition is complete and we have tested it with our build, then we can set the<span> </span><em>Remote directory</em><span> </span>to the path where the live site actually lives:<span> </span><code>/site/wwwroot/</code>.</p>
<p>It is important to note that the same FTP settings we mined from our<span> </span><code>.PublishSettings</code><span> </span>file can be plugged into our FTP client of choice for management and troubleshooting of our DXC integration environment file system directly. I strongly recommend using this technique to see the result of your deployments before finalizing your builds and releases, and promoting code to Preproduction and Production.</p>
</li>
<li>
<p>Lastly, scroll down and expand the<span> </span><em>Advanced</em><span> </span>section.</p>
<p>Check the<span> </span><em>Preserve file paths</em><span> </span>option. If this option isn't checked, then the deployment will flatten all of your files from your solution into the root folder, which is not what we want.</p>
<p><img src="https://i.imgur.com/rJjlh5p.jpg" alt="Sample deployment process - FTP Upload task settings" /></p>
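<p>The difference this option makes is easiest to see as a mapping from artifact paths to remote paths. A hypothetical Python sketch of the two behaviors (an illustration, not the task's actual implementation):</p>

```python
# Hypothetical sketch of the "Preserve file paths" option: with it on, each file
# keeps its directory structure under the remote directory; with it off, every
# file is flattened directly into the remote directory.
import posixpath

def remote_paths(local_files, remote_directory="/temp/", preserve_file_paths=True):
    mapping = {}
    for local in local_files:
        # Normalize Windows-style separators and strip any leading slash.
        rel = local.replace("\\", "/").lstrip("/")
        if preserve_file_paths:
            mapping[local] = posixpath.join(remote_directory, rel)
        else:
            mapping[local] = posixpath.join(remote_directory, posixpath.basename(rel))
    return mapping
```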
</li>
<li>
<p>At this point, our release definition is complete, and we are ready to start testing.</p>
</li>
</ol>
<h3>Part 4: Testing</h3>
<ol>
<li>
<p>Now that our build and release definitions are complete, all of the plumbing for continuous deployment is in place.</p>
<p>To start testing, we need to kick off a build. I recommend doing so by pushing a commit to your CI branch. This will trigger the build automatically and show you the entire process from start to finish.</p>
</li>
<li>
<p>To follow the build's progress, navigate to your build definitions (<em>Build and Release</em><span> </span>><span> </span><em>Builds</em>), find your CI build definition in the list, and click on the link to the build itself (e.g.,<span> </span><em>#20180325.2</em>).</p>
<p><img src="https://i.imgur.com/6J0AGRb.jpg" alt="Build queue" /></p>
</li>
<li>
<p>On the Build page, if the build is still running, you should see the build console, from which you can watch the build work its magic.</p>
<p><img src="https://i.imgur.com/lKyY0EW.jpg" alt="Build monitoring" /></p>
</li>
<li>
<p>Once the build completes, and assuming there are no errors to troubleshoot, navigate to Releases (<em>Build and Release</em><span> </span>><span> </span><em>Releases</em>) and click on your release in the list.</p>
</li>
<li>
<p>On the Release page, click the<span> </span><em>Logs</em><span> </span>tab. If the release is still being deployed, you can watch it log all of its steps in this view.</p>
<p><img src="https://i.imgur.com/sDwXCdp.jpg" alt="Release monitoring" /></p>
</li>
<li>
<p>Once the deployment completes, connect to the DXC integration instance via FTP client and verify that all of your solution files and folders appear in the<span> </span><code>/temp/</code><span> </span>folder that we created above.</p>
</li>
</ol>
<h3>Part 5: Going Live</h3>
<p>There are many configuration options within VSTS. This walkthrough has barely scratched the surface, and may not meet the needs of your project. I recommend that you go back and do some exploring of your own, referencing Microsoft's VSTS documentation along the way, because there is a lot to learn.</p>
<p>Once you're ready, the last step is to go back to your release definition and — if you haven't already — set the deployment process FTP<span> </span><em>Remote directory</em><span> </span>to that of your live DXC Integration environment webroot (<code>/site/wwwroot/</code>).</p>
<p>Enjoy!</p>
</body>
</html>