Application restart loop because of "BuildResult change, cache key=cindexingservice.svc.1f180bcd"


Hello everyone!

We are currently experiencing an issue that has been happening more and more frequently, and we don't really know how to track it down...

For some reason our application pool restarts about once or twice a week nowadays, and once it has restarted it only runs for 5-10 minutes before restarting again, ending up in a restart loop. It keeps doing this until we perform an IIS reset or restart the entire web server, after which the problem always stops.

To try to figure out why the application is restarting, I have added some code that captures the reason in the "Application_End" event in Global.asax.cs:

protected void Application_End()
{
	// Grab the singleton HttpRuntime instance via its private static "_theRuntime" field.
	HttpRuntime runtime = (HttpRuntime) typeof(HttpRuntime).InvokeMember("_theRuntime",
		BindingFlags.NonPublic
		| BindingFlags.Static
		| BindingFlags.GetField,
		null,
		null,
		null);

	if (runtime == null)
		return;

	// The private "_shutDownMessage" field holds the detailed reason for the shutdown.
	string shutDownMessage = (string) runtime.GetType().InvokeMember("_shutDownMessage",
		BindingFlags.NonPublic
		| BindingFlags.Instance
		| BindingFlags.GetField,
		null,
		runtime,
		null);

	// The private "_shutDownStack" field holds the stack trace captured when the shutdown was initiated.
	string shutDownStack = (string) runtime.GetType().InvokeMember("_shutDownStack",
		BindingFlags.NonPublic
		| BindingFlags.Instance
		| BindingFlags.GetField,
		null,
		runtime,
		null);

	Logger.Log(
		Level.Critical,
		" ==== APPLICATION END EVENT ====" + Environment.NewLine + shutDownMessage + Environment.NewLine
		+ shutDownStack + Environment.NewLine + " ==============================="
	);
}
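
For what it's worth, ASP.NET also exposes a coarser shutdown reason through the public System.Web.Hosting.HostingEnvironment.ShutdownReason property, so something like this could be logged from the same handler without any reflection (it only gives the ApplicationShutdownReason enum value, e.g. HostingEnvironment or BuildManagerChange, not the detailed message):

// Public API, no reflection needed: logs only the coarse ApplicationShutdownReason
// enum value, not the detailed "BuildResult change, cache key=..." message.
Logger.Log(Level.Critical, "ShutdownReason: " + System.Web.Hosting.HostingEnvironment.ShutdownReason);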

With the reflection code in place we see this message in our logs each time the application restarts:

FATAL Piab.CMS.Global: ==== APPLICATION END EVENT ====
BuildResult change, cache key=cindexingservice.svc.1f180bcd
HostingEnvironment initiated shutdown
HostingEnvironment caused shutdown

It's always the same message, and even the "1f180bcd" part seems to be static. The only file called "IndexingService.svc" in our project comes from the EPiServer.Search package and is located at \IndexingService\IndexingService.svc. I'm not sure whether this means our problem is somehow related to indexing/search and the EPiServer.Search package, or whether the restarts would still happen with a different message if we removed it.

We have a testing/staging server, but unfortunately we've never seen this error there, and we've never been able to reproduce it ourselves.


I have googled for similar error messages, but the only thing I find is the Microsoft .NET reference source, which is where the "BuildResult change, cache key=" part of the message I now see in the logs originates...

Has anyone seen this type of error, and/or does anyone have any idea how to proceed with troubleshooting something like this?

The modified date is not changing on this file or on any other file in the "bin" folder... We've also tried to disable the file change notification (FCN) monitoring completely (so dynamic recompilations should not be happening?):

<httpRuntime fcnMode="Disabled" ... />

Since we disabled this, the application no longer restarts automatically if we manually edit a .cshtml file, for example, so ASP.NET should no longer be monitoring file modified/created dates and kicking off dynamic recompilations because of such changes...

#206409
Edited, Aug 20, 2019 15:04

Hi Alexander,

Were you able to resolve this?

We are experiencing the same issue in our EpiServer CMS Admin environment, which is hosted as an Azure App Service. Our Admin environment uses EpiServer's Search (Lucene index, not Epi Find) for indexing and searching of Media content. Coincidentally, we had also added shutdown logging in the Application_End event:

2019-08-29 14:42:46,681 [454] FATAL : Application is shutting down. Host: RD0003FF6F3B8B Process: 3884 Reason: BuildManagerChange
2019-08-29 14:42:46,692 [454] FATAL : Application is shutting down continued. Host: RD0003FF6F3B8B Message: BuildResult change, cache key=cindexingservice.svc.1f180bcd
HostingEnvironment initiated shutdown
HostingEnvironment caused shutdown

Similarly, this happens once or twice a week - but in our case, it doesn't seem to get into a restart loop as you describe.

It does, however, often fail to shut down the existing .NET AppDomain, so we often find multiple AppDomains running, all processing the EpiServer scheduled jobs and the queued Epi Find index tasks - which causes us a real headache with old versions of content appearing in our Epi Find index.

So far we haven't tried much to address this, and like you we've found very little information online about BuildManagerChange events and how they work.

But it's interesting that disabling the file change notifications didn't help. I had suspected the cause might be the shared disk on the Azure App Service falsely raising a file change event on the compiled output of the indexingservice.svc file. I also suspected that '1f180bcd' is a hash of the indexingservice.svc contents.

So my next step in troubleshooting this (when I find time) is to dive into HttpRuntime.Cache, see if the key 'cindexingservice.svc.1f180bcd' is present with a file cache dependency, and get the path to that file.
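
As a first check, something like this could at least enumerate the public cache (a rough sketch; if the build results only live in the internal cache, the key won't show up here):

// Enumerate the public HttpRuntime.Cache and log any key mentioning the indexing service.
// Build results may only be stored in the internal cache, in which case nothing is found.
foreach (System.Collections.DictionaryEntry entry in System.Web.HttpRuntime.Cache)
{
	var key = (string) entry.Key;
	if (key.IndexOf("indexingservice", System.StringComparison.OrdinalIgnoreCase) >= 0)
	{
		logger.Log("Cache key: " + key + ", value type: " + entry.Value.GetType().FullName);
	}
}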

If I find anything I will post here.

Kind Regards,

Cameron

#206794
Aug 30, 2019 1:51

Hello!

No, this is very much still an ongoing issue for us. It feels like it's been happening for almost a year or more, but more and more frequently lately. I don't think it was causing restart loops for us at first either; 1-2 restarts a week would just pause the website for 1-2 minutes and wasn't necessarily noticed much or reported back to us. But something we updated, or something we changed while trying to track this issue down, has made it much worse with the restart loop. As I explained in my first post, once the issue happens the application only lives for 5-10 more minutes before it happens again, and only a full IISReset will let the application run fine for a couple more days before the issue occurs again.

I have already tried to search for the cache key 'cindexingservice.svc.1f180bcd' inside HttpRuntime.Cache, but I have never found it there. I suspect that the key is stored inside "HttpRuntime.Cache.InternalCache", but that is an internal member of the .NET Framework which I don't think we can access :-/.

That internal cache is used a lot in this file at least: https://referencesource.microsoft.com/#system.web/Compilation/BuildResultCache.cs,7b600c15d3b0aebb - on line 274 there you can see the code that produces the "BuildResult change, cache key=" message.

I'm glad, though, that we're not the only ones with this issue. Perhaps it is something related to EPiServer or EPiServer.Search, since we have both of those components in common. The cache key does reference a file used by the EPiServer.Search Lucene indexer, after all...

Any updates if you find more information are very welcome, and I promise to keep this thread updated as well. We have also reported this bug to our EPiServer partner, who is currently investigating it, but we've had no success so far.

#206803
Aug 30, 2019 9:06

Hi there,

I wrote some utility code that uses reflection to pull out details of the cached item stored in "HttpRuntime.Cache.InternalCache", which may help you debug:

// CacheStoreProvider requires a recent .NET Framework (4.7 or later).
var internalCacheMember = typeof(System.Web.Caching.Cache).GetProperty("InternalCache", BindingFlags.Instance | BindingFlags.NonPublic);
var internalCache = internalCacheMember.GetValue(System.Web.HttpRuntime.Cache) as System.Web.Caching.CacheStoreProvider;

// Pull the cached BuildResult for the key from the shutdown message.
var buildResult = internalCache.Get("cindexingservice.svc.1f180bcd");

// Read the internal ResultAssembly property of BuildResultCompiledAssembly via reflection.
var resultAssemblyMember = typeof(System.Web.Compilation.AssemblyBuilder).Assembly
	.GetType("System.Web.Compilation.BuildResultCompiledAssembly")
	.GetProperty("ResultAssembly", BindingFlags.Instance | BindingFlags.NonPublic);
var tempAssembly = resultAssemblyMember.GetValue(buildResult) as System.Reflection.Assembly;

logger.Log("Location: " + tempAssembly.Location);

The result I got was the path of a generated assembly stored in the Temporary ASP.NET Files folder, which is difficult to monitor in our environment:

Location: D:\local\Temporary ASP.NET Files\root\b7011411\71ab0411\App_Web_3nj3gt4f.dll

In our case I think it might be something to do with our hosting provider (Azure) falsely reporting a disk change on this file, which triggers the cache dependency to evict the cached item "cindexingservice.svc.1f180bcd" - likely because of some quirk in how Azure App Service handles local temporary file storage. Out of interest, are you hosting in Azure as well?
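
It might also be worth logging the timestamps on that assembly right after resolving it, to correlate them with the restart times. A quick addition to the snippet above:

// Compare the assembly's timestamps against the times of the AppDomain restarts in the logs.
var assemblyFile = new System.IO.FileInfo(tempAssembly.Location);
logger.Log("Assembly created: " + assemblyFile.CreationTimeUtc + ", last written: " + assemblyFile.LastWriteTimeUtc);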

Did you make any progress with your EpiServer partner?

#207130
Sep 11, 2019 7:20

Cool! I didn't realize we could access items from the InternalCache. We received a suggestion from our partner to delete all files in that folder (Temporary ASP.NET Files), as that sometimes solves weird errors. Each time we publish new code, our continuous integration server stops IIS, runs a script that deletes everything there, publishes and then restarts IIS. So unless those steps or the script aren't working properly, that should already be happening pretty frequently.

We are not hosting the site on Azure but on a Windows Server 2016 machine that we manage ourselves. The location for us is:

C:\Windows\Microsoft.NET\Framework64\v4.0.30319\Temporary ASP.NET Files\root\c28f5259\e22ef605

We don't have an App_Web_3nj3gt4f.dll file in there though (but tons of other App_Web_xxxxxxxx.dll files). I suspect that the c28f5259\e22ef605\ and 3nj3gt4f parts of these paths are not static.

It looks like most files in that folder have created and modified dates that match the last iisreset I had to do this Tuesday. Some of them have newer dates though.

We are still waiting for our partner to investigate further, but this is good stuff. Perhaps we need to monitor whether some other process is accessing or changing files in the Temporary ASP.NET Files folder or something... It seems like Azure is NOT something we have in common though, but perhaps we can find something else...
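
If we go down that route, even a basic FileSystemWatcher pointed at that folder might tell us whether anything is touching those compiled assemblies between restarts. A rough sketch (reusing the Logger call from my first post; the path is the one from our server, so adjust as needed):

using System.IO;

// Rough sketch: log every change under the Temporary ASP.NET Files folder to see
// whether some other process touches the compiled App_Web_*.dll files.
var watcher = new FileSystemWatcher(@"C:\Windows\Microsoft.NET\Framework64\v4.0.30319\Temporary ASP.NET Files\root")
{
	IncludeSubdirectories = true,
	NotifyFilter = NotifyFilters.FileName | NotifyFilters.LastWrite | NotifyFilters.CreationTime
};

watcher.Changed += (s, e) => Logger.Log(Level.Critical, "Changed: " + e.FullPath);
watcher.Created += (s, e) => Logger.Log(Level.Critical, "Created: " + e.FullPath);
watcher.Deleted += (s, e) => Logger.Log(Level.Critical, "Deleted: " + e.FullPath);
watcher.Renamed += (s, e) => Logger.Log(Level.Critical, "Renamed: " + e.FullPath);
watcher.EnableRaisingEvents = true;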

#207135
Sep 11, 2019 9:21