Database connection leak inside ContentRepository


Hi.

For one of our customers we have developed a rather heavy scheduled job that imports pages and updates previously imported pages from an external system. During this we have discovered that ContentRepository is possibly leaking connections, and therefore we get a timeout error in the scheduled job. Is this a known issue?
As for runtimes, the job has been averaging around 20-25 minutes, but during the last week it has been running for about 8-10 minutes before we get the following error message:

System.Data (Timeout expired. The timeout period elapsed prior to obtaining a connection from the pool. This may have occurred because all pooled connections were in use and max pool size was reached.
 at System.Data.ProviderBase.DbConnectionFactory.TryGetConnection(DbConnection owningConnection, TaskCompletionSource`1 retry, DbConnectionOptions userOptions, DbConnectionInternal oldConnection, DbConnectionInternal& connection) 
at System.Data.ProviderBase.DbConnectionInternal.TryOpenConnectionInternal(DbConnection outerConnection, DbConnectionFactory connectionFactory, TaskCompletionSource`1 retry, DbConnectionOptions userOptions) 
at System.Data.SqlClient.SqlConnection.TryOpenInner(TaskCompletionSource`1 retry) 
at System.Data.SqlClient.SqlConnection.TryOpen(TaskCompletionSource`1 retry) 
at System.Data.SqlClient.SqlConnection.Open() 
at EPiServer.Data.Providers.Internal.ConnectionContext.b__15_0() 
at EPiServer.Data.Providers.SqlTransientErrorsRetryPolicy.Execute[TResult](Func`1 method) 
at EPiServer.Data.Providers.Internal.ConnectionContext.OpenConnection() 
at EPiServer.Data.Providers.Internal.SqlDatabaseExecutor.GetConnection(Boolean requireTransaction) 
at EPiServer.Data.Providers.Internal.SqlDatabaseExecutor.<>c__DisplayClass31_0`1.b__0()
at EPiServer.Data.Providers.SqlTransientErrorsRetryPolicy.Execute[TResult](Func`1 method) 
at EPiServer.DataAbstraction.Internal.DefaultProjectRepository.GetItems(IEnumerable`1 contentReferences) 
at EPiServer.Cms.Shell.UI.Rest.Projects.Internal.ProjectLoaderService.GetItems(IEnumerable`1 contentReferences) 
at EPiServer.Cms.Shell.UI.Rest.Projects.Internal.ProjectEventListener.ContentEvents_UpdatedContent(Object sender, ContentEventArgs e) 
at System.EventHandler`1.Invoke(Object sender, TEventArgs e) 
at EPiServer.Core.Internal.DefaultContentEvents.RaiseContentEvent(String key, ContentEventArgs eventArgs) 
at EPiServer.Core.Internal.DefaultContentRepository.RaisePostSaveEvents(SaveContentEventArgs eventArgs, StatusTransition transition, Boolean isNew, Boolean isNewLanguageBranch) 
at EPiServer.Core.Internal.DefaultContentRepository.Save(IContent content, SaveAction action, AccessLevel access
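
For reference, the job boils down to roughly the loop below. This is a simplified sketch with made-up type names (ImportedArticlePage, ExternalItem), not our actual code; the point is only that each call to Save raises the post-save events seen at the top of the stack trace.

using System.Collections.Generic;
using EPiServer;
using EPiServer.Core;
using EPiServer.DataAccess;
using EPiServer.DataAnnotations;
using EPiServer.PlugIn;
using EPiServer.Scheduler;
using EPiServer.Security;

// Hypothetical page type used only for this sketch.
[ContentType(DisplayName = "Imported article (sketch)")]
public class ImportedArticlePage : PageData
{
    public virtual string Heading { get; set; }
}

[ScheduledPlugIn(DisplayName = "External page import (sketch)")]
public class ExternalPageImportJob : ScheduledJobBase
{
    private readonly IContentRepository _contentRepository;

    public ExternalPageImportJob(IContentRepository contentRepository)
    {
        _contentRepository = contentRepository;
    }

    public override string Execute()
    {
        var count = 0;
        foreach (var item in FetchExternalItems())
        {
            // New pages are created under the start page here; updates would
            // instead load the existing page and call CreateWritableClone().
            var page = _contentRepository.GetDefault<ImportedArticlePage>(ContentReference.StartPage);
            page.Name = item.Title;
            page.Heading = item.Title;

            // Every Save raises the post-save events where the connection
            // timeout in the stack trace above surfaces.
            _contentRepository.Save(page, SaveAction.Publish, AccessLevel.NoAccess);
            count++;
        }
        return "Imported or updated " + count + " pages.";
    }

    // Placeholder for the call to the external system.
    private IEnumerable<ExternalItem> FetchExternalItems()
    {
        return new List<ExternalItem>();
    }

    private class ExternalItem
    {
        public string Title { get; set; }
    }
}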

The data for the integration is not in the Episerver database, and as I don't get any EF errors I assume the problem is internal to Episerver. The job runs every night outside of peak hours and only runs on 1 out of 2 servers.

Where the call to ContentRepository.Save comes from varies from day to day, but the error is always the same.

I believe it can be solved by setting the max pool size in the connection string higher than the default of 100, but I am wondering if this is a good solution, and whether there is a connection leak inside the ContentRepository?
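
For context on that setting: "Max Pool Size" is a regular SqlClient connection string keyword and defaults to 100 when omitted. A minimal illustration of what raising it would look like; the server name, database name and the value 200 are placeholders for this sketch, not a recommendation:

using System;
using System.Data.SqlClient;

class MaxPoolSizeDemo
{
    static void Main()
    {
        // "Max Pool Size" defaults to 100 when it is not specified.
        // Raising it only postpones the timeout if connections are leaking.
        var builder = new SqlConnectionStringBuilder(
            "Server=.;Database=EPiServerDB;Integrated Security=True")
        {
            MaxPoolSize = 200
        };

        // Prints the normalized connection string, now including Max Pool Size=200.
        Console.WriteLine(builder.ConnectionString);
    }
}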

#192654
Edited, May 22, 2018 11:01

Now that is not a common error.

I would recommend using ADPlus and configuring it to take a dump when that exception is thrown. From the dump it is possible to see the stack traces for the different threads in the process, and from that it might be possible to figure out what is causing the starvation of connections.
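
In case it helps someone later, the workflow is roughly the following; the process name, output folder and debugger commands are assumptions about a typical IIS setup, not something taken from this thread.

adplus -crash -FullOnFirst -pn w3wp.exe -o C:\dumps

Then open the resulting dump in WinDbg, load SOS and look at the managed threads:

.loadby sos clr
!threads
~*e !clrstack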

#192664
May 22, 2018 13:02

Found the cause of the error. Due to removing some properties on the PageModel in question, which still had data in the database, there was a mismatch between the in-memory model and the database model. Therefore, on each save the ContentRepository tried to sync the two, hitting the usual "deleting this field would delete data, so we aren't doing that" situation. Entering admin mode for the site and removing the missing properties, so that these fields were removed from the database, fixed the issue and the integration is back to working smoothly.
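
In code terms the situation was roughly the one below; the type and property names are hypothetical (reusing the ImportedArticlePage name from the sketch above), not our actual model:

using EPiServer.Core;
using EPiServer.DataAnnotations;

[ContentType(DisplayName = "Imported article (sketch)")]
public class ImportedArticlePage : PageData
{
    public virtual string Heading { get; set; }

    // This property was removed from the code while its values were still in
    // the database, so the in-memory model and the database model no longer
    // matched. Every ContentRepository.Save then tried to sync the two, which
    // is slower than a normal save and kept connections open much longer.
    // Deleting the now-missing property from admin mode brought the database
    // back in line with the code.
    // public virtual string LegacyTeaser { get; set; }
}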

Thanks Johan for the tip about ADPlus, that really helped in figuring out why the connections didn't close quickly. It seems the attempt to sync is a bit slower than a normal save/update, which caused the connections to stay open a lot longer than anticipated, and that caused the initial crash.

#192707
May 23, 2018 13:28