Why is MDB 10.2.14 not handling deadlocks properly when it did prior to upgrade from 10.1.22
Since upgrading MariaDB from 10.1.22 to 10.2.14, our slaves have been encountering deadlocks that are not resolved within 600 seconds, triggering the classic semaphore-wait decision to crash the server. The server has crashed three times. Our extremely high volumes have not changed; if anything, overall MariaDB performance has improved with the upgrade.
Note that we run very high volumes of INSERT ... ON DUPLICATE KEY UPDATE statements on our master. The deadlocks on the same queries occur only on the slaves, so this must be related to parallel replication locking on the slaves. Reducing slave_parallel_workers has mitigated some of this.
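For reference, a sketch of how such a mitigation might be applied on a slave. Note this is illustrative, not the exact commands or values used here; the question says slave_parallel_workers (the MySQL name), while on MariaDB the parallel applier thread count is controlled by slave_parallel_threads:

```sql
-- Hedged sketch: reduce the number of parallel applier threads on a MariaDB slave.
-- The value 4 is illustrative; tune for your workload.
STOP SLAVE SQL_THREAD;
SET GLOBAL slave_parallel_threads = 4;
START SLAVE SQL_THREAD;

-- To persist across restarts, also set it in my.cnf under [mysqld]:
--   slave_parallel_threads = 4
```

The related slave_parallel_mode variable (e.g. conservative vs. optimistic) also influences how aggressively transactions are applied in parallel, and thus how likely apply-side lock conflicts are.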
In summary, I am looking to understand what changed in MariaDB 10.2.x regarding threads, timeouts, etc., so I can zero in on this issue: why is MariaDB unable to detect the deadlock and roll back one of the offending transactions?
I acknowledge that all deadlocks should be addressed, but as stated above they are not occurring on the master, only on the slaves, for the same statements.
We had these deadlocks prior to the upgrade, but MariaDB always handled them with no problems.
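For context, a hedged sketch of the kind of statement involved. The table and column names are hypothetical; the point is that two sessions upserting overlapping keys can acquire row and gap locks in different orders and deadlock, which InnoDB normally resolves within seconds by rolling back one transaction (error 1213):

```sql
-- Hypothetical schema and statement, for illustration only.
CREATE TABLE counters (
    id   BIGINT UNSIGNED NOT NULL PRIMARY KEY,
    hits BIGINT UNSIGNED NOT NULL DEFAULT 0
) ENGINE=InnoDB;

-- Two parallel applier threads running statements like this against
-- overlapping id ranges can lock rows in different orders and deadlock:
INSERT INTO counters (id, hits) VALUES (1, 1), (2, 1)
ON DUPLICATE KEY UPDATE hits = hits + VALUES(hits);
```

The complaint in the question is not that such deadlocks happen, but that after the upgrade the server hangs on internal semaphores instead of rolling one transaction back promptly.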
2018-06-11 10:32:02 139519224362752 [Note] InnoDB: A semaphore wait:
--Thread 139518736328448 has waited at read0read.cc line 579 for 910.00 seconds the semaphore: Mutex at 0x7f2b63dc13a0, Mutex TRX_SYS created trx0sys.cc:554, lock var 2
2018-06-11 10:32:02 139519224362752 [Note] InnoDB: A semaphore wait:
--Thread 139518749968128 has waited at dict0dict.cc line 1160 for 910.00 seconds the semaphore: Mutex at 0x7f2b63dcb500, Mutex DICT_SYS created dict0dict.cc:1096, lock var 2
2018-06-11 10:32:02 139519224362752 [Note] InnoDB: A semaphore wait:
--Thread 139518750574336 has waited at dict0dict.cc line 1160 for 890.00 seconds the semaphore: Mutex at 0x7f2b63dcb500, Mutex DICT_SYS created dict0dict.cc:1096, lock var 2
InnoDB: ###### Starts InnoDB Monitor for 30 secs to print diagnostic info:
InnoDB: Pending reads 2, writes 0
InnoDB: ###### Diagnostic info printed to the standard error stream
2018-06-11 10:32:32 139519224362752 [ERROR] [FATAL] InnoDB: Semaphore wait has lasted > 600 seconds. We intentionally crash the server because it appears to be hung.
180611 10:32:32 [ERROR] mysqld got signal 6 ;
replication mariadb deadlock master-slave-replication
Let me repeat again: there are no deadlocks on the MASTER, and this shop is as high volume as it gets. The servers have 40 CPUs, 384 GB RAM, 280 GB buffer pools, and 33 TB of disk.
– SAK
Jun 13 '18 at 17:56
The deadlocks are occurring ON the slaves, with parallel replication. The deadlocking SQL involves competing INSERT ... ON DUPLICATE KEY UPDATE statements; SELECTs are never involved. Prior to 10.2.14, MariaDB handled the few deadlocks without issue; as of 10.2.14, the semaphores began timing out at > 600 seconds. Note that I have been tuning relational SQL databases since 1981, so I am confident of the queries' performance. The issue is that the database is not properly handling deadlocks: the second transaction should be rolled back within seconds.
– SAK
Jun 13 '18 at 18:01
asked Jun 11 '18 at 16:32 by SAK; edited Jun 12 '18 at 9:37 by dbdemon
1 Answer
Answer originally left in comments by the question author
Finally, I discovered that MariaDB supposedly addressed this bug in 10.2.13, but it remains in 10.2.14. In summary, we have worked around the problem by turning innodb_adaptive_hash_index ON, and we have not had a problem since Monday morning. See https://jira.mariadb.org/browse/MDEV-14441 for the supposed 10.2.13 fix.
Evidently, when innodb_adaptive_hash_index was OFF, MariaDB was skipping the release of the latches associated with the semaphores when deadlocks occurred.
The 10.2.13 fix explains MariaDB hanging during deadlocks. Quoting the developer's analysis from the ticket: the root cause of this bug is that in the function btr_cur_update_in_place() we are skipping this call if the adaptive hash index was disabled during the execution:
if (block->index) {
    btr_search_x_unlock(index);
}
When debugging the code, I mistook the leaked X lock for a leaked S lock. I did not find any other rw-lock leaks during the MDEV-14952 review/refactoring effort.
The lock leak was introduced by me in MDEV-12121.
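The workaround described above can be applied at runtime (innodb_adaptive_hash_index is a dynamic variable) and persisted in configuration. A minimal sketch:

```sql
-- Apply the workaround at runtime (takes effect immediately):
SET GLOBAL innodb_adaptive_hash_index = ON;

-- Verify the current setting:
SHOW GLOBAL VARIABLES LIKE 'innodb_adaptive_hash_index';

-- Persist across restarts by adding to my.cnf under [mysqld]:
--   innodb_adaptive_hash_index = ON
```

This only sidesteps the latch leak described above; the real fix lands in the MariaDB release that closes the referenced JIRA issue.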
answered Dec 17 '18 at 1:26
community wiki
Comment Converter