


Mongo queries are slow and yielding/waiting for locks thousands of times


We are running a MongoDB replica set on AWS.
Setup details:
1 primary node: r3.8xlarge
4 secondary nodes: r5.xlarge
MongoDB version: 3.0.8 (WiredTiger)
Database size: 358 GB



We have configured MongoDB cloud monitoring for this setup. Here are a few stats from it:



Mongo primary network: 350 MB/s (In: 40 MB/s, Out: 310 MB/s, Number of requests: 5.32K/s)
1.1K connections on average on the primary node
Tickets available on primary: Reads: 125, Writes: 100
Queues: Total: 15-20, Read: 0-1, Write: 15-20



Other points:
CPU and memory stats on the instance look well under control.
We are using an 800 GB GP2 EBS volume (2,400 IOPS) and consuming around 2,000 IOPS. The burst balance is still close to full capacity, so it is not exhausted either.
The primary node is an r3.8xlarge, so it has 10 Gb networking.
Ulimits are set as follows:



Limit                    Soft Limit   Hard Limit   Units
Max cpu time             unlimited    unlimited    seconds
Max file size            unlimited    unlimited    bytes
Max data size            unlimited    unlimited    bytes
Max stack size           8388608      unlimited    bytes
Max core file size       0            unlimited    bytes
Max resident set         unlimited    unlimited    bytes
Max processes            64000        64000        processes
Max open files           64000        64000        files
Max locked memory        65536        65536        bytes
Max address space        unlimited    unlimited    bytes
Max file locks           unlimited    unlimited    locks
Max pending signals      1967994      1967994      signals
Max msgqueue size        819200       819200       bytes
Max nice priority        0            0
Max realtime priority    0            0
Max realtime timeout     unlimited    unlimited    us



Problem:
We access MongoDB from a Java application, and we are seeing that a lot of our queries take a considerable amount of time, such as 2-4 seconds.
We enabled profiling and listed the slowest queries. There we found that a sample update query acquired an intent lock 17,000 times and had to yield just as many times.
We do not understand why, if it is an intent lock, it needs to yield so many times. Does an intent lock also prevent other operations from proceeding?
If the query has to yield so many times, where is the advantage of document-level locking?
We have also seen some of our queries time out (probably waiting too long for the lock and eventually dying).
Could you please guide us on how to debug this problem and improve the performance of our MongoDB deployment?



Here's the profiler output of a sample query:



{
    "op" : "update",
    "ns" : "backend.Scores",
    "query" : {
        "channel.id" : "hkxj",
        "metric" : "YVR",
        "date" : ISODate("2018-11-20T10:00:00Z")
    },
    "updateobj" : {
        "$set" : {
            "channel" : {
                "id" : "hkxj",
                "failures" : 39,
                "items" : [
                    {
                        "_id" : ObjectId("5bf3e434800075956f1"),
                        "image" : "http://images.abcd.com/something/wscoob.jpg",
                        "b_time" : ISODate("2018-11-26T19:24:00Z"),
                        "title" : "What's New ",
                        "id" : "fq969"
                    },
                    {
                        "_id" : ObjectId("5bf3e43f800075956f0"),
                        "image" : "http://images.abcd.com/something/w4citv2.jpg",
                        "broadcast_time" : ISODate("2018-11-26T20:24:00Z"),
                        "title" : "Let's End This",
                        "id" : "fsfgd"
                    }
                ]
            },
            "metric" : "YVR",
            "date" : ISODate("2018-11-20T10:00:00Z")
        },
        "$setOnInsert" : {
            "__v" : 0
        }
    },
    "nscanned" : 0,
    "nscannedObjects" : 2209900,
    "nMatched" : 1,
    "nModified" : 1,
    "keyUpdates" : 0,
    "writeConflicts" : 0,
    "numYield" : 17264,
    "locks" : {
        "Global" : {
            "acquireCount" : {
                "r" : NumberLong(17266),
                "w" : NumberLong(17266)
            }
        },
        "Database" : {
            "acquireCount" : {
                "w" : NumberLong(17266)
            }
        },
        "Collection" : {
            "acquireCount" : {
                "w" : NumberLong(17265)
            }
        },
        "oplog" : {
            "acquireCount" : {
                "w" : NumberLong(1)
            }
        }
    },
    "millis" : 3864,
    "execStats" : {
    },
    "ts" : ISODate("2018-11-20T10:40:25.104Z"),
    "client" : "172.2.3.52",
    "allUsers" : [ ],
    "user" : ""
}









query-performance mongodb optimization locking mongodb-3.0

asked Nov 21 '18 at 5:05 by Abhishek S











  • Cross-posted from StackOverflow: stackoverflow.com/questions/53398638/…. The first comment there (from 9 hours ago) identifies an obvious issue: "nscannedObjects" : 2209900 means that your update needed to read over 2M docs to find the documents to update. Add an index to efficiently support your update query.

    – Stennie
    Nov 21 '18 at 6:29











  • Add it as an answer.

    – Iľja
    Nov 21 '18 at 11:33











  • That was a one-off query; we have got rid of those queries now. However, with the help of the DB profiler I have now moved on to the other slowest queries. What I found is that the time to acquire a lock is 2497196 micros (2,497 millis). How can I reduce the time required to acquire a lock?

    – Abhishek S
    Nov 22 '18 at 6:52
















1 Answer















We access MongoDB from a Java application, and we are seeing that a lot of our queries take a considerable amount of time, such as 2-4 seconds.




The provided output indicates a query that requires a collection scan of 2,209,900 documents to update a single document:



"nscanned" : 0,
"nscannedObjects" : 2209900,
"nMatched" : 1,
"nModified" : 1,



we found that a sample update query acquired an intent lock 17,000 times and had to yield just as many times.




Since a collection scan is a long-running query, it will yield multiple times to allow other operations to interleave. See: Does a read or write operation ever yield the lock? in the MongoDB Concurrency FAQ.



You can significantly improve this query by adding a supporting index on {'channel.id': 1, metric: 1, date: 1}. Use the query explain option (i.e. db.Scores.find({..}).explain(true)) to see candidate indexes and usage.
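
As a quick sanity check — this is a sketch in the mongo shell, reusing the query shape from the profiler output above — explain should report an IXSCAN stage once the index is in place, rather than the collection scan implied by the nscannedObjects figure:

// Run the update's query predicate through explain to inspect the winning plan.
// Before the index exists this should show a COLLSCAN stage; afterwards, an IXSCAN.
db.Scores.find({
    "channel.id" : "hkxj",
    "metric" : "YVR",
    "date" : ISODate("2018-11-20T10:00:00Z")
}).explain(true)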



Note: to minimise impact in your production environment you'll want to do a background or rolling index build when adding this index. The MongoDB documentation includes more information on procedures and behaviour: Index Build Operations on a Populated Collection.
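
For illustration, a background build of the suggested index would look something like the following in the mongo shell on MongoDB 3.0 (a rolling build, taking one replica set member out of rotation at a time, is the lower-impact alternative for a busy deployment):

// Build the compound index in the background so the collection stays available
// for reads and writes during the build (a foreground build would block the
// database for its duration).
db.Scores.createIndex(
    { "channel.id" : 1, "metric" : 1, "date" : 1 },
    { "background" : true }
)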




We enabled profiling and listed the slowest queries.




If your deployment is already stressed, I would caution against enabling the Database Profiler as this will add additional write load and only capture a limited sample of queries (by default profiling uses a 1MB capped collection for each database).



Since slow query information is already available in the MongoDB server logs, I would instead take a log-based approach. Some general tips can be found on ServerFault: Understanding IXSCAN and COLLSCAN in MongoDB logs.
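
As a minimal sketch of that log-based approach, runnable from the mongo shell: the getLog command returns the most recent in-memory log lines (for full history, inspect the mongod log file itself), and slow operations can be filtered out because each one is logged with its duration in milliseconds:

// Fetch recent server log lines and keep operations slower than 1 second.
// Durations appear at the end of each slow-operation line as "<n>ms".
var lines = db.adminCommand({ "getLog" : "global" }).log;
lines.filter(function (line) {
    var m = line.match(/ (\d+)ms$/);
    return m && parseInt(m[1], 10) >= 1000;
}).forEach(function (line) { print(line); });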




What I found is that the time to acquire a lock is 2497196 micros (2,497 millis). How can I reduce the time required to acquire a lock?




Significant time acquiring locks suggests either contention or a resource constraint (for example, I/O). If you have optimised all of your slow queries, you'd have to look into more general metrics and what is logged during periods of notable slowness. Your question description suggests that you aren't concerned about resource limits yet, so logs and serverStatus metrics are a more likely focus.
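
A starting point — a sketch using serverStatus in the mongo shell — is to watch ticket availability and queue depths on the primary; consistently exhausted write tickets or a growing write queue point at contention rather than a one-off slow query:

// Snapshot lock and ticket metrics on the primary.
var status = db.serverStatus();
printjson(status.globalLock);                           // currentQueue and activeClients
printjson(status.wiredTiger.concurrentTransactions);    // read/write tickets in use vs available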




Could you please guide us on how to debug this problem and improve the performance of our MongoDB deployment?




Overall performance tuning is a much bigger topic than can be covered in an answer here; however, some suggested starting points are:




  • Add a monitoring service to capture & visualise MongoDB metrics and history. If you don't have a solution in place yet, I'd look for a SaaS service like MongoDB Cloud Manager which has a free tier for monitoring (and more detailed metrics available with a free trial).

  • Investigate common slow query patterns and create appropriate indexes.

  • Upgrade to the latest version of MongoDB 3.0. MongoDB 3.0.8 was released in Dec 2015; the final version in that series was 3.0.15 (released in May 2017). There have been a lot of improvements in performance and stability, so it would be good to minimise the possibility you are encountering known issues that have been addressed.

  • Review the MongoDB Production Notes for suggestions relevant to your deployment environment.

  • Plan and test your upgrade path to a supported release series (currently MongoDB 3.4 or newer). WiredTiger was first introduced in the MongoDB 3.0 release series and there have been many significant improvements in successive releases of MongoDB.






answered Nov 22 '18 at 10:02 by Stennie






















