Author’s note

This post was recovered from my old blog via the Wayback Machine. I’ve left some old posts behind, but I’m porting this one over because it had code written in Ruby. Looking back at this 10-year-old post, it’s still readable to me, even though I haven’t used Ruby in a long time. I think I’ll find another excuse to write in Ruby again.

@jbnunn

25-Sept 2022


Fighting MongoDB cursor errors - June 25 2012

Looking through my MongoDB error logs, I found a lot of:

Query response returned CURSOR_NOT_FOUND. Either an invalid cursor was specified, or the cursor may have timed out on the server.

What’s happening is that when you run a query in Mongo, a cursor is opened at the position of your query. To return results quickly, Mongo keeps that cursor open on the server so it can continue fetching data from that position. If the work you do between fetches takes too long, though, the cursor times out on the server, leading to CURSOR_NOT_FOUND errors.
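To make that concrete, here’s roughly the shape of code that runs into it. Tweet is the same Mongoid model used later in this post, and analyze is just a stand-in for whatever slow per-record work you’re doing:

Tweet.where(:type => "a").each do |tweet|
  # If each of these calls takes a while, the gap between the driver's
  # getMore requests can exceed the server's cursor timeout, and the next
  # fetch raises CURSOR_NOT_FOUND
  analyze(tweet)
end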

One easy solution is to disable the Mongo cursor timeout, like this:

collection.find({"type" => "a"}, {:timeout=>false})

…and that’s fine, but it’s not exactly the cleanest solution. Imagine having many simultaneous calls against your database, each holding a cursor that never times out. Your RAM will fill, and eventually your queries will fail again. The timeout is there to prevent exactly that, so disabling it is not always the best solution.
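If you do go this route, the 1.x-era Ruby driver lets you (and, if I remember right, expects you to) pass a block when the timeout is disabled, so the cursor at least gets closed for you once the block finishes:

collection.find({"type" => "a"}, :timeout => false) do |cursor|
  cursor.each do |doc|
    # do something with each doc; the cursor is closed when the block ends
  end
end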

What I’m doing now is chunking my queries into groups that are manageable within the default MongoDB timeout limits. Imagine I’m searching through 1M tweets and want to perform an operation on each. I now process the records in batches instead of all 1M at once. I’ll use MongoDB’s built-in limit, which caps each batch at a certain size (the chunk size), and a greater-than query on _id, which tells MongoDB where to start looking (this scales better than skip, which has to walk past everything it skips). Because the natural ID (_id) of a record in Mongo is a BSON ObjectId, it sorts in roughly insertion order, so we can order on it and pick up each batch where the last one left off.

tweet_set = Tweet.all # Grab all of your records (a criteria -- nothing is loaded yet)

# Set the size of the chunks you wish to process.
# Larger chunks may lead you back to timeout errors
chunk_size = 1000

# Determine the number of chunks you'll need
chunks = (tweet_set.count.to_f / chunk_size).ceil

i = 0               # Set the starting chunk number
start_from_id = nil # The last _id we processed; nil means start from the beginning
tweets = []         # Initialize an array to store your tweets

# Start looping through the chunks of tweets
while i < chunks do
  # Sort by _id and only fetch documents after the last one we saw.
  # On the first pass (start_from_id is nil) we start from the beginning,
  # so the first record isn't skipped.
  chunk_query = Tweet.asc(:_id).limit(chunk_size)
  chunk_query = chunk_query.where(:_id.gt => start_from_id) if start_from_id
  tweet_chunk = chunk_query.to_a

  if tweet_chunk.count > 0
    start_from_id = tweet_chunk.last.id # Remember where this chunk ended
    tweets.concat(tweet_chunk)          # Append this chunk's records
  end

  # Increment the counter to move to the next chunk
  i += 1
end
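If I find myself doing this in more than one place, I’d wrap it up in a small helper that yields one manageable batch at a time instead of accumulating everything in memory. each_tweet_chunk below is a hypothetical helper of my own, not something Mongoid provides:

def each_tweet_chunk(chunk_size = 1000)
  last_id = nil
  loop do
    # Same idea as above: order by _id, limit to the chunk size, and only
    # fetch documents after the last one we handed out
    chunk_query = Tweet.asc(:_id).limit(chunk_size)
    chunk_query = chunk_query.where(:_id.gt => last_id) if last_id
    chunk = chunk_query.to_a
    break if chunk.empty?
    yield chunk
    last_id = chunk.last.id
  end
end

# Usage: do the per-tweet work inside the block, one chunk at a time
each_tweet_chunk(1000) do |chunk|
  chunk.each { |tweet| puts tweet.id }
end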
- @jbnunn