
I have a table that gets inserted into, updated, and selected from thousands of times per second, and I am having deadlock issues.

  1. The database has 2-5 simultaneous 1,000+ row inserts using LINQ to SQL.
  2. 40 times per second, a select statement also runs against that table, and if a condition is true (95% of the time it is), an update runs with code similar to this:

    create procedure AccessFile (@code varchar(16), @admin int) AS

    declare @id int, @accessed datetime, @file varchar(48)

    select @id = Id, @accessed = accessed, @file = [file] from files where code = @code

    IF @admin <> 0 AND @accessed IS NULL
    BEGIN

        SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED

        update files set accessed = getdate() where id = @id

        SET TRANSACTION ISOLATION LEVEL READ COMMITTED

    END

    select @id as Id, @file as [file]

It seems as though it is the updates conflicting with the inserts that are causing the deadlocks.

The example is a one-to-one match with the actual stored procedure; the only difference is the names. Points 1 and 2 above hold regardless of the procedure names.
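(One way to confirm which statements are actually deadlocking, assuming SQL Server 2008 R2 as noted in the comments, is trace flag 1222, which writes the full deadlock graph, both conflicting statements and the resources they held, to the SQL Server error log:

    -- Server-wide: log each deadlock graph to the SQL Server error log
    DBCC TRACEON (1222, -1)

    -- Verify the flag is active
    DBCC TRACESTATUS (1222)

)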

Jeremy Boyd
  • 677
  • 2
  • 6
  • 13
  • 1
    Can you give us your table structure? Columns, keys, indexes? – Eric Humphrey - lotsahelp Jan 21 '11 at 22:57
  • is it SQL Server? Oracle? MySQL? – eiefai Jan 21 '11 at 23:02
  • @eric - Very simple: ID (PK; bigint), Code (Index, Unique;varchar(16)), Accessed (datetime), Created (datetime), File (varchar(48)) – Jeremy Boyd Jan 21 '11 at 23:02
  • @eiefai - MSSQL 2008 R2 – Jeremy Boyd Jan 21 '11 at 23:03
  • Is the update query the one you are using right now, or is it a simplified version? If it is the second one, check whether the query used for updating/inserting contains something like 'NOLOCK'. – eiefai Jan 21 '11 at 23:04
  • 2
    You can get some clue here, in this question: http://dba.stackexchange.com/questions/126/what-are-the-main-causes-of-deadlock-and-how-to-prevent/ It's about deadlocks and how to catch and treat them. – Marian Jan 21 '11 at 23:05
  • @eiefai - not a simplified version, but I did try something, and updated the question with the new SQL – Jeremy Boyd Jan 21 '11 at 23:11
  • @Jeremy try adding WITH(NOLOCK) to your select query, like this: "select @id=Id, @accessed = accessed, @file = file from files WITH(NOLOCK) where code = @code". Also, You should edit your question to add the structure to the table, not everyone will see that comment. – eiefai Jan 21 '11 at 23:25
  • @Jeremy Also, NO LOCK, ROW LOCK and READPAST shouldn't be used lightly, read this to see if it will work for you: http://articles.techrepublic.com.com/5100-10878_11-6185492.html – eiefai Jan 21 '11 at 23:30
  • @eiefai - Added nolock to the froms, no change. The locking isn't occurring on the selects, it happens only on the updates. I believe it is because of the 1000+ inserts that are always happening and the updates that are going against the same table... How do I insert without locking the whole table? – Jeremy Boyd Jan 21 '11 at 23:42
  • @Jeremy: What is the point of using READ UNCOMMITTED isolation level for the update? SQL Server is still acquiring exclusive locks for the update and ignoring your set isolation level. Or it's not the complete piece of code? – Marian Jan 21 '11 at 23:49

3 Answers


Change the updates to use WITH (ROWLOCK). That will reduce the locks from page level to row level. You can try the same on the inserts as well.
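A minimal sketch of where the hint goes, reusing the names from the question's procedure (fragment only; @id, @code, and @file as declared there, and assuming the ID primary key is an identity column):

    -- Hint the update down to row-level locks
    update files with (rowlock)
    set accessed = getdate()
    where id = @id

    -- The same table hint works on the insert side
    insert into files with (rowlock) (code, created, [file])
    values (@code, getdate(), @file)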

mrdenny
  • 26,776
  • 2
  • 40
  • 79
  • I have updated both the inserts and updates to use with(rowlock), but it hasn't decreased the number of deadlocks. If I kill the inserts (turn off the button that allows our users to insert), the deadlocks disappear. – Jeremy Boyd Jan 24 '11 at 15:16
  • 2
    Try giving snapshot isolation level a try. I'll take same changes to the code that is doing the inserts (unless they are done by a stored procedure). It might help, but it'll increase the tempdb usage by a lot. BTW, changing the isolation level to read uncommitted for an update won't do anything. I'm assuming that the ID column is your clustered index? – mrdenny Jan 24 '11 at 23:25

You could also try setting the isolation level to SNAPSHOT at the beginning of the proc. More info available at: http://msdn.microsoft.com/en-us/library/ms173763.aspx

You will incur some cost in tempdb for the row versioning.
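A sketch of the two pieces involved (YourDb is a made-up database name):

    -- One-time, database-level setting: enables row versioning in tempdb
    ALTER DATABASE YourDb SET ALLOW_SNAPSHOT_ISOLATION ON

    -- Then, at the top of the procedure:
    SET TRANSACTION ISOLATION LEVEL SNAPSHOT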

  • 1
    That reminded me of this post about setting the database to a snapshot level http://www.codinghorror.com/blog/2008/08/deadlocked.html – Beth Lang Jan 22 '11 at 05:17

I would redesign the applications so that rows are saved in batches. Typically batches of 500-1000 rows work well for me, but you need to run your own tests. Before saving a batch, I would serialize my batch updates using sp_getapplock.

Certainly, this serialization slows modifications down a little, but saving in batches more than compensates for it, so overall this works much faster than saving rows one by one.
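A sketch of that serialization step ('FilesBatchSave' is a made-up resource name for illustration):

    BEGIN TRANSACTION

    -- Serialize all batch writers on one named application lock;
    -- with @LockOwner = 'Transaction' it is released at commit/rollback
    DECLARE @result int
    EXEC @result = sp_getapplock
            @Resource = 'FilesBatchSave',
            @LockMode = 'Exclusive',
            @LockOwner = 'Transaction',
            @LockTimeout = 10000  -- milliseconds

    IF @result >= 0  -- 0 = granted, 1 = granted after waiting
    BEGIN
        -- ... save the 500-1000 row batch here ...
        COMMIT TRANSACTION
    END
    ELSE
        ROLLBACK TRANSACTION  -- lock not acquired in time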

Also, I would run my selects under SNAPSHOT isolation level so that they are not blocked by modifications.

That done, you may never have any deadlocks at all - we have no deadlocks in our mixed-load system, and you can get there too.

Good luck!

A-K
  • 7,074
  • 3
  • 30
  • 49