Wednesday, March 28, 2012

LOCK Table necessary in stored procedure?

Hi,

I have some questions about locking tables in stored procedures. I got some excellent tips from my last post, but since it's sort of a different problem I figured I'd post it separately.

I have a large log table that needs a manual, periodic clean-up process, which basically means purging unnecessary log entries. The idea is to select the 1-3% I need to keep into another table, drop the old table, and rename the new table to the old name.

The problem is that I will most likely need to lock the entire table while I do all the clean-up work. If a client manages to add entries while this is going on, I could end up losing data.

The table looks like this:
Logid PK
LogTypeID -- what category
LogValue --
LogTime -- when it occurred
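
For reference, the listing above might correspond to a definition like this (the column types are assumptions; the post only gives the names):

```sql
-- Assumed column types; the post only lists the names
CREATE TABLE log (
    LogId     INT IDENTITY PRIMARY KEY, -- PK
    LogTypeID INT,                      -- what category
    LogValue  VARCHAR(255),             -- the logged value
    LogTime   DATETIME                  -- when it occurred
)
```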

My imaginary stored procedure looks something like this:

CREATE PROCEDURE ShrinkDB AS
-- 1)
-- "lock table log" -- do I have to do something like this?

-- 2)
SELECT * INTO log_keep FROM log
WHERE
(
    (logtypeid <> 2020 -- activity played
     AND logtypeid <> 5020 -- database connected
     -- ...etc etc... about 10 different things I don't need to keep
    )
    OR logtime > DATEADD(d, -1, GETDATE()) -- keep everything from the last 24 hours
)
-- 3)
DROP TABLE log

-- 4)
EXEC sp_rename 'log_keep', 'log'
GO

I'm not able to figure out whether I need to run some sort of "lock" command or not, or if everything inside a stored procedure is automatically locked. If it is, I guess I shouldn't worry about losing any data?

Hopefully it works that way, but if not I assume I'll run into these two problems:

- If a client logs immediately after the SELECT, could data be logged AFTER the SELECT but BEFORE the DROP TABLE command? In which case I guess I would lose data?

- Immediately after the DROP TABLE in step 3, there's no table named 'log' in my database; 'log' is only re-created when step 4 runs. This means I could perhaps lose data, since for a brief moment the client can't log to the 'log' table?
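
For what it's worth, both races described above could in principle be closed by running all three steps inside one transaction after taking an exclusive table lock up front. A sketch (the TABLOCKX and HOLDLOCK hints are standard SQL Server syntax, but whether this behaves acceptably under your load is an open question):

```sql
CREATE PROCEDURE ShrinkDB AS
BEGIN TRANSACTION
    -- Exclusive lock on log, held until COMMIT, so clients block
    -- instead of writing between the SELECT, DROP, and rename
    SELECT * INTO log_keep
    FROM log WITH (TABLOCKX, HOLDLOCK)
    WHERE logtypeid NOT IN (2020, 5020 /* ...the other types... */)
       OR logtime > DATEADD(d, -1, GETDATE())

    DROP TABLE log
    EXEC sp_rename 'log_keep', 'log'
COMMIT TRANSACTION
GO
```

Be aware that on older SQL Server versions SELECT INTO inside a transaction can hold locks on system tables, and a client INSERT that was blocked during the transaction may still fail once it resumes, so this would need testing under load.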

Hopefully someone can clarify this for me. I've read the documentation, but I don't feel too sure on this subject... :-)

Hi,

Why drop the table? Why not delete the data you have just taken? Then you will never lose any data.

...simply because it takes such an insane amount of time. A regular "delete from log where logtypeid = stuff-I-don't-need" takes forever. :( The complete post is here, if you want the details: http://www.dbforums.com/showthread.php?threadid=979910

You can open a transaction, and then do SELECT * FROM TBL WITH (TABLOCK) WHERE 1=2 as your first statement. Then, instead of moving 1-3% of the data to a different permanent table, you can put it into a temp table (whether # or @ doesn't matter), truncate the original (instead of dropping it), and put the data from your temp table back into the original.

Even truncate takes quite a few seconds, although I guess that's related to some locking issues and not the truncate process in itself.
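
The transaction-plus-truncate suggestion above might look something like this (a sketch of that suggestion, not tested against this schema):

```sql
BEGIN TRANSACTION
    -- Lock the whole table for the duration of the transaction
    SELECT * INTO #log_keep
    FROM log WITH (TABLOCK, HOLDLOCK)
    WHERE logtypeid NOT IN (2020, 5020 /* ...the other types... */)
       OR logtime > DATEADD(d, -1, GETDATE())

    -- Fast, and unlike DROP it keeps the PK, indexes, and rights
    TRUNCATE TABLE log

    -- If LogId is an identity column, this also needs
    -- SET IDENTITY_INSERT log ON and an explicit column list
    INSERT INTO log SELECT * FROM #log_keep
COMMIT TRANSACTION
```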

The method involving dropping the table takes about 3-5 seconds, and is much faster than select-away-and-truncate. It sort of works; the problem with that solution is that the PK, indexes, rights and whatnot are gone too. These can of course be re-created with a number of SQL statements, but it's not exactly ideal...

But I cannot understand why deleting takes so much time. I'm assuming the cause is that about 50 clients are constantly logging to the table while I'm deleting.
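
One pattern the thread doesn't mention (so treat it as an assumption about this setup): deleting in small batches. Each short DELETE holds its locks only briefly, which often coexists better with 50 concurrently logging clients than one giant DELETE. A sketch using the old-style SET ROWCOUNT limit:

```sql
-- Delete the unwanted rows in chunks of 10,000 so each
-- statement (and the locks it takes) stays short.
-- The WHERE clause is the negation of the keep-predicate above.
SET ROWCOUNT 10000

DELETE FROM log
WHERE logtypeid IN (2020, 5020 /* ...the other types... */)
  AND logtime <= DATEADD(d, -1, GETDATE())

WHILE @@ROWCOUNT > 0
    DELETE FROM log
    WHERE logtypeid IN (2020, 5020 /* ...the other types... */)
      AND logtime <= DATEADD(d, -1, GETDATE())

SET ROWCOUNT 0 -- important: reset the limit afterwards
```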

I'm not an expert at writing SQL stored procedures; would anyone take a crack at setting something up for me with locking (and whatever else) that would work? It would help me a lot!! :-)

I can send you a contract, and upon signing you'll never have to deal with it again :cool:
