Hello,
I'd like to get a general opinion on whether other people, in their
experience, tend to get whole-table locks.
We have an in-house customer accounting app. One process in particular takes
just under 2 minutes to run. It processes 1400 customers by checking 4200
balance histories, checking and updating 1400 status records, and inserting
8400 new records across 5 tables. The whole process is in an explicit
transaction.
Our server is SQL Server 2000 on a 3GHz Pentium 4 with 2GB of RAM, 1GB of
which is dedicated to SQL Server.
I have noticed that the locks escalate to table locks, so that no one can
view any data while the posting is being done.
Does this behaviour sound normal for this many records, or is something
wrong? I don't think that whole tables should be locked for 8000
records. The 5 tables have an average of 400K records each. The whole DB
is 900MB.
Thanks
Brian Morris"Brian" <s@.y> wrote in message news:OWuWOJxfHHA.1216@.TK2MSFTNGP03.phx.gbl...
> Hello,
> I'd like to get a general opinion if in other people's experience they
> tend to get whole table locks.
> We have an inhouse customer accounting app. One process in particular
> takes just under 2 minutes to run. It processes 1400 customers by
> checking 4200 balance histories, checking and updating 1400 status
> records, and inserting 8400 new records across 5 tables. The whole
> process is in an expicit transaction.
> Our server is SQLServer2000 on a 3GHz Pentium4 with 2GB but with 1GB
> dedicated to SQLServer.
> I have noticed that the locks escalate to table locks so that noone can
> view any data when the posting is being done.
> Does this behaviour sound normal for this many records or is something
> wrong? I don't think that to whole tables should be locked for 8000
> records. The 5 tables have an average of 400K records each. The whole DB
> is 900MB.
It depends. What are the queries? Can you post them? And, more
importantly, what are the indexes?
Most likely you're doing some sort of range query on an unindexed field.
Say something like
Select orderdate from ordertable where orderdate between '2007-01-01' and
'2007-01-31'
and orderdate has no index on it. (Obviously your queries will be more
complex ;-)
But, while that may only cover say 8000 orders (out of 400K), if there's no
index on orderdate, SQL Server has to do a table scan. And to make sure
that another transaction can't insert (or delete) records while it's
scanning the table, it has to lock the table.
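If that's what's happening, the usual fix is an index that covers the range
(a sketch based on the example above; the index name is just illustrative):

CREATE INDEX IX_ordertable_orderdate ON ordertable (orderdate)

With that in place, SQL Server can take row/page locks on just the 8000
qualifying rows instead of scanning, and escalation is far less likely.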
Note the CPU speed really won't matter much here. Memory helps some because
it can cache more of the database, but ultimately it's probably dominated by
disk I/O.
If you can, post the queries and DDL and perhaps we can help find a more
efficient query.
> Thanks
> Brian Morris
>
Greg Moore
SQL Server DBA Consulting Remote and Onsite available!
Email: sql (at) greenms.com http://www.greenms.com/sqlserver.html|||Greg,
All the fields I query on are indexed. Actually, in checking, I did find a
couple of extra unnecessary indexes; deleting them shortened the run time by
15%, but the table locks are still there.
I'm not so worried about the runtime but more the locking.
I can't easily post the queries, but I can do the DDL. Would that still be
useful?
Thanks
Brian
"Greg D. Moore (Strider)" <mooregr_deleteth1s@.greenms.com> wrote in message
news:%233KA3zxfHHA.3676@.TK2MSFTNGP05.phx.gbl...
> "Brian" <s@.y> wrote in message
> news:OWuWOJxfHHA.1216@.TK2MSFTNGP03.phx.gbl...
>> Hello,
>> I'd like to get a general opinion if in other people's experience they
>> tend to get whole table locks.
>> We have an inhouse customer accounting app. One process in particular
>> takes just under 2 minutes to run. It processes 1400 customers by
>> checking 4200 balance histories, checking and updating 1400 status
>> records, and inserting 8400 new records across 5 tables. The whole
>> process is in an expicit transaction.
>> Our server is SQLServer2000 on a 3GHz Pentium4 with 2GB but with 1GB
>> dedicated to SQLServer.
>> I have noticed that the locks escalate to table locks so that noone can
>> view any data when the posting is being done.
>> Does this behaviour sound normal for this many records or is something
>> wrong? I don't think that to whole tables should be locked for 8000
>> records. The 5 tables have an average of 400K records each. The whole
>> DB is 900MB.
> It depends. What are the queries? Can you post them? And more
> importantly, what are the indexes.
> Most likely you're doing some sort of range query of an indexed field.
> Say something like
> Select orderdate from ordertable where orderdate between '2007-01-01' and
> '2007-01-31'
> and orderdate has no index on it. (Obviously your queries wil be more
> complex ;-)
> But, while that may only cover say 8000 orders (out of 400K), if there's
> no index on orderdate, SQL Server has to do a table scan. And to make
> sure that another transaction can't insert (or delete) records while it's
> scanning the table, it has to lock the table.
> Note the CPU speed really won't matter much here. Memory helps some
> because it can cache more of the database, but ultimately it's probably
> dominated by disk I/O.
> If you can, post the queries and DDL and perhaps we can help find a more
> effecient query.
>
>> Thanks
>> Brian Morris
>
> --
> Greg Moore
> SQL Server DBA Consulting Remote and Onsite available!
> Email: sql (at) greenms.com
> http://www.greenms.com/sqlserver.html
>|||Your server doesn't have much RAM. Locks use memory, and when the server
is pinched for memory, it is more likely to escalate to a table lock. With
just 1GB of RAM for SQL Server, I am not surprised you're seeing this.
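You can watch this happen from a second connection while the posting runs;
sp_lock is the stock way to see it on SQL 2000:

EXEC sp_lock
-- TAB rows in the Type column are table locks; RID/KEY/PAG are finer grained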
--
Tom
----
Thomas A. Moreau, BSc, PhD, MCSE, MCDBA, MCITP, MCTS
SQL Server MVP
Toronto, ON Canada
https://mvp.support.microsoft.com/profile/Tom.Moreau
"Brian" <s@.y> wrote in message news:OWuWOJxfHHA.1216@.TK2MSFTNGP03.phx.gbl...
Hello,
I'd like to get a general opinion if in other people's experience they tend
to get whole table locks.
We have an inhouse customer accounting app. One process in particular takes
just under 2 minutes to run. It processes 1400 customers by checking 4200
balance histories, checking and updating 1400 status records, and inserting
8400 new records across 5 tables. The whole process is in an expicit
transaction.
Our server is SQLServer2000 on a 3GHz Pentium4 with 2GB but with 1GB
dedicated to SQLServer.
I have noticed that the locks escalate to table locks so that noone can view
any data when the posting is being done.
Does this behaviour sound normal for this many records or is something
wrong? I don't think that to whole tables should be locked for 8000
records. The 5 tables have an average of 400K records each. The whole DB
is 900MB.
Thanks
Brian Morris|||A possible solution would be to split your big explicit single transaction
into multiple transactions.
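For example (a minimal sketch; sp_PostCustomer and tblCustomers are
hypothetical stand-ins for your posting logic):

DECLARE @AcNo nvarchar(8)
DECLARE cust CURSOR FOR SELECT AcNo FROM tblCustomers
OPEN cust
FETCH NEXT FROM cust INTO @AcNo
WHILE @@FETCH_STATUS = 0
BEGIN
    BEGIN TRANSACTION
    EXEC sp_PostCustomer @AcNo   -- one customer's checks, updates, inserts
    COMMIT TRANSACTION           -- locks are released here, per customer
    FETCH NEXT FROM cust INTO @AcNo
END
CLOSE cust
DEALLOCATE cust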
--
Sylvain Lafontaine, ing.
MVP - Technologies Virtual-PC
E-mail: sylvain aei ca (fill the blanks, no spam please)
"Brian" <s@.y> wrote in message news:OWuWOJxfHHA.1216@.TK2MSFTNGP03.phx.gbl...
> Hello,
> I'd like to get a general opinion if in other people's experience they
> tend to get whole table locks.
> We have an inhouse customer accounting app. One process in particular
> takes just under 2 minutes to run. It processes 1400 customers by
> checking 4200 balance histories, checking and updating 1400 status
> records, and inserting 8400 new records across 5 tables. The whole
> process is in an expicit transaction.
> Our server is SQLServer2000 on a 3GHz Pentium4 with 2GB but with 1GB
> dedicated to SQLServer.
> I have noticed that the locks escalate to table locks so that noone can
> view any data when the posting is being done.
> Does this behaviour sound normal for this many records or is something
> wrong? I don't think that to whole tables should be locked for 8000
> records. The 5 tables have an average of 400K records each. The whole DB
> is 900MB.
> Thanks
> Brian Morris
>|||Is the amount of RAM it's actually using the same or more than what I see
under Memory Usage in Task Manager for sqlservr.exe? Although I have SQL
configured for 1024MB out of 2046MB, usage is only between 300 and 600MB.
Someone had told us that 1GB should be more than enough for a DB less than
1GB (and it's only one db that we have). How can I get SQL to make use of
more for the locks?
Thanks
Brian
"Tom Moreau" <tom@.dont.spam.me.cips.ca> wrote in message
news:ODrceT1fHHA.596@.TK2MSFTNGP06.phx.gbl...
> Your server doesn't have much RAM. Locks use memory and when the
> optimizer
> is pinched for memory, it is more likely to escalate to a table lock.
> With
> just 1GB of RAM for SQL Server, I am not surprised you're seeing this.
> --
> Tom
> ----
> Thomas A. Moreau, BSc, PhD, MCSE, MCDBA, MCITP, MCTS
> SQL Server MVP
> Toronto, ON Canada
> https://mvp.support.microsoft.com/profile/Tom.Moreau
>
> "Brian" <s@.y> wrote in message
> news:OWuWOJxfHHA.1216@.TK2MSFTNGP03.phx.gbl...
> Hello,
> I'd like to get a general opinion if in other people's experience they
> tend
> to get whole table locks.
> We have an inhouse customer accounting app. One process in particular
> takes
> just under 2 minutes to run. It processes 1400 customers by checking 4200
> balance histories, checking and updating 1400 status records, and
> inserting
> 8400 new records across 5 tables. The whole process is in an expicit
> transaction.
> Our server is SQLServer2000 on a 3GHz Pentium4 with 2GB but with 1GB
> dedicated to SQLServer.
> I have noticed that the locks escalate to table locks so that noone can
> view
> any data when the posting is being done.
> Does this behaviour sound normal for this many records or is something
> wrong? I don't think that to whole tables should be locked for 8000
> records. The 5 tables have an average of 400K records each. The whole DB
> is 900MB.
> Thanks
> Brian Morris
>|||I thought that may be part of the problem, but the process has to be a
go-or-no-go kind of thing for each group of customers; this group is 1400,
and we can't take people out of the group.
Brian
"Sylvain Lafontaine" <sylvain aei ca (fill the blanks, no spam please)>
wrote in message news:eub2vO3fHHA.596@.TK2MSFTNGP06.phx.gbl...
>A possible solution would be to split your big explicit single transaction
>into multiple transactions.
> --
> Sylvain Lafontaine, ing.
> MVP - Technologies Virtual-PC
> E-mail: sylvain aei ca (fill the blanks, no spam please)
>
> "Brian" <s@.y> wrote in message
> news:OWuWOJxfHHA.1216@.TK2MSFTNGP03.phx.gbl...
>> Hello,
>> I'd like to get a general opinion if in other people's experience they
>> tend to get whole table locks.
>> We have an inhouse customer accounting app. One process in particular
>> takes just under 2 minutes to run. It processes 1400 customers by
>> checking 4200 balance histories, checking and updating 1400 status
>> records, and inserting 8400 new records across 5 tables. The whole
>> process is in an expicit transaction.
>> Our server is SQLServer2000 on a 3GHz Pentium4 with 2GB but with 1GB
>> dedicated to SQLServer.
>> I have noticed that the locks escalate to table locks so that noone can
>> view any data when the posting is being done.
>> Does this behaviour sound normal for this many records or is something
>> wrong? I don't think that to whole tables should be locked for 8000
>> records. The 5 tables have an average of 400K records each. The whole
>> DB is 900MB.
>> Thanks
>> Brian Morris
>|||Hmm, maybe if you give an explicit ROWLOCK hint on your tables, they will
not automatically escalate to a table lock. You could also try a
PAGLOCK (possibly in conjunction with a clustered index).
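For example (a sketch, reusing Greg's ordertable example; the hint goes on
each table reference):

SELECT orderdate FROM ordertable WITH (ROWLOCK)
WHERE orderdate BETWEEN '2007-01-01' AND '2007-01-31'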
Another possibility would be to take the table lock from the beginning
(e.g. a TABLOCKX hint), in order to accelerate the update.
Maybe redesigning your schema so that this transaction doesn't block
other people could also be the solution. However, as I don't know what you
are doing with this stuff, I cannot tell you anything more on this.
Changing the isolation level is another possibility but again, I don't know
what you are doing with this stuff.
--
Sylvain Lafontaine, ing.
MVP - Technologies Virtual-PC
E-mail: sylvain aei ca (fill the blanks, no spam please)
"Brian" <s@.y> wrote in message news:eOvaPq3fHHA.4536@.TK2MSFTNGP04.phx.gbl...
>I thought that may be part of the problem but the process has to be a go or
>no-go kind of thing for each group of customers and this group is 1400 and
>we can't take people out of the group.
> Brian
> "Sylvain Lafontaine" <sylvain aei ca (fill the blanks, no spam please)>
> wrote in message news:eub2vO3fHHA.596@.TK2MSFTNGP06.phx.gbl...
>>A possible solution would be to split your big explicit single transaction
>>into multiple transactions.
>> --
>> Sylvain Lafontaine, ing.
>> MVP - Technologies Virtual-PC
>> E-mail: sylvain aei ca (fill the blanks, no spam please)
>>
>> "Brian" <s@.y> wrote in message
>> news:OWuWOJxfHHA.1216@.TK2MSFTNGP03.phx.gbl...
>> Hello,
>> I'd like to get a general opinion if in other people's experience they
>> tend to get whole table locks.
>> We have an inhouse customer accounting app. One process in particular
>> takes just under 2 minutes to run. It processes 1400 customers by
>> checking 4200 balance histories, checking and updating 1400 status
>> records, and inserting 8400 new records across 5 tables. The whole
>> process is in an expicit transaction.
>> Our server is SQLServer2000 on a 3GHz Pentium4 with 2GB but with 1GB
>> dedicated to SQLServer.
>> I have noticed that the locks escalate to table locks so that noone can
>> view any data when the posting is being done.
>> Does this behaviour sound normal for this many records or is something
>> wrong? I don't think that to whole tables should be locked for 8000
>> records. The 5 tables have an average of 400K records each. The whole
>> DB is 900MB.
>> Thanks
>> Brian Morris
>>
>|||You should actually use PerfMon and look at MSSQL:Memory Manager->Total
Server Memory (KB). Also, data cache is different from other memory usage.
At the very least, try removing the limit on memory.
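That's the 'max server memory' setting; putting it back to the default lets
SQL Server manage memory dynamically again (a sketch, SQL 2000 syntax):

EXEC sp_configure 'show advanced options', 1
RECONFIGURE
EXEC sp_configure 'max server memory (MB)', 2147483647
RECONFIGURE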
Another thing to consider is using dirty reads for those who are browsing.
This way, they are not blocked, though they can read logically inconsistent
data.
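For the browsers, that can be done per statement with NOLOCK, or once per
connection with the isolation level (a sketch, reusing Greg's earlier
example):

SELECT orderdate FROM ordertable WITH (NOLOCK)
WHERE orderdate BETWEEN '2007-01-01' AND '2007-01-31'

-- or, at the top of the browsing connection:
SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED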
--
Tom
----
Thomas A. Moreau, BSc, PhD, MCSE, MCDBA, MCITP, MCTS
SQL Server MVP
Toronto, ON Canada
https://mvp.support.microsoft.com/profile/Tom.Moreau
"Brian" <s@.y> wrote in message news:O$BAcj3fHHA.1008@.TK2MSFTNGP05.phx.gbl...
Is the amount of RAM it's actually using the same or more than what I see
under Memory Usage in Task Manager for sqlservr.exe? Although I have SQL
configured for 1024MB out of 2046MB, usage is only between 300 an 600MB.
Someone had told us that 1GB should be more than enough for a DB less than
1GB (and it's only one db that we have). How can I get SQL to make use of
more for the locks?
Thanks
Brian
"Tom Moreau" <tom@.dont.spam.me.cips.ca> wrote in message
news:ODrceT1fHHA.596@.TK2MSFTNGP06.phx.gbl...
> Your server doesn't have much RAM. Locks use memory and when the
> optimizer
> is pinched for memory, it is more likely to escalate to a table lock.
> With
> just 1GB of RAM for SQL Server, I am not surprised you're seeing this.
> --
> Tom
> ----
> Thomas A. Moreau, BSc, PhD, MCSE, MCDBA, MCITP, MCTS
> SQL Server MVP
> Toronto, ON Canada
> https://mvp.support.microsoft.com/profile/Tom.Moreau
>
> "Brian" <s@.y> wrote in message
> news:OWuWOJxfHHA.1216@.TK2MSFTNGP03.phx.gbl...
> Hello,
> I'd like to get a general opinion if in other people's experience they
> tend
> to get whole table locks.
> We have an inhouse customer accounting app. One process in particular
> takes
> just under 2 minutes to run. It processes 1400 customers by checking 4200
> balance histories, checking and updating 1400 status records, and
> inserting
> 8400 new records across 5 tables. The whole process is in an expicit
> transaction.
> Our server is SQLServer2000 on a 3GHz Pentium4 with 2GB but with 1GB
> dedicated to SQLServer.
> I have noticed that the locks escalate to table locks so that noone can
> view
> any data when the posting is being done.
> Does this behaviour sound normal for this many records or is something
> wrong? I don't think that to whole tables should be locked for 8000
> records. The 5 tables have an average of 400K records each. The whole DB
> is 900MB.
> Thanks
> Brian Morris
>|||I can't do the ROWLOCK hint because I've got some ADO code that does some
complicated calculations and updates via recordsets instead of SQL.
We don't have any clustered indexes.
We need the ReadCommitted isolation.
I'm guessing by your response that this kind of locking is not supposed to
happen.
I'll keep reading around and see if we can figure out what we are doing
wrong.
Thanks
Brian
"Sylvain Lafontaine" <sylvain aei ca (fill the blanks, no spam please)>
wrote in message news:%23dHiNA4fHHA.588@.TK2MSFTNGP06.phx.gbl...
> Hum, maybe if you give an explicit ROWLOCK hint on your tables, they will
> not automatically escalate to a table lock. You could also try with a
> PagLock (possibly in conjonction with a clustered index).
> Another possibility would be to set up the table lock from the beginning
> in order to accelerate the update.
> Maybe redesigning your schema in order that this transaction doesn't block
> other peoples could also be the solution. However, as I don't know what
> you are doing with this stuff, I cannot tell you anything more on this.
> Changing the isolation level is another possibility but again, I don't
> know what you are doing with this stuff.
> --
> Sylvain Lafontaine, ing.
> MVP - Technologies Virtual-PC
> E-mail: sylvain aei ca (fill the blanks, no spam please)
>
> "Brian" <s@.y> wrote in message
> news:eOvaPq3fHHA.4536@.TK2MSFTNGP04.phx.gbl...
>>I thought that may be part of the problem but the process has to be a go
>>or no-go kind of thing for each group of customers and this group is 1400
>>and we can't take people out of the group.
>> Brian
>> "Sylvain Lafontaine" <sylvain aei ca (fill the blanks, no spam please)>
>> wrote in message news:eub2vO3fHHA.596@.TK2MSFTNGP06.phx.gbl...
>>A possible solution would be to split your big explicit single
>>transaction into multiple transactions.
>> --
>> Sylvain Lafontaine, ing.
>> MVP - Technologies Virtual-PC
>> E-mail: sylvain aei ca (fill the blanks, no spam please)
>>
>> "Brian" <s@.y> wrote in message
>> news:OWuWOJxfHHA.1216@.TK2MSFTNGP03.phx.gbl...
>> Hello,
>> I'd like to get a general opinion if in other people's experience they
>> tend to get whole table locks.
>> We have an inhouse customer accounting app. One process in particular
>> takes just under 2 minutes to run. It processes 1400 customers by
>> checking 4200 balance histories, checking and updating 1400 status
>> records, and inserting 8400 new records across 5 tables. The whole
>> process is in an expicit transaction.
>> Our server is SQLServer2000 on a 3GHz Pentium4 with 2GB but with 1GB
>> dedicated to SQLServer.
>> I have noticed that the locks escalate to table locks so that noone can
>> view any data when the posting is being done.
>> Does this behaviour sound normal for this many records or is something
>> wrong? I don't think that to whole tables should be locked for 8000
>> records. The 5 tables have an average of 400K records each. The whole
>> DB is 900MB.
>> Thanks
>> Brian Morris
>>
>>
>|||I'll check that.
If we use READ UNCOMMITTED, won't that mean that we can get some data with
really wrong info?
Other users are not so much browsing but are looking up balances in order to
post a transaction based on the balance.
Thanks
Brian
"Tom Moreau" <tom@.dont.spam.me.cips.ca> wrote in message
news:%23DOnnT4fHHA.1216@.TK2MSFTNGP03.phx.gbl...
> You should actually use PerfMon and look at MSSQL:Memory Manager->Total
> Server Memory (KB). Also, data cache is different from other memory
> usage.
> At the very least, try removing the limit on memory.
> Another thing to consider is using dirty reads for those who are browsing.
> his way, they are not blocked, though they can read logically inconsistent
> data.
> --
> Tom
> ----
> Thomas A. Moreau, BSc, PhD, MCSE, MCDBA, MCITP, MCTS
> SQL Server MVP
> Toronto, ON Canada
> https://mvp.support.microsoft.com/profile/Tom.Moreau
>
> "Brian" <s@.y> wrote in message
> news:O$BAcj3fHHA.1008@.TK2MSFTNGP05.phx.gbl...
> Is the amount of RAM it's actually using the same or more than what I see
> under Memory Usage in Task Manager for sqlservr.exe? Although I have SQL
> configured for 1024MB out of 2046MB, usage is only between 300 an 600MB.
> Someone had told us that 1GB should be more than enough for a DB less than
> 1GB (and it's only one db that we have). How can I get SQL to make use of
> more for the locks?
> Thanks
> Brian
> "Tom Moreau" <tom@.dont.spam.me.cips.ca> wrote in message
> news:ODrceT1fHHA.596@.TK2MSFTNGP06.phx.gbl...
>> Your server doesn't have much RAM. Locks use memory and when the
>> optimizer
>> is pinched for memory, it is more likely to escalate to a table lock.
>> With
>> just 1GB of RAM for SQL Server, I am not surprised you're seeing this.
>> --
>> Tom
>> ----
>> Thomas A. Moreau, BSc, PhD, MCSE, MCDBA, MCITP, MCTS
>> SQL Server MVP
>> Toronto, ON Canada
>> https://mvp.support.microsoft.com/profile/Tom.Moreau
>>
>> "Brian" <s@.y> wrote in message
>> news:OWuWOJxfHHA.1216@.TK2MSFTNGP03.phx.gbl...
>> Hello,
>> I'd like to get a general opinion if in other people's experience they
>> tend
>> to get whole table locks.
>> We have an inhouse customer accounting app. One process in particular
>> takes
>> just under 2 minutes to run. It processes 1400 customers by checking
>> 4200
>> balance histories, checking and updating 1400 status records, and
>> inserting
>> 8400 new records across 5 tables. The whole process is in an expicit
>> transaction.
>> Our server is SQLServer2000 on a 3GHz Pentium4 with 2GB but with 1GB
>> dedicated to SQLServer.
>> I have noticed that the locks escalate to table locks so that noone can
>> view
>> any data when the posting is being done.
>> Does this behaviour sound normal for this many records or is something
>> wrong? I don't think that to whole tables should be locked for 8000
>> records. The 5 tables have an average of 400K records each. The whole
>> DB
>> is 900MB.
>> Thanks
>> Brian Morris
>>
>|||"Brian" <s@.y> wrote in message news:essnYa4fHHA.4596@.TK2MSFTNGP05.phx.gbl...
> I'll check that.
> If we use READ UNCOMMITTED won't that mean that we can get some data with
> really wrong info?
Yes. In some cases this might be ok. But...
> Other users are not so much browsing but are looking up balances in order
> to post a transaction based on the balance.
This is one of those where it's a really bad idea to do read-uncommitted.
;-)
> Thanks
> Brian
> "Tom Moreau" <tom@.dont.spam.me.cips.ca> wrote in message
> news:%23DOnnT4fHHA.1216@.TK2MSFTNGP03.phx.gbl...
>> You should actually use PerfMon and look at MSSQL:Memory Manager->Total
>> Server Memory (KB). Also, data cache is different from other memory
>> usage.
>> At the very least, try removing the limit on memory.
>> Another thing to consider is using dirty reads for those who are
>> browsing.
>> his way, they are not blocked, though they can read logically
>> inconsistent
>> data.
>> --
>> Tom
>> ----
>> Thomas A. Moreau, BSc, PhD, MCSE, MCDBA, MCITP, MCTS
>> SQL Server MVP
>> Toronto, ON Canada
>> https://mvp.support.microsoft.com/profile/Tom.Moreau
>>
>> "Brian" <s@.y> wrote in message
>> news:O$BAcj3fHHA.1008@.TK2MSFTNGP05.phx.gbl...
>> Is the amount of RAM it's actually using the same or more than what I see
>> under Memory Usage in Task Manager for sqlservr.exe? Although I have SQL
>> configured for 1024MB out of 2046MB, usage is only between 300 an 600MB.
>> Someone had told us that 1GB should be more than enough for a DB less
>> than
>> 1GB (and it's only one db that we have). How can I get SQL to make use
>> of
>> more for the locks?
>> Thanks
>> Brian
>> "Tom Moreau" <tom@.dont.spam.me.cips.ca> wrote in message
>> news:ODrceT1fHHA.596@.TK2MSFTNGP06.phx.gbl...
>> Your server doesn't have much RAM. Locks use memory and when the
>> optimizer
>> is pinched for memory, it is more likely to escalate to a table lock.
>> With
>> just 1GB of RAM for SQL Server, I am not surprised you're seeing this.
>> --
>> Tom
>> ----
>> Thomas A. Moreau, BSc, PhD, MCSE, MCDBA, MCITP, MCTS
>> SQL Server MVP
>> Toronto, ON Canada
>> https://mvp.support.microsoft.com/profile/Tom.Moreau
>>
>> "Brian" <s@.y> wrote in message
>> news:OWuWOJxfHHA.1216@.TK2MSFTNGP03.phx.gbl...
>> Hello,
>> I'd like to get a general opinion if in other people's experience they
>> tend
>> to get whole table locks.
>> We have an inhouse customer accounting app. One process in particular
>> takes
>> just under 2 minutes to run. It processes 1400 customers by checking
>> 4200
>> balance histories, checking and updating 1400 status records, and
>> inserting
>> 8400 new records across 5 tables. The whole process is in an expicit
>> transaction.
>> Our server is SQLServer2000 on a 3GHz Pentium4 with 2GB but with 1GB
>> dedicated to SQLServer.
>> I have noticed that the locks escalate to table locks so that noone can
>> view
>> any data when the posting is being done.
>> Does this behaviour sound normal for this many records or is something
>> wrong? I don't think that to whole tables should be locked for 8000
>> records. The 5 tables have an average of 400K records each. The whole
>> DB
>> is 900MB.
>> Thanks
>> Brian Morris
>>
>>
>
--
Greg Moore
SQL Server DBA Consulting Remote and Onsite available!
Email: sql (at) greenms.com http://www.greenms.com/sqlserver.html|||"Brian" <s@.y> wrote in message news:OrVBqNyfHHA.208@.TK2MSFTNGP05.phx.gbl...
> Greg,
> All the fields I query on are indexed. Actually in checking I did find a
> couple extra unnecessary indexes that I deleted which actually shortened
> the run time by 15% but the table locks are still there.
> I'm not so worried about the runtime but more the locking.
> I can't easily post the queries but I can do the DDL. Would that be still
> useful?
Hmm, w/o the queries, the DDL won't really help. But perhaps someone might
have some insight. So I'd say do it.
Can't easily post due to complexity or privacy? If the latter, perhaps
sanitizing them some?
> Thanks
> Brian
> "Greg D. Moore (Strider)" <mooregr_deleteth1s@.greenms.com> wrote in
> message news:%233KA3zxfHHA.3676@.TK2MSFTNGP05.phx.gbl...
>> "Brian" <s@.y> wrote in message
>> news:OWuWOJxfHHA.1216@.TK2MSFTNGP03.phx.gbl...
>> Hello,
>> I'd like to get a general opinion if in other people's experience they
>> tend to get whole table locks.
>> We have an inhouse customer accounting app. One process in particular
>> takes just under 2 minutes to run. It processes 1400 customers by
>> checking 4200 balance histories, checking and updating 1400 status
>> records, and inserting 8400 new records across 5 tables. The whole
>> process is in an expicit transaction.
>> Our server is SQLServer2000 on a 3GHz Pentium4 with 2GB but with 1GB
>> dedicated to SQLServer.
>> I have noticed that the locks escalate to table locks so that noone can
>> view any data when the posting is being done.
>> Does this behaviour sound normal for this many records or is something
>> wrong? I don't think that to whole tables should be locked for 8000
>> records. The 5 tables have an average of 400K records each. The whole
>> DB is 900MB.
>> It depends. What are the queries? Can you post them? And more
>> importantly, what are the indexes.
>> Most likely you're doing some sort of range query of an indexed field.
>> Say something like
>> Select orderdate from ordertable where orderdate between '2007-01-01' and
>> '2007-01-31'
>> and orderdate has no index on it. (Obviously your queries wil be more
>> complex ;-)
>> But, while that may only cover say 8000 orders (out of 400K), if there's
>> no index on orderdate, SQL Server has to do a table scan. And to make
>> sure that another transaction can't insert (or delete) records while it's
>> scanning the table, it has to lock the table.
>> Note the CPU speed really won't matter much here. Memory helps some
>> because it can cache more of the database, but ultimately it's probably
>> dominated by disk I/O.
>> If you can, post the queries and DDL and perhaps we can help find a more
>> effecient query.
>>
>> Thanks
>> Brian Morris
>>
>>
>> --
>> Greg Moore
>> SQL Server DBA Consulting Remote and Onsite available!
>> Email: sql (at) greenms.com http://www.greenms.com/sqlserver.html
>>
>
Greg Moore
SQL Server DBA Consulting Remote and Onsite available!
Email: sql (at) greenms.com http://www.greenms.com/sqlserver.html|||"Brian" <s@.y> wrote in message news:OW9ugU4fHHA.3632@.TK2MSFTNGP02.phx.gbl...
>I can't do the ROWLOCK hint becuase I've got some ADO code that does some
>complicated calculations and updates via recordsets instead of SQL.
> We don't have any clustered indexes.
> We need the ReadCommitted isolation.
> I'm guessing by your response that this kind of locking is not supposed to
> happen.
> I'll keep reading around and see if we can figure out what we are doing
> wrong.
Also, ROWLOCK would consume more memory, which could potentially make the
issue worse.
As for not having any clustered indexes, any reason why not? (not saying
they would necessarily help here, but they do have their place and in
general are useful.)
> Thanks
> Brian
>
--
Greg Moore
SQL Server DBA Consulting Remote and Onsite available!
Email: sql (at) greenms.com http://www.greenms.com/sqlserver.html|||« I'm guessing by your response that this kind of locking is not supposed to
happen. »
No, I didn't say that: a lock escalation from a row lock to a table lock can
happen when the server chooses to do so. However, if you give an explicit
ROWLOCK or PAGLOCK hint, then it's quite possible (but not certain)
that SQL Server will wait longer before making the decision to go with a
table lock escalation. It's a possibility, not a certainty.
You say that you are using ADO code. It's possible that the total time
required to make the transaction (2 minutes) is a direct consequence of
this. Maybe putting all this stuff into a single SP (or even redesigning
your algorithm) will speed things up.
--
Sylvain Lafontaine, ing.
MVP - Technologies Virtual-PC
E-mail: sylvain aei ca (fill the blanks, no spam please)
"Brian" <s@.y> wrote in message news:OW9ugU4fHHA.3632@.TK2MSFTNGP02.phx.gbl...
>I can't do the ROWLOCK hint becuase I've got some ADO code that does some
>complicated calculations and updates via recordsets instead of SQL.
> We don't have any clustered indexes.
> We need the ReadCommitted isolation.
> I'm guessing by your response that this kind of locking is not supposed to
> happen.
> I'll keep reading around and see if we can figure out what we are doing
> wrong.
> Thanks
> Brian
> "Sylvain Lafontaine" <sylvain aei ca (fill the blanks, no spam please)>
> wrote in message news:%23dHiNA4fHHA.588@.TK2MSFTNGP06.phx.gbl...
>> Hum, maybe if you give an explicit ROWLOCK hint on your tables, they will
>> not automatically escalate to a table lock. You could also try with a
>> PagLock (possibly in conjonction with a clustered index).
>> Another possibility would be to set up the table lock from the beginning
>> in order to accelerate the update.
>> Maybe redesigning your schema in order that this transaction doesn't
>> block other peoples could also be the solution. However, as I don't know
>> what you are doing with this stuff, I cannot tell you anything more on
>> this.
>> Changing the isolation level is another possibility but again, I don't
>> know what you are doing with this stuff.
>> --
>> Sylvain Lafontaine, ing.
>> MVP - Technologies Virtual-PC
>> E-mail: sylvain aei ca (fill the blanks, no spam please)
>>
>> "Brian" <s@.y> wrote in message
>> news:eOvaPq3fHHA.4536@.TK2MSFTNGP04.phx.gbl...
>>I thought that may be part of the problem but the process has to be a go
>>or no-go kind of thing for each group of customers and this group is 1400
>>and we can't take people out of the group.
>> Brian
>> "Sylvain Lafontaine" <sylvain aei ca (fill the blanks, no spam please)>
>> wrote in message news:eub2vO3fHHA.596@.TK2MSFTNGP06.phx.gbl...
>>A possible solution would be to split your big explicit single
>>transaction into multiple transactions.
>> --
>> Sylvain Lafontaine, ing.
>> MVP - Technologies Virtual-PC
>> E-mail: sylvain aei ca (fill the blanks, no spam please)
>>
>> "Brian" <s@.y> wrote in message
>> news:OWuWOJxfHHA.1216@.TK2MSFTNGP03.phx.gbl...
>> Hello,
>> I'd like to get a general opinion if in other people's experience they
>> tend to get whole table locks.
>> We have an inhouse customer accounting app. One process in particular
>> takes just under 2 minutes to run. It processes 1400 customers by
>> checking 4200 balance histories, checking and updating 1400 status
>> records, and inserting 8400 new records across 5 tables. The whole
>> process is in an expicit transaction.
>> Our server is SQLServer2000 on a 3GHz Pentium4 with 2GB but with 1GB
>> dedicated to SQLServer.
>> I have noticed that the locks escalate to table locks so that noone
>> can view any data when the posting is being done.
>> Does this behaviour sound normal for this many records or is something
>> wrong? I don't think that to whole tables should be locked for 8000
>> records. The 5 tables have an average of 400K records each. The
>> whole DB is 900MB.
>> Thanks
>> Brian Morris
>>
>>
>>
>|||I would look at putting clustered indexes on your tables. Generally, you
have a clustered index on every table, unless you have determined a reason -
i.e. measured poor performance - not to do so.
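For example (a sketch, again on Greg's ordertable example; pick the key
your most common range scans use):

CREATE CLUSTERED INDEX IX_ordertable_orderdate ON ordertable (orderdate)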
--
Tom
----
Thomas A. Moreau, BSc, PhD, MCSE, MCDBA, MCITP, MCTS
SQL Server MVP
Toronto, ON Canada
https://mvp.support.microsoft.com/profile/Tom.Moreau
"Brian" <s@.y> wrote in message news:OW9ugU4fHHA.3632@.TK2MSFTNGP02.phx.gbl...
I can't do the ROWLOCK hint becuase I've got some ADO code that does some
complicated calculations and updates via recordsets instead of SQL.
We don't have any clustered indexes.
We need the ReadCommitted isolation.
I'm guessing by your response that this kind of locking is not supposed to
happen.
I'll keep reading around and see if we can figure out what we are doing
wrong.
Thanks
Brian
"Sylvain Lafontaine" <sylvain aei ca (fill the blanks, no spam please)>
wrote in message news:%23dHiNA4fHHA.588@.TK2MSFTNGP06.phx.gbl...
> Hum, maybe if you give an explicit ROWLOCK hint on your tables, they will
> not automatically escalate to a table lock. You could also try with a
> PagLock (possibly in conjonction with a clustered index).
> Another possibility would be to set up the table lock from the beginning
> in order to accelerate the update.
> Maybe redesigning your schema in order that this transaction doesn't block
> other peoples could also be the solution. However, as I don't know what
> you are doing with this stuff, I cannot tell you anything more on this.
> Changing the isolation level is another possibility but again, I don't
> know what you are doing with this stuff.
> --
> Sylvain Lafontaine, ing.
> MVP - Technologies Virtual-PC
> E-mail: sylvain aei ca (fill the blanks, no spam please)
>
> "Brian" <s@.y> wrote in message
> news:eOvaPq3fHHA.4536@.TK2MSFTNGP04.phx.gbl...
>>I thought that may be part of the problem but the process has to be a go
>>or no-go kind of thing for each group of customers and this group is 1400
>>and we can't take people out of the group.
>> Brian
>> "Sylvain Lafontaine" <sylvain aei ca (fill the blanks, no spam please)>
>> wrote in message news:eub2vO3fHHA.596@.TK2MSFTNGP06.phx.gbl...
>>A possible solution would be to split your big explicit single
>>transaction into multiple transactions.
>> --
>> Sylvain Lafontaine, ing.
>> MVP - Technologies Virtual-PC
>> E-mail: sylvain aei ca (fill the blanks, no spam please)
>>
>> "Brian" <s@.y> wrote in message
>> news:OWuWOJxfHHA.1216@.TK2MSFTNGP03.phx.gbl...
>> Hello,
>> I'd like to get a general opinion if in other people's experience they
>> tend to get whole table locks.
>> We have an inhouse customer accounting app. One process in particular
>> takes just under 2 minutes to run. It processes 1400 customers by
>> checking 4200 balance histories, checking and updating 1400 status
>> records, and inserting 8400 new records across 5 tables. The whole
>> process is in an expicit transaction.
>> Our server is SQLServer2000 on a 3GHz Pentium4 with 2GB but with 1GB
>> dedicated to SQLServer.
>> I have noticed that the locks escalate to table locks so that noone can
>> view any data when the posting is being done.
>> Does this behaviour sound normal for this many records or is something
>> wrong? I don't think that to whole tables should be locked for 8000
>> records. The 5 tables have an average of 400K records each. The whole
>> DB is 900MB.
>> Thanks
>> Brian Morris
>>
>>
>|||A bit of both; also, most of them are done as ADO recordset updates, not
SQL.
Thanks
Brian
"Greg D. Moore (Strider)" <mooregr_deleteth1s@.greenms.com> wrote in message
news:O1k%232j4fHHA.596@.TK2MSFTNGP05.phx.gbl...
> "Brian" <s@.y> wrote in message
> news:OrVBqNyfHHA.208@.TK2MSFTNGP05.phx.gbl...
>> Greg,
>> All the fields I query on are indexed. Actually in checking I did find a
>> couple extra unnecessary indexes that I deleted which actually shortened
>> the run time by 15% but the table locks are still there.
>> I'm not so worried about the runtime but more the locking.
>> I can't easily post the queries but I can do the DDL. Would that be
>> still useful?
> Hmm, w/o the queries, the DDL won't really help. But perhaps someone
> might have some insight. So I'd say do it.
>
> Can't easily post due to complexity or privacy? If the latter, perhaps
> sanitizing them some?
>
>> Thanks
>> Brian
>> "Greg D. Moore (Strider)" <mooregr_deleteth1s@.greenms.com> wrote in
>> message news:%233KA3zxfHHA.3676@.TK2MSFTNGP05.phx.gbl...
>> "Brian" <s@.y> wrote in message
>> news:OWuWOJxfHHA.1216@.TK2MSFTNGP03.phx.gbl...
>> Hello,
>> I'd like to get a general opinion if in other people's experience they
>> tend to get whole table locks.
>> We have an inhouse customer accounting app. One process in particular
>> takes just under 2 minutes to run. It processes 1400 customers by
>> checking 4200 balance histories, checking and updating 1400 status
>> records, and inserting 8400 new records across 5 tables. The whole
>> process is in an expicit transaction.
>> Our server is SQLServer2000 on a 3GHz Pentium4 with 2GB but with 1GB
>> dedicated to SQLServer.
>> I have noticed that the locks escalate to table locks so that noone can
>> view any data when the posting is being done.
>> Does this behaviour sound normal for this many records or is something
>> wrong? I don't think that to whole tables should be locked for 8000
>> records. The 5 tables have an average of 400K records each. The whole
>> DB is 900MB.
>> It depends. What are the queries? Can you post them? And more
>> importantly, what are the indexes.
>> Most likely you're doing some sort of range query of an indexed field.
>> Say something like
>> Select orderdate from ordertable where orderdate between '2007-01-01'
>> and '2007-01-31'
>> and orderdate has no index on it. (Obviously your queries wil be more
>> complex ;-)
>> But, while that may only cover say 8000 orders (out of 400K), if there's
>> no index on orderdate, SQL Server has to do a table scan. And to make
>> sure that another transaction can't insert (or delete) records while
>> it's scanning the table, it has to lock the table.
>> Note the CPU speed really won't matter much here. Memory helps some
>> because it can cache more of the database, but ultimately it's probably
>> dominated by disk I/O.
>> If you can, post the queries and DDL and perhaps we can help find a more
>> effecient query.
>>
>> Thanks
>> Brian Morris
>>
>>
>> --
>> Greg Moore
>> SQL Server DBA Consulting Remote and Onsite available!
>> Email: sql (at) greenms.com http://www.greenms.com/sqlserver.html
>>
>>
>
> --
> Greg Moore
> SQL Server DBA Consulting Remote and Onsite available!
> Email: sql (at) greenms.com
> http://www.greenms.com/sqlserver.html
>|||I'd read that they were better for data that is generally static, since
the server has to make room to stick rows in. During a 1-week (or so) window
our data comes in out of sequence (with reference to the main indexes), and I
thought that would mean the server would have a lot of reorganising to
do.
Brian
"Greg D. Moore (Strider)" <mooregr_deleteth1s@.greenms.com> wrote in message
news:uEETmk4fHHA.596@.TK2MSFTNGP05.phx.gbl...
> "Brian" <s@.y> wrote in message
> news:OW9ugU4fHHA.3632@.TK2MSFTNGP02.phx.gbl...
>>I can't do the ROWLOCK hint becuase I've got some ADO code that does some
>>complicated calculations and updates via recordsets instead of SQL.
>> We don't have any clustered indexes.
>> We need the ReadCommitted isolation.
>> I'm guessing by your response that this kind of locking is not supposed
>> to happen.
>> I'll keep reading around and see if we can figure out what we are doing
>> wrong.
> Also, ROWLOCK would consume more memory which could potentially make the
> issue worse.
> As for not having any clustered indexes, any reason why not? (not saying
> they would necessarily help here, but they do have their place and in
> general are useful.)
>
>> Thanks
>> Brian
> --
> Greg Moore
> SQL Server DBA Consulting Remote and Onsite available!
> Email: sql (at) greenms.com
> http://www.greenms.com/sqlserver.html
>|||I will take a look at that and see if it makes a difference.
Generally the 5 main transaction tables look similar to the following.
Within each transaction table a given AcNo may have up to 9 sub-account
histories, each with a separate balance. Transactions for a given day are
not necessarily posted on that day and often arrive out of date sequence.
There may be more than one transaction per day. By far the most regular
search/sort is by AcNo, SubAc and Date. Would this be better as a clustered
index?
CREATE TABLE [dbo].[tblShares] (
[TranID] [int] IDENTITY (1, 1) NOT NULL ,
[AcNo] [nvarchar] (8) COLLATE SQL_Latin1_General_CP1_CI_AS NULL ,
[SubAc] [smallint] NULL ,
[TranDate] [datetime] NULL ,
[AmtDr] [money] NULL ,
[AmtCr] [money] NULL ,
[TranCode] [nvarchar] (4) COLLATE SQL_Latin1_General_CP1_CI_AS NULL ,
[Balance] [money] NULL ,
[DrIFNo] [nvarchar] (4) COLLATE SQL_Latin1_General_CP1_CI_AS NULL ,
[CrIFNo] [nvarchar] (4) COLLATE SQL_Latin1_General_CP1_CI_AS NULL ,
[RefNo] [nvarchar] (8) COLLATE SQL_Latin1_General_CP1_CI_AS NULL ,
[CheqNo] [int] NULL ,
[CheqPtr] [int] NULL ,
[TranType] [nvarchar] (2) COLLATE SQL_Latin1_General_CP1_CI_AS NULL ,
[XferAcNo] [nvarchar] (8) COLLATE SQL_Latin1_General_CP1_CI_AS NULL ,
[UserNo] [smallint] NULL ,
[TStamp] [datetime] NULL
) ON [PRIMARY]
GO
ALTER TABLE [dbo].[tblShares] ADD
CONSTRAINT [DF__tblShares__SubAc__6D2D2E85] DEFAULT (0) FOR [SubAc],
CONSTRAINT [DF__tblShares__AmtDr__6E2152BE] DEFAULT (0) FOR [AmtDr],
CONSTRAINT [DF__tblShares__AmtCr__6F1576F7] DEFAULT (0) FOR [AmtCr],
CONSTRAINT [DF__tblShares__Balan__70099B30] DEFAULT (0) FOR [Balance],
CONSTRAINT [DF__tblShares__RefNo__70FDBF69] DEFAULT ('00000000') FOR
[RefNo],
CONSTRAINT [DF__tblShares__CQVJ__71F1E3A2] DEFAULT ('C') FOR [TranCode],
CONSTRAINT [DF__tblShares__CheqN__72E607DB] DEFAULT (0) FOR [CheqNo],
CONSTRAINT [DF__tblShares__CheqP__73DA2C14] DEFAULT (0) FOR [CheqPtr],
CONSTRAINT [DF__tblShares__TranT__74CE504D] DEFAULT ('G') FOR [TranType],
CONSTRAINT [DF__tblShares__UserN__75C27486] DEFAULT (0) FOR [UserNo],
CONSTRAINT [DF__tblShares__TStam__76B698BF] DEFAULT (getdate()) FOR
[TStamp],
CONSTRAINT [aaaaatblShares_PK] PRIMARY KEY NONCLUSTERED
(
[TranID]
) ON [PRIMARY] ,
CONSTRAINT [CK tblShares TranType] CHECK ([TranType] = 'O' or [TranType] =
'PS' or [TranType] = 'IS' or [TranType] = 'ES' or [TranType] = 'G' or
[TranType] = 'D' or [TranType] = 'R' or [TranType] = 'I' or [TranType] = 'T'
or [TranType] = 'X')
GO
CREATE INDEX [AcNoSubAc] ON [dbo].[tblShares]([AcNo], [SubAc], [TranDate],
[TranID]) ON [PRIMARY]
GO
CREATE INDEX [CheqPtr] ON [dbo].[tblShares]([CheqPtr]) ON [PRIMARY]
GO
CREATE INDEX [TranDate] ON [dbo].[tblShares]([TranDate]) ON [PRIMARY]
GO
"Tom Moreau" <tom@.dont.spam.me.cips.ca> wrote in message
news:OggiiQ5fHHA.4596@.TK2MSFTNGP05.phx.gbl...
>I would look at putting clustered indexes on your tables. generally, you
> have a clustered index on every table, unless you have determined a
> reason -
> i.e. measured poor performance - not to do so.
> --
> Tom
> ----
> Thomas A. Moreau, BSc, PhD, MCSE, MCDBA, MCITP, MCTS
> SQL Server MVP
> Toronto, ON Canada
> https://mvp.support.microsoft.com/profile/Tom.Moreau
>
> "Brian" <s@.y> wrote in message
> news:OW9ugU4fHHA.3632@.TK2MSFTNGP02.phx.gbl...
> I can't do the ROWLOCK hint becuase I've got some ADO code that does some
> complicated calculations and updates via recordsets instead of SQL.
> We don't have any clustered indexes.
> We need the ReadCommitted isolation.
> I'm guessing by your response that this kind of locking is not supposed to
> happen.
> I'll keep reading around and see if we can figure out what we are doing
> wrong.
> Thanks
> Brian
> "Sylvain Lafontaine" <sylvain aei ca (fill the blanks, no spam please)>
> wrote in message news:%23dHiNA4fHHA.588@.TK2MSFTNGP06.phx.gbl...
>> Hum, maybe if you give an explicit ROWLOCK hint on your tables, they will
>> not automatically escalate to a table lock. You could also try with a
>> PagLock (possibly in conjonction with a clustered index).
>> Another possibility would be to set up the table lock from the beginning
>> in order to accelerate the update.
>> Maybe redesigning your schema in order that this transaction doesn't
>> block
>> other peoples could also be the solution. However, as I don't know what
>> you are doing with this stuff, I cannot tell you anything more on this.
>> Changing the isolation level is another possibility but again, I don't
>> know what you are doing with this stuff.
>> --
>> Sylvain Lafontaine, ing.
>> MVP - Technologies Virtual-PC
>> E-mail: sylvain aei ca (fill the blanks, no spam please)
>>
>> "Brian" <s@.y> wrote in message
>> news:eOvaPq3fHHA.4536@.TK2MSFTNGP04.phx.gbl...
>>I thought that may be part of the problem but the process has to be a go
>>or no-go kind of thing for each group of customers and this group is 1400
>>and we can't take people out of the group.
>> Brian
>> "Sylvain Lafontaine" <sylvain aei ca (fill the blanks, no spam please)>
>> wrote in message news:eub2vO3fHHA.596@.TK2MSFTNGP06.phx.gbl...
>>A possible solution would be to split your big explicit single
>>transaction into multiple transactions.
>> --
>> Sylvain Lafontaine, ing.
>> MVP - Technologies Virtual-PC
>> E-mail: sylvain aei ca (fill the blanks, no spam please)
>>
>> "Brian" <s@.y> wrote in message
>> news:OWuWOJxfHHA.1216@.TK2MSFTNGP03.phx.gbl...
>> Hello,
>> I'd like to get a general opinion if in other people's experience they
>> tend to get whole table locks.
>> We have an inhouse customer accounting app. One process in particular
>> takes just under 2 minutes to run. It processes 1400 customers by
>> checking 4200 balance histories, checking and updating 1400 status
>> records, and inserting 8400 new records across 5 tables. The whole
>> process is in an expicit transaction.
>> Our server is SQLServer2000 on a 3GHz Pentium4 with 2GB but with 1GB
>> dedicated to SQLServer.
>> I have noticed that the locks escalate to table locks so that noone
>> can
>> view any data when the posting is being done.
>> Does this behaviour sound normal for this many records or is something
>> wrong? I don't think that to whole tables should be locked for 8000
>> records. The 5 tables have an average of 400K records each. The
>> whole
>> DB is 900MB.
>> Thanks
>> Brian Morris
>>
>>
>>
>|||Is there a way to see exactly when in the process it escalates, so that we
can know if it's the number of records or the time that's triggering it?
Can you only set the hint in an SQL statement? Is there a way to do it on a
connection or recordset?
The code was originally done as ADO recordsets with some SQL and a few SPs
because we had 2 different back-end versions and wanted to stay more or less
portable. We assumed that it would take us way too much effort to try to
move all the code to SPs.
I've tried to measure the efficiency of the code, and it actually spends more
time getting data from SPs than it does on calculations and adding or
updating records in the recordsets. It's still a work in progress, so with
the advice from you guys we should eventually figure out how to make it
better.
Brian
"Sylvain Lafontaine" <sylvain aei ca (fill the blanks, no spam please)>
wrote in message news:uiVg8l4fHHA.4804@.TK2MSFTNGP06.phx.gbl...
>« I'm guessing by your response that this kind of locking is not supposed
>to happen. »
> No, I didn't say that: a lock escalation from a rowlock to a tablelock can
> happens when the server choose to do so. However, if you make an explicit
> hint about a Rowlock or a Paglock, then it's quite possible (but not sure)
> that SQL-Server will wait further before making the decision of going with
> a table lock escalation. It's a possibility, not a certainty.
> You say that you are using ADO code. It's possible that the total time
> required to make the transaction (2 minutes) is a direct consequence of
> this. Maybe putting all this stuff into a single SP (or even redesigning
> your algorithm) will speed things up.
> --
> Sylvain Lafontaine, ing.
> MVP - Technologies Virtual-PC
> E-mail: sylvain aei ca (fill the blanks, no spam please)
>
> "Brian" <s@.y> wrote in message
> news:OW9ugU4fHHA.3632@.TK2MSFTNGP02.phx.gbl...
>>I can't do the ROWLOCK hint becuase I've got some ADO code that does some
>>complicated calculations and updates via recordsets instead of SQL.
>> We don't have any clustered indexes.
>> We need the ReadCommitted isolation.
>> I'm guessing by your response that this kind of locking is not supposed
>> to happen.
>> I'll keep reading around and see if we can figure out what we are doing
>> wrong.
>> Thanks
>> Brian
>> "Sylvain Lafontaine" <sylvain aei ca (fill the blanks, no spam please)>
>> wrote in message news:%23dHiNA4fHHA.588@.TK2MSFTNGP06.phx.gbl...
>> Hum, maybe if you give an explicit ROWLOCK hint on your tables, they
>> will not automatically escalate to a table lock. You could also try
>> with a PagLock (possibly in conjonction with a clustered index).
>> Another possibility would be to set up the table lock from the beginning
>> in order to accelerate the update.
>> Maybe redesigning your schema in order that this transaction doesn't
>> block other peoples could also be the solution. However, as I don't
>> know what you are doing with this stuff, I cannot tell you anything more
>> on this.
>> Changing the isolation level is another possibility but again, I don't
>> know what you are doing with this stuff.
>> --
>> Sylvain Lafontaine, ing.
>> MVP - Technologies Virtual-PC
>> E-mail: sylvain aei ca (fill the blanks, no spam please)
>>
>> "Brian" <s@.y> wrote in message
>> news:eOvaPq3fHHA.4536@.TK2MSFTNGP04.phx.gbl...
>>I thought that may be part of the problem but the process has to be a go
>>or no-go kind of thing for each group of customers and this group is
>>1400 and we can't take people out of the group.
>> Brian
>> "Sylvain Lafontaine" <sylvain aei ca (fill the blanks, no spam please)>
>> wrote in message news:eub2vO3fHHA.596@.TK2MSFTNGP06.phx.gbl...
>>A possible solution would be to split your big explicit single
>>transaction into multiple transactions.
>> --
>> Sylvain Lafontaine, ing.
>> MVP - Technologies Virtual-PC
>> E-mail: sylvain aei ca (fill the blanks, no spam please)
>>
>> "Brian" <s@.y> wrote in message
>> news:OWuWOJxfHHA.1216@.TK2MSFTNGP03.phx.gbl...
>> Hello,
>> I'd like to get a general opinion if in other people's experience
>> they tend to get whole table locks.
>> We have an inhouse customer accounting app. One process in
>> particular takes just under 2 minutes to run. It processes 1400
>> customers by checking 4200 balance histories, checking and updating
>> 1400 status records, and inserting 8400 new records across 5 tables.
>> The whole process is in an expicit transaction.
>> Our server is SQLServer2000 on a 3GHz Pentium4 with 2GB but with 1GB
>> dedicated to SQLServer.
>> I have noticed that the locks escalate to table locks so that noone
>> can view any data when the posting is being done.
>> Does this behaviour sound normal for this many records or is
>> something wrong? I don't think that to whole tables should be locked
>> for 8000 records. The 5 tables have an average of 400K records each.
>> The whole DB is 900MB.
>> Thanks
>> Brian Morris
>>
>>
>>
>>
>|||Since the PK is an identity - monotonically increasing - I'd be tempted to
cluster on it. That table's leaf pages should not be fragmented. The other
indexes should be justified based on actual usage. What you could also try is
a series of performance tests, clustering each index in turn. Keep in mind
that you can't defrag a heap, although you can add and drop a clustered
index.
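For example, to cluster on the identity PK from the posted DDL (a sketch;
the rebuild moves data, so do it off-hours):

ALTER TABLE dbo.tblShares DROP CONSTRAINT [aaaaatblShares_PK]
GO
ALTER TABLE dbo.tblShares ADD CONSTRAINT [aaaaatblShares_PK]
    PRIMARY KEY CLUSTERED (TranID)
GO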
--
Tom
----
Thomas A. Moreau, BSc, PhD, MCSE, MCDBA, MCITP, MCTS
SQL Server MVP
Toronto, ON Canada
https://mvp.support.microsoft.com/profile/Tom.Moreau
"Brian" <s@.y> wrote in message
news:Ow%23Kf$7fHHA.1220@.TK2MSFTNGP03.phx.gbl...
I will take a look at that and see if it makes a difference.
Generally the 5 main transaction tables look similar to the following.
Within each transaction table a given AcNo may have up to 9 sub account
histories each with a separate balance. Transactions for a given day do not
necessarily occur on that day and a often out of date sequence.. There may
be more than one transaction per day. By far the most regular search/sort
is by AcNo and SubAc and Date. Would this be better as a clustered index?
Brian
CREATE TABLE [dbo].[tblShares] (
[TranID] [int] IDENTITY (1, 1) NOT NULL ,
[AcNo] [nvarchar] (8) COLLATE SQL_Latin1_General_CP1_CI_AS NULL ,
[SubAc] [smallint] NULL ,
[TranDate] [datetime] NULL ,
[AmtDr] [money] NULL ,
[AmtCr] [money] NULL ,
[TranCode] [nvarchar] (4) COLLATE SQL_Latin1_General_CP1_CI_AS NULL ,
[Balance] [money] NULL ,
[DrIFNo] [nvarchar] (4) COLLATE SQL_Latin1_General_CP1_CI_AS NULL ,
[CrIFNo] [nvarchar] (4) COLLATE SQL_Latin1_General_CP1_CI_AS NULL ,
[RefNo] [nvarchar] (8) COLLATE SQL_Latin1_General_CP1_CI_AS NULL ,
[CheqNo] [int] NULL ,
[CheqPtr] [int] NULL ,
[TranType] [nvarchar] (2) COLLATE SQL_Latin1_General_CP1_CI_AS NULL ,
[XferAcNo] [nvarchar] (8) COLLATE SQL_Latin1_General_CP1_CI_AS NULL ,
[UserNo] [smallint] NULL ,
[TStamp] [datetime] NULL
) ON [PRIMARY]
GO
ALTER TABLE [dbo].[tblShares] ADD
CONSTRAINT [DF__tblShares__SubAc__6D2D2E85] DEFAULT (0) FOR [SubAc],
CONSTRAINT [DF__tblShares__AmtDr__6E2152BE] DEFAULT (0) FOR [AmtDr],
CONSTRAINT [DF__tblShares__AmtCr__6F1576F7] DEFAULT (0) FOR [AmtCr],
CONSTRAINT [DF__tblShares__Balan__70099B30] DEFAULT (0) FOR [Balance],
CONSTRAINT [DF__tblShares__RefNo__70FDBF69] DEFAULT ('00000000') FOR
[RefNo],
CONSTRAINT [DF__tblShares__CQVJ__71F1E3A2] DEFAULT ('C') FOR [TranCode],
CONSTRAINT [DF__tblShares__CheqN__72E607DB] DEFAULT (0) FOR [CheqNo],
CONSTRAINT [DF__tblShares__CheqP__73DA2C14] DEFAULT (0) FOR [CheqPtr],
CONSTRAINT [DF__tblShares__TranT__74CE504D] DEFAULT ('G') FOR [TranType],
CONSTRAINT [DF__tblShares__UserN__75C27486] DEFAULT (0) FOR [UserNo],
CONSTRAINT [DF__tblShares__TStam__76B698BF] DEFAULT (getdate()) FOR
[TStamp],
CONSTRAINT [aaaaatblShares_PK] PRIMARY KEY NONCLUSTERED
(
[TranID]
) ON [PRIMARY] ,
CONSTRAINT [CK tblShares TranType] CHECK ([TranType] = 'O' or [TranType] =
'PS' or [TranType] = 'IS' or [TranType] = 'ES' or [TranType] = 'G' or
[TranType] = 'D' or [TranType] = 'R' or [TranType] = 'I' or [TranType] = 'T'
or [TranType] = 'X')
GO
CREATE INDEX [AcNoSubAc] ON [dbo].[tblShares]([AcNo], [SubAc], [TranDate],
[TranID]) ON [PRIMARY]
GO
CREATE INDEX [CheqPtr] ON [dbo].[tblShares]([CheqPtr]) ON [PRIMARY]
GO
CREATE INDEX [TranDate] ON [dbo].[tblShares]([TranDate]) ON [PRIMARY]
GO
"Tom Moreau" <tom@.dont.spam.me.cips.ca> wrote in message
news:OggiiQ5fHHA.4596@.TK2MSFTNGP05.phx.gbl...
>I would look at putting clustered indexes on your tables. generally, you
> have a clustered index on every table, unless you have determined a
> reason -
> i.e. measured poor performance - not to do so.
> --
> Tom
> ----
> Thomas A. Moreau, BSc, PhD, MCSE, MCDBA, MCITP, MCTS
> SQL Server MVP
> Toronto, ON Canada
> https://mvp.support.microsoft.com/profile/Tom.Moreau
|||You can trace the Lock Escalation event with the Profiler.
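(For what it's worth, roughly the same thing can be captured with a
server-side trace from T-SQL. A sketch only: the output path is a placeholder,
and Lock:Escalation is event class 60.)
-- Create the trace; sp_trace_create appends .trc to the file name.
DECLARE @TraceID int
DECLARE @on bit
SET @on = 1
EXEC sp_trace_create @TraceID OUTPUT, 0, N'C:\Traces\LockEscalation', NULL, NULL
-- Capture SPID, StartTime and ObjectID for each escalation event.
EXEC sp_trace_setevent @TraceID, 60, 12, @on
EXEC sp_trace_setevent @TraceID, 60, 14, @on
EXEC sp_trace_setevent @TraceID, 60, 22, @on
-- Start the trace; open the resulting .trc in Profiler afterwards.
EXEC sp_trace_setstatus @TraceID, 1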
--
Tom
----
Thomas A. Moreau, BSc, PhD, MCSE, MCDBA, MCITP, MCTS
SQL Server MVP
Toronto, ON Canada
https://mvp.support.microsoft.com/profile/Tom.Moreau
"Brian" <s@.y> wrote in message
news:%23ruuqa8fHHA.668@.TK2MSFTNGP05.phx.gbl...
Is there a way to see exactly when in the process it escalates, so that we
can know if it's the number of records or the time that's triggering it?
Can you only set the hint in an SQL statement? Is there a way to do it on a
connection or recordset?
The code was originally done as ADO recordsets with some SQL and a few SPs
because we had 2 different back-end versions and wanted to stay more or less
portable. We assumed that it would take us way too much effort to try to
move all the code to SPs.
I've tried to measure the efficiency of the code and it actually spends more
time getting data from SPs than it does from calculations and adding or
updating records in the recordsets. It's still a work in progress so with
the advice from you guys we should eventually figure out how to make it
better.
Brian
"Sylvain Lafontaine" <sylvain aei ca (fill the blanks, no spam please)>
wrote in message news:uiVg8l4fHHA.4804@.TK2MSFTNGP06.phx.gbl...
>« I'm guessing by your response that this kind of locking is not supposed
>to happen. »
> No, I didn't say that: a lock escalation from a rowlock to a tablelock can
> happens when the server choose to do so. However, if you make an explicit
> hint about a Rowlock or a Paglock, then it's quite possible (but not sure)
> that SQL-Server will wait further before making the decision of going with
> a table lock escalation. It's a possibility, not a certainty.
> You say that you are using ADO code. It's possible that the total time
> required to make the transaction (2 minutes) is a direct consequence of
> this. Maybe putting all this stuff into a single SP (or even redesigning
> your algorithm) will speed things up.
> --
> Sylvain Lafontaine, ing.
> MVP - Technologies Virtual-PC
> E-mail: sylvain aei ca (fill the blanks, no spam please)
>
> "Brian" <s@.y> wrote in message
> news:OW9ugU4fHHA.3632@.TK2MSFTNGP02.phx.gbl...
>>I can't do the ROWLOCK hint becuase I've got some ADO code that does some
>>complicated calculations and updates via recordsets instead of SQL.
>> We don't have any clustered indexes.
>> We need the ReadCommitted isolation.
>> I'm guessing by your response that this kind of locking is not supposed
>> to happen.
>> I'll keep reading around and see if we can figure out what we are doing
>> wrong.
>> Thanks
>> Brian
>> "Sylvain Lafontaine" <sylvain aei ca (fill the blanks, no spam please)>
>> wrote in message news:%23dHiNA4fHHA.588@.TK2MSFTNGP06.phx.gbl...
>> Hum, maybe if you give an explicit ROWLOCK hint on your tables, they
>> will not automatically escalate to a table lock. You could also try
>> with a PagLock (possibly in conjonction with a clustered index).
>> Another possibility would be to set up the table lock from the beginning
>> in order to accelerate the update.
>> Maybe redesigning your schema in order that this transaction doesn't
>> block other peoples could also be the solution. However, as I don't
>> know what you are doing with this stuff, I cannot tell you anything more
>> on this.
>> Changing the isolation level is another possibility but again, I don't
>> know what you are doing with this stuff.
>> --
>> Sylvain Lafontaine, ing.
>> MVP - Technologies Virtual-PC
>> E-mail: sylvain aei ca (fill the blanks, no spam please)
>>
>> "Brian" <s@.y> wrote in message
>> news:eOvaPq3fHHA.4536@.TK2MSFTNGP04.phx.gbl...
>>I thought that may be part of the problem but the process has to be a go
>>or no-go kind of thing for each group of customers and this group is
>>1400 and we can't take people out of the group.
>> Brian
>> "Sylvain Lafontaine" <sylvain aei ca (fill the blanks, no spam please)>
>> wrote in message news:eub2vO3fHHA.596@.TK2MSFTNGP06.phx.gbl...
>>A possible solution would be to split your big explicit single
>>transaction into multiple transactions.
>> --
>> Sylvain Lafontaine, ing.
>> MVP - Technologies Virtual-PC
>> E-mail: sylvain aei ca (fill the blanks, no spam please)
>>
>> "Brian" <s@.y> wrote in message
>> news:OWuWOJxfHHA.1216@.TK2MSFTNGP03.phx.gbl...
>> Hello,
>> I'd like to get a general opinion if in other people's experience
>> they tend to get whole table locks.
>> We have an inhouse customer accounting app. One process in
>> particular takes just under 2 minutes to run. It processes 1400
>> customers by checking 4200 balance histories, checking and updating
>> 1400 status records, and inserting 8400 new records across 5 tables.
>> The whole process is in an expicit transaction.
>> Our server is SQLServer2000 on a 3GHz Pentium4 with 2GB but with 1GB
>> dedicated to SQLServer.
>> I have noticed that the locks escalate to table locks so that noone
>> can view any data when the posting is being done.
>> Does this behaviour sound normal for this many records or is
>> something wrong? I don't think that to whole tables should be locked
>> for 8000 records. The 5 tables have an average of 400K records each.
>> The whole DB is 900MB.
>> Thanks
>> Brian Morris
>>
>>
>>
>>
>|||"Brian" <s@.y> wrote in message
news:Ow%23Kf$7fHHA.1220@.TK2MSFTNGP03.phx.gbl...
>I will take a look at that and see if it makes a difference.
> Generally the 5 main transaction tables look similar to the following.
> Within each transaction table a given AcNo may have up to 9 sub account
> histories, each with a separate balance. Transactions for a given day are
> not necessarily entered on that day, and are often entered out of date sequence.
You mention in another response not using clustered indexes, etc. I'd suspect
you might want to try one on TranID. (Generally, if you do use an IDENTITY
column, and I'll leave the debate on the value of that elsewhere, it's a very
good candidate for a clustered index. So I'd consider putting one there. Note
that'll force all the other indexes to rebuild.)
W/o the queries it's tough to say for sure, but you may want to make sure
the indexes you have are covering indexes. Looks like the only one you have
that covers more than one field is: AcNoSubAc.
You may want to make sure that covers all you need to. Generally a covered
index will give you better performance for your queries.
(Also, as an aside, you may want to review whether nvarchar makes sense in all
those cases. I'm guessing AcNo is always 8 characters... so that is probably
better as nchar(8). And TranType nvarchar(2) actually wastes space; a varchar
has an overhead of 2 bytes.)
But like I say, w/o the queries, it's really tough to say for sure.
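To illustrate with the AcNoSubAc index from your DDL (the account number and
date below are made up):
-- Covered: every column the query touches is in the index keys, so the
-- index alone can answer it.
SELECT TranID FROM dbo.tblShares
WHERE AcNo = '00001234' AND SubAc = 1 AND TranDate <= '2007-04-30'
-- Not covered: Balance isn't in the index, so every matching row costs a
-- bookmark lookup into the table (or, if the optimizer gives up, a scan).
SELECT TranID, Balance FROM dbo.tblShares
WHERE AcNo = '00001234' AND SubAc = 1 AND TranDate <= '2007-04-30'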
--
Greg Moore
SQL Server DBA Consulting Remote and Onsite available!
Email: sql (at) greenms.com http://www.greenms.com/sqlserver.html|||I'll do some tests clustering the ID columns.
I'm not sure what you meant about the debate though.
What do you mean by 'covering indexes'?
The nvarchar caught our eyes too, and I'm going to do some tests to see how
much space it is really wasting and whether that can give some performance
boost. It would have gotten there because we used the Access2003 upsizing
wizard to convert the data from Advantage and Access to SQLServer. We were
thinking about dropping the 'n', since we will always be using English and
don't plan on doing our app in another language, and this could halve the
number of bytes stored. AcNo is variable length so it has to be varchar(8).
We were debating if we should use char(2) or VarChar(2) because some of the
Trantypes are one character and we were not sure yet how it would appear to
the code and report query builder.
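For the nvarchar tests, I expect the per-column change to look something like
this (a sketch only; as I understand it, SQL 2000 won't alter a column that an
index references, so the index has to be dropped and recreated around it):
DROP INDEX [tblShares].[AcNoSubAc]
GO
-- Keeping the column NULLable, as in the current DDL.
ALTER TABLE dbo.tblShares ALTER COLUMN AcNo varchar(8) NULL
GO
CREATE INDEX [AcNoSubAc] ON [dbo].[tblShares]([AcNo], [SubAc],
[TranDate], [TranID]) ON [PRIMARY]
GO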
Regards
Brian
"Greg D. Moore (Strider)" <mooregr_deleteth1s@.greenms.com> wrote in message
news:OoSB018fHHA.1816@.TK2MSFTNGP06.phx.gbl...
> "Brian" <s@.y> wrote in message
> news:Ow%23Kf$7fHHA.1220@.TK2MSFTNGP03.phx.gbl...
>>I will take a look at that and see if it makes a difference.
>> Generally the 5 main transaction tables look similar to the following.
>> Within each transaction table a given AcNo may have up to 9 sub account
>> histories each with a separate balance. Transactions for a given day do
>> not necessarily occur on that day and a often out of date sequence..
> You mention in another response not using clustered index, etc. I'd
> suspect that you might want to try one on the TranID (generally if you do
> use an IDENTITY column (and I'll leave the debate to the value of that
> elsewhere) it's a very good candidate for a clustered index. So I'd
> consider putting one there. Note that'll force all the other indexes to
> rebuild.)
>
> W/o the queries it's tough to say for sure, but you may want to make sure
> the indexes you have are covering indexes. Looks like the only one you
> have that covers more than one field is: AcNoSubAc.
> You may want to make sure that covers all you need to. Generally a
> covered index will give you better performance for your queries.
> (also as an aside, you may want to review if nvarchar makes sense in all
> those cases. I'm guessing AcNO is always 8 characters... so that probably
> is better as nchar(8). And Trantype nvarchar(2) actually wastes space (a
> varchar has an overhead of 2 bytes).
> But like I say, w/o the queries, it's really tough to say for sure.
>
>> There may be more than one transaction per day. By far the most regular
>> search/sort is by AcNo and SubAc and Date. Would this be better as a
>> clustered index?
>> Brian
>> CREATE TABLE [dbo].[tblShares] (
>> [TranID] [int] IDENTITY (1, 1) NOT NULL ,
>> [AcNo] [nvarchar] (8) COLLATE SQL_Latin1_General_CP1_CI_AS NULL ,
>> [SubAc] [smallint] NULL ,
>> [TranDate] [datetime] NULL ,
>> [AmtDr] [money] NULL ,
>> [AmtCr] [money] NULL ,
>> [TranCode] [nvarchar] (4) COLLATE SQL_Latin1_General_CP1_CI_AS NULL ,
>> [Balance] [money] NULL ,
>> [DrIFNo] [nvarchar] (4) COLLATE SQL_Latin1_General_CP1_CI_AS NULL ,
>> [CrIFNo] [nvarchar] (4) COLLATE SQL_Latin1_General_CP1_CI_AS NULL ,
>> [RefNo] [nvarchar] (8) COLLATE SQL_Latin1_General_CP1_CI_AS NULL ,
>> [CheqNo] [int] NULL ,
>> [CheqPtr] [int] NULL ,
>> [TranType] [nvarchar] (2) COLLATE SQL_Latin1_General_CP1_CI_AS NULL ,
>> [XferAcNo] [nvarchar] (8) COLLATE SQL_Latin1_General_CP1_CI_AS NULL ,
>> [UserNo] [smallint] NULL ,
>> [TStamp] [datetime] NULL
>> ) ON [PRIMARY]
>> GO
>> ALTER TABLE [dbo].[tblShares] ADD
>> CONSTRAINT [DF__tblShares__SubAc__6D2D2E85] DEFAULT (0) FOR [SubAc],
>> CONSTRAINT [DF__tblShares__AmtDr__6E2152BE] DEFAULT (0) FOR [AmtDr],
>> CONSTRAINT [DF__tblShares__AmtCr__6F1576F7] DEFAULT (0) FOR [AmtCr],
>> CONSTRAINT [DF__tblShares__Balan__70099B30] DEFAULT (0) FOR [Balance],
>> CONSTRAINT [DF__tblShares__RefNo__70FDBF69] DEFAULT ('00000000') FOR
>> [RefNo],
>> CONSTRAINT [DF__tblShares__CQVJ__71F1E3A2] DEFAULT ('C') FOR [CQVJ],
>> CONSTRAINT [DF__tblShares__CheqN__72E607DB] DEFAULT (0) FOR [CheqNo],
>> CONSTRAINT [DF__tblShares__CheqP__73DA2C14] DEFAULT (0) FOR [CheqPtr],
>> CONSTRAINT [DF__tblShares__TranT__74CE504D] DEFAULT ('G') FOR [TranType],
>> CONSTRAINT [DF__tblShares__UserN__75C27486] DEFAULT (0) FOR [UserNo],
>> CONSTRAINT [DF__tblShares__TStam__76B698BF] DEFAULT (getdate()) FOR
>> [TStamp],
>> CONSTRAINT [aaaaatblShares_PK] PRIMARY KEY NONCLUSTERED
>> (
>> [TranID]
>> ) ON [PRIMARY] ,
>> CONSTRAINT [CK tblShares TranType] CHECK ([TranType] = 'O' or [TranType]
>> = 'PS' or [TranType] = 'IS' or [TranType] = 'ES' or [TranType] = 'G' or
>> [TranType] = 'D' or [TranType] = 'R' or [TranType] = 'I' or [TranType] =>> 'T' or [TranType] = 'X')
>> GO
>> CREATE INDEX [AcNoSubAc] ON [dbo].[tblShares]([AcNo], [SubAc],
>> [TranDate], [TranID]) ON [PRIMARY]
>> GO
>> CREATE INDEX [CheqPtr] ON [dbo].[tblShares]([CheqPtr]) ON [PRIMARY]
>> GO
>> CREATE INDEX [TranDate] ON [dbo].[tblShares]([TranDate]) ON [PRIMARY]
>> GO
> --
> Greg Moore
> SQL Server DBA Consulting Remote and Onsite available!
> Email: sql (at) greenms.com
> http://www.greenms.com/sqlserver.html
>|||"Brian" <s@.y> wrote in message news:u8wfML9fHHA.4692@.TK2MSFTNGP03.phx.gbl...
> I'll do some tests clustering the ID columns.
> I'm not sure what you meant about the dabate though.
> What do you mean by 'covering indexes' ?
A covering index is an index that covers all the columns in your search.
With a covering index, when your query executes, it can simply go from node
to node in the index and return all the data it needs.
If you select more columns than in your index, the optimizer will look for
other indexes it can also use, otherwise, it then has to do a table scan.
And since a table scan can escalate to table locks, that may be part of the
source of the issue.
Let me guess, your query returns or uses TranID? If that's the case, a
clustered index probably would be fairly helpful, even w/o any other
changes.
BTW, can't recall if you said you're using SQL 2000 or 2005, but if you're
using SQL 2005 I highly suggest getting Itzik Ben-Gan's "Inside Microsoft
SQL Server 2005: T-SQL Querying" and at the very least reading chapter 3.
VERY useful.
> The nvarchar caught our eyes too, and I'm going to do some tests to see how
> much space it is really wasting and whether that can give some performance
> boost.
Honestly, you'll save a little space and might gain a small boost in
performance, but my guess is not much.
You'll probably save about 8-10 bytes per row. This will save about 10% of
the total row size.
So, doing real rough math, you'll go from 93 rows per page to 106. This
will mean reading in a page will get you 13% more rows, so that could reduce
your I/O a bit, but I don't think it's where you're going to see that much
improvement.
(actually above I just realized I'm assuming varchar, not nvarchar. But
yeah, going from nvarchar to char on some of those will definitely be an
improvement.)
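(The back-of-envelope arithmetic behind those figures, assuming roughly 8096
bytes of row space per 8K page and an average row of about 87 bytes today,
dropping to about 76 bytes after the change; both row sizes are estimates,
not measured:)
SELECT 8096 / 87 AS RowsPerPageNow,    -- roughly 93
       8096 / 76 AS RowsPerPageAfter   -- roughly 106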
>It would have gotten there because we used the Access2003 upsizing wizard
>to convert the data from Advantage and Access to SQLServer. We were
>thinking about dropping the 'n', since we will always be using English and
>don't plan on doing our app in another language, and this could halve the
>number of bytes stored.
Yeah, that's probably worth it.
> AcNo is variable length so it has to be varchar(8).
Well, determine what the average is; if most are close to 8 characters, I
wouldn't worry.
> We were debating if we should use char(2) or VarChar(2) because some of
> the Trantypes are one character and we were not sure yet how it would
> appear to the code and report query builder.
char(2). Varchar(2) uses 3 bytes minimum. ;-)
Greg Moore
SQL Server DBA Consulting Remote and Onsite available!
Email: sql (at) greenms.com http://www.greenms.com/sqlserver.html|||Yes my indexes are covering.
Yes, TranID is used because on the same day there can be more than one
transaction and we always need to find the last one for the day (we can't use
a timestamp because transactions might be entered out of sequence).
Simplified:
SELECT @TranID = (SELECT TOP 1 TranID FROM dbo.tblShares
    WHERE AcNo = @AcNo AND SubAc = @SubAc AND TranDate <= @Date
    ORDER BY TranDate DESC, TranID DESC)
SELECT @TheBalance = (SELECT Balance FROM dbo.tblShares WHERE TranID = @TranID)
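(I suppose those two lookups could also be folded into a single statement,
something like this sketch:)
-- Fetch the latest TranID and its Balance in one pass.
SELECT TOP 1 @TranID = TranID, @TheBalance = Balance
FROM dbo.tblShares
WHERE AcNo = @AcNo AND SubAc = @SubAc AND TranDate <= @Date
ORDER BY TranDate DESC, TranID DESC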
So I'll try clustering the primary key index on TranID only.
Right now 2000 but we did some tests with 2005 and it seems to run the same.
Is it supposed to handle the indexes and the locking differently?
Thanks for the book recommendation.
Brian Morris
"Greg D. Moore (Strider)" <mooregr_deleteth1s@.greenms.com> wrote in message
news:OVsJze9fHHA.4536@.TK2MSFTNGP04.phx.gbl...
> "Brian" <s@.y> wrote in message
> news:u8wfML9fHHA.4692@.TK2MSFTNGP03.phx.gbl...
>> I'll do some tests clustering the ID columns.
>> I'm not sure what you meant about the dabate though.
>> What do you mean by 'covering indexes' ?
> A covering index is an index that covers all the the columns in your
> search.
> With a covering index, when your query executes, it can simply go from
> node to node in the index and return all the data it needs.
> If you select more columns than in your index, the optimizer will look for
> other indexes it can also use, otherwise, it then has to do a table scan.
> And since a table scan can escalate to table locks, that may be part of
> the source of the issue.
> Let me guess, your query returns or uses TransID? If this is a case, a
> clustered index probably would be fairly helpful, even w/o any other
> changes.
> BTW, can't recall if you said you're using SQL 2000 or 2005, but if you're
> using SQL 2005 I highly suggest getting Iztik Ben-Gan's "Inside Microsoft
> SQL Server 2005: T-SQL Querying" and at the very least reading chapter 3.
> VERY useful.
>
>> The nvarchar caught our eyes too and I going to do some tests to see how
>> much space it is really wasting and if that can give some performanace
>> boost.
> Honestly, you'll save a little space and might gain a small boost in
> performance, but my guess is not much.
> You'll probably save about 8-10 bytes per row. This will save about 10%
> of the total row size.
> So, doing real rough math, you'll go from 93 rows per page to 106. This
> will mean reading in a page will get you 13% more rows, so that could
> reduce your I/O a bit, but I don't think it's where you're going to see
> that much improvement.
> (actually above I just realized I'm assuming varchar, not nvarchar. But
> yeah, going from nvarchar to char on some of those will definitely be an
> improvement.)
>
>>It would have gotten there because we used the Access2003 upsizing wizard
>>to convert the data from Advantage and Access to SQLServer. We were
>>thinking about dropping the 'n' since we will always be using english and
>>don't plan on doing our app in another language and this could half the
>>number of bytes stored.
> Yeah, that's probably worth it.
>> AcNo is variable length so it has to be varchar(8).
> Well, determine what the average is.. if most are close to 8 characters, I
> wouldn't worry.
>> We were debating if we should use char(2) or VarChar(2) because some of
>> the Trantypes are one character and we were not sure yet how it would
>> appear to the code and report query builder.
> char(2). Varchar(2) uses 3 bytes minimum. ;-)
>
>> Regards
>> Brian
>> "Greg D. Moore (Strider)" <mooregr_deleteth1s@.greenms.com> wrote in
>> message news:OoSB018fHHA.1816@.TK2MSFTNGP06.phx.gbl...
>> "Brian" <s@.y> wrote in message
>> news:Ow%23Kf$7fHHA.1220@.TK2MSFTNGP03.phx.gbl...
>>I will take a look at that and see if it makes a difference.
>> Generally the 5 main transaction tables look similar to the following.
>> Within each transaction table a given AcNo may have up to 9 sub account
>> histories each with a separate balance. Transactions for a given day
>> do not necessarily occur on that day and a often out of date sequence..
>> You mention in another response not using clustered index, etc. I'd
>> suspect that you might want to try one on the TranID (generally if you
>> do use an IDENTITY column (and I'll leave the debate to the value of
>> that elsewhere) it's a very good candidate for a clustered index. So
>> I'd consider putting one there. Note that'll force all the other
>> indexes to rebuild.)
>>
>> W/o the queries it's tough to say for sure, but you may want to make
>> sure the indexes you have are covering indexes. Looks like the only one
>> you have that covers more than one field is: AcNoSubAc.
>> You may want to make sure that covers all you need to. Generally a
>> covered index will give you better performance for your queries.
>> (also as an aside, you may want to review if nvarchar makes sense in all
>> those cases. I'm guessing AcNO is always 8 characters... so that
>> probably is better as nchar(8). And Trantype nvarchar(2) actually
>> wastes space (a varchar has an overhead of 2 bytes).
>> But like I say, w/o the queries, it's really tough to say for sure.
>>
>> There may be more than one transaction per day. By far the most
>> regular search/sort is by AcNo and SubAc and Date. Would this be
>> better as a clustered index?
>> Brian
>> CREATE TABLE [dbo].[tblShares] (
>> [TranID] [int] IDENTITY (1, 1) NOT NULL ,
>> [AcNo] [nvarchar] (8) COLLATE SQL_Latin1_General_CP1_CI_AS NULL ,
>> [SubAc] [smallint] NULL ,
>> [TranDate] [datetime] NULL ,
>> [AmtDr] [money] NULL ,
>> [AmtCr] [money] NULL ,
>> [TranCode] [nvarchar] (4) COLLATE SQL_Latin1_General_CP1_CI_AS NULL ,
>> [Balance] [money] NULL ,
>> [DrIFNo] [nvarchar] (4) COLLATE SQL_Latin1_General_CP1_CI_AS NULL ,
>> [CrIFNo] [nvarchar] (4) COLLATE SQL_Latin1_General_CP1_CI_AS NULL ,
>> [RefNo] [nvarchar] (8) COLLATE SQL_Latin1_General_CP1_CI_AS NULL ,
>> [CheqNo] [int] NULL ,
>> [CheqPtr] [int] NULL ,
>> [TranType] [nvarchar] (2) COLLATE SQL_Latin1_General_CP1_CI_AS NULL ,
>> [XferAcNo] [nvarchar] (8) COLLATE SQL_Latin1_General_CP1_CI_AS NULL ,
>> [UserNo] [smallint] NULL ,
>> [TStamp] [datetime] NULL
>> ) ON [PRIMARY]
>> GO
>> ALTER TABLE [dbo].[tblShares] ADD
>> CONSTRAINT [DF__tblShares__SubAc__6D2D2E85] DEFAULT (0) FOR [SubAc],
>> CONSTRAINT [DF__tblShares__AmtDr__6E2152BE] DEFAULT (0) FOR [AmtDr],
>> CONSTRAINT [DF__tblShares__AmtCr__6F1576F7] DEFAULT (0) FOR [AmtCr],
>> CONSTRAINT [DF__tblShares__Balan__70099B30] DEFAULT (0) FOR [Balance],
>> CONSTRAINT [DF__tblShares__RefNo__70FDBF69] DEFAULT ('00000000') FOR
>> [RefNo],
>> CONSTRAINT [DF__tblShares__CQVJ__71F1E3A2] DEFAULT ('C') FOR [CQVJ],
>> CONSTRAINT [DF__tblShares__CheqN__72E607DB] DEFAULT (0) FOR [CheqNo],
>> CONSTRAINT [DF__tblShares__CheqP__73DA2C14] DEFAULT (0) FOR [CheqPtr],
>> CONSTRAINT [DF__tblShares__TranT__74CE504D] DEFAULT ('G') FOR
>> [TranType],
>> CONSTRAINT [DF__tblShares__UserN__75C27486] DEFAULT (0) FOR [UserNo],
>> CONSTRAINT [DF__tblShares__TStam__76B698BF] DEFAULT (getdate()) FOR
>> [TStamp],
>> CONSTRAINT [aaaaatblShares_PK] PRIMARY KEY NONCLUSTERED
>> (
>> [TranID]
>> ) ON [PRIMARY] ,
>> CONSTRAINT [CK tblShares TranType] CHECK ([TranType] = 'O' or
>> [TranType] = 'PS' or [TranType] = 'IS' or [TranType] = 'ES' or
>> [TranType] = 'G' or [TranType] = 'D' or [TranType] = 'R' or [TranType]
>> = 'I' or [TranType] = 'T' or [TranType] = 'X')
>> GO
>> CREATE INDEX [AcNoSubAc] ON [dbo].[tblShares]([AcNo], [SubAc],
>> [TranDate], [TranID]) ON [PRIMARY]
>> GO
>> CREATE INDEX [CheqPtr] ON [dbo].[tblShares]([CheqPtr]) ON [PRIMARY]
>> GO
>> CREATE INDEX [TranDate] ON [dbo].[tblShares]([TranDate]) ON [PRIMARY]
>> GO
>>
>> --
>> Greg Moore
>> SQL Server DBA Consulting Remote and Onsite available!
>> Email: sql (at) greenms.com http://www.greenms.com/sqlserver.html
>>
>>
>
> --
> Greg Moore
> SQL Server DBA Consulting Remote and Onsite available!
> Email: sql (at) greenms.com
> http://www.greenms.com/sqlserver.html
>|||"Brian" <s@.y> wrote in message news:OtK5x59fHHA.588@.TK2MSFTNGP06.phx.gbl...
> Yes my indexes are covering.
> Yes, TranID is used because on the same day there can be more than one
> transaction and we always need to find the last one for the day (we can't
> use a timestamp because transactions might be entered out of sequence).
> Simplified:
> SELECT @TranID = (SELECT TOP 1 TranID FROM dbo.tblShares
>     WHERE AcNo = @AcNo AND SubAc = @SubAc AND TranDate <= @Date
>     ORDER BY TranDate DESC, TranID DESC)
> SELECT @TheBalance = (SELECT Balance FROM dbo.tblShares WHERE TranID = @TranID)
> So I'll try clustering the primary key index on TranID only.
Yeah, try that and let us know the results.
Also, what you can do is create some test queries and execute them via Query
analyzer with
SET SHOWPLAN_TEXT ON
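For example (account number and date made up; with SHOWPLAN_TEXT on, the
statement is compiled and its plan returned, but nothing is executed):
SET SHOWPLAN_TEXT ON
GO
SELECT TOP 1 TranID FROM dbo.tblShares
WHERE AcNo = '00001234' AND SubAc = 1 AND TranDate <= '2007-04-30'
ORDER BY TranDate DESC, TranID DESC
GO
SET SHOWPLAN_TEXT OFF
GO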
> Right now 2000 but we did some tests with 2005 and it seems to run the
> same. Is it supposed to handle the indexes and the locking differently?
Not really. But it has better tools for tuning, and the book I mentioned
relies a lot on those tools. But some of the theory is still applicable to
SQL 2000.
Greg Moore
SQL Server DBA Consulting Remote and Onsite available!
Email: sql (at) greenms.com http://www.greenms.com/sqlserver.html|||To Greg and Tom,
We went over the tables and changed the unnecessary nvarchar columns to char
or varchar, and the db lost 30% of its size; the indexes are much smaller
too. We then also put clustered indexes on the ID fields. The running time
is now down to 45 seconds and the table locks are GONE!! (Well, almost: one
table I did not cluster is still locking.) At worst we now have page locks,
so I still have a bit of blocking, but not so bad.
When we get some slow time we will try clustering on the other, non-primary
indexes and see if we get even better performance.
Thanks a bunch
Brian Morris
"Brian" <s@.y> wrote in message news:OWuWOJxfHHA.1216@.TK2MSFTNGP03.phx.gbl...
> Hello,
> I'd like to get a general opinion if in other people's experience they
> tend to get whole table locks.
> We have an inhouse customer accounting app. One process in particular
> takes just under 2 minutes to run. It processes 1400 customers by
> checking 4200 balance histories, checking and updating 1400 status
> records, and inserting 8400 new records across 5 tables. The whole
> process is in an expicit transaction.
> Our server is SQLServer2000 on a 3GHz Pentium4 with 2GB but with 1GB
> dedicated to SQLServer.
> I have noticed that the locks escalate to table locks so that noone can
> view any data when the posting is being done.
> Does this behaviour sound normal for this many records or is something
> wrong? I don't think that to whole tables should be locked for 8000
> records. The 5 tables have an average of 400K records each. The whole DB
> is 900MB.
> Thanks
> Brian Morris
>|||Let us know how it turns out with the addition of the last clustered index.
:-)
--
Tom
----
Thomas A. Moreau, BSc, PhD, MCSE, MCDBA, MCITP, MCTS
SQL Server MVP
Toronto, ON Canada
https://mvp.support.microsoft.com/profile/Tom.Moreau
"Brian" <s@.y> wrote in message news:OnOHTl5gHHA.5008@.TK2MSFTNGP02.phx.gbl...
To Greg and Tom,
We did go over the tables and changed the unnecessary nvarchar to char or
varchar and I saw the db lose 30% of its size and the indexes are much
smaller. We then also used the Clustered indexes on the ID fields. The
running time is now down to 45seconds and the table locks are GONE!! (Well
almost, one table I did not cluster is still locking) At worst we now have
page locks so i still have a bit of blocking but not so bad.
When we get some slow time we will try Clustering the other non-primary
indexes and see if we get even better performance.
Thanks a bunch
Brian Morris
"Brian" <s@.y> wrote in message news:OWuWOJxfHHA.1216@.TK2MSFTNGP03.phx.gbl...
> Hello,
> I'd like to get a general opinion if in other people's experience they
> tend to get whole table locks.
> We have an inhouse customer accounting app. One process in particular
> takes just under 2 minutes to run. It processes 1400 customers by
> checking 4200 balance histories, checking and updating 1400 status
> records, and inserting 8400 new records across 5 tables. The whole
> process is in an expicit transaction.
> Our server is SQLServer2000 on a 3GHz Pentium4 with 2GB but with 1GB
> dedicated to SQLServer.
> I have noticed that the locks escalate to table locks so that noone can
> view any data when the posting is being done.
> Does this behaviour sound normal for this many records or is something
> wrong? I don't think that to whole tables should be locked for 8000
> records. The 5 tables have an average of 400K records each. The whole DB
> is 900MB.
> Thanks
> Brian Morris
>|||"Brian" <s@.y> wrote in message news:OnOHTl5gHHA.5008@.TK2MSFTNGP02.phx.gbl...
> To Greg and Tom,
> We went over the tables and changed the unnecessary nvarchar columns to
> char or varchar, and the db lost 30% of its size; the indexes are much
> smaller too. We then also put clustered indexes on the ID fields. The
> running time is now down to 45 seconds and the table locks are GONE!!
> (Well, almost: one table I did not cluster is still locking.) At worst we
> now have page locks, so I still have a bit of blocking, but not so bad.
45 seconds is better! Not great, but clearly better!
I'll be curious to see what difference that final table makes.
> When we get some slow time we will try Clustering the other non-primary
> indexes and see if we get even better performance.
Not sure if I follow you. You realize you can only have one clustered index
per table, right?
Anyway, glad to see an improvement. Let us know if you need more help.
Greg Moore
SQL Server DBA Consulting Remote and Onsite available!
Email: sql (at) greenms.com http://www.greenms.com/sqlserver.html|||Greg,
I meant clustering an index other than the Identity.
Regards
Brian Morris
"Greg D. Moore (Strider)" <mooregr_deleteth1s@.greenms.com> wrote in message
news:O0DjZ$7gHHA.4936@.TK2MSFTNGP04.phx.gbl...
> "Brian" <s@.y> wrote in message
> news:OnOHTl5gHHA.5008@.TK2MSFTNGP02.phx.gbl...
>> To Greg and Tom,
>> We did go over the tables and changed the unnecessary nvarchar to char or
>> varchar and I saw the db lose 30% of its size and the indexes are much
>> smaller. We then also used the Clustered indexes on the ID fields. The
>> running time is now down to 45seconds and the table locks are GONE!!
>> (Well almost, one table I did not cluster is still locking) At worst we
>> now have page locks so i still have a bit of blocking but not so bad.
> 45 seconds is better! Not great, but clearly better!
> I'll be curious to see what difference that final table makes.
>> When we get some slow time we will try Clustering the other non-primary
>> indexes and see if we get even better performance.
> Not sure if I follow you. You realize you can only have one clustered
> index per table, right?
> Anyway, glad to see an improvement. Let us know if you need more help.
>
>> Thanks a bunch
>> Brian Morris
>> "Brian" <s@.y> wrote in message
>> news:OWuWOJxfHHA.1216@.TK2MSFTNGP03.phx.gbl...
>> Hello,
>> I'd like to get a general opinion if in other people's experience they
>> tend to get whole table locks.
>> We have an inhouse customer accounting app. One process in particular
>> takes just under 2 minutes to run. It processes 1400 customers by
>> checking 4200 balance histories, checking and updating 1400 status
>> records, and inserting 8400 new records across 5 tables. The whole
>> process is in an expicit transaction.
>> Our server is SQLServer2000 on a 3GHz Pentium4 with 2GB but with 1GB
>> dedicated to SQLServer.
>> I have noticed that the locks escalate to table locks so that noone can
>> view any data when the posting is being done.
>> Does this behaviour sound normal for this many records or is something
>> wrong? I don't think that to whole tables should be locked for 8000
>> records. The 5 tables have an average of 400K records each. The whole
>> DB is 900MB.
>> Thanks
>> Brian Morris
>>
>
> --
> Greg Moore
> SQL Server DBA Consulting Remote and Onsite available!
> Email: sql (at) greenms.com
> http://www.greenms.com/sqlserver.html
>
No comments:
Post a Comment