Monday, March 12, 2012

quality of service

I have been having what I believe to be quality-of-service issues with the company that hosts my company's web site and its connected SQL Server database. The same stored procedure is called from a number of different web pages with identical parameter specs, etc. (they were all created from the same template web page). Over the past three weeks, this stored procedure has been called more than 20,000 times by more than 1,000 different end users, and has failed to drop a table it creates 70 of those times (a 99.7% successful execution rate). This causes significant problems for the end user, since the next time the procedure is called, the table already exists.

There is seemingly no pattern to these failed executions with regard to particular end users or to the pages from which the procedure was called. However, there is a pattern in the timing: all but 4 of the failed executions are clustered around about 10 very specific time stamps (e.g., 5 of the undropped tables were created within a minute of one time stamp, 15 within a minute of another, and so on).

The hosting company insists that the problem is in my scripting and refuses to look into the issue unless I pay them $120 per hour to do so. It seems to me that, if this were a scripting error, it probably would not succeed 99.7% of the time, and any failures that did occur would be randomly distributed, not clustered around specific time stamps. To me, this clustering is indicative of slowed or failed SQL Server responses due to server performance issues. I work in statistics and have calculated the odds of this time clustering being random; even with the most conservative estimates, it works out to be one in a number I don't even know the name for (with more than 40 zeros). I have included a little information about the stored procedure below, in case it is relevant.
My question to you is: does this seem likely to be a scripting issue or a server performance issue?
Part of the stored procedure uses sp_executesql with a text string to create a table named after the end user's ID (a unique, randomly generated 12-character alphanumeric ID), inserts data into this new table through a loop, pulls a record set from the table, and then drops the table (there is a good reason I need to generate the record set this way).
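For context, the pattern described works out to something roughly like the following (the variable, table, and column names here are invented for illustration; the original code was not posted):

```sql
-- Hypothetical sketch of the dynamic-SQL pattern described above.
DECLARE @UserID nvarchar(12), @sql nvarchar(4000);
SET @UserID = N'AB12CD34EF56';  -- randomly generated 12-character end user ID

-- Create a permanent table named after the user ID.
SET @sql = N'CREATE TABLE [' + @UserID + N'] (RowID int, Val nvarchar(100))';
EXEC sp_executesql @sql;

-- ... loop that inserts data into the new table ...

-- Pull the record set back out.
SET @sql = N'SELECT RowID, Val FROM [' + @UserID + N']';
EXEC sp_executesql @sql;

-- Clean up. If the session is killed or times out before this statement
-- runs, the permanent table is orphaned and the next call fails on CREATE.
SET @sql = N'DROP TABLE [' + @UserID + N']';
EXEC sp_executesql @sql;
```

One defensive tweak, regardless of the root cause: an `IF OBJECT_ID(...) IS NOT NULL DROP TABLE ...` before the CREATE would at least stop an orphaned table from breaking the user's next call.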
Thank you for your help
SQL Searcher

Why can't you use a temp table? That's what they're there for...
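A minimal sketch of that suggestion, using the same hypothetical columns as above: a local temporary table (the `#` prefix) is private to the connection and is dropped automatically when the session ends, so a connection that dies mid-procedure cannot leave a permanent table behind.

```sql
-- Local temp table: visible only to this connection, cleaned up by
-- SQL Server automatically when the connection closes.
CREATE TABLE #Results (RowID int, Val nvarchar(100));

-- ... loop that inserts data ...

SELECT RowID, Val FROM #Results;

DROP TABLE #Results;  -- optional; cleanup happens at session end anyway
```

No dynamic SQL is needed for the naming, either, since each connection gets its own `#Results`.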
"SQL Searcher" <anonymous@.discussions.microsoft.com> wrote in message
news:FB3A473E-7ED7-4E01-A8F9-95F7ABA12313@.microsoft.com...
> SQL Searcher

I tried, and it didn't work for some reason.
