
We have a group of fairly complex stored procedures that process a transaction queue on a round-the-clock schedule. These scripts use a number of `PRINT` statements to document what is being done, to aid in debugging.

I want to keep this information while it is running in production (as an SQL Server Agent job) to help with troubleshooting when something goes wrong. I have looked at this site, which has a fairly good method of saving the `PRINT` output, but it also has some downsides (output is not available until the script completes, etc.). It also requires SSIS or DTS, which I have no experience with.

I have thought of changing all of the `PRINT` statements to `INSERT` statements into a log table. The problem is that much of the work we need debug info for is under transaction control, and if there is an error, all of the log information would be rolled back along with the actual transaction.
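To illustrate the rollback problem, a minimal sketch (the table name `dbo.DebugLog` is hypothetical):

```sql
BEGIN TRANSACTION;

-- the debug row is written inside the transaction...
INSERT dbo.DebugLog(msg) VALUES ('about to process batch');

-- ... an error occurs in the real work ...

-- ... and the rollback erases the debug row along with everything else
ROLLBACK TRANSACTION;
```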

Does anyone have a better solution or an idea of how to work either of the solutions I have looked at above?

Caynadian
  • You can also try [this approach](http://stackoverflow.com/questions/28537767/capture-sql-print-output-in-net-while-the-sql-is-running); it works if you call your SP from .NET code. – Luca Mar 25 '15 at 17:48

2 Answers


One solution to the rollback problem is to log information to a table variable (which isn't affected by the transaction), then roll back, then insert into the real logging table. Quick example:

DECLARE @LogRow TABLE
(
  dt DATETIME, objectid INT,
  errornumber INT /* other columns */
);  -- table variables are not rolled back with the transaction

BEGIN TRANSACTION;

BEGIN TRY

  SELECT 1/0;  -- something that fails

  COMMIT TRANSACTION;

END TRY
BEGIN CATCH

  -- buffer the error details in the table variable
  INSERT @LogRow(dt, objectid, errornumber /* , ... */)
    SELECT SYSDATETIME(), @@PROCID, ERROR_NUMBER() /* , ... */;

  -- undo the real work; @LogRow keeps its rows
  ROLLBACK TRANSACTION;

  -- now persist the buffered log rows
  INSERT dbo.RealLogTable(/*...columns...*/)
    SELECT /*...columns...*/ FROM @LogRow;

END CATCH
Aaron Bertrand
  • Can the log table be created in the root SP and shared among all called sub-SPs? – Caynadian Mar 27 '14 at 16:24
  • @Caynadian Well you could use a Table-Valued Parameter (TVP) for that, but perhaps you should step back and not have procedures calling procedures calling procedures... – Aaron Bertrand Mar 27 '14 at 16:38
  • Well, we have one root SP that then calls 3 primary processing sub-SPs. The 3 sub-SPs all do similar things, so those similar functions are broken out into sub-sub-SPs. Combining everything back into one SP would result in a nightmarishly huge SP that would be impossible to work with. I will try out what you've suggested, thanks! – Caynadian Mar 27 '14 at 17:03
  • Yikes, I'll take one procedure over a dozen in terms of "impossible to work with"... YMMV. – Aaron Bertrand Mar 27 '14 at 17:15
  • Well, unfortunately, TVPs won't work as they are read-only when passed to the sub-SPs. – Caynadian Mar 27 '14 at 19:25
  • @Caynadian I can't really assist further, sorry. Your design is convoluted, and your description is far too vague and high-level for me to envision how to better solve the problem. My first instinct would be to have each stored procedure log its own errors, rather than try to share everything up and down the chain. – Aaron Bertrand Mar 27 '14 at 19:53
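For context on why TVPs didn't work here: table-valued parameters must be declared `READONLY`, so a sub-SP cannot append its own log rows to one passed in by the caller. A minimal sketch (the type and procedure names are hypothetical):

```sql
-- The TVP type a caller would fill with log rows
CREATE TYPE dbo.LogRowType AS TABLE (dt DATETIME2, msg NVARCHAR(400));
GO
CREATE PROCEDURE dbo.SubStep
  @Log dbo.LogRowType READONLY  -- READONLY is mandatory for TVP parameters
AS
BEGIN
  -- INSERT @Log ... would fail here: a TVP cannot be modified inside the procedure
  SELECT COUNT(*) AS BufferedRows FROM @Log;
END
```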

In the end, we used the SSIS package with a script that captures the print statements, as illustrated in the link in my original question. The captured `PRINT` output is put into the SSIS log, which can then be directed to a CSV file, XML file, database, etc. (we used a database table). We then run the SSIS package from an SQL Server Agent job (just make sure the SQL Agent user has access to the table you are writing the logs to).

Given the complexity of the SPs we are using, this was the simpler route, though not ideal. I did try the suggestion from @AaronBertrand above, and it does work to enable logging despite using transactions.

Caynadian
  • I'm confused, you were using transactions already, no? Hence: `The problem with this is that much of the work we need debug info for is under transaction control and if there is an error, all of the log information would be rolled back with the actual transaction.` I wasn't suggesting adding additional transaction code on top of that, just using the transactions you already had in the way I described. *shrug* I guess if you find it easier to capture print output... – Aaron Bertrand Apr 01 '14 at 16:19
  • The problem is that transactions are started in one SP and then not committed until after one or more sub-SPs are executed. So unless there was a way to pass around the table variable from one SP to another, it wouldn't work for us. – Caynadian Apr 01 '14 at 20:22
  • Your example was great, it just wouldn't work for us unfortunately. The way we went is far from perfect because the print output is not captured in real time but in batches. If the SP hangs in the middle, there is no way to find out what happened without killing it first. – Caynadian Apr 01 '14 at 20:23
  • I still don't understand why each stored procedure can't log its own errors or why you have to pass all of the errors around from procedure to procedure to procedure. *shrug* Anyway, you can try using `RAISERROR('foo',0,1) WITH NOWAIT;` instead of `PRINT` - the `NOWAIT` forces it to flush regardless of the current output buffer - should be much more real time (though there may be an upper bound on buffer size still). – Aaron Bertrand Apr 01 '14 at 20:40
  • Won't RAISERROR cause the SP to stop processing then? What I'm trying to log is debug level information about what is going on not just processing errors. I can send you a sample of the code if you would like to have a look. – Caynadian Apr 01 '14 at 20:50
  • No, did you try it? The parameters `0,1` are there for a reason... – Aaron Bertrand Apr 01 '14 at 20:52
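For reference, a minimal sketch of the `RAISERROR ... WITH NOWAIT` pattern from the last few comments (the message text is illustrative):

```sql
-- Severity 0, state 1: informational only, so execution continues.
-- WITH NOWAIT flushes the message to the client immediately instead of
-- waiting for the output buffer to fill, unlike PRINT.
RAISERROR('Processing batch %d of the queue...', 0, 1, 42) WITH NOWAIT;
```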