SSIS 2012 Deadlock When Running SSIS from SQL Agent Job

6 Responses

  1. Steve Wake says:

    Sorry I didn’t respond to this when you sent that error over email. Now that I read your post, I recognized this issue from another client, and my fix was the same as yours: just make the packages not start at exactly the same second. I did some tests, and I think that even if you start the 2nd execution 5 seconds or so after the first, they will both run OK.

    We brought this up with Microsoft last year and I guess I’m not too surprised that even SP1 didn’t fix it.

    This is very similar to an issue in SSRS where snapshots are only stored in the database down to the second, so if 2 snapshots are started for the same report in the same second, one will overwrite the other.

    Seems that MSFT really isn’t accounting for things being run in large batches in enterprise setups.

  2. Farid Kettani says:

    I have exactly the same problem in SQL Server 2008. Do you know whether there is any fix for 2008?

    • Can you post the error message you are seeing? On 2008 I would think it should be different since there is no SSISDB. Are your packages stored in MSDB or on the file system? I could see a similar error coming up if you stored the packages in MSDB. Have you tried scheduling them a few seconds apart rather than the exact same time?

      • Farid Kettani says:

        Thanks for your quick response. Yes, that’s exactly what we did as a workaround: staggering the SQL Server Agent jobs 1 minute apart and running all the packages every half hour instead of every 15 minutes.

        • Unfortunately the only fix I know of is scheduling them at different times. I was originally doing them 5 minutes apart, but I tested running them 10 seconds apart and it worked just fine too. So if you need a smaller window, move those schedules to a smaller interval (a rough example of staggering the schedules follows below). Another option, granted it may not be a good one, is to make a master package that calls each of the packages you want to schedule at the given time. This would only work if all the packages ran on the same schedule, which is partially why I don’t think it’s the best idea, but for some scenarios it may work well.
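
          A minimal sketch of that staggering approach, with made-up job names (these are the standard msdb schedule parameters; @active_start_time is an HHMMSS integer, so the second job fires 10 seconds after the first on every 15-minute cycle):

            USE msdb;
            GO
            -- Hypothetical job names; substitute your own. Both schedules
            -- recur every 15 minutes, but the second starts 10 seconds
            -- later, so the two executions never hit the catalog at
            -- exactly the same second.
            EXEC dbo.sp_add_jobschedule
                @job_name = N'LoadPackageA',
                @name = N'Every 15 min (on the minute)',
                @freq_type = 4,               -- daily
                @freq_interval = 1,
                @freq_subday_type = 4,        -- minutes
                @freq_subday_interval = 15,
                @active_start_time = 130000;  -- 1:00:00 PM

            EXEC dbo.sp_add_jobschedule
                @job_name = N'LoadPackageB',
                @name = N'Every 15 min (offset 10 sec)',
                @freq_type = 4,
                @freq_interval = 1,
                @freq_subday_type = 4,
                @freq_subday_interval = 15,
                @active_start_time = 130010;  -- 1:00:10 PM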

          So for the time being the long and short answer is to code around it. If you find a better solution, feel free to share it!
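
          On the master-package idea, one rough stand-in (assuming the packages are deployed to the 2012 SSISDB catalog; the folder, project, and package names here are invented) is a single Agent job step that starts each package synchronously, back to back, so only one create_execution call is ever in flight:

            DECLARE @exec_id BIGINT, @sync SMALLINT = 1;

            -- Package 1: create the execution, force it to run
            -- synchronously, then start it and wait for completion.
            EXEC SSISDB.catalog.create_execution
                @folder_name = N'ETL',
                @project_name = N'Loads',
                @package_name = N'PackageA.dtsx',
                @execution_id = @exec_id OUTPUT;
            EXEC SSISDB.catalog.set_execution_parameter_value
                @execution_id = @exec_id,
                @object_type = 50,                  -- system parameter
                @parameter_name = N'SYNCHRONIZED',
                @parameter_value = @sync;
            EXEC SSISDB.catalog.start_execution @execution_id = @exec_id;

            -- Package 2 only begins once package 1 has finished.
            EXEC SSISDB.catalog.create_execution
                @folder_name = N'ETL',
                @project_name = N'Loads',
                @package_name = N'PackageB.dtsx',
                @execution_id = @exec_id OUTPUT;
            EXEC SSISDB.catalog.set_execution_parameter_value
                @execution_id = @exec_id,
                @object_type = 50,
                @parameter_name = N'SYNCHRONIZED',
                @parameter_value = @sync;
            EXEC SSISDB.catalog.start_execution @execution_id = @exec_id;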

          Thanks for checking out the blog too Farid.

  3. Mark Tassin says:

    A couple of ways around this:
    1. You could edit catalog.create_execution and clean up more of Microsoft’s mess (they didn’t encrypt the original stored procs).
    2. Just configure the job to retry 3-5 times in the Advanced tab (see the sketch after this list).
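
    The retry setting in option 2 can also be applied in T-SQL rather than through the Advanced tab (the job name and step id here are placeholders):

      -- Retry the step up to 5 times, waiting 1 minute between
      -- attempts, so a collision at the same second just retries.
      EXEC msdb.dbo.sp_update_jobstep
          @job_name = N'LoadPackageA',  -- placeholder
          @step_id = 1,
          @retry_attempts = 5,
          @retry_interval = 1;          -- minutes between attempts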

    What’s even more fun is when the internal cleanup proc runs and uses the FK ON DELETE CASCADE stuff to clean out about 10 tables… 10 rows at a time = hundreds of thousands, if not millions, of rows in the lower-end tables. Depending on how many jobs you run per day, that cleanup can take hours, even after adding the indexes Microsoft recommends. Better yet would be to not depend on the ON DELETE CASCADE FKs and instead start the deletes at the bottom tables, as sketched below.
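
    A very rough sketch of that bottom-up approach, assuming SSISDB’s internal schema (table and column names should be verified against your build, and a real script has to walk every child table, not just this one):

      -- Purge old child rows in small batches BEFORE touching
      -- internal.operations, so the cascading FKs never fire.
      DECLARE @cutoff DATETIMEOFFSET = DATEADD(DAY, -30, SYSDATETIMEOFFSET());
      DECLARE @rows INT = 1;

      WHILE @rows > 0
      BEGIN
          DELETE TOP (10000) om
          FROM SSISDB.internal.operation_messages AS om
          JOIN SSISDB.internal.operations AS o
              ON o.operation_id = om.operation_id
          WHERE o.created_time < @cutoff;

          SET @rows = @@ROWCOUNT;
      END;

      -- Repeat the same loop for the other child tables
      -- (event_message_context, event_messages, and so on),
      -- then delete from internal.operations itself last.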
