SQL Server Performance Troubleshooting with Ozar Unlimited


I was lucky enough to attend the SQL Server Performance Troubleshooting class in San Diego with the Ozar Unlimited team.  I like to review events that I attend, and I take some pride in telling people how it is, and how it was, at least from my perspective.  With that being said, anything said in this post is my own opinion and not related in any way to any other entity.  I’m also going to leave out quite a few of the details, since part of the fun was the unknown, and the Ozar Unlimited team did a great job of keeping the training fun.  In order to keep this post a reasonable length, I’m only going to be picking my top three modules from each day.



Some Background

Back in college, the ability to serialize things to a file in C++ escaped me.  I had a straight up disability when it came to file IO.  I got around this disability by using databases, because somehow it was easier for me to read and write into various database systems (right?).  This is how I gained rudimentary SQL skills, mostly in the world of Oracle and MySQL.  When I landed a job as a Software Engineer for National RV, I found myself developing small Java-based data entry apps backed by MySQL.  Tragically, National RV shut down and I was out in the cold, cruel, harsh world…then I found a job shortly thereafter at American Bath Factory [ABF], where I think my original title was “MAS 200 Support.”

At the time, ABF was using an ERP called MAS 200 and a homegrown PHP web application called CRS.  We performed ETL between MAS 200 and MS SQL so that CRS could access the ERP data and allow customer service reps to input RMAs.  Because I knew how to write queries and had a small grasp on database internals, the “MAS 200 Support” title quickly changed to “Database Administrator,” and I became the go-to person for maintaining and optimizing the database server.  I read a couple of books, crashed a couple of servers, and I was getting the hang of things pretty quickly.  Sound familiar?

In 2009, while browsing the internet for stuff, I stumbled upon a man named Brent Ozar and his script sp_Blitz.  Through my interest in the script, I started reading Brent’s other posts.  I felt as if I had hit the mother lode of educational reference material.  This is when I started stalking…er, I mean following his blog and SQLServerPedia videos.  The man never knew about me, but he had become a virtual mentor in the ways of SQL Server.  Through his blog, I discovered there was a DBA community, and I started learning real-world DBA skills without having to crash production servers :)

Since then my career has skyrocketed.  My ability to assimilate and optimize has proven to be extremely valuable and, luckily, in demand.  I was truly inspired by Brent Ozar and the community, and because of that inspiration (and the complete lack of a social life), I began a quest of constant improvement.  Years later, I still follow the Ozar Unlimited team through their posts, I attend their webinars on the regular, I’ve had a critical care session with Jes Schultz Borland, and recently I was finally able to meet Brent, Kendra, and Jeremiah at the San Diego SQL Server Performance Troubleshooting session.

Day 1

As excited as I was to finally meet the people who have been so influential in my career, I totally showed up late -_- .  Somehow my calendar was set to a 9am start time (I blame ghosts) when in fact the training started at 8am.  I walked in during a break and it looked somewhat chaotic, which was great because I thought everyone else was still arriving.  Brent was handing out training material and people were walking around getting refreshments.  I eventually found a seat and mentioned to the guy next to me something about how I thought I was going to be late.  He said, “Don’t worry, you only missed one module,” which made me laugh because I thought he was joking.  I later found out he wasn’t.  Damn, total fail.

The “Fundamentals of SQL Server Internals” module was awesome.  I very often find myself reviewing the Microsoft SQL Server 2008 Internals book because I think the internal functions of database systems are extremely interesting.  Learning the basics, and reviewing them frequently, is in my opinion a great way to retain and master a subject.  Maintaining structural integrity at the foundation of your knowledge can make all the difference when you are trying to troubleshoot problems and design solutions.  It was great that the team presented this information in a way that didn’t put us to sleep (read the internals book, see what happens).

The module “How to Tell When TempDB is a Performance Problem” was a sneak attack.  I really haven’t had a good (or bad) experience with TempDB in the field, so I wasn’t originally interested in this module.  I’ve always followed recommended optimizations such as using multiple TempDB files and placing them properly across fast disk subsystems.  During this module we got to learn more about the inner workings of TempDB, and I now have a new appreciation for it.  Knowing that TempDB does…EVERYTHING…makes me take a second look at my own understanding of the component.  This module definitely sparked a newfound interest that is going to result in further research.
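For reference, the multiple-files optimization I mentioned is just an ALTER DATABASE away.  Here is a rough sketch; the file path, size, and growth values are made up, so adjust them to your own storage:

-- Hypothetical sketch: add a second tempdb data file on fast storage
ALTER DATABASE tempdb
ADD FILE (NAME = N'tempdev2',
          FILENAME = N'T:\TempDB\tempdev2.ndf',
          SIZE = 4096MB,
          FILEGROWTH = 512MB);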

Another module that really stuck out was “Why NOLOCK isn’t enough: Finding Query Isolation Problems.”  This is where Kendra Little taught us about Read Committed Snapshot Isolation and how, if carefully implemented, it could cure many of our locking problems.  To be completely honest, I haven’t paid much attention to isolation levels, but that all changes now.  For me this module was a great introduction to isolation levels and how to avoid locking.  RCSI is now on my list of things I need to experiment with and implement.
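For when I do get around to experimenting, flipping RCSI on for a database is a one-liner.  A minimal sketch, assuming a throwaway database name and a maintenance window (the ROLLBACK IMMEDIATE option kicks out anyone holding the database open):

-- Hypothetical example: enable Read Committed Snapshot Isolation on a sandbox database
ALTER DATABASE [SandboxDB] SET READ_COMMITTED_SNAPSHOT ON WITH ROLLBACK IMMEDIATE;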

Day 2

I made sure I was on time for Day 2 :) and they had breakfast waiting for us in the conference room.  The day started out with Jeremiah Peschka’s architecture madness of doom.  One thing I need to say about Jeremiah is that I always kind of considered him to be like the drummer of the band.  The drummer is completely necessary and critical to the process, but somewhat disregarded by the spotlight.  By Day 2, Jeremiah was proving himself to be a constant source of architecture and theory whoop ass.  The “Rules, Rules, and Rules” module was another seriously badass one, where we discussed the architecture and theory behind the engine.  Jeremiah used words like “Science” and “Domains” as he lectured about the heap data structure, the nested loop join, the merge join, the hash join, and MVCC.  During this module, something dark and sinister was stirring inside (and I’m not talking indigestion): the more I heard someone else talk about architecture and theory, the more I wanted to keep hearing about architecture and theory.

“Seek and Destroy: How to Identify Indexing Problems” was great.  Pretty much anytime Kendra Little talks indexes, you better sit down and STFU, ’cause she knows a thing or two about them.  IMHO, one of the more fun optimizations in SQL is the never-ending quest to tune indexes.  Because of how fun this optimization is, learning about indexes can never really be dull.  I’ve been following Kendra’s posts on tuning indexes for a couple of years, so I was familiar with her tuning techniques, which are very structured, in-depth, and win.  Her presentation style is very organized and illustrated, and the demos were gold.

Another module that I really enjoyed was “What Queries Are Killing My Server?”  One thing that you can’t argue with is that Brent Ozar is extremely good and methodical when it comes to troubleshooting and fixing problems.  Getting into his head and seeing how he finds the killers, and deals with them, is entertaining and enlightening.  This was my first real introduction to Extended Events.  For some reason I’ve been avoiding them; although I started watching some Pluralsight videos on them, I’ve just been sticking with an old Profiler template that I made a few years ago.  After this module, I plan on converting that template into Extended Events and becoming teh homies with XE.
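As a starting point for that conversion, something like the following minimal XE session captures completed batches the way my old Profiler template did.  This is just a sketch I put together; the session name, duration filter, and file path are all placeholders:

-- Hypothetical sketch of a minimal Extended Events session
CREATE EVENT SESSION [QueryKillers] ON SERVER
ADD EVENT sqlserver.sql_batch_completed (
    ACTION (sqlserver.sql_text, sqlserver.database_id)
    WHERE duration > 1000000)  -- duration is in microseconds, so this is > 1 second
ADD TARGET package0.event_file (SET filename = N'C:\Temp\QueryKillers.xel');
GO
ALTER EVENT SESSION [QueryKillers] ON SERVER STATE = START;
GO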

Day 3

Up to this point, I was learning so much and I was able to gauge my own skill sets, realizing that I’m not doing too shabby.  After grabbing breakfast and socializing for a bit, it was time to begin.  I was feeling totally pumped and ready to absorb more information when Jeremiah Peschka stepped up to the plate again, and damn, he laid the intellectual smack down on all of us.  He completely wtfpwnd our brains with “Caching,” “Dynamic SQL,” and “Windowing Functions.”  I’ve had exposure to these subjects in the field and in text, but any exposure was still not enough to keep my mind from being blown.  The wonders of application caching, the beauty of handling dynamic SQL (QUOTENAME() and PARSENAME() in particular), and the pure amazement of windowing.  By lunchtime I was kind of star struck.  Earlier I had said that I thought Jeremiah was like the drummer of the band, but after these three modules I realized that he is in fact a true Vatican Assassin Warlock.  You don’t fuck with Peschka’s knowledge…Peschka’s knowledge fucks with you!

Every module in Day 3 was amazing, but like I said earlier, I only wanted to review the three that impacted me the most.

Other Stuff

It wasn’t just training with the Ozar Unlimited team, it was a complete experience.  They ate lunch with us and were able to somehow digest food while being bombarded with questions.  Two nights we got to accompany them to dinner, where they proved to be interesting people on a personal level.  They kept things fun from start to finish, and even threw in a contest, where I proudly walked away with a 2014 Justin Bieber calendar.  At one point Brent even showed us how to use Twitter and #sqlhelp.  I met a lot of other very cool DBAs from various parts of the country and had a great time getting to know each of them.

One thing I left with (little jars of jam and Hilton pens aside) was a list of items that I want to research and write about, if anything just to better understand and retain the skills that were taught to me during this training.  Becoming encumbered with schwag was not the only thing that happened during this journey; I also returned with a rather spiritual experience.  For years I’ve been riding the rainbow tail of database server dragons, but never truly understanding why I was drawn to them in the first place.  Did I just understand them?  Did they just understand me?  Was I part dragon myself?  Were my true birth parents database servers?  I’ve pondered these questions for years, and during this training, while I was learning how to be awesome with nunchucks of database administration fury, I realized that the theory and architecture of database systems is what I love.  Pages, Algebrizers, Hashing, Trees, and Maths are all fascinating components of databases.  Not only do they sound cool when you talk about them at a bar, they look great no matter what font you dress them in.

Conclusion

I have to say this was a 10/10 training session.  It was worth more than we actually paid for it, and the Ozar Unlimited team can melt faces with laser beams of thunder and awesome.  I would recommend this training to anyone wanting to get serious about acquiring skills to advance their career as a DBA, and I would love to attend further training sessions with the group.

Good Game Ozar Unlimited…Good Game.

Source Controlling Indexes



With my recent move into a software development team, I find myself paying more attention to software engineering techniques and tactics, one of them being source control.  In the past I’ve had a very manual…archaic…system for maintaining indexes.  Before I get into the new workflow, a few notes on my setup:

  1. Everyone should get SQL Source Control.  It’s such a low price that I can’t imagine any employer saying no to a tool that helps you source control your databases.  I really think you owe it to yourself to at least try the demo.  With that said, if you don’t have SQL Source Control then I’m sure you can still accomplish the end goal, but you’ll need to experiment yourself.
  2. I am using Git because it’s seriously easy to use, it installs seriously fast, and it’s seriously easy to use (did I already mention that it’s seriously easy to use?).  I installed Git on an Ubuntu server in Azure.
  3. I am using Atlassian SourceTree and Stash, mostly because I use JIRA, but also because I like having a graphical interface.  It also has a terminal you can pop open when needed, but in my case I’ve found it much easier to teach other people about Git when I can show them using a GUI.
One thing that I commonly do when performing index analysis is to identify indexes that are not being helpful.  How I do that is not in the scope of this post, but I do it mostly with thunder.  After a Critical Care session with Jes Schultz Borland, I was convinced that disabling indexes (rather than dropping them outright) was the way to go.  Taking her advice, I started using a workflow that went a little something like this:
  1. Identify
  2. Track possible uses in the plan cache (see the sketch just after this list)
  3. Script the index creation and save in folders identified by date
  4. Disable the index
  5. Chill or be Chilled (being Chilled is like being Iced)
  6. Drop
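For step 2, the plan cache check I lean on looks something like the sketch below.  The index name is hypothetical, and text-searching plan XML is expensive, so save it for a quiet window:

-- Hypothetical sketch: look for references to an index name inside cached plans
SELECT TOP (20)
       cp.usecounts,
       cp.objtype,
       qp.query_plan
FROM sys.dm_exec_cached_plans AS cp
CROSS APPLY sys.dm_exec_query_plan(cp.plan_handle) AS qp
WHERE CAST(qp.query_plan AS NVARCHAR(MAX)) LIKE N'%IX_RenameMe%';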
Seems pretty standard, right?  Item 3 was a painful area for me.  Scripting the index and filing it away was a manual process: tedious, boring, excruciatingly monotonous (you get the idea…), and I also had to make sure it was backed up and available.
After setting up Git and Stash on an Azure VM for my team, I installed SQL Source Control [SSC] and linked our first database.  This was cool: I got to see all the important stuff go into source control, and it was win.  So the time came for me to start testing how indexes worked with SSC and Git.
Creating an index is perfect: SSC detects the change and adds the T-SQL to the schema.tablename.sql file.  Dropping the index was just the same: the T-SQL was removed and the commit message saved.  Next I figured that disabling an index would be just as smooth, so I disabled my test index and clicked the commit changes tab in SSC.  …damn.  Nothing happened but silence, so I thought to myself, this won’t do at all.  I was on the hook with my homey Josh, mentioned the slight frustration, and he said, “Why don’t you rename the index?”  Simply genius…that is, if renaming the index allows the usage stats to remain intact (which is the whole reason disabling is preferred over dropping).
Now, like any good software engineer would, I decided to perform an experiment to see how the usage stats would react.  I cracked open Red Gate SQL Data Generator, inserted a milly, and then wrote a WHILE loop that ran a SELECT statement using the index some random number of times.  I recorded the index’s usage stats and then executed the following T-SQL:
EXEC sp_rename N'testingGit.dbo.testingGitTable1.IX_testingGitTable1col1', N'IX_Iwasrenamed', N'INDEX';
I reran the usage stats check and, to my delight, the stats remained the same.
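For the curious, the before-and-after check was along these lines (a sketch using my throwaway test database and index names from above, run from inside that database):

-- Sketch: compare seeks/scans/lookups/updates before and after the rename
SELECT i.name AS index_name,
       us.user_seeks,
       us.user_scans,
       us.user_lookups,
       us.user_updates
FROM sys.dm_db_index_usage_stats AS us
JOIN sys.indexes AS i
  ON i.object_id = us.object_id
 AND i.index_id  = us.index_id
WHERE us.database_id = DB_ID(N'testingGit')
  AND i.name IN (N'IX_testingGitTable1col1', N'IX_Iwasrenamed');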
Now at this point I needed a more organized method for renaming and disabling an index.  I wrote the following stored procedure to handle this task; it takes in the database name, schema name, table name, and index name, then handles it for you.  All indexes disabled in this fashion will have the postfix _wasDisabled_ + MMDDYYYY.  Example: IX_RenameMe gets disabled and becomes IX_RenameMe_wasDisabled_01012014.
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
-- =============================================
-- Author:       Tim M. Hidalgo
-- Create date:  01-16-2014
-- Description:  Input variables Database Name, Schema Name, Table Name and Index Name to be renamed/disabled
-- =============================================

IF OBJECT_ID('dbo.sql4gnt_DisableAndRename', 'P') IS NOT NULL
   DROP PROCEDURE [dbo].[sql4gnt_DisableAndRename]
GO

CREATE PROCEDURE sql4gnt_DisableAndRename
-- Add the parameters for the stored procedure here
@databaseName NVARCHAR(MAX), @schemaName NVARCHAR(MAX), @tableName NVARCHAR(MAX), @indexName NVARCHAR(MAX)
AS
BEGIN
DECLARE @newName NVARCHAR(MAX)
-- SET NOCOUNT ON added to prevent extra result sets from
-- interfering with SELECT statements.
SET NOCOUNT ON;
-- Style 101 gives MM/DD/YYYY; stripping the slashes leaves the MMDDYYYY suffix
SET @newName = @indexName + N'_wasDisabled_' + REPLACE(CONVERT(CHAR(10), GETDATE(), 101), '/', '')
-- Print statements for debugging
--PRINT(@indexName);
--PRINT(@tableName);
--PRINT(@newName);
EXEC ('sp_rename N''' + @databaseName + '.' + @schemaName + '.' + @tableName + '.' + @indexName + ''', N''' + @newName + ''', N''INDEX''')
--PRINT('sp_rename N''' + @databaseName + '.' + @schemaName + '.' + @tableName + '.' + @indexName + ''', N''' + @newName + ''', N''INDEX''')
EXEC ('ALTER INDEX ' + @newName + ' ON ' + @databaseName + '.' + @schemaName + '.' + @tableName + ' DISABLE')
--PRINT('ALTER INDEX ' + @newName + ' ON ' + @databaseName + '.' + @schemaName + '.' + @tableName + ' DISABLE')
END
GO
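Calling it looks something like this, using the same throwaway test objects from my experiment above:

EXEC dbo.sql4gnt_DisableAndRename
     @databaseName = N'testingGit',
     @schemaName   = N'dbo',
     @tableName    = N'testingGitTable1',
     @indexName    = N'IX_testingGitTable1col1';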
After the rename, SSC will detect the change; then you can save your changes, commit to Git, and win.  One problem with this code is that I have to create the proc in each database.  I wasn’t able to get it to run from master without the following error:

Either the parameter @objname is ambiguous or the claimed @objtype (INDEX) is wrong.

If anyone has ideas on how to improve this procedure, please let me know.

Cheers

No More Excuses…



I remember a Facebook post from a musician I enjoy listening to named Gentry.  Although I cannot locate the exact quote, it was something like “Traktor DJ for iOS is now free, no more excuses for sucky iPod DJ sets.”  After I was done ROTFL, I continued on with my life (after downloading the app, of course).  Something about that quote stuck with me, and while I was doing my normal routine of researching and learning, I started thinking to myself: as technology professionals, we really have no excuse for not constantly improving our skill sets.

Do a quick Google image search for “what’s your excuse” and you’ll see tons of what should be motivational pictures.  Little tink tink, mothers with hard bodies, athletes with no arms…and memes of Darth Vader.  Regardless of what you find, the message is the same: people everywhere are able to find the motivation and means to accomplish amazing things.  As a DBA, no one is asking you to be an Olympic track runner or champion bodybuilder…they are asking that you keep your skills and knowledge updated so that you can provide a continual benefit to the business.  If you are a DBA and you are not able to keep up your skills, what’s your excuse?

I don’t have the time – Time is a bitch.  You have to make time.  Wake up earlier, stop binge watching Netflix, go to bed later, carry a Kindle with you and read while you wait for things, or carve out some time on the weekends to get your studying in.  I have 3-3.5 hours of commuting every day, an 8-year-old son, a very beautiful wife, and 3 dogs.  I also work side jobs, and I have hobbies like music production and keeping my car shiny.  I am able to find the time…what’s your excuse?

It’s too expensive – This one gets you smacked upside the head…hard.  There are so many free resources on the Internet that you couldn’t possibly have exhausted them all.  I don’t believe that you have read every article or watched every video on SQL Server Central, not to mention the ENTIRE INTERNET.  Professionals like Brent Ozar, Paul Randal, and Grant Fritchey all offer free advice; hell, Ozar Unlimited offers FREE weekly webinars!  Aside from the free resources, you really should get your finances in order and create a budget for training material.  Maybe a book every three months?  My wife and I decided that it was best for her to home school our son, so we are a single-income family.  We have to go over our personal finances weekly to make sure we are on target, and we review them monthly to see if they can be optimized.  I can understand how hard things can be on a tight budget, but maybe you can cut out trips to the bar, eating out a couple of times, or buying snacks at the gas station?  You can also look into tax write-offs that are available for educational and training purposes.  So now I ask again…what’s your excuse?

When I’m off work, I don’t like to think about work – Why?  When I’m reading a book on FILESTREAM this weekend (a FREE book from Red Gate, btw), yes, it is because of a task assigned at work, but it’s also self-improvement, and I view it as something that will help advance my career.  I take this assignment as a way for me to better myself.  Instead of writing music this weekend, I will read the book and work through the examples.  I do have a Super Bowl party to attend, but I’ll just have to figure something out.  Perhaps at the bottom of it all, I love my job, and my employers treat me very well.  Because of how well they treat me, I enjoy learning new things so that I can better contribute to the business, which really is a second home for me (you spend 8+ hours at the office; you really need to make a home where you work).  If you really love what you do, and you are grateful to the people that pay your rent…why would you not think about them off the clock?  One more time…what’s your excuse?

Personally, I believe that if you love something, finding time, resources and motivation to go that extra mile is not hard.  Life is too short and we should all strive to enjoy every moment of it, so wouldn’t it be a great idea to find a career that motivates you enough to continuously optimize?

Cheers

Virtualize Everything



I began dabbling in virtualization somewhere around 2007.  I used VMware Server for creating some virtual labs, but nothing too serious.  At that time it was a cool technology, and I was happy to have a small group of Linux servers running on one single physical server.

Later in 2008 I got a taste of virtualizing Windows servers in a production environment using VMware and vSphere.  This was where I fell in love with the technology.  The speed at which I could provision servers was amazing, and having that well-designed console that is vSphere…just beautiful.  It truly made me feel like a superhero, but it wasn’t until late 2012 that I finally got to try my hand at constructing a fully virtualized infrastructure.

As IT Manager for an EMR software company, I reached my final destination: over 100 virtual machines across 4 UCS blades, a couple of 10G Cisco switches, and a VNX5300.  This project is one of my most beautiful creations to date.  The architecture of that infrastructure made our IT department much more flexible and able to keep up with the demands of an extremely fast-paced business.  We virtualized both production and staging environments that included application servers, web servers, and database servers, and we utilized VMware High Availability for protection and vMotion for meeting SLAs.  To this day I still think back to that work of art.

Since then I have set up three more virtualized environments.  I’ve helped larger organizations empower their IT departments to manage remote virtual data centers, and I’ve helped small businesses realize the benefits of virtualization, with amazing results.

With that brief history, you’ll find it no surprise that wherever I go, I always make sure I have the capability to virtualize.  I find that very often development departments and IT departments will be testing things in production, or testing things locally on their workstations and then rolling them out to production.  The two most obvious problems with these practices are that when you test in production you are being completely unsafe, and when you develop on a different architecture you don’t know how something will react in production.  The common excuses I usually hear are: it’s too expensive to configure a virtualized environment, we don’t have the expertise, and we just aren’t sure about the viability of virtualized solutions yet.

It’s too expensive to configure a virtualized environment – Yes, it is expensive: 4 UCS blades, 2x 10G Cisco switches, and the VNX5300, along with VMware Enterprise Plus, will cost somewhere around $225k.  One HP server with a couple of terabytes of space, 32GB of RAM, and VMware Essentials Plus will run you around $15k.  If your business doesn’t have the funds for such configurations, then you should get creative.  Sometimes baby steps are best.  Check your server graveyard and put together something with as much RAM and as much hard drive space as you can muster.  You’ll be surprised how little CPU it takes to run 6 VMs for use in a staging or development environment.  I have found that these first baby steps will provide enough illumination for management to take a more serious look at the benefits of virtualization.

We don’t have the expertise – Yes, you will need someone to configure this.  If you go the expensive route, I would highly recommend that you not only get your staff training, but also get at least one consultant that has been to the rodeo a couple of times.  I would also suggest that if you are truly interested in being awesome, you start learning virtualization by setting up your own development environment or proof-of-concept environment.  Don’t just hack it; configure and install, learning as you go.  Virtualization knowledge is never a bad thing to have.

We just aren’t sure about the viability of virtualized solutions yet – Really?  Virtualization is not new, it’s not bleeding edge; you are behind the times if you aren’t reaping some of the benefits that come with virtualization.  Your sandbox environment could be completely virtualized for little to no hardware and software cost.  I too was wary of virtualization; people would say scary things like “SQL performs poorly.”  Most of the people saying these things were older IT workers that clearly hadn’t been keeping up with the times.  Once I finally took the plunge, I found performance to be more than acceptable (especially when moving from lesser physical servers to greater virtual servers), and to be honest, one should probably be more worried about the unoptimized code running in their infrastructure.

At this point I ask you, the reader: why aren’t you virtualizing something?  You can use VMware, VirtualBox, or Hyper-V.  VirtualBox is free, Hyper-V comes with Windows 8.1, and VMware Workstation has a price tag of $249.  For the love of all that is holy and light…please just download VirtualBox and start learning how to virtualize.  Don’t just create a client workstation VM; create a domain controller, maybe a web server, learn how to use virtual networking, and read up on how to optimize settings for different server types.  I can assure you that you will not be wasting time.

In my current position I am using a laptop with Windows 8.1 and a 1TB external drive, so naturally, I am using Hyper-V.  I highly recommend that you get familiar with PoSH because creating Hyper-V machines is much quicker when you script them.  Here is a link to the script I used as a reference.  I wrote a modified version that takes in a couple parameters and loops through a file for creating multiple machines.

I like to create host-only networks so that all my traffic is contained within my virtualized environment.  This is very simple in VMware (click, click, click) and just as simple in VirtualBox, but when it came to Hyper-V I referenced this page to get an internal network with internet access (Routing and Remote Access Server…fu).

After the machines are created, you should setup a domain.  I like to use sandbox.dev so that it’s extremely obvious what the domain was created for (or it should be).

To configure the DC:

  1. Install Windows Server 2012 R2 Core
  2. Run through sconfig.cmd and set your IP address, DNS server, and computer name.
  3. Run Install-WindowsFeature AD-Domain-Services -IncludeManagementTools
  4. Run Install-ADDSForest -DomainName “sandbox.dev” -DomainNetbiosName “SANDBOX”
  5. Reboot and win
  6. Create other machines and join them to the sandbox.dev domain
  7. Win again
The finished product is a mini domain where I can haz admins and test various configurations.  Currently on my laptop I have virtualized a core DC, a core app server, a core SQL Server, and an instance of Kali.  If you are like me, then after reading this you will be thinking about how you can refine the process in order to quickly provision new servers for any purpose in a matter of minutes (think PoSH and unattended Windows install files).  Want to test some security changes?  Test them out on a fresh VM.  Want to see roughly how large an index might be?  Create it on a VM.  Want to do anything that makes you feel uneasy while doing it on a production machine…do it on a VM!

From this point you should have a good idea of how virtualization can help you configure sandboxes in order to test without impacting production.  There are always extreme cases, like Mr. “What about my multi-terabyte db servers”…you, sir, will need to use your head.  I have simply laid out a scenario where a professional can self-contain servers to at least provide a framework for sandboxing.

 

When heroes collide…

I was browsing SQL Server Central like I do most nights, and I came across this post discussing the use of SA.
Now anyone that reads my blog will know that I have much respect for Brent Ozar and much respect for Grant Fritchey.  In my opinion, both of them have been and continue to be a constant source of motivation and information for me.  So when I read the above-mentioned post, I instantly went back to something that I had learned from Brent Ozar’s writing (as well as a critical care session with Jes Borland), and it was about using SA.  A couple of items that sp_Blitz returns are “Database Owners <> SA” and “Jobs Owned by User Accounts.”  Because I’m an sp_Blitz fanboy I’ve been going by these rules for quite some time; I’ve even been in that scenario where IT disabled an old user account that had been used for jobs and they started failing right away.  Using SA for the database owner also makes complete sense to me because, well…why else would anyone need to be the db_owner?  (If you have some good reason, please let me know.)
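For context, the re-configuration I’m talking about boils down to something like the sketch below; the database and job names are made up, and as always, test before touching production:

-- Hypothetical sketch: find databases and Agent jobs not owned by sa
SELECT name AS database_name, SUSER_SNAME(owner_sid) AS owner_name
FROM sys.databases
WHERE SUSER_SNAME(owner_sid) <> N'sa';

SELECT name AS job_name, SUSER_SNAME(owner_sid) AS owner_name
FROM msdb.dbo.sysjobs
WHERE SUSER_SNAME(owner_sid) <> N'sa';

-- ...and flip the offenders over to sa
ALTER AUTHORIZATION ON DATABASE::[SomeDatabase] TO [sa];
EXEC msdb.dbo.sp_update_job @job_name = N'SomeJob', @owner_login_name = N'sa';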
Grant and Jeff brought up some interesting points, and I’ve come to the conclusion that there is no single right answer.  Using SA definitely ensures that trifling hands will not do something like lock out the jobs’ user account, but SA is also a known account and definitely an attack vector.  Renaming the SA account surely mitigates that vector, but disabling the account could possibly affect service pack installs and upgrades.  These are all great reasons for doing and not doing, all requiring due diligence and testing on my own part, but the point of this post is not to define a “correct” security policy; the point of this post is what to do when your heroes and their advice collide.
In a situation like this, I personally feel that a death match in the Thunderdome is the BEST way to choose a winner, but in reality I view it as an opportunity to learn from both parties: collect, combine, and assimilate techniques and tools into my arsenal.  For some time I’ve been using SA for jobs and database owners by default as one of my initial re-configurations (unless the business has a reason not to).  In this case, Grant and Jeff have made great points that now make me go back, evaluate my own reasons, optimize, and re-deploy.  Armed with knowledge and hopefully a tested proof of concept, you can take it to your manager and the business, get a solid definition that directly impacts your paycheck, and win.
Rarely is anything black and white, especially in modern business-driven infrastructures, and when two experts are both giving accepted expert advice, my advice would be to digest both sets of advice, optimize, and redeploy.  This is how I believe the community works: experts helping to shape and mold other experts.

Reverse Engineering using SQL Profiler

There may come a time in your career when you are tasked with finding out how a 3rd party application functions, because some 3rd parties have completely closed source applications, or better yet, APIs that perform poorly.  Microsoft SQL Server Profiler can help you identify what tables are used, what procedures are called, and what data is inserted when functions are run.

I was recently given this task.  We needed to import upwards of 400k records from Numara TrackIt 8 to an unnamed vendor we will call X Engine [XE].  The XE API posed two problems: speed was less than desirable, and there was no way to import attachments.  In this case, not being able to import the attachments was the deal breaker.  With over 5 years of historical ticket information, we needed to make sure that our clients did not experience any kind of service degradation; not being able to look up attachments would definitely cause some client frustration.  I know, because right now I’m on hold because Staples can’t find an attachment of a receipt that I emailed them previously…and it’s kind of pissing me off (not to mention their extremely rude and absolutely non-personable customer service reps).

I’m not going to go into the fluff about Profiler and tracing; you can read about Profiler here http://technet.microsoft.com/en-us/library/ms181091.aspx and you can google tracing if you need to.  I’m simply presenting a great use for the tool.

So where do we start?  Well, the goal is to get data from one system to another, so knowing how and what is inserted would be a great place to begin.  But how can we get this information?  There are probably some great 3rd party tools that will provide documentation of the database; this “could” get you close, but I like to dive right in and get dirty.  The first thing I wanted to look at was how records are cleanly inserted into the XE database.  So I crack open Management Studio, hit Tools, and call up my homey SQL Server Profiler.

I opened up my browser and went to the ticket creation screen in XE.  I loaded the default template in Profiler.  Assuming that no one else is testing the system at the same time I am trying to trace (if you can control this variable, then do it: kick everyone out of the system), I begin the trace.  I switch back to XE and submit a ticket, and right away Profiler starts filling up with all kinds of useful information.  In this particular case I was presented with about 8 sequential insert statements and their values.  Armed with this information, I made a mapping between the values I entered into the form and the values being inserted into the database.  In order to really understand what was going on and make sense of it, I had to look at the columns in each table and figure out what they meant and what their function was.  For example, during one of the inserts a bit column is set; the column name was something like isActive, and the table was something like WOQueue.  Makes sense, right?

After dissecting what the application was doing behind the scenes, the next step is to try to do the insertion manually through T-SQL.  Here you can basically take all the information that was given to you by Profiler, change some of the values so that they are meaningful in helping you determine their function (for example, what if I set the isActive bit to false?  What happens in the application?), and fire off the code.  I ran through this process quite a few times in order to see what was happening behind the scenes, and I would check the interface to make sure everything was OK.  When I set the isActive bit to false, the work order wouldn’t appear in the queue regardless of its status; the isActive bit had to be set to true in order to see it (this made no sense to me).
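To give a flavor of that poke-and-observe loop, the statements I was firing off looked roughly like this.  The table, column, and key names here are my approximations from above, not the vendor’s real schema:

-- Hypothetical sketch: flip a captured value, observe the app, then put it back
BEGIN TRANSACTION;

UPDATE dbo.WOQueue
SET isActive = 0            -- what happens in the app when this is false?
WHERE WorkOrderID = 12345;  -- made-up key for the test ticket

-- ...refresh the XE queue screen and observe...
ROLLBACK TRANSACTION;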

I’m hoping by now you can see the value of using Profiler to help you navigate through uncharted territory.  Using tools that should be at hand, you can get a handle on how applications work behind the scenes.

Trace Flag 3226 – Suppress “Log was backed up” messages

I’ve been out for the last month adventuring in Bolivia.  During this time my inbox stacked up with newsletters and digests galore, mostly from SQLskills and SQL Server Central.  In the 11/25/2013 SQLskills Insiders email, Paul Randal highlights a couple of trace flags that should be enabled.  First, if you haven’t been to sqlskills.com…you probably aren’t googling right.  These are the pros that provide pro information to help other pros become pro…er?  I learn so much on a weekly basis by reading articles from the SQLskills team, so I highly recommend you check it out and sign up for the emails.

Back to the topic at hand: in this RunAs Radio show, Trace Flag 3226 is mentioned.  Stop filling up your logs with successful transaction log backup messages = WIN, especially for those of us that perform log backups every 5 or 10 minutes.  After adding this flag, Log File Viewer and my eyes are much happier.  I would assume that you have set up your transaction log backup job to send an email on failure (you did this, right?), so you can enable this trace flag safely and enjoy a pruned log file summary.
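Enabling it globally is quick.  A minimal sketch, plus the startup parameter you’d use to make it survive a restart:

-- Enable trace flag 3226 for the whole instance, then verify
DBCC TRACEON (3226, -1);
DBCC TRACESTATUS (3226, -1);
-- To persist across restarts, add -T3226 as a SQL Server startup parameter
-- (SQL Server Configuration Manager > service properties > Startup Parameters)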

You can view trace flags here and read up on what they do.  I would suggest that before enabling any trace flags you first research and test.  I went the route of enabling this flag on one of my virtual test servers, and I will wait at least a week before deploying it into production.

Cheers