SQL Saturday #638, Philadelphia

This coming Saturday, June 3, I will be speaking at SQL Saturday #638, Philadelphia (okay, it’s actually in a town called Whitpain Township, not Philadelphia, but that’s what they call the event, so…)!

I will be giving the following two presentations:

  • Tech Writing for Techies: A Primer — Documentation is one of the most critical, yet most blatantly ignored and disrespected, tasks in technology. Businesses and technical professionals ignore documentation at their own risk. This session discusses what tech writing and documentation are about and why they’re critical for business. It also explores possible reasons why documentation is ignored, how it can be improved, and how “non-writers” can contribute to the process.
  • Disaster Documents: The role of documentation in disaster recovery — I was an employee of a company that had an office in the World Trade Center on Sept. 11, 2001. Prior to that infamous date, I had written several departmental documents that ended up being critical to our recovery. In this presentation, I provide a narrative of what happened in the weeks following 9/11, and how documentation played a role in getting the organization back on its feet.

    While other disaster recovery presentations talk about strategies, plans, and techniques, this presentation focuses on the documentation itself. We will discuss the documents we had and how they were used in our recovery. We will also discuss what documents we didn’t have, and how they could have made the process better.

Hope to see you there!

My 2017 SQL Saturday Schedule (so far)

My 2017 SQL Saturday schedule is slowly taking shape!  For those of you who are interested, here’s where I’ll be (so far)!

Come on out to a SQL Saturday near you!  The events are free (although there is usually a nominal fee for lunch), there are lots of networking opportunities, and you might just learn something new!

And as an added bonus, you might even get to hear me speak! 😉

SQL Saturday #615, Baltimore, MD

My next speaking engagement is set!  On Saturday, May 6, I will be speaking at SQL Saturday #615 in Baltimore, MD.  I will be giving not one, but two presentations that day: “Tech Writing for Techies: A Primer” and “Disaster Documents: The role of documentation in disaster recovery.”

Hope to see you there!

On data types

Previously, I talked about something I saw in the code that built the tables in my baseball database.  I wrote last time about primary keys.  In this article, I want to talk about data types.

Let’s revisit the code that builds the Batting table again.

CREATE TABLE [dbo].[Batting](
	[playerID] [varchar](50) NULL,
	[yearID] [varchar](50) NULL,
	[stint] [varchar](50) NULL,
	[teamID] [varchar](50) NULL,
	[lgID] [varchar](50) NULL,
	[G] [varchar](50) NULL,
	[AB] [varchar](50) NULL,
	[R] [varchar](50) NULL,
	[H] [varchar](50) NULL,
	[2B] [varchar](50) NULL,
	[3B] [varchar](50) NULL,
	[HR] [varchar](50) NULL,
	[RBI] [varchar](50) NULL,
	[SB] [varchar](50) NULL,
	[CS] [varchar](50) NULL,
	[BB] [varchar](50) NULL,
	[SO] [varchar](50) NULL,
	[IBB] [varchar](50) NULL,
	[HBP] [varchar](50) NULL,
	[SH] [varchar](50) NULL,
	[SF] [varchar](50) NULL,
	[GIDP] [varchar](50) NULL
) ON [PRIMARY]

Last time, I mentioned that every single column allowed NULL values (and I’d like to expand on that in a future article).  Today, I’d like to talk about the fact that every single column is defined as varchar(50).

For those of you who are getting your feet wet with SQL Server (or don’t know what I’m talking about), let’s take a moment to discuss…

What are data types, and why are they important?

Imagine that you’re presented with a math problem (don’t worry; we’ll keep it simple).  However, when you’re presented with the problem, it looks like this:

XII + IX

Want to give it a shot?  The truth is, there’s a reason why we don’t normally use Roman numerals in our society.  Unlike the Arabic numeral system that we use in everyday life, the Roman numeral system is inefficient and not very intuitive at a glance.  For us to use Roman numerals, we’d first need to convert them into a format that we can easily understand.

So, knowing that XII = 12 and IX = 9, our new math problem is 12 + 9.  We now know our answer is 21.  (And if we want to convert back to Roman, it’d be XXI.)

Data types operate along the same principle.  A number that’s stored as a data type of, let’s say, varchar(50) (for those of you who don’t yet know T-SQL, that’s “variable character of up to 50 characters”) needs to be converted to a numeric type of (for example) int (for “integer”) before it can be processed.

“Okay,” you might ask, “why is that such a big deal?  Just let the system do the conversion and let it do the work.”

Let’s go back to that Roman numeral example for a moment.  In order for you to understand what the Roman numerals were, your brain had to do some work to perform the conversion.  Imagine that you had to process dozens, even hundreds, of those problems.  That’s a lot of Roman numerals that you have to process.  That makes for a lot of work.

Likewise, imagine that a computer system has to convert millions of values from one data type to another.  That’s a lot of extra work for the system to do, and it can make for a very inefficient system.  One of the big issues that a DBA or a developer often has to deal with is a slow system, and using proper data types can often help fix one.
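To make this a little more concrete, here’s a quick illustration using the Batting table as it’s currently defined (every column a varchar(50)).  Filtering on a column as if it were a number forces SQL Server to convert the stored string in every row it examines; if the column were an int to begin with, no conversion would be needed at all.

-- AB (at bats) is currently stored as varchar(50), so every row's value
-- has to be converted before the comparison can be evaluated.
-- (try_cast is used here in case any stored values aren't clean numbers.)
select playerID, yearID, AB
from Batting
where try_cast(AB as int) >= 600

-- If AB were defined as int, the same filter would need no conversion:
-- where AB >= 600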

What kinds of data types are there?

In a word, many.  A number of web sites list data types for various systems, including MSDN and W3Schools.  Since I’m writing primarily about SQL Server, I’ll focus on SQL Server data types.  I won’t talk about all SQL Server data types (I’ll leave you to look at the links for that); rather, I’ll just discuss a few that are relevant to our conversation.

Let’s start first with the varchar data type, since that’s what’s defined in our tables.  Varchar is short for variable character, which means the data type accepts character strings up to a length that you specify.  A data type of varchar(50) means that the field accepts strings up to 50 characters long.

There is also a char data type.  Like the varchar type, you specify the number of characters allowed in the column.  Unlike the varchar type, however, it is a fixed-length string.  A column of type char(50) means that the data will always occupy 50 characters; if the stored value is shorter than 50 characters, the rest is padded with spaces.  This is important to consider, because those characters (even the space characters) take up space.  If you’re storing millions of rows and using char data types for values that are shorter than the length you specify, that’s a lot of wasted space.
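Here’s a quick way to see the difference for yourself (just a throwaway snippet; the value is made up):

-- char pads to its declared length; varchar stores only what you give it
declare @fixed    char(50)    = 'smithjo01'
declare @variable varchar(50) = 'smithjo01'

select datalength(@fixed)    as FixedBytes,    -- 50 (padded with trailing spaces)
       datalength(@variable) as VariableBytes  -- 9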

So, let’s revisit our Batting table.

[Screenshot: the Batting table]

When we built our primary key, we created a composite primary key from the playerID, yearID, and stint columns.  Can we improve upon these columns?

Let’s look at playerID.  Do we need to reserve fifty characters for this column?  To find out, I ran the following query.

select distinct playerID, len(playerID) from batting
order by len(playerID) desc

Translated into English*, this query means “give me each distinct playerID, along with the length of the playerID, from the batting table, and give me the results in descending order of length.”

(*For those of you who don’t speak T-SQL, yes, I do intend to write an article about it.)

The results looked like this:

[Screenshot: playerID values and their lengths]

So what does this mean?  It means the longest playerID is nine characters.  Could it be longer than nine?  Possibly.  Will it ever get up to fifty?  Unlikely.  So while varchar(50) will definitely accommodate this, my thinking is that it’s overkill.  I’m going to shorten the column from 50 characters to 25.

Now, let’s look at the yearID column.  I ran the same query as earlier, but this time, I replaced playerID with yearID.  Every value was four characters long.

SQL Server provides us with a number of date and time data types.  My first thought was to use one of these data types, but at that point, I stopped to think.  What, exactly, is the purpose of the yearID column?  It identifies a season.  That’s it.  Will I perform any calculations with it?  It’s possible, but unlikely.  Will it ever change from four characters/digits?  No.  I’m going to make this a fixed four-character field.  For that, we’ll use char(4).

Now, let’s look at stint.  What is this column?  It keeps track of a player’s stint with a team in a season.  So, let’s see how this works.

I went into the database and looked to see what stint values were in the Batting table.  There were five.  I looked up the player who had five stints, and found a man named Frank Huelsman, who played in the early 1900s.  I ran a query that gave me this.

[Screenshot: Frank Huelsman’s 1904 rows in the Batting table]

In 1904, he had five stints with four different teams, having played for the White Sox (twice), Tigers, Browns, and Senators.
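(The exact query I ran isn’t reproduced here, but something along these lines surfaces the same rows:)

-- Find the highest stint value in the table, then pull every row
-- for the player(s) who reached it
select playerID, yearID, stint, teamID
from Batting
where playerID in (
	select playerID
	from Batting
	where stint = (select max(stint) from Batting)
)
order by playerID, yearID, stint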

So, what data type should we use for stint?  Let’s think about this.  We could use a char(1), but I don’t think we should.  Is it possible that a player could have ten or more stints in a year?  Unlikely, but yes.  So we’d need at least two characters.  Let’s try char(2) — or should we?

I say no.  I say that we should use a tinyint data type for stint.  Why?

I say it for a couple of reasons.  First, there’s the matter of ordering.  Let’s order the numerals 1-10.  How would you expect them to be ordered if they were stored as characters?  If you say 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, you’d be wrong.  If the column were defined as a character data type, the order would look like this: 1, 10, 2, 3, 4, 5, 6, 7, 8, 9.  When we’re sorting characters, “10” comes right after “1.”
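If you want to see this for yourself, here’s a quick throwaway demonstration (the temp table exists only for the example):

-- The same ten values, stored once as text and once as a number
create table #SortDemo (AsText varchar(2), AsNumber tinyint)

insert into #SortDemo (AsText, AsNumber)
values ('1',1),('2',2),('3',3),('4',4),('5',5),
	('6',6),('7',7),('8',8),('9',9),('10',10)

select AsText from #SortDemo order by AsText      -- 1, 10, 2, 3, 4, 5, 6, 7, 8, 9
select AsNumber from #SortDemo order by AsNumber  -- 1, 2, 3, 4, 5, 6, 7, 8, 9, 10

drop table #SortDemo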

Second, there’s the matter of storage.  A tinyint uses only 1 byte, as opposed to 4 bytes for an int, and the space used by char and varchar data types depends on their defined sizes (i.e. how much is allocated).  Also, tinyint accommodates values from 0 to 255, which is plenty for our purposes.

So, let’s start by redefining the data types for our composite primary key.  Because T-SQL in SQL Server won’t let us alter multiple columns at once, we need to run them as separate queries.  I wrote three statements like this:

alter table Batting
alter column playerID varchar(25)

alter table Batting
alter column yearID char(4)

alter table Batting
alter column stint tinyint

This should do the trick — or so I thought.  When I ran the queries, SSMS gave me error messages that looked like this:

Msg 5074, Level 16, State 1, Line 1
The object 'PK_playerID_yearID_stint' is dependent on column 'playerID'.
Msg 4922, Level 16, State 9, Line 1
ALTER TABLE ALTER COLUMN playerID failed because one or more objects 
access this column.

What’s going on here?  Keep in mind what we’re trying to change: three columns that are part of a primary key.  So before we can alter these columns, let’s drop our primary key.

alter table Batting
drop constraint PK_playerID_yearID_stint

Now when we run our alter table statements, we should be set.

Or are we?  I tried to recreate the primary key (I decided to rename the primary key, while I was at it).

alter table Batting
add constraint PK_Batting primary key (playerID, yearID, stint)

Instead, it gave me the following error:

Msg 8111, Level 16, State 1, Line 1
Cannot define PRIMARY KEY constraint on nullable column in table 'Batting'.
Msg 1750, Level 16, State 0, Line 1
Could not create constraint or index. See previous errors.

Now what’s going on?  Apparently, I’d forgotten to include NOT NULL when I altered the columns.  So this time, I’ll rewrite the statements like this and rerun them.

alter table Batting
alter column playerID varchar(25) not null

alter table Batting
alter column yearID char(4) not null

alter table Batting
alter column stint tinyint not null

After running these statements (which ran successfully), I retried my statement that added the primary key.  This time, it worked.

So, our primary key columns are now altered.  Let’s look at the rest of our table.

The next two columns are teamID (for “team ID”) and lgID (for “league ID”).  I checked the data in these two columns (I’ll save you the trouble of looking at the exact T-SQL I used).  I found that they are never NULL, and they are always 3 characters and 2 characters long, respectively.  They are also not unique.  For our table, I’ll redefine them as char(3) and char(2).
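(For the curious, a check along these lines, though not the exact T-SQL I used, confirms those observations; it returns no rows if everything holds:)

select teamID, lgID
from Batting
where teamID is null or len(teamID) <> 3
	or lgID is null or len(lgID) <> 2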

Now let’s look at the rest of the table.  For the benefit of anyone who doesn’t understand baseball, let’s write out what these columns are.

G: Games
AB: At Bats
R: Runs (Scored)
H: Hits
2B: Doubles
3B: Triples
HR: Home Runs
RBI: Runs Batted In
SB: Stolen Bases
CS: Caught Stealing
BB: Base On Balls (Walks)
SO: Strikeouts
IBB: Intentional Base On Balls (Walks)
HBP: Hit By Pitch
SH: Sacrifice Hits
SF: Sacrifice Flies
GIDP: Grounded Into Double Play

They’re all statistics, and they’re all integer values, so for now, we’ll make them all int columns.  Do we make them NOT NULL?  Not necessarily.  Why?  Because some of these statistics weren’t tracked in earlier eras, so for some records the values simply don’t exist.  Some of these numbers may very well be NULL, so we won’t make these columns NOT NULL.
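For a quick sanity check, something like this shows which seasons lack a given statistic (sacrifice flies, for example):

select yearID, count(*) as RowsWithNoSF
from Batting
where SF is null
group by yearID
order by yearID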

Here’s what I wrote for my T-SQL:

alter table Batting alter column [teamID] char(3) not null
alter table Batting alter column [lgID] char(2) not null
alter table Batting alter column [G] int
alter table Batting alter column [AB] int
alter table Batting alter column [R] int
alter table Batting alter column [H] int
alter table Batting alter column [2B] int
alter table Batting alter column [3B] int
alter table Batting alter column [HR] int
alter table Batting alter column [RBI] int
alter table Batting alter column [SB] int
alter table Batting alter column [CS] int
alter table Batting alter column [BB] int
alter table Batting alter column [SO] int
alter table Batting alter column [IBB] int
alter table Batting alter column [HBP] int
alter table Batting alter column [SH] int
alter table Batting alter column [SF] int
alter table Batting alter column [GIDP] int

The batch of statements, which took 20 seconds to run, completed successfully.

Note that I only dealt with a few data types in this article.  I didn’t talk about decimal data types or computed values.  (What if, for example, we wanted to keep track of batting averages?)  I’ll cover those in a later article.  Hopefully, at this point, you’ll have a better idea of what data types are and what they can do.

Setting up a primary key

When I set up my baseball database, I noticed something in the SQL code used to build the tables.  In the CREATE TABLE T-SQL code supplied with Sean Lahman’s baseball database, I saw code that looked like this:

CREATE TABLE [dbo].[Batting](
	[playerID] [varchar](50) NULL,
	[yearID] [varchar](50) NULL,
	[stint] [varchar](50) NULL,
	[teamID] [varchar](50) NULL,
	[lgID] [varchar](50) NULL,
	[G] [varchar](50) NULL,
	[AB] [varchar](50) NULL,
	[R] [varchar](50) NULL,
	[H] [varchar](50) NULL,
	[2B] [varchar](50) NULL,
	[3B] [varchar](50) NULL,
	[HR] [varchar](50) NULL,
	[RBI] [varchar](50) NULL,
	[SB] [varchar](50) NULL,
	[CS] [varchar](50) NULL,
	[BB] [varchar](50) NULL,
	[SO] [varchar](50) NULL,
	[IBB] [varchar](50) NULL,
	[HBP] [varchar](50) NULL,
	[SH] [varchar](50) NULL,
	[SF] [varchar](50) NULL,
	[GIDP] [varchar](50) NULL
) ON [PRIMARY]

Granted, there are a number of things about this code that strike me, but one thing hit me immediately: all the columns are nullable.  Since a primary key column can’t allow NULLs, that tells me that there are no primary keys defined.  I checked the other CREATE TABLE statements in the code as well; likewise, all of them had all their columns set to be nullable.

(There are some other things about the CREATE TABLE statements that struck me as well, but I’ll save those topics for another time.  For the purposes of this article, I’ll concentrate on primary keys.)

First, if you’re getting your feet wet with relational databases, this raises the question…

What is a primary key?

Simply put, a primary key uniquely identifies a row of data.  To illustrate this, here’s an example.

Let’s say we have a person named John Smith taking night classes at State University.  In his database file (which, for the purposes of this example, has his name and address), he has the following information:

Name | Address
John Smith | 123 Main Street

Now, let’s say his son, John Smith Jr. (who lives at the same address) enrolls at State University (for the purposes of this example, let’s assume the database doesn’t allow suffixes like “Jr.”).  He is entered in the database.  So now, you have this.

Name | Address
John Smith | 123 Main Street
John Smith | 123 Main Street

How do you know which record is which?  Answer: you don’t.  So, let’s add an ID column to our table.

ID | Name | Address
1 | John Smith | 123 Main Street
2 | John Smith | 123 Main Street

Now we know that we have two different records for two different people who are both named John Smith who live at 123 Main Street.

Granted, this is an overly simplistic example, but the point is that the ID column — which is always unique and is never NULL — uniquely identifies each row.
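In table terms, that ID column is exactly what a primary key constraint enforces.  Here’s a minimal sketch (the table and column names are made up for this example); the IDENTITY property simply tells SQL Server to assign the next number automatically whenever a row is inserted.

create table dbo.Student (
	ID      int          identity(1,1) not null,
	Name    varchar(100) not null,
	Address varchar(200) not null,
	constraint PK_Student primary key clustered (ID)
)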

So now that we know what a primary key is, let’s go back to our baseball database.

Creating primary keys for our baseball tables

We could create an ID column for our baseball tables, but I don’t want to do that here.  Instead, I want to make use of existing columns for our primary keys.  I’ll start with the MASTER table, which keeps track of ballplayers.

I ran a simple SELECT * FROM MASTER to get an idea of what’s in the table.  Running the query gave me this.

[Screenshot: results of SELECT * FROM MASTER]

I noted a couple of things.  First, I see a column called “playerID” which could be unique and used as a primary key.  Second, I see that the table contains 18,846 rows.  So can I use “playerID” for my primary key?  Let’s find out.

I ran two query statements.  The first gives me a count of unique entries I have for “playerID.”  The second tells me if any NULL values for “playerID” exist.
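(The two statements looked something like this:)

-- How many distinct playerID values are there?
select count(distinct playerID) from dbo.Master

-- Are there any NULL playerID values?
select playerID from dbo.Master where playerID is null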

Here’s what I got.

[Screenshot: results of the two playerID checks]

The first statement returns 18,846, which matches the number of rows in my earlier query, telling me that all values are unique.  The second statement doesn’t return anything, which tells me that there are no NULL values for “playerID.”  Therefore, the “playerID” column can be used as the primary key for this table.

So how do we make this column the primary key?  There are a couple of ways to do it in SQL Server.  For this, I decided to use T-SQL.

alter table dbo.Master
add constraint PK_playerID primary key clustered (playerID)

When I ran this, it returned an error.

Msg 8111, Level 16, State 1, Line 1
Cannot define PRIMARY KEY constraint on nullable column in table 'Master'.
Msg 1750, Level 16, State 0, Line 1
Could not create constraint or index. See previous errors.

Okay.  Why did this appear?  Keep in mind that a primary key cannot be null.  When our code created our tables, it made them all nullable.  So we need to add a NOT NULL constraint before we create the primary key.

alter table dbo.Master
alter column playerID varchar(50) not null

Running this statement sets the column to not allow NULL values.  Once I ran this, I re-ran the command to create the primary key, and it completed successfully.  So I now have a primary key on my Master table.

What if I don’t have a single unique column?

Let’s take a look at the Batting table.

[Screenshot: the Batting table]

So, what should we use for our primary key?  A batter can play multiple seasons, and can play for multiple teams in a single season.  Chances are that you won’t be able to make use of a single column.  However, we can make use of multiple columns for our primary key (this is called a “composite” primary key).

We find that the Batting table contains 101,332 rows, so we need a combination of columns that will give us 101,332 unique results.  I figured that playerID, yearID, and stint should give us those results.

I ran the following query.

select distinct playerID, yearID, stint from Batting

This returned 101,332 results.  Indeed, I seem to have identified my composite primary key.
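(If you’d rather see the total row count and the distinct-combination count side by side, something like this works, too:)

select
	(select count(*) from Batting) as TotalRows,
	(select count(*)
	 from (select distinct playerID, yearID, stint from Batting) as d) as DistinctCombinations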

My first task is to set these three columns to not accept NULL values (otherwise, we’d get the same error as earlier).

alter table dbo.Batting alter column playerID varchar(50) not null
alter table dbo.Batting alter column yearID varchar(50) not null
alter table dbo.Batting alter column stint varchar(50) not null

Once the NOT NULL constraint is in place, we can create our composite primary key.  Note that in my T-SQL statement below, I provide three columns (playerID, yearID, stint), not just one.

alter table dbo.Batting 
add constraint PK_playerID_yearID_stint 
primary key clustered (playerID, yearID, stint)

The query ran successfully.  We now have a composite primary key on the Batting table that uses three columns.

Hopefully, this gives you a good introduction to primary keys on database tables.  In upcoming articles, I’ll cover other aspects such as foreign keys, indexes, and data types as we continue to further refine and improve our baseball database.

SQL Saturday #513: two week notice

It’s coming quickly!  On July 30 (that’s two weeks from this Saturday), the Capital Area SQL Server User Group will be hosting SQL Saturday on the UAlbany campus!

I will be speaking!  I will be giving my presentation, “Whacha just say?  Talking technology to non-technical people.”

It’s going to be a great day of sessions (free training!) and networking.  If you’re interested in SQL Server, come check us out!

Hope to see you there!

Installing a Baseball Database

Earlier, I wrote about how I installed SQL Server 2016 on my machine.  In this article, I talk about creating and populating a baseball database.

I am a huge baseball fan.  I started following the New York Yankees when I was about 12.  (In an act that is likely anathema to religious fanatics, I married a Boston Red Sox fan; during baseball season, one of us — metaphorically speaking — ends up sleeping on the couch!)  I have been known to schedule vacations around Major League Baseball schedules.  I believe that Cooperstown is Mecca.

Why, you might ask, is this relevant to installing SQL Server?  Because I believe that one of the best ways to learn something is to have fun while you’re doing it.  So in this implementation, instead of installing NorthWind Traders or AdventureWorks (the standard practice databases used by SQL Server enthusiasts everywhere), I will install a copy of Sean Lahman’s baseball database.  (At the time of this article, the most recent version goes through the 2015 baseball season.)

I should note that this isn’t to say NorthWind or AdventureWorks isn’t fun; rather, baseball is something about which I’m passionate, and is more likely to keep my attention.  At some point, I’ll likely install NorthWind or AdventureWorks as well, as many SQL references and guides refer to them.

I downloaded the most recent SQL version from Sean Lahman’s website, which left me with a .ZIP file.  Inside it, I found an MSSQLMASTER.SQL file (you’ll find it in \mysql core\core\), which I extracted and opened in SSMS.

The first thing I noticed was that it created a database called “stats.”  I did not want to use this as my database name, so I first replaced all instances of “stats” with “baseball”.  Once this was done, I ran the database build query.

Unfortunately, it would not be that easy.  The query file is very large.  When I ran the query, I received the following message.

[Screenshot: SSMS “insufficient memory” error message]

Okay.  It has a lot of rows to insert (the data includes over a hundred years of statistics, after all).  Instead, I opted for just creating the database and tables.  However, even that proved to be problematic.

[Screenshot: errors from the database build script]

“Okay,” I said to myself, “what’s going on now?”

First, I had a permission issue to deal with (this doesn’t appear in the above log; I neglected to include that message).  I looked for the C:\Program Files\Microsoft SQL Server\MSSQL13.MSSQLSERVER\MSSQL\DATA folder.  When I tried to open it, I got a message saying that I had insufficient permissions to do so.  “What?” I said, “this is my own machine, and I’m the administrator!”  However, it also prompted me, asking whether I wanted to open it anyway.  I answered yes.  Once I did, I tried running the query again, and the permission error seemed to disappear.

However, that’s when I got the errors shown above.

That’s when I noticed that the database file path in the script references MSSQL11, while my actual path includes MSSQL13.  Okay.  I changed MSSQL11 to MSSQL13 in the path and reran the query.  This time, it ran successfully.

I did notice one thing about the table structure.  All columns were defined as nullable, which indicated to me that there were no unique indexes or primary keys defined.  All tables would be heaps.  That’s something I’d need to fix, but that was likely another project for another time.  For now, I was more concerned about building the database infrastructure and inserting the data.

Once my database and tables were built, I commented out the CREATE DATABASE and CREATE TABLE lines from the code, leaving only the INSERT commands.  Just for grins, I tried running the query again, and once again got the “insufficient memory” message.  Okay, no dice.  How was I going to get around this?  I suppose I could’ve selected and run the INSERT queries piecemeal, but that was going to be a long and painful process.

I ran a Google search, and came across my old friend, StackOverflow, where I found this entry.  So I tried the command line, substituting my database name and SQL filename (including the path).

sqlcmd -d baseball -i Downloads\mssqlmaster.sql

Here’s what I got:

sqlcmd: Error: Connection failure. SQL Native Client is not 
installed correctly. To correct this, run SQL Server Setup.

Interesting.  I went to look up SQL Server 2016 Native Client, and found this.  So SQL Server 2016 does not include Native Client.  So, I tried to download and install Native Client.

Of course, nothing ever goes as planned.  I got this message.

[Screenshot: SQL Server Native Client installation error]

So, I went into my Settings to check my Native Client.  It told me that 2012 Native Client was already installed.

After Google-searching my error messages, I realized that I had SQL Server 2008 Express installed, and my SQLCMD was likely getting confused between versions.  So I went ahead and uninstalled SQL Server 2008, and tried my SQLCMD command line again.  This time, SQLCMD did not give me any errors, and my command prompt started scrolling with the familiar “(1 rows affected)” message that appears when SQL Server inserts rows into tables.  The script finished without any problem.

Once the script finished, I ran a query to see if my data had made it into the tables.
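(The exact query is in the screenshot below, but anything along these lines does the job:)

-- A quick sanity check: did the Batting data actually load?
select count(*) as BattingRows from dbo.Batting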

[Screenshot: query results from the Batting table]

So it appears that we are good to go!  Our baseball database is populated!

I now have a baseball database with which I can play to my heart’s content.  I’ll be using this to practice my SQL skills — and ‘blog about my adventures (or misadventures, as the case may be) as I do so.