Speedy Answer – How to find what is using PORT 80 on your PC

Open an elevated Command Prompt (the -b switch needs administrator rights) and type:
netstat -onb | findstr :80

Note down the number at the end of each TCP line; this is the Process ID (PID).

  • Open Windows Task Manager
  • Click on the Processes tab
  • On the View menu, choose Select Columns… and tick PID (Process Identifier)

Now you can find which application is listening on port 80 by matching up the PID; sort by the PID column, it makes it easier!

C:\Users\leonv.ARCDOM>netstat -onb | findstr :80
TCP    ESTABLISHED    4620
TCP    ESTABLISHED    6948
TCP    ESTABLISHED    6948
TCP    TIME_WAIT      0
TCP    ESTABLISHED    6948
TCP    ESTABLISHED    6948
TCP    ESTABLISHED    6948
TCP    TIME_WAIT      0

4620 is Dropbox
6948 is Firefox

A Difficult Date

There is a datatype that encompasses a whole set of functionality, rules and formatting needs that other datatypes just do not have or need.
It’s a real diva of a datatype that likes to be handled with care: the DATETIME!
You can tell by the way the help file is written that “she” is a special lady; where INT and VARCHAR only need about half a page of trivial stuff, DATETIME’s page is a broadsheet full of special instructions!

This post is mostly about how to filter on the DATETIME data you have stored, using a WHERE clause, for example:

-- e.g. 1
DECLARE @dtStart DATETIME, @dtEnd DATETIME
SET @dtStart = '1 jan 2012'
SET @dtEnd = '1 jan 2013'
SELECT * FROM SALES -- SALES is an illustrative table name
 WHERE SALES_DATE BETWEEN @dtStart AND @dtEnd


Familiar? SALES_DATE is a full DATETIME field, i.e. it holds the date and the exact time of the transaction!

So the code above shows you all the sales data for the year 2012, and it’s easy enough to see how it works.

What about when we want to see the sales data just for the 1st of January 2013?
Our code is built to accept a date period, from a Start to an End. We must pass in a date range, so will it be like this?

-- e.g. 2
SET @dtStart = '1 jan 2013'
SET @dtEnd = '1 jan 2013'


You may think these are the 1st of January sales; they are not!
These are the sales that occurred at ‘1 Jan 2013 00:00:00.000’, that exact time! You won’t get much in return!
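You can prove this midnight-only behaviour in isolation, assuming the underlying query uses an inclusive range like BETWEEN (which is exactly what the behaviour above implies); a minimal sketch with the range check pulled out of any table:

```sql
DECLARE @dtStart DATETIME, @dtEnd DATETIME
SET @dtStart = '1 jan 2013'
SET @dtEnd   = '1 jan 2013'

-- only a value of exactly midnight falls inside the range
SELECT CASE WHEN CONVERT(DATETIME, '1 jan 2013 00:00:00.000')
            BETWEEN @dtStart AND @dtEnd THEN 'in' ELSE 'out' END  -- 'in'
SELECT CASE WHEN CONVERT(DATETIME, '1 jan 2013 09:30:00.000')
            BETWEEN @dtStart AND @dtEnd THEN 'in' ELSE 'out' END  -- 'out'
```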

So how do you code around that without messing with the results the query returns when the date range is a longer one?
Remember, we design code that needs to work with any period passed in; imagine it in a stored procedure.

So what do you do? Do you manipulate @dtEnd to be +24h or +1 day?

-- e.g. 3
SET @dtStart = '1 jan 2013'
SET @dtEnd = '1 jan 2013 23:59:59'
-- or --
SET @dtEnd = '2 jan 2013'


You can see that the above is almost alchemy; it clearly shows that your model is not quite there yet if you need to manipulate parameters at that level!
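That said, if you do stay with the raw DATETIME column, the least alchemical of the hacks is the half-open range: keep @dtEnd as a plain date and compare with < against the day after it, so there is no 23:59:59 guesswork and no missed milliseconds. A sketch, with an illustrative SALES table:

```sql
-- half-open range: all of 1 Jan 2013, nothing more, nothing less
DECLARE @dtStart DATETIME, @dtEnd DATETIME
SET @dtStart = '1 jan 2013'
SET @dtEnd   = '1 jan 2013'

SELECT *
FROM SALES
WHERE SALES_DATE >= @dtStart
  AND SALES_DATE < DATEADD(DAY, 1, @dtEnd)  -- strictly before 2 Jan
```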

Improve the Model, Don’t Hack!
I believe that the best way is to avoid filtering on the [SALE_DATE] field: add a new column, [SALE_DATE_SHORT], as a date-only field, so all its times are ’00:00:00.000′, and use [SALE_DATE_SHORT] in the WHERE clause:

-- e.g. 4
SET @dtStart = '1 jan 2013'
SET @dtEnd = '1 jan 2013'
-- ... WHERE SALE_DATE_SHORT BETWEEN @dtStart AND @dtEnd


So what’s the shortest way to convert the DATETIME to a zero-hour datetime? I’d use a function, but for a lazy, quick-and-dirty way try LEFT(SALES_DATE, 11), and test that it likes your locale!

-- e.g. 5
DECLARE @SALE_DATE DATETIME
SET @SALE_DATE = '1 jan 2013 17:35:51.000'

SELECT 'converts OK'
 WHERE convert(datetime, left(@SALE_DATE, 11)) = '1 jan 2013'
-- the WHERE is in this script to prove the value converts to date OK!
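If you’d rather not lean on string formats at all, a locale-independent way to zero the time on a DATETIME is the classic DATEADD/DATEDIFF round-trip; a sketch:

```sql
DECLARE @SALE_DATE DATETIME
SET @SALE_DATE = '1 jan 2013 17:35:51.000'

-- whole days since day 0 (1 Jan 1900), added back onto day 0: time stripped
SELECT DATEADD(DAY, DATEDIFF(DAY, 0, @SALE_DATE), 0)  -- 2013-01-01 00:00:00.000
```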

So five e.g.’s later you’ll probably think: big deal, the above is simple stuff. It is easy to miss though, and considering that these parameters are not SET locally but passed into a stored proc from an SSRS call, the plot thickens! You have probably seen the calendar date-pick control passing in the zero-hour datetime.
Have you seen what happens when the calendar view is initialised with an hh:mm:ss time value? Ugly!

So what’s your decision now about datetime fields? Will you implement a _SHORT version of each one just to make reporting easier? Would you use a computed column for that?
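For what it’s worth, a persisted computed column gets you the _SHORT field without any code maintaining it; a sketch against an illustrative SALES table, using the DATE type that arrived with SQL 2008:

```sql
-- date-only shadow of SALE_DATE, calculated and stored by the engine
ALTER TABLE SALES
ADD SALE_DATE_SHORT AS CONVERT(DATE, SALE_DATE) PERSISTED

-- deterministic + persisted means it can be indexed for the reports
CREATE INDEX IX_SALES_SALE_DATE_SHORT ON SALES (SALE_DATE_SHORT)
```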

More on the Datetime dilemmas on a following post!

ADD COLUMN, a Quick & Dirty tip

Adding Columns the quick and dirty way brings an added bonus!

So you have an amazing table; it’s a Main Table and it has a set of trailing columns, for example some mini-auditing: CREATION_DATETIME, CREATION_USERID, MODIFICATION_DATETIME, MODIFICATION_USERID.

Your Main Table is lovely, standardised, and it looks grand.
Then a new column needs to go in and you don’t want to see it at the end of the table, after the tail columns! Something tells you it would be great to put the new column next to existing ones that hold similar data. For example, your table has Unit_Price and now a Retail_Price comes in; your gut feeling says keep the price columns together.

Embrace the lazy ADD COLUMN, it is your friend!


Consider this: to add a column at the end of a table is not a big task for SQL Server. Just add it with one line of code and it’s done.
On the other hand, to add it in the middle of a table you basically need to make a new table, copy all the data across, rename both tables and drop the old one. If your table is massive this takes a lot of time, twice the disk space the table uses now for the copy, and considerable processing power; you probably need to run it out of hours, or at least have a plan in place just in case!
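The lazy version really is one line; a sketch with the price example from earlier (names illustrative), where adding the column as NULLable keeps it a quick metadata change:

```sql
-- new column lands at the end of the table: one line, no data rewrite
ALTER TABLE Main_Table
ADD Retail_Price DECIMAL(10, 2) NULL
```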

The Lazy ADD COLUMN method also has a very important benefit!
The new “out of place” column, sticking out at the end of a previously “squared out” table, denotes that something new happened to your schema! It almost screams “stage 2” or “something got developed here!”, and once you make friends with this concept you will see that tables stand out; the ones with a heavy post-initial-design look and feel show you where the development and the action is!

One thing that’s worth mentioning: when we add a new column to a table, the Stored Procs won’t care, the result sets won’t care, the app won’t care; it is invisible to everybody UNLESS you use SELECT * FROM in your released code. Naughty!


Best use of a Tooltip Ever!

One thing that got me when I migrated SSRS to SQL 2008 (a migrating species, we SQL developers) was the free and easy way the charts allow you to click anywhere on their lines! When you think you are clicking on a point of the line, you are actually clicking on some pixel very near that point!

Let me explain: I see a marker above year 2009 and click it; chances are I am actually clicking on the 2010 part of the line, or worse!

So above, I click on the 2009 point and the tooltip says 2009, but a few pixels to the right, 2013! Why?

Why, you ask! Well, the answer is simple: 2013 is the next point with data drawn on the graph. 2010 to 2012 have no data, so the line runs straight from 2009 to 2013. You might want to educate your users on why this is; test data does things like that, and real data might also get sparse like this. You can’t get away with not knowing your data.

This can also escalate to weird results on a click-through report, when you pass the date across as a parameter, thinking it’s one date when in fact it’s the other!

Did you notice the highlighted tooltip? Using the tooltip to show your users exactly where they are clicking seems to be the most direct solution to this problem, but you still need to explain the data, because even the tooltip can be seen as wrong until it is explained!

By the way, I am referring to the Tooltip: property you express under the Series Properties, not the one of the whole chart area.

Once you are confident that what you see is what you should get, that the underlying data is true to the view then you might even think of another way to remove this problematic representation of our lovely data.
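One such way, assuming the chart is fed by a year-grouped dataset, is to pad the query so every year gets a point: build the year axis first and LEFT JOIN the sales onto it, so 2010 to 2012 plot as zero instead of a straight connecting segment. A sketch, with illustrative table and column names:

```sql
-- generate the year axis, then hang the aggregated sales off it
;WITH Years AS (
    SELECT 2009 AS Yr
    UNION ALL
    SELECT Yr + 1 FROM Years WHERE Yr < 2013
)
SELECT y.Yr,
       ISNULL(SUM(s.AMOUNT), 0) AS Total   -- empty years chart as 0
FROM Years y
LEFT JOIN SALES s ON YEAR(s.SALES_DATE) = y.Yr
GROUP BY y.Yr
ORDER BY y.Yr
```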

It all starts with a SELECT

In programming we write commands and we know that if we write plenty of them, one after the other, we have a program! We issue commands and we expect the computer to execute them. When the computer does something we do not expect, well then it must be a bug! We know what we asked for and we know what we expect to see happening.

When it comes to databases the story is somewhat different. We do not issue commands. Instead we construct queries. We query the database to SELECT something for us! If we are specific enough we will get it but it might be something we do not expect!

So how do we know that what we get in return is correct?

Let’s frame this a bit more.

SET Lecture_Mode ON; -- Sorry!

We are at a point where we can access our stored data. Furthermore, let’s assume that we are in a well-established environment, working on a live DB that has been running for years, where the data is established and trusted. We also have some knowledge of the information the database holds (business knowledge).

So we just focus on the Query, we are constructing a query.

Zen Moment! We take it easy and we write one simple query statement, one simple SELECT to query the table(s) and get results. Then we refine this query little by little until the results are more and more accurate; we build it up step by step.

The initial results we see are very important; if we kept it simple, we know we have reasonable data back. Building it up might involve filtering, joining or aggregating the results, but with each operation we must be able to detect whether our reasonable data stays consistent or gets somehow corrupted.

If, during the sequence of building up the query, we see consistency in the results, then we are on the right path.
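In practice the build-up might look like this (table and column names illustrative); each step gets run and eyeballed before the next refinement goes on top:

```sql
-- step 1: the simple start, just look at the data
SELECT TOP 100 * FROM SALES

-- step 2: narrow the columns, add a filter
SELECT SALE_ID, SALES_DATE, AMOUNT
FROM SALES
WHERE SALES_DATE >= '1 jan 2012'

-- step 3: aggregate, checking the totals stay consistent with step 2
SELECT YEAR(SALES_DATE) AS Yr, SUM(AMOUNT) AS Total
FROM SALES
WHERE SALES_DATE >= '1 jan 2012'
GROUP BY YEAR(SALES_DATE)
```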

SET Lecture_Mode OFF;

Yes, it all starts with a SELECT, but that’s only half the truth… It also ends with a SELECT…

You can tell I’m a big fan of the SELECT statement, that’s why I chose it to be the first entry of my blog.

What’s next? Here’s a hint: to transform a SELECT into an UPDATE or a DELETE is only a matter of “conviction”…
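A small taste of that conviction (names illustrative): prove the rows with a SELECT first, then swap only the opening line, leaving the WHERE untouched:

```sql
-- first, convince yourself these are the right rows
SELECT * FROM SALES
WHERE SALES_DATE < '1 jan 2012'

-- then, same WHERE clause, different opening line
DELETE FROM SALES
WHERE SALES_DATE < '1 jan 2012'
```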