
So, this is the second article I've written using the TPC-H benchmark (part one here). Recently, Amazon announced that their 'fast, fully managed, petabyte-scale data warehouse service', Redshift, was available for public consumption. Having finally had some time to play, I thought I'd take it for a spin.

I was able to get a single node cluster up and running pretty quickly, and installed their sample data set easily. You can read how to go about this in their Getting Started Guide.

The initial issue I had with the sample data set was, well, it was pretty small. OK, it got the concepts across, but I wanted more. I wanted to get an idea of performance and how it compared across the different cluster sizes. I wanted more data.

So, I decided to dump my set of test data (1GB TPC-H; see part 1 for how to create this) into it, and here's how I did it.

Getting Started

I’m going to assume that you’ve made it through steps 1-4 of the Getting Started guide above (which covers Prerequisites, Launching the Cluster, Security setup and Connecting to the cluster).

Shown below are the statements used to create the TPC-H tables within the Redshift environment. You'll need to create a connection to the Redshift environment, use SQL Workbench to connect to it, and copy and paste these into the SQL window. Note the trailing 'skip' column on each table: the .tbl files that dbgen produces end every row with a '|' delimiter, and the extra column gives that trailing field somewhere to go.

CREATE TABLE customer(
C_CustKey int ,
C_Name varchar(64) ,
C_Address varchar(64) ,
C_NationKey int ,
C_Phone varchar(64) ,
C_AcctBal decimal(13, 2) ,
C_MktSegment varchar(64) ,
C_Comment varchar(120) ,
skip varchar(64)
);

CREATE TABLE lineitem(
L_OrderKey int ,
L_PartKey int ,
L_SuppKey int ,
L_LineNumber int ,
L_Quantity int ,
L_ExtendedPrice decimal(13, 2) ,
L_Discount decimal(13, 2) ,
L_Tax decimal(13, 2) ,
L_ReturnFlag varchar(64) ,
L_LineStatus varchar(64) ,
L_ShipDate datetime ,
L_CommitDate datetime ,
L_ReceiptDate datetime ,
L_ShipInstruct varchar(64) ,
L_ShipMode varchar(64) ,
L_Comment varchar(64) ,
skip varchar(64)
);
CREATE TABLE nation(
N_NationKey int ,
N_Name varchar(64) ,
N_RegionKey int ,
N_Comment varchar(160) ,
skip varchar(64)
);
CREATE TABLE orders(
O_OrderKey int ,
O_CustKey int ,
O_OrderStatus varchar(64) ,
O_TotalPrice decimal(13, 2) ,
O_OrderDate datetime ,
O_OrderPriority varchar(15) ,
O_Clerk varchar(64) ,
O_ShipPriority int ,
O_Comment varchar(80) ,
skip varchar(64)
);

CREATE TABLE part(
P_PartKey int ,
P_Name varchar(64) ,
P_Mfgr varchar(64) ,
P_Brand varchar(64) ,
P_Type varchar(64) ,
P_Size int ,
P_Container varchar(64) ,
P_RetailPrice decimal(13, 2) ,
P_Comment varchar(64) ,
skip varchar(64)
);
CREATE TABLE partsupp(
PS_PartKey int ,
PS_SuppKey int ,
PS_AvailQty int ,
PS_SupplyCost decimal(13, 2) ,
PS_Comment varchar(200) ,
skip varchar(64)
);
CREATE TABLE region(
R_RegionKey int ,
R_Name varchar(64) ,
R_Comment varchar(160) ,
skip varchar(64)
);
CREATE TABLE supplier(
S_SuppKey int ,
S_Name varchar(64) ,
S_Address varchar(64) ,
S_NationKey int ,
S_Phone varchar(18) ,
S_AcctBal decimal(13, 2) ,
S_Comment varchar(105) ,
skip varchar(64)
);

Next up, we need to get some data into it. I’ve had a copy of the TPC-H files sitting on my S3 account for a while, so I was hoping to just point Redshift at that (just like the sample code does). This was where I ran into my first issue. There may be an easier way, but I wanted to do it quickly. The problem was that I couldn’t get the S3 URL syntax to work, and this appears to be because my S3 Buckets are sitting in Ireland (EU). The S3 syntax looks to only work if you are using ‘US Standard’ as your S3 storage. I could be wrong, but I’m not an S3 expert.

Anyway, having created an S3 bucket in US Standard, and transferred the files over, I used the following to copy the contents from these files into the tables created in Redshift.

copy customer from 's3://oldnick-tpch/customer.tbl' CREDENTIALS 'aws_access_key_id=<Your-Access-Key-ID>;aws_secret_access_key=<Your-Secret-Access-Key>' delimiter '|';
copy orders from 's3://oldnick-tpch/orders.tbl' CREDENTIALS 'aws_access_key_id=<Your-Access-Key-ID>;aws_secret_access_key=<Your-Secret-Access-Key>' delimiter '|';
copy lineitem from 's3://oldnick-tpch/lineitem.tbl' CREDENTIALS 'aws_access_key_id=<Your-Access-Key-ID>;aws_secret_access_key=<Your-Secret-Access-Key>' delimiter '|';
copy nation from 's3://oldnick-tpch/nation.tbl' CREDENTIALS 'aws_access_key_id=<Your-Access-Key-ID>;aws_secret_access_key=<Your-Secret-Access-Key>' delimiter '|';
copy part from 's3://oldnick-tpch/part.tbl' CREDENTIALS 'aws_access_key_id=<Your-Access-Key-ID>;aws_secret_access_key=<Your-Secret-Access-Key>' delimiter '|';
copy partsupp from 's3://oldnick-tpch/partsupp.tbl' CREDENTIALS 'aws_access_key_id=<Your-Access-Key-ID>;aws_secret_access_key=<Your-Secret-Access-Key>' delimiter '|';
copy region from 's3://oldnick-tpch/region.tbl' CREDENTIALS 'aws_access_key_id=<Your-Access-Key-ID>;aws_secret_access_key=<Your-Secret-Access-Key>' delimiter '|';
copy supplier from 's3://oldnick-tpch/supplier.tbl' CREDENTIALS 'aws_access_key_id=<Your-Access-Key-ID>;aws_secret_access_key=<Your-Secret-Access-Key>' delimiter '|';

You'll need to replace <Your-Access-Key-ID> with your Amazon access key and <Your-Secret-Access-Key> with your secret key, though I bet you'd guessed that. Also, note that it's possible to load from a gzipped file by adding the gzip parameter to the copy statement, though I didn't discover this until after the load.
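For reference, a gzipped load would look something like the statement below; the .tbl.gz object name is just an assumption, so substitute whatever you've actually uploaded.

copy lineitem from 's3://oldnick-tpch/lineitem.tbl.gz' CREDENTIALS 'aws_access_key_id=<Your-Access-Key-ID>;aws_secret_access_key=<Your-Secret-Access-Key>' delimiter '|' gzip;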

After waiting a little while, though not too long, for Redshift to bring the data in from S3, you can use these queries to check the counts.

select count(*) from customer;
select count(*) from orders;
select count(*) from lineitem;
select count(*) from nation;
select count(*) from part;
select count(*) from partsupp;
select count(*) from region;
select count(*) from supplier;

Next, the Developer Guide section covering loading data into Redshift says you should run the following statements after loading. Analyze updates the table statistics, and Vacuum then re-sorts the rows and reclaims storage space.

analyze;
vacuum;
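Both commands can also be run against a single table if you only want to touch what you've just loaded; a minimal sketch, using the lineitem table created above:

analyze lineitem;
vacuum lineitem;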

So, there we go: we've now got a Redshift cluster loaded with the TPC-H tables. Next, I thought I'd do a basic test to compare results.

My test query for this is shown below, and just does some aggregation against the lineitem table (6 million or so rows).

select l_returnflag, l_linestatus, sum(l_quantity) as sum_qty,
sum(l_extendedprice) as sum_base_price, sum(l_extendedprice*(1-l_discount)) as sum_disc_price,
sum(l_extendedprice*(1-l_discount)*(1+l_tax)) as sum_charge, avg(l_quantity) as avg_qty,
avg(l_extendedprice) as avg_price, avg(l_discount) as avg_disc,  count(*) as count_order
from lineitem
group by l_returnflag, l_linestatus
order by l_returnflag,l_linestatus;

So I ran this on my laptop (i7, 12GB RAM, 512GB SSD) a couple of times: once as a straight query, and once with a columnstore index in place, in each case both cold (after a restart) and warm (second run).
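For anyone recreating the laptop test, the columnstore variant was along these lines (a sketch only: the index name and column list are my assumptions, and bear in mind that in SQL Server 2012 a nonclustered columnstore index makes the table read-only until it's dropped):

CREATE NONCLUSTERED COLUMNSTORE INDEX ncci_lineitem
ON dbo.lineitem (l_returnflag, l_linestatus, l_quantity, l_extendedprice, l_discount, l_tax);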

The SQL Server times shown are taken from SET STATISTICS TIME ON output.
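In other words, each run had something like this wrapped around the query above:

SET STATISTICS TIME ON;
-- the aggregation query from above goes here
SET STATISTICS TIME OFF;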

Analysing Redshift was interesting. Since I've not done much with PostgreSQL, I had a look through the Redshift documentation to see what was going on. I found an interesting page showing how to determine whether a query is running from disk. Working through this, I saw that once I had the query id (from the first query below), I could get the query details, including memory used and times.

Getting the Query Id

select query, elapsed, substring
from svl_qlog
order by query desc
limit 5;

select *
from svl_query_summary
where query = 5931;
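To focus on the memory and on-disk figures, a narrower version of that query might look like this (column names as documented for svl_query_summary; 5931 is just the query id from above):

select query, seg, step, label, rows, workmem, is_diskbased
from svl_query_summary
where query = 5931
order by seg, step;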


So, having seen those figures, I had a look at the cluster details.

Initially I was using 1 node, so I went up a notch, to a 2 node cluster of the more powerful nodes.

Single Node Testing

Multi Node Testing

The Results

Time to Return
Laptop – SQL 2012 (Cold): 24515ms CPU time, 6475ms elapsed
Laptop – SQL 2012 (Warm): 24016ms CPU time, 6060ms elapsed
Laptop – SQL 2012 Columnstore (Cold): 531ms CPU time, 258ms elapsed
Laptop – SQL 2012 Columnstore (Warm): 389ms CPU time, 112ms elapsed
Redshift (1 node cluster): 1.24 sec
Redshift (2 node cluster): 1.4 sec

So, obviously, I’m not stretching the performance of the Redshift cluster.

Part 2b of this will cover similar tests, though I’ll be doing it with a 100GB TPC-H test data set.

Keep ’em peeled for the next post!

TestDay

Unit Testing is a methodology that we should all embrace and understand.

It’s not just for Programmers

Unit Testing Frameworks are available for almost every Platform, from ABAP to XSLT. So if you are a hard-core coder, a SQL DBA, a Web Developer, or a Sys Admin, you can join in!

While the use of Unit Testing is getting more common, it’s not as common as it could be.

So I’d like to propose TEST DAY 2012!!!

How do you do Testing?

Why not share how you do Unit Testing, why you started it, what your experiences have been, or something else related to Unit Testing?

We can all learn from each other's experiences.

What do you need to do?

  • Write a blog post and share it with the internet, so everyone can learn from your experiences.
  • Your blog post must be published between Wednesday, 12th December 2012 00:00:00 GMT and Thursday, 13th December 2012 00:00:00 GMT.
  • If you are on Twitter please tweet your blog using the #TestDay2012 hashtag. I can be contacted there as @nhaslam, in case you have questions or problems with comments/trackback.
  • Either include the TestDay2012 picture (above) and hyperlink it back to this post, or have a link back to this post.
  • If you don’t see your post in trackbacks, add the link to the comments below.

What will I do?

A week or so later (depending on the number of posts), I’ll do a summary post and cover all the submissions.

I look forward to reading your posts!

Thanks to everyone who posted on T-SQL Tuesday this month. Below is a summary of the posts, so have a look through if you've not had a chance yet.

As an aside, if you’ve not watched the film yet, it’s available here (Google | AmazonUK | AmazonUS)

There were some really interesting, and terrifying posts here, so pull up a chair, grab some whiskey, turn the lights down, and have a read through.

Don't forget to keep an eye out for the next TSQL2sDay post, in a couple of weeks' time.

The Posts!

Rob Farley – When someone deletes a shared data source in SSRS

Thomas Rushton – SQL Wildcards

Rick Krueger – Nightmare on TSQL Street – The Case of the Missing Cache

Matthew Velic – Soylent Growth

Ted Krueger – Horrify Me!

Ken Watson – Soylent Green

Chris Shaw – Are you kidding me?

Thomas Rushton (Again!) – Soylent Inbox

Jason Brimhall – High Energy Plankton

Jes Borland – Soylent Green SQL Server

Bob Pusateri – A Horror Story

Steve Jones (Voice of the DBA) – Soylent Green

Chris Yates – Soylent Green

Jeffrey Verheul – Soylent Green

 


Welcome to TSql2sday issue #35, this time hosted by me…

It's a bit last minute, as I stepped in to help Adam out, so bear with me. As always, thanks to Adam for starting this off. I've posted a few articles in previous rounds, and have found other people's posts really interesting; I hope this one follows in the same way.

Over the past couple of days I've been attending a training course in Paris, and one evening, to relax, I watched 'Soylent Green', a classic science fiction film. If you've not seen it, I recommend you go and watch it…

So, what I’d like to know is, what is your most horrifying discovery from your work with SQL Server?

We all like to read stories of other people's misfortunes, and in some ways they help to make us better people, by letting us learn from them. Hopefully there is nothing as bad as Charlton Heston's discovery, but your story may be horrifying in its own way.

A couple of extra thoughts for motivational thinking…

Soylent Brown – You did a post, Great Job!!

Soylent Orange – You did a post, it made me wince!

Soylent Green  – You did a post, it made me wince, and it included some T-SQL.

Do you have the words straight?

Here are the rules as usual. If you would like to participate in T-SQL Tuesday, please be sure to follow them:

  • Your blog post must be published between Tuesday, October 9th 2012 00:00:00 GMT and Wednesday, October 10th 2012 00:00:00 GMT.
  • Include the T-SQL Tuesday logo (above) and hyperlink it back to this post.
  • If you don’t see your post in trackbacks, add the link to the comments below.
  • If you are on Twitter please tweet your blog using the #TSQL2sDay hashtag. I can be contacted there as @nhaslam, in case you have questions or problems with comments/trackback.

Thank you all for participating, and special thanks to Adam Machanic (b|t) for all his help and for continuing this series!

Thanks for posting, and I’ll have a follow-up post listing all the contributions as soon as I can.

Over the past few evenings, I've been playing with SQLIO, to get an idea of how an SSD compares to a couple of servers (one quite old, one a bit newer) that I have access to.

SQLIO can be used to performance-test an I/O subsystem prior to deploying SQL Server onto it. Despite the name, it doesn't actually do anything SQL-specific; it's just I/O.

If you haven’t looked at SQLIO, I would highly recommend looking at these websites:

http://www.sqlskills.com/BLOGS/PAUL/post/Cool-free-tool-to-parse-and-analyze-SQLIO-results.aspx

http://tools.davidklee.net/sqlio/sqlio-analyzer.aspx

The SQLIO Analyser, created by David Klee, is amazing. It allows you to run the SQLIO package (a preconfigured one is available on the site) and submit the results. It then generates an Excel file that contains various metrics. It’s nice!

Running on my Laptop…

Having run the pre-built package on my laptop, I got the following metrics out of it. As you can see, it's an SSD (a Crucial M4), and pretty nippy.


There are some interesting metrics here, and one of the key benefits of an SSD is that, regardless of what you are doing, the average latency stays low. For these tests, I was getting:

Avg. Metrics | Sequential Read | Random Read | Sequential Write | Random Write
Latency (ms) | 19.28 | 18.38 | 23.21 | 51.51
Avg IOPs | 3777 | 3493 | 2930 | 1340
MB/s | 236.07 | 218.3 | 183 | 83.7

Running on an older server

So, running this on an older server, connected to a much older (6-8+ year old) SAN, gave me these results. You can see that the metrics are all much lower, with a much wider spread across all of them, and that is down to the spinning disks.


As you can see from the metrics below, there is a significant drop in the performance of the server, and a lot more variance across the load types.

Avg. Metrics | Sequential Read | Random Read | Sequential Write | Random Write
Latency (ms) | 24.81 | 66.79 | 373 | 260
Avg IOPs | 1928 | 710 | 186 | 210
MB/s | 120 | 44.3 | 11.6 | 13.14

Slightly newer Server

So, next I had the SQLIO package running on a slightly newer server (with a higher spec I/O system, I was told), which gave the following results.


As expected, this did give generally better results, though it is interesting that sequential reads had better throughput on the older server.

Avg. Metrics | Sequential Read | Random Read | Sequential Write | Random Write
Latency (ms) | 35.13 | 44.17 | 41.81 | 77.44
Avg IOPs | 1474 | 1021 | 1314 | 794
MB/s | 92.7 | 63.8 | 82.8 | 49.6

Cracking open VMware

Since I use VMware Workstation for compartmentalising projects on my laptop, I thought I'd run this against a VM. The VM was running on the SSD from the top of the post, so I could see how much of an impact the VMware layer had on the process. This gave some interesting results, which you can see below. Obviously there is something screwy going on here: it's not likely that the VM can perform that much faster than the drive it's sitting on. It would be nice if it could, though…


Avg. Metrics | Sequential Read | Random Read | Sequential Write | Random Write
Latency (ms) | 7.8 | 7.5 | 7.63 | 7.71
Avg IOPs | 12435 | 13119 | 15481 | 14965
MB/s | 777 | 819 | 967 | 935

While the whole process was running, Task Manager on the host machine was showing around 0-2% disk utilisation, but the CPU was sitting at 50-60%. So it was hardly touching the disk, which suggests the I/O was being satisfied by host-side caching rather than by the SSD itself.


Conclusion

Just to summarise, in case you didn't already know: SSDs are really quick. For the testing I was doing, the SSD was giving me approximately double the performance of some pretty expensive hardware (or at least it was expensive 5-10 years ago…).

Also, take your test results with a grain of salt.

When it works, it WORKS!

Reading the article 'Everything's broken and nobody's upset' by Scott Hanselman prompted me to write this, which has been bubbling away for a little while now.

Yes, I agree that there are many issues with our industry. Many things don’t work as seamlessly as they could / should.

However, in defence of our industry, I hold up the following:

1. Way, way back in the late 1980s, I got my first dot-matrix printer, a Star LC-10, which I hooked up to my Atari ST. At the time, it amazed me that I could make the printer (a physical device, with moving bits) do stuff simply by typing on a keyboard. It was a long time ago, and I'm not so easily impressed now.

2. In the mid 1990s, when I was at university, I was able to chat with a course mate over the internal systems (VAX/VMS), and arrange to meet them outside the building to go for a pint. Again, this was interaction with the real world.

3. A couple of months ago, having changed my car to a VW Golf, the first car I've ever had with satnav, I was able to save myself 2 hours sitting in traffic when the satnav dynamically changed my route based on traffic conditions. I had never been convinced of the need for satnav, since I was 'capable of reading a map'. However, following this, I'm a complete convert.

4. A month or so back, I met up with a guy from the UK, over in Seattle. He'd requested some chocolate be brought over, and since I was going over for a course, I took it. This was arranged completely over Twitter.


5. A few weeks ago, having spent a few days in London on a project, I was able to use my iPad and its 3G connection to access a website and order a curry while heading home on the train. Having left the train and stopped at the curry house on the way home, the curry was ready and waiting for me.


6. We've sent a huge robot to another planet. OK, it's a mildly depressed robot, but still!


You can see that there is a common theme in my examples: interaction with the real world. All of these items continue to make me impressed with what my industry has achieved.

Thank you, and please continue to impress me.

It's another TSQL2sday post, this time hosted by Rob Volk (b|t). Thanks for hosting, Rob.

So this month it's about how we fixed a problem, or found help when we couldn't fix one, with a theme based on 'Help!' by The Beatles.

I chose the 2nd verse…

When I was younger, so much younger than today

So, many years ago, when I started out with SQL Server, back in the heady days of 6.5, there was much less of a SQL community; actually, I don't even remember one. The only way I could get help was either through MSDN, or by emailing colleagues I'd met on a SQL training course.

I never needed anybody’s help in any way.

Though that's primarily because I stopped using SQL for a while, just a year or so, but still.

Everyone needs help, at some point, with something. It’s not a weakness, it’s a strength.

But now these days are gone, I’m not so self assured.

In the past few years, I’ve started working more and more with SQL, and found that it is such a huge product that no one can know the whole thing (SSAS, SSIS, SSRS included), and because of that, I’ve found several ways to get help if I need it.

Though, before I get into that, I need to say something about the community. There is a huge SQL community out there, though the first community event I attended wasn't a SQL one. It was a developer event, ReMIX UK, back in 2008 (http://www.microsoft.com/uk/remix08/default.aspx). It was a great event and I got to meet some great people there, including Scott Guthrie! Getting to this event was pretty much solely down to an ex-colleague, Jes Kirkup. Thanks Jes!

Since then I've started attending community events where I can, including the local DevEvening events (where I've done a couple of short presentations) and SQL community events (SQLMaidenhead, SQL in the Evening, and SQLBits of course!). I've found that these are a great way of getting an insight into what skills others in the industry have, and therefore where I should be targeting my learning. Following on from that, I've met some great people, and there are people who I know I could ask for help if I needed to.

Not to mention the #SQLHelp hashtag on Twitter, where there is help pretty much 24 hours a day, the only restriction being the need to phrase your question in what's left of 140 characters once the hashtag is included.

Now I find I’ve changed my mind and opened up the doors.

Now I find that I am helping people where I get the opportunity, publishing blog articles (here, like this one!), and hoping to do more community presentations. Furthermore, I'm doing internal training courses (next month I'm doing one on SSAS), and have recently started mentoring a colleague in SQL.

It’s great to be able to share knowledge and experience.

Thanks for listening, and reading, and thanks again to Rob for hosting.

Travelling with Gadgets

Following on from a previous post on my journey to Seattle (Sleeplessness in Seattle) for the SQLskills Immersion Event on Performance Tuning (IE2) last week, I thought I'd share my experiences of travelling with gadgets.

To allow me to have access to everything I needed while I’d be away, I took the following with me:

  1. Apple iPhone 4s – My personal mobile
  2. Blackberry Bold 9700 – Work mobile
  3. Apple iPad 2
  4. Amazon Kindle (currently reading Snow Crash by Neal Stephenson)
  5. Acer Aspire 3810TZ laptop
  6. North Face Borealis rucksack
  7. Logitech M510 Wireless mouse
  8. Noise-cancelling earphones and iPod Nano
  9. Chargers, US Adapters…

Out of all these items, I'd have to give a special shout out to the iPhone and iPad. They surpassed themselves, giving me perfect access to the internet through numerous WiFi access points, and letting me speak to my family over Skype.

Also, and this is a surprise to me, I have to mention the Acer laptop. For a very long time, I've found Acer laptops to be somewhat shoddy. However, this one has done a sterling job, with 8+ hours of battery life and no issues with responsiveness. Having said that, I did improve its performance with a Crucial M4 SSD and a memory upgrade (to 8GB, from 4GB), just to ensure that running SQL Server on it would be bearable.

I’ve been impressed with the quality of the WiFi access in the US (I was in Seattle). All the Starbucks I’ve been to had free WiFi, as did the hotel I stayed in (Courtyard Marriott in Downtown Bellevue).

While I could have taken notes on the course on the iPad, or typed them into the laptop, I prefer to use a Moleskine to take notes. Yes, it may be a little old-school, but if it was good enough for Picasso, Van Gogh and Hemingway, then it's good enough for me.

VS2012 Schema Comparison

Having recently been playing with the newly released Visual Studio 2012, one of the really nice features that I’ve seen is the Database Schema Comparison functionality.

If you’d like to follow along with this, you’ll need the ContosoBI database, which is available here.

You can see this by launching VS2012, choosing New Project, and selecting the SQL Server Database Project template. Don't forget to give the project a name; I called mine dbSchemaComparison.


When the Solution has been created, you’ll be presented with the Solution explorer.


In here, you'll want to right-click on the project name and choose Import > Database, then create a new connection to your database. Also, if you want to track everything, check the Permissions and Database Settings tick boxes. Then click Start.


While the process is running, you’ll be presented with a dialog box showing the progress. When it’s completed, click Finish.


Now, when you look in the Solution Explorer, you’ll see a set of SQL Scripts that have been created to match the structure in the database.


My next step was to connect to the database using SQL Server Management Studio and alter one of the tables. I decided to add an index called ix_date to the DimAccount table, covering its LoadDate field.
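For reference, the equivalent T-SQL would be something like the statement below (a sketch; I'm assuming the table sits in the default dbo schema):

CREATE NONCLUSTERED INDEX ix_date ON dbo.DimAccount (LoadDate);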


The final step in this process is to go back into Visual Studio, right-click on the project and choose Schema Compare. When this window opens, you have two drop-down boxes: the left contains the project that you have in VS2012; the right needs to be populated with a database for comparison.


When you've populated the database on the right, click the Compare button. The schemas from the two sides are loaded and compared, and the results are then displayed on the screen. As you can see below, it's pretty obvious what the differences between the environments are.


If you then want to sync the environments, remember that changes are moved from Source (left) to Target (right). If you want to remove the new index from the database on the right, you can click Update (or the script button next to it, to generate a script). Alternatively, if you want to update your project instead, click the 'switch' button between the two drop-downs and rerun the Compare.

A really nice feature, I think you’ll agree.

Sleeplessness in Seattle

Over the past week, I've been attending the IE2 course, held by SQLskills, in Bellevue (near Seattle). It's been a really intense week, covering a lot of really deep technical stuff.

However, I’m not going to talk about that. The benefits of training by some of the leading SQL Server people in the world should be obvious. Also, my poor brain needs time to assimilate everything that’s been hosed into it.

It has, however, been a great honour to spend time with the great people on this course, and I mean the other attendees (such as Kendra Little, Jes Borland, Tim Ford and Dan Taylor among others) as well as the Instructors (Paul Randal, Kimberly Tripp, Jonathan Kehayias and Joe Sack).

A couple of the most impressive nuggets of knowledge I’ve gained over the past week:

sys.login_token – gives you a list of the Active Directory groups (and other tokens) associated with your login
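As a quick sketch, something like this lists just the Windows group tokens for the login you're currently connected as (the WHERE filter is my own choice; drop it to see everything):

SELECT name, type, usage
FROM sys.login_token
WHERE type = 'WINDOWS GROUP';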

SQLIO Analyzer – David Klee has written a website that will analyze the output from SQLIO

Adventure Works Workload Generator – Jon Kehayias has a SQL workload generator.

There are a great many other bits of knowledge I’ve gained, but these, so far, are the most immediate, quick wins, if you see what I mean…

It was a hard flight over here, 9.5 hours on a plane that was an hour late departing, but I had the SQL Internals book to keep me occupied (between films: Marvel's The Avengers and The Hunger Games…).

I'd like to thank my employers, TAH Ltd (twitter|web), for sending me on the course. I hope that the benefits of this training will continue to be obvious for many moons.

More importantly, I’d like to thank my wife, Emma, since without her support, I’d never have had the confidence to travel 4500 miles for a training course.

Thank you, to everyone on the course, for making it a great learning experience.

PS: Sleeplessness, because for the majority of the week here I woke up at 3am, almost every day, for no apparent reason.