Jedi Princess

Some Girls Wander, I Just Ramble

It’s still around: SQL 2000 and SQL 2005 32-Bit… Memory dilemma October 1, 2010

Filed under: Uncategorized — Jedi Princess @ 1:28 pm

I have customers who still find themselves running SQL Server 2000 (32-bit, no less!) in their environments, not because they don’t want to move to a later version or to 64-bit, but because they are waiting for their SOFTWARE vendor to release updates to the product their business depends on.

 
 

Because this product is approaching end of life with Microsoft and is on extended support only at this time, it is really hard to find guidance on how to configure memory for the servers sitting in datacenters still running this version of SQL.

 
 

Things to know:

 
 

The Address Windowing Extensions (AWE) API allows applications that are written to use it to access more than 4GB of RAM.

 
 

Physical Address Extensions (PAE) is a function of the Windows 2000 and Windows Server 2003 memory managers that provides more physical memory to a program that requests memory. The program is not aware that any of the memory that it uses resides in the range greater than 4 GB, just as a program is not aware that the memory it has requested is actually in the page file.

 
 

AWE is an API set that enables programs to reserve large chunks of memory. The reserved memory is non-pageable and is only accessible to that program.

 
 

AWE is only available starting with Windows 2000 Advanced Server and Windows 2000 Datacenter Edition. Windows 2000 Server *does not* have the ability to leverage AWE. Windows Server 2003 Enterprise and Windows Server 2003 Datacenter can leverage AWE. Windows Server 2003 Standard edition cannot leverage AWE.

 
 

SQL Server 2000 STANDARD edition does not have the ability to leverage AWE; therefore, regardless of the OS version, the awe enabled option should be left at 0 (the default), which means that AWE memory is not used. SQL Server 2005 STANDARD DOES have the ability to leverage AWE as long as the OS supports it as well.

 
 

SQL Server 2000 ENTERPRISE edition can leverage AWE if it is enabled.

 
 

By default, EVEN IF you are running Windows 2000 Advanced Server or Windows 2000 Datacenter Edition AND you are running SQL Server 2000 Enterprise, and you have more than 4GB of RAM, your server will still not utilize it by default. You must make changes to the OS configuration and to the SQL configuration.

 
 

If your 32-bit server has one of the following RAM configurations, you require special switches in the boot.ini file. After you change the boot.ini file, you must reboot your server:

 
 

4GB RAM: /3GB (AWE support is not used)

8GB RAM: /3GB /PAE

16GB RAM: /3GB /PAE

More than 16GB RAM: /PAE only (drop /3GB; the kernel needs the larger virtual address space to manage that much physical memory)
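As a sketch only, here is what the boot.ini entry for an 8GB server might look like with those switches applied. The ARC path, Windows directory, and edition string below are placeholders; yours will differ:

```
[boot loader]
timeout=30
default=multi(0)disk(0)rdisk(0)partition(1)\WINDOWS
[operating systems]
multi(0)disk(0)rdisk(0)partition(1)\WINDOWS="Windows Server 2003, Enterprise" /fastdetect /3GB /PAE
```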

 
 

Then for servers where there is more than 4GB of RAM, AWE needs to be enabled and the SQL Server service needs to be restarted. And without Physical Address Extensions (PAE), AWE cannot reserve memory in excess of 4 GB.

 
 

 
 

Here is an example of how to configure it:

 
 

SP_CONFIGURE 'show advanced options', 1
RECONFIGURE
GO
SP_CONFIGURE 'awe enabled', 1
RECONFIGURE
GO

 
 

THEN, I configure max server memory to tell SQL the maximum amount of memory it can use within the system. This is important because if you don’t, SQL will end up taking all but roughly 128MB of RAM, leaving only about 128MB for the OS, and you will find the OS starved of memory. My rule of thumb is to leave anywhere from 1GB to 2GB of RAM for the OS itself. Therefore, if I have a 16GB system, I will tell SQL it can use anywhere from 14 – 15GB of that memory (the option is specified in MB) by using the following:

 
 

 

SP_CONFIGURE 'max server memory', 14336

RECONFIGURE

GO
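The 14336 above is just arithmetic: 16GB of total RAM minus a 2GB OS reserve, expressed in MB. A minimal sketch of that rule of thumb (the helper name is my own, not a SQL Server API):

```python
def max_server_memory_mb(total_ram_gb: int, os_reserve_gb: int = 2) -> int:
    """Suggest a 'max server memory' value (in MB), leaving RAM for the OS."""
    return (total_ram_gb - os_reserve_gb) * 1024

print(max_server_memory_mb(16))     # 14336 (2GB reserved for the OS)
print(max_server_memory_mb(16, 1))  # 15360 (1GB reserved for the OS)
```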

 
 

Some other things to consider: AWE is not necessary on 64-bit platforms, but it is still there. Memory pages that are allocated through the AWE mechanism are referred to as locked pages on the 64-bit platform. On both 32-bit and 64-bit platforms, memory allocated through AWE cannot be paged out – this can be advantageous to the application, and it is one of the reasons why it will sometimes be used on a 64-bit platform. BUT, this comes at a cost – it affects the amount of RAM that is available to the system and to other applications (hence the ‘max server memory’ recommendation earlier). Because of the underlying constructs, the account that runs the SQL Server service must have the Lock Pages in Memory privilege. This privilege is enabled by default during the installation, but it is important to know about if permissions settings have been modified. Using gpedit.msc may be necessary to re-enable this privilege.

 
 

 
 

 

ALU and RAID Stripe sizing Recommendations for Exchange 2010 August 13, 2010

Filed under: Exchange 2010 — Jedi Princess @ 11:42 am

 

This is a topic of discussion where it is really easy for propellers to start spinning and discussions to take adventures through the jungle (forget the weeds). I will try to keep this pretty light and to the point though.

 
 

The question that I am looking to answer is this: Do those little pesky settings within storage configuration really matter when configuring my disks for Exchange? Isn’t the most important point to just make sure that I have enough bandwidth, IOPs and capacity? And what are the recommended settings for things like allocation unit size and stripe size for Exchange 2010?

 
 

I will start off by saying this: an incorrectly configured disk layout can affect performance by 20% or more. You might say, “what is 20% when running this on a SAN?”. 20% can be a pretty big number when you have opted to lay out your environment on SATA disks and you are already at 95% of the IOPs capability across your array.

 
 

Good News! Microsoft has addressed what has historically been one of the biggest offenders of incorrectly configured disks – the 64th sector misalignment. I already have a post on my blog site covering this in a lot more detail, but the gist is this: the internal structure of the master boot record (MBR) on a disk inhabits the hidden sectors of the drive, and that value defaults to 63. The result is that when a partition was laid down on a disk, in versions of Windows prior to Windows Server 2008, the partition started at sector 64, misaligning it with the underlying RAID stripe. This causes disk crossings for a percentage of small I/O (typical of Exchange and other DB applications), resulting in lower performance. Depending on your IO type, sometimes the penalty was slight, and sometimes it was significant. It is my delight to say that new partitions on Windows Server 2008 are created at a 1MB starting offset, which (with high probability) aligns with the underlying RAID stripe units. 64KB is a valid starting offset – it aligns well across all the pieces and parts of the disk subsystem. The Windows Server 2008 starting offset of 1024KB is also completely valid and generally a good default. I will say, though, for systems from which high performance is required, it is essential to experiment with representative workloads and validate disk partition alignment for your environment. Exchange 2010 workloads don’t typically fall into this category – I am speaking more to the SQL folks reading this. Regardless of whether this is Exchange 2010 or SQL, aren’t you using tools like SQLIO, Jetstress and Loadgen to test your environment before deployment anyway? (I’m not joking…. I’ll probably talk about that in another post)

 
 

Now, on to the other configuration options when setting up your disks. The other configuration settings that are correlated with performance and partition offset are these: file allocation unit size and stripe unit size. If you want to understand what these things are, then, I’m sorry – I’ve decided we have to go into the weeds (and maybe venture close to the jungle) a bit. If you just want the values, and would rather save the weeds for another time (perhaps when you have insomnia?), you can skip them and go right down to the recommendations.

 
 

<Weeds>

A physical hard drive contains one or more platters. These platters are thin circular disks, and it is on their surfaces that our electronic media is stored. Each side of a platter contains thousands of tracks – think of them for our purposes as a bunch of concentric circles on the platter; this isn’t entirely accurate with modern drives, but it is easier to visualize this way. Looking across all of the platters inside a physical disk, the set of tracks with the same diameter makes up a cylinder. Each platter surface has its own dedicated read/write head, and the tracks are divided into sectors. A sector is the minimum chunk of data that can be read from or written to a hard drive. Sector size has more or less “defaulted” to 512 bytes, but newer drives offer different sector sizes. Probably worth mentioning here is that Microsoft clearly states that 512 byte sector disks are the only supported disks for Exchange. http://technet.microsoft.com/en-us/library/ee832792.aspx

 
 

The file allocation unit size is configured when the partition is formatted by the operating system. If the sectors of a hard drive are 512 bytes and I use a 4KB allocation unit size, each allocation unit spans 8 sectors. (Math: 4KB is equal to 4096 bytes. 4096 / 512 = 8). The idea is that if a particular file system holds a lot of small files, you may want to use a smaller file allocation unit size so they take up less space – each file is written across one or more allocation units.

 

To determine what your file allocation unit size is after the fact, you can run: fsutil fsinfo ntfsinfo c: from a command prompt

And you will see the NTFS information for your c:\ drive. Look for the line called Bytes Per Cluster; that value is your file allocation unit size. In the absence of tested performance data against different allocation unit sizes, the “typical” recommendation is to measure your application’s block size when reading and writing to disk, start with that value for the ALU, and then test and modify as necessary.
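The sector math above can be sketched in a couple of lines (assuming 512 byte sectors, per the Exchange support statement earlier):

```python
def sectors_per_cluster(allocation_unit_bytes: int, sector_bytes: int = 512) -> int:
    """How many physical sectors one file allocation unit (cluster) spans."""
    return allocation_unit_bytes // sector_bytes

print(sectors_per_cluster(4 * 1024))   # 8   (the 4KB example above)
print(sectors_per_cluster(64 * 1024))  # 128 (the 64KB Exchange recommendation)
```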

 
 

The stripe size is the amount of user data in a RAID group stripe (and it does not include drives used for parity or mirroring). Sometimes this term “stripe size” is used to define the amount of contiguous data stored on a single disk of the stripe – but that is *not* how I am using it. In my world (surrounded by really smart EMC disk people) the term used to define the amount of contiguous data stored on a single disk of the stripe is called a stripe element. And the default setting on the EMC CLARiiON for a stripe element is 64KB which has been highly optimized for FLARE and we generally recommend NOT changing this value unless you have a really good reason to do so (if you aren’t sure, then you don’t). You cannot change this through the GUI; changing it will usually result in reduced performance.

 
 

There is also a term called stripe width, and it is entirely different. The stripe width is the number of stripes that can be written to or read from simultaneously. This is equal to the number of disks in the RAID group – without including parity or mirror. The stripe size is something that we typically have a lot more control of – stripe width is quite a bit more static, because changing it requires adding or removing disks from a RAID group.

 
 

For example, an 8 disk RAID 1/0 has a stripe width of 4 and a stripe element of 64KB, so it has a stripe size of 256KB (4 * 64KB). A 5 disk RAID 5 (4+1) with a 64KB stripe element also has a stripe size of 256KB. In the EMC world, the way you change your stripe size is through how you configure your RAID groups. A 4 disk RAID 1/0 has a stripe width of 2 and a stripe element of 64KB, so it has a stripe size of 128KB (2 * 64KB).
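Those examples reduce to a one-line formula. Remember that stripe width counts data disks only, so parity and mirror drives are excluded before you plug the number in:

```python
def stripe_size_kb(stripe_width: int, stripe_element_kb: int = 64) -> int:
    """Stripe size = number of data disks (stripe width) x stripe element."""
    return stripe_width * stripe_element_kb

print(stripe_size_kb(4))  # 256 - 8 disk RAID 1/0 (width 4) or 5 disk RAID 5 (4+1)
print(stripe_size_kb(2))  # 128 - 4 disk RAID 1/0 (width 2)
```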

</Weeds>

 
 

The good news about Exchange 2010 is that the data profile of how Exchange reads and writes data to disk is very predictable. It tends to be very sequential in nature, and it writes and reads in mostly 32KB block sizes.

 
 

What my SQL admins, DBAs, storage designers, et al, will need to understand is that this is where we diverge. There is absolutely *nothing* about SQL that tends to be “typical”. I think of Exchange as an application. I think of SQL as an environment that HAS applications. Each SQL application, and how it reads and writes data to and from the disk, can be (and usually is) vastly different. For serious SQL workloads, you will need to test different disk configurations to see which one works best for your given application. I can give you good “places to start” for SQL, but the best configuration for your application will come from using SQLIO to validate different disk scenarios. You will also need to understand that as I/O size (block size) goes up, the quantity of data transferred at one time goes up (good), but the I/O rate, in operations per second, goes down (bad). If this is an “off-the-shelf” application that runs within SQL – like a SharePoint database – you can do some research with the application vendor (in this case, Microsoft) to see if they have recommended settings (they do).

 
 

Recommendations for Exchange 2010:

 
 

What we have tested and found at EMC is that when formatting a new NTFS volume to be used for Exchange databases and logs, it is best to set the allocation unit size to 64 KB. This is also exactly what Microsoft recommends. http://technet.microsoft.com/en-us/library/ee832792.aspx

This has specifically been shown to increase performance when performing large sequential reads. This is a performance advantage for all versions of Exchange, even back to Exchange 2003. It probably goes further back, but I don’t have performance data for Exchange 2000 and earlier. This value is especially pertinent for Exchange 2010 because of the architectural changes Microsoft made within the data structures to make data IO operations much more sequential in nature.

 

For the LUN stripe element, we find that 128 blocks or 64KB (default) performs the best. (CLARiiON default – see earlier discussion in the weeds)

 
 

For stripe size, 256KB and greater yields better performance for Exchange 2010 workloads – RAID 5 (4+1), RAID 1/0 (4+4), etc.

 
 

 
 

 

My Quick Math and Microsoft Don’t Agree… August 12, 2010

Filed under: Exchange 2010 — Jedi Princess @ 8:50 pm

Or, Why is the Exchange 2010 Sizer from the Microsoft Exchange Team recommending SO MUCH space for my

Exchange environment???

 
 

Because there is A LOT OF STUFF built into Microsoft’s tool:

 
 

Let’s be simple: the current customer example I’m working on has the configuration described below.

 
Let’s break this down:

 
 

I have 650 users, each of them is being offered a mailbox size limit of 8546MB. (These folks are lawyers; they live and die by the CYA rule). I have 4 mailbox servers at my Primary Datacenter (PDC) and I have a Database Availability Group (DAG) that contains two copies of my data locally and one copy of my data remotely. For my example, I will be focusing solely on the Primary Datacenter Sizing

 
 

If I did quick math, I might think: OK, 8546MB mailbox * 650 users = 5,554,900MB of data total; then if I want to have a local replica of it, EASY, 5,554,900 * 2 = 11,109,800 MB (or 11,109,800 / 1024 = 10,849.41 GB) for the entire local environment, right???

 
 

Uh, no.

 
 

That 8546MB of mailbox data per user actually lands on disk at somewhere closer to 9826MB, according to the way Exchange is going to deal with each individual mailbox. Based on the number of messages sent and received a day (in our case 150), the deleted item retention window (in our case 30 days), and single item recovery being enabled, there is a sizable “dumpster and whitespace” allowance that is calculated. In our case, it ended up being somewhere close to 1280MB, which is what brought my mailbox value up to 9826MB.

 
 

Next, we need to calculate the number of mailboxes we can fit in a single database. Once again, the answer isn’t what we might expect. For example, I might say: each mailbox is 9826MB in size, and we specified a maximum database size of 800GB (or 819200MB), so we could fit 819200/9826 = 83.37 mailboxes into a single database. But that isn’t the case either. Remember, the default setting in the Exchange calculator is to expect a 20% Data Overhead Factor.

 
 

Here is what Microsoft has to say about this value:


 
 

Ok, so, what does that MEAN about our data? Well, basically, remember that 9826MB per mailbox that we calculated after figuring out there was something called “dumpster and whitespace”? It can grow. And we should plan for it to. So, the default (and recommended value) by Microsoft is an additional 20%. 9826*1.20 = 11791MB. This is what we should be using to determine actual number of mailboxes per database. Math again: each mailbox is now 11791MB in size, we have a 819200MB database, that will allow us to have 819200/11791= 69.4767 mailboxes per database. That isn’t pretty math, so we can even it out a bit. Let’s call it 69 mailboxes per database.
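The arithmetic in the last two paragraphs, as a quick sketch:

```python
mailbox_on_disk_mb = 8546 + 1280        # quota + dumpster/whitespace = 9826
planned_mb = mailbox_on_disk_mb * 1.20  # 20% Data Overhead Factor = 11791.2

max_db_mb = 800 * 1024                  # 800GB maximum database size, in MB
mailboxes_per_db = int(max_db_mb // planned_mb)

print(mailboxes_per_db)  # 69
```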

 
 

If I can have 69 mailboxes per database, then I will need to figure out how many databases that is across my four local servers. 650 mailboxes / 69 mailboxes per database. 650/69= 9.4203 databases. We will need to round that up to 10. 10 databases are needed to hold my 650 mailboxes. Across 4 servers, that doesn’t divide evenly. The Microsoft calculator likes things to divide up evenly. The calculator will figure out that it would be much nicer and prettier and more even to have 12 databases and 650/12=54.1667 or 54 mailboxes per database.

 
 

So, to summarize this far, I have 4 mailbox servers, 12 databases, each with 54 mailboxes inside. This will mean that each Exchange server will be hosting 3 active databases and 3 * 54 = 162 mailboxes total for each Server. Let’s not forget, though, that I want to have a local replica copy as well, so, each mailbox server will have 3 active databases and 3 passive databases. How big will each database be and how big do we need to make the LUN for each database?

 
 

Database Size is a function of the size of each mailbox, while remembering that each one could potentially grow to fill its 20% Data Overhead Factor. 11791 * 54 = 636714 MB or 636714/1024 = 621.791 GB – this is the potential size of our database! —- remember, each server has 6 of these databases; 621.791*6*4 = 14,922.984 GB for our local environment with its production and local replica.

 
 

But we are not finished…. Exchange sizing has something called a Content Index Factor. Because we have the ability to search deep within our mailboxes for lost items or calendar invitations with the word “lunch” somewhere in the text of it, we have to pay for that with Content Index SPACE. And this is roughly 10% of the data size. That 621.791GB database really needs to be considered as 621.791*1.10 = 683.9701 GB. (This is not exact math, because the values for the transaction logs are also evaluated and added to the mix to determine the size of the indexes – I left them out of my example because they are really and truly quite small and not at all a significant factor)

 
 

And finally, when we look for the LUN size in the Exchange 2010 calculator, we will not see 682.97GB as the recommended LUN size for this database, because, after all, we have a value called LUN Free Space Percentage, and its default value is 20%. What we will see instead is a recommendation of 682.97 * 1.20 = 819.56 GB for each of our Exchange database LUNs. (I am going to call that 820 GB)

 
 

So, instead of 10,849 GB for our entire environment as our very quick math earlier would have possibly led us to believe, we will see the recommendation from our calculator to allocate 4,920GB per Exchange server, leading us to 4,920 *4 = 19,680 GB total for the capacity of our databases.
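Pulling the whole walkthrough together as one sketch, using the post’s numbers. The calculator’s own output (820GB LUNs, 19,680GB total) lands slightly lower than this hand math because, as noted earlier, it also folds transaction log contributions into the index calculation:

```python
MB_PER_GB = 1024

mailbox_planned_mb = 11791   # 9826MB on disk, plus the 20% Data Overhead Factor
mailboxes_per_db = 54        # 650 users spread across 12 databases
dbs_per_server = 6           # 3 active + 3 passive copies per server
servers = 4

db_size_gb = mailbox_planned_mb * mailboxes_per_db / MB_PER_GB
with_index = db_size_gb * 1.10   # Content Index Factor (~10%)
lun_gb = with_index * 1.20       # LUN Free Space Percentage (20%)
total_gb = lun_gb * dbs_per_server * servers

print(round(db_size_gb, 2))  # 621.79
print(round(lun_gb, 2))      # 820.76
print(round(total_gb))       # 19698 - versus ~10,849 from the quick math
```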

Maybe our quick math in the future needs to be – figure out what the quick math is, and then more or less double it.

<Please note, I have not addressed IOPs AT ALL in this example, and that is an important part of your overall sizing for the actual number of disks needed to support this workload. Nor have I addressed the sizing of Log LUNs or Maintenance LUNs, which are also critical to a happy and well performing Exchange system>

 

Fantasy or Fact: EMC CLARiiON Integration Pack for Windows SCOM 2007 (&R2) July 15, 2010

Filed under: CLARiiON,Technology — Jedi Princess @ 4:15 pm

 

It is a FACT!  Too bad it is really HARD TO FIND!


It works against alerts; this is not a facility to import array performance data into SCOM. But it is a very handy way to monitor and track events, and it’s free.


If you are licensed for Navisphere then the SCOM pack for CLARiiON is available on Powerlink at:

Home > Support > Software Downloads and Licensing > Downloads J-O > Navisphere Server Based Software

Home > Support > Software Downloads and Licensing > Downloads C > CLARiiON AX4-5

If you are having trouble navigating through Powerlink, send me an email at yourjediprincess@earthlink.net and I will help you navigate through PL to find it.

 

Expanding Boot Partition in VMWare (and possibly other virtualization…) May 27, 2010

Filed under: Technology,Virtualization — Jedi Princess @ 1:07 pm
Tags:

It has been a while since I’ve found myself in this predicament – I underestimated the size of the OS partition I needed for one of my utility VMs that I use to run my performance and sizing tools.  I was stuck at 10GB, and after adding several more much needed utilities, I found myself with precious little disk space remaining.  In the past, to resize the OS partition of a VMware Workstation image, I found myself using a Ghost boot ISO and jumping through too many hoops.  With the help of a friend, I have a new favorite way to expand the OS partition.

For one, VMware Workstation offers the ability to expand the .vmdk (and, uh, be careful – it expands and doesn’t shrink – so if you intend for the VM to fit on a 16GB or 32GB thumb drive, keep that in mind).

Once this has been done, within the Windows OS disk manager, you will see additional unallocated space next to the drive (C:\ in my case).

I tried to use Diskpart’s EXTEND (http://support.microsoft.com/kb/325590) feature to extend the OS partition – it failed.  I then reread the part in the MS Support article that said:

Only the extension of data volumes is supported. System or boot volumes may be blocked from being extended, and you may receive the following error:

Diskpart failed to extend the volume. Please make sure the volume is valid for extending

I then decided that MS must be serious. 

A friend then referred me to using EXTPART.  The instructions for using this method are thus:

1.  Power on the VM and then download extpart from here- http://support.dell.com/support/downloads/download.aspx?c=us&cs=19&l=en&s=dhs&releaseid=R64398&formatcnt=2&fileid=83929

2.  Extract the download within the VM OS

3.  From the cmd prompt, go to the extpart directory, type extpart and press Enter

4.  Type C: and press Enter

5.  Type the size in MB that you want TO ADD to the disk and press Enter

All was well again in my world!

 

Disk Alignment in 2010

Filed under: Storage,Technology — Jedi Princess @ 11:35 am
Tags:

Correct disk alignment has to be one of the easiest yet most often overlooked ways to squeeze the most performance for the buck out of your applications. While Microsoft has taken the initiative with Windows Server 2008 to help prevent incorrect disk alignment (it should probably be noted here that this was never a “Microsoft” problem…), it is still a relevant conversation to have and to be up to speed on. Microsoft has released a new document called Disk Partition Alignment Best Practices for SQL Server: http://msdn.microsoft.com/en-us/library/dd758814.aspx and it is a wonderful read! The paper not only addresses disk alignment, but also provides a great disk technology overview, and it speaks to how to optimize disks specifically for SQL Server.

With previous versions of Windows, it is imperative for IO intensive applications that you correctly set up your disk partitions with correct alignment.  Disk alignment is *not* available from the Disk Management snap-in tool in Windows.  Microsoft has provided two tools to implement disk partition alignment: diskpart.exe and diskpar.exe.  The Windows 2000 Resource Kit introduced the command-line utility diskpar.exe. Its successor, diskpart.exe, was introduced in Windows Server 2003. Note the presence or absence of a “t” in their names. The /align option debuted in Windows Server 2003 Service Pack 1 (SP1). Both utilities are powerful and should be exercised with caution – they are DESTRUCTIVE.

Diskpar.exe reliably reports partition alignment in terms of bytes. However, results are valid only for MBR basic disks, and this tool is no longer supported by Microsoft.

Diskpart.exe reports alignment for basic disks in terms of kilobytes.  The Windows Server 2003 (and earlier) default alignment is 32,256 bytes, exactly 31.5 KB; unfortunately DiskPart rounds this up to 32 KB. Though DiskPart is the tool of choice to implement partition alignment, the value it reports for partition offset is not sufficiently granular.

Therefore, to validate your partition alignment with sufficiently granular tools, use the wmic command to report partition offsets of basic disks, and use dmdiag -v for Windows dynamic disks.

   wmic partition get BlockSize, StartingOffset, Name, Index
Because versions of Windows earlier than and including Windows Server 2003 comply with the 63 hidden sectors reported by disk hardware, and because the most common sector size is 512 bytes, the default (and suboptimal) starting partition offset is 32,256 bytes, exactly 31.5 KB. (32,256 bytes = 31.5 kilobytes – http://www.onlineconversion.com/computer_base2.htm is a great bytes to kilobytes conversion tool)

Explicitly moving the starting offset from 31.5 KB to exactly 32 KB might seem like a legitimate approach. However, when choosing a valid partition starting offset, refer first to your storage vendor’s best practices. Make certain their recommendations correlate with the stripe unit size and file allocation unit size configured for your target application (like SQL Server, for instance). In the absence of definitive vendor information, choose the Windows Server 2008 default.  Windows Server 2008 partition alignment defaults to 1024 KB (that is, 1,048,576 bytes). This value provides a durable solution. It correlates well with common stripe unit sizes such as 64 KB, 128 KB, and 256 KB, as well as the less frequently used values of 512 KB and 1024 KB. Also, the value virtually guarantees that hidden structures allocated by storage hardware are skipped.
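Underneath all of this is a simple divisibility check: the partition’s StartingOffset (as reported by wmic) is aligned when it is an exact multiple of the stripe unit size. A sketch:

```python
def is_aligned(starting_offset_bytes: int, stripe_unit_kb: int) -> bool:
    """True when the partition offset falls on a stripe unit boundary."""
    return starting_offset_bytes % (stripe_unit_kb * 1024) == 0

print(is_aligned(32256, 64))     # False - the legacy 31.5 KB default offset
print(is_aligned(1048576, 64))   # True  - Windows Server 2008 1024 KB default
print(is_aligned(1048576, 256))  # True  - also aligned for a 256 KB stripe unit
```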

If you are an application owner (SQL, Exchange, etc.), it is really in your best interest to verify with your storage team that disk alignment has taken place correctly for the LUNs presented to your operating system. In the interest of full disclosure, I should also say that I work for a storage vendor, and I still encourage customers to verify for themselves that disk alignment has correctly taken place.