Friday, January 30, 2015

How to determine the SSL TPS of your workload?

The KEMP Technologies LoadMaster range of load balancers goes from very affordable entry-level models up to real workhorses. To choose the right model you need to think about the number of network interfaces you need, how many real or virtual servers you want to be able to use, and your expected throughput and SSL TPS.


Of all those parameters SSL TPS is the one that confuses some people.

What is SSL TPS?

SSL TPS is the number of SSL (Secure Sockets Layer) Transactions Per Second. First we need to understand what a transaction is. An SSL transaction consists of three phases:

  • Session Establishment
  • Data Transfer
  • Session Closure

The Session Establishment phase is the most expensive from a performance point of view. This is where authentication, the handshake and the key exchange take place and the encrypted session is created. The Data Transfer phase is where the actual data is transferred, and during the Session Closure phase the client and server tear down the connection.

So TPS is the number of new SSL sessions per second, not to be confused with concurrent (already established) SSL sessions.

SSL and ADCs

Creating an SSL session requires CPU resources, and our common x86 processors are not particularly good at this task. This is why certain ADCs have a dedicated chip to perform it, an ASIC (Application Specific Integrated Circuit). The LoadMaster LM-2600, LM-3600 and LM-5400 are examples of ADCs with an SSL ASIC. Traditionally an ADC with an SSL ASIC was used to offload the SSL traffic and forward the traffic over unencrypted HTTP to the real server.

Today SSL offloading enables the ADC to perform L7 tasks such as content switching and Intrusion Prevention System (IPS) inspection. And with the power of modern hardware it's common practice to re-encrypt the traffic again before it leaves the ADC for the real server.

Calculate the TPS

To calculate the expected SSL TPS you need to understand both the traffic characteristics of your application and the expected load the users will cause.

For a typical HTTP application you need to understand:

  • the number of unique visitors
  • the number of HTML pages loaded per user session
  • the number of requests made to the web-server per HTML page

Plan for peak usage; burst load can be up to three or four times the average load.
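
As a back-of-the-envelope illustration, the PowerShell sketch below runs the numbers for a hypothetical application. Every figure in it is an assumption for illustration only; replace them with your own data. The worst-case line assumes no SSL session reuse at all, which overestimates the real TPS for most workloads.

    # All values below are assumptions for illustration only
    $uniqueVisitorsPerHour = 10000
    $pagesPerSession       = 5
    $requestsPerPage       = 30
    $peakFactor            = 4       # burst load of three to four times the average

    # With SSL session reuse, roughly one full handshake per visitor session
    $avgTps  = $uniqueVisitorsPerHour / 3600
    $peakTps = $avgTps * $peakFactor

    # Worst case: no session reuse, every request triggers a full handshake
    $worstCasePeakTps = ($uniqueVisitorsPerHour * $pagesPerSession * $requestsPerPage / 3600) * $peakFactor

    "Average TPS: {0:N1}, peak TPS: {1:N1}, worst-case peak TPS: {2:N0}" -f $avgTps, $peakTps, $worstCasePeakTps

With these made-up numbers the realistic peak is only around 11 TPS, while the no-reuse worst case is roughly 1,700 TPS; the gap shows why knowing whether your clients reuse SSL sessions matters for sizing.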

Measure the TPS

A more hands-on and practical way to determine SSL TPS may be to simply measure it in a production or lab deployment. If you don't have an existing solution in place to measure with, I suggest you download a trial version of the KEMP LoadMaster VLM. The VLM comes with a 30-day temporary license, which should be sufficient to perform some tests in your environment.

After you have created the Virtual Service and directed users to the LoadMaster, you can read the TPS and throughput in real time in the System Metrics section of the Home page.

[Screenshot: System Metrics section of the LoadMaster Home page]

This screenshot is taken from a small Exchange 2013 environment with ~700 active users using Outlook Anywhere in Online Mode and an average of 1.5 ActiveSync devices per user.

This customer plans to use the LoadMaster for several other applications in the near future. The choice for the VLM-2000 with its 2 Gbps throughput and up to 1,000 SSL TPS seems to be the right one; this unit offers more than enough performance with sufficient headroom for peak usage.

An alternative approach would be to enable SNMP on the LoadMaster:

[Screenshot: enabling SNMP on the LoadMaster]

The MIB can be located under the Tools section of the LoadMaster Documentation site. Then use your favorite SNMP tool to collect and log the data, for instance Paessler's PRTG.
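
If you prefer a quick script over a full monitoring tool, a polling loop along the lines of the sketch below can log the values to a CSV file. This is only a sketch: it assumes the net-snmp command line tools (snmpget) are installed, the address and community string are examples, and the OID placeholder has to be replaced with the actual TPS counter you find in the KEMP MIB.

    # Sketch only: requires the net-snmp snmpget tool in the path
    $loadMaster = '192.0.2.10'                    # example address of the LoadMaster
    $community  = 'public'                        # example community string
    $tpsOid     = '<TPS OID from the KEMP MIB>'   # look this up in the MIB file

    while ($true) {
        $tps = & snmpget -v2c -c $community -Ovq $loadMaster $tpsOid
        "{0},{1}" -f (Get-Date -Format o), $tps | Add-Content -Path .\ssl-tps.csv
        Start-Sleep -Seconds 10
    }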

Friday, January 16, 2015

Soon: Import PST files to Office 365

In an on-premises environment an admin can use the New-MailboxImportRequest cmdlet to import a batch of PST files to a mailbox, or even directly into the user's In-Place Archive mailbox with the -IsArchive switch. Currently this is not possible with Exchange Online.
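
For reference, the on-premises approach looks something like this; the mailbox name and UNC path are placeholders, and the account running it needs the Mailbox Import Export role assigned:

    # Example only: mailbox and path are placeholders
    New-MailboxImportRequest -Mailbox "jdoe" -FilePath "\\fileserver\pst\jdoe.pst" -IsArchive

    # Monitor the progress of the request
    Get-MailboxImportRequest | Get-MailboxImportRequestStatistics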

Of course there are some alternatives, such as an Outlook-based manual import.


However, if your organization wants to move away from PST, and you should, then a manual process may not be the best solution.

When Microsoft bought the PST Importer tool from Red Gate and re-released it in 2012 as PST Capture (and more recently as PST Capture 2.0), it looked like this would be the perfect tool to locate and import PST files. Unfortunately the tool has severe shortcomings, the most important being features, stability, performance and the fact that the tool is not supported through Office 365 Support.

So it is great news that Microsoft is working on providing...

The ability to import data into Office 365 in a quick and easy manner has been a known constraint of Office 365, and a solution for this issue has emerged as a key request from customers.  The engineering team has been working on a solution that will allow quicker imports of data into Exchange Online Archive Mailboxes.  You will now be able to import Exchange Online data through PST files into the service without using third party tools.

The announcement continues with the mention of Drive Shipping and Network Based Ingestion:

Drive Shipping and Network Based Ingestion options will use Azure-based services to import data.  Over time we will be extending this to other data types across Office 365.

Imagine you would be able to ship a 4TB USB drive to Microsoft and have them import your files to Exchange Online or SharePoint Online!

Expect the experience to be quite different from what you would do on-premises. Because the actual import process is handled by the Mailbox Replication Service (MRS), it won't be possible to have your local files imported into Exchange Online with the New-MailboxImportRequest cmdlet. Instead, expect an interface to upload (or ship) your files to an Azure datacenter and start the import process from there.

Note that the announcement specifically mentions Exchange Online Archive Mailboxes. I hope it will be possible to import the data to the primary mailbox as well, to facilitate scenarios where that makes more sense.

If you want to be the first to know what Microsoft has in the pipeline for Office 365, make sure to keep an eye on the Office 365 roadmap.


Monday, January 12, 2015

Considering an Exchange 2013 DAG without AAP? Careful!

Exchange 2013 SP1 can now benefit from a couple of new clustering features in Windows Server 2012 R2; read all about them in Scott Schnoll's blog post Windows Server 2012 R2 and Database Availability Groups.

My personal favorite is the option to create a DAG without a Cluster Administrative Access Point. This feature allows Exchange to use a cluster without an assigned IP address, without IP Address or Network Name cluster resources, and without a Computer Name Object (CNO). Windows Server 2012 R2 and Exchange 2013 SP1 no longer need those to manage the cluster and are able to talk to the cluster API directly.
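
For completeness, creating such an IP-less DAG comes down to passing [System.Net.IPAddress]::None for the DAG IP addresses; the DAG name, witness server and witness directory below are placeholders:

    # Placeholders for the DAG name, witness server and witness directory
    New-DatabaseAvailabilityGroup -Name "DAG1" -WitnessServer "FS01" -WitnessDirectory "C:\DAG1" `
        -DatabaseAvailabilityGroupIpAddresses ([System.Net.IPAddress]::None)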

A DAG without an AAP reduces the complexity and simplifies DAG management. Everyone who has worked with Exchange 2000/2003 clusters will agree that reducing the complexity can improve the stability and availability of Exchange greatly.

Unfortunately there are many third-party solutions which still require the legacy cluster objects, for instance backup software trying to access the database through the DAG CNO. An example of such software is Backup Exec 2012-2014:

Symantec states in HOWTO99184, Backing up Exchange data, that:

Backup Exec requires an Exchange DAG to be configured with a Cluster Administrator Access Point to facilitate connectivity to the Cluster Name and Cluster IP address.

Symantec NetBackup has a similar issue; however, it can be tricked into talking to a static server by editing the hosts file: Backing up an Exchange 2013 IP-less DAG. Another example is NetApp SnapManager, which currently does not support a DAG without an AAP.

Unfortunately there's no (supported) way to convert your DAG to a DAG with an AAP, so you would need to destroy and rebuild your DAG to correct this issue. So check any dependencies carefully before you opt to deploy a DAG without an AAP.