I had a customer who was unhappy with the default 50 GB archive mailbox quota. The goal was to reduce clutter. The solution adopted by the client was to set a lower quota on archive mailboxes, effectively forcing users to clean up their mailboxes.
Since the Set-MailboxDatabase cmdlet has no option to change or set the default archive quota on the database, I searched the web for possible solutions to avoid reinventing the wheel. However, all I could find were scheduled scripts that set quotas at the mailbox level. Nothing at the database level.
I set out to check my options. The logical first stop was ADSI Edit. I fired it up and opened the properties of the mailbox database.
Sure enough, I found two attributes: msExchArchiveQuota and msExchArchiveWarnQuota. They were not set. (Note that these attributes exist on both mailbox database and user objects.) I thought I had solved the problem. I set new limits on both attributes and started testing – notice that my new limits in the following screenshot have one zero less than the default.
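For reference, the same edit can be scripted instead of clicking through ADSI Edit. A minimal sketch using the ActiveDirectory module – note that the assumption that these attributes are expressed in KB is mine, so verify it in a lab first:

```powershell
Import-Module ActiveDirectory

# Locate the database object. Run Get-MailboxDatabase from the Exchange
# Management Shell; "DB01" is a placeholder database name.
$dbDN = (Get-MailboxDatabase "DB01").DistinguishedName

# Set a 5 GB quota and a 4.5 GB warning quota. Assumption: the values are
# stored in KB (5 GB = 5242880 KB, 4.5 GB = 4718592 KB) - verify before use.
Set-ADObject -Identity $dbDN -Replace @{
    msExchArchiveQuota     = 5242880
    msExchArchiveWarnQuota = 4718592
}
```

As the rest of this post shows, setting these attributes on the database did not change what gets stamped on new archive mailboxes, so treat this purely as the scripted equivalent of the ADSI Edit step.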
However, after some testing my enthusiasm suffered a blow. Whatever I tried, nothing helped: the archive mailbox quota still showed the default values on users’ mailboxes.
What I found is that during the provisioning process, the user’s msExchArchiveQuota and msExchArchiveWarnQuota properties are pre-populated with the out-of-the-box defaults of (approximately) 50 GB and 45 GB – regardless of whether the user had an archive mailbox before, and regardless of the fact that the archive mailbox was created in a database with custom archive quota settings. There is no documented way (or at least I couldn’t find one) to change this behavior.
This was tested in Exchange 2010 SP3 UR5.
The bottom line: it looks like my only option is the one I wanted to avoid – scheduled scripts.
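For completeness, here is a minimal sketch of the scheduled-script approach. The database name and quota values are placeholders; run it from the Exchange Management Shell and schedule it with Task Scheduler:

```powershell
# Re-stamp the custom archive quotas on every archive-enabled mailbox in
# the database, overriding the defaults applied at provisioning time.
Get-Mailbox -Database "DB01" -ResultSize Unlimited |
    Where-Object { $_.ArchiveDatabase } |
    Set-Mailbox -ArchiveQuota 5GB -ArchiveWarningQuota 4.5GB
```

Because provisioning stamps the defaults on each new archive, the script has to run on a schedule to catch mailboxes created since the last run.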
Virtual and shared desktop solutions are widely used for obvious benefits such as consolidation, centralised control and management, and remote workforce enablement – just to name a few – all of which lead to increased productivity and profitability.
However, with cloud (a.k.a. hosted) services becoming more popular, businesses that implemented a shared or virtual desktop solution often find that it doesn’t play well with the cloud. While the most popular hosted Exchange solution is Microsoft’s Exchange Online, the principles in this article apply equally to all hosted Exchange solutions where the mailbox is remote from the user.
I have seen instances where the impact on user productivity was significant, even dramatic. In one instance a customer admitted that he faced the chop if the situation didn’t improve quickly.
In order to understand the underlying issue, we need to look at the way Outlook accesses user data. Then we’ll discuss its effects in a virtual/shared desktop environment. Finally we’ll analyse how bringing a hosted, remote Exchange mailbox into the mix affects infrastructure, user experience and ultimately productivity.
If you understand the basics and want to cut straight to the chase then skip the intro and jump to the Desktop Virtualization and Remote Exchange Mailboxes section.
I will avoid delving into deep technical matters on purpose. Instead I will present the case from a higher level in an attempt to make the content accessible to the less technical reader too, with the aim of helping everyone to make an informed decision when it comes to adopting a hosted (a.k.a. cloud) solution, regardless of the reader’s technical background.
IMPORTANT: I am using the terms “shared” and “virtual” desktop interchangeably throughout this document. They are rather different technologies with different use cases. However from the perspective of how the Outlook client works, the concepts in this article apply to both. Each implementation needs to be analysed individually.
Outlook Concepts
Let’s look at Outlook first. In the context of this article we’ll only look at how Outlook accesses a user’s mailbox: there are two modes, online mode and cached mode.
In online mode Outlook stores no user data locally on the user’s computer. The data sits on the Exchange server, and Outlook serves as an interface for the user to interact with that data. Effectively all the hard work is done by the Exchange server, so computer resources such as disk space and processing power on the user’s desktop or laptop are available for running his/her important, and often demanding, business applications. While online mode is great for “outsourcing” much of the rendering and e-mail processing to the server so that more power is reserved for user applications, it has two major disadvantages: without a connection to the server there is no access to the mailbox at all, and every action is a round trip to the server, so responsiveness suffers as network latency grows.
Cached mode attempts to mitigate the disadvantages of online mode. Essentially, the first time the user opens his/her mailbox after the Outlook profile is created, Outlook downloads (caches) the entire mailbox to the local computer. It is very important to understand that Outlook does this every time the Outlook profile is recreated – we’ll touch on the implications later in this article. Cached mode brings the following benefits: the user’s data remains available even when the server is unreachable, and day-to-day operations are served from the local cache, making Outlook much more responsive.
Overall, in a traditional setup, cached mode is a win-win situation: users will be more productive because their data is always available and the entire system is more responsive.
IMPORTANT: This article is solely concerned with user experience and architectural considerations. Therefore issues such as security – say, a lost or stolen laptop with the CEO’s entire cached mailbox on it – are not discussed.
Outlook in Virtual and Shared Desktop Environments
When deploying a virtual or shared desktop solution, a number of things will affect the way Outlook will be configured. To better illustrate the point, we’ll start off by looking at how cached mode impacts the system and users.
It is important to understand that one of the main goals of shared and virtual desktop solutions is to consolidate as many users as possible onto as little hardware as practical. In a Citrix XenApp farm it is not unusual to see 50-100 users logged on to, and working simultaneously on, a single XenApp server, sharing memory, processing power and storage. Now consider cached mode in such a farm: each user’s local cache (the OST file) may be created on every server the user logs on to, so the storage requirement multiplies:
1,000 users × 1 GB per mailbox × 10 servers = 10,000 GB (or 10 TB)
If you disobey the recommendations and store the cache on a network share in an unsupported configuration, you risk Microsoft turning its back on you when you call for help – regardless of what third-party vendors say. Full stop. There is a lot of information on dos and don’ts in the Microsoft article; I encourage everyone interested to read it very carefully.
To avoid all this pain, authoritative sources such as Microsoft and Citrix advise using Outlook in online mode when deployed in a shared/virtual desktop solution. In fact, Outlook cached mode wasn’t supported in such environments until recently – see the announcement here.
And last, but not least, don’t forget about your legal obligations: if you rely on the Office Pro Plus package that comes with your Office 365 subscription, then you aren’t allowed to install it in a shared desktop environment. Virtual desktop environments however may be OK as long as some conditions are met. From “Determine the deployment method to use for Office 365 ProPlus“:
You can deploy Office 365 ProPlus to a virtual desktop, but the virtual desktop must be assigned to a single user.
To use Remote Desktop Services, you must use a volume license version of Office Professional Plus 2013, which is available on the Volume Licensing Service Center (VLSC). The Office programs that are included with Office Professional Plus 2013 are the same programs that are included with Office 365 ProPlus. For more information, see Microsoft Volume Licensing Product Use Rights.
Therefore if you want to deploy a Citrix-like environment (shared desktop) and use it to access Office 365 mailboxes, then you’ll end up paying twice for your Office software.
Now that we have covered the basics, we have arrived at the main point of this article. Let’s see how cloud (hosted) services affect your business when thrown into the mix.
Desktop Virtualization and Remote Exchange Mailboxes
We established that the only supported and technically feasible option in a shared/virtualised desktop solution is Outlook online mode. For the sake of consistency we’ll consider a Citrix XenApp environment.
When accessing a remote mailbox in online mode, the following will (NOT “might”) happen: every single action in Outlook – opening a message, switching folders, searching – becomes a round trip across the Internet to the hosted server, so users feel every millisecond of latency, and any Internet outage cuts them off from email entirely.
In summary, we saw that having Outlook in cached mode is not recommended, however working in online mode can be just as debilitating. The outcome of both options is an unhappy, frustrated and unproductive user base, and soon the business will start feeling the pinch.
Give Me Options Please
If you are already in this predicament, and putting up with the pain is not an option, then realistically there are very few options. None of them is ideal, as you will see – you are between a rock and a hard place. I will try to summarize them quickly.
Again, I want to emphasise that I am neither for nor against any particular solution. Do your due diligence. Analyse the financial, technical, operational and human/social impact of the solution before you adopt it. Ask questions – as many as you can – and if you don’t know what to ask, get help. Don’t be afraid to ask the cloud vendor the difficult questions. Get a second opinion, or rather, get many “second opinions”. Seek feedback from adopters with similar needs and architecture. Don’t fall for fashionable buzzwords; make sure you fully understand what they mean. Do your homework. Everyone wants your money – spend it wisely.
Well, that’s it. I hope I captured the most important aspects and that you enjoyed it and found it useful.
Occasionally you’ll need to test a specific Exchange functionality in the real world, and that demands a publicly registered domain, MX records, a public IP address and so on.
You may already have a publicly registered domain, or you may be using a service such as DynDNS or FreeDNS that gives you automatic and *free* DNS registration and a public (sub)domain name that is valid on the Internet and can be used with the dynamic IP address your ISP gives you.
Getting a SAN certificate, however, can be a showstopper: no one offers free, limited-time SAN certificate trials. SAN certificates are used extensively by Exchange from version 2007 onwards, and you need a globally trusted certificate if you want to test your server against public services.
What you can get is a free, trial Class 1 certificate – even for a full year if you go with StartCom – trusted across the public Internet and usable for testing with public services such as Office 365.
An Exchange system requires at least two namespaces:
– mail.yourdomain.tld (for example), used for access and SMTP transport.
– autodiscover.yourdomain.tld, used by the autodiscover service.
Here comes the trick: use autodiscover.yourdomain.tld instead of mail.yourdomain.tld. It’s a test environment, right? So it doesn’t matter whether you access your system via autodiscover.yourdomain.tld. Exchange doesn’t care either whether you use autodiscover.yourdomain.tld for access and transport on top of the autodiscover service for discovery and automatic configuration. It will not whinge if you configure, say, your external OWA with https://autodiscover.yourdomain.tld/owa – it will work just fine. And so will your test ADFS and DirSync with Office 365.
All you have to do is this:
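In other words, configure every external URL around the single autodiscover name. A hedged sketch for a test lab – the server name “EX01” and “yourdomain.tld” are placeholders:

```powershell
$fqdn = "autodiscover.yourdomain.tld"

# Point all client access URLs at the one name on the Class 1 certificate.
Get-OwaVirtualDirectory -Server EX01 | Set-OwaVirtualDirectory -ExternalUrl "https://$fqdn/owa"
Get-EcpVirtualDirectory -Server EX01 | Set-EcpVirtualDirectory -ExternalUrl "https://$fqdn/ecp"
Get-WebServicesVirtualDirectory -Server EX01 | Set-WebServicesVirtualDirectory -ExternalUrl "https://$fqdn/EWS/Exchange.asmx"
Get-OabVirtualDirectory -Server EX01 | Set-OabVirtualDirectory -ExternalUrl "https://$fqdn/OAB"
Get-ActiveSyncVirtualDirectory -Server EX01 | Set-ActiveSyncVirtualDirectory -ExternalUrl "https://$fqdn/Microsoft-Server-ActiveSync"
```

Autodiscover itself keeps working unchanged, since the name is exactly the one the service expects anyway.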
There you have it, enjoy your testing.
Have you ever found yourself in a situation where a mailbox database was dismounted because the log files filled up the disk? Was it caused by an unresolved backup issue, or did someone just forget to turn on circular logging before a migration? Did it hit when email was needed most? What did you do to restore service a.s.a.p., and how long did it take?
Provisioning additional storage and expanding the disk, or moving the logs to a larger volume and reconfiguring the paths on your databases, are both valid approaches. However, they take a relatively long time when you’re already under the pump.
Here is a less conventional trick which takes less than 5 minutes to restore service. It served me well a couple of times when I was called in to help.
The idea is simple: there are still a couple of kilobytes of free space left after the database dismounts – just enough to start compressing log files one by one. NTFS compression has been around long enough, but here are a couple of screenshots as a refresher:
Open the Properties of the file:
Open advanced attributes:
Enable compression and close all dialogs:
DO NOT compress the entire volume or the folder holding the files!
As space is freed up, increase the number of files selected for compression, until there is sufficient space to mount the database. It shouldn’t take more than 5 minutes to get to this point.
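If clicking through Properties dialogs one file at a time is too slow, the same trick can be scripted with compact.exe. A sketch – the log folder path is a placeholder, and note again that it compresses individual files, never the folder itself:

```powershell
# Compress the 100 oldest transaction logs in the database's log folder.
# E:\ExchangeLogs\DB01 is a placeholder path.
Get-ChildItem "E:\ExchangeLogs\DB01\*.log" |
    Sort-Object LastWriteTime |
    Select-Object -First 100 |
    ForEach-Object { compact.exe /C $_.FullName }
```

Start with the oldest logs: they are closed files, so compressing them cannot interfere with the current log generation.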
From here you have two options:
1 – Temporarily enable circular logging on the database so the accumulated log files are discarded, then fix the underlying backup issue.
2 – Compress as many log files as you think may be necessary to buy you time to run a full Exchange server backup. You can quickly install the Windows Server Backup feature if your everyday backup solution is broken.
NB #1: A full backup may take a very long time, make sure you free up sufficient space so that the disk doesn’t fill up again before the backup is complete.
NB #2: You must do a full, volume-level backup of both the database and log drives in a single backup job to have the logs flushed. Click here for details.
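As a sketch of that emergency full backup – assuming the database sits on D: and the logs on E: (the drive letters and backup target are placeholders):

```powershell
# Install the Windows Server Backup feature if it is missing
# (the feature name varies by OS version; this is the 2008 R2 form).
Import-Module ServerManager
Add-WindowsFeature Backup-Features

# Full VSS backup of BOTH the database (D:) and log (E:) volumes in a
# single job, so Exchange truncates the logs when the backup completes.
wbadmin start backup -backupTarget:\\backupserver\exchbackup -include:D:,E: -vssFull -quiet
```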
The beauty of this trick is that file compression frees up much needed space very quickly. Once there is sufficient space, the database can be mounted, thus users can resume work. Once the pressure is off, you can concentrate on fixing the core issue undisturbed.
Additionally, individually compressed log files will disappear automatically and forever as soon as circular logging kicks in, or a full backup is done. New log files are created uncompressed. Pressure on the system is therefore temporary and minimal.
And last but not least, you’ll be the hero of the day.
Sometimes you may want to know how the autodiscover information returned by the Exchange server is affected by various configuration changes. Not having access to Microsoft Remote Connectivity Analyser, or not having a publicly registered domain name, can hinder your efforts.
Here is how you can query your Exchange server for autodiscover information in an isolated test lab. Steps 1 and 2 require that the test workstation is connected to the Internet before it is moved to the isolated lab network.
1. Download and install Firefox if you don’t have it already installed.
2. Download and install the HttpRequester plugin from here.
3. Connect the workstation to the isolated lab network.
4. Create an XML file with the following content, replacing user@contoso.com with a valid test account on your Exchange server. More details here.
<Autodiscover xmlns="http://schemas.microsoft.com/exchange/autodiscover/outlook/requestschema/2006">
  <Request>
    <EMailAddress>user@contoso.com</EMailAddress>
    <AcceptableResponseSchema>http://schemas.microsoft.com/exchange/autodiscover/outlook/responseschema/2006a</AcceptableResponseSchema>
  </Request>
</Autodiscover>
5. Open Firefox. On the Tools menu select HttpRequester:
6. In the URL field, type the autodiscover URL as configured in your Exchange server. In the Content field copy and paste the content of the XML file prepared in step 4. Alternatively load the file itself.
7. Click POST. You’ll be prompted for credentials of the user specified in the XML file, then Exchange will return the autodiscover information.
NOTE: If you get an empty response then likely Firefox doesn’t trust the Exchange server certificate – it’s an isolated lab, so you’ll likely be using an untrusted, local CA. Make sure you import your CA’s root certificate into Firefox. Firefox maintains its own certificate store, separate from the system store that’s used by Internet Explorer, so don’t use the Certificates MMC snap-in.
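If you’d rather skip the browser altogether, a rough PowerShell 3.0+ equivalent of steps 5-7 looks like this (the URL is a placeholder; the lab CA root must be trusted by Windows, since Invoke-WebRequest uses the system certificate store, not Firefox’s):

```powershell
# Placeholder autodiscover URL; adjust to your lab.
$url  = "https://autodiscover.yourdomain.tld/autodiscover/autodiscover.xml"
$body = Get-Content .\request.xml -Raw   # the XML file prepared in step 4
$cred = Get-Credential                   # the test user named in the XML

# POST the request; the response body is the autodiscover XML.
$response = Invoke-WebRequest -Uri $url -Method Post -Body $body -ContentType "text/xml" -Credential $cred
$response.Content
```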
Enjoy!
With the option of keeping active mailboxes on-premises and moving archives to the cloud, there is a player that is often overlooked: the Dumpster. When a mailbox is moved, its dumpster is moved with it – and in some cases the dumpster is many times larger than the mailbox itself. The dumpster is also a strong player when company policy regarding data retention is involved.
As long as the message size limits of the on-premises system haven’t been modified, life is good. However, as soon as we start pushing them up, expect the unexpected.
I was recently working on an archive migration project. Moved a couple of pilot archive mailboxes. Most of them failed. The reason: TooManyLargeItemsPermanentException.
First thought: use Search-Mailbox to find, export and delete large items. Forget it, Search-Mailbox cannot search for items based on size. More on that here.
Next thought: Empty the Archive dumpster. The active and archive mailboxes have their own dumpster, easily seen by comparing the output of these commands:
Get-MailboxStatistics user_ID | ft DisplayName,DeletedItemCount,TotalDeletedItemSize -AutoSize
Get-MailboxStatistics user_ID -Archive | ft DisplayName,DeletedItemCount,TotalDeletedItemSize -AutoSize
So you want to empty the archive dumpster, right? Wrong.
Roadblock #2: You cannot empty the archive dumpster without flushing the active mailbox dumpster also. If you know of a way then I welcome corrections. Please post it in a comment.
Now you’re stuck because you don’t have a tool to find and handle large items, you haven’t got selective access to the archive dumpster only, and you aren’t allowed to flush the active dumpster because it affects users’ ability to recover items.
What can be done in this situation? If you’re already in the middle of the project, you may well be in hot water, and the project might have to be cancelled, depending on a combination of factors.
Best is to try not getting into this predicament in the first place. Before you start the implementation, ensure that:
Here are a couple of things to be aware of, learned from experience:
I deliberately refrained from mentioning the LargeItemLimit parameter of the New-MoveRequest cmdlet. It is a viable option as long as data loss is acceptable. Again, check with the business.
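For illustration, a hedged sketch of such a move request – the identities, endpoint and limits are placeholders, and anything skipped via LargeItemLimit is permanently lost, which is exactly the data-loss decision to clear with the business first:

```powershell
$cred = Get-Credential   # on-premises migration administrator

# Onboard the archive only, tolerating up to 20 large and 10 bad items.
# A LargeItemLimit above 50 would additionally require -AcceptLargeDataLoss.
New-MoveRequest -Identity "jsmith" -ArchiveOnly `
    -Remote -RemoteHostName "mail.yourdomain.tld" -RemoteCredential $cred `
    -TargetDeliveryDomain "contoso.mail.onmicrosoft.com" `
    -LargeItemLimit 20 -BadItemLimit 10
```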
Exchange Online Archive is a viable option for those who want to lower their TCO. However it is insufficiently supported and it needs some work to streamline its deployment. On its marketing page I’d like to see sections such as “Is it right for me?” and “What to watch out for”. While the information is available on the Internet, it is scattered and hard to find. This article is an attempt to help in this respect.
The toolset for effective management of a successful end-to-end Exchange Online Archive migration is incomplete. Many thanks go to Michael for writing up and freely sharing his script, filling a gap in this area: it gives visibility and lets you work your way around roadblocks that would otherwise be very difficult to navigate.
De-coupling settings and management of archive mailboxes from active ones would be a great step forward. It would be nice to have the ability to flush the archive dumpster only, without affecting the active mailbox dumpster. We are not there yet.
An even bigger improvement would be the addition of item size as a query criterion to the Search-Mailbox cmdlet. Maybe in a future UR/CU/SP…
Until then we’ll need to be as diligent as possible and identify anything that has the potential to cause the project to fail.
Until later.