Cannot Override Default Archive Mailbox Quotas

I had a customer who was unhappy with the default 50Gb archive mailbox quota. The goal was to reduce clutter, so the client decided to set a lower quota on archive mailboxes, effectively forcing users to clean up their mailboxes.

Since the Set-MailboxDatabase cmdlet has no option to change or set the default archive quota on the database, I searched the web for existing solutions to avoid reinventing the wheel. However, all I could find were scheduled scripts that set quotas at the mailbox level. Nothing at the database level.

I set out to check my options. The logical first stop was ADSI Edit. I fired it up and opened the properties of the mailbox database.


Sure enough, I found two attributes: msExchArchiveQuota and msExchArchiveWarnQuota. They were not set. (Note that these attributes exist on both mailbox database and user objects.) I thought I had solved the problem. I set new limits on both attributes and started testing – notice that my new limits in the following screenshot have one zero less than the default.


However, after some testing, my enthusiasm suffered a blow. Things I tried:

  • Moved existing archive mailboxes to the database with custom settings.
  • Created a new archive mailbox for a user who hadn’t had one before, specifying the archive mailbox database with the custom settings.
  • Created a new AD user with a mailbox and its associated archive mailbox, specifying the archive mailbox database with the custom settings.

Whatever I did, nothing helped: the archive mailbox quota still showed the default values on the users’ mailboxes.

What I found is that during the provisioning process, the user’s msExchArchiveQuota and msExchArchiveWarnQuota properties are pre-populated with the out-of-the-box default values of (approximately) 50Gb and 45Gb, regardless of whether the user had an archive mailbox before, and regardless of the fact that the archive mailbox was created in a database with custom archive quota settings. There is no documented way (or at least I couldn’t find one) to change this behaviour.
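The behaviour is easy to confirm right after provisioning: the user-level quota properties come back with the defaults, not the database values. A quick check (the user name below is a placeholder):

```powershell
# Inspect the archive quotas stamped on a freshly provisioned user.
# "jsmith" is an example; expect the ~50Gb/45Gb defaults rather than
# the custom values set on the database object.
Get-Mailbox jsmith | Format-List ArchiveQuota,ArchiveWarningQuota
```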

This was tested in Exchange 2010 SP3 UR5.

The bottom line: it looks like my only option is what I wanted to avoid – scheduled scripts.
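A minimal sketch of such a scheduled script, assuming the quota values from my test (one zero less than the defaults) – run it periodically so newly provisioned archives get covered too:

```powershell
# Stamp custom archive quotas on every archive-enabled mailbox.
# Quota values are examples; schedule this (e.g. nightly via Task
# Scheduler) since provisioning keeps applying the defaults.
Get-Mailbox -Archive -ResultSize Unlimited |
    Set-Mailbox -ArchiveQuota 5GB -ArchiveWarningQuota 4.5GB
```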

Exchange Online in Shared and Virtual Desktop Environments

Virtual and shared desktop solutions are widely used for obvious benefits such as consolidation, centralised control and management, and remote workforce enablement – just to name a few – all of which lead to increased productivity and profitability.

However, with cloud (a.k.a. hosted) services becoming more popular, businesses that implemented a shared or virtual desktop solution often find that it doesn’t play well with the cloud. While the most popular hosted Exchange solution is Microsoft’s Exchange Online, the principles in this article apply equally to all hosted Exchange solutions where the mailbox is remote from the user.

I have seen instances where impact on user productivity was significant, even dramatic, in a bad way. In one instance a customer admitted that he faced the chop if the situation didn’t improve quickly.

In order to understand the underlying issue, we need to look at the way Outlook accesses user data. Then we’ll discuss its effects in a virtual/shared desktop environment. Finally we’ll analyse how bringing a hosted, remote Exchange mailbox into the mix affects infrastructure, user experience and ultimately productivity.

If you understand the basics and want to cut straight to the chase then skip the intro and jump to the Desktop Virtualization and Remote Exchange Mailboxes section.

I will purposely avoid delving into deep technical matters. Instead I will present the case from a higher level, in an attempt to make the content accessible to less technical readers too, with the aim of helping everyone make an informed decision when it comes to adopting a hosted (a.k.a. cloud) solution.

IMPORTANT: I am using the terms “shared” and “virtual” desktop interchangeably throughout this document. They are rather different technologies with different use cases. However from the perspective of how the Outlook client works, the concepts in this article apply to both. Each implementation needs to be analysed individually.

Outlook Concepts

Let’s look at Outlook first. In the context of this article, we’ll only look at how Outlook accesses a user’s mailbox: there are two options, online mode and cached mode.

In online mode Outlook stores no user data locally on the user’s computer. The data sits on the Exchange server and Outlook serves as an interface for the user to interact with the data. Effectively all the hard work is done by the Exchange server, so computer resources such as disk space and processing power on the user’s desktop or laptop are available to the user for running his/her important, and often demanding, business applications. While online mode is great for “outsourcing” much of the rendering and e-mail processing to the server so that more power is reserved for user applications, it has two major disadvantages:

  1. Travelling users who aren’t connected to the Exchange server will have no access to their e-mails. Therefore their productivity will degrade and time is wasted.
  2. All the hard work is done by the Exchange server, so it will be hit hard when many users request services concurrently. One well-known issue is the 5,000-item limit in the so-called “critical folders”, dating back to Exchange 2003 and well documented in various blogs and by Microsoft. Newer versions of Exchange raised the limits, but a limit exists regardless. Additionally, since data needs to be pulled across the network every time it is accessed, there will be increased stress on the network infrastructure. Typically pulling data across the wire is not an issue in a traditional environment. However, when remote users connect via a VPN or Outlook Anywhere across low-bandwidth WAN connections in online mode, they’ll feel the pain.

Cached mode attempts to mitigate the disadvantages of online mode. Essentially, the first time the user opens his/her mailbox after the Outlook profile is created, Outlook downloads (or caches) the entire mailbox to the local computer. It is very important to understand that Outlook does this every time the Outlook profile is recreated – we’ll touch on the implications later in this article. Cached mode brings the following benefits:

  1. Disconnected users such as road warriors can access items from the local cache. They can read and compose emails, review calendars, work on tasks – in other words, they remain productive even when they are disconnected from the Exchange server.
  2. The server can breathe easy: once content has been downloaded to the user’s computer, the server no longer has to serve it because it is accessed from the user’s local cache. Thus the server has more resources left for other tasks, and network traffic is reduced.

Overall, in a traditional setup, cached mode is a win-win situation: users will be more productive because their data is always available and the entire system is more responsive.

IMPORTANT: This article is solely concerned with user experience and architectural considerations. Issues such as the security implications of a lost or stolen laptop with the CEO’s entire cached mailbox on it are not discussed.

Outlook in Virtual and Shared Desktop Environments

When deploying a virtual or shared desktop solution, a number of things will affect the way Outlook will be configured. To better illustrate the point, we’ll start off by looking at how cached mode impacts the system and users.

It is important to understand that one of the main features of shared and virtual desktop solutions is to consolidate as many users as possible on as little hardware as practical. In a Citrix XenApp farm it is not unusual to see 50-100 users logged on to and working simultaneously on a single XenApp server, sharing memory, processing power and storage. Let’s get into it.

  1. Cached content is stored locally where the user is logged on. Let’s assume that we have 1,000 users, each with a 1Gb mailbox, and there are 10 Citrix XenApp servers in the farm. Given that generally users have no control over which XenApp server they log on to, it is a fair assumption that, over time, every user will log on to, and thus have a local profile on, every XenApp server. The disk space required to store cached mailboxes will therefore be
    1,000 users X 1Gb/mailbox X 10 servers = 10,000Gb (or 10Tb)
    Considering that the same content is also stored on the Exchange server, that is a redundancy factor of 11 – each user will have 11 copies of the same data. I don’t know about you, but I wouldn’t finance storing the same stuff in 11 different places.
  2. Effects on commissioning a new server can be dramatic. Imagine that a XenApp server is replaced with a new server. As soon as users log on and open Outlook, a local cache is created and content is downloaded from the Exchange server. With 100 users on the server, 100Gb worth of data will be transferred from the Exchange server in a relatively short time. That will not go unnoticed: the data has to be retrieved, prepared and served by the Exchange server, transferred over the network, and processed by the XenApp server. Therefore it will impact performance at every level.
  3. User profile corruption has effects similar to commissioning a new server. Depending on various factors, profile corruption may affect a single user or all users on a server. It is not uncommon for administrators to wipe users’ profiles and start from scratch – including the creation of a new local cache and thus a new wave of performance issues while things settle. This is a day-to-day operational risk and it will happen every now and then.
  4. Access to files impacts performance. In cached mode files are stored locally, therefore they will be served locally. The disks in a XenApp server have a limit in terms of how many requests they can serve per second, referred to as IOPS (Input/Output Operations per Second). It is not an issue if there is only one user, on a laptop or desktop computer for instance. Things change however when 100 users start hammering the same disk with requests for e-mail content, in addition to the server’s usual disk access requirements. The effect can be severe and it is documented, among others, in a Citrix knowledge base article here.
  5. Storing Outlook cached content on the network is a bad idea. You might think that having a single, central store for cached Outlook content will improve performance and lower storage requirements. Wrong. Due to the way Outlook stores and accesses cached content (stored in OST files), it is unsuitable for storage on a network share, even one on the same network segment. The above Citrix article makes this point, and the same is spelled out in a Microsoft article here. In fact the Microsoft article states that storing OST and PST files on a network share is not supported unless:
    • The client is running Outlook 2010 or 2013.
    • A high-bandwidth, low-latency network connection is used.
    • There is a single client access per file (a.k.a. no multiple Citrix sessions by the same user concurrently accessing the same cache – recipe for corrupting the cache and causing further issues).
    • Windows Server 2008 R2 Remote Desktop Session Host or Virtual Desktop Infrastructure is used.

    If you disregard the recommendations and store the cache on a network share in an unsupported environment anyway, you risk Microsoft turning its back on you when you call for help – regardless of what third-party vendors say. Full stop. There is a lot of information on dos and don’ts in the Microsoft article; I encourage everyone interested to read it very carefully.

To avoid all this pain, authoritative sources such as Microsoft and Citrix advise using Outlook in online mode when deployed in a shared/virtual desktop solution. In fact Outlook cached mode wasn’t supported in such environments until recently – see the announcement here.

And last, but not least, don’t forget about your legal obligations: if you rely on the Office Pro Plus package that comes with your Office 365 subscription, then you aren’t allowed to install it in a shared desktop environment. Virtual desktop environments, however, may be OK as long as certain conditions are met. From “Determine the deployment method to use for Office 365 ProPlus”:

You can deploy Office 365 ProPlus to a virtual desktop, but the virtual desktop must be assigned to a single user.

To use Remote Desktop Services, you must use a volume license version of Office Professional Plus 2013, which is available on the Volume Licensing Service Center (VLSC). The Office programs that are included with Office Professional Plus 2013 are the same programs that are included with Office 365 ProPlus. For more information, see Microsoft Volume Licensing Product Use Rights.

Therefore if you want to deploy a Citrix-like environment (shared desktop) and use it to access Office 365 mailboxes, then you’ll end up paying twice for your Office software.

Now that we have covered the basics, we have arrived at the main point of this article. Let’s see how cloud (hosted) services affect your business when thrown into the mix.

Desktop Virtualization and Remote Exchange Mailboxes

We established that the only supported and technically feasible option in a shared/virtualised desktop solution is Outlook online mode. For the sake of consistency we’ll consider a Citrix XenApp environment.

When accessing a remote mailbox in online mode the following will (NOT “might”) happen:

  1. WAN link flooding #1: Whenever content is accessed, it must be transferred from the hosting provider to the user’s session on the Citrix server. Factors such as limited bandwidth and the size of the e-mail exacerbate the situation, even more so when all 1,000 users do the same over a 10Mbps WAN link (not every business has access to Gb-speed WAN links). Besides e-mail users having to wait for their content to arrive, other users who share the same link, such as remote workers, will also suffer. I witnessed a case where a one-paragraph, text-only e-mail took 5 minutes to compose and send. Imagine the pain inflicted on (and by) marketing users, for instance, who deal with multi-Mb, high-res graphics attachments on a regular basis.
  2. WAN link flooding #2: If you have users on stand-alone, dedicated desktop or laptop computers and you have (rightfully) configured them in cached mode, the first thing Outlook does when a new profile is created is cache the mailbox. Considering that these days hosted mailboxes can be ridiculously large – tens of Gb for Office 365 E-class accounts – caching (downloading) that content to the user’s local computer over a 10Mbps WAN link (or even a 50Mbps link) will take a very long time. One user can effectively disable an entire business’ connection to the Internet for as long as the transfer lasts. Add another 10 users who just had their laptops replaced in one batch as part of the hardware refresh cycle, and you quickly realise that you have a problem. Don’t try to connect via phone tethering – I won’t tell you why; you’ll surely figure it out when you get your data usage bill from your mobile provider.
  3. Cowboys do exist: No matter what respected sources and common sense say, there will always be administrators who disregard the “no caching in Citrix” advice. Whether due to ignorance, cowboyish attitude or sheer defiance is irrelevant. They will enable cached mode on all 10 XenApp servers in the farm for all 1,000 users. With the above example of a 1Gb average mailbox size, that results in a massive 100Gb download on just one server, or a combined total of 10Tb as each user logs on to each server in the farm over time. All that across your skinny 10 or even 50Mbps WAN link. Profiles get corrupted, servers are replaced, and it starts all over again – in fact the pain never goes away. Your business will be in a never-ending, agonising grip which will cripple its ability to make money. On a positive note, workarounds are emerging which may alleviate some of the pain. For instance, Outlook 2013 introduced a cache management feature which allows you to set how much data is stored in the local cache at any time. See this TechNet article for more details.
  4. “Smart” cowboys also exist: These admins argue that if they configure Outlook in cached mode and download only message headers, then cached data will consume insignificant space on the XenApp servers and the caching process will complete in no time with very little impact. While they have a point, this reasoning ignores the fact that users generally open almost every e-mail, so content will be downloaded and cached anyway: bandwidth will be consumed and the cache will bloat to a size comparable to a full cache. Additionally, it only takes one misconfigured setting and all your disk space disappears: the cache fills up the XenApp servers’ system drives, taking out the servers and with them the majority of users, after a choked WAN has already upset the entire user base.
  5. Alternatives aren’t better either: I’ve seen one instance recently where a business was happy to give up Outlook in favour of Outlook Web App. Instantly the issue of cached vs. online mode becomes irrelevant. However, content still has to be pulled across the wire every time. Additionally, users accustomed to a zippy “click-and-behold” experience in Outlook suddenly find that the OWA interface not only lacks popular features that exist in Outlook, but doesn’t even come close to the speed they were used to, not even over a largely idle WAN connection. Even if these limitations were acceptable, it greatly limits the business’ future ability to deploy applications that require Outlook, such as document management or other functionality normally implemented by means of Outlook plugins, which clearly cannot be employed when using OWA.

In summary: Outlook in cached mode is not recommended, yet working in online mode can be just as debilitating. The outcome of both options is an unhappy, frustrated and unproductive user base, and soon the business will start feeling the pinch.

Give Me Options Please

If you are already in this predicament, and putting up with the pain is not an option, then realistically there are very few options. None of them is ideal, as you will see. You are between a rock and a hard place.

I will try to quickly summarize them.

  1. Ditch your shared/virtual desktop infrastructure. Replace it with desktops or laptops, one for every user. That will solve the Outlook cache dilemma: having Outlook in cached mode will be fully supported, and once the content is cached, it will be the fastest solution you can get.
    The catch:
    • You’ll have to buy or lease, then maintain, all these devices. It is an additional technological, financial and operational burden which you may not be willing to absorb.
    • Your investment in the shared/virtual desktop infrastructure goes up in smoke. The board will not like it.
    • It doesn’t solve the problem of choking the WAN link, either when caching the content or accessing it online. Users, and ultimately the board, will not like it.
    • It will affect the remote workforce in a big way.
  2. Go for a hybrid deployment. Set up ADFS/SSO and install an on premise hybrid Exchange server. Move VIP, heavy and noisy/influential users’ mailboxes out of the cloud back onto the on premise server. Keep light and accommodating users in the cloud.
    The catch:
    • You went cloud because you no longer wanted to manage your Exchange server, right? Now you not only have to manage the on premise server *again*, but your cloud environment and the integration between the two as well. Double trouble.
    • While Office 365 allows such a configuration, other providers may not, so it may not even be an option.
    • You already spent a truckload of money on migrating to the cloud. Now you’ll have to spend more to bring it back on premise. You’ll need to explain to the board why you spent all that money only to add complexity, dragging the entire business through a painful experience for little if any benefit.
  3. Ditch the cloud. Bring your Exchange infrastructure fully back on premise. All the bad things associated with a remote mailbox will be gone and users can start being productive again.
    The catch:
    • The experience will leave a bitter taste in many users’ mouths. You may lose some of them because they cannot get over the experience. That always comes at a cost.
    • If you happened to be the champion of this cloud (mis)adventure, then you likely realise that you caused a massive loss to the business with no real benefit, and you’ve put everyone through unnecessary pain. It can be a debilitating situation, a potential career limiting factor.
  4. Embrace the cloud fully. Move your shared/virtual desktop architecture fully into the cloud so users aren’t limited by your WAN link bandwidth. If you move it into Microsoft Azure, Amazon AWS or similar services, “theoretically” bandwidth isn’t an issue, so Outlook in online mode “theoretically” will perform well.
    CAVEAT: I am yet to see such a deployment. This option, while technically conceivable, is still purely theoretical as far as my personal experience goes. Surely, the way technology moves, it may be a perfectly valid and tested option in a couple of years.
    The catch:
    • You have to move supporting infrastructure along with the shared desktop solution, including SQL servers, domain controllers, backup, etc.
    • You will want to keep user data close to where it is processed. If it is processed in the cloud then it has to live in the cloud. You don’t want to move users’ desktops to a Microsoft Azure-based Citrix server only to have files pulled across your skinny WAN link from your in-house file server.
    • Dig deep into your pocket. IaaS costs can sky-rocket as you start bringing in more and more infrastructure. This will change in the future, but for SMBs/SMEs it is probably still unfeasible.

Again, I want to emphasise that I am neither for nor against any particular solution. Do your due diligence. Analyse the financial, technical, operational and human/social impact of a solution before you adopt it. Ask questions – as many as you can. If you don’t know what to ask, get help. Don’t be afraid to ask the cloud vendor the difficult questions. Get a second opinion, or for that matter, get many “second opinions”. Seek feedback from adopters with similar needs and architecture. Don’t fall for fashionable buzzwords; make sure you understand fully what they mean. Do your homework. Everyone wants your money – spend it wisely.

Well, that’s it. I hope I captured the most important aspects and that you enjoyed it and found it useful.

How to Use a Free SSL Certificate for Exchange Testing

Occasionally you’ll need to test a specific Exchange functionality in the real world, and that demands a publicly registered domain, MX records, a public IP address and so on.

You may already have a publicly registered domain, or perhaps you use a service such as DynDNS or FreeDNS that gives you automatic and *free* DNS registration and a public (sub)domain name that is valid on the Internet and can be used with the dynamic IP address your ISP gives you.

Getting a SAN certificate, however, can be a showstopper: no one offers free, limited-time SAN certificate trials. SAN certificates are used extensively by Exchange from version 2007 onwards, and you need a globally trusted certificate if you want to test your server against public services.

However, you can get a free trial Class 1 certificate – even for a full year if you go with StartCom – that is trusted across the public Internet and can be used for testing with public services such as Office 365.

An Exchange system requires at least two namespaces:

– mail.yourdomain.tld (for example), used for access and SMTP transport.
– autodiscover.yourdomain.tld, used by the autodiscover service.

Here comes the trick: use autodiscover.yourdomain.tld instead of mail.yourdomain.tld. It’s a test environment, right? So it doesn’t matter whether you access your system via autodiscover.yourdomain.tld. Exchange doesn’t care either whether you use autodiscover.yourdomain.tld for access and transport on top of the autodiscover service for discovery and automatic configuration. It will not whinge if you configure, say, your external OWA with https://autodiscover.yourdomain.tld/owa – it will work just fine. And so will your test ADFS and DirSync with Office 365.

All you have to do is this:

  • Configure all external (and/or internal) Exchange Web URLs to use autodiscover.yourdomain.tld, such as https://autodiscover.yourdomain.tld/owa.
  • Obtain a free, single name Class 1 SSL certificate for autodiscover.yourdomain.tld, install it on your test Exchange server and assign it to Exchange services.
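As a rough sketch in the Exchange Management Shell – the server name, virtual directory identities and certificate thumbprint below are placeholders for your own environment:

```powershell
# Point the external Exchange URLs at the autodiscover host name.
$fqdn = "autodiscover.yourdomain.tld"
Set-OwaVirtualDirectory "EX01\owa (Default Web Site)" -ExternalUrl "https://$fqdn/owa"
Set-EcpVirtualDirectory "EX01\ecp (Default Web Site)" -ExternalUrl "https://$fqdn/ecp"
Set-WebServicesVirtualDirectory "EX01\EWS (Default Web Site)" -ExternalUrl "https://$fqdn/EWS/Exchange.asmx"
Set-OabVirtualDirectory "EX01\OAB (Default Web Site)" -ExternalUrl "https://$fqdn/OAB"

# Assign the imported single-name certificate to Exchange services.
Enable-ExchangeCertificate -Thumbprint "<thumbprint>" -Services IIS,SMTP
```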

There you have it, enjoy your testing.

How to Mount Mailbox Database Quickly

Have you ever found yourself in a situation where a mailbox database was dismounted because the log files filled up the disk? Was it caused by an unresolved backup issue, or did you just forget to turn on circular logging before a migration? Did it hit when e-mail was needed most? What did you do to restore service a.s.a.p., and how long did it take?

Provisioning additional storage and expanding the disk, or moving the logs to a larger volume and reconfiguring the paths on your databases, are both valid approaches. However, they take a relatively long time when you’re already under the pump.

Here is a less conventional trick which takes less than 5 minutes to restore service. It served me well a couple of times when I was called in to help.

The idea is simple: there are still a few kilobytes of free space left after the database is dismounted – just enough to start compressing log files one by one. NTFS file compression has been around long enough, but here are a couple of screenshots as a refresher:

Open the Properties of the file:


Open advanced attributes:


Enable compression and close all dialogs:


DO NOT compress the entire volume or the folder holding the files!

As space is freed up, increase the number of files selected for compression, until there is sufficient space to mount the database. It shouldn’t take more than 5 minutes to get to this point.
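If clicking through Explorer file by file is too slow, the same per-file compression can be scripted; the log path and file count below are examples for your own situation:

```powershell
# Compress the oldest transaction logs first, one file at a time.
# DO NOT compress the whole folder or volume - only individual files.
Get-ChildItem "D:\ExchLogs\DB01\*.log" |
    Sort-Object LastWriteTime |
    Select-Object -First 50 |
    ForEach-Object { compact /C $_.FullName | Out-Null }
```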

From here you have two options:

    1. Turn on circular logging on the database. As soon as the database is mounted, the logs will be flushed. Disable circular logging again as soon as practical.

    2. Compress as many log files as you think necessary to buy yourself time to run a full Exchange server backup. You can quickly install the Windows Server Backup feature if your everyday backup solution is broken.
    NB #1: A full backup may take a very long time; make sure you free up sufficient space so that the disk doesn’t fill up again before the backup completes.
    NB #2: You must do a full, volume-level backup of both the database and log drives in a single backup job to have the logs flushed. Click here for details.
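Option 1 can be sketched in the Exchange Management Shell as follows (the database name is a placeholder):

```powershell
# Temporarily enable circular logging so that mounting the database
# flushes the accumulated logs.
Set-MailboxDatabase "DB01" -CircularLoggingEnabled $true
Mount-Database "DB01"

# ...later, once the backup issue (the root cause) is fixed:
Set-MailboxDatabase "DB01" -CircularLoggingEnabled $false
```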

The beauty of this trick is that file compression frees up much needed space very quickly. Once there is sufficient space, the database can be mounted, thus users can resume work. Once the pressure is off, you can concentrate on fixing the core issue undisturbed.

Additionally, individually compressed log files will disappear automatically and forever as soon as circular logging kicks in, or a full backup is done. New log files are created uncompressed. Pressure on the system is therefore temporary and minimal.

And last but not least, you’ll be the hero of the day.

Testing Autodiscover in Isolated Environments

Sometimes you may want to know how the autodiscover information returned by the Exchange server is affected by various configuration changes. Not having access to Microsoft Remote Connectivity Analyser, or not having a publicly registered domain name, can hinder your efforts.

Here is how you can query your Exchange server for autodiscover information in an isolated test lab. Steps 1 and 2 require that the test workstation is connected to the Internet before it is moved to the isolated lab network.

1. Download and install Firefox if you don’t have it already installed.

2. Download and install the HttpRequester plugin from here.

3. Connect the workstation to the isolated lab network.

4. Create an XML file with the following content, replacing the e-mail address with a valid test account on your Exchange server. More details here.

<?xml version="1.0" encoding="utf-8"?>
<Autodiscover xmlns="http://schemas.microsoft.com/exchange/autodiscover/outlook/requestschema/2006">
  <Request>
    <EMailAddress>testuser@yourdomain.tld</EMailAddress>
    <AcceptableResponseSchema>http://schemas.microsoft.com/exchange/autodiscover/outlook/responseschema/2006a</AcceptableResponseSchema>
  </Request>
</Autodiscover>

5. Open Firefox. On the Tools menu select HttpRequester:

Open HttpRequester

6. In the URL field, type the autodiscover URL as configured in your Exchange server. In the Content field copy and paste the content of the XML file prepared in step 4. Alternatively load the file itself.

7. Click POST. You’ll be prompted for credentials of the user specified in the XML file, then Exchange will return the autodiscover information.

Fields and Results

NOTE: If you get an empty response then likely Firefox doesn’t trust the Exchange server certificate – it’s an isolated lab, so you’ll likely be using an untrusted, local CA. Make sure you import your CA’s root certificate into Firefox. Firefox maintains its own certificate store, separate from the system store that’s used by Internet Explorer, so don’t use the Certificates MMC snap-in.
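If your lab workstation has PowerShell 3.0 or later, the same POST can be issued without Firefox at all; the URL and file name below are examples, and the certificate-validation bypass is a lab-only shortcut:

```powershell
# Lab-only: skip certificate validation for the untrusted lab CA.
[Net.ServicePointManager]::ServerCertificateValidationCallback = { $true }

# Credentials of the test account referenced in the XML file.
$cred = Get-Credential

# POST the autodiscover request XML and show the raw response.
Invoke-WebRequest -Uri "https://autodiscover.yourdomain.tld/autodiscover/autodiscover.xml" `
    -Method Post -ContentType "text/xml" -InFile .\request.xml -Credential $cred |
    Select-Object -ExpandProperty Content
```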


Pitfalls to Avoid in an Exchange Online Archive Migration

With the option of keeping active mailboxes on premise and moving archives to the cloud, there is a player which is often overlooked: the dumpster. Not only is a mailbox’s dumpster moved along with the mailbox – and in some cases it is many times larger than the mailbox itself – it also comes into play when company data retention policy is involved.

As long as the message size limits of the on premise system haven’t been modified, life is good. However, as soon as we start pushing them up, expect the unexpected.

I was recently working on an archive migration project. Moved a couple of pilot archive mailboxes. Most of them failed. The reason: TooManyLargeItemsPermanentException.


First thought: use Search-Mailbox to find, export and delete large items. Forget it, Search-Mailbox cannot search for items based on size. More on that here.

Next thought: Empty the Archive dumpster. The active and archive mailboxes have their own dumpster, easily seen by comparing the output of these commands:

Get-MailboxStatistics user_ID | ft DisplayName,DeletedItemCount,TotalDeletedItemSize -AutoSize
Get-MailboxStatistics user_ID -Archive | ft DisplayName,DeletedItemCount,TotalDeletedItemSize -AutoSize


So you want to empty the archive dumpster, right? Wrong.

    Roadblock #1: In some cases company policy restricts flushing the dumpster. Emptying the dumpster against policy is a career limiting move. Don’t do it without prior consulting with the business.

    Roadblock #2: You cannot empty the archive dumpster without flushing the active mailbox dumpster also. If you know of a way then I welcome corrections. Please post it in a comment.

Now you’re stuck because you don’t have a tool to find and handle large items, you haven’t got selective access to the archive dumpster only, and you aren’t allowed to flush the active dumpster because it affects users’ ability to recover items.

What can be done in this situation? If you’re already in the middle of the project, you may well be in hot water, and depending on a combination of factors the project might have to be cancelled.

It’s best not to get into this predicament in the first place. Before you start the implementation, ensure that:

  • Message size limits haven’t been raised in the on-premises system beyond what Office 365 accepts.
  • Company policy allows flushing the dumpster if needed.
  • You’ve run Michael Hall’s large item discovery script (grab it from here) against the archive mailboxes – thanks Michael, your script is a lifesaver. You might be surprised by what you find. In one instance I found a mailbox with a 792MB item in it.
  • User impact is expected and, most importantly, accepted. As documented, users will be prompted for credentials when Outlook attempts to open the Exchange Online Archive mailbox, even if Single Sign-On is deployed. While users have the option to save the password, it may not go down well. Let them know in advance.
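The message size check above can be scripted. A minimal sketch, assuming an on-premises Exchange Management Shell session; compare the output against the current Exchange Online limits documentation, not against any number hard-coded here:

```powershell
# Org-wide transport limits on the on-premises side
Get-TransportConfig | Format-List MaxSendSize, MaxReceiveSize

# Per-mailbox overrides can exceed the org-wide values, so list those too
Get-Mailbox -ResultSize Unlimited |
    Where-Object { $_.MaxSendSize -ne 'Unlimited' -or $_.MaxReceiveSize -ne 'Unlimited' } |
    Format-Table Name, MaxSendSize, MaxReceiveSize -AutoSize
```

Remember that even with conservative transport limits, older items that predate a limit change can still exceed the Office 365 ceiling, which is why the discovery script below is still needed.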

Here are a couple of things to be aware of, learned from experience:

  • If Michael’s script finds items that won’t fit into an Office 365 mailbox, you’ll need to work with the end users to remove them. The script doesn’t export or delete items from mailboxes. While that is OK in small environments, it becomes an issue in large deployments with hundreds or thousands of users. Factor in additional time, cost and resources.
  • OK, you removed every large item. They are now in the dumpster. You cannot use Search-Mailbox or Michael’s script to manipulate the dumpster. Your options:

    • Wait until the items are flushed from the dumpster. Remember, there is no tool to check whether they were flushed, so it is a hit-and-miss exercise. It may also be impractical: in some cases I’ve seen the retention time set to 3 months or longer.
    • Flush the dumpster, but not without the blessing of the business. You confirmed this at the outset of the engagement, right?
  • You cannot flush the archive dumpster alone. While you can export it to a PST file and flush it in the process, that may not be an option. In one case a customer had neither sufficient space to store the PST files nor the bandwidth to push them across the WAN to a remote site. Provisioning additional storage would have cost an arm and a leg, as the environment was managed by an IaaS provider (watch out for *aaS, they tend to be expensive).
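For completeness, the usual way to flush a dumpster (the second option above, and only with the business’s blessing) is Search-Mailbox with the -SearchDumpsterOnly and -DeleteContent switches. Note the asymmetry behind Roadblock #2: the cmdlet offers -DoNotIncludeArchive to exclude the archive, but no switch to target the archive dumpster alone. A hedged sketch:

```powershell
# Flushes the Recoverable Items (dumpster) content – irreversible, get sign-off first
Search-Mailbox -Identity user_ID -SearchDumpsterOnly -DeleteContent -Force

# Compare before/after counts for both the active and the archive mailbox
Get-MailboxStatistics user_ID | ft DisplayName,DeletedItemCount,TotalDeletedItemSize -AutoSize
Get-MailboxStatistics user_ID -Archive | ft DisplayName,DeletedItemCount,TotalDeletedItemSize -AutoSize
```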

I deliberately refrained from mentioning the LargeItemLimit parameter of the New-MoveRequest cmdlet. It is a viable option as long as data loss is acceptable. Again, check with the business.
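If the business does accept the loss, a sketch of what such a move request might look like in a hybrid onboarding scenario (the host name, credential and domain values are placeholders; note that AcceptLargeDataLoss becomes mandatory once LargeItemLimit exceeds 50):

```powershell
# Run from the Exchange Online side of a hybrid deployment; skips up to 10
# oversized items, which are permanently left behind (i.e. lost)
New-MoveRequest -Identity user_ID -ArchiveOnly `
    -ArchiveDomain "contoso.mail.onmicrosoft.com" `
    -Remote -RemoteHostName "hybrid.contoso.com" `
    -RemoteCredential (Get-Credential) `
    -TargetDeliveryDomain "contoso.mail.onmicrosoft.com" `
    -LargeItemLimit 10
```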

Exchange Online Archive is a viable option for those who want to lower their TCO. However, it is insufficiently supported, and some work is needed to streamline its deployment. On its marketing page I’d like to see sections such as “Is it right for me?” and “What to watch out for”. While the information is available on the Internet, it is scattered and hard to find. This article is an attempt to help in this respect.

The toolset for managing a successful end-to-end Exchange Online Archive migration is incomplete. Many thanks go to Michael for writing up and freely sharing his script, which fills a gap in this area: it gives visibility and lets you work around roadblocks that would otherwise make for a very difficult exercise.

De-coupling settings and management of archive mailboxes from active ones would be a great step forward. It would be nice to have the ability to flush the archive dumpster only, without affecting the active mailbox dumpster. We are not there yet.

An even bigger improvement would be the addition of item size as a query criterion to the Search-Mailbox cmdlet. Maybe in a future UR/CU/SP…

Until then we’ll need to be as diligent as possible and identify anything that has the potential to cause the project to fail.

Until later.