Common Client Access Considerations for Outlook 2003 and Exchange 2010


Edit 10/12/2010: Added a note about RPC Encryption setting on Exchange 2010 SP1 servers.

There are several scenarios to consider when deploying Exchange Server 2010 into an environment where Outlook 2003 is used. Most of these scenarios were documented prior to the product release, and some apply to previous versions as well. However, in a review of support cases, we have found that this documentation is often not consulted before customers contact Microsoft.

This document introduces some of those scenarios and the articles that will resolve these issues. If you are planning a deployment of Exchange Server 2010, understanding client configuration, along with the requirements and capabilities of your organization, is important to the user experience. In particular, field office environments or environments where users are not joined to the domain, your approach to profile distribution, and your ability (or inability) to enforce policies or distribute the solutions will dictate how you address each issue.

Encryption

This is a top support issue for Outlook 2003 access to Exchange 2010.

Note:  In Exchange Server 2010 Service Pack 1, the RPC encryption requirement is disabled by default. Any new Client Access Servers (CAS) deployed in the organization will not require encryption. However, any CAS deployed prior to Service Pack 1, or upgraded to Service Pack 1, will retain its existing RPC encryption requirement setting. Also bear in mind that disabling the RPC encryption requirement on a CAS will not lower the security between Outlook 2007/2010 and any CAS, as RPC communication for these Outlook versions will remain encrypted.

Exchange Server 2010 introduces additional "out of the box" security for client communications to the Exchange server – encryption between the client and the server is enabled by default. This is RC4 encryption, where the client negotiates the encryption level based on the client operating system’s capabilities, up to 128-bit encryption. This is documented in the following topic on TechNet:

Understanding RPC Client Access
http://technet.microsoft.com/en-us/library/ee332317.aspx
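
On the server side, the encryption requirement can be inspected and, where Outlook 2003 clients make it necessary, relaxed per Client Access server with the RPC Client Access cmdlets. A sketch (the server name CAS01 is a placeholder):

```powershell
# Check whether each CAS currently requires RPC encryption
Get-RpcClientAccess | Format-Table Server, EncryptionRequired

# Relax the requirement on one CAS so that default
# Outlook 2003 profiles can connect without encryption
Set-RpcClientAccess -Server CAS01 -EncryptionRequired $false
```

Enabling encryption in the Outlook profile instead, as described in the KB article referenced below, keeps the stronger security posture.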

Prior to Outlook 2007, encryption was not enabled on the client side by default. As a result, if Outlook 2007 profiles exist where encryption has been disabled, or if Outlook 2003 profiles created with default settings are used with Exchange Server 2010, the connection will fail when Outlook attempts to connect to an Exchange Server 2010 mailbox. One or more of the following common error messages will be displayed:

  • Cannot start Microsoft Office Outlook. Unable to open the Outlook window. The set of folders could not be opened.
  • Unable to open your default e-mail folders. The Microsoft Exchange Server computer is not available. Either there are network problems or the Microsoft Exchange Server computer is down for maintenance.
  • The connection to the Microsoft Exchange Server is unavailable. Outlook must be online or connected to complete this action.
  • Unable to open your default e-mail folders. The information store could not be opened.
  • Outlook could not log on. Check to make sure you are connected to the network and are using the proper server and mailbox name. The connection to the Microsoft Exchange Server is unavailable. Outlook must be online or connected to complete this action.

There are several ways to work around this issue, ranging from an immediate manual change by the administrator or the user to the deployment of administrative templates or new profiles. Each of these scenarios is documented in the following article from the Microsoft Knowledge Base:

Outlook connection issues with Exchange 2010 mailboxes because of the RPC encryption requirement
http://support.microsoft.com/kb/2006508

New Mail Notifications and UDP

Exchange 2010 no longer supports UDP for new mail notifications. However, Outlook 2003 relied primarily on UDP notifications to display new messages and changes to folders. As a result, Outlook 2003 users will see delays in folder updates, and the Send/Receive process will appear to take a long time.

The following article discusses the issue and two possible resolutions for the organization:

In Outlook 2003, e-mail messages take a long time to send and receive when you use an Exchange 2010 mailbox
http://support.microsoft.com/kb/2009942

Address Book Service (Directory Access)

Directory access has changed in Exchange Server 2010. The following TechNet topic introduces the changes and is currently being updated with more information.

Understanding the Address Book Service
http://technet.microsoft.com/en-us/library/ee332346.aspx

A future topic will cover this in more detail.

Public Folders, Offline Address Book and Free/Busy

Outlook 2003 uses the free/busy messages stored in Public Folders to determine availability in the Calendar, and uses Public Folders as the distribution point for Offline Address Book synchronization. If Public Folders are not configured during Exchange Server 2010 setup, the Offline Address Book and free/busy information will not be available to Outlook 2003 users, and those users will encounter connection errors.

If the free/busy Public Folder is not replicated to Exchange Server 2010, users will encounter the following:

Users who use Outlook 2003 cannot publish their free/busy data in Exchange Server 2010 or in Exchange Server 2007
http://support.microsoft.com/kb/945602

If clients are inside the organization or connected via VPN/RAS, and the organization uses a proxy server, the Client Access Server should be listed in the "Bypass proxy server for local addresses" configuration. Otherwise, the following may occur:

Error message when Outlook synchronizes an offline address book with Exchange Server 2007 and Exchange Server 2010: "0x8004010F"
http://support.microsoft.com/kb/939765

Also, if there are missing address book list objects or missing or incorrect address lists, the following may occur:

An error occurs when you try to synchronize the offline address list on an Exchange Server server while you are using Outlook 2003: "0x8004010F"
http://support.microsoft.com/kb/905813

Opening Additional Mailboxes

Delegate access, and opening other users’ folders or mailboxes, are common operations in the enterprise. Outlook 2003 users may encounter issues if the environment is not properly prepared for their use:

Office Outlook 2003 does not connect to two or more additional mailboxes in a mixed Exchange Server 2007 and Exchange Server 2010 environment
http://support.microsoft.com/kb/978777

An error occurs when an Exchange server 2003 user tries to open more than one delegate mailboxes of Exchange Server 2010 in Outlook 2003
http://support.microsoft.com/kb/979690

RPC over HTTP Connectivity

The following article discusses issues with Outlook 2003 connectivity when the RPC proxy server extensions do not load correctly. This article also applies to Exchange Server 2010 connections.

Error message when Outlook 2003 users connect to an Exchange server by using RPC over HTTP: "Server Unavailable"
http://support.microsoft.com/kb/919092

Unified Communications

Integration features with Office Communicator and functionality with Office Communications Server are documented in the following articles:

The presence information for a Communications Server user may not appear, or may appear intermittently, in Outlook 2003 Service Pack 2 or in Outlook 2007
http://support.microsoft.com/kb/968099

*Communicator does not update the free/busy information as scheduled
http://support.microsoft.com/kb/941103

*Note: This functionality is not available to Outlook 2003/Exchange Server 2003 users, as the Availability Service functionality is required for both the client and the Exchange Server. The only method to obtain this functionality is to upgrade both the client and the server(s).

Source: Will Duff

Kind Regards
Catastrophic Failure “JV”

Designing a Highly Available Database Copy Layout


One of the things about Exchange 2010 that many of my customers find very attractive (and I have to agree with them) is the idea of the multi-role or “all-in-one” DAG server. This means having all three of the core Exchange 2010 roles installed on all of the servers in the DAG – Mailbox, Hub Transport and Client Access. There are a lot of reasons why this is an attractive solution, but here are a few of the main ones:

  • All servers in the Exchange environment (discounting Unified Messaging and Edge) are exactly the same – same hardware, same configuration, etc. This simplifies ordering the hardware, and the maintenance and management of the servers.
  • In many cases you end up with fewer Exchange servers in the environment. This is interesting because usually the operational expenditure is higher than the capital expenditure, meaning that it costs more to run a server than it did to purchase it in the first place.
  • Fewer servers means that you also have less of some other things – a sort of “trickle down” effect. For instance, you might need fewer network switch ports, which for large customers could be significant. Or maybe you need less physical space for these servers, and for some customers, space is a significant issue.
  • The idea of “all roles on one server” allows you to take full advantage of the new hardware capabilities out there today. As part of the design, you want to ensure you utilize as much of the hardware as possible to drive down the cost per mailbox; the latest hardware is quite fast from a megacycle perspective, which means you are either increasing the scale of the server in terms of the number of mailboxes, virtualizing, or deploying multiple roles.
  • For the multi-role server, Microsoft supports up to 24 cores – that is a 4-socket, 6-cores-per-socket server. Granted, this is only useful for larger customers, but for those larger customers it can be quite important!

But, when talking about these multi-role solutions, I always make sure to let my customers know that this is a starting point. There are no technical reasons why having the roles combined like this is “better” than having the roles separated. Exchange 2007 didn’t support CAS and HT on clustered servers, and we took a lot of customer feedback to help us decide that we needed to support that in Exchange 2010. That doesn’t mean that it is better, it means that it is another option, and if it is right for your environment, then great! If not, well, having separate CAS/HT servers (or separate CAS and separate HT servers) is fully supported and is still a valid solution model.

In fact, there are a couple of things that you need to carefully consider before deciding that the multi-role server is right for you. First is the idea that by putting your CAS role on the DAG member servers, you are forcing yourself to require a hardware load balancer (or a third-party software load balancer). The DAG still leverages Windows Failover Clustering as a container that defines the boundaries of the DAG and helps the DAG determine whether quorum can be met (amongst other things). When you combine that with the fact that you cannot have Windows Network Load Balancing on a server that also has Windows Failover Clustering installed, you are forced to find another solution for load balancing. For smaller customers, or for branch office scenarios, this might be a deal-breaker. There are some less expensive hardware load-balancing solutions out there, but for some customers even those might be too expensive, which means the multi-role server idea won’t work.

Another thing to consider as you determine the architecture you want to utilize for your Exchange 2010 environment is how you will patch your highly available servers. You don’t deploy a DAG unless you really need mailbox resiliency within the datacenter and/or between physical locations – it is a very expensive way to deploy Exchange 2010 if your requirements don’t drive you to need mailbox resiliency! So, if you are spending that money, you need to make sure you understand how to patch these servers and provide the highest level of availability possible.

As you think about these multi-role solutions, also remember that all of the Client Access role servers will be identified as part of the CAS array in a given Active Directory site (this is automatic). When you configure your hardware load balancers, you will need to add all of these Client Access servers into the load-balanced array to allow them to actually be utilized for client access. But, if you’ve done that, how do you patch your servers? If you patch one of the servers (for ease of writing, let’s assume RTM to RU1) and add it back into the array, you now have the possibility of a Client Access server at RTM (one of the un-patched servers) fronting a mailbox on RU1 (the newly patched server)! Not good – we all know that you patch these servers in alphabetical order – CAS, HT, MBX – and that means you aren’t supposed to have the mailbox at a newer build than the Client Access server!

So, what do you do about that? We’ll look at some scenarios in the rest of this article, and talk at a high level about the process you’ll take to ensure that you don’t get into a situation where your mailbox is at a newer build than the CAS in front of it.

In all of the scenarios below, patching means you will have to manipulate your load-balanced array every time you patch your servers. This may mean coordinating with another team and putting some management load on them. Of course, any time you patch a CAS array, you’ll probably need to interface with the load-balancer team anyway – you need to “drain stop” each individual CAS server as you patch it to keep the client disconnects and reconnects as low as possible. So, really, this might add a little management overhead for the load-balancer team, but it is probably not a significant amount of additional work.

The way to balance this “additional work” for the load-balancer owners is to remember that HA costs money. The higher you want your availability to be, the more it costs. Whether it is you or one of your customers, this core tenet of HA must be kept in mind. The other option is to declare that your maintenance window is a time when taking email services completely down is acceptable. But then again, if you’re spending all this money on HA, is that really an option?

Patching Scenario – Small Office / Branch Office

This is the simplest DAG architecture out there. We’re talking about a single DAG in a single location with only 2 or 3 servers. This is for HA only – no site resilience. For our example here, we’ll look at a 3-member DAG – see this simple diagram:

How do we patch this DAG? Here are a few steps:

  1. As is always necessary when moving databases around on your DAG, ensure that your replication is healthy and all copies are up to date.  For the purposes of this example, we’ll start the upgrade process with Server 3.
  2. Activation block Server 3 so that a failure at this time won’t activate copies of the mailbox databases on that server.
  3. Perform a switchover of all databases away from Server 3 to Server 1 and Server 2.
  4. Drain-stop all connections to the CAS on Server 3, and then remove it from the load-balanced array (note that we do not remove the server from the CAS Array) and patch it.
  5. Add Server 3 back into the load-balanced array, drain-stop Server 1 and Server 2 and remove them from the load-balanced array.
    1. Notice that for a short period, we have both upgraded and not-upgraded servers in the load-balanced array. This is not an issue, because we still have all mailboxes on the not-upgraded mailbox servers.
  6. Remove the activation block on Server 3, activation block Server 2, perform the switchover of all databases from Server 2 to Server 1 and Server 3, patch Server 2 and add it back into the load-balanced array.
  7. Remove the activation block on Server 2, activation block Server 1, perform the switchover of all databases from Server 1 to Server 2 and Server 3, patch Server 1 and add it back into the load-balanced array.
  8. Remove the activation block on Server 1, and redistribute your databases evenly across all three servers (Note that there will be a pretty sweet script in Exchange 2010 Service Pack 1 to do this for you!).
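
In Exchange Management Shell terms, the first few steps for Server 3 might look like the following sketch (server names are placeholders for your DAG members):

```powershell
# Step 1: verify replication is healthy and copies are up to date
Test-ReplicationHealth -Identity Server3
Get-MailboxDatabaseCopyStatus -Server Server3

# Step 2: activation block Server 3 so a failure elsewhere
# won't activate database copies on it
Set-MailboxServer -Identity Server3 -DatabaseCopyAutoActivationPolicy Blocked

# Step 3: switch all active databases away from Server 3
Move-ActiveMailboxDatabase -Server Server3 -Confirm:$false

# ...patch Server 3 (steps 4-5), then remove the activation block (step 6):
Set-MailboxServer -Identity Server3 -DatabaseCopyAutoActivationPolicy Unrestricted
```

The same pattern repeats for Server 2 and Server 1; the drain-stop and array changes happen on the load balancer itself.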

Impact(s) of this process:

  • During the period when you have patched a single server and the entire load-balanced array is pointing at that one server (in our example above, when Server 3 has been added into the load-balanced array and Servers 1 and 2 have been removed), you have no HA on your CAS services. This assumes that, at the time of patching, that one CAS server can handle the load sufficiently. Until you have patched your second server and added it back into the load-balanced array, the failure of that one server could cause a service interruption.
    • Remember that this is only for the time period that you have not patched that second server and added it back into the load-balanced array.
    • Also remember that under normal conditions, this process causes less interruption of service than taking the entire system offline for the time needed to patch the 2 or 3 servers – so in most instances you will be at a higher level of availability than if you didn’t follow this procedure.
  • You should also keep in mind that during this patching process, you cannot take two servers offline at the same time – a three member DAG (as shown in our example) will lose quorum when you take two members down, and all email services will be unavailable in that case.

Patching Scenario – Medium Sized Deployment

This is a slightly larger environment – a single DAG with 4 or more servers in the DAG, all located in one location. Let’s use a 6-member DAG for our example this time. Refer to this diagram for this example:

Also note that for this example, we’re going to say that we have designed this DAG to support two concurrent failures. This means that if we take two servers out of actively hosting mailboxes for patching, by having three copies of all databases, we are assured that we can continue to provide email services. It is possible to modify this solution to only take a single server out of service at a given time, and that is a perfectly acceptable solution – this is just an example presented here for discussion.

  1. Ensure that your replication is healthy and all copies are up to date.
  2. In our example we are going to patch two servers at a time, starting with Server 5 and Server 6. 
  3. Activation block Server 5 and Server 6 so that a failure at this time won’t activate copies of the mailbox databases on those servers.
  4. Perform a switchover of all databases away from Server 5 and Server 6 to the other four servers in the DAG.
  5. Drain-stop all connections to the CAS on Server 5 and Server 6, and then remove them from the load-balanced array.
  6. Patch Server 5 and Server 6.
  7. Add Server 5 and Server 6 back into the load-balanced array, drain-stop all other servers and remove them from the load-balanced array.
    1. Notice that for a short period, we have both upgraded and not-upgraded servers in the load-balanced array. This is not an issue, because we still have all mailboxes on the not-upgraded mailbox servers.
  8. Remove activation block from Server 5 and Server 6, activation block Server 3 and Server 4, perform the switchover of all databases from Server 3 and Server 4 to the other four servers, patch Server 3 and Server 4 and add them back into the load-balanced array.
  9. Remove activation block from Server 3 and Server 4, activation block Server 1 and Server 2, perform the switchover of all databases from Server 1 and Server 2 to the other four servers, patch Server 1 and Server 2 and add them back into the load-balanced array.
  10. Remove activation block from Server 1 and Server 2, and redistribute your databases evenly across all six servers.
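
For the final redistribution step, the Service Pack 1 script mentioned in the previous scenario (RedistributeActiveDatabases.ps1, shipped in the Exchange Scripts folder) can balance the active copies for you. A sketch, assuming the DAG is named DAG1:

```powershell
# From the Exchange Scripts directory (SP1 and later)
cd $exscripts
.\RedistributeActiveDatabases.ps1 -DagName DAG1 `
    -BalanceDbsByActivationPreference -Confirm:$false
```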

Impact(s) of this process:

  • For a period of time, you will be running with a possibly lower availability stance than normal operating conditions. You only have 2 servers providing CAS services until you have patched those other servers and added them back into the load-balanced array. (If you only have 4 servers in the DAG, this might not be the case.)
  • In the case where you have very high numbers of users in this physical location, it is possible that you would introduce a performance impact on CAS services, because of the reduced number of Client Access servers in service.
    • Think about the situation where you have 8 or 10 servers in the DAG in this physical location, and you have only patched 2 servers. In that case, those 2 servers could probably not handle the load of all users under full production load. But, you typically won’t be patching during a “full production” time of the day – you’ll have a maintenance window that you will be working in, and users will know to have a lower expectation of availability and such. As long as you understand this and are willing to accept this risk, this is fine, but you should almost certainly make sure that you document this case and make sure that it really is acceptable!
    • The other way to think about this is to make sure that the two servers you bring back into the load-balanced array have enough processing power and memory to support the entire load. This is probably the best engineering solution for a highly available Exchange 2010 environment, but it has a cost associated with it, just like everything else in HA. If I were going to recommend a solution, it would be this one: make sure you have enough processing power that if you end up running production on two upgraded servers, they can handle your full peak load.

Patching Scenario – Large Deployment

Multiple DAGs, multiple servers in each DAG, DAGs spread across multiple locations. Think about the scenario where you have two datacenters with two DAGs of 12 servers each, and users active in both datacenters. At any given time, you have 6 servers in a passive mode in each of the two datacenters. This is for big customers – a lot of my customers are very large, and I’m working with three customers right now: 120K mailboxes, 250K mailboxes and 600K mailboxes.

To help define this environment, here is a relatively simple diagram of two DAGs showing the replication data flow direction.

Now, how to patch this beast… This example will discuss patching the West Datacenter servers – just repeat this process for the East Datacenter after completing the West Datacenter upgrades.

  1. Ensure that all replication is healthy and all copies are up to date on both DAGs.
  2. Activation block all of the DAG2 servers in the West Datacenter to keep databases from failing over to these servers.
  3. Ensure no databases from DAG2 are active in the West Datacenter. If any are, perform the switchovers necessary to move those databases back to the East Datacenter, and ensure that all replication comes back healthy and all copies come up to date.
  4. In the West Datacenter only, drain-stop all of the servers in DAG2 and remove those servers from the West Datacenter load-balanced array.
  5. Patch all DAG2 servers in the West Datacenter.
  6. Add all DAG2 servers in the West Datacenter back into the load-balanced array and remove the activation block on those servers.
  7. Drain-stop all DAG1 servers in the West Datacenter and remove them from the West Datacenter load-balanced array.
  8. Work through the DAG1 servers, patching them “two by two”: move active mailboxes off two servers at a time, patch those two servers, then move databases off the next two, and so on. Assuming you had 6 servers from DAG1 in the West Datacenter, this would probably look like the following:
    1. In our example we are going to patch two servers at a time, starting with Server 5 and Server 6 (of DAG1 in the West Datacenter).  Activation block Server 5 and Server 6 so that a failure at this time won’t activate copies of the mailbox databases on those servers.
    2. Perform a switchover of all databases away from Server 5 and Server 6 to the other four West Datacenter servers in DAG1.
    3. Patch Server 5 and Server 6.
    4. Remove activation block from Server 5 and Server 6, activation block Server 3 and Server 4, perform the switchover of all databases from Server 3 and Server 4 to the other four servers in the West Datacenter, and patch Server 3 and Server 4.
    5. Remove activation block from Server 3 and Server 4, activation block Server 1 and Server 2, perform the switchover of all databases from Server 1 and Server 2 to the other four servers in the West Datacenter, and patch Server 1 and Server 2.
    6. Remove activation block from Server 1 and Server 2, and redistribute your databases evenly across all of the DAG1 West Datacenter servers.
  9. Once you have patched all servers from DAG1 in the West Datacenter and evenly distributed your databases, you can then add all of the West Datacenter DAG1 servers back into the West Datacenter load-balanced array, completing the patch of the West Datacenter Exchange servers.
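
The check in step 3 – confirming that no DAG2 databases are active in the West Datacenter – can be scripted along these lines (a sketch; the server names W-EX01 through W-EX06 are placeholders for the West Datacenter DAG2 members):

```powershell
# Find any DAG2 databases currently mounted on a West Datacenter server
$westServers = 'W-EX01','W-EX02','W-EX03','W-EX04','W-EX05','W-EX06'
Get-MailboxDatabase | Where-Object {
    $_.MasterServerOrAvailabilityGroup -eq 'DAG2' -and
    $westServers -contains $_.Server.Name
} | Format-Table Name, Server

# Any databases listed should be switched back to their East
# Datacenter copies with Move-ActiveMailboxDatabase before patching.
```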

Impact(s) of this process:

  • While you are patching, it is probable that your site resilience stance will be lower. Think about the scenario where you have a datacenter failure while the servers in the other datacenter are in the process of being patched. If half of your servers have been patched, then this could cause a performance issue in the case where the “passive” datacenter needs to be activated, or it could add time to the activation process while other server patches are completed.

Conclusion

Most of this isn’t “rocket science” – it is just something to think about. We have to be aware that in some instances, especially in those very small environments (small orgs or branch offices), we might want to look at another solution such as virtualizing the whole thing and using Windows NLB instead of using the multi-role servers. This goes back to one of the first things I said – it is all driven by the requirements. If you don’t need mailbox resiliency, don’t deploy a DAG. If your requirements drive you away from the multi-role server, don’t hesitate to go with roles broken out onto separate servers. Just make sure that you make these decisions with your eyes open – understand the implications of everything right down to how you will patch these servers once they have been deployed!

Source: Robert Gillies

Exchange Server Webcasts and Podcasts


TechNet webcasts are 60- to 90-minute live broadcasts featuring interactive technical presentations, product demonstrations, and question-and-answer sessions presented by an expert on Microsoft technology, the industry, or both. All content is recorded and made available on demand.

Current Webcast series
On-demand Webcasts
Upcoming Webcasts


Released: Update Rollup 1 for Exchange Server 2010 SP1


Earlier today the Exchange CXP team released Update Rollup 1 for Exchange Server 2010 SP1 to the Download Center. The release of the rollup via Microsoft Update will happen on October 12th.

This update includes new fixes for the following server roles:

  • Client Access
  • Mailbox
  • Edge Server
  • Hub Transport

In particular we would like to call out the following fixes which have been included in this release:

  • 2028967 Event ID 3022 is logged and you still cannot replicate a public folder from one Exchange Server 2010 server to another
  • 2251610 The email address of a user is updated unexpectedly after you run the Update-Recipient cmdlet on an Exchange Server 2010 server
  • 978292 An IMAP4 client cannot send an email message that has a large attachment in a mixed Exchange Server 2010 and Exchange Server 2003 environment

Note: The above links may not be live at the time of this posting.

Rollup 2 for Exchange Server 2010 Service Pack 1 is currently scheduled to release in early December.

Note for Exchange 2010 Customers using the Arabic and Hebrew language version

We introduced two new languages with the release of Service Pack 1: Arabic and Hebrew. At present we are working through the process of modifying our installers to incorporate these two languages. Due to the timing of RU1, we were unable to complete this work in time.

Customers running either of the two affected language versions are advised to download and install the English language version of the rollup, which contains all of the same fixes.

Note for Forefront users

For those of you running Forefront, be sure to perform these important steps from the command line in the Forefront directory before and after installing this rollup. Without these steps, the Exchange Information Store and Transport services will not start back up. Disable Forefront by running fscutility /disable before installing the patch, then re-enable it after the patch by running fscutility /enable.
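
Sketched as a shell sequence, run from the Forefront installation directory:

```powershell
# Before installing the rollup: stop Forefront integration
.\fscutility /disable

# ...install the rollup here...

# After installation completes: re-enable Forefront
.\fscutility /enable
```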


So you want to change your expired passwords in OWA?


A while back, I posted "What you need to know about the OWA Change Password feature of Exchange Server 2007" (http://msexchangeteam.com/archive/2008/12/09/450238.aspx) on this blog. A significant pain point was highlighted there: the loss of the IISADMPWD virtual directory as a supported feature in Windows Server 2008/IIS 7.0. This prevented web client users with expired passwords from changing their password and logging on, which was a problem for many OWA users, especially remote/mobile users not joined to a domain.

Good news!  Exchange Server 2007 Service Pack 3 and Exchange Server 2010 Service Pack 1, running on Windows Server 2008 or Windows Server 2008 R2, include a new feature that allows users with expired passwords to change their password. This also works for users with "User must change password at next logon" set on their AD account.

The procedure below is the same for both Exchange Server 2007 Service Pack 3 and Exchange Server 2010 Service Pack 1.  Here’s how you do it:

  1. On the Client Access Server (CAS), click Start, click Run, type regedit.exe, and then click OK.
    Note:  If you are using a CAS Array, you must perform these steps on each CAS in the array.
  2. Navigate to HKLM\SYSTEM\CurrentControlSet\Services\MSExchange OWA.
  3. Right-click the MSExchange OWA key, click New, and then click DWORD (32-bit) Value.
  4. Name the DWORD value ChangeExpiredPasswordEnabled and set its value to 1.
    Note:  The accepted values are 1 (or any non-zero value) for "Enabled", and 0 or blank/not present for "Disabled".
  5. After you configure this DWORD value, you must reset IIS; the recommended method is to run IISReset /noforce from a command prompt.
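
Steps 2 through 5 can equally be done from PowerShell on each CAS – a sketch of the same registry change:

```powershell
$key = 'HKLM:\SYSTEM\CurrentControlSet\Services\MSExchange OWA'

# Create the ChangeExpiredPasswordEnabled DWORD and set it to 1
New-ItemProperty -Path $key -Name ChangeExpiredPasswordEnabled `
    -PropertyType DWord -Value 1 -Force

# Restart IIS gracefully so the setting takes effect
iisreset /noforce
```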

Important: When you attempt to change your password, you currently cannot use a UPN (johndoe@contoso.com) in the Domain\user name (contoso\johndoe) field of the ‘Change Password’ window shown below:

That’s it.  No other steps are required.

Enjoy!

Reference: TechNet: How to Enable the Exchange 2007 SP3 Password Reset Tool


Source: Will Duff

Exchange 2010 (RTM) public folder replica lists could be modified in unexpected ways if public folder was created when Exchange 5.5 was in the organization


We wanted to let you know about a problem that can cause some or all of the replicas of a public folder to unexpectedly disappear when you change replica lists on Exchange 2010. This can happen when using the Exchange 2010 RTM (Release to Manufacturing) Public Folder Management Console, the Set-PublicFolder cmdlet (or scripts that call it), or the ExFolders tool connected to an Exchange 2010 RTM server. If only some of the replicas disappear, you may try to add them back, only to see them instantly vanish again. If all of the replicas disappear, you must restore the public folder data from a backup.

This is due to a bug in the Information Store in Exchange 2010 RTM. When the Store receives the new replica list from the admin tools, under certain conditions it makes the wrong decision about which replicas should be added and which should be removed. This bug has been fixed in the Exchange 2010 Service Pack 1 (SP1) Store.

This behavior will only affect folders that were originally created when Exchange 5.5 was present in the environment, so you typically only see this on very old public folders.

It is not possible to accurately predict whether a public folder will be affected by this bug, aside from ruling out folders created after Exchange 5.5 was removed (those are not affected). Because the consequences of the bug can be severe, if Exchange 5.5 was ever present in your organization we recommend that you avoid managing public folder replicas on, or with tools connected to, an Exchange 2010 RTM server. Managing public folder replicas with "legacy" management tools (Exchange 2003 or 2007), with Exchange 2010 SP1 management tools, or with any other public folder management interface connected to an Exchange 2010 SP1 server is safe.
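For example, replica changes made from an Exchange 2010 SP1 Management Shell are safe. A hedged sketch, where the folder path and public folder database names are hypothetical:

```powershell
# Inspect the current replica list of an old folder
# (run from an Exchange 2010 SP1 Management Shell)
Get-PublicFolder -Identity '\Legacy\Contracts' | Format-List Name,Replicas

# Replace the replica list; 'PFDB01' and 'PFDB02' are hypothetical
# public folder database names in your organization
Set-PublicFolder -Identity '\Legacy\Contracts' -Replicas 'PFDB01','PFDB02'
```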

A quick summary of the issue and resolution, so there are no misunderstandings:

You could run into this problem if:

  • You have public folders that were created when Exchange 5.5 was in your organization.
  • You use Exchange 2010 RTM management tools (or any other management tools connected to an Exchange 2010 RTM server) to make replica changes of those "old" folders.

You will not run into this problem if:

  • You do not have any public folders created when Exchange 5.5 was in your organization.
  • You use either Exchange 2010 SP1, Exchange 2007 or Exchange 2003 management tools (or any other management tools connected to an Exchange 2010 SP1, Exchange 2007 or Exchange 2003 server) to make replica changes of those "old" folders.
  • You already used Exchange 2010 RTM management tools to manage replicas of specific folders; if you did not have a problem managing those folders, you will not have it in the future either.

Source: Bill Long

 

Kind Regards
Catastrophic Failure "JV"

Microsoft Exchange User Monitor (Exmon) tool – Updated Version


I am pleased to announce a new version of the Exchange User Monitor (Exmon), a tool that the Exchange performance, development, and operations teams at Microsoft have used for quite some time. It can be downloaded here.

For the first time, Exmon gives an Exchange administrator the ability to see the performance of an Exchange server in amazing detail. On a per-user basis, Exmon shows how much CPU, latency, network traffic, and disk each user on an Exchange server consumes. It can be run in near real time (minute-by-minute analysis) or over longer (multiple-hour) capture periods. Exmon also 'bubbles up' data sent back to the Exchange server from Outlook 2003 and later about the user's actual experience, showing the actual RPC (network + server) latency and even the name of the process talking to the Exchange server (so you can see ActiveSync usage and other third-party MAPI applications).

The data Exmon exposes is the ‘raw’ data that many of the Exchange Performance counters use in calculating the running averages.

Internally, this tool was used to help understand the performance of Outlook 2003 and other MAPI applications during the development of Exchange Server 2003. We use it to understand the broad impact of performance across a server, but also to troubleshoot specific performance problems with individual users. The impact to the server being ‘traced’ is minimal, allowing it to be run on very large servers.

I’d love for you to download the tool, give it a whirl, and tell us what you think. We’d love to see what use you can come up with for this data, problems you’re able to solve, and conclusions you’re able to make.

ExPerfwiz is also now available; it can run Exmon on a time-interval basis and automate the collection.

Kind Regards
Catastrophic Failure “JV”

Exchange Server 2010 SP1 VHD


This download comes as a pre-configured VHD that enables you to evaluate Microsoft Exchange Server 2010 SP1 for 180 days.

One VHD with:

  • Windows 2008 Domain Controller & Global Catalog

  • Rights Management

  • Exchange Server 2010 SP1

  • DNS

  • Certificate Authority

  • Microsoft Office Professional Plus 2010

  • Office Communications Server Standard Edition

http://www.microsoft.com/downloads/details.aspx?FamilyID=53F7382A-3664-4DE3-8303-31E514D69F02&displaylang=en#filelist

Kind Regards
Catastrophic Failure “JV”

Exchange Management Shell (EMS) missing after applying Exchange 2010 SP1


 

Problem: After installing Service Pack 1 for Exchange Server 2010, the Exchange Management Shell is missing.

Cause: Certain .ps1 scripts are missing from the bin directory; they were removed during SP1 setup.

Resolution: There is no resolution at this time.


Workaround:

1. Verify that the ConnectFunctions.ps1, RemoteExchange.ps1 and CommonConnectFunctions.ps1 files are present in the %ExchangeInstallPath%bin directory.


NOTE: If these files are missing, you can copy them from the Exchange Server 2010 Service Pack 1 installation media to the %ExchangeInstallPath%bin directory.
These files are present in the setup\serverroles\common folder.
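The presence check in step 1 can be done from a plain PowerShell prompt; a minimal sketch:

```powershell
# Verify the three scripts that SP1 setup may have removed are still in place
$bin = Join-Path $env:ExchangeInstallPath 'bin'
'ConnectFunctions.ps1','RemoteExchange.ps1','CommonConnectFunctions.ps1' |
    ForEach-Object {
        $path = Join-Path $bin $_
        if (Test-Path $path) { "Present: $_" } else { "MISSING: $_" }
    }
```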

2. Right-click an open area on the Desktop, click New, then click Shortcut.

3. In the Type the location of the item box, type the text below:
%SystemRoot%\System32\WindowsPowerShell\v1.0\powershell.exe -noexit -command ". '%ExchangeInstallPath%bin\RemoteExchange.ps1'; Connect-ExchangeServer -auto"

4. Name the shortcut Exchange Management Shell and click Finish.

5. Right-click the Exchange Management Shell shortcut, click Properties, and remove the text from the Start In field on the Shortcut tab.

6. On the Shortcut tab, click Change Icon, click Browse, and type the text below:
%SystemRoot%\Installer\{4934D1EA-BE46-48B1-8847-F1AF20E892C1}\PowerShell.exe

7. Select the Exchange icon and click OK.

8. Click Apply and click OK.

9. Copy the new Exchange Management Shell shortcut to this location:
%systemdrive%\ProgramData\Microsoft\Windows\Start Menu\Programs\Microsoft Exchange Server 2010
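Steps 2 through 9 can also be automated with the WScript.Shell COM object; a hedged sketch, using the shortcut target, icon path, and Start Menu location from the steps above:

```powershell
# Recreate the Exchange Management Shell shortcut programmatically
$programs = Join-Path $env:ProgramData 'Microsoft\Windows\Start Menu\Programs\Microsoft Exchange Server 2010'
$shell    = New-Object -ComObject WScript.Shell
$lnk      = $shell.CreateShortcut((Join-Path $programs 'Exchange Management Shell.lnk'))
$lnk.TargetPath   = "$env:SystemRoot\System32\WindowsPowerShell\v1.0\powershell.exe"
$lnk.Arguments    = '-noexit -command ". ''%ExchangeInstallPath%bin\RemoteExchange.ps1''; Connect-ExchangeServer -auto"'
$lnk.IconLocation = "$env:SystemRoot\Installer\{4934D1EA-BE46-48B1-8847-F1AF20E892C1}\PowerShell.exe,0"
$lnk.WorkingDirectory = ''   # leave Start In empty, as in step 5
$lnk.Save()
```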

Kind Regards
Catastrophic Failure "JV"