Configuring a DHCP Superscope

A superscope is an administrative feature of Dynamic Host Configuration Protocol (DHCP) servers running Windows Server 2008 that you can create and manage by using the DHCP Microsoft Management Console (MMC) snap-in. By using a superscope, you can group multiple scopes as a single administrative entity. With this feature, a DHCP server can:

  • Support DHCP clients on a single physical network segment (such as a single Ethernet LAN segment) where multiple logical IP networks are used. When more than one logical IP network is used on each physical subnet or network, such configurations are often called multinets.
  • Support remote DHCP clients located on the far side of DHCP and BOOTP relay agents (where the network on the far side of the relay agent uses multinets).

In multinet configurations, you can use DHCP superscopes to group and activate individual scope ranges of IP addresses used on your network. In this way, the DHCP server can activate and provide leases from more than one scope to clients on a single physical network.

Superscopes can resolve specific types of DHCP deployment issues for multinets, including situations in which:

  • The available address pool for a currently active scope is nearly depleted, and more computers need to be added to the network. The original scope includes the full addressable range for a single IP network of a specified address class. You need to use another range of IP addresses to extend the address space for the same physical network segment.
  • Clients must be migrated over time to a new scope (such as to renumber the current IP network from an address range used in an existing active scope to a new scope that contains another range of IP addresses).
  • You want to use two DHCP servers on the same physical network segment to manage separate logical IP networks.

Superscope configurations for multinets

The following examples show how a simple DHCP network, consisting originally of one physical network segment and one DHCP server, can be extended to use superscopes for support of multinet configurations.

Example 1: Non-routed DHCP server (before superscope)

In this example, a small local area network (LAN) with one DHCP server supports a single physical subnet, Subnet A. The DHCP server in this configuration is limited to leasing addresses to clients on this same physical subnet.

The following illustration shows this example network in its original state. At this point, no superscopes have been added and a single scope, Scope 1, is used to service all DHCP clients on Subnet A.

Single subnet and DHCP server (before superscope)

Example 2: Superscope for non-routed DHCP server supporting local multinets

To include multinets implemented for client computers on Subnet A, the same network segment where the DHCP server is located, you can configure a superscope that includes as members the original scope (Scope 1) and additional scopes for the logical multinets for which you need to add support (Scope 2 and Scope 3).

This illustration shows the scope and superscope configuration to support the multinets on the same physical network (Subnet A) as the DHCP server.

Superscope for non-routed DHCP server

Example 3: Superscope for routed DHCP server with relay agent supporting remote multinets

To include multinets implemented for client computers on Subnet B, the remote network segment located across a router from the DHCP server on Subnet A, you can configure a superscope that includes as members the additional scopes for the logical multinets for which you need to add remote support (Scope 2 and Scope 3).

Because the multinets are for the remote network (Subnet B), the original scope (Scope 1) does not need to be part of the added superscope.

This illustration shows the scope and superscope configuration to support the multinets on the remote physical network (Subnet B) away from the DHCP server. A DHCP relay agent is used for DHCP servers to support clients on remote subnets.

Superscope for routed DHCP server

Create a superscope

You can use this procedure to create a DHCP superscope.

Membership in the Administrators or DHCP Administrators group is the minimum required to complete this procedure.

To create a superscope

  1. Open the DHCP snap-in.

  2. In the console tree, click the DHCP server you want to configure.

  3. On the Action menu, click New Superscope.

    This menu option only appears if at least one scope that is not currently part of a superscope has been created at the DHCP server.

  4. Follow the instructions in the New Superscope Wizard.
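If you prefer the command line, the same configuration can also be sketched with the netsh dhcp context available on Windows Server 2008. The scope ranges and the superscope name below are illustrative only, not taken from the examples above; run this from an elevated PowerShell session on the DHCP server:

    # Create two scopes for the multinet on the same physical segment
    netsh dhcp server add scope 192.168.1.0 255.255.255.0 "Scope 1"
    netsh dhcp server add scope 192.168.2.0 255.255.255.0 "Scope 2"
    netsh dhcp server scope 192.168.1.0 add iprange 192.168.1.10 192.168.1.254
    netsh dhcp server scope 192.168.2.0 add iprange 192.168.2.10 192.168.2.254
    # Group both scopes into (and activate them under) the superscope "Multinet-A"
    netsh dhcp server scope 192.168.1.0 set superscope "Multinet-A" 1
    netsh dhcp server scope 192.168.2.0 set superscope "Multinet-A" 1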

SharePoint 2010 Correlation ID in error messages: what it is and how to use it

By Dina Ayoub

Program Manager on SharePoint, MSFT

 

Who might find this post useful:

 

If you find yourself wondering what a SharePoint “Correlation ID” is whenever you get an unexpected error in SharePoint 2010, or what to do with it once you have it, then hopefully this blog post will shed some light on that.

 

This article may be useful to you if:

  • You’re a SharePoint user performing an important action that should normally work, and you get an error page that says “An unexpected error has occurred”. The error also includes a strange hyphenated string called the “Correlation ID” that looks something like ab961971-84fa-45c7… etc., and you really need to find a solution to this problem or figure out why it’s happening. If all other troubleshooting steps don’t help, you can call up your organization’s help desk and give them this Correlation ID so they can investigate what’s going on. Here’s a screenshot of what this might look like:
    Error Message with Correlation ID
  • You’re a SharePoint support admin trying to help someone with a problem and can’t figure it out using the regular troubleshooting steps you normally follow. You may want to ask them for this correlation ID, which will help you figure out the details of what is happening.
  • You’re a SharePoint administrator, you find a problem is occurring (such as failed requests or a slow page), and you want to track it down. You may want to get one of the correlation IDs for the requests that are exhibiting the problematic behavior and use it to investigate deeper. If no failure is happening, so you aren’t getting an error message with a correlation ID, you can enable the developer dashboard, which will show the correlation ID generated by that page request. Here’s what it might look like on the developer dashboard:
    Developer Dashboard Correlation ID
  • You just happen to be looking through the ULS logs for any reason and a request piques your interest; you can see its correlation ID in the log file.

What it is:

The correlation ID is a GUID (globally unique identifier) that is automatically generated for every request that the SharePoint web server receives.

 

Basically, it is used to identify your particular request, and it persists throughout further communications, if any, between the other servers in the farm. Technically, this correlation ID is visible at every level in the farm, even at a SQL profiler level and possibly on a separate farm from which your SharePoint site consumes federated services. So for example, if your request needs to fetch some information from an application server (say, if you are using the web client to edit an Excel spreadsheet), then all the other operations that occur will be linked to your original request via this unique correlation ID, so you can trace it to see where the failure or error occurred, and get something more specific than “unknown error”.

 

How to use it:

If you’re an end user, you might not be able to do much more without help, since you won’t have access to the logs that provide the information. In that case, you can stop here and, armed with this understanding of what a correlation ID is, take note of it, call up the help desk, explain your problem, and try to help them diagnose it. If they can’t figure it out, give them the correlation ID you see on your error message. You may need to refer them to this post if they are unsure how to use that ID.

 

As an IT Pro or admin, to figure out what happened, you need to find the ULS logs for the time at which this event occurred, and search for the Correlation ID in those logs. You may have to look across several of the web front ends to find the one that has the correlation ID you’re looking for. This may give you some insight into what happened right before the request that generated the error, what error messages showed up or what events were triggered because of this error (if any). You can use a tool such as ULSViewer to facilitate looking at this data and filtering out the requests you don’t need to look at.
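If you have farm access, you can also pull the matching trace entries directly with the SharePoint 2010 Management Shell rather than opening the log files by hand. A minimal sketch (the GUID below is hypothetical; narrow the time window first, since Get-SPLogEvent can be slow on large logs, and run it on each web front end):

    # Hypothetical correlation ID; replace it with the one from the error page
    $id = "ab961971-84fa-45c7-0000-000000000000"
    Get-SPLogEvent -StartTime (Get-Date).AddHours(-1) -EndTime (Get-Date) |
        Where-Object { $_.Correlation -eq $id } |
        Select-Object Timestamp, Area, Category, Level, Message | Format-List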

 

Example #1:

  1. User gets this error, and gives you (the admin) the correlation ID and date/time of the incident:
    Error Message Correlation ID
  2. Find the log directory and the date you’re looking for, and open that file in Excel:
    Find the Log File
  3. Find the correlation ID that you’re looking for. You can filter by the level of the events as well to get a good idea of what’s going on:
    Finding the Correlation ID in Log Files
  4. If you don’t find the correlation ID, try another Web Server that was in rotation at the time of the issue reported.

 

Example #2:

In this example, there is a  business intelligence center provisioned on the root of the site.  The Excel Calculation Services service is running on the server, but there is no Excel Service Application created.

 

I browsed to the View Excel Services samples link from the Create Dashboards mouseover on the main page.  Before browsing, I opened ULSViewer in realtime mode (File | Open From | ULS).

 

Example 2 ULS Viewer Excel Services


 

Notes about the image:

  • Purple box + arrow shows the notification balloon.  If you click on this, you’ll get the notification popup.  By default when Critical events occur, they will show up here.  You can jump right to them in the log by double clicking.
  • Red box + arrow shows where you go after clicking on the critical event.  Notice that the line is also highlighted red.  When you automatically move to it, the active line will show blue (I moved up a line before taking the screenshot), but the point is that you can easily spot critical events when scrolling through a log as well.
  • Green box + arrow shows that after enabling Smart Highlight, as the mouse moves over any given element, it highlights every other string that matches that string.  This can also be very useful for quickly identifying a chain of events or common patterns that are repeating.
  • Blue box + arrow shows that the exact URL can also be searched for as it will show up in the log data.  I didn’t specifically do this, but ULSViewer also has a find dialog that could take you to this line very quickly.

More Information:

If you’d like to read more about this, take a look at these blog posts:

http://blogs.msdn.com/spses/archive/2009/12/18/sharepoint-2010-logging-improvements-part-1.aspx

http://blogs.msdn.com/spses/archive/2010/03/11/sharepoint-2010-logging-improvements-part-2-introducing-developer-dashboard.aspx

 

So that’s the correlation ID in a nutshell. If you have any questions, comments or feedback, please feel free to post here. Thanks for reading.

http://sharepoint.microsoft.com/blogs/GetThePoint/Lists/Posts/Post.aspx?ID=353

How to enable Target Audience in SharePoint 2010?

How to specify Target Audience for a Web Part in SharePoint 2010?
Well, it’s obvious that the free versions, WSS 3.0/SharePoint Foundation 2010, don’t provide the Target Audience feature. SharePoint Server 2007/SharePoint Server 2010 is required for it. I have SharePoint Server 2010 and still don’t get the Target Audience option for web parts; what to do? The service which determines Target Audience must be enabled and running. That’s the SharePoint User Profile Service, which depends on the User Profile Synchronization Service. So first check the services via Central Administration -> System Settings -> Manage Services on Server.

Check the status of “User Profile Service” and “User Profile Synchronization Service” (the latter is required to compile audiences; if you have a fixed number of users and don’t want scheduled synchronization, you can disable it later). Also check the SharePoint Server Search Service status; it must be started. Then start the two services specified above (UPS and UPSS).
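If you prefer PowerShell, here is a minimal sketch for checking and starting these service instances from the SharePoint 2010 Management Shell (the TypeName filter assumes the display names used above; verify them in your farm):

    # Check the User Profile service instances on every server in the farm
    Get-SPServiceInstance | Where-Object { $_.TypeName -like "User Profile*" } |
        Select-Object TypeName, Server, Status

    # Start any that are not online (equivalent to Start on the Services on Server page)
    Get-SPServiceInstance |
        Where-Object { $_.TypeName -like "User Profile*" -and $_.Status -ne "Online" } |
        Start-SPServiceInstance

Note that the User Profile Synchronization Service instance normally has to be provisioned with the farm account credentials through Central Administration, so the UI route is usually the safer one for that particular service.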

Now we need to configure audiences in the User Profile service application. Open Central Administration -> Application Management -> Manage Service Applications. Check the status of “User Profile Service Application”; it should be “Started” (otherwise go back to the previous step and start the User Profile Service). Now click the “User Profile Service Application” link. It’ll open the user profile service application settings.

Select Configure Synchronization Connections and create a new connection if required.

Configure Synchronization Connections
Create New Connection

Provide the LDAP connection details and click the “Populate Containers” button to list all domain containers. Select “Domain -> Users” (or whichever container has the user details). Note: it’ll take a few minutes depending on network traffic. Then click “OK” to create the connection.

New Connection Settings

The User Profile Synchronization service must be started; otherwise it’ll show an error.
Error

If you’ve created a new connection, open “Configure Synchronization Timer Job” and run it now.

How to create Target Audience in SharePoint 2010?
Central Administration -> Application Management -> User Profile Service Application -> People -> Manage Audiences.
Manage Audiences

It’ll show any audiences that have already been created. Otherwise, click New Audience and specify a name, description and owner.

Create Target Audience

Select an option based on your requirement (I chose “Satisfy any of the rules”). Click OK to go to the next page. Select Property, choose the property, and specify the value to compare. Now it’ll show the audience properties. Click the “Compile Audience” link to validate the AD users and add those whose details satisfy the specified condition.

Create Target Audience

Log in to your SharePoint site and edit any web part. It’ll now show the option to specify Target Audience.

SharePoint Performance / Load Testing Using Microsoft Visual Studio vs Quotium QTest

Below is a white paper written in 2009. QTest is now even better for performance testing SharePoint; contact me for more information.
SharePoint Performance Testing White Paper: Scripting
Performance Test Scripts for Complex SharePoint Applications by Adam Brown

Introduction

This paper focuses on the scripting phase of performance testing, as this can be one of the most time- and effort-intensive parts of any load testing project, particularly when working with bespoke and complex applications, including SharePoint applications. It is also the phase where the most technical experience is required.

Abstract

This paper follows an attempt to use Microsoft Visual Studio Team System web testing to performance test a complex SharePoint implementation.
It was found that creating scripts (transactions, user journeys, whatever you like to call them) for simple SharePoint applications could be straightforward; however, for a more complex implementation the dynamic parameterisation that Visual Studio performs simply did not parameterise all the necessary variables, and a coding approach was required. The same scenario was then evaluated using QTest from Quotium, which was able to parameterise all variables quickly and without the need for coding.

Executive Summary

·  In this scenario, Visual Studio Team System for Testers would require at least 540 – 1080 lines of code to be manually edited or written plus other investigative activity to make a suite of scripts.
·  QTest was able to handle the required parameterisation without any code having to be written; QTest generated and integrated all the code required.

The application under test (AUT)

The objective of the application was to make more of the functionality of Microsoft Office Project available via a web interface. It was deployed via SharePoint so that staff unfamiliar with MS Project would be more comfortable with the interface, and so that MS Project did not need to be installed on all machines. It enables users to create, edit and view projects. The application was written by Raona of Spain.


The scenario

We decided to record the process of creating a project using the application; that way we could see on the server whether the transactions simulated with the tool were being properly executed. The transaction steps were as follows:
1: Navigate to Application HomePage
2: Click New Project
3: Enter Project Name, Template & Zone, then click Create Project
4: Click Save and Publish
5: Click OK when asked to check in
6: Click Save

The captures

Captures were made using the built-in recording mechanisms featured with each tool; this consists of clicking the record button and interacting with the AUT as a real user would.

Microsoft’s Visual Studio Team System for Testers

First we used the Microsoft tool. Capture was straightforward, and the tool appeared to detect dynamic parameters while generating its visual script (no code seen at this point). Dynamic parameters are incredibly important when generating scripts: if they are not dealt with correctly, the entire script (and any load tests executed with it) can be rendered useless.
Looking at the script that was generated and the dynamic parameters, at first glance the tool had done a good job.
It was clear that we would have to parameterise the project name (used in step 3 of our transaction steps list above), as duplicate project names are not allowed in the AUT. However, before parameterising this value, we thought we’d better check that the script would run after simply updating the project name manually, using find and replace (we changed it from “TestProj2” to “TestProj3”). This way we could quickly find out what else, if anything, needed to be parameterised.
When we attempted to run the script with the new project name, it failed, receiving an HTTP 500 internal server error from the application under test.
Note the Server 500 Error (highlighted in blue) and the details in the window below it.
After a closer look at the script (at this point we dropped down into the generated Visual Basic code), we could see exactly what had been automatically parameterised and what had not. It quickly became obvious why this script had caused a 500 internal server error and why, in its current state, it could never work and could never be used to generate accurate load.
The reasons for this are explained below as follows:
The dynamic parameters that the Visual Studio Web Testing tool did not deal with are as below:
ProjectUID
CLSID
SessionUID
JobUID
ViewUID
ViewTimeStamp
The problem is that the AUT, as with many applications, needs these types of parameters to maintain state, identify objects and maintain sessions. Replaying a value captured in a previous recording simply can’t work, and even where it does not cause visible errors, it will result in inaccurate load being generated if the script is used as part of a test.
The parameters it did deal with were as follows:
__VIEWSTATE
__EVENTARGUMENT
__EVENTVALIDATION
__REQUESTDIGEST
__LASTFOCUS
Cookies
URL Parameters in redirects
These are standard Microsoft parameters and are dealt with correctly. The problem with the parameters mentioned previously is that they are more than likely created in the development process, so Visual Studio’s web testing tool can’t know anything about them.
Shown below is an example of a parameter that has not been parameterised: the XML request made by Visual Studio during a failed replay of the script.
If we look at the script below, we can see that the parameter is static in the script, not dynamic (look in the chunk of unparsed XML):
Compare this with a parameter that has been automatically parameterised and made dynamic (see __VIEWSTATE):
So it seems that the only way to make the script work is to manually insert code to extract the dynamic parameters from the relevant web server responses, store them as variables and insert them in the place of the static parameters that need to be replaced in the XML.
Of course, this should be no problem for a seasoned developer or tester with VB/C# coding and scripting experience; however, it may be time consuming, as there are at least 6 parameters here that need to be replaced, each of which appears any number of times in the script, depending on the size of the script. Add to that the fact that we will need to produce more than one script to create a realistic scenario. Furthermore, when the application changes, the script will more than likely need to be re-recorded to ensure that any changes are correctly handled. This makes for a lot of lines of code, and hence a lot of time spent during the scripting process. Let’s quantify this:
Once we’ve figured out where each parameter comes from and the best way to extract it with Visual Studio, we have the following:
6 Parameters each appearing on average 6 times in each script.
This means that approximately 36 lines of code need to be altered.
Furthermore, at least 18 lines of code need to be inserted for declaration and extraction (one for declaration and one for extraction – possibly more).
More lines of code may be required for complex parsing of parameters.
This means that for each script we have there are at least 54 changes required.
Typically in load testing, 10-20 scripts are required for an accurate scenario; this means that we have at least 540-1080 lines of code to edit or insert for each load test we prepare.
If the application is changed then all of this work has to be re-visited or repeated.
What’s required is a parsing engine that can be configured to deal with bespoke / non-standard parameters.


Quotium QTest

Next we used QTest to capture the same transaction. QTest does not parameterise the script automatically as it is generated; instead, parameterisation is applied by selecting a model from the drop-down box and clicking ‘Apply Model’. QTest is not limited to Microsoft technologies, so there are other models for J2EE, Siebel, SAP etc.

Note the Model drop down list in the top left of the application
By default the SharePoint model was not there; it was downloaded separately as an XML file. We found, however, that the .Net model was able to make most of the parameterisations that Visual Studio could.
After the parameterisation process, QTest had covered everything that could be seen as a header parameter. It had not covered parameters that appeared in the body of a request, such as an XML request; these remained static.
It was, however, quite straightforward to parameterise these using Find In HTTP Responses from the right-click menu (see above). Highlighting the parameter we needed to make dynamic and right-clicking presented the menu we needed (below):
QTest then presented us with the locations of every instance of this parameter in all of the HTTP responses (lower part of the screen in the screenshot above).
After double-clicking the response (above: I chose the one from the body of response ID 87 rather than the header of response 87), QTest highlighted the parameter for extraction in the HTTP window on the right of the tool (see highlighted text in the window below),

 

where we were then able to select Extract Value after right-clicking the selected text (see right). QTest then evaluated the text to the immediate right and left of the parameter and used this to build an extraction rule (see Left Delimiter and Right Delimiter in the screen shot above).
Using the magnifying glass button, we were able to verify that the extraction had worked correctly. Finally, the Apply button created the variable in the script, generated the extraction code and inserted it at the relevant place. All that was left was to use find and replace to replace all instances of the hard-coded value in the script with the variable that had just been parameterised.
By highlighting the static value in the script, right clicking and using Find and replace in strings… it was possible to quickly parameterise all instances of this static value as per screen shot to the right.
This process was repeated for all variables that remained hard coded in the script including the ‘Project Name’ that the user would normally enter through the keyboard.
Rather than use a list of values, we decided to use a timestamp on the project name; that way we would always have a unique name for the project. Looking at the help, we found the command for timestamps. This did mean that we had a small piece of code to write, placed at the top of the script:
UniqueName = “MyProj” + Date(“%f”);
To save time, we then modified the SharePoint model to ensure that all references to the hard-coded project name were parameterised. To do this we selected ‘SharePoint’ from the model drop-down list on the toolbar and clicked the ‘Apply Model’ button to edit it. We inserted a rule for the project name to ensure that the variable we had just created was used instead of the hard-coded value. Please see the screen shots below.

 

 

The next step was to replay the script to see if it worked.
During the replay the tool shows the HTML pages that the server responds with, and finally pops up a window offering to show replay differences. This proved especially useful, as it compares the traffic generated by the browser when the script was recorded with the traffic that QTest generated. Any unexpected behaviour is quickly highlighted, graded by severity 1, 2 & 3.
Looking at the screen capture below, we can see that the request that had failed with Visual Studio (see the 2nd illustration on the 2nd page) has now worked with QTest. This is because all of the parameters in the XML statements have been correctly dealt with.

 

We can also see in the replay window in QTest that the project has been successfully created with the unique name ‘MyProj’ plus a timestamp:

 

This can also be verified in a browser:


Summary

Visual Studio is a capable tool in the hands of developers with the necessary experience to use it and the time to correctly program it. It can be suitable for use by non-programmers with some simple web applications where nothing is bespoke (in our experience a rare case in large organisations).
However, if the application is not a standard out-of-the-box vanilla affair, time is limited and programmers are scarce, then QTest offers a better approach, as its features make a typically difficult and lengthy task (scripting) a relatively straightforward and short one.
QTest is a very capable and powerful tool in the hands of anyone with an IT background, developer or not. It is therefore highly suitable for testers.

How to Scale Out a SharePoint 2010 Farm From Two-Tier to Three-Tier By Adding A Dedicated Application Server

Many small to medium-sized organizations start using SharePoint in a “two-tier” server farm topology.  The two tiers consist of:

  1. Tier 1 – SharePoint Server with all web page serving and all Service Applications running on it
  2. Tier 2 – A SQL Server to store the SharePoint databases – the SQL Server could be dedicated to the farm or it might be shared with other non-SharePoint applications.

Visually, this topology looks like this:

image

My experience is that this farm topology can frequently support companies with hundreds of employees.  Of course, it depends a lot on the specifications of the hardware, but with late-model quad-core Xeons running on the two servers, 8-16 GB of RAM on each one, and RAID arrays built with 15k RPM SAS drives in the SQL Server, this configuration with SharePoint Server 2010 can perform very well in many organizations that have fewer than 1000 users.

At some point, an organization that started with this two-tier topology may want to scale out to the next level which is a three-tier topology.  The three tiers would be:

  1. Tier 1 – SharePoint Server dedicated as a Web Front-End (WFE) with only the web application(s) and the search query service running on it
  2. Tier 2 – SharePoint Server dedicated as an Application Server with all of the other service applications running on it, but no web applications or query service
  3. Tier 3 – SQL Server for the databases

Visually, this topology looks like this:

image

There are many different reasons why a company might want to scale out to three tiers from two.  Some kind of performance improvement is frequently what drives it.  However, it may not be the obvious one of desiring better page serving times for the end users.  For instance, I frequently see companies do this to move the search crawling and index building process to a different server that is more tuned for its unique resource requirements and can do a more efficient job of crawling and indexing the company’s content.  Perhaps in the two-tier approach their crawl/index component can’t get enough hardware resources to crawl through all of the content on a timely basis.

One more point.  Many organizations will also choose to add a second WFE when they scale out to a three-tier farm.  (I don’t show this in the diagram above).  The second WFE will be configured exactly like the first one and some type of network load balancing (NLB) mechanism will be put in front of the WFEs to intelligently route user traffic to the two servers to balance out the load.   In this scenario, the three-tier farm diagram above would be modified to add a second WFE and the total number of servers in the SharePoint farm would be four.

Getting From Here to There

Here is a screen shot of all of the service applications that run on the SharePoint 2010 server in a two-tier farm when you install SharePoint Server 2010 Enterprise edition and run the out-of-the-box Configure Your SharePoint Farm Wizard and choose to provision all service applications:

image

(2nd Reminder: for this post, I am working under the assumption that you have used the SharePoint 2010 “Configure your SharePoint Farm” wizard and have opted for it to provision all of the SharePoint Server 2010 **Enterprise Edition** service applications).

Your goal is to add a third server to the SharePoint 2010 farm and have it take over running all of the service applications in the list above, with the exception of the three that have been circled.  The three that have been circled in the screen shot are the ones that are necessary for the original server to function as a dedicated WFE with query processing.

The Search Query and Site Settings Service and some of its associated functionality in the SharePoint Server Search Service are technically not required on a WFE, but it is the best place to put them.  The reason is that this is the process that takes the user’s search query and looks it up in the indexes.  The indexes are files that the query processor needs local access to and are stored on the file system of the server(s) that is running the query service, not in SQL Server.

So, for best performance it is recommended to run the Search Query and Site Settings Service on the WFEs that are serving the pages.  The crawling and index process is a separate process whose job it is to build the indexes and push them up to the query servers.

The Search Topology configuration settings in SharePoint 2010 dictate what functionality of the SharePoint Server Search Service runs on what server in the farm.  So, while the SharePoint Server Search Service needs to run on both the WFE and the Application Server in this example, it will be possible to break out the functionality that it performs on each.  We will want it to perform query-related functionality on the WFE and crawling/indexing functionality on the Application Server.  Later in this post I will show you how to do this.

Now, on to the actual steps to doing the work:

 

Step by Step: Scaling SharePoint 2010 to Three Tiers

Step 1 – Build a new SharePoint Server with exactly the same software

I’m talking about taking a fresh physical or virtual server that has Windows Server 2008 (R1 or R2) running on it, and installing all the same SharePoint Server 2010 software on it that is installed on the existing SharePoint 2010 server in your existing farm.  That includes the full RTM Enterprise edition, whatever patches have been applied in your farm since RTM, and any other separate products that have been installed on your existing server such as the Office 2010 Web Applications and its patches.

Step 2 – Run the SharePoint 2010 Products Configuration Wizard on the new server and join the existing farm

I recommend installing all RTM software and all patches that have previously been applied to the farm BEFORE running the SharePoint 2010 Products Configuration Wizard from the new server’s Start menu.  This means that you will want to respond NO to the prompt to automatically run the wizard until you have installed all software packages on the new server.  This will save you from having to run the wizard multiple times.  Run it once – after you have installed all software and patches on the new server.

When you do run the SharePoint 2010 Products Configuration Wizard, you will run it on the new server that will be your application server.  The wizard is going to help you join the server to the farm and get all of the software configured and running that you installed in Step 1.

Here are what the pages of the wizard look like as you go through the process:

image

 

image

 

image

 

image

 

image

Oops, you forgot to install a piece of software on this new server that is already installed on the other server.  The wizard has caught your error and is not going to let you proceed until you get this done.

Exit the wizard and go install the software – in this case, the Microsoft Office 2010 Web Apps.

OK, you got the missing software installed and have restarted the wizard.  The next screen asks you for the Farm PassPhrase.  This is a special password you created when you originally created the farm.  You have to enter it here in order to join this server to the farm:

image

image

If you click on Advanced Settings above, the next page asks whether or not you want to use this server to host the Central Administration website (sort of implying that you could move it from your existing SharePoint 2010 server to the new one).

I haven’t tried selecting the second option in SharePoint 2010.  In MOSS 2007, according to this blog post, you needed to remove the Central Administration web application from the original server before you got to this step on the new server. In the context of scaling out by adding an application server, that is probably what you would want to do.  If you choose to go this route, just make sure you have good backups before you delete the Central Admin site from the existing server.

For this walkthrough, you are going to leave Central Administration on the existing server:

image

 

image

 

image

Now the server has been joined to the farm and is a full-fledged farm member.  But, the Configure Your SharePoint Farm Wizard in Central Administration needs to run to add the service applications that exist in the farm to this new server.  So, it automatically fires up your browser and asks you to run the Farm Configuration Wizard:

image

 

After you start the wizard, it will just run for a while without any input from you and return this page if everything was successful:

image

 

Step 3 – Verifying that everything is running properly on the new server

It’s a good idea at this point to go verify that the new server is showing up as a member of the farm with a healthy status.  To do that go to Central Administration > System Settings > Manage Servers In This Farm and find the new server and verify that it has a “No action required” status:

image

 

image

 

Take a moment to breathe deep and pat yourself on the back.  You have done a lot of work to get to this point.  You now have a three-tier SharePoint 2010 farm.

But, there is more work to be done so that your three-tier farm has only the web page serving and query processing services running on the WFE and all of the other service applications running only on the Application Server.  Until you get that accomplished, the job is not done.

(Note: the farm will work and be fully functional if you stop here.  You will have the same Service Applications running on multiple servers and SharePoint 2010 will automatically use this topology as a load balancing technique for the Service Applications.  There may be some environments where this is desired.  But, most organizations will want to separate the web-serving services and the application-serving services to provide a better balance for the farm as a whole as opposed to just load balancing the Service Applications.)

Step 4 – Re-configure the servers to run the services that are appropriate for their individual roles

You want the Web Front-End to run these (and only these) services:

  1. Microsoft SharePoint Foundation Web Application (this is what turns IIS into a SharePoint “page-serving” machine)
  2. Search Query and Site Settings Service (the process that takes the user’s query string and looks it up in the index)
  3. SharePoint Server Search Service (but just the functionality that is necessary for the query processor)
  4. Central Administration (assuming you didn’t decide to move it to the Application Server)

You want the Application Server to run these (and only these) services:

  1. Access Database Service
  2. Application Registry Service
  3. Business Data Connectivity Service
  4. Excel Calculation Services
  5. Managed Metadata Web Service
  6. Microsoft SharePoint Foundation Incoming E-mail
  7. Microsoft SharePoint Foundation Workflow Timer Service
  8. PerformancePoint Service
  9. Secure Store Service
  10. SharePoint Server Search (but just the scheduled content crawling and index building functionality)
  11. User Profile Service
  12. Visio Graphics Service
  13. Web Analytics Data Processing Service
  14. Web Analytics Web Service
  15. Word Automation Services
  16. Word Viewing Service

If you can get this done and everything works properly, you will have achieved your overall goal.

(Important Note: Step 1 above is really the only step in the process that can be done during normal working hours.  Everything else has the potential to impact the availability of the system to the users.  If everything goes smoothly, it is possible to do Step 2 through Step 4 in two to four hours.  Of course, it is highly recommended to have solid backups in place before starting Step 2.)

For the most part, the re-configuration of the services involves stopping a lot of services on the WFE server (using the Services on Server page in Central Admin) and verifying that they are running on the new server (which they probably are because the Configure Your SharePoint Farm wizard started them up when you ran it in Step 2).  Then, you will want to make one last pass over the list of services running on the Application Server and make sure that the Microsoft SharePoint Foundation Web Application Service and the Search Query and Site Settings are not running on it.
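Much of this can be scripted as well. Here is a minimal sketch, using the server and service names from this example, that confirms one service is online on the application server and then stops it on the WFE:

    # Example: move the Managed Metadata Web Service off the WFE
    $svc = "Managed Metadata Web Service"
    # Confirm the service is online on the application server first
    Get-SPServiceInstance -Server "SPS-APPSVR" |
        Where-Object { $_.TypeName -eq $svc -and $_.Status -eq "Online" }
    # Then stop it on the web front-end
    Get-SPServiceInstance -Server "SPS-INTRANET" |
        Where-Object { $_.TypeName -eq $svc } |
        Stop-SPServiceInstance -Confirm:$false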

Adjusting the Search Application Topology

The exception to the statements of the previous paragraph is the search-related services:  SharePoint Server Search Service and Search Query and Site Settings Service.  Search is complicated enough that it has its own topology configuration settings.  You need to use this capability to place the query functionality of the SharePoint Server Search Service on the WFE and to place the crawling/indexing functionality of the service on the Application Server.

Since this is a little more complicated than the other Service Applications, go ahead and do this one first.

Navigate to the Search Administration home page in Central Administration.  Scroll down to the bottom of the page until you see the section titled Search Application Topology:

image

This part of the page shows you what servers the following four components of the Search service are running on:

  • Search Administration component
  • Crawling component (this is the crawling engine that crawls your content and builds full-text indexes from it)
  • Database component (as the crawling engine crawls through the content, it stores the full-text indexes in SQL Server.  It also compiles the full-text indexes into special non-SQL files that can be propagated up to the WFE)
  • Query component (this is the component that receives the user’s query and looks up the results in the special files that have been propagated to the hard drive of the WFE)

The Server Name column shows that the Search Administration, Crawl, and Query components are currently running on the existing server (SPS-INTRANET in the example).  The search-related databases are running on the SQL Server.

You want to do the following:

  1. Move the Search Administration component to the new Application Server
  2. Move the Crawl component to the new Application Server
  3. Leave the Database component running on the SQL Server
  4. Leave the Query component running on the WFE

To accomplish this, click on the Modify button to go to the Topology for Search Service Application page:

image

By hovering your mouse over the component lines, you can bring up a drop down menu and select Edit Properties for the components you want to move to the new server.

Do this now for the Search Administration component:

image

 

Now do it the same way for the Crawl component (screen shot is the same as the one above).

 

Once you have changed the server assignments for these two components, you need to kick off the actual transfer of responsibilities by clicking on Apply Topology Changes:

image

 

The actual transfer of responsibilities begins:

image

When it is finished, you will be returned to the Search Administration home page and you should see that the components have been transferred as directed and all of the search-related servers should have a status of “Online”:

image

Note:  I am not sure why, but this page never shows anything in the Status column for the Databases component.  So, it is normal for that column to be blank for that component.
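The same topology moves can be sketched from the SharePoint 2010 Management Shell. Treat this as a rough outline only: in SharePoint 2010 the administration component is moved directly, while crawl changes are made by building a new crawl topology and activating it. Server names are from this example, and the exact parameter sets are worth verifying against your build:

    $ssa    = Get-SPEnterpriseSearchServiceApplication
    $appSvr = Get-SPEnterpriseSearchServiceInstance -Identity "SPS-APPSVR"

    # Move the search administration component to the application server
    Set-SPEnterpriseSearchAdministrationComponent -SearchApplication $ssa -SearchServiceInstance $appSvr

    # Build a new crawl topology with a crawl component on the application server,
    # reusing the existing crawl database, then activate it
    $crawlDb = Get-SPEnterpriseSearchCrawlDatabase -SearchApplication $ssa
    $newTopo = New-SPEnterpriseSearchCrawlTopology -SearchApplication $ssa
    New-SPEnterpriseSearchCrawlComponent -SearchApplication $ssa -CrawlTopology $newTopo -CrawlDatabase $crawlDb -SearchServiceInstance $appSvr
    Set-SPEnterpriseSearchCrawlTopology -Identity $newTopo -Active
    # Once the new topology is active, the old crawl topology can be removed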

 

Transferring the remaining Service Applications

All that is left is to use the Services on Server page in Central Administration to make sure the list of services running on each server matches your master lists from Step 4 above.

To do this, you use the Server drop-down control to select the server you want to adjust, and then use the Start/Stop link in the Action column to turn on/off the services.

Here is what your Services on Server page should look like once it has been properly adjusted for each server:

For the Web Front-End (SPS-INTRANET in this example):

image

 

For the Application Server (SPS-APPSVR in this example):

image

 

If you navigate to the Servers in Farm page of Central Administration, you will see a more succinct view of your new farm topology:

image

 

Step 5 – Testing and Verifying

Even though you are ready to head out the door and head home since you are probably doing this on a night or weekend, it is really important to fight the urge to leave too soon.  You really need to do some basic testing and verification before you leave.  It will be a lot better to find out about any problems now rather than when the next business day has already started.

Here is what I recommend doing before you leave:

  1. Browse to each of your SharePoint web applications and log in with your user account and make sure you can hit the home page of each of them.
  2. While you are there, try to open up and edit a document in the browser using one of the Office 2010 Web Apps (Word, PowerPoint, Excel or OneNote).
  3. Browse to your My Site and verify that everything is working normally.
  4. Add a unique phrase to a test page somewhere in one of your Sites (I always use the phrase “jabborwocky”) and then go run an incremental Search crawl from Central Administration (or kick it off from PowerShell; see the sketch after this list).  After the crawl completes, go back to your Site Collection and search for the phrase.  Verify that it comes up in the results.
  5. Run an incremental User Profile Synchronization from the User Profile Administration page.  While it is running, log on to the desktop of the new Application Server, and find this program and run it:  C:\Program Files\Microsoft Office Servers\14.0\Synchronization Service\UIShell\miisclient.exe.  This is the Forefront Identity Manager (FIM) client application that you can use to see the details of the AD synchronization process.  Several jobs will be run by FIM.  Verify that they all complete successfully with no error messages.
  6. In Central Administration, go into Manage Service Applications and click on Managed Metadata Service and select Manage in the ribbon.  Verify that the Term Store management interface loads and that you can add/change/delete a Term Set and some Terms.
  7. Finally, reboot your WFE and Application Server.  When they come back up, check your Windows System and Application event logs on those servers and verify that there are no SharePoint-related critical or warning events that you haven’t seen before you scaled out to three tiers.
  8. Browse to your primary web application one more time before you head out the door.
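For item 4, the incremental crawl can also be started from the SharePoint 2010 Management Shell. A minimal sketch, assuming the default content source name “Local SharePoint sites”:

    # Start an incremental crawl of the default content source
    $ssa = Get-SPEnterpriseSearchServiceApplication
    $cs  = Get-SPEnterpriseSearchCrawlContentSource -SearchApplication $ssa -Identity "Local SharePoint sites"
    $cs.StartIncrementalCrawl()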

 

I hope this blog post is a good resource for those SharePoint Server Administrators who find themselves needing to scale out to the next level!

How to install and configure SharePoint Server 2010 SP1 on an existing SharePoint 2010 farm

by Adnan Ahmed on 05/07/2011 12:39

This article assumes that SharePoint Server 2010 is already installed and configured. You can download SharePoint Foundation 2010 SP1 and SharePoint Server 2010 SP1 from the following locations:

Upgrading an existing SharePoint Server 2010 farm to SharePoint Server 2010 SP1 consists of the following three steps:

 

Install SharePoint Foundation 2010 SP1

 

Follow the steps below to install SharePoint Foundation 2010 SP1:

  1. Log into the SharePoint application server by using the SharePoint Setup account and execute the sharepointfoundation2010sp1-kb2460058-x64-fullfile-en-us.exe file.
  2. SharePoint Foundation 2010 SP1 installation wizard will start.
  3. Tick Click here to accept the Microsoft Software License Terms and click Continue.

     

  4. SharePoint Foundation 2010 SP1 installation is in progress…

     

     

  5. Once SharePoint Foundation 2010 SP1 is installed, the installer will request that you reboot the server. You should ignore this message and click No.

     

     

  6. Now log into the remaining SharePoint Web/App servers in the farm and follow the above steps.

     

     

Follow the steps below to install SharePoint Server 2010 SP1:

 

  1. Log into the SharePoint application server by using the SharePoint Setup account and execute the officeserver2010sp1-kb2460045-x64-fullfile-en-us.exe file.
  2. SharePoint Server 2010 SP1 installation wizard will start.

     

     

  3. Tick Click here to accept the Microsoft Software License Terms and click Continue.
  4. SharePoint Server 2010 SP1 installation is in progress…

     

     

  5. If the SharePoint Server 2010 SP1 installation fails for any reason, as shown in the figure below, you need to re-execute the officeserver2010sp1-kb2460045-x64-fullfile-en-us.exe file.

     

     

  6. SharePoint Server 2010 SP1 installation is in progress…

     

  7. SharePoint Server 2010 SP1 installation is in progress…

     

     

  8. Now log into the remaining SharePoint Web/App servers in the farm and follow the above steps.

     

     

Follow the steps below to run the SharePoint configuration wizard:

 

  1. Log into the SharePoint application server by using the SharePoint Setup account and run the SharePoint configuration wizard.
  2. Click Next to proceed…

     

     

  3. You will be informed that the IIS, SharePoint Timer and Administration services will be reset during the configuration wizard. Click Yes and then click Next to proceed…

     

     

  4. On the next screen, click Next to proceed…

     

     

  5. The SharePoint configuration wizard will start upgrading the farm. Please be patient, as it may take 15-20 minutes depending upon the connected content databases on the farm.

     

     

  6. Unfortunately, the configuration wizard failed in my case, as shown in the figure below.

     

     

  7. Click Finish to complete the wizard and analyse the log file.
  8. In my case, the configuration wizard failed because of the following errors.

     

     

  9. Re-run the SharePoint configuration wizard. This time the wizard completes successfully.

     

     

  10. Now verify that the SharePoint 2010 farm has been upgraded to the SP1 level by checking the version number of the Microsoft.Office.Server.dll and Microsoft.SharePoint.dll files and on the Central Administration web site (or use the PowerShell check sketched after this list). See the screen shots below for details:

     

     

     

     

    I do hope that this article will help you to upgrade your existing SharePoint Server 2010 farm. Please share your experience of upgrading a SharePoint Server 2010 farm to the SP1 level.
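If you would rather verify the upgrade from the command line, a quick check from the SharePoint 2010 Management Shell is sketched below (to my knowledge the SP1 farm build is 14.0.6029.1000, but verify the expected number for your patch mix):

    # Farm build version; SharePoint Server 2010 SP1 reports 14.0.6029.1000
    (Get-SPFarm).BuildVersion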

You should restart the servers after running the SP Config Wizard.

Source:

http://www.sp-blogs.com/blogs/adnan/Lists/Posts/Post.aspx?ID=8

Upgrade from a SharePoint Server 2010 Standard CAL to an Enterprise CAL

Updated: September 30, 2010

This article provides information and procedures on how to upgrade from a Microsoft SharePoint Server 2010 Standard client access license (CAL) to an Enterprise CAL.

In this article:

Process overview

The same Setup program can install both the Standard and Enterprise editions of SharePoint Server 2010. It is the product key that you enter when you run Setup that determines which set of features is available for use. If you installed SharePoint Server 2010 by using a Standard CAL, and now want to convert the license type to the Enterprise CAL, you can enable and then push the Enterprise feature set to all sites in your server farm.

If you are unsure about upgrading and want to evaluate the different feature sets, we recommend that you configure a separate installation and deploy SharePoint Server 2010 Trial Version. To download the trial version, see SharePoint Server 2010 Trial (http://go.microsoft.com/fwlink/p/?LinkId=196695).

Before you perform the following procedures, confirm that you have purchased the Enterprise CAL.

View the list of features that are included in each license type

Features that are available with the Standard license type include the following:

  • Collaboration
  • Enterprise content management
  • Workflow
  • My Sites
  • Profiles and personalization
  • Enterprise search
  • Business Data Catalog

Additional features that are available with the Enterprise license type include the following:

  • Access Services
  • Excel Services
  • Visio Services
  • Forms Services
  • PerformancePoint Services

For a complete list of the features that are available in the two CALs, see Compare SharePoint Editions (http://go.microsoft.com/fwlink/p/?LinkId=196571).

Enable Enterprise features on existing sites

To convert the license type to the Enterprise CAL, you enable the Enterprise features on the SharePoint Central Administration Web site. Any new sites that you create will automatically have these features. However, existing sites do not receive the Enterprise feature set until you perform the steps to enable the features on existing sites. You have to perform these procedures only one time to update all sites in the server farm.

This procedure uses a SharePoint 2010 Timer service and may take a long time to complete, depending on the number of sites in the server farm.

To enable Enterprise features for the server farm

  1. Verify that you have the following administrative credentials:

    • To enable enterprise features, you must be a member of the Farm Administrators group on the computer that is running Central Administration.
  2. On the Central Administration Web site, click Upgrade and Migration.

  3. In the Upgrade and Patch Management section, click Enable Enterprise Features.

  4. Enter the product key, and then click OK.

After you have enabled the features for the farm, you can enable the features on existing sites in the farm.

To enable Enterprise features on existing sites by using Central Administration

  1. Verify that you have the following administrative credentials:

    • To enable enterprise features on existing sites, you must be a member of the Farm Administrators group on the computer that is running Central Administration.
  2. On the Central Administration Web site, click Upgrade and Migration.

  3. In the Upgrade and Patch Management section, click Enable Features on Existing Sites.

  4. On the Enable Features on Existing Sites page, select the Enable all sites in this installation to use the following set of features check box, and then click OK.

To enable Enterprise features on existing sites by using Windows PowerShell

  1. Verify that you meet the following minimum requirements: See Add-SPShellAdmin.

  2. On the Start menu, click All Programs.

  3. Click Microsoft SharePoint 2010 Products.

  4. Click SharePoint 2010 Management Shell.

  5. At the Windows PowerShell command prompt, type the following command:

    Enable-SPFeature [-Identity] <FeatureID> [-URL] <site URL>
    

     

    Where:

    • <Identity> specifies the name of the feature or GUID to install.
    • <URL> specifies the URL of the Web application, site collection, or Web site for which the feature is being activated.

     

    Example

    Enable-SPFeature -Identity MyCustom -URL http://somesite
    

For more information, see Enable-SPFeature.
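If you have many existing site collections, the same cmdlet can be looped over the farm. A minimal sketch, assuming PremiumSite is the name of the SharePoint Server 2010 Enterprise site collection feature set:

    # Activate the Enterprise site collection features on every site collection
    Get-SPSite -Limit All | ForEach-Object {
        Enable-SPFeature -Identity "PremiumSite" -Url $_.Url -ErrorAction SilentlyContinue
    }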

Note:

We recommend that you use Windows PowerShell when performing command-line administrative tasks. The Stsadm command-line tool has been deprecated, but is included to support compatibility with previous product versions.

 

Verification

Use the following procedure to verify that the enterprise features have been enabled on existing sites.

To verify that enterprise features are enabled on existing sites

  1. Verify that you have the following administrative credentials:

    • To verify that enterprise features are enabled on existing sites, you must be a member of the Farm Administrators SharePoint group on the computer that is running Central Administration.
  2. On the site collection Web site, on the Site Actions menu, click Site Settings.

  3. On the Site Settings page, in the Site Administration section, click Site features.

    In the Status column for SharePoint Server Enterprise Site features, ensure that Active appears.

IP Subnet Calculator

The IP Subnet Mask Calculator enables subnet network calculations using network class, IP address, subnet mask, subnet bits, mask bits, maximum required IP subnets and maximum required hosts per subnet.

Results of the subnet calculation provide the hexadecimal IP address, the wildcard mask for use with ACLs (Access Control Lists), the subnet ID, the broadcast address, the subnet address range for the resulting subnet network and a subnet bitmap.

For classless supernetting, please use the CIDR Calculator. For classful supernetting, please use the IP Supernet Calculator. For simple ACL (Access Control List) wildcard mask calculations, please use the ACL Wildcard Mask Calculator.

Note:
These online network calculators may be used totally free of charge provided their use is from this URL (www.subnet-calculator.com).

Notes about the Subnet Calculator

  1. The subnet calculator implements a classful (classed) IP addressing scheme in which the following rules are adhered to:
    • Class A addresses have their first octet in the range 1 to 126 (binary address begins with 0).
    • Class B addresses have their first octet in the range 128 to 191 (binary address begins with 10).
    • Class C addresses have their first octet in the range 192 to 223 (binary address begins with 110).
  2. The subnet calculator allows the use of a single subnet bit – for example, a class C address with a subnet mask of 255.255.255.128 is permitted.
  3. The subnet calculator allows a subnet ID to have its final octet equal to the final octet of its subnet mask – for example, a class C network address of 192.168.0.192 with a subnet mask of 255.255.255.192 is permitted.

The above is generally accepted as normal; however, certification students should keep in mind that some certification programs regard the final two points as unacceptable.

For classless subnetting, you can use the CIDR calculator.
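The underlying arithmetic is simple bitwise masking, which the following PowerShell sketch illustrates for the class C example from note 3 (a host at the assumed address 192.168.0.200 with mask 255.255.255.192):

    # Derive the subnet ID, wildcard mask, and broadcast address bitwise.
    $ipBytes   = [System.Net.IPAddress]::Parse("192.168.0.200").GetAddressBytes()
    $maskBytes = [System.Net.IPAddress]::Parse("255.255.255.192").GetAddressBytes()

    $subnet    = 0..3 | ForEach-Object { $ipBytes[$_] -band $maskBytes[$_] }   # IP AND mask
    $wildcard  = 0..3 | ForEach-Object { 255 - $maskBytes[$_] }                # inverse of mask
    $broadcast = 0..3 | ForEach-Object { $subnet[$_] -bor $wildcard[$_] }      # subnet OR wildcard

    "Subnet ID:         $($subnet -join '.')"      # 192.168.0.192
    "Wildcard mask:     $($wildcard -join '.')"    # 0.0.0.63
    "Broadcast address: $($broadcast -join '.')"   # 192.168.0.255
    # Usable host range: 192.168.0.193 through 192.168.0.254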

Using Group Nesting Strategy – AD Best Practices for Group Strategy

Accessing resources across forests

When two Windows Server 2003 forests are connected by a forest trust, authentication requests made using the Kerberos V5 or NTLM protocols can be routed between forests to provide access to resources in both forests. For more information about routing authentication requests across forests, see Routing name suffixes across forests.

Before authentication protocols can follow the forest trust path, the service principal name (SPN) of the resource computer must be resolved to a location in the other forest. An SPN can be one of the following:

  • Domain Name System (DNS) name of a host
  • DNS name of a domain
  • Distinguished name of a service connection point object

When a workstation in one forest attempts to access data on the resource computer in another forest, Kerberos contacts the domain controller for a service ticket to the SPN of the resource computer. Once the domain controller queries the global catalog and identifies that the SPN is not in the same forest as the domain controller, the domain controller sends a referral for its parent domain back to the workstation. At that point, the workstation queries the parent domain for the service ticket and follows the referral chain until it gets to the domain where the resource is located.

The following figure and corresponding steps provide a detailed description of the Kerberos authentication process that is used when computers running Windows 2000 Professional, Windows XP Professional, Windows 2000 Server, or a member of the Windows Server 2003 family attempt to access resources from a computer located in another forest.

Kerberos authentication between forests in a trust

  1. User1 logs on to Workstation1 using credentials from the child.microsoft.com domain. The user then attempts to access a shared resource on FileServer1 located in the msn.com forest.
  2. Workstation1 contacts the Key Distribution Center (KDC) on a domain controller in its domain (ChildDC1) and requests a service ticket for the FileServer1 SPN.
  3. ChildDC1 does not find the SPN in its domain database and queries the global catalog to see if any domains in the microsoft.com forest contain this SPN. Since a global catalog is limited to its own forest, the SPN is not found. The global catalog then checks its database for information about any forest trusts that are established with its forest, and, if found, it compares the name suffixes listed in the forest trust trusted domain object (TDO) to the suffix of the target SPN to find a match. Once a match is found, the global catalog provides a routing hint back to ChildDC1.
  4. ChildDC1 sends a referral for its parent domain back to Workstation1.
  5. Workstation1 contacts a domain controller in ForestRootDC1 (its parent domain) for a referral to a domain controller (ForestRootDC2) in the forest root domain of the msn.com forest.
  6. Workstation1 contacts ForestRootDC2 in the msn.com forest for a service ticket to the requested service.
  7. ForestRootDC2 contacts its global catalog to find the SPN, and the global catalog finds a match for the SPN and sends it back to ForestRootDC2.
  8. ForestRootDC2 then sends the referral to child.msn.com back to Workstation1.
  9. Workstation1 contacts the KDC on ChildDC2 and negotiates the ticket for User1 to gain access to FileServer1.
  10. Now that Workstation1 has a service ticket, it presents the ticket to FileServer1, which reads User1’s security credentials and constructs an access token accordingly.

When a forest trust is first established, each forest collects all of the trusted namespaces in its partner forest and stores the information in a TDO. Trusted namespaces include domain tree names, user principal name (UPN) suffixes, service principal name (SPN) suffixes, and security ID (SID) namespaces used in the other forest. TDO objects are replicated to the global catalog. For more information about TDOs, see Trusts.
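The contents of forest trust TDOs can be inspected directly. As a minimal sketch, assuming the ActiveDirectory PowerShell module (which shipped later, with Windows Server 2008 R2, but works against earlier forests), the following lists forest-transitive trusts known to the current domain:

    # List forest trust TDOs; forest-transitive trusts are the ones whose
    # stored name suffixes are consulted when building routing hints.
    Import-Module ActiveDirectory
    Get-ADTrust -Filter { ForestTransitive -eq $true } |
        Select-Object Name, Direction, Source, Target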

Routing hints

Routing hints are used only when the traditional authentication channels (the local domain controller and then the global catalog) have failed to locate an SPN. Routing hints help direct authentication requests toward the destination forest.

When an SPN cannot be located in the domain from where the network logon request originated or from the global catalog database, the global catalog checks the forest trust TDO for trusted name suffixes located in the other forest that might match the suffix in the SPN. If a match is found, the forest root domain returns a routing hint back to the original source computer so that it can continue the SPN location process in the other forest.

Notes

  • Routing hints can only reference trusted name suffixes that are listed in the TDO for its forest trust. They do not verify the name suffix before sending the hint back to the original source computer.
  • Accessing NetBIOS names and Kerberos delegation across forest trusts is not supported. NTLM is fully supported and cannot be disabled.

Planning your access control strategy for multiple forests

It is recommended that you carefully plan the most efficient access control strategy for your organization’s resource needs. Your design and implementation of security groups throughout each forest will be an important factor to consider during your planning. For information about planning an access control strategy for multiple domains, see Accessing resources across domains.

It is important to understand the following security group concepts before you begin the planning process:

  • Security groups. User rights can be applied to groups in Active Directory while permissions can be assigned to security groups on member servers hosting a resource. For more information, see Group types.
  • Group nesting. The ability to nest security groups depends on group scopes and domain functionality. For more information, see Nesting groups.
  • Group scope. Group scope helps determine the domain-wide and forest-wide access boundaries of security groups. For more information, see Group scope.
  • Domain functionality. The domain functional level of the trusting and trusted domains can affect group functionality such as group nesting. For more information, see Domain and forest functionality.

Once you have gained a thorough understanding of security group concepts, determine the resource needs of each department and geographical division to assist you with the planning effort.

Best practices for using security groups across forests

By carefully using domain local, global, and universal groups, administrators can more effectively control access to resources located in other forests. Consider the following best practices:

  • To represent the sets of users who need access to the same types of resources, create role-based global groups in every domain and forest that contains these users. For example, users in the Sales Department in ForestA require access to an order-entry application that is a resource in ForestB. Account Department users in ForestA require access to the same application, but these users are in a different domain. In ForestA, create the global group SalesOrder and add users in the Sales Department to the group. Create the global group AccountsOrder and add users in the Accounting Department to that group.
  • To group the users from one forest who require similar access to the same resources in a different forest, create universal groups that correspond to the global group roles. For example, in ForestA, create a universal group called SalesAccountsOrders and add the global groups SalesOrder and AccountsOrder to the group.
    Note
    Universal groups are not available as security groups in Windows 2000 Server mixed-mode domains or in Windows Server 2003 domains that have a domain functional level of Windows 2000 mixed. They are available as distribution groups.

  • To assign permissions to resources that are to be accessed by users from a different forest, create resource-based domain local groups in every domain and use these groups to assign permissions on the resources in that domain. For example, in ForestB, create a domain local group called OrderEntryApp. Add this group to the access control list (ACL) that allows access to the order entry application, and assign appropriate permissions.
  • To implement access to a resource across a forest, add universal groups from trusted forests to the domain local groups in the trusting forests. For example, add the SalesAccountsOrders universal group from ForestA to the OrderEntryApp domain local group in ForestB.

When a new user account needs access to a resource in a different forest, add the account to the respective global group in the domain of the user. When a new resource needs to be shared across forests, add the appropriate domain local group to the ACL for that resource. In this way, access is enabled across forests for resources on the basis of group membership.

For more information, see Set permissions on a shared resource.
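As a concrete sketch of the global-to-universal-to-domain-local pattern above, the following uses the ActiveDirectory PowerShell module (which shipped after this documentation, with Windows Server 2008 R2); the group names come from the examples in the list, not from any real directory:

    # In ForestA: role-based global groups, nested into one universal group.
    New-ADGroup -Name "SalesOrder"          -GroupScope Global    -GroupCategory Security
    New-ADGroup -Name "AccountsOrder"       -GroupScope Global    -GroupCategory Security
    New-ADGroup -Name "SalesAccountsOrders" -GroupScope Universal -GroupCategory Security
    Add-ADGroupMember -Identity "SalesAccountsOrders" -Members "SalesOrder", "AccountsOrder"

    # In ForestB: a resource-based domain local group that is placed on the
    # resource ACL. The universal group from ForestA is then added to it
    # across the forest trust (cross-forest members resolve as foreign
    # security principals, so that step is typically done in the GUI or by SID).
    New-ADGroup -Name "OrderEntryApp" -GroupScope DomainLocal -GroupCategory Security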

Selective authentication between forests

Using Active Directory Domains and Trusts, you can determine the scope of authentication between two forests that are joined by a forest trust. You can set selective authentication differently for outgoing and incoming forest trusts. With selective trusts, administrators can make flexible forest-wide access control decisions. For more information about how to set selective authentication, see Select the scope of authentication for users.

If you use forest-wide authentication on an incoming forest trust, users from the outside forest have the same level of access to resources in the local forest as users who belong to the local forest. For example, if ForestA has an incoming forest trust from ForestB and forest-wide authentication is used, users from ForestB would be able to access any resource in ForestA (assuming they have the required permissions).

If you decide to set selective authentication on an incoming forest trust, you must manually assign permissions on each computer in the domain, as well as on each resource, to which you want users in the second forest to have access. To do this, set the Allowed to authenticate control access right on the computer object that hosts the resource, in Active Directory Users and Computers in the second forest. Then allow user or group access to the particular resources you want to share.
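As a hedged illustration, the same control access right can be granted from the command line with dsacls.exe; the distinguished name and group below are hypothetical examples drawn from the scenario above, not real objects:

    # Grant "Allowed to Authenticate" on the resource computer object in
    # ForestB to a universal group from the trusted forest (CA = grant a
    # control access right by its display name).
    dsacls "CN=FileServer1,CN=Computers,DC=forestb,DC=com" /G "FORESTA\SalesAccountsOrders:CA;Allowed to Authenticate"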

When a user authenticates across a trust with the Selective authentication option enabled, an Other Organization security ID (SID) is added to the user’s authorization data. The presence of this SID prompts a check on the resource domain to ensure that the user is allowed to authenticate to the particular service. Once the user is authenticated, the server to which he or she authenticates adds the This Organization SID if the Other Organization SID is not already present. Only one of these special SIDs can be present in an authenticated user’s context. For more detailed information about how selective authentication works, see Security Considerations for Trusts.

Administrators in each forest can add objects from one forest to access control lists (ACLs) on shared resources in the other forest. You can use the ACL editor to add objects residing in one forest to, or remove them from, ACLs on resources in another forest. For more information about how to set permissions on resources, see Set permissions on a shared resource.

For information about setting authentication restrictions for external domains, see Accessing resources across domains.

Credits from:

http://theos.in/windows-xp/free-fast-public-dns-server-list/
