ATM-to-the-Desktop Environment

1 Preface
This Redpaper describes experiences gained during the installation of an ATM-to-the-desktop network in a Windows NT 4.0 Server/Workstation environment. It covers the physical and logical network design, the reasons behind it, the problems faced and how they were solved. It also describes the dependencies and peculiarities of a multiprotocol and/or multihomed Windows NT installation running on an ATM-to-the-desktop emulated LAN. Hopefully this document will help readers avoid these problems in other installations.


2 About the Author
Matthias Enders is a Networking Specialist in Germany. He has nine years of experience in the networking field. His areas of expertise include TCP/IP, campus LAN products and LAN protocol analysis, as well as multiprotocol network design and implementation. He has contributed to the following redbooks:
- TCP/IP Tutorial and Technical Overview, fifth edition
- IBM Nways RouteSwitch Implementation Guide, first edition
3 Acknowledgments
I would like to thank the following colleagues for their invaluable help and support:
- Marc Gerbrecht, IBM PSS Software Support, Mainz, MCSE
- Niels Junge, IBM PSS NSDU, Frankfurt, MCSE
4 Why ATM to the Desktop in this Installation?
About a year ago the customer moved their whole business to a newly built location. This move gave them the unique opportunity to build the whole network and PC client/server environment almost from scratch. The whole institution was equipped with new PCs and servers. They also decided to migrate from Novell NetWare to Windows NT 4.0 as the network operating system for file and print services.
On the other hand, they had to provide an infrastructure that met their current requirements and, just as important, had the potential to support their medium and long term applications. The most important requirements were:
- Support for their newly developed multimedia application. This highly sophisticated system had to be able to deliver many different audio and video data streams simultaneously to hundreds of workstations.
- Multiprotocol support for IP, IPX and DLC.
- The new network had to be very flexible to adapt easily to modifications of the logical structure. They had only eight NIC-assigned class C networks available for the whole intranet.
- Seamless LAN/WAN integration of two remote sites also running an ATM network. One of these sites had to have access to the multimedia application as well.
- The network had to provide end-to-end QoS for future multimedia applications, even in this heavily subnetted IP structure where many client/server communications relied on inter-subnet connections.
- The backbone had to be very scalable, since nobody was able to forecast the amount of bandwidth needed to drive good quality audio and video within the whole building.
At the time the decision had to be made, two solutions were technically feasible, and both were included in the customer's invitation to tender: a fully switched Fast Ethernet network or an ATM-to-the-desktop network. As they compared the different tenders, it turned out that the hardware and labor costs for both technologies were almost equal at that time. Finally they chose the ATM solution, since this technology came closer to their current and future requirements.
5 The Physical Network
This chapter describes the physical network topology, the network devices used, their code levels and the redundancy features.
5.1 Detailed Physical Network Topology
As with most ATM networks, the physical topology was very simple. The backbone consisted of two IBM 8265-17S switches connected via two OC-12 links. All 500 client PCs were equipped with an IBM Turboways 25 Mbit/s PCI NIC that connected to one of the 24 IBM 8285-001 ATM switches.
Every IBM 8285 was connected to at least one of the two backbone switches via an OC-3 link. Thirteen of the IBM 8285 switches had an expansion chassis attached; these switches were connected to both backbone switches for link redundancy and bandwidth demands. There was also an IBM 8260-A17 with three OC-3 links to each backbone switch, installed because of the high demand for 25 Mbit/s ports in that particular wiring closet.
All LAN emulation and network services were provided by two IBM 8210-001, each equipped with two ATM adapters. The legacy Ethernet attachment was done by an IBM 8271-216 and a three-slot-wide IBM 8271 ATM/LAN Switch blade in one of the IBM 8265's compatibility slots. All available feature slots of the IBM 8271s carried a three-port 10Base-FL UFC for concentration of all IBM 8224 hubs located in every wiring closet. These hubs were used to provide a legacy LAN attachment for network printers and the UPS management NIC, and for testing purposes.
All ATM-attached servers were directly connected to one of the backbone switches via an OC-3 interface. We used Olicom OC-615x adapters for all servers, since they were Microsoft NT 4.0 certified. All non-ATM-attached servers had a dedicated Ethernet port on one of the IBM 8271s. Both remote ATM network sites were connected through the ATM network of a service provider. Therefore, one IBM 8265 held a WAN2 module with E1 ports in a compatibility slot.
5.2 Code Levels Used
Device                                               Code Level
Olicom 615x 155 Mbit/s ATM Adapter (Windows NT 4.0)  V4.06
IBM Turboways 25 Mbit/s ATM Adapter (Windows NT 4.0) V2.3.1
IBM 8285-001                                         V3.2.0
IBM 8271-216 / ATM UFC                               V5.1 / V1.15.0
IBM 8265-17S                                         V3.3.5
IBM 8260-A17                                         V3.2.0
IBM 8210-001                                         V1.2.1 PTF5
5.3 Physical Network Redundancy
Both IBM 8265s and the IBM 8260 were equipped with all possible redundancy features: redundant control points, controller modules and n+1 power supplies. All major end-station concentration points (8285/8260) had at least two backbone connections. There were two IBM 8210s in a fully redundant configuration. All network devices were connected to a UPS to protect against power drops.
6 ATM Network Configuration
This chapter describes some peculiarities of the ATM switch configuration.
6.1 ATM Address Prefix
To avoid future ATM addressing conflicts, the customer requested a subpart of a registered DCC network prefix from a service provider. They were assigned a unique 11-byte prefix, so bytes 12 and 13 were used to build the internal addressing scheme. They decided to use byte 12 to indicate the PNNI peer group and byte 13 for ATM switch addresses within a peer group. Each location got its own peer group ID in order to minimize PNNI routing traffic over the WAN links. The headquarters peer group consisted of 27 switches. The interconnection of all three peer groups via WAN links was done by IISP.
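As a minimal sketch of how such a prefix scheme can be composed (the 11-byte provider prefix below is a made-up placeholder, not the customer's real assignment), the addressing rule can be expressed like this in Python:

# Sketch of the 13-byte prefix scheme: provider prefix + peer group (byte 12) + switch (byte 13).
# The provider prefix is a hypothetical placeholder.
PROVIDER_PREFIX = bytes.fromhex("39de0000000000000000aa")  # 11 bytes, DCC format (made up)

def switch_prefix(peer_group: int, switch_id: int) -> str:
    """Return the 13-byte ATM address prefix for one switch as a hex string."""
    if not (0 <= peer_group <= 0xFF and 0 <= switch_id <= 0xFF):
        raise ValueError("peer group and switch id must each fit into one byte")
    return (PROVIDER_PREFIX + bytes([peer_group, switch_id])).hex()

# Example: switch 0x03 in peer group 0x01
print(switch_prefix(0x01, 0x03))

The remaining seven bytes of the full 20-byte ATM address (the 6-byte ESI plus the selector byte) are then registered by the end systems themselves via ILMI.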
6.2 ATM WAN Connection
The ATM network service provider offered the following services:
- Two physical PDH-based E1 links, G.703
- Symmetrical VBR with a PCR/SCR of 4420/2210 cps (both directions had the same cell rates)
- A virtual path to each remote location with a maximum of 300 concurrent SVCs (VP tunneling)
- Local VPIs 14 and 15, one for each physical link and remote location respectively
We mapped these given WAN specifications to the following IBM 8265 configuration:
set port 1.1 enable void VPI_VCI: 4.8 shaping: 912
set port 1.3 enable void VPI_VCI: 4.8 shaping: 912
A 12-bit VPI.VCI range is supported for WAN ports. We had to spend four bits on the VPI range in order to support the given VPIs 14 and 15. Therefore, SVCs are allocated in the range 14.32...14.255 (see the short sketch at the end of this section). We didn't use the VPI_OFFSET parameter to increase the possible number of VCIs, since there was an IBM 8260 on the other side of the link that didn't support this parameter.
set VPC_LINK 1.1 14 enable IISP network bandwidth: 912 ILMI_VCI: NONE
set VPC_LINK 1.3 15 enable IISP network bandwidth: 912 ILMI_VCI: NONE
set reachable_address 1.1 12 39. ... .02 VPI: 14
set reachable_address 1.3 12 39. ... .03 VPI: 15
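The arithmetic behind this bit split can be sketched in a few lines of Python (purely illustrative; this helper is not part of any switch tooling):

# Splitting the 12-bit VPI.VCI budget of a WAN port.
TOTAL_BITS = 12          # VPI bits + VCI bits available on the WAN port
RESERVED_VCIS = 32       # VCIs 0..31 are reserved (signalling, ILMI, etc.)

def vpi_vci_split(required_vpis):
    """Return (vpi_bits, vci_bits, usable VCI range) for the given VPIs."""
    vpi_bits = max(required_vpis).bit_length()   # VPI 15 needs 4 bits
    vci_bits = TOTAL_BITS - vpi_bits             # 12 - 4 = 8 bits
    return vpi_bits, vci_bits, (RESERVED_VCIS, 2 ** vci_bits - 1)

vpi_bits, vci_bits, (lo, hi) = vpi_vci_split([14, 15])
print(f"VPI bits: {vpi_bits}, VCI bits: {vci_bits}")   # VPI bits: 4, VCI bits: 8
print(f"SVCs allocated in VCI range {lo}..{hi}")       # 32..255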
6.3 Calculation of the Shaping Bandwidth
Since the ATM network provider offered a VBR service, but we just tunneled UBR traffic from the LAN emulation services through the VP, we had to make sure that the cell rate never exceeded the given VBR limits. Otherwise the provider would discard all cells above the limit. To meet the provider's specifications we used the intelligent shaping function provided on WAN ports. To avoid any cell dropping at the provider, we based our bandwidth calculation on the Sustainable Cell Rate (SCR):
BW = SCR * 53 * 8 / 1024
where:
BW = bandwidth in kbps
SCR = Sustainable Cell Rate in cps
53 = ATM cell length in bytes
8 = bits per byte
The entered shaping bandwidth is automatically adjusted to a multiple of 8 kbps by the IBM 8265. We defined a shaping bandwidth of 912 kbps, which corresponded to the SCR of 2210 cps. We also made some tests with higher bandwidths to push the limits a little further and defined a bandwidth that corresponded to a cell rate 10 % below the PCR. Pings with 64 bytes did fine, but larger ones didn't come through, since cells were dropped randomly by the provider. Therefore, we decided to keep the SCR-based bandwidth, since no cells were dropped even when the link was saturated.
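The following small Python sketch reproduces this calculation for the given SCR of 2210 cps. Rounding down to a multiple of 8 kbps is our assumption of how to stay at or below the SCR; it matches the 912 kbps value we configured.

# Sketch of the shaping bandwidth calculation described above.
CELL_BYTES = 53          # ATM cell length in bytes
BITS_PER_BYTE = 8

def shaping_bandwidth_kbps(scr_cps: int) -> int:
    """Bandwidth for a given sustainable cell rate, rounded down to a multiple of 8 kbps."""
    bw = scr_cps * CELL_BYTES * BITS_PER_BYTE / 1024   # kbps, as in the formula above
    return int(bw) // 8 * 8                            # stay at or below the SCR

print(shaping_bandwidth_kbps(2210))   # -> 912, the configured value
print(shaping_bandwidth_kbps(4420))   # -> 1824, the corresponding figure for the PCR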
6.4 PNNI Configuration
We set the VPI.VCI bits for every PNNI port to 0.14 (the default is 4.10), since most ports had to support more than 1000 simultaneous SVCs and the PNNI implementation supports VPI=0 for SVC allocation only.
Another thing we experienced was the influence of ILMI on IBM Nways Campus Manager for AIX. At first we set ILMI=NONE on all PNNI ports, since ILMI is not used by PNNI. Then we recognized that the ATM network topology discovery function of IBM Nways Campus Manager relies on ILMI to draw a correct ATM network map.
We chose shortest_path for PNNI UBR path selection in order to avoid SVCs being routed through the redundant links of the IBM 8285s. This setting also ensures an equal distribution of UBR SVCs across parallel links between any two switches.
6.5 UNI Port Configuration
We also changed the VPI_VCI bits on all server UNI ports, since these adapters supported only VPI=0 but up to 1000 simultaneous SVCs. The default of 4.10 was not sufficient, since this setting supports only 992 SVCs (1024 minus 32 reserved VCIs). Therefore, we changed it to 0.14. The same change was done on all ports to which an IBM 8210 was attached. Meanwhile, MSS supports VPIs higher than 0.
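A quick sanity check of these capacity figures (plain arithmetic, nothing device specific):

# Usable VCIs per VPI for a given VPI.VCI bit setting.
RESERVED_VCIS = 32   # VCIs 0..31 are reserved

def usable_vcis(vci_bits: int) -> int:
    return 2 ** vci_bits - RESERVED_VCIS

print(usable_vcis(10))   # default 4.10 -> 992
print(usable_vcis(14))   # changed to 0.14 -> 16352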
7 Logical Network Topology
To find out which network topology would fit best in this particular environment, we did a network assessment with the customer to clearly identify the needs, requirements and expectations.
7.1 Customer Requirements
1. Communication between NT workstations and NT servers is done via TCP/IP only
2. Routing from NT workstations to NT servers has to be avoided, even if the workstations reside in different IP subnets
3. All default gateway addresses provided by the MSS have to be redundant since all clients have a static default gateway configured
4. Five of the nine IP subnets have to be available at all IBM 8224 Ethernet hubs for flexibility and testing reasons
5. Most PCs have IPX active as a second protocol
6. All IPX PCs have to reside in only one IPX network throughout the whole location in order to avoid IPX routing to a gateway
7. Many other requirements ...
7.2 ELAN Considerations
One of the first questions that came up was which ELAN design would be best:
- One simple, flat ELAN
- A separate ELAN for each IP subnet
We checked both approaches and finally decided on the second alternative in conjunction with a SuperELAN, for the following major reasons:
Most client PCs were split up into three IP subnets. To avoid routing on the path to the NT server, we assigned three IP addresses to it, one in each subnet (a multihomed server). Since we mapped each IP subnet to a separate ELAN, the server had to have three LECs. This multiple-LEC configuration is the only way to force NT servers to really use their different IP interfaces for NetBIOS over TCP/IP services. It is not sufficient to configure multiple IP addresses for one single LEC, since NetBIOS' multihoming capability relies on multiple physical (or, in our case, emulated) network adapters; NetBIOS is not aware of TCP/IP at all. Only true TCP/IP socket applications running on an NT server are able to differentiate between multiple IP addresses configured for one LEC.
One might ask: why not configure multiple LECs on the NT server into a single ELAN to achieve a simpler ELAN structure? This is not always feasible, since some ATM adapter drivers don't support multiple LEC connections to one ELAN, even if every LEC uses a unique MAC address. The Olicom driver we used refused to accept multiple connections to the same point-to-multipoint VCC from one LES/BUS. So we had no choice but to create one ELAN for every IP subnet.
Note: Please refer to chapter 7.6, Multihoming of NT Servers, for further implications of using multiple LECs on an NT server.
The second reason for multiple ELANs was the requirement for a redundant IP default gateway function on every IP subnet. Our initial planning was based on MSS 1.1.1 PTF7, which supported multiple IP addresses per LEC but only one redundant default gateway address. Later, after all planning and design was finished, we migrated to MSS 1.2.1, where this function is now supported. Today there would be no need for multiple ELANs to achieve this particular function.
After that we looked at requirements four, five and six. To meet those we had to enlarge the broadcast domain beyond ELAN borders. Requirement four means that the customer could plug an Ethernet PC, or any other device configured with an IP address within these five IP subnets, into any IBM 8224 hub and it should work. This functionality couldn't be achieved with the IBM 8271, since this device supports only one LEC per domain.
We solved this with the SuperELAN function of the MSS. We simply put all IBM 8271 ports into one domain and assigned the uplink LEC to one of the five ELANs that were members of the SuperELAN. We also met the fifth and sixth requirements with the SuperELAN structure, since all PCs requiring IPX access resided in one of the five IP ELANs. Thus, from their IPX perspective, all ELANs looked like one single segment.
Note: In MSS code prior to 1.2.1 PTF5 there was a problem related to the redundant default gateway function within a SuperELAN. When a LEC sent an LE_ARP_REQUEST to its LES after it had resolved the MAC address of its default gateway, all MSS LECs configured for the redundant default gateway function within the same SuperELAN established an SVC to this PC. These useless SVCs are not a problem in a small installation, but in our 400-LEC SuperELAN environment we got up to 1600 additional unusable SVCs to the MSS. This problem was fixed with 1.2.1 PTF5.
7.3 Broadcast Considerations in the SuperELAN Environment
After installation we monitored the broadcast traffic at an Ethernet hub in order to get some information on broadcasts within the SuperELAN, since each broadcast is flooded to every device connected to the SuperELAN. In our case the broadcast rate was pretty low and the occupied bandwidth was about 1 % of 10 Mbit/s, which was acceptable. The majority were IPX RIP and SAP broadcasts, as we expected. We also had a closer look at the number of IP ARP broadcasts coming from the five ELANs that make up the SuperELAN. This particular broadcast rate was so low that we decided not to enable the IP Broadcast Manager (BCM) on the MSS: reducing this kind of traffic further would have meant losing the much more important fast BUS mode operation for the ELANs.
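As a purely illustrative back-of-the-envelope check (the frame rate and average frame size below are hypothetical placeholders, not our measured values), the occupied share of a 10 Mbit/s segment can be estimated from an observed broadcast rate like this:

# Rough estimate of broadcast load on a 10 Mbit/s Ethernet segment.
# The numbers below are hypothetical placeholders, not measured values.
SEGMENT_BPS = 10_000_000      # 10 Mbit/s Ethernet

def broadcast_share(frames_per_sec: float, avg_frame_bytes: float) -> float:
    """Return the fraction of the segment bandwidth occupied by broadcasts."""
    return frames_per_sec * avg_frame_bytes * 8 / SEGMENT_BPS

# e.g. 100 broadcasts/s with an average frame size of 120 bytes
print(f"{broadcast_share(100, 120):.1%}")   # -> about 1.0 %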
7.4 Splitting of Functions for Load-Balancing/Redundancy
We had two IBM 8210-001, with two ATM adapters each. The network had to remain fully functional with only one MSS active. We also didn't want a simple redundancy configuration with a primary MSS fulfilling all functions and a backup-only MSS idling during normal operation; this would have been too much of a waste of resources. We decided on the following split of functions:
Routing:        primary   n/a      backup   n/a
LES/BUS/LECS:   n/a       backup   n/a      primary
