How many DPC cards can an MX960 hold?

MX Router (Kashif Latif)

The MX router enables a wide range of business and residential applications and services, including high-speed transport and VPN services, next-generation broadband multiplay services, and high-volume Internet data center internetworking.

The hardware system is fully redundant, including power supplies, fan trays, Routing Engines, and Switch Control Boards. The MX router is 16 rack units (16U) tall; three routers can be stacked in a single floor-to-ceiling rack for increased port density per unit of floor space.

MX Chassis Description: The router chassis is a rigid sheet-metal structure that houses all the other router components. The chassis installs in many types of racks, including 800-mm-deep or larger enclosed cabinets and standard 19-in. open-frame racks. For an open-frame rack, center-mounting is preferable because it distributes the weight more evenly.

MX Midplane Description: The midplane is located toward the rear of the chassis and forms the rear of the card cage.

The router has 11 dedicated DPC slots. DPCs install vertically in the front of the router. The dedicated DPC slots are numbered 0 through 5 and 7 through 11, left to right; counting the shared slot described later, the chassis can hold up to 12 DPCs, or 11 when a third SCB is installed for full redundancy. You can install any combination of DPC types in the router.
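To make the slot arithmetic concrete, here is a minimal Python sketch consistent with the numbering above and with the shared slot described later in this piece; the helper and constants are hypothetical, not a Juniper API:

```python
# Hypothetical sketch of the MX960 card-cage layout described in the text.
# Slots 0-5 and 7-11 are dedicated DPC/FPC slots; slot 6 is shared:
# it holds SCB2 for full fabric redundancy, or a 12th line card otherwise.

DEDICATED_DPC_SLOTS = list(range(0, 6)) + list(range(7, 12))

def usable_dpc_slots(full_scb_redundancy: bool) -> list[int]:
    """Return the DPC slots available for line cards."""
    slots = list(DEDICATED_DPC_SLOTS)
    if not full_scb_redundancy:
        slots.append(6)  # shared slot 6 is free for a line card
    return sorted(slots)

print(len(usable_dpc_slots(full_scb_redundancy=True)))   # 11 DPCs
print(len(usable_dpc_slots(full_scb_redundancy=False)))  # 12 DPCs
```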

MPCs are inserted into the same slots in the router. When a slot is not occupied by an MPC or other line card, you must insert a blank DPC panel to fill the empty slot and ensure proper cooling of the system.

MPCs are hot-removable and hot-insertable; forwarding on other MPCs continues uninterrupted during this process. MICs allow different physical interfaces to be supported on a single line card, and MICs are likewise hot-removable and hot-insertable.

MX Switched Fabric (SF): A switched fabric is a network topology in which network nodes connect with each other via one or more network switches.

MX PIC: PICs provide the physical connection to various network media types, receiving incoming packets from the network and transmitting outgoing packets to the network. During this process, each PIC performs framing and line-speed signaling for its media type.

PICs are hot-removable and hot-insertable.

MX Host Subsystem: The host subsystem provides the routing and system management functions of the router. You can install one or two host subsystems on the router. Each host subsystem functions as a unit; the Routing Engine must be installed directly into the Switch Control Board.

MX Routing Engine: If the host subsystem is redundant, the backup Routing Engine is hot-removable and hot-insertable, but the master Routing Engine is only hot-pluggable.

Software processes that run on the Routing Engine maintain the routing tables, manage the routing protocols used on the router, control the router interfaces, control some chassis components, and provide the interface for system management and user access to the router.

If the master Routing Engine fails or is removed and the backup is configured appropriately, the backup takes over as the master.

Routing Engine Interface Ports: Three ports, located on the right side of the Routing Engine, connect it to one or more external devices on which system administrators can issue Junos OS command-line interface (CLI) commands to manage the router.

The ports with the indicated labels function as follows:

1. AUX: connects the Routing Engine to a laptop, modem, or other auxiliary device.
2. CONSOLE: connects the Routing Engine to a system console.
3. ETHERNET: connects the Routing Engine to a management LAN for out-of-band management. The port uses an autosensing RJ-45 connector to support 10-Mbps or 100-Mbps connections. Two small LEDs on the bottom of the port indicate the connection in use: one LED flashes yellow or green for a 10-Mbps or 100-Mbps connection, and the other is lit green when traffic is passing through the port.

There are three copies of the software: one on the CompactFlash card in the Routing Engine, one on the hard disk in the Routing Engine, and one on a USB flash drive that can be inserted into the Routing Engine. Normally, the router boots from the copy of the software on the CompactFlash card.
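A minimal Python sketch of that boot fallback, assuming the media are tried in the order listed above (the function and names are hypothetical):

```python
# Hypothetical sketch of Routing Engine boot-media fallback.
# Media are tried in order; the first one holding a bootable
# copy of the software wins.

BOOT_ORDER = ["compact-flash", "hard-disk", "usb"]

def pick_boot_media(bootable: dict[str, bool]) -> str:
    for media in BOOT_ORDER:
        if bootable.get(media, False):
            return media
    raise RuntimeError("no bootable media found")

# Normal case: the router boots from the CompactFlash card.
print(pick_boot_media({"compact-flash": True, "hard-disk": True}))
# If the CompactFlash copy is unusable, it falls back to the hard disk.
print(pick_boot_media({"compact-flash": False, "hard-disk": True}))
```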

MX Craft Interface: The craft interface allows you to view status and troubleshooting information at a glance and to perform many system control functions. It is hot-insertable and hot-removable, and is located on the front of the router above the upper fan tray; it contains LEDs for the router components, the alarm relay contacts, and the alarm cutoff button. The MX router is configurable with three or four normal-capacity AC power supplies, up to four high-capacity DC power supplies, or up to four high-capacity AC power supplies.

MX Cooling System: The cooling system consists of the following components: an upper front fan tray, a lower front fan tray, and an air filter.

There are eight horizontal slots total. All components between the MX240 and MX480 are interchangeable, which makes the sparing strategy cost effective and provides FPC investment protection. The MX480 is numbered from the bottom up; the SCBs occupy the lowest slots, and from there the FPCs may be installed, numbered from the bottom up as well.

Some types of traffic require a big hammer. Enter the MX960. The MX960 is all about scale and performance. It stands at 16U and, fully loaded, weighs several hundred pounds. Because of the large scale, three SCBs are required for full redundancy.

The MX960 is numbered from left to right. The first six slots are reserved for FPCs and are numbered from left to right, beginning at 0 and ending with 5. The next two slots are reserved and keyed for SCBs. The slot after those accepts either an SCB or an FPC; in the case of full redundancy, SCB2 needs to be installed into this slot.

The next five slots are reserved for FPCs and begin numbering at 7 and end at 11.

A typical use case for a Service Provider is having to manage large routing tables and many customers, and to provide H-QoS to enforce customer service level agreements (SLAs).

This adds the ability to configure H-QoS and increases the scale of queues. The use case for this MPC is to offer an intelligently oversubscribed line card at an attractive price. [Figure: MPC2 architecture.] This MPC has no MIC slots; however, it does support 16 fixed 10G ports, which allows each group of 4x10G interfaces to have a dedicated Trio chipset. Checking the fabric map, we found what we were looking for, and the same match can be performed for DPC3.

As expected, the Buffering Block is handling the preclassification. The preclassification engines are listed with IDs 1 through 4 and match our previous calculation using the show chassis fabric map command.
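As a rough sketch of how 16 fixed 10G ports map onto four Trio chipsets and the four preclassification engines seen above, assuming a simple group-of-four layout (the mapping function is an illustrative assumption, not Juniper's internal logic):

```python
# Hypothetical sketch: on a 16x10GE MPC, each group of four 10G ports
# shares one Trio chipset, and each chipset's Buffering Block exposes
# one preclassification engine (IDs 1 through 4 in the fabric map).

def trio_for_port(port: int, ports_per_trio: int = 4) -> int:
    if not 0 <= port < 16:
        raise ValueError("port index out of range for a 16x10GE card")
    return port // ports_per_trio

for port in range(16):
    trio = trio_for_port(port)
    print(f"10GE port {port:2d} -> Trio chipset {trio}, "
          f"preclassification engine {trio + 1}")
```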

There are several new and improved features on the MPC3E. The most notable is that the Buffering Block has been increased in capacity and the number of Lookup Blocks has increased to four in order to support 100GE interfaces. The other major change is that the fabric switching functionality has been moved out of the Buffering Block and into a new Fabric Functional Block. Having four Lookup Blocks creates an interesting challenge in synchronizing their operations. In general, the Buffering Block will spray packets across all Lookup Blocks in a round-robin fashion, which means that a particular traffic flow will be processed by multiple Lookup Blocks.
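A minimal Python sketch of that spraying behavior, assuming a simple cyclic rotation across four Lookup Blocks (the names and the perfectly even rotation are illustrative assumptions):

```python
# Hypothetical sketch: the Buffering Block sprays packets across all
# four Lookup Blocks in round-robin fashion, so consecutive packets
# of one flow land on different Lookup Blocks.

from itertools import cycle

LOOKUP_BLOCKS = ["LU0", "LU1", "LU2", "LU3"]

def spray(packets):
    """Yield (packet, lookup_block) pairs in round-robin order."""
    for packet, lu in zip(packets, cycle(LOOKUP_BLOCKS)):
        yield packet, lu

flow = [f"pkt{i}" for i in range(8)]  # one traffic flow
for packet, lu in spray(flow):
    print(packet, "->", lu)  # the flow is processed by all four LUs
```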

One of the four Lookup Blocks is designated as the master and the three remaining Lookup Blocks are designated as the slaves. Synchronizing all of the Lookup Blocks when a source MAC address is learned proceeds in steps: the packet is sprayed to one of the Lookup Blocks, say LU1, and LU1 updates its own table with the source MAC address.

The update then happens via the Buffering Block to reach the master, LU0. For traffic arriving from the switch fabric, the Lookup Block that receives the packet is responsible for updating the other Lookup Blocks. Destination MAC addresses are synchronized as follows: the packet enters the Fabric Block and Buffering Block, and happens to be sprayed to LU1.

LU1 updates its local table, and as each Lookup Block receives the update, its local table is updated accordingly.

When defining and configuring a policer, the MPC3E must take the configured bandwidth and evenly distribute it among the Lookup Blocks, so each of the four Lookup Blocks is programmed with one quarter of the configured bandwidth-limit. Because packets are statistically distributed round-robin to all four Lookup Blocks evenly, the aggregate will equal the original policer bandwidth-limit.
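Here is a minimal Python sketch of that even distribution, assuming a hypothetical 400m bandwidth-limit (the rate, names, and integer division are illustrative assumptions):

```python
# Hypothetical sketch: the MPC3E divides a policer's configured
# bandwidth-limit evenly across its four Lookup Blocks. Round-robin
# spraying means each LU sees about 1/4 of the traffic, so the
# per-LU policers add back up to the configured aggregate rate.

NUM_LOOKUP_BLOCKS = 4

def per_lu_rate(bandwidth_limit_bps: int) -> int:
    """Rate programmed into each Lookup Block's policer."""
    return bandwidth_limit_bps // NUM_LOOKUP_BLOCKS

configured = 400_000_000  # hypothetical 400m bandwidth-limit
per_lu = per_lu_rate(configured)
print(per_lu)                                    # 100_000_000 per LU
print(per_lu * NUM_LOOKUP_BLOCKS == configured)  # aggregate matches
```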

Otherwise, they are operationally equivalent. There are a couple of scenarios: ingress and egress. The Interfaces Block will inspect each packet and perform preclassification.

Depending on the type of packet, it will be marked as high or low priority. The packet then enters the Buffering Block, which enqueues the packet as determined by the preclassification and services the high-priority queue first.

The packet enters the Lookup Block.
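To make the ingress path above concrete, here is a minimal Python sketch of strict-priority servicing, assuming just a high and a low queue as described (a simplification of the actual hardware queues):

```python
# Hypothetical sketch: preclassification marks packets high or low
# priority; the Buffering Block enqueues accordingly and always
# services the high-priority queue first (strict priority).

from collections import deque

high, low = deque(), deque()

def enqueue(packet: str, is_control: bool) -> None:
    """Preclassification: control traffic is marked high priority."""
    (high if is_control else low).append(packet)

def dequeue() -> str | None:
    """Strict priority: drain high before touching low."""
    if high:
        return high.popleft()
    if low:
        return low.popleft()
    return None

enqueue("ospf-hello", is_control=True)
enqueue("web-data", is_control=False)
print(dequeue())  # ospf-hello is serviced first
print(dequeue())  # web-data follows
```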


