
If you are converting several previously standalone machines into a smaller number of ldoms, you should at a minimum use link aggregation on 1Gb links to provide some level of load balancing. If you plan to use 10Gb Ethernet, note that the ldoms virtualization layer can introduce a significant performance penalty on 10Gb Ethernet unless it is configured correctly.
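As a hedged sketch of the 1Gb aggregation case (Solaris 11 `dladm` syntax; the link names `net0`/`net1` and the aggregation name `aggr0` are placeholders for your environment):

```shell
# Combine two 1GbE datalinks into a single aggregation
dladm create-aggr -l net0 -l net1 aggr0

# Optionally enable LACP and hash on L3/L4 headers for better
# load spreading across the member links (switch must match)
dladm modify-aggr -L active -P L3,L4 aggr0

# Verify the aggregation and its member ports
dladm show-aggr aggr0
```

The aggregated datalink can then be used as the back-end device for a virtual switch, so guest traffic is balanced across both physical links.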

Modern SPARC T4 and T5 CPUs come with on-board 10Gb Ethernet capability.

In addition, on all of the non-blade servers it is possible to add 10Gb PCIe option cards with optical or TwinAx connectivity.

Not surprisingly, given the wide range of connectivity options, as of this writing it is rare to see data centre infrastructure that is ready to plug in and go. When planning a SPARC consolidation project, it is important to consider the IO infrastructure: 10Gb switch modules with the right ports, the appropriate cables, cable length limitations, and so forth. Consider flattening your network to balance the consolidation and virtualization at the server level.

In an ldoms configuration, there are currently two different mechanisms to provide native-level performance for 10Gb interfaces, depending on the type of back-end bus.
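As a hedged illustration of one such direct-assignment approach (assuming an SR-IOV-capable PCIe 10GbE device; the `ldm` device paths and the domain name `ldom1` below are placeholders, and the exact names depend on your hardware):

```shell
# List IO devices to identify the 10GbE physical function (PF)
ldm list-io

# Create a virtual function (VF) on the physical function
ldm create-vf /SYS/MB/NET0/IOVNET.PF0

# Assign the VF directly to the guest domain, bypassing the
# virtual switch so the guest gets near-native throughput
ldm add-io /SYS/MB/NET0/IOVNET.PF0.VF0 ldom1
```

Because the guest now owns a slice of the physical device, traffic no longer passes through the service domain's virtual switch, which is where the virtualization overhead normally accumulates.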

Note that these IO acceleration features are not compatible with link aggregation, so for network redundancy you should use IP multipathing (IPMP) instead. However, if you are only using the 10Gb Ethernet ports for Oracle Direct NFS or iSCSI, you may be able to rely on the redundancy features of those technologies. See LDoms Network Redundancy for more details.
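A minimal IPMP sketch for two 10GbE interfaces inside a domain (Solaris 11 `ipadm` syntax; the interface names `net4`/`net5`, group name `ipmp0`, and address are placeholders):

```shell
# Create IP interfaces on the two 10GbE datalinks
ipadm create-ip net4
ipadm create-ip net5

# Group them into an active-active IPMP group
ipadm create-ipmp -i net4 -i net5 ipmp0

# Put the data address on the IPMP group, not the individual links
ipadm create-addr -T static -a 192.0.2.10/24 ipmp0/v4
```

Unlike link aggregation, IPMP fails over at the IP layer, so it works even when each underlying interface is a directly assigned device that the virtual switch never sees.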

Useful links: