- Written by Tom Shaw
- Category: Knowledge Base
Oracle's T-Series servers support two virtualization technologies: Logical Domains (ldoms) and Solaris Zones.
This article will take you through both, and help you decide whether to use ldoms, zones, or a mixture of both in your consolidated SPARC environment.
In an ldoms configuration, a service domain is responsible for handling hardware resources such as network and storage on behalf of its guests. The simplest configuration is to use a single primary domain which functions as a service domain for all of the guest domains.
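As a sketch of what that primary/service domain setup involves, the commands below create the basic virtual services on the control domain. The network device `net0` and the service names are examples; substitute your own.

```shell
# On the control (primary) domain: create the services that will
# back the guests' virtual devices. Names here are examples.
ldm add-vds primary-vds0 primary                        # virtual disk service
ldm add-vsw net-dev=net0 primary-vsw0 primary           # virtual switch over net0
ldm add-vcc port-range=5000-5100 primary-vcc0 primary   # virtual console service
ldm list-services primary                               # verify what was created
```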
It is possible to configure SPARC servers with redundant service domains (e.g. primary and secondary).
If you are converting several previously standalone machines into a smaller number of ldoms, you should at a minimum use link aggregation on 1Gb links to provide some level of load balancing. If you plan on using 10Gb Ethernet, note that the ldoms virtualization layer can introduce a significant performance penalty for 10Gb Ethernet unless configured correctly.
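A minimal link aggregation looks like the following, using Solaris 11 `dladm` syntax. The link names `net0`/`net1` are examples; on Solaris 10 the syntax differs (`-d` device flags and numeric aggregation keys).

```shell
# Bundle two 1Gb links into one LACP (802.3ad) aggregation.
# Link names are examples; the switch ports must also be configured for LACP.
dladm create-aggr -L active -l net0 -l net1 aggr0
dladm show-aggr -x aggr0    # verify member links and their state
```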
The headline feature of ldoms is the ability to live migrate a guest instance from one physical server to another, as long as they are configured correctly using shared storage.
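Assuming shared storage is in place, a migration is a single command on the source server's control domain. The guest name `ldg1` and target hostname are examples; the `-n` flag performs a dry run so you can check compatibility before committing.

```shell
# Dry run first: checks that the target can accept the guest
ldm migrate-domain -n ldg1 root@target-host
# Then perform the actual live migration
ldm migrate-domain ldg1 root@target-host
```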
The Oracle documentation on configuring virtual disk devices provides instructions on several ways to configure vdisks for your guest ldoms, but unfortunately doesn't tell you about the severe performance issues you can encounter from following those instructions.
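In my experience, exporting a whole LUN as the vdisk backend tends to perform far better than file-backed images sitting on another filesystem. A sketch, with an example device path and example volume/guest names:

```shell
# Export a whole LUN (slice 2) through the primary's disk service,
# then attach it to the guest. Device path and names are examples.
ldm add-vdsdev /dev/dsk/c0t5000CCA012345678d0s2 ldg1-disk0@primary-vds0
ldm add-vdisk vdisk0 ldg1-disk0@primary-vds0 ldg1
```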
LDoms are most efficient when they are allocated whole CPU cores. On the SPARC T4 and T5 servers, each CPU core has 8 threads. If multiple ldoms share a core:
- The per-core cache may be thrashed by the different workloads, causing unpredictable performance
- The automatic single-thread optimization feature on the CPU may not work correctly, potentially halving single-thread performance
I would only ever use partial-core allocations on a "play" server where there is a need for more guest ldoms than there are CPU cores.
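Allocating whole cores is straightforward with `ldm set-core`, which assigns cores rather than individual threads. The guest name and sizes below are examples:

```shell
# Give the guest 4 whole cores (32 threads on a T4/T5) and 32GB of RAM.
# Guest name and sizes are examples.
ldm set-core 4 ldg1
ldm set-memory 32G ldg1
```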
One use case for limiting the CPU usage is to reduce Oracle license costs through hard partitioning. Note that there are specific instructions for configuring ldoms to comply with Oracle's hard partitioning rules.
- Hard partitioning can also be achieved by configuring zones to comply with Oracle's hard partitioning rules.
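As a rough sketch of both approaches (names and counts are examples; verify the current Oracle licensing rules before relying on either for compliance):

```shell
# ldoms: whole-core allocation plus a max-cores constraint
ldm set-core 4 ldg1
ldm set-domain max-cores=4 ldg1

# zones: dedicate a fixed CPU set to the zone
zonecfg -z dbzone 'add dedicated-cpu; set ncpus=32; end'
```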
The ZFS filesystem built in to Solaris is an amazing piece of technology: combining the functionality of a volume manager and a filesystem, it delivers a pooled storage model with built in snapshots, clones, block-level compression, and clever use of SSDs to accelerate read and write performance. When configured correctly, it complements a zones environment very well.
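To make that concrete, here is the pooled storage model in a few commands. Pool, disk, and dataset names are examples:

```shell
# One mirrored pool, a compressed dataset for zones, then a
# snapshot and a writable clone of it. Names are examples.
zpool create tank mirror c2t0d0 c2t1d0
zfs create -o compression=on tank/zones
zfs snapshot tank/zones@before-patch
zfs clone tank/zones@before-patch tank/zones-test
```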
In most new environments I like to use 802.3ad (LACP) link aggregation, also known as bonding and port-channeling.
Side note: The term "trunking" can confuse people because it means two different things. It can mean link aggregation: bundling multiple physical links to create one logical link; or it can mean VLAN tagging: sending multiple separate networks over one link. Therefore I try to avoid the term "trunking" and instead talk about link aggregation or VLAN tagging.
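The distinction is easy to see in `dladm`, assuming an existing aggregation named `aggr0` (VLAN IDs and link names are examples):

```shell
# VLAN tagging: two separate networks carried over the one
# aggregated link. IDs and names are examples.
dladm create-vlan -l aggr0 -v 100 app100
dladm create-vlan -l aggr0 -v 200 db200
dladm show-vlan    # verify
```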
In Solaris 10, a major design choice is whether to use sparse root zones or whole root zones. Remember, a zone is a lightweight virtual server running within a shared Solaris instance.
With sparse root zones, the standard Solaris software packages are shared read-only from the global zone into the non-global zones. With whole root zones, the same software packages are instead copied into the non-global zones.
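In `zonecfg` terms on Solaris 10, a plain `create` uses the default template, which inherits `/usr`, `/lib`, `/platform`, and `/sbin` read-only from the global zone (sparse root), while `create -b` starts from a blank template with no inherited directories (whole root). Zone names and paths below are examples:

```shell
# Sparse root: default template inherits key directories read-only
zonecfg -z sparse1 'create; set zonepath=/zones/sparse1'
# Whole root: blank template, packages copied into the zone
zonecfg -z whole1 'create -b; set zonepath=/zones/whole1'
```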
Note: In Solaris 11, sparse root zones are no longer available -- the benefits are now provided by ZFS and the new Image Packaging System.