Solaris · Lesson

Solaris · Interview Preparation

Solaris Interview Preparation — Complete Guide

This final lesson is your Solaris interview playbook. It combines concepts, commands, real-world scenarios and typical questions asked in L1/L2/L3 Solaris and SPARC admin interviews, so you can answer confidently with practical examples instead of just theory.


Interview Structure – Levels & Expectations

Level 1 – Basic Solaris Admin

Linux/Solaris basics, users/groups, run-levels, SMF, filesystem, basic troubleshooting, logs.

Level 2 – Core Solaris Engineer

ZFS, Zones, SMF deep dive, networking (ipadm/dladm/IPMP), IPS patching, system monitoring.

Level 3 – SPARC / Enterprise

LDOMs, kernel zones, storage & performance tuning, live migration, DR design, complex troubleshooting.

How to use this page

  • For every topic: know 1–2 lines of definition, 3–4 important commands, and 1 real problem you solved.
  • Do not just memorise commands – be ready to explain why you used them.
  • Use stories: 'Once in production I faced X, I did Y, and result was Z'.

SMF, Milestones, Run Levels & repository.db

SMF (Service Management Facility) is a favourite topic in Solaris interviews because it shows how comfortable you are with modern Solaris service management and the boot process.

Q1: What is SMF? How is it different from init scripts?

SMF replaces legacy /etc/init.d scripts. It manages services via FMRIs like svc:/network/ssh:default with proper dependencies, states, and logging. Instead of manual shell scripts in different run-level directories, SMF gives you a central framework to enable/disable/troubleshoot services.
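A quick way to show this difference in an interview is dependency and log inspection, which legacy init scripts never offered (the FMRI below is the standard SSH service):

Inspecting SMF dependencies and logs
interview-smf
# Show what ssh depends on
svcs -d svc:/network/ssh:default
 
# Show what depends on ssh
svcs -D svc:/network/ssh:default
 
# Long listing: state, dependencies, log file
svcs -l svc:/network/ssh:default
 
# Tail the service's own log
tail /var/svc/log/network-ssh:default.log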

Checking and fixing an SMF service
interview-smf
# List all services
svcs
 
# Show only services with issues and explanations
svcs -xv
 
# Example: fix SSH service in maintenance after bad config
svcs -xv svc:/network/ssh:default
vi /etc/ssh/sshd_config # correct the error
 
svcadm clear svc:/network/ssh:default
svcadm restart svc:/network/ssh:default
 
svcs -xv svc:/network/ssh:default

Real-life example: “App team reported SSH down. Instead of just restarting services, I used svcs -xv and found SSH in maintenance state due to invalid config. Fixed the file, cleared and restarted the service. Issue resolved without reboot.”

Q2: What is a milestone? How is it related to run levels?

Milestones are SMF targets that group services together, similar to run levels. Examples: milestone/single-user:default, milestone/multi-user:default, milestone/multi-user-server:default.

In Solaris 11, milestones matter more than classic run levels, but you can still view the current run level with who -r.

Milestones and run levels
interview-smf
# Show current run level
who -r
 
# See milestone services
svcs milestone
 
# Move to the multi-user-server milestone (temporary, until reboot)
svcadm milestone milestone/multi-user-server:default
 
# Make it the persistent default boot milestone
svcadm milestone -d milestone/multi-user-server:default

Q3: What is repository.db? What if it is corrupted?

/etc/svc/repository.db is the main SMF service configuration database. It stores service definitions, properties and state. If it is corrupted, SMF may fail to start correctly and the system may boot with very limited services.

In worst-case scenarios, you boot to single-user, move corrupted repo aside, and restore from backup:

Fixing a corrupted SMF repository (worst-case)
interview-smf
# Boot to single-user, then move the corrupted repository aside
mv /etc/svc/repository.db /etc/svc/repository.db.bad
 
# Restore from an automatic SMF backup (boot / manifest_import)
/lib/svc/bin/restore_repository
 
reboot

ZFS Deep Dive – ARC, L2ARC, ZIL, Snapshots, Send/Receive

High-value ZFS concepts

  • ARC & L2ARC – read cache in RAM and SSD.
  • ZIL & SLOG – synchronous write log and dedicated log device.
  • Snapshots, clones, quotas, reservations.
  • ZFS send/receive for backup and DR.
  • Self-healing via checksums and redundancy.

ARC, L2ARC, ZIL & SLOG

ZFS keeps frequently used data in a RAM cache called the ARC (Adaptive Replacement Cache). If you add an SSD as L2ARC, blocks evicted from the ARC can be cached there as a second-level read cache. The ZIL (ZFS Intent Log) records synchronous writes before they are committed to the pool. If you place it on a fast SSD as a separate log device (SLOG), synchronous write performance (databases, NFS) improves.
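To demonstrate the read-cache side, you can add an L2ARC device to a pool (pool and device names here are illustrative):

Adding an L2ARC cache device
interview-zfs
# Add an SSD as a second-level read cache (L2ARC)
zpool add appPool cache c2t2d0
 
# Verify the cache device appears in the pool layout
zpool status appPool
 
# Observe ARC size and hit/miss behaviour
kstat -n arcstats | grep -E 'size|hits|misses'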

Adding a separate log device (SLOG)
interview-zfs
# Example: add SLOG to pool for sync write performance
zpool add appPool log c2t1d0
 
# Check status
zpool status appPool

Example story: “Our Oracle DB on ZFS had high latency for synchronous writes. We added a dedicated SSD SLOG to the pool. After that, DB commit times improved significantly according to the DBA team.”

Snapshots, clones, quota vs reservation

Snapshots / clones / quotas / reservations
interview-zfs
# Take snapshot before patch
zfs snapshot appPool/appFS@pre_patch
 
# Rollback if patch fails
zfs rollback appPool/appFS@pre_patch
 
# Create clone for testing
zfs clone appPool/appFS@pre_patch appPool/appFS_clone
 
# Limit maximum usage for a dataset
zfs set quota=500G appPool/projects/proj1
 
# Guarantee minimum space
zfs set reservation=100G appPool/logs

In interviews, explain quota/reservation with a scenario: logs get a reservation so they can’t be starved; users get quotas so they can’t fill up the entire pool.
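To back up the quota/reservation story, it helps to show how you verify the current settings (dataset names are illustrative):

Checking quota, reservation and usage
interview-zfs
# Show effective settings for one dataset
zfs get quota,reservation,used,avail appPool/projects/proj1
 
# List usage across all datasets in the pool
zfs list -r -o name,used,avail,quota,reservation appPool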

ZFS send/receive – backup and DR

ZFS send/receive
interview-zfs
# Full replication of snapshot to remote host
zfs send appPool/appFS@day1 | ssh dr-host zfs receive drPool/appFS
 
# Incremental replication between two snapshots
zfs send -i appPool/appFS@day1 appPool/appFS@day2 | \
ssh dr-host zfs receive drPool/appFS

Interview story: “We implemented nightly ZFS send/receive between DC and DR. During a DR drill, we imported the DR pool and started the app from replicated data with minimal RPO.”

Zones & Kernel Zones – Concepts and Practical Usage

Global zone

Zone ID 0, full control of hardware and other zones.

Non-global zones

Isolated environments sharing the same kernel; great for app separation.

Kernel zones

Zones with their own kernel and SRU level, like lightweight VMs on Solaris.
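A minimal kernel-zone creation sketch, assuming Solaris 11.2+ and the standard SYSsolaris-kz template (the zone name kz1 is illustrative):

Creating a kernel zone (sketch)
interview-zones
# Configure from the kernel-zone template
zonecfg -z kz1 create -t SYSsolaris-kz
 
# Install and boot like a normal zone
zoneadm -z kz1 install
zoneadm -z kz1 boot
 
# The brand shows as solaris-kz in the listing
zoneadm list -cv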

Basic zone commands

Zones overview
interview-zones
# In global zone
zoneadm list -cv
 
# Inside zone
zonename
 
# View zone configuration
zonecfg -z appzone info
 
# Verify configuration (reports errors to fix in zonecfg)
zonecfg -z appzone verify
 
# Install and boot a zone
zoneadm -z appzone install
zoneadm -z appzone boot
 
# Clone a zone (configure appzone-clone first; halt appzone)
zoneadm -z appzone-clone clone appzone

Exclusive-IP vs Shared-IP zones

In shared-IP, the zone shares the network stack with the global zone and its IPs are configured from the global zone. In exclusive-IP, the zone gets its own IP stack and you configure networking inside the zone (recommended in Solaris 11).

Good interview line: “In Solaris 11, I normally use exclusive-IP zones so each zone has its own IP stack and firewall, and I configure IP addresses inside the zone using ipadm.”
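A sketch of how that looks in practice (zone name and datalink are illustrative; anet is the Solaris 11 default for exclusive-IP):

Exclusive-IP zone networking (sketch)
interview-zones
# In the global zone: anet gives the zone its own VNIC
zonecfg -z appzone
zonecfg:appzone> set ip-type=exclusive
zonecfg:appzone> add anet
zonecfg:appzone:anet> set lower-link=net0
zonecfg:appzone:anet> end
zonecfg:appzone> commit
zonecfg:appzone> exit
 
# Inside the zone: configure IP with ipadm
ipadm create-ip net0
ipadm create-addr -T static -a 10.1.2.10/24 net0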

LDOMs (Oracle VM for SPARC) – Domain Roles & Lifecycle

Key domain types to mention

  • Control Domain – runs ldmd, manages all domains.
  • Primary Domain – first domain on system; usually control + service + I/O.
  • Service Domain – provides virtual disk (vds) and virtual switch (vsw).
  • I/O Domain (root) – owns physical PCIe root complexes/HBAs/NICs.
  • Guest Domain – runs the application workload.

Guest domain lifecycle (create → bind → install → boot)

Creating a guest LDOM (ldg1)
interview-ldoms
# Create domain
ldm add-domain ldg1
 
# Assign resources
ldm set-vcpu 4 ldg1
ldm set-memory 8G ldg1
 
# Create ZFS volume for root disk
zfs create -V 50G rpool/ldoms/ldg1-root
 
# Export volume via VDS and add vdisk
ldm add-vdsdev /dev/zvol/dsk/rpool/ldoms/ldg1-root ldg1-root@primary-vds0
ldm add-vdisk vdisk0 ldg1-root@primary-vds0 ldg1
 
# Network via vsw/vnet
ldm add-vsw net-dev=net0 primary-vsw0 primary
ldm add-vnet vnet0 primary-vsw0 ldg1
 
# Bind and start
ldm bind ldg1
ldm start ldg1
 
# Connect to the guest console via vntsd (port from 'ldm list')
telnet localhost 5000

Dynamic resource changes (vCPU / memory / vdisk)

Dynamic resource operations in LDOMs
interview-ldoms
# Increase vCPUs online
ldm set-vcpu 8 ldg1
 
# Increase memory (usually online)
ldm set-memory 16G ldg1
 
# Add extra data disk
zfs create -V 100G rpool/ldoms/ldg1-data
ldm add-vdsdev /dev/zvol/dsk/rpool/ldoms/ldg1-data ldg1-data@primary-vds0
ldm add-vdisk data1 ldg1-data@primary-vds0 ldg1

SP configurations (spconfig) – DR point

Always mention SP config because it shows real-world DR knowledge. An SP configuration is a snapshot of the current LDOM layout stored on the service processor.

SP configuration commands
interview-ldoms
# List SP configs
ldm list-spconfig
 
# Save current layout as 'prod-layout'
ldm add-spconfig prod-layout

Interview line: “After any major LDOM change, I always run ldm add-spconfig to save a named SP config. If the primary OS is reinstalled, I can recover the LDOM layout from SP.”

Networking – ipadm, dladm, IPMP

Key Solaris 11 networking points

  • ipadm – manages IP configuration (persistent).
  • dladm – manages data links, VLANs, aggregations.
  • IPMP – provides IP-level failover between NICs.

Configuring IP and IPMP (basic example)
interview-net
# Configure static IP on net0
ipadm create-ip net0
ipadm create-addr -T static -a 10.1.1.10/24 net0
 
# View links and addresses
dladm show-phys
dladm show-link
ipadm show-addr
 
# Configure simple IPMP group ipmp0 on net0 + net1
ipadm create-ipmp ipmp0
ipadm add-ipmp -i net0 -i net1 ipmp0
ipadm create-addr -T static -a 10.1.1.20/24 ipmp0
 
# Monitor IPMP
ipmpstat -g
ipmpstat -i

Patching & IPS – Solaris 10 vs Solaris 11

When they ask about patching

  • Solaris 10 used patchadd/patchrm and patch clusters.
  • Solaris 11 uses IPS (pkg) and SRU repositories.
  • Boot environments (BEs) allow safe rollback after patching.

Basic Solaris 11 patching flow
interview-patch
# Check publishers and repo
pkg publisher
 
# Set local repository
pkg set-publisher -G "*" -g file:///opt/localrepo solaris
 
# Dry-run update
pkg update -nv
 
# Actual update with license accept
pkg update --accept
 
# Manage boot environments
beadm list
beadm create prepatch
# After update, a new BE is usually created by pkg automatically.
# Activate and reboot if needed
beadm activate solaris-11-4-76 # example name
init 6

Interview hint: Always mention that you do pkg update -nv first (dry-run), check disk space and read release notes. It shows maturity.

Real Troubleshooting Scenarios (L2/L3 Style)

Scenarios you should be ready to explain

  • Server not booting – use OBP 'boot -L' to choose previous BE.
  • Zone stuck in 'incomplete' state and how you fixed it.
  • Kernel zone failing to boot due to insufficient memory.
  • ZFS pool degraded – identifying and replacing bad disk.
  • SMF service in maintenance and recovering it.
  • High CPU or memory usage – prstat, pstack, vmstat, iostat.
  • Network outage – dladm/ipadm/IPMP troubleshooting.

Example: cleaning an incomplete zone
interview-troubleshoot
zoneadm list -cv
 
# If appzone is incomplete or broken:
zoneadm -z appzone uninstall -F
zonecfg -z appzone delete -F
 
# Then recreate from proper zonecfg file / template

Example: ZFS pool degraded & scrub
interview-troubleshoot
zpool status
zpool status -xv # show errors verbosely
 
# Start scrub
zpool scrub rpool
 
# After replacement:
zpool replace rpool c3t0d0 c3t1d0
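
For the high CPU / memory scenario in the list above, a typical first-pass triage looks like this (the PID is illustrative):

Example: quick performance triage
interview-troubleshoot
# Top processes; -Z breaks usage down per zone
prstat -Z
 
# CPU, memory and run queue over time
vmstat 5
 
# Disk latency and utilisation
iostat -xn 5
 
# Per-CPU statistics
mpstat 5
 
# Stack of a suspect process
pstack 12345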

HR & Behavioural Questions + Final Checklist

Common HR questions for Solaris roles

  • Tell me about a critical production issue you handled on Solaris.
  • What is the hardest technical problem you solved on Solaris/LDOMs?
  • How do you keep your Solaris skills updated?
  • Have you worked with change management and CAB processes?

What NOT to say

  • Don't say: 'I only know commands, not troubleshooting'.
  • Don't say: 'I have never worked on production, only lab' (better: mention lab + mock production).
  • Don't say: 'I don't know Solaris 11 at all' — instead say you are familiar and improving.

Last-minute revision checklist

  • ZFS: list, snapshot, clone, send/receive, quota/reservation.
  • Zones: create, clone, migrate, troubleshoot incomplete zone.
  • Kernel zones: create, boot, memory/CPU caps.
  • LDOMs: domain roles, create guest, dynamic resources, SP config.
  • SMF: svcs -xv, milestones, repository.db basics.
  • Networking: ipadm, dladm, IPMP commands.
  • Patching: pkg, publishers, BE with beadm.
  • Performance: prstat, vmstat, iostat, mpstat, pstack.