Philosophy

Everything on this site gets tested on real gear before it’s published. I don’t write about how something should work — I write about how it actually works, including what breaks and how I fixed it.

The software stack is free and open source wherever possible: Proxmox VE instead of vSphere (for the cluster), OPNsense instead of commercial firewalls (for software firewall labs), TrueNAS Scale for storage, and GNS3 and EVE-NG for network topologies. The physical gear is secondhand enterprise equipment — meaningful horsepower without the new-hardware price.


Compute

| Device | Specs | Role |
| --- | --- | --- |
| Dell PowerEdge R630 | 2× Xeon E5-2680 v4 · 256GB RAM · 4× 10GbE | Primary hypervisor host (vSphere / Proxmox) |
| HP ProLiant DL380 Gen9 | 2× Xeon E5-2690 v3 · 192GB RAM · 4× 10GbE | Secondary hypervisor host (vSphere / Proxmox) |

Both servers are connected to a 10GbE switch trunk for vMotion and storage traffic. iDRAC (R630) and iLO (DL380) provide out-of-band management.
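The hypervisor-facing trunk ports are plain dot1q trunks. As a sketch of how I template that config from the control node (interface names and the allowed-VLAN list here are illustrative assumptions, not the exact production values):

```python
# Sketch: render the trunk-port config for the 10GbE ports that carry
# vMotion and storage traffic. Port names and the allowed VLANs (Servers
# and Storage from the VLAN plan) are assumptions for illustration.

def trunk_config(interface: str, allowed_vlans: list[int], native_vlan: int = 1) -> str:
    """Build a Cisco IOS trunk configuration block for one interface."""
    vlan_list = ",".join(str(v) for v in sorted(allowed_vlans))
    lines = [
        f"interface {interface}",
        " switchport trunk encapsulation dot1q",
        f" switchport trunk native vlan {native_vlan}",
        f" switchport trunk allowed vlan {vlan_list}",
        " switchport mode trunk",
    ]
    return "\n".join(lines)

# One trunk per hypervisor-facing 10GbE port (port names are hypothetical).
for port in ["TenGigabitEthernet1/0/1", "TenGigabitEthernet1/0/2"]:
    print(trunk_config(port, allowed_vlans=[10, 99]))
```

Generating config as text like this is what eventually gets pushed from the Ubuntu control node via Netmiko.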


Networking

| Device | Role |
| --- | --- |
| Cisco ASA 5516-X | Perimeter firewall · VPN · ACL labs |
| Cisco 3750G Stack (2U) | Core switching · VLANs · STP · inter-VLAN routing · SVIs |
| Cisco ASR 1002 (planned) | WAN simulation · BGP · MPLS labs |
| UniFi AP | Wireless for management network |

The physical network gear is the backbone of all Cisco CLI lab content. The 3750G runs IOS 12.2(55)SE, which covers CCNA switching exam topics directly.
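The 3750G's inter-VLAN routing is all SVIs, one per VLAN in the plan below. A sketch of how that config can be rendered from the addressing plan (gateway addresses assumed to be the first host in each /24, which matches my convention but is an assumption here):

```python
# Sketch: render the 3750G's SVI / inter-VLAN routing config from the
# VLAN plan. Gateways are assumed to be the first host of each /24.
import ipaddress

VLAN_PLAN = {  # vlan_id: (name, subnet), per the VLAN Design table
    10: ("Servers", "10.0.10.0/24"),
    20: ("Lab-Net", "10.0.20.0/24"),
    30: ("Management", "10.0.30.0/24"),
    40: ("GNS3-Bridge", "10.0.40.0/24"),
    99: ("Storage", "10.0.99.0/24"),
}

def svi_config(plan: dict[int, tuple[str, str]]) -> str:
    lines = ["ip routing"]  # enable routing between SVIs on the switch
    for vlan_id, (name, subnet) in sorted(plan.items()):
        net = ipaddress.ip_network(subnet)
        gateway = net.network_address + 1  # e.g. 10.0.10.1
        lines += [
            f"vlan {vlan_id}",
            f" name {name}",
            f"interface Vlan{vlan_id}",
            f" ip address {gateway} {net.netmask}",
            " no shutdown",
        ]
    return "\n".join(lines)

print(svi_config(VLAN_PLAN))
```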


VLAN Design

| VLAN | Name | Subnet | Purpose |
| --- | --- | --- | --- |
| 1 | Native | (none) | Untagged / unused |
| 10 | Servers | 10.0.10.0/24 | Hypervisor management, VMs |
| 20 | Lab-Net | 10.0.20.0/24 | General lab workloads |
| 30 | Management | 10.0.30.0/24 | OOB management (iDRAC, iLO) |
| 40 | GNS3-Bridge | 10.0.40.0/24 | GNS3/EVE-NG cloud node integration |
| 99 | Storage | 10.0.99.0/24 | iSCSI / NFS storage traffic |
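Before adding a new VLAN I sanity-check the addressing plan with a few lines of Python: every lab subnet should sit inside 10.0.0.0/8 and no two VLANs should overlap. A minimal sketch of that check:

```python
# Sanity-check the addressing plan: all subnets inside 10.0.0.0/8 and
# no overlaps between VLANs. VLAN 1 carries no subnet, so it's omitted.
import ipaddress
import itertools

SUBNETS = {  # vlan_id: subnet, from the VLAN Design table
    10: "10.0.10.0/24",
    20: "10.0.20.0/24",
    30: "10.0.30.0/24",
    40: "10.0.40.0/24",
    99: "10.0.99.0/24",
}

def validate(subnets: dict[int, str]) -> None:
    nets = {vid: ipaddress.ip_network(s) for vid, s in subnets.items()}
    lab_range = ipaddress.ip_network("10.0.0.0/8")
    for vid, net in nets.items():
        assert net.subnet_of(lab_range), f"VLAN {vid} is outside 10.0.0.0/8"
    for (a, na), (b, nb) in itertools.combinations(nets.items(), 2):
        assert not na.overlaps(nb), f"VLAN {a} overlaps VLAN {b}"

validate(SUBNETS)
print("addressing plan OK")
```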

Software Stack

All free and open source unless noted.

| Tool | Purpose |
| --- | --- |
| Proxmox VE | Hypervisor cluster (replacing vSphere) · Ceph for shared storage |
| VMware vSphere 7 | Current production hypervisor (existing license) |
| OPNsense | Software firewall VM · micro-segmentation labs |
| TrueNAS Scale | NAS · iSCSI shared datastore for Proxmox |
| Proxmox Backup Server | VM backup · incremental forever · deduplication |
| GNS3 | Virtual network lab · Cisco IOSv · ASAv topologies |
| EVE-NG Community | Multi-vendor virtual lab · Juniper · Fortinet · Cisco |
| Wazuh | Open source SIEM · custom detection rules · endpoint agents |
| Windows Server 2022 | Active Directory · DNS · DHCP · Group Policy |
| Ubuntu Server 22.04 | Linux workloads · Netmiko/Ansible control node |

Virtual Lab Integration

GNS3 and EVE-NG topologies connect to the physical network via a Cloud node bridged to VLAN 40 (GNS3-Bridge). This means:

  • Virtual routers in GNS3 can exchange routes with the physical Cisco 3750G
  • VMs running on Proxmox can reach virtual hosts in GNS3 topologies
  • Lab scenarios can test real physical-to-virtual integration, not just isolated simulations

This is the setup that makes content like “BGP from a GNS3 router to a physical switch” possible.
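For that BGP scenario, the GNS3-side router just needs a neighbor across the VLAN 40 bridge. A minimal sketch of the IOSv-side config, rendered the same way as my other templates (the AS numbers and neighbor addresses here are made up for illustration; the physical peer is whatever terminates VLAN 40 on the hardware side):

```python
# Sketch: render a minimal BGP stanza for the GNS3-side IOSv router
# peering across the VLAN 40 (GNS3-Bridge) segment. AS numbers, router
# IDs, and prefixes are hypothetical values for illustration.

def bgp_config(local_as: int, router_id: str, neighbor_ip: str,
               neighbor_as: int, advertised: list[str]) -> str:
    lines = [
        f"router bgp {local_as}",
        f" bgp router-id {router_id}",
        f" neighbor {neighbor_ip} remote-as {neighbor_as}",
    ]
    lines += [f" network {net}" for net in advertised]
    return "\n".join(lines)

print(bgp_config(
    local_as=65010, router_id="10.0.40.10",
    neighbor_ip="10.0.40.1", neighbor_as=65000,      # physical side of the bridge
    advertised=["192.168.100.0 mask 255.255.255.0"],  # a lab prefix inside GNS3
))
```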


Planned Additions

  • Cisco ASR 1002 — WAN simulation, BGP labs, eventually MPLS
  • Additional 10GbE switch — dedicated storage fabric for Ceph replication
  • Raspberry Pi cluster — lightweight automation control plane experiments
  • XCP-ng node — side-by-side hypervisor comparison content

What Gets Tested Here

Every blog post and video that involves a lab environment was built and tested in this setup before publication. That includes:

  • All GNS3 and EVE-NG topology configs
  • All Proxmox and TrueNAS procedures
  • All Netmiko, Ansible, and Python network automation scripts
  • All OPNsense firewall policies
  • All Windows Server and Active Directory procedures

If something on this site doesn’t work the way I describe it, that’s a bug in my documentation — not a difference between theory and practice. I test everything.