I have worked as a sysadmin for around seven years, across a wide range of technologies. My current role is 3rd Line Server Infrastructure Engineer, supporting VMware and Cisco UCS platforms that provide IaaS for over 50 clients and around 1,500 VMs.
The virtualised environment I support runs almost exclusively on vSphere 5.1, spread across five separate Production vCenter servers (none of them linked) on a variety of hardware. Most of the compute runs on Cisco UCS blades, which are a challenge to manage in themselves, but more on that another time. Most of our storage is EMC VNX, with some IBM SVC-fronted kit thrown in for good measure, all on Fibre Channel, and our network stack runs on Cisco Nexus switches, using the simultaneously great and terrible Cisco Nexus 1000v as our vDS.
I guess this is all pretty standard stuff, so what problems do I see on a daily basis? In this blog I plan to talk about those problems, how we work around them, and what steps we can take to manage the issues this infrastructure throws up more effectively.