With the average infrastructure refresh period sitting at around five years, the average campus network faces the difficult task of continually meeting the demands of up to four generations of smartphones and four generations of tablets, and that's just from the world's most successful fruit company!
To put that into context, think for a minute about what you could do with the phone you had five years ago… now get the shiny new slab out of your pocket, along with its accompanying VR headset, for some immersive augmented reality… not even the same sport, eh!?
One key benefit of Wi-Fi has always been its ability to advance continuously while remaining backwards compatible, meaning that a state-of-the-art infrastructure has been able to support a wide range of devices, from the newest smartphone to the oldest laptop. Whilst this has been great, there was always going to be a limit, and with mobile devices being launched at a phenomenal rate, each equipped with faster, smarter silicon and the capability to dramatically change the user experience (and user demands), newer standards such as 802.11n/ac are forcing a departure from this.
Market intelligence shows that shipments of devices built around 802.11n/ac chipsets are set to completely eclipse their predecessors this year, making the need to upgrade the underlying infrastructure a real 'here and now' problem. Ageing infrastructures unable to support the new standards will create a range of negative side-effects for the operator as they fail to deliver services to match new-device capability. So how can organisations make the most of new technologies without constantly feeling the pinch of massive forklift upgrades?
Source: ABI Research, 2015
Let's delve deeper into my opening remark with some assumptions and fag-packet maths. Assumption time: the average user changes their mobile device (or adds a new one to their existing arsenal) every 13 months or so, while the average network refresh happens every four to five years. It follows that campus infrastructures must be built to accommodate the existing device landscape alongside the next three to four generations of increasingly amazing devices. As wireless and wired speeds edge ever closer, this basic design principle applies to both the wired and wireless domains.
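For the curious, that fag-packet sum can be sketched in a few lines of Python. The 13-month device cycle and the four-to-five-year refresh window are the assumptions above, not hard data:

```python
# Back-of-the-envelope maths: how many device generations must
# one network refresh cycle support?
DEVICE_CYCLE_MONTHS = 13            # assumed average time between new devices

for refresh_years in (4, 5):        # assumed network refresh window
    generations = refresh_years * 12 // DEVICE_CYCLE_MONTHS
    print(f"{refresh_years}-year refresh -> {generations} device generations")
```

Run it and you land on the three-to-four-generation figure quoted above.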
In other words, creating the infrastructure that will continually deliver against user expectation for the entire period needs serious planning!
The best strategies to achieve this require a mix of high-speed infrastructure, scalable design and usable intelligence about what is going on across the RF domain. In fact, that list should be read in reverse, with intelligence around wireless usage now being the essential bedrock onto which the other elements are placed. Visibility of estate usage, RF coverage, user behaviours, device types, user classifications and so on must all be understood if the IT team are to stand a chance of delivering (and continuing to deliver) services to the required level.
Done properly, this allows a solution designed to deliver 'pay as you grow' flexibility rather than 'optimum workload at all locations on day one' heaviness. This can be as simple as using a single wired backhaul from an access point to the core on day one, with provision to move to two or more bonded links if traffic demands it. Equally, it is possible to plan to flood the areas of heaviest use and dial down the lesser-used areas with significantly less coverage (ubiquitous wireless only pays for itself when people are connected, after all!). Modelling and detailed desktop RF planning are core to making this work, allowing IT to understand the challenges of signal propagation in the physical environment and to model the various deployment and upgrade paths open to them.
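As a purely illustrative sketch of the 'pay as you grow' idea, here is how you might estimate when an access point's single wired backhaul needs to become two or more bonded links. The 1 Gbps link size and the 80% utilisation ceiling are assumptions for the sake of the example, not recommendations:

```python
# Hypothetical 'pay as you grow' sizing: how many bonded uplinks
# does an access point need as its peak offered load grows?
import math

LINK_GBPS = 1.0              # assumed single wired backhaul link speed
UTILISATION_CEILING = 0.8    # assumed headroom target per link

def uplinks_needed(peak_gbps: float) -> int:
    """Links to bond so peak load stays under the utilisation ceiling."""
    return max(1, math.ceil(peak_gbps / (LINK_GBPS * UTILISATION_CEILING)))

for load in (0.3, 0.9, 1.5):
    print(f"{load} Gbps peak -> {uplinks_needed(load)} bonded link(s)")
```

The point is the shape of the plan, not the numbers: day one ships with one link, and the cabling provision means moving to bonded links is a config change rather than a forklift upgrade.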
There are lots of things to consider, of course, but it constantly surprises me how many people don't model for both now and the future. If there were only one takeaway from the ramblings above, I'd make it this: as the old adage goes, 'fail to plan, plan to fail'.