By Deepak Kumar, Founder and CEO, Adaptiva
When COVID-19 unexpectedly shut down the country, enterprises were hit hard. Millions of workers went remote almost instantly. In the months since, we’ve learned a lot about our networks and how they respond to various stressors. With Google, Facebook and others announcing that employees won’t return to campus until at least July 2021, and with companies like Twitter and Square debuting policies that allow employees to work from home permanently, it is clear that remote work will remain a fixture of business well past the pandemic. As such, issues in endpoint management will need to be addressed for networks to run efficiently.
To understand what’s happening with infrastructure today and how COVID-19 is prompting long-term changes, it’s important to first understand the past.
Built for a Different Purpose
In the ’90s, enterprise software and endpoint management were built on the assumption that machines wouldn’t move: they stayed in the office 24x7, relying on now-ancient protocols served from massive servers. That began to change in the early 2000s, when laptops started gaining popularity.
People were enamored with the ability to hook a machine into a docking station and a monitor for a desktop experience at work, while being able to fold it up and take it with them at the end of the day. As laptops became faster and more powerful and as Wi-Fi started to take off, mobility became a real movement.
When it did, endpoint management became much harder, because the tooling had been designed for static machines. Software deployment suddenly needed to know where a machine was, what resources were available around it, and how best to fulfill its needs. Microsoft’s Systems Management Server (SMS), pretty much the only game in town back then, wasn’t equipped to deal with the mobile world. Microsoft soon followed with System Center Configuration Manager (SCCM), which shed the NetWare-era legacy and was designed for high performance and mobility, evolving over time.
Fast Forward to Today
Many of the heavy-handed systems management processes of the past two decades are now becoming obsolete thanks to ongoing innovation. Today, monolithic applications are being broken into pieces, a bit like a Lego system, so they can be combined into flexible workflows in which components talk to one another. The result is software that adapts to the environment, rather than software that tells users and customers how they need to adapt. It is far less cumbersome, and a notable change.
At the same time, most large enterprises still run primarily on those big old servers and remain reliant on their corporate networks. When COVID-19 hit, one part of the corporate network failed to scale: the VPN. Companies quickly maxed out their capacity, unable to support 30,000 employees working from home or to let components talk to each other as freely as they should. COVID-19 exposed not just a tactical issue with VPNs but the fundamentally larger issue of absolute dependence on corporate networks. That is a problem because employees can’t get their software, updates and patches without eating up bandwidth or compromising performance, and that leaves systems vulnerable to attack.
Fortunately, technologies are emerging that make it possible to deploy content and manage systems without crushing bandwidth. Distributed computing architecture has proven effective here: advanced enterprise content delivery solutions can harvest unused disk space from endpoints and pool it into a massive virtual SAN at every location, without disrupting workers or their endpoints. This removes the need for dedicated servers to store the content.
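To make the idea concrete, here is a minimal Python sketch of that harvesting model. Everything in it is illustrative: the Endpoint and VirtualPool names, the 50 percent donation reserve, and the two-replica placement policy are assumptions for this example, not any vendor’s actual implementation.

    import shutil
    from dataclasses import dataclass, field

    def spare_capacity(path: str = "/", reserve_fraction: float = 0.5) -> int:
        """Donate only a fraction of free space so the user never notices."""
        free = shutil.disk_usage(path).free
        return int(free * reserve_fraction)

    @dataclass
    class Endpoint:
        """A peer that donates a slice of its spare disk to the pool."""
        name: str
        donated_bytes: int
        chunks: dict = field(default_factory=dict)  # chunk_id -> size stored

        def used(self) -> int:
            return sum(self.chunks.values())

    class VirtualPool:
        """Aggregates donated space from many endpoints into one logical store."""
        def __init__(self, endpoints: list[Endpoint]):
            self.endpoints = endpoints

        def total_capacity(self) -> int:
            return sum(e.donated_bytes for e in self.endpoints)

        def place_chunk(self, chunk_id: str, size: int, replicas: int = 2) -> list[str]:
            """Copy each chunk onto several peers so that one machine
            going offline does not take the content with it."""
            targets = sorted(self.endpoints,
                             key=lambda e: e.donated_bytes - e.used(),
                             reverse=True)[:replicas]
            for ep in targets:
                ep.chunks[chunk_id] = size
            return [e.name for e in targets]

Laid out this way, an office’s “server” is simply the sum of the spare gigabytes already sitting on its desktops and laptops.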
With single-download solutions, one employee downloads the software and others on the same network can then fetch it securely from that source, remarkably quickly. Cloud-based offerings now make this so easy that as long as employees can get on the internet, wherever they are, they can get their software completely independent of the company’s on-premises network. Such solutions eliminate considerable strain, so enterprises can run far more efficiently with a remote workforce.
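A hedged sketch of that peer-first download flow follows. The PEER_SOURCES registry, the port number, and the host address are hypothetical placeholders; the point is only the order of operations: verify from a peer first, fall back to the cloud, then advertise yourself as a new source.

    import hashlib
    import urllib.request

    # Hypothetical registry: content hash -> URLs of LAN peers holding a copy.
    PEER_SOURCES: dict[str, list[str]] = {}

    def fetch(content_hash: str, cloud_url: str) -> bytes:
        """Try nearby peers first; fall back to the cloud, then register
        this host as a new source so the next caller stays on the LAN."""
        for peer_url in PEER_SOURCES.get(content_hash, []):
            try:
                data = urllib.request.urlopen(peer_url, timeout=2).read()
                if hashlib.sha256(data).hexdigest() == content_hash:
                    return data  # integrity checked against the expected hash
            except OSError:
                continue  # that peer is offline or slow; try the next one
        data = urllib.request.urlopen(cloud_url).read()
        PEER_SOURCES.setdefault(content_hash, []).append(
            "http://this-host:8080/" + content_hash)  # placeholder address
        return data

Verifying the hash before trusting a peer is what makes the handoff secure: a corrupted or tampered copy simply fails the check and the caller moves on.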
When you look at the outcomes, there are cost savings in addition to the elimination of complexity and moving parts. Organizations also get high performance and high resilience, which matters: traditional client-server architecture must be actively maintained and is not very fault tolerant. Peer-to-peer solutions, by contrast, are extremely resilient and require far less administration. A few endpoints may go down, but there are so many others in the network to pick up the slack that no one notices. Resilience and scale are built in, because the more endpoint devices an enterprise has, the more sources of content there are to meet demand.
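One way to see why scale helps rather than hurts: if each peer holding a piece of content is independently online some fraction of the time, availability compounds quickly. The 90 percent uptime figure and the independence assumption below are illustrative, not measured.

    def availability(peer_uptime: float, sources: int) -> float:
        """Chance that at least one of `sources` peers holding the
        content is online, assuming independent failures."""
        return 1 - (1 - peer_uptime) ** sources

    # Even unreliable peers add up fast at 90% individual uptime:
    for n in (1, 2, 4, 8):
        print(n, f"{availability(0.90, n):.8f}")
    # 1 0.90000000
    # 2 0.99000000
    # 4 0.99990000
    # 8 0.99999999

On this simple model, eight ordinary desktops already exceed “five nines” of availability for the content they jointly hold.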
These types of solutions can be implemented easily in today’s environment to help enterprises overcome software distribution hurdles. That matters because software distribution doesn’t just help with performance: by providing the mechanism for timely updates, patches and general endpoint management, it guards machines and devices against vulnerabilities. If there is any question about why this is necessary, just look at Garmin, which reportedly paid a $10 million ransom to get its systems up and running again because some endpoints were susceptible to well-known malware that should have been protected against.
The Next Frontier
Just as software deployment needs to happen at speed and scale, so does malware detection. If detection software takes a week to serve out new signatures, scan, and collect results, the network is exposed for that entire week. All it takes is one endpoint.
Moving forward, peer-to-peer technology will be used to provide immediate protection against security threats. Every machine will essentially be able to scan itself, and within minutes of a compromise, threats will be detected and remediated, which will change the game.
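As a rough illustration of how that could work, the sketch below pairs a flood-style gossip of signature hashes with a local self-scan. It is a simplification under stated assumptions: real products use far richer detection than whole-file SHA-256 digests, and the Node class and its methods are invented for this example.

    import hashlib
    from pathlib import Path

    class Node:
        """An endpoint that relays fresh signatures and scans itself."""
        def __init__(self, name: str):
            self.name = name
            self.signatures: set[str] = set()  # SHA-256 digests of known-bad files
            self.neighbors: list["Node"] = []

        def receive(self, sigs: set[str]) -> None:
            """Keep what's new and forward it; duplicates stop the flood,
            so signatures reach every peer in minutes, not a week."""
            fresh = sigs - self.signatures
            if not fresh:
                return
            self.signatures |= fresh
            for peer in self.neighbors:
                peer.receive(fresh)

        def self_scan(self, directory: str) -> list[Path]:
            """Hash local files and flag any that match a known-bad digest."""
            hits = []
            for f in Path(directory).rglob("*"):
                if f.is_file():
                    digest = hashlib.sha256(f.read_bytes()).hexdigest()
                    if digest in self.signatures:
                        hits.append(f)
            return hits

Because each node forwards only what its neighbors have not yet seen, the flood dies out on its own once every machine is up to date.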
Threats are rising dramatically, and enterprises are being forced to shore up their systems in new ways. Peer-to-peer detection and remediation technology will be particularly important for enterprises with a large number of machines on their network — the larger the network, the greater the issues with speed and scale. Additionally, organizations with high numbers of remote workers or significant digital assets that could be compromised in an attack would benefit from the speed and scale peer-to-peer security solutions provide.
So while COVID-19 has been perhaps the most negatively disruptive event in the life of the modern enterprise, it has helped identify underlying problems in infrastructure. That, in turn, has led to rapid innovation and deployment of remarkable new technologies that can help organizations prepare for the new world ahead.
As first published in BetaNews.
###
Dr. Deepak Kumar is the founder and chief executive officer at Adaptiva. He is responsible for overseeing the company’s ability to execute on its strategic product vision in the endpoint management and security space. He was the lead program manager with Microsoft’s Systems Management Server 2003 team and program manager with the Windows NT Networking team. Prior to Microsoft, he was a group manager for IP Telephony products at Nortel. Dr. Kumar has received five patents related to his work on SCCM/MEM at Microsoft and has written more than 50 publications, including a book on Windows programming. While at Microsoft, Dr. Kumar also authored the Think Week paper for Bill Gates that became Project Greenwich, now known as Microsoft Office Communications Server/Lync. Deepak is an avid outdoorsman and hiker. For more information, please visit https://adaptiva.com/, and follow the company on LinkedIn, Facebook and Twitter.