There is no lack of articles declaring the death of the “perimeter”: the claim that if your threat model assumes an unsafe exterior environment and a safe interior network protected by VPNs, firewalls, and the like, then something is wrong with your security architecture. The argument for zero trust security has real strengths: more workers are remote; more employees are connecting to and regularly using internal software and systems; zero trust requires robust monitoring and maintenance, which can improve security more than adding a “secure” firewall; and as companies grow, their networks become more complex and harder to segment safely. All of these things are true, but I think painting the picture with such broad strokes is a mistake.
I’ve spent most of my time in the startup space. This is a world (unfortunately) embodied by the phrase “move fast and break things”. Time to market and the ability to pivot quickly are key business requirements. This means startups often leave security as an afterthought, and not entirely without reason: security is a non-trivial time and cost trade-off, and short-term survival matters more than protecting the company’s resources from a potential breach. I believe this line of thinking continues even as organizations grow. Security is seen as a cost center and a blocker of rapid product progress and usability. This is why I believe declaring the death of the perimeter is a mistake.
First, zero trust/perimeterless security is great, but it is a much larger time and money sink than perimeter-based security; Google has huge, dedicated internal teams and systems built specifically to support its BeyondCorp model. Second, declaring something “dead” implies that it’s no longer useful; however, I don’t think anyone would argue that securing your internal networks is a mistake. While perimeter-based security may not be perfect, it’s much better than nothing. Finally, startups have something that moots many of the arguments for perimeterless security: they’re tiny. It’s entirely possible to have only ports 80 and 443 open to the internet, with an internal network or segmented VPN for reaching internal resources, and access further limited by IP whitelists. Because volumes are low, it’s also much easier to log and review everything.
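As a rough illustration of the kind of thing I mean, here’s a minimal sketch of an IP-whitelist check with logging, the sort of gate a small internal service might sit behind. The CIDR ranges and service name are made up for the example, and in practice this check usually lives in a firewall or reverse-proxy rule rather than in application code.

```python
# Minimal sketch of an IP whitelist for an internal service.
# The networks below are hypothetical: a VPN subnet and a single
# office egress IP. At startup scale, logging every decision and
# reviewing the log later is cheap.
import ipaddress
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("internal-admin")

ALLOWED_NETWORKS = [
    ipaddress.ip_network("10.8.0.0/24"),     # example VPN subnet
    ipaddress.ip_network("203.0.113.7/32"),  # example office egress IP
]

def is_allowed(client_ip: str) -> bool:
    """Return True if the client IP falls inside an allowed network."""
    addr = ipaddress.ip_address(client_ip)
    allowed = any(addr in net for net in ALLOWED_NETWORKS)
    log.info("access %s for %s", "granted" if allowed else "denied", client_ip)
    return allowed

if __name__ == "__main__":
    print(is_allowed("10.8.0.42"))      # True: inside the VPN subnet
    print(is_allowed("198.51.100.23"))  # False: not on the whitelist
```

The point isn’t the code itself; it’s that when the whole allowlist fits in a dozen lines, maintaining and auditing it is well within a small team’s reach.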
Don’t get me wrong: I’m pro-perimeterless (in theory). I think it’s an awesome idea, and I would encourage any organization with a robust, dedicated security team capable of following it to do so. It should just be made clear that a more traditional security model should not be ignored simply because it’s unsexy or because it will eventually be migrated away from. The number of internet-exposed admin interfaces is just too damn high to think otherwise.