In 2007, Palo Alto Networks introduced its first product and significantly changed the network security landscape. The term Next-Generation Firewall (NGFW) had not yet been coined; Gartner would introduce it in 2009, and at the time Palo Alto's products fell under the earlier label of Unified Threat Management (UTM). Palo Alto did not originate either UTM or the NGFW, but its products have since become the benchmark against which all competitors are measured. For simplicity's sake, we will use the NGFW label throughout this article. Although what we now call the cloud did exist in 2007, data centers were still built the traditional way: a hard security shell around a soft center of data and compute. Only the boldest organizations were moving their assets to AWS (Amazon Web Services) and the other emerging cloud platforms. The data center still had a well-defined, recognizable boundary, with a clear line between internal assets and everything outside.

NGFWs: Perfect for protecting the data center perimeter

The arrival of the public cloud upended this security landscape, though its impact was neither uniform nor immediately obvious. It is important to understand that NGFWs perform extensive processing on every packet they handle, and the general-purpose Intel architecture, despite its ubiquity, is poorly suited to that volume of low-level work. Palo Alto addressed the problem by adopting Cavium network processors for fine-grained packet inspection and manipulation, while Fortinet developed custom in-house processors for the same purpose. Compared with a decade ago, today's NGFWs are effectively special-purpose supercomputers, which contributes significantly to their cost. When customers began shifting to the cloud, NGFW vendors responded quickly with virtual versions of their products, but without the specialized silicon, performance suffered across the board. As a result, securing cloud network borders changed significantly: load balancing across multiple virtual firewalls became standard practice, and networks were re-architected with more Internet peering points than their traditional physical counterparts. Firewall vendors even began selling their virtual machine (VM) implementations in bundles of six or ten, because one or two firewalls were no longer sufficient to meet demand.

If that sounds complex to build and manage, pity the more typical company that moved only a portion of its assets to the cloud. As both IaaS (Infrastructure-as-a-Service) and PaaS (Platform-as-a-Service) proliferated, network boundaries became increasingly indistinct. Whereas the IT definition of "cloud" had been derived from the idea of a large number of computers (analogous to water vapor) seen together as one from a distance, a different definition started to feel more apt: "something that obscures or blemishes."

As organizations moved an assortment of applications, and parts of applications, to the cloud while other applications (and parts of applications) remained on-site, protecting them and enforcing security policy became incredibly difficult. This is largely because boundaries, the places where security is typically applied, became nearly impossible to define. And even where boundaries were clear, the sheer volume of security hardware, software, and configuration became overwhelming. As a result, security took a big step backwards.

Looking to the future: NGFW features in a microsegmentation environment

Thus began the era of what was known in its early days as microsegmentation, and what is now more often called Zero Trust Segmentation (ZTS).

The concept of microsegmentation is simple: policy enforcement (i.e., firewalls) on every server, controlling both inbound and outbound traffic to every other server. Fundamentally, microsegmentation is simply network segmentation taken to its logical extreme: one firewall per server. It gave security teams a powerful new tool against the "fuzzy boundaries" around and within our data centers by addressing security on a server-by-server (or at least application-by-application) basis.
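To make the model concrete, here is a minimal sketch of a label-based, default-deny segmentation policy check. The labels, rule fields, and function names are illustrative assumptions, not any vendor's actual API:

```python
# Hypothetical sketch of per-workload microsegmentation policy.
# Labels ("web", "app", "db") and rule structure are invented for
# illustration; real products use richer policy models.
from dataclasses import dataclass


@dataclass(frozen=True)
class Rule:
    src_label: str   # label of the workload originating the flow
    dst_label: str   # label of the workload receiving the flow
    port: int        # destination port
    proto: str       # "tcp" or "udp"


# Default-deny: only flows matching an explicit rule are permitted.
POLICY = {
    Rule("web", "app", 8443, "tcp"),
    Rule("app", "db", 5432, "tcp"),
}


def is_allowed(src_label: str, dst_label: str, port: int, proto: str) -> bool:
    """Return True only if an explicit rule permits this flow."""
    return Rule(src_label, dst_label, port, proto) in POLICY


print(is_allowed("web", "app", 8443, "tcp"))  # True: allowed by rule
print(is_allowed("web", "db", 5432, "tcp"))   # False: web may not reach db
```

The key design point is the default-deny posture: the web tier can reach the app tier, but any flow not explicitly enumerated, such as web talking directly to the database, is dropped at the workload itself rather than at a distant perimeter.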

Historically, microsegmentation has dealt with ports and protocols without venturing into the deep packet inspection territory necessary for NGFW features.
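The limitation is easy to illustrate. A port/protocol rule sees only layer-4 headers, so it cannot tell what is actually carried on an allowed port; distinguishing the payloads below is exactly the deep-inspection work NGFWs do. The rule and payloads here are invented for illustration:

```python
# Illustrative only: a layer-4 segmentation rule allows tcp/443
# without ever looking at what the connection carries.

def l4_match(port: int, proto: str) -> bool:
    # Classic segmentation rule: allow tcp/443, deny everything else.
    return proto == "tcp" and port == 443

# Both flows are identical at layer 4, so both are allowed, even
# though only the first is genuine TLS; the second is SSH tunneled
# over port 443.
flows = [
    {"port": 443, "proto": "tcp", "payload": b"\x16\x03\x01"},      # TLS handshake
    {"port": 443, "proto": "tcp", "payload": b"SSH-2.0-OpenSSH"},   # SSH on 443
]
for flow in flows:
    print(l4_match(flow["port"], flow["proto"]))  # True for both
```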


The evolution of firewalls has been remarkable, culminating in Next-Generation Firewalls that provide advanced security features and capabilities. These firewalls have proven essential in protecting networks from sophisticated cyber threats and attacks, but they also bring challenges: increased complexity, configuration difficulty, and potential performance bottlenecks. As technology advances, the future of firewalls lies in solving these challenges while keeping pace with evolving threats. Organizations should invest in skilled professionals who can manage and optimize NGFWs effectively while staying current on emerging trends and best practices in cybersecurity. By doing so, businesses can maintain a robust, secure network infrastructure that can withstand even the most advanced threats.
