
Advanced Solutions Customer Story Part 1: Why NSX-T?


 

Customer Overview

Advanced Solutions, a DXC Technology company, was formed in 2004 and employs about 500 staff to support the government of the Canadian province of British Columbia and other public sector customers with IT and business process solutions. For government agencies and services to continue operating efficiently and effectively, it is essential that the IT resources that they require are provided quickly and accurately.

Key Pain Points

All IT organizations are acutely familiar with the wide range of pain points and obstacles that can stand in the way of delivering resources to empower their businesses to move with speed and agility. One of the most common hindrances to IT agility, and therefore business agility, is a painfully slow provisioning process, which can take weeks for a single application. The most common bottleneck within these processes is provisioning networking and security services. This is a key pain point for Advanced Solutions, but one that VMware is helping them solve with the VMware NSX Data Center network virtualization platform.

Dan Deane, Solutions Lead at Advanced Solutions, says, “The key IT pain points that VMware solutions are helping us solve are around networking and provisioning.”

New Use Cases

Advanced Solutions was already a user of NSX Data Center for vSphere, but some new initiatives and use cases led them to choose to deploy NSX-T Data Center.

Dan says, “Our primary reasons for deploying NSX-T were in support of a next generation SDN platform. We traditionally have been an NSX for vSphere customer, but with NSX-T we can begin looking at how we can provide broad data center networking and security not only for vSphere workloads, but for other hypervisors and physical workloads. On top of that, we’ve recently deployed PAS (Pivotal Application Service) and PKS. From an operational delivery point of view, it becomes very compelling.”

This highlights a fundamental benefit of the NSX-T Data Center platform: extension of consistent networking and security policies across heterogeneous hypervisors, clouds, and application frameworks (VMs, containers, bare metal). In addition to this, NSX-T is embedded into PKS, bringing enterprise-grade networking and security to what is already an enterprise-grade Kubernetes platform. The extension of networking and security policy to containers takes the concept of micro-segmentation to a new level, bringing the software-defined, policy-driven ability to secure individual workloads down to individual containers and microservices.

“The benefits we’re realizing from NSX-T are in the container management space. We’ve just recently deployed PAS and PKS, and we’re waiting to see that level of security that customers want when they do microservices and containers from the platform, so it’s great. What we can do today that we couldn’t do before is micro-segmentation for containerized workloads,” Dan says.

Closing Thoughts

These new and expanded use cases, based on customer demands, are enabling Advanced Solutions to deliver IT resources for traditional and containerized applications that drive business value and empower their customers to move with speed and agility as they continue their journeys of digital transformation.

“These new technologies are allowing us to help our customers digitally transform and enable their business and application services to mature,” Dan says.

Be sure to check the Network Virtualization blog in the coming week for Part 2 of Advanced Solutions’ NSX-T story.



Coppell ISD Integrates Security into Infrastructure via VMware AppDefense


What do you get when you provide 12,800 kids with technology and programming classes? You get 12,800 people who are getting ready for the modern workforce of today and tomorrow. You also get 12,800 potential vulnerabilities. With the growing quantity of phishing emails, ransomware and malware that Coppell Independent School District (CISD) already had to combat with a small staff, this Texas school system was looking for smarter solutions.

“All these students who have taken programming classes, they’re often looking to bypass administrative privileges, looking for ways around the internet filters, or looking for ways to play games on the school computers,” said Stephen McGilvray, CISD Executive Director of Technology. “So, in addition to all these external threats we have to worry about, we also have a bunch of homegrown, internal threats.”

The school district recently underwent a data center refresh, which included updates for VMware vSphere, VMware App Volumes and VMware Horizon, and launched the implementation of VMware NSX Data Center. During the refresh, their VMware sales rep told them about a relatively new security product called VMware AppDefense.

At its core, AppDefense shifts the advantage from attackers to defenders by determining and ensuring good application behavior within the data center, instead of continuing the losing battle of attempting to identify unknown threats within the internal network.

AppDefense looks to prevent the execution and propagation of attacks with a combination of application whitelisting and behavioral awareness. It also integrates into NSX Data Center to prevent the lateral spread of threats with adaptive micro-segmentation. Security is enforced at the most granular level of an app – the process level – and micro-segmentation of the network stops an attack in its tracks.

“Before, our security footprint had a strong dependence on keeping people out,” said Systems Engineer James Holloway. “We didn’t have much in place for mitigation, if and when somebody did get through.”

“The AppDefense message really resonated with us,” said McGilvray. “With a small technology team serving 17 schools, anything that saves us time is appreciated. We liked that AppDefense integrated easily with NSX and into our VMware stack, to add another layer of security.”

CISD started its AppDefense journey by protecting its most critical systems, including file servers and student information databases – “The things that would cause the most headaches if they were compromised,” said Holloway. “Along the way, we’ve gotten a better understanding of how all our systems communicate. You think of your servers, and each has different services, but from a technical standpoint we were surprised to see just how much data is going in and out. That drill-down opened our eyes to how much we might have been missing.”

 

Secure, Game-changing App Delivery

Horizon and App Volumes are another significant part of CISD’s VMware footprint. The school district uses Horizon to provide remote access for students and teachers. Students can securely access their school desktops and applications from anywhere using their school-issued Apple iPads, with security underpinned by NSX Data Center and AppDefense.

Holloway calls App Volumes a “game changer. We’re able to easily provide whatever apps our teachers need and meet their expectations.” He noted that while 90 percent of the school district’s devices are Apple, the district has purchased many programs over the years that are Windows-only or have other system restrictions. CISD uses App Volumes to deliver those apps to any endpoint, regardless of infrastructure or operating system.

 

Flexible and Responsive for the Future

CISD’s IT team appreciates the flexibility they’ve seen from their software-defined data center refresh. Using virtualization and hyperconverged infrastructure, they were able to shrink their hardware footprint and related costs such as power and cooling. “Our systems are fairly static and don’t change that much,” said McGilvray. “Yet, when we do need to make a change, we can make it quickly, scale it out and give people what they need. That’s just gotten better and better as VMware has refined its products over the years.”

In the future, CISD plans to expand their AppDefense and NSX Data Center footprints to more secondary systems. With AppDefense, “We know we’ll be alerted when something isn’t operating within its normal parameters,” said Holloway. “For us, the value of AppDefense lies in having peace of mind.”

 

Learn more about VMware AppDefense.


Securing your SWIFT environment with VMware


The SWIFT Controls Framework was created to help customers figure out which controls are needed to better secure their SWIFT environment.  The SWIFT security controls framework is broken down into objectives, principles, and controls.   The three objectives are “Secure your environment, Know and Limit Access, and Detect and Respond”.

Customers interested in exploring VMware product alignment with the SWIFT framework should evaluate the end-to-end solution. This includes VMware products, as well as other technology that support a customer’s SWIFT platform. The following is a high-level alignment of some of the SWIFT framework controls and VMware products.

VMware Product Alignment with SWIFT Objectives

Restrict internet access & Protect Critical Systems from General IT Environment

As part of a SWIFT deployment, a secured and zoned-off environment must be created. This zone contains the SWIFT infrastructure that is used for all SWIFT transactions. The two SWIFT principles that we will discuss are:

  • Protect Critical Systems from General IT Environment
  • Detect Anomalous Activity to Systems or Transaction Records

These controls are required to be enforced on the SWIFT infrastructure.  SWIFT requires that all traffic from the general IT infrastructure to the SWIFT zone be as restricted as possible.   They also require that the customer protect and monitor all systems for compromise.

Creating an architecture that meets the requirement of Protect Critical Systems from General IT Environment can be challenging. This principle calls for a stateful firewall to provide segmentation into and out of the zone, with all ports and communications limited and reviewed annually. It also recommends, as an optional control, that organizations restrict communication between components within the SWIFT environment (micro-segmentation).

Figure 1: Architecture A-1 Full Stack within the user Location

With VMware NSX®, customers can meet this requirement as well as the optional recommendations.  NSX not only provides a stateful firewall to segment the SWIFT environment from the rest of the organization, it also allows users to segment the components within the SWIFT zone, enabling a better security posture.  NSX’s ability to perform protocol validation limits criminals’ ability to tunnel traffic over approved ports. And since NSX is implemented entirely in software, customers do not need to purchase a dedicated firewall for their SWIFT zone.
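To make the firewalling piece concrete, below is a minimal sketch of what a default-deny policy for the SWIFT zone could look like through the NSX-T declarative policy API (NSX-T 2.4 and later). The manager hostname, group names, and the single allowed service are placeholders, not prescriptions from the SWIFT framework; a real policy would enumerate the reviewed and approved ports.

# Hypothetical sketch only: enforce default-deny into the SWIFT zone.
# "swift-zone" and "swift-operators" are placeholder NSX groups.
curl -k -u admin:'<password>' -X PUT \
  'https://nsx-mgr.corp.local/policy/api/v1/infra/domains/default/security-policies/swift-zone' \
  -H 'Content-Type: application/json' \
  -d '{
    "display_name": "swift-zone",
    "category": "Application",
    "rules": [
      {
        "display_name": "allow-approved-to-swift",
        "sequence_number": 10,
        "source_groups": ["/infra/domains/default/groups/swift-operators"],
        "destination_groups": ["/infra/domains/default/groups/swift-zone"],
        "services": ["/infra/services/HTTPS"],
        "action": "ALLOW"
      },
      {
        "display_name": "deny-all-to-swift",
        "sequence_number": 20,
        "source_groups": ["ANY"],
        "destination_groups": ["/infra/domains/default/groups/swift-zone"],
        "services": ["ANY"],
        "action": "DROP"
      }
    ]
  }'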

In the case of Detect Anomalous Activity to Systems or Transaction Records, customers can leverage VMware AppDefense to protect their systems from unknown processes and compromises.  AppDefense allows customers to tighten down the applications running within the SWIFT zone by limiting and enforcing that only approved executables can run and communicate.  Enabling AppDefense gives organizations a way to better filter out the noise and be alerted to real issues. AppDefense can learn how your applications run and communicate, and alert or react to anything outside of these known parameters.

 

VMware Product   Product Capability                                                SWIFT Control
NSX              Boundary protection via stateful firewall                         1.1c 1
NSX              Restricting communication between components in the secure zone   1.1c 2
AppDefense       Integrity checks of software                                      6.2
AppDefense       Alert/detect anomalous activity                                   6.4

 

Additional information

VMware can help customers better secure their SWIFT environment, their data center, and their clouds.  Security features are present throughout the VMware stack, with NSX and AppDefense at the top of that list.  If you want to learn more about VMware’s security capabilities, please reach out to your local account team.

VMware’s security and compliance team is constantly working with customers and auditors to better help our customers meet their security and compliance needs.  You can find their blog, with a treasure trove of information, at https://blogs.vmware.com/vmwaresecurity/.

 

Disclaimer: This document is intended to provide general guidance for organizations that are considering VMware solutions to help them address compliance requirements.  The information contained in this document is for educational and informational purposes only.  This document is not intended to provide regulatory advice and is provided “AS IS,” without warranty of any kind. VMware makes no claims, promises, or guarantees about the accuracy, completeness, or adequacy of the information contained herein.  Organizations should engage appropriate legal, business, technical, and audit expertise within their specific organization for review of regulatory compliance requirements.


Where in the World is NSX?


VMware NSX is going worldwide! We’ll be out and about through the end of the year, spreading networking and security love across America, Asia Pacific, and Europe. Our goal is to help agile organizations move toward a Virtual Cloud Network with consistent connectivity, branch optimization, and security across all infrastructure.

Whether we’ll be at a booth, product demo, talk, or otherwise – we want to connect! Join us at any of the major conferences and upcoming NSX events listed below to chat with our product experts. And, if you think you’ll be in attendance, be sure to tweet at us to let us know!

NSX Upcoming Events

Checkpoint CPX – 2/4
When: February 2 – 4, 2019
Where: Las Vegas, NV

Networking Field Day – 2/13
When: February 13 – 15, 2019
Where: Palo Alto, CA

Mobile World Congress – 2/25
When: February 25 – 28, 2019
Where: Barcelona, Spain

RSAC – 3/4
When: March 4 – 8, 2019
Where: San Francisco, CA

Cisco Live APJ – 3/5
When: March 5 – 8, 2019
Where: Melbourne, AUS

SANS 2019 – 4/1
When: April 1 – 8, 2019
Where: Orlando, FL

Cloud Foundry Summit US – 4/2
When: April 2 – 4, 2019
Where: Philadelphia, PA

Open Networking Summit – 4/3
When: April 3 – 5, 2019
Where: San Jose, CA

FS-ISAC Summit – 4/28
When: April 28 – May 1, 2019
Where: Orlando, FL

Dell World – 4/29
When: April 29 – May 2, 2019
Where: Las Vegas, NV

H-ISAC Summit – 5/13
When: May 13 – 17, 2019
Where: Ponte Vedra Beach, FL

KubeCon EMEA – 5/20
When: May 20 – 23, 2019
Where: Barcelona, Spain

InfoSec EMEA – 6/4
When: June 4 – 6, 2019
Where: London, England

Cisco Live US – 6/9
When: June 9 – 13, 2019
Where: San Diego, CA

Gartner SRM US – 6/17
When: June 17 – 20, 2019
Where: National Harbor, MD

AWS re:Inforce – 6/25
When: June 25 – 26, 2019
Where: Boston, MA

 

Don’t see an event you’ll be attending? Subscribe to our blog for additional event updates and all things network virtualization and security.

Be sure to follow us on Twitter and on Facebook to stay up-to-date on the latest from NSX!


NSX-T Integration with Openshift


I am sometimes approached with questions about NSX-T integration details for Openshift. People seem well aware of how NSX-T works and integrates with Pivotal Container Service (aka PKS), Pivotal Application Service (PAS, formerly known as PCF), and even with vanilla Kubernetes, but there is not much information on how we integrate with Red Hat’s Openshift. This post aims to throw some light on the integration with this platform. In the examples below I am using Openshift Origin (aka OKD), but for a supported solution you need to go with the Openshift Enterprise Platform. The same NSX-T instance can be used to provide networking, security, and visibility to multiple Openshift clusters.

 

Example Topology

 

In this topology we have a T0 router that connects the physical and virtual worlds. We also have a T1 router acting as a default gateway for the Openshift VMs. Those VMs have two vNICs each. One vNIC is connected to the Management Logical Switch for accessing the VMs. The second vNIC is connected to a disconnected Logical Switch and is used by nsx-node-agent to uplink the POD networking. The LoadBalancer used for configuring Openshift Routes, plus all of the projects’ T1 routers and Logical Switches, are created automatically later when we install Openshift. In this topology we use the default Openshift HAProxy Router for all infra components like Grafana, Prometheus, the Console, and the Service Catalog. This means that the DNS records for the infra components need to point to the infra nodes’ IP addresses, since HAProxy uses the host network namespace. This works well for infra routes, but in order to avoid exposing the infra nodes’ management IPs to the outside world we will be deploying application-specific routes to the NSX-T LoadBalancer. The topology here assumes 3 x Openshift master VMs and 4 x Openshift worker VMs (two for infra and two for compute). If you are interested in a POC type of setup with one master and two compute nodes, you can refer to the YouTube video below.

 

Prerequisites

ESXi hosts requirements

Although NSX-T supports different kinds of hypervisors, I will focus on vSphere. ESXi servers that host Openshift node VMs must be NSX-T Transport Nodes.

Virtual Machine requirements

Openshift node VMs must have two vNICs:

  1. A management vNIC connected to the Logical Switch that is uplinked to the management T1 router.
  2. A second vNIC that carries two Tags in NSX, so that the nsx-container-plugin (NCP) knows which port to use as the parent VIF for all PODs running on that particular Openshift node.

 

The tags need to be as follows:

{'ncp/node_name':  'node_name'}
{'ncp/cluster': 'cluster_name'}
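If you want to apply these tags through the API rather than the UI, a hedged sketch is below. The manager address, credentials, and port UUID are placeholders; the tag values follow the ansible hosts file used later in this post.

# Hypothetical sketch: find the logical port backing the node VM's second
# vNIC, then re-PUT it with the two NCP tags added.
curl -k -u admin:'<password>' \
  https://nsx-mgr.corp.local/api/v1/logical-ports/<port-uuid>
# Take the returned JSON body (including its current "_revision"), add the
# block below to it, and PUT the whole object back to the same URL:
#   "tags": [
#     {"scope": "ncp/node_name", "tag": "node1.corp.local"},
#     {"scope": "ncp/cluster",   "tag": "cl1"}
#   ]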

* The order in the UI is reversed compared to the API.

The node_name must be exactly as kubelet will see it, and the cluster name must be the same as the nsx_openshift_cluster_name specified in the ansible hosts file shown below.

We need to make sure that the proper tags are applied on the second vNIC on every node.

NSX-T requirements

The following objects need to be pre-created in NSX so that they can be referenced later in the ansible hosts file (a hedged API sketch for some of them follows this list):

  1. T0 Router
  2. Overlay Transport Zone
  3. IP Block for POD networking
  4. IP Block for routed (NoNAT) POD networking – optional
  5. IP Pool for SNAT – by default, the subnet given per Project from the IP Block in point 3 is routable only inside NSX. NCP uses this IP Pool to provide connectivity to the outside.
  6. Top and Bottom FW sections (optional) in the dFW. NCP will place k8s Network Policy rules between those two sections.
  7. Openvswitch and CNI plugin RPMs need to be hosted on an HTTP server reachable from the Openshift Node VMs (http://1.1.1.1 in this example).
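As a hedged sketch of items 3 and 5, these objects can also be pre-created through the NSX-T Manager API. The display names below match the ansible hosts file in the next section, while the CIDRs and allocation ranges are invented for illustration.

# Hypothetical sketch: create the POD-networking IP Block and the SNAT pool.
curl -k -u admin:'<password>' -X POST \
  https://nsx-mgr.corp.local/api/v1/pools/ip-blocks \
  -H 'Content-Type: application/json' \
  -d '{"display_name": "pod-networking", "cidr": "10.4.0.0/16"}'

curl -k -u admin:'<password>' -X POST \
  https://nsx-mgr.corp.local/api/v1/pools/ip-pools \
  -H 'Content-Type: application/json' \
  -d '{
    "display_name": "external-pool",
    "subnets": [
      {
        "cidr": "10.114.0.0/24",
        "allocation_ranges": [{"start": "10.114.0.10", "end": "10.114.0.200"}]
      }
    ]
  }'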

Installation

Below is an example ansible hosts file, focused on the NSX-T integration part. For a full production deployment we recommend referring to the Openshift documentation. We also assume you will run the Openshift ansible installation from the first master node.

[OSEv3:children]
masters
nodes
etcd

[OSEv3:vars]
ansible_ssh_user=root
openshift_deployment_type=origin

openshift_master_identity_providers=[{'name': 'htpasswd_auth', 'login': 'true', 'challenge': 'true', 'kind': 'HTPasswdPasswordIdentityProvider'}]
openshift_master_htpasswd_users={'yasen' : '$apr1$dNJrJ/ZX$VvO7eGjJcYbufQkY6nc4x/'}

openshift_master_default_subdomain=demo.corp.local
openshift_use_nsx=true
os_sdn_network_plugin_name=cni
openshift_use_openshift_sdn=false
openshift_node_sdn_mtu=1500
openshift_master_cluster_method=native
openshift_master_cluster_hostname=master1.corp.local
openshift_master_cluster_public_hostname=master1.corp.local
# NSX specific configuration
#nsx_use_loadbalancer=false
nsx_openshift_cluster_name='cl1'
nsx_api_managers='192.168.110.202'
nsx_api_user='admin'
nsx_api_password='VMware1!VMware1!'
nsx_tier0_router='DefaultT0'
nsx_overlay_transport_zone='overlay-tz'
nsx_container_ip_block='pod-networking'
nsx_no_snat_ip_block='nonat-pod-networking'
nsx_external_ip_pool='external-pool'
nsx_top_fw_section='containers-top'
nsx_bottom_fw_section='containers-bottom'
nsx_ovs_uplink_port='ens224'
nsx_cni_url='http://1.1.1.1/nsx-cni-2.4.0.x86_64.rpm'
nsx_ovs_url='http://1.1.1.1/openvswitch-2.10.2.rhel76-1.x86_64.rpm'
nsx_kmod_ovs_url='http://1.1.1.1/kmod-openvswitch-2.10.2.rhel76-1.el7.x86_64.rpm'

[masters]
master1.corp.local
master2.corp.local
master3.corp.local

[etcd]
master1.corp.local
master2.corp.local
master3.corp.local

[nodes]
master1.corp.local ansible_ssh_host=10.0.0.11 openshift_node_group_name='node-config-master'
master2.corp.local ansible_ssh_host=10.0.0.12 openshift_node_group_name='node-config-master'
master3.corp.local ansible_ssh_host=10.0.0.13 openshift_node_group_name='node-config-master'
node1.corp.local ansible_ssh_host=10.0.0.21 openshift_node_group_name='node-config-infra'
node2.corp.local ansible_ssh_host=10.0.0.22 openshift_node_group_name='node-config-infra'
node3.corp.local ansible_ssh_host=10.0.0.23 openshift_node_group_name='node-config-compute'
node4.corp.local ansible_ssh_host=10.0.0.24 openshift_node_group_name='node-config-compute'

Run on all node VMs:

yum -y install wget git net-tools bind-utils yum-utils iptables-services bridge-utils bash-completion kexec-tools sos psacct
yum -y install https://dl.fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpm
sed -i -e "s/^enabled=1/enabled=0/" /etc/yum.repos.d/epel.repo

Run on the master node:

ssh-keygen
ssh-copy-id -i ~/.ssh/id_rsa.pub master1
ssh-copy-id -i ~/.ssh/id_rsa.pub master2
ssh-copy-id -i ~/.ssh/id_rsa.pub master3
ssh-copy-id -i ~/.ssh/id_rsa.pub node1
ssh-copy-id -i ~/.ssh/id_rsa.pub node2
ssh-copy-id -i ~/.ssh/id_rsa.pub node3
ssh-copy-id -i ~/.ssh/id_rsa.pub node4

yum -y --enablerepo=epel install ansible pyOpenSSL
git clone https://github.com/openshift/openshift-ansible
cd openshift-ansible/
git checkout release-3.11
cd
ansible-playbook -i hosts openshift-ansible/playbooks/prerequisites.yml

Once the above playbook finishes, do the following on all nodes:

# Assuming NCP Container image is downloaded locally on all nodes
docker load -i nsx-ncp-rhel-xxx.tar
# Get the image name 
docker images
docker image tag registry.local/xxxxx/nsx-ncp-rhel nsx-ncp

The last step is to deploy the Openshift cluster:

ansible-playbook -i hosts openshift-ansible/playbooks/deploy_cluster.yml

This step will take around 40 minutes, depending on the options, the number of hosts, and other factors.

Once it is done you can validate that the NCP and nsx-node-agent PODs are running:

oc get pods -o wide -n nsx-system

Check NSX-T routing section:

Check NSX-T Switching section:

You can follow the entire installation process in the following 45-minute video:

DNS records

If you didn’t disable any of the default infrastructure services, you will have the following default Openshift routes:

docker-registry-default.demo.corp.local
registry-console-default.demo.corp.local
grafana-openshift-monitoring.demo.corp.local
prometheus-k8s-openshift-monitoring.demo.corp.local
alertmanager-main-openshift-monitoring.demo.corp.local
console.demo.corp.local
apiserver-kube-service-catalog.demo.corp.local
asb-1338-openshift-ansible-service-broker.demo.corp.local

You need to add DNS A records for those routes pointing to the IP addresses of your infra nodes (in my example 10.0.0.21 and 10.0.0.22). You also need a wildcard DNS record for your domain pointing to the NSX-T Load Balancer virtual server IP. In my example it is *.demo.corp.local.
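For example, in BIND zone-file syntax the records could look like the sketch below. The load balancer VIP (10.0.0.100 here) is a placeholder; use the virtual server IP that NCP created for your cluster.

; Infra routes point at the infra nodes running the HAProxy router
docker-registry-default.demo.corp.local.     IN  A  10.0.0.21
docker-registry-default.demo.corp.local.     IN  A  10.0.0.22
console.demo.corp.local.                     IN  A  10.0.0.21
console.demo.corp.local.                     IN  A  10.0.0.22
; ...repeat for the remaining infra routes listed above...
; Application routes resolve through the wildcard to the NSX-T load balancer
*.demo.corp.local.                           IN  A  10.0.0.100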

 

Deploy a test Application

 

The video below shows deploying a test application and gives an overview of how NSX-T provides networking, security, and visibility in an Openshift environment.

Additional Resources:

NCP release Notes: https://docs.vmware.com/en/VMware-NSX-T-Data-Center/2.3.2/rn/VMware-NSX-T-Data-Center-232-Release-Notes.html

Administering NCP: https://docs.vmware.com/en/VMware-NSX-T-Data-Center/2.3/com.vmware.nsxt.ncp_kubernetes.doc/GUID-7D35C9FD-813B-43C0-ADA8-C5C82596E1C9.html

VMware NSX-T Documentation: https://docs.vmware.com/en/VMware-NSX-T/index.html

All VMware Documentation: https://docs.vmware.com

VMware NSX YouTube Channel: https://www.youtube.com/VMwareNSX


IHS Markit Talks Pioneering Private Cloud, Containers, and VMware Cloud on AWS


Global information, analytics, and solutions company IHS Markit provides data-driven insight for its government and corporate customers. Using VMware vRealize Automation, the company has already rolled out a private cloud that helped developers cut a 6-month infrastructure provisioning process down to one week. They’ve also been using VMware NSX-T Data Center to secure their workloads at a granular level with micro-segmentation, and to fundamentally re-think network design.

At VMworld 2018 in Las Vegas, Andrew Hrycaj, Principal Network Engineer for IHS Markit, spoke about the company’s plans for software-defined networking and hybrid cloud. IHS Markit has deployed VMware NSX Data Center, including NSX-T Data Center and VMware NSX Data Center for vSphere, into five data centers. “The NSX Data Center advantage for us is the fact that it can interact with so many different environments; from containers, to the public cloud environment with AWS and Azure, to on-prem,” said Hrycaj. “We’ll be able to utilize micro-segmentation across all of them with a common security footprint. If NSX-T goes to all those different environments, we can apply the same security policy across all those different platforms. It makes operations’ life easier because the transparency is there.”

 

Innovating with IaaS

The company continues to use the VMware vRealize Suite, including VMware vRealize Automation, heavily. IHS Markit has also installed VMware vRealize Network Insight and the VMware Identity Manager component of vRealize Automation, to help manage its NSX implementation. The goal is for the network operations team to use vRealize Network Insight to troubleshoot the environment, and to discover connections in the NSX Data Center environment when developers are writing applications in the private cloud. Hrycaj noted that the vRealize components help his team spin up VMs, and implement tagging and firewalls. “That way we can deploy product loads really quickly with a pervasive security strategy across the board.”

A major reason for Hrycaj attending VMworld was to give a talk about how IHS Markit developed automation strategies to provision VMs, and implemented firewall-as-a-service with vRealize Automation and VMware vRealize Orchestrator. It had taken up to 200 days for IHS Markit developers to provision resources manually. With these VMware-driven IaaS solutions, that time was cut to two weeks.

 

Into the Cloud with VMware Cloud on AWS

IHS Markit is beginning a project to move a lot of infrastructure to AWS using VMware Cloud on AWS. Hrycaj noted that one of the reasons they undertook their private cloud project was because they wanted some of the as-a-service “luxuries” that developers already enjoyed in the cloud. “In the beginning, we were trying to bring those developers back home, so to speak. But now, we’re embracing both worlds: our on-prem solution and VMware Cloud on AWS.” Using VMware Cloud on AWS will also allow Hrycaj and his team to extend their security footprint, including network and policy templates developed with NSX and Palo Alto Networks, into the cloud through a single control plane with no re-work.

 

What’s Next?

So what’s Hrycaj’s vision for what IHS Markit will look like at VMworld 2019? “We went from last year just learning about NSX-T to now having five fully-fledged data centers running all these different vRealize Automation and vRealize Operations integrations,” he said. “Next year, I think we’ll be heavily into VMware Cloud on AWS, and really big container champions.” vRealize Automation provides container management capabilities, including new integrations with Pivotal Container Service (PKS) for Kubernetes cluster management use cases. IHS Markit developers plan to use containers in the company’s private cloud.

Hrycaj continued, “We’re really interested in how VMware is stepping up container integration with NSX-T. We plan to harness the power of NSX-T to apply the same security strategies to our containers and multi-cloud environment alike. To see NSX-T grow at such an accelerated rate, and to think of where it’s going to be 12 months from now is really, really exciting.”

As IHS Markit has innovated with a new private cloud, “VMware has been with us step by step,” said Hrycaj. “That helps us develop our own product. And because we have such tight integration, we have the confidence to move into new products that VMware may be testing out in the market. We know that they’ll be there with us and not abandon us if things go wrong, and stay with us when things go right. So, we’re willing to go into a brave new world because we have a trusted partner.”

 

Learn more about how IHS Markit has been successful with automation and cloud services.


Re-Introducing VMware AppDefense, Part I – Application Security in Virtualized and Cloud Environments


This blog will be part of a series where we start off with a basic re-introduction of VMware AppDefense and then progressively get into integrations, best practices, mitigating attacks and anomaly detection with vSphere Platinum, vRealize Log Insight, AppDefense and NSX Data Center. Before we get into the meat of things, let’s level-set on a few core principles of what VMware believes to be appropriate cyber hygiene. The full white paper can be viewed here.

  1. Follow a least privileged model
    • The principle of least privilege is the idea that any user, program, or process should have only the bare minimum privileges necessary to perform its function. For example, a user account created for pulling records from a database doesn’t need admin rights, while a programmer whose main function is updating lines of legacy code doesn’t need access to financial records. The principle of least privilege can also be referred to as the principle of minimal privilege (POMP) or the principle of least authority (POLA). Following the principle of least privilege is considered a best practice in information security.
    • The least privilege model works by allowing only enough access to perform the required job. In an IT environment, adhering to the principle of least privilege reduces the risk of attackers gaining access to critical systems or sensitive data by compromising a low-level user account, device, or application. Implementing this methodology helps contain compromises to their area of origin, stopping them from spreading to the system at large.
  2. Zero Trust Micro-segmentation of applications and network
    • The Zero Trust model of “never trust, always verify” is designed to address multiple threats within the network and application tiers by leveraging micro-segmentation, granular perimeter enforcement, and application enforcement based on user, data, behavior, and location (what VMware refers to as known good). Lateral movement, for example, describes the techniques that attackers use to move through a network in search of valuable assets and data within the datacenter. With micro-segmentation, businesses can define sub-perimeters within their organization’s networks, using a specific set of rules for each that leverages context around the user, application traffic direction, and so on. These rules are designed to identify the spread of an attack within an organization and stop unrestricted lateral movement, command-and-control communications, and data exfiltration throughout the network and applications. Remember, the point of infiltration of an attack is often not the target location, which is why stopping lateral movement is so important. For example, if an attacker infiltrates an endpoint, they may still need to move laterally throughout the environment to reach the data center where the targeted content resides; or, if credential phishing succeeds, those credentials still need to be authenticated against the database to reach the location of the data the attacker is seeking to extract.

As we are aware, security is on everyone’s mind nowadays, and two of the biggest goals are least privilege and zero trust, where people, processes, and software only get the privileges they need to do their job. AppDefense, part of vSphere Platinum, is part of VMware’s evolving intrinsic security story.

AppDefense is a cloud-based security product that provides foundational security for data center endpoints and applications.

The core of AppDefense focuses on protecting applications running in virtualized or cloud environments. It creates a least privileged environment on the compute stack. It uses the hypervisor to introspect guest VM application behavior and enforces the model of least privilege. It watches the processes running and makes sure they continue to run as they were initially intended to. AppDefense is part of our least privilege / zero trust story, focusing on compute isolation and segmentation. The combination of AppDefense, NSX, and our newest vSphere edition (vSphere Platinum) assists in providing visibility into process and network behavior, limiting lateral movement by ensuring known good, enforcing micro-segmentation rules so the application talks only to what it needs to talk to, and reducing attacker dwell time. This provides application visibility and isolation for the VI admin, the security operations center, and the security architect.

In essence, AppDefense provides four basic functions:

  1. Application control: AppDefense implements application control by first assigning virtual machines to a scope and a service. A scope is the representation of an application and is made up of multiple services. A service represents an application tier. All VMs within a service are expected to be homogeneous and have the exact same allowed behaviors/rules. Scopes and their services are the foundational components that establish the intended state (allowed behaviors) of an application or virtual machine (VM) in the data center. Scopes can also be integrated and dynamically created from automation tool integrations such as Puppet, vRealize Automation, etc.
  2. Process analysis: Once AppDefense has established the known state and allowed behaviors for the application, it verifies the learned behavior as ‘known good’ with VMware’s Application Cloud verification engine and cloud-based reputation feeds.
  3. Anomaly detection: After creating the intended state, AppDefense monitors for deviations to that state, alerting and preventing anomalies that could be attacks on the environment. Examples include unknown process execution, unknown command line arguments, unknown network connections or open ports.
  4. Response and remediation: When anomalous events are seen and the application’s behavior deviates from the known state, AppDefense responds to potential threats by triggering a response. The response is configurable, ranging from a simple alert, to isolating the VM, to shutting it down completely. AppDefense includes an orchestration capability that can remediate threats in real time with no administrator oversight. Built into the infrastructure (intrinsic), AppDefense ensures that applications behave only as intended, monitoring for and preventing unknown behavior that could potentially be an attack on the environment. It uses machine learning and reputation data to discern what is normal and good, and it can then take a remediation action or alert on unknown and potentially malicious behavior in ways traditional security technologies such as antivirus never could. It leverages unique advantages of the virtualization layer to enforce least privilege security, application control, and visibility, while providing system integrity validation and dynamic response capability through the infrastructure. Unlike traditional application control solutions, AppDefense analyzes every deviation so that it only sends alerts that matter to the security team and/or security operations center. AppDefense is simple and easy to deploy within your infrastructure, leveraging existing lifecycle management workflows and resulting in a true agentless experience. It flips the protection model around: instead of chasing an always-incomplete list of unknown or malicious behavior, AppDefense allows the intended behavior we want and takes a remediation action on everything else.

In addition, combining AppDefense with VMware NSX is even more powerful. NSX allows organizations to implement micro-segmentation in the environment at the hypervisor level, so that only the network resources that need access to certain resources will gain it. AppDefense not only provides deep visibility into application and network behavior at the guest OS, it also works in conjunction with NSX to generate firewall rules and to quarantine and block unknown or malicious behavior at the network level (adaptive). This is important for preventing attacks and intrusions. Together they not only ensure that there is a tight and accurate set of firewall rules around your systems, they also limit an attacker’s ability to move laterally in your organization to attack other systems once they’ve established a foothold on one system.

AppDefense blocks an attacker’s ability to establish a foothold, and NSX ensures that the affected server is isolated correctly. Ultimately VMware AppDefense builds on the security foundation of VMware NSX.

Once AppDefense is in protect mode, it proactively takes the security posture of micro-segmentation one step further and provides the functionality to secure the endpoint if any unknown behavior makes it through the network defenses. AppDefense automatically triggers responses from a configurable set of automatic remediation policies. The automatic responses can include:

  • Blocking process and network communication
  • Snapshotting a VM for forensic analysis
  • Suspending or shutting down a VM if malicious software is detected
  • Alerting of any anomalous behavior / deviation from known behavior

The latest release of AppDefense introduces a completely new plugin for vSphere 6.7u1 (Platinum).

Plugin Dashboard
The Plugin Dashboard delivers aggregated security metrics, visibility, and health statistics for applications and workloads running on vSphere. Users can drill into individual behaviors and reputation scores, leading to deeper visibility in the VM Monitor page. This high-level summary provides focused, at-a-glance statistics and a starting place for additional discovery.

Lifecycle Management
AppDefense announces one-click, integrated installation and upgrade workflows for AppDefense directly within vCenter. Users can now get a full report of their protection status, deploy AppDefense modules into entire clusters with a single click, and schedule regular upgrades, all while leveraging familiar workflows. Managing AppDefense components in this way greatly increases ease of operation for IT admins.

VM Monitoring
This release delivers a new virtual machine monitor tab that provides VM-specific behavior monitoring for visibility, security assessment, and troubleshooting directly within vCenter. Integrating this capability in vCenter enables IT admins to play pivotal roles in the protection of their organizations’ apps and data.

Connectivity Modes
The AppDefense Plugin can operate in three different connectivity modes: Online, Offline, and SaaS. Offline mode requires no internet connectivity and provides a basic visibility-only view of your environment. Online mode adds security feeds from the AppDefense Service. SaaS mode (recommended) provides the full AppDefense feature set. Select the connectivity mode that meets your compliance requirements. For more information, go to AppDefense Appliance Connectivity Modes.

Scope Level Dashboard
With this release, AppDefense has introduced a newly designed scope level dashboard, providing a real-time snapshot of your application scopes. The visual information lets users see the protection status of their applications, quickly understand whether there are any behaviors that need addressing, and get an overview of the security validation checks that AppDefense has performed. It organizes application-specific summaries into the following four sections:

  • Process burn down chart: The process summary info in a graphical representation
  • Process reputation: Summary of the process reputation information from various sources
  • Behavior risk analysis: Behavior risk analysis summary based on machine learning
  • Integrity check status: Integrity status summary to show the overall health of the org

Adaptive Allowed Behavior
AppDefense has added the ability to adjust allowed behavior automatically by adapting to security events that have been classified as normal by the AppDefense Verification Engine. This ability to automatically de-classify alerts and dynamically adjust the allowed behavior tremendously reduces ongoing operational tasks and improves operational efficiency.

Monitoring Events
AppDefense adds Monitoring Event support to distinguish observed deviations from the malicious behaviors that are categorized as critical alerts. Monitoring Events are classified by the AppDefense Verification Engine into three severities: Serious, Minor, and Info. Separating Monitoring Events further increases operational efficiency by allowing customers to focus on the alerts that matter most.

Usage Counters Improvement
This release also improves usability by adding the following usage counters:

  • Allowed Behavior count for each service
  • Connection Count for each process

With these usage counters, users can easily evaluate the health of the application and have a glance at how many allowed behaviors and connections are protected and monitored by AppDefense.

As you can see, we have made major advancements as well as vast improvements and integrations within the last year, and there is much more to come. In the next post we will be getting into visibility, setting up scopes and services, as well as basic remediations. Stay tuned…

Want to see Adaptive Micro-segmentation in action? We showcased how adaptive micro-segmentation stops a live attack at VMworld 2018 in Las Vegas, NV. Make sure you check out our Security Showcase session, Transforming Security in a Cloud and Mobile World (SEC3730KU). Also, check out Introduction to VMware AppDefense (SAI3217BU) and Introduction to NSX Data Center for Security (SAI2026BU) for a crash course in the technology behind adaptive micro-segmentation.

To read more about how customers are doing adaptive micro-segmentation today, check out this case study.

References:
https://www.csoonline.com/article/3247848/network-security/what-is-zero-trust-a-model-for-more-effective-security.html
https://blogs.vmware.com/vsphere/2018/08/introducing-vsphere-platinum-and-vsphere-6-7-update-1.html


Introducing NSX-T 2.4 – A Landmark Release in the History of NSX


In February 2017, we introduced VMware NSX-T Data Center to the world. For years, VMware NSX for vSphere had been spearheading a network transformation journey with a software-defined, application-first approach. In the meantime, as the application landscape was changing with the arrival of public clouds and containers, NSX-T was being designed to address the evolving needs of organizations to support cloud-native applications, bare metal workloads, multi-hypervisor environments, public clouds, and now, even multiple clouds.

Today, we are excited to announce an important milestone in this journey – the NSX-T 2.4 release, expected to post tomorrow. This fifth release of NSX-T delivers advancements in networking, security, automation, and operational simplicity for everyone involved – from IT admins to DevOps-style teams to developers. Today, NSX-T has emerged as the clear choice for customers embracing cloud-native application development, expanding use of public cloud, and mandating automation to drive agility.

Let’s take a look at some of the new features in NSX-T 2.4:

 

Operational Simplicity: Easy to Install, Configure, Operate

What if delivering new networks and network services were as easy as spinning up a workload in AWS? In keeping with the ethos that networking can be made easier, over the past few releases we have focused on improving the user experience at every level – UI, dashboards, APIs, systems. In today’s app-centric, multi-cloud world, networking and security aren’t solely the responsibility of network architects and admins; control, visibility, and automation are shared across DevOps teams. VMware NSX serves this need by delivering a centralized and consistent operational tool for everyone to work together. This release brings a number of operational enhancements that make NSX-T easy to install, upgrade, and operate consistently, accelerating Day 0 installation and Day 1 provisioning from days to minutes, and significantly simplifying Day 2 operations for administrators.

Day 0: Installation

The 2.4 release introduces a new converged NSX Manager appliance with 3-node clustering support, which merges policy, management, and central control services on a cluster of nodes, bringing high availability and a scale-out architecture to the management plane. In addition, due to the convergence of management and control plane nodes, fewer VMs are needed, which means less management overhead. NSX-T now also includes installation enhancements such as Ansible modules that enable automation of installation workflows.

Day 1: Configuration

Simplified UI – The latest NSX-T release brings a radically simplified user interface (UI) that requires just the bare minimum user input, offering strong default values with prescriptive guidance for ease of use. This means fewer clicks and page hops are required to complete configuration tasks.

Day 2: Ongoing Operations

UI Enhancements – Intelligent search provides easy access to information, anticipating user intent through type-ahead auto-completions and suggestions on common search phrases.

NSX-T enables customers to provision new networks and services with a single API call or a few clicks in a new simplified UI, making NSX the industry’s simplest way to manage an application-centric, software-defined network.

Simplified UI – Overview Tab
Networking Configuration Overview – Centralized View

Keep watching this space as we bring you deep-dives and demos on each topic in upcoming blogs.

Infrastructure as Code: A New Declarative Policy Model

Automation-driven, fast, and agile IT environments are becoming a necessity for every enterprise and service provider in the world. By implementing networking completely in software, NSX makes networks programmable, agile, and dynamic, squarely addressing the many challenges of physical network automation. Consuming NSX through configuration frameworks like Ansible or scripting languages such as Python or PowerShell goes a step beyond simple usage of the GUI and allows for greater agility, consistency, and scale.

Taking a page from iterative automation in cloud-native environments using NSX-T, we are introducing in this release a new declarative policy model to enable a one-step approach to configuring networking and security for applications. It drastically simplifies network automation by allowing users to specify what the connectivity and security needs of applications are, as opposed to how networking and security should be configured step by step. Unlike the imperative model, where detailed tasks need to be explicitly called out, this new way of provisioning infrastructure gives operators a one-shot, application-focused approach to automating configuration of the network.

This approach eliminates the need for a tedious set of sequential commands to configure networking and security services, which is time-consuming and error-prone. The declarative interface takes in, in simple user-defined terms, the connectivity and security requirements for the application environment, specified in a JSON file. These policies are platform-agnostic and easily replicable, simplifying operations and allowing IT teams to scale to new levels.
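As a rough sketch of the declarative style (the gateway name, segment name, and subnet are invented, not taken from the release documentation), a single call to the policy API can describe a gateway and a connected segment in one JSON body:

# Hypothetical sketch: one declarative PATCH to /policy/api/v1/infra creates
# a Tier-1 gateway and a segment attached to it in a single step.
curl -k -u admin:'<password>' -X PATCH \
  https://nsx-mgr.corp.local/policy/api/v1/infra \
  -H 'Content-Type: application/json' \
  -d '{
    "resource_type": "Infra",
    "children": [
      {
        "resource_type": "ChildTier1",
        "Tier1": {"resource_type": "Tier1", "id": "app-gw"}
      },
      {
        "resource_type": "ChildSegment",
        "Segment": {
          "resource_type": "Segment",
          "id": "web-segment",
          "connectivity_path": "/infra/tier-1s/app-gw",
          "subnets": [{"gateway_address": "172.16.10.1/24"}]
        }
      }
    ]
  }'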

Check out the recent Networking Field Day demo video for a closer look at the UI enhancements and advanced automation capabilities discussed above.

New Declarative Policy Vs Imperative Model Based Automation

Expanding Security Features, Delivered Intrinsically

Today’s aggressive threats require proactive, modern defense approaches. Micro-segmentation with NSX not only delivers on this promise but takes it one step further with seamless operations and optimized user experience. Not surprisingly, thousands of customers across the globe use NSX for micro-segmentation to protect their data centers and cloud environments from sophisticated attacks.

With every release, NSX-T continues to bolster its ability to deliver consistent, pervasive connectivity and intrinsic security for applications and data across any environment to drastically shrink the application attack surface and reduce business risk. NSX-T delivers security to diverse endpoints such as VMs, containers, and bare metal, as well as to various cloud platforms.

NSX-T 2.4 introduces support for advanced security capabilities such as Layer 7 application context-based firewalling, identity-based firewalling, FQDN/URL whitelisting, guest introspection, and E-W service insertion. The FQDN/URL whitelisting feature applies to E-W traffic in the distributed firewall and enables customers to allow/whitelist specific traffic going from a VM to a specific FQDN or URL. Benefits of this feature include support for communication to a different system/application in a multi-site environment, support for applications that use native cloud services, and support for URL domains on the internet.

NSX-T 2.4 also brings a new level of analytics and visualization with a new simplified management dashboard and UI for security, as well as support for Splunk app and VMware vRealize Log Insight.

FQDN/URL Whitelisting Enforced at DFW Level
Layer 7 Application Context-Aware Firewalling

Watch the latest Networking Field Day demo video for an in-depth view of the new security capabilities in NSX-T 2.4. And keep an eye out on the Security section of the NSX blog over the next few weeks for technical deep-dives on the security capabilities supported in NSX-T 2.4.

Higher Levels of Scale, Resiliency, Performance

NSX-T was designed from the ground-up to enable a modular, resilient, and distributed architecture that is flexible and scalable to the demands of cloud-scale and multi-cloud environments.

NSX-T 2.4 now supports greater scale, resiliency and performance, with near line-rate speed using a DPDK-based hardware-accelerated data plane.

As global IPv4 address space shortage continues with the explosion of IoT and mobile devices and governments mandate the transition to IPv6, the support for IPv6 in NSX-T 2.4 addresses a critical global problem and a key requirement of cloud-scale networks.

The NSX-T 2.4 release brings support for a converged NSX manager appliance design with 3-node clustering support that merges policy, management and central control services on a cluster of nodes. This brings the benefits of high availability and scale to the management plane.

NSX-T can scale to hundreds of thousands of routes, over a thousand hosts per NSX domain, and enables high-scale multi-tenancy. The higher levels of scale, resiliency, and performance positions NSX to deliver even greater capabilities in a multi-cloud world.

Customer Momentum, Partnerships, and New Opportunities

2018 was an outstanding year for NSX-T in terms of customer momentum. For a peek into the wide breadth of NSX-T customers and VMworld 2018 sessions on NSX-T, check out the blog post here.

NSX-T has been a key enabler of our customers’ multi-cloud and hybrid cloud initiatives as well as cloud-native projects using Kubernetes, VMware PKS, Pivotal Application Service (PAS), and Red Hat OpenShift. It is driving value inside the data center today and expanding across datacenters and to the cloud via our VMware Cloud Provider partnerships, and to VMware Cloud on AWS and native public cloud workloads via VMware NSX Cloud. The network virtualization platform is embedded throughout the VMware portfolio, including VMware Cloud Foundation, VMware vCloud NFV, and in the future AWS Outposts and VMware Cloud Foundation for EC2.

Summary

Our relentless focus on the development cycle of NSX-T continues with the primary goal of uniting and helping secure different clouds and making it easy to use for everyone. With every release, NSX-T continues to gain new capabilities that address emerging customer use-cases, spurring new avenues for organizations to innovate and thrive in the market.

Now is the time to assess how NSX-T can help your organization transform your journey forward – from scaling and securing networks and embracing new clouds to adopting latest practices in automation and new application frameworks – NSX-T delivers on every count.

For customers using NSX for vSphere, VMware will continue to support you throughout your ongoing transformation journey: from investment protection, to continuing to support NSX for vSphere, to giving customers multiple different choices to migrate to NSX-T – stay tuned for more blogs on this topic.

Watch this space for a series of technical deep-dive blogs on some of the key capabilities supported in this release of NSX-T 2.4.

NSX has become the bridge that enables customers to unify networking and security across their private and public clouds, bringing VMware closer to fulfilling its Virtual Cloud Network vision.




Introducing IPv6 in NSX-T Data Center 2.4


With the latest release, VMware NSX-T Data Center 2.4, we announced support for IPv6. Since the onset of IPv4 address space exhaustion, IPv6 adoption has continued to increase around the world. A quick look at the Google IPv6 adoption statistics shows that IPv6 adoption is ramping up. With the advances in the IoT space and the explosion in the number of endpoints (mobile devices), this adoption will continue to grow. IPv6 increases the number of network address bits from 32 in its predecessor IPv4 to 128, providing more than enough globally unique IP addresses for global end-to-end reachability. Several government agencies mandate the use of IPv6. In addition, IPv6 provides operational simplification.

The NSX-T Data Center 2.4 release introduces dual stack support for the interfaces on a logical router (now referred to as a Gateway). You can now leverage all the goodness of distributed routing and the distributed firewall in a single-tier or multi-tiered topology. If you are wondering what dual stack is: it is the capability of a device to simultaneously originate and understand both IPv4 and IPv6 packets. In this blog, I will discuss the IPv6 features that are made generally available with NSX-T Data Center 2.4.

IPv6 Addressing

Let’s start by understanding IPv6 addressing in the NSX-T Data Center world and what kinds of IPv6 addresses are supported. NSX-T Data Center supports the following unicast IPv6 address types.

Global Unicast: Globally unique, internet-routable IPv6 addresses

Link-Local: Link-specific IPv6 addresses, used as the next hop by IPv6 routing protocols

Unique Local: Site-specific unique IPv6 addresses used for inter-site communication but not routable on the internet. Based on RFC 4193.

The following table shows a summarized view of IPv6 unicast and multicast address types on NSX-T Data Center components.

NSX-T Data Center 2.4 introduces dual stack support for the interfaces on a logical router/gateway in both single-tier and multi-tiered topologies.

The diagram on the left shows a single tiered routing topology with a Tier-0 Gateway supporting dual stack on all interfaces. The diagram on the right shows a multi-tiered routing topology with a Tier-0 Gateway and Tier-1 Gateway supporting dual stack on all interfaces. You can either assign static IPv6 addresses to the workloads or use a DHCPv6 relay supported on gateway interfaces to get dynamic IPv6 addresses from an external DHCPv6 server.

In my previous blog, I explained how connectivity is provided between Tier-1 and Tier-0 Gateways in a multi-tiered topology. Each tier-0-to-tier-1 peer connection is provided a /31 subnet within the 100.64.0.0/10 reserved address space (RFC 6598). Similarly, for IPv6, each tier-0-to-tier-1 peer connection is provided a /64 unique-local IPv6 subnet from a pool, fc5f:b8e2:ac6a::/48 by default. A user has the flexibility to change this range and use another subnet if desired.
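To make the allocation scheme concrete, here is a minimal Python sketch (standard library only) of how per-link transit subnets could be carved from the two pools described above. The Tier-1 gateway names are hypothetical, and NSX-T’s actual allocator is internal to the management plane; this only illustrates the addressing math:

```python
import ipaddress

# Reserved pools described above: RFC 6598 space for the IPv4 /31 links and
# the default unique-local /48 from which NSX-T carves /64 links.
V4_POOL = ipaddress.ip_network("100.64.0.0/10")
V6_POOL = ipaddress.ip_network("fc5f:b8e2:ac6a::/48")

# Lazily yield one transit subnet per Tier-0-to-Tier-1 connection.
v4_links = V4_POOL.subnets(new_prefix=31)
v6_links = V6_POOL.subnets(new_prefix=64)

for tier1 in ("tier1-app", "tier1-web", "tier1-db"):   # hypothetical names
    v4, v6 = next(v4_links), next(v6_links)
    print(f"{tier1}: IPv4 link {v4}, IPv6 link {v6}")
```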

IPv6 Routing

A global flag is provided to enable/disable IPv6 forwarding. It is disabled by default for security reasons and to ensure that an upgrade from the 2.3 to the 2.4 release doesn’t enable link-local IPv6 addresses on all interfaces. The following screenshot shows how to enable IPv6 forwarding.
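For automation-minded readers, the same flag can also be flipped through the NSX-T REST API. The sketch below is hypothetical: the endpoint path, field name, and value strings are assumptions based on my reading of the NSX-T 2.4 API and should be verified against the API guide for your version:

```python
import requests

NSX_MANAGER = "https://nsx-mgr.example.com"    # hypothetical manager address
AUTH = ("admin", "VMware1!")                   # lab credentials, illustration only

# Read-modify-write the routing global config; endpoint and field names
# are assumptions -- confirm them in the NSX-T API documentation.
url = f"{NSX_MANAGER}/api/v1/global-configs/RoutingGlobalConfig"
cfg = requests.get(url, auth=AUTH, verify=False).json()
cfg["l3_forwarding_mode"] = "IPV4_AND_IPV6"    # assumed default: IPV4_ONLY
requests.put(url, json=cfg, auth=AUTH, verify=False).raise_for_status()
```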

Let’s start with E-W routing. In my previous blog, I discussed how NSX-T Data Center provides optimal distributed routing for E-W traffic. If your workloads are on the same hypervisor but in different subnets, traffic doesn’t have to leave the hypervisor to get routed. This now applies to IPv6 workloads as well, whether in a single-tier or multi-tiered topology.

Let’s validate E-W distributed routing by running traceflow between two IPv6 workloads 2001::10/64 and 2002::20/64. Both workloads are logically connected to a Tier-1 Gateway and hosted on the same hypervisor.

Notice that the packet doesn’t leave the hypervisor to get routed. Moving on, let’s look at the IPv6 routing feature set that we have introduced in this release.

The Tier-0 Gateway supports the following IPv6 routing features:

  • Static routes with IPv6 next hop
  • MP-eBGP with IPv4 and IPv6 address families
  • Multi-hop eBGP
  • iBGP
  • ECMP support with static routes, eBGP, and iBGP
  • Outbound and inbound route influencing using Weight, Local Preference, AS Path prepend, and MED
  • IPv6 route redistribution
  • IPv6 route aggregation
  • IPv6 prefix lists and route maps

The Tier-1 Gateway supports the following IPv6 routing features:

  • Static routes with IPv6 next hop

Now, let’s understand how IPv6 routing works in a multi-tiered topology. IPv6 routing between Tier-1 and Tier-0 Gateways is auto-plumbed, just like IPv4 routing. Configuring routing between a Tier-1 and a Tier-0 Gateway is a one-click or one-API-call configuration, and the same is true for advertising routes from Tier-1 to Tier-0. The following diagram shows what happens in the background:

When a Tier-1 Gateway is connected to a Tier-0 Gateway, the management plane configures a default route (::/0) on the Tier-1 Gateway with the Tier-0 Gateway’s router-link IP as the next hop (fc5f:b8e2:ac6a:5000::1/64 in the following topology). To provide reachability to subnets connected to the Tier-1 Gateway, the management plane (MP) configures routes on the Tier-0 Gateway for all the LIFs connected to the Tier-1 Gateway, with the Tier-1 Gateway’s router-link IP as the next hop (fc5f:b8e2:ac6a:5000::2/64 in the following topology). 2001::/64 and 2002::/64 are seen as “Tier-1 Connected” routes on the Tier-0. The Tier-0 Gateway can then redistribute and advertise these routes over its BGP peering toward the physical router.
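As a minimal illustration (addresses taken from the topology above, structures simplified for readability), the auto-plumbed entries boil down to:

```python
# Auto-plumbed routing entries for the topology above (illustrative only).
tier1_routing_table = {
    "::/0": "fc5f:b8e2:ac6a:5000::1",       # default route toward Tier-0 router-link IP
}
tier0_routing_table = {
    "2001::/64": "fc5f:b8e2:ac6a:5000::2",  # Tier-1 connected subnets; next hop is
    "2002::/64": "fc5f:b8e2:ac6a:5000::2",  # the Tier-1 router-link IP
}
```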

IPv6 Security

Nearly all organizations that use NSX leverage micro-segmentation. Users can now enforce L2-L4 stateful distributed firewall (DFW) rules for IPv6 VM workloads. These firewall rules can use IPv6 addresses, IPv6 CIDRs, IP Sets that include both IPv4 and IPv6 addresses, and NSGroups that include logical ports with both IPv4 and IPv6 addresses.
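As a rough sketch of what such a mixed IPv4/IPv6 rule could look like when pushed through the NSX-T Policy API, consider the following. The endpoint path, payload shape, and object names are simplified assumptions modeled on the Policy API, not a verbatim payload; consult the API guide for your version before using it:

```python
import requests

NSX_MANAGER = "https://nsx-mgr.example.com"    # hypothetical manager address
AUTH = ("admin", "VMware1!")                   # lab credentials, illustration only

# A DFW rule mixing IPv4 and IPv6 sources (field names are assumptions
# modeled on the NSX-T Policy API; verify them for your version).
rule = {
    "action": "ALLOW",
    "source_groups": ["10.10.10.0/24", "2001::/64"],
    "destination_groups": ["2002::20/128"],
    "services": ["/infra/services/HTTPS"],
    "scope": ["ANY"],
    "sequence_number": 10,
}
url = (f"{NSX_MANAGER}/policy/api/v1/infra/domains/default"
       "/security-policies/app-policy/rules/allow-web-v6")
requests.patch(url, json=rule, auth=AUTH, verify=False).raise_for_status()
```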

Along with the distributed firewall (DFW), we also support the Edge firewall for IPv6 VM workloads connected to both Tier-0 and Tier-1 Gateways. The Edge firewall is a perimeter firewall that provides inter-tenant/inter-zone firewalling capability and can be used to build PCI zones.

IPv6 Switch Security

Along with DFW and Edge firewall features for IPv6, we also support DHCPv6 guard and RA guard.

The DHCPv6 server-block feature prevents unauthorized or rogue DHCPv6 servers from sending DHCPv6 replies to a VM; this is done by filtering UDP source port 547.

The DHCPv6 client-block feature prevents a VM from sending out DHCPv6 messages that are typically sent by a DHCPv6 client; this is done by filtering UDP source port 546.
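A toy Python model of the guard logic, using the two well-known DHCPv6 ports, might look like this; the packet type and profile flags are invented for illustration and are not NSX internals:

```python
from dataclasses import dataclass

@dataclass
class Udp6Packet:
    src_port: int
    dst_port: int

DHCPV6_SERVER_PORT = 547   # replies from DHCPv6 servers are sourced here
DHCPV6_CLIENT_PORT = 546   # client-originated DHCPv6 messages are sourced here

def dhcpv6_guard(pkt: Udp6Packet, server_block: bool, client_block: bool) -> bool:
    """Return True if the guard profile should drop the packet."""
    if server_block and pkt.src_port == DHCPV6_SERVER_PORT:
        return True   # rogue DHCPv6 server reply toward a VM
    if client_block and pkt.src_port == DHCPV6_CLIENT_PORT:
        return True   # VM acting as a DHCPv6 client when it shouldn't
    return False

# e.g. a reply from a rogue server is dropped when server-block is enabled:
assert dhcpv6_guard(Udp6Packet(547, 546), server_block=True, client_block=False)
```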

The RA guard feature protects against rogue Router Advertisements (RAs) generated by unauthorized routers/devices on the network.

IPv6 Operations

With NSX-T Data Center 2.4, we have enhanced existing operational tools to support IPv6. Along with enhancing the CLI and UI to show IPv6 counters/statistics, we have added IPv6 support to the following operational tools:

  • Ping, Traceroute, Traceflow
  • Port Mirroring

– IPv6 packet support to packet mirroring on all transport nodes
– Destination address in ERSPAN can be an IPv6 address

  • IPFIX

– IPv6 address in flow info
– Collector IP can now be IPv6 address

  • Packet capture on all transport nodes (KVM, ESXi, and Edge) with IPv6 filters

Summary

VMware NSX-T Data Center 2.4 introduces IPv6 support along with a plethora of networking and security features. This release introduces distributed routing for E-W IPv6 traffic and centralized routing for N-S IPv6 traffic, with static routing or BGP and all the inbound and outbound route influencing knobs. Users can now leverage NSX-T’s unique distributed firewall (DFW) functionality or the Edge firewall functionality, available on both Tier-0 and Tier-1 Gateways, for IPv6 VM workloads.

Here are some resources to explore NSX-T Data Center.

NSX-T Data Center Reference Design Guide
Beginner or Advanced NSX Hands-On-Lab (HOL)

The post Introducing IPv6 in NSX-T Data Center 2.4 appeared first on Network Virtualization.

Meet the VMware Service-defined Firewall: A new approach to firewalling


VMware has had front-row seats to the digital transformation that has touched virtually every organization. We’ve been there for (and helped drive!) the journey from monolithic applications hosted on a single server, to distributed apps running in VMs, to further decentralization in the form of cloud-native apps composed of microservices. Now, we’re watching the proliferation of public clouds, the up-and-coming space of serverless, and the adoption of functions as a service as ways to build and deploy applications faster than ever.

 

It’s this vantage point that also gives us clear line of sight to one of the biggest cyber security challenges that modern enterprises face: as their applications become more distributed, an organization’s attack surface significantly increases. Despite all of the advancements and innovation in the way applications are built, we have not seen the same rate of progress with respect to the way applications are secured. Adopting a zero-trust network security model in an enterprise environment remains incredibly hard to achieve. How do you know what security policies to create? How do you enforce those policies consistently across on-premises physical and virtual environments, let alone the public cloud? How do you enforce them across different types of workloads (bare metal servers, VMs, containers)? How do you maintain security policies as developers continually make changes to their apps?

 

For the better part of a decade, VMware has helped customers reduce the attack surface of their environments. In 2013, when VMware NSX Data Center was launched, micro-segmentation immediately became one of the most valuable and deployed use cases for the product. Customers were able to define network security policies from a single location and enforce those policies in a distributed manner, entirely in software and without the need for specialized hardware or agents.

 

Since then, we’ve learned that the more flexibility we can give customers in the definition and enforcement of security policies, the better they can reach the level of segmentation they desire in their environment. As a result, we’ve introduced features like identity-based firewalling, and the ability to create policies based on workload-level attributes. We’ve also moved up the stack from basic Layer 4 port blocking to do stateful Layer 7 enforcement. And we’ve introduced the ability to automatically deploy and dynamically adapt policies based on continual changes to applications.

 

All of this brings us to our announcement. Today, we are launching the VMware Service-defined Firewall. The VMware Service-defined Firewall is a new approach to firewalling, different from that of traditional perimeter firewalls, because it solves a different problem than perimeter firewalls do. The VMware Service-defined Firewall reduces the attack surface inside the network perimeter. Unlike perimeter firewalls that must filter traffic from an unlimited number of unknown hosts, the VMware Service-defined Firewall has the advantage of deep visibility into the hosts and services that generate network traffic. The solution uses this visibility to determine the expected – or “known good” – behavior of applications and verifies that this behavior is, in fact, known good by analyzing it with the Application Verification Cloud. Finally, the VMware Service-defined Firewall automatically generates the necessary security policies to consistently enforce the application’s known good behavior across heterogeneous workloads and both private and public clouds.

VMware has been working on solving the problem of reducing attack surface inside the network perimeter for the better part of a decade.

 

This launch marks a turning point in the way that internal network security will be viewed by the industry moving forward. Ultimately, the Service-defined Firewall will provide the answer for securing environments comprised of applications that span the timeline of technology, from mainframes to microservices to whatever comes next.

 

To learn more about the capabilities of the VMware Service-defined Firewall, watch a demo of the solution in action, and read the Service-defined Firewall Effectiveness Validation report by Verodin, visit vmware.com/go/service-defined-firewall.

The post Meet the VMware Service-defined Firewall: A new approach to firewalling appeared first on Network Virtualization.

Context-aware Micro-segmentation with NSX-T 2.4


With last week’s landmark release of NSX-T 2.4, and the RSA Conference in full swing, this is the perfect time to talk about some of the new security functionality we are introducing in NSX-T 2.4.

If you prefer seeing NSX-T in action, you can watch this demo, which covers Layer 7 application identity, FQDN filtering, and Identity Firewall. Or, if you are around at RSAC in San Francisco this week, swing by the VMware booth.

Micro-segmentation has been one of the key reasons our customers deploy NSX. With micro-segmentation, NSX enables organizations to implement a zero-trust network security model in their on-premises data center as well as in the cloud and beyond. A key component making micro-segmentation possible is the Distributed Firewall, which is deployed at the logical port of every workload, allowing the most granular level of enforcement regardless of the form factor of that workload (virtual machine, container, or bare-metal server) or where that workload resides (on-premises, AWS, Azure, or VMC).

NSX-T 2.4 provides significant new security features and functionality such as context-aware micro-segmentation, network (and security) infrastructure as code, E-W service insertion, and Guest Introspection. Layer 7 application identity, FQDN/URL whitelisting, and identity firewalling are the key features that make NSX-T context-aware.

In this blog and the demo below, I cover how the new context-aware capabilities can be leveraged to enable the zero-trust network security model.

Challenges with traditional Data Center security

In the traditional network-centric approach to security, network and security teams are tasked with determining the appropriate policy and rules after a new application has been developed. This is often a very time-consuming, manual, and error-prone process involving various review cycles, and it results in a complex set of rules based on network constructs such as IP addresses and ports that are hard to tie to applications. Beyond that initial complexity, network-based security policies also adapt poorly to changing applications.

Modern-day applications are networks of distributed servers, which can consist of virtual machines, containers, and sometimes bare-metal systems, all intended to work together. In the traditional network-centric approach to security, VLANs and hairpinning of traffic to hardware-based firewalls are used to provide a certain level of segmentation between different kinds of workloads. This is often used to segment the tiers of applications, but it does not prevent lateral communication between workloads within a tier, leaving a large lateral attack surface between various applications. Most importantly, this model and the lifecycle of its associated policies are not aligned with applications. The disconnect between policies and the actual applications those policies should protect leads to an explosion in the number of rules, providing inadequate and inflexible security.

Zero Trust through Context

How can we provide the ultimate level of segmentation, and how can we deliver a security policy that is aligned with our applications and is provisioned and decommissioned right along with the application itself?
We need to be able to identify our application, determine which workloads comprise the application, and establish what network traffic is necessary for the application to function. Then we can create micro-segmentation policies to restrict any other traffic, which immediately reduces the attack surface.

The NSX-T Distributed Firewall is the key component in enforcing micro-segmentation. It’s built directly into the hypervisor kernel and provides Layer 2 to Layer 7 stateful filtering, enabling context-defined, network-independent policy and enforcement at line rate. This enforcement is distributed to the most granular level, with a firewall sitting right at the vNIC of every virtual machine. All policies are centrally configured, either from the UI, through a CMP, or using our API.

In traditional security approaches, policies are applied to static groups defined by the network topology like security zones and IP subnets.

Because NSX is embedded in the hypervisor, it has rich contextual knowledge of what is taking place with dynamic workloads in both the physical and virtual environments. Instead of basing groups and rules on where something is in the network, we can use constructs based on specific characteristics of that workload, including, for example, the operating system or the name of the workload. Through the use of security tags, workloads can also be grouped based on criteria such as the function of the application, the application tier the workload is part of, the security posture, regulatory requirements such as PCI or GDPR, or the environment the application is deployed in. With identity firewalling, we can also create a policy that limits access to applications based on the Active Directory group a user belongs to, and with Layer 7 application identity in NSX-T 2.4, we now also provide customers the ability to define a policy that allows/denies flows from a particular application/protocol traversing E-W between workloads, regardless of the port being used.

In NSX-T, Groups can be based on a combination of static and dynamic criteria such as tags as well as AD User Group. Criteria in Red are new in NSX-T 2.4.

Layer 7 Application-ID

Application and protocol identification is the ability to identify which application generated a particular packet or flow, independent of the port being used. In addition to visibility into flows, enforcement based on application identity is another key aspect, enabling customers to allow or deny applications on any port, or to force applications to run on their standard ports. This lets our customers reduce the attack surface even further by ensuring only the intended application is able to communicate on a given port and preventing any other protocol from tunneling across an open port. Beyond identifying the protocol/application itself, NSX-T 2.4 also enables the enforcement of additional Layer 7 attributes, including protocol versions for TLS and CIFS and cipher suites for TLS, enforcing the usage of secure protocol versions and cipher suites.

Enforcement based on combination of port and/or app-ID/version 

With Layer 7 Application Identity, NSX-T leverages a set of built-in application signatures, which allow us to fingerprint network flows in order to determine which Layer 7 application/protocol generated the flow. A central component to this is the DPI or Deep Packet Inspection Engine. When a flow matches a firewall rule with a Layer 7 context profile applied to it, a few packets for that flow are punted to the DPI engine, which matches these packets against a set of application signatures. When a match is found, the DPI engine programs the APP-ID to Flow mapping information into the in-kernel context-table. When the next packet for this flow comes in, we match the information we have in the context and flow table with the Rule table entries and either deny or allow the flow. All subsequent packets for that flow are processed in kernel, delivering unmatched L7 firewall throughput.

NSX-T Distributed Firewall Architecture with Layer 7 Flow evaluation
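To make the punt-and-cache behavior concrete, here is a toy Python model of the flow just described; the signature matching is a trivial stand-in, and the table structures are simplified assumptions, not NSX internals:

```python
# Toy model of the flow-table / context-table interplay: the first packets
# of a flow are punted to DPI; once the App-ID is learned, later packets
# are matched in the fast path without DPI.
flow_table = {}      # flow 5-tuple -> app_id (absent while DPI is pending)

def dpi_classify(payload: bytes) -> str:
    # Stand-in for application-signature matching in the DPI engine.
    return "HTTP" if payload.startswith(b"GET ") else "UNKNOWN"

def process_packet(flow: tuple, payload: bytes, rule_app_id: str) -> str:
    app_id = flow_table.get(flow)
    if app_id is None:                 # first packet(s): punt to the DPI engine
        app_id = dpi_classify(payload)
        flow_table[flow] = app_id      # program the context/flow tables
    return "ALLOW" if app_id == rule_app_id else "DROP"

flow = ("2001::10", "2002::20", 49152, 80, "TCP")
print(process_packet(flow, b"GET / HTTP/1.1", "HTTP"))   # ALLOW (via DPI)
print(process_packet(flow, b"...more data...", "HTTP"))  # ALLOW (fast path)
```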

In NSX-T, Layer 7 App-ID is configured via context profiles, which can then be applied to one or more distributed firewall rules. A number of context profiles are built into NSX-T for the most common applications and protocols used in data centers. In addition, customers can define custom context profiles based on a specific App-ID and sub-attributes such as protocol version (for SSL and CIFS) and/or FQDN/URL.

Defining a custom Context Profile

 

FQDN Whitelisting

With FQDN whitelisting, security administrators can define firewall rules that explicitly provide access to a set of URLs or FQDNs. This can be leveraged to micro-segment applications that access external SaaS/cloud services whose IP addresses are unknown or subject to change. In addition to application micro-segmentation, FQDN whitelisting can also provide granular SaaS application or web access to users in a VDI environment.

FQDN Whitelisting

 

FQDN whitelisting leverages the same context-aware architecture used for Layer 7 App-ID, in which nearly all packets for a flow are processed in kernel. Furthermore, we leverage distributed DNS snooping to map domain names to IP addresses. Distributed DNS snooping is unique to NSX and takes advantage of the unique position of the Distributed Firewall (in kernel, and applied at the logical port of every workload), which enables us to learn about every DNS query and response, regardless of whether it goes to an external or internal DNS server, without requiring any agent.
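Conceptually, the snooping reduces to learning FQDN-to-address bindings from observed DNS answers and consulting them at enforcement time. A toy Python sketch of that idea (not NSX internals; the names and addresses are invented):

```python
# Learn FQDN -> IP bindings from observed DNS answers, then resolve
# FQDN-based firewall rules against the learned bindings.
fqdn_map = {}   # fqdn -> set of addresses seen in DNS responses

def snoop_dns_answer(fqdn: str, addresses: list) -> None:
    fqdn_map.setdefault(fqdn, set()).update(addresses)

def fqdn_rule_matches(dst_ip: str, allowed_fqdn: str) -> bool:
    return dst_ip in fqdn_map.get(allowed_fqdn, set())

snoop_dns_answer("files.example-saas.com", ["203.0.113.10", "203.0.113.11"])
print(fqdn_rule_matches("203.0.113.10", "files.example-saas.com"))  # True
print(fqdn_rule_matches("198.51.100.5", "files.example-saas.com"))  # False
```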

In NSX-T 2.4, FQDN whitelisting is configured by defining a context profile with one or more pre-canned URLs, and then applying the context profile to a firewall rule. In addition to FQDN,  Layer 7 App-ID can also be defined in the same context profile to limit the types of applications/protocols that can access the specified URL/FQDN.

Identity Firewall

Besides micro-segmenting applications, many of our customers also leverage NSX to protect their VDI infrastructure and desktops. NSX provides isolation between desktops, granular access to applications for distinct groups of desktops, and micro-segmentation for the VDI infrastructure. With NSX-T 2.4, we further expand on this with E-W service insertion and Guest Introspection, allowing customers to insert additional security controls such as IPS/IDS or agentless anti-virus into their VDI environment. One other key feature introduced in NSX-T 2.4 is user-based or Identity Firewall (IDFW). With IDFW, customers can create firewall rules based on Active Directory user groups to provide granular per-user access to applications. The Identity Firewall feature is based on flow context and can therefore be applied to users accessing their apps from either VDI desktops or RDSH sessions. With NSX-T 2.4, IDFW-based rules can also use Layer 7 and/or FQDN context profiles to provide even more granular per-user control.

NSX-T can retrieve user-to-group mappings from Active Directory, allowing users to then configure a Group based on one or more AD groups. When firewall rules are configured with an AD-based group as the source, the Security Identifier (SID) of that group is programmed into the dataplane of the Distributed Firewall. When a user logs in to a VDI desktop or RDSH host, VMware Tools (thin agent) is used to retrieve the user/group information. The context engine, an NSX component running on the hypervisor, then programs the group-SID-to-flow mapping into the context table. Finally, the flow information in the context table is matched against SID-based rules in the firewall rule table, and the appropriate action is taken.
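A simplified Python model of that evaluation path might look like the following; the SID value and flow tuple are placeholders, and the real context engine is of course an in-kernel/daemon mechanism rather than a dictionary:

```python
# Toy model of identity firewall evaluation: the context engine maps a
# logged-in user's group SIDs to the flow; rules are keyed by group SID.
context_table = {}   # flow -> set of AD group SIDs for the logged-in user

def user_logged_in(flow: tuple, group_sids: set) -> None:
    context_table[flow] = group_sids      # learned via the VMware Tools thin agent

def idfw_match(flow: tuple, rule_sid: str) -> bool:
    return rule_sid in context_table.get(flow, set())

finance_sid = "S-1-5-21-0000000000-0000000000-0000000000-1105"  # placeholder SID
flow = ("vdi-01", "app-01", 51000, 443, "TCP")
user_logged_in(flow, {finance_sid})
print(idfw_match(flow, finance_sid))  # True: the SID-based rule applies
```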

Identity Firewall is disabled by default, and can be enabled per cluster and/or for standalone hosts. Active Directory needs to be registered with NSX in order for NSX to retrieve group and user information. Once that is done, security administrators can create a group based on one or more AD-groups. This group can then be used as the source of one or more Distributed Firewall rules.

Creating AD-Based Groups and using them in Distributed Firewall Rules

Demo

In this 15-minute demo, we cover how the new context-aware capabilities in NSX-T 2.4, including Layer 7 App-ID, FQDN whitelisting, and Identity Firewall, can be leveraged to enable the zero-trust network security model.

The post Context-aware Micro-segmentation with NSX-T 2.4 appeared first on Network Virtualization.

NSX-T 2.4 – NSX Cloud eases your Adoption/Operations between on-premises Datacenter, AWS and Azure


2018 was a great year for NSX Cloud, with increased customer traction, strong partnerships established across the board, and a whole host of new features released throughout the year! While most of our competitors are just starting on their public cloud solutions, NSX Cloud is entering its second year of adoption, enabling consistent networking and security across on-premises data centers, AWS, and Azure. With NSX-T 2.4, we’re extending our industry-leading capabilities, which will further enable our customers to seamlessly and consistently manage their public cloud and private cloud workloads.

If you would like to have a refresher on NSX Cloud before we get into the details of what’s new in NSX-T 2.4, here are some pointers to our previous blogs:

At a high level these are some of the key NSX Cloud features that were released in NSX-T 2.4:

  • Shared Gateway in Transit VPC/VNET for simplified, faster onboarding and consolidation
  • VPN support in Public Cloud
  • Selective North-South Service Insertion and Partner Integration
  • Micro-segmentation on Horizon Cloud for Azure.
  • Declarative Policy for Hybrid Workloads

Now, let’s take a closer look at each of these features:

 

Shared Gateway & Simplified Transit VPC/VNET architecture:

Instead of having to install an NSX Cloud gateway in every VPC/VNET, customers can now choose to have a single NSX Cloud gateway (deployed in a transit VPC/VNET) manage multiple compute VPCs/VNETs. This greatly reduces the PCG footprint, deployment costs, and operational overhead involved in managing public cloud workloads. It also solves the transitive routing limitation in AWS and reduces the number of VPN tunnels required to back-haul data traffic. Consolidated NSX Cloud gateways enable quicker onboarding of VPCs/VNETs, and these gateways can be shared by compute VPCs/VNETs across different accounts. Because this prevents unauthorized termination of the Cloud gateway by any end user, it also provides an additional layer of security.

 

 

VPN Support in Public Cloud

NSX Cloud now has built-in support for setting up VPN tunnels to back-haul traffic from the public cloud to an on-premises data center. VPNs from the on-premises data center can now be terminated directly at the NSX Cloud gateway in the public cloud. Customers don’t need the VGW provided by public cloud vendors, which reduces cost. It also reduces management overhead, as the NSX Cloud gateway automatically propagates the routes over BGP. From a bandwidth perspective, NSX Cloud gives a huge bump in capacity as well: inter-VPC traffic can flow at 5 Gbps over peered VPCs versus just 1 Gbps over a VGW.

 

 

It is also possible to establish VPN connectivity to NSX Cloud Gateways located in different regions in different public clouds. The termination need not necessarily be an NSX Cloud gateway and the user can choose to have a third-party VPN Gateway at any of the endpoints. This gives great flexibility when a user tries to architect their VPNs.

 

 

Selective North-South Service Insertion & Partner Integration:

Customers can deploy a partner service directly from the public cloud marketplace in the shared-services/transit architecture. The NSX Cloud gateway in the transit VPC/VNET can be programmed to selectively route traffic to the partner service appliance based on NSX policies. This can mean huge cost savings for customers, as they are not forced to direct all traffic through a virtual L7 firewall appliance bought for the public cloud and billed on the traffic that passes through it. And if that wasn’t enough, service insertion with NSX Cloud requires no VPNs to compute VPCs/VNETs: more cost savings and less operational overhead.

 

 

 

Micro-segmentation on Horizon Cloud for Azure: NSX Cloud now has a combined solution with Horizon Cloud for Azure. For customers who choose to deploy a Horizon VDI environment in Azure, NSX Cloud provides the necessary micro-segmentation to secure the VDI environment. We wrote a blog about this a few months ago; the feature is now GA in NSX-T 2.4.

 

Declarative Policy for Hybrid Workloads: With NSX-T 2.4, the NSX platform moves to declarative policies. Users can now define a single intent-based policy from the Policy Manager without worrying about where the workloads are deployed or where they will move in the future. NSX Cloud, as an extension of the on-premises NSX-T platform, enforces this policy consistently across your public cloud footprints in both Azure and AWS. This makes managing the public cloud simply an extension of your on-premises DC.

 

What’s Cooking?

With the NSX-T 2.4 release, NSX offers a full suite of capabilities for public cloud management, but we’re not done. As we gather feedback from customers regarding native services and NSX Cloud life-cycle management, we’re working hard to augment these capabilities in our upcoming releases. Reach out to us and check out the NSX Cloud product page to find out more about what’s in store for NSX Cloud in 2019! Exciting times ahead…

 

The post NSX-T 2.4 – NSX Cloud eases your Adoption/Operations between on-premises Datacenter, AWS and Azure appeared first on Network Virtualization.

VMware Cloud on AWS with Transit Gateway Demo


At AWS re:Invent 2018 last November, AWS introduced a regional construct called Transit Gateway (TGW). AWS Transit Gateway allows customers to connect multiple Virtual Private Clouds (VPCs) together easily. The TGW can be seen as a hub and the VPCs as spokes in a hub-and-spoke model; any-to-any communication is made possible by traversing the TGW. TGW can replace the popular AWS Transit VPC design many customers previously deployed for connecting multiple VPCs together. In this post, I will discuss TGW and how it can currently be used with VMware Cloud on AWS. At the end of this post there’s also a video of a demo using the same setup described in this blog; feel free to jump to the video if you like.

The VMware Cloud on AWS SDDC leverages familiar VMware technologies such as vSphere ESXi, vSAN, and NSX to provide a robust SDDC in the cloud. This SDDC, powered by the VMware stack, can also easily be connected to on-prem data centers, enabling easy migration and even vMotion from on-prem to cloud and vice versa. To learn more about the networking and security provided by NSX in VMware Cloud on AWS, read my prior posts on the VMware Network Virtualization blog or on my personal website.

Today, customers can use TGW VPN attachments to connect to VMware Cloud on AWS SDDCs. This VPN connectivity is established using route-based IPsec VPN. In the diagram below, you can see I have a TGW deployed and connected to both VMware Cloud on AWS SDDCs and native AWS VPCs.

My connections from TGW to the VMware Cloud on AWS SDDCs use VPN attachments, leveraging the route-based IPsec VPN capability provided by NSX. My connections from TGW to the native AWS VPCs use VPC attachments and thus leverage the native underlying ENI connectivity to connect directly from TGW to VPC.

Right away, you can see there are two very clear advantages with TGW:

1.) Ease of creating Any-to-Any communications between native AWS VPCs and VMware Cloud on AWS SDDCs

2.) Connectivity to on-prem can be shared easily with all connecting VMware Cloud on AWS SDDCs and native AWS VPCs

By default, a TGW has one route table. However, it’s also possible to use multiple route tables for traffic engineering by controlling how routes flow between them. You can associate TGW attachments with different route tables and then control traffic either via static routes or via propagation. This is some pretty cool advanced functionality which I will cover in more detail in a later post.

In the above example, you can see I also have a VPN connection from TGW to my on-prem data center. AWS announced at re:Invent 2018 that Direct Connect attachment to TGW was not yet supported but would be available sometime in early 2019, so, as of right now, the only option for this design is VPN to on-prem. Because this VPN connection from on-prem terminates on a TGW, it can be shared among all the spoke VMware Cloud on AWS SDDCs and native AWS VPCs. This is depicted more clearly in the diagram below.

The above design provides shared connectivity to on-prem for all my spokes and also easily enables any-to-any configuration and communication.

It’s possible some of the spokes may even have Direct Connect connectivity to on-prem but still be connected to a TGW. One such use case is when workloads in a VMware Cloud on AWS SDDC need to access shared services in multiple other VPCs/SDDCs. This design is shown below and is also what I have set up in my lab; it is the same setup I use in my demo.

The VMware Cloud on AWS SDDCs below are attached to the TGW via VPN attachments. There are two IPsec VPN tunnels in Active/Standby mode per VMware Cloud on AWS SDDC.

Reflecting the above diagram, you can see in the screenshot below from the AWS Console that I have created an AWS Transit Gateway. Note, I’ve disabled ECMP, as currently only Active/Standby VPN is supported with VMware Cloud on AWS.

Below, you can see I created two VPN attachments, each connected to a different VMware Cloud on AWS SDDC. I also created two VPC attachments, each connected to a different native AWS VPC.

Clicking on the Resource ID of the first row shown above takes me to the VPN connections I’ve established to my first VMware Cloud on AWS SDDC. I created a Customer Gateway in AWS with the public IP address of my VMware Cloud on AWS SDDC 1 VPN. You can see below under Tunnel Details that AWS provided two public IP addresses, and I selected two different Inside IP CIDR blocks to use for my virtual tunnel interfaces (VTIs), which BGP peers over; this occurs over the IPsec VPN connection. You can see the tunnels are up. The routes are propagated and learned automatically by the TGW and by VMware Cloud on AWS SDDC 1 via BGP.

I used the same process to set up connectivity to my VMware Cloud on AWS SDDC 2. Notice the Customer Gateway Address is different because I’m connecting to a different VMware Cloud on AWS SDDC. Also note that I have two different AWS public IPs and selected different Inside IP CIDR blocks.

You can see below that the subnet for my VPC 1 is 172.32.0.0/16. For the respective VPC attachment in native AWS VPC 1, I manually created two route entries. To reach 10.72.31.16/28, the subnet of my App network segment in VMware Cloud on AWS SDDC 1, traffic is sent through the Transit Gateway I created, which you can see is the Target. To reach 10.61.4.0/28, the subnet of my Web network segment in VMware Cloud on AWS SDDC 2, traffic is also sent through the Transit Gateway.
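If you prefer to script these entries rather than click through the console, the equivalent boto3 calls would look roughly like this; the route table and Transit Gateway IDs below are placeholders standing in for the lab’s real resource IDs:

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-west-2")   # region is illustrative

# Static routes in the spoke VPC route table pointing at the TGW; the
# CIDRs are from the lab above, the resource IDs are placeholders.
for cidr in ("10.72.31.16/28", "10.61.4.0/28"):
    ec2.create_route(
        RouteTableId="rtb-0123456789abcdef0",
        DestinationCidrBlock=cidr,
        TransitGatewayId="tgw-0123456789abcdef0",
    )
```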

With VPC attachments, the routes are not propagated from the TGW to the spoke VPCs. However, the routes from the VPCs are propagated to the TGW as propagation is enabled by default on the TGW.

You can see below, the subnet of VPC 2 is 172.33.0.0/16. I configured the route table in VPC 2 identically.

Looking at the AWS Transit Gateway’s route table, I can see all the routes are learned from my VMware Cloud on AWS SDDCs and native AWS VPCs.

In my VMware Cloud on AWS SDDCs, I can also verify the respective VPN connections are up. Below is the screen shot from my VMware Cloud on AWS SDDC 1.

With some recent enhancements in VMware Cloud on AWS, you can now view Advertised Routes and see Learned Routes.

The below routes are being Advertised from VMware Cloud on AWS via BGP over VPN to AWS TGW.

Below are the network segments in VMware Cloud on AWS SDDC 1. From the screen shot above, you can see these networks are being advertised to TGW.

The below routes are being learned by VMware Cloud on AWS via BGP over VPN from AWS TGW. Note, the subnet from VMware Cloud on AWS SDDC 2 (10.61.4.0/28), native AWS VPC 1 (172.32.0.0/16), and native AWS VPC 2 (172.33.0.0/16) are all being learned via BGP over VPN from AWS TGW.

I confirm below from my App VM (10.72.31.17/28) on the App network segment in VMware Cloud on AWS SDDC 1 that I can communicate to workloads in VMware Cloud on AWS SDDC 2 (10.61.4.0/28), native AWS VPC 1 (172.32.0.0/16), and native AWS VPC 2 (172.33.0.0/16).

You can see how easy it is to setup connectivity from VMware Cloud on AWS SDDC to AWS TGW and enable Any-to-Any communication.

See the full demo here on the NSX YouTube channel or in the embedded video below.

My Blogs on VMware Network Virtualization Blog

My blogs on HumairAhmed.com

Follow me on Twitter: @Humair_Ahmed

The post VMware Cloud on AWS with Transit Gateway Demo appeared first on Network Virtualization.

Cross-vCenter NSX at the Center for Advanced Public Safety


Jason Foster is an IT Manager at the Center for Advanced Public Safety at the University of Alabama. The Center for Advanced Public Safety (CAPS) originally developed crash reporting and data analytics software for the State of Alabama. Today, CAPS specializes in custom software, mostly in the realm of law enforcement and public safety, and has created systems for many states and government agencies across the country.

Bryan Salek, Networking and Security Staff Systems Engineer, spoke with Jason about network virtualization, what led the Center for Advanced Public Safety to choose VMware NSX Data Center, and what the future holds for their IT transformation.

 

The Need for Secure and Resilient Infrastructure

As part of a large data center modernization initiative, the forward-thinking CAPS IT team began to investigate micro-segmentation. Security is a primary focus at CAPS because the organization develops large software packages for various state agencies. The applications that CAPS writes and builds are hosted together, but they contain confidential information and need to be segmented from one another.

Once CAPS rolled out the micro-segmentation use-case, the IT team decided to leverage NSX Data Center for disaster recovery purposes as well. With networking and security set up across two data centers, it is essential that CAPS operates with maximum performance, security, and resilience for the state infrastructure agencies they support.

 

Consistent Networking and Security Across Two Sites with Cross-vCenter NSX

Cross-vCenter NSX provides the ability to manage the NSX environment across multiple vCenter domains in a centralized manner. In Cross-VC NSX, logical networks are stretched across sites, providing consistent networking and security across vCenter domains or sites.

One of the advantages of Cross-vCenter NSX is increased mobility of workloads – VMs can be migrated using vMotion across vCenters without reconfiguring the VM or changing firewall rules. Applications can be restarted at the recovery site upon a DR event while maintaining their IP addresses – no need to re-IP. Enforcement of the applications is maintained due to the universal nature of the distributed firewall and security policies. This means there is no need to perform manual mapping as all networking and security services are synchronized across sites, providing huge benefits for highly dynamic environments.

 

Listen to the podcast to learn more about what Jason’s team at the Center for Advanced Public Safety is up to. Then, check out Humair Ahmed’s VMware NSX Multi-site Solutions and Cross-vCenter NSX Design Day 1 Guide to learn more about the Cross-VC NSX solution.

 

LISTEN NOW:

The post Cross-vCenter NSX at the Center for Advanced Public Safety appeared first on Network Virtualization.

Switzerland’s Leading Provider of Customized Financial Services for Dental Facilities Ensures the Safe Handling of Patient Records


The core business of Zahnärztekasse AG revolves around financial services for dentists, and therefore around secure patient records. Its 33 employees look after the fee management of over 1,000 dental facilities in Switzerland. Recently, the company introduced a new level of security because, in the face of current cybercrime threats, sensitive data can fall into the wrong hands. The dental facilities often ask about the security of the IT products and services offered. In addition, it is necessary to comply with the new federal law on data protection, the Swiss counterpart to GDPR. IT security is therefore very important. A digital transformation was necessary because the systems in use were not completely protected against current threat scenarios. Furthermore, Zahnärztekasse was also striving for an ISO certification.

 

Interfaces and platforms already digitized

Digitalization is a major challenge for the conservative dental market. Zahnärztekasse has responded to this trend by digitizing its assets including interfaces, various platforms (www.debident.ch and www.zahngeld.ch) and the iOS app Crediflex, and is now considered to be a market leader and pioneer in the field. As early as 2010, Zahnärztekasse started virtualizing its systems and built on this trend until the last systems were virtualized about three years ago.

 

Implementation of the solution within four days

Looking for the next step in security, it quickly became clear that VMware NSX Data Center and VMware vRealize Network Insight offered a very good extension to the existing VMware systems. Because IT staff already used and understood VMware solutions, the project could be implemented quickly. “Thanks to good preparation, we were able to implement VMware NSX Data Center and VMware vRealize Network Insight in just four days and also jointly develop the basic settings of the firewall regulations. Implementing a classic firewall would have taken two weeks,” said Pascal Fröhlich, System Engineer at AGIBA IT Services AG. With NSX Data Center, individual systems can be completely separated by micro-segmentation, and critical systems can be completely isolated if necessary. The data was analyzed with vRealize Network Insight so that all connections between the different virtual machines could be evaluated. Thanks to vRealize Network Insight, Zahnärztekasse can set and control security guidelines and compliance requirements.

 

Firewall capacity of 100 GBit/s

The key advantages of the implemented solution are centralized administration, simplified processes, flexibility, and a massive consolidation of resources. The throughput rate is much higher than with a physical solution: 5-6 GBit/s per host for the physical solution versus 20 GBit/s per host with NSX. The total firewall capacity in Zahnärztekasse’s data center is 100 GBit/s.

 

NSX is the perfect fit

In addition, the reasonable price of the VMware solution and half the implementation time compared to other options convinced Zahnärztekasse. Andy Hosennen, Head of IT Infrastructure and Support at Zahnärztekasse summarizes: “I can’t imagine working without VMware today. The solutions complement each other perfectly: VMware NSX Data Center and VMware vRealize Network Insight were integrated into our existing system like one building block on top of the other, fitting exactly and easily.” The successful project is also of strategic importance to Zahnärztekasse. “Protecting our systems from threats affects everyone – our employees as well as our customers. After all, our customers trust us to handle their data responsibly,” adds Andy Hosennen. The entire company benefits from cost reductions, increased efficiency, and automation in the data center.

 

To learn more, check out the Zahnärztekasse case study.

The post Switzerland’s Leading Provider of Customized Financial Services for Dental Facilities Ensures the Safe Handling of Patient Records appeared first on Network Virtualization.


How Istio, NSX Service Mesh and NSX Data Center Fit Together


This is the year of the service mesh. Service mesh solutions like Istio are popping up everywhere faster than you can say Kubernetes. Yet, with the exponential growth in interest also comes confusion. These are a few of the questions I hear out there:

  1. Where is the overlap between NSX Service Mesh and NSX Data Center?
  2. Is there synergy between the NSX Data Center and Istio?
  3. Can service mesh be considered networking at all?

These are all excellent and valid questions. I will try to answer them at the end of the post, but to get there, let’s first understand what each solution is trying to achieve and place both on the OSI model to bring more clarity to the topic.

*Note – I focused this post on NSX Data Center and Istio, to prevent confusion. Istio is an open source service mesh project. NSX Service Mesh is a VMware service delivering an enterprise-grade service mesh; while it is built on top of Istio, it brings extensive capabilities beyond those offered by the open source Istio project.

 

Before we start, in a nutshell, what is Istio?

Istio is an open source service mesh project, led by Google, that addresses many of the challenges that come with the rise of distributed micro-services architectures. A lot of attention is paid to networking, security, and observability capabilities. We will not explain everything about service mesh and Istio, as that would require a book 😊 but to really summarize it:

It is an abstraction layer that takes service-to-service communication (service discovery, encryption), observability (monitoring and tracing), and resiliency (circuit breakers and retries) away from the developer and moves them into a declarative configuration layer.

The data plane of Istio is based on Envoy, which is by itself a very successful open source reverse proxy project started by Lyft. Istio adds a control plane on top of Envoy.

Why should you care?

Istio solves a long-time problem of middleware management, and according to Gartner, “by 2020, all leading container management systems (delivered either as software or as a service) will include service mesh technology, up from less than 10% of generally available offerings today.”

Source: Innovation Insight for Service Mesh 3 December 2018

We at VMware see it as more than just containers (there is no reason it cannot be extended to VMs, but that is a different subject).


NSX Data Center

NSX Data Center delivers virtualized networking and security entirely in software, focusing its networking capabilities on layers 2-4, with some security and load balancing capabilities at layer 7. Here are the main components of NSX Data Center, each in its respective place in the OSI model:

Logical Switching and Routing

Switching is a layer 2 component and routing is a layer 3 component. That is basic networking instantiated in software.


Firewall

The NSX Data Center distributed firewall (DFW) is the basis for achieving micro-segmentation at L4: the ability to inspect each packet flowing between application endpoints, irrespective of network topology, against a security policy. Ports that are not defined as allowed cannot be accessed. When implemented on vSphere or KVM, enforcement happens within the hypervisor kernel.

NSX Data Center focuses on L3-L7 firewalling, with added capabilities for L7 service insertion through next-generation firewall partners such as Check Point, while the distributed enforcement itself is at L4 (ports).


L7 Based Application Signatures (DPI)

With NSX-T Data Center 2.4, NSX Data Center added DPI (deep packet inspection) capabilities: the ability to define L7 application-signature-based rules in the distributed firewall. Users can either use a combination of L3/L4 rules with L7 application signatures or create rules based on L7 application signatures only. We currently support application signatures, with various sub-attributes, for server-server or client-server communications only.

Read this blog post for more information about NSX Data Center and DPI.


Load Balancing

NSX Data Center provides load balancing at layer 4 (TCP) and layer 7 (HTTP, HTTPS), with many enterprise-level load balancing capabilities. This is traditional edge load balancing implemented in software.


NSX Data Center Observability

NSX Data Center observability capabilities are significant to our enterprise customer base and focus on L2-L4 networking and security. Among other things, we can provide visibility into the traffic that flows through the NSX Data Center firewall (FW) and relate physical FW traffic to the NSX Data Center logical networks and the platforms running with NSX Data Center using SNAT tools. Read this blog for more on the NSX Data Center firewall.

As you can see, most capabilities concentrate on L2-L4 networking instantiated in software with extensive enterprise grade monitoring and operational tools.

Let’s look at service mesh now. I am concentrating on Istio, mainly because it is the front-runner service mesh and because it is the first service mesh supported by NSX Service Mesh.


Traffic Management

Istio’s traffic management capabilities are based on the Envoy L7 proxy, a distributed load balancer that is attached to each micro-service, in the case of Kubernetes as a sidecar. The placement of that load balancer (close to the workload) and the fact that all traffic flows through it allow it to be programmed with very interesting rules. Before placing it on the OSI model, I will quickly cover some of the main traffic management capabilities.

Also, there is an ingress and egress proxy for edge load balancing in Istio that I will touch upon as well.

 

Traffic Splitting (sometimes called traffic shifting)

Traffic splitting provides the ability to send a percentage of traffic from one version of a service to another version of that service, sometimes referred to as a “canary upgrade” or “rolling upgrade”. As you can see in the diagram below, 1% of traffic hitting the front-end service is directed to a new version of service B. If it looks good (no errors, etc.), more traffic is shifted to “service B 2.0” until 100% of traffic has moved. Because this is L7 routing, the decision of where to send traffic is application-based (service-to-service communication) and is unrelated to IP-based L3 routing. This will be a common theme, as you will see.
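Mechanically, the weighted split the sidecar applies is just weighted random selection over the configured destinations. Here is a conceptual Python sketch of a 99/1 split (not Envoy’s actual implementation; the service names are invented):

```python
import random

# Weighted routing table as pushed down by the control plane: 99% of
# requests go to v1, 1% canary to v2.
routes = [("service-b-v1", 99), ("service-b-v2", 1)]

def pick_destination() -> str:
    names, weights = zip(*routes)
    return random.choices(names, weights=weights, k=1)[0]

# Roughly 1% of requests should land on the canary version:
sample = [pick_destination() for _ in range(10_000)]
print(sample.count("service-b-v2") / len(sample))
```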

Traffic Steering

Traffic steering allows us to control where incoming traffic is sent based on application attributes (some web attribute) such as authentication (send Jason to service A and Linda to service B), location (UK to service A and US to service B), device (watch to service A and mobile to service B), or anything else the application passes. This was achieved in the past in different ways, from application code to libraries, but with Istio we can configure it declaratively using a YAML file. Cool stuff!

Ingress and Egress Traffic Control

The Istio gateway is the same Envoy proxy, only this time sitting at the edge. It controls traffic entering and leaving the mesh and allows us to apply monitoring and routing rules from Istio Pilot. This is very much like the traditional load balancing we know:

Now, let’s place Istio Traffic management on the OSI model

 

As we can see in the diagram above, all the traffic management capabilities sit at the L7 traffic management and load balancing level. We can see some overlap between Istio and the NSX Data Center LB for edge load balancing in a purely functional sense, though we still need a load balancer in front of Envoy to send traffic to the Istio gateway. The decision of where to send traffic within the application would be Istio’s. This is a common topology in container/micro-services platforms and is present in solutions like Pivotal Application Service (PAS), where a reverse proxy is placed in front of the application and behind a traditional load balancer.

NSX Data Center provides ingress L4 load balancing that is not achieved by Envoy and is required for service management on Kubernetes and for traditional applications. NSX Data Center also provides the enterprise-level operational tools needed by enterprise customers. The NSX Data Center load balancer can send and manage traffic to the mesh through the ingress Envoy, and to other entities that are not part of the mesh (not everything can be covered by service mesh, yet). So there is partial overlap, or more accurately, some of the intelligence has moved up the stack; however, a traditional edge LB is still very much needed.

 

Istio Observability and Telemetry

Istio provides significant capabilities for telemetry, again, all focused on L7. Let’s look at some of those:


Service Discovery

While it is part of the load balancing act of Istio, I placed service discovery under observability as a category on its own. Istio’s Pilot component consumes information from the underlying platform service registry (e.g. Kubernetes) and provides a platform-independent service discovery interface. With this interface, tools like NSX Service Mesh can provide service observability (only NSX Service Mesh does that across Kubernetes clusters in multiple clouds and is not restricted to a site or a specific cluster).

 

Here is a sneak peek at NSX Service Mesh’s service discovery map:

Istio’s observability is focused on L7 and completes the picture with NSX Data Center, which focuses on L4. When troubleshooting a micro-services problem, especially cascading issues, one needs to visualize the service dependencies and flows. Problems can occur at different levels; they can be a service application issue, a compute issue, or a network issue. Having the full picture is a powerful thing that significantly reduces diagnosis and troubleshooting effort. No overlap here:

Istio Security

Istio provides security features that are focused on the identity of the services; NSX Service Mesh uniquely extends that to users and data. When we talk about identity, we focus on L7 (did I say that Istio is all about L7?). Here are a few key capabilities:


Mutual TLS Authentication
 

Istio’s service-to-service communications all flow through the Envoy proxies. The Envoys can create mTLS tunnels between them, where each service has its own certificate (identity) received from the Citadel component (the root CA of the mesh). The encryption all happens at L7, meaning what gets encrypted is the payload, not the traffic. See my earlier post about mTLS and the OSI layers.


Istio Authorization

This is the micro-segmentation that Istio provides, which I mentioned earlier. Istio authorization provides namespace-level, service-level, and method-level access control for services in an Istio mesh. It is called micro-segmentation because it uses the same notion that what is not explicitly allowed to pass shall not pass, only this time the controls and enforcement are at L7 (service to service). The authorization service provides authorization for HTTP and micro-segmentation for services in an Istio mesh. It is not to be confused with NSX Data Center micro-segmentation, which operates on L4 port-based access.

Now, NSX Data Center DPI does provide L7 controls based on application signatures. But it does not cover service-to-service communication, and its enforcement is done at L3-L4, hence Istio’s authorization does not overlap with NSX Data Center micro-segmentation.

Istio can segment the services based on L7 constructs, such as:


RPC Level Authorization:

This method controls which service can access other services. Best described in the Istio docs:

Authorization is performed at the level of individual RPCs. Specifically, it controls “who can access my bookstore service”, or “who can access method getBook in my bookstore service”. It is not designed to control access to application-specific resource instances, like access to “storage bucket X” or access to “3rd book on 2nd shelf”. Today this kind of application-specific access control logic needs to be handled by the application itself.


Role-based Access Control with Conditions

This method controls access based on user identity plus a group of additional attributes. It’s the combination of RBAC (role-based access control) with the flexibility of ABAC (attribute-based access control).

 

When leveraging Istio “micro-segmentation” at L7, one may wonder, “do I need L4 micro-segmentation?” The correct answer is “absolutely yes”. Having only one layer of protection is like leaving your house open because your front gate is locked; the network is a hostile environment. Just as we did not drop the perimeter firewall when we added East-West DFW micro-segmentation, it doesn’t make sense to drop L4 enforcement with the addition of L7 controls.

As security attacks become much more sophisticated, a multi-layered security approach is important as each layer focuses specifically on an area where attacks could happen. Here’s a more in-depth analysis of the layered approach with Istio.

 

To summarize, let’s go back to the original questions we asked:

  1. Where is the overlap between Istio and NSX Data Center?

As you can see, the amount of overlap between the solutions is minuscule, and even where it exists (edge LB), we still need both. As a matter of fact, when it comes to security and network intelligence, more is better.

We’ve seen the Istio solution and NSX Data Center on the OSI model. Now let’s see VMware’s NSX Service Mesh solution with NSX Data Center on a logical diagram:

2. Is there synergy between the solutions?

You will see the discussed synergies being realized soon. We are working on some important integration services around NSX Service Mesh and NSX-T Data Center, bringing the two solutions closer together.

To see where each one fits, check out this physical diagram:

 

Beyond what we have discussed in this post, you can see that NSX Data Center also covers a wider range of workloads, whereas NSX Service Mesh is focused on cloud-native workloads (for now).

3. Lastly, can a service mesh be considered networking at all?

In my view, since Istio lives in L7, where network and application boundaries start to blur, it can be considered networking and, at the same time, an application platform.


What do you think? 

The post How Istio, NSX Service Mesh and NSX Data Center Fit Together appeared first on Network Virtualization.

Guest Introspection Re-introduction for NSX-T 2.4


(Re-)Introduction to Guest Introspection

The Guest Introspection platform has been included in NSX Data Center for vSphere for several years, largely as a replacement for the VMware vShield Endpoint product, giving customers the ability to plug in VMware-certified partner solutions for agentless anti-virus and anti-malware protection across a variety of data center workloads.

 

The Benefit of the Guest Introspection Platform

The Guest Introspection platform provides customers several outcomes.

Simplified AV management – Manually installing agents into the guest operating system creates massive operational overhead: deploying the agents to every virtual workload, managing the agent life cycle post-deployment, and troubleshooting in-guest agent issues in day-2 operations.

Guest Introspection provides a centralized management interface for deploying the agentless components to the vSphere hosts, including the security policies, all while using vSphere objects and grouping of those objects to associate the endpoint policy.  This provides granular policy creation and association in the workload environments.

Improved endpoint performance – When several or all of the virtual workloads kick off a scheduled AV scan, it can produce a massive drain on host resources; workloads might suffer during the scan, and end users may notice the performance drop. These impacts can lead to reduced consolidation ratios, where more host hardware is required just to provide the overhead for these types of scans.

Guest Introspection reduces this by moving the AV scan workload to the partner service virtual appliance, which uses far fewer resources than all of the in-guest agents running the same scan on all of the machines at the same time. We’ve seen customers reduce their overall CPU needs and even lower CapEx costs by maintaining consolidation ratios instead of buying more host hardware to perform the same types of scheduled scans.

Strengthened security posture – In-guest agents are exposed to attack by malicious payloads. Typically, a malicious payload will search for the anti-virus and anti-malware solutions and shut them down or disable them entirely.

Guest Introspection doesn’t require any partner agents to be deployed into the guest operating system to provide the same security benefits of anti-virus and anti-malware solutions. These agent-based services are offloaded, and the attack surface is further isolated so that a malicious payload cannot directly disable or shut down the protection.

 

Taking Guest Introspection to the Next Level for NSX-T 2.4

With the General Availability announcement of NSX-T Data Center 2.4, the Guest Introspection platform is now integrated into NSX-T to provide the same benefits listed above as it does for NSX Data Center for vSphere, and it has a few new enhancements specific to NSX-T. These enhancements could be compelling reasons for customers to consider NSX-T for new deployments of NSX, or to migrate their current NSX Data Center for vSphere deployments over to NSX-T Data Center.

Guest Introspection for NSX-T 2.4 brings several enhancements beyond the already strong benefits it delivers in NSX Data Center for vSphere, with features that are only available with NSX-T.

Simplified Life-cycle management – Guest Introspection in NSX Data Center for vSphere requires a separate Guest Introspection service virtual appliance per host to help with agentless offload capabilities, in addition to the VMware partner appliance. While NSX manages its own appliance efficiently, with NSX-T we moved those components into the NSX agent installation to simplify the overall architecture and reduce the number of service virtual appliances needed for Guest Introspection to only one partner appliance per host. The reduction in the number of service virtual appliances means less server downtime and fewer possible interruptions during deployments and upgrades.

Multi-vCenter support – NSX Data Center for vSphere has a dependency on vCenter for integration and can only be associated with one vCenter at a time. NSX-T Data Center 2.4 supports up to 16 vCenters (compute managers) with which it can be associated at the same time. This means common security profiles from the anti-virus and anti-malware VMware partners can extend across multiple vCenter workload domains, and management of those policies is significantly reduced: a customer no longer has to set the same policy multiple times in each NSX Manager deployment.

Partner scale enhancements – In NSX Data Center for vSphere, the partner appliances were limited in the amount of resources (vCPU and vRAM) they could have associated with them. These resource limits were not configurable and were agnostic to how many workloads actually resided on the hosts that needed these security solutions. A host with 10 VMs to protect would need the same resources as the same host with 100 VMs to protect. This model was not scalable and could result in under-utilized or insufficient resources for the security solution. NSX-T Data Center now allows VMware partner service virtual appliances to be deployed using ‘t-shirt style’ deployment models of Small, Medium, and Large. The security solution can be adapted to fit the number of workloads that require protection, either increasing as the number of workloads grows or decreasing for lower densities on hosts. This also means a customer can mix and match deployment models based on the workload types and densities across their environment, rather than relying on one static service virtual appliance deployment model.

Along with the re-introduction of Guest Introspection for NSX-T 2.4, we’d like to point out that we have our first certified partner integration with Bitdefender GravityZone. Jump over to their site for more information on Bitdefender and their solution, and be sure to check out the functional joint demo of NSX-T 2.4 and GravityZone in action. More certified partners are coming soon!

The post Guest Introspection Re-introduction for NSX-T 2.4 appeared first on Network Virtualization.

VMware and Google Showcase Hybrid Cloud Deployment for Application Platform and Development Teams


VMware and Google have been collaborating on a hybrid cloud for application platform and development teams. Both Google and VMware’s platforms are built on community-driven open-source technologies – namely Kubernetes, Envoy, and Istio. Having a common hybrid cloud foundation allows teams to run their applications on the optimal infrastructure and gives them more choice when modernizing existing applications or developing new cloud-native applications.

Digital transformation is rapidly changing the IT and application landscape. We are seeing a confluence of transformations that are happening simultaneously. These include hybrid clouds, microservice architectures, containerized applications, and service meshes – to name a few.

In this blog post, I will walk you through the architecture and specific use cases to illustrate the value a hybrid cloud deployment can deliver to application platform and development teams. We’ll do this by showing how a retail company can leverage many of these technology trends to help transform its business.

 

A Large Global Retailer Pursuing Digital Transformation

Our retailer has a digital business transformation initiative. Its main goals are to become more agile and leapfrog its competitors. It operates a global network of stores. The retailer has data centers and branch offices across multiple countries. These data centers and branches run hundreds of applications, including the retailer’s primary e-commerce application used by its customers to browse and shop for products. The e-commerce application is made up of several polyglot microservices and different databases. The retailer also offers its own branded credit card and loyalty program, so it also handles PCI and Personally Identifiable Information (PII) of its customers.

Over the past few years, our retailer has built several new microservice applications. Most recently it developed a containerized e-commerce application. The retailer wants to deploy some of the e-commerce microservices on-premises and some on Google Cloud Platform to take advantage of Google services and Google’s global infrastructure, which will allow the retailer to meet application SLAs and data residency requirements. Let’s look at how the retailer may go about this journey.

 

Establish Hybrid Cloud Connectivity

The retailer is a global corporation, so it requires multiple entry points into Google Cloud across the world. First, our retailer must connect its corporate data centers and branches to Google Cloud Platform (GCP). It uses VMware SD-WAN by VeloCloud to establish more secure and optimized network connectivity between its on-premises sites and GCP.

The workflow to establish these connections is completely automated via the VMware SD-WAN Orchestrator portal. Now our retail organization has hybrid cloud connectivity between its on-premises sites and the GCP public cloud, without having to manually change VPN and network firewall configurations for GCP service endpoints. Having a hybrid cloud that can provide resources to regions globally allows the teams to more easily address application SLAs and data residency requirements. You can read this blog for more details about VMware SD-WAN support for GCP.

 

Consistent K8s-based Environments for Containerized Applications

Our retailer is a long-time VMware customer and has recently deployed VMware Enterprise PKS to run containerized applications on-premises. Enterprise PKS simplifies the lifecycle management of Kubernetes clusters on multiple clouds. Enterprise PKS also offers tight integration with VMware NSX Data Center, providing advanced network virtualization and micro-segmentation security to Kubernetes. My colleague Niran Even-Chen recently wrote an insightful blog post about “How Istio, NSX Service Mesh and NSX Data Center Fit Together.” The retailer also wants to deploy some of its microservices to Google Kubernetes Engine (GKE) on Google Cloud. After doing so, it now has consistent Kubernetes-based environments for managing its containerized applications at scale, with bi-directional application portability between environments.

 

Service Mesh Data Plane Across Hybrid Cloud K8s Clusters

Before the team deploys any microservices on GCP, they want to get a service mesh into place so they have a consistent way to discover, observe, control, and secure their services across environments. The team installs Istio-based service meshes into each cluster and enables auto-injection of Envoy so that each microservice gets a sidecar proxy when it’s deployed.
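A minimal sketch of what enabling auto-injection looks like in practice, assuming namespace-labelled injection and hypothetical cluster and namespace names:

    # Label the target namespace in each cluster so Istio's admission webhook
    # injects an Envoy sidecar into every pod created there from now on.
    import subprocess

    for cluster, namespace in [("onprem-pks", "shop"), ("gke-us-east", "shop")]:
        subprocess.run(
            ["kubectl", "--context", cluster, "label", "namespace", namespace,
             "istio-injection=enabled", "--overwrite"],
            check=True,
        )

Existing pods keep running without a sidecar until they are recreated, which is why teams typically enable injection before deploying the microservices.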

The job of the Envoy sidecar proxy is to control all the East-West traffic flowing between microservices inside the meshes, and to act as an ingress and egress gateway to control all of the North-South traffic at the edge of these meshes.

Once Istio is installed in each cluster, it can now federate across any of the services running in those clusters. The retailer can leverage Istio’s advanced routing to easily communicate across meshes, and leverage mTLS encryption for more secure service-to-service communications across environments. And they can do this while using different identity and infrastructure providers for the different on-premises and public cloud service meshes.

 

Consistent Operations and Security for Cloud-Native Applications and Data

In addition to enabling federation, Istio can provide discovery, observability, control, and better security for all microservices running in the PKS and GKE clusters. Teams can define traffic management rules, global security policies, and application resiliency features (e.g., circuit breaking) – and apply those uniformly to their cloud-native applications. Many different use cases are possible, for example, canary deployments with performance and health monitoring. Or, the retailer can create and enforce security policies that help protect customer PCI and PII data by default.
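As one example of such a traffic management rule, here is a hedged sketch of a canary rollout: an Istio VirtualService splits traffic 90/10 between two versions of a hypothetical “checkout” service (the v1/v2 subsets would be defined in a matching DestinationRule):

    # Sketch: weighted canary routing for a hypothetical "checkout" service.
    import subprocess
    import yaml

    canary = {
        "apiVersion": "networking.istio.io/v1alpha3",
        "kind": "VirtualService",
        "metadata": {"name": "checkout", "namespace": "shop"},
        "spec": {
            "hosts": ["checkout"],
            "http": [{"route": [
                {"destination": {"host": "checkout", "subset": "v1"},
                 "weight": 90},
                # Send 10% of traffic to the canary while watching its health
                # and performance; raise the weight as confidence grows.
                {"destination": {"host": "checkout", "subset": "v2"},
                 "weight": 10},
            ]}],
        },
    }

    subprocess.run(["kubectl", "apply", "-f", "-"],
                   input=yaml.safe_dump(canary), text=True, check=True)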

Federation across on-premises and public clouds, enabling consistent developer, operational, and security models for microservice applications running on Kubernetes.

 

Application Innovation with Google Cloud’s Anthos

The development teams that work on the retailer’s e-commerce application want to leverage services on Google Cloud’s Anthos to enhance the customer experience and deliver more value. The team can push inventory and sales data to Google’s BigQuery for data analytics. The business goal is to help the retailer avoid stockouts and better personalize the shopping experience for customers.

Application teams can deliver more value to their end-users by rapidly adding Google Cloud’s Anthos services to their applications – including databases, storage, data analytics, machine learning, and more.

This week at Google Cloud Next, we are discussing a proof of concept, similar to that described here. You can attend the session jointly presented by Pere Monclus, VMware CTO of Networking & Security, and Ines Envid, Group Product Manager of Google Cloud, or stop by the VMware booth to hear more about it. You may also want to head over to the Google Cloud blog to hear what the good folks at Google have to say about our partnership.

 

Some Final Words Specifically About VMware NSX Service Mesh

Independently from our collaboration with Google, the VMware service mesh team is focused on adding value above and beyond Istio and Envoy. VMware NSX Service Mesh extends the concept of a service mesh to end-users who use the microservice applications (e.g., a retailer’s customers or suppliers) and the data stores (e.g., MariaDB, MySQL, and MongoDB) and data elements the microservices interact with on behalf of these end-users. Extending the service mesh beyond services – to users and data – allows VMware enterprise customers to achieve end-to-end visibility, control, enhanced security, and confidentiality across any Kubernetes environment.

Discovery, visibility, control, and enhanced security for users, apps, and data – on ANY application or cloud services platform.

Later this year you will also be able to try NSX Service Mesh in a private beta. You can add yourself to the NSX Service Mesh beta list here.

The post VMware and Google Showcase Hybrid Cloud Deployment for Application Platform and Development Teams appeared first on Network Virtualization.

Distributed Firewall on VMware Cloud on AWS


This blog post will provide a deep dive on the distributed firewall (DFW) on VMware Cloud on AWS (VMC on AWS). Let’s start with the basic concepts of a distributed firewall:

Distributed Firewall Concepts

The distributed firewall is an essential feature of NSX Data Center: it provides the ability to wrap a virtual firewall around each virtual machine.

The virtual firewall is a stateful Layer 4 (L4) firewall: it is capable of inspecting traffic up to Layer 4 of the OSI model. In simple terms, it looks at IP addresses (source and destination) and TCP/UDP ports and filters traffic based upon these criteria.

What’s unique about our firewall is that it has a contextual view of the virtual data center. This means our distributed firewall can secure workloads based on VM criteria instead of just source and destination IP addresses.

Traditional firewalling is based on source and destination IPs – constructs that have no business logic or context into applications. Our distributed firewall can secure workloads based on smarter criteria such as the name of the virtual machine or metadata such as tags.

This enables us to build security rules based on business logic, using tags or a naming convention which, if well built, could specify physical location, business application, whether a workload is test, dev, or prod, and so on.

Using our distributed firewall, we can offer east-west firewalling within the data center and achieve micro-segmentation and ultimately reduce the impact of a potential security breach and achieve compliance targets. If you want to learn more about this concept, read the Micro-Segmentation for Dummies book.

By North-South, we mean traffic coming to and from the Internet into VMware Cloud on AWS. By East-West, we refer to traffic within the DC/cloud.

Now let’s focus on using the DFW on VMware Cloud on AWS.

Distributed Firewall on VMware Cloud on AWS

Without the distributed firewall, VMware Cloud is essentially one flat zone.

  • All VMs on a logical segment can talk to each other.
  • A VM on a logical segment can talk to a VM on a different logical segment.
  • There is no East-West security.

It might be acceptable for customers using VMware Cloud to host a specific application or for test/dev workloads, but as customers evacuate entire DCs towards VMC on AWS or want to run a DMZ within VMC, they often require some level of segmentation.

Once you open the VMC Cloud Console, you will find the following menu:

Remember that you don’t have direct access to the NSX Manager on VMware Cloud on AWS and all networking and security configuration is built either via the VMware Cloud console or via the APIs.

Distributed Firewall Sections

There are four higher-level sections of rules to break down security requirements.

These higher-level sections are simply a way to structure your security logic. You don’t have to create your security rules to fit this model, and you might want to define all rules in just one of these pre-set sections if you find it easier to organize your security rules that way.

The four higher-level sections include:

Emergency Rules: applies to temporary rules needed in emergency situations.

For example, imagine that one of your VMs has been compromised: you might want to create a rule to block traffic to/from it.

Infrastructure Rules: Applies to infrastructure rules only.

Infrastructure rules are global rules that define communications between your workloads and core/common services. For example: All Applications need to talk to the set of AD and DNS Servers.

Environment Rules: Applies to broad groups to separate zones.

Such as setting rules so that the production environment cannot reach the test environment.

Application Rules: Applies to specific application rules.

Such as allowing ‘app-1’ to talk to ‘app-2’, or blocking traffic between ‘app-3’ and ‘app-4’.

Default Rules: the default rule allows all traffic.

Traffic is evaluated by the emergency rules first, then by the infrastructure rules, the environment rules, the application rules, and finally by the default rule.

The last rule allows all traffic. Today, we cannot change the default rule from ‘allow’ to ‘deny’. With an ‘allow all traffic’ rule at the end, we are ‘blacklisting’ traffic before it: the previous rules specify which traffic is not allowed.

If we want to ‘whitelist’ (specify which traffic is allowed to transit and block everything else), we need to add a manual rule at the bottom of the rules in the “application rules” category to deny all the traffic.

Sections and Rules

Within the distributed firewall portal, you can then create sub-sections and create rules within each section.

Each rule is a standard firewall rule:

It’s nothing out of the ordinary: each firewall rule has a name, a source (here, a group called “Nico’s VM”), a destination (“any”), services (typically TCP/UDP with a port number, or ICMP), an action (Allow, Drop, or Reject), and whether we want to log to VMware Log Intelligence (our cloud logging platform).
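For those who prefer the APIs over the console, the same rule can be expressed as a small JSON payload. The sketch below follows the general shape of the NSX-T Policy API, but the host, token, and paths are illustrative placeholders; consult the VMC networking and security API reference for the real contract:

    # Hedged sketch: create/update a DFW rule via the policy API.
    import requests

    NSX_POLICY = "https://nsx-manager.example.com/policy/api/v1"  # placeholder
    TOKEN = "REPLACE-WITH-AUTH-TOKEN"                             # placeholder

    rule = {
        "display_name": "allow-nicos-vm-icmp",
        "source_groups": ["/infra/domains/cmp/groups/nicos-vm"],
        "destination_groups": ["ANY"],
        "services": ["/infra/services/ICMP-ALL"],
        "action": "ALLOW",   # or DROP / REJECT
        "logged": True,      # send rule hits to the cloud logging platform
    }

    resp = requests.put(
        f"{NSX_POLICY}/infra/domains/cmp/security-policies/application"
        "/rules/allow-nicos-vm-icmp",
        headers={"Authorization": f"Bearer {TOKEN}"},
        json=rule,
        timeout=30,
    )
    resp.raise_for_status()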

Inventory, Grouping and Tagging

Above, you saw I had created a group named “Nico’s VM”.

The concept of grouping is to enable us to create a group of objects based on criteria and apply a security policy to this group.

Tagging allows a user to assign tags to virtual machines. These tagged virtual machines can be automatically made part of a group that is used for firewall policies.

We can create grouping objects based on different criteria such as:

  • IP Address
  • VM Instance
  • Matching criteria of VM Name
  • Matching Criteria of Security Tag

Let’s try these out:

Groups based on IP address:

Group based on a single IP address or a CIDR range or both. For example, I created a group of 10.10.0.0/16 and the unique IP 10.20.34.56.

Groups based on VM instance

Group based on manually selected VMs. It has limited value, as only 5 VMs can be added to this group. For example, I created a group with a single VM called “WindowsM” (yes, that VM was supposed to be called “WindowsVM” but my fat fingers let me down).

Groups based on name

Groups based upon the names of the VMs. If you build a consistent naming convention (for example, COUNTRY-CITY-DC-PROD-APP-NUMBER), your VM (for example, UK-LONDON-DC-TEST-SQL-01A) could automatically, at creation, be attached to a security group and be assigned a number of security policies.

As its name includes TEST, there might be a rule specified in the Environment Section that this group cannot talk to PRODUCTION.

As its name contains SQL, there might be a rule specified in the Application Section that only port 1433 for SQL is allowed to that VM.

There are multiple advantages to building policies like this: 1) from day 1, the VM is protected; 2) you have security consistency; and 3) your security rules read more like business logic (instead of trying to work out what IP addresses represent).

In the example below, I have built a group called “webSG”. If a VM has “web” in its name, it will automatically become part of the group and only 443 or 80 traffic will be allowed.

All the VMs below were automatically added to the group. In fact, WebServer-4 was created after I had created the group, and it was added to the group automatically and secured at birth.
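In API form, the membership criteria for a group like “webSG” is just a small expression. Again a hedged sketch: the Condition fields follow the NSX-T Policy API, while the host, token, and path are placeholders:

    # Hedged sketch: a group whose members are all VMs with "web" in the name.
    import requests

    group = {
        "display_name": "webSG",
        "expression": [{
            "resource_type": "Condition",
            "member_type": "VirtualMachine",
            "key": "Name",
            "operator": "CONTAINS",  # future matching VMs join automatically
            "value": "web",
        }],
    }

    resp = requests.put(
        "https://nsx-manager.example.com/policy/api/v1"  # placeholder host
        "/infra/domains/cmp/groups/webSG",
        headers={"Authorization": "Bearer REPLACE-WITH-AUTH-TOKEN"},
        json=group,
        timeout=30,
    )
    resp.raise_for_status()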

Groups based on tags

Tags have become very popular in the cloud era and have been especially useful in the AWS world for operating a cloud platform at scale. Tags can be used to identify cloud entities and to verify costs and charges based upon these tags. Tags can represent business units, tenants, countries, environments (test, dev, prod), and so on.

In the lab I shared with the other Cloud Solution Architects, I decided to tag my VMs with my name (nico).

As soon as the group is created, I can immediately see which members are part of the group and if it’s used anywhere (View References will tell me if it’s used by any firewall rule).

 

In practical terms, this helps users greatly simplify their security policies and build security policy from metadata, instead of leveraging IP addresses as in ‘traditional’ firewalls.

VMware Cloud on AWS Resources

Check out my personal blog #RunVMC for more on VMC on AWS. Here are a few deep dive posts:

Hands-on Lab (HOL): Test drive for free, no installation required!

The post Distributed Firewall on VMware Cloud on AWS appeared first on Network Virtualization.

gRPC-Web and Istio: A Report from Service Mesh Day


In this post I’ll briefly describe the problem in the gRPC domain and a solution, based on gRPC-Web, the Envoy proxy, and Istio, that neatly solves it.

What is gRPC?

gRPC is a universal, high-performance, open-source RPC framework based on HTTP/2. Essentially, it lets you easily define a service using Protocol Buffers (Protobufs), works across multiple languages and platforms, and is simple to set up and scale. All this leads to better network performance and flexible API management.
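To give a feel for the model, here is a minimal sketch in Python. The bookstore service, its proto file, and the generated module names are hypothetical, but the pattern is standard gRPC: define the service in Protobufs, generate stubs, and implement the servicer:

    # Assume a bookstore.proto compiled with grpcio-tools:
    #   syntax = "proto3";
    #   service Bookstore { rpc GetBook (BookRequest) returns (Book); }
    #   message BookRequest { string id = 1; }
    #   message Book { string id = 1; string title = 2; }
    # protoc generates bookstore_pb2 / bookstore_pb2_grpc (names derived
    # from the proto file; hypothetical here).
    from concurrent import futures

    import grpc
    import bookstore_pb2
    import bookstore_pb2_grpc

    class Bookstore(bookstore_pb2_grpc.BookstoreServicer):
        def GetBook(self, request, context):
            # A real service would look the book up in a data store.
            return bookstore_pb2.Book(id=request.id, title="Example Title")

    server = grpc.server(futures.ThreadPoolExecutor(max_workers=4))
    bookstore_pb2_grpc.add_BookstoreServicer_to_server(Bookstore(), server)
    server.add_insecure_port("[::]:50051")
    server.start()
    server.wait_for_termination()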

Benefits of gRPC-Web

gRPC-Web addresses a shortcoming in the core gRPC framework that developers hit as they look to extend its advantages beyond backend microservices: gRPC doesn’t work so well with web applications running in browsers. Although most browsers support HTTP/2 and gRPC is based on HTTP/2, gRPC has its own protocol that web applications must understand in order to work with it, and browsers don’t support gRPC out of the box.

One way to get around this problem is to use the gRPC-Web plugin and run a proxy like Envoy along with it. Envoy serves as the default proxy for Istio, and on configuring its gRPC-Web filter, it can transcode HTTP requests/responses into gRPC requests/responses for you.
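The relevant piece of the Envoy configuration is small. Here is a hedged sketch of the HTTP filter chain fragment, generated from Python for convenience; filter names vary by Envoy version (older builds call the first one “envoy.grpc_web”), and each filter may additionally require a typed_config stanza:

    # Sketch: the gRPC-Web filter sits in front of the router and transcodes
    # browser-friendly gRPC-Web calls into native gRPC for the backend.
    import yaml

    http_filters = [
        {"name": "envoy.filters.http.grpc_web"},  # gRPC-Web <-> gRPC
        {"name": "envoy.filters.http.cors"},      # browsers need CORS
        {"name": "envoy.filters.http.router"},    # then route as usual
    ]

    print(yaml.safe_dump({"http_filters": http_filters}))

With Istio, you typically don’t hand-edit this at all: the mesh configures the sidecar’s filter chain for you, which is exactly what the demo described below relied on.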


Service Mesh Day

The first-ever service mesh conference, Service Mesh Day, was hosted by Tetrate, with support from Google and CNCF, on 29th March 2019 in San Francisco. The conference attracted over 200 attendees, including investors, customers, engineers, and industry leaders.

At the conference I spoke about the relevance of Service Mesh technologies, particularly Istio and Envoy, for gRPC workloads in my talk Seamless Cloud-Native Apps with gRPC-Web and Istio.

In my talk, I showed a demo of a web app that uses gRPC-Web on the front end, and a simple Go backend exposing a gRPC API. I deployed the application on Istio, which configured the Envoy proxy and the gRPC-Web filter automagically. The front end and backend talked seamlessly, while Istio provided additional observability into the system via Grafana metrics, Jaeger tracing, and the Kiali service graph.

The bottom line here is that by switching over to a gRPC-Web and Istio paradigm, developing a cloud-native application becomes a relatively seamless experience.

Resources on gRPC-Web and Service Mesh

The post gRPC-Web and Istio: A Report from Service Mesh Day appeared first on Network Virtualization.
