VMware Fusion Kubernetes

Posted by admin

VMware Brings Kubernetes to Fusion 12 and Workstation 16 Releases. PALO ALTO, Calif., August 21, 2020 – VMware, Inc. (NYSE: VMW), a leading innovator in enterprise software, today unveiled the newest versions of its VMware Fusion and VMware Workstation desktop hypervisor solutions. The release also adds support for Windows 10 20H2, Ubuntu 20.10, RHEL 8.3 and Fedora 33, and introduces a new Fusion health check feature: when Fusion is running on macOS 10.15, the ‘Pipe Broken’ / ‘cannot connect to /dev/vmmon’ issue can be fixed with a single click.


A number of months back, I wrote an article looking at the Kubernetes in Docker (KinD) service now provided in VMware Fusion 12. In a nutshell, this allows us to stand up a Kubernetes environment very quickly using the Nautilus Container Engine, backed by a very lightweight virtual machine (CRX) based on VMware Photon OS. In this post, I want to extend that experience and demonstrate how to stand up a simple Nginx deployment. First, we will do a simple deployment; then we will extend it to use a LoadBalancer service (leveraging MetalLB).

This post will not cover how to launch the Container Engine or KinD with Fusion, since both are covered in the previous post. Instead, we will focus on deploying an Nginx web server. First, let’s look at a sample deployment and service for the Nginx application. Here is a simple manifest which describes two objects: a deployment with two replicas (Pods), and a service. The two are linked through spec.selector.matchLabels. There is a single container image, which presents its web service via port 80.
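The original manifest is not reproduced here; a minimal sketch of such a deployment and service (the object names and the app: nginx label are illustrative) would look like this:

```yaml
# Sketch: a Deployment with two Nginx replicas and a ClusterIP Service,
# linked via matching labels (spec.selector.matchLabels / spec.selector).
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80    # the container's web service
---
apiVersion: v1
kind: Service
metadata:
  name: nginx-service
spec:
  selector:
    app: nginx                 # must match the Pod labels above
  ports:
  - port: 80
    targetPort: 80
```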

Assuming that I have once again used VMware Fusion to launch the Container Engine and KinD, I can apply the above manifest via kubectl create or kubectl apply from my macOS terminal. Next, I will look at what objects are created. I should see a deployment, two Pods, two endpoints and a service.
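Assuming the manifest is saved locally (the filename nginx.yaml here is illustrative), the apply-and-inspect sequence is:

```shell
# Create the deployment and service, then list the objects that were created.
kubectl apply -f nginx.yaml
kubectl get deployments,pods,endpoints,services -o wide
```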

As we can see, the deployment and two Pods are up and running. What is interesting to observe is the networking configuration. The idea behind a deployment is that there can be multiple Pods to provide the service, in this case an Nginx web server. If one of the Pods fails, the other Pod continues to provide the functionality.

Each of the Pods gets its own IP address (e.g. 10.244.0.13, 10.244.0.14) from the Pod network range. These IP addresses are also assigned to the endpoints, which can be referenced by the service.

Similarly, the idea of creating a Service is to provide a “front-end” or “virtual” IP address from the Service network range to access the deployment (e.g. 10.97.25.54). It gets its own unique IP address so that clients of the web server can avoid using the Pod IPs/Endpoints directly. If clients used the Pod IP addresses, they would lose connectivity to the application (e.g. the web server) if that Pod failed. If connectivity is made via the Service, there is no loss of connectivity if a Pod fails, as the Service redirects the connection to the other Pod IP address/Endpoint.

When a service is created, it typically gets (1) a virtual IP address, (2) a DNS entry and (3) networking rules that ‘proxy’ or redirect the network traffic to the Pod/Endpoint that actually provides the service. When that virtual IP address receives traffic, the traffic is redirected to the correct back-end Pod/Endpoint.

Let’s test the deployment and verify that the web service is running. At present, there is no route from my macOS host to either the Pod network (10.244.0.0) or the service network (10.97.25.0). In order to reach them, I can add static routes using the IP address of the KinD node as the gateway. You can get the KinD node IP address by simply running docker ps.
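The node IP can be retrieved from Docker directly; the container name kind-control-plane below is KinD’s default and may differ in your setup:

```shell
# List running containers to identify the KinD node...
docker ps --format 'table {{.Names}}\t{{.Image}}'

# ...then pull its IP address from the container's network settings.
docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' kind-control-plane
```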

Now that the IP address of the KinD node has been identified, we can use it as the gateway when adding routes to the Pod network and the Service network. We can then test that the web server is running by using curl to retrieve the index.html landing page.
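On macOS, the route and the curl tests would look something like the following; the Pod IPs and the /24 netmask are taken from my example and will differ per setup:

```shell
# Route the Pod network via the KinD node (replace <kind-node-ip> with the
# address obtained from docker inspect above).
sudo route -n add -net 10.244.0.0/24 <kind-node-ip>

# Fetch the Nginx landing page from each Pod directly.
curl http://10.244.0.13
curl http://10.244.0.14
```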

This looks good – we can get the Nginx web server landing page from both Pods. Let’s now check accessibility via the service. First, let’s remove the route to the Pods, and then add the route to the Service.
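With the same macOS route syntax, swapping the routes and testing the ClusterIP might look like this (addresses as observed earlier):

```shell
# Remove the Pod network route, then route the Service network instead.
sudo route -n delete -net 10.244.0.0/24
sudo route -n add -net 10.97.25.0/24 <kind-node-ip>

# The ClusterIP answers regardless of which Pod serves the request.
curl http://10.97.25.54
```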

Excellent, everything appears to be working as expected. However, we would not normally allow external clients to access the ClusterIP directly as shown here. We would typically set up a LoadBalancer service, which creates an EXTERNAL-IP. This is presently set to none, as per the service output seen earlier. We will configure the LoadBalancer using MetalLB. There are only a few steps needed: (1) deploy the MetalLB namespace manifest, (2) deploy the MetalLB objects manifest and (3) create and deploy a ConfigMap with the range of Load Balancer / External IP addresses. Steps 1 and 2 are covered in the MetalLB Installation page. Step 3 is covered in the MetalLB Configuration page. Below are the steps taken from my KinD setup. Note that the range of Load Balancer IP addresses that I chose is 192.168.1.1 to 192.168.1.250, as per the ConfigMap.
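For reference, a MetalLB layer-2 ConfigMap carrying that address range (using the namespace and name from MetalLB’s configuration documentation) looks like this:

```yaml
# Layer-2 address pool: MetalLB hands out external IPs from this range.
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      - 192.168.1.1-192.168.1.250
```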
Now there is only a single change needed to my Nginx manifest, and that is to add spec.type: LoadBalancer to the Service.
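As a sketch (object names are illustrative, matching the earlier manifest sketch), the amended Service section would read:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx-service
spec:
  type: LoadBalancer     # the one change: ask MetalLB for an external IP
  selector:
    app: nginx
  ports:
  - port: 80
    targetPort: 80
```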
Let’s again query the objects created from this manifest; we should see that the Service now has both a ClusterIP and an EXTERNAL-IP populated. The EXTERNAL-IP should match the first address in the range provided in MetalLB’s ConfigMap, which it does (192.168.1.1).
This is now the IP address that should be used by external clients to access the web service. However, as before, there is no route from my desktop to this network, so I need to add a static route, once again using the KinD node as the gateway.
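Again using macOS route syntax (gateway placeholder as before):

```shell
# Route the MetalLB / external IP network via the KinD node, then test
# the EXTERNAL-IP that MetalLB assigned to the service.
sudo route -n add -net 192.168.1.0/24 <kind-node-ip>
curl http://192.168.1.1
```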

Everything is working as expected. Hopefully that has given you a good idea of how you can use KinD in VMware Fusion (and indeed VMware Workstation) to become familiar with Kubernetes.
