By Adam Hevenor, Product Manager, Pivotal Container Services


Editor’s note: On February 26th, 2019, VMware renamed VMware PKS to VMware Enterprise PKS. To learn more about the change, read here.

The Docker logging driver offers only a limited set of capabilities, capturing whatever containers write to STDOUT and STDERR. Kubernetes leaves it to users to manage the transport of logs off the cluster. On public clouds, this typically means using hosted logging systems like Google Stackdriver or Amazon CloudWatch. Operators who want to adopt a multi-cloud strategy and standardize this transport, however, have been forced to develop their own logging solutions.

Logging is one of the three pillars of observability (metrics and traces being the other two), and it is essential to many operational functions related to security, auditing, and monitoring. This post describes how VMware PKS uses a set of custom resources to provide a unified logging experience and how you can use those resources to monitor Kubernetes events.

The Cloud Native Computing Foundation (CNCF) has adopted Fluentd and Fluent Bit as incubating projects to help address this common problem. To further simplify and enhance the experience for operators, VMware PKS standardizes on the syslog message format defined in RFC 5424, sent over TCP, by packaging Fluent Bit with a syslog output plugin along with controllers for interacting with Fluent Bit and the Kubernetes API.

With VMware PKS, you can set a “sink” for a cluster by using either kubectl or the VMware PKS CLI. To set up a sink for a namespace, you use kubectl.


Getting to Know Sink Resources by Using kubectl

When you set up a sink for a namespace by applying a YAML file like the one below with kubectl, logs are captured and scoped to that namespace, along with important Kubernetes API events.

 
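Here is a minimal sketch of such a sink definition. The apps.pivotal.io/v1beta1 API version and the spec field names follow the PKS 1.3 documentation, but verify them against your release; the host and port are placeholders for your own syslog server.

    apiVersion: apps.pivotal.io/v1beta1
    kind: Sink
    metadata:
      name: demo-sink              # any name you choose
      namespace: demo-namespace    # logs and events are scoped to this namespace
    spec:
      type: syslog                 # RFC 5424 syslog over TCP
      host: logs.example.com       # placeholder: your syslog server
      port: 514                    # placeholder: your server's TCP port
      enable_tls: true             # encrypt transport to the destination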

When the YAML file is applied with kubectl, the Kubernetes API creates a new sink resource and updates the sink and event controllers so that logs and events for that namespace are securely forwarded to the specified host and port.
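Creating and verifying the sink is ordinary kubectl work. Assuming the manifest above is saved as sink.yaml and that the custom resource's plural name is sinks, the flow looks like this:

    # create the sink resource
    kubectl apply -f sink.yaml
    # confirm it exists in the namespace
    kubectl get sinks --namespace demo-namespace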

Using the VMware PKS CLI

We have simplified the experience of capturing logs for an entire cluster as well. You can use the VMware PKS CLI to ensure all logs and events for a cluster are captured for security or audit purposes by creating a cluster-level sink:

 
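The command takes the cluster name and a syslog destination. The sketch below follows the PKS 1.3 CLI syntax as documented; run pks create-sink --help to confirm the arguments for your version:

    # forward all logs and events for the cluster over TLS-secured syslog
    pks create-sink my-cluster syslog-tls://logs.example.com:514

    # list the sinks attached to the cluster
    pks sinks my-cluster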

Understanding Kubernetes API Events for Monitoring

Kubernetes API events can help you monitor cluster activity without direct knowledge of the workloads in the cluster. The following provides a rough guide to detecting and making sense of Kubernetes API events.


  • ImagePullBackOff – Image pull backoffs occur when a node cannot reach a registry to retrieve a container image, or when the image does not exist in the registry. This can happen if the node tries to access a registry that is not available on the network (e.g., Docker Hub is blocked by a firewall), if the registry is experiencing an outage, or if the specified image was deleted or never uploaded. The issue is indicated in a sink by the string “Error:ErrImagePull.” Here’s an example:

 
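You can trigger this condition deliberately by running a pod whose image doesn’t exist; the pod and image names below are made up for illustration:

    # start a pod that references a nonexistent image
    kubectl run pull-demo --image=example.com/missing/image:1.0 --restart=Never
    # watch the resulting events; the same messages flow to the sink
    kubectl get events --field-selector involvedObject.name=pull-demo --watch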

  • CrashLoopBackOff – A crash loop backoff means the container starts but keeps exiting, so Kubernetes backs off before restarting it again. It results in the string “Back-off restarting failed container” appearing in the sink. You should review any recent logs for that workload to better understand the cause of the crash; see the example below:

 
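A container that exits as soon as it starts reproduces the pattern; this manifest is purely illustrative:

    # crash-demo.yaml: the container exits nonzero, so Kubernetes restarts it
    # with increasing back-off, producing CrashLoopBackOff events
    apiVersion: v1
    kind: Pod
    metadata:
      name: crash-demo
    spec:
      containers:
      - name: crasher
        image: busybox
        command: ["/bin/sh", "-c", "exit 1"]

Apply it with kubectl apply -f crash-demo.yaml, then check the pod’s events for the back-off message.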

  • ContainerCreated – The successful scheduling of a container produces a series of k8s.events like the sequence shown below, ending with the final string “Started container.”

 
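For a healthy workload, the stream looks roughly like this. The event reasons are standard Kubernetes ones, but the exact message text varies by version, so treat the output as illustrative:

    # list the events for a pod (here named my-pod) in time order
    kubectl get events --field-selector involvedObject.name=my-pod
    #
    # illustrative output:
    #   Scheduled   Successfully assigned default/my-pod to worker-node-1
    #   Pulling     Pulling image "nginx"
    #   Pulled      Successfully pulled image "nginx"
    #   Created     Created container
    #   Started     Started container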

  • FailedScheduling – This occurs when a pod cannot be scheduled, for example because no node has enough free resources. The event includes a string such as “Insufficient cpu.” Here’s an example that triggers it:

 
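You can surface this event by requesting more CPU than any node in the cluster offers; the manifest below is deliberately oversized and purely illustrative:

    # big-request.yaml: no node can satisfy a 1,000-core request, so the
    # scheduler emits FailedScheduling with "Insufficient cpu"
    apiVersion: v1
    kind: Pod
    metadata:
      name: big-request
    spec:
      containers:
      - name: app
        image: nginx
        resources:
          requests:
            cpu: "1000"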

Getting Started

With VMware PKS version 1.3, all of this is available today. Read the documentation to see all the commands and arguments. All that’s needed is your own syslog server, or you can try VMware Log Intelligence with a 30-day free trial.