Overview

Red Hat OpenShift is an open-source container application platform based on the Kubernetes container orchestrator for enterprise application development and deployment. In this blog, I will show how to forward logs from OpenShift Container Platform to vRealize Log Insight Cloud (vRLIC).

Once the logs are flowing, you can create dashboards to visualize your OpenShift environment; below is a sample dashboard I created.

Prerequisites

Procedure

This procedure applies to OpenShift 4.x and has been tested with version 4.3.

The following section covers the steps for running the vRealize Log Insight Cloud Fluentd plugin as a DaemonSet.

Step 1

Generate a vRealize Log Insight Cloud API key from the vRLIC console.

Step 2

Update the fluent.conf file with the following configuration:

<source>
  @id in_tail_container_logs
  @type tail
  path <kubernetes_log_path>
  pos_file <kubernetes_log_path>/fluentd-containers.log.pos
  tag raw.kubernetes.*
  read_from_head true
  <parse>
    @type multi_format
    <pattern>
      format json
      time_key time
      time_format %Y-%m-%dT%H:%M:%S.%NZ
    </pattern>
    <pattern>
      format /^(?<time>.+) (?<stream>stdout|stderr) [^ ]* (?<log>.*)$/
      time_format %Y-%m-%dT%H:%M:%S.%N%:z
    </pattern>
  </parse>
</source>

# Detect exceptions in the log output and forward them as one log entry.
<match raw.kubernetes.**>
  @id raw.kubernetes
  @type detect_exceptions
  remove_tag_prefix raw
  message log
  stream stream
  multiline_flush_interval 5
  max_bytes 500000
  max_lines 1000
</match>

# Concatenate multi-line logs
<filter **>
  @id filter_concat
  @type concat
  key message
  multiline_end_regexp /\n$/
  separator ""
</filter>

<filter *.**>
  @type record_transformer
  <record>
    fluentdhost ${hostname}
    environment openshift
    log_type kubernetes
  </record>
</filter>

# Enriches records with Kubernetes metadata
<filter kubernetes.**>
  @id filter_kubernetes_metadata
  @type kubernetes_metadata
  watch false
</filter>

<match **>
  @type vmware_log_intelligence
  endpoint_url https://data.mgmt.cloud.vmware.com/le-mans/v1/streams/ingestion-pipeline-stream
  verify_ssl false
  <headers>
    Content-Type application/json
    Authorization Bearer <Access Key>
    structure simple
  </headers>
  <buffer>
    chunk_limit_records 300
    flush_interval 3s
    retry_max_times 3
  </buffer>
  <format>
    @type json
    tag_key text
  </format>
</match>

<kubernetes_log_path> : the full path to the Kubernetes container log files; the default is /var/log/containers/*.log
<Access Key> : the API key generated in Step 1
pos_file : ensure Fluentd has write access to this path
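The second <pattern> block above handles plain-text (CRI-style) container log lines that are not JSON. A quick way to sanity-check what it extracts is to replay the same regex in Python; note that Fluentd's (?<name>…) named groups become (?P<name>…) in Python syntax, and the sample log line here is illustrative, not taken from a real cluster:

```python
import re

# Python equivalent of the Fluentd regex in the second <pattern> block
pattern = re.compile(r'^(?P<time>.+) (?P<stream>stdout|stderr) [^ ]* (?P<log>.*)$')

# Illustrative CRI-format container log line
line = '2020-05-12T10:15:30.123456789Z stdout F container started successfully'

m = pattern.match(line)
print(m.group('time'))    # 2020-05-12T10:15:30.123456789Z
print(m.group('stream'))  # stdout
print(m.group('log'))     # container started successfully
```

The unnamed `[^ ]*` token swallows the CRI tag field (F for a full line, P for a partial one), so only the timestamp, stream, and message survive into the record.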

Step 3

Create a ConfigMap in Kubernetes from fluent.conf. The ConfigMap name must match the one referenced by the DaemonSet's config-volume in Step 4 (lint-fluent-config in this example):

kubectl --kubeconfig=<config-name> create configmap <configmap-name> --from-file=fluent.conf -n openshift-logging
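The command above produces a ConfigMap along these lines (a sketch; the data value is the full fluent.conf from Step 2, truncated here):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: lint-fluent-config
  namespace: openshift-logging
data:
  fluent.conf: |
    <source>
      @id in_tail_container_logs
      @type tail
      ...
    </source>
    ...
```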

Step 4

Create the vRLIC Fluentd DaemonSet YAML (lint-fluent.yml) with the following configuration:

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: fluentd-lint-logging
  namespace: openshift-logging
  labels:
    k8s-app: fluentd-lint-logging
    app: fluentd-lint-logging
    version: v1
    kubernetes.io/cluster-service: "true"
spec:
  selector:
    matchLabels:
      name: fluentd-lint-logging
  template:
    metadata:
      labels:
        name: fluentd-lint-logging
        app: fluentd-lint-logging
        version: v1
        kubernetes.io/cluster-service: "true"
    spec:
      serviceAccount: logcollector
      serviceAccountName: logcollector
      tolerations:
      - key: node-role.kubernetes.io/master
        effect: NoSchedule
      containers:
      - name: fluentd-lint
        image: docker.io/vmware/log-intelligence-fluentd
        command: ["fluentd"]
        env:
        - name: FLUENTD_ARGS
          value: --no-supervisor -q
        resources:
          limits:
            memory: 500Mi
          requests:
            cpu: 100m
            memory: 200Mi
        securityContext:
          privileged: true
        volumeMounts:
        - name: varlog
          mountPath: /var/log
        - name: varlogcontainers
          mountPath: /var/log/containers
        - name: varlibdockercontainers
          mountPath: /var/lib/docker/containers
          readOnly: true
        - name: config-volume
          mountPath: /etc/fluent
      volumes:
      - name: varlog
        hostPath:
          path: /var/log
      - name: varlogcontainers
        hostPath:
          path: /var/log/containers
      - name: varlibdockercontainers
        hostPath:
          path: /var/lib/docker/containers
      - name: config-volume
        configMap:
          name: lint-fluent-config
      - name: lint-fluent-volume
        emptyDir: {}
      - name: var-logs
        emptyDir: {}

serviceAccount : the YAML uses the logcollector service account, which is created when the cluster-logging operator is installed. If you plan to use a different service account, ensure it has the rights to mount the host path /var/log/containers/*.log (on OpenShift this typically means granting the account the privileged SCC, since the DaemonSet runs privileged).

Step 5

Apply the new or changed DaemonSet configuration to the cluster:

kubectl --kubeconfig=<config-name> apply -f lint-fluent.yml -n openshift-logging

If everything succeeds, you can search for the logs in vRLIC using the filter environment contains openshift.
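Under the hood, each buffered chunk is POSTed to the ingestion endpoint configured in Step 2. A minimal Python sketch of an equivalent request can be handy for smoke-testing your API key before the DaemonSet goes live; the payload shape below is a simplified assumption, not the plugin's exact wire format:

```python
import json

# Endpoint and headers mirror the <match **> section of fluent.conf.
ENDPOINT = "https://data.mgmt.cloud.vmware.com/le-mans/v1/streams/ingestion-pipeline-stream"
API_KEY = "<Access Key>"  # the key generated in Step 1

headers = {
    "Content-Type": "application/json",
    "Authorization": f"Bearer {API_KEY}",
    "structure": "simple",
}

# Simplified event shape (assumption): includes the static fields that the
# record_transformer filter stamps onto every record.
event = {
    "text": "smoke-test event",
    "environment": "openshift",
    "log_type": "kubernetes",
}
body = json.dumps([event])

# To actually send it (requires the requests package and a valid key):
# import requests
# resp = requests.post(ENDPOINT, data=body, headers=headers)
print(body)
```

A 200-range response to the real POST confirms the key and endpoint are good; after that, any ingestion problems are on the Fluentd side.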

 

Getting Started with vRealize Log Insight Cloud

For a free trial, you can sign up here or reach out to your account team.

To learn more about vRealize Log Insight Cloud, please visit the documentation here.