Sending AWS CloudWatch Logs to VMware Log Intelligence

This article was written by Senior Technical Account Manager Nico Guerrera. See his full bio at the bottom of this article.

As more and more customers move workloads to the cloud, we at VMware want to make sure they can leverage their investment in our software to manage and streamline their day-to-day operations, no matter which cloud provider(s) they choose. To that end, this blog is meant to guide customers who have an Amazon AWS footprint and want to send their logs and events from AWS to a central log repository. In this case, we will be using VMware Log Intelligence as our event destination, so that we can access all of our logs and events, from any public or private cloud, in one central SaaS log aggregation tool.

This blog assumes that you have a working knowledge of AWS, CloudWatch, and installing the CloudWatch agent on your AWS EC2 instances, so we will not go into detail on the AWS side of the logging configuration. Amazon's documentation does a very good job of covering the basics of managing CloudWatch in EC2 and is a good place to start.

Importing the AWS Lambda Function to Forward Logs to Log Intelligence

Our log forwarding solution uses a simple AWS Lambda function to forward logs from AWS CloudWatch or CloudTrail to Log Intelligence via its ingestion API. The Lambda function we will be using has already been written by our development teams at VMware and is hosted on GitHub at this URL:

https://github.com/vmware/vmware-log-collectors-for-aws

  • Once the code is downloaded and unzipped, open the ‘src’ directory; it contains two .js files that we will copy into Lambda.

  • In the AWS management console, navigate to ‘Compute -> Lambda’ and create a new function from scratch. I named the function ‘li_log_collector_aws’ and chose Node.js 8.10 as the runtime.

  • Hit the ‘Create function’ button, and on the next screen you should have the function designer and function code editor open. Under the function code editor, you should already have an ‘index.js’ file. Create a second file called ‘lint.js’ and paste the corresponding code from the GitHub download into each file. Your final product should look like the screenshot below.

[Screenshot: the Lambda function designer and code editor, with ‘index.js’ and ‘lint.js’ in place]

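To make the moving parts clearer, here is a minimal sketch of what a handler like this does with a CloudWatch Logs subscription event. This is illustrative only, not the repo’s actual code; the payload handling follows the standard CloudWatch Logs subscription format.

    // Illustrative sketch only -- the real logic lives in the repo's
    // index.js and lint.js. CloudWatch Logs delivers its payload
    // base64-encoded and gzipped under event.awslogs.data.
    const zlib = require('zlib');

    exports.handler = async (event) => {
      const compressed = Buffer.from(event.awslogs.data, 'base64');
      const payload = JSON.parse(zlib.gunzipSync(compressed).toString('utf8'));

      // payload.logGroup, payload.logStream, and payload.logEvents[]
      // are what ultimately surface as fields in Log Intelligence.
      for (const logEvent of payload.logEvents) {
        console.log(payload.logStream, logEvent.timestamp, logEvent.message);
      }
      // The real function POSTs these events to the Log Intelligence
      // ingestion endpoint instead of just logging them.
    };
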
  • Finally, to finish the function, we will need an API key from Log Intelligence. Log in to your Log Intelligence instance and navigate to the ‘Configuration -> API Keys’ section to generate a new key.

  • Click ‘New API key’, name your key and then click ‘Create’.

  • A new key will be generated. Copy the key to your clipboard and navigate back to your Lambda function in AWS. Underneath ‘Function code’, create a new environment variable called ‘LogIntelligence_API_Token’ and paste the API key into the value field.

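For reference, a Node.js Lambda reads that variable from process.env at runtime. The snippet below is a minimal sketch that assumes the token is sent as a Bearer authorization header; check the repo’s source for the exact header it builds.

    // Sketch: reading the API key set above and building request
    // headers. Bearer-style auth is an assumption here; the repo's
    // index.js defines the exact format it sends.
    const headers = {
      'Content-Type': 'application/json',
      Authorization: `Bearer ${process.env.LogIntelligence_API_Token}`,
    };
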
  • I had some initial trouble testing the function, which turned out to be caused by a bad Log Intelligence ingestion URL on line 202 of ‘index.js’. I had to change the URL to the one that is provided during API key generation.

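In practice, that means editing the URL constant in ‘index.js’ so it points at your instance’s ingestion endpoint. Both the variable name and the value below are placeholders; substitute the URL displayed when your API key was generated.

    // Around line 202 of index.js: replace the default ingestion URL.
    // The value here is a placeholder -- use the URL from the API key
    // generation screen in Log Intelligence.
    const ingestionUrl = 'https://YOUR-INGESTION-URL-FROM-API-KEY-SCREEN';
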
Adding a CloudWatch Trigger to the Log Collector Function

 

Now that we have our function, we need to add a trigger from a CloudWatch log stream. There are many ways to configure a trigger to forward logs and events, but I chose a basic one that just forwards the messages file from an EC2 Amazon Linux instance via one stream. We just have to click ‘Add trigger’ in our function and choose ‘CloudWatch Logs’, then our log group, which is ‘/var/log/messages’ for me, and then give the filter a name. Once the filter is created, logs should start appearing in Log Intelligence from CloudWatch via our function.

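If you would rather script the trigger than click through the console, the same subscription can be created with the AWS SDK for JavaScript. This is a rough sketch; the region, account ID, and names are placeholders based on this walkthrough.

    // Sketch: creating the CloudWatch Logs -> Lambda subscription with
    // the AWS SDK (v2) instead of the console. Region, account ID, and
    // names below are placeholders.
    const AWS = require('aws-sdk');
    const logs = new AWS.CloudWatchLogs({ region: 'us-east-1' });

    logs.putSubscriptionFilter({
      logGroupName: '/var/log/messages',  // the log group to forward
      filterName: 'li-forwarder',         // any name for the filter
      filterPattern: '',                  // empty pattern = send everything
      destinationArn: 'arn:aws:lambda:us-east-1:123456789012:function:li_log_collector_aws',
    }).promise()
      .then(() => console.log('Subscription filter created'))
      .catch(console.error);

Note that CloudWatch Logs also needs permission to invoke the function; the console’s ‘Add trigger’ flow grants this automatically, while the SDK route requires a separate Lambda addPermission call first.
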
Checking Log Intelligence for our CloudWatch Logs and Creating a Dashboard

  • Now that our CloudWatch logs are flowing into Log Intelligence, let’s look for them in Log Intelligence and make a basic dashboard. My log stream from EC2 is called ‘AmazonAMI’, so let’s add a field to query on the ‘logstream’ parameter and look for any matches.

  • Run the query, and we should get matches from our EC2 instance in AWS. The query for the past 5 minutes shows that I logged in as the ec2-user account and restarted the sshd daemon. If we look at the event details, we can see that the event was logged in the messages file, from the AmazonAMI log stream.

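For context, the decoded payload our function receives for an event like this looks roughly as follows (all values are illustrative); the logGroup and logStream fields are exactly what we queried on above.

    {
      "messageType": "DATA_MESSAGE",
      "owner": "123456789012",
      "logGroup": "/var/log/messages",
      "logStream": "AmazonAMI",
      "subscriptionFilters": ["li-forwarder"],
      "logEvents": [
        {
          "id": "34622316099697884706540976068822859012661220141643892546",
          "timestamp": 1550000000000,
          "message": "sshd[2346]: Received signal 15; terminating."
        }
      ]
    }
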
  • Now that we have our logs in Log Intelligence, let’s save this query and create a dashboard from it. It will show us all the events from the messages file of our EC2 instance. Save the query by clicking the floppy disk icon in the query browser, then go to the dashboard section of Log Intelligence and click ‘Create Dashboard’. Filter for the saved query and give the dashboard a name.

  • Once the dashboard is created, it should show up in your ‘My Dashboards’ section. Now that we have a basic dashboard, we can drill down if we see any spikes in log activity or zoom in on a certain date and time if we are looking for any anomalies.

Conclusion

Hopefully, this helps to show how powerful Log Intelligence can be as a log aggregation tool. We can pull in logs from AWS, or any cloud for that matter, and store them in one central location for root cause analysis or troubleshooting, without the operational overhead that comes with an on-premises solution. There is no need to jump between your logging tools for VMware, AWS, Linux, VMC, etc., wasting time trying to work with different formats, interfaces, and data. We make it easy to ingest data from AWS into Log Intelligence by providing the functions and tools; you just need to provide the data!

Nico Guerrera is a senior technical account manager who has been with VMware since 2016. He is a captain for the cloud management TAM tech lead team and focuses on vRealize Log Insight and VMware Log Intelligence. Nico has 13 years of experience working with VMware technology and is also an avid Linux/open-source software enthusiast.