
2 Ways to Integrate the Jaeger App with VMware Tanzu Observability Without Code Changes

In a microservices architecture, identifying performance issues, including latency, requires monitoring each service and all the communication between services. Jaeger and VMware Tanzu Observability can help.

Jaeger is an open source, distributed tracing system released by Uber Technologies. VMware Tanzu Observability is a high-performance streaming analytics platform that supports 3D observability (metrics, histograms, and traces/spans). The Jaeger client is compatible with OpenTracing, so any application using an OpenTracing interface can easily be moved to the Jaeger client, and it is this client that sends span metrics to a Wavefront proxy. Integrating Jaeger with Tanzu Observability lets you visualize application traces and spot errors or performance issues, so anyone who wants to monitor a distributed application for performance and latency optimization can use this integration.

In this post, we will start by running a simple Jaeger app that will take a string argument and print it in reverse. The app will call one service that reverses the string passed to it, and another that prints it.

We will then demonstrate two ways to configure our Jaeger app to communicate with Tanzu Observability without making any code changes. Then we will cover how to see those spans in the Tanzu Observability user interface (UI).

Sending span metrics

There are two ways the Jaeger client can send span metrics to a Wavefront proxy:

  1. Directly to the Wavefront proxy, via HTTP
  2. To the Jaeger agent, which will batch the metrics and send them to the Wavefront proxy via gRPC

Let’s use our simple app, which will take one string argument and print it in reverse, for our next steps:

package main

import (
   "fmt"
   "io"
   "log"
   "os"
   "time"

   opentracing "github.com/opentracing/opentracing-go"
   jaeger "github.com/uber/jaeger-client-go"
   config "github.com/uber/jaeger-client-go/config"
)

var APP_NAME string = "myFirstAppWithTraces"




func main() {
   if len(os.Args) != 2 {
      panic("ERROR: Expecting one argument")
   }
   name := os.Args[1]
   reverseAndPrint(name)
}

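// reverseAndPrint starts the parent span for the trace, then calls the two child operations that reverse and print the string.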
func reverseAndPrint(s string) {
   t, closer := initJaeger("main-span")
   mainSpan := t.StartSpan("main-span")
   defer func() {
      mainSpan.Finish()
      closer.Close()
   }()
   mainSpan.SetTag("application", APP_NAME)
   nameInReverse := reverseString(s, mainSpan)
   printString(nameInReverse, mainSpan)
}

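// reverseString opens a child span of parentSpan and returns s reversed; the Sleep simulates work so the span has a visible duration.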
func reverseString(s string, parentSpan opentracing.Span) string {
   t, closer := initJaeger("reverse-string")
   reverseStringSpan := t.StartSpan("reverse-string", opentracing.ChildOf(parentSpan.Context()))
   defer func() {
      reverseStringSpan.Finish()
      closer.Close()
   }()
   time.Sleep(2 * time.Second)
   reverseStringSpan.SetTag("application", APP_NAME)
   revStr := ""
   for i := len(s) - 1; i >= 0; i-- {
      revStr = revStr + string(s[i])
   }
   return revStr
}




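// printString opens a child span of parentSpan and prints the reversed string.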
func printString(s string, parentSpan opentracing.Span) {
   t, closer := initJaeger("print-string")
   printSpan := t.StartSpan("print-string", opentracing.ChildOf(parentSpan.Context()))
   defer func() {
      printSpan.Finish()
      closer.Close()
   }()
   time.Sleep(2 * time.Second)
   printSpan.SetTag("application", APP_NAME)
   helloStr := fmt.Sprintf("Your name in reverse is %s", s)
   println(helloStr)
}

// initJaeger returns an instance of Jaeger Tracer that samples 100% of traces and logs all spans to stdout.
func initJaeger(service string) (opentracing.Tracer, io.Closer) {
   cfg, err := config.FromEnv()
   if err != nil {
      log.Fatalf("error while reading config from env: %v", err)
   }
   cfg.ServiceName = service
   cfg.Sampler.Type = "const"
   cfg.Sampler.Param = 1

   tracer, closer, err := cfg.NewTracer(config.Logger(jaeger.StdLogger))
   if err != nil {
      panic(fmt.Sprintf("ERROR: cannot init Jaeger: %v\n", err))
   }
   return tracer, closer
}

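Because initJaeger calls config.FromEnv, the tracer picks up its reporter settings from the JAEGER_* environment variables we set in the next sections, which is why no code changes are needed. If you would rather pin the destination in code, the jaeger-client-go config package also exposes reporter fields; the sketch below is not part of the sample app, and the proxy and agent addresses in it are placeholders:

// initJaegerExplicit is a hypothetical alternative to initJaeger that configures
// the reporter in code instead of reading the JAEGER_* environment variables.
func initJaegerExplicit(service string) (opentracing.Tracer, io.Closer) {
   cfg := &config.Configuration{
      ServiceName: service,
      Sampler: &config.SamplerConfig{
         Type:  "const", // sample every trace, as in initJaeger
         Param: 1,
      },
      Reporter: &config.ReporterConfig{
         // Report spans straight to the Wavefront proxy's Jaeger HTTP listener...
         CollectorEndpoint: "http://proxy-host:14268/api/traces",
         // ...or comment the line above out and report to a Jaeger agent over UDP:
         // LocalAgentHostPort: "jaeger-agent-host:6831",
         LogSpans: true,
      },
   }
   tracer, closer, err := cfg.NewTracer(config.Logger(jaeger.StdLogger))
   if err != nil {
      panic(fmt.Sprintf("ERROR: cannot init Jaeger: %v\n", err))
   }
   return tracer, closer
}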

Sending metrics directly via HTTP

To send the metrics directly to the Wavefront proxy via HTTP:

  1. In a Wavefront proxy config file, add traceJaegerHttpListenerPorts=14268 (a sample config snippet follows this list)

  2. Restart the proxy

  3. Set the environment variable JAEGER_ENDPOINT to the Wavefront proxy host and the port entered in Step 1, as shown below

export JAEGER_ENDPOINT=http://proxy-host:14268/api/traces
go run main.go John

  4. Run the app
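
For reference, the proxy configuration from Step 1 might end up looking something like the snippet below. The file path, server URL, and token are placeholders; they depend on how your proxy was installed.

# wavefront.conf (location varies by install, e.g. /etc/wavefront/wavefront-proxy/)
server=https://{cluster}.wavefront.com/api/
token={api-token}
# accept Jaeger spans sent over HTTP on this port (Step 1)
traceJaegerHttpListenerPorts=14268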

Sending metrics via the Jaeger agent

To send the metrics using the Jaeger agent: 

  1. In a Wavefront proxy config file, add traceJaegerGrpcListenerPorts=14250 (if your proxy runs in Docker, see the sketch after this list)

  2. Restart the proxy

  3. Start the Jaeger agent

docker run \
  --rm \
  -p 6831:6831/udp \
  jaegertracing/jaeger-agent:1.22 \
  --reporter.grpc.host-port={proxy-host}:14250

  4. Set the environment variables JAEGER_AGENT_HOST and JAEGER_AGENT_PORT to point at the Jaeger agent

export JAEGER_AGENT_HOST={host-of-jaeger-agent}
export JAEGER_AGENT_PORT=6831
go run main.go John

  5. Run the app
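
If your Wavefront proxy runs as a container rather than from an edited config file, the setting from Step 1 can instead be passed on the command line. The sketch below assumes the standard wavefronthq/proxy image; the cluster URL and API token are placeholders, and port 14250 must be published so the Jaeger agent can reach the proxy:

docker run -d \
  -e WAVEFRONT_URL=https://{cluster}.wavefront.com/api/ \
  -e WAVEFRONT_TOKEN={api-token} \
  -e WAVEFRONT_PROXY_ARGS="--traceJaegerGrpcListenerPorts 14250" \
  -p 2878:2878 \
  -p 14250:14250 \
  wavefronthq/proxy:latest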

Viewing spans in the Tanzu Observability UI 

To view the spans in the Tanzu Observability user interface: 

  1. Log in to {cluster}.wavefront.com

  2. Under the Applications tab, go to Traces

  3. In the Operation search box, type your application name

  4. Select your app and click “Search”

  5. View the spans reported by your application

Now you’ve seen how changing just a few configuration settings in an existing application, without touching its code, lets you visualize spans in the Tanzu Observability UI and track down errors and performance problems.

Do check out our official documentation on distributed tracing for details on how you can use these metrics for performance and latency optimization. You can also try out this integration with a VMware Tanzu Observability free trial.