
Support Policy

This is an enablement project created by the Center of Excellence - Enablement Team at Dynatrace.

Support is provided via GitHub issues only. The materials provided in this repository are offered "as-is" without any warranties, express or implied. Use them at your own risk.

What's this tutorial all about

In this tutorial we'll learn the basics of AI & LLM monitoring.

Dynatrace supports AI and LLM Observability for more than 40 technologies, providing visibility into the different layers of AI and LLM applications.

  • Monitor service health and performance: Track real-time metrics (request counts, durations, and error rates). Stay aligned with SLOs.
  • Monitor service quality and cost: Implement error budgets for performance and cost control. Validate model consumption and response times. Prevent quality degradation by monitoring models and usage patterns in real time.
  • End-to-end tracing and debugging: Trace prompt flows from the initial request to the final response for quick root cause analysis and troubleshooting. Gain granular visibility into LLM prompt latencies and model-level metrics. Pinpoint issues in prompts, tokens, or system integrations (a tracing sketch follows this list).
  • Build trust, reduce compliance and audit risks: Track every input and output for an audit trail. Query all data in real time and store for future reference. Maintain full data lineage from prompt to response.
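
To make the tracing point concrete, here is a minimal sketch of wrapping an LLM call in an OpenTelemetry span so that a backend such as Dynatrace (via an OTLP exporter, not configured here) can chart latency, error rates, and model attributes. This is not code from this repository: the `gen_ai.*` attribute names follow the evolving OpenTelemetry generative-AI semantic conventions, and `call_model` is a hypothetical stand-in for a real model call.

```python
from opentelemetry import trace

tracer = trace.get_tracer("llm-app")

def call_model(prompt: str) -> str:
    # Placeholder for a real model call (e.g., a request to Ollama).
    return "stub response"

def ask_llm(prompt: str) -> str:
    # One span per LLM call; the span's duration, status, and the
    # attributes set below become queryable in the tracing backend.
    with tracer.start_as_current_span("llm.chat") as span:
        span.set_attribute("gen_ai.request.model", "llama3")
        span.set_attribute("gen_ai.prompt", prompt)
        response = call_model(prompt)
        span.set_attribute("gen_ai.completion", response)
        return response

print(ask_llm("Hello"))
```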

What will we do

In this tutorial we will learn how easy it is to observe an AI application that uses Ollama as the large language model, Weaviate as the vector database, and LangChain as the orchestrator for building Retrieval-Augmented Generation (RAG) and agentic AI pipelines.
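
As a preview of how those pieces fit together, the sketch below wires a minimal RAG pipeline. It is an illustrative example under stated assumptions, not the tutorial's actual code: it assumes the weaviate-client (v4), langchain-ollama, and langchain-weaviate packages are installed, Ollama and Weaviate are running locally, and a Weaviate collection named "Docs" (hypothetical) already holds document chunks.

```python
import weaviate
from langchain_ollama import ChatOllama, OllamaEmbeddings
from langchain_weaviate import WeaviateVectorStore
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.output_parsers import StrOutputParser
from langchain_core.runnables import RunnablePassthrough

# Connect to a local Weaviate instance (the vector database).
client = weaviate.connect_to_local()

# Vector store backed by Weaviate; embeddings computed by a local Ollama model.
vectorstore = WeaviateVectorStore(
    client=client,
    index_name="Docs",   # hypothetical collection name
    text_key="text",
    embedding=OllamaEmbeddings(model="nomic-embed-text"),
)
retriever = vectorstore.as_retriever()

# The local LLM served by Ollama.
llm = ChatOllama(model="llama3")

prompt = ChatPromptTemplate.from_template(
    "Answer using only this context:\n{context}\n\nQuestion: {question}"
)

def format_docs(docs):
    # Concatenate retrieved chunks into a single context string.
    return "\n\n".join(d.page_content for d in docs)

# LangChain orchestrates the pipeline: retrieve -> prompt -> generate.
chain = (
    {"context": retriever | format_docs, "question": RunnablePassthrough()}
    | prompt
    | llm
    | StrOutputParser()
)

print(chain.invoke("What does this application do?"))
client.close()
```

Each stage of this chain (retrieval from Weaviate, embedding and generation in Ollama, orchestration in LangChain) is a layer that the observability features described above can trace and measure.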