Observability Pipelines


Overview

[Graphic: data being aggregated from a variety of sources, processed and enriched by the Observability Pipelines Worker in your own environment, and then routed to the security, analytics, and storage destinations of your choice.]

Datadog Observability Pipelines allows you to collect and process logs and metrics (metrics support is in Preview) within your own infrastructure, and then route the data to different destinations. It gives you control over your observability data before it leaves your environment.

With out-of-the-box templates, you can build pipelines that redact sensitive data, enrich data, filter out noisy events, and route data to destinations like Datadog, SIEM tools, or cloud storage.
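As a conceptual illustration only (not the product's actual API), the filtering and redaction processors described above behave roughly like the sketch below; the record fields, processor names, and pipeline runner are assumptions made for the example:

```python
import re

# Hypothetical stand-ins for pipeline processors: each function takes a
# log record (a dict) and returns a transformed record, or None to drop it.

EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")

def redact_pii(record):
    """Replace email addresses in the message with a placeholder."""
    record = dict(record)  # avoid mutating the caller's record
    record["message"] = EMAIL.sub("[REDACTED]", record["message"])
    return record

def filter_noise(record):
    """Drop low-value debug logs before they are stored."""
    return None if record.get("status") == "debug" else record

def run_pipeline(records, processors):
    """Run each record through the processor chain; dropped records are skipped."""
    for record in records:
        for proc in processors:
            record = proc(record)
            if record is None:
                break
        else:
            yield record

logs = [
    {"status": "info", "message": "login by alice@example.com"},
    {"status": "debug", "message": "cache miss"},
]
out = list(run_pipeline(logs, [filter_noise, redact_pii]))
# The debug log is dropped and the email is redacted.
```

The real Worker expresses these steps as configured processors rather than code, but the ordering idea is the same: filters run early so later processors do less work.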

Key components

Observability Pipelines Worker

The Observability Pipelines Worker runs within your infrastructure to aggregate, process, and route data.

Datadog recommends you update Observability Pipelines Worker (OPW) with every minor and patch release, or, at a minimum, monthly.

Upgrading to a major OPW version and keeping it updated is the only supported way to get the latest OPW functionality, fixes, and security updates. See Upgrade the Worker to update to the latest Worker version.
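As a rough sketch of what a single-container Worker deployment can look like: the image name is real, but the exact environment variables, port, and flags for your pipeline come from the in-app install instructions, so treat the values below as placeholders.

```shell
# Hedged example only: copy the API key, pipeline ID, and site values
# from the install page in the Datadog UI; names may differ by version.
docker run -i \
  -e DD_API_KEY=<YOUR_DATADOG_API_KEY> \
  -e DD_OP_PIPELINE_ID=<YOUR_PIPELINE_ID> \
  -e DD_SITE=<YOUR_DATADOG_SITE> \
  -p 8282:8282 \
  datadog/observability-pipelines-worker run
```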

Observability Pipelines UI

The Observability Pipelines UI provides a centralized control plane where you can:

  • Build and edit pipelines with guided templates.
  • Deploy and manage Workers.
  • Enable monitors to track pipeline health.

Get started

  1. Navigate to Observability Pipelines.
  2. Select a template based on your use case.
  3. Set up your pipeline:
    1. Choose a log source.
    2. Configure processors.
    3. Add one or more destinations.
  4. Install the Worker in your environment.
  5. Enable monitors for real-time observability into your pipeline health.

See Set Up Pipelines for detailed instructions.

Common use cases and templates

Observability Pipelines includes prebuilt templates for common data routing and transformation workflows. You can fully customize or combine them to meet your needs.

[Image: the Observability Pipelines UI showing the eight templates.]

Templates

  • Archive Logs: Store raw logs in Amazon S3, Google Cloud Storage, or Azure Storage for long-term retention and rehydration.
  • Dual Ship Logs: Send the same log stream to multiple destinations (for example, Datadog and a SIEM).
  • Generate Log-based Metrics: Convert high-volume logs into count or distribution metrics to reduce storage needs.
  • Log Enrichment: Add metadata from reference tables or static mappings for more effective querying.
  • Log Volume Control: Reduce indexed log volume by filtering low-value logs before they're stored.
  • Sensitive Data Redaction: Detect and remove personally identifiable information (PII) and secrets using built-in or custom rules.
  • Split Logs: Route logs by type (for example, security vs. application) to different tools.

Metric Tag Governance is in Preview. Fill out the form to request access.

  • Metric Tag Governance: Manage the quality and volume of your metrics by keeping only the metrics you need, standardizing metric tagging, and removing unwanted tags to prevent high cardinality.
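To make the Generate Log-based Metrics idea concrete, here is a toy sketch of collapsing request logs into a per-service count and duration distribution; the field names (`service`, `duration_ms`) are invented for the example and are not the product's schema:

```python
from collections import Counter

def logs_to_metrics(logs):
    """Collapse raw request logs into a count metric and a latency
    distribution per service, instead of indexing every log line."""
    counts = Counter()
    durations = {}
    for log in logs:
        key = log["service"]
        counts[key] += 1
        durations.setdefault(key, []).append(log["duration_ms"])
    return counts, durations

logs = [
    {"service": "web", "duration_ms": 12},
    {"service": "web", "duration_ms": 48},
    {"service": "api", "duration_ms": 30},
]
counts, durations = logs_to_metrics(logs)
# Three log lines become one count and one distribution per service.
```

Emitting two small metric series in place of many indexed log lines is what makes this template useful for storage reduction.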

See Explore templates for more information.

Further Reading

Additional helpful documentation, links, and articles:

  • Simplify log collection and aggregation for MSSPs with Datadog Observability Pipelines (Blog)
  • Manage metric volume and tags in your environment with Observability Pipelines (Blog)
  • Set up pipelines (Documentation)
  • Explore use cases and templates (Documentation)
  • Install the Observability Pipelines Worker (Documentation)
  • Dual shipping with Observability Pipelines (Documentation)
  • Strategies for Reducing Log Volume (Documentation)
  • Redact sensitive data from your logs on-prem by using Observability Pipelines (Blog)
  • Dual ship logs with Datadog Observability Pipelines (Blog)
  • Control your log volumes with Datadog Observability Pipelines (Blog)
  • Archive your logs with Observability Pipelines for a simple and affordable migration to Datadog (Blog)
  • Aggregate, process, and route logs easily with Datadog Observability Pipelines (Blog)
  • Stream logs in the OCSF format to your preferred security vendors or data lakes with Observability Pipelines (Blog)
  • Simplify your SIEM migration to Microsoft Sentinel with Datadog Observability Pipelines (Blog)
  • How state, local, and education organizations can manage logs flexibly and efficiently using Datadog Observability Pipelines (Blog)
  • How to optimize high-volume log data without compromising visibility (Blog)
  • Search your historical logs more efficiently with Datadog Archive Search (Blog)
  • Store and search logs at petabyte scale in your own infrastructure with Datadog CloudPrem (Blog)
  • Control logging costs on any SIEM or data lake using Packs with Observability Pipelines (Blog)
  • Use OpenTelemetry with Observability Pipelines for vendor-neutral log collection and cost control (Blog)