Logs Management: Ingestion Pipeline Overview

Learn about Log Management Ingestion Pipelines on the Logit.io platform

Written by Lee Smith

What are Ingestion Pipelines?

Ingestion pipelines are a series of processes and operations used to collect, process, and prepare data for storage or analysis in a data repository or data warehouse. Ingestion pipelines are a crucial part of data management and analytics workflows, especially in large-scale and complex data environments.

Working with Logs Management Ingestion Pipelines

Depending on your account subscription, ingestion pipelines can be found in three separate places under the initial ‘Overview’ dashboard. This help article focuses only on ingestion pipelines in relation to Logs Management.

Log Management Ingestion Pipeline

Firstly, ingestion pipelines can be found under the ‘Log Management’ section. This shows the ingestion pipelines relating to your Logs Management.

The ingestion pipeline allows you to optimise your data processing by employing Logstash inputs and filters, enabling you to modify and enhance your log data. You can also view the health of all the service nodes. By clicking the ellipsis menu at the top right you can ‘configure inputs’, view ‘firewall rules’, and use the ‘filter editor’.

Configure Inputs

If you wish to work with your Logstash inputs, select ‘configure inputs’ from the ellipsis menu at the top right. Here you can locate, copy, and set up the Logstash endpoint information to ship data to your Stack via Logstash.
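
As a rough sketch, a sending Logstash instance could ship events to your Stack with a TCP output similar to the one below. The hostname and port are placeholders only; the real values should be copied from the Configure Inputs page for your Stack.

output {
  tcp {
    host => "your-stack-id.logit.io"   # placeholder - use the endpoint shown under Configure Inputs
    port => 18000                      # placeholder - use the port shown for your chosen input type
    codec => json_lines                # sends each event as a single JSON line
  }
}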

Firewall Rules

To view and configure your Logstash firewall rules, select ‘firewall rules’ from the ellipsis menu at the top right. From here you can set up Firewall Groups to protect your Logstash endpoints. This is important because Firewall Groups improve Stack security by restricting the IP addresses that can send data to specific ports on your Stacks.

Filter Editor

To work with your Logstash filter, choose ‘filter editor’ from the ellipsis menu at the top right. From here you can modify, test, and update the Logstash pipelines associated with this Stack, or initiate a restart of your Logstash instance for this Stack.
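
To give a feel for the kind of pipeline you might test and save in the filter editor, the sketch below parses Apache-style access log lines and uses the event’s own timestamp. The field names and patterns are illustrative rather than anything Logit.io requires.

filter {
  grok {
    match => { "message" => "%{COMBINEDAPACHELOG}" }    # parse combined Apache access log lines
  }
  date {
    match => [ "timestamp", "dd/MMM/yyyy:HH:mm:ss Z" ]  # set @timestamp from the parsed event time
  }
}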

Logstash Input Nodes

Logstash Input Nodes refer to components or configurations that define how Logstash collects data from various sources. Logstash is a data collection and processing tool that allows you to ingest data from different input sources, perform transformations on that data, and then send it to an output destination. Here you can view each separate instance, its health, and memory details. To work with your Logstash inputs, choose Configure Inputs from the dropdown.
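
For context, the input definitions that these nodes run look broadly like the sketch below (a Beats listener plus a raw TCP listener). On Logit.io the actual inputs for your Stack are managed for you and exposed through Configure Inputs, so the ports here are purely illustrative.

input {
  beats {
    port => 5044          # listens for Filebeat, Metricbeat and other Beats traffic
  }
  tcp {
    port => 18000         # illustrative raw TCP listener
    codec => json_lines   # expects one JSON event per line
  }
}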

Kafka Nodes

Kafka nodes, also known as Kafka brokers, are part of an Apache Kafka cluster and are responsible for receiving, storing, and serving data in Kafka topics. Kafka nodes ensure fault tolerance and scalability for data streaming. Here you can monitor each individual instance and its health.

Logstash Filter Nodes

The Logstash filtering mechanism is an integral part of its processing pipeline. Filtering is achieved through filter plugins, which can be added to Logstash configurations to modify, enrich, or transform incoming data before it is sent to the output. These individual filter plugins/instances can be viewed here, along with their health and memory details. To work with your Logstash filters, choose Filter Editor from the dropdown.
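
To give a feel for what these filter plugins do, the sketch below enriches events with a static field and GeoIP data. The environment and clientip fields are hypothetical and assume the events have already been parsed accordingly.

filter {
  mutate {
    add_field => { "environment" => "production" }   # hypothetical static enrichment field
  }
  geoip {
    source => "clientip"                             # assumes a clientip field was parsed earlier
  }
}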
