
Commit ddf59a8

Robert Fekete authored and committed
Updates generated docs
1 parent 58005e1 commit ddf59a8

9 files changed (+186 -2 lines)

content/docs/configuration/crds/v1beta1/common_types.md

Lines changed: 41 additions & 0 deletions
@@ -40,6 +40,9 @@ Metrics defines the service monitor endpoints

### prometheusRules (bool, optional) {#metrics-prometheusrules}

### prometheusRulesOverride ([]PrometheusRulesOverride, optional) {#metrics-prometheusrulesoverride}

### serviceMonitor (bool, optional) {#metrics-servicemonitor}

@@ -50,6 +53,44 @@ Metrics defines the service monitor endpoints

## PrometheusRulesOverride

### alert (string, optional) {#prometheusrulesoverride-alert}

Name of the alert. Must be a valid label value. Only one of `record` and `alert` must be set.

### annotations (map[string]string, optional) {#prometheusrulesoverride-annotations}

Annotations to add to each alert. Only valid for alerting rules.

### expr (*intstr.IntOrString, optional) {#prometheusrulesoverride-expr}

PromQL expression to evaluate.

### for (*v1.Duration, optional) {#prometheusrulesoverride-for}

Alerts are considered firing once they have been returned for this long. +optional

### keep_firing_for (*v1.NonEmptyDuration, optional) {#prometheusrulesoverride-keep_firing_for}

KeepFiringFor defines how long an alert will continue firing after the condition that triggered it has cleared. +optional

### labels (map[string]string, optional) {#prometheusrulesoverride-labels}

Labels to add or overwrite.

### record (string, optional) {#prometheusrulesoverride-record}

Name of the time series to output to. Must be a valid metric name. Only one of `record` and `alert` must be set.
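These override fields mirror the rule fields of the Prometheus Operator's PrometheusRule resource. A minimal sketch of overriding one of the generated alerting rules from a `Logging` resource; the placement under `spec.fluentd.metrics` and the `FluentdNodeDown` alert name are assumptions for illustration:

{{< highlight yaml >}}
apiVersion: logging.banzaicloud.io/v1beta1
kind: Logging
metadata:
  name: example
spec:
  controlNamespace: logging
  fluentd:
    metrics:
      prometheusRules: true
      prometheusRulesOverride:
        - alert: FluentdNodeDown   # assumed name of a generated alert to override
          for: 30m
          labels:
            severity: warning
          annotations:
            summary: Fluentd has been down on the node for more than 30 minutes
{{</ highlight >}}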
## BufferMetrics
5495

5596
BufferMetrics defines the service monitor endpoints

content/docs/configuration/crds/v1beta1/fluentbit_types.md

Lines changed: 1 addition & 0 deletions
@@ -749,6 +749,7 @@ Configurable TTL for K8s cached namespace metadata. (15m)

Include Kubernetes namespace labels on every record

Default: On

### Regex_Parser (string, optional) {#filterkubernetes-regex_parser}

content/docs/configuration/crds/v1beta1/logging_types.md

Lines changed: 8 additions & 0 deletions
@@ -34,6 +34,14 @@ Namespace for cluster wide configuration resources like ClusterFlow and ClusterO

Default flow for unmatched logs. This Flow configuration collects all logs that didn't match any other Flow.

### enableDockerParserCompatibilityForCRI (bool, optional) {#loggingspec-enabledockerparsercompatibilityforcri}

Enables a log parser that is compatible with the docker parser. This has the following benefits (see the example below):

- automatically parses JSON logs using the Merge_Log feature
- downstream parsers can use the `log` field instead of the `message` field, just like with the docker runtime
- the `concat` and `parser` filters are automatically set back to use the `log` field.
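A minimal sketch of enabling this compatibility mode on a `Logging` resource; the resource name and the rest of the spec are illustrative:

{{< highlight yaml >}}
apiVersion: logging.banzaicloud.io/v1beta1
kind: Logging
metadata:
  name: example
spec:
  controlNamespace: logging
  enableDockerParserCompatibilityForCRI: true
  fluentbit: {}
  fluentd: {}
{{</ highlight >}}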
### enableRecreateWorkloadOnImmutableFieldChange (bool, optional) {#loggingspec-enablerecreateworkloadonimmutablefieldchange}

EnableRecreateWorkloadOnImmutableFieldChange enables the operator to recreate the fluentbit daemonset and the fluentd statefulset (and possibly other resources in the future) in case there is a change in an immutable field that otherwise couldn't be managed with a simple update.

content/docs/configuration/crds/v1beta1/loggingroute_types.md

Lines changed: 0 additions & 1 deletion
@@ -60,7 +60,6 @@ Enumerate all loggings with all the destination namespaces expanded

## LoggingRoute

-LoggingRoute (experimental)

Connects a log collector with log aggregators from other logging domains and routes relevant logs based on watch namespaces

### (metav1.TypeMeta, required) {#loggingroute-}

content/docs/configuration/crds/v1beta1/syslogng_output_types.md

Lines changed: 9 additions & 0 deletions
@@ -11,6 +11,10 @@ SyslogNGOutputSpec defines the desired state of SyslogNGOutput

### elasticsearch (*output.ElasticsearchOutput, optional) {#syslogngoutputspec-elasticsearch}

### elasticsearch-datastream (*output.ElasticsearchDatastreamOutput, optional) {#syslogngoutputspec-elasticsearch-datastream}

Available in Logging operator version 4.9 and later.

### file (*output.FileOutput, optional) {#syslogngoutputspec-file}


@@ -37,6 +41,11 @@ Available in Logging operator version 4.4 and later.
### mongodb (*output.MongoDB, optional) {#syslogngoutputspec-mongodb}

### opentelemetry (*output.OpenTelemetryOutput, optional) {#syslogngoutputspec-opentelemetry}

Available in Logging operator version 4.9 and later.

### openobserve (*output.OpenobserveOutput, optional) {#syslogngoutputspec-openobserve}

Available in Logging operator version 4.5 and later.

content/docs/configuration/plugins/outputs/forward.md

Lines changed: 4 additions & 0 deletions
@@ -111,6 +111,10 @@ Server definitions at least one is required [Server](#fluentd-server)

The threshold for chunk flush performance check. Parameter type is float, not time, default: 20.0 (seconds). If chunk flush takes longer than this threshold, fluentd logs a warning message and increases the metric fluentd_output_status_slow_flush_count.

### time_as_integer (bool, optional) {#forwardoutput-time_as_integer}

Format forwarded events time as an epoch Integer with second resolution. Useful when forwarding to old (<= 0.12) Fluentd servers.
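A hedged sketch of a forward `Output` that enables this option when the target runs an old Fluentd; the server layout follows the Server section referenced above, and the host and name values are placeholders:

{{< highlight yaml >}}
apiVersion: logging.banzaicloud.io/v1beta1
kind: Output
metadata:
  name: forward-legacy
spec:
  forward:
    servers:
      - host: old-fluentd.example.com
        port: 24224
    time_as_integer: true  # send second-resolution epoch timestamps for <= 0.12 servers
{{</ highlight >}}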
### tls_allow_self_signed_cert (bool, optional) {#forwardoutput-tls_allow_self_signed_cert}

Allow self signed certificates or not.

content/docs/configuration/plugins/outputs/kafka.md

Lines changed: 6 additions & 1 deletion
@@ -33,7 +33,7 @@ spec:

## Configuration

## Kafka

-Send your logs to Kafka
+Send your logs to Kafka. Set `use_rdkafka` to `true` to use the rdkafka2 client, which offers higher performance than ruby-kafka.

### ack_timeout (int, optional) {#kafka-ack_timeout}

@@ -240,6 +240,11 @@ Use default for unknown topics
Default: false

### use_rdkafka (bool, optional) {#kafka-use_rdkafka}

Use rdkafka2 instead of the legacy kafka2 output plugin. This plugin requires fluentd image version v1.16-4.9-full or higher.
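A hedged sketch of a Kafka `Output` that switches to the rdkafka2 client; the broker address and topic are placeholders:

{{< highlight yaml >}}
apiVersion: logging.banzaicloud.io/v1beta1
kind: Output
metadata:
  name: kafka-output
spec:
  kafka:
    brokers: kafka-headless.kafka.svc.cluster.local:29092
    default_topic: topic
    use_rdkafka: true   # requires fluentd image v1.16-4.9-full or newer
    format:
      type: json
    buffer:
      tags: topic
      timekey: 1m
      timekey_wait: 30s
{{</ highlight >}}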
### username (*secret.Secret, optional) {#kafka-username}

Username when using PLAIN/SCRAM SASL authentication
Lines changed: 49 additions & 0 deletions
@@ -0,0 +1,49 @@

---
title: Elasticsearch datastream
weight: 200
generated_file: true
---

## Overview

Based on the [ElasticSearch datastream destination of AxoSyslog](https://axoflow.com/docs/axosyslog-core/chapter-destinations/configuring-destinations-elasticsearch-datastream/).

Available in Logging operator version 4.9 and later.

## Example

{{< highlight yaml >}}
apiVersion: logging.banzaicloud.io/v1beta1
kind: SyslogNGOutput
metadata:
  name: elasticsearch-datastream
spec:
  elasticsearch-datastream:
    url: "https://elastic-endpoint:9200/my-data-stream/_bulk"
    user: "username"
    password:
      valueFrom:
        secretKeyRef:
          name: elastic
          key: password
{{</ highlight >}}

## Configuration
## ElasticsearchDatastreamOutput

### (HTTPOutput, required) {#elasticsearchdatastreamoutput-}

### disk_buffer (*DiskBuffer, optional) {#elasticsearchdatastreamoutput-disk_buffer}

This option enables putting outgoing messages into the disk buffer of the destination to avoid message loss in case of a system failure on the destination side. For details, see the [Syslog-ng DiskBuffer options](../disk_buffer/).

Default: false

### record (string, optional) {#elasticsearchdatastreamoutput-record}

Arguments to the `$format-json()` template function. Default: `"--scope rfc5424 --exclude DATE --key ISODATE @timestamp=${ISODATE}"`
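A hedged sketch showing the configuration options above applied to the earlier example; the disk-buffer sub-fields are assumptions based on the linked DiskBuffer page, and the values are placeholders:

{{< highlight yaml >}}
apiVersion: logging.banzaicloud.io/v1beta1
kind: SyslogNGOutput
metadata:
  name: elasticsearch-datastream
spec:
  elasticsearch-datastream:
    url: "https://elastic-endpoint:9200/my-data-stream/_bulk"
    disk_buffer:
      reliable: true            # assumed DiskBuffer option, see the linked page
      disk_buf_size: 1073741824
    record: "--scope rfc5424 --exclude DATE --key ISODATE @timestamp=${ISODATE}"
{{</ highlight >}}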
Lines changed: 68 additions & 0 deletions
@@ -0,0 +1,68 @@

---
title: OpenTelemetry output
weight: 200
generated_file: true
---

## Overview

Sends messages over OpenTelemetry GRPC. For details on the available options of the output, see the [documentation of AxoSyslog](https://axoflow.com/docs/axosyslog-core/chapter-destinations/opentelemetry/).

Available in Logging operator version 4.9 and later.

## Example

A simple example sending logs over OpenTelemetry GRPC to a remote OpenTelemetry endpoint:

{{< highlight yaml >}}
kind: SyslogNGOutput
apiVersion: logging.banzaicloud.io/v1beta1
metadata:
  name: otlp
spec:
  opentelemetry:
    url: otel-server
    port: 4379
{{</ highlight >}}

## Configuration
## OpenTelemetryOutput

### (Batch, required) {#opentelemetryoutput-}

Batching parameters

<!-- FIXME -->

### auth (*Auth, optional) {#opentelemetryoutput-auth}

Authentication configuration, see the [documentation of the AxoSyslog syslog-ng distribution](https://axoflow.com/docs/axosyslog-core/chapter-destinations/destination-syslog-ng-otlp/#auth).

### channel_args (filter.ArrowMap, optional) {#opentelemetryoutput-channel_args}

Add GRPC Channel arguments https://axoflow.com/docs/axosyslog-core/chapter-destinations/opentelemetry/#channel-args

<!-- FIXME -->

### compression (*bool, optional) {#opentelemetryoutput-compression}

Enable or disable compression.

Default: false

### disk_buffer (*DiskBuffer, optional) {#opentelemetryoutput-disk_buffer}

This option enables putting outgoing messages into the disk buffer of the destination to avoid message loss in case of a system failure on the destination side. For details, see the [Syslog-ng DiskBuffer options](../disk_buffer/).

Default: false

### url (string, required) {#opentelemetryoutput-url}

Specifies the hostname or IP address and optionally the port number of the web service that can receive log data via HTTP. Use a colon (:) after the address to specify the port number of the server. For example: `http://127.0.0.1:8000`
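A hedged sketch combining the options above with the earlier example; the port-in-url form follows the `url` field description, while the batch and disk-buffer sub-fields are assumptions based on the linked AxoSyslog pages:

{{< highlight yaml >}}
kind: SyslogNGOutput
apiVersion: logging.banzaicloud.io/v1beta1
metadata:
  name: otlp
spec:
  opentelemetry:
    url: otel-server:4379
    compression: true       # enable compression on the GRPC channel
    batch_lines: 1000       # assumed batching option: flush after this many messages
    disk_buffer:
      reliable: true        # assumed DiskBuffer option, see the linked page
      disk_buf_size: 1073741824
{{</ highlight >}}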
