Commit 996936c

Merge pull request #253 from kube-logging/4.9-docs

4.9 docs

2 parents 0f2e267 + 8fac401, commit 996936c

13 files changed: +400 −4 lines changed
New file (the GitHub Actions workflow that publishes the 4.8 docs), 86 additions & 0 deletions:

```yaml
name: Publish version 4.8

env:
  doc_versionnumber: "4.8"

on:
  push:
    branches:
      - release-4.8
  workflow_dispatch:

jobs:
  build:
    name: Build
    runs-on: ubuntu-latest

    permissions:
      contents: write
      pages: write
      id-token: write

    concurrency:
      group: "pages"
      cancel-in-progress: false

    environment:
      name: github-pages-test
      url: ${{ steps.deployment.outputs.page_url }}

    steps:
      - name: Checkout code
        uses: actions/checkout@b4ffde65f46336ab88eb53be808477a3936bae11 # v4.1.1
        with:
          ref: release-4.8
          submodules: 'recursive'

      - name: Set up Pages
        id: pages
        uses: actions/configure-pages@1f0c5cde4bc74cd7e1254d0cb4de8d49e9068c7d # v4.0.0

      - name: Set up Hugo
        uses: peaceiris/actions-hugo@16361eb4acea8698b220b76c0d4e84e1fd22c61d # v2.6.0
        with:
          hugo-version: '0.110.0'
          extended: true

      - name: Set up Node
        uses: actions/setup-node@60edb5dd545a775178f52524783378180af0d1f8 # v4.0.2
        with:
          node-version: 18

      - name: Install dependencies
        run: |
          cd themes/docsy
          npm install

      - name: Set up PostCSS
        run: npm install --save-dev autoprefixer postcss-cli postcss

      - name: Build
        run: hugo --environment production --baseURL ${{ steps.pages.outputs.base_url }}/${{ env.doc_versionnumber }}/

      # - name: Upload artifact
      #   uses: actions/upload-pages-artifact@64bcae551a7b18bcb9a09042ddf1960979799187 # v1.0.8
      #   with:
      #     path: ./public/

      - name: Checkout code to update
        uses: actions/checkout@b4ffde65f46336ab88eb53be808477a3936bae11 # v4.1.1
        with:
          ref: 'gh-pages-test'
          path: 'tmp/gh-pages'

      # - name: Display file structure
      #   run: ls -R

      - name: Copy built site to GH pages
        run: |
          rm -rf tmp/gh-pages/${{ env.doc_versionnumber }}
          mkdir -p tmp/gh-pages/${{ env.doc_versionnumber }}
          mv public/* tmp/gh-pages/${{ env.doc_versionnumber }}

      - name: Commit & Push changes
        uses: actions-js/push@master
        with:
          github_token: ${{ secrets.GITHUB_TOKEN }}
          message: 'Publish updated docs for ${{ env.doc_versionnumber }}, ${{ github.event.repository.pushed_at }}'
          branch: 'gh-pages-test'
          directory: 'tmp/gh-pages'
```
config/_default/config.toml

Lines changed: 6 additions & 2 deletions

```diff
@@ -169,9 +169,13 @@ twitter = "AxoflowIO"
 #######################
 # Add your release versions here
 [[params.versions]]
-version = "latest (4.8.0)"
+version = "latest (4.9.0)"
 githubbranch = "master"
 url = ""
+[[params.versions]]
+version = "4.8"
+githubbranch = "release-4.8"
+url = "/4.8/"
 [[params.versions]]
 version = "4.7"
 githubbranch = "release-4.7"
@@ -204,7 +208,7 @@ twitter = "AxoflowIO"
 # Cascade version number to every doc page (needed to create sections for pagefind search)
 # Update this parameter when creating a new version
 [[cascade]]
-body_attribute = 'data-pagefind-filter="section:4.8"'
+body_attribute = 'data-pagefind-filter="section:4.9"'
 [cascade._target]
 path = '/docs/**'
```
content/docs/configuration/crds/v1beta1/common_types.md

Lines changed: 41 additions & 0 deletions

```diff
@@ -40,6 +40,9 @@ Metrics defines the service monitor endpoints
 ### prometheusRules (bool, optional) {#metrics-prometheusrules}

+### prometheusRulesOverride ([]PrometheusRulesOverride, optional) {#metrics-prometheusrulesoverride}
+
 ### serviceMonitor (bool, optional) {#metrics-servicemonitor}

@@ -50,6 +53,44 @@ Metrics defines the service monitor endpoints

+## PrometheusRulesOverride
+
+### alert (string, optional) {#prometheusrulesoverride-alert}
+
+Name of the alert. Must be a valid label value. Only one of `record` and `alert` must be set.
+
+### annotations (map[string]string, optional) {#prometheusrulesoverride-annotations}
+
+Annotations to add to each alert. Only valid for alerting rules.
+
+### expr (*intstr.IntOrString, optional) {#prometheusrulesoverride-expr}
+
+PromQL expression to evaluate.
+
+### for (*v1.Duration, optional) {#prometheusrulesoverride-for}
+
+Alerts are considered firing once they have been returned for this long. +optional
+
+### keep_firing_for (*v1.NonEmptyDuration, optional) {#prometheusrulesoverride-keep_firing_for}
+
+KeepFiringFor defines how long an alert will continue firing after the condition that triggered it has cleared. +optional
+
+### labels (map[string]string, optional) {#prometheusrulesoverride-labels}
+
+Labels to add or overwrite.
+
+### record (string, optional) {#prometheusrulesoverride-record}
+
+Name of the time series to output to. Must be a valid metric name. Only one of `record` and `alert` must be set.
+
 ## BufferMetrics

 BufferMetrics defines the service monitor endpoints
```
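As a usage sketch (not part of this commit), the new `prometheusRulesOverride` field would sit under the metrics section of a `Logging` resource to tweak an operator-generated rule. The alert name and values below are illustrative assumptions, not taken from the diff:

```yaml
apiVersion: logging.banzaicloud.io/v1beta1
kind: Logging
metadata:
  name: example
spec:
  controlNamespace: logging
  fluentd:
    metrics:
      prometheusRules: true
      # Override selected fields of a generated alerting rule.
      # The alert name "FluentdPredictedBufferGrowth" is hypothetical.
      prometheusRulesOverride:
        - alert: FluentdPredictedBufferGrowth
          for: 15m
          labels:
            severity: warning
```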

content/docs/configuration/crds/v1beta1/fluentbit_types.md

Lines changed: 1 addition & 0 deletions

```diff
@@ -749,6 +749,7 @@ Configurable TTL for K8s cached namespace metadata. (15m)

 Include Kubernetes namespace labels on every record

+Default: On

 ### Regex_Parser (string, optional) {#filterkubernetes-regex_parser}
```

content/docs/configuration/crds/v1beta1/logging_types.md

Lines changed: 8 additions & 0 deletions

```diff
@@ -34,6 +34,14 @@ Namespace for cluster wide configuration resources like ClusterFlow and ClusterO
 Default flow for unmatched logs. This Flow configuration collects all logs that didn't match any other Flow.

+### enableDockerParserCompatibilityForCRI (bool, optional) {#loggingspec-enabledockerparsercompatibilityforcri}
+
+Enables a log parser that is compatible with the docker parser. This has the following benefits:
+
+- automatically parses JSON logs using the Merge_Log feature
+- downstream parsers can use the `log` field instead of the `message` field, just like with the docker runtime
+- the `concat` and `parser` filters are automatically set back to use the `log` field
+
 ### enableRecreateWorkloadOnImmutableFieldChange (bool, optional) {#loggingspec-enablerecreateworkloadonimmutablefieldchange}

 EnableRecreateWorkloadOnImmutableFieldChange enables the operator to recreate the fluentbit daemonset and the fluentd statefulset (and possibly other resources in the future) in case there is a change in an immutable field that otherwise couldn't be managed with a simple update.
```
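A minimal sketch of enabling the new CRI compatibility flag on a `Logging` resource — the field name comes from this commit, while the rest of the spec (namespace, empty fluentbit/fluentd sections) is illustrative:

```yaml
apiVersion: logging.banzaicloud.io/v1beta1
kind: Logging
metadata:
  name: example
spec:
  controlNamespace: logging
  # Make CRI (containerd/CRI-O) log parsing behave like the docker
  # runtime: JSON logs are merged and downstream parsers see `log`.
  enableDockerParserCompatibilityForCRI: true
  fluentbit: {}
  fluentd: {}
```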

content/docs/configuration/crds/v1beta1/loggingroute_types.md

Lines changed: 0 additions & 1 deletion

```diff
@@ -60,7 +60,6 @@ Enumerate all loggings with all the destination namespaces expanded

 ## LoggingRoute

-LoggingRoute (experimental)
 Connects a log collector with log aggregators from other logging domains and routes relevant logs based on watch namespaces

 ### (metav1.TypeMeta, required) {#loggingroute-}
```

content/docs/configuration/crds/v1beta1/syslogng_output_types.md

Lines changed: 9 additions & 0 deletions

```diff
@@ -11,6 +11,10 @@ SyslogNGOutputSpec defines the desired state of SyslogNGOutput
 ### elasticsearch (*output.ElasticsearchOutput, optional) {#syslogngoutputspec-elasticsearch}

+### elasticsearch-datastream (*output.ElasticsearchDatastreamOutput, optional) {#syslogngoutputspec-elasticsearch-datastream}
+
+Available in Logging operator version 4.9 and later.
+
 ### file (*output.FileOutput, optional) {#syslogngoutputspec-file}

@@ -37,6 +41,11 @@ Available in Logging operator version 4.4 and later.
 ### mongodb (*output.MongoDB, optional) {#syslogngoutputspec-mongodb}

+### opentelemetry (*output.OpenTelemetryOutput, optional) {#syslogngoutputspec-opentelemetry}
+
+Available in Logging operator version 4.9 and later.
+
 ### openobserve (*output.OpenobserveOutput, optional) {#syslogngoutputspec-openobserve}

 Available in Logging operator version 4.5 and later.
```

content/docs/configuration/plugins/outputs/forward.md

Lines changed: 4 additions & 0 deletions

```diff
@@ -111,6 +111,10 @@ Server definitions at least one is required [Server](#fluentd-server)
 The threshold for the chunk flush performance check. The parameter type is float, not time. Default: 20.0 (seconds). If a chunk flush takes longer than this threshold, fluentd logs a warning message and increases the metric fluentd_output_status_slow_flush_count.

+### time_as_integer (bool, optional) {#forwardoutput-time_as_integer}
+
+Format forwarded events time as an epoch integer with second resolution. Useful when forwarding to old (<= 0.12) Fluentd servers.
+
 ### tls_allow_self_signed_cert (bool, optional) {#forwardoutput-tls_allow_self_signed_cert}

 Allow self-signed certificates or not.
```
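A hedged sketch of the new `time_as_integer` option on a forward `Output` — the server host and resource names are illustrative, only the field itself comes from this commit:

```yaml
apiVersion: logging.banzaicloud.io/v1beta1
kind: Output
metadata:
  name: forward-legacy
spec:
  forward:
    servers:
      - host: old-fluentd.example.com  # hypothetical legacy receiver
        port: 24224
    # Send event time as an epoch integer (second resolution) for
    # compatibility with Fluentd <= 0.12 receivers.
    time_as_integer: true
```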

content/docs/configuration/plugins/outputs/kafka.md

Lines changed: 6 additions & 1 deletion

```diff
@@ -33,7 +33,7 @@ spec:
 ## Configuration
 ## Kafka
-Send your logs to Kafka
+Send your logs to Kafka. Set `use_rdkafka` to `true` to use the rdkafka2 client, which offers higher performance than ruby-kafka.

 ### ack_timeout (int, optional) {#kafka-ack_timeout}

@@ -240,6 +240,11 @@ Use default for unknown topics
 Default: false

+### use_rdkafka (bool, optional) {#kafka-use_rdkafka}
+
+Use rdkafka2 instead of the legacy kafka2 output plugin. This plugin requires fluentd image version v1.16-4.9-full or higher.
+
 ### username (*secret.Secret, optional) {#kafka-username}

 Username when using PLAIN/SCRAM SASL authentication
```
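A usage sketch of the new `use_rdkafka` switch on a Kafka `Output` — broker address and topic are illustrative assumptions, only the new field and its image requirement come from this commit:

```yaml
apiVersion: logging.banzaicloud.io/v1beta1
kind: Output
metadata:
  name: kafka-output
spec:
  kafka:
    brokers: kafka-headless.kafka.svc.cluster.local:29092  # example broker
    default_topic: logs
    format:
      type: json
    # Switch from the ruby-kafka based kafka2 plugin to rdkafka2.
    # Requires the fluentd image v1.16-4.9-full or higher.
    use_rdkafka: true
```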
New file (the syslog-ng Elasticsearch datastream output reference), 49 additions & 0 deletions:

```markdown
---
title: Elasticsearch datastream
weight: 200
generated_file: true
---

## Overview

Based on the [ElasticSearch datastream destination of AxoSyslog](https://axoflow.com/docs/axosyslog-core/chapter-destinations/configuring-destinations-elasticsearch-datastream/).

Available in Logging operator version 4.9 and later.

## Example

{{< highlight yaml >}}
apiVersion: logging.banzaicloud.io/v1beta1
kind: SyslogNGOutput
metadata:
  name: elasticsearch-datastream
spec:
  elasticsearch-datastream:
    url: "https://elastic-endpoint:9200/my-data-stream/_bulk"
    user: "username"
    password:
      valueFrom:
        secretKeyRef:
          name: elastic
          key: password
{{</ highlight >}}

## Configuration
## ElasticsearchDatastreamOutput

### (HTTPOutput, required) {#elasticsearchdatastreamoutput-}

### disk_buffer (*DiskBuffer, optional) {#elasticsearchdatastreamoutput-disk_buffer}

This option enables putting outgoing messages into the disk buffer of the destination to avoid message loss in case of a system failure on the destination side. For details, see the [Syslog-ng DiskBuffer options](../disk_buffer/).

Default: false

### record (string, optional) {#elasticsearchdatastreamoutput-record}

Arguments to the `$format-json()` template function. Default: `"--scope rfc5424 --exclude DATE --key ISODATE @timestamp=${ISODATE}"`
```
