
Commit 15e80bf

Integrate/Kafka: Implement suggestions by Kenneth
1 parent f1e17c4 commit 15e80bf

2 files changed: +6 −5 lines changed

docs/integrate/kafka/docker-python.md

Lines changed: 4 additions & 3 deletions
@@ -52,7 +52,7 @@ docker compose up -d

### Create a demo table in CrateDB

-The easiest way to do this is through the CrateDB cloud UI at `http://localhost:4200` and execute this using the console:
+The easiest way to do this is to open the CrateDB Admin UI at `http://localhost:4200` and execute this statement in the console:

```sql
CREATE TABLE IF NOT EXISTS sensor_readings (
@@ -63,7 +63,7 @@ CREATE TABLE IF NOT EXISTS sensor_readings (
);
```

-But this can also be done using curl:
+But this can also be done using `curl`:

```bash
curl -sS -H 'Content-Type: application/json' \
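The `curl` command above is truncated at the hunk boundary. For reference, a complete invocation against CrateDB's SQL-over-HTTP endpoint (`/_sql` on port 4200) could look like the sketch below; the statement body is a hypothetical, shortened version of the table definition, not the commit's actual command:

```bash
# Sketch only: create the demo table via CrateDB's HTTP endpoint.
# The /_sql endpoint is CrateDB's standard SQL-over-HTTP interface;
# the column list here is illustrative, not taken from the commit.
curl -sS -H 'Content-Type: application/json' \
  -X POST 'http://localhost:4200/_sql' \
  -d '{"stmt": "CREATE TABLE IF NOT EXISTS sensor_readings (sensor_id TEXT, reading DOUBLE PRECISION, ts TIMESTAMP WITH TIME ZONE)"}'
```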
@@ -75,7 +75,8 @@ curl -sS -H 'Content-Type: application/json' \

### Create a Kafka topic and send a couple of messages

-This can be done in several ways, but we can use **docker-exec** in this way:
+Creating a Kafka topic can be done in several ways; here we use
+`docker exec`:

```bash
docker exec -it kafka kafka-topics.sh --create \
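The topic-creation command is likewise cut off at the hunk boundary. Assuming the container name `kafka` from the excerpt, plus a hypothetical topic name and broker address, a complete version of this step, including sending a couple of messages with the console producer, might look like:

```bash
# Sketch only: topic name and broker address are assumptions based
# on the excerpt, not the commit's actual values.
docker exec -it kafka kafka-topics.sh --create \
  --topic sensor-readings \
  --bootstrap-server localhost:9092 \
  --partitions 1 \
  --replication-factor 1

# Send a couple of JSON messages with the console producer.
printf '%s\n' \
  '{"sensor_id": "s1", "reading": 21.5}' \
  '{"sensor_id": "s2", "reading": 19.8}' | \
  docker exec -i kafka kafka-console-producer.sh \
    --topic sensor-readings \
    --bootstrap-server localhost:9092
```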

docs/integrate/kafka/index.md

Lines changed: 2 additions & 2 deletions
@@ -58,7 +58,7 @@ This option gives you full control: you can transform data on the fly, filter ou

For more advanced pipelines, you can process events while they’re still in Kafka before they ever reach CrateDB. Frameworks like Flink, Kafka Streams, or Spark let you enrich records, join multiple streams together, run aggregations, or apply windowing functions in real time.

-The processed results are then written into CrateDB, where they’re immediately available for SQL queries and dashboards. This approach is powerful when raw events need to be cleaned, combined, or summarised before storage, though it adds more moving parts compared to a simple connector.
+The processed results are then written into CrateDB, where they’re immediately available for SQL queries and dashboards. This approach is powerful when raw events need to be cleaned, combined, or summarised before storing them, though it adds moving parts compared to a simple connector.

## Typical use cases

@@ -79,7 +79,7 @@ The processed results are then written into CrateDB, where they’re immediately

How you run Kafka and CrateDB depends a lot on your environment and preferences. The most common approaches are:

-* **Containerised on-premise** – Run both Kafka and CrateDB on Docker or Kubernetes in your own data centre or private cloud. This gives you the most control, but also means you manage scaling, upgrades, and monitoring.
+* **Containerised on-premise** – Run both Kafka and CrateDB on Docker or Kubernetes in your own data centre or private cloud. This gives you the most control, but also means you manage scaling, upgrading, and monitoring.
* **Managed Kafka services** – Use a provider such as Confluent Cloud or AWS MSK to offload the operational heavy lifting of Kafka. You can still connect these managed clusters directly to a CrateDB deployment that you operate. CrateDB is also available on the major cloud providers as well.
* **Managed CrateDB** – Crate.io offers CrateDB Cloud, which can pair with either self-managed Kafka or managed Kafka services. This option reduces database operations to a minimum.
* **Hybrid setups** – A common pattern is managed Kafka + self-managed CrateDB, or vice versa, depending on where you want to keep operational control.
