
Commit 07eeed9

Integrate: Implement changes suggested by CodeRabbit
Parent: 9efca51

File tree

6 files changed (+14 -12 lines changed)


docs/connect/mcp/index.md

Lines changed: 3 additions & 2 deletions
````diff
@@ -1,3 +1,5 @@
+(mcp)=
+(connect-mcp)=
 # Model Context Protocol (MCP)
 
 ```{toctree}
@@ -17,7 +19,7 @@ integration between LLM applications and external data sources and tools.
 MCP is sometimes described as "OpenAPI for LLMs" or as "USB-C port for AI",
 providing a uniform way to connect LLMs to resources they can use.
 
-The main entities of MCP are [prompts], [resources], and [tools].
+The main entities of MCP are [Prompts], [Resources], and [Tools].
 MCP clients call MCP servers, either by invoking them as a subprocess and
 communicating via Standard Input/Output (stdio), Server-Sent Events (sse),
 or HTTP Streams (streamable-http), see [transports].
@@ -63,7 +65,6 @@ To get in touch with us to discuss CrateDB and MCP, please head over to
 the CrateDB community forum at [Introducing the CrateDB MCP Server].
 
 
-[Community Forum]: https://community.cratedb.com/
 [CrateDB]: https://cratedb.com/database
 [CrateDB Cloud]: https://cratedb.com/docs/cloud/
 [Introducing the CrateDB MCP Server]: https://community.cratedb.com/t/introducing-the-cratedb-mcp-server/2043
````

docs/integrate/influxdb/learn.md

Lines changed: 2 additions & 1 deletion
```diff
@@ -1,3 +1,4 @@
+(influxdb-learn)=
 (import-influxdb)=
 # Import data from InfluxDB
 
@@ -46,7 +47,7 @@ data in schemas and tables.
 - A **field** is similar to an un-indexed column in an SQL database.
 - A **point** is similar to an SQL row.
 
-- [via][What are series and bucket in InfluxDB]
+> via: [What are series and bucket in InfluxDB]
 
 ## Tutorial
 
```

docs/integrate/mongodb/index.md

Lines changed: 1 addition & 1 deletion
```diff
@@ -39,7 +39,7 @@ and [MongoDB Change Streams], to relay CDC events from MongoDB into CrateDB (`cd
 * - [MongoDB CDC Relay]
   -
   -
-  - CLI `ctk load table` for streaming changes of collections into CrateDB (`full-load`).
+  - CLI `ctk load table` for streaming changes of collections into CrateDB (`cdc`).
 * - {ref}`MongoDB CDC integration <cloud:integrations-mongo-cdc>`
   -
   -
```

docs/integrate/mongodb/learn.md

Lines changed: 1 addition & 0 deletions
```diff
@@ -2,6 +2,7 @@
 (migrating-mongodb)=
 (integrate-mongodb-quickstart)=
 (import-mongodb)=
+(mongodb-learn)=
 
 # Import data from MongoDB
 
```

docs/integrate/mysql/index.md

Lines changed: 1 addition & 1 deletion
````diff
@@ -11,7 +11,7 @@
 ```{div}
 :style: "float: right; margin-left: 1em"
 
-[![mysql-logo](https://labs.mysql.com/common/logos/mysql-logo.svg?v2){w=180px}](https://www.mysql.com/)
+[![mysql-logo](https://www.mysql.com/common/logos/powered-by-mysql-167x86.png){w=180px}](https://www.mysql.com/)
 <br><br>
 [![mariadb-logo](https://mariadb.com/wp-content/themes/mariadb-2025/public/images/logo-dark.4482a1.svg){w=180px}](https://www.mariadb.com/)
 ```
````

docs/integrate/streamsets/index.md

Lines changed: 6 additions & 7 deletions
```diff
@@ -4,14 +4,13 @@
 :::{rubric} About
 :::
 
-The [StreamSets Data Collector] is a lightweight and powerful engine that
-allows you to build streaming, batch and change-data-capture (CDC) pipelines
-that can ingest and transform data from a variety of sources.
+The [StreamSets Data Collector] is a lightweight, powerful engine for building
+streaming, batch, and change data capture (CDC) pipelines that ingest and transform
+data from various sources.
 
-StreamSets Data Collector Engine makes it easy to run data pipelines from Kafka,
-Oracle, Salesforce, JDBC, Hive, and more to Snowflake, Databricks, S3, ADLS, Kafka
-and more. Data Collector Engine runs on-premises or in any cloud, wherever your data
-lives.
+Use it to run pipelines from sources such as Kafka, Oracle, Salesforce, JDBC, and Hive
+to destinations including Snowflake, Databricks, Amazon S3, and Azure Data Lake Storage (ADLS).
+It runs on-premises or in any cloud.
 
 :::{rubric} Learn
 :::
```
