Commit 6aad3f1

better and simple structure and better why timeplus (#371)
1 parent: 8da53d6

File tree: 4 files changed, +75 −48 lines


docs/enterprise-v2.4.md

Lines changed: 1 addition & 1 deletion

@@ -29,7 +29,7 @@ Key highlights of this release:
 Please use the stable releases for production deployment, while we also provide latest engineering builds for testing and evaluation.
 
 ### 2.4.28 (Stable) {#2_4_28}
-Built on 08-05-2025. You can install via:
+Built on 08-12-2025. You can install via:
 * For Linux or Mac users: `curl https://install.timeplus.com/2.4 | sh` [Downloads](/release-downloads#2_4_28)
 * For Kubernetes users: `helm install timeplus/timeplus-enterprise --version v3.0.12 ..`
 * For Docker users (not for production): `docker run -p 8000:8000 docker.timeplus.com/timeplus/timeplus-enterprise:2.4.28`

docs/glossary.md

Lines changed: 1 addition & 1 deletion

@@ -1,4 +1,4 @@
-# Key Terms and Concepts
+# Concepts
 
 This page lists key terms and concepts in Timeplus, from A to Z.

docs/why-timeplus.md

Lines changed: 37 additions & 16 deletions
@@ -1,57 +1,78 @@
-# Why Timeplus?
+# What is Timeplus
+
+Timeplus is a unified real-time data processing platform built for developers who need to move, transform, and act on data fast. At the heart of Timeplus is an incremental processing engine that uses modern vectorization (SIMD), just-in-time (JIT) compilation, and advanced database internals to ingest, transform, store, and serve data with low latency and high throughput.
+
+It plugs right into the tools you already use — stream sources like Kafka, Redpanda, and Pulsar, and sinks like ClickHouse, Apache Iceberg, S3, Splunk, Elasticsearch, and MongoDB. You can easily build pipelines that consume events, run streaming ETL, joins, aggregations, filtering, and other transformations, then push the results wherever they need to go — fast.
+
+Timeplus isn’t just for streaming. It also supports scheduled batch jobs, so you can mix real-time and periodic workloads in one place. Pair it with Timeplus Alert to trigger actions on live data and manage the full lifecycle of your data applications, without duct-taping multiple tools together.
+
+## Why Timeplus
 
 Timeplus simplifies stateful stream processing and analytics with a fast, single-binary engine. Using SQL as a domain-specific language and both row and column-based state stores, it enables developers to build real-time applications, data pipelines, and analytical dashboards at the edge or in the cloud, reducing the cost, time, and complexity of multi-component stacks.
 
-## Architecture: The Best of Both Worlds {#architecture}
+### Architecture: The Best of Both Worlds {#architecture}
+
 ![overview](/img/product_diagram_web.png)

-## Unified streaming and historical data processing {#unified}
+### Unified Streaming and Historical Data Processing {#unified}
 
-Timeplus streams offer high performance, resiliency, and seamless querying by using an internal Write Ahead Log (WAL) and Historical Store. The WAL ensures ultra-fast inserts and updates, while the Historical Store, optimized for various query types, handles efficient historical queries.
+Timeplus streams deliver high performance, resiliency, and smooth querying through an internal Write-Ahead Log (WAL, called NativeLog) and a Historical Store. The WAL enables ultra-fast data ingestion, while the Historical Store — stored in either columnar or row format and enhanced with compaction and indexing — supports efficient historical range and point queries.
 
 This architecture transparently serves data to users based on query type from both, often eliminating the need for Apache Kafka as a commit log or a separate downstream database, streamlining your data infrastructure.
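The unified query model described here can be sketched in Timeplus streaming SQL. This is a minimal illustration, assuming a hypothetical stream named `clicks`; the `table()` function tells Timeplus to read only the Historical Store and return a bounded result instead of a continuous one:

```sql
-- Streaming query: runs continuously, pushing new events as they arrive
SELECT * FROM clicks;

-- Historical query: table() scans the Historical Store, then stops
SELECT count() FROM table(clicks);
```

Both queries run against the same stream; no separate Kafka topic or downstream database is involved.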

-## Append and Mutable Streams {#streams}
+### Append and Mutable Streams {#streams}
 
 Configure types of streams to optimize performance.
 
 * [Append streams:](/append-stream)
 Excel at complex aggregations, storing data in a columnar format for faster access and processing.
-* [Mutable streams:](/mutable-stream) Support UPSERTs and DELETEs, ideal for applications like Materialized Caches or GDPR compliance, using a row-based store optimized for fast data retrieval and query consistency.
 
-## Single Binary {#binary}
-Timeplus is a fast, powerful, and efficient SQL stream processing platform with no dependencies, JVM, or ZK. It runs in bare-metal or Kubernetes environments, from edge to cloud, using a single binary (~150MB).
+* [Mutable streams:](/mutable-stream) Support UPSERTs and DELETEs, ideal for applications that require high-frequency, high-cardinality data mutations, using a row-based store optimized for fast data retrieval and query consistency.
+
+### Single Binary {#binary}
+
+Timeplus is a fast, powerful, and efficient SQL stream processing platform with no dependencies, JVM, or ZK. It runs in bare-metal or Kubernetes environments, from edge to cloud, using a single binary.
 
 Timeplus scales easily from edge devices to multi-node clusters, and with its Append-Only data structures and historical stores, some use cases may not need Kafka or a separate database at all.
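A mutable stream of the kind described above is declared with a primary key; a sketch with hypothetical column names (Timeplus Enterprise syntax), where inserting a row with an existing key updates it in place rather than appending a new event:

```sql
-- Row-based store keyed by device_id; re-inserting the same key upserts the row
CREATE MUTABLE STREAM device_state
(
  device_id string,
  temperature float32,
  updated_at datetime64(3)
)
PRIMARY KEY (device_id);

INSERT INTO device_state (device_id, temperature, updated_at)
VALUES ('d1', 21.5, now64());
```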

-## Multi-JOINs and ASOF JOINs {#join}
+### Multi-JOINs and ASOF JOINs {#join}
 
 Stream processing involves combining multiple data sources, and [MULTI-JOINs](/joins) are essential for enriching and correlating events in streaming queries. Timeplus allows you to run ad-hoc historical queries on the same data, reducing the need for denormalization in downstream data warehouses.
 
 In many cases, Business Intelligence and analytical queries can be executed directly in Timeplus, eliminating the need for a separate data warehouse. [ASOF JOINs](/joins) enable approximate time-based lookups for comparing recent versus historical data.
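An ASOF JOIN of this kind can be sketched as follows, assuming hypothetical `trades` and `quotes` streams: each trade is matched with the most recent quote at or before the trade's timestamp, rather than requiring an exact key match.

```sql
-- For every trade, find the latest quote whose ts is <= the trade's ts
SELECT t.symbol, t.price, q.bid, q.ask
FROM trades AS t
ASOF JOIN quotes AS q
ON t.symbol = q.symbol AND t.ts >= q.ts;
```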

-## Python and JavaScript UDF {#udf}
+### Python and JavaScript UDF {#udf}
+
 We understand that SQL may not be able to express all business logic for streaming or querying. [JavaScript](/js-udf) and [Python](/py-udf) User Defined Functions (UDFs) and User Defined Aggregate Functions (UDAFs) can be used to extend Timeplus to encapsulate custom logic for both stateless and stateful queries.
 
 With Python UDFs, this opens up the possibility to bring in pre-existing and popular libraries, including data science and machine learning libraries!

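As a sketch of registering a scalar JavaScript UDF in SQL (the function name and the query's stream are hypothetical; Timeplus JS UDFs receive a batch of input values and return one result per input):

```sql
CREATE OR REPLACE FUNCTION double_it(value float32)
RETURNS float32
LANGUAGE JAVASCRIPT AS $$
  function double_it(values) {
    // called with a batch of inputs; return an array of the same length
    return values.map(v => v * 2);
  }
$$;

-- use it like any built-in function (car_live_data is a hypothetical stream)
SELECT double_it(speed_kmh) FROM car_live_data;
```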
-## External Stream, External Table {#external}
+### External Stream, External Table {#external}
+
 We want to simplify the experience of joining data from Apache Kafka and writing results out to data warehouses such as ClickHouse, or another Timeplus instance. Timeplus implements native integration with these systems in timeplusd via EXTERNAL STREAM (with [Kafka](/proton-kafka) and [Timeplus](/timeplus-external-stream)) and [EXTERNAL TABLE (with ClickHouse)](/proton-clickhouse-external-table). No need to deploy yet another connector component.
 
 We understand that we cannot do this for all systems and for that, we have Timeplus Connector, which can be configured to integrate with hundreds of other systems if needed.
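A Kafka external stream can be created and queried directly in SQL; a minimal sketch (broker address, topic, and column layout are hypothetical):

```sql
-- Read a Kafka topic in place; no separate connector deployment needed
CREATE EXTERNAL STREAM frontend_events (raw string)
SETTINGS type = 'kafka',
         brokers = 'kafka:9092',
         topic = 'events';

SELECT raw FROM frontend_events;
```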

-## Collection {#collection}
-With built-in External Streams and External Tables, Timeplus can natively collect real-time data from, or send data to, Kafka, Redpanda, ClickHouse, or another Timeplus instance, without any data duplication.
+### Collection {#collection}
+
+With built-in External Streams and External Tables, Timeplus can natively collect real-time data from, or send data to, Kafka, Redpanda, ClickHouse, or another Timeplus instance, without duplicating data in yet another place.
 
 Timeplus also supports a wide range of data sources through sink/source connectors. Users can push data from files (CSV/TSV), via native SDKs in Java, Go, or Python, JDBC/ODBC, Websockets, or REST APIs.

-## Transformation {#transformation}
+### Transformation {#transformation}
+
 With a powerful streaming SQL console, users can leverage their preferred query language to create Streams, Views, and incremental Materialized Views. This enables them to transform, roll up, join, correlate, enrich, aggregate, and downsample real-time data, generating meaningful outputs for real-time alerting, analytics, or any downstream systems.
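An incremental materialized view for downsampling can be sketched like this, assuming a hypothetical `clicks` stream; `tumble(stream, 1m)` assigns events to fixed one-minute windows:

```sql
-- Continuously maintain per-minute counts as events arrive
CREATE MATERIALIZED VIEW clicks_per_minute AS
SELECT window_start, count() AS cnt
FROM tumble(clicks, 1m)
GROUP BY window_start;
```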

-## Routing {#routing}
+### Routing {#routing}
+
 Timeplus allows data to be routed to different sinks based on SQL-based criteria and provides a data lineage view of all derived streams in its console. A single data result can generate multiple outputs for various scenarios and systems, such as analytics, alerting, compliance, etc., without any vendor lock-in.

-## Analytics and Alerting {#alerts}
+### Analytics and Alerting {#alerts}
+
 Powered by SSE (Server-Sent Events), Timeplus supports push-based, low-latency dashboards to visualize real-time insights through data pipelines or ad-hoc queries. Additionally, users can easily build observability dashboards using Grafana plugins.
 
 SQL-based rules can be used to trigger or resolve alerts in systems such as PagerDuty, Slack, and other downstream platforms.
+
+### Scalability and Elasticity
+
+Timeplus supports both an MPP architecture for pure on-prem deployments (ideal when ultra-low latency is critical) and storage/compute separation for elastic, cloud-native setups. In the latter mode, S3 is used for the NativeLog, the Historical Store, and query-state checkpoints. Combined with Kubernetes HPA or AWS Auto Scaling Groups, this enables highly concurrent continuous queries on clusters that scale automatically with demand.

sidebars.js

Lines changed: 36 additions & 30 deletions
@@ -39,54 +39,60 @@ const sidebars = {
       {
         type: "doc",
         id: "why-timeplus",
-      },
-      {
-        type: "doc",
-        id: "glossary",
+        label: "What is Timeplus",
       },
       {
         type: "doc",
         id: "architecture",
       },
       {
         type: "doc",
-        id: "showcases",
+        id: "glossary",
       },
     ],
   },
-  {
-    type: "doc",
-    label: "Quickstart",
-    id: "quickstart",
-  },
   {
     type: "category",
-    label: "Guides & Tutorials",
+    label: "Get Started",
     items: [
-      "understanding-watermark",
-      "tutorial-sql-kafka",
-      "tutorial-github",
-      "marimo",
-      "tutorial-sql-connect-kafka",
-      "tutorial-sql-connect-ch",
-      "tutorial-cdc-rpcn-pg-to-ch",
+      {
+        type: "doc",
+        id: "quickstart",
+      },
       {
         type: "category",
-        label: "Streaming ETL",
+        label: "Guides & Tutorials",
         items: [
-          "tutorial-sql-etl",
-          "tutorial-sql-etl-kafka-to-ch",
-          "tutorial-sql-etl-mysql-to-ch",
+          "understanding-watermark",
+          "tutorial-sql-kafka",
+          "tutorial-github",
+          "marimo",
+          "tutorial-sql-connect-kafka",
+          "tutorial-sql-connect-ch",
+          "tutorial-cdc-rpcn-pg-to-ch",
+          {
+            type: "category",
+            label: "Streaming ETL",
+            items: [
+              "tutorial-sql-etl",
+              "tutorial-sql-etl-kafka-to-ch",
+              "tutorial-sql-etl-mysql-to-ch",
+            ],
+          },
+          "tutorial-sql-join",
+          "tutorial-python-udf",
+          "sql-pattern-topn",
+          "usecases",
+          "tutorial-kv",
+          "tutorial-sql-read-avro",
+          "tutorial-testcontainers-java",
         ],
       },
-      "tutorial-sql-join",
-      "tutorial-python-udf",
-      "sql-pattern-topn",
-      "usecases",
-      "tutorial-kv",
-      "tutorial-sql-read-avro",
-      "tutorial-testcontainers-java",
-    ],
+      {
+        type: "doc",
+        id: "showcases",
+      },
+    ]
   },
   {
     type: "category",

0 commit comments