This concludes our successful deployment of a Mirror pipeline streaming ERC-20 Tokens from the Scroll chain into our database using inline decoders. Congrats! 🎉

### ERC-20 Transfers using decoded datasets

As explained in the Introduction, Goldsky provides decoded datasets for Raw Logs and Raw Traces on a number of different chains. You can check [this list](/mirror/sources/direct-indexing) to see if the chain you are interested in has these decoded datasets.
In these cases, there is no need to run Decoding Transform Functions, as the dataset itself already contains the decoded event signature and event params.

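To illustrate, a transform on a decoded dataset can filter directly on the decoded signature instead of decoding raw topics itself. This is only a sketch: the `ethereum.decoded_logs` source name and the `event_signature`/`event_params` column names are assumptions based on the description above, so verify them against the dataset schema before using.

```sql
-- Sketch: select ERC-20 Transfer events from a decoded dataset.
-- Column names (event_signature, event_params) are assumptions;
-- check the dataset schema for the exact names.
SELECT
    id,
    address AS contract_address,
    event_params
FROM ethereum.decoded_logs
WHERE event_signature = 'Transfer(address,address,uint256)'
```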
Click on the button below to see an example pipeline definition for streaming ERC-20 tokens on the Ethereum chain using the `decoded_logs` dataset.

In general, public endpoints come in the form of `https://api.goldsky.com/api/public/<project_id>/subgraphs/<subgraph_name>/<version>/gn`.

Private subgraph endpoints follow the same format as public subgraph endpoints except that they start with `/api/private`
instead of `/api/public`. For example, the private endpoint for the `prod` tag of the `uniswap-v3-base/1.0.0` subgraph
would be `https://api.goldsky.com/api/private/project_cl8ylkiw00krx0hvza0qw17vn/subgraphs/uniswap-v3-base/1.0.0/gn`.

### Revoking access

To revoke access to a private endpoint, simply delete the API token that was used to access the endpoint. If you
don't know which token is used to access the endpoint, you will have to revoke all API tokens for all users that have access
to the project.

## Enabling and disabling public and private endpoints

By default, all new subgraphs and their tags come with the public endpoint enabled and the private endpoint disabled.
Both of these settings can be changed using the CLI and the webapp. To change either setting, you must have [`Editor` permissions](../rbac).

### CLI

To toggle one of these settings using the CLI, use the `goldsky subgraph update` command with the
`--public-endpoint`

We can see how the pipeline starts in `STARTING` status and becomes `RUNNING` once it starts successfully processing data into our Postgres sink.
The pipeline will first process the historical data of the source dataset, reach its edge, and then continue streaming data in real time until we either stop it or it encounters an error that interrupts its execution.

### Unsuccessful pipeline lifecycle

Let's now consider the scenario where the pipeline encounters errors during its lifetime and ends up failing.

There can be a multitude of reasons for a pipeline to encounter errors, such as:

* secrets not being correctly configured
* sink availability issues
* policy rules on the sink preventing the pipeline from writing records
* resource size incompatibility
* and many more

These failure scenarios prevent a pipeline from entering or staying in a `RUNNING` runtime status.

As expected, the pipeline has encountered a terminal error. Note that the desired status is still `ACTIVE` even though the pipeline's runtime status is `TERMINATED`:

```
❯ goldsky pipeline list
✔ Listing pipelines
─────────────────────────────────────────
│ Name                   │ Version │ Status │ Resource │
│                        │         │        │ Size     │
─────────────────────────────────────────
│ bad-base-logs-pipeline │ 1       │ ACTIVE │ s        │
─────────────────────────────────────────
```

## Runtime visibility

Pipeline runtime visibility is an important part of the pipeline development workflow. Mirror pipelines expose:

1. Runtime status and error messages
2. Logs emitted by the pipeline
3. Metrics on `Records received`, which counts all the records the pipeline has received from its source(s), and `Records written`, which counts all the records the pipeline has written to its sink(s)
4. [Email notifications](/mirror/about-pipeline#email-notifications)

Runtime status, error messages and metrics can be seen via two methods:

1. Pipeline dashboard at `https://app.goldsky.com/dashboard/pipelines/stream/

You can configure this notification in the [Notifications section](https://app.goldsky.com/dashboard/settings#notifications) of your project.

## Error handling

There are two broad categories of errors.

**Pipeline configuration schema error**

This means the schema of the pipeline configuration is not valid. These errors are usually caught before pipeline execution. Some possible scenarios:

* a required attribute is missing
* transform SQL has syntax errors
* pipeline name is invalid

**Pipeline runtime error**

This means the pipeline encountered an error during execution.

Some possible scenarios:

* credentials stored in the secret are incorrect or do not have the needed access privileges
* sink availability issues
* a poison-pill record that breaks the business logic in the transforms
* `resource_size` limitation

Transient errors are automatically retried per the retry policy (for up to 6 hours), whereas non-transient ones immediately terminate the pipeline.

While many errors can be resolved by user intervention, there is a possibility of platform errors as well. Please [reach out to support](/getting-support) for investigation.

## Resource sizing

`resource_size` represents the compute (vCPUs and RAM) available to the pipeline. There are several options for pipeline sizes: `s, m, l, xl, xxl`. This attribute influences [pricing](/pricing/summary#mirror) as well.

Resource sizing depends on a few different factors, such as:

* the number of sources, transforms and sinks
* the expected amount of data to be processed
* whether the transform SQL joins multiple sources and/or transforms

Here's some general information that you can use as reference:

* A `small` resource size is usually enough for most use cases: it can handle a full backfill of small chain datasets and write at speeds of up to 300K records per second. For pipelines using
  subgraphs as a source, it can reliably handle up to 8 subgraphs.
* Larger resource sizes are usually needed when backfilling large chains or when doing large JOINs (example: a JOIN between the accounts and transactions datasets in Solana).
* It's recommended to always follow a defensive approach: start small and scale up if needed.

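For reference, the resource size is set with the `resource_size` attribute in the pipeline configuration. The snippet below is a minimal sketch with the surrounding fields elided; check the overall shape against the pipeline configuration reference.

```yaml
name: my-pipeline
resource_size: s   # one of: s, m, l, xl, xxl - start small, scale up if needed
sources:
  # ...
transforms:
  # ...
sinks:
  # ...
```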
## Snapshots

A pipeline snapshot captures a point-in-time state of a `RUNNING` pipeline, allowing users to resume from it in the future.

It can be useful in various scenarios:

* evolving your `RUNNING` pipeline (e.g. adding a new source or sink) without losing the progress made so far.
* recovering from newly introduced bugs: fix the bug and resume from an earlier snapshot to reprocess the data.

Please note that a snapshot only contains information about the progress made in reading the source(s) and the SQL transforms' state. It is not representative of the state of the source/sink. For example, if all data in the sink database table is deleted, resuming the pipeline from a snapshot does not recover it.

Currently, a pipeline can only be resumed from the latest available snapshot. If you need to resume from older snapshots, please [reach out to support](/getting-support).

Snapshots are closely tied to the pipeline runtime in that all [commands](/reference/config-file/pipeline#pipeline-runtime-commands) that change the pipeline runtime have options to trigger a new snapshot and/or resume from the latest one.

```mermaid theme={null}
%%{init: { 'gitGraph': {'mainBranchName': 'myPipeline-v1'}, 'theme': 'default' , 'themeVariables': { 'git0': '#ffbf60' }}}%%
gitGraph
  commit id: " " type: REVERSE tag:"start"
  commit id: "snapshot1"
  commit id: "snapshot2"
  commit id: "snapshot3"
  commit id: "snapshot4" tag:"stop" type: HIGHLIGHT
  branch myPipeline-v2
  commit id: "snapshot4 " type: REVERSE tag:"start"
```

### When are snapshots taken?

1. When updating a `RUNNING` pipeline, a snapshot is created before applying the update. This ensures that there's an up-to-date snapshot in case the update introduces issues.
2. When pausing a pipeline.
3. Automatically at regular intervals. For `RUNNING` pipelines in a healthy state, automatic snapshots are taken every 4 hours to ensure minimal data loss in case of errors.
4. Users can request snapshot creation via the following CLI command:

* `goldsky pipeline snapshot create`

---

> To find navigation and other pages in this documentation, fetch the llms.txt file at: https://docs.goldsky.com/llms.txt

# PostgreSQL

[PostgreSQL](https://www.postgresql.org/) is a powerful, open source object-relational database system used for OLTP workloads.

Mirror supports PostgreSQL as a sink, allowing you to write data directly into PostgreSQL. This provides a robust and flexible solution for both mid-sized analytical workloads and high-performance REST and GraphQL APIs.

When you create a new pipeline, a table will be automatically created with columns from the source dataset. If a table already exists, the pipeline will write to it. For example, you can set up partitions before you set up the pipeline, allowing you to scale PostgreSQL even further.

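As an example of preparing a table ahead of the pipeline, the sketch below pre-creates a range-partitioned table. Table and column names are hypothetical; match them to your source dataset's schema.

```sql
-- Sketch: pre-create a partitioned table before starting the pipeline.
-- Table and column names are illustrative, not from a real dataset.
CREATE TABLE erc20_transfers (
    id           text NOT NULL,
    block_number bigint NOT NULL,
    sender       text,
    recipient    text,
    value        numeric
) PARTITION BY RANGE (block_number);

-- One partition covering the first 10M blocks; add more as the chain grows.
CREATE TABLE erc20_transfers_p0 PARTITION OF erc20_transfers
    FOR VALUES FROM (0) TO (10000000);
```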
The PostgreSQL sink also supports Timescale hypertables, if the hypertable is already set up. We have a separate Timescale sink in technical preview that will automatically set up hypertables for you - contact [support@goldsky.com](mailto:support@goldsky.com) for access.

Full configuration details for the PostgreSQL sink are available in the [reference](/reference/config-file/pipeline#postgresql) page.

## Role Creation

Here is an example snippet to grant the permissions needed for pipelines.

```sql theme={null}
CREATE ROLE goldsky_writer WITH LOGIN PASSWORD 'supersecurepassword';

-- Allow the pipeline to create schemas.
-- This is needed even if the schemas already exist
GRANT CREATE ON DATABASE postgres TO goldsky_writer;

-- For existing schemas that you want the pipeline to write to:
GRANT USAGE, CREATE ON SCHEMA
```

---

# ClickHouse

[ClickHouse](https://clickhouse.com/) is a highly performant and cost-effective OLAP database that can support real-time inserts. Mirror pipelines can write subgraph or blockchain data directly into ClickHouse with full data guarantees and reorganization handling.

Mirror can work with any ClickHouse setup, but we have several strong defaults. From our experimentation, the `ReplacingMergeTree` table engine with `append_only_mode` offers the best real-time data performance for large datasets.

The [ReplacingMergeTree](https://clickhouse.com/docs/en/engines/table-engines/mergetree-family/replacingmergetree) engine is used for all sink tables by default. If you don't want to use a ReplacingMergeTree, you can pre-create the table with any table engine you'd like and disable `append_only_mode`.

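For instance, here is a sketch of pre-creating a sink table yourself (shown with a ReplacingMergeTree, but the same applies to any engine you choose). Table and column names are hypothetical; match them to your dataset's schema.

```sql
-- Sketch: pre-create a ClickHouse sink table with an explicit engine/layout.
-- Table and column names are illustrative, not from a real dataset.
CREATE TABLE erc20_transfers
(
    id           String,
    block_number UInt64,
    sender       String,
    recipient    String,
    value        UInt256
)
ENGINE = ReplacingMergeTree  -- deduplicates rows sharing the same ORDER BY key
ORDER BY id;
```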
Full configuration details for the ClickHouse sink are available in the [reference](/reference/config-file/pipeline#clickhouse) page.

## Secrets
