Conversation
Can we update with the latest screenshot, without paid resources? Thanks.
> In the C2D workflow, the following steps are performed:
>
> 1. The consumer initiates a compute-to-data job by selecting the desired data asset and algorithm, and then the orders are validated via the dApp used.
> 2. A dedicated and isolated execution pod is created for the C2D job.
Container, not pod, for now.
> 3. The execution pod loads the specified algorithm into its environment.
> 4. The execution pod securely loads the selected dataset for processing.
> 5. The algorithm is executed on the loaded dataset within the isolated execution pod.
> 6. The results and logs generated by the algorithm are securely returned to the user.
We can mention that we use a web3 auto-generated/custom private key, and that this is how we ensure private access to results.

We can also mention the signature generation that is required on the get-results endpoint.
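A generic sign-then-verify sketch of the idea behind the comments above: the consumer signs the result request with a private key, and the node verifies the signature before releasing results. The real flow uses the consumer's web3 (auto-generated or custom) key; the message layout here (`jobId + nonce`) and the use of Node's generic ECDSA primitives are assumptions for illustration only.

```typescript
import { generateKeyPairSync, createSign, createVerify } from "node:crypto";

// Illustrative only: the real C2D flow signs with the consumer's web3
// private key; the message layout (jobId + nonce) is an assumption.
const { privateKey, publicKey } = generateKeyPairSync("ec", {
  namedCurve: "secp256k1",
});

function signResultRequest(jobId: string, nonce: string): string {
  const signer = createSign("SHA256");
  signer.update(`${jobId}${nonce}`);
  signer.end();
  return signer.sign(privateKey, "hex");
}

function verifyResultRequest(jobId: string, nonce: string, sig: string): boolean {
  const verifier = createVerify("SHA256");
  verifier.update(`${jobId}${nonce}`);
  verifier.end();
  return verifier.verify(publicKey, sig, "hex");
}
```

A request with a tampered job id or nonce fails verification, which is what makes access to results private to the key holder.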
>
> Now, let's delve into the inner workings of the Provider. Initially, it verifies whether the Consumer has sent the appropriate datatokens to gain access to the desired data. Once validated, the Provider interacts with the Operator-Service, a microservice responsible for coordinating the job execution. The Provider submits a request to the Operator-Service, which subsequently forwards the request to the Operator-Engine, the actual compute system in operation.
>
> The Operator-Engine, equipped with functionalities like running Kubernetes compute jobs, carries out the necessary computations as per the requirements. Throughout the computation process, the Operator-Engine informs the Operator-Service of the job's progress. Finally, when the job reaches completion, the Operator-Engine signals the Operator-Service, ensuring that the Provider receives notification of the job's successful conclusion.
This is the old stack; will remove these lines.
> - `GetComputeEnvironments` - returns the list of environments that can be selected to run the algorithm on
> - `InitializeCompute` - generates the provider fees necessary for ordering the assets
> - `FreeStartCompute` - runs algorithms without requiring the assets (dataset and algorithm) to be published on-chain, using free resources from the selected environment
> - `PaidStartCompute` - runs algorithms with on-chain assets (dataset and algorithm), using paid resources from the selected environment. The payment is requested at every start-compute call and is handled by the `Escrow` contract.
startCompute

COMPUTE_START: 'startCompute',

This list represents the handlers from Ocean Node, not the command names.
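As a sketch of what a consumer-side call to one of these compute commands might look like: the payload shape and field names below are assumptions based on this review discussion, not a verified API specification; only the command name `startCompute` is taken from the comments above.

```typescript
// Hypothetical start-compute payload a consumer component might send to an
// Ocean Node. Shape and field names are assumptions for illustration; only
// the "startCompute" command name comes from the review discussion.
interface ComputeCommand {
  command: string;
  [key: string]: unknown;
}

function buildStartComputePayload(
  datasetDid: string,
  algorithmDid: string,
  environment: string
): ComputeCommand {
  return {
    command: "startCompute",
    datasets: [{ documentId: datasetDid }],
    algorithm: { documentId: algorithmDid },
    environment,
  };
}
```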
> - `ComputeGetStatus` - retrieves the compute job status.
> - `ComputeStop` - stops compute job execution while the job is `Running`.
COMPUTE_STOP: 'stopCompute',

This list represents the handlers from Ocean Node, not the command names.
> - `ComputeGetResult` - returns the compute job results when the job is `Finished`.
COMPUTE_GET_RESULT: 'getComputeResult',

This list represents the handlers from Ocean Node, not the command names.
>
> One of its responsibilities is fetching and preparing the required assets and files, ensuring a smooth and seamless execution of the job. By carefully handling the environment configuration, the **C2D Engine** guarantees that all necessary components are in place, setting the stage for a successful job execution.
>
> 1. **Fetching Dataset Assets**: It downloads the files corresponding to datasets and saves them in `/data/inputs/DID/`. The files are named by their array index, ranging from 0 to X, depending on the total number of files associated with the dataset.
The dataset can be a DID/URL/Arweave/IPFS reference.

And we also need to specify this for the algorithm as well, good catch!
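To make the quoted `/data/inputs/DID/` layout concrete, here is a sketch of how an algorithm running inside the C2D container could enumerate its input files. The helper and its parameters are hypothetical; the root is configurable so the same code also runs outside the container (inside it, the root would be `/data/inputs`).

```typescript
import { readdirSync } from "node:fs";
import { join } from "node:path";

// Hypothetical helper: list input files under <inputsRoot>/<DID>/, where
// files are named by their array index (0..X) as the docs describe.
function listInputFiles(inputsRoot: string, did: string): string[] {
  const dir = join(inputsRoot, did);
  return readdirSync(dir)
    .filter((name) => /^\d+$/.test(name))  // keep only index-named files
    .sort((a, b) => Number(a) - Number(b)) // 0, 1, ..., X in order
    .map((name) => join(dir, name));
}
```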
>
> ## Prerequisites
>
> The prerequisite for this flow is the algorithm code, which can be input to consumer components (the Ocean CLI) and is open for integration with other systems (e.g. the Ocean Enterprise Marketplace).
CLI and the VS Code extension; it might be useful to add links.

They are linked in Setup, but I will link them in the Prerequisites as well, thank you.
The new dataset here will be a URL/DID/IPFS/Arweave reference.
We will now use the `c2d_examples` repo, not the `algo_dockers` repo.
giurgiur99 left a comment:

Some small comments. Thanks!
Broken links for the Architecture & Overview guides.

I referenced the .md files statically for each section; in md it works fine. How does it look for you?
PaidStartCompute -> startCompute
ComputeGetStatus -> getComputeStatus
ComputeStop -> stopCompute
ComputeGetResult -> getComputeResult

The rest should be lowercase, to follow the names from the P2P handler.
Here I targeted only the handler names, which are valid for both protocols, as specified: "which includes handlers for operations that can be called via HTTP and P2P protocols". If we want to reference commands and HTTP endpoint names, we can link the README for the Ocean Node API.md.
I can add the link for it and keep the handler names as they are currently described in the documentation.

> The rest should be lowercase to follow the names from the P2P handler

The handlers (which include the core functionality) are used for both protocols in the code.
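The handler-to-command correspondence discussed above, written as a lookup table. The four entries are taken directly from this review thread; nothing else is implied about the Ocean Node codebase.

```typescript
// Handler name -> command name, as quoted in the review comments above.
const HANDLER_TO_COMMAND: Record<string, string> = {
  PaidStartCompute: "startCompute",
  ComputeGetStatus: "getComputeStatus",
  ComputeStop: "stopCompute",
  ComputeGetResult: "getComputeResult",
};

function commandForHandler(handler: string): string | undefined {
  return HANDLER_TO_COMMAND[handler];
}
```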
We don't use Kubernetes at the moment, therefore no pods.
I specified that currently only the Docker engine is supported, and that in the future we can extend it to support Kubernetes as well.
They are still called pods in some places here in the .md. Thanks!
Can we add a screenshot from the latest version of the VS Code extension? Also, for now the extension only uses the rawAlgorithm and sends the algorithm in the request; no did, ipfs, ...
Yes, sure, I have added 2 screenshots for the VS Code extension and updated the supported algorithm formats. Thanks!
>
> 1. The consumer initiates a compute-to-data job by selecting the desired data asset and algorithm, and then the orders are validated via the dApp used.
> 2. A dedicated and isolated execution container is created for the C2D job.
> 3. The execution pod loads the specified algorithm into its environment.
Yep, but here and below they are still called "pods". Thanks!
Whilst checking the docs in the onboarding phase, I found these links in the docs that don't work:

Fixes #1515.
Changes proposed in this PR: