From bd34ab60638c9c885b63015f9a0163727867ae3d Mon Sep 17 00:00:00 2001 From: Melanie Ballard Date: Mon, 23 Jun 2025 13:42:33 -0400 Subject: [PATCH 01/17] DOCSP-50497 fixing git rebases and uploading changes to agg pages --- snooty.toml | 2 +- source/aggregation.txt | 184 ++++++-------- source/aggregation/pipeline-stages.txt | 321 +++++++++++++++++++++++++ 3 files changed, 394 insertions(+), 113 deletions(-) create mode 100644 source/aggregation/pipeline-stages.txt diff --git a/snooty.toml b/snooty.toml index 55b428e51..265be5c0f 100644 --- a/snooty.toml +++ b/snooty.toml @@ -11,7 +11,7 @@ toc_landing_pages = [ "/connect", "/security", "/security/authentication", - "/aggregation-tutorials", + "/aggregation", "/data-formats", "/connect/connection-options", "/crud", diff --git a/source/aggregation.txt b/source/aggregation.txt index eafa24d31..cb0e702d5 100644 --- a/source/aggregation.txt +++ b/source/aggregation.txt @@ -14,22 +14,29 @@ Aggregation .. contents:: On this page :local: + :backlinks: none :depth: 2 :class: singlecol +.. toctree:: + + Pipeline Stages + .. _nodejs-aggregation-overview: Overview -------- -In this guide, you can learn how to use **aggregation operations** in -the MongoDB Node.js driver. +In this guide, you can learn how to use the MongoDB Node.js Driver to perform +**aggregation operations**. -Aggregation operations are expressions you can use to produce reduced -and summarized results in MongoDB. MongoDB's aggregation framework -allows you to create a pipeline that consists of one or more stages, -each of which performs a specific operation on your data. +Aggregation operations process data in your MongoDB collections and return +computed results. The MongoDB Aggregation framework is modeled on the +concept of data processing pipelines. Documents enter a pipeline comprised of one or +more stages, and this pipeline transforms the documents into an aggregated result. + +To learn more about the aggregation stages supported by the Node.js Driver, see :ref:`Aggregation Stages `. .. _node-aggregation-tutorials: @@ -42,114 +49,67 @@ each of which performs a specific operation on your data. Analogy ~~~~~~~ -You can think of the aggregation pipeline as similar to an automobile factory. -Automobile manufacturing requires the use of assembly stations organized -into assembly lines. Each station has specialized tools, such as -drills and welders. The factory transforms and -assembles the initial parts and materials into finished products. - -The **aggregation pipeline** is the assembly line, **aggregation -stages** are the assembly stations, and **expression operators** are the -specialized tools. - -Comparing Aggregation and Query Operations -~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ - -Using query operations, such as the ``find()`` method, you can perform the following actions: - -- Select *which documents* to return -- Select *which fields* to return -- Sort the results - -Using aggregation operations, you can perform the following actions: - -- Perform all query operations -- Rename fields -- Calculate fields -- Summarize data -- Group values - -Aggregation operations have some :manual:`limitations `: - -- Returned documents must not violate the :manual:`BSON-document size limit ` +The aggregation pipeline is similar to an automobile factory assembly line. An +assembly line has stations with specialized tools that are used to perform +specific tasks. For example, when building a car, the assembly line begins with +a frame. 
As the car frame moves through the assembly line, each station assembles
+a separate part. The result is a transformed final product, the finished car.
+
+The *aggregation pipeline* is the assembly line, the *aggregation stages* are
+the assembly stations, the *expression operators* are the specialized tools, and
+the *aggregated result* is the finished product.
+
+Compare Aggregation and Find Operations
+---------------------------------------
+
+The following table lists the different tasks you can perform with find
+operations compared to what you can achieve with aggregation
+operations. The aggregation framework provides expanded functionality
+that allows you to transform and manipulate your data.
+
+.. list-table::
+   :header-rows: 1
+   :widths: 50 50
+
+   * - Find Operations
+     - Aggregation Operations
+
+   * - | Select *certain* documents to return
+       | Select *which* fields to return
+       | Sort the results
+       | Limit the results
+       | Count the results
+     - | Select *certain* documents to return
+       | Select *which* fields to return
+       | Sort the results
+       | Limit the results
+       | Count the results
+       | Group the results
+       | Rename fields
+       | Compute new fields
+       | Summarize data
+       | Connect and merge data sets
+
+Server Limitations
+------------------
+
+Consider the following :manual:`limitations ` when performing aggregation operations:
+
+- Returned documents must not violate the :manual:`BSON document size limit `
   of 16 megabytes.
-- Pipeline stages have a memory limit of 100 megabytes by default. You can exceed this
-  limit by setting the ``allowDiskUse`` property of ``AggregateOptions`` to ``true``. See
-  the `AggregateOptions API documentation <{+api+}/interfaces/AggregateOptions.html>`__
-  for more details.
-
-.. important:: $graphLookup exception
-
-   The :manual:`$graphLookup
-   ` stage has a strict
-   memory limit of 100 megabytes and will ignore ``allowDiskUse``.
-
-References
-~~~~~~~~~~
-
-To view a full list of expression operators, see :manual:`Aggregation
-Operators ` in the Server manual.
-
-To learn about assembling an aggregation pipeline and view examples, see
-:manual:`Aggregation Pipeline ` in the
-Server manual.
-
-To learn more about creating pipeline stages, see :manual:`Aggregation
-Stages ` in the Server manual.
-
-Runnable Examples
------------------
-
-The example uses sample data about restaurants. The following code
-inserts data into the ``restaurants`` collection of the ``aggregation``
-database:
-
-.. literalinclude:: /code-snippets/aggregation/agg.js
-   :start-after: begin data insertion
-   :end-before: end data insertion
-   :language: javascript
-   :dedent:
-
-.. tip::
-
-   For more information on connecting to your MongoDB deployment, see the :doc:`Connection Guide `.
-
-Aggregation Example
-~~~~~~~~~~~~~~~~~~~
-
-To perform an aggregation, pass a list of aggregation stages to the
-``collection.aggregate()`` method.
-
-In the example, the aggregation pipeline uses the following aggregation stages:
-
-- A :manual:`$match ` stage to filter for documents whose
-  ``categories`` array field contains the element ``Bakery``.
-
-- A :manual:`$group ` stage to group the matching documents by the ``stars``
-  field, accumulating a count of documents for each distinct value of ``stars``.
-
-.. literalinclude:: /code-snippets/aggregation/agg.js
-   :start-after: begin aggregation
-   :end-before: end aggregation
-   :language: javascript
-   :dedent:
-
-This example produces the following output:
-
-..
code-block:: json - :copyable: false - - { _id: 4, count: 2 } - { _id: 3, count: 1 } - { _id: 5, count: 1 } +- Pipeline stages have a memory limit of 100 megabytes by default. If required, + you can exceed this limit by enabling the `AllowDiskUse + `__ + property of the ``AggregateOptions`` object that you pass to the + ``Aggregate()`` method. -For more information, see the `aggregate() API documentation <{+api+}/classes/Collection.html#aggregate>`__. +Additional information +---------------------- -Additional Examples -~~~~~~~~~~~~~~~~~~~ +To view a full list of expression operators, see +:manual:`Aggregation Operators `. -You can find another aggregation pipeline example in the `Aggregation -Framework with Node.js Tutorial -`_ -blog post on the MongoDB website. +To learn about explaining MongoDB aggregation operations, see +:manual:`Explain Results ` and +:manual:`Query Plans `. diff --git a/source/aggregation/pipeline-stages.txt b/source/aggregation/pipeline-stages.txt new file mode 100644 index 000000000..c4f0da3ad --- /dev/null +++ b/source/aggregation/pipeline-stages.txt @@ -0,0 +1,321 @@ +.. _node-aggregation-pipeline-stages: + +=============== +Aggregation Pipeline Stages +=============== + +.. contents:: On this page + :local: + :backlinks: none + :depth: 2 + :class: singlecol + +.. facet:: + :name: genre + :values: reference + +.. meta:: + :keywords: node.js, code example, transform, pipeline + :description: Learn the different possible stages of the aggregation pipeline in the {+driver-short+}. + +Overview +------------ + +On this page, you can learn how to create an aggregation pipeline and pipeline +stages by using methods in the {+driver-short+}. + +Build an Aggregation Pipeline +----------------------------- + +You can use the {+driver-short+} to build an aggregation pipeline by creating a +pipeline variable or passing aggregation stages directly into the aggregation +method. See the following examples to learn more about each of these approaches. + +.. tabs:: + + .. tab:: Create a Pipeline + :tabid: pipeline-definition + + .. code-block:: javascript + + // Defines the aggregation pipeline + const pipeline = [ + { $match: { ... } }, + { $group: { ... } } + ]; + + // Executes the aggregation pipeline + const results = collection.aggregate(pipeline); + + .. tab:: Direct Aggregation + :tabid: pipeline-direct + + .. code-block:: javascript + + // Defines and executes the aggregation pipeline + collection.aggregate([ + { $match: { ... } }, + { $group: { ... } } + ]); + + +Aggregation Stage Methods +------------------------- + +The following table lists the stages in the aggregation pipeline. The stages are +formatted as they are listed when used in Node.js unless noted otherwise in the +description column. To learn more about an aggregation stage and see a code +example in a Node.js application, follow the link from the stage name to its +reference page in the {+mdb-server+} manual. + + +.. list-table:: + :header-rows: 1 + :widths: 30 70 + + * - Stage + - Description + + * - :manual:`$addFields ` + - Adds new fields to documents. Outputs documents that contain both the + existing fields from the input documents and the newly added fields. + + ``$set`` is an alias for ``$addFields``. + + * - :manual:`$bucket ` + - Categorizes incoming documents into groups, called buckets, + based on a specified expression and bucket boundaries. + + * - :manual:`$bucketAuto ` + - Categorizes incoming documents into a specific number of + groups, called buckets, based on a specified expression. 
+ Bucket boundaries are automatically determined in an attempt + to evenly distribute the documents into the specified number + of buckets. + + * - :manual:`$changeStream ` + - Returns a change stream cursor for the collection. + + Instead of being passed to the ``.aggregate()`` method, + ``$changeStream`` uses the ``.watch()`` method on a ``collection`` + object. + + * - :manual:`$changeStreamSplitLargeEvent ` + - Splits large change stream events that exceed 16 MB into smaller fragments returned + in a change stream cursor. + + Instead of being passed to the ``.aggregate()`` method, + ``$changeStreamSplitLargeEvent`` uses the ``.watch()`` method on a + ``collection`` object. + + * - :manual:`$collStats ` + - Returns statistics regarding a collection or view. + + * - :manual:`$count ` + - Returns a count of the number of documents at this stage of + the aggregation pipeline. + + * - :manual:`$currentOp ` + - Returns a stream of documents containing information on active and/or + dormant operations as well as inactive sessions that are holding locks as + part of a transaction. + + * - :manual:`$densify ` + - Creates new documents in a sequence of documents where certain values in a field are missing. + + * - :manual:`$documents ` + - Returns literal documents from input expressions. + + * - :manual:`$facet ` + - Processes multiple aggregation pipelines + within a single stage on the same set + of input documents. Enables the creation of multi-faceted + aggregations capable of characterizing data across multiple + dimensions, or facets, in a single stage. + + * - :manual:`$geoNear ` + - Returns documents in order of nearest to farthest from a + specified point. This method adds a field to output documents + that contains the distance from the specified point. + + * - :manual:`$graphLookup ` + - Performs a recursive search on a collection. This method adds + a new array field to each output document that contains the traversal + results of the recursive search for that document. + + * - :manual:`$group ` + - Groups input documents by a specified identifier expression + and applies the accumulator expressions, if specified, to + each group. Consumes all input documents and outputs one + document per each distinct group. The output documents + contain only the identifier field and, if specified, accumulated + fields. + + * - :manual:`$indexStats ` + - Returns statistics regarding the use of each index for the collection. + + * - :manual:`$limit ` + - Passes the first *n* documents unmodified to the pipeline, + where *n* is the specified limit. For each input document, + outputs either one document (for the first *n* documents) or + zero documents (after the first *n* documents). + + * - :manual:`$listSampledQueries ` + - Lists sampled queries for all collections or a specific collection. Only + available on collections with :manual:`Queryable Encryption + ` enabled. + + * - :manual:`$listSearchIndexes ` + - Returns information about existing :ref:`Atlas Search indexes + ` on a specified collection. + + * - :manual:`$lookup ` + - Performs a left outer join to another collection in the + *same* database to filter in documents from the "joined" + collection for processing. + + * - :manual:`$match ` + - Filters the document stream to allow only matching documents + to pass unmodified into the next pipeline stage. + For each input document, outputs either one document (a match) or zero + documents (no match). 
+ + * - :manual:`$merge ` + - Writes the resulting documents of the aggregation pipeline to + a collection. The stage can incorporate (insert new + documents, merge documents, replace documents, keep existing + documents, fail the operation, process documents with a + custom update pipeline) the results into an output + collection. To use this stage, it must be + the last stage in the pipeline. + + * - :manual:`$out ` + - Writes the resulting documents of the aggregation pipeline to + a collection. To use this stage, it must be + the last stage in the pipeline. + + * - :manual:`$project ` + - Reshapes each document in the stream, such as by adding new + fields or removing existing fields. For each input document, + outputs one document. + + * - :manual:`$redact ` + - Reshapes each document in the stream by restricting the content for each + document based on information stored in the documents themselves. + Incorporates the functionality of :manual:`$project + ` and :manual:`$match + `. Can be used to implement field + level redaction. For each input document, outputs either one or zero + documents. + + * - :manual:`$replaceRoot ` + - Replaces a document with the specified embedded document. The + operation replaces all existing fields in the input document, + including the ``_id`` field. Specify a document embedded in + the input document to promote the embedded document to the + top level. + + The ``$replaceWith`` stage is an alias for the ``$replaceRoot`` stage. + + * - :manual:`$replaceWith ` + - Replaces a document with the specified embedded document. + The operation replaces all existing fields in the input document, including + the ``_id`` field. Specify a document embedded in the input document to promote + the embedded document to the top level. + + The ``$replaceWith`` stage is an alias for the ``$replaceRoot`` stage. + + * - :manual:`$sample ` + - Randomly selects the specified number of documents from its + input. + + * - :manual:`$search ` + - Performs a full-text search of the field or fields in an + :atlas:`Atlas ` + collection. + + This stage is available only for MongoDB Atlas clusters, and is not + available for self-managed deployments. To learn more, see + :atlas:`Atlas Search Aggregation Pipeline Stages + ` in the Atlas documentation. + + * - :manual:`$searchMeta ` + - Returns different types of metadata result documents for the + :atlas:`Atlas Search ` query against an + :atlas:`Atlas ` + collection. + + This stage is available only for MongoDB Atlas clusters, + and is not available for self-managed deployments. To learn + more, see :atlas:`Atlas Search Aggregation Pipeline Stages + ` in the Atlas documentation. + + * - :manual:`$set ` + - Adds new fields to documents. Like the ``Project()`` method, + this method reshapes each + document in the stream by adding new fields to + output documents that contain both the existing fields + from the input documents and the newly added fields. + + * - :manual:`$setWindowFields ` + - Groups documents into windows and applies one or more + operators to the documents in each window. + + * - :manual:`$skip ` + - Skips the first *n* documents, where *n* is the specified skip + number, and passes the remaining documents unmodified to the + pipeline. For each input document, outputs either zero + documents (for the first *n* documents) or one document (if + after the first *n* documents). + + * - :manual:`$sort ` + - Reorders the document stream by a specified sort key. The documents remain unmodified. 
+ For each input document, outputs one document. + + * - :manual:`$sortByCount ` + - Groups incoming documents based on the value of a specified + expression, then computes the count of documents in each + distinct group. + + * - :manual:`$unionWith ` + - Combines pipeline results from two collections into a single + result set. + + * - :manual:`$unset ` + - Removes/excludes fields from documents. + + ``$unset`` is an alias for ``$project`` that removes fields. + + * - :manual:`$unwind ` + - Deconstructs an array field from the input documents to + output a document for *each* element. Each output document + replaces the array with an element value. For each input + document, outputs *n* Documents, where *n* is the number of + array elements. *n* can be zero for an empty array. + + * - :manual:`$vectorSearch ` + - Performs an :abbr:`ANN (Approximate Nearest Neighbor)` or + :abbr:`ENN (Exact Nearest Neighbor)` search on a + vector in the specified field of an + :atlas:`Atlas ` collection. + + This stage is available only for MongoDB Atlas clusters, and is not + available for self-managed deployments. To learn more, see + :ref:`Atlas Vector Search `. + +API Documentation +~~~~~~~~~~~~~~~~~ + +To learn more about assembling an aggregation pipeline, see :manual:`Aggregation +Pipeline ` in the MongoDB Server manual. + +To learn more about creating pipeline stages, see :manual:`Aggregation Stages +` in the MongoDB Server manual. + +For more information about the methods and classes used on this page, see the +following API documentation: + +- `Collection() `__ +- `aggregate() `__ +- `AggregateOptions `__ + From 1b6154d676c8b7d6cb11b5da6565b8023f92cc4f Mon Sep 17 00:00:00 2001 From: Melanie Ballard Date: Mon, 23 Jun 2025 14:10:34 -0400 Subject: [PATCH 02/17] DOCSP-50497 fix errors --- snooty.toml | 1 + source/aggregation.txt | 1 - source/aggregation/pipeline-stages.txt | 25 +++++++++++-------------- 3 files changed, 12 insertions(+), 15 deletions(-) diff --git a/snooty.toml b/snooty.toml index 265be5c0f..b6b7cf252 100644 --- a/snooty.toml +++ b/snooty.toml @@ -12,6 +12,7 @@ toc_landing_pages = [ "/security", "/security/authentication", "/aggregation", + "/aggregation/pipeline-stages", "/data-formats", "/connect/connection-options", "/crud", diff --git a/source/aggregation.txt b/source/aggregation.txt index cb0e702d5..973cb1ffd 100644 --- a/source/aggregation.txt +++ b/source/aggregation.txt @@ -14,7 +14,6 @@ Aggregation .. contents:: On this page :local: - :backlinks: none :depth: 2 :class: singlecol diff --git a/source/aggregation/pipeline-stages.txt b/source/aggregation/pipeline-stages.txt index c4f0da3ad..e0562ac18 100644 --- a/source/aggregation/pipeline-stages.txt +++ b/source/aggregation/pipeline-stages.txt @@ -62,11 +62,10 @@ method. See the following examples to learn more about each of these approaches. Aggregation Stage Methods ------------------------- -The following table lists the stages in the aggregation pipeline. The stages are -formatted as they are listed when used in Node.js unless noted otherwise in the -description column. To learn more about an aggregation stage and see a code -example in a Node.js application, follow the link from the stage name to its -reference page in the {+mdb-server+} manual. +The following table lists the stages in the aggregation pipeline. To learn more +about an aggregation stage and see a code example in a Node.js application, +follow the link from the stage name to its reference page in the {+mdb-server+} +manual. .. 
list-table:: @@ -116,7 +115,7 @@ reference page in the {+mdb-server+} manual. the aggregation pipeline. * - :manual:`$currentOp ` - - Returns a stream of documents containing information on active and/or + - Returns a stream of documents containing information on active and dormant operations as well as inactive sessions that are holding locks as part of a transaction. @@ -162,12 +161,12 @@ reference page in the {+mdb-server+} manual. * - :manual:`$listSampledQueries ` - Lists sampled queries for all collections or a specific collection. Only - available on collections with :manual:`Queryable Encryption + available for collections with :manual:`Queryable Encryption ` enabled. * - :manual:`$listSearchIndexes ` - Returns information about existing :ref:`Atlas Search indexes - ` on a specified collection. + ` on a specified collection. * - :manual:`$lookup ` - Performs a left outer join to another collection in the @@ -202,11 +201,9 @@ reference page in the {+mdb-server+} manual. * - :manual:`$redact ` - Reshapes each document in the stream by restricting the content for each document based on information stored in the documents themselves. - Incorporates the functionality of :manual:`$project - ` and :manual:`$match - `. Can be used to implement field - level redaction. For each input document, outputs either one or zero - documents. + Incorporates the functionality of ``$project`` and ``$match``. Can be used + to implement field level redaction. For each input document, outputs + either one or zero documents. * - :manual:`$replaceRoot ` - Replaces a document with the specified embedded document. The @@ -301,7 +298,7 @@ reference page in the {+mdb-server+} manual. This stage is available only for MongoDB Atlas clusters, and is not available for self-managed deployments. To learn more, see - :ref:`Atlas Vector Search `. + :ref:`Atlas Vector Search `. API Documentation ~~~~~~~~~~~~~~~~~ From 800d864ee971d23c7d771a186347630aeb7cc1c4 Mon Sep 17 00:00:00 2001 From: Melanie Ballard Date: Mon, 23 Jun 2025 14:28:10 -0400 Subject: [PATCH 03/17] DOCSP-50497 fix errors --- snooty.toml | 2 -- 1 file changed, 2 deletions(-) diff --git a/snooty.toml b/snooty.toml index b6b7cf252..00c8d1bac 100644 --- a/snooty.toml +++ b/snooty.toml @@ -11,8 +11,6 @@ toc_landing_pages = [ "/connect", "/security", "/security/authentication", - "/aggregation", - "/aggregation/pipeline-stages", "/data-formats", "/connect/connection-options", "/crud", From b2cfd96d74f96d61e47bf3f340f2edd8b48a627b Mon Sep 17 00:00:00 2001 From: Melanie Ballard Date: Mon, 23 Jun 2025 14:36:09 -0400 Subject: [PATCH 04/17] DOCSP-50497 fix errors --- snooty.toml | 1 + source/aggregation.txt | 4 +++- source/aggregation/pipeline-stages.txt | 2 +- 3 files changed, 5 insertions(+), 2 deletions(-) diff --git a/snooty.toml b/snooty.toml index 00c8d1bac..96dd2c2e9 100644 --- a/snooty.toml +++ b/snooty.toml @@ -13,6 +13,7 @@ toc_landing_pages = [ "/security/authentication", "/data-formats", "/connect/connection-options", + "/aggregation", "/crud", "/crud/query", "crud/update", diff --git a/source/aggregation.txt b/source/aggregation.txt index 973cb1ffd..9cc61323e 100644 --- a/source/aggregation.txt +++ b/source/aggregation.txt @@ -19,7 +19,9 @@ Aggregation :class: singlecol .. toctree:: - + :titlesonly: + :maxdepth: 1 + Pipeline Stages .. 
_nodejs-aggregation-overview: diff --git a/source/aggregation/pipeline-stages.txt b/source/aggregation/pipeline-stages.txt index e0562ac18..e25c4d169 100644 --- a/source/aggregation/pipeline-stages.txt +++ b/source/aggregation/pipeline-stages.txt @@ -166,7 +166,7 @@ manual. * - :manual:`$listSearchIndexes ` - Returns information about existing :ref:`Atlas Search indexes - ` on a specified collection. + ` on a specified collection. * - :manual:`$lookup ` - Performs a left outer join to another collection in the From fa1706152e4c1999845c1e544dcc8b8773bd1134 Mon Sep 17 00:00:00 2001 From: Melanie Ballard Date: Mon, 23 Jun 2025 15:00:57 -0400 Subject: [PATCH 05/17] DOCSP-50497 final changes --- source/aggregation.txt | 10 +++++----- 1 file changed, 5 insertions(+), 5 deletions(-) diff --git a/source/aggregation.txt b/source/aggregation.txt index 9cc61323e..1090860e2 100644 --- a/source/aggregation.txt +++ b/source/aggregation.txt @@ -1,9 +1,9 @@ .. _node-aggregation: .. _nodejs-aggregation: -=========== -Aggregation -=========== +====================== +Aggregation Operations +====================== .. facet:: :name: genre @@ -21,7 +21,7 @@ Aggregation .. toctree:: :titlesonly: :maxdepth: 1 - + Pipeline Stages .. _nodejs-aggregation-overview: @@ -37,7 +37,7 @@ computed results. The MongoDB Aggregation framework is modeled on the concept of data processing pipelines. Documents enter a pipeline comprised of one or more stages, and this pipeline transforms the documents into an aggregated result. -To learn more about the aggregation stages supported by the Node.js Driver, see :ref:`Aggregation Stages `. +To learn more about the aggregation stages supported by the Node.js Driver, see :ref:`Aggregation Pipeline Stages `. .. _node-aggregation-tutorials: From 8f0f3432d93eedef9fa22e638937d25e1a89b5de Mon Sep 17 00:00:00 2001 From: Melanie Ballard Date: Mon, 23 Jun 2025 15:19:28 -0400 Subject: [PATCH 06/17] DOCSP-41784 Support for Explicit Resource Management (#1165) * DOCSP-41784 add support for explicit resource management note where applicable resources are declared * DOCSP-41784 fix note format * DOCSP-41784 change to tip and move pages around * DOCSP-41784 reword tip for clarity * DOCSP-41784 update backport file and move tips to below code examples --- .backportrc.json | 3 +++ source/connect/mongoclient.txt | 2 ++ source/crud/query/cursor.txt | 2 ++ source/crud/transactions.txt | 2 ++ source/crud/transactions/transaction-core.txt | 4 +++- source/includes/explicit-resource-management.rst | 7 +++++++ source/monitoring-and-logging/change-streams.txt | 2 ++ 7 files changed, 21 insertions(+), 1 deletion(-) create mode 100644 source/includes/explicit-resource-management.rst diff --git a/.backportrc.json b/.backportrc.json index 1df6bbc2b..65257f2fc 100644 --- a/.backportrc.json +++ b/.backportrc.json @@ -5,6 +5,9 @@ // the branches available to backport to "targetBranchChoices": [ "master", + "v6.17", + "v6.16", + "v6.15", "v6.14", "v6.13", "v6.12", diff --git a/source/connect/mongoclient.txt b/source/connect/mongoclient.txt index f1b27a261..5e56b757c 100644 --- a/source/connect/mongoclient.txt +++ b/source/connect/mongoclient.txt @@ -119,6 +119,8 @@ verify that the connection is successful: Call the ``MongoClient.connect()`` method explicitly if you want to verify that the connection is successful. +.. include:: /includes/explicit-resource-management.rst + To learn more about the {+stable-api+} feature, see the :ref:`{+stable-api+} page `. 
diff --git a/source/crud/query/cursor.txt b/source/crud/query/cursor.txt index 2abaaa62b..cab9d7d2a 100644 --- a/source/crud/query/cursor.txt +++ b/source/crud/query/cursor.txt @@ -170,6 +170,8 @@ and the {+mdb-server+}: :start-after: start close cursor example :end-before: end close cursor example +.. include:: /includes/explicit-resource-management.rst + Abort ~~~~~ diff --git a/source/crud/transactions.txt b/source/crud/transactions.txt index 5a18b9c0e..d2e696ec7 100644 --- a/source/crud/transactions.txt +++ b/source/crud/transactions.txt @@ -181,6 +181,8 @@ the Core API: const session = client1.startSession(); client2.db('myDB').collection('myColl').insertOne({ name: 'Jane Eyre' }, { session }); +.. include:: /includes/explicit-resource-management.rst + To see a fully runnable example that uses this API, see the :ref:`node-usage-core-txn` usage example. diff --git a/source/crud/transactions/transaction-core.txt b/source/crud/transactions/transaction-core.txt index bf8d3714e..c4ef5d60c 100644 --- a/source/crud/transactions/transaction-core.txt +++ b/source/crud/transactions/transaction-core.txt @@ -148,6 +148,8 @@ This example code performs a transaction through the following actions: :start-after: start placeOrder :end-before: end placeOrder +.. include:: /includes/explicit-resource-management.rst + .. _nodejs-transaction-example-payment-result: Transaction Results @@ -165,7 +167,7 @@ order ``_id`` appended to the orders field: "_id": 98765, "orders": [ "61dc..." - ] + ] } The ``inventory`` collection contains updated quantities for the diff --git a/source/includes/explicit-resource-management.rst b/source/includes/explicit-resource-management.rst new file mode 100644 index 000000000..393b0e06b --- /dev/null +++ b/source/includes/explicit-resource-management.rst @@ -0,0 +1,7 @@ +.. tip:: Explicit Resource Management + + The {+driver-short+} natively supports explicit resource management for + ``MongoClient``, ``ClientSession``, ``ChangeStreams``, and cursors. This + feature is experimental and subject to change. To learn how to use explicit + resource management, see the `v6.9 Release Notes. + `__ \ No newline at end of file diff --git a/source/monitoring-and-logging/change-streams.txt b/source/monitoring-and-logging/change-streams.txt index a99f4b27a..d0c573eb0 100644 --- a/source/monitoring-and-logging/change-streams.txt +++ b/source/monitoring-and-logging/change-streams.txt @@ -135,6 +135,8 @@ the ``insertDB`` database and prints change events as they occur: :language: javascript :linenos: +.. include:: /includes/explicit-resource-management.rst + When you run this code and then make a change to the ``haikus`` collection, such as performing an insert or delete operation, you can see the change event document printed in your terminal. 
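The resource-management tip this patch adds can be sketched briefly. The
following is a minimal sketch rather than content from the patch itself: it
assumes driver v6.9 or later and a runtime that implements TC39 explicit
resource management (``await using``), and the connection string, database,
and collection names are placeholders.

.. code-block:: javascript

   const { MongoClient } = require('mongodb');

   async function run() {
     // `await using` invokes client[Symbol.asyncDispose]() when this scope
     // exits, closing the client even if an error is thrown
     await using client = new MongoClient('mongodb://localhost:27017');
     await client.connect();

     const docs = await client.db('myDB').collection('myColl').find().toArray();
     console.log(docs);
   }

   run().catch(console.error);

This gives the same cleanup guarantee that the ``finally``-based examples
elsewhere in these docs provide by hand.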
From e7a22e508890ba9f335c25ca2662340f2df0c19e Mon Sep 17 00:00:00 2001 From: Jordan Smith <45415425+jordan-smith721@users.noreply.github.com> Date: Wed, 25 Jun 2025 15:48:46 -0400 Subject: [PATCH 07/17] Add redirect for quick start landing page (#1171) --- config/redirects | 1 + 1 file changed, 1 insertion(+) diff --git a/config/redirects b/config/redirects index 08f8e2def..fcf34c60a 100644 --- a/config/redirects +++ b/config/redirects @@ -29,6 +29,7 @@ raw: ${prefix}/stable -> ${base}/current/ # Comprehensive Coverage Redirects +[v6.5-master]: ${prefix}/${version}/quick-start/ -> ${base}/${version}/get-started/ [v6.5-master]: ${prefix}/${version}/quick-start/download-and-install/ -> ${base}/${version}/get-started/ [v6.5-master]: ${prefix}/${version}/quick-start/create-a-deployment/ -> ${base}/${version}/get-started/ [v6.5-master]: ${prefix}/${version}/quick-start/create-a-connection-string/ -> ${base}/${version}/get-started/ From a11226ba65bfa2096fb2e376ae54b1551666173f Mon Sep 17 00:00:00 2001 From: Melanie Ballard Date: Mon, 30 Jun 2025 11:59:08 -0400 Subject: [PATCH 08/17] DOCSP-50497 suggestions --- source/aggregation.txt | 3 +-- source/aggregation/pipeline-stages.txt | 29 ++++++++++++++------------ 2 files changed, 17 insertions(+), 15 deletions(-) diff --git a/source/aggregation.txt b/source/aggregation.txt index 1090860e2..507bf0055 100644 --- a/source/aggregation.txt +++ b/source/aggregation.txt @@ -29,7 +29,7 @@ Aggregation Operations Overview -------- -In this guide, you can learn how to use the MongoDB Node.js Driver to perform +In this guide, you can learn how to use the {+driver-long+} to perform **aggregation operations**. Aggregation operations process data in your MongoDB collections and return @@ -98,7 +98,6 @@ Consider the following :manual:`limitations - Returned documents must not violate the :manual:`BSON document size limit ` of 16 megabytes. - - Pipeline stages have a memory limit of 100 megabytes by default. If required, you can exceed this limit by enabling the `AllowDiskUse `__ diff --git a/source/aggregation/pipeline-stages.txt b/source/aggregation/pipeline-stages.txt index e25c4d169..48a91f723 100644 --- a/source/aggregation/pipeline-stages.txt +++ b/source/aggregation/pipeline-stages.txt @@ -1,8 +1,8 @@ .. _node-aggregation-pipeline-stages: -=============== +=========================== Aggregation Pipeline Stages -=============== +=========================== .. contents:: On this page :local: @@ -16,14 +16,17 @@ Aggregation Pipeline Stages .. meta:: :keywords: node.js, code example, transform, pipeline - :description: Learn the different possible stages of the aggregation pipeline in the {+driver-short+}. + :description: Learn the different possible stages of the aggregation pipeline in the Node.js Driver. Overview ------------ -On this page, you can learn how to create an aggregation pipeline and pipeline +In this guide, you can learn how to create an aggregation pipeline and pipeline stages by using methods in the {+driver-short+}. +To learn more about the aggregation stages supported by the Node.js Driver, see +:ref:`Aggregation Pipeline Stages `. + Build an Aggregation Pipeline ----------------------------- @@ -58,7 +61,6 @@ method. See the following examples to learn more about each of these approaches. { $group: { ... } } ]); - Aggregation Stage Methods ------------------------- @@ -95,17 +97,17 @@ manual. * - :manual:`$changeStream ` - Returns a change stream cursor for the collection. 
- Instead of being passed to the ``.aggregate()`` method, - ``$changeStream`` uses the ``.watch()`` method on a ``collection`` + Instead of being passed to the ``aggregate()`` method, + ``$changeStream`` uses the ``watch()`` method on a ``Collection`` object. * - :manual:`$changeStreamSplitLargeEvent ` - Splits large change stream events that exceed 16 MB into smaller fragments returned in a change stream cursor. - Instead of being passed to the ``.aggregate()`` method, - ``$changeStreamSplitLargeEvent`` uses the ``.watch()`` method on a - ``collection`` object. + Instead of being passed to the ``aggregate()`` method, + ``$changeStreamSplitLargeEvent`` uses the ``watch()`` method on a + ``Collection`` object. * - :manual:`$collStats ` - Returns statistics regarding a collection or view. @@ -312,7 +314,8 @@ To learn more about creating pipeline stages, see :manual:`Aggregation Stages For more information about the methods and classes used on this page, see the following API documentation: -- `Collection() `__ -- `aggregate() `__ -- `AggregateOptions `__ +- `Collection <{+api+}/classes/Collection.html>`__ +- `aggregate() <{+api+}/classes/Collection.html#aggregate>`__ +- `watch() <{+api+}/classes/Collection.html#watch>`__ +- `AggregateOptions <{+api+}/interfaces/AggregateOptions.html>`__ From c4f6c02b1a74ef7a62b9ad701f33ba740df71012 Mon Sep 17 00:00:00 2001 From: Melanie Ballard Date: Mon, 30 Jun 2025 12:28:35 -0400 Subject: [PATCH 09/17] DOCSP=50497 final changes --- source/aggregation/pipeline-stages.txt | 3 --- 1 file changed, 3 deletions(-) diff --git a/source/aggregation/pipeline-stages.txt b/source/aggregation/pipeline-stages.txt index 48a91f723..162e6ecfd 100644 --- a/source/aggregation/pipeline-stages.txt +++ b/source/aggregation/pipeline-stages.txt @@ -24,9 +24,6 @@ Overview In this guide, you can learn how to create an aggregation pipeline and pipeline stages by using methods in the {+driver-short+}. -To learn more about the aggregation stages supported by the Node.js Driver, see -:ref:`Aggregation Pipeline Stages `. - Build an Aggregation Pipeline ----------------------------- From f07410bfeac02f6629a654e3ed0c9d7a49781dd2 Mon Sep 17 00:00:00 2001 From: Melanie Ballard Date: Mon, 30 Jun 2025 13:16:58 -0400 Subject: [PATCH 10/17] revert "Add redirect for quick start landing page (#1171)" This reverts commit e7a22e508890ba9f335c25ca2662340f2df0c19e. revert commit --- config/redirects | 1 - 1 file changed, 1 deletion(-) diff --git a/config/redirects b/config/redirects index fcf34c60a..08f8e2def 100644 --- a/config/redirects +++ b/config/redirects @@ -29,7 +29,6 @@ raw: ${prefix}/stable -> ${base}/current/ # Comprehensive Coverage Redirects -[v6.5-master]: ${prefix}/${version}/quick-start/ -> ${base}/${version}/get-started/ [v6.5-master]: ${prefix}/${version}/quick-start/download-and-install/ -> ${base}/${version}/get-started/ [v6.5-master]: ${prefix}/${version}/quick-start/create-a-deployment/ -> ${base}/${version}/get-started/ [v6.5-master]: ${prefix}/${version}/quick-start/create-a-connection-string/ -> ${base}/${version}/get-started/ From 99cfdd98b0724b9443e3e2c1ad0cf23b57b43663 Mon Sep 17 00:00:00 2001 From: Melanie Ballard Date: Mon, 30 Jun 2025 13:19:22 -0400 Subject: [PATCH 11/17] Revert "DOCSP-41784 Support for Explicit Resource Management (#1165)" This reverts commit 8f0f3432d93eedef9fa22e638937d25e1a89b5de. 
--- .backportrc.json | 3 --- source/connect/mongoclient.txt | 2 -- source/crud/query/cursor.txt | 2 -- source/crud/transactions.txt | 2 -- source/crud/transactions/transaction-core.txt | 4 +--- source/includes/explicit-resource-management.rst | 7 ------- source/monitoring-and-logging/change-streams.txt | 2 -- 7 files changed, 1 insertion(+), 21 deletions(-) delete mode 100644 source/includes/explicit-resource-management.rst diff --git a/.backportrc.json b/.backportrc.json index 65257f2fc..1df6bbc2b 100644 --- a/.backportrc.json +++ b/.backportrc.json @@ -5,9 +5,6 @@ // the branches available to backport to "targetBranchChoices": [ "master", - "v6.17", - "v6.16", - "v6.15", "v6.14", "v6.13", "v6.12", diff --git a/source/connect/mongoclient.txt b/source/connect/mongoclient.txt index 5e56b757c..f1b27a261 100644 --- a/source/connect/mongoclient.txt +++ b/source/connect/mongoclient.txt @@ -119,8 +119,6 @@ verify that the connection is successful: Call the ``MongoClient.connect()`` method explicitly if you want to verify that the connection is successful. -.. include:: /includes/explicit-resource-management.rst - To learn more about the {+stable-api+} feature, see the :ref:`{+stable-api+} page `. diff --git a/source/crud/query/cursor.txt b/source/crud/query/cursor.txt index cab9d7d2a..2abaaa62b 100644 --- a/source/crud/query/cursor.txt +++ b/source/crud/query/cursor.txt @@ -170,8 +170,6 @@ and the {+mdb-server+}: :start-after: start close cursor example :end-before: end close cursor example -.. include:: /includes/explicit-resource-management.rst - Abort ~~~~~ diff --git a/source/crud/transactions.txt b/source/crud/transactions.txt index d2e696ec7..5a18b9c0e 100644 --- a/source/crud/transactions.txt +++ b/source/crud/transactions.txt @@ -181,8 +181,6 @@ the Core API: const session = client1.startSession(); client2.db('myDB').collection('myColl').insertOne({ name: 'Jane Eyre' }, { session }); -.. include:: /includes/explicit-resource-management.rst - To see a fully runnable example that uses this API, see the :ref:`node-usage-core-txn` usage example. diff --git a/source/crud/transactions/transaction-core.txt b/source/crud/transactions/transaction-core.txt index c4ef5d60c..bf8d3714e 100644 --- a/source/crud/transactions/transaction-core.txt +++ b/source/crud/transactions/transaction-core.txt @@ -148,8 +148,6 @@ This example code performs a transaction through the following actions: :start-after: start placeOrder :end-before: end placeOrder -.. include:: /includes/explicit-resource-management.rst - .. _nodejs-transaction-example-payment-result: Transaction Results @@ -167,7 +165,7 @@ order ``_id`` appended to the orders field: "_id": 98765, "orders": [ "61dc..." - ] + ] } The ``inventory`` collection contains updated quantities for the diff --git a/source/includes/explicit-resource-management.rst b/source/includes/explicit-resource-management.rst deleted file mode 100644 index 393b0e06b..000000000 --- a/source/includes/explicit-resource-management.rst +++ /dev/null @@ -1,7 +0,0 @@ -.. tip:: Explicit Resource Management - - The {+driver-short+} natively supports explicit resource management for - ``MongoClient``, ``ClientSession``, ``ChangeStreams``, and cursors. This - feature is experimental and subject to change. To learn how to use explicit - resource management, see the `v6.9 Release Notes. 
- `__ \ No newline at end of file diff --git a/source/monitoring-and-logging/change-streams.txt b/source/monitoring-and-logging/change-streams.txt index d0c573eb0..a99f4b27a 100644 --- a/source/monitoring-and-logging/change-streams.txt +++ b/source/monitoring-and-logging/change-streams.txt @@ -135,8 +135,6 @@ the ``insertDB`` database and prints change events as they occur: :language: javascript :linenos: -.. include:: /includes/explicit-resource-management.rst - When you run this code and then make a change to the ``haikus`` collection, such as performing an insert or delete operation, you can see the change event document printed in your terminal. From a6e5c27e74814f09c414dcb934e4ec1ce2a4cb3c Mon Sep 17 00:00:00 2001 From: Melanie Ballard Date: Mon, 30 Jun 2025 13:22:42 -0400 Subject: [PATCH 12/17] DOCSP-50497 removing commits --- source/aggregation/pipeline-stages.txt | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/source/aggregation/pipeline-stages.txt b/source/aggregation/pipeline-stages.txt index 162e6ecfd..ab76fca88 100644 --- a/source/aggregation/pipeline-stages.txt +++ b/source/aggregation/pipeline-stages.txt @@ -22,7 +22,7 @@ Overview ------------ In this guide, you can learn how to create an aggregation pipeline and pipeline -stages by using methods in the {+driver-short+}. +stages by using methods in the {+driver-long+}. Build an Aggregation Pipeline ----------------------------- From 1b44eecfa4650aa2452c6e7462a9d7f6258614cf Mon Sep 17 00:00:00 2001 From: Melanie Ballard Date: Mon, 30 Jun 2025 13:40:03 -0400 Subject: [PATCH 13/17] DOCSP-50497 remove capital in aggregate method --- source/aggregation.txt | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/source/aggregation.txt b/source/aggregation.txt index 507bf0055..72d7e6de7 100644 --- a/source/aggregation.txt +++ b/source/aggregation.txt @@ -102,7 +102,7 @@ Consider the following :manual:`limitations you can exceed this limit by enabling the `AllowDiskUse `__ property of the ``AggregateOptions`` object that you pass to the - ``Aggregate()`` method. + ``aggregate()`` method. Additional information ---------------------- From b75194c2a90105a3325d94e9f7ae04f00178320e Mon Sep 17 00:00:00 2001 From: Melanie Ballard Date: Tue, 1 Jul 2025 09:31:50 -0400 Subject: [PATCH 14/17] DOCSP-50497 edits --- source/aggregation.txt | 20 +++++++++--------- source/aggregation/pipeline-stages.txt | 28 +++++++++++++------------- 2 files changed, 25 insertions(+), 23 deletions(-) diff --git a/source/aggregation.txt b/source/aggregation.txt index 72d7e6de7..f80823e40 100644 --- a/source/aggregation.txt +++ b/source/aggregation.txt @@ -37,7 +37,8 @@ computed results. The MongoDB Aggregation framework is modeled on the concept of data processing pipelines. Documents enter a pipeline comprised of one or more stages, and this pipeline transforms the documents into an aggregated result. -To learn more about the aggregation stages supported by the Node.js Driver, see :ref:`Aggregation Pipeline Stages `. +To learn more about the aggregation stages supported by the {+driver-short+}, +see :ref:` `. .. _node-aggregation-tutorials: @@ -94,10 +95,11 @@ that allows you to transform and manipulate your data. Server Limitations ------------------ -Consider the following :manual:`limitations ` when performing aggregation operations: +Consider the following :manual:`limitations +` when performing aggregation operations: -- Returned documents must not violate the :manual:`BSON document size limit ` - of 16 megabytes. 
+- Returned documents must not violate the :manual:`BSON document size limit + ` of 16 megabytes. - Pipeline stages have a memory limit of 100 megabytes by default. If required, you can exceed this limit by enabling the `AllowDiskUse `__ @@ -107,9 +109,9 @@ Consider the following :manual:`limitations Additional information ---------------------- -To view a full list of expression operators, see -:manual:`Aggregation Operators `. +To view a full list of expression operators, see :manual:`Aggregation Operators +` in the {+mdb-server+} manual. -To learn about explaining MongoDB aggregation operations, see -:manual:`Explain Results ` and -:manual:`Query Plans `. +To learn about explaining MongoDB aggregation operations, see :manual:`Explain +Results ` and :manual:`Query Plans +` in the {+mdb-server+} manual. diff --git a/source/aggregation/pipeline-stages.txt b/source/aggregation/pipeline-stages.txt index ab76fca88..5f12216ed 100644 --- a/source/aggregation/pipeline-stages.txt +++ b/source/aggregation/pipeline-stages.txt @@ -38,25 +38,25 @@ method. See the following examples to learn more about each of these approaches. .. code-block:: javascript - // Defines the aggregation pipeline - const pipeline = [ - { $match: { ... } }, - { $group: { ... } } - ]; + // Defines the aggregation pipeline + const pipeline = [ + { $match: { ... } }, + { $group: { ... } } + ]; - // Executes the aggregation pipeline - const results = collection.aggregate(pipeline); + // Executes the aggregation pipeline + const results = collection.aggregate(pipeline); .. tab:: Direct Aggregation :tabid: pipeline-direct .. code-block:: javascript - // Defines and executes the aggregation pipeline - collection.aggregate([ - { $match: { ... } }, - { $group: { ... } } - ]); + // Defines and executes the aggregation pipeline + collection.aggregate([ + { $match: { ... } }, + { $group: { ... } } + ]); Aggregation Stage Methods ------------------------- @@ -115,7 +115,7 @@ manual. * - :manual:`$currentOp ` - Returns a stream of documents containing information on active and - dormant operations as well as inactive sessions that are holding locks as + dormant operations and any inactive sessions that are holding locks as part of a transaction. * - :manual:`$densify ` @@ -303,7 +303,7 @@ API Documentation ~~~~~~~~~~~~~~~~~ To learn more about assembling an aggregation pipeline, see :manual:`Aggregation -Pipeline ` in the MongoDB Server manual. +Pipeline ` in the {+mdb-server+} manual. To learn more about creating pipeline stages, see :manual:`Aggregation Stages ` in the MongoDB Server manual. From bc6bff590a17833e0a75f3a4d05a7c4f0790e24f Mon Sep 17 00:00:00 2001 From: Melanie Ballard Date: Tue, 1 Jul 2025 09:46:14 -0400 Subject: [PATCH 15/17] DOCSP-50497 tweaks --- source/aggregation.txt | 14 ++++++------ source/aggregation/pipeline-stages.txt | 30 +++++++++++++------------- 2 files changed, 22 insertions(+), 22 deletions(-) diff --git a/source/aggregation.txt b/source/aggregation.txt index f80823e40..d1509fd75 100644 --- a/source/aggregation.txt +++ b/source/aggregation.txt @@ -33,12 +33,12 @@ In this guide, you can learn how to use the {+driver-long+} to perform **aggregation operations**. Aggregation operations process data in your MongoDB collections and return -computed results. The MongoDB Aggregation framework is modeled on the -concept of data processing pipelines. Documents enter a pipeline comprised of one or -more stages, and this pipeline transforms the documents into an aggregated result. +computed results. 
The MongoDB Aggregation framework is modeled on the concept of +data processing pipelines. Documents enter a pipeline comprised of one or more +stages, and this pipeline transforms the documents into an aggregated result. To learn more about the aggregation stages supported by the {+driver-short+}, -see :ref:` `. +see :ref:`node-aggregation-pipeline-stages`. .. _node-aggregation-tutorials: @@ -65,9 +65,9 @@ Compare Aggregation and Find Operations --------------------------------------- The following table lists the different tasks you can perform with find -operations compared to what you can achieve with aggregation -operations. The aggregation framework provides expanded functionality -that allows you to transform and manipulate your data. +operations compared to what you can achieve with aggregation operations. The +aggregation framework provides expanded functionality that allows you to +transform and manipulate your data. .. list-table:: :header-rows: 1 diff --git a/source/aggregation/pipeline-stages.txt b/source/aggregation/pipeline-stages.txt index 5f12216ed..b6c4343a5 100644 --- a/source/aggregation/pipeline-stages.txt +++ b/source/aggregation/pipeline-stages.txt @@ -62,11 +62,10 @@ Aggregation Stage Methods ------------------------- The following table lists the stages in the aggregation pipeline. To learn more -about an aggregation stage and see a code example in a Node.js application, +about an aggregation stage and see a code example in a {+environment+} application, follow the link from the stage name to its reference page in the {+mdb-server+} manual. - .. list-table:: :header-rows: 1 :widths: 30 70 @@ -98,7 +97,8 @@ manual. ``$changeStream`` uses the ``watch()`` method on a ``Collection`` object. - * - :manual:`$changeStreamSplitLargeEvent ` + * - :manual:`$changeStreamSplitLargeEvent + ` - Splits large change stream events that exceed 16 MB into smaller fragments returned in a change stream cursor. @@ -119,7 +119,8 @@ manual. part of a transaction. * - :manual:`$densify ` - - Creates new documents in a sequence of documents where certain values in a field are missing. + - Creates new documents in a sequence of documents where certain values in a + field are missing. * - :manual:`$documents ` - Returns literal documents from input expressions. @@ -142,21 +143,20 @@ manual. results of the recursive search for that document. * - :manual:`$group ` - - Groups input documents by a specified identifier expression - and applies the accumulator expressions, if specified, to - each group. Consumes all input documents and outputs one - document per each distinct group. The output documents - contain only the identifier field and, if specified, accumulated - fields. + - Groups input documents by a specified identifier expression and applies + the accumulator expressions, if specified, to each group. Consumes all + input documents and outputs one document per each distinct group. The + output documents contain only the identifier field and, if specified, + accumulated fields. * - :manual:`$indexStats ` - Returns statistics regarding the use of each index for the collection. * - :manual:`$limit ` - - Passes the first *n* documents unmodified to the pipeline, - where *n* is the specified limit. For each input document, - outputs either one document (for the first *n* documents) or - zero documents (after the first *n* documents). + - Passes the first *n* documents unmodified to the pipeline, where *n* is + the specified limit. 
For each input document, outputs either one document + (for the first *n* documents) or zero documents (after the first *n* + documents). * - :manual:`$listSampledQueries ` - Lists sampled queries for all collections or a specific collection. Only @@ -306,7 +306,7 @@ To learn more about assembling an aggregation pipeline, see :manual:`Aggregation Pipeline ` in the {+mdb-server+} manual. To learn more about creating pipeline stages, see :manual:`Aggregation Stages -` in the MongoDB Server manual. +` in the {+mdb-server+} manual. For more information about the methods and classes used on this page, see the following API documentation: From ac6f5eb4358fc6eedfc53a58ba96bbc742b87bea Mon Sep 17 00:00:00 2001 From: Melanie Ballard Date: Tue, 1 Jul 2025 20:44:30 -0400 Subject: [PATCH 16/17] DOCSP-50497 tech review comments --- source/aggregation/pipeline-stages.txt | 21 +++++++++++---------- 1 file changed, 11 insertions(+), 10 deletions(-) diff --git a/source/aggregation/pipeline-stages.txt b/source/aggregation/pipeline-stages.txt index b6c4343a5..4da389a8b 100644 --- a/source/aggregation/pipeline-stages.txt +++ b/source/aggregation/pipeline-stages.txt @@ -45,7 +45,7 @@ method. See the following examples to learn more about each of these approaches. ]; // Executes the aggregation pipeline - const results = collection.aggregate(pipeline); + const results = async collection.aggregate(pipeline); .. tab:: Direct Aggregation :tabid: pipeline-direct @@ -53,7 +53,7 @@ method. See the following examples to learn more about each of these approaches. .. code-block:: javascript // Defines and executes the aggregation pipeline - collection.aggregate([ + const results = async collection.aggregate([ { $match: { ... } }, { $group: { ... } } ]); @@ -91,20 +91,21 @@ manual. of buckets. * - :manual:`$changeStream ` - - Returns a change stream cursor for the collection. + - Returns a change stream cursor for the collection. Must be the first stage + in the pipeline. - Instead of being passed to the ``aggregate()`` method, - ``$changeStream`` uses the ``watch()`` method on a ``Collection`` - object. + ``$changeStream`` returns an ``AggregationCursor`` when passed to the + ``aggregate()`` method and a ``ChangeStreamCursor`` when passed to the + ``watch()`` method. * - :manual:`$changeStreamSplitLargeEvent ` - Splits large change stream events that exceed 16 MB into smaller fragments returned - in a change stream cursor. + in a change stream cursor. Must be the first stage in the pipeline. - Instead of being passed to the ``aggregate()`` method, - ``$changeStreamSplitLargeEvent`` uses the ``watch()`` method on a - ``Collection`` object. + ``$changeStreamSplitLargeEvent`` returns an ``AggregationCursor`` when + passed to the ``aggregate()`` method and a ``ChangeStreamCursor`` when + passed to the ``watch()`` method. * - :manual:`$collStats ` - Returns statistics regarding a collection or view. From 79b06dbb565ebc7c3dc6336c6294a17cfb8693a6 Mon Sep 17 00:00:00 2001 From: Melanie Ballard Date: Wed, 2 Jul 2025 09:51:03 -0400 Subject: [PATCH 17/17] DOCSP-50497 edits --- source/aggregation/pipeline-stages.txt | 8 ++++---- 1 file changed, 4 insertions(+), 4 deletions(-) diff --git a/source/aggregation/pipeline-stages.txt b/source/aggregation/pipeline-stages.txt index 4da389a8b..eb60fde27 100644 --- a/source/aggregation/pipeline-stages.txt +++ b/source/aggregation/pipeline-stages.txt @@ -45,7 +45,7 @@ method. See the following examples to learn more about each of these approaches. 
]; // Executes the aggregation pipeline - const results = async collection.aggregate(pipeline); + const results = await collection.aggregate(pipeline); .. tab:: Direct Aggregation :tabid: pipeline-direct @@ -53,7 +53,7 @@ method. See the following examples to learn more about each of these approaches. .. code-block:: javascript // Defines and executes the aggregation pipeline - const results = async collection.aggregate([ + const results = await collection.aggregate([ { $match: { ... } }, { $group: { ... } } ]); @@ -101,12 +101,12 @@ manual. * - :manual:`$changeStreamSplitLargeEvent ` - Splits large change stream events that exceed 16 MB into smaller fragments returned - in a change stream cursor. Must be the first stage in the pipeline. + in a change stream cursor. Must be the last stage in the pipeline. ``$changeStreamSplitLargeEvent`` returns an ``AggregationCursor`` when passed to the ``aggregate()`` method and a ``ChangeStreamCursor`` when passed to the ``watch()`` method. - + * - :manual:`$collStats ` - Returns statistics regarding a collection or view.
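Taken together, the pipeline skeletons in the final version of
``pipeline-stages.txt`` can be expanded into a complete script. The following
is a minimal sketch: the ``sample_mflix`` database, ``movies`` collection,
field names, and connection string are placeholder assumptions, and
``allowDiskUse`` is the ``AggregateOptions`` property described on the
overview page.

.. code-block:: javascript

   const { MongoClient } = require('mongodb');

   async function run() {
     const client = new MongoClient('mongodb://localhost:27017');
     try {
       const movies = client.db('sample_mflix').collection('movies');

       // $match filters the documents; $group outputs one document
       // per distinct value of the `rated` field
       const pipeline = [
         { $match: { year: { $gte: 2000 } } },
         { $group: { _id: '$rated', count: { $sum: 1 } } }
       ];

       // allowDiskUse lets stages spill to disk beyond the default
       // 100 megabyte per-stage memory limit
       const cursor = movies.aggregate(pipeline, { allowDiskUse: true });
       for await (const doc of cursor) {
         console.log(doc);
       }
     } finally {
       await client.close();
     }
   }

   run().catch(console.error);

Iterating the cursor with ``for await`` streams results as they arrive;
calling ``toArray()`` instead would buffer the entire result set in memory.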