
fix(qdrant): add pandas dep, fix cross-repo ID collisions, fix comment indexing #2323

Open
gvago wants to merge 2 commits into main from fix/qdrant-pandas-and-ids

Conversation

@gvago

@gvago gvago commented Apr 14, 2026

Summary

  • Missing pandas dependency: The Qdrant indexing path (_update_qdrant_with_issues) imports and uses pandas, but it was not listed in requirements.txt. Added pandas to the commented qdrant extras section so users who uncomment qdrant-client know to also uncomment pandas.
  • Cross-repo ID collisions: Qdrant point IDs were derived from uuid5(NAMESPACE_DNS, row["id"]) where row["id"] is just issue_123.issue -- no repo context. When indexing multiple repos into the same collection, different repos with the same issue number silently overwrite each other. Fixed by including the repo identifier in the UUID seed: uuid5(NAMESPACE_DNS, f"{repo_name_for_index}:{row['id']}").
  • Off-by-one comment URL indexing: Comment IDs were constructed with 1-based indices (j+1) during indexing, but the retrieval path at line 383 uses the parsed number as a 0-based list index into issue.get_comments(). This caused the wrong comment URL to be linked. Changed all three vector DB paths (pinecone, lancedb, qdrant) to use 0-based indexing (j) to match retrieval.
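The collision described in the second bullet can be reproduced with nothing but the standard library. This is a minimal sketch of the before/after seeding; the repo identifiers are hypothetical placeholders:

```python
import uuid

# Two different repos that both contain an issue #123.
row_id = "issue_123.issue"
repo_a, repo_b = "org/repo-a", "org/repo-b"  # hypothetical repo identifiers

# Old scheme: the seed is only the row ID, so the point IDs collide.
old_a = uuid.uuid5(uuid.NAMESPACE_DNS, row_id).hex
old_b = uuid.uuid5(uuid.NAMESPACE_DNS, row_id).hex
assert old_a == old_b  # same ID -> the second repo silently overwrites the first

# New scheme: the repo identifier is part of the seed, so the IDs stay distinct.
new_a = uuid.uuid5(uuid.NAMESPACE_DNS, f"{repo_a}:{row_id}").hex
new_b = uuid.uuid5(uuid.NAMESPACE_DNS, f"{repo_b}:{row_id}").hex
assert new_a != new_b
```

Because `uuid5` is deterministic, re-indexing the same repo still produces the same point ID, so upserts within one repo keep their overwrite semantics.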

Test plan

  • Verify pandas import succeeds when qdrant extras are installed
  • Index two different repos into the same Qdrant collection and verify no point ID collisions (same issue numbers across repos produce different UUIDs)
  • Index an issue with multiple comments, query for a comment match, and verify the returned URL points to the correct comment (not off by one)
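The third check in the test plan boils down to a round trip between the stored ID suffix and the comment list. A small sketch, using a plain list as a stand-in for `issue.get_comments()`:

```python
# Hypothetical stand-in for issue.get_comments(): a plain 0-based list.
comments = ["first comment", "second comment", "third comment"]

# Indexing (after the fix): the stored suffix is the 0-based position j.
stored_ids = [f"issue_123.comment_{j}" for j, _ in enumerate(comments)]

# Retrieval: parse the suffix and use it directly as a list index.
for stored_id, expected in zip(stored_ids, comments):
    n = int(stored_id.rsplit("comment_", 1)[1])
    # With the old 1-based suffix (j+1) this lookup would land one
    # comment too far, or overflow on the last comment.
    assert comments[n] == expected
```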

…t indexing

1. Added pandas to the qdrant extras in requirements.txt - the Qdrant
   indexing path (via _update_qdrant_with_issues) imports and uses pandas
   but it was missing from the dependency list.

2. Fixed cross-repo ID collisions in Qdrant point IDs by including the
   repo name in the UUID seed. Previously uuid5(NAMESPACE_DNS, row["id"])
   would generate identical point IDs for the same issue number across
   different repos, causing silent overwrites when indexing multiple repos
   into the same collection.

3. Fixed off-by-one error in comment URL indexing. Comment IDs were
   constructed with 1-based indices (j+1) but retrieved as 0-based list
   indices, causing the wrong comment URL to be returned. Changed all
   three vector DB paths (pinecone, lancedb, qdrant) to use 0-based
   indexing (j) to match the retrieval code.
@qodo-free-for-open-source-projects

Review Summary by Qodo

Fix Qdrant pandas dependency, cross-repo collisions, and comment indexing

🐞 Bug fix


Walkthroughs

Description
• Fixed cross-repo ID collisions in Qdrant by including repo name in UUID seed
• Corrected off-by-one error in comment indexing across all vector DB paths
• Added missing pandas dependency to qdrant extras in requirements.txt
Diagram
flowchart LR
  A["Comment Indexing<br/>j+1 to j"] --> B["0-based Index<br/>Matches Retrieval"]
  C["UUID Generation<br/>row[id] only"] --> D["UUID with Repo<br/>repo_name:row[id]"]
  E["Missing pandas<br/>in requirements.txt"] --> F["Added pandas<br/>to qdrant extras"]
  B --> G["Correct Comment URLs"]
  D --> G
  F --> G


File Changes

1. pr_agent/tools/pr_similar_issue.py 🐞 Bug fix +4/-4

Fix comment indexing and Qdrant UUID generation

• Changed comment ID indexing from 1-based (j+1) to 0-based (j) in three vector DB update methods:
 _update_index_with_issues, _update_table_with_issues, and _update_qdrant_with_issues
• Fixed Qdrant UUID generation to include the repo name in the seed: `uuid5(NAMESPACE_DNS, f"{repo_name_for_index}:{row['id']}")` instead of just `row["id"]`
• Ensures consistent 0-based indexing for comment retrieval and prevents cross-repo ID collisions

pr_agent/tools/pr_similar_issue.py


2. requirements.txt Dependencies +1/-0

Add missing pandas dependency for qdrant

• Added pandas as a commented dependency under the qdrant extras section
• Includes explanatory comment noting that pandas is required by the qdrant indexing path

requirements.txt



@qodo-free-for-open-source-projects

qodo-free-for-open-source-projects bot commented Apr 14, 2026

Code Review by Qodo

🐞 Bugs (2)   📘 Rule violations (1)   📎 Requirement gaps (0)
🐞 Bugs: ≡ Correctness (1), ☼ Reliability (1)
📘 Rule violations: ⚙ Maintainability (1)



Action required

1. requirements.txt adds commented pandas 📘
Description
The PR introduces a new commented-out dependency line for pandas, which is inactive code and
violates the no-commented-out-code requirement. This can also cause runtime failures for users who
enable Qdrant without actually installing pandas.
Code

requirements.txt[42]

+# pandas  # required by qdrant indexing path
Evidence
PR Compliance ID 2 forbids adding commented-out code/blocks; the PR adds `# pandas  # required by qdrant indexing path` as a new commented line in `requirements.txt`.

Rule 2: No Dead or Commented-Out Code
requirements.txt[42-42]

Agent prompt
The issue below was found during a code review. Follow the provided context and guidance below and implement a solution

## Issue description
A new commented-out dependency line (`# pandas ...`) was added to `requirements.txt`, violating the requirement to avoid commented-out code and leaving the dependency inactive.
## Issue Context
The code path `_update_qdrant_with_issues` imports/uses `pandas`, so leaving it commented relies on manual user action and can still lead to runtime `ImportError`.
## Fix Focus Areas
- requirements.txt[42-42]

ⓘ Copy this prompt and use it to remediate the issue with your preferred AI generation tools


2. PointStruct call exceeds 120 characters 📘
Description
The updated PointStruct(...) construction is a single very long line and is likely to violate
enforced Ruff line-length/formatting rules. This can cause CI lint failures and reduces
readability/maintainability.
Code

pr_agent/tools/pr_similar_issue.py[676]

+                PointStruct(id=uuid.uuid5(uuid.NAMESPACE_DNS, f"{repo_name_for_index}:{row['id']}").hex, vector=row["vector"], payload={"id": row["id"], "text": row["text"], "metadata": row["metadata"]})
Evidence
PR Compliance ID 19 requires changes to comply with lint/format tooling (including line-length
constraints); the modified PointStruct(...) call is formatted as a single long line in the changed
code.

pr_agent/tools/pr_similar_issue.py[676-676]
Best Practice: Learned patterns

Agent prompt
The issue below was found during a code review. Follow the provided context and guidance below and implement a solution

## Issue description
The modified `PointStruct(...)` call is on one long line and is likely to violate the repository's Ruff/formatting constraints.
## Issue Context
This code is in the Qdrant upsert path; formatting should comply with Ruff to avoid CI failures.
## Fix Focus Areas
- pr_agent/tools/pr_similar_issue.py[676-676]

ⓘ Copy this prompt and use it to remediate the issue with your preferred AI generation tools


3. Qdrant upsert creates duplicates 🐞
Description
Changing Qdrant PointStruct.id generation to include repo_name_for_index alters point IDs for
existing repos, so re-ingesting into an existing collection will add a second copy of each point
instead of overwriting the old ones. This can reduce result diversity (duplicates consume top_k)
because querying/parsing uses payload["id"], not the Qdrant point ID.
Code

pr_agent/tools/pr_similar_issue.py[676]

+                PointStruct(id=uuid.uuid5(uuid.NAMESPACE_DNS, f"{repo_name_for_index}:{row['id']}").hex, vector=row["vector"], payload={"id": row["id"], "text": row["text"], "metadata": row["metadata"]})
Evidence
The PR changes Qdrant point ID derivation to use a different UUID seed, so old points (seeded by
just row["id"]) and new points (seeded by repo_name_for_index + row["id"]) cannot overwrite each
other on upsert. The re-index path (e.g., when force_update_dataset is enabled) calls
_update_qdrant_with_issues which upserts without deleting existing points, and query handling
identifies items by payload["id"], so both old and new points remain eligible and can surface as
duplicates.

pr_agent/tools/pr_similar_issue.py[212-229]
pr_agent/tools/pr_similar_issue.py[672-679]
pr_agent/tools/pr_similar_issue.py[345-373]

Agent prompt
The issue below was found during a code review. Follow the provided context and guidance below and implement a solution

## Issue description
Qdrant point IDs changed (UUID seed now includes `repo_name_for_index`), so re-ingesting an already-indexed repo into an existing collection will create duplicate points rather than overwrite the existing ones.
### Issue Context
- `_update_qdrant_with_issues()` always calls `qdrant.upsert(...)` with newly generated point IDs.
- When re-indexing (e.g., `force_update_dataset`), the code does not delete existing points for that repo first.
- Querying later reads `payload['id']`, so duplicates remain queryable and can consume the top_k.
### Fix Focus Areas
- pr_agent/tools/pr_similar_issue.py[212-257]
- pr_agent/tools/pr_similar_issue.py[587-679]
### What to change
1. Before calling `_update_qdrant_with_issues(..., ingest=True)` for an already-existing collection, delete existing points for that repo using a filter on `metadata.repo == repo_name_for_index` (and optionally `level in {issue,comment}` if needed).
2. Alternatively (or additionally), set `PointStruct.id` to a stable, human-readable, repo-scoped string like `f"{repo_name_for_index}:{row['id']}"` (Qdrant supports string point IDs) to avoid needing UUIDs and make overwrite semantics explicit.
3. Add a small log/metric indicating how many points were deleted before reindexing so operators can validate migration behavior.
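The migration concern behind option 1 can be checked without a running Qdrant instance: deriving the legacy (unprefixed) IDs for the rows being re-indexed yields exactly the set of orphaned points a pre-reindex cleanup would need to delete. A stdlib-only sketch with hypothetical repo and row names:

```python
import uuid

repo = "org/repo-a"  # hypothetical repo identifier
rows = [{"id": "issue_1.issue"}, {"id": "issue_2.issue"}]

# IDs as generated before this PR (seeded by row ID only).
legacy_ids = {uuid.uuid5(uuid.NAMESPACE_DNS, r["id"]).hex for r in rows}
# IDs as generated after this PR (seeded by repo-prefixed row ID).
new_ids = {uuid.uuid5(uuid.NAMESPACE_DNS, f"{repo}:{r['id']}").hex for r in rows}

# The two ID sets are disjoint, so upserting with new IDs cannot overwrite
# the old points -- the legacy set is what a migration pass must delete.
assert legacy_ids.isdisjoint(new_ids)
```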

ⓘ Copy this prompt and use it to remediate the issue with your preferred AI generation tools



Remediation recommended

4. Comment IDs not migrated 🐞
Description
Comment record IDs are now generated as 0-based (comment_0, comment_1, ...) but existing indexed
data in Pinecone/LanceDB/Qdrant will still contain the old 1-based IDs unless the dataset is
rebuilt/migrated. Because the query path parses the stored ID suffix and uses it directly as a list
index into issue.get_comments(), mixed/old data will continue to map to the wrong comment.
Code

pr_agent/tools/pr_similar_issue.py[448]

+                                id=issue_key + ".comment_" + str(j),
Evidence
The indexing change switches the stored ID suffix to str(j) (0-based). The query path parses the
comment suffix from the stored ID and uses it directly to index into list(issue.get_comments()),
so correctness depends on the stored IDs matching the indexing convention. However, the
existing-index update flows add only new issues and do not rewrite existing vectors, so previously
indexed 1-based comment IDs will remain in the DB unless a rebuild/migration is performed.

pr_agent/tools/pr_similar_issue.py[438-455]
pr_agent/tools/pr_similar_issue.py[290-317]
pr_agent/tools/pr_similar_issue.py[375-384]
pr_agent/tools/pr_similar_issue.py[84-111]

Agent prompt
The issue below was found during a code review. Follow the provided context and guidance below and implement a solution

## Issue description
Indexing now stores comment IDs as 0-based (`comment_0`...), but existing vector DB entries created with the old 1-based scheme remain unless the dataset is rebuilt. Since retrieval parses the stored suffix and uses it as a 0-based list index into `issue.get_comments()`, older entries will still map to the wrong comment.
### Issue Context
- Indexing paths now create IDs with `str(j)`.
- Querying parses the suffix from the stored ID (`...comment_N`) and uses `N` directly.
- Update flows typically only append new issues and do not rewrite existing records.
### Fix Focus Areas
- pr_agent/tools/pr_similar_issue.py[290-399]
- pr_agent/tools/pr_similar_issue.py[401-455]
- pr_agent/tools/pr_similar_issue.py[496-552]
- pr_agent/tools/pr_similar_issue.py[587-650]
### What to change
Choose one (or combine):
1. **Migration by rebuild**: When enabling this fix (e.g., via a new setting flag), delete existing records for the repo (per vector DB) and re-index the full repo so all stored comment IDs follow the new convention.
2. **Backward-compatible retrieval**: When parsing `comment_N`, detect legacy 1-based IDs and translate to 0-based (e.g., store an explicit `comment_index` in metadata at index-time and prefer it over parsing; or implement a safe translation path that avoids shifting new 0-based records).
3. Add a clear operator-facing log message indicating that a one-time rebuild is required to fully apply the comment-index fix to previously indexed data.
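Option 2 above can be sketched as a small resolver that prefers an explicit index stored in metadata and falls back to parsing the ID suffix. The `comment_index` and `legacy_one_based` metadata fields are hypothetical names for illustration, not fields the codebase currently writes:

```python
def resolve_comment_index(record: dict) -> int:
    """Resolve a stored record to a 0-based index into issue.get_comments().

    Prefers an explicit metadata field written at index time; falls back to
    parsing the stored ID suffix for records that predate the field.
    """
    meta = record.get("metadata", {})
    if "comment_index" in meta:
        # New records: the authoritative 0-based index, no parsing needed.
        return meta["comment_index"]
    n = int(record["id"].rsplit("comment_", 1)[1])
    # Legacy records flagged during a migration pass used 1-based suffixes.
    return n - 1 if meta.get("legacy_one_based") else n

# New-style record carrying an explicit index.
assert resolve_comment_index(
    {"id": "issue_9.comment_2", "metadata": {"comment_index": 2}}) == 2
# Legacy 1-based record flagged during migration: suffix 3 -> index 2.
assert resolve_comment_index(
    {"id": "issue_9.comment_3", "metadata": {"legacy_one_based": True}}) == 2
```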

ⓘ Copy this prompt and use it to remediate the issue with your preferred AI generation tools



Previous review results

Review updated until commit d1b7a85


Comment thread requirements.txt Outdated
# pinecone-datasets @ git+https://github.com/mrT23/pinecone-datasets.git@main
# lancedb==0.5.1
# qdrant-client==1.15.1
# pandas # required by qdrant indexing path

Action required

1. requirements.txt adds commented pandas 📘 Rule violation ⚙ Maintainability

The PR introduces a new commented-out dependency line for pandas, which is inactive code and
violates the no-commented-out-code requirement. This can also cause runtime failures for users who
enable Qdrant without actually installing pandas.

Comment thread pr_agent/tools/pr_similar_issue.py Outdated
for row in df.to_dict(orient="records"):
points.append(
- PointStruct(id=uuid.uuid5(uuid.NAMESPACE_DNS, row["id"]).hex, vector=row["vector"], payload={"id": row["id"], "text": row["text"], "metadata": row["metadata"]})
+ PointStruct(id=uuid.uuid5(uuid.NAMESPACE_DNS, f"{repo_name_for_index}:{row['id']}").hex, vector=row["vector"], payload={"id": row["id"], "text": row["text"], "metadata": row["metadata"]})

Action required

3. Qdrant upsert creates duplicates 🐞 Bug ≡ Correctness

Changing Qdrant PointStruct.id generation to include repo_name_for_index alters point IDs for
existing repos, so re-ingesting into an existing collection will add a second copy of each point
instead of overwriting the old ones. This can reduce result diversity (duplicates consume top_k)
because querying/parsing uses payload["id"], not the Qdrant point ID.

…andas comment

- Break PointStruct(...) call into multiple lines to stay within 120-char limit
- Update pandas comment to note it is needed by both qdrant and pinecone paths

Note: the ID generation change in the parent commit is intentional — existing
points with old IDs (without repo prefix) will become orphans and be
overwritten on the next full reindex.

qodo-free-for-open-source-projects bot commented Apr 15, 2026

Persistent review updated to latest commit d1b7a85
