
[18.0][MIG] dms_storage: Migration to 18.0 #410

Open
xaviedoanhduy wants to merge 11 commits into OCA:18.0 from xaviedoanhduy:18.0-mig-dms_storage

Conversation

@xaviedoanhduy

No description provided.

@xaviedoanhduy xaviedoanhduy mentioned this pull request Apr 17, 2025

@wlin-kencove wlin-kencove left a comment


Did some functional testing on runboat.
Looks good to me.


@BhaveshHeliconia BhaveshHeliconia left a comment


functional test OK

@OCA-git-bot

This PR has the approved label and has been created more than 5 days ago. It should therefore be ready to merge by a maintainer (or a PSC member if the concerned addon has no declared maintainer). 🤖

@dnplkndll

Can this be merged?

@Lukwos

Lukwos commented Nov 21, 2025

Any update on this PR? It can be merged, I suppose.

@CariacouP

Hi, any update here? This PR is quite old.

@pabloperez14

pabloperez14 commented Apr 23, 2026

Hi everyone,

Thanks for the great work on this module! While testing large file uploads using dms_storage mapped to an external provider (OneDrive), I noticed a critical bug regarding how the file content is being stored.

Currently, when save_type == "storage", the file is successfully sent to the external storage backend. However, a full copy of the binary is also being saved in the PostgreSQL database inside the content_binary field, causing massive and unnecessary database bloat.

The Cause

This happens because of the fallback logic in the base dms module. In dms/models/dms_file.py, the _update_content_vals method evaluates:

```python
if self.storage_id.save_type in ["file", "attachment"]:
    new_vals["content_file"] = self.content
else:
    new_vals["content_binary"] = self.content and binary  # <--- FALLBACK TRIGGERED
```

Since dms_storage introduces the new save_type "storage", the base method falls into the else clause and assigns the binary to content_binary.

When dms_storage overrides this method in dms_storage/models/dms_file.py to add the external upload logic, it calls super()._update_content_vals(), but it forgets to clear the local binary data returned by the base module.
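To make the failure mode concrete, here is a minimal plain-Python simulation of that fallback (the `Storage` and `DmsFileBase` classes below are simplified stand-ins for illustration, not the real Odoo ORM models):

```python
# Simplified stand-in for dms/models/dms_file.py::_update_content_vals.
# Not real Odoo code: no ORM, just the branching logic of the base method.

class Storage:
    def __init__(self, save_type):
        self.save_type = save_type

class DmsFileBase:
    def __init__(self, storage, content):
        self.storage_id = storage
        self.content = content

    def _update_content_vals(self, vals, binary):
        new_vals = dict(vals)
        if self.storage_id.save_type in ["file", "attachment"]:
            new_vals["content_file"] = self.content
        else:
            # Any save_type the base module does not know about
            # (such as "storage") falls through here, so the full
            # binary ends up in the content_binary DB column.
            new_vals["content_binary"] = self.content and binary
        return new_vals

f = DmsFileBase(Storage("storage"), content=b"...")
vals = f._update_content_vals({}, binary=b"big payload")
print(vals["content_binary"])  # b'big payload' -- duplicated in the DB
```

This is why the bloat appears even though the upload to the external backend itself succeeds: the base method has already stuffed the binary into the returned vals before the override runs.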

Proposed Fix

We can easily prevent this duplication by explicitly clearing both content_binary and content_file when the storage is handled externally.

In dms_storage/models/dms_file.py:

```python
def _update_content_vals(self, vals, binary):
    result = super()._update_content_vals(vals, binary)
    result.update(
        {
            "storage_path": False,
            "storage_backend_id": False,
        }
    )
    if self.storage_id.save_type == "storage":
        storage_path = self.path_names
        if self.storage_path:
            self.storage_id.storage_backend_id.delete(self.storage_path)
        self.storage_id.storage_backend_id.add(storage_path, binary)
        result["storage_path"] = storage_path
        result["storage_backend_id"] = self.storage_id.storage_backend_id.id

        # --- FIX: Prevent local duplication in DB or filestore ---
        result["content_binary"] = False
        result["content_file"] = False

    return result
```
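For illustration, the effect of the fix can be sketched in the same simplified stand-in style (again, not the real Odoo ORM: `Backend`, `Storage`, and `DmsFile` below are placeholder classes, and `_update_content_vals_base` plays the role of `super()`):

```python
# Simplified simulation of the fixed dms_storage override.
# All classes, paths, and the backend add/delete API are illustrative stand-ins.

class Backend:
    def __init__(self):
        self.files = {}
    def add(self, path, binary):
        self.files[path] = binary
    def delete(self, path):
        self.files.pop(path, None)

class Storage:
    def __init__(self, save_type, backend):
        self.save_type = save_type
        self.storage_backend_id = backend
        self.storage_backend_id.id = 1  # pretend the backend has a DB id

class DmsFile:
    def __init__(self, storage, content, path_names):
        self.storage_id = storage
        self.content = content
        self.path_names = path_names
        self.storage_path = False

    def _update_content_vals_base(self, vals, binary):
        # Stand-in for the base dms method, including its fallback.
        new_vals = dict(vals)
        if self.storage_id.save_type in ["file", "attachment"]:
            new_vals["content_file"] = self.content
        else:
            new_vals["content_binary"] = self.content and binary
        return new_vals

    def _update_content_vals(self, vals, binary):
        result = self._update_content_vals_base(vals, binary)
        result.update({"storage_path": False, "storage_backend_id": False})
        if self.storage_id.save_type == "storage":
            storage_path = self.path_names
            if self.storage_path:
                self.storage_id.storage_backend_id.delete(self.storage_path)
            self.storage_id.storage_backend_id.add(storage_path, binary)
            result["storage_path"] = storage_path
            result["storage_backend_id"] = self.storage_id.storage_backend_id.id
            # FIX: clear the local copies so neither the DB column
            # nor the filestore takes the hit.
            result["content_binary"] = False
            result["content_file"] = False
        return result

backend = Backend()
f = DmsFile(Storage("storage", backend), content=b"...", path_names="/docs/report.pdf")
vals = f._update_content_vals({}, binary=b"big payload")
print(vals["content_binary"], vals["storage_path"])  # False /docs/report.pdf
```

Clearing the keys inside the returned vals (rather than writing `False` in a second pass) keeps everything in the single write the base method already performs.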

I wanted to flag this before the PR gets merged, as it defeats the main purpose of externalizing storage if the database is still taking the hit. Let me know what you think!
