FAQ
All signs and solutions will be marked with [UI], [Python], or [CLI] to indicate whether the information pertains to the User Interface, Python Module, or Command Line Tool, respectively.
If you're unable to execute any jobs, this is most often due to the Lapdog Engine not being initialized for a given namespace.
- [UI] When viewing a workspace, the `EXECUTE NEW JOB` button is greyed out, and displays "Lapdog is not initialized for this namespace" when moused over
- [UI] When viewing a workspace, the `Execution Status` reads "Not Ready. Contact Namespace Admin"
- [UI] When viewing a workspace, a popup opens which reads "Lapdog is not initialized for this namespace. Please contact an administrator"
- [Python] When creating a `WorkspaceManager`, you receive the warning: "Gateway does not exist"
- [Python] When using a `WorkspaceManager`, you see the message: "Lapdog Engine Project ld-{engine id} for namespace {namespace} does not support api version frozen"
- [CLI] When executing a job, the command crashes with the error "The existence endpoint could not be found. Project ld-{engine id} may not be initialized. Please contact the namespace admin"
Lapdog requires a new Google Project to be created for each Firecloud Namespace. These projects are called "Engines". A Lapdog Engine manages job execution for all workspaces under that namespace. Only an administrator with access to the underlying Google Billing Account can perform the one-time initialization for a Lapdog Engine. Contact the administrator and ask them to run one of the following solutions:
- [CLI] `lapdog initialize-project`
- [Python] `lapdog.gateway.Gateway.initialize_lapdog_for_project(billing_account_id, firecloud_namespace)`
- [UI] When viewing a workspace, the `Execution Status` reads "Not Registered. Insufficient Permissions"
- [Python] When creating a `WorkspaceManager`, you receive the warning: "Gateway for namespace {namespace} not registered. Please run Gateway.register()"
- [Python, CLI] When executing a job, the command fails with the error "Gateway failed to launch submission" and, above the exception traceback, you see the message "(401) : User has not registered with this Lapdog Engine"
- [UI] Use the `Workspaces` tab on the left to switch to any workspace in the same namespace that you have WRITER level permissions for
  - WARNING: While you register with the entire namespace, you can only run jobs in workspaces that you have WRITER access to
- [Python] Create a `WorkspaceManager` for a workspace that you have WRITER level permissions for, within the same namespace. Call `WorkspaceManager.gateway.register(WorkspaceManager.workspace, WorkspaceManager.bucket_id)`
- [UI] When viewing a workspace, the `Access Level` is "READER"
- [Python, CLI] When executing a job, the command fails with the error "Gateway failed to launch submission" and, above the exception traceback, you see the message "(401) : User lacks read/write permissions to the requested bucket"
- [UI] When executing a job, you get an alert and the lapdog console displays the error described above
You must have WRITER access to a workspace in order to run jobs through the Lapdog Engine. You'll need to contact the workspace owner(s) and request that they add your lapdog service account as a WRITER.
NOTE: You can find your lapdog service account at the bottom of every page in the UI, or by calling `lapdog.gateway.proxy_group_for_user(lapdog.gateway.get_account()) + "@firecloud.org"`
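As a sketch of the formula above, the email is just the proxy group name with the firecloud.org domain appended. The proxy group value below is a made-up placeholder; in practice it would come from `lapdog.gateway.proxy_group_for_user`:

```python
# Sketch: assembling a lapdog service account email per the formula above.
# "example-proxy-group" is a placeholder for the real proxy group name.
def service_account_email(proxy_group: str) -> str:
    """Append the firecloud.org domain to a proxy group name."""
    return proxy_group + "@firecloud.org"

print(service_account_email("example-proxy-group"))
# prints "example-proxy-group@firecloud.org"
```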
- [UI] All your workflows fail and the cromwell log for each workflow has errors like "AccessDeniedException: 403 lapdog-{user id}@ld-{engine id}.iam.gserviceaccount.com does not have storage.objects.list to {bucket}."
The solutions for this issue depend on where the permissions error is occurring. Pay attention to the bucket id in the error message (described above).
- Situation 1) The service account cannot read from the workspace bucket
  - If the bucket starts with `fc-`, it's likely a firecloud workspace bucket
  - If that bucket id matches the bucket id of the workspace (displayed on the workspace page), then the solution is simply to add your Lapdog Service Account as a WRITER to the current workspace
    - Your lapdog service account is displayed at the bottom of every page in the UI
  - If that bucket starts with `fc-` but does not match the bucket for the current workspace, you need to add your Lapdog Service Account as a READER for the workspace to which the bucket belongs
    - Unfortunately, there is no easy way to map bucket->workspace. Oftentimes, the bucket belongs to the workspace that the current workspace was cloned from
    - Contact the workspace owner for help identifying the workspace for a particular bucket
  - In either case, if the bucket starts with `fc-` and your service account already has the required permissions on that workspace, you need to revoke access then re-add the account
    - Contact the owner of the workspace in question and ask them to set the Access Level for your service account to "No Access", save the changes, then add your service account back with the desired access level
- Situation 2) The service account cannot read from a non-workspace bucket
  - If the bucket does not start with `fc-`, it's likely not a firecloud workspace bucket
  - Due to current limitations with Firecloud, your lapdog service account cannot be directly added to the bucket
    - Contact the owner of the bucket and ask them to add your Pet Account as a reader of the bucket
    - You have a different Pet Account for each Firecloud Namespace/Lapdog Engine
    - You can find your Pet account by running `lapdog.cloud.ld_acct_in_project(lapdog.gateway.get_account(), ld_project=lapdog.cloud.ld_project_for_namespace(NAMESPACE))`
- [Python] `lapdog.get_adapter(GLOBAL_SUBMISSION_ID).data['runtime']` has "private_access" set to True
- [UI/Python] The logs from your workflows indicate that they were unable to connect to a URL
  - These errors may say things like "Unresolved Hostname" or "Request Timeout"
- [UI] When executing a new job, open the `Advanced Options` and make sure the slider for "Internet Access" is set to "Unrestricted"
- [Python] Make sure you're passing `private=False` as an argument to `lapdog.WorkspaceManager.execute`
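Before digging through logs, you can check whether a submission was launched with private access by inspecting the adapter payload described in the [Python] sign above. This is a standalone sketch assuming the `data['runtime']['private_access']` shape from that sign; `adapter_data` stands in for `lapdog.get_adapter(GLOBAL_SUBMISSION_ID).data`:

```python
# Sketch: inspecting a submission's runtime settings for private access.
# `adapter_data` is a stand-in for lapdog.get_adapter(...).data; the
# 'runtime' / 'private_access' keys follow the symptom described above.
def ran_without_internet(adapter_data: dict) -> bool:
    """Return True if the submission was launched with private (no-internet) access."""
    return bool(adapter_data.get('runtime', {}).get('private_access', False))

# Example with a mocked adapter payload
print(ran_without_internet({'runtime': {'private_access': True}}))  # prints True
```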
If you're having trouble executing jobs, aborting jobs, registering with new namespaces, checking your registration, or fetching quota usage, you may be encountering API version errors.
- [Python, CLI] When using Lapdog, you get a `ValueError` which reads "The project api for ld-{engine id} does not support {endpoint} version {version}. Please contact the namespace admin"
- [UI] When using the UI, you get error popups or other unexpected behavior, and the lapdog console displays the error described above
This problem occurs when your installation of Lapdog is using a different version of the API than the Engine for a namespace. To resolve this issue, contact an administrator of the namespace and ask them to take the following steps:
- Update Lapdog to the latest version: `pip install --upgrade lapdog`
- Run the following command to patch the namespace: `lapdog apply-patch {namespace}`
NOTE: Patches are always backwards compatible, unless otherwise specified. If a patch is not backwards compatible, you will receive a warning when running the command, and it will wait for your confirmation before making any changes.
Your submission is set to "Error" status under either of two conditions:
- The python wrapper for Cromwell crashes due to an unhandled exception
- Lapdog claims a submission is "Running" but the Google operation has completed
Errors are generally an issue on Lapdog's side. If your submission status is "Error", please report it to the Lapdog Issues Page. However, there are a couple of cases where a submission will error out that are not bugs in Lapdog.
Google Genomics places a hard 10MiB cap on the amount of data that can be sent through a Pipelines request. This means that the total string length of all your input parameters (File parameters are counted as the length of the path, not the size of the content) cannot exceed 10MiB for a single workflow. Lapdog checks this at two stages:
- When you submit a job, Lapdog checks the size of input data for each workflow as it's written to the submission config file. If it's more than 10MiB, you'll get a `ValueError` which reads: "The size of input metadata cannot exceed 10 Mib for an individual workflow"
- As Cromwell prepares to launch jobs, it checks the size of input data again. If it's more than 10MiB, you'll get the same error appearing in a submission's `stderr.log` file
- Sometimes, your input data may be just under the 10MiB limit, but due to other metadata such as the size of the WDL, or the size of extra workflow options, the Pipelines request may still fail. If you see several `ConnectionReset` errors in the submission's `stderr.log` file and your input data was large, your Pipeline request was likely above 10MiB
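You can estimate whether a workflow's inputs are near the cap before submitting by totalling the string lengths yourself. This is a rough, standalone sketch of the rule described above (not Lapdog's internal check): file parameters count as the length of their path string, never the size of the file contents:

```python
# Rough pre-check of one workflow's input size against the 10MiB cap
# Google Genomics places on Pipelines requests. Only string lengths
# count; the contents of the files the paths point to do not.
TEN_MIB = 10 * 1024 * 1024

def input_metadata_size(inputs: dict) -> int:
    """Total string length of all input names and values for one workflow."""
    return sum(len(key) + len(str(value)) for key, value in inputs.items())

def check_inputs(inputs: dict) -> None:
    """Raise the same error Lapdog reports when inputs exceed the cap."""
    if input_metadata_size(inputs) > TEN_MIB:
        raise ValueError(
            "The size of input metadata cannot exceed 10 Mib for an individual workflow"
        )

# Small inputs pass quietly; the placeholder path counts as 28 characters
check_inputs({'wf.fastq': 'gs://my-bucket/sample.fastq'})
```

Note this only approximates the first of the three checks above; as the third bullet explains, the WDL source and workflow options also count toward the real request size.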
Unfortunately, there's not a simple solution. The 10MiB limit is set by Google to prevent users from abusing the service to transfer large amounts of data into an instance for free.
The best solution is to store workflow inputs in a file and change your WDL to read the inputs from that File instead of using standard WDL inputs. For instance, if your WDL takes an Array of Files, you may consider changing it to take a single File of file paths. Then your WDL can download the specified files as part of its script.
You can easily evaluate entity expressions to get their value (to store in a file) by running `lapdog.WorkspaceManager.operator.evaluate_expression(entity_type, entity_name, entity_expression)`. For example, `workspace.operator.evaluate_expression('sample_set', 'all_samples', 'this.samples.fastq_1')`
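A minimal sketch of the workaround: instead of passing a large Array of Files directly, write the evaluated paths to a single manifest file and pass that file as the workflow's one File input. The `gs://` paths and manifest filename below are placeholders; in practice the paths would come from `evaluate_expression` as shown above:

```python
# Sketch: collapsing a large list of file paths into one file-of-paths,
# so the workflow input is a single short path instead of a huge array.
# The gs:// paths are placeholders for evaluate_expression results.
fastq_paths = [
    'gs://my-bucket/sample1.fastq',
    'gs://my-bucket/sample2.fastq',
]

with open('fastq_manifest.txt', 'w') as manifest:
    manifest.write('\n'.join(fastq_paths))
```

The workflow then takes the manifest (uploaded to the workspace bucket) as a single File input, and its script downloads each listed path itself.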