
opensciencelab-portal-v2

Table of Contents

  1. Architecture
  2. Directory Structure
  3. Deployments
  4. Notifications

Architecture

At the most basic level, the new portal is a Python Lambda app, back-ended by AWS DynamoDB, and front-ended by API Gateway and CloudFront.

Architecture Diagram

Directory Structure

The README documentation is spread throughout the repo. The deeper you go into the directory tree, the more specific each README becomes to that part of the codebase. The READMEs are also linked together for easy navigation.

All Portal-related Stuff. Includes the CDK infrastructure code, Lambda code, and tests.

Permissions for GH Actions when deploying, along with the CDK infrastructure.

Deployments

AWS Accounts

Maturity   Environment   AWS Account
dev        Non-Prod      97********89
test       Non-Prod      97********89
stage      Prod          97********89
prod       Prod          70********05

Maturities

  • Non-main branches with a specified prefix/suffix (e.g. ab/ticket.feature) will deploy a prefix-matched (i.e. ab) dev maturity deployment (and a dev GitHub environment!).
  • Merges into main branch will create/update the test maturity deployment.
  • Merges from main into stage will create/update the stage maturity deployment.
  • Semantic tags (where v#.#.# is v[Major].[Minor].[Patch]) will deploy to Prod.
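
For example, cutting a production release might look like the following sketch (the version number is illustrative, and whether you tag from main or stage follows the release process described in the Stage section below):

git checkout stage && git pull     # or main, per the current release process
git tag v1.2.3                     # a v#.#.# tag triggers the Prod deployment
git push origin v1.2.3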

Dev/Development

While a development maturity can be deployed manually via the Makefile, it may be easier and more consistent to rely on the GitHub action. However, when necessary, developer deployments can be completed using the following steps:

First-time AWS Account Setup

First-time account setup requires an OIDC provider and CDK bootstrapping.
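
A minimal sketch of that setup with the AWS CLI and CDK CLI follows; the repo's GitHub Actions permissions stack may create the OIDC provider for you, so treat these commands as illustrative:

# Create the GitHub Actions OIDC provider (once per account).
aws iam create-open-id-connect-provider \
    --url https://token.actions.githubusercontent.com \
    --client-id-list sts.amazonaws.com \
    --thumbprint-list <GITHUB_OIDC_THUMBPRINT>   # check GitHub's docs for the current value

# Bootstrap CDK in the deployment region (once per account/region).
cdk bootstrap aws://<AWS_ACCOUNT_NUMBER>/us-west-2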

One of the parameters to pass in is an existing, VERIFIED SES domain.

  • Create one first with AWS SES -> Identities -> Create Identity -> Domain. Name it opensciencelab.asf.alaska.edu. Accept all the defaults and create it.
  • Open a PR with Platform to add the records to ASF's DNS. There are already some examples in there to work from.
  • Wait for the domain to be verified in SES. This can take a while (Up to 24 hours).
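
If you prefer the CLI to the console, the same identity can be created and its verification checked like this:

# Create the SES domain identity (equivalent to the console steps above).
aws sesv2 create-email-identity --email-identity opensciencelab.asf.alaska.edu

# The response includes the DKIM tokens that go into ASF's DNS.
# Later, check whether verification has completed:
aws sesv2 get-email-identity --email-identity opensciencelab.asf.alaska.edu \
    --query 'VerifiedForSendingStatus'
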
Create SSL Certificate for .asf.alaska.edu domains

A certificate is needed when doing test or production deployments since those will have an asf.alaska.edu URL. Important: the SSL certificate must be created in the us-east-1 region, regardless of the deployment region (us-west-2). The ARN for the certificate must be provided via the SSL_CERT_ARN environment variable. Where required, the ARN value should be stored as a GitHub Actions variable. If deploying using a .asf.alaska.edu address, also be sure to set the DEPLOY_DOMAINS environment variable. Both are required to successfully attach a custom domain to the CloudFront endpoint. DEPLOY_DOMAINS should also be stored as a GitHub Actions variable.

For testing purposes, both SSL_CERT_ARN and DEPLOY_DOMAINS can be provided on the command line for make actions.
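
As a hedged example, requesting the certificate and then providing both values on the command line might look like this (the wildcard domain and the deploy-portal target are illustrative):

# Request the certificate in us-east-1 (CloudFront requires that region).
aws acm request-certificate \
    --region us-east-1 \
    --domain-name "*.asf.alaska.edu" \
    --validation-method DNS

# Provide both values on the command line for a make action.
SSL_CERT_ARN=arn:aws:acm:us-east-1:<ACCOUNT>:certificate/<ID> \
DEPLOY_DOMAINS=<YOUR_DEPLOYMENT>.asf.alaska.edu \
make deploy-portal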

Troubleshooting
During deployment, if a DNS record exists for a domain that is a callback domain of the deployment you are building, and that record does not point to the CloudFront distribution of the new portal stack, the portal will fail to build with the error:
Invalid request provided: AWS::CloudFront::Distribution: One or more aliases specified for the distribution includes an incorrectly configured DNS record that points to another CloudFront distribution.
To bypass this error,

  • unset DEPLOY_DOMAINS
  • build the portal
  • set DNS record to new cloudfront domain name
  • set DEPLOY_DOMAINS
  • build the portal
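
In shell terms, the bypass sequence looks roughly like this (the DNS update happens out-of-band in ASF's DNS, so it appears here only as a comment):

unset DEPLOY_DOMAINS
make deploy-portal        # first build, without the custom domain
# ...point the DNS record at the new CloudFront domain name...
export DEPLOY_DOMAINS=<YOUR_DEPLOYMENT>.asf.alaska.edu
make deploy-portal        # second build, with the custom domain attached
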
Ensure AWS credentials are present

The Makefile + Docker process will need to communicate with AWS. In GitHub Actions, this is done through an OIDC provider in AWS and requires no stored credentials. Locally, however, a profile must be present in ~/.aws/credentials and the AWS_DEFAULT_PROFILE env var needs to be set accordingly, OR AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY must be set. Either solution works, and will get automagically populated into the dockerized build/deploy environment.

~/.aws configuration

You will need a section in ~/.aws/credentials like

[portalv2]
aws_access_key_id = <YOUR KEY ID HERE>
aws_secret_access_key = <YOUR KEY VALUE HERE>

and a section in ~/.aws/config like

[portalv2]
region = us-west-2
output = json

You can generate AWS Access Keys from the IAM console: AWS docs
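
To confirm the profile works before building, a quick sanity check (using the portalv2 profile from the example above) is:

# Should print the account number of the target AWS account.
aws sts get-caller-identity --profile portalv2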

Updating environment variables

Each deployment is namespaced by its DEPLOY_PREFIX, so you can deploy a new stack without conflicting with any others.

First, create your environments file from the example:

cp .env.example .env
nano .env

.env:

AWS_PROFILE=<YOUR PROFILE> # from ~/.aws/config
DEPLOY_PREFIX=<YOUR INITIALS>
SES_DOMAIN=opensciencelab.asf.alaska.edu # SAME NAME as the SES above!!
SES_EMAIL=<THE TEAM EMAIL>
### OPTIONAL:
DEV_SES_EMAIL=<YOUR EMAIL> # For testing, if you want to receive emails. You'll have to confirm an email sent to you too.

Once you've updated the values of the variables in your .env, load them into your environment:

set -a && source .env && set +a
Start CDK Shell

From the root of the cloned repo, start the container:

$ make cdk-shell
export AWS_DEFAULT_ACCOUNT=`aws sts get-caller-identity --query 'Account' --output=text` && \
...
... this takes a while ...
...
[ root@a7a585db4d88:/cdk ]#

Change to /code, the virtual mount point, and run make synth-portal to test your environment:

[ root@a7a585db4d88:/cdk ]# cd /code
[ root@a7a585db4d88:/code ]# make synth-portal

If you see CloudFormation output after a few minutes, you're ready to deploy!

Deploy via CDK
[ root@a7a585db4d88:/cdk ]# make deploy-portal
Linting

Before committing changes, the code can be easily linted by utilizing the lint target of the Makefile. This will call the same linting routines used by the GitHub actions.
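
For example, from the root of the repo (or inside the CDK shell):

make lint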

Accessing the Deployment

In order to sign up for an account on your dev instance, add your email to the "Verified Identities" table in AWS SES. This will enable SES to send email to you specifically.
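
If you prefer the CLI to the console, adding a verified email identity looks like this (SES then sends a confirmation link to that address):

# Adds the address to SES "Verified Identities"; a confirmation email follows.
aws sesv2 create-email-identity --email-identity <YOUR_EMAIL>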

Update your SSO token: In AWS Secrets Manager, update the SSO token to match the test deployment.

Configuring Lab Access

When the deployment has completed (after about four minutes), a series of outputs will be displayed:

PortalCdkStack-<DEPLOY_PREFIX>.ApiGatewayURL = <URL>
PortalCdkStack-<DEPLOY_PREFIX>.CloudFrontURL = <URL>
PortalCdkStack-<DEPLOY_PREFIX>.CognitoURL = <URL>
PortalCdkStack-<DEPLOY_PREFIX>.SSOTOKENARN = <ARN>
PortalCdkStack-<DEPLOY_PREFIX>.returnpathwhitelistvalue = <URL>
Stack ARN: <ARN>

Copy the returnpathwhitelistvalue and add it to the PORTAL_DOMAINS variable in the configuration file of the lab you're adding (opensciencelab.yaml in SMCE-Test). After you "Release Changes" on the cluster CodePipeline and the cluster builds, you should be able to access the cluster.

To add your lab to the portal, give yourself Admin privileges in DynamoDB by adding the admin value to the access list for your profile (in addition to user). After refreshing your portal deployment, you'll be able to see all labs. Add yourself to the lab under the "Manage" button on the lab card with a valid profile, and when you return to the home page and click "Go to Lab" you should see the lab "Start Server" interface.
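
The exact table and attribute names depend on your deployment, but granting yourself admin access from the CLI would look roughly like the sketch below (the table name and key attribute are assumptions; the DynamoDB console editor is an easier alternative):

aws dynamodb update-item \
    --table-name <PORTAL_USERS_TABLE> \
    --key '{"username": {"S": "<YOUR_USERNAME>"}}' \
    --update-expression "SET #a = list_append(#a, :admin)" \
    --expression-attribute-names '{"#a": "access"}' \
    --expression-attribute-values '{":admin": {"L": [{"S": "admin"}]}}'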

Test

Test is intended to be the stable integration/validation environment. ONLY complete, tested code should be released to Test. Dev, Test, and Stage exist in the same AWS account.

Stage

Stage is intended to be a pre-release environment with data and structure equivalent to Prod, but isolated from Prod. Stage should be used to internally vet what a production release will look like, without exposing our users. Stage can also be the origin for release tags. Ideally, only the main branch will be merged into stage.

Prod

The Prod environment is isolated in its own AWS account to reduce blast radius. Prod should never be released by any mechanism other than GitHub actions.

How to Test

For testing, see the Testing README.

Automation

GitHub Variables and Secrets

Environments: prod, stage, test, dev
Required variables:

  • AWS_ACCOUNT_NUMBER
  • DEPLOY_DOMAINS
  • SSL_CERT_ARN
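
If you manage these with the GitHub CLI (a recent gh version), setting them per environment looks like this; the values shown are placeholders:

gh variable set AWS_ACCOUNT_NUMBER --env dev --body "<ACCOUNT_NUMBER>"
gh variable set DEPLOY_DOMAINS --env dev --body "<YOUR_DEPLOYMENT>.asf.alaska.edu"
gh variable set SSL_CERT_ARN --env dev --body "arn:aws:acm:us-east-1:<ACCOUNT>:certificate/<ID>"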

GitHub Actions

Notifications

For setting up and using the toastr notifications, see the Notifications README.
