diff --git a/docs/commons/fair/fair-commons.rst b/docs/commons/fair/fair-commons.rst index c5e03f7..2de2ce8 100644 --- a/docs/commons/fair/fair-commons.rst +++ b/docs/commons/fair/fair-commons.rst @@ -20,4 +20,5 @@ The following resources in this section are part of the FAIR Commons component. Catalogue of Tests Catalogue of Benchmark Scoring Algorithms Code of Shared Tests + Tutorials diff --git a/docs/commons/fair/tutorials/create-benchmark.rst b/docs/commons/fair/tutorials/create-benchmark.rst new file mode 100644 index 0000000..d6fb978 --- /dev/null +++ b/docs/commons/fair/tutorials/create-benchmark.rst @@ -0,0 +1,248 @@ +.. _tutorial_create_fair_benchmark: + +Creating a FAIR Benchmark with its Associated Metrics +===================================================== + +This tutorial explains how to create a **community FAIR Benchmark** and +any additional **Metrics** using the OSTrails FAIR Assessment framework. + +You will start from the *OSTrails FAIR Assessment – Conceptual Requirements* +template and work through it to produce a **Benchmark narrative** describing +how your community interprets the FAIR Principles for a specific type of +digital object. + +By the end of this tutorial you will have: + +* a completed **community FAIR Benchmark narrative** +* a defined set of **FAIR Metrics** +* any required **community-specific specialised Metrics** + +For an overview of the process, see :ref:`benchmark_workflow`. + +.. _benchmark_prerequisites: + +Prerequisites +------------- + +Before starting you should: + +* Download the + `OSTrails FAIR Assessment – Conceptual Requirements template + `_. +* Be familiar with the **FAIR Principles**. +* Identify the **community or discipline** for which the Benchmark will apply. + +.. _benchmark_workflow: + +Workflow overview +----------------- + +Creating a FAIR Benchmark typically involves the following steps: + +1. :ref:`copy_template` +2. :ref:`define_benchmark` +3. :ref:`review_generic_metrics` +4. :ref:`define_specialised_metrics` +5. :ref:`review_benchmark` +6. :ref:`register_benchmark` + +Each step is described in the sections below. + +.. _copy_template: + +Step 1 – Copy the OSTrails template +----------------------------------- + +Begin by making a working copy of the **OSTrails FAIR Assessment – +Conceptual Requirements template**. + +The template provides a structured format for describing: + +* the **scope of the Benchmark** +* the **digital objects being assessed** +* the **Metrics used to evaluate FAIRness** +* the **community standards and practices** that apply + +Once the template has been downloaded and renamed, you should +work through your Benchmark narrative document sequentially, +completing each section with information relevant to your community. + +Proceed to +:ref:`define_benchmark`. + +.. _define_benchmark: + +Step 2 – Define the community Benchmark +--------------------------------------- + +The **Benchmark** section provides the narrative description of how +FAIR is interpreted for your community. + +Complete the Benchmark description by specifying: + +**Benchmark name** + + A short, descriptive title for the Benchmark. + +**Description** + + A concise explanation of the purpose of the Benchmark and the + community it serves. 
+ +**Applicability** + + Define clearly: + + * the **type of digital object** being assessed + (for example datasets, workflows, software, or metadata records) + * the **disciplinary scope** of the Benchmark + +**Related resources** + + List any standards, repositories, policies, or vocabularies that + support FAIR practice in your community. + +The goal of this section is to describe **what FAIR means in practice +for the community and its digital objects**. + +Next, review the Metrics available in your Benchmark narrative as described in +:ref:`review_generic_metrics`. + +.. _review_generic_metrics: + +Step 3 – Review the generic FAIR Metrics +---------------------------------------- + +Your Benchmark narrative includes **generic Metrics** aligned with the +FAIR Principles. These are designed to be broadly applicable across many +disciplines. + +For each Metric card in your Benchmark narrative: + +1. Read the description of the Metric. +2. Decide whether it satisfies your community requirements. +3. Select the appropriate option in your Benchmark narrative: + + * ``This generic Metric is sufficient for our needs`` + * ``This generic Metric is not sufficient for our needs`` + * ``This principle is not applicable to our definition of FAIR`` + +Generic Metrics commonly address topics such as: + +* persistent identifiers +* structured metadata +* links between data and metadata +* indexing for discovery +* open and standardised access protocols + +In many cases these generic Metrics can be adopted without modification. + +If a generic Metric does not fully capture community practice, define a +specialised Metric as described in +:ref:`define_specialised_metrics`. + +.. _define_specialised_metrics: + +Step 4 – Define specialised Metrics where required +-------------------------------------------------- + +Some FAIR principles require **community-specific interpretation**. +Where the generic Metric does not adequately represent community +practice, define a **specialised Metric**. + +Specialised Metrics are commonly required for principles such as: + +* **I2 – Use of FAIR vocabularies** +* **R1.2 – Provenance** +* **R1.3 – Community standards** + +When defining a specialised Metric, include the following elements. + +**Metric name** + + A short descriptive title. + +**Metric description** + + A clear explanation of what is being evaluated and why it supports + FAIR. + +**Assessment criteria** + + The conditions that must be met for the Metric to pass. + +**Related standards or resources** + + References to relevant community standards, vocabularies, or + policies. + +**Examples** + + Where possible, provide: + + * a positive example + * a negative example + * an indeterminate example + +These examples help both implementers and assessment tools understand +how the Metric should be applied. + +Once specialised Metrics have been defined, review the Benchmark as +described in :ref:`review_benchmark`. + +.. _review_benchmark: + +Step 5 – Review the completed Benchmark +--------------------------------------- + +After completing all sections of your Benchmark narrative: + +* Review the Benchmark with **community experts or stakeholders**. +* Check that **all relevant FAIR principles are addressed**. +* Ensure that any referenced **standards, vocabularies, or + repositories** are clearly identified. + +The completed document now represents the **conceptual FAIR Benchmark** +for your community. + +The final step is to register the Benchmark and Metrics as described in +:ref:`register_benchmark`. 
+.. _register_benchmark:
+
+Step 6 – Register the Benchmark and Metrics
+---------------------------------------------
+
+To enable reuse and interoperability, the Benchmark and its Metrics
+should be registered in community registries such as
+`FAIRsharing `_.
+
+Registration should include:
+
+* the **Benchmark description**
+* each **specialised Metric**
+* references to any **standards, databases, or vocabularies**
+
+Registering these components allows:
+
+* FAIR assessment tools to discover and implement the Metrics
+* other communities to reuse or adapt the Benchmark
+* FAIR assessment results to be compared across tools
+
+Next steps
+----------
+
+Once the conceptual Benchmark has been created, the next stages
+typically include:
+
+* implementing **machine-actionable Metric tests**
+* defining **assessment workflows**
+* applying the Benchmark to **evaluate digital objects**
+
+This converts the conceptual definition into a working
+**FAIR assessment framework**.
+
+Continue with the tutorial:
+
+:ref:`tutorial_create_metric_tests`
+
diff --git a/docs/commons/fair/tutorials/create-metric.rst b/docs/commons/fair/tutorials/create-metric.rst
new file mode 100644
index 0000000..eed5ce5
--- /dev/null
+++ b/docs/commons/fair/tutorials/create-metric.rst
@@ -0,0 +1,2 @@
+How to create a metric
+======================
diff --git a/docs/commons/fair/tutorials/create-test-following-ftr.rst b/docs/commons/fair/tutorials/create-test-following-ftr.rst
new file mode 100644
index 0000000..60a68e8
--- /dev/null
+++ b/docs/commons/fair/tutorials/create-test-following-ftr.rst
@@ -0,0 +1,88 @@
+.. _tutorial_create_metric_tests:
+
+How to Create and Register a Test following the FTR API
+========================================================
+
+Benchmark Assessment Algorithms rely on **FAIR Metric Tests**.
+
+Each test evaluates a specific FAIR requirement and returns a result of:
+
+* ``pass``
+* ``fail``
+* ``indeterminate``
+
+A test has three main components.
+
+**DCAT description**
+
+   A machine-readable metadata record describing the test.
+
+**API definition**
+
+   An OpenAPI specification describing how to call the test service.
+
+**Test implementation**
+
+   The executable service that performs the assessment.
+
+
+.. _creating_metric_test:
+
+Creating a new Test Implementation
+----------------------------------
+
+Tests can be written in any programming language provided they:
+
+* either accept the **GUID of a digital object** as input, or accept an
+  upload of the Metadata record to be tested
+* return a **JSON result object** containing the outcome, following the
+  FTR specification
+
+We highly recommend that Tests follow the FTR API, which defines the routes
+and HTTP protocols for each kind of test behaviour (e.g. discovery or
+execution).
+
+
+Creating and Registering a new FAIR Metric Test DCAT Record
+------------------------------------------------------------
+
+Tests used with the OSTrails Benchmark Algorithms **must** be registered
+(using a DCAT Descriptor) in the OSTrails Test Registry. This can be done
+in one of two ways:
+
+a. Author a test DCAT descriptor "manually" and open a pull request on the
+   FAIR Metrics repository. This results in the automatic addition of a
+   landing page for your test, but it does not automatically notify
+   FAIRsharing.
+
+b. Author and register the test descriptor using **FAIR Wizard**:
+
+1. Open FAIR Wizard and create a **new project**.
+2. Select a knowledge model.
+3. Enable **Filter by question tags**.
+4. Choose **Test** as the artefact type.
+
+Two key fields must be completed.
+
+**Endpoint URL**
+
+   The service endpoint that executes the test.
+
+**Endpoint URL Description**
+
+   The location of the **OpenAPI description** of the test API.
+
+Once the questionnaire has been completed, create and submit the
+resulting document.
+
+After processing, the test record is deposited in the
+**OSTrails Assessment Component Metadata Records repository** and
+indexed by FAIR Data Point.
+
+The test will then appear in the **FAIR Champion Test Registry** and
+can be referenced by its Test ID in your Benchmark Configuration
+Spreadsheet.
+
+Next steps
+----------
+
+The next thing you probably want to do is add this test to a Benchmark.
+Community **FAIR Benchmark assessments** can then be authored and executed
+semi-automatically with the OSTrails FAIR Champion tool.
+
+Continue with the tutorial:
+
+:ref:`tutorial_fair_benchmark_algorithm`
diff --git a/docs/commons/fair/tutorials/define-run-scoring-algorithm.rst b/docs/commons/fair/tutorials/define-run-scoring-algorithm.rst
new file mode 100644
index 0000000..236627d
--- /dev/null
+++ b/docs/commons/fair/tutorials/define-run-scoring-algorithm.rst
@@ -0,0 +1,220 @@
+.. _tutorial_fair_benchmark_algorithm:
+
+Defining and Running a FAIR Benchmark Assessment Algorithm
+============================================================
+
+This tutorial explains how to define and run a **FAIR Benchmark Assessment
+Algorithm** using the OSTrails tool **FAIR Champion**.
+
+A Benchmark Assessment Algorithm combines multiple **FAIR Metric Tests**
+and scoring rules to assess the FAIRness of a digital object according to
+a specific community Benchmark.
+
+In this tutorial you will:
+
+* run an existing Benchmark assessment
+* create your own **Benchmark Configuration Spreadsheet**
+* register the algorithm with FAIR Champion
+* run the algorithm on one or more digital objects
+
+By the end of this tutorial you will have a working **FAIR Benchmark
+Assessment Algorithm** that can be executed within FAIR Champion.
+
+.. _benchmark_algorithm_prerequisites:
+
+Prerequisites
+-------------
+
+Before starting you should have completed:
+
+* :ref:`tutorial_create_fair_benchmark`
+* :ref:`tutorial_create_metric_tests`
+
+And have access to:
+
+* The **FAIR Champion assessment service**.
+* Your **Benchmark definition** and associated Metrics.
+
+Some steps also require access to:
+
+* `FAIR Champion `_
+* `FAIR Wizard `_
+* `OSTrails Test Registry `_
+
+.. _benchmark_algorithm_workflow:
+
+Workflow overview
+-----------------
+
+Creating and running a Benchmark Assessment Algorithm (BAA) involves the
+following steps:
+
+1. :ref:`run_existing_algorithm`
+2. :ref:`create_configuration_spreadsheet`
+3. :ref:`register_algorithm`
+4. :ref:`run_single_assessment`
+5. :ref:`run_multiple_assessments`
+
+Each step is described in the sections below.
+
+.. _run_existing_algorithm:

+Step 1 – Run an existing Benchmark assessment
+-----------------------------------------------
+
+Before creating a new algorithm it is helpful to run an existing one to
+confirm that the assessment service is working.
+
+1. Open the FAIR Champion assessment interface:
+
+   https://tools.ostrails.eu/champion/assess/algorithms/new
+
+2. Select a **Benchmark Configuration Spreadsheet URI** from the list.
+
+3. Enter the **GUID of the digital object** to be assessed.
+
+4. Click **Run Benchmark Quality Assessment**.
+
+After a few seconds the results will be displayed on screen.
+ +The output typically shows: + +* individual test results +* weighted scores +* conclusions +* optional links to guidance for Conditions that were not met + +Running an existing algorithm confirms that the FAIR Champion service +is functioning correctly. + +You can now proceed to creating your own configuration spreadsheet as +described in :ref:`create_configuration_spreadsheet`. + +.. _create_configuration_spreadsheet: + +Step 2 – Create a Benchmark Configuration Spreadsheet +----------------------------------------------------- + +Benchmark Assessment Algorithms for FAIR Champion are defined +using a **configuration spreadsheet**. + +You can begin by copying the **Generic Algorithm spreadsheet** available +from the FAIR Champion assessment interface. + +The spreadsheet contains three sections. + +**General metadata** + + Describes the algorithm using DCAT properties. + +**Test references** + + Lists the FAIR Metric Tests used by the algorithm and assigns weights + to their outputs. + +**Conditions and calculations** + + Defines the scoring logic based on the test results. + Links to guidance may be included for some or all of the conditions. + +General rules for configuration spreadsheets: + +* Currently only **Google Sheets** are supported. +* The spreadsheet must be **publicly readable**. +* Headers must be used **exactly as provided in the template**. +* Each section must be separated by **one empty line**. +* The URIs of the tests must resolve to a **DCAT DataService record** + describing the test. + +A list of available tests can be found in the +`OSTrails Test Registry `_. + +Calculations reference tests by their **abbreviation**. Expressions use +Ruby-style syntax, for example:: + + test_identifier_1 + test_identifier_2 > 3 + +Each calculation returns either **true** or **false**, which determines +the narrative result associated with that condition. + +Once the spreadsheet is complete it must be registered with FAIR +Champion as described in :ref:`register_algorithm`. + +.. _register_algorithm: + +Step 3 – Register the Benchmark Assessment Algorithm +---------------------------------------------------- + +After creating your configuration spreadsheet you must register it so +that FAIR Champion can use it. + +1. Ensure the spreadsheet is **publicly accessible**. +2. Copy the **URI of the spreadsheet**. +3. Register the spreadsheet with FAIR Champion via the + 'Register a new Benchmark Quality Assessment Algorithm' option + on the home page. + +FAIR Champion will convert the spreadsheet into a registered +**Benchmark Assessment Algorithm**. + +You can verify that the registration succeeded by checking the +**FAIR Data Point index**, where the algorithm should appear with +status **Active**. + +Once the algorithm is registered it will appear in the list of +available Benchmark algorithms within the FAIR Champion interface. + +You can now run the algorithm as described in +:ref:`run_single_assessment`. + +.. _run_single_assessment: + +Step 4 – Run an assessment using your algorithm +----------------------------------------------- + +To run an assessment using your own algorithm: + +1. Return to the FAIR Champion assessment interface: + + https://tools.ostrails.eu/champion/assess/algorithms/new + +2. Select your **Benchmark Configuration Spreadsheet URI** from the + algorithm list. + +3. Enter the **GUID of a digital object**. + +4. Click **Run Benchmark Quality Assessment**. 
+If the configuration spreadsheet has been correctly defined and
+registered, the results will be displayed in the same way as when
+running an existing algorithm.
+
+This confirms that your **Benchmark Assessment Algorithm** is working
+correctly.
+
+.. _run_multiple_assessments:
+
+Step 5 – Run assessments on multiple objects
+----------------------------------------------
+
+To run assessments on multiple digital objects, you can use the
+**benchmark runner** application.
+
+1. Clone the repository::
+
+      https://github.com/cessda/cessda.cmv.benchmark-runner
+
+2. Build the application::
+
+      mvn compile
+
+3. Edit the ``guids.txt`` file so that it contains the GUIDs of the
+   digital objects to be assessed.
+
+4. Run the application using::
+
+      mvn exec:java -Dexec.args=
+
+where ```` is the URL of your registered Benchmark
+algorithm.
+
+The terminal will display progress as each object is assessed.
+
+Results for each GUID are written to the ``results`` directory.
diff --git a/docs/commons/fair/tutorials/deploy-champion.rst b/docs/commons/fair/tutorials/deploy-champion.rst
new file mode 100644
index 0000000..7884ad1
--- /dev/null
+++ b/docs/commons/fair/tutorials/deploy-champion.rst
@@ -0,0 +1,45 @@
+How to deploy Champion myself
+===============================
+
+You only need to do this if you are testing records that are, for example,
+behind a firewall. Most of the time you will simply go to
+`Champion `_.
+
+If you feel the need to check out the code, go to:
+https://github.com/OSTrails/FAIR-Champion/
+
+However, you can simply access the latest Docker image, using the following
+docker-compose file::
+
+   services:
+
+     champion:
+       image: markw/fair-champion:0.3.0
+       environment:
+         - TESTHOST=https://tests.ostrails.eu/tests/
+         - CHAMPHOST=http://localhost:8383/
+       ports:
+         - "8383:4567"
+
+     swagger-converter:
+       image: swaggerapi/swagger-converter:latest
+       container_name: swagger-converter
+
+Find the latest versioned tag for Champion at `Dockerhub `_.
+
+The following environment variables should be set (most likely in the
+docker-compose file)::
+
+   RACK_ENV='development'
+   TEST_HOST='https://tests.ostrails.eu/tests'  # this is probably now deprecated, as of March 2026
+   CHAMP_HOST='https://tools.ostrails.eu/champion'  # the URL of your own Champion instance
+   FDPINDEX_SPARQL='https://tools.ostrails.eu/repositories/fdpindex-fdp'  # THIS IS CRITICAL!
+   FDPINDEXPROXY='https://tools.ostrails.eu/fdp-index-proxy/proxy'  # if you plan to register new tests or benchmarks using your local copy
+   CHAMPION_HOST='https://tools.ostrails.eu/champion'  # probably redundant with CHAMP_HOST above; to be confirmed
+
+That's all!
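+
+If you want a quick smoke test of your deployment, the following commands are
+a minimal sketch, assuming you saved the compose file above as
+``docker-compose.yml`` and kept the ``8383:4567`` port mapping::
+
+   # start Champion and the swagger-converter in the background
+   docker compose up -d
+
+   # confirm both containers are running
+   docker compose ps
+
+   # any HTTP response on the published port means Champion is reachable
+   curl -i http://localhost:8383/
+
+If the containers fail to start, ``docker compose logs champion`` will usually
+show which environment variable is missing or misconfigured.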
\ No newline at end of file
diff --git a/docs/commons/fair/tutorials/discover-test-cessda-benchmark.rst b/docs/commons/fair/tutorials/discover-test-cessda-benchmark.rst
new file mode 100644
index 0000000..20be9e3
--- /dev/null
+++ b/docs/commons/fair/tutorials/discover-test-cessda-benchmark.rst
@@ -0,0 +1,2 @@
+How to know which tests are in the CESSDA benchmark
+====================================================
diff --git a/docs/commons/fair/tutorials/find-test-for-digital-object.rst b/docs/commons/fair/tutorials/find-test-for-digital-object.rst
new file mode 100644
index 0000000..3e3aa24
--- /dev/null
+++ b/docs/commons/fair/tutorials/find-test-for-digital-object.rst
@@ -0,0 +1,2 @@
+How to find a test for my digital object
+=========================================
diff --git a/docs/commons/fair/tutorials/host-deploy-test.rst b/docs/commons/fair/tutorials/host-deploy-test.rst
new file mode 100644
index 0000000..b782350
--- /dev/null
+++ b/docs/commons/fair/tutorials/host-deploy-test.rst
@@ -0,0 +1,56 @@
+How to host/deploy a test
+==========================
+
+The easiest way is to ask someone from OSTrails WP3 to write the test for you
+and host it on the OSTrails Infrastructure. This helps keep tests
+quality-controlled and consistent, with reliable uptime.
+
+If you really want to do it yourself, you need to:
+
+1. Understand (or author) the Metric that you are going to test.
+   **This is a prerequisite for writing the test!**
+2. Have access to a server.
+3. Be able to code in any language.
+4. Understand the rules for Linked Data.
+5. Optimally, use a Linked Data library from your language to reduce errors;
+   however, the data structures can simply be templated.
+6. Understand the FTR Vocabulary, and specifically the pieces related to a
+   TestResult object.
+7. Understand how to author an OpenAPI Service Descriptor.
+8. Either understand a DCAT DataService object, or use the Test Wizard to
+   author this object.
+
+Tests have two "modalities":
+
+1. A test can consume a GUID, and use normal identifier resolution on that to
+   retrieve metadata.
+2. A test can consume an uploaded metadata record.
+
+The API for option 1 is to consume the following JSON data structure::
+
+   { "resource_identifier": "GUID" }
+
+The API for option 2 is to follow this OpenAPI3 pattern for file upload::
+
+   requestBody:
+     required: true
+     content:
+       multipart/form-data:
+         schema:
+           type: object
+           required: [file]
+           properties:
+             file:
+               type: string
+               format: binary
+               description: The uploaded metadata.
+         encoding:
+           file:
+             contentType: application/json
+
+Test outputs must follow the FTR schema, and at a minimum must include:
+
+1. The identifier of the metadata object that was tested
+2. The Metric that is associated with the Test
+3. The "value" of the output (pass/fail/indeterminate are the only valid outputs)
+4. A reference to the TestExecution, and metadata about that (e.g. test identifier, date, version, etc.)
+
+Test outputs may optionally include a reference to a Guidance Object.
+Guidance Objects are added during test execution, when a test detects an
+error. They are intended to help the metadata author avoid the error.
+
+Guidance Objects are still under development, so are not deeply documented here.
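+
+For illustration, a call to a test that implements the upload modality
+(option 2 above) could look like the following; the test URL and the file
+name are placeholders rather than real OSTrails endpoints::
+
+   # POST a local metadata record as multipart/form-data, using the
+   # "file" field defined in the OpenAPI pattern above
+   curl -H "Accept: application/json" \
+        -F "file=@metadata.json;type=application/json" \
+        https://example.org/tests/my_metadata_test
+
+The JSON returned by the service should then follow the FTR TestResult
+structure described above (metadata identifier, Metric, value, and
+TestExecution details).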
\ No newline at end of file
diff --git a/docs/commons/fair/tutorials/how-comply-ftr.rst b/docs/commons/fair/tutorials/how-comply-ftr.rst
new file mode 100644
index 0000000..5e214e1
--- /dev/null
+++ b/docs/commons/fair/tutorials/how-comply-ftr.rst
@@ -0,0 +1,2 @@
+As a FAIR assessment developer, how to comply with the FTR spec to interoperate with others
+=============================================================================================
diff --git a/docs/commons/fair/tutorials/others-use-my-metrics.rst b/docs/commons/fair/tutorials/others-use-my-metrics.rst
new file mode 100644
index 0000000..00976a4
--- /dev/null
+++ b/docs/commons/fair/tutorials/others-use-my-metrics.rst
@@ -0,0 +1,2 @@
+How to have others use my metrics
+==================================
diff --git a/docs/commons/fair/tutorials/register-curate-metric-fs.rst b/docs/commons/fair/tutorials/register-curate-metric-fs.rst
new file mode 100644
index 0000000..0024637
--- /dev/null
+++ b/docs/commons/fair/tutorials/register-curate-metric-fs.rst
@@ -0,0 +1,2 @@
+How to register and curate a metric in FAIRsharing
+===================================================
diff --git a/docs/commons/fair/tutorials/run-existing-test.rst b/docs/commons/fair/tutorials/run-existing-test.rst
new file mode 100644
index 0000000..6265cdf
--- /dev/null
+++ b/docs/commons/fair/tutorials/run-existing-test.rst
@@ -0,0 +1,29 @@
+How to run an existing Test
+==============================
+
+Using the FAIR Champion GUI:
+
+https://tools.ostrails.eu/champion/tests/
+
+1. Search (using the browser text search) for the test you want to execute.
+2. Click "Execute Test" - this will trigger the creation of a form appropriate for that test.
+3. Fill in the form field(s).
+4. Click "Run Test".
+
+Alternatively, if you want to automate many executions, you can:
+
+1. Find the identifier of the test you want to run, either:
+
+   a. Use the GUI to find the test (e.g. "fc_data_identifier_in_metadata"),
+      then right-click on the blue test title and "Copy Link" (this is
+      necessary because clicking the link might follow a content-negotiated
+      path to a nice web page, rather than giving you the identifier of the
+      test!), or
+   b. Use FAIRsharing to find the Metric that interests you, then identify a
+      test on the FAIRsharing page that implements that Metric.
+
+2. Prepare your input, in the format ``{ "resource_identifier": "GUID" }``.
+
+3. Use the code example at the top of that web page, together with your Test
+   identifier and your input::
+
+      curl -H "Content-type: application/json" -H "Accept: application/json" \
+           -d '{"resource_identifier": "https://exampledataset.org"}' \
+           https://tests.ostrails.eu/tests/fc_data_identifier_in_metadata
+
diff --git a/docs/commons/fair/tutorials/tutorial-index.rst b/docs/commons/fair/tutorials/tutorial-index.rst
new file mode 100644
index 0000000..744731f
--- /dev/null
+++ b/docs/commons/fair/tutorials/tutorial-index.rst
@@ -0,0 +1,25 @@
+Tutorials for FTR implementation
+==================================
+
+Documentation that explains how to implement different workflows for the FTR.
+
+The tutorials listed below are part of this section of the documentation.
+
+.. toctree::
+   :caption: Tutorials
+   :maxdepth: 1
+   :titlesonly:
+
+   How to create a FAIR Benchmark with its associated Metrics
+   How to find a Test appropriate for my digital object
+   How to run an existing Test
+   How to deploy Champion myself
+   How to define and run a scoring algorithm
+   How to create a benchmark
+   How to create a metric
+   How to know which tests are in the CESSDA benchmark
+   How to register and curate a metric in FAIRsharing
+   How to Create and Register a Test following the FTR API
+   How to host/deploy a test
+   As a FAIR assessment developer, how to comply with the FTR spec to interoperate with others
+   How to have others use my metrics