diff --git a/_quarto.yml b/_quarto.yml index 1458d2c..e393899 100644 --- a/_quarto.yml +++ b/_quarto.yml @@ -11,6 +11,7 @@ project: - 01_* - 02_* - 03_* + - 04_* website: google-analytics: "G-LZ35J3XE4D" announcement: @@ -69,6 +70,8 @@ website: href: course/02_FilePaths/index.qmd - text: "03 - Inside a .FCS file" href: course/03_InsideFCSFile/index.qmd + - text: "04 - Intro to Tidyverse" + href: course/04_IntroToTidyverse/index.qmd - section: "Cytometry Core" href: Schedule.qmd - section: "Beyond the Sandbox" diff --git a/course/01_InstallingRPackages/homeworks/jttoivon/cytomem.png b/course/01_InstallingRPackages/homeworks/jttoivon/cytomem.png new file mode 100644 index 0000000..e1581d8 Binary files /dev/null and b/course/01_InstallingRPackages/homeworks/jttoivon/cytomem.png differ diff --git a/course/01_InstallingRPackages/homeworks/jttoivon/exercises.html b/course/01_InstallingRPackages/homeworks/jttoivon/exercises.html new file mode 100644 index 0000000..4f8a6a7 --- /dev/null +++ b/course/01_InstallingRPackages/homeworks/jttoivon/exercises.html @@ -0,0 +1,3673 @@ + + + + + + + + + + + +Solutions for week01 + + + + + + + + + + + + + + + + + + + +
+ +
+ +
+
+

Solutions for week01

+
+ + + +
+ +
+
Author
+
+

Jarkko Toivonen

+
+
+ +
+
Published
+
+

February 10, 2026

+
+
+ + +
+ + + +
+ + +
+
library(PeacoQC)
+
+
+

Problem 1

+
+

We installed PeacoQC during this session, but we didn’t have time to explore what functions are present within the package. Using what you have learned about accessing documentation, figure out and list what functions it contains

+
+
+
help(package = PeacoQC)
+
+
+
ls("package:PeacoQC")
+
+
[1] "PeacoQC"        "PeacoQCHeatmap" "PlotPeacoQC"    "RemoveDoublets"
+[5] "RemoveMargins" 
+
+
+
+
+

Problem 2

+
+

Take a closer look at the list of Bioconductor cytometry packages. Report back on how many there are currently in Bioconductor, the author/maintainer with the most contributed cytometry R packages, and a couple packages that you would be interested in exploring more in-depth later in the course.

+
+

There are 69 Bioconductor packages about cytometry.

+

Mike Jiang has contributed to 10 cytometry packages.

+

These packages seem interesting:

+ ++++++ + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
PackageMaintainerTitleRank
flowVizMike JiangVisualization for flow cytometry227
ggcytoMike JiangVisualize Cytometry data with ggplot237
flowPeaksYongchao GeAn R package for flow data clustering648
+
+
+

Problem 3

+
+

There is another way to install R packages, using the newer pak package. Positron uses this when installing suggested dependencies.

+
+
+

After learning more about it via the documentation and its pkgdown website, I would like you to attempt to install the following three R packages using this newer method: “broom”, “cytoMEM”, “DillonHammill/CytoExploreR”.

+
+
+

Take screenshots, and in a new quarto markdown document, describe how the installation process differed from what you saw for install.packages(), install() and install_github().

+
+
+

broom

+
+
pak::pkg_install("broom")
+
+
ℹ Loading metadata database
+
+
+
✔ Loading metadata database ... done
+
+
+
+
+
+
 
+
+
+
✔ All system requirements are already installed.
+
+
+
  
+
+
+
ℹ No downloads are needed
+
+
+
✔ 1 pkg + 20 deps: kept 21 [6.4s]
+
+
+

It is easy to see what stage the installation is in: pak continuously shows how many packages have finished installing out of the total number needed, which makes it easier to estimate how long the installation will take.

+
+
+

cytoMEM

+
+
pak::pkg_install("cytoMEM")
+
+
 
+
+
+
✔ All system requirements are already installed.
+
+
+
  
+
+
+
ℹ No downloads are needed
+
+
+
✔ 1 pkg + 14 deps: kept 15 [1.2s]
+
+
+

+

+

+

Installing cytoMEM causes pak::pkg_install() to also install KernSmooth, even though it is already installed. That installation fails because the Fortran compiler and the BLAS development library are not installed.

+

It works after installing these system packages.

+
+
+

DillonHammill/CytoExploreR

+

The next chunk is disabled (not evaluated) because the installation fails.

+
+
pak::pkg_install("DillonHammill/CytoExploreR")
+
+

Error:
! error in pak subprocess
Caused by error:
! Could not solve package dependencies:
* DillonHammill/CytoExploreR:
  * Can't install dependency EmbedSOM (>= 1.0.0)
  * Can't install dependency superheat (>= 1.0.0)
* EmbedSOM: Can't find package called EmbedSOM.

+

+

Package ‘EmbedSOM’ was removed from the CRAN repository.

+

Formerly available versions can be obtained from the archive.

+

Archived on 2025-12-22 as issues were not corrected in time.

+

So it seems that this package cannot be installed, at least not directly with pkg_install().
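If an archived package is still needed, one possible workaround is to point pak at the CRAN archive directly. This is only a sketch: the tarball version below is a placeholder, not a verified EmbedSOM release, and the archived sources may still fail to build.

```r
# Hypothetical: install an archived package straight from the CRAN archive.
# Replace x.y.z with an actual version listed in the archive for EmbedSOM.
pak::pkg_install(
  "url::https://cran.r-project.org/src/contrib/Archive/EmbedSOM/EmbedSOM_x.y.z.tar.gz"
)
```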

+

Unlike some other installation methods, pkg_install() does not print endless compilation messages, which is nice.

+
+
+ +
+ + +
+ + + + + \ No newline at end of file diff --git a/course/01_InstallingRPackages/homeworks/jttoivon/exercises.qmd b/course/01_InstallingRPackages/homeworks/jttoivon/exercises.qmd new file mode 100644 index 0000000..be5a6cf --- /dev/null +++ b/course/01_InstallingRPackages/homeworks/jttoivon/exercises.qmd @@ -0,0 +1,112 @@ +--- +title: "Solutions for week01" +author: Jarkko Toivonen +date: "`r Sys.Date()`" +format: + html: + embed-resources: true +--- + +```{r} +library(PeacoQC) +``` + + +## Problem 1 + +> We installed PeacoQC during this session, but we didn’t have time to explore what functions are present within the package. Using what you have learned about accessing documentation, figure out and list what functions it contains + +```{r} +help(package = PeacoQC) +``` + + +```{r} +ls("package:PeacoQC") +``` + +## Problem 2 + +> Take a closer look at the list of Bioconductor cytometry packages. Report back on how many there are currently in Bioconductor, the author/maintainer with the most contributed cytometry R packages, and a couple packages that you would be interested in exploring more in-depth later in the course. + +There are 69 Bioconductor packages about cytometry. + +Mike Jiang has contributed to 10 cytometry packages. + +These packages seem interesting: + +Package | Maintainer | Title | Rank +--------+------------+-------+----- +flowViz | Mike Jiang | Visualization for flow cytometry | 227 +ggcyto | Mike Jiang | Visualize Cytometry data with ggplot | 237 +flowPeaks | Yongchao Ge | An R package for flow data clustering | 648 + +## Problem 3 + + + +> There is another way to install R packages, using the newer pak package. Positron uses this when installing suggested dependencies. + +> After learning more about it via the documentation and it’s pkgdown website, I would like you to attempt to install the following three R packages using this newer method: “broom”, “cytoMEM”, “DillonHammill/CytoExploreR”. 
+ +> Take screenshots, and in a new quarto markdown document, describe how the installation process differed from what you saw for install.packages(), install() and install_github(). + +### broom + +```{r} +pak::pkg_install("broom") +``` + +It is easy to see in what stage the installation is. Meaning that all the time it +shows how many packages have finished installing out of the total number of needed packages. +It is then easier to estimate how long the installation will still take. + +### cytoMEM + +```{r} +pak::pkg_install("cytoMEM") +``` + + +![](cytomem.png) + +![](kernsmooth_fail1.png) + +![](kernsmooth_fail2.png) + +Installing cytoMEM causes pak::pkg_install() to install also KernSmooth, even though it is already installed. +It fails installing it because fortran compiler and blas library development package are not installed. + +Works after installing these. + +### DillonHammill/CytoExploreR + +The next chunk is disabled. + +```{r} +#| eval: FALSE +pak::pkg_install("DillonHammill/CytoExploreR") +``` + +Error: +! ! error in pak subprocess +Caused by error: +! Could not solve package dependencies: +* DillonHammill/CytoExploreR: + * Can't install dependency EmbedSOM (>= 1.0.0) + * Can't install dependency superheat (>= 1.0.0) +* EmbedSOM: Can't find package called EmbedSOM. +Show Traceback + +![](failure_due_to_embed_som.png) + +Package ‘EmbedSOM’ was removed from the CRAN repository. + +Formerly available versions can be obtained from the archive. + +Archived on 2025-12-22 as issues were not corrected in time. + +So, it seems that this package cannot be installed. At least not with pkg_install(). + +pkg_install() does not print endless messages about compilation, unlike some other installation +methods, which is good. 
\ No newline at end of file diff --git a/course/01_InstallingRPackages/homeworks/jttoivon/failure_due_to_embed_som.png b/course/01_InstallingRPackages/homeworks/jttoivon/failure_due_to_embed_som.png new file mode 100644 index 0000000..fe34d9d Binary files /dev/null and b/course/01_InstallingRPackages/homeworks/jttoivon/failure_due_to_embed_som.png differ diff --git a/course/01_InstallingRPackages/homeworks/jttoivon/kernsmooth_fail1.png b/course/01_InstallingRPackages/homeworks/jttoivon/kernsmooth_fail1.png new file mode 100644 index 0000000..1ad645a Binary files /dev/null and b/course/01_InstallingRPackages/homeworks/jttoivon/kernsmooth_fail1.png differ diff --git a/course/01_InstallingRPackages/homeworks/jttoivon/kernsmooth_fail2.png b/course/01_InstallingRPackages/homeworks/jttoivon/kernsmooth_fail2.png new file mode 100644 index 0000000..8acedd7 Binary files /dev/null and b/course/01_InstallingRPackages/homeworks/jttoivon/kernsmooth_fail2.png differ diff --git a/course/02_FilePaths/homeworks/DavidRach/images/Week_03.png b/course/02_FilePaths/homeworks/DavidRach/images/Week_03.png new file mode 100644 index 0000000..25d0058 Binary files /dev/null and b/course/02_FilePaths/homeworks/DavidRach/images/Week_03.png differ diff --git a/course/02_FilePaths/homeworks/DavidRach/index.qmd b/course/02_FilePaths/homeworks/DavidRach/index.qmd new file mode 100644 index 0000000..be3f588 --- /dev/null +++ b/course/02_FilePaths/homeworks/DavidRach/index.qmd @@ -0,0 +1 @@ +Making Sure I can save things. \ No newline at end of file diff --git a/course/02_FilePaths/homeworks/jttoivon/filepaths.html b/course/02_FilePaths/homeworks/jttoivon/filepaths.html new file mode 100644 index 0000000..357bc08 --- /dev/null +++ b/course/02_FilePaths/homeworks/jttoivon/filepaths.html @@ -0,0 +1,3710 @@ + + + + + + + + + + + +Solutions for week02 + + + + + + + + + + + + + + + + + + + +
+ +
+ +
+
+

Solutions for week02

+
+ + + +
+ +
+
Author
+
+

Jarkko Toivonen

+
+
+ +
+
Published
+
+

February 14, 2026

+
+
+ + +
+ + + +
+ + +
+
library(magrittr)  # For the pipe
+library(fs)
+
+
+

Problem 1

+
+

Plug in an external hard-drive or USB into your computer. Manually, create a folder within called “TargetFolder”. Try to programmatically specify the file path to identify the folders and files present on your external drive. Then, try to copy your .fcs files from their current folder on your desktop to the TargetFolder on your drive using R. Remember, just copy, no deletion, you need to walk before you can run :D

+
+
+
fcs_files <- list.files("data", pattern = ".fcs", full.names = TRUE)
+fcs_files
+
+
[1] "data/CellCounts3L_AB_02_INF052_00.fcs"     
+[2] "data/CellCounts3L_AB_02_ND050_02.fcs"      
+[3] "data/CellCounts4L_AB_03_INF134_00.fcs"     
+[4] "data/CellCounts4L_AB_03_NY068_03.fcs"      
+[5] "data/CellCounts4L_AB_04_INF124-7_00_01.fcs"
+[6] "data/CellCounts4L_AB_04_ND006_04.fcs"      
+[7] "data/CellCounts4L_AB_05_INF019-0_00_01.fcs"
+[8] "data/CellCounts4L_AB_05_ND050_05.fcs"      
+
+
+
+
usb_stick <- file.path("", "media", "jttoivon", "KINGSTON")
+if (file.exists(usb_stick)) {
+    target <- usb_stick
+} else {
+    target <- file.path("", "tmp")
+}
+target <- file.path(target, "TargetFolder")
+target
+
+
[1] "/media/jttoivon/KINGSTON/TargetFolder"
+
+
+
+
dir.create(target)
+
+
+
file.copy(fcs_files, target)
+
+
[1] TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE
+
+
+
+
+
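A slightly more defensive variant of the create-and-copy step above (a sketch using only base R, with the same fcs_files and target as defined earlier):

```r
# Only create the folder if it does not exist yet, and fail loudly
# if any of the copies did not succeed.
if (!dir.exists(target)) dir.create(target, recursive = TRUE)
stopifnot(all(file.copy(fcs_files, target, overwrite = FALSE)))
```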

Problem 2

+
+

In this session, we used list.files() with the “full.names argument” set to TRUE, as well as the basename() function to identify specific files. But what if you wanted a particular directory. Run list.files() with “full.names argument” and “recursive” argument set to TRUE, and then search online to find an R function that would retrieve the “” individual directory folders.

+
+
+
all_files <- list.files(".", full.names = TRUE, recursive = TRUE)
+all_files
+
+
 [1] "./data/CellCounts3L_AB_02_INF052_00.fcs"             
+ [2] "./data/CellCounts3L_AB_02_ND050_02.fcs"              
+ [3] "./data/CellCounts4L_AB_03_INF134_00.fcs"             
+ [4] "./data/CellCounts4L_AB_03_NY068_03.fcs"              
+ [5] "./data/CellCounts4L_AB_04_INF124-7_00_01.fcs"        
+ [6] "./data/CellCounts4L_AB_04_ND006_04.fcs"              
+ [7] "./data/CellCounts4L_AB_05_INF019-0_00_01.fcs"        
+ [8] "./data/CellCounts4L_AB_05_ND050_05.fcs"              
+ [9] "./data/target/CellCounts3L_AB_02_INF052_00.fcs"      
+[10] "./data/target/CellCounts3L_AB_02_ND050_02.fcs"       
+[11] "./data/target2/CellCounts3L_AB_02_INF052_00.fcs"     
+[12] "./data/target2/CellCounts3L_AB_02_ND050_02.fcs"      
+[13] "./data/target3/CellCounts3L_AB_02_INF052_00.fcs"     
+[14] "./data/target3/CellCounts4L_AB_03_INF134_00.fcs"     
+[15] "./data/target3/CellCounts4L_AB_04_INF124-7_00_01.fcs"
+[16] "./data/target3/CellCounts4L_AB_05_INF019-0_00_01.fcs"
+[17] "./filepaths.html"                                    
+[18] "./filepaths.qmd"                                     
+[19] "./filepaths.rmarkdown"                               
+[20] "./README.md"                                         
+
+
+

Split the paths into components.

+
+
all_files %>%
+    fs::path_norm() %>%    # Get rid of the "." in the beginning
+    fs::path_split()       # Split into components
+
+
[[1]]
+[1] "data"                             "CellCounts3L_AB_02_INF052_00.fcs"
+
+[[2]]
+[1] "data"                            "CellCounts3L_AB_02_ND050_02.fcs"
+
+[[3]]
+[1] "data"                             "CellCounts4L_AB_03_INF134_00.fcs"
+
+[[4]]
+[1] "data"                            "CellCounts4L_AB_03_NY068_03.fcs"
+
+[[5]]
+[1] "data"                                 
+[2] "CellCounts4L_AB_04_INF124-7_00_01.fcs"
+
+[[6]]
+[1] "data"                            "CellCounts4L_AB_04_ND006_04.fcs"
+
+[[7]]
+[1] "data"                                 
+[2] "CellCounts4L_AB_05_INF019-0_00_01.fcs"
+
+[[8]]
+[1] "data"                            "CellCounts4L_AB_05_ND050_05.fcs"
+
+[[9]]
+[1] "data"                             "target"                          
+[3] "CellCounts3L_AB_02_INF052_00.fcs"
+
+[[10]]
+[1] "data"                            "target"                         
+[3] "CellCounts3L_AB_02_ND050_02.fcs"
+
+[[11]]
+[1] "data"                             "target2"                         
+[3] "CellCounts3L_AB_02_INF052_00.fcs"
+
+[[12]]
+[1] "data"                            "target2"                        
+[3] "CellCounts3L_AB_02_ND050_02.fcs"
+
+[[13]]
+[1] "data"                             "target3"                         
+[3] "CellCounts3L_AB_02_INF052_00.fcs"
+
+[[14]]
+[1] "data"                             "target3"                         
+[3] "CellCounts4L_AB_03_INF134_00.fcs"
+
+[[15]]
+[1] "data"                                 
+[2] "target3"                              
+[3] "CellCounts4L_AB_04_INF124-7_00_01.fcs"
+
+[[16]]
+[1] "data"                                 
+[2] "target3"                              
+[3] "CellCounts4L_AB_05_INF019-0_00_01.fcs"
+
+[[17]]
+[1] "filepaths.html"
+
+[[18]]
+[1] "filepaths.qmd"
+
+[[19]]
+[1] "filepaths.rmarkdown"
+
+[[20]]
+[1] "README.md"
+
+
+
+
+
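As a cross-check, base R can also retrieve the directory components directly, without fs (a sketch; all_files as defined above):

```r
# Parent directory of each file, de-duplicated
unique(dirname(all_files))

# Or list the directories themselves
list.dirs(".", recursive = TRUE)
```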

Problem 3

+
+

R packages often come with internal datasets that are typically used in the help documentation examples. These can be accessed through the use of the system.file() function. See an example below.

+
+
+
system.file("extdata", package = "FlowSOM")
+
+
+

Using what we have learned about file.path navigation, search your way down the file.directory of the FlowSOM and flowWorkspace packages, and identify any .fcs files that are present for use in the documentation.

+
+
+
system.file("extdata", package = "FlowSOM") %>% 
+    list.files(pattern = "\\.fcs$", full.names = TRUE, recursive = TRUE)
+
+
[1] "/usr/lib/R/site-library/FlowSOM/extdata/68983.fcs"
+
+
+
+
system.file("extdata", package = "flowWorkspace") %>% 
+    list.files(pattern = "\\.fcs$", full.names = TRUE, recursive = TRUE)
+
+
character(0)
+
+
+

No .fcs files found for package flowWorkspace.
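The search above only looks under extdata; as a sanity check, the whole installed package can be scanned instead (a sketch — flowWorkspace may genuinely ship no .fcs files):

```r
# Search the entire flowWorkspace installation directory, not just extdata
system.file(package = "flowWorkspace") %>%
    list.files(pattern = "\\.fcs$", full.names = TRUE, recursive = TRUE)
```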

+
+ +
+ + +
+ + + + + \ No newline at end of file diff --git a/course/02_FilePaths/homeworks/jttoivon/filepaths.qmd b/course/02_FilePaths/homeworks/jttoivon/filepaths.qmd new file mode 100644 index 0000000..44d1d40 --- /dev/null +++ b/course/02_FilePaths/homeworks/jttoivon/filepaths.qmd @@ -0,0 +1,82 @@ +--- +title: Solutions for week02 +author: Jarkko Toivonen +date: "`r Sys.Date()`" +format: + html: + embed-resources: true +--- + +```{r setup} +library(magrittr) # For the pipe +library(fs) +``` +## Problem 1 + +> Plug in an external hard-drive or USB into your computer. Manually, create a folder within called “TargetFolder”. Try to programmatically specify the file path to identify the folders and files present on your external drive. Then, try to copy your .fcs files from their current folder on your desktop to the TargetFolder on your drive using R. Remember, just copy, no deletion, you need to walk before you can run :D + +```{r} +fcs_files <- list.files("data", pattern = ".fcs", full.names = TRUE) +fcs_files +``` + + +```{r} +usb_stick <- file.path("", "media", "jttoivon", "KINGSTON") +if (file.exists(usb_stick)) { + target <- usb_stick +} else { + target <- file.path("", "tmp") +} +target <- file.path(target, "TargetFolder") +target +``` + +```{r} +dir.create(target) +``` + +```{r} +file.copy(fcs_files, target) +``` + +## Problem 2 + +> In this session, we used list.files() with the “full.names argument” set to TRUE, as well as the basename() function to identify specific files. But what if you wanted a particular directory. Run list.files() with “full.names argument” and “recursive” argument set to TRUE, and then search online to find an R function that would retrieve the “” individual directory folders. + + +```{r} +all_files <- list.files(".", full.names = TRUE, recursive = TRUE) +all_files +``` + +Split the paths into components. + +```{r} +all_files %>% + fs::path_norm() %>% # Get rid of the "." 
in the beginning + fs::path_split() # Split into components +``` + +## Problem 3 + +> R packages often come with internal datasets, that are typically used for use in the help documentation examples. These can be accessed through the use of the `system.file()` function. See an example below. + +```{r} +#| eval: FALSE +system.file("extdata", package = "FlowSOM") +``` + +> Using what we have learned about file.path navigation, search your way down the file.directory of the `FlowSOM` and `flowWorkspace` packages, and identify any .fcs files that are present for use in the documentation. + +```{r} +system.file("extdata", package = "FlowSOM") %>% + list.files(pattern = "\\.fcs$", full.names = TRUE, recursive = TRUE) +``` + +```{r} +system.file("extdata", package = "flowWorkspace") %>% + list.files(pattern = "\\.fcs$", full.names = TRUE, recursive = TRUE) +``` + +No .fcs files found for package flowWorkspace. \ No newline at end of file diff --git a/course/03_InsideFCSFile/homeworks/jttoivon/details.png b/course/03_InsideFCSFile/homeworks/jttoivon/details.png new file mode 100644 index 0000000..8814eb5 Binary files /dev/null and b/course/03_InsideFCSFile/homeworks/jttoivon/details.png differ diff --git a/course/03_InsideFCSFile/homeworks/jttoivon/keywords.png b/course/03_InsideFCSFile/homeworks/jttoivon/keywords.png new file mode 100644 index 0000000..a0f0604 Binary files /dev/null and b/course/03_InsideFCSFile/homeworks/jttoivon/keywords.png differ diff --git a/course/03_InsideFCSFile/homeworks/jttoivon/menu.png b/course/03_InsideFCSFile/homeworks/jttoivon/menu.png new file mode 100644 index 0000000..5456ef0 Binary files /dev/null and b/course/03_InsideFCSFile/homeworks/jttoivon/menu.png differ diff --git a/course/03_InsideFCSFile/homeworks/jttoivon/observables-and-cells.png b/course/03_InsideFCSFile/homeworks/jttoivon/observables-and-cells.png new file mode 100644 index 0000000..cd925bd Binary files /dev/null and 
b/course/03_InsideFCSFile/homeworks/jttoivon/observables-and-cells.png differ diff --git a/course/03_InsideFCSFile/homeworks/jttoivon/solutions_03.html b/course/03_InsideFCSFile/homeworks/jttoivon/solutions_03.html new file mode 100644 index 0000000..c12753a --- /dev/null +++ b/course/03_InsideFCSFile/homeworks/jttoivon/solutions_03.html @@ -0,0 +1,8555 @@ + + + + + + + + + + + +Solutions for week03 + + + + + + + + + + + + + + + + + + + +
+ +
+ +
+
+

Solutions for week03

+
+ + + +
+ +
+
Author
+
+

Jarkko Toivonen

+
+
+ +
+
Published
+
+

February 28, 2026

+
+
+ + +
+ + + +
+ + +
+
library(flowCore)
+library(magrittr)
+library(glue)
+library(tibble)
+library(dplyr)
+library(tidyr)
+library(stringr)
+library(purrr)
+library(ggplot2)
+
+# Default printing causes problems when there are dollar signs in the table.
+# In those cases use the below function instead of the default method
+mykable <- function(df) knitr::kable(df, escape = TRUE, format = "html")
+
+

Helper function.

+
+
get_spill <- function(flow_frame) 
+{
+    dl <- flow_frame@description
+    if ("$SPILLOVER" %in% names(dl)) {
+        return(dl[["$SPILLOVER"]])
+    } else if ("SPILL" %in% names(dl)) {
+        return(dl[["SPILL"]])
+    } else {
+        return(NULL)
+    }
+
+}
+
+
+

Problem 1

+
+

Today’s walkthrough focused on a raw spectral flow cytometry file. Within a subfolder in data you will also find an unmixed .fcs file (2025_07_26…). Using what you learned today, investigate it, and see if you can catalog the main differences that occurred to the keywords, parameters and exprs. Did any keywords get added, changed, or deleted entirely? etc.

+
+
+
filename1 <- "data/CellCounts4L_AB_05_ND050_05.fcs"
+filename2 <- "data/AdditionalFCSFiles/2025_07_26_AB_02_NY068_02_Ctrl.fcs"
+
+
+
flow_frame1 <- read.FCS(filename=filename1, transformation = FALSE, truncate_max_range = FALSE)
+flow_frame2 <- read.FCS(filename=filename2, transformation = FALSE, truncate_max_range = FALSE)
+
+
+
flow_frame1
+
+
flowFrame object 'CellCounts4L_AB_05-ND050-05.fcs'
+with 100 cells and 61 observables:
+       name   desc     range  minRange  maxRange
+$P1    Time     NA    272140         0    272139
+$P2   UV1-A     NA   4194304      -111   4194303
+$P3   UV2-A     NA   4194304      -111   4194303
+$P4   UV3-A     NA   4194304      -111   4194303
+$P5   UV4-A     NA   4194304      -111   4194303
+...     ...    ...       ...       ...       ...
+$P57   R4-A     NA   4194304      -111   4194303
+$P58   R5-A     NA   4194304      -111   4194303
+$P59   R6-A     NA   4194304      -111   4194303
+$P60   R7-A     NA   4194304      -111   4194303
+$P61   R8-A     NA   4194304      -111   4194303
+476 keywords are stored in the 'description' slot
+
+
+

MFI = mean/median fluorescence intensity

+
+
flow_frame2
+
+
flowFrame object '2025_07_26_AB_02_NY068_02_Ctrl.fcs'
+with 100 cells and 43 observables:
+               name      desc     range  minRange    maxRange
+$P1            Time        NA    516839         0     506.501
+$P2           SSC-W        NA   4194304         0 4194303.000
+$P3           SSC-H        NA   4194304         0 4194303.000
+$P4           SSC-A        NA   4194304         0 4194303.000
+$P5           FSC-W        NA   4194304         0 4194303.000
+...             ...       ...       ...       ...         ...
+$P39     APC-R700-A    CD107a   4194304      -111     4192506
+$P40   Zombie NIR-A Viability   4194304      -111     4192506
+$P41 APC-Fire 750-A      CD27   4194304      -111     4192506
+$P42 APC-Fire 810-A      CCR7   4194304      -111     4192506
+$P43           AF-A        NA   4194304      -111     4194303
+472 keywords are stored in the 'description' slot
+
+
+

For the second file, the desc column is empty only for the scatter parameters. In this unmixed .fcs file, the name column contains the fluorophore or metal name and the desc column contains the name of the biomarker we are interested in.

+
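That name/desc pairing can be turned into a quick detector-to-marker lookup (a sketch using the standard parameter accessors; Biobase is a flowCore dependency):

```r
# Build a named vector mapping detector name -> marker description
pd <- Biobase::pData(parameters(flow_frame2))
setNames(pd$desc, pd$name)
```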
+

exprs

+
+
e1 <- exprs(flow_frame1)
+e2 <- exprs(flow_frame2)
+
+
+
glue("file1: expr has {ncol(e1)} observables and {nrow(e1)} cells\n")
+
+
file1: expr has 61 observables and 100 cells
+
+
glue("file2: expr has {ncol(e2)} observables and {nrow(e2)} cells\n")
+
+
file2: expr has 43 observables and 100 cells
+
+
+

The column names differ, but the names attached to the column-name vector (the parameter identifiers $P1N, $P2N, …) are the same:

+
+
df1 <- tibble(id = names(colnames(e1)), name1 = colnames(e1))
+df2 <- tibble(id = names(colnames(e2)), name2 = colnames(e2))
+df <- full_join(df1, df2, by="id")
+mykable(df)
+
+
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
idname1name2
$P1NTimeTime
$P2NUV1-ASSC-W
$P3NUV2-ASSC-H
$P4NUV3-ASSC-A
$P5NUV4-AFSC-W
$P6NUV5-AFSC-H
$P7NUV6-AFSC-A
$P8NUV7-ASSC-B-W
$P9NUV8-ASSC-B-H
$P10NUV9-ASSC-B-A
$P11NUV10-ABUV395-A
$P12NUV11-ABUV563-A
$P13NUV12-ABUV615-A
$P14NUV13-ABUV661-A
$P15NUV14-ABUV737-A
$P16NUV15-ABUV805-A
$P17NUV16-APacific Blue-A
$P18NSSC-HBV480-A
$P19NSSC-ABV570-A
$P20NV1-ABV605-A
$P21NV2-ABV650-A
$P22NV3-ABV711-A
$P23NV4-ABV750-A
$P24NV5-ABV786-A
$P25NV6-AAlexa Fluor 488-A
$P26NV7-ASpark Blue 550-A
$P27NV8-ASpark Blue 574-A
$P28NV9-ARB613-A
$P29NV10-ARB705-A
$P30NV11-ARB780-A
$P31NV12-APE-A
$P32NV13-APE-Dazzle594-A
$P33NV14-APE-Cy5-A
$P34NV15-APE-Fire 700-A
$P35NV16-APE-Fire 744-A
$P36NFSC-HPE-Vio770-A
$P37NFSC-AAPC-A
$P38NSSC-B-HAlexa Fluor 647-A
$P39NSSC-B-AAPC-R700-A
$P40NB1-AZombie NIR-A
$P41NB2-AAPC-Fire 750-A
$P42NB3-AAPC-Fire 810-A
$P43NB4-AAF-A
$P44NB5-ANA
$P45NB6-ANA
$P46NB7-ANA
$P47NB8-ANA
$P48NB9-ANA
$P49NB10-ANA
$P50NB11-ANA
$P51NB12-ANA
$P52NB13-ANA
$P53NB14-ANA
$P54NR1-ANA
$P55NR2-ANA
$P56NR3-ANA
$P57NR4-ANA
$P58NR5-ANA
$P59NR6-ANA
$P60NR7-ANA
$P61NR8-ANA
+ + +
+
+
+
+
+

Parameters

+
+

varMetadata

+

The varMetadata of the parameters is the same for both flow frames:

+
+
parameters(flow_frame1)@varMetadata
+
+
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
labelDescription
nameName of Parameter
descDescription of Parameter
rangeRange of Parameter
minRangeMinimum Parameter Value after Transforamtion
maxRangeMaximum Parameter Value after Transformation
+
+
+
parameters(flow_frame2)@varMetadata
+
+
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
labelDescription
nameName of Parameter
descDescription of Parameter
rangeRange of Parameter
minRangeMinimum Parameter Value after Transforamtion
maxRangeMaximum Parameter Value after Transformation
+
+
+
+
+
+

data

+

In the parameters@data slot, the range columns are the same except for Time; the other columns show differences.

+
+
x <- full_join(
+    as_tibble(parameters(flow_frame1)@data, rownames="id") %>% select(-desc),
+    as_tibble(parameters(flow_frame2)@data, rownames="id"), # %>% select(-desc),
+    by="id", suffix = c("_1", "_2")) %>%
+    relocate(id, name_1, name_2, desc_2=desc, range_1, range_2, minRange_1, minRange_2, maxRange_1, maxRange_2)
+mykable(x)
+
+
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
idname_1name_2desc_2range_1range_2minRange_1minRange_2maxRange_1maxRange_2
$P1TimeTimeNA2721405168390.000000.0000272139506.5013
$P2UV1-ASSC-WNA41943044194304-111.000000.000041943034194303.0000
$P3UV2-ASSC-HNA41943044194304-111.000000.000041943034194303.0000
$P4UV3-ASSC-ANA41943044194304-111.000000.000041943034194303.0000
$P5UV4-AFSC-WNA41943044194304-111.000000.000041943034194303.0000
$P6UV5-AFSC-HNA41943044194304-111.000000.000041943034194303.0000
$P7UV6-AFSC-ANA41943044194304-111.000000.000041943034194303.0000
$P8UV7-ASSC-B-WNA41943044194304-26.346490.000041943034194303.0000
$P9 | UV8-A | SSC-B-H | NA | 4194304 | 4194304 | -111.00000 | 0.0000 | 4194303 | 4194303.0000
$P10 | UV9-A | SSC-B-A | NA | 4194304 | 4194304 | 0.00000 | 0.0000 | 4194303 | 4194303.0000
$P11 | UV10-A | BUV395-A | CD62L | 4194304 | 4194304 | -111.00000 | -111.0001 | 4194303 | 4192505.7500
$P12 | UV11-A | BUV563-A | CD69 | 4194304 | 4194304 | -111.00000 | -111.0001 | 4194303 | 4192505.7500
$P13 | UV12-A | BUV615-A | CCR4 | 4194304 | 4194304 | -111.00000 | -111.0001 | 4194303 | 4192505.7500
$P14 | UV13-A | BUV661-A | Vd2 | 4194304 | 4194304 | -111.00000 | -111.0001 | 4194303 | 4192505.7500
$P15 | UV14-A | BUV737-A | CD38 | 4194304 | 4194304 | -111.00000 | -111.0001 | 4194303 | 4192505.7500
$P16 | UV15-A | BUV805-A | CD4 | 4194304 | 4194304 | -111.00000 | -111.0001 | 4194303 | 4192505.7500
$P17 | UV16-A | Pacific Blue-A | Dump | 4194304 | 4194304 | -111.00000 | -111.0001 | 4194303 | 4192505.7500
$P18 | SSC-H | BV480-A | CD161 | 4194304 | 4194304 | 0.00000 | -111.0001 | 4194303 | 4192505.7500
$P19 | SSC-A | BV570-A | CD16 | 4194304 | 4194304 | 0.00000 | -111.0001 | 4194303 | 4192505.7500
$P20 | V1-A | BV605-A | CD45RA | 4194304 | 4194304 | -111.00000 | -111.0001 | 4194303 | 4192505.7500
$P21 | V2-A | BV650-A | CD8 | 4194304 | 4194304 | -111.00000 | -111.0001 | 4194303 | 4192505.7500
$P22 | V3-A | BV711-A | Va7.2 | 4194304 | 4194304 | -111.00000 | -111.0001 | 4194303 | 4192505.7500
$P23 | V4-A | BV750-A | IFNg | 4194304 | 4194304 | -111.00000 | -111.0001 | 4194303 | 4192505.7500
$P24 | V5-A | BV786-A | CCR6 | 4194304 | 4194304 | -111.00000 | -111.0001 | 4194303 | 4192505.7500
$P25 | V6-A | Alexa Fluor 488-A | FoxP3 | 4194304 | 4194304 | -111.00000 | -111.0001 | 4194303 | 4192505.7500
$P26 | V7-A | Spark Blue 550-A | CD3 | 4194304 | 4194304 | -111.00000 | -111.0001 | 4194303 | 4192505.7500
$P27 | V8-A | Spark Blue 574-A | CD45 | 4194304 | 4194304 | -111.00000 | -111.0001 | 4194303 | 4192505.7500
$P28 | V9-A | RB613-A | PD1 | 4194304 | 4194304 | -111.00000 | -111.0001 | 4194303 | 4192505.7500
$P29 | V10-A | RB705-A | CD26 | 4194304 | 4194304 | -111.00000 | -111.0001 | 4194303 | 4192505.7500
$P30 | V11-A | RB780-A | CXCR5 | 4194304 | 4194304 | -111.00000 | -111.0001 | 4194303 | 4192505.7500
$P31 | V12-A | PE-A | ICOS | 4194304 | 4194304 | -111.00000 | -111.0001 | 4194303 | 4192505.7500
$P32 | V13-A | PE-Dazzle594-A | TNFa | 4194304 | 4194304 | -111.00000 | -111.0001 | 4194303 | 4192505.7500
$P33 | V14-A | PE-Cy5-A | CXCR3 | 4194304 | 4194304 | -111.00000 | -111.0001 | 4194303 | 4192505.7500
$P34 | V15-A | PE-Fire 700-A | CD127 | 4194304 | 4194304 | -111.00000 | -111.0001 | 4194303 | 4192505.7500
$P35 | V16-A | PE-Fire 744-A | CD25 | 4194304 | 4194304 | -111.00000 | -111.0001 | 4194303 | 4192505.7500
$P36 | FSC-H | PE-Vio770-A | HLA-DR | 4194304 | 4194304 | 0.00000 | -111.0001 | 4194303 | 4192505.7500
$P37 | FSC-A | APC-A | CD39 | 4194304 | 4194304 | 0.00000 | -111.0001 | 4194303 | 4192505.7500
$P38 | SSC-B-H | Alexa Fluor 647-A | IL-2 | 4194304 | 4194304 | 0.00000 | -111.0001 | 4194303 | 4192505.7500
$P39 | SSC-B-A | APC-R700-A | CD107a | 4194304 | 4194304 | 0.00000 | -111.0001 | 4194303 | 4192505.7500
$P40 | B1-A | Zombie NIR-A | Viability | 4194304 | 4194304 | -111.00000 | -111.0001 | 4194303 | 4192505.7500
$P41 | B2-A | APC-Fire 750-A | CD27 | 4194304 | 4194304 | -111.00000 | -111.0001 | 4194303 | 4192505.7500
$P42 | B3-A | APC-Fire 810-A | CCR7 | 4194304 | 4194304 | -111.00000 | -111.0001 | 4194303 | 4192505.7500
$P43 | B4-A | AF-A | NA | 4194304 | 4194304 | -111.00000 | -111.0000 | 4194303 | 4194303.0000
$P44 | B5-A | NA | NA | 4194304 | NA | -111.00000 | NA | 4194303 | NA
$P45 | B6-A | NA | NA | 4194304 | NA | 0.00000 | NA | 4194303 | NA
$P46 | B7-A | NA | NA | 4194304 | NA | -111.00000 | NA | 4194303 | NA
$P47 | B8-A | NA | NA | 4194304 | NA | -111.00000 | NA | 4194303 | NA
$P48 | B9-A | NA | NA | 4194304 | NA | -111.00000 | NA | 4194303 | NA
$P49 | B10-A | NA | NA | 4194304 | NA | -111.00000 | NA | 4194303 | NA
$P50 | B11-A | NA | NA | 4194304 | NA | -111.00000 | NA | 4194303 | NA
$P51 | B12-A | NA | NA | 4194304 | NA | -111.00000 | NA | 4194303 | NA
$P52 | B13-A | NA | NA | 4194304 | NA | -111.00000 | NA | 4194303 | NA
$P53 | B14-A | NA | NA | 4194304 | NA | -111.00000 | NA | 4194303 | NA
$P54 | R1-A | NA | NA | 4194304 | NA | -111.00000 | NA | 4194303 | NA
$P55 | R2-A | NA | NA | 4194304 | NA | -111.00000 | NA | 4194303 | NA
$P56 | R3-A | NA | NA | 4194304 | NA | -111.00000 | NA | 4194303 | NA
$P57 | R4-A | NA | NA | 4194304 | NA | -111.00000 | NA | 4194303 | NA
$P58 | R5-A | NA | NA | 4194304 | NA | -111.00000 | NA | 4194303 | NA
$P59 | R6-A | NA | NA | 4194304 | NA | -111.00000 | NA | 4194303 | NA
$P60 | R7-A | NA | NA | 4194304 | NA | -111.00000 | NA | 4194303 | NA
$P61 | R8-A | NA | NA | 4194304 | NA | -111.00000 | NA | 4194303 | NA

dimLabels and classVersion


No differences in the dimLabels and classVersion slots:

flow_frame1@parameters@dimLabels

[1] "rowNames"    "columnNames"

flow_frame2@parameters@dimLabels

[1] "rowNames"    "columnNames"

flow_frame1@parameters@.__classVersion__

AnnotatedDataFrame 
           "1.1.0" 

flow_frame2@parameters@.__classVersion__

AnnotatedDataFrame 
           "1.1.0" 

Description

dl1 <- keyword(flow_frame1)
dl2 <- keyword(flow_frame2)

names(dl1)
  [1] "$BEGINANALYSIS"     "$BEGINDATA"         "$BEGINSTEXT"       
+  [4] "$BTIM"              "$BYTEORD"           "$CYT"              
+  [7] "$CYTOLIB_VERSION"   "$CYTSN"             "$DATATYPE"         
+ [10] "$DATE"              "$ENDANALYSIS"       "$ENDDATA"          
+ [13] "$ENDSTEXT"          "$ETIM"              "$FIL"              
+ [16] "$INST"              "$MODE"              "$NEXTDATA"         
+ [19] "$OP"                "$P10B"              "$P10E"             
+ [22] "$P10N"              "$P10R"              "$P10TYPE"          
+ [25] "$P10V"              "$P11B"              "$P11E"             
+ [28] "$P11N"              "$P11R"              "$P11TYPE"          
+ [31] "$P11V"              "$P12B"              "$P12E"             
+ [34] "$P12N"              "$P12R"              "$P12TYPE"          
+ [37] "$P12V"              "$P13B"              "$P13E"             
+ [40] "$P13N"              "$P13R"              "$P13TYPE"          
+ [43] "$P13V"              "$P14B"              "$P14E"             
+ [46] "$P14N"              "$P14R"              "$P14TYPE"          
+ [49] "$P14V"              "$P15B"              "$P15E"             
+ [52] "$P15N"              "$P15R"              "$P15TYPE"          
+ [55] "$P15V"              "$P16B"              "$P16E"             
+ [58] "$P16N"              "$P16R"              "$P16TYPE"          
+ [61] "$P16V"              "$P17B"              "$P17E"             
+ [64] "$P17N"              "$P17R"              "$P17TYPE"          
+ [67] "$P17V"              "$P18B"              "$P18E"             
+ [70] "$P18N"              "$P18R"              "$P18TYPE"          
+ [73] "$P18V"              "$P19B"              "$P19E"             
+ [76] "$P19N"              "$P19R"              "$P19TYPE"          
+ [79] "$P19V"              "$P1B"               "$P1E"              
+ [82] "$P1N"               "$P1R"               "$P1TYPE"           
+ [85] "$P20B"              "$P20E"              "$P20N"             
+ [88] "$P20R"              "$P20TYPE"           "$P20V"             
+ [91] "$P21B"              "$P21E"              "$P21N"             
+ [94] "$P21R"              "$P21TYPE"           "$P21V"             
+ [97] "$P22B"              "$P22E"              "$P22N"             
+[100] "$P22R"              "$P22TYPE"           "$P22V"             
+[103] "$P23B"              "$P23E"              "$P23N"             
+[106] "$P23R"              "$P23TYPE"           "$P23V"             
+[109] "$P24B"              "$P24E"              "$P24N"             
+[112] "$P24R"              "$P24TYPE"           "$P24V"             
+[115] "$P25B"              "$P25E"              "$P25N"             
+[118] "$P25R"              "$P25TYPE"           "$P25V"             
+[121] "$P26B"              "$P26E"              "$P26N"             
+[124] "$P26R"              "$P26TYPE"           "$P26V"             
+[127] "$P27B"              "$P27E"              "$P27N"             
+[130] "$P27R"              "$P27TYPE"           "$P27V"             
+[133] "$P28B"              "$P28E"              "$P28N"             
+[136] "$P28R"              "$P28TYPE"           "$P28V"             
+[139] "$P29B"              "$P29E"              "$P29N"             
+[142] "$P29R"              "$P29TYPE"           "$P29V"             
+[145] "$P2B"               "$P2E"               "$P2N"              
+[148] "$P2R"               "$P2TYPE"            "$P2V"              
+[151] "$P30B"              "$P30E"              "$P30N"             
+[154] "$P30R"              "$P30TYPE"           "$P30V"             
+[157] "$P31B"              "$P31E"              "$P31N"             
+[160] "$P31R"              "$P31TYPE"           "$P31V"             
+[163] "$P32B"              "$P32E"              "$P32N"             
+[166] "$P32R"              "$P32TYPE"           "$P32V"             
+[169] "$P33B"              "$P33E"              "$P33N"             
+[172] "$P33R"              "$P33TYPE"           "$P33V"             
+[175] "$P34B"              "$P34E"              "$P34N"             
+[178] "$P34R"              "$P34TYPE"           "$P34V"             
+[181] "$P35B"              "$P35E"              "$P35N"             
+[184] "$P35R"              "$P35TYPE"           "$P35V"             
+[187] "$P36B"              "$P36E"              "$P36N"             
+[190] "$P36R"              "$P36TYPE"           "$P36V"             
+[193] "$P37B"              "$P37E"              "$P37N"             
+[196] "$P37R"              "$P37TYPE"           "$P37V"             
+[199] "$P38B"              "$P38E"              "$P38N"             
+[202] "$P38R"              "$P38TYPE"           "$P38V"             
+[205] "$P39B"              "$P39E"              "$P39N"             
+[208] "$P39R"              "$P39TYPE"           "$P39V"             
+[211] "$P3B"               "$P3E"               "$P3N"              
+[214] "$P3R"               "$P3TYPE"            "$P3V"              
+[217] "$P40B"              "$P40E"              "$P40N"             
+[220] "$P40R"              "$P40TYPE"           "$P40V"             
+[223] "$P41B"              "$P41E"              "$P41N"             
+[226] "$P41R"              "$P41TYPE"           "$P41V"             
+[229] "$P42B"              "$P42E"              "$P42N"             
+[232] "$P42R"              "$P42TYPE"           "$P42V"             
+[235] "$P43B"              "$P43E"              "$P43N"             
+[238] "$P43R"              "$P43TYPE"           "$P43V"             
+[241] "$P44B"              "$P44E"              "$P44N"             
+[244] "$P44R"              "$P44TYPE"           "$P44V"             
+[247] "$P45B"              "$P45E"              "$P45N"             
+[250] "$P45R"              "$P45TYPE"           "$P45V"             
+[253] "$P46B"              "$P46E"              "$P46N"             
+[256] "$P46R"              "$P46TYPE"           "$P46V"             
+[259] "$P47B"              "$P47E"              "$P47N"             
+[262] "$P47R"              "$P47TYPE"           "$P47V"             
+[265] "$P48B"              "$P48E"              "$P48N"             
+[268] "$P48R"              "$P48TYPE"           "$P48V"             
+[271] "$P49B"              "$P49E"              "$P49N"             
+[274] "$P49R"              "$P49TYPE"           "$P49V"             
+[277] "$P4B"               "$P4E"               "$P4N"              
+[280] "$P4R"               "$P4TYPE"            "$P4V"              
+[283] "$P50B"              "$P50E"              "$P50N"             
+[286] "$P50R"              "$P50TYPE"           "$P50V"             
+[289] "$P51B"              "$P51E"              "$P51N"             
+[292] "$P51R"              "$P51TYPE"           "$P51V"             
+[295] "$P52B"              "$P52E"              "$P52N"             
+[298] "$P52R"              "$P52TYPE"           "$P52V"             
+[301] "$P53B"              "$P53E"              "$P53N"             
+[304] "$P53R"              "$P53TYPE"           "$P53V"             
+[307] "$P54B"              "$P54E"              "$P54N"             
+[310] "$P54R"              "$P54TYPE"           "$P54V"             
+[313] "$P55B"              "$P55E"              "$P55N"             
+[316] "$P55R"              "$P55TYPE"           "$P55V"             
+[319] "$P56B"              "$P56E"              "$P56N"             
+[322] "$P56R"              "$P56TYPE"           "$P56V"             
+[325] "$P57B"              "$P57E"              "$P57N"             
+[328] "$P57R"              "$P57TYPE"           "$P57V"             
+[331] "$P58B"              "$P58E"              "$P58N"             
+[334] "$P58R"              "$P58TYPE"           "$P58V"             
+[337] "$P59B"              "$P59E"              "$P59N"             
+[340] "$P59R"              "$P59TYPE"           "$P59V"             
+[343] "$P5B"               "$P5E"               "$P5N"              
+[346] "$P5R"               "$P5TYPE"            "$P5V"              
+[349] "$P60B"              "$P60E"              "$P60N"             
+[352] "$P60R"              "$P60TYPE"           "$P60V"             
+[355] "$P61B"              "$P61E"              "$P61N"             
+[358] "$P61R"              "$P61TYPE"           "$P61V"             
+[361] "$P6B"               "$P6E"               "$P6N"              
+[364] "$P6R"               "$P6TYPE"            "$P6V"              
+[367] "$P7B"               "$P7E"               "$P7N"              
+[370] "$P7R"               "$P7TYPE"            "$P7V"              
+[373] "$P8B"               "$P8E"               "$P8N"              
+[376] "$P8R"               "$P8TYPE"            "$P8V"              
+[379] "$P9B"               "$P9E"               "$P9N"              
+[382] "$P9R"               "$P9TYPE"            "$P9V"              
+[385] "$PAR"               "$PROJ"              "$SPILLOVER"        
+[388] "$TIMESTEP"          "$TOT"               "$VOL"              
+[391] "APPLY COMPENSATION" "CHARSET"            "CREATOR"           
+[394] "FCSversion"         "FILENAME"           "FSC ASF"           
+[397] "GROUPNAME"          "GUID"               "LASER1ASF"         
+[400] "LASER1DELAY"        "LASER1NAME"         "LASER2ASF"         
+[403] "LASER2DELAY"        "LASER2NAME"         "LASER3ASF"         
+[406] "LASER3DELAY"        "LASER3NAME"         "LASER4ASF"         
+[409] "LASER4DELAY"        "LASER4NAME"         "ORIGINALGUID"      
+[412] "P10DISPLAY"         "P11DISPLAY"         "P12DISPLAY"        
+[415] "P13DISPLAY"         "P14DISPLAY"         "P15DISPLAY"        
+[418] "P16DISPLAY"         "P17DISPLAY"         "P18DISPLAY"        
+[421] "P19DISPLAY"         "P1DISPLAY"          "P20DISPLAY"        
+[424] "P21DISPLAY"         "P22DISPLAY"         "P23DISPLAY"        
+[427] "P24DISPLAY"         "P25DISPLAY"         "P26DISPLAY"        
+[430] "P27DISPLAY"         "P28DISPLAY"         "P29DISPLAY"        
+[433] "P2DISPLAY"          "P30DISPLAY"         "P31DISPLAY"        
+[436] "P32DISPLAY"         "P33DISPLAY"         "P34DISPLAY"        
+[439] "P35DISPLAY"         "P36DISPLAY"         "P37DISPLAY"        
+[442] "P38DISPLAY"         "P39DISPLAY"         "P3DISPLAY"         
+[445] "P40DISPLAY"         "P41DISPLAY"         "P42DISPLAY"        
+[448] "P43DISPLAY"         "P44DISPLAY"         "P45DISPLAY"        
+[451] "P46DISPLAY"         "P47DISPLAY"         "P48DISPLAY"        
+[454] "P49DISPLAY"         "P4DISPLAY"          "P50DISPLAY"        
+[457] "P51DISPLAY"         "P52DISPLAY"         "P53DISPLAY"        
+[460] "P54DISPLAY"         "P55DISPLAY"         "P56DISPLAY"        
+[463] "P57DISPLAY"         "P58DISPLAY"         "P59DISPLAY"        
+[466] "P5DISPLAY"          "P60DISPLAY"         "P61DISPLAY"        
+[469] "P6DISPLAY"          "P7DISPLAY"          "P8DISPLAY"         
+[472] "P9DISPLAY"          "THRESHOLD"          "TUBENAME"          
+[475] "USERSETTINGNAME"    "WINDOW EXTENSION"  

names(dl2)

  [1] "$BEGINANALYSIS"     "$BEGINDATA"         "$BEGINSTEXT"       
+  [4] "$BTIM"              "$BYTEORD"           "$CYT"              
+  [7] "$CYTOLIB_VERSION"   "$CYTSN"             "$DATATYPE"         
+ [10] "$DATE"              "$ENDANALYSIS"       "$ENDDATA"          
+ [13] "$ENDSTEXT"          "$ETIM"              "$FIL"              
+ [16] "$INST"              "$MODE"              "$NEXTDATA"         
+ [19] "$OP"                "$P10B"              "$P10E"             
+ [22] "$P10N"              "$P10R"              "$P10TYPE"          
+ [25] "$P10V"              "$P11B"              "$P11E"             
+ [28] "$P11N"              "$P11R"              "$P11S"             
+ [31] "$P11TYPE"           "$P11V"              "$P12B"             
+ [34] "$P12E"              "$P12N"              "$P12R"             
+ [37] "$P12S"              "$P12TYPE"           "$P12V"             
+ [40] "$P13B"              "$P13E"              "$P13N"             
+ [43] "$P13R"              "$P13S"              "$P13TYPE"          
+ [46] "$P13V"              "$P14B"              "$P14E"             
+ [49] "$P14N"              "$P14R"              "$P14S"             
+ [52] "$P14TYPE"           "$P14V"              "$P15B"             
+ [55] "$P15E"              "$P15N"              "$P15R"             
+ [58] "$P15S"              "$P15TYPE"           "$P15V"             
+ [61] "$P16B"              "$P16E"              "$P16N"             
+ [64] "$P16R"              "$P16S"              "$P16TYPE"          
+ [67] "$P16V"              "$P17B"              "$P17E"             
+ [70] "$P17N"              "$P17R"              "$P17S"             
+ [73] "$P17TYPE"           "$P17V"              "$P18B"             
+ [76] "$P18E"              "$P18N"              "$P18R"             
+ [79] "$P18S"              "$P18TYPE"           "$P18V"             
+ [82] "$P19B"              "$P19E"              "$P19N"             
+ [85] "$P19R"              "$P19S"              "$P19TYPE"          
+ [88] "$P19V"              "$P1B"               "$P1E"              
+ [91] "$P1N"               "$P1R"               "$P1TYPE"           
+ [94] "$P20B"              "$P20E"              "$P20N"             
+ [97] "$P20R"              "$P20S"              "$P20TYPE"          
+[100] "$P20V"              "$P21B"              "$P21E"             
+[103] "$P21N"              "$P21R"              "$P21S"             
+[106] "$P21TYPE"           "$P21V"              "$P22B"             
+[109] "$P22E"              "$P22N"              "$P22R"             
+[112] "$P22S"              "$P22TYPE"           "$P22V"             
+[115] "$P23B"              "$P23E"              "$P23N"             
+[118] "$P23R"              "$P23S"              "$P23TYPE"          
+[121] "$P23V"              "$P24B"              "$P24E"             
+[124] "$P24N"              "$P24R"              "$P24S"             
+[127] "$P24TYPE"           "$P24V"              "$P25B"             
+[130] "$P25E"              "$P25N"              "$P25R"             
+[133] "$P25S"              "$P25TYPE"           "$P25V"             
+[136] "$P26B"              "$P26E"              "$P26N"             
+[139] "$P26R"              "$P26S"              "$P26TYPE"          
+[142] "$P26V"              "$P27B"              "$P27E"             
+[145] "$P27N"              "$P27R"              "$P27S"             
+[148] "$P27TYPE"           "$P27V"              "$P28B"             
+[151] "$P28E"              "$P28N"              "$P28R"             
+[154] "$P28S"              "$P28TYPE"           "$P28V"             
+[157] "$P29B"              "$P29E"              "$P29N"             
+[160] "$P29R"              "$P29S"              "$P29TYPE"          
+[163] "$P29V"              "$P2B"               "$P2E"              
+[166] "$P2N"               "$P2R"               "$P2TYPE"           
+[169] "$P2V"               "$P30B"              "$P30E"             
+[172] "$P30N"              "$P30R"              "$P30S"             
+[175] "$P30TYPE"           "$P30V"              "$P31B"             
+[178] "$P31E"              "$P31N"              "$P31R"             
+[181] "$P31S"              "$P31TYPE"           "$P31V"             
+[184] "$P32B"              "$P32E"              "$P32N"             
+[187] "$P32R"              "$P32S"              "$P32TYPE"          
+[190] "$P32V"              "$P33B"              "$P33E"             
+[193] "$P33N"              "$P33R"              "$P33S"             
+[196] "$P33TYPE"           "$P33V"              "$P34B"             
+[199] "$P34E"              "$P34N"              "$P34R"             
+[202] "$P34S"              "$P34TYPE"           "$P34V"             
+[205] "$P35B"              "$P35E"              "$P35N"             
+[208] "$P35R"              "$P35S"              "$P35TYPE"          
+[211] "$P35V"              "$P36B"              "$P36E"             
+[214] "$P36N"              "$P36R"              "$P36S"             
+[217] "$P36TYPE"           "$P36V"              "$P37B"             
+[220] "$P37E"              "$P37N"              "$P37R"             
+[223] "$P37S"              "$P37TYPE"           "$P37V"             
+[226] "$P38B"              "$P38E"              "$P38N"             
+[229] "$P38R"              "$P38S"              "$P38TYPE"          
+[232] "$P38V"              "$P39B"              "$P39E"             
+[235] "$P39N"              "$P39R"              "$P39S"             
+[238] "$P39TYPE"           "$P39V"              "$P3B"              
+[241] "$P3E"               "$P3N"               "$P3R"              
+[244] "$P3TYPE"            "$P3V"               "$P40B"             
+[247] "$P40E"              "$P40N"              "$P40R"             
+[250] "$P40S"              "$P40TYPE"           "$P40V"             
+[253] "$P41B"              "$P41E"              "$P41N"             
+[256] "$P41R"              "$P41S"              "$P41TYPE"          
+[259] "$P41V"              "$P42B"              "$P42E"             
+[262] "$P42N"              "$P42R"              "$P42S"             
+[265] "$P42TYPE"           "$P42V"              "$P43B"             
+[268] "$P43E"              "$P43N"              "$P43R"             
+[271] "$P43TYPE"           "$P43V"              "$P4B"              
+[274] "$P4E"               "$P4N"               "$P4R"              
+[277] "$P4TYPE"            "$P4V"               "$P5B"              
+[280] "$P5E"               "$P5N"               "$P5R"              
+[283] "$P5TYPE"            "$P5V"               "$P6B"              
+[286] "$P6E"               "$P6N"               "$P6R"              
+[289] "$P6TYPE"            "$P6V"               "$P7B"              
+[292] "$P7E"               "$P7N"               "$P7R"              
+[295] "$P7TYPE"            "$P7V"               "$P8B"              
+[298] "$P8E"               "$P8N"               "$P8R"              
+[301] "$P8TYPE"            "$P8V"               "$P9B"              
+[304] "$P9E"               "$P9N"               "$P9R"              
+[307] "$P9TYPE"            "$P9V"               "$PAR"              
+[310] "$PROJ"              "$SPILLOVER"         "$TIMESTEP"         
+[313] "$TOT"               "$VOL"               "APPLY COMPENSATION"
+[316] "CHARSET"            "CREATOR"            "FCSversion"        
+[319] "FILENAME"           "flowCore_$P10Rmax"  "flowCore_$P10Rmin" 
+[322] "flowCore_$P11Rmax"  "flowCore_$P11Rmin"  "flowCore_$P12Rmax" 
+[325] "flowCore_$P12Rmin"  "flowCore_$P13Rmax"  "flowCore_$P13Rmin" 
+[328] "flowCore_$P14Rmax"  "flowCore_$P14Rmin"  "flowCore_$P15Rmax" 
+[331] "flowCore_$P15Rmin"  "flowCore_$P16Rmax"  "flowCore_$P16Rmin" 
+[334] "flowCore_$P17Rmax"  "flowCore_$P17Rmin"  "flowCore_$P18Rmax" 
+[337] "flowCore_$P18Rmin"  "flowCore_$P19Rmax"  "flowCore_$P19Rmin" 
+[340] "flowCore_$P1Rmax"   "flowCore_$P1Rmin"   "flowCore_$P20Rmax" 
+[343] "flowCore_$P20Rmin"  "flowCore_$P21Rmax"  "flowCore_$P21Rmin" 
+[346] "flowCore_$P22Rmax"  "flowCore_$P22Rmin"  "flowCore_$P23Rmax" 
+[349] "flowCore_$P23Rmin"  "flowCore_$P24Rmax"  "flowCore_$P24Rmin" 
+[352] "flowCore_$P25Rmax"  "flowCore_$P25Rmin"  "flowCore_$P26Rmax" 
+[355] "flowCore_$P26Rmin"  "flowCore_$P27Rmax"  "flowCore_$P27Rmin" 
+[358] "flowCore_$P28Rmax"  "flowCore_$P28Rmin"  "flowCore_$P29Rmax" 
+[361] "flowCore_$P29Rmin"  "flowCore_$P2Rmax"   "flowCore_$P2Rmin"  
+[364] "flowCore_$P30Rmax"  "flowCore_$P30Rmin"  "flowCore_$P31Rmax" 
+[367] "flowCore_$P31Rmin"  "flowCore_$P32Rmax"  "flowCore_$P32Rmin" 
+[370] "flowCore_$P33Rmax"  "flowCore_$P33Rmin"  "flowCore_$P34Rmax" 
+[373] "flowCore_$P34Rmin"  "flowCore_$P35Rmax"  "flowCore_$P35Rmin" 
+[376] "flowCore_$P36Rmax"  "flowCore_$P36Rmin"  "flowCore_$P37Rmax" 
+[379] "flowCore_$P37Rmin"  "flowCore_$P38Rmax"  "flowCore_$P38Rmin" 
+[382] "flowCore_$P39Rmax"  "flowCore_$P39Rmin"  "flowCore_$P3Rmax"  
+[385] "flowCore_$P3Rmin"   "flowCore_$P40Rmax"  "flowCore_$P40Rmin" 
+[388] "flowCore_$P41Rmax"  "flowCore_$P41Rmin"  "flowCore_$P42Rmax" 
+[391] "flowCore_$P42Rmin"  "flowCore_$P43Rmax"  "flowCore_$P43Rmin" 
+[394] "flowCore_$P4Rmax"   "flowCore_$P4Rmin"   "flowCore_$P5Rmax"  
+[397] "flowCore_$P5Rmin"   "flowCore_$P6Rmax"   "flowCore_$P6Rmin"  
+[400] "flowCore_$P7Rmax"   "flowCore_$P7Rmin"   "flowCore_$P8Rmax"  
+[403] "flowCore_$P8Rmin"   "flowCore_$P9Rmax"   "flowCore_$P9Rmin"  
+[406] "FSC ASF"            "GROUPNAME"          "GUID"              
+[409] "LASER1ASF"          "LASER1DELAY"        "LASER1NAME"        
+[412] "LASER2ASF"          "LASER2DELAY"        "LASER2NAME"        
+[415] "LASER3ASF"          "LASER3DELAY"        "LASER3NAME"        
+[418] "LASER4ASF"          "LASER4DELAY"        "LASER4NAME"        
+[421] "LASER5ASF"          "LASER5DELAY"        "LASER5NAME"        
+[424] "ORIGINALGUID"       "P10DISPLAY"         "P11DISPLAY"        
+[427] "P12DISPLAY"         "P13DISPLAY"         "P14DISPLAY"        
+[430] "P15DISPLAY"         "P16DISPLAY"         "P17DISPLAY"        
+[433] "P18DISPLAY"         "P19DISPLAY"         "P1DISPLAY"         
+[436] "P20DISPLAY"         "P21DISPLAY"         "P22DISPLAY"        
+[439] "P23DISPLAY"         "P24DISPLAY"         "P25DISPLAY"        
+[442] "P26DISPLAY"         "P27DISPLAY"         "P28DISPLAY"        
+[445] "P29DISPLAY"         "P2DISPLAY"          "P30DISPLAY"        
+[448] "P31DISPLAY"         "P32DISPLAY"         "P33DISPLAY"        
+[451] "P34DISPLAY"         "P35DISPLAY"         "P36DISPLAY"        
+[454] "P37DISPLAY"         "P38DISPLAY"         "P39DISPLAY"        
+[457] "P3DISPLAY"          "P40DISPLAY"         "P41DISPLAY"        
+[460] "P42DISPLAY"         "P43DISPLAY"         "P4DISPLAY"         
+[463] "P5DISPLAY"          "P6DISPLAY"          "P7DISPLAY"         
+[466] "P8DISPLAY"          "P9DISPLAY"          "THRESHOLD"         
+[469] "transformation"     "TUBENAME"           "USERSETTINGNAME"   
+[472] "WINDOW EXTENSION"  

glue("file1: description list contains {length(dl1)} keywords")

file1: description list contains 476 keywords

glue("file2: description list contains {length(dl2)} keywords")

file2: description list contains 472 keywords


Metadata


Eight metadata values changed:

init1 <- dl1[1:19]
init2 <- dl2[1:19]
x1 <- init1 %>% enframe() %>% unnest(value)
x2 <- init2 %>% enframe() %>% unnest(value)
df <- inner_join(x1, x2, by = "name")   # value.x comes from file1, value.y from file2
df %>% filter(value.x != value.y) %>% mykable()
name | value.x | value.y
$BEGINDATA | 33312 | 18357
$BTIM | 13:55:29.85 | 21:30:10.27
$CYTSN | V0333 | U1368
$DATE | 04-Aug-2025 | 26-Jul-2025
$ENDDATA | 57711 | 35556
$ETIM | 13:55:57.02 | 21:31:01.86
$FIL | CellCounts4L_AB_05-ND050-05.fcs | Ctrl.fcs
$INST | UMBC | Cytekbio
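The comparison above follows a reusable tidyverse pattern: `enframe()` turns a named list into a name/value tibble, `inner_join()` pairs entries present in both lists, and `filter()` keeps the mismatches. A minimal standalone sketch, using two small hypothetical keyword lists (the values below are made up, not read from the files):

```r
library(tidyverse)

# Hypothetical mini keyword lists standing in for keyword(flow_frame1/2)
kw1 <- list("$DATE" = "04-Aug-2025", "$MODE" = "L", "$PAR" = "61")
kw2 <- list("$DATE" = "26-Jul-2025", "$MODE" = "L", "$PAR" = "43")

x1 <- enframe(kw1) %>% unnest(value)   # name/value tibble for file 1
x2 <- enframe(kw2) %>% unnest(value)   # name/value tibble for file 2
inner_join(x1, x2, by = "name") %>%    # pair keywords present in both lists
    filter(value.x != value.y)         # keep only the ones whose values differ
```

Keywords present in only one file are silently dropped by `inner_join()`; an `anti_join()` in each direction would surface those.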

Observables


Information about observables:

# dl1[20:384] %>% names()  # Each observable has 6 keywords ($PnB/E/N/R/TYPE/V), except Time, which has only 5
parse_observables <- function(description_list) {
    df <- description_list %>%
        enframe("keyword") %>%
        # Split e.g. "$P10B" into the parameter index (i = "10") and the letter code ("B")
        separate_wider_regex(keyword, c("\\$P", i = "[0-9]+", letters = "[A-Z]+"), cols_remove = FALSE) %>%
        unnest(value) %>%
        pivot_wider(names_from = letters, values_from = value, id_cols = i) %>%
        mutate(i = as.integer(i)) %>%
        arrange(i)
    df
}
df1 <- parse_observables(dl1[20:384])
df2 <- parse_observables(dl2[20:308])
df1
i | B | E | N | R | TYPE | V
1 | 32 | 0,0 | Time | 272140 | Time | NA
2 | 32 | 0,0 | UV1-A | 4194304 | Raw_Fluorescence | 1008
3 | 32 | 0,0 | UV2-A | 4194304 | Raw_Fluorescence | 286
4 | 32 | 0,0 | UV3-A | 4194304 | Raw_Fluorescence | 677
5 | 32 | 0,0 | UV4-A | 4194304 | Raw_Fluorescence | 1022
6 | 32 | 0,0 | UV5-A | 4194304 | Raw_Fluorescence | 616
7 | 32 | 0,0 | UV6-A | 4194304 | Raw_Fluorescence | 506
8 | 32 | 0,0 | UV7-A | 4194304 | Raw_Fluorescence | 661
9 | 32 | 0,0 | UV8-A | 4194304 | Raw_Fluorescence | 514
10 | 32 | 0,0 | UV9-A | 4194304 | Raw_Fluorescence | 710
11 | 32 | 0,0 | UV10-A | 4194304 | Raw_Fluorescence | 377
12 | 32 | 0,0 | UV11-A | 4194304 | Raw_Fluorescence | 469
13 | 32 | 0,0 | UV12-A | 4194304 | Raw_Fluorescence | 434
14 | 32 | 0,0 | UV13-A | 4194304 | Raw_Fluorescence | 564
15 | 32 | 0,0 | UV14-A | 4194304 | Raw_Fluorescence | 975
16 | 32 | 0,0 | UV15-A | 4194304 | Raw_Fluorescence | 737
17 | 32 | 0,0 | UV16-A | 4194304 | Raw_Fluorescence | 1069
18 | 32 | 0,0 | SSC-H | 4194304 | Side_Scatter | 334
19 | 32 | 0,0 | SSC-A | 4194304 | Side_Scatter | 334
20 | 32 | 0,0 | V1-A | 4194304 | Raw_Fluorescence | 352
21 | 32 | 0,0 | V2-A | 4194304 | Raw_Fluorescence | 412
22 | 32 | 0,0 | V3-A | 4194304 | Raw_Fluorescence | 304
23 | 32 | 0,0 | V4-A | 4194304 | Raw_Fluorescence | 217
24 | 32 | 0,0 | V5-A | 4194304 | Raw_Fluorescence | 257
25 | 32 | 0,0 | V6-A | 4194304 | Raw_Fluorescence | 218
26 | 32 | 0,0 | V7-A | 4194304 | Raw_Fluorescence | 303
27 | 32 | 0,0 | V8-A | 4194304 | Raw_Fluorescence | 461
28 | 32 | 0,0 | V9-A | 4194304 | Raw_Fluorescence | 320
29 | 32 | 0,0 | V10-A | 4194304 | Raw_Fluorescence | 359
30 | 32 | 0,0 | V11-A | 4194304 | Raw_Fluorescence | 271
31 | 32 | 0,0 | V12-A | 4194304 | Raw_Fluorescence | 234
32 | 32 | 0,0 | V13-A | 4194304 | Raw_Fluorescence | 236
33 | 32 | 0,0 | V14-A | 4194304 | Raw_Fluorescence | 318
34 | 32 | 0,0 | V15-A | 4194304 | Raw_Fluorescence | 602
35 | 32 | 0,0 | V16-A | 4194304 | Raw_Fluorescence | 372
36 | 32 | 0,0 | FSC-H | 4194304 | Forward_Scatter | 55
37 | 32 | 0,0 | FSC-A | 4194304 | Forward_Scatter | 55
38 | 32 | 0,0 | SSC-B-H | 4194304 | Side_Scatter | 241
39 | 32 | 0,0 | SSC-B-A | 4194304 | Side_Scatter | 241
40 | 32 | 0,0 | B1-A | 4194304 | Raw_Fluorescence | 1013
41 | 32 | 0,0 | B2-A | 4194304 | Raw_Fluorescence | 483
42 | 32 | 0,0 | B3-A | 4194304 | Raw_Fluorescence | 471
43 | 32 | 0,0 | B4-A | 4194304 | Raw_Fluorescence | 473
44 | 32 | 0,0 | B5-A | 4194304 | Raw_Fluorescence | 467
45 | 32 | 0,0 | B6-A | 4194304 | Raw_Fluorescence | 284
46 | 32 | 0,0 | B7-A | 4194304 | Raw_Fluorescence | 531
47 | 32 | 0,0 | B8-A | 4194304 | Raw_Fluorescence | 432
48 | 32 | 0,0 | B9-A | 4194304 | Raw_Fluorescence | 675
49 | 32 | 0,0 | B10-A | 4194304 | Raw_Fluorescence | 490
50 | 32 | 0,0 | B11-A | 4194304 | Raw_Fluorescence | 286
51 | 32 | 0,0 | B12-A | 4194304 | Raw_Fluorescence | 407
52 | 32 | 0,0 | B13-A | 4194304 | Raw_Fluorescence | 700
53 | 32 | 0,0 | B14-A | 4194304 | Raw_Fluorescence | 693
54 | 32 | 0,0 | R1-A | 4194304 | Raw_Fluorescence | 158
55 | 32 | 0,0 | R2-A | 4194304 | Raw_Fluorescence | 245
56 | 32 | 0,0 | R3-A | 4194304 | Raw_Fluorescence | 338
57 | 32 | 0,0 | R4-A | 4194304 | Raw_Fluorescence | 238
58 | 32 | 0,0 | R5-A | 4194304 | Raw_Fluorescence | 191
59 | 32 | 0,0 | R6-A | 4194304 | Raw_Fluorescence | 274
60 | 32 | 0,0 | R7-A | 4194304 | Raw_Fluorescence | 524
61 | 32 | 0,0 | R8-A | 4194304 | Raw_Fluorescence | 243
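The keyword-splitting step inside `parse_observables()` is the least obvious part, so here is a standalone sketch of just that step: `separate_wider_regex()` matches each keyword against a sequence of patterns and turns the named patterns into columns. The three keywords below are illustrative examples, not taken from the files:

```r
library(tidyverse)

# Split FCS keyword names like "$P10B" into the parameter index and the
# trailing letter code, exactly as parse_observables() does above
tibble(keyword = c("$P1N", "$P10B", "$P10TYPE")) %>%
    separate_wider_regex(keyword,
                         c("\\$P", i = "[0-9]+", letters = "[A-Z]+"),
                         cols_remove = FALSE)
```

Unnamed patterns (here the literal `"\\$P"` prefix) are matched and discarded; only the named pieces `i` and `letters` become columns.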

How many detectors of each color do we have?

tmp <- df1 %>% select(N, TYPE) %>% filter(N != "Time")
tmp %>%
    mutate(prefix = str_extract(N, "^[A-Z]+"),
           # Map the detector-name prefix to a laser color; unmatched
           # prefixes (FSC, SSC) are kept as-is via .default
           color = case_match(prefix,
                              "R"  ~ "Red",
                              "UV" ~ "Ultra violet",
                              "V"  ~ "Violet",
                              "B"  ~ "Blue",
                              .default = prefix)) %>%
    count(TYPE, color)
TYPE | color | n
Forward_Scatter | FSC | 2
Raw_Fluorescence | Blue | 14
Raw_Fluorescence | Red | 8
Raw_Fluorescence | Ultra violet | 16
Raw_Fluorescence | Violet | 16
Side_Scatter | SSC | 4

In the second file the parameters are named completely differently: by fluorophore rather than by detector.


df2

i | B | E | N | R | TYPE | V | S
1 | 32 | 0,0 | Time | 516839 | Time | NA | NA
2 | 32 | 0,0 | SSC-W | 4194304 | Side_Scatter | 350 | NA
3 | 32 | 0,0 | SSC-H | 4194304 | Side_Scatter | 350 | NA
4 | 32 | 0,0 | SSC-A | 4194304 | Side_Scatter | 350 | NA
5 | 32 | 0,0 | FSC-W | 4194304 | Forward_Scatter | 64 | NA
6 | 32 | 0,0 | FSC-H | 4194304 | Forward_Scatter | 64 | NA
7 | 32 | 0,0 | FSC-A | 4194304 | Forward_Scatter | 64 | NA
8 | 32 | 0,0 | SSC-B-W | 4194304 | Side_Scatter | 266 | NA
9 | 32 | 0,0 | SSC-B-H | 4194304 | Side_Scatter | 266 | NA
10 | 32 | 0,0 | SSC-B-A | 4194304 | Side_Scatter | 266 | NA
11 | 32 | 0,0 | BUV395-A | 4194304 | Unmixed_Fluorescence | 0 | CD62L
12 | 32 | 0,0 | BUV563-A | 4194304 | Unmixed_Fluorescence | 0 | CD69
13 | 32 | 0,0 | BUV615-A | 4194304 | Unmixed_Fluorescence | 0 | CCR4
14 | 32 | 0,0 | BUV661-A | 4194304 | Unmixed_Fluorescence | 0 | Vd2
15 | 32 | 0,0 | BUV737-A | 4194304 | Unmixed_Fluorescence | 0 | CD38
16 | 32 | 0,0 | BUV805-A | 4194304 | Unmixed_Fluorescence | 0 | CD4
17 | 32 | 0,0 | Pacific Blue-A | 4194304 | Unmixed_Fluorescence | 0 | Dump
18 | 32 | 0,0 | BV480-A | 4194304 | Unmixed_Fluorescence | 0 | CD161
19 | 32 | 0,0 | BV570-A | 4194304 | Unmixed_Fluorescence | 0 | CD16
20 | 32 | 0,0 | BV605-A | 4194304 | Unmixed_Fluorescence | 0 | CD45RA
21 | 32 | 0,0 | BV650-A | 4194304 | Unmixed_Fluorescence | 0 | CD8
22 | 32 | 0,0 | BV711-A | 4194304 | Unmixed_Fluorescence | 0 | Va7.2
23 | 32 | 0,0 | BV750-A | 4194304 | Unmixed_Fluorescence | 0 | IFNg
24 | 32 | 0,0 | BV786-A | 4194304 | Unmixed_Fluorescence | 0 | CCR6
25 | 32 | 0,0 | Alexa Fluor 488-A | 4194304 | Unmixed_Fluorescence | 0 | FoxP3
26 | 32 | 0,0 | Spark Blue 550-A | 4194304 | Unmixed_Fluorescence | 0 | CD3
27 | 32 | 0,0 | Spark Blue 574-A | 4194304 | Unmixed_Fluorescence | 0 | CD45
28 | 32 | 0,0 | RB613-A | 4194304 | Unmixed_Fluorescence | 0 | PD1
29 | 32 | 0,0 | RB705-A | 4194304 | Unmixed_Fluorescence | 0 | CD26
30 | 32 | 0,0 | RB780-A | 4194304 | Unmixed_Fluorescence | 0 | CXCR5
31 | 32 | 0,0 | PE-A | 4194304 | Unmixed_Fluorescence | 0 | ICOS
32 | 32 | 0,0 | PE-Dazzle594-A | 4194304 | Unmixed_Fluorescence | 0 | TNFa
33 | 32 | 0,0 | PE-Cy5-A | 4194304 | Unmixed_Fluorescence | 0 | CXCR3
34 | 32 | 0,0 | PE-Fire 700-A | 4194304 | Unmixed_Fluorescence | 0 | CD127
35 | 32 | 0,0 | PE-Fire 744-A | 4194304 | Unmixed_Fluorescence | 0 | CD25
36 | 32 | 0,0 | PE-Vio770-A | 4194304 | Unmixed_Fluorescence | 0 | HLA-DR
37 | 32 | 0,0 | APC-A | 4194304 | Unmixed_Fluorescence | 0 | CD39
38 | 32 | 0,0 | Alexa Fluor 647-A | 4194304 | Unmixed_Fluorescence | 0 | IL-2
39 | 32 | 0,0 | APC-R700-A | 4194304 | Unmixed_Fluorescence | 0 | CD107a
40 | 32 | 0,0 | Zombie NIR-A | 4194304 | Unmixed_Fluorescence | 0 | Viability
41 | 32 | 0,0 | APC-Fire 750-A | 4194304 | Unmixed_Fluorescence | 0 | CD27
42 | 32 | 0,0 | APC-Fire 810-A | 4194304 | Unmixed_Fluorescence | 0 | CCR7
43 | 32 | 0,0 | AF-A | 4194304 | Unmixed_Fluorescence | 0 | NA
+
+
+
+
+
df2 %>% count(TYPE)
+
+
| TYPE | n |
|----------------------|----|
| Forward_Scatter | 3 |
| Side_Scatter | 6 |
| Time | 1 |
| Unmixed_Fluorescence | 33 |
+
+
+
+

Since file2 is unmixed, we have unmixed fluorescence instead of raw fluorescence.

+

In addition, the second file has the “S” (stain) value for each observable, except for the observables listed below.
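This follows the FCS keyword convention: $PnN names the detector and $PnS the stain/marker, which is why scatter channels have no S. A minimal sketch of a detector-to-marker lookup, using invented toy rows in the style of df2:

```r
library(dplyr)
library(tibble)

# Toy rows only (values invented); the point is the N (detector) -> S (marker)
# relationship, with S absent for scatter channels.
obs <- tribble(
  ~N,         ~S,
  "SSC-A",    NA,
  "BUV395-A", "CD62L",
  "BUV563-A", "CD69"
)
# Detector -> marker named vector, dropping channels with no stain.
markers <- obs |> filter(!is.na(S)) |> with(setNames(S, N))
markers
```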

+
+
colnames(df1)
+
+
[1] "i"    "B"    "E"    "N"    "R"    "TYPE" "V"   
+
+
colnames(df2)
+
+
[1] "i"    "B"    "E"    "N"    "R"    "TYPE" "V"    "S"   
+
+
+

Which observables have missing values for some keyword letter?

+
+
df1 %>%
+    filter(if_any(-i, is.na))   # Show rows that have at least one NA value
+
+
| i | B | E | N | R | TYPE | V |
|---|----|-----|------|--------|------|----|
| 1 | 32 | 0,0 | Time | 272140 | Time | NA |
+
+
+
df2 %>% 
+    filter(if_any(-i, is.na))   # Show rows that have at least one NA value
+
+
| i | B | E | N | R | TYPE | V | S |
|----|----|-----|---------|---------|----------------------|-----|----|
| 1 | 32 | 0,0 | Time | 516839 | Time | NA | NA |
| 2 | 32 | 0,0 | SSC-W | 4194304 | Side_Scatter | 350 | NA |
| 3 | 32 | 0,0 | SSC-H | 4194304 | Side_Scatter | 350 | NA |
| 4 | 32 | 0,0 | SSC-A | 4194304 | Side_Scatter | 350 | NA |
| 5 | 32 | 0,0 | FSC-W | 4194304 | Forward_Scatter | 64 | NA |
| 6 | 32 | 0,0 | FSC-H | 4194304 | Forward_Scatter | 64 | NA |
| 7 | 32 | 0,0 | FSC-A | 4194304 | Forward_Scatter | 64 | NA |
| 8 | 32 | 0,0 | SSC-B-W | 4194304 | Side_Scatter | 266 | NA |
| 9 | 32 | 0,0 | SSC-B-H | 4194304 | Side_Scatter | 266 | NA |
| 10 | 32 | 0,0 | SSC-B-A | 4194304 | Side_Scatter | 266 | NA |
| 43 | 32 | 0,0 | AF-A | 4194304 | Unmixed_Fluorescence | 0 | NA |
+
+
+
+
+
+

Middle metadata

+
+
middle1 <- dl1[385:398] %>% discard_at("$SPILLOVER")
+middle1
+
+
$`$PAR`
+[1] "61"
+
+$`$PROJ`
+[1] "CellCounts4L_AB_05"
+
+$`$TIMESTEP`
+[1] "0.0001"
+
+$`$TOT`
+[1] "100"
+
+$`$VOL`
+[1] "30.31"
+
+$`APPLY COMPENSATION`
+[1] "FALSE"
+
+$CHARSET
+[1] "utf-8"
+
+$CREATOR
+[1] "SpectroFlo 3.3.0"
+
+$FCSversion
+[1] "3"
+
+$FILENAME
+[1] "data/CellCounts4L_AB_05_ND050_05.fcs"
+
+$`FSC ASF`
+[1] "1.21"
+
+$GROUPNAME
+[1] "ND050"
+
+$GUID
+[1] "CellCounts4L_AB_05-ND050-05.fcs"
+
+
+
+
middle2_all <- dl2[309:408] %>% discard_at("$SPILLOVER")
+middle2 <- middle2_all %>% discard_at(~str_starts(., "flowCore_"))
+middle2
+
+
$`$PAR`
+[1] "43"
+
+$`$PROJ`
+[1] "2025_07_26_AB_02"
+
+$`$TIMESTEP`
+[1] "0.0001"
+
+$`$TOT`
+[1] "100"
+
+$`$VOL`
+[1] "60.66"
+
+$`APPLY COMPENSATION`
+[1] "FALSE"
+
+$CHARSET
+[1] "utf-8"
+
+$CREATOR
+[1] "SpectroFlo 3.3.0"
+
+$FCSversion
+[1] "3"
+
+$FILENAME
+[1] "data/AdditionalFCSFiles/2025_07_26_AB_02_NY068_02_Ctrl.fcs"
+
+$`FSC ASF`
+[1] "1.18"
+
+$GROUPNAME
+[1] "NY068_02"
+
+$GUID
+[1] "2025_07_26_AB_02_NY068_02_Ctrl.fcs"
+
+
+

File2 has many keywords that start with the string flowCore_. These look like per-parameter range limits (Rmin/Rmax), presumably recorded by flowCore when the file was written.

+
+
middle2_all %>% keep_at(~str_starts(., "flowCore_"))
+
+
$`flowCore_$P10Rmax`
+[1] "4194303"
+
+$`flowCore_$P10Rmin`
+[1] "0"
+
+$`flowCore_$P11Rmax`
+[1] "4192505.75"
+
+$`flowCore_$P11Rmin`
+[1] "-111.00008392334"
+
+$`flowCore_$P12Rmax`
+[1] "4192505.75"
+
+$`flowCore_$P12Rmin`
+[1] "-111.00008392334"
+
+$`flowCore_$P13Rmax`
+[1] "4192505.75"
+
+$`flowCore_$P13Rmin`
+[1] "-111.00008392334"
+
+$`flowCore_$P14Rmax`
+[1] "4192505.75"
+
+$`flowCore_$P14Rmin`
+[1] "-111.00008392334"
+
+$`flowCore_$P15Rmax`
+[1] "4192505.75"
+
+$`flowCore_$P15Rmin`
+[1] "-111.00008392334"
+
+$`flowCore_$P16Rmax`
+[1] "4192505.75"
+
+$`flowCore_$P16Rmin`
+[1] "-111.00008392334"
+
+$`flowCore_$P17Rmax`
+[1] "4192505.75"
+
+$`flowCore_$P17Rmin`
+[1] "-111.00008392334"
+
+$`flowCore_$P18Rmax`
+[1] "4192505.75"
+
+$`flowCore_$P18Rmin`
+[1] "-111.00008392334"
+
+$`flowCore_$P19Rmax`
+[1] "4192505.75"
+
+$`flowCore_$P19Rmin`
+[1] "-111.00008392334"
+
+$`flowCore_$P1Rmax`
+[1] "506.501251220703"
+
+$`flowCore_$P1Rmin`
+[1] "0"
+
+$`flowCore_$P20Rmax`
+[1] "4192505.75"
+
+$`flowCore_$P20Rmin`
+[1] "-111.00008392334"
+
+$`flowCore_$P21Rmax`
+[1] "4192505.75"
+
+$`flowCore_$P21Rmin`
+[1] "-111.00008392334"
+
+$`flowCore_$P22Rmax`
+[1] "4192505.75"
+
+$`flowCore_$P22Rmin`
+[1] "-111.00008392334"
+
+$`flowCore_$P23Rmax`
+[1] "4192505.75"
+
+$`flowCore_$P23Rmin`
+[1] "-111.00008392334"
+
+$`flowCore_$P24Rmax`
+[1] "4192505.75"
+
+$`flowCore_$P24Rmin`
+[1] "-111.00008392334"
+
+$`flowCore_$P25Rmax`
+[1] "4192505.75"
+
+$`flowCore_$P25Rmin`
+[1] "-111.00008392334"
+
+$`flowCore_$P26Rmax`
+[1] "4192505.75"
+
+$`flowCore_$P26Rmin`
+[1] "-111.00008392334"
+
+$`flowCore_$P27Rmax`
+[1] "4192505.75"
+
+$`flowCore_$P27Rmin`
+[1] "-111.00008392334"
+
+$`flowCore_$P28Rmax`
+[1] "4192505.75"
+
+$`flowCore_$P28Rmin`
+[1] "-111.00008392334"
+
+$`flowCore_$P29Rmax`
+[1] "4192505.75"
+
+$`flowCore_$P29Rmin`
+[1] "-111.00008392334"
+
+$`flowCore_$P2Rmax`
+[1] "4194303"
+
+$`flowCore_$P2Rmin`
+[1] "0"
+
+$`flowCore_$P30Rmax`
+[1] "4192505.75"
+
+$`flowCore_$P30Rmin`
+[1] "-111.00008392334"
+
+$`flowCore_$P31Rmax`
+[1] "4192505.75"
+
+$`flowCore_$P31Rmin`
+[1] "-111.00008392334"
+
+$`flowCore_$P32Rmax`
+[1] "4192505.75"
+
+$`flowCore_$P32Rmin`
+[1] "-111.00008392334"
+
+$`flowCore_$P33Rmax`
+[1] "4192505.75"
+
+$`flowCore_$P33Rmin`
+[1] "-111.00008392334"
+
+$`flowCore_$P34Rmax`
+[1] "4192505.75"
+
+$`flowCore_$P34Rmin`
+[1] "-111.00008392334"
+
+$`flowCore_$P35Rmax`
+[1] "4192505.75"
+
+$`flowCore_$P35Rmin`
+[1] "-111.00008392334"
+
+$`flowCore_$P36Rmax`
+[1] "4192505.75"
+
+$`flowCore_$P36Rmin`
+[1] "-111.00008392334"
+
+$`flowCore_$P37Rmax`
+[1] "4192505.75"
+
+$`flowCore_$P37Rmin`
+[1] "-111.00008392334"
+
+$`flowCore_$P38Rmax`
+[1] "4192505.75"
+
+$`flowCore_$P38Rmin`
+[1] "-111.00008392334"
+
+$`flowCore_$P39Rmax`
+[1] "4192505.75"
+
+$`flowCore_$P39Rmin`
+[1] "-111.00008392334"
+
+$`flowCore_$P3Rmax`
+[1] "4194303"
+
+$`flowCore_$P3Rmin`
+[1] "0"
+
+$`flowCore_$P40Rmax`
+[1] "4192505.75"
+
+$`flowCore_$P40Rmin`
+[1] "-111.00008392334"
+
+$`flowCore_$P41Rmax`
+[1] "4192505.75"
+
+$`flowCore_$P41Rmin`
+[1] "-111.00008392334"
+
+$`flowCore_$P42Rmax`
+[1] "4192505.75"
+
+$`flowCore_$P42Rmin`
+[1] "-111.00008392334"
+
+$`flowCore_$P43Rmax`
+[1] "4194303"
+
+$`flowCore_$P43Rmin`
+[1] "-111"
+
+$`flowCore_$P4Rmax`
+[1] "4194303"
+
+$`flowCore_$P4Rmin`
+[1] "0"
+
+$`flowCore_$P5Rmax`
+[1] "4194303"
+
+$`flowCore_$P5Rmin`
+[1] "0"
+
+$`flowCore_$P6Rmax`
+[1] "4194303"
+
+$`flowCore_$P6Rmin`
+[1] "0"
+
+$`flowCore_$P7Rmax`
+[1] "4194303"
+
+$`flowCore_$P7Rmin`
+[1] "0"
+
+$`flowCore_$P8Rmax`
+[1] "4194303"
+
+$`flowCore_$P8Rmin`
+[1] "0"
+
+$`flowCore_$P9Rmax`
+[1] "4194303"
+
+$`flowCore_$P9Rmin`
+[1] "0"
+
+
+
+
+
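If that guess is right, the flowCore_ keywords can be tidied into one row per parameter with the same separate/pivot pattern used for the observables. A sketch with a toy subset of the keywords listed above (the Rmin/Rmax interpretation is an assumption):

```r
library(dplyr)
library(tidyr)
library(tibble)

# Toy subset of the flowCore_$P<i>R{min,max} keywords shown above.
kw <- list(
  `flowCore_$P1Rmax`  = "506.501251220703",
  `flowCore_$P1Rmin`  = "0",
  `flowCore_$P11Rmax` = "4192505.75",
  `flowCore_$P11Rmin` = "-111.00008392334"
)
ranges <- kw |>
  enframe() |>
  unnest(value) |>
  # Split the keyword into the parameter index and the limit name (Rmin/Rmax).
  separate_wider_regex(name, c("flowCore_\\$P", i = "[0-9]+", limit = "R[a-z]+")) |>
  mutate(i = as.integer(i), value = as.numeric(value)) |>
  pivot_wider(id_cols = i, names_from = limit, values_from = value)
ranges
```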

Laser metadata

+
+
laser1 <- dl1 %>% keep_at(~ str_starts(., "LASER"))
+laser2 <- dl2 %>% keep_at(~ str_starts(., "LASER"))
+glue("file1: {length(laser1)} laser keywords")
+
+
file1: 12 laser keywords
+
+
glue("file2: {length(laser2)} laser keywords")
+
+
file2: 15 laser keywords
+
+
+
+
parse_lasers <- function(L) { 
+    L %>% enframe() %>% unnest(value) %>% 
+        separate_wider_regex(cols = name, c("LASER", i="[0-9]", col="[A-Z]+")) %>% 
+        pivot_wider(id_cols = i, names_from = col, values_from = value)
+}
+
+
+
parse_lasers(laser1)
+
+
Lasers of file1.

| i | ASF | DELAY | NAME |
|---|------|---------|--------|
| 1 | 1.09 | -19.525 | Violet |
| 2 | 1.14 | 0 | Blue |
| 3 | 1.02 | 20.15 | Red |
| 4 | 0.92 | 40.725 | UV |
+
+
+
+
+
parse_lasers(laser2)
+
+
Lasers of file2.

| i | ASF | DELAY | NAME |
|---|------|---------|-------------|
| 1 | 1.12 | -39.65 | YellowGreen |
| 2 | 1.09 | -19.825 | Violet |
| 3 | 1.15 | 0 | Blue |
| 4 | 1.04 | 20.3 | Red |
| 5 | 1.09 | 39.8 | UV |
+
+
+
+
+
+

Display

+
+
display1 <- dl1 %>% keep_at(~ str_ends(., "DISPLAY"))
+display2 <- dl2 %>% keep_at(~ str_ends(., "DISPLAY"))
+bind_rows(
+    tibble(file="file1", display=as.character(display1)),
+    tibble(file="file2", display=as.character(display2))
+) %>% table()
+
+
       display
+file    LIN LOG
+  file1   6  55
+  file2   9  34
+
+
+
+
+

Last few keywords

+
+
last1 <- dl1[473:476]
+last2 <- dl2[468:472]
+full_join(
+    enframe(last1) %>% unnest(value),
+    enframe(last2) %>% unnest(value),
+    by="name"
+)
+
+
| name | value.x | value.y |
|------------------|----------------|--------------------|
| THRESHOLD | (FSC,50000) | (FSC,600000) |
| TUBENAME | 05 | Ctrl |
| USERSETTINGNAME | DTR_CellCounts | DR_2025_AB_Nuclear |
| WINDOW EXTENSION | 3 | 3 |
| transformation | NA | custom |
+
+
+
+

The transformation keyword has been added in the second file.

+
+
+
+
+

Problem 2

+
+

Today’s files were for spectral .fcs files from a Cytek Aurora within a subfolder in data you will also find a conventional flow cytometry file (2025-10_22…). Similarly, explore and see if you find any major differences (beyond the different detector or fluorophore names which will vary based on antibody panel used, etc)

+
+
+
filename3 <- "data/AdditionalFCSFiles/2025-10_22_Contrad.fcs"
+
+
+

Load file

+
+
flow_frame3 <- read.FCS(filename=filename3, transformation = FALSE, truncate_max_range = FALSE)
+flow_frame3
+
+
flowFrame object '5855c3b6-7adc-45da-8921-191c97c559f5'
+with 1852 cells and 33 observables:
+       name   desc     range  minRange  maxRange
+$P1    Time     NA    262144       0.0    262143
+$P2   FSC-A     NA    262144       0.0    262143
+$P3   FSC-H     NA    262144       0.0    262143
+$P4   FSC-W     NA    262144       0.0    262143
+$P5   SSC-A     NA    262144     -44.8    262143
+...     ...    ...       ...       ...       ...
+$P29 Y615-H     NA    262144      0.00    262143
+$P30 Y710-A     NA    262144   -111.00    262143
+$P31 Y710-H     NA    262144      0.00    262143
+$P32 Y780-A     NA    262144    -87.33    262143
+$P33 Y780-H     NA    262144      0.00    262143
+330 keywords are stored in the 'description' slot
+
+
+
+
+

exprs

+
+
e3 <- exprs(flow_frame3)
+
+
+
glue("file3: expr has {ncol(e3)} observables and {nrow(e3)} cells\n")
+
+
file3: expr has 33 observables and 1852 cells
+
+
+
+
+

Parameters

+
+

varMetadata

+
+
parameters(flow_frame3)@varMetadata
+
+
| | labelDescription |
|----------|----------------------------------------------|
| name | Name of Parameter |
| desc | Description of Parameter |
| range | Range of Parameter |
| minRange | Minimum Parameter Value after Transforamtion |
| maxRange | Maximum Parameter Value after Transformation |
+
+
+
+
+
+

data

+
+
x <- as_tibble(parameters(flow_frame3)@data, rownames="id") %>% select(-desc)
+mykable(x)
+
+
| id | name | range | minRange | maxRange |
|------|--------|--------|----------|----------|
| $P1 | Time | 262144 | 0.00 | 262143 |
| $P2 | FSC-A | 262144 | 0.00 | 262143 |
| $P3 | FSC-H | 262144 | 0.00 | 262143 |
| $P4 | FSC-W | 262144 | 0.00 | 262143 |
| $P5 | SSC-A | 262144 | -44.80 | 262143 |
| $P6 | SSC-H | 262144 | 0.00 | 262143 |
| $P7 | SSC-W | 262144 | 0.00 | 262143 |
| $P8 | R670-A | 262144 | -111.00 | 262143 |
| $P9 | R670-H | 262144 | 0.00 | 262143 |
| $P10 | R730-A | 262144 | -53.95 | 262143 |
| $P11 | R730-H | 262144 | 0.00 | 262143 |
| $P12 | R780-A | 262144 | -77.35 | 262143 |
| $P13 | R780-H | 262144 | 0.00 | 262143 |
| $P14 | B530-A | 262144 | -67.20 | 262143 |
| $P15 | B530-H | 262144 | 0.00 | 262143 |
| $P16 | B710-A | 262144 | -111.00 | 262143 |
| $P17 | B710-H | 262144 | 0.00 | 262143 |
| $P18 | V450-A | 262144 | -64.35 | 262143 |
| $P19 | V450-H | 262144 | 0.00 | 262143 |
| $P20 | V525-A | 262144 | -70.85 | 262143 |
| $P21 | V525-H | 262144 | 0.00 | 262143 |
| $P22 | V610-A | 262144 | -111.00 | 262143 |
| $P23 | V610-H | 262144 | 0.00 | 262143 |
| $P24 | V670-A | 262144 | -111.00 | 262143 |
| $P25 | V670-H | 262144 | 0.00 | 262143 |
| $P26 | Y590-A | 262144 | -111.00 | 262143 |
| $P27 | Y590-H | 262144 | 0.00 | 262143 |
| $P28 | Y615-A | 262144 | -111.00 | 262143 |
| $P29 | Y615-H | 262144 | 0.00 | 262143 |
| $P30 | Y710-A | 262144 | -111.00 | 262143 |
| $P31 | Y710-H | 262144 | 0.00 | 262143 |
| $P32 | Y780-A | 262144 | -87.33 | 262143 |
| $P33 | Y780-H | 262144 | 0.00 | 262143 |
+
+
+
+
+

dimLabels and classVersion

+
+
flow_frame3@parameters@dimLabels
+
+
[1] "rowNames"    "columnNames"
+
+
flow_frame3@parameters@.__classVersion__
+
+
AnnotatedDataFrame 
+           "1.1.0" 
+
+
+
+
+
+

Description

+
+
dl3 <- keyword(flow_frame3)
+names(dl3) %>% head(n=40)
+
+
 [1] "FCSversion"         "$BEGINANALYSIS"     "$ENDANALYSIS"      
+ [4] "$BEGINSTEXT"        "$ENDSTEXT"          "$BEGINDATA"        
+ [7] "$ENDDATA"           "$FIL"               "$SYS"              
+[10] "$TOT"               "$PAR"               "$MODE"             
+[13] "$BYTEORD"           "$DATATYPE"          "$NEXTDATA"         
+[16] "CREATOR"            "TUBE NAME"          "$SRC"              
+[19] "EXPERIMENT NAME"    "GUID"               "$DATE"             
+[22] "$BTIM"              "$ETIM"              "SETTINGS"          
+[25] "WINDOW EXTENSION"   "EXPORT USER NAME"   "EXPORT TIME"       
+[28] "FSC ASF"            "AUTOBS"             "$INST"             
+[31] "$TIMESTEP"          "SPILL"              "APPLY COMPENSATION"
+[34] "THRESHOLD"          "$P1N"               "$P1R"              
+[37] "$P1B"               "$P1E"               "$P1G"              
+[40] "P1BS"              
+
+
names(dl3) %>% tail(n=5)
+
+
[1] "P33BS"             "P33MS"             "CST BEADS EXPIRED"
+[4] "FILENAME"          "ORIGINALGUID"     
+
+
+
+
glue("file3: description list contains {length(dl3)} keywords")
+
+
file3: description list contains 330 keywords
+
+
+
+

Initial metadata

+
+
init3 <- dl3[1:34]
+init3
+
+
$FCSversion
+[1] "3"
+
+$`$BEGINANALYSIS`
+[1] "0"
+
+$`$ENDANALYSIS`
+[1] "0"
+
+$`$BEGINSTEXT`
+[1] "0"
+
+$`$ENDSTEXT`
+[1] "0"
+
+$`$BEGINDATA`
+[1] "4378"
+
+$`$ENDDATA`
+[1] "248841             "
+
+$`$FIL`
+[1] "44633.fcs"
+
+$`$SYS`
+[1] "Windows 7 6.1"
+
+$`$TOT`
+[1] "1852               "
+
+$`$PAR`
+[1] "33"
+
+$`$MODE`
+[1] "L"
+
+$`$BYTEORD`
+[1] "4,3,2,1"
+
+$`$DATATYPE`
+[1] "F"
+
+$`$NEXTDATA`
+[1] "0"
+
+$CREATOR
+[1] "BD FACSDiva Software Version 8.0.2"
+
+$`TUBE NAME`
+[1] "WE_22_DR_00"
+
+$`$SRC`
+[1] "2025-10"
+
+$`EXPERIMENT NAME`
+[1] "QC_2025-10"
+
+$GUID
+[1] "5855c3b6-7adc-45da-8921-191c97c559f5"
+
+$`$DATE`
+[1] "22-OCT-2025"
+
+$`$BTIM`
+[1] "08:44:52"
+
+$`$ETIM`
+[1] "09:16:22"
+
+$SETTINGS
+[1] "Cytometer"
+
+$`WINDOW EXTENSION`
+[1] "0.00"
+
+$`EXPORT USER NAME`
+[1] "Administrator"
+
+$`EXPORT TIME`
+[1] "22-OCT-2025-08:44:51"
+
+$`FSC ASF`
+[1] "0.77"
+
+$AUTOBS
+[1] "TRUE"
+
+$`$INST`
+[1] " "
+
+$`$TIMESTEP`
+[1] "0.01"
+
+$SPILL
+      R670-A R730-A R780-A B530-A B710-A V450-A V525-A V610-A V670-A Y590-A
+ [1,]      1      0      0      0      0      0      0      0      0      0
+ [2,]      0      1      0      0      0      0      0      0      0      0
+ [3,]      0      0      1      0      0      0      0      0      0      0
+ [4,]      0      0      0      1      0      0      0      0      0      0
+ [5,]      0      0      0      0      1      0      0      0      0      0
+ [6,]      0      0      0      0      0      1      0      0      0      0
+ [7,]      0      0      0      0      0      0      1      0      0      0
+ [8,]      0      0      0      0      0      0      0      1      0      0
+ [9,]      0      0      0      0      0      0      0      0      1      0
+[10,]      0      0      0      0      0      0      0      0      0      1
+[11,]      0      0      0      0      0      0      0      0      0      0
+[12,]      0      0      0      0      0      0      0      0      0      0
+[13,]      0      0      0      0      0      0      0      0      0      0
+      Y615-A Y710-A Y780-A
+ [1,]      0      0      0
+ [2,]      0      0      0
+ [3,]      0      0      0
+ [4,]      0      0      0
+ [5,]      0      0      0
+ [6,]      0      0      0
+ [7,]      0      0      0
+ [8,]      0      0      0
+ [9,]      0      0      0
+[10,]      0      0      0
+[11,]      1      0      0
+[12,]      0      1      0
+[13,]      0      0      1
+
+$`APPLY COMPENSATION`
+[1] "TRUE"
+
+$THRESHOLD
+[1] "FSC,10000"
+
+
+
+
+

Observables

+
+
observable_list3 <- dl3[35:327] 
+
+

Those keywords that start with a dollar sign.

+
+
x <- parse_observables(observable_list3 %>% keep_at(~ str_starts(., "\\$")))
+x
+
+
| i | N | R | B | E | G | V |
|----|--------|--------|----|-----|------|-----|
| 1 | Time | 262144 | 32 | 0,0 | 0.01 | NA |
| 2 | FSC-A | 262144 | 32 | 0,0 | 1.0 | 600 |
| 3 | FSC-H | 262144 | 32 | 0,0 | 1.0 | 600 |
| 4 | FSC-W | 262144 | 32 | 0,0 | 1.0 | 600 |
| 5 | SSC-A | 262144 | 32 | 0,0 | 1.0 | 280 |
| 6 | SSC-H | 262144 | 32 | 0,0 | 1.0 | 280 |
| 7 | SSC-W | 262144 | 32 | 0,0 | 1.0 | 280 |
| 8 | R670-A | 262144 | 32 | 0,0 | 1.0 | 511 |
| 9 | R670-H | 262144 | 32 | 0,0 | 1.0 | 511 |
| 10 | R730-A | 262144 | 32 | 0,0 | 1.0 | 515 |
| 11 | R730-H | 262144 | 32 | 0,0 | 1.0 | 515 |
| 12 | R780-A | 262144 | 32 | 0,0 | 1.0 | 459 |
| 13 | R780-H | 262144 | 32 | 0,0 | 1.0 | 459 |
| 14 | B530-A | 262144 | 32 | 0,0 | 1.0 | 498 |
| 15 | B530-H | 262144 | 32 | 0,0 | 1.0 | 498 |
| 16 | B710-A | 262144 | 32 | 0,0 | 1.0 | 525 |
| 17 | B710-H | 262144 | 32 | 0,0 | 1.0 | 525 |
| 18 | V450-A | 262144 | 32 | 0,0 | 1.0 | 473 |
| 19 | V450-H | 262144 | 32 | 0,0 | 1.0 | 473 |
| 20 | V525-A | 262144 | 32 | 0,0 | 1.0 | 422 |
| 21 | V525-H | 262144 | 32 | 0,0 | 1.0 | 422 |
| 22 | V610-A | 262144 | 32 | 0,0 | 1.0 | 542 |
| 23 | V610-H | 262144 | 32 | 0,0 | 1.0 | 542 |
| 24 | V670-A | 262144 | 32 | 0,0 | 1.0 | 461 |
| 25 | V670-H | 262144 | 32 | 0,0 | 1.0 | 461 |
| 26 | Y590-A | 262144 | 32 | 0,0 | 1.0 | 683 |
| 27 | Y590-H | 262144 | 32 | 0,0 | 1.0 | 683 |
| 28 | Y615-A | 262144 | 32 | 0,0 | 1.0 | 662 |
| 29 | Y615-H | 262144 | 32 | 0,0 | 1.0 | 662 |
| 30 | Y710-A | 262144 | 32 | 0,0 | 1.0 | 597 |
| 31 | Y710-H | 262144 | 32 | 0,0 | 1.0 | 597 |
| 32 | Y780-A | 262144 | 32 | 0,0 | 1.0 | 527 |
| 33 | Y780-H | 262144 | 32 | 0,0 | 1.0 | 527 |
+
+
+
+

Instead of TYPE we have G.

+

Those keywords that don’t start with a dollar sign.

+
+
observable_list3 %>% keep_at(~ ! str_starts(., "\\$")) %>% head()
+
+
$P1BS
+[1] "0"
+
+$P1MS
+[1] "0"
+
+$P2DISPLAY
+[1] "LIN"
+
+$P2BS
+[1] "0"
+
+$P2MS
+[1] "0"
+
+$P3DISPLAY
+[1] "LIN"
+
+
+

The DISPLAY keywords were in a different place than in the first two files. Keywords of the form P33BS and P33MS did not appear in the first two files.

+
+
+

Laser metadata

+

No information about lasers.

+
+
+

Last few keywords

+
+
last3 <- dl3[328:330]
+last3
+
+
$`CST BEADS EXPIRED`
+[1] "False"
+
+$FILENAME
+[1] "data/AdditionalFCSFiles/2025-10_22_Contrad.fcs"
+
+$ORIGINALGUID
+[1] "5855c3b6-7adc-45da-8921-191c97c559f5"
+
+
+

Compensation is applied only to the third file.

+
+
c(dl1$`APPLY COMPENSATION`, dl2$`APPLY COMPENSATION`, dl3$`APPLY COMPENSATION`)
+
+
[1] "FALSE" "FALSE" "TRUE" 
+
+
+
+
+
+

All metadata

+
+
all1 <- c(init1, middle1, last1) %>% enframe(value = "file1") %>% unnest(file1)
+all2 <- c(init2, middle2, last2) %>% enframe(value = "file2") %>% unnest(file2)
+all3 <- c(init3, last3) %>% discard_at("SPILL") %>% enframe(value = "file3") %>% unnest(file3)
+
+
+
all <- all1 %>%
+    full_join(all2, by="name") %>%
+    full_join(all3, by="name")
+
+
+
mykable(all)
+
+
| name | file1 | file2 | file3 |
|--------------------|--------------------------------------|------------------------------------------------------------|------------------------------------------------|
| $BEGINANALYSIS | 0 | 0 | 0 |
| $BEGINDATA | 33312 | 18357 | 4378 |
| $BEGINSTEXT | 0 | 0 | 0 |
| $BTIM | 13:55:29.85 | 21:30:10.27 | 08:44:52 |
| $BYTEORD | 4,3,2,1 | 4,3,2,1 | 4,3,2,1 |
| $CYT | Aurora | Aurora | NA |
| $CYTOLIB_VERSION | 2.22.0 | 2.22.0 | NA |
| $CYTSN | V0333 | U1368 | NA |
| $DATATYPE | F | F | F |
| $DATE | 04-Aug-2025 | 26-Jul-2025 | 22-OCT-2025 |
| $ENDANALYSIS | 0 | 0 | 0 |
| $ENDDATA | 57711 | 35556 | 248841 |
| $ENDSTEXT | 0 | 0 | 0 |
| $ETIM | 13:55:57.02 | 21:31:01.86 | 09:16:22 |
| $FIL | CellCounts4L_AB_05-ND050-05.fcs | Ctrl.fcs | 44633.fcs |
| $INST | UMBC | Cytekbio | |
| $MODE | L | L | L |
| $NEXTDATA | 0 | 0 | 0 |
| $OP | David Rach | David Rach | NA |
| $PAR | 61 | 43 | 33 |
| $PROJ | CellCounts4L_AB_05 | 2025_07_26_AB_02 | NA |
| $TIMESTEP | 0.0001 | 0.0001 | 0.01 |
| $TOT | 100 | 100 | 1852 |
| $VOL | 30.31 | 60.66 | NA |
| APPLY COMPENSATION | FALSE | FALSE | TRUE |
| CHARSET | utf-8 | utf-8 | NA |
| CREATOR | SpectroFlo 3.3.0 | SpectroFlo 3.3.0 | BD FACSDiva Software Version 8.0.2 |
| FCSversion | 3 | 3 | 3 |
| FILENAME | data/CellCounts4L_AB_05_ND050_05.fcs | data/AdditionalFCSFiles/2025_07_26_AB_02_NY068_02_Ctrl.fcs | data/AdditionalFCSFiles/2025-10_22_Contrad.fcs |
| FSC ASF | 1.21 | 1.18 | 0.77 |
| GROUPNAME | ND050 | NY068_02 | NA |
| GUID | CellCounts4L_AB_05-ND050-05.fcs | 2025_07_26_AB_02_NY068_02_Ctrl.fcs | 5855c3b6-7adc-45da-8921-191c97c559f5 |
| THRESHOLD | (FSC,50000) | (FSC,600000) | FSC,10000 |
| TUBENAME | 05 | Ctrl | NA |
| USERSETTINGNAME | DTR_CellCounts | DR_2025_AB_Nuclear | NA |
| WINDOW EXTENSION | 3 | 3 | 0.00 |
| transformation | NA | custom | NA |
| $SYS | NA | NA | Windows 7 6.1 |
| TUBE NAME | NA | NA | WE_22_DR_00 |
| $SRC | NA | NA | 2025-10 |
| EXPERIMENT NAME | NA | NA | QC_2025-10 |
| SETTINGS | NA | NA | Cytometer |
| EXPORT USER NAME | NA | NA | Administrator |
| EXPORT TIME | NA | NA | 22-OCT-2025-08:44:51 |
| AUTOBS | NA | NA | TRUE |
| CST BEADS EXPIRED | NA | NA | False |
| ORIGINALGUID | NA | NA | 5855c3b6-7adc-45da-8921-191c97c559f5 |
+
+
+

Sample volume in nanoliters is not given for the last file.

+
+
all %>% filter(name == "$VOL")
+
+
| name | file1 | file2 | file3 |
|------|-------|-------|-------|
| $VOL | 30.31 | 60.66 | NA |
+
+
+
+

The last file has a much larger number of cells:

+
+
all %>% filter(name == "$TOT")
+
+
| name | file1 | file2 | file3 |
|------|-------|-------|-------|
| $TOT | 100 | 100 | 1852 |
+
+
+
+

These differ as well: $TIMESTEP is the length of one Time-channel tick in seconds, FSC ASF is the forward scatter area scaling factor, and THRESHOLD records the acquisition trigger threshold.

+
+
all %>% filter(name %in% c("$TIMESTEP", "FSC ASF", "THRESHOLD"))
+
+
| name | file1 | file2 | file3 |
|-----------|-------------|--------------|-----------|
| $TIMESTEP | 0.0001 | 0.0001 | 0.01 |
| FSC ASF | 1.21 | 1.18 | 0.77 |
| THRESHOLD | (FSC,50000) | (FSC,600000) | FSC,10000 |
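Of these, $TIMESTEP at least has a concrete use: multiplying raw Time values by it converts them to seconds. A sketch with invented tick values (not taken from the course files):

```r
# file3's $TIMESTEP; the tick values below are invented for illustration.
timestep   <- 0.01
time_ticks <- c(0, 150, 18900)
time_sec   <- time_ticks * timestep
time_sec
#> [1]   0.0   1.5 189.0
```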
+
+
+
+
+
+

All spill matrices

+

Matrix dimensions.

+
+
list(file1 = flow_frame1, file2 = flow_frame2, file3 = flow_frame3) %>% map(get_spill) %>% map(dim)
+
+
$file1
+[1] 54 54
+
+$file2
+[1] 33 33
+
+$file3
+[1] 13 13
+
+
+

Matrix column names.

+
+
list(file1 = flow_frame1, file2 = flow_frame2, file3 = flow_frame3) %>% map(get_spill) %>% map(colnames)
+
+
$file1
+ [1] "UV1-A"  "UV2-A"  "UV3-A"  "UV4-A"  "UV5-A"  "UV6-A"  "UV7-A"  "UV8-A" 
+ [9] "UV9-A"  "UV10-A" "UV11-A" "UV12-A" "UV13-A" "UV14-A" "UV15-A" "UV16-A"
+[17] "V1-A"   "V2-A"   "V3-A"   "V4-A"   "V5-A"   "V6-A"   "V7-A"   "V8-A"  
+[25] "V9-A"   "V10-A"  "V11-A"  "V12-A"  "V13-A"  "V14-A"  "V15-A"  "V16-A" 
+[33] "B1-A"   "B2-A"   "B3-A"   "B4-A"   "B5-A"   "B6-A"   "B7-A"   "B8-A"  
+[41] "B9-A"   "B10-A"  "B11-A"  "B12-A"  "B13-A"  "B14-A"  "R1-A"   "R2-A"  
+[49] "R3-A"   "R4-A"   "R5-A"   "R6-A"   "R7-A"   "R8-A"  
+
+$file2
+ [1] "BUV395-A"          "BUV563-A"          "BUV615-A"         
+ [4] "BUV661-A"          "BUV737-A"          "BUV805-A"         
+ [7] "Pacific Blue-A"    "BV480-A"           "BV570-A"          
+[10] "BV605-A"           "BV650-A"           "BV711-A"          
+[13] "BV750-A"           "BV786-A"           "Alexa Fluor 488-A"
+[16] "Spark Blue 550-A"  "Spark Blue 574-A"  "RB613-A"          
+[19] "RB705-A"           "RB780-A"           "PE-A"             
+[22] "PE-Dazzle594-A"    "PE-Cy5-A"          "PE-Fire 700-A"    
+[25] "PE-Fire 744-A"     "PE-Vio770-A"       "APC-A"            
+[28] "Alexa Fluor 647-A" "APC-R700-A"        "Zombie NIR-A"     
+[31] "APC-Fire 750-A"    "APC-Fire 810-A"    "AF-A"             
+
+$file3
+ [1] "R670-A" "R730-A" "R780-A" "B530-A" "B710-A" "V450-A" "V525-A" "V610-A"
+ [9] "V670-A" "Y590-A" "Y615-A" "Y710-A" "Y780-A"
+
+
+

It looks like the spill matrices only contain the non-scatter parameters that end with “-A”.
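That observation can be checked mechanically on a toy parameter list: keep the area (“-A”) channels that are not scatter or Time, which is the set a spill matrix would cover.

```r
library(stringr)

# Toy parameter names in the style of file3; only the filtering logic matters.
params <- c("Time", "FSC-A", "FSC-H", "SSC-A", "R670-A", "R670-H", "Y780-A")
spill_like <- params[str_ends(params, "-A") & !str_starts(params, "FSC|SSC")]
spill_like
#> [1] "R670-A" "Y780-A"
```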

+
+
tidy_spill <- function(m) {
+    m %>% as_tibble() %>% 
+        mutate(rowname = colnames(.), .before=1) %>% 
+        pivot_longer(cols=-rowname, names_to = "colname")
+}
+
+
+
list(file1 = flow_frame1, file2 = flow_frame2, file3 = flow_frame3) %>% 
+    map(get_spill) %>% 
+    map(tidy_spill) %>%
+    bind_rows(.id = "file") %>%
+    ggplot(aes(x = colname, y = rowname, fill=value)) + 
+    geom_raster() +
+    facet_wrap(~file, scale="free")
+
+
+
+
(figure: spill matrix heatmaps for the three files)
+
+
+
+
+

Why are the spill matrices equal to the identity matrix? Shouldn’t at least the conventional flow frame (file 3) need it for computing compensation?
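One way to make the observation precise is to compare each spill matrix to an identity matrix of the same size. A toy 3x3 stand-in is used here rather than the real SPILL slot:

```r
# Toy spill matrix (the real ones come from get_spill() on the flow frames).
spill <- diag(3)
dimnames(spill) <- list(NULL, c("R670-A", "R730-A", "R780-A"))
is_identity <- isTRUE(all.equal(spill, diag(ncol(spill)), check.attributes = FALSE))
is_identity
#> [1] TRUE
```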

+
+
+
+

Problem 3

+
+

If you have access to commercial software, take one of the .fcs files and try to see if you can see similar internal information from within the software. For those without commercial access, try the equivalent process using Floreada.io.

+
+

I used Floreada.io.

+

The number of observables and cells can be found at the bottom of the screen:

(screenshot: observables-and-cells.png)

+

Right-clicking the filename brings up a menu:

(screenshot: menu.png)

+

Not many details:

(screenshot: details.png)

+

A list of all keywords:

(screenshot: keywords.png)

+

Selecting the Export events option saves the actual intensities to a spreadsheet file or opens them in a spreadsheet program.
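Such an export is a plain table of per-cell intensities, so it can be read straight back into R. A hypothetical round trip with invented values and a temporary file:

```r
library(readr)
library(tibble)

# Simulate an exported events table (toy values, made-up columns) and read it back.
tmp <- tempfile(fileext = ".csv")
write_csv(tibble(`FSC-A` = c(12000, 34000), `SSC-A` = c(5600, 7800)), tmp)
events <- read_csv(tmp, show_col_types = FALSE)
dim(events)
#> [1] 2 2
```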

+
+ +
+ + +
+ + + + + \ No newline at end of file diff --git a/course/03_InsideFCSFile/homeworks/jttoivon/solutions_03.qmd b/course/03_InsideFCSFile/homeworks/jttoivon/solutions_03.qmd new file mode 100644 index 0000000..1c6fdf2 --- /dev/null +++ b/course/03_InsideFCSFile/homeworks/jttoivon/solutions_03.qmd @@ -0,0 +1,521 @@ +--- +title: Solutions for week03 +author: Jarkko Toivonen +date: "`r Sys.Date()`" +format: + html: + df-print: kable + embed-resources: true +toc: true +toc-depth: 5 +--- + + +```{r} +#| label: setup +#| output: false +library(flowCore) +library(magrittr) +library(glue) +library(tibble) +library(dplyr) +library(tidyr) +library(stringr) +library(purrr) +library(ggplot2) + +# Default printing causes problems when there are dollar signs in the table. +# In those cases use the below function instead of the default method +mykable <- function(df) knitr::kable(df, escape = TRUE, format = "html") +``` + +Helper function. + +```{r} +get_spill <- function(flow_frame) +{ + dl <- flow_frame@description + if ("$SPILLOVER" %in% names(dl)) { + return(dl[["$SPILLOVER"]]) + } else if ("SPILL" %in% names(dl)) { + return(dl[["SPILL"]]) + } else { + return(NULL) + } + +} +``` + +## Problem 1 + +> Today’s walkthrough focused on a raw spectral flow cytometry file. Within a subfolder in data you will also find an unmixed .fcs file (2025_07_26…). Using what learned to day, investigate it, and see if you can catalog the main differences that occured to the keyword, parameters and exprs. Did any keywords get added, changed, deleted entirely? etc. 
+ + +```{r} +#| label: Filenames +filename1 <- "data/CellCounts4L_AB_05_ND050_05.fcs" +filename2 <- "data/AdditionalFCSFiles/2025_07_26_AB_02_NY068_02_Ctrl.fcs" +``` + + +```{r} +#| label: Load files +flow_frame1 <- read.FCS(filename=filename1, transformation = FALSE, truncate_max_range = FALSE) +flow_frame2 <- read.FCS(filename=filename2, transformation = FALSE, truncate_max_range = FALSE) +``` + +```{r} +flow_frame1 +``` + +MFI = mean/median fluorescence intensity + +```{r} +flow_frame2 +``` + +For the second file the description column is empty only for those scatter things. +In this case of unmixed .fcs file the name column contains the fluorophore or metal name and the desc column +contains the name of the biomarker we are interested in. + +### exprs + +```{r} +e1 <- exprs(flow_frame1) +e2 <- exprs(flow_frame2) +``` + +```{r} +glue("file1: expr has {ncol(e1)} observables and {nrow(e1)} cells\n") +glue("file2: expr has {ncol(e2)} observables and {nrow(e2)} cells\n") +``` + +The column names differ, but the index names of column names are the same: + +```{r} +df1 <- tibble(id = names(colnames(e1)), name1 = colnames(e1)) +df2 <- tibble(id = names(colnames(e2)), name2 = colnames(e2)) +df <- full_join(df1, df2, by="id") +mykable(df) +``` + +### Parameters + +#### varMetadata + +The varMetadata of the parameters are the same for flow frames: + +```{r} +parameters(flow_frame1)@varMetadata +parameters(flow_frame2)@varMetadata +``` + +#### data + +In the parameters@data slot the range columns are the same except for Time. For other columns there are +differences. 
+ +```{r} +x <- full_join( + as_tibble(parameters(flow_frame1)@data, rownames="id") %>% select(-desc), + as_tibble(parameters(flow_frame2)@data, rownames="id"), # %>% select(-desc), + by="id", suffix = c("_1", "_2")) %>% + relocate(id, name_1, name_2, desc_2=desc, range_1, range_2, minRange_1, minRange_2, maxRange_1, maxRange_2) +mykable(x) +``` + +#### dimLabels and classVersion + +No differences in the dimLabels and __classVersion__ slots: + +```{r} +flow_frame1@parameters@dimLabels +flow_frame2@parameters@dimLabels +``` + + +```{r} +flow_frame1@parameters@.__classVersion__ +flow_frame2@parameters@.__classVersion__ +``` + +### Description + +```{r} +dl1 <- keyword(flow_frame1) +dl2 <- keyword(flow_frame2) +``` + +```{r} +names(dl1) +``` + +```{r} +names(dl2) +``` + +```{r} +glue("file1: description list contains {length(dl1)} keywords") +glue("file2: description list contains {length(dl2)} keywords") +``` + +#### Metadata + +Eight metadata values changed: + +```{r} +init1 <- dl1[1:19] +init2 <- dl2[1:19] +x1 <- dl1[1:19] %>% enframe() %>% unnest(value) +x2 <- dl2[1:19] %>% enframe() %>% unnest(value) +df <- inner_join(x1, x2, by="name") +df %>% filter(value.x != value.y) %>% mykable() +``` + +#### Observables + +Information about observables: + +```{r} +#dl1[20:384] %>% names() # Each observable has 6 descriptions, except time has only 5 +parse_observables <- function(description_list) { + df <- description_list %>% + enframe("keyword") %>% + separate_wider_regex(keyword, c("\\$P", i="[0-9]+", letters="[A-Z]+"), cols_remove = FALSE) %>% + unnest(value) %>% + pivot_wider(names_from=letters, values_from=value, id_cols=i) %>% + mutate(i = as.integer(i)) %>% + arrange(i) + df +} +df1 <- parse_observables(dl1[20:384]) +df2 <- parse_observables(dl2[20:308]) +df1 +``` + +How many detectors of each color do we have? 
+ +```{r} +tmp <- df1 %>% select(N, TYPE) %>% filter(N != "Time") +tmp %>% mutate(color = str_extract(N, "^[A-Z]+") %>% + replace_values("R" ~ "Red", + "UV" ~ "Ultra violet", + "V" ~ "Violet", + "B" ~"Blue" + )) %>% + count(TYPE, color) +``` + +In the second file the detectors are named completely differently. + +```{r} +df2 +``` + +```{r} +df2 %>% count(TYPE) +``` + +Since file2 is unmixed we have unmixed fluorescence instead of raw fluorescence. + +The second file has in addition the "S" value for each observable, except for observables listed below. + +```{r} +colnames(df1) +colnames(df2) +``` + +Which observables have missing values for some letter. + +```{r} +df1 %>% + filter(if_any(-i, is.na)) # Show rows that have at least one NA value +df2 %>% + filter(if_any(-i, is.na)) # Show rows that have at least one NA value +``` + +#### Middle metadata + +```{r} +middle1 <- dl1[385:398] %>% discard_at("$SPILLOVER") +middle1 +``` + +```{r} +middle2_all <- dl2[309:408] %>% discard_at("$SPILLOVER") +middle2 <- middle2_all %>% discard_at(~str_starts(., "flowCore_")) +middle2 +``` + +File2 has lots of keywords that start with string flowCore_. Don't know what these are. + +```{r} +middle2_all %>% keep_at(~str_starts(., "flowCore_")) +``` + +#### Laser metadata + + +```{r} +laser1 <- dl1 %>% keep_at(~ str_starts(., "LASER")) +laser2 <- dl2 %>% keep_at(~ str_starts(., "LASER")) +glue("file1: {length(laser1)} laser keywords") +glue("file1: {length(laser1)} laser keywords") +``` + +```{r} +parse_lasers <- function(L) { + L %>% enframe() %>% unnest(value) %>% + separate_wider_regex(cols = name, c("LASER", i="[0-9]", col="[A-Z]+")) %>% + pivot_wider(id_cols = i, names_from = col, values_from = value) +} +``` + +```{r} +#| tbl-cap: Lasers of file1. +parse_lasers(laser1) +``` + +```{r} +#| tbl-cap: Lasers of file2. 
+parse_lasers(laser2) +``` + +#### Display + + +```{r} +display1 <- dl1 %>% keep_at(~ str_ends(., "DISPLAY")) +display2 <- dl2 %>% keep_at(~ str_ends(., "DISPLAY")) +bind_rows( + tibble(file="file1", display=as.character(display1)), + tibble(file="file2", display=as.character(display2)) +) %>% table() +``` + +#### Last few keywords + + +```{r} +last1 <- dl1[473:476] +last2 <- dl2[468:472] +full_join( + enframe(last1) %>% unnest(value), + enframe(last2) %>% unnest(value), + by="name" +) +``` + +The transformation keyword has been added in the second file. + +## Problem 2 + +> Today’s files were for spectral .fcs files from a Cytek Aurora within a subfolder in data you will also find a conventional flow cytometry file (2025-10_22…). Similarly, explore and see if you find any major differences (beyond the different detector or fluorophore names which will vary based on antibody panel used, etc) + +```{r} +filename3 <- "data/AdditionalFCSFiles/2025-10_22_Contrad.fcs" +``` + +### Load file + +```{r} +flow_frame3 <- read.FCS(filename=filename3, transformation = FALSE, truncate_max_range = FALSE) +flow_frame3 +``` + +### exprs + +```{r} +e3 <- exprs(flow_frame3) +``` + +```{r} +glue("file3: expr has {ncol(e3)} observables and {nrow(e3)} cells\n") +``` + +### Parameters + +#### varMetadata + +```{r} +parameters(flow_frame3)@varMetadata +``` + +#### data + +```{r} +x <- as_tibble(parameters(flow_frame3)@data, rownames="id") %>% select(-desc) +mykable(x) +``` + +#### dimLabels and classVersion + +```{r} +flow_frame3@parameters@dimLabels +flow_frame3@parameters@.__classVersion__ +``` + +### Description + +```{r} +dl3 <- keyword(flow_frame3) +names(dl3) %>% head(n=40) +names(dl3) %>% tail(n=5) +``` + + + +```{r} +glue("file3: description list contains {length(dl3)} keywords") +``` + +#### Initial metadata + +```{r} +init3 <- dl3[1:34] +init3 +``` + +#### Observables + +```{r} +observable_list3 <- dl3[35:327] +``` + +Those keywords that start with dollar sign. 
+ +```{r} +x <- parse_observables(observable_list3 %>% keep_at(~ str_starts(., "\\$"))) +x +``` + +Instead of TYPE, we have G. + +Those keywords that don't start with a dollar sign. + +```{r} +observable_list3 %>% keep_at(~ ! str_starts(., "\\$")) %>% head() +``` + +The DISPLAY keywords were in a different place than in the first two files. +Keywords of the form P33BS and P33MS did not appear in the first two files. + +#### Laser metadata + +No information about lasers. + +#### Last few keywords + + +```{r} +last3 <- dl3[328:330] +last3 +``` + +Compensation is applied only to the third file. + +```{r} +c(dl1$`APPLY COMPENSATION`, dl2$`APPLY COMPENSATION`, dl3$`APPLY COMPENSATION`) +``` + +### All metadata + + +```{r} +all1 <- c(init1, middle1, last1) %>% enframe(value = "file1") %>% unnest(file1) +all2 <- c(init2, middle2, last2) %>% enframe(value = "file2") %>% unnest(file2) +all3 <- c(init3, last3) %>% discard_at("SPILL") %>% enframe(value = "file3") %>% unnest(file3) +``` + + +```{r} +all <- all1 %>% + full_join(all2, by="name") %>% + full_join(all3, by="name") +``` + +```{r} +mykable(all) +``` + +The sample volume in nanoliters is not given for the last file. + +```{r} +all %>% filter(name == "$VOL") +``` + +The last file has a much larger number of cells: + +```{r} +all %>% filter(name == "$TOT") +``` + +These differ as well; I'm not sure what they mean. + +```{r} +all %>% filter(name %in% c("$TIMESTEP", "FSC ASF", "THRESHOLD")) +``` + +### All spill matrices + +Matrix dimensions. + +```{r} +list(file1 = flow_frame1, file2 = flow_frame2, file3 = flow_frame3) %>% map(get_spill) %>% map(dim) +``` + +Matrix column names. + +```{r} +list(file1 = flow_frame1, file2 = flow_frame2, file3 = flow_frame3) %>% map(get_spill) %>% map(colnames) +``` + +It looks like the spill matrices only contain the non-scatter parameters that end with "-A". 
+ +```{r} +tidy_spill <- function(m) { + m %>% as_tibble() %>% + mutate(rowname = colnames(.), .before=1) %>% + pivot_longer(cols=-rowname, names_to = "colname") +} +``` + +```{r} +list(file1 = flow_frame1, file2 = flow_frame2, file3 = flow_frame3) %>% + map(get_spill) %>% + map(tidy_spill) %>% + bind_rows(.id = "file") %>% + ggplot(aes(x = colname, y = rowname, fill=value)) + + geom_raster() + + facet_wrap(~file, scales="free") +``` + +Why are the spill matrices equal to the identity matrix? Shouldn't at least the conventional +flow frame (file 3) need one for computing compensation? + +## Problem 3 + +> If you have access to commercial software, take one of the .fcs files and try to see if you can see similar internal information from within the software. For those without commercial access, try the equivalent process using Floreada.io. + +I used Floreada.io. + +The number of observables and cells can be found at the bottom of the screen: + +![](observables-and-cells.png) + +By right-clicking the filename, a menu appears: + +![](menu.png) + +Not many details: + +![](details.png) + +A list of all keywords: + +![](keywords.png) + +By selecting the Export events option, the actual intensities can +be saved to a spreadsheet file or opened in a spreadsheet program. 
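+ +As a sanity check, the exported spreadsheet could be read back into R and compared against the values from `exprs()`. A minimal sketch (not evaluated here), where `exported_events.csv` is a hypothetical name for the file saved via Export events: + +```{r} +#| eval: false +# "exported_events.csv" is a hypothetical filename for Floreada's Export events output +exported <- read.csv("exported_events.csv", check.names = FALSE) +# The export should have one row per cell and one column per observable, +# matching the dimensions of the expression matrix read with flowCore +dim(exported) +dim(exprs(flow_frame3)) +``` 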
+ + diff --git a/course/04_IntroToTidyverse/BonusContent.qmd b/course/04_IntroToTidyverse/BonusContent.qmd new file mode 100644 index 0000000..2459223 --- /dev/null +++ b/course/04_IntroToTidyverse/BonusContent.qmd @@ -0,0 +1,88 @@ +--- +title: "Bonus Content" +author: "David Rach" +date: 02-23-2026 +format: html +toc: true +toc-depth: 5 +--- + +![](/images/WebsiteBanner.png) + +::: {style="text-align: right;"} +[![AGPL-3.0](https://img.shields.io/badge/license-AGPLv3-blue)](https://www.gnu.org/licenses/agpl-3.0.en.html) [![CC BY-SA 4.0](https://img.shields.io/badge/License-CC%20BY--SA%204.0-lightgrey.svg)](http://creativecommons.org/licenses/by-sa/4.0/) +::: + +```{r} +thefilepath <- file.path("data", "Dataset.csv") + +thefilepath +``` + +```{r} +Data <- read.csv(file=thefilepath, check.names=FALSE) +colnames(Data) +``` + +## Pull + +## Case-When + +Case-when is a useful function, but may be a bit much to try to teach in the main segment. Basically, when the condition on the left side of the ~ is fulfilled, it executes what is specified on the right-hand side. + +In turn, we can combine these by separating them with a ",". I tend to use this mutate/str_detect/case_when combination when encountering messy data out in the wild, where I need to selectively change particular cell values in a consistent, reproducible manner. + +## Quasiquotation + +```{r} +library(dplyr) +DateColumn <- select(Data, Date) +DateColumn +``` + + +## Selecting Columns (Base R) + +As we saw [last week](/course/03_InsideFCSFile/index.qmd), there are multiple ways to select values from particular columns in base R. If we had wanted to retrieve the "Date" column, why not first identify its index position and use [,] to extract the underlying data? + +```{r} +colnames(Data) +``` + +```{r} +colnames(Data)[4] +``` + +```{r} +DataColumn <- Data[,4] # Column specified after the , +DataColumn +``` + +However, looking at the output, we see this looks like a bare vector of values, not a column. 
Our suspicions are confirmed when inspecting DataColumn with str(). + +```{r} +str(DataColumn) +``` + +This is similarly the case when we use the $ accessor. + +```{r} +DataColumn <- Data$Date +str(DataColumn) +``` + +```{r} +head(DataColumn, 3) +``` + +By contrast, when selecting two columns, the structure is maintained. + +```{r} +TwoColumns <- Data[,4:5] +``` + +Why is the data.frame column structure lost in base R when isolating a single data.frame column? (The culprit is the drop argument of [, which defaults to TRUE and simplifies a single column to a vector.) And who thought to make it that convoluted? If this were an R course in the early 2010s, we might go into an explanation, but fortunately we don't need to dwell on why; we have the `dplyr` R package to rescue us. + +::: {style="text-align: right;"} +[![AGPL-3.0](https://www.gnu.org/graphics/agplv3-with-text-162x68.png)](https://www.gnu.org/licenses/agpl-3.0.en.html) [![CC BY-SA 4.0](https://licensebuttons.net/l/by-sa/4.0/88x31.png)](http://creativecommons.org/licenses/by-sa/4.0/) +::: \ No newline at end of file diff --git a/course/04_IntroToTidyverse/data/Dataset.csv b/course/04_IntroToTidyverse/data/Dataset.csv new file mode 100644 index 0000000..a7ac736 --- /dev/null +++ b/course/04_IntroToTidyverse/data/Dataset.csv @@ -0,0 +1,197 @@ +bid,timepoint,Condition,Date,infant_sex,ptype,root,singletsFSC,singletsSSC,singletsSSCB,CD45,NotMonocytes,nonDebris,lymphocytes,live,Dump+,Dump-,Tcells,Vd2+,Vd2-,Va7.2+,Va7.2-,CD4+,CD4-,CD8+,CD8-,Tcells_count,lymphocytes_count,Monocytes,Debris,CD45_count +INF0052,0,Ctrl,2025-07-26,Male,HEU-hi,2098368,1894070,1666179,1537396,0.595294250798103,0.882034914658278,0.862764872929831,0.642013848293767,0.902058127245466,0.210909963527936,0.69114816371753,0.280426432119924,0.00812036098585309,0.991879639014147,0.0144807035218576,0.977398935492289,0.63411644039303,0.343282495099259,0.273482590989919,0.0697999041093396,164771,587573,0.117965085341722,0.137235127070169,915203 
+INF0100,0,Ctrl,2025-07-26,Male,HEU-lo,2020184,1791890,1697083,1579098,0.91067622148847,0.905225628925897,0.860266041374169,0.214584780608701,0.890898072803751,0.06252774780205,0.828370325001701,0.6748297864756,0.00726562012283844,0.992734379877162,0.0157749914762223,0.976959388400939,0.611911199043416,0.365048189357523,0.335769613092523,0.0292785762650006,208241,308583,0.0947743710741026,0.139733958625831,1438047 +INF0100,4,Ctrl,2025-07-26,Male,HEU-lo,1155040,1033320,875465,845446,0.970576476794497,0.984540014867714,0.957879279037742,0.740311003326956,0.875766489924721,0.200238033703334,0.675528456221388,0.611912879006119,0.00465131294001178,0.995348687059988,0.0157940186644356,0.979554668395553,0.663962143854429,0.315592524541123,0.286210430885364,0.0293820936557598,371723,607477,0.0154599851322861,0.0421207209622579,820570 +INF0100,9,Ctrl,2025-07-26,Male,HEU-lo,358624,328624,289327,276289,0.981957298336163,0.985507032701324,0.941261463155722,0.651158847639548,0.915324178374523,0.21469246357451,0.700631714800014,0.631443094723257,0.0113489672977625,0.988651032702238,0.0170234509466437,0.971627581755594,0.437894434882387,0.533733146873207,0.486123063683305,0.0476100831899025,111552,176662,0.014492967298676,0.058738536844278,271304 +INF0179,0,Ctrl,2025-07-26,Male,HU,1362216,1206309,1032946,982736,0.957259121473112,0.955627154569894,0.840783729283442,0.705478600197931,0.895214015462574,0.338318765284397,0.556895250178177,0.439643676723417,0.00475363034097958,0.99524636965902,0.0133218176895369,0.981924551969484,0.739256349883644,0.24266820208584,0.195063353177255,0.0476048489085843,291777,663667,0.0443728454301061,0.159216270716558,940733 
+INF0179,4,Ctrl,2025-07-26,Male,HU,1044808,917398,735579,685592,0.985800592772378,0.962289360027343,0.949924318309938,0.755677606357558,0.967305621365496,0.369473107121179,0.597832514244317,0.532316488164001,0.00516055467686762,0.994839445323132,0.0124618383786368,0.982377606944495,0.587626439106926,0.394751167837569,0.357137602530621,0.0376135653069482,271870,510730,0.0377106399726569,0.0500756816900616,675857 +INF0179,9,Ctrl,2025-07-26,Male,HU,1434840,1265022,988445,940454,0.98001603480872,0.980521016426882,0.965775882646529,0.7879673632359,0.929742040488104,0.238631137450808,0.691110903037296,0.671869277013871,0.00263148726167518,0.997368512738325,0.0135632264001295,0.983805286338195,0.699414063700847,0.284391222637349,0.258078399465505,0.0263128231718439,487937,726238,0.0194789835731181,0.0342241173534709,921660 +INF0186,4,Ctrl,2025-07-26,Female,HEU-hi,972056,875707,767323,718000,0.977238161559889,0.949066281673239,0.9189703801145,0.642831183897545,0.875651539639993,0.325167887160318,0.550483652479675,0.489159666287549,0.00930047046239474,0.990699529537605,0.0387700898320295,0.951929439705576,0.572472964275678,0.379456475429897,0.355552634680058,0.0239038407498391,220634,451047,0.0509337183267607,0.0810296198854996,701657 +INF0186,9,Ctrl,2025-07-26,Female,HEU-hi,1521928,1359574,1175755,1097478,0.972123359192622,0.95816977290877,0.918676257212593,0.666392972431867,0.874952880877231,0.228757011606776,0.646195869270455,0.584933976966485,0.0179408320448605,0.982059167955139,0.0367713716164062,0.945287796338733,0.511920878550113,0.43336691778862,0.413879918339277,0.0194869994493432,415867,710964,0.0418302270912302,0.0813237427874071,1066884 
+INF0052,0,PPD,2025-07-26,Male,HEU-hi,2363512,2136616,1875394,1732620,0.587383846429107,0.86198368302262,0.84296849897761,0.640804431111718,0.900925393503078,0.207432282202851,0.693493111300228,0.28356755679248,0.00740820851132861,0.992591791488671,0.015070567241659,0.977521224247012,0.634034499540367,0.343486724706646,0.274411939652842,0.0690747850538041,184930,652155,0.13801631697738,0.15703150102239,1017713 +INF0100,0,PPD,2025-07-26,Male,HEU-lo,2049112,1821676,1717636,1597085,0.906308054987681,0.925196086085125,0.877188934202263,0.217428431083332,0.892967332555915,0.0618142648792407,0.831153067676675,0.67357975578059,0.00713723011316732,0.992862769886833,0.0167180062928387,0.976144763593994,0.61457070480737,0.361574058786624,0.331227858312066,0.0303462004745574,211987,314717,0.0748039139148752,0.122811065797737,1447451 +INF0100,4,PPD,2025-07-26,Male,HEU-lo,1063496,946587,796056,767297,0.970989069421619,0.984871892268438,0.955604889421599,0.731350255088002,0.878230739443147,0.207272019864815,0.670958719578332,0.598987305531646,0.00525464338895391,0.994745356611046,0.0160978987554308,0.978647457855615,0.655948011201735,0.32269944665388,0.291208353504219,0.0314910931496608,326378,544883,0.0151281077315624,0.0443951105784008,745037 +INF0100,9,PPD,2025-07-26,Male,HEU-lo,788368,714198,626387,600011,0.982280324860711,0.98421389292798,0.812304137066302,0.622322817745458,0.95666386756238,0.231645873320537,0.725017994241843,0.648940520851509,0.0119359216203612,0.988064078379639,0.0185529848206671,0.969511093558972,0.430688888795526,0.538822204763445,0.490855848853673,0.0479663559097727,238021,366784,0.0157861070720199,0.187695862933698,589379 
+INF0179,0,PPD,2025-07-26,Male,HU,1380336,1242311,1047081,1000877,0.947027456920281,0.957568538747365,0.913443785883539,0.699650158568056,0.885689771385574,0.331861109309995,0.55382866207558,0.444153752663348,0.00438297193336253,0.995617028066637,0.0129723747152426,0.982644653351395,0.749919368254518,0.232725285096877,0.185089747376498,0.0476355377203793,294549,663169,0.0424314612526349,0.0865562141164605,947858 +INF0179,4,PPD,2025-07-26,Male,HU,1240984,1089933,868877,814909,0.985594710575046,0.954141713460413,0.940082423397288,0.730307406900158,0.960259887718413,0.343572109294685,0.616687778423727,0.565465484408271,0.00432042932947419,0.995679570670526,0.012668837433671,0.983010733236855,0.631877110467921,0.351133622768934,0.317746020260492,0.0333876025084419,331680,586561,0.0458582865395869,0.0599175766027118,803170 +INF0179,9,PPD,2025-07-26,Male,HU,1705960,1492142,1163543,1107878,0.982091890984386,0.981690896825295,0.968165633920874,0.793325233126049,0.934456560020761,0.24759143063691,0.686865129383851,0.668731927040854,0.00273375511929428,0.997266244880706,0.0133032354632831,0.983963009417423,0.701636095269114,0.282326914148309,0.255933530597961,0.0263933835503475,577228,863168,0.0183091031747052,0.0318343660791259,1088038 +INF0186,4,PPD,2025-07-26,Female,HEU-hi,848584,759606,648405,607514,0.98247777005962,0.953948018744482,0.925017047291784,0.672087174907727,0.862222920218972,0.326410701287305,0.535812218931666,0.475772034261669,0.00948363941211915,0.990516360587881,0.0435251892798198,0.946991171308061,0.530910900945744,0.416080270362317,0.391218464279165,0.0248618060831521,190855,401148,0.0460519812555183,0.0749829527082158,596869 
+INF0186,9,PPD,2025-07-26,Female,HEU-hi,1425416,1259825,1089955,1014266,0.977148992473375,0.955257297780522,0.913761528984783,0.633243835821001,0.879303888935805,0.238632506959039,0.640671381976766,0.581861724248368,0.0182240393014894,0.981775960698511,0.0373818723523113,0.944394088346199,0.503380552444431,0.441013535901768,0.42133814561158,0.019675390290188,365177,627601,0.0447427022194777,0.0862384710152166,991089 +INF0052,0,SEB,2025-07-26,Male,HEU-hi,2523776,2282292,2041563,1889418,0.578359050247219,0.887807226093147,0.867014958426446,0.67185626879412,0.911565163264,0.233447156619843,0.678118006644156,0.274166109354803,0.00922563305131479,0.990774366948685,0.00842081207430187,0.982353554874383,0.608325425884434,0.37402812898995,0.281175634790126,0.0928524941998241,201287,734179,0.112192773906853,0.132985041573554,1092762 +INF0100,0,SEB,2025-07-26,Male,HEU-lo,1900240,1685653,1598641,1488015,0.911195787676871,0.901785786721913,0.85768873633445,0.217299850354716,0.890574990411735,0.0878318982048732,0.802743092206862,0.650759763908075,0.00764600957576643,0.992353990424234,0.008955114898766,0.983398875525468,0.611049683415565,0.372349192109902,0.335548207412353,0.0368009846975497,191734,294631,0.0982142132780872,0.14231126366555,1355873 +INF0100,4,SEB,2025-07-26,Male,HEU-lo,1009776,900919,761137,732910,0.969596539820715,0.983067066126111,0.953615609876912,0.724198489502932,0.875529258600756,0.21414206184966,0.661387196751095,0.594879866313018,0.00466770756436471,0.995332292435635,0.00950526872799253,0.985827023707643,0.639848307670197,0.345978716037446,0.310218000561823,0.0357607154756227,306146,514635,0.0169329338738888,0.0463843901230885,710627 
+INF0179,0,SEB,2025-07-26,Male,HU,783096,710001,604070,579147,0.966198564440462,0.97132088689371,0.892646330849883,0.779109353415384,0.925393435741696,0.34958379877376,0.575809636967936,0.480378102012308,0.00410640360217544,0.995893596397825,0.00684718926223207,0.989046407135592,0.725620616056038,0.263425791079554,0.185747914567706,0.0776778765118489,209429,435967,0.0286791131062903,0.107353669150117,559571 +INF0179,4,SEB,2025-07-26,Male,HU,985280,860127,686367,640888,0.984159478723271,0.951545496055402,0.933428565992745,0.757827363587935,0.925535943295766,0.332859124373155,0.592676818922611,0.519413626673417,0.00457156206449326,0.995428437935507,0.00662171632953914,0.988806721605968,0.56142407179165,0.427382649814318,0.377611026527143,0.0497716232871747,248274,477989,0.048454503944598,0.0665714340072551,630736 +INF0179,9,SEB,2025-07-26,Male,HU,1111728,981610,783127,748455,0.984322370750413,0.984891159611305,0.974006442058798,0.818738708412004,0.935493764734359,0.263124562735625,0.672369201998733,0.651935899943964,0.00303888758913223,0.996961112410868,0.00824949902857317,0.988711613382295,0.696706303593771,0.292005309788524,0.256265957338596,0.0357393524499283,393236,603182,0.0151088403886953,0.0259935579412016,736721 +INF0186,4,SEB,2025-07-26,Female,HEU-hi,492984,443892,386199,365672,0.980110590912074,0.977871032006228,0.943936785537906,0.6789695283748,0.860509077758874,0.334175769082197,0.526333308676677,0.470724330366316,0.00879115123049927,0.991208848769501,0.0126410992867557,0.978567749482745,0.504692397007342,0.473875352475403,0.445432879080203,0.0284424733952002,114547,243342,0.0221289679937723,0.0560632144620939,358399 
+INF0186,9,SEB,2025-07-26,Female,HEU-hi,1226128,1088491,957607,891839,0.9712649928967,0.956045402280273,0.910587708320827,0.60486116562689,0.868179059354351,0.229586325099535,0.638592734254816,0.574161446583374,0.0188780852655198,0.98112191473448,0.00923128064489321,0.971890634089587,0.483639990027425,0.488250644062162,0.460942408376963,0.027308235685199,300825,523938,0.0439545977197268,0.0894122916791732,866212 +INF0134,0,Ctrl,2025-07-29,Female,HEU-lo,1205504,1088093,965389,876164,0.787154003131834,0.798904123095483,0.783886926614816,0.577284116019696,0.882294876914846,0.15782427745084,0.724470599464006,0.321159193146112,0.00979932116434392,0.990200678835656,0.0251826130480347,0.965018065787621,0.698582891464502,0.26643517432312,0.225141945474168,0.0412932288489512,127866,398139,0.201095876904517,0.216113073385184,689676 +INF0134,4,Ctrl,2025-07-29,Female,HEU-lo,1277824,1143114,940824,904464,0.981203231969432,0.979719717892464,0.962388291117489,0.775382184947429,0.939878481027374,0.321119916061518,0.618758564965856,0.511173147823863,0.00979673063255153,0.990203269367449,0.0261066098081023,0.964096659559346,0.636952380952381,0.327144278606965,0.275565031982942,0.0515792466240227,351750,688123,0.0202802821075357,0.0376117088825112,887463 +INF0134,9,Ctrl,2025-07-29,Female,HEU-lo,424520,386914,358089,331672,0.95725897875009,0.970383878852017,0.900017638017487,0.427759719807494,0.822040762230142,0.119105822754985,0.702934939475157,0.535733219450417,0.029535864978903,0.970464135021097,0.0286699927156778,0.941794142305419,0.461001388144422,0.480792754160997,0.420387855797908,0.0604048983630891,72759,135812,0.029616121147983,0.0999823619825132,317496 
+INF0148,0,Ctrl,2025-07-29,Female,HU,2034128,1833204,1602246,1509990,0.671517692170147,0.91727885520989,0.899144464661706,0.647882365123744,0.876630088150722,0.220344839658844,0.656285248491878,0.356705224045313,0.00385772505174216,0.996142274948258,0.0201335694625216,0.976008705485736,0.668798941686048,0.307209763799689,0.267719290758956,0.0394904730407323,234335,656943,0.0827211447901103,0.100855535338294,1013985 +INF0148,9,Ctrl,2025-07-29,Female,HU,872400,793908,670148,623296,0.982887745148373,0.981657770595629,0.948915332256011,0.610627948353819,0.969135686962194,0.210789411075974,0.75834627588622,0.728201043067291,0.0133988223719954,0.986601177628005,0.0208287446955347,0.96577243293247,0.550636535835426,0.415135897097044,0.362616184309061,0.0525197127879829,272412,374089,0.0183422294043714,0.051084667743989,612630 +INF0191,0,Ctrl,2025-07-29,Female,HEU-hi,1997680,1808121,1593734,1482293,0.482659636117826,0.866839147213684,0.851001407519537,0.578498916056206,0.822556133013436,0.201008497570569,0.621547635442867,0.134772387365512,0.035317318035138,0.964682681964862,0.0205091430620294,0.944173538902833,0.455557547508067,0.488615991394765,0.379221943348871,0.109394048045895,55780,413883,0.133160852786316,0.148998592480463,715443 +INF0191,4,Ctrl,2025-07-29,Female,HEU-hi,644496,572957,458792,438631,0.974322380315117,0.97583347372756,0.953489732502199,0.793023810860897,0.964819880028208,0.240123571536058,0.72469630849215,0.672588540421878,0.00767715585503775,0.992322844144962,0.0165168524538384,0.975805991691124,0.665047883517804,0.310758108173319,0.223865864732901,0.0868922434404187,227949,338913,0.0241665262724396,0.0465102674978005,427368 
+INF0191,9,Ctrl,2025-07-29,Female,HEU-hi,1504536,1353339,1109880,1047290,0.954451011658662,0.972429613430347,0.953109634278957,0.693557439222399,0.937200604092772,0.393165154751894,0.544035449340878,0.415256083119011,0.0211785956197787,0.978821404380221,0.0144085311843271,0.964412873195894,0.570575056011949,0.393837817183945,0.260659638397277,0.133178178786668,287885,693271,0.0275703865696533,0.0468903657210428,999587 +INF0134,0,PPD,2025-07-29,Female,HEU-lo,1245024,1126248,993895,896183,0.791566008281791,0.804229843188777,0.789978121986839,0.592486763238172,0.900348082216877,0.154857329117327,0.74549075309955,0.331456116182849,0.00945360055128058,0.990546399448719,0.0258771677960262,0.964669231652693,0.69642241874354,0.268246812909154,0.226039393591363,0.0422074193177903,139312,420303,0.195770156811223,0.210021878013161,709388 +INF0134,4,PPD,2025-07-29,Female,HEU-lo,1340280,1201488,986763,947027,0.983254965275541,0.983444466042147,0.964371666152975,0.763760391507879,0.947150550485805,0.320176042970233,0.626974507515572,0.513328365134493,0.010466371201455,0.989533628798545,0.0256112459391795,0.963922382859365,0.628647890564653,0.335274492294713,0.281901751425738,0.0533727408689745,365074,711190,0.016555533957853,0.0356283338470246,931169 +INF0134,9,PPD,2025-07-29,Female,HEU-lo,929672,828324,752940,697560,0.958546935030678,0.966002835589641,0.899239356069897,0.431271349178337,0.776881543311128,0.129113248048494,0.647768295262634,0.490593583870554,0.0258639579843219,0.974136042015678,0.0275392129835797,0.946596829032098,0.444189975330633,0.502406853701465,0.440365870036969,0.0620409836644966,141471,288367,0.0339971644103588,0.100760643930103,668644 
+INF0148,0,PPD,2025-07-29,Female,HU,1862952,1676887,1450059,1372237,0.674659697996775,0.911606590242095,0.896064238982148,0.658815739587575,0.888040699952617,0.226900924208963,0.661139775743655,0.360931718058063,0.00422000345231714,0.995779996547683,0.0193193484205649,0.976460648127118,0.664716410316977,0.311744237810141,0.271842719699103,0.0399015181110374,220142,609927,0.0883934097579049,0.103935761017852,925793 +INF0148,4,PPD,2025-07-29,Female,HU,632312,562703,496868,467363,0.972633691584486,0.986906393472556,0.950157620448201,0.512846121525036,0.914050770827793,0.240886902361813,0.67316386846598,0.639881437505898,0.00312388971194519,0.996876110288055,0.0167054359703163,0.980170674317738,0.619850777285434,0.360319897032305,0.305376978407621,0.0549429186246841,149173,233126,0.0130936065274444,0.0498423795517992,454573 +INF0148,9,PPD,2025-07-29,Female,HU,1110656,997595,827243,774214,0.980991043819926,0.975244141846512,0.945795704262163,0.654739913390046,0.956100572522538,0.196441391348414,0.759659181174124,0.730733420073079,0.012606845839273,0.987393154160727,0.0212756003456494,0.966117553815078,0.556118489490167,0.40999906432491,0.358245223929065,0.0517538403958456,363374,497273,0.0247558581534885,0.054204295737837,759497 +INF0191,0,PPD,2025-07-29,Female,HEU-hi,1968064,1782256,1568308,1456260,0.489216898081387,0.876586653790494,0.861205428766737,0.584636741729328,0.802235715263222,0.199096782557964,0.603138932705259,0.12735317914773,0.0365357062061685,0.963464293793832,0.0206055350275243,0.942858758766307,0.450814418218837,0.49204434054747,0.378968403589473,0.113075936957997,53044,416511,0.123413346209506,0.138794571233263,712427 
+INF0191,4,PPD,2025-07-29,Female,HEU-hi,1066672,941050,756678,723688,0.979242435966881,0.966806647983676,0.947546516976968,0.794993692374123,0.960648579568146,0.289526700213886,0.67112187935426,0.608381479805107,0.00961333671769467,0.990386663282305,0.0165308545804121,0.973855808701893,0.631869013546198,0.341986795155695,0.243043824561711,0.0989429705939846,342753,563385,0.0331933520163237,0.052453483023032,708666 +INF0191,9,PPD,2025-07-29,Female,HEU-hi,2415400,2151222,1768089,1652532,0.93912916663641,0.928039145818043,0.910939268954168,0.649623922558912,0.888403635072378,0.410601104170097,0.477802530902281,0.346151175685246,0.0267177103690162,0.973282289630984,0.0146425890160524,0.958639700614931,0.53798476712266,0.420654933492272,0.276607389492868,0.144047543999404,348982,1008178,0.0719608541819566,0.0890607310458322,1551941 +INF0134,0,SEB,2025-07-29,Female,HEU-lo,1193160,1079791,959139,869331,0.782985997278367,0.805832160476234,0.791506653699128,0.566516423427368,0.876204899731078,0.163671867909018,0.71253303182206,0.346036051689129,0.00822116969933151,0.991778830300668,0.0153107107527204,0.976468119547948,0.632348091969184,0.344120027578764,0.261855870979346,0.0822641565994185,133436,385613,0.194167839523766,0.208493346300872,680674 +INF0134,4,SEB,2025-07-29,Female,HEU-lo,1291352,1157622,964426,923312,0.982650501672241,0.977873740897373,0.956105690223555,0.740553492642399,0.941799288285888,0.339208720358268,0.60259056792762,0.501288140032951,0.00984813621721123,0.990151863782789,0.0180811424669329,0.972070721315856,0.600050472811484,0.372020248504372,0.304793432596529,0.0672268159078426,336815,671899,0.0221262591026272,0.0438943097764449,907293 
+INF0148,0,SEB,2025-07-29,Female,HU,2042880,1841683,1602125,1513482,0.68014684020028,0.901883639825528,0.887045726109638,0.655521231020313,0.869403233909367,0.222301259508556,0.647101974400811,0.382685202886244,0.00371760168221476,0.996282398317785,0.0109359449485151,0.98534645336927,0.659413470884596,0.325932982484675,0.255453450592686,0.0704795318919882,258231,674787,0.0981163601744722,0.112954273890362,1029390 +INF0191,0,SEB,2025-07-29,Female,HEU-hi,1956304,1772838,1558215,1448120,0.500612518299588,0.874513585131051,0.858720706479232,0.584808268742405,0.813640598648442,0.205064216721114,0.608576381927327,0.153905485251973,0.0330426519946666,0.966957348005333,0.0149274318380358,0.952029916167298,0.46397645940934,0.488053456757958,0.365047740195252,0.123005716562706,65249,423955,0.125486414868949,0.141279293520768,724947 +INF0191,9,SEB,2025-07-29,Female,HEU-hi,1814944,1620273,1311159,1239312,0.955994132228204,0.953430820197928,0.93669852925661,0.694926040809436,0.916094499053236,0.411346105029447,0.504748394023789,0.378588927175097,0.0235030670122937,0.976496932987706,0.0090374201165208,0.967459512871185,0.549190899058081,0.418268613813105,0.271844442163078,0.146424171650027,311704,823331,0.0465691798020721,0.0633014707433901,1184775 +INF0124,0,Ctrl,2025-07-31,Male,HEU-hi,1229248,1096279,962417,891568,0.771360120596522,0.857697900308265,0.851096376432269,0.620287326237422,0.848707405809876,0.216735742550119,0.631971663259757,0.164790521913621,0.0142253581233908,0.985774641876609,0.00988662389575658,0.975888017980853,0.672020143107103,0.30386787487375,0.264805041466919,0.039062833406831,70297,426584,0.142302099691735,0.148903623567731,687720 
+INF0124,4,Ctrl,2025-07-31,Male,HEU-hi,1105296,993220,820093,766344,0.899851763698809,0.947894129316295,0.938007181016131,0.699090481963352,0.91153726482607,0.258244311228194,0.653292953597876,0.479005994731274,0.00947930920995652,0.990520690790043,0.00774280715733315,0.98277788363271,0.509990299838908,0.472787583793802,0.423485648958099,0.0493019348357035,230924,482090,0.0521058706837046,0.0619928189838688,689596 +INF0124,9,Ctrl,2025-07-31,Male,HEU-hi,1017128,915840,757350,701114,0.745743202959861,0.954792091819658,0.946617678841582,0.744550550730514,0.902542840922811,0.341029415164569,0.561513425758241,0.534849430628659,0.00372698848763994,0.99627301151236,0.0102107957792816,0.986062215733079,0.662625893924913,0.323436321808166,0.290926031765853,0.0325102900423128,208211,389289,0.045207908180342,0.053382321158418,522851 +INF0149,0,Ctrl,2025-07-31,Female,HU,1327920,1192637,1046943,958252,0.895218585507779,0.886543606362455,0.868127692065583,0.541476607079368,0.836291692411029,0.157884879107347,0.678406813303682,0.232291287677367,0.017191844300278,0.982808155699722,0.0213438368860056,0.961464318813716,0.62177015755329,0.339694161260426,0.299629286376274,0.040064874884152,107900,464503,0.113456393637545,0.131872307934417,857845 +INF0149,4,Ctrl,2025-07-31,Female,HU,1026960,910155,731344,694429,0.945859115906738,0.979643196433791,0.947015370749294,0.669381211634025,0.91294172233329,0.292650641047511,0.62029108128578,0.545926385865795,0.00758244871431666,0.992417551285683,0.0194019031113037,0.97301564817438,0.672871498325195,0.300144149849184,0.263023480593931,0.0371206692552536,240028,439671,0.0203568035662087,0.0529846292507065,656832 
+INF0149,9,Ctrl,2025-07-31,Female,HU,757368,665858,536020,502656,0.935176343264579,0.969391922939465,0.928508824180126,0.660841317925765,0.898426811484566,0.184607411079599,0.713819400404966,0.632452686846316,0.00949268833951758,0.990507311660482,0.0293433502827447,0.961163961377738,0.614535774455761,0.346628186921977,0.304509154209104,0.0421190327128729,196467,310643,0.0306080770605354,0.0714911758198744,470072 +INF0169,0,Ctrl,2025-07-31,Female,HEU-lo,1288912,1144653,1051293,973587,0.877778770669699,0.842549795575443,0.826082326812498,0.351936709127375,0.771953332025548,0.130222135036557,0.641731196988991,0.251161213314138,0.0128673550436855,0.987132644956315,0.0153958167858088,0.971736828170506,0.745724119671697,0.226012708498809,0.135041037860736,0.0909716706380725,75540,300763,0.157450204424557,0.173917673187502,854594 +INF0169,4,Ctrl,2025-07-31,Female,HEU-lo,1068232,944862,869444,785215,0.916182192138459,0.94555323881012,0.865148735056992,0.368202668890742,0.781063480378277,0.167733167223512,0.613330313154765,0.37912679087151,0.0106148867313916,0.989385113268608,0.0155240229026637,0.973861090365945,0.812735872541698,0.161125217824247,0.0949166044311675,0.0662086133930794,100425,264885,0.0544467611898805,0.134851264943008,719400 +INF0169,9,Ctrl,2025-07-31,Female,HEU-lo,778672,685649,594814,542125,0.951313811390362,0.975039313130295,0.922139642565601,0.573702181951444,0.867772310021766,0.302004893942057,0.565767416079709,0.461223620705971,0.0194995053676767,0.980500494632323,0.0165243835415674,0.963976111090756,0.829546037445499,0.134430073645257,0.0826072619352948,0.0518228117099623,136465,295876,0.0249606868697053,0.0778603574343989,515731 
+INF0124,0,PPD,2025-07-31,Male,HEU-hi,1246824,1108772,973652,900843,0.781135003546678,0.864643019554343,0.85943468622101,0.643083503865393,0.812693221368985,0.207716700734766,0.604976520634219,0.152471134191481,0.014218009478673,0.985781990521327,0.00927576561299767,0.976506224908329,0.66962331695581,0.30688290795252,0.266446367233358,0.0404365407191617,68997,452525,0.135356980445657,0.14056531377899,703680 +INF0124,4,PPD,2025-07-31,Male,HEU-hi,1515456,1357582,1150459,1069355,0.887128222152606,0.906098634382362,0.896618897280887,0.637026105380776,0.772967212626465,0.207635714971257,0.565331497655208,0.425400203204273,0.00905176270144743,0.990948237298553,0.0075463771554826,0.98340186014307,0.516109959272903,0.467291900870167,0.416626147029878,0.0506657538402891,257077,604318,0.0939013656176376,0.103381102719113,948655 +INF0124,9,PPD,2025-07-31,Male,HEU-hi,1583312,1418103,1190344,1093566,0.859891401159144,0.97322799644387,0.962200164194532,0.606947640660692,0.906363996341604,0.342808133972968,0.563555862368636,0.530500646526802,0.00374530598225108,0.996254694017749,0.00994454701283775,0.986310147004911,0.648634812850297,0.337675334154614,0.302699328553169,0.0349760056014453,302779,570742,0.0267720035561303,0.0377998358054678,940348 +INF0149,0,PPD,2025-07-31,Female,HU,1153168,1032394,892184,812261,0.891197533797634,0.892793744862789,0.87600240369672,0.594862443620188,0.826509711759078,0.153493167863413,0.673016543895665,0.213143154394211,0.0175960427970626,0.982403957202937,0.0222047896101632,0.960199167592774,0.618411017410821,0.341788150181953,0.301834782419211,0.0399533677627422,91782,430612,0.107206255137211,0.12399759630328,723885 
+INF0149,4,PPD,2025-07-31,Female,HU,1238584,1092917,869629,826659,0.951025755480797,0.982711206425041,0.953305502344265,0.687894791738216,0.949294107857731,0.307957581753127,0.641336526104603,0.564305063747562,0.00772661290586836,0.992273387094132,0.0191494172272666,0.973123969866865,0.66995763142287,0.303166338443995,0.26586036391757,0.0373059745264255,305179,540805,0.0172887935749593,0.0466944976557352,786174 +INF0149,9,PPD,2025-07-31,Female,HU,1216752,1066815,870494,816153,0.934514729468617,0.965079643952396,0.928110008168274,0.65979596358759,0.877932798257659,0.211600636685737,0.666332161571922,0.587092210138088,0.0110308925918028,0.988969107408197,0.028672197344327,0.96029691006387,0.60315864650711,0.357138263556761,0.315184316433288,0.0419539471234722,295443,503231,0.0349203560476041,0.0718899918317257,762707 +INF0169,0,PPD,2025-07-31,Female,HEU-lo,1218624,1082765,987457,903485,0.85632190905217,0.842848538273226,0.825762530471491,0.384300105729287,0.764246291070654,0.123404512937109,0.640841778133545,0.249883123740848,0.012436739528373,0.987563260471627,0.0137019489609131,0.973861311510714,0.744037364057284,0.22982394745343,0.134381393345537,0.0954425541078927,74296,297323,0.157151461726774,0.174237469528509,773674 +INF0169,4,PPD,2025-07-31,Female,HEU-lo,940776,820971,737842,662123,0.926151183390397,0.92995404630593,0.8459621738152,0.381591778561248,0.740784266801138,0.161742207331561,0.579042059469577,0.342556901223066,0.0119512468967926,0.988048753103207,0.015793610199728,0.972255142903479,0.809965194176574,0.162289948726905,0.0932521613293579,0.0690377873975474,80159,234002,0.0700459536940704,0.1540378261848,613226 
+INF0169,9,PPD,2025-07-31,Female,HEU-lo,1054184,923351,814226,740463,0.941281333435972,0.956064988579365,0.903875842200108,0.518514054842005,0.787211258564013,0.264463358753279,0.522747899810734,0.416426855858947,0.0214824412771188,0.978517558722881,0.0152895445031396,0.963228014219741,0.829416259676401,0.13381175454334,0.0789527891292069,0.0548589654141334,150495,361396,0.0439350114206352,0.0961241577998921,696984 +INF0124,0,SEB,2025-07-31,Male,HEU-hi,1004008,900404,794342,737214,0.815346697159848,0.858233028606603,0.850839731485564,0.645535989086402,0.90635582518517,0.244872713402848,0.661483111782322,0.178788831561097,0.0139389396603915,0.986061060339608,0.00593882434341396,0.980122235996195,0.65523106639375,0.324891169602445,0.256133421743016,0.0687577478594286,69374,388022,0.141766971393397,0.149160268514436,601085 +INF0124,4,SEB,2025-07-31,Male,HEU-hi,1571248,1425147,1204162,1120656,0.879540197884096,0.899691780752428,0.891540913619476,0.636573186345826,0.779254310331088,0.235330211683555,0.543924098647533,0.407646235691996,0.00896878518703866,0.991031214812961,0.00402305141999249,0.987008163392969,0.48618713249093,0.500821030902039,0.445170774427624,0.0556502564744151,255776,627446,0.100308219247572,0.108459086380524,985662 +INF0124,9,SEB,2025-07-31,Male,HEU-hi,1109328,998592,812606,756059,0.904774627376964,0.966187324851658,0.954087562110507,0.656252421195124,0.903795347925456,0.368147412222277,0.53564793570318,0.510429521649834,0.00333419161127865,0.996665808388721,0.00606176982731157,0.99060403856141,0.663831440030374,0.326772598531035,0.289782273796483,0.0369903247345521,229141,448918,0.0338126751483416,0.0459124378894926,684063 
+INF0149,0,SEB,2025-07-31,Female,HU,1225472,1099942,970297,887529,0.895820868951888,0.877578870711525,0.860238193762287,0.535075660290265,0.797532326800981,0.16081481638189,0.636717510419091,0.218588175007816,0.0179800412938747,0.982019958706125,0.0121838437715072,0.969836114934618,0.576006538196834,0.393829576737784,0.319565123881624,0.0742644528561597,92992,425421,0.122421129288475,0.139761806237713,795067 +INF0149,4,SEB,2025-07-31,Female,HU,1124960,999291,795445,758864,0.946020630837673,0.983822281902379,0.956350527440413,0.711361315836028,0.920366095083681,0.332892750353935,0.587473344729746,0.523146271591014,0.007048105283646,0.992951894716354,0.0117006782350916,0.981251216481262,0.663539997903909,0.317711218577353,0.275523648395742,0.0421875701816113,267164,510687,0.0161777180976207,0.043649472559587,717901 +INF0149,9,SEB,2025-07-31,Female,HU,531432,467288,379931,358096,0.92417396452348,0.969384455933499,0.925790846157798,0.651432421897426,0.89664497395483,0.255715789913121,0.640929184041709,0.552259644598236,0.0108684696791534,0.989131530320847,0.0216865445993617,0.967444985721485,0.571140601377457,0.396304384344028,0.34592642365194,0.050377960692088,119060,215587,0.0306155440665009,0.0742091538422024,330943 +INF0169,0,SEB,2025-07-31,Female,HEU-lo,1237144,1103284,1012883,939441,0.876379676850382,0.844672764837418,0.827158034609204,0.358358425228985,0.774778927531614,0.138666413592779,0.636112513938835,0.265283572680222,0.0129553207527885,0.987044679247212,0.00994007844740574,0.977104600799806,0.719211948536458,0.257892652263348,0.137052983940002,0.120839668323346,78269,295039,0.155327235162582,0.172841965390796,823307 
+INF0169,4,SEB,2025-07-31,Female,HEU-lo,698912,617805,566286,511948,0.921683452225617,0.933040728699979,0.850002331229575,0.381090337265341,0.73006189557277,0.16779094533948,0.56227095023329,0.332923662126916,0.0121103798483279,0.987889620151672,0.0104566866000735,0.977432933551599,0.795125780910701,0.182307152640898,0.0982360605351953,0.0840710921057027,59866,179819,0.0669592713000208,0.149997668770425,471854 +INF0169,9,SEB,2025-07-31,Female,HEU-lo,887744,782828,694751,633595,0.93819869159321,0.95722682601045,0.905700846850302,0.520298163980096,0.774770842426888,0.261616308582699,0.513154533844189,0.40678662075432,0.0211186443372307,0.978881355662769,0.0102612607600169,0.968620094902752,0.834238115298101,0.134381979604651,0.0766693425957572,0.0577126370088942,125813,309285,0.0427731739895498,0.0942991531496977,594438 +INF0019,0,Ctrl,2025-08-05,Female,HEU-lo,1324344,1197859,1022143,939295,0.930082668384267,0.868875783805811,0.863010546895568,0.690163480315285,0.91948810996746,0.102868932666824,0.816619177300636,0.345066357958145,0.00882458965177477,0.991175410348225,0.0146163274134243,0.976559082934801,0.733344548316551,0.24321453461825,0.222489245632165,0.0207252889860854,208055,602942,0.131124216194189,0.136989453104432,873622 +INF0019,4,Ctrl,2025-08-05,Female,HEU-lo,1244280,1117290,891334,840761,0.905377390245266,0.955721578652822,0.935905654973818,0.704970796341595,0.886148318760855,0.352806786078997,0.533341532681858,0.451748697421678,0.00439318375883277,0.995606816241167,0.0133404284282302,0.982266387812937,0.652608478638402,0.329657909174535,0.293807054669356,0.035850854505179,242421,536628,0.0442784213471781,0.0640943450261822,761206 
+INF0019,9,Ctrl,2025-08-05,Female,HEU-lo,1844096,1632745,1274587,1217805,0.930321356867479,0.964861644379717,0.957356458802242,0.793717286729335,0.940427604582526,0.21288596395631,0.727541640626216,0.703361275385269,0.00365696325012807,0.996343036749872,0.0164096937194463,0.979933343030426,0.699705292715165,0.280228050315261,0.26313218190902,0.0170958684062407,632492,899242,0.0351383556202833,0.042643541197758,1132950 +INF0032,0,Ctrl,2025-08-05,Female,HU,1124480,1006880,824473,773851,0.973138239790347,0.93983645480331,0.936204625370486,0.81375420946958,0.977386102358157,0.189406487176265,0.787979615181892,0.589146047137036,0.00786906496341065,0.992130935036589,0.015089991524344,0.977040943512245,0.713304010148629,0.263736933363617,0.236961061839051,0.0267758715245656,361034,612809,0.0601635451966898,0.0637953746295136,753064 +INF0032,4,Ctrl,2025-08-05,Female,HU,711784,637561,497456,468981,0.969265279403643,0.940503820118926,0.934273715425889,0.812375293411092,0.942385025955985,0.363519181973521,0.578865843982463,0.431871295145405,0.0113430439989717,0.988656956001028,0.0133871746477637,0.975269781353265,0.678833215241941,0.296436566111324,0.218145108194707,0.078291457916617,159481,369279,0.0594961798810736,0.0657262845741112,454567 +INF0032,9,Ctrl,2025-08-05,Female,HU,1248456,1118033,911093,872062,0.984879515447296,0.972946036447636,0.965117199688896,0.784273864911815,0.958258238642268,0.255593131767801,0.702665106874467,0.660764496120809,0.0196613246399019,0.980338675360098,0.0175763390078794,0.962762336352219,0.707335869167152,0.255426467185067,0.221487035118977,0.0339394320660905,445087,673594,0.0270539635523638,0.0348828003111042,858876 
+INF0180,0,Ctrl,2025-08-05,Male,HEU-hi,1614504,1435508,1209469,1117422,0.939361315599657,0.867784231701032,0.861666077588712,0.699933216660966,0.927674650943114,0.107572676515665,0.820101974427449,0.387859435356761,0.0045269829237993,0.995473017076201,0.0201398100772745,0.975333206998926,0.66050084573867,0.314832361260256,0.295819032980299,0.019013328279957,284958,734694,0.132215768298968,0.138333922411288,1049663 +INF0180,4,Ctrl,2025-08-05,Male,HEU-hi,528392,465660,387783,356166,0.986118832229915,0.924284355763591,0.897517809248851,0.643003570391376,0.977466048521721,0.210988456275987,0.766477592245735,0.5716158114038,0.012913271155455,0.987086728844545,0.0140674867536331,0.973019242090912,0.521325876119357,0.451693365971555,0.374554581228891,0.0771387847426641,129092,225837,0.0757156442364089,0.102482190751149,351222 +INF0019,0,PPD,2025-08-05,Female,HEU-lo,1271392,1149523,966376,895883,0.931680810998758,0.911539433816914,0.906465614842628,0.722051763736152,0.940014601446871,0.10286719320369,0.83714740824318,0.358623149930311,0.00879542878293659,0.991204571217063,0.0149813773798783,0.976223193837185,0.735419992134546,0.24080320170264,0.219931986952599,0.0208712147500405,216135,602680,0.0884605661830864,0.0935343851573722,834677 +INF0019,4,PPD,2025-08-05,Female,HEU-lo,1374576,1213048,954728,905952,0.929225830949101,0.959040618459221,0.938433230304312,0.715411827034784,0.885281723115343,0.3552298184499,0.530051904665442,0.4494967273162,0.00475780623760218,0.995242193762398,0.0131135187449439,0.982128675017454,0.658989409448383,0.323139265569071,0.289232508228271,0.0339067573408,270713,602258,0.0409593815407788,0.0615667696956882,841834 
+INF0019,9,PPD,2025-08-05,Female,HEU-lo,1725968,1521365,1188395,1141230,0.939210325701217,0.973696068964552,0.966078434116555,0.812773182939857,0.963162395615118,0.221661549057308,0.74150084655781,0.717317416133383,0.00372374217686095,0.996276257823139,0.0165176049632827,0.979758652859856,0.698383284606239,0.281375368253618,0.263582377594178,0.01779299065944,624909,871175,0.0263039310354479,0.0339215658834451,1071855 +INF0032,0,PPD,2025-08-05,Female,HU,1240368,1107613,907113,859658,0.976240551475121,0.915239272049598,0.912261553108612,0.6988631285948,0.983314834043467,0.168643618426998,0.81467121561647,0.671650392406596,0.00751404440901787,0.992485955590982,0.0149722411906714,0.977513714400311,0.721688426086934,0.255825288313376,0.229117429790648,0.0267078585227287,393929,586509,0.0847607279504023,0.0877384468913878,839233 +INF0032,4,PPD,2025-08-05,Female,HU,774384,688988,533872,502702,0.962799034020155,0.938179879793637,0.931460885411394,0.810487994859515,0.962080366679668,0.362208337475814,0.599872029203853,0.450324643045603,0.0114179290356181,0.988582070964382,0.0143332654031655,0.974248805561216,0.680716889704051,0.293531915857165,0.216431175418337,0.0771007404388289,176652,392277,0.0618201202063632,0.0685391145886062,484001 +INF0032,9,PPD,2025-08-05,Female,HU,1335992,1193171,967189,926155,0.985408489939589,0.971373190553569,0.9628320445827,0.777570808236755,0.954451745455109,0.258002967689387,0.696448777765722,0.653293275632959,0.0199027189094164,0.980097281090584,0.0174696131404968,0.962627667950087,0.705177899289266,0.257449768660821,0.222760755384433,0.0346890132763883,463605,709643,0.0286268094464307,0.0371679554172999,912641 
+INF0180,0,PPD,2025-08-05,Male,HEU-hi,1570368,1398767,1158451,1068711,0.942663638719916,0.896367507581134,0.890511050340717,0.70957332234834,0.929576735786159,0.10813752274956,0.821439213036599,0.407784021520629,0.00477523464515067,0.995224765354849,0.0192518799055931,0.975972885449256,0.659836571710851,0.316136313738405,0.296812393654976,0.0193239200834294,291504,714849,0.103632492418866,0.109488949659283,1007435 +INF0180,4,PPD,2025-08-05,Male,HEU-hi,638600,551845,454723,419608,0.989857200053383,0.93640815501069,0.908740056626668,0.631904986613764,0.981166869234902,0.198142214331163,0.783024654903739,0.594365682019942,0.0128398258963198,0.98716017410368,0.013935986769146,0.973224187334534,0.518778966531837,0.454445220802697,0.378848582362707,0.07559663843999,155999,262463,0.0635918449893103,0.0912599433733315,415352 +INF0180,9,PPD,2025-08-05,Male,HEU-hi,1516936,1391367,1205249,1116976,0.580700032946097,0.859905832002319,0.852679810307295,0.619310298044488,0.699471249831965,0.106603900403782,0.592867349428183,0.434677945342567,0.018629983219843,0.981370016780157,0.0152624977807813,0.966107518999376,0.298383263368287,0.667724255631089,0.591028056651643,0.0766961989794457,174611,401702,0.140094167997681,0.147320189692705,648628 +INF0019,0,SEB,2025-08-05,Female,HEU-lo,1353632,1227164,1049849,966363,0.931358092145498,0.874083086119352,0.868618823816984,0.700898858926925,0.923351774646101,0.104785758445223,0.818566016200878,0.359317090182775,0.00802054105564085,0.991979458944359,0.00897788836536256,0.983001570578997,0.701554696737078,0.281446873841919,0.234563326098082,0.0468835477438368,226668,630830,0.125916913880648,0.131381176183016,900030 
+INF0019,4,SEB,2025-08-05,Female,HEU-lo,1170048,1047919,832442,792400,0.917228672387683,0.96310875439591,0.942818775694402,0.723126475622307,0.904638140557901,0.344459517825171,0.56017862273273,0.484003295425789,0.00425346232619575,0.995746537673804,0.00847547576273385,0.98727106191107,0.625577381958558,0.361693679952512,0.323259205679669,0.0384344742728427,254381,525577,0.0368912456040902,0.0571812243055976,726812 +INF0019,9,SEB,2025-08-05,Female,HEU-lo,1710322,1517501,1200599,1151235,0.925440939512784,0.969865778111507,0.962381265252487,0.80652900319129,0.946442121041435,0.211284849105526,0.735157271935909,0.711653764331833,0.00369415231550906,0.996305847684491,0.0108077258314296,0.985498121853061,0.692474493341859,0.293023628511203,0.272495654178938,0.0205279743322644,611507,859276,0.0301342218884926,0.0376187347475126,1065400 +INF0032,0,SEB,2025-08-05,Female,HU,1027392,922706,754901,711686,0.97571541382014,0.955185965498421,0.951156605026188,0.835023754217652,0.990864768566664,0.225678330168684,0.76518643839798,0.575916239395836,0.00765404667291528,0.992345953327085,0.0082170203718621,0.984128932955223,0.68755858070737,0.296570352247852,0.251981637474883,0.0445887147729689,333941,579843,0.0448140345015791,0.0488433949738121,694403 +INF0032,9,SEB,2025-08-05,Female,HU,1155456,1036243,850931,814746,0.98729910917022,0.970027275055383,0.962124470722205,0.785860979266482,0.962943628439678,0.261944648775202,0.700998979664476,0.658102175924827,0.0194511749548094,0.980548825045191,0.0122952001846083,0.968253624860582,0.710602476827814,0.257651148032768,0.222056363216799,0.0355947848159686,416016,632145,0.029972724944617,0.0378755292777954,804398 
+INF0180,0,SEB,2025-08-05,Male,HEU-hi,1661856,1483795,1251178,1155566,0.945002708629364,0.876733152444156,0.869922793959413,0.699612550400041,0.922558790344326,0.124676368415128,0.797882421929198,0.402479888374918,0.00404892532741009,0.99595107467259,0.0108816900767182,0.985069384595872,0.612174744462403,0.372894640133468,0.313003066776372,0.0598915733570957,307489,763986,0.123266847555844,0.130077206040587,1092013 +INF0155,0,Ctrl,2025-08-07,Female,HEU-lo,1956872,1761294,1560508,1450803,0.734109317391817,0.86158464219453,0.853166242272649,0.705002967002426,0.827947079489973,0.243380274937339,0.584566804552634,0.37507025258969,0.0103115479394658,0.989688452060534,0.0114868655592878,0.978201586501246,0.696086298850248,0.282115287650998,0.228192709479949,0.0539225781710495,281626,750862,0.13841535780547,0.146833757727351,1065048 +INF0155,4,Ctrl,2025-08-07,Female,HEU-lo,1215184,1083687,851644,825005,0.98352979678911,0.985920925985036,0.98077190889518,0.854323978915897,0.967996849453198,0.341870391928599,0.626126457524599,0.59517060412889,0.00328178777449222,0.996718212225508,0.011180861893451,0.985537350332057,0.721515827233506,0.264021523098551,0.223845072470794,0.040176450627757,412580,693213,0.014079074014964,0.0192280911048204,811417 +INF0155,9,Ctrl,2025-08-07,Female,HEU-lo,1123680,1000603,815357,761574,0.980901396318677,0.959184984786401,0.928468640441,0.618258461184238,0.931716960011432,0.317645938028437,0.614071021982995,0.495439064472337,0.0109735951962661,0.989026404803734,0.011511130922726,0.977515273881008,0.501966594121195,0.475548679759813,0.399603185008435,0.0759454947513788,228822,461857,0.0408150152135994,0.0715313595589997,747029 
+INF0158,0,Ctrl,2025-08-07,Male,HU,1974064,1742021,1484261,1352364,0.92381784785753,0.802042361634722,0.793962082318796,0.613198349846078,0.932262704740423,0.332249912543141,0.600012792197282,0.366683113777458,0.00460996821079836,0.995390031789202,0.0221207277698077,0.973269304019394,0.590346477379117,0.382922826640277,0.359495644558992,0.023427182081285,280913,766092,0.197957638365278,0.206037917681204,1249338 +INF0158,4,Ctrl,2025-08-07,Male,HU,1017536,914368,805877,754118,0.947414065172824,0.973280594349315,0.924193309091316,0.565141323121454,0.828913347136503,0.594199201529576,0.234714145606927,0.196596098788425,0.0103426555807508,0.989657344419249,0.0161375661375661,0.973519778281683,0.573066263542454,0.400453514739229,0.368808264046359,0.0316452506928697,79380,403772,0.0267194056506854,0.0758066909086837,714462 +INF0158,9,Ctrl,2025-08-07,Male,HU,1148432,1020361,904910,847242,0.957195228754004,0.964520528351024,0.91473730418656,0.565784930750108,0.797826248043972,0.320764627166887,0.477061620877085,0.433948365218225,0.00991904053999759,0.990080959460002,0.0223090521917313,0.967771907268271,0.505695287074611,0.46207662019366,0.431159347502913,0.0309172726907469,199112,458838,0.035479471648976,0.0852626958134396,810976 +INF0159,0,Ctrl,2025-08-07,Male,HEU-hi,2024024,1786795,1447822,1277260,0.931853342310884,0.818567843396887,0.811916966541452,0.651424653782203,0.946597483936038,0.162708392984737,0.783889090951301,0.583682213434657,0.00516847824886035,0.99483152175114,0.0213721768375277,0.973459344913612,0.724433268294623,0.249026076618989,0.217864947818036,0.0311611288009528,452551,775338,0.181432156603113,0.188083033458548,1190219 
+INF0159,4,Ctrl,2025-08-07,Male,HEU-hi,1487344,1333646,1131385,1067284,0.940156509420173,0.982565521310247,0.942089705744588,0.646537720223158,0.759423194013056,0.264593946774156,0.4948292472389,0.42502524104232,0.0127949864542873,0.987205013545713,0.0225689344402012,0.964636079105512,0.490235118756188,0.474400960349323,0.394639016729953,0.0797619436193709,275733,648745,0.0174344786897532,0.0579102942554121,1003414 +INF0159,9,Ctrl,2025-08-07,Male,HEU-hi,1167840,1028158,742821,709883,0.982519654647315,0.986670470870599,0.965210172708947,0.813984463937007,0.923115267211876,0.0548056921123134,0.868309575099563,0.858577887845167,0.00239002303859118,0.997609976961409,0.0274596209197793,0.970150356041629,0.627872387130393,0.342277968911237,0.327390074326639,0.0148878945845976,487443,567733,0.0133295291294012,0.0347898272910532,697474 +INF0155,0,PPD,2025-08-07,Female,HEU-lo,2221632,2004398,1728584,1609792,0.784455383055699,0.904280137154441,0.884300092650517,0.762086141224729,0.92752060018496,0.266291551066638,0.661229049118322,0.42905223562663,0.0100119397346134,0.989988060265387,0.0111574761386947,0.978830584126692,0.693289287902603,0.285541296224089,0.231345072861443,0.0541962233626458,412907,962370,0.0957198628455587,0.115699907349483,1262810 +INF0155,4,PPD,2025-08-07,Female,HEU-lo,1393256,1237743,976719,947853,0.987092935296929,0.986962641844597,0.982248115953182,0.856480041555377,0.963070560649114,0.341770461689747,0.621300098959367,0.590333179840242,0.00351120478081922,0.996488795219181,0.0112248629657737,0.985263932253407,0.718674493771364,0.266589438482043,0.226748573639117,0.0398408648429259,473057,801339,0.013037358155403,0.0177518840468182,935619 
+INF0155,9,PPD,2025-08-07,Female,HEU-lo,1302344,1145745,915824,856281,0.980871933395696,0.959576236275185,0.932713578488919,0.651871289745708,0.966500946104897,0.340636118558998,0.625864827545899,0.500440176216603,0.0116133506085877,0.988386649391412,0.0115622547856713,0.976824394605741,0.496768189200533,0.480056205405208,0.406102301136882,0.0739539042683261,273995,547508,0.0404237637248155,0.0672864215110811,839902 +INF0158,0,PPD,2025-08-07,Male,HU,1897704,1691391,1435638,1304624,0.922783116054894,0.848499649052858,0.840498884860265,0.656822703165169,0.897703287684052,0.29528959618787,0.602413691496183,0.380556669141145,0.00439650273659864,0.995603497263401,0.022052299440717,0.973551197822684,0.60114116329535,0.372410034527334,0.349540244781853,0.0228697897454814,300921,790739,0.151500350947142,0.159501115139735,1203885 +INF0158,4,PPD,2025-08-07,Male,HU,1229864,1101656,968001,903455,0.944291635997366,0.969077216117216,0.919578021978022,0.52821684981685,0.851467373816947,0.604866466208794,0.246600907608153,0.205767417089219,0.0116795720725579,0.988320427927442,0.0148394193645795,0.973481008562863,0.564210685244699,0.409270323318163,0.377326747622026,0.031943575696137,92726,450635,0.0309227838827839,0.080421978021978,853125 +INF0158,9,PPD,2025-08-07,Male,HU,1371136,1216419,1068812,1005622,0.948960941586401,0.96633644068507,0.920389480831943,0.593111571252525,0.84213539126932,0.346419813287539,0.495715577981781,0.454252973477219,0.00900785270060558,0.990992147299394,0.0225079635485339,0.968484183750861,0.506730608418997,0.461753575331863,0.43024553788471,0.0315080374471528,257109,566004,0.0336635593149296,0.0796105191680568,954296 
+INF0159,0,PPD,2025-08-07,Male,HEU-hi,1898040,1677926,1357904,1200314,0.92715656069995,0.836210405623612,0.830270856040953,0.664310315856441,0.953162260904616,0.171448010745343,0.781714250159273,0.588793137264185,0.00438555180074111,0.995614448199259,0.021238568044972,0.974375880154287,0.719421171486792,0.254954708667495,0.223869899125417,0.0310848095420786,435293,739297,0.163789594376388,0.169729143959047,1112879 +INF0159,4,PPD,2025-08-07,Male,HEU-hi,1468272,1314327,1118313,1054808,0.946339997421332,0.981112134056363,0.940226826700274,0.633504874239511,0.820599997786103,0.307417030246581,0.513182967539522,0.438671092352724,0.0131866388850837,0.986813361114916,0.0228152644898018,0.963998096625114,0.479992934441713,0.484005162183402,0.403468612338772,0.0805365498446298,277402,632369,0.0188878659436369,0.0597731732997264,998207 +INF0159,9,PPD,2025-08-07,Male,HEU-hi,1124616,979389,693430,664229,0.986489599219546,0.987891736804755,0.970960923609892,0.834537699063723,0.953283897336491,0.0770854096756791,0.876198487660812,0.865529821609809,0.00256495852542351,0.997435041474577,0.0272806791435489,0.970154362331028,0.626557673536135,0.343596688794892,0.328836556786153,0.0147601320087386,473302,546835,0.0121082631952446,0.0290390763901077,655255 +INF0155,0,SEB,2025-08-07,Female,HEU-lo,2378848,2136279,1866817,1735030,0.768161357440505,0.871291875721704,0.849967324012986,0.725989902332188,0.917141300988955,0.265460366871403,0.651680934117552,0.427269072445165,0.00920371535000726,0.990796284649993,0.00827487784819312,0.9825214068018,0.683459919694258,0.299061487107542,0.234429877606308,0.0646316095012336,413420,967587,0.128708124278296,0.150032675987014,1332783 
+INF0155,4,SEB,2025-08-07,Female,HEU-lo,995464,892094,720248,700245,0.986965990474762,0.988068607676258,0.983642446007773,0.852773332484467,0.955501410835693,0.343553337733534,0.611948073102159,0.563346777135469,0.00340041804962382,0.996599581950376,0.00690324018577306,0.989696341764603,0.704103391984772,0.285592949779831,0.238706937575674,0.0468860122041576,332018,589367,0.0119313923237421,0.016357553992227,691118 +INF0158,0,SEB,2025-08-07,Male,HU,1995888,1767501,1513529,1376026,0.927221578662031,0.808748621540724,0.801494026480526,0.617858562044579,0.886202706028309,0.318907186730161,0.567295519298148,0.350209434311708,0.00469075432400616,0.995309245675994,0.0136267318663407,0.981682513809653,0.587833016390474,0.39384949741918,0.357326813365933,0.0365226840532464,276075,788314,0.191251378459276,0.198505973519474,1275881 +INF0158,4,SEB,2025-08-07,Male,HU,1006016,899118,788056,735913,0.950509095504496,0.970617247945652,0.921042699559109,0.55347309190098,0.810776184941237,0.568911274699729,0.241864910241508,0.201115846571096,0.0105699827900645,0.989430017209936,0.00946546453982687,0.979964552670109,0.538285684929747,0.441678867740361,0.406193008142611,0.0354858595977499,77862,387150,0.029382752054348,0.0789573004408914,699492 +INF0158,9,SEB,2025-08-07,Male,HU,1120552,994396,877895,829211,0.951199393158074,0.969440059841901,0.923015676803023,0.590119747193326,0.769334885939319,0.291835498244724,0.477499387694595,0.4362257065145,0.00997818196145644,0.990021818038544,0.0136030299000704,0.976418788138473,0.498879547682018,0.477539240456455,0.442625453721625,0.0349137867348296,203043,465454,0.0305599401580993,0.0769843231969775,788745 
+INF0159,0,SEB,2025-08-07,Male,HEU-hi,2086792,1847605,1507051,1327409,0.929341295712173,0.817877686411331,0.811086269957588,0.63500068092502,0.933116486052797,0.168540889286612,0.764575596766184,0.573310423094746,0.00499887553133928,0.995001124468661,0.0134001037628507,0.98160102070581,0.708139149100091,0.273461871605719,0.229282499927633,0.0441793716780858,449101,783347,0.182122313588669,0.188913730042412,1233616 +INF0159,4,SEB,2025-08-07,Male,HEU-hi,1459280,1303303,1109557,1042434,0.940995784865037,0.978019748686445,0.936432513767603,0.633759325372148,0.809547172057246,0.292138446219946,0.5174087258373,0.446210294512692,0.0123001052646758,0.987699894735324,0.0150975500728201,0.972602344662504,0.473806399515494,0.49879594514701,0.416260508442804,0.0825354367042063,277396,621671,0.0219802513135547,0.0635674862323967,980926 +INF0159,9,SEB,2025-08-07,Male,HEU-hi,527136,464846,340607,329303,0.989851291971224,0.990468184844199,0.978966808912723,0.810075438472701,0.908817547992259,0.101222103138385,0.807595444853874,0.791356280746668,0.00333078101071976,0.99666921898928,0.0168501148545176,0.979819104134763,0.607656967840735,0.372162136294028,0.351852029096478,0.0203101071975498,208960,264053,0.00953181515580082,0.0210331910872773,325961 +INF0013,0,Ctrl,2025-08-22,Male,HEU-lo,1284824,1149119,931095,865836,0.966202606498228,0.932750638617311,0.92304556804965,0.750423453781081,0.949825099078664,0.0932661552381074,0.856558943840557,0.291104902323092,0.00844865417973089,0.991551345820269,0.00902320643936285,0.982528139380906,0.850868121104672,0.131660018276234,0.12322777987535,0.00843223840088426,182751,627784,0.0672493613826887,0.0769544319503498,836573 
+INF0013,4,Ctrl,2025-08-22,Male,HEU-lo,1212080,1066572,933636,828526,0.95181563402959,0.847583831682315,0.831272730039411,0.60063606068445,0.925172696257263,0.148377330766113,0.77679536549115,0.594396027563843,0.0112913079305544,0.988708692069446,0.0117814622226011,0.976927229846845,0.529906515500241,0.447020714346603,0.122662887506038,0.324357826840565,281544,473664,0.152416168317685,0.168727269960589,788604 +INF0013,9,Ctrl,2025-08-22,Male,HEU-lo,591208,513249,411678,387329,0.98218826888769,0.981357937071209,0.957161632889099,0.743813579370712,0.979025971042764,0.223218797818842,0.755807173223922,0.739819556205803,0.00397428181097322,0.996025718189027,0.00966820479015601,0.986357513398871,0.631418799499393,0.354938713899477,0.330366952318172,0.0245717615813056,209346,282969,0.0186420629287911,0.0428383671109008,380430 +INF0023,0,Ctrl,2025-08-22,Female,HEU-hi,1428512,1282467,1103378,1007499,0.960829737796266,0.84605411994401,0.843164761604694,0.703389856771708,0.877314930401553,0.283347481150115,0.593967449251439,0.320800521657909,0.00813056515668277,0.991869434843317,0.0207613248792547,0.971108109964062,0.74064138073111,0.230466729232953,0.199926751665255,0.0305399775676975,218435,680906,0.15394588005599,0.156835238395306,968035 +INF0023,4,Ctrl,2025-08-22,Female,HEU-hi,1945688,1737383,1389603,1347134,0.970804686096558,0.973963988487571,0.967753577753241,0.82823955271585,0.91921157707665,0.46567267523715,0.4535389018395,0.425440025849932,0.00191178449132644,0.998088215508674,0.0244929756567555,0.973595239851918,0.711776245263939,0.261818994587979,0.236718414325581,0.0251005802623984,460826,1083175,0.0260360115124285,0.0322464222467587,1307804 
+INF0023,9,Ctrl,2025-08-22,Female,HEU-hi,987800,871641,686162,661242,0.967712274779884,0.980770192469979,0.97006369824908,0.82072599751208,0.977877892363703,0.332214343382028,0.645663548981675,0.598146145292245,0.00485146371589014,0.99514853628411,0.0375956604230069,0.957552875861103,0.581545337628767,0.376007538232335,0.327330548941209,0.048676989291126,314132,525176,0.0192298075300207,0.0299363017509204,639892 +INF0030,0,Ctrl,2025-08-22,Male,HU,1055672,939062,820166,756891,0.967538258481076,0.956857716766282,0.935349389134,0.547854014837756,0.885988459764958,0.214057651325382,0.671930808439576,0.213160354432273,0.00440827399118345,0.995591726008817,0.0174693934823026,0.978122332526514,0.649021877667473,0.32910045485904,0.304334607874089,0.0247658469849511,85521,401205,0.0431422832337185,0.064650610866,732321 +INF0030,4,Ctrl,2025-08-22,Male,HU,887272,774792,587687,558971,0.959545307359416,0.973952099157652,0.963873010190955,0.820582521375648,0.966016549806192,0.344733099157968,0.621283450648223,0.586763790369122,0.00488673765730881,0.995113262342691,0.0210029041626331,0.974110358180058,0.605254598257502,0.368855759922556,0.302152952565344,0.066702807357212,258250,440126,0.0260479008423479,0.0361269898090455,536358 +INF0030,9,Ctrl,2025-08-22,Male,HU,1059640,929289,780683,761469,0.997026799515148,0.989099123425162,0.984053055498844,0.880723915148083,0.981709414491887,0.19381141105212,0.787898003439767,0.731571076048755,0.00457105475657498,0.995428945243425,0.0326617807897131,0.962767164453712,0.52534420900923,0.437422955444482,0.392945120767021,0.0444778346774606,489165,668650,0.0109008765748382,0.0159469445011559,759205 
+INF0013,0,PPD,2025-08-22,Male,HEU-lo,1309848,1169793,941515,876615,0.962326677047507,0.941158619708626,0.932124610296471,0.759162626394339,0.937937797264928,0.0886665355031526,0.849271261761776,0.284356252595945,0.00974147209348299,0.990258527906517,0.00882443385243921,0.981434094054078,0.848205460495969,0.133228633558108,0.124788586992334,0.00844004656577416,182108,640422,0.0588413802913738,0.0678753897035289,843590 +INF0013,4,PPD,2025-08-22,Male,HEU-lo,1021264,890816,775431,680459,0.904611446097414,0.847437499086185,0.828137717264695,0.527857155621549,0.906420905876161,0.1306524930522,0.775768412823961,0.582688821659285,0.00979247764473483,0.990207522355265,0.012153447173967,0.978054075181298,0.563225918903073,0.414828156278225,0.36157693750033,0.0532512187778946,189329,324923,0.152562500913815,0.171862282735305,615551 +INF0013,9,PPD,2025-08-22,Male,HEU-lo,739216,639880,503620,472587,0.982966099363715,0.979256765338391,0.957546976882358,0.746838249698086,0.975531369078845,0.202663907256135,0.77286746182271,0.752431298171987,0.00539755749988508,0.994602442500115,0.00971866811725226,0.984883774382863,0.620462450774582,0.364421323608281,0.338517644535021,0.0259036790732597,261044,346934,0.0207432346616093,0.0424530231176419,464537 +INF0023,0,PPD,2025-08-22,Female,HEU-hi,1572088,1406726,1193232,1088074,0.963874699698734,0.860464717139269,0.857511725674053,0.722300568191028,0.866451932279463,0.280125408402363,0.5863265238771,0.293390977195472,0.00882785679254536,0.991172143207455,0.0203418657283882,0.970830277479066,0.733432020553338,0.237398256925728,0.203517644465042,0.0338806124606863,222251,757525,0.139535282860731,0.142488274325947,1048767 
+INF0023,4,PPD,2025-08-22,Female,HEU-hi,1960856,1733414,1389954,1349503,0.97374070305883,0.974950268860164,0.969395753333546,0.816614994985031,0.920252430839653,0.478423910105993,0.44182852073366,0.41255500491107,0.00191097046127574,0.998089029538724,0.0242485436191431,0.973840485919581,0.713799420384137,0.260041065535444,0.234965790014615,0.0250752755208298,442707,1073086,0.0250497311398362,0.0306042466664536,1314066 +INF0023,9,PPD,2025-08-22,Female,HEU-hi,942384,831282,659209,636544,0.965433654232857,0.981319391220439,0.969826260574966,0.819434342053663,0.976605318760227,0.343987799259695,0.632617519500532,0.588131682208842,0.00467976054212291,0.995320239457877,0.0367965587215407,0.958523680736336,0.586502976341211,0.372020704395126,0.324355351167745,0.0476653532273803,296169,503576,0.0186806087795607,0.0301737394250343,614541 +INF0030,0,PPD,2025-08-22,Male,HU,972560,864436,748225,696152,0.967973948218205,0.94563980191643,0.924446878195225,0.564357126215206,0.915665691987294,0.220120642867661,0.695545049119633,0.217351747060185,0.00416172663263084,0.995838273367369,0.0178930049118053,0.977945268455564,0.648612354521038,0.329332913934525,0.304435142393961,0.0248977715405647,82658,380296,0.0543601980835696,0.0755531218047746,673857 +INF0030,4,PPD,2025-08-22,Male,HU,983584,844256,638719,608344,0.973238825401418,0.972557020862609,0.963926197167874,0.819625918819587,0.948748225218486,0.355848587696359,0.592899637522127,0.555135171893643,0.0058947774795743,0.994105222520426,0.0206465694845039,0.973458653035922,0.598208551881837,0.375250101154085,0.307887048936304,0.0673630522177801,269391,485271,0.0274429791373906,0.0360738028321262,592064 
+INF0030,9,PPD,2025-08-22,Male,HU,1369512,1205519,999002,973830,0.996570243266279,0.990564560170635,0.984433636616554,0.888463559645128,0.979156736194469,0.202834461203022,0.776322274991447,0.725544363840904,0.00440699044272911,0.995593009557271,0.0321021360396549,0.963490873517616,0.543279459460324,0.420211414057292,0.376963124823169,0.0432482892341236,625597,862245,0.00943543982936457,0.0155663633834455,970490 +INF0013,0,SEB,2025-08-22,Male,HEU-lo,1304808,1167431,967694,900086,0.963833455914213,0.930778425719828,0.919863567149607,0.731671302417314,0.937149960062954,0.10830737819201,0.828842581870944,0.296109170711573,0.00917240828921816,0.990827591710782,0.00503844005214014,0.985789151658642,0.773584102577745,0.212205049080897,0.17543028916496,0.0367747599159373,187955,634749,0.0692215742801715,0.080136432850393,867533 +INF0013,4,SEB,2025-08-22,Male,HEU-lo,1108496,975841,843244,746237,0.875108042083145,0.883216596890227,0.866020047837951,0.519903895332278,0.895616419796358,0.120630189357234,0.774986230439124,0.60606979915586,0.00892739987656181,0.991072600123438,0.00634200154540727,0.984730598578031,0.529000685227753,0.455729913350278,0.401436548396032,0.0542933649542453,205771,339517,0.116783403109773,0.133979952162049,653038 +INF0013,9,SEB,2025-08-22,Male,HEU-lo,338400,296428,235954,224609,0.973162250844801,0.98512679510113,0.96496035794511,0.781618713428889,0.98204826540706,0.217726972086136,0.764321293320925,0.748031864767892,0.00402976549112278,0.995970234508877,0.00665106925719294,0.989319165251684,0.625662172630459,0.363656992621226,0.337897792627485,0.0257591999937402,127799,170847,0.0148732048988705,0.0350396420548904,218581 
+INF0023,0,SEB,2025-08-22,Female,HEU-hi,1553624,1392141,1200738,1097477,0.961156361363382,0.842094635525342,0.839047748156842,0.695863949937763,0.866243886489653,0.296521940520142,0.569721945969511,0.334008146805989,0.00759466823291404,0.992405331767086,0.010641508818299,0.981763822948787,0.657619956601896,0.324143866346891,0.233648214314848,0.0904956520320428,245172,734030,0.157905364474658,0.160952251843158,1054847 +INF0023,4,SEB,2025-08-22,Female,HEU-hi,1938960,1727707,1390235,1352003,0.974983043676678,0.973879895006752,0.968768301749382,0.812414086088395,0.935776929484139,0.488083943718788,0.447692985765351,0.423472417798728,0.00190297684674752,0.998097023153252,0.0136758544652701,0.984421168687982,0.703230429988975,0.281190738699008,0.245587651598677,0.0356030871003308,453500,1070908,0.0261201049932482,0.0312316982506182,1318180 +INF0023,9,SEB,2025-08-22,Female,HEU-hi,904872,793595,633506,611464,0.957374432509518,0.979145883156816,0.968143149982918,0.821083020157158,0.971231343438841,0.380720755957409,0.590510587481432,0.543729273377134,0.00462215419934953,0.99537784580065,0.0153969772335948,0.979980868567056,0.561159364836426,0.418821503730629,0.359801033097379,0.0590204706332504,261350,480662,0.0208541168431842,0.0318568500170824,585400 +INF0030,0,SEB,2025-08-22,Male,HU,1039944,924772,816017,756973,0.967017317658622,0.960038578918752,0.938458154714579,0.543828875719598,0.88617032500515,0.227832679370789,0.65833764563436,0.221035153208101,0.00398904433407962,0.99601095566592,0.0100464820265709,0.985964473639349,0.606050618813288,0.379913854826062,0.328624518416656,0.0512893364094055,87991,398086,0.039961421081248,0.0615418452854212,732006 
+INF0030,4,SEB,2025-08-22,Male,HU,776400,671086,524037,503966,0.974065710782077,0.979785942439947,0.9725400084743,0.853854991688667,0.979747300514847,0.400874141723567,0.57887315879128,0.548695706112789,0.00444805815955615,0.995551941840444,0.0138442005669861,0.981707741273458,0.59258309129172,0.389124649981738,0.320364540758648,0.0687601092230899,229988,419154,0.0202140575600535,0.0274599915256999,490896 +INF0030,9,SEB,2025-08-22,Male,HU,1288944,1140767,954830,930021,0.996728030872421,0.988507817877015,0.982723430329522,0.876881651991741,0.981596850587439,0.215690471796764,0.765906378790675,0.717375899612475,0.00418782444063733,0.995812175559363,0.0166741265504983,0.979138049008864,0.543307626745141,0.435830422263723,0.388106029815526,0.0477243924481967,583119,812850,0.0114921821229845,0.0172765696704776,926978 +INF0166,0,Ctrl,2025-08-28,Female,HU,1142328,1017218,889137,819646,0.902212662539682,0.902008803304958,0.89485256830675,0.618988634135457,0.762528427772158,0.0749357166420165,0.687592711130142,0.492966515852921,0.00387325504099269,0.996126744959007,0.00858852204741857,0.987538222911589,0.842814092621316,0.144724130290273,0.135945047640151,0.00877908265012187,225650,457739,0.0979911966950419,0.10514743169325,739495 +INF0166,4,Ctrl,2025-08-28,Female,HU,910864,805652,630109,604513,0.989270702201607,0.977171599275618,0.961137206179654,0.796360365000242,0.943407279866455,0.410589087549476,0.53281819231698,0.485300633077513,0.0113749448343299,0.98862505516567,0.00761502583051375,0.981010029335156,0.667746904232397,0.31326312510276,0.263912565657964,0.0493505594447954,231122,476245,0.022828400724382,0.0388627938203459,598027 
+INF0166,9,Ctrl,2025-08-28,Female,HU,1053896,933222,740554,713494,0.978962682236991,0.979133380292176,0.963402168124109,0.803150823784081,0.935201822498877,0.276155639692828,0.659046182806049,0.631649874863633,0.00772968945782113,0.992270310542179,0.00842956641493673,0.983840744127242,0.701231557677763,0.282609186449479,0.242981475837312,0.0396277106121666,354348,560988,0.0208666197078243,0.0365978318758912,698484 +INF0199,0,Ctrl,2025-08-28,Male,HEU-hi,1729240,1545717,1379096,1282394,0.867265442601884,0.896476816618952,0.873258369178979,0.717055573937938,0.28363795498889,0.0213393990159149,0.262298555972975,0.21283724476233,0.00818329641325352,0.991816703586746,0.0199721921101004,0.971844511476646,0.552840882311354,0.419003629165292,0.373862940095207,0.0451406890700853,169736,797492,0.103523183381048,0.126741630821021,1112176 +INF0199,4,Ctrl,2025-08-28,Male,HEU-hi,955672,847071,733820,687490,0.930010618336267,0.934141729475596,0.901921100828468,0.565672932701256,0.826955617735211,0.233648348245391,0.59330726948982,0.521875933155642,0.0159364238410596,0.98406357615894,0.0125139072847682,0.971549668874172,0.286638410596027,0.684911258278146,0.529997350993378,0.154913907284768,188750,361676,0.0658582705244044,0.0980788991715321,639373 +INF0199,9,Ctrl,2025-08-28,Male,HEU-hi,1090984,980857,815737,784725,0.987602026187518,0.97428760922637,0.950512002642594,0.742637381354226,0.926212033547567,0.20359800604996,0.722614027497607,0.674732121603848,0.00524803263153558,0.994751967368464,0.0140136376745911,0.980738329693873,0.336626014585308,0.644112315108566,0.534534526801533,0.109577788307033,388336,575541,0.0257123907736297,0.0494879973574057,774996 
+INF0207,0,Ctrl,2025-08-28,Male,HEU-lo,1373960,1231782,1089637,986621,0.917642134112288,0.63901299475902,0.63159830565573,0.394152634572797,0.91575499086456,0.337731048165626,0.578023942698934,0.109443130485467,0.0260914095506337,0.973908590449366,0.015567789015491,0.958340801433875,0.651952374855972,0.306388426577903,0.247727563692229,0.058660862885674,39055,356852,0.36098700524098,0.36840169434427,905365 +INF0207,4,Ctrl,2025-08-28,Male,HEU-lo,864640,768455,603156,581744,0.986143389532165,0.971151315273418,0.956406238288393,0.818066772067501,0.970326712989894,0.492044720878053,0.478281992111841,0.372480082503926,0.00918716999696812,0.990812830003032,0.0178766539480233,0.972936176055009,0.451521374757593,0.521414801297416,0.454879325435189,0.0665354758622268,174809,469311,0.0288486847265824,0.0435937617116072,573683 +INF0207,9,Ctrl,2025-08-28,Male,HEU-lo,1155280,1020738,803205,743244,0.981793327628612,0.965042372881356,0.952004352402044,0.778534545135615,0.96670867760594,0.315897033300124,0.650811644305816,0.601995050219501,0.00591233256432074,0.994087667435679,0.0177428456974769,0.976344821738202,0.702509086337015,0.273835735401188,0.238192147884338,0.0356435875168496,341997,568106,0.034957627118644,0.0479956475979565,729712 +INF0166,0,PPD,2025-08-28,Female,HU,1421024,1265409,1081320,999289,0.913050178677039,0.909845561326654,0.902988927017835,0.662145262883316,0.743515597326456,0.0731665734214804,0.670349023904976,0.475618314899477,0.00287811346100974,0.99712188653899,0.009069363578466,0.988052522960524,0.838769963214439,0.149282559746086,0.140460289342628,0.00882227040345791,287341,604142,0.0901544386733465,0.0970110729821646,912401 
+INF0166,4,PPD,2025-08-28,Female,HU,895072,775403,609274,585499,0.990645586072735,0.976435376589164,0.957963663447249,0.793280254886884,0.965443797270277,0.420251238807268,0.54519255846301,0.498737285925411,0.00980917643880268,0.990190823561197,0.00757367776572148,0.982617145795476,0.681099359854279,0.301517785941197,0.251665729761765,0.0498520561794325,229479,460120,0.0235646234108361,0.0420363365527514,580022 +INF0166,9,PPD,2025-08-28,Female,HU,996616,876933,700861,676818,0.985086094045962,0.980557172083201,0.964765030207402,0.808679453566993,0.94496500150232,0.279680098522533,0.665284902979787,0.638831083562391,0.0074672798429897,0.99253272015701,0.00825697662265268,0.984275743534358,0.704746309909533,0.279529433624824,0.239013924212336,0.0405155094124888,344436,539166,0.0194428279167992,0.0352349697925979,666724 +INF0199,0,PPD,2025-08-28,Male,HEU-hi,1516328,1353418,1204623,1122951,0.855110329836297,0.90664797703091,0.883277948277891,0.726175661053875,0.292509192494565,0.020824370292611,0.271684822201954,0.22272797673338,0.00813212285107205,0.991867877148928,0.019490052153757,0.972377824995171,0.554053183954671,0.4183246410405,0.370826089755972,0.0474985512845277,155310,697308,0.0933520229690903,0.116722051722109,960247 +INF0199,4,PPD,2025-08-28,Male,HEU-hi,897696,790855,670560,632602,0.937085244751044,0.939225913542802,0.909882220370377,0.599200070175202,0.859586100499146,0.233278060398584,0.626308040100561,0.548494258277568,0.0174050064415462,0.982594993558454,0.0131551257769634,0.96943986778149,0.267480713856767,0.701959153924724,0.548701681987794,0.153257471936929,194829,355207,0.0607740864571982,0.0901177796296234,592802 
+INF0199,9,PPD,2025-08-28,Male,HEU-hi,858968,768197,633815,610401,0.98810617938044,0.973886703109223,0.953236473726707,0.773676470344414,0.952894761655766,0.188153078630881,0.764741683024885,0.707733651068499,0.00545943425363508,0.994540565746365,0.0140497919782955,0.980490773768069,0.311602584677248,0.668888189090821,0.558709357040339,0.110178832050482,330254,466636,0.0261132968907768,0.0467635262732926,603141 +INF0207,0,PPD,2025-08-28,Male,HEU-lo,1232224,1105088,964965,866432,0.912570172846802,0.669899327161431,0.662423483583751,0.437218596650984,0.927630893838588,0.339022273647671,0.588608620190917,0.112016199016488,0.0265210205557277,0.973478979444272,0.0136607788451606,0.959818200599112,0.65434872430534,0.305469476293771,0.245945666769962,0.0595238095238095,38724,345700,0.330100672838569,0.337576516416249,790680 +INF0207,4,PPD,2025-08-28,Male,HEU-lo,964456,862394,674289,650158,0.985832674519117,0.971358006200201,0.957436418299797,0.726423557642052,0.962862892746763,0.463789655905618,0.499073236841144,0.370627943788539,0.0095095153102617,0.990490484689738,0.0172283906260866,0.973262094063652,0.403937090007186,0.569325004056466,0.502746806981757,0.0665781970747085,172564,465599,0.0286419937997994,0.042563581700203,640947 +INF0207,9,PPD,2025-08-28,Male,HEU-lo,1131528,982637,762643,713953,0.983508718361013,0.955307692198143,0.941492126651466,0.919422255578706,0.941057839308921,0.299647304286407,0.641410535022514,0.565919401981726,0.00585728479268223,0.994142715207318,0.0182533795712139,0.975889335636104,0.679510725126383,0.296378610509721,0.256338321148904,0.040040289360817,365357,645599,0.0446923078018568,0.0585078733485337,702179 
+INF0166,0,SEB,2025-08-28,Female,HU,1550664,1380306,1200328,1097863,0.91635112942143,0.876639616392387,0.869641799234216,0.617185605172023,0.785068915423591,0.0850933957797154,0.699975519643875,0.529613178162234,0.00253922880428172,0.997460771195718,0.00601508332319669,0.991445687872522,0.822019827271621,0.1694258606009,0.142005230507238,0.0274206300936626,328840,620906,0.123360383607613,0.130358200765784,1006028 +INF0166,4,SEB,2025-08-28,Female,HU,775352,679787,538754,516822,0.988949773809938,0.976631299267674,0.959359121599809,0.795521912069981,0.960580423020167,0.423162813575996,0.537417609444171,0.494367929168716,0.0101387990647231,0.989861200935277,0.0036664842545147,0.986194716680762,0.640256703646585,0.345938013034177,0.290030346748918,0.0559076662852594,201010,406600,0.0233687007323262,0.0406408784001909,511111 +INF0166,9,SEB,2025-08-28,Female,HU,1043248,923970,742880,716921,0.981115074045815,0.982730578831986,0.967342070169552,0.814757841400548,0.945940050882416,0.303301773206814,0.642638277675602,0.6141730909497,0.00777898367493053,0.992221016325069,0.0048838834686653,0.987337132856404,0.680834379812145,0.306502753044259,0.259226533778063,0.0472762192661958,351974,573086,0.017269421168014,0.0326579298304478,703382 +INF0199,0,SEB,2025-08-28,Male,HEU-hi,1582712,1412403,1263938,1177939,0.869944029359755,0.866133979220115,0.843422874658084,0.708885464717426,0.282640716716408,0.0414248427915377,0.24121587392487,0.187879255090691,0.0113936107854631,0.988606389214537,0.0120896834701055,0.976516705744431,0.533514067995311,0.443002637749121,0.377667057444314,0.0653355803048066,136480,726424,0.133866020779885,0.156577125341916,1024741 
+INF0199,4,SEB,2025-08-28,Male,HEU-hi,915104,813472,701070,665961,0.932535088391062,0.95633880379755,0.926079493488258,0.603382756444112,0.84911133646456,0.242567783945346,0.606543552519214,0.533737190435525,0.017189828101719,0.982810171898281,0.0044249557504425,0.978385216147839,0.242657573424266,0.735727642723573,0.579969200307997,0.155758442415576,200002,374720,0.0436611962024501,0.0739205065117418,621032 +INF0199,9,SEB,2025-08-28,Male,HEU-hi,333800,301408,254051,245249,0.982495341469282,0.988259267252112,0.96349125981507,0.795941997709125,0.948302022556273,0.205358027394975,0.742943995161299,0.688774525906344,0.00454208239337462,0.995457917606625,0.00611667095641115,0.989341246650214,0.31760511135672,0.671736135293494,0.553104513315872,0.118631621977623,132098,191787,0.0117407327478876,0.03650874018493,240956 +INF0207,0,SEB,2025-08-28,Male,HEU-lo,1196056,1073884,953274,871433,0.919260574249541,0.604085265531025,0.597713819197727,0.39015496695686,0.891970064919067,0.385367773394381,0.506602291524686,0.11529933481153,0.022005772005772,0.977994227994228,0.0057997557997558,0.972194472194472,0.417637917637918,0.554556554556555,0.39005439005439,0.164502164502165,36036,312543,0.395914734468975,0.402286180802273,801074 +INF0207,4,SEB,2025-08-28,Male,HEU-lo,766016,680078,547528,530330,0.987837761393849,0.981113995571505,0.966207910208445,0.830596701534703,0.969487490031783,0.511599901639269,0.457887588392514,0.360494377581107,0.00866361092163225,0.991336389078368,0.00758623767236378,0.983750151406004,0.409236084991362,0.574514066414642,0.498772814494176,0.0757412519204656,156863,435133,0.0188860044284951,0.0337920897915553,523880 
+INF0207,9,SEB,2025-08-28,Male,HEU-lo,677136,599173,485830,453507,0.985559208567894,0.957409868488762,0.946164964045839,0.899464826672752,0.94905764098074,0.329227432261338,0.619830208719402,0.550475967792887,0.00663340924700864,0.993366590752991,0.00992752051476702,0.983439070238224,0.675970610562846,0.307468459675379,0.258983118244587,0.048485341430792,221304,402023,0.0425901315112381,0.0538350359541613,446958 +INF0614,0,Ctrl,2025-08-30,Female,HU,2502056,2240583,1939275,1791189,0.875958371785445,0.859828541236591,0.858035050194167,0.720463324892751,0.868600121017824,0.205269406198802,0.663330714819022,0.198508154548961,0.00489313535000624,0.995106864649994,0.0189308187311717,0.976176045918822,0.539608549172,0.436567496746823,0.383946237900854,0.0526212588459687,224396,1130412,0.140171458763409,0.141964949805833,1569007 +INF0614,4,Ctrl,2025-08-30,Female,HU,1391096,1344752,1105255,990319,0.672744842823373,0.908350244359322,0.897685791135821,0.636089230178076,0.813475292779559,0.300045070236418,0.513430222543141,0.384902178709387,0.00855224841369586,0.991447751586304,0.0171044968273917,0.974343254758912,0.342206418784293,0.632136835974619,0.525016092940563,0.107120743034056,163115,423783,0.0916497556406777,0.102314208864179,666232 +INF0614,9,Ctrl,2025-08-30,Female,HU,985816,886942,751813,687188,0.944112819199404,0.96687644404986,0.93044823924178,0.507858559795802,0.823181280160248,0.305544933078394,0.517636347081854,0.383501775471183,0.0226970560303894,0.977302943969611,0.0197293447293447,0.957573599240266,0.266927825261159,0.690645773979107,0.576155428933207,0.114490345045901,126360,329490,0.0331235559501405,0.0695517607582197,648783 
+INF0622,0,Ctrl,2025-08-30,Male,HEU-lo,1419968,1254039,1116059,1024019,0.917274972437035,0.785139469843193,0.771036519476593,0.350359360677606,0.912788100700406,0.0960209058174691,0.816767194882937,0.492028137771768,0.0128146537881969,0.987185346211803,0.015544329438502,0.971641016773301,0.670499740619056,0.301141276154245,0.202070106963761,0.0990711691904844,161924,329095,0.214860530156807,0.228963480523407,939307 +INF0622,9,Ctrl,2025-08-30,Male,HEU-lo,918688,813691,633925,594892,0.95913207775529,0.997215114444951,0.975340881208595,0.762248939675418,0.933758541722232,0.252400419383616,0.681358122338615,0.669201975517562,0.00334304522903124,0.996656954770969,0.0180139631406072,0.978642991630362,0.721925978862884,0.256717012767478,0.199417973420557,0.0572990393469208,291052,434924,0.0027848855550493,0.0246591187914053,570580 +INF0627,9,Ctrl,2025-08-30,Female,HEU-hi,1040808,940435,782722,722083,0.842248051816758,0.987569326490982,0.956834650666833,0.52995447019187,0.823986050436855,0.177313964455917,0.646672085980937,0.593625893566322,0.00370567820705804,0.996294321792942,0.0216225539387857,0.974671767854156,0.711500669008195,0.263171098845961,0.229224159558455,0.0339469392875063,191328,322304,0.0124306735090179,0.0431653493331667,608173 +INF0614,0,PPD,2025-08-30,Female,HU,2167928,1927832,1630916,1508441,0.901599068177012,0.899544047134982,0.897743323757416,0.776620595893115,0.908680178487064,0.223675951112041,0.685004227375023,0.185058667254933,0.00546912171737584,0.994530878282624,0.017650579911082,0.976880298371542,0.527435140513964,0.449445157857578,0.392548897222464,0.0568962606351139,195461,1056211,0.100455952865018,0.102256676242584,1360009 
+INF0614,4,PPD,2025-08-30,Female,HU,1324872,1244577,972809,876491,0.785615596737445,0.948302678681644,0.935248371660724,0.680051119324412,0.859840306829563,0.3080638858102,0.551776421019363,0.414792225902412,0.00831977594266768,0.991680224057332,0.0176383368685517,0.974041887188781,0.343659259869437,0.630382627319343,0.52646265367903,0.103919973640314,194236,468273,0.0516973213183558,0.0647516283392755,688585 +INF0614,9,PPD,2025-08-30,Female,HU,1181336,1066572,902472,813387,0.935030926238064,0.937837752550155,0.912820067793758,0.477232026633638,0.668151699246463,0.24215123086884,0.426000468377623,0.311675001033186,0.0230631872988932,0.976936812701107,0.0201813938686751,0.956755418832432,0.25828294614759,0.698472472684841,0.58343057176196,0.115041900922881,113124,362955,0.062162247449845,0.0871799322062424,760542 +INF0622,0,PPD,2025-08-30,Male,HEU-lo,1269256,1125016,972593,879358,0.920196325046227,0.800979260537184,0.785547627109847,0.413038105148162,0.886671473836331,0.082516762760193,0.804154711076138,0.486926393455866,0.0125105995993659,0.987489400400634,0.014691966425385,0.972797433975249,0.669956126875668,0.302841307099581,0.200796352508879,0.102044954590702,162742,334223,0.199020739462816,0.214452372890153,809182 +INF0622,4,PPD,2025-08-30,Male,HEU-lo,741976,658809,504597,468821,0.970803355651731,0.996280208202877,0.982264524875146,0.819419817943326,0.97596160291732,0.177267425491694,0.798694177425626,0.765994985855823,0.00438611844270042,0.9956138815573,0.0175059683415362,0.978107913215763,0.71630599914588,0.261801914069884,0.200399756365648,0.0614021577042363,285674,372945,0.00371979179712301,0.0177354751248536,455133 
+INF0622,9,PPD,2025-08-30,Male,HEU-lo,954616,852281,660431,621750,0.931760353839968,0.996385429864566,0.974601344330096,0.757045304683751,0.935292414261708,0.259471057269827,0.675821356991881,0.664087848545168,0.00325149098200521,0.996748509017995,0.0171123875969525,0.979636121421042,0.720845593663198,0.258790527757845,0.200030214488534,0.0587603132693107,291251,438573,0.0036145701354342,0.0253986556699037,579322 +INF0627,4,PPD,2025-08-30,Female,HEU-hi,1274376,1221638,990572,930591,0.858136388596064,0.979495951533609,0.966596708633139,0.547259239594577,0.906030977491093,0.356671326943187,0.549359650547907,0.521679438570157,0.00289050300893029,0.99710949699107,0.0154043195255891,0.981705177465481,0.796094531291121,0.18561064617436,0.150854430934961,0.0347562152393986,227988,437027,0.0205040484663913,0.0334032913668614,798574 +INF0627,9,PPD,2025-08-30,Female,HEU-hi,976928,872861,706543,648987,0.869935761425113,0.991138498380203,0.960790113660316,0.559101769997715,0.856188382289581,0.187127759332945,0.669060622956636,0.615128494310262,0.0036360078076315,0.996363992192368,0.0211568272999294,0.975207164892439,0.712137364872868,0.263069800019571,0.228496824930859,0.0345729750887114,194169,315656,0.00886150161979682,0.0392098863396844,564577 +INF0614,0,SEB,2025-08-30,Female,HU,2404768,2139021,1869790,1725277,0.909617991777552,0.851620710067844,0.849870933250411,0.716843927681839,0.850154759132211,0.209923073777705,0.640231685354506,0.167388757429061,0.00548038320198823,0.994519616798012,0.013132739979183,0.981386876818829,0.510089852794358,0.471297024024471,0.398103107674661,0.0731939163498099,188308,1124974,0.148379289932156,0.150129066749589,1569343 
+INF0622,0,SEB,2025-08-30,Male,HEU-lo,1313704,1154243,1022408,932997,0.914149777544837,0.799174345379699,0.783710615207662,0.37059135958654,0.884101026015813,0.0979349968520329,0.78616602916378,0.508100874154083,0.0129515127740521,0.987048487225948,0.0107846250599319,0.976263862166016,0.655259372723367,0.321004489442649,0.203008736044434,0.117995753398215,160599,316077,0.200825654620301,0.216289384792338,852899 +INF0622,9,SEB,2025-08-30,Male,HEU-lo,611904,549416,424379,399846,0.9352450693517,0.997323200179701,0.976015766645095,0.781454403482781,0.95273211328141,0.290235706366262,0.662496406915148,0.649376514228616,0.00375725893995763,0.996242741060042,0.0132004679447319,0.98304227311531,0.715407396477767,0.267634876637543,0.205200088530084,0.0624347881074587,189766,292228,0.00267679982029878,0.0239842333549046,373954 +INF0627,9,SEB,2025-08-30,Female,HEU-hi,688488,616891,512153,475077,0.865657567089124,0.992228647016199,0.958218035569259,0.557577555476664,0.834356711119639,0.198455339153794,0.635901371965845,0.586578632918458,0.00385112931765126,0.996148870682349,0.00725618188036221,0.988892688801987,0.690489643584673,0.298403045217314,0.256196749587379,0.0422062956299347,134506,229306,0.00777135298380072,0.0417819644307411,411254 diff --git a/course/04_IntroToTidyverse/homeworks/README.md b/course/04_IntroToTidyverse/homeworks/README.md new file mode 100644 index 0000000..2818b5e --- /dev/null +++ b/course/04_IntroToTidyverse/homeworks/README.md @@ -0,0 +1,5 @@ +# Turning In Optional Take-Home Problems + +This folder is for the use of submitting your completed Take-Home Problems for evaluation by course instructors. Please see [Getting Help](/course/00_Homeworks/index.qmd) walkthrough for more detailed instructions. + +Within your branch, inside this "homeworks" folder, create a new folder (name it with your GitHub username). Then copy all files you will be submitting within your folder. Then commit the change to git, and push to GitHub. 
See [Getting Help](/course/00_Homeworks/index.qmd) for details on submitting the pull request to the UMGCCCFCSR/CytometryInR homework branch. \ No newline at end of file diff --git a/course/04_IntroToTidyverse/images/00_CheckNamesTRUE.png b/course/04_IntroToTidyverse/images/00_CheckNamesTRUE.png new file mode 100644 index 0000000..cb71c41 Binary files /dev/null and b/course/04_IntroToTidyverse/images/00_CheckNamesTRUE.png differ diff --git a/course/04_IntroToTidyverse/images/01_DataView.png b/course/04_IntroToTidyverse/images/01_DataView.png new file mode 100644 index 0000000..10c6e7f Binary files /dev/null and b/course/04_IntroToTidyverse/images/01_DataView.png differ diff --git a/course/04_IntroToTidyverse/images/02_Glimpse.png b/course/04_IntroToTidyverse/images/02_Glimpse.png new file mode 100644 index 0000000..13644d3 Binary files /dev/null and b/course/04_IntroToTidyverse/images/02_Glimpse.png differ diff --git a/course/04_IntroToTidyverse/images/03_ColumnClass.png b/course/04_IntroToTidyverse/images/03_ColumnClass.png new file mode 100644 index 0000000..f2fff2a Binary files /dev/null and b/course/04_IntroToTidyverse/images/03_ColumnClass.png differ diff --git a/course/04_IntroToTidyverse/images/TakeAway.jpg b/course/04_IntroToTidyverse/images/TakeAway.jpg new file mode 100644 index 0000000..4d60a0e Binary files /dev/null and b/course/04_IntroToTidyverse/images/TakeAway.jpg differ diff --git a/course/04_IntroToTidyverse/images/WebsiteBanner.png b/course/04_IntroToTidyverse/images/WebsiteBanner.png new file mode 100644 index 0000000..71d5502 Binary files /dev/null and b/course/04_IntroToTidyverse/images/WebsiteBanner.png differ diff --git a/course/04_IntroToTidyverse/index.qmd b/course/04_IntroToTidyverse/index.qmd new file mode 100644 index 0000000..c5b77de --- /dev/null +++ b/course/04_IntroToTidyverse/index.qmd @@ -0,0 +1,584 @@ +--- +title: "04 - Introduction to Tidyverse" +author: "David Rach" +date: 02-23-2026 +format: html +toc: true +toc-depth: 5 +--- + 
+![](/images/WebsiteBanner.png) + +::: {style="text-align: right;"} +[![AGPL-3.0](https://img.shields.io/badge/license-AGPLv3-blue)](https://www.gnu.org/licenses/agpl-3.0.en.html) [![CC BY-SA 4.0](https://img.shields.io/badge/License-CC%20BY--SA%204.0-lightgrey.svg)](http://creativecommons.org/licenses/by-sa/4.0/) +::: + +For the YouTube livestream recording, see [here](https://youtu.be/luC7SY4RJcU?t=204) + + + +For screen-shot slides, click [here](/course/04_IntroToTidyverse/slides.qmd) + + +# Background + +Within our daily workflows as cytometrists, after acquiring data on our respective instruments, we begin analyzing the resulting datasets. After implementing various workflows, we then export data for downstream statistical analysis. + +When I first started my Ph.D. program, a substantial amount of my day was spent renaming column names of the exported data so that they would fit nicely in a Microsoft Excel sheet column; setting up formulas to combine the proportion of positive cells across positive quadrants, etc. Once this was done, additional hours would go by as I copied and pasted the contents of those columns over to a GraphPad Prism worksheet for statistical analysis. + +This, of course, was the ideal scenario. Oftentimes, the data was less organized, and instead of time spent copying and pasting over columns, it would first be spent rearranging values from individual cells in the worksheet that were separated by spaces, all the while trying to remember what various color codes and bold fonts stood for. + +Today, we will explore what makes data ["tidy"](https://vita.had.co.nz/papers/tidy-data.pdf), and how to use the toolsets implemented in the various [tidyverse](https://cran.r-project.org/web/packages/tidyverse/vignettes/paper.html) R packages. At its simplest, if we think of and organize all our data in terms of rows and columns, we need fewer tools (ie. functions) to reshape and extract the useful information that we are interested in. 
+Additionally, this approach aligns more closely with how computers work, allowing us to carry out tasks that would otherwise have taken hours in mere seconds. + +The dataset we will be using today is a manually-gated spectral flow cytometry dataset (similar to ones we would see exported by commercial software), and it has been intentionally left slightly messy. You could, however, just as easily use a "matrix" or "data.frame" object exported from inside an [fcs file](/course/03_InsideFCSFile/), or swap in your own dataset. You would just need to make sure to switch out the input data by providing an alternate [file path](/course/02_FilePaths/), etc. + +--- + +# Walk Through + +:::{.callout-important title="Housekeeping"} +As we do [every week](/course/02_FilePaths/index.qmd), on GitHub, sync your forked version of the CytometryInR course to bring in the most recent updates. Then within Positron, pull those changes to your local computer. + +After creating a "Week04" project folder, copy over the contents of "course/04_IntroToTidyverse" to that folder. This will hopefully prevent any merge issues when you attempt to bring in new data to your local Cytometry in R folder next week. Please remember, once you have set up your project folder, to stage, commit, and push your changes to "Week04" to GitHub so that they are backed up remotely. + +If you are having issues syncing due to the Take-Home Problem merge conflict, see this [walkthrough](https://umgcccfcsr.github.io/CytometryInR/course/00_BonusContent/PullConflicts/) +::: + +--- + +## read.csv + +We will start by loading in our copied-over dataset (Dataset.csv) from its location in the project folder. 
+If you are following the organization scheme we have been using throughout the course, your file path will look something like this: + +```{r} +#| eval: FALSE +#| include: FALSE + +# For use only when building the website, otherwise keep eval as FALSE +thefilepath <- file.path(getwd(), "course", "04_IntroToTidyverse", "data", "Dataset.csv") + +thefilepath +``` +```{r} +#| eval: TRUE +thefilepath <- file.path("data", "Dataset.csv") + +thefilepath +``` + + +:::{.callout-tip title="Reminder"} +We encourage using the `file.path()` function to build our file paths, as this keeps our code reproducible and replicable when a project folder is copied to other people's computers, which may differ in whether the operating system uses forward or backward slashes to separate folders. +::: + +Above, we directly specified the name (Dataset) and filetype (.csv) of the file we wanted in the last argument of `file.path()` ("Dataset.csv"). This allows us to skip the `list.files()` step we used last week, as we have provided the full file path. While this approach can be faster, if we accidentally mistype the file name, we could end up with an error at the next step due to no files being found under the mistyped name. + +Since our dataset is stored as a .csv file, we will be using the `read.csv()` function from the `utils` package (included in our base R software installation) to read it into R. We will also use the `colnames()` function from last week to get a read-out of the column names. + +```{r} +Data <- read.csv(file=thefilepath, check.names=FALSE) +colnames(Data) +``` + +As we look at the line of code, we now have enough context to decipher that the "file" argument is where we provide a file path to an individual file, but what does the "check.names" argument do? 
+ +Let's see what happens to the column names when we set the "check.names" argument to TRUE: + +```{r} +Data_Alternative <- read.csv(thefilepath, check.names=TRUE) +colnames(Data_Alternative) +``` + +As we can see, any column name that contained a special character or a space was automatically converted over to [R-approved syntax](https://ssojet.com/escaping/regex-escaping-in-r#understanding-the-need-for-escaping-special-characters). However, this resulted in the loss of both "+" and "-", leaving us unable to determine whether we are looking at cells within or outside a particular gate. + +![](images/00_CheckNamesTRUE.png) + +Because of this, it is often better to rename columns individually after import, which we will learn how to do later today. + +Following up on what we practiced last week, let's use the `head()` function to visualize the first few rows of data. + +```{r} +head(Data, 3) +``` + +When working in Positron, we could have alternatively clicked on the little grid icon next to our created variable "Data" in the right secondary sidebar, which would have opened the data in our Editor window. From this same window, we can see it is stored as a "data.frame" object type. + +![](images/01_DataView.png) + +We could also open the same window using the `View()` function: + +```{r} +#| eval: FALSE +View(Data) +``` + +Wrapping up our brief recap of [last week's](/course/03_InsideFCSFile/index.qmd) functions, we can check an object's type using both the `class()` and `str()` functions. + +```{r} +class(Data) +``` + +```{r} +str(Data) +``` + +## data.frame + +Or alternatively using the new-to-us `glimpse()` function: + +```{r} +#| error: TRUE +glimpse(Data) +``` + +:::{.callout-tip title="Checkpoint 1"} +This however returns an error. Any idea why this might be occurring? 
+::: + +```{r} +#| code-fold: TRUE + +# We haven't attached/loaded the package that the glimpse function lives within +``` + +:::{.callout-tip title="Checkpoint 2"} +How would we locate the package that a not-yet-loaded function lives within? +::: + +```{r} +#| code-fold: TRUE +#| eval: FALSE + +# We can use double ? to search all installed packages for a function, regardless +# of whether the package is attached to the environment or not + +??glimpse +``` + +![](images/02_Glimpse.png) + +From the list of search matches (in the right secondary sidebar), it looks likely that the `glimpse()` function in the `dplyr` package was the one we were looking for. This is one of the main tidyverse packages we will be using throughout the course. Let's attach it to our environment via the `library()` call first and try running `glimpse()` again. + +```{r} +#| message: FALSE +#| warning: FALSE +library(dplyr) +glimpse(Data) +``` + +We notice that while similar to the `str()` output, `glimpse()` handles spacing a little differently, and includes the dimensions at the top. However, we can also retrieve the dimensions directly using the `dim()` function (which maintains the row-then-column position convention of base R (ex. [196,31])). + +```{r} +dim(Data) +``` + +## Column value type + +As we saw last week, functions often need values that match a certain type (the paintbrush needing paint analogy). As we inspect the columns of Data, we notice that some of the columns contain character (ie. "char") values. Others appear to contain numeric values (which are [subtyped](https://www.r-bloggers.com/2023/09/understanding-data-types-in-r/) as either double (ie. "dbl") or integer (ie. "int")). At first glance, we do not appear to have any logical (ie. TRUE or FALSE) columns in this dataset. 
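+ +If we wanted a quick survey of the value types across every column at once, we could apply the `class()` function to each column and tally the results. This is a small sketch using base R's `sapply()` and `table()` functions (both come with the base installation, so no extra packages are needed): + +```{r} +# Apply class() to every column of Data, then tally how many columns share each type +table(sapply(Data, class)) +``` + +This gives a count of columns per value type, which can serve as a handy sanity check before reshaping a dataset. 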
+
+![](images/03_ColumnClass.png)
+
+If we were trying to verify the type of values contained within a data.frame column, we could employ several similarly-named functions (`is.character()`, `is.numeric()` or `is.logical()`) to check:
+
+```{r}
+# colnames(Data) # To recheck the column names
+
+is.character(Data$bid)
+```
+
+```{r}
+is.numeric(Data$bid)
+```
+
+```{r}
+# colnames(Data) # To recheck the column names
+
+is.character(Data$Tcells_count)
+```
+
+For numeric columns, we can also go beyond `is.numeric()` and be [subtype](https://www.r-bloggers.com/2023/09/understanding-data-types-in-r/)-specific using either `is.integer()` or `is.double()`.
+
+```{r}
+# colnames(Data) # To recheck the column names
+
+is.numeric(Data$Tcells_count)
+is.integer(Data$Tcells_count)
+is.double(Data$Tcells_count)
+```
+
+:::{.callout-tip title="Reminder"}
+As we observed last week with keywords, column names that contain [special characters](https://ssojet.com/escaping/regex-escaping-in-r#understanding-the-need-for-escaping-special-characters) like $ or spaces will need to be surrounded with tick marks in order for the function to be able to run.
+:::
+
+```{r}
+#| error: TRUE
+
+# colnames(Data) # To recheck the column names
+is.numeric(Data$CD8-)
+```
+
+```{r}
+# colnames(Data) # To recheck the column names
+is.numeric(Data$`CD8-`)
+```
+
+## select (Columns)
+
+Now that we have read in our data, and have a general picture of the structure and contents, let's start learning the main `dplyr` functions we will be using throughout the course. To do this, let's go ahead and attach `dplyr` to our local environment via the `library()` call.
+
+```{r}
+library(dplyr)
+```
+
+We will start with the `select()` function. It is used to "select" a column from a data.frame type object. In the simplest usage, we provide the name of our data.frame variable/object as the first argument after the opening parenthesis. This is then followed by the name of the column we want to select as the second argument (let's keep the "" around the column name for now):
+
+```{r}
+DateColumn <- select(Data, "Date")
+DateColumn[1:10,]
+```
+
+The new object now contains only that one column, subsetted out of the original Data object.
+
+### Pipe Operators
+
+While the above line of code works to select a column, when you encounter `select()` out in the wild, it will more often be in a line of code that looks like this:
+
+```{r}
+DateColumn <- Data |> select("Date")
+DateColumn[1:10,]
+```
+
+... **"What in the world is that thing |> ?"** ...
+
+Glad you asked! A useful feature of the tidyverse packages is their use of [pipes](https://r4ds.had.co.nz/pipes.html) (either the original `magrittr` package's "%>%" or base R's "|>", available since R version 4.1.0), usually appearing like this:
+
+```{r}
+# magrittr %>% pipe
+
+DateColumn <- Data %>% select("Date")
+
+# base R |> pipe
+DateColumn <- Data |> select("Date")
+```
+
+... **"How do we interpret/read that line of code?"** ...
+
+Let's break it down, starting off just to the right of the assignment arrow (<-) with our data.frame "Data".
+
+```{r}
+#| eval: false
+
+Data
+```
+
+We then proceed to read to the right, adding in our pipe operator. The pipe essentially serves as an intermediary, passing the contents of Data onward to the subsequent function.
+
+```{r}
+#| eval: FALSE
+Data |>
+```
+
+In our case, this subsequent function is the `select()` function, which will select a particular column from the available data. When using the pipe, the first argument slot we saw for "select(Data, "Date")" is occupied by the contents of Data that are being passed by the pipe.
+
+```{r}
+#| eval: FALSE
+Data |> select()
+```
+
+To complete the transfer, we provide the desired column name for `select()` to act on ("Date" in this case):
+
+```{r}
+#| eval: FALSE
+Data |> select("Date")
+```
+
+In summary, the contents of Data are passed to the pipe, and `select()` runs on those contents to select the Date column:
+
+```{r}
+#| eval: FALSE
+Data |> select("Date")
+```
+
+One of the main advantages of pipes is that they can be linked together, passing the resulting object of one operation on to the next pipe and subsequent function. We can see this in operation in the example below, where we hand off the isolated "Date" column to the `nrow()` function to determine the number of rows. We will use pipes throughout the course, so you will gradually gain familiarity as you encounter them.
+
+```{r}
+Data |> select("Date") |> nrow()
+```
+
+For those with prior R experience, you will be more familiar with the older magrittr %>% pipe. The base R |> pipe operator was introduced starting with R version 4.1.0. While mostly interchangeable, they have a [few nuances](https://tidyverse.org/blog/2023/04/base-vs-magrittr-pipe/) that come into play for more advanced use cases. You are welcome to use whichever you prefer (my current preference is |> as it's one less key to press).
+
+### R Quirks
+
+:::{.callout-note title="Odd R Behavior # 1"}
+While we used "" around the column name in our previous example, unlike what we encountered with `install.packages()` when we forget to include quotation marks, `select()` still retrieves the correct column despite Date not being an environment variable:
+:::
+
+```{r}
+Data |> select(Date) |> head(3)
+```
+
+:::{.callout-note title="."}
+The reasons for this Odd R behaviour are nuanced and for [another day](https://adv-r.hadley.nz/evaluation.html). For now, think of it as the `dplyr` package picking up the slack, using context to infer that Date is a column name and not an environment variable/object.
+:::
+
+### Selecting multiple columns
+
+Since we are able to select one column, can we select multiple (similar to a `Data[, 2:5]` approach in base R)? We can, and they can be positioned anywhere within the data.frame:
+
+```{r}
+Subset <- Data |> select(bid, timepoint, Condition, Tcells, `CD8+`, `CD4+`)
+
+head(Subset, 3)
+```
+
+You will notice that the order in which we selected the columns will dictate their position in the subsetted data.frame object:
+
+```{r}
+Subset <- Data |> select(bid, Tcells, `CD8+`, `CD4+`, timepoint, Condition)
+
+head(Subset, 3)
+```
+
+## relocate
+
+Alternatively, we occasionally want to move one column. While we could respecify the order using `select()`, typing out the names of all the other columns in a line of code just to rearrange one does not sound like a good use of time. For this reason, the second `dplyr` function we will be learning is the `relocate()` function.
+
+Looking at our Data object, let's say we wanted to move the Tcells column from its current location to the second column position (right after the bid column). The line of code to do so would look like:
+
+```{r}
+Data |> relocate(Tcells, .after=bid) |> head(3)
+
+# |> head(3) is used only to make the website output visualization manageable :D
+```
+
+Similar to what we saw with `select()`, this approach can also be used for more than one column:
+
+```{r}
+Data |> relocate(Tcells, Monocytes, .after=bid) |> head(3)
+
+# |> head(3) is used only to make the website output visualization manageable :D
+```
+
+We can also modify the argument so that columns are placed before a certain column:
+
+```{r}
+Data |> relocate(Tcells, .before=Date) |> head(3)
+
+# |> head(3) is used only to make the website output visualization manageable :D
+```
+
+And as we might suspect, we could specify a column index location rather than using a column name.
+
+```{r}
+Data |> relocate(Date, .before=1) |> head(3)
+
+# |> head(3) is used only to make the website output visualization manageable :D
+```
+
+## rename
+
+At this point, we are able to both move and select particular columns, allowing us to rearrange and subset a larger data.frame object however we want it to appear. However, as we encountered, some of the names contain special characters and spaces, requiring use of tick marks (``) to avoid issues. How can we change a column name?
+
+In base R, we could change individual column names by assigning a new value with the assignment arrow to the corresponding column name index. For example, looking at our Subset object, we could rename CD8+ as follows:
+
+```{r}
+colnames(Subset)
+colnames(Subset)[3]
+```
+
+```{r}
+colnames(Subset)[3] <- "CD8Positive"
+colnames(Subset)
+```
+
+With the tidyverse, we can use the `rename()` function, which removes the need to look up the column index number. Within the parentheses, we place the new name to the left of the equals sign and the old name to the right:
+
+```{r}
+Renamed <- Subset |> rename(CD4_Positive = `CD4+`)
+colnames(Renamed)
+```
+
+If we wanted to rename multiple column names at once, we would just need to include a comma between the individual rename arguments within the parentheses.
+
+```{r}
+Renamed_Multiple <- Subset |> rename(specimen = bid, timepoint_months = timepoint, stimulation = Condition, CD4Positive=`CD4+`)
+colnames(Renamed_Multiple)
+```
+
+## pull
+
+Sometimes, we may want to retrieve the individual values present in a column, to use within either a vector or a list. We can do this using the `pull()` function, which will retrieve the column contents and strip away the data.frame formatting:
+
+```{r}
+Data |> pull(Date) |> head(5)
+```
+
+This can be useful when we are doing data exploration, and trying to determine how many unique variants might be present. For example, if we wanted to see what days individual samples were acquired, we could `pull()` the data and pass it to the `unique()` function:
+
+```{r}
+Data |> pull(Date) |> unique()
+```
+
+## filter (Rows)
+
+So far, we have been working with `dplyr` functions primarily used when working with and subsetting columns (including `select()`, `pull()`, `rename()` and `relocate()`). What if we wanted to work with the rows of a data.frame? This is where the `filter()` function is used.
+
+The Condition column in this Dataset appears to indicate whether the samples were stimulated. Let's see how many unique values are contained within that column:
+
+```{r}
+Data |> pull(Condition) |> unique()
+```
+
+In the case of this dataset, it looks like the .fcs files were either left alone, treated with [PPD (Purified Protein Derivative)](https://en.wikipedia.org/wiki/Tuberculin) or treated with [SEB](https://en.wikipedia.org/wiki/Enterotoxin_type_B). What if we wanted to subset only those treated with PPD?
+
+Within `filter()`, we would specify the column name as the first argument, and ask that only values equal to (==) "PPD" be returned. Notice in this case, "" are needed, as we are asking for a matching character value.
+
+```{r}
+PPDOnly <- Data |> filter(Condition == "PPD")
+head(PPDOnly, 5)
+```
+
+While this works, matching with "==" has a pitfall: if the column contains missing values (NA), "==" returns NA for those rows rather than TRUE or FALSE. The %in% operator always returns TRUE or FALSE, making it a more reliable way of identifying and extracting only the rows whose Condition column contains "PPD":
+
+```{r}
+Data |> filter(Condition %in% "PPD") |> head(5)
+```
+
+Similar to what we saw for `select()`, we can grab rows that contain various values at once. We would just need to modify the second part of the argument. If we wanted to grab rows whose Condition column contained either PPD or SEB, we would need to provide that argument as a vector, placing both within `c()`:
+
+```{r}
+Data |> filter(Condition %in% c("PPD", "SEB")) |> head(5)
+```
+
+Alternatively, we could have set up the vector externally, and then provided it to `filter()`:
+
+```{r}
+TheseConditions <- c("PPD", "SEB")
+Data |> filter(Condition %in% TheseConditions) |> head(5)
+```
+
+While this works when we have a limited number of variant condition values, what if we had many more but only wanted to exclude one value?
+As we saw when learning about [Conditionals](/course/02_FilePaths/index.qmd), when we add a ! in front of a logical value, we get the opposite logical value returned:
+
+```{r}
+IsThisASpectralInstrument <- TRUE
+
+!IsThisASpectralInstrument
+```
+
+In the context of the `dplyr` package, we can use ! within `filter()` to remove rows that contain a certain value:
+
+```{r}
+Subset <- Data |> filter(!Condition %in% "SEB")
+Subset |> pull(Condition) |> unique()
+```
+
+Likewise, we can also use it with `select()` to exclude columns we don't want to include:
+
+```{r}
+Subset <- Data |> select(!timepoint)
+Subset[1:3,]
+```
+
+## mutate
+
+As we can see, with just this handful of functions, we have the building blocks to rearrange and subset a larger data.frame into a format that we prefer. But what if we wanted to alter the content of a column, or add new columns to an existing data.frame? This is where the `mutate()` function can be used.
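+
+Before we dive in, one quick callback to the `filter()` section: the difference between "==" and %in% is easiest to see on a toy vector (hypothetical values, not taken from Dataset.csv):
+
+```{r}
+# "==" propagates missing values, while %in% always returns TRUE or FALSE
+ToyConditions <- c("PPD", NA, "SEB")
+
+ToyConditions == "PPD"
+ToyConditions %in% "PPD"
+```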
+
+Let's start by slimming down our current Data to a smaller workable example, highlighting the functions and pipes we learned about today:
+
+```{r}
+TidyData <- Data |> filter(Condition %in% "Ctrl") |> filter(timepoint %in% "0") |>
+  select(bid, timepoint, Condition, Date, Tcells_count, CD45_count) |>
+  rename(specimen=bid, condition=Condition) |> relocate(Date, .after=specimen)
+```
+
+```{r}
+TidyData
+```
+
+The `mutate()` function can be used to modify existing columns, as well as to create new ones. For example, let's derive the proportion of T cells out of the overall CD45 gate. To do so, within the parentheses, we specify a new column name, and then divide the original columns:
+
+```{r}
+TidyData <- TidyData |> mutate(Tcells_ProportionCD45 = Tcells_count / CD45_count)
+TidyData
+```
+
+We can see that many decimal places are being returned. Let's round this new column to 2 decimal places by applying the `round()` function:
+
+```{r}
+TidyData <- TidyData |> mutate(TcellsRounded = round(Tcells_ProportionCD45, 2))
+TidyData
+```
+
+## arrange
+
+And while we are here, let's rearrange the rows so that they are in descending order based on the T cell proportion. We can do this using the `desc()` and `arrange()` functions from `dplyr`:
+
+```{r}
+TidyData <- TidyData |> arrange(desc(TcellsRounded))
+```
+
+And let's go ahead and `filter()` to identify the specimens that had more than 30% T cells as part of the overall CD45 gate (for context, these samples were Cord Blood):
+
+```{r}
+TidyData |> filter(TcellsRounded > 0.3)
+```
+
+If we had wanted to just retrieve the specimen IDs, we could add `pull()` after another pipe:
+
+```{r}
+TidyData |> filter(TcellsRounded > 0.3) |> pull(specimen)
+```
+
+And finally, since I may want to send the data to a supervisor, let's go ahead and export this "tidied" version of our data.frame out to its own .csv file. Working within our project folder, this would look like this:
+
+```{r}
+NewName <- paste0("MyNewDataset", ".csv")
+StorageLocation <- file.path("data", NewName)
+StorageLocation
+```
+```{r}
+#| eval: FALSE
+write.csv(TidyData, StorageLocation, row.names=FALSE)
+```
+
+# Take Away
+
+In this session, we explored the main functions within the `dplyr` package used in the context of "tidying" data, including selecting columns and filtering for rows, as well as additional functions used to create or modify existing values. We will continue to build on these throughout the course, introducing a few additional tidyverse functions we don't have time to cover today as appropriate. As we saw, knowing how to use these functions allows us to quickly and extensively modify our existing exported data files.
+
+One important goal as we move through the course (in terms of both reproducibility and replicability) is to modify files only within R, rather than going back to the original .csv or Excel file and hand-modifying individual values, an approach that is neither reproducible nor replicable. Once set up, an R script can quickly re-carry out these same cleanup steps, and leave a documented record of how the data has changed (even more so if you are maintaining version control). If you do want to save the changes you have made, it is best to save them out as a new .csv file with which you work later.
+
+Next week, we will be using these skills when setting up metadata for our .fcs files. We will additionally take a look at the main source of format controversy within Bioconductor flow cytometry packages, ie. whether to use a flowFrame or a cytoframe. Exciting stuff, but important information to know as the functions needed to import them are slightly different. We will also look at how to import existing manually gated .wsp files from FlowJo/Diva/Floreada via the `CytoML` package.
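+
+As one final recap in code form, the verbs from today can be linked into a single piped expression. The sketch below reuses column names from this session's Dataset.csv; swap them out if you are working with your own data:
+
+```{r}
+#| eval: FALSE
+# One pipeline combining today's verbs: filter, select, mutate, arrange
+Data |>
+  filter(Condition %in% "Ctrl") |>
+  select(bid, Condition, Tcells_count, CD45_count) |>
+  mutate(Tcells_ProportionCD45 = round(Tcells_count / CD45_count, 2)) |>
+  arrange(desc(Tcells_ProportionCD45)) |>
+  head(3)
+```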
+
+![](images/TakeAway.jpg)
+
+# Additional Resources
+
+[Data Organization in Spreadsheets for Ecologists](https://datacarpentry.github.io/spreadsheet-ecology-lesson/) This Carpentry self-study course was one of my "Aha" moments early on when learning R, and reinforced the need to keep my own Excel/CSV files in a tidy manner. It is worth the time going through in its entirety (even for non-Ecologists).
+
+[Data Analysis and Visualization in R for Ecologists](https://datacarpentry.github.io/R-ecology-lesson/) Continuation of the above, and a good way to continue building on the tidyverse functions we learned today.
+
+[Simplistics: Introduction to Tidyverse in R](https://youtu.be/Bg4qxVNaDck?si=QPQq8TzOZ1w6XSy4) The YouTube channel is mainly focused on statistics for Psych classes, but at the end of the day, we are all working with similar objects with rows and columns; just the values contained within differ.
+
+[Riffomonas Project Playlist: Data Manipulation with R's Tidyverse](https://youtube.com/playlist?list=PLmNrK_nkqBpKf7j_ewpUm-w33R6PJYtD9&si=BVmDZPIXjRuHjERP) Riffomonas has a playlist that delves into both the tidyverse functions we used today, as well as other ones we will encounter later on in the course.
+
+# Take-home Problems
+
+:::{.callout-tip title="Problem 1"}
+Taking a dataset (either today's or one of your own), work through the column-operating functions (`select()`, `rename()`, and `relocate()`). Once this is done, `filter()` by conditions from two separate columns, arrange in an order that makes sense, and export this "tidy" data as a .csv file.
+:::
+
+:::{.callout-tip title="Problem 2"}
+We used the `mutate()` function to create new columns, but it can also be used to modify existing ones. Various numeric columns are showing far too many decimal places. As was shown, use `round()` to round all these proportion columns, but use `mutate()` to overwrite the existing columns. Export this as its own .csv file.
+:::
+
+:::{.callout-tip title="Problem 3"}
+We can also use `mutate()` to combine columns. For our dataset, "bid", "timepoint", "Condition" are separate columns that originally were all part of the filename for the individual .fcs file. Try to figure out a way to combine them back together using `paste0()`, and save the new column as "filename". Once this is done, `pull()` the contents of this column, and try to determine whether there were any duplicates (think of innovative ways of using !, `length()` and `unique()`)
+:::
+
+::: {style="text-align: right;"}
+[![AGPL-3.0](https://www.gnu.org/graphics/agplv3-with-text-162x68.png)](https://www.gnu.org/licenses/agpl-3.0.en.html) [![CC BY-SA 4.0](https://licensebuttons.net/l/by-sa/4.0/88x31.png)](http://creativecommons.org/licenses/by-sa/4.0/)
+:::
\ No newline at end of file
diff --git a/course/04_IntroToTidyverse/slides.qmd b/course/04_IntroToTidyverse/slides.qmd
new file mode 100644
index 0000000..9b98420
--- /dev/null
+++ b/course/04_IntroToTidyverse/slides.qmd
@@ -0,0 +1,1199 @@
+---
+title: "04 - Introduction to Tidyverse"
+author: "David Rach"
+date: 02-24-2026
+format:
+  revealjs:
+    theme: default
+    slide-number: true
+    incremental: true
+page-layout: full
+execute:
+  echo: true
+  warning: false
+  message: false
+---
+
+![](/images/WebsiteBanner.png)
+
+::: {style="text-align: right;"}
+[![AGPL-3.0](https://img.shields.io/badge/license-AGPLv3-blue)](https://www.gnu.org/licenses/agpl-3.0.en.html) [![CC BY-SA 4.0](https://img.shields.io/badge/License-CC%20BY--SA%204.0-lightgrey.svg)](http://creativecommons.org/licenses/by-sa/4.0/)
+:::
+
+
+----
+
+# Background
+
+::: {.fragment}
+::: {.callout-tip title="."}
+Within our daily workflows as cytometrists, after acquiring data on our respective instruments, we begin analyzing the resulting datasets. After implementing various workflows, we then export data for downstream statistical analysis.
+:::
+:::
+
+::: {.fragment}
+::: {.callout-tip title="."}
+When I first started my Ph.D. program, a substantial amount of my day was spent renaming column names of the exported data so that they would fit nicely in a Microsoft Excel sheet column; setting up formulas to combine proportions of positive cells across positive quadrants, etc. Once this was done, additional hours would go by as I copied and pasted the contents of those columns over to a GraphPad Prism worksheet for statistical analysis.
+:::
+:::
+
+---
+
+::: {.fragment}
+::: {.callout-tip title="."}
+This of course was in an ideal scenario. Oftentimes, the data was less organized, and instead of time spent copying and pasting over columns, it would first be spent rearranging values from individual cells in the worksheet that were separated by spaces, all the while trying to remember what various color codes and bold fonts stood for.
+:::
+:::
+
+::: {.fragment}
+::: {.callout-tip title="."}
+Today, we will explore what makes data ["tidy"](https://vita.had.co.nz/papers/tidy-data.pdf), and how to use the toolsets implemented in the various [tidyverse](https://cran.r-project.org/web/packages/tidyverse/vignettes/paper.html) R packages. At its simplest, if we think of and organize all our data in terms of rows and columns, we need fewer tools (ie. functions) to reshape and extract the useful information that we are interested in. Additionally, this approach aligns more closely with how computers work, allowing us to carry out tasks that would otherwise have taken hours in mere seconds.
+:::
+:::
+
+---
+
+::: {.fragment}
+::: {.callout-tip title="."}
+The dataset we will be using today is a manually-gated spectral flow cytometry dataset (similar to ones we would see exported by commercial software), and has been intentionally left slightly messy. You could however just as easily use a "matrix" or "data.frame" object exported from inside an [fcs file](/course/03_InsideFCSFile/), or swap in your own dataset. You would just need to make sure to switch out the input data by providing an alternate [file path](/course/02_FilePaths/), etc.
+:::
+:::
+
+---
+
+# Walk Through
+
+:::{.callout-important title="Housekeeping"}
+As we do [every week](/course/02_FilePaths/index.qmd), on GitHub, sync your forked version of the CytometryInR course to bring in the most recent updates. Then within Positron, pull in those changes to your local computer.
+
+After creating a "Week04" project folder, copy over the contents of "course/04_IntroToTidyverse" to that folder. This will hopefully prevent any merge issues when you attempt to bring in new data to your local Cytometry in R folder next week. Please remember once you have set up your project folder to stage, commit and push your changes to "Week04" to GitHub so that they are backed up remotely.
+
+If you are having issues syncing due to the Take-Home Problem merge conflict, see this [walkthrough](https://umgcccfcsr.github.io/CytometryInR/course/00_BonusContent/PullConflicts/)
+:::
+
+---
+
+## read.csv
+
+::: {.fragment}
+::: {.callout-tip title="."}
+We will start by first loading in our copied-over dataset (Dataset.csv) from its location in the project folder. If you are following the organization scheme we have been using throughout the course, your file path will look something like this:
+:::
+:::
+
+::: {.fragment}
+```{r}
+#| eval: FALSE
+#| include: FALSE
+
+# For use only when building the website, otherwise keep eval set to FALSE
+thefilepath <- file.path(getwd(), "course", "04_IntroToTidyverse", "data", "Dataset.csv")
+
+thefilepath
+```
+
+:::
+
+::: {.fragment}
+```{r}
+#| eval: TRUE
+thefilepath <- file.path("data", "Dataset.csv")
+
+thefilepath
+```
+
+:::
+
+---
+
+::: {.fragment}
+:::{.callout-tip title="Reminder"}
+We encourage using the `file.path()` function to build our file paths, as this keeps our code reproducible and replicable when a project folder is copied to other people's computers, whose operating systems may differ on whether forward or backward slashes separate folders.
+:::
+:::
+
+---
+
+::: {.fragment}
+::: {.callout-tip title="."}
+Above, we directly specified the name (Dataset) and filetype (.csv) of the file we wanted in the last argument of `file.path()` ("Dataset.csv"). This allows us to skip the `list.files()` step we used last week, as we have provided the full file path. While this approach can be faster, if we accidentally mistype the file name, we could end up with an error at the next step due to no files being found with the mistyped name.
+:::
+:::
+
+---
+
+::: {.fragment}
+::: {.callout-tip title="."}
+Since our dataset is stored as a .csv file, we will be using the `read.csv()` function from the `utils` package (included in our base R software installation) to read it into R. We will also use the `colnames()` function from last week to get a read-out of the column names.
+:::
+:::
+
+::: {.fragment}
+```{r}
+Data <- read.csv(file=thefilepath, check.names=FALSE)
+colnames(Data)
+```
+
+:::
+
+---
+
+::: {.fragment}
+::: {.callout-tip title="."}
+As we look at the line of code, we now have enough context to decipher that the "file" argument is where we provide a file path to an individual file, but what does the "check.names" argument do?
+
+Let's see what happens to the column names when we set the "check.names" argument to TRUE:
+:::
+:::
+
+
+::: {.fragment}
+```{r}
+Data_Alternative <- read.csv(thefilepath, check.names=TRUE)
+colnames(Data_Alternative)
+```
+
+:::
+
+---
+
+::: {.fragment}
+::: {.callout-tip title="."}
+As we can see, any column name that contained a special character or a space was automatically converted over to [R-approved syntax](https://ssojet.com/escaping/regex-escaping-in-r#understanding-the-need-for-escaping-special-characters). However, this resulted in the loss of both "+" and "-", leaving us unable to determine whether we are looking at cells within or outside a particular gate.
+:::
+:::
+
+::: {.fragment}
+![](images/00_CheckNamesTRUE.png)
+:::
+
+---
+
+::: {.fragment}
+::: {.callout-tip title="."}
+Because of this, it is often better to rename columns individually after import, which we will learn how to do later today.
+
+Following up with what we practiced last week, let's use the `head()` function to visualize the first few rows of data.
+:::
+:::
+
+::: {.fragment}
+```{r}
+head(Data, 3)
+```
+
+:::
+
+---
+
+::: {.fragment}
+::: {.callout-tip title="."}
+When working in Positron, we could have alternatively clicked on the little grid icon next to our created variable "Data" in the right secondary sidebar, which would have opened the data in our Editor window. From this same window, we can see it is stored as a "data.frame" object type.
+:::
+:::
+
+---
+
+![](images/01_DataView.png)
+
+---
+
+::: {.fragment}
+::: {.callout-tip title="."}
+We could also open the same window using the `View()` function:
+:::
+:::
+
+::: {.fragment}
+```{r}
+#| eval: FALSE
+View(Data)
+```
+
+:::
+
+---
+
+::: {.fragment}
+::: {.callout-tip title="."}
+Wrapping up our brief recap of [last week's](/course/03_InsideFCSFile/index.qmd) functions, we can check an object's type using both the `class()` and `str()` functions.
+:::
+:::
+
+::: {.fragment}
+```{r}
+class(Data)
+```
+
+:::
+
+::: {.fragment}
+```{r}
+str(Data)
+```
+
+:::
+
+---
+
+## data.frame
+
+::: {.fragment}
+::: {.callout-tip title="."}
+Or alternatively, we can use the new-to-us `glimpse()` function:
+:::
+:::
+
+::: {.fragment}
+```{r}
+#| error: TRUE
+glimpse(Data)
+```
+
+:::
+
+---
+
+:::{.callout-tip title="Checkpoint 1"}
+This however returns an error. Any idea why this might be occurring?
+:::
+
+::: {.fragment}
+```{r}
+#| code-fold: TRUE
+
+# We haven't attached/loaded the package that contains the glimpse() function
+```
+
+:::
+
+---
+
+:::{.callout-tip title="Checkpoint 2"}
+How would we locate the package that a not-yet-loaded function belongs to?
+:::
+
+::: {.fragment}
+```{r}
+#| code-fold: TRUE
+#| eval: FALSE
+
+# We can use double ? to search all installed packages for a function, regardless
+# of whether the package is attached to the environment or not
+
+??glimpse
+```
+
+:::
+
+---
+
+![](images/02_Glimpse.png)
+
+::: {.fragment}
+::: {.callout-tip title="."}
+From the list of search matches (in the right secondary sidebar), it looks likely that the `glimpse()` function in the `dplyr` package was the one we were looking for. This is one of the main tidyverse packages we will be using throughout the course. Let's attach it to our environment via the `library()` call first and try running `glimpse()` again.
+:::
+:::
+
+::: {.fragment}
+```{r}
+#| message: FALSE
+#| warning: FALSE
+library(dplyr)
+glimpse(Data)
+```
+
+:::
+
+---
+
+::: {.fragment}
+::: {.callout-tip title="."}
+We notice that while similar to the `str()` output, `glimpse()` handles spacing a little differently, and includes the dimensions at the top. However, we can also retrieve the dimensions directly using the `dim()` function (which maintains the row-then-column position convention of base R (ex. [196,31])).
+:::
+:::
+
+::: {.fragment}
+```{r}
+dim(Data)
+```
+
+:::
+
+---
+
+## Column value type
+
+::: {.fragment}
+::: {.callout-tip title="."}
+As we saw last week, functions often need values that match a certain type (the paintbrush needing paint analogy). As we inspect the columns of Data, we can notice that some of the columns contain character (ie. "chr") values. Others appear to contain numeric values (which are [subtyped](https://www.r-bloggers.com/2023/09/understanding-data-types-in-r/) as either double (ie. "dbl") or integer (ie. "int")). At first glance, we do not appear to have any logical (ie. TRUE or FALSE) columns in this dataset.
+:::
+:::
+
+---
+
+![](images/03_ColumnClass.png)
+
+---
+
+::: {.fragment}
+::: {.callout-tip title="."}
+If we were trying to verify the type of values contained within a data.frame column, we could employ several similarly-named functions (`is.character()`, `is.numeric()` or `is.logical()`) to check:
+:::
+:::
+
+::: {.fragment}
+```{r}
+# colnames(Data) # To recheck the column names
+
+is.character(Data$bid)
+```
+
+:::
+
+::: {.fragment}
+```{r}
+is.numeric(Data$bid)
+```
+
+:::
+
+::: {.fragment}
+```{r}
+# colnames(Data) # To recheck the column names
+
+is.character(Data$Tcells_count)
+```
+
+:::
+
+---
+
+::: {.fragment}
+::: {.callout-tip title="."}
+For numeric columns, we can also go beyond `is.numeric()` and be [subtype](https://www.r-bloggers.com/2023/09/understanding-data-types-in-r/)-specific using either `is.integer()` or `is.double()`.
+:::
+:::
+
+::: {.fragment}
+```{r}
+# colnames(Data) # To recheck the column names
+
+is.numeric(Data$Tcells_count)
+is.integer(Data$Tcells_count)
+is.double(Data$Tcells_count)
+```
+
+:::
+
+---
+
+:::{.callout-tip title="Reminder"}
+As we observed last week with keywords, column names that contain [special characters](https://ssojet.com/escaping/regex-escaping-in-r#understanding-the-need-for-escaping-special-characters) like $ or spaces will need to be surrounded with tick marks in order for the function to be able to run.
+:::
+
+::: {.fragment}
+```{r}
+#| error: TRUE
+
+# colnames(Data) # To recheck the column names
+is.numeric(Data$CD8-)
+```
+
+:::
+
+::: {.fragment}
+```{r}
+# colnames(Data) # To recheck the column names
+is.numeric(Data$`CD8-`)
+```
+
+:::
+
+---
+
+## select (Columns)
+
+::: {.fragment}
+::: {.callout-tip title="."}
+Now that we have read in our data, and have a general picture of the structure and contents, let's start learning the main `dplyr` functions we will be using throughout the course. To do this, let's go ahead and attach `dplyr` to our local environment via the `library()` call.
+:::
+:::
+
+::: {.fragment}
+```{r}
+library(dplyr)
+```
+
+:::
+
+---
+
+::: {.fragment}
+::: {.callout-tip title="."}
+We will start with the `select()` function. It is used to "select" a column from a data.frame type object. In the simplest usage, we provide the name of our data.frame variable/object as the first argument after the opening parenthesis. This is then followed by the name of the column we want to select as the second argument (let's place "" around the column name for now).
+:::
+:::
+
+::: {.fragment}
+```{r}
+DateColumn <- select(Data, "Date")
+DateColumn
+```
+
+:::
+
+---
+
+::: {.fragment}
+::: {.callout-tip title="."}
+This selects the column, with the new object containing only that subsetted column from the original Data object.
+:::
+:::
+
+---
+
+### Pipe Operators
+
+::: {.fragment}
+::: {.callout-tip title="."}
+While the above line of code works to select a column, when you encounter `select()` out in the wild, it will more often be in a line of code that looks like this:
+:::
+:::
+
+::: {.fragment}
+```{r}
+DateColumn <- Data |> select("Date")
+DateColumn
+```
+
+:::
+
+---
+
+::: {.fragment}
+::: {.callout-tip title="."}
+... **"What in the world is that thing |> ?"** ...
+:::
+:::
+
+
+::: {.fragment}
+::: {.callout-tip title="."}
+Glad you asked! A useful feature of the tidyverse packages is their use of [pipes](https://r4ds.had.co.nz/pipes.html) (either the original `magrittr` package's "%>%" or base R's "|>", available since version 4.1.0), usually appearing like this:
+:::
+:::
+
+::: {.fragment}
+```{r}
+# magrittr %>% pipe
+
+DateColumn <- Data %>% select("Date")
+
+# base R |> pipe
+DateColumn <- Data |> select("Date")
+```
+
+:::
+
+---
+
+::: {.fragment}
+::: {.callout-tip title="."}
+... **"How do we interpret/read that line of code?"** ... 
+:::
+:::
+
+::: {.fragment}
+::: {.callout-tip title="."}
+Let's break it down, starting off just to the right of the assignment arrow (<-) with our data.frame "Data".
+:::
+:::
+
+::: {.fragment}
+```{r}
+#| eval: false
+
+Data
+```
+
+:::
+
+---
+
+::: {.fragment}
+::: {.callout-tip title="."}
+We then proceed to read to the right, adding in our pipe operator. The pipe essentially serves as an intermediary, passing the contents of Data onward to the subsequent function.
+:::
+:::
+
+::: {.fragment}
+```{r}
+#| eval: FALSE
+Data |>
+```
+
+:::
+
+---
+
+::: {.fragment}
+::: {.callout-tip title="."}
+In our case, this subsequent function is the `select()` function, which will select a particular column from the available data. When using the pipe, the first argument slot we saw in `select(Data, "Date")` is occupied by the contents of Data being passed along by the pipe.
+:::
+:::
+
+::: {.fragment}
+```{r}
+#| eval: FALSE
+Data |> select()
+```
+
+:::
+
+---
+
+::: {.fragment}
+::: {.callout-tip title="."}
+To complete the transfer, we provide the desired column name for `select()` to act on ("Date" in this case).
+:::
+:::
+
+
+::: {.fragment}
+```{r}
+#| eval: FALSE
+Data |> select("Date")
+```
+
+:::
+
+---
+
+::: {.fragment}
+::: {.callout-tip title="."}
+In summary, the contents of Data are passed through the pipe, and `select()` runs on those contents to select the Date column.
+:::
+:::
+
+::: {.fragment}
+```{r}
+Data |> select("Date")
+```
+
+:::
+
+---
+
+::: {.fragment}
+::: {.callout-tip title="."}
+One of the main advantages of using pipes is that they can be chained together, passing the resulting object of one operation on to the next pipe and subsequent function. We can see this in operation in the example below, where we hand off the isolated "Date" column to the `nrow()` function to determine the number of rows. We will use pipes throughout the course, so you will gradually gain familiarity as you encounter them. 
+:::
+:::
+
+::: {.fragment}
+```{r}
+Data |> select("Date") |> nrow()
+```
+
+:::
+
+---
+
+::: {.fragment}
+::: {.callout-tip title="."}
+For those with prior R experience, you may be more familiar with the older magrittr %>% pipe. The base R |> pipe operator was introduced starting with R version 4.1.0. While mostly interchangeable, they have a [few nuances](https://tidyverse.org/blog/2023/04/base-vs-magrittr-pipe/) that come into play for more advanced use cases. You are welcome to use whichever you prefer (my current preference is |> as it's one less key to press).
+:::
+:::
+
+---
+
+### R Quirks
+
+:::{.callout-note title="Odd R Behavior # 1"}
+While we used "" around the column name in our previous example, unlike what we encountered with `install.packages()` when we forget to include quotation marks, `select()` still retrieves the correct column despite Date not being an environment variable:
+:::
+
+::: {.fragment}
+```{r}
+Data |> select(Date) |> head(5)
+```
+
+:::
+
+---
+
+:::{.callout-note title="."}
+The reasons for this Odd R behaviour are nuanced and best left for [another day](https://adv-r.hadley.nz/evaluation.html). For now, think of it as the `dplyr` package picking up the slack, using context to infer that Date is a column name and not an environment variable/object.
+:::
+
+---
+
+### Selecting multiple columns
+
+::: {.fragment}
+::: {.callout-tip title="."}
+Since we are able to select one column, can we select multiple (similar to a `Data[,2:5]` approach in base R)? 
We can, and they can be positioned anywhere within the data.frame:
+:::
+:::
+
+::: {.fragment}
+```{r}
+Subset <- Data |> select(bid, timepoint, Condition, Tcells, `CD8+`, `CD4+`)
+
+head(Subset, 5)
+```
+
+:::
+
+---
+
+::: {.fragment}
+::: {.callout-tip title="."}
+You will notice that the order in which we selected the columns will dictate their position in the subsetted data.frame object:
+:::
+:::
+
+::: {.fragment}
+```{r}
+Subset <- Data |> select(bid, Tcells, `CD8+`, `CD4+`, timepoint, Condition)
+
+head(Subset, 5)
+```
+
+:::
+
+---
+
+## relocate
+
+::: {.fragment}
+::: {.callout-tip title="."}
+Alternatively, we occasionally want to move just one column. While we could respecify the location using `select()`, writing out the names of all the other columns just to rearrange one does not sound like a good use of time. For this reason, the second `dplyr` function we will be learning is the `relocate()` function.
+:::
+:::
+
+---
+
+::: {.fragment}
+::: {.callout-tip title="."}
+Looking at our Data object, let's say we wanted to move the Tcells column from its current location to the second column position (right after the bid column). 
The line of code to do so would look like: +::: +::: + +::: {.fragment} +```{r} +Data |> relocate(Tcells, .after=bid) |> head(5) + +# |> head(5) is used only to make the website output visualization manageable :D +``` + +::: + +--- + +::: {.fragment} +::: {.callout-tip title="."} +Similar to what we saw with `select()`, this approach can also be used for more than 1 column: +::: +::: + +::: {.fragment} +```{r} +Data |> relocate(Tcells, Monocytes, .after=bid) |> head(5) + +# |> head(5) is used only to make the website output visualization manageable :D +``` + +::: + +--- + +::: {.fragment} +::: {.callout-tip title="."} +We can also modify the argument so that columns are placed before a certain column +::: +::: + +::: {.fragment} +```{r} +Data |> relocate(Tcells, .before=Date) |> head(5) + +# |> head(5) is used only to make the website output visualization manageable :D +``` + +::: + +--- + +::: {.fragment} +::: {.callout-tip title="."} +And as we might suspect, we could specify a column index location rather than using a column name. +::: +::: + +::: {.fragment} +```{r} +Data |> relocate(Date, .before=1) |> head(5) + +# |> head(5) is used only to make the website output visualization manageable :D +``` + +::: + +--- + +## rename + +::: {.fragment} +::: {.callout-tip title="."} +At this point, we are able to both move and select particular columns, allowing us to rearrange and subset a larger data.frame object however we want it to appear. However, as we encountered, some of the names contain special characters and spaces, requiring use of tick marks (``) to avoid issues. How can we change a column name? +::: +::: + +--- + +::: {.fragment} +::: {.callout-tip title="."} +In base R, we could change individual column names by assigning a new value with the assignment arrow to the corresponding column name index. 
For example, looking at our Subset object, we could rename CD8+ as follows:
+:::
+:::
+
+
+::: {.fragment}
+```{r}
+colnames(Subset)
+colnames(Subset)[3]
+```
+
+:::
+
+
+::: {.fragment}
+```{r}
+colnames(Subset)[3] <- "CD8Positive"
+colnames(Subset)
+```
+
+:::
+
+---
+
+::: {.fragment}
+::: {.callout-tip title="."}
+With the tidyverse, we can use the `rename()` function, which removes the need to look up the column index number. Within the parentheses, we place the new name to the left of the equals sign, and the old name to the right.
+:::
+:::
+
+::: {.fragment}
+```{r}
+Renamed <- Subset |> rename(CD4_Positive = `CD4+`)
+colnames(Renamed)
+```
+
+:::
+
+---
+
+::: {.fragment}
+::: {.callout-tip title="."}
+If we wanted to rename multiple columns at once, we would just need to include a comma between the individual rename arguments within the parentheses.
+:::
+:::
+
+::: {.fragment}
+```{r}
+Renamed_Multiple <- Subset |> rename(specimen = bid, timepoint_months = timepoint, stimulation = Condition, CD4Positive=`CD4+`)
+colnames(Renamed_Multiple)
+```
+
+:::
+
+---
+
+## pull
+
+::: {.fragment}
+::: {.callout-tip title="."}
+Sometimes, we may want to retrieve the individual values present in a column, to use within either a vector or a list. We can do this using the `pull()` function, which will retrieve the column contents and strip the column formatting.
+:::
+:::
+
+::: {.fragment}
+```{r}
+Data |> pull(Date) |> head(10)
+```
+
+:::
+
+---
+
+::: {.fragment}
+::: {.callout-tip title="."}
+This can be useful when we are doing data exploration, and trying to determine how many unique variants might be present. 
For example, if we wanted to see what days individual samples were acquired, we could `pull()` the data and pass it to the `unique()` function:
+:::
+:::
+
+::: {.fragment}
+```{r}
+Data |> pull(Date) |> unique()
+```
+
+:::
+
+---
+
+
+## filter (Rows)
+
+::: {.fragment}
+::: {.callout-tip title="."}
+So far, we have been working with `dplyr` functions primarily used when working with and subsetting columns (including `select()`, `pull()`, `rename()` and `relocate()`). What if we wanted to work with the rows of a data.frame? This is where the `filter()` function is used.
+:::
+:::
+
+---
+
+::: {.fragment}
+::: {.callout-tip title="."}
+The Condition column in this Dataset appears to be indicating whether the samples were stimulated. Let's see how many unique values are contained within that column.
+:::
+:::
+
+::: {.fragment}
+```{r}
+Data |> pull(Condition) |> unique()
+```
+
+:::
+
+---
+
+::: {.fragment}
+::: {.callout-tip title="."}
+In the case of this dataset, it looks like the samples were either left alone, treated with [PPD (Purified Protein Derivative)](https://en.wikipedia.org/wiki/Tuberculin), or treated with [SEB](https://en.wikipedia.org/wiki/Enterotoxin_type_B). What if we wanted to subset only those treated with PPD?
+:::
+:::
+
+---
+
+::: {.fragment}
+::: {.callout-tip title="."}
+Within `filter()`, we would specify the column name as the first argument, and ask that only values equal to (==) "PPD" be returned. Notice in this case, "" are needed, as we are asking for a matching character value.
+:::
+:::
+
+::: {.fragment}
+```{r}
+PPDOnly <- Data |> filter(Condition == "PPD")
+head(PPDOnly, 5)
+```
+
+:::
+
+---
+
+::: {.fragment}
+::: {.callout-tip title="."}
+While this works, using "==" to match can behave unexpectedly, especially with character values (for example, comparing against a missing value returns NA rather than FALSE). 
Using the %in% operator is a better way of identifying and extracting only the rows whose Condition column contains "PPD":
+:::
+:::
+
+::: {.fragment}
+```{r}
+Data |> filter(Condition %in% "PPD") |> head(10)
+```
+
+:::
+
+---
+
+::: {.fragment}
+::: {.callout-tip title="."}
+Similar to what we saw for `select()`, we can grab rows that contain various values at once. We would just need to modify the second part of the argument. If we wanted to grab rows whose Condition column contained either PPD or SEB, we would need to provide that argument as a vector, placing both within `c()`.
+:::
+:::
+
+::: {.fragment}
+```{r}
+Data |> filter(Condition %in% c("PPD", "SEB")) |> head(10)
+```
+
+:::
+
+---
+
+::: {.fragment}
+::: {.callout-tip title="."}
+Alternatively, we could have set up the vector externally, and then provided it to `filter()`.
+:::
+:::
+
+::: {.fragment}
+```{r}
+TheseConditions <- c("PPD", "SEB")
+Data |> filter(Condition %in% TheseConditions) |> head(10)
+```
+
+:::
+
+---
+
+::: {.fragment}
+::: {.callout-tip title="."}
+While this works when we have a limited number of variant condition values, what if we had many more but only wanted to exclude one value?
+As we saw when learning about [Conditionals](/course/02_FilePaths/index.qmd), when we add a ! in front of a logical value, we get the opposite logical value returned.
+:::
+:::
+
+::: {.fragment}
+```{r}
+IsThisASpectralInstrument <- TRUE
+
+!IsThisASpectralInstrument
+```
+
+:::
+
+---
+
+::: {.fragment}
+::: {.callout-tip title="."}
+In the context of the `dplyr` package, we can use ! 
within `filter()` to remove rows that contain a certain value.
+:::
+:::
+
+::: {.fragment}
+```{r}
+Subset <- Data |> filter(!Condition %in% "SEB")
+Subset |> pull(Condition) |> unique()
+```
+
+:::
+
+---
+
+::: {.fragment}
+::: {.callout-tip title="."}
+Likewise, we can also use it with `select()` to exclude columns we don't want to include.
+:::
+:::
+
+::: {.fragment}
+```{r}
+Subset <- Data |> select(!timepoint)
+Subset[1:3,]
+```
+
+:::
+
+---
+
+
+## mutate
+
+::: {.fragment}
+::: {.callout-tip title="."}
+As we can see, with just this handful of functions, we have the building blocks to rearrange and subset a larger data.frame into a format that we prefer. But what if we wanted to alter the contents of a column, or add new columns to an existing data.frame? This is where the `mutate()` function can be used.
+:::
+:::
+
+---
+
+::: {.fragment}
+::: {.callout-tip title="."}
+Let's start by slimming down our current Data to a smaller workable example, highlighting the functions and pipes we learned about today.
+:::
+:::
+
+::: {.fragment}
+```{r}
+TidyData <- Data |> filter(Condition %in% "Ctrl") |> filter(timepoint %in% "0") |>
+  select(bid, timepoint, Condition, Date, Tcells_count, CD45_count) |>
+  rename(specimen=bid, condition=Condition) |> relocate(Date, .after=specimen)
+```
+
+:::
+
+---
+
+::: {.fragment}
+```{r}
+TidyData
+```
+
+:::
+
+---
+
+::: {.fragment}
+::: {.callout-tip title="."}
+The `mutate()` function can be used to modify existing columns, as well as to create new ones. For example, let's derive the proportion of T cells from the overall CD45 gate. To do so, within the parentheses, we would specify a new column name, and then divide the original columns:
+:::
+:::
+
+::: {.fragment}
+```{r}
+TidyData <- TidyData |> mutate(Tcells_ProportionCD45 = Tcells_count / CD45_count)
+TidyData
+```
+
+:::
+
+---
+
+::: {.fragment}
+::: {.callout-tip title="."}
+We can see that we have many decimal places being returned. 
Let's round this new column to 2 decimal places by applying the `round()` function.
+:::
+:::
+
+::: {.fragment}
+```{r}
+TidyData <- TidyData |> mutate(TcellsRounded = round(Tcells_ProportionCD45, 2))
+TidyData
+```
+
+:::
+
+---
+
+
+## arrange
+
+::: {.fragment}
+::: {.callout-tip title="."}
+And while we are here, let's rearrange the rows so that they are in descending order based on the Tcell proportion. We can do this using the `desc()` and `arrange()` functions from `dplyr`:
+:::
+:::
+
+::: {.fragment}
+```{r}
+TidyData <- TidyData |> arrange(desc(TcellsRounded))
+```
+
+:::
+
+---
+
+::: {.fragment}
+::: {.callout-tip title="."}
+And let's go ahead and `filter()` to identify the specimens that had more than 30% T cells as part of the overall CD45 gate (for context, these samples were Cord Blood):
+:::
+:::
+
+::: {.fragment}
+```{r}
+TidyData |> filter(TcellsRounded > 0.3)
+```
+
+:::
+
+---
+
+::: {.fragment}
+::: {.callout-tip title="."}
+If we had wanted to retrieve just the specimen IDs, we could add `pull()` after another pipe.
+:::
+:::
+
+::: {.fragment}
+```{r}
+TidyData |> filter(TcellsRounded > 0.3) |> pull(specimen)
+```
+
+:::
+
+---
+
+::: {.fragment}
+::: {.callout-tip title="."}
+And finally, since I may want to send the data to a supervisor, let's go ahead and export this "tidied" version of our data.frame out to its own .csv file. 
Working within our project folder, this would look like this:
+:::
+:::
+
+::: {.fragment}
+```{r}
+NewName <- paste0("MyNewDataset", ".csv")
+StorageLocation <- file.path("data", NewName)
+StorageLocation
+```
+:::
+
+::: {.fragment}
+```{r}
+#| eval: FALSE
+write.csv(TidyData, StorageLocation, row.names=FALSE)
+```
+:::
+
+---
+
+# Take Away
+
+::: {.fragment}
+::: {.callout-tip title="."}
+In this session, we explored the main functions within the `dplyr` package used in the context of "tidying" data, including selecting columns and filtering for rows, as well as additional functions used to create or modify existing values. We will continue to build on these throughout the course, introducing a few additional tidyverse functions we don't have time to cover today as appropriate. As we saw, knowing how to use these functions allows us to extensively and quickly modify our existing exported data files.
+:::
+:::
+
+---
+
+::: {.fragment}
+::: {.callout-tip title="."}
+One important goal as we move through the course (in terms of both reproducibility and replicability) is to modify files only within R, rather than going back to the original .csv or Excel file and hand-modifying individual values, which is neither reproducible nor replicable. Once set up, an R script can quickly re-carry out these same cleanup steps, and leave a documented record of how the data has changed (even more so if you are maintaining version control). If you do want to save the changes you have made, it is best to save them out as a new .csv file that you work with from then on.
+:::
+:::
+
+---
+
+::: {.fragment}
+::: {.callout-tip title="."}
+Next week, we will be using these skills when setting up metadata for our .fcs files. We will additionally take a look at the main source of format controversy within Bioconductor flow cytometry packages, i.e. whether to use a flowFrame or a cytoframe. Exciting stuff, but important information to know, as the functions needed to import them are slightly different. 
We will also look at how to import existing manually gated .wsp files from FlowJo/Diva/Floreada via the `CytoML` package.
+:::
+:::
+
+---
+
+![](images/TakeAway.jpg)
+
+---
+
+# Additional Resources
+
+[Data Organization in Spreadsheets for Ecologists](https://datacarpentry.github.io/spreadsheet-ecology-lesson/) This Carpentry self-study course was one of my "Aha" moments early on when learning R, and reinforced the need to keep my own Excel/CSV files in a tidy manner. It is worth the time going through in its entirety (even for non-Ecologists).
+
+[Data Analysis and Visualization in R for Ecologists](https://datacarpentry.github.io/R-ecology-lesson/) Continuation of the above, and a good way to continue building on the tidyverse functions we learned today.
+
+---
+
+[Simplistics: Introduction to Tidyverse in R](https://youtu.be/Bg4qxVNaDck?si=QPQq8TzOZ1w6XSy4) The YouTube channel is mainly focused on statistics for Psych classes, but at the end of the day, we are all working with similar objects with rows and columns; just the values contained within differ.
+
+[Riffomonas Project Playlist: Data Manipulation with R's Tidyverse](https://youtube.com/playlist?list=PLmNrK_nkqBpKf7j_ewpUm-w33R6PJYtD9&si=BVmDZPIXjRuHjERP) Riffomonas has a playlist that delves into both the tidyverse functions we used today, as well as other ones we will encounter later on in the course.
+
+---
+
+# Take-home Problems
+
+:::{.callout-tip title="Problem 1"}
+Taking a dataset (either today's or one of your own), work through the column-operating functions (`select()`, `rename()`, and `relocate()`). Once this is done, `filter()` by conditions from two separate columns, arrange in an order that makes sense, and export this "tidy" data as a .csv file.
+:::
+
+---
+
+:::{.callout-tip title="Problem 2"}
+We used the `mutate()` function to create new columns, but it can also be used to modify existing ones. Various numeric columns are showing way too many decimal places. 
As was shown, use `round()` to round all these proportion columns, but use mutate to overwrite the existing column. Export this as it's own .csv file. +::: + +--- + +:::{.callout-tip title="Problem 3"} +We can also use `mutate()` to combine columns. For our dataset, "bid", "timepoint", "Condition" are separate columns that originally were all part of the filename for the individual .fcs file. Try to figure out a way to combine them back together using `paste0()`, and save the new column as "filename". Once this is done, `pull()` the contents of this column, and using try to determine whether there were any duplicates (think innovative ways of using !, `length()` and `unique()`) +::: + +--- + +::: {style="text-align: right;"} +[![AGPL-3.0](https://www.gnu.org/graphics/agplv3-with-text-162x68.png)](https://www.gnu.org/licenses/agpl-3.0.en.html) [![CC BY-SA 4.0](https://licensebuttons.net/l/by-sa/4.0/88x31.png)](http://creativecommons.org/licenses/by-sa/4.0/) +::: \ No newline at end of file diff --git a/docs/Schedule.html b/docs/Schedule.html index 836bece..30eb6c5 100644 --- a/docs/Schedule.html +++ b/docs/Schedule.html @@ -30,7 +30,7 @@ - + @@ -813,8 +813,8 @@

Future Directions

diff --git a/docs/course/03_InsideFCSFile/slides.html b/docs/course/03_InsideFCSFile/slides.html index 12225a8..fd0963c 100644 --- a/docs/course/03_InsideFCSFile/slides.html +++ b/docs/course/03_InsideFCSFile/slides.html @@ -485,8 +485,8 @@

flowCore


-
- +
+
@@ -497,8 +497,8 @@

flowCore


-
- +
+
diff --git a/docs/course/04_IntroToTidyverse/BonusContent.html b/docs/course/04_IntroToTidyverse/BonusContent.html new file mode 100644 index 0000000..39bcd4a --- /dev/null +++ b/docs/course/04_IntroToTidyverse/BonusContent.html @@ -0,0 +1,1128 @@ + + + + + + + + + + + +Bonus Content – Cytometry in R + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+
+ + + +
+ +
+ + +
+ + + +
+ +
+
+

Bonus Content

+
+ + + +
+ +
+
Author
+
+

David Rach

+
+
+ +
+
Published
+
+

February 23, 2026

+
+
+ + +
+ + + +
+ + +

+
+

AGPL-3.0 CC BY-SA 4.0

+
+
+
thefilepath <- file.path("data", "Dataset.csv")
+
+thefilepath
+
+
[1] "data/Dataset.csv"
+
+
+
+
Data <- read.csv(file=thefilepath, check.names=FALSE)
+colnames(Data)
+
+
 [1] "bid"               "timepoint"         "Condition"        
+ [4] "Date"              "infant_sex"        "ptype"            
+ [7] "root"              "singletsFSC"       "singletsSSC"      
+[10] "singletsSSCB"      "CD45"              "NotMonocytes"     
+[13] "nonDebris"         "lymphocytes"       "live"             
+[16] "Dump+"             "Dump-"             "Tcells"           
+[19] "Vd2+"              "Vd2-"              "Va7.2+"           
+[22] "Va7.2-"            "CD4+"              "CD4-"             
+[25] "CD8+"              "CD8-"              "Tcells_count"     
+[28] "lymphocytes_count" "Monocytes"         "Debris"           
+[31] "CD45_count"       
+
+
+
+

Pull

+
+
+

Case-When

+

Case-when is an useful function, but may be a bit much to try to teach in the main segment. Basically, when the condition on the left side of the ~ is fulfilled, it will execute what is being specified on the right hand side.

+

In turn, we can combine these together by adding a “,”. I tend to use this mutate str_detect case_when combination when encountering messy data out in the while where I need to selectively change particular cell values in a consistent reproducible manner

+
+
+

Quasiquosure

+
+
library(dplyr)
+
+

+Attaching package: 'dplyr'
+
+
+
The following objects are masked from 'package:stats':
+
+    filter, lag
+
+
+
The following objects are masked from 'package:base':
+
+    intersect, setdiff, setequal, union
+
+
DateColumn <- select(Data, Date)
+DateColumn
+
+
          Date
+1   2025-07-26
+2   2025-07-26
+3   2025-07-26
+4   2025-07-26
+5   2025-07-26
+6   2025-07-26
+7   2025-07-26
+8   2025-07-26
+9   2025-07-26
+10  2025-07-26
+11  2025-07-26
+12  2025-07-26
+13  2025-07-26
+14  2025-07-26
+15  2025-07-26
+16  2025-07-26
+17  2025-07-26
+18  2025-07-26
+19  2025-07-26
+20  2025-07-26
+21  2025-07-26
+22  2025-07-26
+23  2025-07-26
+24  2025-07-26
+25  2025-07-26
+26  2025-07-26
+27  2025-07-29
+28  2025-07-29
+29  2025-07-29
+30  2025-07-29
+31  2025-07-29
+32  2025-07-29
+33  2025-07-29
+34  2025-07-29
+35  2025-07-29
+36  2025-07-29
+37  2025-07-29
+38  2025-07-29
+39  2025-07-29
+40  2025-07-29
+41  2025-07-29
+42  2025-07-29
+43  2025-07-29
+44  2025-07-29
+45  2025-07-29
+46  2025-07-29
+47  2025-07-29
+48  2025-07-29
+49  2025-07-31
+50  2025-07-31
+51  2025-07-31
+52  2025-07-31
+53  2025-07-31
+54  2025-07-31
+55  2025-07-31
+56  2025-07-31
+57  2025-07-31
+58  2025-07-31
+59  2025-07-31
+60  2025-07-31
+61  2025-07-31
+62  2025-07-31
+63  2025-07-31
+64  2025-07-31
+65  2025-07-31
+66  2025-07-31
+67  2025-07-31
+68  2025-07-31
+69  2025-07-31
+70  2025-07-31
+71  2025-07-31
+72  2025-07-31
+73  2025-07-31
+74  2025-07-31
+75  2025-07-31
+76  2025-08-05
+77  2025-08-05
+78  2025-08-05
+79  2025-08-05
+80  2025-08-05
+81  2025-08-05
+82  2025-08-05
+83  2025-08-05
+84  2025-08-05
+85  2025-08-05
+86  2025-08-05
+87  2025-08-05
+88  2025-08-05
+89  2025-08-05
+90  2025-08-05
+91  2025-08-05
+92  2025-08-05
+93  2025-08-05
+94  2025-08-05
+95  2025-08-05
+96  2025-08-05
+97  2025-08-05
+98  2025-08-05
+99  2025-08-07
+100 2025-08-07
+101 2025-08-07
+102 2025-08-07
+103 2025-08-07
+104 2025-08-07
+105 2025-08-07
+106 2025-08-07
+107 2025-08-07
+108 2025-08-07
+109 2025-08-07
+110 2025-08-07
+111 2025-08-07
+112 2025-08-07
+113 2025-08-07
+114 2025-08-07
+115 2025-08-07
+116 2025-08-07
+117 2025-08-07
+118 2025-08-07
+119 2025-08-07
+120 2025-08-07
+121 2025-08-07
+122 2025-08-07
+123 2025-08-07
+124 2025-08-07
+125 2025-08-22
+126 2025-08-22
+127 2025-08-22
+128 2025-08-22
+129 2025-08-22
+130 2025-08-22
+131 2025-08-22
+132 2025-08-22
+133 2025-08-22
+134 2025-08-22
+135 2025-08-22
+136 2025-08-22
+137 2025-08-22
+138 2025-08-22
+139 2025-08-22
+140 2025-08-22
+141 2025-08-22
+142 2025-08-22
+143 2025-08-22
+144 2025-08-22
+145 2025-08-22
+146 2025-08-22
+147 2025-08-22
+148 2025-08-22
+149 2025-08-22
+150 2025-08-22
+151 2025-08-22
+152 2025-08-28
+153 2025-08-28
+154 2025-08-28
+155 2025-08-28
+156 2025-08-28
+157 2025-08-28
+158 2025-08-28
+159 2025-08-28
+160 2025-08-28
+161 2025-08-28
+162 2025-08-28
+163 2025-08-28
+164 2025-08-28
+165 2025-08-28
+166 2025-08-28
+167 2025-08-28
+168 2025-08-28
+169 2025-08-28
+170 2025-08-28
+171 2025-08-28
+172 2025-08-28
+173 2025-08-28
+174 2025-08-28
+175 2025-08-28
+176 2025-08-28
+177 2025-08-28
+178 2025-08-28
+179 2025-08-30
+180 2025-08-30
+181 2025-08-30
+182 2025-08-30
+183 2025-08-30
+184 2025-08-30
+185 2025-08-30
+186 2025-08-30
+187 2025-08-30
+188 2025-08-30
+189 2025-08-30
+190 2025-08-30
+191 2025-08-30
+192 2025-08-30
+193 2025-08-30
+194 2025-08-30
+195 2025-08-30
+196 2025-08-30
+
+
+
+

Selecting Columns (Base R)

+

As we saw last week, there are multiple ways to select values from particular columns in base R. If we had wanted to retrieve the “Date” column, why not first identify its index position, and use [,] to extract the underlying data?

+
+
colnames(Data)
+
+
 [1] "bid"               "timepoint"         "Condition"        
+ [4] "Date"              "infant_sex"        "ptype"            
+ [7] "root"              "singletsFSC"       "singletsSSC"      
+[10] "singletsSSCB"      "CD45"              "NotMonocytes"     
+[13] "nonDebris"         "lymphocytes"       "live"             
+[16] "Dump+"             "Dump-"             "Tcells"           
+[19] "Vd2+"              "Vd2-"              "Va7.2+"           
+[22] "Va7.2-"            "CD4+"              "CD4-"             
+[25] "CD8+"              "CD8-"              "Tcells_count"     
+[28] "lymphocytes_count" "Monocytes"         "Debris"           
+[31] "CD45_count"       
+
+
+
+
colnames(Data)[4]
+
+
[1] "Date"
+
+
+
+
DataColumn <- Data[,4] # Column specified after the ,
+DataColumn
+
+
  [1] "2025-07-26" "2025-07-26" "2025-07-26" "2025-07-26" "2025-07-26"
+  [6] "2025-07-26" "2025-07-26" "2025-07-26" "2025-07-26" "2025-07-26"
+ [11] "2025-07-26" "2025-07-26" "2025-07-26" "2025-07-26" "2025-07-26"
+ [16] "2025-07-26" "2025-07-26" "2025-07-26" "2025-07-26" "2025-07-26"
+ [21] "2025-07-26" "2025-07-26" "2025-07-26" "2025-07-26" "2025-07-26"
+ [26] "2025-07-26" "2025-07-29" "2025-07-29" "2025-07-29" "2025-07-29"
+ [31] "2025-07-29" "2025-07-29" "2025-07-29" "2025-07-29" "2025-07-29"
+ [36] "2025-07-29" "2025-07-29" "2025-07-29" "2025-07-29" "2025-07-29"
+ [41] "2025-07-29" "2025-07-29" "2025-07-29" "2025-07-29" "2025-07-29"
+ [46] "2025-07-29" "2025-07-29" "2025-07-29" "2025-07-31" "2025-07-31"
+ [51] "2025-07-31" "2025-07-31" "2025-07-31" "2025-07-31" "2025-07-31"
+ [56] "2025-07-31" "2025-07-31" "2025-07-31" "2025-07-31" "2025-07-31"
+ [61] "2025-07-31" "2025-07-31" "2025-07-31" "2025-07-31" "2025-07-31"
+ [66] "2025-07-31" "2025-07-31" "2025-07-31" "2025-07-31" "2025-07-31"
+ [71] "2025-07-31" "2025-07-31" "2025-07-31" "2025-07-31" "2025-07-31"
+ [76] "2025-08-05" "2025-08-05" "2025-08-05" "2025-08-05" "2025-08-05"
+ [81] "2025-08-05" "2025-08-05" "2025-08-05" "2025-08-05" "2025-08-05"
+ [86] "2025-08-05" "2025-08-05" "2025-08-05" "2025-08-05" "2025-08-05"
+ [91] "2025-08-05" "2025-08-05" "2025-08-05" "2025-08-05" "2025-08-05"
+ [96] "2025-08-05" "2025-08-05" "2025-08-05" "2025-08-07" "2025-08-07"
+[101] "2025-08-07" "2025-08-07" "2025-08-07" "2025-08-07" "2025-08-07"
+[106] "2025-08-07" "2025-08-07" "2025-08-07" "2025-08-07" "2025-08-07"
+[111] "2025-08-07" "2025-08-07" "2025-08-07" "2025-08-07" "2025-08-07"
+[116] "2025-08-07" "2025-08-07" "2025-08-07" "2025-08-07" "2025-08-07"
+[121] "2025-08-07" "2025-08-07" "2025-08-07" "2025-08-07" "2025-08-22"
+[126] "2025-08-22" "2025-08-22" "2025-08-22" "2025-08-22" "2025-08-22"
+[131] "2025-08-22" "2025-08-22" "2025-08-22" "2025-08-22" "2025-08-22"
+[136] "2025-08-22" "2025-08-22" "2025-08-22" "2025-08-22" "2025-08-22"
+[141] "2025-08-22" "2025-08-22" "2025-08-22" "2025-08-22" "2025-08-22"
+[146] "2025-08-22" "2025-08-22" "2025-08-22" "2025-08-22" "2025-08-22"
+[151] "2025-08-22" "2025-08-28" "2025-08-28" "2025-08-28" "2025-08-28"
+[156] "2025-08-28" "2025-08-28" "2025-08-28" "2025-08-28" "2025-08-28"
+[161] "2025-08-28" "2025-08-28" "2025-08-28" "2025-08-28" "2025-08-28"
+[166] "2025-08-28" "2025-08-28" "2025-08-28" "2025-08-28" "2025-08-28"
+[171] "2025-08-28" "2025-08-28" "2025-08-28" "2025-08-28" "2025-08-28"
+[176] "2025-08-28" "2025-08-28" "2025-08-28" "2025-08-30" "2025-08-30"
+[181] "2025-08-30" "2025-08-30" "2025-08-30" "2025-08-30" "2025-08-30"
+[186] "2025-08-30" "2025-08-30" "2025-08-30" "2025-08-30" "2025-08-30"
+[191] "2025-08-30" "2025-08-30" "2025-08-30" "2025-08-30" "2025-08-30"
+[196] "2025-08-30"
+
+
+

However, looking at the output, we see this looks like the values, not a column. Our suspicions are confirmed when running DataColumn

+
+
str(DataColumn)
+
+
 chr [1:196] "2025-07-26" "2025-07-26" "2025-07-26" "2025-07-26" ...
+
+
+

This is similarly the case when we use the $ accessor.

+
+
DataColumn <- Data$Date
+str(DataColumn)
+
+
 chr [1:196] "2025-07-26" "2025-07-26" "2025-07-26" "2025-07-26" ...
+
+
+
+
head(DataColumn, 3)
+
+
[1] "2025-07-26" "2025-07-26" "2025-07-26"
+
+
+

By contrast, when selecting two columns, the structure is maintained.

+
+
TwoColumns <- Data[,4:5]
+
+

Why is the data.frame column structure lost in base R when isolating a single data.frame column? And who thought to make it that convoluted? If this were an R course in the early 2010s, we might go into an explanation, but fortunately we don’t need to understand why: we have the dplyr R package to rescue us.
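For completeness, base R does offer an escape hatch here: single-bracket subsetting takes a drop argument, and setting it to FALSE keeps the data.frame structure even for a single column. A minimal self-contained sketch (using a toy data.frame rather than Data):

```r
# drop = TRUE (the default) simplifies a single column to a bare vector;
# drop = FALSE preserves the data.frame wrapper
toy <- data.frame(Date = c("2025-07-26", "2025-08-05"), Count = c(1, 2))

class(toy[, "Date"])               # "character" -- structure lost
class(toy[, "Date", drop = FALSE]) # "data.frame" -- structure kept
```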

+
+

AGPL-3.0 CC BY-SA 4.0

+
+ + +
+
+ +
+ +
+ + + + + + \ No newline at end of file diff --git a/docs/course/04_IntroToTidyverse/images/00_CheckNamesTRUE.png b/docs/course/04_IntroToTidyverse/images/00_CheckNamesTRUE.png new file mode 100644 index 0000000..cb71c41 Binary files /dev/null and b/docs/course/04_IntroToTidyverse/images/00_CheckNamesTRUE.png differ diff --git a/docs/course/04_IntroToTidyverse/images/01_DataView.png b/docs/course/04_IntroToTidyverse/images/01_DataView.png new file mode 100644 index 0000000..10c6e7f Binary files /dev/null and b/docs/course/04_IntroToTidyverse/images/01_DataView.png differ diff --git a/docs/course/04_IntroToTidyverse/images/02_Glimpse.png b/docs/course/04_IntroToTidyverse/images/02_Glimpse.png new file mode 100644 index 0000000..13644d3 Binary files /dev/null and b/docs/course/04_IntroToTidyverse/images/02_Glimpse.png differ diff --git a/docs/course/04_IntroToTidyverse/images/03_ColumnClass.png b/docs/course/04_IntroToTidyverse/images/03_ColumnClass.png new file mode 100644 index 0000000..f2fff2a Binary files /dev/null and b/docs/course/04_IntroToTidyverse/images/03_ColumnClass.png differ diff --git a/docs/course/04_IntroToTidyverse/images/TakeAway.jpg b/docs/course/04_IntroToTidyverse/images/TakeAway.jpg new file mode 100644 index 0000000..4d60a0e Binary files /dev/null and b/docs/course/04_IntroToTidyverse/images/TakeAway.jpg differ diff --git a/docs/course/04_IntroToTidyverse/images/WebsiteBanner.png b/docs/course/04_IntroToTidyverse/images/WebsiteBanner.png new file mode 100644 index 0000000..71d5502 Binary files /dev/null and b/docs/course/04_IntroToTidyverse/images/WebsiteBanner.png differ diff --git a/docs/course/04_IntroToTidyverse/index.html b/docs/course/04_IntroToTidyverse/index.html new file mode 100644 index 0000000..7e19156 --- /dev/null +++ b/docs/course/04_IntroToTidyverse/index.html @@ -0,0 +1,1908 @@ + + + + + + + + + + + +04 - Introduction to Tidyverse – Cytometry in R + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+
+ + + +
+ +
+ + +
+ + + +
+ +
+
+

04 - Introduction to Tidyverse

+
+ + + +
+ +
+
Author
+
+

David Rach

+
+
+ +
+
Published
+
+

February 23, 2026

+
+
+ + +
+ + + +
+ + +

+
+

AGPL-3.0 CC BY-SA 4.0

+
+

For the YouTube livestream recording, see here

+ +

For screen-shot slides, click here

+
+

Background

+

Within our daily workflows as cytometrists, after acquiring data on our respective instruments, we begin analyzing the resulting datasets. After implementing various workflows, we then export data for downstream statistical analysis.

+

When I first started my Ph.D. program, a substantial amount of my day was spent renaming the columns of exported data so they would fit nicely in a Microsoft Excel sheet, setting up formulas to combine the proportion of positive cells across positive quadrants, and so on. Once this was done, additional hours would go by as I copied and pasted the contents of those columns over to a GraphPad Prism worksheet for statistical analysis.

+

This, of course, was the ideal scenario. Oftentimes the data was less organized, and instead of time spent copying and pasting columns, it would first be spent rearranging values from individual cells in the worksheet that were separated by spaces, all the while trying to remember what the various color codes and bold fonts stood for.

+

Today, we will explore what makes data “tidy”, and how to use the toolsets implemented in the various tidyverse R packages. At its simplest, if we think of and organize all our data in terms of rows and columns, we need fewer tools (i.e. functions) to reshape it and extract the information we are interested in. Additionally, this approach aligns more closely with how computers work, allowing us to carry out in mere seconds tasks that would otherwise have taken hours.

+

The dataset we will be using today is a manually gated spectral flow cytometry dataset (similar to ones exported by commercial software), and it has been intentionally left slightly messy. You could, however, just as easily use a “matrix” or “data.frame” object exported from inside a .fcs file, or swap in your own dataset; you would just need to switch out the input data by providing an alternate file path.

+
+
+
+

Walk Through

+
+
+
+ +
+
+Housekeeping +
+
+
+

As we do every week, on GitHub, sync your forked version of the CytometryInR course to bring in the most recent updates. Then within Positron, pull in those changes to your local computer.

+

After creating a “Week04” project folder, copy over the contents of “course/04_IntroToTidyverse” to that folder. This will hopefully prevent any merge issues when you attempt to bring in new data to your local Cytometry in R folder next week. Please remember, once you have set up your project folder, to stage, commit and push your “Week04” changes to GitHub so that they are backed up remotely.

+

If you are having issues syncing due to the Take-Home Problem merge conflict, see this walkthrough

+
+
+
+
+

read.csv

+

We will start by loading in our copied-over dataset (Dataset.csv) from its location in the project folder. If you are following the organization scheme we have been using throughout the course, your file path will look something like this:

+
+
thefilepath <- file.path("data", "Dataset.csv")
+
+thefilepath
+
+
[1] "data/Dataset.csv"
+
+
+
+
+
+ +
+
+Reminder +
+
+
+

We encourage using the file.path() function to build file paths, as this keeps our code reproducible and replicable when a project folder is copied to other people’s computers, whose operating systems may differ on whether folders are separated by forward or backward slashes.

+
+
+

Above, we directly specified the name (Dataset) and filetype (.csv) of the file we wanted in the last argument of file.path (“Dataset.csv”). This lets us skip the list.files() step we used last week, since we have provided the full file path. While this approach can be faster, if we accidentally mistype the file name we will end up with an error at the next step, as no file will be found under the mistyped name.
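One way to guard against a mistyped name is to check that the file actually exists before trying to read it. A small sketch; the helper name read_csv_checked is our own invention, not a utils function:

```r
# file.exists() returns FALSE (rather than erroring) for a missing path,
# letting us fail early with a clearer message than read.csv() would give
read_csv_checked <- function(path, ...) {
  if (!file.exists(path)) {
    stop("No file found at: ", path, " -- check the file name for typos")
  }
  read.csv(path, ...)
}
```

We could then call, for example, read_csv_checked(file.path("data", "Dataset.csv"), check.names = FALSE).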

+

Since our dataset is stored as a .csv file, we will be using the read.csv() function from the utils package (included in our base R software installation) to read it into R. We will also use the colnames() function from last week to get a read-out of the column names.

+
+
Data <- read.csv(file=thefilepath, check.names=FALSE)
+colnames(Data)
+
+
 [1] "bid"               "timepoint"         "Condition"        
+ [4] "Date"              "infant_sex"        "ptype"            
+ [7] "root"              "singletsFSC"       "singletsSSC"      
+[10] "singletsSSCB"      "CD45"              "NotMonocytes"     
+[13] "nonDebris"         "lymphocytes"       "live"             
+[16] "Dump+"             "Dump-"             "Tcells"           
+[19] "Vd2+"              "Vd2-"              "Va7.2+"           
+[22] "Va7.2-"            "CD4+"              "CD4-"             
+[25] "CD8+"              "CD8-"              "Tcells_count"     
+[28] "lymphocytes_count" "Monocytes"         "Debris"           
+[31] "CD45_count"       
+
+
+

As we look at the line of code, we now have enough context to decipher that the “file” argument is where we provide a file path to an individual file, but what does the “check.names” argument do?

+

Let’s see what happens to the column names when we set the “check.names” argument to TRUE:

+
+
Data_Alternative <- read.csv(thefilepath, check.names=TRUE)
+colnames(Data_Alternative)
+
+
 [1] "bid"               "timepoint"         "Condition"        
+ [4] "Date"              "infant_sex"        "ptype"            
+ [7] "root"              "singletsFSC"       "singletsSSC"      
+[10] "singletsSSCB"      "CD45"              "NotMonocytes"     
+[13] "nonDebris"         "lymphocytes"       "live"             
+[16] "Dump."             "Dump..1"           "Tcells"           
+[19] "Vd2."              "Vd2..1"            "Va7.2."           
+[22] "Va7.2..1"          "CD4."              "CD4..1"           
+[25] "CD8."              "CD8..1"            "Tcells_count"     
+[28] "lymphocytes_count" "Monocytes"         "Debris"           
+[31] "CD45_count"       
+
+
+

As we can see, any column name that contained a special character or a space was automatically converted over to R-approved syntax. However, this resulted in the loss of both “+” and “-”, leaving us unable to determine whether we are looking at cells within or outside a particular gate.
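Under the hood, check.names = TRUE runs the column names through base R’s make.names() (with unique = TRUE), which we can reproduce directly to see both the character substitution and the deduplication:

```r
# Disallowed characters become ".", and unique = TRUE appends a counter
# when two cleaned names collide (here, "Dump+" and "Dump-" both become "Dump.")
make.names(c("Dump+", "Dump-", "Va7.2+"), unique = TRUE)
# [1] "Dump."   "Dump..1" "Va7.2."
```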

+

+

Because of this, it is often better to rename columns individually after import, which we will learn how to do later today.

+

Following up on what we practiced last week, let’s use the head() function to visualize the first few rows of data.

+
+
head(Data, 3)
+
+
      bid timepoint Condition       Date infant_sex  ptype    root singletsFSC
+1 INF0052         0      Ctrl 2025-07-26       Male HEU-hi 2098368     1894070
+2 INF0100         0      Ctrl 2025-07-26       Male HEU-lo 2020184     1791890
+3 INF0100         4      Ctrl 2025-07-26       Male HEU-lo 1155040     1033320
+  singletsSSC singletsSSCB      CD45 NotMonocytes nonDebris lymphocytes
+1     1666179      1537396 0.5952943    0.8820349 0.8627649   0.6420138
+2     1697083      1579098 0.9106762    0.9052256 0.8602660   0.2145848
+3      875465       845446 0.9705765    0.9845400 0.9578793   0.7403110
+       live      Dump+     Dump-    Tcells        Vd2+      Vd2-     Va7.2+
+1 0.9020581 0.21090996 0.6911482 0.2804264 0.008120361 0.9918796 0.01448070
+2 0.8908981 0.06252775 0.8283703 0.6748298 0.007265620 0.9927344 0.01577499
+3 0.8757665 0.20023803 0.6755285 0.6119129 0.004651313 0.9953487 0.01579402
+     Va7.2-      CD4+      CD4-      CD8+       CD8- Tcells_count
+1 0.9773989 0.6341164 0.3432825 0.2734826 0.06979990       164771
+2 0.9769594 0.6119112 0.3650482 0.3357696 0.02927858       208241
+3 0.9795547 0.6639621 0.3155925 0.2862104 0.02938209       371723
+  lymphocytes_count  Monocytes     Debris CD45_count
+1            587573 0.11796509 0.13723513     915203
+2            308583 0.09477437 0.13973396    1438047
+3            607477 0.01545999 0.04212072     820570
+
+
+

When working in Positron, we could have alternatively clicked on the little grid icon next to our created variable “Data” in the right secondary sidebar, which would have opened the data in our Editor window. From this same window, we can see it is stored as a “data.frame” object type.

+

+

We can also open the same window using the View() function:

+
+
View(Data)
+
+

Wrapping up our brief recap of last week’s functions, we can check an object’s type using both the class() and str() functions.

+
+
class(Data)
+
+
[1] "data.frame"
+
+
+
+
str(Data)
+
+
'data.frame':   196 obs. of  31 variables:
+ $ bid              : chr  "INF0052" "INF0100" "INF0100" "INF0100" ...
+ $ timepoint        : int  0 0 4 9 0 4 9 4 9 0 ...
+ $ Condition        : chr  "Ctrl" "Ctrl" "Ctrl" "Ctrl" ...
+ $ Date             : chr  "2025-07-26" "2025-07-26" "2025-07-26" "2025-07-26" ...
+ $ infant_sex       : chr  "Male" "Male" "Male" "Male" ...
+ $ ptype            : chr  "HEU-hi" "HEU-lo" "HEU-lo" "HEU-lo" ...
+ $ root             : int  2098368 2020184 1155040 358624 1362216 1044808 1434840 972056 1521928 2363512 ...
+ $ singletsFSC      : int  1894070 1791890 1033320 328624 1206309 917398 1265022 875707 1359574 2136616 ...
+ $ singletsSSC      : int  1666179 1697083 875465 289327 1032946 735579 988445 767323 1175755 1875394 ...
+ $ singletsSSCB     : int  1537396 1579098 845446 276289 982736 685592 940454 718000 1097478 1732620 ...
+ $ CD45             : num  0.595 0.911 0.971 0.982 0.957 ...
+ $ NotMonocytes     : num  0.882 0.905 0.985 0.986 0.956 ...
+ $ nonDebris        : num  0.863 0.86 0.958 0.941 0.841 ...
+ $ lymphocytes      : num  0.642 0.215 0.74 0.651 0.705 ...
+ $ live             : num  0.902 0.891 0.876 0.915 0.895 ...
+ $ Dump+            : num  0.2109 0.0625 0.2002 0.2147 0.3383 ...
+ $ Dump-            : num  0.691 0.828 0.676 0.701 0.557 ...
+ $ Tcells           : num  0.28 0.675 0.612 0.631 0.44 ...
+ $ Vd2+             : num  0.00812 0.00727 0.00465 0.01135 0.00475 ...
+ $ Vd2-             : num  0.992 0.993 0.995 0.989 0.995 ...
+ $ Va7.2+           : num  0.0145 0.0158 0.0158 0.017 0.0133 ...
+ $ Va7.2-           : num  0.977 0.977 0.98 0.972 0.982 ...
+ $ CD4+             : num  0.634 0.612 0.664 0.438 0.739 ...
+ $ CD4-             : num  0.343 0.365 0.316 0.534 0.243 ...
+ $ CD8+             : num  0.273 0.336 0.286 0.486 0.195 ...
+ $ CD8-             : num  0.0698 0.0293 0.0294 0.0476 0.0476 ...
+ $ Tcells_count     : int  164771 208241 371723 111552 291777 271870 487937 220634 415867 184930 ...
+ $ lymphocytes_count: int  587573 308583 607477 176662 663667 510730 726238 451047 710964 652155 ...
+ $ Monocytes        : num  0.118 0.0948 0.0155 0.0145 0.0444 ...
+ $ Debris           : num  0.1372 0.1397 0.0421 0.0587 0.1592 ...
+ $ CD45_count       : int  915203 1438047 820570 271304 940733 675857 921660 701657 1066884 1017713 ...
+
+
+
+
+

data.frame

+

Or, alternatively, we can use the new-to-us glimpse() function:

+
+
glimpse(Data)
+
+
Error in `glimpse()`:
+! could not find function "glimpse"
+
+
+
+
+
+ +
+
+Checkpoint 1 +
+
+
+

This, however, returns an error. Any idea why this might be occurring?

+
+
+
+
+Code +
# We haven't attached/loaded the package in which the function glimpse is within
+
+
+
+
+
+ +
+
+Checkpoint 2 +
+
+
+

How would we locate the package that a not-yet-loaded function lives in?

+
+
+
+
+Code +
# We can use double ? to search all installed packages for a function, regardless
+# if the package is attached to the environment or not
+
+??glimpse
+
+
+

+

From the list of search matches (in the right secondary sidebar), it looks likely that the glimpse() function in the dplyr package is the one we were looking for. This is one of the main tidyverse packages we will be using throughout the course. Let’s first attach it to our environment via a library() call and try running glimpse() again.

+
+
library(dplyr)
+glimpse(Data)
+
+
Rows: 196
+Columns: 31
+$ bid               <chr> "INF0052", "INF0100", "INF0100", "INF0100", "INF0179…
+$ timepoint         <int> 0, 0, 4, 9, 0, 4, 9, 4, 9, 0, 0, 4, 9, 0, 4, 9, 4, 9…
+$ Condition         <chr> "Ctrl", "Ctrl", "Ctrl", "Ctrl", "Ctrl", "Ctrl", "Ctr…
+$ Date              <chr> "2025-07-26", "2025-07-26", "2025-07-26", "2025-07-2…
+$ infant_sex        <chr> "Male", "Male", "Male", "Male", "Male", "Male", "Mal…
+$ ptype             <chr> "HEU-hi", "HEU-lo", "HEU-lo", "HEU-lo", "HU", "HU", …
+$ root              <int> 2098368, 2020184, 1155040, 358624, 1362216, 1044808,…
+$ singletsFSC       <int> 1894070, 1791890, 1033320, 328624, 1206309, 917398, …
+$ singletsSSC       <int> 1666179, 1697083, 875465, 289327, 1032946, 735579, 9…
+$ singletsSSCB      <int> 1537396, 1579098, 845446, 276289, 982736, 685592, 94…
+$ CD45              <dbl> 0.5952943, 0.9106762, 0.9705765, 0.9819573, 0.957259…
+$ NotMonocytes      <dbl> 0.8820349, 0.9052256, 0.9845400, 0.9855070, 0.955627…
+$ nonDebris         <dbl> 0.8627649, 0.8602660, 0.9578793, 0.9412615, 0.840783…
+$ lymphocytes       <dbl> 0.6420138, 0.2145848, 0.7403110, 0.6511588, 0.705478…
+$ live              <dbl> 0.9020581, 0.8908981, 0.8757665, 0.9153242, 0.895214…
+$ `Dump+`           <dbl> 0.21090996, 0.06252775, 0.20023803, 0.21469246, 0.33…
+$ `Dump-`           <dbl> 0.6911482, 0.8283703, 0.6755285, 0.7006317, 0.556895…
+$ Tcells            <dbl> 0.2804264, 0.6748298, 0.6119129, 0.6314431, 0.439643…
+$ `Vd2+`            <dbl> 0.008120361, 0.007265620, 0.004651313, 0.011348967, …
+$ `Vd2-`            <dbl> 0.9918796, 0.9927344, 0.9953487, 0.9886510, 0.995246…
+$ `Va7.2+`          <dbl> 0.014480704, 0.015774991, 0.015794019, 0.017023451, …
+$ `Va7.2-`          <dbl> 0.9773989, 0.9769594, 0.9795547, 0.9716276, 0.981924…
+$ `CD4+`            <dbl> 0.6341164, 0.6119112, 0.6639621, 0.4378944, 0.739256…
+$ `CD4-`            <dbl> 0.3432825, 0.3650482, 0.3155925, 0.5337331, 0.242668…
+$ `CD8+`            <dbl> 0.2734826, 0.3357696, 0.2862104, 0.4861231, 0.195063…
+$ `CD8-`            <dbl> 0.06979990, 0.02927858, 0.02938209, 0.04761008, 0.04…
+$ Tcells_count      <int> 164771, 208241, 371723, 111552, 291777, 271870, 4879…
+$ lymphocytes_count <int> 587573, 308583, 607477, 176662, 663667, 510730, 7262…
+$ Monocytes         <dbl> 0.11796509, 0.09477437, 0.01545999, 0.01449297, 0.04…
+$ Debris            <dbl> 0.13723513, 0.13973396, 0.04212072, 0.05873854, 0.15…
+$ CD45_count        <int> 915203, 1438047, 820570, 271304, 940733, 675857, 921…
+
+
+

We notice that while similar to the str() output, glimpse() handles spacing a little differently and includes the dimensions at the top. We can also retrieve the dimensions directly using the dim() function (which maintains base R’s row-then-column convention, e.g. [196, 31]):

+
+
dim(Data)
+
+
[1] 196  31
+
+
+
+
+

Column value type

+

As we saw last week, functions often need values that match a certain type (the paintbrush-needing-paint analogy). As we inspect the columns of Data, we notice that some columns contain character (i.e. “chr”) values. Others contain numeric values (subtyped as either double (“dbl”) or integer (“int”)). At first glance, we do not appear to have any logical (i.e. TRUE or FALSE) columns in this dataset.

+

+

If we want to verify the type of values contained within a data.frame column, we can employ several similarly named functions (is.character(), is.numeric() or is.logical()) to check:

+
+
# colnames(Data)  # To recheck the column names
+
+is.character(Data$bid)
+
+
[1] TRUE
+
+
+
+
is.numeric(Data$bid)
+
+
[1] FALSE
+
+
+
+
# colnames(Data)  # To recheck the column names
+
+is.character(Data$Tcells_count)
+
+
[1] FALSE
+
+
+

For numeric columns, beyond is.numeric(), we can also be subtype-specific using either is.integer() or is.double():

+
+
# colnames(Data)  # To recheck the column names
+
+is.numeric(Data$Tcells_count)
+
+
[1] TRUE
+
+
is.integer(Data$Tcells_count)
+
+
[1] TRUE
+
+
is.double(Data$Tcells_count)
+
+
[1] FALSE
+
+
+
+
+
+ +
+
+Reminder +
+
+
+

As we observed last week with keywords, column names that contain special characters (like + or -) or spaces need to be surrounded with tick marks for the function to be able to run.

+
+
+
+
# colnames(Data)  # To recheck the column names
+is.numeric(Data$CD8-)
+
+
Error in parse(text = input): <text>:2:21: unexpected ')'
+1: # colnames(Data)  # To recheck the column names
+2: is.numeric(Data$CD8-)
+                       ^
+
+
+
+
# colnames(Data)  # To recheck the column names
+is.numeric(Data$`CD8-`)
+
+
[1] TRUE
+
+
+
+
+

select (Columns)

+

Now that we have read in our data and have a general picture of its structure and contents, let’s start learning the main dplyr functions we will be using throughout the course. To do this, let’s go ahead and attach dplyr to our local environment via a library() call.

+
+
library(dplyr)
+
+

We will start with the select() function. It is used to “select” a column from a data.frame-type object. In the simplest usage, we provide the name of our data.frame variable/object as the first argument after the opening parenthesis. This is then followed by the name of the column we want to select as the second argument (let’s place quotation marks around the column name for now):

+
+
DateColumn <- select(Data, "Date")
+DateColumn[1:10,]
+
+
 [1] "2025-07-26" "2025-07-26" "2025-07-26" "2025-07-26" "2025-07-26"
+ [6] "2025-07-26" "2025-07-26" "2025-07-26" "2025-07-26" "2025-07-26"
+
+
+

This results in the column being selected, with the new object containing only that subsetted column from the original Data object.
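We can verify with class() that, unlike the $ accessor from earlier, select() keeps the data.frame wrapper even for a single column. A quick sketch with a toy data.frame (so it runs standalone; the same calls work on Data):

```r
library(dplyr)

toy <- data.frame(Date = c("2025-07-26", "2025-08-05"))

class(select(toy, "Date")) # "data.frame" -- structure kept
class(toy$Date)            # "character"  -- simplified to a vector
```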

+
+

Pipe Operators

+

While the above line of code works to select a column, when you encounter select() out in the wild, it will more often be in a line of code that looks like this:

+
+
DateColumn <- Data |> select("Date")
+DateColumn[1:10,]
+
+
 [1] "2025-07-26" "2025-07-26" "2025-07-26" "2025-07-26" "2025-07-26"
+ [6] "2025-07-26" "2025-07-26" "2025-07-26" "2025-07-26" "2025-07-26"
+
+
+

“What in the world is that thing |> ?”

+

Glad you asked! A useful feature of the tidyverse packages is their use of pipes (either the original magrittr package’s “%>%” or base R’s “|>”, available from version 4.1.0), usually appearing like this:

+
+
# magrittr %>% pipe
+
+DateColumn <- Data %>% select("Date")
+
+# base R |> pipe
+DateColumn <- Data |> select("Date")
+
+

“How do we interpret/read that line of code?”

+

Let’s break it down, starting off just to the right of the assignment arrow (<-) with our data.frame “Data”.

+
+
Data
+
+

We then proceed to read to the right, adding in our pipe operator. The pipe essentially serves as an intermediary, passing the contents of Data onward to the subsequent function.

+
+
Data |> 
+
+

In our case, this subsequent function is select(), which will select a particular column from the available data. When using the pipe, the first argument slot we saw in select(Data, “Date”) is occupied by the contents of Data being passed along by the pipe.

+
+
Data |> select()
+
+

To complete the transfer, we provide the desired column name to select() to act on (“Date” in this case)

+
+
Data |> select("Date")
+
+

In summary, the contents of Data are passed through the pipe, and select() runs on those contents to select the Date column:

+
+
Data |> select("Date")
+
+

One of the main advantages of pipes is that they can be chained together, passing the resulting object of one operation on to the next pipe and subsequent function. We can see this in the example below, where we hand off the isolated “Date” column to the nrow() function to determine the number of rows. We will use pipes throughout the course, so you will gradually gain familiarity as you encounter them.

+
+
Data |> select("Date") |> nrow()
+
+
[1] 196
+
+
+

For those with prior R experience, you may be more familiar with the older magrittr %>% pipe. The base R |> pipe operator was introduced in R version 4.1.0. While mostly interchangeable, they have a few nuances that come into play in more advanced use cases. You are welcome to use whichever you prefer (my current preference is |>, as it’s one less key to press).
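One such nuance is the placeholder used when the piped object should not land in the first argument slot: magrittr uses a dot (.), while the base pipe (from R 4.2.0 onward) uses an underscore (_) that must be passed to a named argument. A small sketch:

```r
library(magrittr)

# magrittr's dot placeholder can stand in for any argument position
letters %>% head(x = ., n = 3)
# [1] "a" "b" "c"

# the base pipe's underscore placeholder must go to a named argument
letters |> head(x = _, n = 3)
# [1] "a" "b" "c"
```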

+
+
+

R Quirks

+
+
+
+ +
+
+Odd R Behavior # 1 +
+
+
+

While we used quotation marks around the column name in our previous example, unlike what we encountered with install.packages() when we forgot the quotation marks, select() still retrieves the correct column even though Date is not an environment variable:

+
+
+
+
Data |> select(Date) |> head(3)
+
+
        Date
+1 2025-07-26
+2 2025-07-26
+3 2025-07-26
+
+
+
+
+
+ +
+
+. +
+
+
+

The reasons for this odd R behavior are nuanced and a topic for another day. For now, think of it as the dplyr R package picking up the slack, using context to infer that Date is a column name and not an environment variable/object.
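The flip side of this convenience is ambiguity when we do hold a column name in an environment variable. tidyselect’s all_of() helper (re-exported by dplyr) makes the intent explicit; a sketch with a toy data.frame (the same pattern works on Data):

```r
library(dplyr)

toy <- data.frame(Date = "2025-07-26", bid = "INF0052")

# The bare word Date would match the column directly; wrapping a character
# variable in all_of() tells select() to take the names from the environment
wanted <- "Date"
toy |> select(all_of(wanted))
```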

+
+
+
+
+

Selecting multiple columns

+

Since we are able to select one column, can we select multiple (similar to a Data[,2:5] approach in base R)? We can, and they can be positioned anywhere within the data.frame:

+
+
Subset <- Data |> select(bid, timepoint, Condition, Tcells, `CD8+`, `CD4+`)
+
+head(Subset, 3)
+
+
      bid timepoint Condition    Tcells      CD8+      CD4+
+1 INF0052         0      Ctrl 0.2804264 0.2734826 0.6341164
+2 INF0100         0      Ctrl 0.6748298 0.3357696 0.6119112
+3 INF0100         4      Ctrl 0.6119129 0.2862104 0.6639621
+
+
+

You will notice that the order in which we selected the columns will dictate their position in the subsetted data.frame object:

+
+
Subset <- Data |> select(bid, Tcells, `CD8+`, `CD4+`, timepoint, Condition)
+
+head(Subset, 3)
+
+
      bid    Tcells      CD8+      CD4+ timepoint Condition
+1 INF0052 0.2804264 0.2734826 0.6341164         0      Ctrl
+2 INF0100 0.6748298 0.3357696 0.6119112         0      Ctrl
+3 INF0100 0.6119129 0.2862104 0.6639621         4      Ctrl
+
+
+
+
+
+

relocate

+

Alternatively, we occasionally want to move just one column. While we could respecify the order using select(), spelling out the names of all the other columns just to rearrange one does not sound like a good use of time. For this reason, the second dplyr function we will learn is relocate().

+

Looking at our Data object, let’s say we wanted to move the Tcells column from its current location to the second column position (right after the bid column). The line of code to do so would look like:

+
+
Data |> relocate(Tcells, .after=bid) |> head(3)
+
+
      bid    Tcells timepoint Condition       Date infant_sex  ptype    root
+1 INF0052 0.2804264         0      Ctrl 2025-07-26       Male HEU-hi 2098368
+2 INF0100 0.6748298         0      Ctrl 2025-07-26       Male HEU-lo 2020184
+3 INF0100 0.6119129         4      Ctrl 2025-07-26       Male HEU-lo 1155040
+  singletsFSC singletsSSC singletsSSCB      CD45 NotMonocytes nonDebris
+1     1894070     1666179      1537396 0.5952943    0.8820349 0.8627649
+2     1791890     1697083      1579098 0.9106762    0.9052256 0.8602660
+3     1033320      875465       845446 0.9705765    0.9845400 0.9578793
+  lymphocytes      live      Dump+     Dump-        Vd2+      Vd2-     Va7.2+
+1   0.6420138 0.9020581 0.21090996 0.6911482 0.008120361 0.9918796 0.01448070
+2   0.2145848 0.8908981 0.06252775 0.8283703 0.007265620 0.9927344 0.01577499
+3   0.7403110 0.8757665 0.20023803 0.6755285 0.004651313 0.9953487 0.01579402
+     Va7.2-      CD4+      CD4-      CD8+       CD8- Tcells_count
+1 0.9773989 0.6341164 0.3432825 0.2734826 0.06979990       164771
+2 0.9769594 0.6119112 0.3650482 0.3357696 0.02927858       208241
+3 0.9795547 0.6639621 0.3155925 0.2862104 0.02938209       371723
+  lymphocytes_count  Monocytes     Debris CD45_count
+1            587573 0.11796509 0.13723513     915203
+2            308583 0.09477437 0.13973396    1438047
+3            607477 0.01545999 0.04212072     820570
+
+
# |> head(3) is used only to make the website output visualization manageable :D
+
+

Similar to what we saw with select(), this approach can also be used for more than one column:

+
+
Data |> relocate(Tcells, Monocytes, .after=bid) |> head(3)
+
+
      bid    Tcells  Monocytes timepoint Condition       Date infant_sex  ptype
+1 INF0052 0.2804264 0.11796509         0      Ctrl 2025-07-26       Male HEU-hi
+2 INF0100 0.6748298 0.09477437         0      Ctrl 2025-07-26       Male HEU-lo
+3 INF0100 0.6119129 0.01545999         4      Ctrl 2025-07-26       Male HEU-lo
+     root singletsFSC singletsSSC singletsSSCB      CD45 NotMonocytes nonDebris
+1 2098368     1894070     1666179      1537396 0.5952943    0.8820349 0.8627649
+2 2020184     1791890     1697083      1579098 0.9106762    0.9052256 0.8602660
+3 1155040     1033320      875465       845446 0.9705765    0.9845400 0.9578793
+  lymphocytes      live      Dump+     Dump-        Vd2+      Vd2-     Va7.2+
+1   0.6420138 0.9020581 0.21090996 0.6911482 0.008120361 0.9918796 0.01448070
+2   0.2145848 0.8908981 0.06252775 0.8283703 0.007265620 0.9927344 0.01577499
+3   0.7403110 0.8757665 0.20023803 0.6755285 0.004651313 0.9953487 0.01579402
+     Va7.2-      CD4+      CD4-      CD8+       CD8- Tcells_count
+1 0.9773989 0.6341164 0.3432825 0.2734826 0.06979990       164771
+2 0.9769594 0.6119112 0.3650482 0.3357696 0.02927858       208241
+3 0.9795547 0.6639621 0.3155925 0.2862104 0.02938209       371723
+  lymphocytes_count     Debris CD45_count
+1            587573 0.13723513     915203
+2            308583 0.13973396    1438047
+3            607477 0.04212072     820570
+
+
# |> head(3) is used only to make the website output visualization manageable :D
+
+

We can also modify the argument so that columns are placed before a certain column

+
+
Data |> relocate(Tcells, .before=Date) |> head(3)
+
+
      bid timepoint Condition    Tcells       Date infant_sex  ptype    root
+1 INF0052         0      Ctrl 0.2804264 2025-07-26       Male HEU-hi 2098368
+2 INF0100         0      Ctrl 0.6748298 2025-07-26       Male HEU-lo 2020184
+3 INF0100         4      Ctrl 0.6119129 2025-07-26       Male HEU-lo 1155040
+  singletsFSC singletsSSC singletsSSCB      CD45 NotMonocytes nonDebris
+1     1894070     1666179      1537396 0.5952943    0.8820349 0.8627649
+2     1791890     1697083      1579098 0.9106762    0.9052256 0.8602660
+3     1033320      875465       845446 0.9705765    0.9845400 0.9578793
+  lymphocytes      live      Dump+     Dump-        Vd2+      Vd2-     Va7.2+
+1   0.6420138 0.9020581 0.21090996 0.6911482 0.008120361 0.9918796 0.01448070
+2   0.2145848 0.8908981 0.06252775 0.8283703 0.007265620 0.9927344 0.01577499
+3   0.7403110 0.8757665 0.20023803 0.6755285 0.004651313 0.9953487 0.01579402
+     Va7.2-      CD4+      CD4-      CD8+       CD8- Tcells_count
+1 0.9773989 0.6341164 0.3432825 0.2734826 0.06979990       164771
+2 0.9769594 0.6119112 0.3650482 0.3357696 0.02927858       208241
+3 0.9795547 0.6639621 0.3155925 0.2862104 0.02938209       371723
+  lymphocytes_count  Monocytes     Debris CD45_count
+1            587573 0.11796509 0.13723513     915203
+2            308583 0.09477437 0.13973396    1438047
+3            607477 0.01545999 0.04212072     820570
+
+
# |> head(3) is used only to make the website output visualization manageable :D
+
+

And as we might suspect, we could specify a column index location rather than using a column name.

+
+
Data |> relocate(Date, .before=1) |> head(3)
+
+
        Date     bid timepoint Condition infant_sex  ptype    root singletsFSC
+1 2025-07-26 INF0052         0      Ctrl       Male HEU-hi 2098368     1894070
+2 2025-07-26 INF0100         0      Ctrl       Male HEU-lo 2020184     1791890
+3 2025-07-26 INF0100         4      Ctrl       Male HEU-lo 1155040     1033320
+  singletsSSC singletsSSCB      CD45 NotMonocytes nonDebris lymphocytes
+1     1666179      1537396 0.5952943    0.8820349 0.8627649   0.6420138
+2     1697083      1579098 0.9106762    0.9052256 0.8602660   0.2145848
+3      875465       845446 0.9705765    0.9845400 0.9578793   0.7403110
+       live      Dump+     Dump-    Tcells        Vd2+      Vd2-     Va7.2+
+1 0.9020581 0.21090996 0.6911482 0.2804264 0.008120361 0.9918796 0.01448070
+2 0.8908981 0.06252775 0.8283703 0.6748298 0.007265620 0.9927344 0.01577499
+3 0.8757665 0.20023803 0.6755285 0.6119129 0.004651313 0.9953487 0.01579402
+     Va7.2-      CD4+      CD4-      CD8+       CD8- Tcells_count
+1 0.9773989 0.6341164 0.3432825 0.2734826 0.06979990       164771
+2 0.9769594 0.6119112 0.3650482 0.3357696 0.02927858       208241
+3 0.9795547 0.6639621 0.3155925 0.2862104 0.02938209       371723
+  lymphocytes_count  Monocytes     Debris CD45_count
+1            587573 0.11796509 0.13723513     915203
+2            308583 0.09477437 0.13973396    1438047
+3            607477 0.01545999 0.04212072     820570
+
+
# |> head(3) is used only to make the website output visualization manageable :D
+
+
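relocate() also accepts the same tidyselect helpers as select(). For instance, last_col() resolves to the final column position at run time, letting us send a column to the end without counting columns; a sketch with a toy data.frame (the same call works on Data):

```r
library(dplyr)

toy <- data.frame(Date = "2025-07-26", bid = "INF0052", Tcells = 0.28)

# .after = last_col() moves Date to the end, however many columns exist
toy |> relocate(Date, .after = last_col())
```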
+
+

rename

+

At this point, we are able to both move and select particular columns, allowing us to rearrange and subset a larger data.frame object however we want it to appear. However, as we have encountered, some of the names contain special characters and spaces, requiring tick marks (``) to avoid issues. How can we change a column name?

+

In base R, we could change individual column names by assigning a new value with the assignment arrow to the corresponding column name index. For example, looking at our Subset object, we could rename CD8+ as follows:

+
+
colnames(Subset)
+
+
[1] "bid"       "Tcells"    "CD8+"      "CD4+"      "timepoint" "Condition"
+
+
colnames(Subset)[3]
+
+
[1] "CD8+"
+
+
+
+
colnames(Subset)[3] <- "CD8Positive"
+colnames(Subset)
+
+
[1] "bid"         "Tcells"      "CD8Positive" "CD4+"        "timepoint"  
+[6] "Condition"  
+
+
+

With the tidyverse, we can use the rename() function, which removes the need to look up the column index number. Within the parentheses, we place the new name to the left of the equals sign and the old name to the right (new_name = old_name).

+
+
Renamed <- Subset |> rename(CD4_Positive = `CD4+`)
+colnames(Renamed)
+
+
[1] "bid"          "Tcells"       "CD8Positive"  "CD4_Positive" "timepoint"   
+[6] "Condition"   
+
+
+

If we wanted to rename multiple column names at once, we would just need to include a comma between the individual rename arguments within the parentheses.

+
+
Renamed_Multiple <- Subset |> rename(specimen = bid, timepoint_months = timepoint, stimulation = Condition, CD4Positive=`CD4+`)
+colnames(Renamed_Multiple)
+
+
[1] "specimen"         "Tcells"           "CD8Positive"      "CD4Positive"     
+[5] "timepoint_months" "stimulation"     
+
+
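If we wanted to apply the same transformation to every column name at once, dplyr also provides rename_with(). A minimal sketch, assuming the same Subset object from above:

```r
# rename_with() applies a function (here toupper) to all column names
Subset |> rename_with(toupper) |> colnames()
```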
+
+
+

pull

+

Sometimes, we may want to retrieve the individual values present in a column to use within either a vector or a list. We can do this using the pull() function, which retrieves the column contents and strips the column formatting.

+
+
Data |> pull(Date) |> head(5)
+
+
[1] "2025-07-26" "2025-07-26" "2025-07-26" "2025-07-26" "2025-07-26"
+
+
+

This can be useful when we are doing data exploration and trying to determine how many unique variants might be present. For example, if we wanted to see what days the individual samples were acquired on, we could pull() the data and pass it to the unique() function:

+
+
Data |> pull(Date) |> unique()
+
+
[1] "2025-07-26" "2025-07-29" "2025-07-31" "2025-08-05" "2025-08-07"
+[6] "2025-08-22" "2025-08-28" "2025-08-30"
+
+
+
+
+
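Beyond unique(), we can pass the pulled vector to other base R functions. A small sketch, assuming the same Data object:

```r
# Count how many rows were acquired on each date
Data |> pull(Date) |> table()

# Or count the number of distinct acquisition dates
Data |> pull(Date) |> unique() |> length()
```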

filter (Rows)

+

So far, we have been working with dplyr functions primarily used when working with and subsetting columns (including select(), pull(), rename() and relocate()). What if we wanted to work with rows of a data.frame? This is where the filter() function is used.

+

The Condition column in this dataset appears to indicate whether the samples were stimulated. Let’s see how many unique values are contained within that column:

+
+
Data |> pull(Condition) |> unique() 
+
+
[1] "Ctrl" "PPD"  "SEB" 
+
+
+

In the case of this dataset, it looks like the .fcs files were either left alone (Ctrl), treated with PPD (Purified Protein Derivative), or treated with SEB. What if we wanted to subset only those treated with PPD?

+

Within filter(), we would specify the column name as the first argument, and ask that only values equal to (==) “PPD” be returned. Notice that quotation marks are needed in this case, as we are asking for a matching character value.

+
+
PPDOnly <- Data |> filter(Condition == "PPD")
+head(PPDOnly, 5)
+
+
      bid timepoint Condition       Date infant_sex  ptype    root singletsFSC
+1 INF0052         0       PPD 2025-07-26       Male HEU-hi 2363512     2136616
+2 INF0100         0       PPD 2025-07-26       Male HEU-lo 2049112     1821676
+3 INF0100         4       PPD 2025-07-26       Male HEU-lo 1063496      946587
+4 INF0100         9       PPD 2025-07-26       Male HEU-lo  788368      714198
+5 INF0179         0       PPD 2025-07-26       Male     HU 1380336     1242311
+  singletsSSC singletsSSCB      CD45 NotMonocytes nonDebris lymphocytes
+1     1875394      1732620 0.5873838    0.8619837 0.8429685   0.6408044
+2     1717636      1597085 0.9063081    0.9251961 0.8771889   0.2174284
+3      796056       767297 0.9709891    0.9848719 0.9556049   0.7313503
+4      626387       600011 0.9822803    0.9842139 0.8123041   0.6223228
+5     1047081      1000877 0.9470275    0.9575685 0.9134438   0.6996502
+       live      Dump+     Dump-    Tcells        Vd2+      Vd2-     Va7.2+
+1 0.9009254 0.20743228 0.6934931 0.2835676 0.007408209 0.9925918 0.01507057
+2 0.8929673 0.06181426 0.8311531 0.6735798 0.007137230 0.9928628 0.01671801
+3 0.8782307 0.20727202 0.6709587 0.5989873 0.005254643 0.9947454 0.01609790
+4 0.9566639 0.23164587 0.7250180 0.6489405 0.011935922 0.9880641 0.01855298
+5 0.8856898 0.33186111 0.5538287 0.4441538 0.004382972 0.9956170 0.01297237
+     Va7.2-      CD4+      CD4-      CD8+       CD8- Tcells_count
+1 0.9775212 0.6340345 0.3434867 0.2744119 0.06907479       184930
+2 0.9761448 0.6145707 0.3615741 0.3312279 0.03034620       211987
+3 0.9786475 0.6559480 0.3226994 0.2912084 0.03149109       326378
+4 0.9695111 0.4306889 0.5388222 0.4908558 0.04796636       238021
+5 0.9826447 0.7499194 0.2327253 0.1850897 0.04763554       294549
+  lymphocytes_count  Monocytes     Debris CD45_count
+1            652155 0.13801632 0.15703150    1017713
+2            314717 0.07480391 0.12281107    1447451
+3            544883 0.01512811 0.04439511     745037
+4            366784 0.01578611 0.18769586     589379
+5            663169 0.04243146 0.08655621     947858
+
+
+

While this works, matching with “==” can misbehave, for example returning NA instead of FALSE when a column contains missing values. Using the %in% operator is a safer way of identifying and extracting only the rows whose Condition column contains “PPD”

+
+
Data |> filter(Condition %in% "PPD") |> head(5)
+
+
      bid timepoint Condition       Date infant_sex  ptype    root singletsFSC
+1 INF0052         0       PPD 2025-07-26       Male HEU-hi 2363512     2136616
+2 INF0100         0       PPD 2025-07-26       Male HEU-lo 2049112     1821676
+3 INF0100         4       PPD 2025-07-26       Male HEU-lo 1063496      946587
+4 INF0100         9       PPD 2025-07-26       Male HEU-lo  788368      714198
+5 INF0179         0       PPD 2025-07-26       Male     HU 1380336     1242311
+  singletsSSC singletsSSCB      CD45 NotMonocytes nonDebris lymphocytes
+1     1875394      1732620 0.5873838    0.8619837 0.8429685   0.6408044
+2     1717636      1597085 0.9063081    0.9251961 0.8771889   0.2174284
+3      796056       767297 0.9709891    0.9848719 0.9556049   0.7313503
+4      626387       600011 0.9822803    0.9842139 0.8123041   0.6223228
+5     1047081      1000877 0.9470275    0.9575685 0.9134438   0.6996502
+       live      Dump+     Dump-    Tcells        Vd2+      Vd2-     Va7.2+
+1 0.9009254 0.20743228 0.6934931 0.2835676 0.007408209 0.9925918 0.01507057
+2 0.8929673 0.06181426 0.8311531 0.6735798 0.007137230 0.9928628 0.01671801
+3 0.8782307 0.20727202 0.6709587 0.5989873 0.005254643 0.9947454 0.01609790
+4 0.9566639 0.23164587 0.7250180 0.6489405 0.011935922 0.9880641 0.01855298
+5 0.8856898 0.33186111 0.5538287 0.4441538 0.004382972 0.9956170 0.01297237
+     Va7.2-      CD4+      CD4-      CD8+       CD8- Tcells_count
+1 0.9775212 0.6340345 0.3434867 0.2744119 0.06907479       184930
+2 0.9761448 0.6145707 0.3615741 0.3312279 0.03034620       211987
+3 0.9786475 0.6559480 0.3226994 0.2912084 0.03149109       326378
+4 0.9695111 0.4306889 0.5388222 0.4908558 0.04796636       238021
+5 0.9826447 0.7499194 0.2327253 0.1850897 0.04763554       294549
+  lymphocytes_count  Monocytes     Debris CD45_count
+1            652155 0.13801632 0.15703150    1017713
+2            314717 0.07480391 0.12281107    1447451
+3            544883 0.01512811 0.04439511     745037
+4            366784 0.01578611 0.18769586     589379
+5            663169 0.04243146 0.08655621     947858
+
+
+

Similar to what we saw for select(), we can grab rows matching several values at once. We would just need to modify the second part of the expression. If we wanted to grab rows whose Condition column contained either PPD or SEB, we would provide that argument as a vector, placing both within c().

+
+
Data |> filter(Condition %in% c("PPD", "SEB")) |> head(5)
+
+
      bid timepoint Condition       Date infant_sex  ptype    root singletsFSC
+1 INF0052         0       PPD 2025-07-26       Male HEU-hi 2363512     2136616
+2 INF0100         0       PPD 2025-07-26       Male HEU-lo 2049112     1821676
+3 INF0100         4       PPD 2025-07-26       Male HEU-lo 1063496      946587
+4 INF0100         9       PPD 2025-07-26       Male HEU-lo  788368      714198
+5 INF0179         0       PPD 2025-07-26       Male     HU 1380336     1242311
+  singletsSSC singletsSSCB      CD45 NotMonocytes nonDebris lymphocytes
+1     1875394      1732620 0.5873838    0.8619837 0.8429685   0.6408044
+2     1717636      1597085 0.9063081    0.9251961 0.8771889   0.2174284
+3      796056       767297 0.9709891    0.9848719 0.9556049   0.7313503
+4      626387       600011 0.9822803    0.9842139 0.8123041   0.6223228
+5     1047081      1000877 0.9470275    0.9575685 0.9134438   0.6996502
+       live      Dump+     Dump-    Tcells        Vd2+      Vd2-     Va7.2+
+1 0.9009254 0.20743228 0.6934931 0.2835676 0.007408209 0.9925918 0.01507057
+2 0.8929673 0.06181426 0.8311531 0.6735798 0.007137230 0.9928628 0.01671801
+3 0.8782307 0.20727202 0.6709587 0.5989873 0.005254643 0.9947454 0.01609790
+4 0.9566639 0.23164587 0.7250180 0.6489405 0.011935922 0.9880641 0.01855298
+5 0.8856898 0.33186111 0.5538287 0.4441538 0.004382972 0.9956170 0.01297237
+     Va7.2-      CD4+      CD4-      CD8+       CD8- Tcells_count
+1 0.9775212 0.6340345 0.3434867 0.2744119 0.06907479       184930
+2 0.9761448 0.6145707 0.3615741 0.3312279 0.03034620       211987
+3 0.9786475 0.6559480 0.3226994 0.2912084 0.03149109       326378
+4 0.9695111 0.4306889 0.5388222 0.4908558 0.04796636       238021
+5 0.9826447 0.7499194 0.2327253 0.1850897 0.04763554       294549
+  lymphocytes_count  Monocytes     Debris CD45_count
+1            652155 0.13801632 0.15703150    1017713
+2            314717 0.07480391 0.12281107    1447451
+3            544883 0.01512811 0.04439511     745037
+4            366784 0.01578611 0.18769586     589379
+5            663169 0.04243146 0.08655621     947858
+
+
+

Alternatively, we could have set up the vector externally, and then provided it to filter()

+
+
TheseConditions <- c("PPD", "SEB")
+Data |> filter(Condition %in% TheseConditions) |> head(5)
+
+
      bid timepoint Condition       Date infant_sex  ptype    root singletsFSC
+1 INF0052         0       PPD 2025-07-26       Male HEU-hi 2363512     2136616
+2 INF0100         0       PPD 2025-07-26       Male HEU-lo 2049112     1821676
+3 INF0100         4       PPD 2025-07-26       Male HEU-lo 1063496      946587
+4 INF0100         9       PPD 2025-07-26       Male HEU-lo  788368      714198
+5 INF0179         0       PPD 2025-07-26       Male     HU 1380336     1242311
+  singletsSSC singletsSSCB      CD45 NotMonocytes nonDebris lymphocytes
+1     1875394      1732620 0.5873838    0.8619837 0.8429685   0.6408044
+2     1717636      1597085 0.9063081    0.9251961 0.8771889   0.2174284
+3      796056       767297 0.9709891    0.9848719 0.9556049   0.7313503
+4      626387       600011 0.9822803    0.9842139 0.8123041   0.6223228
+5     1047081      1000877 0.9470275    0.9575685 0.9134438   0.6996502
+       live      Dump+     Dump-    Tcells        Vd2+      Vd2-     Va7.2+
+1 0.9009254 0.20743228 0.6934931 0.2835676 0.007408209 0.9925918 0.01507057
+2 0.8929673 0.06181426 0.8311531 0.6735798 0.007137230 0.9928628 0.01671801
+3 0.8782307 0.20727202 0.6709587 0.5989873 0.005254643 0.9947454 0.01609790
+4 0.9566639 0.23164587 0.7250180 0.6489405 0.011935922 0.9880641 0.01855298
+5 0.8856898 0.33186111 0.5538287 0.4441538 0.004382972 0.9956170 0.01297237
+     Va7.2-      CD4+      CD4-      CD8+       CD8- Tcells_count
+1 0.9775212 0.6340345 0.3434867 0.2744119 0.06907479       184930
+2 0.9761448 0.6145707 0.3615741 0.3312279 0.03034620       211987
+3 0.9786475 0.6559480 0.3226994 0.2912084 0.03149109       326378
+4 0.9695111 0.4306889 0.5388222 0.4908558 0.04796636       238021
+5 0.9826447 0.7499194 0.2327253 0.1850897 0.04763554       294549
+  lymphocytes_count  Monocytes     Debris CD45_count
+1            652155 0.13801632 0.15703150    1017713
+2            314717 0.07480391 0.12281107    1447451
+3            544883 0.01512811 0.04439511     745037
+4            366784 0.01578611 0.18769586     589379
+5            663169 0.04243146 0.08655621     947858
+
+
+

While this works when we have a limited number of variant condition values, what if we had many more but only wanted to exclude one value? As we saw when learning about Conditionals, when we add a ! in front of a logical value, we get the opposite logical value returned:

+
+
IsThisASpectralInstrument <- TRUE
+
+!IsThisASpectralInstrument
+
+
[1] FALSE
+
+
+

In the context of the dplyr package, we can use ! within filter() to remove rows that contain a certain value:

+
+
Subset <- Data |> filter(!Condition %in% "SEB")
+Subset |> pull(Condition) |> unique()
+
+
[1] "Ctrl" "PPD" 
+
+
+
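We can also combine several conditions within a single filter() call; arguments separated by commas are joined with a logical “and”. A quick sketch, assuming the same Data object and column names:

```r
# Keep only PPD-treated samples from the 0 timepoint
Data |> filter(Condition %in% "PPD", timepoint %in% 0) |> head(3)
```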

Likewise, we can also use it with select() to exclude columns we don’t want to include:

+
+
Subset <- Data |> select(!timepoint)
+Subset[1:3,]
+
+
      bid Condition       Date infant_sex  ptype    root singletsFSC
+1 INF0052      Ctrl 2025-07-26       Male HEU-hi 2098368     1894070
+2 INF0100      Ctrl 2025-07-26       Male HEU-lo 2020184     1791890
+3 INF0100      Ctrl 2025-07-26       Male HEU-lo 1155040     1033320
+  singletsSSC singletsSSCB      CD45 NotMonocytes nonDebris lymphocytes
+1     1666179      1537396 0.5952943    0.8820349 0.8627649   0.6420138
+2     1697083      1579098 0.9106762    0.9052256 0.8602660   0.2145848
+3      875465       845446 0.9705765    0.9845400 0.9578793   0.7403110
+       live      Dump+     Dump-    Tcells        Vd2+      Vd2-     Va7.2+
+1 0.9020581 0.21090996 0.6911482 0.2804264 0.008120361 0.9918796 0.01448070
+2 0.8908981 0.06252775 0.8283703 0.6748298 0.007265620 0.9927344 0.01577499
+3 0.8757665 0.20023803 0.6755285 0.6119129 0.004651313 0.9953487 0.01579402
+     Va7.2-      CD4+      CD4-      CD8+       CD8- Tcells_count
+1 0.9773989 0.6341164 0.3432825 0.2734826 0.06979990       164771
+2 0.9769594 0.6119112 0.3650482 0.3357696 0.02927858       208241
+3 0.9795547 0.6639621 0.3155925 0.2862104 0.02938209       371723
+  lymphocytes_count  Monocytes     Debris CD45_count
+1            587573 0.11796509 0.13723513     915203
+2            308583 0.09477437 0.13973396    1438047
+3            607477 0.01545999 0.04212072     820570
+
+
+
+
+

mutate

+

As we can see, with just these handful of functions, we have the building blocks to rearrange and subset a larger data.frame into a format that we prefer. But what if we wanted to alter the content of a column, or add new columns to an existing data.frame? This is where the mutate() function can be used.

+

Let’s start by slimming down our current Data to a smaller, workable example, highlighting the functions and pipes we learned about today:

+
+
TidyData <- Data |> filter(Condition %in% "Ctrl") |> filter(timepoint %in% "0") |>
+     select(bid, timepoint, Condition, Date, Tcells_count, CD45_count) |>
+      rename(specimen=bid, condition=Condition) |> relocate(Date, .after=specimen)
+
+
+
TidyData
+
+
   specimen       Date timepoint condition Tcells_count CD45_count
+1   INF0052 2025-07-26         0      Ctrl       164771     915203
+2   INF0100 2025-07-26         0      Ctrl       208241    1438047
+3   INF0179 2025-07-26         0      Ctrl       291777     940733
+4   INF0134 2025-07-29         0      Ctrl       127866     689676
+5   INF0148 2025-07-29         0      Ctrl       234335    1013985
+6   INF0191 2025-07-29         0      Ctrl        55780     715443
+7   INF0124 2025-07-31         0      Ctrl        70297     687720
+8   INF0149 2025-07-31         0      Ctrl       107900     857845
+9   INF0169 2025-07-31         0      Ctrl        75540     854594
+10  INF0019 2025-08-05         0      Ctrl       208055     873622
+11  INF0032 2025-08-05         0      Ctrl       361034     753064
+12  INF0180 2025-08-05         0      Ctrl       284958    1049663
+13  INF0155 2025-08-07         0      Ctrl       281626    1065048
+14  INF0158 2025-08-07         0      Ctrl       280913    1249338
+15  INF0159 2025-08-07         0      Ctrl       452551    1190219
+16  INF0013 2025-08-22         0      Ctrl       182751     836573
+17  INF0023 2025-08-22         0      Ctrl       218435     968035
+18  INF0030 2025-08-22         0      Ctrl        85521     732321
+19  INF0166 2025-08-28         0      Ctrl       225650     739495
+20  INF0199 2025-08-28         0      Ctrl       169736    1112176
+21  INF0207 2025-08-28         0      Ctrl        39055     905365
+22  INF0614 2025-08-30         0      Ctrl       224396    1569007
+23  INF0622 2025-08-30         0      Ctrl       161924     939307
+
+
+

The mutate() function can be used to modify existing columns, as well as to create new ones. For example, let’s derive the proportion of T cells from the overall CD45 gate. To do so, within the parentheses, we would specify a new column name, and then divide the original columns:

+
+
TidyData <- TidyData |> mutate(Tcells_ProportionCD45 = Tcells_count / CD45_count)
+TidyData
+
+
   specimen       Date timepoint condition Tcells_count CD45_count
+1   INF0052 2025-07-26         0      Ctrl       164771     915203
+2   INF0100 2025-07-26         0      Ctrl       208241    1438047
+3   INF0179 2025-07-26         0      Ctrl       291777     940733
+4   INF0134 2025-07-29         0      Ctrl       127866     689676
+5   INF0148 2025-07-29         0      Ctrl       234335    1013985
+6   INF0191 2025-07-29         0      Ctrl        55780     715443
+7   INF0124 2025-07-31         0      Ctrl        70297     687720
+8   INF0149 2025-07-31         0      Ctrl       107900     857845
+9   INF0169 2025-07-31         0      Ctrl        75540     854594
+10  INF0019 2025-08-05         0      Ctrl       208055     873622
+11  INF0032 2025-08-05         0      Ctrl       361034     753064
+12  INF0180 2025-08-05         0      Ctrl       284958    1049663
+13  INF0155 2025-08-07         0      Ctrl       281626    1065048
+14  INF0158 2025-08-07         0      Ctrl       280913    1249338
+15  INF0159 2025-08-07         0      Ctrl       452551    1190219
+16  INF0013 2025-08-22         0      Ctrl       182751     836573
+17  INF0023 2025-08-22         0      Ctrl       218435     968035
+18  INF0030 2025-08-22         0      Ctrl        85521     732321
+19  INF0166 2025-08-28         0      Ctrl       225650     739495
+20  INF0199 2025-08-28         0      Ctrl       169736    1112176
+21  INF0207 2025-08-28         0      Ctrl        39055     905365
+22  INF0614 2025-08-30         0      Ctrl       224396    1569007
+23  INF0622 2025-08-30         0      Ctrl       161924     939307
+   Tcells_ProportionCD45
+1             0.18003765
+2             0.14480820
+3             0.31015921
+4             0.18540010
+5             0.23110302
+6             0.07796568
+7             0.10221747
+8             0.12578030
+9             0.08839285
+10            0.23815220
+11            0.47942008
+12            0.27147570
+13            0.26442564
+14            0.22484948
+15            0.38022498
+16            0.21845195
+17            0.22564783
+18            0.11678076
+19            0.30514067
+20            0.15261613
+21            0.04313730
+22            0.14301785
+23            0.17238666
+
+
+

We can see that many decimal places are being returned. Let’s round this new column to 2 decimal places by applying the round() function

+
+
TidyData <- TidyData |> mutate(TcellsRounded = round(Tcells_ProportionCD45, 2))
+TidyData 
+
+
   specimen       Date timepoint condition Tcells_count CD45_count
+1   INF0052 2025-07-26         0      Ctrl       164771     915203
+2   INF0100 2025-07-26         0      Ctrl       208241    1438047
+3   INF0179 2025-07-26         0      Ctrl       291777     940733
+4   INF0134 2025-07-29         0      Ctrl       127866     689676
+5   INF0148 2025-07-29         0      Ctrl       234335    1013985
+6   INF0191 2025-07-29         0      Ctrl        55780     715443
+7   INF0124 2025-07-31         0      Ctrl        70297     687720
+8   INF0149 2025-07-31         0      Ctrl       107900     857845
+9   INF0169 2025-07-31         0      Ctrl        75540     854594
+10  INF0019 2025-08-05         0      Ctrl       208055     873622
+11  INF0032 2025-08-05         0      Ctrl       361034     753064
+12  INF0180 2025-08-05         0      Ctrl       284958    1049663
+13  INF0155 2025-08-07         0      Ctrl       281626    1065048
+14  INF0158 2025-08-07         0      Ctrl       280913    1249338
+15  INF0159 2025-08-07         0      Ctrl       452551    1190219
+16  INF0013 2025-08-22         0      Ctrl       182751     836573
+17  INF0023 2025-08-22         0      Ctrl       218435     968035
+18  INF0030 2025-08-22         0      Ctrl        85521     732321
+19  INF0166 2025-08-28         0      Ctrl       225650     739495
+20  INF0199 2025-08-28         0      Ctrl       169736    1112176
+21  INF0207 2025-08-28         0      Ctrl        39055     905365
+22  INF0614 2025-08-30         0      Ctrl       224396    1569007
+23  INF0622 2025-08-30         0      Ctrl       161924     939307
+   Tcells_ProportionCD45 TcellsRounded
+1             0.18003765          0.18
+2             0.14480820          0.14
+3             0.31015921          0.31
+4             0.18540010          0.19
+5             0.23110302          0.23
+6             0.07796568          0.08
+7             0.10221747          0.10
+8             0.12578030          0.13
+9             0.08839285          0.09
+10            0.23815220          0.24
+11            0.47942008          0.48
+12            0.27147570          0.27
+13            0.26442564          0.26
+14            0.22484948          0.22
+15            0.38022498          0.38
+16            0.21845195          0.22
+17            0.22564783          0.23
+18            0.11678076          0.12
+19            0.30514067          0.31
+20            0.15261613          0.15
+21            0.04313730          0.04
+22            0.14301785          0.14
+23            0.17238666          0.17
+
+
+
+
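Multiple new columns can also be created within a single mutate() call, separated by commas. As a small sketch (the Tcells_pct name is just for illustration, using the same TidyData object):

```r
# Derive a percentage column alongside the existing proportion column
TidyData |> mutate(Tcells_pct = round(Tcells_ProportionCD45 * 100, 1)) |> head(3)
```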
+

arrange

+

And while we are here, let’s rearrange the rows so that they are in descending order based on the T cell proportion. We can do this using the arrange() and desc() functions from dplyr:

+
+
TidyData <- TidyData |> arrange(desc(TcellsRounded))
+
+
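arrange() sorts in ascending order by default, and can take several columns at once, sorting by each in turn. A quick sketch using the same TidyData object:

```r
# Sort by acquisition date first, then by T cell proportion (highest first) within each date
TidyData |> arrange(Date, desc(TcellsRounded)) |> head(3)
```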

And let’s go ahead and filter() to identify the specimens that had more than 30% T cells as part of the overall CD45 gate (for context, these samples were cord blood):

+
+
TidyData |> filter(TcellsRounded > 0.3)
+
+
  specimen       Date timepoint condition Tcells_count CD45_count
+1  INF0032 2025-08-05         0      Ctrl       361034     753064
+2  INF0159 2025-08-07         0      Ctrl       452551    1190219
+3  INF0179 2025-07-26         0      Ctrl       291777     940733
+4  INF0166 2025-08-28         0      Ctrl       225650     739495
+  Tcells_ProportionCD45 TcellsRounded
+1             0.4794201          0.48
+2             0.3802250          0.38
+3             0.3101592          0.31
+4             0.3051407          0.31
+
+
+

If we had instead wanted to just retrieve the specimen IDs, we could add pull() after another pipe.

+
+
TidyData |> filter(TcellsRounded > 0.3) |> pull(specimen)
+
+
[1] "INF0032" "INF0159" "INF0179" "INF0166"
+
+
+

And finally, since I may want to send the data to a supervisor, let’s go ahead and export this “tidied” version of our data.frame to its own .csv file. Working within our project folder, this would look like this:

+
+
NewName <- paste0("MyNewDataset", ".csv")
+StorageLocation <- file.path("data", NewName)
+StorageLocation
+
+
[1] "data/MyNewDataset.csv"
+
+
+
+
write.csv(TidyData, StorageLocation, row.names=FALSE)
+
+
+
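As an aside, the tidyverse has its own export function in the readr package. A sketch, assuming readr is installed; its write_csv() never writes row names, so no row.names argument is needed:

```r
library(readr)
write_csv(TidyData, StorageLocation)
```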
+
+

Take Away

+

In this session, we explored the main functions within the dplyr package used in the context of “tidying” data, including selecting columns, filtering rows, and additional functions used to create or modify existing values. We will continue to build on these throughout the course, introducing a few additional tidyverse functions we don’t have time to cover today as appropriate. As we saw, knowing how to use these functions allows us to extensively and quickly modify our existing exported data files.

+

One important goal as we move through the course (in terms of both reproducibility and replicability) is to modify files only within R, rather than going back to the original .csv or Excel file and hand-modifying individual values; hand-editing is neither reproducible nor replicable. Once set up, an R script can quickly re-run these same cleanup steps and leave a documented record of how the data has changed (even more so if you are maintaining version control). If you do want to save the changes you have made, it is best to write them out to a new .csv file to work with later.

+

Next week, we will be using these skills when setting up metadata for our .fcs files. We will additionally take a look at the main source of format controversy within Bioconductor flow cytometry packages, i.e. whether to use a flowFrame or a cytoframe. Exciting stuff, and important information to know, as the functions needed to import them are slightly different. We will also look at how to import existing manually gated .wsp files from FlowJo/Diva/Floreada via the CytoML package.

+

+
+
+

Additional Resources

+

Data Organization in Spreadsheets for Ecologists This Carpentry self-study course was one of my “Aha” moments early on when learning R, and reinforced the need to keep my own Excel/CSV files in a tidy manner. It is worth the time going through in its entirety (even for non-Ecologists).

+

Data Analysis and Visualization in R for Ecologists Continuation of the above, and a good way to continue building on the tidyverse functions we learned today.

+

Simplistics: Introduction to Tidyverse in R The YouTube channel is mainly focused on statistics for Psych classes, but at the end of the day, we are all working with similar objects with rows and columns, just the values contained within differ.

+

Riffomonas Project Playlist: Data Manipulation with R’s Tidyverse Riffomonas has a playlist that delves into both the tidyverse functions we used today, as well as other ones we will encounter later on in the course.

+
+
+

Take-home Problems

+
+
+
+ +
+
+Problem 1 +
+
+
+

Taking a dataset (either todays or one of your own), work through the column-operating functions (select(), rename(), and relocate()). Once this is done, filter() by conditions from two separate columns, arrange in an order that makes sense, and export this “tidy” data as a .csv file.

+
+
+
+
+
+ +
+
+Problem 2 +
+
+
+

We used the mutate() function to create new columns, but it can also be used to modify existing ones. Various numeric columns are showing way too many decimal places. As was shown, use round() to round all these proportion columns, but use mutate() to overwrite the existing columns. Export this as its own .csv file.

+
+
+
+
+
+ +
+
+Problem 3 +
+
+
+

We can also use mutate() to combine columns. For our dataset, “bid”, “timepoint”, and “Condition” are separate columns that originally were all part of the filename for the individual .fcs file. Try to figure out a way to combine them back together using paste0(), and save the new column as “filename”. Once this is done, pull() the contents of this column, and try to determine whether there were any duplicates (think of innovative ways of using !, length() and unique()).

+
+
+
+

AGPL-3.0 CC BY-SA 4.0

+
+ + +
+ +
+ + +
+ + + + + + \ No newline at end of file diff --git a/docs/course/04_IntroToTidyverse/slides.html b/docs/course/04_IntroToTidyverse/slides.html new file mode 100644 index 0000000..931cec0 --- /dev/null +++ b/docs/course/04_IntroToTidyverse/slides.html @@ -0,0 +1,3902 @@ + + + + + + + + + + + + + + Cytometry in R – 04 - Introduction to Tidyverse + + + + + + + + + + + + + + + + + +
+
+ +
+

04 - Introduction to Tidyverse

+ +
+
+
+David Rach +
+
+
+ +

2026-02-24

+
+
+ +

+
+

AGPL-3.0 CC BY-SA 4.0

+
+
+
+ +
+
+
+

Background

+
+
+
+
+
+
+ +
+

.

+
+
+

Within our daily workflows as cytometrists, after acquiring data on our respective instruments, we begin analyzing the resulting datasets. After implementing various workflows, we then export data for downstream statistical analysis.

+
+
+
+
+
+
+
+
+
+
+
+ +
+

.

+
+
+

When I first started my Ph.D. program, a substantial amount of my day was spent renaming columns of the exported data so that they would fit nicely in a Microsoft Excel sheet, setting up formulas to combine the proportions of positive cells across positive quadrants, and so on. Once this was done, additional hours would go by as I copied and pasted the contents of those columns over to a GraphPad Prism worksheet for statistical analysis.

+
+
+
+
+
+
+
+ +
+
+
+
+
+
+ +
+

.

+
+
+

This, of course, was the ideal scenario. Oftentimes, the data was less organized, and instead of time spent copying and pasting over columns, it would first be spent rearranging values from individual cells in the worksheet that were separated by spaces, all the while trying to remember what the various color codes and bold fonts stood for.

+
+
+
+
+
+
+
+
+
+
+
+ +
+

.

+
+
+

Today, we will explore what makes data “tidy”, and how to use the toolsets implemented in the various tidyverse R packages. At its simplest, if we think of and organize all our data in terms of rows and columns, we need fewer tools (i.e. functions) to reshape it and extract the useful information that we are interested in. Additionally, this approach aligns more closely with how computers work, allowing us to carry out tasks in mere seconds that would otherwise have taken hours.

+
+
+
+
+
+
+
+ +
+
+
+
+
+
+ +
+

.

+
+
+

The dataset we will be using today is a manually-gated spectral flow cytometry dataset (similar to ones we would see exported by commercial software), and has been intentionally left slightly messy. You could however just as easily use a “matrix” or “data.frame” object exported from inside an .fcs file, or swap in your own dataset. You would just need to make sure to switch out the input data by providing an alternate file path, etc.

+
+
+
+
+
+
+
+ +
+
+
+

Walk Through

+
+
+
+
+
+ +
+

Housekeeping

+
+
+

As we do every week, on GitHub, sync your forked version of the CytometryInR course to bring in the most recent updates. Then within Positron, pull in those changes to your local computer.

+

After creating a “Week04” project folder, copy over the contents of “course/04_IntroToTidyverse” to that folder. This will hopefully prevent any merge issues when you attempt to bring in new data to your local Cytometry in R folder next week. Once you have set up your project folder, please remember to stage, commit and push your changes to “Week04” to GitHub so that they are backed up remotely.

+

If you are having issues syncing due to the Take-Home Problem merge conflict, see this walkthrough

+
+
+
+
+
+
+

read.csv

+
+
+
+
+
+
+ +
+

.

+
+
+

We will start by first loading in our copied-over dataset (Dataset.csv) from its location in the project folder. If you are following the organization scheme we have been using throughout the course, your file path will look something like this:

+
+
+
+
+
+
+ +
+
+
+
thefilepath <- file.path("data", "Dataset.csv")
+
+thefilepath
+
+
[1] "data/Dataset.csv"
+
+
+
+
+
+ +
+
+
+
+
+
+ +
+

Reminder

+
+
+

We encourage using the file.path() function to build our file paths, as this keeps our code reproducible and replicable when a project folder is copied to other people’s computers, whose operating systems may differ in whether they use forward or backward slashes to separate folders.

+
+
+
+
+
+
+
+ +
+
+
+
+
+
+ +
+

.

+
+
+

Above, we directly specified the name (Dataset) and filetype (.csv) of the file we wanted in the last argument of file.path() (“Dataset.csv”). Because we provided the full file path, we can skip the list.files() step we used last week. While this approach can be faster, if we accidentally mistype the file name we will end up with an error at the next step, since no file exists with the mistyped name.
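As a light safeguard (a sketch, not part of the original walkthrough), the base R file.exists() function lets us confirm the path points at a real file before read.csv() ever runs:

```r
# Check the path before reading; FALSE usually means a typo in the
# file name or the wrong working directory.
thefilepath <- file.path("data", "Dataset.csv")
file.exists(thefilepath)
```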

+
+
+
+
+
+
+
+ +
+
+
+
+
+
+ +
+

.

+
+
+

Since our dataset is stored as a .csv file, we will be using the read.csv() function from the utils package (included in our base R software installation) to read it into R. We will also use the colnames() function from last week to get a read-out of the column names.

+
+
+
+
+
+
+
+
Data <- read.csv(file=thefilepath, check.names=FALSE)
+colnames(Data)
+
+
 [1] "bid"               "timepoint"         "Condition"        
+ [4] "Date"              "infant_sex"        "ptype"            
+ [7] "root"              "singletsFSC"       "singletsSSC"      
+[10] "singletsSSCB"      "CD45"              "NotMonocytes"     
+[13] "nonDebris"         "lymphocytes"       "live"             
+[16] "Dump+"             "Dump-"             "Tcells"           
+[19] "Vd2+"              "Vd2-"              "Va7.2+"           
+[22] "Va7.2-"            "CD4+"              "CD4-"             
+[25] "CD8+"              "CD8-"              "Tcells_count"     
+[28] "lymphocytes_count" "Monocytes"         "Debris"           
+[31] "CD45_count"       
+
+
+
+
+
+ +
+
+
+
+
+
+ +
+

.

+
+
+

As we look at the line of code, we now have enough context to decipher that the “file” argument is where we provide a file path to an individual file, but what does the “check.names” argument do?

+

Let’s see what happens to the column names when we set the “check.names” argument to TRUE:

+
+
+
+
+
+
+
+
Data_Alternative <- read.csv(thefilepath, check.names=TRUE)
+colnames(Data_Alternative)
+
+
 [1] "bid"               "timepoint"         "Condition"        
+ [4] "Date"              "infant_sex"        "ptype"            
+ [7] "root"              "singletsFSC"       "singletsSSC"      
+[10] "singletsSSCB"      "CD45"              "NotMonocytes"     
+[13] "nonDebris"         "lymphocytes"       "live"             
+[16] "Dump."             "Dump..1"           "Tcells"           
+[19] "Vd2."              "Vd2..1"            "Va7.2."           
+[22] "Va7.2..1"          "CD4."              "CD4..1"           
+[25] "CD8."              "CD8..1"            "Tcells_count"     
+[28] "lymphocytes_count" "Monocytes"         "Debris"           
+[31] "CD45_count"       
+
+
+
+
+
+ +
+
+
+
+
+
+ +
+

.

+
+
+

As we can see, any column name that contained a special character or a space was automatically converted to R-approved syntax. However, this resulted in the loss of both “+” and “-”, leaving us unable to determine whether we are looking at cells within or outside a particular gate.
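Under the hood (a sketch of base R’s behavior, not a line from the walkthrough), read.csv() with check.names = TRUE runs the names through make.names(), which replaces disallowed characters with a dot and then de-duplicates:

```r
# "+" and "-" are both replaced by ".", so the two names collide and
# get de-duplicated with a numeric suffix, exactly as we saw above.
make.names(c("Dump+", "Dump-"), unique = TRUE)
#> [1] "Dump."   "Dump..1"
```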

+
+
+
+
+
+
+

+
+
+
+ +
+
+
+
+
+
+ +
+

.

+
+
+

Because of this, it is often better to rename columns individually after import, which we will learn how to do later today.

+
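As a hedged preview of the renaming we cover later today, dplyr’s rename() can restore a mangled name or two without touching the rest (the toy data.frame here is illustrative):

```r
library(dplyr)

# Two columns as read.csv(check.names = TRUE) would mangle them
toy <- data.frame(Dump. = c(0.21, 0.06), Dump..1 = c(0.69, 0.83))

# rename(new_name = old_name); backticks allow the "+" and "-"
toy |> rename(`Dump+` = Dump., `Dump-` = Dump..1) |> colnames()
#> [1] "Dump+" "Dump-"
```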

Following up on what we practiced last week, let’s use the head() function to visualize the first few rows of data.

+
+
+
+
+
+
+
+
head(Data, 3)
+
+
      bid timepoint Condition       Date infant_sex  ptype    root singletsFSC
+1 INF0052         0      Ctrl 2025-07-26       Male HEU-hi 2098368     1894070
+2 INF0100         0      Ctrl 2025-07-26       Male HEU-lo 2020184     1791890
+3 INF0100         4      Ctrl 2025-07-26       Male HEU-lo 1155040     1033320
+  singletsSSC singletsSSCB      CD45 NotMonocytes nonDebris lymphocytes
+1     1666179      1537396 0.5952943    0.8820349 0.8627649   0.6420138
+2     1697083      1579098 0.9106762    0.9052256 0.8602660   0.2145848
+3      875465       845446 0.9705765    0.9845400 0.9578793   0.7403110
+       live      Dump+     Dump-    Tcells        Vd2+      Vd2-     Va7.2+
+1 0.9020581 0.21090996 0.6911482 0.2804264 0.008120361 0.9918796 0.01448070
+2 0.8908981 0.06252775 0.8283703 0.6748298 0.007265620 0.9927344 0.01577499
+3 0.8757665 0.20023803 0.6755285 0.6119129 0.004651313 0.9953487 0.01579402
+     Va7.2-      CD4+      CD4-      CD8+       CD8- Tcells_count
+1 0.9773989 0.6341164 0.3432825 0.2734826 0.06979990       164771
+2 0.9769594 0.6119112 0.3650482 0.3357696 0.02927858       208241
+3 0.9795547 0.6639621 0.3155925 0.2862104 0.02938209       371723
+  lymphocytes_count  Monocytes     Debris CD45_count
+1            587573 0.11796509 0.13723513     915203
+2            308583 0.09477437 0.13973396    1438047
+3            607477 0.01545999 0.04212072     820570
+
+
+
+
+
+ +
+
+
+
+
+
+ +
+

.

+
+
+

When working in Positron, we could have alternatively clicked on the little grid icon next to our created variable “Data” in the right secondary sidebar, which would have opened the data in our Editor window. From this same window, we can see it is stored as a “data.frame” object type.

+
+
+
+
+
+
+
+ + +
+
+ +
+
+
+
+
+
+ +
+

.

+
+
+

We could also open that same window using the View() function:

+
+
+
+
+
+
+
+
View(Data)
+
+
+
+
+ +
+
+
+
+
+
+ +
+

.

+
+
+

Wrapping up our brief recap of last week’s functions, we can check an object’s type using both the class() and str() functions.

+
+
+
+
+
+
+
+
class(Data)
+
+
[1] "data.frame"
+
+
+
+
+
+
str(Data)
+
+
'data.frame':   196 obs. of  31 variables:
+ $ bid              : chr  "INF0052" "INF0100" "INF0100" "INF0100" ...
+ $ timepoint        : int  0 0 4 9 0 4 9 4 9 0 ...
+ $ Condition        : chr  "Ctrl" "Ctrl" "Ctrl" "Ctrl" ...
+ $ Date             : chr  "2025-07-26" "2025-07-26" "2025-07-26" "2025-07-26" ...
+ $ infant_sex       : chr  "Male" "Male" "Male" "Male" ...
+ $ ptype            : chr  "HEU-hi" "HEU-lo" "HEU-lo" "HEU-lo" ...
+ $ root             : int  2098368 2020184 1155040 358624 1362216 1044808 1434840 972056 1521928 2363512 ...
+ $ singletsFSC      : int  1894070 1791890 1033320 328624 1206309 917398 1265022 875707 1359574 2136616 ...
+ $ singletsSSC      : int  1666179 1697083 875465 289327 1032946 735579 988445 767323 1175755 1875394 ...
+ $ singletsSSCB     : int  1537396 1579098 845446 276289 982736 685592 940454 718000 1097478 1732620 ...
+ $ CD45             : num  0.595 0.911 0.971 0.982 0.957 ...
+ $ NotMonocytes     : num  0.882 0.905 0.985 0.986 0.956 ...
+ $ nonDebris        : num  0.863 0.86 0.958 0.941 0.841 ...
+ $ lymphocytes      : num  0.642 0.215 0.74 0.651 0.705 ...
+ $ live             : num  0.902 0.891 0.876 0.915 0.895 ...
+ $ Dump+            : num  0.2109 0.0625 0.2002 0.2147 0.3383 ...
+ $ Dump-            : num  0.691 0.828 0.676 0.701 0.557 ...
+ $ Tcells           : num  0.28 0.675 0.612 0.631 0.44 ...
+ $ Vd2+             : num  0.00812 0.00727 0.00465 0.01135 0.00475 ...
+ $ Vd2-             : num  0.992 0.993 0.995 0.989 0.995 ...
+ $ Va7.2+           : num  0.0145 0.0158 0.0158 0.017 0.0133 ...
+ $ Va7.2-           : num  0.977 0.977 0.98 0.972 0.982 ...
+ $ CD4+             : num  0.634 0.612 0.664 0.438 0.739 ...
+ $ CD4-             : num  0.343 0.365 0.316 0.534 0.243 ...
+ $ CD8+             : num  0.273 0.336 0.286 0.486 0.195 ...
+ $ CD8-             : num  0.0698 0.0293 0.0294 0.0476 0.0476 ...
+ $ Tcells_count     : int  164771 208241 371723 111552 291777 271870 487937 220634 415867 184930 ...
+ $ lymphocytes_count: int  587573 308583 607477 176662 663667 510730 726238 451047 710964 652155 ...
+ $ Monocytes        : num  0.118 0.0948 0.0155 0.0145 0.0444 ...
+ $ Debris           : num  0.1372 0.1397 0.0421 0.0587 0.1592 ...
+ $ CD45_count       : int  915203 1438047 820570 271304 940733 675857 921660 701657 1066884 1017713 ...
+
+
+
+
+
+

data.frame

+
+
+
+
+
+
+ +
+

.

+
+
+

Or, alternatively, using the new-to-us glimpse() function:

+
+
+
+
+
+
+
+
glimpse(Data)
+
+
Error in `glimpse()`:
+! could not find function "glimpse"
+
+
+
+
+
+ +
+
+
+
+
+ +
+

Checkpoint 1

+
+
+

This, however, returns an error. Any idea why this might be occurring?

+
+
+
+
+
+
+
+Code +
# We haven't attached/loaded the package that contains the glimpse() function
+
+
+
+
+
+ +
+
+
+
+
+ +
+

Checkpoint 2

+
+
+

How would we locate the package that a not-yet-loaded function lives in?

+
+
+
+
+
+
+
+Code +
# We can use a double ? to search all installed packages for a function,
+# regardless of whether the package is attached to the environment or not
+
+??glimpse
+
+
+
+
+
+ + +
+
+
+
+
+
+ +
+

.

+
+
+

From the list of search matches (in the right secondary sidebar), it looks likely that the glimpse() function in the dplyr package is the one we were looking for. This is one of the main tidyverse packages we will be using throughout the course. Let’s attach it to our environment via a library() call first and try running glimpse() again.

+
+
+
+
+
+
+
+
library(dplyr)
+glimpse(Data)
+
+
Rows: 196
+Columns: 31
+$ bid               <chr> "INF0052", "INF0100", "INF0100", "INF0100", "INF0179…
+$ timepoint         <int> 0, 0, 4, 9, 0, 4, 9, 4, 9, 0, 0, 4, 9, 0, 4, 9, 4, 9…
+$ Condition         <chr> "Ctrl", "Ctrl", "Ctrl", "Ctrl", "Ctrl", "Ctrl", "Ctr…
+$ Date              <chr> "2025-07-26", "2025-07-26", "2025-07-26", "2025-07-2…
+$ infant_sex        <chr> "Male", "Male", "Male", "Male", "Male", "Male", "Mal…
+$ ptype             <chr> "HEU-hi", "HEU-lo", "HEU-lo", "HEU-lo", "HU", "HU", …
+$ root              <int> 2098368, 2020184, 1155040, 358624, 1362216, 1044808,…
+$ singletsFSC       <int> 1894070, 1791890, 1033320, 328624, 1206309, 917398, …
+$ singletsSSC       <int> 1666179, 1697083, 875465, 289327, 1032946, 735579, 9…
+$ singletsSSCB      <int> 1537396, 1579098, 845446, 276289, 982736, 685592, 94…
+$ CD45              <dbl> 0.5952943, 0.9106762, 0.9705765, 0.9819573, 0.957259…
+$ NotMonocytes      <dbl> 0.8820349, 0.9052256, 0.9845400, 0.9855070, 0.955627…
+$ nonDebris         <dbl> 0.8627649, 0.8602660, 0.9578793, 0.9412615, 0.840783…
+$ lymphocytes       <dbl> 0.6420138, 0.2145848, 0.7403110, 0.6511588, 0.705478…
+$ live              <dbl> 0.9020581, 0.8908981, 0.8757665, 0.9153242, 0.895214…
+$ `Dump+`           <dbl> 0.21090996, 0.06252775, 0.20023803, 0.21469246, 0.33…
+$ `Dump-`           <dbl> 0.6911482, 0.8283703, 0.6755285, 0.7006317, 0.556895…
+$ Tcells            <dbl> 0.2804264, 0.6748298, 0.6119129, 0.6314431, 0.439643…
+$ `Vd2+`            <dbl> 0.008120361, 0.007265620, 0.004651313, 0.011348967, …
+$ `Vd2-`            <dbl> 0.9918796, 0.9927344, 0.9953487, 0.9886510, 0.995246…
+$ `Va7.2+`          <dbl> 0.014480704, 0.015774991, 0.015794019, 0.017023451, …
+$ `Va7.2-`          <dbl> 0.9773989, 0.9769594, 0.9795547, 0.9716276, 0.981924…
+$ `CD4+`            <dbl> 0.6341164, 0.6119112, 0.6639621, 0.4378944, 0.739256…
+$ `CD4-`            <dbl> 0.3432825, 0.3650482, 0.3155925, 0.5337331, 0.242668…
+$ `CD8+`            <dbl> 0.2734826, 0.3357696, 0.2862104, 0.4861231, 0.195063…
+$ `CD8-`            <dbl> 0.06979990, 0.02927858, 0.02938209, 0.04761008, 0.04…
+$ Tcells_count      <int> 164771, 208241, 371723, 111552, 291777, 271870, 4879…
+$ lymphocytes_count <int> 587573, 308583, 607477, 176662, 663667, 510730, 7262…
+$ Monocytes         <dbl> 0.11796509, 0.09477437, 0.01545999, 0.01449297, 0.04…
+$ Debris            <dbl> 0.13723513, 0.13973396, 0.04212072, 0.05873854, 0.15…
+$ CD45_count        <int> 915203, 1438047, 820570, 271304, 940733, 675857, 921…
+
+
+
+
+
+ +
+
+
+
+
+
+ +
+

.

+
+
+

We notice that while similar to the str() output, glimpse() handles spacing a little differently, and includes the dimensions at the top. However, we can also retrieve the dimensions directly using the dim() function, which maintains base R’s row-then-column convention (ex. [196, 31]).

+
+
+
+
+
+
+
+
dim(Data)
+
+
[1] 196  31
+
+
+
+
+
+

Column value type

+
+
+
+
+
+
+ +
+

.

+
+
+

As we saw last week, functions often need values that match a certain type (the paintbrush needing paint analogy). As we inspect the columns of Data, we notice some of the columns contain character (ie. “chr”) values. Others contain numeric values, which are subtyped as either double (ie. “dbl”) or integer (ie. “int”). At first glance, we do not appear to have any logical (ie. TRUE or FALSE) columns in this dataset.
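Although this dataset has no logical columns, one is easy to derive with a comparison. A toy sketch (the 0.5 threshold and the flag name are arbitrary, illustrative choices):

```r
# A comparison yields TRUE/FALSE values: the logical type in action
toy <- data.frame(Tcells = c(0.28, 0.67, 0.61))
toy$Tcell_majority <- toy$Tcells > 0.5
toy$Tcell_majority
#> [1] FALSE  TRUE  TRUE
```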

+
+
+
+
+
+
+
+ + +
+
+ +
+
+
+
+
+
+ +
+

.

+
+
+

If we want to verify the type of values contained within a data.frame column, we can employ several similarly-named functions (is.character(), is.numeric() or is.logical()) to check:

+
+
+
+
+
+
+
+
# colnames(Data)  # To recheck the column names
+
+is.character(Data$bid)
+
+
[1] TRUE
+
+
+
+
+
+
is.numeric(Data$bid)
+
+
[1] FALSE
+
+
+
+
+
+
# colnames(Data)  # To recheck the column names
+
+is.character(Data$Tcells_count)
+
+
[1] FALSE
+
+
+
+
+
+ +
+
+
+
+
+
+ +
+

.

+
+
+

For numeric columns, beyond checking with is.numeric(), we can also be subtype-specific using either is.integer() or is.double().

+
+
+
+
+
+
+
+
# colnames(Data)  # To recheck the column names
+
+is.numeric(Data$Tcells_count)
+
+
[1] TRUE
+
+
is.integer(Data$Tcells_count)
+
+
[1] TRUE
+
+
is.double(Data$Tcells_count)
+
+
[1] FALSE
+
+
+
+
+
+ +
+
+
+
+
+ +
+

Reminder

+
+
+

As we observed last week with keywords, column names that contain special characters (like + or -) or spaces need to be surrounded with backticks in order for the function to be able to run.

+
+
+
+
+
+
+
# colnames(Data)  # To recheck the column names
+is.numeric(Data$CD8-)
+
+
Error in parse(text = input): <text>:2:21: unexpected ')'
+1: # colnames(Data)  # To recheck the column names
+2: is.numeric(Data$CD8-)
+                       ^
+
+
+
+
+
+
# colnames(Data)  # To recheck the column names
+is.numeric(Data$`CD8-`)
+
+
[1] TRUE
+
+
+
+
+
+

select (Columns)

+
+
+
+
+
+
+ +
+

.

+
+
+

Now that we have read in our data, and have a general picture of its structure and contents, let’s start learning the main dplyr functions we will be using throughout the course. To do this, let’s go ahead and attach dplyr to our local environment via a library() call (if you ran the glimpse() example above, it is already attached).

+
+
+
+
+
+
+
+
library(dplyr)
+
+
+
+
+ +
+
+
+
+
+
+ +
+

.

+
+
+

We will start with the select() function. It is used to “select” a column from a data.frame type object. In the simplest usage, we provide the name of our data.frame variable/object as the first argument after the opening parenthesis. This is then followed by the name of the column we want to select as the second argument (let’s place quotation marks around the column name for now):

+
+
+
+
+
+
+
+
DateColumn <- select(Data, "Date")
+DateColumn
+
+
          Date
+1   2025-07-26
+2   2025-07-26
+3   2025-07-26
+4   2025-07-26
+5   2025-07-26
+6   2025-07-26
+7   2025-07-26
+8   2025-07-26
+9   2025-07-26
+10  2025-07-26
+11  2025-07-26
+12  2025-07-26
+13  2025-07-26
+14  2025-07-26
+15  2025-07-26
+16  2025-07-26
+17  2025-07-26
+18  2025-07-26
+19  2025-07-26
+20  2025-07-26
+21  2025-07-26
+22  2025-07-26
+23  2025-07-26
+24  2025-07-26
+25  2025-07-26
+26  2025-07-26
+27  2025-07-29
+28  2025-07-29
+29  2025-07-29
+30  2025-07-29
+31  2025-07-29
+32  2025-07-29
+33  2025-07-29
+34  2025-07-29
+35  2025-07-29
+36  2025-07-29
+37  2025-07-29
+38  2025-07-29
+39  2025-07-29
+40  2025-07-29
+41  2025-07-29
+42  2025-07-29
+43  2025-07-29
+44  2025-07-29
+45  2025-07-29
+46  2025-07-29
+47  2025-07-29
+48  2025-07-29
+49  2025-07-31
+50  2025-07-31
+51  2025-07-31
+52  2025-07-31
+53  2025-07-31
+54  2025-07-31
+55  2025-07-31
+56  2025-07-31
+57  2025-07-31
+58  2025-07-31
+59  2025-07-31
+60  2025-07-31
+61  2025-07-31
+62  2025-07-31
+63  2025-07-31
+64  2025-07-31
+65  2025-07-31
+66  2025-07-31
+67  2025-07-31
+68  2025-07-31
+69  2025-07-31
+70  2025-07-31
+71  2025-07-31
+72  2025-07-31
+73  2025-07-31
+74  2025-07-31
+75  2025-07-31
+76  2025-08-05
+77  2025-08-05
+78  2025-08-05
+79  2025-08-05
+80  2025-08-05
+81  2025-08-05
+82  2025-08-05
+83  2025-08-05
+84  2025-08-05
+85  2025-08-05
+86  2025-08-05
+87  2025-08-05
+88  2025-08-05
+89  2025-08-05
+90  2025-08-05
+91  2025-08-05
+92  2025-08-05
+93  2025-08-05
+94  2025-08-05
+95  2025-08-05
+96  2025-08-05
+97  2025-08-05
+98  2025-08-05
+99  2025-08-07
+100 2025-08-07
+101 2025-08-07
+102 2025-08-07
+103 2025-08-07
+104 2025-08-07
+105 2025-08-07
+106 2025-08-07
+107 2025-08-07
+108 2025-08-07
+109 2025-08-07
+110 2025-08-07
+111 2025-08-07
+112 2025-08-07
+113 2025-08-07
+114 2025-08-07
+115 2025-08-07
+116 2025-08-07
+117 2025-08-07
+118 2025-08-07
+119 2025-08-07
+120 2025-08-07
+121 2025-08-07
+122 2025-08-07
+123 2025-08-07
+124 2025-08-07
+125 2025-08-22
+126 2025-08-22
+127 2025-08-22
+128 2025-08-22
+129 2025-08-22
+130 2025-08-22
+131 2025-08-22
+132 2025-08-22
+133 2025-08-22
+134 2025-08-22
+135 2025-08-22
+136 2025-08-22
+137 2025-08-22
+138 2025-08-22
+139 2025-08-22
+140 2025-08-22
+141 2025-08-22
+142 2025-08-22
+143 2025-08-22
+144 2025-08-22
+145 2025-08-22
+146 2025-08-22
+147 2025-08-22
+148 2025-08-22
+149 2025-08-22
+150 2025-08-22
+151 2025-08-22
+152 2025-08-28
+153 2025-08-28
+154 2025-08-28
+155 2025-08-28
+156 2025-08-28
+157 2025-08-28
+158 2025-08-28
+159 2025-08-28
+160 2025-08-28
+161 2025-08-28
+162 2025-08-28
+163 2025-08-28
+164 2025-08-28
+165 2025-08-28
+166 2025-08-28
+167 2025-08-28
+168 2025-08-28
+169 2025-08-28
+170 2025-08-28
+171 2025-08-28
+172 2025-08-28
+173 2025-08-28
+174 2025-08-28
+175 2025-08-28
+176 2025-08-28
+177 2025-08-28
+178 2025-08-28
+179 2025-08-30
+180 2025-08-30
+181 2025-08-30
+182 2025-08-30
+183 2025-08-30
+184 2025-08-30
+185 2025-08-30
+186 2025-08-30
+187 2025-08-30
+188 2025-08-30
+189 2025-08-30
+190 2025-08-30
+191 2025-08-30
+192 2025-08-30
+193 2025-08-30
+194 2025-08-30
+195 2025-08-30
+196 2025-08-30
+
+
+
+
+
+ +
+
+
+
+
+
+ +
+

.

+
+
+

This selects the column, leaving the new object containing only that subsetted-out column from the original Data object.

+
+
+
+
+
+
+
+ +

Pipe Operators

+
+
+
+
+
+
+ +
+

.

+
+
+

While the above line of code works to select a column, when you encounter select() out in the wild, it will more often appear in a line of code that looks like this:

+
+
+
+
+
+
+
+
DateColumn <- Data |> select("Date")
+DateColumn
+
+
          Date
+1   2025-07-26
+2   2025-07-26
+3   2025-07-26
+4   2025-07-26
+5   2025-07-26
+6   2025-07-26
+7   2025-07-26
+8   2025-07-26
+9   2025-07-26
+10  2025-07-26
+11  2025-07-26
+12  2025-07-26
+13  2025-07-26
+14  2025-07-26
+15  2025-07-26
+16  2025-07-26
+17  2025-07-26
+18  2025-07-26
+19  2025-07-26
+20  2025-07-26
+21  2025-07-26
+22  2025-07-26
+23  2025-07-26
+24  2025-07-26
+25  2025-07-26
+26  2025-07-26
+27  2025-07-29
+28  2025-07-29
+29  2025-07-29
+30  2025-07-29
+31  2025-07-29
+32  2025-07-29
+33  2025-07-29
+34  2025-07-29
+35  2025-07-29
+36  2025-07-29
+37  2025-07-29
+38  2025-07-29
+39  2025-07-29
+40  2025-07-29
+41  2025-07-29
+42  2025-07-29
+43  2025-07-29
+44  2025-07-29
+45  2025-07-29
+46  2025-07-29
+47  2025-07-29
+48  2025-07-29
+49  2025-07-31
+50  2025-07-31
+51  2025-07-31
+52  2025-07-31
+53  2025-07-31
+54  2025-07-31
+55  2025-07-31
+56  2025-07-31
+57  2025-07-31
+58  2025-07-31
+59  2025-07-31
+60  2025-07-31
+61  2025-07-31
+62  2025-07-31
+63  2025-07-31
+64  2025-07-31
+65  2025-07-31
+66  2025-07-31
+67  2025-07-31
+68  2025-07-31
+69  2025-07-31
+70  2025-07-31
+71  2025-07-31
+72  2025-07-31
+73  2025-07-31
+74  2025-07-31
+75  2025-07-31
+76  2025-08-05
+77  2025-08-05
+78  2025-08-05
+79  2025-08-05
+80  2025-08-05
+81  2025-08-05
+82  2025-08-05
+83  2025-08-05
+84  2025-08-05
+85  2025-08-05
+86  2025-08-05
+87  2025-08-05
+88  2025-08-05
+89  2025-08-05
+90  2025-08-05
+91  2025-08-05
+92  2025-08-05
+93  2025-08-05
+94  2025-08-05
+95  2025-08-05
+96  2025-08-05
+97  2025-08-05
+98  2025-08-05
+99  2025-08-07
+100 2025-08-07
+101 2025-08-07
+102 2025-08-07
+103 2025-08-07
+104 2025-08-07
+105 2025-08-07
+106 2025-08-07
+107 2025-08-07
+108 2025-08-07
+109 2025-08-07
+110 2025-08-07
+111 2025-08-07
+112 2025-08-07
+113 2025-08-07
+114 2025-08-07
+115 2025-08-07
+116 2025-08-07
+117 2025-08-07
+118 2025-08-07
+119 2025-08-07
+120 2025-08-07
+121 2025-08-07
+122 2025-08-07
+123 2025-08-07
+124 2025-08-07
+125 2025-08-22
+126 2025-08-22
+127 2025-08-22
+128 2025-08-22
+129 2025-08-22
+130 2025-08-22
+131 2025-08-22
+132 2025-08-22
+133 2025-08-22
+134 2025-08-22
+135 2025-08-22
+136 2025-08-22
+137 2025-08-22
+138 2025-08-22
+139 2025-08-22
+140 2025-08-22
+141 2025-08-22
+142 2025-08-22
+143 2025-08-22
+144 2025-08-22
+145 2025-08-22
+146 2025-08-22
+147 2025-08-22
+148 2025-08-22
+149 2025-08-22
+150 2025-08-22
+151 2025-08-22
+152 2025-08-28
+153 2025-08-28
+154 2025-08-28
+155 2025-08-28
+156 2025-08-28
+157 2025-08-28
+158 2025-08-28
+159 2025-08-28
+160 2025-08-28
+161 2025-08-28
+162 2025-08-28
+163 2025-08-28
+164 2025-08-28
+165 2025-08-28
+166 2025-08-28
+167 2025-08-28
+168 2025-08-28
+169 2025-08-28
+170 2025-08-28
+171 2025-08-28
+172 2025-08-28
+173 2025-08-28
+174 2025-08-28
+175 2025-08-28
+176 2025-08-28
+177 2025-08-28
+178 2025-08-28
+179 2025-08-30
+180 2025-08-30
+181 2025-08-30
+182 2025-08-30
+183 2025-08-30
+184 2025-08-30
+185 2025-08-30
+186 2025-08-30
+187 2025-08-30
+188 2025-08-30
+189 2025-08-30
+190 2025-08-30
+191 2025-08-30
+192 2025-08-30
+193 2025-08-30
+194 2025-08-30
+195 2025-08-30
+196 2025-08-30
+
+
+
+
+
+ +
+
+
+
+
+
+ +
+

.

+
+
+

“What in the world is that thing |> ?”

+
+
+
+
+
+
+
+
+
+
+
+ +
+

.

+
+
+

Glad you asked! A useful feature of the tidyverse packages is their use of pipes (either the original magrittr package’s “%>%”, or the “|>” built into base R from version 4.1.0 onward), usually appearing like this:

+
+
+
+
+
+
+
+
# magrittr %>% pipe
+
+DateColumn <- Data %>% select("Date")
+
+# base R |> pipe
+DateColumn <- Data |> select("Date")
+
+
+
+
+ +
+
+
+
+
+
+ +
+

.

+
+
+

“How do we interpret/read that line of code?”

+
+
+
+
+
+
+
+
+
+
+
+ +
+

.

+
+
+

Let’s break it down, starting off just to the right of the assignment arrow (<-) with our data.frame “Data”.

+
+
+
+
+
+
+
+
Data
+
+
+
+
+ +
+
+
+
+
+
+ +
+

.

+
+
+

We then proceed to read to the right, adding in our pipe operator. The pipe essentially serves as an intermediary, passing the contents of Data onward to the subsequent function.

+
+
+
+
+
+
+
+
Data |> 
+
+
+
+
+ +
+
+
+
+
+
+ +
+

.

+
+
+

In our case, this subsequent function is select(), which will select a particular column from the available data. When using the pipe, the first argument slot we saw in select(Data, "Date") is occupied by the contents of Data being passed along by the pipe.

+
+
+
+
+
+
+
+
Data |> select()
+
+
+
+
+ +
+
+
+
+
+
+ +
+

.

+
+
+

To complete the operation, we provide the desired column name for select() to act on (“Date” in this case):

+
+
+
+
+
+
+
+
Data |> select("Date")
+
+
+
+
+ +
+
+
+
+
+
+ +
+

.

+
+
+

In summary, the contents of Data are passed through the pipe, and select() runs on those contents to select the Date column:

+
+
+
+
+
+
+
+
Data |> select("Date")
+
+
          Date
+1   2025-07-26
+2   2025-07-26
+3   2025-07-26
+4   2025-07-26
+5   2025-07-26
+6   2025-07-26
+7   2025-07-26
+8   2025-07-26
+9   2025-07-26
+10  2025-07-26
+11  2025-07-26
+12  2025-07-26
+13  2025-07-26
+14  2025-07-26
+15  2025-07-26
+16  2025-07-26
+17  2025-07-26
+18  2025-07-26
+19  2025-07-26
+20  2025-07-26
+21  2025-07-26
+22  2025-07-26
+23  2025-07-26
+24  2025-07-26
+25  2025-07-26
+26  2025-07-26
+27  2025-07-29
+28  2025-07-29
+29  2025-07-29
+30  2025-07-29
+31  2025-07-29
+32  2025-07-29
+33  2025-07-29
+34  2025-07-29
+35  2025-07-29
+36  2025-07-29
+37  2025-07-29
+38  2025-07-29
+39  2025-07-29
+40  2025-07-29
+41  2025-07-29
+42  2025-07-29
+43  2025-07-29
+44  2025-07-29
+45  2025-07-29
+46  2025-07-29
+47  2025-07-29
+48  2025-07-29
+49  2025-07-31
+50  2025-07-31
+51  2025-07-31
+52  2025-07-31
+53  2025-07-31
+54  2025-07-31
+55  2025-07-31
+56  2025-07-31
+57  2025-07-31
+58  2025-07-31
+59  2025-07-31
+60  2025-07-31
+61  2025-07-31
+62  2025-07-31
+63  2025-07-31
+64  2025-07-31
+65  2025-07-31
+66  2025-07-31
+67  2025-07-31
+68  2025-07-31
+69  2025-07-31
+70  2025-07-31
+71  2025-07-31
+72  2025-07-31
+73  2025-07-31
+74  2025-07-31
+75  2025-07-31
+76  2025-08-05
+77  2025-08-05
+78  2025-08-05
+79  2025-08-05
+80  2025-08-05
+81  2025-08-05
+82  2025-08-05
+83  2025-08-05
+84  2025-08-05
+85  2025-08-05
+86  2025-08-05
+87  2025-08-05
+88  2025-08-05
+89  2025-08-05
+90  2025-08-05
+91  2025-08-05
+92  2025-08-05
+93  2025-08-05
+94  2025-08-05
+95  2025-08-05
+96  2025-08-05
+97  2025-08-05
+98  2025-08-05
+99  2025-08-07
+100 2025-08-07
+101 2025-08-07
+102 2025-08-07
+103 2025-08-07
+104 2025-08-07
+105 2025-08-07
+106 2025-08-07
+107 2025-08-07
+108 2025-08-07
+109 2025-08-07
+110 2025-08-07
+111 2025-08-07
+112 2025-08-07
+113 2025-08-07
+114 2025-08-07
+115 2025-08-07
+116 2025-08-07
+117 2025-08-07
+118 2025-08-07
+119 2025-08-07
+120 2025-08-07
+121 2025-08-07
+122 2025-08-07
+123 2025-08-07
+124 2025-08-07
+125 2025-08-22
+126 2025-08-22
+127 2025-08-22
+128 2025-08-22
+129 2025-08-22
+130 2025-08-22
+131 2025-08-22
+132 2025-08-22
+133 2025-08-22
+134 2025-08-22
+135 2025-08-22
+136 2025-08-22
+137 2025-08-22
+138 2025-08-22
+139 2025-08-22
+140 2025-08-22
+141 2025-08-22
+142 2025-08-22
+143 2025-08-22
+144 2025-08-22
+145 2025-08-22
+146 2025-08-22
+147 2025-08-22
+148 2025-08-22
+149 2025-08-22
+150 2025-08-22
+151 2025-08-22
+152 2025-08-28
+153 2025-08-28
+154 2025-08-28
+155 2025-08-28
+156 2025-08-28
+157 2025-08-28
+158 2025-08-28
+159 2025-08-28
+160 2025-08-28
+161 2025-08-28
+162 2025-08-28
+163 2025-08-28
+164 2025-08-28
+165 2025-08-28
+166 2025-08-28
+167 2025-08-28
+168 2025-08-28
+169 2025-08-28
+170 2025-08-28
+171 2025-08-28
+172 2025-08-28
+173 2025-08-28
+174 2025-08-28
+175 2025-08-28
+176 2025-08-28
+177 2025-08-28
+178 2025-08-28
+179 2025-08-30
+180 2025-08-30
+181 2025-08-30
+182 2025-08-30
+183 2025-08-30
+184 2025-08-30
+185 2025-08-30
+186 2025-08-30
+187 2025-08-30
+188 2025-08-30
+189 2025-08-30
+190 2025-08-30
+191 2025-08-30
+192 2025-08-30
+193 2025-08-30
+194 2025-08-30
+195 2025-08-30
+196 2025-08-30
+
+
+
+
+
+ +
+
+
+
+
+
+ +
+

.

+
+
+

One of the main advantages of pipes is that they can be linked together, passing the resulting object of one operation on through the next pipe to the subsequent function. We can see this in the example below, where we hand off the isolated “Date” column to the nrow() function to determine the number of rows. We will use pipes throughout the course, so you will gradually gain familiarity as you encounter them.

+
+
+
+
+
+
+
+
Data |> select("Date") |> nrow()
+
+
[1] 196
+
+
+
+
+
+ +
+
+
+
+
+
+ +
+

.

+
+
+

Those of you with prior R experience may be more familiar with the older magrittr %>% pipe. The base R |> pipe operator was introduced in R version 4.1.0. While mostly interchangeable, the two have a few nuances that come into play in more advanced use cases. You are welcome to use whichever you prefer (my current preference is |>, as it’s one less key to press).
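One such nuance, sketched below (this example assumes R 4.2 or newer for the base pipe placeholder; the %>% used here is the copy dplyr re-exports from magrittr): the two pipes use different placeholders when the piped value should land somewhere other than the first argument.

```r
library(dplyr)  # re-exports magrittr's %>%

x <- 2

# magrittr's placeholder is "."
x %>% seq(from = 1, to = 10, by = .)
#> [1] 1 3 5 7 9

# the base pipe (R >= 4.2) uses "_", and only as a named argument
x |> seq(from = 1, to = 10, by = _)
#> [1] 1 3 5 7 9
```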

+
+
+
+
+
+
+
+ +

R Quirks

+
+
+
+
+
+ +
+

Odd R Behavior # 1

+
+
+

While we put quotation marks around the column name in our previous example, select() still retrieves the correct column even without them, despite Date not being an environment variable (unlike what we encountered with install.packages() when we forgot to include quotation marks):

+
+
+
+
+
+
+
Data |> select(Date) |> head(5)
+
+
        Date
+1 2025-07-26
+2 2025-07-26
+3 2025-07-26
+4 2025-07-26
+5 2025-07-26
+
+
+
+
+
+ +
+
+
+
+
+ +
+

.

+
+
+

The reasons for this odd R behavior are nuanced and a topic for another day. For now, think of it as the dplyr package picking up the slack, using context (a system called tidy evaluation) to infer that Date is a column name and not an environmental variable/object.
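A small sketch of that inference in action (the toy data.frame and the clashing variable are illustrative): even when an environment variable shares the column’s name, select() prefers the column.

```r
library(dplyr)

toy <- data.frame(Date = c("2025-07-26", "2025-07-29"), count = 1:2)
Date <- "I am an environment variable, not a column"

# select() looks in the data first, so the column wins
toy |> select(Date)
#>         Date
#> 1 2025-07-26
#> 2 2025-07-29
```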

+
+
+
+
+
+
+ +

Selecting multiple columns

+
+
+
+
+
+
+ +
+

.

+
+
+

Since we are able to select one column, can we select multiple (similar to a Data[, 2:5] approach in base R)? We can, and they can be positioned anywhere within the data.frame:

+
+
+
+
+
+
+
+
Subset <- Data |> select(bid, timepoint, Condition, Tcells, `CD8+`, `CD4+`)
+
+head(Subset, 5)
+
+
      bid timepoint Condition    Tcells      CD8+      CD4+
+1 INF0052         0      Ctrl 0.2804264 0.2734826 0.6341164
+2 INF0100         0      Ctrl 0.6748298 0.3357696 0.6119112
+3 INF0100         4      Ctrl 0.6119129 0.2862104 0.6639621
+4 INF0100         9      Ctrl 0.6314431 0.4861231 0.4378944
+5 INF0179         0      Ctrl 0.4396437 0.1950634 0.7392563
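As a hedged aside echoing the base R Data[, 2:5] comparison above, select() also accepts column positions (the toy data.frame is illustrative):

```r
library(dplyr)

toy <- data.frame(a = 1, b = 2, c = 3, d = 4, e = 5)

# Positions work just like names
toy |> select(2:5) |> colnames()
#> [1] "b" "c" "d" "e"
```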
+
+
+
+
+
+ +
+
+
+
+
+
+ +
+

.

+
+
+

You will notice that the order in which we select the columns dictates their position in the subsetted data.frame object:

+
+
+
+
+
+
+
+
Subset <- Data |> select(bid, Tcells, `CD8+`, `CD4+`, timepoint, Condition)
+
+head(Subset, 5)
+
+
      bid    Tcells      CD8+      CD4+ timepoint Condition
+1 INF0052 0.2804264 0.2734826 0.6341164         0      Ctrl
+2 INF0100 0.6748298 0.3357696 0.6119112         0      Ctrl
+3 INF0100 0.6119129 0.2862104 0.6639621         4      Ctrl
+4 INF0100 0.6314431 0.4861231 0.4378944         9      Ctrl
+5 INF0179 0.4396437 0.1950634 0.7392563         0      Ctrl
+
+
+
+
+
+

relocate

+
+
+
+
+
+
+ +
+

.

+
+
+

Alternatively, we occasionally want to move just one column. While we could respecify the full ordering using select(), writing out the names of all the other columns just to rearrange one does not sound like a good use of time. For this reason, the second dplyr function we will be learning is relocate().

+
+
+
+
+
+
+
+ +
+
+
+
+
+
+ +
+

.

+
+
+

Looking at our Data object, let’s say we wanted to move the Tcells column from its current location to the second column position (right after the bid column). The line of code to do so would look like:

+
+
+
+
+
+
+
+
Data |> relocate(Tcells, .after=bid) |> head(5)
+
+
      bid    Tcells timepoint Condition       Date infant_sex  ptype    root
+1 INF0052 0.2804264         0      Ctrl 2025-07-26       Male HEU-hi 2098368
+2 INF0100 0.6748298         0      Ctrl 2025-07-26       Male HEU-lo 2020184
+3 INF0100 0.6119129         4      Ctrl 2025-07-26       Male HEU-lo 1155040
+4 INF0100 0.6314431         9      Ctrl 2025-07-26       Male HEU-lo  358624
+5 INF0179 0.4396437         0      Ctrl 2025-07-26       Male     HU 1362216
+  singletsFSC singletsSSC singletsSSCB      CD45 NotMonocytes nonDebris
+1     1894070     1666179      1537396 0.5952943    0.8820349 0.8627649
+2     1791890     1697083      1579098 0.9106762    0.9052256 0.8602660
+3     1033320      875465       845446 0.9705765    0.9845400 0.9578793
+4      328624      289327       276289 0.9819573    0.9855070 0.9412615
+5     1206309     1032946       982736 0.9572591    0.9556272 0.8407837
+  lymphocytes      live      Dump+     Dump-        Vd2+      Vd2-     Va7.2+
+1   0.6420138 0.9020581 0.21090996 0.6911482 0.008120361 0.9918796 0.01448070
+2   0.2145848 0.8908981 0.06252775 0.8283703 0.007265620 0.9927344 0.01577499
+3   0.7403110 0.8757665 0.20023803 0.6755285 0.004651313 0.9953487 0.01579402
+4   0.6511588 0.9153242 0.21469246 0.7006317 0.011348967 0.9886510 0.01702345
+5   0.7054786 0.8952140 0.33831877 0.5568953 0.004753630 0.9952464 0.01332182
+     Va7.2-      CD4+      CD4-      CD8+       CD8- Tcells_count
+1 0.9773989 0.6341164 0.3432825 0.2734826 0.06979990       164771
+2 0.9769594 0.6119112 0.3650482 0.3357696 0.02927858       208241
+3 0.9795547 0.6639621 0.3155925 0.2862104 0.02938209       371723
+4 0.9716276 0.4378944 0.5337331 0.4861231 0.04761008       111552
+5 0.9819246 0.7392563 0.2426682 0.1950634 0.04760485       291777
+  lymphocytes_count  Monocytes     Debris CD45_count
+1            587573 0.11796509 0.13723513     915203
+2            308583 0.09477437 0.13973396    1438047
+3            607477 0.01545999 0.04212072     820570
+4            176662 0.01449297 0.05873854     271304
+5            663667 0.04437285 0.15921627     940733
+
+
# |> head(5) is used only to make the website output visualization manageable :D
+
+
+
+
+ +
+
+
+
+
+
+ +
+


+
+
+

Similar to what we saw with select(), this approach can also be used for more than 1 column:

+
+
+
+
+
+
+
+
Data |> relocate(Tcells, Monocytes, .after=bid) |> head(5)
+
+
      bid    Tcells  Monocytes timepoint Condition       Date infant_sex  ptype
+1 INF0052 0.2804264 0.11796509         0      Ctrl 2025-07-26       Male HEU-hi
+2 INF0100 0.6748298 0.09477437         0      Ctrl 2025-07-26       Male HEU-lo
+3 INF0100 0.6119129 0.01545999         4      Ctrl 2025-07-26       Male HEU-lo
+4 INF0100 0.6314431 0.01449297         9      Ctrl 2025-07-26       Male HEU-lo
+5 INF0179 0.4396437 0.04437285         0      Ctrl 2025-07-26       Male     HU
+     root singletsFSC singletsSSC singletsSSCB      CD45 NotMonocytes nonDebris
+1 2098368     1894070     1666179      1537396 0.5952943    0.8820349 0.8627649
+2 2020184     1791890     1697083      1579098 0.9106762    0.9052256 0.8602660
+3 1155040     1033320      875465       845446 0.9705765    0.9845400 0.9578793
+4  358624      328624      289327       276289 0.9819573    0.9855070 0.9412615
+5 1362216     1206309     1032946       982736 0.9572591    0.9556272 0.8407837
+  lymphocytes      live      Dump+     Dump-        Vd2+      Vd2-     Va7.2+
+1   0.6420138 0.9020581 0.21090996 0.6911482 0.008120361 0.9918796 0.01448070
+2   0.2145848 0.8908981 0.06252775 0.8283703 0.007265620 0.9927344 0.01577499
+3   0.7403110 0.8757665 0.20023803 0.6755285 0.004651313 0.9953487 0.01579402
+4   0.6511588 0.9153242 0.21469246 0.7006317 0.011348967 0.9886510 0.01702345
+5   0.7054786 0.8952140 0.33831877 0.5568953 0.004753630 0.9952464 0.01332182
+     Va7.2-      CD4+      CD4-      CD8+       CD8- Tcells_count
+1 0.9773989 0.6341164 0.3432825 0.2734826 0.06979990       164771
+2 0.9769594 0.6119112 0.3650482 0.3357696 0.02927858       208241
+3 0.9795547 0.6639621 0.3155925 0.2862104 0.02938209       371723
+4 0.9716276 0.4378944 0.5337331 0.4861231 0.04761008       111552
+5 0.9819246 0.7392563 0.2426682 0.1950634 0.04760485       291777
+  lymphocytes_count     Debris CD45_count
+1            587573 0.13723513     915203
+2            308583 0.13973396    1438047
+3            607477 0.04212072     820570
+4            176662 0.05873854     271304
+5            663667 0.15921627     940733
+
+
# |> head(5) is used only to make the website output visualization manageable :D
+
+
+
+
+ +
+
+
+
+
+
+ +
+


+
+
+

We can also modify the argument so that columns are placed before a certain column

+
+
+
+
+
+
+
+
Data |> relocate(Tcells, .before=Date) |> head(5)
+
+
      bid timepoint Condition    Tcells       Date infant_sex  ptype    root
+1 INF0052         0      Ctrl 0.2804264 2025-07-26       Male HEU-hi 2098368
+2 INF0100         0      Ctrl 0.6748298 2025-07-26       Male HEU-lo 2020184
+3 INF0100         4      Ctrl 0.6119129 2025-07-26       Male HEU-lo 1155040
+4 INF0100         9      Ctrl 0.6314431 2025-07-26       Male HEU-lo  358624
+5 INF0179         0      Ctrl 0.4396437 2025-07-26       Male     HU 1362216
+  singletsFSC singletsSSC singletsSSCB      CD45 NotMonocytes nonDebris
+1     1894070     1666179      1537396 0.5952943    0.8820349 0.8627649
+2     1791890     1697083      1579098 0.9106762    0.9052256 0.8602660
+3     1033320      875465       845446 0.9705765    0.9845400 0.9578793
+4      328624      289327       276289 0.9819573    0.9855070 0.9412615
+5     1206309     1032946       982736 0.9572591    0.9556272 0.8407837
+  lymphocytes      live      Dump+     Dump-        Vd2+      Vd2-     Va7.2+
+1   0.6420138 0.9020581 0.21090996 0.6911482 0.008120361 0.9918796 0.01448070
+2   0.2145848 0.8908981 0.06252775 0.8283703 0.007265620 0.9927344 0.01577499
+3   0.7403110 0.8757665 0.20023803 0.6755285 0.004651313 0.9953487 0.01579402
+4   0.6511588 0.9153242 0.21469246 0.7006317 0.011348967 0.9886510 0.01702345
+5   0.7054786 0.8952140 0.33831877 0.5568953 0.004753630 0.9952464 0.01332182
+     Va7.2-      CD4+      CD4-      CD8+       CD8- Tcells_count
+1 0.9773989 0.6341164 0.3432825 0.2734826 0.06979990       164771
+2 0.9769594 0.6119112 0.3650482 0.3357696 0.02927858       208241
+3 0.9795547 0.6639621 0.3155925 0.2862104 0.02938209       371723
+4 0.9716276 0.4378944 0.5337331 0.4861231 0.04761008       111552
+5 0.9819246 0.7392563 0.2426682 0.1950634 0.04760485       291777
+  lymphocytes_count  Monocytes     Debris CD45_count
+1            587573 0.11796509 0.13723513     915203
+2            308583 0.09477437 0.13973396    1438047
+3            607477 0.01545999 0.04212072     820570
+4            176662 0.01449297 0.05873854     271304
+5            663667 0.04437285 0.15921627     940733
+
+
# |> head(5) is used only to make the website output visualization manageable :D
+
+
+
+
+ +
+
+
+
+
+
+ +
+


+
+
+

And as we might suspect, we could specify a column index location rather than using a column name.

+
+
+
+
+
+
+
+
Data |> relocate(Date, .before=1) |> head(5)
+
+
        Date     bid timepoint Condition infant_sex  ptype    root singletsFSC
+1 2025-07-26 INF0052         0      Ctrl       Male HEU-hi 2098368     1894070
+2 2025-07-26 INF0100         0      Ctrl       Male HEU-lo 2020184     1791890
+3 2025-07-26 INF0100         4      Ctrl       Male HEU-lo 1155040     1033320
+4 2025-07-26 INF0100         9      Ctrl       Male HEU-lo  358624      328624
+5 2025-07-26 INF0179         0      Ctrl       Male     HU 1362216     1206309
+  singletsSSC singletsSSCB      CD45 NotMonocytes nonDebris lymphocytes
+1     1666179      1537396 0.5952943    0.8820349 0.8627649   0.6420138
+2     1697083      1579098 0.9106762    0.9052256 0.8602660   0.2145848
+3      875465       845446 0.9705765    0.9845400 0.9578793   0.7403110
+4      289327       276289 0.9819573    0.9855070 0.9412615   0.6511588
+5     1032946       982736 0.9572591    0.9556272 0.8407837   0.7054786
+       live      Dump+     Dump-    Tcells        Vd2+      Vd2-     Va7.2+
+1 0.9020581 0.21090996 0.6911482 0.2804264 0.008120361 0.9918796 0.01448070
+2 0.8908981 0.06252775 0.8283703 0.6748298 0.007265620 0.9927344 0.01577499
+3 0.8757665 0.20023803 0.6755285 0.6119129 0.004651313 0.9953487 0.01579402
+4 0.9153242 0.21469246 0.7006317 0.6314431 0.011348967 0.9886510 0.01702345
+5 0.8952140 0.33831877 0.5568953 0.4396437 0.004753630 0.9952464 0.01332182
+     Va7.2-      CD4+      CD4-      CD8+       CD8- Tcells_count
+1 0.9773989 0.6341164 0.3432825 0.2734826 0.06979990       164771
+2 0.9769594 0.6119112 0.3650482 0.3357696 0.02927858       208241
+3 0.9795547 0.6639621 0.3155925 0.2862104 0.02938209       371723
+4 0.9716276 0.4378944 0.5337331 0.4861231 0.04761008       111552
+5 0.9819246 0.7392563 0.2426682 0.1950634 0.04760485       291777
+  lymphocytes_count  Monocytes     Debris CD45_count
+1            587573 0.11796509 0.13723513     915203
+2            308583 0.09477437 0.13973396    1438047
+3            607477 0.01545999 0.04212072     820570
+4            176662 0.01449297 0.05873854     271304
+5            663667 0.04437285 0.15921627     940733
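As a side note (a sketch, not part of the original lesson): relocate() also accepts tidyselect helpers, so whole groups of columns can be moved at once rather than naming each one. This assumes the Data object from this session is loaded.

```r
# Sketch: move column groups with tidyselect helpers inside relocate().
# Assumes the Data object from this session; where(), ends_with() and
# last_col() come from tidyselect via dplyr.
library(dplyr)

Data |>
  relocate(where(is.numeric), .after = last_col()) |>  # all numeric columns to the end
  head(5)

Data |>
  relocate(ends_with("_count"), .before = 1) |>        # the *_count columns first
  head(5)
```
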
+
+
# |> head(5) is used only to make the website output visualization manageable :D
+
+
+
+
+

rename

+
+
+
+
+
+
+ +
+


+
+
+

At this point, we are able to both move and select particular columns, allowing us to rearrange and subset a larger data.frame object however we want it to appear. However, as we encountered, some of the names contain special characters and spaces, requiring backticks (``) to avoid parsing errors. How can we change a column name?

+
+
+
+
+
+
+
+ +
+
+
+
+
+
+ +
+


+
+
+

In base R, we could change individual column names by assigning a new value to the corresponding element of colnames(). For example, looking at our Subset object, we could rename CD8+ as follows:

+
+
+
+
+
+
+
+
colnames(Subset)
+
+
[1] "bid"       "Tcells"    "CD8+"      "CD4+"      "timepoint" "Condition"
+
+
colnames(Subset)[3]
+
+
[1] "CD8+"
+
+
+
+
+
+
colnames(Subset)[3] <- "CD8Positive"
+colnames(Subset)
+
+
[1] "bid"         "Tcells"      "CD8Positive" "CD4+"        "timepoint"  
+[6] "Condition"  
+
+
+
+
+
+ +
+
+
+
+
+
+ +
+


+
+
+

With the tidyverse, we can use the rename() function, which removes the need to look up the column index number. Within the parentheses, the new name goes on the left of the equals sign and the old name on the right (new_name = old_name):

+
+
+
+
+
+
+
+
Renamed <- Subset |> rename(CD4_Positive = `CD4+`)
+colnames(Renamed)
+
+
[1] "bid"          "Tcells"       "CD8Positive"  "CD4_Positive" "timepoint"   
+[6] "Condition"   
+
+
+
+
+
+ +
+
+
+
+
+
+ +
+


+
+
+

If we wanted to rename multiple columns at once, we would just separate the individual new_name = old_name pairs with commas within the parentheses.

+
+
+
+
+
+
+
+
Renamed_Multiple <- Subset |> rename(specimen = bid, timepoint_months = timepoint, stimulation = Condition, CD4Positive=`CD4+`)
+colnames(Renamed_Multiple)
+
+
[1] "specimen"         "Tcells"           "CD8Positive"      "CD4Positive"     
+[5] "timepoint_months" "stimulation"     
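When many column names need the same kind of fix (for example, several names with special characters), renaming them one by one gets tedious. A hedged sketch using rename_with(), which applies a function to many column names at once; it assumes the Subset object built earlier in this lesson, and uses base R's make.names() to sanitize the names:

```r
# Sketch: rename_with() applies a function across column names.
# make.names() replaces characters that are invalid in R names,
# e.g. "CD4+" becomes "CD4."
library(dplyr)

Subset |>
  rename_with(make.names) |>
  colnames()
```
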
+
+
+
+
+
+

pull

+
+
+
+
+
+
+ +
+


+
+
+

Sometimes, we may want to retrieve the individual values present in a column to use as a vector or within a list. We can do this using the pull() function, which extracts the column contents as a plain vector, stripping away the data.frame structure

+
+
+
+
+
+
+
+
Data |> pull(Date) |> head(10)
+
+
 [1] "2025-07-26" "2025-07-26" "2025-07-26" "2025-07-26" "2025-07-26"
+ [6] "2025-07-26" "2025-07-26" "2025-07-26" "2025-07-26" "2025-07-26"
+
+
+
+
+
+ +
+
+
+
+
+
+ +
+


+
+
+

This can be useful during data exploration, when trying to determine how many unique values might be present. For example, if we wanted to see on which days the individual samples were acquired, we could pull() the column and pass it to the unique() function:

+
+
+
+
+
+
+
+
Data |> pull(Date) |> unique()
+
+
[1] "2025-07-26" "2025-07-29" "2025-07-31" "2025-08-05" "2025-08-07"
+[6] "2025-08-22" "2025-08-28" "2025-08-30"
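A small extra sketch (not part of the original lesson): pull() also has an optional `name` argument that returns a named vector, pairing each value with the contents of another column. This assumes the Data object from this session.

```r
# Sketch: pull() with `name` returns a named vector -- here each
# Tcells proportion is labeled with its bid identifier.
library(dplyr)

Data |> pull(Tcells, name = bid) |> head(5)
```
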
+
+
+
+
+
+

filter (Rows)

+
+
+
+
+
+
+ +
+


+
+
+

So far, we have been working with dplyr functions that primarily operate on columns (including select(), pull(), rename() and relocate()). What if we wanted to work with the rows of a data.frame? This is where the filter() function comes in.

+
+
+
+
+
+
+
+ +
+
+
+
+
+
+ +
+


+
+
+

The Condition column in this dataset appears to indicate whether the samples were stimulated. Let's see how many unique values are contained within that column:

+
+
+
+
+
+
+
+
Data |> pull(Condition) |> unique() 
+
+
[1] "Ctrl" "PPD"  "SEB" 
+
+
+
+
+
+ +
+
+
+
+
+
+ +
+


+
+
+

In the case of this dataset, it looks like the samples were either left alone (Ctrl), treated with PPD (Purified Protein Derivative), or treated with SEB (Staphylococcal Enterotoxin B). What if we wanted to subset only those treated with PPD?

+
+
+
+
+
+
+
+ +
+
+
+
+
+
+ +
+


+
+
+

Within filter(), we specify the column name as the first argument, and ask that only values equal to (==) "PPD" be returned. Notice that in this case quotation marks are needed, as we are matching a character value.

+
+
+
+
+
+
+
+
PPDOnly <- Data |> filter(Condition == "PPD")
+head(PPDOnly, 5)
+
+
      bid timepoint Condition       Date infant_sex  ptype    root singletsFSC
+1 INF0052         0       PPD 2025-07-26       Male HEU-hi 2363512     2136616
+2 INF0100         0       PPD 2025-07-26       Male HEU-lo 2049112     1821676
+3 INF0100         4       PPD 2025-07-26       Male HEU-lo 1063496      946587
+4 INF0100         9       PPD 2025-07-26       Male HEU-lo  788368      714198
+5 INF0179         0       PPD 2025-07-26       Male     HU 1380336     1242311
+  singletsSSC singletsSSCB      CD45 NotMonocytes nonDebris lymphocytes
+1     1875394      1732620 0.5873838    0.8619837 0.8429685   0.6408044
+2     1717636      1597085 0.9063081    0.9251961 0.8771889   0.2174284
+3      796056       767297 0.9709891    0.9848719 0.9556049   0.7313503
+4      626387       600011 0.9822803    0.9842139 0.8123041   0.6223228
+5     1047081      1000877 0.9470275    0.9575685 0.9134438   0.6996502
+       live      Dump+     Dump-    Tcells        Vd2+      Vd2-     Va7.2+
+1 0.9009254 0.20743228 0.6934931 0.2835676 0.007408209 0.9925918 0.01507057
+2 0.8929673 0.06181426 0.8311531 0.6735798 0.007137230 0.9928628 0.01671801
+3 0.8782307 0.20727202 0.6709587 0.5989873 0.005254643 0.9947454 0.01609790
+4 0.9566639 0.23164587 0.7250180 0.6489405 0.011935922 0.9880641 0.01855298
+5 0.8856898 0.33186111 0.5538287 0.4441538 0.004382972 0.9956170 0.01297237
+     Va7.2-      CD4+      CD4-      CD8+       CD8- Tcells_count
+1 0.9775212 0.6340345 0.3434867 0.2744119 0.06907479       184930
+2 0.9761448 0.6145707 0.3615741 0.3312279 0.03034620       211987
+3 0.9786475 0.6559480 0.3226994 0.2912084 0.03149109       326378
+4 0.9695111 0.4306889 0.5388222 0.4908558 0.04796636       238021
+5 0.9826447 0.7499194 0.2327253 0.1850897 0.04763554       294549
+  lymphocytes_count  Monocytes     Debris CD45_count
+1            652155 0.13801632 0.15703150    1017713
+2            314717 0.07480391 0.12281107    1447451
+3            544883 0.01512811 0.04439511     745037
+4            366784 0.01578611 0.18769586     589379
+5            663169 0.04243146 0.08655621     947858
+
+
+
+
+
+ +
+
+
+
+
+
+ +
+


+
+
+

While this works, matching with == can behave unexpectedly: it compares element by element and returns NA wherever the column contains a missing value, and filter() silently drops those NA rows. The %in% operator always returns TRUE or FALSE, making it a more reliable way of identifying and extracting only the rows whose Condition column contains "PPD"

+
+
+
+
+
+
+
+
Data |> filter(Condition %in% "PPD") |> head(10)
+
+
       bid timepoint Condition       Date infant_sex  ptype    root singletsFSC
+1  INF0052         0       PPD 2025-07-26       Male HEU-hi 2363512     2136616
+2  INF0100         0       PPD 2025-07-26       Male HEU-lo 2049112     1821676
+3  INF0100         4       PPD 2025-07-26       Male HEU-lo 1063496      946587
+4  INF0100         9       PPD 2025-07-26       Male HEU-lo  788368      714198
+5  INF0179         0       PPD 2025-07-26       Male     HU 1380336     1242311
+6  INF0179         4       PPD 2025-07-26       Male     HU 1240984     1089933
+7  INF0179         9       PPD 2025-07-26       Male     HU 1705960     1492142
+8  INF0186         4       PPD 2025-07-26     Female HEU-hi  848584      759606
+9  INF0186         9       PPD 2025-07-26     Female HEU-hi 1425416     1259825
+10 INF0134         0       PPD 2025-07-29     Female HEU-lo 1245024     1126248
+   singletsSSC singletsSSCB      CD45 NotMonocytes nonDebris lymphocytes
+1      1875394      1732620 0.5873838    0.8619837 0.8429685   0.6408044
+2      1717636      1597085 0.9063081    0.9251961 0.8771889   0.2174284
+3       796056       767297 0.9709891    0.9848719 0.9556049   0.7313503
+4       626387       600011 0.9822803    0.9842139 0.8123041   0.6223228
+5      1047081      1000877 0.9470275    0.9575685 0.9134438   0.6996502
+6       868877       814909 0.9855947    0.9541417 0.9400824   0.7303074
+7      1163543      1107878 0.9820919    0.9816909 0.9681656   0.7933252
+8       648405       607514 0.9824778    0.9539480 0.9250170   0.6720872
+9      1089955      1014266 0.9771490    0.9552573 0.9137615   0.6332438
+10      993895       896183 0.7915660    0.8042298 0.7899781   0.5924868
+        live      Dump+     Dump-    Tcells        Vd2+      Vd2-     Va7.2+
+1  0.9009254 0.20743228 0.6934931 0.2835676 0.007408209 0.9925918 0.01507057
+2  0.8929673 0.06181426 0.8311531 0.6735798 0.007137230 0.9928628 0.01671801
+3  0.8782307 0.20727202 0.6709587 0.5989873 0.005254643 0.9947454 0.01609790
+4  0.9566639 0.23164587 0.7250180 0.6489405 0.011935922 0.9880641 0.01855298
+5  0.8856898 0.33186111 0.5538287 0.4441538 0.004382972 0.9956170 0.01297237
+6  0.9602599 0.34357211 0.6166878 0.5654655 0.004320429 0.9956796 0.01266884
+7  0.9344566 0.24759143 0.6868651 0.6687319 0.002733755 0.9972662 0.01330324
+8  0.8622229 0.32641070 0.5358122 0.4757720 0.009483639 0.9905164 0.04352519
+9  0.8793039 0.23863251 0.6406714 0.5818617 0.018224039 0.9817760 0.03738187
+10 0.9003481 0.15485733 0.7454908 0.3314561 0.009453601 0.9905464 0.02587717
+      Va7.2-      CD4+      CD4-      CD8+       CD8- Tcells_count
+1  0.9775212 0.6340345 0.3434867 0.2744119 0.06907479       184930
+2  0.9761448 0.6145707 0.3615741 0.3312279 0.03034620       211987
+3  0.9786475 0.6559480 0.3226994 0.2912084 0.03149109       326378
+4  0.9695111 0.4306889 0.5388222 0.4908558 0.04796636       238021
+5  0.9826447 0.7499194 0.2327253 0.1850897 0.04763554       294549
+6  0.9830107 0.6318771 0.3511336 0.3177460 0.03338760       331680
+7  0.9839630 0.7016361 0.2823269 0.2559335 0.02639338       577228
+8  0.9469912 0.5309109 0.4160803 0.3912185 0.02486181       190855
+9  0.9443941 0.5033806 0.4410135 0.4213381 0.01967539       365177
+10 0.9646692 0.6964224 0.2682468 0.2260394 0.04220742       139312
+   lymphocytes_count  Monocytes     Debris CD45_count
+1             652155 0.13801632 0.15703150    1017713
+2             314717 0.07480391 0.12281107    1447451
+3             544883 0.01512811 0.04439511     745037
+4             366784 0.01578611 0.18769586     589379
+5             663169 0.04243146 0.08655621     947858
+6             586561 0.04585829 0.05991758     803170
+7             863168 0.01830910 0.03183437    1088038
+8             401148 0.04605198 0.07498295     596869
+9             627601 0.04474270 0.08623847     991089
+10            420303 0.19577016 0.21002188     709388
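To make the caution about == concrete, here is a minimal base-R illustration with a toy vector (not from the dataset) showing how the two operators treat a missing value differently:

```r
# Toy example: `==` propagates NA, `%in%` never does.
x <- c("PPD", NA, "SEB")

x == "PPD"    # TRUE    NA FALSE  -- the NA row would be silently dropped by filter()
x %in% "PPD"  # TRUE FALSE FALSE  -- always a clean TRUE/FALSE
```
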
+
+
+
+
+
+ +
+
+
+
+
+
+ +
+


+
+
+

Similar to what we saw for select(), we can grab rows matching several values at once. We just need to modify the second part of the argument: if we wanted the rows whose Condition column contains either PPD or SEB, we would provide the values as a vector, placing both within c().

+
+
+
+
+
+
+
+
Data |> filter(Condition %in% c("PPD", "SEB")) |> head(10)
+
+
       bid timepoint Condition       Date infant_sex  ptype    root singletsFSC
+1  INF0052         0       PPD 2025-07-26       Male HEU-hi 2363512     2136616
+2  INF0100         0       PPD 2025-07-26       Male HEU-lo 2049112     1821676
+3  INF0100         4       PPD 2025-07-26       Male HEU-lo 1063496      946587
+4  INF0100         9       PPD 2025-07-26       Male HEU-lo  788368      714198
+5  INF0179         0       PPD 2025-07-26       Male     HU 1380336     1242311
+6  INF0179         4       PPD 2025-07-26       Male     HU 1240984     1089933
+7  INF0179         9       PPD 2025-07-26       Male     HU 1705960     1492142
+8  INF0186         4       PPD 2025-07-26     Female HEU-hi  848584      759606
+9  INF0186         9       PPD 2025-07-26     Female HEU-hi 1425416     1259825
+10 INF0052         0       SEB 2025-07-26       Male HEU-hi 2523776     2282292
+   singletsSSC singletsSSCB      CD45 NotMonocytes nonDebris lymphocytes
+1      1875394      1732620 0.5873838    0.8619837 0.8429685   0.6408044
+2      1717636      1597085 0.9063081    0.9251961 0.8771889   0.2174284
+3       796056       767297 0.9709891    0.9848719 0.9556049   0.7313503
+4       626387       600011 0.9822803    0.9842139 0.8123041   0.6223228
+5      1047081      1000877 0.9470275    0.9575685 0.9134438   0.6996502
+6       868877       814909 0.9855947    0.9541417 0.9400824   0.7303074
+7      1163543      1107878 0.9820919    0.9816909 0.9681656   0.7933252
+8       648405       607514 0.9824778    0.9539480 0.9250170   0.6720872
+9      1089955      1014266 0.9771490    0.9552573 0.9137615   0.6332438
+10     2041563      1889418 0.5783591    0.8878072 0.8670150   0.6718563
+        live      Dump+     Dump-    Tcells        Vd2+      Vd2-      Va7.2+
+1  0.9009254 0.20743228 0.6934931 0.2835676 0.007408209 0.9925918 0.015070567
+2  0.8929673 0.06181426 0.8311531 0.6735798 0.007137230 0.9928628 0.016718006
+3  0.8782307 0.20727202 0.6709587 0.5989873 0.005254643 0.9947454 0.016097899
+4  0.9566639 0.23164587 0.7250180 0.6489405 0.011935922 0.9880641 0.018552985
+5  0.8856898 0.33186111 0.5538287 0.4441538 0.004382972 0.9956170 0.012972375
+6  0.9602599 0.34357211 0.6166878 0.5654655 0.004320429 0.9956796 0.012668837
+7  0.9344566 0.24759143 0.6868651 0.6687319 0.002733755 0.9972662 0.013303235
+8  0.8622229 0.32641070 0.5358122 0.4757720 0.009483639 0.9905164 0.043525189
+9  0.8793039 0.23863251 0.6406714 0.5818617 0.018224039 0.9817760 0.037381872
+10 0.9115652 0.23344716 0.6781180 0.2741661 0.009225633 0.9907744 0.008420812
+      Va7.2-      CD4+      CD4-      CD8+       CD8- Tcells_count
+1  0.9775212 0.6340345 0.3434867 0.2744119 0.06907479       184930
+2  0.9761448 0.6145707 0.3615741 0.3312279 0.03034620       211987
+3  0.9786475 0.6559480 0.3226994 0.2912084 0.03149109       326378
+4  0.9695111 0.4306889 0.5388222 0.4908558 0.04796636       238021
+5  0.9826447 0.7499194 0.2327253 0.1850897 0.04763554       294549
+6  0.9830107 0.6318771 0.3511336 0.3177460 0.03338760       331680
+7  0.9839630 0.7016361 0.2823269 0.2559335 0.02639338       577228
+8  0.9469912 0.5309109 0.4160803 0.3912185 0.02486181       190855
+9  0.9443941 0.5033806 0.4410135 0.4213381 0.01967539       365177
+10 0.9823536 0.6083254 0.3740281 0.2811756 0.09285249       201287
+   lymphocytes_count  Monocytes     Debris CD45_count
+1             652155 0.13801632 0.15703150    1017713
+2             314717 0.07480391 0.12281107    1447451
+3             544883 0.01512811 0.04439511     745037
+4             366784 0.01578611 0.18769586     589379
+5             663169 0.04243146 0.08655621     947858
+6             586561 0.04585829 0.05991758     803170
+7             863168 0.01830910 0.03183437    1088038
+8             401148 0.04605198 0.07498295     596869
+9             627601 0.04474270 0.08623847     991089
+10            734179 0.11219277 0.13298504    1092762
+
+
+
+
+
+ +
+
+
+
+
+
+ +
+


+
+
+

Alternatively, we could have set up the vector externally, and then provided it to filter()

+
+
+
+
+
+
+
+
TheseConditions <- c("PPD", "SEB")
+Data |> filter(Condition %in% TheseConditions) |> head(10)
+
+
       bid timepoint Condition       Date infant_sex  ptype    root singletsFSC
+1  INF0052         0       PPD 2025-07-26       Male HEU-hi 2363512     2136616
+2  INF0100         0       PPD 2025-07-26       Male HEU-lo 2049112     1821676
+3  INF0100         4       PPD 2025-07-26       Male HEU-lo 1063496      946587
+4  INF0100         9       PPD 2025-07-26       Male HEU-lo  788368      714198
+5  INF0179         0       PPD 2025-07-26       Male     HU 1380336     1242311
+6  INF0179         4       PPD 2025-07-26       Male     HU 1240984     1089933
+7  INF0179         9       PPD 2025-07-26       Male     HU 1705960     1492142
+8  INF0186         4       PPD 2025-07-26     Female HEU-hi  848584      759606
+9  INF0186         9       PPD 2025-07-26     Female HEU-hi 1425416     1259825
+10 INF0052         0       SEB 2025-07-26       Male HEU-hi 2523776     2282292
+   singletsSSC singletsSSCB      CD45 NotMonocytes nonDebris lymphocytes
+1      1875394      1732620 0.5873838    0.8619837 0.8429685   0.6408044
+2      1717636      1597085 0.9063081    0.9251961 0.8771889   0.2174284
+3       796056       767297 0.9709891    0.9848719 0.9556049   0.7313503
+4       626387       600011 0.9822803    0.9842139 0.8123041   0.6223228
+5      1047081      1000877 0.9470275    0.9575685 0.9134438   0.6996502
+6       868877       814909 0.9855947    0.9541417 0.9400824   0.7303074
+7      1163543      1107878 0.9820919    0.9816909 0.9681656   0.7933252
+8       648405       607514 0.9824778    0.9539480 0.9250170   0.6720872
+9      1089955      1014266 0.9771490    0.9552573 0.9137615   0.6332438
+10     2041563      1889418 0.5783591    0.8878072 0.8670150   0.6718563
+        live      Dump+     Dump-    Tcells        Vd2+      Vd2-      Va7.2+
+1  0.9009254 0.20743228 0.6934931 0.2835676 0.007408209 0.9925918 0.015070567
+2  0.8929673 0.06181426 0.8311531 0.6735798 0.007137230 0.9928628 0.016718006
+3  0.8782307 0.20727202 0.6709587 0.5989873 0.005254643 0.9947454 0.016097899
+4  0.9566639 0.23164587 0.7250180 0.6489405 0.011935922 0.9880641 0.018552985
+5  0.8856898 0.33186111 0.5538287 0.4441538 0.004382972 0.9956170 0.012972375
+6  0.9602599 0.34357211 0.6166878 0.5654655 0.004320429 0.9956796 0.012668837
+7  0.9344566 0.24759143 0.6868651 0.6687319 0.002733755 0.9972662 0.013303235
+8  0.8622229 0.32641070 0.5358122 0.4757720 0.009483639 0.9905164 0.043525189
+9  0.8793039 0.23863251 0.6406714 0.5818617 0.018224039 0.9817760 0.037381872
+10 0.9115652 0.23344716 0.6781180 0.2741661 0.009225633 0.9907744 0.008420812
+      Va7.2-      CD4+      CD4-      CD8+       CD8- Tcells_count
+1  0.9775212 0.6340345 0.3434867 0.2744119 0.06907479       184930
+2  0.9761448 0.6145707 0.3615741 0.3312279 0.03034620       211987
+3  0.9786475 0.6559480 0.3226994 0.2912084 0.03149109       326378
+4  0.9695111 0.4306889 0.5388222 0.4908558 0.04796636       238021
+5  0.9826447 0.7499194 0.2327253 0.1850897 0.04763554       294549
+6  0.9830107 0.6318771 0.3511336 0.3177460 0.03338760       331680
+7  0.9839630 0.7016361 0.2823269 0.2559335 0.02639338       577228
+8  0.9469912 0.5309109 0.4160803 0.3912185 0.02486181       190855
+9  0.9443941 0.5033806 0.4410135 0.4213381 0.01967539       365177
+10 0.9823536 0.6083254 0.3740281 0.2811756 0.09285249       201287
+   lymphocytes_count  Monocytes     Debris CD45_count
+1             652155 0.13801632 0.15703150    1017713
+2             314717 0.07480391 0.12281107    1447451
+3             544883 0.01512811 0.04439511     745037
+4             366784 0.01578611 0.18769586     589379
+5             663169 0.04243146 0.08655621     947858
+6             586561 0.04585829 0.05991758     803170
+7             863168 0.01830910 0.03183437    1088038
+8             401148 0.04605198 0.07498295     596869
+9             627601 0.04474270 0.08623847     991089
+10            734179 0.11219277 0.13298504    1092762
+
+
+
+
+
+ +
+
+
+
+
+
+ +
+


+
+
+

While this works when we have a limited number of condition values, what if we had many more but only wanted to exclude one value? As we saw when learning about conditionals, adding a ! in front of a logical value returns the opposite logical value

+
+
+
+
+
+
+
+
IsThisASpectralInstrument <- TRUE
+
+!IsThisASpectralInstrument
+
+
[1] FALSE
+
+
+
+
+
+ +
+
+
+
+
+
+ +
+


+
+
+

In the context of the dplyr package, we can use ! within filter() to remove rows that contain a certain value

+
+
+
+
+
+
+
+
Subset <- Data |> filter(!Condition %in% "SEB")
+Subset |> pull(Condition) |> unique()
+
+
[1] "Ctrl" "PPD" 
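A brief sketch (not from the original lesson): filter() conditions can also be combined, where commas (or &) mean "and" and | means "or". This assumes the Data object from this session.

```r
# Sketch: combining filter() conditions.
library(dplyr)

Data |>
  filter(Condition %in% "Ctrl", timepoint == 0) |>  # both conditions must hold
  head(5)

Data |>
  filter(Tcells > 0.6 | Monocytes < 0.02) |>        # either condition may hold
  head(5)
```
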
+
+
+
+
+
+ +
+
+
+
+
+
+ +
+


+
+
+

Likewise, we can also use it with select() to exclude columns we don't want to keep

+
+
+
+
+
+
+
+
Subset <- Data |> select(!timepoint)
+Subset[1:3,]
+
+
      bid Condition       Date infant_sex  ptype    root singletsFSC
+1 INF0052      Ctrl 2025-07-26       Male HEU-hi 2098368     1894070
+2 INF0100      Ctrl 2025-07-26       Male HEU-lo 2020184     1791890
+3 INF0100      Ctrl 2025-07-26       Male HEU-lo 1155040     1033320
+  singletsSSC singletsSSCB      CD45 NotMonocytes nonDebris lymphocytes
+1     1666179      1537396 0.5952943    0.8820349 0.8627649   0.6420138
+2     1697083      1579098 0.9106762    0.9052256 0.8602660   0.2145848
+3      875465       845446 0.9705765    0.9845400 0.9578793   0.7403110
+       live      Dump+     Dump-    Tcells        Vd2+      Vd2-     Va7.2+
+1 0.9020581 0.21090996 0.6911482 0.2804264 0.008120361 0.9918796 0.01448070
+2 0.8908981 0.06252775 0.8283703 0.6748298 0.007265620 0.9927344 0.01577499
+3 0.8757665 0.20023803 0.6755285 0.6119129 0.004651313 0.9953487 0.01579402
+     Va7.2-      CD4+      CD4-      CD8+       CD8- Tcells_count
+1 0.9773989 0.6341164 0.3432825 0.2734826 0.06979990       164771
+2 0.9769594 0.6119112 0.3650482 0.3357696 0.02927858       208241
+3 0.9795547 0.6639621 0.3155925 0.2862104 0.02938209       371723
+  lymphocytes_count  Monocytes     Debris CD45_count
+1            587573 0.11796509 0.13723513     915203
+2            308583 0.09477437 0.13973396    1438047
+3            607477 0.01545999 0.04212072     820570
+
+
+
+
+
+

mutate

+
+
+
+
+
+
+ +
+


+
+
+

As we can see, with just these handful of functions, we have the building blocks to rearrange and subset a larger data.frame into a format that we prefer. But what if we wanted to alter the content of a column, or add new columns to an existing data.frame? This is where the mutate() function can be used.

+
+
+
+
+
+
+
+ +
+
+
+
+
+
+ +
+


+
+
+

Let’s start by slimming down our current Data to a smaller workable example, highlighting the functions and pipes we learned about today

+
+
+
+
+
+
+
+
TidyData <- Data |> filter(Condition %in% "Ctrl") |> filter(timepoint %in% "0") |>
+     select(bid, timepoint, Condition, Date, Tcells_count, CD45_count) |>
+      rename(specimen=bid, condition=Condition) |> relocate(Date, .after=specimen)
+
+
+
+
+ +
+
+
TidyData
+
+
   specimen       Date timepoint condition Tcells_count CD45_count
+1   INF0052 2025-07-26         0      Ctrl       164771     915203
+2   INF0100 2025-07-26         0      Ctrl       208241    1438047
+3   INF0179 2025-07-26         0      Ctrl       291777     940733
+4   INF0134 2025-07-29         0      Ctrl       127866     689676
+5   INF0148 2025-07-29         0      Ctrl       234335    1013985
+6   INF0191 2025-07-29         0      Ctrl        55780     715443
+7   INF0124 2025-07-31         0      Ctrl        70297     687720
+8   INF0149 2025-07-31         0      Ctrl       107900     857845
+9   INF0169 2025-07-31         0      Ctrl        75540     854594
+10  INF0019 2025-08-05         0      Ctrl       208055     873622
+11  INF0032 2025-08-05         0      Ctrl       361034     753064
+12  INF0180 2025-08-05         0      Ctrl       284958    1049663
+13  INF0155 2025-08-07         0      Ctrl       281626    1065048
+14  INF0158 2025-08-07         0      Ctrl       280913    1249338
+15  INF0159 2025-08-07         0      Ctrl       452551    1190219
+16  INF0013 2025-08-22         0      Ctrl       182751     836573
+17  INF0023 2025-08-22         0      Ctrl       218435     968035
+18  INF0030 2025-08-22         0      Ctrl        85521     732321
+19  INF0166 2025-08-28         0      Ctrl       225650     739495
+20  INF0199 2025-08-28         0      Ctrl       169736    1112176
+21  INF0207 2025-08-28         0      Ctrl        39055     905365
+22  INF0614 2025-08-30         0      Ctrl       224396    1569007
+23  INF0622 2025-08-30         0      Ctrl       161924     939307
+
+
+
+
+
+ +
+
+
+
+
+
+ +
+


+
+
+

The mutate() function can be used to modify existing columns, as well as to create new ones. For example, let's derive the proportion of T cells within the overall CD45 gate. To do so, within the parentheses, we specify a new column name and then divide one original column by the other:

+
+
+
+
+
+
+
+
TidyData <- TidyData |> mutate(Tcells_ProportionCD45 = Tcells_count / CD45_count)
+TidyData
+
+
   specimen       Date timepoint condition Tcells_count CD45_count
+1   INF0052 2025-07-26         0      Ctrl       164771     915203
+2   INF0100 2025-07-26         0      Ctrl       208241    1438047
+3   INF0179 2025-07-26         0      Ctrl       291777     940733
+4   INF0134 2025-07-29         0      Ctrl       127866     689676
+5   INF0148 2025-07-29         0      Ctrl       234335    1013985
+6   INF0191 2025-07-29         0      Ctrl        55780     715443
+7   INF0124 2025-07-31         0      Ctrl        70297     687720
+8   INF0149 2025-07-31         0      Ctrl       107900     857845
+9   INF0169 2025-07-31         0      Ctrl        75540     854594
+10  INF0019 2025-08-05         0      Ctrl       208055     873622
+11  INF0032 2025-08-05         0      Ctrl       361034     753064
+12  INF0180 2025-08-05         0      Ctrl       284958    1049663
+13  INF0155 2025-08-07         0      Ctrl       281626    1065048
+14  INF0158 2025-08-07         0      Ctrl       280913    1249338
+15  INF0159 2025-08-07         0      Ctrl       452551    1190219
+16  INF0013 2025-08-22         0      Ctrl       182751     836573
+17  INF0023 2025-08-22         0      Ctrl       218435     968035
+18  INF0030 2025-08-22         0      Ctrl        85521     732321
+19  INF0166 2025-08-28         0      Ctrl       225650     739495
+20  INF0199 2025-08-28         0      Ctrl       169736    1112176
+21  INF0207 2025-08-28         0      Ctrl        39055     905365
+22  INF0614 2025-08-30         0      Ctrl       224396    1569007
+23  INF0622 2025-08-30         0      Ctrl       161924     939307
+   Tcells_ProportionCD45
+1             0.18003765
+2             0.14480820
+3             0.31015921
+4             0.18540010
+5             0.23110302
+6             0.07796568
+7             0.10221747
+8             0.12578030
+9             0.08839285
+10            0.23815220
+11            0.47942008
+12            0.27147570
+13            0.26442564
+14            0.22484948
+15            0.38022498
+16            0.21845195
+17            0.22564783
+18            0.11678076
+19            0.30514067
+20            0.15261613
+21            0.04313730
+22            0.14301785
+23            0.17238666

We can see that many decimal places are being returned. Let’s round this new column to 2 decimal places by applying the round() function

TidyData <- TidyData |> mutate(TcellsRounded = round(Tcells_ProportionCD45, 2))
TidyData
   specimen       Date timepoint condition Tcells_count CD45_count
+1   INF0052 2025-07-26         0      Ctrl       164771     915203
+2   INF0100 2025-07-26         0      Ctrl       208241    1438047
+3   INF0179 2025-07-26         0      Ctrl       291777     940733
+4   INF0134 2025-07-29         0      Ctrl       127866     689676
+5   INF0148 2025-07-29         0      Ctrl       234335    1013985
+6   INF0191 2025-07-29         0      Ctrl        55780     715443
+7   INF0124 2025-07-31         0      Ctrl        70297     687720
+8   INF0149 2025-07-31         0      Ctrl       107900     857845
+9   INF0169 2025-07-31         0      Ctrl        75540     854594
+10  INF0019 2025-08-05         0      Ctrl       208055     873622
+11  INF0032 2025-08-05         0      Ctrl       361034     753064
+12  INF0180 2025-08-05         0      Ctrl       284958    1049663
+13  INF0155 2025-08-07         0      Ctrl       281626    1065048
+14  INF0158 2025-08-07         0      Ctrl       280913    1249338
+15  INF0159 2025-08-07         0      Ctrl       452551    1190219
+16  INF0013 2025-08-22         0      Ctrl       182751     836573
+17  INF0023 2025-08-22         0      Ctrl       218435     968035
+18  INF0030 2025-08-22         0      Ctrl        85521     732321
+19  INF0166 2025-08-28         0      Ctrl       225650     739495
+20  INF0199 2025-08-28         0      Ctrl       169736    1112176
+21  INF0207 2025-08-28         0      Ctrl        39055     905365
+22  INF0614 2025-08-30         0      Ctrl       224396    1569007
+23  INF0622 2025-08-30         0      Ctrl       161924     939307
+   Tcells_ProportionCD45 TcellsRounded
+1             0.18003765          0.18
+2             0.14480820          0.14
+3             0.31015921          0.31
+4             0.18540010          0.19
+5             0.23110302          0.23
+6             0.07796568          0.08
+7             0.10221747          0.10
+8             0.12578030          0.13
+9             0.08839285          0.09
+10            0.23815220          0.24
+11            0.47942008          0.48
+12            0.27147570          0.27
+13            0.26442564          0.26
+14            0.22484948          0.22
+15            0.38022498          0.38
+16            0.21845195          0.22
+17            0.22564783          0.23
+18            0.11678076          0.12
+19            0.30514067          0.31
+20            0.15261613          0.15
+21            0.04313730          0.04
+22            0.14301785          0.14
+23            0.17238666          0.17
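
A side note worth keeping in mind: round(x, 2) keeps two *decimal places*, while *significant digits* are handled by the separate base R function signif(). A quick toy comparison (the value here is made up for illustration):

```r
x <- 0.0479421

round(x, 2)   # 0.05  (two decimal places)
signif(x, 2)  # 0.048 (two significant digits)
```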

arrange


And while we are here, let’s rearrange the rows so that they are in descending order based on the T cell proportion. We can do this using the desc() and arrange() functions from dplyr:

TidyData <- TidyData |> arrange(desc(TcellsRounded))

And let’s go ahead and use filter() to identify the specimens that had more than 30% T cells as part of the overall CD45 gate (for context, these samples were cord blood):

TidyData |> filter(TcellsRounded > 0.3)
  specimen       Date timepoint condition Tcells_count CD45_count
+1  INF0032 2025-08-05         0      Ctrl       361034     753064
+2  INF0159 2025-08-07         0      Ctrl       452551    1190219
+3  INF0179 2025-07-26         0      Ctrl       291777     940733
+4  INF0166 2025-08-28         0      Ctrl       225650     739495
+  Tcells_ProportionCD45 TcellsRounded
+1             0.4794201          0.48
+2             0.3802250          0.38
+3             0.3101592          0.31
+4             0.3051407          0.31
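
filter() can also combine several conditions at once: conditions separated by commas must all hold (logical AND), while | gives OR. A sketch using the same columns (the cutoff values here are arbitrary, and output is not shown):

```r
# Comma-separated conditions are combined with AND
TidyData |> filter(TcellsRounded > 0.3, Date == "2025-08-05")

# Use | for OR: specimens with either very high or very low proportions
TidyData |> filter(TcellsRounded > 0.4 | TcellsRounded < 0.05)
```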

If we had wanted to retrieve just the specimen IDs, we could add pull() after another pipe:

TidyData |> filter(TcellsRounded > 0.3) |> pull(specimen)
[1] "INF0032" "INF0159" "INF0179" "INF0166"

And finally, since I may want to send the data to a supervisor, let’s go ahead and export this “tidied” version of our data.frame to its own .csv file. Working within our project folder, this would look like the following:

NewName <- paste0("MyNewDataset", ".csv")
StorageLocation <- file.path("data", NewName)
StorageLocation
[1] "data/MyNewDataset.csv"
write.csv(TidyData, StorageLocation, row.names=FALSE)
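
If you end up exporting updated versions over time, one optional pattern (my own suggestion, not something we used above) is to date-stamp the file name with Sys.Date(), which paste0() happily converts to text:

```r
# Sys.Date() returns today's date as "YYYY-MM-DD", giving unique, sortable file names
DatedName <- paste0("MyNewDataset_", Sys.Date(), ".csv")
write.csv(TidyData, file.path("data", DatedName), row.names = FALSE)
```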

Take Away


In this session, we explored the main functions within the dplyr package used in the context of “tidying” data, including selecting columns, filtering rows, and creating or modifying values. We will continue to build on these throughout the course, introducing a few additional tidyverse functions we don’t have time to cover today as appropriate. As we saw, knowing how to use these functions allows us to extensively and quickly modify our existing exported data files.
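
Putting it together, the steps from this session can be condensed into a single chained pipeline. This is a sketch restating the code above, assuming TidyData as it looked before the new columns were added:

```r
# One pipeline covering the dplyr verbs from this session
TidyData |>
  mutate(Tcells_ProportionCD45 = Tcells_count / CD45_count,
         TcellsRounded = round(Tcells_ProportionCD45, 2)) |>
  arrange(desc(TcellsRounded)) |>
  filter(TcellsRounded > 0.3) |>
  pull(specimen)
```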


One important goal as we move through the course (in terms of both reproducibility and replicability) is to modify files only within R, rather than going back to the original .csv or Excel file and hand-modifying individual values. Hand-editing is neither reproducible nor replicable. Once set up, an R script can quickly re-run these same cleanup steps and leave a documented record of how the data has changed (even more so if you are maintaining version control). If you do want to save the changes you have made, it is best to write them out as a new .csv file that you then work with.


Next week, we will be using these skills when setting up metadata for our .fcs files. We will additionally take a look at the main source of format controversy within Bioconductor flow cytometry packages, i.e. whether to use a flowFrame or a cytoframe. Exciting stuff, and important to know, as the functions needed to import them are slightly different. We will also look at how to import existing manually gated .wsp files from FlowJo/Diva/Floreada via the CytoML package.


Additional Resources


Data Organization in Spreadsheets for Ecologists This Carpentry self-study course was one of my “Aha” moments early on when learning R, and reinforced the need to keep my own Excel/CSV files in a tidy manner. It is worth the time going through in its entirety (even for non-Ecologists).


Data Analysis and Visualization in R for Ecologists Continuation of the above, and a good way to continue building on the tidyverse functions we learned today.


Simplistics: Introduction to Tidyverse in R The YouTube channel is mainly focused on statistics for Psych classes, but at the end of the day, we are all working with similar objects with rows and columns, just the values contained within differ.


Riffomonas Project Playlist: Data Manipulation with R’s Tidyverse Riffomonas has a playlist that delves into both the tidyverse functions we used today, as well as other ones we will encounter later on in the course.


Take-home Problems


Problem 1


Taking a dataset (either today’s or one of your own), work through the column-operating functions (select(), rename(), and relocate()). Once this is done, filter() by conditions from two separate columns, arrange() in an order that makes sense, and export this “tidy” data as a .csv file.


Problem 2


We used the mutate() function to create new columns, but it can also be used to modify existing ones. Various numeric columns are showing way too many decimal places. As was shown, use round() to round all these proportion columns, but use mutate() to overwrite the existing columns. Export this as its own .csv file.


Problem 3


We can also use mutate() to combine columns. For our dataset, “bid”, “timepoint”, and “Condition” are separate columns that were originally all part of the filename of the individual .fcs file. Try to figure out a way to combine them back together using paste0(), and save the new column as “filename”. Once this is done, pull() the contents of this column, and try to determine whether there were any duplicates (think of innovative ways to use !, length(), and unique()).
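
As a hint for the duplicate check, here is the general idiom applied to a toy character vector (the values are made up; adapt it to your pulled filename column):

```r
# A vector with one deliberate duplicate
filenames <- c("A_0_Ctrl", "B_0_Ctrl", "A_0_Ctrl")

# unique() drops repeats, so the lengths only match when every value is distinct;
# negating the comparison returns TRUE when duplicates are present
!(length(filenames) == length(unique(filenames)))  # TRUE here, "A_0_Ctrl" appears twice
```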


AGPL-3.0 CC BY-SA 4.0

+ + + + + + + + + + + + + + + + + + + + + + + + + \ No newline at end of file diff --git a/docs/course/index.html b/docs/course/index.html index d16c51b..ae53cab 100644 --- a/docs/course/index.html +++ b/docs/course/index.html @@ -234,6 +234,12 @@ 03 - Inside a .FCS file + + diff --git a/docs/search.json b/docs/search.json index b35f7f3..cf58c5a 100644 --- a/docs/search.json +++ b/docs/search.json @@ -1,831 +1,1130 @@ [ { - "objectID": "course/03_InsideFCSFile/index.html", - "href": "course/03_InsideFCSFile/index.html", - "title": "03 - Inside an FCS File", + "objectID": "course/04_IntroToTidyverse/index.html", + "href": "course/04_IntroToTidyverse/index.html", + "title": "04 - Introduction to Tidyverse", "section": "", "text": "For the YouTube livestream recording, see here\nFor screen-shot slides, click here", "crumbs": [ "About", "Intro to R", - "03 - Inside a .FCS file" + "04 - Intro to Tidyverse" ] }, { - "objectID": "course/03_InsideFCSFile/index.html#getting-set-up", - "href": "course/03_InsideFCSFile/index.html#getting-set-up", - "title": "03 - Inside an FCS File", - "section": "Getting Set Up", - "text": "Getting Set Up\n\n\nSet up File Paths\nHaving copied over the new data to your working project folder (Week 3 or whatever your chosen name), let’s identify the file paths between our working directory and the fcs files. If you retained the same project organization structure we had during Week #2, it may look similar to the following:\n\nPathToDataFolder <- file.path(\"data\")\n\n\nPathToDataFolder\n\n[1] \"data\"\n\n\n\n\n\n\nLocate .fcs files\nWe will now locate our .fcs files. 
As we saw last week, our computer will need the full file.paths to these individual files, so we will set the list.files() “full.names” argument to TRUE.\n\nfcs_files <- list.files(PathToDataFolder, pattern=\".fcs\", full.names=TRUE)\nfcs_files\n\n[1] \"data/CellCounts4L_AB_05_ND050_05.fcs\"\n\n\nBy contrast, if the “full.names” argument was set to FALSE, we would have retrieved just the file names\n\nlist.files(PathToDataFolder, pattern=\".fcs\", full.names=FALSE)\n\n[1] \"CellCounts4L_AB_05_ND050_05.fcs\"\n\n\nThis would have been the equivalent of running the basename function on the “full.names=TRUE” output.\n\nbasename(fcs_files)\n\n[1] \"CellCounts4L_AB_05_ND050_05.fcs\"", + "objectID": "course/04_IntroToTidyverse/index.html#read.csv", + "href": "course/04_IntroToTidyverse/index.html#read.csv", + "title": "04 - Introduction to Tidyverse", + "section": "read.csv", + "text": "read.csv\nWe will start by first loading in our copied over dataset (Dataset.csv) from it’s location in the project folder. If you are following the organization scheme we have been using throughout the course, your file path will look something like this:\n\nthefilepath <- file.path(\"data\", \"Dataset.csv\")\n\nthefilepath\n\n[1] \"data/Dataset.csv\"\n\n\n\n\n\n\n\n\nReminder\n\n\n\nWe encourage using the file.path function to build our file paths, as this keeps our code reproducible and replicable when a project folder is copied to other people’s computers that differ on whether the operating system uses forward or backward slash separation between folders.\n\n\nAbove, we directly specified the name (Dataset) and filetype (.csv) of the file we wanted in the last argument of the file.path (“Dataset.csv”). This allows us to skip the list.files() step we used last week as we have provided the full file path. 
While this approach can be faster, if we accidentally mistype the file name, we could end up with an error at the next step due to no files being found with the mistyped name.\nSince our dataset is stored as a .csv file, we will be using the read.csv() function from the utils package (included in our base R software installation) to read it into R. We will also use the colnames() function from last week to get a read-out of the column names.\n\nData <- read.csv(file=thefilepath, check.names=FALSE)\ncolnames(Data)\n\n [1] \"bid\" \"timepoint\" \"Condition\" \n [4] \"Date\" \"infant_sex\" \"ptype\" \n [7] \"root\" \"singletsFSC\" \"singletsSSC\" \n[10] \"singletsSSCB\" \"CD45\" \"NotMonocytes\" \n[13] \"nonDebris\" \"lymphocytes\" \"live\" \n[16] \"Dump+\" \"Dump-\" \"Tcells\" \n[19] \"Vd2+\" \"Vd2-\" \"Va7.2+\" \n[22] \"Va7.2-\" \"CD4+\" \"CD4-\" \n[25] \"CD8+\" \"CD8-\" \"Tcells_count\" \n[28] \"lymphocytes_count\" \"Monocytes\" \"Debris\" \n[31] \"CD45_count\" \n\n\nAs we look at the line of code, we now have enough context to decipher that the “file” argument is where we provide a file path to an individual file, but what does the “check.names” argument do?\nLet’s see what happens to the column names when we set “check.names” argument to TRUE:\n\nData_Alternative <- read.csv(thefilepath, check.names=TRUE)\ncolnames(Data_Alternative)\n\n [1] \"bid\" \"timepoint\" \"Condition\" \n [4] \"Date\" \"infant_sex\" \"ptype\" \n [7] \"root\" \"singletsFSC\" \"singletsSSC\" \n[10] \"singletsSSCB\" \"CD45\" \"NotMonocytes\" \n[13] \"nonDebris\" \"lymphocytes\" \"live\" \n[16] \"Dump.\" \"Dump..1\" \"Tcells\" \n[19] \"Vd2.\" \"Vd2..1\" \"Va7.2.\" \n[22] \"Va7.2..1\" \"CD4.\" \"CD4..1\" \n[25] \"CD8.\" \"CD8..1\" \"Tcells_count\" \n[28] \"lymphocytes_count\" \"Monocytes\" \"Debris\" \n[31] \"CD45_count\" \n\n\nAs we can see, any column name that contained a special character or a space was automatically converted over to R-approved syntax. 
However, this resulted in the loss of both +” and “-”, leaving us unable to determine whether we are looking at cells within or outside a particular gate.\n\nBecause of this, it is often better to rename columns individually after import, which we will learn how to do later today.\nFollowing up with what we practiced last week, lets use the head() function to visualize the first few rows of data.\n\nhead(Data, 3)\n\n bid timepoint Condition Date infant_sex ptype root singletsFSC\n1 INF0052 0 Ctrl 2025-07-26 Male HEU-hi 2098368 1894070\n2 INF0100 0 Ctrl 2025-07-26 Male HEU-lo 2020184 1791890\n3 INF0100 4 Ctrl 2025-07-26 Male HEU-lo 1155040 1033320\n singletsSSC singletsSSCB CD45 NotMonocytes nonDebris lymphocytes\n1 1666179 1537396 0.5952943 0.8820349 0.8627649 0.6420138\n2 1697083 1579098 0.9106762 0.9052256 0.8602660 0.2145848\n3 875465 845446 0.9705765 0.9845400 0.9578793 0.7403110\n live Dump+ Dump- Tcells Vd2+ Vd2- Va7.2+\n1 0.9020581 0.21090996 0.6911482 0.2804264 0.008120361 0.9918796 0.01448070\n2 0.8908981 0.06252775 0.8283703 0.6748298 0.007265620 0.9927344 0.01577499\n3 0.8757665 0.20023803 0.6755285 0.6119129 0.004651313 0.9953487 0.01579402\n Va7.2- CD4+ CD4- CD8+ CD8- Tcells_count\n1 0.9773989 0.6341164 0.3432825 0.2734826 0.06979990 164771\n2 0.9769594 0.6119112 0.3650482 0.3357696 0.02927858 208241\n3 0.9795547 0.6639621 0.3155925 0.2862104 0.02938209 371723\n lymphocytes_count Monocytes Debris CD45_count\n1 587573 0.11796509 0.13723513 915203\n2 308583 0.09477437 0.13973396 1438047\n3 607477 0.01545999 0.04212072 820570\n\n\nWhen working in Positron, we could have alternatively clicked on the little grid icon next to our created variable “Data” in the right secondary sidebar, which would have opened the data in our Editor window. 
From this same window, we can see it is stored as a “data.frame” object type.\n\nWe could also achieve the same window to open using the View() function:\n\nView(Data)\n\nWrapping up our brief recap of last week functions, we can check an objects type using both the class() and str() functions.\n\nclass(Data)\n\n[1] \"data.frame\"\n\n\n\nstr(Data)\n\n'data.frame': 196 obs. of 31 variables:\n $ bid : chr \"INF0052\" \"INF0100\" \"INF0100\" \"INF0100\" ...\n $ timepoint : int 0 0 4 9 0 4 9 4 9 0 ...\n $ Condition : chr \"Ctrl\" \"Ctrl\" \"Ctrl\" \"Ctrl\" ...\n $ Date : chr \"2025-07-26\" \"2025-07-26\" \"2025-07-26\" \"2025-07-26\" ...\n $ infant_sex : chr \"Male\" \"Male\" \"Male\" \"Male\" ...\n $ ptype : chr \"HEU-hi\" \"HEU-lo\" \"HEU-lo\" \"HEU-lo\" ...\n $ root : int 2098368 2020184 1155040 358624 1362216 1044808 1434840 972056 1521928 2363512 ...\n $ singletsFSC : int 1894070 1791890 1033320 328624 1206309 917398 1265022 875707 1359574 2136616 ...\n $ singletsSSC : int 1666179 1697083 875465 289327 1032946 735579 988445 767323 1175755 1875394 ...\n $ singletsSSCB : int 1537396 1579098 845446 276289 982736 685592 940454 718000 1097478 1732620 ...\n $ CD45 : num 0.595 0.911 0.971 0.982 0.957 ...\n $ NotMonocytes : num 0.882 0.905 0.985 0.986 0.956 ...\n $ nonDebris : num 0.863 0.86 0.958 0.941 0.841 ...\n $ lymphocytes : num 0.642 0.215 0.74 0.651 0.705 ...\n $ live : num 0.902 0.891 0.876 0.915 0.895 ...\n $ Dump+ : num 0.2109 0.0625 0.2002 0.2147 0.3383 ...\n $ Dump- : num 0.691 0.828 0.676 0.701 0.557 ...\n $ Tcells : num 0.28 0.675 0.612 0.631 0.44 ...\n $ Vd2+ : num 0.00812 0.00727 0.00465 0.01135 0.00475 ...\n $ Vd2- : num 0.992 0.993 0.995 0.989 0.995 ...\n $ Va7.2+ : num 0.0145 0.0158 0.0158 0.017 0.0133 ...\n $ Va7.2- : num 0.977 0.977 0.98 0.972 0.982 ...\n $ CD4+ : num 0.634 0.612 0.664 0.438 0.739 ...\n $ CD4- : num 0.343 0.365 0.316 0.534 0.243 ...\n $ CD8+ : num 0.273 0.336 0.286 0.486 0.195 ...\n $ CD8- : num 0.0698 0.0293 0.0294 0.0476 0.0476 
...\n $ Tcells_count : int 164771 208241 371723 111552 291777 271870 487937 220634 415867 184930 ...\n $ lymphocytes_count: int 587573 308583 607477 176662 663667 510730 726238 451047 710964 652155 ...\n $ Monocytes : num 0.118 0.0948 0.0155 0.0145 0.0444 ...\n $ Debris : num 0.1372 0.1397 0.0421 0.0587 0.1592 ...\n $ CD45_count : int 915203 1438047 820570 271304 940733 675857 921660 701657 1066884 1017713 ...", "crumbs": [ "About", "Intro to R", - "03 - Inside a .FCS file" + "04 - Intro to Tidyverse" ] }, { - "objectID": "course/03_InsideFCSFile/index.html#flowcore", - "href": "course/03_InsideFCSFile/index.html#flowcore", - "title": "03 - Inside an FCS File", - "section": "flowCore", - "text": "flowCore\nWe will be using the flowCore package, which is the oldest and most-frequently downloaded flow cytometry package on Bioconductor.\n\n\nCode\n# I have attached this code for anyone that is interested in seeing how these plots were made. The content is not part of today's lesson, so if you are just starting off, we will cover the details of data-tidying and creating ggplot objects over the next several weeks. Best, David\n\n# Load required packages via a library call\n\nlibrary(dplyr) # CRAN\nlibrary(stringr) # CRAN\nlibrary(ggplot2) # CRAN\n#library(plotly) # Using the :: to access \n\n# Loading in the dataset contained within the .csv file\nBioconductorFlow_path <- file.path(PathToDataFolder, \"BioconductorFlow.csv\")\nBioconductorFlowPackages <- read.csv(BioconductorFlow_path, check.names=FALSE)\nBioconductorFlowPackages <- BioconductorFlowPackages |>\n arrange(desc(since)) |> mutate(package = factor(package, levels = package))\n\n# Newer Base R Pipe : |> \n# Older mostly equivalent Magrittr Pipe %>% \n\n\n\n\nCode\n# Notice the code-chunk eval arguments above dictate the shape of the final rendered plot. \n\n# Taking the imported dataset and passing it to ggplot2 to create the first plot. 
\n\nplot <- ggplot(BioconductorFlowPackages,\n aes(x = 0, xend = since, y = package, yend = package)) +\n geom_segment(linewidth = 2, color = \"steelblue\") +\n scale_x_continuous(trans = \"reverse\", \n breaks = seq(0, max(BioconductorFlowPackages$since), by = 5)) +\n labs(\n x = \"Years in Bioconductor\",\n y = NULL,\n title = \"Bioconductor Flow Cytometry R packages\"\n ) +\n theme_bw()\n\n# Taking the static plot and making it interactive using the plotly package\n\nplotly::ggplotly(plot)\n\n\n\n\n\n\n\n\nCode\n# Retrieving the names of Bioconductor flow cytometry R packages in correct release order. \n\nHistoricalOrder <- BioconductorFlowPackages |> pull(package)\n\n# Bringing in 2025 package usage dataset from a .csv file\nBioconductorUsage_path <- file.path(PathToDataFolder, \"BioconductorDownloads.csv\")\nBioconductorUsage <- read.csv(BioconductorUsage_path, check.names=FALSE)\nBioconductorUsage <- BioconductorUsage |> dplyr::filter(Month %in% \"all\")\n\n# Note, dplyr::filter is used due to flowCore also having a filter function, which causes conflicts once it is attached to the local environment. \n\n# Combining both data.frames for use in the plot\n\nDataset <- left_join(BioconductorFlowPackages, BioconductorUsage, by=\"package\")\n\n# Rearranging the order in which packages are displayed\n\nDataset$package <- factor(Dataset$package, levels=HistoricalOrder)\n\n\n\n\nCode\n# Generating the 2nd plot with ggplot2\n\nplot <- ggplot(Dataset, aes(x = since, y = Nb_of_distinct_IPs)) +\n geom_point(aes(color = package), size = 3, alpha = 0.7) + \n labs(\n x = \"Years in Bioconductor\",\n y = \"Number of Yearly Downloads\",\n title = \"\",\n color = \"Package\"\n ) +\n theme_bw()\n\n# Making it interactive with plotly\n\nplotly::ggplotly(plot)\n\n\n\n\n\n\nflowCore is also one of the many Bioconductor packages maintained by Mike Jiang. 
In many ways (as those who completed the optional take-home problems for Week #1 know) reminiscent of this xkcd comic:\n\nAs with all our R packages, we first need to make sure flowCore is attached to our local environment via the library call.\n\nlibrary(flowCore)\n\nThe function we will be using today is the read.FCS() function. Do you remember how to access the help documentation?\n\n\nCode\n# Or when in Positron, hovering over the highlighted function name within the code-chunk\n\n?flowCore::read.FCS\n\n\nTo start, lets select just the first .fcs file. We will do this by indexing the first item within fcs_files via the square brackets [].\n\nfirstfile <- fcs_files[1]\nfirstfile\n\n[1] \"data/CellCounts4L_AB_05_ND050_05.fcs\"", + "objectID": "course/04_IntroToTidyverse/index.html#data.frame", + "href": "course/04_IntroToTidyverse/index.html#data.frame", + "title": "04 - Introduction to Tidyverse", + "section": "data.frame", + "text": "data.frame\nOr alternatively using the new-to-us glimpse() function\n\nglimpse(Data)\n\nError in `glimpse()`:\n! could not find function \"glimpse\"\n\n\n\n\n\n\n\n\nCheckpoint 1\n\n\n\nThis however returns an error. Any idea why this might be occuring?\n\n\n\n\nCode\n# We haven't attached/loaded the package in which the function glimpse is within\n\n\n\n\n\n\n\n\nCheckpoint 2\n\n\n\nHow would we locate a package a not-yet-loaded function is within?\n\n\n\n\nCode\n# We can use double ? to search all installed packages for a function, regardless\n# if the package is attached to the environment or not\n\n??glimpse\n\n\n\nFrom the list of search matches (in the right secondary sidebar), it looks likely that the glimpse() function in the dplyr package was the one we were looking for. This is one the main tidyverse packages we will be using throughout the course. 
Let’s attach it to our environment via the library() call first and try running glimpse() again.\n\nlibrary(dplyr)\nglimpse(Data)\n\nRows: 196\nColumns: 31\n$ bid <chr> \"INF0052\", \"INF0100\", \"INF0100\", \"INF0100\", \"INF0179…\n$ timepoint <int> 0, 0, 4, 9, 0, 4, 9, 4, 9, 0, 0, 4, 9, 0, 4, 9, 4, 9…\n$ Condition <chr> \"Ctrl\", \"Ctrl\", \"Ctrl\", \"Ctrl\", \"Ctrl\", \"Ctrl\", \"Ctr…\n$ Date <chr> \"2025-07-26\", \"2025-07-26\", \"2025-07-26\", \"2025-07-2…\n$ infant_sex <chr> \"Male\", \"Male\", \"Male\", \"Male\", \"Male\", \"Male\", \"Mal…\n$ ptype <chr> \"HEU-hi\", \"HEU-lo\", \"HEU-lo\", \"HEU-lo\", \"HU\", \"HU\", …\n$ root <int> 2098368, 2020184, 1155040, 358624, 1362216, 1044808,…\n$ singletsFSC <int> 1894070, 1791890, 1033320, 328624, 1206309, 917398, …\n$ singletsSSC <int> 1666179, 1697083, 875465, 289327, 1032946, 735579, 9…\n$ singletsSSCB <int> 1537396, 1579098, 845446, 276289, 982736, 685592, 94…\n$ CD45 <dbl> 0.5952943, 0.9106762, 0.9705765, 0.9819573, 0.957259…\n$ NotMonocytes <dbl> 0.8820349, 0.9052256, 0.9845400, 0.9855070, 0.955627…\n$ nonDebris <dbl> 0.8627649, 0.8602660, 0.9578793, 0.9412615, 0.840783…\n$ lymphocytes <dbl> 0.6420138, 0.2145848, 0.7403110, 0.6511588, 0.705478…\n$ live <dbl> 0.9020581, 0.8908981, 0.8757665, 0.9153242, 0.895214…\n$ `Dump+` <dbl> 0.21090996, 0.06252775, 0.20023803, 0.21469246, 0.33…\n$ `Dump-` <dbl> 0.6911482, 0.8283703, 0.6755285, 0.7006317, 0.556895…\n$ Tcells <dbl> 0.2804264, 0.6748298, 0.6119129, 0.6314431, 0.439643…\n$ `Vd2+` <dbl> 0.008120361, 0.007265620, 0.004651313, 0.011348967, …\n$ `Vd2-` <dbl> 0.9918796, 0.9927344, 0.9953487, 0.9886510, 0.995246…\n$ `Va7.2+` <dbl> 0.014480704, 0.015774991, 0.015794019, 0.017023451, …\n$ `Va7.2-` <dbl> 0.9773989, 0.9769594, 0.9795547, 0.9716276, 0.981924…\n$ `CD4+` <dbl> 0.6341164, 0.6119112, 0.6639621, 0.4378944, 0.739256…\n$ `CD4-` <dbl> 0.3432825, 0.3650482, 0.3155925, 0.5337331, 0.242668…\n$ `CD8+` <dbl> 0.2734826, 0.3357696, 0.2862104, 0.4861231, 0.195063…\n$ 
`CD8-` <dbl> 0.06979990, 0.02927858, 0.02938209, 0.04761008, 0.04…\n$ Tcells_count <int> 164771, 208241, 371723, 111552, 291777, 271870, 4879…\n$ lymphocytes_count <int> 587573, 308583, 607477, 176662, 663667, 510730, 7262…\n$ Monocytes <dbl> 0.11796509, 0.09477437, 0.01545999, 0.01449297, 0.04…\n$ Debris <dbl> 0.13723513, 0.13973396, 0.04212072, 0.05873854, 0.15…\n$ CD45_count <int> 915203, 1438047, 820570, 271304, 940733, 675857, 921…\n\n\nWe notice that while similar to the str() output, glimpse() handles spacing a little differently, and includes the dimensions at the top. However, we can also retrieve the dimensions directly using the dim() function (which maintains the row followed by column position convention of base R (ex. [196,31]))\n\ndim(Data)\n\n[1] 196 31", "crumbs": [ "About", "Intro to R", - "03 - Inside a .FCS file" + "04 - Intro to Tidyverse" ] }, { - "objectID": "course/03_InsideFCSFile/index.html#flowframe", - "href": "course/03_InsideFCSFile/index.html#flowframe", - "title": "03 - Inside an FCS File", - "section": "flowFrame", - "text": "flowFrame\nFor read.FCS(), it accepts several arguments. The argument “filename” is where we provide our file.path to .fcs file that we wish to load into R. Let’s go ahead and do so\n\nread.FCS(filename=firstfile)\n\nPlease note, if you are doing this with your own .fcs files, you will need to provide two additional arguments, “transformation” = FALSE, and “truncate_max_range” = FALSE for the files to be read in correctly. We will revisit the reasons why in Week #5.\n\nread.FCS(filename=firstfile, transformation = FALSE, truncate_max_range = FALSE)\n\nflowFrame object 'CellCounts4L_AB_05-ND050-05.fcs'\nwith 100 cells and 61 observables:\n name desc range minRange maxRange\n$P1 Time NA 272140 0 272139\n$P2 UV1-A NA 4194304 -111 4194303\n$P3 UV2-A NA 4194304 -111 4194303\n$P4 UV3-A NA 4194304 -111 4194303\n$P5 UV4-A NA 4194304 -111 4194303\n... ... ... ... ... 
...\n$P57 R4-A NA 4194304 -111 4194303\n$P58 R5-A NA 4194304 -111 4194303\n$P59 R6-A NA 4194304 -111 4194303\n$P60 R7-A NA 4194304 -111 4194303\n$P61 R8-A NA 4194304 -111 4194303\n476 keywords are stored in the 'description' slot\n\n\nIn this case, we can see the .fcs file has been read into R as a “flowFrame” object. We can also see the file name, as well as details about the number of cells, and number of columns (whether detectors (for raw spectral flow data) or fluorophores (for unmixed spectral flow data)).\n\nDirectly below we see what resembles a table. At first glance, the only column with an immediately discernable purpose is the one with the name column, which is listing the detectors present on a Cytek Aurora.\n\nAnd finally, at the bottom we reach a line that tells us that for this .fcs files, 599 keyword can be found in a description slot.\n\n\n\nSo let’s get our bearings, we have loaded in an .fcs file to R, but let’s use some of the concepts we covered last week to try to understand a bit about what type or class of object we are working with. From the output, we saw the words flowFrame object, so let’s read it back in again, but assign it to an variable/object called flowFrame so that we can use the type-discerning functions we worked with last week on.\n\nflowFrame <- read.FCS(filename=firstfile, transformation = FALSE, truncate_max_range = FALSE)\n\nAs we create this variable, if we have the session tab selected on our right secondary side bar, we see it appear:\n\nIf we were to use the type-determining functions we learned last week\n\nclass(flowFrame)\n\n[1] \"flowFrame\"\nattr(,\"package\")\n[1] \"flowCore\"\n\n\nflowFrames are a class of object with a structure defined within the flowCore package. They are used to work with the data contained within individual .fcs files. 
Looking again at the right secondary side bar, we can see that it shows up as a ““S4 class flowFrame package flowCore”“” with 3 slots, with the words flowFrame adjacent to it.\nA perfectly valid first reaction to first reading this is “well how should I know what any of this means?”. Powering through this initial discomfort, let’s go ahead and click on the dropdown arrow next to the variables name and see if we get any additional clarity on the issue.\n\nWhen we do so, three additional drop-downs appear. Based on the previous line that mentioned 3 slots, we could infer that each line corresponds to one of those slots.\nWhat we are encountering with flowFrame is our first example of an S4 object type. These more-complicated object types are quite common for the various Bioconductor affilitated R packages.\nThese objects will usually appear with either S4 or S3 in their metadata, and are made up of various simpler object types that are cobbled together within the larger object, usually occupying individual slots.\nWhat advantage this bundling provides will be something we revisit throughout the course as you encounter more of these S4/S3 objects.\n\n\n\nexprs\nThe first slot within the flowFrame object shows up with the name “exprs”. For the exprs object, glancing at it’s middle column, we can based on the 100 rows and 61 columns, that it is likely a matrix-style object. We might also recall we saw similar numbers in the printed output when we ran read.FCS()earlier.\n\nWhich likely means that “exprs” slot is where the MFI data for the individual acquired cells within our .fcs file is being stored. 
Within Positron, for a matrix object, we can click on the little grid symbol on the far right to open up the table within the editor.\n\nIf we utilize the scroll bars, we can see that the individual detectors (for a raw spectral .fcs file; they would appear as fluorophores for unmixed spectral or conventional .fcs files) occupy the individual columns, which are named. The rows are not named, but number 100, matching the number of cells present in the .fcs file. Additionally, on the far left there is a little summary table about the overall data.\n\nLet’s go ahead and assign this matrix to a new variable/object so that we can explore it later. Since flowFrame is an S4 object, its slots can be individually accessed by adding the @ symbol and the respective slot name.\n\nMFI_Matrix <- flowFrame@exprs\n\nAlternatively, we can use the Bioconductor helper function exprs() to get the data held in that slot:\n\nMFI_Matrix_Alternate <- exprs(flowFrame)\n\nIf we printed either of these to the console, the text output would be unwieldy to display all at once. 
If we wanted to only see the first five rows, we could use the head() function, and provide a value of 5.\n\nhead(MFI_Matrix, 5)\n\n Time UV1-A UV2-A UV3-A UV4-A UV5-A UV6-A\n[1,] 38823 37.79983 -184.479996 353.87714 1106.22998 1145.18140 2130.21899\n[2,] 39780 234.23021 -98.456429 26.43876 70.65833 -89.93541 -29.14263\n[3,] 267292 -117.96355 40.732426 473.94574 1177.86975 1516.70935 1985.46130\n[4,] 128101 289.87671 -2.723389 -54.11960 -163.71489 32.62989 -134.90411\n[5,] 255221 -104.50541 -71.163338 567.57562 610.23627 1416.61328 2868.16040\n UV7-A UV8-A UV9-A UV10-A UV11-A UV12-A UV13-A\n[1,] 4376.34277 3246.7952 32050.65039 8123.2637 1992.5785 1070.3323 956.43573\n[2,] -26.34649 -162.6675 17.98848 271.9152 154.8575 163.5411 -32.81524\n[3,] 3658.87671 4140.1724 59792.16406 14013.7969 3427.4324 1668.7588 1071.14636\n[4,] 739.71960 402.9025 427.37534 315.6364 -223.0423 145.7121 127.03777\n[5,] 4034.58789 3234.6626 40126.46484 10325.0371 1974.0907 1033.8450 -21.57245\n UV14-A UV15-A UV16-A SSC-H SSC-A V1-A V2-A\n[1,] 290.8685 385.49921 670.97687 657613 750760.12 1171.1390 154.5628\n[2,] -104.9198 103.41382 71.41528 83481 81552.85 266.2424 705.2527\n[3,] 730.1430 214.93053 252.75406 890845 1183519.00 1196.0931 1183.1105\n[4,] -207.8978 -55.37944 -45.10131 75103 72457.33 227.9926 556.9189\n[5,] 273.6271 960.16290 341.20633 415791 501690.97 717.2498 929.9780\n V3-A V4-A V5-A V6-A V7-A V8-A\n[1,] 1346.4488525 1706.9260 1923.50940 898.2527 3162.55371 83596.5078\n[2,] 244.3218689 341.0508 381.15939 87.1600 151.68785 119.5544\n[3,] 2087.0092773 824.6352 1635.27258 1613.9069 4653.16260 176981.6094\n[4,] -0.8137281 205.4732 18.12125 179.6371 -69.50061 -132.4348\n[5,] 1358.4512939 788.6506 1208.81006 1156.7040 3118.42627 104951.2578\n V9-A V10-A V11-A V12-A V13-A V14-A\n[1,] 32506.7617 27161.5137 6236.072754 2220.303223 2023.39966 753.589355\n[2,] -79.7691 -109.1783 -73.991196 114.375542 -11.53453 124.986206\n[3,] 69236.4297 57626.8984 13175.838867 4534.874023 3434.38989 
1995.172363\n[4,] 129.2739 231.8918 -5.473321 8.792875 -24.62049 -8.212234\n[5,] 42090.0781 34104.3164 7620.552734 3103.544189 2426.43359 650.836304\n V15-A V16-A FSC-H FSC-A SSC-B-H SSC-B-A B1-A\n[1,] 510.9540 228.34962 1055905 1217097.50 716733 815959.06 606.6683\n[2,] -207.3494 -28.96272 79696 83439.11 104575 103132.83 195.2795\n[3,] 1321.8030 615.05560 1092481 1453969.38 757351 982038.31 2010.5110\n[4,] -133.7503 -34.32619 64760 60415.23 67955 66806.13 -146.8936\n[5,] 290.2892 473.32599 1038362 1184479.00 425296 522873.12 1015.5981\n B2-A B3-A B4-A B5-A B6-A B7-A\n[1,] 416.98294 4172.4712 192400.0938 93929.9375 54236.3320 19342.6445\n[2,] 333.25662 332.1675 -230.9639 196.7810 292.8945 -187.2845\n[3,] 2150.21826 10106.9551 437801.5625 212176.1562 124294.3594 45068.3008\n[4,] -34.90987 165.7988 675.0156 136.3076 482.8665 133.0948\n[5,] 639.77527 6034.0200 244022.6094 118871.4609 68616.9688 24067.7949\n B8-A B9-A B10-A B11-A B12-A B13-A\n[1,] 10507.24219 9498.1270 4465.50928 1668.048096 2199.9475 1581.7345\n[2,] 70.90886 -334.3563 188.05545 -663.359619 -163.2331 27.9856\n[3,] 24289.59180 22500.1914 10624.99219 4684.497559 3471.7749 2727.3904\n[4,] 432.28091 246.8794 -94.44906 2.905877 106.8489 283.3633\n[5,] 14182.12793 13019.7100 5577.33984 2753.355957 1279.3009 1423.0276\n B14-A R1-A R2-A R3-A R4-A R5-A\n[1,] 1487.56860 147.1335 129.867630 35.90353 267.23999 49.79849\n[2,] 205.82298 -142.9224 66.516052 113.63218 -94.41375 98.13978\n[3,] 2371.95850 -128.3749 -105.482544 726.48547 18.87000 95.47879\n[4,] 33.24665 127.5455 122.607941 37.83584 -82.87500 -343.83768\n[5,] 1565.72742 -266.4482 -3.350622 -178.39566 -117.10875 -100.10384\n R6-A R7-A R8-A\n[1,] -732.7097 42.83144 248.56728\n[2,] 143.4497 -263.28741 -85.83299\n[3,] -194.4526 -84.08820 -301.46066\n[4,] 82.3745 60.27896 -94.38461\n[5,] -182.0066 184.36417 186.17207\n\n\nThis is much more workable, especially on a small laptop screen. 
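head() behaves the same way on any matrix, so the preview pattern above generalizes; a small base-R sketch (the toy matrix and its column names are invented for illustration):

```r
# head() previews the first rows of a matrix without flooding the console.
toy <- matrix(seq_len(20), nrow = 10,
              dimnames = list(NULL, c("FSC-A", "SSC-A")))
head(toy, 3)   # first 3 rows, all columns
tail(toy, 2)   # the companion function shows the last 2 rows instead
```

Both functions return a smaller matrix, so they can also be assigned to a variable if you want to work with just a preview.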
We can see that there are names for each column corresponding to detector/fluorophore/metal depending on the .fcs file we are accessing. Let’s retrieve these column names using the colnames() function.\n\nColumnNames <- colnames(MFI_Matrix)\nColumnNames\n\n $P1N $P2N $P3N $P4N $P5N $P6N $P7N $P8N \n \"Time\" \"UV1-A\" \"UV2-A\" \"UV3-A\" \"UV4-A\" \"UV5-A\" \"UV6-A\" \"UV7-A\" \n $P9N $P10N $P11N $P12N $P13N $P14N $P15N $P16N \n \"UV8-A\" \"UV9-A\" \"UV10-A\" \"UV11-A\" \"UV12-A\" \"UV13-A\" \"UV14-A\" \"UV15-A\" \n $P17N $P18N $P19N $P20N $P21N $P22N $P23N $P24N \n \"UV16-A\" \"SSC-H\" \"SSC-A\" \"V1-A\" \"V2-A\" \"V3-A\" \"V4-A\" \"V5-A\" \n $P25N $P26N $P27N $P28N $P29N $P30N $P31N $P32N \n \"V6-A\" \"V7-A\" \"V8-A\" \"V9-A\" \"V10-A\" \"V11-A\" \"V12-A\" \"V13-A\" \n $P33N $P34N $P35N $P36N $P37N $P38N $P39N $P40N \n \"V14-A\" \"V15-A\" \"V16-A\" \"FSC-H\" \"FSC-A\" \"SSC-B-H\" \"SSC-B-A\" \"B1-A\" \n $P41N $P42N $P43N $P44N $P45N $P46N $P47N $P48N \n \"B2-A\" \"B3-A\" \"B4-A\" \"B5-A\" \"B6-A\" \"B7-A\" \"B8-A\" \"B9-A\" \n $P49N $P50N $P51N $P52N $P53N $P54N $P55N $P56N \n \"B10-A\" \"B11-A\" \"B12-A\" \"B13-A\" \"B14-A\" \"R1-A\" \"R2-A\" \"R3-A\" \n $P57N $P58N $P59N $P60N $P61N \n \"R4-A\" \"R5-A\" \"R6-A\" \"R7-A\" \"R8-A\" \n\n\nSomething interesting happened here: in addition to the detector names, a “$P#N” pattern appears directly above each one, with # standing for increasing numbers. 
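This labels-over-values layout can be reproduced in miniature with base R; a hedged sketch using setNames() (the three values are abbreviated from the output above):

```r
# A character vector with names prints its names above its values,
# just like the "$P#N" labels over the detector names.
cols <- setNames(c("Time", "UV1-A", "UV2-A"), c("$P1N", "$P2N", "$P3N"))
cols              # names on one line, values beneath
names(cols)       # just the "$P#N" labels
cols[["$P2N"]]    # single values can be pulled out by name: "UV1-A"
```

Being able to look up a value by its "$P#N" label, rather than by position, is the main reason such names are attached in the first place.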
If we recall, we saw something similar in the first output column when we first ran read.FCS().\n\nLet’s break out the str() and class() functions from last week and see what we can find out about why this is occurring.\n\nstr(ColumnNames)\n\n Named chr [1:61] \"Time\" \"UV1-A\" \"UV2-A\" \"UV3-A\" \"UV4-A\" \"UV5-A\" \"UV6-A\" ...\n - attr(*, \"names\")= chr [1:61] \"$P1N\" \"$P2N\" \"$P3N\" \"$P4N\" ...\n\n\nIn this case we can see that we don’t just have a vector (list) similar to what we saw with the Fluorophores object last week, because instead of a chr [1:61] we get back a Named chr [1:61] designation. In this case, each value has a corresponding index name as well (ex. $P1N, $P2N, etc.). Let’s double-check with the class() function.\n\nclass(ColumnNames)\n\n[1] \"character\"\n\n\nWe can see that everything is character, but it doesn’t inform us that each index was named. This is one of the reasons it is best, when trying to see what type of object something is, to use multiple functions, to avoid missing some important details.\nIf we wanted to remove the names, leaving just the values (similar to what we saw with the vector-style list last week), we could use the unname() function:\n\nunname(ColumnNames)\n\n [1] \"Time\" \"UV1-A\" \"UV2-A\" \"UV3-A\" \"UV4-A\" \"UV5-A\" \"UV6-A\" \n [8] \"UV7-A\" \"UV8-A\" \"UV9-A\" \"UV10-A\" \"UV11-A\" \"UV12-A\" \"UV13-A\" \n[15] \"UV14-A\" \"UV15-A\" \"UV16-A\" \"SSC-H\" \"SSC-A\" \"V1-A\" \"V2-A\" \n[22] \"V3-A\" \"V4-A\" \"V5-A\" \"V6-A\" \"V7-A\" \"V8-A\" \"V9-A\" \n[29] \"V10-A\" \"V11-A\" \"V12-A\" \"V13-A\" \"V14-A\" \"V15-A\" \"V16-A\" \n[36] \"FSC-H\" \"FSC-A\" \"SSC-B-H\" \"SSC-B-A\" \"B1-A\" \"B2-A\" \"B3-A\" \n[43] \"B4-A\" \"B5-A\" \"B6-A\" \"B7-A\" \"B8-A\" \"B9-A\" \"B10-A\" \n[50] \"B11-A\" \"B12-A\" \"B13-A\" \"B14-A\" \"R1-A\" \"R2-A\" \"R3-A\" \n[57] \"R4-A\" \"R5-A\" \"R6-A\" \"R7-A\" \"R8-A\" \n\n\n\n\nLet’s return to the right sidebar to continue our exploration, by 
clicking on the dropdown arrow for exprs in the side-bar\n\nThe output is less user-friendly than what we saw when clicking on the little grid. If we scroll down far enough, we get down as far as [,61], which corresponds to the total number of columns.\n\nIn base R, a column can be selected by placing the corresponding column index number after a comma “,”. So in this case, the first column would be designated [,1], while the last column would be designated [,61].\n\nMFI_Matrix[,1]\n\n [1] 38823 39780 267292 128101 255221 79210 196643 83855 109315 26128\n [11] 114423 120001 71831 70551 197021 239994 252611 223012 152780 171822\n [21] 172611 168464 191503 253015 73885 82221 176641 128533 4117 191632\n [31] 191229 58093 141776 265894 55593 227555 233212 248578 95165 171934\n [41] 1360 251847 195764 147503 118723 1060 90033 253553 268268 74610\n [51] 23531 150119 226391 201568 179264 79944 196686 252667 117309 3903\n [61] 77690 195142 229873 254472 179943 236618 68193 87154 28541 78622\n [71] 155664 50115 40866 70753 260118 12033 96149 20740 37461 73998\n [81] 231939 192329 88649 197664 86006 142486 159539 251298 104864 164090\n [91] 102380 218968 145182 239323 261272 118979 17202 194277 229284 258723\n\n\n\nMFI_Matrix[,61]\n\n [1] 248.567276 -85.832993 -301.460663 -94.384613 186.172073 -461.407745\n [7] 843.507080 277.516113 -106.166855 281.633545 195.927261 818.865723\n [13] 734.996460 209.356476 206.442596 279.859894 518.165222 56.947498\n [19] 285.751007 857.126343 -94.384613 -213.030518 62.585236 138.409653\n [25] 118.012444 328.255768 -61.635056 185.285233 464.384979 5.637739\n [31] -66.385956 31.229273 1198.241211 185.475266 873.279419 457.607025\n [37] -73.353951 37.880539 729.168640 221.772171 -169.512238 348.272888\n [43] -338.391022 845.534119 -4.434176 620.024597 610.269409 -193.900208\n [49] 230.830566 -23.754517 607.102112 14.949510 -34.333195 -169.132172\n [55] -96.158287 220.631958 125.297165 -15.202891 -126.057304 193.393448\n [61] 
90.203819 -277.706146 590.505615 911.096619 -92.230873 347.259369\n [67] 135.559113 369.430267 -62.015125 -180.597672 -146.517868 810.440796\n [73] 134.038818 -165.268097 727.711731 -88.746880 62.901962 203.275330\n [79] 436.196289 -242.676147 -40.857769 222.278946 -170.272385 525.513245\n [85] -41.491222 176.670258 201.501648 175.530045 329.839386 474.140167\n [91] -48.142490 -174.833252 46.052090 357.584656 -26.541714 191.493088\n [97] 211.320190 124.790398 -113.324883 343.268616\n\n\nWhat would happen if we used a column index number that didn’t exist? Let’s check.\n\nMFI_Matrix[,350]\n\nError in `MFI_Matrix[, 350]`:\n! subscript out of bounds\n\n\nWe get back an error message telling us the subscript is out of bounds.\nSo if columns are specified by a number after the comma (ex. [,1]), how are rows specified? In R, rows are specified by a number before the comma, e.g. [1,]:\n\nMFI_Matrix[1,]\n\n Time UV1-A UV2-A UV3-A UV4-A \n 38823.00000 37.79983 -184.48000 353.87714 1106.22998 \n UV5-A UV6-A UV7-A UV8-A UV9-A \n 1145.18140 2130.21899 4376.34277 3246.79517 32050.65039 \n UV10-A UV11-A UV12-A UV13-A UV14-A \n 8123.26367 1992.57849 1070.33228 956.43573 290.86853 \n UV15-A UV16-A SSC-H SSC-A V1-A \n 385.49921 670.97687 657613.00000 750760.12500 1171.13904 \n V2-A V3-A V4-A V5-A V6-A \n 154.56281 1346.44885 1706.92603 1923.50940 898.25269 \n V7-A V8-A V9-A V10-A V11-A \n 3162.55371 83596.50781 32506.76172 27161.51367 6236.07275 \n V12-A V13-A V14-A V15-A V16-A \n 2220.30322 2023.39966 753.58936 510.95404 228.34962 \n FSC-H FSC-A SSC-B-H SSC-B-A B1-A \n1055905.00000 1217097.50000 716733.00000 815959.06250 606.66833 \n B2-A B3-A B4-A B5-A B6-A \n 416.98294 4172.47119 192400.09375 93929.93750 54236.33203 \n B7-A B8-A B9-A B10-A B11-A \n 19342.64453 10507.24219 9498.12695 4465.50928 1668.04810 \n B12-A B13-A B14-A R1-A R2-A \n 2199.94751 1581.73450 1487.56860 147.13348 129.86763 \n R3-A R4-A R5-A R6-A R7-A \n 35.90353 267.23999 49.79849 -732.70966 42.83144 \n R8-A \n 
248.56728 \n\n\nAnd while not the focus of today, we could retrieve individual values from a matrix by specifying both a row and a column index number. So, for example, if we wanted the MFI value for the UV1-A detector for the first acquired cell (knowing that UV1-A is the 2nd column):\n\nMFI_Matrix[1,2]\n\n UV1-A \n37.79983 \n\n\nFrom our exploration, this looks to be all the information contained within the “exprs” slot, so let’s back up and check on the next slot.\n\n\n\n\nparameters\nAs we look at the next slot in the flowFrame object, we can see that parameters looks like it is going to be another more complex object, as it shows up as an AnnotatedDataFrame object (defined by the Biobase R package), which itself contains 4 slots.\n\n\nHaving carved our way this far into the heart of an .fcs file, we are not about to call it quits now, so CHARGE, my fellow cytometrists!!! Click that drop-down arrow!\n\nHaving survived our charge into the unknown, the four parameters slots appear to be “varMetadata”, “data”, “dimLabels” and “.__classVersion__”.\n\n\nvarMetadata\nFortunately for us, both “varMetadata” and “data” at least appear to be table-like objects of a type known as a “data.frame”, so let’s click on the grid to open them in our editor window.\nIn the case of varMetadata, we seem to have retrieved a column of metadata names.\n\nThese look reminiscent of what we saw at the top of the read.FCS() column outputs previously.\n\n\n\ndata\nClicking on the grid for parameters’ data slot will end up opening the actual content that was displayed.\n\nLet’s try to retrieve the data contained within this slot and save it as its own variable/object within our R session. First, we need to open the flowFrame object, then use @ to get inside its parameters slot. 
Since parameters is also a complex object (an AnnotatedDataFrame specifically), we will need to use another @ to get inside its data slot:\n\nParameterData <- flowFrame@parameters@data\n\nhead(ParameterData, 10)\n\n name desc range minRange maxRange\n$P1 Time <NA> 272140 0.00000 272139\n$P2 UV1-A <NA> 4194304 -111.00000 4194303\n$P3 UV2-A <NA> 4194304 -111.00000 4194303\n$P4 UV3-A <NA> 4194304 -111.00000 4194303\n$P5 UV4-A <NA> 4194304 -111.00000 4194303\n$P6 UV5-A <NA> 4194304 -111.00000 4194303\n$P7 UV6-A <NA> 4194304 -111.00000 4194303\n$P8 UV7-A <NA> 4194304 -26.34649 4194303\n$P9 UV8-A <NA> 4194304 -111.00000 4194303\n$P10 UV9-A <NA> 4194304 0.00000 4194303\n\n\nSimilarly, we could access it with the Bioconductor helper function parameters(), but we would need to specify the accessor for data outside the parentheses.\n\nParameterData_Alternate <- parameters(flowFrame)@data\n\nIf we run the str() function, we get the following insight into ParameterData’s object type:\n\nstr(ParameterData)\n\n'data.frame': 61 obs. of 5 variables:\n $ name : 'AsIs' Named chr \"Time\" \"UV1-A\" \"UV2-A\" \"UV3-A\" ...\n ..- attr(*, \"names\")= chr [1:61] \"$P1N\" \"$P2N\" \"$P3N\" \"$P4N\" ...\n $ desc : 'AsIs' Named chr NA NA NA NA ...\n ..- attr(*, \"names\")= chr [1:61] NA NA NA NA ...\n $ range : num 272140 4194304 4194304 4194304 4194304 ...\n $ minRange: num 0 -111 -111 -111 -111 ...\n $ maxRange: num 272139 4194303 4194303 4194303 4194303 ...\n\n\nWe can see this class of object is a “data.frame”. This is one of the more common object types in R, and we will be seeing these extensively throughout the course. 
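A data.frame can also be built from scratch in base R; a minimal sketch whose two columns echo ParameterData's (the two rows of values are invented for illustration):

```r
# A tiny data.frame resembling a slice of ParameterData.
toy <- data.frame(name  = c("Time", "UV1-A"),
                  range = c(272140, 4194304))
str(toy)     # each column shows up prefixed by $, as with ParameterData
toy$name     # data.frame columns are pulled out with $, not @
```

Note that a data.frame is a plain S3-style object, which is why its columns are reached with $ rather than the @ used for S4 slots.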
We see that each of the columns appears to be designated by a $ followed by the column name, and then the type of column (numeric, character, etc.).\n\nIf we try to see these columns in R, we notice that a data.frame is not like the previous S4 class objects we interacted with, as typing the @ symbol after it doesn’t bring up any suggestions\n\nParameterData@\n\nBy contrast, adding the $ we saw when using the str() function does retrieve the underlying information\n\nParameterData$\n\n\nAs you gain experience with R, checking what kind of object you are working with, and how to access its contents, will become second nature.\nSimilar to what we saw with a matrix, we can subset a data.frame based on the column or row index using square brackets [].\n\nParameterData[,1]\n\n $P1N $P2N $P3N $P4N $P5N $P6N $P7N $P8N \n \"Time\" \"UV1-A\" \"UV2-A\" \"UV3-A\" \"UV4-A\" \"UV5-A\" \"UV6-A\" \"UV7-A\" \n $P9N $P10N $P11N $P12N $P13N $P14N $P15N $P16N \n \"UV8-A\" \"UV9-A\" \"UV10-A\" \"UV11-A\" \"UV12-A\" \"UV13-A\" \"UV14-A\" \"UV15-A\" \n $P17N $P18N $P19N $P20N $P21N $P22N $P23N $P24N \n \"UV16-A\" \"SSC-H\" \"SSC-A\" \"V1-A\" \"V2-A\" \"V3-A\" \"V4-A\" \"V5-A\" \n $P25N $P26N $P27N $P28N $P29N $P30N $P31N $P32N \n \"V6-A\" \"V7-A\" \"V8-A\" \"V9-A\" \"V10-A\" \"V11-A\" \"V12-A\" \"V13-A\" \n $P33N $P34N $P35N $P36N $P37N $P38N $P39N $P40N \n \"V14-A\" \"V15-A\" \"V16-A\" \"FSC-H\" \"FSC-A\" \"SSC-B-H\" \"SSC-B-A\" \"B1-A\" \n $P41N $P42N $P43N $P44N $P45N $P46N $P47N $P48N \n \"B2-A\" \"B3-A\" \"B4-A\" \"B5-A\" \"B6-A\" \"B7-A\" \"B8-A\" \"B9-A\" \n $P49N $P50N $P51N $P52N $P53N $P54N $P55N $P56N \n \"B10-A\" \"B11-A\" \"B12-A\" \"B13-A\" \"B14-A\" \"R1-A\" \"R2-A\" \"R3-A\" \n $P57N $P58N $P59N $P60N $P61N \n \"R4-A\" \"R5-A\" \"R6-A\" \"R7-A\" \"R8-A\" \n\n\nThe individual detectors or fluorophores appear under “name”. 
For now, based on what we know, the $P# appears to be some sort of name used as an internal, consistent reference to the respective parameter.\n“desc” appears empty for this raw spectral .fcs file, but if you were to check an unmixed file, this would be occupied by the marker/ligand name assigned to it during the experiment setup.\n“range”, “minRange” and “maxRange” are beyond the scope of today, but are used by both instrument manufacturers and software vendors when setting appropriate scaling for a plot. For the actual details, see the Flow Cytometry Standard documentation.\nHaving exhausted our options under the parameters “varMetadata” and “data” slots, let’s continue to the next slot.\n\n\ndimLabels\n\nIn this case, not much is returned. Yay!\n\nflowFrame@parameters@dimLabels\n\n[1] \"rowNames\" \"columnNames\"\n\n\n\n\nclassVersion\nContinuing on to the last slot, “.__classVersion__”:\n\nflowFrame@parameters@.__classVersion__\n\nAnnotatedDataFrame \n \"1.1.0\" \n\n\nAlso mercifully short. Both of these slots seem to be more involved in defining the S4 class object, and don’t contain anything we need to retrieve today.\n\n\n\n\nDescription\nAt this point, we have explored both the “exprs” and “parameters” slots of the flowFrame object we created. Let’s tackle the final slot, named description.\n\nWhen doing so, a very large list is opened within the Positron variables window. While we could scroll through it, it might be easier to retrieve a certain number of entries via the console to make interpreting this more structured.\n\nTo retrieve the list itself, we would need to access the description slot of the flowFrame object. 
Since it is a slot, we will need to use the @ accessor.\n\n\nDescriptionList <- flowFrame@description\n\n\nDescriptionList \n\n$`$BEGINANALYSIS`\n[1] \"0\"\n\n$`$BEGINDATA`\n[1] \"33312\"\n\n$`$BEGINSTEXT`\n[1] \"0\"\n\n$`$BTIM`\n[1] \"13:55:29.85\"\n\n$`$BYTEORD`\n[1] \"4,3,2,1\"\n\n$`$CYT`\n[1] \"Aurora\"\n\n$`$CYTOLIB_VERSION`\n[1] \"2.22.0\"\n\n$`$CYTSN`\n[1] \"V0333\"\n\n$`$DATATYPE`\n[1] \"F\"\n\n$`$DATE`\n[1] \"04-Aug-2025\"\n\n$`$ENDANALYSIS`\n[1] \"0\"\n\n$`$ENDDATA`\n[1] \"57711\"\n\n$`$ENDSTEXT`\n[1] \"0\"\n\n$`$ETIM`\n[1] \"13:55:57.02\"\n\n$`$FIL`\n[1] \"CellCounts4L_AB_05-ND050-05.fcs\"\n\n$`$INST`\n[1] \"UMBC\"\n\n$`$MODE`\n[1] \"L\"\n\n$`$NEXTDATA`\n[1] \"0\"\n\n$`$OP`\n[1] \"David Rach\"\n\n$`$P10B`\n[1] \"32\"\n\n$`$P10E`\n[1] \"0,0\"\n\n$`$P10N`\n[1] \"UV9-A\"\n\n$`$P10R`\n[1] \"4194304\"\n\n$`$P10TYPE`\n[1] \"Raw_Fluorescence\"\n\n$`$P10V`\n[1] \"710\"\n\n$`$P11B`\n[1] \"32\"\n\n$`$P11E`\n[1] \"0,0\"\n\n$`$P11N`\n[1] \"UV10-A\"\n\n$`$P11R`\n[1] \"4194304\"\n\n$`$P11TYPE`\n[1] \"Raw_Fluorescence\"\n\n$`$P11V`\n[1] \"377\"\n\n$`$P12B`\n[1] \"32\"\n\n$`$P12E`\n[1] \"0,0\"\n\n$`$P12N`\n[1] \"UV11-A\"\n\n$`$P12R`\n[1] \"4194304\"\n\n$`$P12TYPE`\n[1] \"Raw_Fluorescence\"\n\n$`$P12V`\n[1] \"469\"\n\n$`$P13B`\n[1] \"32\"\n\n$`$P13E`\n[1] \"0,0\"\n\n$`$P13N`\n[1] \"UV12-A\"\n\n$`$P13R`\n[1] \"4194304\"\n\n$`$P13TYPE`\n[1] \"Raw_Fluorescence\"\n\n$`$P13V`\n[1] \"434\"\n\n$`$P14B`\n[1] \"32\"\n\n$`$P14E`\n[1] \"0,0\"\n\n$`$P14N`\n[1] \"UV13-A\"\n\n$`$P14R`\n[1] \"4194304\"\n\n$`$P14TYPE`\n[1] \"Raw_Fluorescence\"\n\n$`$P14V`\n[1] \"564\"\n\n$`$P15B`\n[1] \"32\"\n\n$`$P15E`\n[1] \"0,0\"\n\n$`$P15N`\n[1] \"UV14-A\"\n\n$`$P15R`\n[1] \"4194304\"\n\n$`$P15TYPE`\n[1] \"Raw_Fluorescence\"\n\n$`$P15V`\n[1] \"975\"\n\n$`$P16B`\n[1] \"32\"\n\n$`$P16E`\n[1] \"0,0\"\n\n$`$P16N`\n[1] \"UV15-A\"\n\n$`$P16R`\n[1] \"4194304\"\n\n$`$P16TYPE`\n[1] \"Raw_Fluorescence\"\n\n$`$P16V`\n[1] \"737\"\n\n$`$P17B`\n[1] \"32\"\n\n$`$P17E`\n[1] \"0,0\"\n\n$`$P17N`\n[1] 
\"UV16-A\"\n\n$`$P17R`\n[1] \"4194304\"\n\n$`$P17TYPE`\n[1] \"Raw_Fluorescence\"\n\n$`$P17V`\n[1] \"1069\"\n\n$`$P18B`\n[1] \"32\"\n\n$`$P18E`\n[1] \"0,0\"\n\n$`$P18N`\n[1] \"SSC-H\"\n\n$`$P18R`\n[1] \"4194304\"\n\n$`$P18TYPE`\n[1] \"Side_Scatter\"\n\n$`$P18V`\n[1] \"334\"\n\n$`$P19B`\n[1] \"32\"\n\n$`$P19E`\n[1] \"0,0\"\n\n$`$P19N`\n[1] \"SSC-A\"\n\n$`$P19R`\n[1] \"4194304\"\n\n$`$P19TYPE`\n[1] \"Side_Scatter\"\n\n$`$P19V`\n[1] \"334\"\n\n$`$P1B`\n[1] \"32\"\n\n$`$P1E`\n[1] \"0,0\"\n\n$`$P1N`\n[1] \"Time\"\n\n$`$P1R`\n[1] \"272140\"\n\n$`$P1TYPE`\n[1] \"Time\"\n\n$`$P20B`\n[1] \"32\"\n\n$`$P20E`\n[1] \"0,0\"\n\n$`$P20N`\n[1] \"V1-A\"\n\n$`$P20R`\n[1] \"4194304\"\n\n$`$P20TYPE`\n[1] \"Raw_Fluorescence\"\n\n$`$P20V`\n[1] \"352\"\n\n$`$P21B`\n[1] \"32\"\n\n$`$P21E`\n[1] \"0,0\"\n\n$`$P21N`\n[1] \"V2-A\"\n\n$`$P21R`\n[1] \"4194304\"\n\n$`$P21TYPE`\n[1] \"Raw_Fluorescence\"\n\n$`$P21V`\n[1] \"412\"\n\n$`$P22B`\n[1] \"32\"\n\n$`$P22E`\n[1] \"0,0\"\n\n$`$P22N`\n[1] \"V3-A\"\n\n$`$P22R`\n[1] \"4194304\"\n\n$`$P22TYPE`\n[1] \"Raw_Fluorescence\"\n\n$`$P22V`\n[1] \"304\"\n\n$`$P23B`\n[1] \"32\"\n\n$`$P23E`\n[1] \"0,0\"\n\n$`$P23N`\n[1] \"V4-A\"\n\n$`$P23R`\n[1] \"4194304\"\n\n$`$P23TYPE`\n[1] \"Raw_Fluorescence\"\n\n$`$P23V`\n[1] \"217\"\n\n$`$P24B`\n[1] \"32\"\n\n$`$P24E`\n[1] \"0,0\"\n\n$`$P24N`\n[1] \"V5-A\"\n\n$`$P24R`\n[1] \"4194304\"\n\n$`$P24TYPE`\n[1] \"Raw_Fluorescence\"\n\n$`$P24V`\n[1] \"257\"\n\n$`$P25B`\n[1] \"32\"\n\n$`$P25E`\n[1] \"0,0\"\n\n$`$P25N`\n[1] \"V6-A\"\n\n$`$P25R`\n[1] \"4194304\"\n\n$`$P25TYPE`\n[1] \"Raw_Fluorescence\"\n\n$`$P25V`\n[1] \"218\"\n\n$`$P26B`\n[1] \"32\"\n\n$`$P26E`\n[1] \"0,0\"\n\n$`$P26N`\n[1] \"V7-A\"\n\n$`$P26R`\n[1] \"4194304\"\n\n$`$P26TYPE`\n[1] \"Raw_Fluorescence\"\n\n$`$P26V`\n[1] \"303\"\n\n$`$P27B`\n[1] \"32\"\n\n$`$P27E`\n[1] \"0,0\"\n\n$`$P27N`\n[1] \"V8-A\"\n\n$`$P27R`\n[1] \"4194304\"\n\n$`$P27TYPE`\n[1] \"Raw_Fluorescence\"\n\n$`$P27V`\n[1] \"461\"\n\n$`$P28B`\n[1] \"32\"\n\n$`$P28E`\n[1] \"0,0\"\n\n$`$P28N`\n[1] 
\"V9-A\"\n\n$`$P28R`\n[1] \"4194304\"\n\n$`$P28TYPE`\n[1] \"Raw_Fluorescence\"\n\n$`$P28V`\n[1] \"320\"\n\n$`$P29B`\n[1] \"32\"\n\n$`$P29E`\n[1] \"0,0\"\n\n$`$P29N`\n[1] \"V10-A\"\n\n$`$P29R`\n[1] \"4194304\"\n\n$`$P29TYPE`\n[1] \"Raw_Fluorescence\"\n\n$`$P29V`\n[1] \"359\"\n\n$`$P2B`\n[1] \"32\"\n\n$`$P2E`\n[1] \"0,0\"\n\n$`$P2N`\n[1] \"UV1-A\"\n\n$`$P2R`\n[1] \"4194304\"\n\n$`$P2TYPE`\n[1] \"Raw_Fluorescence\"\n\n$`$P2V`\n[1] \"1008\"\n\n$`$P30B`\n[1] \"32\"\n\n$`$P30E`\n[1] \"0,0\"\n\n$`$P30N`\n[1] \"V11-A\"\n\n$`$P30R`\n[1] \"4194304\"\n\n$`$P30TYPE`\n[1] \"Raw_Fluorescence\"\n\n$`$P30V`\n[1] \"271\"\n\n$`$P31B`\n[1] \"32\"\n\n$`$P31E`\n[1] \"0,0\"\n\n$`$P31N`\n[1] \"V12-A\"\n\n$`$P31R`\n[1] \"4194304\"\n\n$`$P31TYPE`\n[1] \"Raw_Fluorescence\"\n\n$`$P31V`\n[1] \"234\"\n\n$`$P32B`\n[1] \"32\"\n\n$`$P32E`\n[1] \"0,0\"\n\n$`$P32N`\n[1] \"V13-A\"\n\n$`$P32R`\n[1] \"4194304\"\n\n$`$P32TYPE`\n[1] \"Raw_Fluorescence\"\n\n$`$P32V`\n[1] \"236\"\n\n$`$P33B`\n[1] \"32\"\n\n$`$P33E`\n[1] \"0,0\"\n\n$`$P33N`\n[1] \"V14-A\"\n\n$`$P33R`\n[1] \"4194304\"\n\n$`$P33TYPE`\n[1] \"Raw_Fluorescence\"\n\n$`$P33V`\n[1] \"318\"\n\n$`$P34B`\n[1] \"32\"\n\n$`$P34E`\n[1] \"0,0\"\n\n$`$P34N`\n[1] \"V15-A\"\n\n$`$P34R`\n[1] \"4194304\"\n\n$`$P34TYPE`\n[1] \"Raw_Fluorescence\"\n\n$`$P34V`\n[1] \"602\"\n\n$`$P35B`\n[1] \"32\"\n\n$`$P35E`\n[1] \"0,0\"\n\n$`$P35N`\n[1] \"V16-A\"\n\n$`$P35R`\n[1] \"4194304\"\n\n$`$P35TYPE`\n[1] \"Raw_Fluorescence\"\n\n$`$P35V`\n[1] \"372\"\n\n$`$P36B`\n[1] \"32\"\n\n$`$P36E`\n[1] \"0,0\"\n\n$`$P36N`\n[1] \"FSC-H\"\n\n$`$P36R`\n[1] \"4194304\"\n\n$`$P36TYPE`\n[1] \"Forward_Scatter\"\n\n$`$P36V`\n[1] \"55\"\n\n$`$P37B`\n[1] \"32\"\n\n$`$P37E`\n[1] \"0,0\"\n\n$`$P37N`\n[1] \"FSC-A\"\n\n$`$P37R`\n[1] \"4194304\"\n\n$`$P37TYPE`\n[1] \"Forward_Scatter\"\n\n$`$P37V`\n[1] \"55\"\n\n$`$P38B`\n[1] \"32\"\n\n$`$P38E`\n[1] \"0,0\"\n\n$`$P38N`\n[1] \"SSC-B-H\"\n\n$`$P38R`\n[1] \"4194304\"\n\n$`$P38TYPE`\n[1] \"Side_Scatter\"\n\n$`$P38V`\n[1] \"241\"\n\n$`$P39B`\n[1] 
\"32\"\n\n$`$P39E`\n[1] \"0,0\"\n\n$`$P39N`\n[1] \"SSC-B-A\"\n\n$`$P39R`\n[1] \"4194304\"\n\n$`$P39TYPE`\n[1] \"Side_Scatter\"\n\n$`$P39V`\n[1] \"241\"\n\n$`$P3B`\n[1] \"32\"\n\n$`$P3E`\n[1] \"0,0\"\n\n$`$P3N`\n[1] \"UV2-A\"\n\n$`$P3R`\n[1] \"4194304\"\n\n$`$P3TYPE`\n[1] \"Raw_Fluorescence\"\n\n$`$P3V`\n[1] \"286\"\n\n$`$P40B`\n[1] \"32\"\n\n$`$P40E`\n[1] \"0,0\"\n\n$`$P40N`\n[1] \"B1-A\"\n\n$`$P40R`\n[1] \"4194304\"\n\n$`$P40TYPE`\n[1] \"Raw_Fluorescence\"\n\n$`$P40V`\n[1] \"1013\"\n\n$`$P41B`\n[1] \"32\"\n\n$`$P41E`\n[1] \"0,0\"\n\n$`$P41N`\n[1] \"B2-A\"\n\n$`$P41R`\n[1] \"4194304\"\n\n$`$P41TYPE`\n[1] \"Raw_Fluorescence\"\n\n$`$P41V`\n[1] \"483\"\n\n$`$P42B`\n[1] \"32\"\n\n$`$P42E`\n[1] \"0,0\"\n\n$`$P42N`\n[1] \"B3-A\"\n\n$`$P42R`\n[1] \"4194304\"\n\n$`$P42TYPE`\n[1] \"Raw_Fluorescence\"\n\n$`$P42V`\n[1] \"471\"\n\n$`$P43B`\n[1] \"32\"\n\n$`$P43E`\n[1] \"0,0\"\n\n$`$P43N`\n[1] \"B4-A\"\n\n$`$P43R`\n[1] \"4194304\"\n\n$`$P43TYPE`\n[1] \"Raw_Fluorescence\"\n\n$`$P43V`\n[1] \"473\"\n\n$`$P44B`\n[1] \"32\"\n\n$`$P44E`\n[1] \"0,0\"\n\n$`$P44N`\n[1] \"B5-A\"\n\n$`$P44R`\n[1] \"4194304\"\n\n$`$P44TYPE`\n[1] \"Raw_Fluorescence\"\n\n$`$P44V`\n[1] \"467\"\n\n$`$P45B`\n[1] \"32\"\n\n$`$P45E`\n[1] \"0,0\"\n\n$`$P45N`\n[1] \"B6-A\"\n\n$`$P45R`\n[1] \"4194304\"\n\n$`$P45TYPE`\n[1] \"Raw_Fluorescence\"\n\n$`$P45V`\n[1] \"284\"\n\n$`$P46B`\n[1] \"32\"\n\n$`$P46E`\n[1] \"0,0\"\n\n$`$P46N`\n[1] \"B7-A\"\n\n$`$P46R`\n[1] \"4194304\"\n\n$`$P46TYPE`\n[1] \"Raw_Fluorescence\"\n\n$`$P46V`\n[1] \"531\"\n\n$`$P47B`\n[1] \"32\"\n\n$`$P47E`\n[1] \"0,0\"\n\n$`$P47N`\n[1] \"B8-A\"\n\n$`$P47R`\n[1] \"4194304\"\n\n$`$P47TYPE`\n[1] \"Raw_Fluorescence\"\n\n$`$P47V`\n[1] \"432\"\n\n$`$P48B`\n[1] \"32\"\n\n$`$P48E`\n[1] \"0,0\"\n\n$`$P48N`\n[1] \"B9-A\"\n\n$`$P48R`\n[1] \"4194304\"\n\n$`$P48TYPE`\n[1] \"Raw_Fluorescence\"\n\n$`$P48V`\n[1] \"675\"\n\n$`$P49B`\n[1] \"32\"\n\n$`$P49E`\n[1] \"0,0\"\n\n$`$P49N`\n[1] \"B10-A\"\n\n$`$P49R`\n[1] \"4194304\"\n\n$`$P49TYPE`\n[1] 
\"Raw_Fluorescence\"\n\n$`$P49V`\n[1] \"490\"\n\n$`$P4B`\n[1] \"32\"\n\n$`$P4E`\n[1] \"0,0\"\n\n$`$P4N`\n[1] \"UV3-A\"\n\n$`$P4R`\n[1] \"4194304\"\n\n$`$P4TYPE`\n[1] \"Raw_Fluorescence\"\n\n$`$P4V`\n[1] \"677\"\n\n$`$P50B`\n[1] \"32\"\n\n$`$P50E`\n[1] \"0,0\"\n\n$`$P50N`\n[1] \"B11-A\"\n\n$`$P50R`\n[1] \"4194304\"\n\n$`$P50TYPE`\n[1] \"Raw_Fluorescence\"\n\n$`$P50V`\n[1] \"286\"\n\n$`$P51B`\n[1] \"32\"\n\n$`$P51E`\n[1] \"0,0\"\n\n$`$P51N`\n[1] \"B12-A\"\n\n$`$P51R`\n[1] \"4194304\"\n\n$`$P51TYPE`\n[1] \"Raw_Fluorescence\"\n\n$`$P51V`\n[1] \"407\"\n\n$`$P52B`\n[1] \"32\"\n\n$`$P52E`\n[1] \"0,0\"\n\n$`$P52N`\n[1] \"B13-A\"\n\n$`$P52R`\n[1] \"4194304\"\n\n$`$P52TYPE`\n[1] \"Raw_Fluorescence\"\n\n$`$P52V`\n[1] \"700\"\n\n$`$P53B`\n[1] \"32\"\n\n$`$P53E`\n[1] \"0,0\"\n\n$`$P53N`\n[1] \"B14-A\"\n\n$`$P53R`\n[1] \"4194304\"\n\n$`$P53TYPE`\n[1] \"Raw_Fluorescence\"\n\n$`$P53V`\n[1] \"693\"\n\n$`$P54B`\n[1] \"32\"\n\n$`$P54E`\n[1] \"0,0\"\n\n$`$P54N`\n[1] \"R1-A\"\n\n$`$P54R`\n[1] \"4194304\"\n\n$`$P54TYPE`\n[1] \"Raw_Fluorescence\"\n\n$`$P54V`\n[1] \"158\"\n\n$`$P55B`\n[1] \"32\"\n\n$`$P55E`\n[1] \"0,0\"\n\n$`$P55N`\n[1] \"R2-A\"\n\n$`$P55R`\n[1] \"4194304\"\n\n$`$P55TYPE`\n[1] \"Raw_Fluorescence\"\n\n$`$P55V`\n[1] \"245\"\n\n$`$P56B`\n[1] \"32\"\n\n$`$P56E`\n[1] \"0,0\"\n\n$`$P56N`\n[1] \"R3-A\"\n\n$`$P56R`\n[1] \"4194304\"\n\n$`$P56TYPE`\n[1] \"Raw_Fluorescence\"\n\n$`$P56V`\n[1] \"338\"\n\n$`$P57B`\n[1] \"32\"\n\n$`$P57E`\n[1] \"0,0\"\n\n$`$P57N`\n[1] \"R4-A\"\n\n$`$P57R`\n[1] \"4194304\"\n\n$`$P57TYPE`\n[1] \"Raw_Fluorescence\"\n\n$`$P57V`\n[1] \"238\"\n\n$`$P58B`\n[1] \"32\"\n\n$`$P58E`\n[1] \"0,0\"\n\n$`$P58N`\n[1] \"R5-A\"\n\n$`$P58R`\n[1] \"4194304\"\n\n$`$P58TYPE`\n[1] \"Raw_Fluorescence\"\n\n$`$P58V`\n[1] \"191\"\n\n$`$P59B`\n[1] \"32\"\n\n$`$P59E`\n[1] \"0,0\"\n\n$`$P59N`\n[1] \"R6-A\"\n\n$`$P59R`\n[1] \"4194304\"\n\n$`$P59TYPE`\n[1] \"Raw_Fluorescence\"\n\n$`$P59V`\n[1] \"274\"\n\n$`$P5B`\n[1] \"32\"\n\n$`$P5E`\n[1] \"0,0\"\n\n$`$P5N`\n[1] 
\"UV4-A\"\n\n$`$P5R`\n[1] \"4194304\"\n\n$`$P5TYPE`\n[1] \"Raw_Fluorescence\"\n\n$`$P5V`\n[1] \"1022\"\n\n$`$P60B`\n[1] \"32\"\n\n$`$P60E`\n[1] \"0,0\"\n\n$`$P60N`\n[1] \"R7-A\"\n\n$`$P60R`\n[1] \"4194304\"\n\n$`$P60TYPE`\n[1] \"Raw_Fluorescence\"\n\n$`$P60V`\n[1] \"524\"\n\n$`$P61B`\n[1] \"32\"\n\n$`$P61E`\n[1] \"0,0\"\n\n$`$P61N`\n[1] \"R8-A\"\n\n$`$P61R`\n[1] \"4194304\"\n\n$`$P61TYPE`\n[1] \"Raw_Fluorescence\"\n\n$`$P61V`\n[1] \"243\"\n\n$`$P6B`\n[1] \"32\"\n\n$`$P6E`\n[1] \"0,0\"\n\n$`$P6N`\n[1] \"UV5-A\"\n\n$`$P6R`\n[1] \"4194304\"\n\n$`$P6TYPE`\n[1] \"Raw_Fluorescence\"\n\n$`$P6V`\n[1] \"616\"\n\n$`$P7B`\n[1] \"32\"\n\n$`$P7E`\n[1] \"0,0\"\n\n$`$P7N`\n[1] \"UV6-A\"\n\n$`$P7R`\n[1] \"4194304\"\n\n$`$P7TYPE`\n[1] \"Raw_Fluorescence\"\n\n$`$P7V`\n[1] \"506\"\n\n$`$P8B`\n[1] \"32\"\n\n$`$P8E`\n[1] \"0,0\"\n\n$`$P8N`\n[1] \"UV7-A\"\n\n$`$P8R`\n[1] \"4194304\"\n\n$`$P8TYPE`\n[1] \"Raw_Fluorescence\"\n\n$`$P8V`\n[1] \"661\"\n\n$`$P9B`\n[1] \"32\"\n\n$`$P9E`\n[1] \"0,0\"\n\n$`$P9N`\n[1] \"UV8-A\"\n\n$`$P9R`\n[1] \"4194304\"\n\n$`$P9TYPE`\n[1] \"Raw_Fluorescence\"\n\n$`$P9V`\n[1] \"514\"\n\n$`$PAR`\n[1] \"61\"\n\n$`$PROJ`\n[1] \"CellCounts4L_AB_05\"\n\n$`$SPILLOVER`\n UV1-A UV2-A UV3-A UV4-A UV5-A UV6-A UV7-A UV8-A UV9-A UV10-A UV11-A\n [1,] 1e+00 0 0 0 0 0 0 0 0 0 0\n [2,] 1e-06 1 0 0 0 0 0 0 0 0 0\n [3,] 0e+00 0 1 0 0 0 0 0 0 0 0\n [4,] 0e+00 0 0 1 0 0 0 0 0 0 0\n [5,] 0e+00 0 0 0 1 0 0 0 0 0 0\n [6,] 0e+00 0 0 0 0 1 0 0 0 0 0\n [7,] 0e+00 0 0 0 0 0 1 0 0 0 0\n [8,] 0e+00 0 0 0 0 0 0 1 0 0 0\n [9,] 0e+00 0 0 0 0 0 0 0 1 0 0\n[10,] 0e+00 0 0 0 0 0 0 0 0 1 0\n[11,] 0e+00 0 0 0 0 0 0 0 0 0 1\n[12,] 0e+00 0 0 0 0 0 0 0 0 0 0\n[13,] 0e+00 0 0 0 0 0 0 0 0 0 0\n[14,] 0e+00 0 0 0 0 0 0 0 0 0 0\n[15,] 0e+00 0 0 0 0 0 0 0 0 0 0\n[16,] 0e+00 0 0 0 0 0 0 0 0 0 0\n[17,] 0e+00 0 0 0 0 0 0 0 0 0 0\n[18,] 0e+00 0 0 0 0 0 0 0 0 0 0\n[19,] 0e+00 0 0 0 0 0 0 0 0 0 0\n[20,] 0e+00 0 0 0 0 0 0 0 0 0 0\n[21,] 0e+00 0 0 0 0 0 0 0 0 0 0\n[22,] 0e+00 0 0 0 0 0 0 0 0 0 0\n[23,] 0e+00 0 0 0 0 
0 0 0 0 0 0\n [... remaining rows of the 54 x 54 matrix omitted for brevity -- it is an identity matrix, with each of the 54 detector columns containing a single 1 on its own row and 0 everywhere else ...]\n\n$`$TIMESTEP`\n[1] \"0.0001\"\n\n$`$TOT`\n[1] \"100\"\n\n$`$VOL`\n[1] \"30.31\"\n\n$`APPLY COMPENSATION`\n[1] \"FALSE\"\n\n$CHARSET\n[1] \"utf-8\"\n\n$CREATOR\n[1] \"SpectroFlo 3.3.0\"\n\n$FCSversion\n[1] \"3\"\n\n$FILENAME\n[1] \"data/CellCounts4L_AB_05_ND050_05.fcs\"\n\n$`FSC ASF`\n[1] \"1.21\"\n\n$GROUPNAME\n[1] \"ND050\"\n\n$GUID\n[1] \"CellCounts4L_AB_05-ND050-05.fcs\"\n\n$LASER1ASF\n[1] \"1.09\"\n\n$LASER1DELAY\n[1] \"-19.525\"\n\n$LASER1NAME\n[1] \"Violet\"\n\n$LASER2ASF\n[1] \"1.14\"\n\n$LASER2DELAY\n[1] \"0\"\n\n$LASER2NAME\n[1]
\"Blue\"\n\n$LASER3ASF\n[1] \"1.02\"\n\n$LASER3DELAY\n[1] \"20.15\"\n\n$LASER3NAME\n[1] \"Red\"\n\n$LASER4ASF\n[1] \"0.92\"\n\n$LASER4DELAY\n[1] \"40.725\"\n\n$LASER4NAME\n[1] \"UV\"\n\n$ORIGINALGUID\n[1] \"CellCounts4L_AB_05-ND050-05.fcs\"\n\n$P10DISPLAY\n[1] \"LOG\"\n\n$P11DISPLAY\n[1] \"LOG\"\n\n$P12DISPLAY\n[1] \"LOG\"\n\n$P13DISPLAY\n[1] \"LOG\"\n\n$P14DISPLAY\n[1] \"LOG\"\n\n$P15DISPLAY\n[1] \"LOG\"\n\n$P16DISPLAY\n[1] \"LOG\"\n\n$P17DISPLAY\n[1] \"LOG\"\n\n$P18DISPLAY\n[1] \"LIN\"\n\n$P19DISPLAY\n[1] \"LIN\"\n\n$P1DISPLAY\n[1] \"LOG\"\n\n$P20DISPLAY\n[1] \"LOG\"\n\n$P21DISPLAY\n[1] \"LOG\"\n\n$P22DISPLAY\n[1] \"LOG\"\n\n$P23DISPLAY\n[1] \"LOG\"\n\n$P24DISPLAY\n[1] \"LOG\"\n\n$P25DISPLAY\n[1] \"LOG\"\n\n$P26DISPLAY\n[1] \"LOG\"\n\n$P27DISPLAY\n[1] \"LOG\"\n\n$P28DISPLAY\n[1] \"LOG\"\n\n$P29DISPLAY\n[1] \"LOG\"\n\n$P2DISPLAY\n[1] \"LOG\"\n\n$P30DISPLAY\n[1] \"LOG\"\n\n$P31DISPLAY\n[1] \"LOG\"\n\n$P32DISPLAY\n[1] \"LOG\"\n\n$P33DISPLAY\n[1] \"LOG\"\n\n$P34DISPLAY\n[1] \"LOG\"\n\n$P35DISPLAY\n[1] \"LOG\"\n\n$P36DISPLAY\n[1] \"LIN\"\n\n$P37DISPLAY\n[1] \"LIN\"\n\n$P38DISPLAY\n[1] \"LIN\"\n\n$P39DISPLAY\n[1] \"LIN\"\n\n$P3DISPLAY\n[1] \"LOG\"\n\n$P40DISPLAY\n[1] \"LOG\"\n\n$P41DISPLAY\n[1] \"LOG\"\n\n$P42DISPLAY\n[1] \"LOG\"\n\n$P43DISPLAY\n[1] \"LOG\"\n\n$P44DISPLAY\n[1] \"LOG\"\n\n$P45DISPLAY\n[1] \"LOG\"\n\n$P46DISPLAY\n[1] \"LOG\"\n\n$P47DISPLAY\n[1] \"LOG\"\n\n$P48DISPLAY\n[1] \"LOG\"\n\n$P49DISPLAY\n[1] \"LOG\"\n\n$P4DISPLAY\n[1] \"LOG\"\n\n$P50DISPLAY\n[1] \"LOG\"\n\n$P51DISPLAY\n[1] \"LOG\"\n\n$P52DISPLAY\n[1] \"LOG\"\n\n$P53DISPLAY\n[1] \"LOG\"\n\n$P54DISPLAY\n[1] \"LOG\"\n\n$P55DISPLAY\n[1] \"LOG\"\n\n$P56DISPLAY\n[1] \"LOG\"\n\n$P57DISPLAY\n[1] \"LOG\"\n\n$P58DISPLAY\n[1] \"LOG\"\n\n$P59DISPLAY\n[1] \"LOG\"\n\n$P5DISPLAY\n[1] \"LOG\"\n\n$P60DISPLAY\n[1] \"LOG\"\n\n$P61DISPLAY\n[1] \"LOG\"\n\n$P6DISPLAY\n[1] \"LOG\"\n\n$P7DISPLAY\n[1] \"LOG\"\n\n$P8DISPLAY\n[1] \"LOG\"\n\n$P9DISPLAY\n[1] \"LOG\"\n\n$THRESHOLD\n[1] \"(FSC,50000)\"\n\n$TUBENAME\n[1] 
\"05\"\n\n$USERSETTINGNAME\n[1] \"DTR_CellCounts\"\n\n$`WINDOW EXTENSION`\n[1] \"3\"\n\n\nThe returned list is a little too large to reasonably explore. We can attempt to subset using the head() function as shown below\n\nhead(DescriptionList, 5)\n\n$`$BEGINANALYSIS`\n[1] \"0\"\n\n$`$BEGINDATA`\n[1] \"33312\"\n\n$`$BEGINSTEXT`\n[1] \"0\"\n\n$`$BTIM`\n[1] \"13:55:29.85\"\n\n$`$BYTEORD`\n[1] \"4,3,2,1\"\n\n\nAlternatively, it might be better to subset based on position index\n\nDescriptionList[1:10]\n\n$`$BEGINANALYSIS`\n[1] \"0\"\n\n$`$BEGINDATA`\n[1] \"33312\"\n\n$`$BEGINSTEXT`\n[1] \"0\"\n\n$`$BTIM`\n[1] \"13:55:29.85\"\n\n$`$BYTEORD`\n[1] \"4,3,2,1\"\n\n$`$CYT`\n[1] \"Aurora\"\n\n$`$CYTOLIB_VERSION`\n[1] \"2.22.0\"\n\n$`$CYTSN`\n[1] \"V0333\"\n\n$`$DATATYPE`\n[1] \"F\"\n\n$`$DATE`\n[1] \"04-Aug-2025\"\n\n\nAnd just as we saw for exprs and parameters, there is also a Bioconductor helper function, keyword(), to access this same information directly from the flowFrame.\n\nDescriptionList_Alternate <- keyword(flowFrame)\n\nIf we run the class() function, we can see that DescriptionList is an actual “list”.\n\nclass(DescriptionList)\n\n[1] \"list\"\n\n\nThis is in contrast to the vectors we have previously generated. While these are also list-like, they are what are known as atomic vectors, which contain values that are all of a single type: character, numeric or logical.\n\nFluorophores <- c(\"BV421\", \"FITC\", \"PE\", \"APC\")\nclass(Fluorophores)\n\n[1] \"character\"\n\n\n\nPanelAntibodyCounts <- c(5, 12, 19, 26, 34, 46, 51)\nclass(PanelAntibodyCounts)\n\n[1] \"numeric\"\n\n\n\nSpecimenIndexToKeep <- c(TRUE, TRUE, FALSE, TRUE)\nclass(SpecimenIndexToKeep)\n\n[1] \"logical\"\n\n\nA list, on the other hand, is not restricted to containing objects of a single atomic type. 
For example, I could combine the three previous vectors into a list using the list() function.\n\nMyListofVectors <- list(Fluorophores, PanelAntibodyCounts, SpecimenIndexToKeep)\nstr(MyListofVectors)\n\nList of 3\n $ : chr [1:4] \"BV421\" \"FITC\" \"PE\" \"APC\"\n $ : num [1:7] 5 12 19 26 34 46 51\n $ : logi [1:4] TRUE TRUE FALSE TRUE\n\n\nWe can see that the Description/Keyword list we retrieved from our flowFrame shares a similar format.\n\nstr(DescriptionList[1:10])\n\nList of 10\n $ $BEGINANALYSIS : chr \"0\"\n $ $BEGINDATA : chr \"33312\"\n $ $BEGINSTEXT : chr \"0\"\n $ $BTIM : chr \"13:55:29.85\"\n $ $BYTEORD : chr \"4,3,2,1\"\n $ $CYT : chr \"Aurora\"\n $ $CYTOLIB_VERSION: chr \"2.22.0\"\n $ $CYTSN : chr \"V0333\"\n $ $DATATYPE : chr \"F\"\n $ $DATE : chr \"04-Aug-2025\"\n\n\nBut in this case, there are also names present ($BEGINANALYSIS, $BEGINDATA, etc). What if we had tried to provide names to our List of Vectors? Would the format match?\nWhen we assign a name to each of the vectors (by providing an equals sign =), we get the same kind of structure as what we see in Description.\n\nMyNamedListofVectors <- list(FluorophoresNamed=Fluorophores,\n PanelAntibodyCountsNamed=PanelAntibodyCounts,\n SpecimenIndexToKeepNamed=SpecimenIndexToKeep)\n\nstr(MyNamedListofVectors)\n\nList of 3\n $ FluorophoresNamed : chr [1:4] \"BV421\" \"FITC\" \"PE\" \"APC\"\n $ PanelAntibodyCountsNamed: num [1:7] 5 12 19 26 34 46 51\n $ SpecimenIndexToKeepNamed: logi [1:4] TRUE TRUE FALSE TRUE\n\n\nWe can then isolate items from that list using the $ operator.\n\nMyNamedListofVectors$\n\n\nAlternatively, we could also access by list index position\n\nMyNamedListofVectors[1]\n\n$FluorophoresNamed\n[1] \"BV421\" \"FITC\" \"PE\" \"APC\" \n\n\nThinking back to the original output from read.FCS(), we recall that it mentioned 599 keywords being in the description slot, so now we know what was being referenced.", + "objectID": 
"course/04_IntroToTidyverse/index.html#column-value-type", + "href": "course/04_IntroToTidyverse/index.html#column-value-type", + "title": "04 - Introduction to Tidyverse", + "section": "Column value type", + "text": "Column value type\nAs we saw last week, functions often need values that match a certain type (the paintbrush needing paint analogy). As we inspect the columns of Data, we notice that some of the columns contain character (i.e. “chr”) values. Others appear to contain numeric values (which are subtyped as either double (i.e. “dbl”) or integer (i.e. “int”)). At first glance, we do not appear to have any logical (i.e. TRUE or FALSE) columns in this dataset.\n\nIf we want to verify the type of values contained within a data.frame column, we can employ several similarly-named functions (is.character(), is.numeric() or is.logical()) to check\n\n# colnames(Data) # To recheck the column names\n\nis.character(Data$bid)\n\n[1] TRUE\n\n\n\nis.numeric(Data$bid)\n\n[1] FALSE\n\n\n\n# colnames(Data) # To recheck the column names\n\nis.character(Data$Tcells_count)\n\n[1] FALSE\n\n\nFor numeric columns identified with the is.numeric() function, we can also be subtype-specific using either is.integer() or is.double().\n\n# colnames(Data) # To recheck the column names\n\nis.numeric(Data$Tcells_count)\n\n[1] TRUE\n\nis.integer(Data$Tcells_count)\n\n[1] TRUE\n\nis.double(Data$Tcells_count)\n\n[1] FALSE\n\n\n\n\n\n\n\n\nReminder\n\n\n\nAs we observed last week with keywords, column names that contain special characters like $ or spaces will need to be surrounded with backticks in order for the function to be able to run.\n\n\n\n# colnames(Data) # To recheck the column names\nis.numeric(Data$CD8-)\n\nError in parse(text = input): <text>:2:21: unexpected ')'\n1: # colnames(Data) # To recheck the column names\n2: is.numeric(Data$CD8-)\n ^\n\n\n\n# colnames(Data) # To recheck the column names\nis.numeric(Data$`CD8-`)\n\n[1] TRUE", "crumbs": [ "About", "Intro to 
R", - "03 - Inside a .FCS file" + "04 - Intro to Tidyverse" ] }, { - "objectID": "course/03_InsideFCSFile/index.html#early-metadata", - "href": "course/03_InsideFCSFile/index.html#early-metadata", - "title": "03 - Inside an FCS File", - "section": "Early Metadata", - "text": "Early Metadata\nWithin the initial portion, we are getting back metadata keywords related to where and how the particular file was acquired. Keywords of potential interest include:\n\n\n\n\n\n\nStart Time\n\n\n\nWhat time was the .fcs file acquired\n\n\n\n\nDescriptionList$`$BTIM`\n\n[1] \"13:55:29.85\"\n\n\n\n\n\n\n\n\n\nCytometer\n\n\n\nWhat type of cytometer was the .fcs file acquired on\n\n\n\n\nDescriptionList$`$CYT`\n\n[1] \"Aurora\"\n\n\n\n\n\n\n\n\n\n\n\nCytometer Serial Number\n\n\n\nManufacturer Serial Number of the Cytometer\n\n\n\n\nDescriptionList$`$CYTSN`\n\n[1] \"V0333\"\n\n\n\n\n\n\n\n\n\nFCS File Acquisition Date\n\n\n\nWhat was the date of acquisition\n\n\n\n\nDescriptionList$`$DATE`\n\n[1] \"04-Aug-2025\"\n\n\n\n\n\n\n\n\n\n\n\nAcquisition End Time\n\n\n\nWhat time was acquisition stopped\n\n\n\n\nDescriptionList$`$ETIM`\n\n[1] \"13:55:57.02\"\n\n\n\n\n\n\n\n\n\nFile Name\n\n\n\nName of the .fcs file\n\n\n\n\nDescriptionList$`$FIL`\n\n[1] \"CellCounts4L_AB_05-ND050-05.fcs\"\n\n\n\n\n\n\n\n\n\n\n\nOperator\n\n\n\nWho acquired the .fcs file\n\n\n\n\nDescriptionList$`$OP`\n\n[1] \"David Rach\"", + "objectID": "course/04_IntroToTidyverse/index.html#select-columns", + "href": "course/04_IntroToTidyverse/index.html#select-columns", + "title": "04 - Introduction to Tidyverse", + "section": "select (Columns)", + "text": "select (Columns)\nNow that we have read in our data, and have a general picture of the structure and contents, lets start learning the main dplyr functions we will be using throughout the course. To do this, lets go ahead and attach dplyr to our local environment via the library() call.\n\nlibrary(dplyr)\n\nWe will start with the select() function. 
It is used to “select” a column from a data.frame type object. In the simplest usage, we provide the name of our data.frame variable/object as the first argument after the opening parenthesis. This is then followed by the name of the column we want to select as the second argument (let’s place “” around the column name for now)\n\nDateColumn <- select(Data, \"Date\")\nDateColumn[1:10,]\n\n [1] \"2025-07-26\" \"2025-07-26\" \"2025-07-26\" \"2025-07-26\" \"2025-07-26\"\n [6] \"2025-07-26\" \"2025-07-26\" \"2025-07-26\" \"2025-07-26\" \"2025-07-26\"\n\n\nThis selects the column, leaving the new object containing only that subsetted column from the original Data object.\n\nPipe Operators\nWhile the above line of code works to select a column, when you encounter select() out in the wild, it will more often be in a line of code that looks like this:\n\nDateColumn <- Data |> select(\"Date\")\nDateColumn[1:10,]\n\n [1] \"2025-07-26\" \"2025-07-26\" \"2025-07-26\" \"2025-07-26\" \"2025-07-26\"\n [6] \"2025-07-26\" \"2025-07-26\" \"2025-07-26\" \"2025-07-26\" \"2025-07-26\"\n\n\n… “What in the world is that thing |> ?” …\nGlad you asked! A useful feature of the tidyverse packages is their use of pipes (either the original magrittr package’s “%>%” or base R’s “|>”, available from R version 4.1.0), usually appearing like this:\n\n# magrittr %>% pipe\n\nDateColumn <- Data %>% select(\"Date\")\n\n# base R |> pipe\nDateColumn <- Data |> select(\"Date\")\n\n… “How do we interpret/read that line of code?” …\nLet’s break it down, starting off just to the right of the assignment arrow (<-) with our data.frame “Data”.\n\nData\n\nWe then proceed to read to the right, adding in our pipe operator. The pipe essentially serves as an intermediary, passing the contents of Data onward to the subsequent function.\n\nData |> \n\nIn our case, this subsequent function is the select() function, which will select a particular column from the available data. 
When using the pipe, the first argument slot we saw in select(Data, “Date”) is occupied by the contents of Data being passed by the pipe.\n\nData |> select()\n\nTo complete the transfer, we provide the desired column name to select() to act on (“Date” in this case)\n\nData |> select(\"Date\")\n\nIn summary, the contents of Data are passed to the pipe, and select() runs on those contents to select the Date column\n\nData |> select(\"Date\")\n\nOne of the main advantages of using pipes is that they can be chained together, passing the resulting object of one operation on to the next pipe and subsequent function. We can see this in operation in the example below, where we hand off the isolated “Date” column to the nrow() function to determine the number of rows. We will use pipes throughout the course, so you will gradually gain familiarity as you encounter them.\n\nData |> select(\"Date\") |> nrow()\n\n[1] 196\n\n\nThose with prior R experience will be more familiar with the older magrittr %>% pipe. The base R |> pipe operator was introduced starting with R version 4.1.0. While mostly interchangeable, they have a few nuances that come into play for more advanced use cases. You are welcome to use whichever you prefer (my current preference is |> as it’s one less key to press).\n\n\nR Quirks\n\n\n\n\n\n\nOdd R Behavior # 1\n\n\n\nWhile we used “” around the column name in our previous example, unlike what we encountered with install.packages() when we forgot to include quotation marks, select() still retrieves the correct column even though Date is not an environment variable:\n\n\n\nData |> select(Date) |> head(3)\n\n Date\n1 2025-07-26\n2 2025-07-26\n3 2025-07-26\n\n\n\n\n\n\n\n\n.\n\n\n\nThe reasons for this Odd R behaviour are nuanced and for another day. 
For now, think of it as the dplyr package picking up the slack, using context to infer that it’s a column name and not an environment variable/object.\n\n\n\n\nSelecting multiple columns\nSince we are able to select one column, can we select multiple (similar to a Data[, 2:5] approach in base R)? We can, and they can be positioned anywhere within the data.frame:\n\nSubset <- Data |> select(bid, timepoint, Condition, Tcells, `CD8+`, `CD4+`)\n\nhead(Subset, 3)\n\n bid timepoint Condition Tcells CD8+ CD4+\n1 INF0052 0 Ctrl 0.2804264 0.2734826 0.6341164\n2 INF0100 0 Ctrl 0.6748298 0.3357696 0.6119112\n3 INF0100 4 Ctrl 0.6119129 0.2862104 0.6639621\n\n\nYou will notice that the order in which we selected the columns will dictate their position in the subsetted data.frame object:\n\nSubset <- Data |> select(bid, Tcells, `CD8+`, `CD4+`, timepoint, Condition)\n\nhead(Subset, 3)\n\n bid Tcells CD8+ CD4+ timepoint Condition\n1 INF0052 0.2804264 0.2734826 0.6341164 0 Ctrl\n2 INF0100 0.6748298 0.3357696 0.6119112 0 Ctrl\n3 INF0100 0.6119129 0.2862104 0.6639621 4 Ctrl", "crumbs": [ "About", "Intro to R", - "03 - Inside a .FCS file" + "04 - Intro to Tidyverse" ] }, { - "objectID": "course/03_InsideFCSFile/index.html#detector-values", - "href": "course/03_InsideFCSFile/index.html#detector-values", - "title": "03 - Inside an FCS File", - "section": "Detector Values", - "text": "Detector Values\nThe next major stretch of keywords encode parameter values associated with the individual detectors for at the time of acquisition.\n\nDetectors <- DescriptionList[20:384]\nDetectors\n\n$`$P10B`\n[1] \"32\"\n\n$`$P10E`\n[1] \"0,0\"\n\n$`$P10N`\n[1] \"UV9-A\"\n\n$`$P10R`\n[1] \"4194304\"\n\n$`$P10TYPE`\n[1] \"Raw_Fluorescence\"\n\n$`$P10V`\n[1] \"710\"\n\n$`$P11B`\n[1] \"32\"\n\n$`$P11E`\n[1] \"0,0\"\n\n$`$P11N`\n[1] \"UV10-A\"\n\n$`$P11R`\n[1] \"4194304\"\n\n$`$P11TYPE`\n[1] \"Raw_Fluorescence\"\n\n$`$P11V`\n[1] \"377\"\n\n$`$P12B`\n[1] \"32\"\n\n$`$P12E`\n[1] \"0,0\"\n\n$`$P12N`\n[1] 
\"UV11-A\"\n\n$`$P12R`\n[1] \"4194304\"\n\n$`$P12TYPE`\n[1] \"Raw_Fluorescence\"\n\n$`$P12V`\n[1] \"469\"\n\n$`$P13B`\n[1] \"32\"\n\n$`$P13E`\n[1] \"0,0\"\n\n$`$P13N`\n[1] \"UV12-A\"\n\n$`$P13R`\n[1] \"4194304\"\n\n$`$P13TYPE`\n[1] \"Raw_Fluorescence\"\n\n$`$P13V`\n[1] \"434\"\n\n$`$P14B`\n[1] \"32\"\n\n$`$P14E`\n[1] \"0,0\"\n\n$`$P14N`\n[1] \"UV13-A\"\n\n$`$P14R`\n[1] \"4194304\"\n\n$`$P14TYPE`\n[1] \"Raw_Fluorescence\"\n\n$`$P14V`\n[1] \"564\"\n\n$`$P15B`\n[1] \"32\"\n\n$`$P15E`\n[1] \"0,0\"\n\n$`$P15N`\n[1] \"UV14-A\"\n\n$`$P15R`\n[1] \"4194304\"\n\n$`$P15TYPE`\n[1] \"Raw_Fluorescence\"\n\n$`$P15V`\n[1] \"975\"\n\n$`$P16B`\n[1] \"32\"\n\n$`$P16E`\n[1] \"0,0\"\n\n$`$P16N`\n[1] \"UV15-A\"\n\n$`$P16R`\n[1] \"4194304\"\n\n$`$P16TYPE`\n[1] \"Raw_Fluorescence\"\n\n$`$P16V`\n[1] \"737\"\n\n$`$P17B`\n[1] \"32\"\n\n$`$P17E`\n[1] \"0,0\"\n\n$`$P17N`\n[1] \"UV16-A\"\n\n$`$P17R`\n[1] \"4194304\"\n\n$`$P17TYPE`\n[1] \"Raw_Fluorescence\"\n\n$`$P17V`\n[1] \"1069\"\n\n$`$P18B`\n[1] \"32\"\n\n$`$P18E`\n[1] \"0,0\"\n\n$`$P18N`\n[1] \"SSC-H\"\n\n$`$P18R`\n[1] \"4194304\"\n\n$`$P18TYPE`\n[1] \"Side_Scatter\"\n\n$`$P18V`\n[1] \"334\"\n\n$`$P19B`\n[1] \"32\"\n\n$`$P19E`\n[1] \"0,0\"\n\n$`$P19N`\n[1] \"SSC-A\"\n\n$`$P19R`\n[1] \"4194304\"\n\n$`$P19TYPE`\n[1] \"Side_Scatter\"\n\n$`$P19V`\n[1] \"334\"\n\n$`$P1B`\n[1] \"32\"\n\n$`$P1E`\n[1] \"0,0\"\n\n$`$P1N`\n[1] \"Time\"\n\n$`$P1R`\n[1] \"272140\"\n\n$`$P1TYPE`\n[1] \"Time\"\n\n$`$P20B`\n[1] \"32\"\n\n$`$P20E`\n[1] \"0,0\"\n\n$`$P20N`\n[1] \"V1-A\"\n\n$`$P20R`\n[1] \"4194304\"\n\n$`$P20TYPE`\n[1] \"Raw_Fluorescence\"\n\n$`$P20V`\n[1] \"352\"\n\n$`$P21B`\n[1] \"32\"\n\n$`$P21E`\n[1] \"0,0\"\n\n$`$P21N`\n[1] \"V2-A\"\n\n$`$P21R`\n[1] \"4194304\"\n\n$`$P21TYPE`\n[1] \"Raw_Fluorescence\"\n\n$`$P21V`\n[1] \"412\"\n\n$`$P22B`\n[1] \"32\"\n\n$`$P22E`\n[1] \"0,0\"\n\n$`$P22N`\n[1] \"V3-A\"\n\n$`$P22R`\n[1] \"4194304\"\n\n$`$P22TYPE`\n[1] \"Raw_Fluorescence\"\n\n$`$P22V`\n[1] \"304\"\n\n$`$P23B`\n[1] \"32\"\n\n$`$P23E`\n[1] 
\"0,0\"\n\n$`$P23N`\n[1] \"V4-A\"\n\n$`$P23R`\n[1] \"4194304\"\n\n$`$P23TYPE`\n[1] \"Raw_Fluorescence\"\n\n$`$P23V`\n[1] \"217\"\n\n$`$P24B`\n[1] \"32\"\n\n$`$P24E`\n[1] \"0,0\"\n\n$`$P24N`\n[1] \"V5-A\"\n\n$`$P24R`\n[1] \"4194304\"\n\n$`$P24TYPE`\n[1] \"Raw_Fluorescence\"\n\n$`$P24V`\n[1] \"257\"\n\n$`$P25B`\n[1] \"32\"\n\n$`$P25E`\n[1] \"0,0\"\n\n$`$P25N`\n[1] \"V6-A\"\n\n$`$P25R`\n[1] \"4194304\"\n\n$`$P25TYPE`\n[1] \"Raw_Fluorescence\"\n\n$`$P25V`\n[1] \"218\"\n\n$`$P26B`\n[1] \"32\"\n\n$`$P26E`\n[1] \"0,0\"\n\n$`$P26N`\n[1] \"V7-A\"\n\n$`$P26R`\n[1] \"4194304\"\n\n$`$P26TYPE`\n[1] \"Raw_Fluorescence\"\n\n$`$P26V`\n[1] \"303\"\n\n$`$P27B`\n[1] \"32\"\n\n$`$P27E`\n[1] \"0,0\"\n\n$`$P27N`\n[1] \"V8-A\"\n\n$`$P27R`\n[1] \"4194304\"\n\n$`$P27TYPE`\n[1] \"Raw_Fluorescence\"\n\n$`$P27V`\n[1] \"461\"\n\n$`$P28B`\n[1] \"32\"\n\n$`$P28E`\n[1] \"0,0\"\n\n$`$P28N`\n[1] \"V9-A\"\n\n$`$P28R`\n[1] \"4194304\"\n\n$`$P28TYPE`\n[1] \"Raw_Fluorescence\"\n\n$`$P28V`\n[1] \"320\"\n\n$`$P29B`\n[1] \"32\"\n\n$`$P29E`\n[1] \"0,0\"\n\n$`$P29N`\n[1] \"V10-A\"\n\n$`$P29R`\n[1] \"4194304\"\n\n$`$P29TYPE`\n[1] \"Raw_Fluorescence\"\n\n$`$P29V`\n[1] \"359\"\n\n$`$P2B`\n[1] \"32\"\n\n$`$P2E`\n[1] \"0,0\"\n\n$`$P2N`\n[1] \"UV1-A\"\n\n$`$P2R`\n[1] \"4194304\"\n\n$`$P2TYPE`\n[1] \"Raw_Fluorescence\"\n\n$`$P2V`\n[1] \"1008\"\n\n$`$P30B`\n[1] \"32\"\n\n$`$P30E`\n[1] \"0,0\"\n\n$`$P30N`\n[1] \"V11-A\"\n\n$`$P30R`\n[1] \"4194304\"\n\n$`$P30TYPE`\n[1] \"Raw_Fluorescence\"\n\n$`$P30V`\n[1] \"271\"\n\n$`$P31B`\n[1] \"32\"\n\n$`$P31E`\n[1] \"0,0\"\n\n$`$P31N`\n[1] \"V12-A\"\n\n$`$P31R`\n[1] \"4194304\"\n\n$`$P31TYPE`\n[1] \"Raw_Fluorescence\"\n\n$`$P31V`\n[1] \"234\"\n\n$`$P32B`\n[1] \"32\"\n\n$`$P32E`\n[1] \"0,0\"\n\n$`$P32N`\n[1] \"V13-A\"\n\n$`$P32R`\n[1] \"4194304\"\n\n$`$P32TYPE`\n[1] \"Raw_Fluorescence\"\n\n$`$P32V`\n[1] \"236\"\n\n$`$P33B`\n[1] \"32\"\n\n$`$P33E`\n[1] \"0,0\"\n\n$`$P33N`\n[1] \"V14-A\"\n\n$`$P33R`\n[1] \"4194304\"\n\n$`$P33TYPE`\n[1] \"Raw_Fluorescence\"\n\n$`$P33V`\n[1] 
\"318\"\n\n$`$P34B`\n[1] \"32\"\n\n$`$P34E`\n[1] \"0,0\"\n\n$`$P34N`\n[1] \"V15-A\"\n\n$`$P34R`\n[1] \"4194304\"\n\n$`$P34TYPE`\n[1] \"Raw_Fluorescence\"\n\n$`$P34V`\n[1] \"602\"\n\n$`$P35B`\n[1] \"32\"\n\n$`$P35E`\n[1] \"0,0\"\n\n$`$P35N`\n[1] \"V16-A\"\n\n$`$P35R`\n[1] \"4194304\"\n\n$`$P35TYPE`\n[1] \"Raw_Fluorescence\"\n\n$`$P35V`\n[1] \"372\"\n\n$`$P36B`\n[1] \"32\"\n\n$`$P36E`\n[1] \"0,0\"\n\n$`$P36N`\n[1] \"FSC-H\"\n\n$`$P36R`\n[1] \"4194304\"\n\n$`$P36TYPE`\n[1] \"Forward_Scatter\"\n\n$`$P36V`\n[1] \"55\"\n\n$`$P37B`\n[1] \"32\"\n\n$`$P37E`\n[1] \"0,0\"\n\n$`$P37N`\n[1] \"FSC-A\"\n\n$`$P37R`\n[1] \"4194304\"\n\n$`$P37TYPE`\n[1] \"Forward_Scatter\"\n\n$`$P37V`\n[1] \"55\"\n\n$`$P38B`\n[1] \"32\"\n\n$`$P38E`\n[1] \"0,0\"\n\n$`$P38N`\n[1] \"SSC-B-H\"\n\n$`$P38R`\n[1] \"4194304\"\n\n$`$P38TYPE`\n[1] \"Side_Scatter\"\n\n$`$P38V`\n[1] \"241\"\n\n$`$P39B`\n[1] \"32\"\n\n$`$P39E`\n[1] \"0,0\"\n\n$`$P39N`\n[1] \"SSC-B-A\"\n\n$`$P39R`\n[1] \"4194304\"\n\n$`$P39TYPE`\n[1] \"Side_Scatter\"\n\n$`$P39V`\n[1] \"241\"\n\n$`$P3B`\n[1] \"32\"\n\n$`$P3E`\n[1] \"0,0\"\n\n$`$P3N`\n[1] \"UV2-A\"\n\n$`$P3R`\n[1] \"4194304\"\n\n$`$P3TYPE`\n[1] \"Raw_Fluorescence\"\n\n$`$P3V`\n[1] \"286\"\n\n$`$P40B`\n[1] \"32\"\n\n$`$P40E`\n[1] \"0,0\"\n\n$`$P40N`\n[1] \"B1-A\"\n\n$`$P40R`\n[1] \"4194304\"\n\n$`$P40TYPE`\n[1] \"Raw_Fluorescence\"\n\n$`$P40V`\n[1] \"1013\"\n\n$`$P41B`\n[1] \"32\"\n\n$`$P41E`\n[1] \"0,0\"\n\n$`$P41N`\n[1] \"B2-A\"\n\n$`$P41R`\n[1] \"4194304\"\n\n$`$P41TYPE`\n[1] \"Raw_Fluorescence\"\n\n$`$P41V`\n[1] \"483\"\n\n$`$P42B`\n[1] \"32\"\n\n$`$P42E`\n[1] \"0,0\"\n\n$`$P42N`\n[1] \"B3-A\"\n\n$`$P42R`\n[1] \"4194304\"\n\n$`$P42TYPE`\n[1] \"Raw_Fluorescence\"\n\n$`$P42V`\n[1] \"471\"\n\n$`$P43B`\n[1] \"32\"\n\n$`$P43E`\n[1] \"0,0\"\n\n$`$P43N`\n[1] \"B4-A\"\n\n$`$P43R`\n[1] \"4194304\"\n\n$`$P43TYPE`\n[1] \"Raw_Fluorescence\"\n\n$`$P43V`\n[1] \"473\"\n\n$`$P44B`\n[1] \"32\"\n\n$`$P44E`\n[1] \"0,0\"\n\n$`$P44N`\n[1] \"B5-A\"\n\n$`$P44R`\n[1] \"4194304\"\n\n$`$P44TYPE`\n[1] 
\"Raw_Fluorescence\"\n\n$`$P44V`\n[1] \"467\"\n\n$`$P45B`\n[1] \"32\"\n\n$`$P45E`\n[1] \"0,0\"\n\n$`$P45N`\n[1] \"B6-A\"\n\n$`$P45R`\n[1] \"4194304\"\n\n$`$P45TYPE`\n[1] \"Raw_Fluorescence\"\n\n$`$P45V`\n[1] \"284\"\n\n$`$P46B`\n[1] \"32\"\n\n$`$P46E`\n[1] \"0,0\"\n\n$`$P46N`\n[1] \"B7-A\"\n\n$`$P46R`\n[1] \"4194304\"\n\n$`$P46TYPE`\n[1] \"Raw_Fluorescence\"\n\n$`$P46V`\n[1] \"531\"\n\n$`$P47B`\n[1] \"32\"\n\n$`$P47E`\n[1] \"0,0\"\n\n$`$P47N`\n[1] \"B8-A\"\n\n$`$P47R`\n[1] \"4194304\"\n\n$`$P47TYPE`\n[1] \"Raw_Fluorescence\"\n\n$`$P47V`\n[1] \"432\"\n\n$`$P48B`\n[1] \"32\"\n\n$`$P48E`\n[1] \"0,0\"\n\n$`$P48N`\n[1] \"B9-A\"\n\n$`$P48R`\n[1] \"4194304\"\n\n$`$P48TYPE`\n[1] \"Raw_Fluorescence\"\n\n$`$P48V`\n[1] \"675\"\n\n$`$P49B`\n[1] \"32\"\n\n$`$P49E`\n[1] \"0,0\"\n\n$`$P49N`\n[1] \"B10-A\"\n\n$`$P49R`\n[1] \"4194304\"\n\n$`$P49TYPE`\n[1] \"Raw_Fluorescence\"\n\n$`$P49V`\n[1] \"490\"\n\n$`$P4B`\n[1] \"32\"\n\n$`$P4E`\n[1] \"0,0\"\n\n$`$P4N`\n[1] \"UV3-A\"\n\n$`$P4R`\n[1] \"4194304\"\n\n$`$P4TYPE`\n[1] \"Raw_Fluorescence\"\n\n$`$P4V`\n[1] \"677\"\n\n$`$P50B`\n[1] \"32\"\n\n$`$P50E`\n[1] \"0,0\"\n\n$`$P50N`\n[1] \"B11-A\"\n\n$`$P50R`\n[1] \"4194304\"\n\n$`$P50TYPE`\n[1] \"Raw_Fluorescence\"\n\n$`$P50V`\n[1] \"286\"\n\n$`$P51B`\n[1] \"32\"\n\n$`$P51E`\n[1] \"0,0\"\n\n$`$P51N`\n[1] \"B12-A\"\n\n$`$P51R`\n[1] \"4194304\"\n\n$`$P51TYPE`\n[1] \"Raw_Fluorescence\"\n\n$`$P51V`\n[1] \"407\"\n\n$`$P52B`\n[1] \"32\"\n\n$`$P52E`\n[1] \"0,0\"\n\n$`$P52N`\n[1] \"B13-A\"\n\n$`$P52R`\n[1] \"4194304\"\n\n$`$P52TYPE`\n[1] \"Raw_Fluorescence\"\n\n$`$P52V`\n[1] \"700\"\n\n$`$P53B`\n[1] \"32\"\n\n$`$P53E`\n[1] \"0,0\"\n\n$`$P53N`\n[1] \"B14-A\"\n\n$`$P53R`\n[1] \"4194304\"\n\n$`$P53TYPE`\n[1] \"Raw_Fluorescence\"\n\n$`$P53V`\n[1] \"693\"\n\n$`$P54B`\n[1] \"32\"\n\n$`$P54E`\n[1] \"0,0\"\n\n$`$P54N`\n[1] \"R1-A\"\n\n$`$P54R`\n[1] \"4194304\"\n\n$`$P54TYPE`\n[1] \"Raw_Fluorescence\"\n\n$`$P54V`\n[1] \"158\"\n\n$`$P55B`\n[1] \"32\"\n\n$`$P55E`\n[1] \"0,0\"\n\n$`$P55N`\n[1] 
\"R2-A\"\n\n$`$P55R`\n[1] \"4194304\"\n\n$`$P55TYPE`\n[1] \"Raw_Fluorescence\"\n\n$`$P55V`\n[1] \"245\"\n\n$`$P56B`\n[1] \"32\"\n\n$`$P56E`\n[1] \"0,0\"\n\n$`$P56N`\n[1] \"R3-A\"\n\n$`$P56R`\n[1] \"4194304\"\n\n$`$P56TYPE`\n[1] \"Raw_Fluorescence\"\n\n$`$P56V`\n[1] \"338\"\n\n$`$P57B`\n[1] \"32\"\n\n$`$P57E`\n[1] \"0,0\"\n\n$`$P57N`\n[1] \"R4-A\"\n\n$`$P57R`\n[1] \"4194304\"\n\n$`$P57TYPE`\n[1] \"Raw_Fluorescence\"\n\n$`$P57V`\n[1] \"238\"\n\n$`$P58B`\n[1] \"32\"\n\n$`$P58E`\n[1] \"0,0\"\n\n$`$P58N`\n[1] \"R5-A\"\n\n$`$P58R`\n[1] \"4194304\"\n\n$`$P58TYPE`\n[1] \"Raw_Fluorescence\"\n\n$`$P58V`\n[1] \"191\"\n\n$`$P59B`\n[1] \"32\"\n\n$`$P59E`\n[1] \"0,0\"\n\n$`$P59N`\n[1] \"R6-A\"\n\n$`$P59R`\n[1] \"4194304\"\n\n$`$P59TYPE`\n[1] \"Raw_Fluorescence\"\n\n$`$P59V`\n[1] \"274\"\n\n$`$P5B`\n[1] \"32\"\n\n$`$P5E`\n[1] \"0,0\"\n\n$`$P5N`\n[1] \"UV4-A\"\n\n$`$P5R`\n[1] \"4194304\"\n\n$`$P5TYPE`\n[1] \"Raw_Fluorescence\"\n\n$`$P5V`\n[1] \"1022\"\n\n$`$P60B`\n[1] \"32\"\n\n$`$P60E`\n[1] \"0,0\"\n\n$`$P60N`\n[1] \"R7-A\"\n\n$`$P60R`\n[1] \"4194304\"\n\n$`$P60TYPE`\n[1] \"Raw_Fluorescence\"\n\n$`$P60V`\n[1] \"524\"\n\n$`$P61B`\n[1] \"32\"\n\n$`$P61E`\n[1] \"0,0\"\n\n$`$P61N`\n[1] \"R8-A\"\n\n$`$P61R`\n[1] \"4194304\"\n\n$`$P61TYPE`\n[1] \"Raw_Fluorescence\"\n\n$`$P61V`\n[1] \"243\"\n\n$`$P6B`\n[1] \"32\"\n\n$`$P6E`\n[1] \"0,0\"\n\n$`$P6N`\n[1] \"UV5-A\"\n\n$`$P6R`\n[1] \"4194304\"\n\n$`$P6TYPE`\n[1] \"Raw_Fluorescence\"\n\n$`$P6V`\n[1] \"616\"\n\n$`$P7B`\n[1] \"32\"\n\n$`$P7E`\n[1] \"0,0\"\n\n$`$P7N`\n[1] \"UV6-A\"\n\n$`$P7R`\n[1] \"4194304\"\n\n$`$P7TYPE`\n[1] \"Raw_Fluorescence\"\n\n$`$P7V`\n[1] \"506\"\n\n$`$P8B`\n[1] \"32\"\n\n$`$P8E`\n[1] \"0,0\"\n\n$`$P8N`\n[1] \"UV7-A\"\n\n$`$P8R`\n[1] \"4194304\"\n\n$`$P8TYPE`\n[1] \"Raw_Fluorescence\"\n\n$`$P8V`\n[1] \"661\"\n\n$`$P9B`\n[1] \"32\"\n\n$`$P9E`\n[1] \"0,0\"\n\n$`$P9N`\n[1] \"UV8-A\"\n\n$`$P9R`\n[1] \"4194304\"\n\n$`$P9TYPE`\n[1] \"Raw_Fluorescence\"\n\n$`$P9V`\n[1] \"514\"\n\n\nFortunately for all involved, there is a 
consistently repeating pattern for the keywords corresponding to each detector. We can see that here for $P7B, $P7E, $P7N, $P7R, $P7TYPE, $P7V\n\nWhen referencing to the Flow Cytometry Standard documentation, here are what the particular keyword letters mean:\n\n\n\n\n\n\nB\n\n\n\nNumber of bits reserved for parameter number n\n\n\n\n\nDescriptionList$`$P7B`\n\n[1] \"32\"\n\n\n\n\n\n\n\n\n\nE\n\n\n\nAmplification type for parameter n. \n\n\n\n\nDescriptionList$`$P7E`\n\n[1] \"0,0\"\n\n\n\n\n\n\n\n\n\n\n\nN\n\n\n\nShort Name for parameter n. \n\n\n\n\nDescriptionList$`$P7N`\n\n[1] \"UV6-A\"\n\n\n\n\n\n\n\n\n\nR\n\n\n\nRange for parameter number n. \n\n\n\n\nDescriptionList$`$P7R`\n\n[1] \"4194304\"\n\n\n\n\n\n\n\n\n\n\n\nTYPE\n\n\n\nDetector type for parameter n. \n\n\n\n\nDescriptionList$`$P7TYPE`\n\n[1] \"Raw_Fluorescence\"\n\n\n\n\n\n\n\n\n\nV\n\n\n\nDetector voltage for parameter n. \n\n\n\n\nDescriptionList$`$P7V`\n\n[1] \"506\"\n\n\n\n\n\nWhile not immediately obvious, understanding what these keywords encoded has proven useful for our core. In our case, we have built an automated InstrumentQC dashboard for all the instruments at our core.\n\n\n\nBy extracting out from our daily QC bead .fcs files the stored N (Detector Name) and V (Gain/Voltage) values for all the individual detectors, it allows us to plot Levey-Jennings Plots for our individual instruments, giving us usually around a months warning before an individual laser begins to fail. This helps with scheduling the Field-Service Engineer visit before it starts impacting the actual data.\n\n\n\nWhile most of the detectors keywords are similar (only changing there individual name and voltage) there are a couple exceptions.\nFor the FSC/SSC parameters, instead of Raw_Fluorescence value for Type, we see the corresponding Scatter value get return. 
This in term is what is used by various commercial softwares to show those axis as linear instead of biexponential when selected.\n\n\n\nThis is similarly the case for the Time parameter, where in addition to Type being set to Time, the range also appears different to Raw/Scatters value.", + "objectID": "course/04_IntroToTidyverse/index.html#relocate", + "href": "course/04_IntroToTidyverse/index.html#relocate", + "title": "04 - Introduction to Tidyverse", + "section": "relocate", + "text": "relocate\nAlternatively, we occasionally want to move one column. While we could respecify the location using select(), specifying the names of all the other columns in a line of code just to rearrange one does not sound like a good use of time. For this reason, the second dplyr function we will be learning is the relocate() function.\nLooking at our Data object, let’s say we wanted to move the Tcells column from its current location to the second column position (right after the bid column). The line of code to do so would look like:\n\nData |> relocate(Tcells, .after=bid) |> head(3)\n\n bid Tcells timepoint Condition Date infant_sex ptype root\n1 INF0052 0.2804264 0 Ctrl 2025-07-26 Male HEU-hi 2098368\n2 INF0100 0.6748298 0 Ctrl 2025-07-26 Male HEU-lo 2020184\n3 INF0100 0.6119129 4 Ctrl 2025-07-26 Male HEU-lo 1155040\n singletsFSC singletsSSC singletsSSCB CD45 NotMonocytes nonDebris\n1 1894070 1666179 1537396 0.5952943 0.8820349 0.8627649\n2 1791890 1697083 1579098 0.9106762 0.9052256 0.8602660\n3 1033320 875465 845446 0.9705765 0.9845400 0.9578793\n lymphocytes live Dump+ Dump- Vd2+ Vd2- Va7.2+\n1 0.6420138 0.9020581 0.21090996 0.6911482 0.008120361 0.9918796 0.01448070\n2 0.2145848 0.8908981 0.06252775 0.8283703 0.007265620 0.9927344 0.01577499\n3 0.7403110 0.8757665 0.20023803 0.6755285 0.004651313 0.9953487 0.01579402\n Va7.2- CD4+ CD4- CD8+ CD8- Tcells_count\n1 0.9773989 0.6341164 0.3432825 0.2734826 0.06979990 164771\n2 0.9769594 0.6119112 0.3650482 0.3357696 
0.02927858 208241\n3 0.9795547 0.6639621 0.3155925 0.2862104 0.02938209 371723\n lymphocytes_count Monocytes Debris CD45_count\n1 587573 0.11796509 0.13723513 915203\n2 308583 0.09477437 0.13973396 1438047\n3 607477 0.01545999 0.04212072 820570\n\n# |> head(3) is used only to make the website output visualization manageable :D\n\nSimilar to what we saw with select(), this approach can also be used for more than 1 column:\n\nData |> relocate(Tcells, Monocytes, .after=bid) |> head(3)\n\n bid Tcells Monocytes timepoint Condition Date infant_sex ptype\n1 INF0052 0.2804264 0.11796509 0 Ctrl 2025-07-26 Male HEU-hi\n2 INF0100 0.6748298 0.09477437 0 Ctrl 2025-07-26 Male HEU-lo\n3 INF0100 0.6119129 0.01545999 4 Ctrl 2025-07-26 Male HEU-lo\n root singletsFSC singletsSSC singletsSSCB CD45 NotMonocytes nonDebris\n1 2098368 1894070 1666179 1537396 0.5952943 0.8820349 0.8627649\n2 2020184 1791890 1697083 1579098 0.9106762 0.9052256 0.8602660\n3 1155040 1033320 875465 845446 0.9705765 0.9845400 0.9578793\n lymphocytes live Dump+ Dump- Vd2+ Vd2- Va7.2+\n1 0.6420138 0.9020581 0.21090996 0.6911482 0.008120361 0.9918796 0.01448070\n2 0.2145848 0.8908981 0.06252775 0.8283703 0.007265620 0.9927344 0.01577499\n3 0.7403110 0.8757665 0.20023803 0.6755285 0.004651313 0.9953487 0.01579402\n Va7.2- CD4+ CD4- CD8+ CD8- Tcells_count\n1 0.9773989 0.6341164 0.3432825 0.2734826 0.06979990 164771\n2 0.9769594 0.6119112 0.3650482 0.3357696 0.02927858 208241\n3 0.9795547 0.6639621 0.3155925 0.2862104 0.02938209 371723\n lymphocytes_count Debris CD45_count\n1 587573 0.13723513 915203\n2 308583 0.13973396 1438047\n3 607477 0.04212072 820570\n\n# |> head(3) is used only to make the website output visualization manageable :D\n\nWe can also modify the argument so that columns are placed before a certain column\n\nData |> relocate(Tcells, .before=Date) |> head(3)\n\n bid timepoint Condition Tcells Date infant_sex ptype root\n1 INF0052 0 Ctrl 0.2804264 2025-07-26 Male HEU-hi 2098368\n2 INF0100 0 Ctrl 
0.6748298 2025-07-26 Male HEU-lo 2020184\n3 INF0100 4 Ctrl 0.6119129 2025-07-26 Male HEU-lo 1155040\n singletsFSC singletsSSC singletsSSCB CD45 NotMonocytes nonDebris\n1 1894070 1666179 1537396 0.5952943 0.8820349 0.8627649\n2 1791890 1697083 1579098 0.9106762 0.9052256 0.8602660\n3 1033320 875465 845446 0.9705765 0.9845400 0.9578793\n lymphocytes live Dump+ Dump- Vd2+ Vd2- Va7.2+\n1 0.6420138 0.9020581 0.21090996 0.6911482 0.008120361 0.9918796 0.01448070\n2 0.2145848 0.8908981 0.06252775 0.8283703 0.007265620 0.9927344 0.01577499\n3 0.7403110 0.8757665 0.20023803 0.6755285 0.004651313 0.9953487 0.01579402\n Va7.2- CD4+ CD4- CD8+ CD8- Tcells_count\n1 0.9773989 0.6341164 0.3432825 0.2734826 0.06979990 164771\n2 0.9769594 0.6119112 0.3650482 0.3357696 0.02927858 208241\n3 0.9795547 0.6639621 0.3155925 0.2862104 0.02938209 371723\n lymphocytes_count Monocytes Debris CD45_count\n1 587573 0.11796509 0.13723513 915203\n2 308583 0.09477437 0.13973396 1438047\n3 607477 0.01545999 0.04212072 820570\n\n# |> head(3) is used only to make the website output visualization manageable :D\n\nAnd as we might suspect, we could specify a column index location rather than using a column name.\n\nData |> relocate(Date, .before=1) |> head(3)\n\n Date bid timepoint Condition infant_sex ptype root singletsFSC\n1 2025-07-26 INF0052 0 Ctrl Male HEU-hi 2098368 1894070\n2 2025-07-26 INF0100 0 Ctrl Male HEU-lo 2020184 1791890\n3 2025-07-26 INF0100 4 Ctrl Male HEU-lo 1155040 1033320\n singletsSSC singletsSSCB CD45 NotMonocytes nonDebris lymphocytes\n1 1666179 1537396 0.5952943 0.8820349 0.8627649 0.6420138\n2 1697083 1579098 0.9106762 0.9052256 0.8602660 0.2145848\n3 875465 845446 0.9705765 0.9845400 0.9578793 0.7403110\n live Dump+ Dump- Tcells Vd2+ Vd2- Va7.2+\n1 0.9020581 0.21090996 0.6911482 0.2804264 0.008120361 0.9918796 0.01448070\n2 0.8908981 0.06252775 0.8283703 0.6748298 0.007265620 0.9927344 0.01577499\n3 0.8757665 0.20023803 0.6755285 0.6119129 0.004651313 0.9953487 0.01579402\n 
Va7.2- CD4+ CD4- CD8+ CD8- Tcells_count\n1 0.9773989 0.6341164 0.3432825 0.2734826 0.06979990 164771\n2 0.9769594 0.6119112 0.3650482 0.3357696 0.02927858 208241\n3 0.9795547 0.6639621 0.3155925 0.2862104 0.02938209 371723\n lymphocytes_count Monocytes Debris CD45_count\n1 587573 0.11796509 0.13723513 915203\n2 308583 0.09477437 0.13973396 1438047\n3 607477 0.01545999 0.04212072 820570\n\n# |> head(3) is used only to make the website output visualization manageable :D", "crumbs": [ "About", "Intro to R", - "03 - Inside a .FCS file" + "04 - Intro to Tidyverse" ] }, { - "objectID": "course/03_InsideFCSFile/index.html#middle-metadata", - "href": "course/03_InsideFCSFile/index.html#middle-metadata", - "title": "03 - Inside an FCS File", - "section": "Middle Metadata", - "text": "Middle Metadata\nOnce we are out of the detector keywords, we find the last of the $Metadata associated keywords.\n\nDetectors <- DescriptionList[385:398]\nDetectors\n\n$`$PAR`\n[1] \"61\"\n\n$`$PROJ`\n[1] \"CellCounts4L_AB_05\"\n\n$`$SPILLOVER`\n UV1-A UV2-A UV3-A UV4-A UV5-A UV6-A UV7-A UV8-A UV9-A UV10-A UV11-A\n [1,] 1e+00 0 0 0 0 0 0 0 0 0 0\n [2,] 1e-06 1 0 0 0 0 0 0 0 0 0\n [3,] 0e+00 0 1 0 0 0 0 0 0 0 0\n [4,] 0e+00 0 0 1 0 0 0 0 0 0 0\n [5,] 0e+00 0 0 0 1 0 0 0 0 0 0\n [6,] 0e+00 0 0 0 0 1 0 0 0 0 0\n [7,] 0e+00 0 0 0 0 0 1 0 0 0 0\n [8,] 0e+00 0 0 0 0 0 0 1 0 0 0\n [9,] 0e+00 0 0 0 0 0 0 0 1 0 0\n[10,] 0e+00 0 0 0 0 0 0 0 0 1 0\n[11,] 0e+00 0 0 0 0 0 0 0 0 0 1\n[12,] 0e+00 0 0 0 0 0 0 0 0 0 0\n[13,] 0e+00 0 0 0 0 0 0 0 0 0 0\n[14,] 0e+00 0 0 0 0 0 0 0 0 0 0\n[15,] 0e+00 0 0 0 0 0 0 0 0 0 0\n[16,] 0e+00 0 0 0 0 0 0 0 0 0 0\n[17,] 0e+00 0 0 0 0 0 0 0 0 0 0\n[18,] 0e+00 0 0 0 0 0 0 0 0 0 0\n[19,] 0e+00 0 0 0 0 0 0 0 0 0 0\n[20,] 0e+00 0 0 0 0 0 0 0 0 0 0\n[21,] 0e+00 0 0 0 0 0 0 0 0 0 0\n[22,] 0e+00 0 0 0 0 0 0 0 0 0 0\n[23,] 0e+00 0 0 0 0 0 0 0 0 0 0\n[24,] 0e+00 0 0 0 0 0 0 0 0 0 0\n[25,] 0e+00 0 0 0 0 0 0 0 0 0 0\n[26,] 0e+00 0 0 0 0 0 0 0 0 0 0\n[27,] 0e+00 0 0 0 0 0 0 0 0 0 0\n[28,] 
0e+00 0 0 0 0 0 0 0 0 0 0\n[29,] 0e+00 0 0 0 0 0 0 0 0 0 0\n[30,] 0e+00 0 0 0 0 0 0 0 0 0 0\n[31,] 0e+00 0 0 0 0 0 0 0 0 0 0\n[32,] 0e+00 0 0 0 0 0 0 0 0 0 0\n[33,] 0e+00 0 0 0 0 0 0 0 0 0 0\n[34,] 0e+00 0 0 0 0 0 0 0 0 0 0\n[35,] 0e+00 0 0 0 0 0 0 0 0 0 0\n[36,] 0e+00 0 0 0 0 0 0 0 0 0 0\n[37,] 0e+00 0 0 0 0 0 0 0 0 0 0\n[38,] 0e+00 0 0 0 0 0 0 0 0 0 0\n[39,] 0e+00 0 0 0 0 0 0 0 0 0 0\n[40,] 0e+00 0 0 0 0 0 0 0 0 0 0\n[41,] 0e+00 0 0 0 0 0 0 0 0 0 0\n[42,] 0e+00 0 0 0 0 0 0 0 0 0 0\n[43,] 0e+00 0 0 0 0 0 0 0 0 0 0\n[44,] 0e+00 0 0 0 0 0 0 0 0 0 0\n[45,] 0e+00 0 0 0 0 0 0 0 0 0 0\n[46,] 0e+00 0 0 0 0 0 0 0 0 0 0\n[47,] 0e+00 0 0 0 0 0 0 0 0 0 0\n[48,] 0e+00 0 0 0 0 0 0 0 0 0 0\n[49,] 0e+00 0 0 0 0 0 0 0 0 0 0\n[50,] 0e+00 0 0 0 0 0 0 0 0 0 0\n[51,] 0e+00 0 0 0 0 0 0 0 0 0 0\n[52,] 0e+00 0 0 0 0 0 0 0 0 0 0\n[53,] 0e+00 0 0 0 0 0 0 0 0 0 0\n[54,] 0e+00 0 0 0 0 0 0 0 0 0 0\n UV12-A UV13-A UV14-A UV15-A UV16-A V1-A V2-A V3-A V4-A V5-A V6-A V7-A\n [1,] 0 0 0 0 0 0 0 0 0 0 0 0\n [2,] 0 0 0 0 0 0 0 0 0 0 0 0\n [3,] 0 0 0 0 0 0 0 0 0 0 0 0\n [4,] 0 0 0 0 0 0 0 0 0 0 0 0\n [5,] 0 0 0 0 0 0 0 0 0 0 0 0\n [6,] 0 0 0 0 0 0 0 0 0 0 0 0\n [7,] 0 0 0 0 0 0 0 0 0 0 0 0\n [8,] 0 0 0 0 0 0 0 0 0 0 0 0\n [9,] 0 0 0 0 0 0 0 0 0 0 0 0\n[10,] 0 0 0 0 0 0 0 0 0 0 0 0\n[11,] 0 0 0 0 0 0 0 0 0 0 0 0\n[12,] 1 0 0 0 0 0 0 0 0 0 0 0\n[13,] 0 1 0 0 0 0 0 0 0 0 0 0\n[14,] 0 0 1 0 0 0 0 0 0 0 0 0\n[15,] 0 0 0 1 0 0 0 0 0 0 0 0\n[16,] 0 0 0 0 1 0 0 0 0 0 0 0\n[17,] 0 0 0 0 0 1 0 0 0 0 0 0\n[18,] 0 0 0 0 0 0 1 0 0 0 0 0\n[19,] 0 0 0 0 0 0 0 1 0 0 0 0\n[20,] 0 0 0 0 0 0 0 0 1 0 0 0\n[21,] 0 0 0 0 0 0 0 0 0 1 0 0\n[22,] 0 0 0 0 0 0 0 0 0 0 1 0\n[23,] 0 0 0 0 0 0 0 0 0 0 0 1\n[24,] 0 0 0 0 0 0 0 0 0 0 0 0\n[25,] 0 0 0 0 0 0 0 0 0 0 0 0\n[26,] 0 0 0 0 0 0 0 0 0 0 0 0\n[27,] 0 0 0 0 0 0 0 0 0 0 0 0\n[28,] 0 0 0 0 0 0 0 0 0 0 0 0\n[29,] 0 0 0 0 0 0 0 0 0 0 0 0\n[30,] 0 0 0 0 0 0 0 0 0 0 0 0\n[31,] 0 0 0 0 0 0 0 0 0 0 0 0\n[32,] 0 0 0 0 0 0 0 0 0 0 0 0\n[33,] 0 0 0 0 0 0 0 0 0 0 0 0\n[34,] 0 0 0 0 0 0 0 
0 0 0 0 0\n[35,] 0 0 0 0 0 0 0 0 0 0 0 0\n[36,] 0 0 0 0 0 0 0 0 0 0 0 0\n[37,] 0 0 0 0 0 0 0 0 0 0 0 0\n[38,] 0 0 0 0 0 0 0 0 0 0 0 0\n[39,] 0 0 0 0 0 0 0 0 0 0 0 0\n[40,] 0 0 0 0 0 0 0 0 0 0 0 0\n[41,] 0 0 0 0 0 0 0 0 0 0 0 0\n[42,] 0 0 0 0 0 0 0 0 0 0 0 0\n[43,] 0 0 0 0 0 0 0 0 0 0 0 0\n[44,] 0 0 0 0 0 0 0 0 0 0 0 0\n[45,] 0 0 0 0 0 0 0 0 0 0 0 0\n[46,] 0 0 0 0 0 0 0 0 0 0 0 0\n[47,] 0 0 0 0 0 0 0 0 0 0 0 0\n[48,] 0 0 0 0 0 0 0 0 0 0 0 0\n[49,] 0 0 0 0 0 0 0 0 0 0 0 0\n[50,] 0 0 0 0 0 0 0 0 0 0 0 0\n[51,] 0 0 0 0 0 0 0 0 0 0 0 0\n[52,] 0 0 0 0 0 0 0 0 0 0 0 0\n[53,] 0 0 0 0 0 0 0 0 0 0 0 0\n[54,] 0 0 0 0 0 0 0 0 0 0 0 0\n V8-A V9-A V10-A V11-A V12-A V13-A V14-A V15-A V16-A B1-A B2-A B3-A B4-A\n [1,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n [2,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n [3,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n [4,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n [5,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n [6,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n [7,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n [8,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n [9,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n[10,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n[11,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n[12,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n[13,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n[14,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n[15,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n[16,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n[17,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n[18,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n[19,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n[20,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n[21,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n[22,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n[23,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n[24,] 1 0 0 0 0 0 0 0 0 0 0 0 0\n[25,] 0 1 0 0 0 0 0 0 0 0 0 0 0\n[26,] 0 0 1 0 0 0 0 0 0 0 0 0 0\n[27,] 0 0 0 1 0 0 0 0 0 0 0 0 0\n[28,] 0 0 0 0 1 0 0 0 0 0 0 0 0\n[29,] 0 0 0 0 0 1 0 0 0 0 0 0 0\n[30,] 0 0 0 0 0 0 1 0 0 0 0 0 0\n[31,] 0 0 0 0 0 0 0 1 0 0 0 0 0\n[32,] 0 0 0 0 0 0 0 0 1 0 0 0 0\n[33,] 0 0 0 0 0 0 0 0 0 1 0 0 0\n[34,] 0 0 0 0 0 0 0 0 0 0 1 0 0\n[35,] 0 0 0 0 0 0 0 0 0 0 0 1 0\n[36,] 0 0 0 0 0 0 0 0 0 0 0 0 1\n[37,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n[38,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n[39,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n[40,] 0 
0 0 0 0 0 0 0 0 0 0 0 0\n[41,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n[42,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n[43,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n[44,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n[45,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n[46,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n[47,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n[48,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n[49,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n[50,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n[51,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n[52,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n[53,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n[54,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n B5-A B6-A B7-A B8-A B9-A B10-A B11-A B12-A B13-A B14-A R1-A R2-A R3-A\n [1,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n [2,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n [3,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n [4,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n [5,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n [6,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n [7,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n [8,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n [9,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n[10,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n[11,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n[12,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n[13,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n[14,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n[15,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n[16,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n[17,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n[18,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n[19,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n[20,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n[21,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n[22,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n[23,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n[24,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n[25,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n[26,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n[27,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n[28,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n[29,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n[30,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n[31,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n[32,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n[33,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n[34,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n[35,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n[36,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n[37,] 1 0 0 0 0 0 0 0 0 0 0 0 0\n[38,] 0 1 0 0 0 0 0 0 0 0 0 0 0\n[39,] 0 0 1 0 0 0 0 0 0 0 0 0 0\n[40,] 0 0 0 1 0 0 0 0 0 0 0 0 0\n[41,] 0 0 0 0 1 0 0 0 0 0 0 0 0\n[42,] 0 0 0 0 0 1 0 0 0 0 0 0 0\n[43,] 0 0 0 0 0 0 1 0 0 0 0 0 0\n[44,] 0 0 0 0 0 0 0 1 
0 0 0 0 0\n[45,] 0 0 0 0 0 0 0 0 1 0 0 0 0\n[46,] 0 0 0 0 0 0 0 0 0 1 0 0 0\n[47,] 0 0 0 0 0 0 0 0 0 0 1 0 0\n[48,] 0 0 0 0 0 0 0 0 0 0 0 1 0\n[49,] 0 0 0 0 0 0 0 0 0 0 0 0 1\n[50,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n[51,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n[52,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n[53,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n[54,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n R4-A R5-A R6-A R7-A R8-A\n [1,] 0 0 0 0 0\n [2,] 0 0 0 0 0\n [3,] 0 0 0 0 0\n [4,] 0 0 0 0 0\n [5,] 0 0 0 0 0\n [6,] 0 0 0 0 0\n [7,] 0 0 0 0 0\n [8,] 0 0 0 0 0\n [9,] 0 0 0 0 0\n[10,] 0 0 0 0 0\n[11,] 0 0 0 0 0\n[12,] 0 0 0 0 0\n[13,] 0 0 0 0 0\n[14,] 0 0 0 0 0\n[15,] 0 0 0 0 0\n[16,] 0 0 0 0 0\n[17,] 0 0 0 0 0\n[18,] 0 0 0 0 0\n[19,] 0 0 0 0 0\n[20,] 0 0 0 0 0\n[21,] 0 0 0 0 0\n[22,] 0 0 0 0 0\n[23,] 0 0 0 0 0\n[24,] 0 0 0 0 0\n[25,] 0 0 0 0 0\n[26,] 0 0 0 0 0\n[27,] 0 0 0 0 0\n[28,] 0 0 0 0 0\n[29,] 0 0 0 0 0\n[30,] 0 0 0 0 0\n[31,] 0 0 0 0 0\n[32,] 0 0 0 0 0\n[33,] 0 0 0 0 0\n[34,] 0 0 0 0 0\n[35,] 0 0 0 0 0\n[36,] 0 0 0 0 0\n[37,] 0 0 0 0 0\n[38,] 0 0 0 0 0\n[39,] 0 0 0 0 0\n[40,] 0 0 0 0 0\n[41,] 0 0 0 0 0\n[42,] 0 0 0 0 0\n[43,] 0 0 0 0 0\n[44,] 0 0 0 0 0\n[45,] 0 0 0 0 0\n[46,] 0 0 0 0 0\n[47,] 0 0 0 0 0\n[48,] 0 0 0 0 0\n[49,] 0 0 0 0 0\n[50,] 1 0 0 0 0\n[51,] 0 1 0 0 0\n[52,] 0 0 1 0 0\n[53,] 0 0 0 1 0\n[54,] 0 0 0 0 1\n\n$`$TIMESTEP`\n[1] \"0.0001\"\n\n$`$TOT`\n[1] \"100\"\n\n$`$VOL`\n[1] \"30.31\"\n\n$`APPLY COMPENSATION`\n[1] \"FALSE\"\n\n$CHARSET\n[1] \"utf-8\"\n\n$CREATOR\n[1] \"SpectroFlo 3.3.0\"\n\n$FCSversion\n[1] \"3\"\n\n$FILENAME\n[1] \"data/CellCounts4L_AB_05_ND050_05.fcs\"\n\n$`FSC ASF`\n[1] \"1.21\"\n\n$GROUPNAME\n[1] \"ND050\"\n\n$GUID\n[1] \"CellCounts4L_AB_05-ND050-05.fcs\"\n\n\nAmong those of potential interest\n\n\n\n\n\n\nProj\n\n\n\nOften corresponding to the experiment file name\n\n\n\n\nDescriptionList$`$PROJ`\n\n[1] \"CellCounts4L_AB_05\"\n\n\n\n\n\n\n\n\n\nSpillover\n\n\n\nWhere the internal spillover matrix is stored (we will revisit during 
compensation)\n\n\n\n\nDescriptionList$`$SPILLOVER`\n\n UV1-A UV2-A UV3-A UV4-A UV5-A UV6-A UV7-A UV8-A UV9-A UV10-A UV11-A\n [1,] 1e+00 0 0 0 0 0 0 0 0 0 0\n [2,] 1e-06 1 0 0 0 0 0 0 0 0 0\n [3,] 0e+00 0 1 0 0 0 0 0 0 0 0\n [4,] 0e+00 0 0 1 0 0 0 0 0 0 0\n [5,] 0e+00 0 0 0 1 0 0 0 0 0 0\n [6,] 0e+00 0 0 0 0 1 0 0 0 0 0\n [7,] 0e+00 0 0 0 0 0 1 0 0 0 0\n [8,] 0e+00 0 0 0 0 0 0 1 0 0 0\n [9,] 0e+00 0 0 0 0 0 0 0 1 0 0\n[10,] 0e+00 0 0 0 0 0 0 0 0 1 0\n[11,] 0e+00 0 0 0 0 0 0 0 0 0 1\n[12,] 0e+00 0 0 0 0 0 0 0 0 0 0\n[13,] 0e+00 0 0 0 0 0 0 0 0 0 0\n[14,] 0e+00 0 0 0 0 0 0 0 0 0 0\n[15,] 0e+00 0 0 0 0 0 0 0 0 0 0\n[16,] 0e+00 0 0 0 0 0 0 0 0 0 0\n[17,] 0e+00 0 0 0 0 0 0 0 0 0 0\n[18,] 0e+00 0 0 0 0 0 0 0 0 0 0\n[19,] 0e+00 0 0 0 0 0 0 0 0 0 0\n[20,] 0e+00 0 0 0 0 0 0 0 0 0 0\n[21,] 0e+00 0 0 0 0 0 0 0 0 0 0\n[22,] 0e+00 0 0 0 0 0 0 0 0 0 0\n[23,] 0e+00 0 0 0 0 0 0 0 0 0 0\n[24,] 0e+00 0 0 0 0 0 0 0 0 0 0\n[25,] 0e+00 0 0 0 0 0 0 0 0 0 0\n[26,] 0e+00 0 0 0 0 0 0 0 0 0 0\n[27,] 0e+00 0 0 0 0 0 0 0 0 0 0\n[28,] 0e+00 0 0 0 0 0 0 0 0 0 0\n[29,] 0e+00 0 0 0 0 0 0 0 0 0 0\n[30,] 0e+00 0 0 0 0 0 0 0 0 0 0\n[31,] 0e+00 0 0 0 0 0 0 0 0 0 0\n[32,] 0e+00 0 0 0 0 0 0 0 0 0 0\n[33,] 0e+00 0 0 0 0 0 0 0 0 0 0\n[34,] 0e+00 0 0 0 0 0 0 0 0 0 0\n[35,] 0e+00 0 0 0 0 0 0 0 0 0 0\n[36,] 0e+00 0 0 0 0 0 0 0 0 0 0\n[37,] 0e+00 0 0 0 0 0 0 0 0 0 0\n[38,] 0e+00 0 0 0 0 0 0 0 0 0 0\n[39,] 0e+00 0 0 0 0 0 0 0 0 0 0\n[40,] 0e+00 0 0 0 0 0 0 0 0 0 0\n[41,] 0e+00 0 0 0 0 0 0 0 0 0 0\n[42,] 0e+00 0 0 0 0 0 0 0 0 0 0\n[43,] 0e+00 0 0 0 0 0 0 0 0 0 0\n[44,] 0e+00 0 0 0 0 0 0 0 0 0 0\n[45,] 0e+00 0 0 0 0 0 0 0 0 0 0\n[46,] 0e+00 0 0 0 0 0 0 0 0 0 0\n[47,] 0e+00 0 0 0 0 0 0 0 0 0 0\n[48,] 0e+00 0 0 0 0 0 0 0 0 0 0\n[49,] 0e+00 0 0 0 0 0 0 0 0 0 0\n[50,] 0e+00 0 0 0 0 0 0 0 0 0 0\n[51,] 0e+00 0 0 0 0 0 0 0 0 0 0\n[52,] 0e+00 0 0 0 0 0 0 0 0 0 0\n[53,] 0e+00 0 0 0 0 0 0 0 0 0 0\n[54,] 0e+00 0 0 0 0 0 0 0 0 0 0\n UV12-A UV13-A UV14-A UV15-A UV16-A V1-A V2-A V3-A V4-A V5-A V6-A V7-A\n [1,] 0 0 0 0 0 0 0 
0 0 0 0 0\n [2,] 0 0 0 0 0 0 0 0 0 0 0 0\n [3,] 0 0 0 0 0 0 0 0 0 0 0 0\n [4,] 0 0 0 0 0 0 0 0 0 0 0 0\n [5,] 0 0 0 0 0 0 0 0 0 0 0 0\n [6,] 0 0 0 0 0 0 0 0 0 0 0 0\n [7,] 0 0 0 0 0 0 0 0 0 0 0 0\n [8,] 0 0 0 0 0 0 0 0 0 0 0 0\n [9,] 0 0 0 0 0 0 0 0 0 0 0 0\n[10,] 0 0 0 0 0 0 0 0 0 0 0 0\n[11,] 0 0 0 0 0 0 0 0 0 0 0 0\n[12,] 1 0 0 0 0 0 0 0 0 0 0 0\n[13,] 0 1 0 0 0 0 0 0 0 0 0 0\n[14,] 0 0 1 0 0 0 0 0 0 0 0 0\n[15,] 0 0 0 1 0 0 0 0 0 0 0 0\n[16,] 0 0 0 0 1 0 0 0 0 0 0 0\n[17,] 0 0 0 0 0 1 0 0 0 0 0 0\n[18,] 0 0 0 0 0 0 1 0 0 0 0 0\n[19,] 0 0 0 0 0 0 0 1 0 0 0 0\n[20,] 0 0 0 0 0 0 0 0 1 0 0 0\n[21,] 0 0 0 0 0 0 0 0 0 1 0 0\n[22,] 0 0 0 0 0 0 0 0 0 0 1 0\n[23,] 0 0 0 0 0 0 0 0 0 0 0 1\n[24,] 0 0 0 0 0 0 0 0 0 0 0 0\n[25,] 0 0 0 0 0 0 0 0 0 0 0 0\n[26,] 0 0 0 0 0 0 0 0 0 0 0 0\n[27,] 0 0 0 0 0 0 0 0 0 0 0 0\n[28,] 0 0 0 0 0 0 0 0 0 0 0 0\n[29,] 0 0 0 0 0 0 0 0 0 0 0 0\n[30,] 0 0 0 0 0 0 0 0 0 0 0 0\n[31,] 0 0 0 0 0 0 0 0 0 0 0 0\n[32,] 0 0 0 0 0 0 0 0 0 0 0 0\n[33,] 0 0 0 0 0 0 0 0 0 0 0 0\n[34,] 0 0 0 0 0 0 0 0 0 0 0 0\n[35,] 0 0 0 0 0 0 0 0 0 0 0 0\n[36,] 0 0 0 0 0 0 0 0 0 0 0 0\n[37,] 0 0 0 0 0 0 0 0 0 0 0 0\n[38,] 0 0 0 0 0 0 0 0 0 0 0 0\n[39,] 0 0 0 0 0 0 0 0 0 0 0 0\n[40,] 0 0 0 0 0 0 0 0 0 0 0 0\n[41,] 0 0 0 0 0 0 0 0 0 0 0 0\n[42,] 0 0 0 0 0 0 0 0 0 0 0 0\n[43,] 0 0 0 0 0 0 0 0 0 0 0 0\n[44,] 0 0 0 0 0 0 0 0 0 0 0 0\n[45,] 0 0 0 0 0 0 0 0 0 0 0 0\n[46,] 0 0 0 0 0 0 0 0 0 0 0 0\n[47,] 0 0 0 0 0 0 0 0 0 0 0 0\n[48,] 0 0 0 0 0 0 0 0 0 0 0 0\n[49,] 0 0 0 0 0 0 0 0 0 0 0 0\n[50,] 0 0 0 0 0 0 0 0 0 0 0 0\n[51,] 0 0 0 0 0 0 0 0 0 0 0 0\n[52,] 0 0 0 0 0 0 0 0 0 0 0 0\n[53,] 0 0 0 0 0 0 0 0 0 0 0 0\n[54,] 0 0 0 0 0 0 0 0 0 0 0 0\n V8-A V9-A V10-A V11-A V12-A V13-A V14-A V15-A V16-A B1-A B2-A B3-A B4-A\n [1,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n [2,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n [3,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n [4,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n [5,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n [6,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n [7,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n [8,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n [9,] 0 
0 0 0 0 0 0 0 0 0 0 0 0\n[10,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n[11,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n[12,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n[13,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n[14,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n[15,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n[16,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n[17,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n[18,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n[19,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n[20,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n[21,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n[22,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n[23,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n[24,] 1 0 0 0 0 0 0 0 0 0 0 0 0\n[25,] 0 1 0 0 0 0 0 0 0 0 0 0 0\n[26,] 0 0 1 0 0 0 0 0 0 0 0 0 0\n[27,] 0 0 0 1 0 0 0 0 0 0 0 0 0\n[28,] 0 0 0 0 1 0 0 0 0 0 0 0 0\n[29,] 0 0 0 0 0 1 0 0 0 0 0 0 0\n[30,] 0 0 0 0 0 0 1 0 0 0 0 0 0\n[31,] 0 0 0 0 0 0 0 1 0 0 0 0 0\n[32,] 0 0 0 0 0 0 0 0 1 0 0 0 0\n[33,] 0 0 0 0 0 0 0 0 0 1 0 0 0\n[34,] 0 0 0 0 0 0 0 0 0 0 1 0 0\n[35,] 0 0 0 0 0 0 0 0 0 0 0 1 0\n[36,] 0 0 0 0 0 0 0 0 0 0 0 0 1\n[37,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n[38,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n[39,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n[40,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n[41,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n[42,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n[43,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n[44,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n[45,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n[46,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n[47,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n[48,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n[49,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n[50,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n[51,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n[52,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n[53,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n[54,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n B5-A B6-A B7-A B8-A B9-A B10-A B11-A B12-A B13-A B14-A R1-A R2-A R3-A\n [1,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n [2,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n [3,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n [4,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n [5,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n [6,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n [7,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n [8,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n [9,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n[10,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n[11,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n[12,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n[13,] 0 0 0 0 0 0 0 0 
0 0 0 0 0\n[14,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n[15,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n[16,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n[17,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n[18,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n[19,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n[20,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n[21,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n[22,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n[23,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n[24,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n[25,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n[26,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n[27,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n[28,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n[29,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n[30,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n[31,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n[32,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n[33,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n[34,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n[35,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n[36,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n[37,] 1 0 0 0 0 0 0 0 0 0 0 0 0\n[38,] 0 1 0 0 0 0 0 0 0 0 0 0 0\n[39,] 0 0 1 0 0 0 0 0 0 0 0 0 0\n[40,] 0 0 0 1 0 0 0 0 0 0 0 0 0\n[41,] 0 0 0 0 1 0 0 0 0 0 0 0 0\n[42,] 0 0 0 0 0 1 0 0 0 0 0 0 0\n[43,] 0 0 0 0 0 0 1 0 0 0 0 0 0\n[44,] 0 0 0 0 0 0 0 1 0 0 0 0 0\n[45,] 0 0 0 0 0 0 0 0 1 0 0 0 0\n[46,] 0 0 0 0 0 0 0 0 0 1 0 0 0\n[47,] 0 0 0 0 0 0 0 0 0 0 1 0 0\n[48,] 0 0 0 0 0 0 0 0 0 0 0 1 0\n[49,] 0 0 0 0 0 0 0 0 0 0 0 0 1\n[50,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n[51,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n[52,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n[53,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n[54,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n R4-A R5-A R6-A R7-A R8-A\n [1,] 0 0 0 0 0\n [2,] 0 0 0 0 0\n [3,] 0 0 0 0 0\n [4,] 0 0 0 0 0\n [5,] 0 0 0 0 0\n [6,] 0 0 0 0 0\n [7,] 0 0 0 0 0\n [8,] 0 0 0 0 0\n [9,] 0 0 0 0 0\n[10,] 0 0 0 0 0\n[11,] 0 0 0 0 0\n[12,] 0 0 0 0 0\n[13,] 0 0 0 0 0\n[14,] 0 0 0 0 0\n[15,] 0 0 0 0 0\n[16,] 0 0 0 0 0\n[17,] 0 0 0 0 0\n[18,] 0 0 0 0 0\n[19,] 0 0 0 0 0\n[20,] 0 0 0 0 0\n[21,] 0 0 0 0 0\n[22,] 0 0 0 0 0\n[23,] 0 0 0 0 0\n[24,] 0 0 0 0 0\n[25,] 0 0 0 0 0\n[26,] 0 0 0 0 0\n[27,] 0 0 0 0 0\n[28,] 0 0 0 0 0\n[29,] 0 0 0 0 0\n[30,] 0 0 0 0 0\n[31,] 0 0 0 0 0\n[32,] 0 0 0 0 0\n[33,] 0 0 0 0 0\n[34,] 0 0 0 0 0\n[35,] 0 0 0 0 0\n[36,] 0 0 0 0 
0\n[37,] 0 0 0 0 0\n[38,] 0 0 0 0 0\n[39,] 0 0 0 0 0\n[40,] 0 0 0 0 0\n[41,] 0 0 0 0 0\n[42,] 0 0 0 0 0\n[43,] 0 0 0 0 0\n[44,] 0 0 0 0 0\n[45,] 0 0 0 0 0\n[46,] 0 0 0 0 0\n[47,] 0 0 0 0 0\n[48,] 0 0 0 0 0\n[49,] 0 0 0 0 0\n[50,] 1 0 0 0 0\n[51,] 0 1 0 0 0\n[52,] 0 0 1 0 0\n[53,] 0 0 0 1 0\n[54,] 0 0 0 0 1\n\n\n\n\n\n\n\n\n\nTOT\n\n\n\nTotal events (in this case my downsampled 100 cells)\n\n\n\n\nDescriptionList$`$TOT`\n\n[1] \"100\"\n\n\n\n\n\n\n\n\n\nVolume\n\n\n\nVolume amount acquired during acquisition.\n\n\n\n\nDescriptionList$`$VOL`\n\n[1] \"30.31\"\n\n\n\n\n\n\n\n\n\nSoftware\n\n\n\nSoftware used and version\n\n\n\n\nDescriptionList$CREATOR\n\n[1] \"SpectroFlo 3.3.0\"\n\n\n\nYou will notice at this point, the keyword names including a “$” symbol have stopped, so tick marks are no longer required (except when there is a space in the name). The only $ remaining is being used as a selector for a particular item in the list.\n\nDetectors <- DescriptionList[390:398]\nDetectors\n\n$`$VOL`\n[1] \"30.31\"\n\n$`APPLY COMPENSATION`\n[1] \"FALSE\"\n\n$CHARSET\n[1] \"utf-8\"\n\n$CREATOR\n[1] \"SpectroFlo 3.3.0\"\n\n$FCSversion\n[1] \"3\"\n\n$FILENAME\n[1] \"data/CellCounts4L_AB_05_ND050_05.fcs\"\n\n$`FSC ASF`\n[1] \"1.21\"\n\n$GROUPNAME\n[1] \"ND050\"\n\n$GUID\n[1] \"CellCounts4L_AB_05-ND050-05.fcs\"\n\n\n\n\n\n\n\n\nFILENAME\n\n\n\nBasically the full file.path to the .fcs file of interest.\n\n\n\n\nDescriptionList$FILENAME\n\n[1] \"data/CellCounts4L_AB_05_ND050_05.fcs\"\n\n\n\n\n\n\n\n\n\nGROUPNAME\n\n\n\nThe Name assigned to the acquisition Group.\n\n\n\n\nDescriptionList$GROUPNAME\n\n[1] \"ND050\"", + "objectID": "course/04_IntroToTidyverse/index.html#rename", + "href": "course/04_IntroToTidyverse/index.html#rename", + "title": "04 - Introduction to Tidyverse", + "section": "rename", + "text": "rename\nAt this point, we are able to both move and select particular columns, allowing us to rearrange and subset a larger data.frame object however we want it to appear. 
However, as we encountered, some of the names contain special characters and spaces, requiring use of tick marks (``) to avoid issues. How can we change a column name?\nIn base R, we could change individual column names by assigning a new value with the assignment arrow to the corresponding column name index. For example, looking at our Subset object, we could rename CD8+ as follows:\n\ncolnames(Subset)\n\n[1] \"bid\" \"Tcells\" \"CD8+\" \"CD4+\" \"timepoint\" \"Condition\"\n\ncolnames(Subset)[3]\n\n[1] \"CD8+\"\n\n\n\ncolnames(Subset)[3] <- \"CD8Positive\"\ncolnames(Subset)\n\n[1] \"bid\" \"Tcells\" \"CD8Positive\" \"CD4+\" \"timepoint\" \n[6] \"Condition\" \n\n\nWith the tidyverse, we can use the rename() function, which removes the need to look up the column index number. Within the parentheses, we place the new name to the left of the equals sign and the old name to the right:\n\nRenamed <- Subset |> rename(CD4_Positive = `CD4+`)\ncolnames(Renamed)\n\n[1] \"bid\" \"Tcells\" \"CD8Positive\" \"CD4_Positive\" \"timepoint\" \n[6] \"Condition\" \n\n\nIf we wanted to rename multiple column names at once, we would just need to include a comma between the individual rename arguments within the parentheses.\n\nRenamed_Multiple <- Subset |> rename(specimen = bid, timepoint_months = timepoint, stimulation = Condition, CD4Positive=`CD4+`)\ncolnames(Renamed_Multiple)\n\n[1] \"specimen\" \"Tcells\" \"CD8Positive\" \"CD4Positive\" \n[5] \"timepoint_months\" \"stimulation\"", "crumbs": [ "About", "Intro to R", - "03 - Inside a .FCS file" + "04 - Intro to Tidyverse" ] }, { - "objectID": "course/03_InsideFCSFile/index.html#laser-metadata", - "href": "course/03_InsideFCSFile/index.html#laser-metadata", - "title": "03 - Inside an FCS File", - "section": "Laser Metadata", - "text": "Laser Metadata\nNext up, there is a small stretch of keywords containing the values associated with the individual lasers as far as delays and area scaling factors for a 
particular day (also useful when plotted).\n\nDetectors <- DescriptionList[399:410]\nDetectors\n\n$LASER1ASF\n[1] \"1.09\"\n\n$LASER1DELAY\n[1] \"-19.525\"\n\n$LASER1NAME\n[1] \"Violet\"\n\n$LASER2ASF\n[1] \"1.14\"\n\n$LASER2DELAY\n[1] \"0\"\n\n$LASER2NAME\n[1] \"Blue\"\n\n$LASER3ASF\n[1] \"1.02\"\n\n$LASER3DELAY\n[1] \"20.15\"\n\n$LASER3NAME\n[1] \"Red\"\n\n$LASER4ASF\n[1] \"0.92\"\n\n$LASER4DELAY\n[1] \"40.725\"\n\n$LASER4NAME\n[1] \"UV\"", + "objectID": "course/04_IntroToTidyverse/index.html#pull", + "href": "course/04_IntroToTidyverse/index.html#pull", + "title": "04 - Introduction to Tidyverse", + "section": "pull", + "text": "pull\nSometimes, we may want to retrieve individual values present in a column, to use within either a vector or a list. We can do this using the pull() function, which will retrieve the column contents and strip the column formatting\n\nData |> pull(Date) |> head(5)\n\n[1] \"2025-07-26\" \"2025-07-26\" \"2025-07-26\" \"2025-07-26\" \"2025-07-26\"\n\n\nThis can be useful when we are doing data exploration, and trying to determine how many unique variants might be present. 
For example, if we wanted to see what days individual samples were acquired, we could pull() the data and pass it to the unique() function:\n\nData |> pull(Date) |> unique()\n\n[1] \"2025-07-26\" \"2025-07-29\" \"2025-07-31\" \"2025-08-05\" \"2025-08-07\"\n[6] \"2025-08-22\" \"2025-08-28\" \"2025-08-30\"", "crumbs": [ "About", "Intro to R", - "03 - Inside a .FCS file" + "04 - Intro to Tidyverse" ] }, { - "objectID": "course/03_InsideFCSFile/index.html#display", - "href": "course/03_InsideFCSFile/index.html#display", - "title": "03 - Inside an FCS File", - "section": "Display", - "text": "Display\nThen there is a stretch matching whether a particular detector needs to be displayed as linear (in the case of time and scatter) or as log (for individual detectors).\n\nDetectors <- DescriptionList[412:472]\nDetectors\n\n$P10DISPLAY\n[1] \"LOG\"\n\n$P11DISPLAY\n[1] \"LOG\"\n\n$P12DISPLAY\n[1] \"LOG\"\n\n$P13DISPLAY\n[1] \"LOG\"\n\n$P14DISPLAY\n[1] \"LOG\"\n\n$P15DISPLAY\n[1] \"LOG\"\n\n$P16DISPLAY\n[1] \"LOG\"\n\n$P17DISPLAY\n[1] \"LOG\"\n\n$P18DISPLAY\n[1] \"LIN\"\n\n$P19DISPLAY\n[1] \"LIN\"\n\n$P1DISPLAY\n[1] \"LOG\"\n\n$P20DISPLAY\n[1] \"LOG\"\n\n$P21DISPLAY\n[1] \"LOG\"\n\n$P22DISPLAY\n[1] \"LOG\"\n\n$P23DISPLAY\n[1] \"LOG\"\n\n$P24DISPLAY\n[1] \"LOG\"\n\n$P25DISPLAY\n[1] \"LOG\"\n\n$P26DISPLAY\n[1] \"LOG\"\n\n$P27DISPLAY\n[1] \"LOG\"\n\n$P28DISPLAY\n[1] \"LOG\"\n\n$P29DISPLAY\n[1] \"LOG\"\n\n$P2DISPLAY\n[1] \"LOG\"\n\n$P30DISPLAY\n[1] \"LOG\"\n\n$P31DISPLAY\n[1] \"LOG\"\n\n$P32DISPLAY\n[1] \"LOG\"\n\n$P33DISPLAY\n[1] \"LOG\"\n\n$P34DISPLAY\n[1] \"LOG\"\n\n$P35DISPLAY\n[1] \"LOG\"\n\n$P36DISPLAY\n[1] \"LIN\"\n\n$P37DISPLAY\n[1] \"LIN\"\n\n$P38DISPLAY\n[1] \"LIN\"\n\n$P39DISPLAY\n[1] \"LIN\"\n\n$P3DISPLAY\n[1] \"LOG\"\n\n$P40DISPLAY\n[1] \"LOG\"\n\n$P41DISPLAY\n[1] \"LOG\"\n\n$P42DISPLAY\n[1] \"LOG\"\n\n$P43DISPLAY\n[1] \"LOG\"\n\n$P44DISPLAY\n[1] \"LOG\"\n\n$P45DISPLAY\n[1] \"LOG\"\n\n$P46DISPLAY\n[1] \"LOG\"\n\n$P47DISPLAY\n[1] \"LOG\"\n\n$P48DISPLAY\n[1] 
\"LOG\"\n\n$P49DISPLAY\n[1] \"LOG\"\n\n$P4DISPLAY\n[1] \"LOG\"\n\n$P50DISPLAY\n[1] \"LOG\"\n\n$P51DISPLAY\n[1] \"LOG\"\n\n$P52DISPLAY\n[1] \"LOG\"\n\n$P53DISPLAY\n[1] \"LOG\"\n\n$P54DISPLAY\n[1] \"LOG\"\n\n$P55DISPLAY\n[1] \"LOG\"\n\n$P56DISPLAY\n[1] \"LOG\"\n\n$P57DISPLAY\n[1] \"LOG\"\n\n$P58DISPLAY\n[1] \"LOG\"\n\n$P59DISPLAY\n[1] \"LOG\"\n\n$P5DISPLAY\n[1] \"LOG\"\n\n$P60DISPLAY\n[1] \"LOG\"\n\n$P61DISPLAY\n[1] \"LOG\"\n\n$P6DISPLAY\n[1] \"LOG\"\n\n$P7DISPLAY\n[1] \"LOG\"\n\n$P8DISPLAY\n[1] \"LOG\"\n\n$P9DISPLAY\n[1] \"LOG\"\n\n\nAnd a few final keywords with threshold, window scaling and other user selected settings.\n\nDetectors <- DescriptionList[473:476]\nDetectors\n\n$THRESHOLD\n[1] \"(FSC,50000)\"\n\n$TUBENAME\n[1] \"05\"\n\n$USERSETTINGNAME\n[1] \"DTR_CellCounts\"\n\n$`WINDOW EXTENSION`\n[1] \"3\"", + "objectID": "course/04_IntroToTidyverse/index.html#filter-rows", + "href": "course/04_IntroToTidyverse/index.html#filter-rows", + "title": "04 - Introduction to Tidyverse", + "section": "filter (Rows)", + "text": "filter (Rows)\nSo far, we have been working with dplyr functions primarily used when working with and subsetting columns (including select(), pull(), rename() and relocate()). What if we wanted to work with rows of a data.frame? This is where the filter() function is used.\nThe Condition column in this Dataset appears to be indicating whether the samples were stimulated. Let’s see how many unique values are contained within that column\n\nData |> pull(Condition) |> unique() \n\n[1] \"Ctrl\" \"PPD\" \"SEB\" \n\n\nIn the case of this dataset, looks like the .fcs files where treated with either left alone, treated with PPD (Purified Protein Derrivative) or SEB. What if we wanted to subset only those treated with PPD?\nWithin filter(), we would specify the column name as the first argument, and ask that only values equal to (==) “PPD” be returned. 
Notice that in this case, quotation marks (“”) are needed, as we are asking for a match against a character value.\n\nPPDOnly <- Data |> filter(Condition == \"PPD\")\nhead(PPDOnly, 5)\n\n bid timepoint Condition Date infant_sex ptype root singletsFSC\n1 INF0052 0 PPD 2025-07-26 Male HEU-hi 2363512 2136616\n2 INF0100 0 PPD 2025-07-26 Male HEU-lo 2049112 1821676\n3 INF0100 4 PPD 2025-07-26 Male HEU-lo 1063496 946587\n4 INF0100 9 PPD 2025-07-26 Male HEU-lo 788368 714198\n5 INF0179 0 PPD 2025-07-26 Male HU 1380336 1242311\n singletsSSC singletsSSCB CD45 NotMonocytes nonDebris lymphocytes\n1 1875394 1732620 0.5873838 0.8619837 0.8429685 0.6408044\n2 1717636 1597085 0.9063081 0.9251961 0.8771889 0.2174284\n3 796056 767297 0.9709891 0.9848719 0.9556049 0.7313503\n4 626387 600011 0.9822803 0.9842139 0.8123041 0.6223228\n5 1047081 1000877 0.9470275 0.9575685 0.9134438 0.6996502\n live Dump+ Dump- Tcells Vd2+ Vd2- Va7.2+\n1 0.9009254 0.20743228 0.6934931 0.2835676 0.007408209 0.9925918 0.01507057\n2 0.8929673 0.06181426 0.8311531 0.6735798 0.007137230 0.9928628 0.01671801\n3 0.8782307 0.20727202 0.6709587 0.5989873 0.005254643 0.9947454 0.01609790\n4 0.9566639 0.23164587 0.7250180 0.6489405 0.011935922 0.9880641 0.01855298\n5 0.8856898 0.33186111 0.5538287 0.4441538 0.004382972 0.9956170 0.01297237\n Va7.2- CD4+ CD4- CD8+ CD8- Tcells_count\n1 0.9775212 0.6340345 0.3434867 0.2744119 0.06907479 184930\n2 0.9761448 0.6145707 0.3615741 0.3312279 0.03034620 211987\n3 0.9786475 0.6559480 0.3226994 0.2912084 0.03149109 326378\n4 0.9695111 0.4306889 0.5388222 0.4908558 0.04796636 238021\n5 0.9826447 0.7499194 0.2327253 0.1850897 0.04763554 294549\n lymphocytes_count Monocytes Debris CD45_count\n1 652155 0.13801632 0.15703150 1017713\n2 314717 0.07480391 0.12281107 1447451\n3 544883 0.01512811 0.04439511 745037\n4 366784 0.01578611 0.18769586 589379\n5 663169 0.04243146 0.08655621 947858\n\n\nWhile this works, matching with “==” can behave unexpectedly; for example, comparing an NA value returns NA rather than FALSE. 
Using the %in% operator is a better way of identifying and extracting only the rows whose Condition column contains “PPD”\n\nData |> filter(Condition %in% \"PPD\") |> head(5)\n\n bid timepoint Condition Date infant_sex ptype root singletsFSC\n1 INF0052 0 PPD 2025-07-26 Male HEU-hi 2363512 2136616\n2 INF0100 0 PPD 2025-07-26 Male HEU-lo 2049112 1821676\n3 INF0100 4 PPD 2025-07-26 Male HEU-lo 1063496 946587\n4 INF0100 9 PPD 2025-07-26 Male HEU-lo 788368 714198\n5 INF0179 0 PPD 2025-07-26 Male HU 1380336 1242311\n singletsSSC singletsSSCB CD45 NotMonocytes nonDebris lymphocytes\n1 1875394 1732620 0.5873838 0.8619837 0.8429685 0.6408044\n2 1717636 1597085 0.9063081 0.9251961 0.8771889 0.2174284\n3 796056 767297 0.9709891 0.9848719 0.9556049 0.7313503\n4 626387 600011 0.9822803 0.9842139 0.8123041 0.6223228\n5 1047081 1000877 0.9470275 0.9575685 0.9134438 0.6996502\n live Dump+ Dump- Tcells Vd2+ Vd2- Va7.2+\n1 0.9009254 0.20743228 0.6934931 0.2835676 0.007408209 0.9925918 0.01507057\n2 0.8929673 0.06181426 0.8311531 0.6735798 0.007137230 0.9928628 0.01671801\n3 0.8782307 0.20727202 0.6709587 0.5989873 0.005254643 0.9947454 0.01609790\n4 0.9566639 0.23164587 0.7250180 0.6489405 0.011935922 0.9880641 0.01855298\n5 0.8856898 0.33186111 0.5538287 0.4441538 0.004382972 0.9956170 0.01297237\n Va7.2- CD4+ CD4- CD8+ CD8- Tcells_count\n1 0.9775212 0.6340345 0.3434867 0.2744119 0.06907479 184930\n2 0.9761448 0.6145707 0.3615741 0.3312279 0.03034620 211987\n3 0.9786475 0.6559480 0.3226994 0.2912084 0.03149109 326378\n4 0.9695111 0.4306889 0.5388222 0.4908558 0.04796636 238021\n5 0.9826447 0.7499194 0.2327253 0.1850897 0.04763554 294549\n lymphocytes_count Monocytes Debris CD45_count\n1 652155 0.13801632 0.15703150 1017713\n2 314717 0.07480391 0.12281107 1447451\n3 544883 0.01512811 0.04439511 745037\n4 366784 0.01578611 0.18769586 589379\n5 663169 0.04243146 0.08655621 947858\n\n\nSimilar to what we saw for select(), we can grab rows that contain various values at once. 
We would just need to modify the second part of the argument. If we wanted to grab rows whose Condition column contained either PPD or SEB, we would need to provide that argument as a vector, placing both within c().\n\nData |> filter(Condition %in% c(\"PPD\", \"SEB\")) |> head(5)\n\n bid timepoint Condition Date infant_sex ptype root singletsFSC\n1 INF0052 0 PPD 2025-07-26 Male HEU-hi 2363512 2136616\n2 INF0100 0 PPD 2025-07-26 Male HEU-lo 2049112 1821676\n3 INF0100 4 PPD 2025-07-26 Male HEU-lo 1063496 946587\n4 INF0100 9 PPD 2025-07-26 Male HEU-lo 788368 714198\n5 INF0179 0 PPD 2025-07-26 Male HU 1380336 1242311\n singletsSSC singletsSSCB CD45 NotMonocytes nonDebris lymphocytes\n1 1875394 1732620 0.5873838 0.8619837 0.8429685 0.6408044\n2 1717636 1597085 0.9063081 0.9251961 0.8771889 0.2174284\n3 796056 767297 0.9709891 0.9848719 0.9556049 0.7313503\n4 626387 600011 0.9822803 0.9842139 0.8123041 0.6223228\n5 1047081 1000877 0.9470275 0.9575685 0.9134438 0.6996502\n live Dump+ Dump- Tcells Vd2+ Vd2- Va7.2+\n1 0.9009254 0.20743228 0.6934931 0.2835676 0.007408209 0.9925918 0.01507057\n2 0.8929673 0.06181426 0.8311531 0.6735798 0.007137230 0.9928628 0.01671801\n3 0.8782307 0.20727202 0.6709587 0.5989873 0.005254643 0.9947454 0.01609790\n4 0.9566639 0.23164587 0.7250180 0.6489405 0.011935922 0.9880641 0.01855298\n5 0.8856898 0.33186111 0.5538287 0.4441538 0.004382972 0.9956170 0.01297237\n Va7.2- CD4+ CD4- CD8+ CD8- Tcells_count\n1 0.9775212 0.6340345 0.3434867 0.2744119 0.06907479 184930\n2 0.9761448 0.6145707 0.3615741 0.3312279 0.03034620 211987\n3 0.9786475 0.6559480 0.3226994 0.2912084 0.03149109 326378\n4 0.9695111 0.4306889 0.5388222 0.4908558 0.04796636 238021\n5 0.9826447 0.7499194 0.2327253 0.1850897 0.04763554 294549\n lymphocytes_count Monocytes Debris CD45_count\n1 652155 0.13801632 0.15703150 1017713\n2 314717 0.07480391 0.12281107 1447451\n3 544883 0.01512811 0.04439511 745037\n4 366784 0.01578611 0.18769586 589379\n5 663169 0.04243146 0.08655621 
947858\n\n\nAlternatively, we could have set up the vector externally, and then provided it to filter()\n\nTheseConditions <- c(\"PPD\", \"SEB\")\nData |> filter(Condition %in% TheseConditions) |> head(5)\n\n bid timepoint Condition Date infant_sex ptype root singletsFSC\n1 INF0052 0 PPD 2025-07-26 Male HEU-hi 2363512 2136616\n2 INF0100 0 PPD 2025-07-26 Male HEU-lo 2049112 1821676\n3 INF0100 4 PPD 2025-07-26 Male HEU-lo 1063496 946587\n4 INF0100 9 PPD 2025-07-26 Male HEU-lo 788368 714198\n5 INF0179 0 PPD 2025-07-26 Male HU 1380336 1242311\n singletsSSC singletsSSCB CD45 NotMonocytes nonDebris lymphocytes\n1 1875394 1732620 0.5873838 0.8619837 0.8429685 0.6408044\n2 1717636 1597085 0.9063081 0.9251961 0.8771889 0.2174284\n3 796056 767297 0.9709891 0.9848719 0.9556049 0.7313503\n4 626387 600011 0.9822803 0.9842139 0.8123041 0.6223228\n5 1047081 1000877 0.9470275 0.9575685 0.9134438 0.6996502\n live Dump+ Dump- Tcells Vd2+ Vd2- Va7.2+\n1 0.9009254 0.20743228 0.6934931 0.2835676 0.007408209 0.9925918 0.01507057\n2 0.8929673 0.06181426 0.8311531 0.6735798 0.007137230 0.9928628 0.01671801\n3 0.8782307 0.20727202 0.6709587 0.5989873 0.005254643 0.9947454 0.01609790\n4 0.9566639 0.23164587 0.7250180 0.6489405 0.011935922 0.9880641 0.01855298\n5 0.8856898 0.33186111 0.5538287 0.4441538 0.004382972 0.9956170 0.01297237\n Va7.2- CD4+ CD4- CD8+ CD8- Tcells_count\n1 0.9775212 0.6340345 0.3434867 0.2744119 0.06907479 184930\n2 0.9761448 0.6145707 0.3615741 0.3312279 0.03034620 211987\n3 0.9786475 0.6559480 0.3226994 0.2912084 0.03149109 326378\n4 0.9695111 0.4306889 0.5388222 0.4908558 0.04796636 238021\n5 0.9826447 0.7499194 0.2327253 0.1850897 0.04763554 294549\n lymphocytes_count Monocytes Debris CD45_count\n1 652155 0.13801632 0.15703150 1017713\n2 314717 0.07480391 0.12281107 1447451\n3 544883 0.01512811 0.04439511 745037\n4 366784 0.01578611 0.18769586 589379\n5 663169 0.04243146 0.08655621 947858\n\n\nWhile this works when we have a limited number of variant condition 
values, what if we had many more but only wanted to exclude one value? As we saw when learning about Conditionals, when we add a ! in front of a logical value, we get the opposite logical value returned\n\nIsThisASpectralInstrument <- TRUE\n\n!IsThisASpectralInstrument\n\n[1] FALSE\n\n\nIn the context of the dplyr package, we can use ! within filter() to remove rows that contain a certain value\n\nSubset <- Data |> filter(!Condition %in% \"SEB\")\nSubset |> pull(Condition) |> unique()\n\n[1] \"Ctrl\" \"PPD\" \n\n\nLikewise, we can also use it with select() to exclude columns we don’t want to include\n\nSubset <- Data |> select(!timepoint)\nSubset[1:3,]\n\n bid Condition Date infant_sex ptype root singletsFSC\n1 INF0052 Ctrl 2025-07-26 Male HEU-hi 2098368 1894070\n2 INF0100 Ctrl 2025-07-26 Male HEU-lo 2020184 1791890\n3 INF0100 Ctrl 2025-07-26 Male HEU-lo 1155040 1033320\n singletsSSC singletsSSCB CD45 NotMonocytes nonDebris lymphocytes\n1 1666179 1537396 0.5952943 0.8820349 0.8627649 0.6420138\n2 1697083 1579098 0.9106762 0.9052256 0.8602660 0.2145848\n3 875465 845446 0.9705765 0.9845400 0.9578793 0.7403110\n live Dump+ Dump- Tcells Vd2+ Vd2- Va7.2+\n1 0.9020581 0.21090996 0.6911482 0.2804264 0.008120361 0.9918796 0.01448070\n2 0.8908981 0.06252775 0.8283703 0.6748298 0.007265620 0.9927344 0.01577499\n3 0.8757665 0.20023803 0.6755285 0.6119129 0.004651313 0.9953487 0.01579402\n Va7.2- CD4+ CD4- CD8+ CD8- Tcells_count\n1 0.9773989 0.6341164 0.3432825 0.2734826 0.06979990 164771\n2 0.9769594 0.6119112 0.3650482 0.3357696 0.02927858 208241\n3 0.9795547 0.6639621 0.3155925 0.2862104 0.02938209 371723\n lymphocytes_count Monocytes Debris CD45_count\n1 587573 0.11796509 0.13723513 915203\n2 308583 0.09477437 0.13973396 1438047\n3 607477 0.01545999 0.04212072 820570", "crumbs": [ "About", "Intro to R", - "03 - Inside a .FCS file" + "04 - Intro to Tidyverse" ] }, { - "objectID": "course/03_InsideFCSFile/index.html#flowcore-parameters", - "href": 
"course/03_InsideFCSFile/index.html#flowcore-parameters", - "title": "03 - Inside an FCS File", - "section": "flowCore Parameters", - "text": "flowCore Parameters\nDepending on the arguments selected during read.FCS(), we might also encounter additional keywords that are added in by flowCore. For example, we do not see these keywords when “transformation” is set to FALSE.\n\nflowCoreCheck <- read.FCS(filename=firstfile,\n transformation = FALSE, truncate_max_range = FALSE)\n\nflowCoreCheck\n\nflowFrame object 'CellCounts4L_AB_05-ND050-05.fcs'\nwith 100 cells and 61 observables:\n name desc range minRange maxRange\n$P1 Time NA 272140 0 272139\n$P2 UV1-A NA 4194304 -111 4194303\n$P3 UV2-A NA 4194304 -111 4194303\n$P4 UV3-A NA 4194304 -111 4194303\n$P5 UV4-A NA 4194304 -111 4194303\n... ... ... ... ... ...\n$P57 R4-A NA 4194304 -111 4194303\n$P58 R5-A NA 4194304 -111 4194303\n$P59 R6-A NA 4194304 -111 4194303\n$P60 R7-A NA 4194304 -111 4194303\n$P61 R8-A NA 4194304 -111 4194303\n476 keywords are stored in the 'description' slot\n\n\n\nNoChange <- keyword(flowCoreCheck)\nDetectors <- NoChange [476:500]\nDetectors\n\n$`WINDOW EXTENSION`\n[1] \"3\"\n\n$<NA>\nNULL\n\n$<NA>\nNULL\n\n$<NA>\nNULL\n\n$<NA>\nNULL\n\n$<NA>\nNULL\n\n$<NA>\nNULL\n\n$<NA>\nNULL\n\n$<NA>\nNULL\n\n$<NA>\nNULL\n\n$<NA>\nNULL\n\n$<NA>\nNULL\n\n$<NA>\nNULL\n\n$<NA>\nNULL\n\n$<NA>\nNULL\n\n$<NA>\nNULL\n\n$<NA>\nNULL\n\n$<NA>\nNULL\n\n$<NA>\nNULL\n\n$<NA>\nNULL\n\n$<NA>\nNULL\n\n$<NA>\nNULL\n\n$<NA>\nNULL\n\n$<NA>\nNULL\n\n$<NA>\nNULL\n\n\nBy contrast, if we had set “transformation” to TRUE:\n\nflowCoreCheck <- read.FCS(filename=firstfile,\n transformation = TRUE, truncate_max_range = FALSE)\n\nflowCoreCheck\n\nflowFrame object 'CellCounts4L_AB_05-ND050-05.fcs'\nwith 100 cells and 61 observables:\n name desc range minRange maxRange\n$P1 Time NA 272140 0 272139\n$P2 UV1-A NA 4194304 -111 4194303\n$P3 UV2-A NA 4194304 -111 4194303\n$P4 UV3-A NA 4194304 -111 4194303\n$P5 UV4-A NA 4194304 -111 4194303\n... 
... ... ... ... ...\n$P57 R4-A NA 4194304 -111 4194303\n$P58 R5-A NA 4194304 -111 4194303\n$P59 R6-A NA 4194304 -111 4194303\n$P60 R7-A NA 4194304 -111 4194303\n$P61 R8-A NA 4194304 -111 4194303\n599 keywords are stored in the 'description' slot\n\n\n\nYesChange <- keyword(flowCoreCheck)\nDetectors <- YesChange [476:500]\nDetectors\n\n$`WINDOW EXTENSION`\n[1] \"3\"\n\n$transformation\n[1] \"applied\"\n\n$`flowCore_$P1Rmax`\n[1] \"272140\"\n\n$`flowCore_$P1Rmin`\n[1] \"0\"\n\n$`flowCore_$P2Rmax`\n[1] \"4194304\"\n\n$`flowCore_$P2Rmin`\n[1] \"-111\"\n\n$`flowCore_$P3Rmax`\n[1] \"4194304\"\n\n$`flowCore_$P3Rmin`\n[1] \"-111\"\n\n$`flowCore_$P4Rmax`\n[1] \"4194304\"\n\n$`flowCore_$P4Rmin`\n[1] \"-111\"\n\n$`flowCore_$P5Rmax`\n[1] \"4194304\"\n\n$`flowCore_$P5Rmin`\n[1] \"-111\"\n\n$`flowCore_$P6Rmax`\n[1] \"4194304\"\n\n$`flowCore_$P6Rmin`\n[1] \"-111\"\n\n$`flowCore_$P7Rmax`\n[1] \"4194304\"\n\n$`flowCore_$P7Rmin`\n[1] \"-111\"\n\n$`flowCore_$P8Rmax`\n[1] \"4194304\"\n\n$`flowCore_$P8Rmin`\n[1] \"-26.3464946746826\"\n\n$`flowCore_$P9Rmax`\n[1] \"4194304\"\n\n$`flowCore_$P9Rmin`\n[1] \"-111\"\n\n$`flowCore_$P10Rmax`\n[1] \"4194304\"\n\n$`flowCore_$P10Rmin`\n[1] \"0\"\n\n$`flowCore_$P11Rmax`\n[1] \"4194304\"\n\n$`flowCore_$P11Rmin`\n[1] \"-111\"\n\n$`flowCore_$P12Rmax`\n[1] \"4194304\"\n\n\n\n\nFor some flow cytometry R packages, you will notice when opening their exported .fcs outputs in commercial software that these flowCore keywords have ended up integrated. 
It is likely somewhere in the package code the author forgot to add set transformation to FALSE, which is why we are seeing these flowCore keywords after the fact.", + "objectID": "course/04_IntroToTidyverse/index.html#mutate", + "href": "course/04_IntroToTidyverse/index.html#mutate", + "title": "04 - Introduction to Tidyverse", + "section": "mutate", + "text": "mutate\nAs we can see, with just these handful of functions, we have the building blocks to rearrange and subset a larger data.frame into a format that we prefer. But what if we wanted to alter the content of a column, or add new columns to an existing data.frame? This is where the mutate() function can be used.\nLet’s start by slimming down our current Data to a smaller workable example, highlighting the functions and pipes we learned about today\n\nTidyData <- Data |> filter(Condition %in% \"Ctrl\") |> filter(timepoint %in% \"0\") |>\n select(bid, timepoint, Condition, Date, Tcells_count, CD45_count) |>\n rename(specimen=bid, condition=Condition) |> relocate(Date, .after=specimen)\n\n\nTidyData\n\n specimen Date timepoint condition Tcells_count CD45_count\n1 INF0052 2025-07-26 0 Ctrl 164771 915203\n2 INF0100 2025-07-26 0 Ctrl 208241 1438047\n3 INF0179 2025-07-26 0 Ctrl 291777 940733\n4 INF0134 2025-07-29 0 Ctrl 127866 689676\n5 INF0148 2025-07-29 0 Ctrl 234335 1013985\n6 INF0191 2025-07-29 0 Ctrl 55780 715443\n7 INF0124 2025-07-31 0 Ctrl 70297 687720\n8 INF0149 2025-07-31 0 Ctrl 107900 857845\n9 INF0169 2025-07-31 0 Ctrl 75540 854594\n10 INF0019 2025-08-05 0 Ctrl 208055 873622\n11 INF0032 2025-08-05 0 Ctrl 361034 753064\n12 INF0180 2025-08-05 0 Ctrl 284958 1049663\n13 INF0155 2025-08-07 0 Ctrl 281626 1065048\n14 INF0158 2025-08-07 0 Ctrl 280913 1249338\n15 INF0159 2025-08-07 0 Ctrl 452551 1190219\n16 INF0013 2025-08-22 0 Ctrl 182751 836573\n17 INF0023 2025-08-22 0 Ctrl 218435 968035\n18 INF0030 2025-08-22 0 Ctrl 85521 732321\n19 INF0166 2025-08-28 0 Ctrl 225650 739495\n20 INF0199 2025-08-28 0 Ctrl 169736 
1112176\n21 INF0207 2025-08-28 0 Ctrl 39055 905365\n22 INF0614 2025-08-30 0 Ctrl 224396 1569007\n23 INF0622 2025-08-30 0 Ctrl 161924 939307\n\n\nThe mutate() function can be used to modify existing columns, as well as to create new ones. For example, let’s derive the proportion of T cells from the overall CD45 gate. To do so, within the parentheses, we would specify a new column name, and then divide the original columns:\n\nTidyData <- TidyData |> mutate(Tcells_ProportionCD45 = Tcells_count / CD45_count)\nTidyData\n\n specimen Date timepoint condition Tcells_count CD45_count\n1 INF0052 2025-07-26 0 Ctrl 164771 915203\n2 INF0100 2025-07-26 0 Ctrl 208241 1438047\n3 INF0179 2025-07-26 0 Ctrl 291777 940733\n4 INF0134 2025-07-29 0 Ctrl 127866 689676\n5 INF0148 2025-07-29 0 Ctrl 234335 1013985\n6 INF0191 2025-07-29 0 Ctrl 55780 715443\n7 INF0124 2025-07-31 0 Ctrl 70297 687720\n8 INF0149 2025-07-31 0 Ctrl 107900 857845\n9 INF0169 2025-07-31 0 Ctrl 75540 854594\n10 INF0019 2025-08-05 0 Ctrl 208055 873622\n11 INF0032 2025-08-05 0 Ctrl 361034 753064\n12 INF0180 2025-08-05 0 Ctrl 284958 1049663\n13 INF0155 2025-08-07 0 Ctrl 281626 1065048\n14 INF0158 2025-08-07 0 Ctrl 280913 1249338\n15 INF0159 2025-08-07 0 Ctrl 452551 1190219\n16 INF0013 2025-08-22 0 Ctrl 182751 836573\n17 INF0023 2025-08-22 0 Ctrl 218435 968035\n18 INF0030 2025-08-22 0 Ctrl 85521 732321\n19 INF0166 2025-08-28 0 Ctrl 225650 739495\n20 INF0199 2025-08-28 0 Ctrl 169736 1112176\n21 INF0207 2025-08-28 0 Ctrl 39055 905365\n22 INF0614 2025-08-30 0 Ctrl 224396 1569007\n23 INF0622 2025-08-30 0 Ctrl 161924 939307\n Tcells_ProportionCD45\n1 0.18003765\n2 0.14480820\n3 0.31015921\n4 0.18540010\n5 0.23110302\n6 0.07796568\n7 0.10221747\n8 0.12578030\n9 0.08839285\n10 0.23815220\n11 0.47942008\n12 0.27147570\n13 0.26442564\n14 0.22484948\n15 0.38022498\n16 0.21845195\n17 0.22564783\n18 0.11678076\n19 0.30514067\n20 0.15261613\n21 0.04313730\n22 0.14301785\n23 0.17238666\n\n\nWe can see that we have many significant 
digits being returned. Let’s round this new column to 2 decimal places by applying the round() function\n\nTidyData <- TidyData |> mutate(TcellsRounded = round(Tcells_ProportionCD45, 2))\nTidyData \n\n specimen Date timepoint condition Tcells_count CD45_count\n1 INF0052 2025-07-26 0 Ctrl 164771 915203\n2 INF0100 2025-07-26 0 Ctrl 208241 1438047\n3 INF0179 2025-07-26 0 Ctrl 291777 940733\n4 INF0134 2025-07-29 0 Ctrl 127866 689676\n5 INF0148 2025-07-29 0 Ctrl 234335 1013985\n6 INF0191 2025-07-29 0 Ctrl 55780 715443\n7 INF0124 2025-07-31 0 Ctrl 70297 687720\n8 INF0149 2025-07-31 0 Ctrl 107900 857845\n9 INF0169 2025-07-31 0 Ctrl 75540 854594\n10 INF0019 2025-08-05 0 Ctrl 208055 873622\n11 INF0032 2025-08-05 0 Ctrl 361034 753064\n12 INF0180 2025-08-05 0 Ctrl 284958 1049663\n13 INF0155 2025-08-07 0 Ctrl 281626 1065048\n14 INF0158 2025-08-07 0 Ctrl 280913 1249338\n15 INF0159 2025-08-07 0 Ctrl 452551 1190219\n16 INF0013 2025-08-22 0 Ctrl 182751 836573\n17 INF0023 2025-08-22 0 Ctrl 218435 968035\n18 INF0030 2025-08-22 0 Ctrl 85521 732321\n19 INF0166 2025-08-28 0 Ctrl 225650 739495\n20 INF0199 2025-08-28 0 Ctrl 169736 1112176\n21 INF0207 2025-08-28 0 Ctrl 39055 905365\n22 INF0614 2025-08-30 0 Ctrl 224396 1569007\n23 INF0622 2025-08-30 0 Ctrl 161924 939307\n Tcells_ProportionCD45 TcellsRounded\n1 0.18003765 0.18\n2 0.14480820 0.14\n3 0.31015921 0.31\n4 0.18540010 0.19\n5 0.23110302 0.23\n6 0.07796568 0.08\n7 0.10221747 0.10\n8 0.12578030 0.13\n9 0.08839285 0.09\n10 0.23815220 0.24\n11 0.47942008 0.48\n12 0.27147570 0.27\n13 0.26442564 0.26\n14 0.22484948 0.22\n15 0.38022498 0.38\n16 0.21845195 0.22\n17 0.22564783 0.23\n18 0.11678076 0.12\n19 0.30514067 0.31\n20 0.15261613 0.15\n21 0.04313730 0.04\n22 0.14301785 0.14\n23 0.17238666 0.17", "crumbs": [ "About", "Intro to R", - "03 - Inside a .FCS file" + "04 - Intro to Tidyverse" ] }, { - "objectID": "course/02_FilePaths/index.html", - "href": "course/02_FilePaths/index.html", - "title": "02 - File Paths", - "section": "", - 
"text": "For the YouTube livestream recording, see here\nFor screen-shot slides, click here\nWelcome to the second week of Cytometry in R! This week we will learn about file.path, namely, how to communicate to our computer (and R) where various files are stored.", + "objectID": "course/04_IntroToTidyverse/index.html#arrange", + "href": "course/04_IntroToTidyverse/index.html#arrange", + "title": "04 - Introduction to Tidyverse", + "section": "arrange", + "text": "arrange\nAnd while we are here, let’s rearrange the rows so that they are descending based on the Tcell proportion. We can use this by using the desc() and arrange() functions from dplyr:\n\nTidyData <- TidyData |> arrange(desc(TcellsRounded))\n\nAnd let’s go ahead and filter() and identify the specimens that had more than 30% T cells as part of the overall CD45 gate (context, these samples were Cord Blood):\n\nTidyData |> filter(TcellsRounded > 0.3)\n\n specimen Date timepoint condition Tcells_count CD45_count\n1 INF0032 2025-08-05 0 Ctrl 361034 753064\n2 INF0159 2025-08-07 0 Ctrl 452551 1190219\n3 INF0179 2025-07-26 0 Ctrl 291777 940733\n4 INF0166 2025-08-28 0 Ctrl 225650 739495\n Tcells_ProportionCD45 TcellsRounded\n1 0.4794201 0.48\n2 0.3802250 0.38\n3 0.3101592 0.31\n4 0.3051407 0.31\n\n\nWhich is we had wanted to just retrieve the specimen IDs, we could add pull() after a new pipe argument.\n\nTidyData |> filter(TcellsRounded > 0.3) |> pull(specimen)\n\n[1] \"INF0032\" \"INF0159\" \"INF0179\" \"INF0166\"\n\n\nAnd finally, since I may want to send the data to a supervisor, let’s go ahead and export this “tidyed” version of our data.frame out to it’s own .csv file. 
Working within our project folder, this would look like this:\n\nNewName <- paste0(\"MyNewDataset\", \".csv\")\nStorageLocation <- file.path(\"data\", NewName)\nStorageLocation\n\n[1] \"data/MyNewDataset.csv\"\n\n\n\nwrite.csv(TidyData, StorageLocation, row.names=FALSE)", "crumbs": [ "About", "Intro to R", - "02 - File Paths" + "04 - Intro to Tidyverse" ] }, { - "objectID": "course/02_FilePaths/index.html#set-up", - "href": "course/02_FilePaths/index.html#set-up", - "title": "02 - File Paths", - "section": "Set Up", - "text": "Set Up\nBefore we begin, let’s make sure you get the data needed for today transferred to your local computer, and then get the .fcs files copied over from there to your own working project folder. This is the process you will repeat each week throughout the course.\n\nNew Repository\nFirst off, login to your GitHub account. Once there, you will select the options to create a new repository (similar to what you did during Using GitHub)\n\n\n\nFor this week, let’s set this new repository up as a private repository, and call it Week2. This will keep things consistent with the file.paths we will be showing in the examples.\n\n\n\nOnce the new repository has been created, copy the URL.\n\n\n\nNext, open up Positron, set the interpreter to use R, and then select the option to bring in a “New Folder from Git”.\n\n\n\nPaste in your new repository’s url. Additionally, if you want to match file.paths shown in the examples, set your storage location to your local Documents folder (please note the start of the file.path will look differently depending on whether you are on Windows, MacOS, or Linux).\n\n\n\nYour new repository will then be imported from GitHub. Once this is done, create two subfolders (data and images) and a new .qmd file (naming it filepaths.qmd).\n\n\n\n\n\nSync\nWith this done, return to GitHub and open your forked version of the CytometryInR course folder. 
If you haven’t yet done so, click on sync to bring in this week’s code and datasets.\n\n\n\nReturning to Positron, you will need to switch Project Folders, switching from Week2 over to CytometryInR.\n\n\n\n\n\nPull\nOnce CytometryInR project folder has opened, you will need to pull in the new data from GitHub to your local computer.\n\n\n\n\n\nCopy Files to Week2\nOnce this is done, you will see within the course folder, containing this weeks folder (02_FilePaths). Within it there is a data folder with .fcs files. To avoid causing conflicts when bringing in next week’s materials, you will want to manually copy over these .fcs files (via your File Explorer) to the data folder within your “Week2” Project Folder.\n\n\n\n\n\nCommit and Push\nWhen you reopen your Week2 project folder in Positron, you should now be able to see the .fcs files within the data folder. Next, from the action bar on the far left, select the Source Control tab. Stage all the changes (as was done in Using Git), and write a short commit message.\n\n\n\nWith these files now being tracked by version control, push (ie. send) your changes to GitHub so that they are remotely backed up.\n\n\n\nAnd with this setup complete, you are now ready to proceed. Remember, run code and write notes in your working project folder (Week2 or otherwise named) to avoid conflicts next week in the CytometryInR folder when you are trying to bring in the Week #3 code and datasets.", - "crumbs": [ - "About", - "Intro to R", - "02 - File Paths" - ] + "objectID": "course/04_IntroToTidyverse/slides.html#read.csv", + "href": "course/04_IntroToTidyverse/slides.html#read.csv", + "title": "04 - Introduction to Tidyverse", + "section": "read.csv", + "text": "read.csv\n\n\n\n\n\n\n\n\n.\n\n\nWe will start by loading in our copied-over dataset (Dataset.csv) from its location in the project folder. 
If you are following the organization scheme we have been using throughout the course, your file path will look something like this:\n\n\n\n\n\n\n\n\n\n\nthefilepath <- file.path(\"data\", \"Dataset.csv\")\n\nthefilepath\n\n[1] \"data/Dataset.csv\"" }, { - "objectID": "course/02_FilePaths/index.html#working-directory", - "href": "course/02_FilePaths/index.html#working-directory", - "title": "02 - File Paths", - "section": "Working Directory", - "text": "Working Directory\nNow that we are back in our Week2 folder, let’s start by seeing our current location similarly to how our computer perceives it.\nWe will use getwd() function (ie. get working directory) to return the location of the folder we are currently inside of. For example, when getwd() is run within my Week2 project folder, I see the following location\n\ngetwd()\n\n\nThis returns a file path. The final location (Week2 in this case) is the Working Directory. Your computer when working in R will be descern other locations in relation to this directory.", - "crumbs": [ - "About", - "Intro to R", - "02 - File Paths" - ] + "objectID": "course/04_IntroToTidyverse/slides.html#data.frame", + "href": "course/04_IntroToTidyverse/slides.html#data.frame", + "title": "04 - Introduction to Tidyverse", + "section": "data.frame", + "text": "data.frame\n\n\n\n\n\n\n\n\n.\n\n\nOr alternatively using the new-to-us glimpse() function\n\n\n\n\n\n\n\nglimpse(Data)\n\nError in `glimpse()`:\n! could not find function \"glimpse\"" }, { - "objectID": "course/02_FilePaths/index.html#directories", - "href": "course/02_FilePaths/index.html#directories", - "title": "02 - File Paths", - "section": "Directories", - "text": "Directories\nWithin this working directory, we have a variety of project folders and files related to the course. 
We can see the folders that are present using the list.dirs() function.\n\nlist.dirs(path=\".\", full.names=FALSE, recursive=FALSE)\n\n\nWithin this list.dirs() function, we are specifying two arguments with which we will be working with later today, full.names and recursive. For now, lets set their arguments to FALSE, which means they conditions they implement are inactive (turned off).\n\n\nThe path argument is currently set to “.”, which is a stand-in for the present directory. In R, if an argument is not specified directly, it is inferred based on an order of expected arguments. Thus, if not present, we could still get the same output as seen before.\n\nlist.dirs(full.names=FALSE, recursive=FALSE)\n\n\n\n\nWithin Positron, in addition to visible folders, we also have hidden folders (denoted by the “.” in front of the folder name when using list.dirs()). In the case of one of our course website folders, we can see a “.quarto” folder shown in a lighter gray . The “.git” folder we saw from list.dirs() is typically hidden when viewing from Positron.\n\nIn the case of Week2, the two not-hidden folders we created are listed. We will see how to navigate these in a second.", - "crumbs": [ - "About", - "Intro to R", - "02 - File Paths" - ] + "objectID": "course/04_IntroToTidyverse/slides.html#column-value-type", + "href": "course/04_IntroToTidyverse/slides.html#column-value-type", + "title": "04 - Introduction to Tidyverse", + "section": "Column value type", + "text": "Column value type\n\n\n\n\n\n\n\n\n.\n\n\nAs we saw last week, functions often need values that match a certain type (the paintbrush needing paint analogy). As we inspect the columns of Data, we can notice that some of the columns contain character (ie. “chr”) values. Others appear to contain numeric values (which are subtyped as either double (ie. “dbl”) or integer (ie. “int”)). At first glance, we do not appear to have any logical (ie. TRUE or FALSE) columns in this dataset."
}, { - "objectID": "course/02_FilePaths/index.html#variables", - "href": "course/02_FilePaths/index.html#variables", - "title": "02 - File Paths", - "section": "Variables", - "text": "Variables\nBefore exploring file paths, we need to have some basic R code knowledge that we can use to work with them. Within R, we have the ability to assign particular values (be they character strings, numbers or logicals) to objects (ie. variables) that can be used when called upon later.\nFor example:\n\nWhatDayDidIWriteThis <- \"Saturday\"\n\nIn this case, the variable name is what the assignment arrow (“<-”) is pointing at. In this case, WhatDayDidIWriteThis\n\n\nWhen we run this, we create a variable, that will appear within the right-sidebar.\n\nWhatDayDidIWriteThis <- \"Saturday\"\n\n\n\n\nThese variables can subsequently be retrieved by printing (ie. running) the name of the variable\n\nWhatDayDidIWriteThis \n\n[1] \"Saturday\"\n\n\n\n\nYou can create variables with almost any name you can think of\n\nTopSecretMeetingDay <- \"Saturday\"\n\n\n\nWith a few exceptions. R doesn’t play well with spaces:\n\nTop Secret Meeting Day <- \"Saturday\"\n\nError in parse(text = input): <text>:1:5: unexpected symbol\n1: Top Secret\n ^\n\n\n\n\nBut does play well with underscores:\n\nTop_Secret_Meeting_Day <- \"Saturday\"\n\n\n\nThe above (with individual words separated by _) is collectively known as snake case. 
The alternate way to help delineate variable names is “camelCase”, with first letter of each word being capitalized (seen in the previous example).\n\n\n\n\nTopSecretMeetingDay\n\n[1] \"Saturday\"\n\n\n\n\nYou can overwrite a Variable name by assigning a different value to it:\n\nTopSecretMeetingDay <- \"Monday\"\n\n\nTopSecretMeetingDay\n\n[1] \"Monday\"\n\n\n\n\nYou can also remove individual variables via the rm function\n\nrm(Top_Secret_Meeting_Day)\n\n\n\nOr if trying to remove all, via the right sidebar\n\n\n\nIn the prior case, we are creating a variable that is a “string” of character values, due to our use of “” around the word. We can see this when we use the str() function.\n\nFluorophores <- \"FITC\"\nstr(Fluorophores)\n\n chr \"FITC\"\n\n\nThe “chr” in front denotating that Fluorophores contains a character string.\n\n\nThis could also be retrieved using the class() function.\n\nclass(Fluorophores)\n\n[1] \"character\"\n\n\n\n\nAlternatively, we could assign a numeric value to a variable\n\nFluorophores <- 29\nstr(Fluorophores)\n\n num 29\n\n\nWhich returns “num”, ie. numeric.\n\n\nWe can also specify a logical (ie. True or FALSE) to a particular object\n\nIsPerCPCy5AGoodFluorophore <- FALSE\nstr(IsPerCPCy5AGoodFluorophore)\n\n logi FALSE\n\n\nWhich returns logi in front, denoting this variable contains a logical value.\n\n\nLast week, when we were installing dplyr, the reason that installation failed was install.packages() expects a character string. 
However, when we left off the ““, it looked within our local environments created variables for the dplyr variable, couldn’t find it, and thus failed.\nWe could of course, have assigned a character value to a variable name, and then used that variable name, which would have worked.\n\nPackageToInstall <- \"dplyr\"\n\ninstall.packages(PackageToInstall)", - "crumbs": [ - "About", - "Intro to R", - "02 - File Paths" - ] + "objectID": "course/04_IntroToTidyverse/slides.html#select-columns", + "href": "course/04_IntroToTidyverse/slides.html#select-columns", + "title": "04 - Introduction to Tidyverse", + "section": "select (Columns)", + "text": "select (Columns)\n\n\n\n\n\n\n\n\n.\n\n\nNow that we have read in our data, and have a general picture of the structure and contents, let’s start learning the main dplyr functions we will be using throughout the course. To do this, let’s go ahead and attach dplyr to our local environment via the library() call.\n\n\n\n\n\n\n\nlibrary(dplyr)" }, { - "objectID": "course/02_FilePaths/index.html#indexing", - "href": "course/02_FilePaths/index.html#indexing", - "title": "02 - File Paths", - "section": "Indexing", - "text": "Indexing\nNot all variables contain single objects.\nFor example, we can modify Fluorophores and add additional entries:\n\nFluorophores <- c(\"BV421\", \"FITC\", \"PE\", \"APC\")\nstr(Fluorophores)\n\n chr [1:4] \"BV421\" \"FITC\" \"PE\" \"APC\"\n\n\nThe c stands for concatenate. 
It concatenates the objects into a larger object, known as a vector.\nIn this case, you’ll notice that in addition to the specification that the values are characters, we get a [1:4], denoting that four objects are present.\n\n\nWe can similarly retrieve this information using the length() function\n\nlength(Fluorophores)\n\n[1] 4\n\n\n\n\nWhen multiple objects are present, we can specify them individually by providing their index number within square brackets [].\n\nFluorophores[1]\n\n[1] \"BV421\"\n\n\n\n\n\nFluorophores[3]\n\n[1] \"PE\"\n\n\n\n\nOr specify in sequence using a colon (:)\n\nFluorophores[3:4]\n\n[1] \"PE\" \"APC\"\n\n\n\n\nOr, if not adjacent, reusing c within the square brackets\n\nFluorophores[c(1,4)]\n\n[1] \"BV421\" \"APC\" \n\n\n\n\nWe will revisit these concepts throughout the course; what we have covered today will help us create file.paths and select the .fcs files that we want to work with via index number.",
    "crumbs": [
      "About",
      "Intro to R",
      "02 - File Paths"
    ]
  },
  {
    "objectID": "course/02_FilePaths/index.html#listing-files",
    "href": "course/02_FilePaths/index.html#listing-files",
    "title": "02 - File Paths",
    "section": "Listing Files",
    "text": "Listing Files\nAfter this detour into variables and indexing, let’s return our focus to how to use these in the context of file paths. 
Working from within our Week2 project folder, let’s see what directories (folders) are present\n\nlist.dirs(path=\".\", full.names=FALSE, recursive=FALSE)\n\n\n\n\nWe can also list any files that are present within our working directory using the list.files() function.\n\nlist.files()\n\n\nIn this case, in addition to our filepaths.qmd file, we can see the LICENSE and README files created when we set up the repository.\n\n\nWe can also specify a particular folder we want to show items present within by changing the path argument. For example, if we wanted to see the contents of the “data” folder\n\nlist.files(path=\"data\", full.names=FALSE, recursive=FALSE)\n\n\nWhich in this case returns the fcs files we copied over at the start of this lesson.\n\n\nIn this case, there are no folders under “data”. Let’s go ahead and create a new one, calling it target.", - "crumbs": [ - "About", - "Intro to R", - "02 - File Paths" - ] + "objectID": "course/04_IntroToTidyverse/slides.html#rename", + "href": "course/04_IntroToTidyverse/slides.html#rename", + "title": "04 - Introduction to Tidyverse", + "section": "rename", + "text": "rename\n\n\n\n\n\n\n\n\n.\n\n\nAt this point, we are able to both move and select particular columns, allowing us to rearrange and subset a larger data.frame object however we want it to appear. However, as we encountered, some of the names contain special characters and spaces, requiring use of tick marks (``) to avoid issues. How can we change a column name?" }, { - "objectID": "course/02_FilePaths/index.html#creating-directories", - "href": "course/02_FilePaths/index.html#creating-directories", - "title": "02 - File Paths", - "section": "Creating directories", - "text": "Creating directories\nAlternatively, we can also create a folder via R using the dir.create() function. 
Since we want it within data, we would modify the path accordingly\n\nNewFolderLocation <- file.path(\"data\", \"target2\")\n\ndir.create(path=NewFolderLocation)\n\n\n\n\nBefore continuing, let’s copy the first two .fcs files into both target and target2.\n\n\n\nGiven our working directory is set to the top level of the Week2 project folder, we can’t just check inside nested target folders directly. If we attempt to:\n\nlist.files(path=\"target\", full.names=FALSE, recursive=FALSE)\n\ncharacter(0)\n\n\n\n\nNo files are returned (i.e., character(0)), since from our computer’s perspective, “target” doesn’t exist within the active working directory.\n\nfile.exists(\"target\")\n\n[1] FALSE\n\n\n\n\nOn the other hand, within its view, it knows that the data folder exists\n\nfile.exists(\"data\")\n\n\nSo here we encounter the first challenge when communicating to our computer where to search for and find files. We need to provide a file.path that incorporates the path of folders between where the computer currently is (i.e. the working directory) and the target file itself.",
    "crumbs": [
      "About",
      "Intro to R",
      "02 - File Paths"
    ]
  },
  {
    "objectID": "course/04_IntroToTidyverse/slides.html#select-columns",
    "href": "course/04_IntroToTidyverse/slides.html#select-columns",
    "title": "04 - Introduction to Tidyverse",
    "section": "select (Columns)",
    "text": "select (Columns)\n\n\n\n\n\n\n\n\n.\n\n\nNow that we have read in our data, and have a general picture of the structure and contents, let’s start learning the main dplyr functions we will be using throughout the course. To do this, let’s go ahead and attach dplyr to our local environment via the library() call.\n\n\n\n\n\n\n\nlibrary(dplyr)"
  },
  {
    "objectID": "course/04_IntroToTidyverse/slides.html#relocate",
    "href": "course/04_IntroToTidyverse/slides.html#relocate",
    "title": "04 - Introduction to Tidyverse",
    "section": "relocate",
    "text": "relocate\n\n\n\n\n\n\n\n\n.\n\n\nAlternatively, we occasionally want to move one column. While we could respecify the location using select(), specifying the names of all the other columns out in a line of code just to rearrange one does not sound like a good use of time. For this reason, the second dplyr function we will be learning is the relocate() function."
  },
  {
    "objectID": "course/04_IntroToTidyverse/slides.html#rename",
    "href": "course/04_IntroToTidyverse/slides.html#rename",
    "title": "04 - Introduction to Tidyverse",
    "section": "rename",
    "text": "rename\n\n\n\n\n\n\n\n\n.\n\n\nAt this point, we are able to both move and select particular columns, allowing us to rearrange and subset a larger data.frame object however we want it to appear. However, as we encountered, some of the names contain special characters and spaces, requiring use of tick marks (``) to avoid issues. How can we change a column name?"
  },
  {
    "objectID": "course/04_IntroToTidyverse/slides.html#pull",
    "href": "course/04_IntroToTidyverse/slides.html#pull",
    "title": "04 - Introduction to Tidyverse",
    "section": "pull",
    "text": "pull\n\n\n\n\n\n\n\n\n.\n\n\nSometimes, we may want to retrieve individual values present in a column, to use within either a vector or a list. 
We can do this using the pull() function, which will retrieve the column contents and strip the column formatting.\n\n\n\n\n\n\n\nData |> pull(Date) |> head(10)\n\n [1] \"2025-07-26\" \"2025-07-26\" \"2025-07-26\" \"2025-07-26\" \"2025-07-26\"\n [6] \"2025-07-26\" \"2025-07-26\" \"2025-07-26\" \"2025-07-26\" \"2025-07-26\""
  },
  {
    "objectID": "course/04_IntroToTidyverse/slides.html#filter-rows",
    "href": "course/04_IntroToTidyverse/slides.html#filter-rows",
    "title": "04 - Introduction to Tidyverse",
    "section": "filter (Rows)",
    "text": "filter (Rows)\n\n\n\n\n\n\n\n\n.\n\n\nSo far, we have been working with dplyr functions primarily used when working with and subsetting columns (including select(), pull(), rename() and relocate()). What if we wanted to work with the rows of a data.frame? This is where the filter() function is used."
  },
  {
    "objectID": "course/04_IntroToTidyverse/slides.html#mutate",
    "href": "course/04_IntroToTidyverse/slides.html#mutate",
    "title": "04 - Introduction to Tidyverse",
    "section": "mutate",
    "text": "mutate\n\n\n\n\n\n\n\n\n.\n\n\nAs we can see, with just this handful of functions, we have the building blocks to rearrange and subset a larger data.frame into a format that we prefer. But what if we wanted to alter the content of a column, or add new columns to an existing data.frame? This is where the mutate() function can be used."
  },
  {
    "objectID": "course/02_FilePaths/index.html#file-paths",
    "href": "course/02_FilePaths/index.html#file-paths",
    "title": "02 - File Paths",
    "section": "File Paths",
    "text": "File Paths\nOne way we can do this is through a file.path argument. We could potentially provide this by adding either a / or a \\ into the path argument, depending on your computer’s operating system.\n\nlist.files(path=\"data/target\", full.names=FALSE, recursive=FALSE)\n\n\n\nWhile this works in your particular context, if you are sharing the code with others who have a different operating system, these hard-coded “/” or “\\” will cause the code to error out for them at these particular steps.\n\nFor that reason, it is better to assemble a file.path using the file.path() function. This function takes into account the operating system, removing your need to worry about this particular computing nuance, and letting you write code that is reproducible and replicable for everyone.\n\nFolderLocation <- file.path(\"data\", \"target\")\nFolderLocation\n\n[1] \"data/target\"\n\n\n\nlist.files(path=FolderLocation, full.names=FALSE, recursive=FALSE)\n\n\n\n\nWe can also append additional locations to existing file paths, by including the variable name within the file.path() we are creating.\n\nFolderLocation <- \"data\"\nScreenshotFolder <- file.path(FolderLocation, \"target\")\nScreenshotFolder\n\n[1] \"data/target\"\n\n\n\nlist.files(path=ScreenshotFolder, full.names=FALSE, recursive=FALSE)\n\n\n\n\nAdditionally, list.files() has the ability to filter for files that contain a particular character string. 
This can be useful if we are searching for “.fcs” or “.csv” files, but also for files that contain a particular word. In the case of the ScreenshotFolder:\n\nlist.files(path=ScreenshotFolder, pattern=\"ND050\", full.names=FALSE, recursive=FALSE)\n\n\nYou will notice that the index numbers are in the context of what is filtered, not all the folder contents.",
    "crumbs": [
      "About",
      "Intro to R",
      "02 - File Paths"
    ]
  },
  {
    "objectID": "course/02_FilePaths/index.html#selecting-for-patterns",
    "href": "course/02_FilePaths/index.html#selecting-for-patterns",
    "title": "02 - File Paths",
    "section": "Selecting for Patterns",
    "text": "Selecting for Patterns\nIf we currently list the files within data, we get a return that looks like this:\n\nlist.files(\"data\", full.names=FALSE, recursive=FALSE)\n\n\n\n\nAs you can see, we are getting back both folders and individual .fcs files. We could consequently change the pattern to provide a character string that will only return the .fcs files. We will go ahead and assign this list to a variable named files, for later retrieval.\n\nfiles <- list.files(\"data\", pattern=\".fcs\", full.names=FALSE, recursive=FALSE)\nfiles\n\n\n\n\nOne of the R packages we will be using throughout the course is the stringr package. It contains two functions that can be useful when identifying more complicated character strings. 
In this case, if we run the str_detect() function to identify which of the .fcs files within the files variable contain the “INF” character string, we get a vector of logical (i.e. TRUE or FALSE) outputs corresponding to each file.\n\n# install.packages(\"stringr\") # CRAN\nlibrary(stringr)\n\n\nstr_detect(files, \"INF\")\n\n\n\n\nSimilar to how indexing the Fluorophores vector (e.g. Fluorophores[1:2]) returned a subset, we can use this logical vector to subset the files that returned TRUE for containing the pattern “INF”\n\nfiles[str_detect(files, \"INF\")]\n\n\n\n\nLet’s go ahead and save these subsetted file names to a new variable, called Infants.\n\nInfants <- files[str_detect(files, \"INF\")]",
    "crumbs": [
      "About",
      "Intro to R",
      "02 - File Paths"
    ]
  },
  {
    "objectID": "course/02_FilePaths/index.html#conditionals",
    "href": "course/02_FilePaths/index.html#conditionals",
    "title": "02 - File Paths",
    "section": "Conditionals",
    "text": "Conditionals\nOne useful thing is that within R, we can set conditions on whether something is carried out. The most typical conditionals you will encounter are “If” statements. 
They typically take a form that resembles the following.\n\nNeedCoffee <- TRUE\n\nif (NeedCoffee){\n print(\"Take a break\")\n}\n\n\n\nIn the case above, if the variable within the () is equal to TRUE, the code within the {} will be executed.\n\nNeedCoffee <- TRUE\n\nif (NeedCoffee){\n print(\"Take a break\")\n}\n\n[1] \"Take a break\"\n\n\n\n\nBy contrast, when the variable within the () is equal to FALSE, the code within the {} will not be executed.\n\nNeedCoffee <- FALSE\n\nif (NeedCoffee){\n print(\"Take a break\")\n}\n\n\n\nThese “If” statements will trigger as long as the specified condition within the () is TRUE. For a different example:\n\nRowNumber <- 299\n2 + RowNumber > 300\n\n[1] TRUE\n\n\n\nif (2 + RowNumber > 300){\n print(\"Stop Iterating\")\n}\n\n[1] \"Stop Iterating\"\n\n\n\n\nWhen you add an ! in front of a conditional, it flips the expected outcome.\n\nItsRaining <- TRUE\n\nif (ItsRaining){print(\"Bring an Umbrella\")}\n\n[1] \"Bring an Umbrella\"\n\n\n\n!ItsRaining\n\n[1] FALSE\n\n\n\nif (!ItsRaining){print(\"Bring an Umbrella\")}\n\n\nItsRaining <- TRUE\n\nif (!ItsRaining){print(\"Bring Sunglasses\")}\n\n\n\nWe will explore more complicated conditionals throughout the course, but for now, let’s implement a couple of simple ones in the context of copying the .fcs files in Infants over to a new target3 folder.",
    "crumbs": [
      "About",
      "Intro to R",
      "02 - File Paths"
    ]
  },
  {
    "objectID": "course/02_FilePaths/index.html#conditionals-in-practice",
    "href": "course/02_FilePaths/index.html#conditionals-in-practice",
    "title": "02 - File Paths",
    "section": "Conditionals in practice",
    "text": "Conditionals in practice\nFirst off, let’s write a conditional to check if there is a target3 folder within data.\n\nfiles_present <- list.files(\"data\", full.names=FALSE, recursive=FALSE)\nfiles_present\n\n\n\n\n\nFolderTarget3 <- file.path(\"data\", \"target3\")\ndir.exists(FolderTarget3)\n\n\n\n\nWe can write a conditional to create a folder if one does not yet exist.\n\nif (!dir.exists(FolderTarget3)){\n dir.create(FolderTarget3)\n}",
    "crumbs": [
      "About",
      "Intro to R",
      "02 - File Paths"
    ]
  },
  {
    "objectID": "course/04_IntroToTidyverse/slides.html#arrange",
    "href": "course/04_IntroToTidyverse/slides.html#arrange",
    "title": "04 - Introduction to Tidyverse",
    "section": "arrange",
    "text": "arrange\n\n\n\n\n\n\n\n\n.\n\n\nAnd while we are here, let’s rearrange the rows so that they are descending based on the Tcell proportion. 
We can use this by using the desc() and arrange() functions from dplyr:\n\n\n\n\n\n\n\nTidyData <- TidyData |> arrange(desc(TcellsRounded))" }, { - "objectID": "course/02_FilePaths/index.html#conditionals-in-practice", - "href": "course/02_FilePaths/index.html#conditionals-in-practice", - "title": "02 - File Paths", - "section": "Conditionals in practice", - "text": "Conditionals in practice\nFirst off, let’s write a conditional to check if there is a target3 folder within data.\n\nfiles_present <- list.files(\"data\", full.names=FALSE, recursive=FALSE)\nfiles_present\n\n\n\n\n\nFolderTarget3 <- file.path(\"data\", \"target3\")\ndir.exists(FolderTarget3)\n\n\n\n\nWe can write a conditional to create a folder if one does not yet exist.\n\nif (!dir.exists(FolderTarget3)){\n dir.create(FolderTarget3)\n}", - "crumbs": [ - "About", - "Intro to R", - "02 - File Paths" - ] + "objectID": "course/03_InsideFCSFile/slides.html#getting-set-up", + "href": "course/03_InsideFCSFile/slides.html#getting-set-up", + "title": "03 - Inside an FCS File", + "section": "Getting Set Up", + "text": "Getting Set Up" }, { - "objectID": "course/02_FilePaths/index.html#copying-files", - "href": "course/02_FilePaths/index.html#copying-files", - "title": "02 - File Paths", - "section": "Copying Files", - "text": "Copying Files\nTo copy files to another folder location, we use the file.copy() function. It has two arguments that we will be working with, from being the .fcs files, and to being the folder location we wish to transfer them to. If we tried using them as we currently have them:\n\n# Variable Infants containing 4 .fcs file names\n\nfile.copy(from=Infants, to=FolderTarget3)\n\n\n\n\nThe reason for this error is we are only working with a partial file path, as viewed from our Working directory. 
In this case, what is needed is the full file.path, so the file.path should also include the upstream folders from your current working directory.\n\ngetwd()\n\n\n\n\nIn this case, we can update the .fcs files location by switching the full.names argument within list.files() from FALSE, to TRUE.\n\nfiles_present <- list.files(\"data\", full.names=TRUE, recursive=FALSE)\nfiles_present\n\n\nAnd filter for those containing “INF” again\n\nInfants <- files_present[str_detect(files_present, \"INF\")]\n\nAnd then try again:\n\n# Variable Infants containing 4 .fcs file names\n\nfile.copy(from=Infants, to=FolderTarget3)", - "crumbs": [ - "About", - "Intro to R", - "02 - File Paths" - ] + "objectID": "course/03_InsideFCSFile/slides.html#flowcore", + "href": "course/03_InsideFCSFile/slides.html#flowcore", + "title": "03 - Inside an FCS File", + "section": "flowCore", + "text": "flowCore\n\n\n\n\n\n\n\n\n.\n\n\nWe will be using the flowCore package, which is the oldest and most-frequently downloaded flow cytometry package on Bioconductor." }, { - "objectID": "course/02_FilePaths/index.html#removing-files.", - "href": "course/02_FilePaths/index.html#removing-files.", - "title": "02 - File Paths", - "section": "Removing files.", - "text": "Removing files.\nJust like we can add files via R, we can also remove them. However, when we remove them via this route, they are removed permanently, not sent to the recycle bin. We will revisit how later on in the course after you have gained additional experience with file.paths.\n\n?unlink()", - "crumbs": [ - "About", - "Intro to R", - "02 - File Paths" - ] + "objectID": "course/03_InsideFCSFile/slides.html#flowframe", + "href": "course/03_InsideFCSFile/slides.html#flowframe", + "title": "03 - Inside an FCS File", + "section": "flowFrame", + "text": "flowFrame\n\n\n\n\n\n\n\n\n.\n\n\nFor read.FCS(), it accepts several arguments. The argument “filename” is where we provide our file.path to .fcs file that we wish to load into R. 
Let’s go ahead and do so\n\n\n\n\n\n\n\nread.FCS(filename=firstfile)" }, { - "objectID": "course/02_FilePaths/index.html#basename", - "href": "course/02_FilePaths/index.html#basename", - "title": "02 - File Paths", - "section": "Basename", - "text": "Basename\nIf we look at Infants with the full.names=TRUE, we see the additional pathing folder has been added, allowing us to successfully copy over the files.\n\nInfants\n\n\n\n\nIf we were trying to retrieve just the local file names from the full.names, we could do so with basename() function. We will use this in combination with additional arguments later in the course\n\nbasename(Infants)", - "crumbs": [ - "About", - "Intro to R", - "02 - File Paths" - ] + "objectID": "course/03_InsideFCSFile/slides.html#early-metadata", + "href": "course/03_InsideFCSFile/slides.html#early-metadata", + "title": "03 - Inside an FCS File", + "section": "Early Metadata", + "text": "Early Metadata\n\n\n\n\n\n\n\n\n.\n\n\nWithin the initial portion, we are getting back metadata keywords related to where and how the particular file was acquired. Keywords of potential interest include:\n\n\n\n\n\n\n\n\n\n\n\n\n\nStart Time\n\n\nWhat time was the .fcs file acquired\n\n\n\n\n\n\n\nDescriptionList$`$BTIM`\n\n[1] \"13:55:29.85\"" }, { - "objectID": "course/02_FilePaths/index.html#recursive", - "href": "course/02_FilePaths/index.html#recursive", - "title": "02 - File Paths", - "section": "Recursive", - "text": "Recursive\nAnd finally that we have created additional nested folders and populated them with fcs files, let’s see what setting list.files() recursive argument to TRUE\n\nall_files_present <- list.files(full.names=TRUE, recursive=TRUE)\nall_files_present \n\n\n\n\nIn this case, all files in all folders within the working directory are shown. 
This can be useful when exploring folder contents, but if there are a lot of files present within the folder, it will take a while to return the list.", - "crumbs": [ - "About", - "Intro to R", - "02 - File Paths" - ] + "objectID": "course/03_InsideFCSFile/slides.html#detector-values", + "href": "course/03_InsideFCSFile/slides.html#detector-values", + "title": "03 - Inside an FCS File", + "section": "Detector Values", + "text": "Detector Values\n\n\n\n\n\n\n\n\n.\n\n\nThe next major stretch of keywords encode parameter values associated with the individual detectors for at the time of acquisition.\n\n\n\n\n\n\n\nDetectors <- DescriptionList[20:384]\nDetectors\n\n$`$P10B`\n[1] \"32\"\n\n$`$P10E`\n[1] \"0,0\"\n\n$`$P10N`\n[1] \"UV9-A\"\n\n$`$P10R`\n[1] \"4194304\"\n\n$`$P10TYPE`\n[1] \"Raw_Fluorescence\"\n\n$`$P10V`\n[1] \"710\"\n\n$`$P11B`\n[1] \"32\"\n\n$`$P11E`\n[1] \"0,0\"\n\n$`$P11N`\n[1] \"UV10-A\"\n\n$`$P11R`\n[1] \"4194304\"\n\n$`$P11TYPE`\n[1] \"Raw_Fluorescence\"\n\n$`$P11V`\n[1] \"377\"\n\n$`$P12B`\n[1] \"32\"\n\n$`$P12E`\n[1] \"0,0\"\n\n$`$P12N`\n[1] \"UV11-A\"\n\n$`$P12R`\n[1] \"4194304\"\n\n$`$P12TYPE`\n[1] \"Raw_Fluorescence\"\n\n$`$P12V`\n[1] \"469\"\n\n$`$P13B`\n[1] \"32\"\n\n$`$P13E`\n[1] \"0,0\"\n\n$`$P13N`\n[1] \"UV12-A\"\n\n$`$P13R`\n[1] \"4194304\"\n\n$`$P13TYPE`\n[1] \"Raw_Fluorescence\"\n\n$`$P13V`\n[1] \"434\"\n\n$`$P14B`\n[1] \"32\"\n\n$`$P14E`\n[1] \"0,0\"\n\n$`$P14N`\n[1] \"UV13-A\"\n\n$`$P14R`\n[1] \"4194304\"\n\n$`$P14TYPE`\n[1] \"Raw_Fluorescence\"\n\n$`$P14V`\n[1] \"564\"\n\n$`$P15B`\n[1] \"32\"\n\n$`$P15E`\n[1] \"0,0\"\n\n$`$P15N`\n[1] \"UV14-A\"\n\n$`$P15R`\n[1] \"4194304\"\n\n$`$P15TYPE`\n[1] \"Raw_Fluorescence\"\n\n$`$P15V`\n[1] \"975\"\n\n$`$P16B`\n[1] \"32\"\n\n$`$P16E`\n[1] \"0,0\"\n\n$`$P16N`\n[1] \"UV15-A\"\n\n$`$P16R`\n[1] \"4194304\"\n\n$`$P16TYPE`\n[1] \"Raw_Fluorescence\"\n\n$`$P16V`\n[1] \"737\"\n\n$`$P17B`\n[1] \"32\"\n\n$`$P17E`\n[1] \"0,0\"\n\n$`$P17N`\n[1] \"UV16-A\"\n\n$`$P17R`\n[1] \"4194304\"\n\n$`$P17TYPE`\n[1] 
\"Raw_Fluorescence\"\n\n$`$P17V`\n[1] \"1069\"\n\n$`$P18B`\n[1] \"32\"\n\n$`$P18E`\n[1] \"0,0\"\n\n$`$P18N`\n[1] \"SSC-H\"\n\n$`$P18R`\n[1] \"4194304\"\n\n$`$P18TYPE`\n[1] \"Side_Scatter\"\n\n$`$P18V`\n[1] \"334\"\n\n$`$P19B`\n[1] \"32\"\n\n$`$P19E`\n[1] \"0,0\"\n\n$`$P19N`\n[1] \"SSC-A\"\n\n$`$P19R`\n[1] \"4194304\"\n\n$`$P19TYPE`\n[1] \"Side_Scatter\"\n\n$`$P19V`\n[1] \"334\"\n\n$`$P1B`\n[1] \"32\"\n\n$`$P1E`\n[1] \"0,0\"\n\n$`$P1N`\n[1] \"Time\"\n\n$`$P1R`\n[1] \"272140\"\n\n$`$P1TYPE`\n[1] \"Time\"\n\n$`$P20B`\n[1] \"32\"\n\n$`$P20E`\n[1] \"0,0\"\n\n$`$P20N`\n[1] \"V1-A\"\n\n$`$P20R`\n[1] \"4194304\"\n\n$`$P20TYPE`\n[1] \"Raw_Fluorescence\"\n\n$`$P20V`\n[1] \"352\"\n\n$`$P21B`\n[1] \"32\"\n\n$`$P21E`\n[1] \"0,0\"\n\n$`$P21N`\n[1] \"V2-A\"\n\n$`$P21R`\n[1] \"4194304\"\n\n$`$P21TYPE`\n[1] \"Raw_Fluorescence\"\n\n$`$P21V`\n[1] \"412\"\n\n$`$P22B`\n[1] \"32\"\n\n$`$P22E`\n[1] \"0,0\"\n\n$`$P22N`\n[1] \"V3-A\"\n\n$`$P22R`\n[1] \"4194304\"\n\n$`$P22TYPE`\n[1] \"Raw_Fluorescence\"\n\n$`$P22V`\n[1] \"304\"\n\n$`$P23B`\n[1] \"32\"\n\n$`$P23E`\n[1] \"0,0\"\n\n$`$P23N`\n[1] \"V4-A\"\n\n$`$P23R`\n[1] \"4194304\"\n\n$`$P23TYPE`\n[1] \"Raw_Fluorescence\"\n\n$`$P23V`\n[1] \"217\"\n\n$`$P24B`\n[1] \"32\"\n\n$`$P24E`\n[1] \"0,0\"\n\n$`$P24N`\n[1] \"V5-A\"\n\n$`$P24R`\n[1] \"4194304\"\n\n$`$P24TYPE`\n[1] \"Raw_Fluorescence\"\n\n$`$P24V`\n[1] \"257\"\n\n$`$P25B`\n[1] \"32\"\n\n$`$P25E`\n[1] \"0,0\"\n\n$`$P25N`\n[1] \"V6-A\"\n\n$`$P25R`\n[1] \"4194304\"\n\n$`$P25TYPE`\n[1] \"Raw_Fluorescence\"\n\n$`$P25V`\n[1] \"218\"\n\n$`$P26B`\n[1] \"32\"\n\n$`$P26E`\n[1] \"0,0\"\n\n$`$P26N`\n[1] \"V7-A\"\n\n$`$P26R`\n[1] \"4194304\"\n\n$`$P26TYPE`\n[1] \"Raw_Fluorescence\"\n\n$`$P26V`\n[1] \"303\"\n\n$`$P27B`\n[1] \"32\"\n\n$`$P27E`\n[1] \"0,0\"\n\n$`$P27N`\n[1] \"V8-A\"\n\n$`$P27R`\n[1] \"4194304\"\n\n$`$P27TYPE`\n[1] \"Raw_Fluorescence\"\n\n$`$P27V`\n[1] \"461\"\n\n$`$P28B`\n[1] \"32\"\n\n$`$P28E`\n[1] \"0,0\"\n\n$`$P28N`\n[1] \"V9-A\"\n\n$`$P28R`\n[1] \"4194304\"\n\n$`$P28TYPE`\n[1] 
\"Raw_Fluorescence\"\n\n$`$P28V`\n[1] \"320\"\n\n$`$P29B`\n[1] \"32\"\n\n$`$P29E`\n[1] \"0,0\"\n\n$`$P29N`\n[1] \"V10-A\"\n\n$`$P29R`\n[1] \"4194304\"\n\n$`$P29TYPE`\n[1] \"Raw_Fluorescence\"\n\n$`$P29V`\n[1] \"359\"\n\n$`$P2B`\n[1] \"32\"\n\n$`$P2E`\n[1] \"0,0\"\n\n$`$P2N`\n[1] \"UV1-A\"\n\n$`$P2R`\n[1] \"4194304\"\n\n$`$P2TYPE`\n[1] \"Raw_Fluorescence\"\n\n$`$P2V`\n[1] \"1008\"\n\n$`$P30B`\n[1] \"32\"\n\n$`$P30E`\n[1] \"0,0\"\n\n$`$P30N`\n[1] \"V11-A\"\n\n$`$P30R`\n[1] \"4194304\"\n\n$`$P30TYPE`\n[1] \"Raw_Fluorescence\"\n\n$`$P30V`\n[1] \"271\"\n\n$`$P31B`\n[1] \"32\"\n\n$`$P31E`\n[1] \"0,0\"\n\n$`$P31N`\n[1] \"V12-A\"\n\n$`$P31R`\n[1] \"4194304\"\n\n$`$P31TYPE`\n[1] \"Raw_Fluorescence\"\n\n$`$P31V`\n[1] \"234\"\n\n$`$P32B`\n[1] \"32\"\n\n$`$P32E`\n[1] \"0,0\"\n\n$`$P32N`\n[1] \"V13-A\"\n\n$`$P32R`\n[1] \"4194304\"\n\n$`$P32TYPE`\n[1] \"Raw_Fluorescence\"\n\n$`$P32V`\n[1] \"236\"\n\n$`$P33B`\n[1] \"32\"\n\n$`$P33E`\n[1] \"0,0\"\n\n$`$P33N`\n[1] \"V14-A\"\n\n$`$P33R`\n[1] \"4194304\"\n\n$`$P33TYPE`\n[1] \"Raw_Fluorescence\"\n\n$`$P33V`\n[1] \"318\"\n\n$`$P34B`\n[1] \"32\"\n\n$`$P34E`\n[1] \"0,0\"\n\n$`$P34N`\n[1] \"V15-A\"\n\n$`$P34R`\n[1] \"4194304\"\n\n$`$P34TYPE`\n[1] \"Raw_Fluorescence\"\n\n$`$P34V`\n[1] \"602\"\n\n$`$P35B`\n[1] \"32\"\n\n$`$P35E`\n[1] \"0,0\"\n\n$`$P35N`\n[1] \"V16-A\"\n\n$`$P35R`\n[1] \"4194304\"\n\n$`$P35TYPE`\n[1] \"Raw_Fluorescence\"\n\n$`$P35V`\n[1] \"372\"\n\n$`$P36B`\n[1] \"32\"\n\n$`$P36E`\n[1] \"0,0\"\n\n$`$P36N`\n[1] \"FSC-H\"\n\n$`$P36R`\n[1] \"4194304\"\n\n$`$P36TYPE`\n[1] \"Forward_Scatter\"\n\n$`$P36V`\n[1] \"55\"\n\n$`$P37B`\n[1] \"32\"\n\n$`$P37E`\n[1] \"0,0\"\n\n$`$P37N`\n[1] \"FSC-A\"\n\n$`$P37R`\n[1] \"4194304\"\n\n$`$P37TYPE`\n[1] \"Forward_Scatter\"\n\n$`$P37V`\n[1] \"55\"\n\n$`$P38B`\n[1] \"32\"\n\n$`$P38E`\n[1] \"0,0\"\n\n$`$P38N`\n[1] \"SSC-B-H\"\n\n$`$P38R`\n[1] \"4194304\"\n\n$`$P38TYPE`\n[1] \"Side_Scatter\"\n\n$`$P38V`\n[1] \"241\"\n\n$`$P39B`\n[1] \"32\"\n\n$`$P39E`\n[1] \"0,0\"\n\n$`$P39N`\n[1] 
\"SSC-B-A\"\n\n$`$P39R`\n[1] \"4194304\"\n\n$`$P39TYPE`\n[1] \"Side_Scatter\"\n\n$`$P39V`\n[1] \"241\"\n\n$`$P3B`\n[1] \"32\"\n\n$`$P3E`\n[1] \"0,0\"\n\n$`$P3N`\n[1] \"UV2-A\"\n\n$`$P3R`\n[1] \"4194304\"\n\n$`$P3TYPE`\n[1] \"Raw_Fluorescence\"\n\n$`$P3V`\n[1] \"286\"\n\n$`$P40B`\n[1] \"32\"\n\n$`$P40E`\n[1] \"0,0\"\n\n$`$P40N`\n[1] \"B1-A\"\n\n$`$P40R`\n[1] \"4194304\"\n\n$`$P40TYPE`\n[1] \"Raw_Fluorescence\"\n\n$`$P40V`\n[1] \"1013\"\n\n$`$P41B`\n[1] \"32\"\n\n$`$P41E`\n[1] \"0,0\"\n\n$`$P41N`\n[1] \"B2-A\"\n\n$`$P41R`\n[1] \"4194304\"\n\n$`$P41TYPE`\n[1] \"Raw_Fluorescence\"\n\n$`$P41V`\n[1] \"483\"\n\n$`$P42B`\n[1] \"32\"\n\n$`$P42E`\n[1] \"0,0\"\n\n$`$P42N`\n[1] \"B3-A\"\n\n$`$P42R`\n[1] \"4194304\"\n\n$`$P42TYPE`\n[1] \"Raw_Fluorescence\"\n\n$`$P42V`\n[1] \"471\"\n\n$`$P43B`\n[1] \"32\"\n\n$`$P43E`\n[1] \"0,0\"\n\n$`$P43N`\n[1] \"B4-A\"\n\n$`$P43R`\n[1] \"4194304\"\n\n$`$P43TYPE`\n[1] \"Raw_Fluorescence\"\n\n$`$P43V`\n[1] \"473\"\n\n$`$P44B`\n[1] \"32\"\n\n$`$P44E`\n[1] \"0,0\"\n\n$`$P44N`\n[1] \"B5-A\"\n\n$`$P44R`\n[1] \"4194304\"\n\n$`$P44TYPE`\n[1] \"Raw_Fluorescence\"\n\n$`$P44V`\n[1] \"467\"\n\n$`$P45B`\n[1] \"32\"\n\n$`$P45E`\n[1] \"0,0\"\n\n$`$P45N`\n[1] \"B6-A\"\n\n$`$P45R`\n[1] \"4194304\"\n\n$`$P45TYPE`\n[1] \"Raw_Fluorescence\"\n\n$`$P45V`\n[1] \"284\"\n\n$`$P46B`\n[1] \"32\"\n\n$`$P46E`\n[1] \"0,0\"\n\n$`$P46N`\n[1] \"B7-A\"\n\n$`$P46R`\n[1] \"4194304\"\n\n$`$P46TYPE`\n[1] \"Raw_Fluorescence\"\n\n$`$P46V`\n[1] \"531\"\n\n$`$P47B`\n[1] \"32\"\n\n$`$P47E`\n[1] \"0,0\"\n\n$`$P47N`\n[1] \"B8-A\"\n\n$`$P47R`\n[1] \"4194304\"\n\n$`$P47TYPE`\n[1] \"Raw_Fluorescence\"\n\n$`$P47V`\n[1] \"432\"\n\n$`$P48B`\n[1] \"32\"\n\n$`$P48E`\n[1] \"0,0\"\n\n$`$P48N`\n[1] \"B9-A\"\n\n$`$P48R`\n[1] \"4194304\"\n\n$`$P48TYPE`\n[1] \"Raw_Fluorescence\"\n\n$`$P48V`\n[1] \"675\"\n\n$`$P49B`\n[1] \"32\"\n\n$`$P49E`\n[1] \"0,0\"\n\n$`$P49N`\n[1] \"B10-A\"\n\n$`$P49R`\n[1] \"4194304\"\n\n$`$P49TYPE`\n[1] \"Raw_Fluorescence\"\n\n$`$P49V`\n[1] \"490\"\n\n$`$P4B`\n[1] 
\"32\"\n\n$`$P4E`\n[1] \"0,0\"\n\n$`$P4N`\n[1] \"UV3-A\"\n\n$`$P4R`\n[1] \"4194304\"\n\n$`$P4TYPE`\n[1] \"Raw_Fluorescence\"\n\n$`$P4V`\n[1] \"677\"\n\n$`$P50B`\n[1] \"32\"\n\n$`$P50E`\n[1] \"0,0\"\n\n$`$P50N`\n[1] \"B11-A\"\n\n$`$P50R`\n[1] \"4194304\"\n\n$`$P50TYPE`\n[1] \"Raw_Fluorescence\"\n\n$`$P50V`\n[1] \"286\"\n\n$`$P51B`\n[1] \"32\"\n\n$`$P51E`\n[1] \"0,0\"\n\n$`$P51N`\n[1] \"B12-A\"\n\n$`$P51R`\n[1] \"4194304\"\n\n$`$P51TYPE`\n[1] \"Raw_Fluorescence\"\n\n$`$P51V`\n[1] \"407\"\n\n$`$P52B`\n[1] \"32\"\n\n$`$P52E`\n[1] \"0,0\"\n\n$`$P52N`\n[1] \"B13-A\"\n\n$`$P52R`\n[1] \"4194304\"\n\n$`$P52TYPE`\n[1] \"Raw_Fluorescence\"\n\n$`$P52V`\n[1] \"700\"\n\n$`$P53B`\n[1] \"32\"\n\n$`$P53E`\n[1] \"0,0\"\n\n$`$P53N`\n[1] \"B14-A\"\n\n$`$P53R`\n[1] \"4194304\"\n\n$`$P53TYPE`\n[1] \"Raw_Fluorescence\"\n\n$`$P53V`\n[1] \"693\"\n\n$`$P54B`\n[1] \"32\"\n\n$`$P54E`\n[1] \"0,0\"\n\n$`$P54N`\n[1] \"R1-A\"\n\n$`$P54R`\n[1] \"4194304\"\n\n$`$P54TYPE`\n[1] \"Raw_Fluorescence\"\n\n$`$P54V`\n[1] \"158\"\n\n$`$P55B`\n[1] \"32\"\n\n$`$P55E`\n[1] \"0,0\"\n\n$`$P55N`\n[1] \"R2-A\"\n\n$`$P55R`\n[1] \"4194304\"\n\n$`$P55TYPE`\n[1] \"Raw_Fluorescence\"\n\n$`$P55V`\n[1] \"245\"\n\n$`$P56B`\n[1] \"32\"\n\n$`$P56E`\n[1] \"0,0\"\n\n$`$P56N`\n[1] \"R3-A\"\n\n$`$P56R`\n[1] \"4194304\"\n\n$`$P56TYPE`\n[1] \"Raw_Fluorescence\"\n\n$`$P56V`\n[1] \"338\"\n\n$`$P57B`\n[1] \"32\"\n\n$`$P57E`\n[1] \"0,0\"\n\n$`$P57N`\n[1] \"R4-A\"\n\n$`$P57R`\n[1] \"4194304\"\n\n$`$P57TYPE`\n[1] \"Raw_Fluorescence\"\n\n$`$P57V`\n[1] \"238\"\n\n$`$P58B`\n[1] \"32\"\n\n$`$P58E`\n[1] \"0,0\"\n\n$`$P58N`\n[1] \"R5-A\"\n\n$`$P58R`\n[1] \"4194304\"\n\n$`$P58TYPE`\n[1] \"Raw_Fluorescence\"\n\n$`$P58V`\n[1] \"191\"\n\n$`$P59B`\n[1] \"32\"\n\n$`$P59E`\n[1] \"0,0\"\n\n$`$P59N`\n[1] \"R6-A\"\n\n$`$P59R`\n[1] \"4194304\"\n\n$`$P59TYPE`\n[1] \"Raw_Fluorescence\"\n\n$`$P59V`\n[1] \"274\"\n\n$`$P5B`\n[1] \"32\"\n\n$`$P5E`\n[1] \"0,0\"\n\n$`$P5N`\n[1] \"UV4-A\"\n\n$`$P5R`\n[1] \"4194304\"\n\n$`$P5TYPE`\n[1] 
\"Raw_Fluorescence\"\n\n$`$P5V`\n[1] \"1022\"\n\n$`$P60B`\n[1] \"32\"\n\n$`$P60E`\n[1] \"0,0\"\n\n$`$P60N`\n[1] \"R7-A\"\n\n$`$P60R`\n[1] \"4194304\"\n\n$`$P60TYPE`\n[1] \"Raw_Fluorescence\"\n\n$`$P60V`\n[1] \"524\"\n\n$`$P61B`\n[1] \"32\"\n\n$`$P61E`\n[1] \"0,0\"\n\n$`$P61N`\n[1] \"R8-A\"\n\n$`$P61R`\n[1] \"4194304\"\n\n$`$P61TYPE`\n[1] \"Raw_Fluorescence\"\n\n$`$P61V`\n[1] \"243\"\n\n$`$P6B`\n[1] \"32\"\n\n$`$P6E`\n[1] \"0,0\"\n\n$`$P6N`\n[1] \"UV5-A\"\n\n$`$P6R`\n[1] \"4194304\"\n\n$`$P6TYPE`\n[1] \"Raw_Fluorescence\"\n\n$`$P6V`\n[1] \"616\"\n\n$`$P7B`\n[1] \"32\"\n\n$`$P7E`\n[1] \"0,0\"\n\n$`$P7N`\n[1] \"UV6-A\"\n\n$`$P7R`\n[1] \"4194304\"\n\n$`$P7TYPE`\n[1] \"Raw_Fluorescence\"\n\n$`$P7V`\n[1] \"506\"\n\n$`$P8B`\n[1] \"32\"\n\n$`$P8E`\n[1] \"0,0\"\n\n$`$P8N`\n[1] \"UV7-A\"\n\n$`$P8R`\n[1] \"4194304\"\n\n$`$P8TYPE`\n[1] \"Raw_Fluorescence\"\n\n$`$P8V`\n[1] \"661\"\n\n$`$P9B`\n[1] \"32\"\n\n$`$P9E`\n[1] \"0,0\"\n\n$`$P9N`\n[1] \"UV8-A\"\n\n$`$P9R`\n[1] \"4194304\"\n\n$`$P9TYPE`\n[1] \"Raw_Fluorescence\"\n\n$`$P9V`\n[1] \"514\"" }, { - "objectID": "course/02_FilePaths/index.html#saving-changes-to-version-control", - "href": "course/02_FilePaths/index.html#saving-changes-to-version-control", - "title": "02 - File Paths", - "section": "Saving changes to Version Control", - "text": "Saving changes to Version Control\nAnd as is good practice, to maintain version control, let’s stage all the files and folders we created today within the Week2 Project Folder, write a commit message, and send these files back to GitHub until they are needed again next time.", - "crumbs": [ - "About", - "Intro to R", - "02 - File Paths" - ] + "objectID": "course/03_InsideFCSFile/slides.html#middle-metadata", + "href": "course/03_InsideFCSFile/slides.html#middle-metadata", + "title": "03 - Inside an FCS File", + "section": "Middle Metadata", + "text": "Middle Metadata\n\n\n\n\n\n\n\n\n.\n\n\nOnce we are out of the detector keywords, we find the last of the $Metadata associated 
keywords.\n\n\n\n\n\n\n\nDetectors <- DescriptionList[385:398]\nDetectors\n\n$`$PAR`\n[1] \"61\"\n\n$`$PROJ`\n[1] \"CellCounts4L_AB_05\"\n\n$`$SPILLOVER`\n UV1-A UV2-A UV3-A UV4-A UV5-A UV6-A UV7-A UV8-A UV9-A UV10-A UV11-A\n [1,] 1e+00 0 0 0 0 0 0 0 0 0 0\n [2,] 1e-06 1 0 0 0 0 0 0 0 0 0\n [3,] 0e+00 0 1 0 0 0 0 0 0 0 0\n [4,] 0e+00 0 0 1 0 0 0 0 0 0 0\n [5,] 0e+00 0 0 0 1 0 0 0 0 0 0\n [6,] 0e+00 0 0 0 0 1 0 0 0 0 0\n [7,] 0e+00 0 0 0 0 0 1 0 0 0 0\n [8,] 0e+00 0 0 0 0 0 0 1 0 0 0\n [9,] 0e+00 0 0 0 0 0 0 0 1 0 0\n[10,] 0e+00 0 0 0 0 0 0 0 0 1 0\n[11,] 0e+00 0 0 0 0 0 0 0 0 0 1\n[12,] 0e+00 0 0 0 0 0 0 0 0 0 0\n[13,] 0e+00 0 0 0 0 0 0 0 0 0 0\n[14,] 0e+00 0 0 0 0 0 0 0 0 0 0\n[15,] 0e+00 0 0 0 0 0 0 0 0 0 0\n[16,] 0e+00 0 0 0 0 0 0 0 0 0 0\n[17,] 0e+00 0 0 0 0 0 0 0 0 0 0\n[18,] 0e+00 0 0 0 0 0 0 0 0 0 0\n[19,] 0e+00 0 0 0 0 0 0 0 0 0 0\n[20,] 0e+00 0 0 0 0 0 0 0 0 0 0\n[21,] 0e+00 0 0 0 0 0 0 0 0 0 0\n[22,] 0e+00 0 0 0 0 0 0 0 0 0 0\n[23,] 0e+00 0 0 0 0 0 0 0 0 0 0\n[24,] 0e+00 0 0 0 0 0 0 0 0 0 0\n[25,] 0e+00 0 0 0 0 0 0 0 0 0 0\n[26,] 0e+00 0 0 0 0 0 0 0 0 0 0\n[27,] 0e+00 0 0 0 0 0 0 0 0 0 0\n[28,] 0e+00 0 0 0 0 0 0 0 0 0 0\n[29,] 0e+00 0 0 0 0 0 0 0 0 0 0\n[30,] 0e+00 0 0 0 0 0 0 0 0 0 0\n[31,] 0e+00 0 0 0 0 0 0 0 0 0 0\n[32,] 0e+00 0 0 0 0 0 0 0 0 0 0\n[33,] 0e+00 0 0 0 0 0 0 0 0 0 0\n[34,] 0e+00 0 0 0 0 0 0 0 0 0 0\n[35,] 0e+00 0 0 0 0 0 0 0 0 0 0\n[36,] 0e+00 0 0 0 0 0 0 0 0 0 0\n[37,] 0e+00 0 0 0 0 0 0 0 0 0 0\n[38,] 0e+00 0 0 0 0 0 0 0 0 0 0\n[39,] 0e+00 0 0 0 0 0 0 0 0 0 0\n[40,] 0e+00 0 0 0 0 0 0 0 0 0 0\n[41,] 0e+00 0 0 0 0 0 0 0 0 0 0\n[42,] 0e+00 0 0 0 0 0 0 0 0 0 0\n[43,] 0e+00 0 0 0 0 0 0 0 0 0 0\n[44,] 0e+00 0 0 0 0 0 0 0 0 0 0\n[45,] 0e+00 0 0 0 0 0 0 0 0 0 0\n[46,] 0e+00 0 0 0 0 0 0 0 0 0 0\n[47,] 0e+00 0 0 0 0 0 0 0 0 0 0\n[48,] 0e+00 0 0 0 0 0 0 0 0 0 0\n[49,] 0e+00 0 0 0 0 0 0 0 0 0 0\n[50,] 0e+00 0 0 0 0 0 0 0 0 0 0\n[51,] 0e+00 0 0 0 0 0 0 0 0 0 0\n[52,] 0e+00 0 0 0 0 0 0 0 0 0 0\n[53,] 0e+00 0 0 0 0 0 0 0 0 0 0\n[54,] 0e+00 0 0 0 0 0 0 0 
0 0 0\n UV12-A UV13-A UV14-A UV15-A UV16-A V1-A V2-A V3-A V4-A V5-A V6-A V7-A\n [1,] 0 0 0 0 0 0 0 0 0 0 0 0\n [2,] 0 0 0 0 0 0 0 0 0 0 0 0\n [3,] 0 0 0 0 0 0 0 0 0 0 0 0\n [4,] 0 0 0 0 0 0 0 0 0 0 0 0\n [5,] 0 0 0 0 0 0 0 0 0 0 0 0\n [6,] 0 0 0 0 0 0 0 0 0 0 0 0\n [7,] 0 0 0 0 0 0 0 0 0 0 0 0\n [8,] 0 0 0 0 0 0 0 0 0 0 0 0\n [9,] 0 0 0 0 0 0 0 0 0 0 0 0\n[10,] 0 0 0 0 0 0 0 0 0 0 0 0\n[11,] 0 0 0 0 0 0 0 0 0 0 0 0\n[12,] 1 0 0 0 0 0 0 0 0 0 0 0\n[13,] 0 1 0 0 0 0 0 0 0 0 0 0\n[14,] 0 0 1 0 0 0 0 0 0 0 0 0\n[15,] 0 0 0 1 0 0 0 0 0 0 0 0\n[16,] 0 0 0 0 1 0 0 0 0 0 0 0\n[17,] 0 0 0 0 0 1 0 0 0 0 0 0\n[18,] 0 0 0 0 0 0 1 0 0 0 0 0\n[19,] 0 0 0 0 0 0 0 1 0 0 0 0\n[20,] 0 0 0 0 0 0 0 0 1 0 0 0\n[21,] 0 0 0 0 0 0 0 0 0 1 0 0\n[22,] 0 0 0 0 0 0 0 0 0 0 1 0\n[23,] 0 0 0 0 0 0 0 0 0 0 0 1\n[24,] 0 0 0 0 0 0 0 0 0 0 0 0\n[25,] 0 0 0 0 0 0 0 0 0 0 0 0\n[26,] 0 0 0 0 0 0 0 0 0 0 0 0\n[27,] 0 0 0 0 0 0 0 0 0 0 0 0\n[28,] 0 0 0 0 0 0 0 0 0 0 0 0\n[29,] 0 0 0 0 0 0 0 0 0 0 0 0\n[30,] 0 0 0 0 0 0 0 0 0 0 0 0\n[31,] 0 0 0 0 0 0 0 0 0 0 0 0\n[32,] 0 0 0 0 0 0 0 0 0 0 0 0\n[33,] 0 0 0 0 0 0 0 0 0 0 0 0\n[34,] 0 0 0 0 0 0 0 0 0 0 0 0\n[35,] 0 0 0 0 0 0 0 0 0 0 0 0\n[36,] 0 0 0 0 0 0 0 0 0 0 0 0\n[37,] 0 0 0 0 0 0 0 0 0 0 0 0\n[38,] 0 0 0 0 0 0 0 0 0 0 0 0\n[39,] 0 0 0 0 0 0 0 0 0 0 0 0\n[40,] 0 0 0 0 0 0 0 0 0 0 0 0\n[41,] 0 0 0 0 0 0 0 0 0 0 0 0\n[42,] 0 0 0 0 0 0 0 0 0 0 0 0\n[43,] 0 0 0 0 0 0 0 0 0 0 0 0\n[44,] 0 0 0 0 0 0 0 0 0 0 0 0\n[45,] 0 0 0 0 0 0 0 0 0 0 0 0\n[46,] 0 0 0 0 0 0 0 0 0 0 0 0\n[47,] 0 0 0 0 0 0 0 0 0 0 0 0\n[48,] 0 0 0 0 0 0 0 0 0 0 0 0\n[49,] 0 0 0 0 0 0 0 0 0 0 0 0\n[50,] 0 0 0 0 0 0 0 0 0 0 0 0\n[51,] 0 0 0 0 0 0 0 0 0 0 0 0\n[52,] 0 0 0 0 0 0 0 0 0 0 0 0\n[53,] 0 0 0 0 0 0 0 0 0 0 0 0\n[54,] 0 0 0 0 0 0 0 0 0 0 0 0\n V8-A V9-A V10-A V11-A V12-A V13-A V14-A V15-A V16-A B1-A B2-A B3-A B4-A\n [1,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n [2,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n [3,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n [4,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n [5,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n [6,] 0 
0 0 0 0 0 0 0 0 0 0 0 0\n [7,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n [8,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n [9,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n[10,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n[11,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n[12,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n[13,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n[14,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n[15,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n[16,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n[17,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n[18,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n[19,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n[20,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n[21,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n[22,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n[23,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n[24,] 1 0 0 0 0 0 0 0 0 0 0 0 0\n[25,] 0 1 0 0 0 0 0 0 0 0 0 0 0\n[26,] 0 0 1 0 0 0 0 0 0 0 0 0 0\n[27,] 0 0 0 1 0 0 0 0 0 0 0 0 0\n[28,] 0 0 0 0 1 0 0 0 0 0 0 0 0\n[29,] 0 0 0 0 0 1 0 0 0 0 0 0 0\n[30,] 0 0 0 0 0 0 1 0 0 0 0 0 0\n[31,] 0 0 0 0 0 0 0 1 0 0 0 0 0\n[32,] 0 0 0 0 0 0 0 0 1 0 0 0 0\n[33,] 0 0 0 0 0 0 0 0 0 1 0 0 0\n[34,] 0 0 0 0 0 0 0 0 0 0 1 0 0\n[35,] 0 0 0 0 0 0 0 0 0 0 0 1 0\n[36,] 0 0 0 0 0 0 0 0 0 0 0 0 1\n[37,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n[38,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n[39,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n[40,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n[41,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n[42,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n[43,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n[44,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n[45,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n[46,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n[47,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n[48,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n[49,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n[50,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n[51,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n[52,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n[53,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n[54,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n B5-A B6-A B7-A B8-A B9-A B10-A B11-A B12-A B13-A B14-A R1-A R2-A R3-A\n [1,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n [2,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n [3,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n [4,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n [5,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n [6,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n [7,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n [8,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n [9,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n[10,] 0 0 0 0 0 0 0 0 
0 0 0 0 0\n[11,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n[12,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n[13,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n[14,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n[15,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n[16,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n[17,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n[18,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n[19,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n[20,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n[21,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n[22,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n[23,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n[24,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n[25,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n[26,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n[27,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n[28,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n[29,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n[30,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n[31,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n[32,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n[33,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n[34,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n[35,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n[36,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n[37,] 1 0 0 0 0 0 0 0 0 0 0 0 0\n[38,] 0 1 0 0 0 0 0 0 0 0 0 0 0\n[39,] 0 0 1 0 0 0 0 0 0 0 0 0 0\n[40,] 0 0 0 1 0 0 0 0 0 0 0 0 0\n[41,] 0 0 0 0 1 0 0 0 0 0 0 0 0\n[42,] 0 0 0 0 0 1 0 0 0 0 0 0 0\n[43,] 0 0 0 0 0 0 1 0 0 0 0 0 0\n[44,] 0 0 0 0 0 0 0 1 0 0 0 0 0\n[45,] 0 0 0 0 0 0 0 0 1 0 0 0 0\n[46,] 0 0 0 0 0 0 0 0 0 1 0 0 0\n[47,] 0 0 0 0 0 0 0 0 0 0 1 0 0\n[48,] 0 0 0 0 0 0 0 0 0 0 0 1 0\n[49,] 0 0 0 0 0 0 0 0 0 0 0 0 1\n[50,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n[51,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n[52,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n[53,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n[54,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n R4-A R5-A R6-A R7-A R8-A\n [1,] 0 0 0 0 0\n [2,] 0 0 0 0 0\n [3,] 0 0 0 0 0\n [4,] 0 0 0 0 0\n [5,] 0 0 0 0 0\n [6,] 0 0 0 0 0\n [7,] 0 0 0 0 0\n [8,] 0 0 0 0 0\n [9,] 0 0 0 0 0\n[10,] 0 0 0 0 0\n[11,] 0 0 0 0 0\n[12,] 0 0 0 0 0\n[13,] 0 0 0 0 0\n[14,] 0 0 0 0 0\n[15,] 0 0 0 0 0\n[16,] 0 0 0 0 0\n[17,] 0 0 0 0 0\n[18,] 0 0 0 0 0\n[19,] 0 0 0 0 0\n[20,] 0 0 0 0 0\n[21,] 0 0 0 0 0\n[22,] 0 0 0 0 0\n[23,] 0 0 0 0 0\n[24,] 0 0 0 0 0\n[25,] 0 0 0 0 0\n[26,] 0 0 0 0 0\n[27,] 0 0 0 0 0\n[28,] 0 0 0 0 0\n[29,] 0 0 0 0 0\n[30,] 0 0 0 0 
0\n[31,] 0 0 0 0 0\n[32,] 0 0 0 0 0\n[33,] 0 0 0 0 0\n[34,] 0 0 0 0 0\n[35,] 0 0 0 0 0\n[36,] 0 0 0 0 0\n[37,] 0 0 0 0 0\n[38,] 0 0 0 0 0\n[39,] 0 0 0 0 0\n[40,] 0 0 0 0 0\n[41,] 0 0 0 0 0\n[42,] 0 0 0 0 0\n[43,] 0 0 0 0 0\n[44,] 0 0 0 0 0\n[45,] 0 0 0 0 0\n[46,] 0 0 0 0 0\n[47,] 0 0 0 0 0\n[48,] 0 0 0 0 0\n[49,] 0 0 0 0 0\n[50,] 1 0 0 0 0\n[51,] 0 1 0 0 0\n[52,] 0 0 1 0 0\n[53,] 0 0 0 1 0\n[54,] 0 0 0 0 1\n\n$`$TIMESTEP`\n[1] \"0.0001\"\n\n$`$TOT`\n[1] \"100\"\n\n$`$VOL`\n[1] \"30.31\"\n\n$`APPLY COMPENSATION`\n[1] \"FALSE\"\n\n$CHARSET\n[1] \"utf-8\"\n\n$CREATOR\n[1] \"SpectroFlo 3.3.0\"\n\n$FCSversion\n[1] \"3\"\n\n$FILENAME\n[1] \"data/CellCounts4L_AB_05_ND050_05.fcs\"\n\n$`FSC ASF`\n[1] \"1.21\"\n\n$GROUPNAME\n[1] \"ND050\"\n\n$GUID\n[1] \"CellCounts4L_AB_05-ND050-05.fcs\"" }, { - "objectID": "course/02_FilePaths/slides.html#set-up", - "href": "course/02_FilePaths/slides.html#set-up", - "title": "02 - File Paths", - "section": "Set Up", - "text": "Set Up\n\n\n\n\n\n\n\n\n.\n\n\nBefore we begin, let’s make sure you get the data needed for today transferred to your local computer, and then get the .fcs files copied over from there to your own working project folder. This is the process you will repeat each week throughout the course." 
+ "objectID": "course/03_InsideFCSFile/slides.html#laser-metadata", + "href": "course/03_InsideFCSFile/slides.html#laser-metadata", + "title": "03 - Inside an FCS File", + "section": "Laser Metadata", + "text": "Laser Metadata\n\n\n\n\n\n\n\n\n.\n\n\nNext up, there is a small stretch of keywords containing the values associated with the individual lasers as far as delays and area scaling factors for a particular day (also useful when plotted).\n\n\n\n\n\n\n\nDetectors <- DescriptionList[399:410]\nDetectors\n\n$LASER1ASF\n[1] \"1.09\"\n\n$LASER1DELAY\n[1] \"-19.525\"\n\n$LASER1NAME\n[1] \"Violet\"\n\n$LASER2ASF\n[1] \"1.14\"\n\n$LASER2DELAY\n[1] \"0\"\n\n$LASER2NAME\n[1] \"Blue\"\n\n$LASER3ASF\n[1] \"1.02\"\n\n$LASER3DELAY\n[1] \"20.15\"\n\n$LASER3NAME\n[1] \"Red\"\n\n$LASER4ASF\n[1] \"0.92\"\n\n$LASER4DELAY\n[1] \"40.725\"\n\n$LASER4NAME\n[1] \"UV\"" }, { - "objectID": "course/02_FilePaths/slides.html#working-directory", - "href": "course/02_FilePaths/slides.html#working-directory", - "title": "02 - File Paths", - "section": "Working Directory", - "text": "Working Directory\n\n\n\n\n\n\n\n\n.\n\n\nNow that we are back in our Week2 folder, let’s start by seeing our current location similarly to how our computer perceives it.\nWe will use getwd() function (ie. get working directory) to return the location of the folder we are currently inside of. 
For example, when getwd() is run within my Week2 project folder, I see the following location" + "objectID": "course/03_InsideFCSFile/slides.html#display", + "href": "course/03_InsideFCSFile/slides.html#display", + "title": "03 - Inside an FCS File", + "section": "Display", + "text": "Display\n\n\n\n\n\n\n\n\n.\n\n\nThen there is a stretch matching whether a particular detector needs to be displayed as linear (in the case of time and scatter) or as log (for individual detectors).\n\n\n\n\n\n\n\nDetectors <- DescriptionList[412:472]\nDetectors\n\n$P10DISPLAY\n[1] \"LOG\"\n\n$P11DISPLAY\n[1] \"LOG\"\n\n$P12DISPLAY\n[1] \"LOG\"\n\n$P13DISPLAY\n[1] \"LOG\"\n\n$P14DISPLAY\n[1] \"LOG\"\n\n$P15DISPLAY\n[1] \"LOG\"\n\n$P16DISPLAY\n[1] \"LOG\"\n\n$P17DISPLAY\n[1] \"LOG\"\n\n$P18DISPLAY\n[1] \"LIN\"\n\n$P19DISPLAY\n[1] \"LIN\"\n\n$P1DISPLAY\n[1] \"LOG\"\n\n$P20DISPLAY\n[1] \"LOG\"\n\n$P21DISPLAY\n[1] \"LOG\"\n\n$P22DISPLAY\n[1] \"LOG\"\n\n$P23DISPLAY\n[1] \"LOG\"\n\n$P24DISPLAY\n[1] \"LOG\"\n\n$P25DISPLAY\n[1] \"LOG\"\n\n$P26DISPLAY\n[1] \"LOG\"\n\n$P27DISPLAY\n[1] \"LOG\"\n\n$P28DISPLAY\n[1] \"LOG\"\n\n$P29DISPLAY\n[1] \"LOG\"\n\n$P2DISPLAY\n[1] \"LOG\"\n\n$P30DISPLAY\n[1] \"LOG\"\n\n$P31DISPLAY\n[1] \"LOG\"\n\n$P32DISPLAY\n[1] \"LOG\"\n\n$P33DISPLAY\n[1] \"LOG\"\n\n$P34DISPLAY\n[1] \"LOG\"\n\n$P35DISPLAY\n[1] \"LOG\"\n\n$P36DISPLAY\n[1] \"LIN\"\n\n$P37DISPLAY\n[1] \"LIN\"\n\n$P38DISPLAY\n[1] \"LIN\"\n\n$P39DISPLAY\n[1] \"LIN\"\n\n$P3DISPLAY\n[1] \"LOG\"\n\n$P40DISPLAY\n[1] \"LOG\"\n\n$P41DISPLAY\n[1] \"LOG\"\n\n$P42DISPLAY\n[1] \"LOG\"\n\n$P43DISPLAY\n[1] \"LOG\"\n\n$P44DISPLAY\n[1] \"LOG\"\n\n$P45DISPLAY\n[1] \"LOG\"\n\n$P46DISPLAY\n[1] \"LOG\"\n\n$P47DISPLAY\n[1] \"LOG\"\n\n$P48DISPLAY\n[1] \"LOG\"\n\n$P49DISPLAY\n[1] \"LOG\"\n\n$P4DISPLAY\n[1] \"LOG\"\n\n$P50DISPLAY\n[1] \"LOG\"\n\n$P51DISPLAY\n[1] \"LOG\"\n\n$P52DISPLAY\n[1] \"LOG\"\n\n$P53DISPLAY\n[1] \"LOG\"\n\n$P54DISPLAY\n[1] \"LOG\"\n\n$P55DISPLAY\n[1] \"LOG\"\n\n$P56DISPLAY\n[1] \"LOG\"\n\n$P57DISPLAY\n[1] 
\"LOG\"\n\n$P58DISPLAY\n[1] \"LOG\"\n\n$P59DISPLAY\n[1] \"LOG\"\n\n$P5DISPLAY\n[1] \"LOG\"\n\n$P60DISPLAY\n[1] \"LOG\"\n\n$P61DISPLAY\n[1] \"LOG\"\n\n$P6DISPLAY\n[1] \"LOG\"\n\n$P7DISPLAY\n[1] \"LOG\"\n\n$P8DISPLAY\n[1] \"LOG\"\n\n$P9DISPLAY\n[1] \"LOG\"" }, { - "objectID": "course/02_FilePaths/slides.html#directories", - "href": "course/02_FilePaths/slides.html#directories", - "title": "02 - File Paths", - "section": "Directories", - "text": "Directories\n\n\n\n\n\n\n\n\n.\n\n\nWithin this working directory, we have a variety of project folders and files related to the course. We can see the folders that are present using the list.dirs() function.\n\n\n\n\n\n\n\n\n\n\nlist.dirs(path=\".\", full.names=FALSE, recursive=FALSE)" + "objectID": "course/03_InsideFCSFile/slides.html#flowcore-parameters", + "href": "course/03_InsideFCSFile/slides.html#flowcore-parameters", + "title": "03 - Inside an FCS File", + "section": "flowCore Parameters", + "text": "flowCore Parameters\n\n\n\n\n\n\n\n\n.\n\n\nDepending on the arguments selected during read.FCS(), we might also encounter additional keywords that are added in by flowCore. For example, we do not see these keywords when “transformation” is set to FALSE.\n\n\n\n\n\n\n\nflowCoreCheck <- read.FCS(filename=firstfile,\n transformation = FALSE, truncate_max_range = FALSE)\n\nflowCoreCheck\n\nflowFrame object 'CellCounts4L_AB_05-ND050-05.fcs'\nwith 100 cells and 61 observables:\n name desc range minRange maxRange\n$P1 Time NA 272140 0 272139\n$P2 UV1-A NA 4194304 -111 4194303\n$P3 UV2-A NA 4194304 -111 4194303\n$P4 UV3-A NA 4194304 -111 4194303\n$P5 UV4-A NA 4194304 -111 4194303\n... ... ... ... ... 
...\n$P57 R4-A NA 4194304 -111 4194303\n$P58 R5-A NA 4194304 -111 4194303\n$P59 R6-A NA 4194304 -111 4194303\n$P60 R7-A NA 4194304 -111 4194303\n$P61 R8-A NA 4194304 -111 4194303\n476 keywords are stored in the 'description' slot" }, { - "objectID": "course/02_FilePaths/slides.html#variables", - "href": "course/02_FilePaths/slides.html#variables", - "title": "02 - File Paths", - "section": "Variables", - "text": "Variables\n\n\n\n\n\n\n\n\n.\n\n\nBefore exploring file paths, we need to have some basic R code knowledge that we can use to work with them. Within R, we have the ability to assign particular values (be they character strings, numbers or logicals) to objects (ie. variables) that can be used when called upon later.\nFor example:\n\n\n\n\n\n\n\nWhatDayDidIWriteThis <- \"Saturday\"\n\n\n\n\n\n\n\n\n\n\n.\n\n\nIn this case, the variable name is what the assignment arrow (“<-”) is pointing at. In this case, WhatDayDidIWriteThis" + "objectID": "course/02_FilePaths/Downsampler.html", + "href": "course/02_FilePaths/Downsampler.html", + "title": "Downsampling", + "section": "", + "text": "Due to trying to keep the overall file size down, I am downsampling to 100 events. For anyone interested in how I did this, this Quarto Markdown Document contains the code needed to repeat the process." }, { - "objectID": "course/02_FilePaths/slides.html#indexing", - "href": "course/02_FilePaths/slides.html#indexing", - "title": "02 - File Paths", - "section": "Indexing", - "text": "Indexing\n\n\n\n\n\n\n\n\n.\n\n\nNot all variables contain single objects.\nFor example, we can modify Fluorophores and add additional entries:\n\n\n\n\n\n\n\nFluorophores <- c(\"BV421\", \"FITC\", \"PE\", \"APC\")\nstr(Fluorophores)\n\n chr [1:4] \"BV421\" \"FITC\" \"PE\" \"APC\"\n\n\n\n\n\n\n\n\n\n\n\n.\n\n\nThe c stands for concatenate. 
It concatenates the objects into a larger object, known as a vector.\nIn this case, you notice in addition to the specification the values are characters, we get a [1:4], denoting four objects are present." + "objectID": "course/02_FilePaths/Downsampler.html#specify-file.path-and-identify-files", + "href": "course/02_FilePaths/Downsampler.html#specify-file.path-and-identify-files", + "title": "Downsampling", + "section": "Specify file.path and identify files", + "text": "Specify file.path and identify files\nDue to the counts being conducted on two separate instruments, the number of columns differs, so they will need to be loaded into separate GatingSet objects.\n\nStorageLocation <- file.path(\"course\", \"02_FilePaths\", \"data\")\nExisting <- list.files(StorageLocation, pattern=\".fcs\", full.names=TRUE)\nList1 <- Existing[1:2] # 3L Aurora\nList2 <- Existing[3:8] # 4L Aurora" }, { - "objectID": "course/02_FilePaths/slides.html#listing-files", - "href": "course/02_FilePaths/slides.html#listing-files", - "title": "02 - File Paths", - "section": "Listing Files", - "text": "Listing Files\n\n\n\n\n\n\n\n\n.\n\n\nAfter this detour into variables and indexing, let’s return our focus to how to use these in context of file paths. 
Working from within our Week2 project folder, let’s see what directories (folders) are present\n\n\n\n\n\n\n\nlist.dirs(path=\".\", full.names=FALSE, recursive=FALSE)" + "objectID": "course/02_FilePaths/Downsampler.html#load-.fcs-files-into-a-gatingset", + "href": "course/02_FilePaths/Downsampler.html#load-.fcs-files-into-a-gatingset", + "title": "Downsampling", + "section": "Load .fcs files into a GatingSet", + "text": "Load .fcs files into a GatingSet\nLoad in files to their respective GatingSet objects\n\ncs1 <- load_cytoset_from_fcs(List1, truncate_max_range = FALSE, transformation = FALSE)\ngs1 <- GatingSet(cs1)\n\ncs2 <- load_cytoset_from_fcs(List2, truncate_max_range = FALSE, transformation = FALSE)\ngs2 <- GatingSet(cs2)" }, { - "objectID": "course/02_FilePaths/slides.html#creating-directories", - "href": "course/02_FilePaths/slides.html#creating-directories", - "title": "02 - File Paths", - "section": "Creating directories", - "text": "Creating directories\n\n\n\n\n\n\n\n\n.\n\n\nAlternatively, we can also create a folder via R using the dir.create() function. Since we want it within data, we would modify the path accordingly\n\n\n\n\n\n\n\nNewFolderLocation <- file.path(\"data\", \"target2\")\n\ndir.create(path=NewFolderLocation)" + "objectID": "course/01_InstallingRPackages/index.html", + "href": "course/01_InstallingRPackages/index.html", + "title": "01 - Installing R Packages", + "section": "", + "text": "For the YouTube livestream recording, see here\nFor screen-shot slides, click here\nWelcome to the first week of Cytometry in R! 
This week we will be diving into how R packages work, and how to go about installing them.\nBefore getting started, please make sure you have completed the creating a GitHub and Workstation Setup walk-throughs, since we will begin where they left off once the required software was successfully installed.", + "crumbs": [ + "About", + "Intro to R" + ] }, { - "objectID": "course/02_FilePaths/slides.html#file-paths", - "href": "course/02_FilePaths/slides.html#file-paths", - "title": "02 - File Paths", - "section": "File Paths", - "text": "File Paths\n\n\n\n\n\n\n\n\n.\n\n\nOne way we can do this is through a file.path argument. We could potentially provide this by adding either a “/” or a “\" into the path argument, depending on your computers operating system.\n\n\n\n\n\n\n\nlist.files(path=\"data/target\", full.names=FALSE, recursive=FALSE)" + "objectID": "course/01_InstallingRPackages/index.html#set-up", + "href": "course/01_InstallingRPackages/index.html#set-up", + "title": "01 - Installing R Packages", + "section": "Set Up", + "text": "Set Up\nAlright, with the background out of the way, let’s get started!\n\n\n\n\n\n\nImportant\n\n\n\nPlease make sure to sync your forked version of the CytometryInR repository, and pull any changes to your local computer’s CytometryInR project folder so that you have the most recent version of the code and data available.\n\n\n\n\n\n\n\n\nWarning\n\n\n\nPlease remember to always copy over the new material from your local CytometryInR folder to a separate Project Folder that you created and named (ex. “Week_01” or “MyLearningFolder”, etc.). This will ensure any edits you make to the files do not affect your ability to bring in next week’s materials to the CytometryInR folder.\n\n\n\n\nAfter pulling the new data and code locally, open CytometryInR, open the course folder, and open the 01_InstallingRPackages folder. From here, copy the index.qmd file to your own working Project Folder (ex. 
Week_01) where you can work on it without causing any conflicts. Then return to Positron and open up your working project folder (ex. Week_01).\n\n\n\nNext up, within Positron, let’s make sure to select R as the coding language being used for this session.\n\n\n\nNow that R is running within Positron, the console (lower portion of the screen) is now able to run (ie. execute) any R code that is sent to it.", + "crumbs": [ + "About", + "Intro to R" + ] }, { - "objectID": "course/02_FilePaths/slides.html#selecting-for-patterns", - "href": "course/02_FilePaths/slides.html#selecting-for-patterns", - "title": "02 - File Paths", - "section": "Selecting for Patterns", - "text": "Selecting for Patterns\n\n\n\n\n\n\n\n\n.\n\n\nIf we currently listed the files within data, we get a return that looks like this:\n\n\n\n\n\n\n\nlist.files(\"data\", full.names=FALSE, recursive=FALSE)" + "objectID": "course/01_InstallingRPackages/index.html#checking-for-loaded-packages", + "href": "course/01_InstallingRPackages/index.html#checking-for-loaded-packages", + "title": "01 - Installing R Packages", + "section": "Checking for Loaded Packages", + "text": "Checking for Loaded Packages\nFor the contents (ie. the functions) of an R package to be available for your computer to use, they must first be activated (ie. loaded) into your local environment. We will first learn how to check what R packages are currently active.\n\n\n\nAccessing Help Documentation\nWithin your own index.qmd (or a new .qmd file that you created), type out/copy-paste the following sessionInfo() function:\n\nsessionInfo()\n\n\n\nIf you hover over the line of code within Positron, you will glimpse the help file for the particular function you are hovering over.\n\n\n\nIn this case, we can see the help documentation corresponding to sessionInfo(). Beyond hovering over the function, this can also be accessed by adding a ? 
directly in front of the function, and then running that line of code.\n\n?sessionInfo()\n\n\n\n\nWhen executed, the function’s help file documentation will open up within the Help tab in the secondary side bar on the right-side of the screen. Glancing at the top of the page we can see the name of the package that contains the sessionInfo() function ({utils}). Scrolling down the help page past all the documentation, we can see a link to the index page.\n\n\n\nAfter clicking, the Help tab switches from viewing the documentation for the sessionInfo() function, to showing all the functions within the utils package. Most R packages contain help documentation, so this process can be adapted to find out additional information about what a function does, and what arguments are needed to produce customized outputs.\n\n\n\n\n\nsessionInfo()\nWithin your .qmd file, let’s go ahead and run the following code-block:\n\nsessionInfo()\n\nR version 4.5.2 (2025-10-31)\nPlatform: x86_64-pc-linux-gnu\nRunning under: Debian GNU/Linux 13 (trixie)\n\nMatrix products: default\nBLAS: /usr/lib/x86_64-linux-gnu/blas/libblas.so.3.12.1 \nLAPACK: /usr/lib/x86_64-linux-gnu/lapack/liblapack.so.3.12.1; LAPACK version 3.12.0\n\nlocale:\n [1] LC_CTYPE=en_US.UTF-8 LC_NUMERIC=C \n [3] LC_TIME=en_US.UTF-8 LC_COLLATE=en_US.UTF-8 \n [5] LC_MONETARY=en_US.UTF-8 LC_MESSAGES=en_US.UTF-8 \n [7] LC_PAPER=en_US.UTF-8 LC_NAME=C \n [9] LC_ADDRESS=C LC_TELEPHONE=C \n[11] LC_MEASUREMENT=en_US.UTF-8 LC_IDENTIFICATION=C \n\ntime zone: America/New_York\ntzcode source: system (glibc)\n\nattached base packages:\n[1] stats graphics grDevices utils datasets methods base \n\nother attached packages:\n[1] BiocStyle_2.38.0\n\nloaded via a namespace (and not attached):\n [1] htmlwidgets_1.6.4 BiocManager_1.30.27 compiler_4.5.2 \n [4] fastmap_1.2.0 cli_3.6.5 tools_4.5.2 \n [7] htmltools_0.5.9 otel_0.2.0 yaml_2.3.12 \n[10] rmarkdown_2.30 knitr_1.51 jsonlite_2.0.0 \n[13] xfun_0.56 digest_0.6.39 rlang_1.1.7 \n[16] 
evaluate_1.0.5 \n\n\n\n\nThe outputs that get returned by sessionInfo() will vary a bit depending on your computer’s operating system, and the version of R you have installed.\nFor today, let’s focus on the last two elements of the output:\n\n\n\nThe R software itself is made up of several base R packages that are loaded automatically. These contain everything you need to read, write and run R code on your computer. You can see these packages are the stats, graphics, grDevices, utils, datasets, methods and base packages.\nAs we install additional R packages and load them using the library() function throughout this session, sporadically re-run sessionInfo() to see how this list of R packages changes.", + "crumbs": [ + "About", + "Intro to R" + ] }, { - "objectID": "course/02_FilePaths/slides.html#conditionals", - "href": "course/02_FilePaths/slides.html#conditionals", - "title": "02 - File Paths", - "section": "Conditionals", - "text": "Conditionals\n\n\n\n\n\n\n\n\n.\n\n\nOne useful thing is that within R, we can set conditions on whether something is carried out. The most typical conditional you will encounter are the “If” statements. They typically take a form that resembles the following.\n\n\n\n\n\n\n\nNeedCoffee <- TRUE\n\nif (NeedCoffee){\n print(\"Take a break\")\n}" + "objectID": "course/01_InstallingRPackages/index.html#installing-from-cran", + "href": "course/01_InstallingRPackages/index.html#installing-from-cran", + "title": "01 - Installing R Packages", + "section": "Installing from CRAN", + "text": "Installing from CRAN\nWe will start by installing R packages that are part of the CRAN repository. This is the main R package repository, being part of the broader R software project. In the context of this course, R packages that work primarily with general data structures (rows, columns, matrices, etc.) or visualizations will predominantly be found within this repository.\nThese include the tidyverse packages. 
These packages have collectively made R easier to use by smoothing out some of the rough edges of base R, which is why R has seen major growth within the last decade. We will be installing several of these R packages today.\n\n\n\ndplyr\nOur first R package we will install during this session is the dplyr package. Since it is hosted on the CRAN repository, to install it, we will need to use the CRAN-specific installation function install.packages().\n\n?install.packages()\n\n\n\n\nFor the install.packages() function, we place within the () the name of the R package from CRAN that we wish to install.\n\ninstall.packages(\"dplyr\")\n\n\n\n\n\n\n\n\n\nTip\n\n\n\nA common struggle point for beginners is that install.packages() requires ” ” to be placed around the package name. Forgetting them results in the error that we see below.\n\n\n\ninstall.packages(dplyr)\n\nError:\n! object 'dplyr' not found\n\n\n\n\n\ninstall.packages(\"dplyr\")\n\nGo ahead and click on “Run Cell” next to your code-block to install the dplyr R package.\n\n\nWhen a package starts to install, you will see your console start to display text resembling that seen in the image below (varying a bit depending on your computer’s operating system).\n\n\n\nWithin this opening scrawl, you will see the location on your computer the R package is being installed to, as well as the file location for the R package being retrieved on CRAN.\nIf the package is successfully located, your computer will proceed to first download, then unpack (ie. unzip) it, before attempting to install to the target folder.\n\n\n\nThe final steps of the installation process involve various checks to verify everything was copied successfully, the help documentation assembled, and that the R package is capable of being loaded. 
If this is the case, you will see the “Done” line appear, as well as a mention of where the original downloaded source package files have been stashed (usually a temp folder).\n\n\n\n\nAttaching packages via library()\nIf an R package has been installed successfully, we are now able to load it (ie. make its functions available) to our local environment using the library() function.\n\n?library()\n\n\n\nUnlike install.packages(), where we needed “” around the package name, the library() function does not require “” around the package name. Let’s go ahead and load in dplyr, making its respective functions available to our local environment.\n\nlibrary(dplyr)\n\n\nAttaching package: 'dplyr'\n\n\nThe following objects are masked from 'package:stats':\n\n filter, lag\n\n\nThe following objects are masked from 'package:base':\n\n intersect, setdiff, setequal, union\n\n\n\n\nFrom the output, we can see that dplyr has been attached. There are also a couple functions within dplyr that have identical names to functions within the stats and base packages. 
To avoid confusion, these 6 functions are masked, which is why we get the additional messages.\nWith dplyr now loaded via the library() call, let’s check sessionInfo() to see what has changed.\n\nsessionInfo()\n\nR version 4.5.2 (2025-10-31)\nPlatform: x86_64-pc-linux-gnu\nRunning under: Debian GNU/Linux 13 (trixie)\n\nMatrix products: default\nBLAS: /usr/lib/x86_64-linux-gnu/blas/libblas.so.3.12.1 \nLAPACK: /usr/lib/x86_64-linux-gnu/lapack/liblapack.so.3.12.1; LAPACK version 3.12.0\n\nlocale:\n [1] LC_CTYPE=en_US.UTF-8 LC_NUMERIC=C \n [3] LC_TIME=en_US.UTF-8 LC_COLLATE=en_US.UTF-8 \n [5] LC_MONETARY=en_US.UTF-8 LC_MESSAGES=en_US.UTF-8 \n [7] LC_PAPER=en_US.UTF-8 LC_NAME=C \n [9] LC_ADDRESS=C LC_TELEPHONE=C \n[11] LC_MEASUREMENT=en_US.UTF-8 LC_IDENTIFICATION=C \n\ntime zone: America/New_York\ntzcode source: system (glibc)\n\nattached base packages:\n[1] stats graphics grDevices utils datasets methods base \n\nother attached packages:\n[1] dplyr_1.2.0 BiocStyle_2.38.0\n\nloaded via a namespace (and not attached):\n [1] digest_0.6.39 R6_2.6.1 fastmap_1.2.0 \n [4] tidyselect_1.2.1 xfun_0.56 magrittr_2.0.4 \n [7] glue_1.8.0 tibble_3.3.1 knitr_1.51 \n[10] pkgconfig_2.0.3 htmltools_0.5.9 generics_0.1.4 \n[13] rmarkdown_2.30 lifecycle_1.0.5 cli_3.6.5 \n[16] vctrs_0.7.1 compiler_4.5.2 tools_4.5.2 \n[19] evaluate_1.0.5 pillar_1.11.1 yaml_2.3.12 \n[22] otel_0.2.0 BiocManager_1.30.27 rlang_1.1.7 \n[25] jsonlite_2.0.0 htmlwidgets_1.6.4 \n\n\n\n\nSimilar to what was seen for the base R packages, dplyr is now attached. This means we should theoretically now have access to all its functions. 
We can verify this by seeing if we can look up the dplyr package’s select() function and its respective help page.\n\n?select\n\n\n\n\nSince its parent package has been attached to our local environment (via the library() call), we can see dplyr functions appear as suggestions as we begin to type.\nBy contrast, if we check for the ggplot() function from the ggplot2 package (which we haven’t yet installed or attached via library()), no suggestions will pop up.\n\n?ggplot\n\nNo documentation for 'ggplot' in specified packages and libraries:\nyou could try '??ggplot'\n\n\n\n\nBeyond individual functions, some R packages also have help landing pages that can be similarly accessed by adding a ? in front of the package name:\n\n\n\n\n\nUnattaching\nSo far, we have installed an R package, and then attached it (via library()). How would we reverse these steps?\nWell, to unload it from the local environment, there are a couple options. You could of course simply shut down Positron. The local environment only exists in the context of an open session, and closing the program ends that session. 
All previously loaded R packages will be unattached, which is why when you start a new session you will need to load in all packages you plan on using via library().\nAlternatively, although less used, you could detach() it via your console:\n\ndetach(\"package:dplyr\", unload=TRUE)\n\n\nsessionInfo()\n\nR version 4.5.2 (2025-10-31)\nPlatform: x86_64-pc-linux-gnu\nRunning under: Debian GNU/Linux 13 (trixie)\n\nMatrix products: default\nBLAS: /usr/lib/x86_64-linux-gnu/blas/libblas.so.3.12.1 \nLAPACK: /usr/lib/x86_64-linux-gnu/lapack/liblapack.so.3.12.1; LAPACK version 3.12.0\n\nlocale:\n [1] LC_CTYPE=en_US.UTF-8 LC_NUMERIC=C \n [3] LC_TIME=en_US.UTF-8 LC_COLLATE=en_US.UTF-8 \n [5] LC_MONETARY=en_US.UTF-8 LC_MESSAGES=en_US.UTF-8 \n [7] LC_PAPER=en_US.UTF-8 LC_NAME=C \n [9] LC_ADDRESS=C LC_TELEPHONE=C \n[11] LC_MEASUREMENT=en_US.UTF-8 LC_IDENTIFICATION=C \n\ntime zone: America/New_York\ntzcode source: system (glibc)\n\nattached base packages:\n[1] stats graphics grDevices utils datasets methods base \n\nother attached packages:\n[1] BiocStyle_2.38.0\n\nloaded via a namespace (and not attached):\n [1] digest_0.6.39 R6_2.6.1 fastmap_1.2.0 \n [4] tidyselect_1.2.1 xfun_0.56 magrittr_2.0.4 \n [7] glue_1.8.0 tibble_3.3.1 knitr_1.51 \n[10] pkgconfig_2.0.3 htmltools_0.5.9 generics_0.1.4 \n[13] rmarkdown_2.30 lifecycle_1.0.5 cli_3.6.5 \n[16] vctrs_0.7.1 compiler_4.5.2 tools_4.5.2 \n[19] evaluate_1.0.5 pillar_1.11.1 yaml_2.3.12 \n[22] otel_0.2.0 BiocManager_1.30.27 rlang_1.1.7 \n[25] jsonlite_2.0.0 htmlwidgets_1.6.4 \n\n\n\n\nLooking at the sessionInfo() output, dplyr is no longer attached to the local environment. Consequently, if we try to once again look for the documentation, no information will be retrieved.\n\n?dplyr\n\nNo documentation for 'dplyr' in specified packages and libraries:\nyou could try '??dplyr'\n\n\n\n\nThere is a workaround however, if we want to access functions from an unloaded R package. 
We can specify the R package’s name, followed by two colons (::), and then the function name. The :: tells your computer that the package is present on the computer, even if it is not attached.\n\n?dplyr::select()\n\nThis particular use case can be useful if we want to run a particular function, but not load in all of a package’s functions (which may have names identical to functions in other R packages we are using and cause some conflicts).\n\n\n\n\nRemoving Packages\nJust as we can install an R package, we can also uninstall an R package (although doing so is rare, most often when encountering a package dependency conflict). To do so, we would use the remove.packages() function.\n\n?remove.packages()\n\n\nremove.packages(\"dplyr\")\n\nThis results in the package being removed entirely from our computer. We would then need to reinstall it if needed in the future.\n\n\n\n\nCommon Issues\nAs previously mentioned, CRAN is the main repository for R packages. But what if we tried to install an R package that is only found on Bioconductor or via GitHub using the install.packages() function?\nTo see what occurs, let’s try installing the PeacoQC package (which is found on Bioconductor).\n\ninstall.packages(\"PeacoQC\")\n\nInstalling package into '/home/david/R/x86_64-pc-linux-gnu-library/4.5'\n(as 'lib' is unspecified)\n\n\nWarning: package 'PeacoQC' is not available for this version of R\n\nA version of this package for your version of R might be available elsewhere,\nsee the ideas at\nhttps://cran.r-project.org/doc/manuals/r-patched/R-admin.html#Installing-packages\n\n\n\n\nAs you can see, the initial warning message suggests that PeacoQC is not available for your version of R. 
When I first started trying to learn R on my own during COVID, this particular message was the bane of my existence and I couldn’t figure out what was going on.\nThis is just a default warning message: it appears both when a package genuinely has a version mismatch with your R installation and when you try to install a package that is not found on CRAN.",
    "crumbs": [
      "About",
      "Intro to R"
    ]
  },
  {
    "objectID": "course/01_InstallingRPackages/index.html#installing-from-bioconductor",
    "href": "course/01_InstallingRPackages/index.html#installing-from-bioconductor",
    "title": "01 - Installing R Packages",
    "section": "Installing from Bioconductor",
    "text": "Installing from Bioconductor\nBioconductor is the second R package repository we will be working with throughout the course. While it hosts far fewer packages than CRAN, the packages it does contain are primarily used by the biomedical sciences. Following this link you can find its current flow and mass cytometry R packages.\nBioconductor R packages differ from CRAN R packages in a couple of ways. Bioconductor has different standards for acceptance than CRAN. Its packages usually use interoperable object-types, and their developers put more effort into documentation and continuous testing to ensure that the R package remains functional across operating systems.\n\n\nTo install an R package that is located on Bioconductor, we first need to install the BiocManager package from CRAN. 
This package will allow us to install Bioconductor packages from their respective repository.\n\ninstall.packages(\"BiocManager\")\n\n\n\nOnce BiocManager is installed, we can attach it to our local environment using the library() function.\n\nlibrary(BiocManager)\n\n\n\nWhen loaded, you will see an output showing the current Bioconductor and R versions.\nWe can then use BiocManager’s install() function to go back and install PeacoQC.\n\n\n\n\n\n\nTip\n\n\n\nAs always, don’t forget the “” when running an install() command.\n\n\n\n?install()\n\n\ninstall(\"PeacoQC\")\n\n\n\nWe see an opening sequence of installation steps similar to what we saw when installing the dplyr package from CRAN. However, in this case, several package dependencies (rjson, GlobalOptions, etc.) are present. Consequently, you can see these packages are also being downloaded from their respective repositories (either CRAN or Bioconductor), then unzipped and assembled before PeacoQC undergoes installation.\n\n\n\n\n\n\nNote\n\n\n\nBehind the scenes, which package dependencies need to be installed is specified through an R package’s DESCRIPTION and NAMESPACE files. If a package name is removed from these files, it will not be installed during the installation process.\n\n\n\n\n\nWithin the scrawl of installation outputs, we can see individual dependencies undergoing installation similar to what we saw with dplyr, with a “Done (packagename)” being printed upon successful installation.\n\n\n\nThis process continues for each dependency being installed.\n\n\n\nAnd finally, once all the dependencies are installed, PeacoQC starts to install.\n\n\n\nOccasionally, during installation, you will see a pop-up window like this one in the console. This lets you know that some of the package dependencies have newer versions available to download. We are prompted to select between updating all, some or none. 
You will need to specify via the console how you want to proceed by typing and entering one of the suggested options [a/s/n].\n\n\n\nAlternatively, you may encounter a pop-up that resembles this one. Unlike the a/s/n output, we would need to provide a number for our intended choice. In this case, I went ahead and skipped all updates by typing 3 into the console, then hitting enter.\n\n\n\nGenerally, it’s okay to update if you have the time. Updates usually consist of minor improvements or bug fixes that shouldn’t cause major issues. If you are short on time, you can skip the updates by entering the value (n) for the none option.\n\n\n\nWith PeacoQC installed, we can load it via a library() call.\n\n\n\n\n\n\nTip\n\n\n\nRemember, library() doesn’t require ” ”\n\n\n\nlibrary(PeacoQC)\n\n\n\nAnd we can check to see if it has been attached to the local environment.\n\nsessionInfo()\n\nR version 4.5.2 (2025-10-31)\nPlatform: x86_64-pc-linux-gnu\nRunning under: Debian GNU/Linux 13 (trixie)\n\nMatrix products: default\nBLAS: /usr/lib/x86_64-linux-gnu/blas/libblas.so.3.12.1 \nLAPACK: /usr/lib/x86_64-linux-gnu/lapack/liblapack.so.3.12.1; LAPACK version 3.12.0\n\nlocale:\n [1] LC_CTYPE=en_US.UTF-8 LC_NUMERIC=C \n [3] LC_TIME=en_US.UTF-8 LC_COLLATE=en_US.UTF-8 \n [5] LC_MONETARY=en_US.UTF-8 LC_MESSAGES=en_US.UTF-8 \n [7] LC_PAPER=en_US.UTF-8 LC_NAME=C \n [9] LC_ADDRESS=C LC_TELEPHONE=C \n[11] LC_MEASUREMENT=en_US.UTF-8 LC_IDENTIFICATION=C \n\ntime zone: America/New_York\ntzcode source: system (glibc)\n\nattached base packages:\n[1] stats graphics grDevices utils datasets methods base \n\nother attached packages:\n[1] PeacoQC_1.20.0 BiocManager_1.30.27 BiocStyle_2.38.0 \n\nloaded via a namespace (and not attached):\n [1] generics_0.1.4 shape_1.4.6.1 digest_0.6.39 \n [4] magrittr_2.0.4 evaluate_1.0.5 grid_4.5.2 \n [7] RColorBrewer_1.1-3 iterators_1.0.14 circlize_0.4.17 \n[10] fastmap_1.2.0 foreach_1.5.2 doParallel_1.0.17 \n[13] jsonlite_2.0.0 
graph_1.88.1 GlobalOptions_0.1.3 \n[16] ComplexHeatmap_2.26.0 flowWorkspace_4.22.1 scales_1.4.0 \n[19] XML_3.99-0.20 Rgraphviz_2.54.0 codetools_0.2-20 \n[22] cli_3.6.5 RProtoBufLib_2.22.0 rlang_1.1.7 \n[25] crayon_1.5.3 Biobase_2.70.0 yaml_2.3.12 \n[28] otel_0.2.0 cytolib_2.22.0 ncdfFlow_2.56.0 \n[31] tools_4.5.2 parallel_4.5.2 dplyr_1.2.0 \n[34] colorspace_2.1-2 ggplot2_4.0.2 GetoptLong_1.1.0 \n[37] BiocGenerics_0.56.0 vctrs_0.7.1 R6_2.6.1 \n[40] png_0.1-8 matrixStats_1.5.0 stats4_4.5.2 \n[43] lifecycle_1.0.5 flowCore_2.22.1 S4Vectors_0.48.0 \n[46] htmlwidgets_1.6.4 IRanges_2.44.0 clue_0.3-66 \n[49] cluster_2.1.8.1 pkgconfig_2.0.3 pillar_1.11.1 \n[52] gtable_0.3.6 data.table_1.18.2.1 glue_1.8.0 \n[55] xfun_0.56 tibble_3.3.1 tidyselect_1.2.1 \n[58] knitr_1.51 farver_2.1.2 rjson_0.2.23 \n[61] htmltools_0.5.9 rmarkdown_2.30 compiler_4.5.2 \n[64] S7_0.2.1 \n\n\n\n\nAs you may have noticed, the section of loaded via namespace (but not attached) packages has grown larger. These packages are dependencies for the attached packages (dplyr, BiocManager and PeacoQC). Since the functions within these dependencies are only used selectively by the attached packages, they do not need to be active within the local environment.\n\n\n\nTo see what packages are installed (but not yet loaded), we can use the installed.packages() function to return a list of R packages for your computer.\n\ninstalled.packages()", + "crumbs": [ + "About", + "Intro to R" + ] }, { - "objectID": "course/02_FilePaths/slides.html#copying-files", - "href": "course/02_FilePaths/slides.html#copying-files", - "title": "02 - File Paths", - "section": "Copying Files", - "text": "Copying Files\n\n\n\n\n\n\n\n\n.\n\n\nTo copy files to another folder location, we use the file.copy() function. It has two arguments that we will be working with, from being the .fcs files, and to being the folder location we wish to transfer them to. 
If we tried using them as we currently have them:\n\n\n\n\n\n\n\n# Variable Infants containing 4 .fcs file names\n\nfile.copy(from=Infants, to=FolderTarget3)" + "objectID": "course/01_InstallingRPackages/index.html#install-from-github", + "href": "course/01_InstallingRPackages/index.html#install-from-github", + "title": "01 - Installing R Packages", + "section": "Install from GitHub", + "text": "Install from GitHub\nIn addition to the CRAN and Bioconductor repositories, individual R packages can also be found on GitHub hosted on their respective developers GitHub accounts. Newer packages that are still being worked on (often in the process of submission to CRAN or Bioconductor) can be found here, as well as those where the author decided not to bother with a review process, and just made the packages immediately available, warts and all.\n\n\nWhile many gems of R packages can be found on GitHub, there are also a bunch of R packages that due to deprecation since when they were published and released have stopped working. This is often the case for R packages that are not maintained, which is why it’s useful to check the commits and issues pages to see when the last contribution occurred. We will take a closer look at how to do so later on.\n\n\nTo install packages from GitHub, you will need the remotes package, which can be found on CRAN.\n\n\n\n\n\n\nSpot Check #1\n\n\n\nTo install a package from CRAN, what function would you use? 
Click on the code-fold arrow below to reveal the answer.\n\n\n\n\nCode\ninstall.packages(\"remotes\")\n\n\n\n\nWith the remotes package now installed, we can attach it to our local environment.\n\n\n\n\n\n\nSpot Check #2\n\n\n\nWhat function would be used to do so?\n\n\n\n\nCode\nlibrary(remotes)\n\n\n\n\nAnd finally, we can use the install_github() function to download R packages from an individual developer’s GitHub account.\n\n\n\n\n\n\nSpot Check #3\n\n\n\nHow would you look up the help documentation for this function?\n\n\n\n\nCode\n# Either by hovering over it within Positron or via\n\n?install_github()\n\n\n\n\nWe will be installing a small R package, flowSpectrum, for this example. It’s one of the packages created by Christopher Hall, whose small series of Flow Cytometry Data Analysis in R tutorials was immensely useful when I was first getting started learning R. flowSpectrum can be used to generate spectrum-style plots for spectral flow cytometry data.\n\n\n\nTo install an R package from GitHub, we first need the GitHub username (so hally166 in this case), which is followed by a “/”, and then the name of the package repository (so flowSpectrum in this case). Our code should consequently be:\n\ninstall_github(\"hally166/flowSpectrum\")\n\n\n\nWhen installing from GitHub, the opening installation scrawl will look different. Unlike R packages from CRAN or Bioconductor, which are usually shipped in an assembled binary format, R packages from GitHub start off as source code. So the first steps shown in the scrawl are the process of converting the source code to binary before proceeding.\nThis process of building R packages from source code is one of the reasons we needed to install Rtools (for Windows users) or Xcode Developer Tools (for MacOS) for this course. 
We will look at this topic in greater depth later in the course when we talk about creating R packages.",
    "crumbs": [
      "About",
      "Intro to R"
    ]
  },
  {
    "objectID": "course/01_InstallingRPackages/index.html#troubleshooting-install-errors",
    "href": "course/01_InstallingRPackages/index.html#troubleshooting-install-errors",
    "title": "01 - Installing R Packages",
    "section": "Troubleshooting Install Errors",
    "text": "Troubleshooting Install Errors\nWe have now installed three R packages: dplyr, PeacoQC, and flowSpectrum. In my case, I did not encounter any errors during the installation. However, sometimes a package installation will fail due to an error encountered during the process. This can happen for a number of reasons, ranging from a missing dependency to an update that caused a conflict. While these can occur for CRAN or Bioconductor packages, they are more frequently seen for GitHub packages, where the DESCRIPTION/NAMESPACE files may not have been fully updated yet to include all the required dependencies.\nWhen encountering an error, start off by reading through the message to see if you can parse any useful information about which package failed to install, and whether it lists the missing dependency package’s name. The latter was the case in the error message example shown below.\n\n\n\nIf you encounter an installation error this week, please take screenshots of the error message and post them to this Discussion. 
This will help us troubleshoot your installation, as well as provide additional examples of installation errors that will be used to update this section in the future.",
    "crumbs": [
      "About",
      "Intro to R"
    ]
  },
  {
    "objectID": "course/01_InstallingRPackages/index.html#installing-specific-package-versions",
    "href": "course/01_InstallingRPackages/index.html#installing-specific-package-versions",
    "title": "01 - Installing R Packages",
    "section": "Installing Specific-Package Versions",
    "text": "Installing Specific-Package Versions\nWhile we may be tempted to think of R packages as static, they change quite often, as their developers add new features, fix bugs, etc. 
To help keep track of these changes (essential for reproducibility and replicability), R packages have version numbers.\nWhen we run sessionInfo(), we can see an example of this, with the version number appearing after the package name.\n\nsessionInfo()\n\nR version 4.5.2 (2025-10-31)\nPlatform: x86_64-pc-linux-gnu\nRunning under: Debian GNU/Linux 13 (trixie)\n\nMatrix products: default\nBLAS: /usr/lib/x86_64-linux-gnu/blas/libblas.so.3.12.1 \nLAPACK: /usr/lib/x86_64-linux-gnu/lapack/liblapack.so.3.12.1; LAPACK version 3.12.0\n\nlocale:\n [1] LC_CTYPE=en_US.UTF-8 LC_NUMERIC=C \n [3] LC_TIME=en_US.UTF-8 LC_COLLATE=en_US.UTF-8 \n [5] LC_MONETARY=en_US.UTF-8 LC_MESSAGES=en_US.UTF-8 \n [7] LC_PAPER=en_US.UTF-8 LC_NAME=C \n [9] LC_ADDRESS=C LC_TELEPHONE=C \n[11] LC_MEASUREMENT=en_US.UTF-8 LC_IDENTIFICATION=C \n\ntime zone: America/New_York\ntzcode source: system (glibc)\n\nattached base packages:\n[1] stats graphics grDevices utils datasets methods base \n\nother attached packages:\n[1] PeacoQC_1.20.0 BiocManager_1.30.27 BiocStyle_2.38.0 \n\nloaded via a namespace (and not attached):\n [1] generics_0.1.4 shape_1.4.6.1 digest_0.6.39 \n [4] magrittr_2.0.4 evaluate_1.0.5 grid_4.5.2 \n [7] RColorBrewer_1.1-3 iterators_1.0.14 circlize_0.4.17 \n[10] fastmap_1.2.0 foreach_1.5.2 doParallel_1.0.17 \n[13] jsonlite_2.0.0 graph_1.88.1 GlobalOptions_0.1.3 \n[16] ComplexHeatmap_2.26.0 flowWorkspace_4.22.1 scales_1.4.0 \n[19] XML_3.99-0.20 Rgraphviz_2.54.0 codetools_0.2-20 \n[22] cli_3.6.5 RProtoBufLib_2.22.0 rlang_1.1.7 \n[25] crayon_1.5.3 Biobase_2.70.0 yaml_2.3.12 \n[28] otel_0.2.0 cytolib_2.22.0 ncdfFlow_2.56.0 \n[31] tools_4.5.2 parallel_4.5.2 dplyr_1.2.0 \n[34] colorspace_2.1-2 ggplot2_4.0.2 GetoptLong_1.1.0 \n[37] BiocGenerics_0.56.0 vctrs_0.7.1 R6_2.6.1 \n[40] png_0.1-8 matrixStats_1.5.0 stats4_4.5.2 \n[43] lifecycle_1.0.5 flowCore_2.22.1 S4Vectors_0.48.0 \n[46] htmlwidgets_1.6.4 IRanges_2.44.0 clue_0.3-66 \n[49] cluster_2.1.8.1 pkgconfig_2.0.3 pillar_1.11.1 \n[52] 
gtable_0.3.6 data.table_1.18.2.1 glue_1.8.0 \n[55] xfun_0.56 tibble_3.3.1 tidyselect_1.2.1 \n[58] knitr_1.51 farver_2.1.2 rjson_0.2.23 \n[61] htmltools_0.5.9 rmarkdown_2.30 compiler_4.5.2 \n[64] S7_0.2.1 \n\n\n\nAlternatively, we can retrieve the same information for the individual packages via the packageVersion() function.\n\npackageVersion(\"PeacoQC\")\n\n[1] '1.20.0'\n\n\n\n\nAs well as from the citation() function.\n\ncitation(\"PeacoQC\")\n\nTo cite package 'PeacoQC' in publications use:\n\n Emmaneel A (2025). _PeacoQC: Peak-based selection of high quality\n cytometry data_. doi:10.18129/B9.bioc.PeacoQC\n <https://doi.org/10.18129/B9.bioc.PeacoQC>, R package version 1.20.0,\n <https://bioconductor.org/packages/PeacoQC>.\n\nA BibTeX entry for LaTeX users is\n\n @Manual{,\n title = {PeacoQC: Peak-based selection of high quality cytometry data},\n author = {Annelies Emmaneel},\n year = {2025},\n note = {R package version 1.20.0},\n url = {https://bioconductor.org/packages/PeacoQC},\n doi = {10.18129/B9.bioc.PeacoQC},\n }\n\n\n\n\nHow does a version number work? Let’s say we have the following version number: 1.20.0\nThe first number of the version (1. in this case) denotes major changes, primarily those where, after the version change, the package may no longer be compatible with code written for the prior version. As a consequence, this number changes rarely.\nThe second number (.20. in this case) is the minor version. Minor changes typically consist of newly added features that don’t affect the overall package function. These will change more frequently, especially for Bioconductor packages with fixed release cycles.\nThe final number (.0 in this case) is often used to denote small changes occurring within a minor release period, often bug fixes or fixing typos within the documentation.\n\n\nWe may in the future need to install specific package versions (but won’t be doing so today). 
As you might expect, which repository hosts the R package influences how we would go about doing this.\nFor CRAN packages, we can use the remotes package’s install_version() function. This allows us to provide the version number and designate the repository location (the CRAN url in this case).\n\nremotes::install_version(\"ggplot2\", version = \"3.5.2\", repos = \"https://cloud.r-project.org\")\n\nFor GitHub-based R packages, the package versioning schema is not as strict as that of CRAN or Bioconductor. Typically, changes in R packages are put out by their developers as releases. When trying to install a particular release, we can add an additional argument to the install_github() function, specifying the release version’s tag number. For example:\n\nremotes::install_github(\"DavidRach/Luciernaga\", ref = \"v0.99.7\")\n\nAlternatively, if the developer doesn’t implement releases, you can provide the hash of a particular commit.\n\nremotes::install_github(\"DavidRach/Luciernaga\", ref = \"8d1d694\")",
    "crumbs": [
      "About",
      "Intro to R"
    ]
  },
  {
    "objectID": "course/01_InstallingRPackages/index.html#documentation-and-websites",
    "href": "course/01_InstallingRPackages/index.html#documentation-and-websites",
    "title": "01 - Installing R Packages",
    "section": "Documentation and Websites",
    "text": "Documentation and Websites\nWe have already seen a couple of ways to access the help documentation contained within an R package via Positron. 
Beyond internal documentation, R packages often have external websites that contain additional walk-through articles (i.e., vignettes) to better document how to use the package.\nFor CRAN-based packages, we can start off by searching for the package name. So, in the case of dplyr:\n\n\n\nTwo main suggestions pop up. One is the package’s CRAN page. Unfortunately, this one is not particularly user-friendly, although some text-based vignettes are accessible.\n\n\n\nBecause of this, many CRAN-based R packages (especially those part of the tidyverse) use pkgdown-generated websites hosted via a GitHub page (similar to the one used by this course). The second option on the search is dplyr’s pkgdown-style website.\n\n\n\nWe can usually find the list of functions under the Reference tab, with the more extensive documentation vignettes being found under the Articles tab.\n\n\n\nGitHub-based packages will vary depending on their individual developers, but often will also use pkgdown-style websites. These often appear as links on the right-hand side, or within the repository’s ReadMe.\n\n\n\nFor Bioconductor-based packages, on the package’s page we can typically find the already rendered vignette articles, usually as either html or pdf files. For example, with PeacoQC:\n\n\n\nAdditionally, package vignettes can also be reached via the package’s help index page. 
These will usually appear under User guides, package vignettes, and other documentation.", + "crumbs": [ + "About", + "Intro to R" + ] }, { - "objectID": "course/02_FilePaths/slides.html#saving-changes-to-version-control", - "href": "course/02_FilePaths/slides.html#saving-changes-to-version-control", - "title": "02 - File Paths", - "section": "Saving changes to Version Control", - "text": "Saving changes to Version Control\n\n\n\n\n\n\n\n\n.\n\n\nAnd as is good practice, to maintain version control, let’s stage all the files and folders we created today within the Week2 Project Folder, write a commit message, and send these files back to GitHub until they are needed again next time." + "objectID": "course/01_InstallingRPackages/slides.html#set-up", + "href": "course/01_InstallingRPackages/slides.html#set-up", + "title": "01 - Installing R Packages", + "section": "Set Up", + "text": "Set Up\nAlright, with the background out of the way, let’s get started!\n\n\n\n\n\n\n\nImportant\n\n\nPlease make sure to sync your forked version of the CytometryInR repository, and pull any changes to your local computer’s CytometryInR project folder so that you have the most recent version of the code and data available.\n\n\n\n\n\n\n\n\n\n\n\nWarning\n\n\nPlease remember to always copy over the new material from your local CytometryInR folder to a separate Project Folder that you created and named (ex. “Week_01” or “MyLearningFolder”, etc.). 
This will ensure any edits you make to the files do not affect your ability to bring in next week’s materials to the CytometryInR folder" }, { - "objectID": "course/01_InstallingRPackages/slides_inperson.html#checking-for-loaded-packages", - "href": "course/01_InstallingRPackages/slides_inperson.html#checking-for-loaded-packages", + "objectID": "course/01_InstallingRPackages/slides.html#checking-for-loaded-packages", + "href": "course/01_InstallingRPackages/slides.html#checking-for-loaded-packages", "title": "01 - Installing R Packages", "section": "Checking for Loaded Packages", "text": "Checking for Loaded Packages\n\n\n\n\n\n\n\n\n.\n\n\nFor the contents (ie. the functions) of an R package to be available for your computer to use, they must first be activated (ie. loaded) into your local environment. We will first learn how to check what R packages are currently active." }, { - "objectID": "course/01_InstallingRPackages/slides_inperson.html#installing-from-cran", - "href": "course/01_InstallingRPackages/slides_inperson.html#installing-from-cran", + "objectID": "course/01_InstallingRPackages/slides.html#installing-from-cran", + "href": "course/01_InstallingRPackages/slides.html#installing-from-cran", "title": "01 - Installing R Packages", "section": "Installing from CRAN", "text": "Installing from CRAN\n\n\n\n\n\n\n\n\n.\n\n\nWe will start by installing R packages that are part of the CRAN repository. This is the main R package repository, being part of the broader R software project. In the context of this course, R packages that work primarily with general data structure (rows, columns, matrices, etc.) or visualizations will predominantly be found within this repository.\nThese include the tidyverse packages. These packages have collectively made R easier to use by smoothing out some of the rough edges of base R, which is why R has seen major growth within the last decade. We will be installing several of these R packages today." 
}, { - "objectID": "course/01_InstallingRPackages/slides_inperson.html#installing-from-bioconductor", - "href": "course/01_InstallingRPackages/slides_inperson.html#installing-from-bioconductor", + "objectID": "course/01_InstallingRPackages/slides.html#installing-from-bioconductor", + "href": "course/01_InstallingRPackages/slides.html#installing-from-bioconductor", "title": "01 - Installing R Packages", "section": "Installing from Bioconductor", "text": "Installing from Bioconductor\n\n\n\n\n\n\n\n\n.\n\n\nBioconductor is the second R package repository we will be working with throughout the course. While it contains far fewer packages than CRAN, it contains packages that are primarily used by the biomedical sciences. Following this link you can find it’s current flow and mass cytometry R packages.\nBioconductor R packages differ from CRAN R packages in a couple of ways. Bioconductor has different standards for acceptance than CRAN. They usually contain interoperable object-types, put more effort into documentation and continous testing to ensure that the R package remains functional across operating systems." }, { - "objectID": "course/01_InstallingRPackages/slides_inperson.html#install-from-github", - "href": "course/01_InstallingRPackages/slides_inperson.html#install-from-github", + "objectID": "course/01_InstallingRPackages/slides.html#install-from-github", + "href": "course/01_InstallingRPackages/slides.html#install-from-github", "title": "01 - Installing R Packages", "section": "Install from GitHub", "text": "Install from GitHub\n\n\n\n\n\n\n\n\n.\n\n\nIn addition to the CRAN and Bioconductor repositories, individual R packages can also be found on GitHub hosted on their respective developers GitHub accounts. 
Newer packages that are still being worked on (often in the process of submission to CRAN or Bioconductor) can be found here, as well as those where the author decided not to bother with a review process, and just made the packages immediately available, warts and all." }, { - "objectID": "course/01_InstallingRPackages/slides_inperson.html#troubleshooting-install-errors", - "href": "course/01_InstallingRPackages/slides_inperson.html#troubleshooting-install-errors", + "objectID": "course/01_InstallingRPackages/slides.html#troubleshooting-install-errors", + "href": "course/01_InstallingRPackages/slides.html#troubleshooting-install-errors", "title": "01 - Installing R Packages", "section": "Troubleshooting Install Errors", "text": "Troubleshooting Install Errors\n\n\n\n\n\n\n\n\n.\n\n\nWe have now installed three R packages, dplyr, PeacoQC, and flowSpectrum. In my case, I did not encounter any errors during the installation. However, sometimes a package installation will fail due to an error encountered during the installation process. This can be due to a number of reasons, ranging from a missing dependency, or an update that caused a conflict. While these can occur for CRAN or Bioconductor packages, they are more frequently seen for GitHub packages where the Description/Namespace files may not have been fully updated yet to install all the required dependencies.\nWhen encountering an error, start of by first reading through the message to see if you can parse any useful information about what package failed to install, and if it list the missing dependency packages name. The later was the case in the error message example shown below." 
}, { - "objectID": "course/01_InstallingRPackages/slides_inperson.html#documentation-and-websites", - "href": "course/01_InstallingRPackages/slides_inperson.html#documentation-and-websites", + "objectID": "course/01_InstallingRPackages/slides.html#installing-specific-package-versions", + "href": "course/01_InstallingRPackages/slides.html#installing-specific-package-versions", + "title": "01 - Installing R Packages", + "section": "Installing Specific-Package Versions", + "text": "Installing Specific-Package Versions\n\n\n\n\n\n\n\n\n.\n\n\nWhile we may be tempted to think of R packages as static, they change quite often, as their develipers add new features, fix bugs, etc. To help keep track of these changes (essential for reproducibility and replicability), R packages have version numbers.\nWhen we run sessionInfo(), we can see an example of this, with the version number appearing after the package name.\n\n\n\n\n\n\n\nsessionInfo()\n\nR version 4.5.2 (2025-10-31)\nPlatform: x86_64-pc-linux-gnu\nRunning under: Debian GNU/Linux 13 (trixie)\n\nMatrix products: default\nBLAS: /usr/lib/x86_64-linux-gnu/blas/libblas.so.3.12.1 \nLAPACK: /usr/lib/x86_64-linux-gnu/lapack/liblapack.so.3.12.1; LAPACK version 3.12.0\n\nlocale:\n [1] LC_CTYPE=en_US.UTF-8 LC_NUMERIC=C \n [3] LC_TIME=en_US.UTF-8 LC_COLLATE=en_US.UTF-8 \n [5] LC_MONETARY=en_US.UTF-8 LC_MESSAGES=en_US.UTF-8 \n [7] LC_PAPER=en_US.UTF-8 LC_NAME=C \n [9] LC_ADDRESS=C LC_TELEPHONE=C \n[11] LC_MEASUREMENT=en_US.UTF-8 LC_IDENTIFICATION=C \n\ntime zone: America/New_York\ntzcode source: system (glibc)\n\nattached base packages:\n[1] stats graphics grDevices utils datasets methods base \n\nother attached packages:\n[1] PeacoQC_1.20.0 BiocManager_1.30.27\n\nloaded via a namespace (and not attached):\n [1] generics_0.1.4 shape_1.4.6.1 digest_0.6.39 \n [4] magrittr_2.0.4 evaluate_1.0.5 grid_4.5.2 \n [7] RColorBrewer_1.1-3 iterators_1.0.14 circlize_0.4.17 \n[10] fastmap_1.2.0 foreach_1.5.2 doParallel_1.0.17 \n[13] 
jsonlite_2.0.0 graph_1.88.1 GlobalOptions_0.1.3 \n[16] ComplexHeatmap_2.26.0 flowWorkspace_4.22.1 scales_1.4.0 \n[19] XML_3.99-0.20 Rgraphviz_2.54.0 codetools_0.2-20 \n[22] cli_3.6.5 RProtoBufLib_2.22.0 rlang_1.1.7 \n[25] crayon_1.5.3 Biobase_2.70.0 yaml_2.3.12 \n[28] otel_0.2.0 cytolib_2.22.0 ncdfFlow_2.56.0 \n[31] tools_4.5.2 parallel_4.5.2 dplyr_1.2.0 \n[34] colorspace_2.1-2 ggplot2_4.0.2 GetoptLong_1.1.0 \n[37] BiocGenerics_0.56.0 vctrs_0.7.1 R6_2.6.1 \n[40] png_0.1-8 matrixStats_1.5.0 stats4_4.5.2 \n[43] lifecycle_1.0.5 flowCore_2.22.1 S4Vectors_0.48.0 \n[46] IRanges_2.44.0 clue_0.3-66 cluster_2.1.8.1 \n[49] pkgconfig_2.0.3 pillar_1.11.1 gtable_0.3.6 \n[52] data.table_1.18.2.1 glue_1.8.0 xfun_0.56 \n[55] tibble_3.3.1 tidyselect_1.2.1 knitr_1.51 \n[58] farver_2.1.2 rjson_0.2.23 htmltools_0.5.9 \n[61] rmarkdown_2.30 compiler_4.5.2 S7_0.2.1" + }, + { + "objectID": "course/01_InstallingRPackages/slides.html#documentation-and-websites", + "href": "course/01_InstallingRPackages/slides.html#documentation-and-websites", "title": "01 - Installing R Packages", "section": "Documentation and Websites", "text": "Documentation and Websites\n\n\n\n\n\n\n\n\n.\n\n\nWe have already seen a couple ways to access the help documentation contained within an R package via Positron. Beyond internal documentation, R packages often have external websites that contain additional walk-through articles (ie. vignettes) to better document how to use the package.\nFor CRAN-based packages, we can start off by searching for the package name. So, in the case of dplyr" }, { - "objectID": "course/00_WorkstationSetup/MacOSSlides.html#installing-r", - "href": "course/00_WorkstationSetup/MacOSSlides.html#installing-r", - "title": "Installing Software on MacOS", - "section": "Installing R", - "text": "Installing R\n\n\n\n\n\n\n\n\n.\n\n\nTo get started, first navigate to the R website. Once there, click on Download R option towards the top of the page." 
+ "objectID": "course/00_WorkstationSetup/Windows.html", + "href": "course/00_WorkstationSetup/Windows.html", + "title": "Installing Software on Windows", + "section": "", + "text": "For the YouTube livestream recording, see here\nFor screen-shot slides, click here\nThis is the software installation walkthrough for those whose computers are running Windows. Based on our pre-course interest form, you make up the majority of course participants." }, { + "objectID": "course/00_WorkstationSetup/Windows.html#installing-r", + "href": "course/00_WorkstationSetup/Windows.html#installing-r", + "title": "Installing Software on Windows", + "section": "Installing R", + "text": "Installing R\nTo get started, first navigate to the R website. Once there, click on Download R option towards the top of the page.\n\n\n\nOn the next screen, you will need to select a mirror from which to download the software. 
You can either select the closest geographic location (which may be faster) or alternatively just select the Cloud option which should redirect you.\n\n\n\nYou will then select your Operating System, in this case, Windows.\n\n\n\nAnd go ahead and select the Install R for the first time link.\n\n\n\nNext, you will select the download the current version option at the top of the page.\n\n\n\nThe popup window will then ask where you want to save the installer (.exe) file. We generally save this to either Downloads or Desktop to make finding it easier.\n\n\n\nAfter the download is complete, double click on the installer’s .exe file. This will open a popup asking you to select your preferred language.\n\n\n\nYou will then be prompted to accept the software license (which is the free copyleft GPL2 license, which we will learn about later in the course).\n\n\n\nOn Windows, R will normally save its software folder under Program Files.\n\n\n\nNext, please accept the defaults.\n\n\n\n\n\n\n\n\n\n\n\n\nWith the defaults accepted, the installation will commence. Feel free to go have a coffee/tea/beverage-of-your-choice break while you wait.\n\n\n\nAnd if all goes well, the installation will complete without any issues." 
+ "objectID": "course/00_WorkstationSetup/Windows.html#installing-rtools", + "href": "course/00_WorkstationSetup/Windows.html#installing-rtools", + "title": "Installing Software on Windows", + "section": "Installing RTools", + "text": "Installing RTools\nWe will now work on installing Rtools. This software is needed when building R packages from source, which we will need throughout the course for R packages hosted on GitHub.\nTo get started, we will return to the R installation page we visited previously and instead click on the Rtools option.\n\n\n\nNext, select the most recent version of Rtools to Download.\n\n\n\nYou will then select your architecture. For the vast majority of Windows users, your computer will likely be using an x86 chip architecture, so you would select the Rtools45 installer option.\nIf your computer however uses the ARM chip architecture, you would select the 64-bit ARM Rtools45 installer instead. If you are unsure, see the following.\n\n\n\nNext, you will select the location to save the Rtools installer to. We generally save this to either Downloads or Desktop to make finding it easier.\n\n\n\nOnce downloaded, double click on the .exe to launch the Rtools installer.\n\n\n\nSimilar to what we did when installing R, go ahead and keep the defaults.\n\n\n\nAnd click install to proceed with the installation.\n\n\n\nAnd wait while the installation wraps up.\n\n\n\nIf all goes well, you should see the following installation success page." 
- }, - { - "objectID": "course/00_WorkstationSetup/LinuxSlides.html#installing-git", - "href": "course/00_WorkstationSetup/LinuxSlides.html#installing-git", - "title": "Installing Software on Linux", + "objectID": "course/00_WorkstationSetup/Windows.html#installing-git", + "href": "course/00_WorkstationSetup/Windows.html#installing-git", + "title": "Installing Software on Windows", "section": "Installing Git", - "text": "Installing Git\n\n# sudo apt install git" + "text": "Installing Git\nGit is a version control software widely used among software developers and bioinformaticians. We will use it extensively throughout the course, both locally on our computers (to keep track of changes to our files), as well as in combination with GitHub (to maintain online backups of our files).\nWe will first navigate to the website and select the download for Windows option.\n\n\n\n\n\n\n\n\n\nWe will then proceed and select the install 64-bit Git for Windows Setup option\n\n\n\n\n\n\n\nAs was the case with our installation of R and Rtools, a pop-up will appear asking for a location to save the installer to.\nOnce downloaded, double-click and proceed with the installation.\nYou will be asked to accept the Git License (which is the free copyleft GPL2 license, which we will learn about later in the course).\n\n\n\nThen you will be asked to select the folder to save the software to (usually your Programs folder)\n\n\n\nAt this point, the Git installer will ask a series of increasingly niche questions. It is best to just accept all the default options, to avoid wandering too far down a “What is Vim?!?” rabbit-hole.\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\nHaving made it through all the niche customization screens, we finally reach the install button.\n\n\n\nWe can then wait for the install to complete.\n\n\n\nAnd success, we have now installed Git." 
}, { - "objectID": "course/00_WorkstationSetup/LinuxSlides.html#installing-positron", - "href": "course/00_WorkstationSetup/LinuxSlides.html#installing-positron", - "title": "Installing Software on Linux", + "objectID": "course/00_WorkstationSetup/Windows.html#installing-positron", + "href": "course/00_WorkstationSetup/Windows.html#installing-positron", + "title": "Installing Software on Windows", "section": "Installing Positron", - "text": "Installing Positron\n\n\n\n\n\n\n\n\n.\n\n\nFinally, you will need to install Positron. It will be the integrated development environment (IDE) we will be using for the course.\nFirst, navigate to their homepage, and select the blue Download option button on the upper-right." + "text": "Installing Positron\nFinally, you will install Positron. It is an integrated development environment (IDE) in which we will open, modify and run our code throughout the course.\nFirst, navigate to their homepage, and select the blue Download option button on the upper-right.\n\n\n\nYou will then need to accept the Elastic License agreement to use the software (we will cover this source-available license type and what it does later in the course).\nWith the license accepted, you will be able to select your operating system. In this case, we will select Windows, specifically the user-level install.\n\n\n\nPlease note, if you are using a Windows computer with an ARM based chip (as is the case with Snapdragons), you will need to download the installer from Positron’s GitHub Releases page, as they are still testing some features.\n\n\nYou will then be prompted to select the location you want to save the installer to. 
We will generally save this to either Downloads or Desktop to make finding it easier.\n\n\n\nOnce the download is complete, double click on the installer, and again accept the license agreement.\n\n\n\nGenerally, Positron will store its software folder under Program Files.\n\n\n\nNext up, accept the default options for the following screens.\n\n\n\n\n\n\nAnd finally, click Install.\n\n\n\n \n\nIf all goes well, you should then see the installation success page." }, { - "objectID": "course/00_WorkstationSetup/Linux.html", - "href": "course/00_WorkstationSetup/Linux.html", - "title": "Installing Software on Linux", "section": "", - "text": "For the YouTube livestream recording, see here\nFor screen-shot slides, click here\nThis is the software installation walkthrough for those whose computers are running Linux. First off, welcome! Based on our pre-course interest form, there was a suprising number of you daily-drivers out there! However, a CytometryInR interest form is unlikely to be representative of the general population, so please stash all Year of the Linux desktop banners until further notice." + "objectID": "course/00_WorkstationSetup/index.html", + "href": "course/00_WorkstationSetup/index.html", + "title": "Workstation Setup", + "section": "", + "text": "In the previous section, we first set up your GitHub account. Then we modified your GitHub profile and added a README section. Finally, we forked the CytometryInR repository so that you can easily retrieve the new course materials each week.\nIt is now time to install the required software on your computer, which will get your workstation set up with everything needed for this course. Depending on your computer’s operating system, the installation requirements may differ a bit. 
In general, you will need to install the following software:\nR website : The programming language we will be using throughout the course.\nPositron : The integrated development environment (IDE) in which we will open, modify and run our code.\nGit : The version control software that will allow us to track changes to our files.\nAdditionally, Windows users will need to install:\nRTools : Used to build R packages from source code.\nYou can find the operating system specific installation walkthroughs below. Once you have completed your specific walkthrough, return to this page and proceed to the next section.\nPlease note: For those using university or company administered computers, be aware that you may not have the necessary permissions to install these directly, and may need to reach out to your IT department to help get the software installed and running correctly.\nIf you are using your own computer, congratulations, you are your own system administrator, and should already have the necessary permissions.", + "crumbs": [ + "About", + "Getting Started", + "00 - Workstation Setup" + ] }, { - "objectID": "course/00_WorkstationSetup/Linux.html#debian-based-distros", - "href": "course/00_WorkstationSetup/Linux.html#debian-based-distros", - "title": "Installing Software on Linux", - "section": "Debian-based Distros", - "text": "Debian-based Distros\n\nInstalling R\nTo get started, first navigate to the R website. Once there, click on Download R option towards the top of the page.\n\n\n\nOn the next screen, you will need to select a mirror from which to download the software from. 
You can either select the closest geographic location (which may be faster) or alternatively just select the Cloud option which should redirect you.\n\n\n\nOnce this is done, select your Linux Distro (or one that shares your package managers format).\n\n\n\nOn the landing page, you will find a bunch of relevant installation information, which is worthwhile giving a read-through when you have time.\n\n\n\nThe process to successfully install R can be summarized as follows:\nUpdate apt/sources.list to include the CRAN repository (allowing access to R packages)\n\n\n\nSince we are running on Debian stable (Trixie), we would add the following line to sources.list\n\n\n\nSo in practice, open sources.list:\n\n\n\nPaste the line, and “Ctrl + O”; “Enter”; “Ctrl + X” to save the changes.\n\n\n\nNext, we will need to retrieve the keyID used to sign. This can be fetched from Ubuntu via the terminal.\n\n\n\n\n\n\nThen we need to export and write it\n\n\n\nWhich if successful, will display the public key.\n\n\n\nWith the above set up, we can proceed via our apt package manager to install both r-base and r-base-dev (which contains the equivalent of Rtools for Windows, or Xcode Command Line Tools for macOS).\n\n\n\n\n\n\n\n\n\nAnd if all goes well, R should now be installed." 
+ "objectID": "course/00_WorkstationSetup/index.html#windows", + "href": "course/00_WorkstationSetup/index.html#windows", + "title": "Workstation Setup", + "section": "Windows", + "text": "Windows\n\nInstallation walkthrough for Windows", + "crumbs": [ + "About", + "Getting Started", + "00 - Workstation Setup" + ] }, { - "objectID": "course/00_WorkstationSetup/Linux.html#installing-git", - "href": "course/00_WorkstationSetup/Linux.html#installing-git", - "title": "Installing Software on Linux", - "section": "Installing Git", - "text": "Installing Git\n\n# sudo apt install git" + "objectID": "course/00_WorkstationSetup/index.html#macos", + "href": "course/00_WorkstationSetup/index.html#macos", + "title": "Workstation Setup", + "section": "MacOS", + "text": "MacOS\n\nInstallation walkthrough for MacOS", + "crumbs": [ + "About", + "Getting Started", + "00 - Workstation Setup" + ] }, { - "objectID": "course/00_WorkstationSetup/Linux.html#installing-positron", - "href": "course/00_WorkstationSetup/Linux.html#installing-positron", - "title": "Installing Software on Linux", - "section": "Installing Positron", - "text": "Installing Positron\nFinally, you will need to install Positron. It will be the integrated development environment (IDE) we will be using for the course.\nFirst, navigate to their homepage, and select the blue Download option button on the upper-right.\n\n\n\nYou will then need to accept the Elastic License agreement to use the software (we will cover this source-available license type and what it does later in the course).\n\n\n\nWith the license accepted, you will be able to select distribution and architecture.\n\n\n\nOnce the Download completes, proceed to install the .deb package as you would nornally. GUI example via Discover below.\n\n\n\nDepending on your configurations, you may be asked to exert your sudo powers.\n\n\n\nOnce this completes, you should now be able to launch the software for the first time." 
+ "objectID": "course/00_WorkstationSetup/index.html#linux-debian", + "href": "course/00_WorkstationSetup/index.html#linux-debian", + "title": "Workstation Setup", + "section": "Linux (Debian)", + "text": "Linux (Debian)\n\nInstallation walkthrough for Linux", + "crumbs": [ + "About", + "Getting Started", + "00 - Workstation Setup" + ] }, { - "objectID": "course/00_WorkstationSetup/WindowsSlides.html#installing-r", - "href": "course/00_WorkstationSetup/WindowsSlides.html#installing-r", - "title": "Installing Software on Windows", + "objectID": "course/00_WorkstationSetup/MacOS.html", + "href": "course/00_WorkstationSetup/MacOS.html", + "title": "Installing Software on MacOS", + "section": "", + "text": "For the YouTube livestream recording, see here\nFor screen-shot slides, click here\nThis is the software installation walkthrough for those whose computers are running MacOS. Based on our pre-course interest form, you make up a solid proportion of the course participants." }, { + "objectID": "course/00_WorkstationSetup/MacOS.html#installing-r", + "href": "course/00_WorkstationSetup/MacOS.html#installing-r", + "title": "Installing Software on MacOS", "section": "Installing R", - "text": "Installing R\n\n\n\n\n\n\n\n\n.\n\n\nTo get started, first navigate to the R website. Once there, click on Download R option towards the top of the page." + "text": "Installing R\nTo get started, first navigate to the R website. Once there, click on Download R option towards the top of the page.\n\n\n\nOn the next screen, you will need to select a mirror from which to download the software. You can either select the closest geographic location (which may be faster) or alternatively just select the Cloud option which should redirect you.\n\n\n\nYou will then select your Operating System, in this case, macOS\n\n\n\nNext, you will need to select the appropriate download based on your computer’s architecture. 
On newer Macs (containing M1+ chips) this would be the arm64 option on the center left of the screen. For the older Intel (pre-2020) Macs, you would select the x86_64 option. If you are unsure, check your About This Mac tab\n\n\n\nAfter the download has completed, launch the installer\n\n\n\nProceed through the Read Me\n\n\n\nYou will then be prompted to accept the software license (which is the free copyleft GPL2 license, which we will learn about later in the course).\n\n\n\n\n\n\nNext, you will need to navigate through several pages, keeping the defaults.\n\n\n\nAnd with any luck, you should see that the installation was successful." }, { - "objectID": "course/00_WorkstationSetup/WindowsSlides.html#installing-rtools", - "href": "course/00_WorkstationSetup/WindowsSlides.html#installing-rtools", - "title": "Installing Software on Windows", - "section": "Installing RTools", - "text": "Installing RTools\n\n\n\n\n\n\n\n\n.\n\n\nWe will now work on installing Rtools. This software is needed when building R packages from source, which we will need throughout the course for R packages hosted on GitHub.\nTo get started, we will return to the R installation page we visited previously and instead click on the Rtools option." + "objectID": "course/00_WorkstationSetup/MacOS.html#xcode-command-line-tools", + "href": "course/00_WorkstationSetup/MacOS.html#xcode-command-line-tools", + "title": "Installing Software on MacOS", + "section": "Xcode Command Line Tools", + "text": "Xcode Command Line Tools\nDepending on your version of macOS, you may or may not already have Git installed on your computer. The reason is that it comes bundled within the Xcode Command Line Tools.\nIf this is not your first foray into coding, you may have previously seen an installation pop-up along the lines of “XYZ requires command line developer tools. 
Would you like to install the tools now?” when installing an IDE (like Positron, Rstudio or Visual Studio Code).\n\n\n\nSince these command line developer tools contain both Git and the equivalent of Rtools for Windows, we will need to install them for this course. To get started, first open your terminal.\n\n\n\nNext run the following code:\n\nxcode-select --install\n\n\n\n\nYou will then see a pop-up asking whether you want to install the command line tools (which contain Git). Select Install.\n\n\n\nYou will then be asked to accept the license\n\n\n\nYour installation will then proceed\n\n\n\nAnd if all goes well, the software will finish installing.\n\n\n\nAfter you complete the Positron installation (next section), if you check the version control tab on the action bar on the far left side of the screen, you should see the following if Git was installed correctly.\n\n\n\nAlternatively, if you see this, you will need to reattempt the installation." }, { - "objectID": "course/00_WorkstationSetup/WindowsSlides.html#installing-git", - "href": "course/00_WorkstationSetup/WindowsSlides.html#installing-git", - "title": "Installing Software on Windows", - "section": "Installing Git", - "text": "Installing Git\n\n\n\n\n\n\n\n\n.\n\n\nGit is a version control software widely used among software developers and bioinformaticians. We will use it extensively throughout the course, both locally on our computers (to keep track of changes to our files), as well as in combination with GitHub(to maintain online backups of our files).\nWe will first navigate to the website and select the download from Windows option." + "objectID": "course/00_WorkstationSetup/MacOS.html#install-positron", + "href": "course/00_WorkstationSetup/MacOS.html#install-positron", + "title": "Installing Software on MacOS", + "section": "Install Positron", + "text": "Install Positron\nFinally, you will install Positron. 
It is an integrated development environment (IDE) in which we will open, modify and run our code throughout the course.\nFirst, navigate to their homepage, and select the blue Download option button on the upper-right.\n\n\n\nYou will then need to accept the Elastic License agreement to use the software (we will cover this source-available license type and what it does later in the course).\nWith the license accepted, you will be able to select your operating system and the relevant installer depending on whether you are on an M1+ (ARM) or older Intel (x86) Mac.\n\n\nOnce the Download completes, proceed to install the package as you normally would for any other program." }, { - "objectID": "course/00_WorkstationSetup/WindowsSlides.html#installing-positron", - "href": "course/00_WorkstationSetup/WindowsSlides.html#installing-positron", - "title": "Installing Software on Windows", - "section": "Installing Positron", - "text": "Installing Positron\n\n\n\n\n\n\n\n\n.\n\n\nFinally, you will install Positron. It is an integrated development environment (IDE) in which we will open, modify and run our code throughout the course.\nFirst, navigate to their homepage, and select the blue Download option button on the upper-right." + "objectID": "course/00_Quarto/index.html", + "href": "course/00_Quarto/index.html", + "title": "Introduction to Quarto", + "section": "", + "text": "For the YouTube livestream recording, see here\nFor screen-shot slides, click here", + "crumbs": [ + "About", + "Getting Started", + "00 - Quarto" + ] }, { - "objectID": "course/00_Quarto/slides.html#renderpreview", - "href": "course/00_Quarto/slides.html#renderpreview", + "objectID": "course/00_Quarto/index.html#renderpreview", + "href": "course/00_Quarto/index.html#renderpreview", "title": "Introduction to Quarto", "section": "Render/Preview", - "text": "Render/Preview\n\n\n\n\n\n\n\n\n.\n\n\nThe preview button, at the upper-left end of the Editor, is used to render/knit a quarto document. 
This triggers the process by which code-chunks are run, and then the outputs are cobbled together into the file format type designated by the YAML header." + "text": "Render/Preview\nThe preview button, at the upper-left end of the Editor, is used to render/knit a quarto document. This triggers the process by which code-chunks are run, and then the outputs are cobbled together into the file format type designated by the YAML header.\n\n\n\n\nHTML\nIn this case, the YAML header’s format argument is set to html. After clicking preview, we can see the various rendering steps appear in the console below. Since no errors occurred, the html document was formed successfully and appears as a file in the left side-bar. Additionally, a preview of the document appears in the View tab of the right side-bar, allowing for quick visual inspection.\n\n\n\nAlternatively, we can render a document via the terminal, by entering “quarto render”, followed by the name of the document.\n\n\n\nThis results in a similar process to what we saw with the preview button\n\n\n\nWe can also open the .html document via our File Explorer, which will open it within our web browser.\n\n\n\nQuarto documents can be rendered (previewed) in other formats besides html. These include pdf, Word documents (docx), and slides (revealjs). This is set by the format argument within the YAML header.\n\n\nPDF\nBy switching the format argument from html to pdf, we can render the document as a pdf\n\n\n\nWe can see the pdf is now listed in the list of files, with a preview shown on the right side-bar.\n\n\n\n\n\nDocx\nWe can also generate Word documents (.docx) as well.\n\n\n\nIn this case, we can see that a Word document file was created, but nothing appears in the View tab. This is because the format is not yet supported for the View tab. 
We can however open and view the Word document via our File explorer.\n\n\n\nWhich shows a Word Document style output.", + "crumbs": [ + "About", + "Getting Started", + "00 - Quarto" + ] }, { - "objectID": "course/00_Quarto/slides.html#yaml", - "href": "course/00_Quarto/slides.html#yaml", + "objectID": "course/00_Quarto/index.html#yaml", + "href": "course/00_Quarto/index.html#yaml", "title": "Introduction to Quarto", "section": "YAML", - "text": "YAML\n\n\n\n\n\n\n\n\n.\n\n\nWe can additionally provide additional custom inputs to the YAML header. A couple examples include providing the document author and date." + "text": "YAML\nWe can also provide additional custom inputs to the YAML header. A couple examples include providing the document author and date.\n\n\n\nWhich we can see are updated after we preview/render.", + "crumbs": [ + "About", + "Getting Started", + "00 - Quarto" + ] }, { - "objectID": "course/00_Quarto/slides.html#table-of-contents", - "href": "course/00_Quarto/slides.html#table-of-contents", + "objectID": "course/00_Quarto/index.html#table-of-contents", + "href": "course/00_Quarto/index.html#table-of-contents", "title": "Introduction to Quarto", "section": "Table of Contents", - "text": "Table of Contents\n\n\n\n\n\n\n\n\n.\n\n\nIn the previous section, we saw that we could provide headings and subheadings to our .qmd file by placing a # at the start of a line in the text portion of the document. A subheading was designated by a ##, with additional hierarchy being designated by appending an additional #." + "text": "Table of Contents\nIn the previous section, we saw that we could provide headings and subheadings to our .qmd file by placing a # at the start of a line in the text portion of the document. A subheading was designated by a ##, with additional hierarchy being designated by appending an additional #.\n\n\n\nWe can use the heading information to generate a table of contents for our document. 
To do this, we add a toc argument to the YAML header, and set it to TRUE. After rendering, it appears on the upper-right side of the document.\n\n\n\nNotice that the subheaders do not currently appear within the TOC.\n\n\n\nWe can fix this by setting a toc-expand argument in the YAML to true.", + "crumbs": [ + "About", + "Getting Started", + "00 - Quarto" + ] }, { - "objectID": "course/00_Quarto/slides.html#code-chunk-arguments", - "href": "course/00_Quarto/slides.html#code-chunk-arguments", + "objectID": "course/00_Quarto/index.html#code-chunk-arguments", + "href": "course/00_Quarto/index.html#code-chunk-arguments", "title": "Introduction to Quarto", "section": "Code Chunk Arguments", - "text": "Code Chunk Arguments\n\n\n\n\n\n\n\n\n.\n\n\nAs we briefly touched on in the last section, code-chunks can be modified by including arguments, which affect whether a particular code chunk gets evaluated. In that example, we included a “#| eval: FALSE” to the install commands since we did not want them to be re-run subsequently. We will take a closer look at the other arguments in this section.\n\n\n\n\n\nEval\n\n\n\n\n\n\n\n\n.\n\n\nThe code-chunk argument, “Eval”, is used to determine when a code-chunk get’s evaluated. When set to true (or by default if no eval argument is included), the code-chunks contents will be run/executed, and the output will appear. We can see this in the html output, as below the code block, we get back the address of my working directory." + "text": "Code Chunk Arguments\nAs we briefly touched on in the last section, code-chunks can be modified by including arguments, which affect whether a particular code chunk gets evaluated. In that example, we included a “#| eval: FALSE” to the install commands since we did not want them to be re-run subsequently. We will take a closer look at the other arguments in this section.\n\nEval\nThe code-chunk argument, “Eval”, is used to determine when a code-chunk gets evaluated. 
When set to true (or by default if no eval argument is included), the code-chunk’s contents will be run/executed, and the output will appear. We can see this in the html output, as below the code block, we get back the address of my working directory.\n\n\n\nWhen we switch the Eval argument to FALSE, and then render the document, we can see that the code block remains, but we do not get any output for the code contained within.\nIn everyday practice, we will use “eval: FALSE” arguments when we want to keep the code for later use, but want to manually run the code contained within ourselves.\n\n\n\n\n\nEcho\nThe code-block argument “echo” dictates whether the code within the code-block is displayed within the document. So in the case when “echo: true”, we get both the code displayed, as well as the output that gets returned by the code.\n\n\n\nBy contrast, when “echo: FALSE”, we do not have the code displayed, but do get the output of that code being run.\nIn daily practice, “echo: FALSE” is often used when generating plots that we want to include in the report, without the code that generated them being displayed.\n\n\n\n\n\nInclude\nThe next code-chunk argument is include. Unlike echo, which focuses on whether the code is displayed, but still returns the output, include dictates the behavior of both the code-block and its output. Unlike eval however, it will still run the code, which allows it to be available for the next code-chunk that might need it. When we set “include: false”, no trace of that code-chunk is present in the document. This is useful when making reports where we do not want to include the code used to generate a particular figure.\n\n\n\nBy contrast, when we set “include: true”, the code block and its output are once again included within the rendered document.\n\n\n\n\n\nCode-Fold\nOne of my favorites is “code-fold”. 
When we set it as “code-fold: show”, it displays the code, but provides a drop-down arrow that can be closed to compress the code.\n\n\n\nIn contrast, if we want to make the code available for those who are interested, but not directly visible, we can set it as “code-fold: true”\n\n\n\n\n\nWarnings\nWithin R, when code is executed, in addition to returning the output, R is capable of returning warnings (when something is not as expected, but not sufficient to elicit an error with a complete stop) or a message (text output that gets displayed, often telling about progress). While these are useful when running code yourself, it can be annoying when generating a report and the 2nd page is a bunch of warning text being displayed.\nFor example, when the R package ggcyto is loaded via the library call, it will automatically load several other packages, which typically results in these messages being output:\n\n\n\nWe can therefore set that code-chunk’s warning/message arguments to FALSE, thereby silencing the message outputs that would otherwise clutter up our report.", + "crumbs": [ + "About", + "Getting Started", + "00 - Quarto" + ] }, { - "objectID": "course/00_Quarto/slides.html#text-styles", - "href": "course/00_Quarto/slides.html#text-styles", + "objectID": "course/00_Quarto/index.html#text-styles", + "href": "course/00_Quarto/index.html#text-styles", "title": "Introduction to Quarto", "section": "Text Styles", - "text": "Text Styles\n\n\n\n\n\n\n\n\n.\n\n\nQuarto primarily uses Markdown for text styling. Consequently, markdown arguments can be used within the text to change how various text appears." + "text": "Text Styles\nQuarto primarily uses Markdown for text styling. 
Consequently, markdown arguments can be used within the text to change how various text appears.\n\n\n\nFor regular text, a single asterisk on each side of a word will italicize it.\n\n\n\nWhen the number of asterisks is doubled, the word is bolded.\n\n\n\nWhen three asterisks are used, both are applied.\n\n\n\nFor an underline, the word of interest is surrounded by square brackets “[]”, with “{.underline}” adjacent.",
    "crumbs": [
      "About",
      "Getting Started",
      "00 - Quarto"
    ]
  },
  {
    "objectID": "course/00_Quarto/slides.html#hyperlinks",
    "href": "course/00_Quarto/slides.html#hyperlinks",
    "title": "Introduction to Quarto",
    "section": "Hyperlinks",
    "text": "Hyperlinks\n\n\n\n\n\n\n\n\n.\n\n\nYou can link to a website by surrounding word of interest in [] and placing the url within () adjacent to it."
  },
  {
    "objectID": "course/00_Quarto/index.html#hyperlinks",
    "href": "course/00_Quarto/index.html#hyperlinks",
    "title": "Introduction to Quarto",
    "section": "Hyperlinks",
    "text": "Hyperlinks\nYou can link to a website by surrounding the word of interest in [] and placing the url within () adjacent to it.",
    "crumbs": [
      "About",
      "Getting Started",
      "00 - Quarto"
    ]
  },
  {
    "objectID": "course/00_Quarto/slides.html#images",
    "href": "course/00_Quarto/slides.html#images",
    "title": "Introduction to Quarto",
    "section": "Images",
    "text": "Images\n\n\n\n\n\n\n\n\n.\n\n\nYou can place images by adding the following, as long as the file.path to the image is correctly formatted. In my case, this is why I include images folders within my folders to simplify the copy and paste."
  },
  {
    "objectID": "course/00_Quarto/index.html#images",
    "href": "course/00_Quarto/index.html#images",
    "title": "Introduction to Quarto",
    "section": "Images",
    "text": "Images\nYou can place images by adding the following, as long as the file path to the image is correctly formatted. 
In my case, this is why I include an images folder within each of my folders, to simplify the copy and paste.",
    "crumbs": [
      "About",
      "Getting Started",
      "00 - Quarto"
    ]
  },
  {
    "objectID": "course/00_Positron/index.html",
    "href": "course/00_Positron/index.html",
    "title": "Using Positron",
    "section": "",
    "text": "For the YouTube livestream recording, see here\nFor screen-shot slides, click here",
    "crumbs": [
      "About",
      "Getting Started",
      "00 - Positron"
    ]
  },
  {
    "objectID": "course/00_Positron/slides.html#console",
    "href": "course/00_Positron/slides.html#console",
    "title": "Using Positron",
    "section": "Console",
    "text": "Console\n\n\n\n\n\n\n\n\n.\n\n\nAt the bottom of the sceen, you will first see the Console Tab. This is the tab where your lines of code when executed (run) will appear, as well as any messages, warnings or errors that get returned. On the right side of the console, you can find several buttons, among them restart R and delete session (for when you need a fresh start), and clear console (which keeps all previously run outputs and objects, but clears away the displayed text within the console)."
  },
  {
    "objectID": "course/00_Positron/index.html#console",
    "href": "course/00_Positron/index.html#console",
    "title": "Using Positron",
    "section": "Console",
    "text": "Console\nAt the bottom of the screen, you will first see the Console Tab. This is the tab where your lines of code, when executed (run), will appear, as well as any messages, warnings or errors that get returned. 
On the right side of the console, you can find several buttons, among them restart R and delete session (for when you need a fresh start), and clear console (which keeps all previously run outputs and objects, but clears away the displayed text within the console).",
    "crumbs": [
      "About",
      "Getting Started",
      "00 - Positron"
    ]
  },
  {
    "objectID": "course/00_Positron/slides.html#terminal",
    "href": "course/00_Positron/slides.html#terminal",
    "title": "Using Positron",
    "section": "Terminal",
    "text": "Terminal\n\n\n\n\n\n\n\n\n.\n\n\nRight next to the Console tab is your Terminal tab. While the console tab is primarily used to run R code within Positron, the terminal is the interface where code containing system commands directed at at your computer is entered. We will use this less frequently, primarily in two context: 1) rendering Quarto documents, and 2) commiting changes to version control. Among the buttons on the right-side of the terminal to make note of are the + button to add a new terminal, and the trash/garbage can button to kill (stop) the terminal."
  },
  {
    "objectID": "course/00_Positron/index.html#terminal",
    "href": "course/00_Positron/index.html#terminal",
    "title": "Using Positron",
    "section": "Terminal",
    "text": "Terminal\nRight next to the Console tab is your Terminal tab. While the console tab is primarily used to run R code within Positron, the terminal is the interface where code containing system commands directed at your computer is entered. We will use this less frequently, primarily in two contexts: 1) rendering Quarto documents, and 2) committing changes to version control. Among the buttons on the right-side of the terminal to make note of are the + button to add a new terminal, and the trash/garbage can button to kill (stop) the terminal.\n\n\n\nThe other tabs (Problems, Output, Ports, Debug Console) are used less frequently. 
I usually check Problems and the Debug Console when something goes wrong with the code, as various warning and error messages will end up being displayed there.",
    "crumbs": [
      "About",
      "Getting Started",
      "00 - Positron"
    ]
  },
  {
    "objectID": "course/00_Positron/slides.html#help",
    "href": "course/00_Positron/slides.html#help",
    "title": "Using Positron",
    "section": "Help",
    "text": "Help\n\n\n\n\n\n\n\n\n.\n\n\nWhen trying to evaluate how a particular function is working in R, you can hover over it and positron will open up the documentation for that particular function if available, alternatively, you can enter ?theParticularFunctionsName in the console and hit enter to similarly view what is occuring."
  },
  {
    "objectID": "course/00_Positron/index.html#help",
    "href": "course/00_Positron/index.html#help",
    "title": "Using Positron",
    "section": "Help",
    "text": "Help\nWhen trying to evaluate how a particular function is working in R, you can hover over it and Positron will open up the documentation for that particular function, if available. Alternatively, you can enter ?theParticularFunctionsName in the console and hit Enter to similarly view what is occurring.",
    "crumbs": [
      "About",
      "Getting Started",
      "00 - Positron"
    ]
  },
  {
    "objectID": "course/00_Positron/slides.html#variables",
    "href": "course/00_Positron/slides.html#variables",
    "title": "Using Positron",
    "section": "Variables",
    "text": "Variables\n\n\n\n\n\n\n\n\n.\n\n\nOn the upper-portion of the Secondary Side Bar, we can find the Session window, containing the Variables tab. As you run (execute) lines of code, and different variables, objects and functions are created, these become visible under the variables tab on the upper right."
  },
  {
    "objectID": "course/00_Positron/index.html#variables",
    "href": "course/00_Positron/index.html#variables",
    "title": "Using Positron",
    "section": "Variables",
    "text": "Variables\nOn the upper-portion of the Secondary Side Bar, we can find the Session window, containing the Variables tab. 
As you run (execute) lines of code, and different variables, objects and functions are created, these become visible under the variables tab on the upper right.\n\n\n\nFor some types of objects (generally data.frames and other matrix-like objects), you can click on their listing under variables to expand to see additional details about the object (column names, etc.) as well as view a larger version which will appear within the Editor window.", + "crumbs": [ + "About", + "Getting Started", + "00 - Positron" + ] }, { - "objectID": "course/00_Positron/slides.html#plots", - "href": "course/00_Positron/slides.html#plots", + "objectID": "course/00_Positron/index.html#plots", + "href": "course/00_Positron/index.html#plots", "title": "Using Positron", "section": "Plots", - "text": "Plots\n\n\n\n\n\n\n\n\n.\n\n\nSimilarly, any generated Plots or Documents will appear within the Secondary Side Bar, either under Plots (bottom) or Viewer (top) tabs." + "text": "Plots\nSimilarly, any generated Plots or Documents will appear within the Secondary Side Bar, either under Plots (bottom) or Viewer (top) tabs.", + "crumbs": [ + "About", + "Getting Started", + "00 - Positron" + ] }, { - "objectID": "course/00_Positron/slides.html#view", - "href": "course/00_Positron/slides.html#view", + "objectID": "course/00_Positron/index.html#view", + "href": "course/00_Positron/index.html#view", "title": "Using Positron", "section": "View", - "text": "View\n\n\n\n\n\n\n\n\n.\n\n\nOn the upper bar multiple tabs can be found, which we will explore in due time. Most useful to point out is the View tab. If you accidentally close your console, session or plots window, and are trying to get them to reapper, you would need to reselect them from this tab." + "text": "View\nOn the upper bar multiple tabs can be found, which we will explore in due time. Most useful to point out is the View tab. 
If you accidentally close your console, session, or plots window and are trying to get them to reappear, you would need to reselect them from this tab.",
    "crumbs": [
      "About",
      "Getting Started",
      "00 - Positron"
    ]
  },
  {
    "objectID": "course/00_Positron/slides.html#pages",
    "href": "course/00_Positron/slides.html#pages",
    "title": "Using Positron",
    "section": "Pages",
    "text": "Pages\n\n\n\n\n\n\n\n\n.\n\n\nThe pages tab and the left-side bar show you everything that is currently within your project folder, including all the folders, and files. Once version control with Git is initiated, new files are relected showing up as green text and a dot, while modified tracked files are reflected by light brown text and a dot."
  },
  {
    "objectID": "course/00_Positron/index.html#pages",
    "href": "course/00_Positron/index.html#pages",
    "title": "Using Positron",
    "section": "Pages",
    "text": "Pages\nThe pages tab and the left-side bar show you everything that is currently within your project folder, including all the folders and files. Once version control with Git is initiated, new files are reflected as green text and a dot, while modified tracked files are reflected by light brown text and a dot.\n\n\n\nThe dropdown arrows can be used to open and close specific folders to allow for better organization. There is also a scrollbar on the right-side of the side-bar to scroll through the entire folder’s contents.",
    "crumbs": [
      "About",
      "Getting Started",
      "00 - Positron"
    ]
  },
  {
    "objectID": "course/00_Positron/slides.html#search",
    "href": "course/00_Positron/slides.html#search",
    "title": "Using Positron",
    "section": "Search",
    "text": "Search\n\n\n\n\n\n\n\n\n.\n\n\nThe search tab on the left side bar is something that I use routinely."
+ "text": "Search\nThe search tab on the left side bar is something that I use routinely.\n\n\n\nIt can help locate code that you had been working on, but have since forgotten where it is at. Here is an example of finding the files where I had used a function that needed modifying within a local project folder’s files.\n\n\n\nSimilarly, if you need to replace a particular character string with another, the replace with field below can help simplify the task without having to track down and change 20 lines across 5 files.", + "crumbs": [ + "About", + "Getting Started", + "00 - Positron" + ] }, { - "objectID": "course/00_Positron/slides.html#extensions", - "href": "course/00_Positron/slides.html#extensions", + "objectID": "course/00_Positron/index.html#extensions", + "href": "course/00_Positron/index.html#extensions", "title": "Using Positron", "section": "Extensions", - "text": "Extensions\n\n\n\n\n\n\n\n\n.\n\n\nOn the far-left side we can find the Activity bar, which contains several tabs. Which tab you have selected will then dictate the contents of your left side-bar.\nOccupying the left side bar are several tabs. One of these is Extensions, which shows “Plugins” (or the VScode equivalent) that extend the functionality of Positron further. The ones you have installed may vary, but the main ones in context of this course are Air (provides color and highlights syntax for R code to make interpretation easier) as well as Quarto (for rendering the various document types)." + "text": "Extensions\nOn the far-left side we can find the Activity bar, which contains several tabs. Which tab you have selected will then dictate the contents of your left side-bar.\nOccupying the left side bar are several tabs. One of these is Extensions, which shows “Plugins” (or the VScode equivalent) that extend the functionality of Positron further. 
The ones you have installed may vary, but the main ones in the context of this course are Air (provides color and highlights syntax for R code to make interpretation easier) as well as Quarto (for rendering the various document types).",
    "crumbs": [
      "About",
      "Getting Started",
      "00 - Positron"
    ]
  },
  {
    "objectID": "course/00_Positron/slides.html#git",
    "href": "course/00_Positron/slides.html#git",
    "title": "Using Positron",
    "section": "Git",
    "text": "Git\n\n\n\n\n\n\n\n\n.\n\n\nThe Git tab on the left side bar is where once version control is initiated for the project folder, we can see changes that have occurred to the individual files since the last commit. These changes can be added to a new commit by clicking on the + sign. This will be covered more extensively in the next section"
  },
  {
    "objectID": "course/00_Positron/index.html#git",
    "href": "course/00_Positron/index.html#git",
    "title": "Using Positron",
    "section": "Git",
    "text": "Git\nThe Git tab on the left side bar is where, once version control is initiated for the project folder, we can see changes that have occurred to the individual files since the last commit. These changes can be added to a new commit by clicking on the + sign. This will be covered more extensively in the next section.\n\n\n\nSimilarly, if you want to discard a change that has occurred, the circular arrow will revert to the last committed version. 
Selecting and pressing the delete button will similarly work.\n\n\n\nSelecting the … options will highlight all the various git functions, some of which we will cover more extensively in the next section and throughout the course.", + "crumbs": [ + "About", + "Getting Started", + "00 - Positron" + ] }, { - "objectID": "course/00_Homeworks/slides.html#discussions-forum", - "href": "course/00_Homeworks/slides.html#discussions-forum", + "objectID": "course/00_Homeworks/index.html", + "href": "course/00_Homeworks/index.html", + "title": "Getting Help", + "section": "", + "text": "For the YouTube livestream recording, see here\nFor screen-shot slides, click here", + "crumbs": [ + "About", + "Getting Started", + "00 - Getting Help" + ] + }, + { + "objectID": "course/00_Homeworks/index.html#discussions-forum", + "href": "course/00_Homeworks/index.html#discussions-forum", "title": "Getting Help", "section": "Discussions Forum", - "text": "Discussions Forum\n\n\n\n\n\n\n\n\n.\n\n\nOn the course’s GitHub repository, we have opened up a Discussions page that we plan to use as a community forum. We hope that it will serve multiple functions, from providing a better sense of community for the online participants, to facilitating asking and receiving help on something that is not clear, provide feedback about something that it not working out, as well as a place to celebrate and show off your coding wins." + "text": "Discussions Forum\nOn the course’s GitHub repository, we have opened up a Discussions page that we plan to use as a community forum. 
We hope that it will serve multiple functions, from providing a better sense of community for the online participants, to facilitating asking and receiving help on something that is not clear, providing feedback about something that is not working out, and serving as a place to celebrate and show off your coding wins.\n\nTo keep the Discussions forum semi-organized, we have set up several categories; please select the appropriate category when opening a new discussion!\n\n\n\nCode of Conduct\n\nWe ask that all course participants read and adhere to the spirit of our Code of Conduct. We are all human, at different points in our learning journeys, so what may be obvious to you at your point in the journey may not necessarily be obvious to someone just getting started. This course is our giving back to those in the community, but is on a voluntary basis in addition to our regular workload. While we try to reply quickly, sometimes our cell sorters fully melt down, sending everything into chaos. We will reply when we can.\n\n\n\n\nAnnouncements\n\nWhen we send out an email to all participants, we will also repost it as an announcement. This ensures that even if you are not on our mailing list, you will still have access to important information and course updates.\n\n\n\n\n\n\n\nGeneral\n\nThis category can be used for any discussions that you think are worth having that don’t fall under any of the other categories. Good examples are continuing a discussion that was held during one of the livestreams; wanting to discuss and dive further into a given week’s topic; or bringing in additional resources that you found useful to understanding something that didn’t click initially. This space is for the community to shape as they see best.\n\n\n\n\n\n\n\nIdeas\n\nHave an idea for a new topic or a way to improve the course? We would love to hear it. 
Provide as many details as possible, and ideally an example, and if it is doable, we will try to implement it.\n\n\n\n\n\n\n\nIntroductions\n\nOnline courses can be odd in terms of replicating in-person dynamics. Fortunately, we have gathered the largest cohort of “cytometrists with no-to-little flow experience trying to learn R at the same time” that the world has ever seen, so best to take advantage of this while we can. Treat this section as if we had just met at a conference: tell us about yourself, what brings you here, and what you hope to be able to do after the course ends.",
    "crumbs": [
      "About",
      "Getting Started",
      "00 - Getting Help"
    ]
  },
  {
    "objectID": "course/00_Homeworks/slides.html#polls",
    "href": "course/00_Homeworks/slides.html#polls",
    "title": "Getting Help",
    "section": "Polls",
    "text": "Polls\n\n\n\n\n\n\n\n\n\n.\n\n\nOccasionally, we will need to gather community feedback on what is working and what is not working. We will sporadically post Polls for this purpose."
  },
  {
    "objectID": "course/00_Homeworks/index.html#polls",
    "href": "course/00_Homeworks/index.html#polls",
    "title": "Getting Help",
    "section": "Polls",
    "text": "Polls\n\nOccasionally, we will need to gather community feedback on what is working and what is not working. We will sporadically post Polls for this purpose.\n\n\n\n\n\n\nQ&A\n\nThe Questions and Answers (Q&A) section is where you go if something is not clear, not working, and you are trying to troubleshoot your way through it. First thing before posting: search to see if someone has already asked the question. If you don’t find anything, go ahead and open a new discussion.\nSince we are not at your computer and don’t have your dataset, when troubleshooting it is best to include a minimal reproducible example of the issue you are encountering, slimming down the number of files needed to be transferred, and generalizing down the code so that other course participants and instructors can follow along. 
If this is not doable, or if the problem requires added context (and larger files), create a new repository on your GitHub, make it public, and share the link to it in your post. The goal would be for us to download the folder and be able to replicate the issue that you are encountering.\n\n\n\n\n\n\n\n\n\n\nShow and Tell\n\nWhere the Q&A section is for getting help on code that is frustratingly not working, Show and Tell is where to go and celebrate when you finally get things to work. Share your wins, show us the extra pretty graphs, bizarre autofluorescence signatures, or odd outputs that just make you laugh.",
    "crumbs": [
      "About",
      "Getting Started",
      "00 - Getting Help"
    ]
  },
  {
    "objectID": "course/00_Homeworks/slides.html#issues",
    "href": "course/00_Homeworks/slides.html#issues",
    "title": "Getting Help",
    "section": "Issues",
    "text": "Issues"
  },
  {
    "objectID": "course/00_Homeworks/index.html#issues",
    "href": "course/00_Homeworks/index.html#issues",
    "title": "Getting Help",
    "section": "Issues",
    "text": "Issues\nMost of the time, if you are having trouble getting your code to run, your first stop after some initial troubleshooting should be to open a new Discussion under the Q&A category. Here you will be able to get both community and instructor help and suggestions to hopefully resolve whatever is going on.\n\nThe Issues page is primarily meant for course-specific problems that require course instructor intervention to fix. For example, we release a new week of material, and while it runs fine for both Windows and Linux, the code fails to run for all MacOS users. While you may be able to find workarounds on your own, it’s ultimately our responsibility to help provide a solution so that everyone can move forward. This is the situation where opening an Issue is appropriate.\n\n\n\nSimilarly, if our code contains a wrong argument, is returning a deprecation warning, etc., open an issue to let us know. 
While we may not be able to fix something that is not directly related to our code, we can redirect it to the package maintainers so that they can fix the issue.\nAnd likewise, if you find multiple typos in the documentation, you can open an issue and propose carrying out a pull-request to fix them.",
    "crumbs": [
      "About",
      "Getting Started",
      "00 - Getting Help"
    ]
  },
  {
    "objectID": "course/00_Homeworks/slides.html#submitting-take-home-problems",
    "href": "course/00_Homeworks/slides.html#submitting-take-home-problems",
    "title": "Getting Help",
    "section": "Submitting Take-Home Problems",
    "text": "Submitting Take-Home Problems\n\n\n\n\n\n\n\n\n.\n\n\nEach week, during the course, we introduce and cover the main concepts for the particular concept. Our goal is to provide you with the necessary code and enough code to be able to get the jist. However, to become comfortable and be able to apply what you have learned, you will need to explore beyond our examples, try it with your own datasets, encounter things that don’t work, and troubleshoot your way through them. It’s this cycle of venturing into the unknown that develops strong coding skills that are needed to overcome any barrier you encounter. The goal of the take-home questions is to provide some less curated problems that will take a little longer to answer to help get you started on your own exploration of the topic."
  },
  {
    "objectID": "course/00_Homeworks/index.html#submitting-take-home-problems",
    "href": "course/00_Homeworks/index.html#submitting-take-home-problems",
    "title": "Getting Help",
    "section": "Submitting Take-Home Problems",
    "text": "Submitting Take-Home Problems\nEach week during the course, we introduce and cover the main concepts for that week’s topic. Our goal is to provide you with the necessary background and enough code to be able to get the gist. 
However, to become comfortable and be able to apply what you have learned, you will need to explore beyond our examples, try it with your own datasets, encounter things that don’t work, and troubleshoot your way through them. It’s this cycle of venturing into the unknown that develops the strong coding skills needed to overcome any barrier you encounter. The goal of the take-home questions is to provide some less curated problems that will take a little longer to answer, to help get you started on your own exploration of the topic.\nAs previously mentioned, these take-home problems are completely optional. If you are in the middle of solving them and want to seek feedback from the community and course instructors, opening a Discussion under the General category is the way to go.\nHowever, if you have completed them, and want course instructor feedback, you can submit them to us in the form of a pull-request to the CytometryInR repo’s homework branch. We will take a look, offer constructive suggestions, and, when ready, merge the solution. This will also result in GitHub listing you as a contributor to the course.\nWe will outline the basic steps of how to set up and open a pull-request, to help simplify the process.\n\nSync your Fork\nFirst off, make sure to Sync your fork of the Cytometry in R project. That makes sure that all the commits present are up-to-date and simplifies the process of merging the pull-request.\n\n\n\n\n\n\n\n\n\n\n\nPull to Local\nHaving Synced your branch on GitHub, return to your computer, open the CytometryInR repository, and pull in the changes locally.\n\n\n\n\nCreate own Folder under Homeworks\nUnder the course folder, you will find folders for each week. Within these folders, find the homework folder. This will appear empty except for a README file with instructions. 
It is within this folder that you will need to create your own folder.\nTo ensure there are no conflicts on the pull-request merge, please use your GitHub username as the folder name.\n\n\n\nOnce you have your folder inside homework, go ahead and copy anything you are turning in from their respective working project folders. Remember, a minimal reproducible example is the goal. Rendered Quarto documents are preferred, but we will also accept scripts, small data files, and images, along with a README.md file containing anything you want us to know.\n\n\n\n\n\n\n\n\nSign off Commit\nNow that everything is present, Sign Off and Commit the change.\n\n\n\n\n\nPush Branch to GitHub.\nProceed to push the branch to GitHub.",
    "crumbs": [
      "About",
      "Getting Started",
      "00 - Getting Help"
    ]
  },
  {
    "objectID": "course/00_GitHub/index.html",
    "href": "course/00_GitHub/index.html",
    "title": "Using GitHub",
    "section": "",
    "text": "For the YouTube livestream recording, see here\nFor screen-shot slides, click here",
    "crumbs": [
      "About",
      "Getting Started",
      "00 - GitHub"
    ]
  },
  {
    "objectID": "course/00_GitHub/slides.html#creating-an-account",
    "href": "course/00_GitHub/slides.html#creating-an-account",
    "title": "Using GitHub",
    "section": "Creating an Account",
    "text": "Creating an Account\n\n\n\n\n\n\n\n\n.\n\n\nWe will first navigate to the GitHub homepage. If you haven’t previously created an account, click on the button to sign up for an account."
  },
  {
    "objectID": "course/00_GitHub/index.html#creating-an-account",
    "href": "course/00_GitHub/index.html#creating-an-account",
    "title": "Using GitHub",
    "section": "Creating an Account",
    "text": "Creating an Account\nWe will first navigate to the GitHub homepage. If you haven’t previously created an account, click on the button to sign up for an account.\n\n\n\nOn the sign-up page, you will fill in various details needed to create an account. Please remember that GitHub usernames are visible to others. 
Additionally, if you end up sharing code with others as part of a manuscript, or use GitHub to create a personal portfolio website in the future, your username will appear as part of the URL.\nFor example, in my case, my user name is DavidRach, so my GitHub profile ends up as: https://github.com/DavidRach. For our course, the core’s GitHub user name is UMGCCCFCSR, so the GitHub profile ends up as https://github.com/UMGCCCFCSR, while the course website ends up as https://umgcccfcsr.github.io/CytometryInR/\n\n\n\nOnce you have entered your new account information, you will need to confirm your account creation by entering the code sent to the email address that you provided.\n\n\n\nOnce account creation has been confirmed, please proceed to log in to GitHub for the first time.",
    "crumbs": [
      "About",
      "Getting Started",
      "00 - GitHub"
    ]
  },
  {
    "objectID": "course/00_GitHub/slides.html#github-profile",
    "href": "course/00_GitHub/slides.html#github-profile",
    "title": "Using GitHub",
    "section": "GitHub Profile",
    "text": "GitHub Profile\n\n\n\n\n\n\n\n\n.\n\n\nUpon creating a brand new account, your GitHub homepage will initially look rather empty, and can be intimidating to navigate for the first time.\nFor now, on the upper right, go ahead and click on the default profile picture icon…"
  },
  {
    "objectID": "course/00_GitHub/index.html#github-profile",
    "href": "course/00_GitHub/index.html#github-profile",
    "title": "Using GitHub",
    "section": "GitHub Profile",
    "text": "GitHub Profile\nUpon creating a brand new account, your GitHub homepage will initially look rather empty, and can be intimidating to navigate for the first time.\nFor now, on the upper right, go ahead and click on the default profile picture icon…\n\n\n\nAnd then select Profile…\n\n\n\nYou are now on your public GitHub profile page. 
For a newly created account, it will look something like this:\n\n\n\nFor a more established account, this page will look a little different, and can be customized to highlight various projects that you are working on.\nFor this course, we will have you set up a basic GitHub profile page for now, although you are free to customize and personalize it as much as you may want to in the future!\nTo start, first select the edit profile button on the left below the default profile icon.\n\n\n\nYou can then proceed to fill in any details that you feel are relevant and are comfortable sharing.\n\n\n\nWith the quick access details filled in, it is now time to navigate to the Settings tab. You will return to the previous menu dropdown on the upper right, and instead of selecting Profile, click on the Settings option.\n\n\n\nYou should now end up within your Public Profile Settings page.\nFeel free to edit the default profile picture, and any other fields that you feel are relevant. Once done, continue to scroll down the page past ORCID ID.\n\n\n\nWhen you reach Contributions and Activity, go ahead and select the option to include private repositories in the activity summary graphic. Then scroll down and click save. You will now be returned to your GitHub profile page.\n\n\n\nAt the top of the profile, you will see a “Your contributions” calendar graph. For a new account, it will look like this:\n\n\n\nIf you are just starting out, this chart will be mostly empty, but will fill in as you work on projects; see here for an example.\nEvery time you save your code (i.e., make a commit), the activity will be reflected in this chart. By clicking the option in settings, code made within a private repository will remain private, but will count toward your contribution chart. 
As you progress through the course, this will provide a nice visual reminder of the progress you have made, and the obstacles that you have overcome.",
    "crumbs": [
      "About",
      "Getting Started",
      "00 - GitHub"
    ]
  },
  {
    "objectID": "course/00_GitHub/slides.html#github-readme",
    "href": "course/00_GitHub/slides.html#github-readme",
    "title": "Using GitHub",
    "section": "GitHub ReadMe",
    "text": "GitHub ReadMe\n\n\n\n\n\n\n\n\n.\n\n\nWith this done, we modify your GitHub profile by adding one customized element, a ReadMe page. This will be used for a couple projects during the course, and can be personalized further in the future.\nTo create a ReadMe page for your profile, we will navigate to the upper right of the screen and click on the + sign."
  },
  {
    "objectID": "course/00_GitHub/index.html#github-readme",
    "href": "course/00_GitHub/index.html#github-readme",
    "title": "Using GitHub",
    "section": "GitHub ReadMe",
    "text": "GitHub ReadMe\nWith this done, we will modify your GitHub profile by adding one customized element, a ReadMe page. This will be used for a couple of projects during the course, and can be personalized further in the future.\nTo create a ReadMe page for your profile, we will navigate to the upper right of the screen and click on the + sign.\n\n\n\nWe will then select the Create New Repository option.\n\n\n\nYou will next create a repository (folder), naming it exactly the same as your username. This will be recognized by GitHub as a special type of repository corresponding to the ReadMe section of your profile.\nFor options, leave the visibility as Public and Add README set to On, and proceed to Create Repository.\n\n\n\nHaving created the repository (folder), you will see it has been populated by a few default files. For now, you will be editing the README.md file. 
On a new repository, the easiest way to access it is by clicking the green option on the right side of your screen.\n\n\n\nWith the README.md file now opened, you will be able to see generic filler text that is suggested by GitHub.\nFor this course, I will ask you to add a couple elements for now. You are free to return and further personalize it later if you wish to do so.\n\n\n\nThe type of file that we are working with is a Markdown file, which can allow for a bunch of customizations which we will cover throughout the course.\nFor now, please add and customize the following questions:\nCytometry In R\nLocation: Baltimore, Maryland, USA\nMy Favorite Fluorophore/Metal-Isotope: Spark Blue 550\nPrevious Coding Experience: Repeatedly Calling IT\nWhat I Hope to Get From This Course: A faster way to match FlowSOM clusters to their likely cell type.\n\n\n\nNext, to save you will select the green “Commit changes” button. We will cover the meaning of “Commit” more in-depth during the Git section.\nFor now, write a short summary of the change you made to the file in the “Commit message”, and any additional details within the “Extended description” field. When ready, click the green “Commit changes” button.\n\n\n\nYou will now be able to see the updated README.md file, as you can see in our example below. To make additional edits, you would select the pencil icon on the right-center side of the screen.\n\n\n\nNext, navigate back to your profile page (by clicking on either your username or the Overview option on the tabs).\nYou will see that the README file contents are now displayed on the upper portion of your GitHub profile. Feel free to circle back and customize this further to your liking.\nIn this last example, we created your first repository (folder). Since this is public, it is now shown below the README section of the profile under your repositories. 
You can also see that your commits made in the process of making the changes are now shown both in the Contributions graph, and under the Contributor Activity summary at the bottom of the page.", + "crumbs": [ + "About", + "Getting Started", + "00 - GitHub" + ] }, { - "objectID": "course/00_GitHub/slides.html#github-repository", - "href": "course/00_GitHub/slides.html#github-repository", + "objectID": "course/00_GitHub/index.html#github-repository", + "href": "course/00_GitHub/index.html#github-repository", "title": "Using GitHub", "section": "GitHub Repository", - "text": "GitHub Repository\n\n\n\n\n\n\n\n\n.\n\n\nHaving set up your GitHub profile, it now is time to make sure you have access to our course materials. We will have you navigate to our course’s GitHub profile\nOn the profile page, you will be able to see our version of the README, our repositories, and the Contributions graph and Contribution activity sections.\nPlease click on the CytometryInR to navigate to its repository (folder)" + "text": "GitHub Repository\nHaving set up your GitHub profile, it is now time to make sure you have access to our course materials. We will have you navigate to our course’s GitHub profile.\nOn the profile page, you will be able to see our version of the README, our repositories, and the Contributions graph and Contribution activity sections.\nPlease click on CytometryInR to navigate to its repository (folder).\n\n\n\nOn this page, you will see several elements that you will be circling back to throughout the course.\nFor our course, we will be extensively using the Discussions page as a community forum. If you have any questions, are looking for feedback, or want to show off something that you worked on, this is the place for it. 
This will also help make sure that everyone taking the course can benefit from the answers.\n\n\n\nThe Issues tab is where you will need to go to open an Issue if you encounter a bug (or major documentation typo), so that I can circle back and correct them when I have the chance.\n\n\n\nTo submit the optional take-home problems, you would turn in these problems by going to the Pull Request tab, and initiating a pull request between your forked version of the project and our “homework” branch (more details on this later).\n\n\n\nOptionally, you can “Star” a repository. This is basically the GitHub equivalent of liking a project. In our case, we will often star a repository since it will be saved under the Stars tab of our profile, which makes finding it again significantly easier a few weeks later, after we have forgotten the repository name.\n\n\n\nTo see projects that you have starred, you can select the Stars option from the same dropdown you used to get to Settings.\n\n\n\nOr from your GitHub profile, you can see these under the Stars tab.", + "crumbs": [ + "About", + "Getting Started", + "00 - GitHub" + ] }, { - "objectID": "course/00_GitHub/slides.html#forking-cytometryinr", - "href": "course/00_GitHub/slides.html#forking-cytometryinr", + "objectID": "course/00_GitHub/index.html#forking-cytometryinr", + "href": "course/00_GitHub/index.html#forking-cytometryinr", "title": "Using GitHub", "section": "Forking CytometryInR", - "text": "Forking CytometryInR\n\n\n\n\n\n\n\n\n.\n\n\nBefore we go further, we will need you to make your own copy of the course repository (ie. fork it). This will allow you to quickly retrieve all the new materials and code corrections by simply rereshing (ie. syncing) your forked version with our upstream parent branch once a week." + "text": "Forking CytometryInR\nBefore we go further, we will need you to make your own copy of the course repository (ie. fork it). This will allow you to quickly retrieve all the new materials and code corrections by simply refreshing (ie. 
syncing) your forked version with our upstream parent branch once a week.\n\n\nTo fork the course repository, you will select the “Fork repository” option on the upper-center portion of your screen.\n\n\n\nBy “Fork-ing” a repository, you are basically copying the contents from that repository to a newly created repository on your own GitHub. Forked projects are still linked to the original (parent) repository, and can retrieve any updates via syncing, as well as return changes via a pull request.\nFor this course, when you create the fork, keep the existing repository name (“CytometryInR”). Importantly, select the copy main branch option. This will ensure you only get the code and data needed for the course copied over, and don’t end up with your entire hard-drive filled with website elements, or other people’s solutions to the take-home problems.\n\n\n\nOnce you have created the fork, you will see your copy of the forked repository under your own username. Seeing as you have just now forked the project, you will see the notification that you are up to date with the existing version of the CytometryInR course repository.\nAs we go through the course, and new material is released each week on Sunday at 2200 EST (Monday 0300 GMT+0), you will see this change to behind the main branch by a number of commits, and have the option to sync in the changes to your fork to gain access to that week’s material.\n\n\n\nIf you remember, previously under your GitHub profile, the Repositories tab only contained the repository corresponding to your ReadMe section.\n\n\n\nYou should however now be able to see your fork of the CytometryInR repository. 
As you add project-specific repositories throughout the course, they will also appear here.", + "crumbs": [ + "About", + "Getting Started", + "00 - GitHub" + ] }, { - "objectID": "course/00_Git/slides.html#new-folder-from-template", - "href": "course/00_Git/slides.html#new-folder-from-template", + "objectID": "course/00_Git/index.html", + "href": "course/00_Git/index.html", + "title": "Version Control with Git", + "section": "", + "text": "For the YouTube livestream recording, see here\nFor screen-shot slides, click here", + "crumbs": [ + "About", + "Getting Started", + "00 - Git" + ] + }, + { + "objectID": "course/00_Git/index.html#new-folder-from-template", + "href": "course/00_Git/index.html#new-folder-from-template", "title": "Version Control with Git", "section": "New Folder from Template", - "text": "New Folder from Template\n\n\n\n\n\n\n\n\n.\n\n\nSince Positron can use multiple programming languages, when we select “New Folder from Template” we will be asked what kind of folder template we want to use. Since we are working in R, we will select the “R Project” option." + "text": "New Folder from Template\nSince Positron can use multiple programming languages, when we select “New Folder from Template” we will be asked what kind of folder template we want to use. Since we are working in R, we will select the “R Project” option.\n\n\n\nWe will next be asked to name the new project folder and to select a storage location.\nOne thing I would like to remind everyone who is just starting to code is that it is best to avoid using special characters (ex. @ $ # ^ ! ; : ,) in any folder or file name. This is because when coding, these can be misinterpreted as commands.\nWhile spaces are generally okay, it is often best to stick to hyphens (-) and underscores (_). 
We will explore naming conventions in more depth at a later time.\n\n\n\nAnother useful thing to know when getting started with version control: it is best to save your files on your local computer, and to avoid using OneDrive or other cloud storage options for the time being. The reason is that permissions to write/save to cloud locations can sometimes be quite finicky, and some autosave/indexing behaviors can cause issues. Saving locally makes things easier to save or modify without running into permission issues. For most of our course examples, we will be saving our Project Folders under the Documents Folder.\nHaving named our new Project Folder, and designated a storage location, go ahead and check the Initialize Git Repository option. This will tell version control to monitor content and changes to files within this folder.\n\n\n\nThe next setup screen will verify which version of R you wish to use. Since we are just getting started, your most recent version of R (usually system) should work. 
We will also leave the “renv” (reproducible environment setup) option unchecked for the time being (we will revisit the concept later in the course).\n\n\n\nAnd if all goes well, we should see the “New Folder Created” popup.", + "crumbs": [ + "About", + "Getting Started", + "00 - Git" + ] }, { - "objectID": "course/00_Git/slides.html#creating-subfolders", - "href": "course/00_Git/slides.html#creating-subfolders", + "objectID": "course/00_Git/index.html#creating-subfolders", + "href": "course/00_Git/index.html#creating-subfolders", "title": "Version Control with Git", "section": "Creating SubFolders", - "text": "Creating SubFolders\n\n\n\n\n\n\n\n\n.\n\n\nOnce your new project folder has opened, you should be seeing the main layout elements that we briefly covered in the Positron walk-through.\nFor this section, we will primarily be focused on what is happening within the primary side bar on the left, where changes to the individual files within the folder since the last save/commit will be reflected by colored text.\nFor my own projects, there are some elements of organization that I go ahead and add for each new folder. These include both a data and an images subfolders to help keep things a little more organized.\nTo create these folders, we would click on the respective add folder (+) button on the side bar. Files and Folders can be clicked and dragged within the primary side bar to move things to new folder locations." + "text": "Creating SubFolders\nOnce your new project folder has opened, you should be seeing the main layout elements that we briefly covered in the Positron walk-through.\nFor this section, we will primarily be focused on what is happening within the primary side bar on the left, where changes to the individual files within the folder since the last save/commit will be reflected by colored text.\nFor my own projects, there are some elements of organization that I go ahead and add for each new folder. 
These include a data and an images subfolder to help keep things a little more organized.\nTo create these folders, we would click on the respective add folder (+) button on the side bar. Files and Folders can be clicked and dragged within the primary side bar to move things to new folder locations.", + "crumbs": [ + "About", + "Getting Started", + "00 - Git" + ] }, { - "objectID": "course/00_Git/slides.html#creating-files", - "href": "course/00_Git/slides.html#creating-files", + "objectID": "course/00_Git/index.html#creating-files", + "href": "course/00_Git/index.html#creating-files", "title": "Version Control with Git", "section": "Creating Files", - "text": "Creating Files\n\n\n\n\n\n\n\n\n.\n\n\nIn context of this course, we will primarily be working with two types of files when coding:\n\nR Scripts: These files end in .R. These contain only code (with occasional # comment line). These are often used for self-contained code that once we get them working we rarely need to modify.\nQuarto Markdowns: These files end in .qmd. They contain a .yaml header, followed by a mix of regular written text (often explanations or other documentation), and sections (ie. chunks) that contain code. These are used when we are still getting the code to work, when we need to modify inputs frequently, or simply when we need to document what and why we are doing something to make life easier for our future-self two months from now." + "text": "Creating Files\nIn the context of this course, we will primarily be working with two types of files when coding:\n\nR Scripts: These files end in .R. These contain only code (with occasional # comment line). These are often used for self-contained code that once we get them working we rarely need to modify.\nQuarto Markdowns: These files end in .qmd. They contain a .yaml header, followed by a mix of regular written text (often explanations or other documentation), and sections (ie. chunks) that contain code. 
These are used when we are still getting the code to work, when we need to modify inputs frequently, or simply when we need to document what and why we are doing something to make life easier for our future-self two months from now.\n\n\n\nIn this example, I will go ahead and select the new file icon.\n\n\n\nThen I will name the file, and designate it as a Quarto Markdown file by adding the .qmd at the end of the name to denote the file type.", + "crumbs": [ + "About", + "Getting Started", + "00 - Git" + ] }, { - "objectID": "course/00_Git/slides.html#qmd-files", - "href": "course/00_Git/slides.html#qmd-files", + "objectID": "course/00_Git/index.html#qmd-files", + "href": "course/00_Git/index.html#qmd-files", "title": "Version Control with Git", "section": "QMD Files", - "text": "QMD Files\n\n\n\n\n\n\n\n\n.\n\n\nOnce this is done, we can now see we have a new .qmd file (“Example.qmd” in this case).\n\n\n\n\n\nYAML\n\n\n\n\n\n\n\n\n.\n\n\nAs previously mentioned, the start of a Quarto Markdown file containg a YAML code chunk that is used to set formatting choices (we will explore this in-depth during the next section)\nWhat designates the location of the YAML block are three hyphens at the start, and three hyphens at the end. For this example, we will also provide a “title:” and “format:” field for the time being (see additional options here)." + "text": "QMD Files\nOnce this is done, we can now see we have a new .qmd file (“Example.qmd” in this case).\n\nYAML\nAs previously mentioned, the start of a Quarto Markdown file contains a YAML code chunk that is used to set formatting choices (we will explore this in-depth during the next section).\nWhat designates the location of the YAML block are three hyphens at the start, and three hyphens at the end. 
For this example, we will also provide a “title:” and “format:” field for the time being (see additional options here).\n\n\n\n\n\nText\nWith a basic YAML formatting block now in place, we can build out other elements of our Quarto Markdown document. Unless otherwise specified, everything else in the document is assumed to be text, so I will go ahead and provide an initial text description of what I am trying to do.\n\n\n\n\n\nCode-Chunks\nHaving provided some initial text for documentation, we can then add code-block chunks to start writing some code.\nThe easiest way to do this is to click the respective option on the upper-right of the Editor screen. Since Positron can handle multiple programming languages, when the chunk is inserted, we will need to select the language to be used within the code chunk (R in this case).\n\n\n\nYou will notice that the inserted code block starts off with three backticks (`) and then “{r}”. The end of the code block is denoted by an additional three backticks.\nWe can also add new code blocks by simply typing these elements into the location we want to place a code chunk (as long as we are careful to add 3 backticks also at the end).\n\n\n\n\n\n\n\n\nRunning Code\nNow that we have two code-chunks written, we can write lines of code within them. For this example, I will use two beginner-friendly functions, print(“Hello”), which will print the contents contained between the ” ” to the console, and getwd() which will return the location of the folder you are working within (ie. the working directory).\nTo run/execute these lines of code, we have a couple options. We can click on the Run Cell option that appears on the upper-left side of the code chunk. 
Additionally, it has a companion option that will run all code chunks above it.\n\n\n\nWhen a code block is successfully run, you will see within the console (lower bottom of the screen) the line of code being run, with any returned outputs appearing directly after.\n\n\n\nAn alternative to clicking the Run Cell button is to click on the line of code you are interested in running, then press (Ctrl + Enter)/(Command + Enter). This will execute the line of code that you have clicked on. This can be useful in scenarios where you want to run a specific line, and not the entire code-chunk.\n\n\n\nUsing this approach, you can see that the location (ie. file path) of the current working directory was returned to the Console.", + "crumbs": [ + "About", + "Getting Started", + "00 - Git" + ] }, { - "objectID": "course/00_Git/slides.html#local-version-control", - "href": "course/00_Git/slides.html#local-version-control", + "objectID": "course/00_Git/index.html#local-version-control", + "href": "course/00_Git/index.html#local-version-control", "title": "Version Control with Git", "section": "Local Version Control", - "text": "Local Version Control\n\n\n\n\n\n\n\n\n.\n\n\nHaving introduced the main elements of a Quarto Markdown file, let’s turn our attention to the tab within the editor showing our newly created .qmd file.\nWe can see there is a solid circle next to the file name, and it is appearing as green. The circle denotes unsaved changes, which we can correct by clicking on the Save Button to save the changes to our file." + "text": "Local Version Control\nHaving introduced the main elements of a Quarto Markdown file, let’s turn our attention to the tab within the editor showing our newly created .qmd file.\nWe can see there is a solid circle next to the file name, and it is appearing as green. 
The circle denotes unsaved changes, which we can correct by clicking on the Save Button to save the changes to our file.\n\n\n\n\nUntracked\nIf we turn our attention to the left primary sidebar, we can see that within our GitPractice folder there are three files, our Example.qmd, and the default README.md and .gitignore files. These all show up in green text with U’s to the right of the file names.\nThis denotes that the version control tracking software Git is currently considering them as “Untracked” files. While saving the document via the Save button means we will still have our changes when we reopen Positron, we won’t have any history of changes that we can use to revert back to the way all the files appeared at this exact point in time should something go wrong.\nWe will next go to the address bar on the very far left, and select the Git tab.\n\n\n\nOn the Git tab, we can see that each of the three files is shown underneath a “Changes” drop-down. This contains the files that have undergone changes since the last commit. In our case, since we haven’t updated the save-state yet, this last commit would be the initial creation of the project folder.\n\n\n\nTo have version control track these individual files going forward, we can proceed in two separate ways. We can add them individually by clicking the + symbol next to the individual names.\n\n\n\n\n\nStaged\nThis will result in the files being moved to the “Staged” dropdown. This denotes files being tracked with the intention of being recorded as the next save-state or waypoint (ie. a commit).\n\n\n\n\n\nCommit\nTo create a new commit (save-state or waypoint), once we have the files we want to track staged, we will write a commit message, and then press commit.\nA commit message is a brief description of the changes that have occurred to the files between this commit and the previous one. 
Make this short description informative enough that if you need to revert back in the future, you can quickly identify the commit you need to fall back to (more about this later).\n\n\n\nIf this is your first time using version control, you will likely encounter the following pop-up asking that you provide a user.name and user.email. This is used to designate the author of the changes. If you get this popup, go ahead and select “Open Git Log”.\n\n\n\n\n\nUserName and UserEmail\nThe Output tab at the bottom of the screen will open, showing the messages that led to the popup.\nThe important part to note is the commands that will be needed to provide your user name and email to the computer for authoring the commit. Typically, your email will be the same one you used for your GitHub account.\n\n\n\nFrom the displayed message, go ahead and copy\ngit config --global user.email \"you@example.com\"\nThen click on the adjacent terminal tab. You will paste the command in, but do not hit enter just yet.\nWindows users, please note, depending on your settings, if trying to paste from the keyboard into the terminal, you may need to press “Ctrl + Shift + V” instead of the usual “Ctrl + V”.\n\n\n\nWith the command now pasted (or typed), use your keyboard arrows to navigate to the email portion, and replace the generic email with your email address used for your GitHub account.\nMake sure that the quotation marks (\") around the email address remain present, as they help the computer identify where your email address starts and ends. Once satisfied that your email address is correct, press enter.\n\n\n\nNext up, repeat the process, this time copying over to the terminal the command needed to set your user name. Repeat the editing process to provide your name between the \"\" marks. Then press enter.\n\n\n\n\n\n\n\n\nFirst Commit\nNow that your user.name and email address have been provided, Git should be able to provide an author to the commit message. 
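For reference, a cleaned-up sketch of the two identity commands typed in this step; the email and name below are placeholders, so substitute the values tied to your own GitHub account:

```shell
# Tell Git who to record as the author of future commits.
# Both values are placeholders; replace them with the email
# address and name used for your GitHub account.
git config --global user.email you@example.com
git config --global user.name 'Your Name'

# Print the stored values back to double-check them.
git config --global user.email
git config --global user.name
```

Because of the --global flag, these values are saved once per computer and reused by every repository on it.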
Now press the Commit button again.\nIf this is successful, you will see your initial commit appear on the bottom half of the left primary side bar, under the Graph dropdown. Congrats! Your files are now being tracked by version control.\n\n\n\nIf you hover with your mouse arrow just over the commit, you can see the longer commit message and additional details appear.\nIf you click on the commit tab, a new display will open in the editor, displaying the changes that occurred in that commit compared to the previous one. In this case, since we added everything since the previous commit, nothing appears on the left side, while the entire document’s contents appear highlighted in green on the right.\nGreen highlighting is used to show additions, while red highlighting is used to show deletions.\n\n\n\nHaving completed this initial commit, for this example, let’s imitate a typical workflow and make some additional changes to the file before we make a second commit. Within text portions of the .qmd file, use of # denotes a section header in markdown, so let’s add a header for Introduction and click save.\n\n\n\n\n\nModified\nWithin the left primary sidebar, we can see that the Git tracking has updated. Example.qmd is visible once again. However, because it is now a tracked file, instead of showing up with the “U/Untracked” green highlight, it now appears as brownish-red with an “M/Modified”.\nLet’s make an additional change to the .qmd file by adding another section (# Setup) and a code block with a commented out line (denoted by the # at the line start), before pressing Save.\n\n\n\nIf we were now to click on the Example.qmd file in the left primary sidebar, it will open the same kind of tracking display we saw previously. This time, we can see changes since our last commit. 
These appear as the green highlights along the scroll-bar, corresponding to the # Introduction and # Setup headers that we have added in since the last commit.\n\n\n\nFor a larger document, we can scroll down to see the various highlighted regions.\n\n\n\nWe could now repeat the steps shown above, staging the file, writing a commit message, and committing again by clicking on the designated buttons.\nAn important question is how often should we commit, vs. just hit save? Well… it depends :D Let’s think about this in the context of a video game. If you made commits at regular intervals throughout the day (or more frequently when doing something particularly risky), you are more likely to be close enough to a particular commit (waypoint/save-state) that you can quickly revert back to without losing any progress. Alternatively, if your last commit was last week, you will not have any intermediate versions to fall back to.\n\n\nCommit via Terminal\nHaving demonstrated how to commit changes to Git via the left primary side-bar, for this second commit, let’s do it the alternate way via the terminal (tab on the panel at the bottom of your screen).\n\n\n\nAfter clicking on the terminal tab, click on the blinking command line.\nThe command to stage a file is “git add”, followed by the name of the file you want to stage.\nIn this case, you would enter “git add Example.qmd” and press Enter. \n\n\nYou will see after pressing enter a new blank terminal line appear. If you glance at the left-sidebar, you can see that Example.qmd now appears under the Staged Changes dropdown.\n\n\n\nNext up, let’s write the git commit via the terminal. In this case, the command would be “git commit -m” (-m denoting message). 
The commit text is then surrounded by \"\" marks.\nFor example: git commit -m \"Added section headers to my QMD file\"\nPress enter to save the commit.\n\n\n\nAnd you should see your second commit now appear in the left primary sidebar underneath the Graph dropdown.", + "crumbs": [ + "About", + "Getting Started", + "00 - Git" + ] }, { - "objectID": "course/00_Git/slides.html#remote-version-control", - "href": "course/00_Git/slides.html#remote-version-control", + "objectID": "course/00_Git/index.html#remote-version-control", + "href": "course/00_Git/index.html#remote-version-control", "title": "Version Control with Git", "section": "Remote Version Control", - "text": "Remote Version Control\nCopying Project Folder to GitHub\n\n\n\n\n\n\n\n\n.\n\n\nWhile having local version control in place is helpful when you need to revert back after encountering issues, where Git shines is the ability to pass your changes to your online GitHub repository.\nNot only does this allow you to switch between computers, but should something disastrous happen to your main computer, you still have all your hard work backed up and readily assessible.\nFor this subsection, first, double check that Positron is still connected to your GitHub account by checking the user tab on the bottom-left. If not, repeat the connection setup." + "text": "Remote Version Control\n\nCopying Project Folder to GitHub\nWhile having local version control in place is helpful when you need to revert back after encountering issues, where Git shines is the ability to pass your changes to your online GitHub repository.\nNot only does this allow you to switch between computers, but should something disastrous happen to your main computer, you still have all your hard work backed up and readily accessible.\nFor this subsection, first, double check that Positron is still connected to your GitHub account by checking the user tab on the bottom-left. 
If not, repeat the connection setup.\n\n\n\nSince our project was created using the “New Folder from Template” option, it currently only exists locally. What we want to do next is to copy it to our GitHub account, creating a new repository in the process.\nTo do this, we will first need to install the usethis R package. Within your console, you would run the following line of code:\n\ninstall.packages(\"usethis\")\n\nDepending on what R packages you already have installed on your computer, you may get a prompt asking if you want to update/install additional dependencies. Go ahead and type the number corresponding to Update All, and press enter.\nThe package and all its dependencies should then install. If an error message appears, read through it, and follow the provided instructions. Go to Discussions if you need help.\n\n\nOnce the usethis package is installed, we need to activate it within R by calling it with the library command. This makes all the tools (ie. functions) within an R package available for use within Positron.\nIn your console, you would type:\n\nlibrary(usethis)\n\n\n\n\nWith library called, you now have access to the functions (tools) within the usethis R package. 
One of these is the use_github() function.\nIn Positron, if you hover over a function, it will pull up the associated help file, which will provide you with information about the arguments the function expects to receive, and what they do.\nFor use_github(), the main thing to remember for now is that since this is a personal project being used for testing, we don’t necessarily want to share it with the entire world, so we should set the “private” argument equal to TRUE when creating a new repository.\n\n\n\nWith this information gathered, we can now within our Quarto Markdown create a code chunk, write out the line of code calling the function, and provide the private = TRUE argument within the ().\nWithin a code chunk, adding a # in front of a line of code will comment it out, resulting in that line of code not being run. Since we have already installed the usethis package, and we don’t want to reinstall it every single time, let’s go ahead and comment out that line. Go ahead and press Enter.\n\n\n\nWe will see a message pop-up in the console. In this case, we had not saved before pressing enter, so there are uncommitted changes within the folder. The pop-up is asking whether you want to save these as well before sending the Folder to GitHub.\nIn this case I will choose to ignore the uncommitted changes by entering 3 (for Definitely) in the console and hitting enter on my keyboard.\n\n\n\nThe usethis R package will then execute the series of git commands that are needed to set up a GitHub repository (ie. the messages being displayed in the console window), and when finished will open a pop-up asking whether you want to see your new repository in your default Web Browser. I will go ahead and select yes in this case.\n\n\n\nAfter the browser opens, you can see that the elements I had staged and committed within Positron are now present within the GitHub repository. Since I had only staged Example.qmd, it is the only file that was backed up. 
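The commit history can also be checked from the terminal rather than the browser; this is standard Git rather than anything Positron-specific, and assumes you run it from inside a project folder that already has at least one commit:

```shell
# List the commit history, newest first, one commit per line.
# The short code at the start of each line is the commit hash.
git log --oneline

# Show the full details (author, date, message) of the most
# recent commit.
git log -1
```

The abbreviated hashes printed here match what GitHub displays in its online commit history.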
We can also see the commit history online by clicking on the commit clock.\n\n\n\nAs we would expect, we only see our two commit messages. One important thing to note is the commit hash numbers, which denote a particular commit. If we decided to revert/fall back to a prior commit in the future, this would be the number we would need to provide to Git to return to that previous commit/save-state.\n\n\n\nSimilarly, on GitHub, we have an option to Browse a Repository at a particular point in time. This will be quite useful in the future when troubleshooting what major changes occurred between versions of an R package.\n\n\n\n\n\nCode Chunk Arguments\nHaving successfully connected our local Project Folder to a remote GitHub repository, let’s return to Positron.\nBefore continuing, if we left the code chunk that created the GitHub repository as is, every time we ran all code chunks in the document, it would try to recreate the GitHub repository. We don’t want this to happen, as the setup was a one-time operation.\nWhile we could add # in front of every line of code (or delete the code chunk entirely), it is often useful to have these set-up code chunks around to remind us what arguments we need to provide next time we need to do a similar setup and are mind blanking on what to do.\nFortunately, Quarto allows us to set conditions on whether a chunk is run (ie. evaluated). We will discuss the conditional arguments in more depth in the next section, but for now, we can modify the code chunk as follows.\nOn the next line after the {r}, we will add a hashtag (#), then a pipe (|), followed by a space. This is the setup for a code-chunk specific argument. 
We will then add “eval: FALSE”, which signals that the particular code-chunk should not be evaluated (ie, should not be run).\n\n\n\n\n\nREADME\nNow that we have connected our local Project Folder to GitHub, and have gotten a basic introduction to the “git add” and “git commit” commands, let’s turn our focus to the other files currently listed as untracked by Git within our folder: the README.md and the .gitignore files.\nWhen setting up our GitHub account, we encountered an example of a README.md file. This file often provides a brief description of the project, and an outline of what the other files in the folder are for. As you may have gathered, even software developers are forgetful/under-caffeinated, and having notes to catch back up to speed is important.\n\n\n\n\n\n.gitignore\nWe additionally have a .gitignore file. Within a project, there are often some files that we will never want version control to track. These could be files that are too large for GitHub (ex. really large .fcs files), or files containing sensitive information (passwords, history, credentials, etc.).\nWhen the names of these files (or the file type shorthand) are added to the .gitignore file, they are ignored by version control, and no longer appear on the primary left side bar.\n\n\n\nLet’s proceed and stage both the README.md and .gitignore files, so that changes to these files will be tracked. We can of course select both from the primary left side bar and write a short commit message.\n\n\n\n\n\n\nOr alternatively, if we want to stage all uncommitted files present in a single step, we could use the “git add .” command in the terminal.\nWe can then write our git commit using “git commit -m”.\nBoth approaches work, and you may switch between them based on preference.\n\n\n\nAfter committing, you will notice in the Graph dropdown on the bottom half of the primary left side-bar that something has changed.\nThere are now separate icons denoted as main and origin/main. 
These correspond to the last commit present locally (main), and the last commit on remote (ie. GitHub, origin/main).\nLocal is ahead since you just made the commit with the changes inactivating the code-chunk, and you have not passed these changes up to GitHub yet.\n\n\n\n\n\nPull\nBefore sending (ie. pushing up) our updated commit to GitHub, it is good practice, especially if you are working on a project from multiple computers (or as part of a team), to first bring in (ie. pull down) any changes that might be on GitHub that are not present locally.\nThis ensures that everything is up to date, and you don’t end up with mismatched commits that are incompatible with each other and trigger an error message.\nTo pull in changes from GitHub, at the top of the primary left side-bar, you can select the … button to open a drop-down menu of Git options. You would then select “Pull”.\nAlternatively, you could do the same thing via the terminal by running the “git pull” command.\n\n\n\n\n\nPush\nIn our case, there was no new material present on our GitHub repository that was not already present locally, so all that is returned is the “Already up to date” message.\nWe are now good to proceed to push (ie. send) the updated commit up to our GitHub repository.\nWe can do this by either pressing the Sync changes button, or via the terminal entering the “git push” command.\n\n\n\nAnd now, if you glance down at the left side-bar’s graph section, you will see that both the main and origin/main icons are now present for the most recent commit.\n\n\n\nIf we switch to our Web browser, we can see that this is also now the case for our GitHub repository, which now also has the most recent changes.\n\n\n\n\n\nReverting to Prior Commit\nFor most daily workflows, you will only need the git commands that we have introduced above (git add, commit, pull, push). The next two areas (reverting to a prior commit, and branches) are more specialized, and will be covered in greater depth later in the course. 
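As a recap, the everyday cycle covered so far (stage, commit, pull, push) can be sketched end-to-end in the terminal. The block below is a scratch demonstration only: a local bare repository stands in for GitHub, and every path, file name, and commit message is invented for illustration.

```shell
# Scratch demo of the daily cycle: stage -> commit -> pull -> push.
# A local bare repository stands in for GitHub; all paths here are made up.
set -e
work=$(mktemp -d)
git init --bare $work/remote.git          # pretend this is the GitHub repository
git init -b main $work/project
cd $work/project
git config user.email you@example.com
git config user.name 'Course Participant'
git remote add origin $work/remote.git

echo 'eval: FALSE' > Example.qmd
git add Example.qmd                       # stage the changed file
git commit -m 'Inactivate setup chunk'    # commit with a short message
git push -u origin main                   # push: origin/main now matches local main
git pull                                  # pull: reports we are already up to date
```

After the push, the main and origin/main markers point at the same commit, which is exactly what the graph section in the side-bar displays.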
We are briefly covering them here. If you are at the point where your last remaining neuron has disconnected, and you feel you need to take a break from version control, feel free to skip to the next section and we will revisit these topics later in the course.\n\n–\nIn most cases, if your code stops working, you can identify the issue and fix it in the existing version, never needing to resort to reverting to a previous commit (save-state). The times you would need to revert would be if you deleted important files, or the new files are a hopeless mess that is not worth trying to sort through. In those cases, reverting might be the better approach.\nTo imitate a fall-back scenario, let’s create an additional file, then stage and commit it, ending up a commit ahead of where we currently are within the Project Folder.\n\n\n\nNow, being one (or several) commits ahead, if we wanted to revert back, we would first need to identify the commit we want to return to and copy its commit hash number.\n\n\n\nThen, opening the terminal, we can enter “git reset” and paste the hash afterwards. We can then press enter.\n\n\n\nYou will notice our additional commit has been removed, although the newer files we were working on subsequent to the last commit are still present.\n\n\n\nIf, however, we had wanted to return to the exact same state as the previous commit (removing all subsequently created files), we could do so by adding in the --hard argument. Before starting, save any newer files you want to keep in a completely different folder, because they will be permanently removed.\nThen, enter “git reset --hard thecommithashnumber” into the terminal, which would result in a “hard” return to the previous commit’s save-state. You may need to close and reopen Positron to see the changes reflected.\n\n\nBranches\nBranches are a useful Git feature that we will start using extensively later in the course. 
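Before moving on to branches, the fall-back behaviour described above can be rehearsed safely in a scratch repository. Everything below (paths, file names, commit messages) is invented for the demonstration; note how a plain git reset keeps the newer file on disk, while --hard discards it.

```shell
# Scratch demo of reverting to a prior commit (save-state).
set -e
work=$(mktemp -d)
git init -b main $work/repo
cd $work/repo
git config user.email you@example.com
git config user.name 'Course Participant'

echo keep > notes.qmd
git add notes.qmd
git commit -m 'Good commit'
good=$(git rev-parse HEAD)      # the commit hash number we may want to return to

echo mess > mistake.qmd
git add mistake.qmd
git commit -m 'Regrettable commit'

git reset $good                 # commit removed, but mistake.qmd stays on disk
git add mistake.qmd             # recreate the regrettable commit...
git commit -m 'Regrettable commit again'
git reset --hard $good          # ...then hard reset: mistake.qmd is gone for good
```

The plain reset is the forgiving option; the --hard reset is the one that requires copying anything you want to keep somewhere else first.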
Branching allows you to create a parallel/carbon-copy of your existing repository, which you can then edit without affecting the main branch. This is particularly useful for projects that may get messy or drawn out. By isolating these edits to a parallel branch, if they don’t work, your main branch remains safe. Alternatively, if you like the changes that occurred in the branch, you can pull these changes from the branch back to main, bringing the timelines back together.\n\n\nWithin the terminal, entering “git branch” will show the existing branches. In this case, only main is present since we haven’t yet created a new branch.\n\n\n\nWe can create a new branch in the terminal by entering “git branch” followed by the name of our desired branch. In this case, we are creating a branch called Week1.\n\n\n\nNow, when we check “git branch” again in the terminal, it returns the two branches, Week1 and main. The * is located next to main, indicating that we are currently within the main branch.\n\n\n\nBesides the terminal, we can also create a new branch via Positron. To do so, we first click on the Git tab in the Actions Bar.\nOnce the left-side bar displays the version control display, we can click on the … button (to the right of changes) to gain access to the Git options drop-down.\nFrom here, we click on Branch, and then select Create Branch.\n\n\n\nUsing git branch, we saw that we were still within the main branch. In the terminal, we can switch over to the Week1 branch by using the “git checkout” command, followed by the branch we wish to switch to.\n\n\n\nThis results in us switching over to the Week1 branch.\n\n\n\nHaving switched (ie. checked out) to the Week1 branch, let’s create the file BranchTest.qmd, which will exist within this branch, but not yet in the main branch.\n\n\n\nHaving created the file, let’s stage and then commit it. 
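Condensed into terminal commands, the branch workflow in this section looks roughly like the sketch below (a scratch repository; the file names and commit messages are invented for illustration).

```shell
# Scratch demo: create a side branch, commit on it, and switch back to main.
set -e
work=$(mktemp -d)
git init -b main $work/repo
cd $work/repo
git config user.email you@example.com
git config user.name 'Course Participant'
echo start > Example.qmd
git add Example.qmd
git commit -m 'Initial commit'

git branch                     # only main exists so far
git branch Week1               # create the parallel branch
git checkout Week1             # switch over to it
echo test > BranchTest.qmd     # this file will exist only on Week1
git add BranchTest.qmd
git commit -m 'Add BranchTest'
git checkout main              # back on main, BranchTest.qmd disappears
```

Checking out main at the end makes the isolation visible: the committed file vanishes from the folder until you check Week1 back out.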
This will put the Week1 branch ahead of the main branch by a single commit.\n\n\n\nWith our changes staged and committed, if we look at the left side-bar’s graph section, our Week1 branch is now ahead of the origin/main branch by one commit.\n\n\n\nIf we were to check on GitHub, we can see that no new files are present on the main branch, but we can see the notification listing recent changes to the Week1 branch.\n\n\n\nUsing the drop-down, we can switch from displaying the main branch to the Week1 branch, where we can see the new file.\n\n\n\nIf we click the green compare and pull request button, we end up on a screen comparing how the two branches differ from each other.\n\n\n\nWe will delve into branches again at a later point. For now, remember that by creating and pruning parallel branches, you can develop knowing that even if something goes wrong, your main branch remains safe.",
    "crumbs": [
      "About",
      "Getting Started",
      "00 - Git"
    ]
  },
  {
    "objectID": "course/00_Floreada/index.html",
    "href": "course/00_Floreada/index.html",
    "title": "Using Floreada",
    "section": "",
    "text": "For the YouTube livestream recording, see here\nFor screen-shot slides, click here",
    "crumbs": [
      "About",
      "Getting Started",
      "00 - Floreada"
    ]
  },
  {
    "objectID": "course/00_Floreada/index.html#floreada",
    "href": "course/00_Floreada/index.html#floreada",
    "title": "Using Floreada",
    "section": "Floreada",
    "text": "Floreada\n\nLoading Dataset\nFirst, open your web browser and navigate to the website\nClick on Start to proceed to the next page.\n\n\n\n\n\n\nOnce that is done, select the File tab on the upper navigation bar. 
Then click on Open File(s).\n\n\n\n\n\n\n\nFrom there, select your .fcs files of interest, and click Open.\n\n\n\nThe .fcs files will now load in, and you should see a view similar to the one below. On the left side-bar, you have your gating options (Rectangle, Polygon, Range, Ellipse, Quad, etc). Next to these, you have the FCS files that are loaded into the workspace. Then on the right, you have the visual display for your selected specimen.\n\n\n\n\n\nSwitching Axis Markers\nIf you left-click on the axis name (SSC-H or FSC-H in this case), you will be able to select other markers by which to gate your specimen. For the provided example, we were using a raw spectral flow cytometry .fcs file, so the names of the detectors are present.\n\n\n\nFor now, I plan to start off by gating for singlets. I switch the y-axis to FSC-H, and then proceed to switch the x-axis to FSC-A.\n\n\n\n\n\nCreating Gates\nWith this done, I can now select the Poly gate tab on the upper left.\n\n\n\nThen manually click on the locations on the plot to add the individual gate nodes.\n\n\n\nTo complete the gate, I click back to the original node point. At this point, the popup will allow you to name the gate.\n\n\n\nTo adjust the polygon gate, you can click on a node and drag it to expand or contract in a particular direction.\n\n\n\nTo move the entire gate, first click on a node to select the gate, then click in the center of the gate to adjust its location.\n\n\n\n\n\nAdditional Gates\nUnfortunately for those with the force-of-habit from using other software, double-clicking within the gate doesn’t do anything. To continue gating on the selected cells, you will need to click on the newly created gate name on the left. This will result in visualizing the isolated cells.\n\n\n\nOnce this is done, you can repeat the previous steps to change the axis markers and create a second gate. 
For this example, we went with a “Cells” gate to exclude debris from this particular sample.\n\n\n\nHaving created the “Cells” gate, we will be switching from gating based on FSC and SSC to using the detector parameters.\nThe samples in this example were acquired to derive the cell counts and concentration of various cell populations within cryopreserved cord and peripheral blood mononuclear cell (CBMC and PBMC) specimens after thawing.\nThey were stained with CD19 BV421, CD45 PE, and CD14 APC on a 5-Laser Cytek Aurora; before unmixing, these would correspond to the V1, YG1/B4, and R1/YG4 peaks, respectively.\n\n\n\n\nScaling/Transformation\nWhen we switch the axis, we can see that the scaling/transformation is not ideal, as the staining and non-staining populations are scrunched up together in the center of the plot.\n\n\n\nTo change the scaling/transformation, we need to click directly on the axis.\n\n\n\nFrom there, when we click on the drop-down, we see the various transformation options. We will select Logicle, given we are working with spectral flow cytometry files.\n\n\n\nThe y-axis values are subsequently visualized with the logicle transformation applied, increasing our resolution between the positive and negative populations.\n\n\n\nWe can then repeat this for the x-axis, adjusting the fine-tune options for the scaling as needed.\n\n\n\n\n\nNavigating Gating Hierarchy\nWith this done, let’s first draw a rectangle gate for the CD45+ (B4-A) cells.\n\n\n\nAnd then, selecting that population by clicking on the gate name, let’s proceed and gate the CD19+ cells (V2-A).\n\n\n\nAs you can see, we now have the various gates present in the gating hierarchy for the respective .fcs file. 
To return to a previous gated population, we would click on the parent population above it.\n\n\n\nWe can subsequently add an additional gate at this gating level for the likely debris population (the threshold setting was suboptimal for this experimental run).\n\n\n\n\n\nCopying Gates\nThis was the process for gating a single specimen. To copy gates over to the other specimens, we have two options. First, holding down your Ctrl (or equivalent) button, you can click on the individual gate names.\n\n\n\nFrom there, you can drag them down to the next specimen and apply them.\n\n\n\nAlternatively, you can drag down the highlighted gates to the Pipelines Tab, and apply to All Files. This will result in the gates being copied to all specimens in the experiment.\n\n\n\n\n\n\nAdjustments within Pipelines will carry over to all other respective unmodified specimens that share its gates.\n\n\n\nOnce this is done, I recommend cycling through the gates for each specimen, just to ensure that the gates were positioned correctly before saving the workspace.\n\n\n\n\n\nSaving Workspace\nWith everyone now “correctly” gated, we can proceed to save the workspace so that we can reopen it later from another browser.\nTo do this, we open the File tab from the upper navigation bar, and select Save Workspace.\n\n\n\nFrom there we have a couple options; for now, let’s select Floreada Workspace. 
Where it is saved will depend on your individual browser settings, so watch for a popup.\n\n\n\nAlternatively (and crucially for the CytoML pipeline), we can also choose to save it as a FlowJo v10 .wsp file.\n\n\n\nIn both cases, you will end up with Workspace files that can be used later to access your created gates.\n\n\n\n\n\nReopening Workspace\nTo reopen the Floreada workspace within the browser, reopen the website, and select the Open File(s) option.\n\n\n\nFrom there, select both the Floreada Workspace file as well as the .fcs files.\n\n\n\nAt which point, you will be back to the point you last saved at.",
    "crumbs": [
      "About",
      "Getting Started",
      "00 - Floreada"
    ]
  },
  {
    "objectID": "course/00_Floreada/index.html#cytoml",
    "href": "course/00_Floreada/index.html#cytoml",
    "title": "Using Floreada",
    "section": "CytoML",
    "text": "CytoML\nDue to an unknown formatting bug, the Floreada-produced FlowJo v10 .wsp is not directly accessible by CytoML at the time of this course. However, the issue is resolved as soon as the file is opened for the first time within FlowJo v10, regardless of whether you have a login or not. Strange? Yes, but we will take the workaround.\nSo, for anyone on Windows or MacOS, download FlowJo v10. Once installed, open the software, and close the login popups. Once there, open the Floreada-created FlowJo .wsp file. Since you haven’t logged in, it won’t show any events. But it will correct the formatting bug. Close the software, and return to R. Your Floreada-sourced .WSP file should now be readable by CytoML.\nOdd? For sure. Fixable? Likely; I will set a reminder to work with the Floreada and CytoML devs to see if we can cut out the need for this workaround.",
+ "text": "CytoML\nDue to a unknown formatting bug, the Floreada produced FlowJo v10 .wsp is not directly accessible by CytoML at the time of this course. However, the issue is resolved as soon as the file is opened the first time within FlowJo v10, regardless of whether you have a log in or not. Strange? Yes, but we will take the workaround.\nSo, for anyone on Windows or MacOS, download FlowJo v10. Once installed, open the software, and close the login popups. Once there, open the Floreada created FlowJo.wsp file. Since you haven’t logged in, it won’t show any events. But it will correct the formatting bug. Close the software, and return to R. Your Floreada sourced .WSP file should now be readable by CytoML.\nOdd? For sure. Fixable? Likely, I will set a reminder to work with the Floreada and CytoML devs to see if we can cut out the need for this workaround.", + "crumbs": [ + "About", + "Getting Started", + "00 - Floreada" + ] }, { - "objectID": "course/00_BonusContent/PullConflicts/UpdatedPullRequest.html", - "href": "course/00_BonusContent/PullConflicts/UpdatedPullRequest.html", - "title": "Updated Pull Request Protocol", + "objectID": "course/00_BonusContent/index.html", + "href": "course/00_BonusContent/index.html", + "title": "Bonus Content", "section": "", - "text": "Background\nDue to an encountered issue pulling in new updates for CytometryInR when you have an optional take-home problem still waiting to be reviewed, we will be modifying the protocol for submitting a pull request. You will first create a local homework branch, and submit from your branch to our homework branch. 
That should hopefully prevent any incoming changes from main to main from becoming conflicted.\n\n\nGetting Started\nThe first step is to open Positron, and navigate through the dropdown options to the Create a Branch option.\n\n\nAnd provide a name (since the homework was for Week 02, we set it as Week02).\n\n\nNext, select the option to Publish the Branch.\n\n\nFrom here, importantly, select the option to make it a branch of YOUR forked CytometryInR version (since you don’t have permissions for the main course repository).\n\n\nAt this point, your new branch will have been created. You can check by entering the following code in the terminal, and verifying the * is next to the Week02 branch:\n\ngit branch\n\n\n\nOnce you have confirmed you are in your homework branch, go ahead and transfer in all the files you will be submitting for the optional take-home problems.\n\n\nAnd once done, make a commit as you would normally.\n\n\nAs you can see, you will now be ahead of the main branch by one commit. Go ahead and sync your branch to GitHub so the contents are available remotely for use in the pull request.\n\n\nOnce synced, you will notice that your branch is now up to date with the remote (cloud) icon. Next, proceed to check out the main branch, either via the dropdown or via the terminal using:\n\ngit checkout main\n\n\n\nReturning to GitHub, you will see that your homework branch has received the incoming changes. 
You are now safe to sync your fork to bring in changes from the main course CytometryInR repository.\n\n\nAnd confirm yes.\n\n\nReturning to Positron, once you have verified you are in your main branch, proceed to pull in the changes.\n\n\nIf you switch between branches, you will notice you have both the new changes to main, as well as your week-specific side branch, co-existing peacefully.\n\n\nYou are then safe to make a pull request from your homework branch to our homework branch, without running the risk of an additional commit from our end (or a delay in reviewing) causing issues.\n\n\n\n\nAdditional Resources\nThis method should hopefully avoid the previously encountered issues. Apologies once again to those who encountered the issue! Still learning how to use some of these aspects of version control in a GitHub context."
  },
  {
    "objectID": "course/00_BonusContent/Immport/images/index.html",
    "href": "course/00_BonusContent/Immport/images/index.html",
    "title": "ImmPort - Downloading Datasets",
    "section": "",
    "text": "To download data from the ImmPort Shared Data Repository, first navigate to the website.\n\nFor help setting up Aspera Connect, see the following help documentation."
  },
  {
    "objectID": "course/00_BonusContent/index.html",
    "href": "course/00_BonusContent/index.html",
    "title": "Bonus Content",
    "section": "",
    "text": "This is a miscellaneous page to host walk-throughs of topics that come up via the Discussion Page. Rather than re-explain how to do things in the comments, I want to have a place to post short walkthroughs to solve these issues, while avoiding incorporating them into the existing walk-throughs at this point in time. I hope you find it useful, and pardon the organized chaos of miscellaneous topics."
  },
  {
    "objectID": "course/00_BonusContent/index.html#windows-arm",
    "href": "course/00_BonusContent/index.html#windows-arm",
    "title": "Bonus Content",
    "section": "Windows Arm",
    "text": "Windows Arm\n\nPositron\nOn the main Positron installation page, only the installers for Windows computers with x86 chips are currently shown. There is a beta (experimental) version of Positron for Windows ARM (ex. 
Snapdragons), but it needs to be installed from Positron’s GitHub releases page.\n\nPlease note, you should download the most recent version that is available to you, as they continue to update it and fix bugs. As of February 04, 2026, you would also need to install Quarto separately.\nTo install Quarto, first navigate to their website. Quarto for Windows ARM was implemented in 2023, so the regular installer should work.\n\nAlso, you would need a Python arm64 installation if you decide you want to venture into using Python at any point."
  },
  {
    "objectID": "course/00_BonusContent/index.html#visual-mode",
    "href": "course/00_BonusContent/index.html#visual-mode",
    "title": "Bonus Content",
    "section": "Visual Mode",
    "text": "Visual Mode\nWithin Positron, there exists a toggle button to switch between source and visual mode on Quarto documents. But what do you do when you can’t find it?\n\nTurns out… Visual Mode is currently broken, so the developers removed it about two weeks ago.\nThe current way to switch to it is via right-click, then selecting Edit Visual Mode.\n\nVice versa, once there, you can revert by right-clicking and selecting Edit Source Mode.\n\nThe process of figuring out what is going on highlights how to use a GitHub Discussion Page. This is Positron’s discussion page when I searched for the visual button. I then found that a similar question was asked 3 days ago that led me to the linked thread above."
+ }, + { + "objectID": "course/00_BonusContent/index.html#not-detecting-git", + "href": "course/00_BonusContent/index.html#not-detecting-git", + "title": "Bonus Content", + "section": "Not Detecting Git", + "text": "Not Detecting Git\nIf you leave the “Initialize Git Repository” option unclicked when setting up a New Folder from Template, Git will not be active within your project folder.\n\nAs a result, when you try to use the usethis packages use_github(private=true) function, you will get an error that resembles the one below\n\nTo initiate a Git repository after the fact, you will need to go to the Version Control tab in the action bar, and select the option.\n\nThen, you will need to stage the files you want to work with, and commit them.\n\nuse_github(private=TRUE) should now be functional at that point. However, you can also choose to continue via Positron’s interface instead by selecting Publish.\n\nIt will then ask you whether you want to save it as either a Public or a Private repository.\n\nAnd if all goes well, you will see the “Successs” pop-up in the lower-right" + }, + { + "objectID": "course/00_BonusContent/PullConflicts/index.html", + "href": "course/00_BonusContent/PullConflicts/index.html", + "title": "Take-Home Problems - Pull Fix Resolution", "section": "", - "text": "To download data from the ImmPort Shared Data Repository, first navigate to the website\n\nFor help setting up Aspera Connect, see the following help documentation" + "text": "Background\nFor those who have turned in the homework to the Cytometry in R - homework branches, many report having encountered merge issues pulling in the next week’s data if the pull request hasn’t been resolved yet. Creating a parallel branch, and submitting homework from there to the homework branch might solve the issue? But we will need to test that out. 
For now, here are the steps we used to resolve the issue locally without needing to delete and re-download.\n\n\nGetting Started\nStart off by checking your forked GitHub version of CytometryInR, and notice how many commits behind you are.\n\n\nIf you haven’t submitted the optional Take-Home problems via a pull request, proceed to do so.\n\n\nThis was an example of the page you see when submitting the pull request. Upon submission, your branch may show merge conflicts due to differences in rendered docs. This is okay; we will resolve it on our end.\n\n\nWhat we will end up doing is ignore the changes and accept the current version. This issue is likely due to the weekly updating of the data resulting in new sidebar links.\n\nWe will then mark the issues as resolved.\n\n\nOn return to the homework, we will be able to merge the branch once again. We will likely make our suggestions at this point for this branch.\n\n\nHowever, after the pull request has been merged, you will see your branch is way ahead (due to everyone else’s homework commits). This is the area we will need to address via the new branch method.\n\n\nFor now, proceed to discard the changes (you don’t need the other participants’ homeworks cluttering your folder).\n\n\nYou will then appear as caught up with the main branch.\n\n\nOn return to Positron, attempt to pull.\n\n\nHowever, since your homework commit is still present, you will receive a pop-up asking you to see the Git log. If you scroll up the problem log, it will give you several options.\n\n\nYou will need to enter the following code into your terminal tab:\n\ngit config pull.rebase TRUE\n\n\n\nThis will result in a branched appearance, and a button asking you to sync the changes.\n\n\nUpon doing so, you will have a restored status vs. 
the main Cytometry in R project folder.\n\n\n\nTake Away\nWe have encountered a first growing pain for the course, in that the pull-request method we have been using still causes merge conflicts. We will be moving to a homework-branch-to-homework-branch pull-request approach going forward; I will send out additional instructions on how to do so shortly.\nThanks for your patience!\nDavid\n\n\nAdditional Resources"
  },
  {
    "objectID": "course/index.html",
    "href": "course/index.html",
    "title": "Cytometry in R",
    "section": "",
    "text": "Cytometry in R is a free virtual mini-course being organized by the Flow Cytometry Shared Resource Core at the University of Maryland’s Greenebaum Comprehensive Cancer Center. This course is a passion project arising from our desire to contribute back to the community.\nWe are excited that so many individuals worldwide have chosen to take part, and we look forward to helping you get started on your own learning journeys."
  },
  {
    "objectID": "course/index.html#resources",
    "href": "course/index.html#resources",
    "title": "Cytometry in R",
    "section": "Resources",
    "text": "Resources\nThe pre-course learning materials are now available, providing walkthroughs of how to set up your workstations with the required software, and exercises to help you become more familiar with the various teaching and coding resources we will be using throughout the course.\nNarrated versions of the walk-through materials are now also available via YouTube."
  },
  {
    "objectID": "course/index.html#in-person-baltimore",
    "href": "course/index.html#in-person-baltimore",
    "title": "Cytometry in R",
    "section": "In-Person (Baltimore)",
    "text": "In-Person (Baltimore)\nFor those joining us in person, the class is being offered on Monday, Tuesday and Thursday from 4-5 pm EST in Bressler Research Building Room 7-035. We invite you to attend whichever session best fits your schedule. 
Monitors to plug your laptops in will be available on a first-come, first-served basis. These in-person sessions will not be recorded, but with the smaller class size you will have our undivided attention should you have any questions."
  },
  {
    "objectID": "course/index.html#virtual-worldwide",
    "href": "course/index.html#virtual-worldwide",
    "title": "Cytometry in R",
    "section": "Virtual (Worldwide)",
    "text": "Virtual (Worldwide)\nFor those joining us virtually, we will have three separate livestreams throughout the week on YouTube. These will be offered on:\n\nTuesday 2200 EST (Wednesday 0300 GMT+0)\nWednesday 1600 EST (Wednesday 2100 GMT+0)\nThursday 1000 EST (Thursday 1500 GMT+0)\n\nAll three livestreams will be recorded and available on YouTube immediately afterwards."
  },
  {
    "objectID": "course/index.html#discussion-forum",
    "href": "course/index.html#discussion-forum",
    "title": "Cytometry in R",
    "section": "Discussion Forum",
    "text": "Discussion Forum\nWe will be using the Cytometry in R Discussions page as a community forum, and a place to ask questions, celebrate wins, and provide feedback. After creating a GitHub account, please go introduce yourself."
  },
  {
    "objectID": "ExistingResources.html",
    "href": "ExistingResources.html",
    "title": "Existing Resources",
    "section": "",
    "text": "We are not the first “Cytometry in R” course, nor will we be the last. This page links to the already existing online Cytometry in R resources that we have encountered and benefited from during our own learning journey. 
May they prove useful to you as you progress your way through yours!\n\n\n\nChristopher Hall - Flow Cytometry Data Analysis in R\nCytometry-R-Scripts: R scripts to help with your flow cytometry analysis\nR_flowcytometry_course: The files and presentation from the Cytometry Core Facility flow cytometry data analysis course in R\n\nInstallation and Loading Data\n(1) Flow Cytometry Data Analysis in R - Installation and Loading Data\n\n\n\n\nCompensation, Cleaning, Transformation, Visualization\n(2) Flow Cytometry Data Analysis in R: compensation, cleaning, transformation, visualization\n\n\n\n\nGating with flowWorkspace\n(3) Flow Cytometry Data Analysis in R: gating with flowWorkspace\n\n\n\n\nVisualization\n(4) Flow Cytometry Data Analysis in R: Visualisation\n\n\n\n\n\n\n\nOzette Technologies - BioC 2023 Workshop\nWorkshop given at the Bioc2023 conference, authored by Arpan Neupane and Andrew McDavid.\nWorkshop: Reproducible and programmatic analysis of flow cytometry experiments with the cytoverse\n\n\n\n\n\n\nPritam Kumar Panda - Flow Cytometry Data Analysis & Visualization in R using CytoExploreR\nFlow-Cytometry-analysis-in-R\nCytoExploreR-Interactive-visualization\n\nComplete Guide\nFlow Cytometry Data Analysis & Visualization in R using CytoExploreR: Complete Guide\n\n\n\n\n\n\n\nBioinformatics DotCa - Introduction to Flow Cytometry in R\n\nIntroduction to Flow Cytometry in R\nIntroduction to Flow Cytometry in R\n\n\n\n\nExploring FCM Data in R\nExploring FCM Data in R\n\n\n\n\nProcessing and Quality Assurance of FCM Data\nProcessing and Quality Assurance of FCM Data\n\n\n\n\n1D Dynamic Gating\n1D Dynamic Gating\n\n\n\n\nClustering and Additional FCM Tools\nClustering and Additional FCM Tools\n\n\n\n\n\nTulika Rai - Learn Innovatively With Me\n\nflowAI Flow Cytometry Data Cleaning using R\nflowAI Flow Cytometry Data Cleaning using R: A Step-by-step Tutorial\n\n\n\n\ntSNE UMAP TRIMAP colorization or Transformation using R script\ntSNE UMAP TRIMAP colorization or 
Transformation using R script\n\n\n\n\n\n\n\nGivanna Putri - Introduction to Cytometry Data Analysis in R workshop\nACS 2021 Workshops - Introduction to Cytometry Data Analysis in R workshop\n\n\n\n\n\n\nTimothy Keyes -\n{tidytof}: Predicting Patient Outcomes from Single-cell Data using Tidy Data Principles\n\n\n\n\n\n\nRyan Duggan - Cytometry on Air\nCytometry on Air: Analyzing Flow Cytometry Data in R Presentation by TJ Chen and Greg Finak.\n\n\n\n\n\n\nGuillaume Beyrend - Learn Cytometry\nLearn Cytometry. Originally appeared to have been paywalled; this no longer seems to be the case.\n\n\n\n\nHong Qin - flow analysis in R\n\nFlow Analysis in R\nflow analysis in R, bio125, Spring 2015\n\n\n\n\nFlow Cytometer Data Analysis\nBIO233 demo, flow cytometer data analysis, simple example\n\n\n\n\n\n\n\nSwayam Prabha - Flow cytometry data analysis in R/Bioconductor\nLecture 15 : Flow cytometry data analysis in R/Bioconductor" + }, + { + "objectID": "index.html", + "href": "index.html", + "title": "About", + "section": "", + "text": "Cytometry in R is a free weekly mini-course being offered both in-person and online by the Flow Cytometry Shared Resource staff at the University of Maryland Greenebaum Comprehensive Cancer Center. Its primary audience is those with prior flow cytometry knowledge who have limited previous experience with the programming language R. However, we welcome everyone regardless of their existing flow cytometry or coding experience.\nThis course is a passion project arising from our desire to contribute back to the community. 
We are excited that so many of you have chosen to sign up, and look forward to helping you get started on your own learning journeys.\nFor more information on topics covered, please see our schedule.\nIf you did not previously complete the interest form, and would like to be added to our mailing list, please complete the form here\n\n\nAbout\n\n\nMotivation\nWhile many cytometry enthusiasts express an interest in learning how to carry out flow cytometry analyses in R, they often do not know where to start. Additionally, many of the limited existing resources are aimed at users with intermediate bioinformatics skills, contributing to a higher barrier to entry for those just starting out. Our motivation in offering this mini-course tailored towards beginners is to make the learning journey smoother than the one we ourselves experienced.\n\nWhile designing the course, we kept the following concepts in mind:\n\nBeginning coders benefit both from detailed examples that they can initially work through on their own time, and from less-defined problems whose troubleshooting builds the thought process and skills needed for coding.\nSome topics will take individuals a longer time to fully grasp. Providing a format and resources that allow the material to be revisited multiple times is incredibly helpful. Likewise, life is busy, and missing a workshop session is highly probable. If this happens, it shouldn’t make or break an individual’s ability to understand the rest of the course.\nConsistency is key, and being able to apply what you are learning to your own datasets, files, and questions of interest helps achieve this.\n\n\n\n\n\nCourse Details\n\nEach week, the mini-course will cover a particular topic for an hour. This individual class is offered on multiple days, at different times, both in-person and online. We invite you to attend the one that best fits your schedule each week. 
If life gets busy and you can’t make your regular day, the online livestream recordings will be available on YouTube.\n \n\nCourse Materials\n\nWe will release the course materials for the upcoming week on Sundays 2200 EST (Monday 0300 GMT+0) via our course website and GitHub. These materials will normally be Quarto Markdown documents containing code, explanation, and other resources needed for that week. If you have your own data, feel free to use it! If you don’t have any data, we will provide some of our own data for each lesson so that you can follow along.\nIn our commitment to open-source and open-science, all teaching materials are freely offered under a CC-BY-SA license, while all code examples are offered under the AGPL-3.0 copyleft license.\n \n\n\nIn-Person (Baltimore)\n\nFor those who are local and attending in person, the class will be offered on Monday, Tuesday and Thursday from 4-5 pm EST in Bressler Research Building Conference Room 7-035 (around the corner from the Flow Core).\nWe invite you to attend whichever session best fits your schedule. If you have your own laptop, feel free to bring it. If you don’t have a laptop, please reach out; the Flow Core has 6 laptops running Linux that we can lend out to participants for use during the session.\nFor those who arrive early, we will have a limited number of second screens with provided mouse and keyboard that you can plug a laptop into via HDMI cable to set up a larger workstation. For those arriving later, the room has enough space (and electrical plugs) for up to 20 people, but you will need to balance a laptop on your lap.\n \n\n\nOnline (Worldwide)\n\nFor those joining us virtually, we will have three separate livestreams throughout the week on YouTube. 
These will be offered on:\n\nTuesday 2200 EST (Wednesday 0300 GMT+0)\nWednesday 1600 EST (Wednesday 2100 GMT+0)\nThursday 1000 EST (Thursday 1500 GMT+0)\n\n \n\n\nRecordings\n\nAll three livestreams will be recorded and available on YouTube immediately afterwards. Our plan is to circle back after the course and properly edit them (i.e. fewer minutes of random background noise, highlighting the relevant lines of code, time-stamps, subtitles, translations, etc.) as time allows, so that they can serve as a more permanent resource.  \n\n\nDiscussion Forum\n\nWe will be using our GitHub Discussions page as a community forum. This will allow us to answer questions, and benefit from insights from others in the community. One advantage of having so many people signed up for the course is that if you have a question, someone else likely does as well, so go start a post and ask it!\n \n\n\nOptional Take-Home Problems\n\nEach week, we will offer optional take-home problems. These are intended to allow you to work with your own data on similar problems, but in a not-so-structured manner. Challenges that you encounter and overcome during the process will help grow your problem-solving and debugging skills, and help solidify concepts covered during the course.\nTo get feedback on these problems, you can reach out to the community on the Discussions page, or, once you are far enough along, open a pull request to the homework branch and we will provide additional feedback.\n \n\n\nCost\nIs there a cost to participate? No, it’s absolutely free! Is there a catch? Yes, you learn R, and may wind up with strong feelings about flowframes vs. cytoframes. This is also our first year offering this course, so we will sporadically ask you to fill out a feedback form to help us improve.\n\n \n\n\nComputing Requirements\n\nFor those attending online, you will need a computer with internet access. Operating system shouldn’t matter, as we will be offering code examples for Windows, Mac and Linux. 
As with all things flow-cytometry software, having a faster CPU with multiple cores, more RAM and greater storage space is generally helpful, but not a deal breaker.\n\nYou will need to be able to install the required software (R, Rtools, Positron, Quarto, and Git) as well as install and compile R packages from the CRAN and Bioconductor repositories (as well as a few GitHub-based R packages). Installation walkthroughs for each computer operating system can be found here.\nFor those using university- or company-administered computers, please be aware that you may not have the necessary permissions to install these directly, and may need to reach out to your IT department to help get these initial requirements set up. If you are using your own computer, congratulations, you are your own system administrator, and should already have the necessary permissions.\n\nFor those attending in-person, we have set up a pop-up computer lab in the conference room. For those who arrive early, we have a limited number of second screens with provided mouse and keyboard that you can plug a laptop into via HDMI cable to set up a workstation. For those arriving later, the room has enough space (and electrical plugs) for 20 people, but you will need to balance a laptop on your lap. If you have your own laptop, feel free to bring it. If you don’t have a laptop, the Flow Core has 6 loaner laptops running Linux that we can let participants use for that session.\n\n\n\n\nLicense\nIn our commitment to open-science and open-source, all teaching materials are freely offered under a CC-BY-SA license, while all code examples are offered under the AGPL-3.0 copyleft license." 
+ }, + { + "objectID": "Schedule.html", + "href": "Schedule.html", + "title": "Cytometry in R", + "section": "", + "text": "Cytometry in R: A Course for Beginners\nCytometry in R is a free virtual mini-course being organized by the Flow Cytometry Shared Resource core at the University of Maryland’s Greenebaum Comprehensive Cancer Center. This course is a passion project arising from our desire to contribute back to the community. We are excited that you have chosen to take part and look forward to helping you get started on your own learning journey.\nIf you did not complete the original interest form, and would like to be added to our mailing list, please complete the form here\nThe pre-course learning materials are now available via the Course tab. They consist of walkthroughs of how to set up your workstations with the required software, and exercises to help you become more familiar with the various teaching and coding resources we will be using throughout the course.\n\n\n\nPre-Course Walkthroughs\n\nWeek 0: January 26, 2026 In these pre-course walkthroughs, we ensure that everyone creates a GitHub account, and has their computer properly set up with the required software (including R, Positron, and Git). We then start to build individual participants’ familiarity with the software infrastructure that they will be using throughout the rest of the course.\n \n\n\nInstalling R Packages\n\nWeek 1: February 2, 2026 During this first session, we learn how to install R packages from the various repositories (CRAN, Bioconductor, GitHub), and how to troubleshoot the more typical errors that occur during this process.\n \n\n\nFile Paths\n\nWeek 2: February 9, 2026 For this second session, we focus on how to programmatically tell your computer where to locate your experimental files, introducing the concept of file paths. 
We explore how the various operating systems (Linux, MacOS, Windows) specify their respective folders and files, and how to identify where you are currently within the directory. Our goal by the end of this session is to have walked you through how to figure out where an .fcs file of interest is stored, and how to convey to your computer where you want it copied/moved to, without encountering the common pitfalls.\n \n\n\nInside an .FCS file\n\nWeek 3: February 16, 2026 In the course of this third session we will slice into an .FCS file and find out what individual components make it up. In the process, we will cover the main data structures within R (vectors, matrices, data.frames, lists) and how to identify what we are working with. Additionally, we will explore how various cytometry software packages store their metadata variables under various keywords that can be useful to know about.\n \n\n\nIntroduction to the Tidyverse\n\nWeek 4: February 23, 2026 Within this session, we explore how the various tidyverse packages can be utilized to reorganize rows and columns of data in ways that are useful for data analysis. We will primarily work with the MFI expression data we isolated from within the .fcs file in the previous session, identifying and isolating events that meet certain criteria. 
We additionally explore how to add keywords to their respective metadata for use in filtering specimens of interest from the larger set of .fcs files.\n \n\n\nVisualizing with ggplot2\n\nWeek 6: March 9, 2026 During this session we provide an introduction to the ggplot2 package. We will take the datasets we have collected from the previous sessions and see how in varying in different arguments at the respective plot layers we can produce and customize many different forms of plots, focusing on both cytometry and statistics plots. We close out providing links to additional helpful resources and highlight the TidyTuesday project.\n \n\n\nApplying Transformations and Compensation\n\nWeek 7: March 16, 2026 For this seventh session, we take a closer look at the raw values of the data within our .fcs files, and explore the various ways to transform (ie. scale) flow cytometry data in R to better visualize “positive” and “negative populations”. In the process, we visualize the differences resulting from applying different transformations commonly used by commercial software. Similarly, we learn how to apply and visualize compensation in context of conventional flow cytometry files.\n \n\n\nManual and Automated Gating\n\nWeek 8: March 23, 2026 Within this session, we explore various ways to implement gating for flow cytometry files in R. We will explore manual approaches utilizing flowGate, as well as automated options with openCyto and it’s gating templates. We additionally will explore how to provide gate constraints and various ways to visually screen and evaluate the outcomes within the context of our own projects.\n \n\n\nConference Break 1\nNo class week of March 30, 2026. 
If you are attending the ABRF conference, track me down at the Complex Data Analysis in Flow Cytometry: Navigating the Landscape talk on Monday, March 30th at 4:30 PM.\n \n\n\nIt’s Raining Functions!\n\nWeek 9: April 6, 2026 In the course of this ninth session, we tackle one of the harder but most useful concepts to learn for a beginner, namely functions. We explore what they are, how their individual arguments work, how they differ from for-loops, and how to create our own to do useful work, reduce the number times code gets copied and pasted. Additionally, some functional programming best practices will be introduced, as well as provide introduction to how to use the walk and map functions from the purrr package.\n \n\n\nDownsampling and Concatenation\n\nWeek 10: April 13, 2026 Within this session, we will expand on our growing understanding of GatingSets, functions and fcs file internals to write a script to downsample your fcs files to a desired number (or percentage) of cells for a given cell population. We will additionally learn how to concatenate these downsampled files together, and save them to a new .fcs file in ways that the metadata can be read by commercial software without the scaling being widely thrown off.\n \n\n\nRetrieving data for Statistics\n\nWeek 11: April 20, 2026 Leveraging the increased familiarity working with the various packages this far in the course, in this session we will retrieve summary statistics for the gates within our GatingSet, and programmatically derrive out tidy data.frames for use in statistical analyses typically used by many Immunologist. In the process, we add a couple additional plot types to our ggplot2 arsenal to hold in reserve should Prism prices go up again.\n \n\n\nSpectral Signatures\n\nWeek 12: April 27, 2026 As part of this session, we will explore how to extract fluorescent signatures from our raw spectral flow cytometry reference controls. 
Building on prior concepts, we will learn to isolate median signatures from positive and negative gates, and how to derrive and plot normalized signatures. We also introduce plotly package and it’s interactive plotting features, before showcasing various packages attempts at facilitating signature retrieval.\n \n\n\nSimilarities and Hotspots\n\nWeek 13: May 4, 2026 During this session, we will utilize the spectral signature matrix isolated from raw spectral flow cytometry controls and evaluate different ways of evaluating how similar different fluorescent signatures are to each other. In the process, we will gain better understanding of the metrics behind similarity (cosine), panel complexity (kappa), and unmixing-dependent spreading (collinearity).\n \n\n\nUnmixing in R\n\nWeek 14: May 11, 2026 In the course of this session, we will attempt a reach goal of many, namely carry out unmixing of raw .fcs files using the spectral signatures we have isolated from our unmixing controls, and write to new .fcs files. After evaluating the necessary internals, we will explore how various current cytometry R packages have implemented their own unmixing functions, and the various limitations that each approach has encountered.\n \n\n\nCleaning Algorithms\n\nWeek 15: May 18, 2026 In the span of this session, we will directly compare how various Bioconductor data cleanup algorithms (namely PeacoQC, FlowAI, FlowCut, and FlowClean) tackle distinguishing and removing bad quality events. We will see how they perform with previously identified good quality and horrific quality .fcs files. We will whether the implemented algorithmic decisions made sense, and how to customize them within our workflows to achieve our own desired goals.\n \n\n\nClustering Algorithms\n\nWeek 16: May 25, 2026 As part of this session, we venture away from supervised and semi-supervised analyses to explore unsupervised clustering approaches, namely FlowSOM and Phenograph. 
We will compare outcomes depending markers included, transformations applied, and panel used to gain a greater familiarity with how they work. We wrap up by investigating ways to visualize marker expression of cells ending up in each cluster, and how to backgate them to our manual gates.\n \n\n\nNormalization: Batch Effect or Real Biology\n\nWeek 17: June 1, 2026 During this session, we will dive into evaluating the performance of two commonly used normalization algorithms, CytoNorm and CyCombine. We will utilize our ggplot2 and functional programming toolkits to create a customized workflow to visualize the differences for our respective cell populations before and after normalization, to better evaluate how the respective parameter choices can affect the process.\n \n\n\nConference Break 2\nNo class week of June 8, 2026. If you are attending the Cyto conference, track me down at my talks and posters.\n \n\n\nDimensionality Visualization\n\nWeek 18: June 15, 2026 For this session, we explore how dimensionality visualization algorithms perform tSNE and UMAP in R using our raw and unmixed samples. In the process, we will explore how markers included, number of cells, and presence of bad quality events can impact the final visualizations. Finally, we will provide an overview of how to link to Python to additionally run PaCMAP and PHATE visualizations for use in R.\n \n\n\nAnnotating Unsupervised Clusters\n\nWeek 19: June 22, 2026 In the course of this session, we explore ways to scale our efficiency in figuring out what an unsupervised cluster of cells may be, by employing several annotation packages. We explore how these work under the hood in their decision making process, and how to link them to reference data from external repositories for additional evaluation.\n \n\n\nThe Art of GitHub Diving\n\nWeek 20: June 29, 2026 Within this session, we delve into the art of investigating a new-to-you GitHub repository. 
We discuss the overall structure of R packages stored as source files within GitHub repositories, and how to leverage this knowledge when troubleshooting errors thrown by underdocumented R packages. We discuss how to modify identified functions, evaluate them, and process to submit helpful bug reports back to the original project to help fix the issue.\n \n\n\nXML Files All The Way Down\n\nWeek 21: July 6, 2026 Breaking news alert, most of the experiment templates and worksheet layouts we work with as cytometrist are .xml files. In this session, we learn some additional coding tools to allow us to work with these types of files to extract useful data. In this session, we test out our new problem solving abilities to retrieve data from SpectroFlo and Diva .xml files to monitor how our core’s flow cytometers behaved for various users last week.\n \n\n\nUtilizing Bioconductor packages\n\nWeek 22: July 13, 2026 Many of the R packages for Flow Cytometry we have utilized in this course were packages from the Bioconductor project. We take a look at what makes Bioconductor packages unique compared to packages found on GitHub and CRAN, explore some of their specific infrastructure types for flow cytometry data, and highlight some useful packages for downstream analysis that we haven’t had time to properly explore.\n \n\n\nBuilding your First R package\n\nWeek 23: July 20, 2026 For most of the course, we have been working with R packages that other individuals built and maintained. In this session, we leverage all your hard work from the rest of the course and corral the unwieldly arsenal of functions you wrote into your first R package for easier use. 
We will discuss the individual pieces of an R package, the importance of a well-setup namespace file, and how to generate help page manuals to refer future-you back to what your individual function arguments actually do.\n \n\n\nEveryone Get’s a Quarto Website\n\nWeek 24: July 27, 2026 In this session, we will extend the knowledge of .R and .qmd files you have gained from the course and extend them to create your own website using Quarto. We discuss the additional files that are required, how to customize and render the website locally, and finally set up Quarto Pub or GitHub Pages website that we are to access online.\n \n\n\nReproducibility and Replicability\n\nWeek 25: August 3, 2026 Throughout the course, we emphasized the importance of making your workspaces and code reproducible and replicable. But what do we mean by these terms, and are there best practices we could add to our existing workflow to do this more efficiently? We explore a couple community-led efforts within the cytometry space and troubleshoot their implementation into a previously published pipeline.\n \n\n\nConference Break 3\nNo class week of August 10, 2026. If you are attending the BioC conference, track me down at my talk/poster.\n \n\n\nOpen Source Licenses\n\nWeek 26: August 17, 2026 For this course, we have relied extensively on open-source software to create our own data analysis pipelines. In the process, you may have some recollection of the various license names. But what impact do all these different names have in the end? We take a brief deep-dive into the ecosystem of free and open-source licenses, and evaluate what their respective license terms mean for us as individual users of the code, as well as potential developers extending existing codebases.\n \n\n\nValidating Algorthmic Tools\n\nWeek 27: August 24, 2026 We will be the first to admit, new implementations of algorithms as R packages are awesome! 
We appreciate the effort that went into them and making them available to the community at large. But what is the best way of evaluating whether they behave as promised, or work for our dataset? During this session, we share tips and tricks to gain better understanding of how a new R package works, and things to watch out for when evaluating complicated algortithms. We wrap with walkthrough of how to generate simulated datasets with known distributions for use in testing.\n \n\n\nDatabases and Repositories\n\nWeek 28: August 31, 2026 During this session, we will learn how to identify and retrieve .fcs files from databases. While many of us are accustomed to working with large datasets of our own making, many of us are increasingly encountering larger-than-memory datasets, as well as files stored in large repositories. In this session, we will explore several database focused R packages, before investigating how to identify and retrieve .fcs files and associated metadata of interest from repositories, namely ImmPort (and maybe FlowRepository if it can be pinged that afternoon).\n \n\n\nAssembling Web Data\n\nWeek 29: September 7, 2026 In this session, we briefly delve into the concepts of web-scraping and APIs in general. We highlight useful packages, namely httr2 and rvest, and best practices implemented to allow respectful retrieval of useful data without crashing someone’s server like some AI startup bot. We finish by providing a list of additional useful resources for those interested in learning more.\n \n\n\nFuture Directions\n\nWeek 30: September 14, 2026 In this final of the planned sessions, we revisit our solutions to the challenge problems set out during the beginning of the course. 
We also discuss potential future topics to visit in the future, and any additional resources that proved helpful throughout the course.", + "crumbs": [ + "About", + "Cytometry Core" + ] }, { "objectID": "PackageWalkthroughs.html#flowcore", @@ -1325,928 +1624,859 @@ "text": "flowMagic" }, { - "objectID": "Schedule.html", - "href": "Schedule.html", - "title": "Cytometry in R", - "section": "", - "text": "Cytometry in R: A Course for Beginners\nCytometry in R is a free virtual mini-course being organized by the Flow Cytometry Shared Resource core at the University of Maryland’s Greenebaum Comprehensive Cancer Center. This course is a passion project arising from our desire to contribute back to the community. We are excited that you have chosen to take part and look forward to helping you get started on your own learning journey.\nIf you did not complete the original interest form, and would like to be added to our mailing list, please complete the form here\nThe pre-course learning materials are now available via the Course tab. They consist of walkthroughs of how to set up your workstations with the required software, and exercises to help you become more familiar with the various teaching and coding resources we will be using throughout the course.\n\n\n\nPre-Course Walkthroughs\n\nWeek 0: January 26, 2026 In these pre-course walk-throughs, we ensure that everyone creates a GitHub account, and has their computer properly set up with the required software (including R, Positron, and Git). 
We then start to build individual participants familiarity with the software infrastructure that they will be using throughout the rest of the course.\n \n\n\nInstalling R Packages\n\nWeek 1: February 2, 2026 During this first session, we learn how to install R packages from the various repositories (CRAN, Bioconductor, GitHub), and how to troubleshoot the more typical errors that occur during this process.\n \n\n\nFile Paths\n\nWeek 2: February 9, 2026 For this second session, we focus on how to programmatically tell your computer where to locate your experimental files, introducing the concept of file paths. We explore how the various operating systems (Linux, MacOS, Windows) specify their respective folders and files, and how to identify where you are currently within the directory. Our goal by the end of this session is to have walked you through how to figure out where an .fcs file of interest is stored, and convey to your computer where you want it copied/moved to, without encountering the common pitfalls.\n \n\n\nInside an .FCS file\n\nWeek 3: February 16, 2026 In the course of this third session we will slice into an .FCS file and find out what the individual components that make it up are. In the process, we will cover the concepts of main data structures within R (vectors, matrices, data.frames, list) and how to identify what we are working with. Additionally, we will explore how various cytometry softwares store their metadata variables under various keywords that can be useful to know about.\n \n\n\nIntroduction to the Tidyverse\n\nWeek 4: February 23, 2026 Within this session, we explore how the various tidyverse packages can be utilized to reorganize rows and columns of data in ways that are useful for data analysis. We will primarily work with the MFI expression data we isolated from within the .fcs file in the previous session, identifying and isolating events that meet certain criterias. 
We introduce the concepts behind “tidy” data and how it can improve our workflows.\n \n\n\nGating Sets\n\nWeek 5: March 2, 2026 As part of this session, we learn about the two main flow cytometry infrastructure packages in R we will be working with during the course, flowcore and flowWorkspace. Throughout the session, we will compare how they differ in naming, memory usage, and accessing .fcs file metadata. We additionally explore how to add keywords to their respective metadata for use in filtering specimens of interest from the larger set of .fcs files.\n \n\n\nVisualizing with ggplot2\n\nWeek 6: March 9, 2026 During this session we provide an introduction to the ggplot2 package. We will take the datasets we have collected from the previous sessions and see how in varying in different arguments at the respective plot layers we can produce and customize many different forms of plots, focusing on both cytometry and statistics plots. We close out providing links to additional helpful resources and highlight the TidyTuesday project.\n \n\n\nApplying Transformations and Compensation\n\nWeek 7: March 16, 2026 For this seventh session, we take a closer look at the raw values of the data within our .fcs files, and explore the various ways to transform (ie. scale) flow cytometry data in R to better visualize “positive” and “negative populations”. In the process, we visualize the differences resulting from applying different transformations commonly used by commercial software. Similarly, we learn how to apply and visualize compensation in context of conventional flow cytometry files.\n \n\n\nManual and Automated Gating\n\nWeek 8: March 23, 2026 Within this session, we explore various ways to implement gating for flow cytometry files in R. We will explore manual approaches utilizing flowGate, as well as automated options with openCyto and it’s gating templates. 
We additionally will explore how to provide gate constraints and various ways to visually screen and evaluate the outcomes within the context of our own projects.\n \n\n\nConference Break 1\nNo class week of March 30, 2026. If you are attending the ABRF conference, track me down at the Complex Data Analysis in Flow Cytometry: Navigating the Landscape talk on Monday, March 30th at 4:30 PM.\n \n\n\nIt’s Raining Functions!\n\nWeek 9: April 6, 2026 In the course of this ninth session, we tackle one of the harder but most useful concepts to learn for a beginner, namely functions. We explore what they are, how their individual arguments work, how they differ from for-loops, and how to create our own to do useful work, reduce the number times code gets copied and pasted. Additionally, some functional programming best practices will be introduced, as well as provide introduction to how to use the walk and map functions from the purrr package.\n \n\n\nDownsampling and Concatenation\n\nWeek 10: April 13, 2026 Within this session, we will expand on our growing understanding of GatingSets, functions and fcs file internals to write a script to downsample your fcs files to a desired number (or percentage) of cells for a given cell population. We will additionally learn how to concatenate these downsampled files together, and save them to a new .fcs file in ways that the metadata can be read by commercial software without the scaling being widely thrown off.\n \n\n\nRetrieving data for Statistics\n\nWeek 11: April 20, 2026 Leveraging the increased familiarity working with the various packages this far in the course, in this session we will retrieve summary statistics for the gates within our GatingSet, and programmatically derrive out tidy data.frames for use in statistical analyses typically used by many Immunologist. 
In the process, we add a couple additional plot types to our ggplot2 arsenal to hold in reserve should Prism prices go up again.\n \n\n\nSpectral Signatures\n\nWeek 12: April 27, 2026 As part of this session, we will explore how to extract fluorescent signatures from our raw spectral flow cytometry reference controls. Building on prior concepts, we will learn to isolate median signatures from positive and negative gates, and how to derive and plot normalized signatures. We also introduce the plotly package and its interactive plotting features, before showcasing various packages’ attempts at facilitating signature retrieval.\n \n\n\nSimilarities and Hotspots\n\nWeek 13: May 4, 2026 During this session, we will utilize the spectral signature matrix isolated from raw spectral flow cytometry controls and explore different ways of evaluating how similar different fluorescent signatures are to each other. In the process, we will gain a better understanding of the metrics behind similarity (cosine), panel complexity (kappa), and unmixing-dependent spreading (collinearity).\n \n\n\nUnmixing in R\n\nWeek 14: May 11, 2026 In the course of this session, we will attempt a reach goal of many, namely carrying out unmixing of raw .fcs files using the spectral signatures we have isolated from our unmixing controls, and writing to new .fcs files. After evaluating the necessary internals, we will explore how various current cytometry R packages have implemented their own unmixing functions, and the various limitations that each approach has encountered.\n \n\n\nCleaning Algorithms\n\nWeek 15: May 18, 2026 In the span of this session, we will directly compare how various Bioconductor data cleanup algorithms (namely PeacoQC, FlowAI, FlowCut, and FlowClean) tackle distinguishing and removing bad quality events. We will see how they perform with previously identified good quality and horrific quality .fcs files. 
We will evaluate whether the implemented algorithmic decisions made sense, and how to customize them within our workflows to achieve our own desired goals.\n \n\n\nClustering Algorithms\n\nWeek 16: May 25, 2026 As part of this session, we venture away from supervised and semi-supervised analyses to explore unsupervised clustering approaches, namely FlowSOM and Phenograph. We will compare outcomes depending on the markers included, transformations applied, and panel used to gain a greater familiarity with how they work. We wrap up by investigating ways to visualize marker expression of cells ending up in each cluster, and how to backgate them to our manual gates.\n \n\n\nNormalization: Batch Effect or Real Biology\n\nWeek 17: June 1, 2026 During this session, we will dive into evaluating the performance of two commonly used normalization algorithms, CytoNorm and CyCombine. We will utilize our ggplot2 and functional programming toolkits to create a customized workflow to visualize the differences for our respective cell populations before and after normalization, to better evaluate how the respective parameter choices can affect the process.\n \n\n\nConference Break 2\nNo class week of June 8, 2026. If you are attending the Cyto conference, track me down at my talks and posters.\n \n\n\nDimensionality Visualization\n\nWeek 18: June 15, 2026 For this session, we explore how the dimensionality visualization algorithms tSNE and UMAP perform in R using our raw and unmixed samples. In the process, we will explore how the markers included, the number of cells, and the presence of bad quality events can impact the final visualizations. Finally, we will provide an overview of how to link to Python to additionally run PaCMAP and PHATE visualizations for use in R.\n \n\n\nAnnotating Unsupervised Clusters\n\nWeek 19: June 22, 2026 In the course of this session, we explore ways to scale our efficiency in figuring out what an unsupervised cluster of cells may be, by employing several annotation packages. 
We explore how these work under the hood in their decision-making process, and how to link them to reference data from external repositories for additional evaluation.\n \n\n\nThe Art of GitHub Diving\n\nWeek 20: June 29, 2026 Within this session, we delve into the art of investigating a new-to-you GitHub repository. We discuss the overall structure of R packages stored as source files within GitHub repositories, and how to leverage this knowledge when troubleshooting errors thrown by underdocumented R packages. We discuss how to modify identified functions, evaluate them, and the process of submitting helpful bug reports back to the original project to help fix the issue.\n \n\n\nXML Files All The Way Down\n\nWeek 21: July 6, 2026 Breaking news alert, most of the experiment templates and worksheet layouts we work with as cytometrists are .xml files. In this session, we learn some additional coding tools to allow us to work with these types of files to extract useful data. We then test out our new problem-solving abilities to retrieve data from SpectroFlo and Diva .xml files to monitor how our core’s flow cytometers behaved for various users last week.\n \n\n\nUtilizing Bioconductor packages\n\nWeek 22: July 13, 2026 Many of the R packages for Flow Cytometry we have utilized in this course were packages from the Bioconductor project. We take a look at what makes Bioconductor packages unique compared to packages found on GitHub and CRAN, explore some of their specific infrastructure types for flow cytometry data, and highlight some useful packages for downstream analysis that we haven’t had time to properly explore.\n \n\n\nBuilding your First R package\n\nWeek 23: July 20, 2026 For most of the course, we have been working with R packages that other individuals built and maintained. In this session, we leverage all your hard work from the rest of the course and corral the unwieldy arsenal of functions you wrote into your first R package for easier use. 
We will discuss the individual pieces of an R package, the importance of a well-setup namespace file, and how to generate help page manuals to refer future-you back to what your individual function arguments actually do.\n \n\n\nEveryone Gets a Quarto Website\n\nWeek 24: July 27, 2026 In this session, we will take the knowledge of .R and .qmd files you have gained from the course and extend it to create your own website using Quarto. We discuss the additional files that are required, how to customize and render the website locally, and finally set up a Quarto Pub or GitHub Pages website that we are able to access online.\n \n\n\nReproducibility and Replicability\n\nWeek 25: August 3, 2026 Throughout the course, we emphasized the importance of making your workspaces and code reproducible and replicable. But what do we mean by these terms, and are there best practices we could add to our existing workflow to do this more efficiently? We explore a couple community-led efforts within the cytometry space and troubleshoot their implementation into a previously published pipeline.\n \n\n\nConference Break 3\nNo class week of August 10, 2026. If you are attending the BioC conference, track me down at my talk/poster.\n \n\n\nOpen Source Licenses\n\nWeek 26: August 17, 2026 For this course, we have relied extensively on open-source software to create our own data analysis pipelines. In the process, you may have some recollection of the various license names. But what impact do all these different names have in the end? We take a brief deep-dive into the ecosystem of free and open-source licenses, and evaluate what their respective license terms mean for us as individual users of the code, as well as potential developers extending existing codebases.\n \n\n\nValidating Algorithmic Tools\n\nWeek 27: August 24, 2026 We will be the first to admit, new implementations of algorithms as R packages are awesome! 
We appreciate the effort that went into building them and making them available to the community at large. But what is the best way of evaluating whether they behave as promised, or work for our dataset? During this session, we share tips and tricks to gain a better understanding of how a new R package works, and things to watch out for when evaluating complicated algorithms. We wrap up with a walkthrough of how to generate simulated datasets with known distributions for use in testing.\n \n\n\nDatabases and Repositories\n\nWeek 28: August 31, 2026 During this session, we will learn how to identify and retrieve .fcs files from databases. While many of us are accustomed to working with large datasets of our own making, many of us are increasingly encountering larger-than-memory datasets, as well as files stored in large repositories. In this session, we will explore several database-focused R packages, before investigating how to identify and retrieve .fcs files and associated metadata of interest from repositories, namely ImmPort (and maybe FlowRepository if it can be pinged that afternoon).\n \n\n\nAssembling Web Data\n\nWeek 29: September 7, 2026 In this session, we briefly delve into the concepts of web-scraping and APIs in general. We highlight useful packages, namely httr2 and rvest, and the best practices implemented to allow respectful retrieval of useful data without crashing someone’s server like some AI startup bot. We finish by providing a list of additional useful resources for those interested in learning more.\n \n\n\nFuture Directions\n\nWeek 30: September 14, 2026 In this final planned session, we revisit our solutions to the challenge problems set out during the beginning of the course. 
We also discuss potential topics to visit in the future, and any additional resources that proved helpful throughout the course.", - "crumbs": [ - "About", - "Cytometry Core" - ] - }, - { - "objectID": "index.html", - "href": "index.html", - "title": "About", + "objectID": "course/00_BonusContent/Immport/images/index.html", + "href": "course/00_BonusContent/Immport/images/index.html", + "title": "ImmPort - Downloading Datasets", "section": "", - "text": "Cytometry in R is a free weekly mini-course being offered both in-person and online by the Flow Cytometry Shared Resource staff at the University of Maryland Greenebaum Comprehensive Cancer Center. Its primary audience is for those with prior flow cytometry knowledge, who have limited previous experience with the programming language R. However, we welcome everyone regardless of their existing  flow cytometry  or coding experience.\nThis course is a passion project arising from our desire to contribute back to the community. We are excited that so many of you have chosen to sign up, and look forward to helping you get started on your own learning journeys.\nFor more information on topics-covered, please see our schedule.\nIf you did not previously complete the interest form, and would like to be added to our mailing list, please complete the form here\n\n\nAbout\n\n\nMotivation\nWhile many cytometry enthusiast express an interest in learning how to carry out flow cytometry analyses in R, they often do not know where to start. Additionally, many of the limited existing resources are focused towards users with intermediate bioinformatic skills, contributing to a greater barrier for entry for those just starting out. 
Our motivation in offering this mini-course tailored towards beginners is to make the learning journey smoother than the one we ourselves experienced.\n\nWhile designing the course, we kept the following concepts in mind:\n\nBeginning coders benefit both from detailed examples that they can initially work through on their own time, as well as from less defined problems whose troubleshooting builds the thought process and skills needed for coding.\nSome topics will take individuals a longer time to fully grasp. Providing a format and resources that make it possible to revisit the material multiple times is incredibly helpful. Likewise, life is busy, and missing a workshop session is highly probable. If this happens, it shouldn’t make or break the ability of the individual to understand the rest of the course.\nConsistency is key, and being able to apply what you are learning to your own datasets, files, and questions of interest helps achieve this.\n\n\n\n\nCourse Details\n\nEach week, the mini-course will cover a particular topic for an hour. This individual class is offered on multiple days, at different times, both in-person and online. We invite you to attend the one that best fits your schedule each week. If life gets busy and you can’t make your regular day, the online livestream recordings will be available on YouTube.\n \n\nCourse Materials\n\nWe will release the course materials for the upcoming week on Sundays 2200 EST (Monday 0300 GMT+0) via our course website and GitHub. These materials will normally be Quarto Markdown documents containing code, explanations, and other resources needed for that week. If you have your own data, you can use your own data! 
If you don’t have any data, we will make sure to provide some of our own available data for each lesson so that you can follow along.\nIn our commitment to open-source and open-science, all teaching materials are freely offered under a CC-BY-SA license, while all code examples are offered under the AGPL-3.0 copyleft license.\n \n\n\nIn-Person (Baltimore)\n\nFor those who are local and attending in person, the class will be offered on Monday, Tuesday and Thursday from 4-5 pm EST in Bressler Research Building Conference Room 7-035 (around the corner from the Flow Core).\nWe invite you to make whichever session best fits your schedule. If you have your own laptop, feel free to bring it. If you don’t have a laptop, please reach out; the Flow Core has 6 laptops running Linux that we can lend out to participants for use during the session.\nFor those who arrive early, we will have a limited number of second screens with provided mouse and keyboard that you can plug a laptop into via HDMI cable to set up a larger workstation. For those arriving later, the room has enough space (and electrical plugs) for up to 20 people, but you will need to balance a laptop on your lap.\n \n\n\nOnline (Worldwide)\n\nFor those joining us virtually, we will have three separate livestreams throughout the week on YouTube. These will be offered on:\n\nTuesday 2200 EST (Wednesday 0300 GMT+0)\nWednesday 1600 EST (Wednesday 2100 GMT+0)\nThursday 1000 EST (Thursday 1500 GMT+0)\n\n \n\n\nRecordings\n\nAll three livestreams will be recorded and available on YouTube immediately afterwards. Our plan is to eventually circle back after the course and properly edit them (ie. fewer minutes of random background noise, highlighting the relevant lines of code, time-stamps, subtitles, translations, etc.) as time allows, so that they can serve as a more permanent resource.  \n\n\nDiscussion Forum\n\nWe will be using our GitHub Discussions page as a community forum. 
This will allow us to answer questions, and benefit from insights from others in the community. One advantage of having so many people signed up for the course is that if you have a question, someone else likely does as well, so go start a post and ask it!\n \n\n\nOptional Take-Home Problems\n\nEach week, we will offer optional take-home problems. These are intended to allow you to work with your own data on similar problems, but in a not-so-structured manner. Challenges that you encounter and overcome during the process will help grow your problem-solving and debugging skills, and help solidify concepts covered during the course.\nTo get feedback on these problems, you can reach out to the community on the Discussions page, or, once you are far enough along, open a pull request to the homework branch and we will provide additional feedback.\n \n\n\nCost\nIs there a cost to participate? No, it’s absolutely free! Is there a catch? Yes, you learn R, and may wind up with strong feelings about flowframes vs. cytoframes. This is also our first year offering this course, so we will sporadically ask you to fill out a feedback form to help us improve.\n\n \n\n\nComputing Requirements\n\nFor those attending online, you will need a computer with internet access. Operating system shouldn’t matter, as we will be offering code examples for Windows, Mac and Linux. As with all things flow-cytometry software, having a faster CPU with multiple cores, more RAM and greater storage space is generally helpful, but not a deal breaker.\n\nYou will need to be able to install the required software (R, Rtools, Positron, Quarto, and Git) as well as install and compile R packages from the CRAN and Bioconductor repositories (as well as a few GitHub-based R packages). 
Installation walkthroughs for each computer operating system can be found here.\nFor those using university or company administered computers, please be aware that you may not have the necessary permissions to install these directly, and may need to reach out to your IT department to help get these initial requirements set up. If you are using your own computer, congratulations, you are your own system administrator, and should already have the necessary permissions.\n\nFor those attending in-person, we have set up a pop-up computer lab in the conference room. For those who arrive early, we have a limited number of second screens with provided mouse and keyboard that you can plug a laptop into via HDMI cable to set up a workstation. For those arriving later, the room has enough space (and electrical plugs) for 20 people, but you will need to balance a laptop on your lap. If you have your own laptop, feel free to bring it. If you don’t have a laptop, the Flow Core has 6 loaner laptops running Linux that we can let participants use for that session.\n\n\n\n\nLicense\nIn our commitment to open-science and open-source, all teaching materials are freely offered under a CC-BY-SA license, while all code examples are offered under the AGPL-3.0 copyleft license." }, { "objectID": "ExistingResources.html", "href": "ExistingResources.html", "title": "Existing Resources", "section": "", "text": "We are not the first “Cytometry in R” course, nor will we be the last. 
This page links to the already existing online Cytometry in R resources that we have encountered and benefited from during our own learning journey. May they prove useful to you as you progress through yours!\n\n\n\nChristopher Hall - Flow Cytometry Data Analysis in R\nCytometry-R-Scripts: R scripts to help with your flow cytometry analysis\nR_flowcytometry_course: The files and presentation from the Cytometry Core Facility flow cytometry data analysis course in R\n\nInstallation and Loading Data\n(1) Flow Cytometry Data Analysis in R - Installation and Loading Data\n\n\n\n\nCompensation, Cleaning, Transformation, Visualization\n(2) Flow Cytometry Data Analysis in R: compensation, cleaning, transformation, visualization\n\n\n\n\nGating with flowWorkspace\n(3) Flow Cytometry Data Analysis in R: gating with flowWorkspace\n\n\n\n\nVisualization\n(4) Flow Cytometry Data Analysis in R: Visualisation\n\n\n\n\n\n\n\nOzette Technologies - BioC 2023 Workshop\nWorkshop given at the Bioc2023 conference, authored by Arpan Neupane and Andrew McDavid.\nWorkshop: Reproducible and programmatic analysis of flow cytometry experiments with the cytoverse\n\n\n\n\n\n\nPritam Kumar Panda - Flow Cytometry Data Analysis & Visualization in R using CytoExploreR\nFlow-Cytometry-analysis-in-R\nCytoExploreR-Interactive-visualization\n\nComplete Guide\nFlow Cytometry Data Analysis & Visualization in R using CytoExploreR: Complete Guide\n\n\n\n\n\n\n\nBioinformatics DotCa - Introduction to Flow Cytometry in R\n\nIntroduction to Flow Cytometry in R\nIntroduction to Flow Cytometry in R\n\n\n\n\nExploring FCM Data in R\nExploring FCM Data in R\n\n\n\n\nProcessing and Quality Assurance of FCM Data\nProcessing and Quality Assurance of FCM Data\n\n\n\n\n1D Dynamic Gating\n1D Dynamic Gating\n\n\n\n\nClustering and Additional FCM Tools\nClustering and Additional FCM Tools\n\n\n\n\n\nTulika Rai - Learn Innovatively With Me\n\nflowAI Flow Cytometry Data Cleaning using R\nflowAI Flow 
Cytometry Data Cleaning using R: A Step-by-step Tutorial\n\n\n\n\ntSNE UMAP TRIMAP colorization or Transformation using R script\ntSNE UMAP TRIMAP colorization or Transformation using R script\n\n\n\n\n\n\n\nGivanna Putri - Introduction to Cytometry Data Analysis in R workshop\nACS 2021 Workshops - Introduction to Cytometry Data Analysis in R workshop\n\n\n\n\n\n\nTimothy Keyes -\n{tidytof}: Predicting Patient Outcomes from Single-cell Data using Tidy Data Principles\n\n\n\n\n\n\nRyan Duggan - Cytometry on Air\nCytometry on Air: Analyzing Flow Cytometry Data in R Presentation by TJ Chen and Greg Finak,\n\n\n\n\n\n\nGuillaume Beyrend - Learn Cytometry\nLearn Cytometry Originally appeared to have been paywalled, doesn’t currently appear to be the case.\n\n\n\n\nHong Qin - flow analysis in R\n\nFlow Analysis in R\nflow analysis in R, bio125, Spring 2015\n\n\n\n\nFlow Cytometer Data Analysis\nBIO233 demo, flow cytometer data analysis, simple example\n\n\n\n\n\n\n\nSwayam Prabha - Flow cytometry data analysis in R/Bioconductor\nLecture 15 : Flow cytometry data analysis in R/Bioconductor" + "text": "Background\nDue to an encountered issue pulling in new updates for CytometryInR when you have an optional take-home problem still waiting to be reviewed, we will be modifying the protocol for submitting a pull request. You will first create a local homework branch, and submit from your branch to our homework branch. 
That should hopefully prevent any incoming changes from main to main from causing conflicts.\n\n\nGetting Started\nThe first step is to open Positron, and navigate through the dropdown options to the Create a Branch option\n\n\nAnd provide a name (since the homework was for Week 02, we set it as Week 02)\n\n\nNext, select the option to Publish the Branch\n\n\nFrom here, importantly, select the option to make it a branch of YOUR forked CytometryInR version (since you don’t have permissions for the main course repository)\n\n\nAt this point, your new branch will have been created. You can check by entering the following code in the terminal, and verifying the * is next to the Week02 branch\n\ngit branch\n\n\n\nOnce you have confirmed you are in your homework branch, go ahead and transfer in all the files you will be submitting for the optional take-home problems\n\n\nAnd once done, make a commit as you would normally\n\n\nAs you can see, you will now be ahead of the main branch by one commit. Go ahead and sync your branch to GitHub so the contents are available remotely for use in the pull request.\n\n\nOnce synced, you will notice that your branch is now up to date with the remote (cloud) icon. Next, proceed to check out the main branch, either via the dropdown or via the terminal using\n\ngit checkout main\n\n\n\nReturning to GitHub, you will see that your homework branch has received the incoming changes. 
You are now safe to sync your fork to bring in changes from the main course CytometryInR repository.\n\n\nAnd confirm yes.\n\n\nReturning to Positron, once you have verified you are in your main branch, proceed to pull in the changes\n\n\nIf you switch between branches, you will notice you have both the new changes to main, as well as your week-specific side branch, co-existing peacefully.\n\n\nYou are then safe to make a pull request from your homework branch to our homework branch, without running the risk of an additional commit from our end (or a delay in reviewing) causing issues.\n\n\n\n\nAdditional Resources\nThis method should hopefully avoid the previously encountered issues. Apologies once again to those who encountered the issue! Still learning how to use some of these aspects of version control in a GitHub context." }, { "objectID": "course/00_Floreada/slides.html#floreada", "href": "course/00_Floreada/slides.html#floreada", "title": "Using Floreada", "section": "Floreada", "text": "Floreada\nLoading Dataset\n\n\n\n\n\n\n\n\n.\n\n\nFirst, open your web browser and navigate to the website\nClick on Start to proceed to the next page." 
}, { - "objectID": "course/index.html#resources", - "href": "course/index.html#resources", - "title": "Cytometry in R", - "section": "Resources", - "text": "Resources\nThe pre-course learning materials are now available, providing walkthroughs of how to set up your workstations with the required software, and exercises to help you become more familiar with the various teaching and coding resources we will be using throughout the course.\nNarrated versions of the walk through materials are now also available via YouTube" + "objectID": "course/00_Floreada/slides.html#cytoml", + "href": "course/00_Floreada/slides.html#cytoml", + "title": "Using Floreada", + "section": "CytoML", + "text": "CytoML\n\n\n\n\n\n\n\n\n.\n\n\nDue to a unknown formatting bug, the Floreada produced FlowJo v10 .wsp is not directly accessible by CytoML at the time of this course. However, the issue is resolved as soon as the file is opened the first time within FlowJo v10, regardless of whether you have a log in or not. Strange? Yes, but we will take the workaround.\nSo, for anyone on Windows or MacOS, download FlowJo v10. Once installed, open the software, and close the login popups. Once there, open the Floreada created FlowJo.wsp file. Since you haven’t logged in, it won’t show any events. But it will correct the formatting bug. Close the software, and return to R. Your Floreada sourced .WSP file should now be readable by CytoML.\nOdd? For sure. Fixable? Likely, I will set a reminder to work with the Floreada and CytoML devs to see if we can cut out the need for this workaround." }, { - "objectID": "course/index.html#in-person-baltimore", - "href": "course/index.html#in-person-baltimore", - "title": "Cytometry in R", - "section": "In-Person (Baltimore)", - "text": "In-Person (Baltimore)\nFor those joining us in person, the class is being offered on Monday, Tuesday and Thursday from 4-5 pm EST in Bressler Research Building Room 7-035. 
We invite you to make whichever session best fits your schedule. Monitors to plug your laptops in will be available on a first come, first served basis. These in-person sessions will not be recorded, but with the smaller class size you will have our undivided attention should you have any questions." + "objectID": "course/00_Git/slides.html#new-folder-from-template", + "href": "course/00_Git/slides.html#new-folder-from-template", + "title": "Version Control with Git", + "section": "New Folder from Template", + "text": "New Folder from Template\n\n\n\n\n\n\n\n\n.\n\n\nSince Positron can use multiple programming languages, when we select “New Folder from Template” we will be asked what kind of folder template we want to use. Since we are working in R, we will select the “R Project” option." }, { - "objectID": "course/index.html#virtual-worldwide", - "href": "course/index.html#virtual-worldwide", - "title": "Cytometry in R", - "section": "Virtual (Worldwide)", - "text": "Virtual (Worldwide)\nFor those joining us virtually, we will have three separate livestreams throughout the week on YouTube. These will be offered on:\n\nTuesday 2200 EST (Wednesday 0300 GMT+0)\nWednesday 1600 EST (Wednesday 2100 GMT+0)\nThursday 1000 EST (Thursday 1500 GMT+0)\n\nAll three livestreams will be recorded and available on YouTube immediately afterwards." 
+ "objectID": "course/00_Git/slides.html#creating-subfolders", + "href": "course/00_Git/slides.html#creating-subfolders", + "title": "Version Control with Git", + "section": "Creating SubFolders", + "text": "Creating SubFolders\n\n\n\n\n\n\n\n\n.\n\n\nOnce your new project folder has opened, you should be seeing the main layout elements that we briefly covered in the Positron walk-through.\nFor this section, we will primarily be focused on what is happening within the primary side bar on the left, where changes to the individual files within the folder since the last save/commit will be reflected by colored text.\nFor my own projects, there are some elements of organization that I go ahead and add for each new folder. These include both a data and an images subfolders to help keep things a little more organized.\nTo create these folders, we would click on the respective add folder (+) button on the side bar. Files and Folders can be clicked and dragged within the primary side bar to move things to new folder locations." }, { - "objectID": "course/index.html#discussion-forum", - "href": "course/index.html#discussion-forum", - "title": "Cytometry in R", - "section": "Discussion Forum", - "text": "Discussion Forum\nWe will be using the Cytometry in R Discussions page as a community forum, and a place to ask questions, celebrate wins, and provide feedback. After creating a a GitHub account, pleae go introduce yourself." + "objectID": "course/00_Git/slides.html#creating-files", + "href": "course/00_Git/slides.html#creating-files", + "title": "Version Control with Git", + "section": "Creating Files", + "text": "Creating Files\n\n\n\n\n\n\n\n\n.\n\n\nIn context of this course, we will primarily be working with two types of files when coding:\n\nR Scripts: These files end in .R. These contain only code (with occasional # comment line). 
These are often used for self-contained code that once we get them working we rarely need to modify.\nQuarto Markdowns: These files end in .qmd. They contain a .yaml header, followed by a mix of regular written text (often explanations or other documentation), and sections (ie. chunks) that contain code. These are used when we are still getting the code to work, when we need to modify inputs frequently, or simply when we need to document what and why we are doing something to make life easier for our future-self two months from now." }, { - "objectID": "course/00_BonusContent/PullConflicts/index.html", - "href": "course/00_BonusContent/PullConflicts/index.html", - "title": "Take-Home Problems - Pull Fix Resolution", - "section": "", - "text": "Background\nFor those who have turned in the homework to the Cytometry in R - homework branches, many report having encountered merge issues pulling in the next week’s data if the pull request hasn’t been resolved yet. Creating a parallel branch, and submitting homework from there to the homework branch might solve the issue? But we will need to test that out. For now, here are the steps we used to resolve the issue locally without needing to delete and re-download.\n\n\nGetting Started\nStart off checking your GitHub forked version of the CytometryInR, notice how many commits behind you are.\n\n\nIf you haven’t submitted the optional Take-Home problems via a pull-request, proceed to do so.\n\n\nThis was an example of the page you see when submitting the pull request. Upon submission, your branch may show merge conflicts due to difference in rendered docs. This is okay, we will resolve it on our end.\n\n\nWhat we will end up doing is ignore the changes and accept the current version. This issue is likely due to the weekly updating of the data resulting in new sidebar links. 
We will then mark these issues as resolved\n\nWe will then mark the issues as resolved.\n\n\nOn return to the homework, we will be able to merge the branch once again. We will likely make our suggestions at this point for this branch.\n\n\nHowever, after pull request has been merged, you will see your branch is way ahead (due to everyone elses homework commits). This is the area we will need to address via the new branch method.\n\n\nFor now, proceed to discard the changes (you don’t need the other participants homeworks cluttering your folder)\n\n\nYou will then appear as caught up with the main branch.\n\n\nOn return to Positron, attempt to pull\n\n\nHowever, since your homework commit is still present, you will receive a pop-up asking you to see the GitLog. If you scroll up the problem log, it will give you several options.\n\n\nYou will need to enter the following code into your terminal tab:\n\ngit config pull.rebase TRUE\n\n\n\nThis will result in a branched appearance, and the button asking you to sync the changes.\n\n\nUpon doing so you will have a restored status vs. the main cytometry in R project folder.\n\n\n\nTake Away\nWe have encountered a first growing pain for the course, in that the pull-request method we have been using still causes merge conflicts. 
We will be going to a homework branch to homework branch pull-request approach going forward, I will send out additional instructions on how to do so shortly.\nThanks for your patience!\nDavid\n\n\nAdditional Resources" + "objectID": "course/00_Git/slides.html#qmd-files", + "href": "course/00_Git/slides.html#qmd-files", + "title": "Version Control with Git", + "section": "QMD Files", + "text": "QMD Files\n\n\n\n\n\n\n\n\n.\n\n\nOnce this is done, we can now see we have a new .qmd file (“Example.qmd” in this case).\n\n\n\n\n\nYAML\n\n\n\n\n\n\n\n\n.\n\n\nAs previously mentioned, the start of a Quarto Markdown file contains a YAML code chunk that is used to set formatting choices (we will explore this in-depth during the next section).\nWhat designates the location of the YAML block are three hyphens at the start, and three hyphens at the end. For this example, we will also provide a “title:” and “format:” field for the time being (see additional options here)." }, { - "objectID": "course/00_BonusContent/index.html", - "href": "course/00_BonusContent/index.html", - "title": "Bonus Content", - "section": "", - "text": "This is a miscellaneous page to host walk-throughs of topics that come up via the Discussion Page. Rather than re-explain how to in the comments, I want to have a place to post short-walkthroughs to solve these issues, while avoiding incorporating it into the existing walk-throughs at this point in time. I hope you find it useful, and pardon the organized chaos of miscellaneous topics." 
+ "objectID": "course/00_Git/slides.html#local-version-control", + "href": "course/00_Git/slides.html#local-version-control", + "title": "Version Control with Git", + "section": "Local Version Control", + "text": "Local Version Control\n\n\n\n\n\n\n\n\n.\n\n\nHaving introduced the main elements of a Quarto Markdown file, let’s turn our attention to the tab within the editor showing our newly created .qmd file.\nWe can see there is a solid circle next to the file name, and it is appearing as green. The circle denotes unsaved changes, which we can correct by clicking on the Save Button to save the changes to our file." }, { - "objectID": "course/00_BonusContent/index.html#windows-arm", - "href": "course/00_BonusContent/index.html#windows-arm", - "title": "Bonus Content", - "section": "Windows Arm", - "text": "Windows Arm\n\nPositron\nOn the main Positron installation page, only the installers for Windows Computers with x86 chips are currently shown. There is a beta (experimental) version of Positron for Windows ARM (ex. Snapdragons), but it needs to be installed from Positron’s GitHub releases page.\n\nPlease note, you should download the most recent version that is available to you, as they continue to update it and fix bugs. As of February 04, 2026, you would also need to install Quarto separately.\nTo install Quarto, first navigate to their website. Quarto for Windows ARM was implemented in 2023, so the regular installer should work.\n\nAlso, you would need a Python arm64 installation installed if you decide you want to venture into using Python at any point." 
+ "objectID": "course/00_Git/slides.html#remote-version-control", + "href": "course/00_Git/slides.html#remote-version-control", + "title": "Version Control with Git", + "section": "Remote Version Control", + "text": "Remote Version Control\nCopying Project Folder to GitHub\n\n\n\n\n\n\n\n\n.\n\n\nWhile having local version control in place is helpful when you need to revert back after encountering issues, where Git shines is the ability to pass your changes to your online GitHub repository.\nNot only does this allow you to switch between computers, but should something disastrous happen to your main computer, you still have all your hard work backed up and readily accessible.\nFor this subsection, first, double check that Positron is still connected to your GitHub account by checking the user tab on the bottom-left. If not, repeat the connection setup." }, { - "objectID": "course/00_BonusContent/index.html#visual-mode", - "href": "course/00_BonusContent/index.html#visual-mode", - "title": "Bonus Content", - "section": "Visual Mode", - "text": "Visual Mode\nWithin Positron, there exist a toggle button to switch between source and visual mode on Quarto documents. But what do you do when you can’t find it?\n\nTuns out…. Visual Mode is currently broken, so the developers removed it about two weeks ago.\nThe current way to switch to it is via right-click, then select edit visual mode.\n\nVice versa, once there, you can revert by right-clicking and selecting Edit Source Mode\n\nThe process of figuring what is going on highlights how to use a GitHub Discussion Page. This is Positrons when I searched for visual button. I then found that a similar question was asked 3 days ago that led me to the linked thread above." 
+ "objectID": "course/00_GitHub/slides.html#creating-an-account", + "href": "course/00_GitHub/slides.html#creating-an-account", + "title": "Using GitHub", + "section": "Creating an Account", + "text": "Creating an Account\n\n\n\n\n\n\n\n\n.\n\n\nWe will first navigate to the GitHub homepage. If you haven’t previously created an account, click on the button to sign up for an account." }, { - "objectID": "course/00_BonusContent/index.html#not-detecting-git", - "href": "course/00_BonusContent/index.html#not-detecting-git", - "title": "Bonus Content", - "section": "Not Detecting Git", - "text": "Not Detecting Git\nIf you leave the “Initialize Git Repository” option unclicked when setting up a New Folder from Template, Git will not be active within your project folder.\n\nAs a result, when you try to use the usethis packages use_github(private=true) function, you will get an error that resembles the one below\n\nTo initiate a Git repository after the fact, you will need to go to the Version Control tab in the action bar, and select the option.\n\nThen, you will need to stage the files you want to work with, and commit them.\n\nuse_github(private=TRUE) should now be functional at that point. 
However, you can also choose to continue via Positron’s interface instead by selecting Publish.\n\nIt will then ask you whether you want to save it as either a Public or a Private repository.\n\nAnd if all goes well, you will see the “Successs” pop-up in the lower-right" + "objectID": "course/00_GitHub/slides.html#github-profile", + "href": "course/00_GitHub/slides.html#github-profile", + "title": "Using GitHub", + "section": "GitHub Profile", + "text": "GitHub Profile\n\n\n\n\n\n\n\n\n.\n\n\nUpon creating a brand new account, your GitHub homepage will initially look rather empty, and can be intimidating to navigate for the first time.\nFor now, on the upper right, go ahead and click on the default profile picture icon…" }, { - "objectID": "course/00_Floreada/index.html", - "href": "course/00_Floreada/index.html", - "title": "Using Floreada", - "section": "", - "text": "For the YouTube livestream recording, see here\nFor screen-shot slides, click here", - "crumbs": [ - "About", - "Getting Started", - "00 - Floreada" - ] + "objectID": "course/00_GitHub/slides.html#github-readme", + "href": "course/00_GitHub/slides.html#github-readme", + "title": "Using GitHub", + "section": "GitHub ReadMe", + "text": "GitHub ReadMe\n\n\n\n\n\n\n\n\n.\n\n\nWith this done, we modify your GitHub profile by adding one customized element, a ReadMe page. This will be used for a couple projects during the course, and can be personalized further in the future.\nTo create a ReadMe page for your profile, we will navigate to the upper right of the screen and click on the + sign." }, { - "objectID": "course/00_Floreada/index.html#floreada", - "href": "course/00_Floreada/index.html#floreada", - "title": "Using Floreada", - "section": "Floreada", - "text": "Floreada\n\nLoading Dataset\nFirst, open your web browser and navigate to the website\nClick on Start to proceed to the next page.\n\n\n\n\n\n\nOnce that is done, select the File tab on the upper navigation bar. 
Then click on Open File(s).\n\n\n\n\n\n\n\nFrom there, select your .fcs files of interest, and click Open.\n\n\n\nThe .fcs files will now load in, and you should see a view similar to the one below. On the left side-bar, you have your gating options (Rectangle, Polygon, Range, Elipse, Quad, etc). Next to these on the right you have the FCS files that are loaded into the workspace. Then on the right, you have the visual display for your selected specimen.\n\n\n\n\n\nSwitching Axis Markers\nIf you left click on the axis name (SSC-H or FSC-H in this case), you will be able to select other markers by which to gate your specimen. For the provided example, we were using a raw spectral flow cytometry .fcs file, so the names of the detectors are present.\n\n\n\nFor now, I plan to start off by gating for singlets. I switch the y-axis to FSC-H, and then proceed to switch the x-axis to FSC-A.\n\n\n\n\n\nCreating Gates\nWith this done, I can now select the Poly gate tab on the upper left.\n\n\n\nThen manually click on the locations on the plot to add the individual gate nodes.\n\n\n\nTo complete the gate, I click back to the original node point. At this point, the popup will allow you to name the gate.\n\n\n\nTo adjust the polygon gate, you can click on a node and drag it to expand or contract in a particular direction\n\n\n\nTo move the entire gate, first click on a node to select the gate, then click in the center of the gate to adjust its location.\n\n\n\n\n\nAdditional Gates\nUnfortunately for those with the force-of-habit from using other softwares, double-clicking within the gate doesn’t do anything. To continue gating on the selected cells, you will need to click on the newly created gate name on the left. This will result in visualizing the isolated cells.\n\n\n\nOnce this is done, you can repeat the previous steps to change the axis markers and create a second gate. 
For this example, we went with a “Cells” gate to exclude debris from this particular sample.\n\n\n\nHaving created the “Cells” gate, we will be switching gating based on FSC and SSC to using the detector parameters.\nThe samples in this example were acquired to derive the cell counts and concentration of various cell populations within cryopreserved cord and peripheral blood mononuclear cells (CBMC and PBMC) specimens after thawing.\nThey were stained with CD19 BV421, CD45 PE, and CD14 APC on a 5-Laser Cytek Aurora, before unmixing, this would correspond respectively to V1, YG1/B4, R1/YG4 peaks respectively.\n\n\n\n\nScaling/Transformation\nWhen we switch the axis, we can see that the scaling/transformation is not ideal, as the staining and not-staining populations are scrunched up together in the center of the plot.\n\n\n\nTo change the scaling/transformation, we need to click directly on the axis.\n\n\n\nFrom there, when we click on the drop-down, we see the various transformation options. We will select Logicle, given we are working with spectral flow cytometry files.\n\n\n\nThis y-axis values are subsequently visualized with the logicle transformation applied, increasing our resolution between the positive and negative population.\n\n\n\nWe can then repeat this for the x-axis, adjusting the fine-tune options for the scaling as needed.\n\n\n\n\n\nNavigating Gating Hierarchy\nWith this done, let’s first draw a rectangle gate for the CD45+ (B4-A) cells.\n\n\n\nAnd then selecting that population by clicking on the gate name, let’s proceed and gate the CD19+ cells (V2-A).\n\n\n\nAs you can see, we now have the various gates present in the gating hierarchy for the respective .fcs file. 
To return to a previous gated population, we would click on the parent population above it.\n\n\n\nWe can subsequently add an additional gate at this gating level for the likely debris population (the threshold setting was suboptimal for this experimental run).\n\n\n\n\n\nCopying Gates\nThis was the process for gating for a single specimen. To copy gates over to the other specimens, we have two options. First, holding down your Ctrl (or equivalent) button, you can click on the individual gate names.\n\n\n\nFrom there, you can drag them down to the next specimen and apply them.\n\n\n\nAlternatively, you can drag down the highlighted gated to the Pipelines Tab, and apply to All Files. This will result in the gates being copied to all specimens in the experiment.\n\n\n\n\n\n\nAdjustments within Pipelines will carry over to all other respective unmodified specimens that share it’s gates.\n\n\n\nOnce this is done, I recommend cycling through the gates for each specimen, just to ensure that the gates were positioned correctly before saving the workspace.\n\n\n\n\n\nSaving Workspace\nWith everyone now “correctly” gated, we can proceed to save the workspace so that we can reopen it later from another browser.\nTo do this we open the File tab from the upper navigation bar, and select Save Workspace.\n\n\n\nFrom there we have a couple options, for now let’s select Floreada Workspace. 
Where it is saved at will depend on your individual browser settings, so watch for a popup.\n\n\n\nAlternatively (and crucially for the CytoML pipeline) we can also choose to save it as a FlowJo v10 .wsp file.\n\n\n\nIn both cases, you will end up with Workspace files that can be used later to access your created gates\n\n\n\n\n\nReopening Workspace\nTo reopen the Floreada workspace within the browser, reopen the website, and select the Open File(s) option.\n\n\n\nFrom there, select both the Floreada Workspace file as well as the .fcs files\n\n\n\nAt which point you will now be back to the point you last saved at.", - "crumbs": [ - "About", - "Getting Started", - "00 - Floreada" - ] + "objectID": "course/00_GitHub/slides.html#github-repository", + "href": "course/00_GitHub/slides.html#github-repository", + "title": "Using GitHub", + "section": "GitHub Repository", + "text": "GitHub Repository\n\n\n\n\n\n\n\n\n.\n\n\nHaving set up your GitHub profile, it now is time to make sure you have access to our course materials. We will have you navigate to our course’s GitHub profile\nOn the profile page, you will be able to see our version of the README, our repositories, and the Contributions graph and Contribution activity sections.\nPlease click on the CytometryInR to navigate to its repository (folder)" }, { - "objectID": "course/00_Floreada/index.html#cytoml", - "href": "course/00_Floreada/index.html#cytoml", - "title": "Using Floreada", - "section": "CytoML", - "text": "CytoML\nDue to a unknown formatting bug, the Floreada produced FlowJo v10 .wsp is not directly accessible by CytoML at the time of this course. However, the issue is resolved as soon as the file is opened the first time within FlowJo v10, regardless of whether you have a log in or not. Strange? Yes, but we will take the workaround.\nSo, for anyone on Windows or MacOS, download FlowJo v10. Once installed, open the software, and close the login popups. 
Once there, open the Floreada created FlowJo.wsp file. Since you haven’t logged in, it won’t show any events. But it will correct the formatting bug. Close the software, and return to R. Your Floreada sourced .WSP file should now be readable by CytoML.\nOdd? For sure. Fixable? Likely, I will set a reminder to work with the Floreada and CytoML devs to see if we can cut out the need for this workaround.", - "crumbs": [ - "About", - "Getting Started", - "00 - Floreada" - ] }, { + "objectID": "course/00_GitHub/slides.html#forking-cytometryinr", + "href": "course/00_GitHub/slides.html#forking-cytometryinr", + "title": "Using GitHub", + "section": "Forking CytometryInR", + "text": "Forking CytometryInR\n\n\n\n\n\n\n\n\n.\n\n\nBefore we go further, we will need you to make your own copy of the course repository (ie. fork it). This will allow you to quickly retrieve all the new materials and code corrections by simply refreshing (ie. syncing) your forked version with our upstream parent branch once a week." }, { - "objectID": "course/00_Git/index.html", - "href": "course/00_Git/index.html", - "title": "Version Control with Git", - "section": "", - "text": "For the YouTube livestream recording, see here\nFor screen-shot slides, click here", - "crumbs": [ - "About", - "Getting Started", - "00 - Git" - ] }, { + "objectID": "course/00_Homeworks/slides.html#discussions-forum", + "href": "course/00_Homeworks/slides.html#discussions-forum", + "title": "Getting Help", + "section": "Discussions Forum", + "text": "Discussions Forum\n\n\n\n\n\n\n\n\n.\n\n\nOn the course’s GitHub repository, we have opened up a Discussions page that we plan to use as a community forum. We hope that it will serve multiple functions, from providing a better sense of community for the online participants, to facilitating asking and receiving help on something that is not clear, to providing feedback about something that is not working out, as well as a place to celebrate and show off your coding wins." 
}, { - "objectID": "course/00_Git/index.html#new-folder-from-template", - "href": "course/00_Git/index.html#new-folder-from-template", - "title": "Version Control with Git", - "section": "New Folder from Template", - "text": "New Folder from Template\nSince Positron can use multiple programming languages, when we select “New Folder from Template” we will be asked what kind of folder template we want to use. Since we are working in R, we will select the “R Project” option.\n\n\n\nWe will next be asked to name the new project folder and a storage location.\nOne thing I would like to remind everyone who is just starting to code is that it is best to avoid using special characters (ex. @ $ # ^ ! ; : ,) in any folder or file name. This is because when coding, these can be misinterpreted as commands.\nWhile spaces are generally okay, it is often to best stick to to stick to hyphens (-), underscores (_). We will explore naming conventions in more depth at a later time.\n\n\n\nAnother useful thing to know when getting started with version control, it is best to save your files within your local computer, avoid using OneDrive or other cloud storage options for the time reason. The reason behind is that permissions to write/save to the cloud locations can sometimes be quite finicky, and some autosave/indexing behaviors can cause issues. akes things easier to save or modify without running into permission issues. For most of our course examples, we will be saving our Project Folders under the Documents Folder.\nHaving named our new Project Folder, and designated a storage location, go ahead and check the Initialize Git Repository option. This will indicate to version control to monitor content and changes to files within this folder.\n\n\n\nThe next setup screen will verify which version of R you wish to use. Since we are just getting started, your most recent version of R (usually system) should work. 
We will also leave the “renv” (reproducible environment setup) option unchecked for the time being (we will revisit the concept later in the course).\n\n\n\nAnd if all goes well, we should see the “New Folder Created” popup.", - "crumbs": [ - "About", - "Getting Started", - "00 - Git" - ] + "objectID": "course/00_Homeworks/slides.html#polls", + "href": "course/00_Homeworks/slides.html#polls", + "title": "Getting Help", + "section": "Polls", + "text": "Polls\n\n\n\n\n\n\n\n\n\n.\n\n\nOccasionally, we will need to gather community feedback on what is working and what is not working. We will sporadically post Polls for this purpose." }, { - "objectID": "course/00_Git/index.html#creating-subfolders", - "href": "course/00_Git/index.html#creating-subfolders", - "title": "Version Control with Git", - "section": "Creating SubFolders", - "text": "Creating SubFolders\nOnce your new project folder has opened, you should be seeing the main layout elements that we briefly covered in the Positron walk-through.\nFor this section, we will primarily be focused on what is happening within the primary side bar on the left, where changes to the individual files within the folder since the last save/commit will be reflected by colored text.\nFor my own projects, there are some elements of organization that I go ahead and add for each new folder. These include both a data and an images subfolders to help keep things a little more organized.\nTo create these folders, we would click on the respective add folder (+) button on the side bar. 
Files and Folders can be clicked and dragged within the primary side bar to move things to new folder locations.", - "crumbs": [ - "About", - "Getting Started", - "00 - Git" - ] }, { + "objectID": "course/00_Homeworks/slides.html#issues", + "href": "course/00_Homeworks/slides.html#issues", + "title": "Getting Help", + "section": "Issues", + "text": "Issues" }, { - "objectID": "course/00_Git/index.html#creating-files", - "href": "course/00_Git/index.html#creating-files", - "title": "Version Control with Git", - "section": "Creating Files", - "text": "Creating Files\nIn context of this course, we will primarily be working with two types of files when coding:\n\nR Scripts: These files end in .R. These contain only code (with occasional # comment line). These are often used for self-contained code that once we get them working we rarely need to modify.\nQuarto Markdowns: These files end in .qmd. They contain a .yaml header, followed by a mix of regular written text (often explanations or other documentation), and sections (ie. chunks) that contain code. These are used when we are still getting the code to work, when we need to modify inputs frequently, or simply when we need to document what and why we are doing something to make life easier for our future-self two months from now.\n\n\n\nIn this example, I will go ahead and select the new file icon\n\n\n\nThenn I will name the file, and designate it as a Quarto Markdown file by adding the .qmd at the end of the name to denote the file type.", - "crumbs": [ - "About", - "Getting Started", - "00 - Git" - ] }, { + "objectID": "course/00_Homeworks/slides.html#submitting-take-home-problems", + "href": "course/00_Homeworks/slides.html#submitting-take-home-problems", + "title": "Getting Help", + "section": "Submitting Take-Home Problems", + "text": "Submitting Take-Home Problems\n\n\n\n\n\n\n\n\n.\n\n\nEach week, during the course, we introduce and cover the main concepts for the particular topic. 
Our goal is to provide you with the necessary background and enough code to be able to get the gist. However, to become comfortable and be able to apply what you have learned, you will need to explore beyond our examples, try it with your own datasets, encounter things that don’t work, and troubleshoot your way through them. It’s this cycle of venturing into the unknown that develops the strong coding skills that are needed to overcome any barrier you encounter. The goal of the take-home questions is to provide some less curated problems that will take a little longer to answer to help get you started on your own exploration of the topic." }, { - "objectID": "course/00_Git/index.html#qmd-files", - "href": "course/00_Git/index.html#qmd-files", - "title": "Version Control with Git", - "section": "QMD Files", - "text": "QMD Files\nOnce this is done, we can now see we have a new .qmd file (“Example.qmd” in this case).\n\nYAML\nAs previously mentioned, the start of a Quarto Markdown file containg a YAML code chunk that is used to set formatting choices (we will explore this in-depth during the next section)\nWhat designates the location of the YAML block are three hyphens at the start, and three hyphens at the end. For this example, we will also provide a “title:” and “format:” field for the time being (see additional options here).\n\n\n\n\n\nText\nWith a basic YAML formatting block now in place, we can build out other elements of our Quarto Markdown document. Unless otherwise specified, everything else in the document is assumed to be text, so I will go ahead and provide an initial text description of what I am trying to do.\n\n\n\n\n\nCode-Chunks\nHaving provided some initial text for documentation, we can then add code-block chunks to start writing some code.\nThe easiest way to do do this is to click the respective option on the upper-right of the Editor screen. 
Since Positron can handle multiple programming languages, so the chunk is inserted, we will need to select the language we use to be used within the code chunk (R in this case).\n\n\n\nYou will notice, that the inserted code block starts off with three backticks (`) and then “{r}”. The end of the code block is denoted by an additional three backticks.\nWe can also add new code blocks by simply typing these elements into the location we want to place a code chunk (as long as we are careful to add 3 backticks also at the end).\n\n\n\n\n\n\n\n\nRunning Code\nNow that we have two code-chunks written, we can write lines of code within them. For this example, I will use two beginner friendly functions, print(“Hello”), which will print the contents contained between the ” ” to the console, and getwd() which will return the location of the folder you are working within (ie. the working directory).\nTo run/execute these lines of code, we have a couple options. We can click on the Run Cell option that appears on the upper-left side of the code chunk. Additionally, it has a companion option that will run all code chunks above it.\n\n\n\nWhen a code block is successfully run, you will see within the console (lower bottom of the screen) the line of code be run, with any returned outputs appear directly after.\n\n\n\nAn alternative to clicking the Run Cell button is to click on the line of code you are interested in running, then press (Ctrl + Enter)/(Command + Enter). This will execute the line of code that you have clicked on. This can be useful in scenarios where you want to run a specific line, and not the entire code-chunk.\n\n\n\nUsing this approach, you can see the location (ie. 
file path) of the current working directory was returned to the Console.", - "crumbs": [ - "About", - "Getting Started", - "00 - Git" - ] }, { + "objectID": "course/00_Positron/slides.html#console", + "href": "course/00_Positron/slides.html#console", + "title": "Using Positron", + "section": "Console", + "text": "Console\n\n\n\n\n\n\n\n\n.\n\n\nAt the bottom of the screen, you will first see the Console Tab. This is the tab where your lines of code will appear when executed (run), as well as any messages, warnings, or errors that get returned. On the right side of the console, you can find several buttons, among them restart R and delete session (for when you need a fresh start), and clear console (which keeps all previously run outputs and objects, but clears away the displayed text within the console)." }, { - "objectID": "course/00_Git/index.html#local-version-control", - "href": "course/00_Git/index.html#local-version-control", - "title": "Version Control with Git", - "section": "Local Version Control", - "text": "Local Version Control\nHaving introduced the main elements of a Quarto Markdown file, let’s turn our attention to the tab within the editor showing our newly created .qmd file.\nWe can see there is a solid circle next to the file name, and it is appearing as green. The circle denotes unsaved changes, which we can correct by clicking on the Save Button to save the changes to our file.\n\n\n\n\nUntracked\nIf we turn our attention to the left primary sidebar, we can see that within our GitPractie folder there are three files, our Example.qmd, and the default README.md and .gitignore files. These all show up in green text with U’s to the right of the file names.\nThis denotes that the version control tracking software Git is currently considering them as “Untracked” files. 
While saving the document via the Save button means we will still have our changes when we reopen Positron, we won’t have any history of changes that we can use to revert back to the way all the files appeared at this exact point in time should something go wrong.\nWe will next go the address bar on the very far left, and select the Git tab.\n\n\n\nOn the Git tab, we can see that each of the three files are shown underneath a “Changes” drop-down. This contains the files that have undergone changes since the last commit. In our case, since we haven’t updated the save-state yet, this last commit would be the initial creation of the project folder.\n\n\n\nTo have version control track these individual files going forward, we can do so in two separate ways. We can add them individually by clicking the + symbol next to the individual names.\n\n\n\n\n\nStaged\nThis will result in the files being moved to the “Staged” dropdown. This denotes files being tracked with the intention of being recorded as the next save-state or waypoint (ie. a commit).\n\n\n\n\n\nCommit\nTo create a new commit (save-state or waypoint), once we have the files we want to track staged, we will write a commit message, and then press commit.\nA commit message is a brief description of the changes that have occurred to the files between this commit and the previous one. Make this short description informative enough that if you need to revert back in the future, you can quickly identify the commit you need to fall back to (more about this later).\n\n\n\nIf this your first time using version control, you will likely encounter the following pop-up asking that you provide a user.name and user.email. 
This is used to designate the author of the changes.If you get this popup, go ahead and select “Open Git Log”\n\n\n\n\n\nUserName and UserEmail\nThe Output tab at the bottom of the screen will open, showing the messages that led to the popup.\nThe important part to note is the commands that will be needed to provide your user name and email to the computer for authoring the commit. Typically, your email will be the same one you used for your GitHub account.\n\n\n\nFrom the displayed message, go ahead and copy\n“git config –global user.email”you@example.com””\nThen click on the adjacent terminal tab. You will paste the command in, but do not hit enter just yet.\nWindows users, please note, depending on your settings, if trying to paste from the keyboard into the terminal, you may need to press “Ctrl + Shift + V” instead of the usual “Ctrl + V”.\n\n\n\nWith the command now pasted (or typed), use your keyboard arrows to navigate to the email portion, and replace the generic email with your email address used for your GitHub account.\nMake sure that the quotation marks (“) around the email address remain present, as they help the computer identify where your email address starts and ends. Once satisfied that your email address is correct, press enter.\n\n\n\nNext up, repeat the process, this time copying over the command needed to set your user name to the terminal. Repeat the editing process to provide your name between the “” marks. Then press enter.\n\n\n\n\n\n\n\n\nFirst Commit\nNow that your user.name and email address have been provided, Git should be able to provide an author to the commit message. Reattempt to press commit button.\nIf this is successful, you will see your initial commit appear on the bottom half of the left primary side bar, under the Graph dropdown. Congrats! 
Your files are now being tracked by version control.\n\n\n\nIf you hover with your mouse arrow just over the commit, you can see the longer commit message and additional details appear.\nIf you click on the commit tab, a new display will open in the editor, displaying the changes that occurred in that commit compared to the previous one. In this case, since we added everything since the previous commit, nothing appears on the left side, while the entire document’s contents appear highlighted in green on the right.\nGreen highlighting is used to show additions, while red highlighting is used to show deletions.\n\n\n\nHaving completed this initial commit, for this example, let’s imitate a typical workflow and make some additional changes to the file before we make a second commit. Within text portions of the .qmd file, use of # denotes a section header in markdown, so let’s add a header for Introduction and click save.\n\n\n\n\n\nModified\nWithin the left primary sidebar, we can see that the Git tracking has updated. Example.qmd is visible once again. However, because it is now a tracked file, instead of showing up with the “U/Untracked” green highlight, it now appears in brownish-red with an “M/Modified”.\nLet’s make an additional change to the .qmd file by adding another section (# Setup) and a code block with a commented out line (denoted by the # at the line start), before pressing Save.\n\n\n\nIf we were now to click on the Example.qmd file in the left primary sidebar, it will open the same kind of tracking display we saw previously. This time, we can see changes since our last commit. 
These appear as the green highlights along the scroll-bar, corresponding to the # Introduction and # Setup headers that we have added in since the last commit.\n\n\n\nFor a larger document, we can scroll down to see the various highlighted regions.\n\n\n\nWe could now repeat the steps shown above, staging the file, writing a commit message, and committing again by clicking on the designated buttons.\nAn important question is how often should we commit, vs. just hit save? Well… it depends :D Let’s think about this in the context of a video game. If you made commits at regular intervals throughout the day (or more frequently when doing something particularly risky), you are more likely to be close enough to a particular commit (waypoint/save-state) that you can quickly revert back to without losing any progress. Alternatively, if your last commit was last week, you will not have any intermediate versions to fall back to.\n\n\nCommit via Terminal\nHaving demonstrated how to commit changes to Git via the left primary side-bar, for this second commit, let’s do it the alternate way via the terminal (tab on the panel at the bottom of your screen).\n\n\n\nAfter clicking on the terminal tab, click on the blinking command line.\nThe command to stage a file is “git add”, followed by the name of the file you want to stage.\nIn this case, you would enter “git add Example.qmd” and press Enter. \n\n\nAfter pressing enter, you will see a new blank terminal line appear. If you glance at the left-sidebar, you can see that Example.qmd now appears under the Staged Changes dropdown.\n\n\n\nNext up, let’s write the git commit via the terminal. In this case, the command would be “git commit -m” (-m denoting message). 
The commit text is then surrounded by “” marks.\nFor example: git commit -m "Added section headers to my QMD file"\nPress enter to save the commit.\n\n\n\nAnd you should see your second commit now appear in the left primary sidebar underneath the graphs dropdown.", - "crumbs": [ - "About", - "Getting Started", - "00 - Git" - ] + "objectID": "course/00_Positron/slides.html#terminal", + "href": "course/00_Positron/slides.html#terminal", + "title": "Using Positron", + "section": "Terminal", + "text": "Terminal\n\n\n\n\n\n\n\n\n.\n\n\nRight next to the Console tab is your Terminal tab. While the console tab is primarily used to run R code within Positron, the terminal is the interface where code containing system commands directed at your computer is entered. We will use this less frequently, primarily in two contexts: 1) rendering Quarto documents, and 2) committing changes to version control. Among the buttons on the right-side of the terminal to make note of are the + button to add a new terminal, and the trash/garbage can button to kill (stop) the terminal." }, { - "objectID": "course/00_Git/index.html#remote-version-control", - "href": "course/00_Git/index.html#remote-version-control", - "title": "Version Control with Git", - "section": "Remote Version Control", - "text": "Remote Version Control\n\nCopying Project Folder to GitHub\nWhile having local version control in place is helpful when you need to revert back after encountering issues, where Git shines is the ability to pass your changes to your online GitHub repository.\nNot only does this allow you to switch between computers, but should something disastrous happen to your main computer, you still have all your hard work backed up and readily accessible.\nFor this subsection, first, double check that Positron is still connected to your GitHub account by checking the user tab on the bottom-left. 
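The stage-and-commit workflow from the terminal can be sketched end-to-end in a temporary repository; the file name and commit message here simply mirror the walkthrough, and the temp directory is an assumption of the demo.

```shell
# Stage and commit a file from the terminal, as described above.
tmp=$(mktemp -d)
cd "$tmp"
git init -q -b main                      # -b needs Git >= 2.28
git config user.name "Your Name"
git config user.email "you@example.com"
echo "# Introduction" > Example.qmd      # the header added in the walkthrough
git add Example.qmd                      # stage the file
git commit -q -m "Added section headers to my QMD file"   # record the commit
git log --oneline                        # shows the single new commit
```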
If not, repeat the connection setup.\n\n\n\nSince our project was created using the “New Folder from Template” option, it currently only exists locally. What we want to do next is to copy it to our GitHub account, creating a new repository in the process.\nTo do this, we will first need to install the usethis R package. Within your console, you would run the following line of code:\n\ninstall.packages(\"usethis\")\n\nDepending on what R packages you already have installed on your computer, you may get a prompt asking if you want to update/install additional dependencies. Go ahead and type the number corresponding to Update All, and press enter.\nThe package and all its dependencies should then install. If an error message appears, read through it, and follow the provided instructions. Go to Discussions if you need help.\n\n\nOnce the usethis package is installed, we need to activate it within R by calling it with the library command. This makes all the tools (ie. functions) within an R package available for use within Positron.\nIn your console, you would type:\n\nlibrary(usethis)\n\n\n\n\nWith library called, you now have access to the functions (tools) within the usethis R package. 
One of these is the use_github() function.\nIn Positron, if you hover over a function, it will pull up the associated help file which will provide you information about the arguments the function expects to receive, and what they do.\nFor use_github(), the main thing to remember for now is since this is a personal project being used for testing, we don’t necessarily want to share it with the entire world, so we should set the “private” argument equal to TRUE when creating a new repository.\n\n\n\nTaking this information that we have now gathered, we can now, within our Quarto Markdown document, create a code chunk, write out the line of code calling the function, and provide the private = TRUE argument within the ().\nWithin a code chunk, adding a # in front of a line of code will comment it out, resulting in that line of code not being run. Since we have already installed the usethis package, and we don’t want to reinstall it every single time, let’s go ahead and comment out that line. Go ahead and press Enter.\n\n\n\nWe will see a message pop-up in the console. In this case, we had not saved before pressing enter, so there are uncommitted changes within the folder. The pop-up is asking whether you want to save these as well before sending the Folder to GitHub.\nIn this case I will choose to ignore the uncommitted changes by entering 3 (for Definitely) in the console and hitting enter on my keyboard.\n\n\n\nThe usethis R package will then execute the series of git commands that are needed to set up a GitHub repository (ie. the messages being displayed in the console window), and when finished will open a pop-up asking whether you want to see your new repository in your default Web Browser. I will go ahead and select yes in this case.\n\n\n\nAfter the browser opens, you can see that the elements I had staged and committed within Positron are now present within the GitHub repository. Since I had only staged Example.qmd, it is the only file that was backed up. 
We can also see the commit history online by clicking on the commit clock.\n\n\n\nAs we would expect, we only see our two commit messages. One important thing to note is the commit hash numbers, which denote a particular commit. If we decided to revert/fall back to a prior commit in the future, this would be the number we would need to provide to Git to return to that previous commit/save-state.\n\n\n\nSimilarly, on GitHub, we have an option to Browse a Repository at a particular point in time. This will be quite useful later in the future when troubleshooting what major changes occurred between versions of an R package.\n\n\n\n\n\nCode Chunk Arguments\nHaving successfully connected our local Project Folder to a remote GitHub repository, let’s return to Positron.\nBefore continuing, if we left the code chunk that created the GitHub repository as is, every time we ran all code chunks in the document, it would try to recreate the GitHub repository. We don’t want this to happen, as the setup was a one-time operation.\nWhile we could add # in front of every line of code (or delete the code chunk entirely) it is often useful to have these set-up code chunks around to remind us what arguments we need to provide next time we need to do a similar setup and are mind blanking on what to do.\nFortunately, Quarto allows us to set conditions on whether a chunk is run (ie. evaluated). We will discuss the conditional arguments in more depth in the next section, but for now, we can modify the code chunk as follows.\nOn the next line after the {r}, we will add a hashtag (#), then a pipe (|), followed by a space. This is the setup for a code-chunk specific argument. 
We will then add “eval: FALSE”, which signals that the particular code-chunk should not be evaluated (ie. should not be run).\n\n\n\n\n\nREADME\nNow that we have connected our local Project Folder to GitHub, and have gotten a basic introduction to the “git add”, “git commit” arguments, let’s turn our focus to the other files currently listed as untracked by Git within our folder, the README.md and the .gitignore files.\nWhen setting up our GitHub account, we encountered an example of a README.md file. This file often provides a brief description of the project, and an outline of what the other files in the folder are for. As you may have gathered, even software developers are forgetful/under-caffeinated, and having notes to catch back up to speed is important.\n\n\n\n\n\n.gitignore\nWe additionally have a .gitignore file. Within a project, there are often some files that we will never want version control to track. These could be files that are too large for GitHub (ex. really large .fcs files), or files containing sensitive information (passwords, history, credentials, etc.).\nWhen the names of these files (or the file type shorthand) are added to the .gitignore file, they are ignored by version control, and no longer appear on the primary left side bar.\n\n\n\nLet’s proceed and stage both the README.md and .gitignore file, so that changes to these files will be tracked. We can of course select both from the primary left side bar and write a short commit message.\n\n\n\n\n\n\nOr alternatively, if we want to stage all uncommitted files present in a single step, we could use “git add .” in the terminal.\nWe can then write our git commit using “git commit -m”.\nBoth approaches work, and you may switch between them based on preference.\n\n\n\nYou will notice after having committed, that if you look at the Graph dropdown on the bottom half of the primary left side-bar that something has changed.\nThere are now separate icons denoted as main and origin/main. 
These correspond to the last commit present locally (main), and the last commit on remote (ie. GitHub, origin/main).\nLocal is ahead since you just made the commit with the changes inactivating the code-chunk, and you have not passed these changes up to GitHub yet.\n\n\n\n\n\nPull\nBefore sending (ie. pushing up) our updated commit to GitHub, it is good practice, especially if you are working on a project from multiple computers (or as part of a team), to first bring in (ie. pull down) any changes that might be on GitHub but are not present locally.\nThis ensures that everything is up to date, and you don’t end up with mismatched commits that are incompatible with each other and trigger an error message.\nTo pull in changes from GitHub, at the top of the primary left side-bar, you can select the … button to open a drop-down menu of Git options. You would then select “Pull”.\nAlternatively, you could do the same thing via the terminal by running the “git pull” command.\n\n\n\n\n\nPush\nIn our case, there was no new material present on our GitHub repository that was not already present locally, so all that is returned is the “Already up to date” message.\nWe are now good to proceed to push (ie. send) the updated commit up to our GitHub repository.\nWe can do this by either pressing the Sync changes button, or by entering the “git push” command in the terminal.\n\n\n\nAnd now, if you glance down at the left side-bar’s graph section, you will see that both the main and origin/main icons are now present for the most recent commit.\n\n\n\nIf we switch to our Web browser, we can see that this is also now the case for our GitHub repository that now also has the most recent changes.\n\n\n\n\n\nReverting to Prior Commit\nFor most daily workflows, you will only need the git commands that we have introduced above (git add, commit, pull, push). The next two areas (reverting to a prior commit, and branches) are more specialized, and will be covered in greater depth later in the course. 
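The pull-then-push routine can be sketched without touching GitHub at all by using a local bare repository as a stand-in remote; every path and name below is made up for the demo.

```shell
# Simulate the push/pull cycle with a local bare repo standing in for GitHub.
tmp=$(mktemp -d)
cd "$tmp"
git init -q --bare remote.git            # the stand-in "GitHub" repository
git init -q -b main work                 # -b needs Git >= 2.28
cd work
git config user.name "Your Name"
git config user.email "you@example.com"
git commit -q --allow-empty -m "initial commit"
git remote add origin ../remote.git
git push -q -u origin main               # send the local commit "up"
git pull                                 # nothing new on the remote side
```

After the push, `main` and `origin/main` point at the same commit, which is exactly the state the two graph icons in Positron are depicting.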
We are briefly covering them here. If you are at the point where your last remaining neuron has disconnected, and you feel you need to take a break from version control, feel free to skip to the next section and we will revisit these topics later in the course.\n\n–\nIn most cases, if your code stops working, you can identify the issue and fix it in the existing version, never needing to resort to reverting to a previous commit (save-state). The times you would need to revert would be if you deleted important files, or the new files are a hopeless mess that is not worth trying to sort through. In those cases, reverting back might be the better approach.\nTo imitate a falling back scenario, let’s create an additional file, then stage and commit it to end up a commit ahead of where we currently are within the Project Folder.\n\n\n\nNow being one (or several) commits ahead, if we wanted to revert back, we would first need to identify the commit we want to revert back to and copy the commit hash number.\n\n\n\nThen, opening the terminal, we can enter “git reset” and paste the hash afterwards. We can then press enter.\n\n\n\nYou will notice our additional commit has been removed, although the newer files we were working on subsequent to the last commit are still present.\n\n\n\nIf however, we had wanted to return to the exact same state as the previous commit (removing all subsequently created files), we could do so by adding in the --hard argument. Before starting, save any newer files you want to keep in a completely different folder, because they will be permanently removed.\nThen, enter “git reset --hard thecommithashnumber” into the terminal, which would result in a “hard” return to the previous commit’s save-state. You may need to close and reopen Positron to see the changes reflected.\n\n\nBranches\nBranches are a useful Git feature that we will start using extensively later in the course. 
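The two flavors of git reset described in the walkthrough above (drop the commit but keep the files, versus discard the files as well) can be sketched in a scratch repository; all file names here are invented for the demo.

```shell
# Demonstrate plain (mixed) vs. --hard reset in a scratch repository.
tmp=$(mktemp -d)
cd "$tmp"
git init -q -b main
git config user.name "Your Name"
git config user.email "you@example.com"
echo one > a.txt; git add a.txt; git commit -q -m "first commit"
echo two > b.txt; git add b.txt; git commit -q -m "second commit"
first=$(git rev-list --max-parents=0 HEAD)   # hash of the first commit

git reset -q "$first"            # mixed reset: commit is gone, b.txt survives
ls b.txt                         # still on disk, now untracked

git add b.txt; git commit -q -m "second commit again"
git reset -q --hard "$first"     # hard reset: b.txt is deleted as well
```

Note one subtlety: `--hard` only removes files that Git is currently tracking, which is why the demo re-commits b.txt before the hard reset.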
Branching allows you to create a parallel/carbon-copy of your existing repository, which you can then edit without affecting the main branch. This is particularly useful for projects that may get messy or drawn out. By isolating these edits to a parallel branch, if they don’t work, your main branch remains safe. Alternatively, if you like the changes that occurred in the branch, you can pull these changes from the branch back to main, bringing the timelines back together.\n\n\nWithin the terminal, entering “git branch” will show the existing branches. In this case, only main is present since we haven’t yet created a new branch.\n\n\n\nWe can create a new branch in the terminal by entering “git branch” followed by the name of our desired branch. In this case, we are creating a branch called Week1.\n\n\n\nNow, when we check “git branch” again in the terminal, it returns the two branches, Week1 and main. The * is located next to main, indicating that we are currently within the main branch.\n\n\n\nBesides the terminal, we can also create a new branch via Positron. To do so, we first click on the Git tab in the Actions Bar.\nOnce the left-side bar displays the version control display, we can click on the … button (to the right of Changes) to gain access to the Git options drop-down.\nFrom here, we click on Branch, and then select Create Branch.\n\n\n\nUsing Git branch, we saw that we were still within the main branch. In the terminal, we can switch over to the Week1 branch by using the “git checkout” command, followed by the branch we wish to switch to.\n\n\n\nThis results in us switching over to the Week1 branch.\n\n\n\nHaving switched (ie. checked out) to the Week1 branch, let’s create the file BranchTest.qmd, which will exist within this branch, but not yet in the main branch.\n\n\n\nHaving created the file, let’s stage and then commit it. 
This will put the Week1 branch ahead of the main branch by a single commit.\n\n\n\nWith our changes staged and committed, if we look at the left side-bar’s graph section, our Week1 branch is now ahead of the origin/main branch by one commit.\n\n\n\nIf we were to check on GitHub, we can see that no new files are present on the main branch, but can see the notification listing recent changes to Week1 branch.\n\n\n\nUsing the drop-down, we can switch from displaying the main branch to the Week1 branch, where we can see the new file.\n\n\n\nIf we click the green compare and pull request button, we end up on this screen. This compares how the two branches are different from each other.\n\n\n\nWe will delve into branches again at a later point. For now, remember that by creating and pruning parallel branches, you can develop knowing that even if something goes wrong, your main branch remains safe.", - "crumbs": [ - "About", - "Getting Started", - "00 - Git" - ] + "objectID": "course/00_Positron/slides.html#help", + "href": "course/00_Positron/slides.html#help", + "title": "Using Positron", + "section": "Help", + "text": "Help\n\n\n\n\n\n\n\n\n.\n\n\nWhen trying to evaluate how a particular function is working in R, you can hover over it and Positron will open up the documentation for that particular function if available. Alternatively, you can enter ?theParticularFunctionsName in the console and hit enter to similarly view what is occurring." 
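The branch workflow above (list, create, switch) can be sketched in a scratch repository; the Week1 branch name mirrors the walkthrough, and the empty initial commit is a shortcut for the demo.

```shell
# Create and switch to a branch, mirroring the Week1 example above.
tmp=$(mktemp -d)
cd "$tmp"
git init -q -b main                  # -b needs Git >= 2.28
git config user.name "Your Name"
git config user.email "you@example.com"
git commit -q --allow-empty -m "initial commit"
git branch Week1                     # create the branch; we stay on main
git branch                           # lists Week1 and main, * next to main
git checkout -q Week1                # switch (check out) to Week1
git branch --show-current            # prints: Week1
```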
}, { - "objectID": "course/00_GitHub/index.html", - "href": "course/00_GitHub/index.html", - "title": "Using GitHub", - "section": "", - "text": "For the YouTube livestream recording, see here\nFor screen-shot slides, click here", - "crumbs": [ - "About", - "Getting Started", - "00 - GitHub" - ] + "objectID": "course/00_Positron/slides.html#variables", + "href": "course/00_Positron/slides.html#variables", + "title": "Using Positron", + "section": "Variables", + "text": "Variables\n\n\n\n\n\n\n\n\n.\n\n\nOn the upper-portion of the Secondary Side Bar, we can find the Session window, containing the Variables tab. As you run (execute) lines of code, and different variables, objects and functions are created, these become visible under the variables tab on the upper right." }, { - "objectID": "course/00_GitHub/index.html#creating-an-account", - "href": "course/00_GitHub/index.html#creating-an-account", - "title": "Using GitHub", - "section": "Creating an Account", - "text": "Creating an Account\nWe will first navigate to the GitHub homepage. If you haven’t previously created an account, click on the button to sign up for an account.\n\n\n\nOn the sign-up page, you will fill in various details needed to create an account. Please remember that GitHub usernames are visible to others. Additionally, if you end up sharing code with others as part of a manuscript, or use GitHub to create a personal portfolio website in the future, your username will appear as part of the URL.\nFor example, in my case, my user name is DavidRach, so my GitHub profile ends up as: https://github.com/DavidRach. 
For our course, the core’s GitHub user name is UMGCCCFCSR, so the GitHub profile ends up as https://github.com/UMGCCCFCSR, while the course website ends up as https://umgcccfcsr.github.io/CytometryInR/\n\n\n\nOnce you have entered your new account information, you will need to confirm your account creation by entering the code sent to the email address that you provided.\n\n\n\nOnce account creation has been confirmed, please proceed to log in to GitHub for the first time.", - "crumbs": [ - "About", - "Getting Started", - "00 - GitHub" - ] + "objectID": "course/00_GitHub/index.html#github-profile", + "href": "course/00_GitHub/index.html#github-profile", + "title": "Using GitHub", + "section": "GitHub Profile", + "text": "GitHub Profile\nUpon creating a brand new account, your GitHub homepage will initially look rather empty, and can be intimidating to navigate for the first time.\nFor now, on the upper right, go ahead and click on the default profile picture icon…\n\n\n\nAnd then select Profile…\n\n\n\nYou are now on your public GitHub profile page. 
For a newly created account, it will look something like this:\n\n\n\nFor a more established account, this page will look a little different, and can be customized to highlight various projects that you are working on.\nFor this course, we will have you set up a basic GitHub profile page for now, although you are free to customize and personalize it as much as you may want to in the future!\nTo start, first select the edit profile button on the left below the default profile icon.\n\n\n\nYou can then proceed to fill in any details that you feel are relevant and are comfortable sharing.\n\n\n\nWith the quick access details filled in, it is now time to navigate to the Settings tab. You will return to the previous menu dropdown on the upper right, and instead of selecting Profile, click on the Settings option.\n\n\n\nYou should now end up within your Public Profile Settings page.\nFeel free to edit the default profile picture, and any other fields that you feel are relevant. Once done, continue to scroll down the page past ORCID ID.\n\n\n\nWhen you reach Contributions and Activity, go ahead and select the option to include private repositories in the activity summary graphic. Then scroll down and click save. You will now be returned to your GitHub profile page.\n\n\n\nAt the top of the profile, you will see a “Your contributions” calendar graph. For a new account, it will look like this:\n\n\n\nIf you are just starting out, this chart will be mostly empty, but will fill in as you work on projects, see here as an example.\nEvery time you save your code (ie. make a commit), the activity will be reflected in this chart. By clicking the option in settings, code made within a private repository will remain private, but will count toward your contribution chart. 
As you progress through the course, this will provide a nice visual reminder of the progress you have made, and the obstacles that you have overcome.", - "crumbs": [ - "About", - "Getting Started", - "00 - GitHub" - ] + "objectID": "course/00_Positron/slides.html#view", + "href": "course/00_Positron/slides.html#view", + "title": "Using Positron", + "section": "View", + "text": "View\n\n\n\n\n\n\n\n\n.\n\n\nOn the upper bar multiple tabs can be found, which we will explore in due time. Most useful to point out is the View tab. If you accidentally close your console, session or plots window, and are trying to get them to reapper, you would need to reselect them from this tab." }, { - "objectID": "course/00_GitHub/index.html#github-readme", - "href": "course/00_GitHub/index.html#github-readme", - "title": "Using GitHub", - "section": "GitHub ReadMe", - "text": "GitHub ReadMe\nWith this done, we modify your GitHub profile by adding one customized element, a ReadMe page. This will be used for a couple projects during the course, and can be personalized further in the future.\nTo create a ReadMe page for your profile, we will navigate to the upper right of the screen and click on the + sign.\n\n\n\nWe will then select the Create New Repository option.\n\n\n\nYou will next create a repository (folder), naming it exactly the same as your username. This will be recognized by GitHub as being a special type of repository corresponding to the ReadMe section of your profile.\nFor options, leave the visibility as Public, and Add README set to On. And proceed to Create Repository.\n\n\n\nHaving created the repository (folder), you will see it has been populated by a few default files. For now, you will be editing the README.md file. 
On a new repository, the easiest way to access it is by clicking the green option on the right side of your screen.\n\n\n\nWith the README.md file now opened, you will be able to see generic filler text that is suggested by GitHub.\nFor this course, I will ask you to add a couple elements for now. You are free to return and further personalize it later if you wish to do so.\n\n\n\nThe type of file that we are working with is a Markdown file, which can allow for a bunch of customizations which we will cover throughout the course.\nFor now, please add and customize the following questions:\nCytometry In R\nLocation: Baltimore, Maryland, USA\nMy Favorite Fluorophore/Metal-Isotope: Spark Blue 550\nPrevious Coding Experience: Repeatedly Calling IT\nWhat I Hope to Get From This Course: A faster way to match FlowSOM clusters to their likely cell type.\n\n\n\nNext, to save you will select the green “Commit changes” button. We will cover the meaning of “Commit” more in-depth during the Git section.\nFor now, write a short summary of the change you made to the file in the “Commit message”, and any additional details within the “Extended description” field. When ready, click the green “Commit changes” button.\n\n\n\nYou will now be able to see the updated README.md file, as you can see in our example below. To make additional edits, you would select the pencil icon on the right-center side of the screen.\n\n\n\nNext, navigate back to your profile page (by clicking on either your username or the Overview option on the tabs).\nYou will see that the README file contents are now displayed on the upper portion of your GitHub profile. Feel free to circle back and customize this further to your liking.\nIn this last example, we created your first repository (folder). Since this is public, it is now shown below the README section of the profile under your repositories. 
You can also see that your commits made in the process of making the changes are now shown both in the Contributions graph, and under the Contributor Activity summary at the bottom of the page.", - "crumbs": [ - "About", - "Getting Started", - "00 - GitHub" - ] + "objectID": "course/00_GitHub/index.html#github-repository", + "href": "course/00_GitHub/index.html#github-repository", + "title": "Using GitHub", + "section": "GitHub Repository", + "text": "GitHub Repository\nHaving set up your GitHub profile, it is now time to make sure you have access to our course materials. We will have you navigate to our course’s GitHub profile.\nOn the profile page, you will be able to see our version of the README, our repositories, and the Contributions graph and Contribution activity sections.\nPlease click on CytometryInR to navigate to its repository (folder).\n\n\n\nOn this page, you will see several elements that you will be circling back to throughout the course.\nFor our course, we will be extensively using the Discussions page as a community forum. If you have any questions, are looking for feedback, or want to show off something that you worked on, this is the place for it. 
This will also help make sure\n\n\n\nThe Issues tab is where you will need to go to open an Issue if you encounter a bug (or major documentation typo), so that I can circle back and correct them when I have the chance.\n\n\n\nTo submit the optional take-home problems, you would go to the Pull Request tab, and initiate a pull request between your forked version of the project and our “homework” branch (more details on this later).\n\n\n\nOptionally, you can “Star” a repository. This is basically the GitHub equivalent of liking a project. In our case, we will often star a repository since it will be saved under the Stars tab of our profile, which makes finding it again significantly easier a few weeks later after forgetting the repository name.\n\n\n\nTo see projects that you have starred, you can select the Stars option from the same dropdown you used to get to Settings.\n\n\n\nOr from your GitHub profile, you can see these under the Stars tab.", - "crumbs": [ - "About", - "Getting Started", - "00 - GitHub" - ] + "objectID": "course/00_GitHub/index.html#forking-cytometryinr", + "href": "course/00_GitHub/index.html#forking-cytometryinr", + "title": "Using GitHub", + "section": "Forking CytometryInR", + "text": "Forking CytometryInR\nBefore we go further, we will need you to make your own copy of the course repository (ie. fork it). This will allow you to quickly retrieve all the new materials and code corrections by simply refreshing (ie. 
syncing) your forked version with our upstream parent branch once a week.\n\n\nTo fork the course repository, you will select the “Fork repository” option on the upper-center portion of your screen.\n\n\n\nBy “Fork-ing” a repository, you are basically copying the contents from that repository to a newly created repository on your own GitHub. Forked projects are still linked to the original (parent) fork, and can retrieve any updates via syncing, as well as return changes via a pull request.\nFor this course, when you create the fork, keep the existing repository name (“CytometryInR”). Importantly, select the copy main branch option. This will ensure you only get the code and data needed for the course copied over, and don’t end up with your entire hard-drive filled with website elements, or other people’s solutions to the take-home problems.\n\n\n\nOnce you have created the fork, you will see your copy of the forked repository under your own username. Seeing as you have just now forked the project, you will see the notification that you are up to date with the existing version of the CytometryInR course repository.\nAs we go through the course, and new material is released each week on Sunday at 2200 EST (Monday 0300 GMT+0), you will see this change to being behind the main branch by a number of commits, and have the option to sync in the changes to your fork to gain access to that week’s material.\n\n\n\nIf you remember, previously under your GitHub profile, the Repositories tab only contained the repository corresponding to your ReadMe section.\n\n\n\nYou should however now be able to see your fork of the CytometryInR repository. 
As you add project-specific repositories throughout the course, they will also appear here.",
    "crumbs": [
      "About",
      "Getting Started",
      "00 - GitHub"
    ]
  },
  {
    "objectID": "course/00_Positron/slides.html#search",
    "href": "course/00_Positron/slides.html#search",
    "title": "Using Positron",
    "section": "Search",
    "text": "Search\n\n\n\n\n\n\n\n\n.\n\n\nThe search tab on the left side bar is something that I use routinely."
  },
  {
    "objectID": "course/00_Positron/slides.html#extensions",
    "href": "course/00_Positron/slides.html#extensions",
    "title": "Using Positron",
    "section": "Extensions",
    "text": "Extensions\n\n\n\n\n\n\n\n\n.\n\n\nOn the far-left side we can find the Activity bar, which contains several tabs. Which tab you have selected will then dictate the contents of your left side bar.\nOne of these tabs is Extensions, which shows “Plugins” (or the VS Code equivalent) that extend the functionality of Positron further. The ones you have installed may vary, but the main ones in the context of this course are Air (which provides syntax highlighting for R code to make it easier to read) as well as Quarto (for rendering the various document types)."
  },
  {
    "objectID": "course/00_Positron/slides.html#git",
    "href": "course/00_Positron/slides.html#git",
    "title": "Using Positron",
    "section": "Git",
    "text": "Git\n\n\n\n\n\n\n\n\n.\n\n\nThe Git tab on the left side bar is where, once version control is initiated for the project folder, we can see the changes that have occurred to the individual files since the last commit. These changes can be added to a new commit by clicking on the + sign. This will be covered more extensively in the next section."
  },
  {
    "objectID": "course/00_Quarto/slides.html#renderpreview",
    "href": "course/00_Quarto/slides.html#renderpreview",
    "title": "Introduction to Quarto",
    "section": "Render/Preview",
    "text": "Render/Preview\n\n\n\n\n\n\n\n\n.\n\n\nThe preview button, at the upper-left end of the Editor, is used to render/knit a quarto document. 
This triggers the process by which code-chunks are run, and their outputs are then assembled into the file format designated by the YAML header."
  },
  {
    "objectID": "course/00_Quarto/slides.html#yaml",
    "href": "course/00_Quarto/slides.html#yaml",
    "title": "Introduction to Quarto",
    "section": "YAML",
    "text": "YAML\n\n\n\n\n\n\n\n\n.\n\n\nWe can also provide additional custom inputs to the YAML header. A couple of examples include providing the document author and date."
  },
  {
    "objectID": "course/00_Quarto/slides.html#table-of-contents",
    "href": "course/00_Quarto/slides.html#table-of-contents",
    "title": "Introduction to Quarto",
    "section": "Table of Contents",
    "text": "Table of Contents\n\n\n\n\n\n\n\n\n.\n\n\nIn the previous section, we saw that we could provide headings and subheadings to our .qmd file by placing a # at the start of a line in the text portion of the document. A subheading was designated by a ##, with deeper hierarchy designated by appending an additional #."
  },
  {
    "objectID": "course/00_Quarto/slides.html#code-chunk-arguments",
    "href": "course/00_Quarto/slides.html#code-chunk-arguments",
    "title": "Introduction to Quarto",
    "section": "Code Chunk Arguments",
    "text": "Code Chunk Arguments\n\n\n\n\n\n\n\n\n.\n\n\nAs we briefly touched on in the last section, code-chunks can be modified by including arguments, which affect whether a particular code chunk gets evaluated. In that example, we added a “#| eval: FALSE” to the install commands since we did not want them to be re-run subsequently. We will take a closer look at the other arguments in this section.\n\n\n\n\n\nEval\n\n\n\n\n\n\n\n\n.\n\n\nThe code-chunk argument, “eval”, is used to determine whether a code chunk gets evaluated. When set to true (or by default if no eval argument is included), the code chunk’s contents will be run/executed, and the output will appear. 
We can see this in the HTML output: below the code block, we get back the path of my working directory."
  },
  {
    "objectID": "course/00_Quarto/slides.html#text-styles",
    "href": "course/00_Quarto/slides.html#text-styles",
    "title": "Introduction to Quarto",
    "section": "Text Styles",
    "text": "Text Styles\n\n\n\n\n\n\n\n\n.\n\n\nQuarto primarily uses Markdown for text styling. Consequently, Markdown syntax can be used within the text to change how various text appears."
  },
  {
    "objectID": "course/00_Quarto/slides.html#hyperlinks",
    "href": "course/00_Quarto/slides.html#hyperlinks",
    "title": "Introduction to Quarto",
    "section": "Hyperlinks",
    "text": "Hyperlinks\n\n\n\n\n\n\n\n\n.\n\n\nYou can link to a website by surrounding the word of interest in [] and placing the URL within () immediately after it."
  },
  {
    "objectID": "course/00_Quarto/slides.html#images",
    "href": "course/00_Quarto/slides.html#images",
    "title": "Introduction to Quarto",
    "section": "Images",
    "text": "Images\n\n\n\n\n\n\n\n\n.\n\n\nYou can place images by adding the following, as long as the file path to the image is correctly formatted. This is why I include an images folder within each of my project folders: it simplifies the copy and paste."
  },
  {
    "objectID": "course/00_WorkstationSetup/WindowsSlides.html#installing-r",
    "href": "course/00_WorkstationSetup/WindowsSlides.html#installing-r",
    "title": "Installing Software on Windows",
    "section": "Installing R",
    "text": "Installing R\n\n\n\n\n\n\n\n\n.\n\n\nTo get started, first navigate to the R website. Once there, click on the Download R option towards the top of the page."
  },
  {
    "objectID": "course/00_WorkstationSetup/WindowsSlides.html#installing-rtools",
    "href": "course/00_WorkstationSetup/WindowsSlides.html#installing-rtools",
    "title": "Installing Software on Windows",
    "section": "Installing RTools",
    "text": "Installing RTools\n\n\n\n\n\n\n\n\n.\n\n\nWe will now work on installing Rtools. 
This software is needed when building R packages from source, which we will need to do throughout the course for R packages hosted on GitHub.\nTo get started, we will return to the R installation page we visited previously and instead click on the Rtools option."
  },
  {
    "objectID": "course/00_WorkstationSetup/WindowsSlides.html#installing-git",
    "href": "course/00_WorkstationSetup/WindowsSlides.html#installing-git",
    "title": "Installing Software on Windows",
    "section": "Installing Git",
    "text": "Installing Git\n\n\n\n\n\n\n\n\n.\n\n\nGit is a version control software widely used among software developers and bioinformaticians. We will use it extensively throughout the course, both locally on our computers (to keep track of changes to our files), as well as in combination with GitHub (to maintain online backups of our files).\nWe will first navigate to the website and select the Download for Windows option."
  },
  {
    "objectID": "course/00_WorkstationSetup/WindowsSlides.html#installing-positron",
    "href": "course/00_WorkstationSetup/WindowsSlides.html#installing-positron",
    "title": "Installing Software on Windows",
    "section": "Installing Positron",
    "text": "Installing Positron\n\n\n\n\n\n\n\n\n.\n\n\nFinally, you will install Positron. It is an integrated development environment (IDE) in which we will open, modify and run our code throughout the course.\nFirst, navigate to their homepage, and select the blue Download option button on the upper-right."
+ },
+ {
+   "objectID": "course/00_WorkstationSetup/Linux.html",
+   "href": "course/00_WorkstationSetup/Linux.html",
+   "title": "Installing Software on Linux",
+   "section": "",
+   "text": "For the YouTube livestream recording, see here\nFor screen-shot slides, click here\nThis is the software installation walkthrough for those whose computers are running Linux. First off, welcome! Based on our pre-course interest form, there was a surprising number of you daily-drivers out there! However, a CytometryInR interest form is unlikely to be representative of the general population, so please stash all Year of the Linux desktop banners until further notice."
+ },
  {
    "objectID": "course/00_Homeworks/index.html",
    "href": "course/00_Homeworks/index.html",
    "title": "Getting Help",
    "section": "",
    "text": "For the YouTube livestream recording, see here\nFor screen-shot slides, click here",
    "crumbs": [
      "About",
      "Getting Started",
      "00 - Getting Help"
    ]
  },
  {
    "objectID": "course/00_Homeworks/index.html#discussions-forum",
    "href": "course/00_Homeworks/index.html#discussions-forum",
    "title": "Getting Help",
    "section": "Discussions Forum",
    "text": "Discussions Forum\nOn the course’s GitHub repository, we have opened up a Discussions page that we plan to use as a community forum. We hope that it will serve multiple functions, from providing a better sense of community for the online participants, to facilitating asking for and receiving help on something that is not clear, to providing feedback about something that is not working out, as well as offering a place to celebrate and show off your coding wins.\n\nTo keep the Discussions forum semi-organized, we have set up several categories, so please select the appropriate category when opening a new discussion!\n\n\n\nCode of Conduct\n\nWe ask that all course participants read and adhere to the spirit of our Code of Conduct. We are all human, at different points in our learning journeys, so what may be obvious to you at your point in your learning journey may not necessarily be obvious to someone just getting started. 
This course is our giving back to those in the community, but it is offered on a voluntary basis in addition to our regular workload. While we try to reply quickly, sometimes our cell sorters fully melt down, sending everything into chaos. We will reply when we can.\n\n\n\n\nAnnouncements\n\nWhen we send out an email to all participants, we will also repost it as an announcement. This ensures that even if you are not on our mailing list, you will still have access to important information and course updates.\n\n\n\n\n\n\n\nGeneral\n\nThis category can be used for any discussions that you think are worth having that don’t fall under any of the other categories. Good examples are continuing a discussion that was held during one of the livestreams; wanting to discuss and dive further into a given week’s topic; or bringing in additional resources that you found useful for understanding something that didn’t click initially. This space is for the community to shape as they see best.\n\n\n\n\n\n\n\nIdeas\n\nHave an idea for a new topic or a way to improve the course? We would love to hear it. Provide as many details as you can (ideally with an example), and if it is doable, we will try to implement it.\n\n\n\n\n\n\n\nIntroductions\n\nOnline courses can be odd in terms of replicating in-person dynamics. Fortunately, we have gathered the largest cohort of “cytometrists with no-to-little flow experience trying to learn R at the same time” that the world has ever seen, so best to take advantage of this while we can. 
Treat this section as if we had just met at a conference, tell us about yourself, what brings you here, and what you want to hopefully be able to do after the course ends.", - "crumbs": [ - "About", - "Getting Started", - "00 - Getting Help" - ] + "objectID": "course/00_WorkstationSetup/Linux.html#debian-based-distros", + "href": "course/00_WorkstationSetup/Linux.html#debian-based-distros", + "title": "Installing Software on Linux", + "section": "Debian-based Distros", + "text": "Debian-based Distros\n\nInstalling R\nTo get started, first navigate to the R website. Once there, click on Download R option towards the top of the page.\n\n\n\nOn the next screen, you will need to select a mirror from which to download the software from. You can either select the closest geographic location (which may be faster) or alternatively just select the Cloud option which should redirect you.\n\n\n\nOnce this is done, select your Linux Distro (or one that shares your package managers format).\n\n\n\nOn the landing page, you will find a bunch of relevant installation information, which is worthwhile giving a read-through when you have time.\n\n\n\nThe process to successfully install R can be summarized as follows:\nUpdate apt/sources.list to include the CRAN repository (allowing access to R packages)\n\n\n\nSince we are running on Debian stable (Trixie), we would add the following line to sources.list\n\n\n\nSo in practice, open sources.list:\n\n\n\nPaste the line, and “Ctrl + O”; “Enter”; “Ctrl + X” to save the changes.\n\n\n\nNext, we will need to retrieve the keyID used to sign. 
This can be fetched from Ubuntu via the terminal.\n\n\n\n\n\n\nThen we need to export and write it.\n\n\n\nWhich, if successful, will display the public key.\n\n\n\nWith the above set up, we can proceed via our apt package manager to install both r-base and r-base-dev (which contains the equivalent of Rtools for Windows, or Xcode Command Line Tools for macOS).\n\n\n\n\n\n\n\n\n\nAnd if all goes well, R should now be installed."
  },
  {
    "objectID": "course/00_WorkstationSetup/Linux.html#installing-git",
    "href": "course/00_WorkstationSetup/Linux.html#installing-git",
    "title": "Installing Software on Linux",
    "section": "Installing Git",
    "text": "Installing Git\n\n# sudo apt install git"
  },
  {
    "objectID": "course/00_WorkstationSetup/Linux.html#installing-positron",
    "href": "course/00_WorkstationSetup/Linux.html#installing-positron",
    "title": "Installing Software on Linux",
    "section": "Installing Positron",
    "text": "Installing Positron\nFinally, you will need to install Positron. It will be the integrated development environment (IDE) we will be using for the course.\nFirst, navigate to their homepage, and select the blue Download option button on the upper-right.\n\n\n\nYou will then need to accept the Elastic License agreement to use the software (we will cover this source-available license type and what it does later in the course).\n\n\n\nWith the license accepted, you will be able to select your distribution and architecture.\n\n\n\nOnce the download completes, proceed to install the .deb package as you would normally. A GUI example via Discover is shown below.\n\n\n\nDepending on your configuration, you may be asked to exert your sudo powers.\n\n\n\nOnce this completes, you should now be able to launch the software for the first time."
+ }, + { + "objectID": "course/00_WorkstationSetup/LinuxSlides.html#debian-based-distros", + "href": "course/00_WorkstationSetup/LinuxSlides.html#debian-based-distros", + "title": "Installing Software on Linux", + "section": "Debian-based Distros", + "text": "Debian-based Distros\nInstalling R\n\n\n\n\n\n\n\n\n.\n\n\nTo get started, first navigate to the R website. Once there, click on Download R option towards the top of the page." + }, + { + "objectID": "course/00_WorkstationSetup/LinuxSlides.html#installing-git", + "href": "course/00_WorkstationSetup/LinuxSlides.html#installing-git", + "title": "Installing Software on Linux", + "section": "Installing Git", + "text": "Installing Git\n\n# sudo apt install git" + }, + { + "objectID": "course/00_WorkstationSetup/LinuxSlides.html#installing-positron", + "href": "course/00_WorkstationSetup/LinuxSlides.html#installing-positron", + "title": "Installing Software on Linux", + "section": "Installing Positron", + "text": "Installing Positron\n\n\n\n\n\n\n\n\n.\n\n\nFinally, you will need to install Positron. It will be the integrated development environment (IDE) we will be using for the course.\nFirst, navigate to their homepage, and select the blue Download option button on the upper-right." + }, + { + "objectID": "course/00_WorkstationSetup/MacOSSlides.html#installing-r", + "href": "course/00_WorkstationSetup/MacOSSlides.html#installing-r", + "title": "Installing Software on MacOS", + "section": "Installing R", + "text": "Installing R\n\n\n\n\n\n\n\n\n.\n\n\nTo get started, first navigate to the R website. Once there, click on Download R option towards the top of the page." 
+ }, + { + "objectID": "course/00_WorkstationSetup/MacOSSlides.html#xcode-command-line-tools", + "href": "course/00_WorkstationSetup/MacOSSlides.html#xcode-command-line-tools", + "title": "Installing Software on MacOS", + "section": "Xcode Command Line Tools", + "text": "Xcode Command Line Tools\n\n\n\n\n\n\n\n\n.\n\n\nDepending on your version of macOS, you may or may not already have Git installed on your computer. The reason is that it comes bundled within the Xcode Command Line Tools.\nIf this is not your first foray into coding, you may have previously seen an installation pop-up along the lines of “XYZ requires command line developer tools. Would you like to install the tools now?” when installing an IDE (like Positron, Rstudio or Visual Studio Code)." + }, + { + "objectID": "course/00_WorkstationSetup/MacOSSlides.html#install-positron", + "href": "course/00_WorkstationSetup/MacOSSlides.html#install-positron", + "title": "Installing Software on MacOS", + "section": "Install Positron", + "text": "Install Positron\n\n\n\n\n\n\n\n\n.\n\n\nFinally, you will install Positron. It is an integrated development environment (IDE) in which we will open, modify and run our code throughout the course.\nFirst, navigate to their homepage, and select the blue Download option button on the upper-right." + }, + { + "objectID": "course/01_InstallingRPackages/slides_inperson.html#checking-for-loaded-packages", + "href": "course/01_InstallingRPackages/slides_inperson.html#checking-for-loaded-packages", + "title": "01 - Installing R Packages", + "section": "Checking for Loaded Packages", + "text": "Checking for Loaded Packages\n\n\n\n\n\n\n\n\n.\n\n\nFor the contents (ie. the functions) of an R package to be available for your computer to use, they must first be activated (ie. loaded) into your local environment. We will first learn how to check what R packages are currently active." 
+ },
+ {
+   "objectID": "course/01_InstallingRPackages/slides_inperson.html#installing-from-cran",
+   "href": "course/01_InstallingRPackages/slides_inperson.html#installing-from-cran",
+   "title": "01 - Installing R Packages",
+   "section": "Installing from CRAN",
+   "text": "Installing from CRAN\n\n\n\n\n\n\n\n\n.\n\n\nWe will start by installing R packages that are part of the CRAN repository. This is the main R package repository, part of the broader R software project. In the context of this course, R packages that work primarily with general data structures (rows, columns, matrices, etc.) or visualizations will predominantly be found within this repository.\nThese include the tidyverse packages. These packages have collectively made R easier to use by smoothing out some of the rough edges of base R, which is a major reason R has seen such growth within the last decade. We will be installing several of these R packages today."
+ },
+ {
+   "objectID": "course/01_InstallingRPackages/slides_inperson.html#installing-from-bioconductor",
+   "href": "course/01_InstallingRPackages/slides_inperson.html#installing-from-bioconductor",
+   "title": "01 - Installing R Packages",
+   "section": "Installing from Bioconductor",
+   "text": "Installing from Bioconductor\n\n\n\n\n\n\n\n\n.\n\n\nBioconductor is the second R package repository we will be working with throughout the course. While it hosts far fewer packages than CRAN, the packages it contains are primarily used in the biomedical sciences. Following this link you can find its current flow and mass cytometry R packages.\nBioconductor R packages differ from CRAN R packages in a couple of ways. Bioconductor has different standards for acceptance than CRAN: its packages usually contain interoperable object types, and put more effort into documentation and continuous testing to ensure that the R package remains functional across operating systems."
},
  {
    "objectID": "course/01_InstallingRPackages/slides_inperson.html#install-from-github",
    "href": "course/01_InstallingRPackages/slides_inperson.html#install-from-github",
    "title": "01 - Installing R Packages",
    "section": "Install from GitHub",
    "text": "Install from GitHub\n\n\n\n\n\n\n\n\n.\n\n\nIn addition to the CRAN and Bioconductor repositories, individual R packages can also be found on GitHub, hosted on their respective developers’ GitHub accounts. Newer packages that are still being worked on (often in the process of submission to CRAN or Bioconductor) can be found here, as well as those where the author decided not to bother with a review process and just made the package immediately available, warts and all."
  },
  {
    "objectID": "course/01_InstallingRPackages/slides_inperson.html#troubleshooting-install-errors",
    "href": "course/01_InstallingRPackages/slides_inperson.html#troubleshooting-install-errors",
    "title": "01 - Installing R Packages",
    "section": "Troubleshooting Install Errors",
    "text": "Troubleshooting Install Errors\n\n\n\n\n\n\n\n\n.\n\n\nWe have now installed three R packages: dplyr, PeacoQC, and flowSpectrum. In my case, I did not encounter any errors during the installation. However, sometimes a package installation will fail due to an error encountered during the installation process. This can be due to a number of reasons, ranging from a missing dependency to an update that caused a conflict. While these can occur for CRAN or Bioconductor packages, they are more frequently seen for GitHub packages, where the Description/Namespace files may not have been fully updated yet to install all the required dependencies.\nWhen encountering an error, start off by reading through the message to see if you can parse any useful information about which package failed to install, and whether it lists the missing dependency package’s name. The latter was the case in the error message example shown below."
  },
  {
    "objectID": "course/00_Homeworks/index.html#polls",
    "href": "course/00_Homeworks/index.html#polls",
    "title": "Getting Help",
    "section": "Polls",
    "text": "Polls\n\nOccasionally, we will need to gather community feedback on what is working and what is not working. We will sporadically post Polls for this purpose.\n\n\n\n\n\n\nQ&A\n\nThe Questions and Answers (Q&A) section is where you go if something is not clear, not working, and you are trying to troubleshoot your way through it. First thing before posting: search! to see if someone has already asked the question. If you don’t find anything, go ahead and open a new discussion.\nSince we are not at your computer, and don’t have your dataset, when troubleshooting it is best to include a minimal reproducible example of the issue you are encountering, slimming down the number of files needed to be transferred, and generalizing the code so that other course participants and instructors can follow along. If this is not doable, or if the problem requires added context (and larger files), create a new repository on your GitHub, make it public, and share the link to it in your post. The goal is for others to be able to download the folder and replicate the issue that you are encountering.\n\n\n\n\n\n\n\n\n\n\nShow and Tell\n\nWhere the Q&A section is for getting help on code that is frustratingly not working, Show and Tell is where to go and celebrate when you finally get things to work. 
Share your wins, show us the extra pretty graphs, bizarre autofluorescence signatures, or odd outputs that just make you laugh.",
    "crumbs": [
      "About",
      "Getting Started",
      "00 - Getting Help"
    ]
  },
  {
    "objectID": "course/00_Homeworks/index.html#issues",
    "href": "course/00_Homeworks/index.html#issues",
    "title": "Getting Help",
    "section": "Issues",
    "text": "Issues\nMost of the time, if you are having trouble getting your code to run, your first stop after some initial troubleshooting should be to open a new Discussion under the Q&A category. Here you will be able to get both community and instructor help and suggestions to hopefully resolve whatever is going on.\n\nThe Issues page is primarily meant for course-specific problems that require course instructor intervention to fix. For example, we release a new week of material, and while it runs fine for both Windows and Linux, the code fails to run for all MacOS users. While you may be able to find workarounds on your own, it’s ultimately our responsibility to help provide a solution so that everyone can move forward. This is the situation where opening an Issue is appropriate.\n\n\n\nSimilarly, if our code contains a wrong argument, is returning a deprecation warning, etc. 
open an issue to let us know. While we may not be able to fix something that is not directly related to our code, we can redirect it to the package maintainers so that they can fix the issue.\nAnd likewise, if you find multiple typos in the documentation, you can open an issue and propose carrying out a pull-request to fix them.",
    "crumbs": [
      "About",
      "Getting Started",
      "00 - Getting Help"
    ]
  },
  {
    "objectID": "course/00_Homeworks/index.html#submitting-take-home-problems",
    "href": "course/00_Homeworks/index.html#submitting-take-home-problems",
    "title": "Getting Help",
    "section": "Submitting Take-Home Problems",
    "text": "Submitting Take-Home Problems\nEach week during the course, we introduce and cover the main concepts for that week’s topic. 
Our goal is to provide you with enough code and examples to be able to get the gist. However, to become comfortable and be able to apply what you have learned, you will need to explore beyond our examples, try it with your own datasets, encounter things that don’t work, and troubleshoot your way through them. It’s this cycle of venturing into the unknown that develops the strong coding skills needed to overcome any barrier you encounter. The goal of the take-home questions is to provide some less curated problems that will take a little longer to answer, to help get you started on your own exploration of the topic.\nAs previously mentioned, these take-home problems are completely optional. If you are in the middle of solving them and want to seek feedback from the community and course instructors, opening a Discussion under the General category is the way to go.\nHowever, if you have completed them, and want course instructor feedback, you can submit them to us in the form of a pull-request to the CytometryInR repo’s homework branch. We will take a look, offer constructive suggestions, and when ready merge the solution. This will also result in GitHub listing you as a contributor to the course.\nWe will outline the basic steps of how to set up and open a pull-request, to help simplify the process.\n\nSync your Fork\nFirst off, make sure to Sync your fork of the Cytometry in R project. This makes sure that all the commits present are up-to-date and simplifies the process of getting the pull-request merged.\n\n\n\n\n\n\n\n\n\n\n\nPull to Local\nHaving Synced your branch on GitHub, return to your computer, open the CytometryInR repository and pull in the changes locally.\n\n\n\n\nCreate own Folder under Homeworks\nUnder the course folder, you will find folders for each week. Within these folders, find the homework folder. This will appear empty except for a README file with instructions. 
It is within this folder that you will need to create your own folder.\nTo ensure there are no conflicts on the pull-request merge, please use your GitHub username as the folder name.\n\n\n\nOnce you have your folder inside homework, go ahead and copy anything you are turning in from their respective working project folders. Remember, a minimal reproducible example is the goal. Rendered Quarto documents are preferred, but we will also accept scripts, small data files, and images, along with a README.md file containing anything you want me to know.\n\n\n\n\n\n\n\n\nSign off Commit\nNow that everything is present, Sign Off and Commit the change.\n\n\n\n\n\nPush Branch to GitHub.\nProceed to push the branch to GitHub.",
    "crumbs": [
      "About",
      "Getting Started",
      "00 - Getting Help"
    ]
  },
  {
    "objectID": "course/01_InstallingRPackages/slides_inperson.html#documentation-and-websites",
    "href": "course/01_InstallingRPackages/slides_inperson.html#documentation-and-websites",
    "title": "01 - Installing R Packages",
    "section": "Documentation and Websites",
    "text": "Documentation and Websites\n\n\n\n\n\n\n\n\n.\n\n\nWe have already seen a couple of ways to access the help documentation contained within an R package via Positron. Beyond internal documentation, R packages often have external websites that contain additional walk-through articles (ie. vignettes) to better document how to use the package.\nFor CRAN-based packages, we can start off by searching for the package name. 
So, in the case of dplyr" }, { - "objectID": "course/00_Positron/index.html", - "href": "course/00_Positron/index.html", - "title": "Using Positron", - "section": "", - "text": "For the YouTube livestream recording, see here\nFor screen-shot slides, click here", - "crumbs": [ - "About", - "Getting Started", - "00 - Positron" - ] + "objectID": "course/02_FilePaths/slides.html#set-up", + "href": "course/02_FilePaths/slides.html#set-up", + "title": "02 - File Paths", + "section": "Set Up", + "text": "Set Up\n\n\n\n\n\n\n\n\n.\n\n\nBefore we begin, let’s make sure you get the data needed for today transferred to your local computer, and then get the .fcs files copied over from there to your own working project folder. This is the process you will repeat each week throughout the course." }, { - "objectID": "course/00_Positron/index.html#console", - "href": "course/00_Positron/index.html#console", - "title": "Using Positron", - "section": "Console", - "text": "Console\nAt the bottom of the sceen, you will first see the Console Tab. This is the tab where your lines of code when executed (run) will appear, as well as any messages, warnings or errors that get returned. On the right side of the console, you can find several buttons, among them restart R and delete session (for when you need a fresh start), and clear console (which keeps all previously run outputs and objects, but clears away the displayed text within the console).", - "crumbs": [ - "About", - "Getting Started", - "00 - Positron" - ] + "objectID": "course/02_FilePaths/slides.html#working-directory", + "href": "course/02_FilePaths/slides.html#working-directory", + "title": "02 - File Paths", + "section": "Working Directory", + "text": "Working Directory\n\n\n\n\n\n\n\n\n.\n\n\nNow that we are back in our Week2 folder, let’s start by seeing our current location similarly to how our computer perceives it.\nWe will use getwd() function (ie. 
get working directory) to return the location of the folder we are currently inside of. For example, when getwd() is run within my Week2 project folder, I see the following location" }, { - "objectID": "course/00_Positron/index.html#terminal", - "href": "course/00_Positron/index.html#terminal", - "title": "Using Positron", - "section": "Terminal", - "text": "Terminal\nRight next to the Console tab is your Terminal tab. While the console tab is primarily used to run R code within Positron, the terminal is the interface where code containing system commands directed at at your computer is entered. We will use this less frequently, primarily in two context: 1) rendering Quarto documents, and 2) commiting changes to version control. Among the buttons on the right-side of the terminal to make note of are the + button to add a new terminal, and the trash/garbage can button to kill (stop) the terminal.\n\n\n\nThe other tabs (Problems, Output, Ports, Debug Console) are used less frequently. I usually will Problems and Debug when something goes wrong with the code, as various warning and error messages will end up being displayed there.", - "crumbs": [ - "About", - "Getting Started", - "00 - Positron" - ] + "objectID": "course/02_FilePaths/slides.html#directories", + "href": "course/02_FilePaths/slides.html#directories", + "title": "02 - File Paths", + "section": "Directories", + "text": "Directories\n\n\n\n\n\n\n\n\n.\n\n\nWithin this working directory, we have a variety of project folders and files related to the course. 
We can see the folders that are present using the list.dirs() function.\n\n\n\n\n\n\n\n\n\n\nlist.dirs(path=\".\", full.names=FALSE, recursive=FALSE)" }, { - "objectID": "course/00_Positron/index.html#help", - "href": "course/00_Positron/index.html#help", - "title": "Using Positron", - "section": "Help", - "text": "Help\nWhen trying to evaluate how a particular function is working in R, you can hover over it and positron will open up the documentation for that particular function if available, alternatively, you can enter ?theParticularFunctionsName in the console and hit enter to similarly view what is occuring.", - "crumbs": [ - "About", - "Getting Started", - "00 - Positron" - ] + "objectID": "course/02_FilePaths/slides.html#variables", + "href": "course/02_FilePaths/slides.html#variables", + "title": "02 - File Paths", + "section": "Variables", + "text": "Variables\n\n\n\n\n\n\n\n\n.\n\n\nBefore exploring file paths, we need to have some basic R code knowledge that we can use to work with them. Within R, we have the ability to assign particular values (be they character strings, numbers or logicals) to objects (ie. variables) that can be used when called upon later.\nFor example:\n\n\n\n\n\n\n\nWhatDayDidIWriteThis <- \"Saturday\"\n\n\n\n\n\n\n\n\n\n\n.\n\n\nIn this case, the variable name is what the assignment arrow (“<-”) is pointing at. In this case, WhatDayDidIWriteThis" }, { - "objectID": "course/00_Positron/index.html#variables", - "href": "course/00_Positron/index.html#variables", - "title": "Using Positron", - "section": "Variables", - "text": "Variables\nOn the upper-portion of the Secondary Side Bar, we can find the Session window, containing the Variables tab. 
As you run (execute) lines of code, and different variables, objects and functions are created, these become visible under the variables tab on the upper right.\n\n\n\nFor some types of objects (generally data.frames and other matrix-like objects), you can click on their listing under variables to expand to see additional details about the object (column names, etc.) as well as view a larger version which will appear within the Editor window.", - "crumbs": [ - "About", - "Getting Started", - "00 - Positron" - ] + "objectID": "course/02_FilePaths/slides.html#indexing", + "href": "course/02_FilePaths/slides.html#indexing", + "title": "02 - File Paths", + "section": "Indexing", + "text": "Indexing\n\n\n\n\n\n\n\n\n.\n\n\nNot all variables contain single objects.\nFor example, we can modify Fluorophores and add additional entries:\n\n\n\n\n\n\n\nFluorophores <- c(\"BV421\", \"FITC\", \"PE\", \"APC\")\nstr(Fluorophores)\n\n chr [1:4] \"BV421\" \"FITC\" \"PE\" \"APC\"\n\n\n\n\n\n\n\n\n\n\n\n.\n\n\nThe c stands for concatenate. It concatenates the objects into a larger object, known as a vector.\nIn this case, you will notice that, in addition to the specification that the values are characters, we get a [1:4], denoting that four values are present." }, { - "objectID": "course/00_Positron/index.html#plots", - "href": "course/00_Positron/index.html#plots", - "title": "Using Positron", - "section": "Plots", - "text": "Plots\nSimilarly, any generated Plots or Documents will appear within the Secondary Side Bar, either under Plots (bottom) or Viewer (top) tabs.", - "crumbs": [ - "About", - "Getting Started", - "00 - Positron" - ] + "objectID": "course/02_FilePaths/slides.html#listing-files", + "href": "course/02_FilePaths/slides.html#listing-files", + "title": "02 - File Paths", + "section": "Listing Files", + "text": "Listing Files\n\n\n\n\n\n\n\n\n.\n\n\nAfter this detour into variables and indexing, let’s return our focus to how to use these in the context of file paths. 
Working from within our Week2 project folder, let’s see what directories (folders) are present\n\n\n\n\n\n\n\nlist.dirs(path=\".\", full.names=FALSE, recursive=FALSE)" }, { - "objectID": "course/00_Positron/index.html#view", - "href": "course/00_Positron/index.html#view", - "title": "Using Positron", - "section": "View", - "text": "View\nOn the upper bar multiple tabs can be found, which we will explore in due time. Most useful to point out is the View tab. If you accidentally close your console, session or plots window, and are trying to get them to reapper, you would need to reselect them from this tab.", - "crumbs": [ - "About", - "Getting Started", - "00 - Positron" - ] + "objectID": "course/02_FilePaths/slides.html#creating-directories", + "href": "course/02_FilePaths/slides.html#creating-directories", + "title": "02 - File Paths", + "section": "Creating directories", + "text": "Creating directories\n\n\n\n\n\n\n\n\n.\n\n\nAlternatively, we can also create a folder via R using the dir.create() function. Since we want it within data, we would modify the path accordingly\n\n\n\n\n\n\n\nNewFolderLocation <- file.path(\"data\", \"target2\")\n\ndir.create(path=NewFolderLocation)" }, { - "objectID": "course/00_Positron/index.html#pages", - "href": "course/00_Positron/index.html#pages", - "title": "Using Positron", - "section": "Pages", - "text": "Pages\nThe pages tab and the left-side bar show you everything that is currently within your project folder, including all the folders, and files. Once version control with Git is initiated, new files are relected showing up as green text and a dot, while modified tracked files are reflected by light brown text and a dot.\n\n\n\nThe dropdown arrows can be used to open and close specific folders to allow for better organization. 
There is also a scrollbar on the right-side of the side-bar to scroll through the entire folders contents.", - "crumbs": [ - "About", - "Getting Started", - "00 - Positron" - ] + "objectID": "course/02_FilePaths/slides.html#file-paths", + "href": "course/02_FilePaths/slides.html#file-paths", + "title": "02 - File Paths", + "section": "File Paths", + "text": "File Paths\n\n\n\n\n\n\n\n\n.\n\n\nOne way we can do this is through a file.path argument. We could potentially provide this by adding either a “/” or a “\" into the path argument, depending on your computer’s operating system.\n\n\n\n\n\n\n\nlist.files(path=\"data/target\", full.names=FALSE, recursive=FALSE)" }, { - "objectID": "course/00_Positron/index.html#search", - "href": "course/00_Positron/index.html#search", - "title": "Using Positron", - "section": "Search", - "text": "Search\nThe search tab on the left side bar is something that I use routinely.\n\n\n\nIt can help locate code that you had been working on, but have since forgotten where it is at. 
Here is an example of finding the files where I had used a function that needed modifying within a local project folder’s files.\n\n\n\nSimilarly, if you need to replace a particular character string with another, the replace with field below can help simplify the task without having to track down and change 20 lines across 5 files.", - "crumbs": [ - "About", - "Getting Started", - "00 - Positron" - ] + "objectID": "course/02_FilePaths/slides.html#selecting-for-patterns", + "href": "course/02_FilePaths/slides.html#selecting-for-patterns", + "title": "02 - File Paths", + "section": "Selecting for Patterns", + "text": "Selecting for Patterns\n\n\n\n\n\n\n\n\n.\n\n\nIf we list the files currently within data, we get a return that looks like this:\n\n\n\n\n\n\n\nlist.files(\"data\", full.names=FALSE, recursive=FALSE)" }, { - "objectID": "course/00_Positron/index.html#extensions", - "href": "course/00_Positron/index.html#extensions", - "title": "Using Positron", - "section": "Extensions", - "text": "Extensions\nOn the far-left side we can find the Activity bar, which contains several tabs. Which tab you have selected will then dictate the contents of your left side-bar.\nOccupying the left side bar are several tabs. One of these is Extensions, which shows “Plugins” (or the VScode equivalent) that extend the functionality of Positron further. The ones you have installed may vary, but the main ones in context of this course are Air (provides color and highlights syntax for R code to make interpretation easier) as well as Quarto (for rendering the various document types).", - "crumbs": [ - "About", - "Getting Started", - "00 - Positron" - ] + "objectID": "course/02_FilePaths/slides.html#conditionals", + "href": "course/02_FilePaths/slides.html#conditionals", + "title": "02 - File Paths", + "section": "Conditionals", + "text": "Conditionals\n\n\n\n\n\n\n\n\n.\n\n\nOne useful thing is that within R, we can set conditions on whether something is carried out. 
The most typical conditionals you will encounter are “if” statements. They typically take a form that resembles the following.\n\n\n\n\n\n\n\nNeedCoffee <- TRUE\n\nif (NeedCoffee){\n print(\"Take a break\")\n}" }, { - "objectID": "course/00_Positron/index.html#git", - "href": "course/00_Positron/index.html#git", - "title": "Using Positron", - "section": "Git", - "text": "Git\nThe Git tab on the left side bar is where once version control is initiated for the project folder, we can see changes that have occurred to the individual files since the last commit. These changes can be added to a new commit by clicking on the + sign. This will be covered more extensively in the next section\n\n\n\nSimilarly, if you want to discard a change that has occured, the circular arrow will revert to the last commited version. Selecting and pressing the delete button will similarly work.\n\n\n\nSelecting the … options will highlight all the various git functions, some of which we will cover more extensively in the next section and throughout the course.", - "crumbs": [ - "About", - "Getting Started", - "00 - Positron" - ] + "objectID": "course/02_FilePaths/slides.html#conditionals-in-practice", + "href": "course/02_FilePaths/slides.html#conditionals-in-practice", + "title": "02 - File Paths", + "section": "Conditionals in practice", + "text": "Conditionals in practice\n\n\n\n\n\n\n\n\n.\n\n\nFirst off, let’s write a conditional to check if there is a target3 folder within data.\n\n\n\n\n\n\n\nfiles_present <- list.files(\"data\", full.names=FALSE, recursive=FALSE)\nfiles_present" }, { - "objectID": "course/00_Quarto/index.html", - "href": "course/00_Quarto/index.html", - "title": "Introduction to Quarto", - "section": "", - "text": "For the YouTube livestream recording, see here\nFor screen-shot slides, click here", - "crumbs": [ - "About", - "Getting Started", - "00 - Quarto" - ] }, { - "objectID": "course/02_FilePaths/slides.html#copying-files", + "href": 
"course/02_FilePaths/slides.html#copying-files", + "title": "02 - File Paths", + "section": "Copying Files", + "text": "Copying Files\n\n\n\n\n\n\n\n\n.\n\n\nTo copy files to another folder location, we use the file.copy() function. It has two arguments that we will be working with, from being the .fcs files, and to being the folder location we wish to transfer them to. If we tried using them as we currently have them:\n\n\n\n\n\n\n\n# Variable Infants containing 4 .fcs file names\n\nfile.copy(from=Infants, to=FolderTarget3)" }, { - "objectID": "course/00_Quarto/index.html#renderpreview", - "href": "course/00_Quarto/index.html#renderpreview", - "title": "Introduction to Quarto", - "section": "Render/Preview", - "text": "Render/Preview\nThe preview button, at the upper-left end of the Editor, is used to render/knit a quarto document. This triggers the process by which code-chunks are run, and then the outputs are cobbled together into the file format type designated by the YAML header.\n\n\n\n\nHTML\nIn this case, the YAML header’s format argument is set to html. After clicking preview, we can see the various rendering steps appear in the console below. Since no errors occured, the html document was formed succesfully and appears as a file in the left side-bar. Additional, a preview of the document appears in the View tab of the right side-bar, allowing for quick visual inspection.\n\n\n\nAlternatively, we can render a document via the terminal, by entering “quarto render”, followed by the name of the document.\n\n\n\nThis results similar process as what we saw with the preview button\n\n\n\nWe can also open the .html document via our File Explorer, which will open it within our web browser.\n\n\n\nQuarto documents can be rendered (previewed) in other formats besides html. These include pdf, Word documents (docx), and slides (revealjs). 
This is set by the format argument within the YAML header.\n\n\nPDF\nBy switching the format argument from html to pdf, we can render the document as a pdf\n\n\n\nWe can see the pdf is now listed in the list of files, with a preview shown on the right side-bar.\n\n\n\n\n\nDocx\nWe can also generate Word documents (.docx) as well.\n\n\n\nIn this case, we can see that a Word document file was created, but nothing appears in the View tab. This is because the format is not yet supported for the View tab. We can however open and view the Word document via our File explorer.\n\n\n\nWhich shows a Word Document style output.", - "crumbs": [ - "About", - "Getting Started", - "00 - Quarto" - ] + "objectID": "course/02_FilePaths/slides.html#removing-files.", + "href": "course/02_FilePaths/slides.html#removing-files.", + "title": "02 - File Paths", + "section": "Removing files.", + "text": "Removing files.\n\n\n\n\n\n\n\n\n.\n\n\nJust like we can add files via R, we can also remove them. However, when we remove them via this route, they are removed permanently, not sent to the recycle bin. We will revisit how later on in the course after you have gained additional experience with file.paths.\n\n\n\n\n\n\n\n?unlink()" }, { - "objectID": "course/00_Quarto/index.html#yaml", - "href": "course/00_Quarto/index.html#yaml", - "title": "Introduction to Quarto", - "section": "YAML", - "text": "YAML\nWe can additionally provide additional custom inputs to the YAML header. 
A couple examples include providing the document author and date.\n\n\n\nWhich we can see are updated after we preview/render.", - "crumbs": [ - "About", - "Getting Started", - "00 - Quarto" - ] + "objectID": "course/02_FilePaths/slides.html#basename", + "href": "course/02_FilePaths/slides.html#basename", + "title": "02 - File Paths", + "section": "Basename", + "text": "Basename\n\n\n\n\n\n\n\n\n.\n\n\nIf we look at Infants with the full.names=TRUE, we see the additional pathing folder has been added, allowing us to successfully copy over the files.\n\n\n\n\n\n\n\nInfants" }, { - "objectID": "course/00_Quarto/index.html#table-of-contents", - "href": "course/00_Quarto/index.html#table-of-contents", - "title": "Introduction to Quarto", - "section": "Table of Contents", - "text": "Table of Contents\nIn the previous section, we saw that we could provide headings and subheadings to our .qmd file by placing a # at the start of a line in the text portion of the document. A subheading was designated by a ##, with additional hierarchy being designated by appending an additional #.\n\n\n\nWe can use the heading information to generate a table of contents for our document. To do this, we add a toc argument to the yaml header, and set it to TRUE. 
After rendering, it appears on the upper-right side of the document.\n\n\n\nNotice, that the subheaders do not appear currently within the TOC.\n\n\n\nWe can fix this by setting a toc-expand argument in the YAML to true.", - "crumbs": [ - "About", - "Getting Started", - "00 - Quarto" - ] + "objectID": "course/02_FilePaths/slides.html#recursive", + "href": "course/02_FilePaths/slides.html#recursive", + "title": "02 - File Paths", + "section": "Recursive", + "text": "Recursive\n\n\n\n\n\n\n\n\n.\n\n\nAnd finally, now that we have created additional nested folders and populated them with fcs files, let’s see what setting the list.files() recursive argument to TRUE does\n\n\n\n\n\n\n\nall_files_present <- list.files(full.names=TRUE, recursive=TRUE)\nall_files_present" }, { - "objectID": "course/00_Quarto/index.html#code-chunk-arguments", - "href": "course/00_Quarto/index.html#code-chunk-arguments", - "title": "Introduction to Quarto", - "section": "Code Chunk Arguments", - "text": "Code Chunk Arguments\nAs we briefly touched on in the last section, code-chunks can be modified by including arguments, which affect whether a particular code chunk gets evaluated. In that example, we included a “#| eval: FALSE” to the install commands since we did not want them to be re-run subsequently. We will take a closer look at the other arguments in this section.\n\nEval\nThe code-chunk argument, “Eval”, is used to determine when a code-chunk get’s evaluated. When set to true (or by default if no eval argument is included), the code-chunks contents will be run/executed, and the output will appear. 
We can see this in the html output, as below the code block, we get back the address of my working directory.\n\n\n\nWhen we switch the Eval argument to FALSE, and then render the document, we can see that the code block remains, but we do not get any output for the code contained within.\nIn every-day practice, we will “use eval: FALSE” arguments when we want to keep the code for later use, but want to manually run the code contained within ourselves.\n\n\n\n\n\nEcho\nThe code-block argument “echo” dictates whether the code within the code-block is displayed within the document. So in the case when “echo: true”, we get both the code displayed, as well as the output that gets returned by the code.\n\n\n\nBy contrast, when “echo: FALSE”, we do not have the code displayed, but do get the output of that code being run.\nIn daily-practice, “echo: FALSE” gets often used when generating plots that we want to include in the report, without the code that generated them being displayed.\n\n\n\n\n\nInclude\nThe next code-chunk argument is include. Unlike echo, which focuses on whether the code is displayed, but still returns the output, include dictates behavior of both the code-block and it’s output. Unlike eval however, it will still run the code, which allows it to be available for the next code-chunk that might need it. When we set “include: false”, no trace of that code-chunk is present in the document. This is useful when making reports where we do not want to include the code used to generate a particular figure.\n\n\n\nBy contrast, when we set “include: true”, the code block and it’s output is once again included within the rendered document.\n\n\n\n\n\nCode-Fold\nOne of my favorites is “code-fold”. 
When we set it as “code-fold: show”, it displays the code, but provides a drop-down arrow that can be closed to compress the code.\n\n\n\nIn contrast, if we want to make the code-available for those that are interested, but not directly visible, we can set as “code-fold: true”\n\n\n\n\n\nWarnings\nWithin R, when code is executed, in addition to returning the output, R is capable of returning warnings (when something is not as expected, but not sufficient to elicit an error with a complete stop) or a message (text output that gets displayed, often telling about progress). While these are useful when running code yourself, it can be annoying when generating a report and the 2nd page is a bunch of warning text being displayed.\nFor example, when the R package ggcyto is loaded via the library call, it will automatically load several other packages, which typically results in these messages being outputted:\n\n\n\nWe can therefore set that code-chunk’s warning/message arguments to FALSE, therefore silencing the message outputs that would otherwise clutter up our report.", + "objectID": "course/02_FilePaths/slides.html#saving-changes-to-version-control", + "href": "course/02_FilePaths/slides.html#saving-changes-to-version-control", + "title": "02 - File Paths", + "section": "Saving changes to Version Control", + "text": "Saving changes to Version Control\n\n\n\n\n\n\n\n\n.\n\n\nAnd as is good practice, to maintain version control, let’s stage all the files and folders we created today within the Week2 Project Folder, write a commit message, and send these files back to GitHub until they are needed again next time." + }, + { + "objectID": "course/02_FilePaths/index.html", + "href": "course/02_FilePaths/index.html", + "title": "02 - File Paths", + "section": "", + "text": "For the YouTube livestream recording, see here\nFor screen-shot slides, click here\nWelcome to the second week of Cytometry in R! 
This week we will learn about file.path, namely, how to communicate to our computer (and R) where various files are stored.", "crumbs": [ "About", - "Getting Started", - "00 - Quarto" + "Intro to R", + "02 - File Paths" ] }, { - "objectID": "course/00_Quarto/index.html#text-styles", - "href": "course/00_Quarto/index.html#text-styles", - "title": "Introduction to Quarto", - "section": "Text Styles", - "text": "Text Styles\nQuarto primarily uses Markdown for text styling. Consequently, markdown arguments can be used within the text to change how various text appears.\n\n\n\nFor a regular text, This single asterisk on each side of a word will italicize.\n\n\n\nWhen the number of asterisk is doubled, This word is bolded.\n\n\n\nWhen three asterisk are used, both are applied.\n\n\n\nFor an underscore, the word of interest is surrounded by square brackets “[]”, with “{.underline}” adjacent.", + "objectID": "course/02_FilePaths/index.html#set-up", + "href": "course/02_FilePaths/index.html#set-up", + "title": "02 - File Paths", + "section": "Set Up", + "text": "Set Up\nBefore we begin, let’s make sure you get the data needed for today transferred to your local computer, and then get the .fcs files copied over from there to your own working project folder. This is the process you will repeat each week throughout the course.\n\nNew Repository\nFirst off, login to your GitHub account. Once there, you will select the options to create a new repository (similar to what you did during Using GitHub)\n\n\n\nFor this week, let’s set this new repository up as a private repository, and call it Week2. This will keep things consistent with the file.paths we will be showing in the examples.\n\n\n\nOnce the new repository has been created, copy the URL.\n\n\n\nNext, open up Positron, set the interpreter to use R, and then select the option to bring in a “New Folder from Git”.\n\n\n\nPaste in your new repository’s url. 
Additionally, if you want to match file.paths shown in the examples, set your storage location to your local Documents folder (please note the start of the file.path will look different depending on whether you are on Windows, MacOS, or Linux).\n\n\n\nYour new repository will then be imported from GitHub. Once this is done, create two subfolders (data and images) and a new .qmd file (naming it filepaths.qmd).\n\n\n\n\n\nSync\nWith this done, return to GitHub and open your forked version of the CytometryInR course folder. If you haven’t yet done so, click on sync to bring in this week’s code and datasets.\n\n\n\nReturning to Positron, you will need to switch Project Folders, switching from Week2 over to CytometryInR.\n\n\n\n\n\nPull\nOnce the CytometryInR project folder has opened, you will need to pull in the new data from GitHub to your local computer.\n\n\n\n\n\nCopy Files to Week2\nOnce this is done, you will see the course folder containing this week’s folder (02_FilePaths). Within it there is a data folder with .fcs files. To avoid causing conflicts when bringing in next week’s materials, you will want to manually copy over these .fcs files (via your File Explorer) to the data folder within your “Week2” Project Folder.\n\n\n\n\n\nCommit and Push\nWhen you reopen your Week2 project folder in Positron, you should now be able to see the .fcs files within the data folder. Next, from the action bar on the far left, select the Source Control tab. Stage all the changes (as was done in Using Git), and write a short commit message.\n\n\n\nWith these files now being tracked by version control, push (ie. send) your changes to GitHub so that they are remotely backed up.\n\n\n\nAnd with this setup complete, you are now ready to proceed. 
Remember, run code and write notes in your working project folder (Week2 or otherwise named) to avoid conflicts next week in the CytometryInR folder when you are trying to bring in the Week #3 code and datasets.", "crumbs": [ "About", "Intro to R", "02 - File Paths" ] }, { - "objectID": "course/00_Quarto/index.html#hyperlinks", - "href": "course/00_Quarto/index.html#hyperlinks", - "title": "Introduction to Quarto", - "section": "Hyperlinks", - "text": "Hyperlinks\nYou can link to a website by surrounding word of interest in [] and placing the url within () adjacent to it.", - "crumbs": [ - "About", - "Getting Started", - "00 - Quarto" - ] }, { "objectID": "course/02_FilePaths/index.html#working-directory", "href": "course/02_FilePaths/index.html#working-directory", "title": "02 - File Paths", "section": "Working Directory", "text": "Working Directory\nNow that we are back in our Week2 folder, let’s start by seeing our current location similarly to how our computer perceives it.\nWe will use the getwd() function (ie. get working directory) to return the location of the folder we are currently inside of. For example, when getwd() is run within my Week2 project folder, I see the following location\n\ngetwd()\n\n\nThis returns a file path. The final location (Week2 in this case) is the Working Directory. When working in R, your computer will discern other locations in relation to this directory.", "crumbs": [ "About", "Intro to R", "02 - File Paths" ] }, { - "objectID": "course/00_Quarto/index.html#images", - "href": "course/00_Quarto/index.html#images", - "title": "Introduction to Quarto", - "section": "Images", - "text": "Images\nYou can place images by adding the following, as long as the file.path to the image is correctly formatted. 
In my case, this is why I include images folders within my folders to simplify the copy and paste.", - "crumbs": [ - "About", - "Getting Started", - "00 - Quarto" - ] }, { "objectID": "course/02_FilePaths/index.html#directories", "href": "course/02_FilePaths/index.html#directories", "title": "02 - File Paths", "section": "Directories", "text": "Directories\nWithin this working directory, we have a variety of project folders and files related to the course. We can see the folders that are present using the list.dirs() function.\n\nlist.dirs(path=\".\", full.names=FALSE, recursive=FALSE)\n\n\nWithin this list.dirs() function, we are specifying two arguments that we will be working with later today, full.names and recursive. For now, let’s set them both to FALSE, which means the conditions they implement are inactive (turned off).\n\n\nThe path argument is currently set to “.”, which is a stand-in for the present directory. In R, if an argument is not specified directly, it is inferred based on an order of expected arguments. Thus, if not present, we could still get the same output as seen before.\n\nlist.dirs(full.names=FALSE, recursive=FALSE)\n\n\n\n\nWithin Positron, in addition to visible folders, we also have hidden folders (denoted by the “.” in front of the folder name when using list.dirs()). In the case of one of our course website folders, we can see a “.quarto” folder shown in a lighter gray. The “.git” folder we saw from list.dirs() is typically hidden when viewing from Positron.\n\nIn the case of Week2, the two non-hidden folders we created are listed. 
We will see how to navigate these in a second.", "crumbs": [ "About", - "Getting Started", - "00 - Quarto" + "Intro to R", + "02 - File Paths" ] }, { - "objectID": "course/00_WorkstationSetup/MacOS.html", - "href": "course/00_WorkstationSetup/MacOS.html", - "title": "Installing Software on MacOS", - "section": "", - "text": "For the YouTube livestream recording, see here\nFor screen-shot slides, click here\nThis is the software installation walkthrough for those whose computers are running MacOS. Based on our pre-course interest form, you make up a solid proportion of the course participants." - }, - { - "objectID": "course/00_WorkstationSetup/MacOS.html#installing-r", - "href": "course/00_WorkstationSetup/MacOS.html#installing-r", - "title": "Installing Software on MacOS", - "section": "Installing R", - "text": "Installing R\nTo get started, first navigate to the R website. Once there, click on Download R option towards the top of the page.\n\n\n\nOn the next screen, you will need to select a mirror from which to download the software from. You can either select the closest geographic location (which may be faster) or alternatively just select the Cloud option which should redirect you.\n\n\n\nYou will then select your Operating System, in this case, macOS\n\n\n\nNext, you will need to select the appropiate download based on your computers architecture. On newer Macs (containing M1+ chips) this would the arm64 option on the center left of the screen. For the older Intel (pre-2020) Macs, you would select the x86_64 option. 
If you are unsure, check your About This Mac tab\n\n\n\nAfter the download has completed, launch the installer\n\n\n\nProceed through the Read Me\n\n\n\nYou will then be prompted to acccept the software license (which is the free copyleft GPL2 license, which we will learn about later in the course).\n\n\n\n\n\n\nNext, you will need to navigate through several pages, keeping the defaults.\n\n\n\nAnd with any luck, you should see that the installation was successful." - }, - { - "objectID": "course/00_WorkstationSetup/MacOS.html#xcode-command-line-tools", - "href": "course/00_WorkstationSetup/MacOS.html#xcode-command-line-tools", - "title": "Installing Software on MacOS", - "section": "Xcode Command Line Tools", - "text": "Xcode Command Line Tools\nDepending on your version of macOS, you may or may not already have Git installed on your computer. The reason is that it comes bundled within the Xcode Command Line Tools.\nIf this is not your first foray into coding, you may have previously seen an installation pop-up along the lines of “XYZ requires command line developer tools. Would you like to install the tools now?” when installing an IDE (like Positron, Rstudio or Visual Studio Code).\n\n\n\nSince these command line developer tools contain both Git, and also the equivalent of Rtools for Windows, we will need to install them for this course. To get started, first open your terminal.\n\n\n\nNext run the following code:\n\nxcode-select --install\n\n\n\n\nYou will then have the pop-up asking whether you want to install the command line tools (which contain Git). 
Select Install.\n\n\n\nYou will then be asked to accept the license\n\n\n\nYour installation will then proceed\n\n\n\nAnd if all goes well, the software will finish installing.\n\n\n\nAfter you complete Positron installation (next section), if you check the version control tab on the action bar on the far left side of the screen, you should see the following if Git was installed correctly.\n\n\n\nAlternatively, if you see this, you will need to reattempt the installation." - }, - { - "objectID": "course/00_WorkstationSetup/MacOS.html#install-positron", - "href": "course/00_WorkstationSetup/MacOS.html#install-positron", - "title": "Installing Software on MacOS", - "section": "Install Positron", - "text": "Install Positron\nFinally, you will install Positron. It is an integrated development environment (IDE) in which we will open, modify and run our code throughout the course.\nFirst, navigate to their homepage, and select the blue Download option button on the upper-right.\n\n\n\nYou will then need to accept the Elastic License agreement to use the software (we will cover this source-available license type and what it does later in the course).\nWith the license accepted, you will be able to select your operating system and the relevant installer depending on whether you are on an M1+ (ARM) or older Intel (x86) Mac.\n\n\nOnce the Download completes, proceed to install the package as you normally would for any other program." - }, - { - "objectID": "course/00_WorkstationSetup/index.html", - "href": "course/00_WorkstationSetup/index.html", - "title": "Workstation Setup", - "section": "", - "text": "In the previous section, we first set up your GitHub account. Then we modified your GitHub profile and added a README section. 
Finally, we forked the CytometryInR repository so that you can easily retrieve the new course materials each week.\nIt is now time to install the required software on your computer, which will get your work-station set up with everything needed for this course. Depending on your computers operating system, the installation requirements may differ a bit. In general, you will need to install the following software:\nR website : The programming language we will be using throughout the course.\nPositron : The integrated development environment (IDE) in which we will open, modify and run our code.\nGit : The version control software that will allow us to track changes to our files.\nAdditionally, Windows users will need to install:\nRTools : Used to build R packages from source code.\nYou can find the operating system specific installation walkthroughs below. Once you have completed your specific walkthrough, return to this page and proceed to the next section.\nPlease note: For those using university or company administered computers, please be aware that you may not have the necessary permissions to install these directly, and may need to reach out to your IT department to help get the software installed and running correctly.\nIf you are using your own computer, congratulations, you are your system administrator, and should already have the necessary permissions.", + "objectID": "course/02_FilePaths/index.html#variables", + "href": "course/02_FilePaths/index.html#variables", + "title": "02 - File Paths", + "section": "Variables", + "text": "Variables\nBefore exploring file paths, we need to have some basic R code knowledge that we can use to work with them. Within R, we have the ability to assign particular values (be they character strings, numbers or logicals) to objects (ie. variables) that can be used when called upon later.\nFor example:\n\nWhatDayDidIWriteThis <- \"Saturday\"\n\nIn this case, the variable name is what the assignment arrow (“<-”) is pointing at. 
In this case, WhatDayDidIWriteThis\n\n\nWhen we run this, we create a variable that will appear within the right-sidebar.\n\nWhatDayDidIWriteThis <- \"Saturday\"\n\n\n\n\nThese variables can subsequently be retrieved by printing (ie. running) the name of the variable\n\nWhatDayDidIWriteThis \n\n[1] \"Saturday\"\n\n\n\n\nYou can create variables with almost any name you can think of\n\nTopSecretMeetingDay <- \"Saturday\"\n\n\n\nWith a few exceptions. R doesn’t play well with spaces:\n\nTop Secret Meeting Day <- \"Saturday\"\n\nError in parse(text = input): <text>:1:5: unexpected symbol\n1: Top Secret\n ^\n\n\n\n\nBut does play well with underscores:\n\nTop_Secret_Meeting_Day <- \"Saturday\"\n\n\n\nThe above (with individual words separated by _) is collectively known as snake case. The alternate way to help delineate variable names is “camelCase”, with the first letter of each word capitalized (seen in the previous example).\n\n\n\n\nTopSecretMeetingDay\n\n[1] \"Saturday\"\n\n\n\n\nYou can overwrite a variable name by assigning a different value to it:\n\nTopSecretMeetingDay <- \"Monday\"\n\n\nTopSecretMeetingDay\n\n[1] \"Monday\"\n\n\n\n\nYou can also remove individual variables via the rm function\n\nrm(Top_Secret_Meeting_Day)\n\n\n\nOr if trying to remove all, via the right sidebar\n\n\n\nIn the prior case, we are creating a variable that is a “string” of character values, due to our use of “” around the word. We can see this when we use the str() function.\n\nFluorophores <- \"FITC\"\nstr(Fluorophores)\n\n chr \"FITC\"\n\n\nThe “chr” in front denoting that Fluorophores contains a character string.\n\n\nThis could also be retrieved using the class() function.\n\nclass(Fluorophores)\n\n[1] \"character\"\n\n\n\n\nAlternatively, we could assign a numeric value to a variable\n\nFluorophores <- 29\nstr(Fluorophores)\n\n num 29\n\n\nWhich returns “num”, ie. numeric.\n\n\nWe can also specify a logical (ie. 
TRUE or FALSE) to a particular object\n\nIsPerCPCy5AGoodFluorophore <- FALSE\nstr(IsPerCPCy5AGoodFluorophore)\n\n logi FALSE\n\n\nWhich returns logi in front, denoting this variable contains a logical value.\n\n\nLast week, when we were installing dplyr, the reason that installation failed was that install.packages() expects a character string. However, when we left off the “”, it looked within our local environment’s created variables for the dplyr variable, couldn’t find it, and thus failed.\nWe could, of course, have assigned a character value to a variable name, and then used that variable name, which would have worked.\n\nPackageToInstall <- \"dplyr\"\n\ninstall.packages(PackageToInstall)",
    "crumbs": [
      "About",
-      "Getting Started",
-      "00 - Workstation Setup"
+      "Intro to R",
+      "02 - File Paths"
    ]
  },
  {
-    "objectID": "course/00_WorkstationSetup/index.html#windows",
-    "href": "course/00_WorkstationSetup/index.html#windows",
-    "title": "Workstation Setup",
-    "section": "Windows",
-    "text": "Windows\n\nInstallation walkthrough for Windows",
+    "objectID": "course/02_FilePaths/index.html#indexing",
+    "href": "course/02_FilePaths/index.html#indexing",
+    "title": "02 - File Paths",
+    "section": "Indexing",
+    "text": "Indexing\nNot all variables contain single objects.\nFor example, we can modify Fluorophores and add additional entries:\n\nFluorophores <- c(\"BV421\", \"FITC\", \"PE\", \"APC\")\nstr(Fluorophores)\n\n chr [1:4] \"BV421\" \"FITC\" \"PE\" \"APC\"\n\n\nThe c stands for concatenate. 
It concatenates the objects into a larger object, known as a vector.\nIn this case, you notice that, in addition to the specification that the values are characters, we get a [1:4], denoting four objects are present.\n\n\nWe can similarly retrieve this information using the length() function\n\nlength(Fluorophores)\n\n[1] 4\n\n\n\n\nWhen multiple objects are present, we can specify them individually by providing their index number within square brackets [].\n\nFluorophores[1]\n\n[1] \"BV421\"\n\n\n\n\n\nFluorophores[3]\n\n[1] \"PE\"\n\n\n\n\nOr specify in sequence using a colon (:)\n\nFluorophores[3:4]\n\n[1] \"PE\" \"APC\"\n\n\n\n\nOr if not adjacent, reusing c within the square brackets\n\nFluorophores[c(1,4)]\n\n[1] \"BV421\" \"APC\" \n\n\n\n\nWe will revisit these concepts throughout the course; what we have covered today will help us create file paths and select .fcs files that we want to work with via index number.",
    "crumbs": [
      "About",
-      "Getting Started",
-      "00 - Workstation Setup"
+      "Intro to R",
+      "02 - File Paths"
    ]
  },
  {
-    "objectID": "course/00_WorkstationSetup/index.html#macos",
-    "href": "course/00_WorkstationSetup/index.html#macos",
-    "title": "Workstation Setup",
-    "section": "MacOS",
-    "text": "MacOS\n\nInstallation walkthrough for MacOS",
+    "objectID": "course/02_FilePaths/index.html#listing-files",
+    "href": "course/02_FilePaths/index.html#listing-files",
+    "title": "02 - File Paths",
+    "section": "Listing Files",
+    "text": "Listing Files\nAfter this detour into variables and indexing, let’s return our focus to how to use these in the context of file paths. 
Working from within our Week2 project folder, let’s see what directories (folders) are present\n\nlist.dirs(path=\".\", full.names=FALSE, recursive=FALSE)\n\n\n\n\nWe can also list any files that are present within our working directory using the list.files() function.\n\nlist.files()\n\n\nIn this case, in addition to our filepaths.qmd file, we can see the LICENSE and README files created when we set up the repository.\n\n\nWe can also specify a particular folder whose contents we want to show by changing the path argument. For example, if we wanted to see the contents of the “data” folder\n\nlist.files(path=\"data\", full.names=FALSE, recursive=FALSE)\n\n\nWhich in this case returns the .fcs files we copied over at the start of this lesson.\n\n\nIn this case, there are no folders under “data”. Let’s go ahead and create a new one, calling it target.",
    "crumbs": [
      "About",
-      "Getting Started",
-      "00 - Workstation Setup"
+      "Intro to R",
+      "02 - File Paths"
    ]
  },
  {
-    "objectID": "course/00_WorkstationSetup/index.html#linux-debian",
-    "href": "course/00_WorkstationSetup/index.html#linux-debian",
-    "title": "Workstation Setup",
-    "section": "Linux (Debian)",
-    "text": "Linux (Debian)\n\nInstallation walkthrough for Linux",
+    "objectID": "course/02_FilePaths/index.html#creating-directories",
+    "href": "course/02_FilePaths/index.html#creating-directories",
+    "title": "02 - File Paths",
+    "section": "Creating directories",
+    "text": "Creating directories\nAlternatively, we can also create a folder via R using the dir.create() function. Since we want it within data, we would modify the path accordingly\n\nNewFolderLocation <- file.path(\"data\", \"target2\")\n\ndir.create(path=NewFolderLocation)\n\n\n\n\nBefore continuing, let’s copy the first two .fcs files into both target and target2.\n\n\n\nGiven our working directory is set to the top-level of the Week2 project folder, we can’t just check inside nested target folders directly. 
If we attempt to:\n\nlist.files(path=\"target\", full.names=FALSE, recursive=FALSE)\n\ncharacter(0)\n\n\n\n\nNo files are returned (ie. character(0)), since from our computer’s perspective, “target” doesn’t exist within the active working directory.\n\nfile.exists(\"target\")\n\n[1] FALSE\n\n\n\n\nOn the other hand, within its view, it knows that the data folder exists\n\nfile.exists(\"data\")\n\n\nSo here we encounter the first challenge when communicating to our computer where to search for and find files. We need to provide a file.path that incorporates the path of folders between where the computer is currently at (ie. the working directory) and the target file itself.",
    "crumbs": [
      "About",
-      "Getting Started",
-      "00 - Workstation Setup"
+      "Intro to R",
+      "02 - File Paths"
    ]
  },
  {
    "objectID": "course/00_WorkstationSetup/Windows.html",
    "href": "course/00_WorkstationSetup/Windows.html",
    "title": "Installing Software on Windows",
    "section": "",
    "text": "For the YouTube livestream recording, see here\nFor screen-shot slides, click here\nThis is the software installation walkthrough for those whose computers are running Windows. Based on our pre-course interest form, you make up the majority of course participants."
  },
  {
    "objectID": "course/00_WorkstationSetup/Windows.html#installing-r",
    "href": "course/00_WorkstationSetup/Windows.html#installing-r",
    "title": "Installing Software on Windows",
    "section": "Installing R",
    "text": "Installing R\nTo get started, first navigate to the R website. Once there, click on the Download R option towards the top of the page.\n\n\n\nOn the next screen, you will need to select a mirror from which to download the software. 
You can either select the closest geographic location (which may be faster) or alternatively just select the Cloud option which should redirect you.\n\n\n\nYou will then select your Operating System, in this case, Windows.\n\n\n\nAnd go ahead and select the Install R for the first time link.\n\n\n\nNext, you will select the download the current version option at the top of the page.\n\n\n\nThe popup window will then ask where you want to save the installer (.exe) file. We generally save this to either Downloads or Desktop to make finding it easier.\n\n\n\nAfter the download is complete, double click on the installer’s .exe file. This will open a popup asking you to select your preferred language.\n\n\n\nYou will then be prompted to accept the software license (which is the free copyleft GPL2 license, which we will learn about later in the course).\n\n\n\nOn Windows, R will normally save its software folder under Program Files.\n\n\n\nNext, please accept the defaults.\n\n\n\n\n\n\n\n\n\n\n\n\nWith the defaults accepted, the installation will commence. Feel free to go have a coffee/tea/beverage-of-your-choice break while you wait.\n\n\n\nAnd if all goes well, the installation will complete without any issues."
  },
  {
    "objectID": "course/00_WorkstationSetup/Windows.html#installing-rtools",
    "href": "course/00_WorkstationSetup/Windows.html#installing-rtools",
    "title": "Installing Software on Windows",
    "section": "Installing RTools",
    "text": "Installing RTools\nWe will now work on installing Rtools. This software is needed when building R packages from source, which we will need throughout the course for R packages hosted on GitHub.\nTo get started, we will return to the R installation page we visited previously and instead click on the Rtools option.\n\n\n\nNext, select the most recent version of Rtools to Download.\n\n\n\nYou will then select your architecture. 
For the vast majority of Windows users, your computer will likely be using an x86 chip architecture, so you would select the Rtools45 installer option.\nIf your computer, however, uses the ARM chip architecture, you would select the 64-bit ARM Rtools45 installer instead. If you are unsure, see the following.\n\n\n\nNext, you will select the location to save the Rtools installer to. We generally save this to either Downloads or Desktop to make finding it easier.\n\n\n\nOnce downloaded, double click on the .exe to launch the Rtools installer.\n\n\n\nSimilar to what we did when installing R, go ahead and keep the defaults.\n\n\n\nAnd click install to proceed with the installation.\n\n\n\nAnd wait while the installation wraps up.\n\n\n\nIf all goes well, you should see the following installation success page."
  },
  {
    "objectID": "course/00_WorkstationSetup/Windows.html#installing-git",
    "href": "course/00_WorkstationSetup/Windows.html#installing-git",
    "title": "Installing Software on Windows",
    "section": "Installing Git",
    "text": "Installing Git\nGit is a version control software widely used among software developers and bioinformaticians. 
We will use it extensively throughout the course, both locally on our computers (to keep track of changes to our files), as well as in combination with GitHub (to maintain online backups of our files).\nWe will first navigate to the website and select the download for Windows option.\n\n\n\n\n\n\n\n\n\nWe will then proceed and select the install 64-bit Git for Windows Setup option\n\n\n\n\n\n\n\nAs was the case with our installation of R and Rtools, a pop-up will appear asking for a location to save the installer to.\nOnce downloaded, double-click and proceed with the installation.\nYou will be asked to accept the Git License (which is the free copyleft GPL2 license, which we will learn about later in the course).\n\n\n\nThen you will be asked to select the folder to save the software to (usually your Programs folder)\n\n\n\nAt this point, the Git installer will ask a series of increasingly niche questions. It is best to just accept all the default options, to avoid wandering too far down a “What is Vim?!?” rabbit-hole.\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\nHaving made it through all the niche customization screens, we finally reach the install button.\n\n\n\nWe can then wait for the install to complete.\n\n\n\nAnd success, we have now installed Git."
  },
  {
    "objectID": "course/00_WorkstationSetup/Windows.html#installing-positron",
    "href": "course/00_WorkstationSetup/Windows.html#installing-positron",
    "title": "Installing Software on Windows",
    "section": "Installing Positron",
    "text": "Installing Positron\nFinally, you will install Positron. 
It is an integrated development environment (IDE) in which we will open, modify and run our code throughout the course.\nFirst, navigate to their homepage, and select the blue Download option button on the upper-right.\n\n\n\nYou will then need to accept the Elastic License agreement to use the software (we will cover this source-available license type and what it does later in the course).\nWith the license accepted, you will be able to select your operating system. In this case, we will select Windows, specifically the user level install.\n\n\n\nPlease note, if you are using a Windows computer with an ARM based chip (as is the case with Snapdragons), you will need to download the installer from Positron’s GitHub Release Page, as they are still testing some features.\n\n\nYou will then be prompted to select the location you want to save the installer to. We will generally save this to either Downloads or Desktop to make finding it easier.\n\n\n\nOnce the download is complete, double click on the installer, and again accept the license agreement.\n\n\n\nGenerally, Positron will store its software folder under Program Files.\n\n\n\nNext up, accept the default options for the following screens\n\n\n\n\n\n\nAnd finally, click Install.\n\n\n\n \n\nIf all goes well, you should then see the installation success page." 
- }, - { - "objectID": "course/01_InstallingRPackages/slides.html#set-up", - "href": "course/01_InstallingRPackages/slides.html#set-up", - "title": "01 - Installing R Packages", - "section": "Set Up", - "text": "Set Up\nAlright, with the background out of the way, let’s get started!\n\n\n\n\n\n\n\nImportant\n\n\nPlease make sure to sync your forked version of the CytometryInR repository, and pull any changes to your local computer’s CytometryInR project folder so that you have the most recent version of the code and data available.\n\n\n\n\n\n\n\n\n\n\n\nWarning\n\n\nPlease remember to always copy over the new material from your local CytometryInR folder to a separate Project Folder that you created and named (ex. “Week_01” or “MyLearningFolder”, etc.). This will ensure any edits you make to the files do not affect your ability to bring in next week’s materials to the CytometryInR folder" - }, - { - "objectID": "course/01_InstallingRPackages/slides.html#checking-for-loaded-packages", - "href": "course/01_InstallingRPackages/slides.html#checking-for-loaded-packages", - "title": "01 - Installing R Packages", - "section": "Checking for Loaded Packages", - "text": "Checking for Loaded Packages\n\n\n\n\n\n\n\n\n.\n\n\nFor the contents (ie. the functions) of an R package to be available for your computer to use, they must first be activated (ie. loaded) into your local environment. We will first learn how to check what R packages are currently active." - }, - { - "objectID": "course/01_InstallingRPackages/slides.html#installing-from-cran", - "href": "course/01_InstallingRPackages/slides.html#installing-from-cran", - "title": "01 - Installing R Packages", - "section": "Installing from CRAN", - "text": "Installing from CRAN\n\n\n\n\n\n\n\n\n.\n\n\nWe will start by installing R packages that are part of the CRAN repository. This is the main R package repository, being part of the broader R software project. 
In the context of this course, R packages that work primarily with general data structures (rows, columns, matrices, etc.) or visualizations will predominantly be found within this repository.\nThese include the tidyverse packages. These packages have collectively made R easier to use by smoothing out some of the rough edges of base R, which is why R has seen major growth within the last decade. We will be installing several of these R packages today."
  },
  {
    "objectID": "course/01_InstallingRPackages/slides.html#installing-from-bioconductor",
    "href": "course/01_InstallingRPackages/slides.html#installing-from-bioconductor",
    "title": "01 - Installing R Packages",
    "section": "Installing from Bioconductor",
    "text": "Installing from Bioconductor\n\n\n\n\n\n\n\n\n.\n\n\nBioconductor is the second R package repository we will be working with throughout the course. While it contains far fewer packages than CRAN, it contains packages that are primarily used by the biomedical sciences. Following this link you can find its current flow and mass cytometry R packages.\nBioconductor R packages differ from CRAN R packages in a couple of ways. Bioconductor has different standards for acceptance than CRAN. They usually contain interoperable object-types, put more effort into documentation and continuous testing to ensure that the R package remains functional across operating systems."
  },
  {
    "objectID": "course/01_InstallingRPackages/slides.html#install-from-github",
    "href": "course/01_InstallingRPackages/slides.html#install-from-github",
    "title": "01 - Installing R Packages",
    "section": "Install from GitHub",
    "text": "Install from GitHub\n\n\n\n\n\n\n\n\n.\n\n\nIn addition to the CRAN and Bioconductor repositories, individual R packages can also be found on GitHub hosted on their respective developers’ GitHub accounts. 
Newer packages that are still being worked on (often in the process of submission to CRAN or Bioconductor) can be found here, as well as those where the author decided not to bother with a review process, and just made the packages immediately available, warts and all."
  },
  {
    "objectID": "course/01_InstallingRPackages/slides.html#troubleshooting-install-errors",
    "href": "course/01_InstallingRPackages/slides.html#troubleshooting-install-errors",
    "title": "01 - Installing R Packages",
    "section": "Troubleshooting Install Errors",
    "text": "Troubleshooting Install Errors\n\n\n\n\n\n\n\n\n.\n\n\nWe have now installed three R packages, dplyr, PeacoQC, and flowSpectrum. In my case, I did not encounter any errors during the installation. However, sometimes a package installation will fail due to an error encountered during the installation process. This can be due to a number of reasons, ranging from a missing dependency to an update that caused a conflict. While these can occur for CRAN or Bioconductor packages, they are more frequently seen for GitHub packages where the Description/Namespace files may not have been fully updated yet to install all the required dependencies.\nWhen encountering an error, start off by reading through the message to see if you can parse any useful information about what package failed to install, and if it lists the missing dependency package’s name. The latter was the case in the error message example shown below."
  },
  {
    "objectID": "course/01_InstallingRPackages/slides.html#installing-specific-package-versions",
    "href": "course/01_InstallingRPackages/slides.html#installing-specific-package-versions",
    "title": "01 - Installing R Packages",
    "section": "Installing Specific-Package Versions",
    "text": "Installing Specific-Package Versions\n\n\n\n\n\n\n\n\n.\n\n\nWhile we may be tempted to think of R packages as static, they change quite often, as their developers add new features, fix bugs, etc. 
To help keep track of these changes (essential for reproducibility and replicability), R packages have version numbers.\nWhen we run sessionInfo(), we can see an example of this, with the version number appearing after the package name.\n\n\n\n\n\n\n\nsessionInfo()\n\nR version 4.5.2 (2025-10-31)\nPlatform: x86_64-pc-linux-gnu\nRunning under: Debian GNU/Linux 13 (trixie)\n\nMatrix products: default\nBLAS: /usr/lib/x86_64-linux-gnu/blas/libblas.so.3.12.1 \nLAPACK: /usr/lib/x86_64-linux-gnu/lapack/liblapack.so.3.12.1; LAPACK version 3.12.0\n\nlocale:\n [1] LC_CTYPE=en_US.UTF-8 LC_NUMERIC=C \n [3] LC_TIME=en_US.UTF-8 LC_COLLATE=en_US.UTF-8 \n [5] LC_MONETARY=en_US.UTF-8 LC_MESSAGES=en_US.UTF-8 \n [7] LC_PAPER=en_US.UTF-8 LC_NAME=C \n [9] LC_ADDRESS=C LC_TELEPHONE=C \n[11] LC_MEASUREMENT=en_US.UTF-8 LC_IDENTIFICATION=C \n\ntime zone: America/New_York\ntzcode source: system (glibc)\n\nattached base packages:\n[1] stats graphics grDevices utils datasets methods base \n\nother attached packages:\n[1] PeacoQC_1.20.0 BiocManager_1.30.27\n\nloaded via a namespace (and not attached):\n [1] generics_0.1.4 shape_1.4.6.1 digest_0.6.39 \n [4] magrittr_2.0.4 evaluate_1.0.5 grid_4.5.2 \n [7] RColorBrewer_1.1-3 iterators_1.0.14 circlize_0.4.17 \n[10] fastmap_1.2.0 foreach_1.5.2 doParallel_1.0.17 \n[13] jsonlite_2.0.0 graph_1.88.1 GlobalOptions_0.1.3 \n[16] ComplexHeatmap_2.26.0 flowWorkspace_4.22.1 scales_1.4.0 \n[19] XML_3.99-0.20 Rgraphviz_2.54.0 codetools_0.2-20 \n[22] cli_3.6.5 RProtoBufLib_2.22.0 rlang_1.1.7 \n[25] crayon_1.5.3 Biobase_2.70.0 yaml_2.3.12 \n[28] otel_0.2.0 cytolib_2.22.0 ncdfFlow_2.56.0 \n[31] tools_4.5.2 parallel_4.5.2 dplyr_1.2.0 \n[34] colorspace_2.1-2 ggplot2_4.0.2 GetoptLong_1.1.0 \n[37] BiocGenerics_0.56.0 vctrs_0.7.1 R6_2.6.1 \n[40] png_0.1-8 matrixStats_1.5.0 stats4_4.5.2 \n[43] lifecycle_1.0.5 flowCore_2.22.1 S4Vectors_0.48.0 \n[46] IRanges_2.44.0 clue_0.3-66 cluster_2.1.8.1 \n[49] pkgconfig_2.0.3 pillar_1.11.1 gtable_0.3.6 \n[52] data.table_1.18.2.1 
glue_1.8.0 xfun_0.56 \n[55] tibble_3.3.1 tidyselect_1.2.1 knitr_1.51 \n[58] farver_2.1.2 rjson_0.2.23 htmltools_0.5.9 \n[61] rmarkdown_2.30 compiler_4.5.2 S7_0.2.1" - }, - { - "objectID": "course/01_InstallingRPackages/slides.html#documentation-and-websites", - "href": "course/01_InstallingRPackages/slides.html#documentation-and-websites", - "title": "01 - Installing R Packages", - "section": "Documentation and Websites", - "text": "Documentation and Websites\n\n\n\n\n\n\n\n\n.\n\n\nWe have already seen a couple ways to access the help documentation contained within an R package via Positron. Beyond internal documentation, R packages often have external websites that contain additional walk-through articles (ie. vignettes) to better document how to use the package.\nFor CRAN-based packages, we can start off by searching for the package name. So, in the case of dplyr" - }, - { - "objectID": "course/01_InstallingRPackages/index.html", - "href": "course/01_InstallingRPackages/index.html", - "title": "01 - Installing R Packages", - "section": "", - "text": "For the YouTube livestream recording, see here\nFor screen-shot slides, click here\nWelcome to the first week of Cytometry in R! This week we will be diving into how R packages work, and the how to go about installing them.\nBefore getting started, please make sure you have completed the creating a GitHub and Workstation Setup walk-throughs, since we will begin where they left off once the required software was successfully installed.", + "objectID": "course/02_FilePaths/index.html#file-paths", + "href": "course/02_FilePaths/index.html#file-paths", + "title": "02 - File Paths", + "section": "File Paths", + "text": "File Paths\nOne way we can do this is through a file.path argument. 
We could potentially provide this by adding either a / or a \\ into the path argument, depending on your computer’s operating system.\n\nlist.files(path=\"data/target\", full.names=FALSE, recursive=FALSE)\n\n\n\nWhile this works in your particular context, if you are sharing the code with others who have a different operating system, these hard-coded “/” or “\\” will cause the code for them to error out at these particular steps.\n\nFor that reason, it is better to assemble a file.path using the file.path() function. This function takes into account the operating system, removing the need to worry about this particular computing nuance, and letting you write code that is reproducible and replicable for everyone.\n\nFolderLocation <- file.path(\"data\", \"target\")\nFolderLocation\n\n[1] \"data/target\"\n\n\n\nlist.files(path=FolderLocation, full.names=FALSE, recursive=FALSE)\n\n\n\n\nWe can also append additional locations to existing file paths, by including the variable name within the file.path() we are creating.\n\nFolderLocation <- \"data\"\nScreenshotFolder <- file.path(FolderLocation, \"target\")\nScreenshotFolder\n\n[1] \"data/target\"\n\n\n\nlist.files(path=ScreenshotFolder, full.names=FALSE, recursive=FALSE)\n\n\n\n\nAdditionally, list.files() has the ability to filter for files that contain a particular character string. This can be useful if we are searching for “.fcs” or “.csv” files, but also for files that contain a particular word. 
In the case of the ScreenshotFolders\n\nlist.files(path=ScreenshotFolder, pattern=\"ND050\", full.names=FALSE, recursive=FALSE)\n\n\nYou will notice the index numbers are in the context of what is filtered, not all the folder contents.",
    "crumbs": [
      "About",
      "Intro to R",
      "02 - File Paths"
    ]
  },
  {
    "objectID": "course/01_InstallingRPackages/index.html#set-up",
    "href": "course/01_InstallingRPackages/index.html#set-up",
    "title": "01 - Installing R Packages",
    "section": "Set Up",
    "text": "Set Up\nAlright, with the background out of the way, let’s get started!\n\n\n\n\n\n\nImportant\n\n\n\nPlease make sure to sync your forked version of the CytometryInR repository, and pull any changes to your local computer’s CytometryInR project folder so that you have the most recent version of the code and data available.\n\n\n\n\n\n\n\n\nWarning\n\n\n\nPlease remember to always copy over the new material from your local CytometryInR folder to a separate Project Folder that you created and named (ex. “Week_01” or “MyLearningFolder”, etc.). This will ensure any edits you make to the files do not affect your ability to bring in next week’s materials to the CytometryInR folder.\n\n\n\n\nAfter pulling the new data and code locally, open CytometryInR, open the course folder, and open the 01_InstallingRPackages folder. From here, copy the index.qmd file to your own working Project Folder (ex. Week_01) where you can work on it without causing any conflicts. Then return to Positron and open up your working project folder (ex. Week_01).\n\n\n\nNext up, within Positron, let’s make sure to select R as the coding language being used for this session.\n\n\n\nNow that R is running within Positron, the console (lower portion of the screen) is now able to run (ie. 
execute) any R code that is sent to it.",
    "crumbs": [
      "About",
      "Intro to R"
    ]
  },
  {
    "objectID": "course/02_FilePaths/index.html#selecting-for-patterns",
    "href": "course/02_FilePaths/index.html#selecting-for-patterns",
    "title": "02 - File Paths",
    "section": "Selecting for Patterns",
    "text": "Selecting for Patterns\nIf we currently list the files within data, we get a return that looks like this:\n\nlist.files(\"data\", full.names=FALSE, recursive=FALSE)\n\n\n\n\nAs you can see, we are getting back both folders and individual .fcs files. We could consequently change the pattern to provide a character string that will only return the .fcs files. We will go ahead and assign this list to a variable named files, for later retrieval.\n\nfiles <- list.files(\"data\", pattern=\".fcs\", full.names=FALSE, recursive=FALSE)\nfiles\n\n\n\n\nOne of the R packages we will be using throughout the course is the stringr package. It contains two functions that can be useful when identifying more complicated character strings. In this case, if we run the str_detect() function to identify which of the .fcs files within the files variable contains the “INF” character string, we get a vector of logical (ie. TRUE or FALSE) outputs corresponding to each file.\n\n# install.packages(\"stringr\") # CRAN\nlibrary(stringr)\n\n\nstr_detect(files, \"INF\")\n\n\n\n\nSimilar to how we indexed the Fluorophore list (ex. 
Fluorophore[1:2]) which returned a subset, we can similarly use this logical vector to subset files that returned as TRUE for containing the pattern “INF”\n\nfiles[str_detect(files, \"INF\")]\n\n\n\n\nLet’s go ahead and save these subsetted file names to a new variable, called Infants.\n\nInfants <- files[str_detect(files, \"INF\")]", "crumbs": [ "About", - "Intro to R" + "Intro to R", + "02 - File Paths" ] }, { - "objectID": "course/01_InstallingRPackages/index.html#checking-for-loaded-packages", - "href": "course/01_InstallingRPackages/index.html#checking-for-loaded-packages", - "title": "01 - Installing R Packages", - "section": "Checking for Loaded Packages", - "text": "Checking for Loaded Packages\nFor the contents (ie. the functions) of an R package to be available for your computer to use, they must first be activated (ie. loaded) into your local environment. We will first learn how to check what R packages are currently active.\n\n\n\nAccessing Help Documentation\nWithin your own index.qmd (or a new .qmd file that you created), type out/copy-paste the following sessionInfo() function:\n\nsessionInfo()\n\n\n\nIf you hover over the line of code within Positron, you will glimpse the help file for the particular function you are hovering over.\n\n\n\nIn this case, we can see the help documentation corresponding for sessionInfo(). Beyond hovering over the function, this can also be accessed by adding a ? directly in front of the function, and then running that line of code.\n\n?sessionInfo()\n\n\n\n\nWhen executed, the function’s help file documentation will open up within the Help tab in the secondary side bar on the right-side of the screen. Glancing at the top of the page we can see the name of the package that contains the sessionInfo() function ({utils}). 
Scrolling down the help page past all the documentation, we can see a link to the index page.\n\n\n\nAfter clicking, the Help tab switches from viewing the documentation for the sessionInfo() function, to showing all the functions within the utils package. Most R packages contain help documentation, so this process can be adapted to find out additional information about what a function does, and what arguments are needed to produce customized outputs.\n\n\n\n\n\nsessionInfo()\nWithin your .qmd file, let’s go ahead and run the following code-block:\n\nsessionInfo()\n\nR version 4.5.2 (2025-10-31)\nPlatform: x86_64-pc-linux-gnu\nRunning under: Debian GNU/Linux 13 (trixie)\n\nMatrix products: default\nBLAS: /usr/lib/x86_64-linux-gnu/blas/libblas.so.3.12.1 \nLAPACK: /usr/lib/x86_64-linux-gnu/lapack/liblapack.so.3.12.1; LAPACK version 3.12.0\n\nlocale:\n [1] LC_CTYPE=en_US.UTF-8 LC_NUMERIC=C \n [3] LC_TIME=en_US.UTF-8 LC_COLLATE=en_US.UTF-8 \n [5] LC_MONETARY=en_US.UTF-8 LC_MESSAGES=en_US.UTF-8 \n [7] LC_PAPER=en_US.UTF-8 LC_NAME=C \n [9] LC_ADDRESS=C LC_TELEPHONE=C \n[11] LC_MEASUREMENT=en_US.UTF-8 LC_IDENTIFICATION=C \n\ntime zone: America/New_York\ntzcode source: system (glibc)\n\nattached base packages:\n[1] stats graphics grDevices utils datasets methods base \n\nother attached packages:\n[1] BiocStyle_2.38.0\n\nloaded via a namespace (and not attached):\n [1] htmlwidgets_1.6.4 BiocManager_1.30.27 compiler_4.5.2 \n [4] fastmap_1.2.0 cli_3.6.5 tools_4.5.2 \n [7] htmltools_0.5.9 otel_0.2.0 yaml_2.3.12 \n[10] rmarkdown_2.30 knitr_1.51 jsonlite_2.0.0 \n[13] xfun_0.56 digest_0.6.39 rlang_1.1.7 \n[16] evaluate_1.0.5 \n\n\n\n\nThe outputs that get returned by sessionInfo() will vary a bit depending on your computer’s operating system, and the version of R you have installed.\nFor today, let’s focus on the last two elements of the output:\n\n\n\nThe R software itself is made up of several base R packages, that are loaded automatically. 
These contain everything you need to read, write and run R code on your computer. You can see these packages are the stats, graphics, grDevices, utils, datasets, methods and base packages.\nAs we install additional R packages and load them using the library() function throughout this session, sporadically re-run sessionInfo() to see how this list of R packages changes.", + "objectID": "course/02_FilePaths/index.html#conditionals", + "href": "course/02_FilePaths/index.html#conditionals", + "title": "02 - File Paths", + "section": "Conditionals", + "text": "Conditionals\nOne useful thing is that within R, we can set conditions on whether something is carried out. The most typical conditionals you will encounter are “If” statements. They typically take a form that resembles the following.\n\nNeedCoffee <- TRUE\n\nif (NeedCoffee){\n print(\"Take a break\")\n}\n\n\n\nIn the case of the above, if the variable within the () is TRUE, the code within the {} will be executed.\n\nNeedCoffee <- TRUE\n\nif (NeedCoffee){\n print(\"Take a break\")\n}\n\n[1] \"Take a break\"\n\n\n\nBy contrast, when the variable within the () is FALSE, the code within the {} will not be executed.\n\nNeedCoffee <- FALSE\n\nif (NeedCoffee){\n print(\"Take a break\")\n}\n\n\n\nThese “If” statements will trigger as long as the specified condition within the () is TRUE. For a different example:\n\nRowNumber <- 299\n2 + RowNumber > 300\n\n[1] TRUE\n\n\n\nif (2 + RowNumber > 3){\n print(\"Stop Iterating\")\n}\n\n[1] \"Stop Iterating\"\n\n\n\n\nWhen you add an ! 
in front of a conditional, it flips the expected outcome.\n\nItsRaining <- TRUE\n\nif (ItsRaining){print(\"Bring an Umbrella\")}\n\n[1] \"Bring an Umbrella\"\n\n\n\n!ItsRaining\n\n[1] FALSE\n\n\n\nif (!ItsRaining){print(\"Bring an Umbrella\")}\n\n\nItsRaining <- TRUE\n\nif (!ItsRaining){print(\"Bring Sunglasses\")}\n\n\n\nWe will explore more complicated conditionals throughout the course, but for now, let’s implement a couple simple ones in the context of copying the .fcs files in Infants over to a new target3 folder.", "crumbs": [ "About", - "Intro to R" + "Intro to R", + "02 - File Paths" ] }, { - "objectID": "course/01_InstallingRPackages/index.html#installing-from-cran", - "href": "course/01_InstallingRPackages/index.html#installing-from-cran", - "title": "01 - Installing R Packages", - "section": "Installing from CRAN", - "text": "Installing from CRAN\nWe will start by installing R packages that are part of the CRAN repository. This is the main R package repository, and part of the broader R software project. In the context of this course, R packages that work primarily with general data structures (rows, columns, matrices, etc.) or visualizations will predominantly be found within this repository.\nThese include the tidyverse packages. These packages have collectively made R easier to use by smoothing out some of the rough edges of base R, which is why R has seen major growth within the last decade. We will be installing several of these R packages today.\n\n\n\ndplyr\nThe first R package we will install during this session is the dplyr package. 
Since it is hosted on the CRAN repository, to install it, we will need to use the CRAN-specific installation function install.packages().\n\n?install.packages()\n\n\n\n\nFor the install.packages() function, we place within the () the name of the R package from CRAN that we wish to install.\n\ninstall.packages(\"dplyr\")\n\n\n\n\n\n\n\n\n\nTip\n\n\n\nA common struggle point for beginners is that install.packages() requires ” ” to be placed around the package name. Forgetting them results in the error that we see below.\n\n\n\ninstall.packages(dplyr)\n\nError:\n! object 'dplyr' not found\n\n\n\n\n\ninstall.packages(\"dplyr\")\n\nGo ahead and click on “Run Cell” next to your code-block, to install the dplyr R package.\n\n\nWhen a package starts to install, you will see your console start to display text resembling that seen in the image below (varying a bit depending on your computer’s operating system).\n\n\n\nWithin this opening scrawl, you will see the location on your computer the R package is being installed to, as well as the file location for the R package being retrieved on CRAN.\nIf the package is successfully located, your computer will proceed to first download, then unpack (ie. unzip) it, before attempting to install to the target folder.\n\n\n\nThe final steps of the installation process involve various checks to verify that everything was copied successfully, that the help documentation was assembled, and that the R package is capable of being loaded. If this is the case, you will see the “Done” line appear, as well as a mention of where the original downloaded source package file has been stashed (usually a temp folder).\n\n\n\n\nAttaching packages via library()\nIf an R package has been installed successfully, we are now able to load it (ie. 
make its functions available) to our local environment using the library() function.\n\n?library()\n\n\n\nUnlike install.packages(), where we needed “” around the package name, the library() function does not require “” around the package name. Let’s go ahead and load in dplyr, making its respective functions available to our local environment.\n\nlibrary(dplyr)\n\n\nAttaching package: 'dplyr'\n\n\nThe following objects are masked from 'package:stats':\n\n filter, lag\n\n\nThe following objects are masked from 'package:base':\n\n intersect, setdiff, setequal, union\n\n\n\n\nFrom the output, we can see that dplyr has been attached. There are also a couple of functions within dplyr that have identical names to functions within the stats and base packages. To avoid confusion, these 6 functions are masked, which is why we get the additional messages.\nWith dplyr now loaded via the library() call, let’s check sessionInfo() to see what has changed.\n\nsessionInfo()\n\nR version 4.5.2 (2025-10-31)\nPlatform: x86_64-pc-linux-gnu\nRunning under: Debian GNU/Linux 13 (trixie)\n\nMatrix products: default\nBLAS: /usr/lib/x86_64-linux-gnu/blas/libblas.so.3.12.1 \nLAPACK: /usr/lib/x86_64-linux-gnu/lapack/liblapack.so.3.12.1; LAPACK version 3.12.0\n\nlocale:\n [1] LC_CTYPE=en_US.UTF-8 LC_NUMERIC=C \n [3] LC_TIME=en_US.UTF-8 LC_COLLATE=en_US.UTF-8 \n [5] LC_MONETARY=en_US.UTF-8 LC_MESSAGES=en_US.UTF-8 \n [7] LC_PAPER=en_US.UTF-8 LC_NAME=C \n [9] LC_ADDRESS=C LC_TELEPHONE=C \n[11] LC_MEASUREMENT=en_US.UTF-8 LC_IDENTIFICATION=C \n\ntime zone: America/New_York\ntzcode source: system (glibc)\n\nattached base packages:\n[1] stats graphics grDevices utils datasets methods base \n\nother attached packages:\n[1] dplyr_1.2.0 BiocStyle_2.38.0\n\nloaded via a namespace (and not attached):\n [1] digest_0.6.39 R6_2.6.1 fastmap_1.2.0 \n [4] tidyselect_1.2.1 xfun_0.56 magrittr_2.0.4 \n [7] glue_1.8.0 tibble_3.3.1 knitr_1.51 \n[10] pkgconfig_2.0.3 htmltools_0.5.9 generics_0.1.4 \n[13] rmarkdown_2.30 
lifecycle_1.0.5 cli_3.6.5 \n[16] vctrs_0.7.1 compiler_4.5.2 tools_4.5.2 \n[19] evaluate_1.0.5 pillar_1.11.1 yaml_2.3.12 \n[22] otel_0.2.0 BiocManager_1.30.27 rlang_1.1.7 \n[25] jsonlite_2.0.0 htmlwidgets_1.6.4 \n\n\n\n\nSimilar to what was seen for the base R packages, dplyr is now attached. This means we should theoretically now have access to all its functions. We can verify this by seeing if we can look up the dplyr package’s select() function and its respective help page.\n\n?select\n\n\n\n\nSince its parent package has been attached to our local environment (via the library() call), we can see dplyr functions appear as suggestions as we begin to type.\nBy contrast, if we check for the ggplot() function from the ggplot2 package (which we haven’t yet installed or attached via library()), no suggestions will pop up.\n\n?ggplot\n\nNo documentation for 'ggplot' in specified packages and libraries:\nyou could try '??ggplot'\n\n\n\n\nBeyond individual functions, some R packages also have help landing pages that can be similarly accessed by adding a ? in front of the package name:\n\n\n\n\n\nUnattaching\nSo far, we have installed an R package, and then attached it (via library()). How would we reverse these steps?\nWell, to unload it from the local environment, there are a couple of options. You could of course simply shut down Positron. The local environment only exists for the duration of an open session, which closing the program ends. 
All previously loaded R packages will be unattached, which is why when you start a new session you will need to load in all packages you plan on using via library().\nAlternatively, although less used, you could detach() it via your console:\n\ndetach(\"package:dplyr\", unload=TRUE)\n\n\nsessionInfo()\n\nR version 4.5.2 (2025-10-31)\nPlatform: x86_64-pc-linux-gnu\nRunning under: Debian GNU/Linux 13 (trixie)\n\nMatrix products: default\nBLAS: /usr/lib/x86_64-linux-gnu/blas/libblas.so.3.12.1 \nLAPACK: /usr/lib/x86_64-linux-gnu/lapack/liblapack.so.3.12.1; LAPACK version 3.12.0\n\nlocale:\n [1] LC_CTYPE=en_US.UTF-8 LC_NUMERIC=C \n [3] LC_TIME=en_US.UTF-8 LC_COLLATE=en_US.UTF-8 \n [5] LC_MONETARY=en_US.UTF-8 LC_MESSAGES=en_US.UTF-8 \n [7] LC_PAPER=en_US.UTF-8 LC_NAME=C \n [9] LC_ADDRESS=C LC_TELEPHONE=C \n[11] LC_MEASUREMENT=en_US.UTF-8 LC_IDENTIFICATION=C \n\ntime zone: America/New_York\ntzcode source: system (glibc)\n\nattached base packages:\n[1] stats graphics grDevices utils datasets methods base \n\nother attached packages:\n[1] BiocStyle_2.38.0\n\nloaded via a namespace (and not attached):\n [1] digest_0.6.39 R6_2.6.1 fastmap_1.2.0 \n [4] tidyselect_1.2.1 xfun_0.56 magrittr_2.0.4 \n [7] glue_1.8.0 tibble_3.3.1 knitr_1.51 \n[10] pkgconfig_2.0.3 htmltools_0.5.9 generics_0.1.4 \n[13] rmarkdown_2.30 lifecycle_1.0.5 cli_3.6.5 \n[16] vctrs_0.7.1 compiler_4.5.2 tools_4.5.2 \n[19] evaluate_1.0.5 pillar_1.11.1 yaml_2.3.12 \n[22] otel_0.2.0 BiocManager_1.30.27 rlang_1.1.7 \n[25] jsonlite_2.0.0 htmlwidgets_1.6.4 \n\n\n\n\nLooking at the sessionInfo() output, dplyr is no longer attached to the local environment. Consequently, if we try to once again look for the documentation, no information will be retrieved.\n\n?dplyr\n\nNo documentation for 'dplyr' in specified packages and libraries:\nyou could try '??dplyr'\n\n\n\n\nThere is a workaround however, if we want to access functions from an unloaded R package. 
We can specify the R package’s name, followed by two colons (::), and then the function name. The :: conveys to your computer that the package is present, but may not be attached.\n\n?dplyr::select()\n\nThis particular use case can be useful if we want to run a particular function without loading in all of a package’s functions (which may have identical function names to other R packages we are using and cause some conflicts).\n\n\n\n\nRemoving Packages\nJust as we can install an R package, we can also uninstall an R package (although doing so is rare, most often when encountering a package dependency conflict). To do so, we would use the remove.packages() function.\n\n?remove.packages()\n\n\nremove.packages(\"dplyr\")\n\nThis results in the package being removed entirely from our computer. We would then need to reinstall it if needed in the future.\n\n\n\n\nCommon Issues\nAs previously mentioned, CRAN is the main repository for R packages. But what if we tried to install an R package that is only found on Bioconductor or via GitHub using the install.packages() function?\nTo see what occurs, let’s try installing the PeacoQC package (which is found on Bioconductor).\n\ninstall.packages(\"PeacoQC\")\n\nInstalling package into '/home/david/R/x86_64-pc-linux-gnu-library/4.5'\n(as 'lib' is unspecified)\n\n\nWarning: package 'PeacoQC' is not available for this version of R\n\nA version of this package for your version of R might be available elsewhere,\nsee the ideas at\nhttps://cran.r-project.org/doc/manuals/r-patched/R-admin.html#Installing-packages\n\n\n\n\nAs you can see, the initial warning message suggests that PeacoQC is not available for your version of R. 
When I first started trying to learn R on my own during COVID, this particular message was the bane of my existence and I couldn’t figure out what was going on.\nThis is just a default warning message: it appears both when a package has a genuine version mismatch with your R installation, and when trying to install packages that are not found on CRAN.", + "objectID": "course/02_FilePaths/index.html#conditionals-in-practice", + "href": "course/02_FilePaths/index.html#conditionals-in-practice", + "title": "02 - File Paths", + "section": "Conditionals in practice", + "text": "Conditionals in practice\nFirst off, let’s write a conditional to check if there is a target3 folder within data.\n\nfiles_present <- list.files(\"data\", full.names=FALSE, recursive=FALSE)\nfiles_present\n\n\n\n\n\nFolderTarget3 <- file.path(\"data\", \"target3\")\ndir.exists(FolderTarget3)\n\n\n\n\nWe can write a conditional to create a folder if one does not yet exist.\n\nif (!dir.exists(FolderTarget3)){\n dir.create(FolderTarget3)\n}", "crumbs": [ "About", - "Intro to R" + "Intro to R", + "02 - File Paths" ] }, { - "objectID": "course/01_InstallingRPackages/index.html#installing-from-bioconductor", - "href": "course/01_InstallingRPackages/index.html#installing-from-bioconductor", - "title": "01 - Installing R Packages", - "section": "Installing from Bioconductor", - "text": "Installing from Bioconductor\nBioconductor is the second R package repository we will be working with throughout the course. While it contains far fewer packages than CRAN, it contains packages that are primarily used by the biomedical sciences. Following this link you can find its current flow and mass cytometry R packages.\nBioconductor R packages differ from CRAN R packages in a couple of ways. Bioconductor has different standards for acceptance than CRAN. 
They usually contain interoperable object-types, put more effort into documentation, and undergo continuous testing to ensure that the R package remains functional across operating systems.\n\n\nTo install an R package that is located on Bioconductor, we first need to install the BiocManager package from CRAN. This package will allow us to install Bioconductor packages from their respective repository.\n\ninstall.packages(\"BiocManager\")\n\n\n\nOnce BiocManager is installed, we can attach it to our local environment using the library() function.\n\nlibrary(BiocManager)\n\n\n\nWhen loaded, you will see an output showing the current Bioconductor and R versions.\nWe can then use BiocManager’s install() function to go back and install PeacoQC.\n\n\n\n\n\n\nTip\n\n\n\nAs always, don’t forget the “” when running an install() command.\n\n\n\n?install()\n\n\ninstall(\"PeacoQC\")\n\n\n\nWe see a similar opening sequence of installation steps to what we saw when installing the dplyr package from CRAN. However, in this case, several package dependencies (rjson, GlobalOptions, etc.) are present. Consequently, you can see these packages are also being downloaded from their respective repositories (either CRAN or Bioconductor), then unzipped and assembled before PeacoQC undergoes installation.\n\n\n\n\n\n\nNote\n\n\n\nBehind the scenes, within an R package, what package dependencies need to be installed are specified through the Description and Namespace files. 
If a package name is removed from these files, it will not be installed during the installation process.\n\n\n\n\n\nWithin the scrawl of installation outputs, we can see individual dependencies undergoing installation similar to what we saw with dplyr, with a “Done (packagename)” being printed upon successful installation.\n\n\n\nThis process continues for each dependency being installed.\n\n\n\nAnd finally, once all the dependencies are installed, PeacoQC starts to install.\n\n\n\nOccasionally, during installation, you will see a pop-up window like this one in the console. This lets you know that some of the package dependencies have newer updated versions that are available to download. We are prompted to select between updating all, some or none. You will need to specify via the console how you want to proceed, by typing and entering one of the suggested options [a/s/n].\n\n\n\nAlternatively, you may encounter a pop-up that resembles this one. Unlike the a/s/n output, we would need to provide a number for our intended choice. In this case, I went ahead and skipped all updates by typing 3 into the console, then hitting enter.\n\n\n\nGenerally, it’s okay to update if you have the time. Updates generally consist of minor improvements or bug fixes that shouldn’t cause major issues. 
If you are short on time, you can go ahead and skip the updates by entering the value (n) for the none option.\n\n\n\nWith PeacoQC installed, we can load it via the library() call.\n\n\n\n\n\n\nTip\n\n\n\nRemember, library() doesn’t require ” ”\n\n\n\nlibrary(PeacoQC)\n\n\n\nAnd we can check to see if it has been attached to the local environment.\n\nsessionInfo()\n\nR version 4.5.2 (2025-10-31)\nPlatform: x86_64-pc-linux-gnu\nRunning under: Debian GNU/Linux 13 (trixie)\n\nMatrix products: default\nBLAS: /usr/lib/x86_64-linux-gnu/blas/libblas.so.3.12.1 \nLAPACK: /usr/lib/x86_64-linux-gnu/lapack/liblapack.so.3.12.1; LAPACK version 3.12.0\n\nlocale:\n [1] LC_CTYPE=en_US.UTF-8 LC_NUMERIC=C \n [3] LC_TIME=en_US.UTF-8 LC_COLLATE=en_US.UTF-8 \n [5] LC_MONETARY=en_US.UTF-8 LC_MESSAGES=en_US.UTF-8 \n [7] LC_PAPER=en_US.UTF-8 LC_NAME=C \n [9] LC_ADDRESS=C LC_TELEPHONE=C \n[11] LC_MEASUREMENT=en_US.UTF-8 LC_IDENTIFICATION=C \n\ntime zone: America/New_York\ntzcode source: system (glibc)\n\nattached base packages:\n[1] stats graphics grDevices utils datasets methods base \n\nother attached packages:\n[1] PeacoQC_1.20.0 BiocManager_1.30.27 BiocStyle_2.38.0 \n\nloaded via a namespace (and not attached):\n [1] generics_0.1.4 shape_1.4.6.1 digest_0.6.39 \n [4] magrittr_2.0.4 evaluate_1.0.5 grid_4.5.2 \n [7] RColorBrewer_1.1-3 iterators_1.0.14 circlize_0.4.17 \n[10] fastmap_1.2.0 foreach_1.5.2 doParallel_1.0.17 \n[13] jsonlite_2.0.0 graph_1.88.1 GlobalOptions_0.1.3 \n[16] ComplexHeatmap_2.26.0 flowWorkspace_4.22.1 scales_1.4.0 \n[19] XML_3.99-0.20 Rgraphviz_2.54.0 codetools_0.2-20 \n[22] cli_3.6.5 RProtoBufLib_2.22.0 rlang_1.1.7 \n[25] crayon_1.5.3 Biobase_2.70.0 yaml_2.3.12 \n[28] otel_0.2.0 cytolib_2.22.0 ncdfFlow_2.56.0 \n[31] tools_4.5.2 parallel_4.5.2 dplyr_1.2.0 \n[34] colorspace_2.1-2 ggplot2_4.0.2 GetoptLong_1.1.0 \n[37] BiocGenerics_0.56.0 vctrs_0.7.1 R6_2.6.1 \n[40] png_0.1-8 matrixStats_1.5.0 stats4_4.5.2 \n[43] lifecycle_1.0.5 flowCore_2.22.1 
S4Vectors_0.48.0 \n[46] htmlwidgets_1.6.4 IRanges_2.44.0 clue_0.3-66 \n[49] cluster_2.1.8.1 pkgconfig_2.0.3 pillar_1.11.1 \n[52] gtable_0.3.6 data.table_1.18.2.1 glue_1.8.0 \n[55] xfun_0.56 tibble_3.3.1 tidyselect_1.2.1 \n[58] knitr_1.51 farver_2.1.2 rjson_0.2.23 \n[61] htmltools_0.5.9 rmarkdown_2.30 compiler_4.5.2 \n[64] S7_0.2.1 \n\n\n\n\nAs you may have noticed, the section of loaded via namespace (but not attached) packages has grown larger. These packages are dependencies for the attached packages (dplyr, BiocManager and PeacoQC). Since the functions within these dependencies are only used selectively by the attached packages, they do not need to be active within the local environment.\n\n\n\nTo see what packages are installed (but not yet loaded), we can use the installed.packages() function to return a list of R packages for your computer.\n\ninstalled.packages()", + "objectID": "course/02_FilePaths/index.html#copying-files", + "href": "course/02_FilePaths/index.html#copying-files", + "title": "02 - File Paths", + "section": "Copying Files", + "text": "Copying Files\nTo copy files to another folder location, we use the file.copy() function. It has two arguments that we will be working with, from being the .fcs files, and to being the folder location we wish to transfer them to. If we tried using them as we currently have them:\n\n# Variable Infants containing 4 .fcs file names\n\nfile.copy(from=Infants, to=FolderTarget3)\n\n\n\n\nThe reason for this error is we are only working with a partial file path, as viewed from our Working directory. 
In this case, what is needed is the full file path, which should also include the upstream folders from your current working directory.\n\ngetwd()\n\n\n\n\nIn this case, we can update the .fcs files’ location by switching the full.names argument within list.files() from FALSE to TRUE.\n\nfiles_present <- list.files(\"data\", full.names=TRUE, recursive=FALSE)\nfiles_present\n\n\nAnd filter again for those containing “INF”\n\nInfants <- files_present[str_detect(files_present, \"INF\")]\n\nAnd then try again:\n\n# Variable Infants containing 4 .fcs file names\n\nfile.copy(from=Infants, to=FolderTarget3)", "crumbs": [ "About", - "Intro to R" + "Intro to R", + "02 - File Paths" ] }, { - "objectID": "course/01_InstallingRPackages/index.html#install-from-github", - "href": "course/01_InstallingRPackages/index.html#install-from-github", - "title": "01 - Installing R Packages", - "section": "Install from GitHub", - "text": "Install from GitHub\nIn addition to the CRAN and Bioconductor repositories, individual R packages can also be found on GitHub, hosted on their respective developers’ GitHub accounts. Newer packages that are still being worked on (often in the process of submission to CRAN or Bioconductor) can be found here, as well as those where the author decided not to bother with a review process, and just made the packages immediately available, warts and all.\n\n\nWhile many gems of R packages can be found on GitHub, there are also a number of R packages that have stopped working due to deprecation since they were published and released. This is often the case for R packages that are not maintained, which is why it’s useful to check the commits and issues pages to see when the last contribution occurred. We will take a closer look at how to do so later on.\n\n\nTo install packages from GitHub, you will need the remotes package, which can be found on CRAN.\n\n\n\n\n\n\nSpot Check #1\n\n\n\nTo install a package from CRAN, what function would you use? 
Click on the code-fold arrow below to reveal the answer.\n\n\n\n\nCode\ninstall.packages(\"remotes\")\n\n\n\n\nWith the remotes package now installed, we can attach it to our local environment.\n\n\n\n\n\n\nSpot Check #2\n\n\n\nWhat function would be used to do so?\n\n\n\n\nCode\nlibrary(remotes)\n\n\n\n\nAnd finally, we can use the install_github() function to download R packages from an individual developer’s GitHub account.\n\n\n\n\n\n\nSpot Check #3\n\n\n\nHow would you look up the help documentation for this function?\n\n\n\n\nCode\n# Either by hovering over it within Positron or via\n\n?install_github()\n\n\n\n\nWe will be installing a small R package, flowSpectrum, for this example. It’s one of the packages created by Christopher Hall, whose small series of Flow Cytometry Data Analysis in R tutorials were immensely useful when I was first getting started learning R. flowSpectrum can be used to generate spectrum-style plots for spectral flow cytometry data.\n\n\n\nTo install an R package from GitHub, we first need the GitHub username (so hally166 in this case), which is followed by a “/”, and then the name of the package repository (so flowSpectrum in this case). Our code should consequently be:\n\ninstall_github(\"hally166/flowSpectrum\")\n\n\n\nWhen installing from GitHub, the opening installation scrawl will look different. Unlike R packages from CRAN or Bioconductor, which are usually shipped in an assembled binary format, R packages from GitHub start off as source code. So the first steps shown in the scrawl are the process of converting them to binary before proceeding.\nThis process of building R packages from source code is one of the reasons we needed to install Rtools (for Windows users) or Xcode Developer Tools (for MacOS) for this course. 
We will look at this topic in greater depth later in the course when we talk about creating R packages.", + "objectID": "course/02_FilePaths/index.html#removing-files.", + "href": "course/02_FilePaths/index.html#removing-files.", + "title": "02 - File Paths", + "section": "Removing files.", + "text": "Removing files.\nJust like we can add files via R, we can also remove them. However, when we remove them via this route, they are removed permanently, not sent to the recycle bin. We will revisit this later on in the course after you have gained additional experience with file paths.\n\n?unlink()", "crumbs": [ "About", - "Intro to R" + "Intro to R", + "02 - File Paths" ] }, { - "objectID": "course/01_InstallingRPackages/index.html#troubleshooting-install-errors", - "href": "course/01_InstallingRPackages/index.html#troubleshooting-install-errors", - "title": "01 - Installing R Packages", - "section": "Troubleshooting Install Errors", - "text": "Troubleshooting Install Errors\nWe have now installed three R packages: dplyr, PeacoQC, and flowSpectrum. In my case, I did not encounter any errors during the installation. However, sometimes a package installation will fail due to an error encountered during the installation process. This can be due to a number of reasons, ranging from a missing dependency to an update that caused a conflict. While these can occur for CRAN or Bioconductor packages, they are more frequently seen for GitHub packages where the Description/Namespace files may not have been fully updated yet to install all the required dependencies.\nWhen encountering an error, start off by reading through the message to see if you can parse any useful information about what package failed to install, and if it lists the missing dependency package’s name. The latter was the case in the error message example shown below.\n\n\n\nIf you encounter an installation error this week, please take screenshots of the error message and post them to this Discussion. 
This will help us troubleshoot your installation, as well as provide additional examples of installation errors that will be used to update this section in the future.", + "objectID": "course/02_FilePaths/index.html#basename", + "href": "course/02_FilePaths/index.html#basename", + "title": "02 - File Paths", + "section": "Basename", + "text": "Basename\nIf we look at Infants with full.names=TRUE, we see the additional pathing folder has been added, allowing us to successfully copy over the files.\n\nInfants\n\n\n\n\nIf we were trying to retrieve just the local file names from the full.names, we could do so with the basename() function. We will use this in combination with additional arguments later in the course.\n\nbasename(Infants)", "crumbs": [ "About", - "Intro to R" + "Intro to R", + "02 - File Paths" ] }, { - "objectID": "course/01_InstallingRPackages/index.html#installing-specific-package-versions", - "href": "course/01_InstallingRPackages/index.html#installing-specific-package-versions", - "title": "01 - Installing R Packages", - "section": "Installing Specific-Package Versions", - "text": "Installing Specific-Package Versions\nWhile we may be tempted to think of R packages as static, they change quite often, as their developers add new features, fix bugs, etc. 
To help keep track of these changes (essential for reproducibility and replicability), R packages have version numbers.\nWhen we run sessionInfo(), we can see an example of this, with the version number appearing after the package name.\n\nsessionInfo()\n\nR version 4.5.2 (2025-10-31)\nPlatform: x86_64-pc-linux-gnu\nRunning under: Debian GNU/Linux 13 (trixie)\n\nMatrix products: default\nBLAS: /usr/lib/x86_64-linux-gnu/blas/libblas.so.3.12.1 \nLAPACK: /usr/lib/x86_64-linux-gnu/lapack/liblapack.so.3.12.1; LAPACK version 3.12.0\n\nlocale:\n [1] LC_CTYPE=en_US.UTF-8 LC_NUMERIC=C \n [3] LC_TIME=en_US.UTF-8 LC_COLLATE=en_US.UTF-8 \n [5] LC_MONETARY=en_US.UTF-8 LC_MESSAGES=en_US.UTF-8 \n [7] LC_PAPER=en_US.UTF-8 LC_NAME=C \n [9] LC_ADDRESS=C LC_TELEPHONE=C \n[11] LC_MEASUREMENT=en_US.UTF-8 LC_IDENTIFICATION=C \n\ntime zone: America/New_York\ntzcode source: system (glibc)\n\nattached base packages:\n[1] stats graphics grDevices utils datasets methods base \n\nother attached packages:\n[1] PeacoQC_1.20.0 BiocManager_1.30.27 BiocStyle_2.38.0 \n\nloaded via a namespace (and not attached):\n [1] generics_0.1.4 shape_1.4.6.1 digest_0.6.39 \n [4] magrittr_2.0.4 evaluate_1.0.5 grid_4.5.2 \n [7] RColorBrewer_1.1-3 iterators_1.0.14 circlize_0.4.17 \n[10] fastmap_1.2.0 foreach_1.5.2 doParallel_1.0.17 \n[13] jsonlite_2.0.0 graph_1.88.1 GlobalOptions_0.1.3 \n[16] ComplexHeatmap_2.26.0 flowWorkspace_4.22.1 scales_1.4.0 \n[19] XML_3.99-0.20 Rgraphviz_2.54.0 codetools_0.2-20 \n[22] cli_3.6.5 RProtoBufLib_2.22.0 rlang_1.1.7 \n[25] crayon_1.5.3 Biobase_2.70.0 yaml_2.3.12 \n[28] otel_0.2.0 cytolib_2.22.0 ncdfFlow_2.56.0 \n[31] tools_4.5.2 parallel_4.5.2 dplyr_1.2.0 \n[34] colorspace_2.1-2 ggplot2_4.0.2 GetoptLong_1.1.0 \n[37] BiocGenerics_0.56.0 vctrs_0.7.1 R6_2.6.1 \n[40] png_0.1-8 matrixStats_1.5.0 stats4_4.5.2 \n[43] lifecycle_1.0.5 flowCore_2.22.1 S4Vectors_0.48.0 \n[46] htmlwidgets_1.6.4 IRanges_2.44.0 clue_0.3-66 \n[49] cluster_2.1.8.1 pkgconfig_2.0.3 pillar_1.11.1 \n[52] 
gtable_0.3.6 data.table_1.18.2.1 glue_1.8.0 \n[55] xfun_0.56 tibble_3.3.1 tidyselect_1.2.1 \n[58] knitr_1.51 farver_2.1.2 rjson_0.2.23 \n[61] htmltools_0.5.9 rmarkdown_2.30 compiler_4.5.2 \n[64] S7_0.2.1 \n\n\n\n\nAlternatively, we can retrieve the same information for individual packages via the packageVersion() function.\n\npackageVersion(\"PeacoQC\")\n\n[1] '1.20.0'\n\n\n\nAs well as from the citation() function.\n\ncitation(\"PeacoQC\")\n\nTo cite package 'PeacoQC' in publications use:\n\n Emmaneel A (2025). _PeacoQC: Peak-based selection of high quality\n cytometry data_. doi:10.18129/B9.bioc.PeacoQC\n <https://doi.org/10.18129/B9.bioc.PeacoQC>, R package version 1.20.0,\n <https://bioconductor.org/packages/PeacoQC>.\n\nA BibTeX entry for LaTeX users is\n\n @Manual{,\n title = {PeacoQC: Peak-based selection of high quality cytometry data},\n author = {Annelies Emmaneel},\n year = {2025},\n note = {R package version 1.20.0},\n url = {https://bioconductor.org/packages/PeacoQC},\n doi = {10.18129/B9.bioc.PeacoQC},\n }\n\n\n\n\nHow does a version number work? Let’s say we have the following version number: 1.20.0\nThe first number of the version (1. in this case) denotes major changes, primarily those after which the package may no longer be compatible with code written for the prior version. As a consequence, this number changes rarely.\nThe second number (.20. in this case) is the minor version. Minor changes typically consist of new features that are added that don’t affect the overall package function. These will change more frequently, especially for Bioconductor packages with fixed release cycles.\nThe final number (.0 in this case) is often used to denote small changes occurring within a minor release period, often bug-fixes or fixing typos within the documentation.\n\n\nWe may in the future need to install specific package versions (but won’t be doing so today). 
As expected, which repository contains the R package influences how we would go about doing this.\nFor CRAN packages, we can use the remotes package’s install_version() function. This allows us to provide the version number, and designate the repository location (the CRAN url in this case).\n\nremotes::install_version(\"ggplot2\", version = \"3.5.2\", repos = \"https://cloud.r-project.org\")\n\nFor GitHub-based R packages, the package versioning schema is not as strict as that of CRAN or Bioconductor. Typically, changes in R packages are put out by their developers as releases. When trying to install a particular release, we can add an additional argument to the install_github() function, specifying the release version’s tag number. For example:\n\nremotes::install_github(\"DavidRach/Luciernaga\", ref = \"v0.99.7\")\n\nAlternatively, if the developer doesn’t implement releases, you can provide the hash number of a particular commit.\n\nremotes::install_github(\"DavidRach/Luciernaga\", ref = \"8d1d694\")", + "objectID": "course/02_FilePaths/index.html#recursive", + "href": "course/02_FilePaths/index.html#recursive", + "title": "02 - File Paths", + "section": "Recursive", + "text": "Recursive\nAnd finally, now that we have created additional nested folders and populated them with .fcs files, let’s see what setting the list.files() recursive argument to TRUE does.\n\nall_files_present <- list.files(full.names=TRUE, recursive=TRUE)\nall_files_present \n\n\n\n\nIn this case, all files in all folders within the working directory are shown. 
This can be useful when exploring folder contents, but if there are a lot of files present within the folder, it will take a while to return the list.", "crumbs": [ "About", - "Intro to R" + "Intro to R", + "02 - File Paths" ] }, { - "objectID": "course/01_InstallingRPackages/index.html#documentation-and-websites", - "href": "course/01_InstallingRPackages/index.html#documentation-and-websites", - "title": "01 - Installing R Packages", - "section": "Documentation and Websites", - "text": "Documentation and Websites\nWe have already seen a couple ways to access the help documentation contained within an R package via Positron. Beyond internal documentation, R packages often have external websites that contain additional walk-through articles (ie. vignettes) to better document how to use the package.\nFor CRAN-based packages, we can start off by searching for the package name. So, in the case of dplyr:\n\n\n\nTwo main suggestions pop up. One is the package’s CRAN page. Unfortunately, this one is not particularly user-friendly, although some text-based vignettes are accessible.\n\n\n\nBecause of this, many CRAN-based R packages (especially those part of the tidyverse) use pkgdown-generated websites hosted via a GitHub page (similar to the one used by this course. The second option on the search is dplyr’s pkgdown-style website\n\n\n\nWe can usually find the list of functions under the Reference tab, with the more extensive documentation vignettes being found under the Articles tab.\n\n\n\nGitHub-based packages will vary depending on their individual developers, but often will also use pkgdown-style websites. These often appear as links on the right-hand side, or within the repository’s ReadMe.\n\n\n\nFor Bioconductor-based packages, on the package’s page we can typically find the already rendered vignette articles, usually as either html or pdf files. For example, with PeacoQC:\n\n\n\nAdditionally, package vignettes can also be reached via the packages help index page. 
These will usually appear under User guides, package vignettes, and other documentation.", + "objectID": "course/02_FilePaths/index.html#saving-changes-to-version-control", + "href": "course/02_FilePaths/index.html#saving-changes-to-version-control", + "title": "02 - File Paths", + "section": "Saving changes to Version Control", + "text": "Saving changes to Version Control\nAnd as is good practice, to maintain version control, let’s stage all the files and folders we created today within the Week2 Project Folder, write a commit message, and send these files back to GitHub until they are needed again next time.", "crumbs": [ "About", - "Intro to R" + "Intro to R", + "02 - File Paths" ] }, { - "objectID": "course/02_FilePaths/Downsampler.html", - "href": "course/02_FilePaths/Downsampler.html", - "title": "Downsampling", + "objectID": "course/03_InsideFCSFile/index.html", + "href": "course/03_InsideFCSFile/index.html", + "title": "03 - Inside an FCS File", "section": "", - "text": "Due to trying to keep the overall file size down, I am downsampling to 100 events. For anyone interested in how I did this, this Quarto Markdown Document contains the code needed to repeat the process." 
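The core idea behind that downsampling is just random row subsetting. Here is a minimal sketch using a stand-in matrix (the toy data below is hypothetical, not the course files, and the linked Quarto document remains the authoritative version of the actual code):

```r
set.seed(42)  # make the random draw reproducible

# Stand-in for an exprs() matrix: 5000 "events" by 3 "detectors"
MFI_Matrix <- matrix(rnorm(5000 * 3), ncol = 3,
                     dimnames = list(NULL, c("FSC-A", "SSC-A", "B1-A")))

# Keep a random 100 events; the same [rows, ] subsetting also works on a flowFrame
Downsampled <- MFI_Matrix[sample(nrow(MFI_Matrix), 100), ]
nrow(Downsampled)  # 100
```

Sampling row indices (rather than, say, taking the first 100 rows) avoids biasing the subset toward events acquired early in the run.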
- }, - { - "objectID": "course/02_FilePaths/Downsampler.html#specify-file.path-and-identify-files", - "href": "course/02_FilePaths/Downsampler.html#specify-file.path-and-identify-files", - "title": "Downsampling", - "section": "Specify file.path and identify files", - "text": "Specify file.path and identify files\nDue to the counts being conducter on two separate instruments, the number of columns differs, so they will need to be loaded into separate GatingSet objects.\n\nStorageLocation <- file.path(\"course\", \"02_FilePaths\", \"data\")\nExisting <- list.files(StorageLocation, pattern=\".fcs\", full.names=TRUE)\nList1 <- Existing[1:2] # 3L Aurora\nList2 <- Existing[3:8] # 4L Aurora" - }, - { - "objectID": "course/02_FilePaths/Downsampler.html#load-.fcs-files-into-a-gatingset", - "href": "course/02_FilePaths/Downsampler.html#load-.fcs-files-into-a-gatingset", - "title": "Downsampling", - "section": "Load .fcs files into a GatingSet", - "text": "Load .fcs files into a GatingSet\nLoad in files to their respective GatingSet objects\n\ncs1 <- load_cytoset_from_fcs(List1, truncate_max_range = FALSE, transformation = FALSE)\ngs1 <- GatingSet(cs1)\n\ncs2 <- load_cytoset_from_fcs(List2, truncate_max_range = FALSE, transformation = FALSE)\ngs2 <- GatingSet(cs2)" + "text": "For the YouTube livestream recording, see here\nFor screen-shot slides, click here", + "crumbs": [ + "About", + "Intro to R", + "03 - Inside a .FCS file" + ] }, { - "objectID": "course/03_InsideFCSFile/slides.html#getting-set-up", - "href": "course/03_InsideFCSFile/slides.html#getting-set-up", + "objectID": "course/03_InsideFCSFile/index.html#getting-set-up", + "href": "course/03_InsideFCSFile/index.html#getting-set-up", "title": "03 - Inside an FCS File", "section": "Getting Set Up", - "text": "Getting Set Up" + "text": "Getting Set Up\n\n\nSet up File Paths\nHaving copied over the new data to your working project folder (Week 3 or whatever your chosen name), let’s identify the file paths between our 
working directory and the fcs files. If you retained the same project organization structure we had during Week #2, it may look similar to the following:\n\nPathToDataFolder <- file.path(\"data\")\n\n\nPathToDataFolder\n\n[1] \"data\"\n\n\n\n\n\n\nLocate .fcs files\nWe will now locate our .fcs files. As we saw last week, our computer will need the full file.paths to these individual files, so we will set the list.files() “full.names” argument to TRUE.\n\nfcs_files <- list.files(PathToDataFolder, pattern=\".fcs\", full.names=TRUE)\nfcs_files\n\n[1] \"data/CellCounts4L_AB_05_ND050_05.fcs\"\n\n\nBy contrast, if the “full.names” argument was set to FALSE, we would have retrieved just the file names\n\nlist.files(PathToDataFolder, pattern=\".fcs\", full.names=FALSE)\n\n[1] \"CellCounts4L_AB_05_ND050_05.fcs\"\n\n\nThis would have been the equivalent of running the basename function on the “full.names=TRUE” output.\n\nbasename(fcs_files)\n\n[1] \"CellCounts4L_AB_05_ND050_05.fcs\"", + "crumbs": [ + "About", + "Intro to R", + "03 - Inside a .FCS file" + ] }, { - "objectID": "course/03_InsideFCSFile/slides.html#flowcore", - "href": "course/03_InsideFCSFile/slides.html#flowcore", + "objectID": "course/03_InsideFCSFile/index.html#flowcore", + "href": "course/03_InsideFCSFile/index.html#flowcore", "title": "03 - Inside an FCS File", "section": "flowCore", - "text": "flowCore\n\n\n\n\n\n\n\n\n.\n\n\nWe will be using the flowCore package, which is the oldest and most-frequently downloaded flow cytometry package on Bioconductor." + "text": "flowCore\nWe will be using the flowCore package, which is the oldest and most-frequently downloaded flow cytometry package on Bioconductor.\n\n\nCode\n# I have attached this code for anyone that is interested in seeing how these plots were made. The content is not part of today's lesson, so if you are just starting off, we will cover the details of data-tidying and creating ggplot objects over the next several weeks. 
Best, David\n\n# Load required packages via a library call\n\nlibrary(dplyr) # CRAN\nlibrary(stringr) # CRAN\nlibrary(ggplot2) # CRAN\n#library(plotly) # Using the :: to access \n\n# Loading in the dataset contained within the .csv file\nBioconductorFlow_path <- file.path(PathToDataFolder, \"BioconductorFlow.csv\")\nBioconductorFlowPackages <- read.csv(BioconductorFlow_path, check.names=FALSE)\nBioconductorFlowPackages <- BioconductorFlowPackages |>\n arrange(desc(since)) |> mutate(package = factor(package, levels = package))\n\n# Newer Base R Pipe : |> \n# Older mostly equivalent Magrittr Pipe %>% \n\n\n\n\nCode\n# Notice the code-chunk eval arguments above dictate the shape of the final rendered plot. \n\n# Taking the imported dataset and passing it to ggplot2 to create the first plot. \n\nplot <- ggplot(BioconductorFlowPackages,\n aes(x = 0, xend = since, y = package, yend = package)) +\n geom_segment(linewidth = 2, color = \"steelblue\") +\n scale_x_continuous(trans = \"reverse\", \n breaks = seq(0, max(BioconductorFlowPackages$since), by = 5)) +\n labs(\n x = \"Years in Bioconductor\",\n y = NULL,\n title = \"Bioconductor Flow Cytometry R packages\"\n ) +\n theme_bw()\n\n# Taking the static plot and making it interactive using the plotly package\n\nplotly::ggplotly(plot)\n\n\n\n\n\n\n\n\nCode\n# Retrieving the names of Bioconductor flow cytometry R packages in correct release order. \n\nHistoricalOrder <- BioconductorFlowPackages |> pull(package)\n\n# Bringing in 2025 package usage dataset from a .csv file\nBioconductorUsage_path <- file.path(PathToDataFolder, \"BioconductorDownloads.csv\")\nBioconductorUsage <- read.csv(BioconductorUsage_path, check.names=FALSE)\nBioconductorUsage <- BioconductorUsage |> dplyr::filter(Month %in% \"all\")\n\n# Note, dplyr::filter is used due to flowCore also having a filter function, which causes conflicts once it is attached to the local environment. 
\n\n# Combining both data.frames for use in the plot\n\nDataset <- left_join(BioconductorFlowPackages, BioconductorUsage, by=\"package\")\n\n# Rearranging the order in which packages are displayed\n\nDataset$package <- factor(Dataset$package, levels=HistoricalOrder)\n\n\n\n\nCode\n# Generating the 2nd plot with ggplot2\n\nplot <- ggplot(Dataset, aes(x = since, y = Nb_of_distinct_IPs)) +\n geom_point(aes(color = package), size = 3, alpha = 0.7) + \n labs(\n x = \"Years in Bioconductor\",\n y = \"Number of Yearly Downloads\",\n title = \"\",\n color = \"Package\"\n ) +\n theme_bw()\n\n# Making it interactive with plotly\n\nplotly::ggplotly(plot)\n\n\n\n\n\n\nflowCore is also one of the many Bioconductor packages maintained by Mike Jiang. In many ways (as those who completed the optional take-home problems for Week #1 know) reminiscent of this xkcd comic:\n\nAs with all our R packages, we first need to make sure flowCore is attached to our local environment via the library call.\n\nlibrary(flowCore)\n\nThe function we will be using today is the read.FCS() function. Do you remember how to access the help documentation?\n\n\nCode\n# Or when in Positron, hovering over the highlighted function name within the code-chunk\n\n?flowCore::read.FCS\n\n\nTo start, lets select just the first .fcs file. We will do this by indexing the first item within fcs_files via the square brackets [].\n\nfirstfile <- fcs_files[1]\nfirstfile\n\n[1] \"data/CellCounts4L_AB_05_ND050_05.fcs\"", + "crumbs": [ + "About", + "Intro to R", + "03 - Inside a .FCS file" + ] }, { - "objectID": "course/03_InsideFCSFile/slides.html#flowframe", - "href": "course/03_InsideFCSFile/slides.html#flowframe", + "objectID": "course/03_InsideFCSFile/index.html#flowframe", + "href": "course/03_InsideFCSFile/index.html#flowframe", "title": "03 - Inside an FCS File", "section": "flowFrame", - "text": "flowFrame\n\n\n\n\n\n\n\n\n.\n\n\nFor read.FCS(), it accepts several arguments. 
The argument “filename” is where we provide our file.path to .fcs file that we wish to load into R. Let’s go ahead and do so\n\n\n\n\n\n\n\nread.FCS(filename=firstfile)" + "text": "flowFrame\nFor read.FCS(), it accepts several arguments. The argument “filename” is where we provide our file.path to .fcs file that we wish to load into R. Let’s go ahead and do so\n\nread.FCS(filename=firstfile)\n\nPlease note, if you are doing this with your own .fcs files, you will need to provide two additional arguments, “transformation” = FALSE, and “truncate_max_range” = FALSE for the files to be read in correctly. We will revisit the reasons why in Week #5.\n\nread.FCS(filename=firstfile, transformation = FALSE, truncate_max_range = FALSE)\n\nflowFrame object 'CellCounts4L_AB_05-ND050-05.fcs'\nwith 100 cells and 61 observables:\n name desc range minRange maxRange\n$P1 Time NA 272140 0 272139\n$P2 UV1-A NA 4194304 -111 4194303\n$P3 UV2-A NA 4194304 -111 4194303\n$P4 UV3-A NA 4194304 -111 4194303\n$P5 UV4-A NA 4194304 -111 4194303\n... ... ... ... ... ...\n$P57 R4-A NA 4194304 -111 4194303\n$P58 R5-A NA 4194304 -111 4194303\n$P59 R6-A NA 4194304 -111 4194303\n$P60 R7-A NA 4194304 -111 4194303\n$P61 R8-A NA 4194304 -111 4194303\n476 keywords are stored in the 'description' slot\n\n\nIn this case, we can see the .fcs file has been read into R as a “flowFrame” object. We can also see the file name, as well as details about the number of cells, and number of columns (whether detectors (for raw spectral flow data) or fluorophores (for unmixed spectral flow data)).\n\nDirectly below we see what resembles a table. 
At first glance, the only column with an immediately discernible purpose is the name column, which lists the detectors present on a Cytek Aurora.\n\nAnd finally, at the bottom we reach a line that tells us that for this .fcs file, 476 keywords can be found in the description slot.\n\n\n\nSo let’s get our bearings: we have loaded an .fcs file into R, but let’s use some of the concepts we covered last week to try to understand a bit about what type or class of object we are working with. From the output, we saw the words flowFrame object, so let’s read it back in again, but assign it to a variable/object called flowFrame so that we can use the type-discerning functions we worked with last week.\n\nflowFrame <- read.FCS(filename=firstfile, transformation = FALSE, truncate_max_range = FALSE)\n\nAs we create this variable, if we have the session tab selected on our right secondary side bar, we see it appear:\n\nIf we were to use the type-determining functions we learned last week\n\nclass(flowFrame)\n\n[1] \"flowFrame\"\nattr(,\"package\")\n[1] \"flowCore\"\n\n\nflowFrames are a class of object with a structure defined within the flowCore package. They are used to work with the data contained within individual .fcs files. Looking again at the right secondary side bar, we can see that it shows up as an “S4 class flowFrame package flowCore” with 3 slots, with the words flowFrame adjacent to it.\nA perfectly valid reaction to first reading this is “well how should I know what any of this means?”. Powering through this initial discomfort, let’s go ahead and click on the dropdown arrow next to the variable’s name and see if we get any additional clarity on the issue.\n\nWhen we do so, three additional drop-downs appear. Based on the previous line that mentioned 3 slots, we could infer that each line corresponds to one of those slots.\nWhat we are encountering with flowFrame is our first example of an S4 object type. 
These more-complicated object types are quite common for the various Bioconductor-affiliated R packages.\nThese objects will usually appear with either S4 or S3 in their metadata, and are made up of various simpler object types that are cobbled together within the larger object, usually occupying individual slots.\nWhat advantage this bundling provides will be something we revisit throughout the course as you encounter more of these S4/S3 objects.\n\n\n\nexprs\nThe first slot within the flowFrame object shows up with the name “exprs”. For the exprs object, glancing at its middle column, we can infer from the 100 rows and 61 columns that it is likely a matrix-style object. We might also recall we saw similar numbers in the printed output when we ran read.FCS() earlier.\n\nThis likely means that the “exprs” slot is where the MFI data for the individual acquired cells within our .fcs file is being stored. Within Positron, for a matrix object, we can click on the little grid symbol on the far right to open up the table within the editor.\n\nIf we utilize the scroll bars, we can see that the individual detectors (in the case of uploading a raw spectral fcs file; they would appear as fluorophores for unmixed spectral or conventional fcs files) occupy the individual columns, which are named. The rows are not named, but number 100, matching the number of cells present in the .fcs file. Additionally, on the far left there is a little summary table about the overall data.\n\nLet’s go ahead and assign this matrix to a new variable/object so that we can explore it later. Since flowFrame is an S4 object, its slots can be individually accessed by adding the @ symbol and the respective slot name.\n\nMFI_Matrix <- flowFrame@exprs\n\nAlternatively, we can use the Bioconductor helper function exprs() to get the data held in that slot\n\nMFI_Matrix_Alternate <- exprs(flowFrame)\n\nIn the case of the above, displaying the full text output to the console would be unwieldy. 
If we wanted to only see the first five rows, we could use the head() function, and provide a value of 5.\n\nhead(MFI_Matrix, 5)\n\n Time UV1-A UV2-A UV3-A UV4-A UV5-A UV6-A\n[1,] 38823 37.79983 -184.479996 353.87714 1106.22998 1145.18140 2130.21899\n[2,] 39780 234.23021 -98.456429 26.43876 70.65833 -89.93541 -29.14263\n[3,] 267292 -117.96355 40.732426 473.94574 1177.86975 1516.70935 1985.46130\n[4,] 128101 289.87671 -2.723389 -54.11960 -163.71489 32.62989 -134.90411\n[5,] 255221 -104.50541 -71.163338 567.57562 610.23627 1416.61328 2868.16040\n UV7-A UV8-A UV9-A UV10-A UV11-A UV12-A UV13-A\n[1,] 4376.34277 3246.7952 32050.65039 8123.2637 1992.5785 1070.3323 956.43573\n[2,] -26.34649 -162.6675 17.98848 271.9152 154.8575 163.5411 -32.81524\n[3,] 3658.87671 4140.1724 59792.16406 14013.7969 3427.4324 1668.7588 1071.14636\n[4,] 739.71960 402.9025 427.37534 315.6364 -223.0423 145.7121 127.03777\n[5,] 4034.58789 3234.6626 40126.46484 10325.0371 1974.0907 1033.8450 -21.57245\n UV14-A UV15-A UV16-A SSC-H SSC-A V1-A V2-A\n[1,] 290.8685 385.49921 670.97687 657613 750760.12 1171.1390 154.5628\n[2,] -104.9198 103.41382 71.41528 83481 81552.85 266.2424 705.2527\n[3,] 730.1430 214.93053 252.75406 890845 1183519.00 1196.0931 1183.1105\n[4,] -207.8978 -55.37944 -45.10131 75103 72457.33 227.9926 556.9189\n[5,] 273.6271 960.16290 341.20633 415791 501690.97 717.2498 929.9780\n V3-A V4-A V5-A V6-A V7-A V8-A\n[1,] 1346.4488525 1706.9260 1923.50940 898.2527 3162.55371 83596.5078\n[2,] 244.3218689 341.0508 381.15939 87.1600 151.68785 119.5544\n[3,] 2087.0092773 824.6352 1635.27258 1613.9069 4653.16260 176981.6094\n[4,] -0.8137281 205.4732 18.12125 179.6371 -69.50061 -132.4348\n[5,] 1358.4512939 788.6506 1208.81006 1156.7040 3118.42627 104951.2578\n V9-A V10-A V11-A V12-A V13-A V14-A\n[1,] 32506.7617 27161.5137 6236.072754 2220.303223 2023.39966 753.589355\n[2,] -79.7691 -109.1783 -73.991196 114.375542 -11.53453 124.986206\n[3,] 69236.4297 57626.8984 13175.838867 4534.874023 3434.38989 
1995.172363\n[4,] 129.2739 231.8918 -5.473321 8.792875 -24.62049 -8.212234\n[5,] 42090.0781 34104.3164 7620.552734 3103.544189 2426.43359 650.836304\n V15-A V16-A FSC-H FSC-A SSC-B-H SSC-B-A B1-A\n[1,] 510.9540 228.34962 1055905 1217097.50 716733 815959.06 606.6683\n[2,] -207.3494 -28.96272 79696 83439.11 104575 103132.83 195.2795\n[3,] 1321.8030 615.05560 1092481 1453969.38 757351 982038.31 2010.5110\n[4,] -133.7503 -34.32619 64760 60415.23 67955 66806.13 -146.8936\n[5,] 290.2892 473.32599 1038362 1184479.00 425296 522873.12 1015.5981\n B2-A B3-A B4-A B5-A B6-A B7-A\n[1,] 416.98294 4172.4712 192400.0938 93929.9375 54236.3320 19342.6445\n[2,] 333.25662 332.1675 -230.9639 196.7810 292.8945 -187.2845\n[3,] 2150.21826 10106.9551 437801.5625 212176.1562 124294.3594 45068.3008\n[4,] -34.90987 165.7988 675.0156 136.3076 482.8665 133.0948\n[5,] 639.77527 6034.0200 244022.6094 118871.4609 68616.9688 24067.7949\n B8-A B9-A B10-A B11-A B12-A B13-A\n[1,] 10507.24219 9498.1270 4465.50928 1668.048096 2199.9475 1581.7345\n[2,] 70.90886 -334.3563 188.05545 -663.359619 -163.2331 27.9856\n[3,] 24289.59180 22500.1914 10624.99219 4684.497559 3471.7749 2727.3904\n[4,] 432.28091 246.8794 -94.44906 2.905877 106.8489 283.3633\n[5,] 14182.12793 13019.7100 5577.33984 2753.355957 1279.3009 1423.0276\n B14-A R1-A R2-A R3-A R4-A R5-A\n[1,] 1487.56860 147.1335 129.867630 35.90353 267.23999 49.79849\n[2,] 205.82298 -142.9224 66.516052 113.63218 -94.41375 98.13978\n[3,] 2371.95850 -128.3749 -105.482544 726.48547 18.87000 95.47879\n[4,] 33.24665 127.5455 122.607941 37.83584 -82.87500 -343.83768\n[5,] 1565.72742 -266.4482 -3.350622 -178.39566 -117.10875 -100.10384\n R6-A R7-A R8-A\n[1,] -732.7097 42.83144 248.56728\n[2,] 143.4497 -263.28741 -85.83299\n[3,] -194.4526 -84.08820 -301.46066\n[4,] 82.3745 60.27896 -94.38461\n[5,] -182.0066 184.36417 186.17207\n\n\nThis is much more workable, especially on a small laptop screen. 
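The head() function is not specific to flow data; it works the same way on any matrix. A tiny toy illustration (made-up numbers, unrelated to the .fcs file):

```r
# A small 4 x 2 matrix; matrix() fills column-by-column
toy <- matrix(1:8, nrow = 4, dimnames = list(NULL, c("A", "B")))

# head() with n = 2 returns just the first two rows
head(toy, 2)
#      A B
# [1,] 1 5
# [2,] 2 6
```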
We can see that there are names for each column corresponding to detector/fluorophore/metal depending on the .fcs file we are accessing. Lets retrieve these column names using the colnames() function.\n\nColumnNames <- colnames(MFI_Matrix)\nColumnNames\n\n $P1N $P2N $P3N $P4N $P5N $P6N $P7N $P8N \n \"Time\" \"UV1-A\" \"UV2-A\" \"UV3-A\" \"UV4-A\" \"UV5-A\" \"UV6-A\" \"UV7-A\" \n $P9N $P10N $P11N $P12N $P13N $P14N $P15N $P16N \n \"UV8-A\" \"UV9-A\" \"UV10-A\" \"UV11-A\" \"UV12-A\" \"UV13-A\" \"UV14-A\" \"UV15-A\" \n $P17N $P18N $P19N $P20N $P21N $P22N $P23N $P24N \n \"UV16-A\" \"SSC-H\" \"SSC-A\" \"V1-A\" \"V2-A\" \"V3-A\" \"V4-A\" \"V5-A\" \n $P25N $P26N $P27N $P28N $P29N $P30N $P31N $P32N \n \"V6-A\" \"V7-A\" \"V8-A\" \"V9-A\" \"V10-A\" \"V11-A\" \"V12-A\" \"V13-A\" \n $P33N $P34N $P35N $P36N $P37N $P38N $P39N $P40N \n \"V14-A\" \"V15-A\" \"V16-A\" \"FSC-H\" \"FSC-A\" \"SSC-B-H\" \"SSC-B-A\" \"B1-A\" \n $P41N $P42N $P43N $P44N $P45N $P46N $P47N $P48N \n \"B2-A\" \"B3-A\" \"B4-A\" \"B5-A\" \"B6-A\" \"B7-A\" \"B8-A\" \"B9-A\" \n $P49N $P50N $P51N $P52N $P53N $P54N $P55N $P56N \n \"B10-A\" \"B11-A\" \"B12-A\" \"B13-A\" \"B14-A\" \"R1-A\" \"R2-A\" \"R3-A\" \n $P57N $P58N $P59N $P60N $P61N \n \"R4-A\" \"R5-A\" \"R6-A\" \"R7-A\" \"R8-A\" \n\n\nSomething interesting occurred when this occurred, we can see in addition to the detector names directly above each a “$P#N” pattern appear, with # standing for increasing numbers. 
If we recall, we saw something similar in the first output column when we first ran read.FCS().\n\nLets break out the str() and class() functions from last week and see what we can find out about why this is occuring.\n\nstr(ColumnNames)\n\n Named chr [1:61] \"Time\" \"UV1-A\" \"UV2-A\" \"UV3-A\" \"UV4-A\" \"UV5-A\" \"UV6-A\" ...\n - attr(*, \"names\")= chr [1:61] \"$P1N\" \"$P2N\" \"$P3N\" \"$P4N\" ...\n\n\nIn this case we can see that we don’t just have a vector (list) similar to what we saw with Fluorophores object last week, because instead of a chr [1:61] we get back a Named chr [1:61] designation. What we see is that in this case, each value has a corresponding index name as well. (ex. $P1N, $P2N, etc.) Let’s double check with class() function.\n\nclass(ColumnNames)\n\n[1] \"character\"\n\n\nWe can see that everything is character, but it doesn’t inform us that each index was named. This is one of the reasons it is best when trying to see what type of an object something is, to use multiple functions, to avoid missing some important details.\nIf we were trying to remove the names, being left with just the values (similar to what we saw with the vector-style list last week), we could use the unname() function:\n\nunname(ColumnNames)\n\n [1] \"Time\" \"UV1-A\" \"UV2-A\" \"UV3-A\" \"UV4-A\" \"UV5-A\" \"UV6-A\" \n [8] \"UV7-A\" \"UV8-A\" \"UV9-A\" \"UV10-A\" \"UV11-A\" \"UV12-A\" \"UV13-A\" \n[15] \"UV14-A\" \"UV15-A\" \"UV16-A\" \"SSC-H\" \"SSC-A\" \"V1-A\" \"V2-A\" \n[22] \"V3-A\" \"V4-A\" \"V5-A\" \"V6-A\" \"V7-A\" \"V8-A\" \"V9-A\" \n[29] \"V10-A\" \"V11-A\" \"V12-A\" \"V13-A\" \"V14-A\" \"V15-A\" \"V16-A\" \n[36] \"FSC-H\" \"FSC-A\" \"SSC-B-H\" \"SSC-B-A\" \"B1-A\" \"B2-A\" \"B3-A\" \n[43] \"B4-A\" \"B5-A\" \"B6-A\" \"B7-A\" \"B8-A\" \"B9-A\" \"B10-A\" \n[50] \"B11-A\" \"B12-A\" \"B13-A\" \"B14-A\" \"R1-A\" \"R2-A\" \"R3-A\" \n[57] \"R4-A\" \"R5-A\" \"R6-A\" \"R7-A\" \"R8-A\" \n\n\n\n\nLet’s return to the right sidebar to continue our exploration, by 
clicking on the dropdown arrow for exprs in the side-bar\n\nThe output is less user-friendly than what we saw when clicking on the little grid. If we scroll down far enough, we get down as far as [,61], which corresponds to the total number of columns.\n\nIn base R, column order can be defined by placing the corresponding column index number after a comma “,”. So for this case, the first column would be designated would be [,1] while the last column would be designated [,61].\n\nMFI_Matrix[,1]\n\n [1] 38823 39780 267292 128101 255221 79210 196643 83855 109315 26128\n [11] 114423 120001 71831 70551 197021 239994 252611 223012 152780 171822\n [21] 172611 168464 191503 253015 73885 82221 176641 128533 4117 191632\n [31] 191229 58093 141776 265894 55593 227555 233212 248578 95165 171934\n [41] 1360 251847 195764 147503 118723 1060 90033 253553 268268 74610\n [51] 23531 150119 226391 201568 179264 79944 196686 252667 117309 3903\n [61] 77690 195142 229873 254472 179943 236618 68193 87154 28541 78622\n [71] 155664 50115 40866 70753 260118 12033 96149 20740 37461 73998\n [81] 231939 192329 88649 197664 86006 142486 159539 251298 104864 164090\n [91] 102380 218968 145182 239323 261272 118979 17202 194277 229284 258723\n\n\n\nMFI_Matrix[,61]\n\n [1] 248.567276 -85.832993 -301.460663 -94.384613 186.172073 -461.407745\n [7] 843.507080 277.516113 -106.166855 281.633545 195.927261 818.865723\n [13] 734.996460 209.356476 206.442596 279.859894 518.165222 56.947498\n [19] 285.751007 857.126343 -94.384613 -213.030518 62.585236 138.409653\n [25] 118.012444 328.255768 -61.635056 185.285233 464.384979 5.637739\n [31] -66.385956 31.229273 1198.241211 185.475266 873.279419 457.607025\n [37] -73.353951 37.880539 729.168640 221.772171 -169.512238 348.272888\n [43] -338.391022 845.534119 -4.434176 620.024597 610.269409 -193.900208\n [49] 230.830566 -23.754517 607.102112 14.949510 -34.333195 -169.132172\n [55] -96.158287 220.631958 125.297165 -15.202891 -126.057304 193.393448\n [61] 
90.203819 -277.706146 590.505615 911.096619 -92.230873 347.259369\n [67] 135.559113 369.430267 -62.015125 -180.597672 -146.517868 810.440796\n [73] 134.038818 -165.268097 727.711731 -88.746880 62.901962 203.275330\n [79] 436.196289 -242.676147 -40.857769 222.278946 -170.272385 525.513245\n [85] -41.491222 176.670258 201.501648 175.530045 329.839386 474.140167\n [91] -48.142490 -174.833252 46.052090 357.584656 -26.541714 191.493088\n [97] 211.320190 124.790398 -113.324883 343.268616\n\n\nWhat would happen if used a column index number that didn’t exist? Let’s check.\n\nMFI_Matrix[,350]\n\nError in `MFI_Matrix[, 350]`:\n! subscript out of bounds\n\n\nWe get back an error message telling us the subscript is out of bounds.\nSo if columns are specified by a number after the comma (ex. [,1]), how are rows specified? In R, rows would be specified by a number before the comma [1,]\n\nMFI_Matrix[1,]\n\n Time UV1-A UV2-A UV3-A UV4-A \n 38823.00000 37.79983 -184.48000 353.87714 1106.22998 \n UV5-A UV6-A UV7-A UV8-A UV9-A \n 1145.18140 2130.21899 4376.34277 3246.79517 32050.65039 \n UV10-A UV11-A UV12-A UV13-A UV14-A \n 8123.26367 1992.57849 1070.33228 956.43573 290.86853 \n UV15-A UV16-A SSC-H SSC-A V1-A \n 385.49921 670.97687 657613.00000 750760.12500 1171.13904 \n V2-A V3-A V4-A V5-A V6-A \n 154.56281 1346.44885 1706.92603 1923.50940 898.25269 \n V7-A V8-A V9-A V10-A V11-A \n 3162.55371 83596.50781 32506.76172 27161.51367 6236.07275 \n V12-A V13-A V14-A V15-A V16-A \n 2220.30322 2023.39966 753.58936 510.95404 228.34962 \n FSC-H FSC-A SSC-B-H SSC-B-A B1-A \n1055905.00000 1217097.50000 716733.00000 815959.06250 606.66833 \n B2-A B3-A B4-A B5-A B6-A \n 416.98294 4172.47119 192400.09375 93929.93750 54236.33203 \n B7-A B8-A B9-A B10-A B11-A \n 19342.64453 10507.24219 9498.12695 4465.50928 1668.04810 \n B12-A B13-A B14-A R1-A R2-A \n 2199.94751 1581.73450 1487.56860 147.13348 129.86763 \n R3-A R4-A R5-A R6-A R7-A \n 35.90353 267.23999 49.79849 -732.70966 42.83144 \n R8-A \n 
248.56728 \n\n\nAnd while not the focus of today, we could retrieve individual values from a matrix by specifying both a row and a column index number. So for example, if we wanted the MFI value for the UV1-A detector for the first acquired cell (knowing that UV1-A is the 2nd column):\n\nMFI_Matrix[1,2]\n\n UV1-A \n37.79983 \n\n\nFrom our exploration, this looks to be all the information contained within the “exprs” slot, so let’s back up and check on the next slot.\n\n\n\n\nparameters\nAs we look at the next slot in the flowFrame object, we can see that parameters looks like it is going to be another more complex object, as it is showing up as an AnnotatedDataFrame object (defined by the Biobase R package, and itself contains 4 slots).\n\n\nHaving carved our way this far into the heart of an .fcs file, we are not about to call it quits now, so CHARGE my fellow cytometrists!!! Click that drop-down arrow!\n\nHaving survived our charge into the unknown, the four parameter slots appear to be “varMetadata”, “data”, “dimLabels” and “.__classVersion__”.\n\n\nvarMetadata\nFortunately for us, both “varMetadata” and “data” at least appear to be table-like objects of a type known as a “data.frame”, so let’s click on the grid to open them in our editor window.\nIn the case of varMetadata, we seem to have retrieved a column of metadata names.\n\nThese look reminiscent of what we saw at the top of the read.FCS() column outputs previously.\n\n\n\ndata\nClicking on the grid for the parameters’ data slot will end up opening the actual content that was displayed.\n\nLet’s try to retrieve the data contained within this slot and save it as its own variable/object within our R session. First, we need to open the flowFrame object, then use @ to get inside its parameters slot. 
Since parameters is also a complex object (AnnotatedDataFrame specifically), we will need to use another @ to get inside its data slot:\n\nParameterData <- flowFrame@parameters@data\n\nhead(ParameterData, 10)\n\n name desc range minRange maxRange\n$P1 Time <NA> 272140 0.00000 272139\n$P2 UV1-A <NA> 4194304 -111.00000 4194303\n$P3 UV2-A <NA> 4194304 -111.00000 4194303\n$P4 UV3-A <NA> 4194304 -111.00000 4194303\n$P5 UV4-A <NA> 4194304 -111.00000 4194303\n$P6 UV5-A <NA> 4194304 -111.00000 4194303\n$P7 UV6-A <NA> 4194304 -111.00000 4194303\n$P8 UV7-A <NA> 4194304 -26.34649 4194303\n$P9 UV8-A <NA> 4194304 -111.00000 4194303\n$P10 UV9-A <NA> 4194304 0.00000 4194303\n\n\nAnd similarly, we could access with the Bioconductor helper function parameters(), but we would need to specify the accessor for data outside the parenthesis.\n\nParameterData_Alternate <- parameters(flowFrame)@data\n\nIf we ran the str() function, we get the following insight into ParameterData’s object type\n\nstr(ParameterData)\n\n'data.frame': 61 obs. of 5 variables:\n $ name : 'AsIs' Named chr \"Time\" \"UV1-A\" \"UV2-A\" \"UV3-A\" ...\n ..- attr(*, \"names\")= chr [1:61] \"$P1N\" \"$P2N\" \"$P3N\" \"$P4N\" ...\n $ desc : 'AsIs' Named chr NA NA NA NA ...\n ..- attr(*, \"names\")= chr [1:61] NA NA NA NA ...\n $ range : num 272140 4194304 4194304 4194304 4194304 ...\n $ minRange: num 0 -111 -111 -111 -111 ...\n $ maxRange: num 272139 4194303 4194303 4194303 4194303 ...\n\n\nWe can see this class of object is a “data.frame”. This is one of the more common object types in R, and we will be seeing these extensively throughout the course. 
We see that each of the columns appears to be designated by a $ followed by the column name, and then type of column (numeric, character, etc).\n\nIf we are trying to see these columns in R, we notice that data.frame is not like the previous S4 class objets we interacted with, as the @ symbol after doesn’t bring up any suggestions\n\nParameterData@\n\nBy contrast, adding the $ we saw when using the str() function does retrieve the underlying information\n\nParameterData$\n\n\nAs you become more familiar with R, remembering to check what kind of object you are working with, and how to access the contents will with practice become more familiar to you.\nSimilar to what we saw with a matrix, we can subset a data.frame based on the column or row index using square brackets [].\n\nParameterData[,1]\n\n $P1N $P2N $P3N $P4N $P5N $P6N $P7N $P8N \n \"Time\" \"UV1-A\" \"UV2-A\" \"UV3-A\" \"UV4-A\" \"UV5-A\" \"UV6-A\" \"UV7-A\" \n $P9N $P10N $P11N $P12N $P13N $P14N $P15N $P16N \n \"UV8-A\" \"UV9-A\" \"UV10-A\" \"UV11-A\" \"UV12-A\" \"UV13-A\" \"UV14-A\" \"UV15-A\" \n $P17N $P18N $P19N $P20N $P21N $P22N $P23N $P24N \n \"UV16-A\" \"SSC-H\" \"SSC-A\" \"V1-A\" \"V2-A\" \"V3-A\" \"V4-A\" \"V5-A\" \n $P25N $P26N $P27N $P28N $P29N $P30N $P31N $P32N \n \"V6-A\" \"V7-A\" \"V8-A\" \"V9-A\" \"V10-A\" \"V11-A\" \"V12-A\" \"V13-A\" \n $P33N $P34N $P35N $P36N $P37N $P38N $P39N $P40N \n \"V14-A\" \"V15-A\" \"V16-A\" \"FSC-H\" \"FSC-A\" \"SSC-B-H\" \"SSC-B-A\" \"B1-A\" \n $P41N $P42N $P43N $P44N $P45N $P46N $P47N $P48N \n \"B2-A\" \"B3-A\" \"B4-A\" \"B5-A\" \"B6-A\" \"B7-A\" \"B8-A\" \"B9-A\" \n $P49N $P50N $P51N $P52N $P53N $P54N $P55N $P56N \n \"B10-A\" \"B11-A\" \"B12-A\" \"B13-A\" \"B14-A\" \"R1-A\" \"R2-A\" \"R3-A\" \n $P57N $P58N $P59N $P60N $P61N \n \"R4-A\" \"R5-A\" \"R6-A\" \"R7-A\" \"R8-A\" \n\n\nThe individual detectors or fluorophore appear under “name”. 
For now, based on what we know, the $P# appears to be some sort of name used as an internal, consistent reference to the respective parameter.\n“desc” appears empty for this raw spectral fcs file, but if you were to check an unmixed file, this would be occupied by the marker/ligand name assigned to it during the experiment setup.\n“range”, “minRange” and “maxRange” are beyond the scope of today, but are used by both instrument manufacturers and software vendors when setting appropriate scaling for a plot. For the actual details, see the Flow Cytometry Standard documentation.\nHaving exhausted our options under the parameters “varMetadata” and “data” slots, let’s continue to the next slot.\n\n\ndimLabels\n\nIn this case, not much is returned. Yay!\n\nflowFrame@parameters@dimLabels\n\n[1] \"rowNames\" \"columnNames\"\n\n\n\n\nclassVersion\nContinuing on to the last slot “.__classVersion__”\n\nflowFrame@parameters@.__classVersion__\n\nAnnotatedDataFrame \n \"1.1.0\" \n\n\nAlso mercifully short, both of these seem to be more involved in defining the S4 class object, and don’t contain anything we need to retrieve today.\n\n\n\n\nDescription\nAt this point, we have explored both the “exprs” and “parameters” slots of the flowFrame object we created. Let’s tackle the final slot, named description.\n\nWhen doing so, a very large list is opened within the Positron variables window. While we could scroll through it, it might be easier to retrieve a certain number of rows via the console to make interpreting this more structured.\n\nTo retrieve the list itself, we would need to access the description slot of the flowFrame object. 
Since it is a slot, we will need to use the @ accessor.\n\n\nDescriptionList <- flowFrame@description\n\n\nDescriptionList \n\n$`$BEGINANALYSIS`\n[1] \"0\"\n\n$`$BEGINDATA`\n[1] \"33312\"\n\n$`$BEGINSTEXT`\n[1] \"0\"\n\n$`$BTIM`\n[1] \"13:55:29.85\"\n\n$`$BYTEORD`\n[1] \"4,3,2,1\"\n\n$`$CYT`\n[1] \"Aurora\"\n\n$`$CYTOLIB_VERSION`\n[1] \"2.22.0\"\n\n$`$CYTSN`\n[1] \"V0333\"\n\n$`$DATATYPE`\n[1] \"F\"\n\n$`$DATE`\n[1] \"04-Aug-2025\"\n\n$`$ENDANALYSIS`\n[1] \"0\"\n\n$`$ENDDATA`\n[1] \"57711\"\n\n$`$ENDSTEXT`\n[1] \"0\"\n\n$`$ETIM`\n[1] \"13:55:57.02\"\n\n$`$FIL`\n[1] \"CellCounts4L_AB_05-ND050-05.fcs\"\n\n$`$INST`\n[1] \"UMBC\"\n\n$`$MODE`\n[1] \"L\"\n\n$`$NEXTDATA`\n[1] \"0\"\n\n$`$OP`\n[1] \"David Rach\"\n\n$`$P10B`\n[1] \"32\"\n\n$`$P10E`\n[1] \"0,0\"\n\n$`$P10N`\n[1] \"UV9-A\"\n\n$`$P10R`\n[1] \"4194304\"\n\n$`$P10TYPE`\n[1] \"Raw_Fluorescence\"\n\n$`$P10V`\n[1] \"710\"\n\n$`$P11B`\n[1] \"32\"\n\n$`$P11E`\n[1] \"0,0\"\n\n$`$P11N`\n[1] \"UV10-A\"\n\n$`$P11R`\n[1] \"4194304\"\n\n$`$P11TYPE`\n[1] \"Raw_Fluorescence\"\n\n$`$P11V`\n[1] \"377\"\n\n$`$P12B`\n[1] \"32\"\n\n$`$P12E`\n[1] \"0,0\"\n\n$`$P12N`\n[1] \"UV11-A\"\n\n$`$P12R`\n[1] \"4194304\"\n\n$`$P12TYPE`\n[1] \"Raw_Fluorescence\"\n\n$`$P12V`\n[1] \"469\"\n\n$`$P13B`\n[1] \"32\"\n\n$`$P13E`\n[1] \"0,0\"\n\n$`$P13N`\n[1] \"UV12-A\"\n\n$`$P13R`\n[1] \"4194304\"\n\n$`$P13TYPE`\n[1] \"Raw_Fluorescence\"\n\n$`$P13V`\n[1] \"434\"\n\n$`$P14B`\n[1] \"32\"\n\n$`$P14E`\n[1] \"0,0\"\n\n$`$P14N`\n[1] \"UV13-A\"\n\n$`$P14R`\n[1] \"4194304\"\n\n$`$P14TYPE`\n[1] \"Raw_Fluorescence\"\n\n$`$P14V`\n[1] \"564\"\n\n$`$P15B`\n[1] \"32\"\n\n$`$P15E`\n[1] \"0,0\"\n\n$`$P15N`\n[1] \"UV14-A\"\n\n$`$P15R`\n[1] \"4194304\"\n\n$`$P15TYPE`\n[1] \"Raw_Fluorescence\"\n\n$`$P15V`\n[1] \"975\"\n\n$`$P16B`\n[1] \"32\"\n\n$`$P16E`\n[1] \"0,0\"\n\n$`$P16N`\n[1] \"UV15-A\"\n\n$`$P16R`\n[1] \"4194304\"\n\n$`$P16TYPE`\n[1] \"Raw_Fluorescence\"\n\n$`$P16V`\n[1] \"737\"\n\n$`$P17B`\n[1] \"32\"\n\n$`$P17E`\n[1] \"0,0\"\n\n$`$P17N`\n[1] 
\"UV16-A\"\n\n$`$P17R`\n[1] \"4194304\"\n\n$`$P17TYPE`\n[1] \"Raw_Fluorescence\"\n\n$`$P17V`\n[1] \"1069\"\n\n$`$P18B`\n[1] \"32\"\n\n$`$P18E`\n[1] \"0,0\"\n\n$`$P18N`\n[1] \"SSC-H\"\n\n$`$P18R`\n[1] \"4194304\"\n\n$`$P18TYPE`\n[1] \"Side_Scatter\"\n\n$`$P18V`\n[1] \"334\"\n\n$`$P19B`\n[1] \"32\"\n\n$`$P19E`\n[1] \"0,0\"\n\n$`$P19N`\n[1] \"SSC-A\"\n\n$`$P19R`\n[1] \"4194304\"\n\n$`$P19TYPE`\n[1] \"Side_Scatter\"\n\n$`$P19V`\n[1] \"334\"\n\n$`$P1B`\n[1] \"32\"\n\n$`$P1E`\n[1] \"0,0\"\n\n$`$P1N`\n[1] \"Time\"\n\n$`$P1R`\n[1] \"272140\"\n\n$`$P1TYPE`\n[1] \"Time\"\n\n$`$P20B`\n[1] \"32\"\n\n$`$P20E`\n[1] \"0,0\"\n\n$`$P20N`\n[1] \"V1-A\"\n\n$`$P20R`\n[1] \"4194304\"\n\n$`$P20TYPE`\n[1] \"Raw_Fluorescence\"\n\n$`$P20V`\n[1] \"352\"\n\n$`$P21B`\n[1] \"32\"\n\n$`$P21E`\n[1] \"0,0\"\n\n$`$P21N`\n[1] \"V2-A\"\n\n$`$P21R`\n[1] \"4194304\"\n\n$`$P21TYPE`\n[1] \"Raw_Fluorescence\"\n\n$`$P21V`\n[1] \"412\"\n\n$`$P22B`\n[1] \"32\"\n\n$`$P22E`\n[1] \"0,0\"\n\n$`$P22N`\n[1] \"V3-A\"\n\n$`$P22R`\n[1] \"4194304\"\n\n$`$P22TYPE`\n[1] \"Raw_Fluorescence\"\n\n$`$P22V`\n[1] \"304\"\n\n$`$P23B`\n[1] \"32\"\n\n$`$P23E`\n[1] \"0,0\"\n\n$`$P23N`\n[1] \"V4-A\"\n\n$`$P23R`\n[1] \"4194304\"\n\n$`$P23TYPE`\n[1] \"Raw_Fluorescence\"\n\n$`$P23V`\n[1] \"217\"\n\n$`$P24B`\n[1] \"32\"\n\n$`$P24E`\n[1] \"0,0\"\n\n$`$P24N`\n[1] \"V5-A\"\n\n$`$P24R`\n[1] \"4194304\"\n\n$`$P24TYPE`\n[1] \"Raw_Fluorescence\"\n\n$`$P24V`\n[1] \"257\"\n\n$`$P25B`\n[1] \"32\"\n\n$`$P25E`\n[1] \"0,0\"\n\n$`$P25N`\n[1] \"V6-A\"\n\n$`$P25R`\n[1] \"4194304\"\n\n$`$P25TYPE`\n[1] \"Raw_Fluorescence\"\n\n$`$P25V`\n[1] \"218\"\n\n$`$P26B`\n[1] \"32\"\n\n$`$P26E`\n[1] \"0,0\"\n\n$`$P26N`\n[1] \"V7-A\"\n\n$`$P26R`\n[1] \"4194304\"\n\n$`$P26TYPE`\n[1] \"Raw_Fluorescence\"\n\n$`$P26V`\n[1] \"303\"\n\n$`$P27B`\n[1] \"32\"\n\n$`$P27E`\n[1] \"0,0\"\n\n$`$P27N`\n[1] \"V8-A\"\n\n$`$P27R`\n[1] \"4194304\"\n\n$`$P27TYPE`\n[1] \"Raw_Fluorescence\"\n\n$`$P27V`\n[1] \"461\"\n\n$`$P28B`\n[1] \"32\"\n\n$`$P28E`\n[1] \"0,0\"\n\n$`$P28N`\n[1] 
\"V9-A\"\n\n$`$P28R`\n[1] \"4194304\"\n\n$`$P28TYPE`\n[1] \"Raw_Fluorescence\"\n\n$`$P28V`\n[1] \"320\"\n\n$`$P29B`\n[1] \"32\"\n\n$`$P29E`\n[1] \"0,0\"\n\n$`$P29N`\n[1] \"V10-A\"\n\n$`$P29R`\n[1] \"4194304\"\n\n$`$P29TYPE`\n[1] \"Raw_Fluorescence\"\n\n$`$P29V`\n[1] \"359\"\n\n$`$P2B`\n[1] \"32\"\n\n$`$P2E`\n[1] \"0,0\"\n\n$`$P2N`\n[1] \"UV1-A\"\n\n$`$P2R`\n[1] \"4194304\"\n\n$`$P2TYPE`\n[1] \"Raw_Fluorescence\"\n\n$`$P2V`\n[1] \"1008\"\n\n$`$P30B`\n[1] \"32\"\n\n$`$P30E`\n[1] \"0,0\"\n\n$`$P30N`\n[1] \"V11-A\"\n\n$`$P30R`\n[1] \"4194304\"\n\n$`$P30TYPE`\n[1] \"Raw_Fluorescence\"\n\n$`$P30V`\n[1] \"271\"\n\n$`$P31B`\n[1] \"32\"\n\n$`$P31E`\n[1] \"0,0\"\n\n$`$P31N`\n[1] \"V12-A\"\n\n$`$P31R`\n[1] \"4194304\"\n\n$`$P31TYPE`\n[1] \"Raw_Fluorescence\"\n\n$`$P31V`\n[1] \"234\"\n\n$`$P32B`\n[1] \"32\"\n\n$`$P32E`\n[1] \"0,0\"\n\n$`$P32N`\n[1] \"V13-A\"\n\n$`$P32R`\n[1] \"4194304\"\n\n$`$P32TYPE`\n[1] \"Raw_Fluorescence\"\n\n$`$P32V`\n[1] \"236\"\n\n$`$P33B`\n[1] \"32\"\n\n$`$P33E`\n[1] \"0,0\"\n\n$`$P33N`\n[1] \"V14-A\"\n\n$`$P33R`\n[1] \"4194304\"\n\n$`$P33TYPE`\n[1] \"Raw_Fluorescence\"\n\n$`$P33V`\n[1] \"318\"\n\n$`$P34B`\n[1] \"32\"\n\n$`$P34E`\n[1] \"0,0\"\n\n$`$P34N`\n[1] \"V15-A\"\n\n$`$P34R`\n[1] \"4194304\"\n\n$`$P34TYPE`\n[1] \"Raw_Fluorescence\"\n\n$`$P34V`\n[1] \"602\"\n\n$`$P35B`\n[1] \"32\"\n\n$`$P35E`\n[1] \"0,0\"\n\n$`$P35N`\n[1] \"V16-A\"\n\n$`$P35R`\n[1] \"4194304\"\n\n$`$P35TYPE`\n[1] \"Raw_Fluorescence\"\n\n$`$P35V`\n[1] \"372\"\n\n$`$P36B`\n[1] \"32\"\n\n$`$P36E`\n[1] \"0,0\"\n\n$`$P36N`\n[1] \"FSC-H\"\n\n$`$P36R`\n[1] \"4194304\"\n\n$`$P36TYPE`\n[1] \"Forward_Scatter\"\n\n$`$P36V`\n[1] \"55\"\n\n$`$P37B`\n[1] \"32\"\n\n$`$P37E`\n[1] \"0,0\"\n\n$`$P37N`\n[1] \"FSC-A\"\n\n$`$P37R`\n[1] \"4194304\"\n\n$`$P37TYPE`\n[1] \"Forward_Scatter\"\n\n$`$P37V`\n[1] \"55\"\n\n$`$P38B`\n[1] \"32\"\n\n$`$P38E`\n[1] \"0,0\"\n\n$`$P38N`\n[1] \"SSC-B-H\"\n\n$`$P38R`\n[1] \"4194304\"\n\n$`$P38TYPE`\n[1] \"Side_Scatter\"\n\n$`$P38V`\n[1] \"241\"\n\n$`$P39B`\n[1] 
\"32\"\n\n$`$P39E`\n[1] \"0,0\"\n\n$`$P39N`\n[1] \"SSC-B-A\"\n\n$`$P39R`\n[1] \"4194304\"\n\n$`$P39TYPE`\n[1] \"Side_Scatter\"\n\n$`$P39V`\n[1] \"241\"\n\n$`$P3B`\n[1] \"32\"\n\n$`$P3E`\n[1] \"0,0\"\n\n$`$P3N`\n[1] \"UV2-A\"\n\n$`$P3R`\n[1] \"4194304\"\n\n$`$P3TYPE`\n[1] \"Raw_Fluorescence\"\n\n$`$P3V`\n[1] \"286\"\n\n$`$P40B`\n[1] \"32\"\n\n$`$P40E`\n[1] \"0,0\"\n\n$`$P40N`\n[1] \"B1-A\"\n\n$`$P40R`\n[1] \"4194304\"\n\n$`$P40TYPE`\n[1] \"Raw_Fluorescence\"\n\n$`$P40V`\n[1] \"1013\"\n\n$`$P41B`\n[1] \"32\"\n\n$`$P41E`\n[1] \"0,0\"\n\n$`$P41N`\n[1] \"B2-A\"\n\n$`$P41R`\n[1] \"4194304\"\n\n$`$P41TYPE`\n[1] \"Raw_Fluorescence\"\n\n$`$P41V`\n[1] \"483\"\n\n$`$P42B`\n[1] \"32\"\n\n$`$P42E`\n[1] \"0,0\"\n\n$`$P42N`\n[1] \"B3-A\"\n\n$`$P42R`\n[1] \"4194304\"\n\n$`$P42TYPE`\n[1] \"Raw_Fluorescence\"\n\n$`$P42V`\n[1] \"471\"\n\n$`$P43B`\n[1] \"32\"\n\n$`$P43E`\n[1] \"0,0\"\n\n$`$P43N`\n[1] \"B4-A\"\n\n$`$P43R`\n[1] \"4194304\"\n\n$`$P43TYPE`\n[1] \"Raw_Fluorescence\"\n\n$`$P43V`\n[1] \"473\"\n\n$`$P44B`\n[1] \"32\"\n\n$`$P44E`\n[1] \"0,0\"\n\n$`$P44N`\n[1] \"B5-A\"\n\n$`$P44R`\n[1] \"4194304\"\n\n$`$P44TYPE`\n[1] \"Raw_Fluorescence\"\n\n$`$P44V`\n[1] \"467\"\n\n$`$P45B`\n[1] \"32\"\n\n$`$P45E`\n[1] \"0,0\"\n\n$`$P45N`\n[1] \"B6-A\"\n\n$`$P45R`\n[1] \"4194304\"\n\n$`$P45TYPE`\n[1] \"Raw_Fluorescence\"\n\n$`$P45V`\n[1] \"284\"\n\n$`$P46B`\n[1] \"32\"\n\n$`$P46E`\n[1] \"0,0\"\n\n$`$P46N`\n[1] \"B7-A\"\n\n$`$P46R`\n[1] \"4194304\"\n\n$`$P46TYPE`\n[1] \"Raw_Fluorescence\"\n\n$`$P46V`\n[1] \"531\"\n\n$`$P47B`\n[1] \"32\"\n\n$`$P47E`\n[1] \"0,0\"\n\n$`$P47N`\n[1] \"B8-A\"\n\n$`$P47R`\n[1] \"4194304\"\n\n$`$P47TYPE`\n[1] \"Raw_Fluorescence\"\n\n$`$P47V`\n[1] \"432\"\n\n$`$P48B`\n[1] \"32\"\n\n$`$P48E`\n[1] \"0,0\"\n\n$`$P48N`\n[1] \"B9-A\"\n\n$`$P48R`\n[1] \"4194304\"\n\n$`$P48TYPE`\n[1] \"Raw_Fluorescence\"\n\n$`$P48V`\n[1] \"675\"\n\n$`$P49B`\n[1] \"32\"\n\n$`$P49E`\n[1] \"0,0\"\n\n$`$P49N`\n[1] \"B10-A\"\n\n$`$P49R`\n[1] \"4194304\"\n\n$`$P49TYPE`\n[1] 
\"Raw_Fluorescence\"\n\n$`$P49V`\n[1] \"490\"\n\n$`$P4B`\n[1] \"32\"\n\n$`$P4E`\n[1] \"0,0\"\n\n$`$P4N`\n[1] \"UV3-A\"\n\n$`$P4R`\n[1] \"4194304\"\n\n$`$P4TYPE`\n[1] \"Raw_Fluorescence\"\n\n$`$P4V`\n[1] \"677\"\n\n$`$P50B`\n[1] \"32\"\n\n$`$P50E`\n[1] \"0,0\"\n\n$`$P50N`\n[1] \"B11-A\"\n\n$`$P50R`\n[1] \"4194304\"\n\n$`$P50TYPE`\n[1] \"Raw_Fluorescence\"\n\n$`$P50V`\n[1] \"286\"\n\n$`$P51B`\n[1] \"32\"\n\n$`$P51E`\n[1] \"0,0\"\n\n$`$P51N`\n[1] \"B12-A\"\n\n$`$P51R`\n[1] \"4194304\"\n\n$`$P51TYPE`\n[1] \"Raw_Fluorescence\"\n\n$`$P51V`\n[1] \"407\"\n\n$`$P52B`\n[1] \"32\"\n\n$`$P52E`\n[1] \"0,0\"\n\n$`$P52N`\n[1] \"B13-A\"\n\n$`$P52R`\n[1] \"4194304\"\n\n$`$P52TYPE`\n[1] \"Raw_Fluorescence\"\n\n$`$P52V`\n[1] \"700\"\n\n$`$P53B`\n[1] \"32\"\n\n$`$P53E`\n[1] \"0,0\"\n\n$`$P53N`\n[1] \"B14-A\"\n\n$`$P53R`\n[1] \"4194304\"\n\n$`$P53TYPE`\n[1] \"Raw_Fluorescence\"\n\n$`$P53V`\n[1] \"693\"\n\n$`$P54B`\n[1] \"32\"\n\n$`$P54E`\n[1] \"0,0\"\n\n$`$P54N`\n[1] \"R1-A\"\n\n$`$P54R`\n[1] \"4194304\"\n\n$`$P54TYPE`\n[1] \"Raw_Fluorescence\"\n\n$`$P54V`\n[1] \"158\"\n\n$`$P55B`\n[1] \"32\"\n\n$`$P55E`\n[1] \"0,0\"\n\n$`$P55N`\n[1] \"R2-A\"\n\n$`$P55R`\n[1] \"4194304\"\n\n$`$P55TYPE`\n[1] \"Raw_Fluorescence\"\n\n$`$P55V`\n[1] \"245\"\n\n$`$P56B`\n[1] \"32\"\n\n$`$P56E`\n[1] \"0,0\"\n\n$`$P56N`\n[1] \"R3-A\"\n\n$`$P56R`\n[1] \"4194304\"\n\n$`$P56TYPE`\n[1] \"Raw_Fluorescence\"\n\n$`$P56V`\n[1] \"338\"\n\n$`$P57B`\n[1] \"32\"\n\n$`$P57E`\n[1] \"0,0\"\n\n$`$P57N`\n[1] \"R4-A\"\n\n$`$P57R`\n[1] \"4194304\"\n\n$`$P57TYPE`\n[1] \"Raw_Fluorescence\"\n\n$`$P57V`\n[1] \"238\"\n\n$`$P58B`\n[1] \"32\"\n\n$`$P58E`\n[1] \"0,0\"\n\n$`$P58N`\n[1] \"R5-A\"\n\n$`$P58R`\n[1] \"4194304\"\n\n$`$P58TYPE`\n[1] \"Raw_Fluorescence\"\n\n$`$P58V`\n[1] \"191\"\n\n$`$P59B`\n[1] \"32\"\n\n$`$P59E`\n[1] \"0,0\"\n\n$`$P59N`\n[1] \"R6-A\"\n\n$`$P59R`\n[1] \"4194304\"\n\n$`$P59TYPE`\n[1] \"Raw_Fluorescence\"\n\n$`$P59V`\n[1] \"274\"\n\n$`$P5B`\n[1] \"32\"\n\n$`$P5E`\n[1] \"0,0\"\n\n$`$P5N`\n[1] 
\"UV4-A\"\n\n$`$P5R`\n[1] \"4194304\"\n\n$`$P5TYPE`\n[1] \"Raw_Fluorescence\"\n\n$`$P5V`\n[1] \"1022\"\n\n$`$P60B`\n[1] \"32\"\n\n$`$P60E`\n[1] \"0,0\"\n\n$`$P60N`\n[1] \"R7-A\"\n\n$`$P60R`\n[1] \"4194304\"\n\n$`$P60TYPE`\n[1] \"Raw_Fluorescence\"\n\n$`$P60V`\n[1] \"524\"\n\n$`$P61B`\n[1] \"32\"\n\n$`$P61E`\n[1] \"0,0\"\n\n$`$P61N`\n[1] \"R8-A\"\n\n$`$P61R`\n[1] \"4194304\"\n\n$`$P61TYPE`\n[1] \"Raw_Fluorescence\"\n\n$`$P61V`\n[1] \"243\"\n\n$`$P6B`\n[1] \"32\"\n\n$`$P6E`\n[1] \"0,0\"\n\n$`$P6N`\n[1] \"UV5-A\"\n\n$`$P6R`\n[1] \"4194304\"\n\n$`$P6TYPE`\n[1] \"Raw_Fluorescence\"\n\n$`$P6V`\n[1] \"616\"\n\n$`$P7B`\n[1] \"32\"\n\n$`$P7E`\n[1] \"0,0\"\n\n$`$P7N`\n[1] \"UV6-A\"\n\n$`$P7R`\n[1] \"4194304\"\n\n$`$P7TYPE`\n[1] \"Raw_Fluorescence\"\n\n$`$P7V`\n[1] \"506\"\n\n$`$P8B`\n[1] \"32\"\n\n$`$P8E`\n[1] \"0,0\"\n\n$`$P8N`\n[1] \"UV7-A\"\n\n$`$P8R`\n[1] \"4194304\"\n\n$`$P8TYPE`\n[1] \"Raw_Fluorescence\"\n\n$`$P8V`\n[1] \"661\"\n\n$`$P9B`\n[1] \"32\"\n\n$`$P9E`\n[1] \"0,0\"\n\n$`$P9N`\n[1] \"UV8-A\"\n\n$`$P9R`\n[1] \"4194304\"\n\n$`$P9TYPE`\n[1] \"Raw_Fluorescence\"\n\n$`$P9V`\n[1] \"514\"\n\n$`$PAR`\n[1] \"61\"\n\n$`$PROJ`\n[1] \"CellCounts4L_AB_05\"\n\n$`$SPILLOVER`\n UV1-A UV2-A UV3-A UV4-A UV5-A UV6-A UV7-A UV8-A UV9-A UV10-A UV11-A\n [1,] 1e+00 0 0 0 0 0 0 0 0 0 0\n [2,] 1e-06 1 0 0 0 0 0 0 0 0 0\n [3,] 0e+00 0 1 0 0 0 0 0 0 0 0\n [4,] 0e+00 0 0 1 0 0 0 0 0 0 0\n [5,] 0e+00 0 0 0 1 0 0 0 0 0 0\n [6,] 0e+00 0 0 0 0 1 0 0 0 0 0\n [7,] 0e+00 0 0 0 0 0 1 0 0 0 0\n [8,] 0e+00 0 0 0 0 0 0 1 0 0 0\n [9,] 0e+00 0 0 0 0 0 0 0 1 0 0\n[10,] 0e+00 0 0 0 0 0 0 0 0 1 0\n[11,] 0e+00 0 0 0 0 0 0 0 0 0 1\n[12,] 0e+00 0 0 0 0 0 0 0 0 0 0\n[13,] 0e+00 0 0 0 0 0 0 0 0 0 0\n[14,] 0e+00 0 0 0 0 0 0 0 0 0 0\n[15,] 0e+00 0 0 0 0 0 0 0 0 0 0\n[16,] 0e+00 0 0 0 0 0 0 0 0 0 0\n[17,] 0e+00 0 0 0 0 0 0 0 0 0 0\n[18,] 0e+00 0 0 0 0 0 0 0 0 0 0\n[19,] 0e+00 0 0 0 0 0 0 0 0 0 0\n[20,] 0e+00 0 0 0 0 0 0 0 0 0 0\n[21,] 0e+00 0 0 0 0 0 0 0 0 0 0\n[22,] 0e+00 0 0 0 0 0 0 0 0 0 0\n[23,] 0e+00 0 0 0 0 
0 0 0 0 0 0\n[24,] 0e+00 0 0 0 0 0 0 0 0 0 0\n[25,] 0e+00 0 0 0 0 0 0 0 0 0 0\n[26,] 0e+00 0 0 0 0 0 0 0 0 0 0\n[27,] 0e+00 0 0 0 0 0 0 0 0 0 0\n[28,] 0e+00 0 0 0 0 0 0 0 0 0 0\n[29,] 0e+00 0 0 0 0 0 0 0 0 0 0\n[30,] 0e+00 0 0 0 0 0 0 0 0 0 0\n[31,] 0e+00 0 0 0 0 0 0 0 0 0 0\n[32,] 0e+00 0 0 0 0 0 0 0 0 0 0\n[33,] 0e+00 0 0 0 0 0 0 0 0 0 0\n[34,] 0e+00 0 0 0 0 0 0 0 0 0 0\n[35,] 0e+00 0 0 0 0 0 0 0 0 0 0\n[36,] 0e+00 0 0 0 0 0 0 0 0 0 0\n[37,] 0e+00 0 0 0 0 0 0 0 0 0 0\n[38,] 0e+00 0 0 0 0 0 0 0 0 0 0\n[39,] 0e+00 0 0 0 0 0 0 0 0 0 0\n[40,] 0e+00 0 0 0 0 0 0 0 0 0 0\n[41,] 0e+00 0 0 0 0 0 0 0 0 0 0\n[42,] 0e+00 0 0 0 0 0 0 0 0 0 0\n[43,] 0e+00 0 0 0 0 0 0 0 0 0 0\n[44,] 0e+00 0 0 0 0 0 0 0 0 0 0\n[45,] 0e+00 0 0 0 0 0 0 0 0 0 0\n[46,] 0e+00 0 0 0 0 0 0 0 0 0 0\n[47,] 0e+00 0 0 0 0 0 0 0 0 0 0\n[48,] 0e+00 0 0 0 0 0 0 0 0 0 0\n[49,] 0e+00 0 0 0 0 0 0 0 0 0 0\n[50,] 0e+00 0 0 0 0 0 0 0 0 0 0\n[51,] 0e+00 0 0 0 0 0 0 0 0 0 0\n[52,] 0e+00 0 0 0 0 0 0 0 0 0 0\n[53,] 0e+00 0 0 0 0 0 0 0 0 0 0\n[54,] 0e+00 0 0 0 0 0 0 0 0 0 0\n UV12-A UV13-A UV14-A UV15-A UV16-A V1-A V2-A V3-A V4-A V5-A V6-A V7-A\n [1,] 0 0 0 0 0 0 0 0 0 0 0 0\n [2,] 0 0 0 0 0 0 0 0 0 0 0 0\n [3,] 0 0 0 0 0 0 0 0 0 0 0 0\n [4,] 0 0 0 0 0 0 0 0 0 0 0 0\n [5,] 0 0 0 0 0 0 0 0 0 0 0 0\n [6,] 0 0 0 0 0 0 0 0 0 0 0 0\n [7,] 0 0 0 0 0 0 0 0 0 0 0 0\n [8,] 0 0 0 0 0 0 0 0 0 0 0 0\n [9,] 0 0 0 0 0 0 0 0 0 0 0 0\n[10,] 0 0 0 0 0 0 0 0 0 0 0 0\n[11,] 0 0 0 0 0 0 0 0 0 0 0 0\n[12,] 1 0 0 0 0 0 0 0 0 0 0 0\n[13,] 0 1 0 0 0 0 0 0 0 0 0 0\n[14,] 0 0 1 0 0 0 0 0 0 0 0 0\n[15,] 0 0 0 1 0 0 0 0 0 0 0 0\n[16,] 0 0 0 0 1 0 0 0 0 0 0 0\n[17,] 0 0 0 0 0 1 0 0 0 0 0 0\n[18,] 0 0 0 0 0 0 1 0 0 0 0 0\n[19,] 0 0 0 0 0 0 0 1 0 0 0 0\n[20,] 0 0 0 0 0 0 0 0 1 0 0 0\n[21,] 0 0 0 0 0 0 0 0 0 1 0 0\n[22,] 0 0 0 0 0 0 0 0 0 0 1 0\n[23,] 0 0 0 0 0 0 0 0 0 0 0 1\n[24,] 0 0 0 0 0 0 0 0 0 0 0 0\n[25,] 0 0 0 0 0 0 0 0 0 0 0 0\n[26,] 0 0 0 0 0 0 0 0 0 0 0 0\n[27,] 0 0 0 0 0 0 0 0 0 0 0 0\n[28,] 0 0 0 0 0 0 0 0 0 0 0 0\n[29,] 0 0 0 0 0 0 0 0 0 
0 0 0\n[30,] 0 0 0 0 0 0 0 0 0 0 0 0\n[31,] 0 0 0 0 0 0 0 0 0 0 0 0\n[32,] 0 0 0 0 0 0 0 0 0 0 0 0\n[33,] 0 0 0 0 0 0 0 0 0 0 0 0\n[34,] 0 0 0 0 0 0 0 0 0 0 0 0\n[35,] 0 0 0 0 0 0 0 0 0 0 0 0\n[36,] 0 0 0 0 0 0 0 0 0 0 0 0\n[37,] 0 0 0 0 0 0 0 0 0 0 0 0\n[38,] 0 0 0 0 0 0 0 0 0 0 0 0\n[39,] 0 0 0 0 0 0 0 0 0 0 0 0\n[40,] 0 0 0 0 0 0 0 0 0 0 0 0\n[41,] 0 0 0 0 0 0 0 0 0 0 0 0\n[42,] 0 0 0 0 0 0 0 0 0 0 0 0\n[43,] 0 0 0 0 0 0 0 0 0 0 0 0\n[44,] 0 0 0 0 0 0 0 0 0 0 0 0\n[45,] 0 0 0 0 0 0 0 0 0 0 0 0\n[46,] 0 0 0 0 0 0 0 0 0 0 0 0\n[47,] 0 0 0 0 0 0 0 0 0 0 0 0\n[48,] 0 0 0 0 0 0 0 0 0 0 0 0\n[49,] 0 0 0 0 0 0 0 0 0 0 0 0\n[50,] 0 0 0 0 0 0 0 0 0 0 0 0\n[51,] 0 0 0 0 0 0 0 0 0 0 0 0\n[52,] 0 0 0 0 0 0 0 0 0 0 0 0\n[53,] 0 0 0 0 0 0 0 0 0 0 0 0\n[54,] 0 0 0 0 0 0 0 0 0 0 0 0\n V8-A V9-A V10-A V11-A V12-A V13-A V14-A V15-A V16-A B1-A B2-A B3-A B4-A\n [1,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n [2,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n [3,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n [4,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n [5,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n [6,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n [7,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n [8,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n [9,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n[10,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n[11,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n[12,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n[13,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n[14,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n[15,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n[16,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n[17,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n[18,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n[19,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n[20,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n[21,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n[22,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n[23,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n[24,] 1 0 0 0 0 0 0 0 0 0 0 0 0\n[25,] 0 1 0 0 0 0 0 0 0 0 0 0 0\n[26,] 0 0 1 0 0 0 0 0 0 0 0 0 0\n[27,] 0 0 0 1 0 0 0 0 0 0 0 0 0\n[28,] 0 0 0 0 1 0 0 0 0 0 0 0 0\n[29,] 0 0 0 0 0 1 0 0 0 0 0 0 0\n[30,] 0 0 0 0 0 0 1 0 0 0 0 0 0\n[31,] 0 0 0 0 0 0 0 1 0 0 0 0 0\n[32,] 0 0 0 0 0 0 0 0 1 0 0 0 0\n[33,] 0 0 0 0 0 0 0 0 0 1 0 0 0\n[34,] 0 0 0 0 0 0 0 0 0 0 1 0 0\n[35,] 0 0 0 0 0 0 0 0 
0 0 0 1 0\n[36,] 0 0 0 0 0 0 0 0 0 0 0 0 1\n[37,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n[38,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n[39,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n[40,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n[41,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n[42,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n[43,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n[44,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n[45,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n[46,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n[47,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n[48,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n[49,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n[50,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n[51,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n[52,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n[53,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n[54,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n B5-A B6-A B7-A B8-A B9-A B10-A B11-A B12-A B13-A B14-A R1-A R2-A R3-A\n [1,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n [2,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n [3,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n [4,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n [5,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n [6,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n [7,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n [8,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n [9,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n[10,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n[11,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n[12,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n[13,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n[14,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n[15,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n[16,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n[17,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n[18,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n[19,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n[20,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n[21,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n[22,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n[23,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n[24,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n[25,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n[26,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n[27,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n[28,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n[29,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n[30,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n[31,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n[32,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n[33,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n[34,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n[35,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n[36,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n[37,] 1 0 0 0 0 0 0 0 0 0 0 0 0\n[38,] 0 1 0 0 0 0 0 0 0 0 0 0 0\n[39,] 0 0 1 0 0 0 0 0 0 0 0 0 
0\n[40,] 0 0 0 1 0 0 0 0 0 0 0 0 0\n[41,] 0 0 0 0 1 0 0 0 0 0 0 0 0\n[42,] 0 0 0 0 0 1 0 0 0 0 0 0 0\n[43,] 0 0 0 0 0 0 1 0 0 0 0 0 0\n[44,] 0 0 0 0 0 0 0 1 0 0 0 0 0\n[45,] 0 0 0 0 0 0 0 0 1 0 0 0 0\n[46,] 0 0 0 0 0 0 0 0 0 1 0 0 0\n[47,] 0 0 0 0 0 0 0 0 0 0 1 0 0\n[48,] 0 0 0 0 0 0 0 0 0 0 0 1 0\n[49,] 0 0 0 0 0 0 0 0 0 0 0 0 1\n[50,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n[51,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n[52,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n[53,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n[54,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n R4-A R5-A R6-A R7-A R8-A\n [1,] 0 0 0 0 0\n [2,] 0 0 0 0 0\n [3,] 0 0 0 0 0\n [4,] 0 0 0 0 0\n [5,] 0 0 0 0 0\n [6,] 0 0 0 0 0\n [7,] 0 0 0 0 0\n [8,] 0 0 0 0 0\n [9,] 0 0 0 0 0\n[10,] 0 0 0 0 0\n[11,] 0 0 0 0 0\n[12,] 0 0 0 0 0\n[13,] 0 0 0 0 0\n[14,] 0 0 0 0 0\n[15,] 0 0 0 0 0\n[16,] 0 0 0 0 0\n[17,] 0 0 0 0 0\n[18,] 0 0 0 0 0\n[19,] 0 0 0 0 0\n[20,] 0 0 0 0 0\n[21,] 0 0 0 0 0\n[22,] 0 0 0 0 0\n[23,] 0 0 0 0 0\n[24,] 0 0 0 0 0\n[25,] 0 0 0 0 0\n[26,] 0 0 0 0 0\n[27,] 0 0 0 0 0\n[28,] 0 0 0 0 0\n[29,] 0 0 0 0 0\n[30,] 0 0 0 0 0\n[31,] 0 0 0 0 0\n[32,] 0 0 0 0 0\n[33,] 0 0 0 0 0\n[34,] 0 0 0 0 0\n[35,] 0 0 0 0 0\n[36,] 0 0 0 0 0\n[37,] 0 0 0 0 0\n[38,] 0 0 0 0 0\n[39,] 0 0 0 0 0\n[40,] 0 0 0 0 0\n[41,] 0 0 0 0 0\n[42,] 0 0 0 0 0\n[43,] 0 0 0 0 0\n[44,] 0 0 0 0 0\n[45,] 0 0 0 0 0\n[46,] 0 0 0 0 0\n[47,] 0 0 0 0 0\n[48,] 0 0 0 0 0\n[49,] 0 0 0 0 0\n[50,] 1 0 0 0 0\n[51,] 0 1 0 0 0\n[52,] 0 0 1 0 0\n[53,] 0 0 0 1 0\n[54,] 0 0 0 0 1\n\n$`$TIMESTEP`\n[1] \"0.0001\"\n\n$`$TOT`\n[1] \"100\"\n\n$`$VOL`\n[1] \"30.31\"\n\n$`APPLY COMPENSATION`\n[1] \"FALSE\"\n\n$CHARSET\n[1] \"utf-8\"\n\n$CREATOR\n[1] \"SpectroFlo 3.3.0\"\n\n$FCSversion\n[1] \"3\"\n\n$FILENAME\n[1] \"data/CellCounts4L_AB_05_ND050_05.fcs\"\n\n$`FSC ASF`\n[1] \"1.21\"\n\n$GROUPNAME\n[1] \"ND050\"\n\n$GUID\n[1] \"CellCounts4L_AB_05-ND050-05.fcs\"\n\n$LASER1ASF\n[1] \"1.09\"\n\n$LASER1DELAY\n[1] \"-19.525\"\n\n$LASER1NAME\n[1] \"Violet\"\n\n$LASER2ASF\n[1] \"1.14\"\n\n$LASER2DELAY\n[1] \"0\"\n\n$LASER2NAME\n[1] 
\"Blue\"\n\n$LASER3ASF\n[1] \"1.02\"\n\n$LASER3DELAY\n[1] \"20.15\"\n\n$LASER3NAME\n[1] \"Red\"\n\n$LASER4ASF\n[1] \"0.92\"\n\n$LASER4DELAY\n[1] \"40.725\"\n\n$LASER4NAME\n[1] \"UV\"\n\n$ORIGINALGUID\n[1] \"CellCounts4L_AB_05-ND050-05.fcs\"\n\n$P10DISPLAY\n[1] \"LOG\"\n\n$P11DISPLAY\n[1] \"LOG\"\n\n$P12DISPLAY\n[1] \"LOG\"\n\n$P13DISPLAY\n[1] \"LOG\"\n\n$P14DISPLAY\n[1] \"LOG\"\n\n$P15DISPLAY\n[1] \"LOG\"\n\n$P16DISPLAY\n[1] \"LOG\"\n\n$P17DISPLAY\n[1] \"LOG\"\n\n$P18DISPLAY\n[1] \"LIN\"\n\n$P19DISPLAY\n[1] \"LIN\"\n\n$P1DISPLAY\n[1] \"LOG\"\n\n$P20DISPLAY\n[1] \"LOG\"\n\n$P21DISPLAY\n[1] \"LOG\"\n\n$P22DISPLAY\n[1] \"LOG\"\n\n$P23DISPLAY\n[1] \"LOG\"\n\n$P24DISPLAY\n[1] \"LOG\"\n\n$P25DISPLAY\n[1] \"LOG\"\n\n$P26DISPLAY\n[1] \"LOG\"\n\n$P27DISPLAY\n[1] \"LOG\"\n\n$P28DISPLAY\n[1] \"LOG\"\n\n$P29DISPLAY\n[1] \"LOG\"\n\n$P2DISPLAY\n[1] \"LOG\"\n\n$P30DISPLAY\n[1] \"LOG\"\n\n$P31DISPLAY\n[1] \"LOG\"\n\n$P32DISPLAY\n[1] \"LOG\"\n\n$P33DISPLAY\n[1] \"LOG\"\n\n$P34DISPLAY\n[1] \"LOG\"\n\n$P35DISPLAY\n[1] \"LOG\"\n\n$P36DISPLAY\n[1] \"LIN\"\n\n$P37DISPLAY\n[1] \"LIN\"\n\n$P38DISPLAY\n[1] \"LIN\"\n\n$P39DISPLAY\n[1] \"LIN\"\n\n$P3DISPLAY\n[1] \"LOG\"\n\n$P40DISPLAY\n[1] \"LOG\"\n\n$P41DISPLAY\n[1] \"LOG\"\n\n$P42DISPLAY\n[1] \"LOG\"\n\n$P43DISPLAY\n[1] \"LOG\"\n\n$P44DISPLAY\n[1] \"LOG\"\n\n$P45DISPLAY\n[1] \"LOG\"\n\n$P46DISPLAY\n[1] \"LOG\"\n\n$P47DISPLAY\n[1] \"LOG\"\n\n$P48DISPLAY\n[1] \"LOG\"\n\n$P49DISPLAY\n[1] \"LOG\"\n\n$P4DISPLAY\n[1] \"LOG\"\n\n$P50DISPLAY\n[1] \"LOG\"\n\n$P51DISPLAY\n[1] \"LOG\"\n\n$P52DISPLAY\n[1] \"LOG\"\n\n$P53DISPLAY\n[1] \"LOG\"\n\n$P54DISPLAY\n[1] \"LOG\"\n\n$P55DISPLAY\n[1] \"LOG\"\n\n$P56DISPLAY\n[1] \"LOG\"\n\n$P57DISPLAY\n[1] \"LOG\"\n\n$P58DISPLAY\n[1] \"LOG\"\n\n$P59DISPLAY\n[1] \"LOG\"\n\n$P5DISPLAY\n[1] \"LOG\"\n\n$P60DISPLAY\n[1] \"LOG\"\n\n$P61DISPLAY\n[1] \"LOG\"\n\n$P6DISPLAY\n[1] \"LOG\"\n\n$P7DISPLAY\n[1] \"LOG\"\n\n$P8DISPLAY\n[1] \"LOG\"\n\n$P9DISPLAY\n[1] \"LOG\"\n\n$THRESHOLD\n[1] \"(FSC,50000)\"\n\n$TUBENAME\n[1] 
\"05\"\n\n$USERSETTINGNAME\n[1] \"DTR_CellCounts\"\n\n$`WINDOW EXTENSION`\n[1] \"3\"\n\n\nThe returned list is a little too large to reasonably explore. We can attempt to subset using the head() function as shown below\n\nhead(DescriptionList, 5)\n\n$`$BEGINANALYSIS`\n[1] \"0\"\n\n$`$BEGINDATA`\n[1] \"33312\"\n\n$`$BEGINSTEXT`\n[1] \"0\"\n\n$`$BTIM`\n[1] \"13:55:29.85\"\n\n$`$BYTEORD`\n[1] \"4,3,2,1\"\n\n\nAlternatively, it might be better to subset by position index\n\nDescriptionList[1:10]\n\n$`$BEGINANALYSIS`\n[1] \"0\"\n\n$`$BEGINDATA`\n[1] \"33312\"\n\n$`$BEGINSTEXT`\n[1] \"0\"\n\n$`$BTIM`\n[1] \"13:55:29.85\"\n\n$`$BYTEORD`\n[1] \"4,3,2,1\"\n\n$`$CYT`\n[1] \"Aurora\"\n\n$`$CYTOLIB_VERSION`\n[1] \"2.22.0\"\n\n$`$CYTSN`\n[1] \"V0333\"\n\n$`$DATATYPE`\n[1] \"F\"\n\n$`$DATE`\n[1] \"04-Aug-2025\"\n\n\nAnd just as we saw for exprs and parameters, there is also a Bioconductor helper keyword() function to access this same information directly from the flowFrame.\n\nDescriptionList_Alternate <- keyword(flowFrame)\n\nIf we run the class() function, we can see that DescriptionList is an actual “list”.\n\nclass(DescriptionList)\n\n[1] \"list\"\n\n\nThis is in contrast to the vectors we have previously generated. While these are also list-like, they are what are known as atomic vectors, which contain values that are all of a single type: character, numeric, or logical.\n\nFluorophores <- c(\"BV421\", \"FITC\", \"PE\", \"APC\")\nclass(Fluorophores)\n\n[1] \"character\"\n\n\n\nPanelAntibodyCounts <- c(5, 12, 19, 26, 34, 46, 51)\nclass(PanelAntibodyCounts)\n\n[1] \"numeric\"\n\n\n\nSpecimenIndexToKeep <- c(TRUE, TRUE, FALSE, TRUE)\nclass(SpecimenIndexToKeep)\n\n[1] \"logical\"\n\n\nA list, on the other hand, is not restricted to objects of a single atomic type. 
For example, I could include the three previous vectors into a list using the list() function.\n\nMyListofVectors <- list(Fluorophores, PanelAntibodyCounts, SpecimenIndexToKeep)\nstr(MyListofVectors)\n\nList of 3\n $ : chr [1:4] \"BV421\" \"FITC\" \"PE\" \"APC\"\n $ : num [1:7] 5 12 19 26 34 46 51\n $ : logi [1:4] TRUE TRUE FALSE TRUE\n\n\nWe can see that the Description/Keyword list we retrieved from our flowFrame shares a similar format.\n\nstr(DescriptionList[1:10])\n\nList of 10\n $ $BEGINANALYSIS : chr \"0\"\n $ $BEGINDATA : chr \"33312\"\n $ $BEGINSTEXT : chr \"0\"\n $ $BTIM : chr \"13:55:29.85\"\n $ $BYTEORD : chr \"4,3,2,1\"\n $ $CYT : chr \"Aurora\"\n $ $CYTOLIB_VERSION: chr \"2.22.0\"\n $ $CYTSN : chr \"V0333\"\n $ $DATATYPE : chr \"F\"\n $ $DATE : chr \"04-Aug-2025\"\n\n\nBut in this case, there are also names present ($BEGINANALYSIS, $BEGINDATA, etc). What if we had provided names to our list of vectors? Would the format match?\nWhen we assign a name to each of the vectors (using =), we get the same kind of structure as what we see in Description.\n\nMyNamedListofVectors <- list(FluorophoresNamed=Fluorophores,\n PanelAntibodyCountsNamed=PanelAntibodyCounts,\n SpecimenIndexToKeepNamed=SpecimenIndexToKeep)\n\nstr(MyNamedListofVectors)\n\nList of 3\n $ FluorophoresNamed : chr [1:4] \"BV421\" \"FITC\" \"PE\" \"APC\"\n $ PanelAntibodyCountsNamed: num [1:7] 5 12 19 26 34 46 51\n $ SpecimenIndexToKeepNamed: logi [1:4] TRUE TRUE FALSE TRUE\n\n\nWe could then isolate items from that list using the $ operator.\n\nMyNamedListofVectors$\n\n\nAlternatively, we could also access by list index position\n\nMyNamedListofVectors[1]\n\n$FluorophoresNamed\n[1] \"BV421\" \"FITC\" \"PE\" \"APC\" \n\n\nThinking back to the original output from read.FCS(), we recall that it mentioned 599 keywords in the description slot; now we know what was being referenced.", + "crumbs": [ + 
"About", + "Intro to R", + "03 - Inside a .FCS file" + ] }, { - "objectID": "course/03_InsideFCSFile/slides.html#early-metadata", - "href": "course/03_InsideFCSFile/slides.html#early-metadata", + "objectID": "course/03_InsideFCSFile/index.html#early-metadata", + "href": "course/03_InsideFCSFile/index.html#early-metadata", "title": "03 - Inside an FCS File", "section": "Early Metadata", - "text": "Early Metadata\n\n\n\n\n\n\n\n\n.\n\n\nWithin the initial portion, we are getting back metadata keywords related to where and how the particular file was acquired. Keywords of potential interest include:\n\n\n\n\n\n\n\n\n\n\n\n\n\nStart Time\n\n\nWhat time was the .fcs file acquired\n\n\n\n\n\n\n\nDescriptionList$`$BTIM`\n\n[1] \"13:55:29.85\"" + "text": "Early Metadata\nWithin the initial portion, we are getting back metadata keywords related to where and how the particular file was acquired. Keywords of potential interest include:\n\n\n\n\n\n\nStart Time\n\n\n\nWhat time was the .fcs file acquired\n\n\n\n\nDescriptionList$`$BTIM`\n\n[1] \"13:55:29.85\"\n\n\n\n\n\n\n\n\n\nCytometer\n\n\n\nWhat type of cytometer was the .fcs file acquired on\n\n\n\n\nDescriptionList$`$CYT`\n\n[1] \"Aurora\"\n\n\n\n\n\n\n\n\n\n\n\nCytometer Serial Number\n\n\n\nManufacturer Serial Number of the Cytometer\n\n\n\n\nDescriptionList$`$CYTSN`\n\n[1] \"V0333\"\n\n\n\n\n\n\n\n\n\nFCS File Acquisition Date\n\n\n\nWhat was the date of acquisition\n\n\n\n\nDescriptionList$`$DATE`\n\n[1] \"04-Aug-2025\"\n\n\n\n\n\n\n\n\n\n\n\nAcquisition End Time\n\n\n\nWhat time was acquisition stopped\n\n\n\n\nDescriptionList$`$ETIM`\n\n[1] \"13:55:57.02\"\n\n\n\n\n\n\n\n\n\nFile Name\n\n\n\nName of the .fcs file\n\n\n\n\nDescriptionList$`$FIL`\n\n[1] \"CellCounts4L_AB_05-ND050-05.fcs\"\n\n\n\n\n\n\n\n\n\n\n\nOperator\n\n\n\nWho acquired the .fcs file\n\n\n\n\nDescriptionList$`$OP`\n\n[1] \"David Rach\"", + "crumbs": [ + "About", + "Intro to R", + "03 - Inside a .FCS file" + ] }, { - "objectID": 
"course/03_InsideFCSFile/slides.html#detector-values", - "href": "course/03_InsideFCSFile/slides.html#detector-values", + "objectID": "course/03_InsideFCSFile/index.html#detector-values", + "href": "course/03_InsideFCSFile/index.html#detector-values", "title": "03 - Inside an FCS File", "section": "Detector Values", - "text": "Detector Values\n\n\n\n\n\n\n\n\n.\n\n\nThe next major stretch of keywords encode parameter values associated with the individual detectors for at the time of acquisition.\n\n\n\n\n\n\n\nDetectors <- DescriptionList[20:384]\nDetectors\n\n$`$P10B`\n[1] \"32\"\n\n$`$P10E`\n[1] \"0,0\"\n\n$`$P10N`\n[1] \"UV9-A\"\n\n$`$P10R`\n[1] \"4194304\"\n\n$`$P10TYPE`\n[1] \"Raw_Fluorescence\"\n\n$`$P10V`\n[1] \"710\"\n\n$`$P11B`\n[1] \"32\"\n\n$`$P11E`\n[1] \"0,0\"\n\n$`$P11N`\n[1] \"UV10-A\"\n\n$`$P11R`\n[1] \"4194304\"\n\n$`$P11TYPE`\n[1] \"Raw_Fluorescence\"\n\n$`$P11V`\n[1] \"377\"\n\n$`$P12B`\n[1] \"32\"\n\n$`$P12E`\n[1] \"0,0\"\n\n$`$P12N`\n[1] \"UV11-A\"\n\n$`$P12R`\n[1] \"4194304\"\n\n$`$P12TYPE`\n[1] \"Raw_Fluorescence\"\n\n$`$P12V`\n[1] \"469\"\n\n$`$P13B`\n[1] \"32\"\n\n$`$P13E`\n[1] \"0,0\"\n\n$`$P13N`\n[1] \"UV12-A\"\n\n$`$P13R`\n[1] \"4194304\"\n\n$`$P13TYPE`\n[1] \"Raw_Fluorescence\"\n\n$`$P13V`\n[1] \"434\"\n\n$`$P14B`\n[1] \"32\"\n\n$`$P14E`\n[1] \"0,0\"\n\n$`$P14N`\n[1] \"UV13-A\"\n\n$`$P14R`\n[1] \"4194304\"\n\n$`$P14TYPE`\n[1] \"Raw_Fluorescence\"\n\n$`$P14V`\n[1] \"564\"\n\n$`$P15B`\n[1] \"32\"\n\n$`$P15E`\n[1] \"0,0\"\n\n$`$P15N`\n[1] \"UV14-A\"\n\n$`$P15R`\n[1] \"4194304\"\n\n$`$P15TYPE`\n[1] \"Raw_Fluorescence\"\n\n$`$P15V`\n[1] \"975\"\n\n$`$P16B`\n[1] \"32\"\n\n$`$P16E`\n[1] \"0,0\"\n\n$`$P16N`\n[1] \"UV15-A\"\n\n$`$P16R`\n[1] \"4194304\"\n\n$`$P16TYPE`\n[1] \"Raw_Fluorescence\"\n\n$`$P16V`\n[1] \"737\"\n\n$`$P17B`\n[1] \"32\"\n\n$`$P17E`\n[1] \"0,0\"\n\n$`$P17N`\n[1] \"UV16-A\"\n\n$`$P17R`\n[1] \"4194304\"\n\n$`$P17TYPE`\n[1] \"Raw_Fluorescence\"\n\n$`$P17V`\n[1] \"1069\"\n\n$`$P18B`\n[1] \"32\"\n\n$`$P18E`\n[1] 
\"0,0\"\n\n$`$P18N`\n[1] \"SSC-H\"\n\n$`$P18R`\n[1] \"4194304\"\n\n$`$P18TYPE`\n[1] \"Side_Scatter\"\n\n$`$P18V`\n[1] \"334\"\n\n$`$P19B`\n[1] \"32\"\n\n$`$P19E`\n[1] \"0,0\"\n\n$`$P19N`\n[1] \"SSC-A\"\n\n$`$P19R`\n[1] \"4194304\"\n\n$`$P19TYPE`\n[1] \"Side_Scatter\"\n\n$`$P19V`\n[1] \"334\"\n\n$`$P1B`\n[1] \"32\"\n\n$`$P1E`\n[1] \"0,0\"\n\n$`$P1N`\n[1] \"Time\"\n\n$`$P1R`\n[1] \"272140\"\n\n$`$P1TYPE`\n[1] \"Time\"\n\n$`$P20B`\n[1] \"32\"\n\n$`$P20E`\n[1] \"0,0\"\n\n$`$P20N`\n[1] \"V1-A\"\n\n$`$P20R`\n[1] \"4194304\"\n\n$`$P20TYPE`\n[1] \"Raw_Fluorescence\"\n\n$`$P20V`\n[1] \"352\"\n\n$`$P21B`\n[1] \"32\"\n\n$`$P21E`\n[1] \"0,0\"\n\n$`$P21N`\n[1] \"V2-A\"\n\n$`$P21R`\n[1] \"4194304\"\n\n$`$P21TYPE`\n[1] \"Raw_Fluorescence\"\n\n$`$P21V`\n[1] \"412\"\n\n$`$P22B`\n[1] \"32\"\n\n$`$P22E`\n[1] \"0,0\"\n\n$`$P22N`\n[1] \"V3-A\"\n\n$`$P22R`\n[1] \"4194304\"\n\n$`$P22TYPE`\n[1] \"Raw_Fluorescence\"\n\n$`$P22V`\n[1] \"304\"\n\n$`$P23B`\n[1] \"32\"\n\n$`$P23E`\n[1] \"0,0\"\n\n$`$P23N`\n[1] \"V4-A\"\n\n$`$P23R`\n[1] \"4194304\"\n\n$`$P23TYPE`\n[1] \"Raw_Fluorescence\"\n\n$`$P23V`\n[1] \"217\"\n\n$`$P24B`\n[1] \"32\"\n\n$`$P24E`\n[1] \"0,0\"\n\n$`$P24N`\n[1] \"V5-A\"\n\n$`$P24R`\n[1] \"4194304\"\n\n$`$P24TYPE`\n[1] \"Raw_Fluorescence\"\n\n$`$P24V`\n[1] \"257\"\n\n$`$P25B`\n[1] \"32\"\n\n$`$P25E`\n[1] \"0,0\"\n\n$`$P25N`\n[1] \"V6-A\"\n\n$`$P25R`\n[1] \"4194304\"\n\n$`$P25TYPE`\n[1] \"Raw_Fluorescence\"\n\n$`$P25V`\n[1] \"218\"\n\n$`$P26B`\n[1] \"32\"\n\n$`$P26E`\n[1] \"0,0\"\n\n$`$P26N`\n[1] \"V7-A\"\n\n$`$P26R`\n[1] \"4194304\"\n\n$`$P26TYPE`\n[1] \"Raw_Fluorescence\"\n\n$`$P26V`\n[1] \"303\"\n\n$`$P27B`\n[1] \"32\"\n\n$`$P27E`\n[1] \"0,0\"\n\n$`$P27N`\n[1] \"V8-A\"\n\n$`$P27R`\n[1] \"4194304\"\n\n$`$P27TYPE`\n[1] \"Raw_Fluorescence\"\n\n$`$P27V`\n[1] \"461\"\n\n$`$P28B`\n[1] \"32\"\n\n$`$P28E`\n[1] \"0,0\"\n\n$`$P28N`\n[1] \"V9-A\"\n\n$`$P28R`\n[1] \"4194304\"\n\n$`$P28TYPE`\n[1] \"Raw_Fluorescence\"\n\n$`$P28V`\n[1] \"320\"\n\n$`$P29B`\n[1] \"32\"\n\n$`$P29E`\n[1] 
\"0,0\"\n\n$`$P29N`\n[1] \"V10-A\"\n\n$`$P29R`\n[1] \"4194304\"\n\n$`$P29TYPE`\n[1] \"Raw_Fluorescence\"\n\n$`$P29V`\n[1] \"359\"\n\n$`$P2B`\n[1] \"32\"\n\n$`$P2E`\n[1] \"0,0\"\n\n$`$P2N`\n[1] \"UV1-A\"\n\n$`$P2R`\n[1] \"4194304\"\n\n$`$P2TYPE`\n[1] \"Raw_Fluorescence\"\n\n$`$P2V`\n[1] \"1008\"\n\n$`$P30B`\n[1] \"32\"\n\n$`$P30E`\n[1] \"0,0\"\n\n$`$P30N`\n[1] \"V11-A\"\n\n$`$P30R`\n[1] \"4194304\"\n\n$`$P30TYPE`\n[1] \"Raw_Fluorescence\"\n\n$`$P30V`\n[1] \"271\"\n\n$`$P31B`\n[1] \"32\"\n\n$`$P31E`\n[1] \"0,0\"\n\n$`$P31N`\n[1] \"V12-A\"\n\n$`$P31R`\n[1] \"4194304\"\n\n$`$P31TYPE`\n[1] \"Raw_Fluorescence\"\n\n$`$P31V`\n[1] \"234\"\n\n$`$P32B`\n[1] \"32\"\n\n$`$P32E`\n[1] \"0,0\"\n\n$`$P32N`\n[1] \"V13-A\"\n\n$`$P32R`\n[1] \"4194304\"\n\n$`$P32TYPE`\n[1] \"Raw_Fluorescence\"\n\n$`$P32V`\n[1] \"236\"\n\n$`$P33B`\n[1] \"32\"\n\n$`$P33E`\n[1] \"0,0\"\n\n$`$P33N`\n[1] \"V14-A\"\n\n$`$P33R`\n[1] \"4194304\"\n\n$`$P33TYPE`\n[1] \"Raw_Fluorescence\"\n\n$`$P33V`\n[1] \"318\"\n\n$`$P34B`\n[1] \"32\"\n\n$`$P34E`\n[1] \"0,0\"\n\n$`$P34N`\n[1] \"V15-A\"\n\n$`$P34R`\n[1] \"4194304\"\n\n$`$P34TYPE`\n[1] \"Raw_Fluorescence\"\n\n$`$P34V`\n[1] \"602\"\n\n$`$P35B`\n[1] \"32\"\n\n$`$P35E`\n[1] \"0,0\"\n\n$`$P35N`\n[1] \"V16-A\"\n\n$`$P35R`\n[1] \"4194304\"\n\n$`$P35TYPE`\n[1] \"Raw_Fluorescence\"\n\n$`$P35V`\n[1] \"372\"\n\n$`$P36B`\n[1] \"32\"\n\n$`$P36E`\n[1] \"0,0\"\n\n$`$P36N`\n[1] \"FSC-H\"\n\n$`$P36R`\n[1] \"4194304\"\n\n$`$P36TYPE`\n[1] \"Forward_Scatter\"\n\n$`$P36V`\n[1] \"55\"\n\n$`$P37B`\n[1] \"32\"\n\n$`$P37E`\n[1] \"0,0\"\n\n$`$P37N`\n[1] \"FSC-A\"\n\n$`$P37R`\n[1] \"4194304\"\n\n$`$P37TYPE`\n[1] \"Forward_Scatter\"\n\n$`$P37V`\n[1] \"55\"\n\n$`$P38B`\n[1] \"32\"\n\n$`$P38E`\n[1] \"0,0\"\n\n$`$P38N`\n[1] \"SSC-B-H\"\n\n$`$P38R`\n[1] \"4194304\"\n\n$`$P38TYPE`\n[1] \"Side_Scatter\"\n\n$`$P38V`\n[1] \"241\"\n\n$`$P39B`\n[1] \"32\"\n\n$`$P39E`\n[1] \"0,0\"\n\n$`$P39N`\n[1] \"SSC-B-A\"\n\n$`$P39R`\n[1] \"4194304\"\n\n$`$P39TYPE`\n[1] \"Side_Scatter\"\n\n$`$P39V`\n[1] 
\"241\"\n\n$`$P3B`\n[1] \"32\"\n\n$`$P3E`\n[1] \"0,0\"\n\n$`$P3N`\n[1] \"UV2-A\"\n\n$`$P3R`\n[1] \"4194304\"\n\n$`$P3TYPE`\n[1] \"Raw_Fluorescence\"\n\n$`$P3V`\n[1] \"286\"\n\n$`$P40B`\n[1] \"32\"\n\n$`$P40E`\n[1] \"0,0\"\n\n$`$P40N`\n[1] \"B1-A\"\n\n$`$P40R`\n[1] \"4194304\"\n\n$`$P40TYPE`\n[1] \"Raw_Fluorescence\"\n\n$`$P40V`\n[1] \"1013\"\n\n$`$P41B`\n[1] \"32\"\n\n$`$P41E`\n[1] \"0,0\"\n\n$`$P41N`\n[1] \"B2-A\"\n\n$`$P41R`\n[1] \"4194304\"\n\n$`$P41TYPE`\n[1] \"Raw_Fluorescence\"\n\n$`$P41V`\n[1] \"483\"\n\n$`$P42B`\n[1] \"32\"\n\n$`$P42E`\n[1] \"0,0\"\n\n$`$P42N`\n[1] \"B3-A\"\n\n$`$P42R`\n[1] \"4194304\"\n\n$`$P42TYPE`\n[1] \"Raw_Fluorescence\"\n\n$`$P42V`\n[1] \"471\"\n\n$`$P43B`\n[1] \"32\"\n\n$`$P43E`\n[1] \"0,0\"\n\n$`$P43N`\n[1] \"B4-A\"\n\n$`$P43R`\n[1] \"4194304\"\n\n$`$P43TYPE`\n[1] \"Raw_Fluorescence\"\n\n$`$P43V`\n[1] \"473\"\n\n$`$P44B`\n[1] \"32\"\n\n$`$P44E`\n[1] \"0,0\"\n\n$`$P44N`\n[1] \"B5-A\"\n\n$`$P44R`\n[1] \"4194304\"\n\n$`$P44TYPE`\n[1] \"Raw_Fluorescence\"\n\n$`$P44V`\n[1] \"467\"\n\n$`$P45B`\n[1] \"32\"\n\n$`$P45E`\n[1] \"0,0\"\n\n$`$P45N`\n[1] \"B6-A\"\n\n$`$P45R`\n[1] \"4194304\"\n\n$`$P45TYPE`\n[1] \"Raw_Fluorescence\"\n\n$`$P45V`\n[1] \"284\"\n\n$`$P46B`\n[1] \"32\"\n\n$`$P46E`\n[1] \"0,0\"\n\n$`$P46N`\n[1] \"B7-A\"\n\n$`$P46R`\n[1] \"4194304\"\n\n$`$P46TYPE`\n[1] \"Raw_Fluorescence\"\n\n$`$P46V`\n[1] \"531\"\n\n$`$P47B`\n[1] \"32\"\n\n$`$P47E`\n[1] \"0,0\"\n\n$`$P47N`\n[1] \"B8-A\"\n\n$`$P47R`\n[1] \"4194304\"\n\n$`$P47TYPE`\n[1] \"Raw_Fluorescence\"\n\n$`$P47V`\n[1] \"432\"\n\n$`$P48B`\n[1] \"32\"\n\n$`$P48E`\n[1] \"0,0\"\n\n$`$P48N`\n[1] \"B9-A\"\n\n$`$P48R`\n[1] \"4194304\"\n\n$`$P48TYPE`\n[1] \"Raw_Fluorescence\"\n\n$`$P48V`\n[1] \"675\"\n\n$`$P49B`\n[1] \"32\"\n\n$`$P49E`\n[1] \"0,0\"\n\n$`$P49N`\n[1] \"B10-A\"\n\n$`$P49R`\n[1] \"4194304\"\n\n$`$P49TYPE`\n[1] \"Raw_Fluorescence\"\n\n$`$P49V`\n[1] \"490\"\n\n$`$P4B`\n[1] \"32\"\n\n$`$P4E`\n[1] \"0,0\"\n\n$`$P4N`\n[1] \"UV3-A\"\n\n$`$P4R`\n[1] \"4194304\"\n\n$`$P4TYPE`\n[1] 
\"Raw_Fluorescence\"\n\n$`$P4V`\n[1] \"677\"\n\n$`$P50B`\n[1] \"32\"\n\n$`$P50E`\n[1] \"0,0\"\n\n$`$P50N`\n[1] \"B11-A\"\n\n$`$P50R`\n[1] \"4194304\"\n\n$`$P50TYPE`\n[1] \"Raw_Fluorescence\"\n\n$`$P50V`\n[1] \"286\"\n\n$`$P51B`\n[1] \"32\"\n\n$`$P51E`\n[1] \"0,0\"\n\n$`$P51N`\n[1] \"B12-A\"\n\n$`$P51R`\n[1] \"4194304\"\n\n$`$P51TYPE`\n[1] \"Raw_Fluorescence\"\n\n$`$P51V`\n[1] \"407\"\n\n$`$P52B`\n[1] \"32\"\n\n$`$P52E`\n[1] \"0,0\"\n\n$`$P52N`\n[1] \"B13-A\"\n\n$`$P52R`\n[1] \"4194304\"\n\n$`$P52TYPE`\n[1] \"Raw_Fluorescence\"\n\n$`$P52V`\n[1] \"700\"\n\n$`$P53B`\n[1] \"32\"\n\n$`$P53E`\n[1] \"0,0\"\n\n$`$P53N`\n[1] \"B14-A\"\n\n$`$P53R`\n[1] \"4194304\"\n\n$`$P53TYPE`\n[1] \"Raw_Fluorescence\"\n\n$`$P53V`\n[1] \"693\"\n\n$`$P54B`\n[1] \"32\"\n\n$`$P54E`\n[1] \"0,0\"\n\n$`$P54N`\n[1] \"R1-A\"\n\n$`$P54R`\n[1] \"4194304\"\n\n$`$P54TYPE`\n[1] \"Raw_Fluorescence\"\n\n$`$P54V`\n[1] \"158\"\n\n$`$P55B`\n[1] \"32\"\n\n$`$P55E`\n[1] \"0,0\"\n\n$`$P55N`\n[1] \"R2-A\"\n\n$`$P55R`\n[1] \"4194304\"\n\n$`$P55TYPE`\n[1] \"Raw_Fluorescence\"\n\n$`$P55V`\n[1] \"245\"\n\n$`$P56B`\n[1] \"32\"\n\n$`$P56E`\n[1] \"0,0\"\n\n$`$P56N`\n[1] \"R3-A\"\n\n$`$P56R`\n[1] \"4194304\"\n\n$`$P56TYPE`\n[1] \"Raw_Fluorescence\"\n\n$`$P56V`\n[1] \"338\"\n\n$`$P57B`\n[1] \"32\"\n\n$`$P57E`\n[1] \"0,0\"\n\n$`$P57N`\n[1] \"R4-A\"\n\n$`$P57R`\n[1] \"4194304\"\n\n$`$P57TYPE`\n[1] \"Raw_Fluorescence\"\n\n$`$P57V`\n[1] \"238\"\n\n$`$P58B`\n[1] \"32\"\n\n$`$P58E`\n[1] \"0,0\"\n\n$`$P58N`\n[1] \"R5-A\"\n\n$`$P58R`\n[1] \"4194304\"\n\n$`$P58TYPE`\n[1] \"Raw_Fluorescence\"\n\n$`$P58V`\n[1] \"191\"\n\n$`$P59B`\n[1] \"32\"\n\n$`$P59E`\n[1] \"0,0\"\n\n$`$P59N`\n[1] \"R6-A\"\n\n$`$P59R`\n[1] \"4194304\"\n\n$`$P59TYPE`\n[1] \"Raw_Fluorescence\"\n\n$`$P59V`\n[1] \"274\"\n\n$`$P5B`\n[1] \"32\"\n\n$`$P5E`\n[1] \"0,0\"\n\n$`$P5N`\n[1] \"UV4-A\"\n\n$`$P5R`\n[1] \"4194304\"\n\n$`$P5TYPE`\n[1] \"Raw_Fluorescence\"\n\n$`$P5V`\n[1] \"1022\"\n\n$`$P60B`\n[1] \"32\"\n\n$`$P60E`\n[1] \"0,0\"\n\n$`$P60N`\n[1] 
\"R7-A\"\n\n$`$P60R`\n[1] \"4194304\"\n\n$`$P60TYPE`\n[1] \"Raw_Fluorescence\"\n\n$`$P60V`\n[1] \"524\"\n\n$`$P61B`\n[1] \"32\"\n\n$`$P61E`\n[1] \"0,0\"\n\n$`$P61N`\n[1] \"R8-A\"\n\n$`$P61R`\n[1] \"4194304\"\n\n$`$P61TYPE`\n[1] \"Raw_Fluorescence\"\n\n$`$P61V`\n[1] \"243\"\n\n$`$P6B`\n[1] \"32\"\n\n$`$P6E`\n[1] \"0,0\"\n\n$`$P6N`\n[1] \"UV5-A\"\n\n$`$P6R`\n[1] \"4194304\"\n\n$`$P6TYPE`\n[1] \"Raw_Fluorescence\"\n\n$`$P6V`\n[1] \"616\"\n\n$`$P7B`\n[1] \"32\"\n\n$`$P7E`\n[1] \"0,0\"\n\n$`$P7N`\n[1] \"UV6-A\"\n\n$`$P7R`\n[1] \"4194304\"\n\n$`$P7TYPE`\n[1] \"Raw_Fluorescence\"\n\n$`$P7V`\n[1] \"506\"\n\n$`$P8B`\n[1] \"32\"\n\n$`$P8E`\n[1] \"0,0\"\n\n$`$P8N`\n[1] \"UV7-A\"\n\n$`$P8R`\n[1] \"4194304\"\n\n$`$P8TYPE`\n[1] \"Raw_Fluorescence\"\n\n$`$P8V`\n[1] \"661\"\n\n$`$P9B`\n[1] \"32\"\n\n$`$P9E`\n[1] \"0,0\"\n\n$`$P9N`\n[1] \"UV8-A\"\n\n$`$P9R`\n[1] \"4194304\"\n\n$`$P9TYPE`\n[1] \"Raw_Fluorescence\"\n\n$`$P9V`\n[1] \"514\"" + "text": "Detector Values\nThe next major stretch of keywords encodes parameter values associated with the individual detectors at the time of acquisition.\n\nDetectors <- DescriptionList[20:384]\nDetectors\n\n$`$P10B`\n[1] \"32\"\n\n$`$P10E`\n[1] \"0,0\"\n\n$`$P10N`\n[1] \"UV9-A\"\n\n$`$P10R`\n[1] \"4194304\"\n\n$`$P10TYPE`\n[1] \"Raw_Fluorescence\"\n\n$`$P10V`\n[1] \"710\"\n\n$`$P11B`\n[1] \"32\"\n\n$`$P11E`\n[1] \"0,0\"\n\n$`$P11N`\n[1] \"UV10-A\"\n\n$`$P11R`\n[1] \"4194304\"\n\n$`$P11TYPE`\n[1] \"Raw_Fluorescence\"\n\n$`$P11V`\n[1] \"377\"\n\n$`$P12B`\n[1] \"32\"\n\n$`$P12E`\n[1] \"0,0\"\n\n$`$P12N`\n[1] \"UV11-A\"\n\n$`$P12R`\n[1] \"4194304\"\n\n$`$P12TYPE`\n[1] \"Raw_Fluorescence\"\n\n$`$P12V`\n[1] \"469\"\n\n$`$P13B`\n[1] \"32\"\n\n$`$P13E`\n[1] \"0,0\"\n\n$`$P13N`\n[1] \"UV12-A\"\n\n$`$P13R`\n[1] \"4194304\"\n\n$`$P13TYPE`\n[1] \"Raw_Fluorescence\"\n\n$`$P13V`\n[1] \"434\"\n\n$`$P14B`\n[1] \"32\"\n\n$`$P14E`\n[1] \"0,0\"\n\n$`$P14N`\n[1] \"UV13-A\"\n\n$`$P14R`\n[1] \"4194304\"\n\n$`$P14TYPE`\n[1] \"Raw_Fluorescence\"\n\n$`$P14V`\n[1] 
\"564\"\n\n$`$P15B`\n[1] \"32\"\n\n$`$P15E`\n[1] \"0,0\"\n\n$`$P15N`\n[1] \"UV14-A\"\n\n$`$P15R`\n[1] \"4194304\"\n\n$`$P15TYPE`\n[1] \"Raw_Fluorescence\"\n\n$`$P15V`\n[1] \"975\"\n\n$`$P16B`\n[1] \"32\"\n\n$`$P16E`\n[1] \"0,0\"\n\n$`$P16N`\n[1] \"UV15-A\"\n\n$`$P16R`\n[1] \"4194304\"\n\n$`$P16TYPE`\n[1] \"Raw_Fluorescence\"\n\n$`$P16V`\n[1] \"737\"\n\n$`$P17B`\n[1] \"32\"\n\n$`$P17E`\n[1] \"0,0\"\n\n$`$P17N`\n[1] \"UV16-A\"\n\n$`$P17R`\n[1] \"4194304\"\n\n$`$P17TYPE`\n[1] \"Raw_Fluorescence\"\n\n$`$P17V`\n[1] \"1069\"\n\n$`$P18B`\n[1] \"32\"\n\n$`$P18E`\n[1] \"0,0\"\n\n$`$P18N`\n[1] \"SSC-H\"\n\n$`$P18R`\n[1] \"4194304\"\n\n$`$P18TYPE`\n[1] \"Side_Scatter\"\n\n$`$P18V`\n[1] \"334\"\n\n$`$P19B`\n[1] \"32\"\n\n$`$P19E`\n[1] \"0,0\"\n\n$`$P19N`\n[1] \"SSC-A\"\n\n$`$P19R`\n[1] \"4194304\"\n\n$`$P19TYPE`\n[1] \"Side_Scatter\"\n\n$`$P19V`\n[1] \"334\"\n\n$`$P1B`\n[1] \"32\"\n\n$`$P1E`\n[1] \"0,0\"\n\n$`$P1N`\n[1] \"Time\"\n\n$`$P1R`\n[1] \"272140\"\n\n$`$P1TYPE`\n[1] \"Time\"\n\n$`$P20B`\n[1] \"32\"\n\n$`$P20E`\n[1] \"0,0\"\n\n$`$P20N`\n[1] \"V1-A\"\n\n$`$P20R`\n[1] \"4194304\"\n\n$`$P20TYPE`\n[1] \"Raw_Fluorescence\"\n\n$`$P20V`\n[1] \"352\"\n\n$`$P21B`\n[1] \"32\"\n\n$`$P21E`\n[1] \"0,0\"\n\n$`$P21N`\n[1] \"V2-A\"\n\n$`$P21R`\n[1] \"4194304\"\n\n$`$P21TYPE`\n[1] \"Raw_Fluorescence\"\n\n$`$P21V`\n[1] \"412\"\n\n$`$P22B`\n[1] \"32\"\n\n$`$P22E`\n[1] \"0,0\"\n\n$`$P22N`\n[1] \"V3-A\"\n\n$`$P22R`\n[1] \"4194304\"\n\n$`$P22TYPE`\n[1] \"Raw_Fluorescence\"\n\n$`$P22V`\n[1] \"304\"\n\n$`$P23B`\n[1] \"32\"\n\n$`$P23E`\n[1] \"0,0\"\n\n$`$P23N`\n[1] \"V4-A\"\n\n$`$P23R`\n[1] \"4194304\"\n\n$`$P23TYPE`\n[1] \"Raw_Fluorescence\"\n\n$`$P23V`\n[1] \"217\"\n\n$`$P24B`\n[1] \"32\"\n\n$`$P24E`\n[1] \"0,0\"\n\n$`$P24N`\n[1] \"V5-A\"\n\n$`$P24R`\n[1] \"4194304\"\n\n$`$P24TYPE`\n[1] \"Raw_Fluorescence\"\n\n$`$P24V`\n[1] \"257\"\n\n$`$P25B`\n[1] \"32\"\n\n$`$P25E`\n[1] \"0,0\"\n\n$`$P25N`\n[1] \"V6-A\"\n\n$`$P25R`\n[1] \"4194304\"\n\n$`$P25TYPE`\n[1] \"Raw_Fluorescence\"\n\n$`$P25V`\n[1] 
\"218\"\n\n$`$P26B`\n[1] \"32\"\n\n$`$P26E`\n[1] \"0,0\"\n\n$`$P26N`\n[1] \"V7-A\"\n\n$`$P26R`\n[1] \"4194304\"\n\n$`$P26TYPE`\n[1] \"Raw_Fluorescence\"\n\n$`$P26V`\n[1] \"303\"\n\n$`$P27B`\n[1] \"32\"\n\n$`$P27E`\n[1] \"0,0\"\n\n$`$P27N`\n[1] \"V8-A\"\n\n$`$P27R`\n[1] \"4194304\"\n\n$`$P27TYPE`\n[1] \"Raw_Fluorescence\"\n\n$`$P27V`\n[1] \"461\"\n\n$`$P28B`\n[1] \"32\"\n\n$`$P28E`\n[1] \"0,0\"\n\n$`$P28N`\n[1] \"V9-A\"\n\n$`$P28R`\n[1] \"4194304\"\n\n$`$P28TYPE`\n[1] \"Raw_Fluorescence\"\n\n$`$P28V`\n[1] \"320\"\n\n$`$P29B`\n[1] \"32\"\n\n$`$P29E`\n[1] \"0,0\"\n\n$`$P29N`\n[1] \"V10-A\"\n\n$`$P29R`\n[1] \"4194304\"\n\n$`$P29TYPE`\n[1] \"Raw_Fluorescence\"\n\n$`$P29V`\n[1] \"359\"\n\n$`$P2B`\n[1] \"32\"\n\n$`$P2E`\n[1] \"0,0\"\n\n$`$P2N`\n[1] \"UV1-A\"\n\n$`$P2R`\n[1] \"4194304\"\n\n$`$P2TYPE`\n[1] \"Raw_Fluorescence\"\n\n$`$P2V`\n[1] \"1008\"\n\n$`$P30B`\n[1] \"32\"\n\n$`$P30E`\n[1] \"0,0\"\n\n$`$P30N`\n[1] \"V11-A\"\n\n$`$P30R`\n[1] \"4194304\"\n\n$`$P30TYPE`\n[1] \"Raw_Fluorescence\"\n\n$`$P30V`\n[1] \"271\"\n\n$`$P31B`\n[1] \"32\"\n\n$`$P31E`\n[1] \"0,0\"\n\n$`$P31N`\n[1] \"V12-A\"\n\n$`$P31R`\n[1] \"4194304\"\n\n$`$P31TYPE`\n[1] \"Raw_Fluorescence\"\n\n$`$P31V`\n[1] \"234\"\n\n$`$P32B`\n[1] \"32\"\n\n$`$P32E`\n[1] \"0,0\"\n\n$`$P32N`\n[1] \"V13-A\"\n\n$`$P32R`\n[1] \"4194304\"\n\n$`$P32TYPE`\n[1] \"Raw_Fluorescence\"\n\n$`$P32V`\n[1] \"236\"\n\n$`$P33B`\n[1] \"32\"\n\n$`$P33E`\n[1] \"0,0\"\n\n$`$P33N`\n[1] \"V14-A\"\n\n$`$P33R`\n[1] \"4194304\"\n\n$`$P33TYPE`\n[1] \"Raw_Fluorescence\"\n\n$`$P33V`\n[1] \"318\"\n\n$`$P34B`\n[1] \"32\"\n\n$`$P34E`\n[1] \"0,0\"\n\n$`$P34N`\n[1] \"V15-A\"\n\n$`$P34R`\n[1] \"4194304\"\n\n$`$P34TYPE`\n[1] \"Raw_Fluorescence\"\n\n$`$P34V`\n[1] \"602\"\n\n$`$P35B`\n[1] \"32\"\n\n$`$P35E`\n[1] \"0,0\"\n\n$`$P35N`\n[1] \"V16-A\"\n\n$`$P35R`\n[1] \"4194304\"\n\n$`$P35TYPE`\n[1] \"Raw_Fluorescence\"\n\n$`$P35V`\n[1] \"372\"\n\n$`$P36B`\n[1] \"32\"\n\n$`$P36E`\n[1] \"0,0\"\n\n$`$P36N`\n[1] \"FSC-H\"\n\n$`$P36R`\n[1] 
\"4194304\"\n\n$`$P36TYPE`\n[1] \"Forward_Scatter\"\n\n$`$P36V`\n[1] \"55\"\n\n$`$P37B`\n[1] \"32\"\n\n$`$P37E`\n[1] \"0,0\"\n\n$`$P37N`\n[1] \"FSC-A\"\n\n$`$P37R`\n[1] \"4194304\"\n\n$`$P37TYPE`\n[1] \"Forward_Scatter\"\n\n$`$P37V`\n[1] \"55\"\n\n$`$P38B`\n[1] \"32\"\n\n$`$P38E`\n[1] \"0,0\"\n\n$`$P38N`\n[1] \"SSC-B-H\"\n\n$`$P38R`\n[1] \"4194304\"\n\n$`$P38TYPE`\n[1] \"Side_Scatter\"\n\n$`$P38V`\n[1] \"241\"\n\n$`$P39B`\n[1] \"32\"\n\n$`$P39E`\n[1] \"0,0\"\n\n$`$P39N`\n[1] \"SSC-B-A\"\n\n$`$P39R`\n[1] \"4194304\"\n\n$`$P39TYPE`\n[1] \"Side_Scatter\"\n\n$`$P39V`\n[1] \"241\"\n\n$`$P3B`\n[1] \"32\"\n\n$`$P3E`\n[1] \"0,0\"\n\n$`$P3N`\n[1] \"UV2-A\"\n\n$`$P3R`\n[1] \"4194304\"\n\n$`$P3TYPE`\n[1] \"Raw_Fluorescence\"\n\n$`$P3V`\n[1] \"286\"\n\n$`$P40B`\n[1] \"32\"\n\n$`$P40E`\n[1] \"0,0\"\n\n$`$P40N`\n[1] \"B1-A\"\n\n$`$P40R`\n[1] \"4194304\"\n\n$`$P40TYPE`\n[1] \"Raw_Fluorescence\"\n\n$`$P40V`\n[1] \"1013\"\n\n$`$P41B`\n[1] \"32\"\n\n$`$P41E`\n[1] \"0,0\"\n\n$`$P41N`\n[1] \"B2-A\"\n\n$`$P41R`\n[1] \"4194304\"\n\n$`$P41TYPE`\n[1] \"Raw_Fluorescence\"\n\n$`$P41V`\n[1] \"483\"\n\n$`$P42B`\n[1] \"32\"\n\n$`$P42E`\n[1] \"0,0\"\n\n$`$P42N`\n[1] \"B3-A\"\n\n$`$P42R`\n[1] \"4194304\"\n\n$`$P42TYPE`\n[1] \"Raw_Fluorescence\"\n\n$`$P42V`\n[1] \"471\"\n\n$`$P43B`\n[1] \"32\"\n\n$`$P43E`\n[1] \"0,0\"\n\n$`$P43N`\n[1] \"B4-A\"\n\n$`$P43R`\n[1] \"4194304\"\n\n$`$P43TYPE`\n[1] \"Raw_Fluorescence\"\n\n$`$P43V`\n[1] \"473\"\n\n$`$P44B`\n[1] \"32\"\n\n$`$P44E`\n[1] \"0,0\"\n\n$`$P44N`\n[1] \"B5-A\"\n\n$`$P44R`\n[1] \"4194304\"\n\n$`$P44TYPE`\n[1] \"Raw_Fluorescence\"\n\n$`$P44V`\n[1] \"467\"\n\n$`$P45B`\n[1] \"32\"\n\n$`$P45E`\n[1] \"0,0\"\n\n$`$P45N`\n[1] \"B6-A\"\n\n$`$P45R`\n[1] \"4194304\"\n\n$`$P45TYPE`\n[1] \"Raw_Fluorescence\"\n\n$`$P45V`\n[1] \"284\"\n\n$`$P46B`\n[1] \"32\"\n\n$`$P46E`\n[1] \"0,0\"\n\n$`$P46N`\n[1] \"B7-A\"\n\n$`$P46R`\n[1] \"4194304\"\n\n$`$P46TYPE`\n[1] \"Raw_Fluorescence\"\n\n$`$P46V`\n[1] \"531\"\n\n$`$P47B`\n[1] \"32\"\n\n$`$P47E`\n[1] 
\"0,0\"\n\n$`$P47N`\n[1] \"B8-A\"\n\n$`$P47R`\n[1] \"4194304\"\n\n$`$P47TYPE`\n[1] \"Raw_Fluorescence\"\n\n$`$P47V`\n[1] \"432\"\n\n$`$P48B`\n[1] \"32\"\n\n$`$P48E`\n[1] \"0,0\"\n\n$`$P48N`\n[1] \"B9-A\"\n\n$`$P48R`\n[1] \"4194304\"\n\n$`$P48TYPE`\n[1] \"Raw_Fluorescence\"\n\n$`$P48V`\n[1] \"675\"\n\n$`$P49B`\n[1] \"32\"\n\n$`$P49E`\n[1] \"0,0\"\n\n$`$P49N`\n[1] \"B10-A\"\n\n$`$P49R`\n[1] \"4194304\"\n\n$`$P49TYPE`\n[1] \"Raw_Fluorescence\"\n\n$`$P49V`\n[1] \"490\"\n\n$`$P4B`\n[1] \"32\"\n\n$`$P4E`\n[1] \"0,0\"\n\n$`$P4N`\n[1] \"UV3-A\"\n\n$`$P4R`\n[1] \"4194304\"\n\n$`$P4TYPE`\n[1] \"Raw_Fluorescence\"\n\n$`$P4V`\n[1] \"677\"\n\n$`$P50B`\n[1] \"32\"\n\n$`$P50E`\n[1] \"0,0\"\n\n$`$P50N`\n[1] \"B11-A\"\n\n$`$P50R`\n[1] \"4194304\"\n\n$`$P50TYPE`\n[1] \"Raw_Fluorescence\"\n\n$`$P50V`\n[1] \"286\"\n\n$`$P51B`\n[1] \"32\"\n\n$`$P51E`\n[1] \"0,0\"\n\n$`$P51N`\n[1] \"B12-A\"\n\n$`$P51R`\n[1] \"4194304\"\n\n$`$P51TYPE`\n[1] \"Raw_Fluorescence\"\n\n$`$P51V`\n[1] \"407\"\n\n$`$P52B`\n[1] \"32\"\n\n$`$P52E`\n[1] \"0,0\"\n\n$`$P52N`\n[1] \"B13-A\"\n\n$`$P52R`\n[1] \"4194304\"\n\n$`$P52TYPE`\n[1] \"Raw_Fluorescence\"\n\n$`$P52V`\n[1] \"700\"\n\n$`$P53B`\n[1] \"32\"\n\n$`$P53E`\n[1] \"0,0\"\n\n$`$P53N`\n[1] \"B14-A\"\n\n$`$P53R`\n[1] \"4194304\"\n\n$`$P53TYPE`\n[1] \"Raw_Fluorescence\"\n\n$`$P53V`\n[1] \"693\"\n\n$`$P54B`\n[1] \"32\"\n\n$`$P54E`\n[1] \"0,0\"\n\n$`$P54N`\n[1] \"R1-A\"\n\n$`$P54R`\n[1] \"4194304\"\n\n$`$P54TYPE`\n[1] \"Raw_Fluorescence\"\n\n$`$P54V`\n[1] \"158\"\n\n$`$P55B`\n[1] \"32\"\n\n$`$P55E`\n[1] \"0,0\"\n\n$`$P55N`\n[1] \"R2-A\"\n\n$`$P55R`\n[1] \"4194304\"\n\n$`$P55TYPE`\n[1] \"Raw_Fluorescence\"\n\n$`$P55V`\n[1] \"245\"\n\n$`$P56B`\n[1] \"32\"\n\n$`$P56E`\n[1] \"0,0\"\n\n$`$P56N`\n[1] \"R3-A\"\n\n$`$P56R`\n[1] \"4194304\"\n\n$`$P56TYPE`\n[1] \"Raw_Fluorescence\"\n\n$`$P56V`\n[1] \"338\"\n\n$`$P57B`\n[1] \"32\"\n\n$`$P57E`\n[1] \"0,0\"\n\n$`$P57N`\n[1] \"R4-A\"\n\n$`$P57R`\n[1] \"4194304\"\n\n$`$P57TYPE`\n[1] \"Raw_Fluorescence\"\n\n$`$P57V`\n[1] 
\"238\"\n\n$`$P58B`\n[1] \"32\"\n\n$`$P58E`\n[1] \"0,0\"\n\n$`$P58N`\n[1] \"R5-A\"\n\n$`$P58R`\n[1] \"4194304\"\n\n$`$P58TYPE`\n[1] \"Raw_Fluorescence\"\n\n$`$P58V`\n[1] \"191\"\n\n$`$P59B`\n[1] \"32\"\n\n$`$P59E`\n[1] \"0,0\"\n\n$`$P59N`\n[1] \"R6-A\"\n\n$`$P59R`\n[1] \"4194304\"\n\n$`$P59TYPE`\n[1] \"Raw_Fluorescence\"\n\n$`$P59V`\n[1] \"274\"\n\n$`$P5B`\n[1] \"32\"\n\n$`$P5E`\n[1] \"0,0\"\n\n$`$P5N`\n[1] \"UV4-A\"\n\n$`$P5R`\n[1] \"4194304\"\n\n$`$P5TYPE`\n[1] \"Raw_Fluorescence\"\n\n$`$P5V`\n[1] \"1022\"\n\n$`$P60B`\n[1] \"32\"\n\n$`$P60E`\n[1] \"0,0\"\n\n$`$P60N`\n[1] \"R7-A\"\n\n$`$P60R`\n[1] \"4194304\"\n\n$`$P60TYPE`\n[1] \"Raw_Fluorescence\"\n\n$`$P60V`\n[1] \"524\"\n\n$`$P61B`\n[1] \"32\"\n\n$`$P61E`\n[1] \"0,0\"\n\n$`$P61N`\n[1] \"R8-A\"\n\n$`$P61R`\n[1] \"4194304\"\n\n$`$P61TYPE`\n[1] \"Raw_Fluorescence\"\n\n$`$P61V`\n[1] \"243\"\n\n$`$P6B`\n[1] \"32\"\n\n$`$P6E`\n[1] \"0,0\"\n\n$`$P6N`\n[1] \"UV5-A\"\n\n$`$P6R`\n[1] \"4194304\"\n\n$`$P6TYPE`\n[1] \"Raw_Fluorescence\"\n\n$`$P6V`\n[1] \"616\"\n\n$`$P7B`\n[1] \"32\"\n\n$`$P7E`\n[1] \"0,0\"\n\n$`$P7N`\n[1] \"UV6-A\"\n\n$`$P7R`\n[1] \"4194304\"\n\n$`$P7TYPE`\n[1] \"Raw_Fluorescence\"\n\n$`$P7V`\n[1] \"506\"\n\n$`$P8B`\n[1] \"32\"\n\n$`$P8E`\n[1] \"0,0\"\n\n$`$P8N`\n[1] \"UV7-A\"\n\n$`$P8R`\n[1] \"4194304\"\n\n$`$P8TYPE`\n[1] \"Raw_Fluorescence\"\n\n$`$P8V`\n[1] \"661\"\n\n$`$P9B`\n[1] \"32\"\n\n$`$P9E`\n[1] \"0,0\"\n\n$`$P9N`\n[1] \"UV8-A\"\n\n$`$P9R`\n[1] \"4194304\"\n\n$`$P9TYPE`\n[1] \"Raw_Fluorescence\"\n\n$`$P9V`\n[1] \"514\"\n\n\nFortunately for all involved, there is a consistently repeating pattern for the keywords corresponding to each detector. We can see that here for $P7B, $P7E, $P7N, $P7R, $P7TYPE, $P7V.\n\nReferring to the Flow Cytometry Standard documentation, here is what the particular keyword letters mean:\n\n\n\n\n\n\nB\n\n\n\nNumber of bits reserved for parameter number n\n\n\n\n\nDescriptionList$`$P7B`\n\n[1] \"32\"\n\n\n\n\n\n\n\n\n\nE\n\n\n\nAmplification type for parameter n. 
\n\n\n\n\nDescriptionList$`$P7E`\n\n[1] \"0,0\"\n\n\n\n\n\n\n\n\n\n\n\nN\n\n\n\nShort Name for parameter n. \n\n\n\n\nDescriptionList$`$P7N`\n\n[1] \"UV6-A\"\n\n\n\n\n\n\n\n\n\nR\n\n\n\nRange for parameter number n. \n\n\n\n\nDescriptionList$`$P7R`\n\n[1] \"4194304\"\n\n\n\n\n\n\n\n\n\n\n\nTYPE\n\n\n\nDetector type for parameter n. \n\n\n\n\nDescriptionList$`$P7TYPE`\n\n[1] \"Raw_Fluorescence\"\n\n\n\n\n\n\n\n\n\nV\n\n\n\nDetector voltage for parameter n. \n\n\n\n\nDescriptionList$`$P7V`\n\n[1] \"506\"\n\n\n\n\n\nWhile not immediately obvious, understanding what these keywords encode has proven useful for our core. In our case, we have built an automated InstrumentQC dashboard for all the instruments at our core.\n\n\n\nBy extracting the stored N (Detector Name) and V (Gain/Voltage) values for all the individual detectors from our daily QC bead .fcs files, we can plot Levey-Jennings plots for our individual instruments, usually giving us around a month's warning before an individual laser begins to fail. This helps with scheduling the Field-Service Engineer visit before it starts impacting the actual data.\n\n\n\nWhile most of the detector keywords are similar (only their individual name and voltage change), there are a couple of exceptions.\nFor the FSC/SSC parameters, instead of a Raw_Fluorescence value for Type, we see the corresponding Scatter value returned. 
This in term is what is used by various commercial softwares to show those axis as linear instead of biexponential when selected.\n\n\n\nThis is similarly the case for the Time parameter, where in addition to Type being set to Time, the range also appears different to Raw/Scatters value.", + "crumbs": [ + "About", + "Intro to R", + "03 - Inside a .FCS file" + ] }, { - "objectID": "course/03_InsideFCSFile/slides.html#middle-metadata", - "href": "course/03_InsideFCSFile/slides.html#middle-metadata", + "objectID": "course/03_InsideFCSFile/index.html#middle-metadata", + "href": "course/03_InsideFCSFile/index.html#middle-metadata", "title": "03 - Inside an FCS File", "section": "Middle Metadata", - "text": "Middle Metadata\n\n\n\n\n\n\n\n\n.\n\n\nOnce we are out of the detector keywords, we find the last of the $Metadata associated keywords.\n\n\n\n\n\n\n\nDetectors <- DescriptionList[385:398]\nDetectors\n\n$`$PAR`\n[1] \"61\"\n\n$`$PROJ`\n[1] \"CellCounts4L_AB_05\"\n\n$`$SPILLOVER`\n UV1-A UV2-A UV3-A UV4-A UV5-A UV6-A UV7-A UV8-A UV9-A UV10-A UV11-A\n [1,] 1e+00 0 0 0 0 0 0 0 0 0 0\n [2,] 1e-06 1 0 0 0 0 0 0 0 0 0\n [3,] 0e+00 0 1 0 0 0 0 0 0 0 0\n [4,] 0e+00 0 0 1 0 0 0 0 0 0 0\n [5,] 0e+00 0 0 0 1 0 0 0 0 0 0\n [6,] 0e+00 0 0 0 0 1 0 0 0 0 0\n [7,] 0e+00 0 0 0 0 0 1 0 0 0 0\n [8,] 0e+00 0 0 0 0 0 0 1 0 0 0\n [9,] 0e+00 0 0 0 0 0 0 0 1 0 0\n[10,] 0e+00 0 0 0 0 0 0 0 0 1 0\n[11,] 0e+00 0 0 0 0 0 0 0 0 0 1\n[12,] 0e+00 0 0 0 0 0 0 0 0 0 0\n[13,] 0e+00 0 0 0 0 0 0 0 0 0 0\n[14,] 0e+00 0 0 0 0 0 0 0 0 0 0\n[15,] 0e+00 0 0 0 0 0 0 0 0 0 0\n[16,] 0e+00 0 0 0 0 0 0 0 0 0 0\n[17,] 0e+00 0 0 0 0 0 0 0 0 0 0\n[18,] 0e+00 0 0 0 0 0 0 0 0 0 0\n[19,] 0e+00 0 0 0 0 0 0 0 0 0 0\n[20,] 0e+00 0 0 0 0 0 0 0 0 0 0\n[21,] 0e+00 0 0 0 0 0 0 0 0 0 0\n[22,] 0e+00 0 0 0 0 0 0 0 0 0 0\n[23,] 0e+00 0 0 0 0 0 0 0 0 0 0\n[24,] 0e+00 0 0 0 0 0 0 0 0 0 0\n[25,] 0e+00 0 0 0 0 0 0 0 0 0 0\n[26,] 0e+00 0 0 0 0 0 0 0 0 0 0\n[27,] 0e+00 0 0 0 0 0 0 0 0 0 0\n[28,] 0e+00 0 0 0 0 0 0 0 0 0 0\n[29,] 0e+00 
0 0 0 0 0 0 0 0 0 0\n[30,] 0e+00 0 0 0 0 0 0 0 0 0 0\n[31,] 0e+00 0 0 0 0 0 0 0 0 0 0\n[32,] 0e+00 0 0 0 0 0 0 0 0 0 0\n[33,] 0e+00 0 0 0 0 0 0 0 0 0 0\n[34,] 0e+00 0 0 0 0 0 0 0 0 0 0\n[35,] 0e+00 0 0 0 0 0 0 0 0 0 0\n[36,] 0e+00 0 0 0 0 0 0 0 0 0 0\n[37,] 0e+00 0 0 0 0 0 0 0 0 0 0\n[38,] 0e+00 0 0 0 0 0 0 0 0 0 0\n[39,] 0e+00 0 0 0 0 0 0 0 0 0 0\n[40,] 0e+00 0 0 0 0 0 0 0 0 0 0\n[41,] 0e+00 0 0 0 0 0 0 0 0 0 0\n[42,] 0e+00 0 0 0 0 0 0 0 0 0 0\n[43,] 0e+00 0 0 0 0 0 0 0 0 0 0\n[44,] 0e+00 0 0 0 0 0 0 0 0 0 0\n[45,] 0e+00 0 0 0 0 0 0 0 0 0 0\n[46,] 0e+00 0 0 0 0 0 0 0 0 0 0\n[47,] 0e+00 0 0 0 0 0 0 0 0 0 0\n[48,] 0e+00 0 0 0 0 0 0 0 0 0 0\n[49,] 0e+00 0 0 0 0 0 0 0 0 0 0\n[50,] 0e+00 0 0 0 0 0 0 0 0 0 0\n[51,] 0e+00 0 0 0 0 0 0 0 0 0 0\n[52,] 0e+00 0 0 0 0 0 0 0 0 0 0\n[53,] 0e+00 0 0 0 0 0 0 0 0 0 0\n[54,] 0e+00 0 0 0 0 0 0 0 0 0 0\n UV12-A UV13-A UV14-A UV15-A UV16-A V1-A V2-A V3-A V4-A V5-A V6-A V7-A\n [1,] 0 0 0 0 0 0 0 0 0 0 0 0\n [2,] 0 0 0 0 0 0 0 0 0 0 0 0\n [3,] 0 0 0 0 0 0 0 0 0 0 0 0\n [4,] 0 0 0 0 0 0 0 0 0 0 0 0\n [5,] 0 0 0 0 0 0 0 0 0 0 0 0\n [6,] 0 0 0 0 0 0 0 0 0 0 0 0\n [7,] 0 0 0 0 0 0 0 0 0 0 0 0\n [8,] 0 0 0 0 0 0 0 0 0 0 0 0\n [9,] 0 0 0 0 0 0 0 0 0 0 0 0\n[10,] 0 0 0 0 0 0 0 0 0 0 0 0\n[11,] 0 0 0 0 0 0 0 0 0 0 0 0\n[12,] 1 0 0 0 0 0 0 0 0 0 0 0\n[13,] 0 1 0 0 0 0 0 0 0 0 0 0\n[14,] 0 0 1 0 0 0 0 0 0 0 0 0\n[15,] 0 0 0 1 0 0 0 0 0 0 0 0\n[16,] 0 0 0 0 1 0 0 0 0 0 0 0\n[17,] 0 0 0 0 0 1 0 0 0 0 0 0\n[18,] 0 0 0 0 0 0 1 0 0 0 0 0\n[19,] 0 0 0 0 0 0 0 1 0 0 0 0\n[20,] 0 0 0 0 0 0 0 0 1 0 0 0\n[21,] 0 0 0 0 0 0 0 0 0 1 0 0\n[22,] 0 0 0 0 0 0 0 0 0 0 1 0\n[23,] 0 0 0 0 0 0 0 0 0 0 0 1\n[24,] 0 0 0 0 0 0 0 0 0 0 0 0\n[25,] 0 0 0 0 0 0 0 0 0 0 0 0\n[26,] 0 0 0 0 0 0 0 0 0 0 0 0\n[27,] 0 0 0 0 0 0 0 0 0 0 0 0\n[28,] 0 0 0 0 0 0 0 0 0 0 0 0\n[29,] 0 0 0 0 0 0 0 0 0 0 0 0\n[30,] 0 0 0 0 0 0 0 0 0 0 0 0\n[31,] 0 0 0 0 0 0 0 0 0 0 0 0\n[32,] 0 0 0 0 0 0 0 0 0 0 0 0\n[33,] 0 0 0 0 0 0 0 0 0 0 0 0\n[34,] 0 0 0 0 0 0 0 0 0 0 0 0\n[35,] 0 0 0 0 0 0 0 0 0 0 0 
0\n[36,] 0 0 0 0 0 0 0 0 0 0 0 0\n[37,] 0 0 0 0 0 0 0 0 0 0 0 0\n[38,] 0 0 0 0 0 0 0 0 0 0 0 0\n[39,] 0 0 0 0 0 0 0 0 0 0 0 0\n[40,] 0 0 0 0 0 0 0 0 0 0 0 0\n[41,] 0 0 0 0 0 0 0 0 0 0 0 0\n[42,] 0 0 0 0 0 0 0 0 0 0 0 0\n[43,] 0 0 0 0 0 0 0 0 0 0 0 0\n[44,] 0 0 0 0 0 0 0 0 0 0 0 0\n[45,] 0 0 0 0 0 0 0 0 0 0 0 0\n[46,] 0 0 0 0 0 0 0 0 0 0 0 0\n[47,] 0 0 0 0 0 0 0 0 0 0 0 0\n[48,] 0 0 0 0 0 0 0 0 0 0 0 0\n[49,] 0 0 0 0 0 0 0 0 0 0 0 0\n[50,] 0 0 0 0 0 0 0 0 0 0 0 0\n[51,] 0 0 0 0 0 0 0 0 0 0 0 0\n[52,] 0 0 0 0 0 0 0 0 0 0 0 0\n[53,] 0 0 0 0 0 0 0 0 0 0 0 0\n[54,] 0 0 0 0 0 0 0 0 0 0 0 0\n V8-A V9-A V10-A V11-A V12-A V13-A V14-A V15-A V16-A B1-A B2-A B3-A B4-A\n [1,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n [2,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n [3,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n [4,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n [5,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n [6,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n [7,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n [8,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n [9,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n[10,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n[11,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n[12,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n[13,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n[14,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n[15,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n[16,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n[17,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n[18,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n[19,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n[20,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n[21,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n[22,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n[23,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n[24,] 1 0 0 0 0 0 0 0 0 0 0 0 0\n[25,] 0 1 0 0 0 0 0 0 0 0 0 0 0\n[26,] 0 0 1 0 0 0 0 0 0 0 0 0 0\n[27,] 0 0 0 1 0 0 0 0 0 0 0 0 0\n[28,] 0 0 0 0 1 0 0 0 0 0 0 0 0\n[29,] 0 0 0 0 0 1 0 0 0 0 0 0 0\n[30,] 0 0 0 0 0 0 1 0 0 0 0 0 0\n[31,] 0 0 0 0 0 0 0 1 0 0 0 0 0\n[32,] 0 0 0 0 0 0 0 0 1 0 0 0 0\n[33,] 0 0 0 0 0 0 0 0 0 1 0 0 0\n[34,] 0 0 0 0 0 0 0 0 0 0 1 0 0\n[35,] 0 0 0 0 0 0 0 0 0 0 0 1 0\n[36,] 0 0 0 0 0 0 0 0 0 0 0 0 1\n[37,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n[38,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n[39,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n[40,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n[41,] 0 0 0 0 
0 0 0 0 0 0 0 0 0\n[42,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n[43,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n[44,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n[45,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n[46,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n[47,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n[48,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n[49,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n[50,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n[51,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n[52,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n[53,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n[54,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n B5-A B6-A B7-A B8-A B9-A B10-A B11-A B12-A B13-A B14-A R1-A R2-A R3-A\n [1,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n [2,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n [3,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n [4,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n [5,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n [6,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n [7,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n [8,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n [9,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n[10,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n[11,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n[12,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n[13,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n[14,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n[15,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n[16,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n[17,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n[18,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n[19,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n[20,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n[21,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n[22,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n[23,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n[24,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n[25,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n[26,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n[27,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n[28,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n[29,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n[30,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n[31,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n[32,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n[33,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n[34,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n[35,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n[36,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n[37,] 1 0 0 0 0 0 0 0 0 0 0 0 0\n[38,] 0 1 0 0 0 0 0 0 0 0 0 0 0\n[39,] 0 0 1 0 0 0 0 0 0 0 0 0 0\n[40,] 0 0 0 1 0 0 0 0 0 0 0 0 0\n[41,] 0 0 0 0 1 0 0 0 0 0 0 0 0\n[42,] 0 0 0 0 0 1 0 0 0 0 0 0 0\n[43,] 0 0 0 0 0 0 1 0 0 0 0 0 0\n[44,] 0 0 0 0 0 0 0 1 0 0 0 0 0\n[45,] 0 0 0 0 0 0 0 0 1 0 0 
0 0\n[46,] 0 0 0 0 0 0 0 0 0 1 0 0 0\n[47,] 0 0 0 0 0 0 0 0 0 0 1 0 0\n[48,] 0 0 0 0 0 0 0 0 0 0 0 1 0\n[49,] 0 0 0 0 0 0 0 0 0 0 0 0 1\n[50,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n[51,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n[52,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n[53,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n[54,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n R4-A R5-A R6-A R7-A R8-A\n [1,] 0 0 0 0 0\n [2,] 0 0 0 0 0\n [3,] 0 0 0 0 0\n [4,] 0 0 0 0 0\n [5,] 0 0 0 0 0\n [6,] 0 0 0 0 0\n [7,] 0 0 0 0 0\n [8,] 0 0 0 0 0\n [9,] 0 0 0 0 0\n[10,] 0 0 0 0 0\n[11,] 0 0 0 0 0\n[12,] 0 0 0 0 0\n[13,] 0 0 0 0 0\n[14,] 0 0 0 0 0\n[15,] 0 0 0 0 0\n[16,] 0 0 0 0 0\n[17,] 0 0 0 0 0\n[18,] 0 0 0 0 0\n[19,] 0 0 0 0 0\n[20,] 0 0 0 0 0\n[21,] 0 0 0 0 0\n[22,] 0 0 0 0 0\n[23,] 0 0 0 0 0\n[24,] 0 0 0 0 0\n[25,] 0 0 0 0 0\n[26,] 0 0 0 0 0\n[27,] 0 0 0 0 0\n[28,] 0 0 0 0 0\n[29,] 0 0 0 0 0\n[30,] 0 0 0 0 0\n[31,] 0 0 0 0 0\n[32,] 0 0 0 0 0\n[33,] 0 0 0 0 0\n[34,] 0 0 0 0 0\n[35,] 0 0 0 0 0\n[36,] 0 0 0 0 0\n[37,] 0 0 0 0 0\n[38,] 0 0 0 0 0\n[39,] 0 0 0 0 0\n[40,] 0 0 0 0 0\n[41,] 0 0 0 0 0\n[42,] 0 0 0 0 0\n[43,] 0 0 0 0 0\n[44,] 0 0 0 0 0\n[45,] 0 0 0 0 0\n[46,] 0 0 0 0 0\n[47,] 0 0 0 0 0\n[48,] 0 0 0 0 0\n[49,] 0 0 0 0 0\n[50,] 1 0 0 0 0\n[51,] 0 1 0 0 0\n[52,] 0 0 1 0 0\n[53,] 0 0 0 1 0\n[54,] 0 0 0 0 1\n\n$`$TIMESTEP`\n[1] \"0.0001\"\n\n$`$TOT`\n[1] \"100\"\n\n$`$VOL`\n[1] \"30.31\"\n\n$`APPLY COMPENSATION`\n[1] \"FALSE\"\n\n$CHARSET\n[1] \"utf-8\"\n\n$CREATOR\n[1] \"SpectroFlo 3.3.0\"\n\n$FCSversion\n[1] \"3\"\n\n$FILENAME\n[1] \"data/CellCounts4L_AB_05_ND050_05.fcs\"\n\n$`FSC ASF`\n[1] \"1.21\"\n\n$GROUPNAME\n[1] \"ND050\"\n\n$GUID\n[1] \"CellCounts4L_AB_05-ND050-05.fcs\"" + "text": "Middle Metadata\nOnce we are out of the detector keywords, we find the last of the $Metadata associated keywords.\n\nDetectors <- DescriptionList[385:398]\nDetectors\n\n$`$PAR`\n[1] \"61\"\n\n$`$PROJ`\n[1] \"CellCounts4L_AB_05\"\n\n$`$SPILLOVER`\n UV1-A UV2-A UV3-A UV4-A UV5-A UV6-A UV7-A UV8-A UV9-A UV10-A UV11-A\n [1,] 1e+00 0 0 0 0 0 0 0 0 0 0\n [2,] 1e-06 
1 0 0 0 0 0 0 0 0 0\n [3,] 0e+00 0 1 0 0 0 0 0 0 0 0\n [4,] 0e+00 0 0 1 0 0 0 0 0 0 0\n [5,] 0e+00 0 0 0 1 0 0 0 0 0 0\n [6,] 0e+00 0 0 0 0 1 0 0 0 0 0\n [7,] 0e+00 0 0 0 0 0 1 0 0 0 0\n [8,] 0e+00 0 0 0 0 0 0 1 0 0 0\n [9,] 0e+00 0 0 0 0 0 0 0 1 0 0\n[10,] 0e+00 0 0 0 0 0 0 0 0 1 0\n[11,] 0e+00 0 0 0 0 0 0 0 0 0 1\n[12,] 0e+00 0 0 0 0 0 0 0 0 0 0\n[13,] 0e+00 0 0 0 0 0 0 0 0 0 0\n[14,] 0e+00 0 0 0 0 0 0 0 0 0 0\n[15,] 0e+00 0 0 0 0 0 0 0 0 0 0\n[16,] 0e+00 0 0 0 0 0 0 0 0 0 0\n[17,] 0e+00 0 0 0 0 0 0 0 0 0 0\n[18,] 0e+00 0 0 0 0 0 0 0 0 0 0\n[19,] 0e+00 0 0 0 0 0 0 0 0 0 0\n[20,] 0e+00 0 0 0 0 0 0 0 0 0 0\n[21,] 0e+00 0 0 0 0 0 0 0 0 0 0\n[22,] 0e+00 0 0 0 0 0 0 0 0 0 0\n[23,] 0e+00 0 0 0 0 0 0 0 0 0 0\n[24,] 0e+00 0 0 0 0 0 0 0 0 0 0\n[25,] 0e+00 0 0 0 0 0 0 0 0 0 0\n[26,] 0e+00 0 0 0 0 0 0 0 0 0 0\n[27,] 0e+00 0 0 0 0 0 0 0 0 0 0\n[28,] 0e+00 0 0 0 0 0 0 0 0 0 0\n[29,] 0e+00 0 0 0 0 0 0 0 0 0 0\n[30,] 0e+00 0 0 0 0 0 0 0 0 0 0\n[31,] 0e+00 0 0 0 0 0 0 0 0 0 0\n[32,] 0e+00 0 0 0 0 0 0 0 0 0 0\n[33,] 0e+00 0 0 0 0 0 0 0 0 0 0\n[34,] 0e+00 0 0 0 0 0 0 0 0 0 0\n[35,] 0e+00 0 0 0 0 0 0 0 0 0 0\n[36,] 0e+00 0 0 0 0 0 0 0 0 0 0\n[37,] 0e+00 0 0 0 0 0 0 0 0 0 0\n[38,] 0e+00 0 0 0 0 0 0 0 0 0 0\n[39,] 0e+00 0 0 0 0 0 0 0 0 0 0\n[40,] 0e+00 0 0 0 0 0 0 0 0 0 0\n[41,] 0e+00 0 0 0 0 0 0 0 0 0 0\n[42,] 0e+00 0 0 0 0 0 0 0 0 0 0\n[43,] 0e+00 0 0 0 0 0 0 0 0 0 0\n[44,] 0e+00 0 0 0 0 0 0 0 0 0 0\n[45,] 0e+00 0 0 0 0 0 0 0 0 0 0\n[46,] 0e+00 0 0 0 0 0 0 0 0 0 0\n[47,] 0e+00 0 0 0 0 0 0 0 0 0 0\n[48,] 0e+00 0 0 0 0 0 0 0 0 0 0\n[49,] 0e+00 0 0 0 0 0 0 0 0 0 0\n[50,] 0e+00 0 0 0 0 0 0 0 0 0 0\n[51,] 0e+00 0 0 0 0 0 0 0 0 0 0\n[52,] 0e+00 0 0 0 0 0 0 0 0 0 0\n[53,] 0e+00 0 0 0 0 0 0 0 0 0 0\n[54,] 0e+00 0 0 0 0 0 0 0 0 0 0\n UV12-A UV13-A UV14-A UV15-A UV16-A V1-A V2-A V3-A V4-A V5-A V6-A V7-A\n [1,] 0 0 0 0 0 0 0 0 0 0 0 0\n [2,] 0 0 0 0 0 0 0 0 0 0 0 0\n [3,] 0 0 0 0 0 0 0 0 0 0 0 0\n [4,] 0 0 0 0 0 0 0 0 0 0 0 0\n [5,] 0 0 0 0 0 0 0 0 0 0 0 0\n [6,] 0 0 0 0 0 0 0 0 0 0 0 0\n 
[7,] 0 0 0 0 0 0 0 0 0 0 0 0\n [8,] 0 0 0 0 0 0 0 0 0 0 0 0\n [9,] 0 0 0 0 0 0 0 0 0 0 0 0\n[10,] 0 0 0 0 0 0 0 0 0 0 0 0\n[11,] 0 0 0 0 0 0 0 0 0 0 0 0\n[12,] 1 0 0 0 0 0 0 0 0 0 0 0\n[13,] 0 1 0 0 0 0 0 0 0 0 0 0\n[14,] 0 0 1 0 0 0 0 0 0 0 0 0\n[15,] 0 0 0 1 0 0 0 0 0 0 0 0\n[16,] 0 0 0 0 1 0 0 0 0 0 0 0\n[17,] 0 0 0 0 0 1 0 0 0 0 0 0\n[18,] 0 0 0 0 0 0 1 0 0 0 0 0\n[19,] 0 0 0 0 0 0 0 1 0 0 0 0\n[20,] 0 0 0 0 0 0 0 0 1 0 0 0\n[21,] 0 0 0 0 0 0 0 0 0 1 0 0\n[22,] 0 0 0 0 0 0 0 0 0 0 1 0\n[23,] 0 0 0 0 0 0 0 0 0 0 0 1\n[24,] 0 0 0 0 0 0 0 0 0 0 0 0\n[25,] 0 0 0 0 0 0 0 0 0 0 0 0\n[26,] 0 0 0 0 0 0 0 0 0 0 0 0\n[27,] 0 0 0 0 0 0 0 0 0 0 0 0\n[28,] 0 0 0 0 0 0 0 0 0 0 0 0\n[29,] 0 0 0 0 0 0 0 0 0 0 0 0\n[30,] 0 0 0 0 0 0 0 0 0 0 0 0\n[31,] 0 0 0 0 0 0 0 0 0 0 0 0\n[32,] 0 0 0 0 0 0 0 0 0 0 0 0\n[33,] 0 0 0 0 0 0 0 0 0 0 0 0\n[34,] 0 0 0 0 0 0 0 0 0 0 0 0\n[35,] 0 0 0 0 0 0 0 0 0 0 0 0\n[36,] 0 0 0 0 0 0 0 0 0 0 0 0\n[37,] 0 0 0 0 0 0 0 0 0 0 0 0\n[38,] 0 0 0 0 0 0 0 0 0 0 0 0\n[39,] 0 0 0 0 0 0 0 0 0 0 0 0\n[40,] 0 0 0 0 0 0 0 0 0 0 0 0\n[41,] 0 0 0 0 0 0 0 0 0 0 0 0\n[42,] 0 0 0 0 0 0 0 0 0 0 0 0\n[43,] 0 0 0 0 0 0 0 0 0 0 0 0\n[44,] 0 0 0 0 0 0 0 0 0 0 0 0\n[45,] 0 0 0 0 0 0 0 0 0 0 0 0\n[46,] 0 0 0 0 0 0 0 0 0 0 0 0\n[47,] 0 0 0 0 0 0 0 0 0 0 0 0\n[48,] 0 0 0 0 0 0 0 0 0 0 0 0\n[49,] 0 0 0 0 0 0 0 0 0 0 0 0\n[50,] 0 0 0 0 0 0 0 0 0 0 0 0\n[51,] 0 0 0 0 0 0 0 0 0 0 0 0\n[52,] 0 0 0 0 0 0 0 0 0 0 0 0\n[53,] 0 0 0 0 0 0 0 0 0 0 0 0\n[54,] 0 0 0 0 0 0 0 0 0 0 0 0\n V8-A V9-A V10-A V11-A V12-A V13-A V14-A V15-A V16-A B1-A B2-A B3-A B4-A\n [1,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n [2,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n [3,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n [4,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n [5,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n [6,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n [7,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n [8,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n [9,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n[10,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n[11,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n[12,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n[13,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n[14,] 0 0 
0 0 0 0 0 0 0 0 0 0 0\n[15,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n[16,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n[17,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n[18,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n[19,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n[20,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n[21,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n[22,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n[23,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n[24,] 1 0 0 0 0 0 0 0 0 0 0 0 0\n[25,] 0 1 0 0 0 0 0 0 0 0 0 0 0\n[26,] 0 0 1 0 0 0 0 0 0 0 0 0 0\n[27,] 0 0 0 1 0 0 0 0 0 0 0 0 0\n[28,] 0 0 0 0 1 0 0 0 0 0 0 0 0\n[29,] 0 0 0 0 0 1 0 0 0 0 0 0 0\n[30,] 0 0 0 0 0 0 1 0 0 0 0 0 0\n[31,] 0 0 0 0 0 0 0 1 0 0 0 0 0\n[32,] 0 0 0 0 0 0 0 0 1 0 0 0 0\n[33,] 0 0 0 0 0 0 0 0 0 1 0 0 0\n[34,] 0 0 0 0 0 0 0 0 0 0 1 0 0\n[35,] 0 0 0 0 0 0 0 0 0 0 0 1 0\n[36,] 0 0 0 0 0 0 0 0 0 0 0 0 1\n[37,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n[38,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n[39,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n[40,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n[41,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n[42,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n[43,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n[44,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n[45,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n[46,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n[47,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n[48,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n[49,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n[50,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n[51,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n[52,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n[53,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n[54,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n B5-A B6-A B7-A B8-A B9-A B10-A B11-A B12-A B13-A B14-A R1-A R2-A R3-A\n [1,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n [2,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n [3,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n [4,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n [5,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n [6,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n [7,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n [8,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n [9,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n[10,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n[11,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n[12,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n[13,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n[14,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n[15,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n[16,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n[17,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n[18,] 0 0 0 0 0 0 0 0 0 
0 0 0 0\n[19,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n[20,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n[21,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n[22,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n[23,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n[24,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n[25,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n[26,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n[27,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n[28,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n[29,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n[30,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n[31,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n[32,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n[33,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n[34,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n[35,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n[36,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n[37,] 1 0 0 0 0 0 0 0 0 0 0 0 0\n[38,] 0 1 0 0 0 0 0 0 0 0 0 0 0\n[39,] 0 0 1 0 0 0 0 0 0 0 0 0 0\n[40,] 0 0 0 1 0 0 0 0 0 0 0 0 0\n[41,] 0 0 0 0 1 0 0 0 0 0 0 0 0\n[42,] 0 0 0 0 0 1 0 0 0 0 0 0 0\n[43,] 0 0 0 0 0 0 1 0 0 0 0 0 0\n[44,] 0 0 0 0 0 0 0 1 0 0 0 0 0\n[45,] 0 0 0 0 0 0 0 0 1 0 0 0 0\n[46,] 0 0 0 0 0 0 0 0 0 1 0 0 0\n[47,] 0 0 0 0 0 0 0 0 0 0 1 0 0\n[48,] 0 0 0 0 0 0 0 0 0 0 0 1 0\n[49,] 0 0 0 0 0 0 0 0 0 0 0 0 1\n[50,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n[51,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n[52,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n[53,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n[54,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n R4-A R5-A R6-A R7-A R8-A\n [1,] 0 0 0 0 0\n [2,] 0 0 0 0 0\n [3,] 0 0 0 0 0\n [4,] 0 0 0 0 0\n [5,] 0 0 0 0 0\n [6,] 0 0 0 0 0\n [7,] 0 0 0 0 0\n [8,] 0 0 0 0 0\n [9,] 0 0 0 0 0\n[10,] 0 0 0 0 0\n[11,] 0 0 0 0 0\n[12,] 0 0 0 0 0\n[13,] 0 0 0 0 0\n[14,] 0 0 0 0 0\n[15,] 0 0 0 0 0\n[16,] 0 0 0 0 0\n[17,] 0 0 0 0 0\n[18,] 0 0 0 0 0\n[19,] 0 0 0 0 0\n[20,] 0 0 0 0 0\n[21,] 0 0 0 0 0\n[22,] 0 0 0 0 0\n[23,] 0 0 0 0 0\n[24,] 0 0 0 0 0\n[25,] 0 0 0 0 0\n[26,] 0 0 0 0 0\n[27,] 0 0 0 0 0\n[28,] 0 0 0 0 0\n[29,] 0 0 0 0 0\n[30,] 0 0 0 0 0\n[31,] 0 0 0 0 0\n[32,] 0 0 0 0 0\n[33,] 0 0 0 0 0\n[34,] 0 0 0 0 0\n[35,] 0 0 0 0 0\n[36,] 0 0 0 0 0\n[37,] 0 0 0 0 0\n[38,] 0 0 0 0 0\n[39,] 0 0 0 0 0\n[40,] 0 0 0 0 0\n[41,] 0 0 0 0 0\n[42,] 0 0 0 0 0\n[43,] 0 0 0 0 0\n[44,] 0 0 0 0 0\n[45,] 0 0 0 0 0\n[46,] 0 0 
0 0 0\n[47,] 0 0 0 0 0\n[48,] 0 0 0 0 0\n[49,] 0 0 0 0 0\n[50,] 1 0 0 0 0\n[51,] 0 1 0 0 0\n[52,] 0 0 1 0 0\n[53,] 0 0 0 1 0\n[54,] 0 0 0 0 1\n\n$`$TIMESTEP`\n[1] \"0.0001\"\n\n$`$TOT`\n[1] \"100\"\n\n$`$VOL`\n[1] \"30.31\"\n\n$`APPLY COMPENSATION`\n[1] \"FALSE\"\n\n$CHARSET\n[1] \"utf-8\"\n\n$CREATOR\n[1] \"SpectroFlo 3.3.0\"\n\n$FCSversion\n[1] \"3\"\n\n$FILENAME\n[1] \"data/CellCounts4L_AB_05_ND050_05.fcs\"\n\n$`FSC ASF`\n[1] \"1.21\"\n\n$GROUPNAME\n[1] \"ND050\"\n\n$GUID\n[1] \"CellCounts4L_AB_05-ND050-05.fcs\"\n\n\nAmong those of potential interest\n\n\n\n\n\n\nProj\n\n\n\nOften corresponding to the experiment file name\n\n\n\n\nDescriptionList$`$PROJ`\n\n[1] \"CellCounts4L_AB_05\"\n\n\n\n\n\n\n\n\n\nSpillover\n\n\n\nWhere the internal spillover matrix is stored (we will revisit during compensation)\n\n\n\n\nDescriptionList$`$SPILLOVER`\n\n UV1-A UV2-A UV3-A UV4-A UV5-A UV6-A UV7-A UV8-A UV9-A UV10-A UV11-A\n [1,] 1e+00 0 0 0 0 0 0 0 0 0 0\n [2,] 1e-06 1 0 0 0 0 0 0 0 0 0\n [3,] 0e+00 0 1 0 0 0 0 0 0 0 0\n [4,] 0e+00 0 0 1 0 0 0 0 0 0 0\n [5,] 0e+00 0 0 0 1 0 0 0 0 0 0\n [6,] 0e+00 0 0 0 0 1 0 0 0 0 0\n [7,] 0e+00 0 0 0 0 0 1 0 0 0 0\n [8,] 0e+00 0 0 0 0 0 0 1 0 0 0\n [9,] 0e+00 0 0 0 0 0 0 0 1 0 0\n[10,] 0e+00 0 0 0 0 0 0 0 0 1 0\n[11,] 0e+00 0 0 0 0 0 0 0 0 0 1\n[12,] 0e+00 0 0 0 0 0 0 0 0 0 0\n[13,] 0e+00 0 0 0 0 0 0 0 0 0 0\n[14,] 0e+00 0 0 0 0 0 0 0 0 0 0\n[15,] 0e+00 0 0 0 0 0 0 0 0 0 0\n[16,] 0e+00 0 0 0 0 0 0 0 0 0 0\n[17,] 0e+00 0 0 0 0 0 0 0 0 0 0\n[18,] 0e+00 0 0 0 0 0 0 0 0 0 0\n[19,] 0e+00 0 0 0 0 0 0 0 0 0 0\n[20,] 0e+00 0 0 0 0 0 0 0 0 0 0\n[21,] 0e+00 0 0 0 0 0 0 0 0 0 0\n[22,] 0e+00 0 0 0 0 0 0 0 0 0 0\n[23,] 0e+00 0 0 0 0 0 0 0 0 0 0\n[24,] 0e+00 0 0 0 0 0 0 0 0 0 0\n[25,] 0e+00 0 0 0 0 0 0 0 0 0 0\n[26,] 0e+00 0 0 0 0 0 0 0 0 0 0\n[27,] 0e+00 0 0 0 0 0 0 0 0 0 0\n[28,] 0e+00 0 0 0 0 0 0 0 0 0 0\n[29,] 0e+00 0 0 0 0 0 0 0 0 0 0\n[30,] 0e+00 0 0 0 0 0 0 0 0 0 0\n[31,] 0e+00 0 0 0 0 0 0 0 0 0 0\n[32,] 0e+00 0 0 0 0 0 0 0 0 0 0\n[33,] 0e+00 0 0 
0 0 0 0 0 0 0 0\n[34,] 0e+00 0 0 0 0 0 0 0 0 0 0\n[35,] 0e+00 0 0 0 0 0 0 0 0 0 0\n[36,] 0e+00 0 0 0 0 0 0 0 0 0 0\n[37,] 0e+00 0 0 0 0 0 0 0 0 0 0\n[38,] 0e+00 0 0 0 0 0 0 0 0 0 0\n[39,] 0e+00 0 0 0 0 0 0 0 0 0 0\n[40,] 0e+00 0 0 0 0 0 0 0 0 0 0\n[41,] 0e+00 0 0 0 0 0 0 0 0 0 0\n[42,] 0e+00 0 0 0 0 0 0 0 0 0 0\n[43,] 0e+00 0 0 0 0 0 0 0 0 0 0\n[44,] 0e+00 0 0 0 0 0 0 0 0 0 0\n[45,] 0e+00 0 0 0 0 0 0 0 0 0 0\n[46,] 0e+00 0 0 0 0 0 0 0 0 0 0\n[47,] 0e+00 0 0 0 0 0 0 0 0 0 0\n[48,] 0e+00 0 0 0 0 0 0 0 0 0 0\n[49,] 0e+00 0 0 0 0 0 0 0 0 0 0\n[50,] 0e+00 0 0 0 0 0 0 0 0 0 0\n[51,] 0e+00 0 0 0 0 0 0 0 0 0 0\n[52,] 0e+00 0 0 0 0 0 0 0 0 0 0\n[53,] 0e+00 0 0 0 0 0 0 0 0 0 0\n[54,] 0e+00 0 0 0 0 0 0 0 0 0 0\n UV12-A UV13-A UV14-A UV15-A UV16-A V1-A V2-A V3-A V4-A V5-A V6-A V7-A\n [1,] 0 0 0 0 0 0 0 0 0 0 0 0\n [2,] 0 0 0 0 0 0 0 0 0 0 0 0\n [3,] 0 0 0 0 0 0 0 0 0 0 0 0\n [4,] 0 0 0 0 0 0 0 0 0 0 0 0\n [5,] 0 0 0 0 0 0 0 0 0 0 0 0\n [6,] 0 0 0 0 0 0 0 0 0 0 0 0\n [7,] 0 0 0 0 0 0 0 0 0 0 0 0\n [8,] 0 0 0 0 0 0 0 0 0 0 0 0\n [9,] 0 0 0 0 0 0 0 0 0 0 0 0\n[10,] 0 0 0 0 0 0 0 0 0 0 0 0\n[11,] 0 0 0 0 0 0 0 0 0 0 0 0\n[12,] 1 0 0 0 0 0 0 0 0 0 0 0\n[13,] 0 1 0 0 0 0 0 0 0 0 0 0\n[14,] 0 0 1 0 0 0 0 0 0 0 0 0\n[15,] 0 0 0 1 0 0 0 0 0 0 0 0\n[16,] 0 0 0 0 1 0 0 0 0 0 0 0\n[17,] 0 0 0 0 0 1 0 0 0 0 0 0\n[18,] 0 0 0 0 0 0 1 0 0 0 0 0\n[19,] 0 0 0 0 0 0 0 1 0 0 0 0\n[20,] 0 0 0 0 0 0 0 0 1 0 0 0\n[21,] 0 0 0 0 0 0 0 0 0 1 0 0\n[22,] 0 0 0 0 0 0 0 0 0 0 1 0\n[23,] 0 0 0 0 0 0 0 0 0 0 0 1\n[24,] 0 0 0 0 0 0 0 0 0 0 0 0\n[25,] 0 0 0 0 0 0 0 0 0 0 0 0\n[26,] 0 0 0 0 0 0 0 0 0 0 0 0\n[27,] 0 0 0 0 0 0 0 0 0 0 0 0\n[28,] 0 0 0 0 0 0 0 0 0 0 0 0\n[29,] 0 0 0 0 0 0 0 0 0 0 0 0\n[30,] 0 0 0 0 0 0 0 0 0 0 0 0\n[31,] 0 0 0 0 0 0 0 0 0 0 0 0\n[32,] 0 0 0 0 0 0 0 0 0 0 0 0\n[33,] 0 0 0 0 0 0 0 0 0 0 0 0\n[34,] 0 0 0 0 0 0 0 0 0 0 0 0\n[35,] 0 0 0 0 0 0 0 0 0 0 0 0\n[36,] 0 0 0 0 0 0 0 0 0 0 0 0\n[37,] 0 0 0 0 0 0 0 0 0 0 0 0\n[38,] 0 0 0 0 0 0 0 0 0 0 0 0\n[39,] 0 0 0 0 0 0 0 0 0 0 0 0\n[40,] 0 
0 0 0 0 0 0 0 0 0 0 0\n[41,] 0 0 0 0 0 0 0 0 0 0 0 0\n[42,] 0 0 0 0 0 0 0 0 0 0 0 0\n[43,] 0 0 0 0 0 0 0 0 0 0 0 0\n[44,] 0 0 0 0 0 0 0 0 0 0 0 0\n[45,] 0 0 0 0 0 0 0 0 0 0 0 0\n[46,] 0 0 0 0 0 0 0 0 0 0 0 0\n[47,] 0 0 0 0 0 0 0 0 0 0 0 0\n[48,] 0 0 0 0 0 0 0 0 0 0 0 0\n[49,] 0 0 0 0 0 0 0 0 0 0 0 0\n[50,] 0 0 0 0 0 0 0 0 0 0 0 0\n[51,] 0 0 0 0 0 0 0 0 0 0 0 0\n[52,] 0 0 0 0 0 0 0 0 0 0 0 0\n[53,] 0 0 0 0 0 0 0 0 0 0 0 0\n[54,] 0 0 0 0 0 0 0 0 0 0 0 0\n V8-A V9-A V10-A V11-A V12-A V13-A V14-A V15-A V16-A B1-A B2-A B3-A B4-A\n [1,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n [2,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n [3,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n [4,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n [5,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n [6,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n [7,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n [8,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n [9,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n[10,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n[11,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n[12,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n[13,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n[14,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n[15,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n[16,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n[17,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n[18,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n[19,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n[20,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n[21,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n[22,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n[23,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n[24,] 1 0 0 0 0 0 0 0 0 0 0 0 0\n[25,] 0 1 0 0 0 0 0 0 0 0 0 0 0\n[26,] 0 0 1 0 0 0 0 0 0 0 0 0 0\n[27,] 0 0 0 1 0 0 0 0 0 0 0 0 0\n[28,] 0 0 0 0 1 0 0 0 0 0 0 0 0\n[29,] 0 0 0 0 0 1 0 0 0 0 0 0 0\n[30,] 0 0 0 0 0 0 1 0 0 0 0 0 0\n[31,] 0 0 0 0 0 0 0 1 0 0 0 0 0\n[32,] 0 0 0 0 0 0 0 0 1 0 0 0 0\n[33,] 0 0 0 0 0 0 0 0 0 1 0 0 0\n[34,] 0 0 0 0 0 0 0 0 0 0 1 0 0\n[35,] 0 0 0 0 0 0 0 0 0 0 0 1 0\n[36,] 0 0 0 0 0 0 0 0 0 0 0 0 1\n[37,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n[38,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n[39,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n[40,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n[41,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n[42,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n[43,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n[44,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n[45,] 0 0 0 0 0 
0 0 0 0 0 0 0 0\n[46,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n[47,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n[48,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n[49,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n[50,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n[51,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n[52,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n[53,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n[54,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n B5-A B6-A B7-A B8-A B9-A B10-A B11-A B12-A B13-A B14-A R1-A R2-A R3-A\n [1,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n [2,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n [3,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n [4,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n [5,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n [6,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n [7,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n [8,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n [9,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n[10,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n[11,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n[12,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n[13,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n[14,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n[15,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n[16,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n[17,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n[18,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n[19,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n[20,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n[21,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n[22,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n[23,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n[24,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n[25,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n[26,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n[27,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n[28,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n[29,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n[30,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n[31,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n[32,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n[33,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n[34,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n[35,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n[36,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n[37,] 1 0 0 0 0 0 0 0 0 0 0 0 0\n[38,] 0 1 0 0 0 0 0 0 0 0 0 0 0\n[39,] 0 0 1 0 0 0 0 0 0 0 0 0 0\n[40,] 0 0 0 1 0 0 0 0 0 0 0 0 0\n[41,] 0 0 0 0 1 0 0 0 0 0 0 0 0\n[42,] 0 0 0 0 0 1 0 0 0 0 0 0 0\n[43,] 0 0 0 0 0 0 1 0 0 0 0 0 0\n[44,] 0 0 0 0 0 0 0 1 0 0 0 0 0\n[45,] 0 0 0 0 0 0 0 0 1 0 0 0 0\n[46,] 0 0 0 0 0 0 0 0 0 1 0 0 0\n[47,] 0 0 0 0 0 0 0 0 0 0 1 0 0\n[48,] 0 0 0 0 0 0 0 0 0 0 0 1 0\n[49,] 0 0 0 0 0 0 0 0 0 0 0 0 
1\n[50,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n[51,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n[52,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n[53,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n[54,] 0 0 0 0 0 0 0 0 0 0 0 0 0\n R4-A R5-A R6-A R7-A R8-A\n [1,] 0 0 0 0 0\n [2,] 0 0 0 0 0\n [3,] 0 0 0 0 0\n [4,] 0 0 0 0 0\n [5,] 0 0 0 0 0\n [6,] 0 0 0 0 0\n [7,] 0 0 0 0 0\n [8,] 0 0 0 0 0\n [9,] 0 0 0 0 0\n[10,] 0 0 0 0 0\n[11,] 0 0 0 0 0\n[12,] 0 0 0 0 0\n[13,] 0 0 0 0 0\n[14,] 0 0 0 0 0\n[15,] 0 0 0 0 0\n[16,] 0 0 0 0 0\n[17,] 0 0 0 0 0\n[18,] 0 0 0 0 0\n[19,] 0 0 0 0 0\n[20,] 0 0 0 0 0\n[21,] 0 0 0 0 0\n[22,] 0 0 0 0 0\n[23,] 0 0 0 0 0\n[24,] 0 0 0 0 0\n[25,] 0 0 0 0 0\n[26,] 0 0 0 0 0\n[27,] 0 0 0 0 0\n[28,] 0 0 0 0 0\n[29,] 0 0 0 0 0\n[30,] 0 0 0 0 0\n[31,] 0 0 0 0 0\n[32,] 0 0 0 0 0\n[33,] 0 0 0 0 0\n[34,] 0 0 0 0 0\n[35,] 0 0 0 0 0\n[36,] 0 0 0 0 0\n[37,] 0 0 0 0 0\n[38,] 0 0 0 0 0\n[39,] 0 0 0 0 0\n[40,] 0 0 0 0 0\n[41,] 0 0 0 0 0\n[42,] 0 0 0 0 0\n[43,] 0 0 0 0 0\n[44,] 0 0 0 0 0\n[45,] 0 0 0 0 0\n[46,] 0 0 0 0 0\n[47,] 0 0 0 0 0\n[48,] 0 0 0 0 0\n[49,] 0 0 0 0 0\n[50,] 1 0 0 0 0\n[51,] 0 1 0 0 0\n[52,] 0 0 1 0 0\n[53,] 0 0 0 1 0\n[54,] 0 0 0 0 1\n\n\n\n\n\n\n\n\n\nTOT\n\n\n\nTotal events (in this case my downsampled 100 cells)\n\n\n\n\nDescriptionList$`$TOT`\n\n[1] \"100\"\n\n\n\n\n\n\n\n\n\nVolume\n\n\n\nVolume amount acquired during acquisition.\n\n\n\n\nDescriptionList$`$VOL`\n\n[1] \"30.31\"\n\n\n\n\n\n\n\n\n\nSoftware\n\n\n\nSoftware used and version\n\n\n\n\nDescriptionList$CREATOR\n\n[1] \"SpectroFlo 3.3.0\"\n\n\n\nYou will notice at this point, the keyword names including a “$” symbol have stopped, so tick marks are no longer required (except when there is a space in the name). 
The only $ remaining is being used as a selector for a particular item in the list.\n\nDetectors <- DescriptionList[390:398]\nDetectors\n\n$`$VOL`\n[1] \"30.31\"\n\n$`APPLY COMPENSATION`\n[1] \"FALSE\"\n\n$CHARSET\n[1] \"utf-8\"\n\n$CREATOR\n[1] \"SpectroFlo 3.3.0\"\n\n$FCSversion\n[1] \"3\"\n\n$FILENAME\n[1] \"data/CellCounts4L_AB_05_ND050_05.fcs\"\n\n$`FSC ASF`\n[1] \"1.21\"\n\n$GROUPNAME\n[1] \"ND050\"\n\n$GUID\n[1] \"CellCounts4L_AB_05-ND050-05.fcs\"\n\n\n\n\n\n\n\n\nFILENAME\n\n\n\nBasically the full file.path to the .fcs file of interest.\n\n\n\n\nDescriptionList$FILENAME\n\n[1] \"data/CellCounts4L_AB_05_ND050_05.fcs\"\n\n\n\n\n\n\n\n\n\nGROUPNAME\n\n\n\nThe Name assigned to the acquisition Group.\n\n\n\n\nDescriptionList$GROUPNAME\n\n[1] \"ND050\"", + "crumbs": [ + "About", + "Intro to R", + "03 - Inside a .FCS file" + ] }, { - "objectID": "course/03_InsideFCSFile/slides.html#laser-metadata", - "href": "course/03_InsideFCSFile/slides.html#laser-metadata", + "objectID": "course/03_InsideFCSFile/index.html#laser-metadata", + "href": "course/03_InsideFCSFile/index.html#laser-metadata", "title": "03 - Inside an FCS File", "section": "Laser Metadata", - "text": "Laser Metadata\n\n\n\n\n\n\n\n\n.\n\n\nNext up, there is a small stretch of keywords containing the values associated with the individual lasers as far as delays and area scaling factors for a particular day (also useful when plotted).\n\n\n\n\n\n\n\nDetectors <- DescriptionList[399:410]\nDetectors\n\n$LASER1ASF\n[1] \"1.09\"\n\n$LASER1DELAY\n[1] \"-19.525\"\n\n$LASER1NAME\n[1] \"Violet\"\n\n$LASER2ASF\n[1] \"1.14\"\n\n$LASER2DELAY\n[1] \"0\"\n\n$LASER2NAME\n[1] \"Blue\"\n\n$LASER3ASF\n[1] \"1.02\"\n\n$LASER3DELAY\n[1] \"20.15\"\n\n$LASER3NAME\n[1] \"Red\"\n\n$LASER4ASF\n[1] \"0.92\"\n\n$LASER4DELAY\n[1] \"40.725\"\n\n$LASER4NAME\n[1] \"UV\"" + "text": "Laser Metadata\nNext up, there is a small stretch of keywords containing the values associated with the individual lasers as far as delays and area 
scaling factors for a particular day (also useful when plotted).\n\nDetectors <- DescriptionList[399:410]\nDetectors\n\n$LASER1ASF\n[1] \"1.09\"\n\n$LASER1DELAY\n[1] \"-19.525\"\n\n$LASER1NAME\n[1] \"Violet\"\n\n$LASER2ASF\n[1] \"1.14\"\n\n$LASER2DELAY\n[1] \"0\"\n\n$LASER2NAME\n[1] \"Blue\"\n\n$LASER3ASF\n[1] \"1.02\"\n\n$LASER3DELAY\n[1] \"20.15\"\n\n$LASER3NAME\n[1] \"Red\"\n\n$LASER4ASF\n[1] \"0.92\"\n\n$LASER4DELAY\n[1] \"40.725\"\n\n$LASER4NAME\n[1] \"UV\"", + "crumbs": [ + "About", + "Intro to R", + "03 - Inside a .FCS file" + ] }, { - "objectID": "course/03_InsideFCSFile/slides.html#display", - "href": "course/03_InsideFCSFile/slides.html#display", + "objectID": "course/03_InsideFCSFile/index.html#display", + "href": "course/03_InsideFCSFile/index.html#display", "title": "03 - Inside an FCS File", "section": "Display", - "text": "Display\n\n\n\n\n\n\n\n\n.\n\n\nThen there is a stretch matching whether a particular detector needs to be displayed as linear (in the case of time and scatter) or as log (for individual detectors).\n\n\n\n\n\n\n\nDetectors <- DescriptionList[412:472]\nDetectors\n\n$P10DISPLAY\n[1] \"LOG\"\n\n$P11DISPLAY\n[1] \"LOG\"\n\n$P12DISPLAY\n[1] \"LOG\"\n\n$P13DISPLAY\n[1] \"LOG\"\n\n$P14DISPLAY\n[1] \"LOG\"\n\n$P15DISPLAY\n[1] \"LOG\"\n\n$P16DISPLAY\n[1] \"LOG\"\n\n$P17DISPLAY\n[1] \"LOG\"\n\n$P18DISPLAY\n[1] \"LIN\"\n\n$P19DISPLAY\n[1] \"LIN\"\n\n$P1DISPLAY\n[1] \"LOG\"\n\n$P20DISPLAY\n[1] \"LOG\"\n\n$P21DISPLAY\n[1] \"LOG\"\n\n$P22DISPLAY\n[1] \"LOG\"\n\n$P23DISPLAY\n[1] \"LOG\"\n\n$P24DISPLAY\n[1] \"LOG\"\n\n$P25DISPLAY\n[1] \"LOG\"\n\n$P26DISPLAY\n[1] \"LOG\"\n\n$P27DISPLAY\n[1] \"LOG\"\n\n$P28DISPLAY\n[1] \"LOG\"\n\n$P29DISPLAY\n[1] \"LOG\"\n\n$P2DISPLAY\n[1] \"LOG\"\n\n$P30DISPLAY\n[1] \"LOG\"\n\n$P31DISPLAY\n[1] \"LOG\"\n\n$P32DISPLAY\n[1] \"LOG\"\n\n$P33DISPLAY\n[1] \"LOG\"\n\n$P34DISPLAY\n[1] \"LOG\"\n\n$P35DISPLAY\n[1] \"LOG\"\n\n$P36DISPLAY\n[1] \"LIN\"\n\n$P37DISPLAY\n[1] \"LIN\"\n\n$P38DISPLAY\n[1] \"LIN\"\n\n$P39DISPLAY\n[1] 
\"LIN\"\n\n$P3DISPLAY\n[1] \"LOG\"\n\n$P40DISPLAY\n[1] \"LOG\"\n\n$P41DISPLAY\n[1] \"LOG\"\n\n$P42DISPLAY\n[1] \"LOG\"\n\n$P43DISPLAY\n[1] \"LOG\"\n\n$P44DISPLAY\n[1] \"LOG\"\n\n$P45DISPLAY\n[1] \"LOG\"\n\n$P46DISPLAY\n[1] \"LOG\"\n\n$P47DISPLAY\n[1] \"LOG\"\n\n$P48DISPLAY\n[1] \"LOG\"\n\n$P49DISPLAY\n[1] \"LOG\"\n\n$P4DISPLAY\n[1] \"LOG\"\n\n$P50DISPLAY\n[1] \"LOG\"\n\n$P51DISPLAY\n[1] \"LOG\"\n\n$P52DISPLAY\n[1] \"LOG\"\n\n$P53DISPLAY\n[1] \"LOG\"\n\n$P54DISPLAY\n[1] \"LOG\"\n\n$P55DISPLAY\n[1] \"LOG\"\n\n$P56DISPLAY\n[1] \"LOG\"\n\n$P57DISPLAY\n[1] \"LOG\"\n\n$P58DISPLAY\n[1] \"LOG\"\n\n$P59DISPLAY\n[1] \"LOG\"\n\n$P5DISPLAY\n[1] \"LOG\"\n\n$P60DISPLAY\n[1] \"LOG\"\n\n$P61DISPLAY\n[1] \"LOG\"\n\n$P6DISPLAY\n[1] \"LOG\"\n\n$P7DISPLAY\n[1] \"LOG\"\n\n$P8DISPLAY\n[1] \"LOG\"\n\n$P9DISPLAY\n[1] \"LOG\"" + "text": "Display\nThen there is a stretch matching whether a particular detector needs to be displayed as linear (in the case of time and scatter) or as log (for individual detectors).\n\nDetectors <- DescriptionList[412:472]\nDetectors\n\n$P10DISPLAY\n[1] \"LOG\"\n\n$P11DISPLAY\n[1] \"LOG\"\n\n$P12DISPLAY\n[1] \"LOG\"\n\n$P13DISPLAY\n[1] \"LOG\"\n\n$P14DISPLAY\n[1] \"LOG\"\n\n$P15DISPLAY\n[1] \"LOG\"\n\n$P16DISPLAY\n[1] \"LOG\"\n\n$P17DISPLAY\n[1] \"LOG\"\n\n$P18DISPLAY\n[1] \"LIN\"\n\n$P19DISPLAY\n[1] \"LIN\"\n\n$P1DISPLAY\n[1] \"LOG\"\n\n$P20DISPLAY\n[1] \"LOG\"\n\n$P21DISPLAY\n[1] \"LOG\"\n\n$P22DISPLAY\n[1] \"LOG\"\n\n$P23DISPLAY\n[1] \"LOG\"\n\n$P24DISPLAY\n[1] \"LOG\"\n\n$P25DISPLAY\n[1] \"LOG\"\n\n$P26DISPLAY\n[1] \"LOG\"\n\n$P27DISPLAY\n[1] \"LOG\"\n\n$P28DISPLAY\n[1] \"LOG\"\n\n$P29DISPLAY\n[1] \"LOG\"\n\n$P2DISPLAY\n[1] \"LOG\"\n\n$P30DISPLAY\n[1] \"LOG\"\n\n$P31DISPLAY\n[1] \"LOG\"\n\n$P32DISPLAY\n[1] \"LOG\"\n\n$P33DISPLAY\n[1] \"LOG\"\n\n$P34DISPLAY\n[1] \"LOG\"\n\n$P35DISPLAY\n[1] \"LOG\"\n\n$P36DISPLAY\n[1] \"LIN\"\n\n$P37DISPLAY\n[1] \"LIN\"\n\n$P38DISPLAY\n[1] \"LIN\"\n\n$P39DISPLAY\n[1] \"LIN\"\n\n$P3DISPLAY\n[1] \"LOG\"\n\n$P40DISPLAY\n[1] 
\"LOG\"\n\n$P41DISPLAY\n[1] \"LOG\"\n\n$P42DISPLAY\n[1] \"LOG\"\n\n$P43DISPLAY\n[1] \"LOG\"\n\n$P44DISPLAY\n[1] \"LOG\"\n\n$P45DISPLAY\n[1] \"LOG\"\n\n$P46DISPLAY\n[1] \"LOG\"\n\n$P47DISPLAY\n[1] \"LOG\"\n\n$P48DISPLAY\n[1] \"LOG\"\n\n$P49DISPLAY\n[1] \"LOG\"\n\n$P4DISPLAY\n[1] \"LOG\"\n\n$P50DISPLAY\n[1] \"LOG\"\n\n$P51DISPLAY\n[1] \"LOG\"\n\n$P52DISPLAY\n[1] \"LOG\"\n\n$P53DISPLAY\n[1] \"LOG\"\n\n$P54DISPLAY\n[1] \"LOG\"\n\n$P55DISPLAY\n[1] \"LOG\"\n\n$P56DISPLAY\n[1] \"LOG\"\n\n$P57DISPLAY\n[1] \"LOG\"\n\n$P58DISPLAY\n[1] \"LOG\"\n\n$P59DISPLAY\n[1] \"LOG\"\n\n$P5DISPLAY\n[1] \"LOG\"\n\n$P60DISPLAY\n[1] \"LOG\"\n\n$P61DISPLAY\n[1] \"LOG\"\n\n$P6DISPLAY\n[1] \"LOG\"\n\n$P7DISPLAY\n[1] \"LOG\"\n\n$P8DISPLAY\n[1] \"LOG\"\n\n$P9DISPLAY\n[1] \"LOG\"\n\n\nAnd a few final keywords with threshold, window scaling and other user selected settings.\n\nDetectors <- DescriptionList[473:476]\nDetectors\n\n$THRESHOLD\n[1] \"(FSC,50000)\"\n\n$TUBENAME\n[1] \"05\"\n\n$USERSETTINGNAME\n[1] \"DTR_CellCounts\"\n\n$`WINDOW EXTENSION`\n[1] \"3\"", + "crumbs": [ + "About", + "Intro to R", + "03 - Inside a .FCS file" + ] }, { - "objectID": "course/03_InsideFCSFile/slides.html#flowcore-parameters", - "href": "course/03_InsideFCSFile/slides.html#flowcore-parameters", + "objectID": "course/03_InsideFCSFile/index.html#flowcore-parameters", + "href": "course/03_InsideFCSFile/index.html#flowcore-parameters", "title": "03 - Inside an FCS File", "section": "flowCore Parameters", - "text": "flowCore Parameters\n\n\n\n\n\n\n\n\n.\n\n\nDepending on the arguments selected during read.FCS(), we might also encounter additional keywords that are added in by flowCore. 
For example, we do not see these keywords when “transformation” is set to FALSE.\n\n\n\n\n\n\n\nflowCoreCheck <- read.FCS(filename=firstfile,\n transformation = FALSE, truncate_max_range = FALSE)\n\nflowCoreCheck\n\nflowFrame object 'CellCounts4L_AB_05-ND050-05.fcs'\nwith 100 cells and 61 observables:\n name desc range minRange maxRange\n$P1 Time NA 272140 0 272139\n$P2 UV1-A NA 4194304 -111 4194303\n$P3 UV2-A NA 4194304 -111 4194303\n$P4 UV3-A NA 4194304 -111 4194303\n$P5 UV4-A NA 4194304 -111 4194303\n... ... ... ... ... ...\n$P57 R4-A NA 4194304 -111 4194303\n$P58 R5-A NA 4194304 -111 4194303\n$P59 R6-A NA 4194304 -111 4194303\n$P60 R7-A NA 4194304 -111 4194303\n$P61 R8-A NA 4194304 -111 4194303\n476 keywords are stored in the 'description' slot" + "text": "flowCore Parameters\nDepending on the arguments selected during read.FCS(), we might also encounter additional keywords that are added in by flowCore. For example, we do not see these keywords when “transformation” is set to FALSE.\n\nflowCoreCheck <- read.FCS(filename=firstfile,\n transformation = FALSE, truncate_max_range = FALSE)\n\nflowCoreCheck\n\nflowFrame object 'CellCounts4L_AB_05-ND050-05.fcs'\nwith 100 cells and 61 observables:\n name desc range minRange maxRange\n$P1 Time NA 272140 0 272139\n$P2 UV1-A NA 4194304 -111 4194303\n$P3 UV2-A NA 4194304 -111 4194303\n$P4 UV3-A NA 4194304 -111 4194303\n$P5 UV4-A NA 4194304 -111 4194303\n... ... ... ... ... 
...\n$P57 R4-A NA 4194304 -111 4194303\n$P58 R5-A NA 4194304 -111 4194303\n$P59 R6-A NA 4194304 -111 4194303\n$P60 R7-A NA 4194304 -111 4194303\n$P61 R8-A NA 4194304 -111 4194303\n476 keywords are stored in the 'description' slot\n\n\n\nNoChange <- keyword(flowCoreCheck)\nDetectors <- NoChange [476:500]\nDetectors\n\n$`WINDOW EXTENSION`\n[1] \"3\"\n\n$<NA>\nNULL\n\n$<NA>\nNULL\n\n$<NA>\nNULL\n\n$<NA>\nNULL\n\n$<NA>\nNULL\n\n$<NA>\nNULL\n\n$<NA>\nNULL\n\n$<NA>\nNULL\n\n$<NA>\nNULL\n\n$<NA>\nNULL\n\n$<NA>\nNULL\n\n$<NA>\nNULL\n\n$<NA>\nNULL\n\n$<NA>\nNULL\n\n$<NA>\nNULL\n\n$<NA>\nNULL\n\n$<NA>\nNULL\n\n$<NA>\nNULL\n\n$<NA>\nNULL\n\n$<NA>\nNULL\n\n$<NA>\nNULL\n\n$<NA>\nNULL\n\n$<NA>\nNULL\n\n$<NA>\nNULL\n\n\nBy contrast, if we had set “transformation” to TRUE:\n\nflowCoreCheck <- read.FCS(filename=firstfile,\n transformation = TRUE, truncate_max_range = FALSE)\n\nflowCoreCheck\n\nflowFrame object 'CellCounts4L_AB_05-ND050-05.fcs'\nwith 100 cells and 61 observables:\n name desc range minRange maxRange\n$P1 Time NA 272140 0 272139\n$P2 UV1-A NA 4194304 -111 4194303\n$P3 UV2-A NA 4194304 -111 4194303\n$P4 UV3-A NA 4194304 -111 4194303\n$P5 UV4-A NA 4194304 -111 4194303\n... ... ... ... ... 
...\n$P57 R4-A NA 4194304 -111 4194303\n$P58 R5-A NA 4194304 -111 4194303\n$P59 R6-A NA 4194304 -111 4194303\n$P60 R7-A NA 4194304 -111 4194303\n$P61 R8-A NA 4194304 -111 4194303\n599 keywords are stored in the 'description' slot\n\n\n\nYesChange <- keyword(flowCoreCheck)\nDetectors <- YesChange [476:500]\nDetectors\n\n$`WINDOW EXTENSION`\n[1] \"3\"\n\n$transformation\n[1] \"applied\"\n\n$`flowCore_$P1Rmax`\n[1] \"272140\"\n\n$`flowCore_$P1Rmin`\n[1] \"0\"\n\n$`flowCore_$P2Rmax`\n[1] \"4194304\"\n\n$`flowCore_$P2Rmin`\n[1] \"-111\"\n\n$`flowCore_$P3Rmax`\n[1] \"4194304\"\n\n$`flowCore_$P3Rmin`\n[1] \"-111\"\n\n$`flowCore_$P4Rmax`\n[1] \"4194304\"\n\n$`flowCore_$P4Rmin`\n[1] \"-111\"\n\n$`flowCore_$P5Rmax`\n[1] \"4194304\"\n\n$`flowCore_$P5Rmin`\n[1] \"-111\"\n\n$`flowCore_$P6Rmax`\n[1] \"4194304\"\n\n$`flowCore_$P6Rmin`\n[1] \"-111\"\n\n$`flowCore_$P7Rmax`\n[1] \"4194304\"\n\n$`flowCore_$P7Rmin`\n[1] \"-111\"\n\n$`flowCore_$P8Rmax`\n[1] \"4194304\"\n\n$`flowCore_$P8Rmin`\n[1] \"-26.3464946746826\"\n\n$`flowCore_$P9Rmax`\n[1] \"4194304\"\n\n$`flowCore_$P9Rmin`\n[1] \"-111\"\n\n$`flowCore_$P10Rmax`\n[1] \"4194304\"\n\n$`flowCore_$P10Rmin`\n[1] \"0\"\n\n$`flowCore_$P11Rmax`\n[1] \"4194304\"\n\n$`flowCore_$P11Rmin`\n[1] \"-111\"\n\n$`flowCore_$P12Rmax`\n[1] \"4194304\"\n\n\n\n\nFor some flow cytometry R packages, you will notice when opening their exported .fcs outputs in commercial software that these flowCore keywords have ended up integrated. 
It is likely that somewhere in the package code the author forgot to set transformation to FALSE, which is why we are seeing these flowCore keywords after the fact.",
+ "crumbs": [
+ "About",
+ "Intro to R",
+ "03 - Inside a .FCS file"
+ ]
},
{
+ "objectID": "course/04_IntroToTidyverse/BonusContent.html",
+ "href": "course/04_IntroToTidyverse/BonusContent.html",
+ "title": "Bonus Content",
+ "section": "",
+ "text": "thefilepath <- file.path(\"data\", \"Dataset.csv\")\n\nthefilepath\n\n[1] \"data/Dataset.csv\"\nData <- read.csv(file=thefilepath, check.names=FALSE)\ncolnames(Data)\n\n [1] \"bid\" \"timepoint\" \"Condition\" \n [4] \"Date\" \"infant_sex\" \"ptype\" \n [7] \"root\" \"singletsFSC\" \"singletsSSC\" \n[10] \"singletsSSCB\" \"CD45\" \"NotMonocytes\" \n[13] \"nonDebris\" \"lymphocytes\" \"live\" \n[16] \"Dump+\" \"Dump-\" \"Tcells\" \n[19] \"Vd2+\" \"Vd2-\" \"Va7.2+\" \n[22] \"Va7.2-\" \"CD4+\" \"CD4-\" \n[25] \"CD8+\" \"CD8-\" \"Tcells_count\" \n[28] \"lymphocytes_count\" \"Monocytes\" \"Debris\" \n[31] \"CD45_count\""
+ },
+ {
+ "objectID": "course/04_IntroToTidyverse/BonusContent.html#pull",
+ "href": "course/04_IntroToTidyverse/BonusContent.html#pull",
+ "title": "Bonus Content",
+ "section": "Pull",
+ "text": "Pull"
+ },
+ {
+ "objectID": "course/04_IntroToTidyverse/BonusContent.html#case-when",
+ "href": "course/04_IntroToTidyverse/BonusContent.html#case-when",
+ "title": "Bonus Content",
+ "section": "Case-When",
+ "text": "Case-When\nCase-when is a useful function, but may be a bit much to try to teach in the main segment. Basically, when the condition on the left side of the ~ is fulfilled, it will execute what is being specified on the right hand side.\nIn turn, we can combine these together by adding a “,”. 
I tend to use this mutate str_detect case_when combination when encountering messy data out in the wild where I need to selectively change particular cell values in a consistent reproducible manner."
+ },
+ {
+ "objectID": "course/04_IntroToTidyverse/BonusContent.html#selecting-columns-base-r",
+ "href": "course/04_IntroToTidyverse/BonusContent.html#selecting-columns-base-r",
+ "title": "Bonus Content",
+ "section": "Selecting Columns (Base R)",
+ "text": "Selecting Columns (Base R)\nAs we saw last week, there are multiple ways to select values from particular columns in base R. If we had wanted to retrieve the “Date” column, why not first identify its index position, and use [,] to extract the underlying data?\n\ncolnames(Data)\n\n [1] \"bid\" \"timepoint\" \"Condition\" \n [4] \"Date\" \"infant_sex\" \"ptype\" \n [7] \"root\" \"singletsFSC\" \"singletsSSC\" \n[10] \"singletsSSCB\" \"CD45\" \"NotMonocytes\" \n[13] \"nonDebris\" \"lymphocytes\" \"live\" \n[16] \"Dump+\" \"Dump-\" \"Tcells\" \n[19] \"Vd2+\" \"Vd2-\" \"Va7.2+\" \n[22] \"Va7.2-\" \"CD4+\" \"CD4-\" \n[25] \"CD8+\" \"CD8-\" \"Tcells_count\" \n[28] \"lymphocytes_count\" \"Monocytes\" \"Debris\" \n[31] \"CD45_count\" \n\n\n\ncolnames(Data)[4]\n\n[1] \"Date\"\n\n\n\nDataColumn <- Data[,4] # Column specified after the ,\nDataColumn\n\n [1] \"2025-07-26\" \"2025-07-26\" \"2025-07-26\" \"2025-07-26\" \"2025-07-26\"\n [6] \"2025-07-26\" \"2025-07-26\" \"2025-07-26\" \"2025-07-26\" \"2025-07-26\"\n [11] \"2025-07-26\" \"2025-07-26\" \"2025-07-26\" \"2025-07-26\" \"2025-07-26\"\n [16] \"2025-07-26\" \"2025-07-26\" \"2025-07-26\" \"2025-07-26\" \"2025-07-26\"\n [21] \"2025-07-26\" \"2025-07-26\" \"2025-07-26\" \"2025-07-26\" \"2025-07-26\"\n [26] \"2025-07-26\" \"2025-07-29\" \"2025-07-29\" \"2025-07-29\" \"2025-07-29\"\n [31] \"2025-07-29\" \"2025-07-29\" \"2025-07-29\" \"2025-07-29\" \"2025-07-29\"\n [36] \"2025-07-29\" \"2025-07-29\" \"2025-07-29\" \"2025-07-29\" \"2025-07-29\"\n [41] \"2025-07-29\" 
\"2025-07-29\" \"2025-07-29\" \"2025-07-29\" \"2025-07-29\"\n [46] \"2025-07-29\" \"2025-07-29\" \"2025-07-29\" \"2025-07-31\" \"2025-07-31\"\n [51] \"2025-07-31\" \"2025-07-31\" \"2025-07-31\" \"2025-07-31\" \"2025-07-31\"\n [56] \"2025-07-31\" \"2025-07-31\" \"2025-07-31\" \"2025-07-31\" \"2025-07-31\"\n [61] \"2025-07-31\" \"2025-07-31\" \"2025-07-31\" \"2025-07-31\" \"2025-07-31\"\n [66] \"2025-07-31\" \"2025-07-31\" \"2025-07-31\" \"2025-07-31\" \"2025-07-31\"\n [71] \"2025-07-31\" \"2025-07-31\" \"2025-07-31\" \"2025-07-31\" \"2025-07-31\"\n [76] \"2025-08-05\" \"2025-08-05\" \"2025-08-05\" \"2025-08-05\" \"2025-08-05\"\n [81] \"2025-08-05\" \"2025-08-05\" \"2025-08-05\" \"2025-08-05\" \"2025-08-05\"\n [86] \"2025-08-05\" \"2025-08-05\" \"2025-08-05\" \"2025-08-05\" \"2025-08-05\"\n [91] \"2025-08-05\" \"2025-08-05\" \"2025-08-05\" \"2025-08-05\" \"2025-08-05\"\n [96] \"2025-08-05\" \"2025-08-05\" \"2025-08-05\" \"2025-08-07\" \"2025-08-07\"\n[101] \"2025-08-07\" \"2025-08-07\" \"2025-08-07\" \"2025-08-07\" \"2025-08-07\"\n[106] \"2025-08-07\" \"2025-08-07\" \"2025-08-07\" \"2025-08-07\" \"2025-08-07\"\n[111] \"2025-08-07\" \"2025-08-07\" \"2025-08-07\" \"2025-08-07\" \"2025-08-07\"\n[116] \"2025-08-07\" \"2025-08-07\" \"2025-08-07\" \"2025-08-07\" \"2025-08-07\"\n[121] \"2025-08-07\" \"2025-08-07\" \"2025-08-07\" \"2025-08-07\" \"2025-08-22\"\n[126] \"2025-08-22\" \"2025-08-22\" \"2025-08-22\" \"2025-08-22\" \"2025-08-22\"\n[131] \"2025-08-22\" \"2025-08-22\" \"2025-08-22\" \"2025-08-22\" \"2025-08-22\"\n[136] \"2025-08-22\" \"2025-08-22\" \"2025-08-22\" \"2025-08-22\" \"2025-08-22\"\n[141] \"2025-08-22\" \"2025-08-22\" \"2025-08-22\" \"2025-08-22\" \"2025-08-22\"\n[146] \"2025-08-22\" \"2025-08-22\" \"2025-08-22\" \"2025-08-22\" \"2025-08-22\"\n[151] \"2025-08-22\" \"2025-08-28\" \"2025-08-28\" \"2025-08-28\" \"2025-08-28\"\n[156] \"2025-08-28\" \"2025-08-28\" \"2025-08-28\" \"2025-08-28\" \"2025-08-28\"\n[161] \"2025-08-28\" \"2025-08-28\" \"2025-08-28\" 
\"2025-08-28\" \"2025-08-28\"\n[166] \"2025-08-28\" \"2025-08-28\" \"2025-08-28\" \"2025-08-28\" \"2025-08-28\"\n[171] \"2025-08-28\" \"2025-08-28\" \"2025-08-28\" \"2025-08-28\" \"2025-08-28\"\n[176] \"2025-08-28\" \"2025-08-28\" \"2025-08-28\" \"2025-08-30\" \"2025-08-30\"\n[181] \"2025-08-30\" \"2025-08-30\" \"2025-08-30\" \"2025-08-30\" \"2025-08-30\"\n[186] \"2025-08-30\" \"2025-08-30\" \"2025-08-30\" \"2025-08-30\" \"2025-08-30\"\n[191] \"2025-08-30\" \"2025-08-30\" \"2025-08-30\" \"2025-08-30\" \"2025-08-30\"\n[196] \"2025-08-30\"\n\n\nHowever, looking at the output, we see this looks like the values, not a column. Our suspicions are confirmed when running DataColumn\n\nstr(DataColumn)\n\n chr [1:196] \"2025-07-26\" \"2025-07-26\" \"2025-07-26\" \"2025-07-26\" ...\n\n\nThis is similarly the case when we use the $ accessor.\n\nDataColumn <- Data$Date\nstr(DataColumn)\n\n chr [1:196] \"2025-07-26\" \"2025-07-26\" \"2025-07-26\" \"2025-07-26\" ...\n\n\n\nhead(DataColumn, 3)\n\n[1] \"2025-07-26\" \"2025-07-26\" \"2025-07-26\"\n\n\nBy contrast, when selecting two columns, the structure is maintained.\n\nTwoColumns <- Data[,4:5]\n\nWhy is the data.frame column structure lost in base R when isolating a single data.frame column? And who thought to make it that convoluted? If we were an R course in early 2010s, we might go into an explanation, but fortunately, we don’t need to understand why, we have the dplyr R package to rescue us." } ] \ No newline at end of file