Hello,
I stumbled on your paper and found your tool quite useful; I really like the approach. While going through the source code, I wanted to suggest replacing the for loop in getOptimalFeatures with sapply. This should reduce memory consumption (and maybe improve speed), since iteratively growing a vector in R carries reallocation overhead. Using sapply should also take care of naming the result. I think you just have to check the minima afterwards.
sequence <- seq(from=elbow.pt, to=length(ordered.genes), by=25)
mean_knn_vec <- sapply(X=sequence, FUN=function(num_genes) {
  # Subset the top num_genes ranked genes
  neighbour_feature_genes <- ordered.genes[1:num_genes]
  # Run PCA on the feature data and compute the density index
  log.feature.data <- filt.data[neighbour_feature_genes, ]
  computeDensityIndex(counts=log.feature.data, k=k, error=error, num.pcs=num.pcs, features=neighbour_feature_genes)
})
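By "check the minima" I mean something like the sketch below; the density-index values here are dummy stand-ins, since the real ones come from computeDensityIndex:

```r
# Dummy stand-ins for the real objects (replace with your own)
sequence <- seq(from = 500, to = 600, by = 25)
mean_knn_vec <- c(0.9, 0.7, 0.4, 0.55, 0.6)

# Name the density-index values by the number of genes tested
# (sapply over a numeric X does not set names automatically)
names(mean_knn_vec) <- sequence

# The optimal feature count minimises the mean kNN density index
optimal_num_genes <- sequence[which.min(mean_knn_vec)]
```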
EDIT
This might actually work better.....
log_counts_list <- sapply(X=sequence, FUN=function(num_genes) {
  # Subset the top num_genes ranked genes
  neighbour_feature_genes <- ordered.genes[1:num_genes]
  # Return the log-count submatrix for these features
  filt.data[neighbour_feature_genes, ]
}, simplify=FALSE)
nCores <- ifelse(future::availableCores() > 2, round(sqrt(length(log_counts_list))), 1)
mean_knn_vec <- unlist(parallel::mclapply(X=log_counts_list, FUN=function(dat) {
  computeDensityIndex(counts=dat, k=k, error=error, num.pcs=num.pcs, features=rownames(dat))
}, mc.cores=nCores))
I also had a question about calculating principal components for the density index. I noticed you use Seurat to scale the data and perform PCA. I wonder if you could improve the final result by regressing out technical covariates in ScaleData via the vars.to.regress parameter? That way, confounding technical noise wouldn't carry over to the dimensions used for the density index. In theory, if any clusters are driven purely by technical noise, the density index wouldn't be influenced by them before returning the final result.
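For concreteness, a minimal sketch of what I mean; "seu" and the metadata column names ("percent.mt", "nCount_RNA") are placeholders and would need to match whatever technical covariates your object actually tracks:

```r
library(Seurat)

# Regress out technical covariates during scaling so they do not
# leak into the principal components used for the density index
seu <- ScaleData(seu, vars.to.regress = c("percent.mt", "nCount_RNA"))

# PCA on the residuals, restricted to the candidate feature genes
seu <- RunPCA(seu, features = neighbour_feature_genes)
```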
Thanks.