3 changes: 2 additions & 1 deletion NEWS.md
@@ -2,6 +2,7 @@

## New features

* Added segmentation article on `model_fcn_resnet50()` with two images (@DerrickUnleashed, #281).
* Added collection dataset catalog with `search_collection()`, `get_collection_catalog()`, and `list_collection_datasets()` functions for discovering and exploring collections (#271, @ANAMASGARD).
* Added `target_transform_coco_masks()` and `target_transform_trimap_masks()` transformation functions for explicit segmentation mask generation (@ANAMASGARD).

@@ -12,6 +13,7 @@

## New datasets

* Added `vggface2_dataset()` for loading the VGGFace2 dataset (@DerrickUnleashed, #238).
* Added `moth` dataset to `rf100_biology_collection()` and `currency` and `wine_label` to `rf100_document_collection()` (#274).

## Bug fixes and improvements
@@ -33,7 +35,6 @@ to non-vectorized `transform_` operations (#264)
and `rf100_underwater_collection()`. These are collections of RoboFlow 100 datasets grouped by the same
theme, for a total of 35 datasets (@koshtiakakansha, @cregouby, #239).
* Added `rf100_peixos_segmentation_dataset()` (@koshtiakanksha, @cregouby, #250).
* Added `vggface2_dataset()` for loading the VGGFace2 dataset (@DerrickUnleashed, #238).

## New models

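
A hedged sketch of how the collection-catalog helpers announced above might be used; the argument values are illustrative assumptions based on the NEWS entry, not documented signatures.

library(torchvision)

# Browse the catalog of collections shipped with the package (assumed usage).
catalog <- get_collection_catalog()
head(catalog)

# List the datasets bundled in one collection, then search across collections.
# The collection name and search term below are placeholders.
list_collection_datasets("rf100_document_collection")
search_collection("currency")
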
2 changes: 1 addition & 1 deletion R/dataset-vggface2.R
@@ -32,7 +32,7 @@
#' ds$classes[item$y] # list(name=..., gender=...)
#' }
#'
#' @family segmentation_dataset
#' @family classification_dataset
#' @export
vggface2_dataset <- torch::dataset(
name = "vggface2",
2 changes: 1 addition & 1 deletion R/vision_utils.R
Expand Up @@ -346,7 +346,7 @@ draw_segmentation_masks.torch_tensor <- function(x,
if (masks$dtype != torch::torch_bool() && masks$dtype != torch::torch_float() ) {
type_error("`masks` is expected to be of dtype torch_bool() or torch_float()")
}
if (any(masks$shape[2:3] != img_to_draw$shape[2:3])) {
if (any(masks$shape[-2:-1] != img_to_draw$shape[-2:-1])) {
value_error("`masks` and `image` must have the same height and width")
}
# if mask is a model inference output, we need to convert float mask to boolean mask
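
The check above now indexes the shape vector with `-2:-1`. A minimal sketch of what that does in R, where negative indices drop positions rather than count from the end; the 4-D mask below is an assumed motivating case, not taken from the PR.

library(torch)

img   <- torch_rand(3, 520, 520)      # image tensor, (C, H, W)
masks <- torch_rand(1, 21, 520, 520)  # e.g. raw model output, (N, classes, H, W)

img$shape[2:3]      # 520 520 -- selects dims 2 and 3
img$shape[-2:-1]    # 520     -- drops the first two entries instead
masks$shape[-2:-1]  # 520 520 -- for a 4-D tensor this leaves height and width

any(masks$shape[-2:-1] != img$shape[-2:-1])  # FALSE: spatial sizes agree
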
2 changes: 2 additions & 0 deletions _pkgdown.yml
@@ -27,6 +27,8 @@ navbar:
href: articles/examples/style-transfer.html
- text: texture-nca
href: articles/examples/texture-nca.html
- text: fcnresnet
href: articles/examples/fcnresnet.html

reference:
- title: Transforms
1 change: 1 addition & 0 deletions man/caltech_dataset.Rd

1 change: 1 addition & 0 deletions man/cifar_datasets.Rd
1 change: 1 addition & 0 deletions man/eurosat_dataset.Rd
1 change: 1 addition & 0 deletions man/fer_dataset.Rd
1 change: 1 addition & 0 deletions man/fgvc_aircraft_dataset.Rd
1 change: 1 addition & 0 deletions man/flowers102_dataset.Rd
1 change: 1 addition & 0 deletions man/image_folder_dataset.Rd
1 change: 1 addition & 0 deletions man/lfw_dataset.Rd
1 change: 1 addition & 0 deletions man/mnist_dataset.Rd
1 change: 1 addition & 0 deletions man/oxfordiiitpet_dataset.Rd
3 changes: 1 addition & 2 deletions man/oxfordiiitpet_segmentation_dataset.Rd
3 changes: 1 addition & 2 deletions man/pascal_voc_datasets.Rd
1 change: 1 addition & 0 deletions man/places365_dataset.Rd
3 changes: 1 addition & 2 deletions man/rf100_peixos_segmentation_dataset.Rd
1 change: 1 addition & 0 deletions man/tiny_imagenet_dataset.Rd
21 changes: 16 additions & 5 deletions man/vggface2_dataset.Rd
1 change: 1 addition & 0 deletions man/whoi_plankton_dataset.Rd
1 change: 1 addition & 0 deletions man/whoi_small_coralnet_dataset.Rd

14 changes: 7 additions & 7 deletions tests/testthat/test-dataset-coco.R
@@ -16,7 +16,7 @@ test_that("coco_detection_dataset handles missing files gracefully", {
})

test_that("coco_detection_dataset loads a single example correctly", {
skip_if(Sys.getenv("TEST_LARGE_DATASETS", unset = 0) < 1,
skip_if(Sys.getenv("TEST_LARGE_DATASETS", unset = 0) != 1,
"Skipping test: set TEST_LARGE_DATASETS=1 to enable tests requiring large downloads.")

ds <- coco_detection_dataset(root = tmp, train = FALSE, year = "2017", download = TRUE)
@@ -47,7 +47,7 @@ test_that("coco_detection_dataset loads a single example correctly", {
})

test_that("coco_ dataset loads a single segmentation example correctly", {
skip_if(Sys.getenv("TEST_LARGE_DATASETS", unset = 0) < 1,
skip_if(Sys.getenv("TEST_LARGE_DATASETS", unset = 0) != 1,
"Skipping test: set TEST_LARGE_DATASETS=1 to enable tests requiring large downloads.")

ds <- coco_detection_dataset(root = tmp, train = FALSE, year = "2017", download = TRUE,
@@ -71,7 +71,7 @@ test_that("coco_ dataset loads a single segmentation example correctly", {
})

test_that("coco_detection_dataset batches correctly using dataloader", {
skip_if(Sys.getenv("TEST_LARGE_DATASETS", unset = 0) < 1,
skip_if(Sys.getenv("TEST_LARGE_DATASETS", unset = 0) != 1,
"Skipping test: set TEST_LARGE_DATASETS=1 to enable tests requiring large downloads.")


@@ -99,8 +99,8 @@ test_that("coco_caption_dataset handles missing files gracefully", {
})

test_that("coco_caption_dataset loads a single example correctly", {
skip_if(Sys.getenv("TEST_LARGE_DATASETS", unset = 0) < 2,
"Skipping test: set TEST_LARGE_DATASETS=2 to enable tests requiring huge downloads.")
skip_if(Sys.getenv("TEST_HUGE_DATASETS", unset = 0) != 1,
"Skipping test: set TEST_HUGE_DATASETS=1 to enable tests requiring huge downloads.")

ds <- coco_caption_dataset(root = tmp, train = FALSE, download = TRUE)

@@ -120,8 +120,8 @@
})

test_that("coco_caption_dataset batches correctly using dataloader", {
skip_if(Sys.getenv("TEST_LARGE_DATASETS", unset = 0) < 2,
"Skipping test: set TEST_LARGE_DATASETS=2 to enable tests requiring huge downloads.")
skip_if(Sys.getenv("TEST_HUGE_DATASETS", unset = 0) != 1,
"Skipping test: set TEST_HUGE_DATASETS=1 to enable tests requiring huge downloads.")

ds <- coco_caption_dataset(root = tmp, train = FALSE, download = TRUE)

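
Since the huge-download tests now key off a separate TEST_HUGE_DATASETS flag rather than a higher TEST_LARGE_DATASETS level, here is a sketch of how both sets of tests could be enabled locally (assumed workflow, not spelled out in the PR):

# Opt in to the download-heavy tests, then run this test file.
Sys.setenv(TEST_LARGE_DATASETS = 1)
Sys.setenv(TEST_HUGE_DATASETS = 1)
testthat::test_file("tests/testthat/test-dataset-coco.R")
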
69 changes: 69 additions & 0 deletions vignettes/examples/fcnresnet.R
@@ -0,0 +1,69 @@
# Loading Images ---------------------------------------------------
library(torchvision)
library(torch)

url1 <- "https://raw.githubusercontent.com/pytorch/vision/main/gallery/assets/dog1.jpg"
url2 <- "https://raw.githubusercontent.com/pytorch/vision/main/gallery/assets/dog2.jpg"

dog1 <- magick_loader(url1) |> transform_to_tensor()
dog2 <- magick_loader(url2) |> transform_to_tensor()


# Visualizing a grid of images -------------------------------------


dogs <- torch_stack(list(dog1, dog2))
grid <- vision_make_grid(dogs, scale = TRUE, num_rows = 2)
tensor_image_browse(grid)


# Preprocessing the data -------------------------------------


norm_mean <- c(0.485, 0.456, 0.406)
norm_std <- c(0.229, 0.224, 0.225)

dog1_prep <- dog1 |>
transform_resize(c(520,520)) |>
transform_normalize(mean = norm_mean, std = norm_std)
dog2_prep <- dog2 |>
transform_resize(c(520,520)) |>
transform_normalize(mean = norm_mean, std = norm_std)

# make batch (2,3,520,520)
dog_batch <- torch_stack(list(dog1_prep, dog2_prep))


# Loading Model -------------------------------------


model <- model_fcn_resnet50(pretrained = TRUE)
model$eval()

# run model
output <- model(dog_batch)


# Processing the Output ------------------------------

# `out` holds per-class scores with shape (batch, num_classes, height, width)
mask <- output$out
mask$shape
mask$dtype

# Visualizing the Output ------------------------------


segmented1 <- draw_segmentation_masks(
dog1 |> transform_resize(c(520,520)),
masks = mask[1,, ],
alpha = 0.5
)

segmented2 <- draw_segmentation_masks(
dog2 |> transform_resize(c(520,520)),
masks = mask[2,, ],
alpha = 0.5
)

tensor_image_browse(segmented1)
tensor_image_browse(segmented2)
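
The script above hands the raw float scores straight to draw_segmentation_masks(), relying on the implicit float-to-boolean conversion touched in vision_utils.R. For reference, a hedged sketch of making that step explicit; the softmax-and-threshold choice is an assumption about one reasonable post-processing, not part of the vignette.

# Convert the (num_classes, H, W) scores for the first image into boolean
# per-class masks before drawing.
probs <- nnf_softmax(output$out[1, ..], dim = 1)  # class probabilities
bool_masks <- probs > 0.5                         # one boolean mask per class

segmented1_explicit <- draw_segmentation_masks(
  dog1 |> transform_resize(c(520, 520)),
  masks = bool_masks,
  alpha = 0.5
)
tensor_image_browse(segmented1_explicit)
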
9 changes: 9 additions & 0 deletions vignettes/examples/fcnresnet.Rmd
@@ -0,0 +1,9 @@
---
title: "fcnresnet"
type: docs
---

```{r, echo = FALSE}
knitr::opts_chunk$set(eval = TRUE)
knitr::spin_child(paste0(rmarkdown::metadata$title, ".R"))
```