
NovelApplicationsToInformationSieve.md Final Peer Review (could not post on repository) #82

@mikegorczyca

Description


I like that this project attempted to compare two novel algorithms using multiple image data sets - both of these are ambitious, as we have discussed neither the information sieve nor image data in the course. Despite the difficulty these presented, the final report was well written and filled all gaps in understanding. It was also nice to see the performance of GLRMs on the MNIST data set - you can tell qualitatively from the figures that GLRMs perform particularly well.

It would have been interesting to see some of the applications of the information sieve at work in your paper - a few pictures from the in-painting presentation would have sufficed. I also think the paper could have benefited from using more data sets and a wider variety of data types. Lastly, I would have liked to see the performance of GLRMs and the information sieve on the MADELON data set, even if human error may have hurt performance - my group thought our neural networks were performing poorly due to human error, but after some checking, we realized that neural networks were simply a poor choice of algorithm for our data set. That said, I understand that time constraints limited your ability to do this.

Regardless, I enjoyed the paper and appreciate that you tried to apply material from outside the course!
