If a DataNode dies, RDFS is able to re-replicate the dead DataNode's blocks onto the remaining DataNodes. Acceptance test: in a cluster of 3 DataNodes and 1 NameNode, create a file with multiple blocks and then close it. Bring up a fourth DataNode, then kill one of the original 3. If re-replication succeeds, we can kill the remaining two original DataNodes and still read the file from the fourth DataNode.
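A minimal sketch of the re-replication decision behind this test, assuming a simplified in-memory block map (node and block names here are hypothetical; the real RDFS/ZooKeeper coordination is not shown):

```python
# Sketch of the NameNode-side re-replication decision: when a DataNode
# dies, every block it held is copied from a surviving replica to a
# live DataNode that does not already hold it.

def replicate_after_failure(block_map, live_nodes, dead_node):
    """block_map: block id -> set of DataNode names holding a replica."""
    commands = []  # (block, source_node, target_node)
    for block, holders in block_map.items():
        holders.discard(dead_node)
        if not holders:
            raise RuntimeError(f"block {block} lost: no surviving replica")
        targets = [n for n in live_nodes if n not in holders]
        if targets:
            commands.append((block, next(iter(holders)), targets[0]))
            holders.add(targets[0])
    return commands

# The acceptance-test scenario above: 3 original DataNodes plus a fourth.
block_map = {"blk_1": {"dn1", "dn2", "dn3"}, "blk_2": {"dn1", "dn2", "dn3"}}
live = ["dn2", "dn3", "dn4"]  # dn4 joined the cluster, then dn1 was killed
cmds = replicate_after_failure(block_map, live, "dn1")
# Every block now has a replica on dn4, so the file survives losing dn2/dn3.
```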
Due by November 18, 2016 • 7/12 issues closed

Demonstrate error reliability and/or recovery under the following conditions: failure of k - 1 of k NameNodes, and failure of 50% of DataNodes. Failure conditions must be reasonably simulated during the demo.
Due by November 23, 2016

Implement use of the native filesystem to pre-allocate storage on disk.
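One way to lean on the native filesystem for this (a sketch, not necessarily the approach the milestone took) is `posix_fallocate`, exposed in Python as `os.posix_fallocate` on Linux/POSIX systems; it reserves the disk space for a block file up front, so later writes cannot fail mid-block with ENOSPC:

```python
import os
import tempfile

# Pre-allocate a 64 MiB region for a block file using the native
# filesystem's allocation call (Linux/POSIX only).
BLOCK_SIZE = 64 * 1024 * 1024

path = os.path.join(tempfile.mkdtemp(), "blk_0001")
fd = os.open(path, os.O_CREAT | os.O_WRONLY, 0o644)
try:
    os.posix_fallocate(fd, 0, BLOCK_SIZE)  # reserve the space on disk now
finally:
    os.close(fd)

# The file's size reflects the reserved region even before any data is written.
print(os.stat(path).st_size)  # 67108864
```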
Due by November 16, 2016 • 1/1 issues closed

All interactions between the DataNode and ZooKeeper are complete, including functional block reports, heartbeats, and replication-command processing between the DataNode and ZK.
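The heartbeat half of this can be illustrated with a small stand-alone simulation (no real ZooKeeper here; in RDFS the liveness signal would live in ZK, e.g. via ephemeral znodes, but the exact scheme, interval, and timeout below are assumptions for illustration):

```python
import time

HEARTBEAT_INTERVAL = 3.0   # seconds between DataNode heartbeats (illustrative)
DEAD_AFTER = 3             # missed intervals before a node is declared dead

class HeartbeatMonitor:
    """Tracks last-heard-from times, the way the coordinator (or a ZK
    watcher) would decide which DataNodes are still live."""
    def __init__(self):
        self.last_seen = {}

    def heartbeat(self, node, now=None):
        self.last_seen[node] = time.time() if now is None else now

    def dead_nodes(self, now=None):
        now = time.time() if now is None else now
        limit = HEARTBEAT_INTERVAL * DEAD_AFTER
        return sorted(n for n, t in self.last_seen.items() if now - t > limit)

mon = HeartbeatMonitor()
mon.heartbeat("dn1", now=0.0)
mon.heartbeat("dn2", now=0.0)
mon.heartbeat("dn1", now=8.0)       # dn2 goes silent after t=0
print(mon.dead_nodes(now=10.0))     # ['dn2']  (10.0s since last heartbeat > 9.0s limit)
```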
Due by November 9, 2016

Passes the following tests:
- Basic functionality tests found in hadoop-*test*.jar
- Basic MapReduce examples found in hadoop-*example*.jar
- Stand-alone benchmarking tools: Apache Pig, Apache Spark, and Apache Hive
Due by December 2, 2016

Complete basic MapReduce tasks supported by HDFS, such as AggregateWordCount and MultiFileWordCount. Demo these tests on Nov 23rd. Tests: https://github.com/apache/hadoop/tree/trunk/hadoop-mapreduce-project/hadoop-mapreduce-examples/src/main/java/org/apache/hadoop/examples
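The WordCount family of examples all follow the same map/shuffle/reduce shape, which can be sketched in miniature (this illustrates the pattern only; it is not the Hadoop example code itself):

```python
from collections import defaultdict

def map_phase(line):
    # map: emit a (word, 1) pair for every word in an input line
    return [(w.lower(), 1) for w in line.split()]

def reduce_phase(pairs):
    # shuffle + reduce: group pairs by word, then sum the counts per word
    counts = defaultdict(int)
    for word, n in pairs:
        counts[word] += n
    return dict(counts)

lines = ["the quick brown fox", "the lazy dog", "THE fox"]
pairs = [p for line in lines for p in map_phase(line)]
counts = reduce_phase(pairs)
print(counts["the"])  # 3
```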
Due by November 23, 2016 • 1/1 issues closed

Demonstrate end-to-end file system operations: create a file, write to it, close it, open it again, and read back the written contents.
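A stand-alone version of that acceptance sequence, using an in-memory stand-in for the client (the real RDFS client API is not shown in this document; `RdfsClient` and its method names are hypothetical):

```python
class RdfsClient:
    """In-memory stand-in for an RDFS client, only to make the
    end-to-end acceptance sequence concrete."""
    def __init__(self):
        self._files = {}

    def create(self, path):
        self._files[path] = bytearray()

    def write(self, path, data):
        self._files[path].extend(data)

    def close(self, path):
        pass  # a real client would finalize the last block here

    def read(self, path):
        return bytes(self._files[path])

# The sequence from the milestone: create, write, close, reopen, read back.
client = RdfsClient()
client.create("/demo.txt")
client.write("/demo.txt", b"hello rdfs")
client.close("/demo.txt")
print(client.read("/demo.txt"))  # b'hello rdfs'
```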
Due by November 2, 2016 • 4/4 issues closed