Hadoop Cluster

Purpose:

The Apache ecosystem is growing in popularity among researchers and data scientists. Apache Hadoop and Spark are two of the most prominent tools for analyzing data in “big data” settings. Due to the demand for this software, our team has decided to develop a system that will serve as a platform for developing and running software specific to the Apache Hadoop ecosystem. The system's nodes will run Hadoop-related software exclusively.
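
As a rough illustration of the kind of workload the cluster is intended to support, the following is a minimal PySpark word-count sketch. It assumes a working Spark installation; the application name and the HDFS input path are hypothetical placeholders, not part of any existing cluster configuration.

 # Minimal illustrative PySpark job; the path below is a placeholder.
 from pyspark.sql import SparkSession
 
 spark = SparkSession.builder.appName("WordCountExample").getOrCreate()
 
 # Read a text file (hypothetical HDFS path), split lines into words, and count them.
 lines = spark.read.text("hdfs:///user/example/input.txt").rdd.map(lambda row: row[0])
 counts = (lines.flatMap(lambda line: line.split())
                .map(lambda word: (word, 1))
                .reduceByKey(lambda a, b: a + b))
 
 # Print a small sample of the results.
 for word, count in counts.take(10):
     print(word, count)
 
 spark.stop()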


Alpha Cluster Status:

Currently, the Hadoop Cluster is offline. Please monitor this page for any change in its status.