DocOverview

Jared Yanovich edited this page Jul 28, 2015 · 2 revisions

SLASH2 Deployment Overview

This document describes the basics of deploying a SLASH2 file system instance: terminology, components, etc.

There are three types of components present in a SLASH2 deployment:

  • metadata service ("MDS", named slashd(8))
  • I/O service (named sliod(8))
  • client service (named mount_slash(8))

Each service type must be deployed on at least one host. If multiple services will run on the same host, the host needs multiple network IDs (e.g. IP addresses): one for each service type.
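For example, on Linux a secondary address can be added so that two services sharing a host each get their own network ID. This is only a sketch; the interface name and addresses below are placeholders for your site's values:

```shell
# Give the host a second IP address so that, e.g., slashd and sliod
# can each bind their own network ID (addresses/interface are examples).
ip addr add 10.0.1.11/24 dev eth0    # used by slashd
ip addr add 10.0.1.12/24 dev eth0    # used by sliod
```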

The sladm(7) online system manual page provides a quick guide to configuring each service type.

SLASH2 MDS servers (slashd)

This service needs storage for the SLASH2 file system metadata. This typically means low-latency persistent storage such as solid-state drives (SSD), and the configuration should include redundancy (e.g. a RAID mirror) to protect against data loss.

This file system is managed by slashd internally via zfs-fuse, so the steps to create the zpool for the SLASH2 deployment metadata exactly follow a regular ZFS pool creation.
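A minimal sketch of such a pool creation, assuming two mirrored SSDs (the pool and device names are placeholders; use the zpool binary shipped with the SLASH2 zfs-fuse tree):

```shell
# Create a mirrored pool for SLASH2 metadata on two SSDs
# (hypothetical device names).
zpool create s2mds_pool mirror /dev/sdb /dev/sdc

# Verify the pool is healthy before initializing SLASH2 metadata on it.
zpool status s2mds_pool
```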

This also means all ZFS features such as snapshotting and exporting are available to the SLASH2 MDS. These features can be utilized to protect the system from data loss in the event of failure, as described in DocAdmin.
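For instance, standard ZFS snapshot commands can capture a point-in-time copy of the metadata pool. The pool and snapshot names below are illustrative only; see DocAdmin for the supported backup procedure:

```shell
# Take a point-in-time snapshot of the metadata pool's root dataset.
zfs snapshot s2mds_pool@nightly

# Stream the snapshot to a file (or another host) for off-pool safekeeping.
zfs send s2mds_pool@nightly > /backup/s2mds_pool.nightly.zfs
```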

SLASH2 I/O servers (sliod)

One or more SLASH2 I/O servers store the actual data in a SLASH2 deployment. Typical installations run modern Linux with ZFSOnLinux, which presents a POSIX file system on which the lightweight SLASH2 I/O service sliod runs; since sliod relies only on normal POSIX I/O, it can run atop any POSIX file system.

It is often desirable to split large amounts of storage across multiple zpools. In this scenario, multiple instances of sliod run on the same system, and each sliod instance requires its own IP address.
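As a sketch, a host with twelve data disks might be split into two backing pools, one per sliod instance. The pool layout, pool names, and device names are placeholders, not a recommendation:

```shell
# Two independent pools, each backing its own sliod instance
# (hypothetical device names).
zpool create s2io_pool0 raidz2 /dev/sdb /dev/sdc /dev/sdd /dev/sde /dev/sdf /dev/sdg
zpool create s2io_pool1 raidz2 /dev/sdh /dev/sdi /dev/sdj /dev/sdk /dev/sdl /dev/sdm

# Each sliod instance then binds its own IP address (see above)
# and serves one of the pools' mount points.
```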

Clients

These hosts run the SLASH2 client daemon software mount_slash, which may run on a variety of machines; the only requirement is modern FUSE support (Linux, illumos, BSD, Mac OS X). Typical client roles include:

  • Dedicated front-end machines, which solely provide access to the SLASH2 file system for users.

  • Compute resources, so jobs can access data residing in the SLASH2 file system directly.

  • Administrative nodes, for any administration that needs to be performed without interfering with user workloads.

  • Test machines, to roll out configuration/system changes and test workloads without disrupting the rest of the deployment.
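On any of these hosts, mounting the file system might look like the following sketch; the mount point is a placeholder, and the exact invocation and options are documented in mount_slash(8):

```shell
# Mount the SLASH2 file system via FUSE (mount point is an example).
mkdir -p /s2
mount_slash /s2
```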

Auxiliary/Management Nodes

Other types of nodes may be present in a production SLASH2 deployment:

  • syslog servers, for aggregating usage activity from all clients; this activity can be stored for analysis, reporting, etc.

  • database servers, for tracking historical activity.

  • SEC servers, for performing actions in response to events (data replication, etc.)

  • Nagios/Icinga/etc. servers, for monitoring health of various machines in the deployment.

  • MDFS mirroring nodes, for scans, dumps, reports, etc.
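For the syslog aggregation above, each client might forward its messages to a central log host with a small rsyslog rule. The host name is a placeholder, and the restart command assumes a systemd-based distribution:

```shell
# Append a forwarding rule to rsyslog on each client
# ("@@" forwards over TCP; a single "@" would use UDP).
cat > /etc/rsyslog.d/slash2-forward.conf <<'EOF'
*.*  @@loghost.example.org:514
EOF
systemctl restart rsyslog
```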
