Currently, there are three main network filesystems available on CCAST systems: /home (NFS), /projects (NFS), and /gpfs1 (GPFS). In addition, each compute node and server has a locally attached /scratch that is not shared with any other computers within the center.
/home is to be used for managing jobs (job scripts, source code, and similar files). It is not to be used for data or job IO, nor is it designed to serve as the working directory for jobs. /home data is backed up to tape, so it can be viewed as reliable, persistent storage. Individual usage on /home is limited by quotas, which are displayed at login and can be queried at any time with the quota command. Running jobs out of /home is not permitted, as it can affect interactive use and other jobs on the system; a node's local /scratch or the global scratch area /gpfs1/scratch is to be used for compute jobs.
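The pattern above can be sketched as a few shell commands: check your /home quota, then stage work into scratch rather than running it from /home. The /tmp fallback and the demo_job directory name are illustrative only, so the sketch also runs off-cluster; on CCAST the scratch path follows the /gpfs1/scratch convention described below.

```shell
#!/bin/sh
# Sketch: keep job IO out of /home. The /tmp fallback exists only so
# this example runs outside CCAST; "demo_job" is a hypothetical name.
SCRATCH_ROOT=/gpfs1/scratch
[ -w "$SCRATCH_ROOT" ] || SCRATCH_ROOT=${TMPDIR:-/tmp}/scratch-demo

# Check /home usage and limits (the same report is shown at login).
command -v quota >/dev/null && quota -s

# Do the actual work from scratch, never from /home.
WORKDIR="$SCRATCH_ROOT/${USER:-$(id -un)}/demo_job"
mkdir -p "$WORKDIR" && cd "$WORKDIR"
echo "job working directory: $PWD"
```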
/gpfs1 is the largest and fastest filesystem at CCAST. It is accessible on all compute nodes and servers, following the convention /gpfs1/scratch/$USERNAME, and it is designed to hold working directories for jobs so that files can be seen from any compute node or server. /gpfs1 is not backed up, and files on the system not accessed within the last 40 days are automatically purged.
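The 40-day purge selects files by access time, which you can check yourself with find. A minimal sketch, demonstrated in a temporary directory so it runs anywhere; on CCAST you would point the same find invocation at /gpfs1/scratch/$USERNAME. The file names and the 2020 timestamp are purely illustrative.

```shell
#!/bin/sh
# Sketch of how the 40-day purge selects files, shown in a temporary
# directory (hypothetical demo files, not real CCAST paths).
demo=$(mktemp -d)
touch "$demo/fresh.dat"                      # accessed now: kept
touch -a -d '2020-01-01' "$demo/stale.dat"   # old access time: purge candidate

# Files not accessed within the last 40 days match -atime +40:
find "$demo" -type f -atime +40
```

On the cluster, `find /gpfs1/scratch/$USERNAME -type f -atime +40` lists your files that are candidates for the automatic purge.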
/projects has the structure /projects/$PI_USERNAME and is intended as a storage area managed by a PI; all researchers working under that PI can store data there. This is a larger resource, currently about 50 TB, and it is backed up nightly to tape. Additional space can be provided for a fee.
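Since the project area is shared by everyone working under a PI, it can be useful to check how much of it is in use. A minimal sketch; $PI_USERNAME is the placeholder from the convention above, and the fallback path only exists so the example runs off-cluster.

```shell
#!/bin/sh
# Sketch: report total usage of the shared project area.
# "your_pi" is a hypothetical placeholder; the /tmp fallback is only
# for running this example outside CCAST.
PROJECT_DIR=/projects/${PI_USERNAME:-your_pi}
[ -d "$PROJECT_DIR" ] || PROJECT_DIR=${TMPDIR:-/tmp}

# Total size of the project area, human-readable.
du -sh "$PROJECT_DIR"
```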
/scratch is a local filesystem (i.e., not visible from other compute nodes or servers) attached to each compute node; its contents are cleaned at the end of each job.
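Because /scratch is wiped when the job ends, a job that uses it must copy its results out before finishing. A minimal sketch of such a job body, assuming a PBS-style scheduler that sets $PBS_JOBID (an assumption, not stated in this document); the fallbacks let the sketch run outside a batch job.

```shell
#!/bin/sh
# Sketch of a job body using node-local /scratch for heavy IO.
# PBS_JOBID and the scheduler style are assumptions; the /tmp fallback
# and "manual-run" name only make this runnable off-cluster.
LOCAL_SCRATCH=/scratch
[ -w "$LOCAL_SCRATCH" ] || LOCAL_SCRATCH=${TMPDIR:-/tmp}
JOBDIR="$LOCAL_SCRATCH/${PBS_JOBID:-manual-run}"
RESULTS=${RESULTS:-$PWD}     # where results should end up (e.g. /gpfs1)

mkdir -p "$JOBDIR"
cd "$JOBDIR"
echo "result data" > output.dat   # stands in for the real computation

# /scratch is cleaned at the end of the job, so copy results out first.
cp output.dat "$RESULTS/"
```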