SCC Introduction for Engineering Users

Below is a very brief introduction to the BU Shared Compute Cluster (SCC) aimed at Engineering users. For detailed documentation and guides on using the system, see the Research Computing Services and SCC Quickstart pages.

Ultra-Brief Quick-Start Guide

  1. If you’re a PI, create a new SCC project; if you’re a user within a research group, ask your PI to add you to their project by finding their username in this list and following one of the Add User links.
  2. Once your account is active, connect to the SCC login nodes using an SSH client.
  3. Try navigating the filesystem, viewing your home directory and project space, and starting software such as MATLAB.
  4. If you have existing data you’d like to work with, transfer it to the SCC filesystem (see the transfer example below). You can mount Engineering shares on your local computer if needed, or use SCP directly from the Engineering Grid command line if you’re already comfortable there.
  5. Try running an interactive job on a compute node, and submitting a batch job to run non-interactively (see the batch example below).

Putting it all together, making a connection and starting an interactive MATLAB session on a compute node is very simple; only the name of the login node differs from the equivalent workflow on the Engineering Grid (note that after qlogin, the prompt reflects the compute node you were assigned):

username@your-local-system ~ $ ssh -X username@scc1.bu.edu
[username@scc1 ~]$ qlogin
[username@scc-node ~]$ matlab
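
If you have data to transfer (step 4), a minimal SCP sketch from your local machine looks like the following; mydata.tar.gz and yourproject are placeholders, and the /projectnb path assumes your project space lives on the usual SCC project filesystem:

username@your-local-system ~ $ scp mydata.tar.gz username@scc1.bu.edu:/projectnb/yourproject/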
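
For non-interactive work (step 5), a minimal batch sketch follows; the script and project names are placeholders, and the -P directive charges the job’s service units to your project:

[username@scc1 ~]$ cat myjob.sh
#!/bin/bash -l
#$ -P yourproject
matlab -nodisplay -r "disp(2+2); exit"
[username@scc1 ~]$ qsub myjob.sh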

Similarities to the Engineering Grid

Engineering network shares are available at the same paths as on ENG-Grid; you’ll find them under /ad/eng/. Although user home directories are SCC-specific, you can still reach your ENG home directory under /ad/eng/users/. For example, the user gus89 would be in /ad/eng/users/g/u/gus89, where the g and u come from the first and second letters of the username.
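
As a sketch, Bash substring expansion can build that path for you, assuming $USER matches your ENG username:

[username@scc1 ~]$ ls /ad/eng/users/${USER:0:1}/${USER:1:1}/${USER}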

Like the Engineering Grid, the BU Shared Compute Cluster (SCC) runs CentOS, uses Open Grid Scheduler to queue and schedule compute jobs, gives each user a fixed 10 GB home directory, and gives each project group flexible storage in its own group directory.

Key Differences from the Engineering Grid

  • Accounts and file storage on the SCC are separate from BU Kerberos accounts and any other network file storage. User account creation (and project registration, which manages a group’s research) is requested via the web interface, and files can be transferred to and from SCC storage via SCP/SFTP or from Engineering storage under the /ad/eng paths.
  • Within Engineering, jobs can run as long as compute nodes are available; on the SCC, resource usage is metered in “service units” (SUs), conceptually similar to a print quota, with units deducted per CPU-hour. For example, a job using 4 CPU slots for 2 wall-clock hours on a node charged at 1 SU per CPU-hour consumes 8 SUs. See here for a table of compute node specs and details on how SUs are charged, and here (under “Allocations and Accounting”) for details on how SUs are allocated.
  • The majority of Engineering’s shared compute nodes are also workstations in our public computer labs, with some server systems for specialized use and those owned by specific research groups. SCC is composed entirely of dedicated servers hosted in MGHPCC.
  • The ‘threaded’ parallel environment is called ‘omp’ on the SCC; see the example below.
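
For example, a four-thread job that would use -pe threaded 4 on the Engineering Grid becomes the following on the SCC (the script name is a placeholder):

[username@scc1 ~]$ qsub -pe omp 4 myjob.sh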

Using ENG Applications & Modules on SCC

To use applications installed on Engineering’s file server, you’ll need to configure your Bash environment. This can be done by adding the Engineering binary directories to your PATH:

[username@scc1 ~]$ export PATH=/ad/eng/bin:/ad/eng/bin/64:$PATH

This can also be done with modules. First, add Engineering’s modulefiles to your module search path:

[username@scc1 ~]$ module use /ad/eng/etc/modulefiles

Then load the module and launch the program as you would on the grid. See module basics for more information on using modules.
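
For example, after the module use line above, you might list what Engineering publishes and load a package; the module name here is purely illustrative:

[username@scc1 ~]$ module avail
[username@scc1 ~]$ module load matlab
[username@scc1 ~]$ matlab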

Even easier is sourcing the engenv.sh script from your command line:

[username@scc1 ~]$ source /ad/eng/bin/engenv.sh