

    File system replacement

    Content

    • 1 Schedule - update: extended deadline
    • 2 Migration phase for WORK
    • 3 Important Remarks

    Starting September 2024, the underlying hardware of Lise’s global file systems HOME and WORK will be replaced. This affects all login nodes and all compute partitions. Please be aware of the following activities.

    • One-week downtime for maintenance, preparation steps, and a preliminary transfer of HOME data. NHR@ZIB staff copies all data from the old to the new HOME storage; no user action is required.

    • Five-day maintenance of the entire system for a final synchronization between the old and the new HOME. The old HOME goes offline and the new HOME goes online; no user action is required.

    • Six-week migration phase for WORK. During this period, both the old and the new WORK file systems are available, and users transfer their data from the old to the new WORK storage themselves.

    Schedule - update: extended deadline

    step | date                       | subject                                                                                      | status
    -----|----------------------------|----------------------------------------------------------------------------------------------|----------
    1    | Sept 30, 2024              | one-week downtime of the GPU clusters (A100 and PVC); the CPU CLX cluster remains available   | completed
    2    | Dec 4 - Dec 9, 2024        | maintenance, starting 8:00 am                                                                 | completed
    3    | Dec 9, 2024 - Jan 22, 2025 | six-week migration phase for users to copy their data from the old to the new WORK storage    | completed
    4    | Jan 23, 2025               | several hours of maintenance, starting 10:00 am: the old WORK storage will be removed         | completed

    Migration phase for WORK

    In step 3 of the schedule, data migration for WORK will be organized as follows.

    Active migration phase:

    • please visit Data Migration HowTo

    • a six-week period starting in December

    • simultaneous user access to the old and new WORK storage

    • data transfer by users from the old to the new WORK storage (no data transfer by NHR@ZIB staff); see the transfer sketch after this list

    • old WORK storage: /old-scratch/usr, /old-scratch/projects, /old-scratch/share
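
    A minimal transfer sketch, assuming plain rsync over the shared mounts. The new WORK path below is a placeholder, and the Data Migration HowTo linked above remains the authoritative procedure:

        # Copy one user directory from the old to the new WORK storage.
        # The target path below is an assumption -- use the location
        # announced by NHR@ZIB (see the Data Migration HowTo).
        OLD=/old-scratch/usr/$USER
        NEW=/scratch/usr/$USER        # placeholder for the new WORK path
        rsync -aHS --progress "$OLD/" "$NEW/"
        # rsync can be re-run before the deadline to pick up late changes.
        # Rough consistency check afterwards:
        du -sh "$OLD" "$NEW"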

    After January 22, 2025 (post-migration phase):

    • the old WORK file system is switched off and its data is deleted

    • only the new WORK file system is available

    Important Remarks

    The new WORK file system is GPFS; it is no longer a Lustre file system. Hence, the lfs command no longer works for $WORK and will disappear entirely once the migration phase has ended. A possible replacement for quota queries is sketched below.
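
    Scripts that used Lustre tools need updating. A minimal sketch, assuming the standard GPFS (IBM Storage Scale) client command mmlsquota is exposed to users; the device name is a placeholder, so check the System Quota page or NHR@ZIB support for the actual names:

        # Old Lustre call -- no longer works on the new WORK:
        #   lfs quota -u $USER $WORK
        # Possible GPFS equivalent; "work_device" is a placeholder for the
        # actual file system device name:
        mmlsquota --block-size auto -u $USER work_device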

    No backups are made of WORK; it is a scratch file system. Data can be lost at any time, due to user mistakes or system failures. Users need to copy important data (e.g. job results) to a safe place, for instance as sketched below.
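
    A minimal sketch for saving results to the PERM file system, assuming the usual $PERM environment variable points to your permanent storage; archive and directory names are illustrative:

        # Pack job results into a single archive and store it on PERM;
        # few large files suit tape-backed storage better than many small ones.
        cd "$WORK/myproject"          # hypothetical results directory
        tar czf "$PERM/results-jan2025.tar.gz" results/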

    WORK is a file system shared by all users. It is important that only data actively used in computations (“hot” data) reside here. WORK is not intended to store backups, software installations, and other kinds of “cold” data.

    The PERM file system is not affected by this maintenance; it remains available to all users at all times during Lise operation.

    Jobs still in the queue at the end of the downtime might fail if they use the old scratch file system. Queued job scripts can be checked for old paths as sketched below.
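
    A minimal check, assuming Slurm (the job ID is illustrative): list your pending jobs, then inspect each stored batch script for references to the old mount point.

        # List your pending jobs (job ID and name):
        squeue -u $USER -t PENDING -o "%i %j"
        # Print the stored batch script of one job and search it for the
        # old mount point; replace 123456 with a job ID from the list:
        scontrol write batch_script 123456 - | grep -n "old-scratch"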

    Project directories have been created only for active projects.

    In the new file system, quotas depend only on file location (folder quotas). A way to inspect them is sketched below.
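
    In GPFS, folder quotas of this kind are commonly realized as fileset quotas. A minimal sketch, again assuming mmlsquota is available to users; the fileset and device names are placeholders:

        # Show the quota of the fileset behind a project folder;
        # "myproject" and "work_device" are placeholder names:
        mmlsquota --block-size auto -j myproject work_device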
