Past System Notices

HPC System Downtime Scheduled for February 11th, 2014
Scheduled Downtime

NOTICE: DOWNTIME DATE POSTPONED TO FEBRUARY 11

The Ohio Supercomputer Center has scheduled downtime for all HPC systems on Tuesday, February 11, 2014 from 7AM until 5PM. The downtime will affect the Glenn cluster, Oakley cluster, web portals, and HPC file servers. Login services and access to storage will not be available during this time.

In order to quiesce the system for an orderly shutdown, beginning January 28th, the batch scheduler will begin holding jobs that cannot complete before 7AM on 2/11/2014. Jobs that are not started will be held until after the downtime and started once the system is returned to production status.

Departmental clusters that we are administering will not be affected by this outage.

Highlights of the downtime activities:

  • Upgrading GPFS servers
  • Updating the Red Hat Enterprise Linux operating system on all clusters to a newer version

To stay up to date on system notices, please visit http://osc.edu/n or follow @HPCNotices on Twitter.

Start: 02/11/2014 7 AM
End: 02/11/2014 5 PM
HPC System Downtime Scheduled for September 29th, 2013
Scheduled Downtime

The Ohio Supercomputer Center has scheduled downtime for all HPC systems on Sunday, September 29, 2013 from 7AM until 5PM. The downtime will affect the Glenn cluster, Oakley cluster, web portals, and HPC file servers. Login services and access to storage will not be available during this time.

In order to quiesce the system for an orderly shutdown, beginning September 15th, the batch scheduler will begin holding jobs that cannot complete before 7AM on 9/29/2013. Jobs that are not started will be held until after the downtime and started once the system is returned to production status.

Departmental clusters that we are administering will not be affected by this outage.

Highlights of the downtime activities:

  • OSC systems are being moved to a different UPS and generator system as a result of updates to building infrastructure. These changes will allow additional power capacity to be added in the future.
  • Upgrading software on Ethernet switches to improve performance between 1Gb and 10Gb Ethernet connections.
  • The kernel on the Oakley compute nodes is being upgraded.

To stay up to date on system notices, please visit http://osc.edu/n or follow @HPCNotices on Twitter.

Start: 09/29/2013 7 AM
End: 09/29/2013 5 PM
CANCELED: HPC System Downtime Scheduled for July 30th, 2013
Scheduled Downtime

This downtime has been CANCELED. Construction delays at the SOCC have postponed the requirement for power changes until a future date. We will communicate the new date to you once we have agreed on a schedule with the various impacted parties.

The Ohio Supercomputer Center has scheduled downtime for all HPC systems on Tuesday, July 30th, 2013 from 7AM until 5PM. The downtime will affect the Glenn cluster, Oakley cluster, web portals, and HPC file servers. Login services and access to storage will not be available during this time.

In order to quiesce the system for an orderly shutdown, beginning July 16th, the batch scheduler will begin holding jobs that cannot complete before 7AM on 7/30/2013. Jobs that are not started will be held until after the downtime and started once the system is returned to production status.

Departmental clusters that we are administering will not be affected by this outage.

This downtime is necessary to accommodate power changes being made as part of infrastructure improvements at the State of Ohio Computer Center (SOCC). The timing has been dictated by the construction schedule, and we were unable to align it with our normal quarterly downtime cycle.

To stay up to date on system notices, please visit http://osc.edu/n or follow @HPCNotices on Twitter.

Start: 07/30/2013 7 AM
End: 07/30/2013 5 PM
HPC System Downtime Scheduled for June 4th, 2013
Scheduled Downtime

The Ohio Supercomputer Center has scheduled downtime for all HPC systems on Tuesday, June 4th, 2013 from 7AM until 5PM. The downtime will affect the Glenn cluster, Oakley cluster, web portals, and HPC file servers. Login services and access to storage will not be available during this time.

In order to quiesce the system for an orderly shutdown, beginning May 21st, the batch scheduler will begin holding jobs that cannot complete before 7AM on 6/04/2013. Jobs that are not started will be held until after the downtime and started once the system is returned to production status.

Departmental clusters that we are administering will not be affected by this outage.

To stay up to date on system notices, please visit http://osc.edu/n or follow @HPCNotices on Twitter.

Start: 06/04/2013 7:00 AM
End: 06/04/2013 5:00 PM
Unexpected outage on Oakley - resolved
System Maintenance
Oakley experienced problems that caused both newly submitted and running jobs to fail. The systems staff worked to resolve the issue, and it has now been fixed. If you had a running job that was affected, you will be contacted about a refund of RUs.
Start: 05/18/2013
End: 05/18/2013
Downtime extended for some services
System Maintenance

All systems should be functioning normally. Please report any remaining issues to OSC Help.


Some difficulties experienced during the downtime resulted in some services not returning to production status as scheduled.

Status of the affected systems:

  • Oakley (returned to service)
  • ARMSTRONG (returned to service)
  • license server (returned to service)
  • proj11 (returned to service)
  • proj12 (returned to service)
  • proj13 (returned to service)
  • proj14 (returned to service)

Glenn is operational. We will update as the situation changes.

Start: 02/26/2013
End: 02/27/2013
HPC System Downtime Scheduled for February 26th, 2013
Scheduled Downtime

The Ohio Supercomputer Center has scheduled downtime for all HPC systems on Tuesday, February 26th from 7AM until 5PM. The downtime will affect the Glenn cluster, Oakley cluster, web portals, and HPC file servers. Login services and access to storage will not be available during this time.

In order to quiesce the system for an orderly shutdown, beginning February 12th, the batch scheduler will begin holding jobs that cannot complete before 7AM on 2/26/2013. Jobs that are not started will be held until after the downtime and started once the system is returned to production status.

Departmental clusters that we are administering will not be affected by this outage.

To stay up to date on system notices, please visit http://osc.edu/n or follow @HPCNotices on Twitter.

Start: 02/26/2013 7:00AM
End: 02/26/2013 5:00PM
Changes to Accounting Algorithm for Parallel Jobs at OSC
System Maintenance

OSC will adopt a new charging algorithm for parallel jobs on Oakley and Glenn effective February 26, 2013. Parallel jobs requesting partial nodes will be charged at a higher rate after this date. Note that for academic users all charges are in terms of RUs charged against your allocation.

Parallel jobs are always given whole nodes, even if they request fewer processors than the total available on a node. This rule has always existed on Glenn and will be put in place on Oakley on Feb. 26. Beginning on that date, parallel jobs will be charged for all the processors on the nodes they occupy.

Depending on your current usage, this change could cause you to use up your allocation at a much faster rate than before. For example, a job requesting nodes=5:ppn=4 on Oakley will be charged for 5x12=60 processors because each Oakley node has 12 processors. On Glenn the charge would be for 5x8=40 processors.

If you are requesting fewer than 8 (Glenn) or 12 (Oakley) processors per node (ppn) because you are using an old script from an earlier system, we suggest that you use fewer nodes and all the processors on each node. For example, nodes=6:ppn=4 could be changed to nodes=3:ppn=8 on Glenn or nodes=2:ppn=12 on Oakley.
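
In a PBS script, that change is a single resource line. A minimal sketch of the relevant directive on Glenn (nothing else in the script needs to change):

    # Before: 24 cores spread across 6 partially used nodes; after
    # Feb. 26 this is charged as 6 whole Glenn nodes (6x8=48 cores).
    #PBS -l nodes=6:ppn=4

    # After: the same 24 cores packed onto 3 fully used Glenn nodes
    # (3x8=24 cores charged).
    #PBS -l nodes=3:ppn=8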

If you are requesting fewer than the maximum number of processors because of the memory requirements of your MPI processes, there is no need to change your script. Memory is just as valuable a resource as processors, and our new accounting algorithm reflects that fact.

You can check your RU balance by logging in to the ARMSTRONG portal, https://armstrong.osc.edu.

If you would like assistance updating your PBS scripts to make efficient use of our current HPC resources, please contact oschelp@osc.edu (614-292-1800 or 1-800-686-6472).

Feel free to contact us with any questions or concerns.

Accounts manager: Barb Woodall (woodall@osc.edu)
Computational science consultant: Judy Gardiner (judithg@osc.edu)
User support manager: Brian Guilfoos (guilfoos@osc.edu)

Start: 02/26/2013
End: 02/26/2013
Large run on Oakley
System Maintenance

As discussed at the last SUG meeting, a commercial user has approached OSC requesting exclusive use of up to 66 percent of the Oakley Cluster. This user has requested up to 467 nodes, or 5,604 cores, for up to 45 days. This opportunity will bring substantial benefit to our academic users in both the short and long term, even though we expect wait times in the Oakley queue to expand considerably during this run.

We are supporting the request for the following reasons:

  • A portion of the revenue generated by this public-private partnership will directly provide additional resources to the user community. A few ideas being investigated include adding memory to several Oakley nodes, or possibly adding storage capacity to the home directory systems (allowing increased quotas).
  • In the long term, successful completion of this project through a public-private partnership can have a positive impact on OSC's future. As we are all aware, public-private partnerships are the "new normal" for academia. This project can generate good will for the center. Additionally, the hardship you experience will demonstrate that, in order to take on this kind of large project more frequently, we need to increase our computational capacity.

We are negotiating the final details, but we expect this dedicated computation to begin Dec. 3, 2012. Rest assured that we feel a strong obligation to act in the best interests of the academic community and are looking for ways to mitigate the impact to our regular users. We have been consulting SUG leadership and some key long-term users as we consider the impacts of this request.

Thank you for your ongoing support of the Ohio Supercomputer Center and for your cooperation during this time.

If you have any questions, concerns, or comments please do not hesitate to give me a call personally at 614-292-2846. Our support staff and I can assist you in getting your work done under these conditions.

OSC Executive Director Pankaj Shah (614-292-1486) and Director of Supercomputing Operations Kevin Wohlever (614-247-2061) are also available to address any concerns.

Thank you,
Brian Guilfoos
614-292-2846
guilfoos@osc.edu

OSC Help Desk
oschelp@osc.edu
614-292-1800
800-686-6472


Mitigation Strategies: We suggest users carefully select job sizes when running on Oakley. Of special importance is accurately sizing your walltime requests. Include a safety factor, but remember that grossly overestimating the required walltime will result in your jobs waiting longer in the queue. Additionally, moving some of your work to Glenn may be beneficial. Please contact OSC Help if you need assistance with porting your jobs to Glenn.
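
For reference, walltime is set in the job script header. A minimal sketch, assuming a job that typically finishes in about three hours (the job name and executable are placeholders):

    #!/bin/bash
    #PBS -N example_job
    # Typical runtime is ~3 hours; request 4 as a safety factor rather
    # than padding to the maximum, so the scheduler can start the job
    # sooner.
    #PBS -l walltime=4:00:00
    #PBS -l nodes=1:ppn=12

    cd $PBS_O_WORKDIR
    ./my_solver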

Start: 12/03/2012
End: 01/11/2013
OSC Discontinuing support of Partek
Software

On December 3rd, 2012, OSC will begin discontinuing support for the Partek software, and we will turn off the license server on December 31st. We are currently investigating alternative ways to make the software available to current users. Please contact OSC Help at oschelp@osc.edu, 614-292-1800, or 1-800-686-6472 if you are interested in the future of Partek license support.

Start: 12/03/2012
End: 12/31/2012
Delay in password change processing: Now resolved
System Maintenance
The systems work has been completed. Password and other changes should now be processed in the normal manner.

Users may find a delay in password change requests being processed on Friday, December 28, 2012, owing to systems work. We will update ARMSTRONG notices when this work has been completed.
Start: 12/28/2012 8:00 a.m.
End: 12/28/2012 unknown
ANSYS/Fluent license update on 3 December 2012
Software

On Monday morning, December 3rd, 2012, we will be updating the ANSYS license. This will impact all ANSYS and Fluent jobs that utilize the academic license on both the Oakley and Glenn clusters. For the past several days we have been manually holding jobs that would not complete by Monday morning, but as this is a manual process, some may be missed, especially over the weekend. Any jobs running on Monday morning may experience difficulties, so we advise academic ANSYS and Fluent users to avoid running jobs at that time and to avoid starting any jobs that will not complete by then.

Please contact OSC Help (oschelp@osc.edu or 614-292-1800 / 1-800-686-6472) if you have any questions.

Start: 11/30/2012
End: 12/03/2012
Some account-related services will be unavailable for a brief outage
System Maintenance
In preparation for some major improvements in the future, we will need to take our account management database offline for 2 hours on the afternoon of November 6th, 2012. Beginning at 2 PM, password changes and resets, adding new users and projects, ARMSTRONG, and a few other account services will be unavailable until approximately 4 PM. Production services on the supercomputers will not be impacted.
Start: 11/06/2012 2 PM
End: 11/06/2012 4 PM
HPC System Downtime Scheduled for October 16th, 2012
Scheduled Downtime

The Ohio Supercomputer Center has scheduled downtime for all HPC systems on Tuesday, October 16th from 6AM until 5PM. The downtime will affect the Glenn cluster, Oakley cluster, web portals, and HPC file servers. Login services and access to storage will not be available during this time.

In order to quiesce the system for an orderly shutdown, beginning October 2nd, the batch scheduler will begin holding jobs that cannot complete before 6AM on 10/16/2012. Jobs that are not started will be held until after the downtime and started once the system is returned to production status.

Departmental clusters that we are administering will not be affected by this outage.

To stay up to date on system notices, please visit http://osc.edu/n or follow @HPCNotices on Twitter.

Start: 10/16/2012 6 AM
End: 10/16/2012 5 PM
Brief outage for maintenance will affect selected web services
System Maintenance

In order to load a new module, we need to restart one of our web servers. The restart is scheduled for 9AM on Tuesday, August 7th, and should last about 15 minutes. The following websites will be impacted:

eweldpredictor.ewi.org
portal.epolymer.org
epi.osc.edu
websvcs.osc.edu
webdav.osc.edu
rmws.osc.edu
mirror.osc.edu
galaxy.osc.edu
mcic.osc.edu
cdi.osc.edu
cmif.osc.edu
pmgf.osc.edu
svn.osc.edu
webmo.osc.edu
microscope.osc.edu
virtualslide.osc.edu
ondemand.osc.edu
openid.osc.edu

If you have any questions or concerns, please contact OSC Help at oschelp@osc.edu.

Start: 08/07/2012 9:00 AM
End: 08/07/2012 9:15 AM
HPC System Downtime Scheduled for June 18th, 2012
Scheduled Downtime

The Ohio Supercomputer Center has scheduled downtime for all HPC systems on Monday, June 18th from 6AM until 5PM. The downtime will affect the OSC Glenn Opteron cluster, Oakley cluster, BALE cluster, and HPC file servers. Login services and access to storage will not be available during this time.

In order to quiesce the system for an orderly shutdown, beginning June 3rd, the batch scheduler will begin holding jobs that cannot complete before 6AM on 6/18/2012. Jobs that are not started will be held until after the downtime and started once the system is returned to production status.

Departmental clusters that we are administering will not be affected by this outage.

Start: 06/18/2012 6 AM
End: 06/18/2012 5 PM
Problem with batch system on Oakley - resolved
System Maintenance

Oakley was experiencing some technical difficulties relating to the batch scheduler earlier today.  Service was restored at 11:45am.  We apologize for the inconvenience.

Start: 04/27/2012
End: 04/27/2012
Accounting Will Be Enabled on Oakley Starting Monday 4/23
System Maintenance

Accounting will be enabled on Oakley on Monday, April 23.  All jobs that complete on or after this date will be charged for resources used.  We have not been charging during the transition period because of anticipated startup glitches, but the system is now stable and ready for production use.  We hope you have a successful computing experience on our newest system.

Start: 04/23/2012
End: 04/23/2012
Brief Outage Thursday Morning
System Maintenance

There will be a BRIEF OUTAGE at 07:00 on Thursday, April 19, so we can reboot the home directory servers. User jobs and interactive sessions will hang while the servers are rebooted. After the nodes are back up, sessions and jobs should resume execution. The reboots will require about 30 minutes.

Start: 04/19/2012 7:00 am
End: 04/19/2012 7:30 am
Oakley is now available to general users
System Maintenance

We have completed the test period of Oakley, and the HP Intel Xeon machine is now available to the wider user community. For the time being, RUs will not be charged to accounts; we will provide advance notice of when cycles will be charged (along with final details about how the charging algorithm has changed).

For information about using Oakley, please visit the Using Oakley guide. Please note that not all packages currently available on Glenn have been installed on Oakley, and the process of doing installations is still ongoing. If there is software you use on Glenn and need on Oakley, please contact OSC Help to inform us of your requirements. Until your software is available on Oakley, you will need to continue using Glenn.

Please continue to monitor osc.edu/n and @HPCNotices on Twitter for information on charging changes.

Start: 03/19/2012 9 AM
End: 03/30/2012 5 PM
Major center-wide downtime scheduled for March 14th and 15th
Scheduled Downtime

The Ohio Supercomputer Center has scheduled downtime for all HPC systems from Wednesday, March 14th at 7AM until Thursday, March 15th at 6PM. Scheduled downtime will affect the OSC Glenn Opteron Cluster, Oakley Cluster, BALE Cluster, and HPC File Servers. Login services and access to storage will not be available during this time.

In order to quiesce the system for an orderly shutdown, beginning March 1st, the batch scheduler will begin holding jobs that cannot complete before 7:00 AM on 3/14/2012. Jobs that are not started will be held until after the downtime and started once the system is returned to production status.

Departmental clusters that we are administering will not be affected by this outage.

The Oakley system is in testing at the moment, and we anticipate opening it to the wider community shortly. This work is required before we can do so. For more information about differences between Glenn and Oakley, please visit https://armstrong.osc.edu/pg/groups/4968/oakley-deployment-amp-transition/

Start: 03/14/2012 7AM
End: 03/15/2012 5PM
BMI Owens Cluster Scheduled Downtime
Scheduled Downtime

OSC will be taking the Owens cluster offline on Monday, February 27th, 2012 to make a few network configuration changes to better enable support of the cluster. The work is scheduled to begin at 7AM, and is scheduled to last until 5PM.

If you have any questions or concerns, please do not hesitate to contact OSC Help at 614-292-1800, 1-800-686-6472, or oschelp@osc.edu between 9AM and 5PM, Monday through Friday, excepting University holidays.

Start: 02/27/2012 7AM
End: 02/27/2012 5PM
BMI RI Cluster Scheduled Downtime
Scheduled Downtime

OSC will be taking the RI cluster offline on Monday, February 20th, 2012 to make a few network configuration changes to better enable support of the cluster. The work is scheduled to begin at 7AM, and is scheduled to last until 5PM.

If you have any questions or concerns, please do not hesitate to contact OSC Help at 614-292-1800, 1-800-686-6472, or oschelp@osc.edu between 9AM and 5PM, Monday through Friday, excepting University holidays.

Start: 02/20/2012 7AM
End: 02/20/2012 5PM
Decommissioning of Phase 1 Glenn
System Maintenance

On Dec 14 at 7AM, we will begin decommissioning Phase 1 of Glenn. This will entail the shutdown and removal of all "olddual" and "oldquad" nodes, to make room for the delivery of Oakley later in December. Any jobs requesting "olddual" or "oldquad" that will not finish by this time will never start. Please modify your job requests accordingly. Please continue to monitor System Notices and the Oakley Transition blog for more information about changes to OSC services.
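
For example, a sketch of the change in a Glenn job script (node properties follow the standard PBS resource syntax; the node and core counts shown are illustrative):

    # Old request -- after Dec 14 at 7AM this will never start:
    #PBS -l nodes=2:ppn=8:oldquad

    # Revised request targeting nodes that remain in service:
    #PBS -l nodes=2:ppn=8:newdual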

Note: Due to supplier problems, this date has been delayed from Dec 7th.

Start: 12/14/2011 7:00AM
End: 12/21/2011
HPC System Downtime Scheduled for October 18th, 2011
Scheduled Downtime

The Ohio Supercomputer Center has scheduled downtime for all HPC systems on Tuesday, October 18, 2011, from 6:00am until noon. Scheduled downtime will affect the OSC Glenn Opteron Cluster, BALE Cluster, and HPC File Servers. Login services and access to storage will not be available during this time.

In order to quiesce the system for an orderly shutdown, beginning October 4th, the batch scheduler will begin holding jobs that cannot complete before 6:00 AM on 10/18/2011. Jobs that are not started will be held until after the downtime and started once the system is returned to production status.

The delivery date for the new Oakley cluster is undetermined; however, it will not be early enough to necessitate turning off a portion of Phase I Glenn during this downtime. Please watch for additional system notices; more information on Oakley can be found at https://armstrong.osc.edu/pg/groups/4968/oakley-deployment-amp-transition/

Start: 10/18/2011 6 AM
End: 10/18/2011 12 PM
System In Production
Scheduled Downtime

Our HPC systems are back in production. Filesystem checks have been completed. Any held jobs have been released from the queue, and all users should be able to access the system.

If you have any questions or problems, please contact the OSC Help Desk at oschelp@osc.edu, 614-292-1800 or 1-800-686-6472.

Start: 07/26/2011 8:00 PM
End: 07/27/2011
PVFS directories will be erased and rebuilt
System Maintenance

During OSC’s scheduled downtime on July 26, 2011, the Parallel Virtual File System (PVFS) will be erased and rebuilt. All data on PVFS at that time will be lost and cannot be restored.

This action affects only those users who have directories on PVFS. If you want to keep data that you have stored on PVFS, you will need to copy it to another location. Feel free to contact us if you need assistance copying large amounts of data.
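
If you would like to copy the data yourself, a minimal sketch (the PVFS path shown is a placeholder; substitute your actual PVFS directory and preferred destination):

    # Copy everything from your PVFS directory into a backup folder in
    # your home directory before the July 26 rebuild.
    rsync -av /pvfs/$USER/ $HOME/pvfs-backup/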

If you have any questions, please contact the OSC Help Desk at oschelp@osc.edu or 614-292-1800 or 1-800-686-6472.

Start: 07/26/2011 6:00am
End: 07/26/2011 12:00pm
HPC System Downtime Scheduled on July 26, 2011
Scheduled Downtime

The Ohio Supercomputer Center has scheduled downtime for all HPC systems on Tuesday, July 26, 2011, from 6:00am until noon. Scheduled downtime will affect the OSC Glenn Opteron Cluster, BALE Cluster, and HPC File Servers. Login services and access to storage will not be available during this time.

The system will begin draining approximately 2 weeks prior to that date. The batch scheduler will prevent any job from starting if it will not complete before 6 a.m. on 7/26/2011. Jobs that are not started will be held until after the downtime and started once the system is returned to production status.

NOTICE TO PVFS USERS: During the downtime, the Parallel Virtual File System (PVFS) will be erased and rebuilt. All data on PVFS at that time will be lost and cannot be restored. If you want to keep data that you have stored on PVFS, you will need to copy it to another location.

Please contact the OSC Help Desk at oschelp@osc.edu, 614-292-1800, or 1-800-686-6472 if you have any questions.

Start: 07/26/2011 6:00 am
End: 07/26/2011 12:00 pm
Scheduled Downtime Extended until at least 4:00pm today.
Scheduled Downtime

Filesystem checks are still in progress, and the system will not return to production status until at least 4PM. We will be providing hourly updates. The most up-to-date information will be available at our Twitter page, or at the bottom left corner of http://www.osc.edu.

If you have any questions, please contact the OSC Help Desk at oschelp@osc.edu or 614-292-1800 or 1-800-686-6472.

Start: 07/26/2011 12:00 pm
End: 07/26/2011 4:00 pm
NAMD 2.6-mpi has been installed
Software

Version 2.6 of the NAMD molecular dynamics package has been installed on Glenn. This installation has been configured to use MPI communications. The module name is namd-2.6-mpi. We recommend that users prefer namd-2.6-mpi to namd-2.6-tcp.

The main reason to prefer namd-2.6-mpi is to work around a problem triggered by namd-2.6-tcp that can result in a compute node crash. A secondary reason is performance: the MPI installation scales to larger numbers of nodes.
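
For example, a job script would select the MPI build as follows (a minimal sketch; the configuration file name is a placeholder, and launcher details may vary):

    # Load the MPI build of NAMD (preferred over namd-2.6-tcp):
    module load namd-2.6-mpi
    # Launch across the nodes allocated to the job:
    mpiexec namd2 config.namd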

Start: 07/06/2011
End: 07/06/2011
Seven oldquad nodes on Glenn to be retired
System Maintenance

Seven of the 88 oldquad nodes (dual core, quad socket) on Glenn are being retired. The hardware will be used in support of the HPC infrastructure, including file servers. The nodes being retired have 8 cores, 16GB memory, and 218GB local disk space. Users who have been running jobs on the oldquad partition should consider migrating to newdual, with 8 cores, 24GB memory, and 393GB local disk per node. Please contact the OSC Help Desk if you have questions or need assistance.

Start: 06/24/2011
End: 06/24/2011
Glenn Systems Downtime is COMPLETE
Scheduled Downtime

The systems maintenance on all OSC systems is complete.  All systems and services are back and functioning normally.  

Thank you for your patience.

Please report any problems to the OSC Help Desk.

Start: 04/12/2011 6:00 AM
End: 04/12/2011 5:00 PM
OpenBabel-2.3.0 has been installed on Glenn
Software

OpenBabel, a chemical toolbox used mainly for data format conversion, has been installed on the OSC Glenn system. Please contact the OSC Help Desk for more information on this software package.

Start: 01/11/2011
End: 01/13/2011
Planned Maintenance has been completed
System Maintenance

The planned maintenance has been completed. OSC systems are back up. Interactive users can now log in. Batch processing will begin when testing is complete.

Start: 01/10/2011
End: 01/13/2011
OSC HPC System Downtime Scheduled for this Monday January 10, 2011 at 6:00am
Scheduled Downtime

OSC HPC systems will be unavailable on 1/10/2011 from 6am until 11am for planned maintenance.

Login access as well as access to files will be unavailable during this time. Batch jobs will need to be stopped prior to the downtime, so the batch queues will not start any job that will not complete by 1/10/2011 at 6am, based on your walltime settings. Any jobs that are not started will be held until after the downtime.

Start: 01/10/2011 6:00 am
End: 01/10/2011 11:00 am
OSC HPC System - 32 bit Applications Unavailable **UPDATE**
System Maintenance

Update: 9/21/2010 at 8:40 a.m. Due to a kernel bug, the Glenn cluster systems are vulnerable to a local privilege exploit. The exploit relies on a bug in the 32-bit compatibility layer. Until Red Hat provides an updated kernel, it is necessary to disable legacy 32-bit binaries.

Applications that are currently unavailable because their execution involves 32-bit binaries include:

  • Intel compilers, version 10.0. Users can switch to the version 11.1 compilers, which are 64-bit applications and do not have this problem at this time, using the command:

    module switch intel-compilers-10.0 intel-compilers-11.1

  • MOE
  • Macromodel

Also, the default modules environment was not being loaded properly between 4:30pm on 9/20 and approximately 9am on 9/21. The primary symptom of this problem was that batch commands, such as qstat, were not in the default path. This was corrected around 9am on 9/21.

Please contact oschelp@osc.edu if you have questions or concerns.

Start: 09/21/2010 8:40 am
End: 09/22/2010 10:00 am