FSL in Review 2000 - 2001


Facility Division

Peter A. Mandics, Chief
(Supervisory Physical Scientist)

(303-497-6854)

Web Homepage: http://www-fd.fsl.noaa.gov/

Mark D. Andersen, Senior Database Analyst, 303-497-6518
Jonathan B. Auerbach, Computer Operator, 303-497-3760
Joan M. Brundage, Dep. Chief, Meteorologist, 303-497-6895
Richard J. Bustillos, Small Systems Admin., 303-497-5267
Dr. Joseph R. Carlson, Programmer, 303-497-6794
Lee M. Cohen, Professional Research Asst., 303-497-6052
Michael A. Doney, FSL Network Manager, 303-497-6364
Steven J. Ennis, Computer Specialist, 303-497-6372
Leslie A. Ewy, Systems Analyst, 303-497-6018
Rick A. Grubin, Systems Analyst, 303-497-6991
Paul Hamer, Systems Analyst, 303-497-6342
Huming Han, Computer Operator, 303-497-6862
Leslie B. Hart, Computer Scientist, 303-497-7253
Loeta A. Hendrickson, Computer Specialist, 303-497-6775
Yeng Her, Computer Operator, 303-497-7339
Keith G. Holub, Lead FD Systems Admin., 303-497-6774
Ward Lemke, Systems Administrator, 303-497-7762
Robert C. Lipschutz, Lead Systems Analyst, 303-497-6636
Chris MacDermaid, Sr. Program Mgr., 303-497-6987
Ed Moxley, Systems Administrator, 303-497-6844
Glen F. Pankow, Systems Analyst, 303-497-7028
Peter Rahm-Coffey, Computer Operator, 303-497-7341
Debra J. Saenz, Secretary, 303-497-6109
Susan R. Sahm, Computer Specialist, 303-497-6975
Amenda B. Stanley, Systems Analyst, 303-497-6964
Michael Stephenson, Computer Operator, 303-497-7359
Sarah E. Thompson, Small Systems Admin., 303-497-6024
Dr. Craig Tierney, Systems Engineer, 303-497-6028
Arthur T. Urban, Lead Computer Operator, 303-497-6922
Bruce D. Welker, Systems Administrator, 303-497-6466
Jon S. Wood, Systems Engineer, 303-497-6486

(The above roster, current as of publication, includes government, cooperative agreement, and commercial affiliate staff.)

Address
NOAA Forecast Systems Laboratory, Mail Code: FS2
David Skaggs Research Center
325 Broadway
Boulder, Colorado 80305-3328


Objectives

The Facility Division (FD) manages the computers, communications and data networks, and associated peripherals that FSL staff use to accomplish their research and systems-development mission. The FSL Central Facility comprises 60 Sun Microsystems, Inc., Silicon Graphics, Inc. (SGI), and Hewlett-Packard (HP) computers ranging from workstations and servers to a High Performance Technologies, Inc. (HPTI) supercomputer. The facility also contains a variety of meteorological data-ingest interfaces, storage devices, local- and wide-area networks, communications links to external networks, and display devices. Over 700 Internet Protocol (IP)-capable hosts and network devices serve the other six FSL divisions. They include Unix hosts, PCs and Macintoshes, and network routers, hubs, and switches. This hardware and associated software enable FSL staff to design, develop, test, evaluate, and transfer to operations advanced weather information systems and new forecasting techniques.

The division designs, develops, upgrades, administers, operates, and maintains the FSL Central Computer Facility. For the past 20 years, the facility has undergone continual enhancements and upgrades in response to changing and expanding FSL project requirements and new advances in computer and communications technology. In addition, FD lends technical support and expertise to other federal agencies and research laboratories in meteorological data acquisition, processing, storage, telecommunications, and networking.

The Central Facility acquires and stores a large variety of conventional (operational) and advanced (experimental) meteorological observations in real time. The ingested data encompass almost all available meteorological observations in the Front Range of Colorado and much of the available data for the entire United States. Data are also received from Canada and Mexico, along with some observations from the rest of the world. The richness of these meteorological data is illustrated by such diverse datasets as advanced automated aircraft reports; wind profiler, satellite, Global Positioning System (GPS) moisture, and Doppler radar measurements; and hourly surface observations. The Central Facility computer systems are used to analyze and process these data into meteorological products in real time, store the results, and make the data and products available to researchers, systems developers, and forecasters. The resultant meteorological products cover a broad range of complexity, from simple plots of surface observations to meteorological analyses and model prognoses generated by sophisticated mesoscale computer models.

Accomplishments

Computer Facility

One major focus requiring a large number of staff was the installation and operation of the newly acquired High Performance Computing System (HPCS), shown in Figure 19. Named "Jet," the HPCS consists of a 277-node Compaq Alpha Linux cluster, a 500-gigabyte (GB) storage array, and a 150-terabyte Mass Store System (MSS). Key staff worked closely with the vendor, HPTI, to install and configure Jet for use by FSL and other NOAA scientists. Jet was ready for testing by a subset of users by March 2000, and an operational version was available for FSL scientists the next month. Project leaders outside FSL were given accounts and began using Jet by summer, and by late September, 29 projects (13 outside FSL) were authorized to run on Jet. A Web page, http://www-fd.fsl.noaa.gov/hpcs, was developed containing instructions for submitting projects to run on Jet, brief descriptions of approved projects, information on the MSS, and a Frequently Asked Questions (FAQ) section. Storage was increased to 1 terabyte during the year, and negotiations began for modifications to the interim HPCS upgrade due in Fiscal Year 2001.


Figure 19. Director A.E. MacDonald (front, right) and Brent Shaw, an FSL researcher, outside the FSL High Performance Computing Center.

The on-line FSL storage capacity for Networked Information Management Client-Based User Service (NIMBUS) and NOAAPORT data was expanded significantly with the replacement of the Auspex File Server with a 700-GB Network Appliance (NetApp) Filer. The new device allows data to be served to other laboratory servers at OC-12 (622 Mbps) speeds. A second, smaller NetApp Filer was installed to accommodate FD's software development needs. The available storage space allowed for the creation of a software repository containing source and compiled code for tools and utilities used within the division.

FSL's automated backup system was reviewed and enhanced to improve performance and reduce network traffic. The type and number of files being backed up were scrutinized on a per-machine basis to eliminate backup of nonessential files. Some file servers within FSL now perform their respective backups using locally attached tape libraries to produce more robust and faster backups of these systems. An offsite backup schedule was implemented for all systems, each of which now receives a full backup quarterly, and those tapes are stored offsite.

To support NOAA's new e-mail architecture standard, FSL installed a Netscape Enterprise Messaging Server. All FSL e-mail users were migrated from the previous mailhub. A Secure Server Certificate installed on this server provides access to secure Web- and Internet Message Access Protocol (IMAP)-based e-mail services.

The FSL computer security plan was revised and implemented. The installation of monitoring software, periodic security patches, and the OpenSSH secure shell software all enhance the security of FSL's computers.

The availability of dozens of decommissioned Bureau of the Census PCs provided hardware for the first FD deployment of Linux-based workstations. Older workstations and X-terminals were replaced with these PCs, running Red Hat Linux 6.2, with very positive feedback from users.

A Windows NT server was deployed, providing a much more secure Windows environment for the entire laboratory. All Facility Division PCs were upgraded to NT. Automated virus detection software was installed and regularly updated on all Windows machines.

The FD system administrators continued to support a large variety of Unix operating systems, including HP-UX, SGI IRIX, Linux, and Sun Solaris. Microsoft Windows 95, 98, and Windows NT were also supported. These FD operating systems and commercial applications software were periodically upgraded as new versions became available from vendors. Additional utility, productivity, and tool-type software packages were installed on FSL servers and made available for laboratory-wide use.

A Sensaphone SCADA environment monitoring system was installed in the main and auxiliary computer rooms to continuously monitor the temperature and immediately report to Operations and other appropriate staff via phone calls when preset limits are exceeded. On several occasions, the SCADA system has proven helpful in limiting equipment damage when the building (or DSRC) air conditioner failed and the temperature could have risen rapidly in the computer rooms. Under these circumstances, equipment was turned off quickly and damage was avoided.
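
The alert logic described above is, at its core, simple threshold monitoring: poll each room's temperature and notify Operations when a preset limit is exceeded. The Python sketch below is purely illustrative; the sensor read, the notification call, the 80-degree limit, and the 60-second polling interval are all assumptions, since the actual Sensaphone SCADA unit is a self-contained hardware appliance.

    import random
    import time

    HIGH_LIMIT_F = 80.0    # assumed preset high-temperature limit (degrees F)
    POLL_SECONDS = 60      # assumed polling interval

    def read_room_temperature(room):
        # Placeholder: a real monitor would query the SCADA sensor here.
        return 68.0 + random.uniform(-2.0, 20.0)

    def notify_operations(room, temp_f):
        # Placeholder: the real system places phone calls to Operations staff.
        print(f"ALERT: {room} at {temp_f:.1f} F exceeds {HIGH_LIMIT_F} F limit")

    def monitor(rooms, cycles=3):
        for _ in range(cycles):        # the real monitor runs continuously
            for room in rooms:
                temp_f = read_room_temperature(room)
                if temp_f > HIGH_LIMIT_F:
                    notify_operations(room, temp_f)
            time.sleep(POLL_SECONDS)

    # monitor(["main computer room", "auxiliary computer room"])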

FSL Network

The FSL network underwent several changes and upgrades (shown in Figure 20) to accommodate the demand for increased bandwidth, and to provide a general migration path toward a more efficient use of resources. The core Asynchronous Transfer Mode (ATM) network backbone infrastructure was augmented by adding modules to the three ATM switches already in place. The nine Switch Control Processors on these ATM switches were upgraded from Intel i960 processors with 16 MB of memory to 166-MHz Pentium processors with 64 MB of memory each. This hardware upgrade became necessary after FSL's ATM switch #2 reported 97% processor capacity use under normal traffic loads and began malfunctioning. Since hardware alone does not solve network problems, the Network Team expanded and redistributed the ATM and routing services over the array of aging PowerHub and ATM routing devices to better manage traffic flow. The operating system software on these switches was also upgraded to the latest vendor-recommended version.


Figure 20. Schematic of the current FSL network.

Standardizing and simplifying enterprise networks is one method for improving the efficiency of overall network performance. To this end, many legacy FSL Fiber-Distributed Data Interface (FDDI) network nodes were upgraded to either ATM or Fast Ethernet. Some remaining older systems still requiring FDDI connections will be targeted for upgrade or removal in Fiscal Year 2001. The FSL Network Team works closely with system and data managers to determine the best upgrade path for improving network performance while minimizing interruptions to project functions.

The Wide Area Network (WAN) link that serves FSL was upgraded from a 1.5-Mbps T1 to a 12-Mbps fractional T3 connection with MCI WorldCom. The National Center for Atmospheric Research (NCAR) is funding half of this Internet connection as part of an agreement with FSL and NOAA Boulder to provide backup and failover network paths. FSL's failover path to the Internet is via the NOAA Boulder primary path, which traverses the Boulder Research and Administrative Network (BRAN) and NCAR ATM links to the Front Range Gigapop in Denver. The FSL Network Team collaborated with the NOAA Boulder Network Operations Center (NOC) to redesign an array of four routers utilizing the Open Shortest Path First (OSPF) protocol to make failover possible. This configuration will also provide better traffic management and redundant hardware links to the FSL and NOAA Boulder WANs. This array of routers maintains the Autonomous System configuration (segregation) of all FSL networks from the NOAA Boulder ATM backbone and the numerous other NOAA Boulder networks served by the NOC.

New technologies being investigated and implemented include a Virtual Private Network (VPN) server for secure remote access into the FSL network, and a Cisco Aironet wireless hub. The VPN server provides a migration path from the older Xyplex dial-in server, which has limited security, to a more trusted model for secure remote access. The wireless hub is being investigated for use in FSL conference rooms to simplify on-line training and presentations.

Data Acquisition, Processing, and Distribution

The Data Systems Group (DSG) continued to support the real-time meteorological data acquisition, processing, storage, and distribution systems within the FSL Central Facility (Figure 21). A key component of these systems is the NIMBUS software environment. As shown in Figure 22, NIMBUS is configured to run on six separate Unix hosts, and is supported by a variety of additional data acquisition, distribution, monitoring, and computing devices. To achieve high data availability, all NIMBUS hosts are paired with duplicate systems that serve as failover and software integration platforms when needed. Each day these systems process over 100 meteorological product types, totaling about 30 GB of data.


Figure 21. FSL Central Facility data systems.


Figure 22. Schematic of NIMBUS hosts.

NIMBUS acquires data from many external sources, including the National Weather Service's (NWS) operational NOAAPORT satellite data feed, the Office of Systems Operations (OSO), the National Centers for Environmental Prediction (NCEP), the National Environmental Satellite, Data, and Information Service (NESDIS), NCAR, Aeronautical Radio Inc. (ARINC), and Weather Services International (WSI) Corporation. Additionally, data are acquired from FSL sources outside the Central Facility, including the Demonstration, Forecast Research, and Systems Development divisions. Wideband Weather Surveillance Radar-1988 Doppler (WSR-88D) data are received from the NWS Front Range (KFTG) radar. An FSL-developed direct-readout groundstation acquires data from the Geostationary Operational Environmental Satellites GOES-8 and GOES-10. A new source of data is Jet, which runs the backup Rapid Update Cycle (RUC-2) model for NCEP.

A variety of methods is used to distribute data from the Central Facility. FSL users of NIMBUS datasets typically access real-time data files from the NetApp Network File System (NFS) server-mounted /public data directory tree. External customers, including the NOAA Environmental Technology Laboratory (ETL), several NWS Weather Forecast Offices (WFOs), NCAR, the University Corporation for Atmospheric Research (UCAR) Unidata program, and approximately 15 universities, receive various data types via the Unidata Local Data Manager (LDM) protocol. To provide a backup source of operational NCEP model data, the OSO fetches RUC-2 model data from the Central Facility using File Transfer Protocol (FTP). In addition to real-time data access, most Central Facility datasets are saved on the Central Facility Mass Store System (MSS) for later analysis by FSL scientists and other collaborators.
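
As a concrete illustration of the FTP backup arrangement mentioned above, the hedged Python sketch below retrieves a single grid file with the standard ftplib module. The host name, login, and file path are hypothetical placeholders, not the Central Facility's actual server details.

    from ftplib import FTP

    def fetch_grid(remote_file, local_file,
                   host="ftp.example.gov", user="anonymous", passwd="guest"):
        """Download one model grid file from a (hypothetical) FTP server."""
        with FTP(host) as ftp:
            ftp.login(user=user, passwd=passwd)
            with open(local_file, "wb") as out:
                ftp.retrbinary("RETR " + remote_file, out.write)

    # Example call with a hypothetical path:
    # fetch_grid("/public/data/grids/ruc2/latest.grb", "ruc2_latest.grb")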

Enhancements and extensions to Central Facility systems were implemented to handle new data formats and message types. In particular, the software that processes U.S. Aircraft Communications Addressing and Reporting System (ACARS) data was upgraded to accommodate new United Airlines and Federal Express data formats, and additional enhancements to this software enabled the processing of Aircraft Meteorological DAta Relay (AMDAR) messages from around the world. Also, the Velocity-Azimuth Display (VAD) translator was modified to handle radar messages originating in the NWS Advanced Weather Interactive Processing System (AWIPS). The processing of Global Aviation (AVN) and Eta model data was modified to accommodate new grid domains and resolutions.

In support of FSL projects, new datasets were acquired, including:

  • Sea Surface Temperature and Snowcover grids, AVN model grids, and NCEP precipitation data and station tables.
  • Global Positioning System Meteorological (GPSMet) data from FSL's Demonstration Division.
  • Meteorological Assimilation Data Ingest System (MADIS) data from FSL's Systems Development Division.
  • Turbulence and Icing Aviation Impact Variable (AIV) products from NCAR.
  • Various aviation-related products from the NWS Aviation Weather Center (AWC) via the NCEP-developed DBNet package, which was installed in the Central Facility.
  • Derived GOES satellite products from the University of Wisconsin Space Science and Engineering Center (SSEC).
  • WSR-88D Level-3 products from WSI.
  • GOES-11 data, for the duration of the GOES science test period.

In addition to upgrading or acquiring datasets at the request of FSL users, the following routine support and maintenance activities were carried out:

  • Regularly updated station tables.
  • Modified the maritime data translator to correct software bugs.
  • Tuned GOES groundstation demodulators and maintained ingest hardware.
  • Modified configuration of the LDM server to add routing to new internal and external clients.
  • Installed several updates to the Quality Controlled (QC) ACARS data processing.
  • Installed several updates to the RUC Surface Assimilation System (RSAS).
  • Updated the NIMBUS Data Saving and FSL Data Repository (FDR) systems to store data on the new HPCS MSS.

Staff provided real-time system monitoring and troubleshooting services, including 24-hour-a-day, 7-day-a-week operations and after-hours systems support as needed. Monitoring was facilitated by the Web-based Facility Information and Control System (FICS), which provides real-time status information on critical datasets and systems. The FICS configuration was extended to support new datasets and systems. Other FICS configuration modifications were applied after installation of the new NetApp /public NFS server. A major upgrade to FICS substantially improved the monitoring of LDM services. In addition, FICS monitoring of archived NIMBUS and FDR files was updated to accommodate the new HPCS MSS.

Work began or continued on several long-term development efforts. The Object-Oriented (OO) methodology was incorporated into the redesign of NIMBUS data handling to improve software maintainability and reusability. These OO techniques have so far been applied to prototype grid, point, and satellite data applications.
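
The sketch below suggests what such an object-oriented decomposition might look like: a common handler interface with format-specific subclasses, so that new data types are added by subclassing rather than by touching the ingest code. The class and method names are illustrative assumptions, not NIMBUS's actual design.

    from abc import ABC, abstractmethod

    class DataHandler(ABC):
        """Common interface assumed to be shared by all data handlers."""

        @abstractmethod
        def decode(self, raw_bytes: bytes) -> dict:
            """Translate an incoming message into a common internal form."""

        @abstractmethod
        def store(self, record: dict, destination: str) -> None:
            """Write the decoded record to its output location."""

    class GridHandler(DataHandler):
        def decode(self, raw_bytes):
            return {"type": "grid", "size": len(raw_bytes)}    # placeholder decode
        def store(self, record, destination):
            print(f"storing grid record to {destination}")

    class PointHandler(DataHandler):
        def decode(self, raw_bytes):
            return {"type": "point", "size": len(raw_bytes)}   # placeholder decode
        def store(self, record, destination):
            print(f"storing point record to {destination}")

    # Ingest code can then treat every handler uniformly:
    for handler in (GridHandler(), PointHandler()):
        handler.store(handler.decode(b"\x00" * 128), "/tmp/example")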

The increasing volume of meteorological observations and products (currently several tens of gigabytes per day) requires better management of the associated metadata. Metadata, information about the data, include parameters such as the latitude/longitude of the observations, instrument characteristics, and numerous others needed to use the data and products effectively. FD staff initiated a project to establish a database that allows easy access, updating, parsing, and publishing of meteorological metadata. The main components of the project are a Web interface, an Extensible Markup Language (XML) format, an Oracle database, and up-to-date, standardized open-source software.
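
As a hedged illustration of the kind of record such a database might manage, the Python sketch below builds and parses a small XML metadata entry using only the standard library. The element and attribute names are assumptions for illustration, not the project's actual schema.

    import xml.etree.ElementTree as ET

    # Build a hypothetical metadata record for one observing site.
    record = ET.Element("observation_metadata", id="METAR-KBOU")
    ET.SubElement(record, "latitude").text = "40.01"
    ET.SubElement(record, "longitude").text = "-105.25"
    ET.SubElement(record, "instrument").text = "ASOS surface station"
    ET.SubElement(record, "units", parameter="temperature").text = "celsius"

    xml_text = ET.tostring(record, encoding="unicode")
    print(xml_text)

    # Parse it back, e.g. before loading the fields into a database table.
    parsed = ET.fromstring(xml_text)
    print(parsed.get("id"), parsed.findtext("latitude"), parsed.findtext("longitude"))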

Following an extensive analysis effort to identify possible Year 2000 (Y2K) problems, approximately 45 NIMBUS software modules were systematically repaired, tested, and integrated. Final preparations leading up to the date and onsite monitoring during the rollover resulted in a very successful outcome, with key datasets remaining on-line and only minor difficulties needing attention in a few Central Facility systems. A residual Y2K bug in GOES imager and sounder software surfaced on Leap Day, but was quickly resolved.

Laboratory Project, Research, and External Support

In support of the Aviation Division's Aviation Digital Data Service (ADDS) project, software previously developed for NIMBUS to decode and store Airman's Meteorological Advisories (AIRMETs) was adapted for use at the AWC. This effort required porting and testing of the software to run in a Linux environment with substantially modified data input and output mechanisms.

FSL routinely generated and transferred RUC-2 grids to the NWS to serve as an operational backup to the grids produced on the NCEP IBM SP-2 supercomputer. Similarly, a method was implemented to transfer RUC Surface Assimilation System (RSAS) grids to the OSO as an operational backup. Configuration management and installation methods for the operational RUC-2 and RSAS packages were substantially revamped to streamline the updating process.

Support continued for the FRD-developed QC ACARS processing, with data distribution to external organizations expanding to approximately 20 university and government agency sites.

Working with the FSL WFO-Advanced development staff, Facility Division staff deployed an AWIPS Data Server in the Central Facility. The Data Server ingests NOAAPORT and local radar data, and makes these data available in netCDF format on /public in real time. The Central Facility AWIPS Data Server offloaded data servers in the Systems Development and Modernization divisions, and provided data to several FSL development projects in the Aviation, Forecast Research, and International divisions.
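
For FSL users, reading one of these files might look like the hedged Python sketch below, which uses the third-party netCDF4 package; the file path and variable name are hypothetical, since actual layouts vary by dataset.

    from netCDF4 import Dataset

    def read_variable(path, varname):
        """Return the named variable's values from a netCDF file."""
        with Dataset(path, "r") as nc:
            return nc.variables[varname][:]

    # Example call with a hypothetical path and variable name:
    # temps = read_variable("/public/data/metar/netcdf/20001015_1200.nc", "temperature")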

Additional FSL project support activities included the following:

  • Creation of AWIPS cases for interesting weather events for exercises and case studies:

    • Loaded and swapped six cases as needed for the fall 1999 D3D exercise.
    • Saved cases upon request, such as the 17 May 2000 severe weather and 20 March 2000 icing events.
    • Loaded cases from the MSS to /case as requested by users.
    • Transferred the /case directory from Auspex to the NetApp NFS server.

  • Support of the International Division's FX-Net project:

    • Provided WSI radar data from Gray, Maine, and Sterling, Virginia, on the Plymouth State College data server, ds1-psc.
    • Supported ds1-psc during various demonstrations and training sessions.
    • Installed AWIPS data server software on newly provided FX-Net hardware.

  • Support of other FSL projects:

    • Installed AWIPS data server software on the new Central Facility hardware.
    • Implemented transition of /data/fxa from Auspex to the NetApp NFS server.
    • Planned for the upgrade to AWIPS Build 5.0.

  • Enhanced the Central Facility Local Data Acquisition and Dissemination (LDAD) data processing:

    • Placed Central Facility LDAD decoder/NetCDF maker into production, which allowed processing of mesowest, alert, Internet, schoolnet, raws, and aprswxnet data.
    • Learned how to add new variables.
    • Reviewed metadata update procedures.

The Data Systems Group conducted quarterly Central Facility task-prioritization meetings to ensure that FD development efforts responded to all FSL requirements. The FSL director, division chiefs, project leaders, and other interested parties were invited to review and discuss with the lead FD developers the status of all Central Facility tasks, including data acquisition, processing, storage, NIMBUS, and related facility development efforts. The main result of these meetings was implementation of a prioritized list on the Web, which ensures that FD development activities are carried out in accordance with FSL management, project, and user requirements.

The Facility Division continued to distribute real-time and retrospective data and products to all internal FSL projects and numerous outside groups and users. External recipients include:

  • ETL received real-time GOES-8 and -10 extended sector satellite data in support of the Pan-American Climate Studies (PACS) program, as well as WSR-88D data.
  • NWS Storm Prediction Center (SPC) in Norman, Oklahoma, received six-minute profiler data.
  • NWS Aviation Weather Center in Kansas City.
  • UCAR COMET and Unidata Program Center.
  • NCAR RAP and Mesoscale and Microscale Meteorology Division.

In addition to the data mentioned above, the Facility Division provided other datasets and products to outside groups, which included Doppler radar, upper-air soundings, Meteorological Aviation Reports (METARs), profiler, satellite imagery and soundings, and MAPS and LAPS grids. Operations staff served as liaison for outside users, providing them with information on system status, modifications, and upgrades.

With the addition of two part-time computer Operations staff, Central Facility coverage increased from 16 hours a day, 7 days a week, to 24 hours a day, 7 days a week. Operators monitored the HPCS, real-time data-acquisition systems, and NIMBUS and its associated hardware and software. The operators corrected problems, rebooted machines and/or restarted software, and referred unresolved problems to the appropriate systems administrators, network staff, or systems developers. In support of the FSL user community, operators also answered facility-related questions, performed backups, restored lost files and file systems, and provided data from the Mass Store or the FSL tape library.

Additional Operations accomplishments included:

  • Replaced the simple X-Terminal-based displays used by Operations with fully functional, stand-alone Linux workstations. These tools and capabilities contribute to higher levels of productivity and enhance learning opportunities.
  • Proposed and implemented a quarterly off-site backup plan to protect critical FSL-developed project software, and to ensure FSL's continuity in the event of a disaster, such as a major building fire.
  • Created an on-line collection of Web documents to provide maintenance service contract information to the System Administration team and the Operations staff. This significantly shortens the response time to equipment failures.
  • Generated 50 additional (now totaling 100) Web-based documents for maintaining, troubleshooting, and recovering Central Facility real-time systems.
  • Oversaw the daily laboratory-wide computer system backups amounting to 460 GB of information written each night.
  • Serviced approximately 134 user requests for data compilations, file restoration, account management, and video conferencing.

Development of the FSL Hardware Assets Management System (HAMS) was completed and placed into operation. HAMS, based on an Oracle DBMS, provides the storage, maintenance, and retrieval of detailed records of each piece of FSL equipment and software. The system contains vendor, warranty, and support contact information for each asset. Since it can be used for multiple levels of input, viewing, and searching, HAMS facilitates the tracking of equipment moves, upgrades, and reconfigurations. It provides management, technical support staff, and developers with vital statistics and attributes about FSL hardware and software, and also provides accurate information to the FSL Office of Administration and Research for equipment and software maintenance. Platform-independent Web browsers, serving as the primary HAMS interface, provide extensive query capabilities to satisfy a wide variety of day-to-day requests for asset information and maintenance.
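
The sketch below suggests the kind of asset table and query that sit behind such a Web interface. It uses Python's built-in sqlite3 module purely as a stand-in for the production Oracle DBMS, and the column names are assumptions rather than the actual HAMS schema.

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("""
        CREATE TABLE assets (
            asset_id     TEXT PRIMARY KEY,
            description  TEXT,
            vendor       TEXT,
            warranty_end TEXT,   -- ISO date
            location     TEXT
        )""")
    conn.execute(
        "INSERT INTO assets VALUES ('FD-0042', 'NetApp Filer', "
        "'Network Appliance', '2002-06-30', 'Main computer room')")

    # A typical query behind a Web form: which assets go off warranty before a date?
    for row in conn.execute(
            "SELECT asset_id, description, warranty_end FROM assets "
            "WHERE warranty_end < ? ORDER BY warranty_end", ("2003-01-01",)):
        print(row)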

Division staff provided technical advice to FSL management on the optimal use of laboratory computing and network resources, and participated in cross-cutting activities that extended beyond FSL, as follows:

  • Chaired the FSL Technical Steering Committee (FTSC), which reviewed all FSL equipment fund requests and provided the FSL director and senior staff with technical recommendations for equipment procurements.
  • Served on the FSL Technical Review Committee.
  • Served as Core Team and Advisory Team members for selecting upgrades to the FSL High-Performance Computing System.
  • Served as FSL representative and was elected chair of the OAR Technical Committee for Computing Resources.
  • Served on the DOC Boulder Laboratories Network Working Group.
  • Served on the NOAA High-Performance Computing Study Team.
  • Participated in the creation of OAR's IT Architecture plan.

Projections

Computer Facility

Several major upgrades to the FSL HPCS will be performed during Fiscal Year 2001. The input/output (I/O) performance of the HPCS computational platform will be increased by at least a factor of 10, allowing faster job turnaround and better utilization of the machine. The amount of shared disk storage will be upgraded from 1 TB to about 2.4 TB during winter 2001. Also, 280 additional compute processors will be added to the computational platform. Two large-memory servers will be installed for short-running analysis jobs on the computational platform. The HPCS job scheduler will be upgraded to support multiple real-time runs on the computational platform; this will be especially helpful as more real-time jobs and projects are added from within FSL and across NOAA. A 12-TB Redundant Array of Independent Disks (RAID) system obtained from the Bureau of the Census will be installed to increase the data storage capability of the HPCS computational platform.

More PCs obtained from the Bureau of the Census will be installed, many of which will allow FSL to accelerate the transition to Linux for Central Facility functions.

The transfer of FSL user data, the FSL Data Repository, and the real-time NIMBUS data storage function from the old FSL Mass Store System (MSS) to the new HPCS MSS will be completed.

Staff will continue the initial work on a system to more effectively manage the configuration of computer systems within the Facility and Aviation divisions. The plan is to develop a system to streamline and track system configurations and automate updating of configuration files.

Emphasis will continue on improving computer security within the Central Facility and FSL. More secure versions of FSL's anonymous FTP and Web servers will be deployed. An FSL Computer Policy and Procedures document will be completed and adopted. The FSL computer facilities security plan, risk assessment, and contingency/disaster recovery plan will be submitted for NOAA approval, with accreditation of the computer facility planned by March 2001. Another planned security measure is the hiring of a full-time FSL Security Officer.

FE-36 clean agent fire extinguishers will be installed in the hallways near all FSL computer rooms, and carbon dioxide fire extinguishers will be installed inside all computer rooms. An FM-200 gaseous fire suppression system and a VESDA air sampling system will be installed in the main FSL computer room in early 2001. To provide adequate cooling for the additional equipment deployed in the secondary FSL computer room, the air conditioning will be upgraded.

FSL Network

FSL experienced a number of network difficulties in Fiscal Year 2000 that stemmed from the expanded use of bandwidth and an imbalance in the distribution of a complex configuration of ATM and router services over five primary network devices. Originally purchased as Alantec PowerHubs, these devices have been discontinued by the vendor but, at FSL's request, continue to be supported by the current ATM network vendor, Marconi Corp. The Network Team plans a substantial network upgrade (Figure 23) in Fiscal Year 2001 to offload the services provided by these failing devices onto an array of ATM and Ethernet Campus Switches. The upgrade will expand the ATM OC-12 (622 Mbps) network core to the edge where users and workstations typically attach, provide high-density 10/100Base-T Ethernet and ATM OC-3 (155 Mbps) connections for users, provide ATM OC-12 and Gigabit Ethernet connections for servers, relieve the PowerHubs of all ATM services so they can effectively provide FSL divisional routing, and eliminate 12 low-capacity, low-performance Ethernet switches. In addition, the current network upgrade design will position FSL for an upgrade, once the need arises, of the ATM backbone from 622-Mbps OC-12 to 2.5-Gbps OC-48 via modules available for the Campus Switches. The planned upgrade addresses all current issues for expanded network capacity, provides a faster platform for distributed ATM services, and leverages FSL's investment in ATM for the future expansion of our state-of-the-art multiprotocol network.


Figure 23. Planned FSL network upgrade.

Data Acquisition

Prototyped Object-Oriented grid, point, and satellite data handling software will be made fully functional and integrated into the production NIMBUS systems. Although the reconfiguration of NIMBUS to accommodate these OO applications will be a major undertaking, the improvements are expected to greatly optimize data processing and facilitate system maintenance. In the case of gridded data, the OO methods will reduce the modifications needed for new grids to simply changing a metadata file, rather than implementing new software. Similarly, redesigned satellite data software will simplify GOES imager and sounder data processing. Additional OO software will provide services for better handling of grids obtained via NOAAPORT and LDM.
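
The paragraph above implies that grid definitions become data rather than code. The hedged Python sketch below shows the idea: grid parameters live in a small metadata file read at run time, so adding a grid means adding an entry, not writing software. The file format and field names are assumptions for illustration only.

    import json

    # Contents that might live in a metadata file such as grids.json (hypothetical):
    GRID_METADATA = """
    {
      "ruc2_40km": {"nx": 151, "ny": 113, "dx_km": 40.0, "projection": "lambert"},
      "eta_32km":  {"nx": 237, "ny": 387, "dx_km": 32.0, "projection": "lambert"}
    }
    """

    def process_grid(name, definition):
        print(f"processing {name}: {definition['nx']} x {definition['ny']} points "
              f"at {definition['dx_km']} km spacing")

    definitions = json.loads(GRID_METADATA)
    for grid_name, definition in definitions.items():   # new grids appear here automatically
        process_grid(grid_name, definition)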

Work to port NIMBUS software to Linux will continue. Pending funding approval, some older SGI servers will be replaced with Linux servers. The Linux NIMBUS servers will be directly connected to the HPCS Storage Area Network (SAN). To provide better data and product service to FSL, /public will be transferred to the SAN.

The GOES groundstation ingest hardware components, from demodulators to ingest computers, are nearing practical end-of-life, and need to be replaced. Following an investigation of alternative solutions, a suitable vendor will be identified. Pending FSL management funding approval, new satellite data acquisition systems will be procured and implemented.

The Distributed-Brokered Networking (DBNet) data transfer system, which was initially set up to acquire datasets from AWC, will be configured to also acquire data from NCEP. This will streamline the transfer process by eliminating AWC as an intermediate hop for the NCEP data.

Work will continue on the metadata database project. Modeling of the METAR and GRIdded Binary (GRIB) metadata and populating of the respective databases will be completed.

Laboratory Project, Research, and External Support

New datasets will continue to be acquired as FSL users and project managers request them. Anticipated additions include several new Eta model grids for the Systems Development and Forecast Research divisions, and datasets for the Aviation Division, including Stage IV Precipitation grids from NCEP, different Turbulence AIV products from NCAR, and Alaskan AIRMETs and Pilot Reports (PIREPs).

Support will continue for the distribution of quality-controlled ACARS data to external agencies, and will expand as additional requests for data are accommodated.

Staff will begin support of the FRD collaborative project with NCAR and the Federal Highway Administration (FHWA) by providing consultation on data availability and distribution methods, and by initiating the distribution of LDAD mesonet netCDF files to NCAR/RAP.

In support of the FX-Net project, FD staff will provide assistance with displaying GPSMet water vapor data, implementing national centers AWIPS data server localization, and processing NOAAPORT radar data for requested sites.

The Central Facility NOAAPORT receiving system will be upgraded by adding a second communications processor to serve as a backup/failover system. The AWIPS Data Server will be enhanced by adding support to process NOAAPORT radar data from sites requested by FSL users and projects.

The Central Facility LDAD data processing system will be enhanced by adding new data providers (including GPSMet and additional data sources required by the FHWA project) and the new variables they supply, and by implementing routine metadata update procedures.

Staff will continue support of the NWS NCEP by providing FSL-generated RUC-2 and RSAS backup files.

