Forum 2/2003

AWIPS LAN
By Michael R. Biere and Darien L. Davis

Introduction

Historically, the AWIPS site architecture has relied on a central repository at each National Weather Service (NWS) Forecast Office site that contains all of the data viewed at the workstations within that office. The host holding these data is the data server, and the Network File System (NFS) is used to export its data file system to each visualization workstation at the site. This central data server has been one of the performance bottlenecks at the sites.

This architecture can be thought of as a pull approach, in which data are pulled by the workstation software as needed to service user-initiated display requests. FSL has investigated an alternative push approach, in which data are efficiently pushed to the workstations as they become available on the server. The datasets are then available on the local disk of the workstation, ready to be loaded as needed by the display software.

In this article, we discuss how this approach improves workstation performance by reducing latencies and increasing throughput in displaying data on the workstations. We also discuss experiments that verify the performance improvement. Open design issues include the optimal subset of site data to broadcast, the data notification mechanisms, and whether to multicast transmission-format or decoded data. In our demonstration system, we have generally taken the simplest possible approach to each of these issues.

The Multicast Approach

FSL adopted the broadcast software developed at the National Severe Storms Laboratory (NSSL) for use in radar product generation. This software is general and flexible enough to be used for efficiently pushing data to the workstations. The NSSL software may be run as either a broadcast, in which data are sent to every host on a network, or as a multicast, in which data are sent to an addressable subset of the hosts on a network. Both broadcast and multicast send a single stream of packets across the network, which is received simultaneously by all of the receiving hosts. This results in dramatically less network traffic than repeatedly sending the same data over dedicated connections to each receiving host.

Broadcast and multicast are usually considered unreliable communication mechanisms, in the sense that there is no guarantee that the data will be delivered, or that the data will arrive in the correct order. The NSSL software builds in an additional level of reliability by checking for data arrival and retransmitting any missing data over a reliable channel. In practice we have found that very little data need be resent. To reduce the amount of interprocess-communication (IPC) middleware in our demonstration system, we replaced the NSSL IPC software (known as the Linear Buffer) with the standard AWIPS IPC software (ThreadIPC).
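
To give a sense of the mechanism involved, the following sketch shows, in C, how a single datagram can be sent to a multicast group using standard sockets. It is illustrative only: the group address and port are arbitrary examples, and the operational software additionally checks for data arrival and retransmits missing data over a reliable channel, as described above.

    /* Minimal multicast sender sketch (illustrative only; the operational
     * mcast software also checks for data arrival and retransmits missing
     * data over a reliable channel). Group address and port are examples. */
    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <sys/types.h>
    #include <unistd.h>

    int send_to_group(const char *group, unsigned short port,
                      const void *buf, size_t len)
    {
        int sock = socket(AF_INET, SOCK_DGRAM, 0);
        if (sock < 0) { perror("socket"); return -1; }

        struct sockaddr_in addr;
        memset(&addr, 0, sizeof(addr));
        addr.sin_family = AF_INET;
        addr.sin_addr.s_addr = inet_addr(group);  /* e.g. a 239.x.x.x group */
        addr.sin_port = htons(port);

        /* A single sendto() reaches every host that has joined the group. */
        ssize_t sent = sendto(sock, buf, len, 0,
                              (struct sockaddr *)&addr, sizeof(addr));
        if (sent < 0) perror("sendto");
        close(sock);
        return sent < 0 ? -1 : 0;
    }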

The two primary NSSL processes are named bcast and brecv, for broadcast and broadcast receiver, respectively. These processes were modified slightly, and to distinguish the modified versions, we renamed them mcast and mrecv, for multicast and multicast receiver, respectively. There is one mcast process on the central server and one mrecv process on every workstation host that receives the multicast data.
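
On the receiving side, each workstation must join the multicast group so that the kernel delivers the stream to it. The sketch below shows that step with standard socket options; again, the group address and port are arbitrary examples, and the operational mrecv process goes further, handing the received data to a client process via IPC.

    /* Minimal multicast receiver sketch (illustrative only; the operational
     * mrecv process hands received data to a client process via IPC). */
    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <sys/types.h>
    #include <unistd.h>

    int receive_from_group(const char *group, unsigned short port,
                           void *buf, size_t buflen)
    {
        int sock = socket(AF_INET, SOCK_DGRAM, 0);
        if (sock < 0) { perror("socket"); return -1; }

        struct sockaddr_in addr;
        memset(&addr, 0, sizeof(addr));
        addr.sin_family = AF_INET;
        addr.sin_addr.s_addr = htonl(INADDR_ANY);
        addr.sin_port = htons(port);
        if (bind(sock, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
            perror("bind"); close(sock); return -1;
        }

        /* Join the multicast group so the kernel delivers its packets. */
        struct ip_mreq mreq;
        mreq.imr_multiaddr.s_addr = inet_addr(group);
        mreq.imr_interface.s_addr = htonl(INADDR_ANY);
        if (setsockopt(sock, IPPROTO_IP, IP_ADD_MEMBERSHIP,
                       &mreq, sizeof(mreq)) < 0) {
            perror("setsockopt"); close(sock); return -1;
        }

        ssize_t n = recv(sock, buf, buflen, 0);  /* blocks for one packet */
        if (n < 0) perror("recv");
        close(sock);
        return (int)n;
    }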

Expected Benefits

Our rationale for implementing the AWIPS multicast system was to improve overall system performance and expandability, with specific expected benefits including the following.

Loading Time Improvements – Since the workstations can now load data directly from a local disk, rather than across the network from the NFS server, throughput should be higher and latency lower, resulting in faster load times. This should be especially true for high-volume datasets such as satellite imagery.

Lowering NFS Server Load – Since the multicast data need to be accessed only once for transmission rather than for every workstation access, we expect the load on the data server disks to be reduced, along with the NFS load on the server.

Expandability – Adding more workstations does not increase the load on the server in our multicast architecture. This mechanism could also be used to provide data to local application servers with no impact on the AWIPS data server. (This is a slight simplification, but the additional overhead is very small compared with traditional access via the NFS server.)

Redundancy – Since the data are broadcast to every workstation, a server failure will not affect access to existing data. This is unlike the current architecture, in which loss of the data server is a critical failure requiring failover to an alternate server, during which time all data are unavailable.

Lower Network Usage – Along with lower server overhead from accessing data only once, the network usage is also lowered due to sending the data only once for all workstations.

No NFS Race Conditions – With the data safely on a local disk, there will be no NFS cache inconsistencies or race conditions such as those that plague AWIPS display software from time to time.

Design Issues

In considering the use of a multicast data distribution at an AWIPS site, certain design issues need to be addressed. We list some of them here and discuss the simplifying assumptions that we made in our demonstration system.

Deciding What Data to Multicast – One extreme approach is to send and store all data locally on every workstation, which would obviate the need for a central data repository. An alternative is to send all data, but keep a smaller subset on the workstations rather than on a central data server. The central server might have a longer history of data or case studies, for example. Another possibility is to send only a subset of the data to all workstations, and rely on a server for the rest. The criteria for determining what data to send might be based on timeliness performance requirements, or design expediency. Satellite and radar data files are good candidates, for example, because both are important to the forecaster. Since these are file-based datasets, they are easy to broadcast; they are, incidentally, the datasets we chose for our demonstration.

Structured Datasets – A few of the AWIPS datasets are stored in a simple format with one dataset per file; for example, radar scans and satellite images are stored in individual files. This simplifies their distribution because, as each file appears on the server, it is just copied to the workstations. Many datasets are more complicated; for example, model runs are stored in structured files, but the fields of each model run are received individually and inserted into place in the structured file. If the individual fields are multicast as they are received, the multicast receiver software must replicate the ingest functionality that decodes and stores the data into the appropriate file. The extreme case of this would be to broadcast all data as received, in transmission format, and replicate all of the decoders on each workstation. Alternatively, one could wait until each model run (or other structured file) is finished, and then multicast the entire file. This would result in considerable delay in the availability of the earliest data within the structured file. A compromise implementation would be for the workstation display software to use the existing server files while they are being updated, and then use local multicast copies once the files are complete.

Inventory Server – The current AWIPS software is limited in its ability to accommodate an architecture wherein some data are stored on the server and some on each workstation. In particular, AWIPS display software assumes that all meteorological data to display reside within a single directory hierarchy pointed to by the FXA_DATA environment variable. Some dramatic workarounds to this limitation are possible through the creation of symbolic links within the directory structure. However, a more elegant solution would be to extend the existing design with an inventory server software component. This inventory server would have information about the actual distribution of the data at the site, conceptually merging the central data with the data at the workstations, and removing the limitation of one directory hierarchy.
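
As a rough illustration of the idea, the sketch below shows the kind of lookup an inventory server (or an interim workaround) might perform: prefer a local copy of a dataset if one exists, and otherwise fall back to the central hierarchy pointed to by FXA_DATA. The local root and the default server path are assumptions for the purpose of the example.

    /* Hypothetical inventory-style lookup: prefer the workstation's local
     * copy of a dataset, fall back to the central FXA_DATA hierarchy. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <unistd.h>

    #define LOCAL_ROOT "/scratch/data/fxa"     /* assumed local hierarchy */

    /* relpath is relative to the data root, e.g. "sat/..." or "radar/..." */
    int resolve_dataset(const char *relpath, char *out, size_t outlen)
    {
        snprintf(out, outlen, "%s/%s", LOCAL_ROOT, relpath);
        if (access(out, R_OK) == 0)
            return 0;                          /* local multicast copy exists */

        const char *fxa = getenv("FXA_DATA");  /* central hierarchy on server */
        if (fxa == NULL)
            fxa = "/data/fxa";                 /* assumed default */
        snprintf(out, outlen, "%s/%s", fxa, relpath);
        return access(out, R_OK) == 0 ? 0 : -1;
    }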

Missed Data – What do we do after a workstation has been down and unavailable for a while? One approach would be to initiate a backfill operation, filling in data from a central server; once the local database has caught up, use of the workstation could resume. Another approach is to merge any missing data into the local database as needed by the workstation software, in conjunction with loading it to the display. The simplest, and perhaps best, approach is not to bother with backfilling at all, but to merge in any missed data from the central server as needed while the workstation is running. This approach fits well with the inventory server previously discussed. In any of these approaches, the role of the central server could be played by another workstation (or even a set of workstations, in Napster-like fashion) with a complete database.
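
A minimal sketch of the merge-on-demand idea, under the same assumptions as above, might simply copy a missed file from the server hierarchy into the local hierarchy the first time the display software asks for it:

    /* Hypothetical merge-on-demand: if a requested product is missing
     * from the local disk, copy it from the central server hierarchy. */
    #include <stdio.h>

    static int copy_file(const char *src, const char *dst)
    {
        FILE *in = fopen(src, "rb");
        if (in == NULL) return -1;
        FILE *out = fopen(dst, "wb");
        if (out == NULL) { fclose(in); return -1; }

        char buf[65536];
        size_t n;
        while ((n = fread(buf, 1, sizeof(buf), in)) > 0)
            fwrite(buf, 1, n, out);

        fclose(in);
        fclose(out);
        return 0;
    }

    /* Called when the display software requests a product that was missed
     * while the workstation was down. */
    int backfill_if_missing(const char *local_path, const char *server_path)
    {
        FILE *f = fopen(local_path, "rb");
        if (f != NULL) { fclose(f); return 0; }   /* already present locally */
        return copy_file(server_path, local_path);
    }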

Relational Database – Our description of data storage within AWIPS has been simplified to this point. In addition to the file-based datasets already mentioned, other data are currently stored in an Informix relational database on the server. It seems unreasonable to replicate this Informix database on every workstation, so we assume that these datasets will remain centralized.

Notification Server – The AWIPS notification server process, as its name implies, is responsible for alerting client processes at the site when new data become available for display. The notification server has been a centralized function at the site. Once data are distributed to individual workstations, however, each workstation has a potentially different view of the arrival of data. Hence, in a robust architecture, each workstation should have its own notification server to report the arrival of data to that workstation. This is similar to the need for a distributed inventory server that reflects the new distributed nature of data availability. Use of the central notification server in the distributed data environment is subject to race conditions in which notifications arrive before the data have been completely received by the multicast receiver on the workstation.

Purging – Although it is a relatively minor issue, it is clear that replicating data on the workstations will also require replicating data management activities such as purging old datasets to make room for new ones.
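
A workstation-side purger could be as simple as the following sketch, which removes files older than a retention period from one local data directory. The directory layout and retention period are assumptions; a real purger would of course need per-dataset retention rules.

    /* Hypothetical workstation-side purge: delete regular files older than
     * max_age_seconds from one local data directory. */
    #include <dirent.h>
    #include <stdio.h>
    #include <sys/stat.h>
    #include <time.h>
    #include <unistd.h>

    void purge_old_files(const char *dir, time_t max_age_seconds)
    {
        DIR *d = opendir(dir);
        if (d == NULL) return;

        time_t now = time(NULL);
        struct dirent *ent;
        while ((ent = readdir(d)) != NULL) {
            char path[4096];
            snprintf(path, sizeof(path), "%s/%s", dir, ent->d_name);

            struct stat st;
            if (stat(path, &st) != 0 || !S_ISREG(st.st_mode))
                continue;                      /* skip subdirectories, etc. */
            if (now - st.st_mtime > max_age_seconds)
                unlink(path);                  /* older than retention: delete */
        }
        closedir(d);
    }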

Demonstration System

FSL implemented a demonstration multicast system for the two simplest (but still important) datasets: satellite images and radar tilts. We have taken the path of simplicity in most of our design decisions. Figure 1 shows the key processes in our system. The left side shows the processes on the data server, while the right side shows the processes that are replicated on every workstation. The dashed line down the center represents the Local Area Network (LAN).

Broadcast Data Flow

Figure 1. Process flow of demonstration multicast system. The left side shows the data server processes, the right side shows the replicated processes on every workstation, and the vertical dashed line represents the LAN.

The mcast and mrecv processes are modified versions of the NSSL bcast and brecv processes. The multicast_xmit and multicast_recv processes are client processes that handle reading and writing of AWIPS satellite and radar files. On the server, the multicast_xmit process registers with the notification server to be notified when new radar or satellite data are available. When a new file is available, it is read from the data tree below /data/fxa in the file system. The data are then sent via the AWIPS thread IPC to the mcast process, which multicasts the datasets to the workstations. On each workstation, the mrecv process receives the multicast data and hands them off to the client process multicast_recv, which writes the file to disk. The data are written to the workstation local disk in a different directory hierarchy, for example, at /scratch/data/fxa. Every file is written to the same relative position in this file hierarchy as where it was read from in the server's /data/fxa hierarchy.
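
The relative-path mapping just described is straightforward; a sketch of it, assuming the /data/fxa and /scratch/data/fxa roots mentioned above, is:

    /* Map a server path under /data/fxa to the same relative position
     * under the workstation's local /scratch/data/fxa hierarchy. */
    #include <stdio.h>
    #include <string.h>

    #define SERVER_ROOT "/data/fxa"
    #define LOCAL_ROOT  "/scratch/data/fxa"

    /* Returns 0 and fills 'out' with the local path, or -1 if the input
     * path is not under the server data tree. */
    int map_to_local(const char *server_path, char *out, size_t outlen)
    {
        size_t rootlen = strlen(SERVER_ROOT);
        if (strncmp(server_path, SERVER_ROOT, rootlen) != 0)
            return -1;
        snprintf(out, outlen, "%s%s", LOCAL_ROOT, server_path + rootlen);
        return 0;
    }

    /* Example: /data/fxa/sat/... maps to /scratch/data/fxa/sat/... */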

Experimental Results

The test environment simulated the most current deployed AWIPS system, including the Linux preprocessors and Linux workstation. The workstation tests were performed on Linux systems and involved three areas: performance for the forecaster, impact on the data server, and data availability and validity.

Performance Tests – The performance tests run three scripted routines that simulate a forecaster using a mouse and making menu selections. The three tests load satellite, radar, and synoptic datasets. The elapsed time for running the tests is then averaged, and a rating is determined from that average. Figure 2 shows the high-level system and network configuration used for these tests. The hybrid computer architecture uses an HP NFS server and a Linux NFS server. The HP system is networked by the Fiber Distributed Data Interface (FDDI) to a Plaintree switch. The data are then transmitted over a 100Base-T high-speed network to the forecaster workstations. During 2003, the NWS will beta-test this system with the satellite and grid data available on the Linux server along with other datasets on the HP server.

Test System Configuration

Figure 2. Test system configuration. The hybrid architecture uses an HP system networked by FDDI to a Plaintree switch, with data transmitted over a 100Base-T network to the forecaster workstation.

Four timing tests were run on different configurations for accessing data. The Linux workstation was staged with AWIPS 5.2.2 software, and care was taken during the tests to clear all of the buffered data from the cache in memory. Figure 3 shows the configuration for the first test, in which all of the test data were moved to a RAM filesystem on the workstation. Although this does not represent an operational system, it provides a good benchmark for the performance tests: since the data are resident in memory, no disk or network interference is possible, and the results represent the best that the workstation can do on the benchmark tests.

RAM Filesystem Configuration

Figure 3. RAM filesystem configuration for a performance test in which all of the test data are moved to a RAM filesystem on the workstation.

For the second performance test, all of the data were moved locally to a disk on the forecaster workstation. This test shows the performance benchmarks when an NFS server is not used to host the data.

The third performance test placed the data repository on the Linux fileserver (Figure 4). The workstation accessed the data via NFS over the 100Base-T network to transfer the frames for display. This is similar to the configurations the NWS is currently investigating for its port of AWIPS to Linux.

Linux NFS Data Server Configuration

Figure 4. Linux NFS data server configuration in which the workstation accessed data via NFS over the 100Base-T network to transfer the frames for display.

The fourth configuration tested had the data resident on the HP NFS data server (see Figure 2). All data were transferred to the Linux workstation via FDDI and the network switch.

The final tests reflect the b-cast data repository. The grids are on the Linux preprocessor, the point data are on the HP data server, and the radar and satellite data are local to the workstation (Figure 5).

Simulated Multicast Configuration

Figure 5. Simulated multicast configuration in which the grids are on the Linux preprocessor, the point data are on the HP data server, and the radar and satellite data are local to the workstation.

The performance benchmark is based on software written by Michael Biere (FSL) and Timothy Hopkins (National Weather Service). These scripts characterize the forecasters' data retrieval during normal use of the workstation. The final performance ratings from these tests are summarized in Table 1; since the rating is derived from the averaged load times, lower values indicate faster performance. The last line in Table 1 shows the Workstation Performance Rating (WPR) for the HP workstation client and server; this rating reflects the current AWIPS system without any Linux upgrades. As one can see, the performance improvements are remarkable as more data are removed from the HP NFS server.

Table 1. Workstation Performance Ratings for Testing Configurations

    File Access Type        WPR
    RAM filesystem           28
    Local data               35
    Linux NFS                59
    HP NFS                   95
    B-cast (Figure 5)        44
    All HP hardware        >300

Impact and Data Tests – The tests for server impact and data validity have not been completed at this time. These results will be presented at the Annual Meeting of the American Meteorological Society in February 2003.

Summary

Although the software is in preliminary testing, the results are promising. Forecasters will see performance gains for each dataset propagated to the Linux server and workstation. As expected, the greatest gains occur when all of the data are local to the forecaster workstation and NFS is not used at all. This is, however, a complicated data management effort; for example, the datasets that are key to system support and forecasters alike must be separated and prioritized for successful use of the b-cast software.


Note: A complete list of references and more information on this and related topics are available at the main FSL Website http://www.fsl.noaa.gov, by clicking on "Publications" and "Research Articles."

(Michael Biere is a Systems Analyst in the Systems Development Division, headed by Herbert Grote. He is also affiliated with the Cooperative Institute for Research in the Atmosphere, Colorado State University, Fort Collins, CO. Mr. Biere can be reached at Michael.Biere@noaa.gov.

Darien Davis is a Computer Specialist/Technical Advisory in the same division. She can be reached at Darien.L.Davis@noaa.gov.)

