The Center for Information Technology Integration at the University of Michigan and IBM have undertaken a joint project to gain experience with the IBM Multimedia Server for AIX. In this report we describe our video testbed configuration and our experiences installing and running the Multimedia Server on it, and we present some recommendations.
Testbed Configuration

One of our goals in this experiment was to evaluate the efficacy of the multimedia server on a low-end server machine. Accordingly, we selected the RS/6000 Model 42T as our video server.
Server Configuration

The server is an RS/6000 model 42T running AIX 4.1.4, Ultimedia Services 2.1.3, and Multimedia Server 1.1. It is equipped with 192 MB of memory and four 4 GB wide SCSI disks on their own controller for the Multimedia File System (MMFS). MMFS uses NFS to deliver files to its clients.
Client Configuration

There are two types of clients. One is an RS/6000 model 42T with 64 MB of memory. The other is a Pentium 133 (or better) running Windows NT. On the AIX clients we set the suggested parameters for UDP send and receive space (128K) and sb_max (256K).
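On AIX these kernel settings are changed with the "no" (network options) command. The following is a sketch of the invocation, assuming root privileges and the 128K/256K values given above; the settings do not persist across a reboot unless reapplied.

```shell
# Tune AIX network options for the video clients (values from the text).
# sb_max caps the socket buffer sizes, so raise it first.
no -o sb_max=262144          # 256K maximum socket buffer space
no -o udp_sendspace=131072   # 128K UDP send buffer
no -o udp_recvspace=131072   # 128K UDP receive buffer
```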
Network Configuration

The server and clients are each multi-homed on both a traditional 10 Mbps Ethernet and an ATM network. The Ethernet is shared with the department and heavily loaded. The ATM network consists of a single IBM 8285 switch, a 155 Mbps OC-3 connection to the server, and 25 Mbps twisted-pair connections to the clients. All ATM adapters are IBM Turboways, microchannel for the RS/6000 and PCI bus for the Pentia.
We are running Classical IP over ATM, which would seem to scale better than LAN emulation.
Data Files

All data files contain video, audio, or both. We use both AVI and MPEG for combined audio and video. The AVI files are encoded with motion jpeg, Indeo, or mpeg I-frame codecs. The files range in size from 80 KB to more than 500 MB. Data rates depend on the codec and compression parameters used, the frame size and rate, and the audio sampling rate. Rates for video files range from 40 to 5000 Kbps.
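As a rough illustration of how these parameters combine, the following back-of-the-envelope calculation assumes a 320x240 frame at 24 bits per pixel, 30 fps, and a 10:1 JPEG compression ratio. The frame size and compression ratio are illustrative assumptions, not measured values, but the result lands near the 5 Mbps motion jpeg files described above.

```shell
# Rough bitrate estimate for a motion-JPEG clip (assumed parameters).
raw_kbps=$(( 320 * 240 * 24 * 30 / 1000 ))   # uncompressed pixel rate
mjpeg_kbps=$(( raw_kbps / 10 ))              # assumed 10:1 JPEG compression
echo "raw: ${raw_kbps} Kbps, motion-jpeg: ${mjpeg_kbps} Kbps"
```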
Test Plan

Preliminary testing showed that the combination of MMFS with ATM easily provides enough bandwidth to swamp a typical client. In viewing video, the client CPU becomes the bottleneck, as it can't decode the video fast enough to keep up with the incoming video stream.
We were interested in stressing the file system and network rather than the video decoder. The Ultimedia video adapter provides hardware assist in decoding motion jpeg video files, so we chose this format for most of our tests so the client could process the video at the highest possible rate.
The multi-homed hosts allow us to compare performance between the Ethernet and the ATM net. Although the ATM client connections are 2.5 times faster than the Ethernet, we expect a greater than 2.5x performance improvement, because the server is connected at 15.5 times the speed of Ethernet, and ATM provides dedicated bandwidth to each client.
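The link-speed ratios behind this expectation work out as follows, using the speeds (in Mbps) from the network configuration above:

```shell
# Link-speed ratios relative to 10 Mbps Ethernet.
client_x=$(awk 'BEGIN { print 25 / 10 }')    # 25 Mbps ATM client link
server_x=$(awk 'BEGIN { print 155 / 10 }')   # 155 Mbps OC-3 server link
echo "client link: ${client_x}x, server link: ${server_x}x"
```

The 2.5x figure bounds a single client, but because each client has dedicated bandwidth and the server link is 15.5x Ethernet, aggregate throughput across several clients can improve by much more than 2.5x.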
The Ultimedia Services avi and mpeg players do not give any indication when they are being starved for data, which made it difficult to do any quantitative measurements. However, at the maximum rate of 30 frames per second (fps), it is obvious when frames are being dropped. We chose video clips with lots of action to make this more apparent.
We used "nfsstat" and "netstat -v" to get some indication of what was happening at the network and transport layers.
Ethernet Comparison

For control purposes we tried delivering video to our clients over Ethernet. We did this at various data rates and under various Ethernet loads to determine the maximum data rates available.
On a mostly unloaded Ethernet it was possible to view mpeg clips at 1.2 Mbps. The 5 Mbps motion jpeg files were quite choppy but viewable. On a heavily loaded Ethernet, the 5 Mbps files were not viewable at all, and the avi player usually just gave up.
Multiple clients on the same Ethernet failed to work at all at the highest data rates.
ATM

We first tried the same clips we used on Ethernet, and they played with no dropped frames. Then we tried increasing the load on the clients, network, and server.
We found that a single 42T client could play about three simultaneous 5 Mbps video streams. With more than three, one or more of the video streams would stop playing. This was true even when the avi players were run on the file server machine.
Next we started as many client machines as we could, each playing three simultaneous 5 Mbps video streams. At the time we only had three working client machines, and were able to start all of them, for a total of nine video streams, or 45 Mbps from the server. There was no visible degradation in the video as more client machines were added.
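The aggregate server load in this test, and the headroom remaining on the OC-3 link, work out as:

```shell
# Aggregate load implied by the test above (all figures in Mbps).
clients=3
streams_per_client=3
mbps_per_stream=5
total_mbps=$(( clients * streams_per_client * mbps_per_stream ))
headroom=$(( 155 - total_mbps ))   # unused capacity on the 155 Mbps OC-3 link
echo "aggregate: ${total_mbps} Mbps, OC-3 headroom: ${headroom} Mbps"
```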
Non-NFS Transport

Although the MMFS is intended specifically for use with NFS transport, we do not use NFS much at the University. We have done some preliminary testing with two other transport layers, HTTP and Samba. So far we have been unable to test these at high speed (5 Mbps), because we don't have client software for reading a stream of video at this speed (Ultimedia only reads from files). 1.5 Mbps mpeg streams delivered to a streaming mpeg player (InterVU) on the NT machines via HTTP are perfectly smooth, but the same stream delivered via Samba is choppy.
Real-Time Video

We have also done some testing over the ATM net using the M-bone video tool, vic, with motion-jpeg encoding done in the Ultimedia video adapter. The highest data rate supported by vic in this configuration is about 3 Mbps, which works very smoothly at 30 fps between pairs of 42T machines.
Conclusions

The file server and network are able to deliver at least 45 Mbps from the server and 15 Mbps to each of three clients. This would seem to be plenty for delivering low resolution video to a small set of clients in a classroom setting.
However, for this technology to be more useful, we believe it should provide higher resolution video to a larger client community than that served by our single ATM switch. Approximately twice the linear resolution (640x480) is needed to achieve home VCR quality video. Doubling the resolution in each dimension quadruples the pixel count, so this would require four times the video bandwidth, or about 5 Mbps if encoded as an mpeg-1 stream.
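This scaling estimate can be checked with a quick calculation. The 1.2 Mbps figure is the mpeg-1 rate observed in the Ethernet tests above; 320x240 is an assumed frame size for the current clips.

```shell
# Bandwidth scaling when doubling linear resolution (assumed frame sizes).
low=$(( 320 * 240 ))                         # pixels per frame, current clips
high=$(( 640 * 480 ))                        # pixels per frame, VCR quality
factor=$(( high / low ))                     # 4x the pixels
mbps_high=$(awk 'BEGIN { print 1.2 * 4 }')   # scale the 1.2 Mbps mpeg-1 rate
echo "scale factor: ${factor}x, estimated rate: ${mbps_high} Mbps"
```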
Decoding a 5 Mbps mpeg-1 video stream is not possible in software on today's computers, and delivering more than a few of these streams over the current FDDI campus backbone is also not possible.
MMFS and ATM have removed the server and network as the bottleneck in delivering digital video to the desktop. Faster, higher resolution encoders and decoders are needed to realize the full potential of this technology.
Further Testing

With our current configuration we are unable to load the Multimedia File System or the ATM network to the failure point. We hope to add more clients to the network to do this.