Measurement of host costs for transmitting network data (RFC0392) Disclosure Number: IPCOM000004901D
Original Publication Date: 1972-Sep-20
Included in the Prior Art Database: 2001-Jul-11
Document File: 7 page(s) / 14K

Publishing Venue

Internet Society Requests For Comment (RFCs)

Related People

G. Hicks: AUTHOR [+2]


Background for the UTAH Timing Experiments

This text was extracted from an ASCII text document.
This is the abbreviated version, containing approximately 27% of the total text.

Network Working Group                                          G. Hicks
Request for Comments: 392                                    B. Wessler
NIC: 11584                                                         Utah

20 September 1972

Measurement of Host Costs for Transmitting Network Data

Background for the UTAH Timing Experiments

Since October 1971 we, at the University of Utah, have had very large compute-bound jobs running daily. These jobs would run for many CPU hours to achieve partial results, and used resources that might be better obtained elsewhere. We felt that since these processes were being treated as batch jobs, they should be run on a batch machine.

To meet the needs of these "batch" users, in March of this year, we developed a program[1] to use the Remote Job Service System (RJS) at UCLA-CCN. RJS at UCLA is run on an IBM 360/91.

Some examples of these jobs were (and still are!):

(a) Algebraic simplification (using LISP and REDUCE)

(b) Applications of partial differential equation solving

(c) Waveform processing (both audio and video)

The characteristics of the jobs run on the 91 were small data decks being submitted to RJS and massive print files being retrieved, with one exception: the waveform processing group needed, from time to time, to store large data files at UCLA for later processing. When this group did its processing, it retrieved very large punch files that were later displayed or listened to here.

When the program became operational in late March and started being used as a matter of course, users complained that the program page faulted frequently. We restructured the program so that the parts that were often used did not cross page boundaries.

The protocol with RJS at UCLA requires that all programs and data transmitted on the data connection be blocked[2]. This means that we simulate records and blocks with special headers. This we found to be another problem because of the computation and core space involved. The computation took an appreciable amount of time, and we found that, because of our limited real core size, we were being charged an excessive amount due to page faulting. The page faulting also reduced our real-time transmission rate to the extent that we


felt a re-write of the transmitting and receiving portions of the program was needed. So that the program would receive the best service from the system, these portions were optimized so that they each occupied a little over half of a page. As we now had so few pages in core at any one time, the TENEX scheduler could give the program preference over jobs with larger working sets. (As an aside, because of our limited core, we have written a small (one and one half pages) editor in order to provide an interactive text editing service.)
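The record and block simulation described above can be sketched roughly as follows. This is only an illustration of the kind of per-record header overhead the text discusses: the header layout, field sizes, and function names here are assumptions, not the actual NETRJS blocking format from reference [2].

```python
# Hypothetical sketch of record/block framing of the sort the RJS
# protocol requires: each logical record is prefixed with a small
# length header, and framed records are grouped into blocks.
# The real NETRJS header layout differs; this only illustrates the
# copying and header computation the text says was costly.

import struct

def block_records(records, max_block=512):
    """Pack records into blocks; each record gets a 2-byte length header."""
    blocks, current = [], b""
    for rec in records:
        framed = struct.pack(">H", len(rec)) + rec   # length-prefixed record
        if current and len(current) + len(framed) > max_block:
            blocks.append(current)                   # block is full; start a new one
            current = b""
        current += framed
    if current:
        blocks.append(current)
    return blocks

def unblock(blocks):
    """Recover the original records from a sequence of blocks."""
    records = []
    for blk in blocks:
        i = 0
        while i < len(blk):
            (n,) = struct.unpack_from(">H", blk, i)  # read the length header
            records.append(blk[i + 2 : i + 2 + n])
            i += 2 + n
    return records
```

Every record passes through this framing on the way to the network, which is why the computation and the extra copy of the data in core mattered so much on a small machine.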

The mechanism to access the network under TENEX is file oriented. This means byte-in (BIN) and byte-out (BOUT) must be used to communicate with another host. The basic timing of these two instructions (in the fast mode) is 120 us per byte to get the data onto or off of the network[3]. A d...
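The 120 us per byte figure quoted above implies a hard ceiling on single-connection throughput, independent of the network itself. A quick back-of-the-envelope check (the arithmetic below is ours, not from the RFC):

```python
# Upper bound on per-connection throughput implied by the RFC's
# 120 microsecond per-byte CPU cost for BIN/BOUT in fast mode.

US_PER_BYTE = 120                          # cost per byte, from the RFC

bytes_per_sec = 1_000_000 / US_PER_BYTE    # roughly 8,333 bytes/second
bits_per_sec = bytes_per_sec * 8           # roughly 66.7 kilobits/second

print(f"{bytes_per_sec:.0f} bytes/s, {bits_per_sec / 1000:.1f} kbit/s")
```

So even before any blocking overhead or page faulting, a byte-at-a-time transfer loop on this hardware could not move much more than about 8 K bytes per second.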