Chicago

With great relief I arrived at O’Hare airport, knowing that my next plane ride would bring me home. I drove out the I-88 toll road, past AT&T’s Bell Laboratories, to the little town of Warrenville. When I first moved to Warrenville in 1969, the town of a few thousand people was in the middle of farm country.

Robert Wilson, the legendary high energy physicist, had gathered many of the best experimental physicists in the world to 6,800 acres of prairie to make the largest accelerator ever built. Over the years, the accelerators they constructed became capable of smashing protons against antiprotons at the incredible energy of 1.8 trillion electron volts (TeV).

Collisions of subatomic particles at high energies help reveal information about quarks and leptons, the fundamental forms of matter. Acting like a giant microscope, the Fermilab accelerator used higher energies to create and discover particles unobservable at lower energies.

In the mid-1970s, the accelerator provided evidence of the existence of the upsilon, a subatomic particle composed of a bottom quark and an anti-bottom quark, a discovery that provided strong support for the Standard Model. In the 1990s, the lab was hot on the trail of the top quark, the only quark not yet observed experimentally.

When I moved to Warrenville, though, there was nothing but prairie where Fermilab now stands. My father and the other physicists did their work in a suburban office complex until they could start using houses in a subdivision on the site that had been purchased under eminent domain. Old farmhouses from the countryside were moved together into a cluster to house visiting scientists. The subdivision was painted bright primary colors, and a chef was brought in to staff the village cafeteria (although the food was lousy, at least there was a chef).

At that time, Fermilab was just beginning to coalesce. While everybody slaved away building the accelerator, Wilson continued to put in those unique features that made the lab such a special place. A herd of buffalo was brought in and a project was started in the middle of the 4-mile circumference main ring to restore it to the original prairie grasses. One of the lab buildings was even constructed as a geodesic dome, with the panels of the dome made out of panes of plastic, sandwiching thousands and thousands of aluminum cans, collected in a recycling program by the neighboring schools.

My visit to Fermilab on Friday had been set up over the past two months via electronic mail. Around Singapore I had received a message from a lab public relations engineer asking if we might make my visit a two-way exchange of information.

Of course, I wrote back, two-way is good. By the time I hit Amsterdam, I had another message from the lab wanting to know if I would be discussing the future of networking on my visit. With another 100 messages to plow through, I dashed off a hurried (but puzzled) acknowledgment.

Arriving at the lab, I saw that the little two-way exchange had blossomed into a full-blown public seminar with the title “The Future of Networking.” A nice topic, no doubt, but one best left to astrologers and marketing analysts.

I asked around to see if perhaps some physicist had spoken on “The Future of Cosmology” in a recent seminar, and perhaps I was just part of some series. No such luck, so I took advantage of the rule that states the one with the whiteboard gets to set the agenda and changed my topic to a more manageable one.

After my talk, I sat down with some of the Fermilab networking staff, including the managers of the U.S. High Energy Physics network (HEPnet). HEPnet ties physicists in dozens of countries to the major world physics laboratories, such as Fermilab, CERN in Geneva, NIKHEF in Holland, KEK in Japan, and many, many other sites.

HEPnet started as a physical network, a microwave link between Lawrence Berkeley Laboratory and the Stanford Linear Accelerator Center (SLAC). It was based on DECnet protocols, a natural choice considering the overwhelming use of PDP and VAX computers in the physics community.

The network quickly grew, using the big laboratories as hubs to run tail circuits to regional (and sometimes not so regional) research universities. CERN and Fermilab were two of the more aggressive hubs, bringing some of the first network lines into places like China, South America, and Eastern Europe.

Connecting the hubs together were the lines that made up the HEPnet backbone. While some of the tail sites were strictly DECnet, many of the backbone sites began using multiprotocol routers, letting HEPnet finance a part of the line and having the other user communities fund the rest.

Many of the universities at the ends of the tail circuits began joining the Internet, and the HEPnet lines became one of many possible paths, or often just a virtual network layered on some other service provider.

Gradually, HEPnet grew from a single-protocol network to a multiprotocol one, and from a physical network to a shared-line model. Shared lines made HEPnet little more than a convenient label for the bundle of resources that the high energy physics community was purchasing for its users.

In the U.S., HEPnet was partially swallowed up by a larger network run by the Department of Energy, the Energy Sciences network (ESnet). Rather than lease many lines between labs for individual communities, ESnet would furnish the backbone.

Although HEPnet relinquished the role of backbone, it still furnished quite a few tail sites, particularly medium-speed links to places that ESnet did not serve, such as Brazil, or to sites that needed special high-speed dedicated capacity, such as the physics departments of many large universities. In addition to managing the dedicated links, the HEPnet group was trying to move towards providing higher level services, such as running a mail gateway and starting video conferences.

In addition to the HEPnet WAN links, the Fermilab Computer Division had to manage a vast array of internal resources, including what was reputed to be the largest VAX Cluster in the world. Workstations were a dime a dozen, and large Amdahls provided local mainframe support.

The network architecture at Fermilab was originally, as might be expected of the DECophilic physicists, set up as a huge extended Ethernet running protocols like DECnet and LAT. Over time, though, UNIX-based workstations made TCP/IP an important part of the network.

Routers were introduced to segment the traffic into subnetworks. With some of the Local Area VAX Clusters (LAVCs) having as many as 60 workstations as clients, segmenting the network certainly made management easier.

The biggest concern, though, was how to continue migrating to higher and higher bandwidth LANs. Fermilab is the kind of place that will always be using the latest equipment in all sorts of custom configurations. To accommodate FDDI and any future LANs, the laboratory laid bundles of 48 single-mode and multimode fibers around the ring.

All that fiber around the lab meant that in addition to one or more lab backbones, the Computer Division could take a fiber to an experiment or workgroup, thereby creating rings on demand. Like many sites, Fermilab was trying hard to go through the exercise of laying fiber in a large area only once.


After a quick lunch, I went into the building housing the Linear Accelerator (LINAC), the first of the seven accelerators linked together at the laboratory. Behind racks and racks of electronic equipment, I found the office of Peter Lucas, one of the designers of the accelerator control system.

Controlling a series of accelerators involves monitoring and setting a huge number of devices on subsystems ranging from vacuum pumps to refrigeration units to superconducting magnets. In the case of Fermilab, the accelerator control system must handle over 45,000 data points, some of which operate at very high frequencies.

Most devices are managed by CAMAC crates, a venerable backplane architecture that can hold up to 22 cards per crate. CAMAC crates are controlled by serial links, which in turn plug into a computer, usually a PDP 11.

The computer can poll each of the cards on the crate to gather and set data. If a card needs to generate an interrupt, it can set a single “look at me” bit on the crate. The computer then polls each of the cards to find out which ones need attention. The basic control flow on the front end was thus a PDP 11 to a CAMAC crate to a card to the device being controlled (although the PDPs were scheduled to be swapped out at some point for µVAXen).
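
In rough terms (this is only a sketch in Python with invented names like Crate and service_crate, not the actual PDP 11 front-end software), that polling scheme looks something like this:

```python
# Hypothetical sketch of the front-end polling loop: the computer watches a
# CAMAC crate's single "look at me" (LAM) summary bit, and when it is raised,
# polls each card to find the ones that need attention.
from dataclasses import dataclass, field

@dataclass
class Card:
    slot: int                          # position in the crate (up to 22 slots)
    lam: bool = False                  # "look at me" flag raised by the card
    data: dict = field(default_factory=dict)

@dataclass
class Crate:
    cards: list

    def lam_set(self) -> bool:
        # the crate exposes only one summary bit for the whole set of cards
        return any(card.lam for card in self.cards)

def service_crate(crate: Crate):
    """One pass of the loop: if the crate's LAM bit is up, poll every slot."""
    if not crate.lam_set():
        return
    for card in crate.cards:
        if card.lam:
            handle(card)               # read or set the device behind the card
            card.lam = False           # acknowledge, clearing the flag

def handle(card: Card):
    print(f"servicing slot {card.slot}: {card.data}")

crate = Crate(cards=[Card(slot=1), Card(slot=2, lam=True, data={"temp": 4.2})])
service_crate(crate)                   # prints: servicing slot 2: {'temp': 4.2}
```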

In the early 1980s, when this system was first being put together, VAXen were just being released. A series of 11/785s, later supplemented by an 8650, acted as database servers, keeping track of information on each of the data points, such as which CAMAC crate and PDP that data point was accessible from.

The last part of the system was the µVAXen (originally PDPs) acting as consoles. The basic operation would be for a console to log a request for a certain data element at a certain frequency. The database server would define the elements and the PDP 11s were responsible for delivering the data.

So how to connect all these systems together? In 1986, when the current network was being designed, the great Ethernet versus Token Ring debate was raging. After extensive deliberations, the lab decided the predictable latency of the token ring was needed for the main ring control system.

DEC, of course, was firmly in the Ethernet camp, but lack of product support by the vendor doesn’t faze experimental physicists. They turned to their electrical engineers, who built token ring cards for the Unibus used by the PDPs and VAXen.

Running on top of the token ring, the lab put their own Accelerator Control network (ACnet). ACnet allows any console to monitor and manage any device, a capability that makes operators at other accelerator centers stuck with special purpose consoles green with envy.

ACnet is a protocol in which a console logs a single request, which can generate a stream of data. A request might be for a certain data item at 1 Hz, which would yield a stream of data spaced one per second. In addition to logged requests, certain events trigger alarms, sending the front-end PDP into the database to decide who should be notified.
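
As a toy model only, with invented names (FrontEnd, log_request, read_device) rather than anything from the real ACnet, the one-request-yields-a-stream idea can be sketched like this:

```python
# Toy model of the ACnet idea: a console logs one request for a data item
# at some frequency, and the front end then delivers a stream of readings
# at that rate. Names and structure are illustrative, not the lab's code.
import time
import random

class FrontEnd:
    def __init__(self):
        self.requests = []                 # (device, period_seconds, callback)

    def log_request(self, device, hz, callback):
        self.requests.append((device, 1.0 / hz, callback))

    def run(self, duration=3.0):
        start = time.time()
        next_due = {dev: start for dev, _, _ in self.requests}
        while time.time() - start < duration:
            now = time.time()
            for device, period, callback in self.requests:
                if now >= next_due[device]:
                    callback(device, read_device(device))
                    next_due[device] = now + period
            time.sleep(0.01)

def read_device(device):
    return random.random()                 # stand-in for a real CAMAC read

# A console asking for one item at 1 Hz gets readings spaced a second apart.
fe = FrontEnd()
fe.log_request("vacuum_pump_7", hz=1, callback=lambda dev, val: print(dev, val))
fe.run()
```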

The token ring choice worked, but it turned out that there really wasn’t that much difference between it and Ethernet. Not only that, DEC kept coming out with new µVAXen, each sporting a newly incompatible variant on the Q-bus. It was evident that Ethernet had to be incorporated into the network somehow.

An Ethernet was added and VAXstations started to pop up around the laboratory as consoles. To link the Ethernet with ACnet, the lab designed and wrote an Ethernet-to-token ring bridge running in software on one or more of the VAXen.
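
The core of such a bridge is a learning loop: remember which side each source address was last seen on, and copy a frame across when its destination lives on the other side. The sketch below is purely illustrative, assuming two interface objects with recv() and send() methods; it is not the lab's actual VAX bridge code.

```python
# Minimal learning-bridge sketch. The two interface objects and the frame's
# src/dst attributes are assumptions made for illustration.
def bridge(ethernet, token_ring):
    where = {}                              # source address -> interface seen on
    sides = [(ethernet, token_ring), (token_ring, ethernet)]
    while True:
        for this_side, other_side in sides:
            frame = this_side.recv()        # assume None if nothing is waiting
            if frame is None:
                continue
            where[frame.src] = this_side    # learn which side the sender lives on
            if where.get(frame.dst) is not this_side:
                other_side.send(frame)      # unknown or far-side destination: forward
```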

The ACnet was still token ring, but the addition of the bridge meant that authorized users from all over the lab had access to accelerator data. One of the VAXen was even set up with the X Window System, allowing X servers to access the data.

Since X runs just fine on top of the Internet, especially if you are communicating between two core sites, it is even possible to run the accelerator remotely. On one occasion, lab employees at a conference in Japan pulled up the accelerator control panels. To prevent any possibility of accelerator hackers, the lab set up the system so that remote terminals could only read data items.

This architecture proved to be quite flexible, easily expanding to meet the needs of the Tevatron and other newer subsystems. PDPs controlling CAMAC crates could easily be added if new data points needed to be controlled. Over time, CAMAC crates became a bit old-fashioned and newer devices simply had a VME crate directly on the token ring, eliminating the need for the front-end PDP.

Even the token ring decision turned out to have been wisely made. Networking at the lab grew like a prairie fire, with workstations going in as fast as physicists could unpack them. This is not a user community that will wait for official installers to unpack boxes. In fact, if pieces are missing, they are just as liable to manufacture a replacement themselves as they are to call up the shipper and complain.

After walking me through the architecture, Peter Lucas took me over to an X Window System console and showed me some of the 1,000 application packages that had been written for the accelerator.

All the screens were color coded, providing visual clues on which items in a complex form led to other screens and which items could be changed by the user. Clicking on a line item with the ID for a CAMAC crate, for example, would result in a graphic map of the crate, with each card and its ID depicted. Clicking on one of the cards would show the data items kept by the card and its current status.

All these programs were written at first for PDPs, in a day when that level of user interface was real, heavy-duty programming. These were the kind of programmers who battled their way through RSX, FORTRAN 2, and TECO, tools that make me shudder whenever I think of them.


When the superconducting magnets in the main ring generate proton-antiproton collisions at 1.8 TeV, there needs to be a way to collect data describing the event and store it away for further analysis. The detectors used in high energy physics are as complicated as the accelerator itself.

I drove around the main ring to D Zero (DØ), the quadrant of the ring containing a detector that had been under construction for over nine years. The instrument had consumed most of the time of 400 physicists, and the U.S. $50 million camera was almost ready to be inserted into the ring.

DØ was a six-story building, just large enough to house the detector. When it was operational, the detector would put out 100,000 data points per collision, and there would be a collision every 3.5 microseconds. With roughly 250 kbytes per collision, this results in a data rate of 75 Gbytes per second.

This is quite a bit of data, especially when you consider the fact that the experiment would run 24 hours per day for six to nine months at a time. One of the great hopes of the DØ detector was that it would provide experimental evidence of the existence of the top quark, the only one not yet empirically observed.

Top quarks are very rare, and it was quite likely that only one (or fewer) would be found per day. In a day of operation, the data acquisition subsystem on the detector thus had to somehow filter through 6.48 petabytes (6.48 million gigabytes) looking for the interesting 250,000 bytes.
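
Those figures are straightforward back-of-the-envelope arithmetic; the short calculation below just spells out the numbers quoted in the last few paragraphs:

```python
# Back-of-the-envelope check of the DØ data rates quoted above.
bytes_per_collision = 250_000            # roughly 250 kbytes per collision
seconds_per_collision = 3.5e-6           # a collision every 3.5 microseconds

raw_rate = bytes_per_collision / seconds_per_collision
print(f"raw rate ~ {raw_rate / 1e9:.0f} Gbytes/s")      # ~71 GB/s, i.e. roughly the 75 quoted

quoted_rate = 75e9                       # the rounded figure used in the text
per_day = quoted_rate * 24 * 3600
print(f"per day  ~ {per_day / 1e15:.2f} petabytes")     # ~6.48 PB, as quoted
```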

DØ was pandemonium. With the pins being pulled in a few days to roll the detector in, desks were littered with piles of empty styrofoam cups and Coke cans, and boards were everywhere in various stages of assembly or disassembly. A pager went off every few minutes, directing people to where they weren’t.

I found Dean Schaumberger, a professor from the State University of New York at Stony Brook, who agreed to give me a quick briefing. We walked through the data acquisition system, concentrating on the data and ignoring the physics. To decide which events to keep, an event mask provided the first level of filtering, allowing the physicists to home in on a few dozen attributes that characterize interesting events. Once the event mask was tripped, boards housed in 79 VME-based front-end crates would digitize the data on the detector. The goal of the mask was to reduce the data flow to 200 to 400 events per second.

Since it was possible that two interesting events could occur back to back, the front-end boards were double buffered. After the two buffers were full, though, events that tripped the mask had to be discarded until space became available again.
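
A toy version of that double buffering, with invented names, shows the essential behavior: two slots per board, and any event arriving while both are occupied is simply lost.

```python
# Toy double buffer for a front-end board: two slots, and events that arrive
# while both are occupied are discarded. Purely illustrative.
class DoubleBufferedBoard:
    def __init__(self):
        self.buffers = [None, None]

    def accept(self, event) -> bool:
        """Store the event if a buffer is free; return False if it is dropped."""
        for i, slot in enumerate(self.buffers):
            if slot is None:
                self.buffers[i] = event
                return True
        return False                       # both buffers full: event is lost

    def drain_one(self):
        """Simulate the readout emptying one occupied buffer."""
        for i, slot in enumerate(self.buffers):
            if slot is not None:
                self.buffers[i] = None
                return slot
        return None

board = DoubleBufferedBoard()
print(board.accept("event 1"), board.accept("event 2"), board.accept("event 3"))
# -> True True False: the third back-to-back event is discarded
```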

The 79 front-end crates were distributed among eight data cables, each cable providing a 32-bit wide data path running at 10 MHz, yielding a data rate of 40 Mbytes per second per cable. With all eight cables running in parallel, the data flow reached 320 Mbytes per second.
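
The cable figures are just width times clock rate:

```python
# Aggregate bandwidth of the eight readout cables quoted above.
bytes_per_word = 4                    # 32-bit wide data path
clock_hz = 10e6                       # running at 10 MHz
cables = 8
per_cable = bytes_per_word * clock_hz
print(f"{per_cable / 1e6:.0f} Mbytes/s per cable, "
      f"{cables * per_cable / 1e6:.0f} Mbytes/s total")   # 40 and 320
```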

The next level of analysis was provided by a farm of 50 µVAX 4000s, each machine running at about 12 times the rate of a VAX 11/780. Each µVAX had 8 custom boards, one for each of the 8 data cables. The data cables thus came out of the VME crates and snaked through each of the µVAXen, allowing any machine to collect all the data on one event.

The µVAXen, running the ELN real-time operating system, were also connected to a scheduling VAX, a machine that decided where on the farm there were available cycles to process the next event.

With the major pieces in place, Dean walked me through the data path. When the mask got tripped, data would go into the front-end boards. If the data bus was clear, the token generator (another µVAX at the end of the data bus) would issue a token with the current event number.

Each of the eight data cables would get a token. On any one of those cables, if a VME crate saw a token, it could capture the token and send as many bursts of data as it needed, releasing the token downstream when it was done to allow the next crate to dump its data. When all eight tokens got back to the token generator, the next event could proceed.
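
Stripped of the hardware details, the readout on one cable is a simple token-passing loop: the crate holding the token dumps its bursts for the current event, then releases the token downstream. The sketch below uses invented names and stands in for electronics; it is not software the lab actually ran.

```python
# Sketch of the per-cable readout: a token carrying the event number passes
# from crate to crate; each crate holding the token sends its bursts of data,
# then releases the token to the next crate downstream.
def read_out_cable(event_number, crates):
    """Collect all the data for one event from the crates on one cable."""
    token = {"event": event_number}
    bursts = []
    for crate in crates:                            # the token travels down the cable
        bursts.extend(crate.dump(token["event"]))   # crate sends as much as it needs
        # ...then releases the token to the next crate downstream
    return bursts

class FakeCrate:
    def __init__(self, name):
        self.name = name
    def dump(self, event):
        return [f"{self.name}: data for event {event}"]

# Eight such cables ran in parallel; only when all eight tokens returned
# could the next event be read out.
cable = [FakeCrate(f"crate {i}") for i in range(3)]
print(read_out_cable(42, cable))
```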

When the data reached a µVAX on the farm, it would undergo scrutiny by a variety of programs designed to filter interesting events from the subatomic chaff. The goal at this stage was to winnow down the data rate by another factor of 200 to 400.

Events that met the second level of filtering would emerge from the µVAX farm and go to a VAX 6000 to be spooled to tape. I saw a rack with 11 Exabyte tape drives on it, and Dean informed me that the experiment had well over 50.

This was not excessive when you consider that one event per second for six months would be close to 4 terabytes of data. Eventually, a physicist would spend up to a half hour with each interesting event, looking for clues. Some events that looked particularly promising were also spooled on the VAX 6000 for immediate analysis during the run.
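
That estimate checks out: at one 250-kbyte event per second, six months of running comes to just under four terabytes.

```python
# Rough check of the tape volume quoted above.
bytes_per_event = 250_000                 # ~250 kbytes per event
events_per_second = 1
seconds = 6 * 30 * 24 * 3600              # six months, roughly
total = bytes_per_event * events_per_second * seconds
print(f"~{total / 1e12:.1f} terabytes")   # ~3.9 TB, close to 4
```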


The next morning, I went with my father to see SciTech, a hands-on science museum for kids he had spent the last two years building. SciTech had managed to obtain a cavernous old post office in the middle of nearby Aurora.

The museum specialized in the very big and the very small, teaching kids about light, sound, astronomy, and even particle physics. Every exhibit was hands-on. To explain quarks, for example, SciTech was modifying slot machines, replacing lemons and cherries with rare quarks and adjusting the probabilities to match. Pulling the lever to create a particularly rare particle would result in a prize.

All around were prisms, tuning forks, echo chambers, computers, and dozens of other devices making up the exhibits. Signs were next to each exhibit and teenage “explainers” would walk around telling people what they were seeing. A group of Girl Scouts trundled down to the basement to help build a new exhibit. Upstairs, some six-year-olds were playing in a science club.

Starting a museum is never easy, and SciTech had managed to surprise everybody with its rapid growth. Still run on a shoestring, the museum was able to get enough grants, admissions, and members to keep expanding the exhibits and start filling up the huge old post office.

I admired the exhibits, then ran to catch my plane. O’Hare was not as bad as usual, so I had a little time to kill. Walking down the moving walkway, I came to a stop behind a little girl and her mother.

“Move over, honey,” the mother said, “this is an airport and the nice man has to hurry and catch his plane.”

The little girl smiled sweetly and moved over, just in time for me to get off the ramp and head into the nearby cocktail lounge.