J. Allaby, R. Billinge, R. Bock, J. Bunn, F. Dydak, T. Ericson, J. Ferguson, M. Hine, D. Jacobs, C. Jones, J. Sacton, J. Thresher, D. Williams, P. Zanella
CERN, Geneva, Switzerland. 1st January 1990

CERN Computing Infrastructure: Evolution and Status to end 1988

We are indebted to M. Hine, the principal author of this Report, for collating the available material. The aim of the Report is to present as complete a picture as possible of the informatics infrastructure at CERN at the end of 1988, with some historical information where change was significant, to provide a context for the projections and proposals of the report "Computing at CERN in the 1990s". The material is intended for internal reference within CN Division.

The material covers in principle most equipment on the CERN site, whether CERN's or visitors'. The details of visitor equipment are likely to be incomplete, and the cost and manpower figures refer in most cases only to CERN. The information is divided roughly according to the main natural functions in CERN:

Computer Centre
Networks
Computers in Accelerators
Computing for Engineering and Technical Support
Computers for Administration
Computers in major Experimental groups

Such a separation cannot be clear-cut: there is some overlapping and double counting, which is noted in the text, particularly where Technical Support or Administration use machines with other primary purposes.

Computer Centre

"IBM" Systems

Siemens (Fujitsu) 7890S (installed end-1984): 2 CPU, 64 MB main store, 64 channels
IBM 3090-600E (by end 1988; now a 400E, installed as a 200 end-1985): 6 CPU, 6 vector units, 256 MB main store + 256 MB expanded store, 96 channels

Disks (total capacity, mostly shared between machines):
  128 MB Comparex solid-state disc for VM paging
  205 GB on 5 different models, largely IBM

Tapes (shared between "IBM" and Cray machines):
  25 3420 conventional tape drives
  32 3480 cartridge drives

Mass store (being phased out; its disks are included above):
  233 GB of possible storage

Printers (in the Centre and elsewhere, all driven from IBM):
  3800/1 fanfold laser, 2 Xerox 4050 cut-sheet APA
  6670 in the Centre + 30 3812 APA in labs etc.
  12 3262 classic impact printers, almost all in labs etc.
  Kodak microfiche printer, 3 Versatec plotters (up to A0)

Communications controllers:
  2 3705 for Wylbur, X25, 19.2 kb/s RSCS
  1 3725 for 19.2 and 64 kb/s SNA
  2+ 8232 TCP/IP
  2 Interlink DEC/IBM
  1 3745 for up to 2 Mb/s
  12 7171 ASCII to "3270"
  8 327X local coax controllers

System capacity: Siemens 6.1 units/CPU; IBM 3090E (scalar) 6.6 units/CPU (vector advantage unknown)

  Total:
  1984      7880 + 3081K        2.4 + 2 x 2.5      =  7.4 units
  1985      7890S + 3081KX      2 x 6.1 + 2 x 2.9  = 18   units
  1986      7890S + 3090-200    2 x 6.1 + 2 x 6.2  = 24.6 units
  1987      7890S + 3090-400E   2 x 6.1 + 4 x 6.6  = 38.6 units
  end 1988  7890S + 3090-600E   2 x 6.1 + 6 x 6.6  = 51.8 units

Capital cost:
  Siemens 7890S: 10.3 MSf less 1.3 MSf sale of 7880 = 9 MSf
  IBM 3090-200: 12.5 MSf less 4.6 MSf sale of 3081 = 7.9 MSf
  IBM upgrade to 3090-600: 11 MSf
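The totals above are simple sums of CPU count times per-CPU rating. As an illustration only (a minimal Python sketch, not a CERN program; the variable names are ours), the capacity history can be recomputed from the figures in the table:

  # Illustrative only: recompute the Computer Centre capacity history
  # in CERN units (IBM 168 equivalents) from the table above.
  # Each entry is (machine, CPUs counted, units per CPU); the 7880 is
  # entered as a single 2.4-unit contribution, following the table.
  configs = {
      "1984":     [("7880", 1, 2.4), ("3081K", 2, 2.5)],
      "1985":     [("7890S", 2, 6.1), ("3081KX", 2, 2.9)],
      "1986":     [("7890S", 2, 6.1), ("3090-200", 2, 6.2)],
      "1987":     [("7890S", 2, 6.1), ("3090-400E", 4, 6.6)],
      "end 1988": [("7890S", 2, 6.1), ("3090-600E", 6, 6.6)],
  }
  for year, machines in configs.items():
      # total capacity = sum over machines of CPUs x units/CPU
      total = sum(cpus * units for _, cpus, units in machines)
      print(f"{year:>8}: {total:.1f} units")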

CRAY System

Cray X-MP/48 (installed March 1988): 4 CPU, scalar and vector, 64 MB main store, 1 GB extended memory, 4 I/O processors
VAX 8350 front-end station, 20 Mbytes, with connection to the VAX Cluster
Discs: 27 GB total capacity
Tapes: 3 on-line 3420, 6 on-line 3480 cartridge drives
System capacity: 8 units/CPU (vector advantage: perhaps x 2 overall later on)

  Total:
  1984  CDC 875         7 units
  1985  CDC 875         7 units
  1986  CDC 875         7 units
  1987  CDC gone        0 units
  1988  Cray X-MP/48   32 units + ?? vector advantage

DEC Systems

Central cluster of 6 VAX/VMS nodes with a MicroVAX-based central cluster controller:
  8650 + 8800 for general VMS service, 60 + 91 Mbytes memory
  8650 + 8700 for LEP Oracle database, 32 + 48 Mbytes
  8650 + 6210 (ex 11/785) for Euclid CAD/CAE service, 48 + 32 Mbytes
Disks: ~50 GB on 60 units
Tapes: 4 TU78 + 14 "3480"
Printing via the IBM service + local PostScript-compatible units + low-speed printing
Cluster capacity: about 10 units (general 4.5, Oracle 3.2, CAD/CAE 2.3)
Started in 1983/84 with LEP Oracle and CAD/CAE; the VMS machines were added in 1986/87

Other Services and Machines in the Centre

IBM: 43XX EARN node with 3275
DEC:
  8530 Unix (Ultrix) PRIAM programme development system with 4 VAXstations
  11/750 Unix (BSD 4.2) MINT mail gateway
  MicroVAX II GIFT file transfer gateway
  11/750 ALL-IN-ONE system for ST Division for LEP work
  MicroVAX II for network management
  MicroVAX II DECnet gateway to Europe and the USA
  Local cluster with MicroVAX II and 10 VAXstations for the Centre
  Local cluster with VAXserver and 6 VAXstations for network development
  Training: 4 VAXstations

Usage

IBM VM/CMS Service (average weekly figures; batch CPU in IBM 168 hours)

              CPU hrs.   Users    Jobs    Tapes + cart.
  Sept. 1985     300       280      350
  Sept. 1986    1000      1000     4700       1800
  Sept. 1987    1450      1600     8900       2800
  Sept. 1988    3100      2200    18700       6000

IBM MVS Service (average weekly figures; CPU in IBM 168 hours)

              CPU hrs.   Users    Jobs    Tapes + cart.
  Sept. 1985    1100      1150    32000       6500
  Sept. 1986    1800       900    28000       6500
  Sept. 1987    1000       500    12500       3500
  Sept. 1988     250       180     5700       1000

CDC/Cray (average monthly figures; CPU in IBM 168 hours)

              CPU hrs.   Users    Jobs    Tapes + cart.
  Sept. 1985    1000       350     9500        900
  Sept. 1986    1100       400     7000       1500
  Sept. 1988    4800

DEC: at end 1988 the general-purpose VAXs were fully loaded.

Computer Centre Costs and Manpower

Costs:
  IBM systems: 28 MSf capital
  Cray/CDC systems: 3 MSf/yr
Manpower:
  DD staff at all levels: 22 operators, 10 management and technical, 22 contract staff
  15 user consultancy and programme library
  17 software, accounting, etc.: 10 IBM, 2.5 Cray, 4 VMS + 1 Ultrix

Networks

Three site-wide networks, plus Gateways, Public Network access, Leased lines, Terminal connections and Telephone

CERNET

18 Modcomp switching nodes, mainly in the Computer Centre, connected in a mesh by 2.5 and 8 Mb/s links covering the whole site (not LEP)
90 connected systems with CERNET protocol software
File-transfer speeds in the 15-50 kB/s range, depending on the end systems and the network load
Served as backbone for low-rate Ethernet traffic until end 1988
Costs and manpower: ~4 MSf capital, 300 kSf/year maintenance; ~50 man-years total DD effort over 10 years, less than 1 person today
Ten years old, nearing retirement

Ethernet and Backbone

Some 20 local Ethernets in labs, offices and experiments, interconnected via MAC-level bridges:
  CERNET with Frigate bridges until end 1988
  Back-to-back Frigates with G.703 links or CERNET
  Fibre-optic or G.703 links between commercial bridges
All interconnections go via two linked backbone Ethernets in the Computer Centre
More than 800 attached users
Costs: ~2.5 MSf capital

X25

X25 network with 7 Camtec switches (one in the SPS North Area), plus various X29 interfaces to Telepac and to leased lines
Access speeds up to 64 kb/s; link speeds 64 kb/s; aggregate capacity ~2 Mb/s
Connections to 9 external leased lines (2 via the RAL and FNAL switches)
Costs: about 0.5 MSf capital; CERN Telepac bill 60 kSf/year

Network Services

DECNET: more than 300 nodes on site, access to >5000 off site
TCP/IP: 12 interconnected logical networks over Ethernet and Token Rings; 2 for Apollo, 2 for Cray access, the remainder for accelerator applications; about 700 connections, mainly PCs
SNA: links to ETHZ, Saclay, IN2P3-Lyon, ...
MAIL: connections to most known systems via MINT gateways
File-transfer conversion gateway, GIFT, between Coloured Book, CERNET and DECnet protocols
Swiss nodes of EARN and EUNET
Capital costs: about 1 MSf

Terminal Connections

INDEX: 8 switches with 2600 terminals connected and connections to 1100 hosts
  Capital costs: about 11 MSf for switches and cables
Concentrators on Ethernet: more than 150 boxes with 1300 lines
  Cost: about 1 MSf

Leased Lines

Lines shown are either in place or ordered. They are either reserved for HEP, or for EARN on the CEARN machine.

HEP lines:

  Destination       Speed           Protocols
  Saclay (DPhPE)    64 kb/s         2-split: X25, SNA-RSCS
  Lyon (IN2P3)      64 kb/s         3-split: Index, X25, SNA
  Annecy (LAPP)     2 x 9.6 kb/s    X25, RSCS
  (the above lines make up the French Phynet X25 backbone)
  Bologna (INFN)    2 x 9.6 kb/s    X25, DECnet to INFNET
  Uni. Geneva       64 kb/s         X25
  ETH Zurich        64 kb/s         SNA for L3
  Zurich (SIN)      64 kb/s         X25 to CHADNET
  RAL (HEP)         64 kb/s         2-split: X25 to JANET, RSCS
  CIEMAT (Madrid)   9.6 kb/s        X25 to IRIS
  MIT               16 kb/s         X25 for L3
  Fermilab          64 kb/s         X25 to ESNET
  NIKHEF-M          64 kb/s         X25 for HEP and EUNET
  Montpellier       64 kb/s         IBM line for EASI and EARN

EARN lines to the CEARN machine in Bldg. 513:

  RAL               9.6 kb/s
  Montpellier       9.6 kb/s
  Stockholm         9.6 kb/s
  Darmstadt         9.6 kb/s
  Geneva Uni.       9.6 kb/s
  Geneva Hospital   9.6 kb/s
  Berne Uni.        9.6 kb/s
  Zurich Uni.       9.6 kb/s
  Neuchatel Uni.    9.6 kb/s
  Lausanne Uni.     9.6 kb/s
  Linz              9.6 kb/s
  RAL (2nd line)    64 kb/s         for OSI migration

Costs: all HEP lines are paid by the distant sites; the CERN share of EARN and EUNET costs is ~40 kSf/yr each.

Telephone, Telex Service

Hasler lines: 5100
STK lines: ~500
Telex and FAX machines: 46
PTT bills: ~370,000 SFr/month
Costs: Hasler rented; STK so far 2 MSf
Manpower: Hasler and STK ~15 operators, 8 technical and administrative

Manpower for Networking

Figures exclude the impact of networking on all programmers.
CERNET: less than 1 person today
BACKBONE ETHERNET: central Ethernet development, ~5 DD staff
X25: 2 DD staff, incl. the CERN share of leased lines
LOCAL ETHERNET and TERMINAL SERVICES: installation and maintenance, 12 DD staff plus some effort in other divisions
NETWORK SERVICES: 10 DD staff plus a small effort outside

Computers in Accelerators

PS

PS Control System: 2 Nord 570 + 30 Nord 100, controlling ~250 CAMAC crates; 10 consoles; TITN network
Linac/LEAR Control System: MicroVAX II + 5 VAXstations + PDPs; Ethernet
Development and services: VAXserver + 9 VAXstations on Ethernet; ~70 IBM PCs/compatibles for accelerator calculations, CAE and secretarial work; ~20 Macs for CAMAC control; ~50 old HP desk machines, little used
Administration: 2 Nord 100 for NOTIS, plus some of the above PCs
SC Control System and services: 12-crate CAMAC system with 6 x 68000 microprocessors

SPS & LEP

Control systems are integrated in several respects, with one control room.
2 Nord 570 + ~80 Nord 100 controlling SPS equipment, in labs and in the SPS experimental areas
~50 PC AT/386 + VME "process computers" for LEP equipment, with 50 more PCs as local consoles in the ring
~20 Apollos as Control Room consoles; the Apollos and PCs all run Unix with TCP/IP communications
Development and services: ~30 PC + VME in labs; 4 IBM PC/RT in the Control Room and labs; ~120 other PCs of various types in labs and offices
Networks: SPS has 80 hosts on 6 interconnected TITN stars; LEP has 2 IBM Token Rings round the tunnel, 1 in the Control Room, 1 round the SPS, 1 lab ring, and 2 as links to DD and ST

Computing for Engineering and Technical Support

Microprocessor Support

Priam VAX 8530 in the Centre, running Ultrix
Users: 500
Support: ~4 in DD

Databases

Oracle is installed on:
  VM and the central VAX cluster, for general use; ~200 user accounts
  2 VAX for LEP; ~40 LEP/SPS application designers, 400 user accounts
  2 VAX for mechanical CAD/CAM (all of the above in the Computer Centre)
  10 MicroVAX, 1 Sun, 2 Apollo, several PCs etc. in the Divisions
Oracle is also accessible from many VAXstations, Apollos, Suns and PCs.
Support manpower: 6 in DD; ~3 in other Divisions

Engineering Design

Mechanical CAD: Euclid 3-D runs on 30 workstations on the 2 VAX for CAD; AutoCAD 2-D runs on ~35 "IBM" PCs
  Users: ~50 engineers and >100 designers
  Support manpower: 4 in DD
Mechanical CAE: ANSYS, CASTEM, etc., on VM
  Users: ~30 engineers and designers
  Support manpower: 2 in other Divisions
Electronic CAD/CAE: ~7 Apollos, 90 PCs, part of the central VAX, Priam
  Users: ~200 at various levels
  Support: ~8, half in DD
Electromagnetic fields: TOSCA etc. on VM
  Users: ~50, self-supporting

Computers for Administration and Information

Central ADP System

  197X   IBM 360-50   DOS
  1980   IBM 360-50   DOS
  1981   IBM 4331-2   DOS-VSE
  1983-  IBM 4361-5   DOS-VSE under VM/SP, with 7 GB of disk

Networks: ~120 327X terminals on coax, half local, half remote; 64 dedicated ASCII lines via a 7171 controller; general access via CERNVM, Passthrough on a 64 kb/s line
Corporate databases: Financial 500 MB; Purchasing, Stores 200, 100 MB; Personnel 160 MB
Capital cost of present system: ~2.5 MSf
Present staffing level: 10 professionals, 5 operators

Computers in major Experiments

The listing covers some machines in physics development labs at CERN as well as in the experiments proper; there may be some duplication with other sections. In any case, the configurations vary from month to month as the experiments evolve. The capacity of systems is given in CERN units (= IBM 168 equivalents), excluding workstations (VAXstations, Apollos). Costs are based on typical CERN costs today.
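Stated compactly (a restatement of the convention just described; the symbols are ours, not the report's), the capacity quoted for each system is

  \[ C_{\mathrm{total}} = \sum_{i} n_i \, u_i \]

where \(n_i\) is the number of CPUs of machine type \(i\) in the system, \(u_i\) is its rating in CERN units (IBM 168 equivalents) per CPU, and workstations (VAXstations, Apollos) are left out of the sum.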

Aleph

VAX: 2 x 8200, 8700, 8250 in "on-line" cluster
  2 x 11/750 for test beam and TPC tests
  8800, 8200, 11/750, 5 x MicroVAX in labs
VAXstations: 12 in LA cluster; 5 in Aleph-Wisconsin cluster
Total capacity (VAXstations excluded): about 7 units; cost: roughly 5 MSf
25 DECservers, 4 Ethernets, 2 LANbridges

Delphi

VAX: 8700, 12 x MicroVAX II, VAXserver 3500 + 6 x VAXstations in on-line cluster, with 6 Gbytes of disk space
4 x 3081E on-line
2 x 6210, 8200, 2 x MicroVAX II, 36 VAXstations, Megatec, 11 Gbytes of disk space for off-line
2 x 11/750, Nord 570/CX general purpose, 2 x N100 in test beams
Total capacity (VAXstations excluded): about 13 units; cost: roughly 5 MSf

L3

VAX: 8800, 5 x 11/750, with 4 x MicroVAX, 18 x VAXstations; 7 tapes, 7 disks
5 x 3081E on Fastbus on-line
11 x Apollos; 3090-180E with 2 attached 3081E off-line
20 x DECservers, LANbridges, etc.
56 kb/s link to MIT
Total capacity (VAXstations and Apollos excluded): about 16 units

Opal

VAX: 8700, 8 x MicroVAX/server; 14 VAXstations
5 x 370E emulators on VME
IBM 3084Q 4-processor system
12 x Apollo, 13 x Mac, Wang WP system
Total capacity (incl. IBM, excl. VAXstations and Apollos): about 16 units
Cost: 4 MSf + IBM

UA1

IBM 9375-60 + 12 x 3081E, 4 x 3480 tapes, 56 VME/68000 boxes in the experiment
Nord: N100/500, N100 experiment backup, 6 x 6250 bpi tape drives; 2 x N100 in test beam and labs
34 x Mac, 20 x Caviar for monitoring, test and development

Development Labs

DD: OC Development Group:
  VAX 8350, MicroVAX for Fastbus
  2 x MicroVAX + 19 x VAXstations in cluster for software development
  4 x MicroVAX for test and loan
  Nord 550/CX software support
  Caviar pool, ~70 Valet+ widely in CERN
ED Emulator Group: 3 x 3081E from the pilot project, at present on the LEP 3090
EF: equipment development groups: VAX 8200, 6 x MicroVAX, 40 x "IBM" PC, 30 x Mac
EP: VAX 8300, VAX 11/750, 6 x MicroVAX; 115 Caviars distributed widely in CERN, incl. 30 in UA1