Linux at CERN
1.1. Introduction & Summary
1.2. History
CERN has been using Linux since at least 1995, when individual physicist
groups started using it. Since 1997, CERN has had a centrally supported
Linux release, initially based on Red Hat 4, with all the CERN tools and
environments (AFS, ASIS, SUE) available as on the other vendors' Unices.
Regular releases have followed the evolution of Linux in the outside world,
and CERN has actively contributed to its development (Gigabit drivers,
porting GNU libc to IA64) where its needs required.
a)Batch farms
A large part of the Linux systems installed at CERN is used for computing
in large batch farms running LSF, such as LXBATCH. Today, the typical
physicist's computing job fits comfortably within a single dual-processor
IA32 machine, so the farm machines process jobs independently, without the
need for single-system-image abstractions or MPI. This independence
between (a large number of) jobs running over extended periods of time has
given rise to the term high-throughput computing (HTC). Dual-processor
machines with commodity hardware and off-the-shelf networking offer a sweet
spot in price per CPU performance for this kind of workload, and Linux
supports this type of machine very well.
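The HTC model above can be sketched as a simple submission loop: each physics job is handed to the batch system as an independent unit, with no communication between jobs. This is a minimal illustration only; the queue name, job names and reconstruction script below are invented, and the `bsub` commands are echoed rather than executed so the sketch can be read without an LSF installation.

```shell
# Hypothetical sketch of independent HTC job submission to an LSF farm.
# In production the bsub commands would be executed; here they are echoed.
submit_jobs() {
    local queue="$1"; shift
    local run
    for run in "$@"; do
        # Each job is self-contained: one run number, one machine, no MPI.
        echo bsub -q "$queue" -J "reco_$run" ./reconstruct.sh "$run"
    done
}

# Submit three independent reconstruction jobs (run numbers are made up).
submit_jobs 1nd 1001 1002 1003
```

Because the jobs share no state, the farm scales by simply adding more dual-processor nodes; the scheduler never needs to co-allocate machines.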
A smaller interactive cluster running SUN Solaris allows code to be
validated against a different compiler.
b)Special Servers
Similar to the batch farms, commodity hardware running Linux can deliver
significant cost benefits over proprietary solutions for storage or
special-purpose server applications. At CERN, a combination of
dual-processor IA32 machines with (cheap) hardware RAID cards and IDE disk
drives has provided reliable disk storage in the form of the "Disk Server".
Similarly, tape drives are attached directly via SCSI or FC to Linux "Tape
Servers". Applications access the data via the SHIFT architecture or
through CERN's mass storage application CASTOR.
Other special-purpose servers also run Linux, for example ORACLE database
servers, AFS or NFS file servers, and CERN's DNS service. Linux is used
here in parallel with SUN Solaris, and is typically preferred as soon as
the number of systems grows. Solaris typically runs at CERN on more
reliable hardware, for services where high availability of individual
machines is required.
c)Desktops
d)Embedded appliances
As mentioned earlier, the goal is to use a single Linux release to cover
all aspects of Linux use at CERN, both to meet user expectations and to
keep the support effort down. No commercial distribution fulfills all of
CERN's needs and runs on all the hardware found at CERN, so CERN has been
providing a modified version of Red Hat. The modifications include
additional kernel patches, new software such as OpenAFS, and CERN-specific
physics software and management applications.
Whenever the goal of having a uniform system cannot be met (e.g. due to
new bugs being discovered or incompatible hardware, both of which can
trigger a kernel update), such deviations are noted and are folded into the
next release.
At CERN, support for Linux is handled at multiple levels and in different
groups:
• Users with several machines (like the CERN-IT farms) typically have
dedicated local support for the day-to-day running of their applications.
• Large user groups (experiments, divisions) also have local support to
handle direct user questions.
• CERN-IT offers centralized support:
  • The CERN Helpdesk takes calls and re-routes them appropriately. This
  level is handled by an external company.
  • A second level deals with recurring user problems and gives individual
  assistance to users, e.g. with desktop installations. This level is also
  handled by an external company.
  • An in-house third level handles everything else, including the
  preparation of new releases, kernel bug fixes, workarounds to common
  problems, assistance to farm operations, and documentation.
  • Eventually, support calls may be opened with a vendor. Given that CERN
  has no support contract for the majority of its Linux machines, these
  calls are typically used to inform the Linux user community and may not
  be resolved for considerable time.
1.6. Outlook
The current assumptions about physicists' jobs still seem to hold, so the
current computing model is likely to remain useful. The number of CPU
nodes in batch farms will therefore increase to cope with the massive
amount of computational power required for the LHC. Similarly, the online
event filter farms will grow massively. This growth requires new farm
management tools to prevent operational costs from exploding; such tools
are currently under development as part of the "fabrics" efforts of EDG,
LCG and EGEE.
The various GRID projects bring new challenges, both technical (new
services to be defined and implemented) and political (interoperability
between sites, perhaps leading to a HEP-wide Linux distribution; access to
remote resources).
Lastly, the Linux world itself is changing: vendors like Red Hat or SuSE
are now concentrating on the profitable parts of their business, making
life harder for copycat distributions such as CERN Linux, which in the
past have profited from "free" software updates. Third-party hardware and
software vendors like ORACLE, IBM and SUN have all embraced Linux and are
offering commercial support. Meanwhile, the large (non-commercial) user
and developer community keeps adding features (the 2.6 kernel is expected
soon), often in an uncoordinated fashion, and creates new software as
often as it abandons older products.