The Laboratory Computer: A Practical Guide for Physiologists and Neuroscientists
John Dempster
About this ebook

The Laboratory Computer: A Practical Guide for Physiologists and Neuroscientists introduces the reader to both the basic principles and the actual practice of recording physiological signals using the computer.

It describes the basic operation of the computer, the types of transducers used to measure physical quantities such as temperature and pressure, how these signals are amplified and converted into digital form, and the mathematical analysis techniques that can then be applied. It is aimed at the physiologist or neuroscientist using modern computer data acquisition systems in the laboratory, providing both an understanding of how such systems work and a guide to their purchase and implementation.

  • The key facts and concepts that are vital for the effective use of computer data acquisition systems
  • A unique overview of the commonly available laboratory hardware and software, including both commercial and free software
  • A practical guide to designing one's own or choosing commercial data acquisition hardware and software
Language: English
Release date: Jul 2, 2001
ISBN: 9780080521558



    CHAPTER ONE

    Introduction

    The computer now plays a central role in the laboratory, as a means of acquiring experimental data, analysing that data, and controlling the progress of experiments. An understanding of the computer, and of the principles by which experimental data are digitised, has become an essential part of the (ever-lengthening) skill set of the researcher. This book provides an introduction to the principles and practical application of computer-based data acquisition systems in the physiological sciences. The aim here is to provide a coherent view of the methodology, drawing together material from disparate sources, usually found in highly compressed form in the methods sections of scientific papers, short technical articles, or in manufacturers’ product notes.

    An emphasis is placed on both principles and practice. An understanding of the principles by which the physiological systems one is studying are measured is necessary to avoid error through the introduction of artefacts into the recorded data. A similar appreciation of the theoretical basis of any analysis methods employed is also required. Throughout the text, reference is therefore made to the key papers that underpin the development of the measurement and analysis methodologies being discussed. At the same time, it is important to have concrete examples and to know, in purely practical terms, where such data acquisition hardware and software can be obtained, and what is involved in using it in the laboratory. The main commercially available hardware and software packages used in this field are therefore discussed, along with their capabilities and limitations. In all cases, the supplier’s postal and website addresses are given. A significant amount of public domain, or ‘freeware’, software is also available and the reader’s attention is drawn to the role that this kind of software plays in research.

    Physiology – the study of bodily function and particularly how the internal state is regulated – can, more than any other of the life sciences, be considered a study of signals. A physiological signal is a time-varying change in some property of a physiological system, at the cellular, tissue or whole-animal level. Many such signals are electrical in nature – cell membrane potential and current, for instance – or chemical, such as intracellular ion concentrations (H+, Ca2+). But almost any of the fundamental physical variables – temperature, force, pressure, light intensity – finds some physiological role. Records of such signals provide the raw material from which an understanding of body function is constructed, with advances in physiology often closely associated with improved measurement techniques. Physiologists, and particularly electrophysiologists, have always been ready to exploit new measurement and recording technology, and computer-based data acquisition is no exception.

    1.1 THE RISE OF THE LABORATORY COMPUTER

    Computers first started to be used in the laboratory about 45 years ago, about 10 years after the first digital computer, the ENIAC (Electronic Numerical Integrator And Calculator), had gone into operation at the University of Pennsylvania. Initially, these machines were very large, room-size devices, seen exclusively as calculating machines. However, by the mid-1950s laboratory applications were becoming conceivable. Interestingly enough, the earliest of these applications was in the physiological (or at least psychophysiological) field. The Whirlwind system developed by Kenneth Olsen and others at the Massachusetts Institute of Technology, with primitive cathode ray tube (CRT) display systems, was used for studies into the visual perception of patterns associated with the air defence project that lay behind the funding of the computer (Green et al., 1959). The Whirlwind was of course still a huge device, powered by vacuum tubes, and reputed to dim the lights of Cambridge, Massachusetts, when operated, but the basic principles of modern laboratory computing could be discerned. It was a system, controlled by the experimenter, acquiring data in real time from an experimental subject and displaying results in a dynamic way.

    Olsen went on to found Digital Equipment Corporation (DEC), which pioneered the development of the minicomputer. Taking advantage of the developments in integrated circuit technology in the 1960s, minicomputers were much smaller and cheaper (although slower) than the mainframe computer of the time. While a mainframe, designed for maximum performance and storage capacity, occupied a large room and required specialised air conditioning and other support, a minicomputer took up little more space than a filing cabinet and could operate in the normal laboratory environment. Clark & Molnar (1964) describe the LINC (Laboratory INstrument Computer), a typical paper-tape-driven system of that time (magnetic disc drives were still the province of the mainframe). However, it could digitise experimental signals, generate stimuli, and display results on a CRT. The DEC PDP-8 (Programmable Data Processor) minicomputer was the first to go into widespread commercial production, and a variant of it, the LINC-8, was designed specifically for laboratory use. The PDP-8 became a mainstay of laboratory computing throughout the 1960s, being replaced by the even more successful PDP-11 series in the 1970s.

    Although the minicomputer made the use of a dedicated computer within the experimental laboratory feasible, it was still costly compared to conventional laboratory recording devices such as paper chart recorders. Consequently, applications were restricted to areas where a strong justification for their use could be made. One area where a case could be made was in the clinical field, and systems for the computer-based analysis of electrocardiograms and electroencephalograms began to appear (e.g. Stark et al., 1964). Electrophysiological research was another area where the rapid acquisition and analysis of signals could be seen to be beneficial. H.K. Hartline was one of the earliest to apply the computer to physiological experimentation, using it to record the frequency of nerve firing of the Limulus (horseshoe crab) eye in response to a variety of computer-generated light stimuli (see Schonfeld, 1964, for a review).

    By the early 1980s most well-equipped electrophysiological laboratories could boast at least one minicomputer. Applications had arisen, such as the spectral analysis of ionic current fluctuations or the analysis of single ion channel currents, that could only be successfully handled using computer methods. Specialised software for these applications was being developed by a number of groups (e.g. D’Agrosa & Marlinghaus, 1975; Black et al., 1976; Colquhoun & Sigworth, 1995; Dempster, 1985; Re & Di Sarra, 1988). The utility of this kind of software was becoming widely recognised, but it was also becoming obvious that its production was difficult and time consuming. Because of this, software was often exchanged informally between laboratories which had existing links with the software developer or had been attracted by demonstrations at scientific meetings. Nevertheless, the cost of minicomputer technology right up to its obsolescence in the late 1980s prevented it from replacing the bulk of conventional laboratory recording devices.

    Real change started to occur with the development of the microprocessor – a complete computer central processing unit on a single integrated circuit chip – by Intel Corp. in the early 1970s. As with the minicomputer in its day, the first microprocessor-based computers were substantially slower than contemporary minicomputers, but their order-of-magnitude lower cost opened up a host of new opportunities for their use. New companies appeared to exploit the new technology, and computers such as the Apple II and the Commodore PET began to appear in the laboratory (examples of their use can be found in Kerkut, 1985; or Mize, 1985). Not only that; computers had become affordable to individuals for the first time, and they began to appear in the home and small office. The era of the personal computer had begun.

    As integrated circuit technology improved, it became possible to cram more and more transistors on to each silicon chip. Over the past 25 years this has led to a constant improvement in computing power and reduction in cost. Initially, each new personal computer was based on a different design. Software written for one computer could not be expected to run on another. As the industry matured, standardisation began to be introduced, first with the CP/M operating system and then with the development of the IBM (International Business Machines) Personal Computer in 1981. IBM being the world’s largest computer manufacturer at the time, the IBM PC became a de facto standard, with many other manufacturers copying its design and producing IBM PC-compatible computers or ‘clones’. Equally important was the appearance of the Apple Macintosh in 1984, the first widely available computer with a graphical user interface (GUI), which used the mouse as a pointing device. Until the introduction of the Macintosh, using a computer meant learning its operating system’s command language, a significant disincentive to many. The Macintosh, on the other hand, could be operated by selecting options from a series of menus using its mouse or directly manipulating ‘icons’ representing computer programs and data files on the screen. Thus while the microprocessor made the personal computer affordable to all, the graphical user interface made it usable by all. By the 1990s, the GUI paradigm for operating a computer had become near universal, having been adopted on the IBM PC family of computers, in the form of Microsoft’s Windows operating system. Figure 1.1 summarises these developments.

    Figure 1.1 Laboratory computers over the past 50 years.

    The last decade has seen ever-broadening application of the personal computer, not simply in the laboratory, but in society in general, in the office and in the home. The standardisation of computer systems has also shifted power away from hardware manufacturers to software suppliers. The influence of hardware suppliers such as IBM and DEC, who dominated the market in the 1970s and 80s, has waned, to be replaced by the software supplier Microsoft, which supplies the operating systems for 90% of all computers. Currently, the IBM PC family dominates the computer market, with over 90% of systems running one of Microsoft’s Windows operating systems. Apple, although it has a much smaller share of the market (9%), still plays a significant role, particularly in terms of innovation. The Apple Macintosh remains a popular choice as a laboratory computer in a number of fields, notably molecular biology.

    Most significantly from the perspective of laboratory computing, the computer has now become the standard means for recording and analysing experimental data. The falling cost of microprocessor-based digital technology has continued to such an extent that it is now usually the most cost-effective means of recording experimental signals. Conventional analogue recording devices with mechanical components, paper chart recorders for instance, have always required specialist high-precision engineering. Digital technology, on the other hand, can be readily mass-produced, once initial design problems have been solved. When this is combined with the measurement and analysis capabilities that the computer provides, the case for using digital technology becomes almost unassailable. Thus while we will no doubt see conventional instrumentation in the laboratory for a long time to come, as such devices wear out, their replacements are likely to be digital in nature.

    Since the computer lies at the heart of the data acquisition system, an appreciation of the key factors that affect its performance is important. Chapter 2 (The Personal Computer) therefore covers the basic principles of computer operation and the key hardware and software features in the modern personal computer. The three main computer families in common use in the laboratory – IBM PC, Apple Macintosh, Sun Microsystems or Silicon Graphics workstations – are compared, along with the respective operating system software necessary to use them. The capabilities of various fixed and removable disc storage technologies are compared, in terms of capacity, rate of data transfer and suitability as a means of long-term archival storage.

    1.2 THE DATA ACQUISITION SYSTEM

    There are four key components to a computer-based data acquisition system that need to be considered:

    • Transducer(s)

    • Signal conditioning

    • Data storage system

    • Data acquisition and analysis software

    As illustrated in Fig. 1.2, they form a chain carrying experimental information from the tissue under study towards its ultimate storage and analysis.

    Figure 1.2 Main components of a computer-based data acquisition system. Physiological signals are measured by a transducer, amplified and filtered by the signal conditioning system, digitised by the A/D converter (ADC) and stored on magnetic disc. The process is controlled by the data acquisition and analysis software.

    Most recording devices, whether analogue or digital, record electrical voltages. The first stage in the data acquisition process is therefore to convert the physical quantity being measured into a voltage signal using a transducer – a generic term for a device which converts energy from one form into another, electrical in this case. (The terms sensor and detector are also used.) An appropriate transducer is required for each type of experimental variable being studied. In the case of bio-electrical signals, some form of specialised electrode is required to pick up the signal, effectively playing the role of the transducer (although no actual transduction is taking place).

    The electrical voltage produced by most transducers is usually quite small, in the order of a few millivolts. Bio-electrical signals are similarly small, 150 mV at most and sometimes less than 20 μV. Such signals must be amplified significantly if they are to be recorded without loss of quality, or measured accurately. Amplification of the transducer signal to match the requirements of the recording device is known as the signal conditioning stage of data acquisition. Signal conditioning encompasses all the operations – amplification, low- or high-pass filtering, etc. – necessary to make the signal suitable for recording by the data storage device. Some transducers require additional support in the form of an excitation voltage supply, and the signal conditioner provides this too.

    The data storage device makes a permanent record of the conditioned transducer signals. In the context of the systems discussed in this book, this is a personal computer (but more generally could also be a paper chart or magnetic tape recorder). Transducers produce analogue output signals – continuous electrical voltages proportional to the physical variable being measured. Computers, on the other hand, store information in the digital form of binary numbers. Analogue signals must therefore be digitised for storage on a computer system. An analogue-to-digital converter (ADC), in essence a computer-controlled voltmeter, is used to measure (sample) the analogue voltage at regular intervals, producing an integer number proportional to the voltage, which can be stored in computer memory. By this means, analogue signals are converted into a series of numbers which are then stored on the computer’s magnetic disc. Facilities for analogue-to-digital (A/D) conversion and its converse, digital-to-analogue (D/A) conversion, used to generate stimulus waveforms, are typically provided by a combined laboratory interface unit installed within or attached to the computer. Finally, computer software is required to control the digitisation process, display incoming signals and manage the storage of the data on disc. Furthermore, on completion of the experiment, more software is required to allow the inspection and analysis of the stored data.
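
    In purely illustrative terms, the mapping between an analogue voltage and its integer ADC code can be sketched in a few lines of code. The sketch below is not taken from the book; the ±5 V input range, 12-bit resolution and function names (adc_sample, dac_voltage) are assumptions chosen for the example.

```python
# Minimal sketch of analogue-to-digital conversion (assumed +/-5 V input, 12-bit ADC).
V_MIN, V_MAX = -5.0, 5.0        # assumed converter input range
N_BITS = 12                     # 12 bits -> 4096 quantisation levels
N_LEVELS = 2 ** N_BITS

def adc_sample(voltage: float) -> int:
    """Convert an analogue voltage to the nearest integer ADC code."""
    voltage = min(max(voltage, V_MIN), V_MAX)        # clip to the input range
    fraction = (voltage - V_MIN) / (V_MAX - V_MIN)   # 0.0 .. 1.0
    return int(round(fraction * (N_LEVELS - 1)))

def dac_voltage(code: int) -> float:
    """Convert an integer code back to the analogue voltage it represents."""
    return V_MIN + (code / (N_LEVELS - 1)) * (V_MAX - V_MIN)

for v in (-5.0, -0.001, 0.0, 1.2345, 5.0):
    code = adc_sample(v)
    print(f"{v:+8.4f} V -> code {code:4d} -> {dac_voltage(code):+8.4f} V")
```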

    1.2.1 Digitisation of signals

    An appreciation of certain basic principles is essential for the successful use of a data acquisition system. A characteristic feature of all digital recording systems is that they store a sampled representation of analogue signals, the intervals at which these samples are acquired determining how accurately the signal time course is represented. Similarly, conversion of the analogue voltage into binary integer numbers involves a quantisation of the signal amplitude to the nearest of a series of discrete integer levels, the number of available levels determining the precision of the measurement. It is essential therefore to correctly match the sampling rate of the data acquisition system to the time course of the signals being acquired, and to ensure that the signal level is significantly larger than the quantisation steps of the A/D converter. Incorrectly set sampling rates can also lead to highly misleading artefacts in the digital recording where high-frequency signals appear ‘aliased’ at lower frequencies. The general issues involved in the digitisation of analogue signals are discussed in Chapter 3 (Digital Data Acquisition), including the basic principles of A/D conversion and the general properties of laboratory interfaces. The various types of commercially available laboratory interface used most commonly in physiological applications are also reviewed.
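
    The aliasing artefact mentioned above is easy to reproduce numerically. In the sketch below (the frequencies are arbitrary assumptions for the example), a 900 Hz sine wave sampled at only 1 kHz yields exactly the same sample values as a sign-inverted 100 Hz sine wave – the high-frequency signal has been ‘aliased’ down to 100 Hz.

```python
# Numerical demonstration of aliasing: a 900 Hz signal sampled at 1 kHz
# is indistinguishable from a (sign-inverted) 100 Hz signal.
import numpy as np

sample_rate = 1000.0                              # Hz - too low for a 900 Hz signal
t = np.arange(0, 0.05, 1.0 / sample_rate)         # 50 ms of sample times

true_signal = np.sin(2 * np.pi * 900.0 * t)       # 900 Hz input
alias_signal = np.sin(2 * np.pi * 100.0 * t)      # 100 Hz = |900 - 1000| Hz alias

print(np.allclose(true_signal, -alias_signal))    # True: the sample values coincide
```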

    1.2.2 Transducers

    It is important to also pay appropriate attention to each step in the data acquisition chain, both in the initial specification of the system and in its operational use. Attention is often focused upon the computer software and other ‘digital’ aspects of the system but, important as this is, other factors can have just as great an effect on the quality of a recording. The transducers, for instance, must be sufficiently sensitive to resolve the smallest changes in the physiological signal under study, but still have a dynamic range capable of dealing with the likely maximal response. A force transducer for recording the tiny forces associated with the contraction of single muscle fibres will have quite different characteristics from one used to measure the arm strength of an athlete. The response time of a transducer is also important, in that it must be able to change its output voltage quickly enough to respond to the rate at which the signal is changing. Not only is the correct choice of transducer important, the manner by which it is coupled to the experimental tissue or subject often has to be taken into consideration. The catheters, coupling pressure transducers to the arterial systems in cardiovascular system studies, for instance, can profoundly affect the dynamic response of that transducer.

    In fact, as a matter of general principle, a careful researcher should have a full understanding of the operational performance and limitations of the transducers in use. Consequently, Chapter 5 (Transducers and Sensors) discusses the basic principles of operation of a number of the common types of transducers used in physiological research – temperature, force, pressure, light, chemical concentration. The key specifications of a transducer’s performance – sensitivity, response time, accuracy – and the manner in which they are normally expressed by the supplier are also discussed. Typical examples of these transducers are presented along with sources of supply.

    1.2.3 Signal conditioning

    Equally, the signal conditioning must both match the needs of the transducer and produce an output signal suitable for digitisation by the A/D converter. Not only must the appropriate type of signal conditioning be available to the data acquisition system, it must be correctly adjusted for the prevailing experimental conditions. It is a sad fact that the digitised recordings routinely made by many experimenters are sub-optimal to say the least, perhaps due to an uncritical belief in the benefits of digital recording. As mentioned earlier, the precision of a digitised recording is dependent upon the number of quantisation levels available to express the signal amplitude. An A/D converter typically quantises a ±5V voltage range into 4096 levels. For accurate measurement, the transducer signal must be amplified to span a significant fraction of this range (e.g. ±3V), to ensure that the quantisation steps are a small fraction (0.04%) of the signal amplitude.
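
    The figures quoted above follow from simple arithmetic, as the short sketch below confirms (the ±5 V range, 4096 levels and ±3 V signal span are the values used in the example in the text).

```python
# Quantisation step of a 12-bit A/D converter spanning +/-5 V.
adc_range_v = 10.0              # -5 V to +5 V
n_levels = 4096                 # 12-bit converter
step_v = adc_range_v / n_levels
print(f"Quantisation step: {step_v * 1000:.2f} mV")        # ~2.44 mV

signal_span_v = 6.0             # signal amplified to span +/-3 V
print(f"Step as a fraction of the signal: {step_v / signal_span_v:.2%}")   # ~0.04%
```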

    Signal conditioning involves more than simply signal amplification; filtering of the signals by removal of high- or low-frequency components is at least as important, particularly in terms of the anti-alias filtering necessary to avoid artefacts in the digitised signals. The process of filtering, although often necessary, also has the potential to distort the signal. Depending on the kinds of analysis procedure to be applied later to the digitised data, different types of filtering may be appropriate. Some types of analysis require minimal distortion of the signal time course, other types require the precise removal of frequencies above or below certain limits. Chapter 4 (Signal Conditioning) discusses the principles and specifications of the amplifiers and filters used in signal conditioning. Different filter designs and their appropriate areas of application are discussed. The chapter also discusses the ways in which the signal conditioning system can be configured to eliminate (or at least minimise) noise and interference signals from sources external to the experiment.
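
    Although anti-alias filtering must be applied in the analogue domain before digitisation (Chapter 4), the effect of low-pass filtering on a noisy signal can be illustrated digitally. The sketch below is an illustrative example rather than anything from the book; the 10 kHz sampling rate, 1 kHz cut-off and 4-pole Butterworth design are assumed parameters.

```python
# Illustrative low-pass filtering of a noisy digitised signal.
import numpy as np
from scipy.signal import butter, filtfilt

fs = 10_000.0                                     # sampling rate (Hz)
t = np.arange(0, 0.1, 1 / fs)
signal = np.sin(2 * np.pi * 50.0 * t)             # 50 Hz signal of interest
noisy = signal + 0.3 * np.random.randn(t.size)    # broadband noise added

b, a = butter(N=4, Wn=1000.0, btype="low", fs=fs) # 4-pole, 1 kHz low-pass filter
filtered = filtfilt(b, a, noisy)                  # zero-phase filtering (no time shift)

print(f"Noise s.d. before filtering: {np.std(noisy - signal):.3f}")
print(f"Noise s.d. after filtering:  {np.std(filtered - signal):.3f}")
```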

    1.3 ANALYSING DIGITISED SIGNALS

    The end result of the data acquisition process is a set of digitised waveforms stored on magnetic disc and available for analysis. Data analysis can be looked at as a process of data refining, in the sense that a large amount of ‘raw’ information is condensed into a more compact and meaningful form, ultimately appearing in a publication or report of some sort. The process is illustrated in Fig. 1.3. The amount of digitised data acquired during an experiment can vary markedly depending on the kind of signals being acquired. It is rarely less than 1-2 Mbyte and, particularly when images as well as signals are being captured, can be as high as 1 Gbyte.

    Figure 1.3 Analysis of digitised signals. In the first stage, selected characteristics of digitised waveforms are measured. These are then combined with the results from other experiments and summarised.

    Most physiological signals can be usefully represented by a relatively small number of key waveform characteristics, such as peak amplitude, duration, rise and decay time. The periodic blood pressure waveform, for instance, can be characterised in terms of minimum and maximum pressures and pulse rate. An endplate current can be similarly represented by its peak amplitude, rise time, and exponential decay time constant. A set of 1000 digitised waveforms, consisting of 1024 samples each, occupies 2 Mbyte of disc storage space. A condensed representation consisting of three characteristics per waveform can occupy only 12 Kbyte. Discarding redundant waveform data (at least for the purposes of the analysis) and replacing it with a smaller amount of higher-quality data reduces the amount of information to a more manageable level. The waveform characteristics themselves can now be subjected to a further analysis phase, scrutinised for trends, and data acquired under varying experimental conditions compared. Finally, in the summarisation phase, the data from a series of experiments are further condensed into a set of group mean and standard deviation values. These results, tabulated and plotted, eventually find their way (hopefully) into some form of publication. The general analysis process outlined here applies to most forms of experimentation, the main differences being the nature of the waveform characteristics measured. Also, while the summarisation of the data can usually be accomplished by standard software, such as spreadsheets, statistical analysis or scientific graph-plotting packages, waveform characterisation usually requires highly specialised software, adapted for particular experimental fields.
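
    The storage figures quoted in this example reduce to simple arithmetic. The sketch below assumes 2-byte integer samples and 4-byte floating-point values for each measured characteristic; these byte sizes are assumptions for illustration, not specifications from the text.

```python
# Data-reduction arithmetic for the example of 1000 digitised waveforms.
n_waveforms = 1000
samples_per_waveform = 1024
bytes_per_sample = 2                     # assumed: samples stored as 2-byte integers

raw_bytes = n_waveforms * samples_per_waveform * bytes_per_sample
print(f"Raw digitised data: {raw_bytes / 1024 / 1024:.1f} Mbyte")      # ~2 Mbyte

characteristics_per_waveform = 3         # e.g. peak amplitude, rise time, decay tau
bytes_per_value = 4                      # assumed: single-precision floating point
reduced_bytes = n_waveforms * characteristics_per_waveform * bytes_per_value
print(f"Condensed representation: {reduced_bytes / 1024:.0f} Kbyte")   # ~12 Kbyte
```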

    Software for analysing waveform characteristics has to perform a range of tasks. It must be able to access the digitised signals, which are often stored in proprietary file formats. The location of the waveforms within these records must be identified, sometimes involving a signal search and detection process. It is often necessary to identify regions of interest within the waveforms, or exclude other regions which contain artefacts. In fact, one of the most essential features is a facility allowing the user to visually inspect waveforms to assess the quality of the data.

    One of the few disadvantages of digital data storage is that it places a barrier between the researcher and the experimental data. With the earlier recording techniques, the raw data was directly visible, on a paper chart or 35 mm film. Visual inspection was always possible and analysis, because it was done manually by the researcher, had an inherent potential to allow for judgement of data quality. In the modern situation, the digitised raw data can only be inspected using highly specialised computer software. Given that most software is produced by someone else, the researcher’s freedom of action has, in effect, become hostage to the decisions of the programmer. This makes an understanding of the requirements of this type of software all the more important when such systems are being specified or purchased. By the same token, it is important that the user of a computer program fully understands, at least in principle, the computational algorithms used to make a particular measurement.

    The procedures involved in the measurement of waveform characteristics are discussed in Chapter 6 (Signal Analysis and Measurement). The principles behind the measurement of simple amplitude and temporal characteristics are discussed. One of the distinct advantages of storing data in digitised form is that a wide variety of computer algorithms can be used to enhance signal quality (e.g. reduce background noise) or transform the data into an alternative representation (e.g. frequency domain analysis). Chapter 6 also discusses basic signal enhancement procedures such as digital filtering and averaging. The uses of the Fourier transform and frequency domain analysis are also covered, including the latest techniques using the wavelet transform. Detailed coverage is also given to one of the most powerful analysis techniques applied to physiological signals – curve fitting. Experimental results can be quantified and/or related to the predictions of underlying theory by fitting of mathematical functions to experimental data. The principles and practical application of non-linear least squares curve fitting are discussed.
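
    As a concrete illustration of the curve-fitting approach, the sketch below fits a single-exponential decay – the kind of function used to describe the decay of a synaptic current – to synthetic data by non-linear least squares. The data, parameter values and use of scipy’s curve_fit routine are illustrative assumptions, not the book’s own implementation.

```python
# Non-linear least squares fit of an exponential decay to a synthetic waveform.
import numpy as np
from scipy.optimize import curve_fit

def exp_decay(t, amplitude, tau):
    """Single-exponential decay, e.g. the decay phase of a synaptic current."""
    return amplitude * np.exp(-t / tau)

# Synthetic 'recorded' data: 100 pA peak, 5 ms decay time constant, plus noise.
t = np.linspace(0.0, 0.05, 500)                  # 50 ms record
data = exp_decay(t, 100.0, 0.005) + np.random.normal(0.0, 2.0, t.size)

# Fit, starting from rough initial guesses for amplitude and tau.
(fit_amp, fit_tau), covariance = curve_fit(exp_decay, t, data, p0=[50.0, 0.01])
print(f"Fitted amplitude = {fit_amp:.1f} pA, tau = {fit_tau * 1000:.2f} ms")
```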

    1.4 ANALYSIS OF ELECTROPHYSIOLOGICAL SIGNALS

    The variety of signal types that can be encountered in physiological experimentation necessitates the measurement of different waveform characteristics and approaches to analysis. This is particularly true of the area which this book focuses most closely on – the analysis of electrophysiological signals. These signals can take a multiplicity of forms from random noise signals (ionic current noise, electromyogram) to stochastic unitary fluctuations (single ion channel currents) and a variety of transient waveforms (whole cell ionic currents, synaptic currents, action potentials).

    Modes of analysis differ most greatly between signals recorded from electrodes inserted into the cell, directly recording the internal electrical activity, and the more indirect extracellularly recorded signals. The typical applications of these approaches also differ and, consequently, they are treated separately here. Chapter 7 (Recording and Analysis of Intracellular Electrophysiological Signals) first explains the origin of these signals, the characteristics of the various experimental approaches, and then deals with the methods for their recording and analysis.

    A notable feature of intracellular electrophysiology is the extent to which the computer system is involved in controlling the experiment as well as recording data. Probing cellular properties often involves the application of many series of stimuli, in the form of voltage pulses applied to the cell and/or the rapid application of drugs by ionophoresis or pressure ejection. The computer system has proved ideal for this purpose, replacing a whole rack of specially designed timing and pulse generation equipment, probably one of the main reasons for its adoption by the electrophysiological research community earlier than in other fields. Figure 1.4 shows a typical electrophysiological experimental rig, with a computer system and patch clamp amplifier (effectively the signal conditioning) attached to a recording chamber mounted on a microscope and antivibration table.

    Figure 1.4 Electrophysiological experimentation rig, showing computer system (left), signal conditioning (middle) and tissue mounted on a microscope and antivibration table (right). The recording area is shielded using a Faraday cage.

    The specific procedures involved in the analysis of voltage-activated currents, synaptic currents, single-channel currents, current noise and cell capacity measurement are discussed. Much of the work in this field is carried out using one of a small number of commercial electrophysiological data acquisition packages. The key features and range of application of this kind of software are discussed, along with other packages that can be obtained as ‘freeware’ or ‘shareware’ from within the scientific community.

    Chapter 8 (Recording and Analysis of Extracellular Electrophysiological Signals) discusses the corresponding data acquisition and analysis procedures associated with extracellularly recorded electrical activity within the body. These signals are by their nature quite diverse. Attention is focused on the important clinical electrophysiological signals, recorded (primarily) from the body surface – the electromyogram (EMG), generated by skeletal muscle activity, electrocardiogram (ECG) reflecting cardiac muscle activity, and the electroencephalogram (EEG) reflecting neuronal activity in the brain. The issues of electrodes, signal conditioning and the avoidance of interference are discussed along with the characteristic features of each type of signal. Various approaches to the analysis of these signals are discussed and, again, the features of some of the available commercial and free software designed for these purposes are compared.

    Chapter 8 also discusses the digital acquisition and analysis of extracellular action potentials – ‘spikes’ – recorded from individual neurons within the central nervous system, using fine wire electrodes inserted into the brain. The primary aim of this kind of study is to investigate interneuronal communication and information processing. The technique is widely used and forms one of the cornerstones of neurophysiology. Unlike intracellular electrophysiology, where interest is focused on the amplitude and shape of the signal waveform, these studies are concerned only with when spikes occur. Spike shape is only important insofar as it assists in the classification of individual spike waveforms as originating from particular cells. Methods for recording spikes and, most importantly, classifying them into groups associated with particular neurons are discussed. This is followed by a discussion of the techniques applied to the analysis of the interspike intervals.
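
    The detection step described above can be caricatured with a simple amplitude-threshold detector; real spike-sorting software is considerably more sophisticated, and the data, threshold and spike positions in the sketch below are invented purely for illustration.

```python
# Simple threshold-crossing spike detection and interspike interval calculation.
import numpy as np

fs = 20_000.0                                    # assumed sampling rate (Hz)
rng = np.random.default_rng(0)
trace = rng.normal(0.0, 0.05, int(fs))           # 1 s of background noise
trace[[2000, 7000, 7600, 15000]] += 1.0          # add idealised 'spikes'

threshold = 0.5
above = trace > threshold
# Count a spike at each upward crossing of the threshold.
crossings = np.flatnonzero(above[1:] & ~above[:-1]) + 1
spike_times = crossings / fs                     # seconds
isis = np.diff(spike_times)                      # interspike intervals

print("Spike times (s):", np.round(spike_times, 4))
print("Interspike intervals (s):", np.round(isis, 4))
```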

    1.5 IMAGE ANALYSIS

    Chapter 9 (Image Analysis) discusses some physiological applications of the acquisition and analysis of images. The imaging of intracellular activity, particularly using fluorescence microscopy techniques, has become an important tool in the study of physiology at the cellular level. One reason for including it here is that in many cases it is dynamic changes with time, captured by taking series of images, that are most revealing. Such time series of images can be considered to be multichannel signals, with large numbers of channels mapped spatially across the cell. Another reason is that image capture is now being combined with the measurement of other types of more conventional signal, such as intracellular electrophysiological measurements.

    The operating principles of the three main image capture devices – electronic cameras, flatbed scanners and the confocal microscope – are discussed, along with their areas of application. The relative merits of various types of camera – analogue video and digital – and the associated frame grabber interface hardware necessary for image digitisation are compared. The common image measurement and enhancement algorithms, comparable to the signal analysis algorithms of Chapter 6, are then discussed, and the capabilities of some of the available image analysis software compared.

    1.6 SOFTWARE DEVELOPMENT

    Finally, Chapter 10 (Software Development) discusses the issues involved in the development of software for the laboratory, and reviews some of the available software development systems. Although researchers mostly now make use of commercial or public domain software, the question still exists as to how such software gets written. As was raised earlier, the digital storage of data has shifted control away from the researcher to the software developer. However, commercial software tends to follow rather than lead new trends in experimental analysis, with most effort being focused on implementing basic packages that will appeal to a wide range of customers. Support for experimental procedures or modes of analysis of interest to a minority of researchers tends to be neglected. This is not to denigrate commercial products, since a company must make a profit to survive. It simply puts the responsibility for developing appropriate software back into the researcher’s court. There is still an argument therefore for software development within the research laboratory, particularly at the leading edge where, almost by definition, support is unlikely to be found yet in the standard commercial packages. However, given the amount of time and effort involved, such endeavours have to be carried out with a degree of professionalism, with due attention paid to the likely benefits of the project.

    The widespread adoption of graphical user interfaces, such as Microsoft Windows or Apple Mac OS, has changed the nature of program development. At one time, the main choice was what kind of programming language should be used – BASIC, FORTRAN, Pascal, etc. However, it now makes more sense to take a broader view in terms of what software development system should be used. A software development system provides not just a programming language but a system for defining the user interface of the program, and an integrated environment for testing and debugging the program.

    In addition to outlining the basic principles of computer programming, Chapter 10 compares the relative merits of the commonly available software development systems for the IBM PC and Apple Macintosh families – Microsoft Visual Basic, Visual C++, Borland Delphi and Metrowerks CodeWarrior. The ease with which each of these systems can be learned is also considered – an important issue for the researcher who may be only a part-time programmer. In addition to these general-purpose systems, two specialist packages, aimed specifically at the development of software for the acquisition and analysis of signals, are considered – National Instruments LabVIEW and Mathworks Matlab. LabVIEW is a graphical programming environment, designed to simplify the construction of experimental data acquisition and instrumentation control software, which has become the ‘industry standard’ for this type of application in many areas of science and engineering. Matlab (Matrix Laboratory), on the other hand, provides a powerful command-based environment for executing complex signal processing, statistical, and other mathematical operations on digitised data.

    1.7 SUMMARY

    The first six chapters in this book constitute a basic introduction to the principles and methods of computer-based data acquisition, forming a basis for the remainder. Chapters 7 and 8 focus more closely on the specific issues involved in electrophysiological data acquisition. Chapter 9 covers techniques associated with image analysis and Chapter 10 covers the techniques associated with development of software for the laboratory.

    CHAPTER TWO

    The Personal Computer

    The digital computer has evolved into a powerful computing and information storage device since its first development. Increases in computational performance have been remarkable, with a 1000-fold increase in speed since the first personal computers appeared about 25 years ago. The rapid pace of development makes describing the state of the art something of an attempt to hit a moving target. However, basic principles tend not to change so rapidly, and it is important to appreciate what issues affect a computer’s performance and, particularly, its fitness for laboratory applications. This chapter will discuss the basic design of the computer hardware which forms the core of the laboratory data acquisition system, with a particular focus on the choices that need to be made to ensure that the system meets the requirements of the experiment. Performance figures and examples are taken from computers used in the typical laboratory c. 2000.

    2.1 COMPUTER FAMILIES

    Although there are many different computer manufacturers, most belong to one or another of the computer ‘families’, in the sense that they share a common design and are able to run a common range of software associated with that family. Conversely, software designed for one computer family is unlikely to be usable with another. From the point of view of the laboratory user, there are currently three main architectural families of note:

    • IBM PC-compatibles

    • Apple Macintoshes

    • Scientific/engineering workstations

    The IBM PC-compatible family is the largest (over 90% of all computers in current use), evolving from the original IBM (International Business Machines) Personal Computer, introduced in 1981. The backing of IBM, the world’s largest manufacturer of computers in those days, helped to establish the credibility of the personal computer as a business device. The design was copied by other suppliers, notably Compaq, who produced IBM PC ‘clones’ capable of running software designed for the IBM PC. At that time there were many different types of personal computer on the market, each with its own system design, which made it unlikely that software from one would run on another. The benefits of standardisation rapidly became apparent to user and software developer alike and within a few years the IBM PC design dominated the market, as it has done ever since. PC-compatible computers are available from a wide range of suppliers, some of the better known being IBM (Armonk, NY), Compaq (Houston, TX), Dell (Round Rock, TX) and Gateway (North Sioux City, SD).

    The Apple (Cupertino, CA) Macintosh family accounts for another 9% of the computer market. The first Macintosh, developed in 1984, was revolutionary, introducing to the mass market the graphical user interface as a means of operating the computer. Combined with the laser printer, it laid the foundation of the desktop publishing industry and its ease of use, compared to the IBM PC at that time, made it very popular in education. Unlike the IBM PC-compatible, which is available from many different manufacturers, the Macintosh is essentially a product of a single company. Apple (unlike IBM) succeeded in maintaining a tight control over the Macintosh design, due to the Macintosh ‘toolbox’ software that must be embedded within the Macintosh system. Although some companies did obtain licences to produce Macintosh ‘clones’ in the early 1990s, Apple ultimately decided it was in its commercial interests to restrict production to itself. This has to be borne in mind when considering its market share. Although this is relatively small compared to the IBM PC-compatible family, it is nevertheless quite respectable for an individual supplier.

    The scientific and engineering workstation is a much looser concept than the Macintosh and IBM PC-compatible families, defined not by a specific computer architecture, but by capabilities and the choice of operating system. A ‘workstation’, in this context, is a computer system intended for demanding scientific or engineering applications, designed with a greater emphasis on performance than cost. The current leading workstation suppliers are Sun Microsystems (Palo Alto, CA), Silicon Graphics (Mountain View, CA) and Hewlett-Packard (Palo Alto, CA). The main thing they have in common is that they make use of the Unix operating system. However, this does not mean that programs written for one type of workstation will run on another without modification, since each supplier uses their own variant of Unix: Solaris on Sun, Irix on SGI, and HP-UX on Hewlett-Packard systems. Minor differences between these Unix variants, plus the differences in hardware design, mean that software cannot be moved between systems in binary code form, as it can with the IBM PC-compatible and Macintosh families. Instead, programs are ported between systems in the form of source code text, which has to be modified to make it compatible with the new system, and compiled to form executable binary code (see Section 10.3). Versions of Unix are, however, also available for the PC-compatible and Macintosh architectures in the form of the Linux operating system.

    Scientific workstations have a role in the laboratory where performance in excess of that provided by even top-of-the-range personal computers is required. Such applications tend to arise in particular areas such as the modelling of molecular structure and other forms of simulation. Silicon Graphics have also specialised in producing systems with very high performance graphics display sub-systems which significantly outperform their personal computer equivalents. Such systems find applications in areas such as image analysis and the 3D display of molecules.

    The significance of a computer family’s market share lies in what it indicates about the family’s potential longevity. Without sufficient sales volume, a company is unlikely to be able to continue investing in new designs, leading to its eventual demise. An example of the dangers can be found in the history of the NeXT computer. This was an innovative product, developed in 1986 by Steve Jobs, one of the founders of Apple, which combined the capabilities of the scientific workstation with some of the ease of use of the Macintosh. It had many features which would have made it a good laboratory computer. However, it failed to gain widespread acceptance in any market, and within a few years ceased production.

    On this basis, a laboratory computer from the IBM PC-compatible family is a very safe choice. Although some questions have been raised about the Macintosh in the past, Apple’s current profitability and its record for innovation probably make it fairly secure too. The Macintosh family remains popular in areas such as graphics design, education and some aspects of laboratory research. The question needs to be considered more closely when looking at scientific workstations. Many of these, such as Mass-Comp, NeXT or Apollo, have disappeared over the years.

    The following treatment of computer systems hardware and software reflects the market dominance of the IBM PC-compatible, with most examples taken from that family. It should, however, be borne in mind that the general principles apply to the others and, where appropriate, specific features of the Apple and scientific workstation families are compared.

    2.2 MAIN COMPONENTS OF A COMPUTER SYSTEM

    The key technology which has enabled the development of the modern computer is the ability to fabricate complex electronic integrated circuits on silicon ‘chips’. A digital computer essentially consists of a group of integrated circuit systems and sub-systems aimed at the input, storage, processing and output of information. The basic sub-systems of a typical personal computer are outlined in Fig. 2.1. At the heart of a computer system, and probably its single most complex component, is the device which carries out the actual data processing – the central processing unit (CPU). The CPU is an integrated circuit microprocessor designed to manipulate data under the control of a program in the form of a stream of external instructions. It consists of an arithmetic logic unit (ALU) for performing arithmetic and logical operations on the data, an instruction decoder for interpreting program instructions, and a set of storage locations for the data being manipulated, known as registers.

    Figure 2.1 Input (keyboard), storage (RAM, disc), processing (CPU) and output (video) sub-systems of a digital computer. Data is exchanged between sub-systems via the digital address and data lines of the interface bus.

    A CPU is defined by its instruction set – the set of numerical codes which instruct the CPU to execute arithmetic and logical operations. A typical CPU, for example, has over 100 basic instructions for moving numbers between the RAM (defined below) and the CPU, adding, subtracting, multiplying and dividing numbers, and applying a variety of logical tests to numbers stored in the CPU registers. A computer program typically consists of thousands or even millions of such instructions. CPUs from different manufacturers, although often providing the same basic range of operations, typically use different codes, making them incompatible with each other. The CPU gets its program instructions and data from the computer’s primary storage system – random access memory (RAM). This consists of a set of storage locations from which the CPU can read or write data, the term ‘random access’ indicating that any location can be directly accessed by the CPU when required.

    The information stored in RAM and processed by the CPU is encoded in the form of binary numbers. In contrast to the 10-digit (0-9) decimal system we are all familiar with, numbers within the binary system are represented by combinations of only two digits (0,1). There is nothing special about a 10-digit number system and, although the binary system is composed of only two digits, it is equally capable of supporting all the same arithmetic operations. The binary number system would be a mathematical curiosity except for the development of the computer. Much of the speed and reliability of digital electronic circuitry stems from the fact that it is composed of elements which can occupy only two possible states. A switch may be ON or OFF, a voltage level may be HIGH or LOW. The two-digit nature of the binary system is well matched to this design, i.e. OFF = 0 and ON = 1. RAM, for instance, consists of silicon chips containing a large array of storage cells, each of which can be set to OFF or ON to represent the value of a binary digit. Arithmetic and other CPU operations can similarly be carried out by using networks of logic gates which combine the states of each bit in the number. Almost any kind of arithmetic and logical function can be constructed from relatively simple gate operations. An introduction to digital logic circuitry can be found in Horowitz & Hill (1989).

    A single binary digit is known as a bit. Binary data is stored in RAM in the form of 8-bit binary numbers, or bytes, e.g.

    10000010

    Computer memory capacity is thus normally described in terms of the number of bytes that it can hold. A kilobyte (Kbyte) is 1024 bytes (not 1000 bytes), a megabyte (Mbyte) is 1024 Kbyte (1 048 576 bytes), and a gigabyte (Gbyte) is 1024 Mbyte.

    Data is transferred between the CPU and RAM by means of a set of digital address and data communications lines known as the interface bus. Each byte of data held within a RAM storage location has its own individual index number or address. The CPU accesses a location by placing the binary number of its address on to a set of parallel address lines, in the form of ON/OFF binary voltage levels. The contents of the location then appear as ON/OFF levels on the set of data lines to be read by the CPU. Conversely, the CPU can write a number to that location by placing it on to the data lines. A typical computer system might have 32 address and 32 data lines, allowing it to address up to 2³² (4.3 × 10⁹) individual byte locations and to transfer 32-bit numbers (or 4 bytes of data) to/from RAM in a single operation.
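
    The storage units and address-space figures above are all powers of two, as the short sketch below confirms (the binary value is the example byte shown earlier).

```python
# Byte-oriented storage units and address-space arithmetic.
print(int('10000010', 2))          # the example byte above = 130 in decimal

KBYTE = 2 ** 10                    # 1024 bytes
MBYTE = 2 ** 20                    # 1 048 576 bytes
GBYTE = 2 ** 30                    # 1024 Mbyte
print(KBYTE, MBYTE, GBYTE)

address_lines = 32
print(f"Addressable locations: {2 ** address_lines:.3e} bytes")   # ~4.3e9 (4 Gbyte)
```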

    The CPU and RAM, communicating via the interface bus, provide the basis of computation on the computer system, through the following repeated cycle (illustrated in the sketch after the list):

    (1) The CPU reads a program instruction from RAM and decodes it.

    (2) Data is transferred from RAM to the CPU.

    (3) The selected arithmetic or logical operation is performed.

    (4) The result is returned to RAM.

    (5) The next instruction is read….
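
    This fetch-decode-execute cycle can be caricatured in a few lines of code. The tiny instruction set below is entirely invented for illustration and bears no relation to any real CPU.

```python
# Toy illustration of the fetch-decode-execute cycle on an invented machine.
ram = {0: ("LOAD", 100), 1: ("ADD", 101), 2: ("STORE", 102), 3: ("HALT", None),
       100: 7, 101: 35, 102: 0}            # program in low addresses, data at 100+

accumulator = 0                            # a single CPU register
program_counter = 0

while True:
    opcode, operand = ram[program_counter]      # (1) fetch and decode an instruction
    program_counter += 1
    if opcode == "LOAD":                        # (2) transfer data from RAM to the CPU
        accumulator = ram[operand]
    elif opcode == "ADD":                       # (3) perform the arithmetic operation
        accumulator += ram[operand]
    elif opcode == "STORE":                     # (4) return the result to RAM
        ram[operand] = accumulator
    elif opcode == "HALT":
        break
    # (5) loop round and read the next instruction

print(ram[102])    # 7 + 35 = 42
```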

    Some form of input sub-system is required to get data (including the program) into the computer system. The two most basic of these are the keyboard, allowing data entry in alphanumeric form, and the mouse providing positional information. Similarly, output sub-systems, such as a video display or printer, are required to report the results of computations. These input/output (I/O) sub-systems similarly communicate and exchange data with the CPU and RAM via the interface bus. Finally, a computer system will have a number of secondary storage sub-systems to back up its primary RAM. Silicon-chip-based RAM, although highly accessible, is relatively expensive and only retains information while power is applied to the computer system. It is thus complemented by high capacity magnetic or optical disc systems, providing non-volatile storage. A typical computer may have several types of fixed and removable disc
