
www.iambiomed.com

NISM PAPER SOLUTIONS

SEMESTER VII BIOMEDICAL ENGINEERING



Syllabus
Solutions
University Question Papers





Copyright reserved.
NISM PAPER SOLUTIONS

Syllabus

Theory Examination:
1. The question paper will comprise a total of 7 questions, each of 20 marks.
2. Only 5 questions need to be solved.
3. Q.1 will be compulsory and based on the entire syllabus.
4. Remaining questions will be mixed in nature.





MODULE 1:

1. Explain the TCP/IP model in detail (Dec10, May 11)
The TCP/IP Model
TCP/IP is older than the OSI model and is still in use.
The major design goal of the TCP/IP model was to connect multiple networks in a seamless
way.
At the physical level, TCP/IP supports coaxial, twisted-pair and fiber-optic cable media, and
at the data link layer it is compatible with the IEEE 802.2 logical link control standard and
MAC addressing.
The Internet Layer
This layer, called the internet layer, holds the whole architecture together.
Its job is to permit hosts to inject packets into any network and have them travel
independently to the destination (potentially on a different network).
They may even arrive in a different order than they were sent, in which case it is the
job of higher layers to rearrange them, if in-order delivery is desired. Note that
''internet'' is used here in a generic sense, even though this layer is present in the
Internet.
A helpful analogy is international mail: a letter posted in one country to an address in
another will travel through one or more international mail gateways along the way, but this
is transparent to the users. Furthermore, the fact that each country (i.e., each network) has
its own stamps, preferred envelope sizes, and delivery rules is hidden from the users.
The internet layer defines an official packet format and protocol called IP (Internet
Protocol).
The job of the internet layer is to deliver IP packets where they are supposed to go.
Packet routing is clearly the major issue here, as is avoiding congestion. For these
reasons, it is reasonable to say that the TCP/IP internet layer is similar in
functionality to the OSI network layer.
The Transport Layer
The layer above the internet layer in the TCP/IP model is called the transport layer.
It is designed to allow peer entities on the source and destination hosts to carry on a
conversation, just as in the OSI transport layer.
Two end-to-end transport protocols have been defined here.
The first one, TCP (Transmission Control Protocol), is a reliable connection-oriented
protocol that allows a byte stream originating on one machine to be delivered
without error on any other machine in the internet.
It fragments the incoming byte stream into discrete messages and passes each one
on to the internet layer.
At the destination, the receiving TCP process reassembles the received messages
into the output stream.
TCP also handles flow control to make sure a fast sender cannot swamp a slow
receiver with more messages than it can handle.
The second protocol in this layer, UDP (User Datagram Protocol), is an unreliable,
connectionless protocol for applications that do not want TCP's sequencing or flow
control and wish to provide their own.
It is also widely used for one-shot, client-server-type request-reply queries and
applications in which prompt delivery is more important than accurate delivery, such
as transmitting speech or video.
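As an illustrative sketch (added here, not part of the model answer), the difference between TCP's connection-oriented byte stream and UDP's connectionless datagrams can be shown with Python's standard socket module; the host and port values below are arbitrary placeholders for a locally running echo service.

```python
import socket

# TCP: connection-oriented, reliable byte stream.
def tcp_request(host="127.0.0.1", port=9000):          # placeholder address
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.connect((host, port))       # three-way handshake establishes a connection
        s.sendall(b"hello over TCP")  # bytes are delivered in order, without loss
        return s.recv(1024)

# UDP: connectionless, unreliable datagrams.
def udp_request(host="127.0.0.1", port=9001):          # placeholder address
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.sendto(b"hello over UDP", (host, port))  # one-shot send, no handshake
        data, _addr = s.recvfrom(1024)             # may be lost; no retransmission
        return data
```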
The Application Layer
On top of the transport layer is the application layer.
It contains all the higher-level protocols. The early ones included virtual terminal
(TELNET), file transfer (FTP), and electronic mail (SMTP).
The virtual terminal protocol allows a user on one machine to log onto a distant
machine and work there.
The file transfer protocol provides a way to move data efficiently from one machine
to another. Electronic mail was originally just a kind of file transfer, but later a
specialized protocol (SMTP) was developed for it.
Many other protocols have been added to these over the years: the Domain Name
System (DNS) for mapping host names onto their network addresses, NNTP for
moving USENET news articles around, HTTP for fetching pages on the World Wide
Web, and many others.
The Host-to-Network Layer
Below the internet layer is a great void. The TCP/IP reference model does not really
say much about what happens here, except to point out that the host has to connect
to the network using some protocol so it can send IP packets to it.
TCP/IP makes use of existing data link and physical layer standards rather than defining its own.


2. Explain the addressing systems used in TCP/IP protocols. (DEC 2012)
Four levels of addresses are used in an internet employing the TCP/IP protocols: physical
(link) addresses, logical (IP) addresses, port addresses, and specific addresses.







Physical Addresses
The physical address, also known as the link address, is the address of a node as
defined by its LAN or WAN.
It is included in the frame used by the data link layer. It is the lowest-level address.
A physical address has authority only within its own network (LAN or WAN).
(Figure: the four levels of addresses: physical, logical, port, and specific.)
The size and format of these addresses vary depending on the network. For example,
Ethernet uses a 6-byte (48-bit) physical address that is imprinted on the network
interface card (NIC).
Logical Addresses
Logical addresses are necessary for universal communications that are independent
of underlying physical networks.
Physical addresses are not adequate in an internetwork environment where
different networks can have different address formats.
A universal addressing system is needed in which each host can be identified
uniquely, regardless of the underlying physical network. Logical addresses are
designed for this purpose.
A logical address in the Internet is currently a 32-bit address that can uniquely define
a host connected to the Internet.
No two publicly addressed and visible hosts on the Internet can have the same IP
address.


Port Addresses
The IP address and the physical address are necessary for a quantity of data to travel
from a source to the destination host. However, arrival at the destination host is not
the final objective of data communications on the Internet.
Today, computers are devices that can run multiple processes at the same time.
The end objective of Internet communication is a process communicating with
another process.
For example, computer 'A' can communicate with computer 'C' by using TELNET. At
the same time, computer 'A' communicates with computer 'B' by using the File
Transfer Protocol (FTP).
For these processes to receive data simultaneously, we need a method to label the
different processes. In other words, they need addresses. In the TCP/IP architecture,
the label assigned to a process is called a port address.
A port address in TCP/IP is 16 bits in length.
Specific Addresses
Some applications have user-friendly addresses that are designed for that specific
application. Examples include the e-mail address (e.g., i.am.biomedical@gmail.com) and
the Uniform Resource Locator (URL) (e.g., www.iambiomed.com). The first defines
the recipient of an e-mail; the second is used to find a document on the World Wide
Web.
These addresses, however, get changed to the corresponding port and logical
addresses by the sending computer.
As an example of how the lower-level addresses change hop by hop, consider a packet
passing through two routers: router 1 consults its routing table and ARP to find the
physical address of the next hop (router 2), creates a new frame, encapsulates the
packet, and sends it to router 2.
The source physical address changes from 10 to 99. The destination physical address
changes from 20 (router 1's physical address) to 33 (router 2's physical address).
The logical source and destination addresses must remain the same; otherwise the
packet will be lost.
At router 2 we have a similar scenario. The physical addresses are changed, and a
new frame is sent to the destination computer.
When the frame reaches the destination, the packet is decapsulated. The destination
logical address P matches the logical address of the computer.
The data are decapsulated from the packet and delivered to the upper layer. Note
that although physical addresses will change from hop to hop, logical addresses
remain the same from the source to destination.
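As an illustrative sketch (added here, not part of the model answer), the four address levels can be inspected on a local machine with Python's standard library: uuid.getnode() returns the MAC of one interface, the socket calls resolve this host's IP and an OS-assigned port, and the "specific" address is simply an application-level string.

```python
import socket
import uuid

# Physical (link) address: the 48-bit MAC of one local interface.
mac = uuid.getnode()
print("physical address:",
      ":".join(f"{(mac >> shift) & 0xFF:02x}" for shift in range(40, -1, -8)))

# Logical (IP) address: resolve this host's name to an IPv4 address.
print("logical address:", socket.gethostbyname(socket.gethostname()))

# Port address: bind a socket and let the OS assign a free 16-bit port.
with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
    s.bind(("", 0))
    print("port address:", s.getsockname()[1])

# Specific address: an application-level, user-friendly name such as a URL.
print("specific address:", "www.iambiomed.com")
```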


3. Short note on internet
Internet
The Internet Society was set up to promote the use of the Internet and perhaps take
over managing it.
The Internet is a global system of interconnected computer networks that use
the standard Internet protocol suite (TCP/IP).
It is a network of networks that consists of millions of private, public, academic,
business, and government networks, of local to global scope, that are linked by a
broad array of electronic, wireless and optical networking technologies.
The Internet carries an extensive range of information resources and services, such
as the inter-linked hypertext documents of the World Wide Web (WWW) and the
infrastructure to support email.
Traditionally the Internet has had four main applications, as follows:
Email: the ability to compose, send and receive electronic mail has been around
since the early days of the ARPANET and is enormously popular.
Many people get dozens of messages a day and consider it their primary way of
interacting with the outside world, far outdistancing the telephone.
News: newsgroups are specialized forums in which users with a common interest
can exchange messages.
Thousands of newsgroups exist on technical and non-technical topics, including
computers, science, recreation and politics.
Remote login: using TELNET, rlogin or other programs, users anywhere on the
internet can log into any other machine on which they have an account.
File transfer: using the FTP program, it is possible to copy files from one machine on
the internet to another.
A vast number of articles, databases and other information resources are available this way.

4. Explain OSI model in detail (MAY11 , DEC11)
The OSI Model
The model is called the OSI (Open Systems Interconnection) Reference Model
because it deals with connecting open systems, that is, systems that are open for
communication with other systems.


The OSI model has seven layers.
The Physical Layer
The physical layer is concerned with transmitting raw bits over a communication
channel. The design issues have to do with making sure that when one side sends a 1
bit, it is received by the other side as a 1 bit, not as a 0.
The design issues here largely deal with mechanical, electrical, and timing interfaces,
and the physical transmission medium, which lies below the physical layer.
The devices used within the physical layer are responsible for generating, carrying,
and detecting voltage in order to transmit and receive signals containing data.
Network signals are analog or digital.

The Data Link Layer
The main task of the data link layer is to transform a raw transmission facility into a
line that appears free of undetected transmission errors to the network layer.
It accomplishes this task by having the sender break up the input data into data
frames (typically a few hundred or a few thousand bytes) and transmit the frames
sequentially.
If the service is reliable, the receiver confirms correct receipt of each frame by
sending back an acknowledgement frame.
Some traffic regulation mechanism is often needed to let the transmitter know how
much buffer space the receiver has at the moment

The Network Layer
The network layer controls the passage of packets.
A key design issue is determining how packets are routed from source to destination.
When a packet has to travel from one network to another to get to its destination,
many problems can arise. The addressing used by the second network may be
different from the first one. The second one may not accept the packet at all
because it is too large. The protocols may differ, and so on. It is up to the network
layer to overcome all these problems to allow heterogeneous networks to be
interconnected.

The Transport Layer
The basic function of the transport layer is to accept data from above, split it up into
smaller units if needed, pass these to the network layer, and ensure that the pieces
all arrive correctly at the other end.
The transport layer also determines what type of service to provide to the session
layer, and, ultimately, to the users of the network.
The transport layer is a true end-to-end layer, all the way from the source to the
destination.
Protocols used in this layer are:
Class 0: the simplest; provides no error checking or flow control and relies on the
network layer to perform these functions.
Class 1: monitors for packet transmission errors and, if an error is detected, notifies
the sending node's transport layer to resend the data.
Class 2: monitors for transmission errors and provides flow control between the
transport layer and the session layer.
Class 3: provides the functions of Classes 1 and 2, with the added option of recovering
lost packets in certain situations.
Class 4: performs the same functions as Class 3, along with more extensive error
monitoring and recovery.

The Session Layer
The session layer is responsible for establishing and maintaining the communication
link between two nodes.
The session layer allows users on different machines to establish sessions between
them. Sessions offer various services, including dialog control (keeping track of
whose turn it is to transmit), token management (preventing two parties from
attempting the same critical operation at the same time), and synchronization
(checkpointing long transmissions to allow them to continue from where they were
after a crash).

The Presentation Layer
The presentation layer is concerned with the syntax and semantics of the
information transmitted.
The presentation layer manages data formatting which is necessary because
different software application use different data formatting schemes.
In order to make it possible for computers with different data representations to
communicate, the data structures to be exchanged can be defined in an abstract
way, along with a standard encoding to be used ''on the wire.''
The presentation layer manages these abstract data structures and allows higher-
level data structures (e.g., banking records), to be defined and exchanged.

The Application Layer
The application layer contains a variety of protocols that are commonly needed by
users. One widely-used application protocol is HTTP (HyperText Transfer Protocol),
which is the basis for the World Wide Web.
When a browser wants a Web page, it sends the name of the page it wants to the
server using HTTP. The server then sends the page back. Other application protocols
are used for file transfer, electronic mail, and network news.

5. Difference between OSI and TCP/IP reference model

OSI vs TCP/IP:
1) OSI has 7 layers; TCP/IP has 4 layers.
2) The OSI transport layer guarantees delivery of packets; the TCP/IP transport layer does not guarantee delivery of packets.
3) OSI takes a horizontal approach; TCP/IP takes a vertical approach.
4) OSI has a separate presentation layer; TCP/IP has no presentation layer, and its characteristics are provided by the application layer.
5) OSI has a separate session layer; TCP/IP has no session layer, and its characteristics are provided by the transport layer.
6) The OSI network layer provides both connectionless and connection-oriented services; the TCP/IP network layer provides only connectionless service.
7) OSI defines the services, interfaces and protocols very clearly and makes a clear distinction between them; TCP/IP does not clearly distinguish between services, interfaces and protocols.
8) In OSI the protocols are better hidden and can be easily replaced as the technology changes; in TCP/IP it is not easy to replace the protocols.
9) OSI is truly a general model; the TCP/IP model cannot be used to describe any other protocol stack.
10) OSI has the problem of fitting the protocols into the model; the TCP/IP model does not fit any other protocol stack.

6. Short note on ISDN
Integrated Services Digital Network (ISDN) is a set of communications standards for
simultaneous digital transmission of voice, video, data, and other network services
ISDN is the integration of both analog or voice data together with digital data over
the same network.
The key idea behind ISDN is that of the digital bit pipe.
The digital bit pipe normally supports multiple independent channels by
time-division multiplexing of the bit stream.
The exact format of the bit stream and its multiplexing is a carefully defined part of the
interface specification for the digital bit pipe.
Two principal standards for the bit pipe have been developed:
- A low-bandwidth standard for home use.
- A higher-bandwidth standard for business use.
ISDN is a circuit-switched telephone network system, which also provides access to
packet switched networks.
It offers circuit-switched connections (for either voice or data), and packet-switched
connections (for data).


7. Short note on DSL

DSL:
Digital subscriber line (DSL) technology is one of the most promising technologies for
supporting high-speed digital communication over existing local loops.
Summary of DSL technologies:

Technology   Downstream Rate   Upstream Rate   Distance (ft)   Twisted Pairs   Line Code
ADSL         1.5-6.1 Mbps      16-640 Kbps     12,000          1               DMT
ADSL Lite    1.5 Mbps          500 Kbps        18,000          1               DMT
HDSL         1.5-2.0 Mbps      1.5-2.0 Mbps    12,000          2               2B1Q
SDSL         768 Kbps          768 Kbps        12,000          1               2B1Q
VDSL         22-55 Mbps        3.2 Mbps        3,000-10,000    1               DMT


ADSL:
Asymmetric DSL (ADSL), like a 56K modem, provides a higher speed (bit rate) in the
downstream direction than in the upstream direction; hence it is called asymmetric.
The designers of ADSL have deliberately divided the available bandwidth of the local loop
unevenly for residential customers.

SDSL:
ADSL provides asymmetric communication: the downstream bit rate is higher than the
upstream bit rate.
Although this feature meets the needs of residential customers, it is not suitable for
businesses that send and receive data in large volumes in both directions.
SDSL (symmetric DSL) is designed for such businesses.

HDSL:
High-bit-rate DSL (HDSL) was designed as an alternative to the T-1 line.
The T-1 line uses alternate mark inversion (AMI) encoding, which is very susceptible
to attenuation at high frequencies.
HDSL uses 2B1Q encoding, which is less susceptible to attenuation.

VDSL:
Very-high-bit-rate digital subscriber line (VDSL) is a DSL technology providing faster
data transmission over a single flat untwisted or twisted pair of copper wires (up to
52 Mbit/s downstream and 16 Mbit/s upstream) and over coaxial cable (up to 85 Mbit/s
downstream and upstream).
VDSL is capable of supporting applications such as high-definition television, as well
as telephone services and general Internet access, over a single connection.


8. Explain in detail circuit switching (DEC11)
Circuit switching
It was designed in order to send telephone calls down a dedicated channel.
There are three phases in circuit switching:
- Establish
-Transfer
-Disconnect

The telephone message is sent in one go, it is not broken up.
The message arrives in the same order as it was originally sent.
In modern circuit switched networks, electronic signals pass through several
switches before a connection is established.
During a call, no other network traffic can use those switches.
The resources remain dedicated to the circuit during entire data transfer and the
entire message follows the same path.
A circuit switched network is excellent for data that needs a constant link from end
to end. For example, real time video.
Advantages:
1) The circuit is dedicated to the call: no interference, no sharing.
2) Guaranteed full bandwidth for the duration of the call.
3) Guaranteed quality of service.
Disadvantages:
1) It takes a relatively long time to set up the circuit.
2) It is inefficient: the equipment may be unused for much of the call, and even when no
data are being sent the dedicated lines remain open.
3) It was primarily developed for voice traffic rather than data traffic.

9. Explain the parameters wrt network performance: (Dec10)
Different networking parameters:
Bandwidth
Throughput
latency
Availability
Jitter
Reliability
Bandwidth Delay Product
Bandwidth
Bandwidth in hertz is the range of frequencies contained in a composite signal or the
range of frequencies that a channel can pass.
Bandwidth in bits per second is the number of bits per second that a channel, link or
even a network can transmit.
E.g.: Fast Ethernet can send 100 Mbps; therefore its bandwidth is 100 Mbps.
An increase in bandwidth in hertz generally means an increase in bandwidth in bits per
second.
Throughput
Bandwidth is the potential measurement of a link.
Throughput is the actual measurement of how fast it can send data.
Latency (delay)
Latency or delay defines how long it takes for an entire message to completely arrive
at the destination from the time the 1st bit is sent from the source.
It has four components: propagation time, transmission time, queuing time,
processing delay
Latency = propagation time + transmission time + queuing time + processing delay
Jitter
Jitter is a problem if different packets of data encounter different delays and the
application using the data at the receiver site is time-sensitive.
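A minimal worked example of the latency formula listed above (the link parameters are made-up values, not taken from the syllabus):

```python
# Latency = propagation time + transmission time + queuing time + processing delay
# Example: a 2,000 km link at 2.4e8 m/s carrying a 1 MB message over a 1 Mbps channel.
distance_m = 2_000_000          # assumed link length (m)
propagation_speed = 2.4e8       # assumed signal speed in the medium (m/s)
message_bits = 1_000_000 * 8    # 1 MB message
bandwidth_bps = 1_000_000       # 1 Mbps channel

propagation_time = distance_m / propagation_speed   # seconds
transmission_time = message_bits / bandwidth_bps    # seconds
queuing_time = 0.0                                   # assumed negligible here
processing_delay = 0.0                               # assumed negligible here

latency = propagation_time + transmission_time + queuing_time + processing_delay
print(f"latency = {latency:.3f} s")                  # about 8.008 s for these numbers
```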

10. Write a short note on Ethernet


Ethernet is defined as LAN standard in IEEE 802.3 specification.
Ethernet is installed in more places because it has the option for expansion and high
speed networking.
Ethernet transmits data at 10 Mbps, and newer Fast Ethernet transmits at 100 Mbps.
Ethernet uses a control method known as Carrier Sense Multiple Access with
Collision Detection (CSMA/CD)
CSMA/CD is an algorithm that transmits and decodes formatted data frames.
All nodes that wish to transmit a frame on the cable are in connection with one
another.
No single node has priority over another node.
The Ethernet protocol permits only one node to transmit at a time; transmission is
accomplished by sending a carrier signal.
Carrier sense is the process of checking communication cable for specific voltage
level indicating the presence of data carrying signal.
When no traffic is detected on the communication medium for a given amount of
time, any node is eligible to transmit.
Occasionally more than one node will transmit at the same time which results in
collision.
The transmitting node detects a collision by measuring the signal strength.
A transmitting node uses the collision detection software algorithm to recover from
packet collision.
This algorithm causes the station that has transmitted to continue transmission for a
designated time.
The continued transmission is a jam signal of all binary ones, which informs all listening
nodes that a collision has occurred.
The software at each node then generates a random number, which is used as the
amount of time to wait before attempting to transmit again.
This makes it very unlikely that two nodes will attempt to transmit again at the same time.
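The random back-off just described can be sketched as a toy simulation (an illustration only; real Ethernet uses truncated binary exponential backoff in units of the 512-bit slot time, and the two "stations" below are just function calls):

```python
import random

SLOT_TIME_US = 51.2  # 512 bit times at 10 Mbps, in microseconds

def backoff_delay(collision_count: int) -> float:
    """Return a random wait after the Nth successive collision (truncated at 10)."""
    k = min(collision_count, 10)
    slots = random.randint(0, 2**k - 1)   # choose 0 .. 2^k - 1 slot times
    return slots * SLOT_TIME_US

# Two stations that just collided pick independent delays, so a repeat
# collision on the next attempt becomes unlikely.
print("station A waits", backoff_delay(1), "microseconds")
print("station B waits", backoff_delay(1), "microseconds")
```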
Frames find their way to destination through physical addressing. Each workstation
and server has a unique layer 2 address associated with its NIC (Network Interface
Controller).
Network drivers are required for performing NICs function. When data is
transmitted in Ethernet communication, it is encapsulated in frames.


ETHERNET FRAME:


Preamble (7 bytes): alerts the receiving system to the coming frame and enables it to
synchronize its input timing.
The preamble is actually added at the physical layer and is not formally part of the frame.

Start frame delimiter (SFD, 1 byte):
Signals the beginning of the frame. The SFD warns the station that this is the last chance
for synchronization.
Destination address -
It contains the physical address of the destination station or stations to receive the
packets.
Source address -
It contains the physical address of the sender of the packet.
Length or type:
This field defines either the number of bytes in the data field or the type of the
upper-layer protocol carried in the frame.
Data & padding:
It carries data encapsulated from the upper-layer protocols. It is a minimum of 46 and a
maximum of 1500 bytes.
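As a hedged sketch of the frame layout just listed, the header fields can be packed with Python's struct module. The addresses and payload below are made-up values; the preamble and SFD are added by the physical layer and omitted here, and a real frame also carries a 4-byte CRC, which this sketch leaves out.

```python
import struct

def build_ethernet_frame(dst_mac: bytes, src_mac: bytes, eth_type: int, payload: bytes) -> bytes:
    """Pack destination (6 B), source (6 B), length/type (2 B), then data + padding."""
    if len(payload) < 46:                       # pad the data field up to the 46-byte minimum
        payload = payload + b"\x00" * (46 - len(payload))
    header = struct.pack("!6s6sH", dst_mac, src_mac, eth_type)
    return header + payload                     # CRC/FCS omitted in this sketch

frame = build_ethernet_frame(
    dst_mac=bytes.fromhex("aabbccddeeff"),      # hypothetical destination NIC address
    src_mac=bytes.fromhex("112233445566"),      # hypothetical source NIC address
    eth_type=0x0800,                            # IPv4 as the upper-layer protocol
    payload=b"hello",
)
print(len(frame), "bytes")                      # 14-byte header + 46-byte padded data = 60
```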



Module 2:

11. Short note on Entity authentication
Entity authentication:
Entity authentication is a technique designed to let one party prove the identity of another party.

An entity can be a person, a process, a client or a server.
The entity whose identity is to be proved is called a claimant.
In entity authentication claimant needs to identify himself/ herself to the verifier.
This can be done with any one of the three witnesses:
something known,
something possessed,
something inherent.

Something known:
This is a secret known only to the claimant that can be checked by the verifier. Examples are
a password, a PIN, or a secret key.
Something possessed:
This is something that can prove the claimant's identity. Examples are a passport, an
identification card, or a smart card.
Something inherent:
This is an inherent characteristic of the claimant. Examples are a signature, fingerprints,
voice, and handwriting.
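A minimal sketch of the "something known" witness (added illustration, not from the question paper): the verifier stores only a salted hash of the claimant's password and checks a later claim against it. The register/verify helpers are hypothetical names, and a real system would use a dedicated password-hashing function rather than plain SHA-256.

```python
import hashlib
import hmac
import os

def register(password: str) -> tuple[bytes, bytes]:
    """Verifier stores a random salt and the salted hash, never the password itself."""
    salt = os.urandom(16)
    digest = hashlib.sha256(salt + password.encode()).digest()
    return salt, digest

def verify(password: str, salt: bytes, stored_digest: bytes) -> bool:
    """Claimant proves 'something known' by reproducing the same digest."""
    candidate = hashlib.sha256(salt + password.encode()).digest()
    return hmac.compare_digest(candidate, stored_digest)  # constant-time comparison

salt, stored = register("correct horse battery staple")
print(verify("correct horse battery staple", salt, stored))  # True
print(verify("wrong guess", salt, stored))                   # False
```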
12. Short note on cryptography:
13. Short note on keys used in cryptography:(DEC 2012)
Cryptography is the science and art of transforming message to make them secure
and immune from attacks.
We can divide all the cryptographic algorithms (ciphers) into two groups: symmetric-
key cryptography (secret key) and Asymmetric key cryptography (public key).
Symmetric-Key Cryptography:
In Symmetric-Key Cryptography, same key is used by both the parties.
The sender uses this key and an encryption algorithm to encrypt the message and
receiver uses this same key and a decryption algorithm to decrypt the data.
Asymmetric-Key Cryptography:
In Asymmetric-Key Cryptography, there are two keys; a private key and a public key.
The private key is kept by the receiver.
The public key is announced to the public.
In this the public key which is used for encryption is different from private key that is
used for decryption.
The public key is available to public and the private key is available only to the
receiver.
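As a toy illustration only (a simple XOR stream is not a secure cipher, and the RSA numbers below are deliberately tiny textbook values, never usable in practice), the sketch shows the symmetric idea that one shared key both encrypts and decrypts, and the asymmetric idea of separate public and private keys:

```python
# Symmetric (secret-key) idea: one shared key, used for both encryption and decryption.
def xor_cipher(data: bytes, key: bytes) -> bytes:
    """Toy XOR 'cipher': applying it twice with the same key restores the plaintext."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

shared_key = b"secret-key"
ciphertext = xor_cipher(b"patient report", shared_key)
assert xor_cipher(ciphertext, shared_key) == b"patient report"

# Asymmetric (public-key) idea: encrypt with the public key, decrypt with the private key.
n, e, d = 3233, 17, 2753         # n = 61 * 53; e public exponent; d private exponent
message = 65
encrypted = pow(message, e, n)   # anyone with the public key (n, e) can do this
decrypted = pow(encrypted, d, n) # only the holder of the private key d can undo it
assert decrypted == message
```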




14. List various network security services. Explain Message Authentication & Message
Integrity (Dec10, May11)
MESSAGE INTEGRITY
Encryption and decryption provide secrecy, or confidentiality, but not integrity.
However, on occasion we may not even need secrecy, but instead must have
integrity.
For example, Alice may write a will to distribute her estate upon her death. The
will does not need to be encrypted. After her death, anyone can examine the
will. The integrity of the will, however, needs to be preserved. Alice does not
want the contents of the will to be changed.
Document and Fingerprint
One way to preserve the integrity of a document is through the use of a
fingerprint.
If Alice needs to be sure that the contents of her document will not be illegally
changed, she can put her fingerprint at the bottom of the document.
To ensure that the document has not been changed, Alice's fingerprint on the
document can be compared to Alice's fingerprint on file. If they are not the
same, the document is not from Alice.
To preserve the integrity of a document both the document and fingerprint are
needed.

Message and Message Digest
The electronic equivalent of the document and fingerprint pair is the message
and message digest pair.
To preserve the integrity of a message, the message is passed through an algorithm
called a hash function.
The hash function creates a compressed image of the message that can be used
as a fingerprint.

The two pairs document/fingerprint and message/message digest are similar,
with some differences.
The document and fingerprint are physically linked together; also, neither needs
to be kept secret.
(Figure: a message is passed through a hash function to produce a message digest.)
The message and message digest can be unlinked (or sent) separately and, most
importantly, the message digest needs to be kept secret.
The message digest is either kept secret in a safe place or encrypted if we need
to send it through a communications channel.
The message digest needs to be kept secret.

Creating and Checking the Digest
The message digest is created at the sender site and is sent with the message to
the receiver.
To check the integrity of a message, or document, the receiver creates the hash
function again and compares the new message digest with the one received.
If both are the same, the receiver is sure that the original message has not been
changed. Of course, we are assuming that the digest has been sent secretly.

Figure: Checking integrity.
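A minimal sketch of creating and checking a digest with Python's hashlib (SHA-256 is used here only as an example hash function; the question paper itself does not name a specific algorithm):

```python
import hashlib

def digest(message: bytes) -> str:
    """Compressed 'fingerprint' of the message."""
    return hashlib.sha256(message).hexdigest()

# Sender computes the digest and (in this model) sends it secretly alongside the message.
message = b"Alice's will: estate goes to Bob"
sent_digest = digest(message)

# Receiver recomputes the digest and compares it with the one received.
received_message = message                      # unchanged in transit
assert digest(received_message) == sent_digest  # integrity confirmed

tampered = b"Alice's will: estate goes to Eve"
assert digest(tampered) != sent_digest          # any change alters the digest
```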



Hash function criteria:
To be eligible as a hash function, a function must meet three criteria: one-wayness, weak
collision resistance, and strong collision resistance.
1) One-wayness: a message digest is created by a one-way hashing function, and it must be
impossible to recreate the message given only the digest.
2) Weak collision resistance: given a message and its digest, it must be unlikely that
someone can create another message with exactly the same digest.
3) Strong collision resistance: it must be infeasible to find any two messages that hash to
the same digest.

Weak Collision Resistance
The second criterion, weak collision resistance, ensures that a message cannot
easily be forged.
If Alice creates a message and a digest and sends both to Bob, this criterion
ensures that Eve cannot easily create another message that hashes exactly to the
same digest.
Given a specific message and its digest, it should be impossible to create another message
with the same digest.
When two messages create the same digest, we say there is a collision.
In a weak collision, given a message digest, it is very unlikely that someone can
create a message with exactly the same digest.
A hash function must have weak collision resistance.

Strong Collision Resistance
The third criterion, strong collision resistance, ensures that we cannot find two
messages that hash to the same digest.
If Alice can create two messages that hash to the same digest, she can deny
sending the first to Bob and claim that she sent only the second.
This criterion is called strong because the probability of this kind of collision is higher
than in the previous case, so it is harder to guarantee.
Without strong collision resistance, an adversary can create two messages that hash to the
same digest.
For example, if the number of bits in the message digest is small, it is likely Alice
can create two different messages with the same message digest. She can send
the first to Bob and keep the second for herself. Alice can later say that the
second was the original agreed-upon document and not the first.

MESSAGE AUTHENTICATION
A hash function guarantees the integrity of a message. It guarantees that the
message has not been changed.
A hash function, however, does not authenticate the sender of the message.
When Alice sends a message to Bob, Bob needs to know whether the message is coming
from Alice or from Eve.
To provide message authentication, Alice needs to provide proof that it is Alice
sending the message and not an imposter.
A hash function per se cannot provide such a proof.
The digest created by a hash function is normally called a modification detection
code (MDC).
The code can detect any modification in the message.

MAC
To provide message authentication, we need to change a modification detection
code into a message authentication code (MAC).
An MDC uses a keyless hash function; a MAC uses a keyed hash function. A keyed
hash function includes the symmetric key shared between the sender and receiver when
creating the digest. The figure shows how Alice uses a keyed hash function to authenticate
her message and how Bob can verify the authenticity of the message.


Alice, using the symmetric key between herself and Bob and a keyed hash
function, generates a MAC.
She then concatenates the MAC with the original message and sends the two to
Bob.
Bob receives the message and the MAC. He separates the message from the
MAC.
He applies the same keyed hash function to the message using the symmetric
key to get a fresh MAC.
He then compares the MAC sent by Alice with the newly generated MAC.
If the two MACs are identical, the message has not been modified and the sender
of the message is definitely Alice.
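This keyed-hash procedure corresponds closely to Python's hmac module; a small sketch follows (the shared key is a placeholder value, and SHA-256 is just an example underlying hash):

```python
import hashlib
import hmac

shared_key = b"alice-and-bob-shared-key"        # symmetric key known only to Alice and Bob

# Alice: compute the MAC and send message + MAC.
message = b"transfer the report to Bob"
mac = hmac.new(shared_key, message, hashlib.sha256).digest()

# Bob: recompute the MAC with the same key and compare.
fresh_mac = hmac.new(shared_key, message, hashlib.sha256).digest()
print(hmac.compare_digest(mac, fresh_mac))      # True: unmodified and from a key holder

# Eve, without the key, cannot produce a matching MAC for a forged message.
forged_mac = hmac.new(b"wrong key", message, hashlib.sha256).digest()
print(hmac.compare_digest(forged_mac, mac))     # False
```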

15. Explain client server model in detail (may 11)
Client server model:
In a client/server model one or more stations called servers give services to other
stations called clients.
Dedicated server:
A larger network may have several servers each dedicated to a particular task.
A network may have mail server, file server or a print server.
A print server allows different clients to share a printer. Each client can send data to
be printed to the print server, which then prioritizes and prints the jobs.
In this model, the file server station runs a server file access program, a mail server
station runs a server mail-handling program, and a print server station runs a server
print-handling program. Depending on its needs, a client runs a client file access program,
a client mail program, or a client print program.


In the figure for dedicated servers, one client uses the client file access program to retrieve
data from the file server, while a second client uses its client print program to output a job
on the shared printer.

General server:
A small network may have only one general server. In this case this one server is
responsible for all services typically requested from a server. It can serve as a mail
server, a file server, a print server and so on.
A general server runs all the server programs all the time.
In the figure two clients are accessing the general server.
One client is requesting a service from the file server program. The other client is
requesting a service from the print server program.
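A compact sketch of the client/server idea (added illustration): one "general server" process answers simple "FILE" and "PRINT" requests over TCP. The port number and the request strings are invented for this example.

```python
import socket
import threading
import time

HOST, PORT = "127.0.0.1", 9500   # hypothetical address of the general server

def general_server():
    """One server process offering both a toy file service and a toy print service."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.bind((HOST, PORT))
        srv.listen()
        conn, _ = srv.accept()
        with conn:
            request = conn.recv(1024).decode()
            if request.startswith("FILE"):
                conn.sendall(b"contents of the requested file")
            elif request.startswith("PRINT"):
                conn.sendall(b"job queued on the shared printer")

threading.Thread(target=general_server, daemon=True).start()
time.sleep(0.2)                   # give the server a moment to start listening

# A client requesting the file service from the same general server.
with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as cli:
    cli.connect((HOST, PORT))
    cli.sendall(b"FILE report.txt")
    print(cli.recv(1024).decode())
```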

16. SHORT NOTE ON OPERATING SYSTEMS
The operating system (OS) is the software at the heart of the computer.
It is a program that performs a variety of basic functions for all other programs.
Any program that needs a mouse pointer (such as a radiology workstation application)
simply uses the OS to generate it.
Some operating systems are CP/M, DOS, LINUX, OS-X, MUMPS, OS/2, SOLARIS, UNIX,
VMS, and Windows XP.
The operating system is what determines what software you can and cannot run on
the computer.
If you have UNIX, you cannot run Microsoft Word for Windows.
If you have Windows XP, you cannot run Macintosh OS-based software.
MUMPS was the first OS designed primarily for medical application programs.
An operating system is usually designed to run on a specific CPU or series of CPUs.
That is, you can run DOS on an Intel 286, 386, 486, or Pentium, but it will not run on
a Motorola-based Mac.
As computers have become more powerful and compact, operating systems have
been designed to take advantage of this power, and they have become graphical so
that ordinary people can interact with the computer more intuitively.
What facilitates their interactions are GUIs (graphical user interfaces).
Because these GUI-based OSs expose many high-level functions to the end user,
application programs running on them have a common look and feel, regardless of
who makes them.
Major medical equipment, including magnetic resonance (MR) and computed
tomography (CT) scanners and virtually every kind of picture archiving and
communication systems (PACS) component, run applications on GUI OSs on PCs.


OPERATING SYSTEM SECURITY AND POLICIES
To maintain a secure network environment and ensure system functionality, it is
crucial that server security policies be clearly developed and outlined.
The following is a general outline of recommended server policies.
OPERATING SYSTEM PATCHES
Operating systems (defined as the first program that runs when a computer is turned
on; this program manages all other programs, including applications such as e-mail
or PACS) all have flaws or bugs.
The OS provider continually offers updates (patches) to fix problems that can cause
crashes or to seal security flaws to protect the server from hackers.

VIRUS PROTECTION
Operating system patches are the first line of defence against malicious (virus)
software.
The second line of defence is antivirus software.
Antivirus software runs on each server and workstation and continually monitors
files and running programs for signature behaviours.
Once a new virus is discovered, it is then studied and reverse engineered to
determine what security flaws it takes advantage of, how it replicates itself, and
whatever else it tries to do (e.g., steal data, clog the network).
Once the virus is understood, this information is electronically sent to all customers
subscribing to the antivirus provider's service.
The updated information is often called virus signatures. Once the antivirus
software is updated with the new signatures, it will then scan the server to
determine whether it has been compromised. If it has been, the antivirus software
will remove the virus, cleansing the system.
Providers of antivirus software include McAfee and Norton.
PASSWORDS
Servers should have individual user accounts and fairly complicated passwords to
prevent a security breach by a hacker.
Most hackers will write a simple algorithm looking for easy access through weak
passwords or no password at all.
NO UNENCRYPTED AUTHENTICATION
It is advisable that user authentication (logging in) be encrypted during transmission
over the network.
This is important to consider because hackers have tools that allow them to monitor
network traffic and snatch passwords that are transmitted in clear text.

NO UNAUTHORIZED E-MAIL DELIVERIES
All operating systems have the ability to send mail messages to other servers.
Malicious code (viruses) has the ability to leverage this process and cause havoc
throughout the institution.
To help avoid this situation, it is advisable to turn off the SMTP (simple mail transfer
protocol) program unless it is absolutely needed.


17. Define VIRUSES, TROJANS, AND WORMS
More than 30 new viruses appear every day, and the roster of active viruses increases by
more than 10,000 per year.
VIRUS
A virus is defined as a piece of code that replicates by attaching itself to another
object, usually without the user's knowledge or permission. Viruses can infect program files,
documents, or low-level disk and file-system structures such as the boot sector and partition
table.
Viruses can run when an infected program file runs and can also reside in memory
and infect files as the user opens, saves, or creates the files. When a computer virus infects
a computer running Windows, it can change values in the registry, replace system files, and
take over e-mail programs in its attempt to replicate itself.

WORM
Worms are a type of virus that replicate by copying themselves from one computer
to another, usually over a network. The general goal of a worm is to replicate itself to as
many systems as possible.
Often the worm acts as a transport mechanism for delivering yet another virus.

TROJAN HORSE
A Trojan horse is a virus that masquerades (disguises itself) as a useful program.
Sometimes it actually performs a useful purpose, but behind the scenes it releases a
virus.
A Trojan horse program can come from an e-mail attachment or a download from a
Web site, usually disguised as a joke program or a software utility of some sort.

18. Explain the terms switch, router, hub, bridge (DEC10, MAY11)
Bridge:
In computer network a bridge connects two or more computers.
The data uses bridge to move to and fro in the network.
This device is similar to a router, but it does not analyse the data being forwarded.
This is the reason why bridges are fast in transferring the data.
They are not as versatile as a router.
The bridge cannot be used as a firewall like most routers.
A bridge can transfer data between networks that use different protocols,
such as Token Ring and Ethernet networks.
It operates at the data link layer.

HUB:
It is a small rectangular Box, made up of plastic.
A hub joins multiple computers (or other network device) together to form a single
network segment.
This helps all the PCs in the network to communicate with each other.
Ethernet HUBs are by far the most common types, but USB Hubs also exists.
Small hubs have 4 ports, and larger hubs may have 12, 16, 24, or 32 ports.
Hubs operate at layer 1 of the OSI model.
Hubs offer little in the way of sophisticated networking.
Hubs do not read any of the data passing through them; they simply receive incoming
packets and repeat them out of their other ports.
There are three types of HUBS
a) Passive: these do not amplify the signal.
b) Active: these perform amplification and are called repeaters.
c) Intelligent: these are stackable devices.

Switch:
A switch is a small hardware device that joins multiple computers together within
one LAN.
They operate on layer 2 of OSI model.
It is similar to a network bus with some more intelligence.
They are capabale of inspecting data packets unlike the HUB.
A switch conserves network bandwidth.
It gives higher performance as compared to a HUB.
Ethernet implementations of network switches are the most common.
The switches support 10/100 Mbps fast Ethernet.
Different models of network switches support differing numbers of connected
devices.
Router:
Routers are small physical devices that connect multiple networks.
A router is a layer 3 gateway device; it works at the network layer of the OSI model.
We normally use IP routers.
IP routers such as DSL or cable-modem broadband routers are readily available in the market.
A router selects a path along which to send the network traffic.
Routers are somewhat costly.

19. Explain single server and multi-server
SINGLE SERVER
The single server model consists of one server handling all PACS functions.
These functions include the importation of exams from DICOM modalities, the
processing of the database requests, and the processing of image distribution
requests from the radiologists interpretation workstations.
Workstations can access exams via standard Web protocols or through a proprietary
application program interface (API) built into the PACS vendor's software.
The single server scenario is often ideal for an imaging center or small community
hospital where the size and volume of exams are relatively small.
Thus, the need for additional servers to provide the distributed processor power,
memory, and hard disk space is not as great as it would be in a larger hospital, where
higher volumes and exam sizes necessitate more resources.


Figure illustrates a single server model for PACS using Web technology as a means of
distribution, with a backup server waiting in the wings.


MULTISERVER
In a multiserver model, a separate server is responsible for each component of the
PACS.
One server handles DICOM importing, another server maintains the database, and
one handles image distribution requests.
This design affords the luxury of having processor power, memory, and hard disk
space dedicated to each PACS component on an individual basis rather than having
the components competing for the resources of one server.
For example, this proves useful in institutions where large exam sizes and the
number of studies performed can cause the database to grow to considerable size
(200 gigabytes [GB] or more).


Figure shows a multiserver model for PACS using Web technology as a means of image
distribution. All requests for information or images are funneled through the Web server.
The Web server acts like a traffic cop, directing information requests to the proper servers.
Image requests are serviced from the archive(s), and textual requests are serviced through
the database.

20. Short note on WAN
A wide area network, or WAN, spans a large geographical area.
It contains a collection of machines (host) intended for running user (i.e., application)
programs.
The hosts are connected by a communication subnet.
The hosts are owned by the customers, whereas the communication subnet is
typically owned and operated by a telephone company or Internet service provider.
The job of the subnet is to carry messages from host to host.
In most wide area networks, the subnet consists of two distinct components:
transmission lines and switching elements.
Transmission lines move bits between machines.
They can be made of copper wire, optical fiber, or even radio links.
Switching elements are specialized computers that connect three or more
transmission lines.
When data arrive on an incoming line, the switching element (routers) must choose
an outgoing line on which to forward them.
In this model, shown in the figure, each host is frequently connected to a LAN on which a
router is present, although in some cases a host can be connected directly to a
router. The collection of communication lines and routers (but not the hosts) forms
the subnet.



In most WANs, the network contains numerous transmission lines, each one
connecting a pair of routers.
If two routers that do not share a transmission line wish to communicate, they must
do this indirectly, via other routers.
When a packet is sent from one router to another via one or more intermediate
routers, the packet is received at each intermediate router in its entirety, stored
there until the required output line is free, and then forwarded.
A subnet organized according to this principle is called a store-and-forward or
packet-switched subnet.
When a process on some host has a message to be sent to a process on some other
host, the sending host first cuts the message into packets, each one bearing its
number in the sequence.
These packets are then injected into the network one at a time in quick succession.
The packets are transported individually over the network and deposited at the
receiving host, where they are reassembled into the original message and delivered
to the receiving process.
A stream of packets resulting from some initial message is illustrated in the figure.
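A small sketch of the packetizing idea described above (an added illustration): the message is cut into numbered packets, which may arrive out of order and are reassembled by sequence number at the receiving host. The packet size and message text are arbitrary.

```python
import random

PACKET_SIZE = 8  # bytes of payload per packet (illustrative value)

def packetize(message: bytes) -> list[tuple[int, bytes]]:
    """Cut the message into (sequence number, chunk) packets."""
    return [(i, message[i:i + PACKET_SIZE]) for i in range(0, len(message), PACKET_SIZE)]

def reassemble(packets: list[tuple[int, bytes]]) -> bytes:
    """Receiving host puts packets back in order using their sequence numbers."""
    return b"".join(chunk for _, chunk in sorted(packets))

message = b"a message sent across a packet-switched subnet"
packets = packetize(message)
random.shuffle(packets)          # packets may take different routes and arrive out of order
assert reassemble(packets) == message
```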



Module 3:

21. Explain image quality factors like Spatial resolution, contrast resolution & noise
(Dec10)

Image quality factors:
SPATIAL RESOLUTION
The spatial resolution response or sharpness of an image capture process can be
expressed in terms of its modulation transfer function (MTF), which in practice is
determined by taking the Fourier Transform of the line spread function (LSF), and
relates input subject contrast to imaged subject contrast as a function of spatial
frequency.

The ideal image receptor adds no blur or broadening to the input LSF, resulting in an
MTF response of 1 at all spatial frequencies.

A real image receptor adds blur, typically resulting in a loss of MTF at higher spatial
frequencies.
The main factor limiting the spatial resolution in CR, similar to screen film systems, is
x-ray scattering within the phosphor layer.

However, it is the scattering of the stimulating beam in CR, rather than the emitted
light as in screen-film, that determines system sharpness.

Broadening of the laser light spot within the IP phosphor layer spreads with the
depth of the plate. Thus, the spatial resolution response of CR is largely dependent
on the initial laser beam diameter and on the thickness of the IP detector.

The spatial resolution of CR is less than that of screen-film.

Finer spatial resolution can technically be achieved with the ability to tune laser spot
sizes down to 50 µm or less, but the image must then be sampled more finely to achieve
10 lp/mm.

Thus there is a trade-off between the spatial resolution that can technically be
achieved and the file size to practically transmit and store.


CONTRAST RESOLUTION
The contrast resolution for CR is much greater than that for screen-film.
Since overall image quality resolution is a combination of spatial and grayscale
resolution, the superior contrast resolution of CR can often compensate for its lack
of inherent spatial resolution.

By manipulating the image contrast and brightness, or window and level values,
respectively, small features often become more readily apparent in the image.

The overall impression is that the spatial resolution of the image has been improved
when, in fact, it has not changed; only the contrast resolution has been manipulated.


NOISE
The types of noise affecting CR images include x-ray dose dependent noise and fixed
noise (independent of x-ray dose).

The dose dependent noise components can be classified into x-ray quantum noise,
or mottle, and light photon noise.

The quantum mottle inherent in the input x-ray beam is the limiting noise factor, and
it arises in the process of absorption by the IP, with noise being inversely
proportional to the detector x-ray dose absorption.

Light photon noise arises in the process of photoelectric transmission of the
photostimulable luminescence light at the surface of the PMT.

Fixed noise sources in CR systems include IP structural noise (the predominant
factor), noise in the electronics chain, laser power fluctuations, quantization noise in
the analog-to-digital conversion process, and so on.

Imaging plate structural noise arises from the nonuniformity of the phosphor particle
distribution, with finer particles providing noise improvement. Note that for CR
systems, it is these noise sources that limit the DQE and the system latitude (a high DQE
means images can be acquired with a lower dose for the same noise level).


22. Discuss the image attributes, advantages, disadvantages of the following
modalities:
Projection Radiography
Source: X-rays; ionizing radiation; part of the electromagnetic spectrum emitted as a
result of bombardment of a tungsten anode by free electrons from a cathode.
Analog detector: fluorescent
screen and radiographic film.
Digital detector: computed radiography (CR) uses a photostimulable or storage phosphor
imaging plate; direct digital radiography (DR) devices convert X-ray energy to electron-hole
pairs in an amorphous selenium photoconductor, which are read out by a thin-film
transistor (TFT) array of amorphous silicon (Am-Si). For indirect DR devices, light is
generated using an X-ray-sensitive phosphor (e.g., a cesium iodide scintillator), converted
to a proportional charge in a photodiode, and read out by a charge-coupled device (CCD)
or a flat-panel Am-Si TFT array.
Image attributes: variations in the grayscale of the image represent the X-ray attenuation
or density of tissues; bone absorbs large amounts of radiation allowing less signal to reach
the detector, resulting in white or bright areas of the image; air has the least attenuation
causing maximum signal to reach the detector, resulting in black or dark areas of the image.
Advantages: fast and easy to perform; equipment is relatively inexpensive and widely
available; low amounts of radiation; high spatial resolution capability. Particularly useful for
assessing parts of the body that have inherently high contrast resolution but require fine
detail, such as imaging the chest or skeletal system.
Disadvantages: poor differentiation of low-contrast objects; superposition of structures
makes image interpretation difficult; uses ionizing radiation.
Fluorography
Source: continuous low-power X-ray beam; ionizing radiation.
Detector: X-ray image intensifier amplifies the output image.

Image attributes: continuous acquisition of a sequence of X-ray images over time results in
a real-time X-ray movie.
May use inverted grayscale (white for air; black for bones).
Advantages: can image anatomic motion and provide real-time image feedback during
procedures. Useful for monitoring and carrying out barium studies of the gastrointestinal
tract, arteriography, and interventional procedures such as positioning catheters.
Disadvantages: essentially a lower-quality, moving projection radiograph.

Computed Tomography (CT)
Source: collimated X-ray beam; X-ray tube rotates around the patient.
Detector: early sensors were scintillation detectors with photomultiplier tubes excited by
sodium iodide (NaI) crystals; modern detectors are solid-state scintillators coupled to
photodiodes or are filled with low-pressure xenon gas. An image is obtained by computer
processing of the digital readings of the detectors.
Image attributes: thin transverse sections of the body are acquired, representing an
absorption pattern or X-ray attenuation of each tissue. Absorption values are expressed as
Hounsfield units.
Advantages: good contrast resolution allowing differentiation of tissues with similar
physical densities; tomographic acquisition eliminates the superposition of images of
overlapping structures; advanced scanners can produce images that can be viewed in
multiple planes or as volumes. Any region of the body can be scanned; CT has become
the diagnostic modality of choice for a large number of disease entities; useful for tumor
staging.
Disadvantages: high cost of equipment and procedure; high dose of ionizing radiation per
examination; artifacts from high-contrast objects in the body such as bone or devices.
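Absorption values in CT are expressed on the Hounsfield scale; a short sketch of the standard definition follows (the attenuation coefficients used are typical textbook values quoted only for illustration):

```python
def hounsfield_units(mu_tissue: float, mu_water: float) -> float:
    """HU = 1000 * (mu_tissue - mu_water) / mu_water."""
    return 1000.0 * (mu_tissue - mu_water) / mu_water

mu_water = 0.19     # assumed linear attenuation coefficient of water (1/cm) at ~70 keV
print(hounsfield_units(mu_water, mu_water))     # water:  0 HU by definition
print(hounsfield_units(0.0, mu_water))          # air: -1000 HU
print(hounsfield_units(0.21, mu_water))         # soft tissue: roughly +100 HU
```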

Magnetic Resonance Imaging (MRI)
Source: high-intensity magnetic field; typically, helium-cooled superconducting magnets
are used today; non-ionizing; gradient coils and radiofrequency (RF) pulses are switched
on and off during imaging.
Detector: phased-array receiver coils capable of acquiring multiple channels of data in
parallel.
Image attributes: produces images of the body by utilizing the magnetic properties of
certain nuclei, predominantly hydrogen (H) in water and fat molecules; the response of
magnetized tissue when perturbed by an RF pulse varies between tissues and is different
for pathological tissue as compared to normal tissue.
Advantages: non-ionizing radiation (originally called nuclear magnetic resonance (NMR),
but because the word "nuclear" was associated with ionizing radiation the name was
changed to emphasize the modality's safety); can image in any plane; has excellent soft
tissue contrast detail; visualizes blood vessels without contrast; no bony artefact since
there is no signal from bone; particularly useful in neurological, cardiovascular,
musculoskeletal, and oncological imaging.
Disadvantages: high purchase and operating costs; lengthy scan time; more difficult for
some patients to tolerate; poor images of lung fields; inability to show calcification;
contraindicated in patients with pacemakers or metallic foreign bodies.


Module 4:

23. Explain MCSP and SCMP
In PACS of small scale, services such as data registration, receiving, compression,
storage are installed in a computer server. Each service is handled by a computer
processor.
In PACS of larger scale, two designs can be used: Multiple computer Single Processor
(MCSP) PACS & Single Computer Multiple Processor (SCMP) PACS.
MCSP is using separate computer server for each service.
SCMP is using a single computer with multiple processors and multi-tasking
computer operating system for handling all procedures.
MCSP PACS :
The newly incoming images are first registered in the database computer and
received on another computer during image archiving.
After the images are received, they are compressed and stored in the archive.
During image query, the user sends a query request to the database computer.
The PACS retrieves the required images according to the request.
The retrieved images are decompressed and then delivered to the client computer.
This design has higher power consumption and heat dissipation.
A network is required between each process.

SCMP PACS:
All PACS services, including registration, query, image receiving, compression,
decompression, storage and retrieval, are handled by different processors in a
time-sharing, multi-tasking computer.
During image archiving, the new incoming images are registered in the database through a
registration service.
The images are then received, compressed and stored in the archive by
different processors within the same computer.
During image query and retrieval, the request is handled by a database service.
After the location of the images is found, the retrieved images are decompressed and
delivered to the user.
It occupies less space.
No network is required between each process.



24. Implementation of Filmless Hospital

All PACS implementations should start with a business plan. This is the summation of many
smaller plans and also summarizes the global goal.
The plan should be segmented into smaller separate plans for cost justification, risk
assessment, capacity planning, and, finally, an implementation plan.

THE BUSINESS PLAN
A business plan for setting up a filmless hospital is required for a successful project.
The first thing in the plan is to define a clear objective for the project that is fully
understandable by the whole healthcare organization, from front-line staff to senior
management. The detailed business plan should include the following:

Cost Justification:
This covers the area of hard and soft cost reductions and revenue enhancement.
Hard cost reduction should include low-skill labour costs in the darkrooms, film
libraries, film management, and film costs.

Soft cost reduction will include decreased film waiting time for clinicians, no film
transportation is required, improved infection control as less physical interaction is
needed, and thus leads to increased productivity of staff.

Revenue enhancement can be experienced by setting up new services for the
radiologists and clinician as a profit sharing venture. Services such as providing
remote medical consultation for small hospitals are now possible.

Risk Assessment
This includes inventory checking and an assessment of the risks of filmless hospital
operation.
Evaluate the medical equipment or imaging modalities that need integration, including their
upgradability.
Evaluate personnel needs and technology education level of all users.
Compile a list of imaging equipment, computer equipment, network equipment,
vendors, and training capabilities.
Check with local networking and internet service providers regarding services available
in the area. .

Capacity Planning
Set up a methodology to evaluate the needs and resource requirements.
Assess and evaluate the existing resources and whether they can meet the current needs.
Determine which current resources can be utilized.
Assess the future needs.
Integrate available resources and present a plan that incorporates future resource needs.
Implementation Plan
Communications Plan
A successful PACS implementation necessitates strong collaboration between otherwise unrelated
departments and operational areas.
A communications plan should be part of the PACS project to ensure that information is conveyed
to all constituents.
Selling the concept and its benefits to the radiology department is important.

Implement PACS in Phases
Phased implementations allow the process to stop at any point for adjustments.
The completion of one phase does not mean that the next must follow immediately.
Phases may be months or even years apart.
Every phase incorporates new people and services that are likely to have different needs
than the others.

Contingency plan:
A well-developed contingency plan will mitigate disruptions to the hospital in the event
of a technology failure.
The following points should be addressed in the plan:
Carefully define potential problems and their symptoms.
The system should include self-monitoring tools that can examine system loads.
Protocols to disseminate the nature of the problem to the end users should be
developed in advance.
Everyone who interacts with the system (technologists, radiologists, supporting staff)
should understand their roles and actions in the event that the
contingency plan is executed.


Module 5:

25. Explain RIS with a suitable diagram (DEC11)
THE RADIOLOGY INFORMATION SYSTEM


The RIS is the nervous system of the digital department.
Every aspect of the digital department relies in some manner on the RIS. The RIS
drives the workflow of the information of the department.
It is responsible for scheduling orders, capturing relevant clinical information about
an exam and providing this clinical information only to areas of the department that
require it, preparing prior exams if needed, and providing the PACS with the
information it needs to perform its role.
Once an image is captured, the RIS and PACS work together to provide the
radiologist with the necessary information to interpret the exam and to deliver the
report to the clinicians.
In addition to the clinical functions of the RIS, the system manages billing for the
exams and provides the necessary data to support management reporting for the
department.
Scheduling is where the process begins. The scheduling step kicks off a number of
events within the RIS to prepare for an exam to be performed.
The process of scheduling an exam captures the appropriate clinical information to
determine the exam to be performed. It is also the point in the process at which the
patient demographics are captured.
The scheduling process is where a majority of the data errors occur within the
system. Input data errors at this point will for the most part eliminate any
operational efficiency gained by moving to the digital department.
The Web-based scheduling method is the most accurate because there is more
control over the incoming data, assuming there is a structured method of gathering
the required information.
The most beneficial processes are the acquisition of relevant prior exam information
and the validation of patient information. This information is used in the pre-fetching
of prior films, either by moving studies in the PACS from long-term storage to near-
line cache or by the creation of pick lists for the film library.
The RIS provides the technologist and the radiologist with relevant information for
performing the exam. The technologist interacts with the RIS either by receiving a
paper request or, in the digital environment, by checking an electronic worklist that
provides the details of the exam, including the protocol assigned by the radiologist.
When the exam is complete and the images are ready for interpretation, the RIS and
PACS interact to validate that the images acquired match the order information.
Once the images are determined to be valid, the exam data are routed to populate
worklists for the appropriate radiology specialty for interpretation.
The report in the digital department is captured by speech recognition, and, after it
is signed by the radiologist, it is delivered to the appropriate destinations.
These are primarily the requesting clinician and the billing office; delivery methods
may include fax, secure e-mail, and, of course, regular mail.
The RIS also serves as an archive for all the exam data, including the report.
Thus, the RIS is the backbone for almost all the clinical operations of the department.


26. Enumerate the various steps, from a radiologist's perspective, to extract clinically
useful information from the medical images. (DEC10, MAY11)
The retrieval of images to the workstation is only the first of several workflow steps
in the interpretation of an imaging study.
To extract as much clinically useful information as possible from the images, a
number of steps may be helpful:
Images must be optimized with regard to window/level (brightness/ contrast)
settings.
Continuous dynamic adjustment of window/level settings is often necessary for a
conventional radiograph. Alternatively, certain presets may be used.
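As a minimal sketch of the window/level mapping described above (NumPy-based; the window centre and width values are arbitrary examples, not clinical presets):

```python
import numpy as np

def apply_window(image, center, width):
    """Map raw pixel values to 8-bit display values for a given window/level setting."""
    low, high = center - width / 2.0, center + width / 2.0
    windowed = np.clip(image, low, high)          # restrict to the chosen contrast range
    return ((windowed - low) / (high - low) * 255).astype(np.uint8)

raw = np.random.randint(0, 4096, size=(256, 256))     # synthetic 12-bit image
narrow = apply_window(raw, center=1024, width=400)    # high-contrast setting (example values)
wide = apply_window(raw, center=2048, width=4000)     # wide-latitude setting (example values)
```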

The method of image display and navigation must be chosen.
The simplest of these is static softcopy interpretation, with images displayed on
workstations.
Frame mode is an example of this type of static mode in which images are displayed
in a matrix similar to that typically printed to film.
Stack mode displays images sequentially in a single window in a movie- or cine-like
format.
Linked stack mode is a further enhancement that synchronizes multiple stacked
image sets within a single examination.

Images and portions of images can be zoomed or magnified.

Images can be viewed using MIP, which has been documented to be useful in the
evaluation of blood vessels and lung nodules.

Thin-section images can be combined arithmetically to create a user-selected slice
thickness that is a multiple of that reconstructed by the acquisition device.
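A short NumPy sketch of the two steps just mentioned, MIP and thick-slab averaging, using a synthetic volume in place of real CT data:

```python
import numpy as np

# Hypothetical stack of 60 thin CT sections, 256 x 256 pixels each.
volume = np.random.randint(-1000, 2000, size=(60, 256, 256)).astype(np.int16)

# Maximum intensity projection along the slice axis (useful for vessels, lung nodules).
mip = volume.max(axis=0)

# Thick-slab averaging: combine groups of 5 thin sections into 12 thicker slices.
slab = volume.reshape(12, 5, 256, 256).mean(axis=1)
```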

Images can be arranged in a logical format to make it as easy as possible to compare
various sequences and to compare a current study with comparable images from a
previous study.

Images can also be enhanced with tools such as edge enhancement, smoothing or
interpolation algorithms that smooth the image to give it a less boxy or pixelated
appearance, or tools that enhance the ability to display a wide range of contrast on
an 8-bit monitor or film.

Images can also be processed using more sophisticated techniques to achieve spatial
frequency and image contrast optimization.

Additional tools can be implemented to aid in decision support, including computer-
aided detection tools that have been successfully applied in mammography and in
the detection of lung nodules on CT or conventional chest radiography.
Each of these steps depends to some degree on the preferences of the radiologist
and on the demands of the specific study and modality.
For many radiologists, both experienced and in training, one of the greatest current
challenges is in identifying these preferences as the range of choices expands and
technology evolves.
The intelligent workstation and the PACS that supports it not only must be ready
to customize different combinations of workstation tools to suit each user but must
be configured to seamlessly integrate new software that enhances the interpretation
process.
Current PACS workstations vary tremendously in the success of their graphical user
interface and in the number of steps required to utilize these and other tools.
Radiologists have a tendency to use tools such as image zoom and magnification less
frequently as they gain additional experience with the workstation. However, even
experienced radiologists utilize the window/level adjustments in the majority of
cases.


27. What are the different elements of PACS? (May11)
ELEMENTS OF A PACS
Following are the basic elements of a PACS:
Image acquisition
PACS core
Interpretation workstations


IMAGE ACQUISITION
Image acquisition is the first point of image data entry into a PACS.
There are two methods for accomplishing this: direct capture and frame grabbing.
Direct digital interfaces allow capture and transmission of image data from the
modality at the full spatial resolution and bit depth or gray scale inherent in the
modality.
Analog (video) frame grabbers digitize the video signal voltage output going to an
image display, such as a scanner console monitor.
In the frame-grabbing method, as in printing an image to film, the image quality is
limited by the process to only 8 bits (or 256 gray values). This may not allow viewing
in all the appropriate clinical windows and levels or contrast and brightness settings.
Direct capture of the digital data will allow the viewer to dynamically window and
level through each of these settings on the fly (in real time) at the softcopy display
station.
Direct capture of digital data from the inherently digital modalities is the preferred
method of acquisition.
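A small sketch of why this matters, assuming a synthetic 12-bit image: direct capture keeps all 4096 gray values, while a frame-grabbed copy keeps only the 256 values of an 8-bit rendering and cannot be re-windowed afterwards:

```python
import numpy as np

direct = np.random.randint(0, 4096, size=(512, 512)).astype(np.uint16)  # 12-bit direct capture
frame_grab = (direct >> 4).astype(np.uint8)                             # only 8 bits survive the video path

print(len(np.unique(direct)), "gray values vs", len(np.unique(frame_grab)))
```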
Digital acquisition of images already on film can be accomplished using a variety of
image digitization devices or film scanners.
FILM DIGITIZERS
Film digitizers convert the continuous optical density values on film into a digital
image by sampling at discrete, evenly spaced locations and quantizing the
transmitted light from a scan of the film into digital numbers.
A commonly used film scanner for PACS is the CCD or flat-bed scanner.
The laser scanner or laser film digitizer uses either a helium-neon (HeNe) gas laser or
a solid-state diode laser source. Digital radiography has more efficient detectors, but
the cost is high.
A CR system is easy to use and offers straightforward integration.
DR has potential for excellent image quality available immediately at the time of
exposure.
PACS CORE
Once the images have been acquired, they need to be managed appropriately to
ensure that storage, retrieval, and delivery all occur without error.
The PACS core consists of the following:
Database manager (e.g., Oracle, MS-SQL, Sybase)
Image archive (e.g., RAID, Jukebox)
Workflow/control software (image manager)
RIS interface
The database manager is the heart of the PACS. The relationship between the image
and the storage location is stored and managed within the database along with all
the relevant data required to retrieve the image.
The database manager must also be able to retrieve images for a given patient's
current or prior exams when queried by the RIS or other outside systems. The types
of queries that the database responds to are defined by the Digital Imaging and
Communications in Medicine (DICOM) standard.
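To illustrate the kind of query the database manager answers, here is a minimal DICOM C-FIND study-level identifier built with the pydicom library (the patient ID is invented; actually sending the query would additionally require a DICOM networking toolkit and the archive's address):

```python
from pydicom.dataset import Dataset

query = Dataset()
query.QueryRetrieveLevel = "STUDY"
query.PatientID = "HOSP123456"   # hypothetical matching key
query.StudyDate = ""             # empty value = "return this attribute"
query.StudyInstanceUID = ""
query.ModalitiesInStudy = ""

print(query)                     # pydicom lists the identifier element by element
```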
The image archive works in conjunction with the database manager by storing the
images in a highly available system to provide online images for nearly instant
retrieval and long-term storage to meet retention regulations and disaster recovery.
The first tier consists of a redundant array of inexpensive disks (RAID), where the images are
stored on hard disk and are readily available when the database manager makes a
request for the images to be distributed. The second tier of storage is referred to as
long-term storage.
Image management (workflow control) is the role of the core that is the most visible
and drives the functionality of the PACS.
The image management process is where the data from the RIS and the data from
the core meet and are managed in a number of different ways.
Image management/ workflow of the PACS determines where and how images are
routed throughout the system to ensure they are stored appropriately once received
from the imaging devices. Image management is also responsible for the routing of
exams to the appropriate location. In addition to managing the storage and
distribution of images, the image manager is also the area within the PACS where
the system administrator has tools to correct for system and data errors to ensure
data integrity.

The RIS interface is where the two principal computing systems within the digital
department come together. This interface is responsible for passing the appropriate
scheduling and exam information to the core to facilitate the pre-fetching of prior
exams, the validation of the demographic/ exam information stored within the
image prior to storage in the core and subsequent distribution.
Depending on the configuration and architecture of the PACS-RIS relationship, this
interface is managed with or without a broker.

INTERPRETATION WORKSTATIONS
The workstation is where the physician and clinician see the results of the capture of
the relevant exam information within the RIS and the images acquired and stored
within the PACS.
There are two general classifications of workstations: diagnostic and review. The
distinguishing characteristics between them are resolution and functionality. The
diagnostic workstation is the type that is used by the radiologist to perform primary
interpretation of the exam. These workstations are the highest in resolution and
brightness and contain the highest level of functionality
The next type of workstation is the clinical review workstation. The clinical review
workstation allows referring clinicians to have direct access to the images. The
image quality is sufficient for clinical review, allowing clinicians to
view the images along with the radiology report and possibly to share those
results.

28. Draw & explain integration of information transfer between HIS/RIS/PACS (DEC10,
MAY11)


PACS is an imaging management system that requires relevant data from other
medical information systems for effective operation.
Among these systems, data from the hospital information system (HIS) and the
radiology information system (RIS) are of most importance.
Many functions in the PACS server and archive server rely on data extracted from both
the HIS and RIS.


Integration of HIS, RIS, PACS


There are three methods of transmitting data between information systems.
1. Workstation emulation:
In this method, a PACS workstation can be connected to the RIS with a simple computer program that
emulates a RIS workstation. From the PACS workstation, RIS functions such as scheduling a new
exam, updating patient demographics, recording film movement, and viewing diagnostic reports
can be performed.
2. Database-to-database transfer:
This allows two or more networked information systems to share a subset of data by storing
them in a common local area. For example, ADT (admission, discharge, transfer) data from the HIS can be
reformatted to the HL7 standard and broadcast periodically to certain local databases in the RIS
(a sketch of such a message appears after this list).
3. Interface engine:
This provides a single interface and language to access distributed data in networked,
heterogeneous information systems.
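As mentioned under method 2, here is a purely illustrative sketch of an HL7 v2 ADT message and how its pipe-delimited segments can be parsed (all field values are invented):

```python
# Illustrative HL7 v2 ADT (admission/discharge/transfer) message; values are invented.
hl7_adt = (
    "MSH|^~\\&|HIS|GENHOSP|RIS|RADIOLOGY|202301011200||ADT^A04|MSG0001|P|2.3\r"
    "PID|1||123456^^^GENHOSP^MR||DOE^JOHN||19700101|M\r"
)

segments = [line.split("|") for line in hl7_adt.strip().split("\r")]
pid = next(seg for seg in segments if seg[0] == "PID")
print("Patient name field (PID-5):", pid[5])   # DOE^JOHN
```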

In the hospital environment, interfacing is done for:
1. Diagnostic process:
RIS & HIS information such as clinical diagnosis, radiological reports, and patient history are
necessary at PACS workstation to complement the images from examination under
consideration.
2. PACS image management:
Some information provided by RIS can be integrated into PACS image management to
optimize grouping and routing of image data on the network to the requesting locations.
3. RIS administration:
PACS can provide image archive status and image data file information to RIS. RIS
administration would also benefit from the HIS by gaining knowledge about patient ADT.




While interfacing HIS, RIS, PACS, each system remains unchanged.
Each system is extended in both hardware and software to allow communication
with the other systems.
Only data are shared. Functions remain local.



Information transfer from HIS to RIS and from RIS to PACS.

Information transfer between HIS, RIS AND PACS triggered by PACS server


29. Explain how PIR(patient information reconciliation) can be achieved (DEC10, DEC11)

Patient information reconciliation (PIR) coordinates reconciliation of the patient
record when images are acquired for unidentified (e.g., trauma) or misidentified
patients.

The IHE patient information reconciliation (PIR) integration profile identifies the
transactions necessary to allow smooth operations when information is incomplete.
A common scenario is that of the John Doe who arrives at an emergency room.
Images are often obtained before the patient is identified and registered and often
before orders are placed. An analogous situation occurs when one or more of the
information systems are unavailable.
PIR describes the transactions necessary to reconcile the systems once the
information is available.
IHE patient information reconciliation profile is used as an example to explain how
workflow would benefit from the integration of HIS, RIS, PACS.
The integration profile extends the scheduled workflow profile by providing the
means to match images acquired for an unidentified patient with the patient's
registration and order history.
For example, in a trauma case in which the patient ID is not known, this allows
subsequent reconciliation of the patient record with images acquired before the
patient's identity could be determined.

PIR extends Scheduled Workflow to handle: unidentified/emergency patients,
demographic information updates, patient name changes (marriage, etc.), correction
of mistakes, and ID space mergers. Such changes are reliably propagated to all affected
systems, which update all affected data.
The result is a complete patient record. In the example of the trauma case, images
can be acquired (either without a prior registration or under a generic registration)
and read, and then later merged with the patient's record when the patient's identity is
determined.

Benefits:

Normal workflow proceeds even when the patient is unknown.
The patient record reconciles itself into a whole when the patient is identified.
Eliminates confusion over which system should reconcile what, and where.
Reduction of errors; reduces incorrectly identified or lost studies.
More complete medical record.
Supports maintenance of demographics to maintain database correspondence.
Flexible trauma workflow.

The PACS is notified of demographic changes, the RIS backfills orders, and the modality
does not have to modify anything.

Systems affected:
Enterprise-wide information systems that manage patient registration and services
ordering (ADT/registration system, HIS)
Radiology departmental information systems that manage department scheduling (RIS) and
image management/archiving (PACS)
Acquisition modalities



Module 6:

30. With the help of a flowchart, compare film-based radiology with filmless radiology using PACS
(MAY11, DEC11)
BEFORE PACS
Before the transition to the use of PACS and the HIS-RIS, radiology technologists
(RTs) had a number of responsibilities that overlapped with the clerical and film
library staff.
Technologists routinely performed a large number of manual processes that added
to the number of workflow steps in this section of the department.
Each RT would perform thousands of individual steps in a given workday.
Technologists were also responsible for re-entry of patient information from
physician order forms into computers associated with the various imaging
modalities.
The result was a relatively high rate (~15%) of errors, including spelling of names and
patient identification numbers, which could result in additional time delays in
locating and correctly identifying studies.
AFTER PACS
The elimination of film and the transition to the PACS and HIS-RIS resulted in
extraordinary improvements in the workflow of the technologists.
There was a 40% increase in technologist productivity for general radiography.
Filmless operation resulted in the elimination of a large number of steps previously
used in the CT suite.
Eliminated steps included those related to the creation of multiple versions of
images in different window and level settings and those related to the handling and
distribution of films.
The elimination of these steps resulted in a 45% reduction in the amount of time
required for a CT technologist to perform an examination.
In addition, a dramatic reduction in examination retake rates was noted for general
radiographic studies performed in the department.
One of the benefits of integration of an imaging modality such as a CT scanner, HIS-
RIS, and a PACS is the ability to reduce workflow steps and improve accuracy using
the modality worklist feature.
Using this feature, the imaging modality can communicate with a PACS or HIS-RIS to
obtain the list of examinations to be performed and generate a worklist that can be
displayed on the technologist operator console.
The technologist can then easily select a specific examination or combination of
studies, speeding entry of patient information and increasing the accuracy of data.
Incorporation of this feature reduced modality transmission error rates below 1.5%
in comparison with the approximately 15% rate of patient entry errors by
technologists performing manual entry.
Technologist productivity increased by 40% after the transition to the use of
computed radiography and PACS.


Module 7:


31. Short note on reporting paradigm (Dec 10)

Reporting Paradigm on PACS


Speech recognition
The interactive workstation has facilitated one of the most obvious changes in
radiologist workflow over the last decade: the way in which radiology reports are
generated, reviewed, and relayed to the referring physician and to the medical
record.

Speech recognition has become an integral part of the radiology workflow, with
many institutions eliminating medical transcription positions entirely.

Speech recognition affects every imaging specialist, across modalities and
subspecialisations, and it has often been a hard sell to the radiologists who would
benefit most from its implementation.

The literature on speech recognition is large in volume, with many reports of
benefits in decreased needs for auxiliary staff, money savings, and time savings in
turning around reports for delivery to physicians who ordered studies.

The process of acceptance of speech recognition technologies by radiologists has not
been smooth. In part, this was because it was unfamiliar.
More important, many radiologists perceived speech recognition reporting to be
more difficult, more time consuming, and part of a slippery slope that seemed to be
taking clerical tasks out of the hands of paid assistants and turning them into routine
parts of the professional interpretation and reporting process.

Efforts by vendors to simplify enrolment (the process by which an individual imprints
his or her speech patterns on the system), increased use of report templates, well-
executed training, and the introduction of innovative timesaving features have
served to win over many who originally opposed the introduction of speech
recognition.


The ability to correct and edit reports at the time of dictation or at a subsequent time of
choice, access to the report through the PACS from anywhere in the medical
enterprise, and a tendency toward the production of shorter, more organized
reports have strengthened support for speech recognition among radiologists.
For those who use speech recognition in combination with structured reporting
technologies, real-time savings and workflow advantages are being realized.



STRUCTURED REPORTING

Today, templates and macros integrated into the electronic radiology reporting
process allow each radiologist to customize routine reporting and enter large
sections of reports using only a few keystrokes.

Moreover, templates can be used in batch mode for high-volume study reporting.

Structured reporting is now being combined with other workstation interpretation
tools to yield what some have called the radiology report of the future: a multimedia
package that includes not only the traditional report but embedded annotated
images, lists of and links to additional informational and visual resources, and cues
and guidance provided by computer-aided decision support strategies.

The possibility of entirely transforming the radiology reporting process as
well as elevating the level of content in reports is truly exciting.


Module 8:

32. Explain the application of image compression to the images of (a) mammography
and (b) CR/DR (DEC10)

COMPRESSION APPLICATIONS IN MEDICAL IMAGING
The goal of medical image compression is to reduce the amount of data that must be
stored or transmitted. Depending on the application, either lossy or lossless compression
may be the best choice.
MAMMOGRAPHY
Mammography is an area that has received much attention from the compression
industry because the data sets are large and the exam volume is high.
Because there is essentially one disease of concern in the breast (breast cancer),
analysis of mammogram compression is easier.
However, breast cancer can have any of three types of appearance.
The first is that of a small area of tiny calcifications, also known as
microcalcifications.
The second appearance is of a mass, which will often have irregular borders.
The third is distortion of the architecture of the tissue within the breast.
Because these 3 appearances can be fairly easily characterized, the analysis of
compression effects is simpler than for other image types.
Early mammography compression studies used digitized films and either JPEG or a
wavelet variant. Because early algorithms tended to lose high-frequency information
first, most attention was paid to the effect on microcalcifications.
Studies from the 1990s showed that JPEG compression could be applied at levels up
to 15 : 1 without producing perceptible changes in the images, as assessed by image-
processing experts.
Computer-aided diagnosis (CAD) is an elegant way to avoid the variability inherent in
human observers, and to evaluate the large number of cases required to detect
small differences.
Digital detectors are beginning to appear on the market. It is possible that the image
properties are different, and so it is necessary to perform separate studies on these
images to determine the correct compression ratio.
But at the same time, compression algorithms, such as JPEG2000, are advancing.
JPEG2000 is receiving much attention because it has been adopted into the DICOM
standard and because it holds promise for better compression performance than
standard JPEG.
One recent study using an alternative forced-choice method found that digital
mammograms compressed with JPEG2000 at ratios up to 20 : 1 were
indistinguishable from the originals, which is similar to results for digitized films.
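A small sketch of how a ratio such as 15:1 or 20:1 is computed in practice, assuming hypothetical file names for the uncompressed image and its compressed version:

```python
import os

original = "mammogram_uncompressed.dcm"   # hypothetical uncompressed image file
compressed = "mammogram_jpeg2000.dcm"     # hypothetical JPEG2000-compressed version

ratio = os.path.getsize(original) / os.path.getsize(compressed)
print(f"Compression ratio is approximately {ratio:.0f}:1")
```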

COMPUTED RADIOGRAPHY/DIGITAL RADIOGRAPHY
Compression of radiographic images also has a long history, beginning with study of
digitized radiographs but now focusing on computed radiography (CR) and digital
radiography (DR) images.
It is encouraging that the results seem similar: digitized radiographs seem to be
about as compressible as CR and DR.
For chest radiographs, ratios in the range of 20 : 1 seem to produce no visible or
diagnostic degradation. Slone found that with very close inspection/ magnification,
images compressed at 10 : 1 using JPEG are still indistinguishable from originals, and
at normal viewing conditions, 20 : 1 is equivalent.
Compression at ratios of up to 32 : 1 (either JPEG or wavelet) did not degrade
detection of simulated nodules on chest phantom images.
It has been found that for either nodule diagnosis or interstitial disease detection, wavelet
compression of digitized chest X-rays (CXRs) at up to 40 : 1 was not significantly
different from the originals for nodules or for interstitial disease, and performance at 10 : 1
showed a trend to be superior to the original images.
There are fewer studies of musculoskeletal radiographs, but those that exist also show
that compression in the range of 20 : 1 does not alter visual assessment.


module 9:

33. Explain the centralized model of PACS (circa model 1995) (May11)

CENTRALIZED MODEL
As PACS systems have matured, the goal has become to make all images readily
available to all users regardless of what workstation configuration is available
(uncontrolled hardware).
In a centralized system (Internet standards-based), there are two distinct layers: the
core and the workstation.
In the figure, A through F are all considered part of the core, while G and H are
workstations.
The client workstations are based on the operating system (file sharing) and on Web
browsers (HTML).
One of the crucial elements for a centralized PACS is adequate network speed and
bandwidth to accommodate the movement of images quickly from one location to
another.
Due to advancements in network technologies, there have been substantial
performance gains and significant reductions in cost.
The same can be said of storage.
The HL7 process is now considered an integral part of the PACS; it is not referred to
as a separate broker process.
The imported RIS data is then used for modality worklists and exam validation.
After the images are acquired at the modality, the images are transferred via DICOM
to the PACS DICOM/image server (C in Figure).
Once the data are received and validated by the DICOM/image server, the images
are sent to the archive, where multiple copies, both lossless and lossy, are stored (D
and E in Figure).
At that point the images are available via the image distribution servers (F in Figure)
to the various types of PACS workstations and enterprise distribution systems (G and
H in Figure).
Integration of PACS into other hospital applications becomes much simpler when a
standards-based PACS is installed.
In this model, no autorouting or pre-fetching of images is required.
Another significant benefit is that the scalability of the system is such that growth (in
exam volume and new modalities) and new technologies can be accommodated with
little or no impact on the radiologist or clinician.



34. Short note on DICOM

DICOM (Digital Imaging and Communications in Medicine) is a standard for handling,
storing, printing, and transmitting information in medical imaging. It includes a file
format definition and a network communications protocol.
The communication protocol is an application protocol that uses TCP/IP to
communicate between systems.
DICOM files can be exchanged between two entities that are capable of receiving
image and patient data in DICOM format.

DICOM enables the integration of scanners, servers, workstations, printers, and
network hardware from multiple manufacturers into a picture archiving and
communication system (PACS).

DICOM differs from some, but not all, data formats in that it groups information into
data sets.
That means that a file of a chest x-ray image, for example, actually contains the
patient ID within the file, so that the image can never be separated from this
information by mistake.
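For example, using the pydicom library, the patient identification travels inside the image file itself (the file name below is hypothetical):

```python
import pydicom

ds = pydicom.dcmread("chest_xray.dcm")   # hypothetical DICOM file
print(ds.PatientID, ds.PatientName, ds.Modality)

pixels = ds.pixel_array                  # the image matrix itself (requires NumPy)
print(pixels.shape, ds.BitsStored, "bits stored")
```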

The DICOM Store service is used to send images or other persistent objects
(structured reports, etc.) to a PACS or workstation.


The DICOM storage commitment service is used to confirm that an image has been
permanently stored by a device (either on redundant disks or on backup media, e.g.
burnt to a CD).

The DICOM Query/Retrieve service enables a workstation to find lists of images or other such
objects and then retrieve them from a PACS.
The DICOM Modality Worklist service enables a piece of imaging equipment (a modality) to obtain
details of patients and scheduled examinations electronically, avoiding the need to type such
information multiple times (and the mistakes caused by retyping).
The DICOM Printing service is used to send images to a DICOM printer, normally to
print an "X-ray".

DICOM Standard Part 15 provides a standardized method for secure communication
and digital signatures.
Four security profiles have been added to the DICOM standard: secure use profiles,
secure transport connection profiles, digital signature profiles, and media storage
security profiles.


35. What is HIPAA? Explain its impact on PACS security. (MAY11, DEC11)

HIPAA AND ITS IMPACTS ON PACS SECURITY


HIPAA was put in place by Congress in 1996 and became a formal compliance
document in April 2003. It provides a conceptual framework for health care data
security and integrity and sets out strict and significant federal penalties for non-
compliance.

The term HIPAA compliant can only refer to a company, institution, or hospital.
Policies on patient privacy must be implemented institution wide.
Software or hardware implementation for image data security by itself is not
sufficient.
Communication of DICOM images in a PACS environment is only a part of
information system in a hospital.
One cannot just implement the image security using DICOM or the image-embedded
DE method and assume that the PACS is HIPAA compliant.
All other security measures, such as user authorization with passwords, user training,
physical access constraints, and auditing, are as important as the secure
communication.
Image security which provides a means for protecting the image and corresponding
patient information when exchanging this information among devices and health
care providers, is definitely a critical and essential part of the provisions that can be
used to support institution-wide compliance with HIPAA privacy and security
regulations.
There are currently four key areas:
Electronic transactions and code sets (Compliance date: October 16, 2002)
Privacy (Compliance date: April 14, 2003)
Unique identifiers
Security

HIPAA mandates the use of unique identifiers for providers, health plans, employers,
and individuals receiving health care services.
The transactions, code sets, and unique identifiers are mainly a concern for users and
manufacturers of hospital information systems and, to a much lesser extent, for
radiology information system (RIS) users and manufacturers, whereas they have little or
no consequence for users and manufacturers of PACS.
Privacy and security regulations will have an impact on all HIS, RIS, and PACS users
and manufacturers.
Although HIPAA compliance is an institution-wide implementation, there is great
interest in making PACS and its applications HIPAA supportive.
In addition to those, the basic requirement for a PACS that will help a hospital to
comply with the HIPAA requirement is the ability to generate a list of information on
demand, related to the access of clinical information for a specific patient.
From an application point of view, there should be a log mechanism to keep track of
the access information such as
Identification of the person who accessed the data
Date and time when data have been accessed
Type of access (create, read, modify, delete)
Status of access (success or failure)
Identification of the data
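A minimal sketch of such an access-log record, using a simple Python structure whose field names mirror the list above (illustrative only, not a mandated schema):

```python
from dataclasses import dataclass, asdict
from datetime import datetime

@dataclass
class AccessLogEntry:
    user_id: str          # who accessed the data
    timestamp: datetime   # when the data were accessed
    action: str           # create / read / modify / delete
    status: str           # success or failure
    data_id: str          # which patient data or study was accessed

entry = AccessLogEntry("radiologist_42", datetime.now(), "read", "success", "STUDY-0001")
print(asdict(entry))
```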

A PACS should be designed in such a way that a single data security server can
generate the HIPAA information without the need to interrogate other servers
or workstations.
The PACS alone cannot be claimed to be HIPAA compliant. Secure communication of
images using DE and DICOM security standards and continuous PACS monitoring
provide HIPAA support functionalities that are indispensable for hospital-wide
HIPAA compliance.


Module 10:
36. Explain the load-balancing model and the clustered failover model of PACS with suitable
diagrams. (DEC11)
LOAD BALANCING
The load-balancing model is based on the theory of distributing tasks evenly so that
no one system is overwhelmed; each server assigned to a task shares that task
evenly.
In PACS this technique works well with DICOM image servers and image distribution
servers. For example, if a PACS had two DICOM image servers and two image
distribution servers, the request to send images into the PACS would be intelligently
routed to the DICOM server with the least amount of activity.
Requests by the PACS workstations for images would be satisfied by the distribution
server that is the least busy.
The intelligence required to load balance is not provided by the PACS vendors but is
provided by third-party technology vendors.
Load balancing can be implemented at the hardware level (through a content switch, a
device that monitors network/server activity and decides which server
should service a given request) or at the software level (through Microsoft software;
the software runs on the servers instead of on a dedicated device).
It is worth noting that database servers are costly to load balance (See Clustered
Failover following this section) because transactions need to be coordinated among
all participating servers.

Figure illustrates a typical load-balancing model for PACS, using the Web as a means of
image distribution. Following the diagram, both DICOM image servers forward their
respective information to the database server, while each Web server makes respective calls
to the database server to fulfill the client requests.
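A toy sketch of the "least busy server" decision described above (server names and load figures are invented; in practice this logic lives in a content switch or load-balancing software, not in the PACS application itself):

```python
# Route each new request to the image distribution server with the fewest active requests.
active_requests = {"webserver1": 12, "webserver2": 7}   # invented load figures

def least_busy(servers):
    return min(servers, key=servers.get)

target = least_busy(active_requests)
active_requests[target] += 1            # the chosen server takes on the new request
print("route request to", target)       # -> webserver2
```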

CLUSTERED FAILOVER
The clustered failover model provides a high level of hardware redundancy
protection for PACS servers (this approach is often referred to as active-passive
clustering). Clustered failover means that one server controls the task being
performed while the second server passively waits for the first one to fail. In this
model, the DICOM image servers are clustered together in pairs, as are the image
distribution servers, providing simple failover redundancy.
The failover between servers is managed through software, either with an
enterprise-class operating system such as UNIX/Solaris, Windows 2000 Advanced
Server, or Windows Server 2003 Enterprise Edition, or through commercially available
cross-platform products such as Veritas. A caveat of this approach is that if the PACS
software caused the failure of the first server, the second server will also fail because
it is running the same software as the first.
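A schematic sketch of the active-passive behaviour, with a hypothetical send_to() placeholder standing in for a real DICOM store operation; the client simply retries against the standby when the primary is unreachable:

```python
PRIMARY, STANDBY = "dicom-server-a", "dicom-server-b"   # hypothetical host names

def send_to(server, image):
    # Placeholder for a real store operation; here the primary is simulated as down.
    if server == PRIMARY:
        raise ConnectionError(f"{server} unreachable")
    return f"{image} stored on {server}"

def store_with_failover(image):
    try:
        return send_to(PRIMARY, image)
    except ConnectionError:
        # Caveat from the text: if the PACS software itself caused the failure,
        # the standby runs the same software and may fail in the same way.
        return send_to(STANDBY, image)

print(store_with_failover("img001.dcm"))   # -> img001.dcm stored on dicom-server-b
```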
It is also possible to combine techniques (often called active-active clustering). For
example, if the database (IBM DB2, Oracle, or Microsoft SQL Server) were clustered
and load balanced, then during normal operations all data requests would be evenly
distributed between both physical servers.
If one failed, all operations would be funneled to the remaining server.
Figure shows a clustered failover (load-balanced) model for PACS using Windows
2000 Advanced Server as the operating system.


This approach is usually very costly and offers little advantage over active-passive
clustering.


Module 11:

37. Explain the concept of RAID storage system (DEC10, MAY11, DEC11)
RAID is a concept for implementing a high-performance, large-capacity storage solution.
It allows multiple hard disks to work together and appear as a single storage unit to the
computer.
Several RAID configurations are available, providing different combinations of redundancy
and performance characteristics.
A RAID controller is used.
The RAID controller manages the hard drives directly and provides blocks of storage to the
operating system.
RAID 0
It implements striping. Data is broken into blocks and each block is written to a
separate drive.
The information is spread across all the drives.
However, if one drive fails, the information is lost.
It requires a minimum of 2 drives.
Write and read operations are faster. It is easy to design and implement.
RAID 1
It implements mirroring.
The data is duplicated, i.e., the data is copied onto a second drive.
It requires a minimum of 2 drives.
If one drive fails, the information can be recovered.
RAID 10
It combines RAID 1 and RAID 0: data is striped across mirrored pairs of drives,
which requires a minimum of 4 drives.
RAID 3
It implements byte-level striping with parity.
Byte-level striping means the 1st byte of data is sent to the 1st drive, the 2nd byte
to the 2nd drive, and so on.
Data to be written is divided into stripes and stripe parity is calculated for every
write operation. The stripe parity is stored on a separate parity disk.
Parity is extra information, calculated and stored such that if one
drive fails, the original data can be reconstructed.
It gives better disk usage.
Read and write operations are faster.
RAID 4
It implements block-level striping with parity.
Data is divided into blocks. Block parity is calculated and stored on another disk.
It can sustain one disk failure.

RAID 5
RAID 5 is a variant of RAID 4 that also calculates parity at the block level, but the
parity is distributed evenly across the drives.
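A small sketch of the XOR parity idea behind RAID 3/4/5, assuming a stripe of three equal-sized data blocks; any one lost block can be rebuilt from the surviving blocks plus the parity:

```python
from functools import reduce

def xor_blocks(blocks):
    """Byte-wise XOR of equal-length blocks."""
    return bytes(reduce(lambda a, b: a ^ b, group) for group in zip(*blocks))

stripe = [b"AAAA", b"BBBB", b"CCCC"]   # data blocks spread over three data drives
parity = xor_blocks(stripe)            # kept on the parity drive (RAID 3/4) or rotated (RAID 5)

# Suppose the drive holding the second block fails: rebuild it from the rest plus parity.
rebuilt = xor_blocks([stripe[0], stripe[2], parity])
assert rebuilt == stripe[1]
```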



Module 13 :
38. Short note on applications of teleradiology (May11)
APPLICATIONS OF TELERADIOLOGY
On-call coverage: emergency/24-hour, hospital to home, inter- and intra-institutional.
Primary interpretations: freestanding imaging centers, rural hospitals and clinics, imaging
centers within regional delivery systems, nursing homes and other special care facilities.
Reverse teleradiology.
Second opinions/consultations: access to subspecialty expertise at academic medical centers,
domestic and international consultations.
Image processing.
Utilization management.
Quality assurance.
Research: image data collection and management, image analysis.
Teaching files.
Case presentations.

On-call applications are proving extremely valuable in the clinical practice of
radiology.
Radiologists are able to provide more rapid consultations than would be possible if
physical travel to the hospital were required. Radiologists can cover multiple institutions
simultaneously, and subspecialists within a group can provide on-call coverage more
flexibly.
Many teleradiology companies have formed in recent years that offer emergency
radiology coverage during overnight hours and weekends. These services can help
radiologists face the challenge of meeting ever-increasing on-call demands in the setting of
a relative shortage of radiologists.
As round-the-clock service is impractical for solo practitioners or small groups,
Teleradiology offers the opportunity for contemporaneous interpretation without having to
be on call an undue or impractical percentage of the time.
Another application of teleradiology is coverage for freestanding imaging centers,
outpatient clinics, nursing homes, and smaller hospitals. In these applications, the common
themes are improved coverage, improved access to subspecialist radiologists, and lower
cost of service.
Teleradiology is an efficient way for groups of radiologists to work with these
imaging centers to expand their practices and benefit from the growth in demand for
imaging studies without unduly disrupting the logistics of their practices, especially in the
case of hospital-based radiology groups.
Reverse teleradiology involves sending cases from larger centers to smaller centers.
In some situations, small departments require a radiologist to do procedures or to
meet requirements for direct supervision but not generate enough work to keep them busy
or justify their cost.
Teleradiology also provides access to subspecialty expertise at academic medical
centers, or to second opinions and consultations from larger radiology groups.


Module 14:
39. Short note on legal and socio-economic issues affecting teleradiology (DEC11)
LEGAL AND SOCIOECONOMIC ISSUES
Here is a list of legal and socioeconomic issues affecting telemedicine.
1. Medical licensure and credentialing
2. Malpractice insurance coverage
3. Jurisdictional control over malpractice suits
4. Confidentiality of medical records
5. Physician-patient relationship
6. Technical and clinical practice standards.
7. Reimbursement by third parties
8. Turf issues among radiologists

LICENSURE AND CREDENTIALING
(Licensure means a restricted practice requiring a license; credentialing is the process of
establishing the qualifications of licensed professionals.)

A conflict between, on the one hand, the ability of teleradiology and other
telemedicine services to transcend geographic barriers technologically and, on the
other, the legal responsibilities of medical licensing authorities within individual
states has increased.
In addition to licensure, most hospitals require teleradiology providers to apply for
staff credentials.

MALPRACTICE INSURANCE COVERAGE
Radiologists providing teleradiology services must have malpractice insurance that
will extend coverage to lawsuits brought in all of the states in which they are
providing coverage.
It is an established legal principle that the state in which the injury occurs and that
has the greatest connection to the injury possesses jurisdiction over the lawsuit.
Unfortunately, many malpractice insurers have regional limitations on their coverage.
These limitations have been set up because of the difficulty of defending a lawsuit in a remote
location where the insurance company does not have relationships with malpractice
defense attorneys.
In addition, the current malpractice climate has turned many regions into very
difficult malpractice environments that carriers are reluctant to become involved in.
As the field of teleradiology continues to advance and malpractice risk levels become
clarified, it is likely that insurers will gain enough comfort to respond to the unique
coverage needs of teleradiology.

PATIENT CONFIDENTIALITY
Ensuring patient confidentiality is of utmost importance, and formal procedures for
doing so have been mandated with the passing of HIPAA (Health Insurance
Portability and Accountability Act) legislation by Congress.
This legislation provides guidelines for the protection of patient privacy that hos-
pitals, imaging centers, and doctors must adhere to. For Teleradiology, the image-
compression process encrypts the image in a way that prevents casual theft, and
Internet-based systems can be protected using virtual private networks or secure
sockets layer technology.
With a secure mechanism for image transmission in place, a teleradiology office
must take the usual steps for patient privacy that other medical offices would take.

STANDARDS [39. What are the ACR standards for teleradiology? (DEC 2012)]

The ACR has deemed that several criteria in the teleradiology process should meet
minimum standards. It would be prudent for anyone providing teleradiology services to
review the ACR standards. Highlights from the guidelines include:

Minimum display resolution is not specified, although the display device should be
capable of displaying all acquired data.
Image annotation (patient demographics and examination information) should be
included.
Image display stations should provide functionality for window width and level
settings, inverting contrast, image rotation, measurement function, and
magnification.
Compression ratios should be displayed with the image (no specification for what
type or level of compression).
Reasonable measures must be taken to protect patient confidentiality.


TURF ISSUES
The radiology community is divided on the issue of teleradiology.
Some people in the radiology community have expressed concern that larger groups,
including academic medical centers, will use their financial resources, prestige, and
clout to invade the turf of smaller groups, who regard their geographic service area
as inviolable.
In defending this view, the point is made that it is important for radiologists to
provide more than just interpretations and to take part in the medical life of the
respective hospital or healthcare community.
Counterarguments include the view that teleradiology provides patients and their
attending physicians the opportunity to access more highly subspecialized experts
than may be available in the local community, and that it is not appropriate for
individual radiologists to stand between the patient and such expert opinions.

The one thing abundantly obvious in the Internet age is that patients will increasingly
expect electronic access when they see it as beneficial to their interests and needs.
Doctors, including radiologists, will be ill-served to stand in the way of that new and
increasing imperative.
