Technical English
Computer Engineering and
Bachelor's Degree in Systems
Instructors:
Mónica Santilli
Pilar Traverso
Belisa Martino
2014
10 steps for organizing the process of reading, comprehending, and translating
academic texts
10. Revision time: do not devote 100% of the available time to translating alone.
Without a thorough revision, no matter how well we translate, it is impossible to
produce a quality translation.
Using the dictionary
Once we have the dictionary in our hands, we should familiarize ourselves with its
organization and its contents: the abbreviations it uses, its tables and lists, and
any other information it may provide besides the meanings of words.
Regarding content, we should bear in mind that many dictionaries provide additional
information that can be very useful for the comprehension and translation process.
For example, most include a list of irregular English verbs, and many contain
equivalence tables for weights and measures. A comprehensive dictionary may also
have a section devoted to the grammar of both languages.
Other considerations
ACTIVITIES
List of abbreviations
Table of weights and measures
Summary of English grammatical structures
Summary of Spanish grammatical structures
List of irregular English verbs
Section for terms in Spanish
Section for terms in English
Other(s)...
c. Identify the devices this online dictionary uses to indicate a change of word
class.
1- Look up the word CODE in your traditional dictionary and then:
-Many code generator applications come with pre-built templates from which to build a
webpage.
How to optimize reading comprehension
Let us consider that, from an interactive perspective, the reader takes a dynamic
rather than a passive attitude toward reading: predicting, making conjectures,
questioning the text. The meaning of a reading is the result of the interaction
between the text and the reader.
First, we should plan the reading using this guide:
A text is a piece of writing with complete meaning. Texts are made up of paragraphs.
Authors use paragraphs to divide their text into units of ideas with complete
meaning.
A paragraph consists of a group of sentences related to one central idea.
Next, we will analyze the last two points, taking as our starting point the
question: What is the topic of a text?
The topic indicates what a text is about, and it can be expressed in a simple phrase
or even a single word. To identify the topic, we should ask ourselves:
What is the text about?
The main idea, in turn, tells us the most important statement that the author wants
to convey about the topic.
Recognizing the main idea starts from identifying the most important or central
ideas. At this point we will ask ourselves: What is (or are) the most important
idea(s) that the author intends to explain in relation to the topic?
In this case, we see that the author talks about the professional who has graduated
in Networks and Operating Systems.
Therefore, when we ask ourselves what the most important ideas are that the author
intends to explain in relation to the topic of this paragraph, we should answer that
the main idea is the professional profile of the graduate in Networks and Operating
Systems, since it answers the questions we posed without being too broad or too
detailed.
http://www.usfq.edu.ec/programas_academicos/colegios/politecnico/carreras/Paginas/redes_y_sistemas_operativos.aspx
Where do we find the main idea?
The main idea may appear anywhere in the text, although it is generally presented in
the introduction.
In some cases the main idea is implicit in the text, and then we have to construct
it.
In that reading, we will identify titles and subtitles, which make it much easier to
discover the main ideas. They will surely give you, if not the main idea itself,
enough clues to find it.
We will discover an especially significant phrase that is repeated in the title of
the text, or in the subtitles and in the body.
We will look for a paragraph that can point us to the main idea. The text may
contain a paragraph, highlighted in some way, in which the author tells us "What I
want to stress in this text is..." or "The important thing to bear in mind is..."
We will thus be able to answer the question: Why did the author write this, in this
way, and at this moment?
Activities:
2
http://www.globalization101.org/information-technology/
a. Information Technology.
b. Advances in IT as a key factor in the globalization process.
c. The software and hardware advances of the 1990s and the more recent ones in
Internet tools.
The language you write and speak has structure: for example, a book has chapters with
paragraphs that contain sentences consisting of words. Programs written in Visual Basic also
have a structure: modules are like chapters, procedures are like paragraphs, and lines of
code are like sentences.
When you speak or write, you use different categories of words, such as nouns or verbs. Each
category is used according to a defined set of rules. In many ways, Visual Basic is much like the
language that you use every day. Visual Basic also has rules that define how categories of
words, known as programming elements, are used to write programs. Programming elements in
Visual Basic include statements, declarations, methods, operators, and keywords.
Written and spoken language also has rules, or syntax, that defines the order of words in a
sentence. Visual Basic also has syntax: at first it may look strange, but it is actually very simple,
and tools such as IntelliSense provide you with guidance in using the correct syntax when you
write programs.
http://msdn.microsoft.com/en-us/library/ms172579(v=vs.90).aspx
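The "programming elements" the passage describes (declarations, statements, operators, keywords) exist in most languages, not just Visual Basic. A minimal sketch in Python, used here purely for illustration:

```python
# A tiny program labelling the "categories of words" the passage describes.
# (The passage refers to Visual Basic; Python is used here only to illustrate.)

def area(width, height):      # 'def' is a keyword; this line is a declaration
    return width * height     # '*' is an operator; 'return' is a keyword

total = area(3, 4)            # this whole line is a statement (an assignment)
print(total)                  # a statement calling the built-in print function
```

Just as IntelliSense flags malformed Visual Basic, a Python interpreter rejects any line that breaks these syntax rules before the program runs.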
The UNNOBA (National University of the North West of the Province of Buenos Aires) was
founded in 2002. It is a national, State-run, free university. It offers over 25 courses of
study in areas such as Agricultural, Natural and Environmental Sciences and Economics, Law
and Technology. More than five thousand students complete their higher education at this
University.
How are computers designed?
You can't build a computer until you decide on its purpose. Manufacturers first pinpoint a need
for a specific product. Then they design that product on computers equipped with modeling
software. With a product plan in hand, engineers can determine what sort of manufacturing
equipment they'll need.
No matter which product they might conceive, it will require substantial resources. Your
computer is made up of a fantastic array of different materials, including steel, glass, silica
sand, iron ore, gold, bauxite and a lot of others. All of those raw materials have to come from
somewhere, such as mines.
Once the raw materials are gathered, they're transported to a factory, where individual computer
parts are made. One factory might specialize in RAM chips; another makes top-quality CPUs.
CPUs are made mostly of crystalline silicon, which can be sourced from common sand.
First, though, that silicon must be purified. This is one of the most critical steps, because
even a minute trace of impurities can cause chips to fail. Once in purified form, the
silicon is formed into wafers, which are simply thin sheets of crystalline material.
Then, the CPU maker etches, or imprints, lines onto the surface of the wafer. This
process is followed by the actual placing of transistors and circuits.
Then the wafer is thoroughly cleaned with chemicals to ensure there are no contaminants. And
finally, the wafer is precisely cut into the many individual chips, or CPUs, which will eventually
provide the horsepower for your computer.
That's an extremely condensed synopsis of CPU creation. Now imagine the same kinds of
processes occurring for all of the other components inside your computer. It's no wonder that
computer manufacturers outsource component construction to third-party companies.
ACTIVITIES
FIRST READING
SECOND READING
LANGUAGE ANALYSIS
GLOSSARY
Topic: Computer Types
In this unit you will learn about different types of computers and what makes them unique.
Computers were not always things you could carry around with you, or even have in your
bedroom. Sixty years ago, computers (such as ENIAC) were as big as entire apartments. They
were difficult to use and not very powerful by today's standards. They also cost a lot of money to
build and operate. So computers were only used by large organizations such as governments,
international corporations, and universities.
Throughout the 1950s and 1960s, computers captured the public's imagination in literature,
films, and TV. More and more companies wanted computers, even if they didn't always have a
good reason to own one. As a result, computers gradually became smaller, cheaper, and more
practical to own. This was thanks in part to companies like IBM, which mass-produced
computers for the first time and promoted them to medium and large businesses to do things
like payroll, accounting, and other number-crunching tasks.
In the 1970s and 1980s a new type of computer started to gain in popularity. It was called the
PC or personal computer. For the first time in history, computers were now for everyone. The
PC started a revolution which affects nearly everything we do today. The ways we work, play,
communicate, and access information have all been radically reshaped due to the invention and
evolution of the PC.
PCs are everywhere you look today. At home, at the office, and everywhere in between. Many
people still mistakenly believe the term PC is synonymous with a desktop computer running
Windows. This is not really true. Really, any computer you use by yourself for general purposes
could be called a PC. You probably already own at least one of these types of PCs:
- laptop
- desktop computer
- PDA or personal digital assistant
- workstation
Besides PCs, there are other types of computers you probably see at work or school. These
include:
- file servers
- print servers
- web servers
But not all types of computers are as obvious as the ones above. There are still other kinds of
computers that fit inside of other devices and control them. These computers are known as
embedded systems.
Embedded systems can be found in traffic lights, TV sets, refrigerators, coffee machines and
many more devices. Embedded systems are typically controlled by inexpensive, specialized
processors which can only handle very specific tasks.
Types of computers go in and out of fashion as times change. Older kinds of computers which
were very popular in the 20th century (1900's) are now referred to as legacy systems. These
include:
- mainframes
- minicomputers
- IBM clones
New types of computers are always coming out and replacing or augmenting existing computer
types. Examples of new types of computers emerging would be netbooks, tablet PCs, and even
wearable computers.
ACTIVITIES
A-Skimming:
B-Scanning:
a- Sixty years ago.
d- Nowadays.
C-Language analysis:
b- Compound adjectives.
c- The -ing form as a post-modifier, as a gerund, as the main verb in the Present
Continuous, and as a pre-modifier.
d- The -ed form as the main verb in the Simple Past, as the main verb of a
sentence in the Passive Voice, and as a pre-modifier.
e- Relative clauses.
f- Comparatives.
GLOSSARY
EMBEDDED (system)
LEGACY (system)
MAINFRAME
WHO WANTS A LAPTOP
A project set up by Nicholas Negroponte, founder of the MIT Media Lab, in January 2005 with
the aim of designing, producing and distributing $100 laptops to children in the poorest
countries in the world.
2.
They are portable and so can be taken from school to home. An essential feature of the project
is that the children should own the computers, and be able to do whatever they want with them.
The chosen design is very robust and can be powered in a number of ways, including wind-up,
so can be used even in homes without electricity.
3.
The major saving is in the dual-mode display. It uses cheaper technology than is usually used in
laptops, and has an innovative black and white mode that can be used in bright sunshine. The
software is cutting edge but slimmed down. Computers nowadays can typically do the same
function in lots of different ways; this laptop will do everything well, but in one way only. It runs a
modified version of Linux, and all the software is open source. Not only will this cut costs, but it
will also mean that the owners will be able to modify the software to suit their needs.
4.
It has a 500MHz processor and 128MB of DRAM, with 500MB of Flash memory. It doesn't have
a hard disk, but it has four USB ports. It will be able to do everything that a more expensive
laptop can do except store large amounts of data. Probably the most important feature is that
the machines are capable of forming a wireless peer-to-peer mesh network. Children will be
able to communicate with each other via an ad hoc local area network, sharing information and
collaborating in projects.
5.
They will be sold to governments, who will then give them out to schoolchildren, in the same
way that they might give out textbooks. Governments in countries such as India, China, Brazil
and Thailand are expected to order huge quantities of the laptops. When enough orders have
been received and paid for in advance, manufacturing will begin.
ACTIVITIES
COMPREHENSION
LANGUAGE ANALYSIS
Objective (p. 1) ..
Characteristic (p. 2) ..
Powerful (p. 2) ..
Most important (p. 3) ..
Reduce (p. 3) ..
Distribute (p. 5) ...
Very big (p. 5)
GLOSSARY
OPEN SOURCE
PEER-TO-PEER
Topic: Storage and Memory
Memory and storage are important concepts to master in Information Technology. The two
terms are sometimes used interchangeably, so it is important to understand some key
differences.
Computer memory needs to be quick. It is constantly feeding the CPU with data to process.
Since nobody likes to wait for a computer, high-quality computers will have fast processors and
lots of quick memory.
Computers do not normally process all the information they have at once. They also need to
save some data for long term use. This is where storage comes in. Think of all the video files,
mp3s, photos, documents, etc. on your PC. These files are not always being processed by the
CPU; they are mostly just hanging around waiting to be used at some point. Storage does not
need to be as quick as memory, but there does need to be a lot more of it. This is a key difference
between memory and storage.
Because memory needs to be much faster than storage, it is rather more expensive than
storage per KB. A typical desktop computer today (in 2009) has between 512 MB and 8
GB of memory running at speeds of anywhere from 300 MHz to 1.2 GHz. Don't worry if you
don't know what those measurements mean at this point. We will get to them in a later unit.
Computer storage is typically cheaper, slower, and more plentiful than computer memory.
Storage comes in many different types including magnetic storage, optical storage, and more
recently semiconductor storage. Storage is typically non-volatile in nature, meaning that it
retains its state even when the power is off. A typical computer today comes with anywhere
between 50 GB and 1 TB of computer storage.
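The volatile/non-volatile distinction above can be seen in a few lines of Python. This is a minimal sketch: the file name and the text written are arbitrary examples.

```python
import os
import tempfile

# Data held in a variable lives in memory (RAM): it is volatile and
# disappears when the program ends or the power goes off.
greeting = "hello"

# Data written to a file lives in storage (a hard disk or SSD): it is
# non-volatile, so it survives after the program exits.
path = os.path.join(tempfile.gettempdir(), "demo.txt")
with open(path, "w") as f:
    f.write(greeting)

# Reading the file back stands in for a later run of the program:
# the saved copy is still there even though the variable would be gone.
with open(path) as f:
    print(f.read())
```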
The most popular example today of magnetic storage is the hard disk drive. These devices use
rotating, magnetically-charged platters to store data. Hard disk drives are popular because they
can store a lot of data very reliably with relatively quick access times. Other examples of
magnetic storage devices include the tape drive and diskette. Tape drives and diskettes are
both good examples of legacy devices. It's unlikely they will still be made much past 2010.
Trends in computer storage are always changing. Now it looks as if traditional magnetic hard
disk drives might eventually be replaced by SSDs or solid state drives. SSDs have many key
advantages over magnetic storage including 1) no moving parts and 2) less power consumption.
This makes them very good for laptops where battery life and overall durability can be big
issues. If the technology continues to improve, we may even see them in desktop computers as
well.
Optical storage is another technology strategy used in computer storage, and is particularly
useful for sharing audio, video, and larger programs. Optical storage works by a laser burning
data onto, or reading it off, a plastic disc coated with various types of light-sensitive material. Due to
reliability and space limitations, optical storage is seldom used as a primary means of data
storage.
ACTIVITIES
A. SKIMMING
B. SCANNING
C. LANGUAGE ANALYSIS
GLOSSARY
Topic: Operating Systems
An operating system is a generic term for the multitasking software layer that lets you perform
a wide array of 'lower level tasks' with your computer. By low-level tasks we mean:
A computer would be fairly useless without an OS, so today almost all computers come with an
OS pre-installed. Before 1960, every computer model would normally have its own OS custom
programmed for the specific architecture of the machine's components. Now it is common for an
OS to run on many different hardware configurations.
At the heart of an OS is the kernel, which is the lowest level, or core, of the operating system.
The kernel is responsible for all the most basic tasks of an OS such as controlling the file
systems and device drivers. The only software at a lower level than the kernel would be the BIOS,
which isn't really a part of the operating system. We discuss the BIOS in more detail in another
unit.
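Every file operation in an ordinary program ultimately passes through the kernel. A minimal Python sketch: the `os` functions below are thin wrappers over operating-system calls, and the kernel does the actual work of reading the file system.

```python
import os

# Ask the OS for the current directory and its contents. The Python
# interpreter issues system calls; the kernel's file-system code answers.
here = os.getcwd()
entries = os.listdir(here)
print(f"{len(entries)} entries in {here}")
```

The same division of labor applies to device drivers: the program asks, and the kernel talks to the hardware.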
The most popular OS today is Microsoft Windows, which has about 85% of the market share for
PCs and about 30% of the market share for servers. But there are different types of Windows
OSs as well. Some common ones still in use are Windows 98, Windows 2000, Windows XP,
Windows Vista, and Windows Server. Each Windows OS is optimized for different users,
hardware configurations, and tasks. For instance Windows 98 would still run on a brand new PC
you might buy today, but it's unlikely Vista would run on PC hardware originally designed to run
Windows 98.
There are many more operating systems out there besides the various versions of Windows,
and each one is optimized to perform some tasks better than others. Free BSD, Solaris, Linux
and Mac OS X are some good examples of non-Windows operating systems.
Geeks often install and run more than one OS on a single computer. This is possible with dual-
booting or by using a virtual machine. Why? The reasons for this are varied and may include
preferring one OS for programming, and another OS for music production, gaming, or
accounting work.
An OS must have at least one kind of user interface. Today there are two major kinds of user
interface in use: the command line interface (CLI) and the graphical user interface (GUI).
Right now you are most likely using a GUI, but your system probably also contains a
command line interface.
Typically speaking, GUIs are intended for general use and CLIs for use by computer engineers
and system administrators, although some engineers only use GUIs, and some diehard geeks
still use a CLI even to type an email or a letter.
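The idea of a command line interface can be sketched in a few lines of Python: the program reads a typed command and responds with text. The commands below are invented for illustration; a real shell understands many more.

```python
import datetime

def handle(command):
    """Respond to one typed command, like a very small CLI shell."""
    if command == "date":
        return str(datetime.date.today())   # today's date as text
    elif command == "echo hi":
        return "hi"                          # echo back what was asked
    else:
        return "unknown command: " + command

# A real CLI would loop over input() until the user types 'exit';
# here we just show how two commands would be answered.
print(handle("echo hi"))
print(handle("help"))
```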
In recent years, more and more features are being included in the basic GUI OS install,
including notepads, sound recorders, and even web browsers and games. This is another
example of the concept of 'convergence' which we like to mention.
A great example of an up-and-coming OS is Ubuntu. Ubuntu is a Linux operating system which
is totally free, and ships with nearly every application you will ever need already installed. Even
a professional quality office suite is included by default. What's more, thousands of free, ready-
to-use applications can be downloaded and installed with a few clicks of the mouse. This is a
revolutionary feature in an OS and can save lots of time, not to mention hundreds or even
thousands of dollars on a single PC. Not surprisingly, Ubuntu's OS market share is growing very
quickly around the world.
ACTIVITIES
SKIMMING:
What an operating system is.
What tasks an OS performs.
When operating systems were invented.
What the kernel of an OS is.
What characteristics operating systems have.
What kinds of OS exist.
What the user interface is.
What kinds of user interface there are.
SCANNING:
4- Notepads and sound recorders are mentioned as examples of:
a. An operating system
b. The concept of convergence
c. A graphical user interface
LANGUAGE ANALYSIS
Analyze the following sentences, paying special attention to the underlined words or
phrases. Then translate them:
3. The kernel is responsible for all the most basic tasks of an OS such as controlling the
file systems and device drivers.
................................................................................................................................................
................................................................................................................................................
4. It's unlikely Vista would run on PC hardware originally designed to run Windows 98.
................................................................................................................................................
................................................................................................................................................
6. The reasons for this are varied and may include preferring one OS for programming.
................................................................................................................................................
................................................................................................................................................
8. CLIs are intended for use by computer engineers and system administrators.
................................................................................................................................................
................................................................................................................................................
GLOSSARY
Topic: Components
Due to convergence, the traditional categories we divide computing into are blurring. But for
practical reasons, IT professionals can still divide hardware into two main classes: components
and peripherals.
Components are primarily core internal devices of a computer which help define what type a
computer is, what it is capable of doing, and how well it is capable of doing it. Nothing affects
the overall quality of a computer more than its components.
Normally, the more expensive a component is, the better it performs. This is a general guideline,
however, and not a steadfast rule. Sometimes you can spend a lot more money on a component
with only slightly better performance than one costing half as much. Other times a very
expensive component might be based on a completely new technology that is not ready for
mass production. In these cases, one is often better off buying a more mainstream part.
Being an early adopter is not always the most practical move when speccing components for a
new system. Often you can find very powerful hardware at the medium price ranges. There is
normally a relatively large sweet-spot in the market.
How can you know if a component is good or bad? You want to be an IT professional, right? IT
professionals need good computers without performance bottlenecks. So do some research.
Read articles about components on a website. Where do you find them? Just Google it!
Imagine you want to build your own computer. It's not that difficult or expensive, really. I
personally think it's kind of fun. How would you start? If you are experienced, you would start by
choosing the components first! Components must be compatible with each other in order to
function correctly. For example, not all processors are compatible with all motherboards.
Research is necessary to resolve your dependencies.
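One way to picture this kind of dependency check is a small Python sketch. The part names are invented, and the socket labels are used only as examples; a real build would check many more properties (chipset, RAM type, power).

```python
# Hypothetical parts lists: each CPU and motherboard has a socket type.
cpus = {"cpu-a": "AM4", "cpu-b": "LGA1700"}
motherboards = {"board-x": "AM4", "board-y": "LGA1700"}

def compatible(cpu, board):
    """A CPU fits a motherboard only if their socket types match."""
    return cpus[cpu] == motherboards[board]

print(compatible("cpu-a", "board-x"))  # True: both use the AM4 socket
print(compatible("cpu-a", "board-y"))  # False: socket mismatch
```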
If you can't afford the exact parts you want to get all at the same time, you can use old parts or
buy cheaper parts at first if you have to. Why? Because certain components can be upgraded
to attain increased performance. For example, a video card (or graphics card) can be upgraded
to improve the graphics for a CAD/CAM application or 3D gaming experience.
At the heart of the computer lie several key components sitting on the motherboard including the
microprocessor, the chipset, RAM and a ROM firmware instruction set called the BIOS. These
core components are connected by several buses made to carry information around the system
and eventually out to display devices and other peripherals.
The CPU is another name for the 'brain' of the computer and normally includes the
microprocessor and RAM. This is what does all the calculations. One or more coprocessors
may or may not be needed depending on what the computer is used for. In the 20th century,
coprocessors were often used for mathematics such as floating point operations. Today
however coprocessors are mostly used for 3D graphics (GPUs), sound generation, and physics
applications.
As you probably learned in an earlier chapter, RAM is the memory which allows your computer
to hold the operating system and all running programs while your computer is in use. By
contrast, ROM is a kind of permanent memory which is still intact even when the computer is
off. The BIOS is a good example of an application using ROM. The BIOS controls very low-level
access to the hardware.
Buses and ports are general terms for connectivity components which connect the different
parts of the PC together. These include the serial port, parallel port, PCI and PCIe buses, and
the Universal Serial Bus (USB) controller. These devices allow communication between
different parts of the system. Also network interface cards are now standard on most
motherboards, although USB and PCI versions of the devices are also available.
Your optical drives and hard disk drives are also components in your computer. To allow data
interchange between your CPU and drives, SATA, ATA, and SCSI controllers are still widely
used.
The core multimedia components include the sound card and graphics card. They make
computing more fun and useful for creative professionals such as designers, gamers, and
musicians. Multimedia is definitely a place where high-quality components really matter.
Feeding all these components with a steady supply of energy is another component called the
power supply. This is an often overlooked piece of hardware but obviously very important! A
low quality power supply can cause havoc in a computer system. On the other hand a bigger
than necessary power supply can increase system heat, waste power, and make a lot of noise.
Choose wisely!
On the outside of the computer we see the computer case. This is meant to look good,
protect the components, and provide an easy interface to plug in peripherals. If you are buying
or building your own computer, make sure it has a good case.
Apple is well-known for high quality PC and laptop cases, although most major companies have
fair to medium quality PC cases. Beware of computers with cheap looking plastic cases. If a
computer manufacturer uses a cheap case, it's very likely they are also using other cheap
components inside as well. Cheap components equal a slow computer which will break after
moderate use. If you intend to use a computer for several hours every day, it makes sense to
buy the very best one which fits your needs and budget.
ACTIVITIES
Skimming
Scanning
1- Answer:
Language Analysis
a. Components are primarily core internal devices of a computer which help define what
type a computer is, what it is capable of doing, and how well it is capable of doing it.
b. Normally the more expense a component is, the better it performs. This is a general
guideline however and not a steadfast rule.
c. In these cases, one is often better off buying a more mainstream part.
e. If you intend to use a computer for several hours every day, it makes sense to buy the
very best one which fits your needs and budget.
GLOSSARY
PERFORM
CARD
Topic: Input Devices
We use input devices every time we use a computer. Simply speaking, it is these devices
which allow us to enter information. Without them, the computer would not know what we want it
to do.
Some of the things we do with input devices are: move a cursor around the screen, enter
alphanumeric text, draw pictures, and even enter binary data in the form of graphics or audio
wave forms.
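The simplest of these jobs, entering text, can be sketched in Python. This is a minimal illustration: in an interactive program the text would come from `input()`, which reads one line typed on the keyboard; here the typed text is simulated so the example runs anywhere.

```python
def shout(text):
    """Turn whatever the user typed into upper-case letters."""
    return text.upper()

# Interactive version:  typed = input("Type something: ")
# Simulated keyboard input, standing in for what a user might type:
typed = "hello computer"
print(shout(typed))   # HELLO COMPUTER
```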
Input devices have a history as long as computers themselves. Perhaps the first input device
was the simple electronic switch (similar to a light switch) which turned bits on or off. There
were hundreds or even thousands of these switches on larger computers. It used to take a team
of programmers hours or even days to set up a computer to perform a single calculation.
Switches and jumpers are still used today on computers. For instance the power button on the
computer is a switch which is also an input device telling the computer to power on or power off.
Tiny switches called jumpers are also widely used on motherboards to change important
settings such as processor clock speed or memory speed.
Most likely in front of you right now are two of the most popular input devices: the keyboard and
the mouse. And instead of a mouse on a laptop computer you normally have a touchpad.
As computers evolved throughout the late 20th century, computers became more and more
interactive. Input devices came and went. Some lasted and some did not. The light pen and the
joystick are almost unknown today, although they were popular before the mouse and the
gamepad became well-known. Touch screens are already replacing keypads on mobile phones
and may come to replace or augment keyboards and mice on PCs and laptops in the near
future.
Different people prefer different input devices for doing the same task. For instance, many graphic
artists prefer to use a stylus and graphics tablet rather than a mouse. It might offer them a
greater degree of artistic freedom, or precision, while performing their work.
Sufferers of carpal tunnel syndrome often prefer a trackball or stylus to a mouse. Handicapped
computer users have invented a wide array of input devices designed to replace the mouse,
including devices controlled by foot or even eye movement.
Not only PCs and mainframes use input devices. Almost all computers feature some kind of
input device. Special scanners called barcode readers are used in many stores and
warehouses to enter stock and sell items at the cashier. These are input devices as well. Even
microphones can technically be called input devices as a computer can respond to them and
interpret them as incoming data.
Corporations and especially government institutions are already implementing the second
generation of input devices to improve security. These include retina scanners and fingerprint
readers that replace, or improve the accuracy of, username and password authentication. You will be
seeing more of this kind of biometric authentication in the coming years as a general remedy for
weak passwords or leaked passwords.
In summary, input devices are how you interact with a computer. The computer responds to
your input and hopefully does what you need it to do. It seems really simple, and that's the way
it was meant to be!
ACTIVITIES
SKIMMING
SCANNING
LANGUAGE ANALYSIS
b. normally touch pad. Laptop computers have a
............................................................................................................................................
............................................................................................................................................
GLOSSARY
SWITCH
JUMPERS
READER
29
Topic: Programming Languages
Learning a programming language is not easy, but it can be very rewarding. You will have a lot
of questions at first; just remember to get help when you need it! You can find the answer to
almost everything on Google nowadays... so there is no excuse for failure. Also remember that
it takes years to become an expert programmer. Don't expect to get good overnight. Just keep
learning something new every day and eventually you will be competent enough to get the job
done ;)
This article describes three of the most popular programming languages as ranked by
Tiobe.com in June 2009.
#1. Java
Java is a compiled, object-oriented language released in 1995 by Sun Microsystems, and it is
the number one programming language today for many reasons. First, it is a well-organized
language with a strong library of reusable software components. Second, programs written in
Java can run on many different computer architectures and operating systems because they
execute on the JVM (Java Virtual Machine). Sometimes this is referred to as
code portability or even WORA (write once, run anywhere). Third, Java is the language most
likely to be taught in university computer science classes. A lot of computer science theory
books written in the past decade use Java in the code examples. So learning Java syntax is a
good idea even if you never actually code in it.
Java Strengths: WORA, popularity
Java Weaknesses: Slower than natively compiled languages
#6. Python
Python is an interpreted, multi-paradigm programming language written by Guido van Rossum
in the late 1980s and intended for general programming purposes. Python was named not after
the snake but after the Monty Python comedy group. Python is characterized by its use of
indentation for readability and by its encouragement of elegant code, making developers do
similar things in similar ways. Python is the main programming choice of both Google and
Ubuntu.
Strengths: Excellent readability and overall philosophy
Weaknesses: None
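Python's use of indentation for readability, described above, can be seen in a minimal sketch; the function, threshold, and data below are invented purely for illustration:

```python
# Python uses indentation, not braces, to define block structure,
# so the logic reads almost like prose.

def classify(scores):
    """Label each score as 'pass' or 'fail' (60 is an example threshold)."""
    labels = []
    for score in scores:
        if score >= 60:          # indented block: body of the if
            labels.append("pass")
        else:
            labels.append("fail")
    return labels

print(classify([40, 75, 90]))    # ['fail', 'pass', 'pass']
```

Because the indentation itself is the syntax, any two Python programmers solving this problem tend to produce visually similar code, which is part of the "similar things in similar ways" philosophy mentioned above.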
#8. JavaScript
JavaScript is an interpreted, multi-paradigm language, and a very strange one too. Despite its
name, it has nothing whatsoever to do with Java. You will rarely, if ever, see this language
outside of a web browser. It is basically a language meant to script behaviors in web browsers,
used for things such as web form validation and AJAX-style web applications. The trend for
the future seems to be building more and more complex applications in JavaScript, even simple
online games and office suites. The success of this trend will depend upon advancements in
the speed of browsers' JavaScript interpreters. Strictly speaking, the real name of this
programming language is ECMAScript, although almost nobody actually calls it that.
Strengths: it's the only reliable way to do client-side web programming
Weaknesses: it's only really useful in a web browser
ACTIVITIES
SKIMMING
30
SCANNING
LANGUAGE ANALYSIS
a. The catalogue will let you make an informed . about which computer
to buy.
b. You should the programming language according to your needs.
c. Input devices are . considering the user's needs.
31
Topic: Basic Networking
In the simplest explanation, networking is just computers talking to each other. They do this by
sending data packets using various protocols and transmission mediums such as ethernet cable
or Wi-Fi connections. Computers must also know how to find other computers on the network.
To put it briefly, every computer on the network needs a unique address so messages know
where to go after they are sent.
The types of networks you deal with on a daily basis include local area networks (LANs) and
wide area networks (WANs).
Many people today have LANs in their schools, offices, and even their homes. LANs are
especially good for sharing Internet access and commonly used files and databases.
Users can also connect to wide area networks (WANs), which are just large LANs spread out
over several physical locations. The Internet itself is basically a large WAN, with
each node on the network having its own unique IP address.
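The idea that every node needs a unique address can be sketched with Python's standard ipaddress module; the 192.168.0.0/24 LAN range and the addresses below are typical examples chosen for illustration, not taken from the text:

```python
import ipaddress

# A typical home or office LAN uses a private range such as 192.168.0.0/24.
lan = ipaddress.ip_network("192.168.0.0/24")

host = ipaddress.ip_address("192.168.0.42")      # a node on the LAN
internet_host = ipaddress.ip_address("8.8.8.8")  # a node out on the wider Internet

print(host in lan)           # True  -> deliverable locally
print(internet_host in lan)  # False -> must be routed out through a gateway
print(host.is_private)       # True  -> not routable on the public Internet
```

This is exactly the decision a router makes for every packet: if the destination is inside the local network, deliver it directly; otherwise, forward it toward a gateway.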
As you may have read in books or seen in movies, security considerations play a large role
when designing networks. Technology such as firewalls can both block and filter unwanted
network traffic. Virtual private networks (VPNs) are used to connect remote users to office
networks without jeopardizing security. VPNs use strong data encryption to hide data as it is
moving between routers over the Internet.
Networking is not something you can master in a week or even a month. Hundreds of books
have been written about the subject and many more hundreds will come in the future as
technologies mature and evolve. If you work on networks for a living, you are called a network
engineer, and you will probably take certification exams offered by networking companies such
as Cisco.
There are other kinds of networking as well which are not always between PCs and servers. An
example is Bluetooth technology, which is optimized for networking between common consumer
electronics such as mobile phones, mp3 players, and similar devices.
ACTIVITIES
COMPREHENSION:
LANGUAGE ANALYSIS:
Spread out: . .
Play (a role): . .
Unwanted: . .
Jeopardizing: . .
Hide: . .
Master: . .
2- Find examples of the following notions and grammatical structures:
a. Present participle as postmodifier: ..
b. Past participle as postmodifier: ...
c. Relative clause: ..
d. Modal verbs: ...
e. A verb in the present perfect: ...
f. A conditional sentence: ..
3- Complete the glossary:
DOMAIN
NODE
ENCRYPTION
33
Fundamentals of Computer Design
Paragraph .
These changes made it possible to develop successfully a new set of architectures with simpler
instructions, called RISC (Reduced Instruction Set Computer) architectures, in the early 1980s.
The RISC-based machines focused the attention of designers on two critical performance
techniques, the exploitation of instruction level parallelism (initially through pipelining and later
through multiple instruction issue) and the use of caches (initially in simple forms and later using
more sophisticated organizations and optimizations).
Paragraph .
Computer technology has made incredible progress in the roughly 60 years since the first
general-purpose electronic computer was created. Today, less than $500 will purchase a
personal computer that has more performance, more main memory, and more disk storage than
a computer bought in 1985 for 1 million dollars. This rapid improvement has come both from
advances in the technology used to build computers and from innovation in computer design.
Although technological improvements have been fairly steady, progress arising from better
computer architectures has been much less consistent. During the first 25 years of electronic
computers, both forces made a major contribution, delivering performance improvement of
about 25% per year.
Paragraph .
The RISC-based computers raised the performance bar, forcing prior architectures to keep up
or disappear. The Digital Equipment Vax could not, and so it was replaced by a RISC
architecture. Intel rose to the challenge, primarily by translating x86 (or IA-32) instructions into
RISC-like instructions internally, allowing it to adopt many of the innovations first pioneered in
the RISC designs. As transistor counts soared in the late 1990s, the hardware overhead of
translating the more complex x86 architecture became negligible.
Paragraph .
The late 1970s saw the emergence of the microprocessor. The ability of the microprocessor to
ride the improvements in integrated circuit technology led to a higher rate of improvement:
roughly 35% growth per year in performance.
This growth rate, combined with the cost advantages of a mass-produced microprocessor, led
to an increasing fraction of the computer business being based on microprocessors. In addition,
two significant changes in the computer marketplace made it easier than ever before to be
commercially successful with a new architecture. First, the virtual elimination of assembly
language programming reduced the need for object-code compatibility. Second, the creation of
standardized, vendor-independent operating systems, such as UNIX and its clone, Linux,
lowered the cost and risk of bringing out a new architecture.
34
Storage Systems
The popularity of Internet services like search engines and auctions has enhanced the
importance of I/O for computers, since no one would want a desktop computer that couldn't
access the Internet. This rise in the importance of I/O is reflected by the names of our times. The
1960s to 1980s were called the Computing Revolution; the period since 1990 has been called
the Information Age, with concerns focused on advances in information technology versus raw
computational power.
This shift in focus from computation to communication and storage of information emphasizes
reliability and scalability as well as cost-performance.
Although it is frustrating when a program crashes, people become hysterical if they lose their
data. Hence, storage systems are typically held to a higher standard of dependability than the
rest of the computer. Dependability is the bedrock of storage, yet it also has its own rich
performance theory (queuing theory) that balances throughput versus response time. The
software that determines which processor features get used is the compiler, but the operating
system usurps that role for storage.
Thus, storage has a different, multifaceted culture from processors, yet it is still found within the
architecture tent. We start our exploration with advances in magnetic disks, as they are the
dominant storage device today in desktop and server computers.
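The balance between throughput and response time that queuing theory studies can be illustrated with the classic M/M/1 queue result, where mean response time is 1/(service rate - arrival rate); this model and the disk rates below are our own illustration, not part of the text:

```python
def mm1_response_time(arrival_rate, service_rate):
    """Mean response time of an M/M/1 queue, in the same time units as the rates.

    Valid only while the queue is stable (arrival_rate < service_rate).
    """
    if arrival_rate >= service_rate:
        raise ValueError("queue is unstable: arrivals outpace service")
    return 1.0 / (service_rate - arrival_rate)

# A hypothetical disk that can serve 100 I/O requests per second:
for load in (10, 50, 90, 99):   # offered throughput, in I/Os per second
    t = mm1_response_time(load, 100)
    print(f"throughput {load:3d}/s -> mean response time {t * 1000:7.1f} ms")
```

Running the loop shows the tradeoff: pushing throughput from 50% to 99% of capacity multiplies the mean response time roughly fiftyfold, which is why storage systems are rarely driven near their peak throughput.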
ACTIVITIES
35
GLOSSARY
QUEUING (theory)
36
SOFTWARE APPLICATIONS
Without software applications, it would be very hard to actually perform any meaningful task on
a computer unless one was a very talented, fast, and patient programmer. Applications are
meant to make users more productive and get work done faster. Their goal should be flexibility,
efficiency, and user-friendliness.
Today there are thousands of applications for almost every purpose, from writing letters to
playing games. Producing software is no longer the lonely profession it once was, with a few
random geeks hacking away in the middle of the night. Software is a big business and the
development cycle goes through certain stages and versions before it is released.
Applications are released in different versions, including alpha versions, beta versions, release
candidates, trial versions, full versions, and upgrade versions. Even an application's instructions
are often included in the form of another application called a help file.
Alpha versions of software are normally not released to the public and have known bugs. They
are often seen internally as a 'proof of concept'. Avoid alphas unless you are desperate or are
being paid as a 'tester'.
Beta versions, sometimes just called 'betas' for short, are a little better. It is common
practice nowadays for companies to release public beta versions of software in order to
get free, real-world testing and feedback. Betas are very popular and can be downloaded
all over the Internet, normally for free. In general you should be wary of beta versions,
especially if program stability is important to you. There are exceptions to this rule as
well. For instance, Google has a history of excellent beta versions which are more stable
than most companies' releases.
After the beta stage of software development come the release candidates (abbreviated RC).
There can be one or more of these candidates, and they are normally called RC 1, RC 2, RC 3,
etc. The release candidate is very close to what will actually go out as a feature complete
'release'.
The final stage is a 'release'. The release is the real program that you buy in a shop or
download. Because of the complexity of writing PC software, it is likely that bugs will still find
their way into the final release. For this reason, software companies offer patches to fix any
major problems that end users complain loudly about.
Applications are distributed in many ways today. In the past most software was bought in
stores in versions called retail boxes. More and more, software is being distributed over the
Internet, as open source, shareware, freeware, or traditional proprietary and upgrade versions.
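The release stages described above follow a fixed progression from alpha to final release. A minimal sketch of that ordering in Python (the function name and stage labels are our own, chosen to match the stages the text describes):

```python
# Software release stages, from least to most stable,
# following the progression described above.
STAGES = ["alpha", "beta", "rc", "release"]

def is_more_stable(stage_a, stage_b):
    """True if stage_a comes later in the release cycle than stage_b."""
    return STAGES.index(stage_a) > STAGES.index(stage_b)

print(is_more_stable("rc", "beta"))     # True: release candidates follow betas
print(is_more_stable("alpha", "beta"))  # False: alphas come first
```

A check like this is how an updater or package manager could decide, for example, whether an offered download is an upgrade or a step back to a less stable build.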
ACTIVITIES
COMPREHENSION:
37
LANGUAGE ANALYSIS:
1. Complete the following chart with examples from the text:
GLOSSARY
38
ACTIVITIES
A. PRE-READING
39
B. SCANNING
1. According to Tactus, what is the advantage of the keyboard over the touch screen?
C. LANGUAGE ANALYSIS
1. Find four adjectives in the text that describe touch screens. Complete and
translate.
40
3. Find examples in the text of:
b. Conditional sentence
c. Relative clause
d. Modal verbs
.
GLOSSARY
41
The Rest of Computer Architecture: Designing the Organization and Hardware to Meet
Goals and Functional Requirements
The implementation of a computer has two components: organization and hardware. The term
organization includes the high-level aspects of a computer's design, such as the memory
system, the memory interconnect, and the design of the internal processor or CPU (central
processing unit, where arithmetic, logic, branching, and data transfer are implemented). For
example, two processors with the same instruction set architecture but very different
organizations are the AMD Opteron 64 and the Intel Pentium 4. Both processors implement the
x86 instruction set, but they have very different pipeline and cache organizations.
Hardware refers to the specifics of a computer, including the detailed logic design and the
packaging technology of the computer. Often a line of computers contains computers with
identical instruction set architectures and nearly identical organizations, but they differ in the
detailed hardware implementation. For example, the Pentium 4 and the Mobile Pentium 4 are
nearly identical, but offer different clock rates and different memory systems, making the Mobile
Pentium 4 more effective for low-end computers.
The word architecture covers all three aspects of computer design: instruction set
architecture, organization, and hardware. Computer architects must design a computer to meet
functional requirements as well as price, power, performance, and availability goals. Often,
architects must also determine what the functional requirements are, which can be a major task.
The requirements may be specific features inspired by the market. Application software often
drives the choice of certain functional requirements by determining how the computer will be
used. If a large body of software exists for a certain instruction set architecture, the architect
may decide that a new computer should implement an existing instruction set. The presence of
a large market for a particular class of applications might encourage the designers to
incorporate requirements that would make the computer competitive in that market.
Architects must also be aware of important trends in both the technology and the use of
computers, as such trends not only affect future cost, but also the longevity of an architecture.
ACTIVITIES
3. Draw a chart or diagram that names and illustrates the most important notions presented
by the author.
4. Identify and transcribe examples from the text of:
a. Noun phrase. (3)
b. Sentence in the simple present, 3rd person singular. (2)
c. Sentence in the passive voice. (1)
42
GLOSSARY
BRANCHING
PIPELINE
INSTRUCTION SET
ARCHITECTURE
43
Introduction and Layered Network Architecture
Primitive forms of data networks have a long history, including the smoke signals used by
primitive societies, and certainly including nineteenth-century telegraphy. The messages in
these systems were first manually encoded into strings of essentially binary symbols, and then
manually transmitted and received. Where necessary, the messages were manually relayed at
intermediate points.
A major development, in the early 1950s, was the use of communication links to connect central
computers to remote terminals and other peripheral devices, such as printers and remote job
entry points (RJEs). The number of such peripheral devices expanded rapidly in the 1960s with
the development of time-shared computer systems and with the increasing power of central
computers. With the proliferation of remote peripheral devices, it became uneconomical to
provide a separate long-distance communication link to each peripheral. Remote multiplexers
or concentrators were developed to collect all the traffic from a set of peripherals in the same
area and to send it on a single link to the central processor. Finally, to free the central
processor from handling all this communication, special processors called front ends were
developed to control the communication to and from all the peripherals. This led to a more
complex structure. The communication is automated in such systems, in contrast to telegraphy,
for example, but the control of the communication is centrally exercised at the computer. While it
is perfectly appropriate and widely accepted to refer to such a system as a data network or
computer communication network, it is simpler to view it as a computer with remote peripherals.
Many of the interesting problems associated with data networks, such as the distributed control
of the system, the relaying of messages over multiple communication links, and the sharing of
communication links between many users and processes, do not arise in these centralized
systems.
The ARPANET and TYMNET, introduced around 1970, were the first large-scale, general-
purpose data networks connecting geographically distributed computer systems, users, and
peripherals. Inside the "subnet" are a set of nodes, various pairs of which are connected by
communication links. Outside the subnet are the various computers, data bases, terminals, and
so on, that are connected via the subnet. Messages originate at these external devices, pass
into the subnet, pass from node to node on the communication links, and finally pass out to the
external recipient.
The nodes of the subnet, usually computers in their own right, serve primarily to route the
messages through the subnet. These nodes are sometimes called IMPs (interface message
processors) and sometimes called switches. In some networks (e.g., DECNET), nodes in the
subnet might be physically implemented within the external computers using the network. It is
helpful, however, to view the subnet nodes as being logically distinct from the external
computers.
It is important to observe that in Figs. 1.1 and 1.2 the computer system is the center of the
network, whereas in Fig. 1.3 the subnet (i.e., the communication part of the network) is central.
Keeping this picture of external devices around a communication subnet in mind will make it
easier both to understand network layering later in this chapter and to understand the issues of
distributed network control throughout the book.
This arbitrary placement (or arbitrary topology as it is often called) is typical of wide area
networks (i.e., networks covering more than a metropolitan area). Local area networks (i.e.,
networks covering on the order of a square kilometer or less) usually have a much more
restricted topology, with the nodes typically distributed on a bus, a ring, or a star.
Since 1970 there has been an explosive growth in the number of wide area and local area
networks. Many examples of these networks are discussed later, including, as wide area
networks, the seminal ARPANET and TYMNET, and, as local area networks, Ethernets and
token rings.
With the multiplicity of different data networks in existence in the 1980s, more and more
networks have been connected via gateways and bridges so as to allow users of one network
44
to send data to users of other networks. At a fundamental level, one can regard such a network
of networks as simply another network, with each gateway, bridge, and subnet node of each
constituent network being a subnet node of the overall network. From a more practical
viewpoint, a network of networks is much more complex than a single network. The problem is
that each constituent subnet has its own conventions and control algorithms (i.e., protocols) for
handling data, and the gateways and bridges must deal with this inhomogeneity. We discuss
this problem later after developing some understanding of the functioning of individual subnets.
In the future, it is likely that data networks, the voice network, and perhaps cable TV networks
will be far more integrated than they are today. Data can be sent over the voice network today,
and many of the links in data networks are leased from the voice network. Similarly, voice can
be sent over data networks. What is envisioned for the future, however, is a single integrated
network, called an integrated services digital network (ISDN), as ubiquitous as the present voice
network. In this vision, offices and homes will each have an access point into the ISDN that will
handle voice, current data applications, and new applications, all with far greater convenience
and less expense than is currently possible. ISDN is currently available in some places, but it is
not yet very convenient or inexpensive. Another possibility for the future is called broadband
ISDN. Here the links will carry far greater data rates than ISDN and the network will carry video
as well as voice and data.
ACTIVITIES
2. Draw a chart showing the advances in this field from 1950 to 1980.
LANGUAGE ANALYSIS
45
GLOSSARY
MULTIPLEXER
GATEWAY
FRONT END
46
Chapter 1: What is Software Architecture?
What is Software Architecture?
Software application architecture is the process of defining a structured solution that meets all of
the technical and operational requirements, while optimizing common quality attributes such as
performance, security, and manageability. It involves a series of decisions based on a wide
range of factors, and each of these decisions can have considerable impact on the quality,
performance, maintainability, and overall success of the application.
Philippe Kruchten, Grady Booch, Kurt Bittner, and Rich Reitman derived and refined a definition
of architecture based on work by Mary Shaw and David Garlan (Shaw and Garlan 1996). Their
definition is:
Software architecture encompasses the set of significant decisions about the organization of a
software system including the selection of the structural elements and their interfaces by which
the system is composed; behavior as specified in collaboration among those elements;
composition of these structural and behavioral elements into larger subsystems; and an
architectural style that guides this organization. Software architecture also involves functionality,
usability, resilience, performance, reuse, comprehensibility, economic and technology
constraints, tradeoffs and aesthetic concerns.
Martin Fowler summarizes the recurring themes of these definitions as follows: the
highest-level breakdown of a system into its parts; the decisions that are hard to change;
there are multiple architectures in a system; what is architecturally significant can change over
a system's lifetime; and, in the end, architecture boils down to whatever the important stuff is.
In Software Architecture in Practice (2nd edition), Bass, Clements, and Kazman define
architecture as follows:
The software architecture of a program or computing system is the structure or structures of the
system, which comprise software elements, the externally visible properties of those elements,
and the relationships among them. Architecture is concerned with the public side of interfaces;
private details of elements (details having to do solely with internal implementation) are not
architectural.
Like any other complex structure, software must be built on a solid foundation. Failing to
consider key scenarios, failing to design for common problems, or failing to appreciate the
long-term consequences of key decisions can put your application at risk. Modern tools and
platforms help to simplify the task of building applications, but they do not replace the need to
design your application carefully, based on your specific scenarios and requirements. The risks
exposed by poor architecture include software that is unstable, is unable to support existing or
future business requirements, or is difficult to deploy or manage in a production environment.
Systems should be designed with consideration for the user, the system (the IT infrastructure),
and the business goals. For each of these areas, you should outline key scenarios and identify
important quality attributes (for example, reliability or scalability) and key areas of satisfaction
and dissatisfaction. Where possible, develop and consider metrics that measure success in
each of these areas.
47
Figure 1
Tradeoffs are likely, and a balance must often be found between competing requirements
across these three areas. For example, the overall user experience of the solution is very often
a function of the business and the IT infrastructure, and changes in one or the other can
significantly affect the resulting user experience. Similarly, changes in the user experience
requirements can have significant impact on the business and IT infrastructure requirements.
Performance might be a major user and business goal, but the system administrator may not be
able to invest in the hardware required to meet that goal 100 percent of the time. A balance
point might be to meet the goal only 80 percent of the time.
Architecture focuses on how the major elements and components within an application are used
by, or interact with, other major elements and components within the application. The selection
of data structures and algorithms or the implementation details of individual components are
design concerns. Architecture and design concerns very often overlap. Rather than use hard
and fast rules to distinguish between architecture and design, it makes sense to combine these
two areas. In some cases, decisions are clearly more architectural in nature. In other cases, the
decisions are more about design, and how they help you to realize that architecture.
By following the processes described in this guide, and using the information it contains, you will
be able to construct architectural solutions that address all of the relevant concerns, can be
deployed on your chosen infrastructure, and provide results that meet the original aims and
objectives.
Consider the following high level concerns when thinking about software architecture:
ACTIVITIES
6. LANGUAGE ANALYSIS
Application architecture seeks to build a bridge between business requirements and technical
requirements by understanding use cases, and then finding ways to implement those use
cases in the software. The goal of architecture is to identify the requirements that affect the
structure of the application. Good architecture reduces the business risks associated with
building a technical solution. A good design is sufficiently flexible to be able to handle the
natural drift that will occur over time in hardware and software technology, as well as in user
scenarios and requirements. An architect must consider the overall effect of design decisions,
the inherent tradeoffs between quality attributes (such as performance and security), and the
tradeoffs required to address user, system, and business requirements.
Expose the structure of the system but hide the implementation details.
Realize all of the use cases and scenarios.
Try to address the requirements of various stakeholders.
Handle both functional and quality requirements.
It is important to understand the key forces that are shaping architectural decisions today, and
which will change how architectural decisions are made in the future. These key forces are
driven by user demand, as well as by business demand for faster results, better support for
varying work styles and workflows, and improved adaptability of software design.
49
Consider the following key trends:
Current thinking on architecture assumes that your design will evolve over time and that you
cannot know everything you need to know up front in order to fully architect your system. Your
design will generally need to evolve during the implementation stages of the application as you
learn more, and as you test the design against real world requirements. Create your architecture
with this evolution in mind so that it will be able to adapt to requirements that are not fully known
at the start of the design process.
What are the foundational parts of the architecture that represent the greatest risk if you
get them wrong?
What are the parts of the architecture that are most likely to change, or whose design you
can delay until later with little impact?
What are your key assumptions, and how will you test them?
What conditions may require you to refactor the design?
Do not attempt to over-engineer the architecture, and do not make assumptions that you cannot
verify. Instead, keep your options open for future change. There will be aspects of your design
that you must fix early in the process, which may represent significant cost if redesign is
required. Identify these areas quickly and invest the time necessary to get them right.
Build to change instead of building to last. Consider how the application may need to
change over time to address new requirements and challenges, and build in the flexibility
to support this.
Model to analyze and reduce risk. Use design tools, modeling systems such as Unified
Modeling Language (UML), and visualizations where appropriate to help you capture
requirements and architectural and design decisions, and to analyze their impact.
50
However, do not formalize the model to the extent that it suppresses the capability to
iterate and adapt the design easily.
Use models and visualizations as a communication and collaboration tool. Efficient
communication of the design, the decisions you make, and ongoing changes to the
design, is critical to good architecture. Use models, views, and other visualizations of the
architecture to communicate and share your design efficiently with all the stakeholders,
and to enable rapid communication of changes to the design.
Identify key engineering decisions. Use the information in this guide to understand the
key engineering decisions and the areas where mistakes are most often made. Invest in
getting these key decisions right the first time so that the design is more flexible and less
likely to be broken by changes.
Consider using an incremental and iterative approach to refining your architecture. Start with a
baseline architecture to get the big picture right, and then evolve candidate architectures as you
iteratively test and improve your architecture. Do not try to get it all right the first time; design
just as much as you can in order to start testing the design against requirements and
assumptions. Iteratively add details to the design over multiple passes to make sure that you get
the big decisions right first, and then focus on the details. A common pitfall is to dive into the
details too quickly and get the big decisions wrong by making incorrect assumptions, or by
failing to evaluate your architecture effectively. When testing your architecture, consider the
following questions:
ACTIVITIES
2. Answer:
a. What is the main goal of architecture, and what must be taken into account to
achieve it?
b. What are the main factors that affect a software architect's decisions?
c. What are the fundamental guidelines to consider when designing a software
architecture? Briefly explain each one.
c. Noun phrases
51
5. Complete the glossary.
GLOSSARY
CONCURRENCY
COUPLING
PLUGGABLE
BANDWIDTH
CLOUD-BASED
BASELINE
52
What is an operating system?
The most important program that runs on a computer. Every general-purpose computer
must have an operating system to run other programs. Operating systems perform basic
tasks, such as recognizing input from the keyboard, sending output to the display
screen, keeping track of files and directories on the disk, and controlling peripheral
devices such as disk drives and printers.
For large systems, the operating system has even greater responsibilities and powers. It
is like a traffic cop -- it makes sure that different programs and users running at the same
time do not interfere with each other. The operating system is also responsible for
security, ensuring that unauthorized users do not access the system.
multi-user : Allows two or more users to run programs at the same time. Some
operating systems permit hundreds or even thousands of concurrent users.
multiprocessing : Supports running a program on more than one CPU.
multitasking : Allows more than one program to run concurrently.
multithreading : Allows different parts of a single program to run concurrently.
real time: Responds to input instantly. General-purpose operating systems, such
as DOS and UNIX, are not real-time.
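The difference between multitasking and multithreading described above can be illustrated with a short sketch. This example is not part of the original reading; it is a minimal illustration using Python's standard `threading` module, with made-up worker names.

```python
import threading

# Multithreading: different parts of a single program run concurrently.
# Each worker computes its own result and records it in a shared list.
results = []
lock = threading.Lock()

def worker(name, count):
    total = sum(range(count))   # each thread does independent work
    with lock:                  # the lock keeps the shared list consistent
        results.append((name, total))

threads = [
    threading.Thread(target=worker, args=("part-1", 10)),
    threading.Thread(target=worker, args=("part-2", 20)),
]
for t in threads:
    t.start()   # both parts of the program now run concurrently
for t in threads:
    t.join()    # wait until every part has finished

print(sorted(results))  # -> [('part-1', 45), ('part-2', 190)]
```

Multitasking, by contrast, would mean the operating system running two separate programs at once; here the concurrency happens inside one program.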
Operating systems provide a software platform on top of which other programs, called
application programs, can run. The application programs must be written to run on top of a
particular operating system. Your choice of operating system, therefore, determines to a great
extent the applications you can run. For PCs, the most popular operating systems are DOS,
OS/2, and Windows, but others are available, such as Linux.
As a user, you normally interact with the operating system through a set of commands. For
example, the DOS operating system contains commands such as COPY and RENAME for
copying files and changing the names of files, respectively. The commands are accepted and
executed by a part of the operating system called the command processor or command line
interpreter. Graphical user interfaces allow you to enter commands by pointing and clicking at
objects that appear on the screen.
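The COPY and RENAME operations mentioned above can also be performed from inside a program instead of through the command processor. The sketch below is not part of the original reading; it mimics those two DOS commands with Python's standard library, using invented file names in a throwaway directory.

```python
import os
import shutil
import tempfile

# Work in a temporary directory so the example is self-contained.
workdir = tempfile.mkdtemp()
original = os.path.join(workdir, "report.txt")
with open(original, "w") as f:
    f.write("hello")

# Equivalent of the DOS COPY command: duplicate a file.
copy_path = os.path.join(workdir, "report_copy.txt")
shutil.copy(original, copy_path)

# Equivalent of the DOS RENAME command: change a file's name.
renamed = os.path.join(workdir, "report_old.txt")
os.rename(original, renamed)

names = sorted(os.listdir(workdir))
print(names)  # -> ['report_copy.txt', 'report_old.txt']
shutil.rmtree(workdir)  # tidy up the temporary directory
```

Under the hood, both the command interpreter and this script ask the operating system to perform the same file-management tasks.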
ACTIVITIES
GLOSSARY
HISTORY OF COMPUTERS
(1) Let us take a look at the history of the computers that we know today. The very first
calculating device used was the ten fingers of a man's hand. This, in fact, is why today
we still count in tens and multiples of ten. Then the abacus was invented, a bead frame
in which the beads are moved from left to right. People went on using some form of
abacus well into the 16th century, and it is still being used in some parts of the world
because it can be understood without knowing how to read.
(2) During the 17th and 18th centuries many people tried to find easy ways of calculating.
J. Napier, a Scotsman, devised a mechanical way of multiplying and dividing, which is
how the modern slide rule works. Henry Briggs used Napier's ideas to produce
logarithm tables, which all mathematicians use today. Calculus, another branch of
mathematics, was independently invented by both Sir Isaac Newton, an Englishman,
and Leibnitz, a German mathematician.
(3) The first real calculating machine appeared in 1820 as the result of several people's
experiments. This type of machine, which saves a great deal of time and reduces the
possibility of making mistakes, depends on a series of ten-toothed gear wheels. In 1830
Charles Babbage, an Englishman, designed a machine that was called "The
Analytical Engine". This machine, which Babbage showed at the Paris Exhibition in
1855, was an attempt to cut out the human being altogether, except for providing the
machine with the necessary facts about the problem to be solved. He never finished this
work, but many of his ideas were the basis for building today's computers.
(4) In 1930, the first analog computer was built by an American named Vannevar Bush.
This device was used in World War II to help aim guns. Mark 1, the name given to the
first digital computer, was completed in 1944. The men responsible for this invention
were Professor Howard Aiken and some people from IBM. This was the first machine
that could figure out long lists of mathematical problems, all at a very fast rate. In
1946 two engineers at the University of Pennsylvania, J. Eckert and J. Mauchly, built the
first digital computer using parts called vacuum tubes. They named their new
invention ENIAC. Another important advancement in computers came in 1947, when
John von Neumann developed the idea of keeping instructions inside the computer's
memory.
(5) The first generation of computers, which used vacuum tubes, came out in 1950. Univac
I is an example of these computers, which could perform thousands of calculations per
second. In 1960, the second generation of computers was developed, and these could
perform work ten times faster than their predecessors. The reason for this extra speed
was the use of transistors instead of vacuum tubes. Second-generation computers
were smaller, faster and more dependable than first-generation computers. Third-
generation computers appeared on the market in 1965. These computers could do a
million calculations a second, which is 1000 times as many as first-generation
computers. Unlike second-generation computers, these are controlled by tiny integrated
circuits and are consequently smaller and more dependable. Fourth-generation
computers have now arrived, and the integrated circuits that are being developed
have been greatly reduced in size. This is due to microminiaturization, which means
that the circuits are much smaller than before; as many as 1000 tiny circuits now fit onto
a single chip. A chip is a square or rectangular piece of silicon, usually from 1/10 to 1/2
inch, upon which several layers of an integrated circuit are etched or imprinted, after which
the circuit is encapsulated in plastic, ceramic or metal. Fourth-generation computers are
50 times faster than third-generation computers and can complete approximately
1,000,000 instructions per second.
(6) At the rate computer technology is growing, today's computers might be obsolete in a
few years. It has been said that if transport had developed as rapidly as computer
technology, a trip across the Atlantic Ocean today would take a few seconds.
ACTIVITIES
1. MAIN IDEA
Which idea best expresses the main idea of the text? Why?
a. The abacus and the fingers are two calculating devices still in use today.
c. During the early 1800s, many people worked on inventing a mechanical calculating
machine.
f. Instructions used by computers have always been kept inside the computer's
memory.
g. Using transistors instead of vacuum tubes did nothing to increase the speed at
which calculations were done.
3. LOCATING INFORMATION
Find the passages in the text where the following ideas are expressed. Give line ref.
.. 1. During the same period in history, logarithm tables and calculus were
developed.
.. 2. It wasn't until the 19th century that a calculating machine was invented
which tried to reduce manpower.
5. The computers of the future may be quite different from those in use today.
4. Understanding words
Refer back to the text and find synonyms for the following words
1. Machine (l. 2) ..
2. Designed (l. 3) ..
4. Errors (l.15) .
Refer back to the text and find antonyms for the following words
6. Old (l. 7)
5. Content review
A B
6. Contextual reference
Look back at the text and find out what the words in bold typeface refer to
LANGUAGE ACTIVITIES
1.
PAST SIMPLE
Verb + ed
Irregular verbs
______________________________________________________________________
______________________________________________________________________
______________________________________________________________________
______________________________________________________________________
______________________________________________________________________
______________________________________________________________________
Were
______________________________________________________________________
______________________________________________________________________
______________________________________________________________________
3.
Present Continuous
Is + -ing
Are
______________________________________________________________________
______________________________________________________________________
______________________________________________________________________
______________________________________________________________________
______________________________________________________________________
______________________________________________________________________
4.
Passive Voice Present Continuous
Are
_______________________________________________________________________
_______________________________________________________________________
_______________________________________________________________________
_______________________________________________________________________
_______________________________________________________________________
_______________________________________________________________________