
VASAVI COLLEGE OF ENGINEERING

IBRAHIMBAGH, HYDERABAD-31
DEPARTMENT OF COMPUTER SCIENCE & ENGINEERING
Subject: Principles of Programming languages
Class: B.E 3/4
Academic Year: 2013-14, Sem-II
Instructor: I.Navakanth
Tel. No: 9550578779
Lecturers: I.Navakanth
Venue : Room No. R-207
Monday: 12.30-1.20
Tuesday: 12.30-1.20
Wednesday: 10.00-10.50
Saturday: 11.40-12.30
---------------------------------------------------------------------------------------------------------------------

Course Objectives:
1. Understand the syntax of various programming languages
2. Define context-free grammars for programming language constructs
3. Understand the structure of programming languages
4. Declare variables in various programming languages
5. Differentiate the syntax of conditional statements and control loops in various programming languages
6. Understand data representation
7. Understand procedure invocation through activation records
8. Differentiate functional and object-oriented languages
9. Understand the features of object-oriented programming languages
10. Understand the features of functional programming languages and logic programming languages
Course Contents:
1. The Art of Language Design
2. Programming Language Syntax
3. Names, Scopes, and Bindings
4. Control Flow
5. Data Types
6. Subroutines and Control Abstraction
7. Data Abstraction and Object Orientation
8. Concurrency
9. Run-time Program Management
10. Functional Languages and Logic languages

Mode of Evaluation:
Examination                          Marks
1st Internal Test                    20
2nd Internal Test                    20
Quizzes based on Assignments         5
Final Semester Examination           75

Suggested Reading:
1. Michael Scott, Programming Language Pragmatics, 3/e, Morgan Kaufmann (Elsevier), 2009.
2. Sebesta, Concepts of Programming Languages, 8/e, Pearson.
3. Pratt and Zelkowitz, Programming Languages: Design and Implementation, 4/e, PHI.
4. Louden, Programming Languages, 2/e, Cengage, 2003.

Unit-I
Introduction:
The Art of Language Design
Without programming languages, we would have to program computers in their native code, called
machine code. A programming language is an artificial language designed to express the computations that
can be performed by a machine, particularly a computer. Programming languages are used to create
programs that control the behavior of a machine, to express algorithms precisely, or as a form of human
communication.
Language design and language implementation are intimately related to one another. An implementation
must conform to the rules of the language. At the same time, a language designer must consider how easy
or difficult it will be to implement various features, and what sort of performance is likely to result for
programs that use those features.

Programming Language Spectrum:


The many existing languages can be classified into families based on their model of computation.

Classification of Programming languages:

The top-level division distinguishes between the declarative languages, in which the focus is on what the
computer is to do, and the imperative languages, in which the focus is on how the computer should do it.
declarative
    functional: Lisp/Scheme, ML, Haskell
    dataflow: Id, Val
    logic, constraint-based: Prolog, spreadsheets
    template-based: XSLT
imperative
    von Neumann: C, Ada, Fortran, . . .
    scripting: Perl, Python, PHP, . . .
    object-oriented: Smalltalk, Eiffel, C++, Java, . . .

Why Study Programming Languages?

Programming languages are central to computer science and to the typical computer science curriculum.
For one thing, a good understanding of language design and implementation can help one choose the most
appropriate language for any given task. Most languages are better for some things than for others.
Many languages were designed for a specific problem domain, or to match a designer's personal
preference. Each of these languages can be used successfully for a wider range of tasks, but the emphasis is
clearly on the specialty. Whatever language you learn, understanding the decisions that went into its design
and implementation will help you use it better.

Compilation and Interpretation:


Compiler: A compiler takes a source program as input and produces a target program as output.
Interpreter: An interpreter takes a source program and its input at the same time; it scans the program
and carries out the operations as it encounters them.

Programming Environments:
Recent programming environments provide much more integrated tools. When an invalid address error
occurs in an integrated environment, a new window is likely to appear on the user's screen, with the line of
source code at which the error occurred highlighted. Breakpoints and tracing can then be set in this window
without explicitly invoking a debugger. Changes to the source can be made without explicitly invoking an
editor. The editor may also incorporate knowledge of the language syntax, providing templates for all the
standard control structures, and checking syntax as it is typed in. If the user asks to rerun the program after
making changes, a new version may be built without explicitly invoking the compiler or configuration
manager.

Overview of Compilation:
Compilers are generally structured as a series of phases. The first few phases (scanning, parsing, and
semantic analysis) serve to analyze the source program. Collectively these phases are known as the
compiler's front end. The final few phases (intermediate code generation, code improvement, and target
code generation) are known as the back end. They serve to build a target program, preferably a fast one,
whose semantics match those of the source.

Programming Language Syntax:


Unlike natural languages such as English or Chinese, computer languages must be precise. Both their
form (syntax) and meaning (semantics) must be specified without ambiguity so that both programmers and
computers can tell what a program is supposed to do. To provide the needed degree of precision, language
designers and implementors use formal syntactic and semantic notation.
Specifying Syntax:
Formal specification of syntax requires a set of rules. How complicated (expressive) the syntax can be
depends on the kinds of rules we are allowed to use. It turns out that what we intuitively think of as tokens
can be constructed from individual characters using just three kinds of formal rules: concatenation,
alternation, and so-called Kleene closure. Specifying most of the rest of what we intuitively think of as
syntax requires one additional kind of rule: recursion.
Regular Expressions: Any set of strings that can be defined in terms of the first three rules is called a
regular set, or sometimes a regular language. Regular sets are generated by regular expressions
and recognized by scanners.
To specify tokens, we use the notation of regular expressions. A regular expression is one of the following:
1. a character;
2. the empty string;
3. two regular expressions next to each other, meaning any string generated by the first one followed by
(concatenated with) any string generated by the second one;
4. two regular expressions separated by a vertical bar ( | ), meaning any string generated by the first one
or any string generated by the second one;
5. a regular expression followed by a Kleene star (*), meaning the concatenation of zero or more strings
generated by the expression in front of the star.
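As a sketch of how a scanner might apply these three rules, the Python fragment below builds token patterns from concatenation, alternation, and Kleene closure. The token names and patterns are invented for illustration; they are not from any particular language definition.

```python
import re

# Illustrative token classes (names are invented for this sketch):
# a NUMBER is one or more digits (Kleene closure), an IDENT is a letter
# concatenated with zero or more letters/digits, and OP is an
# alternation among four operator characters.
TOKEN_SPEC = [
    ("NUMBER", r"[0-9]+"),
    ("IDENT",  r"[A-Za-z][A-Za-z0-9]*"),
    ("OP",     r"[+\-*/]"),
    ("SKIP",   r"[ \t]+"),          # white space, to be discarded
]
MASTER = re.compile("|".join(f"(?P<{n}>{p})" for n, p in TOKEN_SPEC))

def scan(text):
    """Group characters into (kind, lexeme) tokens, dropping white space."""
    return [(m.lastgroup, m.group())
            for m in MASTER.finditer(text)
            if m.lastgroup != "SKIP"]
```

For example, scan("x1 + 42") yields [('IDENT', 'x1'), ('OP', '+'), ('NUMBER', '42')].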

Context-Free Grammars:
Any set of strings that can be defined if we add recursion is called a context-free language (CFL). Context-free
languages are generated by context-free grammars (CFGs) and recognized by parsers. Each of the rules
in a context-free grammar is known as a production. The symbols on the left-hand sides of the productions
are known as variables, or nonterminals. Symbols that appear only on right-hand sides, and never on the
left-hand side of any production, are called terminals. In a programming language, the terminals of the
context-free grammar are the language's tokens. One of the nonterminals, usually the one on the left-hand
side of the first production, is called the start symbol. It names the construct defined by the overall
grammar. The notation for context-free grammars is sometimes called Backus-Naur Form (BNF).
Scanning: Scanners and parsers are language recognizers: they indicate whether a given string is valid.
The principal job of the scanner is to reduce the quantity of information that must be processed by the
parser, by grouping characters together into tokens, and by removing comments and white space. Scanner
and parser generators automatically translate regular expressions and context-free grammars into scanners
and parsers.
Parsing: Practical parsers for programming languages (parsers that run in linear time) fall into two
principal groups: top-down (also called LL or predictive) and bottom-up (also called LR or shift-reduce). A
top-down parser constructs a parse tree starting from the root and proceeding in a left-to-right depth-first
traversal. A bottom-up parser constructs a parse tree starting from the leaves, again working left-to-right,
and combining partial trees together when it recognizes the children of an internal node. The stack of a
top-down parser contains a prediction of what will be seen in the future; the stack of a bottom-up parser
contains a record of what has been seen in the past.

Unit-II
Names, Scopes, and Bindings:
The Notion of Binding Time:
A binding is an association between two things, such as a name and the thing it names. Binding time is the
time at which a binding is created or, more generally, the time at which any implementation decision is
made. There are many different times at which decisions may be bound: Language Design time, Language
Implementation time, Program writing time, Compile time, Link time, Load time, Run time.

Object Lifetime and Storage Management:


The period of time between the creation and the destruction of a name-to-object binding is called the
binding's lifetime. Similarly, the time between the creation and destruction of an object is the object's
lifetime.
Object lifetimes generally correspond to one of three principal storage allocation
mechanisms, used to manage the objects' space:
1. Static objects are given an absolute address that is retained throughout the
program's execution.
2. Stack objects are allocated and deallocated in last-in, first-out order, usually
in conjunction with subroutine calls and returns.
3. Heap objects may be allocated and deallocated at arbitrary times. They require
a more general (and expensive) storage management algorithm.

Scope Rules:
The textual region of the program in which a binding is active is its scope. In most modern languages, the
scope of a binding is determined statically, that is, at compile time. Typically, a scope is the body of a
module, class, subroutine, or structured control-flow statement, sometimes called a block. At any given
point in a program's execution, the set of active bindings is called the current referencing environment. The
set is principally determined by static or dynamic scope rules. We shall see that a referencing environment
generally corresponds to a sequence of scopes that can be examined (in order) to find the current binding
for a given name. In some cases, referencing environments also depend on what are (in a confusing use of
terminology) called binding rules. Specifically, when a reference to a subroutine S is stored in a variable,
passed as a parameter to another subroutine, or returned as a function value, one needs to determine when
the referencing environment for S is chosen, that is, when the binding between the reference to S and the
referencing environment of S is made. The two principal options are deep binding, in which the choice is
made when the reference is first created, and shallow binding, in which the choice is made when the
reference is finally used.
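Python's nested functions behave like deep binding: the referencing environment is captured when the function reference is created. The sketch below (function and variable names are invented) shows two references to the same nested routine carrying two different environments:

```python
def make_printer(prefix):
    # The nested function's referencing environment, including 'prefix',
    # is fixed here, when the reference is created -- deep binding.
    def show(msg):
        return prefix + ": " + msg
    return show

error = make_printer("ERROR")   # environment where prefix = "ERROR"
info = make_printer("INFO")     # a second, independent environment
```

Calling error("disk full") produces "ERROR: disk full" even though make_printer has long since returned; under shallow binding the environment would instead be chosen at the call.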

Implementing Scope:
To keep track of the names in a statically scoped program, a compiler relies on a data abstraction called a
symbol table. In essence, the symbol table is a dictionary: it maps names to the information the compiler
knows about them. The most basic operations serve to place a new mapping (a name-to-object binding) into
the table and to retrieve (nondestructively) the information held in the mapping for a given name. Static
scope rules in most languages impose additional complexity by requiring that the referencing environment
be different in different parts of the program. In a language with dynamic scoping, an interpreter (or the
output of a compiler) must perform operations at run time that correspond to the insert, lookup, enter scope,
and leave scope symbol table operations in the implementation of a statically scoped language.
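A minimal sketch of such a symbol table, assuming static scoping, is a stack of dictionaries searched innermost-first. The class and method names mirror the four operations named above but are otherwise invented:

```python
class SymbolTable:
    """A dictionary per scope; lookup searches from innermost scope outward."""
    def __init__(self):
        self.scopes = [{}]              # start with the global scope

    def enter_scope(self):
        self.scopes.append({})          # push a fresh, empty scope

    def leave_scope(self):
        self.scopes.pop()               # discard the innermost scope

    def insert(self, name, info):
        self.scopes[-1][name] = info    # bind in the current scope

    def lookup(self, name):
        for scope in reversed(self.scopes):
            if name in scope:
                return scope[name]      # innermost binding wins
        raise KeyError(name)

table = SymbolTable()
table.insert("x", "global int")
table.enter_scope()
table.insert("x", "local real")         # inner binding hides the outer one
```

At this point table.lookup("x") returns "local real"; after table.leave_scope() the hidden global binding becomes visible again.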

The Meaning of Names within a scope:


It is assumed that there is a one-to-one mapping between names and visible objects at any given point in a
program. Two or more names that refer to the same object at a given point in the program are said to be
aliases. A name that can refer to more than one object at a given point in the program is said to be
overloaded.
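Both notions are easy to see in Python (variable names below are invented for the example):

```python
# Aliasing: two names bound to the same object, so a change made
# through one name is visible through the other.
a = [1, 2, 3]
b = a                    # 'b' is an alias for the object named by 'a'
b.append(4)
same_object = a is b     # True: one object, two names

# Overloading: here '+' names different operations depending on type.
int_sum = 2 + 3          # integer addition
str_sum = "2" + "3"      # string concatenation
```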

The Binding of Referencing Environments:


Scope rules determine the referencing environment of a given statement in a program. Static scope rules
specify that the referencing environment depends on the lexical nesting of program blocks in which names
are declared. Dynamic scope rules specify that the referencing environment depends on the order in which
declarations are encountered at run time. An additional issue that we have not yet considered arises in
languages that allow one to create a reference to a subroutine, for example, by passing it as a parameter.

Macro Expansion: To ease the burden of writing repetitive code, many assemblers provided
sophisticated macro expansion facilities. A macro replaces an expression or statement with an
appropriate multi-instruction sequence.

Separate Compilation:
Because most large programs are constructed and tested incrementally, and since the compilation of a very
large program can be a multi-hour operation, any language designed to support large programs must provide
for separate compilation.

Control Flow:
Ordering is fundamental to most (though not all) models of computing. It determines what should be done
first, what second, and so forth, to accomplish some desired task. We can organize the language
mechanisms used to specify ordering into seven principal categories: 1. sequencing 2. selection
3. iteration 4. procedural abstraction 5. recursion 6. concurrency 7. nondeterminacy

Expression Evaluation:
An expression generally consists of either a simple object (e.g., a literal constant, or a named variable or
constant) or an operator or function applied to a collection of operands or arguments, each of which in turn
is an expression. It is conventional to use the term operator for built-in functions that use special, simple
syntax, and to use the term operand for the argument of an operator.

Structured and Unstructured Flow:


Control flow in assembly languages is achieved by means of conditional and unconditional jumps
(branches). Early versions of Fortran mimicked the low-level approach by relying heavily on goto
statements for most nonprocedural control
flow:
      if (A .lt. B) goto 10     ! ".lt." means "<"
      ...
10    continue
The 10 on the last line is a statement label.
Goto statements also feature prominently in other early imperative languages. The abandonment of gotos
was part of a larger revolution in software engineering known as structured programming. Structured
programming was the hot trend of the 1970s, in much the same way that object-oriented programming
was the trend of the 1990s. Structured programming emphasizes top-down design (i.e., progressive
refinement), modularization of code, structured types (records, sets, pointers, multidimensional arrays),
descriptive variable and constant names, and extensive commenting conventions. The developers of
structured programming were able to demonstrate that within a subroutine, almost any well-designed
imperative algorithm can be elegantly expressed with only sequencing, selection, and iteration. Instead of
labels, structured languages rely on the boundaries of lexically nested constructs as the targets of branching
control.
Sequencing: Statements are to be executed (or expressions evaluated) in a certain specified order,
usually the order in which they appear in the program text.
Selection: Depending on some run-time condition, a choice is to be made among two or more statements
or expressions. The most common selection constructs are if and case (switch) statements. Selection is also
sometimes referred to as alternation.
Iteration: A given fragment of code is to be executed repeatedly, either a certain number of times or until
a certain run-time condition is true. Iteration constructs include while, do, and repeat loops.
Recursion: An expression is defined in terms of (simpler versions of) itself, either directly or indirectly;
the computational model requires a stack on which to save information about partially evaluated instances
of the expression. Recursion is usually defined by means of self-referential subroutines.
Nondeterminacy: The ordering or choice among statements or expressions is deliberately left
unspecified, implying that any alternative will lead to correct results. Some languages require the choice to
be random, or fair, in some formal sense of the word.

Unit-III
Data Types:
Most programming languages include a notion of type for expressions and/or objects. Types serve two
principal purposes:
1. Types provide implicit context for many operations, so the programmer does not have to specify
that context explicitly.
2. Types limit the set of operations that may be performed in a semantically valid program.
Type Systems: Informally, a type system consists of (1) a mechanism to define types and associate them
with certain language constructs and (2) a set of rules for type equivalence, type compatibility, and type
inference. The constructs that must have types are precisely those that have values, or that can refer to
objects that have values. These constructs include named constants, variables, record fields, parameters,
and sometimes subroutines; explicit (manifest) constants ; and more complicated expressions containing
these. Type equivalence rules determine when the types of two values are the same. Type compatibility
rules determine when a value of a given type can be used in a given context. Type inference rules define the
type of an expression based on the types of its constituent parts or (sometimes) the surrounding context.
Type Checking: Type checking is the process of ensuring that a program obeys the language's type
compatibility rules. A violation of the rules is known as a type clash. A language is said to be strongly typed
if it prohibits, in a way that the language implementation can enforce, the application of any operation to
any object that is not intended to support that operation. A language is said to be statically typed if it is
strongly typed and type checking can be performed at compile time. In the strictest sense of the term, few
languages are statically typed. In practice, the term is often applied to languages in which most type
checking can be performed at compile time, and the rest can be performed at run time.
Composite Types: Nonscalar types are usually called composite, or constructed types. They are
generally created by applying a type constructor to one or more simpler types. Common composite types
include records (structures), variant records (unions), arrays, sets, pointers, lists, and files. All but pointers
and lists are easily described in terms of mathematical set operations (pointers and lists can be described
mathematically as well, but the description is less intuitive).
Records were introduced by Cobol, and have been supported by most languages since the 1960s. A record
consists of a collection of fields, each of which belongs to a (potentially different) simpler type. Records are
akin to mathematical tuples; a record type corresponds to the Cartesian product of the types of the fields.
Variant records differ from normal records in that only one of a variant record's fields (or collections
of fields) is valid at any given time. A variant record type is the union of its field types, rather than their
Cartesian product.
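A variant record can be sketched with Python's ctypes union type, whose fields share storage just as described above. The class and field names here are invented for the example:

```python
import ctypes

class Number(ctypes.Union):
    """A variant record: the two fields occupy the same storage, so only
    one is meaningfully valid at any given time."""
    _fields_ = [("as_int", ctypes.c_int32),
                ("as_float", ctypes.c_float)]

n = Number()
n.as_int = 7               # writing one variant...
overlaid = n.as_float      # ...makes the other a reinterpretation of the bits
```

Because the fields share storage, the union occupies 4 bytes rather than the 8 a record (Cartesian product) of the same two fields would need, and overlaid is not 7.0 but the bit pattern of the integer 7 read as a float.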
Arrays are the most commonly used composite types. An array can be thought of as a function that maps
members of an index type to members of a component type. Arrays of characters are often referred to as
strings, and are often supported by special purpose operations not available for other arrays.
Sets, like enumerations and subranges, were introduced by Pascal. A set type is the mathematical powerset
of its base type, which must usually be discrete. A variable of a set type contains a collection of distinct
elements of the base type.
Pointers are l-values. A pointer value is a reference to an object of the pointer's base type. Pointers are
often but not always implemented as addresses. They are most often used to implement recursive data
types. A type T is recursive if an object of type T may contain one or more references to other objects of
type T.
Lists, like arrays, contain a sequence of elements, but there is no notion of mapping or indexing. Rather, a
list is defined recursively as either an empty list or a pair consisting of a head element and a reference to a
sublist. While the length of an array must be specified at elaboration time in most (though not all)
languages, lists are always of variable length. To find a given element of a list, a program must examine all
previous elements, recursively or iteratively, starting at the head. Because of their recursive definition, lists
are fundamental to programming in most functional languages.
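The recursive definition above can be sketched directly in Python, representing a list as either None (empty) or a (head, tail) pair. The function names are invented for the example:

```python
# A list is either None (empty) or a pair of a head element and a sublist.
def cons(head, tail):
    return (head, tail)

def length(lst):
    """Defined recursively, mirroring the recursive definition of the list."""
    return 0 if lst is None else 1 + length(lst[1])

def nth(lst, i):
    # To find element i, we must pass over all i earlier elements.
    return lst[0] if i == 0 else nth(lst[1], i - 1)

nums = cons(1, cons(2, cons(3, None)))    # the list 1, 2, 3
```

Note how nth must walk from the head: there is no notion of indexing into the structure, only of following sublist references.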
Files and Input/Output: Input/output (I/O) facilities allow a program to communicate with the
outside world. In discussing this communication, it is customary to distinguish between interactive I/O and
I/O with files. Interactive I/O generally implies communication with human users or physical devices,

which work in parallel with the running program, and whose input to the program may depend on earlier
output from the program (e.g., prompts). Files are intended to represent data on mass storage devices,
outside the memory in which other program objects reside. Like arrays, most files can be conceptualized as
a function that maps members of an index type (generally integer) to members of a component type. Unlike
arrays, files usually have a notion of current position, which allows the index to be specified implicitly in
consecutive operations. Files often display idiosyncrasies inherited from physical input/ output devices. In
particular, the elements of some files must be accessed in sequential order.
Strings: In many languages, a string is simply an array of characters. In other languages,
strings have special status, with operations that are not available for arrays of other sorts. Particularly
powerful string facilities are found in Snobol, Icon, and the various scripting languages.

Equality Testing and Assignment


For simple, primitive data types such as integers, floating-point numbers, or characters, equality testing and
assignment are relatively straightforward operations, with obvious semantics and obvious implementations
(bit-wise comparison or copy). In many cases the definition of equality boils down to the distinction
between l-values and r-values: in the presence of references, should expressions be considered equal only if
they refer to the same object, or also if the objects to which they refer are in some sense equal? The first
option (refer to the same object) is known as a shallow comparison. The second (refer to equal objects) is
called a deep comparison. For complicated data structures (e.g., lists or graphs) a deep comparison may
require recursive traversal.
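The distinction is directly visible in Python, where `is` asks whether two expressions refer to the same object (a shallow comparison) and `==` recursively compares contents (a deep comparison). Variable names are invented for the example:

```python
import copy

original = [[1, 2], [3, 4]]
alias = original                    # a second name for the same object
clone = copy.deepcopy(original)     # a distinct object with equal contents

shallow_alias = original is alias   # shallow: same object, so True
shallow_clone = original is clone   # shallow: different objects, so False
deep_clone = original == clone      # deep: contents compare equal, so True
```

For the nested lists here, `==` must traverse the structure recursively, exactly the cost noted above for deep comparison of complicated data structures.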

Subroutines and Control Abstraction:


Review of Stack Layout: Each routine, as it is called, is given a new stack frame, or activation record, at
the top of the stack. This frame may contain arguments and/or return values, bookkeeping information
(including the return address and saved registers), local variables, and/or temporaries. When a subroutine
returns, its frame is popped from the stack. At any given time, the stack pointer register contains the
address of either the last used location at the top of the stack or the first unused location, depending on
convention. The frame pointer register contains an address within the frame. Objects in the frame are
accessed via displacement addressing with respect to the frame pointer.
Calling Sequences: Maintenance of the subroutine call stack is the responsibility of the calling
sequence (the code executed by the caller immediately before and after a subroutine call) and of the
prologue (code executed at the beginning) and epilogue (code executed at the end) of the subroutine itself.
Sometimes the term calling sequence is used to refer to the combined operations
of the caller, the prologue, and the epilogue.
Parameter Passing: Most subroutines are parameterized: they take arguments that control certain
aspects of their behavior, or specify the data on which they are to operate. Parameter names that appear in
the declaration of a subroutine are known as formal parameters. Variables and expressions that are passed
to a subroutine in a particular call are known as actual parameters. We have been referring to actual
parameters as arguments.
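A small Python example makes the terminology concrete (the function and its parameters are invented for illustration):

```python
# 'scale' and 'values' are formal parameters: the names that appear in
# the subroutine's declaration.
def scale_all(scale, values):
    return [scale * v for v in values]

# 10 and [1, 2, 3] are the actual parameters (arguments): the expressions
# supplied in this particular call.
result = scale_all(10, [1, 2, 3])
```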
Generic Subroutines and Modules: Subroutines provide a natural way to perform an operation for
a variety of different object (parameter) values. In large programs, the need also often arises to perform an
operation for a variety of different object types. Generic modules or classes are particularly valuable for
creating containers: data abstractions that hold a collection of objects, but whose operations are generally
oblivious to the type of those objects. Examples of containers include stack, queue, heap, set, and
dictionary (mapping) abstractions, implemented as lists, arrays, trees, or hash tables. Generic subroutines
(methods) are needed in generic modules (classes), and may also be useful in their own right.
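A generic stack container can be sketched with Python's typing module; the operations never look inside the elements, so one definition serves every element type. Class and method names are invented for the example:

```python
from typing import Generic, List, TypeVar

T = TypeVar("T")        # the type parameter of the container

class Stack(Generic[T]):
    """A container whose operations are oblivious to the element type."""
    def __init__(self) -> None:
        self._items: List[T] = []

    def push(self, item: T) -> None:
        self._items.append(item)

    def pop(self) -> T:
        return self._items.pop()

ints: Stack[int] = Stack()      # one instantiation holds integers...
ints.push(1)
ints.push(2)
names: Stack[str] = Stack()     # ...another holds strings, same code
names.push("ada")
```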
Exception Handling: An exception can be defined as an unexpected, or at least unusual, condition
that arises during program execution and cannot easily be handled in the local context. It may be
detected automatically by the language implementation, or the program may raise it explicitly.
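A Python sketch of a program raising an exception explicitly and handling it in an outer context (the exception class and function names are invented for the example):

```python
class OverdrawnError(Exception):
    """A condition this program raises explicitly."""

def withdraw(balance, amount):
    if amount > balance:
        # The condition cannot be handled sensibly here, so raise it
        # and let a caller decide what to do.
        raise OverdrawnError(f"need {amount}, have {balance}")
    return balance - amount

try:
    new_balance = withdraw(100, 250)
except OverdrawnError as exc:
    outcome = f"rejected: {exc}"    # control transfers to the handler
```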
Events: An event is something to which a running program needs to respond but which occurs outside
the program, at an unpredictable time. The most common inputs to a GUI are keystrokes, mouse motions,
button clicks. They may also be network operations or other asynchronous I/O activity: the arrival of a
message, the completion of a previously requested disk operation.

Unit-IV

Data Abstraction and Object Orientation:


Object-Oriented Programming: With the development of ever-more complicated computer
applications, data abstraction has become essential to software engineering. The abstraction provided
by modules and module types has at least three important benefits. 1. It reduces conceptual load by
minimizing the amount of detail that the programmer must think about at one time. 2. It provides fault
containment by preventing the programmer from using a program component in inappropriate ways, and by
limiting the portion of a program's text in which a given component can be used, thereby limiting the
portion that must be considered when searching for the cause of a bug. 3. It provides a significant degree of
independence among program components, making it easier to assign their construction to separate
individuals, to modify their internal implementations without changing external code that uses them, or to
install them in a library where they can be used by other programs.

Encapsulation and Inheritance:


Encapsulation mechanisms enable the programmer to group data and the subroutines that operate on them
together in one place, and to hide irrelevant details from the users of an abstraction. In the discussion above
we have cast object-oriented programming as an extension of the module-as-type mechanisms of Simula
and Euclid. It is also possible to cast object-oriented programming in a module-as-manager framework.
In the first subsection below we consider the data-hiding mechanisms of modules in non-object-oriented
languages. In the second subsection we consider the new data-hiding issues that arise when we add
inheritance to modules to make classes. In the third subsection we briefly consider an alternative approach,
in which inheritance is added to records, and (static) modules continue to provide data hiding.
Information hiding: The ability to protect some components of the object from external entities. This is
realized by language keywords to enable a variable to be declared as private or protected to the owning
class.
Inheritance: The ability for a class to extend or override functionality of another class. The so-called
subclass has a whole section that is derived (inherited) from the superclass and then it has its own set of
functions and data.
Initialization and Finalization: Most object-oriented languages provide some sort of special
mechanism to initialize an object automatically at the beginning of its lifetime. When written in the form of
a subroutine, this mechanism is known as a constructor. Though the name might be thought to imply
otherwise, a constructor does not allocate space; it initializes space that has already been allocated. A few
languages provide a similar destructor mechanism to finalize an object automatically at the end of its
lifetime. Several important issues arise, such as choosing a constructor, references and values, execution
order, and garbage collection.
Dynamic Method Binding: Dynamic method binding is central to object-oriented programming.
Imagine, for example, that our administrative computing program has created a list of persons who have
overdue library books. The list may contain both students and professors. If we traverse the list and print a
mailing label for each person, dynamic method binding will ensure that the correct printing routine is called
for each individual. In this situation the definitions in the derived classes are said to override the definition
in the base class.
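The library example can be sketched in Python; the class and method names below are invented to match the scenario. Each derived class overrides the base definition, and dynamic binding selects the override according to each object's actual class:

```python
class Person:
    def __init__(self, name):
        self.name = name

    def mailing_label(self):            # definition in the base class
        return self.name

class Student(Person):
    def mailing_label(self):            # overrides the base definition
        return f"{self.name} (student mailbox)"

class Professor(Person):
    def mailing_label(self):            # a second override
        return f"Prof. {self.name}, faculty office"

overdue = [Student("Ana"), Professor("Rao")]
# Dynamic method binding picks the right routine for each individual,
# even though the loop sees only the common Person interface.
labels = [p.mailing_label() for p in overdue]
```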
Multiple Inheritance: At times it can be useful for a derived class to inherit features from more than
one base class. Multiple inheritance also appears in CLOS and Python. Simula, Smalltalk, Objective-C,
Modula-3, Ada 95, and Oberon have only single inheritance. Java, C#, and Ruby provide a limited,
mix-in form of multiple inheritance, in which only one parent class is permitted to have fields. Multiple
inheritance with a common grandparent is known as repeated inheritance. Repeated inheritance with
separate copies of the grandparent is known as replicated inheritance; repeated inheritance with a single
copy of the grandparent is known as shared inheritance. Shared inheritance is the default in Eiffel.
Replicated inheritance is the default in C++. Both languages allow the programmer to obtain the other
option when desired.
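Python, as noted above, supports multiple inheritance directly. The sketch below (class names invented) shows a class inheriting features from two base classes; Python resolves inherited names by linearizing the bases into a method resolution order (MRO):

```python
class Storable:
    def save(self):
        return "saved"

class Printable:
    def render(self):
        return "rendered"

class Document(Storable, Printable):
    """Inherits features from more than one base class."""

doc = Document()
# The MRO fixes the order in which base classes are searched for a name.
order = [c.__name__ for c in Document.__mro__]
```

Here doc gains both save (from Storable) and render (from Printable), and the MRO is Document, Storable, Printable, object.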
Concurrency: A program is said to be concurrent if it contains more than one active execution context,
that is, more than one thread of control. Concurrency arises for at least three important reasons. 1. To capture
the logical structure of a problem. Many programs, particularly servers and graphical applications, must
keep track of more than one largely independent task at the same time. Often the simplest and most
logical way to structure such a program is to represent each task with a separate thread of control.

2. To cope with independent physical devices. Some software is by necessity concurrent. An operating
system may be interrupted by a device at almost any time. It needs one context to represent what it was
doing before the interrupt and another for the interrupt itself. Likewise a system for real-time control
(e.g., of a factory, or even an automobile) is likely to include a large number of processors, each connected
to a separate machine or device. Each processor has its own thread(s) of control, which must interact with
the threads on other processors to accomplish the overall objectives of the system .Message-routing
software for the Internet is in some sense a very large concurrent program, running on thousands of servers
around the world. 3. To increase performance by running on more than one processor at once. Even when
concurrency is not dictated by the structure of a program or the hardware on which it has to run, we can
often increase performance by choosing to have more than one processor work on the problem
simultaneously. With many processors, the resulting parallel speedup can be very large.

Concurrent Programming Fundamentals:


We will use the word concurrency to characterize any program in which two or more execution contexts
may be active at the same time. Under this definition, coroutines are not concurrent, because only one of
them can be active at once. We will use the term parallelism to characterize concurrent programs in which
execution is actually happening in more than one context at once. True parallelism thus requires parallel
hardware. From a semantic point of view, there is no difference between true parallelism and the
quasiparallelism of a preemptive concurrent system, which switches between execution contexts at
unpredictable times: the same programming techniques apply in both situations. Within a concurrent
program, we will refer to an execution context as a thread. The threads of a given program are implemented
on top of one or more processes provided by the operating system. We will sometimes use the word task to
refer to a well-defined unit of work that must be performed by some thread. In one common programming
idiom, a collection of threads shares a common bag of tasks: a list of work to be done.
Each thread repeatedly removes a task from the bag, performs it, and goes back for another. Sometimes the
work of a task entails adding new tasks to the bag. Unfortunately, the vocabulary of concurrent
programming is not consistent across languages or authors. Several languages call their threads processes.
Ada calls them tasks. Several operating systems call lightweight processes threads.
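The bag-of-tasks idiom maps directly onto Python's standard queue.Queue (the task here, squaring numbers, is purely illustrative):

```python
import queue
import threading

bag = queue.Queue()
for n in range(10):
    bag.put(n)                       # fill the bag with initial tasks

results = []
results_lock = threading.Lock()

def worker():
    while True:
        try:
            n = bag.get_nowait()     # remove a task from the bag
        except queue.Empty:
            return                   # bag is empty: this thread is finished
        with results_lock:
            results.append(n * n)    # perform the task

threads = [threading.Thread(target=worker) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(sorted(results))
```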
Implementing Synchronization: Synchronization is the principal semantic challenge for shared-memory
concurrent programs. One commonly sees two forms of synchronization: mutual exclusion and
condition synchronization. Mutual exclusion ensures that only one thread is executing a critical section of
code at a given point in time. Condition synchronization ensures that a given thread does not proceed until
some specific condition holds (e.g., until a given variable has a given value). It is tempting to think of
mutual exclusion as a form of condition synchronization ("don't proceed until no other thread is in its
critical section"), but this sort of condition would require consensus among all extant threads, something that
condition synchronization doesn't generally provide.
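Both forms can be sketched with Python's threading.Lock and threading.Condition (the producer/consumer roles here are illustrative):

```python
import threading

counter = 0
done = False
lock = threading.Lock()              # mutual exclusion
ready = threading.Condition(lock)    # condition synchronization, same lock

def producer():
    global counter, done
    with ready:                      # critical section: one thread at a time
        counter += 1
        done = True
        ready.notify()               # wake a waiter once the condition holds

def consumer():
    with ready:
        while not done:              # re-check the condition on each wakeup
            ready.wait()

t1 = threading.Thread(target=consumer)
t2 = threading.Thread(target=producer)
t1.start(); t2.start()
t1.join(); t2.join()
print(counter)
```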
Language-Level Mechanisms: Though widely used, semaphores are also widely considered to be
too low-level for well-structured, maintainable code. They suffer from two principal problems. First,
because their operations are simply subroutine calls, it is easy to leave one out (e.g., on a control path with
several nested if statements). Second, unless they are hidden inside an abstraction, uses of a given
semaphore tend to get scattered throughout a program, making it difficult to track them down for purposes
of software maintenance. A monitor is a module or object with operations, internal state, and a number of
condition variables. Only one operation of a given monitor is allowed to be active at a given point in time.
A thread that calls a busy monitor is automatically delayed until the monitor is free. On behalf of its
calling thread, any operation may suspend itself by waiting on a condition variable. An operation may also
signal a condition variable, in which case one of the waiting threads is resumed, usually the one that waited
first.
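A monitor can be approximated in Python as a class whose operations all acquire one shared lock, with threading.Condition objects as the condition variables. The bounded buffer below is an illustrative sketch, not a faithful Hoare monitor (in Python, a signaled thread resumes only after the signaler releases the lock):

```python
import threading

class BoundedBuffer:
    """Monitor sketch: internal state, operations, and condition variables.
    The shared lock ensures at most one operation is active at a time."""
    def __init__(self, size):
        self.items = []
        self.size = size
        self.lock = threading.Lock()
        self.not_full = threading.Condition(self.lock)
        self.not_empty = threading.Condition(self.lock)

    def insert(self, x):
        with self.lock:
            while len(self.items) == self.size:
                self.not_full.wait()       # suspend on a condition variable
            self.items.append(x)
            self.not_empty.notify()        # signal: resume one waiting thread

    def remove(self):
        with self.lock:
            while not self.items:
                self.not_empty.wait()
            x = self.items.pop(0)
            self.not_full.notify()
            return x

buf = BoundedBuffer(2)
out = []
consumer = threading.Thread(
    target=lambda: out.extend(buf.remove() for _ in range(3)))
consumer.start()
for v in [1, 2, 3]:
    buf.insert(v)                          # blocks when the buffer is full
consumer.join()
print(out)
```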
Message Passing: While shared-memory concurrent programming is common on small-scale
multiprocessors, most concurrent programming on large multicomputers and networks is currently based
on messages. We consider three principal issues in message-based computing: naming, sending, and
receiving. Any of the three principal send mechanisms (no-wait, synchronization, remote-invocation) can
be paired with either of the principal receive mechanisms (explicit, implicit). Remote-invocation send with
explicit receipt is sometimes known as rendezvous; remote-invocation send with implicit receipt is
generally known as remote procedure call.
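Remote-invocation send with explicit receipt can be sketched in Python with a pair of queues standing in for message channels (all names are illustrative): the caller sends a request and then blocks until the reply arrives, while the server explicitly receives each request.

```python
import queue
import threading

requests = queue.Queue()
replies = queue.Queue()

def server():
    while True:
        msg = requests.get()         # explicit receive
        if msg is None:
            return                   # shutdown sentinel
        replies.put(msg * msg)       # compute and reply

def remote_call(x):
    """Remote-invocation send: send the request, then wait for the reply."""
    requests.put(x)
    return replies.get()

t = threading.Thread(target=server)
t.start()
result = remote_call(6)
requests.put(None)                   # tell the server to shut down
t.join()
print(result)
```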

Unit-V
Run-time Program Management:
Late Binding of Machine Code: In some environments it makes sense to bring compilation and
execution closer together in time. Just-in-time (JIT) compilation translates a program from source or
intermediate form into machine language immediately before each separate run of the program.
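Python's bytecode compiler is not a JIT in this sense, but its built-in compile and exec functions give a feel for late binding: source text is translated and installed immediately before use. A small sketch (the generated function is purely illustrative):

```python
# Source text that exists only at run time.
source = "def square(n):\n    return n * n\n"

# Translate it to a code object just before running it.
code = compile(source, "<generated>", "exec")

ns = {}
exec(code, ns)          # installs 'square' into the namespace ns
print(ns["square"](7))
```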
Inspection/Introspection: Symbol and type metadata makes it easy for utility programs (just-in-time and
dynamic compilers, optimizers, debuggers, profilers, and binary rewriters) to inspect a program and reason
about its structure and types. Lisp has long allowed a program to reason about its own internal structure
and types; this sort of reasoning is sometimes called introspection.
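Python exposes this kind of metadata directly through built-ins and the standard inspect module (the Point class is just an example):

```python
import inspect

class Point:
    """A 2-D point."""
    def __init__(self, x, y):
        self.x, self.y = x, y
    def norm2(self):
        return self.x ** 2 + self.y ** 2

p = Point(3, 4)
print(type(p).__name__)    # the object's class, discovered at run time
print(vars(p))             # the object's fields and their values
# The object's public methods, found by inspecting its members:
methods = [name for name, _ in inspect.getmembers(p, callable)
           if not name.startswith("_")]
print(methods)
```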

Functional Languages:
Functional Programming Concepts:
In a strict sense of the term, functional programming defines the outputs of a
program as a mathematical function of the inputs, with no notion of internal state,
and thus no side effects. Among the common functional programming languages,
Miranda, Haskell, Sisal, pH, and Backus's FP proposal [Bac78] are purely functional;
Lisp/Scheme and ML include imperative features. To make functional programming
practical, functional languages provide a number of features that are often missing in
imperative languages, among them: first-class function values and higher-order
functions, extensive polymorphism, list types and operators, recursion, structured
function returns, constructors (aggregates) for structured objects, and garbage
collection.
A Review/Overview of Scheme: Most Scheme implementations employ an interpreter that runs a
read-eval-print loop. The interpreter repeatedly reads an expression from standard input (generally typed
by the user), evaluates that expression, and prints the resulting value.
If the user types (+ 3 4)
the interpreter will print 7
If the user types 7
the interpreter will also print 7
(The number 7 is already fully evaluated.) To save the programmer the need to type an entire program
verbatim at the keyboard, most Scheme implementations provide a load function that reads (and evaluates)
input from a file: (load "my_Scheme_program")
Evaluation Order Revisited: We observed that the subcomponents of many expressions can
be evaluated in more than one order. In particular, one can choose to evaluate function arguments before
passing them to a function, or to pass them unevaluated. The former option is called applicative-order
evaluation; the latter is called normal-order evaluation. Like most imperative languages, Scheme uses
applicative order in most cases. Normal order, which arises in the macros and call-by-name parameters of
imperative languages, is available in special cases. Suppose that we have defined the following function.
(define double (lambda (x) (+ x x)))
Evaluating the expression (double (* 3 4)) in applicative order (as Scheme does), we have
(double (* 3 4))
⇒ (double 12)
⇒ (+ 12 12)
⇒ 24
Under normal-order evaluation we would have
(double (* 3 4))
⇒ (+ (* 3 4) (* 3 4))
⇒ (+ 12 (* 3 4))
⇒ (+ 12 12)
⇒ 24
Here we end up doing extra work: normal order causes us to evaluate (* 3 4) twice.
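The same contrast can be simulated in Python (an illustrative sketch, not Scheme): passing the argument as a zero-argument function, a thunk, delays its evaluation until each use, just as normal order does.

```python
calls = []

def arg():
    """Plays the role of the unevaluated argument (* 3 4)."""
    calls.append(1)               # record each time the argument is evaluated
    return 3 * 4

def double(thunk):
    return thunk() + thunk()      # normal order: evaluate the argument at each use

print(double(arg))                # 24
print(len(calls))                 # 2: the argument was evaluated twice
```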
Higher-Order Functions: A function is said to be a higher-order function (also called a functional form) if
it takes a function as an argument or returns a function as a result.
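For instance, in Python (the compose helper is an illustrative definition, not a library function):

```python
from functools import reduce

def compose(f, g):
    """Higher-order in both senses: takes functions, returns a function."""
    return lambda x: f(g(x))

inc = lambda n: n + 1
dbl = lambda n: n * 2

inc_then_dbl = compose(dbl, inc)
print(inc_then_dbl(5))            # dbl(inc(5)) = 12

# map and reduce also take functions as arguments:
total = reduce(lambda a, b: a + b, map(inc, [1, 2, 3]))
print(total)                      # (1+1) + (2+1) + (3+1) = 9
```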

Theoretical Foundations: Mathematically, a function is a single-valued mapping: it associates every
element in one set (the domain) with (at most) one element in another set (the range). In
conventional notation, we indicate the domain and range of, say, the square root function by writing
sqrt : R → R
We can also define functions using conventional set notation:
sqrt ≡ {(x, y) ∈ R × R | y ≥ 0 ∧ x = y²}
Unfortunately, this notation is nonconstructive: it doesn't tell us how to compute square roots. Church
designed the lambda calculus to address this limitation.

Functional Programming in Perspective:
Side effects can make programs both hard to read and hard to compile. By
contrast, the lack of side effects makes expressions referentially transparent:
independent of evaluation order. Programmers and compilers of a purely
functional language can employ equational reasoning, in which the equivalence
of two expressions at any point in time implies their equivalence at all times.
Unfortunately, there are common programming idioms in which the canonical
side effect, assignment, plays a central role. Critics of functional programming
often point to these idioms as evidence of the need for imperative language
features; I/O is one example.

Logic Languages:
Logic Programming Concepts:
Logic programming systems allow the programmer to state a collection of axioms from which theorems can
be proven. The user of a logic program states a theorem, or goal, and the language implementation attempts
to find a collection of axioms and inference steps (including choices of values for variables) that together
imply the goal. Of the several existing logic languages, Prolog is by far the most widely used.
Prolog: A logic program consists of facts and rules. Prolog was originally created for natural language processing.
Introduction to Prolog:
PROLOG is particularly strong at solving problems that require complex symbolic
computations. Conventional imperative programs for this type of problem tend to be large and
impenetrable; equivalent PROLOG programs are often much shorter and easier to grasp. The language in
principle enables a programmer to give a formal specification of a program; the result is then almost
directly suitable for execution on the computer. Moreover, PROLOG supports stepwise refinement in
developing programs because of its modular nature. These characteristics render PROLOG a suitable
language for the development of prototype systems.
Data structures in Prolog:
The term is the basic data structure in Prolog: everything, including programs and data, is expressed in
the form of terms. There are four basic types of terms in Prolog: variables, compound terms, atoms, and
numbers. The following tree shows how these categories are related, with examples of each:
term
|-- var        (X, Y)
|-- nonvar     (a, 1, f(a), f(X))
    |-- compound   (f(a), f(X))
    |-- atomic     (a, 1)
        |-- atom     (a)
        |-- number   (1)
Theoretical Foundations: In mathematical logic, a predicate is a function that maps constants
(atoms) or variables to the values true and false. rainy is a predicate, for example; we might have
rainy(Seattle) = true and rainy(Tijuana) = false. Predicate calculus provides a notation and inference rules
for constructing and reasoning about propositions (statements) composed of predicate applications,
operators (and, or, not, etc.), and the quantifiers ∀ and ∃. Logic programming formalizes the search for
variable values that will make a given proposition true.
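A brute-force Python sketch of that search (the rainy predicate follows the example above; the solver is purely illustrative and nothing like Prolog's real resolution engine):

```python
# The "axioms": which cities are rainy.
rainy = {"Seattle": True, "Tijuana": False}

def solve(pred, domain):
    """Find every value in the domain that makes the predicate true."""
    return [x for x in domain if pred(x)]

# Search for X such that rainy(X) holds.
answers = solve(lambda city: rainy[city], rainy)
print(answers)
```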

Logic Programming in Perspective: In the abstract, logic programming is a very
compelling idea: it suggests a model of computing in which we simply list the
logical properties of an unknown value, and then the computer figures out how to
find it (or tells us it doesn't exist). Unfortunately, the current state of the art falls
quite a bit short of this vision, for both theoretical and practical reasons.
