
Program Optimization for Multi-core Architectures
Sanjeev K Aggarwal (ska@iitk.ac.in)
M Chaudhuri (mainak@iitk.ac.in)
R Moona (moona@iitk.ac.in)

Department of Computer Science and Engineering,


IIT Kanpur - 208016
India
Acknowledgement: This course has been developed with
support from Intel Semiconductors (US) Limited

June 2009 Sanjeev K Aggarwal 1


Multi-core Processors:
Challenges and Opportunities

Sanjeev K Aggarwal
Department of Computer Science and
Engineering,
IIT Kanpur, India
ska@iitk.ac.in
June 2009 Sanjeev K Aggarwal 2
Fact #1

„ Processors and computer systems are becoming more and more powerful
„ Faster and many-core processors
„ Large memory bandwidth
„ High speed memory and disks

„ In a few years' time a desktop will be able to give a teraflop of compute power

June 2009 Sanjeev K Aggarwal 3


Fact #2
„ Applications are becoming more and more
demanding

„ Important applications:
„ Engineering simulations (FM, CFD, structures)
„ Biology (Genetics, cell structure, molecular biology)
„ Particle and nuclear physics, high energy physics,
astrophysics, weather prediction, drug design,
molecular dynamics …
„ Entertainment industry, gaming
„ Financial services, data-mining, web services, data
centers, search engines
„ Medical industry, tomography

June 2009 Sanjeev K Aggarwal 4


Systems and Applications

„ These applications require enormous compute power

„ Compute power is available!

June 2009 Sanjeev K Aggarwal 5


Challenge
„ The biggest challenge: how do we program these machines?

„ Software systems (algorithms, compilers, libraries, debuggers) AND programmers (mainly trained in sequential programming) are unable to exploit the compute power

„ Solving these complex problems and programming these architectures require a different methodology

June 2009 Sanjeev K Aggarwal 6


Challenge
„ Better algorithms, compilers, libraries, profilers, application tuners, debuggers etc. have to be designed

„ Programmers have to be trained to use these tools

June 2009 Sanjeev K Aggarwal 7


Challenge
„ The largest user base is outside the computer science domain

„ Engineers and scientists do not want to understand architecture, concurrency and language issues
„ They want to solve their problems
„ They are domain experts and not systems experts

„ We have largely ignored end-users while designing solutions for high performance machines (including multi-core)
June 2009 Sanjeev K Aggarwal 8
High Performance Systems
„ Power of high performance systems comes from
„ Hardware technology: faster machines, more cache, low latencies between devices
„ Multilevel architectural parallelism
„ Pipelines: out of order execution
„ Vector: handles arrays with a single instruction
„ Parallel: many processors, each capable of executing an independent instruction stream
„ VLIW: handles many instructions in a single cycle
„ Clusters: large number of systems on a very fast network
„ Grid: cooperation of a large number of systems

June 2009 Sanjeev K Aggarwal 9
Processor Design Problems
„ Processor frequency and power consumption seem to be scaling in lockstep

„ How can machines stay on the historic performance curve without burning down the system!?

„ Moore's Law is still applicable

„ Physics and chemistry at the nano-scale:
„ Materials
„ Transistor leakage current, quantum effects coming into play
„ Wire lengths have to be reduced

June 2009 Sanjeev K Aggarwal 10
Processor Design Problems …
„ With 200 MHz frequency steps, going two steps back on frequency cuts power consumption by ~40%

„ A dual core running at frequency step n-2 has the same thermal envelope as a single processor running at step n

June 2009 Sanjeev K Aggarwal 11


Sources and types of Parallelism
„ Structured: identical tasks on different data sets

„ Unstructured: different data streams and different instructions

„ Algorithm level: appropriate algorithms and data structures

„ Programming:
„ specify parallelism in parallel languages
„ write sequential code and use compilers
„ use coarse grain parallelism: independent modules
„ use medium grain parallelism: loop level
„ use fine grain parallelism: basic block or statement

„ Expressing parallelism in programs
„ no good languages
„ most applications are not multi-threaded
„ writing multi-threaded code increases software costs
„ programmers are unable to exploit whatever little is available
June 2009 Sanjeev K Aggarwal 12
Amdahl’s Law
„ Determines the speed-up

„ α : fraction of the code that is scalar
„ 1 − α : fraction of the code that is parallelizable

„ 1 operation per unit time in the scalar unit
„ ζ operations per unit time in the parallel units

„ Speed-up = 1 / (α + (1 − α)/ζ)

where 0 ≤ α ≤ 1, and ζ ≥ 1
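
A minimal C sketch evaluating this formula for a few scalar fractions; the value ζ = 16 and the chosen values of α are assumptions made only for illustration:

#include <stdio.h>

/* Amdahl speed-up: 1 / (alpha + (1 - alpha)/zeta) */
static double speedup(double alpha, double zeta) {
    return 1.0 / (alpha + (1.0 - alpha) / zeta);
}

int main(void) {
    double zeta = 16.0;                            /* assumed parallel execution rate */
    double alphas[] = {0.0, 0.05, 0.1, 0.2, 0.5};  /* assumed scalar fractions */
    for (int i = 0; i < 5; i++)
        printf("alpha = %.2f  speed-up = %.2f\n", alphas[i], speedup(alphas[i], zeta));
    return 0;
}

With ζ = 16 this prints speed-ups of 16, 9.1, 6.4, 4.0 and 1.9: even a 20% scalar fraction caps the speed-up at 4.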
June 2009 Sanjeev K Aggarwal 13
Fraction of scalar code vs. Speed up

[Chart: speed-up plotted against the fraction of scalar code for two data series; the speed-up falls off sharply as the scalar fraction grows]

June 2009 Sanjeev K Aggarwal 14
Amdahl’s Law
„ To achieve any significant speedup, parallel code must be greater than 90%

„ Most of the execution time is spent in small sections of code, so concentrate on critical sections (loops)

„ Loop parallelization is beneficial

„ Innermost loop parallelization is the most beneficial

„ The scalar component of the code is the limiting factor
June 2009 Sanjeev K Aggarwal 15


Loop Optimization
„ Loop unrolling
„ Induction variable simplification
„ Loop jamming
„ Loop splitting
„ Loop interchange
„ Loop restructuring
„ Statement reordering
„ Breaking dependence cycle: Node splitting
„ Loop skewing
„ Handling while loops
„ Speculative parallelization
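As an illustrative sketch of two of these transformations on a simple C loop nest (the array names and sizes are assumptions, not taken from the course material):

#define N 1024
double a[N][N], b[N][N];

/* Original loop: the inner loop walks down columns of a C array (poor locality). */
void original(void) {
    for (int j = 0; j < N; j++)
        for (int i = 0; i < N; i++)
            a[i][j] = 2.0 * b[i][j];
}

/* Loop interchange: the inner loop now walks along rows, matching row-major layout. */
void interchanged(void) {
    for (int i = 0; i < N; i++)
        for (int j = 0; j < N; j++)
            a[i][j] = 2.0 * b[i][j];
}

/* 4-way loop unrolling of the inner loop (assumes N is divisible by 4). */
void unrolled(void) {
    for (int i = 0; i < N; i++)
        for (int j = 0; j < N; j += 4) {
            a[i][j]     = 2.0 * b[i][j];
            a[i][j + 1] = 2.0 * b[i][j + 1];
            a[i][j + 2] = 2.0 * b[i][j + 2];
            a[i][j + 3] = 2.0 * b[i][j + 3];
        }
}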
June 2009 Sanjeev K Aggarwal 16
A small experiment
„ Matrix multiplication program

„ Notice how many cores are busy and how much time it takes

for i = 1 to n
for j = 1 to n
for k = 1 to n
C[i,j] = C[i,j] + A[i,k]*B[k,j]

June 2009 Sanjeev K Aggarwal 17


Data Layout

[Diagram: data layout for the multiplication C = A × B]

June 2009 Sanjeev K Aggarwal 18


Experiment: continued
„ Matrix multiplication program (how is it different from the previous one?)

„ Notice how many cores are busy and how much time it takes

for i = 1 to n
for k = 1 to n
for j = 1 to n
C[i,j] = C[i,j] + A[i,k]*B[k,j]
June 2009 Sanjeev K Aggarwal 19
Data Layout

[Diagram: data layout for the multiplication C = A × B]

June 2009 Sanjeev K Aggarwal 20


Experiment: continued

„ Matrix multiplication program (how is it different from the previous two?)

„ Notice how many cores are busy and how much time it takes
omp_set_num_threads(omp_get_num_procs());
#pragma omp parallel for private(j,k)
for i = 1 to n
for j = 1 to n
for k = 1 to n
C[i,j] = C[i,j] + A[i,k]*B[k,j]
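
A self-contained C sketch of this third experiment; the matrix size, initialization and timing are assumptions added only so the program runs as-is (compile with an OpenMP-enabled compiler, e.g. gcc -fopenmp):

#include <stdio.h>
#include <omp.h>

#define N 1024
static double A[N][N], B[N][N], C[N][N];

int main(void) {
    int i, j, k;

    /* Assumed initialization so the program is runnable. */
    for (i = 0; i < N; i++)
        for (j = 0; j < N; j++) {
            A[i][j] = 1.0;
            B[i][j] = 2.0;
            C[i][j] = 0.0;
        }

    omp_set_num_threads(omp_get_num_procs());

    double start = omp_get_wtime();

    /* The outer i loop is split across threads; j and k are private to each thread. */
    #pragma omp parallel for private(j, k)
    for (i = 0; i < N; i++)
        for (j = 0; j < N; j++)
            for (k = 0; k < N; k++)
                C[i][j] = C[i][j] + A[i][k] * B[k][j];

    printf("time = %.3f s, C[0][0] = %.1f\n", omp_get_wtime() - start, C[0][0]);
    return 0;
}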
June 2009 Sanjeev K Aggarwal 21
Data Dependence Analysis
„ Flow or True dependence: when a variable is assigned or defined in one statement and used in a subsequent statement

„ Anti dependence: when a variable is used in one statement and reassigned in a subsequently executed statement

„ Output dependence: when a variable is assigned in one statement and reassigned in a subsequent statement

„ Anti dependence and output dependence arise from reuse of variables and are also called False dependences.

„ Flow dependence is inherent in the computation and cannot be eliminated by renaming. Therefore it is also called True dependence.
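
A small illustrative C fragment (the statement labels S1-S6 are hypothetical) showing the three kinds of dependence:

void dependences(double a, double b, double c, double d) {
    double x, y, z, w;
    x = a + b;      /* S1 */
    y = x * 2.0;    /* S2: flow (true) dependence on S1 - reads the x written by S1   */
    z = x + 1.0;    /* S3 */
    x = c - d;      /* S4: anti dependence on S3 - reassigns x after S3 has read it   */
    w = a * b;      /* S5 */
    w = c + d;      /* S6: output dependence on S5 - reassigns the w written by S5    */
    (void)y; (void)z; (void)w;  /* suppress unused-variable warnings */
}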
June 2009 Sanjeev K Aggarwal 22
Data Dependence Analysis
„ A data dependence is loop independent if the dependence is between instances in the same iteration. Consider a loop

DO I = 1, N
    X[ f(I) ] = .....        S1
    ...... = X[ g(I) ]       S2
ENDDO

„ There is a loop independent dependence from S1 to S2 if there is an integer I such that

1 ≤ I ≤ N and f(I) = g(I)

i.e., there is an iteration in which S1 writes into X and S2 reads from the same element of X.
June 2009 Sanjeev K Aggarwal 23
Data Dependence Analysis
„ A data dependence is loop dependent if the dependence is between different iterations.

„ There is a loop dependent dependence from S1 to S2 if there exist integers I1 and I2 such that

1 ≤ I1 < I2 ≤ N and f(I1) = g(I2)

i.e., S1 writes into X in iteration I1 and S2 reads from the same location in a later iteration I2.

„ Therefore, to find a data dependence from S1 to S2, one has to solve

f(I1) = g(I2) such that 1 ≤ I1 ≤ I2 ≤ N holds

June 2009 Sanjeev K Aggarwal 24
„ If f and g are general functions, then the problem is intractable.

„ If f and g are linear functions of the loop index, then to test dependence we need to find values of two integers I1 and I2 such that

1 ≤ I1 ≤ I2 ≤ N and a0 + a1 I1 = b0 + b1 I2

which can be rewritten as

1 ≤ I1 ≤ I2 ≤ N
a1 I1 − b1 I2 = b0 − a0

„ These are called Linear Diophantine Equations. These equations have to be solved to do program re-structuring.
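
One standard way to decide whether the equation above can have an integer solution at all is the GCD test: a1 I1 − b1 I2 = b0 − a0 is solvable in integers iff gcd(a1, b1) divides b0 − a0 (the loop bounds 1 ≤ I1 ≤ I2 ≤ N still have to be checked separately). A minimal C sketch, offered as an illustration rather than the course's implementation:

#include <stdlib.h>   /* abs */

/* Greatest common divisor of two non-negative integers. */
static int gcd(int x, int y) {
    while (y != 0) { int t = x % y; x = y; y = t; }
    return x;
}

/* GCD test for a0 + a1*I1 = b0 + b1*I2:
   returns 0 if no integer solution exists (hence no dependence),
   1 if a solution may exist (loop bounds must still be checked). */
int gcd_test(int a0, int a1, int b0, int b1) {
    int g = gcd(abs(a1), abs(b1));
    if (g == 0)                      /* both coefficients are zero */
        return (b0 - a0) == 0;
    return (b0 - a0) % g == 0;
}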

June 2009 Sanjeev K Aggarwal 25


OpenMP (Open Multi-Processing)

„ An application programming interface (API)

„ Supports multi-platform shared memory multiprocessing programming

„ C/C++ and Fortran on many architectures, including Unix and Microsoft Windows platforms

„ Consists of a set of compiler directives, library routines, and environment variables that influence run-time behavior

„ Reference: www.openmp.org
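
A minimal illustrative example (a sketch, not from the slides) touching all three ingredients - a compiler directive, a library routine, and an environment variable (OMP_NUM_THREADS):

#include <stdio.h>
#include <omp.h>

int main(void) {
    /* Thread count comes from OMP_NUM_THREADS unless set explicitly in the code. */
    #pragma omp parallel                        /* compiler directive */
    {
        int id = omp_get_thread_num();          /* library routine */
        printf("hello from thread %d of %d\n", id, omp_get_num_threads());
    }
    return 0;
}

Run with, e.g., OMP_NUM_THREADS=4 ./a.out after compiling with an OpenMP-enabled compiler (gcc -fopenmp).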

June 2009 Sanjeev K Aggarwal 26


State of the Art

„ Programmers use high level languages and APIs like OpenMP, Pthreads, Windows threads, mutexes etc. to write parallel programs

„ The programmer must have a deep knowledge of concurrency to program these machines

„ We are reaching (have already reached?) an era where programmers cannot write effective programs without understanding machines and concurrency

„ Is it close to the machine level programming of the early 50s?
June 2009 Sanjeev K Aggarwal 27


History
„ Fortran (Formula Translation) compiler project
1954-1957
„ Considered one of the ten most influential developments in the history of computing

„ Prior to Fortran:
„ Programming was largely done in
assembly/machine language
„ Productivity was low
„ Nobody believed Backus when he started the
project
June 2009 Sanjeev K Aggarwal 28
Reasons of Success of Fortran
„ End-users were not ignored
„ A mathematical formula could easily be translated into a
program
„ Productivity was very high, code maintenance was easy
„ Quality of generated code was very high

„ Adoption: about 70-80% of programmers were using Fortran within a year

„ Side effects: enormous impact on programming languages and computer science
„ Started a new field of research in computer science
„ enormous amount of theoretical work - lexical analysis, parsing, optimization, structured programming, code generation, error recovery etc.
June 2009 Sanjeev K Aggarwal 29
Opportunity

„ Fortran has survived for 50 years!!

„ Simple is beautiful!

„ What is the status of the most “nicely designed languages” with all those “wonderful properties”?!

„ Let us learn from history
„ Involve end users in understanding their requirements
„ Design efficient systems which can be used by non-systems programmers

June 2009 Sanjeev K Aggarwal 30


Opportunity
„ There is no silver bullet

„ Better algorithms, languages, compilers, libraries, profilers, application tuners, debuggers etc. have to be designed and developed

„ Programmers have to be trained to use these tools

„ Involve end-users in understanding their requirements

June 2009 Sanjeev K Aggarwal 31


Multi-core: Teaching and Research
at IIT Kanpur
„ Regular courses to PhD, masters and senior BTech students

„ Courses to industry and other college faculty

„ BTech projects, and MTech and PhD theses by students

„ Faculty members involved:
„ Sanjeev K Aggarwal
„ Mainak Chaudhuri
„ Rajat Moona

„ IIT Kanpur courseware available worldwide through Intel portals

„ We are one of the ten universities supported by Intel worldwide


June 2009 Sanjeev K Aggarwal 32
Course delivery at IIT Kanpur
„ Target audience: doctoral, masters and senior undergraduate students

„ Approximately 40-50 lectures of one hour each (both by faculty and students)

„ One large programming project starting early in the course
„ Involves taking a large sequential workload and parallelizing it using OpenMP, MPI etc.
„ Students use Intel tools to analyze the programs

„ Some small programming assignments for laboratory and exams
June 2009 Sanjeev K Aggarwal 33
Background required
„ Knowledge of basic compilers, operating systems and computer architecture

„ Focus on the back-end of the compiler and concurrency issues in operating systems

„ Background reading material
„ Compiler reference: Dragon book
„ Compilers: Principles, Techniques, and Tools by Aho, Lam, Sethi and Ullman
„ Operating Systems reference
„ Operating System Concepts by Silberschatz and Galvin
„ Computer Architecture
„ Computer Architecture: A Quantitative Approach by Hennessy and Patterson
June 2009 Sanjeev K Aggarwal 34
Delivery to other institutes
„ Teachers' training programme (2007 and 2008): 100 faculty members from IITs, NITs, IIITs and other institutes, and industry have participated

„ Making all the course material available: this is base material and institutes are free to modify it to suit the local environment

„ Making the project details available

„ Course website and discussion groups making all the upgrades available to the participants

„ Providing opportunity to faculty and students from other universities to do summer projects at IIT Kanpur in the area of multi-core technologies

„ Helping other universities adopt the course: UP Technical University and VTU
June 2009 Sanjeev K Aggarwal 35
IITK and Intel partnership
„ Intel has set up labs, and is continuously upgrading them

„ The partnership has grown into research collaboration (Focus School Program)

„ Students are able to work on real life cutting edge research problems

„ A large number of masters and doctoral students are taking up research projects in the area

„ A large number of interns and students trained in multi-core technologies are available to industry

June 2009 Sanjeev K Aggarwal 36
IITK and Intel partnership

„ Intel has provided generous funding to support the curriculum development and research activities

„ Intel has provided a lab consisting of desktops/laptops, and all the software tools (compilers, libraries, profiler, VTune, thread checker etc.)

„ Intel has provided training on the use of the Intel tools to the instructors and students, and regularly conducts sessions for faculty from the rest of the institutes
June 2009 Sanjeev K Aggarwal 37
June 2009 Sanjeev K Aggarwal 38
Research Projects
„ Parallel Patterns: An approach to Parallel Program Development
„ Dynamic code generation for Cell Processors
„ Workbench for Parallel Programming
„ Code Optimization in Matlab Applications
„ DTLS: Speculative Parallelization of Partially Parallel Loops using
Squashed Value Prediction
„ Energy Aware Scheduling
„ Slicing OpenMP Programs
„ A Tool for Parallelizing Programs for Multi-core Architectures
„ An Interactive Debugger for Message Passing Parallel Programs
„ Checkpointing Fortran/MPI Programs
„ Debugging Optimized Code
„ An Architecture for Next-generation Scalable Multi-threading
„ Controlling Leakage in Large Chip-multiprocessor Caches via Profile-
guided Virtual Address Translation
„ Scavenger: A New Last Level Cache Architecture with Global Block
Priority.
June 2009 Sanjeev K Aggarwal 39
Thank you

Questions?

June 2009 Sanjeev K Aggarwal 40


About the Course
June 2009 Sanjeev K Aggarwal 41
What will we learn in the course?

„ Processor architectures with focus on memory


hierarchy, instruction level parallelism and multi-core
architectures
„ Program analysis techniques for redundancy
removal and optimization for high performance
architectures
„ Concurrency and operating systems issues in using
these architectures
„ Programming techniques for exploiting parallelism
(use of message passing libraries)
„ Tools for code analysis and optimization (Intel
compilers, profilers and application tuning tools)
June 2009 Sanjeev K Aggarwal 42
What will we learn …
„ Understand paradigms for programming high end
machines, compilers and runtime systems
„ Applications requirements
„ Shared-memory programming
„ Optimistic and pessimistic parallelization
„ Memory hierarchy optimization
„ Focus on software problem for multi-core processors

June 2009 Sanjeev K Aggarwal 43


What do we expect to achieve by the
end of the course?
„ Faculty who can teach this course and conduct research in this area

„ Students who can design, develop, understand, modify/enhance, and maintain complex applications which run on high performance architectures (in addition to doing research!)

„ A set of slides, notes, projects and laboratory exercises which can be used for teaching this course in future, both at IITK and at other universities

June 2009 Sanjeev K Aggarwal 44


Organization of the course at IITK

„ Approximately 40 lectures of one hour each (both by faculty and students)

„ One term paper/project to be done individually. It is important to start early. (30% credit)

„ Some small programming assignments for laboratory (20% credit)

„ One mid semester examination (20% credit)

„ One end semester examination (30% credit)

„ Everyone is expected to participate in the discussions in class

June 2009 Sanjeev K Aggarwal 45


Detailed course contents
„ what are multi-core architectures,

„ issues involved in writing code for multi-core architectures,

„ how to develop programs for these architectures,

„ what are the program optimization techniques,

„ how to build some of these techniques into compilers,

„ OpenMP and other message passing libraries, threads, mutex etc.

June 2009 Sanjeev K Aggarwal 46
… contents: Architecture
„ Introduction to parallel computers: Instruction level parallelism
(ILP) vs. thread level parallelism (TLP);
„ Performance issues: Brief introduction to cache hierarchy and
communication latency;
„ Shared memory multiprocessors: General architectures and the
problem of cache coherence;
„ Synchronization primitives: Atomic primitives; locks: ticket,
array; barriers: central and tree;
„ performance implications in shared memory programs;
„ Chip multiprocessors: Why CMP (Moore's law, wire delay);
„ shared L2 vs. tiled CMP; core complexity; power/performance;
„ Snoopy coherence: invalidate vs. update, MSI, MESI, MOESI,
MOSI; performance trade-offs; pipelined snoopy bus design;
„ Memory consistency models: SC, PC, TSO, PSO, WO/WC, RC;
„ Chip multiprocessor case studies: Intel Montecito and dual-core
Pentium4, IBM Power4, Sun Niagara
June 2009 Sanjeev K Aggarwal 47
… contents: Program analysis
„ Introduction to optimization, overview of
parallelization;
„ Shared memory programming, introduction to
OpenMP;
„ Dataflow analysis, pointer analysis, alias analysis;
„ Data dependence analysis, solving data dependence
equations (integer linear programming problem);
„ Loop optimizations;
„ Memory hierarchy issues in code optimization;

June 2009 Sanjeev K Aggarwal 48


… contents: Concurrency
„ Operating System issues for multiprocessing. Need
for pre-emptive OS;
„ Scheduling Techniques, Usual OS scheduling
techniques, Threads, Distributed scheduler,
Multiprocessor scheduling, Gang scheduling;
„ Communication between processes, Message boxes,
Shared memory;
„ Sharing issues and Synchronization, Sharing
memory and other structures, Sharing I/O devices,
Distributed Semaphores, monitors, spin-locks,
Implementation techniques on multi-cores;
„ Case studies from Applications:
Digital Signal Processing, Image processing,
Speech processing.
June 2009 Sanjeev K Aggarwal 49
Ethical Issues (Advice to students)

„ Copying material from the internet and other sources
„ DO NOT use “cut and paste” technology to
prepare your reports
„ DO NOT copy assignments
„ You can borrow ideas (after giving due credit) but
not the text/programs
„ Look at the word Plagiarism in www.dictionary.com:
“a piece of writing that has been copied from someone else and is presented as being your own work”

June 2009 Sanjeev K Aggarwal 50
Ethical Issue …

„ Look at the word Plagiarism in http://en.wikipedia.org/wiki/Plagiarism:
“Plagiarism is a form of cheating, and within academia is seen as academic dishonesty. It is a matter of deceit: fooling a reader into believing that certain written material is original when it is not. Plagiarism is a serious and punishable academic offense, when the goal is to obtain some sort of personal academic credit or personal recognition”

June 2009 Sanjeev K Aggarwal 51


Background required
„ Knowledge of basic compilers, operating systems and computer organization
„ Focus on the back-end of the compiler and concurrency issues in operating systems
„ Computer Organization
„ Computer Organization and Design by Patterson and Hennessy
„ Compiler reference: Dragon book
„ Compilers: Principles, Techniques, and Tools by Aho, Sethi and Ullman
„ Operating Systems reference
„ Operating System Concepts by Silberschatz and Galvin
June 2009 Sanjeev K Aggarwal 52
References

„ No specific text book!!

„ Material has been collected from various sources like books, research papers, position papers etc.

„ We will make the material available as we go along

„ Our slides and class notes will be useful material

June 2009 Sanjeev K Aggarwal 53


Some useful references

„ ACKNOWLEDGEMENT: Some of the figures have been taken from Wolfe's book on High Performance Compilers for Parallel Computing.
„ Material on OpenMP is based on the tutorials given by Tim Mattson (Intel) and Rudolf Eigenmann (Purdue University) at Super Computing 2001.

„ Computer Architecture
„ J. L. Hennessy and D. A. Patterson. Computer Architecture: A Quantitative Approach. Morgan Kaufmann Publishers, 3rd Edition.
„ D. E. Culler, J. P. Singh, with A. Gupta. Parallel Computer Architecture: A Hardware/Software Approach. Morgan Kaufmann Publishers, 2nd Edition.
June 2009 Sanjeev K Aggarwal 54
Some useful References …
„ Redundancy removal
„ Aho A, Lam Monica S, Sethi R, Ullman J D, ``Compilers Principles, Techniques,
and Tools'', Addison-Wesley Publishing Company, 2007.
„ Hecht M S, ``Flow Analysis of Computer Programs'', Elsevier North Holland, Inc.,
1977.

„ High Performance Compilers, Data Dependence Analysis


„ Wolfe M, ``High Performance Compilers for Parallel Computing'', Addison-
Wesley Publishing Company, 1996.
„ Muchnick S S, ``Advanced Compiler Design Implementation'', Morgan-
Kaufmann Publishers, 1997.
„ Allen Randy and Ken Kennedy, ``Optimizing Compilers for Modern
Architectures’’, Morgan Kaufmann Publishers, 2002

„ Operating Systems
„ Tanenbaum A S, Distributed Operating Systems, Prentice Hall.
„ Coulouris, Dollimore and Kindberg Distributed Systems Concept and Design,
Addison-Wesley.
„ Silberschatz, Galvin, Operating System Concepts, Addison-Wesley

June 2009 Sanjeev K Aggarwal 55


Schedule (Monday, June 29)

0900-1015 Introduction to the course and logistics (SKA)


1015-1030 break
1030-1130 Introduction to multi-core architectures (MC)
1130-1145 break
1145-1245 Introduction to OpenMP (Abhishek and Ankush)
1245-1415 Lunch
1415-1515 Introduction to OpenMP (Abhishek and Ankush)
1515-1530 break
1530-1630 Intel Tools (Deepak Majeti)
1630-1700 break
1700-1900 laboratory session: getting familiar with the environment
and Intel Tools (Rachit, Deepak, Abhishek and Ankush)

June 2009 Sanjeev K Aggarwal 56


Schedule (Tuesday, June 30)

0900-1000 Recap: virtual memory and caches (MC)


1000-1015 break
1015-1115 Parallel programming (MC)
1115-1145 break
1145-1245 Coherence and consistency (MC)
1245-1415 Lunch
1415-1515 Coherence and consistency (MC)
1515-1545 break
1545-1900 Project Presentations and laboratory sessions
(Afroz Mohiuddin and Teja Palwali)

June 2009 Sanjeev K Aggarwal 57


Schedule (Wednesday, July 1)

0900-1000 Coherence and consistency (MC)


1000-1015 break
1015-1115 Synchronization (MC)
1115-1145 break
1145-1245 Synchronization (MC)
1245-1415 Lunch
1415-1515 Case studies of CMP (MC)
1515-1545 break
1545-1900 Project Presentations and laboratory session
(Vishwesh Inamdar and Dharmendra Modi)

June 2009 Sanjeev K Aggarwal 58


Schedule (Thursday, July 2)

0900-1000 Introduction to Optimization (SKA)


1000-1015 break
1015-1115 Control flow Analysis (SKA)
1115-1145 break
1145-1245 Dataflow Analysis (SKA)
1245-1415 Lunch
1415-1515 Dataflow Analysis (SKA)
1515-1545 break
1545-1900 Project Presentations and laboratory sessions
(Jitesh Jain and Himanshu Govil)

June 2009 Sanjeev K Aggarwal 59


Schedule (Friday, Jul 3)

0900-1000 Compilers for High Performance Architectures (SKA)


1000-1015 break
1015-1115 Data Dependence Analysis (SKA)
1115-1145 break
1145-1245 Data Dependence Analysis (SKA)
1245-1415 Lunch
1415-1515 Loop Optimizations (SKA)
1515-1545 break
1545-1900 Project Presentations and laboratory sessions
(Priyank Faldu and Dilip Kola)

2000-2230 Course Dinner, meet CSE faculty and Intel team

June 2009 Sanjeev K Aggarwal 60


Schedule (Saturday, Jul 4)

0900-1000 CPU Scheduling (RM)


1000-1015 break
1015-1115 Synchronization (RM)
1115-1145 break
1145-1245 Multi-processor Scheduling (RM)
1245-1415 Lunch
1415-1515 Security issues (RM)
1515-1545 break
1545-1630 Project Presentation (Sudhanshu Shukla)
1630-1730 Course feed-back, discussion
Certificate distribution and closing

June 2009 Sanjeev K Aggarwal 61


Logistics
„ Lectures in CS101

„ Lab sessions in the ground floor lab
„ Labs are available round the clock
„ TAs will be available to help you in the labs
„ Computer lab accounts already created
„ Internet can be accessed using these accounts (refer to the sheet in the kit)

„ Coffee/tea in the student lobby area
„ Breakfast/lunch/dinner in VH

„ Contact Mishra Ji and/or Santosh Kumar in the department office for any assistance
„ Medical assistance can be taken from the health centre on a payment basis (carry your course badge)

„ Anything I have missed out? Questions?

June 2009 Sanjeev K Aggarwal 62
