
Code No: RR420507 Set No. 1
IV B.Tech II Semester Regular Examinations, Apr/May 2007
NEURAL NETWORKS
( Common to Computer Science & Engineering and Electronics &
Computer Engineering)
Time: 3 hours Max Marks: 80
Answer any FIVE Questions
All Questions carry equal marks
⋆⋆⋆⋆⋆

1. (a) Give a brief description of neural networks as optimizing networks. [8]


(b) Explain the use of ANNs for clustering and feature detection. [4+4]

2. Briefly discuss linear separability and the solution to the EX-OR problem. Also
suggest a network that can solve the EX-OR problem. [4+6+6]
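
A minimal network that answers the second part of this question is a 2-2-1 threshold network: one hidden unit computes OR, the other computes AND, and the output fires for OR-but-not-AND. The weights and thresholds below are one illustrative choice, not the only solution:

```python
def step(x):
    """Heaviside step activation: 1 if x > 0, else 0."""
    return 1 if x > 0 else 0

def xor_net(x1, x2):
    # Hidden unit 1 computes OR (threshold 0.5), hidden unit 2 computes AND (threshold 1.5).
    h_or  = step(1.0 * x1 + 1.0 * x2 - 0.5)
    h_and = step(1.0 * x1 + 1.0 * x2 - 1.5)
    # Output fires for OR-but-not-AND, i.e. XOR.
    return step(1.0 * h_or - 2.0 * h_and - 0.5)
```

Checking the truth table confirms the network outputs 0, 1, 1, 0 for the four input pairs, which a single-layer perceptron cannot achieve because XOR is not linearly separable.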

3. Explain the generalized delta rule and derive the weight update equations for a
multilayer feedforward neural network. [8+8]

4. Construct an energy function for a discrete Hopfield neural network of size N×N
neurons. Show that the energy function decreases every time the neuron output is
changed. [8+8]
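
A quick numerical check of the claim in this question: with a symmetric, zero-diagonal weight matrix and asynchronous updates, the discrete Hopfield energy never increases. The sketch below assumes zero bias currents and an 8-neuron network for brevity:

```python
import numpy as np

rng = np.random.default_rng(0)

# Symmetric weight matrix with zero diagonal, as the Hopfield postulates require.
N = 8
A = rng.normal(size=(N, N))
W = (A + A.T) / 2
np.fill_diagonal(W, 0.0)

def energy(s):
    # Discrete Hopfield energy with zero bias: E = -1/2 s^T W s
    return -0.5 * s @ W @ s

s = rng.choice([-1, 1], size=N).astype(float)
E_start = energy(s)
E_prev = E_start
for _ in range(50):                  # asynchronous updates, one neuron at a time
    i = rng.integers(N)
    s[i] = 1.0 if W[i] @ s >= 0 else -1.0
    E_now = energy(s)
    assert E_now <= E_prev + 1e-12   # energy never increases
    E_prev = E_now
```

The monotone decrease follows because updating neuron i to the sign of its local field h_i = Σ_j w_ij s_j changes the energy by −h_i(s_i_new − s_i_old) ≤ 0 when w_ii = 0.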

5. Discuss how “Winner-Take-All” in the Kohonen layer is implemented and explain
the architecture. Also explain the training algorithm. [16]

6. Derive expressions for the weight updates involved in counter propagation. [16]
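
As a concrete companion to the derivation, the sketch below applies the two counter propagation updates, the Kohonen winner update and the Grossberg outstar update, to a toy network. The layer sizes, learning rates, and training pair are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy counterpropagation layer sizes (illustrative choices).
n_in, n_kohonen, n_out = 3, 4, 2
W = rng.random((n_kohonen, n_in))   # Kohonen (competitive) layer weights
V = rng.random((n_out, n_kohonen))  # Grossberg (outstar) layer weights
alpha, beta = 0.3, 0.1              # learning rates for the two layers

def train_step(x, y):
    # Competitive phase: the unit whose weight vector is closest to x wins.
    winner = np.argmin(np.linalg.norm(W - x, axis=1))
    # Kohonen update: move the winner's weight vector toward the input.
    W[winner] += alpha * (x - W[winner])
    # Grossberg update: move the winner's outgoing weights toward the target.
    V[:, winner] += beta * (y - V[:, winner])
    return winner

x = np.array([0.2, 0.9, 0.4])
y = np.array([1.0, 0.0])
for _ in range(100):
    w_idx = train_step(x, y)
```

After repeated presentations of the same pair, the winner's inward weights converge to x and its outward weights to y, which is exactly what the two update expressions predict.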

7. Give a detailed note on the following:

(a) ART1 data structures. [8]


(b) ART2 simulation. [8]

8. Describe how a neural network may be trained for a pattern recognition task.
Illustrate with an example. [16]

⋆⋆⋆⋆⋆

1 of 1
Code No: RR420507 Set No. 2
IV B.Tech II Semester Regular Examinations, Apr/May 2007
NEURAL NETWORKS
( Common to Computer Science & Engineering and Electronics &
Computer Engineering)
Time: 3 hours Max Marks: 80
Answer any FIVE Questions
All Questions carry equal marks
⋆⋆⋆⋆⋆

1. (a) What is the Hebbian learning rule for training neural networks? Explain with
the help of an illustration. [4+4]
(b) What is the delta learning rule in neural networks? Explain with the help of
an illustration. [4+4]
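
For a numerical illustration of the two rules, the sketch below applies one Hebbian step and one delta-rule step to the same linear neuron; the input pattern, target, and learning rate are illustrative choices:

```python
import numpy as np

x = np.array([1.0, -1.0, 0.5])   # input pattern
w = np.array([0.2, 0.4, -0.1])   # initial weights
eta = 0.1                        # learning rate

# Linear neuron output.
y = w @ x                        # = -0.25 for these values

# Hebbian rule: strengthen weights in proportion to input * output activity.
w_hebb = w + eta * y * x

# Delta rule: move weights to reduce the error between target and output.
d = 1.0                          # desired (target) output
w_delta = w + eta * (d - y) * x
```

Note the difference the example makes visible: the Hebbian step uses only the correlation of input and output, while the delta step is driven by the error (d − y) and would vanish once the output matches the target.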

2. State and prove the perceptron convergence theorem. [2+14]
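
A small numerical companion to the theorem: the perceptron learning rule applied to a linearly separable toy problem (the AND function with bipolar targets) reaches an error-free pass in a finite number of epochs, as the theorem guarantees. The data set and epoch cap are illustrative:

```python
import numpy as np

# A linearly separable toy set: AND with bipolar targets.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
t = np.array([-1, -1, -1, 1], dtype=float)

w = np.zeros(2)
b = 0.0
converged = False
for epoch in range(100):
    errors = 0
    for x, target in zip(X, t):
        y = 1.0 if w @ x + b > 0 else -1.0
        if y != target:              # perceptron rule fires only on mistakes
            w += target * x
            b += target
            errors += 1
    if errors == 0:                  # a full error-free pass: converged
        converged = True
        break
```

The theorem bounds the number of weight corrections for any separable set; running the loop confirms convergence here well inside the epoch cap.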

3. Explain the generalized delta rule and derive the weight update equations for a
multilayer feedforward neural network. [8+8]

4. The truncated energy function E(v) of a certain two-neuron network is specified as

E(v) = −(1/2)(v₁² + 2v₁v₂ + 4v₂² + v₁). Assuming high-gain neurons,

(a) find the weight matrix W and the bias current vector i. [8]
(b) determine whether the single-layer feedback network postulates (symmetry
and lack of self-feedback) are fulfilled by the W and i computed in part (a). [8]
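
Under the common convention E(v) = −(1/2)vᵀWv − iᵀv for single-layer feedback networks, the entries of W and i can be read off by matching coefficients; the sketch below checks that reading numerically. The convention itself is an assumption, since other texts absorb the signs and the 1/2 factor differently:

```python
import numpy as np

# Coefficients read off under the convention E(v) = -1/2 v^T W v - i^T v:
#   v1^2 term -> w11 = 1, cross term 2*v1*v2 -> w12 = w21 = 1,
#   v2^2 term -> w22 = 4, linear term -(1/2)v1 -> i = [1/2, 0].
W = np.array([[1.0, 1.0],
              [1.0, 4.0]])
i = np.array([0.5, 0.0])

def E_given(v):
    # The energy function stated in the question.
    return -0.5 * (v[0]**2 + 2*v[0]*v[1] + 4*v[1]**2 + v[0])

def E_model(v):
    return -0.5 * v @ W @ v - i @ v

rng = np.random.default_rng(0)
for _ in range(10):
    v = rng.normal(size=2)
    assert np.isclose(E_given(v), E_model(v))
```

With this reading, W is symmetric, but its diagonal entries are nonzero, so the symmetry postulate holds while the no-self-feedback postulate is violated.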

5. (a) What is the Kohonen layer architecture? Explain its features. [4+4]
(b) Explain the Kohonen’s learning algorithm. [4+4]

6. (a) Explain briefly the counter propagation training algorithm. [10]


(b) Explain the various applications of counter propagation. [6]

7. (a) What are the advantages of an ART network? Discuss gain control in the ART
network. [3+5]
(b) Discuss in detail the orienting subsystem in an ART network. [8]

8. Describe how a neural network may be trained for a pattern recognition task.
Illustrate with an example. [16]

⋆⋆⋆⋆⋆

Code No: RR420507 Set No. 3
IV B.Tech II Semester Regular Examinations, Apr/May 2007
NEURAL NETWORKS
( Common to Computer Science & Engineering and Electronics &
Computer Engineering)
Time: 3 hours Max Marks: 80
Answer any FIVE Questions
All Questions carry equal marks
⋆⋆⋆⋆⋆

1. (a) Explain in detail the single layer artificial neural network with a diagram.
(b) Explain in detail the multilayer artificial neural network with a neat
diagram. [8+8]
2. Compare the similarities and differences between single layer and multilayer
perceptrons, and discuss in what respects multilayer perceptrons are advantageous
over single layer perceptrons. [6+6+4]
3. Describe how a feedforward multilayer neural network may be trained for a
function approximation task. Illustrate with an example. [6+10]
4. (a) What are the limitations of Hopfield network? Suggest methods that may
overcome these limitations. [4+4]
(b) A Hopfield network made up of five neurons is required to store the
following three fundamental memories: [8]

ξ₁ = [+1, +1, +1, +1, +1]ᵀ
ξ₂ = [+1, −1, −1, +1, −1]ᵀ
ξ₃ = [−1, +1, −1, +1, +1]ᵀ
Evaluate the 5-by-5 synaptic weight matrix of the network.
5. Explain Kohonen's method of unsupervised learning. Discuss an example of
its application. [8+8]
6. Describe the following:
(a) Grossberg layer. [8]
(b) Counter propagation network. [8]
7. (a) The ART network exploits in full one of the inherent advantages of neural
computing, namely parallel processing. Explain. [8]
(b) Describe the architecture and operation of ART2 network. [3+5]
8. What are the applications of Kohonen's networks in image processing and pattern
recognition? [16]

⋆⋆⋆⋆⋆

Code No: RR420507 Set No. 4
IV B.Tech II Semester Regular Examinations, Apr/May 2007
NEURAL NETWORKS
( Common to Computer Science & Engineering and Electronics &
Computer Engineering)
Time: 3 hours Max Marks: 80
Answer any FIVE Questions
All Questions carry equal marks
⋆⋆⋆⋆⋆

1. What are the modes of operation of a Hopfield network? Explain the algorithm for
storage of information in a Hopfield network. Similarly explain the recall algorithm.
[4+8+4]

2. Briefly discuss linear separability and the solution to the EX-OR problem. Also
suggest a network that can solve the EX-OR problem. [4+6+6]

3. Implement a backpropagation algorithm to solve the EX-OR problem and try an
architecture in which there is a hidden layer with three hidden units and the
network is fully connected. [8+8]
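
A minimal training sketch for this question, assuming sigmoid units, squared error, and plain batch gradient descent; the random seed, learning rate, and epoch count are illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(42)

# XOR training data.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
T = np.array([[0], [1], [1], [0]], dtype=float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Fully connected 2-3-1 network, as the question specifies.
W1 = rng.normal(size=(2, 3)); b1 = np.zeros(3)
W2 = rng.normal(size=(3, 1)); b2 = np.zeros(1)
eta = 0.5

def loss():
    H = sigmoid(X @ W1 + b1)
    Y = sigmoid(H @ W2 + b2)
    return float(np.mean((Y - T) ** 2))

initial_loss = loss()
for _ in range(5000):
    # Forward pass.
    H = sigmoid(X @ W1 + b1)
    Y = sigmoid(H @ W2 + b2)
    # Backward pass: error deltas via the generalized delta rule.
    dY = (Y - T) * Y * (1 - Y)
    dH = (dY @ W2.T) * H * (1 - H)
    W2 -= eta * H.T @ dY; b2 -= eta * dY.sum(axis=0)
    W1 -= eta * X.T @ dH; b1 -= eta * dH.sum(axis=0)

final_loss = loss()
```

The hidden layer gives the network the capacity a single-layer perceptron lacks for this non-separable problem, and the squared-error loss decreases over training.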

4. Show how the traveling salesman problem can be solved using the Hopfield model.
[16]

5. Discuss how “Winner-Take-All” in the Kohonen layer is implemented and explain
the architecture. Also explain the training algorithm. [16]

6. Explain the operation of counter propagation with suitable network model and give
the equations for training. [16]

7. Explain the major phases involved in the ART classification process. [16]

8. Describe how a neural network may be trained for a pattern recognition task.
Illustrate with an example. [16]

⋆⋆⋆⋆⋆
