
VERIFICATION

What makes an optimal SOC verification strategy

By Gaurav Jalan
Manager - Design Verification
SmartPlay Technologies India Pvt. Ltd

During the last decade, connected devices resulting from the cross-pollination of internet and mobile phone technologies dominated the electronics world. Personalised products and services have penetrated our society so much that it is hard to imagine life in their absence. To achieve the desired user experience, these devices are powered by hardware known as a system on chip (SOC). While the semiconductor industry carries the ownership of adhering to Moore's law, doubling the number of transistors in a given area every two years, the onset of application-driven designs adds further pressure to support multiple applications in one device, i.e. more hardware on the same chip. If connectivity was the buzz for a while, intelligent connected solutions will dominate the electronics world in this decade. This intelligence comes from additional sensors, which means more analogue on the chip, complicating SOCs further. Time-to-market pressure limits SOC design schedules, leaving minimal margin for error in the whole process. Verification, which claims the largest part of the schedule, becomes all the more critical.

Aggressive schedules make reusability the key to complex SOC designs. Realising an SOC eventually means integrating a whole set of complex IPs and ensuring that they all work in harmony. Integrating IPs poses serious challenges at multiple levels for SOC verification. The traditional SOC test bench needs to evolve into an SOC verification platform. Similarly, the test plan should discuss the complete verification strategy to be followed rather than just specifying the features to be verified.

A typical SOC design for a consumer electronic device will have blocks that can be broadly classified into processors, DSP cores, peripherals, memory controllers, layered bus architectures and analogue components. An optimal SOC verification strategy should address all the challenges that will be encountered during the process of verification. It should include answers to what to verify, how to verify, and are we done?

What to verify?

The main focus of SOC verification is verifying the integration of IPs. Integration verification spans multiple levels. At the basic level, connectivity checks are needed to ensure that the polarity of an output port of one block matches the expected polarity at the input port of another block. Connectivity checks between module boundaries as well as chip boundaries, i.e. with the IO pins, are a must. Every additional pin on the SOC adds to the cost of the part. To minimise pin count, the IOs of an SOC pass through a complex multiplexing structure. A simple bug in controlling the output enable of an IO can be a show-stopper. With different IPs designed by different teams, there is a fair chance of protocols being interpreted differently. Next to connectivity come protocol checks across module boundaries, ensuring that the blocks interacting with each other share the same understanding of the protocol.
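
To make the output-enable risk concrete, a minimal bare-metal C test along the following lines can drive a pad through the IO mux and read it back, catching a stuck output enable before silicon. The register names, addresses and bit positions are hypothetical placeholders, not any particular SOC's memory map.

    #include <stdint.h>

    /* Hypothetical register addresses for illustration only; the
     * real values come from the SOC memory map.                  */
    #define PINMUX_CTRL ((volatile uint32_t *)0x40001000u)
    #define GPIO_OE     ((volatile uint32_t *)0x40002000u)
    #define GPIO_OUT    ((volatile uint32_t *)0x40002004u)
    #define GPIO_IN     ((volatile uint32_t *)0x40002008u)

    #define PIN           5u          /* pad under test              */
    #define MUX_GPIO_MODE (1u << 8)   /* select GPIO function on pad */

    /* Drive the pad as an output, then read it back through the
     * input path; a broken output enable shows up as a mismatch.  */
    int check_pad_oe(void)
    {
        *PINMUX_CTRL = MUX_GPIO_MODE;   /* route GPIO to the pad   */
        *GPIO_OE    |= (1u << PIN);     /* enable the pad driver   */

        *GPIO_OUT |= (1u << PIN);       /* drive 1, expect 1 back  */
        if ((*GPIO_IN & (1u << PIN)) == 0u)
            return -1;

        *GPIO_OUT &= ~(1u << PIN);      /* drive 0, expect 0 back  */
        if ((*GPIO_IN & (1u << PIN)) != 0u)
            return -1;

        return 0;                       /* mux and OE path are sane */
    }

A variant of this loop per mux setting exercises each pad function through the same output-enable path.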

Alongside connectivity and protocol checks, verification of the initial sequences for SOC bring-up is equally vital. This mainly includes the reset and boot-up sequences. Today's SOCs have multiple ways to boot up based on different pin configurations. Verifying all possible boot-up scenarios and the state of the SOC after boot-up is crucial. Apart from the pin configuration for boot-up, the reset sequence of all the processors on the SOC is next on the verification list. Once the processors are initialised and ready to control SOC scenarios, a quick check on memory and register accesses for all valid addresses ensures that the design can be exposed to SOC-level scenarios.
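
As a sketch of such a sanity check, the C routine below walks a block's registers, comparing each against its documented reset value and performing a write/read-back; read-only and write-one-to-clear fields are ignored for brevity. The base address, register count and pattern are illustrative assumptions.

    #include <stdint.h>

    #define BLOCK_BASE   0x40010000u   /* hypothetical block base  */
    #define REG_COUNT    16u
    #define TEST_PATTERN 0xA5A5A5A5u

    /* After boot-up, confirm each register holds its documented
     * reset value and accepts a write/read-back. Real tests mask
     * out read-only and write-one-to-clear bits.                 */
    int check_register_access(const uint32_t *reset_values)
    {
        volatile uint32_t *reg = (volatile uint32_t *)BLOCK_BASE;

        for (uint32_t i = 0; i < REG_COUNT; i++) {
            if (reg[i] != reset_values[i])   /* reset value check */
                return -1;
            reg[i] = TEST_PATTERN;           /* write/read-back   */
            if (reg[i] != TEST_PATTERN)
                return -1;
        }
        return 0;
    }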

With the above setup in place, attention should turn to every control and data path in the design. Here, all the modules involved in a given scenario need to be configured, and the test vector should trigger cases such as multiple masters driving the peripheral or memory-controller data path. Apart from driving and receiving data, conditions such as interrupt assertions that eventually reach the processor need specific consideration. SOC designs involve the integration of multiple sub-systems. The integration of modules needs to be verified first at the sub-system level before moving to the SOC level. At the SOC level, inter-processor communication is most important. The states of the sub-systems should be in sync with the master sub-system at all times to ensure proper functioning of the SOC while handling multiple applications in parallel.
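
One lightweight way to check that sub-system states stay in sync is a shared-memory mailbox handshake between processors, sketched below in C. The mailbox address and token values are invented for illustration, and the slave core's boot code is assumed to answer the token once it reaches the expected state.

    #include <stdint.h>

    #define MAILBOX   ((volatile uint32_t *)0x20000000u)  /* shared RAM */
    #define IPC_READY 0xCAFE0001u   /* token posted by the master core  */
    #define IPC_ACK   0xCAFE0002u   /* reply expected from slave core   */
    #define TIMEOUT   100000u

    /* Master posts a token; the slave is assumed to overwrite it
     * with an acknowledgement once its state machine is in sync. */
    int check_ipc_handshake(void)
    {
        *MAILBOX = IPC_READY;
        for (uint32_t t = 0; t < TIMEOUT; t++) {
            if (*MAILBOX == IPC_ACK)
                return 0;    /* sub-system states are in sync */
        }
        return -1;           /* slave never responded         */
    }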

Analogue is slowly claiming a big chunk of the SOC, whether as multi-chip module (MCM), system-in-package (SiP) or single-die solutions. Pin or port connectivity between digital and analogue contributes the maximum number of silicon re-spins. Besides connectivity, the various functional states the analogue domain can enter need to be cross-checked. Low power is a critical feature implemented in all SOCs. Depending upon the application, both the digital and analogue blocks on the SOC can go into various power modes. Simulators can emulate the low-power cells at the RTL functional level based on the defined power states. Verification needs to confirm that the introduction of these cells does not upset the inputs, leading to functional failures. Test vectors that drive the SOC through all power modes and check the desired input and output ports/pins in each of these modes are important.
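
Such a test vector can be as simple as the C sketch below, which requests each power mode in turn and waits for the power controller to report it before returning to active. The register map and mode encodings are assumptions for illustration; the surrounding test bench would check isolation and retention behaviour around each transition.

    #include <stdint.h>

    #define PWR_MODE_REQ  ((volatile uint32_t *)0x40030000u)
    #define PWR_MODE_STAT ((volatile uint32_t *)0x40030004u)

    enum { MODE_ACTIVE = 0, MODE_IDLE, MODE_RETENTION,
           MODE_SHUTOFF, MODE_COUNT };

    /* Walk the SOC through every power mode and confirm the
     * controller reports the requested state each time.        */
    int walk_power_modes(void)
    {
        for (uint32_t m = 0; m < MODE_COUNT; m++) {
            *PWR_MODE_REQ = m;
            while (*PWR_MODE_STAT != m)
                ;                    /* wait for mode entry     */
            *PWR_MODE_REQ = MODE_ACTIVE;
            while (*PWR_MODE_STAT != MODE_ACTIVE)
                ;                    /* always return to active */
        }
        return 0;
    }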

Finally, use-case or concurrency testing is important. At the RTL verification level, test vectors should target enabling parallel data paths as expected in real applications, to weed out any bottlenecks in the architecture. On-chip and on-processor cache hit/miss scenarios, memory and DMA bottlenecks, bus bandwidth issues, interrupt prioritisation and the like should be planned for, to avoid unforeseen issues in later parts of the design cycle. Given that driver development today starts alongside SOC design, teams can plan to run real-life data at an early stage of SOC verification.
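
An illustrative C sketch of a simple concurrency vector keeps two bus masters busy at once: a DMA transfer runs while the processor keeps issuing memory traffic, so arbitration and bandwidth paths are exercised together. The DMA register interface and buffer addresses are assumed for illustration.

    #include <stdint.h>

    #define DMA_SRC  ((volatile uint32_t *)0x40040000u)
    #define DMA_DST  ((volatile uint32_t *)0x40040004u)
    #define DMA_LEN  ((volatile uint32_t *)0x40040008u)
    #define DMA_GO   ((volatile uint32_t *)0x4004000Cu)
    #define DMA_BUSY ((volatile uint32_t *)0x40040010u)

    #define SRC_BUF 0x20001000u
    #define DST_BUF 0x20002000u
    #define WORDS   256u

    /* Start a DMA copy, then keep the CPU hammering memory while
     * it runs, so both masters contend for the same bus fabric. */
    int concurrency_test(void)
    {
        volatile uint32_t *src = (volatile uint32_t *)SRC_BUF;
        volatile uint32_t *dst = (volatile uint32_t *)DST_BUF;

        for (uint32_t i = 0; i < WORDS; i++)
            src[i] = i ^ 0xDEADBEEFu;        /* seed the source   */

        *DMA_SRC = SRC_BUF; *DMA_DST = DST_BUF;
        *DMA_LEN = WORDS;   *DMA_GO  = 1u;

        while (*DMA_BUSY)                    /* CPU traffic runs  */
            src[0] ^= 1u;                    /* parallel to DMA   */

        for (uint32_t i = 1; i < WORDS; i++)   /* word 0 was      */
            if (dst[i] != (i ^ 0xDEADBEEFu))   /* touched by CPU  */
                return -1;
        return 0;
    }
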
How to verify?

While what to verify provides a layout for executing verification, other important aspects need to be planned early enough that the execution is as smooth as possible. With SOC designs expanding to millions of gates, the total simulation time for a vector increases drastically. Given limited compute resources and licences, verification teams need to make sure that each run results in progress. Debugging failures, which consumes most of a verification engineer's time, becomes an arduous challenge. The simulation dump file size itself can be a bottleneck, depending upon the depth of design signals being strobed. Teams might have to rerun simulations with a limited dump window to probe deep inside the design for debugging. The important factor here is the post-processing of the output after simulation is complete. Whether we add assertions, monitors that dump design state, or signal dumps, anything that minimises reruns for debugging is of great help. For simulating real-life data, a setup with hardware accelerators can be a good choice for faster turnaround time. Planning regressions from the very start is vital. It not only avoids rework but also helps in managing progress. As verification closure approaches, regressions become significant. Planning additional interim compute resources and licences for this peak usage can be very helpful in meeting schedules.

While planning infrastructure is important, verification effectiveness depends upon the SOC verification environment. If the various IPs forming a sub-system have constrained-random environments that are portable from the IP to the SOC level, teams can plan to build a random test bench. A constrained-random test bench, however, has limitations. Instead of the processor, these test benches typically have a VIP (BFM) to drive the bus, so the connectivity with the processor remains untested. Even the reset and boot-up sequences cannot be verified with these test benches. Secondly, the run time at SOC level is high, and teams may not want to spend time debugging failures caused by incorrect constraints. Finally, when verification moves from RTL to gate level, the BFMs do not carry any timing annotation, which limits their reuse in SDF-annotated netlist simulations. However, various components of the IP-level random test bench can be very useful at SOC level. In particular, bus monitors, protocol checkers, assertions, scoreboards and coverage monitors can be reused in the SOC test bench. Care needs to be taken to avoid connecting unnecessary probes, as these further load the simulator. The test cases that bring the design to an initial state and drive the processor can be in C or assembly language. C is preferable so that if the processor changes in the next version of the SOC, the test vectors can still be reused. A set of these test vectors can also be reused for netlist simulations. The test bench should be scalable from sub-system to chip level. It should have APIs to enable performance checks whenever required and should be conducive to reproducing scenarios from pre- or post-silicon validation.
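
The reuse argument for C is easiest to see when the test scenario is written against a thin, processor-agnostic access layer, as in the sketch below: only that layer changes if the next SOC revision swaps the processor, and the same vector can be rerun on the SDF-annotated netlist. The UART registers used as the example peripheral are hypothetical.

    #include <stdint.h>

    /* Thin, processor-agnostic access layer: the only part that
     * changes if the processor changes in the next SOC version. */
    static inline void reg_write(uint32_t addr, uint32_t val)
    {
        *(volatile uint32_t *)addr = val;
    }

    static inline uint32_t reg_read(uint32_t addr)
    {
        return *(volatile uint32_t *)addr;
    }

    /* Hypothetical UART block used as the example peripheral.   */
    #define UART_CTRL 0x40050000u
    #define UART_DATA 0x40050004u
    #define UART_STAT 0x40050008u
    #define TX_DONE   (1u << 0)

    /* The scenario touches hardware only through the access
     * layer, so it ports across processors and netlist runs.    */
    int uart_smoke_test(void)
    {
        reg_write(UART_CTRL, 1u);      /* enable the block       */
        reg_write(UART_DATA, 0x55u);   /* transmit one byte      */
        while ((reg_read(UART_STAT) & TX_DONE) == 0u)
            ;                          /* wait for completion    */
        return 0;
    }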

For the analogue blocks on the SOC, behavioural models should be developed and validated separately before they are used at SOC level. These models speed up simulations for connectivity and functional checks at the digital-analogue boundaries of the design. Once the digital netlist is available, simulating the transistor-level analogue netlist together with the digital netlist ensures that the basic connectivity is at least verified before sign-off. Low-power features need to be verified at both RTL and gate level. At the RTL level, the tools should be able to emulate the behaviour of the low-power cells wherever desired. Models for voltage regulators and PLLs further help in putting together scenarios close to real life, particularly for multi-voltage designs where power shut-off is required.

Are we done?

Even if the gate count increases linearly, the number of possible states increases exponentially. It is impossible to verify all possible states of the design. A clearly defined set of coverage goals ensures that the verification effort is focused towards convergence with quality in limited time. Module-level verification enables coverage data collection at the IP level. Complete code coverage (line, expression, block, FSM and toggle) reports can be generated at module level. Functional coverage and assertion coverage data can also be collected at this level. If the modules are verified using formal techniques, one can collect figures on proofs, reachability and explored contexts, indicating the total coverage on that module. One issue here is that different tools dump coverage information in varied formats, due to which unit-level coverage may not always be pulled up to the SOC level. Work towards a unified coverage database is in progress. For modules where coverage numbers from the unit level are either unknown (not part of the IP delivery) or cannot be merged at SOC level, it is advisable to define a clear set of SOC-level coverage goals.

At the SOC level, since the focus is on integration testing, toggle coverage plays an important role. Whether it is the analogue-digital boundary, a module-to-IO-pin connection or the inter-connectivity of the modules, toggle coverage ensures that each bit of the pin/port has individually toggled from 1 to 0 and from 0 to 1; a directed sweep like the one sketched after this paragraph can close such goals deterministically. Besides this, defining functional coverage and assertions at chip level is advisable. For analogue components, one can define functional coverage and assertions on the behavioural models to ensure that the regular operating modes are verified. For low-power simulations, the power format files can easily be transformed into functional coverage parameters, and the power controller FSMs can be covered using FSM coverage, again at SOC level, to exhaustively verify the SOC power modes.
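
Toggle goals like these can be closed with directed sweeps rather than waiting for random traffic to produce them. The C sketch below walks a one and then a zero across a hypothetical 16-bit GPIO port so that every pin toggles both ways for the toggle-coverage tool; the port address and width are assumptions.

    #include <stdint.h>

    #define GPIO_OUT   ((volatile uint32_t *)0x40002004u)
    #define PORT_WIDTH 16u

    /* Walk a one, then a zero, across the port so that toggle
     * coverage sees every bit go 0->1 and 1->0 individually.   */
    void toggle_sweep(void)
    {
        for (uint32_t bit = 0; bit < PORT_WIDTH; bit++) {
            *GPIO_OUT = (1u << bit);             /* walking one  */
            *GPIO_OUT = 0u;
        }
        for (uint32_t bit = 0; bit < PORT_WIDTH; bit++) {
            *GPIO_OUT = 0xFFFFu & ~(1u << bit);  /* walking zero */
            *GPIO_OUT = 0xFFFFu;
        }
    }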

Regular collection of coverage numbers through regressions gives a strong indication of progress. Other indicators include a diminishing bug rate and clean regressions. A well-defined checklist, covering reviews of the verification environments, test vectors, any force statements applied from the test bench, and ignore conditions, is a must for signing off SOC verification.

SOC verification brings multi-dimensional challenges to verification teams. These challenges continue to grow as the semiconductor industry keeps adding more features, smaller form factors, higher performance and lower energy consumption to its designs. The key to first-silicon success for such complex designs is defining a comprehensive verification strategy with emphasis on all fronts, i.e. feature verification, verification infrastructure, reusability, test bench definition and progress measurement.
