
Connecting SystemC-AMS Models with OSCI TLM 2.0 Models Using Temporal Decoupling
Markus Damm, Christoph Grimm, Jan Haase
Institute of Computer Technology
Vienna University of Technology, Austria
{damm|grimm|haase}@ict.tuwien.ac.at
Andreas Herrholz
OFFIS Institute
Oldenburg, Germany
andreas.herrholz@offis.de
Wolfgang Nebel
Carl von Ossietzky University
Oldenburg, Germany
wolfgang.nebel@informatik.uni-oldenburg.de
Abstract
Recent additions to the system modelling language SystemC, namely SystemC-AMS and OSCI TLM 2.0, provide more efficient ways to model systems within their respective domains. However, most of today's embedded systems are heterogeneous, requiring some way to connect and simulate models from different domains. In this paper we present a first approach on how to connect SystemC-AMS models and TLM 2.0 loosely timed models using temporal decoupling. We show how certain properties of the involved Models of Computation (MoC) can be exploited to maintain high simulation performance. Using an example to show the feasibility of our approach, we could also observe a certain trade-off between simulation performance and accuracy. Furthermore, we discuss semantic issues and decisions that have to be made when models are connected. As these decisions are typically application-driven, we propose a converter structure that keeps converters simple but also provides ways to model application-specific behaviour.
1 Introduction
Today, a lot of research and development activities in electronic design automation focus on reducing the design productivity gap due to Moore's Law. Traditional approaches and design languages are no longer efficient enough to cope with the rising complexity in system design. As one result, the system modelling language SystemC is nowadays widely used for embedded hardware/software design in industry and academia. SystemC is based on C++ and available under an open source license, so it is easily extensible by other methodology-specific libraries.
1 The work presented in this paper has been carried out in the ANDRES project, co-funded by the European Commission within the Sixth Framework Programme (IST-5-033511).
One of these modelling methodologies introduced with SystemC is Transaction Level Modelling (TLM) [2, 4]. TLM models the communication of processes by method calls, enabling early integration of hardware and software and significantly improving simulation speed compared to cycle-accurate models. Recently, the Open SystemC Initiative (OSCI) released the second draft of its TLM 2.0 standard for public review [6]. TLM 2.0 extends the previous OSCI TLM standard by a more detailed approach for modelling bus-oriented systems-on-chip, enabling easy reuse of existing models in different architectures. TLM 2.0 introduces three different modelling styles (also called coding styles) regarding the modelling of time, providing different trade-offs between simulation accuracy and speed. In the untimed modelling style, time is not modelled at all. In the loosely timed modelling style, processes are allowed to run ahead of the global SystemC simulation time (temporal decoupling). Since this modelling style is the focus of our interest, we will describe it in more detail later. Finally, when using the approximately timed modelling style, all processes run in lock-step with the global SystemC simulation time.
Another SystemC extension is SystemC-AMS [7], enabling the modelling of analogue/mixed-signal applications using different Models of Computation, e.g. Timed Synchronous Dataflow (TDF) for data-flow oriented models, linear electrical networks (LEN) or differential algebraic equations (DAE). SystemC-AMS focuses on executable specification rather than exact modelling, providing higher simulation performance than circuit simulators while being slightly less accurate.
Both of the given SystemC extensions work well in their respective domains. However, today's embedded systems are usually heterogeneous in nature, combining components from different domains, such as digital hardware, software and analogue hardware. Therefore an efficient design flow for heterogeneous systems would require ways to combine both approaches in one joint model enabling early simulation and exploration. Though SystemC-AMS provides means to connect AMS models with discrete-event SystemC models, these are not enough to efficiently integrate AMS models in TLM-based systems, since they stay on the signal level.
In this paper, we present and discuss a first approach for a MoC converter that can be used to connect loosely timed TLM models with TDF models. To the best of our knowledge, there is no other published work on this topic so far. The rest of this paper is organized as follows: We start with a brief introduction of the TLM 2.0 loosely timed modelling style and the SystemC-AMS TDF computational model. This is followed by a discussion on how these two models can be connected by exploiting certain similarities to maintain their simulation efficiency. We then present the general structure of converters from TLM to TDF and vice versa in Sections 5 and 6, and present an example where they are used in Section 7. Section 8 discusses semantic issues that may arise when TDF and TLM models are coupled and how these issues can be handled using a structured modelling approach. We conclude in Section 9.
2 OSCI TLM 2.0 loosely timed modelling style
Draft 2 of the OSCI TLM 2.0 standard [6] introduces the so-called loosely timed modelling style (LT-TLM). In this modelling style, a non-blocking method interface is used, where initiator processes generate transactions (the payload) and send them using a method call to target processes. The speciality of this modelling style is the possibility to annotate a transaction with a time delay d to mark it (in the case of d > 0) as a future transaction. That is, the loosely timed modelling style allows initiator processes to warp ahead of simulation time. The target processes, on the other hand, must deal with these future transactions accordingly. They have to store them in a way such that they can access and process delayed transactions at the right time. For example, the payload event queue (PEQ) [1], foreseen by the TLM 2.0 standard, can be used for this.
The idea of this approach is that context switches
on the simulation host system (generally triggered by
wait() statements) are reduced and thus simulation
performance is gained. Instead of letting initiator and
target repeatedly produce and process a transaction re-
spectively, an initiator can produce a chunk of transac-
tions in advance, which is then processed by a target
(ideally) at once. However, this may lead to time-twisted
transactions, i.e. the order of arrival of two transactions
at one target is different from their temporal order. If the
processing of these transactions is not synchronized, this
may lead to causal errors.
The loosely timed modelling style basically allows
processes to run according to their own, local simulation
time. To organize this, TLM 2.0 provides the facility
of the quantum keeper. Processes can use the quantum
keeper to keep track of their local timewarp, and yield to
the SystemC simulation kernel after a certain time quan-
tum is used. Typically, a smaller time quantum will re-
duce the chance of causal errors while a greater quan-
tum increases the simulation performance. So using the
quantum keeper, the tradeoff between simulation perfor-
mance and accuracy can be controlled.
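The following sketch (not taken from the paper) illustrates this mechanism using the quantum keeper utility of the TLM 2.0 kit; the send_to_target() helper is hypothetical and stands in for the nb_transport() interface method call, whose socket and interface names differ between draft 2 and later TLM 2.0 releases.

// Loosely timed initiator sketch: local time warp managed by a quantum keeper.
#include <systemc>
#include <tlm>
#include <tlm_utils/tlm_quantumkeeper.h>

struct lt_initiator : sc_core::sc_module
{
    SC_CTOR(lt_initiator) { SC_THREAD(run); }

    void run()
    {
        tlm_utils::tlm_quantumkeeper qk;          // tracks the local time warp
        tlm_utils::tlm_quantumkeeper::set_global_quantum(
            sc_core::sc_time(100, sc_core::SC_US));
        qk.reset();

        while (true) {
            tlm::tlm_generic_payload trans;
            trans.set_command(tlm::TLM_WRITE_COMMAND);

            // Annotate the transaction with the current local offset, so the
            // target sees it as a future transaction w.r.t. global time.
            sc_core::sc_time delay = qk.get_local_time();
            send_to_target(trans, delay);                  // hypothetical helper

            qk.inc(sc_core::sc_time(1, sc_core::SC_US));   // advance local time
            if (qk.need_sync())
                qk.sync();   // quantum used up: yield to the SystemC kernel
        }
    }

    // Placeholder for the actual nb_transport() call through an initiator socket.
    void send_to_target(tlm::tlm_generic_payload&, const sc_core::sc_time&) {}
};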
3 SystemC-AMS TDF
The main Model of Computation (MoC) provided by SystemC-AMS is the Timed Synchronous Dataflow (TDF) MoC. It is a timed version of the (originally untimed) Synchronous Dataflow (SDF) MoC [5], where processes communicate via bounded fifos. The number of data tokens produced and/or consumed by a process (the data rate) is constant for all process firings. For example, consider a process which has two inputs with data rates 2 and 3, and an output with data rate 4. Every time this process is fired, it will consume 2 and 3 tokens from its two inputs and will produce 4 tokens at its output.
The advantage of SDF is that the schedule of process execution can be computed in advance, such that the simulation kernel is only engaged with executing this static schedule, which makes the simulation of SDF models very fast. The speciality of the SystemC-AMS TDF MoC is that a certain time span (the sampling period) is associated with token consumption and production. The sampling period is an attribute of the (input or output) TDF port classes, which are analogous to the SystemC sc_in and sc_out classes, respectively. Via TDF ports, TDF modules are connected to each other by TDF signals. Again, these facilities correspond to the sc_module and sc_signal classes. A TDF module encapsulates the actual process as a standard method called sig_proc(). In the current SystemC-AMS prototype, the sampling period has to be set at only one TDF port of one TDF module of a connected set of TDF modules (also called a TDF cluster). The sampling periods of all other TDF ports within the cluster are then derived from this one given sampling period.
For example, if the sampling period of an input port p1 of a TDF module M1 is set to 2 ms, with a data rate of 3, the consumption of one token takes 2 ms, such that the consumption of all 3 tokens takes 6 ms. If M1 also contains an output port p2 with data rate 2, the sampling period of p2 is 6 ms divided by the data rate of 2, resulting in 3 ms. An input port p3 of a TDF module M2 which is connected to p2 via a TDF signal now also has a sampling period of 3 ms, regardless of its data rate.
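As an illustration, the following sketch shows how such a module could be written; it assumes the SystemC-AMS prototype's SCA_SDF_MODULE, sca_sdf_in/sca_sdf_out ports and attribute methods, whose exact header name and signatures may differ between prototype releases.

// Sketch of module M1: input rate 3 at a 2 ms period gives a 6 ms module
// period, so the rate-2 output gets a 3 ms sampling period.
#include <systemc-ams.h>

SCA_SDF_MODULE(M1)
{
    sca_sdf_in<double>  p1;   // data rate 3, sampling period set to 2 ms
    sca_sdf_out<double> p2;   // data rate 2, period derived as 6 ms / 2 = 3 ms

    void attributes()
    {
        p1.set_rate(3);
        p1.set_T(sc_core::sc_time(2.0, sc_core::SC_MS)); // one port sets the period
        p2.set_rate(2);                                   // p2's period is derived
    }

    void sig_proc()
    {
        // One firing: consume 3 tokens, produce 2 (a simple down-sampling
        // average serves as a placeholder computation).
        double avg = (p1.read(0) + p1.read(1) + p1.read(2)) / 3.0;
        p2.write(avg, 0);
        p2.write(avg, 1);
    }

    SCA_CTOR(M1) {}
};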
4 Connecting LT-TLM and TDF
At first glance, bringing these two approaches together seems to be futile. On the one hand, there is the loosely timed TLM approach with its local time warps decoupled from the global simulation time. On the other hand, we have the strictly timed TDF approach which runs at an unstoppable pace. But by taking a closer look at how the TDF simulation works when using a static schedule (as is the case with the current SystemC-AMS prototype), we find surprising similarities.
The SystemC-AMS simulation kernel uses its own simulation time, whose current value is returned by the TDF module method sca_get_time() (from now on denoted by t_TDF). The current SystemC simulation time (from now on denoted by t_DE) is returned by the DE module method sc_time_stamp(). By DE, we denote the discrete event MoC implemented by the SystemC simulation kernel, while a DE module denotes an sc_module instance.
If a pure SystemC-AMS TDF model is used in a simulation, the SystemC-AMS simulation kernel blocks the DE kernel all the time, so the DE simulation time doesn't proceed at all. However, there might be a need to connect and synchronize TDF modules to DE modules. SystemC-AMS provides converter ports for this purpose, namely sca_scsdf_in and sca_scsdf_out. They can be used within TDF modules to connect to instances of the SystemC discrete event sc_signal.
If there is an access to such a port within the sig_proc() method of a TDF module, the SystemC-AMS simulation kernel interrupts the execution of the static schedule of TDF modules and yields to the SystemC DE simulation kernel, such that the DE part of the model can now execute, effectively advancing t_DE until it is equal to t_TDF. Now, the DE modules reading from signals driven by TDF modules can read their new values at the right time, and TDF modules reading from signals driven by DE modules can read their correct current values.
Figure 1 shows an example using the TDF module M1 from Section 3. The data tokens consumed are on the left axis, and those produced are on the right axis. The numbers beneath the tokens denote the time (in ms) at which the respective token is valid. The time spans above the tokens indicate the values of t_TDF when the respective tokens are consumed resp. produced. The time spans below indicate the corresponding values of t_DE. At the beginning of the example, t_TDF > t_DE already holds, until t_TDF = 38 ms. Then the SystemC-AMS simulation kernel initiates synchronization, for example because M1 contains a converter port which it accesses at that time, or because another TDF module within the same TDF cluster accesses its converter port.
Figure 1. Example for the relation of t_DE to t_TDF with synchronization (TDF module M1: input p1 with rate 3 and 2 ms sampling period, output p2 with rate 2 and 3 ms sampling period; the figure shows the token valid times from 26 ms to 42 ms and the point at t_TDF = 38 ms where t_DE catches up at synchronization).
The important conclusion is that TDF modules also use a certain time warp. In general, TDF modules run ahead of SystemC simulation time, since t_TDF ≥ t_DE always holds. Further time warp effects result from using multi-rate data flow. When a TDF module has an input port with data rate > 1, it also receives "future values" with respect to t_DE, and even t_TDF. When a TDF module has an output port with data rate > 1, it also sends values "to the future" with respect to t_DE and t_TDF. The difference to TLM is that the effective local time warps are a consequence of the static schedule, with the respective local time offsets only varying because of interrupts of the static schedule execution due to synchronization needs.
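To make the multi-rate effect concrete, the following small stand-alone snippet (not part of SystemC-AMS) computes the valid times of the tokens produced in one firing of the output port p2 from Figure 1.

// With p2 (rate 2, sampling period 3 ms), a firing starting at t_TDF = 26 ms
// produces tokens valid at 26 ms and 29 ms, i.e. ahead of t_DE and t_TDF.
#include <cstdio>

// Valid time of the k-th token (k = 0, 1, ...) of a firing starting at
// t_start, given the port's sampling period (all values in ms).
double token_valid_time(double t_start, double sampling_period_ms, int k)
{
    return t_start + k * sampling_period_ms;
}

int main()
{
    const double t_start = 26.0, T_p2 = 3.0;    // values taken from Figure 1
    for (int k = 0; k < 2; ++k)                 // output data rate of p2 is 2
        std::printf("token %d valid at %.0f ms\n",
                    k, token_valid_time(t_start, T_p2, k));
    return 0;
}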
In the following two Sections we describe how the streaming data of TDF signals can be converted to TLM 2.0 transactions and vice versa, effectively proposing general TLM↔TDF converters. We do this in a way such that the temporal decoupling approach of the TLM 2.0 loosely timed modelling style is exploited to maintain a high simulation performance. The transaction class used will always be the TLM 2.0 generic payload.
5 Converting from LT-TLM to TDF
The principal idea of a TLM→TDF converter is to take write-transactions (i.e. with a command set to TLM_WRITE_COMMAND) and stream their data to a TDF signal. However, we are confronted with several difficulties.
First of all, we can't expect the data from the TLM side to arrive at certain rates, even if we take the time warp into account. We might get huge amounts of data within short time spans, and almost no data for long time spans. Nevertheless, we have to feed an unstoppable data token consumer, namely the TDF reading side.
The obvious solution for this problem is to use an internal fifo within the converter to buffer the incoming data. If a transaction causes a buffer overflow (when the internal buffer is chosen to be of a fixed size), it is returned with an error response. Currently, we set the response status to TLM_GENERIC_ERROR_RESPONSE, but we might consider using a generic payload extension to make this more specific. If, on the other hand, the buffer is empty, default value(s) are written to the TDF side to fill the gap(s) (e.g. 0).
Figure 2. The TLM→TDF converter architecture (a top-level DE module with a tlm_nb_target_socket and the nb_transport() implementation wraps a TDF module with sig_proc() and a TDF output port; a SystemC-AMS converter port connects the TDF module to a DE signal used for synchronization).
Another advantage of using an internal buffer is that the size of the data section of the write-transactions can be set independently of the data rate of the TDF output of the converter. In particular, the transaction data size can vary over the course of the simulation.
Another problem, however, arises due to the delays of the transactions. It can't be guaranteed that after the arrival of a transaction T1, which is ready at e.g. t_DE = 50 ms, there won't be a transaction T2 which arrives later, but is ready at e.g. t_DE = 40 ms. That is, transactions can arrive with twisted time warps, since they might have emerged from different initiators, or from one initiator producing such twisted time warped transactions (there is no rule within TLM 2.0 which prohibits this). Therefore, we can't write the transaction data to the buffer right away. Instead, we write the transaction to a PEQ, where the transactions are stored in the order of their processing time, such that these delay twists are resolved. When the local time of the converter has proceeded far enough for a transaction in the PEQ to be processed, its data gets written to the buffer. For this, the PEQ is checked for transactions with a time stamp smaller than or equal to the current local time of the converter at every sig_proc() execution.
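A possible shape of the converter's TLM-side target method is sketched below; this is not the authors' code, it uses tlm_utils::peq_with_get from the TLM 2.0 kit as the PEQ, and the buffer capacity handling is simplified.

// TLM-side front end of the TLM→TDF converter (sketch).
#include <cstddef>
#include <tlm>
#include <tlm_utils/peq_with_get.h>

struct tlm2tdf_frontend : sc_core::sc_module
{
    tlm_utils::peq_with_get<tlm::tlm_generic_payload> peq;
    std::size_t free_buffer_space;   // space left in the internal fifo

    SC_CTOR(tlm2tdf_frontend) : peq("peq"), free_buffer_space(1024) {}

    tlm::tlm_sync_enum nb_transport(tlm::tlm_generic_payload& trans,
                                    tlm::tlm_phase&           phase,
                                    sc_core::sc_time&         delay)
    {
        if (trans.get_command() != tlm::TLM_WRITE_COMMAND ||
            trans.get_data_length() > free_buffer_space) {
            // Buffer overflow (or unsupported command): reject the transaction.
            trans.set_response_status(tlm::TLM_GENERIC_ERROR_RESPONSE);
            return tlm::TLM_COMPLETED;
        }
        // Reserve space for the payload; it is released again when the data
        // has been streamed out on the TDF side (not shown).
        free_buffer_space -= trans.get_data_length();

        // Queue the transaction at its annotated (possibly future) time so
        // that delay twists are resolved before the data reaches the fifo.
        peq.notify(trans, delay);
        trans.set_response_status(tlm::TLM_OK_RESPONSE);
        return tlm::TLM_COMPLETED;
    }
};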
Another issue we have to deal with is synchronization. Since the converter is a TDF module, it might run way ahead of t_DE, and the initiators feeding the converter might not have had the chance to produce sufficient transactions, even if they would be ready by the time of the converter's sig_proc() execution. Therefore, if there are no transactions available in the PEQ when the sig_proc() is processed, and the buffer also does not hold enough data for the output, the SystemC simulation kernel must get the chance to catch up. This is done by connecting the converter to an sc_signal using a SystemC-AMS converter port. If a reading access is now performed on this converter port, the SystemC-AMS simulation kernel interrupts the processing of the static schedule, and the SystemC simulation kernel regains control and can advance the SystemC simulation (including the TLM initiator modules) until t_DE = t_TDF.
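The corresponding TDF-side sig_proc() could then look roughly as follows; buffer_, out, out_rate_, sync_port_ and DEFAULT_VALUE are placeholder members, and the PEQ access is simplified to mean "next transaction ready with respect to the converter's local time".

// TDF-side processing of the TLM→TDF converter (sketch, not the authors' code).
void sig_proc()
{
    // 1. Move the data of all transactions that are ready by now from the
    //    PEQ into the internal fifo (payload bytes become data tokens).
    while (tlm::tlm_generic_payload* trans = peq.get_next_transaction())
        buffer_.push_bytes(trans->get_data_ptr(), trans->get_data_length());

    // 2. If the fifo cannot serve one firing, let the DE side catch up by
    //    reading the DE converter port (this yields to the SystemC kernel
    //    until t_DE = t_TDF), then drain the PEQ again.
    if (buffer_.size() < out_rate_) {
        (void) sync_port_.read();          // SystemC-AMS converter port access
        while (tlm::tlm_generic_payload* trans = peq.get_next_transaction())
            buffer_.push_bytes(trans->get_data_ptr(), trans->get_data_length());
    }

    // 3. Write one token per output sample; fill remaining gaps with a
    //    default value (e.g. 0) as described above.
    for (unsigned long i = 0; i < out_rate_; ++i)
        out.write(buffer_.empty() ? DEFAULT_VALUE : buffer_.pop(), i);
}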
Figure 2 shows an overview of the architecture of the proposed converter. The core is a TDF module, which contains the PEQ, the buffer, and the port to the TDF side. It is encapsulated within a DE module, which implements the TLM 2.0 non-blocking transport interface. For synchronization, the TDF module is connected to a DE signal via a SystemC-AMS converter port. Note that the DE signal needs no driver; simply accessing it from within the TDF module is sufficient to trigger synchronization.
6 Converting from TDF to LT-TLM
When converting from TDF to TLM, we want to bundle the streaming TDF data into the data section of a transaction and send it to the TLM side. This would be an easy and straightforward task if we considered the converter (i.e. the TDF side) to act as a TLM initiator. In this case, the transaction's command would be set to TLM_WRITE_COMMAND, and the delay of the transaction could be set to the difference between t_DE and the valid time stamp of the last token sent via the transaction (i.e. t_TDF + token number × sampling period).
However, despite its striking simplicity, the TDF models we focus on here just provide streaming input to the TLM side, and the idea of such models acting as a TLM initiator is as realistic as an A/D converter acting as a bus master. Also, the address the transaction is sent to would basically be static throughout the simulation, since we don't get addresses from the TDF side (at least not in an obvious manner). As a consequence, the converter would need to read address-manipulating transactions coming from the TLM side, too, in order to be useful.
There are useful applications for TDF models acting as TLM initiators, and we get back to this topic in Section 8. Nevertheless, for now we chose a conversion approach where the initiators are again on the TLM side. That is, TLM initiators send read-transactions (i.e. with the command set to TLM_READ_COMMAND) to the converter, which copies the data tokens it receives from the TDF side into the data section of the transaction and returns it.
The advantage of this approach is that it is quite similar to the TLM→TDF conversion direction. For example, the converter needs an internal buffer to store the incoming TDF data tokens, for similar reasons as discussed in Section 5. Here, the TLM side might request data of varying length at varying times, while the TDF side provides data at an unstoppable pace. Therefore, an internal fifo is used again.
We also use a PEQ to store the incoming transactions, such that time delay twists are resolved. In this case, the standard TLM 2.0 PEQ is used, which produces an event when a transaction within the queue is ready. In that case, the transaction is taken from the queue, and it is checked whether the internal buffer provides sufficiently many data tokens to fill the transaction's data section. Here, we also have to make sure that we don't return "future" tokens from the transaction's point of view. Note that the presence of such tokens in the internal buffer is perfectly possible when using multi-rate data flow. If enough valid data tokens are present, the transaction is returned with them. If not, it is returned with an error response status.
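The handling of a ready read-transaction could be sketched as follows; this is illustrative only, it assumes the data tokens are doubles, and buffer_ with its count_valid_until()/pop_value() methods is a hypothetical token fifo storing (value, valid time) pairs.

// Serving a read-transaction once the PEQ reports it as ready (sketch).
void serve_ready_transaction(tlm::tlm_generic_payload& trans,
                             const sc_core::sc_time&   ready_time)
{
    const unsigned long n = trans.get_data_length() / sizeof(double);
    double* dst = reinterpret_cast<double*>(trans.get_data_ptr());

    // Only tokens valid at or before ready_time may be returned, so that no
    // "future" tokens (possible with multi-rate data flow) leave the converter.
    if (buffer_.count_valid_until(ready_time) < n) {
        trans.set_response_status(tlm::TLM_GENERIC_ERROR_RESPONSE);
        return;
    }
    for (unsigned long i = 0; i < n; ++i)
        dst[i] = buffer_.pop_value();      // oldest valid token first

    trans.set_response_status(tlm::TLM_OK_RESPONSE);
}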
Figure 3. The TDF→TLM converter (the architecture mirrors Figure 2: a top-level DE module with a tlm_nb_target_socket and the nb_transport() implementation, a TDF module with sig_proc() and a TDF input port connected via a TDF signal, and a DE signal accessed through a converter port for synchronization).
When the internal fifo is chosen to be of finite size, buffer overflows can occur. Therefore, at every sig_proc() execution, it is checked whether the internal buffer contains enough space to take the next chunk of data tokens provided by the TDF side. If not, the converter yields to the SystemC simulation kernel with the same converter port access technique described in Section 5. This gives the TLM side the chance to produce more reading transactions, and might advance t_DE far enough for transactions in the PEQ to become ready. If there is still not enough space in the internal buffer, the surplus data tokens are simply discarded and a warning is raised.
As can be seen in Figure 3, the architecture of the TDF→TLM converter is quite similar to the architecture of the TLM→TDF converter. Since the TDF part now does not need to access the PEQ, it is contained in the top-level DE module.
7 Example system
To test our conversion approach, we implemented an example system containing two TDF sources, two TDF drains, three TLM digital signal processing (DSP) modules and a TLM bus (see Figure 4). The idea of this system is that the data coming from source_i is processed by any of the DSPs, and the results are then passed to the respective drain_i (i = 1, 2). Here, the exact nature of the computations performed by the DSPs was not the focus of our interest. However, a possible example would be a software-defined radio, where the TDF sources would provide data to be modulated (or demodulated). The modulation (or demodulation) schemes to be applied to the source data could be different for every source, but every DSP provides the capabilities to perform them. That is, every DSP checks the sources for new data, reads them, processes them accordingly, and writes the results to the appropriate drain.
Figure 4. Example system (two TDF sources and two TDF drains connected via converters to a TLM bus with three DSP modules).
Our interest in this model is to demonstrate the functional correctness of our converters and to observe accuracy / simulation speed trade-offs, typical for loosely timed TLM models. The loss of accuracy in this case manifests itself in data packages arriving at the signal drains in the wrong order. This is possible in principle, since the DSPs run independently from each other. Nevertheless, when the DSPs run in approximately timed mode (i.e. their time warp is set to 0 and they run in lock-step with SystemC simulation time), the processing delay will make sure that the data package order is preserved. However, when we allow the DSPs to warp ahead in time locally, such data package twists can occur.
Regarding the simulation speed, we measure the number of context switches in the TLM initiators, since a high number of context switches usually increases simulation time. That is, every point in simulation time at which the simulator switches to the process of one specific DSP counts as one context switch. We simulated about 16 minutes of time, in which about 12,000 data packages with 128 values each were received by the drains. The sampling period of the sources was 1 ms.
Figure 5 shows the results. The curve starting to the left with a value of about 600,000 is the number of context switches, while the other curve shows the number of errors (i.e. data package twists) with a maximum of about 1,200. As can be seen, even a small time warp reduces the number of context switches drastically, while larger time warps don't reduce the context switches much more, but lead to errors. In fact, using a time warp of up to 100 ms resulted in no errors at all, but led to 20 times fewer context switches.
Figure 5. Speed vs. accuracy trade-off in the example system (number of context switches, up to about 700,000, and number of errors, up to about 1,400, plotted over the time warp from 0 to 600 ms).
8 Discussion of semantic issues
The focus of Section 5 and Section 6 was to provide a sound technical basis for interoperability of SystemC-AMS models with TLM 2.0 models using temporal decoupling. However, there are certain semantic issues requiring discussion.
One issue concerns the buffers within the converters. Should they be of limited size or virtually unlimited? Also, is there a canonical way the converters should behave when their buffers run full or empty? A similar question arose in previous work [3] on automatic conversion between Kahn Process Network (KPN) models and TDF models. The approach chosen there was to allow the designer to specify the buffer size. In the finite buffer case, several options can be set: raising an error, returning a default value or the last value which could be read from the buffer (empty buffer case), or discarding either the new value or the oldest value in the buffer (full buffer case).
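Purely as an illustration of how such designer-specified options could be exposed, the following hypothetical configuration sketch names the choices listed above; none of these names come from [3] or from our converters.

// Hypothetical buffer policy configuration for a converter.
#include <cstddef>

enum class empty_policy { raise_error, default_value, repeat_last_value };
enum class full_policy  { raise_error, discard_new, discard_oldest };

struct converter_buffer_config
{
    std::size_t  capacity;   // 0 could denote a virtually unlimited buffer
    empty_policy on_empty;   // what to stream to the TDF side when starved
    full_policy  on_full;    // what to do when a transaction overflows the fifo
};

// Example: a 1024-token buffer that repeats the last value when empty and
// discards the oldest tokens when full.
const converter_buffer_config cfg{1024, empty_policy::repeat_last_value,
                                  full_policy::discard_oldest};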
The same could be used here, as the decision of how to dimension the buffer is basically application-specific. For example, there might be cases where a converter's buffer directly relates to a buffer that will also be present in the final implementation, so it would make sense to limit the size of the converter buffer. Also, a designer might want to make sure that there are always enough tokens available as inputs to the TDF model, and might want the converter to throw an exception or at least give a warning if no more tokens are available.
Another open question was briefly mentioned in Section 6, namely whether a TDF model can be a TLM initiator or not. In general, there is no reason why a TDF model should not act like an initiator or master. Imagine a TDF model simulating a signal processing algorithm that is later to be implemented on a DSP. The algorithm might regularly read data from a memory and write its results back to the memory, so later the DSP will become a bus master, periodically generating read and write transactions. However, in many cases the TDF model will usually be a target or slave, like e.g. a D/A converter only reacting to requests from other initiators.
As can be seen, all resolutions of these issues come down to application-specific properties and decisions. So the designer always has to specify the behaviour he expects from the converters. Integrating all of these possible behaviours within one configurable converter would potentially result in a quite complex and difficult to use modelling element.
Therefore we propose a different approach: To keep the converter simple and elegant while still allowing the full degree of application-defined behaviour, it is best to split the converter into two parts: one application-independent part (the basic converter), implementing a reduced and well-defined subset of the converter semantics, and one application-specific part (the wrapper), implementing the application-specific behaviour on top of these semantics. While the basic converter provides the coupling of the two involved Models of Computation, the wrapper can also be designed such that it implements the interface of an AMS component as it will be found in the final architecture. So, for example, if the TDF model should act like a bus master, the wrapper could also be implemented as a TLM initiator. The basic converter could then be provided as a ready-to-use element, while for the wrapper the designer would be required to specify the expected behaviour.
We think that finally defining the basic converter's semantics would require an in-depth analysis of typical application use cases. Also, formalizing the semantics of the conversion could help to prove that all application-specific behaviour can be mapped in general to the restricted set of the basic converter. However, this is out of the scope of this paper and will be done in future work.
9 Conclusion and future work
In this paper, a first approach on how to connect SystemC-AMS models with loosely timed TLM 2.0 models using temporal decoupling was presented, with the focus on the SystemC-AMS side acting as a streaming data producer and/or consumer. It was shown that the loosely timed modelling style of TLM 2.0 can be exploited efficiently to fit with SystemC-AMS's TDF, preserving the high simulation performance of both Models of Computation. We described generic converter elements implementing our approach. A small example model was implemented which indicated the converters' functional correctness, while the general simulation performance / accuracy trade-off typically found in loosely timed TLM models could still be observed. We concluded with a discussion of semantic issues that arise when defining the conversion's behaviour. As these issues are typically application-specific, we proposed a structured approach separating the conversion into a general and an application-dependent part.
The purpose of this paper was to present a first technical feasibility analysis of TLM↔TDF conversion. We were able to show that the idea of having TLM initiators running in advance of simulation time shows some similarities to how TDF is simulated by the SystemC-AMS kernel and helps to solve the synchronization task. In principle, approximately timed TLM models could be connected to TDF models using the same converter approach. However, this would not show the same simulation benefit, as AT models are not allowed to run in advance.
Future work in this area will focus on two aspects: to formalize the conversion problem at hand more rigorously, possibly including a more formal description of the TLM 2.0 loosely timed modelling style; and to explore and implement a more structured and sophisticated TLM↔TDF conversion approach as outlined in Section 8 by analysing typical application use cases for the interfacing of purely digital systems with analogue/mixed-signal components.
References
[1] J. Aynsley. OSCI TLM2 User Manual. Technical report,
Open SystemC Initiative, 2007.
[2] L. Cai and D. Gajski. Transaction level modeling in sys-
tem level design. In Technical Report 03-10. Center for
Embedded Computer Systems, University of California,
Irvine, 2003.
[3] M. Damm, F. Herrera, J. Haase, E. Villar, and C. Grimm.
Using Converter Channels within a Top-Down Design
Flow in SystemC. In Proceedings of the Austrochip 2007,
2007.
[4] F. Ghenassia. Transaction-Level Modeling with SystemC:
TLM Concepts and Applications for Embedded Systems.
Springer-Verlag New York, Inc., Secaucus, NJ, USA,
2006.
[5] E. A. Lee and D. G. Messerschmitt. Static scheduling of synchronous data flow programs for digital signal processing. IEEE Transactions on Computers, C-36(1):24–35, 1987.
[6] Open SystemC Initiative. OSCI TLM 2.0 draft 2. http://www.systemc.org.
[7] A. Vachoux, C. Grimm, and K. Einwich. Towards Analog and Mixed-Signal SoC Design with SystemC-AMS. In IEEE International Workshop on Electronic Design, Test and Applications (DELTA'04), Perth, Australia, 2004.
