
Danish company, >12 years of experience

 
NetSales = Local Currency amount - customer deductions and discounts
Gross Sales = Local currency amount (total value of sales)
Segment Margin = NetSales - Extended Costs (production cost, cost of the
material used to create the product); does not include the cost of
moving the product
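The three measures above can be sketched as follows (an illustrative Python sketch; the figures are made up and the function names are mine, only the formulas come from the notes):

```python
# Illustrative only: figures are hypothetical, formulas follow the notes above.
def net_sales(local_currency_amount, deductions, discounts):
    # Net Sales = local currency amount minus customer deductions and discounts
    return local_currency_amount - deductions - discounts

def segment_margin(net, extended_costs):
    # Segment Margin = Net Sales - Extended Costs (production/material cost only;
    # excludes the cost of moving the product)
    return net - extended_costs

gross = 1000.0                                           # Gross Sales
net = net_sales(gross, deductions=50.0, discounts=30.0)  # 920.0
margin = segment_margin(net, extended_costs=600.0)       # 320.0
print(net, margin)
```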
 
DJGL-DevTX_DSA_MDW (sqlserver-ezpfo65r52ve4/DJGL-DevTX_DSA_MDW)
 
all incremental-load tables will need the DW_TimeStamp of the source
table
ODX > DSA, DSA > MDW
 
 

 
after moving everything from ODX to DSA and applying the modifications to the
tables, deploy the altered tables
after the deploy, run an execute to guarantee the data is populated in the tables
in the case of the historical load, for example, see the screenshot (select only the modified tables)
 
After the execute packages finishes, you can drag from DSA to MDW
Don't forget to also drag DW_TimeStamp onto Incremental time
stamp
Apply Deploy to the MDW and finally execute MDW_Daily_load (execute everything
inside it)
 
the next step is to access Analysis Services (only if the data needs to be refreshed
immediately; otherwise wait until the next day, it runs automatically)
and refresh the data so that it becomes available in PBI (Segment Reporting
Documentation - Update Data)
 
if it is not possible to access Analysis Services, you can update the data
through Azure.
still run the full or incremental load in TimeXtender, and then either go to Analysis
Services (see video)
or alternatively (only if it is not possible via SSAS) run this script in Azure >
TabularModalPROD (AutomationTest/TabularModalPROD)
 
 
A data refresh implies a daily/full load of the data from ODX, then DSA, and finally
MDW to guarantee that all the information is updated, and
afterwards processing in SSAS (process table in Analysis Services)
 
 
After transferring the project from TEST to PROD, deploy directly in
TEST
NOTE: use the Differential option, not Partial
 

 
Execute DSA Full and MDW Full, then process the tables in SSAS
 
 
Deployment and Execution on TimeXtender
 
Conditional Lookups
 
Conditional Lookups are fields created in a table in which the values are populated
from another table based on a join (relation). While they are quite different, this is
how your traditional SQL LEFT JOIN is performed in TimeXtender. Conditional lookups
are created by dragging a field from the source table onto the destination table name.
Typically this is done from the "One" to the "Many" of a one-to-many relationship.
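The LEFT JOIN behavior described above can be sketched in plain Python (an illustrative sketch, not TimeXtender internals; the table and field names are hypothetical):

```python
# A conditional lookup populates a field on the "many" (destination) table
# from the "one" (source) table via the join key, like SQL's LEFT JOIN.
customers = [  # the "one" side
    {"CustomerId": 1, "Region": "North"},
    {"CustomerId": 2, "Region": "South"},
]
orders = [  # the "many" side (destination table)
    {"OrderId": 10, "CustomerId": 1},
    {"OrderId": 11, "CustomerId": 2},
    {"OrderId": 12, "CustomerId": 3},  # no match -> NULL, as in a LEFT JOIN
]

lookup = {c["CustomerId"]: c["Region"] for c in customers}
for o in orders:
    o["Region"] = lookup.get(o["CustomerId"])  # None when no relation matches

print(orders)
```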
 
 
Transformations and Conditions

Analysis Services Tabular

Analysis Services Tabular is what's known as an Online Analytical Processing (OLAP)
database. Not only does Analysis Services Tabular store the entire dataset in-memory, it
maintains indexes of every "intersection" of each table in the model. While a SQL-based
relational database is ideal for transformation and storage of data, the OLAP database
(also known as a cube) is highly proficient at data retrieval. It is able to display filtered
results almost instantly.

Understanding System Control Fields

TimeXtender adds four System Control fields to every table created. These "system
control fields" store helpful information about the records in each table.

DW_Id: Stores an incremental integer uniquely identifying each row in a table, which
can be used as a surrogate key when needed.

DW_Batch: Stores the batch or "execution number" within the table, linking
each row with a specific execution.

DW_SourceCode: Stores the name of the data source from which the row was
originally generated.

DW_TimeStamp: Stores the date and time at which the row was populated.

*Note: When using the System Control fields at the Project level, tables that are
already included in the data warehouses will not show the System Control Fields until
enabled per table.
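The four fields could be stamped onto incoming rows roughly like this (an illustrative Python sketch, not TimeXtender's actual implementation; the stamp helper is hypothetical):

```python
from datetime import datetime
from itertools import count

# Illustrative sketch: field names match TimeXtender's system control fields,
# the stamping logic is hypothetical.
_next_id = count(1)

def stamp(row, batch, source):
    row["DW_Id"] = next(_next_id)          # incremental surrogate key
    row["DW_Batch"] = batch                # execution number
    row["DW_SourceCode"] = source          # originating data source
    row["DW_TimeStamp"] = datetime.now()   # load date/time
    return row

rows = [stamp({"Amount": 100}, batch=7, source="ODX"),
        stamp({"Amount": 250}, batch=7, source="ODX")]
print([r["DW_Id"] for r in rows])  # [1, 2]
```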

Table Performance Settings

Enable Physical Valid Table

Disabling this setting will replace the final "valid" table (after data cleansing) with a
database view. By not duplicating the data in both the Raw and Valid tables, this can
drastically reduce the amount of space required for each table. However, using a
database view can slow query performance. So, if you are short on storage and less
concerned about the speed of your data, this may be an option for you.

Enable Batch Data Cleansing

During the Execution process, TimeXtender attempts to transfer all resulting records
from the Raw table into the Valid table (unless using Incremental load). On very
large tables, this can significantly increase the size of the SQL logs, causing errors if size
limits are reached. Enabling this option will transfer data from raw to valid in batches
of the specified number of rows. While this can slow transfer rates a bit, it will reduce
the SQL logging overhead.
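The batching idea can be sketched as follows (an illustrative Python sketch of the concept, not TimeXtender internals):

```python
# Moving rows from a Raw table to a Valid table in fixed-size batches,
# so each batch is one smaller transaction and the SQL log stays bounded.
def transfer_in_batches(raw_rows, batch_size):
    valid, batches = [], 0
    for start in range(0, len(raw_rows), batch_size):
        valid.extend(raw_rows[start:start + batch_size])  # one transaction per batch
        batches += 1
    return valid, batches

valid, batches = transfer_in_batches(list(range(10)), batch_size=3)
print(len(valid), batches)  # 10 4
```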

Raw and Valid Table Compression

The Compression feature uses SQL Server's table compression technology to reduce
the size of the database. While this can improve performance on I/O intensive
workloads, it does require extra CPU resources on the database server to process the
data. So, if you have plenty of CPU, but are short on I/O, you may see a significant
performance improvement by enabling compression. Compression can be performed
on either the Raw table or the Valid table. In most cases, if compression is desired, it
should be enabled on both.

Row or Page-based Compression

Row-based compression performs a compression on each individual row of data, while
page-based compression performs row compression and then compresses the data even
further using additional methods. So while page-based compression will give you
the greatest space-saving results and I/O improvement, it will also require the most
CPU resources. Row-based compression is a good option if you want some space
savings but page-based compression is too resource-intensive.

Table Partitioning

Partitioning a table splits the data into multiple units that can be spread across
multiple file groups in the database to improve the performance of large tables. A
great way to picture this is, say you had a large transaction table spread across
multiple years. The table was so large that queries became very slow, but most of your
query was only on the current year's data. If you split this data into multiple tables and
put the current year's data in its own table, your queries would improve dramatically.
The idea is the same with partitioning, except that partitions are invisible to outside
applications. The data in the table is arranged on the hard disk based on the increment
or partition type specified. This ensures that queries within a single partition process
significantly faster.
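The year-based example above can be sketched like this (an illustrative Python sketch of the concept; SQL Server manages real partitions transparently):

```python
from collections import defaultdict

# Routing rows into per-year buckets: a query touching only the current year
# scans one bucket instead of the whole table.
transactions = [
    {"id": 1, "year": 2022, "amount": 10},
    {"id": 2, "year": 2023, "amount": 20},
    {"id": 3, "year": 2023, "amount": 30},
]

partitions = defaultdict(list)
for row in transactions:
    partitions[row["year"]].append(row)  # partition key = year

# "query" against the current year's partition only
total_2023 = sum(r["amount"] for r in partitions[2023])
print(total_2023)  # 50
```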

Index Automation

Supernatural Keys

Incremental Load
Execution Packages
