
What is Supply Chain Management?

A SUPPLY CHAIN is a network of supplier, manufacturing, assembly, distribution, and logistics facilities that perform the functions of procurement of materials, transformation of these materials into intermediate and finished products, and distribution of these products to customers. Supply chains arise in both manufacturing and service organizations. SUPPLY CHAIN MANAGEMENT (SCM) is a systems approach to managing the entire flow of information, materials, and services from raw materials suppliers through factories and warehouses to the end customer. SCM differs from SUPPLY MANAGEMENT, which emphasizes only the buyer-supplier relationship. Supply chain management has emerged as a new key to the productivity and competitiveness of manufacturing and service enterprises. The importance of this area is shown by a significant spurt in research in the last five years and by the proliferation of supply chain solutions and supply chain companies. All major ERP companies now offer supply chain solutions as a major extended feature of their ERP packages. Supply chain management is a major application area for Internet Technologies and Electronic Commerce (ITEC). In fact, advances in ITEC have contributed to the growing importance of supply chain management, and SCM in turn has contributed to many advances in ITEC.

Two Faces of Supply Chain Management

SCM has two major faces. The first can loosely be called the back-end and comprises the physical building blocks such as the supply facilities, production facilities, warehouses, distributors, retailers, and logistics facilities. The back-end essentially involves production, assembly, and physical movement. Major decisions here include:

1. Procurement (supplier selection, optimal procurement policies, etc.)
2. Manufacturing (plant location, product line selection, capacity planning, production scheduling, etc.)
3. Distribution (warehouse location, customer allocation, demand forecasting, inventory management, etc.)
4. Logistics (selection of logistics mode, selection of ports, direct delivery, vehicle scheduling, etc.)
5. Global decisions (product and process selection, planning under uncertainty, real-time monitoring and control, integrated scheduling)

The second face (which can be called the front-end) is where IT and ITEC play a key role. This face involves processing and using information to facilitate and optimize the back-end operations. Key technologies here include: EDI (for exchange of information across different players in the supply chain); electronic payment protocols; Internet auctions (for selecting suppliers, distributors, demand forecasting, etc.); electronic business process optimization; e-logistics; continuous tracking of customer orders through the Internet; Internet-based shared-services manufacturing; etc.
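One of the back-end decisions listed above, optimal procurement and inventory policy, has a classic textbook model: the economic order quantity (EOQ), which balances ordering cost against holding cost. The sketch below is illustrative only; the demand and cost figures are invented.

```python
import math

def eoq(annual_demand, order_cost, holding_cost_per_unit):
    """Economic order quantity: the order size that minimizes
    the sum of annual ordering and holding costs."""
    return math.sqrt(2 * annual_demand * order_cost / holding_cost_per_unit)

# Hypothetical figures: 12,000 units/year demand, $75 per order,
# $3 per unit per year to hold inventory.
q = eoq(12000, 75.0, 3.0)
orders_per_year = 12000 / q
print(f"Optimal order quantity: {q:.0f} units, "
      f"placed {orders_per_year:.1f} times per year")
```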

Key Issues in Supply Chain Management

Network Planning: warehouse locations and capacities; plant locations and production levels; transportation flows between facilities to minimize cost and time (a small location sketch follows these tables).
Inventory Control: how should inventory be managed? Why does inventory fluctuate, and what strategies minimize fluctuation?
Supply Contracts: impact of volume discounts and revenue sharing; pricing strategies to reduce order-shipment variability.
Distribution Strategies: selection of distribution strategies (e.g., direct ship vs. cross-docking); how many cross-dock points are needed; costs and benefits of different strategies.
Integration and Strategic Partnering: how can integration with partners be achieved? What level of integration is best? What information and processes can be shared? What partnerships should be implemented, and in which situations?
Outsourcing and Procurement Strategies: what are our core supply chain capabilities, and which are not? Does our product design mandate different outsourcing approaches? Risk management.
Product Design: how are inventory holding and transportation costs affected by product design? How does product design enable mass customization?

Supply Chain Management Strategies

Make to Stock. When to choose: standardized products with relatively predictable demand. Benefits: low manufacturing costs; meets customer demands quickly.
Make to Order. When to choose: customized products with many variations. Benefits: customization; reduced inventory; improved service levels.
Configure to Order. When to choose: many variations on a finished product; infrequent demand. Benefits: low inventory levels; wide range of product offerings; simplified planning.
Engineer to Order. When to choose: complex products built to unique customer specifications. Benefits: enables response to specific customer requirements.
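As a toy illustration of the network-planning issue above, the demand-weighted center-of-gravity method gives a first approximation for siting a single warehouse. The coordinates and demand volumes below are invented.

```python
def center_of_gravity(customers):
    """customers: list of (x, y, demand) tuples.
    Returns the demand-weighted average location, a common
    first approximation for a single warehouse site."""
    total = sum(d for _, _, d in customers)
    x = sum(xc * d for xc, _, d in customers) / total
    y = sum(yc * d for _, yc, d in customers) / total
    return x, y

# Hypothetical customer locations (grid coordinates) and weekly demand.
customers = [(2, 8, 400), (9, 3, 250), (5, 5, 600), (7, 9, 150)]
print("Suggested warehouse location: (%.2f, %.2f)" % center_of_gravity(customers))
```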

Supply chain management must address the following problems:


Distribution Network Configuration: the number, location, and network missions of suppliers, production facilities, distribution centers, warehouses, cross-docks, and customers.

Distribution Strategy: questions of operating control (centralized, decentralized, or shared); delivery scheme, e.g., direct shipment, pool point shipping, cross-docking, direct store delivery (DSD), or closed-loop shipping; mode of transportation, e.g., motor carrier (including truckload, less-than-truckload (LTL), and parcel), railroad, intermodal transport (including trailer on flatcar (TOFC) and container on flatcar (COFC)), ocean freight, or airfreight; replenishment strategy (e.g., pull, push, or hybrid); and transportation control (e.g., owner-operated, private carrier, common carrier, contract carrier, or third-party logistics (3PL)).

Trade-Offs in Logistical Activities: the above activities must be well coordinated in order to achieve the lowest total logistics cost. Trade-offs may increase the total cost if only one of the activities is optimized. For example, full-truckload (FTL) rates are more economical on a cost-per-pallet basis than LTL shipments. If, however, a full truckload of a product is ordered to reduce transportation costs, there will be an increase in inventory holding costs, which may increase total logistics costs. It is therefore imperative to take a systems approach when planning logistical activities. These trade-offs are key to developing the most efficient and effective logistics and SCM strategy (a numerical sketch follows this list).

Information: integration of processes through the supply chain to share valuable information, including demand signals, forecasts, inventory, transportation, potential collaboration, etc.

Inventory Management: the quantity and location of inventory, including raw materials, work-in-process (WIP), and finished goods.

Cash Flow: arranging the payment terms and the methodologies for exchanging funds across entities within the supply chain.
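The FTL-versus-LTL trade-off described above can be made concrete with a back-of-the-envelope total-cost comparison. All rates and quantities below are invented for illustration.

```python
def total_logistics_cost(units_per_order, transport_cost, holding_rate, unit_value):
    """Transport cost plus annual holding cost on average inventory
    (half the order quantity, under steady demand)."""
    avg_inventory_value = (units_per_order / 2) * unit_value
    return transport_cost + holding_rate * avg_inventory_value

# Hypothetical: a full truckload of 1,000 units ships for $1,200;
# the same volume as four LTL shipments of 250 units costs $500 each.
# Units are worth $40 and holding cost is 20% of value per year.
ftl = total_logistics_cost(1000, 1200, 0.20, 40)
ltl = total_logistics_cost(250, 4 * 500, 0.20, 40)
print(f"FTL total cost: ${ftl:,.0f}")  # cheaper freight, more inventory held
print(f"LTL total cost: ${ltl:,.0f}")  # dearer freight, less inventory held
```

In this invented case the cheaper freight rate loses once holding costs are counted, which is exactly why a systems view of total logistics cost matters.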

Supply chain execution means managing and coordinating the movement of materials, information, and funds across the supply chain. The flow is bi-directional.

Supply Chain Management Activities

Supply chain management is a cross-functional approach that includes managing the movement of raw materials into an organization, certain aspects of the internal processing of materials into finished goods, and the movement of finished goods out of the organization and toward the end consumer. As organizations strive to focus on core competencies and become more flexible, they reduce their ownership of raw materials sources and distribution channels. These functions are increasingly outsourced to other entities that can perform the activities better or more cost-effectively. The effect is to increase the number of organizations involved in satisfying customer demand while reducing management control of daily logistics operations. Less control and more supply chain partners led to the creation of supply chain management concepts. The purpose of supply chain management is to improve trust and collaboration among supply chain partners, thus improving inventory visibility and the velocity of inventory movement. Several models have been proposed for understanding the activities required to manage material movements across organizational and functional boundaries. SCOR is a supply chain management model promoted by the Supply Chain Council. Another model is the SCM Model proposed by the Global Supply Chain Forum (GSCF). Supply chain activities can be grouped into strategic, tactical, and operational levels.

Strategic level

Strategic network optimization, including the number, location, and size of warehouses, distribution centers, and facilities.
Strategic partnerships with suppliers, distributors, and customers, creating communication channels for critical information and operational improvements such as cross-docking, direct shipping, and third-party logistics.
Product life cycle management, so that new and existing products can be optimally integrated into the supply chain and capacity management activities.
Segmentation of products and customers to guide alignment of corporate objectives with manufacturing and distribution strategy.
Information technology infrastructure to support supply chain operations.
Where-to-make and make-versus-buy decisions.
Aligning overall organizational strategy with supply strategy.

Strategic decisions are long-term and require significant resource commitments.

Tactical level

Sourcing contracts and other purchasing decisions.
Production decisions, including contracting, scheduling, and planning process definition.
Inventory decisions, including the quantity, location, and quality of inventory.
Transportation strategy, including frequency, routes, and contracting.
Benchmarking of all operations against competitors and implementation of best practices throughout the enterprise.
Milestone payments.
Focus on customer demand and buying habits.

Operational level

Daily production and distribution planning, including all nodes in the supply chain.
Production scheduling for each manufacturing facility in the supply chain (minute by minute).
Demand planning and forecasting, coordinating the demand forecasts of all customers and sharing the forecasts with all suppliers.
Sourcing planning, including current inventory and forecast demand, in collaboration with all suppliers.
Inbound operations, including transportation from suppliers and receiving inventory.
Production operations, including the consumption of materials and the flow of finished goods.
Outbound operations, including all fulfillment activities, warehousing, and transportation to customers.
Order promising, accounting for all constraints in the supply chain, including all suppliers, manufacturing facilities, distribution centers, and other customers.
Accounting for transit damage cases from the production level to the supply level and arranging settlement at the customer level, recovering company losses through the insurance company.
Managing non-moving and short-dated inventory, and preventing more products from going short-dated.

Customer Relationship Management (CRM)


Customer Relationship Management (CRM) can be broadly defined as the company activities related to developing and retaining customers. It is a blend of internal business processes (sales, marketing, and customer support) with technology and data-capturing techniques. Customer Relationship Management is all about building long-term business relationships with customers. CRM is an alignment of strategy, processes, and technology to manage customers and all customer-facing departments and partners. Any CRM initiative has the potential of providing strategic advantages to the organization, if handled right. It is a process or methodology used to learn more about customers' needs and behaviors in order to develop stronger relationships with them. There are many technological components to CRM, but thinking about CRM in primarily technological terms is a mistake. The more useful way to think about CRM is as a process that helps bring together many pieces of information about customers, sales, marketing effectiveness, responsiveness, and market trends. CRM helps businesses use technology and human resources to gain insight into the behavior of customers and the value of those customers.

Advantages of CRM

Using CRM, a business can:
1. Provide better customer service
2. Increase customer revenues
3. Discover new customers
4. Cross-sell and up-sell products more effectively
5. Help sales staff close deals faster
6. Make call centers more efficient
7. Simplify marketing and sales processes

Beyond these, CRM solutions help companies boost their business efficiency, thereby increasing profit and revenue generation capabilities. Let us take a quick look at some of the measurable benefits that your organization can gain by implementing a CRM solution.

Increase Customer Lifecycle Value: In most businesses, the cost of acquiring customers is high. To make profits, it is important to keep the customer longer and sell more products (cross-sell, up-sell, etc.) during the customer's lifecycle. Customers stay if they are provided with value, quality service, and continuity. CRM solutions enable you to do that.

Execution Control: Once the business strategy is put into motion, management needs feedback and reports to judge how the business is performing. CRM solutions provide management with control and a scientific way to identify and resolve issues. The benefits include clearer visibility of the sales pipeline, more accurate forecasts, and more.

Customer Lifecycle Management: To keep customers happy, you need to know them better. At a minimum, you need a centralized customer database that captures most of the information from all of your customer-facing departments and partners. Integrated CRM solutions, like CRMnext, enable you to manage customer information throughout all stages of the customer life cycle, from contact to contract to customer service.

Strategic Consistency: Because CRM offers business and technological alignment, it enables companies to achieve strategic company goals more effectively, such as enhanced sales realization, higher customer satisfaction, and better brand management. Additionally, the alignment results in more consistent customer communication, creating a feeling of continuity.

Business Intelligence: Due to the valuable business insights that CRM provides, it becomes easier to identify bottlenecks, their causes, and the remedial measures that need to be taken. For example, CRMnext provides real-time business-focus dashboards with extensive drill-down capabilities that give decision makers the depth of information required to identify causes and spot trends.
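The customer lifecycle value argument above can be sketched numerically. The model below is the standard simple-retention CLV formula; the margins, retention rates, and acquisition cost are invented.

```python
def customer_lifetime_value(annual_margin, retention_rate,
                            discount_rate, acquisition_cost):
    """Discounted margin over the expected customer relationship,
    using the standard simple-retention CLV formula:
    CLV = m * r / (1 + d - r) - acquisition cost."""
    clv = annual_margin * retention_rate / (1 + discount_rate - retention_rate)
    return clv - acquisition_cost

# Hypothetical: $200 margin/year, $120 to acquire, 10% discount rate.
for retention in (0.60, 0.75, 0.90):
    value = customer_lifetime_value(200, retention, 0.10, 120)
    print(f"retention {retention:.0%}: CLV = ${value:,.0f}")
```

Even with these made-up numbers, raising retention from 60% to 90% multiplies customer value several times over, which is the economic case for CRM.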

Definition 1 - Data Warehouse


A data warehouse is a collection and summarization of information from multiple databases and database tables. The primary purpose of a data warehouse is not data storage, but the collection of information for decision-making. Typically, a data warehouse extracts updated information from operational databases on a regular basis (nightly, hourly, etc.). This forms a snapshot of collected data that can be organized into a logical structure based on your analytical needs. Data warehouses allow you to express your information needs logically, without being constrained to database fields and records. Using the correct data mining tools, it is possible to display information from a data warehouse in ways that are not possible using SQL or other basic query languages. Unlike a relational database, a data warehouse can present information in multidimensional format. This representation is called a hypercube, and contains layers of rows and columns. Using this model a company could, for instance, track sales of multiple products in multiple regions over a given period of time, all in the same view.
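A minimal sketch of the product-by-region-by-time view described above, assuming pandas is available; the sales records are invented.

```python
import pandas as pd

# Invented sales facts: one row per (region, product, quarter).
sales = pd.DataFrame({
    "region":  ["East", "East", "West", "West", "East", "West"],
    "product": ["A",    "B",    "A",    "B",    "A",    "A"],
    "quarter": ["Q1",   "Q1",   "Q1",   "Q1",   "Q2",   "Q2"],
    "amount":  [100,    150,    80,     120,    110,    90],
})

# One slice of the hypercube: products as rows, regions as columns,
# with sales summed across quarters in the same view.
cube = sales.pivot_table(index="product", columns="region",
                         values="amount", aggfunc="sum")
print(cube)
```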

A data warehouse can contain extremely large amounts of information, and many users will only need to access a portion of it. Information in a data warehouse can be organized into data marts, which are subsets of data with a specific focus. Data marts can provide an analyst with a more efficient set of working data relevant to, for instance, a specific business process or unit of the company.

Figure: Supplier, customer, and sales databases feed the data warehouse, which is subdivided into focused data marts.

Definition 2 - Data Warehouse


A data warehouse is a relational database that is designed for query and analysis rather than for transaction processing. It usually contains historical data derived from transaction data, but it can include data from other sources. It separates the analysis workload from the transaction workload and enables an organization to consolidate data from several sources. In addition to a relational database, a data warehouse environment includes an extraction, transportation, transformation, and loading (ETL) solution, an online analytical processing (OLAP) engine, client analysis tools, and other applications that manage the process of gathering data and delivering it to business users.

Key characteristics of a data warehouse:

Subject Oriented
Integrated
Nonvolatile
Time Variant

Subject Oriented
Data warehouses are designed to help you analyze data. For example, to learn more about your company's sales data, you can build a warehouse that concentrates on sales. Using this warehouse, you can answer questions like "Who was our best customer for this item last year?" This ability to define a data warehouse by subject matter, sales in this case, makes the data warehouse subject oriented.
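A minimal, self-contained illustration of the "best customer for this item last year" question, using Python's built-in sqlite3 module with an invented sales table:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales (customer TEXT, item TEXT, year INT, amount REAL)")
conn.executemany("INSERT INTO sales VALUES (?, ?, ?, ?)", [
    ("Acme",  "widget", 2023, 5000), ("Beta",  "widget", 2023, 7500),
    ("Acme",  "widget", 2022, 9000), ("Gamma", "gadget", 2023, 3000),
])

# Subject-oriented question: who was our best widget customer last year?
row = conn.execute("""
    SELECT customer, SUM(amount) AS total
    FROM sales WHERE item = 'widget' AND year = 2023
    GROUP BY customer ORDER BY total DESC LIMIT 1
""").fetchone()
print("Best customer:", row[0], "with", row[1])
```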

Integrated
Integration is closely related to subject orientation. Data warehouses must put data from disparate sources into a consistent format. They must resolve such problems as naming conflicts and inconsistencies among units of measure. When they achieve this, they are said to be integrated.

Nonvolatile
Nonvolatile means that, once entered into the warehouse, data should not change. This is logical because the purpose of a warehouse is to enable you to analyze what has occurred.

Time Variant
In order to discover trends in business, analysts need large amounts of data. This is very much in contrast to online transaction processing (OLTP) systems, where performance requirements demand that historical data be moved to an archive. A data warehouse's focus on change over time is what is meant by the term time variant.

Data Warehouse Architectures


Data warehouses and their architectures vary depending upon the specifics of an organization's situation. Three common architectures are:

Data Warehouse Architecture (Basic)
Data Warehouse Architecture (with a Staging Area)
Data Warehouse Architecture (with a Staging Area and Data Marts)

Data Warehouse Architecture (Basic)


Figure shows a simple architecture for a data warehouse. End users directly access data derived from several source systems through the data warehouse.

Figure 1 - Architecture of a Data Warehouse

In Figure 1, the metadata and raw data of a traditional OLTP system are present, as is an additional type of data: summary data. Summaries are very valuable in data warehouses because they precompute long operations in advance. For example, a typical data warehouse query is to retrieve something like August sales.

Data Warehouse Architecture (with a Staging Area)


In the basic architecture of Figure 1, you need to clean and process your operational data before putting it into the warehouse. You can do this programmatically, although most data warehouses use a staging area instead. A staging area simplifies building summaries and general warehouse management. Figure 2 illustrates this typical architecture.

Figure 2 - Architecture of a Data Warehouse with a Staging Area
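A highly simplified sketch of the extract-clean-load flow through a staging area; the source rows, cleaning rules, and structures below are invented stand-ins, not a real ETL tool.

```python
# Extract: pull raw rows from an operational source (here, a list).
raw_orders = [
    {"id": 1, "region": "east ", "amount": "100.50"},
    {"id": 2, "region": "WEST",  "amount": "80.00"},
    {"id": 2, "region": "WEST",  "amount": "80.00"},  # duplicate record
]

# Stage: clean and standardize before the data touches the warehouse.
staging = {}
for row in raw_orders:
    cleaned = {
        "id": row["id"],
        "region": row["region"].strip().title(),  # fix casing/whitespace
        "amount": float(row["amount"]),           # fix types
    }
    staging[cleaned["id"]] = cleaned              # de-duplicate by key

# Load: move the cleaned snapshot into the warehouse table.
warehouse_orders = list(staging.values())
print(warehouse_orders)
```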

Data Warehouse Architecture (with a Staging Area and Data Marts)


Although the architecture in Figure 2 is quite common, you may want to customize your warehouse's architecture for different groups within your organization. You can do this by adding data marts, which are systems designed for a particular line of business. Figure 3 illustrates an example where purchasing, sales, and inventories are separated. In this example, a financial analyst might want to analyze historical data for purchases and sales.

Figure 3 - Architecture of a Data Warehouse with a Staging Area and Data Marts

OLTP (On-line Transaction Processing) is characterized by a large number of short on-line transactions (INSERT, UPDATE, DELETE). The main emphasis for OLTP systems is on very fast query processing, maintaining data integrity in multi-access environments, and effectiveness measured by the number of transactions per second. An OLTP database holds detailed and current data, and the schema used to store transactional databases is the entity model (usually 3NF).

OLAP (On-line Analytical Processing) is characterized by a relatively low volume of transactions. Queries are often very complex and involve aggregations. For OLAP systems, response time is the effectiveness measure. OLAP applications are widely used in data mining. An OLAP database holds aggregated, historical data, stored in multi-dimensional schemas (usually a star schema). The following table summarizes the major differences between OLTP and OLAP system design.

OLTP System: Online Transaction Processing (Operational System) vs. OLAP System: Online Analytical Processing (Data Warehouse)

Source of data. OLTP: operational data; OLTPs are the original source of the data. OLAP: consolidated data; OLAP data comes from the various OLTP databases.
Purpose of data. OLTP: to control and run fundamental business tasks. OLAP: to help with planning, problem solving, and decision support.
What the data reveals. OLTP: a snapshot of ongoing business processes. OLAP: multi-dimensional views of various kinds of business activities.
Inserts and updates. OLTP: short and fast inserts and updates initiated by end users. OLAP: periodic long-running batch jobs refresh the data.
Queries. OLTP: relatively standardized and simple queries returning relatively few records. OLAP: often complex queries involving aggregations.
Processing speed. OLTP: typically very fast. OLAP: depends on the amount of data involved; batch data refreshes and complex queries may take many hours; query speed can be improved by creating indexes.
Space requirements. OLTP: can be relatively small if historical data is archived. OLAP: larger, due to the existence of aggregation structures and historical data; requires more indexes than OLTP.
Database design. OLTP: highly normalized with many tables. OLAP: typically de-normalized with fewer tables; uses star and/or snowflake schemas.
Backup and recovery. OLTP: back up religiously; operational data is critical to running the business, and data loss is likely to entail significant monetary loss and legal liability. OLAP: instead of regular backups, some environments may simply reload the OLTP data as a recovery method.
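The contrast summarized above can also be seen in code: an OLTP workload issues many short writes, while an OLAP workload issues one large aggregation over history. A sketch using sqlite3 with invented data:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER, region TEXT, amount REAL)")

# OLTP style: many short, fast inserts initiated by end users.
for i in range(1000):
    conn.execute("INSERT INTO orders VALUES (?, ?, ?)",
                 (i, "East" if i % 2 else "West", i * 0.5))

# OLAP style: one complex aggregation over the whole history.
for region, total, n in conn.execute(
        "SELECT region, SUM(amount), COUNT(*) FROM orders GROUP BY region"):
    print(region, round(total, 2), n)
```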

Difference between Data Warehouse and Database

The primary difference between your application database and a data warehouse is that while the former is designed (and optimized) to record, the latter has to be designed (and optimized) to respond to the analysis questions that are critical for your business.

Application databases are OLTP (On-Line Transaction Processing) systems where every transaction has to be recorded, and super-fast at that. Consider the scenario where a bank ATM has disbursed cash to a customer but was unable to record this event in the bank records. If this started happening frequently, the bank wouldn't stay in business for too long. So the banking system is designed to make sure that every transaction gets recorded within the time you stand before the ATM machine. This system is write-optimized, and you shouldn't complain if your analysis query (a read operation) takes a long time on such a system.

A Data Warehouse (DW), on the other hand, is a database that is designed to facilitate querying and analysis. Often designed as OLAP (On-Line Analytical Processing) systems, these databases contain read-only data that can be queried and analyzed far more efficiently than your regular OLTP application databases. In this sense an OLAP system is designed to be read-optimized. Separation from your application database also ensures that your business intelligence solution is scalable (your bank and ATMs don't go down just because the CFO asked for a report), better documented and managed (god help the novice who is given the application database diagrams and asked to locate the needle of data in the proverbial haystack of table proliferation), and can answer questions far more efficiently and frequently.

Creation of a DW leads to a direct increase in the quality of analyses, as the table structures are simpler (you keep only the needed information in simpler tables), standardized (well-documented table structures), and often denormalized (to reduce the linkages between tables and the corresponding complexity of queries). A DW drastically reduces the 'cost-per-analysis' and thus permits more analysis per FTE. Having a well-designed DW is the foundation successful BI/Analytics initiatives are built upon.

Data Mining

Data mining (the analysis step of the "Knowledge Discovery in Databases" process, or KDD), a field at the intersection of computer science and statistics, is the process of discovering patterns in large data sets. It utilizes methods at the intersection of artificial intelligence, machine learning, statistics, and database systems. The overall goal of the data mining process is to extract information from a data set and transform it into an understandable structure for further use. Aside from the raw analysis step, it involves database and data management aspects, data preprocessing, model and inference considerations, interestingness metrics, complexity considerations, post-processing of discovered structures, visualization, and online updating.

The term is a buzzword, and is frequently misused to mean any form of large-scale data or information processing (collection, extraction, warehousing, analysis, and statistics), but it is also generalized to any kind of computer decision support system, including artificial intelligence, machine learning, and business intelligence. In the proper use of the word, the key term is discovery, commonly defined as "detecting something new". Even the popular book "Data Mining: Practical Machine Learning Tools and Techniques with Java" (which covers mostly machine learning material) was originally to be named just "Practical Machine Learning", and the term "data mining" was only added for marketing reasons. Often the more general terms "(large-scale) data analysis" or "analytics", or, when referring to actual methods, "artificial intelligence" and "machine learning", are more appropriate.

The actual data mining task is the automatic or semi-automatic analysis of large quantities of data to extract previously unknown, interesting patterns such as groups of data records (cluster analysis), unusual records (anomaly detection), and dependencies (association rule mining). This usually involves using database techniques such as spatial indexes. These patterns can then be seen as a kind of summary of the input data, and may be used in further analysis or, for example, in machine learning and predictive analytics. For example, the data mining step might identify multiple groups in the data, which can then be used to obtain more accurate prediction results by a decision support system. Neither the data collection and data preparation nor the result interpretation and reporting are part of the data mining step, but they do belong to the overall KDD process as additional steps.
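As a small illustration of the cluster-analysis task mentioned above, the sketch below groups invented customer records with k-means, assuming scikit-learn is installed:

```python
from sklearn.cluster import KMeans
import numpy as np

# Invented customer records: (annual spend, visits per month).
customers = np.array([
    [200, 1], [220, 2], [250, 1],     # low-spend, infrequent
    [900, 8], [950, 9], [1000, 7],    # high-spend, frequent
])

# Cluster analysis: discover groups without pre-labeled structure.
model = KMeans(n_clusters=2, n_init=10, random_state=0).fit(customers)
print("cluster labels:", model.labels_)
print("cluster centers:", model.cluster_centers_)
```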

Knowledge Discovery in Databases (KDD)

Knowledge Discovery in Databases (KDD) refers to the nontrivial extraction of implicit, previously unknown, and potentially useful information from data in databases. While data mining and knowledge discovery in databases (or KDD) are frequently treated as synonyms, data mining is actually part of the knowledge discovery process. The KDD process is commonly defined with the stages: (1) Selection, (2) Pre-processing, (3) Transformation, (4) Data Mining, (5) Interpretation/Evaluation. There exist, however, many variations on this theme, such as the Cross Industry Standard Process for Data Mining (CRISP-DM), which defines six phases: (1) Business Understanding, (2) Data Understanding, (3) Data Preparation, (4) Modeling, (5) Evaluation, (6) Deployment; or a simplified process such as (1) pre-processing, (2) data mining, and (3) results validation.

Explain the KDD process.

Knowledge discovery as a process consists of an iterative sequence of the following steps:
1. Data cleaning: to remove noise and inconsistent data
2. Data integration: where multiple data sources may be combined
3. Data selection: where data relevant to the analysis task are retrieved from the database

4. Data transformation: where data are transformed or consolidated into forms appropriate for mining by performing summary or aggregation operations
5. Data mining: an essential process where intelligent methods are applied in order to extract data patterns
6. Pattern evaluation: to identify the truly interesting patterns representing knowledge, based on some interestingness measures
7. Knowledge presentation: where visualization and knowledge representation techniques are used to present the mined knowledge to the user

Steps 1 to 4 are different forms of data preprocessing, where the data are prepared for mining. The data mining step may interact with the user or a knowledge base.

The interesting patterns are presented to the user and may be stored as new knowledge in the knowledge base. Data mining is only one step in the entire process but an essential one because it uncovers hidden patterns for evaluation.

Therefore, data mining is a step in the knowledge discovery process.
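The seven steps above can be read as a pipeline. The sketch below wires placeholder functions together in that order; every function body is an invented stand-in, not a real implementation.

```python
def kdd_pipeline(sources):
    data = clean(integrate(sources))       # steps 1-2: remove noise, combine sources
    task_data = transform(select(data))    # steps 3-4: pick relevant data, consolidate
    patterns = mine(task_data)             # step 5: extract patterns
    interesting = evaluate(patterns)       # step 6: keep truly interesting patterns
    present(interesting)                   # step 7: present to the user

# Placeholder implementations so the sketch runs end to end.
integrate = lambda sources: [r for s in sources for r in s]
clean     = lambda data: [r for r in data if r is not None]
select    = lambda data: data
transform = lambda data: data
mine      = lambda data: {("example", "pattern")}
evaluate  = lambda patterns: patterns
present   = print

kdd_pipeline([[1, None, 2], [3]])
```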

Data Mining
Data mining is the process of extracting information from large sources of data, such as a corporate data warehouse, and extrapolating relationships and trends within that data. It is not possible to use standard query tools, such as SQL, to perform these operations. There are three main categories of data mining tools: query-and-reporting tools, intelligent agents, and multidimensional analysis tools.

Query-and-reporting tools offer functionality similar to query and report generators for standard databases. These tools are easy to use, but their scope is limited to that of a relational database, and they do not take full advantage of the potential of a data warehouse.

The term 'intelligent agents' encompasses a variety of artificial intelligence tools which have recently emerged into the field of data manipulation. Two of these tools are neural networks and fuzzy logic. An intelligent agent can sift through the contents of a database, finding unsuspected trends and relationships between data.

Multidimensional analysis tools allow a user to interpret multidimensional data (i.e., a hypercube data set) from different perspectives. For example, if a set of data includes products sold in various regions over time, multidimensional analysis allows you to view the data in different ways. For instance, you could display all sales in all regions for a given time, or all sales over time in a given region.

Figure: Query-and-reporting tools, multidimensional analysis tools, and intelligent agents all access the data warehouse through the data warehouse engine.

Data mining involves six common classes of tasks:

Anomaly detection (outlier/change/deviation detection): the identification of unusual data records that might be interesting, or data errors that require further investigation.
Association rule learning (dependency modeling): searches for relationships between variables. For example, a supermarket might gather data on customer purchasing habits. Using association rule learning, the supermarket can determine which products are frequently bought together and use this information for marketing purposes. This is sometimes referred to as market basket analysis.
Clustering: the task of discovering groups and structures in the data that are in some way or another "similar", without using known structures in the data.
Classification: the task of generalizing known structure to apply to new data. For example, an e-mail program might attempt to classify an e-mail as "legitimate" or as "spam".
Regression: attempts to find a function which models the data with the least error.
Summarization: providing a more compact representation of the data set, including visualization and report generation.
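A tiny, self-contained sketch of the market basket analysis mentioned above: count how often pairs of items co-occur in (invented) transactions; frequently co-occurring pairs are candidate association rules.

```python
from itertools import combinations
from collections import Counter

# Invented transactions: items bought together in one basket.
baskets = [
    {"bread", "milk"}, {"bread", "butter"}, {"bread", "milk", "butter"},
    {"milk", "butter"}, {"bread", "milk"},
]

# Count co-occurring pairs across all baskets.
pair_counts = Counter()
for basket in baskets:
    for pair in combinations(sorted(basket), 2):
        pair_counts[pair] += 1

# Support = fraction of baskets containing the pair.
for pair, count in pair_counts.most_common(3):
    print(f"{pair}: support {count / len(baskets):.0%}")
```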

Architecture of a Typical Data Mining System

The architecture of a typical data mining system may have the following major components:

Database, data warehouse, World Wide Web, or other information repository: This is one or a set of databases, data warehouses, spreadsheets, or other kinds of information repositories. Data cleaning and data integration techniques may be performed on the data.

Database or data warehouse server: The database or data warehouse server is responsible for fetching the relevant data, based on the user's data mining request.

Knowledge base: This is the domain knowledge that is used to guide the search or evaluate the interestingness of resulting patterns. Such knowledge can include concept hierarchies, used to organize attributes or attribute values into different levels of abstraction. Knowledge such as user beliefs, which can be used to assess a pattern's interestingness based on its unexpectedness, may also be included. Other examples of domain knowledge are additional interestingness constraints or thresholds, and metadata (e.g., describing data from multiple heterogeneous sources).

Data mining engine: This is essential to the data mining system and ideally consists of a set of functional modules for tasks such as characterization, association and correlation analysis, classification, prediction, cluster analysis, outlier analysis, and evolution analysis.

Pattern evaluation module: This component typically employs interestingness measures and interacts with the data mining modules so as to focus the search toward interesting patterns. It may use interestingness thresholds to filter out discovered patterns. Alternatively, the pattern evaluation module may be integrated with the mining module, depending on the implementation of the data mining method used. For efficient data mining, it is highly recommended to push the evaluation of pattern interestingness as deep as possible into the mining process so as to confine the search to only the interesting patterns.

User interface: This module communicates between users and the data mining system, allowing the user to interact with the system by specifying a data mining query or task, providing information to help focus the search, and performing exploratory data mining based on the intermediate data mining results. It also allows the user to browse database and data warehouse schemas or data structures, evaluate mined patterns, and visualize the patterns in different forms.
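One way to see how these components cooperate is as objects calling one another. The class names, wiring, and the trivial "outlier analysis" below are illustrative inventions, not a standard API.

```python
class WarehouseServer:
    """Fetches relevant data for a mining request (invented stand-in)."""
    def fetch(self, query):
        return [("cust1", 100), ("cust2", 2500), ("cust3", 120)]

class MiningEngine:
    """Runs a mining task, here a crude outlier analysis, on fetched data."""
    def mine(self, rows):
        mean = sum(v for _, v in rows) / len(rows)
        return [(k, v) for k, v in rows if v > 2 * mean]

class PatternEvaluator:
    """Applies an interestingness threshold to discovered patterns."""
    def __init__(self, threshold):
        self.threshold = threshold
    def filter(self, patterns):
        return [p for p in patterns if p[1] >= self.threshold]

# User interface layer: issue a query, mine, evaluate, report.
rows = WarehouseServer().fetch("SELECT customer, spend FROM sales")
patterns = PatternEvaluator(threshold=1000).filter(MiningEngine().mine(rows))
print("interesting patterns:", patterns)
```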

Figure: Typical Data Mining System Architecture

Classification of Data Mining Systems

There are many data mining systems available or being developed. Some are specialized systems dedicated to a given data source or confined to limited data mining functionalities; others are more versatile and comprehensive. Data mining systems can be categorized according to various criteria, among which are the following:

Classification according to the type of data source mined: this classification categorizes data mining systems according to the type of data handled such as spatial data, multimedia data, time-series data, text data, World Wide Web, etc.

Classification according to the data model drawn on: this classification categorizes data mining systems based on the data model involved, such as relational database, object-oriented database, data warehouse, transactional database, etc.

Classification according to the kind of knowledge discovered: this classification categorizes data mining systems based on the kind of knowledge discovered or the data mining functionalities, such as characterization, discrimination, association, classification, clustering, etc. Some systems tend to be comprehensive systems offering several data mining functionalities together.

Classification according to the mining techniques used: data mining systems employ and provide different techniques. This classification categorizes data mining systems according to the data analysis approach used, such as machine learning, neural networks, genetic algorithms, statistics, visualization, database-oriented or data-warehouse-oriented, etc. The classification can also take into account the degree of user interaction involved in the data mining process, such as query-driven systems, interactive exploratory systems, or autonomous systems. A comprehensive system would provide a wide variety of data mining techniques to fit different situations and options, and offer different degrees of user interaction.

Enterprise Resource Planning (ERP)

ERP is a software architecture that facilitates the flow of information among the different functions within an enterprise. Similarly, ERP facilitates information sharing across organizational units and geographical locations. It enables decision-makers to have an enterprise-wide view of the information they need in a timely, reliable, and consistent fashion. ERP provides the backbone for an enterprise-wide information system. At the core of this enterprise software is a central database which draws data from and feeds data into modular applications that operate on a common computing platform, thus standardizing business processes and data definitions into a unified environment. With an ERP system, data needs to be entered only once. The system provides consistency and visibility (or transparency) across the entire enterprise. A primary benefit of ERP is easier access to reliable, integrated information. A related benefit is the elimination of redundant data and the rationalization of processes, which result in substantial cost savings. The integration among business functions facilitates communication and information sharing, leading to dramatic gains in productivity and speed.

The Components of an ERP System - The components of an ERP system are the common components of a Management Information System (MIS).

ERP Software - Module-based ERP software is the core of an ERP system. Each software module automates the business activities of a functional area within an organization. Common ERP software modules include product planning, parts purchasing, inventory control, product distribution, order tracking, finance, accounting, and human resources.

Business Processes - Business processes within an organization fall into three levels: strategic planning, management control, and operational control. ERP has been promoted as a solution for supporting or streamlining business processes at all levels. Much of ERP's success, however, has been limited to the integration of various functional departments.

ERP Users - The users of ERP systems are employees of the organization at all levels, from workers, supervisors, and mid-level managers to executives.

Hardware and Operating Systems - Many large ERP systems are UNIX-based. Windows NT and Linux are other popular operating systems for running ERP software. Legacy ERP systems may use other operating systems.

The Boundary of an ERP System - The boundary of an ERP system is usually smaller than the boundary of the organization that implements the ERP system. In contrast, the boundary of supply chain systems and e-commerce systems extends to the organization's suppliers, distributors, partners, and customers. In practice, however, many ERP implementations involve the integration of ERP with external information systems.
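The "enter data once, share it everywhere" idea behind the central ERP database can be sketched with two modules touching the same record; the module functions and schema below are invented.

```python
import sqlite3

# Central database shared by all ERP modules.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT, credit REAL)")

def sales_module_create_customer(name, credit_limit):
    """Sales enters the customer once, into the shared database."""
    db.execute("INSERT INTO customers (name, credit) VALUES (?, ?)",
               (name, credit_limit))

def finance_module_check_credit(name):
    """Finance reads the same record; no re-keying, no redundant copy."""
    row = db.execute("SELECT credit FROM customers WHERE name = ?",
                     (name,)).fetchone()
    return row[0] if row else None

sales_module_create_customer("Acme Corp", 50000.0)
print("Finance sees credit limit:", finance_module_check_credit("Acme Corp"))
```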

ERP vs. CRM and SCM CRM (Customer Relationship Management) and SCM (Supply Chain Management) are two other categories of enterprise software that are widely implemented in corporations and non-profit organizations. While the primary goal of ERP is to improve and streamline internal business processes, CRM attempts to enhance the relationship with customers and SCM aims to facilitate the collaboration between the organization, its suppliers, the manufacturers, the distributors and the partners.
