
TOWARDS A FORMAL FRAMEWORK FOR SEMANTIC INTEROPERABILITY IN SUPPLY NETWORKS

Milan Zdravković¹, Miroslav Trajanović¹
¹ Faculty of Mechanical Engineering, University of Niš

Abstract - In this position paper, we summarize our achievements in the research and development of a formal framework for semantic interoperability in supply networks. First, we identify common problems of traditional supply chains and argue that interoperability of enterprise information systems is crucial for their resolution. Second, we discuss the foundational aspects of semantic interoperability and the criteria for its evaluation. Third, the multiple abstraction layers of this framework are described, including 1) individual enterprise realities, namely local ontologies, 2) common collaborative practices, namely formalizations of relevant reference models, and 3) application ontologies. Finally, we identify gaps and provide guidelines for future research on the topic.

1. INTRODUCTION

A supply chain is a complex, dynamic networked environment which assembles a number of different actors, assets, goals, competencies, functions and roles. Interest in creating a new discipline of supply chain management (SCM) developed in the early 1960s, with the initial motivation to investigate the amplification of demand fluctuation (known as the bullwhip effect) which occurred at deeper levels of the manufacturing supply tree [1]. Despite the advances in ICT and organizational sciences and rich implementation experience, there is much evidence that the level of adoption of the SCM paradigm and the related tools, methods and practices is still low.

One of the effects of the development in this domain is that manufacturers started to view their suppliers as extensions of themselves. Consequently, supply chains are characterized by a high level of technical and organizational integration, focusing on cost reduction as a key motivation for collaboration. We consider this type of supply chain as traditional.

The traditional approach to supply chain configuration may have a negative impact on performance. First, high-speed, low-cost supply chains are often unable to respond efficiently to unexpected structural changes in demand or supply. Second, a high level of integration reduces the flexibility of small and medium enterprises, the main constituents of the lower levels of a supply chain. Third, investments in a technical framework for enterprise integration cannot be returned in the short term. Furthermore, starting collaboration in such traditional settings is a reactive decision. Namely, relationship establishment is motivated by internal, rather than external factors: complexity and volume of supply relationships, potential for cost reduction [2], high frequency of transactions between parties [3], degree of asset specificity [4], etc.

In response to the weaknesses of the static architecture of the supply chain, the notion of the virtual enterprise has been introduced and widely discussed. A virtual enterprise is a temporary network of independent enterprises that come together quickly to exploit fast-changing opportunities and then dissolve [5]. It is characterized by a short-lived appearance of a supply chain, capable of producing a low volume of a high variety of products by drawing on a loosely coupled, heterogeneous environment of available competences, capabilities and resources, referred to as a Virtual Breeding Environment [6]. One of the main challenges in bringing these paradigms to reality is the complexity of the ICT environment and the heterogeneity of the corresponding enterprise information systems (EIS). These challenges are related to the realization of some of the fundamental requirements for ICT applications: enterprise integration and interoperability [7]. The dominant integration approach today is Service-Oriented Architecture (SOA) [8]; still, its implementation is resource- and time-consuming. For the achievement of full interoperability, the federated approach is proposed [9]. It is characterized by dynamic accommodation of the systems, based on a pre-determined meta-model. In the remainder of this paper, we describe a formal framework for semantic interoperability in supply networks which is based on this approach.

2. FORMAL FRAMEWORK FOR SEMANTIC INTEROPERABILITY IN SUPPLY NETWORKS

The paradigms of virtual enterprises and their breeding environments are based on the capability of an enterprise to configure or reconfigure quickly, according to market circumstances that are often not known in advance, or even at the moment of configuration. Hence, the efficiency and effectiveness of this joint endeavor depend on the interoperability of enterprises, rather than on their integration. The main prerequisite for achieving interoperability of loosely coupled systems is to maximize the amount of semantics which can be utilized and to make it increasingly explicit [10], and consequently, to make the systems semantically interoperable.

2.1. Foundational aspects of semantic interoperability

ISO/IEC 2382 defines interoperability as the capability to communicate, execute programs, or transfer data among various functional units in a manner that requires the user to have little or no knowledge of the unique characteristics of those units [11]. Semantic interoperability builds upon this notion: it means ensuring that the precise meaning of exchanged information is uniquely interpreted by any system not initially developed for the purpose of its interpretation. It enables systems to combine and process received information with other information resources and thus to improve the expressivity of the underlying ontologies. In our research, we adopt the definition of John Sowa [12], because we can use it to evaluate the semantic interoperability of EISs. We represent this definition in controlled natural language, as the asymmetric logical relation semantically-interoperable(S,R), which holds for systems S and R when:

semantically-interoperable(S,R) ⟺
  ∀p [ data(p) ∧ transmitted-from(p,S) ∧ transmitted-to(p,R) →
    ∀q ( statement-of(q,S) ∧ (p ⊨ q) →
      ∃q′ ( statement-of(q′,R) ∧ (p ⊨ q′) ∧ (q′ ≡ q) ) ) ]

In words: for every data item p transmitted from S to R, every statement q implied by p on S has a counterpart statement q′ on R which is implied by p on R and is logically equivalent to q.

Figure 1 illustrates the following assumption about the semantic interoperability of systems represented by local ontologies: when two different application ontologies of two partners in the supply chain (or two departments or contexts of the same enterprise) are mapped to the same domain ontology, the EISs whose knowledge they represent become fully or partially semantically interoperable in a specific direction, depending on the mappings. Thus, if there exist two isolated EISs S1 and S2 with corresponding application ontologies OL1 and OL2, and if mappings ML1D1 and ML2D1 are established between the concepts of OL1 and OL2 and the domain ontology OD1, respectively, then there exist mappings MO1O2 which can be inferred as logical functions of ML1D1 and ML2D1.
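As an illustration of this last point, the minimal sketch below composes two hypothetical concept mappings through a shared domain ontology to derive a direct local-to-local mapping. The concept names and the dictionary-based representation are illustrative assumptions, not the framework's actual implementation, which operates on OWL mappings.

```python
# Minimal sketch (assumption: mappings reduced to concept-equivalence pairs).
# M_L1D1 and M_L2D1 map local-ontology concepts to domain-ontology concepts;
# M_O1O2 is inferred by composing the first mapping with the inverse of the second.

M_L1D1 = {"OL1:SalesOrder": "OD1:Order", "OL1:Client": "OD1:BusinessPartner"}
M_L2D1 = {"OL2:PurchaseOrder": "OD1:Order", "OL2:Supplier": "OD1:BusinessPartner"}

def compose(m1: dict, m2: dict) -> dict:
    """Derive direct mappings between two local ontologies sharing a domain ontology."""
    inverse_m2 = {}
    for local_concept, domain_concept in m2.items():
        inverse_m2.setdefault(domain_concept, []).append(local_concept)
    return {a: inverse_m2.get(d, []) for a, d in m1.items()}

print(compose(M_L1D1, M_L2D1))
# {'OL1:SalesOrder': ['OL2:PurchaseOrder'], 'OL1:Client': ['OL2:Supplier']}
```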

Each of the local ontologies may represent one of the contexts of the enterprise (C1-Cn).

Figure 1. Semantic interoperability of systems

2.2. Description of the ontological framework

The concepts and tools presented in this paper use the formal framework of supply chain operations [13] presented in Fig. 2. It is based on the Supply Chain Operations Reference (SCOR) model [14] - a standard approach for the analysis, design and implementation of five integrated processes in supply chains: plan, source, make, deliver and return. SCOR aims at integrating processes, metrics, best practices and technologies with the objective of improving collaboration between supply chain partners. The framework was developed with the goals of enabling semantic interoperability between SCOR-based systems and other relevant EISs, and of improving the expressivity of SCOR-based models.

Figure 2. Formal framework of supply chain operations

The approach to its development is based on the premise that domain knowledge evolves at the highest rate at the lower levels of abstraction of domain community interaction: consensus on specific notions is more likely to be reached than agreement on generalizations and abstractions. However, this level of abstraction is often characterized by the implicit semantics of standards, reference models, etc. Hence, we consider coherence between the creation, evolution and use of highly contextualized knowledge and the development of formal expressive models to be a very important factor for the usability of the models. In the process of developing an ontological framework for supply chain operations, we start by representing the implicit semantics of the SCOR model as a knowledge organization system (KOS), using OWL (SCOR-KOS OWL). Thus, we ensure the interoperability of SCOR-based enterprise applications with semantic systems. Next, the semantics of the SCOR elements is made explicit by the

concepts and relations of the SCOR-Full ontology and by the rules used for mapping those to SCOR-KOS OWL concepts (SCOR-MAP). Various design goals, such as generation of process models, acquisition of product data and goal reconciliation, are formalized by the application ontologies which are used by the corresponding systems. Finally, the realities of the enterprises are represented in the framework by the local ontologies, which transform implicit database schemas into explicit semantic models.

2.3. Domain ontologies

SCOR-Full formalizes knowledge about supply chain operations by identifying and aggregating common enterprise notions. It uses those notions to define the semantics of the chosen generalizations, namely the notions of Course, Setting, Quality, Function and Resource.

Figure 3. Main concepts of SCOR-Full ontology and relationships between them

The SCOR-Full ontology does not aim at formalizing the supply chain, but only at resolving the semantic inconsistencies of the SCOR reference model. Thus, its scope is strictly limited to using the common enterprise notions for expressing the existing elements of the SCOR model. The central notion of the SCOR-Full ontology (as is the case for the SCOR model) is a generalization of process, in the sense that it acts as the main context for the semantic definition of the other concepts in the ontology. In SCOR-Full, "Agent" is the concept which describes an executive role; it entails all entities which perform an individual task or a set of tasks within the supply network, classified with the concepts of equipment, organization, supply chain, supply chain network, facility and system. "Course" classifies prescriptions of ordered sets of tasks: activity, process, method, procedure, strategy or plan. The notion of Course generalizes doable things with common properties of environment (enabling and resulting states, constraints, requirements, etc.), quality (cost, duration, capacity, performance, etc.) and organization (agent and business function). The "Setting" concept provides the description of the environment of a course. It aggregates semantically defined features of the context in which a course takes place - its motivation, drivers and constraints. Thus, it classifies the rules, metrics, requirements, constraints, objectives, goals or assumptions of a prescribed set of actions. "Quality" is a general attribute of a course, agent or function which can be perceived or measured, e.g. capability, capacity,

availability, performance, cost or time/location data. The "Function" concept entails elements of the horizontal business organization, such as stocking, shipping, control, sales, replenishment, return, delivery, disposition, maintenance, production, etc. Instead of representing process flows, SCOR-Full is used to model the enabling and caused states of the relevant activities. These states are represented by the concept of a configured item. A resource item is a general term which encloses communicated ("Comm-Item", e.g. notification, response, request) and configured ("Conf-Item", with a defined state) information items ("Inf-Item"), such as order, forecast, report, etc., and physical items ("Phy-Item"). Where information items are the attributes of a quality, their configurations are realizations of the rules, metrics, requirements, constraints, goals or assumptions of a course. Configured items model the state semantics of a resource physical or information item - the notions which are used to aggregate the atomic, exchangeable objects in the enterprise environment, and which are characterized by their states. Examples of information items are Order, Forecast, Budget, Bill-Of-Material, etc. Their semantics is not addressed by the SCOR-Full ontology. From this perspective, these are the atomic concepts which can be semantically defined when mapped to other enterprise ontologies.

2.4. Application ontologies

One of the layers of the formal framework presented in this paper consists of the application ontologies. These are considered formalizations of design goals and are usually used as software application meta-models. The layering of application and domain representation models reflects the paradigm of separation of domain and task-solving knowledge [15] and assumes their mutual independence [16]. Thus, arbitrary design goals can be defined, formalized into a set of competency questions and used for the development of a task-solving application ontology. In our framework, we developed application ontologies for supply chain process generation, product data acquisition and goal reconciliation. These are shortly described below.

The SCOR-CFG ontology [13] is designed with the goal of addressing the problem of generating a SCOR thread diagram - a standard tool used in the implementation of a SCOR model. In this case, it is inferred as a configuration of source, make and deliver processes, on the basis of the asserted product topology, participants and production strategies for each component. Different process patterns (and roles) are inferred as a result of SPARQL queries executed against the SCOR-CFG model for each of the three possible manufacturing strategies: make-to-stock, make-to-order or engineer-to-order. The main features of the corresponding semantic application are: a) development of complex thread diagrams (including the horizontal organization of individual supply chain actors); b) generation of process models (including SCOR PLAN activities) and c) workflows and generation of

implementation roadmaps (with best practices, systems, resource tracking and performance measurement).

Another example of an application ontology we use in our framework is the product ontology for inter-organizational networks [17]. It aims at facilitating a cooperative response in product information acquisition and management, and is used in a process of semantic alignment of two perspectives of product information - design and functional. The process is expected to decrease human intervention in product data exchange in networked environments, and to create added value through possible recognition of design intent and automated referencing to related manufacturing competences. The current prototype of the system comprises interfaces for topological model submission and semantic refinement.

The third application ontology facilitates the process of goal reconciliation in supply chain networks. While a supply chain has a singular objective, its actors are individually characterized by different goals, not necessarily compatible with the overall objective. Misalignment of individual goals and objectives can have a negative impact on the capability of an enterprise to act upon its business strategy when the enterprise is involved in more than one supply chain. The SCOR-GOAL ontology formalizes the notions of cooperative goals, hard and soft goals and other relevant terms, and is expected to facilitate a coherent, system-wide decision-making process by providing guidelines for the reconciliation of individual enterprises' goals. It is also expected to drive future research on using intelligent software agents in SCM.

2.5. Local ontologies

One of the major challenges in the efficient use of computer systems is interoperability between the multiple representations of reality (data, processes, etc.) stored inside the systems and reality itself - the systems' users and their perception of reality. Where the latter can be formalized by domain ontologies, as shared specifications of conceptualizations, the former relies upon local ontologies - wrappers for heterogeneous sources of information, business logic and presentation rules. In our work, the scope of semantic interoperability is set to EISs. Their conceptualizations are made on the basis of the business logic, which is usually hidden in the actual code, and the data model, represented by the corresponding database schema. We consider EIS databases a legitimate starting point for building a relevant local ontology. Obviously, the business logic which is encapsulated in the EIS will remain hidden - only the underlying data model is exposed by the ontology. The exceptions are database triggers, which can be considered business rules if they are not implemented only to enforce the referential integrity of the database. In order to enable the implementation of local ontologies within the formal framework, we developed a new approach to database-to-ontology mapping, which overcomes the weaknesses of existing methods by using

the full expressivity of OWL for making the implicit semantics of the database schema explicit, and by enabling the translation of semantic queries to SQL queries. The generation process consists of 4 phases: a) data import and classification of ER entities; b) classification (inference) of OWL types and properties; c) lexical refinement; d) generation of the local ontology. It is illustrated in Figure 4 below. The process is supported by a web application, which consists of modules for data import/assertion of ER meta-model instances, lexical refinement and transformation of the classified OWL types and properties into a local ontology.

Figure 4. Approach to database-to-ontology mapping

First, the database schema is investigated and an OWL representation of the ER model is constructed. This is realized by the developed application, which connects to the database, uses introspection queries to discover its structure and asserts the relations between the artifacts using the proposed ER formalization (er.owl). Second, the resulting (serialized) OWL representation of the database ER model is imported into a meta-model (ser.owl), which classifies future OWL concepts and the domains and ranges of the object and data properties, according to defined rules. In this phase, the approach takes into account existential constraints from the ER model. They are associated with explicit semantics in the resulting ontology, namely necessary conditions for the entailment of the corresponding concepts. According to these constraints, rules for the intensional conceptualization (necessary conditions, or inherited anonymous classes) of a particular entity are inferred. The approach also considers the functionality of properties. A functional property is a property that can have only one (unique) value y for each instance x; such properties are classified when a one-to-one relation is identified between two concepts. The rules classify instances of the OWL representation of the database ER model (er.owl) into the meta-model (ser.owl). Inferred triples can be edited in a simple web application, which also launches the process of local ontology generation. In this process, meta-model entities are transformed into the corresponding OWL, RDF and RDFS constructs - the resulting local ontology. Concepts of the generated local ontology are annotated with the URIs of the corresponding ER entities from the er.owl model. Thus, translation of semantic queries to SQL queries becomes possible.
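As a rough illustration of the first phase, the sketch below introspects a PostgreSQL schema and asserts ER meta-model instances as RDF triples with rdflib. The er.owl namespace, class and property names used here are hypothetical stand-ins for the authors' formalization, and the connection details are placeholders.

```python
# Minimal sketch of phase (a): schema introspection and ER-instance assertion.
# Assumption: the er.owl vocabulary (Entity, Attribute, hasAttribute, hasName)
# is hypothetical; the real names come from the authors' er.owl formalization.
import psycopg2
from rdflib import Graph, Namespace, Literal, RDF

ER = Namespace("http://example.org/er.owl#")  # placeholder namespace
g = Graph()

conn = psycopg2.connect("dbname=openerp user=postgres")  # placeholder DSN
cur = conn.cursor()
cur.execute("""
    SELECT table_name, column_name
    FROM information_schema.columns
    WHERE table_schema = 'public'
""")
for table, column in cur.fetchall():
    entity = ER[table]
    attribute = ER[f"{table}.{column}"]
    g.add((entity, RDF.type, ER.Entity))
    g.add((attribute, RDF.type, ER.Attribute))
    g.add((entity, ER.hasAttribute, attribute))
    g.add((attribute, ER.hasName, Literal(column)))

g.serialize("er-instances.owl", format="xml")  # input for the ser.owl phase
```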

The approach described above has been implemented for the case of the OpenERP EIS. OpenERP is an open-source suite of business applications covering sales, CRM, project management, warehouse management, manufacturing, accounting and human resources. It uses a PostgreSQL relational database for data storage and an application server for the enterprise logic. Fig. 5 shows portions of the generated local ontology in Protégé.

Figure 5. Generated local ontology in Protégé

One of the benefits of semantically interoperable systems (see Fig. 1) is the possibility of using a single criterion (or criteria) to infer the statements that hold true in all of these systems, despite their heterogeneous structure. Namely, a specific semantic query executed against the local ontology OLi would normally infer triples of information from the database of Si. However, if mappings (or their logical function) between OLi and OLj exist, the inferred triples will also include information from the database of Sj. For example, in supply chain networks, a single semantic query can be used to find out the availability of a specific resource or competence across all those owned and used by the enterprises of the Virtual Breeding Environment.

2.6. Query translation

A semantic query can be considered a pair (O, C), where O is a set of concepts which need to be inferred and C is a set of restrictions to be applied to their properties, namely value and cardinality constraints. This corresponds to a simplified representation of an SQL query, which includes tables (and fields) and a comparison predicate, namely restrictions posed on the rows returned by the query. In addition, different types of property restrictions correspond to different cases (or patterns, where a complex semantic query is mapped) of SQL queries. Where the relevant entailments can be reasoned only by property domain and range inferences, the set C may be considered sufficient for representing the semantic query. For example, in the OpenERP ontology (see Fig. 5), the DL query hasAccountAccountType some (hasCode value 3) returns all instances of the account_account concept whose type's code is exactly 3. This kind of query representation (using only property restrictions) may produce unpredictable and misleading results where the restrictions are posed on common lexical notions of different concepts, such as name, type, id, etc. The ambiguity of the corresponding properties is reflected in the relevant ontology in the sense that their domains are typically defined as a union of a large number of concepts. However, this may be considered an advantage in some cases: value restrictions on ambiguous data properties may produce relevant inferences and thus facilitate semantic querying without the need for extensive knowledge of the underlying ontology structure. This kind of query is mapped to an SQL UNION query which combines SELECT sub-queries made on each element of the property domain, with the WHERE statement corresponding to the relevant row restrictions. When the corresponding element of the UNION query is assembled, a static field with an appropriate label (a reference to the concept) is added to each of the elements, so that it becomes possible to determine which sub-query actually returned the results.

In the first step, decomposition and semantic analysis of the input query is performed. The 4-tuples in the forms (subject predicate some|only|min n|max m|exactly o bNode) and (subject predicate value {type}) are extracted from the input query. In the case of the DL query which returns all concepts related to a company whose primary currency is EUR (hasResCompany some (hasResCurrency some (hasName value "EUR"))), the following 4-tuples are identified:

X hasResCompany some bNode1
bNode1 hasResCurrency some bNode2
bNode2 hasName value "EUR"

Next, a database connection is established and an SQL query is constructed and executed for each 4-tuple, in reverse order, as a result of the above analysis. Each query returns data which is used to generate OWL statements that are asserted to a temporary model. Each set of OWL statements corresponds to a sub-graph whose focal individual is an instance of the concept inferred on the basis of the 4-tuple's property domain or the returned result (label). The other individuals or values correspond to the defining properties of this concept (inherited anonymous classes). In case of ambiguity, the resulting blank nodes are represented as sets, which are filtered as a result of the range inference of the parent 4-tuple, in the final stage of the method.
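The sketch below illustrates, under simplifying assumptions, how a single extracted 4-tuple with an ambiguous property could be compiled into such a labeled UNION query. The domain list and the table/column naming convention are hypothetical, not the authors' actual implementation.

```python
# Minimal sketch: compiling one (subject, predicate, "value", literal) 4-tuple
# into a labeled SQL UNION query. The property-domain map is a hypothetical
# stand-in for what would be inferred from the local ontology.

PROPERTY_DOMAINS = {
    # hasName is ambiguous: its domain is a union of several concepts
    "hasName": [("res_company", "name"), ("res_currency", "name")],
}

def tuple_to_union_sql(predicate: str, literal: str) -> str:
    sub_queries = []
    for table, column in PROPERTY_DOMAINS[predicate]:
        # The static label field records which concept each row came from.
        sub_queries.append(
            f"SELECT id, '{table}' AS concept_label "
            f"FROM {table} WHERE {column} = '{literal}'"
        )
    return "\nUNION\n".join(sub_queries)

print(tuple_to_union_sql("hasName", "EUR"))
```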

3. CONCLUSIONS AND FUTURE RESEARCH

In this position paper, research on semantic interoperability in supply chain networks is presented. It is based on the formalization of a widely adopted supply chain process reference model and includes the development of its OWL representation, a semantically enriched model and

correspondences with other models. It transforms the implicit semantics of the reference model into an explicit specification which uses common enterprise notions, assumed to be defined in other domain ontologies and/or conceptualizations of relevant enterprise models, architectures and frameworks. The formalization approach used is characterized by multiple, cross-referenced levels of abstraction, represented by OWL models of different expressivity. The modular design contributes to the usability of the ontology framework by facilitating maintenance, avoiding performance-related problems in reasoning and providing increased potential for ontology matching. Thus, it is expected to facilitate semantic interoperability in supply chain networks.

With regard to local ontology generation, significant research efforts are needed for the representation and exposition of enterprise business logic, which is hard-coded in the systems, as well as of the semantics of the instances, namely the information stored in the database (for example, occurrence patterns). Another line of future research will aim at the enactment of the generated ontologies, as they are currently considered only intermediary models. Enactment will also facilitate the process of semantic matching of the local ontologies with SCOR-Full. On the explicit side of the semantic framework, considerable efforts are needed for the formalization of what is considered domain state-of-the-art knowledge - architectures, standards, models, etc. of the enterprise - and its mapping to SCOR-Full, in order to improve the expressivity and completeness of the framework and to deliver a complete SCM domain ontology. Finally, in the implementation context, evaluation of the approach, using the definition of semantic interoperability and selected cases, will bring evidence of its feasibility and validity.

We consider the research directions above important for increasing collaboration in a supply chain network, as the fulfillment of the above objectives will facilitate logic-driven, automatic and transparent decision making, thus enabling a transition from traditional supply chains to the virtual enterprise and related paradigms.

4. REFERENCES

[1] Forrester, J.W., Industrial Dynamics, The M.I.T. Press, Cambridge, MA, 1961
[2] Lambert, D.M., Knemeyer, A.M., We're in This Together, Harvard Business Review on Supply Chain Management, December 2004
[3] Jespersen, B.D., Skjøtt-Larsen, T., Supply Chain Management - in Theory and Practice, Copenhagen Business School Press, 2006
[4] Williamson, O., The Economic Institutions of Capitalism: Firms, Markets, Relational Contracting, The Free Press, New York, 1985
[5] Browne, J., Zhang, J., Extended and virtual enterprises - similarities and differences,

International Journal of Agile Management Systems, Vol.1, No.1, pp.30-36, 1999
[6] Sánchez, N.G., Apolinar, D., Zubiaga, G., Atahualpa, J., González, I., Molina, A., Virtual Breeding Environment: A First Approach to Understanding Working and Sharing Principles, in Proceedings of InterOp-ESA'05
[7] Panetto, H., Molina, A., Enterprise integration and interoperability in manufacturing systems: Trends and issues, Computers in Industry, Vol.59, No.7, pp.641-646, 2008
[8] Li, Q., Zhou, J., Peng, Q.R., Li, C.Q., Wang, C., Wu, J., Shao, B.E., Business processes oriented heterogeneous systems integration platform for networked enterprises, Computers in Industry, Vol.61, No.2, pp.127-144, 2010
[9] Panetto, H., Berio, G., Benali, K., Boudjlida, N., Petit, M., A Unified Enterprise Modeling Language for enhanced interoperability of Enterprise Models, in Proceedings of the 11th IFAC INCOM2004 Symposium, April 5th-7th, 2004, Bahia, Brazil
[10] Obrst, L., Ontologies for semantically interoperable systems, in Proceedings of the 12th International Conference on Information and Knowledge Management, New Orleans, USA, 2003
[11] ISO/IEC 2382, 01.01.47 Interoperability
[12] Mailing list of the IEEE Standard Upper Ontology working group, http://suo.ieee.org/email/msg07542.html
[13] Zdravković, M., Panetto, H., Trajanović, M., Towards an approach for formalizing the supply chain operations, in I-SEMANTICS '10: Proceedings of the 6th International Conference on Semantic Systems, 2010
[14] Stewart, G., Supply-chain operations reference model (SCOR): the first cross-industry framework for integrated supply-chain management, Logistics Information Management, Vol.10, No.2, pp.62-67, 1997
[15] Gangemi, A., Ontology Design Patterns for Semantic Web Content, Lecture Notes in Computer Science, Vol.3729, Springer, Berlin/Heidelberg, pp.262-276, 2005
[16] Guarino, N., Understanding, building and using ontologies, International Journal of Human-Computer Studies, Vol.46, No.2-3, pp.293-310, 1997
[17] Zdravković, M., Trajanović, M., Integrated Product Ontologies for Inter-Organizational Networks, Computer Science and Information Systems, Vol.6, No.2, pp.29-46, 2009

GAME BASED LEARNING MODULE Z-BUFFER ON A COURSE IN COMPUTER GRAPHICS¹


Kristijan Kuk², Petar Spalević², Marko Carić³, Stefan Panić⁴
² College of Electrical Engineering and Computer Science Professional Studies, Belgrade
³ Faculty of Engineering Management and Economics, Novi Sad
⁴ Faculty of Electronic Engineering, Niš

¹ The paper is a result of the project "Software environment for optimal management of the process of developing quality software", No. TR35026, funded by the Ministry of Science and Technological Development.

Abstract: Interactive multimedia simulations combined with computer game elements can be successfully applied as a new type of educational resource for teaching today's generations of students. This type of learning module, which we have named game based learning module (GBLm), has been designed by the authors. This work describes one type of GBLm, which we used in the subject Computer Graphics for the teaching unit Z-buffer in order to facilitate the learning process. During the design and creation of the teaching unit for the GBLm we used one of the constructivist teaching methods - concept maps.

Keywords: educational games, game based learning, Z-buffer algorithm, concept maps.

1. INTRODUCTION

In today's education process, with the use of modern multimedia technologies and the Internet, many teachers use interactive simulations for student training and practice. However, since game environments are becoming more and more complex, educational games can be very useful in providing by-ways leading to teaching material concepts which are difficult to acquire by traditional methods. The use of interactive learning enables students to handle data and geometric shapes in order to check and practice mathematical principles, which is confirmed by Prensky's most successful game project, Monkey Wrench Conspiracy [1]. Educational games attract students' attention in a simple way. Research in this field shows that this phenomenon is a result of an emotional connection between a game and a student. The emotional connection is established by combining a number of various sources, such as graphics and sound, which provides for a high level of interactivity between the computer and the student [2]. Educational games and interactive simulations can enable a student to successfully acquire knowledge in a specific field through playing a game, and they are very popular among younger children and adolescents. For the mentioned reasons, in this work we tried to make it easier for students to learn abstract educational material, and to improve their results on the final exam, with the use of one type of such software.

The paper is organized as follows: Section 2 gives an overview of new ways of student interaction with entertaining, modern learning systems that are used today in technical science and mathematics lessons; Section 3 presents the teaching unit in which the entertaining learning environment Z-buffer is implemented; Section 4 describes the methodology of the use of concept maps, realized through game based learning modules (GBLm); Section 5 covers the interface design, the application of levels in the GBLm, and the way concept maps are used through the levels; finally, Section 6 illustrates the results recorded by students after the use of the GBLm on the course in Computer Graphics.

2. RELATED WORK

Bearing in mind the great power of multimedia content, in today's education process there is an increased use of specific types of multimedia applications for entertaining learning, i.e. educational games. Through this type of educational material, today's teachers are trying to present teaching material to students in the simplest and most interesting possible way. If we start from the fact that playing games and competing are among the oldest human characteristics, this adds even more significance to the use of educational games in education systems.

A combination of dynamic simulations and educational games on physics courses for teaching a new generation of students is used by teachers at the Norwegian University of Science and Technology (NTNU). Foss, B.A. and Eikaas, T.I. [3] present in their work "Game play in engineering education: concept and experimental results" the main design and a series of online learning resources based on dynamic simulations, which give significance to the use of games on engineering courses in the future. Sweller and Mayer, relying on cognitive load theory [4, 5], suggest that complex tasks, procedures and complex problem solving can be best understood if taught as mutually connected units. On the other hand, concept maps are tools used to build relationships among concepts. These tools have been used in educational environments to better connect the relationships between theory and practice, as well as among other concepts covered in a course. They also help learners build relationships between previous knowledge and newly introduced concepts, encouraging meaningful learning rather than rote learning (memorizing concepts, with no relationship to previous learning) [6]. As in any discipline, in Computer Engineering students start learning the basic concepts of the discipline in their first year of studies. Gul Tokdemir and Nergiz Ercil Cagiltay [7] propose using a concept map to build connections between the concepts taught in an Introduction to Computer

Engineering course. While preparing the concept map, they applied a new paradigm called Goal-Question-Concept, inspired by the well-known GQM (Goal-Question-Metric) method from the software engineering field. Programming is hard to learn, both because of complex concepts and skills. To overcome these difficulties, a new teaching strategy named concept map-based anchored teaching [8] was proposed by Liu Li, Haijun Mao and Licheng Xu. With the anchor as the core, students launch inquiry learning and incorporate detailed syntax into a real application, which means the construction of application skills and problem-solving abilities. To support concept learning, a concept map is assigned to students for better understanding of the relationships between concepts.

3. BACKGROUND

After analysing the final exam results during examination periods in the subject Computer Graphics at the College of Electrical Engineering and Computing Science Professional Studies in Belgrade, we came to the conclusion that the results for one group of questions differed much more than the others. Performing an analysis, we discovered that the questions referring to the field of hidden surface techniques, and especially the Z-buffer algorithm, taught on the course in Computer Graphics, had an unusual ratio between correct, incorrect and "I don't know" answers. The answers to these questions were in the following proportion: 34% correct, 33% incorrect, and 33% "I don't know" answers. The proportion of answers to other questions was usual for test-type questions: 46% correct, 17% incorrect and 29% "I don't know" answers. Although students had the same learning materials for exam preparation in all fields, the difference discovered for this teaching unit showed that this field was quite complex and abstract for students. It was therefore necessary to take steps to improve the approach to learning this teaching unit, and thereby to improve the final exam results as well.

Of all the algorithms for finding visible surfaces, the Z-buffer algorithm is maybe the simplest and is thus used most frequently. Starting from the facts that this teaching unit is simple and that, on the other side, the students' results are worse for it than for other fields, we began searching for an answer to the question of how to offer students teaching material that will be understandable for learning. For every pixel on the screen, this algorithm keeps a record of the object depth in the scene in relation to an observer, plus a record of the intensity of the color used for the object description. When a new polygon is to be presented, the value of depth and the value of color intensity are calculated for each pixel positioned within the borders of that polygon. If the polygon's pixel value is closer to the observer than the value in the Z-buffer, the previously recorded values of depth and color intensity in the buffer are replaced with the new ones [9, 10]. Calculation of the Z value for every point on a scan line can be simplified by using the fact that some polygons are planar. The Z-buffer is often implemented with 16 to 32-bit integer values in hardware, but software (as well as some hardware) implementations can use floating-point values. Although the Z-buffer algorithm demands a large amount of memory, it is easy to implement. The procedure for placing pixels in the Z-buffer algorithm (here a larger z value means closer to the observer):

Procedure SetPixel(x: Xres; y: Yres; z: Zres; v: Value);
Begin
  If z > depth[x, y] then
  Begin
    depth[x, y] := z;
    screen[x, y] := v;
  End;
End;
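For readers who prefer a runnable version, here is a minimal Python/NumPy sketch of the same depth test applied to individual pixel fragments; the buffer sizes and fragment values are illustrative assumptions.

```python
# Minimal runnable sketch of the Z-buffer depth test (assumption: a larger z
# value means closer to the observer, matching the pseudocode above).
import numpy as np

WIDTH, HEIGHT = 8, 8                       # illustrative resolution
depth = np.full((WIDTH, HEIGHT), -np.inf)  # depth buffer
screen = np.zeros((WIDTH, HEIGHT))         # color intensity buffer

def set_pixel(x: int, y: int, z: float, v: float) -> None:
    """Replace stored depth and color only if this fragment is closer."""
    if z > depth[x, y]:
        depth[x, y] = z
        screen[x, y] = v

# Two overlapping fragments at the same pixel: the closer one (z=5.0) wins.
set_pixel(3, 3, z=2.0, v=0.4)
set_pixel(3, 3, z=5.0, v=0.9)
print(screen[3, 3])  # 0.9
```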

4. PROPOSED METHODOLOGY

4.1. Concept Maps

Knowledge structure is regarded as an important component of understanding in a subject domain, especially in science [11, 12]. The knowledge structure of experts and successful learners is characterized by elaborate, highly integrated frameworks of related concepts [13, 14], which facilitate problem solving and other cognitive activities. A knowledge structure, then, might well be considered an important but generally unmeasured aspect of science achievement. Concept-mapping techniques are interpreted as representative of students' knowledge structures and so might provide one possible means of tapping into a student's conceptual knowledge structure [14, 12]. While designing and creating a teaching unit, a lecturer may find concept maps very useful. Global macro-maps can be made, showing the main ideas to be presented during the entire course, or specific micro-maps, showing the structure of knowledge for specific fields.

Concept maps are graphic tools for organizing and presenting knowledge. They comprise concepts, presented in regular geometric shapes, and the relations among them, marked by the lines that connect them [15]. Connecting words or expressions are written on the lines, and they determine the relationship between the concepts. A concept is defined as a regularity discovered in phenomena or objects, or in data on the stated phenomena and objects. Ideas comprise two or more concepts connected by words or expressions into a sensible unit (Figure 1).

Fig. 1: Concept map

In the graphical sense, concept maps include:
- concepts,
- connecting lines and words,
- sub-concepts (concepts which the connection leads to).

Using the advantages of the concept map technique, in the subject of Computer Graphics we created a concept map for the teaching unit Z-buffer, with the aim of reducing the items presented in this unit to the main concepts and connecting them in the simplest possible way.

4.2. Game Based Learning Modules

Computer Graphics students start the learning process by learning computer techniques and algorithms for generating two- and three-dimensional graphical objects. This subject is very suitable for teaching through games. Such games can be used both by beginners and by those who want to improve their skills. While playing, the learner gains theoretical knowledge and experience of graphics algorithms. If he is really interested in the game, he will turn to external sources of information (books, the Internet) and in that way study advanced graphics algorithms. After performing an analysis of existing Internet simulations and applications used in the teaching process, we came to the conclusion that it was necessary to introduce some of those contemporary teaching resources into the course in Computer Graphics, so that students could learn the planned material in the best possible way. We consider that the main objective of computer techniques in games is awakening interest in computer graphics through play. Thus the game concept should be based on two components:
- learners must get the course information through its interpretation in the game world;
- learners must see the result of their algorithm in a game context.

The concept is based on the Role Playing Game (RPG) genre. Besides placing the game interface into modules, we have also applied basic game elements, such as score, time and difficulty levels. These new modules, which include game elements, represent exploratory multimedia learning applications intended for Computer Science students (the Net generation); we named them game based learning modules (GBLm). The purpose of learning through the game is to enable students to learn the rules and check them in practice on examples of all the graphics algorithms. Multiple repetition of tasks involving the same operation increases the probability of learning the characteristics and use of the particular operation. Evaluation of whether an operation has been acquired or not is performed through a visual indication of the number of successful and unsuccessful tasks (the score) with the same operation, and comparison with preset criteria. The model of the game based learning module (GBLm) is shown in Figure 2.

Motives from modern applications were used for the representation of elements in the game, together with motives from the Aero interface of the operating systems Windows Vista and Windows 7, with effects of brightness, transparency and reflection, characteristic of new fancy technologies. The basic terms that explain the principles of the graphics algorithms are reached by selecting the Help option in the game. When a student starts learning with the use of the game, or faces a difficulty while solving a task generated by the application, Help serves to accelerate finding the right solution. This means that the formulation of definitions and theorems within Help is a key moment in designing the entire application.

Fig. 2. The model of the game based learning module

The entire design and architecture of the GBLm was made in the ActionScript 3.0 object-oriented programming language, supported in the Adobe Flash CS3 package. Construction of the game interface demanded a long-lasting and extensive analysis, whose main objective was the adjustment of the environment and the ways of task implementation to the affinities, previously acquired knowledge and age of the end users (students aged 18 to 20).

5. GBLm IMPLEMENTATION

The innovative way of checking knowledge with multiple-answer computer tests served as the main idea for the way in which teaching material could be given to students for solving. Using techniques for first-class innovative testing - select/recognize - we came to the idea that answers in the multimedia interactive learning module can be given as a series of image fields on which a student should click. Since the buffer we use in the Z-buffer algorithm uses 16 bits, the task of this module is to determine the value of each bit, i.e. the contents of the register in various situations presented in the task.

Type 1: An answer is chosen by selecting a square in the marked column referring to the scan line.
Type 2: An answer is chosen by selecting a square in the drop-down menu for the given register bit.

Fig. 3. Possibility of answer selection in GBLm

The correct answer, filling the content of one bit, is one of the proposed answers presented to students in the form of squares to be selected. These squares, i.e. the offered answers, are presented in two ways. In the first level of the module, answers are offered in the form of a 13-square column (type 1, Figure 3). Answers in the second level are also offered as an array, but in the form of a five-square row (type 2, Figure 3). This row with offered answers is not constantly visible, but is shown only when the given bit is selected, as a sub-menu in the menu list.

5.1. Module interface

The visual presentation of the module interface is adjusted to the type of student it is intended for, i.e. to teenage students. We used motives from contemporary applications, as well as motives from the Aero graphical interface of the operating systems Windows Vista and Windows 7. The graphics are adjusted so that students can enjoy visually attractive effects of brightness, transparency and reflection, the characteristics of the new fancy technology. In the functional sense, the interface of the GBLm Z-buffer comprises 6 thematic fields (Figure 4):
- the field in which the problem task is presented visually - field 1,
- the field which presents the solution to the task - field 2,
- offering assistance - field 3,
- description of the algorithm operation - field 4,
- the task text - field 5,
- the field for the main playing information - field 6.

Fig. 4. The interface of GBLm Z-buffer

5.2. Various difficulty levels in the module

The learning module named Z-buffer has been made on the basis of a concept map created for this teaching unit, which is shown in Figure 5. Since two main concepts are given in the concept map (the depth test and color test concepts), which are to be learned by students through this module, we presented them as two levels of different solving difficulty. Solving the task that presents the operation of the depth test is shown in the module as the easier level, i.e. level 1.

Fig. 5. A concept map for the Z-buffer algorithm

5.2.1. Level 1

The task at level 1 requires students to determine the pixel whose value will be written into the depth buffer. The student should select the appropriate square within the active field, which is determined by the current position of the scan line arrow (Figure 6). Observed from the aspect of concept maps, solving the task at level 1 is reduced to recognizing the concepts of the depth test operation.

Fig. 6. A concept map at level 1 in GBLm

5.2.2. Level 2

At level 2, students are requested to determine the color of the screen buffer on the basis of the displayed order of squares and the legend of color transparency given in the lower right corner of field 1. A student can select as an answer one of the offered colors that appear as a sub-menu when a specific bit is selected in the screen buffer. To determine the resulting value of the color in the case where squares have a certain level of transparency, field 3 provides a palette of colors whose transparency can be changed interactively, which helps in determining the value of the resulting color. The concepts to be recognized by students at this level are given in the picture of the lower concept map (Figure 7).

Fig. 7. A concept map at level 2 in GBLm

5.2.3. Help Window

For both levels in the GBLm there is a help window, which is closed in usual circumstances. If students need help during task solving, they can request it at any moment through this window. The window contains a definition needed for understanding the task and gives an answer at the moment when the student does not know what to do next. The content of the help window is actually the key thing, i.e. the concept a student should learn at each level. An example of the open window at level 1 can be seen in Figure 8.

Fig. 8. The help window at level 1 in GBLm

6. TESTING RESULTS

Testing of learning efficiency with the use of the GBLm was performed during the summer semester of 2009/10 for the teaching unit Hidden surface techniques on the course in Computer Graphics. The research included first year students at the College of Electrical Engineering and Computing Science Professional Studies in Belgrade. In the lectures of the said course, students were informed about the principle of the Z-buffer algorithm operation. The professor presented the teaching unit using a classic method, with a blackboard and without modern teaching resources such as a projector, PowerPoint presentations, etc. After the lectures, students also obtained the GBLm Z-buffer on the distance learning system Moodle, as additional material for exam preparation. After that, a pedagogical experiment was carried out; it consisted of monitoring students' activities in the period prior to taking the exam, and analyzing the results they accomplished on the final exam. The sample included 183 students, and 4 predefined groups were monitored: P̄Ī (the group which did not attend the lectures and did not use the GBLm), P̄I (the group which did not attend the lectures, but used the GBLm), PĪ (the group which attended the lectures but did not use the GBLm), and PI (the group which attended the lectures and used the GBLm). The characteristics of the results distribution on the final test are shown in Table 1.

Group | Correct answers | Incorrect answers
P̄Ī   | 11.5 %          | 13.1 %
P̄I   | 14.8 %          | 11.5 %
PĪ   | 18.0 %          | 4.9 %
PI   | 23.0 %          | 3.3 %

Table 1. Characteristics of the results distribution on the final test

The total accomplishment of students who used the GBLm, regardless of whether they attended the lectures or not (the second and fourth rows in Table 1, P̄I + PI = 37.8 %), was better than the accomplishment of students who did not use the GBLm (the first and third rows in Table 1, P̄Ī + PĪ = 29.5 %). On the other hand, the difference between the arithmetic means of incorrect answers given by the groups that used the GBLm and those that did not was very small - 3.2%. This means that students in the groups that did not use the GBLm gave incorrect answers to the same extent as students in the groups that used it, which shows that students of those groups were not interested in the given teaching unit regardless of the way the teaching material was presented. The total percentage of correct answers given by the group that used the GBLm and attended lectures (PI, 23.0 %), compared with all other groups, shows that this group recorded the best results, which still gives high significance to classic lectures supplemented with this kind of modern module (GBLm).

Fig. 9. Results on the final exam

The results on the final exam showed that the generation of students that had a chance to use the GBLm (generation 2009/10) acquired the teaching unit Z-buffer more successfully than the generation of students who did not use such teaching material (generation 2008/09). The proportion of results accomplished by these generations can be seen in Figure 9. On the basis of the result proportions, we can conclude that the generation 2009/10 increased the number of correct answers by almost 50%, by eliminating a large number of "I don't know" answers. The use of the GBLm, besides increasing the level of knowledge of the given teaching unit, also gave many students confidence in the acquired knowledge.

7. CONCLUSIONS AND FUTURE WORK

Documents related to education in many countries emphasize the significance of learning technical sciences through the technique of discovery, with the use of entertaining interactive multimedia applications. A result of such learning is that students establish interaction in order to recognize the main concepts of the learning topic and their mutual connections, within an entertaining environment which reminds them of playing a game. Students' capability of drawing conclusions on the basis of playing games was checked in this work by test questions on the final exam of the course in Computer Graphics for the teaching unit Z-buffer. The effort students had to make using the GBLm in the learning process showed that its usage helped students to better understand and acquire the given material with minimum time spent. Students' playing of games in the GBLm lasted only 3 minutes on average, and the result on the final test for this group of questions improved by almost 50%. This was the reason for which the authors of this work decided to make and apply GBLm in other teaching units of the course in Computer Graphics in the future as well. The presented results and the conclusions drawn in this work give great significance to the use of GBLm for fields that are abstract and hard to understand, especially bearing in mind that Net-generation students are not particularly interested in the classic way of learning.

REFERENCES

[1] Prensky, M., Digital game-based learning, McGraw-Hill, New York, ISBN 0-07-136344-0, 2001.
[2] Conati, C., Probabilistic assessment of user's emotions in educational games, Journal of Applied Artificial Intelligence, special issue on Merging cognition and affect in HCI, Vol.16(7-8), 2002, pp. 555-575.
[3] Foss, B.A. and Eikaas, T.I., Game play in engineering education: concept and experimental results, International Journal of Engineering Education, Vol.22, No.5, 2006, pp. 1043-1052.
[4] Sweller, J., Instructional design in technical areas, Camberwell, Australia: ACER Press, 1999.
[5] Mayer, R.E., Learning and instruction, Upper Saddle River, NJ: Merrill Prentice Hall, 2003.
[6] Gupta, K.C., Ramadoss, R., and Zhang, H., Concept mapping and concept modules for Web-based and CD-ROM-based RF and microwave education, IEEE Trans. Microwave Theory Tech., Vol.51, Mar. 2003, pp. 1306-1311.
[7] Tokdemir, G. and Cagiltay, N.E., A concept map approach for introduction to Computer Engineering course curriculum, Education Engineering (EDUCON), 2010 IEEE, 14-16 April 2010, Madrid, Spain, pp. 243-250.
[8] Liu Li, Haijun Mao, Licheng Xu, Application of Concept Maps-based Anchored Instruction in Programming Course, 2010 10th IEEE International Conference on Computer and Information Technology, pp. 2196-2200, 2010.
[9] Joy, K.I., The depth-buffer visible surface algorithm, On-Line Computer Graphics Notes, Visualization and Graphics Research Group, Department of Computer Science, University of California, 1996.
[10] Baker, S., Learning to Love your Z-buffer, http://www.sjbaker.org/steve/omniv/love_your_z_buffer.html
[11] Novak, J.D., Concept mapping: A useful tool for science education, Journal of Research in Science Teaching, 27, pp. 937-949, 1990.
[12] Novak, J.D. and Gowin, D.B., Learning how to learn, New York: Cambridge University Press, 1984.
[13] Chi, M.T.H., Glaser, R., and Farr, M.J., The nature of expertise, Hillsdale, NJ: Erlbaum, 1988.
[14] Mintzes, J.J., Wandersee, J.H., and Novak, J.D., Teaching science for understanding, San Diego, CA: Academic Press, 1997.
[15] Novak, J.D., Concept mapping: A useful tool for science education, Journal of Research in Science Teaching, 27, pp. 937-949, 1990.

THE GEOMETRICAL MODELS OF THE HUMAN FEMUR AND ITS USAGE IN APPLICATION FOR PREOPERATIVE PLANNING IN ORTHOPEDICS
Nikola Vitković¹, Miroslav Trajanović¹, Jelena Milovanović¹, Nikola Korunović¹, Stojanka Arsić², Dragana Ilić³
¹ University of Niš, Faculty of Mechanical Engineering in Niš
² University of Niš, Faculty of Medicine in Niš
³ Clinical Center Niš

Abstract: In this paper, two types of geometrical models of the human femur and the method for their creation are presented. The method was developed with respect to the morphological and anatomical properties of the human femur, and it enables the forming of a parametric polygonal mesh model and a descriptive XML model. The parametric mesh model is based on two parameters, acquired from medical imaging methods (CT, X-ray). The first parameter is determined as the distance between the most prominent points on the epicondyles (Df), and the second one as the distance between the line connecting the most prominent points of the epicondyles and the center of the femoral head (FHA). The purpose of the polygonal mesh model is to improve the preparation of orthopedic surgeries and make it easier. The aim of the XML model is to enable the exchange of mesh model data between applications in a network environment. The presented models are applied in an application for preoperative planning, developed by the authors.

1. INTRODUCTION

1.1 The goal of the research

The goal of the present research is the creation of a digital representation of the human femur, which will be employed in the Application for Preoperative Planning in Orthopedics (APPO). This representation will enable the exchange of model data between APPO applications, or between different types of medical imaging devices (X-ray, CT, MRI) and APPOs. For this purpose two models have been created. The first one is the Parametric polygonal Mesh Model (PMM), which describes the geometry and topology of the human femur, and the second one is the Orthopedics XML Model for oPerations (OXMP). The word XML refers to the well-known specification of rules for creating (describing) data structures, used for sharing data over a network (LAN, Internet) by various types of applications. In this case, XML is used as a tool for describing data about mesh geometry (a descriptive geometrical model), patient-relevant data, etc.

1.2 The previous work

The PMM and OXMP models have been defined during the process of APPO development. The parametric mesh model is able to adapt its geometry and topology to the real human femur by employing adequate parameter values, which can be acquired from medical images (CT, MRI, X-ray). In the earlier stages of the work, implementation of this parametric model was restricted to the APPO at one workstation. The transfer of mesh model data or parameter values between different workstations was possible only via some kind of media storage. For this reason the OXMP model was developed and applied in the APPO.

1. INTRODUCTION
1.1 The goal of the research
The goal of present research is creation of digital representation of the human femur, which will be employed in the Application for Preoperative Planning in Orthopedics (APPO). This representation will enable exchange of model data between APPO applications or between different types of medical imaging devices (X-ray, CT, MRI) and APPOs. For this purpose two models have been created. The first one is Parametric polygonal Mesh Model (PMM), which describes geometry and topology of the human femur, and the second one is Orthopedics XML Model for oPerations (OXMP). The XML word refers to the well known specification of the rules for creating (describing) data structures, used for sharing data over the network (LAN, Internet) by various types of applications. In this case XML is used as a tool for describing data about mesh geometry (descriptive geometrical model), patient-relevant data, etc

1.2 The previous work


The PMM and OXMP models have been defined during the process of APPO development. The parametric mesh model is able to adapt its geometry and topology to the real human femur by employing adequate parameter values which can be acquired from the medical images (CT, MRI, X-ray). In earlier stages of work, implementation of this parametric model was restricted to the APPO at one workstation. The transfer of mesh model data or parameter values between different workstations was possible only via some kind of

2. THE GEOMETRICAL MODELS OF THE HUMAN FEMUR


2.1 The Parametric polygonal mesh model (PMM)
The most important goal of this research is to find the best possible solution or method for the creation of a parametric femur model, in the sense of its application in medical imaging and preoperative planning. The main idea is to define basic dimensional parameters on the femur, which are used as the base for the calculation of relevant femur dimensions. The defined parameters are arguments of the parametric function for each of the defined dimensions of the femur geometrical model (1):

D_i = f_i(p_1, p_2, ..., p_n)    (1)

where f_i is the parametric function for the dimension D_i, and p_1, p_2, ..., p_n are the defined parameters. Each dimension of the femur model can be calculated by using the parametric functions f_i. The created set of parametric functions represents the parametric model of the femur, which can be used for the creation of a patient-specific femur model. The input values (parameters) for the functions can be measured from medical images, like CT or X-ray, and used for the calculations. The final outcome is an adequate mesh model which geometrically corresponds to the original human femur geometry with the smallest possible deviation. In the sense of medical imaging and preoperative planning, the assets can be: the choice of an adequate implant for the analogous femur fracture, positioning accuracy of implant(s) on the femur, determination of fixation pins position, etc.

The process of creating the parametric model of the human femur contains several steps:
1. Acquisition of Computer Tomography (CT) images (data) for a specific patient.
2. Preprocessing of the raw data (scans) and its transformation into STL format.
3. Importing the scanned model in STL format into a CAD application and its further preprocessing: a) cleaning the cloud of points, b) tessellation, c) healing the tessellated model.
4. Defining the Referential Geometrical Entities (RGE, described in [5]) and their correlation with femur anatomy.
5. Defining the adequate planes for each femur section and their correlations with the RGEs.
6. Creating spline curves and anatomic points in the defined planes (Figure 1).
7. Constructing vectors of point coordinates (Figure 2) for ten femurs (2):

X = [12.149 11.010 ... 11.567 12.2]
Y = [7.246 7.536 ... 6.564 7.648]    (2)

8. Creating parametric functions for the coordinates of the anatomic points (3), by applying multiple regression on the created vectors:

(X_fi, Y_fi) = (F_Xi(Df_i, FHA_i), F_Yi(Df_i, FHA_i))    (3)

The set of created parametric functions defines the human femur parametric model.

Figure 1. Curves and points on the distal femur surface model

Figure 2. Relevant anatomic points on the spline curve

The parameters involved in the parametric functions (3) are two dimensions (Figure 3), measured in the AP (Anterior-Posterior) plane of the femur:
a) the distance between the P_LEc and P_MEc points (the most prominent points on the distal femur epicondyles in the lateral and medial view), Df_i, where i denotes a specific femur;
b) the distance between P_CFH (the center of the femoral head) and the axis of revolution (defined by P_LEc and P_MEc) in the AP plane, FHA_i, where i denotes a specific femur.

Figure 3. Parameters definition and basic geometry of femur model

For each defined curve point, parametric functions are created for the X and Y coordinates. The planes' position and orientation in 3D space are defined in relation to the basic RGE geometry. When all required data are known, the next step is to create the parametric functions for the coordinates of the anatomic points. An example of the parametric functions for one of the points is presented in (4):

X_fi = -3.1702 + 0.444 Df_i - 0.055 FHA_i
Y_fi = 7.341 + 0.631 Df_i - 0.078 FHA_i    (4)

By applying known parameter values in the parametric functions, the position of the points in 3D space can be uniquely determined. When the position of the points is determined, it is easy to construct a mesh model of the human femur. The construction of the femur mesh model can be performed in many commercial CAD applications, but they are usually expensive and require a lot of specific design knowledge (usually completely unknown to doctors). For these reasons, the authors have developed the APPO (MedApp), which is able to mesh an input set of points (just one of the capabilities of the application) and to create valid meshes in a very fluent way, easily used by doctors.
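As an illustration of step 8 and of how the resulting functions are used, the following minimal Python sketch fits one coordinate of one anatomic point by multiple regression and then evaluates the functions from (4) for a new patient. Only the coefficients in (4) come from the paper; the training values, function names and measurement values are illustrative assumptions, not the APPO implementation.

# Sketch: multiple regression (step 8) and evaluation of the
# parametric functions. Coefficients in X_fi/Y_fi are taken from (4);
# all other values are illustrative.
import numpy as np

# Measured parameters (Df, FHA) for ten training femurs and one
# coordinate of one anatomic point on each (illustrative values):
Df = np.array([82.1, 79.4, 85.0, 80.2, 83.7, 81.9, 78.8, 84.2, 80.9, 82.5])
FHA = np.array([55.3, 54.0, 58.1, 55.9, 57.2, 56.0, 53.8, 57.9, 55.1, 56.4])
X_point = np.array([12.149, 11.010, 13.020, 11.804, 12.761,
                    12.233, 10.873, 12.902, 11.567, 12.200])

# Multiple regression X = a0 + a1*Df + a2*FHA, solved by least squares:
A = np.column_stack([np.ones_like(Df), Df, FHA])
coeffs, *_ = np.linalg.lstsq(A, X_point, rcond=None)

# Evaluating fitted parametric functions for a new patient, here with
# the coefficients of equation (4):
def X_fi(Df_i, FHA_i):
    return -3.1702 + 0.444 * Df_i - 0.055 * FHA_i

def Y_fi(Df_i, FHA_i):
    return 7.341 + 0.631 * Df_i - 0.078 * FHA_i

# Df and FHA would be measured from the patient's medical image:
print(X_fi(82.5, 57.0), Y_fi(82.5, 57.0))

Repeating this evaluation for every anatomic point in every section plane yields the full set of vertices of the patient-specific mesh model.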


2.2 Orthopedics XML model for operations (OXMP)


The PMM model represents a graphical 3D view of the human femur. A descriptive view is achieved by using the OXMP model; in other words, the OXMP model describes the data structure of the PMM model. The OXMP model was created in order to enable communication between APPO and another network resource. The resource can be another APPO or some medical imaging device (CT, MRI, X-ray). A simple scheme of the communication system is presented in Figure 4.

Figure 4. Scheme of the communication system

The basic structure of an OXMP file is defined according to the mesh data definition. The root element is the Mesh element. The first child is an optional Curves element with repeated Curve elements. A Curve element has an attribute id which identifies the individual curve. Each Curve element contains Point elements with the attributes Xcord, Ycord, and id (point identification).




The second child of the root element is the Parameter element, which is obligatory. The Parameter element contains two child elements, Drev and FHA, with adequate text nodes (parameter values). The third child of the root element is the optional General element; it contains Data elements which describe various data.

The basic structure of OXMP content is presented in Figure 5:

Mesh
  Curves
    Curve id="1"
      Point id="1" Xcord="50" Ycord="2"
      Point id="2" Xcord="1" Ycord="8"
  Parameter
    Drev
    FHA
  General
    Data id="..."

Figure 5. The structure of an OXMP file

The scheme presented in Figure 5 represents the current state of the OXMP structure. The Curves element is optional because the XML data can come from a computer connected to the X-ray device or from a standalone computer, in which case APPO only needs the parameter data for its calculations. If mesh data points are created in APPO, then it is possible to produce a complete OXMP file with curves, general data, parameters, etc. The Data elements can provide detailed information about: the patient, conditions of the scanning process, scanning devices, etc. This information can be useful for doctors, or for the APPO if some additional modifications need to be done to the mesh model. The Data element has an attribute id which may be: name, age, description, image, etc. The id attribute value clearly identifies and separates information, so it can be applied at the adequate location in APPO. The APPO can import an image from a network resource (medical imaging device, computer, etc.); in that case, a network path to the image must be added to the Data element.
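As a sketch of how such a file could be produced programmatically, the following Python fragment builds an OXMP-like document with the standard library. Element and attribute names follow Figure 5, while the parameter values, file name and the helper itself are illustrative assumptions rather than part of APPO.

# Sketch: building an OXMP-like XML document (structure per Figure 5).
import xml.etree.ElementTree as ET

mesh = ET.Element("Mesh")

curves = ET.SubElement(mesh, "Curves")          # optional element
curve = ET.SubElement(curves, "Curve", id="1")
ET.SubElement(curve, "Point", id="1", Xcord="50", Ycord="2")
ET.SubElement(curve, "Point", id="2", Xcord="1", Ycord="8")

param = ET.SubElement(mesh, "Parameter")        # obligatory element
ET.SubElement(param, "Drev").text = "82.5"      # illustrative values
ET.SubElement(param, "FHA").text = "57.0"

general = ET.SubElement(mesh, "General")        # optional element
ET.SubElement(general, "Data", id="name").text = "patient name"

# Serialize; the resulting textual file can be edited in any text processor.
ET.ElementTree(mesh).write("femur.oxmp", xml_declaration=True, encoding="utf-8")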

3. THE APPLICATION OF MODELS

3.1 The application of PMM

The PMM model is applied in the APPO application (MedApp) developed by the authors. The tool used in the APPO for mesh model creation is Microsoft DirectX. DirectX is a collection of application programming interfaces (APIs) for handling tasks related to multimedia, especially graphics programming and video, on Microsoft platforms (and on Linux with the employment of third-party applications like Wine). The process of mesh model creation is done by importing the points, creating a preview model, and normalizing the final mesh model (removing unnecessary points, adjusting some of the points, etc.). The resulting shaded mesh model of the human femur is presented in Figure 6.

Figure 6. Shaded mesh model of the human femur in the MedApp application

The application enables modification and transformation of the mesh model by manipulating the points' positions in space.
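The meshing step itself is not detailed in the paper. As a rough illustration only, the following sketch shows one common way to stitch two successive section curves with equal point counts into triangles, under the assumption that the points are consistently ordered; the actual APPO/DirectX implementation may differ.

# Illustrative sketch: stitching two successive section curves (closed
# rings of 3D points, consistently ordered) into a band of triangles.
def stitch_sections(lower, upper):
    """lower, upper: lists of (x, y, z) points of equal length."""
    n = len(lower)
    triangles = []
    for i in range(n):
        j = (i + 1) % n  # wrap around the closed section curve
        # Two triangles per quad between the section curves.
        triangles.append((lower[i], lower[j], upper[i]))
        triangles.append((upper[i], lower[j], upper[j]))
    return triangles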




3.2 The application of OXMP


The OXMP model is used as a tool for sharing model data between APPO applications and for creating the mesh model by importing XML data from network source(s) into the application. It should be pointed out that the PMM model can be modified by manually changing the coordinate values of points in an OXMP file. This is possible because OXMP is defined as a textual file, so it can be modified in any text processor, not just in APPO.

4. CONCLUSION
Based on the above observations and claims, it can be concluded that the presented geometrical models describe the real human femur in such a way that they can be used in applications for preoperative planning. The PMM of the femur defines (and describes) the geometry and topology of the real human femur. The method of generating the PMM and its application is not restricted to the femur only; it can be applied to any human bone. The success of PMM creation is based on the right selection of the morphological points. If the morphological points are wrongly selected, the resulting model will not follow the geometry of the physical model, and the final product will not be of good quality. The OXMP model describes the mesh model data in such a way that it can be used in a network environment. The OXMP model can be used to describe mesh data in various fields (for example in engineering, technology, science, etc.), not just in medicine. Exchange of specific model data in a development environment can be improved by applying some variant of the OXMP structure. The most important results of this study are the parametric equations for the position of points in 3D space. These equations can be used for the creation of a valid surface (solid) model of a patient-adapted femur. The required parameters can be measured from medical images (MRI, CT, X-ray) and incorporated into the parametric equations, which produce adequate coordinate values for the points of the PMM model. The application of PMM is primarily in APPO, but it can be used for various purposes in medicine (implant placement and positioning, learning processes, etc.) and technology (implant manufacturing, manufacturing of human bone presentation models, etc.).

ACKNOWLEDGEMENT

The paper presents a case that is a result of the application of multidisciplinary research from the domain of bioengineering in real medical practice. The research project (Virtual Human Osteoarticular System and its Application in Preclinical and Clinical Practice) is sponsored by the Ministry of Science and Technology of the Republic of Serbia - project id III 41017, for the period 2011-2014.

REFERENCES
[1] Felix Matthews, Peter Messmer, Vladislav Raikov, Guido A. Wanner, Augustinus L. Jacob, Pietro Regazzoni, Adrian Egli, "Patient-Specific Three-Dimensional Composite Bone Models for Teaching and Operation Planning", Journal of Digital Imaging, 22(5), Oct 2009, pp. 473-482.
[2] Olga Sourina, Alexei Sourin, Howe Tet Sen, "Virtual orthopedic surgery training on personal computer", International Journal of Information Technology, Vol. 6, No. 1, May 2000.
[3] Yajiong Xue, Huigang Liang, "Understanding PACS Development in Context: The Case of China", IEEE Transactions on Information Technology in Biomedicine, Vol. 11, No. 1, January 2007.
[4] F. Cao, H.K. Huang, X.Q. Zhou, "Medical image security in a HIPAA mandated PACS environment", Computerized Medical Imaging and Graphics, 27 (2003), pp. 185-196.
[5] M. Stojković, M. Trajanović, N. Vitković, J. Milovanović, S. Arsić, M. Mitković, "Referential Geometrical Entities for Reverse Modeling of Geometry of Femur", Vip Image 2009, Porto, Portugal.
[6] S. Hankemeier, T. Gosling, M. Richter, T. Hufner, C. Hochhausen, C. Krettek, "Computer-assisted analysis of lower limb geometry: higher intraobserver reliability compared to conventional method", Computer Aided Surgery, March 2006, 11(2), pp. 81-86.
[7] J. Milovanović, M. Trajanović, "Medical applications of rapid prototyping", Facta Universitatis, Series Mechanical Engineering, Vol. 5, No. 1, 2007, pp. 79-85.
[8] Marcus G. Pandy, Kotaro Sasaki, Seonfil Kim, "A Three-Dimensional Musculoskeletal Model of the Human Knee Joint. Part 1: Theoretical Construction", Computer Methods in Biomechanics and Biomedical Engineering, Volume 1, Issue 2, 1997.
[9] Anthony G. Au, Darren Palathinkal, Adrian B. Liggins, V. James Raso, Jason Carey, Robert G. Lambert, A. Amirfazli, "A NURBS-based technique for subject-specific construction of knee bone geometry", Computer Methods and Programs in Biomedicine, Volume 92, Issue 1, October 2008, pp. 20-34.
[10] Miroslav Trajanović, Nikola Vitković, Milan Trifunović, Stojanka Arsić, "New approach in generation of interpolated surfaces of physical objects", YUINFO 2009, Kopaonik, Serbia.
[11] Dean C. Barratt, Carolyn S.K. Chan, Philip J. Edwards, Graeme P. Penney, Mike Lomczykowski, Timothy J. Carter, David J. Hawkes, "Instantiation and registration of statistical shape models of the femur and pelvis using 3D ultrasound imaging", Medical Image Analysis, 12 (2008), pp. 358-374.

[12] Ofer Ron, Leo Joskowicz, Charles Milgrom, Ariel Simkin, "Computer-Based Periaxial Rotation Measurement for Aligning Fractured Femur Fragments from CT: A Feasibility Study", Computer Aided Surgery, 7 (2002), pp. 332-341.
[13] Beat Schmutz, Karen J. Reynolds, John P. Slavotinek, "Development and validation of a generic 3D model of the distal femur", Computer Methods in Biomechanics and Biomedical Engineering, Volume 9, Issue 5, 2006, pp. 305-312.
[14] Leslie J. Bisson, Jennifer Gurske-DePerio, "Axial and Sagittal Knee Geometry as a Risk Factor for Noncontact Anterior Cruciate Ligament Tear: A Case-Control Study", Arthroscopy: The Journal of Arthroscopic and Related Surgery, Volume 26, Issue 7, pp. 901-906.

LOCATING WEIGH-IN-MOTION CHECKPOINTS IN TRAFFIC NETWORKS USING GENETIC ALGORITHM


Milica Šelmić1, Nikola Bešinović1, Dušan Teodorović1
1 University of Belgrade, Faculty of Transport and Traffic Engineering

Abstract - The problem of optimizing locations for weigh-in-motion (WIM) checkpoint facilities is studied in this paper. WIM checkpoints equipped with a remote system can be used for various practical applications, like statistics, interval measuring and online traffic control, selection of overloaded vehicles, estimation of the current loading of road or bridge constructions, and so on. In order to check truck weight limits and obtain all the mentioned information, we need to find an answer to the question: where should WIM checkpoints be located? This paper develops a model to determine the locations of WIM checkpoints in a traffic network. The problem is solved using a Genetic Algorithm. The purpose of this research was to develop a decision support methodology to identify the optimal locations of a finite set of WIM checkpoints on a transport network, in order to minimize possible risk and to maximize the total flow captured.

1. INTRODUCTION

Several billion tons of hazardous materials are shipped yearly in the world. Authorities in some countries have established a system of checkpoints to detect leaks from trucks transporting hazardous materials such as waste oil, diesel fuel and calcium chloride. At these checkpoints, vehicles in violation are taken out of service. At truck weighing stations, traffic authorities check weight limits, driver hours and service regulations, and vehicle equipment safety, and collect road use taxes. For example, in some states and countries, simple portable scales enable a weighing station to be set up practically at any node along a highway. Weigh-in-motion (WIM) systems reduce the probability of occurrence of overloaded trucks on highways. The economic benefits of WIM checkpoints far exceed the costs of implementation and usage. WIM systems are finding increasingly widespread use as a valuable extension to conventional traffic counters and classifiers. They provide a whole spectrum of information on traffic flow, with detailed data for each individual vehicle, including:
- dynamic weights of all axles (or, if selected, even left/right half axles),
- gross vehicle weights,
- axle spacing,
- distance between vehicles,
- speed,
- vehicle classification according to various schemes,
- versatile statistical representations for all types of traffic parameters.

In order to check truck weight limits and obtain all the necessary information, we need to find an answer to the question: where should WIM checkpoints be located? WIM checkpoints belong to the class of flow-intercepting facilities (including billboards, gas stations, fast food outlets, convenience stores, ATMs, retail facilities, etc.). In the case of flow-intercepting facilities, clients (drivers) are serviced if they pass through a facility; every client (driver) that passes through such a facility is treated as a captured or intercepted client. The aim of this paper is to develop a model capable of determining the optimal locations of WIM checkpoints in a traffic network. The objective function to be maximized represents the combination of the largest possible risk reduction and the largest possible intercepted flow on the traffic network. The number of facilities is given in advance and treated as a constraint in the problem formulation. Location of WIM checkpoints is an NP-hard combinatorial problem. This fact motivated the authors to develop a heuristic technique based on a Genetic Algorithm. The proposed model is supported by numerical examples. The paper is organized as follows: the problem statement is given in the second section; the third section is devoted to the Genetic Algorithm approach to the WIM checkpoint location problem; numerical test results follow in the fourth section; finally, conclusions are provided in the last, fifth section.

2. PROBLEM STATEMENT

The problem of determining the locations of flow-capturing facilities in a transportation network has been treated by numerous authors [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11]. One group of flow-capturing problems comprises problems such as the location of billboards, gasoline stations, fast food outlets, convenience stores, ATMs, or retail facilities. When considering the location of these facilities, the goal is to capture the largest possible total flow. On the other hand, driving-under-influence

checkpoints, weighing stations, routine police check-ups, or hazardous material checkpoints are related to specific risks in the traffic network [12]. In each case, any client intercepted more than once counts only one time. In the case of WIM checkpoints that are risk-related, drivers should be intercepted as soon as possible after starting their trips through the traffic network. This idea reflects a preventive policy. Within this policy, WIM checkpoints should be located in the network in such a way as to maximize the largest possible risk reduction in the traffic network. On the other hand, the costs of locating WIM checkpoints are very high, so it is very important to optimize the locations of WIM checkpoints. According to this idea, it is important to locate WIM checkpoints at the points (tunnels, viaducts, bridges) with the most intensive traffic flows. In this paper we study location problems in the case of risk-related flow-intercepting facilities (the preventive flow-intercepting problem). We consider the WIM checkpoint location problem in the case of a non-oriented network G = (V, A). The total number of nodes in the network at which it is possible to locate WIM checkpoints equals n. We denote by R (R ⊆ V) and S (S ⊆ V) the set of origin and the set of destination nodes, respectively. We also denote by P the set of origin-destination path flows. The total number of WIM checkpoints that we want to deploy equals m. Let us consider a path p ∈ P and the set of nodes V_p on the path p. Proposing the idea of the preventive policy, Gendreau [8] introduced the quantity a_ip ≥ 0 that denotes the reduction of risk achieved on path p if the first facility encountered along that path is located at vertex v_i ∈ V_p. Obviously, a_jp < a_ip when node v_j follows node v_i. Let us introduce binary variables y_i and x_ip defined in the following way:
y_i = 1 if a facility is located at node v_i, and 0 otherwise;
x_ip = 1 if the flow on path p is first intercepted at v_i ∈ V_p, and 0 otherwise.

The mathematical formulation of the WIM checkpoint location problem reads:

Maximize  Σ_{p∈P} Σ_{v_i∈V_p} a_ip f_p x_ip    (1)

subject to:

Σ_{v_i∈V} y_i = m    (2)
x_ip ≤ y_i    (p ∈ P, v_i ∈ V_p)    (3)
Σ_{v_i∈V_p} x_ip ≤ 1    (p ∈ P)    (4)
y_i ∈ {0, 1}    (v_i ∈ V)    (5)
x_ip ∈ {0, 1}    (p ∈ P, v_i ∈ V_p)    (6)

where f_p denotes the traffic flow on path p. The objective function (1) reflects our desire to maximize the reduction of risk and to maximize the captured traffic flow. The total number of checkpoints that should be deployed equals m (constraint (2)). The flow on path p cannot be intercepted at node v_i if no WIM checkpoint is deployed there (constraint (3)). By constraint (4), not all paths necessarily contain a facility, and each path counts at most once towards the objective function. The binary nature of the variables y_i and x_ip is expressed by constraints (5) and (6). The number of WIM checkpoints is given in advance and treated as a constraint in the problem formulation.

A large number of papers deal with WIM problems, but most of them treat different types of sensors that can be used as a WIM facility [13, 14, 15, 16]. Most WIM checkpoints work on the principle of the piezoelectric WIM Lineas Quartz Sensor, Slow Speed WIM sensors, PAT Bending Plate WIM sensors or Single Load Cell WIM sensors. There is also a paper that focuses on improving the precision of weigh-in-motion under vibration from dynamic load [17]. An excellent overview of research on weigh-in-motion systems is given in [18]. To the best of the authors' knowledge, there is no paper in the relevant literature that finds optimal locations for WIM checkpoints in a transport network.

3. USING GA TO OPTIMIZE WIM LOCATIONS

The Genetic Algorithm (GA) is a heuristic search technique based on the evolutionary ideas of natural selection and genetics. The basic concept of GA is designed to simulate processes in natural systems necessary for evolution. GA was first developed by Holland [19]. GA represents an intelligent utilization of a random search within a defined search space to solve a problem. GA belongs to the class of algorithms that have the ability to find solutions close to optimal for complex combinatorial optimization problems. This method is probabilistic and performs a multidirectional search by maintaining a population of potential solutions. The new generation of solutions (individuals) is expected to be better than the parent population, because only the good-quality solutions (individuals) from the parent population are allowed to participate in future mating [20]. The authors suggest GA as a tool for solving the proposed model of WIM checkpoint locations. The GA was tailored for solving this particular problem by adding more complex operations in the processes of generating the initial population and selection. An intuitive way of representing solutions for the WIM checkpoint location problem is binary coding.
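The fitness used by the GA is the objective (1) itself. As an illustration, the following Python sketch evaluates a candidate deployment y by scanning each path from its origin and crediting only the first intercepted node; the data structures and names are illustrative assumptions, not the authors' code.

# Sketch: evaluating objective (1) for a candidate deployment.
# paths: list of paths, each a list of node indices in travel order.
# a: a[i][k] = risk reduction a_ip if path k is first intercepted at node i.
# f: f[k] = traffic flow on path k.
# y: y[i] = 1 if a WIM checkpoint is deployed at node i, else 0.

def objective(y, paths, a, f):
    total = 0.0
    for k, path in enumerate(paths):
        for i in path:                 # nodes in the order they are visited
            if y[i] == 1:              # first checkpoint encountered on path k
                total += a[i][k] * f[k]
                break                  # each path counts at most once, as in (4)
    return total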

One solution is represented as a string of cells, as shown in Figure 1. The number of nodes (n) in the network is adopted as the length of each string (individual, chromosome) in the population. The value in a cell indicates the existence of a WIM facility: a value of 1 means that a WIM facility is deployed at that location, and a value of 0 means there is no deployment. The sum of all cell values is equal to the number of WIM facilities to be deployed (m).
Figure 1. Solution representation

The initial generation is selected in such a way as to provide a large number of feasible individuals; any individual that is repeated several times in the generation is removed. During the selection process, the elitist strategy is combined with steady-state replacement. Using the elitist strategy, the best Nelit individuals are chosen and directly transferred to the next generation. The individuals for the other places are obtained by applying the genetic operators. In order to treat all individuals in a population more equally, the GA corrects the elitist individuals' objective function values by using the correction function from [21]. After correction, weaker individuals that consist of good genes have a higher probability of taking part in making the new generation. The selection of individuals is done by a tournament. The number of groups and the number of individuals in each group are predefined. The individual with the highest objective function value becomes the winner of its group. Each group winner represents a parent that will form an offspring for the next generation. Offspring are made by combining the genes extracted from the parents. The method of crossover used in the algorithm is the one-point crossover (Figure 2). Typically, the probability of crossover ranges from 0.6 to 0.95.

Figure 2. Random one-point crossover operator

To introduce and preserve diversity in the search population, a mutation operator is applied to the generated offspring. It operates independently on each individual by probabilistically perturbing a single random gene. This type of mutation operator is called one-point mutation (Figure 3).

Figure 3. Mutation operator

The algorithm uses a predefined number of generations as the stopping criterion. Before running the algorithm, the user defines the size of the population, the number of elitist individuals, and the number of generations.
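A condensed sketch of the tailored GA loop described above is given below, building on the objective function from the previous section. The repair step that keeps exactly m ones per chromosome, the always-applied crossover (the paper uses pcros = 0.85), and the tournament drawn from the whole population are illustrative simplifications, not the authors' implementation.

# Sketch: GA for the WIM checkpoint location problem.
import random

def random_individual(n, m):
    """Binary string of length n with exactly m ones (a feasible deployment)."""
    y = [0] * n
    for i in random.sample(range(n), m):
        y[i] = 1
    return y

def tournament(population, fitness, group_size=3):
    """The winner of a randomly drawn group becomes a parent."""
    group = random.sample(population, group_size)
    return max(group, key=fitness)

def one_point_crossover(p1, p2):
    cut = random.randrange(1, len(p1))
    return p1[:cut] + p2[cut:]

def repair(y, m):
    """Flip random genes until the chromosome again contains exactly m ones."""
    ones = [i for i, g in enumerate(y) if g == 1]
    zeros = [i for i, g in enumerate(y) if g == 0]
    while len(ones) > m:
        y[ones.pop(random.randrange(len(ones)))] = 0
    while len(ones) < m:
        i = zeros.pop(random.randrange(len(zeros)))
        y[i] = 1
        ones.append(i)
    return y

def mutate(y, m, p_mut):
    """One-point mutation: swap a randomly chosen 1 with a randomly chosen 0."""
    if random.random() < p_mut:
        i = random.choice([k for k, g in enumerate(y) if g == 1])
        j = random.choice([k for k, g in enumerate(y) if g == 0])
        y[i], y[j] = 0, 1
    return y

def run_ga(n, m, fitness, pop_size=50, n_elite=30, generations=100, p_mut=0.5):
    population = [random_individual(n, m) for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        next_gen = population[:n_elite]              # elitist strategy
        while len(next_gen) < pop_size:              # steady-state replacement
            p1 = tournament(population, fitness)
            p2 = tournament(population, fitness)
            child = repair(one_point_crossover(p1, p2), m)
            next_gen.append(mutate(child, m, p_mut))
        population = next_gen
    return max(population, key=fitness)

Here fitness would be the objective function sketched in the previous section, with the path data, risk reductions and flows bound in.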

4. NUMERICAL EXAMPLES

The proposed model is tested on instances that are randomly generated, due to the lack of real data. All these matrices are available upon request. We tested our model and the GA on networks with 10, 20 and 30 nodes. The number of WIM checkpoints that should be located varies from 3 to 13. The quantities a_ip are generated in a random way; during this procedure, we generated the quantities a_jp and a_ip in such a way that a_jp < a_ip when node v_j follows node v_i. In the problem studied in this paper, the input parameters are: the number of nodes, the number of WIM checkpoints, the origin-destination matrix, and the shortest paths between nodes in the transport network. Tests were performed on an Intel Atom 1.6 GHz processor with 1 GB of RAM. Results obtained by the GA were compared with optimal solutions. To find the optimal solutions, the IBM ILOG OPL optimization programming language was used. The number of individuals in the initial population is 50. The number of generations is 50, 100 or 300, depending on the problem dimensions. During the selection process we combine the elitist strategy and steady-state replacement: using the elitist strategy, we select the best 30 individuals and directly transfer them to the next generation; the other 20 individuals are obtained by applying the genetic operators. The selection of individuals is done by a tournament; we selected the number of groups (Ngr = 20) and the number of individuals in the groups (Ni = 3). The winners of the groups represent parents that will form individuals in the next generation. In the algorithm we used one-point crossover and one-point mutation, with crossover probability pcros = 0.85 and mutation coefficient pmut = 0.5. Table 1 shows the results of applying the GA to the proposed model and their comparison with optimal results.

The first column of Table 1 contains the total number of nodes, while the next two columns present the total number of flows on the network and the number of WIM checkpoints to be located. The fourth column contains the best known results obtained by the GA. The next column presents the optimal results reached by the optimization programming language (objective function value, OPL OPT). In the sixth column we report the percentage of captured flow. The last column contains the number of generations the GA needs to obtain the best result.

Table 1. GA results
No. of nodes   No. of flows   No. of WIM checkpoints   GA BEST   OPL OPT   Cap. flow (%)   No. of gener.
10             90             3                        37417     37417     81.2            <10
10             90             4                        42466     42466     91.7            <10
10             90             5                        45293     45293     97.2            <10
10             90             6                        46739     46739     99.4            <15
20             380            3                        138484    138484    79.0            <40
20             380            5                        159212    159212    89.9            <70
20             380            8                        177008    177008    98.1            <90
20             380            10                       181634    181634    100             <70
30             870            3                        240258    240258    58.8            <30
30             870            6                        327098    327098    78.2            60
30             870            10                       389199    389199    91.7            <110
30             870            13                       414965    414965    96.7            <260

Figure 4. Maximization of objective function

From Table 1 it can be seen that the GA was able to obtain the optimal solution in all tested networks. Table 1 and Figure 4 show that all solutions are obtained using a relatively small number of generations. Also, from Table 1 it can be noticed that the proposed model enables a large percentage of intercepted flow together with maximization of the possible risk reduction: only a few WIM checkpoints are enough to capture almost the total flow in a transport network. Table 2 presents the real value and the percentage of captured flow with respect to the number of located WIM checkpoints (this particular example consists of 30 nodes, and the number of WIM checkpoints varies from 1 to 13). From Figure 5 and Table 2 it can be seen that even 3 optimally placed WIM checkpoints are enough to capture almost 60% of the total flow in the transport network. With 8 properly located WIM checkpoints, more than 85% of the total flow can be intercepted. Every pair (number of WIM checkpoints, percentage of captured flow) corresponds to a specific decision. In this manner, a large number of different potential location choices is generated for the decision-maker.

Table 2. Captured flow
No. of nodes   No. of WIM checkpoints   GA BEST   Captured flow   Captured flow (%)
30             1                        110640    120757          27.2
30             2                        199880    217762          49.1
30             3                        240258    259616          58.5
30             4                        277193    297353          67.0
30             5                        306909    327380          73.7
30             6                        327098    347414          78.2
30             7                        345045    364028          82.0
30             8                        361709    380575          85.7
30             9                        377898    396635          89.3
30             10                       389199    407227          91.7
30             11                       399320    416148          93.7
30             12                       407998    423104          95.3
30             13                       414965    429593          96.7

Figure 5. Percentage of captured flow

Figure 6 and Table 3 show the number of times a potential WIM checkpoint location is present in the optimal solutions.

Figure 6. Frequency plot showing the number of times a WIM checkpoint is placed at each location

For the network of 30 nodes, for example, out of 13 sets of WIM checkpoint deployments, locations 23 and 8 are present in 13 and 12 sets, respectively. This is an indication that these locations are critical for flow capturing, and that WIM checkpoints deployed at these locations need to be regularly maintained.

Table 3. The number of times a WIM checkpoint is placed at each location
No. of nodes   No. of WIM facilities   Locations
30             1                       23
30             2                       8, 23
30             3                       3, 8, 23
30             4                       6, 8, 23, 26
30             5                       6, 8, 21, 23, 26
30             6                       6, 8, 21, 23, 26, 27
30             7                       6, 8, 19, 21, 23, 26, 27
30             8                       6, 8, 10, 18, 19, 23, 26, 27
30             9                       2, 3, 8, 10, 16, 18, 19, 20, 23
30             10                      2, 4, 8, 10, 16, 18, 22, 23, 29, 30
30             11                      2, 3, 4, 6, 8, 10, 12, 18, 22, 23, 27
30             12                      2, 3, 4, 6, 8, 10, 18, 19, 22, 23, 27, 29
30             13                      2, 3, 4, 6, 7, 8, 10, 18, 19, 22, 23, 27, 29


5. CONCLUSION
Trucks loaded over their allowed weight cause great damage to the road and to traffic safety. To solve these problems, efforts must be made both in developing more advanced WIM systems and in optimizing the locations of WIM checkpoints. This paper studied the WIM checkpoint location problem. We analyzed a preventive policy in which risk-related drivers should be intercepted as soon as possible after starting their trips through the traffic network. Within the analyzed policy, WIM checkpoints should be located in the network in such a way as to maximize the largest possible risk decrease in the traffic network and the total flow captured. Since the presented problem is NP-hard, we use a Genetic Algorithm as a tool for solving it. In the absence of real data, we tested our model on randomly generated instances. The results show that with optimal placement of WIM checkpoints a high percentage of traffic flow can be captured. With carefully placed WIM checkpoints that are well maintained, all necessary information can be derived.

REFERENCES
[1] Hodgson, M.J., ''The location of public facilities intermediate to the journey to work'', European Journal of Operational Research, Vol. 6, pp 199-204, 1981.
[2] Hodgson, M.J., ''A flow-capturing location-allocation model'', Geographical Analysis, Vol. 22, pp 270-279, 1990.
[3] Berman, O., Larson, R.C. and N. Fouska, ''Optimal location of discretionary service facilities'', Transportation Science, Vol. 26, pp 201-211, 1992.
[4] Berman, O., Hodgson, M.J. and D. Krass, ''Flow-interception problems'', in: Facility location. A survey of applications and methods, Springer-Verlag, New York, pp 389-426, 1995.
[5] Mirchandani, P.B., Rebello, R. and A. Agnetis, ''The inspection station location problem in hazardous materials transportation: Some heuristics and bounds'', INFOR, Vol. 33, pp 100-113, 1995.
[6] Hodgson, M.J., Rosing, K.E. and J. Zhang, ''Locating vehicle inspection stations to protect a transportation network'', Geographical Analysis, Vol. 28, pp 299-314, 1996.
[7] Berman, O. and D. Krass, ''Flow intercepting spatial interaction model: a new approach to optimal location of competitive facilities'', Location Science, Vol. 6, pp 41-65, 1998.
[8] Gendreau, M., Laporte, G. and I. Parent, ''Heuristics for the Location of Inspection Stations on a Network'', Naval Research Logistics, Vol. 47, pp 287-304, 2000.
[9] Wu, T.S. and J.N. Lin, ''Solving the competitive discretionary service facility location problem'', European Journal of Operational Research, Vol. 144, pp 366-378, 2003.
[10] Kuby, M. and S. Lim, ''The flow-refueling location problem for alternative-fuel vehicles'', Socio-Economic Planning Sciences, Vol. 39, pp 125-145, 2005.
[11] Jun, Y., Min, Z., He, B. and C. Yang, ''Bi-level programming model and hybrid genetic algorithm for flow interception problem with customer choice'', Computers & Mathematics with Applications, Vol. 57, pp 1985-1994, 2008.
[12] Šelmić, M., Teodorović, D. and K. Vukadinović, ''Locating inspection facilities in traffic networks: an artificial intelligence approach'', Transportation Planning and Technology, Vol. 33, pp 481-493, 2010.
[13] Suopajarvi, P., Pennala, R., Heikkinen, M., Karioja, P., Lyori, V., Myllayla, R., Nissila, S., Kopola, H. and H. Suni, ''Fiber optic sensors for traffic monitoring applications'', Proceedings of SPIE 3325, pp 222-229, 1998.
[14] Wierzba, P. and B.B. Kosmowski, ''Polarimetric sensors for weigh-in-motion of road vehicles'', Opto-Electronics Review, Vol. 8, pp 181-187, 2000.
[15] Udd, E., Kunzler, M., Layor, M., Schulz, W., Kreger, S., Corones, J., McMahon, R., Soltesz, S. and R. Edgar, ''Fiber Grating Systems for Traffic Monitoring'', Proceedings of SPIE 4337, pp 510-516, 2001.
[16] Kunzel, M., Edgar, R., Udd, E., Taylor, T., Schulz, W., Kunzler, W. and S. Soltesz, ''Fiber Grating Traffic Monitoring Systems'', Proceedings of SPIE 4696, pp 238-243, 2002.

[17] Lan Ying, G., Bo, L. and D. Anguo, ''Research on Data Processing for Weight-in-motion of Vehicles'', International Conference on Measuring Technology and Mechatronics Automation, pp 454-456, 2009.
[18] Wang, J. and M. Wu, ''An Overview of Research on Weigh-in-motion Systems'', Proceedings of the 5th World Congress on Intelligent Control and Automation, P.R. China, pp 5241-5244, 2004.
[19] Holland, J.H., ''Adaptation in Natural and Artificial Systems: An Introductory Analysis with Applications to Biology, Control, and Artificial Intelligence'', Ann Arbor, MI: University of Michigan Press, 1975.

[20] Edara, P., Guo, J., Smith, B.L. and C. McGhee, ''Optimal placement of point detectors on Virginia's freeways: case studies of northern Virginia and Richmond'', Final Contract Report VTRC 08-CR3, Virginia Transportation Research Council, Richmond, VA, 2008.
[21] Kratica, J., ''Parallelization of the genetic algorithms for solving some NP-complete problems'', PhD Thesis, Belgrade, 2000.

COMPARISON POSSIBILITIES OF K-MEANS AND HAC CLUSTERING IN THE ANALYSIS OF USERS' PATTERNS OF BEHAVIOR

Marija Blagojević1
1 Technical Faculty Čačak

Abstract - The paper presents a comparison of k-means and HAC clustering. The applicability of these clustering methods in the analysis of user behavior patterns is determined, and the differences between them are observed.

1. INTRODUCTION

The World Wide Web (WWW) is a vast resource of multiple types of information in varied formats. The need for discovering and analyzing new behavior patterns of users has increased since the expansion of the web. Analysis of users' patterns of behavior can be used for designing new models that can be of high importance for understanding users' behavior in a virtual environment. According to [1], clustering can be used for determining users' patterns of behavior in the e-learning domain as well as in e-commerce. That paper proposes a new algorithm based on sequence alignment to measure similarities between web sessions, where sessions are chronologically ordered sequences of page accesses. Data mining techniques are applied to log files for the purpose of obtaining recommendations for efficiency improvement within electronic courses [2]. The paper [2] proposes a platform-dependent framework for recording, processing and analyzing data from Learning Management Systems (LMS). Data mining is the analysis of observational data sets with the purpose of detecting previously undetected links and summarizing the data in a sophisticated manner, understandable and useful to the data owner [3]. The relations obtained by the data mining process are defined as models or patterns. K-means [4] is one of the simplest unsupervised learning algorithms that solve the well-known clustering problem. The procedure follows a simple and easy way to classify a given data set through a certain number of clusters (assume k clusters) fixed a priori; the main idea is to define k centroids, one for each cluster. Hierarchical clustering algorithms are either top-down or bottom-up. Bottom-up algorithms treat each document as a singleton cluster at the outset and then successively merge (or agglomerate) pairs of clusters until all clusters have been merged into a single cluster that contains all documents; bottom-up hierarchical clustering is therefore called hierarchical agglomerative clustering, or HAC [5].

2. PURPOSE OF THE STUDY

A learning management system (LMS) is a software application for the administration, documentation, tracking, and reporting of training programs, classroom and online events, e-learning programs, and training content [6]. However, an LMS allows neither detailed monitoring of the users' activities nor the evaluation of the course content structure and its efficiency in the teaching process. In order to consider the complete teaching process that includes the usage of electronic courses within a specific LMS, a thorough analysis is a must. Bearing that in mind, along with other techniques that are used in the evaluation of electronic courses, a comparison of two clustering types has been conducted. The comparison was conducted in order to determine differences in the application of the mentioned techniques during the detection of users' patterns of behavior.

Tasks of the study:
- Data pre-processing: clean and prepare the web server log file
- Application of k-means and HAC clustering on the pre-processed data
- Analysis of the obtained results and evaluation of users' patterns of behavior

Purpose of the study:
- Determination of the possibilities of applying k-means and HAC clustering in the analysis of users' behavior patterns, and their comparison

3. METHODS

Clustering applied to log files is used for the analysis of users' behavior patterns.

3.1 Participants

Data is collected on a sample consisting of 1789 bachelor and master students at the Technical Faculty in Čačak, Serbia. These students are users of the Moodle learning management system. The system with the courses is available for overview at the address given in [7].

3.2 Tool

The tool used for the application of clustering is Tanagra 1.4 [8]. This tool provides a vast number of analysis possibilities in the data mining research domain. The module that relates to clustering is used for the needs of this specific research.

3.3 Procedure

Before the beginning of the clustering process, pre-processing of the log files has to be conducted. Raw log files contain data that has to be normalized. After pre-processing, the log

files contain data arranged in the following columns: year, month, day, hour, minute, module, activity, and course.
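To indicate how such pre-processed rows could feed both clustering methods outside of Tanagra, here is a minimal sketch using scikit-learn and SciPy; the numeric column encoding, the sample values, and the parameter settings (four clusters, ten iterations, five attempts, as in the study) are illustrative assumptions.

# Sketch: applying k-means and HAC to pre-processed log rows.
import numpy as np
from sklearn.cluster import KMeans
from scipy.cluster.hierarchy import linkage, dendrogram

# Each row: year, month, day, hour, minute, module, activity, course,
# already encoded numerically during pre-processing (illustrative data).
X = np.array([
    [2010, 2, 3, 10, 15, 1, 4, 2],
    [2010, 3, 7, 11, 30, 2, 1, 2],
    [2010, 4, 1, 9, 5, 1, 2, 5],
    [2010, 5, 12, 14, 45, 3, 4, 5],
    [2010, 10, 20, 16, 0, 2, 3, 7],
], dtype=float)

# k-means: the number of clusters must be stated in advance.
km = KMeans(n_clusters=4, max_iter=10, n_init=5, random_state=0).fit(X)
print("k-means labels:", km.labels_)
print("k-means centroids:\n", km.cluster_centers_)

# HAC: no cluster count is assumed; the dendrogram is inspected instead.
Z = linkage(X, method="ward")
dendrogram(Z)  # drawing the dendrogram requires matplotlib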

Figure 1. The illustration of the log file after pre-processing

After the data import, the entry and target parameters are determined; their choice is defined by the selection of a specific clustering method.

Figure 2: The selection of the entry and target parameters

After the selection of the entry and target parameters, a clustering method is chosen. In this specific study, the k-means and HAC clustering methods are compared. The selection of the above-mentioned clustering types is shown in Figure 3.

Figure 3: Illustration of clustering types within the Tanagra program

After the selection of the clustering type, the selected clustering is conducted by selecting the option Execute. Apart from that, Group Characterization is also done in order to present the differences between groups.

4. RESULTS

4.1 Results obtained with the application of k-means clustering

Figure 4: Illustration of k-means clustering parameters

Figure 4 gives data about the parameters of k-means clustering. These include the cluster number, the maximal number of iterations, distance normalization, average computation, and the seed of the random generator. The cluster number is 4, the maximal number of iterations is 10, and the number of attempts is 5.

Figure 5: Cluster size and WSS

Figure 5 presents the cluster sizes as well as the vector containing the within-cluster sum of squares (WSS) for each cluster. According to Figure 5, the smallest cluster is cluster 1, and the biggest one is cluster 4.

Figure 6: Illustration of cluster centroids

The illustration of the cluster centroids in relation to the attribute course is presented in Figure 6.

Figure 7: Illustration of results obtained with k-means clustering

The results obtained with the application of k-means clustering are given in Figure 7. A description of the following months is given: February, March, April, May and October. For each month, the percentage of present instances is given, as well as the standard deviation for the class, which is a continuous attribute.

4.2 Results obtained with the application of HAC clustering

Figure 8 presents the obtained clusters and their sizes. According to Figure 8, the smallest cluster is cluster 2, and the biggest is cluster 1.

Figure 8: Initial results of HAC clustering

Figure 9: Illustration of cluster centroids

Figure 9 presents the cluster centroids in relation to the attribute course. Figure 10 presents the HAC dendrogram obtained by applying HAC clustering to the baseline data. The set of embedded clusters is organized with the help of a tree. Based on the figure and the data obtained in the Tanagra program, the clusters that resemble each other the most are the ones relating to months 3 and 5, followed by (3, 5) and 2, then (3, 5, 2) and 10, and finally (3, 5, 2, 10) and 4.


Figure 10: Illustration of the dendrogram obtained by the HAC clustering method

5. DISCUSSION

Having in mind the figures given in the Results chapter, the following can be concluded. Based on these results, both k-means and HAC clustering can be used to analyse user behaviour patterns. The conclusions relate to the determination of differences between k-means and HAC clustering in the analysis of users' patterns of behaviour. As can be seen in Figures 4 and 8, the number of clusters is the same. The only difference is that in the k-means clustering method the number has to be stated in advance, whilst in HAC clustering it is not recommended to give any assumptions about the number of clusters. According to Figures 6 and 9, the centroids defined for the same attribute differ between k-means and HAC clustering. This indicates a different

algorithm for choosing centroids in these two clustering methods. In both methods, the determination of centroids is conducted within a repeat/until loop; however, in HAC clustering the resemblance matrix is updated, while in k-means clustering the centroid of each cluster is recomputed in every step. There is one more difference between these two clustering types, and it relates to dendrogram formation. A dendrogram can be formed in HAC clustering, as presented in Figure 10. The graphic illustration given in Figure 10 enables better perception of the clusters, organized with the assistance of a tree. Unlike the dendrogram within HAC clustering, data about the k-means clusters is presented in Figure 7 in the form of a table. Both of the mentioned methods are found to be very useful in the analysis of users' profiles, where log file records are grouped into clusters. When analysing users' profiles, it is essential to choose the clustering method according to the specific research demands, the way results are obtained, and the selection of the number of clusters. A following study will relate to the analysis of other clustering types in the analysis of users' behaviour patterns.

REFERENCES
[1] Wang, W. and O. Zaiane, ''Clustering Web Sessions by Sequence Alignment'', retrieved from: http://webdocs.cs.ualberta.ca/~zaiane/postscript/dexa2002.pdf, 2002.
[2] Kazanidis, I., Valsamidis, S., Theodosiu, T., ''Proposed framework for data mining in e-learning: The case of open e-class'', retrieved from: http://utopia.duth.gr/~skontog/papers/iadis2009.pdf, 2009.

[3] Hand, D., Mannila, H., Smyth, P., ''Principles of Data Mining'', e-book, retrieved from: http://books.google.com/books?hl=en&lr=&id=SdZ-bhVhZGYC&oi=fnd&pg=PR17
[4] MacQueen, J., ''Some methods for classification and analysis of multivariate observations'', in: Le Cam, L.M. and Neyman, J. (eds.), Proceedings of the Fifth Berkeley Symposium on Mathematical Statistics and Probability, Volume 1, pp 281-297, Berkeley, California: University of California Press, 1967.
[5] HAC clustering, retrieved from: http://nlp.stanford.edu/IR-book/html/htmledition/hierarchical-agglomerative-clustering-1.html, 2009.
[6] Ellis, R., ''A Field Guide to Learning Management Systems'', ASTD Learning Circuits, retrieved from: http://www.astd.org/NR/rdonlyres/12ECDB99-3B91-403E-9B15-7E597444645D/23395/LMS_fieldguide_20091.pdf, 2009.
[7] LMS Moodle on the Technical Faculty: http://itlab.tfc.kg.ac.rs/moodle/

[8] Software Tanagra, retrieved from: http://eric.univ-lyon2.fr/~ricco/tanagra/en/tanagra.html

KONTROLA AUTONOMNOG MOBILNOG ROBOTA IZ WEB OKRUŽENJA SA VIDEO STREAMOM

WEB BASED REMOTE CONTROL OF MOBILE ROBOT WITH VIDEO STREAM FEEDBACK
István Matijevics
University of Szeged, Department of Informatics
e-mail: mistvan@inf.u-szeged.hu

Sadržaj - Web technologies are changing the process of education in robotics. With the help of web-based remote-control laboratories, the user interacts over the Internet with the motion of mobile robots. Motion control of mobile robots is today a very attractive research area, both from the aspect of theoretical research and from the aspect of practical application. This paper presents the remote control of wheeled mobile robots moving in an unknown environment with obstacles. The controlled object is the Scribbler mobile robot from Parallax. The mobile robot has two driving wheels, and control is achieved by changing the angular velocities of the driving wheels. When the mobile robot moves towards the target in an unknown environment and its sensors detect obstacles, a control strategy must be put in place to avoid contact between the mobile robot and the obstacles.

Abstract - There has been a tremendous increase of interest in mobile robots. Today we can build small mobile robots with numerous actuators and sensors that are controlled by inexpensive, small, and light embedded computer systems carried on-board the robot. The simplest mobile robots are wheeled robots. The goal of this article is to get students interested in and excited about the fields of engineering, mechatronics, and software development as they design, construct, and program an autonomous robot.

Keywords - distant monitoring, remote control, embedded system, Scribbler

I. INTRODUCTION

One of the main advantages of the Scribbler development environment is the ability to monitor and remotely control any of the Scribblers from a host PC. For that purpose, a remote control program has been implemented. The host system connects to the mobile robot through the Bluetooth module. A number of interfaces are available on most embedded systems: digital inputs, digital outputs, and analog inputs. DC motors are usually driven by using a digital output line and a pulsing technique called pulse width modulation (PWM). The differential drive design has two motors mounted in fixed positions on the left and right side of the robot, independently driving one wheel each.

II. SCRIBBLER

The Scribbler robot is a great tool with which to get started in robotics. The eb500 module makes it possible for the Scribbler's BASIC Stamp 2 microcontroller brain to communicate wirelessly with Microsoft Robotics Studio running on a nearby PC. The BASIC Stamp microcontroller runs a small PBASIC program that controls the Scribbler's servos and optionally monitors sensors while it communicates wirelessly with Microsoft Robotics Studio.

Figure 1. Assembled Scribbler

III. SENSORS AND ACTUATORS

There is a vast number of different sensors used in robotics, applying different measurement techniques and using different interfaces to a controller. It is very important to find the right sensor for a particular application. In this case, an infrared sensor was used.

Figure 2. Sensors and actuators

Binary sensors are the simplest type of sensors. They return only a single bit of information, either 0 or 1. A typical example is a tactile sensor on a robot, for example using a micro switch. Interfacing to a microcontroller can be achieved very easily by using a digital input of either the controller or a latch. DC electric motors are arguably the most commonly used method of locomotion in mobile robots. DC motors are clean and quiet, and can produce sufficient power for a variety of tasks. Standard DC motors revolve freely, unlike, for example, stepper motors; motor control therefore requires a feedback mechanism using shaft encoders. The first step when building robot hardware is to select the appropriate motor system. The best choice is an encapsulated motor combination comprising a DC motor, a gearbox and an optical encoder. A sketch of how a differential drive turns desired motion into the two wheel speeds is given below.
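As a brief illustration of the differential drive and PWM technique mentioned above, the following sketch converts a desired forward speed and turn rate into left/right wheel angular velocities and then into PWM duty cycles. The wheel geometry, the linear speed-to-duty mapping, and the function names are illustrative assumptions, not part of the Scribbler firmware.

# Sketch: differential-drive kinematics and a naive PWM mapping.
# Assumed geometry (illustrative values, not the Scribbler's actual ones):
WHEEL_RADIUS = 0.03     # m
AXLE_LENGTH = 0.12      # m (distance between the two driving wheels)
MAX_WHEEL_SPEED = 20.0  # rad/s at 100% PWM duty cycle

def wheel_speeds(v, omega):
    """Map robot linear speed v (m/s) and turn rate omega (rad/s)
    to left/right wheel angular velocities (rad/s)."""
    w_left = (v - omega * AXLE_LENGTH / 2.0) / WHEEL_RADIUS
    w_right = (v + omega * AXLE_LENGTH / 2.0) / WHEEL_RADIUS
    return w_left, w_right

def duty_cycle(w):
    """Naive linear mapping of wheel speed to a PWM duty cycle in [-100, 100]."""
    return max(-100.0, min(100.0, 100.0 * w / MAX_WHEEL_SPEED))

w_l, w_r = wheel_speeds(v=0.2, omega=1.0)  # drive forward while turning left
print(f"PWM left: {duty_cycle(w_l):.0f}%, PWM right: {duty_cycle(w_r):.0f}%")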

IV. WIRELESS COMMUNICATION

Being only an internal code name at first, Bluetooth later became an official trademark. The name Bluetooth derives from the Viking king Harald Blåtand, who united Norway and Denmark in the 10th century and brought Christianity to Scandinavia. The Viking word Blåtand translates to Blue Tooth and refers to Harald's dark complexion rather than the folklore story of his affection for blueberries. Alluding to his unification of two countries, the SIG founders believed Bluetooth to be an appropriate name for the unification of the companies in that project. Bluetooth is an industrial specification for low-cost, short-range wireless networks. In 1994, Ericsson Mobile Communications started a study to find a low-power and low-cost solution to replace cable connections between mobile as well as fixed devices, such as a laptop and a printer. The solution had to meet special criteria of cost, performance, size and power consumption, so as also to fit in small battery-powered portable devices such as cell phones. Besides this, the transmission of both data and speech had to be supported. To make Bluetooth a worldwide standard, Ericsson Mobile Communications, IBM, Intel, Nokia and Toshiba founded the Bluetooth Special Interest Group (SIG) in September 1998. The SIG developed the Bluetooth wireless technology and standard to be interoperable between devices of different producers. The group has grown and today counts over 2000 member companies, which are allowed to use the open platform technology. The Bluetooth core specification, covering the physical layer and the data link layer, was adopted by IEEE under the name WPAN (Wireless Personal Area Network) and can be found in IEEE 802.15.

Bluetooth is a technology intended for connecting devices found within a relatively small distance of each other (initially the technology was conceived to function within a perimeter of 10 m), and it was created to replace cables. It uses the so-called ISM frequency range (2.4 GHz, intended for industry, science and medicine).

Figure 3. The eb500 Bluetooth module

V. EXPLORATION AND NAVIGATION

In the landmark-based navigation system, the robot operates in two modes: exploration and navigation. In exploration mode, the robot explores the environment using a depth-first search among the unvisited landmarks. A visibility edge between two landmarks can be traversed by visual servoing, using the real-time recognition algorithm. At every newly visited landmark, the robot scans for all landmarks visible from this position, records their relative angles and estimates of their distances, and starts an observation history for this landmark. The landmarks all have unique markers, which are used as node labels in the graph. As mentioned above, in the process of exploring, the robot replaces distance estimates with more accurate odometry measurements. Also, as landmarks are revisited during the exploration phase, the observation histories are updated and the probability estimates are refined. Once part of the environment has been explored, the robot can enter navigation mode and accept navigation tasks from the user. For a given goal, the expected shortest paths are computed and used for path planning as described above. Navigation mode and exploration mode can be interleaved seamlessly; length and probability factors are continuously updated in both modes based on observations made and edges traversed. In summary, this navigation system is able to operate robustly in the presence of unreliable sensory input and can cope both with the temporary occlusion of landmarks and with permanent changes to the environment, such as the removal and addition of new landmarks.
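The bookkeeping behind the two modes can be sketched compactly in Python. The code below maintains the landmark graph, explores it depth-first and plans over expected costs; modelling the expected cost of a visibility edge as its length divided by its estimated traversal probability is our simplifying assumption, standing in for the length and probability factors mentioned above.

# Sketch of the landmark graph used in exploration/navigation mode.
# The edge cost model (length / traversal probability) is an assumption.
import heapq

class LandmarkGraph:
    def __init__(self):
        self.edges = {}                      # marker -> {neighbor: (length, prob)}

    def observe(self, a, b, length, prob=0.5):
        # Record (or refine) a visibility edge between two landmark markers.
        self.edges.setdefault(a, {})[b] = (length, prob)
        self.edges.setdefault(b, {})[a] = (length, prob)

    def explore(self, start, visit):
        # Depth-first traversal of unvisited landmarks (exploration mode).
        seen = set()
        def dfs(node):
            seen.add(node)
            visit(node)                      # scan, record angles/distances
            for nxt in self.edges.get(node, {}):
                if nxt not in seen:
                    dfs(nxt)
        dfs(start)

    def expected_shortest_path(self, src, dst):
        # Dijkstra over expected edge cost = length / traversal probability.
        dist, prev = {src: 0.0}, {}
        queue = [(0.0, src)]
        while queue:
            d, node = heapq.heappop(queue)
            if node == dst:
                break
            for nxt, (length, prob) in self.edges.get(node, {}).items():
                nd = d + length / max(prob, 1e-6)
                if nd < dist.get(nxt, float("inf")):
                    dist[nxt], prev[nxt] = nd, node
                    heapq.heappush(queue, (nd, nxt))
        if dst != src and dst not in prev:
            return None                      # goal not reachable
        path = [dst]
        while path[-1] != src:
            path.append(prev[path[-1]])
        return list(reversed(path))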

VI. DISTANT MONITORING

The first input level in the system is the movable webcam. This webcam is connected to a PC over USB, and the picture can be observed on the PC. The software used for the webcam is the BroadCam software, which broadcasts via the Internet, sending picture and voice.

Figure 4. Distant monitoring

The BroadCam software samples the picture, compresses the data, controls the bandwidth, and broadcasts the video signal over the Internet. After starting, BroadCam first creates a video server, opens a TCP port, starts the webcam and transmits the picture through the port. Other PC computers can receive this picture, and the public IP address provides full availability for all users from all places. Users can watch the on-line videos provided by the web camera.

Figure 5. The web interface

VII. SOLUTION

In this project we have used a Bluetooth module to achieve remote control over a Scribbler. We used the Bluetooth base station to read a file from the controlling computer and send its contents to the second Bluetooth module. The second Bluetooth module, on receiving the data, in turn opens up its outputs depending on what it received. These outputs control the speed of the wheels individually.
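A minimal sketch of the sending side of such a link, assuming the Bluetooth base station appears to the PC as a serial port and that one byte per wheel encodes its speed; the port name, baud rate and two-byte frame format are illustrative assumptions, not the project's actual protocol.

# Sketch of the PC side: send wheel-speed commands over the Bluetooth
# serial link. Port name, baud rate and frame format are assumptions.
import serial

def send_wheel_speeds(port, left, right):
    # Encode each wheel speed as one byte (0..255) and write it out.
    with serial.Serial(port, 9600, timeout=1) as link:
        frame = bytes([left & 0xFF, right & 0xFF])
        link.write(frame)

# e.g. drive forward with a slight right turn
send_wheel_speeds("/dev/rfcomm0", 180, 140)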

With obstacles present in the unknown environment, the mobile robot reacts based on both the sensed information about the obstacles and the relative position of the target. In moving towards the target and avoiding obstacles, the mobile robot changes its orientation. When an obstacle in the unknown environment is very close, the mobile robot slows down and rapidly changes its orientation. The navigation strategy is to come as near to the target position as possible while avoiding collision with the obstacles.

VIII. CONCLUSION

Robotics has come a long way, especially for mobile robots. In the past, mobile robots were controlled by heavy, large, and expensive computer systems that could not be carried and had to be linked via cable or wireless devices. Today, however, we can build small mobile robots with numerous actuators and sensors that are controlled by inexpensive, small, and light embedded computer systems that are carried on-board the robot. Building and programming a robot is a combination of mechanics, electronics, and problem solving. What you're about to learn while doing the activities and projects in this text will be relevant to "real world" applications that use robotic control, the only difference being the size and sophistication. The mechanical principles, example program listings, and circuits you will use are very similar to, and sometimes the same as, industrial applications developed by engineers.

Acknowledgements: This research was partially supported by the TAMOP-4.2.2/08/2008-0008 program of the Hungarian National Development Agency.


THE KEY CUSTOMER SATISFACTION PARAMETERS MANAGEMENT IN 3G MOBILE NETWORKS

Bojana Strunić, Željko Jungić
Telekomunikacije Republike Srpske, a.d. Banjaluka

Abstract - Customer satisfaction with the services in a mobile network best reflects the real functioning of the network, regardless of the network performance and quality-of-service parameters, which can be measured technically. In network optimization, mobile operators today must take into account the customer perception of service quality, which is, apart from the technical parameters, affected by numerous other factors, such as current trends, marketing (advertising), the prices of services, etc. The aim of this paper is to review the QoE concept (including technical and non-technical factors) in 3G mobile networks and to point out the challenges that mobile operators face today. A further aim is to describe and elaborate effective methods for managing the QoE concept and achieving an optimum level of customer satisfaction in 3G networks.

1. INTRODUCTION

Recent changes in the world of mobile and wireless technologies, as well as the transition from voice-only transmission to the combined transmission of voice and data, have significantly affected mobile operators. The mobile Internet and new advanced applications have brought about major changes and an extraordinary growth of traffic in modern cellular networks; bandwidth and delay have become especially critical parameters. Under such conditions, and with competitive forces stronger than ever, operators must ensure a high quality of the provided services in order to retain their customers. According to survey results [1], about 82% of customer frustration arises from the unsatisfactory quality of the provided services, or from the inability of the operator/provider to deliver those services in an efficient and high-quality manner. Research has also shown that for every customer who called the Call Centre because of dissatisfaction and problems experienced while using a service, there are another 29 customers who never called customer support in order to complain. About 90% of dissatisfied customers will not complain before moving to another operator's network: they will simply leave the original network once they become dissatisfied. There are many reasons why mobile operators must not allow network performance and service quality to reach the point at which customer complaints arise, and only then try to solve the problem. Reliable mechanisms for monitoring the overall level of customer satisfaction must be found and applied, so that the problems that lead to customer complaints can be addressed preventively and mitigated and removed in time.

2. CUSTOMER SATISFACTION WITH THE SERVICES PROVIDED IN MOBILE NETWORKS

The level of customer satisfaction with the services in a mobile network, the QoE (Quality of Experience), is the key metric that best reflects the real functioning of the network, regardless of the network performance and service quality parameters that can be measured technically. The popularity of services and the competitiveness of a mobile operator on the telecommunications market depend to a large extent precisely on the value of this parameter. Unlike QoS (Quality of Service), QoE is not a pure metric but a complete concept that takes into account almost all elements that matter to the customer, as well as the way in which the customer's expectations regarding the quality of the services used will be fully met. Numerous factors influence customer satisfaction; among the most important are the price, the reliability, availability and security of the network, and the simplicity of using the services. Besides these, there are other, seemingly minor factors (such as the courtesy of the operator's service staff) that can significantly determine whether the customer will be satisfied or disappointed with the provided services and with the operator's attitude. Customer satisfaction with the provided services is thus becoming a very important parameter that must be assessed and taken into account when optimizing a mobile network. Customers usually have predetermined and well-defined expectations, which mainly focus on the availability, reliability and usability of the services, the simplicity of the interaction between the customer and the service, the system performance, and the tariffs, i.e. the prices of the services.

For the customer to be satisfied with the provided services, these expectations must be met. In addition, through interaction with their surroundings, customers go through different experiences and exchange their opinions and views on the perceived quality of the network and its services; the characteristics of these experiences also determine the overall level of customer satisfaction. Apart from the technical requirements, mobile operators today must therefore pay ever more attention to the customer perception of service quality when optimizing the network. The customer perception of the services shows the operator how the network really functions, regardless of how it is technically realized. The result of a positive QoE is satisfied and loyal customers and a mobile operator that is competitive in its environment. A poor QoE in the network, on the other hand, leads to frequent complaints, a poor image of the operator and the loss of customers, i.e. their migration to competing networks in the environment.

3. PARAMETERS THAT DETERMINE CUSTOMER SATISFACTION WITH 3G SERVICES FROM THE TECHNICAL POINT OF VIEW

Managing the QoE concept in a multi-service network is one of the most challenging aspects of planning and designing a 3G mobile network. The way network performance is monitored and managed through the NMS (Network Management System) dominantly influences the quality of the provided services and becomes, from the technical point of view, the key mechanism for achieving an optimum level of customer satisfaction. To achieve and maintain a high QoE in 3G networks, operators should choose effective algorithms for monitoring the key performance indicators of the network (KPIs - Key Performance Indicators) and the service quality parameters, taking into account the importance of the network levels and of the time intervals in which these parameters and the traffic intensity are monitored. It is extremely important to monitor the values of the key KPI/QoS parameters during the hours of the highest traffic intensity, when individual parts of the network enter congestion. These are the KPI/QoS parameters that directly determine the most important QoE metrics, i.e. that from the technical point of view contribute most to the end customer's perception and dominantly influence his perception of the services. Applying adequate and effective performance monitoring methods enables high-quality network optimization and the timely detection and localization of faulty network elements and problematic areas in the radio-access, transport and control/switching parts of the network, which cause congestion of the system. From the customer's point of view, in the technical sense, there are four aspects, i.e. phases, of service use: network access, service access, service retainability (the continuity of the service connection) and service integrity (the quality of the service during its delivery). Accordingly, the QoE metrics in 3G networks are most often grouped into two basic categories:
- reliability QoE metrics, which are related to the availability of the network and its services, and
- quality QoE metrics, which refer to the quality of a service during its delivery.
In the everyday performance monitoring through the NMS, operators must focus on the most important, key 3G QoE metrics, namely:
- service accessibility, i.e. the success of service establishment, which is directly described by the corresponding Accessibility KPI parameters, which determine the success of RRC connection establishment and the success of RAB bearer establishment for individual services (voice call, video call, packet IP services, etc.), and
- the continuity of the service connection, i.e. the percentage of dropped connections/sessions (continuity of service, or service drop ratio), which is directly determined by the corresponding Retainability KPI parameters for individual services and by the Mobility KPI parameters, which determine the success of the soft handover procedure and of the inter-system handover towards the GSM and GPRS systems.
There are also other QoE metrics, which are mostly determined on the basis of test connections for individual 3G services and of accompanying analyses on the corresponding service monitoring systems: the call/session setup time, the average bit rate, the average end-to-end delay, the delay variation, the information loss, etc. These QoE metrics do not have equal weight and importance for all UMTS services. The characteristics of the applications differ and, as a result, the requirements cannot be the same for all applications. A small delay and a small delay variation, for example, are important for applications of the conversational class (voice and video calls). A small delay variation is also very important for audio and video streaming, but it is less significant for Web and WAP browsing, which only requires that the transfer of pictures and other multimedia content is fast enough, i.e. that the transfer delay is acceptable for interactive use. For applications of the background class (e-mail, file transfer), delay is not as important, because the end customer usually has no need for the data to be delivered within a precisely specified time; for these applications, the integrity and credibility of the data, i.e. the reliability of the transfer, matter most. All the aforementioned QoE metrics are important and should be measured regularly, and concrete corrective activities should be undertaken in accordance with the detected irregularities; for the sake of efficiency, however, operators must focus their everyday performance monitoring on the aforementioned key QoE metrics, i.e. the key KPI parameters, and build effective algorithms for monitoring the performance parameters and the traffic intensity, correctly ranking the network levels and the time intervals in which they are observed. Attention should be paid to the nominal values of the KPI parameters, which represent the basic input data for planning the capacities and resources for the corresponding 3G services, as well as to the reference values of the KPI parameters, such as the maximum, minimum and mean values during the day, the value in the busiest hour, etc.
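As a simple illustration of how the key accessibility and retainability KPIs are derived from raw network counters, the sketch below computes them in Python; the counter names, example values and thresholds are assumptions for illustration only.

# Illustrative computation of the key accessibility/retainability KPIs
# from raw counters. Counter names and example values are assumptions.
def accessibility(attempts, successes):
    # Setup success ratio (RRC connection or RAB establishment).
    return successes / attempts if attempts else 1.0

def drop_ratio(established, dropped):
    # Retainability: share of established connections that were dropped.
    return dropped / established if established else 0.0

counters = {"rrc_att": 51230, "rrc_succ": 50770,
            "rab_voice_att": 18050, "rab_voice_succ": 17890,
            "voice_established": 17890, "voice_dropped": 125}

kpi = {
    "RRC setup success": accessibility(counters["rrc_att"], counters["rrc_succ"]),
    "Voice RAB success": accessibility(counters["rab_voice_att"],
                                       counters["rab_voice_succ"]),
    "Voice drop ratio": drop_ratio(counters["voice_established"],
                                   counters["voice_dropped"]),
}
for name, value in kpi.items():
    print(f"{name}: {value:.2%}")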

Taking this into account, we briefly outline below an example of an effective method for monitoring and managing the performance of a 3G network. At the RNC level, the key KPI parameters and the intensity of CS (Circuit Switched) and PS (Packet Switched) traffic should be monitored every day, with a time step of one hour or 30 minutes. In this way it can be noticed quickly and efficiently whether there is a problem at the RNC level, so that further steps and a deeper analysis aimed at detecting and solving the problem can be undertaken almost immediately. On the basis of the daily measurements at the RNC level, weekly and monthly statistics are built, which lead to the following useful indicators and conclusions:
- the hours of the highest CS and PS traffic intensity are identified per RNC, i.e. in individual parts of the UMTS radio subsystem;
- the traffic intensity per RNC is registered and the most heavily loaded parts of the UTRAN network are found;
- statistics are obtained of the values of the key KPI parameters in the busiest hour, as well as of their minimum and maximum values during the day.
In addition to the analyses and statistics at the RNC level, measurements for the daily busiest hour are made every day at the cell level for the whole network. The traffic measurements and the KPI parameters of individual cells are analysed, and the cells with reduced KPI values are singled out and immediately put under analysis. At the weekly and monthly level, statistics are built of the cells that repeatedly show reduced values of certain KPI parameters for individual services. In this way, problems are identified that relate to optimization errors, connected with high traffic demands for which the cell does not have sufficient capacity on the Uu or Iub interface. Such cells are subjected to additional analyses, the causes of the problems are examined and determined, and activities are undertaken to remove them. It is also very important to identify the cells through which the highest traffic is generated in the peak-load hours. These are the most important cells, serving a large number of customers, among whom, as a rule, are also those most significant for the operator. Efficient monitoring and timely detection of degraded KPI values in these cells is of great importance for managing the level of customer satisfaction. Directing attention to network congestion, and finding effective congestion management algorithms that reduce its intensity, duration and frequency, proves to be a powerful means of increasing customer satisfaction. In mobile 3G networks, reducing the occurrence of congestion primarily means identifying and locating the so-called ''critical points'', i.e. the problematic areas in the network where congestion appears most frequently.
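A minimal sketch of the weekly "repeat offender" statistics described above: cells whose busy-hour KPI falls below the nominal value on several days of the week are singled out for deeper analysis. The nominal value of 95% and the three-day criterion are assumed figures, not values from the paper.

# Sketch of the weekly 'repeat offender' analysis: flag cells whose
# busy-hour KPI falls below the nominal value on several days.
from collections import defaultdict

NOMINAL = 0.95          # nominal busy-hour setup success ratio (assumed)
MIN_BAD_DAYS = 3        # days below nominal before a cell is escalated

def repeat_offenders(daily_busy_hour_kpi):
    # daily_busy_hour_kpi: list of (day, cell_id, kpi_value) tuples.
    bad_days = defaultdict(int)
    for day, cell, value in daily_busy_hour_kpi:
        if value < NOMINAL:
            bad_days[cell] += 1
    return sorted(cell for cell, n in bad_days.items() if n >= MIN_BAD_DAYS)

week = [("Mon", "C101", 0.97), ("Mon", "C102", 0.91),
        ("Tue", "C102", 0.90), ("Wed", "C102", 0.92),
        ("Wed", "C103", 0.93), ("Thu", "C103", 0.96)]
print(repeat_offenders(week))   # ['C102'] -> send to deeper analysis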

Packet IP services require much larger bandwidths and capacities than the traditional voice service, so the cells with heavy packet traffic are the most prone to congestion in the hours of high traffic intensity. Consequently, in developed and modern 3G networks with a large share of IP services, operators face the need for the systematic management of congestion and of the limited resources in the registered critical zones of the network. The problem of limited resources is perhaps more topical today than ever. Simply increasing the network capacity so that it meets the service demands in the peak-load periods is, because of the limited resources, usually not technically feasible, and, because of the uneven distribution of traffic during the day, it is not economically justified either. Operators can influence congestion avoidance to a large extent through the PS core part of the network, by applying appropriate policies of dynamic and automatic resource management, above all of the permitted bit rates. What is needed, therefore, is a differentiation of customers on the basis of established customer profiles and precisely defined SLA (Service Level Agreement) contracts, which cover the contracted maximum rates, the required quality for individual services, the maximum amount of data that can be transferred in the busiest hours, etc.

4. ''NON-TECHNICAL'' FACTORS THAT DETERMINE THE CUSTOMER PERCEPTION OF 3G SERVICES

It is not only the technical quality of the network that influences the customer QoE, but of course a series of other factors as well. QoS is essentially a technical concept that usually represents only a part, i.e. a subset, of the overall QoE concept. Although a higher QoS in the network will in many cases result in a higher QoE, meeting the nominal values of the QoS and KPI parameters does not guarantee a high customer QoE. In other words, QoE can be defined as a concept that covers all elements of the customer's perception of the network and its performance, i.e. all elements of meeting the customer's expectations. Apart from the technical parameters (the performance of the network and of the mobile terminals, the technical quality of the provided services), customer satisfaction is also influenced by numerous other ''non-technical'' parameters, such as current trends, marketing (advertising), tariffs, prices, etc. Particularly important is the role of CRM (Customer Relationship Management), i.e. the organization and management of the activities comprising Customer Care and the adaptation of the Call Centre and points-of-sale services to different types of customers. It is very important to properly adapt the Call Centre and points-of-sale services both to residential and to business customers. The relationship between customer satisfaction and the numerous factors that influence it is shown in Figure 1.

Figure 1. The relationship between customer satisfaction and the factors that influence it

When planning and dimensioning the network, i.e. when defining the QoE targets, all elements of the customer perception of network and service quality, i.e. all elements of meeting customer expectations, must be taken into account in the right measure. Besides the technical factors, therefore, the other, very important ''non-technical'' factors must be considered as well. In order to manage the overall QoE concept as well as possible, it is useful to make a QoE segmentation by customer type, especially on the basis of the ''non-technical'' factors that influence the customer perception of the services. The basic segmentation, as a rule, distinguishes residential and business customers, but it is necessary to segment these two groups further, in several ways: by the revenue they bring to the operator, by the services they use most often, by the period of the day and of the week in which their needs for demanding (bandwidth-hungry) applications are most pronounced, and so on. Apart from voice calls and messaging services, the services most used on the markets of most 3G operators are Web and WAP browsing, which provide various news and information with graphical content as well as the downloading of diverse multimedia content; these services are especially popular with residential customers, particularly among the younger population. Audio and video streaming of the various content available on the Internet, access to social networks, mobile television and radio, etc., are likewise present to a significant extent among residential customers in the developed and modern markets of large 3G operators. The typical services of business customers are access to e-mail and to databases in company networks, and file transfer, i.e. the transfer of business data and information. As far as the non-technical factors are concerned, a strong influence on customer satisfaction comes from the organization and management of the activities comprising customer care. In other words, the role of CRM is very important, i.e. adapting the key services and customer applications to the given type of customer, so that they are as attractive, intuitive and easy to use as possible. The Call Centre and points-of-sale services should be properly adapted to each type of customer. Operators must follow the trends of more developed environments, as well as global trends, and the services must be promoted through attractive marketing campaigns and well-designed tariff models, adapted to the targeted customer segment. The popularity of a service is an important factor that indirectly influences the QoE; at the same time it shows the operator which statistical sample is relevant for consideration, which services deserve attention in network optimization and planning, which tariff models to apply to popular or less attractive services, etc. Likewise, other factors that can influence the customer perception of the services should be taken into account, such as the level of the customer's technical knowledge and familiarity with the technology used, age, purchasing power and the like. Although the operator cannot influence all of these factors, they can be taken into account in network optimization in the sense of adapting the services and customer applications to the customer himself, so that they are more intuitive and easier to use; the results of such actions also contribute to a higher QoE. The subjective experience of network and service quality can be influenced by the systematic management of resources, i.e. of the permitted bit rates, in the registered ''critical zones'' of the network. The policy of how individual customers will be treated in the critical zones is up to the operator. There are many resource allocation models, based on the customers' needs and on their contribution to the operator's total revenue. One adopted policy, recommended here, is to monitor the packet sessions of the customers who find themselves in the identified critical zones through the PS core of the network, i.e. through the SGSN, by analysing their customer profiles and checking whether the way they use the data transfer corresponds to the profile previously agreed with the operator. If it is established that they have exceeded the agreed limits, the network will react and reduce their bit rates, as illustrated in Figure 2.

Figure 2. Systematic management of the permitted bit rates in the busiest hours

The goal of examining the customer profiles in the critical areas and gently correcting the bit rates of the customers who have exceeded the agreed limits is to slightly reduce the data rate of the smaller number of customers who have occupied the largest part of the bandwidth and who usually run less important applications, so that the remaining, larger number of customers, with lower rates and more important applications, can continue working undisturbed in the detected critical zone and be spared a negative experience regarding service quality.
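A minimal sketch of such a congestion-time policy, assuming two SLA classes with illustrative rate limits and trimming factors; all figures and field names are assumptions, not values from an actual SLA.

# Sketch of the proposed congestion-time policy: in a registered critical
# zone, trim the bit rate of sessions that exceed their SLA profile while
# guaranteed-class users keep their contracted rate. Figures are assumed.
SLA_PROFILES = {
    "non_guaranteed": {"max_kbps": 512, "congestion_factor": 0.5},
    "guaranteed":     {"max_kbps": 2048, "congestion_factor": 1.0},
}

def allowed_rate(session, congested):
    profile = SLA_PROFILES[session["sla_class"]]
    rate = min(session["requested_kbps"], profile["max_kbps"])
    if congested and session["usage_over_limit"]:
        rate = int(rate * profile["congestion_factor"])
    return rate

sessions = [
    {"user": "A", "sla_class": "non_guaranteed", "requested_kbps": 800,
     "usage_over_limit": True},
    {"user": "B", "sla_class": "guaranteed", "requested_kbps": 1500,
     "usage_over_limit": False},
]
for s in sessions:
    print(s["user"], allowed_rate(s, congested=True), "kbps")
# A is trimmed to 256 kbps, B keeps 1500 kbps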

It remains up to the operator to determine exactly how customers will be treated with respect to extreme demands for network resources when individual parts of the network enter congestion, although developed models already exist on the market for monitoring resource usage and for giving priority to the customers who bring higher revenues to the operator or whose applications and needs are more serious and more significant for business. Customer segmentation can be performed on the basis of the data from the SLA contracts, or on the basis of clusters formed by data mining analyses of the service usage records stored in the operator's billing centres or data warehouse. During congestion periods, for example, the operator can reduce the bandwidth of the customers who have agreed to pay lower tariffs for the so-called non-guaranteed service classes and, at the same time, increase the capacities for the profitable customers who have, through their SLA contracts, agreed to pay more for service classes with improved performance. Such dynamic management of bit rates, which is mostly applied in the fixed broadband sector, proves to be a suitable mechanism in 3G mobile networks as well. The proposed policy gives the majority of customers the best possible performance while at the same time aligning the realized revenues with the increased demands for network capacity. This proves much more efficient than the usual allocation of bandwidth equally to all customers, at any time of day, without regard to the type of customer and application or to the state of the network. A reasonable approach to managing the limited network resources in times of congestion, based on the application requirements previously defined through the SLA contracts and adapted to the customer profile, is of crucial importance for ensuring the best possible level of customer satisfaction with the provided services. It is evident that the most important technical and so-called ''non-technical'' factors influencing customer satisfaction are closely correlated, because customer frustration and the increased number of calls to the service and Call centres arise mostly in the periods of increased demands for network resources, when congestion occurs in individual parts of the network. In such situations the values of the KPI/QoS parameters that, from the technical point of view, most significantly determine the QoE are not in accordance with the nominal ones and exceed the permitted limits. The regular and timely monitoring and analysis of the values of the aforementioned key KPI/QoS parameters are therefore of exceptional importance. Such analyses, complemented with data on customer complaints, are the best way to locate the critical spots, i.e. the ''bottlenecks'' in the network, and to direct the interventions and the investments in the expansion of network capacities to the right places, with minimum costs for the operator and a minimum risk of losing significant customers.

5. CONCLUSION

The level of customer satisfaction is the most important measure on the basis of which operators can properly plan and dimension their network. It is of key importance to find a balance between the upper and lower extreme bounds of service quality: while providing an extremely high quality of service is expensive and inefficient, too low a quality of the provided services can have a negative impact on customer satisfaction and on the churn rate. Understanding customer expectations and identifying the basic drivers of customer satisfaction are necessary for achieving an optimum level of satisfaction, as well as for defining accurate QoE targets. Ensuring an optimal QoE (including the technical and the ''non-technical'' factors) and accurately defining the QoE targets for the most important services, both for residential and for business customers, is very important for every 3G operator, because the level of customer satisfaction with precisely these services dominantly influences the operator's profitability and its competitive position on the relevant telecommunications market.

REFERENCES
[1] Pjer M. Vuckovic, Nevena S. Stefanovic, ''Quality of Experience of mobile services'', 14th Telecommunications Forum TELFOR, Belgrade, 2006.
[2] ''Opportunity in the Air: Congestion Management and the Mobile Broadband Revolution'', Tekelec White Paper, 2010.
[3] David Soldani, Man Li, Renaud Cuny, ''QoS and QoE Management in UMTS Cellular Systems'', John Wiley & Sons, 2006.
[4] ''Quality of Experience (QoE) of mobile services: Can it be measured and improved?'', Nokia White Paper, 2006.
[5] ''Network planning for Quality of Experience'', N2Nsoft White Paper, 2008.
[6] ''Assuring QoE on Next Generation Network'', Empirix White Paper, 2008.

[7] Yves Cognet, ''QoE versus QoS'', presentation, QoSmetrix, ITEA, March 2006.

POSSIBLE SOLUTION FOR EVALUATING ONLINE LEARNING


Vladimir Petošević, MSc, Military Academy, Belgrade

Abstract - As institutions of higher education experience a dramatic rise in the demand for online classes, faculty members are at a loss for tools with which to evaluate their teaching practices effectively. The authors of this article developed an instrument to give higher education faculty reliable feedback on their online classes. The instrument is unique to the online classroom and addresses issues that evaluation tools for traditional classes cannot address, such as course delivery, the instructor's online input, and the efficiency of the medium. In this article, the authors report on the reliability and validity of this instrument.

1. INTRODUCTION

Enrolment in online courses has drastically increased in the last decade. This increase has led to an intensified need for course evaluation tools that are developed specifically for online courses. Over the last few years, many instructors have expressed their dissatisfaction with the inadequacy of traditional course evaluations to provide them with useful feedback to improve their teaching methods in their online classes. The authors of this article developed a course evaluation instrument designed to address the needs of online educators. McVay Lynch (2002) contended that one of the most difficult obstacles to overcome in the use of student surveys to evaluate an online course was the students' inability to distinguish among the course content (materials, assignments, and activities), the instructor's style and personality, and the technical course delivery methods. She stated, "A sticky subject at most schools is the evaluation of the instructor. In the university system, end-of-course student evaluations often serve for promotion and tenure purposes. Consequently, the creation, validating, and reliability of any instruments used for this purpose is of high concern to faculty". Palloff and Pratt (2003) criticized the use of evaluation tools from traditional face-to-face classes in online classes, since they fail to assess the instructors' ability to build learning communities for independent and autonomous learners. They argued that online class evaluation tools should assess faculty members' abilities to engage students in the course, to give meaningful feedback to their students, and to be responsive to students' needs. The authors of this paper developed this online course evaluation tool with these concerns in mind.

2. COURSE EVALUATION

The online class focuses on building learning communities and facilitating learners' autonomy and independence, which course evaluation tools must address. Palloff and Pratt (2003) argued that online course evaluations should measure instructors' engagement in the course, quality of feedback, responsiveness to questions, and support and assistance with projects and assignments. They also maintained that summative evaluation should be used in the online class, but not as the only measure of the effectiveness of the course. Koontz, Li, and Compora (2006) defined evaluation as the "process of defining, obtaining, and providing useful information to make informed decisions that will enhance the teaching/learning process". They criticized summative evaluation as it is practiced in higher education because it fails to provide online instructors with useful information for making informed decisions. Koontz et al. (2006) contended that most instruments ask students to respond to general statements which elicit no specific comments.
They recommended that online summative evaluation tools should be designed specifically to measure the effectiveness of the instruction; the efficiency or the time required to learn the materials; the objectives of the coursework; and the attitude of the students toward course content, instruction, and course requirements. Cooper (2000) pointed out the importance of online course evaluation when she stated that, "Student evaluations help determine the effectiveness of the various components of an online course and address areas that may need revision. They also communicate to students that their input is valuable". Similarly, Lorenzetti (2006) argued that the current course evaluation tools used by higher education institutes are very broad in scope and fail to give instructors feedback that can be used to improve their course delivery. McKeachie and Svinicki (2006) maintained that online course assessments should provide feedback to instructors on ways that learning "can be facilitated." The assessment, McKeachie & Svinicki (2006) contended, should inform the teacher "how well the students are meeting the objectives." Cooper (2000), Hoffman, (2003), and Lorenzetti (2006) all criticized the use of traditional courses' evaluation tools in online courses. They agreed that there is a great need for course evaluations that are specifically designed for online courses. Hoffman (2003) agreed that online course evaluation has been receiving increased attention from institutions of higher education over the last few years. In his study, Hoffman asked such institutions to report their use of online course evaluation tools: he found an increase of eight percent among higher education institutes' use over the span of one year. However, he contended that the large majority of such institutions still rely on paper and pencil course evaluation instruments for all classes, both traditional and online. Palloff and Pratt (2003) listed a number of elements that should be included in a summative evaluation tool for

online coursework. They argued that these evaluation items should focus not only on the instructor's performance but on the total experience of the online learner in the course. These elements are:
- the overall online course experience;
- orientation to the course and course materials;
- the content, including the quantity of materials presented and the quality of presentation;
- discussion with other students and the instructor;
- self-assessment of the level of participation and performance in the course;
- the courseware in use, its ease of use, and its ability to support learning in this course;
- technical support; and
- access to resources.
The authors of this paper recognized the need to develop a course evaluation tool that was different from those which have been used in traditional courses. This instrument took into consideration the fact that the nature of communication among class participants in the online class is different from that in the face-to-face class. In the online classroom, the instructor is represented predominantly by the text. Just as with their students, an instructor's engagement with the material and the course is demonstrated through the number, length, and quality of his or her posts. In many cases, the students and instructor may never meet; the physical manifestation of the instructor may be a photograph on a homepage. Although this creates a difficult evaluation process, it also serves, on some level, to make the feedback received from students more valuable, as it relates directly to their experience of the course and the materials they have studied rather than reflecting the personality of the instructor. The authors of this paper therefore developed an instrument that takes into account the nature of communication among class participants in the online class. This new tool provides feedback on the efficacy of the instructor and the utility of the course from the students' point of view. The instrument elicits students' feedback with regard to four areas: the course delivery methods; materials and instruction; communication among instructor, students, and peers; and the support provided for students during the course.

3. RELIABILITY

Much of the research to establish reliability for newly constructed instruments has been done in the fields of medicine and psychology. A large number of these projects focused on survey instruments designed to measure quality of life under specific circumstances. Rich, Nau, and Grainger-Rousseau (1999) modified an existing questionnaire to measure quality of life with asthma. Bradley and colleagues (1999) designed an instrument to measure the impact of diabetes on quality of life. Damiano and others (2000) designed and tested a similar instrument to measure patient quality of life with Parkinson's disease. Coyne and others (2002) designed and

tested still another questionnaire designed to measure quality of life with overactive bladder symptoms. Other researchers have worked recently to establish reliability and validity for new instruments in the realm of health and mental health. Bethell, Peck, and Schor (2001) designed a survey to assess health care provisions for well-child care. Seymour and colleagues (2001) tested the validity of an existing questionnaire to measure health issues among older patients with cognitive impairments. Quintana and colleagues (2003) translated and tested the reliability of a Spanish version of the Hospital Anxiety and Depression Scale, an established instrument in its English version. Obayashi, Bianchi, and Song (2003) measured the reliability and validity of nutrition knowledge, socio-psychological factors, and food label use scales from an earlier diet and health knowledge survey. Finally, McMillan, Bradley, Gibney, and Russell-Jones (2003) evaluated two health status measures in adults with growth hormone deficiencies. Most frequently, these researchers employed Cronbach's alpha as the primary measure of reliability, with a minimum acceptable alpha coefficient value of 0.70. More closely aligned with the work in question were recent efforts to construct surveys designed to measure perceptions or attitudes. Walker, Phillips, and Richardson (1993) surveyed a Native American population about minority recruitment to programs of teacher education and employed Cronbach's alpha to determine the internal consistency of the survey instrument. Dowson and McInerney (1997) designed and tested a new instrument to measure students' achievement goals and learning strategies in Australian educational settings. These researchers used both Cronbach's alpha and factor analysis to establish reliability in their instrument. Cronbach's alpha was used, but these researchers relied more heavily on factor analysis to demonstrate the reliability of the shortened instrument. Through the use of Cronbach's alpha, correlation coefficients, and unrotated factor loadings, McGuiness and Sibthorpe (2003) tested a measure of the coordination of health care services. Coyle, Saunderson, and Freeman (2004) designed and evaluated a questionnaire to measure differing attitudes about learning disabilities, piloting the questionnaire among dental and social policy graduate students and using Cronbach's alpha across both the total results and the dental and social policy subgroups.

4. METHODOLOGY - DEVELOPMENT OF THE EVALUATION

The impetus to create an instrument designed specifically for students to evaluate online classes was occasioned by two desires: the desire to understand better student satisfaction or frustration with the requirements of online coursework, and the desire to document online teaching in a way similar to the way that universities document traditional face-to-face teaching. In order to draft the initial evaluation form, the authors examined a number of existing course evaluation forms, drew from past feedback

during less formal exchanges with online students over the past seven years, and solicited the input of colleagues who also taught online classes. Potential evaluation questions were narrowed down to thirty items, which fell into four categories: course web pages, course structure and content, course instructor, and overall course evaluation; plus one "global" coordination item that summarized the students' reaction to the entire course: "The course met my educational needs."

5. PILOT ADMINISTRATION

In order to pilot the original instrument, the pilot evaluation form was distributed electronically during the 2009/2010 academic year to 78 students who had participated in four classes at the Military Academy. The survey was made available through a commercial online service which guaranteed anonymity to the participants but provided full details to the researchers on each completed survey. Of the 78 students invited to participate in this pilot study, 58 (74%) responded and completed the evaluation form in full. Response data were entered in the Statistical Package for the Technical Sciences, one variable per item on the pilot evaluation form, plus one item with reverse coding for the final item on the pilot evaluation form. The final item was originally worded so that the "sense" of the answers was in the opposite order to the sense of the other 29 items: testing was completed first with the original coding and then with the reverse coding. In order to provide assurance that there were no disparities between the two semesters of survey administration or between courses in either of the semesters, t-tests and simple analysis of variance tests were run among all combinations of those participants. No statistically significant differences were discovered among participants by class group or by semester.

6. RESULTS - VALIDITY

Two of the most important and frequently used categories of validity are content validity and construct validity. Content validity reveals whether an instrument truly reflects the "universe" of items in the subject that the instrument claims to measure, while construct validity demonstrates that the instrument measures a definable underlying psychological construct. Although researchers need only establish one type of validity for a given instrument, these researchers established both content and construct validity for this new evaluation form: both professors and students who have worked online were consulted in order to determine whether the evaluation form asked, and provided the opportunity to answer, the most pertinent questions about online coursework; and student responses in the pilot administration of the evaluation were examined in comparison with other feedback that the students had provided to the professors, in order to determine whether the evaluation form actually measured the construct of student satisfaction with online coursework. In both cases, the pilot evaluation stood the tests: the instrument demonstrated both content and construct validity.

7. RELIABILITY

Statistical analyses to measure reliability have long been established. Through the use of these statistical tests, researchers can determine the extent to which the items in an instrument are related to one another, the level at which all items relate to the global "coordination" item on the pilot evaluation instrument ("The course met my educational needs."), an overall idea of the internal consistency (repeatability) of the scale as a whole, and the specific problem items that need to be reworded or excluded from the instrument in future administrations. For these operations, the researchers used a full set of Spearman's rho correlation coefficients, Cronbach's alpha coefficient of internal consistency, and Cronbach's alpha coefficient when each item was deleted from the total scale. Spearman's rho was applied because, in this pilot administration, the minimum ratio of cases to variables (10.4 to 1) could not be met: Spearman's rho better evaluates the relationships among responses from small samples of respondents. Strong Spearman's rho correlation coefficients among the items in each of the four subsets on the evaluation instrument, plus strong correlation between each item and the "coordination" item ("course met educational needs"), were desired. Within each of the four response subsets, each item correlated significantly to each of the other items, with only three exceptions. In the Course Web Pages subset, neither "The web links were relevant." nor "I was able to interact effectively with the instructor." correlated with "I was able easily to access the course information at the beginning of the course." In the Overall Course Evaluation subset, the final question on the pilot evaluation, "I prefer to have face-to-face classes.", did not correlate with "The course met my educational needs." With the exception of the two relationships that failed to correlate in the Course Web Pages subset, correlation coefficients ranged from .365 to .764, with 11 of the 13 remaining correlations exceeding .40. In the Course Content and Structure subset, all the Spearman's rho values held statistical significance, and the correlation coefficients ranged from .260 to .935, with 33 of the 36 significant correlations exceeding .40. In the Instructor subset, all the Spearman's rho values held statistical significance, and the correlation coefficients ranged from .336 to .875, with 65 of the 67 total correlations exceeding .40. With the exception of the one relationship that failed to correlate in the Overall Course Evaluation subset, the two remaining correlation coefficients equaled .433 and .435. All but one evaluation item was statistically significantly correlated to the global coordination item on the pilot instrument. Responses to "The course met my educational needs." did not correlate significantly to the final item, "I prefer to have face-to-face classes." (p = .466). Spearman's rho correlation coefficients between the other evaluation items and that coordination item exceeded .40 in twenty-seven of the remaining twenty-eight items (range .365 to .882).
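The computations reported in this section can be reproduced with a few lines of Python; the response matrix below is randomly generated only to make the sketch self-contained and runnable, so the resulting coefficients will not match the pilot figures.

# Sketch of the reliability computations on a response matrix
# (rows = respondents, columns = the 30 items, scored 0-5).
# The random matrix is a stand-in for the actual pilot data.
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
responses = rng.integers(0, 6, size=(58, 30)).astype(float)
responses[:, 29] = 5 - responses[:, 29]     # reverse-code the final item

rho, pval = spearmanr(responses)            # 30x30 Spearman's rho matrix

def cronbach_alpha(data):
    k = data.shape[1]
    item_var = data.var(axis=0, ddof=1).sum()
    total_var = data.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_var / total_var)

alpha = cronbach_alpha(responses)
alpha_if_deleted = [cronbach_alpha(np.delete(responses, i, axis=1))
                    for i in range(responses.shape[1])]
print(f"alpha = {alpha:.3f}")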

Cronbach's alpha for the total thirty items was .956 (high internal consistency) with the items coded as marked, and .964 (high internal consistency) with the final item coded in reverse to align with the scoring sense of the other twenty-nine items. The Cronbach's alpha formula determines the extent to which all items on an instrument measure the same underlying notion, i.e. the extent to which all items on the instrument are internally consistent. In this case, the researchers wanted all items on the evaluation to measure satisfaction with specific components of the online course. The alpha formula is based on repeated comparisons between the scores of individual items and the overall score: the more similar these scores are, the more accurately each item actually measures one part of the overall notion of satisfaction with the course. The maximum possible value for Cronbach's alpha is 1.0, which would indicate a "perfect" correlation between the scores of all the individual items and that one notion of satisfaction, so the values here of .956 and .964 indicate a very strong correlation. Cronbach's alphas were then recalculated with each single item removed in turn. This procedure allowed the researchers to determine whether any single item had powerfully influenced the original calculation; the alpha value of each recalculation should remain close to the original result. The resulting alpha correlations for all tests remained high, each exceeding .953. With the original coding maintained on the final item, the range of Cronbach's alpha was .953 to .966 (all high internal consistency) with one item removed from each statistical test.
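In symbols, the formula described verbally above is the standard Cronbach's alpha: for k items (here k = 30), with \sigma^2_{Y_i} the variance of item i and \sigma^2_X the variance of the total score,

\alpha = \frac{k}{k-1}\left(1 - \frac{\sum_{i=1}^{k} \sigma^2_{Y_i}}{\sigma^2_X}\right),

so alpha approaches 1 as the variability of the individual items is increasingly explained by the common total score.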

8. CONCLUSION

An important factor in developing evaluation surveys is to reach a consensus among instructors on the factors that constitute good teaching in the online classroom. Instructors must be clear on the expectations for communication between them and the students, on time limitations, and on the nature of the assignments that can be accomplished in such a class. The authors of this instrument provide a statistically valid tool for online educators which gives them reliable feedback on their teaching as perceived by their students. Based on the increased need for such tools in online classes, such an instrument can be a valuable tool for institutions of higher education (Hoffman, 2003; McVay Lynch, 2002; Lorenzetti, 2006). The failure of a very few evaluation items to correlate in this pilot application could be due to the fact that this limited group of students perceived the items as asking unrelated questions. Participating students might have perceived that neither "The web links were relevant." nor "I was able to interact effectively with the instructor." related directly to their experience in the opening couple of weeks of the online course ("I was able easily to access the course information at the beginning of the course."), and that disconnect might explain the lack of correlations among these survey items. This discrepancy could also be attributed to the fact that students were not required to read the links to be successful in the course, but rather to access them as an additional resource. The fact that many of the survey participants were first-time online students might explain their perception of e-mail and discussion boards as ineffective tools of communication compared to face-to-face communication with the instructor. These survey items in particular must be monitored in future applications of the instrument. The item "I prefer to have face-to-face classes." also did not correlate with the global "coordination" item, "The course met my educational needs." The preference item was the only item on the survey originally worded to code in the opposite direction to the other 29 items: the authors tested this item both as it was written and with reversed coding. For many of these students, the courses at hand were their first online course experiences: their responses to "I prefer to have face-to-face classes." may have been affected by the newness of the experience. Alternately, students may have perceived their responses to "I prefer to have face-to-face classes." as comments about the instructor or the process of the course rather than an overall comment about the online experience, and that bias might have changed their responses to this item. This survey item, like the two others that failed to correlate, must be monitored in future applications of the instrument. If these items continue to fail to correlate, they should be reworded or eliminated from the survey instrument. Both researchers continue to use this pilot instrument in their online courses and have begun to recruit other instructors to use the instrument as well. Additional input from students who participate in online classes will serve to clarify the reliability of the evaluation items for the purpose of summative evaluation in the context of online instruction.

REFERENCES

[1] Cooper, L. (2000). Online courses: Tips for making them work. THE Journal, 27 (8), 86-92.
[2] Dowson, M., & McInerney, D. (1997, March 24-28). The development of goal orientation and learning strategies survey (GOALS-S): A quantitative instrument designed to measure students' achievement goals and learning strategies in Australian educational settings. Paper presented at the Annual Meeting of the American Educational Research Association, Chicago, IL.
[3] Hoffman, K. M. (2003). Online course evaluation and reporting in higher education. New Directions for Teaching and Learning, 96 (3), 25-29.
[4] Koontz, F. R., Li, H., & Compora, D. P. (2006). Designing effective online instruction: A handbook for web-based courses. Oxford, UK: Rowman & Littlefield Education.
[5] Lorenzetti, J. P. (2006, March 15). Course evaluation project is model for content assessment (Distance Education Report). Magna Publications Inc.

[6] McKeachie, W. J., & Svinicki, M. (2006). McKeachie's teaching tips: Strategies, research, and theory for college and university teachers. Boston, MA: Houghton Mifflin Company.
[7] McGuiness, C., & Sibthorpe, B. (2003). Development and initial validation of a measure of coordination of health care. International Journal for Quality of Health Care, 15 (4), 309-318.
[8] McVay Lynch, M. (2002). The online educator: A guide to creating the virtual classroom. London: RoutledgeFalmer.
[9] Meredith, L. S., Wenger, N., Harada, N., & Kahn, K. (2000). Development of a brief scale to measure acculturation among Japanese Americans. Journal of Community Psychology, 28 (1), 103-113.
[10] Obayashi, S., Bianchi, L. J., & Song, W. (2003). Reliability and validity of nutrition knowledge, socio-psychological factors, and food label use scales from the 1995 Diet and Health Knowledge Survey. Journal of Nutrition Education and Behavior, 35 (2), 83-92.
[11] Palloff, R. M., & Pratt, K. (2003). The virtual student: A profile and guide to working with online learners. San Francisco, CA: Jossey-Bass Publishers.
[12] Seymour, D. G., Ball, A. E., Russell, E. M., Primrose, W. R., Garratt, A. M., & Crawford, J. R. (2001). Problems in using health survey questionnaires in older patients with physical disabilities: The reliability and validity of the SF-36 and the effect of cognitive impairment. Journal of Evaluation in Clinical Practice, 7 (4), 411-418.
[13] Walker, L., Phillips, J., & Richardson, G. D. (1993, November 10-12). Minority recruitment in teacher education. Paper presented at the Twenty-second Annual Meeting of the Mid-South Educational Research Association, New Orleans, LA.

Appendix A - Online course evaluation instrument

Please choose the number which best describes your opinion on a scale of 0 to 5, where 0 indicates that you strongly disagree with the statement and 5 means that you strongly agree with the statement. The first part of the evaluation focuses on the course, while the second part focuses on the instructor.

1. I was able to navigate the course web pages with ease.
2. I was able easily to access the course information at the beginning of the course.
3. Course expectations were acceptable and clearly communicated.
4. I liked the way the course pages were organized.
5. I had to use several resources in this class (e.g., textbook, course presentations, discussions, links, etc.).
6. The web links were relevant.
7. I was able to interact effectively with classmates.
8. I was able to interact effectively with the instructor.
9. I found the discussions useful.
10. I found the course presentations interesting and informative.
11. The use of cooperative learning (if applicable) was well structured.
12. My opinion and input were encouraged and valued.
13. Sharing our research presentations with others in the class was informative.
14. The course assignments were relevant and useful.
15. The course readings were interesting and relevant.
16. The course was intellectually challenging.
17. The course met my educational needs.
18. The instructor was accessible to me by e-mail, phone, or in person.
19. The instructor was well prepared.
20. The instructor posted course assignments on time.
21. The instructor posted grades in a timely fashion.
22. The instructor provided effective feedback on assignments.
23. The instructor maintained a positive atmosphere for learning in the class.
24. The instructor utilized effective teaching methods.
25. The instructor encouraged my participation.
26. The instructor provided relevant topics for discussions.
27. The instructor demonstrated mastery of knowledge of the course materials.
28. The instructor exhibited interest in my learning.
29. The online medium accommodates my learning style.
30. I prefer to have face-to-face classes.

Transcoding free voice transmission in GSM and UMTS networks


Sara Stanin, Grega Jakus, Sašo Tomažič
University of Ljubljana, Faculty of Electrical Engineering

Abstract - Transcoding refers to the conversion between two encoding schemes of a digital signal. It is usually performed where two interfaces do not support the same encoding. Transcoding introduces undesired effects into the signal, the most important of which are distortions and delays. In this paper we examine the possibilities of transcoding-free operation in GSM (Global System for Mobile Communications) and UMTS (Universal Mobile Telecommunications System) networks. Tandem Free Operation (TFO) in GSM networks enables transmitting voice transparently through the core network without transcoding. Although TFO has some advantages, such as improved speech quality and reduced delays, it also has many limitations. Transcoder Free Operation (TrFO) is similar to TFO but is employed in packet-based core networks, such as UMTS. TrFO overcomes some of the TFO limitations: it reduces bandwidth and voice call costs, increases network capacity and is more robust than TFO. In a UMTS network, when TrFO is not possible, TFO can still be attempted. Interworking of both mechanisms is necessary for mixed GSM/UMTS networks.

1. INTRODUCTION

Transcoding introduces undesired effects into the signal, the most important of which are distortions and delays. The distortions are cumulative and are a consequence of the loss of audio information, quantization errors and algorithmic errors (pre-echo, metallic sounds, oscillations, etc.). Other downsides of transcoding are the need for additional DSP (Digital Signal Processing) resources, unsupported end-to-end cryptography and more difficult implementation. Due to all of the above-mentioned negative impacts on voice quality, transcoding should be avoided whenever possible [2]. To reveal the effects of transcoding on user experience, various tests have been conducted; the performance of various AMR codecs tandeming with GSM codecs is presented in [3]. In a GSM (Global System for Mobile Communications) network, the encoding schemes for voice transmitted through the core and radio parts of the network differ. In this paper we present the possibility of serving a voice call in a GSM network without voice transcoding. We examine which elements and logic are necessary to provide this functionality in a GSM network. We also present the differences when providing transcoding-free voice transmission in a UMTS (Universal Mobile Telecommunications System) network, and we give a brief overview of transcoding-free voice calls when GSM and UMTS networks are interworking.

2. GSM/UMTS NETWORK TRANSCODING SCHEMES

Despite the increasing data rates and amounts of transferred data, voice calls remain the most important application in the mobile domain. However, in the past, voice quality was not a primary concern of mobile operators: voice was compromised by aggressive compression to save the scarce and costly frequency spectrum. The advent of new radio access technologies, more efficient compression techniques, multifunctional mobile terminals supporting multimedia applications and the consequently high user expectations force mobile operators to offer higher-quality voice applications. One of the factors that has a negative impact on voice quality in mobile networks is transcoding. The term transcoding refers to the conversion between two encoding schemes of a digital signal. Transcoding can be performed within the same format, which is known as self-tandeming, or between two different formats, which is known as cross-tandeming. Transcoding is used where two interfaces do not support the same encoding scheme. Ideally, a compressed signal would be transcoded without prior decompression into an intermediate format (e.g. G.711 [1] for audio transcoding). However, while such a conversion is feasible in the context of video processing, audio and speech transcoding can currently employ merely a brute-force approach: before compression into the target format, decompression into the G.711 format is necessary.

During a voice call in a GSM network, both mobile devices encode the user's voice to make it suitable for transmission over the GSM radio network. On the GSM radio interface, transmitted voice is encoded using the Full Rate (FR) [4], Half Rate (HR) [5], Enhanced Full Rate (EFR) [6] or Adaptive Multi Rate (AMR) [7] codecs. These schemes incorporate the voice compression necessary to make better use of the limited-bandwidth radio channel. Voice frames are then typically decompressed and re-encoded for transport over the 64 kbps circuit-switched links of the core network. For such transport, the G.711 standard is used, which is common in digital switched telephone networks. The reason networks were designed in this way is simple: connections to other networks (e.g. the Public Switched Telephone Network, PSTN) and possible additional voice processing in the core network itself, such as echo cancellation. In the common GSM voice call scenario, transcoding is therefore unavoidable.

Because the supported voice encoding schemes differ when passing from the radio part of the network to the core and back, transmitted voice must be transcoded. Figure 1 presents the main network elements involved in a call setup where a GSM user initiates a voice call with another GSM user.

[Figure 1 depicts the voice path between User A and User B: BTS (with CCU) - BSC - TRAU - MSC - MSC - TRAU - BSC - BTS (with CCU), with voice carried at 16 kbit/s (FR) between the CCU and the TRAU, and at 64 kbit/s (G.711) across the core network.]
Figure 1: Voice transcoding in a GSM network.

Two units are responsible for voice transcoding: the Transcoder and Rate Adaptation Unit (TRAU) and the Channel Coding Unit (CCU). The TRAU is an independent network unit responsible for voice encoding and decoding as well as for data rate adaptation. Between two TRAU units in a mobile network, transmitted voice is encoded as 64 kbit/s G.711. The TRAU unit is logically a part of the BSC (Base Station Controller), while its physical location can be either between the BSC and the BTS (Base Transceiver Station) or between the BSC and the MSC (Mobile Switching Centre). The second option reduces the cost of the leased lines between the MSC and the BSC due to the lower bit rates. On the radio interface, GSM encoders use 16 kbit/s logical channels: 20 ms voice frames are encoded with 260 bits, giving a bit rate of 13 kbit/s. The difference between 13 and 16 kbit/s corresponds to 60 bits per frame of control information, including the coding scheme and rate bits. These bits are transmitted in the so-called TRAU frames. The CCU is a part of the BTS. It performs channel coding and radio network quality measurements, based on which it can determine a suitable encoding scheme. In a GSM network, information about the selected encoding scheme is sent in-band, together with the transmitted user data.

Figure 2 presents voice transcoding in a UMTS network. In UMTS, the standard voice encoding scheme for transmission over the UMTS radio network is the narrow-band AMR scheme [8]. The scheme consists of eight modes providing bit rates from 4.75 kbit/s up to 12.2 kbit/s; the selected mode primarily depends on the radio channel conditions and on the voice content. Beside the standard narrow-band scheme, the wide-band AMR (AMR-WB) scheme can also be used, providing bit rates from 6.60 kbit/s to 23.85 kbit/s at a 16 kHz sampling rate. In general, transcoding operations in a UMTS network are a part of the media gateway (MGW) function set. Other MGW functionalities are: announcement services, echo cancellation, DTMF (Dual-Tone Multi-Frequency) detection and generation, support for transport protocols like ATM (Asynchronous Transfer Mode), IP (Internet Protocol) or TDM (Time Division Multiplex), support for Iu interfaces, bad frame treatment, IP-based functions like RTP/RTCP (Real-Time Transport Protocol/Real-Time Transport Control Protocol), encryption and QoS (Quality of Service).
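As a quick sanity check of the figures above, the following minimal Java sketch (illustrative only; the constants are taken directly from the text) reproduces the FR and TRAU-frame arithmetic:

```java
/** Illustrative arithmetic for GSM FR framing, based on the figures in the text. */
public class GsmRateCheck {
    public static void main(String[] args) {
        final int bitsPerFrame = 260;      // GSM FR: 260 bits per voice frame
        final double frameMs = 20.0;       // one voice frame covers 20 ms
        double voiceKbps = bitsPerFrame / frameMs;  // 260 bits / 20 ms = 13 kbit/s
        double channelKbps = 16.0;         // 16 kbit/s logical channel towards the TRAU
        // Extra capacity carried in TRAU frames (coding scheme, rate bits, ...):
        double overheadBitsPerFrame = (channelKbps - voiceKbps) * frameMs;  // = 60 bits
        System.out.printf("voice: %.0f kbit/s, TRAU overhead: %.0f bits/frame%n",
                voiceKbps, overheadBitsPerFrame);
    }
}
```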

Figure 2: Voice transcoding in a UMTS network.

3. TFO AND TRFO OPERATIONS

Tandem Free Operation (TFO) [9], [10] enables voice frames encoded according to the radio network codecs to be transmitted transparently through the GSM core network, avoiding TRAU transcoding. This is only possible if the encoding scheme lists of both devices include at least one common encoding. TFO supports common codec negotiation between the two involved user terminals. The TFO protocol uses dedicated messages and frames for the negotiation and establishment of a TFO connection between the TRAU units. Because these frames are transmitted over the 64 kbps link together with the user data traffic, such communication is known as in-band signalling. A TFO frame is transmitted by stealing the two least significant bits (LSB) of the voice frames, giving a 16 kbit/s virtual data tunnel. This is illustrated in Figure 3. The remaining 6 bits still carry voice encoded in G.711. This is important because, when the TFO operation fails, transmission can easily be reverted to the normal operation mode: instead of the 2 TFO bits, the remaining 6 G.711 bits are used to reproduce the voice sent from the originating side. Enabling TFO functionality in a GSM network requires only an upgrade of the TRAU units. As TFO operations require a transparent path, all devices between both TRAU units must transparently forward TFO frames.
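The bit-stealing principle can be sketched in a few lines of Java. This is an illustration of the arithmetic only, not the actual TFO frame format: it overwrites the two least significant bits of each G.711 octet with two bits of a compressed-voice stream, which at 8000 samples/s yields exactly the 16 kbit/s tunnel mentioned above.

```java
/** Illustrative sketch of the TFO bit-stealing principle (not the real TFO frame format). */
public class TfoBitStealing {
    /** Embeds 2 bits of tunnel data into the 2 LSBs of each G.711 sample. */
    static byte[] embed(byte[] g711, byte[] tunnelBitsPerSample) {
        byte[] out = g711.clone();
        for (int i = 0; i < out.length; i++) {
            // keep the 6 most significant bits of the G.711 sample,
            // replace the 2 LSBs with compressed-voice (tunnel) bits
            out[i] = (byte) ((out[i] & 0xFC) | (tunnelBitsPerSample[i] & 0x03));
        }
        return out;
    }

    public static void main(String[] args) {
        double tunnelKbps = 8000 * 2 / 1000.0;  // 8000 samples/s x 2 bits = 16 kbit/s
        System.out.println("virtual tunnel: " + tunnelKbps + " kbit/s");
    }
}
```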

BSC

BSC

BSC

Figure 3: TFO voice transmission principle in a GSM network. Compressed voice is transmitted through the core network by stealing the two least significant bits of the G.711 voice frames.

TFO supports adaptation of the GSM encoding mode to the radio network conditions. When speech is transcoded, the encoding mode is adapted on each side of the connection separately. When TFO is active and one side of the connection perceives a degradation of the radio network conditions, TFO must initiate an encoding mode change on both connection ends without any negotiation; both sides must then perceive an improvement of the radio network conditions in order to adapt the encoding mode back. TrFO (Transcoder Free Operation) [11], [12] is similar to TFO but is employed in packet-based core networks, which are based on high-bandwidth ATM or IP links rather than on 64 kbps TDM links. In such core networks it is therefore possible to transmit voice data streams with codecs other than 64 kbit/s G.711. The MSC can therefore establish a voice connection without activating transcoders, as illustrated in Figure 4.

TrFO uses out-of-band signalling, which means that the messages for the negotiation and establishment of transcoder-free operation are not transmitted on the same link as the user data. Both mobile terminals report their codec capabilities to the corresponding serving MSC before the bearer path is established; only when both sides have negotiated a common encoding mode can the bearer be established. TrFO uses the Out of Band Transcoder Control (OoBTC) [11] mechanism, which is responsible for configuring the call without involving transcoders. It supports encoding mode negotiation and encoding mode list changes/adaptations. Unlike TFO, TrFO is established and controlled before the call is configured; the selected encoding mode can still be changed later during the call. If transcoding in a UMTS network cannot be fully avoided, Remote Transcoder Operation (RTO) [12] can include a single transcoder in the user data path. This does not imply double transcoding of the user voice: of all possible in-path transcoders, the one used should be the one closest to the user device supporting the higher bit rate encoding scheme. Such a scenario is presented in Figure 5 and is also applicable for establishing voice calls to and from PSTN networks.
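The core of such out-of-band negotiation is the selection of a common codec from the capability lists reported by both terminals. The sketch below is a simplified illustration of this idea only (the codec names and the preference ordering are hypothetical and are not taken from the OoBTC specification):

```java
import java.util.LinkedHashSet;
import java.util.List;
import java.util.Optional;
import java.util.Set;

/** Simplified illustration of common-codec selection in out-of-band negotiation. */
public class CodecNegotiation {
    /** Returns the first codec from sideA's preference-ordered list also supported by sideB. */
    static Optional<String> selectCommonCodec(List<String> sideA, List<String> sideB) {
        Set<String> supportedByB = new LinkedHashSet<>(sideB);
        return sideA.stream().filter(supportedByB::contains).findFirst();
    }

    public static void main(String[] args) {
        List<String> terminalA = List.of("AMR-WB", "AMR", "G.711");  // hypothetical lists
        List<String> terminalB = List.of("AMR", "G.711");
        // AMR is the first common codec, so the call can be set up transcoder-free;
        // if only G.711 were common, a transcoder would have to be inserted (cf. RTO).
        System.out.println(selectCommonCodec(terminalA, terminalB).orElse("none"));
    }
}
```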

Figure 4: TrFO voice transmission principle in a UMTS network. A voice call can be established without activating the transcoders.

Figure 5: Single voice transcoding principle in a UMTS network.

Figure 6: TFO and TrFO interworking in a GSM/UMTS network.

The OoBTC procedure can result in choosing G.711 as the common codec in the UMTS network. In such a case, a transcoder is inserted in the appropriate MGW in order to perform the necessary AMR and G.711 transcoding, and the network initiating the call is informed about the selected G.711 codec. TFO operations in the GSM network are then pointless.

4. TFO AND TRFO EFFECTS

Although TFO has some advantages, such as improved speech quality and reduced delays, it also has many limitations:
- only mobile-to-mobile calls are supported;
- problems with the so-called digital transparency;
- group and conference calls are not supported;
- problems with hard handovers;
- the transmission of DTMF signalization and announcements.

The used codecs are negotiated independently during call setup between the mobile terminals and their corresponding TRAU units. Because the TFO procedure is configured after call setup is completed, TFO cannot be applied if non-compatible codecs are selected during the call setup. Another problem is associated with the so-called digital transparency. Digital transparency refers to the case when the digital content is not altered in any way by any of the network elements on the path between the TRAUs (IPE, In-Path Equipment). Any intervention by the IPE would cause the corruption of TFO messages and consequently the failure of the TFO transmission. To enable TFO, the IPE must be disabled or properly configured.

Another event which interrupts TFO is the inter-BSC handover, when one of the TRAUs is replaced by a new one corresponding to the new BSC. In this case, TFO is temporarily interrupted but is later renegotiated if the new TRAU is TFO capable. The intra- and inter-BTS handovers are generally not problematic, since the TRAUs do not change. TFO is also temporarily interrupted when DTMF tones or announcements must be inserted by an MSC. Since the MSC is not aware of the TFO, it can overwrite the TFO signalling and the compressed voice information. The distortion is immediately detected by one of the TRAUs and tandem transcoding is temporarily re-established. Group and conference calls are also problematic in the context of TFO: conference bridges most often work by mixing the voice signals of the involved parties encoded using the G.711 codec. The mixing of compressed voice signals and TFO messages would again distort the voice signal, and the TRAU units would again have to reinsert tandem transcoding. If a multi-party call turns into a normal call between two parties, TFO can be re-established. Finally, one of the major drawbacks of TFO is its overall effect on network capacity: even though TFO improves voice quality and decreases delays, it does not improve the overall capacity of the network, since uncompressed voice is still transmitted in parallel with the TFO traffic.

On the other hand, TrFO reduces bandwidth and voice call costs and increases network capacity (despite its superior quality, the AMR-WB codec requires roughly a third of the bit rate of the G.711 codec). TrFO is also more robust than TFO, as it supports sudden reconfigurations (e.g. because of handovers) via out-of-band signalling. Furthermore, it supports the use of wide-band codecs (e.g. AMR-WB) that are not compatible with G.711 (a narrow-band codec) and can therefore not be used with TFO. As AMR-WB encodes twice the frequency range of the older GSM and G.711 codecs, voice quality is improved. When TrFO is not possible, TFO can still be attempted.

5. CONCLUSION

In this paper, we presented the TFO and TrFO approaches for avoiding undesired voice transcoding in mobile networks. Both approaches enable better voice quality. TrFO also has many other advantages, such as lower delays and reduced processing requirements; the latter also reduces the cost of voice transmission. There are still open questions regarding TFO/TrFO interworking, such as mobile call handover from a UMTS to a GSM network and the increased signalling.

REFERENCES
[1] ITU-T, "G.711.0: Lossless compression of G.711 pulse code modulation".
[2] TIA TSB-116-A, "Telecommunications - IP Telephony Equipment - Voice Quality Recommendations for IP Telephony".
[3] 3GPP TS 26.090, "Mandatory Speech Codec speech processing functions; Adaptive Multi-Rate (AMR) speech codec; Transcoding functions".
[4] ETSI, "Digital cellular telecommunications system (Phase 2+) (GSM); Full rate speech; Transcoding" (GSM 06.10 version 8.1.1 Release 1999).
[5] ETSI, "Digital cellular telecommunications system (Phase 2+) (GSM); Half rate speech; Half rate speech transcoding" (GSM 06.20 version 8.0.1 Release 1999).
[6] ETSI, "Digital cellular telecommunications system (Phase 2+) (GSM); Enhanced Full Rate (EFR) speech transcoding" (GSM 06.60 version 8.0.1 Release 1999).
[7] ETSI TR 126 976 V6.0.0 (2004-12), "Performance characterization of the Adaptive Multi-Rate Wideband (AMR-WB) speech codec".
[8] 3GPP TS 26.103, "Speech codec list for GSM and UMTS".
[9] 3GPP TS 23.053, "Tandem Free Operation (TFO); Service description; Stage 2".
[10] 3GPP TS 28.062, "Inband Tandem Free Operation (TFO) of speech codecs; Service description".
[11] 3GPP TS 23.153, "Out of band transcoder control; Stage 2".
[12] 3GPP2, "Transcoder Free Operation; Stage 1 Requirements".

ELearning scenario in a Community of practice environment

Mr Željko Dunić, Prof. Dr Leonid Stoimenov
University of Niš, Faculty of Electronic Engineering

Abstract - The phenomenon of social learning has been present in human society since ancient times, and scientists see in this fact the main reasons for the development of human society. Despite this, eLearning systems do not take advantage of such learning, but are based mainly on the distribution of learning materials and the evaluation of knowledge. On the other hand, there is an undeniable growing popularity of online social communities, as well as of the application of social software within them. The concept of Community of practice represents an excellent bridge between the education area and social communities. The implementation of eLearning systems that are based on Communities of practice is recognized as a significant potential that can lead to effective social learning. For this reason the SCoPe project, a project of development of a Student Community of practice, has been started, in order to investigate the effects of social learning and the relationship between social software technologies and eLearning. Scope should provide a learning environment for students and other members of the community in which the free exchange of knowledge and the provision of help between members is stimulated. Through the development of such a Community of practice we expect very significant effects in other areas, especially in the field of the e-lance economy and the employment of staff. The first results of the research conducted within the Scope project are presented in this paper.

Keywords: Community of practice, eLearning, Online social community, Social Networking

1. INTRODUCTION

ELearning is a concept describing any type of learning environment that is computer enhanced. As a concept of learning that has been available for a long period, its advantages and disadvantages are fairly well explored [1]. ELearning systems are very often used in educational institutions as a primary way of education, not only as support for the existing classic learning process. There are many tools, commercial or open source, which enable the implementation of an eLearning environment, such as Moodle, Blackboard, etc. Nevertheless, the following question is still topical: is it possible to implement the complete process of education using these systems and tools? The analysis of the already implemented eLearning systems and services available at the University of Niš has shown that such systems are used only for learning object distribution and knowledge evaluation, but not for student collaboration using weblogs, forums or other collaboration services. In that way one very important part of learning is missing: shared learning, i.e. learning through practice or experience. On the other hand, there is a theory according to which social and cultural factors are the most influential in the development process of an individual [2]. People are constantly learning from other people in their environment. A similar theory, called 'Situated Cognition', was set out by Lave and Wenger [3], in which they presented the concept of Community of practice. Learning, as outlined in Wenger's vision of Community of practice [4], is achieved mainly through social activities. In such an environment the student acquires knowledge and competence through connectivity and belonging within communities where he can realize his interests and share knowledge with others. In addition to these communities, there is an interesting type of community in the area of collaborative learning, identified by Berlanga et al. [5]. They define the so-called ad hoc community as "a community that exists in order to meet the individual requirements in a limited period of time". It should be particularly noted that in these communities, the sharing of knowledge is not imposed or done under pressure, but occurs spontaneously, whereby the application of technology can help speed up the process and the emergence of the community itself. The conditions that each community should fulfill so that the sharing of knowledge is enabled are as follows:
- the community has to have a clear goal;
- the community has to have members with different levels of knowledge in different domains;
- the community has to track all members' activity and to measure performance based on the community's trust in each member.

Human life, as well as social communities, has got a new dimension with the development of technology. Digital communities, or social communities available on the Web, are today more and more in focus. Regarding this, Tim Berners-Lee notes that the Web is more a social creation than a technical one, and that it is designed for a social effect, to help people work together. According to Tim Berners-Lee, the ultimate goal of the Web is to support and improve our web-like existence in the world [6]. Based on the stated facts about the insufficiently researched social character of existing eLearning systems, as well as the fact that the education process takes place in a digital social community, an online community named Scope (Student Community of practice) has been developed. By developing a Community of practice for an eLearning system, we wish to research the effects of social connections on the education process, as well as to find out whether the idea of free knowledge sharing is sustainable in the university population, despite significant differences with the existing economic model. Competency will no longer be based on protecting personal knowledge, but quite the opposite: on knowledge sharing and open online collaboration. Community members who own and share knowledge with other members will have a higher rating and thereby higher competency compared with others who keep their knowledge to themselves. The second aspect of our research is finding the possibility to use the Community of practice and the accumulated knowledge about personal knowledge for other kinds of connections between people, primarily on the job market and in the human resources business. In this paper we put forward the results of the first phase of research, which consists of developing the practical community as well as defining the functional design of the environment of an online social community that can respond to the above requirements. The paper is organized as follows. The first part presents the theoretical basis of Communities of practice on which the development of the Scope Community of practice is based. The second part of the paper presents the system architecture and a description of the technologies and services implemented within the system. In the context of the architecture, the characteristics of electronic portfolios are presented, as well as a description of the roles of the emerging community members. At the end of the paper, we present the existing research results of the practical application of the concept of online communities

and social software in the field of education, in the form of a conclusion. We also present the future steps in the development of the Scope Community of practice, particularly the implementation of trust, on which the whole concept is based and which is a key prerequisite for the use of such systems within the "e-lance" economy.

2. COMMUNITY OF PRACTICE

The term Community of practice has come into use only recently, although the phenomenon has been present since the foundation of mankind and its need to learn in a community. This concept has turned out to be a useful perspective in the field of knowledge and learning. This part of the paper aims to explore what a Community of practice (CoP) is, what its theoretical basis is, and why researchers and experts from various fields and in different contexts see this kind of community as useful in the process of learning and knowledge sharing. In addition, the relationship between online communities and Communities of practice is analyzed, as well as the possibility that online communities represent a functional environment for the realization of communities of practice. According to Wenger and his theory of learning within the community, learning is a social process, so it can be seen through the involvement and contribution of each individual to the community to which he belongs. The basic assumption of the theory of Community of practice is that "the engagement and involvement of individuals within the community is the basic process through which we learn and become what we are" [3]. Communities of practice are formed by people who are involved in a process of collective learning in shared domains of human behavior, such as, for example, a group of artists seeking new forms of expression, a group of engineers working to solve similar problems, a group of students who prepare for a certain exam, a network of surgeons exploring new techniques, or a group of inexperienced managers who help each other. Therefore, a Community of practice is a group of people who have a common interest, problem or passion in a particular domain and who want to gain knowledge in the appropriate area or expand existing knowledge to specialize in a particular area [4]. Participation in the community is voluntary and open to all who are interested in a given area or topic. Community development is based on the mutual interest and interaction of participants, which means that it is impossible to create a community without the active participation of people [7]. On the other hand, as Wenger has shown [7], not every social community or group is a Community of practice, because otherwise this concept would lose its meaning. There are certain characteristics that must be identified so that a community could be classified as a Community of practice. A community can be considered a Community of practice if it is formed around a corresponding domain, has an

interactive community and owns shared knowledge and experience. Its basic features can be explained as follows (modified from [8]):

Domain - a Community of practice is not a club or a network of friends. It has an identity defined by a shared domain that represents the interests of all members. Membership in the community implies a commitment to the area and, therefore, a certain level of competence in the given area that distinguishes members of the community from other people. The aim is to improve the knowledge of the whole community through the exchange of knowledge in the defined area.

Community - to fulfill their interests in the domain, members of the community join together and, through their activities and discussions, share information and help each other. They build relationships that enable them to learn from each other. A Web site is not a Community of practice by itself. Also, the same job or the same position does not make a Community of practice, unless the members of the community have the opportunity to learn from each other through interaction and thereby advance.

Practice - a Community of practice is not merely a community of interest, e.g. people who prefer a certain kind of film. Community members are practitioners: they develop shared resources, such as experience and ways to solve specific recurring problems, in a brief and shareable form. This requires time and sustained interaction, and the exchange of shared experiences should be more or less self-conscious.

The combination of these three elements forms a Community of practice.

3. VIRTUAL COMMUNITY OF PRACTICE (VCoP)

Prior to information technology, the term Community of practice related to a group of connected people who usually lived in the same area [9]. With the development of online tools that allow people to exchange ideas in a virtual environment, the concept of a face-to-face community has been enriched and expanded with virtual interactions. These online communities can include people who know each other and share the same living space, but who are at the same time able to communicate on an international level with anonymous participants. Such communities are called online or virtual Communities of practice and include an online platform in which people share their knowledge and interests on a virtual basis, through online communication in the appropriate domain. In this case, communication and knowledge sharing are supported by software tools, which are often called social software. These tools enable cooperation and collaboration without time and geographical constraints, which is considered a key factor for Learning on Demand and Just in Time learning as characteristics of Communities of practice.

As already emphasized in the paper, members of an online Community of practice need not know each other, but their activities still adhere to the basic concept of Community of practice defined by Wenger [7]. Considering the theory of social learning and the characteristics of social software, it is obvious that there are great similarities between these two concepts. First of all, both concepts are directed towards people and require their active participation and engagement, on which their success directly depends. In addition, both support the concept of shared interests of people. It should be emphasized that the success of any technology depends on whether it is supported by the respective community, as noted by Wenger [10]. Social software is essentially entirely oriented towards the community and therefore has broad support within it. From the standpoint of technological requirements, a virtual Community of practice can be realized through an appropriate Web site with implemented services for collaboration and administration of members, a shared work space, a shared document repository, search, and the creation and management of communities. Considering these requirements, the conclusion is that almost all the listed requirements for the implementation of a VCoP could be realized with social software. Because of these similarities, it is interesting to study whether virtual communities based on social software can be considered Communities of practice and can support the learning process organized within them. In the literature there are different interpretations of these concepts. Given that the majority of research is still in the descriptive phase, it is of great importance to precisely define these concepts and the relations between them. In relation to the first part of the question, the dominant idea is that social software is generally treated as an additional channel for communication and not as a Community of practice itself. A similar conclusion was reached by Johnson [9], who believes that virtual communities can only represent practical support to communities instead of being Communities of practice themselves, which implies that the technology used in virtual communities is only the means for their implementation. Regarding the second part of the question, it is an undeniable fact that social software does support learning in practice. The general conclusions about online communities and social software on the one hand and online Communities of practice on the other are the following:
- technologies of virtual communities are developed from existing tools through their active use by community members;
- while there are tools that support Communities of practice, there is no technology with which it is possible to entirely realize a Community of practice;
- the main potential of a Community of practice is not, or should not be, the tools that support it, but the people who belong to the virtual community. Efficient technology is only a part of the development process of successful online communities [11].

It can be assumed that, within the Community of practice, the emphasis is on connecting people and on their active contribution to network development, and not on technology.

4. SCOPE OVERVIEW

The project of a student Community of practice (SCoPe) has been developed in order to support eLearning at the University of Niš. On the one hand, the community supports formal learning through a link with the existing eLearning systems (in this case the Moodle software platform), while on the other hand the emphasis is on informal learning through the development of online communities and access to learning material that can be imported from other media communities (YouTube, Wikipedia, etc.). The development of such a community allows the exchange and accumulation of knowledge in certain areas, which, following the applied social network analysis, represents a huge potential for the entire university, especially in connecting staff with companies from around the world that need them. With Scope, students have free access to knowledge and can communicate simply with other students, professors or institutions within a particular domain. Through the Community of practice they share ideas, cooperate with each other and engage in one or more groups. In addition to learning within the community, students have at their disposal different learning resources, which they can change in a way that suits them and thus promote the knowledge of the whole community. The main features of the Scope community are the following:
- the activities of the community are directed towards knowledge sharing and acquisition;
- the classical relation between teacher and student is replaced by the collaboration of community members, no matter what one's role in the education process is (a student can also be a source of knowledge, or transfer his experience in one domain and consume knowledge in another);
- the activities in the community are focused on problem solving (Problem Based Learning, PBL), i.e. learning happens during the process of solving a problem that the student has encountered.

All activities in Scope, such as sending responses to student questions, writing comments on blogs or initiating research, accumulate in a shared and evolutionary online portfolio. Students who freely share knowledge and wish to demonstrate and improve their skills can do so through their activities in one or more communities. Specific knowledge can be gained on the Web through access to available educational resources such as wikis and weblogs, through learning together, and through the sharing and exchange of knowledge in the community. The choice of technologies for building the Scope Community of practice is guided by the following requirements: ease of use, flexibility, the ability to adjust to the demands of users, and simple and effective communication between the users of the system.

We considered several solutions, from the development of a completely new software product to the implementation of the system using ready-made open-source solutions. After an analysis of the available products in the field of course management systems, such as Sakai, and content management systems, such as Joomla and TikiWiki, we decided to use the Elgg platform for the development of online communities. The choice is entirely logical if one takes into account that the objective of the whole system is the support of informal learning through learning in the community. Using the Elgg platform, we have developed a community that is oriented towards the solution of concrete problems. Students are organized into small groups specialized in specific domains, within which they exchange materials, ideas and experiences, or ask specific questions and present problems encountered in their work. They use mail, chat and other communication tools, such as a wiki, blog and forum, for the solution of concrete problems. In addition, the Scope platform is completely open, which allows the community to connect with existing systems for electronic learning, such as Moodle, which was one of the conditions for the implementation. The main feature of the Scope Community of practice is its full orientation towards the users and its support of all the requirements of shared learning through the development of a personal learning environment. This environment can be defined as a system or concept that helps students to control and manage their own learning process. This way, students can define learning objectives, manage the learning process and the necessary facilities, and communicate with other participants in the learning process, and thus achieve the planned objectives. A personal learning environment is not a type of software but a new approach to using technology for learning [12], or a collection of free, distributed, Web-based tools, mainly concentrated around blogs, which are interconnected and which group content using RSS feeds and simple HTML scripts [13]. Four basic characteristics can be distinguished in almost all definitions of a personal learning environment:
- individual control of tools and content;
- content aggregation and collection;
- service integration;
- no spatial and temporal constraints.

The following services are used in the personal learning environment of the Scope Community of practice (Figure 1):

Figure 1. Personal learning environment in the Scope Community of practice

Profile (User profile) is the basis for the creation of the user's digital identity and e-portfolio, which will be discussed in more detail later in the paper. In addition, the user profile is the basis for the realization of the economic vision of the Scope Community of practice, since it is based on shared and evolutionary user profiles. The profile reflects the status and needs of each individual within the domain to which he belongs. It is essential that every member can easily find the appropriate domain within which he wants to progress and achieve full functionality. The user profile within the Scope Community of practice is realized using the FOAF standard for the description of user profiles and connections with other people, based on RDF.

Blog (Weblog) is a form of a Web page that contains articles similar to posts, in chronological order. A blog can be related to an individual or a small group of authors who belong to one of the online communities formed around a particular domain.

File repository allows the storage of different types of files that can support the exchange of knowledge and ideas.

Tagging is a process in which users assign tags to objects in order to share content with other members of the community. The process is also known as folksonomy and is directly related to social bookmarking, a method that allows storing, organizing, searching and managing the metadata (tags) by which users mark a Web page. Searching tags in the Scope Community of practice is an excellent mechanism for finding community members or groups with similar interests, as well as for monitoring their activities in the long run.

To achieve the planned goal of learning within the Scope Community of practice, it is necessary that the system is developed using Semantic Web technology. One of the planned mechanisms is the tagging of learning objects and user communities with ontology-based tags. This way, it would be possible to connect the formal knowledge represented by domain ontologies with the informal knowledge that is gained through the process of social labeling.
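The paper does not detail how this ontology-based tagging would be implemented; the following minimal Java sketch only illustrates the idea, assuming a simple in-memory mapping (the concept URIs and tag names are invented for illustration):

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;

/** Minimal sketch: linking free-form folksonomy tags to ontology concepts (URIs invented). */
public class OntologyTagging {
    // maps a user tag to the URI of a domain-ontology concept
    private final Map<String, String> tagToConcept = new HashMap<>();

    void link(String tag, String conceptUri) {
        tagToConcept.put(tag.toLowerCase(), conceptUri);
    }

    /** Returns the formal concept behind an informal tag, or null if the tag is unmapped. */
    String conceptFor(String tag) {
        return tagToConcept.get(tag.toLowerCase());
    }

    public static void main(String[] args) {
        OntologyTagging tagging = new OntologyTagging();
        tagging.link("sql", "http://example.org/onto#RelationalDatabases"); // hypothetical URI
        for (String tag : List.of("SQL", "jogging")) {
            System.out.println(tag + " -> " + tagging.conceptFor(tag));
        }
    }
}
```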

RSS (RDF Site Summary) is a technology that allows users to follow a list of changes in blogs, tags, communities and other services of the Community of practice. A user who has subscribed to the appropriate RSS feed gets information about the names of new items, their short summaries and the URLs of any changes.

Access control allows different levels of access to content for individuals or groups within the Community of practice. In the Scope Community of practice, users manage their own security, because there are mechanisms for defining access at each level of the community organisation. It is possible to create an unlimited number of communities, which can define public access to their contents, such as documents, discussions and other activities, or keep them away from the public eye.

Online community enables students to connect for the exchange of knowledge around common interests and domains.

Search is one of the key mechanisms of the Scope Community of practice; it allows users to search resources, whether learning materials or community members or groups formed around a certain domain. A special feature, enabled by Elgg, is the possibility of adjusting the environment according to the needs of each individual or group.

5. SCOPE MEMBERS' ROLES

The implementation of the environment for the Community of practice represents only the beginning and a prerequisite of its establishment. A Community of practice begins its life only when people connect through the community domain and begin to actively work within it. The survival and maintenance of the community is possible only if all members have clear and achievable interests and if they are focused on achieving common goals. Also, of crucial importance for the success of the community is the manner in which members experience the community, how much time they spend in community activities and whether the community is able to evolve over time. Members of the community can roughly be classified into those that provide knowledge (Knowledge Providers) and those who use or consume knowledge (Knowledge Consumers). One person can have different roles in different domains: for example, to be a knowledge provider within one domain and a consumer in another. On the basis of the activities that members perform within a certain domain, we can classify the members of a Community of practice as active, casual, peripheral and external subjects. On the other hand, according to one's knowledge in certain domains of the community, members can be observers, beginners, experts, leaders, coordinators and freelancers. The leaders direct other members to be focused on short- and long-term goals and, at the same time, if necessary, raise the energy of the community by organizing new events and activities.

Coordinators are tasked with assisting members of the community, advising them, connecting them with other members of the community and constantly stimulating and encouraging interaction between members. Experts are specialists in the appropriate field. External entities are usually agents of companies, HR managers who are in charge of finding adequate personnel according to customer requirements, to be involved in specific projects or activities. The basis of the community is, of course, all of its members, regardless of their roles, because without them there is no community. Starting as newcomers on the outskirts of the community, members evolve into average users who gravitate toward the center of the community and who, at some point, on the basis of their activities, become experts in the appropriate domain and thus gain a central role. The process of student evolution in the community is a learning process in which students become increasingly aware of the facts about the community.

6. CONCLUSION

The paper presents the first results of research within the Scope project, the development of a student Community of practice. In this phase, we implemented a personal learning environment based on the Elgg software, which aims to improve the process of social learning among students and other users of the system. Different services are available to students in the Scope Community of practice, such as weblogs, chat, forums, online communities that can be formed around different domains, the digital identity of each user, and a mechanism for searching for learning materials as well as other members of the community based on social tagging. The system was developed so that it can identify, search and recommend relevant materials, as well as individuals and communities existing within the system that can help in solving certain learning tasks, based on user profiles and the specific learning task. It should be emphasized that the focus of the developed practical community is oriented towards its users, and the most important effects of the system are expected from the improvement of social learning. However, the Scope Community of practice cannot be considered a complete environment for eLearning, since it doesn't contain a system for course management. Considering its openness, connection to other standard course management systems in the area of e-learning is not a problem, as demonstrated by the integration of the Scope Community of practice with the existing Moodle course management system. The conclusion is that Scope is the missing link needed to complete the eLearning process, since in the existing systems the concept of social learning is not applied to a sufficient extent. It is important to say that Scope has just begun its development and the positive effects of the system implementation are yet to be expected. The next phase in the implementation of the Scope Community of practice involves the expansion and provision of community sustainability, which is, in our

opinion, a very important step. The first thing is the implementation of trust in the Scope Community of practice, which means testing the key factors that users consider when deciding whom to trust in the learning process. The user profile will be extended with the identified attributes, and the trust will specifically refer to each domain in which the user is active.

REFERENCES
[1] Anderson, T. (2008), The Theory and Practice of Online Learning, AU Press, Athabasca University.
[2] Vygotsky, L. (1986), Thought and Language (rev. ed., A. Kozulin, Ed.), Cambridge, MA: MIT Press.
[3] Lave, J., Wenger, E. (1991), Situated Learning: Legitimate Peripheral Participation, Cambridge: Cambridge University Press.
[4] Wenger, E., McDermott, R. E., Snyder, W. M. (2002), Cultivating Communities of Practice, Boston: Harvard Business School Press.
[5] Berlanga, A., Sloep, P. B., Kester, L., Brouns, F., Van Rosmalen, P., Koper, R. (2008), "Ad hoc transient communities: towards fostering knowledge sharing in Learning Networks", International Journal of Learning Technology, 3(4), 443-458.
[6] Berners-Lee, T. (2000), Weaving the Web: The Original Design and Ultimate Destiny of the World Wide Web, HarperCollins Publishers Inc.
[7] Wenger, E. (1998), Communities of Practice: Learning, Meaning and Identity, New York: Cambridge University Press.
[8] Wenger, E. (2006), Communities of Practice: A Brief Introduction.
[9] Johnson, C. M. (2001), "A survey of current research on online communities of practice", Internet and Higher Education, 4, 45-60.
[10] Wenger, E., White, N., Smith, J. D., Rowe, K. (2005), "Technology for communities", CEFRIO book chapter, pp. 1-15.
[11] Garber, D. (2004), Growing Virtual Communities, Athabasca University, Centre for Distance Education, Online Software Evaluation Report.
[12] Attwell, G. (2007), "Personal Learning Environments - the future of eLearning?", eLearning Papers, vol. 2, no. 1, ISSN 1887-1542.
[13] Fitzgerald, S. (2006), "Creating your Personal Learning Environment", workshop presented at the August 3rd LearnScope Workshop, Australian Technology Park, Redfern.

THE IMPLEMENTATION OF THE IMS LD E-COURSE GENERATOR


Goran Savić, Milan Segedinac, Zora Konjović
{savicg, milansegedinac, ftn_zora}@uns.ac.rs
Fakultet tehničkih nauka, Novi Sad

Abstract - This paper presents the implementation of an e-course generator. Courses are generated based on three components: learning goals, learning objects and instructional design. The system's architecture is extensible: it is possible to extend it in order to generate courses in different output formats. The result is an automatically generated IMS LD compliant course. The system is implemented in the Java programming language.

1. INTRODUCTION

Creating an e-course may be a very time-consuming job if it is done completely manually. Also, there is a need to periodically change the created course. A part of this job may be automated using course generator systems. These systems automatically create an e-learning course using different input parameters: learning objects, domain knowledge, a student model, pedagogical knowledge, etc. The paper [1] presents a system called DCG, which generates a sequence of learning activities using domain knowledge, an instructional plan, learning material and the student's pre-knowledge as the input parameters. This sequence is personalized for each student. Domain knowledge is represented using a concept map. A learning environment for teaching mathematics is presented in [2]. For each student, the system generates a personalized course. The input parameters are learning objectives, learning material (represented by semantically annotated XML documents), the student's profile (knowledge and preferences), a teaching scenario (there are 6 predefined global pedagogical strategies) and pedagogical rules (sequencing and selection of learning objects defined using if-then rules). The paper [3] presents a system for the automatic generation of a set of individualized hypermedia documents. A course is generated using a student model, domain knowledge, learning objects and a set of rules defined in the JESS rule engine. A student model is also used in [4], where a sequence of learning objects is created using a planning mechanism and a PDDL plan. This language is also used for course generation in [5], where the learning activity is chosen in real time, depending on a learning objective and the student's profile. The paper [6] presents a system for the automatic generation of a learning path using an ontology of learning objectives and a student model. A similar approach is used in [7], but without the formal representation of learning objectives: a course is generated directly from learning objects, and relations are defined directly among the learning objects. This paper presents our system for the automatic generation of e-learning courses. As seen above, all similar systems have two basic components: learning objects and domain knowledge. Our system contains these two components, too. In addition, our system is focused on the instructional design of the course. The system contains a component which formally specifies the instructional design used in the course. By changing the instructional design definition, different pedagogical strategies may be easily applied in the course. These pedagogical strategies may then be evaluated in order to find the most appropriate strategy for the course. This is the current purpose of our system. Currently, our system doesn't have a student model as a component and it doesn't create a personalized course.

2. SYSTEM ARCHITECTURE

In this paper, an e-learning course is modelled using the model presented in [8].
The model is based on the classical Tyler rationale [9] and it specifies four modules in a curriculum: learning objectives, learning objects, instructional design and testing strategy. In this paper we are concerned with the first three modules. E-courses (in popular e-learning systems or in globally adopted e-learning standards) mostly contain these modules. However, the course is represented as a monolithic unit and the modules are not explicitly represented as separate units. Using such a representation, it is not possible to independently change only one module, which is a frequent practical demand. Hence, we have decided to represent each module as a distinct component. That way, each component may be independently changed, and the components are the input parameters for the automatic generation of an e-learning course. Our system generates a course using three input parameters: learning objectives (goals), learning objects and instructional design. Learning objectives may be defined as learning outcomes that a learner has to achieve. In this paper, we have decided to use an ontology for representing learning goals, which is an approach used in [4] and [6]. Our ontology of learning objectives is represented in the OWL [10] language. In this paper we use the term learning object to refer to any digital content that helps a student to achieve a learning objective or evaluates whether a learning objective has been achieved. Since our system is intended for generating an e-course for a web environment, we need learning objects represented in a browser-readable format. So, we have chosen the HTML format for learning objects, packed in the IMS Content Packaging [11] format. As mentioned, the usage of a learning object is always related to a specific learning objective (for learning or evaluation). Therefore, we created a mapping input component that maps learning objects to learning objectives. We use an XML document to specify this mapping. For generating a sequence of learning activities, it is necessary to define the instructional design used in the course. We have created a

special-purpose XML-based language for describing the instructional design. The language specifies the sequencing and selection of learning objects. An XML schema for this language is presented in [8]. The result of our system is a formal description of an e-learning course. It may be defined in various formats, and we have chosen the IMS Learning Design [12] format. A detailed description of all components may be found in [8]. The global architecture of our course generator is depicted in Figure 1.
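The paper does not show the schema of the mapping document, so the following Java sketch invents a plausible structure purely for illustration (the element and attribute names objectiveId and resourceId are assumptions, not the authors' actual format); it reads such a document with the standard DOM API:

```java
import java.io.StringReader;
import javax.xml.parsers.DocumentBuilder;
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;
import org.w3c.dom.Element;
import org.w3c.dom.NodeList;
import org.xml.sax.InputSource;

/** Reads a hypothetical objective-to-object mapping document; the schema is invented,
 *  since the paper only states that an XML document specifies the mapping. */
public class MappingReader {
    public static void main(String[] args) throws Exception {
        String xml = """
                <mapping>
                  <entry objectiveId="obj-sql-basics" resourceId="res-sql-intro.html"/>
                  <entry objectiveId="obj-sql-basics" resourceId="res-sql-quiz.html"/>
                </mapping>""";
        DocumentBuilder builder = DocumentBuilderFactory.newInstance().newDocumentBuilder();
        Document doc = builder.parse(new InputSource(new StringReader(xml)));
        NodeList entries = doc.getElementsByTagName("entry");
        for (int i = 0; i < entries.getLength(); i++) {
            Element e = (Element) entries.item(i);
            System.out.println(e.getAttribute("resourceId")
                    + " -> " + e.getAttribute("objectiveId"));
        }
    }
}
```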

from

the

sequence

of

learning

activities.

IMSLDCourseModel class represents the course in the IMS LD format. The methods createActivities and createActivity-Relationships create the IMS LD

representation of learning activities (see section 3.4). The overridden method generateCourse creates an IMS LD manifest file. The e-learning course model is shown in Figure 2.
resources

Resource : 2

Ontology

OR Mapping

IMS CP Manifest

ID Template

TemplateParser

Course Generator

LearningObjective : 2 objectives

Parse ontology

Parse resources

Init mapping

Create relationships

Create activities

Parse template

CourseModel

InstructionalDesignElement : 2 instructionalDesignElement

Generate IMS LD manifest

rootActivity IMSLDCourseModel
IMS LD Manifest

LearningActivity (<CourseGeneratorModel>) {abstract}

Figure 1. System architecture Figure 2. E-learning course model 3. SYSTEM IMPLEMENTATION The system is implemented in Java programming language. The system parses the input files and creates the in-memory representation of data read from the input files. Firstly, the ontology of learning objectives and learning objects specification (in the IMS CP manifest file) are parsed. Then, learning objects are linked with learning objectives by parsing the mapping component. The last input file that is parsed is the XML document that describes course instructional design. Using all these data, a sequence of learning activities is generated. Finally, this sequence is a base for generating an IMS LD manifest file, which represents a formal description of our e-learning course. The object models of each component are described below. 3.1. E-learning course An e-learning course has been modelled using an abstract class CourseModel. The course contains learning objectives, learning objects and information about the instructional design. These attributes are initialized from the system inputs. TemplateParser class parses the input XML document that describes instructional design. Using initialized attributes, learning activities are created. The course (in the concrete output format) is generated 3.2. Learning objectives and learning objects For in-memory representation of learning objectives and learning objects, we have created the model shown in Figure 3. Resource class represents a learning object. For each learning object, the name, id and file path are managed. In addition, this class has a list of metadata that closely describe the learning object. For a learning objective (class LearningObjective), the system specifies the name and hierarchical level. Since learning objectives are hierarchically organized, each learning objective has a reference to its parent learning objective (attribute parentObjective in the figure). Likewise, a learning objective has a list of its children. Our ontology defines a relation surmises which specifies that a learning objective may be a precondition for other learning objectives. This relation among learning objectives is modelled using ObjectivePrecondition class. The attribute source in this class represents a learning objective that is a precondition for the learning objective defined by the destination attribute. LearningObjective class has the list of all ObjectivePrecondition objects where that learning objective is the source. Similarly, it contains the list of ObjectivePrecondition objects where it is the destination. Mapping of learning objects to learning objectives is modelled in ObjectiveResource class.

3.2. Learning objectives and learning objects
For the in-memory representation of learning objectives and learning objects, we have created the model shown in Figure 3. The Resource class represents a learning object. For each learning object, the name, id and file path are managed. In addition, this class has a list of metadata that closely describe the learning object. For a learning objective (class LearningObjective), the system specifies the name and hierarchical level. Since learning objectives are hierarchically organized, each learning objective has a reference to its parent learning objective (attribute parentObjective in the figure). Likewise, a learning objective has a list of its children. Our ontology defines a relation surmises which specifies that a learning objective may be a precondition for other learning objectives. This relation among learning objectives is modelled using the ObjectivePrecondition class. The attribute source in this class represents a learning objective that is a precondition for the learning objective defined by the destination attribute. The LearningObjective class has the list of all ObjectivePrecondition objects where that learning objective is the source. Similarly, it contains the list of ObjectivePrecondition objects where it is the destination. The mapping of learning objects to learning objectives is modelled in the ObjectiveResource class. The class contains a reference to a learning object and to an appropriate learning objective.
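A minimal Java sketch of these model classes follows; the field names mirror Figure 3, while visibility and collection types are illustrative assumptions:

import java.util.ArrayList;
import java.util.List;

// A learning object: name, id, file path and descriptive metadata.
class Resource {
    String id, name, filePath;
    List<String> metadata = new ArrayList<>();
}

// Hierarchy plus precondition links between learning objectives.
class LearningObjective {
    String name;
    int level;
    LearningObjective parentObjective;
    List<LearningObjective> children = new ArrayList<>();
    List<ObjectivePrecondition> preconditions = new ArrayList<>();   // this objective is the source
    List<ObjectivePrecondition> postconditions = new ArrayList<>();  // this objective is the destination
}

// The "surmises" relation: source is a precondition for destination.
class ObjectivePrecondition {
    LearningObjective source, destination;
}

// Maps a learning object to an appropriate learning objective.
class ObjectiveResource {
    Resource resource;
    LearningObjective objective;
}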
Figure 3. The model of learning objectives and learning objects

3.3. Instructional design
Based on the XML scheme for describing the instructional design of the course, we have created a corresponding object model, shown in Figure 4. InstructionalDesignElement is a container class and it describes the organization of learning elements in the course. LearningElement is a generic learning element in the organization. Learning elements are hierarchically organized, so each learning element contains a reference to its parent learning element and a collection of child learning elements. Also, a generic LearningElement has a unique identifier represented with the elementId attribute. There are three different types of learning elements: element group, sequence and learning object. An element group is just a container for other learning elements. A sequence represents a chain of other elements. Its role is similar to the role of loop statements in programming languages. For each sequence, the system iterates through the elements specified with the element attribute. A learning object is a learning element on the lowest hierarchical level. It actually represents a concrete learning resource. For sequences and learning objects, we should define a specific strategy for selecting learning resources. This strategy is defined in the SelectionRule element. SelectionRule aggregates two lists of ObjectSelection elements. The first list contains objects that will be included in the course. The second one is for excluded objects. An object selection element specifies learning objects for inclusion or exclusion. Learning objects selection is done by specifying the values of their metadata attributes. Also, using the priority attribute we specify the order of included learning objects. A learning object may be used for evaluating the student's knowledge. In such a case, grading information is modelled using the Grading class. Besides the course structure, sometimes it is necessary to define relationships between learning elements. For example, in mastery learning a student can't proceed to the next lesson if he hasn't completed the previous one. So, we need to define the relationship between two lessons. These relationships are modelled using the ElementRelationship class. Learning elements which participate in the relationship are modelled using the ConditionElement class. Learning elements create the relationship only when a specific condition (modelled with the ElementJoin class) is satisfied. The ElementRelationship class contains the ifCondition reference which represents a condition for performing specific actions in the course.
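The element hierarchy and the selection strategy can be sketched in Java as follows; the class and attribute names follow the text and Figure 4, while the field placement and visibility are illustrative assumptions:

import java.util.ArrayList;
import java.util.List;

// Generic learning element: hierarchical, uniquely identified.
abstract class LearningElement {
    String elementId;
    LearningElement parent;
    List<LearningElement> elements = new ArrayList<>();
}

class ElementGroup extends LearningElement { }     // container for other elements

class Sequence extends LearningElement {           // iterates over its elements
    SelectionRule selectionRule;
}

class LearningObject extends LearningElement {     // a concrete learning resource
    SelectionRule selectionRule;
    Grading grading;                               // present when the object is graded
}

// Selects resources by metadata values; priority orders included objects.
class ObjectSelection {
    String metadataAttribute, metadataValue;
    int priority;
}

class SelectionRule {
    List<ObjectSelection> includes = new ArrayList<>();
    List<ObjectSelection> excludes = new ArrayList<>();
}

class Grading { /* grading information for evaluation objects */ }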

Figure 4. Instructional design model

Conditions are modelled using the ConditionExpression class. When the condition is satisfied, the specific actions defined in the thenAction list are performed. Otherwise, the actions defined in the elseAction list are performed. Actions are modelled using the ConditionAction class (see below).
ConditionExpression is an abstract class that represents a certain condition in the learning process. A condition may contain sub-conditions, which is represented with the childExpressions attribute. The ConditionExpression class has successors that represent concrete logical expressions (and, or, equals, ...). The successors override the method calculate that evaluates the value of the logical expression (true or false). The ConditionExpression class and its successors are shown in Figure 5.
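For illustration, a minimal sketch of this composite pattern is given below; the successor names are taken from Figure 5, while the method bodies are our illustrative assumptions:

import java.util.ArrayList;
import java.util.List;

// Composite logical condition; successors implement calculate().
abstract class ConditionExpression {
    List<ConditionExpression> childExpressions = new ArrayList<>();
    abstract boolean calculate();
}

class AndExpression extends ConditionExpression {
    @Override boolean calculate() {
        for (ConditionExpression child : childExpressions)
            if (!child.calculate()) return false;   // all children must hold
        return true;
    }
}

class OrExpression extends ConditionExpression {
    @Override boolean calculate() {
        for (ConditionExpression child : childExpressions)
            if (child.calculate()) return true;     // any child suffices
        return false;
    }
}

class NotExpression extends ConditionExpression {
    @Override boolean calculate() {
        return !childExpressions.get(0).calculate(); // negates its single child
    }
}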
Figure 5. The model of logical expressions in the instructional design description


ConditionAction is an abstract class that represents a certain action that should be done in the learning process. Two types of action are supported: showing a learning element (the ShowAction class) and hiding a learning element (the HideAction class). The ConditionAction class and its successors are shown in Figure 6.
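A corresponding sketch of the action hierarchy; the target field and the perform method are assumptions introduced for illustration:

// An action triggered by a condition: show or hide a learning element.
abstract class ConditionAction {
    LearningElement target;    // assumed: the element the action applies to
    abstract void perform();
}

class ShowAction extends ConditionAction {
    @Override void perform() { /* mark target as visible in the course */ }
}

class HideAction extends ConditionAction {
    @Override void perform() { /* mark target as hidden in the course */ }
}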
Figure 6. The model of course actions in the instructional design description

3.4. Learning activities
On the basis of the input parameters, our system creates learning activities in the course. The model of learning activities is shown in Figure 7. An abstract learning activity is represented with the LearningActivity class. Activities are hierarchically organized. Each activity has a reference to its parent activity (the parentActivity attribute) and a list of its child activities (the activities attribute). Grading is a distinct activity and is therefore represented with the GradingActivity class. Grading is always related to a standard learning activity (the learningActivity attribute), e.g. after the test (standard activity), the teacher gives the grades (grading activity). The relations among learning activities are modelled using the ActivityRelationship class. The relationship contains a list of activities that form the relation (the learningActivities attribute). When the ifCondition attribute is satisfied, the course performs the actions defined in the thenAction attribute. Otherwise, the actions specified in the elseAction attribute are performed.

Figure 7. The model of learning activities in the instructional design description

3.5. IMS LD support
The described classes represent a general model of an e-learning course and consequently they do not define any specific output format. In order to generate a course in a specific output format, it is necessary to create successors of the described classes. We have created successors that generate a course in the IMS LD format. For each learning element in the instructional design model, we have created an appropriate successor class (IMSLDElementGroup, IMSLDSequence and IMSLDLearningObject). Likewise, for conditions and actions, we have created successors that generate conditions and actions in the IMS LD format. All successors override the exportToXML method and create the XML nodes in the IMS LD manifest file. LearningActivity and ActivityRelationship are abstract classes and they represent activities in general. We have created the successors IMSLDLearningActivity and IMSLDActivityRelationship in order to generate activities in the IMS LD format. These classes export their attributes to the IMS LD format.
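As an illustration, the following sketch shows how an IMS LD successor might implement exportToXML using the standard DOM API; the element names follow the manifest excerpt in Listing 1, while the fields and the method signature are assumptions:

import org.w3c.dom.Document;
import org.w3c.dom.Element;

// IMS LD successor: exports a learning activity as manifest XML nodes.
class IMSLDLearningActivity /* extends LearningActivity */ {
    String identifier, title, itemRef;

    Element exportToXML(Document doc) {
        Element activity = doc.createElement("imsld:learning-activity");
        activity.setAttribute("identifier", identifier);

        Element titleEl = doc.createElement("imsld:title");
        titleEl.setTextContent(title);
        activity.appendChild(titleEl);

        Element description = doc.createElement("imsld:activity-description");
        Element item = doc.createElement("imsld:item");
        item.setAttribute("identifierref", itemRef);
        description.appendChild(item);
        activity.appendChild(description);
        return activity;   // appended to the manifest by the caller
    }
}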

4. SYSTEM OUTPUT
In this paper we have shown how to generate a course in the IMS LD format, but the model enables generating a course in any XML-based format. It is only necessary to implement appropriate successor classes that export the content to the desired format. So far, the system has been used for generating the e-course Numerical Algorithms and Numerical Software in Engineering at the Faculty of Technical Sciences in Novi Sad. We applied three instructional strategies and generated three versions of our course. A part of one of the generated IMS LD manifest files is shown in Listing 1.
<imsld:learning-activity identifier="Iterativni_postupci_1">
  <imsld:title>Iterativni postupci</imsld:title>
  <imsld:activity-description>
    <imsld:item identifierref="iterativni_postupci_res">
      <imsld:title>Iterativni postupci</imsld:title>
    </imsld:item>
  </imsld:activity-description>
</imsld:learning-activity>

Listing 1. A part of the generated IMS LD manifest file

The created course may be shown in any IMS LD player; we used the Reload LD Player [13]. A screenshot of the generated course is shown in Figure 8.

Figure 8. Screenshot of the generated course in the Reload LD Player

5. CONCLUSION
This paper describes the implementation of a system for the automatic generation of e-learning courses. The system architecture, object model and functionalities are presented. The model contains two layers. The first layer models an abstract course. The second one models a course in the concrete output format. Our system generates courses in the IMS LD format. The model contains the following subcomponents: the model of learning objectives and learning objects, the instructional design model and the model of learning activities. The system output is an IMS LD manifest file. The implementation is done in the Java programming language. By changing the input parameters, our system may generate different versions of an e-learning course. Still, the generated course is static: it contains a sequence of predefined learning activities. This could be taken as a drawback of our system. The system could be improved if the generated course chose the next learning activity in real time. A student would dynamically get a learning activity depending on various parameters (instructional design, student's knowledge state, personal preferences, ...). However, in this phase we have chosen to generate only a static course, because most of the popular e-learning systems and e-learning standards don't have support for dynamic courses. Another disadvantage still remaining in our system is that it doesn't consider a student model, so the generated course is not personalized. The system has been used at the Faculty of Technical Sciences in Novi Sad for generating the course Numerical Algorithms and Numerical Software in Engineering, which is presented as an illustrative example in this paper. In addition, we have generated the course Web programming. Our plan is to apply this e-course in the summer semester of 2011. Future work is concerned with the analysis of the data collected while the generated course is used in the teaching process. Our short-term goal is to find the most appropriate instructional strategy for the Numerical Algorithms and Numerical Software in Engineering course. Also, the plan is to create graphical tools for defining the input parameters. So far, we have created a graphical editor for defining the course instructional design, and we are implementing a graphical editor for learning objectives management. The long-term goal is to develop a system supporting dynamic courses and containing a student model as an input parameter. Using the student's knowledge state and personal preferences, the system would generate a personalized e-course.

6. REFERENCES
[1] Brusilovsky, P., Vassileva, J., Course sequencing techniques for large-scale web-based education, Int. J. Cont. Engineering Education and Lifelong Learning, Vol. 13, Nos. 1/2, 2003.
[2] Melis, E., Andres, E., Budenbender, J., Frischauf, A., Goguadze, G., Libbrecht, P., Pollet, M., Ullrich, C., ActiveMath: A Generic and Adaptive Web-Based Learning Environment, International Journal of Artificial Intelligence in Education, (12), 2001.
[3] Kettel, L., Thomson, J., Greer, J., Generating Individualized Hypermedia Applications, Proc. of Workshop on Adaptive and Intelligent Web-based Education Systems at the 5th International Conference on Intelligent Tutoring Systems, Montreal, Canada, 2000.
[4] Kontopoulos, E., Vrakas, D., Kokkoras, F., Bassiliades, N., Vlahavas, I., An ontology-based planning system for e-course generation, Expert Syst. Appl. 35, no. 1-2, 2008.
[5] Hernandez, J., Baldiris, S., Santos, O.C., Fabregat, R., Boticario, J. G., Conditional IMS learning design generation using user modeling and planning techniques, Proceedings of the 2009 Ninth IEEE International Conference on Advanced Learning Technologies, IEEE Computer Society, 2009.
[6] Capuano, N., Gaeta, M., Micarelli, A., Sangineto, E., An integrated architecture for automatic course generation, Proceedings of the IEEE International Conference on Advanced Learning Technologies (ICALT 02), 2002.
[7] Lluvia, M., Luis, C., Juan, F.-O., Arturo, G.-F., Automatic generation of user adapted learning designs: An AI-planning proposal, Proceedings of the 5th International Conference on Adaptive Hypermedia and Adaptive Web-Based Systems, Springer-Verlag, Hannover, Germany, 2008.
[8] Segedinac, M., Savić, G., Konjović, Z., Knowledge representation framework for curriculum development, International Conference on Knowledge Engineering and Ontology Development, Valencia, Spain, 2010.
[9] Tyler, R., Basic Principles of Curriculum and Instruction, Chicago: University of Chicago Press, 1949.
[10] W3C, OWL Web Ontology Language Overview, 2004.
[11] IMS, IMS Content Packaging Specification, 2007.
[12] IMS, IMS Learning Design Information Model, 2003.
[13] Reload, Learning Design Player v. 2.1.3, 2010, Internet site: www.reload.ac.uk/ldplayer.html

Acknowledgments
Results presented in this paper are part of the research conducted within the Grant No. III-43007, Ministry of Science and Technological Development of the Republic of Serbia.

PERFORMANCE PROFILING OF JAVA ENTERPRISE APPLICATIONS


Dušan Okanović, Milan Vidaković
Fakultet tehničkih nauka, Novi Sad
{oki, minja}@uns.ac.rs

Abstract - Continuous monitoring of an application under production workload provides more valuable data than the information obtained using profilers and debuggers. This paper presents a solution obtained by integrating the Kieker framework into the JBoss application server. We used this solution for profiling a distributed Java EE application deployed on multiple JBoss application servers. The recorded data was analyzed using data mining techniques and the R programming language.

Keywords: Kieker, JMX, continuous monitoring, R programming language, Java EE

1. INTRODUCTION
To determine whether the quality of service and service level agreements are at a satisfactory level, it is necessary to monitor software in its operational stage. This is important since software testing, debugging and profiling in the development phase are not able to detect errors and unpredicted events that can occur once the software is deployed. While new, previously unknown errors can show up, it is also a common phenomenon that software performance and quality of service degrade over time [1], i.e. there can be a significant difference in performance between the development phase and the production phase. Continuous monitoring of software is a technique that provides an image of the dynamic behavior of software under real conditions, but often results in a large amount of data. The analysis of the obtained data is a very important step in the process of reconstruction of software behavior. The main contribution of this paper is a method for distributed enterprise Java application profiling. We used the monitoring and analysis framework Kieker [1] for continuous monitoring of distributed enterprise Java (Java EE) applications. We created additional JMX [2] components that allow changing monitoring parameters during the monitoring process (during operation, we can disable monitoring of parts of the application to reduce overhead, or enable it to obtain more information), and we used data mining techniques and the R programming language for data processing. The rest of this paper is structured as follows. Section 2 provides an overview of related work in the field of performance monitoring. Section 3 presents the Kieker framework. Our extension enabling adaptive monitoring is described in Section 4. Section 5 presents the R programming language. Section 6 presents the extensions that we have developed, as well as the configuration and experimental results obtained for monitoring the test application deployed on both single and dual JBoss application server configurations. Section 7 draws the conclusions and outlines future work.

2. RELATED WORK
A study presented by Snatzke [3] indicates that performance is considered critical, but developers usually fail to use monitoring tools. In practice, application-level monitoring tools, and especially open-source tools, are rarely used. The reasons for this are usually time constraints (during development) and resource constraints (e.g. performance degradation) during application use. Developers usually limit themselves to profilers and debuggers during development. Apart from Kieker, which is used in this paper and described in Section 3, there are several other systems that are used for profiling and monitoring of JEE applications. For example, JBossProfiler [4] is a tool based on the JVMTI [5] and JVMPI [6] APIs. It is used to monitor applications deployed on the JBoss application server.
The use of the JVMTI/JVMPI APIs gives very precise results, but with significant overhead. Also, in order to change or extend this tool, knowledge of C/C++ is required. COMPAS JEEM [7] inserts software probes during the application startup. The probes can be inserted into each layer of a JEE application (EJB, Servlet, etc.). The advantage of this approach is that there is no need for application source code changes. However, a drawback of this approach is the fact that different probes must be defined for each application layer. The system shown by Briand et al. [8] is used for reconstructing UML sequence diagrams from JEE applications. The instrumentation is performed using AspectJ, as is the case for Kieker. The system is limited to diagram generation. It is not suitable for continuous monitoring and it is not able to monitor web services, only RMI. This overview shows the lack of tools (especially non-commercial open-source tools) that allow continuous and reconfigurable monitoring of JEE applications with low overhead. Also, the data analysis, if it exists, is often limited to the generation of diagrams.

3. KIEKER FRAMEWORK
Kieker [1] is a framework for continuous monitoring and analysis of all types of Java applications. It consists of the Kieker.Monitoring and the Kieker.Analysis components. The Kieker.Monitoring component collects and stores monitoring data. The Kieker.Analysis component performs analysis and visualization of the collected monitoring data. The architecture of the Kieker framework is depicted in Figure 1.

Figure 1. Kieker framework component diagram (from [1])

The Kieker.Monitoring component is executed on the same computer where the monitored application is being run. This component collects data during the execution of the monitored applications. A Monitoring Probe is a software sensor that is inserted into the monitored application and takes various measurements. For example, Kieker includes probes to monitor control-flow and timing information of method executions. Monitoring Log Writers store the collected data, in the form of Monitoring Records, in the Monitoring Log. The framework is distributed with multiple Monitoring Log Writers that can store Monitoring Records in a file system, database, or JMS queue. Additionally, users can implement and use their own writers. A Monitoring Controller controls the work of this part of the framework. The data in the Monitoring Log can be analyzed by the Kieker.Analysis component. A Monitoring Log Reader reads records from the Monitoring Log and forwards them to Analysis Plugins. Analysis Plugins may, for example, analyze and visualize the gathered data. Control of all components in this part of the Kieker framework is performed by the Analysis Controller component. Program instrumentation in the Kieker framework is usually performed using aspect-oriented programming (AOP) [9]. This way, the developer can separate program logic from monitoring logic (separation of concerns). Kieker can monitor every method in every class, or only designated ones (methods annotated with annotations). Users can use Kieker's aspects and annotations or write their own. Kieker uses the AspectJ framework [10].
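For illustration, a user-defined aspect in this style could look as follows; the @Monitored annotation and the advice body are our assumptions for the sketch and are not part of Kieker itself:

import java.lang.annotation.*;
import org.aspectj.lang.ProceedingJoinPoint;
import org.aspectj.lang.annotation.Around;
import org.aspectj.lang.annotation.Aspect;

// A hypothetical user-defined marker annotation for monitored methods.
@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.METHOD)
@interface Monitored { }

// Measures the execution time of all methods annotated with @Monitored.
@Aspect
public class CustomMonitoringAspect {
    @Around("execution(* *(..)) && @annotation(Monitored)")
    public Object measure(ProceedingJoinPoint pjp) throws Throwable {
        long start = System.nanoTime();
        try {
            return pjp.proceed();                       // run the monitored method
        } finally {
            long duration = System.nanoTime() - start;
            // in a real probe, the measurement would go to a Monitoring Log Writer
            System.out.printf("%s took %d ns%n", pjp.getSignature(), duration);
        }
    }
}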

4. KIEKER EXTENSION FOR ADAPTIVE MONITORING
The framework extension for adaptive monitoring and its configuration for use with the JBoss server was done by implementing a new MonitoringLogWriter and by adding new JMX components. The deployment diagram of this system is depicted in Figure 2.

Figure 2. Deployment diagram of the system
DProfWriter is a new monitoring log writer which stores all records into a special buffer, the ResultBuffer. Kieker's Monitoring Controller is configured to use the DProfWriter. The ResultBuffer is implemented as a JMX MBean and relies on the JBoss microkernel infrastructure. The DProfWriter sends records to the ResultBuffer through the MBeanServer. The buffer sends data to a Record Receiver service running on a remote server. Data can be sent periodically in bulks or as soon as they arrive into the buffer (as is the case with the Kieker synchronous writers). This remote service stores records into the database for further analysis. Essentially, in this case, the combination of the buffer, the service and the database constitutes Kieker's Monitoring Log. The DProfManager component, implemented as a JMX MBean, is used to control the monitoring process. It controls the ResultBuffer and an AspectController component. The AspectController component is used to change the weaving parameters defined in the AspectJ configuration file (aop.xml). It is implemented as a JMX MBean, too. The AspectController can access the monitored application's aop.xml file, parse it, change parameters and save the changes in the application archive (jar/war/ear) file. This will change the timestamp of the archive file, which will cause the application server to redeploy the application, causing the re-weaving of the application with the Kieker aspects.
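A minimal sketch of the record-forwarding idea behind DProfWriter is shown below. The writer callback name, the "add" operation and the ResultBuffer ObjectName are assumptions for illustration; the exact writer interface depends on the Kieker version in use:

import java.lang.management.ManagementFactory;
import javax.management.MBeanServer;
import javax.management.ObjectName;

// Sketch of a writer in the spirit of DProfWriter: instead of writing to a
// file, database or JMS queue, it forwards each record to the ResultBuffer MBean.
public class DProfWriter {
    private final MBeanServer server = ManagementFactory.getPlatformMBeanServer();
    private final ObjectName resultBuffer;

    public DProfWriter() throws Exception {
        resultBuffer = new ObjectName("dprof:service=ResultBuffer"); // assumed name
    }

    // Called by the Monitoring Controller for every collected record.
    public boolean newMonitoringRecord(Object record) {
        try {
            server.invoke(resultBuffer, "add",
                    new Object[] { record },
                    new String[] { Object.class.getName() });
            return true;
        } catch (Exception e) {
            return false;   // the record could not be buffered
        }
    }
}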

Loss of session and breaking of running transactions can occur, but these are not within the scope of this paper. Also, if there is no aop.xml file inside the application archive, the AspectController can create one. The communication through the MBeanServer may seem to cause increased performance lag and overhead, but, since all these actions are performed within one Java virtual machine, this overhead is lower than the overhead caused by, for example, storing records into the database. On the receiving side, the Record Analyzer component analyzes the records contained in the database. Depending on its configuration, it chooses new monitoring parameters. These parameters are then sent to the DProfManager for a reconfiguration of monitoring. It is important to state that the user can manually change monitoring parameters using any JMX console application.

5. R-PROJECT
R-project is a part of the GNU project and its source code is freely available under the GNU General Public License. The R programming language and software environment [11] is a standard for statistical computing and graphics. It is used for the development of statistical software and for data analysis. R provides a wide variety of statistical and graphical techniques, including linear and nonlinear modeling, classical statistical tests, time-series analysis, classification, clustering, and others [12]. The capabilities of R are extended through user-submitted packages, which provide specialized statistical techniques, graphical devices, as well as import/export capabilities for many external data formats. A core set of packages is included with the installation of R, while other packages are available from the Comprehensive R Archive Network (CRAN). The extremevalues package [13] for R provides functions for anomaly detection (i.e. outlier detection) [14]. This package is based on [15].

6. TEST CONFIGURATION AND EXPERIMENT RESULTS
The use of the Kieker framework for monitoring of JEE applications will be demonstrated using the software configuration management (SCM) application described in [16]. SCM is a JEE application responsible for tracking applications and application versions. We deployed SCM on a cluster of JBoss 5.1.0 servers [17]. The mod_jk [18] module (version 1.2) for the Apache 2.2 [19] server was used to enable load balancing.

6.1 SCM Application
The application is implemented using the Enterprise JavaBeans (EJB) [20] technology. Entity EJBs are used in the O/R mapping layer. They are accessed through stateless session EJBs (SLSB), modeled according to the façade design pattern [21]. SLSBs are annotated to work as JAX-WS [22] web services as well.

The application client is a Java Swing [23] application which uses web services to access the application. Listing 1 shows an excerpt of the CityFacade class. The checkZipCode() method is used to check the validity of the provided zip code. This method is annotated with Kieker's @OperationExecutionMonitoringProbe. Other method definitions from this class are omitted from this listing, but are also annotated with the @OperationExecutionMonitoringProbe annotation. The CityFacadeService remote interface is omitted since it contains only method declarations. Listing 2 shows an excerpt of the City entity EJB class. Other entity and session EJBs in this system are similar.

@Stateless
public class CityFacade implements CityFacadeService {
    @OperationExecutionMonitoringProbe
    public void checkZipCode(String zipCode) {
        // zip code check
        // ...
    }
    // other methods and attributes...
}

Listing 1. Stateless session EJB CityFacade class

@Entity
public class City {
    long id;
    String name, zipCode;

    @OperationExecutionMonitoringProbe
    @Id
    public long getId() {
        return id;
    }
    // other methods and attributes...
}

Listing 2. Test application entity EJB City class

The testing will be conducted by repeatedly invoking the methods of the façade SLSBs. The test client consists of a multitude of threads which invoke randomly chosen methods (with random delays between calls). These invocations are supposed to simulate the production workload of many users accessing the test application. The generated data will be used for program performance analysis.
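The test client can be sketched as follows; the Facade interface, its listVersions operation and the connect() stub are illustrative stand-ins for the JAX-WS proxies of the SCM façades:

import java.util.Random;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

// Load-generating client: many threads call randomly chosen façade
// methods with random delays, simulating production workload.
public class TestClient {
    interface Facade {                       // assumed operations
        void checkZipCode(String zip);
        void listVersions();
    }

    public static void main(String[] args) {
        Facade facade = connect();
        ExecutorService pool = Executors.newFixedThreadPool(50);
        for (int i = 0; i < 50; i++) {
            pool.submit(() -> {
                Random rnd = new Random();
                while (!Thread.currentThread().isInterrupted()) {
                    try {
                        if (rnd.nextBoolean()) facade.checkZipCode("21000");
                        else facade.listVersions();
                        Thread.sleep(rnd.nextInt(500));  // random delay between calls
                    } catch (InterruptedException e) {
                        Thread.currentThread().interrupt();
                    }
                }
            });
        }
    }

    // Stand-in for the JAX-WS proxy lookup.
    private static Facade connect() {
        return new Facade() {
            public void checkZipCode(String zip) { /* remote call */ }
            public void listVersions() { /* remote call */ }
        };
    }
}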

6.2 Data Analysis
Records generated by the software probes are stored in the database during application execution. The first look at the data (Figure 3) shows that there are several extreme values that differ from the other data by several orders of magnitude. They can be considered measurement errors and should be eliminated before further analysis. The data has been analyzed using the R programming language and the extremevalues package. We used this package to detect and remove all the data that is statistically distant from the rest of the data, i.e. to detect and remove outliers.

Figure 3. Recorded execution times

The method implemented in this package determines outliers for a given set of values in two steps. First, the distribution of the observed data is approximated using regression of the observed values. Based on this distribution and the given values, the value above which less than a certain number of values is expected is determined. Outlier values are those that fall outside these boundaries. The part of the R script used for outlier detection and removal is shown in Listing 3.

# detect right-tail outliers using the extremevalues package
library(extremevalues)
data1 = read.table(file="1.txt", header=FALSE)[,1]
outliers1 = getOutliers(data1, method="I")$iRight
# replace outliers with NA and drop them
toClean1 = data1
toClean1[outliers1] = NA
cleandata1 = toClean1[!is.na(toClean1)]
write(cleandata1, "cleaned1.txt", ncolumns=1)

Listing 3. R script for outlier detection and removal

After the cleanup, the data has been used to generate the following workload diagrams (Figures 4 and 5).

Figure 4. Distribution of execution times on a single server configuration

Figure 5. Distribution of execution times on a dual server configuration

The diagrams above display the normal distributions of the execution time of one method call (CityFacade.checkZipCode()) in a single and a dual application server configuration. The mean method execution time is ~63 µs for the single server configuration, while the mean method execution time for the dual server configuration is ~44 µs. The responsiveness of the system is better when the configuration of two application servers is used, by the expected margin. Also, the standard deviation of the results for the dual server configuration is lower, which means that the dual server configuration gives more consistent performance under heavy load, which is a very important quality-of-service issue [24].

7. CONCLUSION
This paper presented one solution for the profiling of distributed enterprise Java applications. The profiling is based on continuous monitoring, which is performed using the Kieker framework and custom JMX components. The test application has been deployed on a cluster of JBoss application servers with the Apache server (using the mod_jk module) as a load balancer. The system was used for monitoring a software configuration management application (SCM) which was implemented using EJB and web-services technologies. To enable reconfigurable monitoring, a new component (a new monitoring log writer) has been developed. The use of additional components, implemented using the JMX technology, allows for the development of a reconfigurable application monitoring system. During the monitoring, it is possible to change monitoring parameters and obtain more precise results. Changing of the monitoring parameters can be performed using any JMX console application. The analysis has been performed using the R programming language. Future work will focus on the design and implementation of the control component for adaptive monitoring. This component will automate monitoring process adaptation (reconfiguration of monitoring parameters) based on the gathered data and appropriate data analysis algorithms, whose development will also be the subject of further research.

8. REFERENCES
[1] van Hoorn, A., Rohr, M., Hasselbring, W., Waller, J., Ehlers, J., Frey, S., Kieselhorst, D., Continuous Monitoring of Software Services: Design and Application of the Kieker Framework, TR-0921, Department of Computer Science, University of Kiel, Germany, 2009.
[2] Flury, M., Lindfors, J., JMX: Managing J2EE with Java Management Extensions, Sams, 2002.
[3] Snatzke, R. G., Performance survey 2008, http://www.codecentric.de/export/sites/www/resources/pdf/performance-survey-2008-web.pdf
[4] JBossProfiler, http://jboss.org/jbossprofiler
[5] Java Virtual Machine Tool Interface, http://download.oracle.com/javase/6/docs/platform/jvmti/jvmti.html

[6] Java Virtual Machine Profiler Interface, http://download.oracle.com/javase/1.4.2/docs/guide/jvmpi/jvmpi.html
[7] Parsons, T., Mos, A., Murphy, J., Non-intrusive end-to-end run-time path tracing for J2EE systems, IEEE Proceedings-Software, vol. 153, no. 4, p. 149-161, IEEE, 2006.
[8] Briand, L. C., Labiche, Y., Leduc, J., Toward the reverse engineering of UML sequence diagrams for distributed Java software, IEEE Transactions on Software Engineering, vol. 32(9), p. 642-663, IEEE Press, 2006.
[9] Kiczales, G., Lamping, J., Mendhekar, A., Maeda, C., Videira Lopes, C., Loingtier, J.-M., Irwin, J., Aspect-Oriented Programming, Proceedings of ECOOP, p. 220-242, Springer-Verlag, London, UK, 1997.
[10] AspectJ, http://www.eclipse.org/aspectj/
[11] R-project, http://www.r-project.org/
[12] Fox, J., Andersen, R., Using the R Statistical Computing Environment to Teach Social Statistics Courses, Department of Sociology, McMaster University, http://www.unt.edu/rss/Teaching-with-R.pdf (retrieved on 26.1.2011.)
[13] extremevalues: Univariate Outlier Detection, http://cran.r-project.org/web/packages/extremevalues/
[14] Barnett, V., Lewis, T., Outliers in Statistical Data, 3rd edition, John Wiley & Sons, 1994.
[15] van der Loo, M. P. J., Distribution based outlier detection for univariate data, Technical Report 10003, Statistics Netherlands, The Hague, 2010.
[16] Okanović, D., Vidaković, M., One Implementation of the System for Application Version Tracking and Automatic Updating, SE2008, p. 62-67, ACTA Press, USA, 2008.
[17] JBoss AS 5.1.0, http://www.jboss.org/jbossas/
[18] The Apache Tomcat Connector, http://tomcat.apache.org/connectors-doc/
[19] Apache HTTP Server Project, http://httpd.apache.org/
[20] EJB 3.0, http://java.sun.com/products/ejb/
[21] Gamma, E., Helm, R., Johnson, R., Vlissides, J. M., Design Patterns: Elements of Reusable Object-Oriented Software, Addison-Wesley Professional, 1994.
[22] Kalin, M., Java Web Services: Up and Running, O'Reilly Media, 2009.
[23] Java Swing, http://java.sun.com/javase/6/docs/technotes/guides/swing
[24] Domingo, R. T., Consistency in Service Quality, http://www.rtdonline.com/BMA/CSM/9.html (retrieved on 27.1.2011.)

Acknowledgments
Results presented in this paper are part of the research conducted within the Grant No. III-44010, Ministry of Science and Technological Development of the Republic of Serbia.
