Romain Gary: The Man Who Sold His Shadow (Critical Authors and Issues)


The Apex Book includes stories from twelve countries, including two each from France, Israel, China, and the Philippines.


Seven of the sixteen stories are translated from other languages, including Hebrew and Chinese (two stories each). The novel was very well received by both the public and the critics and was nominated for the Bachmann Prize, one of Europe's most important literary prizes. Born in Al-Habbaniyah, Iraq, he left his country in ... He is the assistant editor of Banipal magazine, and creator of www. His first novel, An Iraqi in Paris, was published in ... He also edited the forthcoming ... I felt what he had to say about Enrigue was important and well thought out, the sort of analysis that good critics aspire to.

So this year I went to see what Rigoberto had to say. Ben Okri is a handsome man, not tall, with an intense ... She was recovering from hip surgery, and was not eager to devote herself ... The book centers around her relationship with Robert Mapplethorpe, whom she met almost as soon as she arrived in New York City at the age of twenty, and it ends with his death from AIDS at the age of forty-two. It is an extraordinary book -- in its emotional honesty, the beauty of its language, and its willingness to face difficult questions ... Saigon was my birthplace, and thousands of bits of old firecrackers covered the soil in red as if they were petals from a cherry tree, or the blood of two million soldiers, scattered through the towns and villages of a Vietnam torn in two.

I was born in the shadows of skies embroidered with ... He paused briefly, then continued in the same self-assured tone of gentle pleasantry: My God, I suppose the inverse is probably true as well. He stopped himself again. But, he said, lowering his voice just slightly, we have yet to hear the opinion of Professor Berlingieri. The insult was so unexpected and brutal that numerous eyes from both sides of the ... She came out from the liquor store in Majorstua, the bottles pushed down into a worn brown bag, and I sensed shame; shame is the only word I can use -- shame.

Georg and I had been to the Frogner Baths. But we were walking on air -- by heck were we -- we were world champs. Some girls from the same year at school had been standing by the ... He watches her walk back to the house, he sees her fumble with the key. Nikolaj turns and reaches down to grab the pacifier, but it scoots away underneath the front seat.

The latter were the elite. Only elegant, fair-skinned, well-behaved girls got to wear white. On the top tier, suspended only by the Virgin ... Zan is prone to epileptic fits; she is also, as a form of resistance to the conformity around her, publicly promiscuous. It all started at the zoo: the smell of the zoo, the nervous excitability as we stepped off the minibus.

It can be used in individual offices, laboratories or in hospital clinics and departments. Coz, R. Heradio Gil, J. Cerrada Somolinos and J. This paper highlights the benefits, in terms of quality, productivity and time-to-market, of applying a generative approach to raise the abstraction level at which applications based on the notification of database changes are built. Most database management systems maintain meta-tables with information about all stored tables; this information is used in an automatic process to define the software product line (SPL) variability.

The remaining variability can be specified by means of domain specific languages. Code generators can automatically query the meta-tables, analyze the input specifications and configure the current product. The paper also introduces the Exemplar Driven Development process to incrementally develop code generators and the Exemplar Flexibilization Language that supports the process implementation. Given the business globalization, complete integration is a major goal of information resource management.
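A minimal sketch of this meta-table-driven generation, using SQLite's sqlite_master as the meta-table (the table, trigger and log names are illustrative assumptions, not the authors' actual tooling):

```python
import sqlite3

def table_metadata(conn):
    """Read the meta-table (sqlite_master here) to discover stored tables
    and their columns -- the variability points of the product line."""
    tables = {}
    for (name,) in conn.execute(
            "SELECT name FROM sqlite_master WHERE type='table'"):
        cols = [row[1] for row in conn.execute(f"PRAGMA table_info({name})")]
        tables[name] = cols
    return tables

def generate_notifier(table, columns):
    """Emit a change-notification trigger for one table, i.e. one
    configured product of the line."""
    payload = " || ',' || ".join(f"NEW.{c}" for c in columns)
    return (f"CREATE TRIGGER notify_{table}_change AFTER UPDATE ON {table} "
            f"BEGIN INSERT INTO change_log VALUES ('{table}', {payload}); END;")

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER, status TEXT)")
meta = table_metadata(conn)
sql = generate_notifier("orders", meta["orders"])
```

A real generator would emit code in the product line's target language; the point is that the variability (tables and columns) is read from the database itself rather than specified by hand.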

The applications and data are combined into integrated entities providing not only access to information, but also internal and external economic process management. The first case refers to internal organizational processes and is achieved with ERP packages. External integration concerns the connection of customers and the supply chain to the organizational environment for the performance of economic processes, and it cannot be achieved without internal information coherence (an ERP system). In the last decade, ERP has continued to expand, blurring the boundaries of the core system.

The number of modules and the extended functionality offered in ERP suites have progressively grown, making integration a greater challenge for the enterprise. Also, wireless applications provide new opportunities for organizations, enabling access to relevant information from anywhere and at any time. In order to take advantage of the features of a ubiquitous environment, ERP systems have to support the mobile behaviour of their users.

The present paper is an exploratory analysis of the current state of mobile applications and services for companies and of achievements in the field, and proposes an architecture model for mobile services starting from the functionalities identified as necessary for a portal of mobile services. Besides the general architecture of a portal of mobile applications for companies, a set of minimal functionalities for implementation is proposed in order to ensure the promotion and use of the services.

Molina, I. Paredes, M. Argotte and N. In this paper the integration of a traditional training system with a competence management model is conceptually described. The resulting e-system is accessed through powerful Web interfaces and contains a comprehensive database that maintains information on the two models.

The traditional model emphasizes workers' contractual training rights, while the skills model emphasizes the alignment of human talent with the mission and objectives of the company. The paper describes the specific traditional training model of CFE (Federal Commission of Electricity), its competences model, and the integration of the two models following a thematic-contents approach.

Software organizations are constantly looking for better solutions when designing and using well-defined software processes for the development of their products and services. However, many software development processes offer little support for project management issues. This work proposes a model that integrates the concepts of the PMBOK with those available in RUP, supporting not only process integration but also assisting managers in decision making during project planning.

We present the model and the results of a qualitative exploratory evaluation of a tool that implements the proposed model, conducted with project managers from nine companies. Catherine Equey, Rob J. Kusters, Sacha Varone and Nicolas Montandon. Based on the sparse literature investigating the cost of ERP systems implementation, our research uses data from a survey of Swiss SMEs that have implemented ERP in order to test cost drivers.

The main innovation is the proposition of a new classification of cost drivers that depend on the enterprise itself rather than on the ERP. Particular attention is given to consulting fees as a major factor of implementation cost, and a new major cost driver has come to light, to which ERP implementation project managers must pay close attention. The satisfiability problem of queries is an important determinant in query optimization. Applying a satisfiability test can avoid the submission and unnecessary evaluation of unsatisfiable queries, and thus save processing time and query costs.

If an XPath query does not conform to the constraints in a given schema, or the constraints of an XPath query itself are inconsistent with each other, the evaluation of the query will return an empty result for any valid XML document, and thus the query is unsatisfiable. Therefore, we propose a schema-based approach to filtering XPath queries not conforming to the constraints in the schema and XPath queries with conflicting constraints.

We present a complexity analysis of our approach, which proves that it is efficient in typical cases. We also present an experimental analysis of our prototype, which shows the optimization potential of avoiding the evaluation of unsatisfiable queries. Currently, most companies have computational systems that support their operational routines.
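To illustrate what a query with conflicting constraints looks like, here is a small sketch (not the authors' algorithm) that catches one simple class of unsatisfiable XPath queries: a step whose predicates bind the same attribute to two different constant values.

```python
import re

def conflicting_predicates(xpath):
    """Detect one simple class of unsatisfiable XPath queries: a location
    step whose predicates constrain the same attribute to two different
    constants, e.g. //book[@year='2001'][@year='2002']."""
    for step in xpath.split('/'):
        seen = {}
        for attr, val in re.findall(r"\[@(\w+)='([^']*)'\]", step):
            if attr in seen and seen[attr] != val:
                return True        # the step can never match any element
            seen[attr] = val
    return False
```

A full satisfiability test would also check the query against the schema's structural constraints; this sketch only covers self-contradictory predicates.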

Many times, those systems are not integrated, generating duplicated and inconsistent information. This situation makes it difficult to find the necessary, trustworthy information for decision-making. Data warehousing and data mining technologies have appeared to solve that type of problem. Many existing solutions do not cover both technologies; some are directed at the construction of a data warehouse and others at the application of data mining techniques.

There are also OLAP tools that do not cover data preparation activities.


This paper presents a reference architecture and a software architecture that define the components necessary for implementing knowledge discovery in databases systems, including data warehouse and data mining activities. Standards of best practices in project management and software development were used from definition through implementation. An interesting difference between testing and the other disciplines of the software development process is that it is a task that essentially identifies and exposes the weaknesses of the software product.

Four relevant elements are considered when defining tests, namely reliability, cost, time and quality. Time and cost increase to the extent that reliable tests and quality software are desired, but what does it take to make actors understand that tests should be seen as a safety net? If quality is not there before the tests start, it will not be there upon their completion.

Accordingly, how can we lay out a trace between tests and the functional and non-functional requirements of the software system? This initiative originated as a response to the request of a software development company in the Venezuelan public sector. One of the drawbacks of e-learning methods such as Web-based submission and evaluation of students' papers and essays is that it has become easier for students to plagiarize the work of other people. In this paper we present a computer-based system for discovering similar documents, which has been in use at Masaryk University in Brno since August , and which will also be used in the forthcoming Czech national archive of graduate theses.

We also focus on practical aspects of this system: achieving near real-time response to newly imported documents, and the computational feasibility of handling large sets of documents on commodity hardware. We also show the possibilities of and problems with parallelizing this system to run on a distributed cluster of computers. Enterprises are constantly struggling to optimize their business processes in order to gain competitive advantage and to survive in the fast evolving global market.
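Document-similarity systems of this kind are commonly built on shingling; as an illustrative sketch (the Masaryk system's actual method is not described in this summary), two documents can be compared via the Jaccard coefficient of their k-word shingle sets:

```python
def shingles(text, k=3):
    """The set of k-word shingles of a document; overlap between shingle
    sets is the similarity signal used to flag near-duplicate submissions."""
    words = text.lower().split()
    return {" ".join(words[i:i + k]) for i in range(len(words) - k + 1)}

def jaccard(a, b):
    """Jaccard similarity of two shingle sets: |intersection| / |union|."""
    return len(a & b) / len(a | b) if a | b else 0.0

doc1 = "students must submit original essays for evaluation"
doc2 = "students must submit original essays for grading"
sim = jaccard(shingles(doc1), shingles(doc2))   # high overlap -> suspicious
```

Near real-time response at scale is then a matter of indexing shingles (e.g. an inverted index or MinHash), so that a new document is compared only against candidates that share shingles with it.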

Often, the only ones who understand the nature and complexity of these processes are the people who actually execute them. This raises the need for novel business process management approaches that enable business users to proactively express process knowledge and to participate in business process management and design according to their actual expertise and problem-solving strategies. This paper describes an architecture that supports a framework for end-user-driven composition and management of underspecified, human-centric business processes.

The solution builds on email-integrated task management and enables the dynamic generation of decentrally emerging process structures through web service-based activity tracking. The captured process execution examples are shared in central enterprise repositories for further adaptation and reuse. The number of telecommunication operators servicing mobile users world-wide has increased dramatically in the last few years. Although most operators use similar technologies and equipment provided by world leaders in the field such as Ericsson, Nokia-Siemens, Motorola, etc., many vendors use proprietary methods and processes for maintaining network status and collecting statistical data for detailed monitoring of network elements.

This data forms their competitive differentiation and is hence extremely valuable to the organization. However, in this paper we demonstrate, through a case study based on a GSM operator in Iran, how this mission-critical data can be fraught with serious data quality problems, leading to a diminished capacity to take appropriate action and ultimately to achieve customer satisfaction.

We further present a taxonomy of data quality problems derived from the case study. A comprehensive survey of the literature on data quality is presented in the context of the taxonomy, which can not only be used as a framework to classify and understand data quality problems in the telecommunication domain but can also be applied to other domains with similar information systems landscapes. Understanding software requirements and customer needs is vital for all software companies around the world. Lately, much more attention has also been focused on the costs, cost-effectiveness, productivity and value of software development and products.

This study outlines the concepts, principles and process of implementing value assessment for software requirements. Its main purpose is to collect experience on whether value assessment for product requirements is useful for companies and works in practice, and what the strengths and weaknesses of using it are. This is done by implementing value assessment in a case company step by step, to see which phases work in practice and which do not.

The practical industrial case shows that the proposed value assessment for product requirements is useful and supports companies trying to find value in their products. With the growing importance of XML in data exchange, much research has been done on providing flexible query mechanisms to extract data from XML documents. A core operation in XML query processing is finding all occurrences of a twig pattern Q (a small tree) in a document T. Prior work has typically decomposed Q into binary structural relationships, such as parent-child and ancestor-descendant relations, or into root-to-leaf paths.

Twig matching is then achieved by (i) matching the binary relationships or paths against the XML database, and (ii) using join algorithms to stitch together all the matching binary relationships or paths. In this paper, we propose a new algorithm for this task with no costly path joins or join-like operations involved. Our experiments show that our method is efficient in supporting twig pattern queries. E-commerce has become a point of strength for companies that want to increase their revenue by enlarging their client base and reducing management costs.
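A toy sketch of holistic twig matching on an in-memory tree, avoiding explicit path joins (illustrative only; the paper's actual algorithm is not reproduced here). Nodes are (tag, children) tuples, and each branch of the twig must match some descendant:

```python
def matches(node, pattern):
    """True if the subtree rooted at `node` matches the twig `pattern`:
    the tags agree and every subpattern matches somewhere below."""
    tag, children = node
    ptag, subpatterns = pattern
    if tag != ptag:
        return False
    return all(any(matches_descendant(c, sp) for c in children)
               for sp in subpatterns)

def matches_descendant(node, pattern):
    """True if `pattern` matches `node` or any of its descendants
    (the ancestor-descendant axis of the twig)."""
    if matches(node, pattern):
        return True
    return any(matches_descendant(c, pattern) for c in node[1])

# //book with both a 'title' and an 'author' descendant
book = ("book", [("title", []), ("info", [("author", [])])])
twig = ("book", [("title", []), ("author", [])])
```

Decomposition-based methods would match the paths book/title and book//author separately and join the results; here the whole pattern is checked in one traversal.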

This has given rise to a demand for platforms able to support interoperability between heterogeneous systems and multi-channel access from varied devices, so that different services can be accessed reliably and the market can spread toward partners with particular needs. Furthermore, many available services have typically been designed for a single channel (the web). In a real-world scenario, an ever-growing number of users take advantage of different kinds of communication channels and devices.

In this paper we propose a B2B-oriented framework able to support interoperability among heterogeneous systems developed according to the ebXML reference model for business message interchange, suitable for any B2B marketplace that foresees commercial interaction among partners with different roles and profiles, including channel and device. Consequently, developers tend to focus more on the user interface aspects and less on business-related code. The key concept behind our approach is the generation of a concrete graphical user interface from a source-code-based model, which includes the original source code metadata and non-intrusive declarative language extensions that describe the user interface structure.

The concrete user interface implementation is delegated to specialized software packages, developed by external entities, that provide complete graphical user interface services to the application. It is therefore important to provide appropriate methods, techniques and tools to support the maintenance phase of the software life cycle. One major maintenance task is the analysis and validation of change impacts. Existing approaches address change impact analysis, but using them in practice raises specific problems. Tools for change impact analysis must be able to deal with inconsistent requirements and design models, with large legacy systems, and with systems that are distributed across data processing centers.

We have developed an approach and an evaluation framework to overcome these problems. The proposed approach combines methods of dynamic dependency analysis and change coupling analysis to detect physical and logical dependencies. The implementation of the approach - a framework consisting of methods, techniques and tools - will support both management and developers.

The goal is to detect low-level artefacts and dependencies based only on up-to-date and system-conform data, including log files, the service repository, the versioning system database and the change management system database. With the assistance of a data warehouse, the framework will enable dynamic querying and reporting. During the design phase of a chemical plant, information is typically created by various software tools and stored in different documents and databases. Unfortunately, the further processing of the data is often hindered by structural, syntactic and semantic heterogeneities of the data sources.

In fact, the merging and consolidation of the data becomes virtually prohibitive when exclusively conventional database technologies are employed. Therefore, XML technologies as well as specific domain ontologies are increasingly applied in the context of data integration. Both ontology and software development are performed in close cooperation with partners from the chemical and software industries to ensure compliance with the requirements of industrial practice.

Recently, some researchers have started to analyze the impact on business performance of the organizational changes that complement IT investments. We have collected information on Spanish SMEs during the period , concerning the type of ERP purchased, implementation period, number of employees, personnel costs and some financial indicators. Our preliminary findings suggest that the bigger the SME, the smaller the decrease in its number of employees. On the other hand, ERP has a positive impact on personnel costs. This tendency for personnel costs to increase can be explained by the fact that SMEs using an ERP system need people not only with specific operative skills but also with a very holistic approach, in order to understand the ERP system and obtain maximum benefit from it.

Over the past decade, many organizations have been increasingly concerned with the implementation of Enterprise Resource Planning (ERP) systems. This holds for large companies as well as small and medium-sized ones. Implementation can be considered a process of change influenced by so-called critical success factors (CSFs) of an organizational, technological and human nature.

Critical success factors are derived from project goals and subsequently measured in this project to monitor and control the implementation project. Bringing sense-and-respond capabilities to business intelligence systems for decision making will become essential for organizations in the foreseeable future. An existing challenge is that organizations need to make business processes the centrepiece of their strategy, to enable the processes to perform at a higher level and to efficiently improve them in the face of global competition.

Traditional data warehouse and OLAP tools, which have been used for data analysis in Business Intelligence (BI) systems, are inadequate for delivering information fast enough to make decisions and to identify failures of a business process early. In this paper we propose a closed-loop BI framework that can be used for monitoring and analyzing a business process of an organization, optimizing the business process, and reporting costs based on activities.

Business Activity Monitoring (BAM), as the data resource of a control system, is at the heart of this framework. Furthermore, to support such a BI system, we integrate an extracting, transforming, and loading (ETL) tool that works based on rules and the state of business process activities.

The tool can automatically transfer data into a data warehouse when the rule and state conditions have been satisfied. The petroleum industry is a technically challenging business with highly specialized companies and complex operational structures. Several terminological standards have been introduced over the last few years, though they address particular disciplines and cannot help people collaborate efficiently across disciplines and organizational borders. This paper discusses the results of the industrially driven Integrated Information Platform project, which has developed and formalized an extensive OWL ontology for the Norwegian petroleum business.
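The rule-and-state-driven loading step can be sketched as follows (the rule shape, activity names and event fields are assumptions for illustration, not the paper's implementation):

```python
def run_etl(events, rules, warehouse):
    """Rule- and state-driven loading: an event is transferred to the
    warehouse only when a rule for its activity exists, the activity is
    in the required state, and the rule's condition holds."""
    for event in events:
        for rule in rules:
            if (event["activity"] == rule["activity"]
                    and event["state"] == rule["required_state"]
                    and rule["condition"](event)):
                warehouse.append({"activity": event["activity"],
                                  "value": event["value"]})

warehouse = []
rules = [{"activity": "ship_order", "required_state": "completed",
          "condition": lambda e: e["value"] > 0}]
events = [{"activity": "ship_order", "state": "completed", "value": 120},
          {"activity": "ship_order", "state": "running", "value": 80}]
run_etl(events, rules, warehouse)   # only the completed event is loaded
```

In a BAM setting the events would arrive continuously from process monitoring rather than from a list, but the gating logic is the same.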

The ontology is now used in production reports, and it is considered vital to semantic interoperability and the concept of integrated operations on the Norwegian continental shelf. A Data Warehouse (DW) is a database used for analytical processing whose principal objective is to maintain and analyze historical data (Kimball, R.).

Since the introduction of the multidimensional data model as a modelling formalism for DW design, several techniques have been proposed to capture multidimensional data at the conceptual level. In this paper, we present a goal-oriented method for eliciting DW analysis requirements. The paper shows how goal modelling contributes to a logical scoping and analysis of the application domain to elicit the information requirements, from which the conceptual multidimensional schema is derived.

When an organisation decides to invest in a software project, it expects to get some value in return. Thus, decisions in software project management should be based on this expected value, by trying to understand and influence its driving factors. This paper contributes to a view of software project management based on business value, by identifying value-determinant factors in a software project and proposing some tools for recording and monitoring them.

The proposed approach will be tested in a real project, in order to evaluate its applicability and usefulness in decision-making. A study of 27 ERP systems in the Queensland Government revealed 41 issues clustered into seven major issue categories. Two of these categories described intra- and inter-organisational knowledge-related issues. This paper describes and discusses the intra-organisational knowledge issues arising from this research.

These intra-organisational issues include insufficient knowledge in the user base, ineffective staff and knowledge retention strategies, inadequate training methods and management, inadequate helpdesk knowledge resources, and finally an under-resourced helpdesk. When barriers arise in knowledge flows from sources such as implementation partner staff, training materials, trainers, and helpdesk staff, issues such as those reported in this paper arise in the ERP lifecycle.

The majority of information systems are affected by heterogeneity in both data and solutions. The use of this data thus becomes complex, inefficient and expensive in business applications.


The issues of data integration, data storage, and the design and exchange of models are strongly linked. The need to federate data sources and to use a standard modelling formalism is apparent.


In this paper, we focus on mediation solutions based on an XML architecture. The integration of the heterogeneous data sources is achieved by defining a pivot model. This model uses the XML Schema standard, allowing the definition of complex data structures. We introduce features of the UML formalism, through a profile, to facilitate the collaborative definition and exchange of these models, and to introduce the capacity to express semantic constraints in XML models.

These constraints will be used to perform data factorisation and to optimise data operations. Collaborative systems have to support specific functionalities in order to be useful for particular fields of application and to fulfil their requirements. In this paper we introduce the Wasabi framework for collaborative systems, which is characterised by flexibility and adaptability.
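Semantic constraints of the kind the pivot model expresses, i.e. conditions beyond structural XML Schema validation, can be sketched as executable checks over an instance document (the element and attribute names here are illustrative, not from the paper):

```python
import xml.etree.ElementTree as ET

def check_constraints(xml_text, constraints):
    """Evaluate semantic constraints against an instance document of the
    pivot model. Each constraint is (path, predicate, message); every
    element found at `path` must satisfy `predicate`."""
    root = ET.fromstring(xml_text)
    violations = []
    for path, predicate, message in constraints:
        for elem in root.iterfind(path):
            if not predicate(elem):
                violations.append(message)
    return violations

doc = "<order><line qty='3'/><line qty='-1'/></order>"
constraints = [(".//line", lambda e: int(e.get("qty")) > 0,
                "line quantity must be positive")]
errors = check_constraints(doc, constraints)
```

XML Schema alone would accept both lines (qty is a well-formed integer in each); the semantic layer is what rejects the negative quantity.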

The framework implements a service-oriented architecture and integrates different persistence layers. The requirement analysis for the Wasabi CSCW system is presented in the context of a collaborative environment for medical research, which has strict requirements concerning data integrity. This paper shows the results of the requirement analysis and how these are implemented in the Wasabi architecture. Content management systems (CMS) have evolved in various different ways. Even amongst CMS supporting JSR, the recent Java standard for organising and accessing content repositories, incompatible content structures exist due to substantial differences in the implemented content models.

This can be of primary concern when migration between CMS is required. This paper proposes a framework to enable automated migrations between CMS in the absence of consistency and uniformity in content structures. This framework serves as the starting point of a body of research to design a common content model which can be implemented in future CMS to facilitate migration between CMS based on JSR and improve integration into existing information systems environments.

A model-based approach towards a generalised content structure is postulated to resolve the differences between the proprietary content structures, as identified in the visualisation of the simple website created. The proposed model has been implemented in Jackrabbit, the JSR reference implementation, and the proposed framework therefore contains simple methods to transform content structures between Magnolia and Alfresco using this Jackrabbit implementation as an intermediary. Current specification tools for ECA rules include visual specification tools and textual specification tools based on XML.

Thus, a specification tool combining the advantages of both visual representation and XML-based representation is needed. We also use a web-based smart home system to evaluate our work. Enabling the data produced by product-embedded sensor devices for use in product development could greatly benefit manufacturers, while opening up new business opportunities. Currently, products such as cars already have embedded sensor devices, but the data is usually not available for analysis in real time. We propose that a world-wide, inter-organizational network for product data gathering should be created.
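Returning to the ECA rules discussed above: the event-condition-action pattern itself can be sketched in a few lines (the smart-home rule is an invented example, not from the evaluated system):

```python
class EcaRule:
    """A minimal event-condition-action rule: when the named event
    occurs and the condition holds on the context, the action fires."""
    def __init__(self, event, condition, action):
        self.event = event
        self.condition = condition
        self.action = action

    def handle(self, event, context):
        if event == self.event and self.condition(context):
            return self.action(context)
        return None

# Illustrative smart-home rule: on a temperature reading above 30 C,
# switch the fan on.
log = []
rule = EcaRule("temperature_reading",
               condition=lambda ctx: ctx["celsius"] > 30,
               action=lambda ctx: log.append("fan_on"))
rule.handle("temperature_reading", {"celsius": 32})   # condition holds
rule.handle("temperature_reading", {"celsius": 25})   # condition fails
```

A visual tool draws the event/condition/action triple as boxes; an XML-based tool serialises the same triple, which is why a combined representation is attractive.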

The network should be based on open standards so that it can be widely adopted. It is important that a common, interoperable solution is accepted by all companies, big or small, to enable innovative new services to be developed. In this paper, the concept of the Internet of Things (IoT) is described. The PROMISE project is presented, and a distributed messaging system for product data gathering developed within the project is introduced. Practical experiences related to the implementation of the messaging system in a real application scenario are discussed. Interoperability requires two components: technical integration and information integration.

Most enterprises have solved the problem of technical integration, but at the moment they are struggling with information integration. The challenge in information integration is to preserve the meaning of information in different contexts. Semantic technologies can provide means for information integration by representing the meaning of information. This paper shows how to use semantics by developing ontology models based on enterprise information. Different ontology models from diverse sources and applications can be mapped together in order to provide an integrated view of the different information sources.
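The mapping of ontology models into an integrated view can be illustrated with a deliberately tiny sketch (the term names and the mapping table are invented; real ontologies would use OWL/RDF rather than dictionaries):

```python
# Two small "ontologies": term -> parent concept, from different applications.
plant_app = {"Pump": "Equipment", "Valve": "Equipment"}
maint_app = {"pump_unit": "Asset", "valve_unit": "Asset"}

# A mapping table aligning the maintenance application's terms with the
# plant application's terms (assumed, illustrative).
mapping = {"pump_unit": "Pump", "valve_unit": "Valve"}

def integrated_view(term):
    """Resolve a term from either application to the plant model's
    shared concept, going through the mapping when needed."""
    term = mapping.get(term, term)
    return plant_app.get(term)
```

The point is that neither source model changes; the mapping layer alone provides the integrated view, which is how the paper's ontology-mapping approach preserves meaning across contexts.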

Furthermore, this paper describes the process of ontology development and mapping. The domain of this case study is a heavy industrial environment with multiple applications and data sources. Metadata are essential in a Knowledge Discovery in Databases (KDD) environment, since they are responsible for the whole documentation of the information on the data that make up a data warehouse (DW), the latter being used to store data about the organization's business. Such data usually come from several data sources, so the metadata format should be platform-independent.

The manager was implemented in Java, which provides support for the model proposed here. Each software entity should have as high a quality as possible within the context of limited resources. A software quality metric is a kind of software entity. Existing studies on the evaluation of software metrics do not pay enough attention to the quality of the specifications of the metrics.

Semiotics has been used as a basis for evaluating the quality of different types of software entities. In this paper, we propose a multidimensional, semiotic quality framework for software quality metrics. We apply this framework to evaluate the syntactic and semantic quality of two sets of database design metrics. The evaluation shows that these metrics have some quality problems. With increasingly distributed and inhomogeneous resources, sharing knowledge, information, or data becomes more and more difficult to manage for both end-users and providers.

To reduce administrative overheads and to ease the complicated and time-consuming integration of widely dispersed data resources, quite a few solutions for collaborative data sharing and access have been designed and introduced in several European research projects, for example CoSpaces and ViroLab. These two projects concentrate on the development of collaborative working environments for different user communities, such as engineering teams and health professionals, with a particular focus on the integration of heterogeneous and large data resources into the system's infrastructure.

In this paper, we present two approaches, realised within CoSpaces and ViroLab, to overcome the difficulties of integrating multiple data resources and making them accessible in a user-friendly but also secure way. We start with an analysis of the systems' specifications, describing user and provider requirements for appropriate solutions. Finally, we conclude with an outlook and give some recommendations on how those systems can be further enhanced in order to guarantee a certain level of dynamicity, scalability, reliability, and, last but not least, security and trustworthiness.

These components may be developed in-house or bought from other vendors. In the latter case, the source code of the components is usually not available to application developers. As a result, the application may contain malicious components. The framework examines the bean methods invoked by each thread in an application and compares them with pre-defined business functions to check whether the latest calls of the threads are legitimate.

Unexpected calls, which are considered to be made by malicious components, are blocked. The notion of a Configuration Fragment is adopted to help address the challenge of managing the different kinds of dependencies that exist during the evolution of component-based and service-oriented systems. Based upon a model of Architectural Change and an example application-specific context, Configuration Fragments are defined in order to express and reconcile change properties with respect to existing system properties.

This occurs through the process of configuration leading to association, disassociation, or refinement of these system elements. In recent years, data warehouse applications have become more and more popular. Achieving a high quality of data in data warehouses is a persistent challenge. Among the tasks of readying data, data cleaning is crucial. To deal with this problem, a set of methods and tools has been developed.

However, at least two questions still need to be answered: how to improve efficiency when performing data cleaning, and how to improve its degree of automation. This paper addresses these two questions by presenting a novel framework that manages data cleaning in data warehouses by focusing on the use of data quality dimensions and by decoupling a cleaning process into several sub-processes.
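The decoupling idea can be illustrated with a minimal sketch (the step names, record layout, and quality dimensions below are invented for illustration and are not taken from the paper): each sub-process targets one data-quality dimension and the framework simply chains them.

```python
# Hypothetical sketch: a cleaning process decoupled into sub-processes,
# one per data-quality dimension (completeness, uniqueness, ...).

def completeness_step(rows):
    """Drop records with missing mandatory fields (completeness dimension)."""
    return [r for r in rows if all(v is not None for v in r.values())]

def uniqueness_step(rows):
    """Remove exact duplicate records (uniqueness dimension)."""
    seen, out = set(), []
    for r in rows:
        key = tuple(sorted(r.items()))
        if key not in seen:
            seen.add(key)
            out.append(r)
    return out

def run_pipeline(rows, steps):
    """Chain the sub-processes; each one can be tuned or swapped independently."""
    for step in steps:
        rows = step(rows)
    return rows

dirty = [
    {"id": 1, "name": "Ann"},
    {"id": 1, "name": "Ann"},   # duplicate record
    {"id": 2, "name": None},    # incomplete record
]
clean = run_pipeline(dirty, [completeness_step, uniqueness_step])
```

Keeping each dimension in its own function is what makes the process easy to parallelise or reconfigure, which is the efficiency argument the framework rests on.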

An initial test run of the processes in the framework demonstrates that the presented approach is efficient and scalable for data cleaning in data warehouses. In this paper we describe DeVisa, a Web system for the scoring and management of data mining models. The system has been designed to provide unified access to different prediction models using standard XML-based technologies.

The system provides functions such as scoring, model comparison, model selection and sequencing through a web service interface. The paper analyzes the system's architecture and functionality and discusses its use as a tool for researchers. While the early cited benefits of Enterprise Resource Planning (ERP) or enterprise systems remain for the most part highly desirable, it is often the case that the promise differs from the delivered reality.

Many now agree that achieving enterprise systems benefits is complex, cumbersome, risky and expensive. Furthermore, many ERP projects do not fully achieve expectations. The study reveals a rich picture of implementation motivators, inhibitors and the perceived and real benefits of enterprise systems. In a previous work in the context of information retrieval, XQuery was extended with an iterative paradigm. This extension helps the user obtain the desired results from queries.

In this paper the proposal is introduced and justified, and a case study is presented. Daniela F. Brauner, Alexandre Gazola, Marco A. Casanova and Karin K. This paper proposes an approach and a mediator architecture for adaptively matching the export schemas of database web services. Differently from traditional mediator approaches, the mediated schema is constructed from mappings adaptively elicited from user query responses.

That is, query results are postprocessed to identify reliable mappings and build the mediated schema on the fly. The approach is illustrated with two case studies from rather different application domains. Current legislation demands that organizations responsibly manage sensitive data. To achieve compliance, data auditing must be implemented in information systems.

In this paper we propose a data auditing architecture that creates data audit reports out of simple audit events at the technical level. We use complex event processing technology (CET) to obtain composed audit events out of simple audit events. In two scenarios we show how complex audit events can be built for business processes and application users when one database user is shared between many application users, as found in multi-tier architectures.
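The multi-tier scenario can be sketched as follows. This is not the paper's implementation; the event fields and the correlation rule (join a shared database user's actions to whichever application session was active) are invented to show the composition idea.

```python
# Illustrative sketch: compose complex audit events by correlating
# database accesses made under one shared db user ("app_pool") with the
# application session that was logged in at the time.

simple_events = [
    {"t": 1, "kind": "app_login",  "app_user": "alice", "session": "s1"},
    {"t": 2, "kind": "db_access",  "db_user": "app_pool", "session": "s1",
     "table": "salaries"},
    {"t": 3, "kind": "app_logout", "app_user": "alice", "session": "s1"},
]

def compose_audit(events):
    """Map db accesses made via a shared db user back to application users."""
    session_user = {}      # session id -> currently logged-in app user
    complex_events = []
    for e in sorted(events, key=lambda e: e["t"]):
        if e["kind"] == "app_login":
            session_user[e["session"]] = e["app_user"]
        elif e["kind"] == "db_access":
            complex_events.append({
                "t": e["t"],
                "app_user": session_user.get(e["session"], "unknown"),
                "table": e["table"],
            })
        elif e["kind"] == "app_logout":
            session_user.pop(e["session"], None)
    return complex_events

report = compose_audit(simple_events)
```

The point of the composition step is that the audit report names "alice", not the anonymous pooled database account.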

Schema evolution keeps only the current data and the schema version that results from applying schema changes. By contrast, schema versioning creates new schema versions while preserving old schema versions and their corresponding data. These two techniques have been investigated widely, both in the context of static and of temporal databases. With the growing interest in XML and temporal XML data, as well as in the mechanisms for holding such data, the XML context within which data items are formatted also becomes an issue. While much research has recently focused on the problem of schema evolution in XML databases, less attention has been devoted to schema versioning in such databases.

In this paper, we propose an approach for schema versioning in multi-temporal XML databases. At the dawn of the 21st century, companies are seeking ways to perform transactions efficiently and effectively. Enterprises must tackle B2B integration and adoption challenges in the short term in order to survive in today's competitive business environment. However, most enterprises, and especially SMEs, lack the necessary technical and non-technical infrastructure, as well as the economic potential, to efficiently adopt a B2B integration framework.

This paper presents a methodological approach towards measuring the B2B integration readiness of enterprises, and the development of a software system to support it. Nowadays, information systems are more and more important for all types of organizations. To deal with the complex technologies available, IT specialists use proven best practices inspired by more comprehensive process frameworks for software and systems delivery or implementation and for effective project management.

Methods developed to support these processes produce many heterogeneous resources: design documents and models, planning documents, project prototypes, etc. Furthermore, information systems need to follow, and adapt to, a continuously changing reality. Designers will always have to consider new user and stakeholder requirements and go back to the starting design case for an update. The design cycle is therefore iterative. In this paper we present an organizational and technical infrastructure for a collaborative design process management system, which automates mechanisms that assure the coherence and consistency of these continuously updated resources.

Our approach uses document structuring, knowledge representation, and mechanisms for dependency analysis and impact studies. Reliable and accurate software cost estimation has always been a challenge, especially for people involved in project resource management. The challenge is amplified by the high level of complexity and uniqueness of the software process. The majority of proposed estimation methods fail to produce successful cost forecasts, and do not resolve into an explicit, measurable and concise set of factors affecting productivity.

Throughout the software cost estimation literature, software size is usually proposed as one of the most important attributes affecting effort, and it is used to build cost models. This paper aspires to provide size- and effort-based estimations of the required software effort based on data from past historical projects. The obtained optimal artificial neural network (ANN) topologies and input methods for each dataset are presented, discussed and compared to a classic multiple linear regression (MLR) model.
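The regression baseline that such studies compare against can be sketched in a few lines. The project data below are invented, and the model is deliberately reduced to a single size variable; it is only meant to show the size-to-effort fitting idea, not the paper's actual datasets or ANN setup.

```python
# Minimal sketch: fit effort = a*size + b by least squares on
# hypothetical historical project data (the classic regression baseline).

def fit_linear(sizes, efforts):
    """Ordinary least squares for one predictor variable."""
    n = len(sizes)
    mx = sum(sizes) / n
    my = sum(efforts) / n
    a = (sum((x - mx) * (y - my) for x, y in zip(sizes, efforts))
         / sum((x - mx) ** 2 for x in sizes))
    b = my - a * mx
    return a, b

# Invented past projects: (size in KLOC, effort in person-months)
sizes   = [10, 20, 30, 40]
efforts = [25, 45, 65, 85]      # lies exactly on effort = 2*size + 5

a, b = fit_linear(sizes, efforts)
predicted = a * 25 + b          # estimated effort for a 25 KLOC project
```

An ANN-based estimator plays the same role as `fit_linear` here, but can capture the non-linear size/effort relationships that a straight line misses.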

RFID technology can be used to its fullest potential only with software that supplements the hardware with powerful capabilities for data capture, filtering, counting and storage. The EPCglobal Network architecture encourages minimizing the amount of business logic embedded in the tags, readers and middleware. This creates the need for a Business Logic Layer, above the event filtering layer, that enhances basic observation events with business context. The purpose of this project is to develop an implementation of the Business Logic Layer. This application accepts observation event data.

The strength of the application lies in the automatic addition of business context. It is quick and easy to adapt any business process to the suggested framework, and equally easy to reconfigure it if the business process changes. A sample application has been developed for a business scenario in the retail sector. In peer-to-peer (P2P) systems, files from the same application domain are spread over the network.

When the user poses a query, the processing relies mainly on the flooding technique, which is quite inefficient from an optimization point of view. To address this issue, our work proposes clustering documents from the same application domain into super peers. Thus, files related to the same universe of discourse are grouped, and query processing is restricted to a subset of the network. The clustering task involves ontology generation, document and ontology matching, and metadata management. The proposed mechanism implements the ontology manager in DetVX, an environment for detecting, managing and querying replicas and versions in a P2P context.
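The routing benefit over flooding can be sketched minimally. The peer names, domains, and files below are invented; the point is only that a query touches the peers under one super peer instead of the whole network.

```python
# Illustrative sketch of super-peer query routing: documents from the same
# application domain are clustered under one super peer, so a query is
# forwarded only to that subset of peers instead of flooding all of them.

super_peers = {
    "medicine": {"peerA": ["asthma.pdf"], "peerB": ["flu_notes.txt"]},
    "music":    {"peerC": ["jazz.mp3"]},
}

def route_query(domain, keyword):
    """Search only the peers clustered under the matching super peer."""
    hits = []
    for peer, files in super_peers.get(domain, {}).items():
        hits += [(peer, f) for f in files if keyword in f]
    return hits
```

A query for "flu" in the medicine domain never reaches peerC, which is the optimization flooding cannot provide.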

Vidal, Fernando C. Lemos, Valdiana S. In this work we study the problem of how to incrementally maintain materialized XML views of relational data, based on the semantic mappings that model the relationship between the source and view schemas. The semantic mappings are specified by a set of correspondence assertions, which are simple to understand. The paper focuses on an algorithm that incrementally maintains materialized XML views of relational data.
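The core idea, patching only the view fragment that a source update maps to, can be sketched as follows. The assertion format, table, and path here are invented stand-ins, not the paper's formalism.

```python
# Hypothetical sketch of incremental view maintenance: a correspondence
# assertion maps a relational (table, column) to a location in the XML
# view, so a source update patches only the affected element instead of
# recomputing the whole view.

import xml.etree.ElementTree as ET

# invented correspondence assertions: (table, column) -> view path template
assertions = {("customer", "name"): "customer[@id='{id}']/name"}

view = ET.fromstring(
    "<view><customer id='1'><name>Ann</name></customer></view>")

def on_source_update(table, column, row_id, new_value):
    """Propagate one relational update into the materialized XML view."""
    path = assertions[(table, column)].format(id=row_id)
    node = view.find(path)
    if node is not None:          # maintain only the mapped fragment
        node.text = new_value

on_source_update("customer", "name", 1, "Anna")
```

Because the assertion pinpoints the target element, the cost of maintenance is proportional to the change, not to the size of the view.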

One of the most complex issues of the integration and transformation interface is the case where there are multiple sources for a single data element in the enterprise Data Warehouse (DW). There are many facets, due to the number of variables needed in the integration phase. We are interested in the temporal and spatial integration problem, owing to the nature of DWs. This paper presents our ontology-based DW architecture for temporal integration on the basis of the temporal and spatial properties of the data and the temporal characteristics of the data sources.

The proposal shows the steps for transforming the native schemes of the data sources into the DW scheme and the end-user scheme, and the use of an ontology model as the common data model. The quality of a data mart (DM) depends tightly on the quality of its multidimensional model. Currently proposed constraints are either incomplete or informally presented, which may lead to ambiguous interpretations.

The work presented in this paper is a first step towards the definition of a formal framework for the specification and verification of the quality of DM schemas. In this framework, quality is expressed in terms of both the syntactic well-formedness of the DM schema and its semantic soundness with respect to the DM instances. More precisely, this paper first formalizes in Z the constraints pertinent to the hierarchy concept; the formalization is treated at the meta-model level. Data replication is often used in distributed systems to improve both the availability and the performance of applications accessing data.

This is interesting for distributed real-time database systems, since additional data access possibilities can help transactions meet their deadlines. However, such systems must ensure that data copies remain consistent. To achieve this goal, distributed systems manage replication by implementing replication control protocols. We introduce a new entity called the List of Available Copies (LAC), a list associated with each data item in the database. The LAC of a data item contains references to all, or a part, of the updated replicas of this data item. These references are used by sites in order to access data at the appropriate sites.

RT-RCP ensures data updates without affecting system performance; it allows inconsistencies to happen, but prevents access to stale data.
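The LAC idea can be sketched as a per-item set of fresh replica sites. This is a simplification invented for illustration (site names and operations are not from the paper, and the real-time aspects of RT-RCP are omitted): writes shrink the LAC to the sites that received the update, and reads consult only the LAC.

```python
# Sketch of the List of Available Copies (LAC): for each data item, the
# set of replica sites known to hold an up-to-date copy. Readers are
# directed to a fresh replica, so stale copies are never accessed.

lac = {"x": {"site1", "site2", "site3"}}   # all replicas initially fresh

def write(item, updated_sites):
    """After an update reaches only some sites, shrink the LAC to them."""
    lac[item] = set(updated_sites)

def read_site(item):
    """Return one site holding a fresh copy of the item, or None."""
    copies = lac.get(item, set())
    return sorted(copies)[0] if copies else None

write("x", ["site2"])        # the update propagated to site2 only
fresh = read_site("x")       # site1/site3 are now stale and are avoided
```

Inconsistency between replicas is tolerated; what the LAC guarantees is that no transaction ever reads from a site outside the fresh set.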

Carlo A. Curino, Hyun J. Moon, Letizia Tanca and Carlo Zaniolo. Evolving the database at the core of an information system represents a difficult maintenance problem that has only been studied in the framework of traditional information systems. However, the problem is likely to be even more severe in web information systems, where open-source software is often developed through the contributions and collaboration of many groups and individuals. Therefore, in this paper, we present an in-depth analysis of the evolution history of the Wikipedia database and its schema; Wikipedia is the best-known example of a large family of web information systems built using the open-source software MediaWiki.

Our study is based on: (i) a set of Schema Modification Operators that provide a simple conceptual representation for complex schema changes, and (ii) simple software tools to automate the analysis. This framework allowed us to dissect and analyze the 4. Beyond confirming the initial hunch about the severity of the problem, our analysis suggests the need for better methods and tools to support graceful schema evolution. Therefore, we briefly discuss documentation and automation support systems for database evolution, and suggest that the Wikipedia case study can provide the kernel of a benchmark for testing and improving such systems.
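A Schema Modification Operator can be thought of as a small record applied to a schema, so an evolution history becomes a replayable log. The operator names below follow common SMO vocabulary, but the exact encoding is an assumption made for illustration, not the paper's definition.

```python
# Illustrative encoding of Schema Modification Operators (SMOs): a schema
# is a mapping table -> column list, and each operator is a tuple that
# transforms it. Replaying the log reconstructs any historical version.

def apply_smo(schema, op):
    """Apply one operator, returning a new schema (the input is not mutated)."""
    schema = {t: list(cols) for t, cols in schema.items()}
    if op[0] == "CREATE_TABLE":
        _, table, cols = op
        schema[table] = list(cols)
    elif op[0] == "ADD_COLUMN":
        _, table, col = op
        schema[table].append(col)
    elif op[0] == "DROP_COLUMN":
        _, table, col = op
        schema[table].remove(col)
    return schema

# invented fragment of an evolution history
history = [
    ("CREATE_TABLE", "page", ["id", "title"]),
    ("ADD_COLUMN", "page", "is_redirect"),
    ("DROP_COLUMN", "page", "title"),
]

schema = {}
for op in history:
    schema = apply_smo(schema, op)
```

Representing changes this way is what makes automated analysis possible: counting operator kinds over a long history immediately shows which change patterns dominate.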

Enterprise Resource Planning (ERP) systems have transformed the way organizations go about providing information systems. They promise an off-the-shelf solution to the information needs of organizations. Despite that promise, implementation projects are plagued with much-publicized failures and abandoned projects. Efforts to make ERP systems successful in organizations face challenges. The purpose of the study reported in this paper was to investigate, from the consultants' point of view, the challenges faced by organisations implementing ERP systems in Kenya.

Based on the factors identified in the interviews, a survey was administered to ERP consultants from five Kenyan organisations identified as having a key role in ERP systems implementation, in order to assess the criticality of the identified challenges. A factor analysis of these items identified six underlying dimensions. The findings of this study should give the management of firms implementing ERP systems a better understanding of the challenges they are likely to face, so that they can put appropriate measures in place to mitigate the risk of implementation failure.

In this paper we describe a collaboration study between two companies in a networked organisation. The main contribution is the connector view, by which it is possible to model the collaboration without major changes to existing enterprise models, although the collaboration may actually affect several elements of the original model.

Supporting objects are used to connect elements in the connector view to the original model, thereby establishing correspondences between the connector view and the enterprise view. RoboCup is a scientific and educational international project that involves artificial intelligence, robotics and sport sciences. In its competitions, teams from all around the world participate in distinct leagues. At the beginning of the Coach competition, one of the RoboCup leagues, the goal of researchers was to develop a coach agent that advises teammates on how to act in order to improve team performance.

Using the resulting improved coach agent, with enhanced statistics calculation abilities, a huge amount of statistical data was gathered from the games held at Bremen. According to the results, the team that represented our country had many more goal opportunities than the majority of the teams, but did not score many goals. In terms of the most occupied regions, the best four teams in the tournament occupied the left and right wings less often than other regions.

In the future, our country's team needs to develop new strategies that preferentially use these two areas in order to achieve better results. The easy production of organisational reports that present information to management is an important goal of computerised information systems. This paper presents a new paradigm for information management, called the motion picture paradigm.

The paper presents the key concepts developed for the new paradigm and then demonstrates that it can be realised through a comprehensive framework for the multi-dimensional management of information in complex domains, using existing information technologies. Preliminary experience was obtained involving the computerised management of clinical practice guidelines and electronic healthcare record information. This experience reveals that the motion picture paradigm facilitates, at any time, an easy and comprehensive review of information in a way that allows developments to be grasped easily, and enhances the possibility of detecting hidden trends and generating ground-breaking questions.

This work presents a study on the handling of multiple spatio-temporal granularities in data warehouses, or Multidimensional Databases (MDB). The possibility of storing spatial data with multiple granularities in databases allows us to study these data under multiple representations and clarifies the understanding of the subject of data analysis. This paper presents a conceptual multidimensional model called FactEntity (FE) for modelling MDB; this model adds new definitions, constructors, and hierarchical structures to deal with spatial and temporal multi-granularities under the multidimensional paradigm.

In addition, the FE model defines some new concepts, such as basic factEntities and virtual factEntities, and the way data are derived in order to make up these virtual factEntities. This study distinguishes two types of spatial granularity, which we call geometric granularity and semantic granularity; to handle them, three new types of hierarchies are proposed: dynamic, static and hybrid. We analyze the behaviour of spatial data with multiple granularities interacting with other spatial and thematic data.

No existing multidimensional model allows the gathering of as much semantics as our model proposes.

Enterprise information systems integration is essential for organizations to fulfil interoperability requirements between applications and business processes. Traditional software development methodologies are not suitable for carrying out the most typical integration requirements; neither are enterprise package implementation methodologies. Thus, specific ad-hoc methodologies are needed for information systems integration. This paper proposes a new methodology for enterprise information systems integration that facilitates continuous learning and centralized management during the whole integration process.

This methodology has been developed based on the integration experience gained in a real case, which is briefly described. A review of the architecture of such systems is currently motivated by the wide availability of networking resources, especially the Internet, through which the cost of communication among the nodes of a distributed database can be reduced.

This paper presents a review of the classical distributed database management architecture from a technological standpoint, suggesting its use in the context of spatial data infrastructures (SDI). The paper also proposes the adoption of elements from service-oriented architectures for implementing the connections among distributed database components, thus configuring a service-oriented distributed database architecture.

The administrators and designers of modern information systems face the problem of maintaining their systems in the presence of frequently occurring changes in any part of them. Hence, it is imperative that the whole process be done correctly. In this paper, we deal with the problem of evolution in the context of databases. First, we present a coherent, graph-based framework for capturing the effect of potential changes in the database software of an information system. Next, we describe a generic annotation policy for database evolution and propose a feasible and powerful extension to the SQL language specifically tailored for the management of evolution.
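A graph-based framework of this kind can be sketched as a dependency graph with transitive impact propagation. The artifact names and dependency edges below are invented, and this is only the general traversal idea, not the paper's specific framework.

```python
# Hypothetical sketch: database constructs and the software artifacts that
# use them form a directed dependency graph; when a construct changes, the
# affected artifacts are found by transitively following dependencies.

# edges: dependent artifact -> constructs it uses (all names invented)
depends_on = {
    "report_query": ["emp_table.salary"],
    "payroll_view": ["emp_table.salary", "emp_table.id"],
    "ui_form":      ["payroll_view"],
}

def impacted_by(changed):
    """All artifacts transitively affected by changing one construct."""
    affected, frontier = set(), {changed}
    while frontier:
        nxt = set()
        for dep, uses in depends_on.items():
            if dep not in affected and frontier & set(uses):
                affected.add(dep)
                nxt.add(dep)
        frontier = nxt
    return affected

hit = impacted_by("emp_table.salary")   # report, view, and the UI on top of it
```

The annotation policy in the paper can be seen as attaching per-node rules (block, propagate, adapt) to exactly such a graph, so that each traversal step knows how the change should be handled.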

Finally, we demonstrate the efficiency and feasibility of our approach through a case study based on a real-world situation that occurred in the Greek public sector. Due to the constant need to access and store information, there is a persistent concern with implementing these functionalities in a large share of currently developed applications. Most of these applications use the Data Access Object (DAO) pattern to implement those functionalities, since this pattern makes it possible to separate data access code from application code.
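The separation the DAO pattern provides can be sketched minimally. The class names and in-memory store below are invented for illustration (the paper's evaluation is in an OO/AO setting, not necessarily this language):

```python
# Minimal sketch of the Data Access Object pattern: persistence code is
# isolated in a DAO behind a narrow interface, so business objects never
# touch storage details directly.

class UserDao:
    """All data-access code lives here."""
    def __init__(self):
        self._store = {}            # stands in for a real database

    def save(self, user_id, name):
        self._store[user_id] = name

    def find(self, user_id):
        return self._store.get(user_id)

class RegistrationService:
    """Business object: depends only on the DAO interface."""
    def __init__(self, dao):
        self._dao = dao

    def register(self, user_id, name):
        if self._dao.find(user_id) is not None:
            raise ValueError("user already exists")
        self._dao.save(user_id, name)

dao = UserDao()
svc = RegistrationService(dao)
svc.register(1, "Ann")
```

The coupling problem the paper targets arises when business objects like `RegistrationService` call the DAO directly; an aspect-oriented implementation moves those calls out of the business code entirely.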

However, its implementation exposes the data access object to the other application objects, causing situations in which a business object accesses the data access object directly. To solve this problem, the present paper proposes an aspect-oriented implementation of this pattern, followed by a quantitative evaluation of both the object-oriented (OO) and aspect-oriented (AO) implementations. This study used established software engineering attributes, such as separation of concerns, coupling and cohesion, as evaluation criteria. Current data warehouse models usually consider OLAP dimensions to be static entities.

However, in practice, structural changes to dimension schemas are often necessary to adapt the multidimensional database to changing requirements. This article presents a new structural update operator for OLAP dimensions. This operator can create a new level to which a pre-existent level in an OLAP dimension hierarchy rolls up. To define the domain of the new level and the aggregation function from an existing level to the new level, our operator classifies all instances of the existing level into k clusters with the k-means clustering algorithm.
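A rough sketch of this clustering step, on invented data: instances of an existing "city" level, described by a single numeric feature, are grouped by a simple 1-D k-means, and each cluster becomes a member of the new, coarser level that the cities roll up to. The data, feature, and cluster names are all illustrative assumptions.

```python
# Sketch: create a new OLAP level by k-means clustering the instances of
# an existing level (here, cities described by population in thousands).

def kmeans_1d(values, centroids, iters=10):
    """Tiny 1-D k-means with fixed initial centroids (deterministic)."""
    for _ in range(iters):
        groups = [[] for _ in centroids]
        for v in values:                      # assign to nearest centroid
            i = min(range(len(centroids)),
                    key=lambda i: abs(v - centroids[i]))
            groups[i].append(v)
        centroids = [sum(g) / len(g) if g else c
                     for g, c in zip(groups, centroids)]
    return groups

# descriptor of the pre-existent "city" level (invented data)
city_pop = {"Lyon": 500, "Nice": 340, "Paris": 2100, "Marseille": 860}

groups = kmeans_1d(sorted(city_pop.values()), centroids=[300, 2000])
# roll-up function: each city maps to the new-level member (its cluster)
rollup = {city: ("cluster0" if pop in groups[0] else "cluster1")
          for city, pop in city_pop.items()}
```

The resulting `rollup` mapping is exactly the aggregation path the new operator adds to the hierarchy: each cluster is one member of the created level.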

To choose features for the k-means clustering, we propose two solutions: the first uses descriptors of the pre-existent level in its dimension table, while the second describes the level by measure attributes in the fact table. As data warehouses are very large databases, these solutions were integrated inside an RDBMS, the Oracle database system. In addition, we carried out experiments that validated the relevance of our approach.

Matching techniques are becoming a very attractive research topic, with the development and use of a large variety of data (e.g. DB schemas, ontologies, taxonomies) in many domains. In this paper, we are interested in studying large-scale matching approaches.