This document describes process performance monitoring in service compositions. It discusses monitoring a single BPEL process using a resource event model and complex event definitions to calculate performance metrics. It also covers monitoring across partner processes by specifying a monitoring agreement based on a BPEL4Chor choreography model. Key events are correlated using identifiers. A prototype implements monitoring using an Apache ODE BPEL engine and ESPER CEP engine.
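The metric calculation described above can be illustrated with a minimal sketch: key events are correlated by a process instance identifier to derive a duration metric. The event names, fields, and in-memory correlation below are illustrative assumptions, not the actual resource event model or Esper event definitions.

```python
from datetime import datetime

# Hypothetical process events; names and fields are illustrative, not
# the actual resource event model from the learning package.
events = [
    {"type": "ProcessStarted", "instance": "p1", "ts": datetime(2024, 1, 1, 10, 0, 0)},
    {"type": "ProcessCompleted", "instance": "p1", "ts": datetime(2024, 1, 1, 10, 0, 42)},
]

def process_durations(events):
    """Correlate start/end events by the process instance identifier and
    derive a duration metric, as a complex event definition would."""
    starts, durations = {}, {}
    for e in sorted(events, key=lambda e: e["ts"]):
        if e["type"] == "ProcessStarted":
            starts[e["instance"]] = e["ts"]
        elif e["type"] == "ProcessCompleted" and e["instance"] in starts:
            durations[e["instance"]] = (e["ts"] - starts.pop(e["instance"])).total_seconds()
    return durations

durations = process_durations(events)  # {'p1': 42.0}
```

In a CEP engine the same correlation would be expressed declaratively as a pattern over the event stream rather than as an explicit loop.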
S-CUBE LP: Using Data Properties in Quality Prediction (virtual-campus)
The document discusses using data properties in quality prediction for service compositions. It notes that the quality of service (QoS) of a composition depends on factors like the QoS of component services, composition structure, and data. An automotive scenario example is provided where a parts provider composition selects among multiple part makers. The computation cost of the provider composition depends on the number of parts and characteristics of the chosen maker. Data properties like the number of parts can thus impact QoS predictions for service compositions.
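The dependency of QoS on data properties can be sketched as a toy prediction function whose output varies with the number of parts and the chosen maker. All names and coefficients below are invented for illustration and are not part of the S-CUBE model.

```python
# Toy QoS prediction: computation cost depends on a data property
# (number of parts) and on the chosen maker's characteristics.
def predict_cost(num_parts, cost_per_part, fixed_cost=1.0):
    return fixed_cost + num_parts * cost_per_part

makers = {"makerA": 0.5, "makerB": 0.8}  # hypothetical per-part costs
best_maker = min(makers, key=lambda m: predict_cost(100, makers[m]))
# predict_cost(100, 0.5) == 51.0; best_maker == 'makerA'
```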
S-CUBE LP: Service Level Agreement based Service infrastructures in the conte... (virtual-campus)
This document discusses service level agreements (SLAs) in the context of multi-layered adaptation of service-based applications. It describes 3 main problem areas: 1) diversity of service infrastructure models, 2) lack of cross-layer monitoring and adaptation, and 3) rigidness of infrastructure. The objectives are to 1) hide infrastructure differences, 2) support higher layers of service-based applications, and 3) enable SLA-oriented self-adaptation. It proposes an SLA-aware service infrastructure architecture using a meta-negotiator, meta-broker, brokers, and automatic service deployers to achieve autonomous behavior while respecting SLAs.
The document discusses the benefits of exercise for mental health. Regular physical activity can help reduce anxiety and depression and improve mood and cognitive function. Exercise causes chemical changes in the brain that may help protect against mental illness and improve symptoms for those who already suffer from conditions like depression and anxiety.
S-CUBE LP: Monitoring Adaptation of Service-based Applications (virtual-campus)
This document describes a framework for monitoring and adapting service-based applications based on user context. It discusses different types of user context that are important to consider, including direct contexts like role and cognition as well as related contexts like time and location. An ontology is presented for representing user context models. The framework uses annotations in service specifications to identify parts related to context, and event calculus rules to specify application behavior. It aims to select, modify or generate new monitoring rules based on user context to check application behavior.
The document describes a method for identifying services from requirements. It involves two steps: 1) Identifying business services from business process and data models. This can be done top-down from requirements or bottom-up from existing systems. 2) Decomposing business services into candidate services and operations using techniques like activity diagrams, use cases, and class diagrams. The goal is to discover reusable services that encapsulate business logic and data.
Hbb 2852 gain insights into your business operations with bpm and kibana (Allen Chan)
At IBM InterConnect 2017, we discussed the ability for IBM BPM to send business events into an analytics and visualization framework such as Elasticsearch + Kibana.
Data to Insight in a Flash: Introduction to Real-Time Analytics with WSO2 Com... (WSO2)
In this webinar, Sriskandarajah Suhothayan, technical lead at WSO2, will take a closer look at the following use cases:
Natural language processing capabilities of WSO2 CEP: Introducing basic constructs of the CEP
Analyzing a soccer game in Real time: Explaining how complicated scenarios can be implemented
Geo fencing capabilities of WSO2 CEP: Focusing on the CEP’s virtualization support
How to Create Observable Integration Solutions Using WSO2 Enterprise Integrator (WSO2)
This slide deck introduces the WSO2 Enterprise Integrator analytics profile and explores its observability features.
Watch the webinar here: https://wso2.com/library/webinars/2018/09/how-to-create-observable-integration-solutions-using-wso2-enterprise-integrator
To view a recording of this webinar, use the URL below:
http://wso2.com/library/webinars/2015/09/event-driven-architecture/
Enterprise systems today are becoming dynamic, with change the norm rather than the exception. Such systems need to be loosely coupled, autonomous, versatile, and adaptive. This creates a need to model such systems, and event-driven architecture (EDA) is how they can be modelled and explained.
This webinar will discuss
The basics of EDA
How it can benefit your enterprise
How the WSO2 product stack complements this architectural pattern
This document discusses using events to control SAP workflow. It provides examples of how to:
- Trigger a new workflow instance using an event
- Terminate a workitem using an event
- Wait for an external event to complete part of a workflow
- Configure different types of events such as HR, status management, and change documents
- Best practices for connecting workflows to SAP systems and processes using events
EVAM is a real-time event processing engine that can respond to complex event sequences and produce real-time actions. It processes enriched event data within defined scenarios and executes selected scenarios and relevant actions. EVAM allows users to design scenarios without scripting through a drag and drop interface. It can be used for applications like real-time offer management, fraud detection, monitoring and alert generation.
This document proposes a generic data model for storing and sharing process models across different modeling languages in a process model repository. It defines a generic process description that captures the common elements and relationships between elements across languages. A partial data model is generated from the generic description and mapping specifications define how each language maps to the generic model. Process models can then be stored and retrieved by converting them to and from the generic representation. This approach allows process models to be shared and reused independent of the original modeling language.
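The conversion step can be sketched in a few lines: a mapping specification rewrites each element of a language-specific model into the generic representation. The element types and mapping table below are hypothetical, not the actual generic process description.

```python
# Hypothetical mapping specification from one modeling language's element
# types to the generic model's types (not the proposal's actual metamodel).
BPMN_TO_GENERIC = {
    "task": "Activity",
    "exclusiveGateway": "Decision",
    "sequenceFlow": "Edge",
}

def to_generic(model, mapping):
    """Store step: rewrite each element's type via the mapping so the
    model can be kept in the language-independent representation."""
    return [{**el, "type": mapping[el["type"]]} for el in model]

bpmn = [{"id": "t1", "type": "task"}, {"id": "g1", "type": "exclusiveGateway"}]
generic = to_generic(bpmn, BPMN_TO_GENERIC)
```

Retrieval would apply the inverse mapping of the target language, which is what makes models reusable across notations.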
The document provides an overview of Business Process Modeling Notation (BPMN) 2.0 concepts and describes modeling a sales quote process using Oracle BPM Studio. It discusses key BPMN 2.0 elements like activities, events, gateways and flows, as well as enhancements in BPMN 2.0 including formal execution semantics, extensibility, and support for choreography. It also covers modeling human tasks, patterns, and using BPMN to both design and execute business processes.
The document discusses complex event processing (CEP) technology and the CEP GE instance available in FIWARE. It provides an overview of CEP's event-condition-action paradigm and how it can detect patterns over incoming events. It also describes how to define event types, processing rules, contexts and producers/consumers in the CEP GE's web interface. Finally, it provides an example of detecting a denial of service attack by defining an event processing agent to detect increasing traffic report events over time.
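The "increasing traffic" pattern from the denial-of-service example can be approximated outside a CEP engine as a simple check over the most recent reports. The function name, report values, and window length are illustrative assumptions.

```python
def increasing_run(reports, k):
    """True when the last k traffic reports are strictly increasing --
    a simplified stand-in for the event processing agent's trend pattern."""
    if len(reports) < k:
        return False
    tail = reports[-k:]
    return all(a < b for a, b in zip(tail, tail[1:]))

traffic = [120, 130, 180, 260]          # hypothetical traffic report values
dos_alert = increasing_run(traffic, 3)  # True: the last three reports rise
```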
- jBPM5 is an open-source business process management project that offers a generic process engine supporting native BPMN 2.0 execution targeting both developers and business users.
- The core engine allows defining and executing processes using BPMN 2.0 XML definitions and provides integration with rules, events, human tasks, and more.
- jBPM5 provides flexibility through combining processes defined in BPMN with rules and events while also supporting integration with domain-specific processes.
Complex Event Processing (CEP) involves detecting patterns in streams of event data. CEP tools analyze multiple simple events to identify complex events inferred from simpler ones. Typical applications of CEP include monitoring for business anomalies, detecting fraud or security threats. CEP augments service-oriented architectures by allowing services to trigger from events and generate new event streams. Event processing engines use techniques like filtering, windows, and correlation to detect patterns across events over time.
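The window-plus-aggregation building blocks named above can be reduced to their simplest form: a length-based window that emits a running average as events arrive. The class and method names are invented for illustration.

```python
from collections import deque

class SlidingAverage:
    """Length-based sliding window emitting a running average -- the
    window and aggregation primitives that CEP engines combine."""
    def __init__(self, size):
        self.window = deque(maxlen=size)

    def push(self, value):
        self.window.append(value)
        return sum(self.window) / len(self.window)

w = SlidingAverage(3)
averages = [w.push(v) for v in [10, 20, 60]]  # [10.0, 15.0, 30.0]
```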
Process Analytics with Oracle BPM Suite 12c and BAM - OGh SIG SOA & BPM, 1st ... (Lucas Jellema)
Business processes implemented in BPEL and BPM(N) and running on Oracle BPM Suite 12c or SOA Suite 12c have to fulfill a business purpose and as such must meet business requirements, both functionally and non-functionally. SLAs for throughput, response time, and quality are usually associated with these processes, and we typically also want insight into the number of process executions (per group) and the paths taken through our processes.
This presentation introduces process analytics in both BPEL and BPM processes in Oracle SOA Suite and BPM Suite 12c. It explains how to configure out-of-the-box generic analytics and process-specific business indicators. The presentation then introduces BAM 12c. It demonstrates the out-of-the-box process analytics reports and dashboards. Then it explains how to create custom reports on the unified process analytics star schema or on custom tables. Finally, the presentation goes into real-time monitoring in BAM using JMS and enterprise message resources in combination with the event processing templates in BAM.
How can the concepts of event-driven architecture be linked with the concepts of service-oriented architecture, and what is the added value of such a combination?
What do events mean in the context of Business Process Management (BPM) and Business Activity Monitoring (BAM), and how can such architectures/solutions be enhanced with the concepts of Complex Event Processing?
This document provides an overview of Microsoft's StreamInsight Complex Event Processing (CEP) platform. It discusses CEP concepts and benefits, the StreamInsight architecture and development environment, and deployment scenarios. The presentation aims to introduce IT professionals to CEP and Microsoft's StreamInsight solution for building event-driven applications that process streaming data with low latency.
This document provides an introduction to Sybase Event Stream Processing (ESP). ESP is a technology for analyzing streams of event data in real-time to derive continuous intelligence. It allows defining logic to combine data sources, compute values, detect patterns, and produce summaries. The ESP runtime continuously processes incoming event streams before storing to disk, enabling low latency analysis. It is not a replacement for databases but rather complements them by supporting continuous, real-time analysis of fast data streams. ESP uses data-flow programming where streams and operations on the data are defined as it flows from sources to outputs.
Complex Event Processor 3.0.0 - An overview of upcoming features (WSO2)
This document provides an overview of upcoming features in WSO2 Complex Event Processor 3.0.0. It discusses the Siddhi CEP engine, event processing queries including filters, windows, joins, patterns and event tables. It also covers high availability, persistence, scaling, integration with BAM, and performance comparisons with Esper. The document concludes with a demo of monitoring stock prices and tweets to detect significant stock price changes when a company is highly discussed on Twitter.
S4 is a distributed stream computing platform that allows programmers to develop applications for processing continuous streams of data. It is inspired by MapReduce and actor models of computation. S4 aims to provide a simple programming interface, scale using commodity hardware, minimize latency, and use a decentralized architecture. Processing in S4 is done using processing elements that operate on streaming data events in a distributed manner across processing nodes.
WebSphere Business Process Simulation (onrandikaucsc)
The document discusses process simulation and how to analyze the results of a process simulation. It provides an overview of process simulation, the steps to run a simulation, and how to analyze the simulation results. Key points include defining resources and probabilities, running a simulation snapshot, and analyzing results at the aggregated, process case, and instance level to understand time, cost, resource usage, and other metrics.
S-CUBE LP: Analysis Operations on SLAs: Detecting and Explaining Conflicting ... (virtual-campus)
The key types of conflicts that can occur within temporal-aware WS-Agreement documents are:
- Inconsistencies between terms, parts of terms, or creation constraints that are defined in overlapping time periods, making it impossible to satisfy all constraints simultaneously.
- Dead terms, where a guarantee term's qualifying condition can never be satisfied within the specified time periods due to contradictions with other terms or constraints.
- Ludicrous terms, where a guarantee term's service level objective cannot be fulfilled even when its qualifying condition is met, again due to contradictions arising from overlapping time periods.
The approach is to detect these three types of conflicts if and only if the involved terms or constraints are defined within overlapping time periods.
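The overlap test that all three conflict checks share can be sketched directly. The half-open (start, end) representation of validity periods is an assumption for illustration; WS-Agreement expresses these periods declaratively.

```python
def overlaps(p, q):
    """Two half-open validity periods (start, end) overlap iff each one
    starts before the other ends."""
    return p[0] < q[1] and q[0] < p[1]

# A term valid over (0, 10) and a constraint valid over (5, 15) must be
# checked against each other; terms over (0, 5) and (5, 10) need not be.
assert overlaps((0, 10), (5, 15))
assert not overlaps((0, 5), (5, 10))
```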
S-CUBE LP: Chemical Modeling: Workflow Enactment based on the Chemical Metaphor (virtual-campus)
This document provides an overview of a chemical metaphor for workflow enactment in large-scale heterogeneous environments. It discusses problems with current workflow enactment approaches and requirements for improvement. Specifically, it proposes modeling workflow enactment like chemical reactions, which are autonomous, distributed, concurrent and adaptive to local conditions. Resources are represented as "resource quantums" and a coordination model is formalized using the pi-calculus. This approach aims to provide more autonomy, adaptation and distribution for workflow enactment in complex environments.
More Related Content
Similar to S-CUBE LP: Process Performance Monitoring in Service Compositions
S-CUBE LP: Quality of Service-Aware Service Composition: QoS optimization in ... (virtual-campus)
This document discusses quality of service (QoS) optimization in service-based processes. It describes how to select and optimize composed web services to satisfy QoS constraints. The key aspects covered are QoS definition for web services, optimization at both the local service selection level and global process level, and rebinding services to maintain QoS as processes execute.
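Global optimization over a small sequential process can be sketched as exhaustive search: minimize total cost while keeping end-to-end response time within a constraint. The candidate services, their QoS values, and the constraint below are invented for illustration.

```python
from itertools import product

# Hypothetical candidates per abstract task: (name, cost, response time).
candidates = {
    "t1": [("s1a", 5, 2.0), ("s1b", 3, 4.0)],
    "t2": [("s2a", 4, 1.0), ("s2b", 2, 3.0)],
}

def global_selection(candidates, max_time):
    """Exhaustive global optimization for a sequential process: minimize
    total cost subject to an end-to-end response-time constraint."""
    best = None
    for combo in product(*candidates.values()):
        cost = sum(s[1] for s in combo)
        time = sum(s[2] for s in combo)
        if time <= max_time and (best is None or cost < best[0]):
            best = (cost, [s[0] for s in combo])
    return best

binding = global_selection(candidates, max_time=5.0)  # (7, ['s1a', 's2b'])
```

Local selection would instead pick the best candidate per task in isolation, which is cheaper to compute but can violate the global constraint; practical approaches use integer programming or heuristics rather than enumeration.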
S-CUBE LP: The Chemical Computing model and HOCL Programming (virtual-campus)
This document provides an overview of the Chemical Computing model and the Higher Order Chemical Language (HOCL). It describes the vision of chemical computing using multiset rewriting to express inherently parallel problems. The Gamma language is presented as the first to capture chemical programming. The γ-calculus improved on Gamma by making it higher order and modeling reaction rules as active molecules. HOCL is then presented as a language based on γ-calculus, allowing active molecules to capture and produce other active molecules. Examples are given to demonstrate the chemical approach.
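The multiset-rewriting idea behind Gamma can be sketched as a naive interpreter that repeatedly applies a reaction rule to pairs of molecules until the solution is inert. This is a toy illustration of the chemical execution model, not HOCL or the γ-calculus.

```python
import random

def gamma(solution, reaction):
    """Naive Gamma-style execution: repeatedly pick two molecules, apply
    the reaction if it fires, and stop when the solution is inert."""
    pool = list(solution)
    stable = False
    while not stable:
        stable = True
        random.shuffle(pool)  # reactions happen in no fixed order
        for i in range(len(pool)):
            for j in range(len(pool)):
                if i == j:
                    continue
                products = reaction(pool[i], pool[j])
                if products is not None:
                    pool = [m for k, m in enumerate(pool) if k not in (i, j)]
                    pool.extend(products)
                    stable = False
                    break
            if not stable:
                break
    return pool

# Classic Gamma example: maximum, via the rule  x, y -> x  when x >= y.
result = gamma([3, 1, 4, 1, 5], lambda x, y: (x,) if x >= y else None)
assert result == [5]
```

The order of reactions is deliberately random, mirroring the model's inherent parallelism: the final inert solution is the same regardless of scheduling.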
S-CUBE LP: Executing the HOCL: Concept of a Chemical Interpreter (virtual-campus)
The document describes an interpreter for a chemical language called Higher Order Chemical Language (HOCL) based on the chemical computing model. The interpreter uses a production system approach with RETE pattern matching to enable efficient execution of the chemical language. Key constructs of the language include passive molecules to represent facts, active molecules to represent rules, and solutions to represent independent computational threads. The interpreter was implemented using the Jess rule engine, and experience showed the importance of random conflict resolution and intelligent compilation for chemical modeling applications.
S-CUBE LP: SLA-based Service Virtualization in distributed, heterogeneous env... (virtual-campus)
The document describes SLA-based service virtualization (SSV) in distributed, heterogeneous environments. SSV uses a meta-negotiation component for SLA management, a meta-broker for diverse broker management, and automatic service deployment for virtualizing resources on clouds. It presents the SSV architecture and how it can be extended to Federated Cloud Management using a two-level brokering approach for cloud selection and optimal VM placement. The SSV and FCM architectures aim to provide a unified system for managing different service infrastructures through SLA-based user interaction and an autonomic system for inner interactions.
S-CUBE LP: Service Discovery and Task Models (virtual-campus)
The document describes a learning package on service discovery and task models. It discusses using task models to help select services that fit with a user's goals and constraints. A two-stage approach to task-based service discovery is presented: 1) specifying a user task model with a description, ConcurTaskTree diagram, and associated services; and 2) discovering services using the task model. The task model captures the task hierarchy, types, and temporal relationships. Services are matched based on analyzing subtasks and associated service classes.
S-CUBE LP: Impact of SBA design on Global Software Development (virtual-campus)
This document provides an overview of a learning package about designing and migrating service-based applications and the impact of service-based application design on global software development. It discusses how service-oriented architecture (SOA), cloud computing, and agile service networks can help address challenges with global software development by facilitating collaboration across geographic boundaries. Specifically, it outlines how SOA can support increased modularity, clear work division, and standards adoption to help distribute development tasks.
S-CUBE LP: Techniques for design for adaptationvirtual-campus
This document describes a learning package on designing and migrating service-based applications. It discusses techniques for designing applications to enable self-adaptation. It presents three motivating scenarios involving supply chains, wine production, and mobile users that require different types of adaptation. The key aspects of adaptable service-based applications are life cycles, adaptation strategies, triggers, and the association between strategies and triggers. Guidelines are provided for modeling triggers, realizing strategies, and relating them through various design approaches like built-in, abstraction-based, and dynamic adaptation.
S-CUBE LP: Self-healing in Mixed Service-oriented Systemsvirtual-campus
This document provides an overview of self-healing in mixed service-oriented systems. It describes self-healing research from IBM on autonomic computing and self-adaptive systems. The key aspects of self-healing covered include the self-healing loop, requirements, states (normal, broken, degraded), failure classification, and policies for detection and recovery. The goal of self-healing is to maintain system health by detecting disruptions, diagnosing causes, and applying recovery strategies in a closed feedback loop.
S-CUBE LP: Analyzing and Adapting Business Processes based on Ecologically-aw...virtual-campus
The document describes a learning package on analyzing and adapting business processes based on ecologically-aware indicators. It discusses using green business process reengineering to optimize an auto finishing process to reduce its environmental impact by considering additional dimensions like water consumption and carbon emissions. A key part of green BPR is extending the traditional BPR architecture to include defining key ecological indicators, monitoring environmental impacts during process execution, and analyzing the data to identify opportunities for process adaptation and improvement.
S-CUBE LP: Preventing SLA Violations in Service Compositions Using Aspect-Bas...virtual-campus
This document discusses an approach to preventing violations of service level agreements (SLAs) in composite services using aspect-based fragment substitution. The approach defines checkpoints in the service composition and uses machine learning to generate predictions of SLA violations at checkpoints. If a violation is predicted, the service composition is adapted by substituting an alternative process fragment that is expected to prevent the predicted SLA violation. Background information is provided on related work in S-Cube on runtime prediction of SLA violations using machine learning on event logs, and on aspect-oriented programming concepts used in the fragment substitution approach.
S-CUBE LP: Analyzing Business Process Performance Using KPI Dependency Analysisvirtual-campus
This document describes a method for analyzing dependencies between Key Performance Indicators (KPIs) and lower-level metrics in business processes. It involves defining KPIs and metrics, monitoring process instances, and using classification algorithms like decision trees to learn relationships between metrics and KPI classes from historical data. The approach automates dependency analysis, is efficient compared to manual methods, and produces understandable decision tree models. Potential limitations include needing historical event logs to train models and ensuring all relevant data can be monitored.
S-CUBE LP: Service Level Agreement based Service infrastructures in the conte...virtual-campus
This document describes a learning package on SLA-aware service infrastructures that aim to 1) hide differences between service infrastructures, 2) support higher layers of service-based applications through SLA-constrained autonomous decisions, and 3) allow for SLA-oriented self-adaptation and violation propagation across layers through monitoring and adaptation mechanisms. The research focuses on autonomous behavior in service infrastructures while considering constraints from SLAs agreed to at higher composition and business process layers.
S-CUBE LP: Runtime Prediction of SLA Violations Based on Service Event Logsvirtual-campus
This document describes an approach for predicting violations of service level agreements (SLAs) based on analyzing event logs from a service composition runtime. It discusses defining checkpoints during service execution to collect monitoring data on factors that influence performance. Missing or future data can be estimated. Machine learning techniques are then used to generate predictions at checkpoints based on historical monitoring data. The accuracy of predictions is evaluated by comparing predictions to actual outcomes. Prediction error is found to decrease as execution progresses, showing the potential for early warning of possible SLA violations to allow corrective actions.
This document discusses proactive service level agreement (SLA) negotiation. It defines SLA and SLA negotiation, and describes two types of negotiation: reactive and proactive. It outlines scenarios that could trigger proactive SLA negotiation, and describes a two-phase proactive negotiation process involving identification of potential providers and pre-agreement/final agreement. The document also presents an architecture and process for proactive SLA negotiation and evaluates the approach through a case study.
S-CUBE LP: A Soft-Constraint Based Approach to QoS-Aware Service Selectionvirtual-campus
The document discusses service selection and quality of service (QoS) considerations. It proposes extending the soft constraint satisfaction problem (SCSP) approach to handle penalties. Specifically, it defines a soft service level agreement (SSLA) model that includes user preferences and penalties defined in terms of QoS variables. If a selected service fails, the approach aims to automatically switch to another service that fits the agreed upon QoS levels while applying any defined penalties. The key points are mapping the SSLA definitions to the SCSP framework and extending the SCSP constraints and operations to incorporate the defined penalties.
S-CUBE LP: Variability Modeling and QoS Analysis of Web Services Orchestrationsvirtual-campus
This document summarizes research on using pairwise testing to model variability and analyze quality of service (QoS) for web service orchestrations. Feature diagrams are used to explicitly represent variability in composite services, and pairwise testing is applied to select configurations covering all pairwise feature interactions. QoS distributions are computed for these configurations to predict overall orchestration QoS in a way that accounts for variability. The approach provides more realistic service level agreements than considering only worst-case scenarios.
S-CUBE LP: Run-time Verification for Preventive Adaptationvirtual-campus
The document describes an approach called SPADE for preventive adaptation of service-based applications using runtime verification. SPADE uses monitoring data from service executions, assumptions about service response times, and formalized requirements to predict if the application will violate requirements. If a violation is predicted, SPADE identifies the need for adaptation to prevent an actual failure. SPADE was designed as part of the S-Cube project to enable service-based applications to adapt preventively based on runtime monitoring and verification.
S-CUBE LP: Online Testing for Proactive Adaptationvirtual-campus
This document discusses online testing for proactive adaptation of service-based applications. It describes how online testing can be used to predict failures through monitoring services and applications during operation. This allows issues to be detected early and adaptations to be made proactively before failures occur externally. Two approaches are discussed: PROSA predicts violations of quality of service by testing stateless services, while JITO predicts violations of interaction protocols for conversational services. Online testing extends traditional testing into the operational phase to improve failure prediction accuracy and allow more proactive adaptation for service-based applications.
Conversational agents, or chatbots, are increasingly used to access all sorts of services using natural language. While open-domain chatbots - like ChatGPT - can converse on any topic, task-oriented chatbots - the focus of this paper - are designed for specific tasks, like booking a flight, obtaining customer support, or setting an appointment. Like any other software, task-oriented chatbots need to be properly tested, usually by defining and executing test scenarios (i.e., sequences of user-chatbot interactions). However, there is currently a lack of methods to quantify the completeness and strength of such test scenarios, which can lead to low-quality tests, and hence to buggy chatbots.
To fill this gap, we propose adapting mutation testing (MuT) for task-oriented chatbots. To this end, we introduce a set of mutation operators that emulate faults in chatbot designs, an architecture that enables MuT on chatbots built using heterogeneous technologies, and a practical realisation as an Eclipse plugin. Moreover, we evaluate the applicability, effectiveness and efficiency of our approach on open-source chatbots, with promising results.
QA or the Highway - Component Testing: Bridging the gap between frontend appl...zjhamm304
These are the slides for the presentation, "Component Testing: Bridging the gap between frontend applications" that was presented at QA or the Highway 2024 in Columbus, OH by Zachary Hamm.
This talk will cover ScyllaDB Architecture from the cluster-level view and zoom in on data distribution and internal node architecture. In the process, we will learn the secret sauce used to get ScyllaDB's high availability and superior performance. We will also touch on the upcoming changes to ScyllaDB architecture, moving to strongly consistent metadata and tablets.
Northern Engraving | Modern Metal Trim, Nameplates and Appliance PanelsNorthern Engraving
What began over 115 years ago as a supplier of precision gauges to the automotive industry has evolved into being an industry leader in the manufacture of product branding, automotive cockpit trim and decorative appliance trim. Value-added services include in-house Design, Engineering, Program Management, Test Lab and Tool Shops.
[OReilly Superstream] Occupy the Space: A grassroots guide to engineering (an...Jason Yip
The typical problem in product engineering is not bad strategy, so much as “no strategy”. This leads to confusion, lack of motivation, and incoherent action. The next time you look for a strategy and find an empty space, instead of waiting for it to be filled, I will show you how to fill it in yourself. If you’re wrong, it forces a correction. If you’re right, it helps create focus. I’ll share how I’ve approached this in the past, both what works and lessons for what didn’t work so well.
ScyllaDB Leaps Forward with Dor Laor, CEO of ScyllaDBScyllaDB
Join ScyllaDB’s CEO, Dor Laor, as he introduces the revolutionary tablet architecture that makes one of the fastest databases fully elastic. Dor will also detail the significant advancements in ScyllaDB Cloud’s security and elasticity features as well as the speed boost that ScyllaDB Enterprise 2024.1 received.
In our second session, we shall learn all about the main features and fundamentals of UiPath Studio that enable us to use the building blocks for any automation project.
📕 Detailed agenda:
Variables and Datatypes
Workflow Layouts
Arguments
Control Flows and Loops
Conditional Statements
💻 Extra training through UiPath Academy:
Variables, Constants, and Arguments in Studio
Control Flow in Studio
LF Energy Webinar: Carbon Data Specifications: Mechanisms to Improve Data Acc...DanBrown980551
This LF Energy webinar took place June 20, 2024. It featured:
-Alex Thornton, LF Energy
-Hallie Cramer, Google
-Daniel Roesler, UtilityAPI
-Henry Richardson, WattTime
In response to the urgency and scale required to effectively address climate change, open source solutions offer significant potential for driving innovation and progress. Currently, there is a growing demand for standardization and interoperability in energy data and modeling. Open source standards and specifications within the energy sector can also alleviate challenges associated with data fragmentation, transparency, and accessibility. At the same time, it is crucial to consider privacy and security concerns throughout the development of open source platforms.
This webinar will delve into the motivations behind establishing LF Energy’s Carbon Data Specification Consortium. It will provide an overview of the draft specifications and the ongoing progress made by the respective working groups.
Three primary specifications will be discussed:
-Discovery and client registration, emphasizing transparent processes and secure and private access
-Customer data, centering around customer tariffs, bills, energy usage, and full consumption disclosure
-Power systems data, focusing on grid data, inclusive of transmission and distribution networks, generation, intergrid power flows, and market settlement data
Getting the Most Out of ScyllaDB Monitoring: ShareChat's TipsScyllaDB
ScyllaDB monitoring provides a lot of useful information. But sometimes it’s not easy to find the root of the problem if something is wrong or even estimate the remaining capacity by the load on the cluster. This talk shares our team's practical tips on: 1) How to find the root of the problem by metrics if ScyllaDB is slow 2) How to interpret the load and plan capacity for the future 3) Compaction strategies and how to choose the right one 4) Important metrics which aren’t available in the default monitoring setup.
The Department of Veteran Affairs (VA) invited Taylor Paschal, Knowledge & Information Management Consultant at Enterprise Knowledge, to speak at a Knowledge Management Lunch and Learn hosted on June 12, 2024. All Office of Administration staff were invited to attend and received professional development credit for participating in the voluntary event.
The objectives of the Lunch and Learn presentation were to:
- Review what KM ‘is’ and ‘isn’t’
- Understand the value of KM and the benefits of engaging
- Define and reflect on your “what’s in it for me?”
- Share actionable ways you can participate in Knowledge - - Capture & Transfer
TrustArc Webinar - Your Guide for Smooth Cross-Border Data Transfers and Glob...TrustArc
Global data transfers can be tricky due to different regulations and individual protections in each country. Sharing data with vendors has become such a normal part of business operations that some may not even realize they’re conducting a cross-border data transfer!
The Global CBPR Forum launched the new Global Cross-Border Privacy Rules framework in May 2024 to ensure that privacy compliance and regulatory differences across participating jurisdictions do not block a business's ability to deliver its products and services worldwide.
To benefit consumers and businesses, Global CBPRs promote trust and accountability while moving toward a future where consumer privacy is honored and data can be transferred responsibly across borders.
This webinar will review:
- What is a data transfer and its related risks
- How to manage and mitigate your data transfer risks
- How do different data transfer mechanisms like the EU-US DPF and Global CBPR benefit your business globally
- Globally what are the cross-border data transfer regulations and guidelines
"Scaling RAG Applications to serve millions of users", Kevin GoedeckeFwdays
How we managed to grow and scale a RAG application from zero to thousands of users in 7 months. Lessons from technical challenges around managing high load for LLMs, RAGs and Vector databases.
The Microsoft 365 Migration Tutorial For Beginner.pptxoperationspcvita
This presentation will help you understand the power of Microsoft 365. However, we have mentioned every productivity app included in Office 365. Additionally, we have suggested the migration situation related to Office 365 and how we can help you.
You can also read: https://www.systoolsgroup.com/updates/office-365-tenant-to-tenant-migration-step-by-step-complete-guide/
Introducing BoxLang : A new JVM language for productivity and modularity!Ortus Solutions, Corp
Just like life, our code must adapt to the ever changing world we live in. From one day coding for the web, to the next for our tablets or APIs or for running serverless applications. Multi-runtime development is the future of coding, the future is to be dynamic. Let us introduce you to BoxLang.
Dynamic. Modular. Productive.
BoxLang redefines development with its dynamic nature, empowering developers to craft expressive and functional code effortlessly. Its modular architecture prioritizes flexibility, allowing for seamless integration into existing ecosystems.
Interoperability at its Core
With 100% interoperability with Java, BoxLang seamlessly bridges the gap between traditional and modern development paradigms, unlocking new possibilities for innovation and collaboration.
Multi-Runtime
From the tiny 2m operating system binary to running on our pure Java web server, CommandBox, Jakarta EE, AWS Lambda, Microsoft Functions, Web Assembly, Android and more. BoxLang has been designed to enhance and adapt according to it's runnable runtime.
The Fusion of Modernity and Tradition
Experience the fusion of modern features inspired by CFML, Node, Ruby, Kotlin, Java, and Clojure, combined with the familiarity of Java bytecode compilation, making BoxLang a language of choice for forward-thinking developers.
Empowering Transition with Transpiler Support
Transitioning from CFML to BoxLang is seamless with our JIT transpiler, facilitating smooth migration and preserving existing code investments.
Unlocking Creativity with IDE Tools
Unleash your creativity with powerful IDE tools tailored for BoxLang, providing an intuitive development experience and streamlining your workflow. Join us as we embark on a journey to redefine JVM development. Welcome to the era of BoxLang.
For senior executives, successfully managing a major cyber attack relies on your ability to minimise operational downtime, revenue loss and reputational damage.
Indeed, the approach you take to recovery is the ultimate test for your Resilience, Business Continuity, Cyber Security and IT teams.
Our Cyber Recovery Wargame prepares your organisation to deliver an exceptional crisis response.
Event date: 19th June 2024, Tate Modern
Session 1 - Intro to Robotic Process Automation.pdfUiPathCommunity
👉 Check out our full 'Africa Series - Automation Student Developers (EN)' page to register for the full program:
https://bit.ly/Automation_Student_Kickstart
In this session, we shall introduce you to the world of automation, the UiPath Platform, and guide you on how to install and setup UiPath Studio on your Windows PC.
📕 Detailed agenda:
What is RPA? Benefits of RPA?
RPA Applications
The UiPath End-to-End Automation Platform
UiPath Studio CE Installation and Setup
💻 Extra training through UiPath Academy:
Introduction to Automation
UiPath Business Automation Platform
Explore automation development with UiPath Studio
👉 Register here for our upcoming Session 2 on June 20: Introduction to UiPath Studio Fundamentals: https://community.uipath.com/events/details/uipath-lagos-presents-session-2-introduction-to-uipath-studio-fundamentals/
MongoDB to ScyllaDB: Technical Comparison and the Path to SuccessScyllaDB
What can you expect when migrating from MongoDB to ScyllaDB? This session provides a jumpstart based on what we’ve learned from working with your peers across hundreds of use cases. Discover how ScyllaDB’s architecture, capabilities, and performance compares to MongoDB’s. Then, hear about your MongoDB to ScyllaDB migration options and practical strategies for success, including our top do’s and don’ts.
MongoDB to ScyllaDB: Technical Comparison and the Path to Success
S-CUBE LP: Process Performance Monitoring in Service Compositions
1. S-Cube Learning Package
Process Performance Monitoring in Service Compositions
University of Stuttgart (USTUTT), TU Wien (TUW)
Branimir Wetzstein, USTUTT
www.s-cube-network.eu
2. Learning Package Categorization
S-Cube → Adaptable Coordinated Service Compositions → Adaptable and QoS-aware Service Compositions → Process Performance Monitoring in Service Compositions
3. Learning Package Overview
Problem Description
Process Performance Monitoring in Service Compositions
Discussion
Conclusions
4. Let’s Consider a Scenario (1)
Assume a reseller who has implemented a business process as a service composition.
The reseller interacts with external services of customers, suppliers, shippers, banks, etc. in a service choreography.
5. Let’s Consider a Scenario (2)
The reseller is interested in measuring the process performance of his own internal business process:
– Process performance metrics (PPMs) based on the time, cost, quality, and customer satisfaction dimensions
- Order Fulfillment Lead Time, Perfect Order Fulfillment (in time and in full), Customer Complaint Rate, …
– QoS metrics evaluating characteristics of the process execution infrastructure, e.g. availability of the service endpoints, …
[Figure: monitoring architecture at the reseller — an event listener receives events from the service and a QoS monitor; metric definitions drive metric calculation, with results stored in a metrics database and shown on a dashboard]
6. Let’s Consider a Scenario (3)
The reseller is also interested in monitoring across the public partner processes in the choreography.
Process tracking:
– Partners want to track the state of the choreography beyond their own process
– Example: a customer wants to track its shipment (Amazon-DHL example): where is my shipment at the moment?
[Figure: Customer, Reseller, and Shipper processes exchanging "Get Order Status" and "Get Shipment Status" requests; each process publishes events to shared queues/topics]
7. Let’s Consider a Scenario (4)
The reseller is also interested in monitoring across the public partner processes in the choreography.
Cross-partner metrics are evaluated for KPIs and SLAs:
– E.g., order fulfilment lead time
– Events from the different processes are gathered and correlated
[Figure: Customer and Reseller processes publishing events that are correlated by a CEP engine]
8. Learning Package Overview
Problem Description
Process Performance Monitoring in Service
Compositions
Discussion
Conclusions
9. Event-Based Monitoring of BPEL
Service Orchestrations
Our monitoring approach for service orchestrations supports:
– Process Performance Metrics (PPMs) based on process events (BPEL event model)
– QoS metrics based on QoS events provided by QoS monitors
– Correlation of Process events and QoS events
– Metric calculation based on Complex Event Processing (ESPER)
[Figure: the process engine, QoS monitor, and other event sources publish events to an event listener; complex event processing applies the metric definitions and writes metrics to a database displayed on a dashboard]
10. BPEL Monitoring Mechanisms
BPEL engines typically support the following monitoring mechanisms:
– Events are published (asynchronously) to message queues and topics to which external monitors can subscribe
– Events are stored persistently in an audit trail (database)
– A monitoring interface enables active querying of information on deployed processes and running process instances
The supported events are defined in an event model:
– Unfortunately, there is no standard BPEL event metamodel; every BPEL engine provides different types of events
– Some BPEL engines also support (synchronous) blocking events: the execution of the process instance is halted until it is unblocked by an incoming message
– Most BPEL engines allow configuring an event filter for a process model (using a deployment descriptor), so that only the needed events are published at runtime, which improves performance
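The event-filter idea can be sketched in a few lines of Python. This is an illustrative stand-in, not a real engine API: the filter structure and field names (`EVENT_FILTER`, `pname`, `type`) are assumptions for this sketch.

```python
# Illustrative sketch: a per-process event filter, as would be configured via
# a deployment descriptor, so only needed events are published at runtime.

# Hypothetical filter: event types the monitor actually subscribes to, per process.
EVENT_FILTER = {
    "POProcess": {"Activity_Executing", "Activity_Completed"},
}

def publish_if_needed(event, queue):
    """Append the event to the queue only if its type passes the filter."""
    allowed = EVENT_FILTER.get(event["pname"], set())
    if event["type"] in allowed:
        queue.append(event)

queue = []
publish_if_needed({"pname": "POProcess", "type": "Activity_Executing"}, queue)
publish_if_needed({"pname": "POProcess", "type": "Variable_Modification"}, queue)
# Only the first event passes the filter; the second is dropped before publication.
```

Filtering at the engine side keeps unneeded events from ever reaching the message queue, which is where the performance gain comes from.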
11. Background:
BPEL 2.0 Event Model
The BPEL 2.0 event model contains:
– State models
- Separate state models for instances of the following elements: Process, Activity, Scope Activity, Loop Activity, Invoke Activity, Link, Variable, Partner Link, Correlation Set
- Transitions between states are denoted by different types of events
– Event contents
- Event attributes, in particular the different IDs which identify the corresponding element in the process instance
Kopp, Oliver; Henke, Sebastian; Karastoyanova, Dimka; Khalaf, Rania; Leymann, Frank; Sonntag, Mirko; Steinmetz, Thomas; Unger, Tobias; Wetzstein, Branimir: An Event Model for WS-BPEL 2.0. Technical Report No. 2011/07, University of Stuttgart.
13. Background:
Event Contents
An event contains the following information:
– event name
– creation timestamp
– IDs which enable assignment of the event to the corresponding BPEL element instance
– other information, e.g. a variable value
Identifiers needed per BPEL element:
– Process model: the QName of the process model; the version number of the process model
– Process instance: the globally unique instance ID of the process instance
– Activity instance: an XPath expression which uniquely identifies the construct in the process model; the globally unique instance ID of the process instance; the unique instance ID of the innermost scope instance the activity is nested in; the unique instance ID of the activity instance
– Link, variable, partner link, and correlation set instances: the unique instance ID of the process instance; the unique instance ID of the scope where the element is declared; an XPath expression uniquely identifying the construct in the process model
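The event contents listed above can be sketched as a minimal event record. This is an illustrative Python sketch; the field names are assumptions, not taken from the BPEL event model specification.

```python
# Illustrative sketch: an event record carrying the identifiers needed to
# assign it to a BPEL element instance (field names are assumed).
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class BpelEvent:
    name: str                  # event name, e.g. "Activity_Completed"
    timestamp: float           # creation timestamp
    process_qname: str         # QName of the process model
    process_version: int       # version number of the process model
    piid: str                  # globally unique process instance ID
    activity_xpath: Optional[str] = None  # XPath locating the construct in the model
    scope_iid: Optional[str] = None       # ID of the innermost enclosing scope instance
    activity_iid: Optional[str] = None    # ID of the activity instance
    data: dict = field(default_factory=dict)  # optional payload, e.g. a variable value

ev = BpelEvent("Activity_Completed", 1700000000.0,
               "reseller:PurchaseOrderProcess", 1, "pi-42",
               activity_xpath="//receive[@name='ReceivePO']", activity_iid="ai-7")
```

The optional fields mirror the table: an activity-instance event carries all four activity identifiers, while a plain process-instance event only needs the process-level IDs.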
14. Correlation of BPEL Events
Calculation of BPEL process instance metrics can be done based on the instance IDs set by the BPEL engine:
SELECT e2.timestamp - e1.timestamp
FROM Activity_Executing e1, Activity_Completed e2
WHERE e1.pname = "POProcess" AND
      e1.aname = "Receive Order" AND
      e2.aname = "Ship Order" AND
      e1.pid = e2.pid
When, however, the BPEL process is not the only event source, other correlation tokens have to be chosen:
SELECT e2.timestamp - e1.timestamp
FROM Activity_Executing e1, Variable_Modification e2, Order_Shipped e3
WHERE e1.pname = "POProcess" AND
      e1.aname = "Receive Order" AND
      e2.vname = "Order" AND
      e1.pid = e2.pid AND
      e2.orderId = e3.orderId
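The first correlation query can be restated as a small Python sketch: pair each "Receive Order" executing event with the matching "Ship Order" completed event on the engine-set process instance ID and report the elapsed time. This is an illustrative equivalent, not the CEP engine; the event fields (`type`, `pname`, `aname`, `pid`, `timestamp`) follow the query above.

```python
# Illustrative equivalent of the EPL join above: correlate the start and end
# events of a process instance on pid and compute the duration.

def activity_span(events):
    """Yield (pid, duration) for each matched Executing/Completed pair."""
    starts = {}  # pid -> timestamp of "Receive Order" executing
    for e in events:
        if (e["type"] == "Activity_Executing"
                and e["pname"] == "POProcess" and e["aname"] == "Receive Order"):
            starts[e["pid"]] = e["timestamp"]
        elif (e["type"] == "Activity_Completed"
                and e["aname"] == "Ship Order" and e["pid"] in starts):
            yield e["pid"], e["timestamp"] - starts.pop(e["pid"])

stream = [
    {"type": "Activity_Executing", "pname": "POProcess",
     "aname": "Receive Order", "pid": "p1", "timestamp": 100.0},
    {"type": "Activity_Completed", "pname": "POProcess",
     "aname": "Ship Order", "pid": "p1", "timestamp": 160.0},
]
durations = dict(activity_span(stream))
```

The second query works the same way, except that the join key changes mid-way: `pid` links the engine events, and an application-level token (`orderId`) links out to the external event source.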
15. Monitoring of Process Performance
Metrics (PPMs)
PPMs are metrics defined based on runtime events of processes:
– We focus here on runtime events of WS-BPEL service orchestrations, but in general our approach supports arbitrary events of information systems participating in the business process
We distinguish between resource events and complex events:
– Resource event definitions specify (based on the BPEL event model) which raw events we need from the BPEL engine
– Complex events are defined (recursively) based on other events and are used for PPM calculation
16. Definition of Resource Events
A resource event definition specifies the following three elements:
– Monitored resource: a concrete BPEL entity instance plus the states of interest (as defined in the BPEL event model)
– Process data: optionally, one can specify which process data (defined as a WS-BPEL variable) is to be part of the event; the data is read when the event is published
– Target message queue or pub/sub topic
<resourceEventDefinition name="OrderReceivedEvent">
  <monitoredResource
      process="reseller:PurchaseOrderProcess"
      scope="process"
      activity="reseller:ReceivePO"
      state="completed"/>
  <data>
    <processVariable name="order" variable="purchaseOrder"/>
  </data>
  <publish>
    <queue name="PurchaseOrderProcess.ResourceEvents"/>
  </publish>
</resourceEventDefinition>
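How a listener might act on such a definition can be sketched as follows. This is an illustrative Python sketch, not the actual listener implementation: the definition is flattened into a plain dict, and the matching and variable-lookup logic is assumed.

```python
# Illustrative sketch of a resource event listener: match a raw engine event
# against the OrderReceivedEvent definition, attach the configured process
# variable, and publish the resulting resource event to the target queue.

DEFINITION = {
    "name": "OrderReceivedEvent",
    "process": "reseller:PurchaseOrderProcess",
    "activity": "reseller:ReceivePO",
    "state": "completed",
    "variable": "purchaseOrder",   # copied into the event under key "order"
    "queue": "PurchaseOrderProcess.ResourceEvents",
}

def on_engine_event(raw, variables, queues):
    """Publish a named resource event if the raw engine event matches."""
    d = DEFINITION
    if (raw["process"] == d["process"] and raw["activity"] == d["activity"]
            and raw["state"] == d["state"]):
        event = {"name": d["name"], "piid": raw["piid"],
                 "order": variables[raw["piid"]].get(d["variable"])}
        queues.setdefault(d["queue"], []).append(event)

queues = {}
variables = {"pi-1": {"purchaseOrder": {"id": "PO-99", "items": 3}}}
on_engine_event({"process": "reseller:PurchaseOrderProcess",
                 "activity": "reseller:ReceivePO",
                 "state": "completed", "piid": "pi-1"}, variables, queues)
```

Note that the variable value is read at publication time, matching the definition's semantics that data is captured when the event is published.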
17. Definition of Complex Events
Complex events are specified by correlating and aggregating existing events (resource and complex events).
A complex event definition specifies the following three elements:
– Consumed events: source event queue(s) and/or topic(s)
– Event aggregation statement: a CEP statement which correlates and aggregates the consumed events into a new event
– Target message queue or pub/sub topic
<complexEventDefinition name="OrderFulfillmentTimeEvent">
  <consume><queue name="PurchaseOrderProcess.ResourceEvents"/></consume>
  <eventAggregation>
    <statement><![CDATA[
      SELECT abs(b.timestamp - a.timestamp) AS metricValue, "ms" AS unit, a.piid AS piid
      FROM PATTERN [EVERY a = ResourceEvent(name="OrderReceivedEvent")
                    -> b = ResourceEvent(name="ReceivedDeliveryNotificationEvent"
                                         AND piid = a.piid)]
    ]]></statement>
  </eventAggregation>
  <publish><topic name="PurchaseOrderProcess.metrics"/></publish>
</complexEventDefinition>
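The followed-by pattern in the statement above (EVERY a -> b, correlated on piid) can be illustrated with a hand-rolled Python sketch. This is a stand-in for the CEP engine, not the ESPER API; the event and metric field names follow the EPL statement.

```python
# Illustrative sketch of the EVERY a -> b pattern: for every
# OrderReceivedEvent, wait for the ReceivedDeliveryNotificationEvent with the
# same piid, then emit an OrderFulfillmentTimeEvent carrying the elapsed time.

def fulfillment_metrics(resource_events):
    open_orders = {}  # piid -> timestamp of the OrderReceivedEvent
    metrics = []
    for e in resource_events:
        if e["name"] == "OrderReceivedEvent":
            open_orders[e["piid"]] = e["timestamp"]
        elif (e["name"] == "ReceivedDeliveryNotificationEvent"
                and e["piid"] in open_orders):
            started = open_orders.pop(e["piid"])
            metrics.append({"metricValue": abs(e["timestamp"] - started),
                            "unit": "ms", "piid": e["piid"]})
    return metrics

out = fulfillment_metrics([
    {"name": "OrderReceivedEvent", "piid": "pi-1", "timestamp": 1000},
    {"name": "ReceivedDeliveryNotificationEvent", "piid": "pi-1", "timestamp": 4500},
])
```

Because the pattern is evaluated per piid, interleaved events from concurrent process instances are kept apart, which is exactly what the piid correlation in the EPL pattern achieves.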
18. Monitoring QoS Metrics
In our context, QoS can be measured in three different ways:
– probing by a separate QoS monitor (e.g., probing whether an endpoint is available, see the example below)
– instrumentation of the WS-BPEL engine
– instrumentation of the WS-BPEL process (evaluating QoS parameters using PPMs; e.g., response times of web services can be estimated through WS-BPEL activity durations)
<QoSEventDefinition name="ProcessEndpointAvailableEvent">
  <availabilityProbe>
    <endpoint>http://localhost:8082/.../poProcess?wsdl</endpoint>
    <testFrequencyPerMinute>20</testFrequencyPerMinute>
  </availabilityProbe>
  <publish>
    <queue name="PurchaseOrderProcess.QoSEvents"/>
  </publish>
</QoSEventDefinition>
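The availability probe above can be sketched as follows. This is an illustrative Python sketch under stated assumptions: the probe function is injected (no real HTTP call) so the example stays self-contained, and availability is derived as the fraction of successful probes.

```python
# Illustrative sketch of an availability probe: invoke a probe at a fixed
# frequency, emit one QoS event per probe, and compute availability as the
# ratio of successful probes to all probes.

def run_probes(probe, n):
    """Emit n QoS events reporting whether the endpoint answered."""
    return [{"name": "ProcessEndpointAvailableEvent", "available": probe(i)}
            for i in range(n)]

def availability(events):
    """Fraction of probes in which the endpoint was reachable."""
    up = sum(1 for e in events if e["available"])
    return up / len(events)

# Simulated endpoint that is down on every 5th probe (4 of 20 probes fail).
events = run_probes(lambda i: i % 5 != 0, 20)
ratio = availability(events)
```

In the real setup, each probe event would be published to the PurchaseOrderProcess.QoSEvents queue, and the aggregation into an availability metric would itself be expressed as a complex event definition.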
20. Deployment and Monitoring
Deployment of the monitor model:
– Resource event definitions are deployed to the event listener in the BPEL engine (Apache ODE), the QoS monitor, and other arbitrary event adapters
– Complex event definitions are deployed to the ESPER CEP engine
[Figure: deployment architecture — the process engine, QoS monitor, and other event sources publish events to the listener; complex event processing applies the metric definitions and writes metrics to a database displayed on a dashboard]
27. Discussion - Advantages
The event-based monitoring approach based on a state-of-the-art CEP engine has the following advantages:
– Complex computations: a state-of-the-art CEP language is more powerful than most domain-specific or specialized monitoring languages
– High throughput, low latency: provided by the underlying CEP engine implementation
– Support for arbitrary events: process events based on the BPEL event model and some QoS events are directly supported
28. Discussion - Disadvantages
… but of course the approach also has some disadvantages:
– Specification of monitored properties: relatively time-consuming and error-prone because of the relatively low-level language
– Transactionality and persistence: not directly supported during event processing
30. Summary
Event-based monitoring of processes based on BPEL and BPEL4Chor models.
The monitor model specifies:
1. Resource events which should be provided by the BPEL engine (based on the BPEL event model) or a QoS monitor
2. Complex events which specify higher-level monitored properties based on an event processing language (ESPER EPL)
Monitoring across processes:
– A choreography model with public process descriptions serves as the basis
– Extensions for choreography instance identification in message and event exchanges are needed
31. Further S-Cube Reading
Wetzstein, Branimir; Leitner, Philipp; Rosenberg, Florian; Dustdar, Schahram; Leymann, Frank: Identifying Influential Factors of Business Process Performance Using Dependency Analysis. In: Enterprise Information Systems, Vol. 5(1), Taylor & Francis, 2010.
Wetzstein, Branimir; Karastoyanova, Dimka; Kopp, Oliver; Leymann, Frank; Zwink, Daniel: Cross-Organizational Process Monitoring based on Service Choreographies. In: Proceedings of the 25th Annual ACM Symposium on Applied Computing (SAC 2010), Sierre, Switzerland, March 21-26, 2010.
Kopp, Oliver; Henke, Sebastian; Karastoyanova, Dimka; Khalaf, Rania; Leymann, Frank; Sonntag, Mirko; Steinmetz, Thomas; Unger, Tobias; Wetzstein, Branimir: An Event Model for WS-BPEL 2.0. Technical Report No. 2011/07, University of Stuttgart.
Wetzstein, Leitner, Rosenberg, Brandic, Dustdar, and Leymann: Monitoring and Analyzing Influential Factors of Business Process Performance. In: Proceedings of the 13th IEEE International Conference on Enterprise Distributed Object Computing (EDOC 2009), IEEE Press, Piscataway, NJ, USA, pp. 118-127.
Wetzstein, Branimir; Strauch, Steve; Leymann, Frank: Measuring Performance Metrics of WS-BPEL Service Compositions. In: Proceedings of the Fifth International Conference on Networking and Services (ICNS 2009), Valencia, Spain, April 20-25, 2009.