Documentation & Tutorial


In this document, we explain the Quality and Resource Management Language (QRML) by means of a small tutorial exercise. We first provide an overview of the constructs of the QRML language and then illustrate them using a Biometric Access Control (BAC) System example. Throughout the document, practice exercises are provided to familiarize you with the language and concepts. Both a partial BAC model to be used with the exercises and a complete version of the BAC model are available in the public section of this site.


Overview of the constructs of the QRML language

The key concept in QRML is the component. A component has an interface consisting of six parts: inputs, outputs, required budgets, provided budgets, qualities, and parameters.

component interface diagram

The component interface, as graphically depicted above, is described as follows.

  • output is a functional output (usually drawn horizontally) which is connected to the functional input of another component.
  • input is a functional input (usually drawn horizontally) which is connected to the functional output of another component.
  • provided budget is a resource provision (usually drawn vertically) which is connected to the resource dependency of another component.
  • required budget is a resource dependency (usually drawn vertically) which is connected to the resource provision of another component.
  • parameter is a configurable aspect of a component.
  • quality is an aspect of a component that is intended to be optimized.

Component dependencies connect interfaces of different components, indicating that they depend on each other. These dependencies can be specified using the following constructs.

  • outputs to connects the functional output of one component to the functional input of another component (usually drawn horizontally).
  • runs on connects the resource dependency of one component to the resource provision of another component (usually drawn vertically).

Each of the six elements of a component interface has a type, which can be defined in three different ways:

  • channel is used for the input and output interfaces. It therefore also concerns the outputs to dependency.
  • budget is used for the provides and requires interfaces. It therefore also concerns the runs on dependency.
  • typedef is used for parameters and qualities, but can also be used for provided budgets, required budgets, inputs, and outputs.

Additionally, a component can be a composition of other components, yielding a component hierarchy, in the following two ways.

  • Component aggregation is used when a component consists of (one or) multiple components.
  • Alternatives concerns a component that can behave in different ways, each specified as a component.

The following diagram shows the component hierarchy of the BAC model. Relations between components with diamond end points indicate component aggregations or alternatives. The diagram has been automatically generated with the QRML tooling on the basis of the QRML model.

component hierarchy diagram


Biometric Access Control system

We consider a Biometric Access Control system that grants access to e.g. buildings, rooms, files, or data on the basis of face recognition, e.g., by analyzing the relative position, size, and/or shape of the eyes, nose, cheekbones, and jaw.

BAC system overview

The figure above shows a component model of the BAC system. We define the BAC application and the BAC platform and connect them via a mapping, yielding the BAC system. This separation of concerns facilitates modeling and reasoning about quality and resource management (QRM). Next, we discuss the BAC system in a bottom-up fashion, introducing elements of the QRML language along the way.


The BAC application

The figure conveys that the BAC application is a pipeline that consists of three components, namely Face Detection (capturing an image and detecting a face), Face Recognition (identifying a person from the detected face) and Access Control (looking up the access rights of an identified person). The components are connected functionally, i.e., depicted horizontally. Face Detection outputs an image to Face Recognition, and Face Recognition outputs an Id to Access Control. To make this clearer, we demonstrate how this is done in the actual QRML code.

Component Face Detection outputs an image. Furthermore, Face Detection has two requires statements, which means it depends on other components (to be discussed later), from which it requires resource budgets. Finally, there is a latency quality, which depends on the latency property of the required budget imgAna. The latter is captured by a constraint.

component FaceDetection {
    output Image out

    requires ImageCapturing imgCap
    requires ImageAnalysis imgAna

    quality Latency lat
    constraint lat = imgAna.latency
}

Component Face Recognition inputs an Image from Face Detection and outputs an Id to Access Control. Just like Face Detection, Face Recognition relies on a resource component implementing a Face Identification procedure. This resource component determines its latency. Moreover, another quality named recQuality is used to capture the recognition quality of the Face Identification procedure.

component FaceRecognition {
    input Image inp
    output Id out

    requires FaceIdentification faceId

    quality Latency lat
    quality Quality recQuality

    constraint lat = faceId.latency
    constraint recQuality = faceId.qual
}

The third component of the pipeline, Access Control, only has input functionality, i.e., it receives an Id from Face Recognition. It requires access to a resource of type DatabaseAccess. The Access Control component has a latency that is inherited from the latency of the DatabaseAccess resource.

Exercise 1 Complete the AccessControl component. You may do so using the BAC-tutorial model in the public section of this site.

component AccessControl {

Face Detection, Face Recognition and Access Control are aggregated into component BACApplication, as follows.

component BACApplication {
    contains FaceDetection faceDet
    contains FaceRecognition faceRec
    contains AccessControl accCon

    constraint faceDet.out outputs to faceRec.inp
    constraint faceRec.out outputs to accCon.inp

    quality Latency endToEnd { endToEnd = faceDet.lat + faceRec.lat + accCon.lat }
    quality Quality recQuality from faceRec.recQuality

    requires ImageCapturing imgCap from faceDet.imgCap
    requires ImageAnalysis imgAna from faceDet.imgAna
    requires FaceIdentification faceId from faceRec.faceId
    requires DatabaseAccess dbAcc from accCon.dbAcc
}

First, the three components are included via the contains keyword. This is followed by two outputs to constraints to connect them in a pipeline fashion. Then, we introduce two qualities that expose metrics of interest to users or to a quality and resource manager. The BAC application has a latency endToEnd and a quality recQuality, which represent the end-to-end latency of the pipeline and the recognition quality of Face Recognition, respectively. The latency is determined, with a constraint, as the sum of the individual latencies of the pipeline components. The recognition quality is simply a copy of the quality of the Face Recognition component. The required budgets are stated using the requires keyword. With the from clause, they are set equal to the required budgets of the subcomponents. Later, when the platform, which provides budgets, is introduced, we will fulfill these requirements.


The BAC platform

The key component of the BAC platform is the Smart Camera, a machine vision system providing, in normal mode, image capturing and analysis. In advanced mode, it also extracts person Ids from captured images. The Smart Camera is defined as follows.

component SmartCamera {
    contains SmartCameraNormalMode normal or SmartCameraAdvancedMode advanced as sc

    provides ImageCapturing imgCap from sc.imgCap
    provides ImageAnalysis imgAna from sc.imgAna
    provides FaceIdentification faceId from sc.faceId
}

The Smart Camera has two alternative realizations, i.e., it can behave in either one of two ways, each specified by a component: SmartCameraNormalMode and SmartCameraAdvancedMode. The specific alternatives can be referred to through normal and advanced, whereas sc refers to the selected alternative. The Smart Camera provides three budgets, all taken from the selected realization sc.

In normal mode, the Smart Camera has a latency of 25 for image analysis. The image capturing budget is left unspecified. For the purposes of this model, it suffices to specify that the budget exists. Face identification cannot be done in the normal mode.

component SmartCameraNormalMode {
    provides ImageCapturing imgCap
    provides ImageAnalysis imgAna { latency = 25 }

    // The SmartCameraNormalMode does not provide a Face Identification budget.
    // To ensure that the interface is compatible with the Smart Camera interface, a dummy budget is defined,
    // with quality set to bottom and latency set to 1000, assuming that this is always too high
    provides FaceIdentification faceId { qual = bottom & latency = 1000 }
}

In advanced mode, the latency for image analysis is higher, namely 50. In return, the Smart Camera is able to perform Face Identification at low quality with a latency of 20. Hence, selecting the camera mode clearly involves a trade-off.

component SmartCameraAdvancedMode {
    provides ImageCapturing imgCap
    provides ImageAnalysis imgAna { latency = 50 }
    provides FaceIdentification faceId { qual = low & latency = 20 }
}

The compute platform is defined similarly to the Smart Camera. It also has two alternatives, a Cloud Compute Platform and a Local Compute Platform. Both alternatives provide a Face Identification resource budget and a Database Access resource budget. If computations are performed in the cloud, the quality of Face Identification is high, but the latencies of both Face Identification and Database Access are also high, namely 100. Alternatively, computations performed locally, via the Local Compute Platform, lead to a medium identification quality and lower latencies of 50 for both the identification and the database access.

Exercise 2 Complete the following three components. Continue with the earlier BAC-tutorial model.

component ComputePlatform {
component CloudComputePlatform {
component LocalComputePlatform {

The BAC Platform is composed of a Smart Camera component and a Compute Platform. It provides four budgets, Image Capturing, Image Analysis, Face Identification, and Database Access, all inherited from the constituent components. For the mapping of the Face Identification budget, a parameter fr is used, which determines whether face identification runs on the Smart Camera (in advanced mode) or on the Compute Platform (either locally or in the cloud).

Exercise 3 Complete the BACPlatform component in your BAC-tutorial model.

component BACPlatform {

    parameter FaceRecPlatform fr
    provides FaceIdentification faceId
    constraint fr = smartCam => (faceId = ...)
    constraint fr = compPlat => ...


The BAC system

Now that the BAC application and the BAC platform have been defined, we can connect them. This yields the BAC System, which is presented as follows.

main component BACSystem { 
    contains BACApplication app
    contains BACPlatform plt

    parameter FaceRecPlatform fr from


    constraint app.dbAcc runs on plt.dbAcc

The main keyword indicates that this component is the main system, i.e., the component that the QRML tooling uses as its starting point for analysis. BAC System aggregates BAC Application and BAC Platform and forwards the parameter fr to BAC Platform to enable the selection of the platform there. The required budgets are mapped to the provided budgets using the runs on keyword. The qualities of the BAC application need to be forwarded one-to-one to make them available at system level.

Exercise 4 Complete the BACSystem model.


Typedefs, channels and budgets

The BAC model uses nine types: four budgets, two channels, and three general type definitions. Budgets, channels, and any other type definition need an ordering to allow the matching of provided and required budgets, and of provided outputs and required inputs. To match, a provided budget or output needs to be at least equal to a required budget or input. Conversely, a required budget or input should be at most equal to a provided budget or output.

We first identify the four budgets, which are used for required and provided resource budgets. The ordering of a type defines which values are considered better from the perspective of the budget and/or input requester. For types with a standard ordering, such as the integers, smaller values are considered better by default. The default ordering can be overruled through the ordered by keyword.
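To make the matching and ordering rules concrete, here is a plain-Python sketch (not QRML tooling; the names are hypothetical stand-ins) of how a provided value can be checked against a required value under both orderings:

```python
from enum import IntEnum

# Hypothetical stand-in for QRML's Quality enumeration; the
# left-to-right order bottom < low < medium < high is modeled
# by increasing integer values.
class Quality(IntEnum):
    BOTTOM = 0
    LOW = 1
    MEDIUM = 2
    HIGH = 3

def budget_matches(provided: Quality, required: Quality) -> bool:
    """A provided budget matches a required budget when it is at
    least equal to it in the type's ordering (higher quality is better)."""
    return provided >= required

def latency_matches(provided: int, required: int) -> bool:
    """For latencies the default integer ordering applies: smaller is
    better, so the provider must offer at most the required latency."""
    return provided <= required
```

For example, a provided medium quality satisfies a low-quality requirement, while a provided latency of 100 does not satisfy a requirement of 50.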

budget ImageCapturing boolean ordered by = 

budget ImageAnalysis {
    latency : Latency
}

budget FaceIdentification {
    latency : Latency
    qual : Quality
}

budget DatabaseAccess {

We define two channels, used to connect inputs and outputs.

channel Image integer
channel Id integer

And finally, we provide three type definitions, using the typedef keyword. Latency and Quality are used to describe qualities, whereas FaceRecPlatform is used to specify the parameter range for the mapping of Face Recognition onto the platform. Quality and FaceRecPlatform are enumerations with a left-to-right ordering. The Quality type has a bottom value that may be used to specify that quality is undefined.

typedef Quality enumeration {bottom, low, medium, high} ordered left-to-right
typedef Latency // TO BE COMPLETED
typedef FaceRecPlatform enumeration {smartCam, compPlat} ordered left-to-right

Exercise 5 Complete the DatabaseAccess budget and Latency type definitions.

At this point, you should have obtained a complete BAC model without any syntax errors. You may verify your model against the completed BAC model provided on this site. Once you have a complete model without syntax errors, you may proceed to analyze the model.



Analyzing the model

Once a QRML model is complete and free of errors, a constraint model can be generated to study the configurations, or set points, that it supports.
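Conceptually, such a constraint model pairs finite-domain decision variables with constraints over them; a set point is feasible when it satisfies all constraints. The following plain-Python sketch (a toy model with made-up variables and a made-up constraint, not output of the QRML tooling) illustrates the idea by exhaustive search:

```python
from itertools import product

# Toy constraint model (hypothetical): two decision variables with
# finite domains, in the spirit of alternatives and parameter ranges.
domains = {
    "mode": ["normal", "advanced"],   # e.g., an alternative choice
    "latency_budget": [20, 50, 100],  # e.g., a parameter range
}

def satisfies(cfg):
    # A single toy constraint: the advanced mode needs a budget of at least 50.
    return cfg["mode"] != "advanced" or cfg["latency_budget"] >= 50

# Enumerate all configurations and keep the feasible set points.
names = list(domains)
feasible = []
for values in product(*domains.values()):
    cfg = dict(zip(names, values))
    if satisfies(cfg):
        feasible.append(cfg)
```

A constraint solver such as Z3 performs this job symbolically and far more efficiently; in the generated artifact, the model's alternatives, parameters, and constraints play the roles of domains and satisfies here.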

From the model editor, one can click the button labelled Generate Artifacts and subsequently select the z3 option from the drop-down menu.

Generation takes a bit of time and produces a long list of artifacts. Look for the artifact with the extension .z3, select it, and click the button Open Selected Artifact.

A new window or tab will open showing the results of the constraint solver. It either reports that the model has no feasible solution, or it reports that a feasible solution exists and shows the values of all system variables that constitute that solution.

The window allows the optimization to be re-run with a particular, single optimization objective. Select either minimize or maximize and pick, from the drop-down list, the variable that you want to optimize. Then click Run Z3 to optimize.

You can further save the Z3 model to work with Z3 directly.

Exercise 6 Generate a constraint model from the BAC-complete QRML model and confirm that the BAC model has a feasible solution. Check what choices the constraint solver selects. Consider the following questions:

  • what is the end-to-end latency for this solution?
  • what is the recognition quality?
  • which component does the face recognition? the smart camera or the compute platform?
  • is the compute platform local or in the cloud?
  • is this combination optimal?

Exercise 7 Use the constraint solver to determine a configuration of the BAC model that minimizes the end-to-end latency. Consider the following questions:

  • what is the configuration that minimizes the latency?
  • confirm that adding a constraint that requires the latency to be strictly better than the minimum gives an infeasible constraint problem. Recall that the constraint that latency lat is strictly better (i.e., lower) than some constant L is expressed (perhaps somewhat counter-intuitively) by the constraint lat > L.

Exercise 8 Use the constraint solver to determine a configuration that maximizes the recognition quality.

  • check which configuration is used
  • also try maximizing the recognition quality under the extra constraint that the latency is equal to the minimum you have determined earlier.

Exercise 9 Use the model and the constraint solver to determine the maximum achievable recognition quality when the cloud platform is not available.

Exercise 10 Use the model and the constraint solver to determine the entire trade-off space (all Pareto points) of latency and recognition quality.
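Pareto points, as asked for in Exercise 10, are configurations not dominated by any other configuration: no alternative is at least as good in both objectives and strictly better in one. A plain-Python sketch of such a filter (on made-up (latency, quality) pairs in which lower latency and higher quality are better, not values from the BAC model):

```python
def dominates(a, b):
    """a dominates b if a has latency at most that of b, quality at
    least that of b, and differs from b in at least one objective."""
    (lat_a, qual_a), (lat_b, qual_b) = a, b
    return lat_a <= lat_b and qual_a >= qual_b and a != b

def pareto_points(points):
    """Keep only the points that no other point dominates."""
    return [p for p in points if not any(dominates(q, p) for q in points)]

# Hypothetical trade-off space: (latency, quality) pairs.
configs = [(10, 1), (20, 3), (30, 2), (40, 4)]
```

Here pareto_points(configs) drops (30, 2), which is dominated by (20, 3); the remaining three points form the trade-off curve between latency and quality.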


This work has received funding from the Electronic Component Systems for European Leadership (ECSEL) Joint Undertaking under grant agreements no. 783162 (FitOpTiVis) and no. 101007260 (TRANSACT).