Quantitative specification and verification in Maude

This is a toolset for specifying probabilistic models on top of nondeterministic specifications in the Maude language, based on rewriting logic. Probabilities can be specified with several alternative methods that are explained here, including a highly expressive probabilistic extension of the Maude strategy language.

These specifications can be analyzed with probabilistic and stochastic model-checking techniques using related tools:

| Tool | Approach | Options | Repository | Documentation | How to get | Dependencies |
|------|----------|---------|------------|---------------|------------|--------------|
| umaudemc's `pcheck` command | probabilistic | Probability of LTL, CTL, and PCTL* formulae / Expected reward calculation / Steady-state and transient probabilities | umaudemc | Section 5.2 | `pip install umaudemc` | PRISM or Storm |
| umaudemc's `scheck` command | statistical | Estimation of QuaTEx expressions | umaudemc | Section 5.3 | `pip install umaudemc` | SciPy (optional) |
| mvmaude simulator | statistical | Estimation of QuaTEx expressions | multivesta-maude | Section 6 | Download | MultiVeSta |
| simaude / `Term.srewrite` | statistical | Frequency of results of a probabilistic strategy | maudesmc / maude-bindings | See below | Download / `pip install maude` | None |

Moreover, qualitative model checking and other features of Maude can be seamlessly applied to the base nondeterministic specification. In particular, the check command of umaudemc can be used to check LTL, CTL, CTL*, and μ-calculus properties.

More information on specification and these tools is available in the user manual, which also covers the qualitative and strategy-aware model checkers explained here, and in the FM 2023 paper on this topic.

Getting the tools

Using Docker

A ready-to-use Docker image can be started with

```
docker run -ti ghcr.io/fadoss/umaudemc:qmaude
```

where the umaudemc, mvmaude, maude, and simaude commands are available, along with documentation and examples. Both PRISM and Storm are installed as backends for the umaudemc tool. A fixed Docker image prepared for the Artifact Evaluation of FM 2023 is also available at Zenodo with almost the same contents.


For both probabilistic and statistical model checking, the following should be installed:

Moreover, for probabilistic model checking with umaudemc pcheck, at least one of these model checkers should be installed as indicated on their websites:

For statistical model checking with umaudemc scheck there are no additional compulsory requirements, but installing SciPy is convenient. Otherwise, the normal distribution will be used for calculating confidence intervals instead of the Student's t-distribution with as many degrees of freedom as the number of simulation runs. Alternatively, our custom simulator for the statistical model checker MultiVeSta can also be used, but we recommend using scheck instead.
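To illustrate the difference between the two confidence intervals, the following sketch estimates a mean from hypothetical simulation samples (the numbers are invented for illustration) and picks the quantile from the Student's t-distribution when SciPy is available, falling back to the standard normal otherwise:

```python
import math
import statistics

# Hypothetical samples of a QuaTEx expression over several simulation runs
samples = [0.52, 0.47, 0.55, 0.49, 0.51, 0.48, 0.53, 0.50, 0.46, 0.54]

n = len(samples)
mean = statistics.mean(samples)
sem = statistics.stdev(samples) / math.sqrt(n)  # standard error of the mean

try:
    # With SciPy, use the Student's t-distribution (n - 1 degrees of
    # freedom is the textbook choice; adjust to match the tool's convention)
    from scipy.stats import t
    quantile = t.ppf(0.975, n - 1)
except ImportError:
    # Fallback: the 97.5% quantile of the standard normal distribution
    quantile = 1.959963984540054

radius = quantile * sem
print(f'{mean:.4f} ± {radius:.4f} (95% confidence)')
```

With few runs the t-quantile is noticeably larger than the normal one, so the interval is wider; the two converge as the number of runs grows.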


The collection of examples is available at github.com/fadoss/strat-examples/probabilistic and github.com/fadoss/multivesta-maude/tests. Among them:

Custom simulations of strategies

Using the srewrite and dsrewrite commands of the Maude interpreter, probabilistic strategies can be manually simulated. The official releases of Maude at maude.cs.illinois.edu do not currently support this probabilistic extension of the strategy language, but a modified version can be downloaded from here. However, those commands are not practical for running many simulations, as is usually needed. The previous downloads include a program

```
simaude <Maude file> <initial term> <strategy> [-j <jobs>] [-n <simulations>] [--help]
```

that executes a probabilistic strategy many times and shows the frequency of its results.
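As a rough sketch of the kind of output simaude produces, the following pure-Python mock (the outcomes, weights, and number of runs are invented for illustration, not taken from the tool) runs a weighted choice many times and tallies the frequency of each result:

```python
import random
from collections import Counter

random.seed(2023)  # fix the seed for reproducible runs

# A mock probabilistic strategy: a fair coin as a weighted choice
def run_strategy():
    return random.choices(['head', 'tail'], weights=[1, 1])[0]

N = 10_000
counts = Counter(run_strategy() for _ in range(N))

# Print each result with its observed relative frequency
for term, count in counts.most_common():
    print(f'{term}\t{count / N:.4f}')
```

Each observed frequency approaches the probability of the corresponding result as the number of simulations grows.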

Moreover, the maude Python library supports the probabilistic extension of the strategy language, and strategies can be executed with the Term.srewrite method, resetting the pseudo-random number generator if needed with maude.setRandomSeed.

Performance experiments

Probabilistic model checking

Using the Python profiler cProfile, we have analyzed how much of the execution time of pcheck is spent in the external model-checking backend and how much in the probabilistic model generation and other duties within umaudemc. The annotated pcheck commands in the examples of this repository have been used for the comparison. Unsurprisingly, the results are highly dependent on the hardness of the particular model and property specification.

Plot of the percentage of execution time per backend

Over the whole test suite, the proportion of execution time spent in the external backends is 97.58% using PRISM (in client mode, to avoid the initialization overhead of the Java VM), 62.65% using Storm, and 42.52% using Storm through its Python bindings.

The raw results and the scripts for reproducing them are here.

Statistical model checking

Regarding statistical model checking with MultiVeSta, we have compared the performance of our simulator with that of the PMaude simulator already included in that tool. Its fair dice example and our adaptation of it have been used as the test case. These specifications are equivalent, although not completely identical. We have executed the mvmaude simulator with the equivalent assignment methods uniform and step and with the unithrow strategy. The results, as speedups with respect to the execution time of the PMaude simulator, are in the table below.

| Query | Max. simulations | Jobs | Speedup with uniform | Speedup with step |
|-------|------------------|------|----------------------|-------------------|
| Extracting face 1 | 300 | 1 | 15.53 | 15.499 |
| Extracting each face | 300 | 1 | 13.902 | 14.08 |
| | | 1 | 21.73 | 24.19 |
| | | 3 | 8.75 | 9 |

The raw results and the scripts for reproducing them are here.