Testing Models

Unit Testing

Testing in the Credmark Model Framework is based on the standard Python unittest module.

Creating Tests

Create unit tests in Python files with a specific naming pattern; the default is test*.py, for example test_model.py. Typically the tests should be in the same folder as the model they test.

Note

In order for test discovery to load your tests, they must be in a folder containing an __init__.py file, so create one in your models folder if it does not exist.

Unit tests must be implemented as subclasses of credmark.cmf.engine.model_unittest.ModelTestCase, with test_* methods implementing the individual tests.
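
For example, a minimal test file might look like this (a sketch; contrib.my-model is a placeholder slug):

from credmark.cmf.engine.model_unittest import ModelTestCase


class MyModelTest(ModelTestCase):

    def test_run(self):
        # self.context is the default context created by ModelTestCase
        output = self.context.run_model('contrib.my-model', {})
        self.assertIsNotNone(output)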

class ModelTestCase(methodName='runTest')

A superclass for unittest TestCase instances that use framework classes and call other models.

A ModelTestCase has a default context defined during the tests which is available at self.context.

To configure a specific context and mocks for a test, it is recommended to use the model_context() decorator on test methods.

Example:

@model_context(block_number=5000)
def test_model(self):
    # self.context.block_number == 5000
    ...

Alternatively, a test method can use the self.create_model_context() to create a new context and set mocks during a test.
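
For example (a sketch; the block number is illustrative):

def test_at_block(self):
    # Replace the current context with one at the given block
    self.create_model_context(chain_id=1, block_number=5000)
    self.assertEqual(self.context.block_number, 5000)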

property context

Gets the context. If it doesn’t exist, a default context will be created. Use the model_context() decorator or call self.create_model_context() to configure a context.

create_model_context(chain_id=1, block_number=17222851, mocks=None)

Create a new model context and set it as the current context.

Parameters
  • chain_id (int) – chain ID for the context

  • block_number (int) – block number for the context

  • mocks (ModelMockConfig) – configuration of mock models to use when running models during the test

You can use the credmark.cmf.engine.model_unittest.model_context() decorator on your test methods to configure a specific context and mocks for the test:

model_context(chain_id=1, block_number=17222851, mocks=None)

A decorator that can be used on a test method in a ModelTestCase subclass to configure the context and mocks to use during the test.

Example:

@model_context(block_number=5000)
def test_model(self):
    # self.context.block_number == 5000
    ...

Parameters
  • chain_id (int) – chain ID for the context

  • block_number (int) – block number for the context

  • mocks (ModelMockConfig) – configuration of mock models to use when running models during the test

See the Example Unit Test section below for a complete example.

Running Tests

You can run tests using:

credmark-dev test

By default, it will run all tests in the models folder. To limit tests to a specific folder, you can pass a folder argument:

credmark-dev test models/contrib/mymodels

You can also use the --pattern argument to change the file matching pattern used for test discovery. The default is test*.py.
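
For example, to run only tests in files whose names start with test_price in a specific folder (the pattern and folder here are illustrative):

credmark-dev test --pattern "test_price*.py" models/contrib/mymodels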

Mocks

When testing a model that calls other models, you can use mock models to generate predictable output. Mock models are simply configured static model outputs for a specified model slug (and, optionally, partial input properties to match).

For example, if you’re developing a model that queries some data (by calling a ledger model, for instance) and then does some processing on it, you can mock the ledger model to return a set of test data.
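
For example, a sketch of such a mock, using the classes described below (the data value is illustrative):

from credmark.cmf.engine.mocks import ModelMock, ModelMockConfig

mocks = ModelMockConfig(
    models={
        # Static test data returned whenever the ledger model is run
        'ledger.block_data': ModelMock({'data': [{'difficulty': 13867018111894316}]}),
    })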

Mock Configuration

Model mocks are defined using the credmark.cmf.engine.mocks.ModelMock and credmark.cmf.engine.mocks.ModelMockConfig classes. You can put your mocks in the same file as the model that uses them or in another file in your models folder under models/contrib.

credmark.cmf.engine.mocks.ModelMockConfig defines a full configuration of (multiple) mocks that can be used when running credmark-dev. It contains a models property, a dictionary whose keys are model slugs and whose values are a single credmark.cmf.engine.mocks.ModelMock instance or a list of them.

class ModelMockConfig(models, run_unmocked=False)

Configuration of mock models.

Parameters
  • models (dict) – a dict where each key is a model slug and the value is a ModelMock instance or list of ModelMock instances.

  • run_unmocked (boolean) – If true, an unmocked model will be run normally. Otherwise an exception will be raised.

Example:

config = ModelMockConfig(
    run_unmocked=False,  # unmocked models cannot be run

    models={
        "example.echo": ModelMock("example.echo"),
        "contrib.a": ModelMock({"a": 42}),
        "contrib.b": ModelMock([{"b": 1}, {"b": 2}, "contrib.b"]),
        "contrib.b-rep": ModelMock([{"b": 1}, {"b": 2}, "contrib.b-rep"],
                                   repeat=2),
        "contrib.c": ModelMock({"c": 42}, input={"address": "0x00000000"}, repeat=2),
        "contrib.d": ModelMock([{"d": 1}, {"d": 2}],
                               input={"address": "0x00000000"}),
        "contrib.e": [ModelMock({"e": 42}, input={"address": "0x00000000"}),
                      ModelMock({"e": 1})],
        "contrib.f": [ModelMock({"f": 42}, input={"address": "0x00000000"}),
                      ModelMock("contrib.f")],
        "contrib.g": ModelMock([ModelMock({"g": 1}, repeat=1),
                                ModelMock({"g": 2}, repeat=2),
                                ModelMock({"g": 3}, repeat=3)],
                               repeat=2),
        "contrib.h": ModelMock(ModelDataError('Mock data error')),
    }
)

When using model mocks, all run_model() calls will try to use a mock. If there is no entry (or no input-matching entry) for a run_model() slug and run_unmocked is False, an exception will be raised.

A credmark.cmf.engine.mocks.ModelMock instance contains the output for the model, which can be a dict or DTO instance, a ModelBaseError subclass instance, a string, another ModelMock instance, or a list containing any of these types. It also has a repeat property that specifies how many times the mock output can be used; the default is 0, which means the output can be used an unlimited number of times.

class ModelMock(output, input=None, repeat=0)

Defines mock output for a model and options for how the mock is used.

Parameters
  • output – the mock output for the model (see the accepted value types below)

  • input (dict) – optional full or partial model input used to match against the input passed to run_model()

  • repeat (int) – number of times the mock output can be used; 0 (the default) means unlimited

The output value can be:

  • a string representing the slug of an actual model to run

  • a dict representing the output from a call to run_model(slug).

  • a DTO instance representing the output from a call to run_model(slug).

  • a ModelBaseError subclass instance representing an error that will be raised

  • a ModelMock instance to use as the output.

  • a list of dicts, DTOs, errors, or strings (as described above), representing outputs returned one at a time for successive calls to run_model(slug).

An input dict can contain a full or partial model input; if its values match the input passed to run_model(slug), it is considered a match and the output is used. (Currently only a top-level shallow comparison of input values is performed.)

NOTE: Mocks in the top-level list for a slug (i.e. not nested inside a ModelMock instance) that have input set are used as a lookup: they are tried before the other mocks in the list. Mocks with no input match criteria are only used after all mocks with input have been tried without a match. For mocks that are not in the top-level list for a slug, input simply acts as a filter, applied in the normal order.
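
For example, in the following sketch (the slug and outputs are illustrative; the address is the one used elsewhere on this page), a call whose input contains the matching address returns {'symbol': 'CMK'}, while any other input falls through to the mock with no input criteria:

'contrib.token-info': [
    ModelMock({'symbol': 'CMK'},
              input={'address': '0x68cfb82eacb9f198d508b514d898a403c449533e'}),
    ModelMock({'symbol': 'UNKNOWN'}),  # fallback when no input-matching mock applies
],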

Generating Mocks

Instead of manually creating mocks, you can generate them automatically from a model run using the credmark-dev tool. To do this, use the --generate_mocks option, passing the path of a Python file to create (or overwrite) with the generated mocks.

For example running:

credmark-dev run example.ledger-blocks --generate_mocks ./mocks.py

will generate the file ./mocks.py with contents something like:

from credmark.cmf.engine.mocks import ModelMock, ModelMockConfig


mocks = ModelMockConfig(
    run_unmocked=False,
    models={
        'ledger.block_data': [ModelMock(
            {'data': [{'difficulty': 13867018111894316},
                      {'difficulty': 13859975667743356},
                      {'difficulty': 13859700789836412},
                      {'difficulty': 13859425911929468},
                      {'difficulty': 13859151034022524},
                      {'difficulty': 13872423444635732},
                      {'difficulty': 13865378362450248},
                      {'difficulty': 13871876861917288},
                      {'difficulty': 13898747976151264},
                      {'difficulty': 13898473098244320}]},
            input=None, repeat=1)]
    }
)

Removing Top-Level Generated Mocks

If your model is calling models that themselves call other models, you may want to remove the top-level models from your mocks. If you do this, you will also need to set run_unmocked=True in your ModelMockConfig.

If you have a model that calls a "series" model, you probably want to remove the "series" model from your mocks. For example, if your model calls "series.block-window-interval" to run an "example.echo" model over a series of blocks, the generated mocks might look something like this:

from credmark.cmf.engine.mocks import ModelMock, ModelMockConfig


mocks = ModelMockConfig(
    run_unmocked=False,
    models={
        'example.echo': [ModelMock({'echo': 'Hello'}, input=None, repeat=1),
                         ModelMock({'echo': 'Hello'}, input=None, repeat=1),
                         ModelMock({'echo': 'Hello'}, input=None, repeat=1)],
        'rpc.get-block-range-block-window-interval': [ModelMock(
            {'blockNumbers': [
                {'blockNumber': 9999, 'blockTimestamp': 1438334590,
                 'sampleTimestamp': 1438334590},
                {'blockNumber': 10000, 'blockTimestamp': 1438334627,
                 'sampleTimestamp': 1438334627},
                {'blockNumber': 10001, 'blockTimestamp': 1438334639,
                 'sampleTimestamp': 1438334639}]},
            input=None, repeat=1)],
        'series.block-window-interval': [ModelMock(
            {'series': [
                {'blockNumber': 9999, 'blockTimestamp': 1438334590,
                 'sampleTimestamp': 1438334590, 'output': {'echo': 'Hello'}},
                {'blockNumber': 10000, 'blockTimestamp': 1438334627,
                 'sampleTimestamp': 1438334627, 'output': {'echo': 'Hello'}},
                {'blockNumber': 10001, 'blockTimestamp': 1438334639,
                 'sampleTimestamp': 1438334639, 'output': {'echo': 'Hello'}}],
             'errors': None},
            input=None, repeat=1)]
    }
)

You can remove the "series.block-window-interval" mock and set run_unmocked=True so it actually runs the "series.block-window-interval" model and uses the mocks for the "example.echo" and "rpc.get-block-range-block-window-interval" models:

from credmark.cmf.engine.mocks import ModelMock, ModelMockConfig


mocks = ModelMockConfig(
    run_unmocked=True,
    models={
        'example.echo': [ModelMock({'echo': 'Hello'}, input=None, repeat=1),
                         ModelMock({'echo': 'Hello'}, input=None, repeat=1),
                         ModelMock({'echo': 'Hello'}, input=None, repeat=1)],
        'rpc.get-block-range-block-window-interval': [ModelMock(
            {'blockNumbers': [
                {'blockNumber': 9999, 'blockTimestamp': 1438334590,
                 'sampleTimestamp': 1438334590},
                {'blockNumber': 10000, 'blockTimestamp': 1438334627,
                 'sampleTimestamp': 1438334627},
                {'blockNumber': 10001, 'blockTimestamp': 1438334639,
                 'sampleTimestamp': 1438334639}]},
            input=None, repeat=1)],
    }
)

Testing with Mocks

To use mocks in a unit test, you can pass a configured ModelMockConfig instance to the credmark.cmf.engine.model_unittest.model_context() decorator on a test method in your ModelTestCase subclass.

model_mocks_config = ModelMockConfig(
    models={
        'price': [
            ModelMock(Price(price=42.0, src='mock.price'),
                      input={'address': '0x68cfb82eacb9f198d508b514d898a403c449533e'}),
        ]
    })

class ModelTest(ModelTestCase):

    @model_context(chain_id=1,
                   block_number=140000,
                   mocks=model_mocks_config)
    def test_price(self):
        ...

Running with Mocks

You can also run your model using mocks. To do that, pass credmark-dev a parameter specifying the location of the ModelMockConfig instance you want to use: the module name (in dot notation) followed by a dot and the variable name.

For example, if you have a ModelMockConfig defined in the file models/contrib/my_models/my_model.py with the variable name my_mocks, the model mock config is specified with models.contrib.my_models.my_model.my_mocks. You can run the model contrib.my-model with the mocks using:

credmark-dev run contrib.my-model --model_mocks models.contrib.my_models.my_model.my_mocks

Example Using Mocks

Here is an example of a model file that defines a mock for the model slug price. In this case there is one result, for requests whose partial input contains an address of '0x68cfb82eacb9f198d508b514d898a403c449533e'. To extend this, more ModelMocks could be added to the list, or a single ModelMock with no input to match could be used (its output would then be returned for any request to the price model).

from credmark.cmf.model import Model
from credmark.cmf.engine.mocks import ModelMockConfig, ModelMock
from credmark.cmf.types import Price, Token

model_mocks_config = ModelMockConfig(
    models={
        'price': [
            ModelMock(Price(price=42.0, src='mock.price'),
                      input={'address': '0x68cfb82eacb9f198d508b514d898a403c449533e'}),
        ]
    })


@Model.describe(slug='contrib.cmk-price',
                version='1.0',
                display_name='CMK Price',
                description='CMK Price test model',
                output=Price)
class CMKPriceModel(Model):

    def run(self, input: dict) -> Price:
        token = Token(symbol='CMK')
        price = Price(**self.context.models.price(token))
        return price

If this model was in the file models/contrib/my_models/cmk_price.py, you could run it with:

credmark-dev run contrib.cmk-price -m models.contrib.my_models.cmk_price.model_mocks_config

and the output will be something like:

{
  "slug": "contrib.cmk-price",
  "version": "1.0",
  "output": {
    "price": 42.0,
    "src": "mock.price"
  },
  "dependencies": {
    "price": {
      "0.0": 1
    },
    "contrib.cmk-price": {
      "1.0": 1
    }
  }
}

Example Unit Test

You can use mocks in a unit test by decorating your test method:

from credmark.cmf.engine.mocks import ModelMock, ModelMockConfig
from credmark.cmf.engine.model_unittest import ModelTestCase, model_context
from credmark.cmf.types import Price

model_mocks_config = ModelMockConfig(
    models={
        'price': [
            ModelMock(Price(price=42.0, src='mock.price'),
                      input={'address': '0x68cfb82eacb9f198d508b514d898a403c449533e'}),
        ]
    })

class ModelTest(ModelTestCase):

    @model_context(chain_id=1,
                   block_number=140000,
                   mocks=model_mocks_config)
    def test_price(self):

        output = self.context.models.contrib.cmk_price()  # dashes in slugs become underscores

        self.assertEqual(output['price'], 42.0)
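
Assuming this test is saved in a file matching the discovery pattern (for example test_price.py) under models/contrib/my_models, you could then run it with:

credmark-dev test models/contrib/my_models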