Octue SDK (Python)

The python SDK for Octue data services, digital twins, and applications - get faster data groundwork so you have more time for the science!

Definition

Octue service

An Octue data service, digital twin, or application that can be asked questions, process them, and return answers. Octue services can communicate with each other with minimal extra setup.

Key features

Unified cloud/local file, dataset, and manifest operations

  • Create and build datasets easily

  • Organise them with timestamps, labels, and tags

  • Filter and combine them using this metadata

  • Store them locally or in the cloud (or both for low-latency reading/writing with cloud-guaranteed data availability)

  • Use internet/cloud-based datasets as if they were local e.g.

    • https://example.com/important_dataset.dat

    • gs://example-bucket/important_dataset.dat

  • Create manifests (a set of datasets needed for a particular analysis) to modularise your dataset input/output

Ask existing services questions from anywhere

  • Send them data to process from anywhere

  • Automatically have their logs, monitor messages, and any errors forwarded to you and displayed as if they were local

  • Receive their output data as JSON

  • Receive a manifest of any output datasets they produce for you to download or access as you wish

Create, run, and deploy your apps as services

  • No need to change your app - just wrap it

  • Use the octue CLI to run your service locally or deploy it to Google Cloud Run or Google Dataflow

  • Create JSON-schema interfaces to explicitly define the form of configuration, input, and output data

  • Ask other services questions as part of your app (i.e. build trees of services)

  • Automatically display readable, colourised logs, or use your own log handler

  • Avoid time-consuming and confusing devops, cloud configuration, and backend maintenance

High standards, quick responses, and good intentions

  • Open-source and transparent on GitHub - anyone can see the code and raise an issue

  • Automated testing, standards, releases, and deployment

  • High test coverage

  • Works on macOS, Linux, and Windows

  • Developed not-for-profit for the renewable energy industry

Need help, found a bug, or want to request a new feature?

We use GitHub Issues 1 to manage:

  • Bug reports

  • Feature requests

  • Support requests

Footnotes

1

Bug reports, feature requests, and support requests may also be made directly to your Octue support contact or via the support pages.

Installation

Pip

pip install octue==x.y.z

Poetry

Read more about Poetry here.

poetry add octue==x.y.z

Add to your dependencies

To use a specific version of the Octue SDK in your python application, simply add:

octue==x.y.z

to your requirements.txt or setup.py file, where x.y.z is your preferred version of the SDK (we recommend the latest stable version).

Datafiles, datasets, and manifests

One of the main features of octue is making it easy to use, create, and share scientific datasets. There are three main data classes in the SDK that do this.

Datafile

Definitions

Datafile

A single local or cloud file, its metadata, and helper methods.

Locality

A datafile has one of these localities:

  • Cloud-based: it exists only in the cloud

  • Local: it exists only on your local filesystem

  • Cloud-based and local: it’s cloud-based but has been downloaded for low-latency reading/writing

Tip

Use a datafile to work with a file if you want to:

  • Read/write to local and cloud files in the same way

  • Include it in a dataset that can be sent to an Octue service for processing

  • Add metadata to it for future sorting and filtering

Key features

Work with local and cloud data

Working with a datafile is the same whether it’s local or cloud-based. It’s also almost identical to using python’s built-in open function. For example, to write to a datafile:

from octue.resources import Datafile

datafile = Datafile("path/to/file.dat")

# Or:

datafile = Datafile("gs://my-bucket/path/to/file.dat")

with datafile.open("w") as f:
    f.write("Some data")
    datafile.labels.add("processed")

All the same file modes you’d use with python’s built-in open context manager are available for datafiles e.g. "r" and "a".

Automatic lazy downloading

Save time and bandwidth by only downloading when necessary.

Downloading data from cloud datafiles is automatic and lazy so you get both low-latency content read/write and quick metadata reads. This makes viewing and filtering by the metadata of cloud datasets and datafiles quick and avoids unnecessary data transfer, energy usage, and costs.

Datafile content isn’t downloaded until you:

  • Try to read or write its contents using the Datafile.open context manager

  • Call its download method

  • Use its local_path property

Read more about downloading files here.
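To illustrate the pattern (a plain-python sketch of lazy downloading in general, not octue's actual implementation), content can be fetched only on first access and then cached:

```python
import functools


class LazyFile:
    """Sketch of a lazily-downloaded file: metadata is available
    immediately, but content is only fetched on first access."""

    def __init__(self, path, metadata):
        self.path = path
        self.metadata = metadata  # Cheap: no download needed.
        self.download_count = 0

    @functools.cached_property
    def content(self):
        """Simulate downloading the file content on first access only."""
        self.download_count += 1
        return f"contents of {self.path}"


file = LazyFile("gs://my-bucket/data.csv", {"labels": {"raw"}})
assert file.download_count == 0  # Reading metadata didn't download anything.
_ = file.content
_ = file.content
assert file.download_count == 1  # Content fetched once, then cached.
```

Viewing or filtering by metadata only touches the cheap attributes, so no data transfer happens until the content itself is needed.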

CLI command friendly

Use any command line tool on your datafiles. Datafiles are python objects, but they represent real files that can be fed to any CLI command you like:

import subprocess
output = subprocess.check_output(["openfast", datafile.local_path])

Easy and expandable custom metadata

Find the needle in the haystack by making your data searchable. You can set the following metadata on a datafile:

  • Timestamp

  • Labels (a set of lowercase strings)

  • Tags (a dictionary of key-value pairs)

This metadata is stored locally in a .octue file for local datafiles or on the cloud objects for cloud datafiles and is used during Datafile instantiation. It can be accessed like this:

datafile.timestamp
>>> datetime.datetime(2022, 5, 4, 17, 57, 57, 136739)

datafile.labels
>>> {"processed"}

datafile.tags
>>> {"organisation": "octue", "energy": "renewable"}

You can update the metadata by setting it on the instance while inside the Datafile.open context manager.

with datafile.open("a"):
    datafile.labels.add("updated")

You can do this outside the context manager too, but you then need to call the update_metadata method:

datafile.labels.add("updated")
datafile.update_metadata()

Upload an existing local datafile

Back up and share your datafiles for collaboration. You can upload an existing local datafile to the cloud without using the Datafile.open context manager if you don’t need to modify its contents:

datafile.upload("gs://my-bucket/my_datafile.dat", update_metadata=True)

Get file and metadata hashes

Make your analysis reproducible: guarantee a datafile contains exactly the same data as before by checking its hash.

datafile.hash_value
>>> 'mnG7TA=='

You can also check that any metadata is the same.

datafile.metadata_hash_value
>>> 'DIgCHg=='
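As an illustration of why hashes guarantee reproducibility (a plain-python sketch using a generic digest, not octue's actual hashing scheme):

```python
import base64
import hashlib


def content_digest(data: bytes) -> str:
    """Return a short base64 digest of some bytes (illustrative only)."""
    return base64.b64encode(hashlib.sha256(data).digest())[:8].decode()


# Identical content always gives an identical digest...
assert content_digest(b"some data") == content_digest(b"some data")

# ...and any change to the content changes the digest.
assert content_digest(b"some data!") != content_digest(b"some data")
```

Comparing a stored digest against a freshly computed one is enough to confirm the data is byte-for-byte unchanged.
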
Immutable ID

Each datafile has an immutable UUID:

datafile.id
>>> '9a1f9b26-6a48-4f2d-be80-468d3270d79b'

Check a datafile’s locality

Is this datafile local or in the cloud?

datafile.exists_locally
>>> True

datafile.exists_in_cloud
>>> False

A cloud datafile that has been downloaded will return True for both of these properties.

Represent HDF5 files

Support fast I/O processing and storage.

Warning

If you want to represent HDF5 files with a Datafile, you must include the extra requirements provided by the hdf5 key at installation i.e.

pip install octue[hdf5]

or

poetry add octue -E hdf5

Usage examples

The Datafile class can be used functionally or as a context manager. When used as a context manager, it is analogous to python’s built-in open function. On exiting the context (the with block), it closes the datafile locally and, if the datafile also exists in the cloud, updates the cloud object with any data or metadata changes.

[Image: datafile use cases]

Example A

Scenario: Download a cloud object, calculate Octue metadata from its contents, and add the new metadata to the cloud object

Starting point: Object in cloud with or without Octue metadata

Goal: Object in cloud with updated metadata

from octue.resources import Datafile


datafile = Datafile("gs://my-bucket/path/to/data.csv")

with datafile.open() as f:
    data = f.read()
    new_metadata = metadata_calculating_function(data)

    datafile.timestamp = new_metadata["timestamp"]
    datafile.tags = new_metadata["tags"]
    datafile.labels = new_metadata["labels"]

Example B

Scenario: Add or update Octue metadata on an existing cloud object without downloading its content

Starting point: A cloud object with or without Octue metadata

Goal: Object in cloud with updated metadata

from datetime import datetime
from octue.resources import Datafile


datafile = Datafile("gs://my-bucket/path/to/data.csv")

datafile.timestamp = datetime.now()
datafile.tags = {"manufacturer": "Vestas", "output": "1MW"}
datafile.labels = {"new"}

datafile.upload(update_metadata=True)  # Or, datafile.update_metadata()

Example C

Scenario: Read in the data and Octue metadata of an existing cloud object without intent to update it in the cloud

Starting point: A cloud object with Octue metadata

Goal: Cloud object data (contents) and metadata held locally in local variables

from octue.resources import Datafile


datafile = Datafile("gs://my-bucket/path/to/data.csv")

with datafile.open() as f:
    data = f.read()

metadata = datafile.metadata()

Example D

Scenario: Create a new cloud object from local data, adding Octue metadata

Starting point: A local file (or content data in a local variable) with Octue metadata stored in local variables

Goal: A new object in the cloud with data and Octue metadata

For creating new data in a new local file:

from octue.resources import Datafile


datafile = Datafile(
    "path/to/local/file.dat",
    tags={"cleaned": True, "type": "linear"},
    labels={"Vestas"}
)

with datafile.open("w") as f:
    f.write("This is some cleaned data.")

datafile.upload("gs://my-bucket/path/to/data.dat")

For existing data in an existing local file:

from octue.resources import Datafile


tags = {"cleaned": True, "type": "linear"}
labels = {"Vestas"}

datafile = Datafile(path="path/to/local/file.dat", tags=tags, labels=labels)
datafile.upload("gs://my-bucket/path/to/data.dat")

Dataset

Definitions

Dataset

A set of related datafiles that exist in the same location, dataset metadata, and helper methods.

Locality

A dataset has one of these localities:

  • Cloud-based: it exists only in the cloud

  • Local: it exists only on your local filesystem

Tip

Use a dataset if you want to:

  • Group together a set of files that naturally relate to each other e.g. a timeseries that’s been split into multiple files.

  • Add metadata to it for future sorting and filtering

  • Include it in a manifest with other datasets and send them to an Octue service for processing

Key features

Work with local and cloud datasets

Working with a dataset is the same whether it’s local or cloud-based.

from octue.resources import Dataset

dataset = Dataset(path="path/to/dataset", recursive=True)

dataset = Dataset(path="gs://my-bucket/path/to/dataset", recursive=True)

Upload a dataset

Back up and share your datasets for collaboration.

dataset.upload("gs://my-bucket/path/to/upload")

Download a dataset

Use a shared or public dataset or retrieve a backup.

dataset.download("path/to/download")

Easy and expandable custom metadata

Find the needle in the haystack by making your data searchable. You can set the following metadata on a dataset:

  • Name

  • Labels (a set of lowercase strings)

  • Tags (a dictionary of key-value pairs)

This metadata is stored locally in a .octue file in the same directory as the dataset and is used during Dataset instantiation. It can be accessed like this:

dataset.name
>>> "my-dataset"

dataset.labels
>>> {"processed"}

dataset.tags
>>> {"organisation": "octue", "energy": "renewable"}

You can update the metadata by setting it on the instance while inside the Dataset context manager.

with dataset:
    dataset.labels.add("updated")

You can do this outside the context manager too, but you then need to call the update_metadata method:

dataset.labels.add("updated")
dataset.update_metadata()

Get dataset and metadata hashes

Make your analysis reproducible: guarantee a dataset contains exactly the same data as before by checking its hash.

dataset.hash_value
>>> 'uvG7TA=='

Note

A dataset’s hash is a function of its datafiles’ hashes. Datafile and dataset metadata do not affect it.

You can also check that dataset metadata is the same.

dataset.metadata_hash_value
>>> 'DIgCHg=='

Immutable ID

Each dataset has an immutable UUID:

dataset.id
>>> '9a1f9b26-6a48-4f2d-be80-468d3270d79c'

Check a dataset’s locality

Is this dataset local or in the cloud?

dataset.exists_locally
>>> True

dataset.exists_in_cloud
>>> False

A dataset can only return True for one of these at a time.

Filter datasets

Narrow down a dataset to just the files you want, avoiding extra downloading and processing.

Datafiles in a dataset are stored in a FilterSet, meaning they can be easily filtered by any attribute of the datafiles contained e.g. name, extension, ID, timestamp, tags, labels, size. The filtering syntax is similar to Django’s i.e.

# Get datafiles that have an attribute that satisfies the filter.
dataset.files.filter(<datafile_attribute>__<filter>=<value>)

# Or, if your filter is a simple equality filter:
dataset.files.filter(<datafile_attribute>=<value>)

Here’s an example:

# Make a dataset.
dataset = Dataset(
    path="blah",
    files=[
        Datafile(path="my_file.csv", labels=["one", "a", "b", "all"]),
        Datafile(path="your_file.txt", labels=["two", "a", "b", "all"]),
        Datafile(path="another_file.csv", labels=["three", "all"]),
    ]
)

# Filter it!
dataset.files.filter(name__starts_with="my")
>>> <FilterSet({<Datafile('my_file.csv')>})>

dataset.files.filter(extension="csv")
>>> <FilterSet({<Datafile('my_file.csv')>, <Datafile('another_file.csv')>})>

dataset.files.filter(labels__contains="a")
>>> <FilterSet({<Datafile('my_file.csv')>, <Datafile('your_file.txt')>})>

You can iterate through the filtered files:

for datafile in dataset.files.filter(labels__contains="a"):
    print(datafile.name)
>>> 'my_file.csv'
    'your_file.txt'

If there’s just one result, get it via the FilterSet.one method:

dataset.files.filter(name__starts_with="my").one()
>>> <Datafile('my_file.csv')>

You can also chain filters or specify them all at the same time - these two examples produce the same result:

# Chaining multiple filters.
dataset.files.filter(extension="csv").filter(labels__contains="a")
>>> <FilterSet({<Datafile('my_file.csv')>})>

# Specifying multiple filters at once.
dataset.files.filter(extension="csv", labels__contains="a")
>>> <FilterSet({<Datafile('my_file.csv')>})>

For the full list of available filters, click here.
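If you're curious how the <datafile_attribute>__<filter> syntax can work under the hood, here's a minimal plain-python sketch (not the SDK's FilterSet implementation):

```python
from types import SimpleNamespace


class MiniFilterSet:
    """Minimal sketch of Django-style '<attribute>__<filter>' filtering."""

    FILTERS = {
        "starts_with": lambda value, target: value.startswith(target),
        "contains": lambda value, target: target in value,
    }

    def __init__(self, items):
        self.items = list(items)

    def filter(self, **kwargs):
        results = self.items
        for key, target in kwargs.items():
            # "name__starts_with" -> attribute "name", filter "starts_with";
            # a plain "extension" key falls back to an equality check.
            attribute, _, filter_name = key.partition("__")
            check = self.FILTERS.get(filter_name, lambda value, target: value == target)
            results = [item for item in results if check(getattr(item, attribute), target)]
        return MiniFilterSet(results)


# Stand-in "datafiles" with just the attributes we filter on:
files = MiniFilterSet([
    SimpleNamespace(name="my_file.csv", extension="csv"),
    SimpleNamespace(name="your_file.txt", extension="txt"),
    SimpleNamespace(name="another_file.csv", extension="csv"),
])

assert [f.name for f in files.filter(extension="csv", name__starts_with="my").items] == ["my_file.csv"]
```

Because each filter call returns another filter set, chaining filters and passing several filters at once behave the same way.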

Order datasets

A dataset can also be ordered by any of the attributes of its datafiles:

dataset.files.order_by("name")
>>> <FilterList([<Datafile('another_file.csv')>, <Datafile('my_file.csv')>, <Datafile('your_file.txt')>])>

The ordering can also be carried out in reverse (i.e. descending order) by passing reverse=True as a second argument to the FilterSet.order_by method.
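Conceptually, ordering is like python's built-in sorted applied to a datafile attribute - a sketch, not the SDK's implementation:

```python
from operator import attrgetter
from types import SimpleNamespace

# Stand-in "datafiles" with a name attribute:
files = [
    SimpleNamespace(name="my_file.csv"),
    SimpleNamespace(name="another_file.csv"),
    SimpleNamespace(name="your_file.txt"),
]

# Ascending, like dataset.files.order_by("name"):
ascending = sorted(files, key=attrgetter("name"))
assert [f.name for f in ascending] == ["another_file.csv", "my_file.csv", "your_file.txt"]

# Descending, like passing reverse=True:
descending = sorted(files, key=attrgetter("name"), reverse=True)
assert [f.name for f in descending] == ["your_file.txt", "my_file.csv", "another_file.csv"]
```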

Manifest

Definitions

Manifest

A set of related cloud and/or local datasets, metadata, and helper methods. Typically produced by or needed for processing by an Octue service.

Tip

Use a manifest to send datasets to an Octue service as a question (for processing) - the service will send an output manifest back with its answer if the answer includes output datasets.

Key features

Send datasets to a service

Get an Octue service to analyse data for you as part of a larger analysis.

from octue.resources import Child

child = Child(
    id="octue/wind-speed:latest",
    backend={"name": "GCPPubSubBackend", "project_name": "my-project"},
)

answer = child.ask(input_manifest=manifest)

See here for more information.

Receive datasets from a service

Get output datasets from an Octue service from the cloud when you’re ready.

answer["output_manifest"]["an_output_dataset"].files
>>> <FilterSet({<Datafile('my_file.csv')>, <Datafile('another_file.csv')>})>

Hint

Datasets in an output manifest are stored in the cloud. To access them, you’ll need a reference to where they are - the output manifest is that reference, so use it straight away or save it for later.

Further information

Manifests of local datasets

You can include local datasets in your manifest if you can guarantee that all services that need them can access them. One use case is a supercomputer cluster running several Octue services locally that process and transfer large amounts of data. It is much faster to store and access the required datasets locally than to upload them to the cloud and download them again for each service (as would happen with cloud datasets).

Warning

If you want to ask a child a question that includes a manifest containing one or more local datasets, you must include the allow_local_files parameter. For example, if you have an analysis object with a child called “wind_speed”:

input_manifest = Manifest(
    datasets={
        "my_dataset_0": "gs://my-bucket/my_dataset_0",
        "my_dataset_1": "local/path/to/my_dataset_1",
    }
)

analysis.children["wind_speed"].ask(
    input_values=analysis.input_values,
    input_manifest=analysis.input_manifest,
    allow_local_files=True,
)

Octue services

There’s a growing range of live services in the Octue ecosystem that you can ask questions to and get answers from. Currently, all of them are related to wind energy. Here’s a quick glossary of terms before we tell you more:

Definitions

Octue service

See here.

Child

An Octue service that can be asked a question. This name reflects the tree structure of services (specifically, a DAG) formed by the service asking the question (the parent), the child it asks the question to, any children that the child asks questions to as part of forming its answer, and so on.

Parent

An Octue service that asks a question to another Octue service (a child).

Asking a question

Sending data (input values and/or an input manifest) to a child for processing/analysis.

Receiving an answer

Receiving data (output values and/or an output manifest) from a child you asked a question to.

Octue ecosystem

The set of services running the Octue SDK as their backend. These services guarantee:

  • Defined input/output JSON schemas and validation

  • An easy and consistent interface for asking them questions and receiving their answers

  • Logs, exceptions, and monitor messages forwarded to you

  • High availability (if deployed in the cloud)

Service names

Questions are always asked to a revision of a service. Service revisions are named in a similar way to docker images. They look like namespace/name:tag where the tag is often a semantic version (but doesn’t have to be).

Definitions

Service revision

A specific instance of an Octue service that can be individually addressed. The revision could correspond to a version of the service, a dynamic development branch for it, or a deliberate duplication or variation of it.

Service revision unique identifier (SRUID)

The combination of a service revision’s namespace, name, and revision tag that uniquely identifies it. For example, octue/my-service:1.3.0 where the namespace is octue, the name is my-service, and the revision tag is 1.3.0.

Service namespace

The group to which the service belongs e.g. your name or your organisation’s name. If in doubt, use the GitHub handle of the user or organisation publishing the services.

Namespaces must be lower kebab case (i.e. they may contain the letters [a-z], numbers [0-9], and hyphens [-]). They may not begin or end with hyphens.

Service name

A name to uniquely identify the service within its namespace. This usually corresponds to the name of the GitHub repository for the service. Names must be lower kebab case (i.e. they may contain the letters [a-z], numbers [0-9] and hyphens [-]). They may not begin or end with hyphens.

Service revision tag

A tag that uniquely identifies a particular revision of a service. The revision tag could be a:

  • Commit hash (e.g. a3eb45)

  • Semantic version (e.g. 0.12.4)

  • Branch name (e.g. development)

  • Particular environment the service is deployed in (e.g. production)

  • Combination of these (e.g. 0.12.4-production)

Tags may contain lowercase and uppercase letters, numbers, underscores, periods, and hyphens, but can’t start with a period or a dash. They can contain a maximum of 128 characters. These requirements are the same as the Docker tag format.
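The naming rules above can be captured in a couple of regular expressions. This is a sketch based on the rules as stated here, not octue's own validation code:

```python
import re

# Namespace and name: lower kebab case, no leading/trailing hyphen.
KEBAB = re.compile(r"^[a-z0-9]+(-[a-z0-9]+)*$")

# Revision tag: letters, numbers, underscores, periods, and hyphens;
# can't start with a period or a dash; at most 128 characters.
TAG = re.compile(r"^[A-Za-z0-9_][A-Za-z0-9_.-]{0,127}$")


def is_valid_sruid(sruid: str) -> bool:
    """Check an SRUID of the form 'namespace/name:tag' against the stated rules."""
    try:
        namespace, rest = sruid.split("/")
        name, tag = rest.split(":")
    except ValueError:
        return False
    return bool(KEBAB.match(namespace) and KEBAB.match(name) and TAG.match(tag))


assert is_valid_sruid("octue/my-service:1.3.0")
assert not is_valid_sruid("Octue/my-service:1.3.0")  # Uppercase namespace.
assert not is_valid_sruid("octue/my-service:.bad")   # Tag starts with a period.
```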

Service ID

The SRUID is a special case of a service ID. A service ID can be an SRUID or just the service namespace and name. It can be used to ask a question to a service without specifying a specific revision of it. This enables asking questions to, for example, the service octue/my-service and automatically having them routed to its latest revision. See here for more info.

Asking services questions

How to ask a question

Questions are always asked to a revision of a service. You can ask a service a question if you have its SRUID, project name, and the necessary permissions. The question is formed of input values and/or an input manifest.

from octue.resources import Child

child = Child(
    id="my-organisation/my-service:latest",
    backend={"name": "GCPPubSubBackend", "project_name": "my-project"},
)

answer = child.ask(
    input_values={"height": 32, "width": 3},
    input_manifest=manifest,
)

answer["output_values"]
>>> {"some": "data"}

answer["output_manifest"]["my_dataset"].files
>>> <FilterSet({<Datafile('my_file.csv')>, <Datafile('another_file.csv')>})>

Note

Using the latest service revision tag, or not including one at all, will cause your question to be sent to the latest deployed revision of the service. This is determined by making a request to a service registry if one or more registries are defined. If none of the service registries contain an entry for this service, a specific service revision tag must be used.

You can also set the following options when you call Child.ask:

  • children - If the child has children of its own (i.e. grandchildren of the parent), this optional argument can be used to override the child’s “default” children. This allows you to specify particular versions of grandchildren to use (see this subsection below).

  • subscribe_to_logs - if true, the child will forward its logs to you

  • allow_local_files - if true, local files/datasets are allowed in any input manifest you supply

  • handle_monitor_message - if provided a function, it will be called on any monitor messages from the child

  • record_messages_to - if given a path to a JSON file, messages received from the child while it processes the question are saved to it

  • allow_save_diagnostics_data_on_crash - if true, the input values and input manifest (including its datasets) will be saved by the child for future crash diagnostics if it fails while processing them

  • question_uuid - if provided, the question will use this UUID instead of a generated one

  • timeout - how long in seconds to wait for an answer (None by default - i.e. don’t time out)

If a child raises an exception while processing your question, the exception will always be forwarded and re-raised in your local service or python session. You can handle exceptions in whatever way you like.

If setting a timeout, bear in mind that the question has to reach the child, the child has to run its analysis on the inputs sent to it (this most likely corresponds to the dominant part of the wait time), and the answer has to be sent back to the parent. If you’re not sure how long a particular analysis might take, it’s best to set the timeout to None initially or ask the owner/maintainer of the child for an estimate.

Asking multiple questions in parallel

You can also ask multiple questions to a service in parallel.

child.ask_multiple(
    {"input_values": {"height": 32, "width": 3}},
    {"input_values": {"height": 12, "width": 10}},
    {"input_values": {"height": 7, "width": 32}},
)
>>> [
        {"output_values": {"some": "output"}, "output_manifest": None},
        {"output_values": {"another": "result"}, "output_manifest": None},
        {"output_values": {"different": "result"}, "output_manifest": None},
    ]

This method uses threads, allowing all the questions to be asked at once instead of one after another.
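Conceptually, this is like fanning the questions out over a thread pool. Here's a plain-python sketch with a stand-in ask function (not the SDK's implementation):

```python
from concurrent.futures import ThreadPoolExecutor


def ask(question):
    """Stand-in for Child.ask - pretend to process one question."""
    values = question["input_values"]
    return {"output_values": {"area": values["height"] * values["width"]}, "output_manifest": None}


def ask_multiple(*questions):
    """Ask all the questions at once on separate threads, preserving order."""
    with ThreadPoolExecutor() as executor:
        return list(executor.map(ask, questions))


answers = ask_multiple(
    {"input_values": {"height": 32, "width": 3}},
    {"input_values": {"height": 12, "width": 10}},
)
assert [a["output_values"]["area"] for a in answers] == [96, 120]
```

Because the work is I/O-bound (waiting on the remote service), threads are enough to get real concurrency here.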

Asking a question within a service

If you have created your own Octue service and want to ask children questions, you can do this more easily than above. Children are accessible from the analysis object by the keys you give them in the app configuration file. For example, you can ask an elevation service a question like this:

answer = analysis.children["elevation"].ask(input_values={"longitude": 0, "latitude": 1})

if your app configuration file is:

{
  "children": [
    {
      "key": "wind_speed",
      "id": "template-child-services/wind-speed-service:latest",
      "backend": {
        "name": "GCPPubSubBackend",
        "project_name": "my-project"
      }
    },
    {
      "key": "elevation",
      "id": "template-child-services/elevation-service:latest",
      "backend": {
        "name": "GCPPubSubBackend",
        "project_name": "my-project"
      }
    }
  ]
}

and your twine.json file includes the child keys in its children field:

{
    "children": [
        {
            "key": "wind_speed",
            "purpose": "A service that returns the average wind speed for a given latitude and longitude."
        },
        {
            "key": "elevation",
            "purpose": "A service that returns the elevation for a given latitude and longitude."
        }
    ]
}

See the parent service’s app configuration and app.py file in the child-services app template to see this in action.

Overriding a child’s children

If the child you’re asking a question to has its own children (static children), you can override these by providing the IDs of the children you want it to use (dynamic children) to the Child.ask method. Questions that would have gone to the static children will instead go to the dynamic children. Note that:

  • You must provide the children in the same format as they’re provided in the app configuration

  • If you override one static child, you must override the others, too

  • The dynamic children must have the same keys as the static children (so the child knows which service to ask which questions)

  • You should ensure the dynamic children you provide are compatible with and appropriate for questions from the child service

For example, if the child requires these children in its app configuration:

[
    {
        "key": "wind_speed",
        "id": "template-child-services/wind-speed-service:latest",
        "backend": {
            "name": "GCPPubSubBackend",
            "project_name": "octue-sdk-python"
        }
    },
    {
        "key": "elevation",
        "id": "template-child-services/elevation-service:latest",
        "backend": {
            "name": "GCPPubSubBackend",
            "project_name": "octue-sdk-python"
        }
    }
]

then you can override them like this:

answer = child.ask(
    input_values={"height": 32, "width": 3},
    children=[
        {
            "key": "wind_speed",
            "id": "my/own-service:latest",
            "backend": {
                "name": "GCPPubSubBackend",
                "project_name": "octue-sdk-python"
            },
        },
        {
            "key": "elevation",
            "id": "organisation/another-service:latest",
            "backend": {
                "name": "GCPPubSubBackend",
                "project_name": "octue-sdk-python"
            },
        },
    ],
)

Overriding beyond the first generation

It’s an intentional choice to only go one generation deep with overriding children. If you need to be able to specify a whole tree of children, grandchildren, and so on, please upvote this issue.

Using a service registry

When asking a question, you can optionally specify one or more service registries to resolve SRUIDs against. This is analogous to specifying a different pip index for resolving package names when using pip install. If you don’t specify any registries, the default Octue service registry is used.

Specifying service registries can be useful if:

  • You have your own private services that aren’t on the default Octue service registry

  • You want services from one service registry with the same name as in another service registry to be prioritised

Specifying service registries

You can specify service registries in two ways:

  1. Globally for all questions asked inside a service. In the service configuration (octue.yaml file):

    services:
      - namespace: my-organisation
        name: my-app
        service_registries:
          - name: my-registry
            endpoint: blah.com/services
    
  2. For questions to a specific child, inside or outside a service:

    child = Child(
        id="my-organisation/my-service:latest",
        backend={"name": "GCPPubSubBackend", "project_name": "my-project"},
        service_registries=[
            {"name": "my-registry", "endpoint": "blah.com/services"},
        ]
    )
    

Creating services

One of the main features of the Octue SDK is to allow you to easily create services that can accept questions and return answers. They can run locally on any machine or be deployed to the cloud. Currently:

  • The backend communication between twins uses Google Pub/Sub whether they’re local or deployed

  • The deployment options are Google Cloud Run or Google Dataflow

  • The language of the entrypoint must be python3 (though you can call processes using other languages within it)

Anatomy of an Octue service

An Octue service is defined by the following files (located in the repository root by default).

app.py

This is the entrypoint into your code - read more here.

twine.json

This file defines the schema for the service’s configuration, input, and output data. Read more here and see an example here.

Dependencies file

A file specifying your app’s dependencies. This is a setup.py file, a requirements.txt file, or a pyproject.toml file listing all the python packages your app depends on and the version ranges that are supported.

octue.yaml

This file defines the basic structure of your service. It must contain at least:

services:
  - namespace: my-organisation
    name: my-app

It may also need the following key-value pairs:

  • app_source_path: <path> - if your app.py file is not in the repository root

  • app_configuration_path: <path> - if your app needs an app configuration file that isn’t in the repository root

  • dockerfile_path: <path> - if your app needs a Dockerfile that isn’t in the repository root

All paths should be relative to the repository root. Other valid entries can be found in the ServiceConfiguration constructor.
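For example, a service whose app code, app configuration, and Dockerfile live outside the repository root might use an octue.yaml like this (all paths here are hypothetical):

```yaml
services:
  - namespace: my-organisation
    name: my-app
    app_source_path: src
    app_configuration_path: configuration/app_configuration.json
    dockerfile_path: docker/Dockerfile
```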

Warning

Currently, only one service can be defined per repository, but it must still appear as a list item of the “services” key. At some point, it will be possible to define multiple services in one repository.

App configuration file (optional)

If your app needs any configuration, asks questions to any other Octue services, or produces output datafiles/datasets, you will need to provide an app configuration. Currently, this must take the form of a JSON file. It can contain the following keys:

  • configuration_values

  • configuration_manifest

  • children

  • output_location

If an app configuration file is provided, its path must be specified in octue.yaml under the “app_configuration_path” key.

See the AppConfiguration constructor for more information.
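As a sketch, an app configuration using some of these keys might look like this (all values are hypothetical, and the children entries follow the format shown elsewhere in these docs):

```json
{
  "configuration_values": {"iterations": 1000},
  "children": [
    {
      "key": "elevation",
      "id": "template-child-services/elevation-service:latest",
      "backend": {
        "name": "GCPPubSubBackend",
        "project_name": "my-project"
      }
    }
  ],
  "output_location": "gs://my-bucket/outputs"
}
```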

Dockerfile (optional)

Octue services run in a Docker container if they are deployed. They can also run this way locally. The SDK provides a default Dockerfile for these purposes that will work for most cases.

However, you may need to write and provide your own Dockerfile if your app requires:

  • Non-python or system dependencies (e.g. openfast, wget)

  • Python dependencies that aren’t installable via pip

  • Private python packages

Custom Dockerfiles can use different base images depending on your app's needs.

If you do provide one, you must specify its path in octue.yaml under the dockerfile_path key.

As always, if you need help with this, feel free to drop us a message or raise an issue!

Where to specify the namespace, name, and revision tag

See here for service naming requirements.

Namespace

  • Required: yes

  • Set in:

    • octue.yaml

    • OCTUE_SERVICE_NAMESPACE environment variable (takes priority)

Name

  • Required: yes

  • Set in:

    • octue.yaml

    • OCTUE_SERVICE_NAME environment variable (takes priority)

Revision tag

  • Required: no

  • Default: a random “coolname” (e.g. hungry-hippo)

  • Set in:

    • OCTUE_SERVICE_REVISION_TAG environment variable

    • If using octue start command, the --revision-tag option (takes priority)

Template apps

We’ve created some template apps for you to look at and play around with. We recommend going through them in this order:

  1. The fractal app template - introduces a basic Octue service that returns output values to its parent.

  2. The using-manifests app template - introduces using a manifest of output datasets to return output files to its parent.

  3. The child-services app template - introduces asking questions to child services and using their answers to form an output to return to its parent.

Deploying services automatically

Automated deployment with Octue means:

  • Your service runs in Google Cloud, ready to accept questions from and return answers to other services.

  • You don’t need to do anything to update your deployed service with new code changes - the service simply gets rebuilt and re-deployed each time you push a commit to your main branch, or merge a pull request into it (other branches and deployment strategies are available, but this is the default).

  • Serverless is the default - your service only runs when questions from other services are sent to it, meaning there is no cost to having it deployed but not in use.

To enable automated deployments, contact us so we can create a Google Cloud Build trigger linked to your git repository. This requires no work from you apart from authorising the connection to GitHub (or another git provider).

If you want to deploy services yourself, see here.

Running services locally

Services can be operated locally (e.g. for testing or ad-hoc data processing). You can:

  • Run your service once (i.e. run one analysis):

    • Via the CLI

    • By using the octue library in a python script

  • Start your service as a child, allowing it to answer any number of questions from any other Octue service:

    • Via the CLI

Running a service once

Via the CLI
  1. Ensure you’ve created a valid octue.yaml file for your service

  2. If your service requires inputs, create an input directory with the following structure:

    input_directory
    |---  values.json    (if input values are required)
    |---  manifest.json  (if an input manifest is required)
    
  3. Run:

    octue run --input-dir=my_input_directory
    

Any output values will be printed to stdout and any output datasets will be referenced in an output manifest file named output_manifest_<analysis_id>.json.
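As an example of the input directory contents, a values.json for a service that takes height and width input values might be:

```json
{"height": 5, "width": 10}
```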

Via a python script

Imagine we have a simple app that calculates the area of a square. It could be run locally on a given height and width like this:

from octue import Runner

runner = Runner(app_src="path/to/app.py", twine="path/to/twine.json")
analysis = runner.run(input_values={"height": 5, "width": 10})

analysis.output_values
>>> {"area": 50}

analysis.output_manifest
>>> None

See the Runner API documentation for more advanced usage including providing configuration, children, and an input manifest.

Starting a service as a child

Via the CLI
  1. Ensure you’ve created a valid octue.yaml file for your service

  2. Run:

    octue start
    

This will run the service as a child, waiting for questions until you press Ctrl + C or an error is encountered. The service will be available to be questioned by other services at the service ID <namespace>/<name> as specified in the octue.yaml file.

Tip

You can use the --timeout option to stop the service after a given number of seconds.

Deploying services (developer’s guide)

This is a guide for developers that want to deploy Octue services themselves - it is not needed if Octue manages your services for you or if you are only asking questions to existing Octue services.

Attention

The octue deploy CLI command can be used to deploy services automatically, but it:

  • Is in alpha so may not work as intended

  • Requires the gcloud CLI tool with Google Cloud SDK 367.0.0 and beta 2021.12.10 to be available

  • Requires the correct permissions via the gcloud tool logged into a Google user account and/or with an appropriate service account available

For now, we recommend contacting us to help set up deployments for you.

What deployment enables

Deploying an Octue service to Google Cloud Run means it:

  • Is deployed as a docker container

  • Is ready to be asked questions by any other Octue service that has the correct permissions (you can control this)

  • Can ask questions to any other Octue service for which it has the correct permissions

  • Will automatically build and redeploy upon the conditions you provide (e.g. pushes or merges into main)

  • Will automatically start and run when Pub/Sub messages are received from the topic you created. The Pub/Sub messages can be sent from anywhere in the world, but the container will only run in the region you chose (you can create multiple Cloud Run services in different regions for the same repository if this is a problem).

  • Will automatically stop shortly after finishing the analyses asked for in the Pub/Sub message (although you can set a minimum container count so one is always running to minimise cold starts).

How to deploy

  1. Ensuring you are in the desired project, go to the Google Cloud Run page and create a new service

_images/create_service.png
  2. Give your service a unique name

_images/service_name_and_region.png
  3. Choose a low-carbon region that supports Eventarc triggers and is in a convenient geographic location for you (e.g. physically close to you for low latency or in a region compatible with your data protection requirements).

_images/low_carbon_regions.png
  4. Click “Next”. When changes are made to the source code, we want them to be deployed automatically. So, we need to connect the repository to GCP to enable this. Select “Continuously deploy new revisions from a source repository” and then “Set up with cloud build”.

_images/set_up_with_cloud_build.png
  5. Choose your source code repository provider and the repository containing the code you’d like to deploy. You’ll have to give the provider permission to access the repository. If your provider isn’t GitHub, BitBucket, or Google Cloud Source Repositories (GCSR), you’ll need to mirror the repository to GCSR before completing this step as Google Cloud Build only supports these three providers currently.

_images/choose_repository.png
  6. Click “Next”, enter a regular expression for the branches you want to automatically deploy from (main by default). As the service will run in a docker container, select “Dockerfile” and click “Save”.

_images/choose_dockerfile.png
  7. Click “Next”. If you want your service to be private, select “Allow internal traffic only” and “Require authentication”. This stops anyone without permission from using the service.

_images/set_traffic.png
  8. The service needs a trigger to start up and respond to. We’ll be using Google Pub/Sub. Click “Add eventarc trigger”, choose “Cloud Pub/Sub topic” as the trigger event, click on the menu called “Select a Cloud Pub/Sub topic”, then click “Create a topic”. Any services that want to ask your service a question will publish their question to this topic.

_images/create_trigger.png
  9. The topic ID should be in the form octue.services.my-organisation.my-service. Click “Create topic”.

  10. Under “Invocation settings”, click on the “Service account” menu and then “Create new service account”.

_images/create_service_account.png
  11. Make a new service account with a related name e.g. “my-service”, then click “Create”. Add the “octue-service-user” and “Cloud Run Invoker” roles to the service account. Contact us if the “octue-service-user” role is not available.

_images/add_roles_to_service_account.png
  12. Click “Save” and then “Create”.

_images/save_and_create.png
  13. You can now view your service in the list of Cloud Run services and view its build trigger in the list of Cloud Build triggers.

Testing services

We recommend writing automated tests for your service so anyone who wants to use it can have confidence in its quality and reliability at a glance. Here’s an example test for our example service.

Emulating children

If your app has children, you should emulate them in your tests instead of communicating with the real ones. This makes your tests:

  • Independent of anything external to your app code - i.e. independent of the remote child, your internet connection, and communication between your app and the child (Google Pub/Sub).

  • Much faster - the emulation will complete in a few milliseconds as opposed to the time it takes the real child to actually run an analysis, which could be minutes, hours, or days. Tests for our child services template app run around 900 times faster when the children are emulated.

The Child Emulator

We’ve written a child emulator that takes a list of messages and returns them to the parent for handling in the order given - without contacting the real child or using Pub/Sub. Any messages a real child can produce are supported. Child instances can be mocked like-for-like by ChildEmulator instances without the parent knowing. You can provide the emulated messages in python or via a JSON file.

Message types

You can emulate any message type that your app (the parent) can handle. The table below shows what these are.

| Message type | Number of messages supported | Example |
|---|---|---|
| log_record | Any number | {"type": "log_record", "log_record": {"msg": "Starting analysis."}} |
| monitor_message | Any number | {"type": "monitor_message", "data": '{"progress": "35%"}'} |
| exception | One | {"type": "exception", "exception_type": "ValueError", "exception_message": "x cannot be less than 10."} |
| result | One | {"type": "result", "output_values": {"my": "results"}, "output_manifest": None} |

Notes

  • Message formats and contents are validated by ChildEmulator

  • The log_record key of a log_record message is any dictionary that the logging.makeLogRecord function can convert into a log record.

  • The data key of a monitor_message message must be a JSON-serialised string

  • Any messages after a result or exception message won’t be passed to the parent because execution of the child emulator will have ended.
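Because the data key of a monitor_message must be a JSON-serialised string, a safe way to construct one for the emulator is to serialise the payload explicitly (a minimal sketch):

```python
import json

# The "data" key of a monitor_message must be a JSON-serialised string,
# so build it with json.dumps rather than passing a dictionary directly.
monitor_message = {
    "type": "monitor_message",
    "data": json.dumps({"progress": "35%"}),
}

# A monitor message handler can deserialise the payload again.
progress = json.loads(monitor_message["data"])["progress"]
print(progress)  # → 35%
```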

Instantiating a child emulator in python
from octue.cloud.emulators import ChildEmulator

messages = [
    {
        "type": "log_record",
        "log_record": {"msg": "Starting analysis."},
    },
    {
        "type": "monitor_message",
        "data": '{"progress": "35%"}',
    },
    {
        "type": "log_record",
        "log_record": {"msg": "Finished analysis."},
    },
    {
        "type": "result",
        "output_values": [1, 2, 3, 4, 5],
        "output_manifest": None,
    },
]

child_emulator = ChildEmulator(
    backend={"name": "GCPPubSubBackend", "project_name": "my-project"},
    messages=messages
)

def handle_monitor_message(message):
    ...

result = child_emulator.ask(
    input_values={"hello": "world"},
    handle_monitor_message=handle_monitor_message,
)
>>> {"output_values": [1, 2, 3, 4, 5], "output_manifest": None}
Instantiating a child emulator from a JSON file

You can provide a JSON file with either just messages in or with messages and some or all of the ChildEmulator constructor parameters. Here’s an example JSON file with just the messages:

{
    "messages": [
        {
            "type": "log_record",
            "log_record": {"msg": "Starting analysis."}
        },
        {
            "type": "log_record",
            "log_record": {"msg": "Finished analysis."}
        },
        {
            "type": "monitor_message",
            "data": "{\"progress\": \"35%\"}"
        },
        {
            "type": "result",
            "output_values": [1, 2, 3, 4, 5],
            "output_manifest": null
        }
    ]
}

You can then instantiate a child emulator from this in python:

child_emulator = ChildEmulator.from_file("path/to/emulated_child.json")

def handle_monitor_message(message):
    ...

result = child_emulator.ask(
    input_values={"hello": "world"},
    handle_monitor_message=handle_monitor_message,
)
>>> {"output_values": [1, 2, 3, 4, 5], "output_manifest": None}
Using the child emulator

To emulate your children in tests, patch the Child class with the ChildEmulator class.

import os
from unittest.mock import patch

from octue import Runner
from octue.cloud.emulators import ChildEmulator


app_directory_path = "path/to/directory_containing_app"

# You can explicitly specify your children here as shown or
# read the same information in from your app configuration file.
children = [
    {
        "key": "my_child",
        "id": "octue/my-child-service:latest",
        "backend": {
            "name": "GCPPubSubBackend",
            "project_name": "my-project"
        }
    },
]

runner = Runner(
    app_src=app_directory_path,
    twine=os.path.join(app_directory_path, "twine.json"),
    children=children,
    service_id="your-org/your-service:latest",
)

emulated_children = [
    ChildEmulator(
        id="octue/my-child-service:latest",
        internal_service_name="you/your-service:latest",
        messages=[
            {
                "type": "result",
                "output_values": [300],
                "output_manifest": None,
            },
        ]
    )
]

with patch("octue.runner.Child", side_effect=emulated_children):
    analysis = runner.run(input_values={"some": "input"})

Notes

  • If your app uses more than one child, provide more child emulators in the emulated_children list in the order they’re asked questions in your app.

  • If a given child is asked more than one question, provide a child emulator for each question asked in the same order the questions are asked.

Creating a test fixture

Since the child is emulated, it doesn’t actually do any calculation - if you change the inputs, the outputs won’t change correspondingly (or at all). So, it’s up to you to define a set of realistic inputs and corresponding outputs (the list of emulated messages) to test your service. These are called test fixtures.

Note

Unlike a real child, the inputs given to the emulator and the outputs returned aren’t validated against the schema in the child’s twine - this is because the twine is only available to the real child. This is ok - you’re testing your service, not the child.

You can create test fixtures manually or by using the Child.received_messages property after questioning a real child.

import json
from octue.resources import Child


child = Child(
    id="octue/my-child:latest",
    backend={"name": "GCPPubSubBackend", "project_name": "my-project"},
)

result = child.ask(input_values=[1, 2, 3, 4])

child.received_messages
>>> [
        {
            'type': 'delivery_acknowledgement',
            'delivery_time': '2022-08-16 11:49:57.244263',
        },
        {
            'type': 'log_record',
            'log_record': {
                'msg': 'Finished analysis.',
                'args': None,
                'levelname': 'INFO',
                ...
            },
        },
        {
            'type': 'result',
            'output_values': {"some": "results"},
            'output_manifest': None,
        }
    ]

You can then feed these into a child emulator to emulate one possible response of the child:

from octue.cloud.emulators import ChildEmulator


child_emulator = ChildEmulator(messages=child.received_messages)

child_emulator.ask(input_values=[1, 2, 3, 4])
>>> {"some": "results"}

You can also create test fixtures from downloaded service crash diagnostics.

Troubleshooting services

Crash diagnostics

Services save the following data to the cloud if they crash while processing a question:

  • Input values

  • Input manifest and datasets

  • Child configuration values

  • Child configuration manifest and datasets

  • Inputs to and messages received in answer to each question the service asked its children (if it has any). These are stored in the order the questions were asked.

Important

For this feature to be enabled, the child must have the crash_diagnostics_cloud_path field in its service configuration (octue.yaml file) set to a Google Cloud Storage path.

Accessing crash diagnostics

In the event of a crash, the service will upload the crash diagnostics and send the upload path to the parent as a log message. A user with credentials to access this path can use the octue CLI to retrieve the crash diagnostics data:

octue get-crash-diagnostics <cloud-path>

More information on the command:

>>> octue get-crash-diagnostics -h

Usage: octue get-crash-diagnostics [OPTIONS] CLOUD_PATH

  Download crash diagnostics for an analysis from the given directory in
  Google Cloud Storage. The cloud path should end in the analysis ID.

  CLOUD_PATH: The path to the directory in Google Cloud Storage containing the
  diagnostics data.

Options:
  --local-path DIRECTORY  The path to a directory to store the directory of
                          diagnostics data in. Defaults to the current working
                          directory.
  --download-datasets     If provided, download any datasets from the crash
                          diagnostics and update their paths in their
                          manifests to the new local paths.
  -h, --help              Show this message and exit.

Creating test fixtures from crash diagnostics

You can create test fixtures directly from crash diagnostics, allowing you to recreate the exact conditions that caused your service to fail.

import os
from unittest.mock import patch

from octue import Runner
from octue.utils.testing import load_test_fixture_from_crash_diagnostics


(
    configuration_values,
    configuration_manifest,
    input_values,
    input_manifest,
    child_emulators,
) = load_test_fixture_from_crash_diagnostics(path="path/to/downloaded/crash/diagnostics")

# You can explicitly specify your children here as shown or
# read the same information in from your app configuration file.
children = [
    {
        "key": "my_child",
        "id": "octue/my-child-service:latest",
        "backend": {
            "name": "GCPPubSubBackend",
            "project_name": "my-project",
        }
    },
    {
        "key": "another_child",
        "id": "octue/another-child-service:latest",
        "backend": {
            "name": "GCPPubSubBackend",
            "project_name": "my-project",
        }
    }
]

app_directory_path = "path/to/directory_containing_app"

runner = Runner(
    app_src=app_directory_path,
    twine=os.path.join(app_directory_path, "twine.json"),
    children=children,
    configuration_values=configuration_values,
    configuration_manifest=configuration_manifest,
    service_id="your-org/your-service:latest",
)

with patch("octue.runner.Child", side_effect=child_emulators):
    analysis = runner.run(input_values=input_values, input_manifest=input_manifest)

Disabling crash diagnostics

When asking a question to a child, parents can disable crash diagnostics upload in the child on a question-by-question basis by setting allow_save_diagnostics_data_on_crash to False in Child.ask. For example:

child = Child(
    id="my-organisation/my-service:latest",
    backend={"name": "GCPPubSubBackend", "project_name": "my-project"},
)

answer = child.ask(
    input_values={"height": 32, "width": 3},
    allow_save_diagnostics_data_on_crash=False,
)

Logging

By default, octue streams your logs to stderr in a nice, readable format so your log messages are immediately visible when you start developing without any extra configuration. If you prefer to use your own handlers or formatters, simply set USE_OCTUE_LOG_HANDLER=0 in the environment running your app.

Readable logs

Some advantages of the Octue log handler are:

  • Its readable format

  • Its clear separation of log context from log message.

Below, the context is on the left and includes:

  • The time

  • Log level

  • Module producing the log

  • Octue analysis ID

This is followed by the actual log message on the right:

[2021-07-10 20:03:12,713 | INFO | octue.runner | analysis-102ee7d5-4b94-4f8a-9dcd-36dbd00662ec] Hello! The child services template app is running!

Colourised services

Another advantage to using the Octue log handler is that each Octue service is coloured according to its position in the tree, making it much easier to read log messages from multiple levels of children.

_images/coloured_logs.png

In this example:

  • The log context is in blue

  • Anything running in the root parent service’s app is labeled with the analysis ID in green

  • Anything running in the immediate child services (elevation and wind_speed) are labelled with the analysis ID in yellow

  • Any children further down the tree (i.e. children of the child services and so on) will have their own labels in other colours consistent to their level

Add extra information

You can add certain log record attributes to the logging context by also providing the following environment variables:

  • INCLUDE_LINE_NUMBER_IN_LOGS=1 - include the line number

  • INCLUDE_PROCESS_NAME_IN_LOGS=1 - include the process name

  • INCLUDE_THREAD_NAME_IN_LOGS=1 - include the thread name

Authentication

You need authentication while using octue to:

  • Access data from Google Cloud Storage

  • Use, run, or deploy Octue services

Authentication can be provided by using one of:

  • A service account

  • Application Default Credentials

Creating a service account

  1. Create a service account (see Google’s getting started guide)

  2. Make sure your service account has access to any buckets you need, Google Pub/Sub, and Google Cloud Run if your service is deployed on it (see here)

Using a service account

Locally
  1. Create and download a key for your service account - it will be called your-project-XXXXX.json.

Danger

It’s best not to store this in your project to prevent accidentally committing it or building it into a docker image layer. Instead, bind mount it into your docker image from somewhere else on your local system.

If you must keep it within your project, it’s good practice to name the file gha-creds-<whatever>.json and make sure that gha-creds-* is in your .gitignore and .dockerignore files.

  2. If you’re developing in a container (like a VSCode .devcontainer), mount the file into the container. You can make gcloud available too - check out this tutorial.

  3. Set the GOOGLE_APPLICATION_CREDENTIALS environment variable to the path of the key file.
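For example (the key location shown is illustrative):

```shell
# Point octue and the Google Cloud client libraries at the downloaded key
export GOOGLE_APPLICATION_CREDENTIALS=/secure/keys/your-project-XXXXX.json
```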

On GCP infrastructure
  • Credentials are provided when running code on GCP infrastructure (e.g. Google Cloud Run)

  • octue uses these when running on these platforms

  • You should ensure the correct service account is being used by the deployed instance

Inter-service compatibility

Parents and children running nearly all versions of octue can communicate with each other compatibly, although a small number can’t. The table below shows which parent SDK versions (rows) send questions that can be processed by each child SDK version (columns).

Key

  • 0 = incompatible

  • 1 = compatible

[Compatibility matrix not reproducible here: rows are parent SDK versions (0.46.3 down to 0.16.0) and columns are child SDK versions over the same range. In the rows that survive here (parents 0.46.3 down to 0.29.2), every parent/child pairing is compatible (1) except with child version 0.16.0 (0).]

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

0

0.29.11

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

0

0.29.10

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

0

0.29.1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

0

0.29.0

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

0

0.28.2

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

0

0.28.1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

0

0.28.0

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

0

0.27.3

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

0

0.27.2

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

0

0.27.1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

0

0.27.0

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

0

0.26.2

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

0

0.26.1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

0

0.26.0

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

0

0.25.0

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

0

0.24.1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

0

0.24.0

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

0

0.23.6

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

0

0.23.5

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

0

0.23.4

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

0

0.23.3

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

0

0.23.2

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

0

0.23.1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

0

0.23.0

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

0

0.22.1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

0

0.22.0

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

0

0.21.0

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

0

0.20.0

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

0

0.19.0

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

0

0.18.2

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

0

0.18.1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

0

0.18.0

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

0

0.17.0

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

0

0.16.0

0

0

0

0

0

0

0

0

0

0

0

0

0

0

0

0

0

0

0

0

0

0

0

0

0

0

0

0

0

0

0

0

0

0

0

0

0

0

0

0

0

0

0

0

0

0

0

0

0

0

0

0

0

0

0

0

0

0

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

API

Datafile

Dataset

Manifest

Analysis

Child

Child emulator

Filter containers

FilterSet
class octue.resources.filter_containers.FilterSet
filter(ignore_items_without_attribute=True, **kwargs)

Return a new instance containing only the `Filterable`s for which the given filter criteria are `True`.

Parameters
  • ignore_items_without_attribute (bool) – if True, just ignore any members of the container without a filtered-for attribute rather than raising an error

  • kwargs ({str: any}) – keyword arguments whose keys are the names of the filters and whose values are the values to filter for

Return octue.resources.filter_containers.FilterContainer

one(**kwargs)

If a single result exists for the given filters, return it. Otherwise, raise an error.

Parameters

kwargs ({str: any}) – keyword arguments whose keys are the names of the filters and whose values are the values to filter for

Raises

octue.exceptions.UnexpectedNumberOfResultsException – if zero or more than one result satisfies the filters

Return octue.resources.mixins.filterable.Filterable

order_by(attribute_name, check_start_value=None, check_constant_increment=None, reverse=False)

Order the `Filterable`s in the container by an attribute with the given name, returning them as a new `FilterList` regardless of the type of filter container started with (`FilterSet`s and `FilterDict`s are inherently orderless).

Parameters
  • attribute_name (str) – name of attribute (optionally nested) to order by e.g. “a”, “a.b”, “a.b.c”

  • check_start_value (any) – if provided, check that the first item in the ordered container has the given start value for the attribute ordered by

  • check_constant_increment (int|float|None) – if given, check that the ordered-by attribute of each of the items in the ordered container increases by the given value when progressing along the sequence

  • reverse (bool) – if True, reverse the ordering

Raises

octue.exceptions.InvalidInputException – if an attribute with the given name doesn’t exist on any of the container’s members

Return FilterList
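The filter/one/order_by pattern above can be sketched with a toy class. This is not the real octue implementation - just a minimal, self-contained illustration that assumes filter keywords of the form `<attribute>__<action>` (the `File` class is a hypothetical stand-in for any `Filterable`):

```python
class File:
    """Hypothetical stand-in object with filterable attributes."""

    def __init__(self, name, size):
        self.name = name
        self.size = size


class FilterSet(set):
    """Toy sketch of the filter-container pattern (not the real octue class)."""

    def filter(self, ignore_items_without_attribute=True, **kwargs):
        results = set(self)

        for key, wanted in kwargs.items():
            attribute, _, action = key.partition("__")
            kept = set()

            for item in results:
                if not hasattr(item, attribute):
                    if ignore_items_without_attribute:
                        continue
                    raise AttributeError(attribute)

                value = getattr(item, attribute)

                # Support just two actions here: "gt" and (default) "equals".
                if (action == "gt" and value > wanted) or (action in ("", "equals") and value == wanted):
                    kept.add(item)

            results = kept

        return FilterSet(results)

    def one(self, **kwargs):
        results = self.filter(**kwargs)

        if len(results) != 1:
            raise ValueError(f"Expected exactly 1 result, found {len(results)}.")

        return next(iter(results))

    def order_by(self, attribute_name, reverse=False):
        # Sets are inherently orderless, so ordering returns a list.
        return sorted(self, key=lambda item: getattr(item, attribute_name), reverse=reverse)


files = FilterSet({File("a.csv", 3), File("b.csv", 5), File("c.csv", 8)})
large = files.filter(size__gt=4)
biggest = files.one(size__gt=6)
ordered = files.order_by("size", reverse=True)
```

`FilterList` below behaves the same way, except that it preserves order as a list.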

FilterList
class octue.resources.filter_containers.FilterList(iterable=(), /)
filter(ignore_items_without_attribute=True, **kwargs)

Return a new instance containing only the `Filterable`s for which the given filter criteria are `True`.

Parameters
  • ignore_items_without_attribute (bool) – if True, just ignore any members of the container without a filtered-for attribute rather than raising an error

  • kwargs ({str: any}) – keyword arguments whose keys are the names of the filters and whose values are the values to filter for

Return octue.resources.filter_containers.FilterContainer

one(**kwargs)

If a single result exists for the given filters, return it. Otherwise, raise an error.

Parameters

kwargs ({str: any}) – keyword arguments whose keys are the names of the filters and whose values are the values to filter for

Raises

octue.exceptions.UnexpectedNumberOfResultsException – if zero or more than one result satisfies the filters

Return octue.resources.mixins.filterable.Filterable

order_by(attribute_name, check_start_value=None, check_constant_increment=None, reverse=False)

Order the `Filterable`s in the container by an attribute with the given name, returning them as a new `FilterList` regardless of the type of filter container started with (`FilterSet`s and `FilterDict`s are inherently orderless).

Parameters
  • attribute_name (str) – name of attribute (optionally nested) to order by e.g. “a”, “a.b”, “a.b.c”

  • check_start_value (any) – if provided, check that the first item in the ordered container has the given start value for the attribute ordered by

  • check_constant_increment (int|float|None) – if given, check that the ordered-by attribute of each of the items in the ordered container increases by the given value when progressing along the sequence

  • reverse (bool) – if True, reverse the ordering

Raises

octue.exceptions.InvalidInputException – if an attribute with the given name doesn’t exist on any of the container’s members

Return FilterList

FilterDict
class octue.resources.filter_containers.FilterDict(**kwargs)

A dictionary that is filterable by its values’ attributes. Each key can be anything, but each value must be an octue.mixins.filterable.Filterable instance.

filter(ignore_items_without_attribute=True, **kwargs)

Return a new instance containing only the `Filterable`s for which the given filter criteria are satisfied.

Parameters
  • ignore_items_without_attribute (bool) – if True, just ignore any members of the container without a filtered-for attribute rather than raising an error

  • kwargs ({str: any}) – keyword arguments whose keys are the names of the filters and whose values are the values to filter for

Return FilterDict

order_by(attribute_name, reverse=False)

Order the instance by the given attribute_name, returning the instance’s elements as a new FilterList.

Parameters
  • attribute_name (str) – name of attribute (optionally nested) to order by e.g. “a”, “a.b”, “a.b.c”

  • reverse (bool) – if True, reverse the ordering

Raises

octue.exceptions.InvalidInputException – if an attribute with the given name doesn’t exist on any of the FilterDict’s values

Return FilterList

one(**kwargs)

If a single item exists for the given filters, return it. Otherwise, raise an error.

Parameters

kwargs ({str: any}) – keyword arguments whose keys are the names of the filters and whose values are the values to filter for

Raises

octue.exceptions.UnexpectedNumberOfResultsException – if zero or more than one result satisfies the filters

Return (any, octue.resources.mixins.filterable.Filterable)
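The key difference from the other containers is that a `FilterDict` filters by its *values'* attributes and `one` returns a (key, value) pair. A minimal sketch (again, a toy illustration rather than the real octue class; `File` is a hypothetical value type):

```python
class File:
    """Hypothetical value type with filterable attributes."""

    def __init__(self, size):
        self.size = size


class FilterDict(dict):
    """Toy sketch (not the real octue class) of a dict filterable by its values' attributes."""

    def filter(self, **kwargs):
        kept = FilterDict()

        for key, item in self.items():
            for name, wanted in kwargs.items():
                attribute, _, action = name.partition("__")
                value = getattr(item, attribute, None)

                # Support just two actions here: "gt" and (default) "equals".
                satisfied = value > wanted if action == "gt" else value == wanted

                if not satisfied:
                    break
            else:
                kept[key] = item  # All filters were satisfied.

        return kept

    def one(self, **kwargs):
        results = self.filter(**kwargs)

        if len(results) != 1:
            raise ValueError(f"Expected exactly 1 result, found {len(results)}.")

        return next(iter(results.items()))  # A (key, value) pair.


files = FilterDict({"small": File(3), "large": File(5)})
key, item = files.one(size__gt=4)
```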

Configuration

Service configuration
App configuration

Runner

Octue essential monitor messages

Octue log handler

octue.log_handlers.apply_log_handler(logger_name=None, logger=None, handler=None, log_level=20, formatter=None, include_line_number=False, include_process_name=False, include_thread_name=False)

Apply a log handler with the given formatter to the logger with the given name. By default, the default Octue log handler is used on the root logger.

Parameters
  • logger_name (str|None) – the name of the logger to apply the handler to; if this and logger are None, the root logger is used

  • logger (logging.Logger|None) – the logger instance to apply the handler to (takes precedence over a logger name)

  • handler (logging.Handler|None) – the handler to use; if None, the default StreamHandler is attached

  • log_level (int|str) – ignore log messages below this level

  • formatter (logging.Formatter|None) – if provided, this formatter is used and the other formatting options are ignored

  • include_line_number (bool) – if True, include the line number in the log context

  • include_process_name (bool) – if True, include the process name in the log context

  • include_thread_name (bool) – if True, include the thread name in the log context

Return logging.Handler
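A minimal, stdlib-only sketch of what `apply_log_handler` does (the real octue version also supports colourised output and the extra context options above). Here the logs are directed into a string buffer so they can be inspected:

```python
import io
import logging


def apply_log_handler(logger_name=None, logger=None, handler=None, log_level=logging.INFO, formatter=None):
    """Simplified sketch: attach a formatted handler to the given logger (root by default)."""
    logger = logger or logging.getLogger(logger_name)  # Root logger if both are None.
    handler = handler or logging.StreamHandler()

    handler.setFormatter(
        formatter or logging.Formatter("[%(asctime)s | %(levelname)s | %(name)s] %(message)s")
    )
    handler.setLevel(log_level)

    logger.addHandler(handler)
    logger.setLevel(log_level)
    return handler


# Capture the logs in a buffer instead of stderr.
stream = io.StringIO()
apply_log_handler(logger_name="my_app", handler=logging.StreamHandler(stream))

logging.getLogger("my_app").info("Analysis started.")
output = stream.getvalue()
```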

License

The Boring Bit

See the octue-sdk-python license.

Third Party Libraries

octue-sdk-python includes or is linked against code from third party libraries - see our attributions page.

Version History

See our releases on GitHub.

Semantic versioning

We use semantic versioning so you can see when new releases make breaking changes or just add new features or bug fixes. Breaking changes are highlighted in our pull request descriptions and release notes.

Important

Note that octue is still in beta, so its major version number remains at 0 (i.e. 0.y.z). This means that, for now, both breaking changes and new features are denoted by an increase in the minor version number (y in x.y.z). When we come out of beta, breaking changes will be denoted by an increase in the major version number (x in x.y.z).
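The beta versioning rule above can be expressed as a small helper (a hypothetical utility for illustration, not part of the SDK):

```python
def may_be_breaking(old, new):
    """Return True if upgrading from `old` to `new` may include breaking changes.

    While the major version is 0 (beta), a minor-version bump may be breaking;
    afterwards, only a major-version bump is.
    """
    old_major, old_minor, _ = (int(part) for part in old.split("."))
    new_major, new_minor, _ = (int(part) for part in new.split("."))

    if old_major == 0:
        return new_major > old_major or new_minor > old_minor

    return new_major > old_major
```

For example, upgrading from `0.29.11` to `0.30.0` may be breaking, whereas `0.29.10` to `0.29.11` only contains bug fixes.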

Deprecated code

When code is deprecated, it will still work but a deprecation warning will be issued with a suggestion on how to update it. After an adjustment period, deprecations will be removed from the codebase according to the code removal schedule. This constitutes a breaking change.
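The pattern described above - the old name keeps working but warns with a pointer to its replacement - looks roughly like this (a generic sketch, not octue's actual deprecation machinery; `old_function`/`new_function` are hypothetical names):

```python
import functools
import warnings


def deprecated(replacement):
    """Mark a function as deprecated, suggesting `replacement` instead."""

    def decorator(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            warnings.warn(
                f"{func.__name__} is deprecated; use {replacement} instead.",
                DeprecationWarning,
                stacklevel=2,
            )
            return func(*args, **kwargs)  # The old code still works.

        return wrapper

    return decorator


@deprecated(replacement="new_function")
def old_function():
    return 42


# Calling the deprecated function succeeds but emits a DeprecationWarning.
with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")
    result = old_function()
```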
