dal.plugins.persistence.filesystem package

Submodules

dal.plugins.persistence.filesystem.filesystem module

Copyright (C) Mov.ai - All Rights Reserved. Unauthorized copying of this file, via any medium, is strictly prohibited. Proprietary and confidential.

Developers: - Alexandre Pires (alexandre.pires@mov.ai) - 2020

class dal.plugins.persistence.filesystem.filesystem.FilesystemPlugin(**kwargs)

Bases: PersistencePlugin

Implements a workspace that stores data in local storage

backup(**kwargs)

archive one or more scopes into a zip file

create_workspace(ref: str, **kwargs)

creates a new workspace

delete(data: object, **kwargs)

delete data in the persistent layer

delete_workspace(ref: str)

deletes an existing workspace

Get a list of all related objects

static get_scope_from_upstream(scope: str)

Get a scope from an upstream server. This function is called every time a scope does not exist in the local archive. It is basically a wrapper around the RemoteArchive client and the RestoreManager class; it uses them to fetch a full backup of the scope's dependencies from a remote archive and restore it on the local archive.
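A minimal usage sketch of this static method, assuming the scope is passed as its name string; the scope name "Node" is an illustrative value, not taken from this documentation:

    from dal.plugins.persistence.filesystem.filesystem import FilesystemPlugin

    # Fetch a scope that is missing from the local archive. Internally this
    # wraps the RemoteArchive client and the RestoreManager class, as
    # described above; "Node" is an assumed example scope name.
    FilesystemPlugin.get_scope_from_upstream("Node")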

get_scope_info(**kwargs)

get the information of a scope

list_scopes(**kwargs)

list all existing scopes

list_versions(**kwargs)

list all existing versions

list_workspaces()

list available workspaces

logger = <Logger filesystem.mov.ai (INFO)>

static parse_fleet_token()

Parse the fleet token to get the archive credentials. The credentials are stored in an environment variable called FLEET_TOKEN as a base64-encoded string with the following format: <user>:<password>
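A minimal sketch of the decoding described above, using only the Python standard library; the helper name decode_fleet_token is illustrative and is not part of the plugin API:

    import base64
    import os

    def decode_fleet_token():
        # FLEET_TOKEN holds "<user>:<password>" encoded as base64.
        token = os.environ["FLEET_TOKEN"]
        decoded = base64.b64decode(token).decode("utf-8")
        user, password = decoded.split(":", 1)
        return user, password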

property plugin_name

Get the current plugin name

property plugin_version

Get the current plugin version

read(**kwargs)

load an object from the persistent layer

rebuild_indexes(**kwargs)

force the database layer to rebuild all indexes. For now this is not implemented on the file system, because data in the archive should always reference existing data; the relations created during the saving process should therefore be sufficient.

restore(**kwargs)

restore a scope/scopes from a zip file

validate_data(schema: TreeNode, data: dict, out: dict)

Validate a dict against a schema

abstract property versioning

returns whether this plugin supports versioning

workspace_info(ref: str)

get information about a workspace

write(data: object, **kwargs)

stores the object in the persistent layer
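A speculative end-to-end sketch of the workspace and read/write methods above; because every method here accepts **kwargs, the keyword names (workspace, archive) and the bare constructor call are assumptions, not the documented API:

    from dal.plugins.persistence.filesystem.filesystem import FilesystemPlugin

    plugin = FilesystemPlugin()  # constructor kwargs omitted; may require configuration

    # All keyword names below are illustrative; consult the plugin source for
    # the actual **kwargs each method expects.
    plugin.create_workspace("demo")                        # creates a new workspace
    plugin.write({"Label": "example"}, workspace="demo")   # store an object
    data = plugin.read(workspace="demo")                   # load an object back
    plugin.backup(workspace="demo", archive="demo.zip")    # archive scopes into a zip file
    plugin.restore(workspace="demo", archive="demo.zip")   # restore scopes from the zip file
    plugin.delete_workspace("demo")                        # delete the workspace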

Module contents

Copyright (C) Mov.ai - All Rights Reserved. Unauthorized copying of this file, via any medium, is strictly prohibited. Proprietary and confidential.

Developers: - Alexandre Pires (alexandre.pires@mov.ai) - 2020

class dal.plugins.persistence.filesystem.FilesystemPlugin(**kwargs)

Bases: PersistencePlugin

Implements a workspace that stores data in local storage. Re-exported from dal.plugins.persistence.filesystem.filesystem; see the full class documentation above.