ocp_resources package
Submodules
ocp_resources.api_service module
- class ocp_resources.api_service.APIService(name=None, client=None, teardown=True, timeout=240, privileged_client=None, yaml_file=None, delete_timeout=240, dry_run=None, node_selector=None, node_selector_labels=None, config_file=None, context=None, label=None, timeout_seconds=60, api_group=None, hash_log_data=True)[source]
Bases:
Resource
APIService object.
- api_group = 'apiregistration.k8s.io'
ocp_resources.benchmark module
- class ocp_resources.benchmark.Benchmark(name=None, namespace=None, client=None, teardown=True, timeout=240, privileged_client=None, yaml_file=None, delete_timeout=240, **kwargs)[source]
Bases:
NamespacedResource
Benchmark resource, defined by https://github.com/cloud-bulldozer/benchmark-operator
The benchmark-operator monitors a namespace for Benchmark resources. When a new Benchmark is created, the benchmark-operator creates and starts the pods or VMs necessary and triggers the benchmark run.
- api_group = 'ripsaw.cloudbulldozer.io'
- property suuid
Returns: str: short UUID string from the resource instance
- property uuid
Returns: str: UUID string from resource instance
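The relation between uuid and suuid can be sketched as follows. This is an illustration only: the derivation of the short UUID (the first segment of the full UUID, as benchmark-operator commonly reports it) is an assumption, not taken from this API reference.

```python
import uuid

# Assumption for illustration: the "short" UUID is the first
# hyphen-separated segment of the full UUID string.
def short_uuid(full_uuid):
    return full_uuid.split("-")[0]

full = str(uuid.uuid4())
print(short_uuid(full))  # first 8-character segment of the full UUID
```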
- workload_arg(arg, default=None)[source]
Retrieve the value of spec.workload.args[arg]
To provide a similar usage as .get(), a default can be defined if needed.
- Parameters:
arg (str) – Argument to retrieve from spec.workload.args
default (any) – Default value to return if arg is not found in workload args
- Returns:
Value of the workload arg, or default if the arg does not exist
- Return type:
any
- property workload_kind
Retrieve the value of spec.workload.args.kind
Not all Benchmarks have a 'kind' defined; this was added for VMs. The default is 'pod'.
- Returns:
Value representing workload kind
- Return type:
str
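The lookup semantics of workload_arg and workload_kind can be sketched with plain dicts standing in for the live resource instance; the nested structure and argument names below are illustrative, not taken from a real Benchmark spec.

```python
# Sketch of the documented lookup semantics, with spec.workload.args
# modeled as nested dicts rather than a live resource instance.
def workload_arg(spec, arg, default=None):
    # Mirrors dict.get(): return the arg's value, or `default` when absent.
    return spec.get("workload", {}).get("args", {}).get(arg, default)

def workload_kind(spec):
    # Not all Benchmarks define a 'kind'; the documented default is 'pod'.
    return workload_arg(spec, "kind", default="pod")

spec = {"workload": {"args": {"kind": "vm", "pin_node": "worker-0"}}}
print(workload_arg(spec, "pin_node"))   # -> worker-0
print(workload_kind({"workload": {}}))  # -> pod
```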
ocp_resources.catalog_source module
- class ocp_resources.catalog_source.CatalogSource(source_type=None, image=None, display_name=None, publisher=None, update_strategy_registry_poll_interval=None, **kwargs)[source]
Bases:
NamespacedResource
https://olm.operatorframework.io/docs/concepts/crds/catalogsource/
- api_group = 'operators.coreos.com'
ocp_resources.catalog_source_config module
ocp_resources.cdi module
- class ocp_resources.cdi.CDI(name=None, client=None, teardown=True, timeout=240, privileged_client=None, yaml_file=None, delete_timeout=240, dry_run=None, node_selector=None, node_selector_labels=None, config_file=None, context=None, label=None, timeout_seconds=60, api_group=None, hash_log_data=True)[source]
Bases:
Resource
CDI object.
- api_group = 'cdi.kubevirt.io'
ocp_resources.cdi_config module
- class ocp_resources.cdi_config.CDIConfig(name=None, client=None, teardown=True, timeout=240, privileged_client=None, yaml_file=None, delete_timeout=240, dry_run=None, node_selector=None, node_selector_labels=None, config_file=None, context=None, label=None, timeout_seconds=60, api_group=None, hash_log_data=True)[source]
Bases:
Resource
CDIConfig object.
- api_group = 'cdi.kubevirt.io'
- property scratch_space_storage_class_from_spec
- property scratch_space_storage_class_from_status
- property upload_proxy_url
ocp_resources.chaos_engine module
- class ocp_resources.chaos_engine.ChaosEngine(name=None, namespace=None, client=None, teardown=True, timeout=240, privileged_client=None, yaml_file=None, delete_timeout=240, **kwargs)[source]
Bases:
NamespacedResource
- class EngineStatus[source]
Bases:
object
- COMPLETED = 'completed'
- INITIALIZED = 'initialized'
- STOPPED = 'stopped'
- api_group = 'litmuschaos.io'
- property engine_status
- property experiments_status
- property success
ocp_resources.cluster_operator module
- class ocp_resources.cluster_operator.ClusterOperator(name=None, client=None, teardown=True, timeout=240, privileged_client=None, yaml_file=None, delete_timeout=240, dry_run=None, node_selector=None, node_selector_labels=None, config_file=None, context=None, label=None, timeout_seconds=60, api_group=None, hash_log_data=True)[source]
Bases:
Resource
- api_group = 'config.openshift.io'
ocp_resources.cluster_role module
ocp_resources.cluster_role_binding module
- class ocp_resources.cluster_role_binding.ClusterRoleBinding(cluster_role=None, subjects=None, **kwargs)[source]
Bases:
Resource
https://kubernetes.io/docs/reference/kubernetes-api/authorization-resources/cluster-role-binding-v1/
- api_group = 'rbac.authorization.k8s.io'
ocp_resources.cluster_service_version module
- class ocp_resources.cluster_service_version.ClusterServiceVersion(name=None, namespace=None, client=None, teardown=True, timeout=240, privileged_client=None, yaml_file=None, delete_timeout=240, **kwargs)[source]
Bases:
NamespacedResource
- api_group = 'operators.coreos.com'
ocp_resources.cluster_version module
- class ocp_resources.cluster_version.ClusterVersion(name=None, client=None, teardown=True, timeout=240, privileged_client=None, yaml_file=None, delete_timeout=240, dry_run=None, node_selector=None, node_selector_labels=None, config_file=None, context=None, label=None, timeout_seconds=60, api_group=None, hash_log_data=True)[source]
Bases:
Resource
- api_group = 'config.openshift.io'
ocp_resources.configmap module
- class ocp_resources.configmap.ConfigMap(data=None, **kwargs)[source]
Bases:
NamespacedResource
https://kubernetes.io/docs/reference/kubernetes-api/config-and-storage-resources/config-map-v1/
- api_version = 'v1'
ocp_resources.console_cli_download module
- class ocp_resources.console_cli_download.ConsoleCLIDownload(name=None, client=None, teardown=True, timeout=240, privileged_client=None, yaml_file=None, delete_timeout=240, dry_run=None, node_selector=None, node_selector_labels=None, config_file=None, context=None, label=None, timeout_seconds=60, api_group=None, hash_log_data=True)[source]
Bases:
Resource
ConsoleCLIDownload object, inherited from Resource.
- api_group = 'console.openshift.io'
ocp_resources.console_quick_starts module
ocp_resources.custom_resource_definition module
- class ocp_resources.custom_resource_definition.CustomResourceDefinition(name=None, client=None, teardown=True, timeout=240, privileged_client=None, yaml_file=None, delete_timeout=240, dry_run=None, node_selector=None, node_selector_labels=None, config_file=None, context=None, label=None, timeout_seconds=60, api_group=None, hash_log_data=True)[source]
Bases:
Resource
- api_group = 'apiextensions.k8s.io'
ocp_resources.daemonset module
- class ocp_resources.daemonset.DaemonSet(name=None, namespace=None, client=None, teardown=True, timeout=240, privileged_client=None, yaml_file=None, delete_timeout=240, **kwargs)[source]
Bases:
NamespacedResource
DaemonSet object.
- api_group = 'apps'
- delete(wait=False, timeout=240, body=None)[source]
Delete the DaemonSet.
- Parameters:
wait (bool) – True to wait for the DaemonSet to be deleted.
timeout (int) – Time to wait for resource deletion.
body (dict) – Content to send for delete().
- Returns:
True if delete succeeded, False otherwise.
- Return type:
bool
- wait_until_deployed(timeout=240)[source]
Wait until all Pods are deployed and ready.
- Parameters:
timeout (int) – Time to wait for the Daemonset.
- Raises:
TimeoutExpiredError – If not all the pods are deployed.
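The wait_until_deployed() behavior follows a generic poll-until-ready pattern, which can be sketched independently of the cluster. The status field names below follow the Kubernetes DaemonSet status; the predicate is a stand-in for "all pods are deployed and ready".

```python
import time

# Generic polling sketch of a wait_until_deployed-style loop: sample a
# readiness predicate until it holds or the timeout expires.
def wait_until(predicate, timeout=240, interval=1):
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if predicate():
            return True
        time.sleep(interval)
    raise TimeoutError("condition not met within timeout")

status = {"desiredNumberScheduled": 3, "numberReady": 3}
ready = lambda: status["numberReady"] == status["desiredNumberScheduled"]
print(wait_until(ready, timeout=5, interval=0.1))  # -> True
```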
ocp_resources.datavolume module
- class ocp_resources.datavolume.DataVolume(name=None, namespace=None, source=None, size=None, storage_class=None, url=None, content_type='kubevirt', access_modes=None, cert_configmap=None, secret=None, client=None, volume_mode=None, hostpath_node=None, source_pvc=None, source_namespace=None, multus_annotation=None, bind_immediate_annotation=None, preallocation=None, teardown=True, privileged_client=None, yaml_file=None, delete_timeout=240, api_name='pvc', delete_after_completion=None, **kwargs)[source]
Bases:
NamespacedResource
DataVolume object.
- class AccessMode[source]
Bases:
object
AccessMode object.
- ROX = 'ReadOnlyMany'
- RWO = 'ReadWriteOnce'
- RWX = 'ReadWriteMany'
- class ContentType[source]
Bases:
object
ContentType object
- ARCHIVE = 'archive'
- KUBEVIRT = 'kubevirt'
- class Status[source]
Bases:
Status
- BLANK = 'Blank'
- CLONE_IN_PROGRESS = 'CloneInProgress'
- ClONE_SCHEDULED = 'CloneScheduled'
- IMPORT_IN_PROGRESS = 'ImportInProgress'
- IMPORT_SCHEDULED = 'ImportScheduled'
- PENDING_POPULATION = 'PendingPopulation'
- PVC_BOUND = 'PVCBound'
- SMART_CLONE_PVC_IN_PROGRESS = 'SmartClonePVCInProgress'
- SNAPSHOT_FOR_SMART_CLONE_IN_PROGRESS = 'SnapshotForSmartCloneInProgress'
- UNKNOWN = 'Unknown'
- UPLOAD_IN_PROGRESS = 'UploadInProgress'
- UPLOAD_READY = 'UploadReady'
- UPLOAD_SCHEDULED = 'UploadScheduled'
- WAIT_FOR_FIRST_CONSUMER = 'WaitForFirstConsumer'
- api_group = 'cdi.kubevirt.io'
- delete(wait=False, timeout=240, body=None)[source]
Delete DataVolume
- Parameters:
wait (bool) – True to wait for DataVolume and PVC to be deleted.
timeout (int) – Time to wait for resources deletion
body (dict) – Content to send for delete()
- Returns:
True if delete succeeded, False otherwise.
- Return type:
bool
- property pvc
- property scratch_pvc
- wait(timeout=600, failure_timeout=120)[source]
Wait for the resource.
- Parameters:
timeout (int) – Time to wait for the resource.
failure_timeout (int) – Time to wait for the resource to reach a failure status.
- Raises:
TimeoutExpiredError – If the resource does not exist.
- wait_deleted(timeout=240)[source]
Wait until DataVolume and the PVC created by it are deleted
- Parameters:
timeout (int) – Time to wait for the DataVolume and PVC to be deleted.
- Returns:
True if the DataVolume and its PVC are gone, False if the timeout is reached.
- Return type:
bool
- wait_for_dv_success(timeout=600, failure_timeout=120, stop_status_func=None, *stop_status_func_args, **stop_status_func_kwargs)[source]
Wait until DataVolume succeeded with or without DV Garbage Collection enabled
- Parameters:
timeout (int) – Time to wait for the DataVolume to succeed.
failure_timeout (int) – Time to wait for the DataVolume to leave a Pending/None status
stop_status_func (function) –
Function called inside the TimeoutSampler; if it returns True, the sampler stops and TimeoutExpiredError is raised. Example:
- def dv_is_not_progressing(dv):
return dv.instance.status.conditions.restartCount > 3
- def test_dv():
stop_status_func_kwargs = {"dv": dv}
dv.wait_for_dv_success(stop_status_func=dv_is_not_progressing, **stop_status_func_kwargs)
- Returns:
True if DataVolume succeeded.
- Return type:
bool
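The stop_status_func contract can be sketched without a live cluster. sample_until_success below is a hypothetical stand-in for the TimeoutSampler loop, and the dict-based DataVolume is illustrative only.

```python
# Hypothetical stand-in for the TimeoutSampler loop: poll for success, but
# abort early when stop_status_func(*args, **kwargs) returns True.
def sample_until_success(success_func, stop_status_func=None, max_samples=10,
                         *stop_args, **stop_kwargs):
    for _ in range(max_samples):
        if stop_status_func and stop_status_func(*stop_args, **stop_kwargs):
            raise RuntimeError("stopped early by stop_status_func")
        if success_func():
            return True
    raise TimeoutError("did not succeed within max_samples")

# A DataVolume that is "not progressing" aborts the wait immediately:
dv = {"restartCount": 5}

def dv_is_not_progressing(dv):
    return dv["restartCount"] > 3

try:
    sample_until_success(lambda: False, dv_is_not_progressing, 10, dv)
except RuntimeError as exc:
    print(exc)  # -> stopped early by stop_status_func
```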
ocp_resources.deployment module
- class ocp_resources.deployment.Deployment(name=None, namespace=None, client=None, teardown=True, timeout=240, privileged_client=None, yaml_file=None, delete_timeout=240, **kwargs)[source]
Bases:
NamespacedResource
OpenShift Deployment object.
- api_group = 'apps'
- scale_replicas(replica_count)[source]
Update replicas in deployment.
- Parameters:
replica_count (int) – Number of replicas.
- Returns:
Whether the Deployment was updated successfully.
- wait_for_replicas(deployed=True, timeout=240)[source]
Wait until all replicas are updated.
- Parameters:
deployed (bool) – True for replicas deployed, False for no replicas.
timeout (int) – Time to wait for the deployment.
- Raises:
TimeoutExpiredError – If availableReplicas does not equal replicas.
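The success condition of wait_for_replicas() can be sketched as a pure function over the Deployment status. The field names follow the Kubernetes Deployment status; the function itself is an illustration, not part of this API.

```python
# Sketch of the wait_for_replicas() condition: with deployed=True the
# Deployment is settled once availableReplicas equals replicas; with
# deployed=False it is settled once no replicas remain.
def replicas_settled(status, deployed=True):
    if deployed:
        return status.get("availableReplicas") == status.get("replicas")
    return not status.get("replicas")

print(replicas_settled({"replicas": 3, "availableReplicas": 3}))  # -> True
print(replicas_settled({"replicas": 3, "availableReplicas": 1}))  # -> False
print(replicas_settled({}, deployed=False))                       # -> True
```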
ocp_resources.destination_rule module
- class ocp_resources.destination_rule.DestinationRule(name=None, namespace=None, client=None, teardown=True, timeout=240, privileged_client=None, yaml_file=None, delete_timeout=240, **kwargs)[source]
Bases:
NamespacedResource
Destination Rule object.
- api_group = 'networking.istio.io'
ocp_resources.event module
- class ocp_resources.event.Event[source]
Bases:
object
Allow read and remove K8s events.
- api_version = 'v1'
- classmethod delete_events(dyn_client, namespace=None, name=None, label_selector=None, field_selector=None, resource_version=None, timeout=None)[source]
- delete_events - delete K8s events. Useful, for example, to clean up events before a test so that old events do not leak into the test and cause false positives.
- Parameters:
dyn_client (DynamicClient) – K8s client
namespace (str) – event namespace
name (str) – event name
label_selector (str) – filter events by labels; comma separated string of key=value
field_selector (str) – filter events by fields; comma separated string of key=value
resource_version (str) – filter events by their resource’s version
timeout (int) – timeout in seconds
- Returns:
list: event objects
Example: deleting all events with a reason of "AnEventReason" from the "my-namespace" namespace:
- def delete_events_before_test(default_client):
Event.delete_events(default_client, namespace="my-namespace", field_selector="reason=AnEventReason")
- classmethod get(dyn_client, namespace=None, name=None, label_selector=None, field_selector=None, resource_version=None, timeout=None)[source]
get - retrieves K8s events.
- Parameters:
dyn_client (DynamicClient) – K8s client
namespace (str) – event namespace
name (str) – event name
label_selector (str) – filter events by labels; comma separated string of key=value
field_selector (str) – filter events by fields; comma separated string of key=value
resource_version (str) – filter events by their resource’s version
timeout (int) – timeout in seconds
- Returns:
list: event objects
- Example: reading all CSV Warning events in namespace "my-namespace", with reason of "AnEventReason":
- for event in Event.get(
default_client, namespace="my-namespace", field_selector="involvedObject.kind==ClusterServiceVersion,type==Warning,reason=AnEventReason", timeout=10,
- ):
print(event.object)
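Both get() and delete_events() take label_selector and field_selector as comma-separated key=value strings; a small builder keeps call sites readable. build_selector is an illustrative helper, not part of this API.

```python
# Compose a comma-separated selector string from keyword arguments,
# in the key=value form the Event class methods document.
def build_selector(**pairs):
    return ",".join(f"{key}={value}" for key, value in pairs.items())

selector = build_selector(reason="AnEventReason", type="Warning")
print(selector)  # -> reason=AnEventReason,type=Warning
```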
ocp_resources.gateway module
- class ocp_resources.gateway.Gateway(name=None, namespace=None, client=None, teardown=True, timeout=240, privileged_client=None, yaml_file=None, delete_timeout=240, **kwargs)[source]
Bases:
NamespacedResource
Gateway object.
- api_group = 'networking.istio.io'
ocp_resources.host module
- class ocp_resources.host.Host(name=None, namespace=None, host_id=None, ip_address=None, provider_name=None, provider_namespace=None, secret_name=None, secret_namespace=None, client=None, teardown=True, yaml_file=None, delete_timeout=240, **kwargs)[source]
Bases:
NamespacedResource, MTV
Migration Toolkit For Virtualization (MTV) Host resource.
- api_group = 'forklift.konveyor.io'
ocp_resources.hostpath_provisioner module
- class ocp_resources.hostpath_provisioner.HostPathProvisioner(name=None, path=None, image_pull_policy=None, client=None, teardown=True, yaml_file=None, delete_timeout=240, **kwargs)[source]
Bases:
Resource
HostPathProvisioner Custom Resource Object.
- api_group = 'hostpathprovisioner.kubevirt.io'
- property volume_path
ocp_resources.hyperconverged module
- class ocp_resources.hyperconverged.HyperConverged(name=None, namespace=None, client=None, infra=None, workloads=None, teardown=True, yaml_file=None, delete_timeout=240, **kwargs)[source]
Bases:
NamespacedResource
- api_group = 'hco.kubevirt.io'
ocp_resources.image_content_source_policy module
ocp_resources.imagestreamtag module
- class ocp_resources.imagestreamtag.ImageStreamTag(name=None, namespace=None, client=None, teardown=True, timeout=240, privileged_client=None, yaml_file=None, delete_timeout=240, **kwargs)[source]
Bases:
NamespacedResource
ImageStreamTag object.
- api_group = 'image.openshift.io'
ocp_resources.infrastructure module
- class ocp_resources.infrastructure.Infrastructure(name=None, client=None, teardown=True, timeout=240, privileged_client=None, yaml_file=None, delete_timeout=240, dry_run=None, node_selector=None, node_selector_labels=None, config_file=None, context=None, label=None, timeout_seconds=60, api_group=None, hash_log_data=True)[source]
Bases:
Resource
Infrastructure object.
- api_group = 'config.openshift.io'
- property platform
ocp_resources.installplan module
- class ocp_resources.installplan.InstallPlan(name=None, namespace=None, client=None, teardown=True, timeout=240, privileged_client=None, yaml_file=None, delete_timeout=240, **kwargs)[source]
Bases:
NamespacedResource
- api_group = 'operators.coreos.com'
ocp_resources.kube_descheduler module
- class ocp_resources.kube_descheduler.KubeDescheduler(name=None, namespace=None, profiles=None, descheduling_interval=3600, log_level='Normal', management_state='Managed', mode='Predictive', operator_log_level='Normal', teardown=True, client=None, yaml_file=None, delete_timeout=240, **kwargs)[source]
Bases:
NamespacedResource
- api_group = 'operator.openshift.io'
ocp_resources.kubevirt module
- class ocp_resources.kubevirt.KubeVirt(name=None, namespace=None, client=None, teardown=True, timeout=240, privileged_client=None, yaml_file=None, delete_timeout=240, **kwargs)[source]
Bases:
NamespacedResource
- api_group = 'kubevirt.io'
ocp_resources.kubevirt_common_templates_bundle module
- class ocp_resources.kubevirt_common_templates_bundle.KubevirtCommonTemplatesBundle(name=None, namespace=None, client=None, teardown=True, timeout=240, privileged_client=None, yaml_file=None, delete_timeout=240, **kwargs)[source]
Bases:
NamespacedResource
- api_group = 'ssp.kubevirt.io'
ocp_resources.kubevirt_metrics_aggregation module
- class ocp_resources.kubevirt_metrics_aggregation.KubevirtMetricsAggregation(name=None, namespace=None, client=None, teardown=True, timeout=240, privileged_client=None, yaml_file=None, delete_timeout=240, **kwargs)[source]
Bases:
NamespacedResource
- api_group = 'ssp.kubevirt.io'
ocp_resources.kubevirt_node_labeller_bundle module
- class ocp_resources.kubevirt_node_labeller_bundle.KubevirtNodeLabellerBundle(name=None, namespace=None, client=None, teardown=True, timeout=240, privileged_client=None, yaml_file=None, delete_timeout=240, **kwargs)[source]
Bases:
NamespacedResource
- api_group = 'ssp.kubevirt.io'
ocp_resources.kubevirt_template_validaotr module
- class ocp_resources.kubevirt_template_validaotr.KubevirtTemplateValidator(name=None, namespace=None, client=None, teardown=True, timeout=240, privileged_client=None, yaml_file=None, delete_timeout=240, **kwargs)[source]
Bases:
NamespacedResource
- api_group = 'ssp.kubevirt.io'
ocp_resources.machine module
- class ocp_resources.machine.Machine(name=None, namespace=None, teardown=True, client=None, yaml_file=None, delete_timeout=240, **kwargs)[source]
Bases:
NamespacedResource
Machine object.
- api_group = 'machine.openshift.io'
- property cluster_name
- property machine_role
- property machine_type
- property machineset_name
ocp_resources.machine_config_pool module
- class ocp_resources.machine_config_pool.MachineConfigPool(machine_config_selector=None, configuration=None, node_selector=None, max_unavailable=None, paused=None, **kwargs)[source]
Bases:
Resource
MachineConfigPool object. API reference: https://docs.openshift.com/container-platform/4.12/rest_api/machine_apis/machineconfigpool-machineconfiguration-openshift-io-v1.html
- Parameters:
node_selector (dict) –
Matching dict with supported selector logic, either labels or expressions.
matchLabels example:
matchLabels:
    component: <some component>
matchExpressions examples:
matchExpressions:
    - { key: tier, operator: In, values: [cache] }
    - { key: environment, operator: NotIn, values: [dev] }
matchExpressions:
    - key: <resource name>/role
      operator: In
      values:
        - value_1
        - value_2
machine_config_selector (dict) –
Matching labels/expressions that determine which MachineConfig objects this MachineConfigPool applies. For filtering based on labels, the matchLabels dict is used, the same way as in nodeSelector (see the node_selector["matchLabels"] example above).
For filtering based on expressions, the matchExpressions dict is used, the same way as in nodeSelector (see the node_selector["matchExpressions"] example above).
configuration (dict) – Targeted MachineConfig object for the machine config pool, in the following format: {“name”: (str), “source”: <List of dicts, each representing a MachineConfig resource>}
max_unavailable (int or str) – Number/percentage of nodes that can go Unavailable during an update.
paused (bool) – Whether changes to this MachineConfigPool should be stopped.
- api_group = 'machineconfiguration.openshift.io'
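The evaluation of matchLabels / matchExpressions selectors (as used by node_selector and machine_config_selector above) can be sketched as follows. Only the In and NotIn operators from the examples are modeled; the function is an illustration, not the operator's implementation.

```python
# Sketch: evaluate a Kubernetes-style selector against an object's labels.
def selector_matches(labels, selector):
    # matchLabels: every key must be present with the exact value.
    for key, value in selector.get("matchLabels", {}).items():
        if labels.get(key) != value:
            return False
    # matchExpressions: only In / NotIn are modeled here.
    for expr in selector.get("matchExpressions", []):
        key, op, values = expr["key"], expr["operator"], expr.get("values", [])
        if op == "In" and labels.get(key) not in values:
            return False
        if op == "NotIn" and labels.get(key) in values:
            return False
    return True

selector = {
    "matchLabels": {"component": "cache"},
    "matchExpressions": [{"key": "environment", "operator": "NotIn", "values": ["dev"]}],
}
print(selector_matches({"component": "cache", "environment": "prod"}, selector))  # -> True
print(selector_matches({"component": "cache", "environment": "dev"}, selector))   # -> False
```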
ocp_resources.machine_health_check module
- class ocp_resources.machine_health_check.MachineHealthCheck(name=None, namespace=None, cluster_name=None, machineset_name=None, client=None, machine_role='worker', machine_type='worker', node_startup_timeout='120m', max_unhealthy=2, unhealthy_timeout='300s', reboot_strategy=False, teardown=True, yaml_file=None, delete_timeout=240, **kwargs)[source]
Bases:
NamespacedResource
MachineHealthCheck object.
- api_group = 'machine.openshift.io'
ocp_resources.machine_set module
- class ocp_resources.machine_set.MachineSet(cluster_name=None, name=None, namespace=None, teardown=True, client=None, machine_role='worker', machine_type='worker', replicas=1, provider_spec=None, yaml_file=None, delete_timeout=240, **kwargs)[source]
Bases:
NamespacedResource
MachineSet object.
- Parameters:
cluster_name (str) – OpenShift cluster name.
machine_role (str) – machine role, e.g. 'worker'.
machine_type (str) – machine type, e.g. 'worker'.
replicas (int) – number of replicas the machine-set will have.
provider_spec (dict) – provider spec information.
example (provider spec) –
{
    "value": {
        "apiVersion": "ovirtproviderconfig.machine.openshift.io/v1beta1",
        "auto_pinning_policy": "none",
        "cluster_id": "5612af70-f4f5-455d-b7df-fbad66accc38",
        "cpu": {"cores": 8, "sockets": 1, "threads": 1},
        "credentialsSecret": {"name": "ovirt-credentials"},
        "kind": "OvirtMachineProviderSpec",
        "memory_mb": 16000,
        "os_disk": {"size_gb": 31},
        "template_name": "ge2n1-gcwmg-rhcos",
        "type": "server",
        "userDataSecret": {"name": "worker-user-data"},
    }
}
- api_group = 'machine.openshift.io'
- property available_replicas
- property desired_replicas
- property provider_spec_value
- property ready_replicas
- scale_replicas(replicas, wait_timeout=300, sleep=1, wait=True)[source]
Scale a machine-set's replicas up or down.
- Parameters:
replicas (int) – number of replicas to scale to.
wait_timeout (int) – maximum time to wait for the machine-set to scale.
sleep (int) – sleep time between each sample of the machine-set state.
wait (bool) – True to wait for the machine-set to reach the 'ready' state, False otherwise.
- Returns:
True if scaling the machine-set was successful or wait=False, False otherwise.
- Return type:
bool
- wait_for_replicas(timeout=300, sleep=1)[source]
Wait for machine-set replicas to reach the 'ready' state.
- Parameters:
timeout (int) – maximum time to wait for the 'ready' state.
sleep (int) – sleep time between each sample.
- Returns:
True if the machine-set reached the 'ready' state, False otherwise.
- Return type:
bool
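The sampling loop behind wait_for_replicas() can be sketched without a cluster by treating repeated status reads as an iterator. The fake "samples" iterator is a stand-in for polling the machine-set's ready_replicas; none of the names below come from this API.

```python
import itertools

# Sketch: poll ready-replica counts until they match the desired count,
# up to a bounded number of samples.
def wait_for_ready(samples, desired, max_samples=300):
    for ready in itertools.islice(samples, max_samples):
        if ready == desired:
            return True
    return False

# A machine-set scaling from 1 to 3 replicas, one becoming ready per sample:
print(wait_for_ready(iter([1, 2, 3]), desired=3))  # -> True
print(wait_for_ready(iter([1, 1, 1]), desired=3))  # -> False
```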
ocp_resources.migration module
- class ocp_resources.migration.Migration(name=None, namespace=None, plan_name=None, plan_namespace=None, cut_over=None, client=None, teardown=True, yaml_file=None, delete_timeout=240, **kwargs)[source]
Bases:
NamespacedResource, MTV
Migration Toolkit For Virtualization (MTV) Migration object.
- Parameters:
plan_name (str) – MTV Plan CR name.
plan_namespace (str) – MTV Plan CR namespace.
cut_over (date) – For warm migration only. Cutover phase start date & time.
- api_group = 'forklift.konveyor.io'
ocp_resources.mtv module
- class ocp_resources.mtv.MTV[source]
Bases:
object
- Abstract class for all Migration Toolkit for Virtualization (MTV) resources:
Provider, Plan, Migration, StorageMap, NetworkMap, Host, ForkliftController
- class ConditionMessage[source]
Bases:
object
- HOST_READY = 'The host is ready.'
- MIGRATION_READY = 'The migration is ready.'
- MIGRATION_RUNNING = 'The migration is RUNNING'
- MIGRATION_SUCCEEDED = 'The migration has SUCCEEDED.'
- NETWORK_MAP_READY = 'The network map is ready.'
- PLAN_FAILED = 'The plan execution has FAILED.'
- PLAN_READY = 'The migration plan is ready.'
- PLAN_SUCCEEDED = 'The plan execution has SUCCEEDED.'
- PROVIDER_READY = 'The provider is ready.'
- STORAGE_MAP_READY = 'The storage map is ready.'
- class ConditionType[source]
Bases:
object
- FAILED = 'Failed'
- SUCCEEDED = 'Succeeded'
- TARGET_NAME_NOT_VALID = 'TargetNameNotValid'
- VM_ALREADY_EXISTS = 'VMAlreadyExists'
- class ProviderType[source]
Bases:
object
- OPENSHIFT = 'openshift'
- RHV = 'ovirt'
- VSPHERE = 'vsphere'
- property map_to_dict
ocp_resources.mutating_webhook_config module
- class ocp_resources.mutating_webhook_config.MutatingWebhookConfiguration(name=None, client=None, teardown=True, timeout=240, privileged_client=None, yaml_file=None, delete_timeout=240, dry_run=None, node_selector=None, node_selector_labels=None, config_file=None, context=None, label=None, timeout_seconds=60, api_group=None, hash_log_data=True)[source]
Bases:
Resource
MutatingWebhookConfiguration object.
- api_group = 'admissionregistration.k8s.io'
ocp_resources.namespace module
ocp_resources.network module
- class ocp_resources.network.Network(name=None, client=None, teardown=True, timeout=240, privileged_client=None, yaml_file=None, delete_timeout=240, dry_run=None, node_selector=None, node_selector_labels=None, config_file=None, context=None, label=None, timeout_seconds=60, api_group=None, hash_log_data=True)[source]
Bases:
Resource
- api_group = 'config.openshift.io'
ocp_resources.network_addons_config module
- class ocp_resources.network_addons_config.NetworkAddonsConfig(name=None, client=None, teardown=True, timeout=240, privileged_client=None, yaml_file=None, delete_timeout=240, dry_run=None, node_selector=None, node_selector_labels=None, config_file=None, context=None, label=None, timeout_seconds=60, api_group=None, hash_log_data=True)[source]
Bases:
Resource
NetworkAddonsConfig (a Custom Resource) object, inherited from Resource.
- api_group = 'networkaddonsoperator.network.kubevirt.io'
ocp_resources.network_attachment_definition module
- class ocp_resources.network_attachment_definition.BridgeNetworkAttachmentDefinition(name, namespace, bridge_name, cni_type, cni_version='0.3.1', vlan=None, client=None, mtu=None, macspoofchk=None, teardown=True, old_nad_format=False, add_resource_name=True, dry_run=None)[source]
Bases:
NetworkAttachmentDefinition
- class ocp_resources.network_attachment_definition.LinuxBridgeNetworkAttachmentDefinition(name, namespace, bridge_name, cni_type='cnv-bridge', cni_version='0.3.1', vlan=None, client=None, mtu=None, tuning_type=None, teardown=True, macspoofchk=None, add_resource_name=True, dry_run=None)[source]
Bases:
BridgeNetworkAttachmentDefinition
- property resource_name
- class ocp_resources.network_attachment_definition.NetworkAttachmentDefinition(name=None, namespace=None, client=None, cni_type=None, cni_version='0.3.1', config=None, *args, **kwargs)[source]
Bases:
NamespacedResource
NetworkAttachmentDefinition object.
- api_group = 'k8s.cni.cncf.io'
- resource_name = None
- class ocp_resources.network_attachment_definition.OVNOverlayNetworkAttachmentDefinition(network_name=None, **kwargs)[source]
Bases:
NetworkAttachmentDefinition
- class ocp_resources.network_attachment_definition.OvsBridgeNetworkAttachmentDefinition(name, namespace, bridge_name, vlan=None, client=None, mtu=None, teardown=True, dry_run=None, cni_version='0.3.1')[source]
Bases:
BridgeNetworkAttachmentDefinition
- property resource_name
ocp_resources.network_map module
- class ocp_resources.network_map.NetworkMap(name=None, namespace=None, mapping=None, source_provider_name=None, source_provider_namespace=None, destination_provider_name=None, destination_provider_namespace=None, client=None, teardown=True, yaml_file=None, delete_timeout=240, **kwargs)[source]
Bases:
NamespacedResource, MTV
Migration Toolkit For Virtualization (MTV) NetworkMap object.
- Parameters:
source_provider_name (str) – MTV Source Provider CR name.
source_provider_namespace (str) – MTV Source Provider CR namespace.
destination_provider_name (str) – MTV Destination Provider CR name.
destination_provider_namespace (str) – MTV Destination Provider CR namespace.
mapping (dict) –
Network resources mapping example:
[
    {
        "destination": {"type": "pod"},
        "source": {"id": "network-13"},
    },
    {
        "destination": {"name": "nad_cr_name", "namespace": "nad_cr_namespace", "type": "multus"},
        "source": {"name": "VM Network"},
    },
]
- api_group = 'forklift.konveyor.io'
ocp_resources.network_policy module
- class ocp_resources.network_policy.NetworkPolicy(name=None, namespace=None, client=None, teardown=True, timeout=240, privileged_client=None, yaml_file=None, delete_timeout=240, **kwargs)[source]
Bases:
NamespacedResource
NetworkPolicy object.
- api_group = 'networking.k8s.io'
ocp_resources.node module
- class ocp_resources.node.Node(name=None, client=None, teardown=True, timeout=240, privileged_client=None, yaml_file=None, delete_timeout=240, dry_run=None, node_selector=None, node_selector_labels=None, config_file=None, context=None, label=None, timeout_seconds=60, api_group=None, hash_log_data=True)[source]
Bases:
Resource
Node object, inherited from Resource.
- api_version = 'v1'
- property hostname
- property internal_ip
- property kubelet_ready
- property machine_name
- property taints
ocp_resources.node_maintenance module
- class ocp_resources.node_maintenance.NodeMaintenance(name=None, client=None, node=None, reason='TEST Reason', teardown=True, timeout=240, yaml_file=None, delete_timeout=240, **kwargs)[source]
Bases:
Resource
Node Maintenance object, inherited from Resource.
- api_group = 'nodemaintenance.kubevirt.io'
ocp_resources.node_network_configuration_enactment module
- class ocp_resources.node_network_configuration_enactment.NodeNetworkConfigurationEnactment(name=None, client=None, teardown=True, timeout=240, privileged_client=None, yaml_file=None, delete_timeout=240, dry_run=None, node_selector=None, node_selector_labels=None, config_file=None, context=None, label=None, timeout_seconds=60, api_group=None, hash_log_data=True)[source]
Bases:
Resource
- class Conditions[source]
Bases:
object
- api_group = 'nmstate.io'
ocp_resources.node_network_configuration_policy module
- exception ocp_resources.node_network_configuration_policy.NNCPConfigurationFailed[source]
Bases:
Exception
- class ocp_resources.node_network_configuration_policy.NodeNetworkConfigurationPolicy(name=None, client=None, capture=None, node_selector=None, node_selector_labels=None, teardown_absent_ifaces=True, teardown=True, mtu=None, ports=None, ipv4_enable=False, ipv4_dhcp=False, ipv4_auto_dns=True, ipv4_addresses=None, ipv6_enable=False, ipv6_dhcp=False, ipv6_auto_dns=True, ipv6_addresses=None, dns_resolver=None, routes=None, yaml_file=None, set_ipv4=True, set_ipv6=True, max_unavailable=None, state=None, success_timeout=480, delete_timeout=240, **kwargs)[source]
Bases:
Resource
- class Conditions[source]
Bases:
object
- add_interface(iface=None, name=None, type_=None, state=None, set_ipv4=True, ipv4_enable=False, ipv4_dhcp=False, ipv4_auto_dns=True, ipv4_addresses=None, set_ipv6=True, ipv6_enable=False, ipv6_dhcp=False, ipv6_auto_dns=True, ipv6_addresses=None, ipv6_autoconf=False)[source]
- api_group = 'nmstate.io'
- clean_up()[source]
For debug, export SKIP_RESOURCE_TEARDOWN to skip resource teardown. Spaces are important in the export dict
Examples
- To skip teardown of all resources by kind:
export SKIP_RESOURCE_TEARDOWN="{Pod: {}}"
- To skip teardown of a resource by name (in all namespaces):
export SKIP_RESOURCE_TEARDOWN="{Pod: {<pod-name>:}}"
- To skip teardown of a resource by name and namespace:
export SKIP_RESOURCE_TEARDOWN="{Pod: {<pod-name>: <pod-namespace>}}"
- To skip teardown of multiple resources:
export SKIP_RESOURCE_TEARDOWN="{Namespace: {<namespace-name>:}, Pod: {<pod-name>: <pod-namespace>}}"
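Once the exported string is parsed into a mapping of kind to {name: namespace}, the skip decision can be sketched as follows. should_skip_teardown is an illustrative helper, not part of this API, and the parsing of the environment variable itself is out of scope here.

```python
# Sketch: consult a SKIP_RESOURCE_TEARDOWN-style mapping
# (kind -> {name: namespace}) to decide whether teardown is skipped.
def should_skip_teardown(skip_map, kind, name=None, namespace=None):
    if kind not in skip_map:
        return False
    entries = skip_map[kind]
    if not entries:
        return True  # empty dict: skip every resource of this kind
    if name in entries:
        wanted_namespace = entries[name]
        # None means "any namespace" for this name.
        return wanted_namespace is None or wanted_namespace == namespace
    return False

skip_map = {"Pod": {"my-pod": "my-ns"}, "Namespace": {"scratch": None}}
print(should_skip_teardown(skip_map, "Pod", "my-pod", "my-ns"))  # -> True
print(should_skip_teardown(skip_map, "Pod", "my-pod", "other"))  # -> False
print(should_skip_teardown(skip_map, "Namespace", "scratch"))    # -> True
```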
- deploy(wait=False)[source]
For debug, export REUSE_IF_RESOURCE_EXISTS to skip resource create. Spaces are important in the export dict
Examples
- To skip creation of all resources by kind:
export REUSE_IF_RESOURCE_EXISTS="{Pod: {}}"
- To skip creation of a resource by name (in all namespaces, or for non-namespaced resources):
export REUSE_IF_RESOURCE_EXISTS="{Pod: {<pod-name>:}}"
- To skip creation of a resource by name and namespace:
export REUSE_IF_RESOURCE_EXISTS="{Pod: {<pod-name>: <pod-namespace>}}"
- To skip creation of multiple resources:
export REUSE_IF_RESOURCE_EXISTS="{Namespace: {<namespace-name>:}, Pod: {<pod-name>: <pod-namespace>}}"
- property nnces
- property status
Get resource status
Status: Running, Scheduling, Pending, Unknown, CrashLoopBackOff
- Returns:
Status
- Return type:
str
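The policy above is typically used as a context manager so that teardown runs automatically. A minimal sketch, assuming a live cluster with the NMState operator installed; the policy name, bridge name, and worker node label are illustrative assumptions, not values from this reference:

```python
# Sketch only: applying the policy needs a reachable cluster with NMState.
def desired_interface(name, type_, state="up"):
    # Pure helper mirroring an NMState desiredState interface entry.
    return {"name": name, "type": type_, "state": state}


def apply_bridge_policy():
    from ocp_resources.node_network_configuration_policy import (
        NodeNetworkConfigurationPolicy,
    )

    nncp = NodeNetworkConfigurationPolicy(
        name="br1-policy",  # assumed name
        node_selector_labels={"node-role.kubernetes.io/worker": ""},
        ipv4_enable=True,
        ipv4_dhcp=True,
    )
    # add_interface() records the interface in the policy's desired state.
    nncp.add_interface(name="br1", type_="linux-bridge", state="up")
    with nncp:  # deploys the policy; clean_up() runs on exit
        return nncp.status
```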
ocp_resources.node_network_state module
ocp_resources.oauth module
- class ocp_resources.oauth.OAuth(name=None, client=None, teardown=True, timeout=240, privileged_client=None, yaml_file=None, delete_timeout=240, dry_run=None, node_selector=None, node_selector_labels=None, config_file=None, context=None, label=None, timeout_seconds=60, api_group=None, hash_log_data=True)[source]
Bases:
Resource
OAuth object.
- api_group = 'config.openshift.io'
ocp_resources.operator_group module
- class ocp_resources.operator_group.OperatorGroup(name=None, namespace=None, target_namespaces=None, teardown=True, client=None, yaml_file=None, delete_timeout=240, **kwargs)[source]
Bases:
NamespacedResource
- api_group = 'operators.coreos.com'
ocp_resources.operator_hub module
- class ocp_resources.operator_hub.OperatorHub(name=None, client=None, teardown=True, timeout=240, privileged_client=None, yaml_file=None, delete_timeout=240, dry_run=None, node_selector=None, node_selector_labels=None, config_file=None, context=None, label=None, timeout_seconds=60, api_group=None, hash_log_data=True)[source]
Bases:
Resource
- api_group = 'config.openshift.io'
ocp_resources.operator_source module
- class ocp_resources.operator_source.OperatorSource(name=None, namespace=None, registry_namespace=None, display_name=None, publisher=None, secret=None, client=None, teardown=True, yaml_file=None, delete_timeout=240, **kwargs)[source]
Bases:
NamespacedResource
- api_group = 'operators.coreos.com'
ocp_resources.package_manifest module
- class ocp_resources.package_manifest.PackageManifest(name=None, namespace=None, client=None, teardown=True, timeout=240, privileged_client=None, yaml_file=None, delete_timeout=240, **kwargs)[source]
Bases:
NamespacedResource
- api_group = 'packages.operators.coreos.com'
ocp_resources.peer_authentication module
- class ocp_resources.peer_authentication.PeerAuthentication(name=None, namespace=None, client=None, teardown=True, timeout=240, privileged_client=None, yaml_file=None, delete_timeout=240, **kwargs)[source]
Bases:
NamespacedResource
Peer Authentication object.
- api_group = 'security.istio.io'
ocp_resources.persistent_volume module
- class ocp_resources.persistent_volume.PersistentVolume(name=None, client=None, teardown=True, timeout=240, privileged_client=None, yaml_file=None, delete_timeout=240, dry_run=None, node_selector=None, node_selector_labels=None, config_file=None, context=None, label=None, timeout_seconds=60, api_group=None, hash_log_data=True)[source]
Bases:
Resource
PersistentVolume object
- api_version = 'v1'
- property max_available_pvs
Returns the maximum number (int) of PVs which are in 'Available' state
ocp_resources.persistent_volume_claim module
- class ocp_resources.persistent_volume_claim.PersistentVolumeClaim(name=None, namespace=None, client=None, storage_class=None, accessmodes=None, volume_mode='Filesystem', size=None, hostpath_node=None, teardown=True, yaml_file=None, delete_timeout=240, pvlabel=None, **kwargs)[source]
Bases:
NamespacedResource
PersistentVolumeClaim object
- class AccessMode[source]
Bases:
object
AccessMode object.
- ROX = 'ReadOnlyMany'
- RWO = 'ReadWriteOnce'
- RWX = 'ReadWriteMany'
- api_version = 'v1'
- property prime_pvc
- property selected_node
- property use_populator
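As a usage sketch (the claim name, namespace, storage class, and size below are assumptions, not values from this reference), a claim is typically created as a context manager using the AccessMode constants above:

```python
# Sketch only: creating a PVC requires a live cluster.
# The documented AccessMode constants, mirrored as a plain dict.
ACCESS_MODES = {
    "ROX": "ReadOnlyMany",
    "RWO": "ReadWriteOnce",
    "RWX": "ReadWriteMany",
}


def create_demo_pvc():
    from ocp_resources.persistent_volume_claim import PersistentVolumeClaim

    with PersistentVolumeClaim(
        name="demo-pvc",           # assumed name
        namespace="demo-ns",       # assumed namespace
        storage_class="standard",  # assumed storage class
        accessmodes=PersistentVolumeClaim.AccessMode.RWO,
        volume_mode="Filesystem",
        size="1Gi",                # assumed size
    ) as pvc:
        pvc.wait()  # wait for the claim to exist
        return pvc.status
```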
ocp_resources.plan module
- class ocp_resources.plan.Plan(name=None, namespace=None, source_provider_name=None, source_provider_namespace=None, destination_provider_name=None, destination_provider_namespace=None, storage_map_name=None, storage_map_namespace=None, network_map_name=None, network_map_namespace=None, virtual_machines_list=None, target_namespace=None, warm_migration=False, pre_hook_name=None, pre_hook_namespace=None, after_hook_name=None, after_hook_namespace=None, client=None, teardown=True, yaml_file=None, delete_timeout=240, **kwargs)[source]
Bases:
NamespacedResource, MTV
Migration Toolkit for Virtualization (MTV) Plan resource.
- Parameters:
source_provider_name (str) – MTV Source Provider CR name.
source_provider_namespace (str) – MTV Source Provider CR namespace.
destination_provider_name (str) – MTV Destination Provider CR name.
destination_provider_namespace (str) – MTV Destination Provider CR namespace.
storage_map_name (str) – MTV StorageMap CR name.
storage_map_namespace (str) – MTV StorageMap CR namespace.
network_map_name (str) – MTV NetworkMap CR name.
network_map_namespace (str) – MTV NetworkMap CR namespace.
virtual_machines_list (list) – A list of dicts, each containing the name or id of a source virtual machine to migrate. Example: [{"id": "vm-id-x"}, {"name": "vm-name-x"}]
warm_migration (bool) – Warm (True) or Cold (False) migration. Default: False.
- api_group = 'forklift.konveyor.io'
ocp_resources.pod module
- class ocp_resources.pod.Pod(name=None, namespace=None, client=None, teardown=True, privileged_client=None, yaml_file=None, delete_timeout=240, **kwargs)[source]
Bases:
NamespacedResource
Pod object, inherited from Resource.
- class Status[source]
Bases:
Status
- CRASH_LOOPBACK_OFF = 'CrashLoopBackOff'
- ERR_IMAGE_PULL = 'ErrImagePull'
- IMAGE_PULL_BACK_OFF = 'ImagePullBackOff'
- api_version = 'v1'
- property containers
Get Pod containers
- Returns:
List of Pod containers
- Return type:
list
- execute(command, timeout=60, container=None, ignore_rc=False)[source]
Run command on Pod
- Parameters:
command (list) – Command to run.
timeout (int) – Time to wait for the command.
container (str) – Container name where to exec the command.
ignore_rc (bool) – If True ignore error rc from the shell and return out.
- Returns:
Command output.
- Return type:
str
- Raises:
ExecOnPodError – If the command failed.
- property ip
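execute() takes the command as a list. A sketch (the shell wrapper and file path are illustrative assumptions) that builds such a list and handles the documented ExecOnPodError:

```python
# Sketch only: execute() needs a running pod on a live cluster.
def shell_command(script):
    # Wrap a shell snippet into the list form execute() expects.
    return ["bash", "-c", script]


def read_pod_file(pod, path, container=None):
    from ocp_resources.pod import ExecOnPodError

    try:
        # execute() returns the command output as a str.
        return pod.execute(
            command=shell_command(f"cat {path}"),
            timeout=30,
            container=container,
        )
    except ExecOnPodError:
        # Raised when the command fails inside the pod.
        return None
```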
ocp_resources.priority_class module
ocp_resources.project module
- class ocp_resources.project.Project(name=None, client=None, teardown=True, timeout=240, privileged_client=None, yaml_file=None, delete_timeout=240, dry_run=None, node_selector=None, node_selector_labels=None, config_file=None, context=None, label=None, timeout_seconds=60, api_group=None, hash_log_data=True)[source]
Bases:
Resource
Project object. This is OpenShift's equivalent of a Namespace.
- api_group = 'project.openshift.io'
- clean_up()[source]
For debug, export SKIP_RESOURCE_TEARDOWN to skip resource teardown. Spaces inside the exported dict are significant.
Examples
- To skip teardown of all resources by kind:
export SKIP_RESOURCE_TEARDOWN="{Pod: {}}"
- To skip teardown of a resource by name (in all namespaces):
export SKIP_RESOURCE_TEARDOWN="{Pod: {<pod-name>:}}"
- To skip teardown of a resource by name and namespace:
export SKIP_RESOURCE_TEARDOWN="{Pod: {<pod-name>: <pod-namespace>}}"
- To skip teardown of multiple resources:
export SKIP_RESOURCE_TEARDOWN="{Namespace: {<namespace-name>:}, Pod: {<pod-name>: <pod-namespace>}}"
- class ocp_resources.project.ProjectRequest(name=None, client=None, teardown=True, timeout=240, yaml_file=None, delete_timeout=240, **kwargs)[source]
Bases:
Resource
ProjectRequest object. Resource which creates a Project and grants full access to the user who originated the request
- api_group = 'project.openshift.io'
- clean_up()[source]
For debug, export SKIP_RESOURCE_TEARDOWN to skip resource teardown. Spaces inside the exported dict are significant.
Examples
- To skip teardown of all resources by kind:
export SKIP_RESOURCE_TEARDOWN="{Pod: {}}"
- To skip teardown of a resource by name (in all namespaces):
export SKIP_RESOURCE_TEARDOWN="{Pod: {<pod-name>:}}"
- To skip teardown of a resource by name and namespace:
export SKIP_RESOURCE_TEARDOWN="{Pod: {<pod-name>: <pod-namespace>}}"
- To skip teardown of multiple resources:
export SKIP_RESOURCE_TEARDOWN="{Namespace: {<namespace-name>:}, Pod: {<pod-name>: <pod-namespace>}}"
ocp_resources.prometheus_rule module
- class ocp_resources.prometheus_rule.PrometheusRule(name=None, namespace=None, client=None, teardown=True, timeout=240, privileged_client=None, yaml_file=None, delete_timeout=240, **kwargs)[source]
Bases:
NamespacedResource
Prometheus Rule object.
- api_group = 'monitoring.coreos.com'
ocp_resources.provider module
- class ocp_resources.provider.Provider(name=None, namespace=None, provider_type=None, url=None, secret_name=None, secret_namespace=None, vddk_init_image=None, client=None, teardown=True, yaml_file=None, delete_timeout=240, **kwargs)[source]
Bases:
NamespacedResource, MTV
Migration Toolkit for Virtualization (MTV) Provider object.
- api_group = 'forklift.konveyor.io'
ocp_resources.replicaset module
- class ocp_resources.replicaset.ReplicaSet(name=None, namespace=None, client=None, teardown=True, timeout=240, privileged_client=None, yaml_file=None, delete_timeout=240, **kwargs)[source]
Bases:
NamespacedResource
ReplicaSet object.
- api_group = 'apps'
ocp_resources.resource module
- class ocp_resources.resource.KubeAPIVersion(vstring=None)[source]
Bases:
Version
Implement the Kubernetes API versioning scheme from https://kubernetes.io/docs/concepts/overview/kubernetes-api/#api-versioning
- component_re = re.compile('(\\d+ | [a-z]+)', re.VERBOSE)
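The component pattern above can be exercised standalone: with re.VERBOSE the whitespace inside the pattern is ignored, so it tokenizes a version string into alternating alphabetic and numeric runs:

```python
import re

# Same pattern as KubeAPIVersion.component_re above.
component_re = re.compile(r"(\d+ | [a-z]+)", re.VERBOSE)


def version_components(vstring):
    # Tokenize e.g. "v1beta1" into ["v", "1", "beta", "1"].
    return component_re.findall(vstring)
```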
- class ocp_resources.resource.NamespacedResource(name=None, namespace=None, client=None, teardown=True, timeout=240, privileged_client=None, yaml_file=None, delete_timeout=240, **kwargs)[source]
Bases:
Resource
Namespaced object, inherited from Resource.
- classmethod get(dyn_client=None, config_file=None, context=None, singular_name=None, raw=False, *args, **kwargs)[source]
Get resources
- Parameters:
dyn_client (DynamicClient) – Open connection to remote cluster
config_file (str) – Path to config file for connecting to remote cluster.
context (str) – Context name for connecting to remote cluster.
singular_name (str) – Resource kind (in lowercase), used when there are multiple matches for a resource.
raw (bool) – If True return raw object.
- Returns:
Generator of Resources of cls.kind
- Return type:
generator
- property instance
Get resource instance
- Returns:
openshift.dynamic.client.ResourceInstance
- class ocp_resources.resource.Resource(name=None, client=None, teardown=True, timeout=240, privileged_client=None, yaml_file=None, delete_timeout=240, dry_run=None, node_selector=None, node_selector_labels=None, config_file=None, context=None, label=None, timeout_seconds=60, api_group=None, hash_log_data=True)[source]
Bases:
object
Base class for API resources
- class ApiGroup[source]
Bases:
object
- ADMISSIONREGISTRATION_K8S_IO = 'admissionregistration.k8s.io'
- APIEXTENSIONS_K8S_IO = 'apiextensions.k8s.io'
- APIREGISTRATION_K8S_IO = 'apiregistration.k8s.io'
- APPS = 'apps'
- APP_KUBERNETES_IO = 'app.kubernetes.io'
- BATCH = 'batch'
- CDI_KUBEVIRT_IO = 'cdi.kubevirt.io'
- CLONE_KUBEVIRT_IO = 'clone.kubevirt.io'
- CLUSTER_OPEN_CLUSTER_MANAGEMENT_IO = 'cluster.open-cluster-management.io'
- CONFIG_OPENSHIFT_IO = 'config.openshift.io'
- CONSOLE_OPENSHIFT_IO = 'console.openshift.io'
- COORDINATION_K8S_IO = 'coordination.k8s.io'
- DATA_IMPORT_CRON_TEMPLATE_KUBEVIRT_IO = 'dataimportcrontemplate.kubevirt.io'
- DISCOVERY_K8S_IO = 'discovery.k8s.io'
- EVENTS_K8S_IO = 'events.k8s.io'
- EXPORT_KUBEVIRT_IO = 'export.kubevirt.io'
- FORKLIFT_KONVEYOR_IO = 'forklift.konveyor.io'
- HCO_KUBEVIRT_IO = 'hco.kubevirt.io'
- HOSTPATHPROVISIONER_KUBEVIRT_IO = 'hostpathprovisioner.kubevirt.io'
- IMAGE_OPENSHIFT_IO = 'image.openshift.io'
- IMAGE_REGISTRY = 'registry.redhat.io'
- INSTANCETYPE_KUBEVIRT_IO = 'instancetype.kubevirt.io'
- INTEGREATLY_ORG = 'integreatly.org'
- K8S_CNI_CNCF_IO = 'k8s.cni.cncf.io'
- K8S_V1_CNI_CNCF_IO = 'k8s.v1.cni.cncf.io'
- KUBERNETES_IO = 'kubernetes.io'
- KUBEVIRT_IO = 'kubevirt.io'
- KUBEVIRT_KUBEVIRT_IO = 'kubevirt.kubevirt.io'
- LITMUS_IO = 'litmuschaos.io'
- MACHINECONFIGURATION_OPENSHIFT_IO = 'machineconfiguration.openshift.io'
- MACHINE_OPENSHIFT_IO = 'machine.openshift.io'
- MAISTRA_IO = 'maistra.io'
- METALLB_IO = 'metallb.io'
- METRICS_K8S_IO = 'metrics.k8s.io'
- MIGRATIONS_KUBEVIRT_IO = 'migrations.kubevirt.io'
- MONITORING_COREOS_COM = 'monitoring.coreos.com'
- NETWORKADDONSOPERATOR_NETWORK_KUBEVIRT_IO = 'networkaddonsoperator.network.kubevirt.io'
- NETWORKING_ISTIO_IO = 'networking.istio.io'
- NETWORKING_K8S_IO = 'networking.k8s.io'
- NMSTATE_IO = 'nmstate.io'
- NODEMAINTENANCE_KUBEVIRT_IO = 'nodemaintenance.kubevirt.io'
- NODE_LABELLER_KUBEVIRT_IO = 'node-labeller.kubevirt.io'
- OBSERVABILITY_OPEN_CLUSTER_MANAGEMENT_IO = 'observability.open-cluster-management.io'
- OCS_OPENSHIFT_IO = 'ocs.openshift.io'
- OPERATORS_COREOS_COM = 'operators.coreos.com'
- OPERATORS_OPENSHIFT_IO = 'operators.openshift.io'
- OPERATOR_OPENSHIFT_IO = 'operator.openshift.io'
- OPERATOR_OPEN_CLUSTER_MANAGEMENT_IO = 'operator.open-cluster-management.io'
- OS_TEMPLATE_KUBEVIRT_IO = 'os.template.kubevirt.io'
- PACKAGES_OPERATORS_COREOS_COM = 'packages.operators.coreos.com'
- POLICY = 'policy'
- POOL_KUBEVIRT_IO = 'pool.kubevirt.io'
- PROJECT_OPENSHIFT_IO = 'project.openshift.io'
- RBAC_AUTHORIZATION_K8S_IO = 'rbac.authorization.k8s.io'
- REMEDIATION_MEDIK8S_IO = 'remediation.medik8s.io'
- RIPSAW_CLOUDBULLDOZER_IO = 'ripsaw.cloudbulldozer.io'
- ROUTE_OPENSHIFT_IO = 'route.openshift.io'
- SCHEDULING_K8S_IO = 'scheduling.k8s.io'
- SECURITY_ISTIO_IO = 'security.istio.io'
- SECURITY_OPENSHIFT_IO = 'security.openshift.io'
- SNAPSHOT_KUBEVIRT_IO = 'snapshot.kubevirt.io'
- SNAPSHOT_STORAGE_K8S_IO = 'snapshot.storage.k8s.io'
- SRIOVNETWORK_OPENSHIFT_IO = 'sriovnetwork.openshift.io'
- SSP_KUBEVIRT_IO = 'ssp.kubevirt.io'
- STORAGECLASS_KUBERNETES_IO = 'storageclass.kubernetes.io'
- STORAGE_K8S_IO = 'storage.k8s.io'
- SUBRESOURCES_KUBEVIRT_IO = 'subresources.kubevirt.io'
- TEKTONTASKS_KUBEVIRT_IO = 'tektontasks.kubevirt.io'
- TEKTON_DEV = 'tekton.dev'
- TEMPLATE_KUBEVIRT_IO = 'template.kubevirt.io'
- TEMPLATE_OPENSHIFT_IO = 'template.openshift.io'
- UPLOAD_CDI_KUBEVIRT_IO = 'upload.cdi.kubevirt.io'
- V2V_KUBEVIRT_IO = 'v2v.kubevirt.io'
- VELERO_IO = 'velero.io'
- VM_KUBEVIRT_IO = 'vm.kubevirt.io'
- class ApiVersion[source]
Bases:
object
- V1 = 'v1'
- V1ALPHA1 = 'v1alpha1'
- V1ALPHA3 = 'v1alpha3'
- V1BETA1 = 'v1beta1'
- class Condition[source]
Bases:
object
- AVAILABLE = 'Available'
- CREATED = 'Created'
- DEGRADED = 'Degraded'
- FAILING = 'Failing'
- PROGRESSING = 'Progressing'
- READY = 'Ready'
- RECONCILE_COMPLETE = 'ReconcileComplete'
- class Reason[source]
Bases:
object
- ALL_REQUIREMENTS_MET = 'AllRequirementsMet'
- INSTALL_SUCCEEDED = 'InstallSucceeded'
- UPGRADEABLE = 'Upgradeable'
- class Status[source]
Bases:
object
- COMPLETED = 'Completed'
- DELETING = 'Deleting'
- DEPLOYED = 'Deployed'
- ERROR = 'Error'
- FAILED = 'Failed'
- PENDING = 'Pending'
- READY = 'Ready'
- RUNNING = 'Running'
- SUCCEEDED = 'Succeeded'
- TERMINATING = 'Terminating'
- property api
- api_group = None
- api_request(method, action, url, **params)[source]
Handle API requests to resource.
- Parameters:
method (str) – Request method (GET/PUT etc.).
action (str) – Action to perform (stop/start/guestosinfo etc.).
url (str) – URL of resource.
- Returns:
response data
- Return type:
data(dict)
- api_version = None
- clean_up()[source]
For debug, export SKIP_RESOURCE_TEARDOWN to skip resource teardown. Spaces inside the exported dict are significant.
Examples
- To skip teardown of all resources by kind:
export SKIP_RESOURCE_TEARDOWN="{Pod: {}}"
- To skip teardown of a resource by name (in all namespaces):
export SKIP_RESOURCE_TEARDOWN="{Pod: {<pod-name>:}}"
- To skip teardown of a resource by name and namespace:
export SKIP_RESOURCE_TEARDOWN="{Pod: {<pod-name>: <pod-namespace>}}"
- To skip teardown of multiple resources:
export SKIP_RESOURCE_TEARDOWN="{Namespace: {<namespace-name>:}, Pod: {<pod-name>: <pod-namespace>}}"
- client_wait_deleted(timeout)[source]
Client-side wait until the resource is deleted
- Parameters:
timeout (int) – Time to wait for the resource.
- Raises:
TimeoutExpiredError – If resource still exists.
- create(wait=False)[source]
Create resource.
- Parameters:
wait (bool) – True to wait for resource status.
- Returns:
True if create succeeded, False otherwise.
- Return type:
bool
- Raises:
ValueMismatch – When body value doesn’t match class value
- deploy(wait=False)[source]
For debug, export REUSE_IF_RESOURCE_EXISTS to skip resource create. Spaces inside the exported dict are significant.
Examples
- To skip creation of all resources by kind:
export REUSE_IF_RESOURCE_EXISTS="{Pod: {}}"
- To skip creation of a resource by name (in all namespaces, or for non-namespaced resources):
export REUSE_IF_RESOURCE_EXISTS="{Pod: {<pod-name>:}}"
- To skip creation of a resource by name and namespace:
export REUSE_IF_RESOURCE_EXISTS="{Pod: {<pod-name>: <pod-namespace>}}"
- To skip creation of multiple resources:
export REUSE_IF_RESOURCE_EXISTS="{Namespace: {<namespace-name>:}, Pod: {<pod-name>: <pod-namespace>}}"
- events(name=None, label_selector=None, field_selector=None, resource_version=None, timeout=None)[source]
Retrieve K8s events.
- Parameters:
name (str) – event name
label_selector (str) – filter events by labels; comma separated string of key=value
field_selector (str) – filter events by fields; comma-separated string of key=value
resource_version (str) – filter events by their resource’s version
timeout (int) – timeout in seconds
- Returns:
list: event objects
- Example: reading all CSV Warning events with reason "AnEventReason":
pod = Pod(client=client, name="pod", namespace="my-namespace")
for event in pod.events(
field_selector="involvedObject.kind==ClusterServiceVersion,type==Warning,reason=AnEventReason",
timeout=10,
):
print(event.object)
- property exists
Whether self exists on the server
- full_api(**kwargs)[source]
Get resource API
- Keyword Arguments:
pretty –
_continue –
include_uninitialized –
field_selector –
label_selector –
limit –
resource_version –
timeout_seconds –
watch –
async_req –
- Returns:
Resource object.
- classmethod get(dyn_client=None, config_file=None, context=None, singular_name=None, exceptions_dict={<class 'urllib3.exceptions.MaxRetryError'>: [], <class 'ConnectionAbortedError'>: [], <class 'ConnectionResetError'>: [], <class 'kubernetes.dynamic.exceptions.InternalServerError'>: ['etcdserver: leader changed', 'etcdserver: request timed out', 'Internal error occurred: failed calling webhook', 'rpc error:'], <class 'kubernetes.dynamic.exceptions.ServerTimeoutError'>: [], <class 'kubernetes.dynamic.exceptions.ForbiddenError'>: ['context deadline exceeded']}, *args, **kwargs)[source]
Get resources
- Parameters:
dyn_client (DynamicClient) – Open connection to remote cluster.
config_file (str) – Path to config file for connecting to remote cluster.
context (str) – Context name for connecting to remote cluster.
singular_name (str) – Resource kind (in lowercase), used when there are multiple matches for a resource.
exceptions_dict (dict) – Exceptions dict for TimeoutSampler
- Returns:
Generator of Resources of cls.kind.
- Return type:
generator
- static get_all_cluster_resources(config_file=None, config_dict=None, context=None, *args, **kwargs)[source]
Get all cluster resources
- Parameters:
config_file (str) – path to a kubeconfig file.
config_dict (dict) – dict with kubeconfig configuration.
context (str) – name of the context to use.
*args (tuple) – args to pass to client.get()
**kwargs (dict) – kwargs to pass to client.get()
- Yields:
kubernetes.dynamic.resource.ResourceField – Cluster resource.
Example
- for resource in get_all_cluster_resources(label_selector="my-label=value"):
print(f"Resource: {resource}")
- property instance
Get resource instance
- Returns:
openshift.dynamic.client.ResourceInstance
- property keys_to_hash
Resource attributes list to hash in the logs.
The list should hold absolute key paths in resource dict.
- Example:
given a dict: {"spec": {"data": <value_to_hash>}} To hash the spec['data'] key pass: ["spec..data"]
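The '..'-separated key-path convention can be illustrated with a standalone helper (a simplified mirror of the masking behavior, not the library's implementation):

```python
def mask_key(data, path, placeholder="*****"):
    # Walk a ".."-separated key path and mask the value at its end,
    # e.g. mask_key({"spec": {"data": "secret"}}, "spec..data").
    keys = path.split("..")
    node = data
    for key in keys[:-1]:
        node = node[key]
    node[keys[-1]] = placeholder
    return data
```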
- kind = None
- property labels
Method to get labels for this resource
- Returns:
Representation of labels
- Return type:
openshift.dynamic.resource.ResourceField
- static retry_cluster_exceptions(func, exceptions_dict={<class 'urllib3.exceptions.MaxRetryError'>: [], <class 'ConnectionAbortedError'>: [], <class 'ConnectionResetError'>: [], <class 'kubernetes.dynamic.exceptions.InternalServerError'>: ['etcdserver: leader changed', 'etcdserver: request timed out', 'Internal error occurred: failed calling webhook', 'rpc error:'], <class 'kubernetes.dynamic.exceptions.ServerTimeoutError'>: [], <class 'kubernetes.dynamic.exceptions.ForbiddenError'>: ['context deadline exceeded']}, **kwargs)[source]
- singular_name = None
- property status
Get resource status
Status: Running, Scheduling, Pending, Unknown, CrashLoopBackOff
- Returns:
Status
- Return type:
str
- timeout_seconds = 60
- to_yaml()[source]
Get resource as YAML representation.
- Returns:
Resource YAML representation.
- Return type:
str
- update(resource_dict)[source]
Update resource with resource dict
- Parameters:
resource_dict – Resource dictionary
- update_replace(resource_dict)[source]
Replace resource metadata. Use this to remove an existing field (update() will only update existing fields).
- wait(timeout=240, sleep=1)[source]
Wait for resource
- Parameters:
timeout (int) – Time to wait for the resource.
sleep (int) – Time to wait between retries
- Raises:
TimeoutExpiredError – If the resource does not exist.
- wait_deleted(timeout=240)[source]
Wait until resource is deleted
- Parameters:
timeout (int) – Time to wait for the resource.
- Raises:
TimeoutExpiredError – If resource still exists.
- wait_for_condition(condition, status, timeout=300)[source]
Wait for Resource condition to be in the desired status.
- Parameters:
condition (str) – Condition to query.
status (str) – Expected condition status.
timeout (int) – Time to wait for the resource.
- Raises:
TimeoutExpiredError – If the Resource condition is not in the desired status.
- wait_for_status(status, timeout=240, stop_status=None, sleep=1)[source]
Wait for resource to be in status
- Parameters:
status (str) – Expected status.
timeout (int) – Time to wait for the resource.
stop_status (str) – Status which should stop the wait and fail.
- Raises:
TimeoutExpiredError – If the resource is not in the desired status.
- watcher(timeout, resource_version=None)[source]
Get resource for a given timeout.
- Parameters:
timeout (int) – Time to get conditions.
resource_version (str) – The version with which to filter results. Only events with a resource_version greater than this value will be returned
- Yields:
Event object with these keys – 'type': the type of event, such as "ADDED" or "DELETED". 'raw_object': a dict representing the watched object. 'object': a ResourceInstance wrapping raw_object.
- class ocp_resources.resource.ResourceEditor(patches, action='update', user_backups=None)[source]
Bases:
object
- property backups
Returns a dict {<Resource object>: <backup_as_dict>}, the backup kept for each resource edited
- property patches
Returns the patches dict provided in the constructor
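A usage sketch (the label key and value are assumptions): the patches passed to the constructor are applied when the editor is entered as a context manager and reverted from backups on exit:

```python
# Sketch only: patching requires live cluster resources.
def label_patch(resource, **labels):
    # Build the nested patch dict ResourceEditor expects.
    return {resource: {"metadata": {"labels": dict(labels)}}}


def relabel_temporarily(resource):
    from ocp_resources.resource import ResourceEditor

    # Patches are applied on entry; backups are restored on exit.
    with ResourceEditor(patches=label_patch(resource, demo="on")) as editor:
        return editor.backups
```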
- exception ocp_resources.resource.ValueMismatch[source]
Bases:
Exception
Raised when a value doesn't match the class value
- ocp_resources.resource.get_client(config_file=None, config_dict=None, context=None)[source]
Get a kubernetes client.
Pass either config_file or config_dict. If none of them are passed, client will be created from default OS kubeconfig (environment variable or .kube folder).
- Parameters:
config_file (str) – path to a kubeconfig file.
config_dict (dict) – dict with kubeconfig configuration.
context (str) – name of the context to use.
- Returns:
a kubernetes client.
- Return type:
DynamicClient
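get_client() combines naturally with the classmethod get() documented above. A sketch (the namespace is an assumption), plus a pure helper mirroring the documented config-source behavior:

```python
# Sketch only: listing resources needs a reachable cluster.
def list_pod_names(namespace):
    from ocp_resources.pod import Pod
    from ocp_resources.resource import get_client

    client = get_client()  # falls back to the default OS kubeconfig
    return [pod.name for pod in Pod.get(dyn_client=client, namespace=namespace)]


def config_source(config_file=None, config_dict=None):
    # Pure mirror of the documented behavior: pass either a file or a
    # dict; when neither is given, the default kubeconfig is used.
    if config_file:
        return "config_file"
    if config_dict:
        return "config_dict"
    return "default kubeconfig"
```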
ocp_resources.role module
- class ocp_resources.role.Role(name=None, namespace=None, client=None, rules=None, teardown=True, yaml_file=None, delete_timeout=240, **kwargs)[source]
Bases:
NamespacedResource
Role object.
- api_group = 'rbac.authorization.k8s.io'
ocp_resources.role_binding module
- class ocp_resources.role_binding.RoleBinding(name=None, namespace=None, client=None, subjects_kind=None, subjects_name=None, subjects_namespace=None, subjects_api_group=None, role_ref_kind=None, role_ref_name=None, teardown=True, yaml_file=None, delete_timeout=240, **kwargs)[source]
Bases:
NamespacedResource
RoleBinding object
- api_group = 'rbac.authorization.k8s.io'
ocp_resources.route module
- class ocp_resources.route.Route(name=None, namespace=None, client=None, service=None, destination_ca_cert=None, teardown=True, yaml_file=None, delete_timeout=240, **kwargs)[source]
Bases:
NamespacedResource
OpenShift Route object.
- api_group = 'route.openshift.io'
- property ca_cert
returns destinationCACertificate
- property exposed_service
returns the service the route is exposing
- property host
returns hostname that is exposing the service
- property termination
returns the TLS termination of the route (e.g. re-encrypt for a secured route)
ocp_resources.secret module
- class ocp_resources.secret.Secret(name=None, namespace=None, client=None, accesskeyid=None, secretkey=None, htpasswd=None, teardown=True, data_dict=None, string_data=None, yaml_file=None, delete_timeout=240, type=None, **kwargs)[source]
Bases:
NamespacedResource
Secret object.
- api_version = 'v1'
- property certificate_not_after
- property certificate_not_before
- property keys_to_hash
Resource attributes list to hash in the logs.
The list should hold absolute key paths in resource dict.
- Example:
given a dict: {"spec": {"data": <value_to_hash>}} To hash the spec['data'] key pass: ["spec..data"]
ocp_resources.security_context_constraints module
- class ocp_resources.security_context_constraints.SecurityContextConstraints(name=None, client=None, teardown=True, timeout=240, privileged_client=None, yaml_file=None, delete_timeout=240, dry_run=None, node_selector=None, node_selector_labels=None, config_file=None, context=None, label=None, timeout_seconds=60, api_group=None, hash_log_data=True)[source]
Bases:
Resource
Security Context Constraints object.
- api_group = 'security.openshift.io'
ocp_resources.service module
- class ocp_resources.service.Service(name=None, namespace=None, client=None, teardown=True, timeout=240, privileged_client=None, yaml_file=None, delete_timeout=240, **kwargs)[source]
Bases:
NamespacedResource
OpenShift Service object.
- class Type[source]
Bases:
object
- CLUSTER_IP = 'ClusterIP'
- LOAD_BALANCER = 'LoadBalancer'
- NODE_PORT = 'NodePort'
- api_version = 'v1'
ocp_resources.service_account module
- class ocp_resources.service_account.ServiceAccount(name=None, namespace=None, client=None, teardown=True, timeout=240, privileged_client=None, yaml_file=None, delete_timeout=240, **kwargs)[source]
Bases:
NamespacedResource
Service Account object
- api_version = 'v1'
ocp_resources.service_mesh_control_plane module
- class ocp_resources.service_mesh_control_plane.ServiceMeshControlPlane(name=None, namespace=None, client=None, teardown=True, timeout=240, privileged_client=None, yaml_file=None, delete_timeout=240, **kwargs)[source]
Bases:
NamespacedResource
Service Mesh Control Plane object.
- api_group = 'maistra.io'
ocp_resources.service_mesh_member_roll module
- class ocp_resources.service_mesh_member_roll.ServiceMeshMemberRoll(name=None, namespace=None, client=None, teardown=True, timeout=240, privileged_client=None, yaml_file=None, delete_timeout=240, **kwargs)[source]
Bases:
NamespacedResource
Service Mesh Member Roll object.
- api_group = 'maistra.io'
ocp_resources.service_monitor module
- class ocp_resources.service_monitor.ServiceMonitor(name=None, namespace=None, client=None, teardown=True, timeout=240, privileged_client=None, yaml_file=None, delete_timeout=240, **kwargs)[source]
Bases:
NamespacedResource
Service Monitor object.
- api_group = 'monitoring.coreos.com'
ocp_resources.sriov_network module
- class ocp_resources.sriov_network.SriovNetwork(name=None, namespace=None, network_namespace=None, client=None, resource_name=None, vlan=None, ipam=None, teardown=True, yaml_file=None, delete_timeout=240, macspoofchk=None, **kwargs)[source]
Bases:
NamespacedResource
SriovNetwork object.
- api_group = 'sriovnetwork.openshift.io'
ocp_resources.sriov_network_node_policy module
- class ocp_resources.sriov_network_node_policy.SriovNetworkNodePolicy(name=None, namespace=None, pf_names=None, root_devices=None, num_vfs=None, resource_name=None, client=None, priority=None, mtu=None, node_selector=None, teardown=True, yaml_file=None, delete_timeout=240, **kwargs)[source]
Bases:
NamespacedResource
SriovNetworkNodePolicy object.
- api_group = 'sriovnetwork.openshift.io'
ocp_resources.sriov_network_node_state module
- class ocp_resources.sriov_network_node_state.SriovNetworkNodeState(name=None, namespace=None, client=None, teardown=True, timeout=240, privileged_client=None, yaml_file=None, delete_timeout=240, **kwargs)[source]
Bases:
NamespacedResource
SriovNetworkNodeState object.
- api_group = 'sriovnetwork.openshift.io'
- property interfaces
ocp_resources.ssp module
- class ocp_resources.ssp.SSP(name=None, namespace=None, client=None, teardown=True, timeout=240, privileged_client=None, yaml_file=None, delete_timeout=240, **kwargs)[source]
Bases:
NamespacedResource
SSP object.
- api_group = 'ssp.kubevirt.io'
ocp_resources.storage_class module
- class ocp_resources.storage_class.StorageClass(provisioner=None, reclaim_policy=None, volume_binding_mode=None, allow_volume_expansion=None, parameters=None, allowed_topologies=None, mount_options=None, **kwargs)[source]
Bases:
Resource
StorageClass object.
- class Annotations[source]
Bases:
object
- IS_DEFAULT_CLASS = 'storageclass.kubernetes.io/is-default-class'
- class Provisioner[source]
Bases:
object
- CEPH_RBD = 'openshift-storage.rbd.csi.ceph.com'
- HOSTPATH = 'kubevirt.io/hostpath-provisioner'
- HOSTPATH_CSI = 'kubevirt.io.hostpath-provisioner'
- NO_PROVISIONER = 'kubernetes.io/no-provisioner'
- class Types[source]
Bases:
object
These are names of StorageClass instances when you run oc get sc
- CEPH_RBD = 'ocs-storagecluster-ceph-rbd'
- HOSTPATH = 'hostpath-provisioner'
- HOSTPATH_CSI = 'hostpath-csi'
- LOCAL_BLOCK = 'local-block'
- NFS = 'nfs'
- class VolumeBindingMode[source]
Bases:
object
VolumeBindingMode indicates how PersistentVolumeClaims should be provisioned and bound. When unset, Immediate is used. With "Immediate", using the "node aware" hostpath-provisioner requires adding a ProvisionOnNode annotation to the PVC. To use hostpath-provisioner without specifying a node on the PVC, hpp supports "WaitForFirstConsumer" since CNV-2.2.
- Immediate = 'Immediate'
- WaitForFirstConsumer = 'WaitForFirstConsumer'
- api_group = 'storage.k8s.io'
ocp_resources.storage_map module
- class ocp_resources.storage_map.StorageMap(name=None, namespace=None, source_provider_name=None, source_provider_namespace=None, destination_provider_name=None, destination_provider_namespace=None, mapping=None, client=None, teardown=True, yaml_file=None, delete_timeout=240, **kwargs)[source]
Bases:
NamespacedResource, MTV
Migration Toolkit for Virtualization (MTV) StorageMap object.
- Parameters:
source_provider_name (str) – MTV Source Provider CR name.
source_provider_namespace (str) – MTV Source Provider CR namespace.
destination_provider_name (str) – MTV Destination Provider CR name.
destination_provider_namespace (str) – MTV Destination Provider CR namespace.
mapping (dict) – Storage resources mapping. Example:
[
{"destination": {"storageClass": "nfs", "accessMode": "ReadWriteMany", "volumeMode": "Filesystem"},
"source": {"id": "datastore-11"}},
{"destination": {"storageClass": "hss", "accessMode": "ReadWriteMany", "volumeMode": "Block"},
"source": {"name": "MyDatastore"}},
]
- api_group = 'forklift.konveyor.io'
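A mapping like the one above is just a list of dicts, so it can be assembled programmatically before being passed as the mapping argument (a sketch; the storage class names and datastore identifiers are illustrative):

```python
# Sketch of an MTV storage mapping: each entry pairs a source datastore
# (identified by id or name) with a destination storage class.
def build_storage_mapping(pairs):
    """Build a StorageMap-style mapping list from (source, destination) pairs."""
    mapping = []
    for source, destination in pairs:
        mapping.append({"source": source, "destination": destination})
    return mapping

mapping = build_storage_mapping([
    ({"id": "datastore-11"},
     {"storageClass": "nfs", "accessMode": "ReadWriteMany", "volumeMode": "Filesystem"}),
    ({"name": "MyDatastore"},
     {"storageClass": "hss", "accessMode": "ReadWriteMany", "volumeMode": "Block"}),
])
```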
ocp_resources.subscription module
- class ocp_resources.subscription.Subscription(name=None, namespace=None, client=None, source=None, source_namespace=None, package_name=None, install_plan_approval=None, channel=None, starting_csv=None, node_selector=None, tolerations=None, teardown=True, yaml_file=None, delete_timeout=240, **kwargs)[source]
Bases:
NamespacedResource
- api_group = 'operators.coreos.com'
ocp_resources.template module
- class ocp_resources.template.Template(name=None, namespace=None, client=None, teardown=True, timeout=240, privileged_client=None, yaml_file=None, delete_timeout=240, **kwargs)[source]
Bases:
NamespacedResource
- class Annotations[source]
Bases:
object
- DEPRECATED = 'template.kubevirt.io/deprecated'
- PROVIDER = 'template.kubevirt.io/provider'
- PROVIDER_SUPPORT_LEVEL = 'template.kubevirt.io/provider-support-level'
- PROVIDER_URL = 'template.kubevirt.io/provider-url'
- class Flavor[source]
Bases:
object
- LARGE = 'large'
- MEDIUM = 'medium'
- SMALL = 'small'
- TINY = 'tiny'
- class Labels[source]
Bases:
object
- BASE = 'template.kubevirt.io/type=base'
- FLAVOR = 'flavor.template.kubevirt.io'
- OS = 'os.template.kubevirt.io'
- WORKLOAD = 'workload.template.kubevirt.io'
- class VMAnnotations[source]
Bases:
object
- FLAVOR = 'vm.kubevirt.io/flavor'
- OS = 'vm.kubevirt.io/os'
- WORKLOAD = 'vm.kubevirt.io/workload'
- class Workload[source]
Bases:
object
- DESKTOP = 'desktop'
- HIGHPERFORMANCE = 'highperformance'
- SAPHANA = 'saphana'
- SERVER = 'server'
- api_group = 'template.openshift.io'
- singular_name = 'template'
ocp_resources.upload_token_request module
- class ocp_resources.upload_token_request.UploadTokenRequest(name=None, namespace=None, client=None, pvc_name=None, teardown=True, yaml_file=None, delete_timeout=240, **kwargs)[source]
Bases:
NamespacedResource
OpenShift UploadTokenRequest object.
- api_group = 'upload.cdi.kubevirt.io'
ocp_resources.utils module
- class ocp_resources.utils.TimeoutSampler(wait_timeout, sleep, func, exceptions_dict=None, print_log=True, **func_kwargs)[source]
Bases:
object
Samples the function output.
This is a generator that first yields the output of func. After each yield, it either raises a TimeoutExpiredError (once wait_timeout has elapsed) or sleeps for sleep seconds before sampling again.
Yielding the output allows you to handle every value as you wish.
exceptions_dict should be in the following format:
{
    exception0: [exception0_msg0],
    exception1: [exception1_msg0, exception1_msg1],
    exception2: [],
}
- If an exception is raised within func:
- Example exception inheritance:
class AExampleError(Exception): ...
class BExampleError(AExampleError): ...
- The raised exception’s class will fall into one of three categories:
- An exception class specifically declared in exceptions_dict
exceptions_dict: {BExampleError: []} raise: BExampleError result: continue
- A child class inherited from an exception class in exceptions_dict
exceptions_dict: {AExampleError: []} raise: BExampleError result: continue
- Any other exception class: the exception is always re-raised
exceptions_dict: {BExampleError: []} raise: AExampleError result: raise
- Parameters:
wait_timeout (int) – Time in seconds to wait for func to return a value equating to True
sleep (int) – Time in seconds between calls to func
func (Callable) – to be wrapped by TimeoutSampler
exceptions_dict (dict) – Exception handling definition
print_log (bool) – Print elapsed time to log
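The sampling loop and the exceptions_dict matching rules above can be sketched in plain Python. This is a simplified stand-in for illustration, not the library's implementation (TimeoutExpired here stands in for ocp_resources' TimeoutExpiredError):

```python
import time

class TimeoutExpired(Exception):
    """Stand-in for ocp_resources' TimeoutExpiredError."""

def timeout_sampler(wait_timeout, sleep, func, exceptions_dict=None, **func_kwargs):
    """Yield func() output until wait_timeout elapses, swallowing allowed exceptions."""
    exceptions_dict = exceptions_dict or {}
    deadline = time.monotonic() + wait_timeout
    while True:
        try:
            yield func(**func_kwargs)
        except tuple(exceptions_dict) as exc:
            # An empty message list allows any message; otherwise a substring must match.
            allowed = [m for cls, msgs in exceptions_dict.items()
                       if isinstance(exc, cls) for m in (msgs or [None])]
            if not any(m is None or m in str(exc) for m in allowed):
                raise
        if time.monotonic() >= deadline:
            raise TimeoutExpired(f"Timed out after {wait_timeout}s")
        time.sleep(sleep)
```

Because isinstance() is used for matching, a raised child class of a listed exception is also swallowed, mirroring the inheritance behavior described above.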
- class ocp_resources.utils.TimeoutWatch(timeout)[source]
Bases:
object
A time counter used to determine how much time remains in a given interval, measured from its start
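A minimal sketch of such a countdown helper (assumed behavior, for illustration):

```python
import time

class TimeoutWatch:
    """Count down from a starting timeout; remaining_time() reports what is left."""
    def __init__(self, timeout):
        self.timeout = timeout
        self.start_time = time.monotonic()

    def remaining_time(self):
        # Time left in the interval; goes negative once the interval has elapsed.
        return self.timeout - (time.monotonic() - self.start_time)
```

Passing remaining_time() as the timeout of each subsequent wait lets several sequential waits share one overall time budget.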
- ocp_resources.utils.skip_existing_resource_creation_teardown(resource, export_str, user_exported_args, check_exists=True)[source]
- Parameters:
resource (Resource) – Resource to match against.
export_str (str) – The user export str. (REUSE_IF_RESOURCE_EXISTS or SKIP_RESOURCE_TEARDOWN)
user_exported_args (str) – Value of export_str. (os.environ.get)
check_exists (bool) – Check if resource exists before return. (applied only for REUSE_IF_RESOURCE_EXISTS)
- Returns:
The matched resource, or None if there is no match.
- Return type:
Resource or None
ocp_resources.validating_webhook_config module
- class ocp_resources.validating_webhook_config.ValidatingWebhookConfiguration(name=None, client=None, teardown=True, timeout=240, privileged_client=None, yaml_file=None, delete_timeout=240, dry_run=None, node_selector=None, node_selector_labels=None, config_file=None, context=None, label=None, timeout_seconds=60, api_group=None, hash_log_data=True)[source]
Bases:
Resource
ValidatingWebhookConfiguration object.
- api_group = 'admissionregistration.k8s.io'
ocp_resources.virtual_machine module
- class ocp_resources.virtual_machine.VirtualMachine(name=None, namespace=None, client=None, body=None, teardown=True, privileged_client=None, yaml_file=None, delete_timeout=240, **kwargs)[source]
Bases:
NamespacedResource
Virtual Machine object, inherited from NamespacedResource. Implements actions start / stop / status / wait for VM status / is running
- class RunStrategy[source]
Bases:
object
- ALWAYS = 'Always'
- HALTED = 'Halted'
- MANUAL = 'Manual'
- RERUNONFAILURE = 'RerunOnFailure'
- class Status[source]
Bases:
Status
- CRASH_LOOPBACK_OFF = 'CrashLoopBackOff'
- DATAVOLUME_ERROR = 'DataVolumeError'
- ERROR_PVC_NOT_FOUND = 'ErrorPvcNotFound'
- ERROR_UNSCHEDULABLE = 'ErrorUnschedulable'
- ERR_IMAGE_PULL = 'ErrImagePull'
- IMAGE_PULL_BACK_OFF = 'ImagePullBackOff'
- MIGRATING = 'Migrating'
- PAUSED = 'Paused'
- PROVISIONING = 'Provisioning'
- STARTING = 'Starting'
- STOPPED = 'Stopped'
- STOPPING = 'Stopping'
- WAITING_FOR_VOLUME_BINDING = 'WaitingForVolumeBinding'
- api_group = 'kubevirt.io'
- api_request(method, action, **params)[source]
Handle API requests to resource.
- Parameters:
method (str) – Request method (GET/PUT etc.).
action (str) – Action to perform (stop/start/guestosinfo etc.).
url (str) – URL of resource.
- Returns:
response data
- Return type:
data(dict)
- property printable_status
Get VM printableStatus
- Returns:
VM printableStatus if VM.status.printableStatus else None
- property ready
Get VM status
- Returns:
True if Running else None
- property vmi
Get VMI
- Returns:
VMI
- Return type:
VirtualMachineInstance
- wait_for_ready_status(status, timeout=240, sleep=1)[source]
Wait for the VM ready status to reach the desired value
- Parameters:
status (any) – True for a running VM, None for a stopped VM.
timeout (int) – Time to wait for the resource.
- Raises:
TimeoutExpiredError – If timeout reached.
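The ready-status convention (True for a running VM, None for a stopped one) can be pictured with a small stand-in that scans successive status samples; the stubbed boot_sequence below is illustrative, not the library code:

```python
def wait_for_ready_status(samples, status):
    """Scan successive 'ready' samples until one equals the desired status.

    status=True waits for a running VM; status=None waits for a stopped VM.
    Returns the number of samples consumed, or raises TimeoutError
    (standing in for the library's TimeoutExpiredError) if exhausted.
    """
    for attempt, sample in enumerate(samples, start=1):
        if sample == status:
            return attempt
    raise TimeoutError("VM did not reach the desired ready status")

# A VM booting: ready is None while stopped/starting, True once running.
boot_sequence = [None, None, True]
```

In typical use the real method is called as vm.wait_for_ready_status(status=True) after vm.start(), and with status=None after vm.stop().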
ocp_resources.virtual_machine_Instance_replica_set module
- class ocp_resources.virtual_machine_Instance_replica_set.VirtualMachineInstanceReplicaSet(name=None, namespace=None, client=None, teardown=True, timeout=240, privileged_client=None, yaml_file=None, delete_timeout=240, **kwargs)[source]
Bases:
NamespacedResource
VirtualMachineInstanceReplicaSet object.
- api_group = 'kubevirt.io'
ocp_resources.virtual_machine_import module
- class ocp_resources.virtual_machine_import.ResourceMapping(name=None, namespace=None, mapping=None, client=None, teardown=True, yaml_file=None, **kwargs)[source]
Bases:
NamespacedResource
ResourceMapping object.
- api_group = 'v2v.kubevirt.io'
- class ocp_resources.virtual_machine_import.VirtualMachineImport(name=None, namespace=None, provider_credentials_secret_name=None, provider_type=None, provider_credentials_secret_namespace=None, client=None, teardown=True, vm_id=None, vm_name=None, cluster_id=None, cluster_name=None, target_vm_name=None, start_vm=False, provider_mappings=None, resource_mapping_name=None, resource_mapping_namespace=None, warm=False, finalize_date=None, privileged_client=None, yaml_file=None, delete_timeout=240, **kwargs)[source]
Bases:
NamespacedResource
Virtual Machine Import object, inherited from NamespacedResource.
- class Condition[source]
Bases:
Condition
- MAPPING_RULES_VERIFIED = 'MappingRulesVerified'
- PROCESSING = 'Processing'
- SUCCEEDED = 'Succeeded'
- VALID = 'Valid'
- class MappingRulesConditionReason[source]
Bases:
object
Mapping rules verified condition reason object
- MAPPING_COMPLETED = 'MappingRulesVerificationCompleted'
- MAPPING_FAILED = 'MappingRulesVerificationFailed'
- MAPPING_REPORTED_WARNINGS = 'MappingRulesVerificationReportedWarnings'
- class ProcessingConditionReason[source]
Bases:
object
Processing condition reason object
- COMPLETED = 'ProcessingCompleted'
- COPYING_DISKS = 'CopyingDisks'
- CREATING_TARGET_VM = 'CreatingTargetVM'
- FAILED = 'ProcessingFailed'
- class SucceededConditionReason[source]
Bases:
object
Succeeded condition reason object
- DATAVOLUME_CREATION_FAILED = 'DataVolumeCreationFailed'
- VALIDATION_FAILED = 'ValidationFailed'
- VIRTUAL_MACHINE_READY = 'VirtualMachineReady'
- VIRTUAL_MACHINE_RUNNING = 'VirtualMachineRunning'
- VMTEMPLATE_MATCHING_FAILED = 'VMTemplateMatchingFailed'
- VM_CREATION_FAILED = 'VMCreationFailed'
- class ValidConditionReason[source]
Bases:
object
Valid condition reason object
- INCOMPLETE_MAPPING_RULES = 'IncompleteMappingRules'
- RESOURCE_MAPPING_NOT_FOUND = 'ResourceMappingNotFound'
- SECRET_NOT_FOUND = 'SecretNotFound'
- SOURCE_VM_NOT_FOUND = 'SourceVMNotFound'
- UNINITIALIZED_PROVIDER = 'UninitializedProvider'
- VALIDATION_COMPLETED = 'ValidationCompleted'
- api_group = 'v2v.kubevirt.io'
- property vm
- wait(timeout=600, cond_reason='VirtualMachineReady', cond_status='True', cond_type='Succeeded')[source]
Wait for the resource to reach the given condition
- Parameters:
timeout (int) – Time to wait for the resource.
sleep (int) – Time to wait between retries
- Raises:
TimeoutExpiredError – If the resource does not reach the expected condition before the timeout.
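The wait above matches a status condition by type, status, and reason. The matching itself can be sketched over a list of Kubernetes-style condition dicts (illustrative data, not the library's implementation):

```python
def find_condition(conditions, cond_type, cond_status, cond_reason):
    """Return the first condition matching type, status, and reason, else None."""
    for condition in conditions:
        if (condition.get("type") == cond_type
                and condition.get("status") == cond_status
                and condition.get("reason") == cond_reason):
            return condition
    return None

# Conditions as they might appear on a completed VirtualMachineImport.
conditions = [
    {"type": "Valid", "status": "True", "reason": "ValidationCompleted"},
    {"type": "Succeeded", "status": "True", "reason": "VirtualMachineReady"},
]
```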
ocp_resources.virtual_machine_import_configs module
ocp_resources.virtual_machine_instance module
- class ocp_resources.virtual_machine_instance.VirtualMachineInstance(name=None, namespace=None, client=None, privileged_client=None, yaml_file=None, delete_timeout=240, **kwargs)[source]
Bases:
NamespacedResource
Virtual Machine Instance object, inherited from NamespacedResource.
- api_group = 'kubevirt.io'
- api_request(method, action, **params)[source]
Handle API requests to resource.
- Parameters:
method (str) – Request method (GET/PUT etc.).
action (str) – Action to perform (stop/start/guestosinfo etc.).
url (str) – URL of resource.
- Returns:
response data
- Return type:
data(dict)
- get_dommemstat()[source]
Get virtual machine domain memory stats. See: https://libvirt.org/manpages/virsh.html#dommemstat
- Returns:
VMI domain memory stats as string
- Return type:
String
- get_domstate()[source]
Get virtual machine instance Status.
This is a workaround; VM/VMI does not yet show status/phase == Paused. Bug: https://bugzilla.redhat.com/show_bug.cgi?id=1805178
- Returns:
VMI Status as string
- Return type:
String
- get_vmi_active_condition()[source]
A VMI may have multiple conditions; the active one is the one with ‘lastTransitionTime’
- get_xml()[source]
Get virtual machine instance XML
- Returns:
VMI XML as a multi-line string
- Return type:
xml_output(string)
- property guest_fs_info
- property guest_os_info
- property guest_user_info
- property interfaces
- property is_virt_launcher_pod_root
Check if Virt Launcher Pod is Root
- Returns:
True if Virt Launcher Pod is Root.
- Return type:
Bool
- property os_version
- property virt_handler_pod
- property virt_launcher_pod
- property virt_launcher_pod_hypervisor_connection_uri
Get Virt Launcher Pod Hypervisor Connection URI
Required to connect to Hypervisor for Non-Root Virt-Launcher Pod.
- Returns:
Hypervisor Connection URI
- Return type:
String
- property virt_launcher_pod_user_uid
Get Virt Launcher Pod User UID value
- Returns:
Virt Launcher Pod UID value
- Return type:
Int
- wait_for_pause_status(pause, timeout=240)[source]
Wait for Virtual Machine Instance to be paused / unpaused. Paused status is checked in libvirt and in the VMI conditions.
- Parameters:
pause (bool) – True for paused, False for unpause
timeout (int) – Time to wait for the resource.
- Raises:
TimeoutExpiredError – If the expected pause status is not reached before the timeout.
- wait_until_running(timeout=240, logs=True, stop_status=None)[source]
Wait until VMI is running
- Parameters:
timeout (int) – Time to wait for VMI.
logs (bool) – True to extract logs from the VMI pod and from the VMI.
stop_status (str) – Status that should stop the wait and fail it.
- Raises:
TimeoutExpiredError – If VMI failed to run.
- property xml_dict
Get virtual machine instance XML as dict
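Turning domain XML into nested dicts can be sketched with the standard library (a rough stand-in; the real property relies on a helper and preserves more detail, such as element attributes, which this sketch ignores):

```python
import xml.etree.ElementTree as ET

def xml_to_dict(element):
    """Recursively convert an ElementTree element into nested dicts.

    Leaf elements become their text; repeated sibling tags collapse into a list.
    Attributes are ignored in this sketch.
    """
    children = list(element)
    if not children:
        return element.text
    result = {}
    for child in children:
        value = xml_to_dict(child)
        if child.tag in result:
            if not isinstance(result[child.tag], list):
                result[child.tag] = [result[child.tag]]
            result[child.tag].append(value)
        else:
            result[child.tag] = value
    return result

# A toy fragment shaped like libvirt domain XML (illustrative only).
domain = ET.fromstring("<domain><name>vm-a</name><vcpu>2</vcpu></domain>")
```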
ocp_resources.virtual_machine_instance_migration module
- class ocp_resources.virtual_machine_instance_migration.VirtualMachineInstanceMigration(name=None, namespace=None, vmi=None, client=None, teardown=True, yaml_file=None, delete_timeout=240, **kwargs)[source]
Bases:
NamespacedResource
- api_group = 'kubevirt.io'
ocp_resources.virtual_machine_instance_preset module
- class ocp_resources.virtual_machine_instance_preset.VirtualMachineInstancePreset(name=None, namespace=None, client=None, teardown=True, timeout=240, privileged_client=None, yaml_file=None, delete_timeout=240, **kwargs)[source]
Bases:
NamespacedResource
VirtualMachineInstancePreset object.
- api_group = 'kubevirt.io'
ocp_resources.virtual_machine_restore module
- class ocp_resources.virtual_machine_restore.VirtualMachineRestore(name=None, namespace=None, vm_name=None, snapshot_name=None, client=None, teardown=True, yaml_file=None, delete_timeout=240, **kwargs)[source]
Bases:
NamespacedResource
VirtualMachineRestore object.
- api_group = 'snapshot.kubevirt.io'
- wait_complete(status=True, timeout=240)[source]
Wait for VirtualMachineRestore to be in status complete
- Parameters:
status – Expected status: True for a completed restore operation, False otherwise.
timeout (int) – Time to wait.
- Raises:
TimeoutExpiredError – If timeout reached.
- wait_restore_done(timeout=240)[source]
Wait for the restore to be done. This checks two conditions: the restore status is complete and the VM status restoreInProgress is None.
- Parameters:
timeout (int) – Time to wait.
- Raises:
TimeoutExpiredError – If timeout reached.
ocp_resources.virtual_machine_snapshot module
- class ocp_resources.virtual_machine_snapshot.VirtualMachineSnapshot(name=None, namespace=None, vm_name=None, client=None, teardown=True, yaml_file=None, delete_timeout=240, **kwargs)[source]
Bases:
NamespacedResource
VirtualMachineSnapshot object.
- api_group = 'snapshot.kubevirt.io'
- wait_ready_to_use(status=True, timeout=240)[source]
Wait for VirtualMachineSnapshot to be in readyToUse status
- Parameters:
status – Expected status: True for a ready to use VirtualMachineSnapshot, False otherwise.
timeout (int) – Time to wait for the resource.
- Raises:
TimeoutExpiredError – If timeout reached.
- wait_snapshot_done(timeout=240)[source]
Wait for the snapshot to be done. This checks two conditions: the snapshot status is readyToUse and the VM status snapshotInProgress is None.
- Parameters:
timeout (int) – Time to wait.
- Raises:
TimeoutExpiredError – If timeout reached.
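The two-condition check described above (the snapshot is readyToUse and the VM no longer reports snapshotInProgress) can be sketched as a single predicate (illustrative, not the library code):

```python
def snapshot_done(snapshot_status, vm_status):
    """True when the snapshot is ready to use and the VM no longer reports
    a snapshot in progress -- the two conditions checked by wait_snapshot_done."""
    return (snapshot_status.get("readyToUse") is True
            and vm_status.get("snapshotInProgress") is None)
```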
ocp_resources.virtual_service module
- class ocp_resources.virtual_service.VirtualService(name=None, namespace=None, client=None, teardown=True, timeout=240, privileged_client=None, yaml_file=None, delete_timeout=240, **kwargs)[source]
Bases:
NamespacedResource
Virtual Service object.
- api_group = 'networking.istio.io'
ocp_resources.volume_snapshot module
- class ocp_resources.volume_snapshot.VolumeSnapshot(name=None, namespace=None, client=None, teardown=True, timeout=240, privileged_client=None, yaml_file=None, delete_timeout=240, **kwargs)[source]
Bases:
NamespacedResource
VolumeSnapshot object.
- api_group = 'snapshot.storage.k8s.io'
ocp_resources.volume_snapshot_class module
- class ocp_resources.volume_snapshot_class.VolumeSnapshotClass(name=None, client=None, teardown=True, timeout=240, privileged_client=None, yaml_file=None, delete_timeout=240, dry_run=None, node_selector=None, node_selector_labels=None, config_file=None, context=None, label=None, timeout_seconds=60, api_group=None, hash_log_data=True)[source]
Bases:
Resource
VolumeSnapshotClass object.
- api_group = 'snapshot.storage.k8s.io'