Expectation
- class great_expectations.expectations.expectation.Expectation(configuration: Optional[great_expectations.core.expectation_configuration.ExpectationConfiguration] = None)#
Base class for all Expectations.
- Expectation classes must have the following attributes set:
domain_keys: a tuple of the keys used to determine the domain of the expectation.
success_keys: a tuple of the keys used to determine the success of the expectation.
In some cases, subclasses of Expectation (such as BatchExpectation) can inherit these properties from their parent class.
They may optionally override runtime_keys and default_kwarg_values, and may optionally set an explicit value for expectation_type.
runtime_keys lists the keys that can be used to control output but will not affect the actual success value of the expectation (such as result_format).
default_kwarg_values is a dictionary that will be used to fill unspecified kwargs from the Expectation Configuration.
- Expectation classes must implement the following:
_validate
get_validation_dependencies
In some cases, subclasses of Expectation, such as ColumnMapExpectation, will already have correct implementations that may simply be inherited.
- Additionally, they may provide implementations of:
validate_configuration, which should raise an error if the configuration will not be usable for the Expectation
Data Docs rendering methods decorated with the @renderer decorator.
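The contract described above can be modeled without the framework itself. The sketch below is an illustrative, simplified stand-in, not the real Great Expectations implementation: attribute names (domain_keys, success_keys, runtime_keys, default_kwarg_values, _validate) mirror the documented API, while the class name and the range-check logic are invented for demonstration.

```python
# Simplified, framework-free model of the Expectation contract.
# Not real GX code; names mirror the documented attributes only.

class ToyExpectation:
    # Keys that determine the domain the expectation applies to.
    domain_keys = ("column",)
    # Keys that determine whether the expectation succeeds.
    success_keys = ("min_value", "max_value")
    # Keys that control output without affecting success (cf. result_format).
    runtime_keys = ("result_format",)
    # Defaults used to fill kwargs missing from the configuration.
    default_kwarg_values = {"min_value": None, "max_value": None}

    def __init__(self, **kwargs):
        # Unspecified kwargs are filled from default_kwarg_values.
        self.kwargs = {**self.default_kwarg_values, **kwargs}

    def _validate(self, values):
        """Analogue of Expectation._validate: compute a success flag."""
        lo = self.kwargs["min_value"]
        hi = self.kwargs["max_value"]
        ok = all(
            (lo is None or v >= lo) and (hi is None or v <= hi)
            for v in values
        )
        return {"success": ok}


exp = ToyExpectation(column="age", min_value=0, max_value=120)
print(exp._validate([12, 34, 56]))  # {'success': True}
print(exp._validate([-5]))          # {'success': False}
```

In the real library, _validate receives computed metrics rather than raw values, but the shape of the contract (domain and success keys plus a success-computing method) is the same.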
- get_success_kwargs(configuration: Optional[great_expectations.core.expectation_configuration.ExpectationConfiguration] = None) Dict[str, Any] #
Retrieve the success kwargs.
- Parameters
configuration – The ExpectationConfiguration that contains the kwargs. If no configuration arg is provided, the success kwargs from the configuration attribute of the Expectation instance will be returned.
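Conceptually, retrieving the success kwargs amounts to filtering a configuration's kwargs down to the expectation's success_keys, filling any gaps from default_kwarg_values. The sketch below demonstrates that idea on plain dicts; the real method operates on ExpectationConfiguration objects, and the key names here are hypothetical.

```python
# Hedged sketch of "retrieve the success kwargs": keep only success_keys,
# falling back to default_kwarg_values for anything unspecified.

success_keys = ("min_value", "max_value", "strict_min")
default_kwarg_values = {"min_value": None, "max_value": None, "strict_min": False}

def get_success_kwargs(configuration_kwargs):
    return {
        key: configuration_kwargs.get(key, default_kwarg_values[key])
        for key in success_keys
    }

kwargs = get_success_kwargs({"column": "age", "min_value": 0, "max_value": 120})
print(kwargs)  # {'min_value': 0, 'max_value': 120, 'strict_min': False}
```

Note that non-success kwargs such as "column" (a domain key) are excluded from the result.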
print_diagnostic_checklist(diagnostics: Optional[great_expectations.core.expectation_diagnostics.expectation_diagnostics.ExpectationDiagnostics] = None, show_failed_tests: bool = False, backends: Optional[List[str]] = None, show_debug_messages: bool = False) str #
Runs self.run_diagnostics and generates a diagnostic checklist.
The output from this method is a thin wrapper around ExpectationDiagnostics.generate_checklist(). This method is experimental.
- Parameters
diagnostics (optional[ExpectationDiagnostics]) – If diagnostics are not provided, diagnostics will be run on self.
show_failed_tests (bool) – If True, failing tests will be printed.
backends – List of backends to pass to run_diagnostics.
show_debug_messages (bool) – If True, create a logger and pass it to run_diagnostics.
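Since the method is described as a thin wrapper around ExpectationDiagnostics.generate_checklist(), the core idea can be sketched as turning a list of (description, passed) checks into a printable checklist. This is a conceptual model only; the check names are hypothetical and the real checklist is derived from an ExpectationDiagnostics report.

```python
# Conceptual model of checklist generation: render pass/fail checks as text.
# Hypothetical check names; not the real ExpectationDiagnostics logic.

def generate_checklist(checks, show_failed_tests=False):
    lines = []
    for description, passed in checks:
        mark = "x" if passed else " "
        lines.append(f" [{mark}] {description}")
        if show_failed_tests and not passed:
            lines.append(f"     (failed: {description})")
    return "\n".join(lines)

checks = [
    ("Has a docstring", True),
    ("Has at least one positive and negative example", False),
]
print(generate_checklist(checks, show_failed_tests=True))
```

The show_failed_tests flag mirrors the documented parameter: failing checks gain an extra line of detail only when it is set.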
run_diagnostics(raise_exceptions_for_backends: bool = False, ignore_suppress: bool = False, ignore_only_for: bool = False, for_gallery: bool = False, debug_logger: Optional[logging.Logger] = None, only_consider_these_backends: Optional[List[str]] = None, context: Optional[AbstractDataContext] = None) ExpectationDiagnostics #
Produce a diagnostic report about this Expectation.
The current uses for this method’s output are populating the Public Expectation Gallery from its JSON structure and enabling a fast dev loop in which contributors can quickly check the completeness of their Expectations.
The contents of the report are captured in the ExpectationDiagnostics dataclass. You can see some examples in test_expectation_diagnostics.py.
Some components (e.g. description, examples, library_metadata) of the diagnostic report can be introspected directly from the Expectation class. Other components (e.g. metrics, renderers, executions) are at least partly dependent on instantiating, validating, and/or executing the Expectation class. For these kinds of components, at least one test case with include_in_gallery=True must be present in the examples to produce the metrics, renderers, and execution engines parts of the report. This is because get_validation_dependencies requires expectation_config as an argument.
If errors are encountered in the process of running the diagnostics, they are assumed to be due to incompleteness of the Expectation’s implementation (e.g., declaring a dependency on Metrics that do not exist). These errors are added under the “errors” key in the report.
- Parameters
raise_exceptions_for_backends – If True, raise an Exception when a backend fails to connect.
ignore_suppress – If True, ignore the suppress_test_for list on Expectation sample tests.
ignore_only_for – If True, ignore the only_for list on Expectation sample tests.
for_gallery – If True, create empty arrays to use as examples for the Expectation Diagnostics.
debug_logger (optional[logging.Logger]) – Logger object used for sending debug messages.
only_consider_these_backends (optional[List[str]]) –
context (optional[AbstractDataContext]) – Instance of any child of “AbstractDataContext” class.
- Returns
An Expectation Diagnostics report object
validate(validator: Validator, configuration: Optional[ExpectationConfiguration] = None, evaluation_parameters: Optional[dict] = None, interactive_evaluation: bool = True, data_context: Optional[AbstractDataContext] = None, runtime_configuration: Optional[dict] = None) ExpectationValidationResult #
Validates the expectation against the provided data.
- Parameters
validator – A Validator object that can be used to create Expectations, validate Expectations, and get Metrics for Expectations.
configuration – Defines the parameters and name of a specific expectation.
evaluation_parameters – Dictionary of dynamic values used during Validation of an Expectation.
interactive_evaluation – Setting the interactive_evaluation flag on a DataAsset makes it possible to declare and store expectations without immediately evaluating them.
data_context – An instance of a GX DataContext.
runtime_configuration – The runtime configuration for the Expectation.
- Returns
An ExpectationValidationResult object
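The evaluation_parameters mechanism can be illustrated in isolation: kwargs whose value is a {"$PARAMETER": ...} reference are resolved against a dictionary of dynamic values at validation time. The "$PARAMETER" convention matches Great Expectations' evaluation-parameter syntax, but the resolution function and parameter name below are simplified, hypothetical stand-ins.

```python
# Illustrative sketch of evaluation-parameter resolution. In GX, kwargs may
# reference dynamic values via {"$PARAMETER": name}; the real resolution
# logic is more involved than this simplified version.

def resolve_kwargs(kwargs, evaluation_parameters):
    resolved = {}
    for key, value in kwargs.items():
        if isinstance(value, dict) and "$PARAMETER" in value:
            # Substitute the dynamic value at validation time.
            resolved[key] = evaluation_parameters[value["$PARAMETER"]]
        else:
            resolved[key] = value
    return resolved

kwargs = {"column": "id", "max_value": {"$PARAMETER": "upstream_row_count"}}
print(resolve_kwargs(kwargs, {"upstream_row_count": 1000}))
# {'column': 'id', 'max_value': 1000}
```

This is why validate accepts evaluation_parameters separately from the configuration: the configuration declares the reference, and the dynamic values arrive at validation time.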
- validate_configuration(configuration: Optional[great_expectations.core.expectation_configuration.ExpectationConfiguration] = None) None #
Validates the configuration for the Expectation.
For all expectations, the configuration’s expectation_type needs to match the type of the expectation being configured. This method is meant to be overridden by specific expectations to provide additional validation checks as required. Overriding methods should call super().validate_configuration(configuration).
- Raises
InvalidExpectationConfigurationError – The configuration does not contain the values required by the Expectation.
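The override pattern described above, a subclass adding its own checks and calling super().validate_configuration(configuration) first, can be sketched with plain-Python stand-ins for the real base class and InvalidExpectationConfigurationError:

```python
# Sketch of the validate_configuration override pattern. Plain-Python
# stand-ins are used for the GX base class and exception type; the
# expectation names and required-kwarg check are illustrative only.

class InvalidConfigError(Exception):
    """Stand-in for InvalidExpectationConfigurationError."""

class BaseExpectation:
    expectation_type = "expect_base"

    def validate_configuration(self, configuration):
        # Base check: the configuration's expectation_type must match.
        if configuration.get("expectation_type") != self.expectation_type:
            raise InvalidConfigError("expectation_type mismatch")

class ColumnValuesNonNull(BaseExpectation):
    expectation_type = "expect_column_values_to_not_be_null"

    def validate_configuration(self, configuration):
        # Always run the base checks first, as the docs recommend.
        super().validate_configuration(configuration)
        if "column" not in configuration.get("kwargs", {}):
            raise InvalidConfigError("'column' is required")

exp = ColumnValuesNonNull()
exp.validate_configuration(
    {"expectation_type": "expect_column_values_to_not_be_null",
     "kwargs": {"column": "id"}}
)  # passes silently; a missing "column" would raise InvalidConfigError
```

Calling the parent's method before adding subclass-specific checks ensures the expectation_type match is always enforced, regardless of how deep the subclass hierarchy goes.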