lomas_core.models package
Submodules
lomas_core.models.collections module
- class lomas_core.models.collections.BooleanMetadata(*, private_id: bool = False, nullable: bool = False, max_partition_length: Annotated[int, FieldInfo(annotation=NoneType, required=True, metadata=[Gt(gt=0)])] | None = None, max_influenced_partitions: Annotated[int, FieldInfo(annotation=NoneType, required=True, metadata=[Gt(gt=0)])] | None = None, max_partition_contributions: Annotated[int, FieldInfo(annotation=NoneType, required=True, metadata=[Gt(gt=0)])] | None = None, type: Literal[MetadataColumnType.BOOLEAN])[source]
Bases: ColumnMetadata
Model for boolean column metadata.
- model_config: ClassVar[ConfigDict] = {}
Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].
- type: Literal[MetadataColumnType.BOOLEAN]
- class lomas_core.models.collections.BoundedColumnMetadata(*, private_id: bool = False, nullable: bool = False, max_partition_length: Annotated[int, FieldInfo(annotation=NoneType, required=True, metadata=[Gt(gt=0)])] | None = None, max_influenced_partitions: Annotated[int, FieldInfo(annotation=NoneType, required=True, metadata=[Gt(gt=0)])] | None = None, max_partition_contributions: Annotated[int, FieldInfo(annotation=NoneType, required=True, metadata=[Gt(gt=0)])] | None = None)[source]
Bases: ColumnMetadata
Model for columns with bounded data.
- model_config: ClassVar[ConfigDict] = {}
Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].
- class lomas_core.models.collections.CategoricalColumnMetadata(*, private_id: bool = False, nullable: bool = False, max_partition_length: Annotated[int, FieldInfo(annotation=NoneType, required=True, metadata=[Gt(gt=0)])] | None = None, max_influenced_partitions: Annotated[int, FieldInfo(annotation=NoneType, required=True, metadata=[Gt(gt=0)])] | None = None, max_partition_contributions: Annotated[int, FieldInfo(annotation=NoneType, required=True, metadata=[Gt(gt=0)])] | None = None)[source]
Bases: ColumnMetadata
Model for categorical column metadata.
- model_config: ClassVar[ConfigDict] = {}
Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].
- class lomas_core.models.collections.ColumnMetadata(*, private_id: bool = False, nullable: bool = False, max_partition_length: Annotated[int, FieldInfo(annotation=NoneType, required=True, metadata=[Gt(gt=0)])] | None = None, max_influenced_partitions: Annotated[int, FieldInfo(annotation=NoneType, required=True, metadata=[Gt(gt=0)])] | None = None, max_partition_contributions: Annotated[int, FieldInfo(annotation=NoneType, required=True, metadata=[Gt(gt=0)])] | None = None)[source]
Bases: BaseModel
Base model for column metadata.
- max_influenced_partitions: Annotated[int, FieldInfo(annotation=NoneType, required=True, metadata=[Gt(gt=0)])] | None
- max_partition_contributions: Annotated[int, FieldInfo(annotation=NoneType, required=True, metadata=[Gt(gt=0)])] | None
- max_partition_length: Annotated[int, FieldInfo(annotation=NoneType, required=True, metadata=[Gt(gt=0)])] | None
- model_config: ClassVar[ConfigDict] = {}
Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].
- nullable: bool
- private_id: bool
- class lomas_core.models.collections.DSAccess(*, database_type: str)[source]
Bases: BaseModel
BaseModel for access info to a private dataset.
- database_type: str
- model_config: ClassVar[ConfigDict] = {}
Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].
- class lomas_core.models.collections.DSInfo(*, dataset_name: str, dataset_access: DSPathAccess | DSS3Access, metadata_access: DSPathAccess | DSS3Access)[source]
Bases: BaseModel
BaseModel for a dataset.
- dataset_access: Annotated[DSPathAccess | DSS3Access, FieldInfo(annotation=NoneType, required=True, discriminator='database_type')]
- dataset_name: str
- metadata_access: Annotated[DSPathAccess | DSS3Access, FieldInfo(annotation=NoneType, required=True, discriminator='database_type')]
- model_config: ClassVar[ConfigDict] = {}
Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].
- class lomas_core.models.collections.DSPathAccess(*, database_type: Literal[PrivateDatabaseType.PATH], path: str)[source]
Bases: DSAccess
BaseModel for a local dataset.
- database_type: Literal[PrivateDatabaseType.PATH]
- model_config: ClassVar[ConfigDict] = {}
Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].
- path: str
- class lomas_core.models.collections.DSS3Access(*, database_type: Literal[PrivateDatabaseType.S3], endpoint_url: str, bucket: str, key: str, access_key_id: str | None = None, secret_access_key: str | None = None, credentials_name: str)[source]
Bases: DSAccess
BaseModel for a dataset on S3.
- access_key_id: str | None
- bucket: str
- credentials_name: str
- database_type: Literal[PrivateDatabaseType.S3]
- endpoint_url: str
- key: str
- model_config: ClassVar[ConfigDict] = {}
Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].
- secret_access_key: str | None
- class lomas_core.models.collections.DatasetOfUser(*, dataset_name: str, initial_epsilon: float, initial_delta: float, total_spent_epsilon: float, total_spent_delta: float)[source]
Bases: BaseModel
BaseModel for information about a user on a dataset.
- dataset_name: str
- initial_delta: float
- initial_epsilon: float
- model_config: ClassVar[ConfigDict] = {}
Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].
- total_spent_delta: float
- total_spent_epsilon: float
- class lomas_core.models.collections.DatasetsCollection(*, datasets: List[DSInfo])[source]
Bases: BaseModel
BaseModel for a datasets collection.
- model_config: ClassVar[ConfigDict] = {}
Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].
- class lomas_core.models.collections.DatetimeMetadata(*, private_id: bool = False, nullable: bool = False, max_partition_length: Annotated[int, FieldInfo(annotation=NoneType, required=True, metadata=[Gt(gt=0)])] | None = None, max_influenced_partitions: Annotated[int, FieldInfo(annotation=NoneType, required=True, metadata=[Gt(gt=0)])] | None = None, max_partition_contributions: Annotated[int, FieldInfo(annotation=NoneType, required=True, metadata=[Gt(gt=0)])] | None = None, type: Literal[MetadataColumnType.DATETIME], lower: datetime, upper: datetime)[source]
Bases: BoundedColumnMetadata
Model for datetime column metadata.
- lower: datetime
- model_config: ClassVar[ConfigDict] = {}
Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].
- type: Literal[MetadataColumnType.DATETIME]
- upper: datetime
- class lomas_core.models.collections.FloatMetadata(*, private_id: bool = False, nullable: bool = False, max_partition_length: Annotated[int, FieldInfo(annotation=NoneType, required=True, metadata=[Gt(gt=0)])] | None = None, max_influenced_partitions: Annotated[int, FieldInfo(annotation=NoneType, required=True, metadata=[Gt(gt=0)])] | None = None, max_partition_contributions: Annotated[int, FieldInfo(annotation=NoneType, required=True, metadata=[Gt(gt=0)])] | None = None, type: Literal[MetadataColumnType.FLOAT], precision: Precision, lower: float, upper: float)[source]
Bases: BoundedColumnMetadata
Model for float column metadata.
- lower: float
- model_config: ClassVar[ConfigDict] = {}
Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].
- type: Literal[MetadataColumnType.FLOAT]
- upper: float
- class lomas_core.models.collections.IntCategoricalMetadata(*, private_id: bool = False, nullable: bool = False, max_partition_length: Annotated[int, FieldInfo(annotation=NoneType, required=True, metadata=[Gt(gt=0)])] | None = None, max_influenced_partitions: Annotated[int, FieldInfo(annotation=NoneType, required=True, metadata=[Gt(gt=0)])] | None = None, max_partition_contributions: Annotated[int, FieldInfo(annotation=NoneType, required=True, metadata=[Gt(gt=0)])] | None = None, type: Literal[MetadataColumnType.INT], precision: Precision, cardinality: int, categories: List[int])[source]
Bases: CategoricalColumnMetadata
Model for integer categorical column metadata.
- cardinality: int
- categories: List[int]
- model_config: ClassVar[ConfigDict] = {}
Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].
- type: Literal[MetadataColumnType.INT]
- class lomas_core.models.collections.IntMetadata(*, private_id: bool = False, nullable: bool = False, max_partition_length: Annotated[int, FieldInfo(annotation=NoneType, required=True, metadata=[Gt(gt=0)])] | None = None, max_influenced_partitions: Annotated[int, FieldInfo(annotation=NoneType, required=True, metadata=[Gt(gt=0)])] | None = None, max_partition_contributions: Annotated[int, FieldInfo(annotation=NoneType, required=True, metadata=[Gt(gt=0)])] | None = None, type: Literal[MetadataColumnType.INT], precision: Precision, lower: int, upper: int)[source]
Bases: BoundedColumnMetadata
Model for integer column metadata.
- lower: int
- model_config: ClassVar[ConfigDict] = {}
Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].
- type: Literal[MetadataColumnType.INT]
- upper: int
- class lomas_core.models.collections.Metadata(*, max_ids: Annotated[int, Gt(gt=0)], rows: Annotated[int, Gt(gt=0)], row_privacy: bool, censor_dims: bool | None = False, columns: Dict[str, Annotated[Annotated[StrMetadata, Tag(tag=string)] | Annotated[StrCategoricalMetadata, Tag(tag=categorical_string)] | Annotated[IntMetadata, Tag(tag=int)] | Annotated[IntCategoricalMetadata, Tag(tag=categorical_int)] | Annotated[FloatMetadata, Tag(tag=float)] | Annotated[BooleanMetadata, Tag(tag=boolean)] | Annotated[DatetimeMetadata, Tag(tag=datetime)], Discriminator(discriminator=get_column_metadata_discriminator, custom_error_type=None, custom_error_message=None, custom_error_context=None)]])[source]
Bases: BaseModel
BaseModel for a metadata format.
- censor_dims: bool | None
- columns: Dict[str, Annotated[Annotated[StrMetadata, Tag(tag=string)] | Annotated[StrCategoricalMetadata, Tag(tag=categorical_string)] | Annotated[IntMetadata, Tag(tag=int)] | Annotated[IntCategoricalMetadata, Tag(tag=categorical_int)] | Annotated[FloatMetadata, Tag(tag=float)] | Annotated[BooleanMetadata, Tag(tag=boolean)] | Annotated[DatetimeMetadata, Tag(tag=datetime)], Discriminator(discriminator=get_column_metadata_discriminator, custom_error_type=None, custom_error_message=None, custom_error_context=None)]]
- max_ids: Annotated[int, FieldInfo(annotation=NoneType, required=True, metadata=[Gt(gt=0)])]
- model_config: ClassVar[ConfigDict] = {}
Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].
- row_privacy: bool
- rows: Annotated[int, FieldInfo(annotation=NoneType, required=True, metadata=[Gt(gt=0)])]
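A minimal construction sketch for the discriminated columns mapping, combining the column metadata models above with the enums from lomas_core.models.constants documented further down; the dataset, categories, and bounds are illustrative only and exact validation behaviour may differ:

```python
from lomas_core.models.collections import IntMetadata, Metadata, StrCategoricalMetadata
from lomas_core.models.constants import MetadataColumnType, Precision

# Illustrative metadata for a small two-column dataset.
metadata = Metadata(
    max_ids=1,          # each individual contributes at most one row
    rows=100,
    row_privacy=True,
    censor_dims=False,
    columns={
        "species": StrCategoricalMetadata(
            type=MetadataColumnType.STRING,
            cardinality=3,
            categories=["Adelie", "Chinstrap", "Gentoo"],
        ),
        "bill_length_mm": IntMetadata(
            type=MetadataColumnType.INT,
            precision=Precision.DOUBLE,
            lower=30,
            upper=65,
        ),
    },
)
```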
- class lomas_core.models.collections.StrCategoricalMetadata(*, private_id: bool = False, nullable: bool = False, max_partition_length: Annotated[int, FieldInfo(annotation=NoneType, required=True, metadata=[Gt(gt=0)])] | None = None, max_influenced_partitions: Annotated[int, FieldInfo(annotation=NoneType, required=True, metadata=[Gt(gt=0)])] | None = None, max_partition_contributions: Annotated[int, FieldInfo(annotation=NoneType, required=True, metadata=[Gt(gt=0)])] | None = None, type: Literal[MetadataColumnType.STRING], cardinality: int, categories: List[str])[source]
Bases: CategoricalColumnMetadata
Model for categorical string metadata.
- cardinality: int
- categories: List[str]
- model_config: ClassVar[ConfigDict] = {}
Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].
- type: Literal[MetadataColumnType.STRING]
- class lomas_core.models.collections.StrMetadata(*, private_id: bool = False, nullable: bool = False, max_partition_length: Annotated[int, FieldInfo(annotation=NoneType, required=True, metadata=[Gt(gt=0)])] | None = None, max_influenced_partitions: Annotated[int, FieldInfo(annotation=NoneType, required=True, metadata=[Gt(gt=0)])] | None = None, max_partition_contributions: Annotated[int, FieldInfo(annotation=NoneType, required=True, metadata=[Gt(gt=0)])] | None = None, type: Literal[MetadataColumnType.STRING])[source]
Bases: ColumnMetadata
Model for string metadata.
- model_config: ClassVar[ConfigDict] = {}
Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].
- type: Literal[MetadataColumnType.STRING]
- class lomas_core.models.collections.User(*, user_name: str, may_query: bool, datasets_list: List[DatasetOfUser])[source]
Bases: BaseModel
BaseModel for a user in a user collection.
- datasets_list: List[DatasetOfUser]
- may_query: bool
- model_config: ClassVar[ConfigDict] = {}
Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].
- user_name: str
- class lomas_core.models.collections.UserCollection(*, users: List[User])[source]
Bases: BaseModel
BaseModel for users collection.
- model_config: ClassVar[ConfigDict] = {}
Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].
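A short sketch assembling a user collection from the models above; the user name and budget values are purely illustrative:

```python
from lomas_core.models.collections import DatasetOfUser, User, UserCollection

penguin_budget = DatasetOfUser(
    dataset_name="PENGUIN",
    initial_epsilon=10.0,
    initial_delta=0.005,
    total_spent_epsilon=0.0,
    total_spent_delta=0.0,
)
users = UserCollection(
    users=[
        User(user_name="jane_doe", may_query=True, datasets_list=[penguin_budget]),
    ]
)
```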
- lomas_core.models.collections.get_column_metadata_discriminator(v: Any) → str[source]
Discriminator function for determining the type of column metadata.
- Parameters:
v (Any) – The unparsed column metadata (either dict or class object)
- Raises:
ValueError – If the column type cannot be found.
- Returns:
The metadata string type.
- Return type:
str
lomas_core.models.config module
lomas_core.models.constants module
- class lomas_core.models.constants.AdminDBType(value, names=<not given>, *values, module=None, qualname=None, type=None, start=1, boundary=None)[source]
Bases: StrEnum
Types of administration databases.
- MONGODB = 'mongodb'
- YAML = 'yaml'
- class lomas_core.models.constants.ConfigKeys(value, names=<not given>, *values, module=None, qualname=None, type=None, start=1, boundary=None)[source]
Bases: StrEnum
Keys of the configuration file.
- RUNTIME_ARGS = 'runtime_args'
- SETTINGS = 'settings'
- class lomas_core.models.constants.ExceptionType(value, names=<not given>, *values, module=None, qualname=None, type=None, start=1, boundary=None)[source]
Bases: StrEnum
Lomas server exception types.
To be used as a discriminator when parsing the corresponding models.
- EXTERNAL_LIBRARY = 'ExternalLibraryException'
- INTERNAL_SERVER = 'InternalServerException'
- INVALID_QUERY = 'InvalidQueryException'
- UNAUTHORIZED_ACCESS = 'UnauthorizedAccessException'
- class lomas_core.models.constants.MetadataColumnType(value, names=<not given>, *values, module=None, qualname=None, type=None, start=1, boundary=None)[source]
Bases: StrEnum
Column types for metadata.
- BOOLEAN = 'boolean'
- CAT_INT = 'categorical_int'
- CAT_STRING = 'categorical_string'
- DATETIME = 'datetime'
- FLOAT = 'float'
- INT = 'int'
- STRING = 'string'
- class lomas_core.models.constants.Precision(value, names=<not given>, *values, module=None, qualname=None, type=None, start=1, boundary=None)[source]
Bases: IntEnum
Precision of integer and float data.
- DOUBLE = 64
- SINGLE = 32
lomas_core.models.exceptions module
- class lomas_core.models.exceptions.ExternalLibraryExceptionModel(*, type: Literal[ExceptionType.EXTERNAL_LIBRARY] = ExceptionType.EXTERNAL_LIBRARY, library: DPLibraries, message: str)[source]
Bases: LomasServerExceptionModel
For exceptions from libraries external to the lomas packages.
- library: DPLibraries
The external library that caused the exception.
- message: str
Exception error message.
For exceptions from libraries external to the lomas packages.
- model_config: ClassVar[ConfigDict] = {'use_attribute_docstrings': True}
Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].
- type: Literal[ExceptionType.EXTERNAL_LIBRARY]
Exception type.
- class lomas_core.models.exceptions.InternalServerExceptionModel(*, type: Literal[ExceptionType.INTERNAL_SERVER] = ExceptionType.INTERNAL_SERVER)[source]
Bases: LomasServerExceptionModel
For any unforeseen internal exception.
- model_config: ClassVar[ConfigDict] = {'use_attribute_docstrings': True}
Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].
- type: Literal[ExceptionType.INTERNAL_SERVER]
Exception type.
For any unforeseen internal exception.
- class lomas_core.models.exceptions.InvalidQueryExceptionModel(*, type: Literal[ExceptionType.INVALID_QUERY] = ExceptionType.INVALID_QUERY, message: str)[source]
Bases: LomasServerExceptionModel
Exception directly related to the query.
For example if it does not contain a DP mechanism or there is not enough DP budget.
- message: str
Exception error message.
This is for exceptions directly related to the query. For example if it does not contain a DP mechanism or there is not enough DP budget.
- model_config: ClassVar[ConfigDict] = {'use_attribute_docstrings': True}
Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].
- type: Literal[ExceptionType.INVALID_QUERY]
Exception type.
- class lomas_core.models.exceptions.LomasServerExceptionModel(*, type: str)[source]
Bases: BaseModel
Base model for lomas server exceptions.
- model_config: ClassVar[ConfigDict] = {'use_attribute_docstrings': True}
Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].
- type: str
Exception type.
- class lomas_core.models.exceptions.UnauthorizedAccessExceptionModel(*, type: Literal[ExceptionType.UNAUTHORIZED_ACCESS] = ExceptionType.UNAUTHORIZED_ACCESS, message: str)[source]
Bases: LomasServerExceptionModel
Exception related to rights with regards to the query.
(e.g. no user access for this dataset).
- message: str
Exception error message.
Exception related to rights with regards to the query. (e.g. no user access for this dataset).
- model_config: ClassVar[ConfigDict] = {'use_attribute_docstrings': True}
Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].
- type: Literal[ExceptionType.UNAUTHORIZED_ACCESS]
Exception type.
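Since each concrete exception model carries a type literal drawn from ExceptionType, serialized exceptions can be re-parsed through a discriminated union. A minimal sketch using plain Pydantic machinery; the union below is assembled here for illustration and is not necessarily exported by the package:

```python
from typing import Annotated, Union

from pydantic import Field, TypeAdapter

from lomas_core.models.exceptions import (
    ExternalLibraryExceptionModel,
    InternalServerExceptionModel,
    InvalidQueryExceptionModel,
    UnauthorizedAccessExceptionModel,
)

# Illustrative union discriminated on the "type" field.
AnyLomasExceptionModel = Annotated[
    Union[
        ExternalLibraryExceptionModel,
        InternalServerExceptionModel,
        InvalidQueryExceptionModel,
        UnauthorizedAccessExceptionModel,
    ],
    Field(discriminator="type"),
]

payload = InvalidQueryExceptionModel(message="Not enough DP budget.").model_dump()
parsed = TypeAdapter(AnyLomasExceptionModel).validate_python(payload)
assert isinstance(parsed, InvalidQueryExceptionModel)
```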
lomas_core.models.requests module
- class lomas_core.models.requests.DiffPrivLibDummyQueryModel(*, dataset_name: str, dummy_nb_rows: Annotated[int, Gt(gt=0)], dummy_seed: int, diffprivlib_json: str, feature_columns: list, target_columns: list | None, test_size: Annotated[float, Gt(gt=0.0), Lt(lt=1.0)], test_train_split_seed: int, imputer_strategy: str)[source]
Bases: DiffPrivLibQueryModel, DummyQueryModel
Input model for a DiffPrivLib query on a dummy dataset.
- model_config: ClassVar[ConfigDict] = {'json_schema_extra': {'examples': [{'dataset_name': 'PENGUIN', 'diffprivlib_json': '{"module": "diffprivlib", "version": "0.6.6", "pipeline": [{"type": "_dpl_type:StandardScaler", "name": "scaler", "params": {"with_mean": true, "with_std": true, "copy": true, "epsilon": 0.5, "bounds": {"_tuple": true, "_items": [[30.0, 13.0, 150.0, 2000.0], [65.0, 23.0, 250.0, 7000.0]]}, "random_state": null, "accountant": "_dpl_instance:BudgetAccountant"}}, {"type": "_dpl_type:LogisticRegression", "name": "classifier", "params": {"tol": 0.0001, "C": 1.0, "fit_intercept": true, "random_state": null, "max_iter": 100, "verbose": 0, "warm_start": false, "n_jobs": null, "epsilon": 1.0, "data_norm": 83.69469642643347, "accountant": "_dpl_instance:BudgetAccountant"}}]}', 'dummy_nb_rows': 100, 'dummy_seed': 42, 'feature_columns': ['bill_length_mm', 'bill_depth_mm', 'flipper_length_mm', 'body_mass_g'], 'imputer_strategy': 'drop', 'target_columns': ['species'], 'test_size': 0.2, 'test_train_split_seed': 4}]}, 'use_attribute_docstrings': True}
Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].
- class lomas_core.models.requests.DiffPrivLibQueryModel(*, dataset_name: str, diffprivlib_json: str, feature_columns: list, target_columns: list | None, test_size: Annotated[float, Gt(gt=0.0), Lt(lt=1.0)], test_train_split_seed: int, imputer_strategy: str)[source]
Bases: DiffPrivLibRequestModel, QueryModel
Base input model for a diffprivlib query.
- model_config: ClassVar[ConfigDict] = {'json_schema_extra': {'examples': [{'dataset_name': 'PENGUIN', 'diffprivlib_json': '{"module": "diffprivlib", "version": "0.6.6", "pipeline": [{"type": "_dpl_type:StandardScaler", "name": "scaler", "params": {"with_mean": true, "with_std": true, "copy": true, "epsilon": 0.5, "bounds": {"_tuple": true, "_items": [[30.0, 13.0, 150.0, 2000.0], [65.0, 23.0, 250.0, 7000.0]]}, "random_state": null, "accountant": "_dpl_instance:BudgetAccountant"}}, {"type": "_dpl_type:LogisticRegression", "name": "classifier", "params": {"tol": 0.0001, "C": 1.0, "fit_intercept": true, "random_state": null, "max_iter": 100, "verbose": 0, "warm_start": false, "n_jobs": null, "epsilon": 1.0, "data_norm": 83.69469642643347, "accountant": "_dpl_instance:BudgetAccountant"}}]}', 'feature_columns': ['bill_length_mm', 'bill_depth_mm', 'flipper_length_mm', 'body_mass_g'], 'imputer_strategy': 'drop', 'target_columns': ['species'], 'test_size': 0.2, 'test_train_split_seed': 4}]}, 'use_attribute_docstrings': True}
Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].
- class lomas_core.models.requests.DiffPrivLibRequestModel(*, dataset_name: str, diffprivlib_json: str, feature_columns: list, target_columns: list | None, test_size: Annotated[float, Gt(gt=0.0), Lt(lt=1.0)], test_train_split_seed: int, imputer_strategy: str)[source]
Bases: LomasRequestModel
Base input model for a diffprivlib request.
- diffprivlib_json: str
The DiffPrivLib pipeline for the query (see the diffprivlib_logger package).
- feature_columns: list
The list of feature columns to train.
- imputer_strategy: str
The imputation strategy.
- model_config: ClassVar[ConfigDict] = {'json_schema_extra': {'examples': [{'dataset_name': 'PENGUIN', 'diffprivlib_json': '{"module": "diffprivlib", "version": "0.6.6", "pipeline": [{"type": "_dpl_type:StandardScaler", "name": "scaler", "params": {"with_mean": true, "with_std": true, "copy": true, "epsilon": 0.5, "bounds": {"_tuple": true, "_items": [[30.0, 13.0, 150.0, 2000.0], [65.0, 23.0, 250.0, 7000.0]]}, "random_state": null, "accountant": "_dpl_instance:BudgetAccountant"}}, {"type": "_dpl_type:LogisticRegression", "name": "classifier", "params": {"tol": 0.0001, "C": 1.0, "fit_intercept": true, "random_state": null, "max_iter": 100, "verbose": 0, "warm_start": false, "n_jobs": null, "epsilon": 1.0, "data_norm": 83.69469642643347, "accountant": "_dpl_instance:BudgetAccountant"}}]}', 'feature_columns': ['bill_length_mm', 'bill_depth_mm', 'flipper_length_mm', 'body_mass_g'], 'imputer_strategy': 'drop', 'target_columns': ['species'], 'test_size': 0.2, 'test_train_split_seed': 4}]}, 'use_attribute_docstrings': True}
Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].
- target_columns: list | None
The list of target columns to predict.
- test_size: float
The proportion of the test set.
- test_train_split_seed: int
The seed for the random train/test split.
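An illustrative construction sketch; the diffprivlib_json value is abbreviated to a placeholder, since in practice it is the full serialized pipeline produced with the diffprivlib_logger package, and the column names are taken from the example configuration above:

```python
from lomas_core.models.requests import DiffPrivLibQueryModel

# Placeholder only: a real value is the complete serialized DiffPrivLib pipeline,
# as shown in the json_schema example above.
pipeline_json = '{"module": "diffprivlib", "version": "0.6.6", "pipeline": [...]}'

query = DiffPrivLibQueryModel(
    dataset_name="PENGUIN",
    diffprivlib_json=pipeline_json,
    feature_columns=["bill_length_mm", "bill_depth_mm", "flipper_length_mm", "body_mass_g"],
    target_columns=["species"],
    test_size=0.2,
    test_train_split_seed=4,
    imputer_strategy="drop",
)
```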
- class lomas_core.models.requests.DummyQueryModel(*, dataset_name: str, dummy_nb_rows: Annotated[int, Gt(gt=0)], dummy_seed: int)[source]
Bases: QueryModel
Input model for a query on a dummy dataset.
- dummy_nb_rows: int
The number of rows in the dummy dataset.
- dummy_seed: int
The seed to set at the start of the dummy dataset generation.
- model_config: ClassVar[ConfigDict] = {'use_attribute_docstrings': True}
Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].
- class lomas_core.models.requests.GetDummyDataset(*, dataset_name: str, dummy_nb_rows: Annotated[int, Gt(gt=0)], dummy_seed: int)[source]
Bases: LomasRequestModel
Model input to get a dummy dataset.
- dummy_nb_rows: int
The number of dummy rows to generate.
- dummy_seed: int
The seed for the random generation of the dummy dataset.
- model_config: ClassVar[ConfigDict] = {'use_attribute_docstrings': True}
Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].
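A minimal request sketch, reusing the PENGUIN example values that appear elsewhere on this page:

```python
from lomas_core.models.requests import GetDummyDataset

request = GetDummyDataset(dataset_name="PENGUIN", dummy_nb_rows=100, dummy_seed=42)
print(request.model_dump_json())
```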
- class lomas_core.models.requests.LomasRequestModel(*, dataset_name: str)[source]
Bases: BaseModel
Base class for all types of requests to the lomas server.
- We differentiate between requests and queries:
a request does not necessarily require an algorithm to be executed on the private dataset (e.g. some cost requests).
a query requires executing an algorithm on a private dataset (or potentially a dummy one).
- dataset_name: str
The name of the dataset the request is aimed at.
- model_config: ClassVar[ConfigDict] = {'use_attribute_docstrings': True}
Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].
- class lomas_core.models.requests.OpenDPDummyQueryModel(*, dataset_name: str, dummy_nb_rows: Annotated[int, Gt(gt=0)], dummy_seed: int, opendp_json: str, fixed_delta: Annotated[float | None, Ge(ge=0)])[source]
Bases: OpenDPRequestModel, DummyQueryModel
Input model for an opendp query on a dummy dataset.
- model_config: ClassVar[ConfigDict] = {'json_schema_extra': {'examples': [{'dataset_name': 'PENGUIN', 'dummy_nb_rows': 100, 'dummy_seed': 42, 'fixed_delta': 1e-05, 'opendp_json': '{"version": "0.12.0", "ast": {"_type": "partial_chain", "lhs": {"_type": "partial_chain", "lhs": {"_type": "partial_chain", "lhs": {"_type": "partial_chain", "lhs": {"_type": "partial_chain", "lhs": {"_type": "constructor", "func": "make_chain_tt", "module": "combinators", "args": [{"_type": "constructor", "func": "make_select_column", "module": "transformations", "kwargs": {"key": "bill_length_mm", "TOA": "String"}}, {"_type": "constructor", "func": "make_split_dataframe", "module": "transformations", "kwargs": {"separator": ",", "col_names": {"_type": "list", "_items": ["species", "island", "bill_length_mm", "bill_depth_mm", "flipper_length_mm", "body_mass_g", "sex"]}}}]}, "rhs": {"_type": "constructor", "func": "then_cast_default", "module": "transformations", "kwargs": {"TOA": "f64"}}}, "rhs": {"_type": "constructor", "func": "then_clamp", "module": "transformations", "kwargs": {"bounds": [30.0, 65.0]}}}, "rhs": {"_type": "constructor", "func": "then_resize", "module": "transformations", "kwargs": {"size": 346, "constant": 43.61}}}, "rhs": {"_type": "constructor", "func": "then_variance", "module": "transformations"}}, "rhs": {"_type": "constructor", "func": "then_laplace", "module": "measurements", "kwargs": {"scale": 5.0}}}}'}]}, 'use_attribute_docstrings': True}
Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].
- class lomas_core.models.requests.OpenDPQueryModel(*, dataset_name: str, opendp_json: str, fixed_delta: Annotated[float | None, Ge(ge=0)])[source]
Bases: OpenDPRequestModel, QueryModel
Base input model for an opendp query.
- model_config: ClassVar[ConfigDict] = {'json_schema_extra': {'examples': [{'dataset_name': 'PENGUIN', 'fixed_delta': 1e-05, 'opendp_json': '{"version": "0.12.0", "ast": {"_type": "partial_chain", "lhs": {"_type": "partial_chain", "lhs": {"_type": "partial_chain", "lhs": {"_type": "partial_chain", "lhs": {"_type": "partial_chain", "lhs": {"_type": "constructor", "func": "make_chain_tt", "module": "combinators", "args": [{"_type": "constructor", "func": "make_select_column", "module": "transformations", "kwargs": {"key": "bill_length_mm", "TOA": "String"}}, {"_type": "constructor", "func": "make_split_dataframe", "module": "transformations", "kwargs": {"separator": ",", "col_names": {"_type": "list", "_items": ["species", "island", "bill_length_mm", "bill_depth_mm", "flipper_length_mm", "body_mass_g", "sex"]}}}]}, "rhs": {"_type": "constructor", "func": "then_cast_default", "module": "transformations", "kwargs": {"TOA": "f64"}}}, "rhs": {"_type": "constructor", "func": "then_clamp", "module": "transformations", "kwargs": {"bounds": [30.0, 65.0]}}}, "rhs": {"_type": "constructor", "func": "then_resize", "module": "transformations", "kwargs": {"size": 346, "constant": 43.61}}}, "rhs": {"_type": "constructor", "func": "then_variance", "module": "transformations"}}, "rhs": {"_type": "constructor", "func": "then_laplace", "module": "measurements", "kwargs": {"scale": 5.0}}}}'}]}, 'use_attribute_docstrings': True}
Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].
- class lomas_core.models.requests.OpenDPRequestModel(*, dataset_name: str, opendp_json: str, fixed_delta: Annotated[float | None, Ge(ge=0)])[source]
Bases: LomasRequestModel
Base input model for an opendp request.
- fixed_delta: float | None
If the pipeline measurement is of type "ZeroConcentratedDivergence" (e.g. with "make_gaussian"), it is converted to "SmoothedMaxDivergence" with "make_zCDP_to_approxDP" (see the OpenDP combinators documentation at https://docs.opendp.org/en/stable/api/python/opendp.combinators.html#opendp.combinators.make_zCDP_to_approxDP). In that case, a "fixed_delta" must be provided by the user.
- model_config: ClassVar[ConfigDict] = {'json_schema_extra': {'examples': [{'dataset_name': 'PENGUIN', 'fixed_delta': 1e-05, 'opendp_json': '{"version": "0.12.0", "ast": {"_type": "partial_chain", "lhs": {"_type": "partial_chain", "lhs": {"_type": "partial_chain", "lhs": {"_type": "partial_chain", "lhs": {"_type": "partial_chain", "lhs": {"_type": "constructor", "func": "make_chain_tt", "module": "combinators", "args": [{"_type": "constructor", "func": "make_select_column", "module": "transformations", "kwargs": {"key": "bill_length_mm", "TOA": "String"}}, {"_type": "constructor", "func": "make_split_dataframe", "module": "transformations", "kwargs": {"separator": ",", "col_names": {"_type": "list", "_items": ["species", "island", "bill_length_mm", "bill_depth_mm", "flipper_length_mm", "body_mass_g", "sex"]}}}]}, "rhs": {"_type": "constructor", "func": "then_cast_default", "module": "transformations", "kwargs": {"TOA": "f64"}}}, "rhs": {"_type": "constructor", "func": "then_clamp", "module": "transformations", "kwargs": {"bounds": [30.0, 65.0]}}}, "rhs": {"_type": "constructor", "func": "then_resize", "module": "transformations", "kwargs": {"size": 346, "constant": 43.61}}}, "rhs": {"_type": "constructor", "func": "then_variance", "module": "transformations"}}, "rhs": {"_type": "constructor", "func": "then_laplace", "module": "measurements", "kwargs": {"scale": 5.0}}}}'}]}, 'use_attribute_docstrings': True}
Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].
- opendp_json: str
The OpenDP pipeline for the query.
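An illustrative sketch; the opendp_json value is abbreviated to a placeholder, since in practice it is the full JSON-serialized OpenDP pipeline shown in the example configuration above:

```python
from lomas_core.models.requests import OpenDPQueryModel

# Placeholder only: a real value is the complete serialized OpenDP pipeline,
# as shown in the json_schema example above.
pipeline_json = '{"version": "0.12.0", "ast": {...}}'

query = OpenDPQueryModel(
    dataset_name="PENGUIN",
    opendp_json=pipeline_json,
    fixed_delta=1e-05,  # needed when the pipeline measurement uses zCDP (e.g. make_gaussian)
)
```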
- class lomas_core.models.requests.QueryModel(*, dataset_name: str)[source]
Bases: LomasRequestModel
Base input model for any query on a dataset.
- We differentiate between requests and queries:
a request does not necessarily require an algorithm to be executed on the private dataset (e.g. some cost requests).
a query requires executing an algorithm on a private dataset (or potentially a dummy one).
- model_config: ClassVar[ConfigDict] = {'use_attribute_docstrings': True}
Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].
- class lomas_core.models.requests.SmartnoiseSQLDummyQueryModel(*, dataset_name: str, dummy_nb_rows: Annotated[int, Gt(gt=0)], dummy_seed: int, query_str: str, epsilon: Annotated[float, Gt(gt=0)], delta: Annotated[float, Ge(ge=0)], mechanisms: dict, postprocess: bool)[source]
Bases: SmartnoiseSQLQueryModel, DummyQueryModel
Input model for a smartnoise-sql query on a dummy dataset.
- model_config: ClassVar[ConfigDict] = {'json_schema_extra': {'examples': [{'dataset_name': 'PENGUIN', 'delta': 1e-05, 'dummy_nb_rows': 100, 'dummy_seed': 42, 'epsilon': 0.1, 'mechanisms': {'count': 'gaussian'}, 'postprocess': True, 'query_str': 'SELECT COUNT(*) AS NB_ROW FROM df'}]}, 'use_attribute_docstrings': True}
Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].
- class lomas_core.models.requests.SmartnoiseSQLQueryModel(*, dataset_name: str, query_str: str, epsilon: Annotated[float, Gt(gt=0)], delta: Annotated[float, Ge(ge=0)], mechanisms: dict, postprocess: bool)[source]
Bases: SmartnoiseSQLRequestModel, QueryModel
Base input model for a smartnoise-sql query.
- model_config: ClassVar[ConfigDict] = {'json_schema_extra': {'examples': [{'dataset_name': 'PENGUIN', 'delta': 1e-05, 'epsilon': 0.1, 'mechanisms': {'count': 'gaussian'}, 'postprocess': True, 'query_str': 'SELECT COUNT(*) AS NB_ROW FROM df'}]}, 'use_attribute_docstrings': True}
Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].
- postprocess: bool
Whether to postprocess the query results (default: True).
See Smartnoise-SQL postprocessing documentation https://docs.smartnoise.org/sql/advanced.html#postprocess.
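The example configuration above maps directly onto a model instance; a sketch:

```python
from lomas_core.models.requests import SmartnoiseSQLQueryModel

query = SmartnoiseSQLQueryModel(
    dataset_name="PENGUIN",
    query_str="SELECT COUNT(*) AS NB_ROW FROM df",  # the table must be referred to as "df"
    epsilon=0.1,
    delta=1e-05,
    mechanisms={"count": "gaussian"},
    postprocess=True,
)
```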
- class lomas_core.models.requests.SmartnoiseSQLRequestModel(*, dataset_name: str, query_str: str, epsilon: Annotated[float, Gt(gt=0)], delta: Annotated[float, Ge(ge=0)], mechanisms: dict)[source]
Bases: LomasRequestModel
Base input model for a smartnoise-sql request.
- delta: float
Privacy parameter (e.g., 1e-5).
- epsilon: float
Privacy parameter (e.g., 0.1).
- mechanisms: dict
Dictionary of mechanisms for the query.
See Smartnoise-SQL mechanisms documentation at https://docs.smartnoise.org/sql/advanced.html#overriding-mechanisms.
- model_config: ClassVar[ConfigDict] = {'json_schema_extra': {'examples': [{'dataset_name': 'PENGUIN', 'delta': 1e-05, 'epsilon': 0.1, 'mechanisms': {'count': 'gaussian'}, 'query_str': 'SELECT COUNT(*) AS NB_ROW FROM df'}]}, 'use_attribute_docstrings': True}
Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].
- query_str: str
The SQL query to execute.
NOTE: the table name is "df"; the query must end with "FROM df".
- class lomas_core.models.requests.SmartnoiseSynthDummyQueryModel(*, dataset_name: str, dummy_nb_rows: Annotated[int, Gt(gt=0)], dummy_seed: int, synth_name: SSynthMarginalSynthesizer | SSynthGanSynthesizer, epsilon: Annotated[float, Gt(gt=0)], delta: Annotated[float | None, Ge(ge=0)], select_cols: List, synth_params: dict, nullable: bool, constraints: str, return_model: bool, condition: str, nb_samples: int)[source]
Bases: SmartnoiseSynthQueryModel, DummyQueryModel
Input model for a smartnoise-synth query on a dummy dataset.
- model_config: ClassVar[ConfigDict] = {'json_schema_extra': {'examples': [{'condition': '', 'constraints': '', 'dataset_name': 'PENGUIN', 'delta': 1e-05, 'dummy_nb_rows': 100, 'dummy_seed': 42, 'epsilon': 0.1, 'nb_samples': 200, 'nullable': True, 'return_model': True, 'select_cols': [], 'synth_name': SSynthGanSynthesizer.DP_CTGAN, 'synth_params': {'batch_size': 50, 'embedding_dim': 128, 'epochs': 5}}]}, 'use_attribute_docstrings': True}
Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].
- class lomas_core.models.requests.SmartnoiseSynthQueryModel(*, dataset_name: str, synth_name: SSynthMarginalSynthesizer | SSynthGanSynthesizer, epsilon: Annotated[float, Gt(gt=0)], delta: Annotated[float | None, Ge(ge=0)], select_cols: List, synth_params: dict, nullable: bool, constraints: str, return_model: bool, condition: str, nb_samples: int)[source]
Bases: SmartnoiseSynthRequestModel, QueryModel
Base input model for a smartnoise-synth query.
- condition: str
Sampling condition in model.sample (only relevant if return_model is False).
- model_config: ClassVar[ConfigDict] = {'json_schema_extra': {'examples': [{'condition': '', 'constraints': '', 'dataset_name': 'PENGUIN', 'delta': 1e-05, 'epsilon': 0.1, 'nb_samples': 200, 'nullable': True, 'return_model': True, 'select_cols': [], 'synth_name': SSynthGanSynthesizer.DP_CTGAN, 'synth_params': {'batch_size': 50, 'embedding_dim': 128, 'epochs': 5}}]}, 'use_attribute_docstrings': True}
Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].
- nb_samples: int
Number of samples to generate.
(only relevant if return_model is False)
- return_model: bool
True to get Synthesizer model, False to get samples.
- class lomas_core.models.requests.SmartnoiseSynthRequestModel(*, dataset_name: str, synth_name: SSynthMarginalSynthesizer | SSynthGanSynthesizer, epsilon: Annotated[float, Gt(gt=0)], delta: Annotated[float | None, Ge(ge=0)], select_cols: List, synth_params: dict, nullable: bool, constraints: str)[source]
Bases: LomasRequestModel
Base input model for a SmartnoiseSynth request.
- constraints: str
Dictionary for custom table transformer constraints.
Columns that are not specified will be inferred based on the metadata.
- delta: float | None
Privacy parameter (e.g., 1e-5).
- epsilon: float
Privacy parameter (e.g., 0.1).
- model_config: ClassVar[ConfigDict] = {'json_schema_extra': {'examples': [{'constraints': '', 'dataset_name': 'PENGUIN', 'delta': 1e-05, 'epsilon': 0.1, 'nullable': True, 'select_cols': [], 'synth_name': SSynthGanSynthesizer.DP_CTGAN, 'synth_params': {'batch_size': 50, 'embedding_dim': 128, 'epochs': 5}}]}, 'use_attribute_docstrings': True}
Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].
- nullable: bool
True if some data cells may be null.
- select_cols: List
List of columns to select.
- synth_name: SSynthMarginalSynthesizer | SSynthGanSynthesizer
Name of the synthesizer model to use.
- synth_params: dict
Keyword arguments to pass to the synthesizer constructor.
See https://docs.smartnoise.org/synth/synthesizers/index.html and provide all parameters of the model except epsilon and delta.
- lomas_core.models.requests.model_input_to_lib(request: LomasRequestModel) → DPLibraries[source]
Return the type of DP library given a LomasRequestModel.
- Parameters:
request (LomasRequestModel) – The user request
- Raises:
InternalServerException – If the library type cannot be determined.
- Returns:
The type of library for the request.
- Return type:
DPLibraries
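A usage sketch, reusing the smartnoise-sql query built in the example above; the expected return value is inferred from the library names documented in the responses module:

```python
from lomas_core.models.requests import SmartnoiseSQLQueryModel, model_input_to_lib

request = SmartnoiseSQLQueryModel(
    dataset_name="PENGUIN",
    query_str="SELECT COUNT(*) AS NB_ROW FROM df",
    epsilon=0.1,
    delta=1e-05,
    mechanisms={"count": "gaussian"},
    postprocess=True,
)
library = model_input_to_lib(request)  # expected: DPLibraries.SMARTNOISE_SQL
```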
lomas_core.models.requests_examples module
lomas_core.models.responses module
- class lomas_core.models.responses.CostResponse(*, epsilon: float, delta: float)[source]
Bases: ResponseModel
Model for responses to cost estimation requests or queries.
- delta: float
The delta cost of the query.
- epsilon: float
The epsilon cost of the query.
- model_config: ClassVar[ConfigDict] = {'use_attribute_docstrings': True}
Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].
- class lomas_core.models.responses.DiffPrivLibQueryResult(*, res_type: Literal[DPLibraries.DIFFPRIVLIB] = DPLibraries.DIFFPRIVLIB, score: float, model: Annotated[DiffprivlibMixin, PlainSerializer(func=serialize_model, return_type=PydanticUndefined, when_used=always), PlainValidator(func=deserialize_model, json_schema_input_type=Any)])[source]
Bases: BaseModel
Model for diffprivlib query result.
- model: Annotated[DiffprivlibMixin, PlainSerializer(func=serialize_model, return_type=PydanticUndefined, when_used=always), PlainValidator(func=deserialize_model, json_schema_input_type=Any)]
The trained model.
- model_config: ClassVar[ConfigDict] = {'arbitrary_types_allowed': True}
Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].
- res_type: Literal[DPLibraries.DIFFPRIVLIB]
Result type description.
- score: float
The trained model score.
- class lomas_core.models.responses.DummyDsResponse(*, dtypes: Dict[str, str], datetime_columns: List[str], dummy_df: Annotated[DataFrame, PlainSerializer(func=dataframe_to_dict, return_type=PydanticUndefined, when_used=always)])[source]
Bases: ResponseModel
Model for responses to dummy dataset requests.
- datetime_columns: List[str]
The list of columns with datetime type.
- classmethod deserialize_dummy_df(v: DataFrame | dict, info: ValidationInfo) → DataFrame[source]
Decodes the dict representation of the dummy df with correct types.
Only does so if the input value is not already a dataframe.
- Parameters:
v (pd.DataFrame | dict) – The dataframe to decode.
info (ValidationInfo) – Validation info to access other model fields.
- Returns:
The decoded dataframe.
- Return type:
pd.DataFrame
- dtypes: Dict[str, str]
The dummy_df column data types.
- dummy_df: Annotated[DataFrame, PlainSerializer(func=dataframe_to_dict, return_type=PydanticUndefined, when_used=always)]
The dummy dataframe.
- model_config: ClassVar[ConfigDict] = {'arbitrary_types_allowed': True}
Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].
- class lomas_core.models.responses.InitialBudgetResponse(*, initial_epsilon: float, initial_delta: float)[source]
Bases: ResponseModel
Model for responses to initial budget queries.
- initial_delta: float
The initial delta privacy loss budget.
- initial_epsilon: float
The initial epsilon privacy loss budget.
- model_config: ClassVar[ConfigDict] = {}
Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].
- class lomas_core.models.responses.OpenDPQueryResult(*, res_type: Literal[DPLibraries.OPENDP] = DPLibraries.OPENDP, value: int | float | List[int | float])[source]
Bases: BaseModel
Type for opendp result.
- model_config: ClassVar[ConfigDict] = {}
Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].
- res_type: Literal[DPLibraries.OPENDP]
Result type description.
- value: int | float | List[int | float]
The result value of the query.
- class lomas_core.models.responses.QueryResponse(*, epsilon: float, delta: float, requested_by: str, result: Annotated[DiffPrivLibQueryResult | SmartnoiseSQLQueryResult | SmartnoiseSynthModel | SmartnoiseSynthSamples | OpenDPQueryResult, Discriminator(discriminator=res_type, custom_error_type=None, custom_error_message=None, custom_error_context=None)])[source]
Bases: CostResponse
Response to Lomas queries.
- model_config: ClassVar[ConfigDict] = {'use_attribute_docstrings': True}
Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].
- requested_by: str
The user that triggered the query.
- result: Annotated[DiffPrivLibQueryResult | SmartnoiseSQLQueryResult | SmartnoiseSynthModel | SmartnoiseSynthSamples | OpenDPQueryResult, Discriminator(discriminator=res_type, custom_error_type=None, custom_error_message=None, custom_error_context=None)]
The query result object.
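A response sketch pairing the inherited cost fields with an OpenDP result; the numbers and user name are illustrative:

```python
from lomas_core.models.responses import OpenDPQueryResult, QueryResponse

response = QueryResponse(
    epsilon=0.1,
    delta=1e-05,
    requested_by="jane_doe",
    result=OpenDPQueryResult(value=42.0),  # res_type defaults to DPLibraries.OPENDP
)
print(response.model_dump_json())
```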
- class lomas_core.models.responses.RemainingBudgetResponse(*, remaining_epsilon: float, remaining_delta: float)[source]
Bases: ResponseModel
Model for responses to remaining budget queries.
- model_config: ClassVar[ConfigDict] = {}
Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].
- remaining_delta: float
The remaining delta privacy loss budget.
- remaining_epsilon: float
The remaining epsilon privacy loss budget.
- class lomas_core.models.responses.ResponseModel[source]
Bases: BaseModel
Base model for any response from the server.
- model_config: ClassVar[ConfigDict] = {}
Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].
- class lomas_core.models.responses.SmartnoiseSQLQueryResult(*, res_type: Literal[DPLibraries.SMARTNOISE_SQL] = DPLibraries.SMARTNOISE_SQL, df: Annotated[DataFrame, PlainSerializer(func=dataframe_to_dict, return_type=PydanticUndefined, when_used=always), PlainValidator(func=dataframe_from_dict, json_schema_input_type=Any)])[source]
Bases: BaseModel
Type for smartnoise_sql result type.
- df: Annotated[DataFrame, PlainSerializer(func=dataframe_to_dict, return_type=PydanticUndefined, when_used=always), PlainValidator(func=dataframe_from_dict, json_schema_input_type=Any)]
Dataframe containing the query result.
- model_config: ClassVar[ConfigDict] = {'arbitrary_types_allowed': True}
Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].
- res_type: Literal[DPLibraries.SMARTNOISE_SQL]
Result type description.
- class lomas_core.models.responses.SmartnoiseSynthModel(*, res_type: Literal[DPLibraries.SMARTNOISE_SYNTH] = DPLibraries.SMARTNOISE_SYNTH, model: Annotated[Synthesizer, PlainSerializer(func=serialize_model, return_type=PydanticUndefined, when_used=always), PlainValidator(func=deserialize_model, json_schema_input_type=Any)])[source]
Bases: BaseModel
Type for smartnoise_synth result when it is a pickled model.
- model: Annotated[Synthesizer, PlainSerializer(func=serialize_model, return_type=PydanticUndefined, when_used=always), PlainValidator(func=deserialize_model, json_schema_input_type=Any)]
Synthetic data generator model.
- model_config: ClassVar[ConfigDict] = {'arbitrary_types_allowed': True}
Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].
- res_type: Literal[DPLibraries.SMARTNOISE_SYNTH]
Result type description.
- class lomas_core.models.responses.SmartnoiseSynthSamples(*, res_type: Literal['sn_synth_samples'] = 'sn_synth_samples', df_samples: Annotated[DataFrame, PlainSerializer(func=dataframe_to_dict, return_type=PydanticUndefined, when_used=always), PlainValidator(func=dataframe_from_dict, json_schema_input_type=Any)])[source]
Bases: BaseModel
Type for smartnoise_synth result when it is a dataframe of samples.
- df_samples: Annotated[DataFrame, PlainSerializer(func=dataframe_to_dict, return_type=PydanticUndefined, when_used=always), PlainValidator(func=dataframe_from_dict, json_schema_input_type=Any)]
Dataframe containing the generated synthetic samples.
- model_config: ClassVar[ConfigDict] = {'arbitrary_types_allowed': True}
Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].
- res_type: Literal['sn_synth_samples']
Result type description.
- class lomas_core.models.responses.SpentBudgetResponse(*, total_spent_epsilon: float, total_spent_delta: float)[source]
Bases: ResponseModel
Model for responses to spent budget queries.
- model_config: ClassVar[ConfigDict] = {}
Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].
- total_spent_delta: float
The total spent delta privacy loss budget.
- total_spent_epsilon: float
The total spent epsilon privacy loss budget.
lomas_core.models.utils module
- lomas_core.models.utils.dataframe_from_dict(serialized_df: DataFrame | dict) → DataFrame[source]
Transforms an input dict into a pandas dataframe.
If the input is already a dataframe, it is simply returned unmodified.
- Parameters:
serialized_df (pd.DataFrame | dict) – The dataframe in dict format, or an already-constructed pd.DataFrame.
- Returns:
The transformed dataframe.
- Return type:
pd.DataFrame
- lomas_core.models.utils.dataframe_to_dict(df: DataFrame) → dict[source]
Transforms a pandas dataframe into a dictionary.
- Parameters:
df (pd.DataFrame) – The dataframe to “serialize”.
- Returns:
The pandas dataframe in dictionary format.
- Return type:
dict
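A round-trip sketch for the two helpers above; the exact dictionary layout produced by dataframe_to_dict is an implementation detail:

```python
import pandas as pd

from lomas_core.models.utils import dataframe_from_dict, dataframe_to_dict

df = pd.DataFrame({"species": ["Adelie", "Gentoo"], "bill_length_mm": [38.5, 47.2]})
payload = dataframe_to_dict(df)          # plain dict, e.g. suitable for JSON transport
restored = dataframe_from_dict(payload)  # back to a pandas DataFrame
```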