Lomas Client Side: Using DiffPrivlib

This notebook showcases how a researcher can use the lomas platform with DiffPrivLib. It explains the different functionalities provided by the lomas-client library to interact with the lomas server.

The secure data are never visible to researchers. They can only access differentially private responses via queries to the server.

Each user has access to one or multiple projects and, for each dataset, a limited budget of \(\epsilon\) and \(\delta\) values.

In this notebook, the researcher is a penguin researcher named Dr. Antartica. She aims to do groundbreaking research on various penguin data.

Step 1: Install the library

To interact with the secure server on which the data is stored, Dr. Antartica first needs to install the library lomas-client in her local development environment.

It can be installed via the pip command:

[1]:
# !pip install lomas_client

Or using a local version of the client

[2]:
import sys
import os
sys.path.append(os.path.abspath(os.path.join('..')))
[3]:
from lomas_client import Client
import numpy as np

Step 2: Initialise the client

Once the library is installed, a Client object must be created. It is responsible for sending requests to the server and processing responses in the local environment, enabling seamless interaction with the server.

To create the client, Dr. Antartica needs to give it a few parameters:

- url: the root application endpoint of the remote secure server
- user_name: her name as registered in the database (Dr. Antartica)
- dataset_name: the name of the dataset that she wants to query (PENGUIN)

She will only be able to query the real dataset if the queen Icergina has previously created an account for her in the database, given her access to the PENGUIN dataset and granted her some epsilon and delta credit (as is done in the Admin Notebook for Users and Datasets management).

[4]:
APP_URL = "http://lomas_server"
USER_NAME = "Dr. Antartica"
DATASET_NAME = "PENGUIN"
client = Client(url=APP_URL, user_name = USER_NAME, dataset_name = DATASET_NAME)

And that’s it for the preparation. She is now ready to use the various functionalities offered by lomas-client.

Step 3: Metadata and dummy dataset

Getting dataset metadata

Dr. Antartica has never seen the data. As a first step to understand what is available to her, she would like to check the metadata of the dataset. For this, she just needs to call the get_dataset_metadata() function of the client. As this is public information, it does not cost any budget.

[5]:
penguin_metadata = client.get_dataset_metadata()
penguin_metadata
[5]:
{'max_ids': 1,
 'row_privacy': True,
 'censor_dims': False,
 'columns': {'species': {'type': 'string',
   'cardinality': 3,
   'categories': ['Adelie', 'Chinstrap', 'Gentoo']},
  'island': {'type': 'string',
   'cardinality': 3,
   'categories': ['Torgersen', 'Biscoe', 'Dream']},
  'bill_length_mm': {'type': 'float', 'lower': 30.0, 'upper': 65.0},
  'bill_depth_mm': {'type': 'float', 'lower': 13.0, 'upper': 23.0},
  'flipper_length_mm': {'type': 'float', 'lower': 150.0, 'upper': 250.0},
  'body_mass_g': {'type': 'float', 'lower': 2000.0, 'upper': 7000.0},
  'sex': {'type': 'string',
   'cardinality': 2,
   'categories': ['MALE', 'FEMALE']}},
 'rows': 344}
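
Getting a dummy dataset

The server can also generate a synthetic "dummy" dataset that follows the metadata schema, so pipelines can be developed and debugged without spending any budget. A minimal sketch, assuming the client exposes a get_dummy_dataset() method with optional nb_rows and seed parameters:

[ ]:
# Fetch a synthetic dataframe following the metadata schema (no budget is spent).
# nb_rows and seed are assumed optional arguments for size and reproducibility.
df_dummy = client.get_dummy_dataset(nb_rows=100, seed=42)
df_dummy.head()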

Step 4: Train Logistic Regression model with DiffPrivLib

We want to train an ML model to guess the species of penguins based on their bill length and depth, flipper length and body mass.

Therefore, we use a DiffPrivLib pipeline which:

- standard-scales the dimensions between the metadata bounds
- then performs a logistic regression to predict the species of penguins.

[6]:
from sklearn.pipeline import Pipeline
from diffprivlib import models
import pandas as pd

Classification: Logistic Regression

Dr. Antartica wants to do a logistic regression on the feature columns ‘bill_length_mm’, ‘bill_depth_mm’, ‘flipper_length_mm’ and ‘body_mass_g’ to predict penguin species.

[7]:
feature_columns = ['bill_length_mm', 'bill_depth_mm', 'flipper_length_mm', 'body_mass_g']
target_columns = ['species']

She starts writing the associated DiffPrivLib pipeline and tries it on the dummy dataset.

If the DiffPrivLib library raises a DiffprivlibCompatibilityWarning, the warning is shown once (as in DiffPrivLib) and the ‘wrong’ parameters are then ignored within the server.

[8]:
# DiffprivlibCompatibilityWarning expected
dpl_pipeline = Pipeline([
    ('scaler', models.StandardScaler(epsilon = 0.5)),
    ('classifier', models.LogisticRegression(epsilon = 1.0, svd_solver='full'))
])
/usr/local/lib/python3.11/site-packages/diffprivlib/utils.py:71: DiffprivlibCompatibilityWarning: Parameter 'svd_solver' is not functional in diffprivlib.  Remove this parameter to suppress this warning.
  warnings.warn(f"Parameter '{arg}' is not functional in diffprivlib.  Remove this parameter to suppress this "

To resolve the DiffprivlibCompatibilityWarning, the svd_solver parameter should not be set, as it is not functional in DiffPrivLib. If the user ignores these warnings, the default behaviour of DiffPrivLib is applied.

If a PrivacyLeakWarning is encountered, the query will not be processed by the server and an error will be returned.

[9]:
dpl_pipeline = Pipeline([
    ('scaler', models.StandardScaler(epsilon = 0.5)),
    ('classifier', models.LogisticRegression(epsilon = 1.0))
])
[10]:
# Expect PrivacyLeakWarning Error
dummy_response = client.diffprivlib_query(
    pipeline = dpl_pipeline,
    feature_columns =  feature_columns,
    target_columns = target_columns,
    dummy = True
)
Error while processing DiffPrivLib request in server                 status code: 422 message: {"ExternalLibraryException":"PrivacyLeakWarning: Bounds parameter hasn't been specified, so falling back to determining bounds from the data.\n This will result in additional privacy leakage.  To ensure differential privacy with no additional privacy loss, specify `bounds` for each valued returned by np.mean().. Lomas server cannot fit pipeline on data, PrivacyLeakWarning is a blocker.","library":"diffprivlib"}

DiffPrivLib requests that raise a PrivacyLeakWarning will not be processed by the server. In lomas, the bounds must always be specified. For most models, it is best to use a StandardScaler as the first step and to fill its bounds based on the metadata values.

[11]:
def get_bounds(cols_metadata, columns):
    lower = [cols_metadata[col]["lower"] for col in columns]
    upper = [cols_metadata[col]["upper"] for col in columns]
    return (lower, upper)
[12]:
bounds = get_bounds(penguin_metadata['columns'], columns=feature_columns)
bounds
[12]:
([30.0, 13.0, 150.0, 2000.0], [65.0, 23.0, 250.0, 7000.0])
[13]:
dpl_pipeline = Pipeline([
    ('scaler', models.StandardScaler(epsilon = 0.5, bounds=bounds)),
    ('classifier', models.LogisticRegression(epsilon = 1.0))
])
[ ]:
# Expect PrivacyLeakWarning Error
dummy_response = client.diffprivlib_query(
    pipeline = dpl_pipeline,
    feature_columns = feature_columns,
    target_columns = target_columns,
    dummy = True
)

Again, we have a privacy leak. For the same reason, the data_norm should be computed from the metadata and passed as an argument, as explained in the error message.

[20]:
# The max l2 norm of any row of the data. This defines the spread of data that will be protected by differential privacy.
data_norm = np.sqrt(np.linalg.norm(bounds[1]))
[21]:
dpl_pipeline = Pipeline([
    ('scaler', models.StandardScaler(epsilon = 0.5, bounds=bounds)),
    ('classifier', models.LogisticRegression(epsilon = 1.0, data_norm = data_norm))
])
[22]:
dummy_response = client.diffprivlib_query(
    pipeline = dpl_pipeline,
    feature_columns = feature_columns,
    target_columns = target_columns,
    dummy = True
)

The pipeline worked: she can check that she has an associated dummy model and dummy score. In the case of a logistic regression, the score is the mean accuracy (as in scikit-learn). Each model returns an associated score; its documentation can be found in the score method of each model in the DiffPrivLib documentation.

[23]:
dummy_response['query_response']
[23]:
{'score': 0.3,
 'model': Pipeline(steps=[('scaler',
                  StandardScaler(accountant=BudgetAccountant(spent_budget=[(0.5, 0)]),
                                 bounds=(array([  30.,   13.,  150., 2000.]),
                                         array([  65.,   23.,  250., 7000.])),
                                 epsilon=0.5)),
                 ('classifier',
                  LogisticRegression(accountant=BudgetAccountant(spent_budget=[(1.0, 0)]),
                                     data_norm=83.69469642643347))])}

Now that the pipeline seems to work, she also wants to choose another data imputation method: by default, rows with missing data are dropped, but she wants to replace them with the mean. Therefore, she uses the imputer_strategy argument.

[24]:
dummy_response = client.diffprivlib_query(
    pipeline = dpl_pipeline,
    feature_columns = feature_columns,
    target_columns = target_columns,
    imputer_strategy = "mean",
    dummy = True
)

It also works. If she wanted, she could instead replace missing values with the most frequent value via imputer_strategy = "most_frequent" (which makes more sense for categorical columns), as sketched below.
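
For instance, a dummy run with the most-frequent strategy reuses the same call with a different imputer_strategy:

[ ]:
# Dummy run with the most-frequent imputation strategy (illustrative sketch).
dummy_response = client.diffprivlib_query(
    pipeline = dpl_pipeline,
    feature_columns = feature_columns,
    target_columns = target_columns,
    imputer_strategy = "most_frequent",
    dummy = True
)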

Finally, she wants to use as much data as possible to train the model, so she decides to reduce the test_size to 0.1 (meaning that 10% of the data will be used as the test set and 90% as the training set). She also modifies the seed for the random train/test split, test_train_split_seed, because why not. By default, test_size = 0.2 and test_train_split_seed = 1.

[25]:
dummy_response = client.diffprivlib_query(
    pipeline = dpl_pipeline,
    feature_columns = feature_columns,
    target_columns = target_columns,
    test_size = 0.1,
    test_train_split_seed = 4,
    imputer_strategy = "mean",
    dummy = True
)

She can now estimate the cost of this pipeline:

[26]:
res = client.estimate_diffprivlib_cost(
    dpl_pipeline,
    feature_columns = feature_columns,
    target_columns = target_columns,
    test_size = 0.1,
    test_train_split_seed = 4,
    imputer_strategy = "mean",
)
res
[26]:
{'epsilon_cost': 1.5, 'delta_cost': 0.0}
[27]:
f"The cost will be {res['epsilon_cost']} epsilon and {res['delta_cost']} delta."
[27]:
'The cost will be 1.5 epsilon and 0.0 delta.'
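
Before spending any of it on the real data, she can check how much budget she has left. A minimal sketch, assuming the client exposes a get_remaining_budget() method:

[ ]:
# Assumed client method: returns the epsilon and delta still available to the user.
client.get_remaining_budget()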

Now we train the same pipeline on the real dataset.

[28]:
res = client.diffprivlib_query(
    pipeline = dpl_pipeline,
    feature_columns = feature_columns,
    target_columns = target_columns,
    test_size = 0.1,
    test_train_split_seed = 4,
    imputer_strategy = "mean",
)
[29]:
f"The accuracy score of the model trained on real data is {res['query_response']['score']}."
[29]:
'The accuracy score of the model trained on real data is 0.22857142857142856.'

The model with its trained parameters is also available:

[30]:
model = res['query_response']['model']

We predict the species of the smallest possible penguin in all dimensions versus the biggest possible penguin in all dimensions.

[32]:
x_to_predict = pd.DataFrame({
    'bill_length_mm': [bounds[0][0], bounds[1][0]],
    'bill_depth_mm': [bounds[0][1], bounds[1][1]] ,
    'flipper_length_mm': [bounds[0][2], bounds[1][2]],
    'body_mass_g': [bounds[0][3], bounds[1][3]]
})

predictions = model.predict(x_to_predict)
x_to_predict["predictions"] = predictions
x_to_predict
[32]:
   bill_length_mm  bill_depth_mm  flipper_length_mm  body_mass_g predictions
0            30.0           13.0              150.0       2000.0   Chinstrap
1            65.0           23.0              250.0       7000.0      Gentoo
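
Since the returned model follows the scikit-learn API, she can also look at class probabilities instead of hard labels. A sketch, assuming the fitted LogisticRegression supports predict_proba as in scikit-learn:

[ ]:
# Class probabilities for the two extreme penguins (sklearn-style predict_proba assumed).
probas = model.predict_proba(x_to_predict[feature_columns])
pd.DataFrame(probas, columns=model.classes_)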

Step 5: Train other models with DiffPrivLib

The logic is always the same for all the models: the pipeline and feature_columns arguments must always be specified. The target_columns argument must be specified except for clustering (K-Means) and dimensionality reduction (PCA). Since the pattern repeats, a small helper is sketched below.
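
A small convenience helper (hypothetical, not part of lomas-client) can bundle cost estimation and querying; a sketch:

[ ]:
# Hypothetical helper: estimate the cost first, query only if it fits the given budget.
def run_if_affordable(pipeline, feature_columns, max_epsilon, target_columns=None, **kwargs):
    cost = client.estimate_diffprivlib_cost(
        pipeline,
        feature_columns = feature_columns,
        target_columns = target_columns,
        **kwargs
    )
    if cost['epsilon_cost'] > max_epsilon:
        raise ValueError(f"Pipeline costs {cost['epsilon_cost']} epsilon, above {max_epsilon}.")
    return client.diffprivlib_query(
        pipeline = pipeline,
        feature_columns = feature_columns,
        target_columns = target_columns,
        **kwargs
    )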

Here are examples of each on dummy dataframes.

Classification: Gaussian Naive Bayes

[33]:
feature_columns = ['bill_length_mm', 'bill_depth_mm', 'flipper_length_mm']
target_columns = ['species']
[34]:
bounds = get_bounds(penguin_metadata['columns'], columns=feature_columns)
[35]:
dpl_pipeline = Pipeline([
    ('scaler', models.StandardScaler(epsilon = 0.5, bounds=bounds)),
    ('gaussian', models.GaussianNB(epsilon = 1.0, bounds=bounds, priors = (0.3, 0.3, 0.4))),
])
[36]:
dummy_response = client.diffprivlib_query(
    pipeline = dpl_pipeline,
    feature_columns = feature_columns,
    target_columns = target_columns,
    test_size = 0.15,
    imputer_strategy = "median",
    dummy = True
)
[37]:
cost_res = client.estimate_diffprivlib_cost(
    dpl_pipeline,
    feature_columns = feature_columns,
    target_columns = target_columns,
    test_size = 0.15,
    imputer_strategy = "median",
)
cost_res
[37]:
{'epsilon_cost': 1.5, 'delta_cost': 0.0}
[38]:
response = client.diffprivlib_query(
    pipeline = dpl_pipeline,
    feature_columns = feature_columns,
    target_columns = target_columns,
    imputer_strategy = "median",
    test_size = 0.15,
)
[39]:
x_to_predict = pd.DataFrame({
    'bill_length_mm': [bounds[0][0], bounds[1][0]],
    'bill_depth_mm': [bounds[0][1], bounds[1][1]] ,
    'flipper_length_mm': [bounds[0][2], bounds[1][2]],
})
[40]:
predictions = response['query_response']['model'].predict(x_to_predict)
x_to_predict["predictions"] = predictions
x_to_predict
[40]:
   bill_length_mm  bill_depth_mm  flipper_length_mm predictions
0            30.0           13.0              150.0   Chinstrap
1            65.0           23.0              250.0   Chinstrap

Random Forest

[41]:
feature_columns = ['bill_length_mm', 'bill_depth_mm', 'body_mass_g']
target_columns = ['island']
[42]:
bounds = get_bounds(penguin_metadata['columns'], columns=feature_columns)
[43]:
dpl_pipeline = Pipeline([
    (
        'rf',
        models.RandomForestClassifier(
            n_estimators=10,
            epsilon = 2.0,
            bounds=bounds,
            classes=penguin_metadata['columns']['island']['categories']
        )
    ),
])
[44]:
dummy_response = client.diffprivlib_query(
    pipeline = dpl_pipeline,
    feature_columns = feature_columns,
    target_columns = target_columns,
    imputer_strategy = "drop", #default
    dummy = True
)
[45]:
cost_res = client.estimate_diffprivlib_cost(
    dpl_pipeline,
    feature_columns = feature_columns,
    target_columns = target_columns,
    imputer_strategy = "drop", #default
)
cost_res
[45]:
{'epsilon_cost': 2.0, 'delta_cost': 0.0}
[46]:
response = client.diffprivlib_query(
    pipeline = dpl_pipeline,
    feature_columns = feature_columns,
    target_columns = target_columns,
    imputer_strategy = "drop", #default
)
[47]:
model = response['query_response']['model']
[51]:
x_to_predict = pd.DataFrame({
    'bill_length_mm': [bounds[0][0], bounds[1][0]],
    'bill_depth_mm': [bounds[0][1], bounds[1][1]] ,
    'body_mass_g': [bounds[0][2], bounds[1][2]]
})
predictions = model.predict(x_to_predict)
x_to_predict["predictions"] = predictions
x_to_predict
[51]:
   bill_length_mm  bill_depth_mm  body_mass_g predictions
0            30.0           13.0       2000.0   Torgersen
1            65.0           23.0       7000.0   Torgersen

Decision Tree Classifier

[56]:
feature_columns = ['bill_length_mm', 'body_mass_g']
target_columns = ['species']
[57]:
bounds = get_bounds(penguin_metadata['columns'], columns=feature_columns)
[58]:
dpl_pipeline = Pipeline([
    (
        'dtc',
        models.DecisionTreeClassifier(
            epsilon = 2.0,
            bounds=bounds,
            classes=penguin_metadata['columns']['species']['categories']
        )
    ),
])
[59]:
dummy_response = client.diffprivlib_query(
    pipeline = dpl_pipeline,
    feature_columns = feature_columns,
    target_columns = target_columns,
    test_size = 0.2,
    test_train_split_seed = 1,
    dummy = True,
    nb_rows = 100,
    seed = 42
)
[62]:
response = client.diffprivlib_query(
    pipeline = dpl_pipeline,
    feature_columns = feature_columns,
    target_columns = target_columns,
    test_size = 0.2,
)
[63]:
model = response['query_response']['model']
[65]:
x_to_predict = pd.DataFrame({
    'bill_length_mm': [bounds[0][0], bounds[1][0]],
    'body_mass_g': [bounds[0][1], bounds[1][1]] ,
})
x_to_predict["predictions"] = model.predict(x_to_predict)
x_to_predict
[65]:
   bill_length_mm  body_mass_g predictions
0            30.0       2000.0      Adelie
1            65.0       7000.0   Chinstrap

Regression: Linear Regression

[71]:
feature_columns = ['bill_length_mm']
target_columns = ['bill_depth_mm']
[72]:
bill_length_meta = penguin_metadata['columns']['bill_length_mm']
bill_depth_meta = penguin_metadata['columns']['bill_depth_mm']
[73]:
dpl_pipeline = Pipeline([
    (
        'lr',
        models.LinearRegression(
            epsilon = 2.0,
            bounds_X=(bill_length_meta['lower'], bill_length_meta['upper']),
            bounds_y=(bill_depth_meta['lower'], bill_depth_meta['upper'])
        )
    ),
])
[74]:
dummy_response = client.diffprivlib_query(
    pipeline = dpl_pipeline,
    feature_columns = feature_columns,
    target_columns = target_columns,
    dummy = True
)
model = dummy_response['query_response']['model']
[75]:
# Dummy model predictions
x_to_predict = pd.DataFrame({
    'bill_length_mm': [bill_length_meta['lower'], bill_length_meta['upper']],
})
x_to_predict["predictions"] = model.predict(x_to_predict)
x_to_predict
[75]:
   bill_length_mm  predictions
0            30.0    17.641046
1            65.0    18.526961
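
The fitted (noisy) regression parameters can be inspected directly; a sketch, assuming diffprivlib's LinearRegression exposes coef_ and intercept_ like scikit-learn:

[ ]:
# Inspect the differentially private fitted parameters (sklearn-style attributes assumed).
lr_model = model.steps[0][1]
lr_model.coef_, lr_model.intercept_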

Clustering: K-Means

[76]:
feature_columns = ['bill_length_mm', 'bill_depth_mm', 'flipper_length_mm', 'body_mass_g']
[77]:
bounds = get_bounds(penguin_metadata['columns'], columns=feature_columns)
[79]:
dpl_pipeline = Pipeline([
    ('kmeans', models.KMeans(n_clusters = 8, epsilon = 2.0, bounds=bounds)),
])
[80]:
dummy_response = client.diffprivlib_query(
    pipeline = dpl_pipeline,
    feature_columns = feature_columns,
    dummy = True
)
model = dummy_response['query_response']['model']
model
[80]:
Pipeline(steps=[('kmeans',
                 KMeans(accountant=BudgetAccountant(spent_budget=[(2.0, 0)]),
                        bounds=(array([  30.,   13.,  150., 2000.]),
                                array([  65.,   23.,  250., 7000.])),
                        epsilon=2.0))])
[82]:
# Dummy model predictions
x_to_predict = pd.DataFrame({
    'bill_length_mm': [bounds[0][0], bounds[1][0]],
    'bill_depth_mm': [bounds[0][1], bounds[1][1]] ,
    'flipper_length_mm': [bounds[0][2], bounds[1][2]],
    'body_mass_g': [bounds[0][3], bounds[1][3]]
})
x_to_predict["predictions"] = model.predict(x_to_predict)
x_to_predict
[82]:
   bill_length_mm  bill_depth_mm  flipper_length_mm  body_mass_g  predictions
0            30.0           13.0              150.0       2000.0            5
1            65.0           23.0              250.0       7000.0            6
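
The differentially private cluster centres can be inspected as well; a sketch, assuming the sklearn-style cluster_centers_ attribute:

[ ]:
# Inspect the noisy cluster centres (sklearn-style attribute assumed).
kmeans_model = model.steps[0][1]
kmeans_model.cluster_centers_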

Dimensionality Reduction: PCA

[83]:
feature_columns = ['bill_length_mm', 'bill_depth_mm', 'flipper_length_mm', 'body_mass_g']
bounds = get_bounds(penguin_metadata['columns'], columns=feature_columns)
[112]:
dpl_pipeline = Pipeline([
    (
        'pca',
        models.PCA(
            n_components=None,
            epsilon = 1.0,
            bounds=bounds,
            data_norm=100,
            centered=False
        )
    ),
])
[113]:
dummy_response = client.diffprivlib_query(
    pipeline = dpl_pipeline,
    feature_columns = feature_columns,
    dummy = True
)
model = dummy_response['query_response']['model']
[114]:
response = client.diffprivlib_query(
    pipeline = dpl_pipeline,
    feature_columns = feature_columns,
)
model = response['query_response']['model']
[115]:
pca_model = model.steps[0][1]
pca_model
[115]:
PCA(accountant=BudgetAccountant(spent_budget=[(1.0, 0)]),
    bounds=(array([  30.,   13.,  150., 2000.]),
            array([  65.,   23.,  250., 7000.])),
    data_norm=100)
[116]:
pca_model.components_
[116]:
array([[ 0.06269805, -0.02257092,  0.12818723,  0.98950874],
       [ 0.54397209,  0.70486754, -0.4534618 ,  0.0403549 ],
       [-0.04333305,  0.56533889,  0.81874155, -0.09042376],
       [ 0.83563483, -0.42783671,  0.3280285 , -0.10520211]])
[117]:
pca_model.explained_variance_
[117]:
array([12087.46118653,  4144.29748358,  2619.4887658 ,   121.73642974])
[118]:
pca_model.explained_variance_ratio_
[118]:
array([0.63708804, 0.21843151, 0.13806414, 0.0064163 ])
[119]:
pca_model.singular_values_
[119]:
array([1789.74222011,  179.61111848, 1047.96890848,  833.1653635 ])
[120]:
pca_model.mean_
[120]:
array([  44.38324636,   16.88939657,  198.03063776, 4207.08181725])
[121]:
pca_model.n_components_
[121]:
4
[122]:
pca_model.noise_variance_
[122]:
0.0
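
The fitted PCA can also project data onto the private principal components; a sketch using the same extreme dummy penguins as before (sklearn-style transform assumed):

[ ]:
# Project two extreme penguins onto the differentially private principal components.
x_to_project = pd.DataFrame({
    'bill_length_mm': [bounds[0][0], bounds[1][0]],
    'bill_depth_mm': [bounds[0][1], bounds[1][1]],
    'flipper_length_mm': [bounds[0][2], bounds[1][2]],
    'body_mass_g': [bounds[0][3], bounds[1][3]]
})
pca_model.transform(x_to_project)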

Step 6: See archives of queries

She now wants to review all the queries that she made on the real data. This is possible because an archive of all queries is kept in a secure database. With a single function call, she can see her queries, budgets and associated responses.

[123]:
previous_queries = client.get_previous_queries()
[126]:
query_1 = previous_queries[0]
query_1
[126]:
{'user_name': 'Dr. Antartica',
 'dataset_name': 'PENGUIN',
 'dp_librairy': 'diffprivlib',
 'client_input': {'dataset_name': 'PENGUIN',
  'diffprivlib_json': '{"module": "diffprivlib", "version": "0.6.4", "pipeline": [{"type": "_dpl_type:StandardScaler", "name": "scaler", "params": {"with_mean": true, "with_std": true, "copy": true, "epsilon": 0.5, "bounds": {"_tuple": true, "_items": [[30.0, 13.0, 150.0, 2000.0], [65.0, 23.0, 250.0, 7000.0]]}, "random_state": null, "accountant": "_dpl_instance:BudgetAccountant"}}, {"type": "_dpl_type:LogisticRegression", "name": "classifier", "params": {"tol": 0.0001, "C": 1.0, "fit_intercept": true, "random_state": null, "max_iter": 100, "verbose": 0, "warm_start": false, "n_jobs": null, "epsilon": 1.0, "data_norm": 83.69469642643347, "accountant": "_dpl_instance:BudgetAccountant"}}]}',
  'feature_columns': ['bill_length_mm',
   'bill_depth_mm',
   'flipper_length_mm',
   'body_mass_g'],
  'target_columns': ['species'],
  'test_size': 0.1,
  'test_train_split_seed': 4,
  'imputer_strategy': 'mean'},
 'response': {'requested_by': 'Dr. Antartica',
  'query_response': {'score': 0.22857142857142856,
   'model': Pipeline(steps=[('scaler',
                    StandardScaler(accountant=BudgetAccountant(spent_budget=[(0.5, 0)]),
                                   bounds=(array([  30.,   13.,  150., 2000.]),
                                           array([  65.,   23.,  250., 7000.])),
                                   epsilon=0.5)),
                   ('classifier',
                    LogisticRegression(accountant=BudgetAccountant(spent_budget=[(1.0, 0)]),
                                       data_norm=83.69469642643347))])},
  'spent_epsilon': 1.5,
  'spent_delta': 0.0},
 'timestamp': 1725375748.4772062}
[127]:
query_2 = previous_queries[1]
query_2
[127]:
{'user_name': 'Dr. Antartica',
 'dataset_name': 'PENGUIN',
 'dp_librairy': 'diffprivlib',
 'client_input': {'dataset_name': 'PENGUIN',
  'diffprivlib_json': '{"module": "diffprivlib", "version": "0.6.4", "pipeline": [{"type": "_dpl_type:StandardScaler", "name": "scaler", "params": {"with_mean": true, "with_std": true, "copy": true, "epsilon": 0.5, "bounds": {"_tuple": true, "_items": [[30.0, 13.0, 150.0], [65.0, 23.0, 250.0]]}, "random_state": null, "accountant": "_dpl_instance:BudgetAccountant"}}, {"type": "_dpl_type:GaussianNB", "name": "gaussian", "params": {"priors": {"_tuple": true, "_items": [0.3, 0.3, 0.4]}, "var_smoothing": 1e-09, "epsilon": 1.0, "bounds": {"_tuple": true, "_items": [[30.0, 13.0, 150.0], [65.0, 23.0, 250.0]]}, "random_state": null, "accountant": "_dpl_instance:BudgetAccountant"}}]}',
  'feature_columns': ['bill_length_mm', 'bill_depth_mm', 'flipper_length_mm'],
  'target_columns': ['species'],
  'test_size': 0.15,
  'test_train_split_seed': 1,
  'imputer_strategy': 'median'},
 'response': {'requested_by': 'Dr. Antartica',
  'query_response': {'score': 0.17307692307692307,
   'model': Pipeline(steps=[('scaler',
                    StandardScaler(accountant=BudgetAccountant(spent_budget=[(0.5, 0)]),
                                   bounds=(array([ 30.,  13., 150.]),
                                           array([ 65.,  23., 250.])),
                                   epsilon=0.5)),
                   ('gaussian',
                    GaussianNB(accountant=BudgetAccountant(spent_budget=[(1.0, 0)]),
                               bounds=(array([ 30.,  13., 150.]),
                                       array([ 65.,  23., 250.])),
                               priors=(0.3, 0.3, 0.4)))])},
  'spent_epsilon': 1.5,
  'spent_delta': 0.0},
 'timestamp': 1725375760.2964165}
[128]:
query_3 = previous_queries[2]
query_3
[128]:
{'user_name': 'Dr. Antartica',
 'dataset_name': 'PENGUIN',
 'dp_librairy': 'diffprivlib',
 'client_input': {'dataset_name': 'PENGUIN',
  'diffprivlib_json': '{"module": "diffprivlib", "version": "0.6.4", "pipeline": [{"type": "_dpl_type:RandomForestClassifier", "name": "rf", "params": {"n_estimators": 10, "n_jobs": 1, "random_state": null, "verbose": 0, "warm_start": false, "max_depth": 5, "epsilon": 2.0, "bounds": {"_tuple": true, "_items": [[30.0, 13.0, 2000.0], [65.0, 23.0, 7000.0]]}, "classes": ["Torgersen", "Biscoe", "Dream"], "shuffle": false, "accountant": "_dpl_instance:BudgetAccountant"}}]}',
  'feature_columns': ['bill_length_mm', 'bill_depth_mm', 'body_mass_g'],
  'target_columns': ['island'],
  'test_size': 0.2,
  'test_train_split_seed': 1,
  'imputer_strategy': 'drop'},
 'response': {'requested_by': 'Dr. Antartica',
  'query_response': {'score': 0.4925373134328358,
   'model': Pipeline(steps=[('rf',
                    RandomForestClassifier(accountant=BudgetAccountant(spent_budget=[(2.0, 0)]),
                                           bounds=(array([  30.,   13., 2000.]),
                                                   array([  65.,   23., 7000.])),
                                           classes=['Torgersen', 'Biscoe',
                                                    'Dream'],
                                           epsilon=2.0))])},
  'spent_epsilon': 2.0,
  'spent_delta': 0.0},
 'timestamp': 1725375769.0791934}
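
She can also tally the total budget spent across all archived queries, using the spent_epsilon and spent_delta fields shown above; a sketch:

[ ]:
# Sum the budget spent over all archived queries.
total_epsilon = sum(q['response']['spent_epsilon'] for q in previous_queries)
total_delta = sum(q['response']['spent_delta'] for q in previous_queries)
f"Total spent: {total_epsilon} epsilon and {total_delta} delta."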