S3 example

Step 1: Install the library

To interact with the secure server on which the data is stored, one first needs to install the lomas-client library in their local development environment.

It can be installed via the pip command:

[ ]:
!pip install lomas-client
[1]:
from lomas_client.client import Client
import numpy as np

Step 2: Initialise the client

Once the library is installed, a Client object must be created. It is responsible for sending requests to the server and processing the responses in the local environment, enabling seamless interaction with the server.

To create the client, one needs to provide a few parameters:
- url: the root application endpoint of the remote secure server
- user_name: the user's name as registered in the database (Jack)
- dataset_name: the name of the dataset to query (TITANIC)

[3]:
APP_URL = "http://lomas_server"
USER_NAME = "Jack"
DATASET_NAME = "TITANIC"
client = Client(url=APP_URL, user_name = USER_NAME, dataset_name = DATASET_NAME)

Step 3: Understand the functionalities of the library

Getting dataset metadata

[4]:
titanic_metadata = client.get_dataset_metadata()
titanic_metadata
[4]:
{'max_ids': 1,
 'row_privacy': True,
 'columns': {'PassengerId': {'type': 'int', 'lower': 1},
  'Pclass': {'type': 'int', 'lower': 1, 'upper': 3},
  'Name': {'type': 'string'},
  'Sex': {'type': 'string',
   'cardinality': 2,
   'categories': ['male', 'female']},
  'Age': {'type': 'float', 'lower': 0.1, 'upper': 100.0},
  'SibSp': {'type': 'int', 'lower': 0},
  'Parch': {'type': 'int', 'lower': 0},
  'Ticket': {'type': 'string'},
  'Fare': {'type': 'float', 'lower': 0.0},
  'Cabin': {'type': 'string'},
  'Embarked': {'type': 'string',
   'cardinality': 3,
   'categories': ['C', 'Q', 'S']},
  'Survived': {'type': 'boolean'}}}
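The metadata above drives the later steps (clamping bounds, categories for encoding). As a side note, here is a minimal sketch of extracting the numeric bounds from such a dictionary; the subset below is copied from the output above:

```python
# Subset of the metadata returned by client.get_dataset_metadata() above
metadata = {
    'columns': {
        'Pclass': {'type': 'int', 'lower': 1, 'upper': 3},
        'Name': {'type': 'string'},
        'Age': {'type': 'float', 'lower': 0.1, 'upper': 100.0},
        'Survived': {'type': 'boolean'},
    }
}

# Collect (lower, upper) bounds for every numeric column
bounds = {
    name: (info.get('lower'), info.get('upper'))
    for name, info in metadata['columns'].items()
    if info['type'] in ('int', 'float')
}
print(bounds)  # {'Pclass': (1, 3), 'Age': (0.1, 100.0)}
```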

Get a dummy dataset

[5]:
NB_ROWS = 200
SEED = 0
[6]:
df_dummy = client.get_dummy_dataset(
    nb_rows = NB_ROWS,
    seed = SEED
)

print(df_dummy.shape)
df_dummy.head()
(200, 12)
[6]:
PassengerId Pclass Name Sex Age SibSp Parch Ticket Fare Cabin Embarked Survived
0 2733 1 0 male 96.160858 4078 4452 X 2421.785941 h Q False
1 9846 1 U male 23.246992 5405 6743 O 2503.982129 2 S False
2 3265 2 A male 94.936950 4611 9621 a 4833.935352 c C False
3 4860 1 q female 94.143633 8266 5630 l 399.928019 p Q False
4 9226 2 F male 79.940338 6634 2562 F 6397.051061 G C True

Query on dummy dataset

Average and number of rows with smartnoise-sql library on remote dummy

[7]:
# Average Age
QUERY = "SELECT COUNT(*) AS nb_passengers, \
        AVG(Age) AS avg_age \
        FROM df"
[8]:
# On the remote server dummy dataframe
dummy_res = client.smartnoise_query(
    query = QUERY,
    epsilon = 100.0, # select high values of epsilon and delta so that the
    delta = 2.0,     # noisy result stays close to the true dummy value
    dummy = True,
    nb_rows = NB_ROWS,
    seed = SEED
)
[9]:
print(f"Average age in remote dummy: {np.round(dummy_res['query_response']['avg_age'][0], 2)} years old.")
print(f"Number of rows in remote dummy: {np.round(dummy_res['query_response']['nb_passengers'][0], 2)}.")
Average age in remote dummy: 49.01 years old.
Number of rows in remote dummy: 199.

Get current budget

[10]:
client.get_initial_budget()
[10]:
{'initial_epsilon': 45.0, 'initial_delta': 0.2}
[11]:
client.get_total_spent_budget()
[11]:
{'total_spent_epsilon': 0.0, 'total_spent_delta': 0.0}

It will also be useful to know what the remaining budget is. For this, we call the function get_remaining_budget. It simply subtracts the total spent budget from the initial budget.

[12]:
client.get_remaining_budget()
[12]:
{'remaining_epsilon': 45.0, 'remaining_delta': 0.2}

As expected, for now the remaining budget is equal to the initial budget.

Estimate cost of a query

Another safeguard is the functionality to estimate the cost of a query. As in OpenDP and SmartNoise-SQL, the budget that will be used by a query might differ slightly from what the user requests. The estimate cost function returns the estimated real cost of any query.

Of course, estimating the cost does not impact the user’s budget.

[13]:
EPSILON = 0.5
DELTA = 1e-4
[14]:
client.estimate_smartnoise_cost(
    query = QUERY,
    epsilon = EPSILON,
    delta = DELTA
)
[14]:
{'epsilon_cost': 1.5, 'delta_cost': 0.00014999500000001387}
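The returned epsilon_cost of 1.5 is three times the requested EPSILON of 0.5. One plausible reading (an assumption about SmartNoise-SQL internals, not confirmed here) is that the query involves three noisy aggregates, each consuming the requested epsilon once: COUNT(*), plus the SUM and COUNT that make up AVG(Age):

```python
EPSILON = 0.5
# Assumption: one privacy mechanism per noisy aggregate in the query
N_MECHANISMS = 3  # COUNT(*), plus SUM(Age) and COUNT(Age) inside AVG(Age)
epsilon_cost = N_MECHANISMS * EPSILON
print(epsilon_cost)  # 1.5, matching the server's estimate above
```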

Query on real private dataset with smartnoise-sql

[15]:
client.get_remaining_budget()
[15]:
{'remaining_epsilon': 45.0, 'remaining_delta': 0.2}
[16]:
response = client.smartnoise_query(
    query = QUERY,
    epsilon = EPSILON,
    delta = DELTA,
    dummy = False # Optional
)
[17]:
nb_passengers = response['query_response']['nb_passengers'].iloc[0]
print(f"Number of passengers in real data: {nb_passengers}.")

avg_age = np.round(response['query_response']['avg_age'].iloc[0], 2)
print(f"Average age in real data: {avg_age}.")
Number of passengers in real data: 891.
Average age in real data: 30.27.

After each query on the real dataset, the spent budget information is also returned to the researcher. It is possible to check the remaining budget again afterwards:

[18]:
client.get_remaining_budget()
[18]:
{'remaining_epsilon': 43.5, 'remaining_delta': 0.199850005}

As can be seen in get_total_spent_budget(), it is the budget estimated with estimate_smartnoise_cost() that was spent.

[19]:
client.get_total_spent_budget()
[19]:
{'total_spent_epsilon': 1.5, 'total_spent_delta': 0.00014999500000001387}
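The three budget dictionaries are consistent with each other: the remaining budget is simply the initial budget minus the total spent budget. This can be checked by hand with the values returned above:

```python
# Values copied from the outputs of get_initial_budget() and
# get_total_spent_budget() above
initial = {'initial_epsilon': 45.0, 'initial_delta': 0.2}
spent = {'total_spent_epsilon': 1.5, 'total_spent_delta': 0.00014999500000001387}

remaining_epsilon = initial['initial_epsilon'] - spent['total_spent_epsilon']
remaining_delta = initial['initial_delta'] - spent['total_spent_delta']
print(remaining_epsilon)           # 43.5
print(round(remaining_delta, 9))   # 0.199850005
```

This matches the output of get_remaining_budget() after the query.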

Step 4: Titanic statistics with opendp

[20]:
import opendp as dp
import opendp.transformations as trans
import opendp.measurements as meas

Confidence intervals for age over the whole population

[21]:
titanic_metadata
[21]:
{'max_ids': 1,
 'row_privacy': True,
 'columns': {'PassengerId': {'type': 'int', 'lower': 1},
  'Pclass': {'type': 'int', 'lower': 1, 'upper': 3},
  'Name': {'type': 'string'},
  'Sex': {'type': 'string',
   'cardinality': 2,
   'categories': ['male', 'female']},
  'Age': {'type': 'float', 'lower': 0.1, 'upper': 100.0},
  'SibSp': {'type': 'int', 'lower': 0},
  'Parch': {'type': 'int', 'lower': 0},
  'Ticket': {'type': 'string'},
  'Fare': {'type': 'float', 'lower': 0.0},
  'Cabin': {'type': 'string'},
  'Embarked': {'type': 'string',
   'cardinality': 3,
   'categories': ['C', 'Q', 'S']},
  'Survived': {'type': 'boolean'}}}
[22]:
columns = ["PassengerId", "Pclass", "Name", "Sex", "Age", "SibSp", "Parch"]
[23]:
age_min = titanic_metadata['columns']['Age']['lower']
age_max = titanic_metadata['columns']['Age']['upper']
age_min, age_max
[23]:
(0.1, 100.0)
[24]:
age_transformation_pipeline = (
    trans.make_split_dataframe(separator=",", col_names=columns) >>
    trans.make_select_column(key="Age", TOA=str) >>
    trans.then_cast_default(TOA=float) >>
    trans.then_clamp(bounds=(age_min, age_max)) >>
    trans.then_resize(size=nb_passengers.tolist(), constant=avg_age) >>
    trans.then_variance()
)
[25]:
# Expect to fail !!!
client.opendp_query(
    opendp_pipeline = age_transformation_pipeline,
    dummy=True
)
Server error status 400: {"InvalidQueryException":"The pipeline provided is not a measurement. It cannot be processed in this server."}

This is because the server only allows measurement pipelines that produce differentially private results. We add Laplace noise to the pipeline, which turns it into a measurement the server can process.

[26]:
var_age_transformation_pipeline = (
    age_transformation_pipeline >>
    meas.then_laplace(scale=5.0)
)

Now that the pipeline ends in a measurement, it can be applied to the server’s dummy dataset.

[27]:
dummy_var_res = client.opendp_query(
    opendp_pipeline = var_age_transformation_pipeline,
    dummy=True
)
print(f"Dummy result for variance: {np.round(dummy_var_res['query_response'], 2)}")
Dummy result for variance: 133.38

With opendp, the function estimate_opendp_cost is particularly useful to estimate the epsilon and delta consumed by a pipeline based on the scale value.

[28]:
cost_res = client.estimate_opendp_cost(
    opendp_pipeline = var_age_transformation_pipeline
)
cost_res
[28]:
{'epsilon_cost': 2.240181818190626, 'delta_cost': 0}

One can now execute the query on the real dataset.

[29]:
var_res = client.opendp_query(
    opendp_pipeline = var_age_transformation_pipeline,
)
[30]:
print(f"Number of passengers: {nb_passengers} (from previous smartnoise-sql query).")

print(f"Average age: {np.round(avg_age, 2)} (from previous smartnoise-sql query).")

var_age = var_res['query_response']
print(f"Variance of age: {np.round(var_age, 3)} (from opendp query).")
Number of passengers: 891 (from previous smartnoise-sql query).
Average age: 30.27 (from previous smartnoise-sql query).
Variance of age: -4.3 (from opendp query).
[31]:
# Get standard error
standard_error = np.sqrt(var_age/nb_passengers)
print(f"Standard error of age: {np.round(standard_error, 2)}.")
Standard error of age: nan.
/tmp/ipykernel_1062/2464255812.py:2: RuntimeWarning: invalid value encountered in sqrt
  standard_error = np.sqrt(var_age/nb_passengers)
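The noisy variance came back negative, so the square root produced a NaN. One pragmatic workaround (a sketch, not part of the lomas API) is to clamp the noisy estimate at zero before taking the root; a better fix is to re-run the query with a smaller Laplace scale, at a higher privacy cost:

```python
import numpy as np

var_age = -4.3      # noisy variance from the opendp query above
nb_passengers = 891

# Laplace noise can push a small true variance below zero,
# and a negative variance carries no usable signal: clamp it at 0
safe_var = max(var_age, 0.0)
standard_error = float(np.sqrt(safe_var / nb_passengers))
print(standard_error)  # 0.0 instead of nan
```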
[32]:
 # Compute the 95% confidence interval
ZSCORE = 1.96
lower_bound = np.round(avg_age - ZSCORE*standard_error, 2)
upper_bound = np.round(avg_age + ZSCORE*standard_error, 2)
print(f"The 95% confidence interval of the age of all passengers is [{lower_bound}, {upper_bound}].")
The 95% confidence interval of the age of all passengers is [nan, nan].