Road Test

[1]:
import emat, os, numpy, pandas, functools, asyncio
emat.versions()
emat 0.5.2, plotly 4.14.3
[2]:
logger = emat.util.loggers.log_to_stderr(30, True)

Defining the Exploratory Scope

The model scope is defined in a YAML file. For this Road Test example, the scope file is named road_test.yaml and is included in the model/tests directory.

[3]:
road_test_scope_file = emat.package_file('model','tests','road_test.yaml')

The filename for the YAML file is the first argument when creating a Scope object, which will load and process the content of the file.

[4]:
road_scope = emat.Scope(road_test_scope_file)
road_scope
[4]:
<emat.Scope with 2 constants, 7 uncertainties, 4 levers, 7 measures>

A short summary of the scope can be reviewed using the info method.

[5]:
road_scope.info()
name: EMAT Road Test
desc: prototype run
constants:
  free_flow_time = 60
  initial_capacity = 100
uncertainties:
  alpha = 0.1 to 0.2
  beta = 3.5 to 5.5
  input_flow = 80 to 150
  value_of_time = 0.001 to 0.25
  unit_cost_expansion = 95.0 to 145.0
  interest_rate = 0.025 to 0.04
  yield_curve = -0.0025 to 0.02
levers:
  expand_capacity = 0.0 to 100.0
  amortization_period = 15 to 50
  debt_type = categorical
  interest_rate_lock = boolean
measures:
  no_build_travel_time
  build_travel_time
  time_savings
  value_of_time_savings
  net_benefits
  cost_of_capacity_expansion
  present_cost_expansion

Alternatively, more detailed information about each part of the scope can be accessed through four methods, each of which returns a list:

[6]:
road_scope.get_constants()
[6]:
[Constant('free_flow_time', 60), Constant('initial_capacity', 100)]
[7]:
road_scope.get_uncertainties()
[7]:
[<emat.RealParameter 'alpha'>,
 <emat.RealParameter 'beta'>,
 <emat.IntegerParameter 'input_flow'>,
 <emat.RealParameter 'value_of_time'>,
 <emat.RealParameter 'unit_cost_expansion'>,
 <emat.RealParameter 'interest_rate'>,
 <emat.RealParameter 'yield_curve'>]
[8]:
road_scope.get_levers()
[8]:
[<emat.RealParameter 'expand_capacity'>,
 <emat.IntegerParameter 'amortization_period'>,
 <emat.CategoricalParameter 'debt_type'>,
 <emat.BooleanParameter 'interest_rate_lock'>]
[9]:
road_scope.get_measures()
[9]:
[Measure('no_build_travel_time'),
 Measure('build_travel_time'),
 Measure('time_savings'),
 Measure('value_of_time_savings'),
 Measure('net_benefits'),
 Measure('cost_of_capacity_expansion'),
 Measure('present_cost_expansion')]

Using a Database

The exploratory modeling process will typically generate many different sets of outputs, for different explored modeling scopes, or for different applications. It is convenient to organize these outputs in a database structure, so they are stored consistently and readily available for subsequent analysis.

The SQLiteDB object will create a database to store results. When instantiated with no arguments, the database is initialized in-memory and nothing is written to disk. In this example we pass the special value 'tempfile', which stores the database in a temporary file on disk; in practice you will generally want to give a persistent file path, so that the data remains available after this Python session ends.
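For example, to keep results in a file that persists across sessions, you could pass a regular file path instead (the filename below is only illustrative):

# a persistent on-disk database; any writable file path works
emat_db_on_disk = emat.SQLiteDB('road_test_results.sqlitedb')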

[10]:
emat_db = emat.SQLiteDB('tempfile')

An EMAT Scope can be stored in the database, to provide needed information about what the various inputs and outputs represent.

[11]:
road_scope.store_scope(emat_db)

Trying to store another scope with the same name (or the same scope) raises a KeyError.

[12]:
try:
    road_scope.store_scope(emat_db)
except KeyError as err:
    print(err)
'scope named "EMAT Road Test" already exists'

We can review the names of scopes already stored in the database using the read_scope_names method.

[13]:
emat_db.read_scope_names()
[13]:
['EMAT Road Test']

Experimental Design

Actually running the model can be done by the user on an ad hoc basis (i.e., manually defining every combination of inputs that will be evaluated), but the real power of EMAT comes from running the model using algorithmically generated experimental designs.

An important experimental design used in exploratory modeling is the Latin Hypercube. This design selects a random set of experiments across multiple input dimensions, to ensure “good” coverage of the multi-dimensional modeling space.

The design_experiments function creates such a design based on a Scope (here using the 'lhs' sampler for a Latin Hypercube), and optionally stores the design of experiments in a database.

[14]:
from emat.experiment.experimental_design import design_experiments
[15]:
design = design_experiments(road_scope, db=emat_db, n_samples_per_factor=10, sampler='lhs')
design.head()
[15]:
alpha amortization_period beta debt_type expand_capacity input_flow interest_rate interest_rate_lock unit_cost_expansion value_of_time yield_curve free_flow_time initial_capacity
experiment
1 0.184682 38 5.237143 Rev Bond 18.224793 115 0.031645 False 118.213466 0.059510 0.015659 60 100
2 0.166133 36 4.121963 Paygo 87.525790 129 0.037612 True 141.322696 0.107772 0.007307 60 100
3 0.198937 44 4.719838 GO Bond 45.698048 105 0.028445 False 97.783320 0.040879 -0.001545 60 100
4 0.158758 42 4.915816 GO Bond 51.297546 113 0.036234 True 127.224901 0.182517 0.004342 60 100
5 0.157671 42 3.845952 Paygo 22.824149 133 0.039257 False 107.820482 0.067102 0.001558 60 100
[16]:
large_design = design_experiments(road_scope, db=emat_db, n_samples=5000, sampler='lhs', design_name='lhs_large')
large_design.head()
[16]:
alpha amortization_period beta debt_type expand_capacity input_flow interest_rate interest_rate_lock unit_cost_expansion value_of_time yield_curve free_flow_time initial_capacity
experiment
111 0.154130 21 5.061648 Rev Bond 75.542217 112 0.029885 True 124.452736 0.056335 0.001425 60 100
112 0.148731 29 4.088663 Rev Bond 91.184595 145 0.028659 False 131.688623 0.051854 0.007850 60 100
113 0.124027 34 3.956884 Paygo 60.436585 80 0.038101 False 95.462532 0.045672 0.011101 60 100
114 0.129724 41 4.969628 Paygo 74.271040 139 0.029665 False 98.206495 0.044312 0.010072 60 100
115 0.185723 22 4.485432 Paygo 61.084166 95 0.039195 False 140.792308 0.144629 0.019277 60 100

We can review what experimental designs have already been stored in the database using the read_design_names method of the Database object.

[17]:
emat_db.read_design_names('EMAT Road Test')
[17]:
['lhs', 'lhs_large']

Core Model in Python

Up until this point, we have been considering a model in the abstract, defining in the Scope what the inputs and outputs will be, and designing some experiments we would like to run with the model. Now we will actually interface with the model itself.

Model Definition

In the simplest approach for EMAT, a model can be defined as a basic Python function that accepts all inputs (exogenous uncertainties, policy levers, and externally defined constants) as named keyword arguments, and returns a dictionary whose keys are the names of performance measures and whose values are the computed values for those measures. The Road_Capacity_Investment function provided with EMAT is an example of such a function. This made-up example considers investment in capacity expansion for a single roadway link. The inputs to this function are described above in the Scope: uncertain parameters in the volume-delay function, traffic volumes, the value of travel time savings, unit construction costs, and interest rates, plus policy levers including the amount of capacity expansion and the amortization period.
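To make the expected interface concrete, here is a minimal sketch of a core-model function in this style. The parameter subset and formulas are illustrative assumptions only, not the actual Road_Capacity_Investment implementation:

def illustrative_core_model(
    free_flow_time=60, initial_capacity=100,   # constants
    input_flow=100, alpha=0.15, beta=4.0,      # a subset of the uncertainties
    expand_capacity=0.0,                        # a subset of the levers
    **other_inputs,                             # any remaining scope inputs
):
    # toy volume-delay calculations, for illustration only
    no_build = free_flow_time * (1 + alpha * (input_flow / initial_capacity) ** beta)
    build = free_flow_time * (1 + alpha * (input_flow / (initial_capacity + expand_capacity)) ** beta)
    # performance measures are returned as a dictionary keyed by measure name
    return {
        'no_build_travel_time': no_build,
        'build_travel_time': build,
        'time_savings': no_build - build,
    }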

[18]:
from emat.model.core_python import PythonCoreModel, Road_Capacity_Investment

The PythonCoreModel object provides an interface that links the basic Python function that represents the model, the Scope, and optionally the Database used to manage data storage.

[19]:
m = PythonCoreModel(Road_Capacity_Investment, scope=road_scope, db=emat_db)

From the PythonCoreModel, which links the model, scope, and database, we can run the design of experiments. This will systematically run the core model with each set of input parameters in the design, store the results in the database, and return a pandas.DataFrame containing the results.

Model Execution

[20]:
lhs_results = m.run_experiments(design_name='lhs')
lhs_results.head()
[20]:
alpha beta input_flow value_of_time unit_cost_expansion interest_rate yield_curve expand_capacity amortization_period debt_type interest_rate_lock free_flow_time initial_capacity no_build_travel_time build_travel_time time_savings value_of_time_savings net_benefits cost_of_capacity_expansion present_cost_expansion
experiment
1 0.184682 5.237143 115 0.059510 118.213466 0.031645 0.015659 18.224793 38 Rev Bond False 60 100 83.038716 69.586789 13.451927 92.059972 -22.290905 114.350877 2154.415985
2 0.166133 4.121963 129 0.107772 141.322696 0.037612 0.007307 87.525790 36 Paygo True 60 100 88.474313 62.132583 26.341730 366.219659 -16.843014 383.062672 12369.380535
3 0.198937 4.719838 105 0.040879 97.783320 0.028445 -0.001545 45.698048 44 GO Bond False 60 100 75.027180 62.543328 12.483852 53.584943 -113.988412 167.573355 4468.506839
4 0.158758 4.915816 113 0.182517 127.224901 0.036234 0.004342 51.297546 42 GO Bond True 60 100 77.370428 62.268768 15.101660 311.462907 11.539561 299.923347 6526.325171
5 0.157671 3.845952 133 0.067102 107.820482 0.039257 0.001558 22.824149 42 Paygo False 60 100 88.328990 72.848428 15.480561 138.156464 78.036616 60.119848 2460.910705

If running a large number of experiments, it may be valuable to parallelize the processing using a DistributedEvaluator instead of the default SequentialEvaluator. The DistributedEvaluator uses dask.distributed to distribute the workload to a cluster of processes, which can all be on the same machine or spread over multiple networked computers. (The details of using dask.distributed in more complex environments are beyond the scope of this example, but interested users can refer to that package's documentation.)

[21]:
lhs_large_async = m.async_experiments(large_design, max_n_workers=8, batch_size=157)
[22]:
lhs_large_results = await lhs_large_async.final_results()

Once a particular design has been run, the results can be recovered from the database without re-running the model itself.

[23]:
reload_results = m.read_experiments(design_name='lhs')
reload_results.head()
[23]:
free_flow_time initial_capacity alpha beta input_flow value_of_time unit_cost_expansion interest_rate yield_curve expand_capacity amortization_period debt_type interest_rate_lock no_build_travel_time build_travel_time time_savings value_of_time_savings net_benefits cost_of_capacity_expansion present_cost_expansion
experiment
1 60 100 0.184682 5.237143 115 0.059510 118.213466 0.031645 0.015659 18.224793 38 Rev Bond False 83.038716 69.586789 13.451927 92.059972 -22.290905 114.350877 2154.415985
2 60 100 0.166133 4.121963 129 0.107772 141.322696 0.037612 0.007307 87.525790 36 Paygo True 88.474313 62.132583 26.341730 366.219659 -16.843014 383.062672 12369.380535
3 60 100 0.198937 4.719838 105 0.040879 97.783320 0.028445 -0.001545 45.698048 44 GO Bond False 75.027180 62.543328 12.483852 53.584943 -113.988412 167.573355 4468.506839
4 60 100 0.158758 4.915816 113 0.182517 127.224901 0.036234 0.004342 51.297546 42 GO Bond True 77.370428 62.268768 15.101660 311.462907 11.539561 299.923347 6526.325171
5 60 100 0.157671 3.845952 133 0.067102 107.820482 0.039257 0.001558 22.824149 42 Paygo False 88.328990 72.848428 15.480561 138.156464 78.036616 60.119848 2460.910705

It is also possible to load only the parameters, or only the performance measures.

[24]:
lhs_params = m.read_experiment_parameters(design_name='lhs')
lhs_params.head()
[24]:
free_flow_time initial_capacity alpha beta input_flow value_of_time unit_cost_expansion interest_rate yield_curve expand_capacity amortization_period debt_type interest_rate_lock
experiment
1 60 100 0.184682 5.237143 115 0.059510 118.213466 0.031645 0.015659 18.224793 38 Rev Bond False
2 60 100 0.166133 4.121963 129 0.107772 141.322696 0.037612 0.007307 87.525790 36 Paygo True
3 60 100 0.198937 4.719838 105 0.040879 97.783320 0.028445 -0.001545 45.698048 44 GO Bond False
4 60 100 0.158758 4.915816 113 0.182517 127.224901 0.036234 0.004342 51.297546 42 GO Bond True
5 60 100 0.157671 3.845952 133 0.067102 107.820482 0.039257 0.001558 22.824149 42 Paygo False
[25]:
lhs_outcomes = m.read_experiment_measures(design_name='lhs')
lhs_outcomes.head()
[25]:
no_build_travel_time build_travel_time time_savings value_of_time_savings net_benefits cost_of_capacity_expansion present_cost_expansion
experiment run
1 fdeef8a8-be52-11eb-a248-acde48001122 83.038716 69.586789 13.451927 92.059972 -22.290905 114.350877 2154.415985
2 fdf0e65e-be52-11eb-a248-acde48001122 88.474313 62.132583 26.341730 366.219659 -16.843014 383.062672 12369.380535
3 fdf254b2-be52-11eb-a248-acde48001122 75.027180 62.543328 12.483852 53.584943 -113.988412 167.573355 4468.506839
4 fdf3ac86-be52-11eb-a248-acde48001122 77.370428 62.268768 15.101660 311.462907 11.539561 299.923347 6526.325171
5 fdf525a2-be52-11eb-a248-acde48001122 88.328990 72.848428 15.480561 138.156464 78.036616 60.119848 2460.910705

Feature Scoring

Feature scoring estimates the relative importance of each model input for each performance measure, giving a quick overview of which uncertainties and levers most strongly drive each output.

[26]:
m.get_feature_scores('lhs')
[26]:
alpha beta input_flow value_of_time unit_cost_expansion interest_rate yield_curve expand_capacity amortization_period debt_type interest_rate_lock
no_build_travel_time 0.075444 0.060221 0.540386 0.039323 0.037390 0.046159 0.040861 0.051164 0.049716 0.030621 0.028717
build_travel_time 0.055973 0.049988 0.261898 0.057534 0.052644 0.050073 0.053859 0.318194 0.035832 0.035718 0.028288
time_savings 0.092528 0.079272 0.443769 0.049315 0.041853 0.049267 0.051544 0.078433 0.046920 0.034810 0.032290
value_of_time_savings 0.088381 0.066305 0.313696 0.180042 0.040443 0.059434 0.061504 0.068176 0.053497 0.036622 0.031900
net_benefits 0.074451 0.055493 0.277536 0.096591 0.058608 0.049967 0.062057 0.179746 0.076616 0.043170 0.025766
cost_of_capacity_expansion 0.033834 0.037829 0.037349 0.036240 0.079136 0.045176 0.037441 0.523266 0.105854 0.039362 0.024514
present_cost_expansion 0.032350 0.031131 0.031986 0.036581 0.089316 0.044133 0.030549 0.627419 0.031182 0.024638 0.020715

Visualization

[27]:
from emat.analysis import display_experiments
display_experiments(road_scope, lhs_results, rows=['time_savings', 'net_benefits', 'input_flow'])

[Figures: distributions of Time Savings, Net Benefits, and Input Flow across the experiments]

Scenario Discovery

Scenario discovery in exploratory modeling is focused on finding scenarios that are interesting to the user.
The process generally begins with identifying particular outcomes that are “of interest”; the discovery process then seeks out which factor, or combination of factors, can result in those outcomes.

There are a variety of methods to use for scenario discovery. We illustrate a few here.

PRIM

Patient rule induction method (PRIM) is an algorithm that operates on a set of data with inputs and outputs.
It is used for locating areas of an outcome space that are of particular interest, which it does by reducing the data size incrementally by small amounts in an iterative process as follows:
  • Candidate boxes are generated. These boxes represent incrementally smaller sets of the data. Each box removes a portion of the data based on the levels of a single input variable.
    • For categorical input variables, there is one box generated for each category with each box removing one category from the data set.
    • For integer and continuous variables, two boxes are generated – one box that removes a portion of data representing the smallest set of values for that input variable and another box that removes a portion of data representing the largest set of values for that input. The step size for these variables is controlled by the analyst.
  • For each candidate box, the relative improvement in the number of outcomes of interest inside the box is calculated and the candidate box with the highest improvement is selected.
  • The data in the selected candidate box replaces the starting data and the process is repeated.

The process ends based on a stopping criterion. For more details on the algorithm, see Friedman and Fisher (1999) or Kwakkel and Jaxa-Rozen (2016).

The PRIM algorithm is particularly useful for scenario discovery, which broadly is the process of identifying particular scenarios of interest in a large and deeply uncertain dataset.
In the context of exploratory modeling, scenario discovery is often used to obtain a better understanding of the areas of the uncertainty space where a policy or collection of policies performs poorly; it is frequently applied in tandem with robust search methods for identifying policies that perform well (Kwakkel and Jaxa-Rozen, 2016).

[28]:
from emat.analysis.prim import Prim
[29]:
of_interest = lhs_large_results['net_benefits']>0
[30]:
discovery = Prim(
    m.read_experiment_parameters(design_name='lhs_large'),
    of_interest,
    scope=road_scope,
)
[31]:
box1 = discovery.find_box()
[32]:
box1.tradeoff_selector()
[33]:
box1.inspect(45)
coverage    0.489606
density     0.871173
id                45
mass          0.1568
mean        0.871173
res_dim            5
Name: 45, dtype: object

                         box 45
                            min         max                       qp values
expand_capacity        0.000672   95.018708     [-1.0, 0.06797686844207428]
input_flow           128.000000  150.000000  [4.181142828811809e-168, -1.0]
value_of_time          0.077057    0.236016  [2.2257439851081843e-37, -1.0]
beta                   3.597806    5.499712      [0.1926809470209529, -1.0]
amortization_period   17.000000   50.000000     [0.20016307300821792, -1.0]

[34]:
box1.select(45)
[35]:
box1.splom()

CART

Classification and Regression Trees (CART) can also be used for scenario discovery. They partition the explored space (i.e., the scope) into a number of sections, with each partition being added in such a way as to maximize the difference between observations on each side of the newly added partition divider, subject to some constraints.

[36]:
from emat.workbench.analysis import cart

cart_alg = cart.CART(
    m.read_experiment_parameters(design_name='lhs_large'),
    of_interest,
)
cart_alg.build_tree()
[37]:
from emat.util.xmle import Show
Show(cart_alg.show_tree(format='svg'))
[37]:
[Figure: CART decision tree. The root node splits on input_flow <= 122.5; subsequent splits use input_flow, value_of_time, and expand_capacity thresholds, with each node reporting its gini impurity, sample count, and class counts.]
[38]:
cart_alg.boxes_to_dataframe(include_stats=True)
[38]:
Box Statistics expand_capacity input_flow value_of_time
coverage density gini entropy res dim mass min max min max min max
box 0 0.027957 0.150579 0.255810 0.611288 2 0.0518 NaN 10.954633 NaN 112.5 NaN NaN
box 1 0.015054 0.057065 0.107618 0.315683 2 0.0736 10.954633 26.29232 NaN 112.5 NaN NaN
box 2 0.000000 0.000000 0.000000 0.000000 3 0.2522 26.29232 NaN NaN 112.5 NaN 0.124238
box 3 0.003584 0.011468 0.022673 0.090374 3 0.0872 26.29232 NaN NaN 112.5 0.124238 NaN
box 4 0.085305 0.449057 0.494810 0.992499 2 0.0530 NaN 41.220825 112.5 122.5 NaN NaN
box 5 0.032975 0.104784 0.187608 0.483978 2 0.0878 41.220825 NaN 112.5 122.5 NaN NaN
box 6 0.083154 0.442748 0.493444 0.990522 3 0.0524 NaN 42.598623 122.5 NaN NaN 0.061168
box 7 0.025806 0.109091 0.194380 0.497168 3 0.0660 42.598623 NaN 122.5 NaN NaN 0.061168
box 8 0.160573 0.797153 0.323400 0.727586 3 0.0562 NaN 55.393572 122.5 NaN 0.061168 0.096174
box 9 0.330466 0.954451 0.086948 0.267178 3 0.0966 NaN 55.393572 122.5 NaN 0.096174 NaN
box 10 0.075269 0.335463 0.445855 0.920411 3 0.0626 55.393572 NaN 122.5 136.5 0.061168 NaN
box 11 0.159857 0.735974 0.388633 0.832762 3 0.0606 55.393572 NaN 136.5 NaN 0.061168 NaN

Creating a Meta-Model

Creating a meta-model requires an existing model, plus a set of experiments (including inputs and outputs) run with that model to support the estimation of the meta-model. After completing sufficient initial runs of the core model, instantiating a meta-model is as simple as calling a create_metamodel_* method on the core model: either giving the name of a design of experiments stored in the database with create_metamodel_from_design, or passing the experimental results directly with create_metamodel_from_data.
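As a rough sketch of the second option (the argument order below is an assumption for illustration; check the method's signature in your installed version), the parameter and measure DataFrames read earlier could be passed directly:

# sketch only: assumes the method accepts the experiment inputs and outputs
# as DataFrames, analogous to create_metamodel_from_design
mm_from_data = m.create_metamodel_from_data(
    lhs_params,    # experiment parameters read earlier
    lhs_outcomes,  # experiment performance measures read earlier
)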

[46]:
mm = m.create_metamodel_from_design('lhs', suppress_converge_warnings=True)
mm
[46]:
<emat.PythonCoreModel "MetaModel1", metamodel_id=1 with 2 constants, 7 uncertainties, 4 levers, 7 measures>

To demonstrate the performance of the meta-model, we can create an alternate design of experiments. Note that to get different random values, we set the random_seed argument to something other than the default value.

[47]:
design2 = design_experiments(
    scope=road_scope,
    db=emat_db,
    n_samples_per_factor=10,
    sampler='lhs',
    random_seed=2,
)
[48]:
design2_results = mm.run_experiments(design2)
design2_results.head()
[48]:
alpha amortization_period beta debt_type expand_capacity input_flow interest_rate interest_rate_lock unit_cost_expansion value_of_time yield_curve free_flow_time initial_capacity no_build_travel_time build_travel_time time_savings value_of_time_savings net_benefits cost_of_capacity_expansion present_cost_expansion
experiment
5111 0.113526 48 5.344274 GO Bond 75.218326 142 0.033177 True 131.820315 0.087901 0.000400 60 100 101.668054 62.564553 38.393643 471.987078 299.170876 359.487258 9370.723107
5112 0.112148 29 4.579477 Rev Bond 24.154443 132 0.034554 False 133.709154 0.058116 0.005657 60 100 85.010536 69.713683 15.319664 115.468367 -202.212472 187.823125 3009.086017
5113 0.119548 32 4.353462 Rev Bond 84.003562 148 0.032211 False 95.650097 0.040388 0.013589 60 100 102.269425 63.416434 49.326861 377.043662 -189.023127 660.736448 10329.009607
5114 0.102516 42 4.490051 GO Bond 51.580600 150 0.028362 False 136.919028 0.204260 -0.001315 60 100 105.688620 66.439000 29.733347 1775.057285 659.634046 336.803230 7424.674085
5115 0.140831 17 5.024638 Rev Bond 74.351908 120 0.033885 False 115.270765 0.067252 0.009138 60 100 79.985019 61.185461 20.650588 160.949226 -412.348160 703.127668 8241.631060
[49]:
mm.cross_val_scores()
[49]:
Cross Validation Score
no_build_travel_time 0.9911
build_travel_time 0.9742
time_savings 0.8954
value_of_time_savings 0.8640
net_benefits 0.6401
cost_of_capacity_expansion 0.9194
present_cost_expansion 0.8858

Compare Core vs Meta Model Results

We can generate a variety of plots to compare the distribution of meta-model outcomes on the new design against the original model’s results.

[50]:
from emat.analysis import contrast_experiments
contrast_experiments(road_scope, lhs_results, design2_results)

[Figures: contrast plots comparing core model and meta-model distributions for No Build Time, Build Time, Time Savings, Value Time Save, Net Benefits, Cost of Expand, and Present Cost]