
How to Use

Install

To use Tulipa, you first need to install the open-source Julia programming language.

Then consider installing a user-friendly code editor, such as VSCode. Otherwise you will be running from the terminal/command prompt.

Starting Julia

Choose one:

  • In VSCode: Press CTRL+Shift+P, select Julia: Start REPL, and press Enter.
  • In the terminal: Type julia and press Enter.

Adding TulipaEnergyModel

In Julia:

  • Enter package mode (press "]")
pkg> add TulipaEnergyModel
  • Return to Julia mode (backspace)
julia> using TulipaEnergyModel

(Optional) Running automatic tests

It is nice to check that tests are passing to make sure your environment is working. (This takes a minute or two.)

  • Enter package mode (press "]")
pkg> test TulipaEnergyModel

All tests should pass.

Running a Scenario

To run a scenario, use the run_scenario function:
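
A minimal sketch of the call (the output_folder keyword name is taken from the description below and the connection setup from the tutorials; treat the exact keyword as an assumption):

# `connection` is a DuckDB connection that already holds the input data, loaded via TulipaIO
energy_problem = run_scenario(connection)
# or, to also export the results to disk:
energy_problem = run_scenario(connection; output_folder = "outputs")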

The connection should have been created and the data loaded into it using TulipaIO. See the tutorials for a complete guide on how to achieve this. The output_folder is optional; it is only needed if the user wants to export the output.

Input

Currently, we only accept input from CSV files that follow the Schemas. You can also check the test/inputs folder for examples.

CSV Files

Below, we have a description of the files. At the end, in Schemas, we have the expected columns in these CSVs.

Tip: If you modify CSV files and want to see your modifications, the normal git diff command will not be informative. Instead, you can use

git diff --word-diff-regex="[^[:space:],]+"

to make git treat the , as word separators. You can also compare two CSV files with

git diff --no-index --word-diff-regex="[^[:space:],]+" file1 file2

graph-assets-data.csv

This file contains the list of assets and the static data associated with each of them.

The meaning of Missing data depends on the parameter, for instance:

  • group: No group assigned to the asset.

graph-flows-data.csv

The same as graph-assets-data.csv, but for flows. Each flow is defined as a pair of assets.

assets-data.csv

This file contains the yearly data of each asset.

The investment parameters are as follows:

  • The investable parameter determines whether there is an investment decision for the asset or flow.
  • The investment_integer parameter determines if the investment decision is integer or continuous.
  • The investment_cost parameter represents the cost in the defined timeframe. Thus, if the timeframe is a year, the investment cost is the annualized cost of the asset.
  • The investment_limit parameter limits the total investment capacity of the asset or flow. This limit represents the potential of that particular asset or flow. Without data in this parameter, the model assumes no investment limit.

The meaning of Missing data depends on the parameter, for instance:

  • investment_limit: There is no investment limit.
  • initial_storage_level: The initial storage level is free (between the storage level limits), meaning that the optimization problem decides the best starting point for the storage asset. In addition, the first and last time blocks in a representative period are linked to create continuity in the storage level.

flows-data.csv

The same as assets-data.csv, but for flows. Each flow is defined as a pair of assets.

The meaning of Missing data depends on the parameter, for instance:

  • investment_limit: There is no investment limit.

assets-profiles.csv

These files contain information about assets and their associated profiles. Each row lists an asset, the type of profile (e.g., availability, demand, maximum or minimum storage level), and the profile's name. These profiles are used in the intra-temporal constraints.

flows-profiles.csv

This file contains information about flows and their representative period profiles for intra-temporal constraints. Each flow is defined as a pair of assets.

rep-periods-data.csv

Describes the representative periods by their unique ID, the number of timesteps per representative period, and the resolution per timestep. Note that in the test files the resolution units are given as hours for understandability, but the resolution is technically unitless.

rep-periods-mapping.csv

Describes the periods of the timeframe that map into a representative period and the weight of the representative periods that construct a period. Note that each weight is a decimal between 0 and 1, and that the sum of weights for a given period must also be between 0 and 1 (but does not have to equal 1).

profiles-rep-periods.csv

Defines all the profiles for the representative periods. The profile_name is a unique identifier, the timestep and value define the profile, and the rep_period field identifies the representative period.

The profiles are linked to assets and flows in the files assets-profiles, assets-timeframe-profiles, and flows-profiles.

assets-timeframe-profiles.csv

Like the assets-profiles.csv, but for the inter-temporal constraints.

groups-data.csv (optional)

This file contains the list of groups and the methods that apply to each group, along with their respective parameters.

profiles-timeframe.csv (optional)

Defines all the profiles for the timeframe. This is similar to profiles-rep-periods.csv, except that it doesn't have a rep_period field. If this file is not passed, default values are used in the timeframe constraints.

assets-rep-periods-partitions.csv (optional)

Contains a description of the partition for each asset with respect to representative periods. If not specified, each asset will have the same time resolution as the representative period, which is hourly by default.

There are currently three ways to specify the desired resolution, indicated in the column specification. The column partition serves to define the partitions in the specified style.

  • specification = uniform: Set the resolution to a uniform amount, i.e., a time block is made of X timesteps. The number X is defined in the column partition. The number of timesteps in the representative period must be divisible by X.
  • specification = explicit: Set the resolution according to a list of numbers separated by ; on the partition. Each number in the list is the number of timesteps for that time block. For instance, 2;3;4 means that there are three time blocks, the first has 2 timesteps, the second has 3 timesteps, and the last has 4 timesteps. The sum of the list must be equal to the total number of timesteps in that representative period, as specified in num_timesteps of rep-periods-data.csv.
  • specification = math: Similar to explicit, but using + and x for simplification. The value of partition is a sequence of elements of the form NxT separated by +, indicating N time blocks of length T. For instance, 2x3+3x6 is 2 time blocks of 3 timesteps, followed by 3 time blocks of 6 timesteps, for a total of 24 timesteps in the representative period.

The table below shows various results for different formats for a representative period with 12 timesteps.

Time Block              uniform   explicit                   math
1:3, 4:6, 7:9, 10:12    3         3;3;3;3                    4x3
1:4, 5:8, 9:12          4         4;4;4                      3x4
1:1, 2:2, …, 12:12      1         1;1;1;1;1;1;1;1;1;1;1;1    12x1
1:3, 4:6, 7:10, 11:12   NA        3;3;4;2                    2x3+1x4+1x2

Note: If an asset is not specified in this file, the balance equation will be written in the lowest resolution of both the incoming and outgoing flows to the asset.
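
To make the specification formats above concrete, here is a small illustrative Julia sketch (not the package's internal parser) that expands a math specification into the time blocks shown in the table:

# Expand a `math` partition specification such as "2x3+1x4+1x2" into time blocks.
function expand_math_partition(spec::AbstractString)
    blocks = UnitRange{Int}[]
    start = 1
    for element in split(spec, '+')
        n, t = parse.(Int, split(element, 'x'))   # N time blocks of length T
        for _ in 1:n
            push!(blocks, start:(start + t - 1))
            start += t
        end
    end
    return blocks
end

expand_math_partition("2x3+1x4+1x2")   # returns [1:3, 4:6, 7:10, 11:12]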

flows-rep-periods-partitions.csv (optional)

The same as assets-rep-periods-partitions.csv, but for flows.

If a flow is not specified in this file, the flow time resolution will be for each timestep by default (e.g., hourly).

assets-timeframe-partitions.csv (optional)

The same as their assets-rep-periods-partitions.csv counterpart, but for the periods in the timeframe of the model.

Schemas

  • assets_data
    • name: VARCHAR
    • active: BOOLEAN
    • year: INTEGER
    • commission_year: INTEGER
    • investable: BOOLEAN
    • investment_integer: BOOLEAN
    • investment_limit: DOUBLE
    • initial_units: DOUBLE
    • peak_demand: DOUBLE
    • consumer_balance_sense: VARCHAR
    • is_seasonal: BOOLEAN
    • storage_inflows: DOUBLE
    • initial_storage_units: DOUBLE
    • initial_storage_level: DOUBLE
    • energy_to_power_ratio: DOUBLE
    • storage_method_energy: BOOLEAN
    • investment_limit_storage_energy: DOUBLE
    • investment_integer_storage_energy: BOOLEAN
    • use_binary_storage_method: VARCHAR
    • max_energy_timeframe_partition: DOUBLE
    • min_energy_timeframe_partition: DOUBLE
    • unit_commitment: BOOLEAN
    • unit_commitment_method: VARCHAR
    • units_on_cost: DOUBLE
    • unit_commitment_integer: BOOLEAN
    • min_operating_point: DOUBLE
    • ramping: BOOLEAN
    • max_ramp_up: DOUBLE
    • max_ramp_down: DOUBLE
  • assets_profiles
    • asset: VARCHAR
    • commission_year: INTEGER
    • profile_type: VARCHAR
    • profile_name: VARCHAR
  • assets_rep_periods_partitions
    • asset: VARCHAR
    • year: INTEGER
    • rep_period: INTEGER
    • specification: VARCHAR
    • partition: VARCHAR
  • assets_timeframe_partitions
    • asset: VARCHAR
    • year: INTEGER
    • specification: VARCHAR
    • partition: VARCHAR
  • assets_timeframe_profiles
    • asset: VARCHAR
    • commission_year: INTEGER
    • profile_type: VARCHAR
    • profile_name: VARCHAR
  • flows_data
    • from_asset: VARCHAR
    • to_asset: VARCHAR
    • year: INTEGER
    • active: BOOLEAN
    • investable: BOOLEAN
    • investment_integer: BOOLEAN
    • variable_cost: DOUBLE
    • investment_limit: DOUBLE
    • initial_export_units: DOUBLE
    • initial_import_units: DOUBLE
    • efficiency: DOUBLE
  • flows_profiles
    • from_asset: VARCHAR
    • to_asset: VARCHAR
    • year: INTEGER
    • profile_type: VARCHAR
    • profile_name: VARCHAR
  • flows_rep_periods_partitions
    • from_asset: VARCHAR
    • to_asset: VARCHAR
    • year: INTEGER
    • rep_period: INTEGER
    • specification: VARCHAR
    • partition: VARCHAR
  • graph_assets_data
    • name: VARCHAR
    • type: VARCHAR
    • group: VARCHAR
    • investment_method: VARCHAR
    • capacity: DOUBLE
    • technical_lifetime: INTEGER
    • economic_lifetime: INTEGER
    • discount_rate: DOUBLE
    • capacity_storage_energy: DOUBLE
  • graph_flows_data
    • from_asset: VARCHAR
    • to_asset: VARCHAR
    • carrier: VARCHAR
    • is_transport: BOOLEAN
    • capacity: DOUBLE
    • technical_lifetime: INTEGER
    • economic_lifetime: INTEGER
    • discount_rate: DOUBLE
  • groups_data
    • name: VARCHAR
    • year: INTEGER
    • invest_method: BOOLEAN
    • min_investment_limit: DOUBLE
    • max_investment_limit: DOUBLE
  • profiles_rep_periods
    • profile_name: VARCHAR
    • year: INTEGER
    • rep_period: INTEGER
    • timestep: INTEGER
    • value: DOUBLE
  • profiles_timeframe
    • profile_name: VARCHAR
    • year: INTEGER
    • period: INTEGER
    • value: DOUBLE
  • rep_periods_data
    • year: INTEGER
    • rep_period: INTEGER
    • num_timesteps: INTEGER
    • resolution: DOUBLE
  • rep_periods_mapping
    • year: INTEGER
    • period: INTEGER
    • rep_period: INTEGER
    • weight: DOUBLE
  • vintage_assets_data
    • name: VARCHAR
    • commission_year: INTEGER
    • fixed_cost: DOUBLE
    • investment_cost: DOUBLE
    • fixed_cost_storage_energy: DOUBLE
    • investment_cost_storage_energy: DOUBLE
  • vintage_flows_data
    • from_asset: VARCHAR
    • to_asset: VARCHAR
    • commission_year: INTEGER
    • fixed_cost: DOUBLE
    • investment_cost: DOUBLE
  • year_data
    • year: INTEGER
    • length: INTEGER
    • is_milestone: BOOLEAN

Structures

The relevant structures used in this package are listed below:

EnergyProblem

The EnergyProblem structure is a wrapper around various other relevant structures. It hides the complexity behind the energy problem, making the usage more friendly, although more verbose.

Fields

  • graph: The Graph object that defines the geometry of the energy problem.
  • representative_periods: A vector of Representative Periods.
  • constraints_partitions: Dictionaries that connect pairs of asset and representative periods to time partitions (vectors of time blocks).
  • timeframe: The number of periods in the representative_periods.
  • dataframes: A Dictionary of dataframes used to linearize the variables and constraints. These are used internally in the model only.
  • groups: A vector of Groups.
  • model: A JuMP.Model object representing the optimization model.
  • solution: A structure of the variable values (investments, flows, etc) in the solution.
  • solved: A boolean indicating whether the model has been solved or not.
  • objective_value: The objective value of the solved problem (Float64).
  • termination_status: The termination status of the optimization model.
  • time_read_data: Time taken (in seconds) for reading the data (Float64).
  • time_create_model: Time taken (in seconds) for creating the model (Float64).
  • time_solve_model: Time taken (in seconds) for solving the model (Float64).

Constructor

The EnergyProblem can also be constructed using the minimal constructor below.

  • EnergyProblem(connection): Constructs a new EnergyProblem object with the given connection that has been created and the data loaded into it using TulipaIO. The graph, representative_periods, and timeframe are computed using create_internal_structures. The constraints_partitions field is computed from the representative_periods, and the other fields are initialized with default values.

See the basic example tutorial to see how these can be used.
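
A brief sketch of that workflow (the create_model! and solve_model! steps are named as in the package's tutorials, but treat the exact names as assumptions; `connection` is a populated DuckDB connection as before):

energy_problem = EnergyProblem(connection)   # builds graph, representative_periods, timeframe, ...
create_model!(energy_problem)                # creates the JuMP model
solve_model!(energy_problem)                 # solves it and fills solution, objective_value, termination_status
energy_problem.objective_value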

Graph

The energy problem is defined using a graph. Each vertex is an asset, and each edge is a flow.

We use MetaGraphsNext.jl to define the graph and its objects. Using MetaGraphsNext we can define a graph with metadata, i.e., associate data with each asset and flow. Furthermore, we can define the labels of each asset as keys to access the elements of the graph. The assets in the graph are of type GraphAssetData, and the flows are of type GraphFlowData.

The graph can be created using the create_internal_structures function, or it can be accessed from an EnergyProblem.

See how to use the graph in the graph tutorial.

GraphAssetData

This structure holds all the information of a given asset. These are stored inside the Graph. Given a graph graph, an asset a can be accessed through graph[a].

GraphFlowData

This structure holds all the information of a given flow. These are stored inside the Graph. Given a graph graph, a flow from asset u to asset v can be accessed through graph[u, v].
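
A hypothetical access pattern (the labels function comes from MetaGraphsNext; the asset names here are illustrative, not required):

graph = energy_problem.graph
collect(labels(graph))                           # list of asset names in the graph
asset_data = graph["some_asset"]                 # GraphAssetData for one asset
flow_data = graph["some_asset", "other_asset"]   # GraphFlowData for the flow between the two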

Partition

A representative period will be defined with a number of timesteps. A partition is a division of these timesteps into time blocks such that the time blocks are disjunct (not overlapping) and that all timesteps belong to some time block. Some variables and constraints are defined over every time block in a partition.

For instance, for a representative period with 12 timesteps, all sets below are partitions:

  • \[\{\{1, 2, 3\}, \{4, 5, 6\}, \{7, 8, 9\}, \{10, 11, 12\}\}\]

  • \[\{\{1, 2, 3, 4\}, \{5, 6, 7, 8\}, \{9, 10, 11, 12\}\}\]

  • \[\{\{1\}, \{2, 3\}, \{4\}, \{5, 6, 7, 8\}, \{9, 10, 11, 12\}\}\]

Timeframe

The timeframe is the total period we want to analyze with the model. Usually this is a year, but it can be any length of time. A timeframe has two fields:

  • num_periods: The timeframe is defined by a certain number of periods. For instance, a year can be defined by 365 periods, each describing a day.
  • map_periods_to_rp: Indicates the periods of the timeframe that map into a representative period and the weight of the representative period to construct that period.

Representative Periods

The timeframe (e.g., a full year) is described by a selection of representative periods, for instance, days or weeks, that nicely summarize other similar periods. For example, we could model the year into 3 days, by clustering all days of the year into 3 representative days. Each one of these days is called a representative period. TulipaEnergyModel.jl has the flexibility to consider representative periods of different lengths for the same timeframe (e.g., a year can be represented by a set of 4 days and 2 weeks). To obtain the representative periods, we recommend using TulipaClustering.

A representative period has three fields:

  • weight: Indicates how many representative periods are contained in the timeframe; this is inferred automatically from map_periods_to_rp in the timeframe.
  • timesteps: The number of timesteps in the representative period.
  • resolution: The duration in time of each timestep.

The number of timesteps and resolution work together to define the coarseness of the period. Nothing is defined outside of these timesteps. For instance, if the representative period represents a day and you want to specify a variable or constraint with a coarseness of 30 minutes, you need to set the number of timesteps to 48 and the resolution to 0.5.

Solution

The solution object energy_problem.solution is a mutable struct with the following fields:

  • assets_investment[a]: The investment for each asset, indexed on the investable asset a.
  • flows_investment[u, v]: The investment for each flow, indexed on the investable flow (u, v).
  • storage_level_intra_rp[a, rp, timesteps_block]: The storage level for the storage asset a within (intra) a representative period rp and a time block timesteps_block. The list of time blocks is defined by constraints_partitions, which was used to create the model.
  • storage_level_inter_rp[a, periods_block]: The storage level for the storage asset a between (inter) representative periods in the periods block periods_block.
  • flow[(u, v), rp, timesteps_block]: The flow value for a given flow (u, v) at a given representative period rp, and time block timesteps_block. The list of time blocks is defined by graph[(u, v)].partitions[rp].
  • objective_value: A Float64 with the objective value at the solution.
  • duals: A Dictionary containing the dual variables of selected constraints.

Check the tutorial for tips on manipulating the solution.
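
For instance, a short sketch of reading a few values (the asset name is illustrative):

solution = energy_problem.solution
solution.objective_value                     # total cost at the optimum
solution.assets_investment["some_asset"]     # investment decision for one investable asset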

Time Blocks

A time block is a range for which a variable or constraint is defined. It is a range of numbers, i.e., all integer numbers inside an interval. Time blocks are used for the periods in the timeframe and the timesteps in the representative period. Time blocks are disjunct (not overlapping), but do not have to be sequential.

Group

This structure holds all the information of a given group with the following fields:

  • name: The name of the group.
  • invest_method: Boolean value to indicate whether or not the group has an investment method.
  • min_investment_limit: A minimum investment limit in MW is imposed on the total investments of the assets belonging to the group.
  • max_investment_limit: A maximum investment limit in MW is imposed on the total investments of the assets belonging to the group.

Exploring infeasibility

If your model is infeasible, you can try exploring the infeasibility with JuMP.compute_conflict! and JuMP.copy_conflict.

Note: Not all solvers support this functionality.

Use energy_problem.model for the model argument. For instance:

if energy_problem.termination_status == INFEASIBLE
  compute_conflict!(energy_problem.model)
  iis_model, reference_map = copy_conflict(energy_problem.model)
  print(iis_model)
end

Storage specific setups

Seasonal and non-seasonal storage

Section Storage Modeling explains the main concepts for modeling seasonal and non-seasonal storage in TulipaEnergyModel.jl. To define whether an asset is one type or the other, consider the following:

  • Seasonal storage: When the storage capacity of an asset is greater than the total length of representative periods, we recommend using the inter-temporal constraints. To apply these constraints, you must set the input parameter is_seasonal to true in the assets-data.csv.
  • Non-seasonal storage: When the storage capacity of an asset is lower than the total length of representative periods, we recommend using the intra-temporal constraints. To apply these constraints, you must set the input parameter is_seasonal to false in the assets-data.csv.

Note: If the input data covers only one representative period for the entire year, for example, with 8760-hour timesteps, and you have a monthly hydropower plant, then you should set the is_seasonal parameter for that asset to false. This is because the length of the representative period is greater than the storage capacity of the storage asset.

The energy storage investment method

Energy storage assets have a unique characteristic wherein the investment is based not solely on the capacity to charge and discharge, but also on the energy capacity. Some storage asset types have a fixed duration for a given capacity, which means that there is a predefined ratio between energy and power. For instance, a battery of 10MW/unit and 4h duration implies that the energy capacity is 40MWh. Conversely, other storage asset types don't have a fixed ratio between the capacity investment and the storage capacity, so the energy capacity can be optimized independently of the capacity investment; hydrogen storage in salt caverns is one example. To define whether an energy asset is one type or the other, consider the following parameter settings in the file assets-data.csv:

  • Investment energy method: To use this method, set the parameter storage_method_energy to true. In addition, it is necessary to define:

    • investment_cost_storage_energy: To establish the cost of investing in the storage capacity (e.g., kEUR/MWh/unit).
    • fixed_cost_storage_energy: To establish the fixed cost of energy storage capacity (e.g., kEUR/MWh/unit).
    • investment_limit_storage_energy: To define the potential of the energy capacity investment (e.g., MWh). Missing values mean that there is no limit.
    • investment_integer_storage_energy: To determine whether the investment variables of storage capacity are integer or continuous.
  • Fixed energy-to-power ratio method: To use this method, set the parameter storage_method_energy to false. In addition, it is necessary to define the parameter energy_to_power_ratio to establish the predefined duration of the storage asset or ratio between energy and power. Note that all the investment costs should be allocated in the parameter investment_cost.

In addition, the parameter capacity_storage_energy in the graph-assets-data.csv defines the energy per unit of storage capacity invested in (e.g., MWh/unit).
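
As a quick numeric illustration of the fixed energy-to-power ratio from the battery example above (just the arithmetic, not the model's internal constraint):

capacity = 10.0                   # MW/unit
energy_to_power_ratio = 4.0       # h
capacity * energy_to_power_ratio  # 40.0 MWh/unit of energy capacity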

For more details on the constraints that apply when selecting one method or the other, please visit the mathematical formulation section.

Control simultaneous charging and discharging

Depending on the configuration of the energy storage assets, it may or may not be possible to charge and discharge them simultaneously. For instance, a single battery cannot charge and discharge at the same time, but some pumped hydro storage technologies have separate components for charging (pump) and discharging (turbine) that can function independently, allowing them to charge and discharge simultaneously. To account for these differences, the model provides users with three options for the use_binary_storage_method parameter in the assets-data.csv file:

  • binary: the model adds a binary variable to prevent charging and discharging simultaneously.
  • relaxed_binary: the model adds a binary variable that allows values between 0 and 1, reducing the likelihood of charging and discharging simultaneously. This option uses a tighter set of constraints close to the convex hull of the full formulation, resulting in fewer instances of simultaneous charging and discharging in the results.
  • If no value is set, i.e., missing value, the storage asset can charge and discharge simultaneously.

For more details on the constraints that apply when selecting this method, please visit the mathematical formulation section.

Setting up unit commitment constraints

The unit commitment constraints are only applied to producer and conversion assets. The unit_commitment parameter in the assets-data.csv must be set to true to include the constraints. Additionally, the following parameters should be set in that same file:

  • unit_commitment_method: It determines which unit commitment method to use. The current version of the code only includes the basic version. Future versions will add more detailed constraints as additional options.
  • units_on_cost: Objective function coefficient on units_on variable. (e.g., no-load cost or idling cost in kEUR/h/unit)
  • unit_commitment_integer: It determines whether the unit commitment variables are considered as integer or not (true or false)
  • min_operating_point: Minimum operating point or minimum stable generation level defined as a portion of the capacity of asset (p.u.)

For more details on the constraints that apply when selecting this method, please visit the mathematical formulation section.

Setting up ramping constraints

The ramping constraints are only applied to producer and conversion assets. The ramping parameter in the assets-data.csv must be set to true to include the constraints. Additionally, the following parameters should be set in that same file:

  • max_ramp_up: Maximum ramping up rate as a portion of the capacity of asset (p.u./h)
  • max_ramp_down: Maximum ramping down rate as a portion of the capacity of asset (p.u./h)

For more details on the constraints that apply when selecting this method, please visit the mathematical formulation section.

Setting up a maximum or minimum outgoing energy limit

For the model to add constraints for a maximum or minimum energy limit for an asset throughout the model's timeframe (e.g., a year), we need to establish a couple of parameters:

  • is_seasonal = true in the assets-data.csv. This parameter enables the model to use the inter-temporal constraints.
  • max_energy_timeframe_partition $\neq$ missing or min_energy_timeframe_partition $\neq$ missing in the assets-data.csv. This value represents the peak energy that will then be multiplied by the profile for each period in the timeframe.

    Note: These parameters are defined per period, and the default values for profiles are 1.0 p.u. per period. If the periods are determined daily, the energy limit for the whole year will be 365 times max_energy_timeframe_partition or min_energy_timeframe_partition.

  • (optional) profile_type and profile_name in the assets-timeframe-profiles.csv and the profile values in the profiles-timeframe.csv. If there is no profile defined, then by default it is 1.0 p.u. for all periods in the timeframe.
  • (optional) define a period partition in assets-timeframe-partitions.csv. If there is no partition defined, then by default the constraint is created for each period in the timeframe; otherwise, it will use the partition definition in the file.

Tip: If you want to set a limit on the maximum or minimum outgoing energy for a year with representative days, you can use the partition definition to create a single partition for the entire year to combine the profile.

Example: Setting Energy Limits

Let's assume we have a year divided into 365 days because we are using days as periods in the representative periods from TulipaClustering.jl. Also, we define max_energy_timeframe_partition = 10 MWh, meaning the peak energy we want to have is 10 MWh for each period or period partition. So depending on the optional information, we can have:

  • No profile, no period partitions: The default profile is 1.0 p.u. for each period, and since there are no period partitions, the constraints are created for each period (i.e., daily). So the outgoing energy of the asset for each day must be less than or equal to 10 MWh.
  • Profile defined, no period partitions: The profile definition and values are in the assets-timeframe-profiles.csv and profiles-timeframe.csv files. For example, we define a profile that has the following first four values: 0.6 p.u., 1.0 p.u., 0.8 p.u., and 0.4 p.u. There are no period partitions, so constraints are created for each period (i.e., daily). Therefore the outgoing energy of the asset for the first four days must be less than or equal to 6 MWh, 10 MWh, 8 MWh, and 4 MWh.
  • Profile defined, period partitions defined: Using the same profile as above, we now define a period partition in the assets-timeframe-partitions.csv file as uniform with a value of 2. This value means that we will aggregate every two periods (i.e., every two days). So, instead of having 365 constraints, we will have 183 constraints (182 covering two days each and one last constraint covering a single day). The profile is then aggregated by summing its values over the periods within each partition. Thus, the outgoing energy of the asset for the first two partitions (i.e., the first two pairs of days) must be less than or equal to 16 MWh and 12 MWh, respectively.
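
A quick check of the arithmetic in the last case (plain Julia, using the numbers from the example):

peak_energy = 10.0                    # MWh, max_energy_timeframe_partition
profile = [0.6, 1.0, 0.8, 0.4]        # first four daily profile values (p.u.)
partitions = [1:2, 3:4]               # uniform partition of 2 periods over the first four days
[peak_energy * sum(profile[p]) for p in partitions]   # returns [16.0, 12.0] MWh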

Defining a group of assets

A group of assets refers to a set of assets that share certain constraints. For example, the investments of a group of assets may be capped at a maximum value, which represents the potential of a specific area that is restricted in terms of the maximum allowable MW due to limitations on building licenses.

In order to define the groups in the model, the following steps are necessary:

  1. Create a group in the groups-data.csv file by defining the name property and its parameters.

  2. In the file graph-assets-data.csv, assign assets to the group by setting the name in the group parameter/column.

    Note: A missing value in the parameter group in the graph-assets-data.csv means that the asset does not belong to any group.

Groups are useful for representing several common constraints; the following group constraints are available.

Setting up a maximum or minimum investment limit for a group

The mathematical formulation of the maximum and minimum investment limit for group constraints is available here. The parameters to set up these constraints in the model are in the groups-data.csv file.

  • invest_method = true. This parameter enables the model to use the investment group constraints.

  • min_investment_limit $\neq$ missing or max_investment_limit $\neq$ missing. This value represents the limits that will be imposed on the investment that belongs to the group.

    Notes:

    1. A missing value in the parameters min_investment_limit and max_investment_limit means that there is no investment limit.
    2. These constraints are applied to the investments each year. The model does not yet support investment limits on a group's accumulated invested capacity.

Example: Group of Assets

Let's explore how the groups are set up in the test case called Norse. First, let's take a look at the groups-data.csv file:

2×5 DataFrame
 Row  name        year   invest_method  min_investment_limit  max_investment_limit
      String15    Int64  Bool           Int64?                Int64?
   1  renewables  2030   true           missing               40000
   2  ccgt        2030   true           10000                 missing

In the given data, there are two groups: renewables and ccgt. Both groups have the invest_method parameter set to true, indicating that investment group constraints apply to both. For the renewables group, the min_investment_limit parameter is missing, signifying that there is no minimum limit imposed on the group. However, the max_investment_limit parameter is set to 40000 MW, indicating that the total investments of assets in the group must be less than or equal to this value. In contrast, the ccgt group has a missing value in the max_investment_limit parameter, indicating no maximum limit, while the min_investment_limit is set to 10000 MW for the total investments in that group.

Let's now explore which assets are in each group. To do so, we can take a look at the graph-assets-data.csv file:

4×3 DataFrame
 Row  name          type        group
      String31      String15    String15?
   1  Asgard_Solar  producer    renewables
   2  Asgard_CCGT   conversion  ccgt
   3  Midgard_Wind  producer    renewables
   4  Midgard_CCGT  conversion  ccgt

Here we can see that the assets Asgard_Solar and Midgard_Wind belong to the renewables group, while the assets Asgard_CCGT and Midgard_CCGT belong to the ccgt group.

Note: If the group has a min_investment_limit, then assets in the group have to allow investment (investable = true) for the model to be feasible. If the assets are not investable then they cannot satisfy the minimum constraint.

+end

Storage specific setups

Seasonal and non-seasonal storage

Section Storage Modeling explains the main concepts for modeling seasonal and non-seasonal storage in TulipaEnergyModel.jl. To define if an asset is one type or the other then consider the following:

  • Seasonal storage: When the storage capacity of an asset is greater than the total length of representative periods, we recommend using the inter-temporal constraints. To apply these constraints, you must set the input parameter is_seasonal to true in the assets-data.csv.
  • Non-seasonal storage: When the storage capacity of an asset is lower than the total length of representative periods, we recommend using the intra-temporal constraints. To apply these constraints, you must set the input parameter is_seasonal to false in the assets-data.csv.

Note: If the input data covers only one representative period for the entire year, for example, with 8760-hour timesteps, and you have a monthly hydropower plant, then you should set the is_seasonal parameter for that asset to false. This is because the length of the representative period is greater than the storage capacity of the storage asset.

The energy storage investment method

Energy storage assets have a unique characteristic wherein the investment is based not solely on the capacity to charge and discharge, but also on the energy capacity. Some storage asset types have a fixed duration for a given capacity, which means that there is a predefined ratio between energy and power. For instance, a battery of 10MW/unit and 4h duration implies that the energy capacity is 40MWh. Conversely, other storage asset types don't have a fixed ratio between the investment of capacity and storage capacity. Therefore, the energy capacity can be optimized independently of the capacity investment, such as hydrogen storage in salt caverns. To define if an energy asset is one type or the other then consider the following parameter setting in the file assets-data.csv:

  • Investment energy method: To use this method, set the parameter storage_method_energy to true. In addition, it is necessary to define:

    • investment_cost_storage_energy: To establish the cost of investing in the storage capacity (e.g., kEUR/MWh/unit).
    • fixed_cost_storage_energy: To establish the fixed cost of energy storage capacity (e.g., kEUR/MWh/unit).
    • investment_limit_storage_energy: To define the potential of the energy capacity investment (e.g., MWh). Missing values mean that there is no limit.
    • investment_integer_storage_energy: To determine whether the investment variables of storage capacity are integers of continuous.
  • Fixed energy-to-power ratio method: To use this method, set the parameter storage_method_energy to false. In addition, it is necessary to define the parameter energy_to_power_ratio to establish the predefined duration of the storage asset or ratio between energy and power. Note that all the investment costs should be allocated in the parameter investment_cost.

In addition, the parameter capacity_storage_energy in the graph-assets-data.csv defines the energy per unit of storage capacity invested in (e.g., MWh/unit).

For more details on the constraints that apply when selecting one method or the other, please visit the mathematical formulation section.

Control simultaneous charging and discharging

Depending on the configuration of the energy storage assets, it may or may not be possible to charge and discharge them simultaneously. For instance, a single battery cannot charge and discharge at the same time, but some pumped hydro storage technologies have separate components for charging (pump) and discharging (turbine) that can function independently, allowing them to charge and discharge simultaneously. To account for these differences, the model provides users with three options for the use_binary_storage_method parameter in the assets-data.csv file:

  • binary: the model adds a binary variable to prevent charging and discharging simultaneously.
  • relaxed_binary: the model adds a binary variable that allows values between 0 and 1, reducing the likelihood of charging and discharging simultaneously. This option uses a tighter set of constraints close to the convex hull of the full formulation, resulting in fewer instances of simultaneous charging and discharging in the results.
  • If no value is set, i.e., missing value, the storage asset can charge and discharge simultaneously.

For more details on the constraints that apply when selecting this method, please visit the mathematical formulation section.
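Schematically, the binary option behaves like the following toy JuMP model (a sketch only; the variable names and bounds are made up, and the model's actual constraints are the ones in the mathematical formulation section):

using JuMP, HiGHS

model = Model(HiGHS.Optimizer)
@variable(model, 0 <= inflow <= 10)    # charging flow (MW)
@variable(model, 0 <= outflow <= 10)   # discharging flow (MW)
@variable(model, is_charging, Bin)     # "binary" option; relax to 0 <= is_charging <= 1 for "relaxed_binary"
@constraint(model, inflow <= 10 * is_charging)
@constraint(model, outflow <= 10 * (1 - is_charging))

Because is_charging can only be 0 or 1, at most one of the two flows can be positive in any given time block.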

Setting up unit commitment constraints

The unit commitment constraints are only applied to producer and conversion assets. The unit_commitment parameter must be set to true to include the constraints in the assets-data.csv. Additionally, the following parameters should be set in that same file:

  • unit_commitment_method: It determines which unit commitment method to use. The current version of the code only includes the basic version. Future versions will add more detailed constraints as additional options.
  • units_on_cost: Objective function coefficient on units_on variable. (e.g., no-load cost or idling cost in kEUR/h/unit)
  • unit_commitment_integer: It determines whether the unit commitment variables are considered as integer or not (true or false)
  • min_operating_point: Minimum operating point or minimum stable generation level defined as a portion of the capacity of asset (p.u.)

For more details on the constraints that apply when selecting this method, please visit the mathematical formulation section.
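As a quick numeric illustration of min_operating_point (made-up numbers; only the parameter names come from assets-data.csv, and the usual unit-commitment reading of the parameters is assumed):

capacity = 100.0           # MW per unit of the asset
min_operating_point = 0.4  # p.u. of the capacity
units_on = 2               # committed units in a given hour

lower_bound = min_operating_point * capacity * units_on   # 80.0 MW
upper_bound = capacity * units_on                          # 200.0 MW
# The output of the asset in that hour must lie between these two bounds.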

Setting up ramping constraints

The ramping constraints are only applied to producer and conversion assets. The ramping parameter must be set to true to include the constraints in the assets-data.csv. Additionally, the following parameters should be set in that same file:

  • max_ramp_up: Maximum ramping up rate as a portion of the capacity of asset (p.u./h)
  • max_ramp_down: Maximum ramping down rate as a portion of the capacity of asset (p.u./h)

For more details on the constraints that apply when selecting this method, please visit the mathematical formulation section.
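For example (made-up numbers; only the parameter names come from assets-data.csv):

capacity = 100.0     # MW of capacity of the asset
max_ramp_up = 0.2    # p.u./h
max_ramp_down = 0.3  # p.u./h

max_increase_per_hour = max_ramp_up * capacity     # 20.0 MW/h
max_decrease_per_hour = max_ramp_down * capacity   # 30.0 MW/h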

Setting up a maximum or minimum outgoing energy limit

For the model to add constraints for a maximum or minimum energy limit for an asset throughout the model's timeframe (e.g., a year), we need to establish a couple of parameters:

  • is_seasonal = true in the assets-data.csv. This parameter enables the model to use the inter-temporal constraints.
  • max_energy_timeframe_partition $\neq$ missing or min_energy_timeframe_partition $\neq$ missing in the assets-data.csv. This value represents the peak energy that will be then multiplied by the profile for each period in the timeframe.

    Note: These parameters are defined per period, and the default values for profiles are 1.0 p.u. per period. If the periods are defined daily, the energy limit for the whole year will be 365 times the max_energy_timeframe_partition or min_energy_timeframe_partition value.

  • (optional) profile_type and profile_name in the assets-timeframe-profiles.csv and the profile values in the profiles-timeframe.csv. If there is no profile defined, then by default it is 1.0 p.u. for all periods in the timeframe.
  • (optional) define a period partition in assets-timeframe-partitions.csv. If there is no partition defined, then by default the constraint is created for each period in the timeframe, otherwise, it will consider the partition definition in the file.

Tip: If you want to set a limit on the maximum or minimum outgoing energy for a whole year with representative days, you can use the partition definition to create a single partition covering the entire year, so that the profile is aggregated over the whole timeframe.

Example: Setting Energy Limits

Let's assume we have a year divided into 365 days because we are using days as periods in the representatives from TulipaClustering.jl. Also, we define the max_energy_timeframe_partition = 10 MWh, meaning the peak energy we want to have is 10MWh for each period or period partition. So depending on the optional information, we can have:

Profile | Period Partitions | Example
None | None | The default profile is 1.0 p.u. for each period and, since there are no period partitions, the constraints are created for each period (i.e., daily). So the outgoing energy of the asset for each day must be less than or equal to 10MWh.
Defined | None | The profile definition and values are in the assets-timeframe-profiles.csv and profiles-timeframe.csv files. For example, we define a profile whose first four values are 0.6 p.u., 1.0 p.u., 0.8 p.u., and 0.4 p.u. There are no period partitions, so the constraints are created for each period (i.e., daily). Therefore the outgoing energy of the asset for the first four days must be less than or equal to 6MWh, 10MWh, 8MWh, and 4MWh.
Defined | Defined | Using the same profile as above, we now define a period partition in the assets-timeframe-partitions.csv file as uniform with a value of 2, which aggregates every two periods (i.e., every two days). So, instead of 365 constraints, there are 183 constraints (182 covering two days each and one covering the last day). The profile is aggregated by summing its values within each partition. Thus, the outgoing energy of the asset for the first two partitions (i.e., the first four days) must be less than or equal to 16MWh and 12MWh, respectively.
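The arithmetic of the table can be reproduced with a few lines of Julia (a sketch; the profile values are the hypothetical ones used above):

max_energy = 10.0                # MWh, max_energy_timeframe_partition
profile = [0.6, 1.0, 0.8, 0.4]   # first four daily profile values (p.u.)

# No partition: one constraint per day
daily_limits = max_energy .* profile                                      # [6.0, 10.0, 8.0, 4.0] MWh

# Uniform partition of 2: profile values are summed within each partition
partition_limits = max_energy .* [sum(profile[1:2]), sum(profile[3:4])]   # [16.0, 12.0] MWh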

Defining a group of assets

A group of assets refers to a set of assets that share certain constraints. For example, the investments of a group of assets may be capped at a maximum value, which represents the potential of a specific area that is restricted in terms of the maximum allowable MW due to limitations on building licenses.

In order to define the groups in the model, the following steps are necessary:

  1. Create a group in the groups-data.csv file by defining the name property and its parameters.

  2. In the file graph-assets-data.csv, assign assets to the group by setting the name in the group parameter/column.

    Note: A missing value in the parameter group in the graph-assets-data.csv means that the asset does not belong to any group.

Groups are useful for representing constraints that are shared by several assets. The following group constraints are available.

Setting up a maximum or minimum investment limit for a group

The mathematical formulation of the maximum and minimum investment limit for group constraints is available here. The parameters to set up these constraints in the model are in the groups-data.csv file.

  • invest_method = true. This parameter enables the model to use the investment group constraints.

  • min_investment_limit $\neq$ missing or max_investment_limit $\neq$ missing. This value represents the limits that will be imposed on the investment that belongs to the group.

    Notes:

    1. A missing value in the parameters min_investment_limit and max_investment_limit means that there is no investment limit.
    2. These constraints are applied to the investments each year. The model does not yet have investment limits to a group's accumulated invested capacity.

Example: Group of Assets

Let's explore how the groups are set up in the test case called Norse. First, let's take a look at the groups-data.csv file:

2×5 DataFrame
 Row │ name        year   invest_method  min_investment_limit  max_investment_limit
     │ String15    Int64  Bool           Int64?                Int64?
─────┼──────────────────────────────────────────────────────────────────────────────
   1 │ renewables  2030   true           missing               40000
   2 │ ccgt        2030   true           10000                 missing

In the given data, there are two groups: renewables and ccgt. Both groups have the invest_method parameter set to true, indicating that investment group constraints apply to both. For the renewables group, the min_investment_limit parameter is missing, signifying that there is no minimum limit imposed on the group. However, the max_investment_limit parameter is set to 40000 MW, indicating that the total investments of assets in the group must be less than or equal to this value. In contrast, the ccgt group has a missing value in the max_investment_limit parameter, indicating no maximum limit, while the min_investment_limit is set to 10000 MW for the total investments in that group.

Let's now explore which assets are in each group. To do so, we can take a look at the graph-assets-data.csv file:

4×3 DataFrame
 Row │ name          type        group
     │ String31      String15    String15?
─────┼──────────────────────────────────────
   1 │ Asgard_Solar  producer    renewables
   2 │ Asgard_CCGT   conversion  ccgt
   3 │ Midgard_Wind  producer    renewables
   4 │ Midgard_CCGT  conversion  ccgt

Here we can see that the assets Asgard_Solar and Midgard_Wind belong to the renewables group, while the assets Asgard_CCGT and Midgard_CCGT belong to the ccgt group.

Note: If the group has a min_investment_limit, then assets in the group have to allow investment (investable = true) for the model to be feasible. If the assets are not investable then they cannot satisfy the minimum constraint.
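Putting the Norse numbers together, the two group constraints simply bound the summed investments (a sketch; the invested capacities below are hypothetical values, not model results):

invested_capacity = Dict(            # MW invested per asset (hypothetical)
    "Asgard_Solar" => 25_000.0,
    "Midgard_Wind" => 15_000.0,
    "Asgard_CCGT"  => 6_000.0,
    "Midgard_CCGT" => 4_000.0,
)

renewables_total = invested_capacity["Asgard_Solar"] + invested_capacity["Midgard_Wind"]
ccgt_total       = invested_capacity["Asgard_CCGT"] + invested_capacity["Midgard_CCGT"]

renewables_total <= 40_000   # max_investment_limit of the renewables group
ccgt_total       >= 10_000   # min_investment_limit of the ccgt group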

diff --git a/dev/20-tutorials/index.html b/dev/20-tutorials/index.html index 6d9cb87b..3d170511 100644 --- a/dev/20-tutorials/index.html +++ b/dev/20-tutorials/index.html @@ -6,15 +6,15 @@ connection = DBInterface.connect(DuckDB.DB) read_csv_folder(connection, input_dir; schemas = TulipaEnergyModel.schema_per_table_name) energy_problem = run_scenario(connection)
EnergyProblem:
-  - Time creating internal structures (in seconds): 8.663744918
-  - Time computing constraints partitions (in seconds): 5.126177879
+  - Time creating internal structures (in seconds): 8.569627269
+  - Time computing constraints partitions (in seconds): 4.939669504
   - Model created!
-    - Time for  creating the model (in seconds): 12.497875194
+    - Time for  creating the model (in seconds): 11.848985272
     - Number of variables: 368
     - Number of constraints for variable bounds: 368
     - Number of structural constraints: 432
   - Model solved! 
-    - Time for  solving the model (in seconds): 2.624108657
+    - Time for  solving the model (in seconds): 2.503098402
     - Termination status: OPTIMAL
     - Objective value: 269238.4382375954
 

The energy_problem variable is of type EnergyProblem. For more details, see the documentation for that type or the section Structures.

That's all it takes to run a scenario! To learn about the data required to run your own scenario, see the Input section of How to Use.

Manually running each step

If we need more control, we can create the energy problem first, then the optimization model inside it, and finally ask for it to be solved.

using DuckDB, TulipaIO, TulipaEnergyModel
@@ -23,8 +23,8 @@
 connection = DBInterface.connect(DuckDB.DB)
 read_csv_folder(connection, input_dir; schemas = TulipaEnergyModel.schema_per_table_name)
 energy_problem = EnergyProblem(connection)
EnergyProblem:
-  - Time creating internal structures (in seconds): 0.1351677
-  - Time computing constraints partitions (in seconds): 0.000211808
+  - Time creating internal structures (in seconds): 0.129412953
+  - Time computing constraints partitions (in seconds): 0.000199752
   - Model not created!
   - Model not solved!
 

The energy problem does not have a model yet:

energy_problem.model === nothing
true

To create the internal model, we call the function create_model!.

create_model!(energy_problem)
@@ -103,15 +103,15 @@
 connection = DBInterface.connect(DuckDB.DB)
 read_csv_folder(connection, input_dir; schemas = TulipaEnergyModel.schema_per_table_name)
 energy_problem = run_scenario(connection, optimizer = GLPK.Optimizer)
EnergyProblem:
-  - Time creating internal structures (in seconds): 0.142070287
-  - Time computing constraints partitions (in seconds): 0.000214613
+  - Time creating internal structures (in seconds): 0.132963318
+  - Time computing constraints partitions (in seconds): 0.000197969
   - Model created!
-    - Time for  creating the model (in seconds): 0.007171736
+    - Time for  creating the model (in seconds): 0.00657469
     - Number of variables: 368
     - Number of constraints for variable bounds: 368
     - Number of structural constraints: 432
   - Model solved! 
-    - Time for  solving the model (in seconds): 2.203933527
+    - Time for  solving the model (in seconds): 2.205626931
     - Termination status: OPTIMAL
     - Objective value: 269238.4382417078
 

or

using GLPK
@@ -481,4 +481,4 @@
  54777.52475361511
  52128.801264625705
  46907.046645223265

Here value. (i.e., broadcasting) was used instead of the vector comprehension from previous examples just to show that it also works.

The value of the constraint is obtained by looking only at the part with variables. So a constraint like 2x + 3y - 1 <= 4 would return the value of 2x + 3y.
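A tiny self-contained JuMP example of that behaviour (unrelated to the energy problem above; HiGHS is used here only as a readily available solver):

using JuMP, HiGHS

model = Model(HiGHS.Optimizer)
set_silent(model)
@variable(model, x >= 0)
@variable(model, y >= 0)
con = @constraint(model, 2x + 3y - 1 <= 4)
@objective(model, Max, x + y)
optimize!(model)

value(con)   # value of 2x + 3y at the solution; the constant -1 is not included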

Writing the output to CSV

To save the solution to CSV files, you can use save_solution_to_file:

mkdir("outputs")
save_solution_to_file("outputs", energy_problem)

Plotting

In the previous sections, we have shown how to create vectors such as the one for flows. If you want simple plots, you can plot the vectors directly using any package you like.

If you would like more custom plots, check out TulipaPlots.jl, under development, which provides tailor-made plots for TulipaEnergyModel.jl.


diff --git a/dev/30-concepts/index.html b/dev/30-concepts/index.html index cfe669eb..e865f5ec 100644 --- a/dev/30-concepts/index.html +++ b/dev/30-concepts/index.html @@ -66,15 +66,15 @@ connection = DBInterface.connect(DuckDB.DB) read_csv_folder(connection, input_dir; schemas = TulipaEnergyModel.schema_per_table_name) energy_problem = run_scenario(connection)
EnergyProblem:
-  - Time creating internal structures (in seconds): 0.166899477
-  - Time computing constraints partitions (in seconds): 0.00028174
+  - Time creating internal structures (in seconds): 0.165455016
+  - Time computing constraints partitions (in seconds): 0.00026879
   - Model created!
-    - Time for  creating the model (in seconds): 0.863402164
+    - Time for  creating the model (in seconds): 0.864324521
     - Number of variables: 727
     - Number of constraints for variable bounds: 727
     - Number of structural constraints: 957
   - Model solved! 
-    - Time for  solving the model (in seconds): 0.031195015
+    - Time for  solving the model (in seconds): 0.02970665
     - Termination status: OPTIMAL
     - Objective value: 2409.3840293440285

Since the battery is not seasonal, it only has results for the intra-storage level of each representative period, as shown in the following figure:

Battery-intra-storage-level

Since the phs is defined as seasonal, it has results for only the inter-storage level. Since we defined the period partition as 1, we get results for each period (i.e., day). We can see that the inter-temporal constraints in the model keep track of the storage level through the whole timeframe definition (i.e., week).

PHS-inter-storage-level

In this example, we have demonstrated how to partially recover the chronological information of a storage asset with a longer discharge duration (such as 48 hours) than the representative period length (24 hours). This feature enables us to model both short- and long-term storage in TulipaEnergyModel.jl.


diff --git a/dev/40-formulation/index.html b/dev/40-formulation/index.html index 5f0e8159..c49063ee 100644 --- a/dev/40-formulation/index.html +++ b/dev/40-formulation/index.html @@ -79,4 +79,4 @@ \end{aligned}\]

Maximum Investment Limit of a Group

\[\begin{aligned} \sum_{a \in \mathcal{A}^{\text{i}} | p^{\text{group}}_{a} = g} p^{\text{capacity}}_{a} \cdot v^{\text{inv}}_{a} \leq p^{\text{max invest limit}}_{g} \\ \\ & \forall g \in \mathcal{G}^{\text{ai}} \end{aligned}\]

References

Damcı-Kurt, P., Küçükyavuz, S., Rajan, D., Atamtürk, A., 2016. A polyhedral study of production ramping. Math. Program. 158, 175–205. doi: 10.1007/s10107-015-0919-9.

Morales-España, G., Ramos, A., García-González, J., 2014. An MIP Formulation for Joint Market-Clearing of Energy and Reserves Based on Ramp Scheduling. IEEE Transactions on Power Systems 29, 476-488. doi: 10.1109/TPWRS.2013.2259601.

Morales-España, G., Latorre, J. M., Ramos, A., 2013. Tight and Compact MILP Formulation for the Thermal Unit Commitment Problem. IEEE Transactions on Power Systems 28, 4897-4908. doi: 10.1109/TPWRS.2013.2251373.

Tejada-Arango, D.A., Domeshek, M., Wogrin, S., Centeno, E., 2018. Enhanced representative days and system states modeling for energy storage investment analysis. IEEE Transactions on Power Systems 33, 6534–6544. doi:10.1109/TPWRS.2018.2819578.

Tejada-Arango, D.A., Wogrin, S., Siddiqui, A.S., Centeno, E., 2019. Opportunity cost including short-term energy storage in hydrothermal dispatch models using a linked representative periods approach. Energy 188, 116079. doi:10.1016/j.energy.2019.116079.


diff --git a/dev/90-contributing/index.html b/dev/90-contributing/index.html index eca77fef..9f138a5e 100644 --- a/dev/90-contributing/index.html +++ b/dev/90-contributing/index.html @@ -1,2 +1,2 @@ -Contributing Guidelines · TulipaEnergyModel.jl

Contributing Guidelines

Great that you want to contribute to the development of Tulipa! Please read these guidelines and our Developer Documentation to get you started.

GitHub Rules of Engagement

  • If you want to discuss something that isn't immediately actionable, post under Discussions. Convert it to an issue once it's actionable.
  • All PRs should have an associated issue (unless it's a very minor fix).
  • All issues should have 1 Type and 1+ Zone labels (unless Type: epic).
  • Assign yourself to issues you want to address. Consider if you will be able to work on them in the near future (this week) — if not, leave them available for someone else.
  • Set the issue Status to "In Progress" when you have started working on it.
  • When finalizing a pull request, set the Status to "Ready for Review." If someone specific needs to review it, assign them as the reviewer (otherwise anyone can review).
  • Issues addressed by merged PRs will automatically move to Done.
  • If you want to discuss an issue at the next group meeting (or just get some attention), mark it with the "question" label.
  • Issues without updates for 60 days (and PRs without updates in 30 days) will be labelled as "stale" and filtered out of view. There is a Stale project board to view and revive these.

Contributing Workflow

Fork → Branch → Code → Push → Pull → Squash & Merge

  1. Fork the repository
  2. Create a new branch (in your fork)
  3. Do fantastic coding
  4. Push to your fork
  5. Create a pull request from your fork to the main repository
  6. (After review) Squash and merge

For a step-by-step guide to these steps, see our Developer Documentation.

We use this workflow in our quest to achieve the Utopic Git History.


diff --git a/dev/91-developer/index.html b/dev/91-developer/index.html index f147d6e5..9517e4af 100644 --- a/dev/91-developer/index.html +++ b/dev/91-developer/index.html @@ -30,4 +30,4 @@ git rebase --continue git push --force origin <branch_name>

8. Create a Pull Request

When there are no more conflicts and all the tests are passing, create a pull request to merge your remote branch into the org main. You can do this on GitHub by opening the branch in your fork and clicking "Compare & pull request".

Screenshot of Compare & pull request button on GitHub

Fill in the pull request details:

  1. Describe the changes.
  2. List the issue(s) that this pull request closes.
  3. Fill in the collaboration confirmation.
  4. (Optional) Choose a reviewer.
  5. When all of the information is filled in, click "Create pull request".

Screenshot of the pull request information

Your pull request will appear in the list of pull requests in the TulipaEnergyModel.jl repository, where you can track the review process.

Sometimes reviewers request changes. After pushing any changes, the pull request will be automatically updated. Do not forget to re-request a review.

Once your reviewer approves the pull request, you need to merge it with the main branch using "Squash and Merge". You can also delete the branch that originated the pull request by clicking the button that appears after the merge. For branches that were pushed to the main repo, it is recommended that you do so.

Building the Documentation Locally

Following the latest suggestions, we recommend using LiveServer to build the documentation.

Note: Ensure you have the package Revise installed in your global environment before running servedocs.

Here is how you do it:

  1. Run julia --project=docs in the package root to open Julia in the environment of the docs.
  2. If this is the first time building the docs
    1. Press ] to enter pkg mode
    2. Run pkg> dev . to use the development version of your package
    3. Press backspace to leave pkg mode
  3. Run julia> using LiveServer
  4. Run julia> servedocs(launch_browser=true)

Performance Considerations

If you updated something that might impact the performance of the package, you can run the Benchmark.yml workflow from your pull request. To do that, add the tag benchmark to the pull request. This will trigger the workflow and post the results as a comment in your pull request.

Warning: This requires that your branch was pushed to the main repo. If you have created a pull request from a fork, the Benchmark.yml workflow does not work. Instead, close your pull request, push your branch to the main repo, and open a new pull request.

If you want to manually run the benchmarks, you can do the following:
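A minimal sketch of such a manual run, assuming the suite is defined as SUITE in benchmark/benchmarks.jl using BenchmarkTools.jl:

using BenchmarkTools

include("benchmark/benchmarks.jl")    # assumed to define SUITE
tune!(SUITE)                          # calibrate sample/evaluation counts
results = run(SUITE, verbose = true)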

Profiling

To profile the code in a more manual way, here are some tips:

See the file benchmark/profiling.jl for an example of profiling code.

Procedure for Releasing a New Version (Julia Registry)

When publishing a new version of the model to the Julia Registry, follow this procedure:

Note: To be able to register, you need to be a member of the organisation TulipaEnergy and have your visibility set to public: Screenshot of public members of TulipaEnergy on GitHub

  1. Click on the Project.toml file on GitHub.

  2. Edit the file and change the version number according to semantic versioning: Major.Minor.Patch Screenshot of editing Project.toml on GitHub

  3. Commit the changes in a new branch and open a pull request. Change the commit message according to the version number. Screenshot of PR with commit message "Release 0.6.1"

  4. Create the pull request and squash & merge it after the review and testing process. Delete the branch after the squash and merge. Screenshot of full PR template on GitHub

  5. Go to the main page of the repo and click on the commit. Screenshot of how to access commit on GitHub

  6. Add the following comment to the commit: @JuliaRegistrator register Screenshot of calling JuliaRegistrator in commit comments

  7. The bot should start the registration process. Screenshot of JuliaRegistrator bot message

  8. After approval, the bot will take care of the PR at the Julia Registry and automatically create the release for the new version. Screenshot of new version on registry

    Thank you for helping make frequent releases!


diff --git a/dev/95-reference/index.html b/dev/95-reference/index.html index 7a6fc45e..8713c32f 100644 --- a/dev/95-reference/index.html +++ b/dev/95-reference/index.html @@ -1,9 +1,9 @@ -Reference · TulipaEnergyModel.jl

Reference

TulipaEnergyModel.EnergyProblemType

Structure to hold all parts of an energy problem. It is a wrapper around various other relevant structures. It hides the complexity behind the energy problem, making the usage more friendly, although more verbose.

Fields

  • graph: The Graph object that defines the geometry of the energy problem.
  • representative_periods: A vector of Representative Periods.
  • constraints_partitions: Dictionaries that connect pairs of asset and representative periods to time partitions (vectors of time blocks)
  • timeframe: The number of periods of the representative_periods.
  • dataframes: The data frames used to linearize the variables and constraints. These are used internally in the model only.
  • groups: The input data of the groups to create constraints that are common to a set of assets in the model.
  • model_parameters: The model parameters.
  • model: A JuMP.Model object representing the optimization model.
  • solved: A boolean indicating whether the model has been solved or not.
  • objective_value: The objective value of the solved problem.
  • termination_status: The termination status of the optimization model.
  • timings: Dictionary of elapsed time for various parts of the code (in seconds).

Constructor

  • EnergyProblem(connection): Constructs a new EnergyProblem object with the given connection. The constraints_partitions field is computed from the representative_periods, and the other fields are initialized with default values.

See the basic example tutorial to see how these can be used.

source
TulipaEnergyModel.ModelParametersType
ModelParameters(;key = value, ...)
 ModelParameters(path; ...)
 ModelParameters(connection; ...)
ModelParameters(connection, path; ...)

Structure to hold the model parameters. Some values are defined by default and some require explicit definition.

If path is passed, it is expected to be a string pointing to a TOML file with a key = value list of parameters. Explicit keyword arguments take precedence.

If connection is passed, the default discount_year is set to the minimum of all milestone years. In other words, we check for the table year_data for the column year where the column is_milestone is true. Explicit keyword arguments take precedence.

If both are passed, then path has preference. Explicit keyword arguments take precedence.

Parameters

  • discount_rate::Float64 = 0.0: The model discount rate.
  • discount_year::Int: The model discount year.
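A minimal usage sketch (the values are arbitrary; only the keyword names come from the parameter list above):

using TulipaEnergyModel

model_parameters = TulipaEnergyModel.ModelParameters(; discount_rate = 0.05, discount_year = 2030)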
source
TulipaEnergyModel._check_initial_storage_level!Method
_check_initial_storage_level!(df)

Determine the starting value for the initial storage level for interpolating the storage level. If there is no initial storage level given, we will use the final storage level. Otherwise, we use the given initial storage level.

source
TulipaEnergyModel._construct_inter_rp_dataframesMethod
df = _construct_inter_rp_dataframes(assets, graph, years, asset_filter)

Constructs dataframes for inter representative period constraints.

Arguments

  • assets: An array of assets.
  • graph: The energy problem graph with the assets data.
  • asset_filter: A function that filters assets based on certain criteria.

Returns

A dataframe containing the constructed dataframe for constraints.

source
TulipaEnergyModel._interpolate_storage_level!Method
_interpolate_storage_level!(df, time_column::Symbol)

Transform the storage level dataframe from grouped timesteps or periods to incremental ones by interpolation. The starting value is the value of the previous grouped timesteps or periods or the initial value. The ending value is the value for the grouped timesteps or periods.

source
TulipaEnergyModel._parse_rp_partitionFunction
_parse_rp_partition(Val(specification), timestep_string, rp_timesteps)

Parses the timestep_string according to the specification. The representative period timesteps (rp_timesteps) might not be used in the computation, but it will be used for validation.

The specification defines what is expected from the timestep_string:

  • :uniform: The timestep_string should be a single number indicating the duration of each block. Examples: "3", "4", "1".
  • :explicit: The timestep_string should be a semicolon-separated list of integers. Each integer is a duration of a block. Examples: "3;3;3;3", "4;4;4", "1;1;1;1;1;1;1;1;1;1;1;1", and "3;3;4;2".
  • :math: The timestep_string should be an expression of the form NxD+NxD…, where D is the duration of the block and N is the number of blocks. Examples: "4x3", "3x4", "12x1", and "2x3+1x4+1x2".

The generated blocks will be ranges (a:b). The first block starts at 1, and the last block ends at length(rp_timesteps).

The following table summarizes the formats for a rp_timesteps = 1:12:

Output                 | :uniform | :explicit                | :math
1:3, 4:6, 7:9, 10:12   | 3        | 3;3;3;3                  | 4x3
1:4, 5:8, 9:12         | 4        | 4;4;4                    | 3x4
1:1, 2:2, …, 12:12     | 1        | 1;1;1;1;1;1;1;1;1;1;1;1  | 12x1
1:3, 4:6, 7:10, 11:12  | NA       | 3;3;4;2                  | 2x3+1x4+1x2

Examples

using TulipaEnergyModel
 TulipaEnergyModel._parse_rp_partition(Val(:uniform), "3", 1:12)
 
 # output
@@ -29,26 +29,26 @@
  1:3
  4:6
  7:10
 11:12
source
TulipaEnergyModel.add_expression_is_charging_terms_intra_rp_constraints!Method
add_expression_is_charging_terms_intra_rp_constraints!(df_cons,
                                                    df_is_charging,
                                                    workspace
                                                   )

Computes the is_charging expressions per row of df_cons for the constraints that are within (intra) the representative periods.

This function is only used internally in the model.

This strategy is based on the replies in this discourse thread:

  • https://discourse.julialang.org/t/help-improving-the-speed-of-a-dataframes-operation/107615/23
source
TulipaEnergyModel.add_expression_terms_inter_rp_constraints!Method
add_expression_terms_inter_rp_constraints!(df_inter,
                                            df_flows,
                                            df_map,
                                            graph,
                                            representative_periods,
                                           )

Computes the incoming and outgoing expressions per row of df_inter for the constraints that are between (inter) the representative periods.

This function is only used internally in the model.

source
TulipaEnergyModel.add_expression_terms_intra_rp_constraints!Method
add_expression_terms_intra_rp_constraints!(df_cons,
                                            df_flows,
                                            workspace,
                                            representative_periods,
                                            graph;
                                            use_highest_resolution = true,
                                            multiply_by_duration = true,
                                           )

Computes the incoming and outgoing expressions per row of df_cons for the constraints that are within (intra) the representative periods.

This function is only used internally in the model.

This strategy is based on the replies in this discourse thread:

  • https://discourse.julialang.org/t/help-improving-the-speed-of-a-dataframes-operation/107615/23
source
TulipaEnergyModel.add_expression_units_on_terms_intra_rp_constraints!Method
add_expression_units_on_terms_intra_rp_constraints!(
     df_cons,
     df_units_on,
     workspace,
)

Computes the units_on expressions per row of df_cons for the constraints that are within (intra) the representative periods.

This function is only used internally in the model.

This strategy is based on the replies in this discourse thread:

  • https://discourse.julialang.org/t/help-improving-the-speed-of-a-dataframes-operation/107615/23
source
TulipaEnergyModel.calculate_annualized_costMethod
calculate_annualized_cost(discount_rate, economic_lifetime, investment_cost, years, investable_assets)

Calculates the annualized cost for each asset, both energy assets and transport assets, in each year using provided discount rates, economic lifetimes, and investment costs.

Arguments

  • discount_rate::Dict: A dictionary where the key is an asset or a pair of assets (asset1, asset2) for transport assets, and the value is the discount rate.
  • economic_lifetime::Dict: A dictionary where the key is an asset or a pair of assets (asset1, asset2) for transport assets, and the value is the economic lifetime.
  • investment_cost::Dict: A dictionary where the key is a tuple (year, asset) or (year, (asset1, asset2)) for transport assets, and the value is the investment cost.
  • years::Array: An array of years to be considered.
  • investable_assets::Dict: A dictionary where the key is a year, and the value is an array of assets that are relevant for that year.

Returns

  • A Dict where the keys are tuples (year, asset) representing the year and the asset, and the values are the calculated annualized cost for each asset in each year.

Formula

The annualized cost for each asset in year is calculated using the formula:

annualized_cost = discount_rate[asset] / (
     (1 + discount_rate[asset]) *
     (1 - 1 / (1 + discount_rate[asset])^economic_lifetime[asset])
 ) * investment_cost[(year, asset)]

Example for energy assets

discount_rate = Dict("asset1" => 0.05, "asset2" => 0.07)
@@ -87,7 +87,7 @@
 Dict{Tuple{Int64, Tuple{String, String}}, Float64} with 3 entries:
   (2022, ("asset1", "asset2")) => 135.671
   (2021, ("asset3", "asset4")) => 153.918
  (2021, ("asset1", "asset2")) => 123.338
source
TulipaEnergyModel.calculate_salvage_valueMethod
calculate_salvage_value(discount_rate,
                         economic_lifetime,
                         annualized_cost,
                         years,
@@ -143,7 +143,7 @@
 Dict{Tuple{Int64, Tuple{String, String}}, Float64} with 3 entries:
   (2022, ("asset1", "asset2")) => 964.325
   (2021, ("asset3", "asset4")) => 1202.24
  (2021, ("asset1", "asset2")) => 759.2
source
TulipaEnergyModel.calculate_weight_for_investment_discountsMethod
calculate_weight_for_investment_discounts(social_rate,
                                           discount_year,
                                           salvage_value,
                                           investment_cost,
@@ -214,12 +214,12 @@
 Dict{Tuple{Int64, Tuple{String, String}}, Float64} with 3 entries:
   (2022, ("asset1", "asset2")) => 0.0797817
   (2021, ("asset3", "asset4")) => 0.13097
  (2021, ("asset1", "asset2")) => 0.158874
source
TulipaEnergyModel.calculate_weight_for_investment_discountsMethod
calculate_weight_for_investment_discounts(graph::MetaGraph,
                                           years,
                                           investable_assets,
                                           assets,
                                           model_parameters,
                                         )

Calculates the weight for investment discounts for each asset, both energy assets and transport assets. Internally calls calculate_annualized_cost, calculate_salvage_value, calculate_weight_for_investment_discounts.

Arguments

  • graph::MetaGraph: A graph
  • years::Array: An array of years to be considered.
  • investable_assets::Dict: A dictionary where the key is a year, and the value is an array of assets that are relevant for that year.
  • assets::Array: An array of assets.
  • model_parameters::ModelParameters: A model parameters structure.

Returns

  • A Dict where the keys are tuples (year, asset) representing the year and the asset, and the values are the weights for investment discounts.
source
TulipaEnergyModel.compute_assets_partitions!Method
compute_assets_partitions!(partitions, df, a, representative_periods)

Parses the time blocks in the DataFrame df for the asset a and every representative period in the timesteps_per_rp dictionary, modifying the input partitions.

partitions must be a dictionary indexed by the representative periods, possibly empty.

timesteps_per_rp must be a dictionary indexed by rep_period and its values are the timesteps of that rep_period.

To obtain the partitions, the columns specification and partition from df are passed to the function _parse_rp_partition.

source
TulipaEnergyModel.compute_constraints_partitionsMethod
cons_partitions = compute_constraints_partitions(graph, representative_periods)

Computes the constraints partitions using the assets and flows partitions stored in the graph, and the representative periods.

The function computes the constraints partitions by iterating over the partition dictionary, which specifies the partition strategy for each resolution (i.e., lowest or highest). For each asset and representative period, it calls the compute_rp_partition function to compute the partition based on the strategy.

source
TulipaEnergyModel.compute_dual_variablesMethod
compute_dual_variables(model)

Compute the dual variables for the given model.

If the model does not have dual variables, this function fixes the discrete variables, optimizes the model, and then computes the dual variables.

Arguments

  • model: The model for which to compute the dual variables.

Returns

A named tuple containing the dual variables of selected constraints.

source
TulipaEnergyModel.compute_flows_partitions!Method
compute_flows_partitions!(partitions, df, u, v, representative_periods)

Parses the time blocks in the DataFrame df for the flow (u, v) and every representative period in the timesteps_per_rp dictionary, modifying the input partitions.

partitions must be a dictionary indexed by the representative periods, possibly empty.

timesteps_per_rp must be a dictionary indexed by rep_period and its values are the timesteps of that rep_period.

To obtain the partitions, the columns specification and partition from df are passed to the function _parse_rp_partition.

source
TulipaEnergyModel.compute_rp_partitionMethod
rp_partition = compute_rp_partition(partitions, :lowest)

Given the timesteps of various flows/assets in the partitions input, compute the representative period partitions.

Each element of partitions is a partition with the following assumptions:

  • An element is of the form V = [r₁, r₂, …, rₘ], where each rᵢ is a range a:b.
  • r₁ starts at 1.
  • rᵢ₊₁ starts at the end of rᵢ plus 1.
  • rₘ ends at some value N, that is the same for all elements of partitions.

Notice that this implies that they form a disjunct partition of 1:N.

The output will also be a partition with the conditions above.

Strategies

:lowest

If strategy = :lowest (default), then the output is constructed greedily, i.e., it selects the next largest breakpoint following the algorithm below:

  1. Input: Vᴵ₁, …, Vᴵₚ, a list of time blocks. Each element of Vᴵⱼ is a range r = r.start:r.end. Output: V.
  2. Compute the end of the representative period N (all Vᴵⱼ should have the same end)
  3. Start with an empty V = []
  4. Define the beginning of the range s = 1
  5. Define an array with all the next breakpoints B such that Bⱼ is the first r.end such that r.end ≥ s for each r ∈ Vᴵⱼ.
  6. The end of the range will be the e = max Bⱼ.
  7. Define r = s:e and add r to the end of V.
  8. If e = N, then END
  9. Otherwise, define s = e + 1 and go to step 4.

Examples

partition1 = [1:4, 5:8, 9:12]
 partition2 = [1:3, 4:6, 7:9, 10:12]
 compute_rp_partition([partition1, partition2], :lowest)
 
@@ -267,12 +267,12 @@
  7:7
  8:9
  10:10
 11:12
source
TulipaEnergyModel.construct_dataframesMethod
dataframes = construct_dataframes(
     graph,
     representative_periods,
    constraints_partitions,
     years,
)

Computes the data frames used to linearize the variables and constraints. These are used internally in the model only.

source
TulipaEnergyModel.create_internal_structuresMethod
graph, representative_periods, timeframe  = create_internal_structures(connection)

Return the graph, representative_periods, and timeframe structures given the data loaded into the connection.

The details of these structures are described in the Structures section of the How to Use page.

source
TulipaEnergyModel.create_modelMethod
model = create_model(graph, representative_periods, dataframes, timeframe, groups; write_lp_file = false)

Create the energy model given the graph, representative_periods, dictionary of dataframes (created by construct_dataframes), timeframe, and groups.

source
TulipaEnergyModel.default_parametersMethod
default_parameters(Val(optimizer_name_symbol))
default_parameters(optimizer)
default_parameters(optimizer_name_symbol)
default_parameters(optimizer_name_string)

Returns the default parameters for a given JuMP optimizer. Falls back to Dict() for undefined solvers.

Arguments

There are four ways to use this function:

  • Val(optimizer_name_symbol): This uses type dispatch with the special Val type. Pass the solver name as a Symbol (e.g., Val(:HiGHS)).
  • optimizer: The JuMP optimizer type (e.g., HiGHS.Optimizer).
  • optimizer_name_symbol or optimizer_name_string: Pass the name in Symbol or String format and it will be converted to Val.

Using Val is necessary for the dispatch. All other cases will convert the argument and call the Val version, which might lead to type instability.

Examples

using HiGHS

# output

true
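
As a minimal sketch of the four call forms (assuming the HiGHS package is installed; the exact contents of the returned dictionary depend on the solver, and undefined solvers fall back to an empty Dict()):

using HiGHS
using TulipaEnergyModel

p1 = default_parameters(Val(:HiGHS))      # type-dispatch form
p2 = default_parameters(HiGHS.Optimizer)  # JuMP optimizer type
p3 = default_parameters(:HiGHS)           # Symbol, converted to Val internally
p4 = default_parameters("HiGHS")          # String, converted to Val internally
p1 == p2 == p3 == p4                      # all four forms should agree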
source
TulipaEnergyModel.filter_graphMethod
filter_graph(graph, elements, value, key)
 filter_graph(graph, elements, value, key, year)

Helper function to filter elements (assets or flows) in the graph given a key (and possibly year) and value (or values). In the safest case, this is equivalent to the filters

filter_assets_whose_key_equal_to_value = a -> graph[a].key == value
filter_assets_whose_key_year_equal_to_value = a -> graph[a].key[year] in value
filter_flows_whose_key_equal_to_value = f -> graph[f...].key == value
filter_flows_whose_key_year_equal_to_value = f -> graph[f...].key[year] in value
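
For instance, a sketch in which the field :type and the value "producer" are illustrative assumptions:

producer_assets = filter_graph(graph, labels(graph), "producer", :type)  # assets whose type equals "producer"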
source
TulipaEnergyModel.get_graph_value_or_missingMethod
get_graph_value_or_missing(graph, graph_key, field_key)
get_graph_value_or_missing(graph, graph_key, field_key, year)

Get graph[graph_key].field_key (or graph[graph_key].field_key[year]) or return missing if any of the values do not exist. If the year is passed, we also check that graph[graph_key].active[year] is true and return missing otherwise.

source
TulipaEnergyModel.profile_aggregationMethod
profile_aggregation(agg, profiles, key, block, default_value)

Aggregates the profiles[key] over the block using the agg function. If the profile does not exist, uses default_value instead of each profile value.

profiles should be a dictionary of profiles, for instance graph[a].profiles or graph[u, v].profiles. If profiles[key] exists, then this function computes the aggregation of profiles[key] over the range block using the aggregator agg, i.e., agg(profiles[key][block]). If profiles[key] does not exist, then this substitutes it with a vector of default_values.
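
For instance, a sketch using the names above (key is whatever key the profile was stored under, and 1.0 is the fallback used when no profile exists):

total = profile_aggregation(sum, graph[a].profiles, key, 1:4, 1.0)  # sum over the time block 1:4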

source
TulipaEnergyModel.read_parameters_from_fileMethod
read_parameters_from_file(filepath)

Parse the parameters from a file into a dictionary. The keys and values are NOT checked to be valid parameters for any specific solver.

The file should contain a list of lines of the following type:

key = value

The file is parsed as TOML; see the example below.

Example

# Creating file
filepath, io = mktemp()
println(io,
  """
   "small_number"   => 1.0e-8
   "true_or_false"  => true
   "real_number1"   => 3.14
-  "big_number"     => 6.66e6
source
TulipaEnergyModel.run_scenarioMethod
energy_problem = run_scenario(connection; optimizer, parameters, write_lp_file, log_file, show_log)

Run the scenario in the given connection and return the energy problem.

The optimizer and parameters keyword arguments can be used to change the optimizer (the default is HiGHS) and its parameters. Both are passed on to the solve_model function.

Set write_lp_file = true to export the problem that is sent to the solver to a file for viewing. Set show_log = false to silence printing the log while running. Specify a log_file name to export the log to a file.
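
For example, a minimal sketch, assuming connection has already been created and populated via TulipaIO as described in the tutorials:

using TulipaEnergyModel, HiGHS

energy_problem = run_scenario(
    connection;
    optimizer = HiGHS.Optimizer,               # the default, shown explicitly
    parameters = Dict("output_flag" => false), # solver options forwarded to solve_model
    write_lp_file = false,
    show_log = true,
)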

source
TulipaEnergyModel.safe_comparisonMethod
safe_comparison(graph, a, value, key)
safe_comparison(graph, a, value, key, year)

Check if graph[a].key (or graph[a].key[year]) is equal to value. This function assumes that if graph[a].key is a dictionary and value is not, then you made a mistake. This makes it safer, because it will not silently return false. It also checks for missing.

source
TulipaEnergyModel.safe_inclusionMethod
safe_inclusion(graph, a, value, key)
safe_inclusion(graph, a, value, key, year)

Check if graph[a].key (or graph[a].key[year]) is in value. This correctly checks that missing in [missing] returns false.

source
TulipaEnergyModel.save_solution_to_fileMethod
save_solution_to_file(output_file, graph, solution)

Saves the solution in CSV files inside the folder given by output_file.

The following files are created:

  • assets-investment.csv: The format of each row is a,v,p*v, where a is the asset name, v is the corresponding asset investment value, and p is the corresponding capacity value. Only investable assets are included.
  • assets-investments-energy.csv: The format of each row is a,v,p*v, where a is the asset name, v is the corresponding asset investment value on energy, and p is the corresponding energy capacity value. Only investable assets with a storage_method_energy set to true are included.
  • flows-investment.csv: Similar to assets-investment.csv, but for flows.
  • flows.csv: The value of each flow, per (from, to) flow, rp representative period and timestep. Since the flow is in power, the value at a timestep is equal to the value at the corresponding time block, i.e., if flow[1:3] = 30, then flow[1] = flow[2] = flow[3] = 30.
  • storage-level.csv: The value of each storage level, per asset, rp representative period, and timestep. Since the storage level is in energy, the value at a timestep is a proportional fraction of the value at the corresponding time block, i.e., if level[1:3] = 30, then level[1] = level[2] = level[3] = 10.
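
For instance, a sketch (the folder name is illustrative; graph and solution typically come from a solved EnergyProblem):

save_solution_to_file("results", energy_problem.graph, energy_problem.solution)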
source
TulipaEnergyModel.solve_modelFunction
solution = solve_model(model[, optimizer; parameters])

Solve the JuMP model and return the solution. The optimizer argument should be an MILP solver from the JuMP list of supported solvers. By default we use HiGHS.

The keyword argument parameters should be passed as a list of key => value pairs. These can be created manually, obtained using default_parameters, or read from a file using read_parameters_from_file.

The solution object is a mutable struct with the following fields:

  • assets_investment[a]: The investment for each asset, indexed on the investable asset a. To create a traditional array in the order given by the investable assets, one can run

    [solution.assets_investment[a] for a in labels(graph) if graph[a].investable]
  • assets_investment_energy[a]: The investment on the energy component for each asset, indexed on the investable asset a with storage_method_energy set to true. To create a traditional array in the order given by the investable assets, one can run

    [solution.assets_investment_energy[a] for a in labels(graph) if graph[a].investable && graph[a].storage_method_energy]
  • flows_investment[u, v]: The investment for each flow, indexed on the investable flow (u, v). To create a traditional array in the order given by the investable flows, one can run

    [solution.flows_investment[(u, v)] for (u, v) in edge_labels(graph) if graph[u, v].investable]
  • storage_level_intra_rp[a, rp, timesteps_block]: The storage level for the storage asset a for a representative period rp and a time block timesteps_block. The list of time blocks is defined by constraints_partitions, which was used to create the model. To create a vector with all values of storage_level_intra_rp for a given a and rp, one can run

    [solution.storage_level_intra_rp[a, rp, timesteps_block] for timesteps_block in constraints_partitions[:lowest_resolution][(a, rp)]]
  • storage_level_inter_rp[a, pb]: The storage level for the storage asset a for a periods block pb. To create a vector with all values of storage_level_inter_rp for a given a, one can run

    [solution.storage_level_inter_rp[a, pb] for pb in graph[a].timeframe_partitions[a]]
  • flow[(u, v), rp, timesteps_block]: The flow value for a given flow (u, v) at a given representative period rp, and time block timesteps_block. The list of time blocks is defined by graph[(u, v)].partitions[rp]. To create a vector with all values of flow for a given (u, v) and rp, one can run

    [solution.flow[(u, v), rp, timesteps_block] for timesteps_block in graph[u, v].partitions[rp]]
  • objective_value: A Float64 with the objective value at the solution.

  • duals: A NamedTuple containing the dual variables of selected constraints.

Examples

parameters = Dict{String,Any}("presolve" => "on", "time_limit" => 60.0, "output_flag" => true)
solution = solve_model(model, HiGHS.Optimizer; parameters = parameters)
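
Equivalently, the parameters can start from the solver defaults (a sketch, assuming HiGHS; time_limit is a standard HiGHS option):

using HiGHS

parameters = Dict{String,Any}(default_parameters(HiGHS.Optimizer))
parameters["time_limit"] = 60.0
solution = solve_model(model, HiGHS.Optimizer; parameters = parameters)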
source
TulipaEnergyModel.solve_model!Method
solution = solve_model!(dataframes, model, ...)

Solves the JuMP model, returns the solution, and modifies dataframes to include the solution. The modifications made to dataframes are:

  • df_flows.solution = solution.flow
  • df_storage_level_intra_rp.solution = solution.storage_level_intra_rp
  • df_storage_level_inter_rp.solution = solution.storage_level_inter_rp
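
For instance, a sketch mirroring solve_model above (the trailing arguments are forwarded, so an optimizer can be passed in the same way):

solution = solve_model!(dataframes, model, HiGHS.Optimizer)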
source
+ "big_number" => 6.66e6
source
TulipaEnergyModel.run_scenarioMethod
energy_problem = run_scenario(connection; optimizer, parameters, write_lp_file, log_file, show_log)

Run the scenario in the given connection and return the energy problem.

The optimizer and parameters keyword arguments can be used to change the optimizer (the default is HiGHS) and its parameters. The variables are passed to the solve_model function.

Set write_lp_file = true to export the problem that is sent to the solver to a file for viewing. Set show_log = false to silence printing the log while running. Specify a log_file name to export the log to a file.

source
TulipaEnergyModel.safe_comparisonMethod
safe_comparison(graph, a, value, key)
+safe_comparison(graph, a, value, key, year)

Check if graph[a].value (or graph[a].value[year]) is equal to value. This function assumes that if graph[a].value is a dictionary and value is not, then you made a mistake. This makes it safer, because it will not silently return false. It also checks for missing.

source
TulipaEnergyModel.safe_inclusionMethod
safe_inclusion(graph, a, value, key)
+safe_inclusion(graph, a, value, key, year)

Check if graph[a].value (or graph[a].value[year]) is in values. This correctly check that missing in [missing] returns false.

source
TulipaEnergyModel.save_solution_to_fileMethod
save_solution_to_file(output_file, graph, solution)

Saves the solution in CSV files inside output_folder.

The following files are created:

  • assets-investment.csv: The format of each row is a,v,p*v, where a is the asset name, v is the corresponding asset investment value, and p is the corresponding capacity value. Only investable assets are included.
  • assets-investments-energy.csv: The format of each row is a,v,p*v, where a is the asset name, v is the corresponding asset investment value on energy, and p is the corresponding energy capacity value. Only investable assets with a storage_method_energy set to true are included.
  • flows-investment.csv: Similar to assets-investment.csv, but for flows.
  • flows.csv: The value of each flow, per (from, to) flow, rp representative period and timestep. Since the flow is in power, the value at a timestep is equal to the value at the corresponding time block, i.e., if flow[1:3] = 30, then flow[1] = flow[2] = flow[3] = 30.
  • storage-level.csv: The value of each storage level, per asset, rp representative period, and timestep. Since the storage level is in energy, the value at a timestep is a proportional fraction of the value at the corresponding time block, i.e., if level[1:3] = 30, then level[1] = level[2] = level[3] = 10.
source
TulipaEnergyModel.solve_modelFunction
solution = solve_model(model[, optimizer; parameters])

Solve the JuMP model and return the solution. The optimizer argument should be an MILP solver from the JuMP list of supported solvers. By default we use HiGHS.

The keyword argument parameters should be passed as a list of key => value pairs. These can be created manually, obtained using default_parameters, or read from a file using read_parameters_from_file.

The solution object is a mutable struct with the following fields:

  • assets_investment[a]: The investment for each asset, indexed on the investable asset a. To create a traditional array in the order given by the investable assets, one can run

    [solution.assets_investment[a] for a in labels(graph) if graph[a].investable]
    • assets_investment_energy[a]: The investment on energy component for each asset, indexed on the investable asset a with a storage_method_energy set to true.

    To create a traditional array in the order given by the investable assets, one can run

    [solution.assets_investment_energy[a] for a in labels(graph) if graph[a].investable && graph[a].storage_method_energy
  • flows_investment[u, v]: The investment for each flow, indexed on the investable flow (u, v). To create a traditional array in the order given by the investable flows, one can run

    [solution.flows_investment[(u, v)] for (u, v) in edge_labels(graph) if graph[u, v].investable]
  • storage_level_intra_rp[a, rp, timesteps_block]: The storage level for the storage asset a for a representative period rp and a time block timesteps_block. The list of time blocks is defined by constraints_partitions, which was used to create the model. To create a vector with all values of storage_level_intra_rp for a given a and rp, one can run

    [solution.storage_level_intra_rp[a, rp, timesteps_block] for timesteps_block in constraints_partitions[:lowest_resolution][(a, rp)]]
  • storage_level_inter_rp[a, pb]: The storage level for the storage asset a for a periods block pb. To create a vector with all values of storage_level_inter_rp for a given a, one can run

    [solution.storage_level_inter_rp[a, bp] for bp in graph[a].timeframe_partitions[a]]
  • flow[(u, v), rp, timesteps_block]: The flow value for a given flow (u, v) at a given representative period rp, and time block timesteps_block. The list of time blocks is defined by graph[(u, v)].partitions[rp]. To create a vector with all values of flow for a given (u, v) and rp, one can run

    [solution.flow[(u, v), rp, timesteps_block] for timesteps_block in graph[u, v].partitions[rp]]
  • objective_value: A Float64 with the objective value at the solution.

  • duals: A NamedTuple containing the dual variables of selected constraints.

Examples

parameters = Dict{String,Any}("presolve" => "on", "time_limit" => 60.0, "output_flag" => true)
+solution = solve_model(model, HiGHS.Optimizer; parameters = parameters)
source
TulipaEnergyModel.solve_model!Method
solution = solve_model!(dataframes, model, ...)

Solves the JuMP model, returns the solution, and modifies dataframes to include the solution. The modifications made to dataframes are:

  • df_flows.solution = solution.flow
  • df_storage_level_intra_rp.solution = solution.storage_level_intra_rp
  • df_storage_level_inter_rp.solution = solution.storage_level_inter_rp
source
diff --git a/dev/index.html b/dev/index.html
index 72ba7550..1cf2a7df 100644
diff --git a/dev/search_index.js b/dev/search_index.js
index 79a75abd..37063f47 100644
The standard method of modeling these assets requires extra variables and constraints for them to function correctly. For example, flows from the grid are not allowed, as they either avoid charging from the grid or require green hydrogen production. Therefore, hybrid connections typically require an additional node to regulate this connection with the grid.","category":"page"},{"location":"30-concepts/","page":"Concepts","title":"Concepts","text":"The representation of the energy system in TulipaEnergyModel.jl is based on Graph Theory, which deals with the connection between vertices by edges. This representation provides a more flexible framework to model energy assets in the system as vertices and flows between energy assets as edges. By connecting assets directly to each other (i.e., without having a node in between), we reduce the number of variables and constraints needed to represent hybrid configurations, thus reducing the model size.","category":"page"},{"location":"30-concepts/","page":"Concepts","title":"Concepts","text":"Consider the following example to demonstrate the benefits of using a graph theory approach. In the classic connection approach, the nodes play a crucial role in modeling. For instance, every asset must be connected to a node with balance constraints. When a storage asset and a renewable asset are in a hybrid connection like the one described before, a connection point is needed to connect the hybrid configuration to the rest of the system. Therefore, to consider the hybrid configuration of a storage asset and a renewable asset, we must introduce a node (i.e., a connection point) between these assets and the external power grid (i.e., a balance point), as shown in the following figure:","category":"page"},{"location":"30-concepts/","page":"Concepts","title":"Concepts","text":"(Image: Classic connection)","category":"page"},{"location":"30-concepts/","page":"Concepts","title":"Concepts","text":"In this system, the phs storage asset charges and discharges from the connection point, while the wind turbine produces power that goes directly to the connection point. This connection point is connected to the external power grid through a transmission line that leads to a balance hub that connects to other assets. Essentially, the connection point acts as a balancing hub point for the assets in this hybrid configuration. Furthermore, these hybrid configurations impose an extra constraint to avoid storage charges from the power grid.","category":"page"},{"location":"30-concepts/","page":"Concepts","title":"Concepts","text":"Let's consider the modeling approach in TulipaEnergyModel.jl. As nodes are no longer needed to connect assets, we can connect them directly to each other, as shown in the figure below:","category":"page"},{"location":"30-concepts/","page":"Concepts","title":"Concepts","text":"(Image: Flexible connection)","category":"page"},{"location":"30-concepts/","page":"Concepts","title":"Concepts","text":"By implementing this approach, we can reduce the number of variables and constraints involved. For example, the balance constraint in the intermediate node and the extra constraint to avoid the storage charging from the power grid are no longer needed. Additionally, we can eliminate the variable determining the flow between the intermediate node and the power grid, because the flow from phs to balance can directly link to the external grid. 
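For orientation, the flexible-connection layout of this small example can be written as a plain edge list. This is only a sketch for reading the figures; the actual input lives in the graph-flows-data.csv file of the test case:

```julia
# The six flows (edges) of the illustrative system as (from, to) pairs.
flows = [
    ("wind", "phs"),       # charging the storage directly from the renewable
    ("wind", "balance"),   # renewable production sent to the hub
    ("phs", "balance"),    # storage discharge sent to the hub
    ("H2", "ccgt"),        # hydrogen delivered to the conversion asset
    ("ccgt", "balance"),   # electricity from the conversion asset to the hub
    ("balance", "demand"), # transport flow from the hub to the consumer
]
```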
The section comparison of different modeling approaches shows the quantification of these reductions.","category":"page"},{"location":"30-concepts/","page":"Concepts","title":"Concepts","text":"This example of a phs and a wind asset is useful for illustrating the advantages of this modeling approach and will be reused in the following sections. However, please keep in mind that there are other applications of hybrid configurations, such as battery-solar, hydro-solar, and electrolyzer-wind.","category":"page"},{"location":"30-concepts/#flex-time-res","page":"Concepts","title":"Flexible Time Resolution","text":"","category":"section"},{"location":"30-concepts/","page":"Concepts","title":"Concepts","text":"One of the core features of TulipaEnergyModel.jl is that it can handle different time resolutions on the assets and the flows. Typically, the time resolution in an energy model is hourly, like in the following figure where we have a 6-hour energy system:","category":"page"},{"location":"30-concepts/","page":"Concepts","title":"Concepts","text":"(Image: Hourly Time Resolution)","category":"page"},{"location":"30-concepts/","page":"Concepts","title":"Concepts","text":"Therefore, for this simple example, we can determine the number of constraints and variables in the optimization problem:","category":"page"},{"location":"30-concepts/","page":"Concepts","title":"Concepts","text":"Number of variables: 42 since we have six connections among assets (i.e., 6 flows x 6 hours = 36 variables) and one storage asset (i.e., 1 storage level x 6 h = 6 variables)\nNumber of constraints: 72, which are:\n24 from the maximum output limit of the assets that produce, convert, or discharge energy (i.e., H2, wind, ccgt, and phs) for each hour (i.e., 4 assets x 6 h = 24 constraints)\n6 from the maximum input limit of the storage or charging limit for the phs\n6 from the maximum storage level limit for the phs\n12 from the import and export limits for the transmission line between the balance hub and the demand\n24 from the energy balance on the consumer, hub, conversion, and storage assets (i.e., demand, balance, ccgt, and phs) for each hour (i.e., 4 assets x 6 h = 24 constraints)","category":"page"},{"location":"30-concepts/","page":"Concepts","title":"Concepts","text":"Depending on the input data and the level of detail you want to model, hourly resolution in all the variables might not be necessary. TulipaEnergyModel.jl can have different time resolutions for each asset and flow to simplify the optimization problem and approximate hourly representation. This feature is useful for large-scale energy systems that involve multiple sectors, as detailed granularity is not always necessary due to the unique temporal dynamics of each sector. For instance, we can use hourly resolution for the electricity sector and six-hour resolution for the hydrogen sector. We can couple multiple sectors, each with its own temporal resolution.","category":"page"},{"location":"30-concepts/","page":"Concepts","title":"Concepts","text":"Let's explore the flexibility of time resolution with a few examples.","category":"page"},{"location":"30-concepts/","page":"Concepts","title":"Concepts","text":"The following table shows the user input data for the definition of asset time resolution. 
Please note that the values presented in this example are just for illustrative purposes and do not represent a realistic case.","category":"page"},{"location":"30-concepts/","page":"Concepts","title":"Concepts","text":"using DataFrames # hide\nusing CSV # hide\ninput_asset_file = \"../../test/inputs/Variable Resolution/assets-rep-periods-partitions.csv\" # hide\nassets = CSV.read(input_asset_file, DataFrame, header = 2) # hide\nassets = assets[assets.asset .!= \"wind\", :] # hide","category":"page"},{"location":"30-concepts/","page":"Concepts","title":"Concepts","text":"The table shows that the H2 producer and the phs storage have a uniform definition of 6 hours. This definition means we want to represent the H2 production profile and the storage level of the phs every six hours.","category":"page"},{"location":"30-concepts/","page":"Concepts","title":"Concepts","text":"The same time resolution can be specified for the flows, for example (again, the values are for illustrative purposes and do not represent a realistic case):","category":"page"},{"location":"30-concepts/","page":"Concepts","title":"Concepts","text":"input_flow_file = \"../../test/inputs/Variable Resolution/flows-rep-periods-partitions.csv\" # hide\nflows_partitions = CSV.read(input_flow_file, DataFrame, header = 2) # hide","category":"page"},{"location":"30-concepts/","page":"Concepts","title":"Concepts","text":"The table shows a uniform definition for the flow from the hydrogen producer (H2) to the conversion asset (ccgt) of 6 hours, from the wind producer (wind) to the storage (phs) of 3 hours, and from the balance hub (balance) to the consumer (demand) of 3 hours, too. In addition, the flow from the wind producer (wind) to the balance hub (balance) is defined using the math specification of 1x2+1x4, meaning that there are two time blocks, one of two hours (i.e., 1:2) and another of four hours (i.e., 3:6). Finally, the flow from the storage (phs) to the balance hub (balance) is defined using the math specification of 1x4+1x2, meaning that there are two time blocks, one of four hours (i.e., 1:4) and another of two hours (i.e., 5:6).","category":"page"},{"location":"30-concepts/","page":"Concepts","title":"Concepts","text":"The following figure illustrates these definitions on the example system.","category":"page"},{"location":"30-concepts/","page":"Concepts","title":"Concepts","text":"(Image: Variable Time Resolution)","category":"page"},{"location":"30-concepts/","page":"Concepts","title":"Concepts","text":"So, let's recap:","category":"page"},{"location":"30-concepts/","page":"Concepts","title":"Concepts","text":"The hydrogen producer (H2) is in a 6-hour resolution represented by the range 1:6, meaning that the balance of the hydrogen produced is for every 6 hours.\nThe flow from the hydrogen producer to the ccgt power plant (H2,ccgt) is also in a 6-hour resolution 1:6.\nThe flow from the ccgt power plant to the balance hub (ccgt, balance) has hourly resolution [1,2,3,4,5,6].\nThe ccgt is a conversion plant that takes hydrogen to produce electricity. Since both sectors have different time resolutions, the energy balance in the conversion asset is defined in the lowest resolution connecting to the asset. 
In this case, the energy balance in the ccgt is defined every 6 hours, i.e., in the range 1:6.\nThe wind producer has an hourly profile of electricity production, so the resolution of the asset is hourly.\nThe wind producer output has two connections, one to the balance hub and the other to the pumped-hydro storage (phs) with different resolutions:\nThe flow from the wind producer to the phs storage (wind, phs) has a uniform resolution of two blocks from hours 1 to 3 (i.e., 1:3) and from hours 4 to 6 (i.e., 4:6).\nThe flow from the wind producer to the balance hub (wind, balance) has a variable resolution of two blocks, too, but from hours 1 to 2 (i.e., 1:2) and from hours 3 to 6 (i.e., 3:6).\nThe phs is in a 6-hour resolution represented by the range 1:6, meaning the storage balance is determined every 6 hours.\nThe flow from the phs to the balance (phs, balance) represents the discharge of the phs. This flow has a variable resolution of two blocks from hours 1 to 4 (i.e., 1:4) and from hours 5 to 6 (i.e., 5:6), which differs from the one defined for the charging flow from the wind asset.\nThe demand consumption has hourly input data with one connection to the balance hub:\nThe flow from the balance hub to the demand (balance, demand) has a uniform resolution of 3 hours; therefore, it has two blocks, one from hours 1 to 3 (i.e., 1:3) and the other from hours 4 to 6 (i.e., 4:6).\nThe balance hub integrates all the different assets with their different resolutions. The lowest resolution of all connections determines the balance equation for this asset. Therefore, the resulting resolution is into two blocks, one from hours 1 to 4 (i.e., 1:4) and the other from hours 5 to 6 (i.e., 5:6).","category":"page"},{"location":"30-concepts/","page":"Concepts","title":"Concepts","text":"Note: This example demonstrates that different time resolutions can be assigned to each asset and flow in the model. Additionally, the resolutions do not need to be uniform and can vary throughout the horizon.","category":"page"},{"location":"30-concepts/","page":"Concepts","title":"Concepts","text":"The complete input data for this example can be found here.","category":"page"},{"location":"30-concepts/","page":"Concepts","title":"Concepts","text":"Due to the flexible resolution, we must explicitly state how the constraints are constructed. For each constraint, three things need to be considered:","category":"page"},{"location":"30-concepts/","page":"Concepts","title":"Concepts","text":"Whether it is type power or type energy.\ntype power: highest resolution\ntype energy: lowest resolution (multiplied by durations)\nHow the resolution is determined (regardless of whether it is highest or lowest): the incoming flows, the outgoing flows, or a combination of both.\nHow the related parameters are treated. We use two methods of aggregation, sum or mean.","category":"page"},{"location":"30-concepts/","page":"Concepts","title":"Concepts","text":"Below is the table outlining the details for each type of constraint. 
Note min means highest resolution, and max means lowest resolution.","category":"page"},{"location":"30-concepts/","page":"Concepts","title":"Concepts","text":"Name Variables involved Profile involved Constraint type Resolution of the constraints Profile aggregation\nConsumer Balance inputs, outputs demand power min(incoming flows, outgoing flows) mean\nStorage Balance inputs, outputs, storage level inflows energy max(asset, min(incoming flows, outgoing flows)) sum\nHub Balance inputs, outputs - power min(incoming flows, outgoing flows) -\nConversion Balance inputs, outputs - energy max(incoming flows, outgoing flows) -\nProducers Capacity Constraints outputs availability power min(outgoing flows) mean\nStorage Capacity Constraints (outgoing) outputs - power min(outgoing flows) -\nConversion Capacity Constraints (outgoing) outputs - power min(outgoing flows) -\nConversion Capacity Constraints (incoming) inputs - power min(incoming flows) -\nStorage Capacity Constraints (incoming) inputs - power min(incoming flows) -\nTransport Capacity Constraints (upper bounds) flow availability power if it connects two hubs or demands then max(hub a,hub b), otherwise its own mean\nTransport Capacity Constraints (lower bounds) flow availability power if it connects two hubs or demands then max(hub a,hub b), otherwise its own mean\nMaximum Energy Limits (outgoing) outputs max_energy energy Determine by timeframe partitions. The default value is for each period in the timeframe sum\nMinimum Energy Limits (outgoing) outputs min_energy energy Determine by timeframe partitions. The default value is for each period in the timeframe sum\nMaximum Output Flow with Unit Commitment outputs, units_on availability power min(outgoing flows, units_on) mean\nMinimum Output Flow with Unit Commitment outputs, units_on availability power min(outgoing flows, units_on) mean\nMaximum Ramp Up Flow with Unit Commitment outputs, units_on availability power min(outgoing flows, units_on) mean\nMaximum Ramp Down Flow with Unit Commitment outputs, units_on availability power min(outgoing flows, units_on) mean\nMaximum Ramp Up Flow without Unit Commitment outputs availability power min(outgoing flows) mean\nMaximum Ramp Down Flow without Unit Commitment outputs availability power min(outgoing flows) mean","category":"page"},{"location":"30-concepts/","page":"Concepts","title":"Concepts","text":"For this basic example, we can describe the balance and capacity constraints in the model. For the sake of simplicity, we consider only the intra-temporal constraints, the representative period index is dropped from the equations, and there are no investment variables in the equations.","category":"page"},{"location":"30-concepts/#Energy-Balance-Constraints","page":"Concepts","title":"Energy Balance Constraints","text":"","category":"section"},{"location":"30-concepts/","page":"Concepts","title":"Concepts","text":"In the following sections, we lay out all the balance constraints of this example.","category":"page"},{"location":"30-concepts/#Storage-Balance","page":"Concepts","title":"Storage Balance","text":"","category":"section"},{"location":"30-concepts/","page":"Concepts","title":"Concepts","text":"As shown in the table, the resolution of the storage balance is energy, which is calculated by max(asset, min(incoming flows, outgoing flows)). The resolutions of the incoming and outgoing flows of the storage are 1:3, 4:6, 1:4, and 5:6, resulting in a minimum resolution of 2. The resolution of the storage is 6. 
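Written out in Julia (a small sketch of the rule just stated, not the package's internal code), the storage-balance resolution for this example works out as follows:

```julia
incoming_durations = [3, 3]  # wind → phs flow blocks 1:3 and 4:6
outgoing_durations = [4, 2]  # phs → balance flow blocks 1:4 and 5:6
asset_resolution   = 6       # phs storage-level partition

flow_resolution    = min(minimum(incoming_durations), minimum(outgoing_durations))  # 2
balance_resolution = max(asset_resolution, flow_resolution)                         # 6
```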
Then, max(asset, min(incoming flows, outgoing flows)) becomes max(6, min(3, (4, 2))) which results in 6, and thus this balance is for every 6 hours. The charging and discharging flows are multiplied by their durations to account for the energy in the range 1:6.","category":"page"},{"location":"30-concepts/","page":"Concepts","title":"Concepts","text":"beginaligned\n textstorage_balance_textphs16 \n qquad v^textintra-storage_textphs16 = 3 cdot p^texteff_(textwindtextphs) cdot v^textflow_(textwindtextphs)13 + 3 cdot p^texteff_(textwindtextphs) cdot v^textflow_(textwindtextphs)46 \n qquad quad - frac4p^texteff_(textphstextbalance) cdot v^textflow_(textphstextbalance)14 - frac2p^texteff_(textphstextbalance) cdot v^textflow_(textphstextbalance)56 \nendaligned","category":"page"},{"location":"30-concepts/#Consumer-Balance","page":"Concepts","title":"Consumer Balance","text":"","category":"section"},{"location":"30-concepts/","page":"Concepts","title":"Concepts","text":"The flows coming from the balancing hub are defined every 3 hours. Therefore, the flows impose the lowest resolution and the demand is balanced every 3 hours. The input demand is aggregated as the mean of the hourly values in the input data. As with the storage balance, the flows are multiplied by their durations.","category":"page"},{"location":"30-concepts/","page":"Concepts","title":"Concepts","text":"beginaligned\n textconsumer_balance_textdemand13 \n qquad v^textflow_(textbalancetextdemand)13 = p^textpeak demand_textdemand cdot fracsum_b=1^3 p^textdemand profile_textdemandb3 \n textconsumer_balance_textdemand46 \n qquad v^textflow_(textbalancetextdemand)46 = p^textpeak demand_textdemand cdot fracsum_b=4^6 p^textdemand profile_textdemandb3 \nendaligned","category":"page"},{"location":"30-concepts/#Hub-Balance","page":"Concepts","title":"Hub Balance","text":"","category":"section"},{"location":"30-concepts/","page":"Concepts","title":"Concepts","text":"The hub balance is quite interesting because it integrates several flow resolutions. Remember that we didn't define any specific time resolution for this asset. Therefore, the highest resolution of all incoming and outgoing flows in the horizon implies that the hub balance must be imposed for all 6 blocks. 
The balance must account for each flow variable's duration in each block.","category":"page"},{"location":"30-concepts/","page":"Concepts","title":"Concepts","text":"beginaligned\n texthub_balance_textbalance11 \n qquad v^textflow_(textbalancetextdemand)13 = v^textflow_(textccgttextbalance) 11 + v^textflow_(textwindtextbalance)12 + v^textflow_(textphstextbalance)14 \n texthub_balance_textbalance22 \n qquad v^textflow_(textbalancetextdemand)13 = v^textflow_(textccgttextbalance) 22 + v^textflow_(textwindtextbalance)12 + v^textflow_(textphstextbalance)14 \n texthub_balance_textbalance33 \n qquad v^textflow_(textbalancetextdemand)13 = v^textflow_(textccgttextbalance) 33 + v^textflow_(textwindtextbalance)36 + v^textflow_(textphstextbalance)14 \n texthub_balance_textbalance44 \n qquad v^textflow_(textbalancetextdemand)46 = v^textflow_(textccgttextbalance) 44 + v^textflow_(textwindtextbalance)36 + v^textflow_(textphstextbalance)14\n texthub_balance_textbalance55 \n qquad v^textflow_(textbalancetextdemand)46 = v^textflow_(textccgttextbalance) 55 + v^textflow_(textwindtextbalance)36 + v^textflow_(textphstextbalance)56 \n texthub_balance_textbalance66 \n qquad v^textflow_(textbalancetextdemand)46 = v^textflow_(textccgttextbalance) 66 + v^textflow_(textwindtextbalance)36 + v^textflow_(textphstextbalance)56 \nendaligned","category":"page"},{"location":"30-concepts/#Conversion-Balance","page":"Concepts","title":"Conversion Balance","text":"","category":"section"},{"location":"30-concepts/","page":"Concepts","title":"Concepts","text":"The flows connected to the CCGT conversion unit have different resolutions, too. In this case, the hydrogen imposes the lowest resolution; therefore, the energy balance in this asset is also every 6 hours.","category":"page"},{"location":"30-concepts/","page":"Concepts","title":"Concepts","text":"beginaligned\n textconversion_balance_textccgt16 \n qquad 6 cdot p^texteff_(textH2textccgt) cdot v^textflow_(textH2textccgt)16 = frac1p^texteff_(textccgttextbalance) sum_b=1^6 v^textflow_(textccgttextbalance)b \nendaligned","category":"page"},{"location":"30-concepts/#Capacity-Constraints","page":"Concepts","title":"Capacity Constraints","text":"","category":"section"},{"location":"30-concepts/","page":"Concepts","title":"Concepts","text":"All capacity constraints are defined in the highest resolution to guarantee that the flows are below the limits of each asset capacity.","category":"page"},{"location":"30-concepts/#Storage-Capacity-Constraints","page":"Concepts","title":"Storage Capacity Constraints","text":"","category":"section"},{"location":"30-concepts/","page":"Concepts","title":"Concepts","text":"Since the storage unit only has one input and output, the capacity limit constraints are in the same resolution as the individual flows. 
Therefore, the constraints for the outputs of the storage (i.e., discharging capacity limit) are:","category":"page"},{"location":"30-concepts/","page":"Concepts","title":"Concepts","text":"beginaligned\n textmax_output_flows_limit_textphs14 \n qquad v^textflow_(textphstextbalance)14 leq p^textinit capacity_textphs \n textmax_output_flows_limit_textphs56 \n qquad v^textflow_(textphstextbalance)56 leq p^textinit capacity_textphs \nendaligned","category":"page"},{"location":"30-concepts/","page":"Concepts","title":"Concepts","text":"And the constraints for the inputs of the storage (i.e., charging capacity limit) are:","category":"page"},{"location":"30-concepts/","page":"Concepts","title":"Concepts","text":"beginaligned\n textmax_input_flows_limit_textphs13 \n qquad v^textflow_(textwindtextphs)13 leq p^textinit capacity_textphs \n textmax_input_flows_limit_textphs46 \n qquad v^textflow_(textwindtextphs)46 leq p^textinit capacity_textphs \nendaligned","category":"page"},{"location":"30-concepts/#Conversion-Capacity-Constraints","page":"Concepts","title":"Conversion Capacity Constraints","text":"","category":"section"},{"location":"30-concepts/","page":"Concepts","title":"Concepts","text":"Similarly, each outflow is limited to the ccgt capacity for the conversion unit.","category":"page"},{"location":"30-concepts/","page":"Concepts","title":"Concepts","text":"beginaligned\n textmax_output_flows_limit_textccgtb \n qquad v^textflow_(textccgttextbalance)b leq p^textinit capacity_textccgt quad forall b in 16 \nendaligned","category":"page"},{"location":"30-concepts/#Producer-Capacity-Constraints","page":"Concepts","title":"Producer Capacity Constraints","text":"","category":"section"},{"location":"30-concepts/","page":"Concepts","title":"Concepts","text":"The wind producer asset is interesting because the output flows are in different resolutions, i.e., 1:2, 3:6, 1:3, and 4:6. The highest resolution is 1:2, 3, and 4:6. Therefore, the constraints are as follows:","category":"page"},{"location":"30-concepts/","page":"Concepts","title":"Concepts","text":"beginaligned\n textmax_output_flows_limit_textwind12 \n qquad v^textflow_(textwindtextbalance)12 + v^textflow_(textwindtextphs)13 leq fracp^textinit capacity_textwind2 cdot sum_b=1^2 p^textavailability profile_textwindb \n textmax_output_flows_limit_textwind3 \n qquad v^textflow_(textwindtextbalance)36 + v^textflow_(textwindtextphs)13 leq p^textinit capacity_textwind cdot p^textavailability profile_textwind3 \n textmax_output_flows_limit_textwind46 \n qquad v^textflow_(textwindtextbalance)36 + v^textflow_(textwindtextphs)46 leq fracp^textinit capacity_textwind2 cdot sum_b=5^6 p^textavailability profile_textwindb \nendaligned","category":"page"},{"location":"30-concepts/","page":"Concepts","title":"Concepts","text":"Since the flow variables v^textflow_(textwind textbalance)12 and v^textflow_(textwind textbalance)13 represent power, the first constraint sets the upper bound of the power for both timestep 1 and 2, by assuming an average capacity across these two timesteps. 
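In other words, dividing the installed capacity by the block duration and summing the profile is the same as multiplying the installed capacity by the mean availability over the block, which is the mean profile aggregation listed in the summary table:

```math
\frac{p^{\text{init capacity}}_{\text{wind}}}{2} \sum_{b=1}^{2} p^{\text{availability profile}}_{\text{wind},b}
= p^{\text{init capacity}}_{\text{wind}} \cdot \operatorname{mean}\left(p^{\text{availability profile}}_{\text{wind},1}, \; p^{\text{availability profile}}_{\text{wind},2}\right)
```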
The same applies to the other two constraints.","category":"page"},{"location":"30-concepts/","page":"Concepts","title":"Concepts","text":"The hydrogen (H2) producer capacity limit is straightforward, since both the asset and the flow definitions are in the same time resolution:","category":"page"},{"location":"30-concepts/","page":"Concepts","title":"Concepts","text":"beginaligned\n textmax_output_flows_limit_textH216 \n qquad v^textflow_(textH2textccgt)16 leq p^textinit capacity_textH2 cdot p^textavailability profile_textH216 \nendaligned","category":"page"},{"location":"30-concepts/#Transport-Capacity-Constraints","page":"Concepts","title":"Transport Capacity Constraints","text":"","category":"section"},{"location":"30-concepts/","page":"Concepts","title":"Concepts","text":"For the connection from the hub to the demand, there are associated transmission capacity constraints, which are in the same resolution as the flow:","category":"page"},{"location":"30-concepts/","page":"Concepts","title":"Concepts","text":"beginaligned\n textmax_transport_flows_limit_(textbalancetextdemand)13 \n qquad v^textflow_(textbalancetextdemand)13 leq p^textinit export capacity_(textbalancetextdemand) \n textmax_transport_flows_limit_(textbalancetextdemand)46 \n qquad v^textflow_(textbalancetextdemand)46 leq p^textinit export capacity_(textbalancetextdemand) \nendaligned","category":"page"},{"location":"30-concepts/","page":"Concepts","title":"Concepts","text":"beginaligned\n textmin_transport_flows_limit_(textbalancetextdemand)13 \n qquad v^textflow_(textbalancetextdemand)13 geq - p^textinit import capacity_(textbalancetextdemand) \n textmin_transport_flows_limit_(textbalancetextdemand)46 \n qquad v^textflow_(textbalancetextdemand)46 geq - p^textinit import capacity_(textbalancetextdemand) \nendaligned","category":"page"},{"location":"30-concepts/#Storage-Level-limits","page":"Concepts","title":"Storage Level limits","text":"","category":"section"},{"location":"30-concepts/","page":"Concepts","title":"Concepts","text":"Since the system has a storage asset, we must limit the maximum storage level. The phs time resolution is defined for every 6 hours, so we only have one constraint.","category":"page"},{"location":"30-concepts/","page":"Concepts","title":"Concepts","text":"beginaligned\n textmax_storage_level_limit_textphs16 \n qquad v^textintra-storage_textphs16 leq p^textinit storage capacity_textphs\nendaligned","category":"page"},{"location":"30-concepts/#comparison","page":"Concepts","title":"Comparison of Different Modeling Approaches","text":"","category":"section"},{"location":"30-concepts/","page":"Concepts","title":"Concepts","text":"This section quantifies the advantages of the flexible connection and flexible time resolution in the TulipaEnergyModel.jl modeling approach. 
So, let us consider three different approaches based on the same example:","category":"page"},{"location":"30-concepts/","page":"Concepts","title":"Concepts","text":"Classic approach with hourly resolution: This approach needs an extra asset, node, to create the hybrid operation of the phs and wind assets.\nFlexible connection with hourly resolution: This approach uses the flexible connection to represent the hybrid operation of the phs and wind assets.\nFlexible connection and flexible time resolution: This approach uses both features, the flexible connection and the flexible time resolution.","category":"page"},{"location":"30-concepts/","page":"Concepts","title":"Concepts","text":"Note: The flexibility of TulipaEnergyModel.jl allows any of these three modeling approaches.","category":"page"},{"location":"30-concepts/","page":"Concepts","title":"Concepts","text":"The table below shows the constraints and variables for each approach over a 6-hour horizon. These results show the potential of flexible connections and time resolution for reducing the size of the optimization model.","category":"page"},{"location":"30-concepts/","page":"Concepts","title":"Concepts","text":"Modeling approach Nº Variables Nº Constraints Objective Function\nClassic approach with hourly resolution 48 84 28.4365\nFlexible connection with hourly resolution 42 72 28.4365\nFlexible connection and time resolution 16 29 28.4587","category":"page"},{"location":"30-concepts/","page":"Concepts","title":"Concepts","text":"By comparing the classic approach with the other methods, we can analyze their differences:","category":"page"},{"location":"30-concepts/","page":"Concepts","title":"Concepts","text":"The flexible connection with hourly resolution reduces the model size by 6 variables (12.5%) and 12 constraints (approx. 14%). Notice that we include the 6 extra constraints related to not allowing charging from the grid, although these constraints can also be modeled as bounds. Finally, the objective function value is the same, since we use an hourly time resolution in both cases.\nThe combination of both features reduces the model size by 32 variables (approx. 67%) and 55 constraints (approx. 65%), with an approximation error of approx. 0.073%.","category":"page"},{"location":"30-concepts/","page":"Concepts","title":"Concepts","text":"The level of reduction and approximation error will depend on the case study. Some cases that would benefit from this feature include:","category":"page"},{"location":"30-concepts/","page":"Concepts","title":"Concepts","text":"Coupling different energy sectors with various dynamics. For instance, methane, hydrogen, and heat sectors can be represented in energy models with lower resolutions (e.g., 4, 6, or 12h) than the electricity sector, which is usually modeled in higher resolutions (e.g., 1h, 30 min).\nHaving high resolutions for all assets in a large-scale case study may not be necessary. For example, if analyzing a European case study focusing on a specific country like The Netherlands, hourly details for distant countries (such as Portugal and Spain) may not be required. However, one would still want to consider their effect on The Netherlands without causing too much computational burden. In such cases, flexible time resolution can maintain hourly details in the focus country, while reducing the detail in distant countries by increasing their resolution (to two hours or more). 
This reduction allows a broader scope without over-burdening computation.","category":"page"},{"location":"30-concepts/#flex-time-res-uc","page":"Concepts","title":"Flexible Time Resolution in the Unit Commitment and Ramping Constraints","text":"","category":"section"},{"location":"30-concepts/","page":"Concepts","title":"Concepts","text":"In the previous section, we have seen how the flexible temporal resolution is handled for the model's flow capacity and balance constraints. Here, we show how flexible time resolution is applied when considering the model's unit commitment and ramping constraints. Let's consider the example in the folder test/inputs/UC-ramping to explain how all these constraints are created in TulipaEnergyModel.jl when having the flexible time resolution.","category":"page"},{"location":"30-concepts/","page":"Concepts","title":"Concepts","text":"(Image: unit-commitment-case-study)","category":"page"},{"location":"30-concepts/","page":"Concepts","title":"Concepts","text":"The example demonstrates various assets that supply demand. Each asset has different input data in the assets-data file, which activates different sets of constraints based on the method. For example, the gas producer has ramping constraints but not unit commitment constraints, while the ocgt conversion has unit commitment constraints but not ramping constraints. Lastly, the ccgt and smr assets both have unit commitment and ramping constraints.","category":"page"},{"location":"30-concepts/","page":"Concepts","title":"Concepts","text":"using DataFrames # hide\nusing CSV # hide\ninput_dir = \"../../test/inputs/UC-ramping\" # hide\nassets_data = CSV.read(joinpath(input_dir, \"assets-data.csv\"), DataFrame, header = 2) # hide\ngraph_assets = CSV.read(joinpath(input_dir, \"graph-assets-data.csv\"), DataFrame, header = 2) # hide\nassets = leftjoin(graph_assets, assets_data, on=:name) # hide\nfiltered_assets = assets[assets.type .== \"producer\" .|| assets.type .== \"conversion\", [\"name\", \"type\", \"capacity\", \"initial_units\", \"unit_commitment\", \"ramping\"]] # hide","category":"page"},{"location":"30-concepts/","page":"Concepts","title":"Concepts","text":"The assets-rep-periods-partitions file defines the time resolution for the assets in the partition column. For instance, here we can see that the time resolutions are 3h for the ccgt and 6h for the smr. These values mean that the unit commitment variables (e.g., units_on) in the model have three and six hours resolution, respectively.","category":"page"},{"location":"30-concepts/","page":"Concepts","title":"Concepts","text":"assets_partitions_data = CSV.read(joinpath(input_dir, \"assets-rep-periods-partitions.csv\"), DataFrame, header = 2) # hide\nfiltered_assets_partitions = assets_partitions_data[!, [\"asset\", \"specification\", \"partition\"]] # hide","category":"page"},{"location":"30-concepts/","page":"Concepts","title":"Concepts","text":"The flows-rep-periods-partitions file defines the time resolution for the flows. 
In this example, we have that the flows from the gas asset to the ccgt and from the ccgt asset to the demand are in a 2h resolution.","category":"page"},{"location":"30-concepts/","page":"Concepts","title":"Concepts","text":"flows_partitions_data = CSV.read(joinpath(input_dir, \"flows-rep-periods-partitions.csv\"), DataFrame, header = 2) # hide\nfiltered_flows_partitions = flows_partitions_data[!, [\"from_asset\", \"to_asset\", \"specification\", \"partition\"]] # hide","category":"page"},{"location":"30-concepts/","page":"Concepts","title":"Concepts","text":"The default value for the assets and flows partitions is 1 hour. This means that assets and flows not in the previous tables are considered on an hourly basis in the model.","category":"page"},{"location":"30-concepts/","page":"Concepts","title":"Concepts","text":"Important: It's not recommended to set up the input data partitions in such a way that the flow variables have a lower resolution than the units_on. This is because doing so will result in constraints that fix the value of the units_on in the timestep block where the flow is defined, leading to unnecessary extra variable constraints in the model. For instance, if the units_on are hourly and the flow is every two hours, then a non-zero flow in the timestep block 1:2 will require the units_on in timestep blocks 1:1 and 2:2 to be the same and equal to one. Therefore, the time resolution of the units_on should always be lower than or equal to the resolution of the flow in the asset.","category":"page"},{"location":"30-concepts/","page":"Concepts","title":"Concepts","text":"Remember that the section mathematical formulation shows the unit commitment and ramping constraints in the model considering an uniform time resolution as a reference.","category":"page"},{"location":"30-concepts/","page":"Concepts","title":"Concepts","text":"With this information, we can analyze the constraints in each of the following cases:","category":"page"},{"location":"30-concepts/","page":"Concepts","title":"Concepts","text":"Ramping in assets with multiple outputs\nUnit commitment in assets with constant time resolution\nUnit commitment and ramping in assets with flexible time resolution that are multiples of each other\nUnit commitment and ramping in assets with flexible time resolution that are not multiples of each other","category":"page"},{"location":"30-concepts/","page":"Concepts","title":"Concepts","text":"We will analyze each case in the following sections, considering the constraints resolution defined in the summary table in the flexible time resolution section. For the sake of simplicity, we only show the asset a and timestep block b_k index and the constraints as they appear in the .lp file of the example, i.e., with all the coefficients and RHS values calculated from the input parameters. The .lp file can be exported using the keyword argument write_lp_file = true in the run_scenario function.","category":"page"},{"location":"30-concepts/#Ramping-in-Assets-with-Multiple-Outputs","page":"Concepts","title":"Ramping in Assets with Multiple Outputs","text":"","category":"section"},{"location":"30-concepts/","page":"Concepts","title":"Concepts","text":"In the case of the gas asset, there are two output flows above the minimum operating point with different time resolutions. The ramping constraints follow the highest time resolution of the two flows at each timestep block. 
Since the highest resolution is always defined by the hourly output of the flow(gas,ocgt), the ramping constraints are also hourly. The figure below illustrates this situation.","category":"page"},{"location":"30-concepts/","page":"Concepts","title":"Concepts","text":"(Image: unit-commitment-gas-asset)","category":"page"},{"location":"30-concepts/","page":"Concepts","title":"Concepts","text":"Let's now take a look at the resulting constraints in the model.","category":"page"},{"location":"30-concepts/","page":"Concepts","title":"Concepts","text":"max_ramp_up(gas): The first constraint starts in the second timestep block and takes the difference between the output flows above the minimum operating point from b_k = 2:2 and b_k = 1:1. Note that since the flow(gas,ccgt) is the same in both timestep blocks, the only variables that appear in this first constraint are the ones associated with the flow(gas,ocgt). The second constraint takes the difference between the output flows from b_k = 3:3 and b_k = 2:2; in this case, there is a change in the flow(gas,ocgt); therefore, the constraint considers both changes in the output flows of the asset. In addition, the ramping parameter is multiplied by the flow duration with the highest resolution, i.e., one hour, which is the duration of the flow(gas,ocgt).","category":"page"},{"location":"30-concepts/","page":"Concepts","title":"Concepts","text":"b_k =\n2:2: -1 flow(gas,ocgt,1:1) + 1 flow(gas,ocgt,2:2) <= 1494\nb_k =\n3:3: -1 flow(gas,ocgt,2:2) + 1 flow(gas,ocgt,3:3) - 1 flow(gas,ccgt,1:2) + 1 flow(gas,ccgt,3:4) <= 1494\nb_k =\n4:4: -1 flow(gas,ocgt,3:3) + 1 flow(gas,ocgt,4:4) <= 1494\nb_k =\n5:5: -1 flow(gas,ocgt,4:4) + 1 flow(gas,ocgt,5:5) - 1 flow(gas,ccgt,3:4) + 1 flow(gas,ccgt,5:6) <= 1494","category":"page"},{"location":"30-concepts/","page":"Concepts","title":"Concepts","text":"For the maximum ramp down we have similar constraints to the ones shown above.","category":"page"},{"location":"30-concepts/#Unit-Commitment-in-Assets-with-Constant-Time-Resolution","page":"Concepts","title":"Unit Commitment in Assets with Constant Time Resolution","text":"","category":"section"},{"location":"30-concepts/","page":"Concepts","title":"Concepts","text":"The ocgt asset includes both the flow(ocgt,demand) and the asset time resolution, which defines the resolution of the units_on variable, with a default setting of one hour. As a result, the unit commitment constraints are also set on an hourly basis. This is the conventional method for representing these types of constraints in power system models. The figure below illustrates this situation.","category":"page"},{"location":"30-concepts/","page":"Concepts","title":"Concepts","text":"(Image: unit-commitment-ocgt-asset)","category":"page"},{"location":"30-concepts/","page":"Concepts","title":"Concepts","text":"Let's now take a look at the resulting constraints in the model. 
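Before looking at the .lp output, it may help to keep the generic shape of these unit commitment constraints in mind. The following is a hedged sketch, with symbols chosen in the spirit of the mathematical formulation section (a is the asset and b_k the timestep block), not a verbatim copy of the model's equations:

```math
\begin{aligned}
& v^{\text{units on}}_{a,b_k} \leq v^{\text{inv}}_{a} + p^{\text{initial units}}_{a} \\
& p^{\text{min operating point}}_{a} \cdot v^{\text{units on}}_{a,b_k} \;\leq\; v^{\text{flow}}_{(a,\text{demand}),b_k} \;\leq\; p^{\text{capacity}}_{a} \cdot v^{\text{units on}}_{a,b_k}
\end{aligned}
```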
Because everything is based on an hourly timestep, the equations are simple and easy to understand.","category":"page"},{"location":"30-concepts/","page":"Concepts","title":"Concepts","text":"limit_units_on(ocgt): The upper bound of the units_on is the investment variable of the asset.","category":"page"},{"location":"30-concepts/","page":"Concepts","title":"Concepts","text":"b_k =\n1:1: -1 assets_investment(ocgt) + 1 units_on(ocgt,1:1) <= 0\nb_k =\n2:2: -1 assets_investment(ocgt) + 1 units_on(ocgt,2:2) <= 0\nb_k =\n3:3: -1 assets_investment(ocgt) + 1 units_on(ocgt,3:3) <= 0","category":"page"},{"location":"30-concepts/","page":"Concepts","title":"Concepts","text":"min_output_flow(ocgt): The minimum operating point is 10 MW, so the asset must produce an output flow greater than this value when the unit is online.","category":"page"},{"location":"30-concepts/","page":"Concepts","title":"Concepts","text":"b_k =\n1:1: 1 flow(ocgt,demand,1:1) - 10 units_on(ocgt,1:1) >= 0\nb_k =\n2:2: 1 flow(ocgt,demand,2:2) - 10 units_on(ocgt,2:2) >= 0\nb_k =\n3:3: 1 flow(ocgt,demand,3:3) - 10 units_on(ocgt,3:3) >= 0","category":"page"},{"location":"30-concepts/","page":"Concepts","title":"Concepts","text":"max_output_flow(ocgt): The capacity is 100 MW, so the asset must produce an output flow lower than this value when the unit is online.","category":"page"},{"location":"30-concepts/","page":"Concepts","title":"Concepts","text":"b_k =\n1:1: 1 flow(ocgt,demand,1:1) - 100 units_on(ocgt,1:1) <= 0\nb_k =\n2:2: 1 flow(ocgt,demand,2:2) - 100 units_on(ocgt,2:2) <= 0\nb_k =\n3:3: 1 flow(ocgt,demand,3:3) - 100 units_on(ocgt,3:3) <= 0","category":"page"},{"location":"30-concepts/","page":"Concepts","title":"Concepts","text":"For the maximum ramp down we have similar constraints to the ones shown above.","category":"page"},{"location":"30-concepts/#Unit-Commitment-and-Ramping-in-Assets-with-Flexible-Time-Resolution-that-are-Multiples-of-Each-Other","page":"Concepts","title":"Unit Commitment and Ramping in Assets with Flexible Time Resolution that are Multiples of Each Other","text":"","category":"section"},{"location":"30-concepts/","page":"Concepts","title":"Concepts","text":"In this case, the smr asset has an output flow(smr,demand) on an hourly basis, but its time resolution (i.e., partition) is every six hours. Therefore, the units_on variables are defined in timestep blocks of six hours. As a result, the unit commitment and ramping constraints are set on the highest resolution of both, i.e., the hourly resolution of the flow(smr,demand). The figure below illustrates this situation.","category":"page"},{"location":"30-concepts/","page":"Concepts","title":"Concepts","text":"(Image: unit-commitment-smr-asset)","category":"page"},{"location":"30-concepts/","page":"Concepts","title":"Concepts","text":"Let's now take a look at the resulting constraints in the model.","category":"page"},{"location":"30-concepts/","page":"Concepts","title":"Concepts","text":"limit_units_on(smr): The units_on variables are defined every 6h; therefore, the upper bound of the variable is also every 6h. 
In addition, the smr is not investable and has one existing unit that limits the commitment variables.","category":"page"},{"location":"30-concepts/","page":"Concepts","title":"Concepts","text":"b_k =\n1:6: 1 units_on(smr,1:6) <= 1\nb_k =\n7:12: 1 units_on(smr,7:12) <= 1\nb_k =\n13:18: 1 units_on(smr,13:18) <= 1\nb_k =\n19:24: 1 units_on(smr,19:24) <= 1","category":"page"},{"location":"30-concepts/","page":"Concepts","title":"Concepts","text":"min_output_flow(smr): The minimum operating point is 150 MW, so the asset must produce an output flow greater than this value when the unit is online. Since the units_on variables are defined every 6h, the first six constraints show that the minimum operating point is multiplied by the variable in block 1:6. The next six constraints are multiplied by the units_on in block 7:12, and so on.","category":"page"},{"location":"30-concepts/","page":"Concepts","title":"Concepts","text":"b_k =\n1:1: 1 flow(smr,demand,1:1) - 150 units_on(smr,1:6) >= 0\nb_k =\n2:2: 1 flow(smr,demand,2:2) - 150 units_on(smr,1:6) >= 0\nb_k =\n3:3: 1 flow(smr,demand,3:3) - 150 units_on(smr,1:6) >= 0\nb_k =\n4:4: 1 flow(smr,demand,4:4) - 150 units_on(smr,1:6) >= 0\nb_k =\n5:5: 1 flow(smr,demand,5:5) - 150 units_on(smr,1:6) >= 0\nb_k =\n6:6: 1 flow(smr,demand,6:6) - 150 units_on(smr,1:6) >= 0\nb_k =\n7:7: 1 flow(smr,demand,7:7) - 150 units_on(smr,7:12) >= 0\nb_k =\n8:8: 1 flow(smr,demand,8:8) - 150 units_on(smr,7:12) >= 0","category":"page"},{"location":"30-concepts/","page":"Concepts","title":"Concepts","text":"max_output_flow(smr): The capacity is 200 MW, so the asset must produce an output flow lower than this value when the unit is online. Similiar to the minimum operating point constraint, here the units_on for the timestep block 1:6 are used in the first six constraints, the units_on for the timestep block 7:12 are used in the next six constraints, and so on.","category":"page"},{"location":"30-concepts/","page":"Concepts","title":"Concepts","text":"b_k =\n1:1: 1 flow(smr,demand,1:1) - 200 units_on(smr,1:6) <= 0\nb_k =\n2:2: 1 flow(smr,demand,2:2) - 200 units_on(smr,1:6) <= 0\nb_k =\n3:3: 1 flow(smr,demand,3:3) - 200 units_on(smr,1:6) <= 0\nb_k =\n4:4: 1 flow(smr,demand,4:4) - 200 units_on(smr,1:6) <= 0\nb_k =\n5:5: 1 flow(smr,demand,5:5) - 200 units_on(smr,1:6) <= 0\nb_k =\n6:6: 1 flow(smr,demand,6:6) - 200 units_on(smr,1:6) <= 0\nb_k =\n7:7: 1 flow(smr,demand,7:7) - 200 units_on(smr,7:12) <= 0\nb_k =\n8:8: 1 flow(smr,demand,8:8) - 200 units_on(smr,7:12) <= 0","category":"page"},{"location":"30-concepts/","page":"Concepts","title":"Concepts","text":"max_ramp_up(smr): The ramping capacity is 20MW, so the change in the output flow above the minimum operating point needs to be below that value when the asset is online. For constraints from 2:2 to 6:6, the units_on variable is the same, i.e., units_on at timestep block 1:6. The ramping constraint at timestep block 7:7 shows the units_on from the timestep block 1:6 and 7:12 since the change in the flow includes both variables. 
Note that if the units_on variable is zero in the timestep block 1:6, then the ramping constraint at timestep block 7:7 allows the asset to go from zero flow to the minimum operating point plus the ramping capacity (i.e., 150 + 20 = 170).","category":"page"},{"location":"30-concepts/","page":"Concepts","title":"Concepts","text":"b_k =\n2:2: -1 flow(smr,demand,1:1) + 1 flow(smr,demand,2:2) - 20 units_on(smr,1:6) <= 0\nb_k =\n3:3: -1 flow(smr,demand,2:2) + 1 flow(smr,demand,3:3) - 20 units_on(smr,1:6) <= 0\nb_k =\n4:4: -1 flow(smr,demand,3:3) + 1 flow(smr,demand,4:4) - 20 units_on(smr,1:6) <= 0\nb_k =\n5:5: -1 flow(smr,demand,4:4) + 1 flow(smr,demand,5:5) - 20 units_on(smr,1:6) <= 0\nb_k =\n6:6: -1 flow(smr,demand,5:5) + 1 flow(smr,demand,6:6) - 20 units_on(smr,1:6) <= 0\nb_k =\n7:7: -1 flow(smr,demand,6:6) + 1 flow(smr,demand,7:7) + 150 units_on(smr,1:6) - 170 units_on(smr,7:12) <= 0\nb_k =\n8:8: -1 flow(smr,demand,7:7) + 1 flow(smr,demand,8:8) - 20 units_on(smr,7:12) <= 0\nb_k =\n9:9: -1 flow(smr,demand,8:8) + 1 flow(smr,demand,9:9) - 20 units_on(smr,7:12) <= 0","category":"page"},{"location":"30-concepts/","page":"Concepts","title":"Concepts","text":"For the maximum ramp down we have similiar constraints as the ones shown above.","category":"page"},{"location":"30-concepts/#Unit-Commitment-and-Ramping-in-Assets-with-Flexible-Time-Resolution-that-are-NOT-Multiples-of-Each-Other","page":"Concepts","title":"Unit Commitment and Ramping in Assets with Flexible Time Resolution that are NOT Multiples of Each Other","text":"","category":"section"},{"location":"30-concepts/","page":"Concepts","title":"Concepts","text":"In this case, the ccgt asset has an output flow(ccgt,demand) on a two-hour basis, but its time resolution (i.e., partition) is every three hours. Therefore, the unist_on variables are defined in a timestep block every three hours. This setup means that the flow and unit commitment variables are not multiples of each other. As a result, the unit commitment and ramping constraints are defined on the highest resolution, meaning that we also need the intersections of both resolutions. The figure below illustrates this situation.","category":"page"},{"location":"30-concepts/","page":"Concepts","title":"Concepts","text":"(Image: unit-commitment-ccgt-asset)","category":"page"},{"location":"30-concepts/","page":"Concepts","title":"Concepts","text":"Let's now take a look at the resulting constraints in the model.","category":"page"},{"location":"30-concepts/","page":"Concepts","title":"Concepts","text":"limit_units_on(ccgt): The units_on variables are defined every 3h; therefore, the upper bound of the variable is also every 3h. In addition, the ccgt is investable and has one existing unit that limits the commitment variables.","category":"page"},{"location":"30-concepts/","page":"Concepts","title":"Concepts","text":"b_k =\n1:3: -1 assets_investment(ccgt) + 1 units_on(ccgt,1:3) <= 1\nb_k =\n4:6: -1 assets_investment(ccgt) + 1 units_on(ccgt,4:6) <= 1\nb_k =\n7:9: -1 assets_investment(ccgt) + 1 units_on(ccgt,7:9) <= 1","category":"page"},{"location":"30-concepts/","page":"Concepts","title":"Concepts","text":"min_output_flow(ccgt): The minimum operating point is 50 MW, so the asset must produce an output flow greater than this value when the unit is online. Here, we can see the impact of the constraints of having different temporal resolutions that are not multiples of each other. 
For instance, the constraint is defined for all the intersections, so 1:2, 3:3, 4:4, 5:6, etc., to ensure that the minimum operating point is correctly defined considering all the timestep blocks of the flow and the units_on variables.","category":"page"},{"location":"30-concepts/","page":"Concepts","title":"Concepts","text":"b_k =\n1:2: 1 flow(ccgt,demand,1:2) - 50 units_on(ccgt,1:3) >= 0\nb_k =\n3:3: 1 flow(ccgt,demand,3:4) - 50 units_on(ccgt,1:3) >= 0\nb_k =\n4:4: 1 flow(ccgt,demand,3:4) - 50 units_on(ccgt,4:6) >= 0\nb_k =\n5:6: 1 flow(ccgt,demand,5:6) - 50 units_on(ccgt,4:6) >= 0","category":"page"},{"location":"30-concepts/","page":"Concepts","title":"Concepts","text":"max_output_flows(ccgt): The capacity is 200 MW, so the asset must produce an output flow lower than this value when the unit is online. The situation is similar as in the minimum operating point constraint, we have constraints for all the intersections of the resolutions to ensure the correct definition of the maximum capacity.","category":"page"},{"location":"30-concepts/","page":"Concepts","title":"Concepts","text":"b_k =\n1:2: 1 flow(ccgt,demand,1:2) - 200 units_on(ccgt,1:3) <= 0\nb_k =\n3:3: 1 flow(ccgt,demand,3:4) - 200 units_on(ccgt,1:3) <= 0\nb_k =\n4:4: 1 flow(ccgt,demand,3:4) - 200 units_on(ccgt,4:6) <= 0\nb_k =\n5:6: 1 flow(ccgt,demand,5:6) - 200 units_on(ccgt,4:6) <= 0","category":"page"},{"location":"30-concepts/","page":"Concepts","title":"Concepts","text":"max_ramp_up(ccgt): The ramping capacity is 120MW, so the change in the output flow above the minimum operating point needs to be below that value when the asset is online. When the time resolutions of the flow and units_on are not multiples of each other, we encounter some counterintuitive constraints. For example, consider the constraint at timestep block 4:4. This constraint only involves units_on variables because the flow above the minimum operating point at timestep block 4:4 differs from the previous timestep block 3:3 only in terms of the units_on variables. As a result, the ramping-up constraint establishes a relationship between the units_on variable at 1:3 and 4:6. This means that if the unit is on at timestep 1:3, then it must also be on at timestep 4:6. However, this is redundant because there is already a flow variable defined for 3:4 that ensures this, thanks to the minimum operating point and maximum capacity constraints. Therefore, although this constraint is not incorrect, it is unnecessary due to the flexible time resolutions that are not multiples of each other.","category":"page"},{"location":"30-concepts/","page":"Concepts","title":"Concepts","text":"b_k =\n3:3: -1 flow(ccgt,demand,1:2) + 1 flow(ccgt,demand,3:4) - 120 units_on(ccgt,1:3) <= 0\nb_k =\n4:4: 50 units_on(ccgt,1:3) - 170 units_on(ccgt,4:6) <= 0\nb_k =\n5:6: -1 flow(ccgt,demand,3:4) + 1 flow(ccgt,demand,5:6) - 120 units_on(ccgt,4:6) <= 0\nb_k =\n7:8: -1 flow(ccgt,demand,5:6) + 1 flow(ccgt,demand,7:8) + 50 units_on(ccgt,4:6) - 170 units_on(ccgt,7:9) <= 0\nb_k =\n9:9: -1 flow(ccgt,demand,7:8) + 1 flow(ccgt,demand,9:10) - 120 units_on(ccgt,7:9) <= 0","category":"page"},{"location":"30-concepts/","page":"Concepts","title":"Concepts","text":"For the maximum ramp down we have similiar constraints as the ones shown above.","category":"page"},{"location":"30-concepts/","page":"Concepts","title":"Concepts","text":"Important: The time resolutions of the unit commitment constraints do not have to be multiples of each other. 
However, using multiples of each other can help avoid extra redundant constraints.","category":"page"},{"location":"30-concepts/#Unit-Commitment-and-Ramping-Case-Study-Results","page":"Concepts","title":"Unit Commitment and Ramping Case Study Results","text":"","category":"section"},{"location":"30-concepts/","page":"Concepts","title":"Concepts","text":"Let's now optimize the model for the data in the example test/inputs/UC-ramping and explore the results. The first result is the unit commitment of the assets with this method, i.e., ocgt, ccgt, and smr. One of the characteristics of having flexible time resolution on the unit commitment variables (e.g., units_on) is that it allows us to consider implicitly minimum up/down times in a simplified manner. For instance, the ccgt asset can only increase the number of units every 3h, and the smr can only start up again after 6h.","category":"page"},{"location":"30-concepts/","page":"Concepts","title":"Concepts","text":"(Image: unit-commitment-results)","category":"page"},{"location":"30-concepts/","page":"Concepts","title":"Concepts","text":"Let's now examine the hourly production balance in the results. We can see that the assets with a unit commitment method only produce electricity (e.g., flow to the demand asset) when they are on (units_on >= 1). In addition, the smr has a slow flow change due to its ramping limits.","category":"page"},{"location":"30-concepts/","page":"Concepts","title":"Concepts","text":"(Image: unit-commitment-balance)","category":"page"},{"location":"30-concepts/","page":"Concepts","title":"Concepts","text":"In this example, we demonstrated the use of unit commitment and ramping constraints with flexible time resolution in the model, and we illustrated what the results look like. The flexible time resolution applied to the unit commitment variables aids in minimizing the number of binary/integer variables in the model and simplifies the representation of the assets' minimum up and down times.","category":"page"},{"location":"30-concepts/#storage-modeling","page":"Concepts","title":"Storage Modeling","text":"","category":"section"},{"location":"30-concepts/","page":"Concepts","title":"Concepts","text":"Energy storage systems can be broadly classified into two categories: seasonal and non-seasonal storage. Seasonal storage refers to assets that can store energy for more extended periods, usually spanning months or even years. Examples of such assets include hydro reservoirs, hydrogen storage in salt caverns, or empty gas fields. 
On the other hand, non-seasonal storage refers to assets that can store energy only for a few hours, such as batteries or small pumped-hydro storage units.","category":"page"},{"location":"30-concepts/","page":"Concepts","title":"Concepts","text":"Both storage categories can be represented in TulipaEnergyModel.jl using the representative periods approach:","category":"page"},{"location":"30-concepts/","page":"Concepts","title":"Concepts","text":"Non-seasonal storage: When the storage capacity of an asset is lower than the total length of representative periods, like in the case of a battery with a storage capacity of 4 hours and representative periods of 24-hour timesteps, intra-temporal constraints should be applied.\nSeasonal storage: When the storage capacity of an asset is greater than the total length of representative periods, like in the case of a hydroplant with a storage capacity of a month and representative periods of 24-hour timesteps, inter-temporal constraints should be applied.","category":"page"},{"location":"30-concepts/","page":"Concepts","title":"Concepts","text":"The equations of intra- and inter-temporal constraints for energy storage are available in the mathematical formulation. An example is shown in the following section to explain these concepts. In addition, the section seasonal and non-seasonal storage setup shows how to set the parameters in the model to consider each type in the storage assets.","category":"page"},{"location":"30-concepts/#Example-to-Model-Seasonal-and-Non-seasonal-Storage","page":"Concepts","title":"Example to Model Seasonal and Non-seasonal Storage","text":"","category":"section"},{"location":"30-concepts/","page":"Concepts","title":"Concepts","text":"We use the example in the folder test/inputs/Storage to explain how all these concepts come together in TulipaEnergyModel.jl.","category":"page"},{"location":"30-concepts/","page":"Concepts","title":"Concepts","text":"Let's first look at this feature's most relevant input data, starting with the assets-data file. Here, we show only the storage assets and the appropriate columns for this example, but all the input data can be found in the previously mentioned folder.","category":"page"},{"location":"30-concepts/","page":"Concepts","title":"Concepts","text":"using DataFrames # hide\nusing CSV # hide\ninput_dir = \"../../test/inputs/Storage\" # hide\nassets_data = CSV.read(joinpath(input_dir, \"assets-data.csv\"), DataFrame, header = 2) # hide\ngraph_assets = CSV.read(joinpath(input_dir, \"graph-assets-data.csv\"), DataFrame, header = 2) # hide\nassets = leftjoin(graph_assets, assets_data, on=:name) # hide\nfiltered_assets = assets[assets.type .== \"storage\", [\"name\", \"type\", \"capacity\", \"capacity_storage_energy\", \"initial_storage_units\", \"initial_storage_level\", \"is_seasonal\"]] # hide","category":"page"},{"location":"30-concepts/","page":"Concepts","title":"Concepts","text":"The is_seasonal parameter determines whether or not the storage asset uses the inter-temporal constraints. The phs is the only storage asset with this type of constraint and inter-storage level variable (i.e., v^textinter-storage_textphsp), and has 100MW capacity and 4800MWh of storage capacity (i.e., 48h discharge duration). 
The battery will only consider intra-temporal constraints with intra-storage level variables (i.e., v^textintra-storage_textbatterykb_k), and has 10MW capacity with 20MWh of storage capacity (i.e., 2h discharge duration).","category":"page"},{"location":"30-concepts/","page":"Concepts","title":"Concepts","text":"The rep-periods-data file has information on the representative periods in the example. We have three representative periods, each with 24 timesteps and hourly resolution, representing a day. The figure below shows the availability profile of the renewable energy sources in the example.","category":"page"},{"location":"30-concepts/","page":"Concepts","title":"Concepts","text":"rp_file = \"../../test/inputs/Storage/rep-periods-data.csv\" # hide\nrp = CSV.read(rp_file, DataFrame, header = 2) # hide","category":"page"},{"location":"30-concepts/","page":"Concepts","title":"Concepts","text":"(Image: availability-profiles)","category":"page"},{"location":"30-concepts/","page":"Concepts","title":"Concepts","text":"The rep-periods-mapping relates each representative period with the periods in the timeframe. We have seven periods in this case, meaning the timeframe is a week. Each value in the file indicates the weight of each representative period in the timeframe period. Notice that each period is composed of a linear combination of the representative periods. For more details on obtaining the representative periods and the weights, please look at TulipaClustering.jl. For the sake of readability, we show here the information in the file in tabular form:","category":"page"},{"location":"30-concepts/","page":"Concepts","title":"Concepts","text":"map_file = \"../../test/inputs/Storage/rep-periods-mapping.csv\" # hide\nmap = CSV.read(map_file, DataFrame, header = 2) # hide\nunstacked_map = unstack(map, :period, :rep_period, :weight) # hide\nrename!(unstacked_map, [\"period\", \"k=1\", \"k=2\", \"k=3\"]) # hide\nunstacked_map[!,[\"k=1\", \"k=2\", \"k=3\"]] = convert.(Float64, unstacked_map[!,[\"k=1\", \"k=2\", \"k=3\"]]) # hide\nunstacked_map # hide","category":"page"},{"location":"30-concepts/","page":"Concepts","title":"Concepts","text":"The file assets-timeframe-partitions has the information on how often we want to evaluate the inter-temporal constraints that combine the information of the representative periods. In this example, we define a uniform distribution of one period, meaning that we will check the inter-storage level every day of the week timeframe.","category":"page"},{"location":"30-concepts/","page":"Concepts","title":"Concepts","text":"phs_partitions_file = \"../../test/inputs/Storage/assets-timeframe-partitions.csv\" # hide\nphs_partitions = CSV.read(phs_partitions_file, DataFrame, header = 2) # hide","category":"page"},{"location":"30-concepts/","page":"Concepts","title":"Concepts","text":"Note: For the sake of simplicity, we show how using three representative days can recover part of the chronological information of one week. 
The same method can be applied to more representative periods to analyze the seasonality across a year or longer timeframe.","category":"page"},{"location":"30-concepts/","page":"Concepts","title":"Concepts","text":"Now let's solve the example and explore the results:","category":"page"},{"location":"30-concepts/","page":"Concepts","title":"Concepts","text":"using DuckDB, TulipaIO, TulipaEnergyModel\n\ninput_dir = \"../../test/inputs/Storage\" # hide\n# input_dir should be the path to the Storage example\nconnection = DBInterface.connect(DuckDB.DB)\nread_csv_folder(connection, input_dir; schemas = TulipaEnergyModel.schema_per_table_name)\nenergy_problem = run_scenario(connection)","category":"page"},{"location":"30-concepts/","page":"Concepts","title":"Concepts","text":"Since the battery is not seasonal, it only has results for the intra-storage level of each representative period, as shown in the following figure:","category":"page"},{"location":"30-concepts/","page":"Concepts","title":"Concepts","text":"(Image: Battery-intra-storage-level)","category":"page"},{"location":"30-concepts/","page":"Concepts","title":"Concepts","text":"Since the phs is defined as seasonal, it has results for only the inter-storage level. Since we defined the period partition as 1, we get results for each period (i.e., day). We can see that the inter-temporal constraints in the model keep track of the storage level through the whole timeframe definition (i.e., week).","category":"page"},{"location":"30-concepts/","page":"Concepts","title":"Concepts","text":"(Image: PHS-inter-storage-level)","category":"page"},{"location":"30-concepts/","page":"Concepts","title":"Concepts","text":"In this example, we have demonstrated how to partially recover the chronological information of a storage asset with a longer discharge duration (such as 48 hours) than the representative period length (24 hours). This feature enables us to model both short- and long-term storage in TulipaEnergyModel.jl.","category":"page"},{"location":"91-developer/#developer","page":"Developer Documentation","title":"Developer Documentation","text":"","category":"section"},{"location":"91-developer/","page":"Developer Documentation","title":"Developer Documentation","text":"Welcome to TulipaEnergyModel.jl developer documentation. Here is how you can contribute to our Julia-based toolkit for modeling and optimization of electric energy systems.","category":"page"},{"location":"91-developer/","page":"Developer Documentation","title":"Developer Documentation","text":"Pages = [\"91-developer.md\"]\nDepth = 3","category":"page"},{"location":"91-developer/#Before-You-Begin","page":"Developer Documentation","title":"Before You Begin","text":"","category":"section"},{"location":"91-developer/","page":"Developer Documentation","title":"Developer Documentation","text":"Before you can start contributing, please read our Contributing Guidelines.","category":"page"},{"location":"91-developer/","page":"Developer Documentation","title":"Developer Documentation","text":"Also make sure that you have installed the required software, and that it is properly configured. 
You only need to do this once.","category":"page"},{"location":"91-developer/#Installing-Software","page":"Developer Documentation","title":"Installing Software","text":"","category":"section"},{"location":"91-developer/","page":"Developer Documentation","title":"Developer Documentation","text":"To contribute to TulipaEnergyModel.jl, you need the following:","category":"page"},{"location":"91-developer/","page":"Developer Documentation","title":"Developer Documentation","text":"Julia programming language.\nGit for version control.\nVSCode or any other editor. For VSCode, we recommend installing a few extensions. You can do it by pressing Ctrl + Shift + X (or ⇧ + ⌘ + X on MacOS) and searching for the extension name. - Julia for Visual Studio Code; - Git Graph.\nEditorConfig for consistent code formatting. In VSCode, it is available as an extension.\npre-commit to run the linters and formatters.\nYou can install pre-commit globally using\npip install --user pre-commit\nIf you prefer to create a local environment with it, do the following:\npython -m venv env\n. env/bin/activate\npip install --upgrade pip setuptools pre-commit\nOn Windows, you need to activate the environment using the following command instead of the previous one:\nenv/Scripts/activate\nNote that there is no leading dot (.) in the above command.\nJuliaFormatter.jl for code formatting.\nTo install it, open Julia REPL, for example, by typing in the command line:\njulia\nNote: julia must be part of your environment variables to call it from the command line.\nThen press ] to enter the package mode. In the package mode, enter the following:\npkg> activate\npkg> add JuliaFormatter\nIn VSCode, you can activate \"Format on Save\" for JuliaFormatter. To do so, open VSCode Settings (Ctrl + ,), then in \"Search Settings\", type \"Format on Save\" and tick the first result:\n(Image: Screenshot of Format on Save option)\nPrettier for markdown formatting. In VSCode, it is available as an extension.\nHaving enabled \"Format on Save\" for JuliaFormatter in the previous step will also enable \"Format on Save\" for Prettier, provided that Prettier is set as the default formatter for markdown files. To do so, in VSCode, open any markdown file, right-click on any area of the file, choose \"Format Document With...\", click \"Configure Default Formatter...\" situated at the bottom of the drop-down list at the top of the screen, and then choose Prettier - Code formatter as the default formatter. Once you are done, you can double-check it by again right-clicking on any area of the file and choosing \"Format Document With...\", and you should see Prettier - Code formatter (default).\nLocalCoverage for coverage testing. You can install it the same way you installed JuliaFormatter, that is, by opening Julia REPL in the package mode and typing:\npkg> activate\npkg> add LocalCoverage","category":"page"},{"location":"91-developer/#Forking-the-Repository","page":"Developer Documentation","title":"Forking the Repository","text":"","category":"section"},{"location":"91-developer/","page":"Developer Documentation","title":"Developer Documentation","text":"Any changes should be done in a fork. 
You can fork this repository directly on GitHub:","category":"page"},{"location":"91-developer/","page":"Developer Documentation","title":"Developer Documentation","text":"(Image: Screenshot of Fork button on GitHub)","category":"page"},{"location":"91-developer/","page":"Developer Documentation","title":"Developer Documentation","text":"After that, clone your fork and add this repository as upstream:","category":"page"},{"location":"91-developer/","page":"Developer Documentation","title":"Developer Documentation","text":"git clone https://github.com/your-name/TulipaEnergyModel.jl # use the fork URL\ngit remote add upstream https://github.com/TulipaEnergy/TulipaEnergyModel.jl # use the original repository URL","category":"page"},{"location":"91-developer/","page":"Developer Documentation","title":"Developer Documentation","text":"Check that your origin and upstream are correct:","category":"page"},{"location":"91-developer/","page":"Developer Documentation","title":"Developer Documentation","text":"git remote -v","category":"page"},{"location":"91-developer/","page":"Developer Documentation","title":"Developer Documentation","text":"You should see something similar to: (Image: Screenshot of remote names, showing origin and upstream)","category":"page"},{"location":"91-developer/","page":"Developer Documentation","title":"Developer Documentation","text":"If your names are wrong, use this command (with the relevant names) to correct it:","category":"page"},{"location":"91-developer/","page":"Developer Documentation","title":"Developer Documentation","text":"git remote set-url [name] [url]","category":"page"},{"location":"91-developer/#Configuring-Git","page":"Developer Documentation","title":"Configuring Git","text":"","category":"section"},{"location":"91-developer/","page":"Developer Documentation","title":"Developer Documentation","text":"Because operating systems use different line endings for text files, you need to configure Git to ensure code consistency across different platforms. You can do this with the following commands:","category":"page"},{"location":"91-developer/","page":"Developer Documentation","title":"Developer Documentation","text":"cd /path/to/TulipaEnergyModel.jl\ngit config --unset core.autocrlf # disable autocrlf in the EnergyModel repo\ngit config --global core.autocrlf false # explicitly disable autocrlf globally\ngit config --global --unset core.eol # disable explicit file-ending globally\ngit config core.eol lf # set Linux style file-endings in EnergyModel","category":"page"},{"location":"91-developer/#Activating-and-Testing-the-Package","page":"Developer Documentation","title":"Activating and Testing the Package","text":"","category":"section"},{"location":"91-developer/","page":"Developer Documentation","title":"Developer Documentation","text":"Start Julia REPL either via the command line or in the editor.","category":"page"},{"location":"91-developer/","page":"Developer Documentation","title":"Developer Documentation","text":"In the terminal, do:","category":"page"},{"location":"91-developer/","page":"Developer Documentation","title":"Developer Documentation","text":"cd /path/to/TulipaEnergyModel.jl # change the working directory to the repo directory if needed\njulia # start Julia REPL","category":"page"},{"location":"91-developer/","page":"Developer Documentation","title":"Developer Documentation","text":"In VSCode, first open your cloned fork as a new project. 
Then open the command palette with Ctrl + Shift + P (or ⇧ + ⌘ + P on MacOS) and use the command called Julia: Start REPL.","category":"page"},{"location":"91-developer/","page":"Developer Documentation","title":"Developer Documentation","text":"In Julia REPL, enter the package mode by pressing ].","category":"page"},{"location":"91-developer/","page":"Developer Documentation","title":"Developer Documentation","text":"In the package mode, first activate and instantiate the project, then run the tests to ensure that everything is working as expected:","category":"page"},{"location":"91-developer/","page":"Developer Documentation","title":"Developer Documentation","text":"pkg> activate . # activate the project\npkg> instantiate # instantiate to install the required packages\npkg> test # run the tests","category":"page"},{"location":"91-developer/#Configuring-Linting-and-Formatting","page":"Developer Documentation","title":"Configuring Linting and Formatting","text":"","category":"section"},{"location":"91-developer/","page":"Developer Documentation","title":"Developer Documentation","text":"With pre-commit installed, activate it as a pre-commit hook:","category":"page"},{"location":"91-developer/","page":"Developer Documentation","title":"Developer Documentation","text":"pre-commit install","category":"page"},{"location":"91-developer/","page":"Developer Documentation","title":"Developer Documentation","text":"To run the linting and formatting manually, enter the command below:","category":"page"},{"location":"91-developer/","page":"Developer Documentation","title":"Developer Documentation","text":"pre-commit run -a","category":"page"},{"location":"91-developer/","page":"Developer Documentation","title":"Developer Documentation","text":"Do it once now to make sure that everything works as expected.","category":"page"},{"location":"91-developer/","page":"Developer Documentation","title":"Developer Documentation","text":"Now, you can only commit if all the pre-commit tests pass.","category":"page"},{"location":"91-developer/","page":"Developer Documentation","title":"Developer Documentation","text":"Note: On subsequent occasions when you need to run pre-commit in a new shell, you will need to activate the Python virtual environment. If so, do the following:. env/bin/activate # for Windows the command is: . env/Scripts/activate\npre-commit run -a","category":"page"},{"location":"91-developer/#Code-format-and-guidelines","page":"Developer Documentation","title":"Code format and guidelines","text":"","category":"section"},{"location":"91-developer/","page":"Developer Documentation","title":"Developer Documentation","text":"This section will list the guidelines for code formatting not enforced by JuliaFormatter. 
We will try to follow these during development and reviews.","category":"page"},{"location":"91-developer/","page":"Developer Documentation","title":"Developer Documentation","text":"Naming\nCamelCase for classes and modules,\nsnake_case for functions and variables, and\nkebab-case for file names.\nUse using instead of import, in the following way:\nDon't use pure using Package, always list all necessary objects with using Package: A, B, C.\nList obvious objects, e.g., using JuMP: @variable, since @variable is obviously from JuMP in this context, or using Graph: SimpleDiGraph, because it's a constructor with an obvious name.\nFor other objects inside Package, use using Package: Package and explicitly call Package.A to use it, e.g., DataFrames.groupby.\nList all using in .","category":"page"},{"location":"91-developer/#Contributing-Workflow","page":"Developer Documentation","title":"Contributing Workflow","text":"","category":"section"},{"location":"91-developer/","page":"Developer Documentation","title":"Developer Documentation","text":"When the software is installed and configured, and you have forked the TulipaEnergyModel.jl repository, you can start contributing to it.","category":"page"},{"location":"91-developer/","page":"Developer Documentation","title":"Developer Documentation","text":"We use the following workflow for all contributions:","category":"page"},{"location":"91-developer/","page":"Developer Documentation","title":"Developer Documentation","text":"Make sure that your fork is up to date\nCreate a new branch\nImplement the changes\nRun the tests\nRun the linter\nCommit the changes\nRepeat steps 3-6 until all necessary changes are done\nMake sure that your fork is still up to date\nCreate a pull request","category":"page"},{"location":"91-developer/","page":"Developer Documentation","title":"Developer Documentation","text":"Below you can find detailed instructions for each step.","category":"page"},{"location":"91-developer/#1.-Make-Sure-That-Your-Fork-Is-Up-to-Date","page":"Developer Documentation","title":"1. Make Sure That Your Fork Is Up to Date","text":"","category":"section"},{"location":"91-developer/","page":"Developer Documentation","title":"Developer Documentation","text":"Fetch from org remote, fast-forward your local main:","category":"page"},{"location":"91-developer/","page":"Developer Documentation","title":"Developer Documentation","text":"git switch main\ngit fetch --all --prune\ngit merge --ff-only upstream/main","category":"page"},{"location":"91-developer/","page":"Developer Documentation","title":"Developer Documentation","text":"Warning: If you have a conflict on your main, it will appear now. You can delete your old main branch usinggit reset --hard upstream/main","category":"page"},{"location":"91-developer/#2.-Create-a-New-Branch","page":"Developer Documentation","title":"2. 
Create a New Branch","text":"","category":"section"},{"location":"91-developer/","page":"Developer Documentation","title":"Developer Documentation","text":"Create a branch to address the issue:","category":"page"},{"location":"91-developer/","page":"Developer Documentation","title":"Developer Documentation","text":"git switch -c ","category":"page"},{"location":"91-developer/","page":"Developer Documentation","title":"Developer Documentation","text":"If there is an associated issue, add the issue number to the branch name, for example, 123-short-description for issue #123.\nIf there is no associated issue and the changes are small, add a prefix such as \"typo\", \"hotfix\", \"small-refactor\", according to the type of update.\nIf the changes are not small and there is no associated issue, then create the issue first, so we can properly discuss the changes.","category":"page"},{"location":"91-developer/","page":"Developer Documentation","title":"Developer Documentation","text":"Note: Always branch from main, i.e., the main branch of your own fork.","category":"page"},{"location":"91-developer/#3.-Implement-the-Changes","page":"Developer Documentation","title":"3. Implement the Changes","text":"","category":"section"},{"location":"91-developer/","page":"Developer Documentation","title":"Developer Documentation","text":"Implement your changes to address the issue associated with the branch.","category":"page"},{"location":"91-developer/#4.-Run-the-Tests","page":"Developer Documentation","title":"4. Run the Tests","text":"","category":"section"},{"location":"91-developer/","page":"Developer Documentation","title":"Developer Documentation","text":"In Julia:","category":"page"},{"location":"91-developer/","page":"Developer Documentation","title":"Developer Documentation","text":"TulipaEnergyModel> test","category":"page"},{"location":"91-developer/","page":"Developer Documentation","title":"Developer Documentation","text":"To run the tests with code coverage, you can use the LocalCoverage package:","category":"page"},{"location":"91-developer/","page":"Developer Documentation","title":"Developer Documentation","text":"julia> using LocalCoverage\n# ]\npkg> activate .\n# \njulia> cov = generate_coverage()","category":"page"},{"location":"91-developer/","page":"Developer Documentation","title":"Developer Documentation","text":"This will run the tests, track line coverage and print a report table as output. Note that we want to maintain 100% test coverage. If any file does not show 100% coverage, please add tests to cover the missing lines.","category":"page"},{"location":"91-developer/","page":"Developer Documentation","title":"Developer Documentation","text":"If you are having trouble reaching 100% test coverage, you can set your pull request to 'draft' status and ask for help.","category":"page"},{"location":"91-developer/#5.-Run-the-Linter","page":"Developer Documentation","title":"5. Run the Linter","text":"","category":"section"},{"location":"91-developer/","page":"Developer Documentation","title":"Developer Documentation","text":"In the bash/git bash terminal, run pre-commit:","category":"page"},{"location":"91-developer/","page":"Developer Documentation","title":"Developer Documentation","text":". env/bin/activate # if necessary (for Windows the command is: . 
env/Scripts/activate)\npre-commit run -a","category":"page"},{"location":"91-developer/","page":"Developer Documentation","title":"Developer Documentation","text":"If any of the checks fail, check the pre-commit log to see what the issues are and fix them. Then, add them again (git add), rerun the tests & linter, and commit.","category":"page"},{"location":"91-developer/#6.-Commit-the-Changes","page":"Developer Documentation","title":"6. Commit the Changes","text":"","category":"section"},{"location":"91-developer/","page":"Developer Documentation","title":"Developer Documentation","text":"When the tests are passing, commit the changes and push them to the remote repository. Use:","category":"page"},{"location":"91-developer/","page":"Developer Documentation","title":"Developer Documentation","text":"git commit -am \"A short but descriptive commit message\" # Equivalent to: git commit -a -m \"commit msg\"\ngit push -u origin ","category":"page"},{"location":"91-developer/","page":"Developer Documentation","title":"Developer Documentation","text":"When writing the commit message:","category":"page"},{"location":"91-developer/","page":"Developer Documentation","title":"Developer Documentation","text":"use imperative, present tense (Add feature, Fix bug);\nhave informative titles;\nif necessary, add a body with details.","category":"page"},{"location":"91-developer/","page":"Developer Documentation","title":"Developer Documentation","text":"Note: Try to create \"atomic git commits\". Read The Utopic Git History to learn more.","category":"page"},{"location":"91-developer/#7.-Make-Sure-That-Your-Fork-Is-Still-Up-to-Date","page":"Developer Documentation","title":"7. Make Sure That Your Fork Is Still Up to Date","text":"","category":"section"},{"location":"91-developer/","page":"Developer Documentation","title":"Developer Documentation","text":"If necessary, fetch any main updates from upstream and rebase your branch into origin/main. For example, do this if it took some time to resolve the issue you have been working on. If you don't resolve conflicts locally, you will get conflicts in your pull request.","category":"page"},{"location":"91-developer/","page":"Developer Documentation","title":"Developer Documentation","text":"Do the following steps:","category":"page"},{"location":"91-developer/","page":"Developer Documentation","title":"Developer Documentation","text":"git switch main # switch to the main branch\ngit fetch --all --prune # fetch the updates\ngit merge --ff-only upstream/main # merge as a fast-forward\ngit switch # switch back to the issue branch\ngit rebase main # rebase it","category":"page"},{"location":"91-developer/","page":"Developer Documentation","title":"Developer Documentation","text":"If it says that you have conflicts, resolve them by opening the file(s) and editing them until the code looks correct to you. 
You can check the changes with:","category":"page"},{"location":"91-developer/","page":"Developer Documentation","title":"Developer Documentation","text":"git diff # Check that changes are correct.\ngit add \ngit diff --staged # Another way to check changes, i.e., what you will see in the pull request.","category":"page"},{"location":"91-developer/","page":"Developer Documentation","title":"Developer Documentation","text":"Once the conflicts are resolved, commit and push.","category":"page"},{"location":"91-developer/","page":"Developer Documentation","title":"Developer Documentation","text":"git status # Another way to show that all conflicts are fixed.\ngit rebase --continue\ngit push --force origin ","category":"page"},{"location":"91-developer/#8.-Create-a-Pull-Request","page":"Developer Documentation","title":"8. Create a Pull Request","text":"","category":"section"},{"location":"91-developer/","page":"Developer Documentation","title":"Developer Documentation","text":"When there are no more conflicts and all the tests are passing, create a pull request to merge your remote branch into the org main. You can do this on GitHub by opening the branch in your fork and clicking \"Compare & pull request\".","category":"page"},{"location":"91-developer/","page":"Developer Documentation","title":"Developer Documentation","text":"(Image: Screenshot of Compare & pull request button on GitHub)","category":"page"},{"location":"91-developer/","page":"Developer Documentation","title":"Developer Documentation","text":"Fill in the pull request details:","category":"page"},{"location":"91-developer/","page":"Developer Documentation","title":"Developer Documentation","text":"Describe the changes.\nList the issue(s) that this pull request closes.\nFill in the collaboration confirmation.\n(Optional) Choose a reviewer.\nWhen all of the information is filled in, click \"Create pull request\".","category":"page"},{"location":"91-developer/","page":"Developer Documentation","title":"Developer Documentation","text":"(Image: Screenshot of the pull request information)","category":"page"},{"location":"91-developer/","page":"Developer Documentation","title":"Developer Documentation","text":"Your pull request will appear in the list of pull requests in the TulipaEnergyModel.jl repository, where you can track the review process.","category":"page"},{"location":"91-developer/","page":"Developer Documentation","title":"Developer Documentation","text":"Sometimes reviewers request changes. After pushing any changes, the pull request will be automatically updated. Do not forget to re-request a review.","category":"page"},{"location":"91-developer/","page":"Developer Documentation","title":"Developer Documentation","text":"Once your reviewer approves the pull request, you need to merge it with the main branch using \"Squash and Merge\". You can also delete the branch that originated the pull request by clicking the button that appears after the merge. 
For branches that were pushed to the main repo, it is recommended that you do so.","category":"page"},{"location":"91-developer/#Building-the-Documentation-Locally","page":"Developer Documentation","title":"Building the Documentation Locally","text":"","category":"section"},{"location":"91-developer/","page":"Developer Documentation","title":"Developer Documentation","text":"Following the latest suggestions, we recommend using LiveServer to build the documentation.","category":"page"},{"location":"91-developer/","page":"Developer Documentation","title":"Developer Documentation","text":"Note: Ensure you have the package Revise installed in your global environment before running servedocs.","category":"page"},{"location":"91-developer/","page":"Developer Documentation","title":"Developer Documentation","text":"Here is how you do it:","category":"page"},{"location":"91-developer/","page":"Developer Documentation","title":"Developer Documentation","text":"Run julia --project=docs in the package root to open Julia in the environment of the docs.\nIf this is the first time building the docs\nPress ] to enter pkg mode\nRun pkg> dev . to use the development version of your package\nPress backspace to leave pkg mode\nRun julia> using LiveServer\nRun julia> servedocs(launch_browser=true)","category":"page"},{"location":"91-developer/#Performance-Considerations","page":"Developer Documentation","title":"Performance Considerations","text":"","category":"section"},{"location":"91-developer/","page":"Developer Documentation","title":"Developer Documentation","text":"If you updated something that might impact the performance of the package, you can run the Benchmark.yml workflow from your pull request. To do that, add the tag benchmark in the pull request. This will trigger the workflow and post the results as a comment in your pull request.","category":"page"},{"location":"91-developer/","page":"Developer Documentation","title":"Developer Documentation","text":"Warning: This requires that your branch was pushed to the main repo. If you have created a pull request from a fork, the Benchmark.yml workflow does not work. Instead, close your pull request, push your branch to the main repo, and open a new pull request.","category":"page"},{"location":"91-developer/","page":"Developer Documentation","title":"Developer Documentation","text":"If you want to manually run the benchmarks, you can do the following:","category":"page"},{"location":"91-developer/","page":"Developer Documentation","title":"Developer Documentation","text":"Navigate to the benchmark folder\nRun julia --project=.\nEnter pkg mode by pressing ]\nRun dev .. to add the development version of TulipaEnergyModel\nNow run\ninclude(\"benchmarks.jl\")\ntune!(SUITE)\nresults = run(SUITE, verbose=true)","category":"page"},{"location":"91-developer/#Profiling","page":"Developer Documentation","title":"Profiling","text":"","category":"section"},{"location":"91-developer/","page":"Developer Documentation","title":"Developer Documentation","text":"To profile the code in a more manual way, here are some tips:","category":"page"},{"location":"91-developer/","page":"Developer Documentation","title":"Developer Documentation","text":"Wrap your code into functions.\nCall the function once to precompile it. This must be done after every change to the function.\nPrefix the function call with @time. This is the most basic timing, part of Julia.\nPrefix the function call with @btime. This is part of the BenchmarkTools package, which you might need to install. 
@btime will evaluate the function a few times to give a better estimate.\nPrefix the function call with @benchmark. Also part of BenchmarkTools. This will produce a nice histogram of the times and give more information. @btime and @benchmark do the same thing in the background.\nCall @profview. This needs to be done in VSCode, or using the ProfileView package. This will create a flame graph, where each function call is a block. The size of the block is proportional to the aggregate time it takes to run. The blocks below a block are functions called inside the function above.","category":"page"},{"location":"91-developer/","page":"Developer Documentation","title":"Developer Documentation","text":"See the file for an example of profiling code.","category":"page"},{"location":"91-developer/#Procedure-for-Releasing-a-New-Version-(Julia-Registry)","page":"Developer Documentation","title":"Procedure for Releasing a New Version (Julia Registry)","text":"","category":"section"},{"location":"91-developer/","page":"Developer Documentation","title":"Developer Documentation","text":"When publishing a new version of the model to the Julia Registry, follow this procedure:","category":"page"},{"location":"91-developer/","page":"Developer Documentation","title":"Developer Documentation","text":"Note: To be able to register, you need to be a member of the organisation TulipaEnergy and have your visibility set to public: (Image: Screenshot of public members of TulipaEnergy on GitHub)","category":"page"},{"location":"91-developer/","page":"Developer Documentation","title":"Developer Documentation","text":"Click on the Project.toml file on GitHub.\nEdit the file and change the version number according to semantic versioning: Major.Minor.Patch (Image: Screenshot of editing Project.toml on GitHub)\nCommit the changes in a new branch and open a pull request. Change the commit message according to the version number. (Image: Screenshot of PR with commit message \"Release 0.6.1\")\nCreate the pull request and squash & merge it after the review and testing process. Delete the branch after the squash and merge. (Image: Screenshot of full PR template on GitHub)\nGo to the main page of the repo and click on the commit. (Image: Screenshot of how to access commit on GitHub)\nAdd the following comment to the commit: @JuliaRegistrator register (Image: Screenshot of calling JuliaRegistrator in commit comments)\nThe bot should start the registration process. (Image: Screenshot of JuliaRegistrator bot message)\nAfter approval, the bot will take care of the PR at the Julia Registry and automatically create the release for the new version. (Image: Screenshot of new version on registry)\nThank you for helping make frequent releases!","category":"page"},{"location":"95-reference/#reference","page":"Reference","title":"Reference","text":"","category":"section"},{"location":"95-reference/","page":"Reference","title":"Reference","text":"Pages = [\"95-reference.md\"]","category":"page"},{"location":"95-reference/","page":"Reference","title":"Reference","text":"Modules = [TulipaEnergyModel]","category":"page"},{"location":"95-reference/#TulipaEnergyModel.EnergyProblem","page":"Reference","title":"TulipaEnergyModel.EnergyProblem","text":"Structure to hold all parts of an energy problem. It is a wrapper around various other relevant structures. 
It hides the complexity behind the energy problem, making the usage more friendly, although more verbose.\n\nFields\n\ngraph: The Graph object that defines the geometry of the energy problem.\nrepresentative_periods: A vector of Representative Periods.\nconstraints_partitions: Dictionaries that connect pairs of asset and representative periods to time partitions (vectors of time blocks)\ntimeframe: The number of periods of the representative_periods.\ndataframes: The data frames used to linearize the variables and constraints. These are used internally in the model only.\ngroups: The input data of the groups to create constraints that are common to a set of assets in the model.\nmodel_parameters: The model parameters.\nmodel: A JuMP.Model object representing the optimization model.\nsolved: A boolean indicating whether the model has been solved or not.\nobjective_value: The objective value of the solved problem.\ntermination_status: The termination status of the optimization model.\ntimings: Dictionary of elapsed time for various parts of the code (in seconds).\n\nConstructor\n\nEnergyProblem(connection): Constructs a new EnergyProblem object with the given connection. The constraints_partitions field is computed from the representative_periods, and the other fields are initialized with default values.\n\nSee the basic example tutorial to see how these can be used.\n\n\n\n\n\n","category":"type"},{"location":"95-reference/#TulipaEnergyModel.GraphAssetData","page":"Reference","title":"TulipaEnergyModel.GraphAssetData","text":"Structure to hold the asset data in the graph.\n\n\n\n\n\n","category":"type"},{"location":"95-reference/#TulipaEnergyModel.GraphFlowData","page":"Reference","title":"TulipaEnergyModel.GraphFlowData","text":"Structure to hold the flow data in the graph.\n\n\n\n\n\n","category":"type"},{"location":"95-reference/#TulipaEnergyModel.Group","page":"Reference","title":"TulipaEnergyModel.Group","text":"Structure to hold the group data\n\n\n\n\n\n","category":"type"},{"location":"95-reference/#TulipaEnergyModel.ModelParameters","page":"Reference","title":"TulipaEnergyModel.ModelParameters","text":"ModelParameters(;key = value, ...)\nModelParameters(path; ...)\nModelParameters(connection; ...)\nModelParameters(connection, path; ...)\n\nStructure to hold the model parameters. Some values are defined by default and some require explicit definition.\n\nIf path is passed, it is expected to be a string pointing to a TOML file with a key = value list of parameters. Explicit keyword arguments take precedence.\n\nIf connection is passed, the default discount_year is set to the minimum of all milestone years. In other words, we check the table year_data for the column year where the column is_milestone is true. Explicit keyword arguments take precedence.\n\nIf both are passed, then path has preference. 
Explicit keyword arguments take precedence.\n\nParameters\n\ndiscount_rate::Float64 = 0.0: The model discount rate.\ndiscount_year::Int: The model discount year.\n\n\n\n\n\n","category":"type"},{"location":"95-reference/#TulipaEnergyModel.RepresentativePeriod","page":"Reference","title":"TulipaEnergyModel.RepresentativePeriod","text":"Structure to hold the data of one representative period.\n\n\n\n\n\n","category":"type"},{"location":"95-reference/#TulipaEnergyModel.Timeframe","page":"Reference","title":"TulipaEnergyModel.Timeframe","text":"Structure to hold the data of the timeframe.\n\n\n\n\n\n","category":"type"},{"location":"95-reference/#TulipaEnergyModel.Year","page":"Reference","title":"TulipaEnergyModel.Year","text":"Structure to hold the data of the year.\n\n\n\n\n\n","category":"type"},{"location":"95-reference/#TulipaEnergyModel._check_initial_storage_level!-Tuple{Any, Any}","page":"Reference","title":"TulipaEnergyModel._check_initial_storage_level!","text":"_check_initial_storage_level!(df)\n\nDetermine the starting value for the initial storage level for interpolating the storage level. If there is no initial storage level given, we will use the final storage level. Otherwise, we use the given initial storage level.\n\n\n\n\n\n","category":"method"},{"location":"95-reference/#TulipaEnergyModel._construct_inter_rp_dataframes-NTuple{4, Any}","page":"Reference","title":"TulipaEnergyModel._construct_inter_rp_dataframes","text":"df = _construct_inter_rp_dataframes(assets, graph, years, asset_filter)\n\nConstructs dataframes for inter representative period constraints.\n\nArguments\n\nassets: An array of assets.\ngraph: The energy problem graph with the assets data.\nasset_filter: A function that filters assets based on certain criteria.\n\nReturns\n\nA dataframe containing the constructed dataframe for constraints.\n\n\n\n\n\n","category":"method"},{"location":"95-reference/#TulipaEnergyModel._get_graph_asset_or_flow-Tuple{Any, Any}","page":"Reference","title":"TulipaEnergyModel._get_graph_asset_or_flow","text":"_get_graph_asset_or_flow(graph, a)\n_get_graph_asset_or_flow(graph, (u, v))\n\nReturns graph[a] or graph[u, v].\n\n\n\n\n\n","category":"method"},{"location":"95-reference/#TulipaEnergyModel._interpolate_storage_level!-Tuple{Any, Any}","page":"Reference","title":"TulipaEnergyModel._interpolate_storage_level!","text":"_interpolate_storage_level!(df, time_column::Symbol)\n\nTransform the storage level dataframe from grouped timesteps or periods to incremental ones by interpolation. The starting value is the value of the previous grouped timesteps or periods or the initial value. The ending value is the value for the grouped timesteps or periods.\n\n\n\n\n\n","category":"method"},{"location":"95-reference/#TulipaEnergyModel._parse_rp_partition","page":"Reference","title":"TulipaEnergyModel._parse_rp_partition","text":"_parse_rp_partition(Val(specification), timestep_string, rp_timesteps)\n\nParses the timestep_string according to the specification. The representative period timesteps (rp_timesteps) might not be used in the computation, but it will be used for validation.\n\nThe specification defines what is expected from the timestep_string:\n\n:uniform: The timestep_string should be a single number indicating the duration of each block. Examples: \"3\", \"4\", \"1\".\n:explicit: The timestep_string should be a semicolon-separated list of integers. Each integer is a duration of a block. 
Examples: \"3;3;3;3\", \"4;4;4\", \"1;1;1;1;1;1;1;1;1;1;1;1\", and \"3;3;4;2\".\n:math: The timestep_string should be an expression of the form NxD+NxD…, where D is the duration of the block and N is the number of blocks. Examples: \"4x3\", \"3x4\", \"12x1\", and \"2x3+1x4+1x2\".\n\nThe generated blocks will be ranges (a:b). The first block starts at 1, and the last block ends at length(rp_timesteps).\n\nThe following table summarizes the formats for a rp_timesteps = 1:12:\n\nOutput :uniform :explicit :math\n1:3, 4:6, 7:9, 10:12 3 3;3;3;3 4x3\n1:4, 5:8, 9:12 4 4;4;4 3x4\n1:1, 2:2, …, 12:12 1 1;1;1;1;1;1;1;1;1;1;1;1 12x1\n1:3, 4:6, 7:10, 11:12 NA 3;3;4;2 2x3+1x4+1x2\n\nExamples\n\nusing TulipaEnergyModel\nTulipaEnergyModel._parse_rp_partition(Val(:uniform), \"3\", 1:12)\n\n# output\n\n4-element Vector{UnitRange{Int64}}:\n 1:3\n 4:6\n 7:9\n 10:12\n\nusing TulipaEnergyModel\nTulipaEnergyModel._parse_rp_partition(Val(:explicit), \"4;4;4\", 1:12)\n\n# output\n\n3-element Vector{UnitRange{Int64}}:\n 1:4\n 5:8\n 9:12\n\nusing TulipaEnergyModel\nTulipaEnergyModel._parse_rp_partition(Val(:math), \"2x3+1x4+1x2\", 1:12)\n\n# output\n\n4-element Vector{UnitRange{Int64}}:\n 1:3\n 4:6\n 7:10\n 11:12\n\n\n\n\n\n","category":"function"},{"location":"95-reference/#TulipaEnergyModel.add_expression_is_charging_terms_intra_rp_constraints!-Tuple{Any, Any, Any}","page":"Reference","title":"TulipaEnergyModel.add_expression_is_charging_terms_intra_rp_constraints!","text":"add_expression_is_charging_terms_intra_rp_constraints!(df_cons,\n df_is_charging,\n workspace\n )\n\nComputes the is_charging expressions per row of df_cons for the constraints that are within (intra) the representative periods.\n\nThis function is only used internally in the model.\n\nThis strategy is based on the replies in this discourse thread:\n\nhttps://discourse.julialang.org/t/help-improving-the-speed-of-a-dataframes-operation/107615/23\n\n\n\n\n\n","category":"method"},{"location":"95-reference/#TulipaEnergyModel.add_expression_terms_inter_rp_constraints!-NTuple{5, Any}","page":"Reference","title":"TulipaEnergyModel.add_expression_terms_inter_rp_constraints!","text":"add_expression_terms_inter_rp_constraints!(df_inter,\n df_flows,\n df_map,\n graph,\n representative_periods,\n )\n\nComputes the incoming and outgoing expressions per row of df_inter for the constraints that are between (inter) the representative periods.\n\nThis function is only used internally in the model.\n\n\n\n\n\n","category":"method"},{"location":"95-reference/#TulipaEnergyModel.add_expression_terms_intra_rp_constraints!-NTuple{5, Any}","page":"Reference","title":"TulipaEnergyModel.add_expression_terms_intra_rp_constraints!","text":"add_expression_terms_intra_rp_constraints!(df_cons,\n df_flows,\n workspace,\n representative_periods,\n graph;\n use_highest_resolution = true,\n multiply_by_duration = true,\n )\n\nComputes the incoming and outgoing expressions per row of df_cons for the constraints that are within (intra) the representative periods.\n\nThis function is only used internally in the model.\n\nThis strategy is based on the replies in this discourse thread:\n\nhttps://discourse.julialang.org/t/help-improving-the-speed-of-a-dataframes-operation/107615/23\n\n\n\n\n\n","category":"method"},{"location":"95-reference/#TulipaEnergyModel.add_expression_units_on_terms_intra_rp_constraints!-Tuple{Any, Any, 
Any}","page":"Reference","title":"TulipaEnergyModel.add_expression_units_on_terms_intra_rp_constraints!","text":"add_expression_units_on_terms_intra_rp_constraints!(\n df_cons,\n df_units_on,\n workspace,\n)\n\nComputes the units_on expressions per row of df_cons for the constraints that are within (intra) the representative periods.\n\nThis function is only used internally in the model.\n\nThis strategy is based on the replies in this discourse thread:\n\nhttps://discourse.julialang.org/t/help-improving-the-speed-of-a-dataframes-operation/107615/23\n\n\n\n\n\n","category":"method"},{"location":"95-reference/#TulipaEnergyModel.add_group_constraints!-NTuple{6, Any}","page":"Reference","title":"TulipaEnergyModel.add_group_constraints!","text":"add_group_constraints!(model, graph, ...)\n\nAdds group constraints for assets that share a common limits or bounds\n\n\n\n\n\n","category":"method"},{"location":"95-reference/#TulipaEnergyModel.add_ramping_constraints!-NTuple{12, Any}","page":"Reference","title":"TulipaEnergyModel.add_ramping_constraints!","text":"add_ramping_and_unit_commitment_constraints!(model, graph, ...)\n\nAdds the ramping constraints for producer and conversion assets where ramping = true in assets_data\n\n\n\n\n\n","category":"method"},{"location":"95-reference/#TulipaEnergyModel.calculate_annualized_cost-NTuple{5, Any}","page":"Reference","title":"TulipaEnergyModel.calculate_annualized_cost","text":"calculate_annualized_cost(discount_rate, economic_lifetime, investment_cost, years, investable_assets)\n\nCalculates the annualized cost for each asset, both energy assets and transport assets, in each year using provided discount rates, economic lifetimes, and investment costs.\n\nArguments\n\ndiscount_rate::Dict: A dictionary where the key is an asset or a pair of assets (asset1, asset2) for transport assets, and the value is the discount rate.\neconomic_lifetime::Dict: A dictionary where the key is an asset or a pair of assets (asset1, asset2) for transport assets, and the value is the economic lifetime.\ninvestment_cost::Dict: A dictionary where the key is a tuple (year, asset) or (year, (asset1, asset2)) for transport assets, and the value is the investment cost.\nyears::Array: An array of years to be considered.\ninvestable_assets::Dict: A dictionary where the key is a year, and the value is an array of assets that are relevant for that year.\n\nReturns\n\nA Dict where the keys are tuples (year, asset) representing the year and the asset, and the values are the calculated annualized cost for each asset in each year.\n\nFormula\n\nThe annualized cost for each asset in year is calculated using the formula:\n\nannualized_cost = discount_rate[asset] / (\n (1 + discount_rate[asset]) *\n (1 - 1 / (1 + discount_rate[asset])^economic_lifetime[asset])\n) * investment_cost[(year, asset)]\n\nExample for energy assets\n\ndiscount_rate = Dict(\"asset1\" => 0.05, \"asset2\" => 0.07)\n\neconomic_lifetime = Dict(\"asset1\" => 10, \"asset2\" => 15)\n\ninvestment_cost = Dict((2021, \"asset1\") => 1000, (2021, \"asset2\") => 1500,\n (2022, \"asset1\") => 1100, (2022, \"asset2\") => 1600)\nyears = [2021, 2022]\n\ninvestable_assets = Dict(2021 => [\"asset1\", \"asset2\"],\n 2022 => [\"asset1\"])\n\ncosts = calculate_annualized_cost(discount_rate, economic_lifetime, investment_cost, years, investable_assets)\n\n# output\n\nDict{Tuple{Int64, String}, Float64} with 3 entries:\n (2021, \"asset1\") => 123.338\n (2021, \"asset2\") => 153.918\n (2022, \"asset1\") => 135.671\n\nExample for transport 
assets\n\ndiscount_rate = Dict((\"asset1\", \"asset2\") => 0.05, (\"asset3\", \"asset4\") => 0.07)\n\neconomic_lifetime = Dict((\"asset1\", \"asset2\") => 10, (\"asset3\", \"asset4\") => 15)\n\ninvestment_cost = Dict((2021, (\"asset1\", \"asset2\")) => 1000, (2021, (\"asset3\", \"asset4\")) => 1500,\n (2022, (\"asset1\", \"asset2\")) => 1100, (2022, (\"asset3\", \"asset4\")) => 1600)\nyears = [2021, 2022]\n\ninvestable_assets = Dict(2021 => [(\"asset1\", \"asset2\"), (\"asset3\", \"asset4\")],\n 2022 => [(\"asset1\", \"asset2\")])\n\ncosts = calculate_annualized_cost(discount_rate, economic_lifetime, investment_cost, years, investable_assets)\n\n# output\n\nDict{Tuple{Int64, Tuple{String, String}}, Float64} with 3 entries:\n (2022, (\"asset1\", \"asset2\")) => 135.671\n (2021, (\"asset3\", \"asset4\")) => 153.918\n (2021, (\"asset1\", \"asset2\")) => 123.338\n\n\n\n\n\n","category":"method"},{"location":"95-reference/#TulipaEnergyModel.calculate_salvage_value-NTuple{5, Any}","page":"Reference","title":"TulipaEnergyModel.calculate_salvage_value","text":"calculate_salvage_value(discount_rate,\n economic_lifetime,\n annualized_cost,\n years,\n investable_assets,\n )\n\nCalculates the salvage value for each asset, both energy assets and transport assets.\n\nArguments\n\ndiscount_rate::Dict: A dictionary where the key is an asset or a pair of assets (asset1, asset2) for transport assets, and the value is the discount rate.\neconomic_lifetime::Dict: A dictionary where the key is an asset or a pair of assets (asset1, asset2) for transport assets, and the value is the economic lifetime.\nannualized_cost::Dict: A Dict where the keys are tuples (year, asset) representing the year and the asset, and the values are the annualized cost for each asset in each year.\nyears::Array: An array of years to be considered.\ninvestable_assets::Dict: A dictionary where the key is a year, and the value is an array of assets that are relevant for that year.\n\nReturns\n\nA Dict where the keys are tuples (year, asset) representing the year and the asset, and the values are the salvage value for each asset in each year.\n\nFormula\n\nThe salvage value for each asset in year is calculated using the formula:\n\nsalvagevalue = annualizedcost[(year, asset)] * sum( 1 / (1 + discountrate[asset])^(yearalias - year) for yearalias in salvagevalue_set[(year, asset)] )\n\nExample for energy assets\n\ndiscount_rate = Dict(\"asset1\" => 0.05, \"asset2\" => 0.07)\n\neconomic_lifetime = Dict(\"asset1\" => 10, \"asset2\" => 15)\n\nannualized_cost =\n Dict((2021, \"asset1\") => 123.338, (2021, \"asset2\") => 153.918, (2022, \"asset1\") => 135.671)\n\nyears = [2021, 2022]\n\ninvestable_assets = Dict(2021 => [\"asset1\", \"asset2\"], 2022 => [\"asset1\"])\n\nsalvage_value = calculate_salvage_value(\n discount_rate,\n economic_lifetime,\n annualized_cost,\n years,\n investable_assets,\n)\n\n# output\nDict{Tuple{Int64, String}, Float64} with 3 entries:\n (2021, \"asset1\") => 759.2\n (2021, \"asset2\") => 1202.24\n (2022, \"asset1\") => 964.325\n\nExample for transport assets\n\ndiscount_rate = Dict((\"asset1\", \"asset2\") => 0.05, (\"asset3\", \"asset4\") => 0.07)\n\neconomic_lifetime = Dict((\"asset1\", \"asset2\") => 10, (\"asset3\", \"asset4\") => 15)\n\nannualized_cost = Dict(\n (2022, (\"asset1\", \"asset2\")) => 135.671,\n (2021, (\"asset3\", \"asset4\")) => 153.918,\n (2021, (\"asset1\", \"asset2\")) => 123.338,\n)\n\nyears = [2021, 2022]\n\ninvestable_assets =\n Dict(2021 => [(\"asset1\", \"asset2\"), (\"asset3\", \"asset4\")], 
2022 => [(\"asset1\", \"asset2\")])\n\nsalvage_value = calculate_salvage_value(\n discount_rate,\n economic_lifetime,\n annualized_cost,\n years,\n investable_assets,\n)\n\n# output\n\nDict{Tuple{Int64, Tuple{String, String}}, Float64} with 3 entries:\n (2022, (\"asset1\", \"asset2\")) => 964.325\n (2021, (\"asset3\", \"asset4\")) => 1202.24\n (2021, (\"asset1\", \"asset2\")) => 759.2\n\n\n\n\n\n","category":"method"},{"location":"95-reference/#TulipaEnergyModel.calculate_weight_for_investment_discounts-NTuple{6, Any}","page":"Reference","title":"TulipaEnergyModel.calculate_weight_for_investment_discounts","text":"calculate_weight_for_investment_discounts(social_rate,\n discount_year,\n salvage_value,\n investment_cost,\n years,\n investable_assets,\n )\n\nCalculates the weight for investment discounts for each asset, both energy assets and transport assets.\n\nArguments\n\nsocial_rate::Float64: A value with the social discount rate.\ndiscount_year::Int64: A value with the discount year for all the investments.\nsalvage_value::Dict: A dictionary where the key is an tuple (year, asset) or (year, (asset1, asset2)) for transport assets, and the value is the salvage value.\ninvestment_cost::Dict: A dictionary where the key is an tuple (year, asset) or (year, (asset1, asset2)) for transport assets, and the value is the investment cost.\nyears::Array: An array of years to be considered.\ninvestable_assets::Dict: A dictionary where the key is a year, and the value is an array of assets that are relevant for that year.\n\nReturns\n\nA Dict where the keys are tuples (year, asset) representing the year and the asset, and the values are the weights for investment discounts.\n\nFormula\n\nThe weight for investment discounts for each asset in year is calculated using the formula:\n\nweightforinvestmentdiscounts = 1 / (1 + socialrate)^(year - discountyear) * (1 - salvagevalue[(year, asset)] / investment_cost[(year, asset)])\n\nExample for energy assets\n\nsocial_rate = 0.02\n\ndiscount_year = 2000\n\nsalvage_value = Dict(\n (2021, \"asset1\") => 759.1978422,\n (2021, \"asset2\") => 1202.2339859,\n (2022, \"asset1\") => 964.3285406,\n)\n\ninvestment_cost = Dict(\n (2021, \"asset1\") => 1000,\n (2021, \"asset2\") => 1500,\n (2022, \"asset1\") => 1100,\n (2022, \"asset2\") => 1600,\n)\nyears = [2021, 2022]\n\ninvestable_assets = Dict(2021 => [\"asset1\", \"asset2\"], 2022 => [\"asset1\"])\n\nweights = calculate_weight_for_investment_discounts(\n social_rate,\n discount_year,\n salvage_value,\n investment_cost,\n years,\n investable_assets,\n)\n\n# output\n\nDict{Tuple{Int64, String}, Float64} with 3 entries:\n (2021, \"asset1\") => 0.158875\n (2021, \"asset2\") => 0.130973\n (2022, \"asset1\") => 0.0797796\n\nExample for transport assets\n\nsocial_rate = 0.02\n\ndiscount_year = 2000\n\nsalvage_value = Dict(\n (2022, (\"asset1\", \"asset2\")) => 964.325,\n (2021, (\"asset3\", \"asset4\")) => 1202.24,\n (2021, (\"asset1\", \"asset2\")) => 759.2,\n)\n\ninvestment_cost = Dict((2021, (\"asset1\", \"asset2\")) => 1000, (2021, (\"asset3\", \"asset4\")) => 1500,\n (2022, (\"asset1\", \"asset2\")) => 1100, (2022, (\"asset3\", \"asset4\")) => 1600)\nyears = [2021, 2022]\n\ninvestable_assets = Dict(2021 => [(\"asset1\", \"asset2\"), (\"asset3\", \"asset4\")],\n 2022 => [(\"asset1\", \"asset2\")])\n\nweights = calculate_weight_for_investment_discounts(\n social_rate,\n discount_year,\n salvage_value,\n investment_cost,\n years,\n investable_assets,\n)\n\n# output\n\nDict{Tuple{Int64, Tuple{String, String}}, Float64} 
with 3 entries:\n (2022, (\"asset1\", \"asset2\")) => 0.0797817\n (2021, (\"asset3\", \"asset4\")) => 0.13097\n (2021, (\"asset1\", \"asset2\")) => 0.158874\n\n\n\n\n\n","category":"method"},{"location":"95-reference/#TulipaEnergyModel.calculate_weight_for_investment_discounts-Tuple{MetaGraphsNext.MetaGraph, Vararg{Any, 4}}","page":"Reference","title":"TulipaEnergyModel.calculate_weight_for_investment_discounts","text":"calculate_weight_for_investment_discounts(graph::MetaGraph,\n years,\n investable_assets,\n assets,\n model_parameters,\n )\n\nCalculates the weight for investment discounts for each asset, both energy assets and transport assets. Internally calls calculate_annualized_cost, calculate_salvage_value, calculate_weight_for_investment_discounts.\n\nArguments\n\ngraph::MetaGraph: A graph\nyears::Array: An array of years to be considered.\ninvestable_assets::Dict: A dictionary where the key is a year, and the value is an array of assets that are relevant for that year.\nassets::Array: An array of assets.\nmodel_parameters::ModelParameters: A model parameters structure.\n\nReturns\n\nA Dict where the keys are tuples (year, asset) representing the year and the asset, and the values are the weights for investment discounts.\n\n\n\n\n\n","category":"method"},{"location":"95-reference/#TulipaEnergyModel.compute_assets_partitions!-NTuple{4, Any}","page":"Reference","title":"TulipaEnergyModel.compute_assets_partitions!","text":"compute_assets_partitions!(partitions, df, a, representative_periods)\n\nParses the time blocks in the DataFrame df for the asset a and every representative period in the timesteps_per_rp dictionary, modifying the input partitions.\n\npartitions must be a dictionary indexed by the representative periods, possibly empty.\n\ntimesteps_per_rp must be a dictionary indexed by rep_period and its values are the timesteps of that rep_period.\n\nTo obtain the partitions, the columns specification and partition from df are passed to the function _parse_rp_partition.\n\n\n\n\n\n","category":"method"},{"location":"95-reference/#TulipaEnergyModel.compute_constraints_partitions-Tuple{Any, Any, Any}","page":"Reference","title":"TulipaEnergyModel.compute_constraints_partitions","text":"cons_partitions = compute_constraints_partitions(graph, representative_periods)\n\nComputes the constraints partitions using the assets and flows partitions stored in the graph, and the representative periods.\n\nThe function computes the constraints partitions by iterating over the partition dictionary, which specifies the partition strategy for each resolution (i.e., lowest or highest). 
For each asset and representative period, it calls the compute_rp_partition function to compute the partition based on the strategy.\n\n\n\n\n\n","category":"method"},{"location":"95-reference/#TulipaEnergyModel.compute_dual_variables-Tuple{Any}","page":"Reference","title":"TulipaEnergyModel.compute_dual_variables","text":"compute_dual_variables(model)\n\nCompute the dual variables for the given model.\n\nIf the model does not have dual variables, this function fixes the discrete variables, optimizes the model, and then computes the dual variables.\n\nArguments\n\nmodel: The model for which to compute the dual variables.\n\nReturns\n\nA named tuple containing the dual variables of selected constraints.\n\n\n\n\n\n","category":"method"},{"location":"95-reference/#TulipaEnergyModel.compute_flows_partitions!-NTuple{5, Any}","page":"Reference","title":"TulipaEnergyModel.compute_flows_partitions!","text":"compute_flows_partitions!(partitions, df, u, v, representative_periods)\n\nParses the time blocks in the DataFrame df for the flow (u, v) and every representative period in the timesteps_per_rp dictionary, modifying the input partitions.\n\npartitions must be a dictionary indexed by the representative periods, possibly empty.\n\ntimesteps_per_rp must be a dictionary indexed by rep_period and its values are the timesteps of that rep_period.\n\nTo obtain the partitions, the columns specification and partition from df are passed to the function _parse_rp_partition.\n\n\n\n\n\n","category":"method"},{"location":"95-reference/#TulipaEnergyModel.compute_rp_partition-Tuple{AbstractVector{<:AbstractVector{<:UnitRange{<:Integer}}}, Any}","page":"Reference","title":"TulipaEnergyModel.compute_rp_partition","text":"rp_partition = compute_rp_partition(partitions, :lowest)\n\nGiven the timesteps of various flows/assets in the partitions input, compute the representative period partitions.\n\nEach element of partitions is a partition with the following assumptions:\n\nAn element is of the form V = [r₁, r₂, …, rₘ], where each rᵢ is a range a:b.\nr₁ starts at 1.\nrᵢ₊₁ starts at the end of rᵢ plus 1.\nrₘ ends at some value N, that is the same for all elements of partitions.\n\nNotice that this implies that they form a disjunct partition of 1:N.\n\nThe output will also be a partition with the conditions above.\n\nStrategies\n\n:lowest\n\nIf strategy = :lowest (default), then the output is constructed greedily, i.e., it selects the next largest breakpoint following the algorithm below:\n\nInput: Vᴵ₁, …, Vᴵₚ, a list of time blocks. Each element of Vᴵⱼ is a range r = r.start:r.end. 
Output: V.\nCompute the end of the representative period N (all Vᴵⱼ should have the same end)\nStart with an empty V = []\nDefine the beginning of the range s = 1\nDefine an array with all the next breakpoints B such that Bⱼ is the first r.end such that r.end ≥ s for each r ∈ Vᴵⱼ.\nThe end of the range will be e = max Bⱼ.\nDefine r = s:e and add r to the end of V.\nIf e = N, then END\nOtherwise, define s = e + 1 and go to step 4.\n\nExamples\n\npartition1 = [1:4, 5:8, 9:12]\npartition2 = [1:3, 4:6, 7:9, 10:12]\ncompute_rp_partition([partition1, partition2], :lowest)\n\n# output\n\n3-element Vector{UnitRange{Int64}}:\n 1:4\n 5:8\n 9:12\n\npartition1 = [1:1, 2:3, 4:6, 7:10, 11:12]\npartition2 = [1:2, 3:4, 5:5, 6:7, 8:9, 10:12]\ncompute_rp_partition([partition1, partition2], :lowest)\n\n# output\n\n5-element Vector{UnitRange{Int64}}:\n 1:2\n 3:4\n 5:6\n 7:10\n 11:12\n\n:highest\n\nIf strategy = :highest, then the output includes all the breakpoints from the input. Another way of describing it is to select the minimum end-point instead of the maximum end-point in the :lowest strategy.\n\nExamples\n\npartition1 = [1:4, 5:8, 9:12]\npartition2 = [1:3, 4:6, 7:9, 10:12]\ncompute_rp_partition([partition1, partition2], :highest)\n\n# output\n\n6-element Vector{UnitRange{Int64}}:\n 1:3\n 4:4\n 5:6\n 7:8\n 9:9\n 10:12\n\npartition1 = [1:1, 2:3, 4:6, 7:10, 11:12]\npartition2 = [1:2, 3:4, 5:5, 6:7, 8:9, 10:12]\ncompute_rp_partition([partition1, partition2], :highest)\n\n# output\n\n10-element Vector{UnitRange{Int64}}:\n 1:1\n 2:2\n 3:3\n 4:4\n 5:5\n 6:6\n 7:7\n 8:9\n 10:10\n 11:12\n\n\n\n\n\n","category":"method"},{"location":"95-reference/#TulipaEnergyModel.construct_dataframes-NTuple{4, Any}","page":"Reference","title":"TulipaEnergyModel.construct_dataframes","text":"dataframes = construct_dataframes(\n graph,\n representative_periods,\n constraints_partitions,\n years,\n)\n\nComputes the data frames used to linearize the variables and constraints. These are used internally in the model only.\n\n\n\n\n\n","category":"method"},{"location":"95-reference/#TulipaEnergyModel.create_internal_structures-Tuple{Any}","page":"Reference","title":"TulipaEnergyModel.create_internal_structures","text":"graph, representative_periods, timeframe = create_internal_structures(connection)\n\nReturn the graph, representative_periods, and timeframe structures given the input dataframes structure.\n\nThe details of these structures are:\n\ngraph: a MetaGraph with the following information:\nlabels(graph): All assets.\nedge_labels(graph): All flows, in pair format (u, v), where u and v are assets.\ngraph[a]: A TulipaEnergyModel.GraphAssetData structure for asset a.\ngraph[u, v]: A TulipaEnergyModel.GraphFlowData structure for flow (u, v).\nrepresentative_periods: An array of TulipaEnergyModel.RepresentativePeriod ordered by their IDs.\ntimeframe: Information of TulipaEnergyModel.Timeframe.\n\n\n\n\n\n","category":"method"},{"location":"95-reference/#TulipaEnergyModel.create_intervals_for_years-Tuple{Any}","page":"Reference","title":"TulipaEnergyModel.create_intervals_for_years","text":"create_intervals(years)\n\nCreate a dictionary of intervals for years. The interval is assigned to its starting year. 
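For illustration only (a hedged sketch, not taken from the package's own examples): assuming years = [2021, 2025, 2030], the resulting dictionary would presumably be Dict(2021 => 4, 2025 => 5, 2030 => 1). 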
The last interval is 1.\n\n\n\n\n\n","category":"method"},{"location":"95-reference/#TulipaEnergyModel.create_model!-Tuple{Any}","page":"Reference","title":"TulipaEnergyModel.create_model!","text":"create_model!(energy_problem; verbose = false)\n\nCreate the internal model of an TulipaEnergyModel.EnergyProblem.\n\n\n\n\n\n","category":"method"},{"location":"95-reference/#TulipaEnergyModel.create_model-NTuple{7, Any}","page":"Reference","title":"TulipaEnergyModel.create_model","text":"model = create_model(graph, representative_periods, dataframes, timeframe, groups; write_lp_file = false)\n\nCreate the energy model given the graph, representative_periods, dictionary of dataframes (created by construct_dataframes), timeframe, and groups.\n\n\n\n\n\n","category":"method"},{"location":"95-reference/#TulipaEnergyModel.default_parameters-Tuple{Any}","page":"Reference","title":"TulipaEnergyModel.default_parameters","text":"default_parameters(Val(optimizer_name_symbol))\ndefault_parameters(optimizer)\ndefault_parameters(optimizer_name_symbol)\ndefault_parameters(optimizer_name_string)\n\nReturns the default parameters for a given JuMP optimizer. Falls back to Dict() for undefined solvers.\n\nArguments\n\nThere are four ways to use this function:\n\nVal(optimizer_name_symbol): This uses type dispatch with the special Val type. Pass the solver name as a Symbol (e.g., Val(:HiGHS)).\noptimizer: The JuMP optimizer type (e.g., HiGHS.Optimizer).\noptimizer_name_symbol or optimizer_name_string: Pass the name in Symbol or String format and it will be converted to Val.\n\nUsing Val is necessary for the dispatch. All other cases will convert the argument and call the Val version, which might lead to type instability.\n\nExamples\n\nusing HiGHS\ndefault_parameters(HiGHS.Optimizer)\n\n# output\n\nDict{String, Any} with 1 entry:\n \"output_flag\" => false\n\nAnother case\n\ndefault_parameters(Val(:Cbc))\n\n# output\n\nDict{String, Any} with 1 entry:\n \"logLevel\" => 0\n\ndefault_parameters(:Cbc) == default_parameters(\"Cbc\") == default_parameters(Val(:Cbc))\n\n# output\n\ntrue\n\n\n\n\n\n","category":"method"},{"location":"95-reference/#TulipaEnergyModel.filter_graph-Tuple{Any, Any, Any, Vararg{Any}}","page":"Reference","title":"TulipaEnergyModel.filter_graph","text":"filter_graph(graph, elements, value, key)\nfilter_graph(graph, elements, value, key, year)\n\nHelper function to filter elements (assets or flows) in the graph given a key (and possibly year) and value (or values). In the safest case, this is equivalent to the filters\n\nfilter_assets_whose_key_equal_to_value = a -> graph[a].key == value\nfilter_assets_whose_key_year_equal_to_value = a -> graph[a].key[year] in value\nfilter_flows_whose_key_equal_to_value = f -> graph[f...].key == value\nfilter_flows_whose_key_year_equal_to_value = f -> graph[f...].key[year] in value\n\n\n\n\n\n","category":"method"},{"location":"95-reference/#TulipaEnergyModel.get_graph_value_or_missing-Tuple{Any, Any, Any}","page":"Reference","title":"TulipaEnergyModel.get_graph_value_or_missing","text":"get_graph_value_or_missing(graph, graph_key, field_key)\nget_graph_value_or_missing(graph, graph_key, field_key, year)\n\nGet graph[graph_key].field_key (or graph[graph_key].field_key[year]) or return missing if any of the values do not exist. 
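For instance (an illustrative call only; the asset name and field are hypothetical), get_graph_value_or_missing(graph, \"ocgt\", :investment_cost, 2030) would return graph[\"ocgt\"].investment_cost[2030], or missing if that field or year is not defined. 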
We also check if graph[graph_key].active[year] is true if the year is passed and return missing otherwise.\n\n\n\n\n\n","category":"method"},{"location":"95-reference/#TulipaEnergyModel.profile_aggregation-NTuple{7, Any}","page":"Reference","title":"TulipaEnergyModel.profile_aggregation","text":"profile_aggregation(agg, profiles, key, block, default_value)\n\nAggregates the profiles[key] over the block using the agg function. If the profile does not exist, uses default_value instead of each profile value.\n\nprofiles should be a dictionary of profiles, for instance graph[a].profiles or graph[u, v].profiles. If profiles[key] exists, then this function computes the aggregation of profiles[key] over the range block using the aggregator agg, i.e., agg(profiles[key][block]). If profiles[key] does not exist, then this substitutes it with a vector of default_values.\n\n\n\n\n\n","category":"method"},{"location":"95-reference/#TulipaEnergyModel.read_parameters_from_file-Tuple{Any}","page":"Reference","title":"TulipaEnergyModel.read_parameters_from_file","text":"read_parameters_from_file(filepath)\n\nParse the parameters from a file into a dictionary. The keys and values are NOT checked to be valid parameters for any specific solvers.\n\nThe file should contain a list of lines of the following type:\n\nkey = value\n\nThe file is parsed as TOML, which is intuitive. See the example below.\n\nExample\n\n# Creating file\nfilepath, io = mktemp()\nprintln(io,\n \"\"\"\n true_or_false = true\n integer_number = 5\n real_number1 = 3.14\n big_number = 6.66E06\n small_number = 1e-8\n string = \"something\"\n \"\"\"\n)\nclose(io)\n# Reading\nread_parameters_from_file(filepath)\n\n# output\n\nDict{String, Any} with 6 entries:\n \"string\" => \"something\"\n \"integer_number\" => 5\n \"small_number\" => 1.0e-8\n \"true_or_false\" => true\n \"real_number1\" => 3.14\n \"big_number\" => 6.66e6\n\n\n\n\n\n","category":"method"},{"location":"95-reference/#TulipaEnergyModel.run_scenario-Tuple{Any}","page":"Reference","title":"TulipaEnergyModel.run_scenario","text":"energy_problem = run_scenario(connection; optimizer, parameters, write_lp_file, log_file, show_log)\n\nRun the scenario in the given connection and return the energy problem.\n\nThe optimizer and parameters keyword arguments can be used to change the optimizer (the default is HiGHS) and its parameters. The variables are passed to the solve_model function.\n\nSet write_lp_file = true to export the problem that is sent to the solver to a file for viewing. Set show_log = false to silence printing the log while running. Specify a log_file name to export the log to a file.\n\n\n\n\n\n","category":"method"},{"location":"95-reference/#TulipaEnergyModel.safe_comparison-Tuple{Any, Any, Any, Vararg{Any}}","page":"Reference","title":"TulipaEnergyModel.safe_comparison","text":"safe_comparison(graph, a, value, key)\nsafe_comparison(graph, a, value, key, year)\n\nCheck if graph[a].value (or graph[a].value[year]) is equal to value. This function assumes that if graph[a].value is a dictionary and value is not, then you made a mistake. This makes it safer, because it will not silently return false. It also checks for missing.\n\n\n\n\n\n","category":"method"},{"location":"95-reference/#TulipaEnergyModel.safe_inclusion-Tuple{Any, Any, Vector, Vararg{Any}}","page":"Reference","title":"TulipaEnergyModel.safe_inclusion","text":"safe_inclusion(graph, a, value, key)\nsafe_inclusion(graph, a, value, key, year)\n\nCheck if graph[a].value (or graph[a].value[year]) is in values. 
This correctly checks that missing in [missing] returns false.\n\n\n\n\n\n","category":"method"},{"location":"95-reference/#TulipaEnergyModel.save_solution_to_file-NTuple{4, Any}","page":"Reference","title":"TulipaEnergyModel.save_solution_to_file","text":"save_solution_to_file(output_folder, graph, solution)\n\nSaves the solution in CSV files inside output_folder.\n\nThe following files are created:\n\nassets-investment.csv: The format of each row is a,v,p*v, where a is the asset name, v is the corresponding asset investment value, and p is the corresponding capacity value. Only investable assets are included.\nassets-investments-energy.csv: The format of each row is a,v,p*v, where a is the asset name, v is the corresponding asset investment value on energy, and p is the corresponding energy capacity value. Only investable assets with a storage_method_energy set to true are included.\nflows-investment.csv: Similar to assets-investment.csv, but for flows.\nflows.csv: The value of each flow, per (from, to) flow, rp representative period and timestep. Since the flow is in power, the value at a timestep is equal to the value at the corresponding time block, i.e., if flow[1:3] = 30, then flow[1] = flow[2] = flow[3] = 30.\nstorage-level.csv: The value of each storage level, per asset, rp representative period, and timestep. Since the storage level is in energy, the value at a timestep is a proportional fraction of the value at the corresponding time block, i.e., if level[1:3] = 30, then level[1] = level[2] = level[3] = 10.\n\n\n\n\n\n","category":"method"},{"location":"95-reference/#TulipaEnergyModel.save_solution_to_file-Tuple{Any, EnergyProblem}","page":"Reference","title":"TulipaEnergyModel.save_solution_to_file","text":"save_solution_to_file(output_folder, energy_problem)\n\nSaves the solution from energy_problem in CSV files inside output_folder.\n\n\n\n\n\n","category":"method"},{"location":"95-reference/#TulipaEnergyModel.solve_model","page":"Reference","title":"TulipaEnergyModel.solve_model","text":"solution = solve_model(model[, optimizer; parameters])\n\nSolve the JuMP model and return the solution. The optimizer argument should be an MILP solver from the JuMP list of supported solvers. By default we use HiGHS.\n\nThe keyword argument parameters should be passed as a list of key => value pairs. These can be created manually, obtained using default_parameters, or read from a file using read_parameters_from_file.\n\nThe solution object is a mutable struct with the following fields:\n\nassets_investment[a]: The investment for each asset, indexed on the investable asset a. To create a traditional array in the order given by the investable assets, one can run\n[solution.assets_investment[a] for a in labels(graph) if graph[a].investable]\nassets_investment_energy[a]: The investment in the energy component for each asset, indexed on the investable asset a with a storage_method_energy set to true.\nTo create a traditional array in the order given by the investable assets, one can run\n[solution.assets_investment_energy[a] for a in labels(graph) if graph[a].investable && graph[a].storage_method_energy]\nflows_investment[u, v]: The investment for each flow, indexed on the investable flow (u, v). 
To create a traditional array in the order given by the investable flows, one can run\n[solution.flows_investment[(u, v)] for (u, v) in edge_labels(graph) if graph[u, v].investable]\nstorage_level_intra_rp[a, rp, timesteps_block]: The storage level for the storage asset a for a representative period rp and a time block timesteps_block. The list of time blocks is defined by constraints_partitions, which was used to create the model. To create a vector with all values of storage_level_intra_rp for a given a and rp, one can run\n[solution.storage_level_intra_rp[a, rp, timesteps_block] for timesteps_block in constraints_partitions[:lowest_resolution][(a, rp)]]\nstorage_level_inter_rp[a, pb]: The storage level for the storage asset a for a periods block pb. To create a vector with all values of storage_level_inter_rp for a given a, one can run\n[solution.storage_level_inter_rp[a, bp] for bp in graph[a].timeframe_partitions[a]]\nflow[(u, v), rp, timesteps_block]: The flow value for a given flow (u, v) at a given representative period rp, and time block timesteps_block. The list of time blocks is defined by graph[(u, v)].partitions[rp]. To create a vector with all values of flow for a given (u, v) and rp, one can run\n[solution.flow[(u, v), rp, timesteps_block] for timesteps_block in graph[u, v].partitions[rp]]\nobjective_value: A Float64 with the objective value at the solution.\nduals: A NamedTuple containing the dual variables of selected constraints.\n\nExamples\n\nparameters = Dict{String,Any}(\"presolve\" => \"on\", \"time_limit\" => 60.0, \"output_flag\" => true)\nsolution = solve_model(model, HiGHS.Optimizer; parameters = parameters)\n\n\n\n\n\n","category":"function"},{"location":"95-reference/#TulipaEnergyModel.solve_model!","page":"Reference","title":"TulipaEnergyModel.solve_model!","text":"solution = solve_model!(energy_problem[, optimizer; parameters])\n\nSolve the internal model of an energy_problem. The solution obtained by calling solve_model is returned.\n\n\n\n\n\n","category":"function"},{"location":"95-reference/#TulipaEnergyModel.solve_model!-Tuple{Any, Any, Vararg{Any}}","page":"Reference","title":"TulipaEnergyModel.solve_model!","text":"solution = solve_model!(dataframes, model, ...)\n\nSolves the JuMP model, returns the solution, and modifies dataframes to include the solution. The modifications made to dataframes are:\n\ndf_flows.solution = solution.flow\ndf_storage_level_intra_rp.solution = solution.storage_level_intra_rp\ndf_storage_level_inter_rp.solution = solution.storage_level_inter_rp\n\n\n\n\n\n","category":"method"},{"location":"20-tutorials/#tutorials","page":"Tutorials","title":"Tutorials","text":"","category":"section"},{"location":"20-tutorials/","page":"Tutorials","title":"Tutorials","text":"Here are some tutorials on how to use Tulipa.","category":"page"},{"location":"20-tutorials/","page":"Tutorials","title":"Tutorials","text":"Pages = [\"20-tutorials.md\"]\nDepth = 3","category":"page"},{"location":"20-tutorials/#basic-example","page":"Tutorials","title":"Basic example","text":"","category":"section"},{"location":"20-tutorials/","page":"Tutorials","title":"Tutorials","text":"For our first example, let's use a tiny existing dataset. 
Inside the code for this package, you can find the folder test/inputs/Tiny, which includes all the files necessary to create a model and solve it.","category":"page"},{"location":"20-tutorials/","page":"Tutorials","title":"Tutorials","text":"The files inside the \"Tiny\" folder define the assets and flows data, their profiles, and their time resolution, as well as define the representative periods and which periods in the full problem formulation they represent.¹","category":"page"},{"location":"20-tutorials/","page":"Tutorials","title":"Tutorials","text":"For more details about these files, see Input.","category":"page"},{"location":"20-tutorials/","page":"Tutorials","title":"Tutorials","text":"¹ Ignore bad-assets-data.csv, which is used for testing.","category":"page"},{"location":"20-tutorials/#Run-scenario","page":"Tutorials","title":"Run scenario","text":"","category":"section"},{"location":"20-tutorials/","page":"Tutorials","title":"Tutorials","text":"To read all data from the Tiny folder, perform all necessary steps to create a model, and solve the model, run the following in a Julia terminal:","category":"page"},{"location":"20-tutorials/","page":"Tutorials","title":"Tutorials","text":"using DuckDB, TulipaIO, TulipaEnergyModel\n\ninput_dir = \"../../test/inputs/Tiny\" # hide\n# input_dir should be the path to Tiny as a string (something like \"test/inputs/Tiny\")\n# TulipaEnergyModel.schema_per_table_name contains the schema with columns and types the file must have\nconnection = DBInterface.connect(DuckDB.DB)\nread_csv_folder(connection, input_dir; schemas = TulipaEnergyModel.schema_per_table_name)\nenergy_problem = run_scenario(connection)","category":"page"},{"location":"20-tutorials/","page":"Tutorials","title":"Tutorials","text":"The energy_problem variable is of type EnergyProblem. For more details, see the documentation for that type or the section Structures.","category":"page"},{"location":"20-tutorials/","page":"Tutorials","title":"Tutorials","text":"That's all it takes to run a scenario! 
To learn about the data required to run your own scenario, see the Input section of How to Use.","category":"page"},{"location":"20-tutorials/#Manually-running-each-step","page":"Tutorials","title":"Manually running each step","text":"","category":"section"},{"location":"20-tutorials/","page":"Tutorials","title":"Tutorials","text":"If we need more control, we can create the energy problem first, then the optimization model inside it, and finally ask for it to be solved.","category":"page"},{"location":"20-tutorials/","page":"Tutorials","title":"Tutorials","text":"using DuckDB, TulipaIO, TulipaEnergyModel\n\ninput_dir = \"../../test/inputs/Tiny\" # hide\n# input_dir should be the path to Tiny as a string (something like \"test/inputs/Tiny\")\nconnection = DBInterface.connect(DuckDB.DB)\nread_csv_folder(connection, input_dir; schemas = TulipaEnergyModel.schema_per_table_name)\nenergy_problem = EnergyProblem(connection)","category":"page"},{"location":"20-tutorials/","page":"Tutorials","title":"Tutorials","text":"The energy problem does not have a model yet:","category":"page"},{"location":"20-tutorials/","page":"Tutorials","title":"Tutorials","text":"energy_problem.model === nothing","category":"page"},{"location":"20-tutorials/","page":"Tutorials","title":"Tutorials","text":"To create the internal model, we call the function create_model!.","category":"page"},{"location":"20-tutorials/","page":"Tutorials","title":"Tutorials","text":"create_model!(energy_problem)\nenergy_problem.model","category":"page"},{"location":"20-tutorials/","page":"Tutorials","title":"Tutorials","text":"The model has not been solved yet, which can be verified through the solved flag inside the energy problem:","category":"page"},{"location":"20-tutorials/","page":"Tutorials","title":"Tutorials","text":"energy_problem.solved","category":"page"},{"location":"20-tutorials/","page":"Tutorials","title":"Tutorials","text":"Finally, we can solve the model:","category":"page"},{"location":"20-tutorials/","page":"Tutorials","title":"Tutorials","text":"solution = solve_model!(energy_problem)","category":"page"},{"location":"20-tutorials/","page":"Tutorials","title":"Tutorials","text":"The solution is included in the individual assets and flows, but for completeness, we return the full solution object, also defined in the Structures section.","category":"page"},{"location":"20-tutorials/","page":"Tutorials","title":"Tutorials","text":"In particular, the objective value and the termination status are also included in the energy problem:","category":"page"},{"location":"20-tutorials/","page":"Tutorials","title":"Tutorials","text":"energy_problem.objective_value, energy_problem.termination_status","category":"page"},{"location":"20-tutorials/#Manually-creating-all-structures-without-EnergyProblem","page":"Tutorials","title":"Manually creating all structures without EnergyProblem","text":"","category":"section"},{"location":"20-tutorials/","page":"Tutorials","title":"Tutorials","text":"For additional control, it might be desirable to use the internal structures of EnergyProblem directly. This can be error-prone, so use it with care. 
The full description for these structures can be found in Structures.","category":"page"},{"location":"20-tutorials/","page":"Tutorials","title":"Tutorials","text":"using DuckDB, TulipaIO, TulipaEnergyModel\n\ninput_dir = \"../../test/inputs/Tiny\" # hide\n# input_dir should be the path to Tiny as a string (something like \"test/inputs/Tiny\")\nconnection = DBInterface.connect(DuckDB.DB)\nread_csv_folder(connection, input_dir; schemas = TulipaEnergyModel.schema_per_table_name)\nmodel_parameters = ModelParameters(connection)\ngraph, representative_periods, timeframe, groups, years = create_internal_structures(connection)","category":"page"},{"location":"20-tutorials/","page":"Tutorials","title":"Tutorials","text":"We also need a time partition for the constraints to create the model. Creating an energy problem automatically computes this data, but since we are doing it manually, we need to calculate it ourselves.","category":"page"},{"location":"20-tutorials/","page":"Tutorials","title":"Tutorials","text":"constraints_partitions = compute_constraints_partitions(graph, representative_periods, years)","category":"page"},{"location":"20-tutorials/","page":"Tutorials","title":"Tutorials","text":"The constraints_partitions has two dictionaries with the keys :lowest_resolution and :highest_resolution. The lowest resolution dictionary is mainly used to create the constraints for energy balance, whereas the highest resolution dictionary is mainly used to create the capacity constraints in the model.","category":"page"},{"location":"20-tutorials/","page":"Tutorials","title":"Tutorials","text":"Finally, we also need dataframes that store the linearized indexes of the variables.","category":"page"},{"location":"20-tutorials/","page":"Tutorials","title":"Tutorials","text":"dataframes = construct_dataframes(graph, representative_periods, constraints_partitions, years)","category":"page"},{"location":"20-tutorials/","page":"Tutorials","title":"Tutorials","text":"Now we can compute the model.","category":"page"},{"location":"20-tutorials/","page":"Tutorials","title":"Tutorials","text":"model = create_model(graph, representative_periods, dataframes, years, timeframe, groups, model_parameters)","category":"page"},{"location":"20-tutorials/","page":"Tutorials","title":"Tutorials","text":"Finally, we can compute the solution.","category":"page"},{"location":"20-tutorials/","page":"Tutorials","title":"Tutorials","text":"solution = solve_model(model)","category":"page"},{"location":"20-tutorials/","page":"Tutorials","title":"Tutorials","text":"or, if we want to store the flow, storage_level_intra_rp, and storage_level_inter_rp optimal value in the dataframes:","category":"page"},{"location":"20-tutorials/","page":"Tutorials","title":"Tutorials","text":"solution = solve_model!(dataframes, model)","category":"page"},{"location":"20-tutorials/","page":"Tutorials","title":"Tutorials","text":"This solution structure is the same as the one returned when using an EnergyProblem.","category":"page"},{"location":"20-tutorials/#Change-optimizer-and-specify-parameters","page":"Tutorials","title":"Change optimizer and specify parameters","text":"","category":"section"},{"location":"20-tutorials/","page":"Tutorials","title":"Tutorials","text":"By default, the model is solved using the HiGHS optimizer (or solver). To change this, we can give the functions run_scenario, solve_model, or solve_model! 
a different optimizer.","category":"page"},{"location":"20-tutorials/","page":"Tutorials","title":"Tutorials","text":"For instance, we run the GLPK optimizer below:","category":"page"},{"location":"20-tutorials/","page":"Tutorials","title":"Tutorials","text":"using DuckDB, TulipaIO, TulipaEnergyModel, GLPK\n\ninput_dir = \"../../test/inputs/Tiny\" # hide\nconnection = DBInterface.connect(DuckDB.DB)\nread_csv_folder(connection, input_dir; schemas = TulipaEnergyModel.schema_per_table_name)\nenergy_problem = run_scenario(connection, optimizer = GLPK.Optimizer)","category":"page"},{"location":"20-tutorials/","page":"Tutorials","title":"Tutorials","text":"or","category":"page"},{"location":"20-tutorials/","page":"Tutorials","title":"Tutorials","text":"using GLPK\n\nsolution = solve_model!(energy_problem, GLPK.Optimizer)","category":"page"},{"location":"20-tutorials/","page":"Tutorials","title":"Tutorials","text":"or","category":"page"},{"location":"20-tutorials/","page":"Tutorials","title":"Tutorials","text":"using GLPK\n\nsolution = solve_model(model, GLPK.Optimizer)","category":"page"},{"location":"20-tutorials/","page":"Tutorials","title":"Tutorials","text":"Notice that, in any of these cases, we need to explicitly add the GLPK package ourselves and add using GLPK before using GLPK.Optimizer.","category":"page"},{"location":"20-tutorials/","page":"Tutorials","title":"Tutorials","text":"In any of these cases, default parameters for the GLPK optimizer are used, which you can query using default_parameters. You can pass a dictionary using the keyword argument parameters to change the defaults. For instance, in the example below, we change the maximum allowed runtime for GLPK to be 1 ms, which will most likely cause it to fail to converge in time.","category":"page"},{"location":"20-tutorials/","page":"Tutorials","title":"Tutorials","text":"using DuckDB, TulipaIO, TulipaEnergyModel, GLPK\n\ninput_dir = \"../../test/inputs/Tiny\" # hide\nparameters = Dict(\"tm_lim\" => 1)\nconnection = DBInterface.connect(DuckDB.DB)\nread_csv_folder(connection, input_dir; schemas = TulipaEnergyModel.schema_per_table_name)\nenergy_problem = run_scenario(connection, optimizer = GLPK.Optimizer, parameters = parameters)\nenergy_problem.termination_status","category":"page"},{"location":"20-tutorials/","page":"Tutorials","title":"Tutorials","text":"For the complete list of parameters, check your chosen optimizer.","category":"page"},{"location":"20-tutorials/","page":"Tutorials","title":"Tutorials","text":"These parameters can also be passed via a file. See the read_parameters_from_file function for more details.","category":"page"},{"location":"20-tutorials/#graph-tutorial","page":"Tutorials","title":"Using the graph structure","text":"","category":"section"},{"location":"20-tutorials/","page":"Tutorials","title":"Tutorials","text":"Read about the graph structure in the Graph section first.","category":"page"},{"location":"20-tutorials/","page":"Tutorials","title":"Tutorials","text":"We will use the graph created above for the \"Tiny\" dataset.","category":"page"},{"location":"20-tutorials/","page":"Tutorials","title":"Tutorials","text":"The first thing that we can do is access all assets. 
They are the labels of the graph and can be accessed via the MetaGraphsNext API:","category":"page"},{"location":"20-tutorials/","page":"Tutorials","title":"Tutorials","text":"using MetaGraphsNext\n# Accessing assets\nlabels(graph)","category":"page"},{"location":"20-tutorials/","page":"Tutorials","title":"Tutorials","text":"Notice that the result is a generator, so if we want the actual results, we have to collect it:","category":"page"},{"location":"20-tutorials/","page":"Tutorials","title":"Tutorials","text":"labels(graph) |> collect","category":"page"},{"location":"20-tutorials/","page":"Tutorials","title":"Tutorials","text":"To access the asset data, we can index the graph with an asset label:","category":"page"},{"location":"20-tutorials/","page":"Tutorials","title":"Tutorials","text":"graph[\"ocgt\"]","category":"page"},{"location":"20-tutorials/","page":"Tutorials","title":"Tutorials","text":"This is a Julia struct, or composite type, named GraphAssetData. We can access its fields with .:","category":"page"},{"location":"20-tutorials/","page":"Tutorials","title":"Tutorials","text":"graph[\"ocgt\"].type","category":"page"},{"location":"20-tutorials/","page":"Tutorials","title":"Tutorials","text":"Since labels returns a generator, we can iterate over its contents without collecting it into a vector.","category":"page"},{"location":"20-tutorials/","page":"Tutorials","title":"Tutorials","text":"for a in labels(graph)\n println(\"Asset $a has type $(graph[a].type)\")\nend","category":"page"},{"location":"20-tutorials/","page":"Tutorials","title":"Tutorials","text":"To get all flows we can use edge_labels:","category":"page"},{"location":"20-tutorials/","page":"Tutorials","title":"Tutorials","text":"edge_labels(graph) |> collect","category":"page"},{"location":"20-tutorials/","page":"Tutorials","title":"Tutorials","text":"To access the flow data, we index with graph[u, v]:","category":"page"},{"location":"20-tutorials/","page":"Tutorials","title":"Tutorials","text":"graph[\"ocgt\", \"demand\"]","category":"page"},{"location":"20-tutorials/","page":"Tutorials","title":"Tutorials","text":"The type of the flow struct is GraphFlowData.","category":"page"},{"location":"20-tutorials/","page":"Tutorials","title":"Tutorials","text":"We can easily find all assets u for which a flow (u, a) exists for a given asset a (in this case, demand):","category":"page"},{"location":"20-tutorials/","page":"Tutorials","title":"Tutorials","text":"inneighbor_labels(graph, \"demand\") |> collect","category":"page"},{"location":"20-tutorials/","page":"Tutorials","title":"Tutorials","text":"Similarly, all assets v for which a flow (a, v) exists for a given asset a (in this case, ocgt):","category":"page"},{"location":"20-tutorials/","page":"Tutorials","title":"Tutorials","text":"outneighbor_labels(graph, \"ocgt\") |> collect","category":"page"},{"location":"20-tutorials/#solution-tutorial","page":"Tutorials","title":"Manipulating the solution","text":"","category":"section"},{"location":"20-tutorials/","page":"Tutorials","title":"Tutorials","text":"First, see the description of the solution object.","category":"page"},{"location":"20-tutorials/","page":"Tutorials","title":"Tutorials","text":"Let's consider the larger dataset \"Norse\" in this section. 
And let's talk about two ways to access the solution.","category":"page"},{"location":"20-tutorials/#The-solution-returned-by-solve_model","page":"Tutorials","title":"The solution returned by solve_model","text":"","category":"section"},{"location":"20-tutorials/","page":"Tutorials","title":"Tutorials","text":"The solution, as shown before, can be obtained when calling solve_model or solve_model!.","category":"page"},{"location":"20-tutorials/","page":"Tutorials","title":"Tutorials","text":"using DuckDB, TulipaIO, TulipaEnergyModel\n\ninput_dir = \"../../test/inputs/Norse\" # hide\n# input_dir should be the path to Norse as a string (something like \"test/inputs/Norse\")\nconnection = DBInterface.connect(DuckDB.DB)\nread_csv_folder(connection, input_dir; schemas = TulipaEnergyModel.schema_per_table_name)\nenergy_problem = EnergyProblem(connection)\ncreate_model!(energy_problem)\nsolution = solve_model!(energy_problem)\nnothing # hide","category":"page"},{"location":"20-tutorials/","page":"Tutorials","title":"Tutorials","text":"To create a traditional array in the order given by the investable assets, one can run","category":"page"},{"location":"20-tutorials/","page":"Tutorials","title":"Tutorials","text":"The solution.flow, solution.storage_level_intra_rp, and solution.storage_level_inter_rp values are linearized according to the dataframes in the dictionary energy_problem.dataframes with keys :flows, :lowest_storage_level_intra_rp, and :storage_level_inter_rp, respectively. You need to query the data from these dataframes and then use the column index to select the appropriate value.","category":"page"},{"location":"20-tutorials/","page":"Tutorials","title":"Tutorials","text":"To create a vector with all values of flow for a given (u, v) and rp, one can run","category":"page"},{"location":"20-tutorials/","page":"Tutorials","title":"Tutorials","text":"using MetaGraphsNext\ngraph = energy_problem.graph\n\n(u, v) = first(edge_labels(graph))\nrp = 1\ndf = filter(\n row -> row.rep_period == rp && row.from == u && row.to == v,\n energy_problem.dataframes[:flows],\n view = true,\n)\n[solution.flow[row.index] for row in eachrow(df)]","category":"page"},{"location":"20-tutorials/","page":"Tutorials","title":"Tutorials","text":"To create a vector with all the values of storage_level_intra_rp for a given a and rp, one can run","category":"page"},{"location":"20-tutorials/","page":"Tutorials","title":"Tutorials","text":"a = energy_problem.dataframes[:lowest_storage_level_intra_rp].asset[1]\nrp = 1\ndf = filter(\n row -> row.asset == a && row.rep_period == rp,\n energy_problem.dataframes[:lowest_storage_level_intra_rp],\n view = true,\n)\n[solution.storage_level_intra_rp[row.index] for row in eachrow(df)]","category":"page"},{"location":"20-tutorials/","page":"Tutorials","title":"Tutorials","text":"To create a vector with all the values of storage_level_inter_rp for a given a, one can run","category":"page"},{"location":"20-tutorials/","page":"Tutorials","title":"Tutorials","text":"a = energy_problem.dataframes[:storage_level_inter_rp].asset[1]\ndf = filter(\n row -> row.asset == a,\n energy_problem.dataframes[:storage_level_inter_rp],\n view = true,\n)\n[solution.storage_level_inter_rp[row.index] for row in eachrow(df)]","category":"page"},{"location":"20-tutorials/#The-solution-inside-the-graph","page":"Tutorials","title":"The solution inside the graph","text":"","category":"section"},{"location":"20-tutorials/","page":"Tutorials","title":"Tutorials","text":"In addition to the solution object, the 
solution is also stored by the individual assets and flows when solve_model! is called (i.e., when using an EnergyProblem object).","category":"page"},{"location":"20-tutorials/","page":"Tutorials","title":"Tutorials","text":"They can be accessed like any other value from GraphAssetData or GraphFlowData, which means that we recreate the values from the previous section in a new way:","category":"page"},{"location":"20-tutorials/","page":"Tutorials","title":"Tutorials","text":"years = [year.id for year in energy_problem.years]\nDict(\n (y, a) => [\n energy_problem.graph[a].investment[y]\n ] for y in years for a in labels(graph) if graph[a].investable[y]\n)","category":"page"},{"location":"20-tutorials/","page":"Tutorials","title":"Tutorials","text":"Dict(\n (y, a) => [\n energy_problem.graph[u, v].investment[y]\n ] for y in years for (u, v) in edge_labels(graph) if graph[u, v].investable[y]\n)","category":"page"},{"location":"20-tutorials/","page":"Tutorials","title":"Tutorials","text":"(u, v) = first(edge_labels(graph))\nrp = 1\ndf = filter(\n row -> row.rep_period == rp && row.from == u && row.to == v,\n energy_problem.dataframes[:flows],\n view = true,\n)\n[energy_problem.graph[u, v].flow[(rp, row.timesteps_block)] for row in eachrow(df)]","category":"page"},{"location":"20-tutorials/","page":"Tutorials","title":"Tutorials","text":"To create a vector with all the values of storage_level_intra_rp for a given a and rp, one can run","category":"page"},{"location":"20-tutorials/","page":"Tutorials","title":"Tutorials","text":"a = energy_problem.dataframes[:lowest_storage_level_intra_rp].asset[1]\nrp = 1\ndf = filter(\n row -> row.asset == a && row.rep_period == rp,\n energy_problem.dataframes[:lowest_storage_level_intra_rp],\n view = true,\n)\n[energy_problem.graph[a].storage_level_intra_rp[(rp, row.timesteps_block)] for row in eachrow(df)]","category":"page"},{"location":"20-tutorials/","page":"Tutorials","title":"Tutorials","text":"To create a vector with all the values of storage_level_inter_rp for a given a, one can run","category":"page"},{"location":"20-tutorials/","page":"Tutorials","title":"Tutorials","text":"a = energy_problem.dataframes[:storage_level_inter_rp].asset[1]\ndf = filter(\n row -> row.asset == a,\n energy_problem.dataframes[:storage_level_inter_rp],\n view = true,\n)\n[energy_problem.graph[a].storage_level_inter_rp[row.periods_block] for row in eachrow(df)]","category":"page"},{"location":"20-tutorials/#The-solution-inside-the-dataframes-object","page":"Tutorials","title":"The solution inside the dataframes object","text":"","category":"section"},{"location":"20-tutorials/","page":"Tutorials","title":"Tutorials","text":"In addition to being stored in the solution object, and in the graph object, the solution for the flow, storage_level_intra_rp, and storage_level_inter_rp is also stored inside the corresponding DataFrame objects if solve_model! 
is called.","category":"page"},{"location":"20-tutorials/","page":"Tutorials","title":"Tutorials","text":"The code below will do the same as in the two previous examples:","category":"page"},{"location":"20-tutorials/","page":"Tutorials","title":"Tutorials","text":"(u, v) = first(edge_labels(graph))\nrp = 1\ndf = filter(\n row -> row.rep_period == rp && row.from == u && row.to == v,\n energy_problem.dataframes[:flows],\n view = true,\n)\ndf.solution","category":"page"},{"location":"20-tutorials/","page":"Tutorials","title":"Tutorials","text":"a = energy_problem.dataframes[:storage_level_inter_rp].asset[1]\ndf = filter(\n row -> row.asset == a,\n energy_problem.dataframes[:storage_level_inter_rp],\n view = true,\n)\ndf.solution","category":"page"},{"location":"20-tutorials/","page":"Tutorials","title":"Tutorials","text":"a = energy_problem.dataframes[:lowest_storage_level_intra_rp].asset[1]\nrp = 1\ndf = filter(\n row -> row.asset == a && row.rep_period == rp,\n energy_problem.dataframes[:lowest_storage_level_intra_rp],\n view = true,\n)\ndf.solution","category":"page"},{"location":"20-tutorials/#Values-of-constraints-and-expressions","page":"Tutorials","title":"Values of constraints and expressions","text":"","category":"section"},{"location":"20-tutorials/","page":"Tutorials","title":"Tutorials","text":"By accessing the model directly, we can query the values of constraints and expressions. We need to know the name of the constraint and how it is indexed, and for that, you will need to check the model.","category":"page"},{"location":"20-tutorials/","page":"Tutorials","title":"Tutorials","text":"For instance, we can get all incoming flows in the lowest resolution for a given asset for a given representative period with the following:","category":"page"},{"location":"20-tutorials/","page":"Tutorials","title":"Tutorials","text":"using JuMP\na = energy_problem.dataframes[:lowest].asset[end]\nrp = 1\ndf = filter(\n row -> row.asset == a && row.rep_period == rp,\n energy_problem.dataframes[:lowest],\n view = true,\n)\n[value(energy_problem.model[:incoming_flow_lowest_resolution][row.index]) for row in eachrow(df)]","category":"page"},{"location":"20-tutorials/","page":"Tutorials","title":"Tutorials","text":"The values of constraints can also be obtained, however, they are frequently indexed in a subset, which means that their indexing is not straightforward. To know how they are indexed, it is necessary to look at the model code. For instance, to get the consumer balance, we first need to filter the :highest_in_out dataframes by consumers:","category":"page"},{"location":"20-tutorials/","page":"Tutorials","title":"Tutorials","text":"df_consumers = filter(\n row -> graph[row.asset].type == \"consumer\",\n energy_problem.dataframes[:highest_in_out],\n view = false,\n);\nnothing # hide","category":"page"},{"location":"20-tutorials/","page":"Tutorials","title":"Tutorials","text":"We set view = false to create a copy of this DataFrame so we can make our indexes:","category":"page"},{"location":"20-tutorials/","page":"Tutorials","title":"Tutorials","text":"df_consumers.index = 1:size(df_consumers, 1) # overwrites existing index","category":"page"},{"location":"20-tutorials/","page":"Tutorials","title":"Tutorials","text":"Now we can filter this DataFrame. 
Note that the names in the stored dataframes are defined as Symbol.","category":"page"},{"location":"20-tutorials/","page":"Tutorials","title":"Tutorials","text":"a = \"Asgard_E_demand\"\ndf = filter(\n row -> row.asset == a && row.rep_period == rp,\n df_consumers,\n view = true,\n)\nvalue.(energy_problem.model[:consumer_balance][df.index])","category":"page"},{"location":"20-tutorials/","page":"Tutorials","title":"Tutorials","text":"Here value. (i.e., broadcasting) was used instead of the vector comprehension from previous examples just to show that it also works.","category":"page"},{"location":"20-tutorials/","page":"Tutorials","title":"Tutorials","text":"The value of the constraint is obtained by looking only at the part with variables. So a constraint like 2x + 3y - 1 <= 4 would return the value of 2x + 3y.","category":"page"},{"location":"20-tutorials/#Writing-the-output-to-CSV","page":"Tutorials","title":"Writing the output to CSV","text":"","category":"section"},{"location":"20-tutorials/","page":"Tutorials","title":"Tutorials","text":"To save the solution to CSV files, you can use save_solution_to_file:","category":"page"},{"location":"20-tutorials/","page":"Tutorials","title":"Tutorials","text":"mkdir(\"outputs\")\nsave_solution_to_file(\"outputs\", energy_problem)","category":"page"},{"location":"20-tutorials/#Plotting","page":"Tutorials","title":"Plotting","text":"","category":"section"},{"location":"20-tutorials/","page":"Tutorials","title":"Tutorials","text":"In the previous sections, we have shown how to create vectors such as the one for flows. If you want simple plots, you can plot the vectors directly using any package you like.","category":"page"},{"location":"20-tutorials/","page":"Tutorials","title":"Tutorials","text":"If you would like more custom plots, check out TulipaPlots.jl, under development, which provides tailor-made plots for TulipaEnergyModel.jl.","category":"page"},{"location":"90-contributing/#contributing","page":"Contributing Guidelines","title":"Contributing Guidelines","text":"","category":"section"},{"location":"90-contributing/","page":"Contributing Guidelines","title":"Contributing Guidelines","text":"Great that you want to contribute to the development of Tulipa! Please read these guidelines and our Developer Documentation to get you started.","category":"page"},{"location":"90-contributing/#GitHub-Rules-of-Engagement","page":"Contributing Guidelines","title":"GitHub Rules of Engagement","text":"","category":"section"},{"location":"90-contributing/","page":"Contributing Guidelines","title":"Contributing Guidelines","text":"If you want to discuss something that isn't immediately actionable, post under Discussions. Convert it to an issue once it's actionable.\nAll PR's should have an associated issue (unless it's a very minor fix).\nAll issues should have 1 Type and 1+ Zone labels (unless Type: epic).\nAssign yourself to issues you want to address. 
Consider if you will be able to work on them in the near future (this week) — if not, leave them available for someone else.\nSet the issue Status to \"In Progress\" when you have started working on it.\nWhen finalizing a pull request, set the Status to \"Ready for Review.\" If someone specific needs to review it, assign them as the reviewer (otherwise anyone can review).\nIssues addressed by merged PRs will automatically move to Done.\nIf you want to discuss an issue at the next group meeting (or just get some attention), mark it with the \"question\" label.\nIssues without updates for 60 days (and PRs without updates in 30 days) will be labelled as \"stale\" and filtered out of view. There is a Stale project board to view and revive these.","category":"page"},{"location":"90-contributing/#Contributing-Workflow","page":"Contributing Guidelines","title":"Contributing Workflow","text":"","category":"section"},{"location":"90-contributing/","page":"Contributing Guidelines","title":"Contributing Guidelines","text":"Fork → Branch → Code → Push → Pull → Squash & Merge","category":"page"},{"location":"90-contributing/","page":"Contributing Guidelines","title":"Contributing Guidelines","text":"Fork the repository\nCreate a new branch (in your fork)\nDo fantastic coding\nPush to your fork\nCreate a pull request from your fork to the main repository\n(After review) Squash and merge","category":"page"},{"location":"90-contributing/","page":"Contributing Guidelines","title":"Contributing Guidelines","text":"For a step-by-step guide to these steps, see our Developer Documentation.","category":"page"},{"location":"90-contributing/","page":"Contributing Guidelines","title":"Contributing Guidelines","text":"We use this workflow in our quest to achieve the Utopic Git History.","category":"page"},{"location":"40-formulation/#formulation","page":"Mathematical Formulation","title":"Mathematical Formulation","text":"","category":"section"},{"location":"40-formulation/","page":"Mathematical Formulation","title":"Mathematical Formulation","text":"This section shows the mathematical formulation of TulipaEnergyModel.jl, assuming that the temporal definition of timesteps is the same for all the elements in the model (e.g., hourly). 
The concepts section shows how the model handles the flexible temporal resolution of assets and flows in the model.","category":"page"},{"location":"40-formulation/","page":"Mathematical Formulation","title":"Mathematical Formulation","text":"Pages = [\"40-formulation.md\"]\nDepth = 3","category":"page"},{"location":"40-formulation/#math-sets","page":"Mathematical Formulation","title":"Sets","text":"","category":"section"},{"location":"40-formulation/#Sets-for-Assets","page":"Mathematical Formulation","title":"Sets for Assets","text":"","category":"section"},{"location":"40-formulation/","page":"Mathematical Formulation","title":"Mathematical Formulation","text":"Name Description Elements Superset Notes\nmathcalA Energy assets a in mathcalA The Energy asset types (i.e., consumer, producer, storage, hub, and conversion) are mutually exclusive\nmathcalA^textc Consumer energy assets mathcalA^textc subseteq mathcalA \nmathcalA^textp Producer energy assets mathcalA^textp subseteq mathcalA \nmathcalA^texts Storage energy assets mathcalA^texts subseteq mathcalA \nmathcalA^texth Hub energy assets (e.g., transshipment) mathcalA^texth subseteq mathcalA \nmathcalA^textcv Conversion energy assets mathcalA^textcv subseteq mathcalA ","category":"page"},{"location":"40-formulation/","page":"Mathematical Formulation","title":"Mathematical Formulation","text":"In addition, the following asset sets represent methods for incorporating additional variables and constraints in the model.","category":"page"},{"location":"40-formulation/","page":"Mathematical Formulation","title":"Mathematical Formulation","text":"Name Description Elements Superset Notes\nmathcalA^texti Energy assets with investment method mathcalA^texti subseteq mathcalA \nmathcalA^textss Energy assets with seasonal method mathcalA^textss subseteq mathcalA This set contains assets that use the seasonal method method. Please visit the how-to sections for seasonal storage and maximum/minimum outgoing energy limit to learn how to set up this feature.\nmathcalA^textse Storage energy assets with energy method mathcalA^textse subseteq mathcalA^texts This set contains storage assets that use investment energy method. Please visit the how-to section to learn how to set up this feature.\nmathcalA^textsb Storage energy assets with binary method mathcalA^textsb subseteq mathcalA^texts setminus mathcalA^textss This set contains storage assets that use an extra binary variable to avoid charging and discharging simultaneously. Please visit the how-to section to learn how to set up this feature.\nmathcalA^textmax e Energy assets with maximum outgoing energy method mathcalA^textmax e subseteq mathcalA This set contains assets that use the maximum outgoing energy method. Please visit the how-to section to learn how to set up this feature.\nmathcalA^textmin e Energy assets with minimum outgoing energy method mathcalA^textmin e subseteq mathcalA This set contains assets that use the minimum outgoing energy method. Please visit the how-to section to learn how to set up this feature.\nmathcalA^textuc Energy assets with unit commitment method mathcalA^textuc subseteq mathcalA^textcv cup mathcalA^textp This set contains conversion and production assets that have a unit commitment method. Please visit the how-to section to learn how to set up this feature.\nmathcalA^textuc basic Energy assets with a basic unit commitment method mathcalA^textuc basic subseteq mathcalA^textuc This set contains the assets that have a basic unit commitment method. 
Please visit the how-to section to learn how to set up this feature.\nmathcalA^textramp Energy assets with ramping method mathcalA^textramp subseteq mathcalA^textcv cup mathcalA^textp This set contains conversion and production assets that have a ramping method. Please visit the how-to section to learn how to set up this feature.","category":"page"},{"location":"40-formulation/#Sets-for-Flows","page":"Mathematical Formulation","title":"Sets for Flows","text":"","category":"section"},{"location":"40-formulation/","page":"Mathematical Formulation","title":"Mathematical Formulation","text":"Name Description Elements Superset Notes\nmathcalF Flow connections between two assets f in mathcalF \nmathcalF^textin_a Set of flows going into asset a mathcalF^textin_a subseteq mathcalF \nmathcalF^textout_a Set of flows going out of asset a mathcalF^textout_a subseteq mathcalF ","category":"page"},{"location":"40-formulation/","page":"Mathematical Formulation","title":"Mathematical Formulation","text":"In addition, the following flow sets represent methods for incorporating additional variables and constraints in the model.","category":"page"},{"location":"40-formulation/","page":"Mathematical Formulation","title":"Mathematical Formulation","text":"Name Description Elements Superset Notes\nmathcalF^textt Flow between two assets with a transport method mathcalF^textt subseteq mathcalF \nmathcalF^textti Transport flow with investment method mathcalF^textti subseteq mathcalF^textt ","category":"page"},{"location":"40-formulation/#Sets-for-Temporal-Structures","page":"Mathematical Formulation","title":"Sets for Temporal Structures","text":"","category":"section"},{"location":"40-formulation/","page":"Mathematical Formulation","title":"Mathematical Formulation","text":"Name Description Elements Superset Notes\nmathcalP Periods in the timeframe p in mathcalP mathcalP subset mathbbN \nmathcalK Representative periods (rp) k in mathcalK mathcalK subset mathbbN mathcalK does not have to be a subset of mathcalP\nmathcalB_k Timesteps blocks within a representative period k b_k in mathcalB_k mathcalB_k is a partition of timesteps in a representative period k","category":"page"},{"location":"40-formulation/#Sets-for-Groups","page":"Mathematical Formulation","title":"Sets for Groups","text":"","category":"section"},{"location":"40-formulation/","page":"Mathematical Formulation","title":"Mathematical Formulation","text":"Name Description Elements Superset Notes\nmathcalG^texta Groups of energy assets g in mathcalG^texta ","category":"page"},{"location":"40-formulation/","page":"Mathematical Formulation","title":"Mathematical Formulation","text":"In addition, the following subsets represent methods for incorporating additional constraints in the model.","category":"page"},{"location":"40-formulation/","page":"Mathematical Formulation","title":"Mathematical Formulation","text":"Name Description Elements Superset Notes\nmathcalG^textai Group of assets that share min/max investment limit mathcalG^textai subseteq mathcalG^texta This set contains assets that have a group investment limit. 
Please visit the how-to section to learn how to set up this feature.","category":"page"},{"location":"40-formulation/#math-parameters","page":"Mathematical Formulation","title":"Parameters","text":"","category":"section"},{"location":"40-formulation/#Parameters-for-Assets","page":"Mathematical Formulation","title":"Parameters for Assets","text":"","category":"section"},{"location":"40-formulation/#General-Parameters-for-Assets","page":"Mathematical Formulation","title":"General Parameters for Assets","text":"","category":"section"},{"location":"40-formulation/","page":"Mathematical Formulation","title":"Mathematical Formulation","text":"Name Domain Domains of Indices Description Units\np^textinv cost_a mathbbR_+ a in mathcalA Investment cost of a unit of asset a [kEUR/MW/year]\np^textinv limit_a mathbbR_+ a in mathcalA Investment potential of asset a [MW]\np^textcapacity_a mathbbR_+ a in mathcalA Capacity per unit of asset a [MW]\np^textinit units_a mathbbZ_+ a in mathcalA Initial number of units of asset a [units]\np^textavailability profile_akb_k mathbbR_+ a in mathcalA, k in mathcalK, b_k in mathcalB_k Availability profile of asset a in the representative period k and timestep block b_k [p.u.]\np^textgroup_a mathcalG^texta a in mathcalA Group g to which the asset a belongs [-]","category":"page"},{"location":"40-formulation/#Extra-Parameters-for-Consumer-Assets","page":"Mathematical Formulation","title":"Extra Parameters for Consumer Assets","text":"","category":"section"},{"location":"40-formulation/","page":"Mathematical Formulation","title":"Mathematical Formulation","text":"Name Domain Domains of Indices Description Units\np^textpeak demand_a mathbbR_+ a in mathcalA^textc Peak demand of consumer asset a [MW]\np^textdemand profile_akb_k mathbbR_+ a in mathcalA^textc, k in mathcalK, b_k in mathcalB_k Demand profile of consumer asset a in the representative period k and timestep block b_k [p.u.]","category":"page"},{"location":"40-formulation/#Extra-Parameters-for-Storage-Assets","page":"Mathematical Formulation","title":"Extra Parameters for Storage Assets","text":"","category":"section"},{"location":"40-formulation/","page":"Mathematical Formulation","title":"Mathematical Formulation","text":"Name Domain Domains of Indices Description Units\np^textinit storage capacity_a mathbbR_+ a in mathcalA^texts Initial storage capacity of storage asset a [MWh]\np^textinit storage level_a mathbbR_+ a in mathcalA^texts Initial storage level of storage asset a [MWh]\np^textinflows_akb_k mathbbR_+ a in mathcalA^texts, k in mathcalK, b_k in mathcalB_k Inflows of storage asset a in the representative period k and timestep block b_k [MWh]\np^textinv cost energy_a mathbbR_+ a in mathcalA^textse Investment cost of a energy unit of asset a [kEUR/MWh/year]\np^textinv limit energy_a mathbbR_+ a in mathcalA^textse Investment energy potential of asset a [MWh]\np^textenergy capacity_a mathbbR_+ a in mathcalA^textse Energy capacity of a unit of investment of the asset a [MWh]\np^textenergy to power ratio_a mathbbR_+ a in mathcalA^texts setminus mathcalA^textse Energy to power ratio of storage asset a [h]\np^textmax intra level_akb_k mathbbR_+ a in mathcalA^texts setminus mathcalA^textss, k in mathcalK, b_k in mathcalB_k Maximum intra-storage level profile of storage asset a in representative period k and timestep block b_k [p.u.]\np^textmin intra level_akb_k mathbbR_+ a in mathcalA^texts setminus mathcalA^textss, k in mathcalK, b_k in mathcalB_k Minimum intra-storage level profile of storage asset a in 
representative period k and timestep block b_k [p.u.]\np^textmax inter level_ap mathbbR_+ a in mathcalA^textss, p in mathcalP Maximum inter-storage level profile of storage asset a in the period p of the timeframe [p.u.]\np^textmin inter level_ap mathbbR_+ a in mathcalA^textss, p in mathcalP Minimum inter-storage level profile of storage asset a in the period p of the timeframe [p.u.]","category":"page"},{"location":"40-formulation/#Extra-Parameters-for-Energy-Constraints","page":"Mathematical Formulation","title":"Extra Parameters for Energy Constraints","text":"","category":"section"},{"location":"40-formulation/","page":"Mathematical Formulation","title":"Mathematical Formulation","text":"Name Domain Domains of Indices Description Units\np^textmin inter profile_ap mathbbR_+ a in mathcalA^textmin e, p in mathcalP Minimum outgoing inter-temporal energy profile of asset a in the period p of the timeframe [p.u.]\np^textmax inter profile_ap mathbbR_+ a in mathcalA^textmax e, p in mathcalP Maximum outgoing inter-temporal energy profile of asset a in the period p of the timeframe [p.u.]\np^textmax energy_ap mathbbR_+ a in mathcalA^textmax e Maximum outgoing inter-temporal energy value of asset a [MWh]\np^textmin energy_ap mathbbR_+ a in mathcalA^textmin e Minimum outgoing inter-temporal energy value of asset a [MWh]","category":"page"},{"location":"40-formulation/#Extra-Parameters-for-Producers-and-Conversion-Assets","page":"Mathematical Formulation","title":"Extra Parameters for Producers and Conversion Assets","text":"","category":"section"},{"location":"40-formulation/","page":"Mathematical Formulation","title":"Mathematical Formulation","text":"Name Domain Domains of Indices Description Units\np^textmin operating point_a mathbbR_+ a in mathcalA^textuc Minimum operating point or minimum stable generation level defined as a portion of the capacity of asset a [p.u.]\np^textunits on cost_a mathbbR_+ a in mathcalA^textuc Objective function coefficient on units_on variable. 
E.g., no-load cost or idling cost of asset a [kEUR/h/units]\np^textmax ramp up_a mathbbR_+ a in mathcalA^textramp Maximum ramping up rate as a portion of the capacity of asset a [p.u./h]\np^textmax ramp down_a mathbbR_+ a in mathcalA^textramp Maximum ramping down rate as a portion of the capacity of asset a [p.u./h]","category":"page"},{"location":"40-formulation/#Parameters-for-Flows","page":"Mathematical Formulation","title":"Parameters for Flows","text":"","category":"section"},{"location":"40-formulation/","page":"Mathematical Formulation","title":"Mathematical Formulation","text":"Name Domain Domains of Indices Description Units\np^textvariable cost_f mathbbR_+ f in mathcalF Variable cost of flow f [kEUR/MWh]\np^texteff_f mathbbR_+ f in mathcalF Efficiency of flow f [p.u.]\np^textinv cost_f mathbbR_+ f in mathcalF^textt Investment cost of transport flow f [kEUR/MW/year]\np^textinv limit_f mathbbR_+ f in mathcalF^textt Investment potential of flow f [MW]\np^textcapacity_f mathbbR_+ f in mathcalF^textt Capacity per unit of investment of transport flow f (both exports and imports) [MW]\np^textinit export capacity_f mathbbR_+ f in mathcalF^textt Initial export capacity of transport flow f [MW]\np^textinit import capacity_f mathbbR_+ f in mathcalF^textt Initial import capacity of transport flow f [MW]\np^textavailability profile_fkb_k mathbbR_+ f in mathcalF, k in mathcalK, b_k in mathcalB_k Availability profile of flow f in the representative period k and timestep block b_k [p.u.]","category":"page"},{"location":"40-formulation/#Parameters-for-Temporal-Structures","page":"Mathematical Formulation","title":"Parameters for Temporal Structures","text":"","category":"section"},{"location":"40-formulation/","page":"Mathematical Formulation","title":"Mathematical Formulation","text":"Name Domain Domains of Indices Description Units\np^textduration_b_k mathbbR_+ b_k in mathcalB_k Duration of the timestep blocks b_k [h]\np^textrp weight_k mathbbR_+ k in mathcalK Weight of representative period k [-]\np^textmap_pk mathbbR_+ p in mathcalP, k in mathcalK Map with the weight of representative period k in period p [-]","category":"page"},{"location":"40-formulation/#Parameters-for-Groups","page":"Mathematical Formulation","title":"Parameters for Groups","text":"","category":"section"},{"location":"40-formulation/","page":"Mathematical Formulation","title":"Mathematical Formulation","text":"Name Domain Domains of Indices Description Units\np^textmin invest limit_g mathbbR_+ g in mathcalG^textai Minimum investment limit (potential) of group g [MW]\np^textmax invest limit_g mathbbR_+ g in mathcalG^textai Maximum investment limit (potential) of group g [MW]","category":"page"},{"location":"40-formulation/#math-variables","page":"Mathematical Formulation","title":"Variables","text":"","category":"section"},{"location":"40-formulation/","page":"Mathematical Formulation","title":"Mathematical Formulation","text":"Name Domain Domains of Indices Description Units\nv^textflow_fkb_k mathbbR f in mathcalF, k in mathcalK, b_k in mathcalB_k Flow f between two assets in representative period k and timestep block b_k [MW]\nv^textinv_a mathbbZ_+ a in mathcalA^texti Number of invested units of asset a [units]\nv^textinv energy_a mathbbZ_+ a in mathcalA^texti cap mathcalA^textse Number of invested units of the energy component of the storage asset a that uses the energy method [units]\nv^textinv_f mathbbZ_+ f in mathcalF^textti Number of invested units of capacity increment of transport flow f 
[units]\nv^textintra-storage_akb_k mathbbR_+ a in mathcalA^texts setminus mathcalA^textss, k in mathcalK, b_k in mathcalB_k Intra-storage level (within a representative period) for storage asset a, representative period k, and timestep block b_k [MWh]\nv^textinter-storage_ap mathbbR_+ a in mathcalA^textss, p in mathcalP Inter-storage level (between representative periods) for storage asset a and period p [MWh]\nv^textis charging_akb_k 0 1 a in mathcalA^textsb, k in mathcalK, b_k in mathcalB_k Whether storage asset a is charging in representative period k and timestep block b_k [-]\nv^textunits on_akb_k mathbbZ_+ a in mathcalA^textuc, k in mathcalK, b_k in mathcalB_k Number of units ON of asset a in representative period k and timestep block b_k [units]","category":"page"},{"location":"40-formulation/#math-objective-function","page":"Mathematical Formulation","title":"Objective Function","text":"","category":"section"},{"location":"40-formulation/","page":"Mathematical Formulation","title":"Mathematical Formulation","text":"Objective function:","category":"page"},{"location":"40-formulation/","page":"Mathematical Formulation","title":"Mathematical Formulation","text":"beginaligned\ntextminimize quad assets_investment_cost + flows_investment_cost \n + flows_variable_cost + unit_on_cost\nendaligned","category":"page"},{"location":"40-formulation/","page":"Mathematical Formulation","title":"Mathematical Formulation","text":"Where:","category":"page"},{"location":"40-formulation/","page":"Mathematical Formulation","title":"Mathematical Formulation","text":"beginaligned\nassets_investment_cost = sum_a in mathcalA^texti p^textinv cost_a cdot p^textcapacity_a cdot v^textinv_a + sum_a in mathcalA^textse cap mathcalA^texti p^textinv cost energy_a cdot p^textenergy capacity_a cdot v^textinv energy_a \nflows_investment_cost = sum_f in mathcalF^textti p^textinv cost_f cdot p^textcapacity_f cdot v^textinv_f \nflows_variable_cost = sum_f in mathcalF sum_k in mathcalK sum_b_k in mathcalB_k p^textrp weight_k cdot p^textvariable cost_f cdot p^textduration_b_k cdot v^textflow_fkb_k \nunit_on_cost = sum_a in mathcalA^textuc sum_k in mathcalK sum_b_k in mathcalB_k p^textrp weight_k cdot p^textunits on cost_a cdot p^textduration_b_k cdot v^textunits on_akb_k\nendaligned","category":"page"},{"location":"40-formulation/#math-constraints","page":"Mathematical Formulation","title":"Constraints","text":"","category":"section"},{"location":"40-formulation/#cap-constraints","page":"Mathematical Formulation","title":"Capacity Constraints","text":"","category":"section"},{"location":"40-formulation/#Maximum-Output-Flows-Limit","page":"Mathematical Formulation","title":"Maximum Output Flows Limit","text":"","category":"section"},{"location":"40-formulation/","page":"Mathematical Formulation","title":"Mathematical Formulation","text":"beginaligned\nsum_f in mathcalF^textout_a v^textflow_fkb_k leq p^textavailability profile_akb_k cdot p^textcapacity_a cdot left(p^textinit units_a + v^textinv_a right) quad\n forall a in mathcalA^textcv cup left(mathcalA^texts setminus mathcalA^textsb right) cup mathcalA^textp forall k in mathcalKforall b_k in mathcalB_k\nendaligned","category":"page"},{"location":"40-formulation/","page":"Mathematical Formulation","title":"Mathematical Formulation","text":"Storage assets using the method to avoid charging and discharging simultaneously, i.e., a in mathcalA^textsb, use the following constraints instead of the previous 
one:","category":"page"},{"location":"40-formulation/","page":"Mathematical Formulation","title":"Mathematical Formulation","text":"beginaligned\nsum_f in mathcalF^textout_a v^textflow_fkb_k leq p^textavailability profile_akb_k cdot left(p^textcapacity_a cdot p^textinit units_a + p^textinv limit_a right) cdot left(1 - v^textis charging_akb_k right) quad\n forall a in mathcalA^textsb forall k in mathcalKforall b_k in mathcalB_k\nendaligned","category":"page"},{"location":"40-formulation/","page":"Mathematical Formulation","title":"Mathematical Formulation","text":"beginaligned\nsum_f in mathcalF^textout_a v^textflow_fkb_k leq p^textavailability profile_akb_k cdot p^textcapacity_a cdot left(p^textinit units_a cdot left(1 - v^textis charging_akb_k right) + v^textinv_a right) quad\n forall a in mathcalA^textsb forall k in mathcalKforall b_k in mathcalB_k\nendaligned","category":"page"},{"location":"40-formulation/#Maximum-Input-Flows-Limit","page":"Mathematical Formulation","title":"Maximum Input Flows Limit","text":"","category":"section"},{"location":"40-formulation/","page":"Mathematical Formulation","title":"Mathematical Formulation","text":"beginaligned\nsum_f in mathcalF^textin_a v^textflow_fkb_k leq p^textavailability profile_akb_k cdot p^textcapacity_a cdot left(p^textinit units_a + v^textinv_a right) quad\n forall a in mathcalA^texts setminus mathcalA^textsb forall k in mathcalKforall b_k in mathcalB_k\nendaligned","category":"page"},{"location":"40-formulation/","page":"Mathematical Formulation","title":"Mathematical Formulation","text":"Storage assets using the method to avoid charging and discharging simultaneously, i.e., a in mathcalA^textsb, use the following constraints instead of the previous one:","category":"page"},{"location":"40-formulation/","page":"Mathematical Formulation","title":"Mathematical Formulation","text":"beginaligned\nsum_f in mathcalF^textin_a v^textflow_fkb_k leq p^textavailability profile_akb_k cdot left(p^textcapacity_a cdot p^textinit units_a + p^textinv limit_a right) cdot v^textis charging_akb_k quad forall a in mathcalA^textsb forall k in mathcalKforall b_k in mathcalB_k\nendaligned","category":"page"},{"location":"40-formulation/","page":"Mathematical Formulation","title":"Mathematical Formulation","text":"beginaligned\nsum_f in mathcalF^textin_a v^textflow_fkb_k leq p^textavailability profile_akb_k cdot p^textcapacity_a cdot left(p^textinit units_a cdot v^textis charging_akb_k + v^textinv_a right) quad forall a in mathcalA^textsb forall k in mathcalKforall b_k in mathcalB_k\nendaligned","category":"page"},{"location":"40-formulation/#Lower-Limit-for-Flows-that-are-Not-Transport-Assets","page":"Mathematical Formulation","title":"Lower Limit for Flows that are Not Transport Assets","text":"","category":"section"},{"location":"40-formulation/","page":"Mathematical Formulation","title":"Mathematical Formulation","text":"v^textflow_fkb_k geq 0 quad forall f notin mathcalF^textt forall k in mathcalK forall b_k in mathcalB_k","category":"page"},{"location":"40-formulation/#uc-constraints","page":"Mathematical Formulation","title":"Unit Commitment Constraints","text":"","category":"section"},{"location":"40-formulation/","page":"Mathematical Formulation","title":"Mathematical Formulation","text":"Production and conversion assets within the set mathcalA^textuc will contain the unit commitment constraints in the model. These constraints are based on the work of Morales-España et al. (2013) and Morales-España et al. 
(2014).","category":"page"},{"location":"40-formulation/","page":"Mathematical Formulation","title":"Mathematical Formulation","text":"The current version of the code only incorporates a basic unit commitment version of the constraints (i.e., utilizing only the unit commitment variable v^textunits on). However, upcoming versions will include more detailed constraints, incorporating startup and shutdown variables.","category":"page"},{"location":"40-formulation/","page":"Mathematical Formulation","title":"Mathematical Formulation","text":"For the unit commitment constraints, we define the following expression for the flow that is above the minimum operating point of the asset:","category":"page"},{"location":"40-formulation/","page":"Mathematical Formulation","title":"Mathematical Formulation","text":"e^textflow above min_akb_k = sum_f in mathcalF^textout_a v^textflow_fkb_k - p^textavailability profile_akb_k cdot p^textcapacity_a cdot p^textmin operating point_a cdot v^texton_akb_k quad\n forall a in mathcalA^textuc forall k in mathcalKforall b_k in mathcalB_k","category":"page"},{"location":"40-formulation/#Limit-to-the-Units-On-Variable","page":"Mathematical Formulation","title":"Limit to the Units On Variable","text":"","category":"section"},{"location":"40-formulation/","page":"Mathematical Formulation","title":"Mathematical Formulation","text":"v^texton_akb_k leq p^textinit units_a + v^textinv_a quad\n forall a in mathcalA^textuc forall k in mathcalKforall b_k in mathcalB_k","category":"page"},{"location":"40-formulation/#Maximum-Output-Flow-Above-the-Minimum-Operating-Point","page":"Mathematical Formulation","title":"Maximum Output Flow Above the Minimum Operating Point","text":"","category":"section"},{"location":"40-formulation/","page":"Mathematical Formulation","title":"Mathematical Formulation","text":"e^textflow above min_akb_k leq p^textavailability profile_akb_k cdot p^textcapacity_a cdot left(1 - p^textmin operating point_a right) cdot v^texton_akb_k quad\n forall a in mathcalA^textuc basic forall k in mathcalKforall b_k in mathcalB_k","category":"page"},{"location":"40-formulation/#Minimum-Output-Flow-Above-the-Minimum-Operating-Point","page":"Mathematical Formulation","title":"Minimum Output Flow Above the Minimum Operating Point","text":"","category":"section"},{"location":"40-formulation/","page":"Mathematical Formulation","title":"Mathematical Formulation","text":"e^textflow above min_akb_k geq 0 quad\n forall a in mathcalA^textuc basic forall k in mathcalKforall b_k in mathcalB_k","category":"page"},{"location":"40-formulation/#ramp-constraints","page":"Mathematical Formulation","title":"Ramping Constraints","text":"","category":"section"},{"location":"40-formulation/","page":"Mathematical Formulation","title":"Mathematical Formulation","text":"Ramping constraints restrict the rate at which the output flow of a production or conversion asset can change. If the asset is part of the unit commitment set (e.g., mathcalA^textuc), the ramping limits apply to the flow above the minimum output, but if it is not, the ramping limits apply to the total output flow.","category":"page"},{"location":"40-formulation/","page":"Mathematical Formulation","title":"Mathematical Formulation","text":"Ramping constraints that take into account unit commitment variables are based on the work done by Damcı-Kurt et. al (2016). 
Also, please note that since the current version of the code only handles the basic unit commitment implementation, the ramping constraints are applied to the assets in the set mathcalA^textuc basic.","category":"page"},{"location":"40-formulation/","page":"Mathematical Formulation","title":"Mathematical Formulation","text":"Duration parameter: The following constraints are multiplied by p^textduration_b_k on the right-hand side to adjust for the duration of the timesteps since the ramp parameters are defined as rates. This assumption is based on the idea that all timesteps are the same in this section, which simplifies the formulation. However, in a flexible temporal resolution context, this may not hold true, and the duration needs to be the minimum duration of all the outgoing flows at the timestep block b_k. For more information, please visit the concept section on flexible time resolution.","category":"page"},{"location":"40-formulation/#Maximum-Ramp-Up-Rate-Limit-WITH-Unit-Commitment-Method","page":"Mathematical Formulation","title":"Maximum Ramp-Up Rate Limit WITH Unit Commitment Method","text":"","category":"section"},{"location":"40-formulation/","page":"Mathematical Formulation","title":"Mathematical Formulation","text":"e^textflow above min_akb_k - e^textflow above min_akb_k-1 leq p^textavailability profile_akb_k cdot p^textcapacity_a cdot p^textmax ramp up_a cdot p^textduration_b_k cdot v^texton_akb_k quad\n forall a in left(mathcalA^textramp cap mathcalA^textuc basic right) forall k in mathcalKforall b_k in mathcalB_k","category":"page"},{"location":"40-formulation/#Maximum-Ramp-Down-Rate-Limit-WITH-Unit-Commmitment-Method","page":"Mathematical Formulation","title":"Maximum Ramp-Down Rate Limit WITH Unit Commmitment Method","text":"","category":"section"},{"location":"40-formulation/","page":"Mathematical Formulation","title":"Mathematical Formulation","text":"e^textflow above min_akb_k - e^textflow above min_akb_k-1 geq - p^textavailability profile_akb_k cdot p^textcapacity_a cdot p^textmax ramp down_a cdot p^textduration_b_k cdot v^texton_akb_k-1 quad\n forall a in left(mathcalA^textramp cap mathcalA^textuc basic right) forall k in mathcalKforall b_k in mathcalB_k","category":"page"},{"location":"40-formulation/#Maximum-Ramp-Up-Rate-Limit-WITHOUT-Unit-Commitment-Method","page":"Mathematical Formulation","title":"Maximum Ramp-Up Rate Limit WITHOUT Unit Commitment Method","text":"","category":"section"},{"location":"40-formulation/","page":"Mathematical Formulation","title":"Mathematical Formulation","text":"sum_f in mathcalF^textout_a v^textflow_fkb_k - sum_f in mathcalF^textout_a v^textflow_fkb_k-1 leq p^textmax ramp up_a cdot p^textduration_b_k cdot p^textavailability profile_akb_k cdot p^textcapacity_a cdot left(p^textinit units_a + v^textinv_a right) quad\n forall a in left(mathcalA^textramp setminus mathcalA^textuc basic right) forall k in mathcalKforall b_k in mathcalB_k","category":"page"},{"location":"40-formulation/#Maximum-Ramp-Down-Rate-Limit-WITHOUT-Unit-Commitment-Method","page":"Mathematical Formulation","title":"Maximum Ramp-Down Rate Limit WITHOUT Unit Commitment Method","text":"","category":"section"},{"location":"40-formulation/","page":"Mathematical Formulation","title":"Mathematical Formulation","text":"sum_f in mathcalF^textout_a v^textflow_fkb_k - sum_f in mathcalF^textout_a v^textflow_fkb_k-1 geq - p^textmax ramp down_a cdot p^textduration_b_k cdot p^textavailability profile_akb_k cdot p^textcapacity_a cdot left(p^textinit units_a + v^textinv_a right) 
quad\n forall a in left(mathcalA^textramp setminus mathcalA^textuc basic right) forall k in mathcalKforall b_k in mathcalB_k","category":"page"},{"location":"40-formulation/#Constraints-for-Energy-Consumer-Assets","page":"Mathematical Formulation","title":"Constraints for Energy Consumer Assets","text":"","category":"section"},{"location":"40-formulation/#Balance-Constraint-for-Consumers","page":"Mathematical Formulation","title":"Balance Constraint for Consumers","text":"","category":"section"},{"location":"40-formulation/","page":"Mathematical Formulation","title":"Mathematical Formulation","text":"The balance constraint sense depends on the method selected in the asset file's parameter consumer_balance_sense. The default value is =, but the user can choose geq as an option.","category":"page"},{"location":"40-formulation/","page":"Mathematical Formulation","title":"Mathematical Formulation","text":"beginaligned\nsum_f in mathcalF^textin_a v^textflow_fkb_k - sum_f in mathcalF^textout_a v^textflow_fkb_k leftbeginarrayl = geq endarrayright p^textdemand profile_akb_k cdot p^textpeak demand_a quad forall a in mathcalA^textc forall k in mathcalKforall b_k in mathcalB_k\nendaligned","category":"page"},{"location":"40-formulation/#Constraints-for-Energy-Storage-Assets","page":"Mathematical Formulation","title":"Constraints for Energy Storage Assets","text":"","category":"section"},{"location":"40-formulation/","page":"Mathematical Formulation","title":"Mathematical Formulation","text":"There are two types of constraints for energy storage assets: intra-temporal and inter-temporal. Intra-temporal constraints impose limits inside a representative period, while inter-temporal constraints combine information from several representative periods (e.g., to model seasonal storage). For more information on this topic, refer to the concepts section or Tejada-Arango et al. (2018) and Tejada-Arango et al. (2019).","category":"page"},{"location":"40-formulation/","page":"Mathematical Formulation","title":"Mathematical Formulation","text":"In addition, we define the following expression to determine the energy investment limit of the storage assets. 
This expression takes two different forms depending on whether the storage asset belongs to the set mathcalA^textse or not.","category":"page"},{"location":"40-formulation/","page":"Mathematical Formulation","title":"Mathematical Formulation","text":"Investment energy method:","category":"page"},{"location":"40-formulation/","page":"Mathematical Formulation","title":"Mathematical Formulation","text":"e^textenergy inv limit_a = p^textenergy capacity_a cdot v^textinv energy_a quad forall a in mathcalA^texti cap mathcalA^textse","category":"page"},{"location":"40-formulation/","page":"Mathematical Formulation","title":"Mathematical Formulation","text":"Fixed energy-to-power ratio method:","category":"page"},{"location":"40-formulation/","page":"Mathematical Formulation","title":"Mathematical Formulation","text":"e^textenergy inv limit_a = p^textenergy to power ratio_a cdot p^textcapacity_a cdot v^textinv_a quad forall a in mathcalA^texti cap (mathcalA^texts setminus mathcalA^textse)","category":"page"},{"location":"40-formulation/#intra-storage-balance","page":"Mathematical Formulation","title":"Intra-temporal Constraint for Storage Balance","text":"","category":"section"},{"location":"40-formulation/","page":"Mathematical Formulation","title":"Mathematical Formulation","text":"beginaligned\nv^textintra-storage_akb_k = v^textintra-storage_akb_k-1 + p^textinflows_akb_k + sum_f in mathcalF^textin_a p^texteff_f cdot p^textduration_b_k cdot v^textflow_fkb_k - sum_f in mathcalF^textout_a frac1p^texteff_f cdot p^textduration_b_k cdot v^textflow_fkb_k quad\n forall a in mathcalA^texts setminus mathcalA^textss forall k in mathcalKforall b_k in mathcalB_k\nendaligned","category":"page"},{"location":"40-formulation/#Intra-temporal-Constraint-for-Maximum-Storage-Level-Limit","page":"Mathematical Formulation","title":"Intra-temporal Constraint for Maximum Storage Level Limit","text":"","category":"section"},{"location":"40-formulation/","page":"Mathematical Formulation","title":"Mathematical Formulation","text":"v^textintra-storage_akb_k leq p^textmax intra level_akb_k cdot (p^textinit storage capacity_a + e^textenergy inv limit_a) quad forall a in mathcalA^texts setminus mathcalA^textss forall k in mathcalKforall b_k in mathcalB_k","category":"page"},{"location":"40-formulation/#Intra-temporal-Constraint-for-Minimum-Storage-Level-Limit","page":"Mathematical Formulation","title":"Intra-temporal Constraint for Minimum Storage Level Limit","text":"","category":"section"},{"location":"40-formulation/","page":"Mathematical Formulation","title":"Mathematical Formulation","text":"v^textintra-storage_akb_k geq p^textmin intra level_akb_k cdot (p^textinit storage capacity_a + e^textenergy inv limit_a) quad forall a in mathcalA^texts setminus mathcalA^textss forall k in mathcalKforall b_k in mathcalB_k","category":"page"},{"location":"40-formulation/#Intra-temporal-Cycling-Constraint","page":"Mathematical Formulation","title":"Intra-temporal Cycling Constraint","text":"","category":"section"},{"location":"40-formulation/","page":"Mathematical Formulation","title":"Mathematical Formulation","text":"The cycling constraint for the intra-temporal constraints links the first timestep block (b^textfirst_k) and the last one (b^textlast_k) in each representative period. 
The parameter p^textinit storage level_a determines the considered equations in the model for this constraint:","category":"page"},{"location":"40-formulation/","page":"Mathematical Formulation","title":"Mathematical Formulation","text":"If parameter p^textinit storage level_a is not defined, the intra-storage level of the last timestep block (b^textlast_k) is used as the initial value for the first timestep block in the intra-temporal constraint for the storage balance.","category":"page"},{"location":"40-formulation/","page":"Mathematical Formulation","title":"Mathematical Formulation","text":"beginaligned\nv^textintra-storage_akb^textfirst_k = v^textintra-storage_akb^textlast_k + p^textinflows_akb^textfirst_k + sum_f in mathcalF^textin_a p^texteff_f cdot p^textduration_b_k cdot v^textflow_fkb^textfirst_k - sum_f in mathcalF^textout_a frac1p^texteff_f cdot p^textduration_b_k cdot v^textflow_fkb^textfirst_k quad\n forall a in mathcalA^texts setminus mathcalA^textss forall k in mathcalK\nendaligned","category":"page"},{"location":"40-formulation/","page":"Mathematical Formulation","title":"Mathematical Formulation","text":"If parameter p^textinit storage level_a is defined, we use it as the initial value for the first timestep block in the intra-temporal constraint for the storage balance. In addition, the intra-storage level of the last timestep block (b^textlast_k) in each representative period must be greater than this initial value.","category":"page"},{"location":"40-formulation/","page":"Mathematical Formulation","title":"Mathematical Formulation","text":"beginaligned\nv^textintra-storage_akb^textfirst_k = p^textinit storage level_a + p^textinflows_akb^textfirst_k + sum_f in mathcalF^textin_a p^texteff_f cdot p^textduration_b_k cdot v^textflow_fkb^textfirst_k - sum_f in mathcalF^textout_a frac1p^texteff_f cdot p^textduration_b_k cdot v^textflow_fkb^textfirst_k quad\n forall a in mathcalA^texts setminus mathcalA^textss forall k in mathcalK\nendaligned","category":"page"},{"location":"40-formulation/","page":"Mathematical Formulation","title":"Mathematical Formulation","text":"v^textintra-storage_akb^textfirst_k geq p^textinit storage level_a quad\n forall a in mathcalA^texts setminus mathcalA^textss forall k in mathcalK","category":"page"},{"location":"40-formulation/#inter-storage-balance","page":"Mathematical Formulation","title":"Inter-temporal Constraint for Storage Balance","text":"","category":"section"},{"location":"40-formulation/","page":"Mathematical Formulation","title":"Mathematical Formulation","text":"This constraint allows us to consider the storage seasonality throughout the model's timeframe (e.g., a year). The parameter p^textmap_pk determines how much of the representative period k is in the period p, and you can use a clustering technique to calculate it. 
For TulipaEnergyModel.jl, we recommend using TulipaClustering.jl to compute the clusters for the representative periods and their map.","category":"page"},{"location":"40-formulation/","page":"Mathematical Formulation","title":"Mathematical Formulation","text":"For the sake of simplicity, we show the constraint assuming the inter-storage level between two consecutive periods p; however, TulipaEnergyModel.jl can handle more flexible period block definition through the timeframe definition in the model using the information in the file assets-timeframe-partitions.csv.","category":"page"},{"location":"40-formulation/","page":"Mathematical Formulation","title":"Mathematical Formulation","text":"beginaligned\nv^textinter-storage_ap = v^textinter-storage_ap-1 + sum_k in mathcalK p^textmap_pk sum_b_k in mathcalB_K p^textinflows_akb_k \n + sum_f in mathcalF^textin_a p^texteff_f sum_k in mathcalK p^textmap_pk sum_b_k in mathcalB_K p^textduration_b_k cdot v^textflow_fkb_k \n - sum_f in mathcalF^textout_a frac1p^texteff_f sum_k in mathcalK p^textmap_pk sum_b_k in mathcalB_K p^textduration_b_k cdot v^textflow_fkb_k\n forall a in mathcalA^textss forall p in mathcalP\nendaligned","category":"page"},{"location":"40-formulation/#Inter-temporal-Constraint-for-Maximum-Storage-Level-Limit","page":"Mathematical Formulation","title":"Inter-temporal Constraint for Maximum Storage Level Limit","text":"","category":"section"},{"location":"40-formulation/","page":"Mathematical Formulation","title":"Mathematical Formulation","text":"v^textinter-storage_ap leq p^textmax inter level_ap cdot (p^textinit storage capacity_a + e^textenergy inv limit_a) quad forall a in mathcalA^textss forall p in mathcalP","category":"page"},{"location":"40-formulation/#Inter-temporal-Constraint-for-Minimum-Storage-Level-Limit","page":"Mathematical Formulation","title":"Inter-temporal Constraint for Minimum Storage Level Limit","text":"","category":"section"},{"location":"40-formulation/","page":"Mathematical Formulation","title":"Mathematical Formulation","text":"v^textinter-storage_ap geq p^textmin inter level_ap cdot (p^textinit storage capacity_a + e^textenergy inv limit_a) quad forall a in mathcalA^textss forall p in mathcalP","category":"page"},{"location":"40-formulation/#Inter-temporal-Cycling-Constraint","page":"Mathematical Formulation","title":"Inter-temporal Cycling Constraint","text":"","category":"section"},{"location":"40-formulation/","page":"Mathematical Formulation","title":"Mathematical Formulation","text":"The cycling constraint for the inter-temporal constraints links the first-period block (p^textfirst) and the last one (p^textlast) in the timeframe. 
The parameter p^textinit storage level_a determines the considered equations in the model for this constraint:","category":"page"},{"location":"40-formulation/","page":"Mathematical Formulation","title":"Mathematical Formulation","text":"If parameter p^textinit storage level_a is not defined, the inter-storage level of the last period block (p^textlast) is used as the initial value for the first-period block in the inter-temporal constraint for the storage balance.","category":"page"},{"location":"40-formulation/","page":"Mathematical Formulation","title":"Mathematical Formulation","text":"beginaligned\nv^textinter-storage_ap^textfirst = v^textinter-storage_ap^textlast + sum_k in mathcalK p^textmap_p^textfirstk sum_b_k in mathcalB_K p^textinflows_akb_k \n + sum_f in mathcalF^textin_a p^texteff_f sum_k in mathcalK p^textmap_p^textfirstk sum_b_k in mathcalB_K p^textduration_b_k cdot v^textflow_fkb_k \n - sum_f in mathcalF^textout_a frac1p^texteff_f sum_k in mathcalK p^textmap_p^textfirstk sum_b_k in mathcalB_K p^textduration_b_k cdot v^textflow_fkb_k\n forall a in mathcalA^textss\nendaligned","category":"page"},{"location":"40-formulation/","page":"Mathematical Formulation","title":"Mathematical Formulation","text":"If parameter p^textinit storage level_a is defined, we use it as the initial value for the first-period block in the inter-temporal constraint for the storage balance. In addition, the inter-storage level of the last period block (p^textlast) in the timeframe must be greater than this initial value.","category":"page"},{"location":"40-formulation/","page":"Mathematical Formulation","title":"Mathematical Formulation","text":"beginaligned\nv^textinter-storage_ap^textfirst = p^textinit storage level_a + sum_k in mathcalK p^textmap_p^textfirstk sum_b_k in mathcalB_K p^textinflows_akb_k \n + sum_f in mathcalF^textin_a p^texteff_f sum_k in mathcalK p^textmap_p^textfirstk sum_b_k in mathcalB_K p^textduration_b_k cdot v^textflow_fkb_k \n - sum_f in mathcalF^textout_a frac1p^texteff_f sum_k in mathcalK p^textmap_p^textfirstk sum_b_k in mathcalB_K p^textduration_b_k cdot v^textflow_fkb_k\n forall a in mathcalA^textss\nendaligned","category":"page"},{"location":"40-formulation/","page":"Mathematical Formulation","title":"Mathematical Formulation","text":"v^textinter-storage_ap^textlast geq p^textinit storage level_a quad\n forall a in mathcalA^textss","category":"page"},{"location":"40-formulation/#Constraints-for-Energy-Hub-Assets","page":"Mathematical Formulation","title":"Constraints for Energy Hub Assets","text":"","category":"section"},{"location":"40-formulation/#Balance-Constraint-for-Hubs","page":"Mathematical Formulation","title":"Balance Constraint for Hubs","text":"","category":"section"},{"location":"40-formulation/","page":"Mathematical Formulation","title":"Mathematical Formulation","text":"beginaligned\nsum_f in mathcalF^textin_a v^textflow_fkb_k = sum_f in mathcalF^textout_a v^textflow_fkb_k quad forall a in mathcalA^texth forall k in mathcalKforall b_k in mathcalB_k\nendaligned","category":"page"},{"location":"40-formulation/#Constraints-for-Energy-Conversion-Assets","page":"Mathematical Formulation","title":"Constraints for Energy Conversion Assets","text":"","category":"section"},{"location":"40-formulation/#Balance-Constraint-for-Conversion-Assets","page":"Mathematical Formulation","title":"Balance Constraint for Conversion Assets","text":"","category":"section"},{"location":"40-formulation/","page":"Mathematical Formulation","title":"Mathematical 
Formulation","text":"beginaligned\nsum_f in mathcalF^textin_a p^texteff_f cdot v^textflow_fkb_k = sum_f in mathcalF^textout_a fracv^textflow_fkb_kp^texteff_f quad forall a in mathcalA^textcv forall k in mathcalKforall b_k in mathcalB_k\nendaligned","category":"page"},{"location":"40-formulation/#Constraints-for-Transport-Assets","page":"Mathematical Formulation","title":"Constraints for Transport Assets","text":"","category":"section"},{"location":"40-formulation/#Maximum-Transport-Flow-Limit","page":"Mathematical Formulation","title":"Maximum Transport Flow Limit","text":"","category":"section"},{"location":"40-formulation/","page":"Mathematical Formulation","title":"Mathematical Formulation","text":"beginaligned\nv^textflow_fkb_k leq p^textavailability profile_fkb_k cdot left(p^textinit export capacity_f + p^textcapacity_f cdot v^textinv_f right) quad forall f in mathcalF^textt forall k in mathcalKforall b_k in mathcalB_k\nendaligned","category":"page"},{"location":"40-formulation/#Minimum-Transport-Flow-Limit","page":"Mathematical Formulation","title":"Minimum Transport Flow Limit","text":"","category":"section"},{"location":"40-formulation/","page":"Mathematical Formulation","title":"Mathematical Formulation","text":"beginaligned\nv^textflow_fkb_k geq - p^textavailability profile_fkb_k cdot left(p^textinit import capacity_f + p^textcapacity_f cdot v^textinv_f right) quad forall f in mathcalF^textt forall k in mathcalKforall b_k in mathcalB_k\nendaligned","category":"page"},{"location":"40-formulation/#Constraints-for-Investments","page":"Mathematical Formulation","title":"Constraints for Investments","text":"","category":"section"},{"location":"40-formulation/#Maximum-Investment-Limit-for-Assets","page":"Mathematical Formulation","title":"Maximum Investment Limit for Assets","text":"","category":"section"},{"location":"40-formulation/","page":"Mathematical Formulation","title":"Mathematical Formulation","text":"v^textinv_a leq fracp^textinv limit_ap^textcapacity_a quad forall a in mathcalA^texti","category":"page"},{"location":"40-formulation/","page":"Mathematical Formulation","title":"Mathematical Formulation","text":"If the parameter investment_integer in the assets-data.csv file is set to true, then the right-hand side of this constraint uses a least integer function (floor function) to guarantee that the limit is integer.","category":"page"},{"location":"40-formulation/#Maximum-Energy-Investment-Limit-for-Assets","page":"Mathematical Formulation","title":"Maximum Energy Investment Limit for Assets","text":"","category":"section"},{"location":"40-formulation/","page":"Mathematical Formulation","title":"Mathematical Formulation","text":"v^textinv energy_a leq fracp^textinv limit energy_ap^textenergy capacity_a quad forall a in mathcalA^texti cap mathcalA^textse","category":"page"},{"location":"40-formulation/","page":"Mathematical Formulation","title":"Mathematical Formulation","text":"If the parameter investment_integer_storage_energy in the assets-data.csv file is set to true, then the right-hand side of this constraint uses a least integer function (floor function) to guarantee that the limit is integer.","category":"page"},{"location":"40-formulation/#Maximum-Investment-Limit-for-Flows","page":"Mathematical Formulation","title":"Maximum Investment Limit for Flows","text":"","category":"section"},{"location":"40-formulation/","page":"Mathematical Formulation","title":"Mathematical Formulation","text":"v^textinv_f leq fracp^textinv limit_fp^textcapacity_f quad forall f in 
mathcalF^textti","category":"page"},{"location":"40-formulation/","page":"Mathematical Formulation","title":"Mathematical Formulation","text":"If the parameter investment_integer in the flows-data.csv file is set to true, then the right-hand side of this constraint uses a least integer function (floor function) to guarantee that the limit is integer.","category":"page"},{"location":"40-formulation/#inter-temporal-energy-constraints","page":"Mathematical Formulation","title":"Inter-temporal Energy Constraints","text":"","category":"section"},{"location":"40-formulation/","page":"Mathematical Formulation","title":"Mathematical Formulation","text":"These constraints allow us to consider a maximum or minimum energy limit for an asset throughout the model's timeframe (e.g., a year). It uses the same principle explained in the inter-temporal constraint for storage balance and in the Storage Modeling section.","category":"page"},{"location":"40-formulation/#Maximum-Outgoing-Energy-During-the-Timeframe","page":"Mathematical Formulation","title":"Maximum Outgoing Energy During the Timeframe","text":"","category":"section"},{"location":"40-formulation/","page":"Mathematical Formulation","title":"Mathematical Formulation","text":"beginaligned\nsum_f in mathcalF^textout_a sum_k in mathcalK p^textmap_pk sum_b_k in mathcalB_K p^textduration_b_k cdot v^textflow_fkb_k leq p^textmax inter profile_ap cdot p^textmax energy_a\n forall a in mathcalA^textmax e forall p in mathcalP\nendaligned","category":"page"},{"location":"40-formulation/#Minimum-Outgoing-Energy-During-the-Timeframe","page":"Mathematical Formulation","title":"Minimum Outgoing Energy During the Timeframe","text":"","category":"section"},{"location":"40-formulation/","page":"Mathematical Formulation","title":"Mathematical Formulation","text":"beginaligned\nsum_f in mathcalF^textout_a sum_k in mathcalK p^textmap_pk sum_b_k in mathcalB_K p^textduration_b_k cdot v^textflow_fkb_k geq p^textmin inter profile_ap cdot p^textmin energy_a\n forall a in mathcalA^textmin e forall p in mathcalP\nendaligned","category":"page"},{"location":"40-formulation/#group-constraints","page":"Mathematical Formulation","title":"Constraints for Groups","text":"","category":"section"},{"location":"40-formulation/","page":"Mathematical Formulation","title":"Mathematical Formulation","text":"The following constraints aggregate variables of different assets depending on the method that applies to the group.","category":"page"},{"location":"40-formulation/#investment-group-constraints","page":"Mathematical Formulation","title":"Investment Limits of a Group","text":"","category":"section"},{"location":"40-formulation/","page":"Mathematical Formulation","title":"Mathematical Formulation","text":"These constraints apply to assets in a group using the investment method mathcalG^textai. They help impose an investment potential of a spatial area commonly shared by several assets that can be invested there.","category":"page"},{"location":"40-formulation/","page":"Mathematical Formulation","title":"Mathematical Formulation","text":"Note: These constraints are applied to the investments each year. 
The model does not yet have investment limits to a group's accumulated invested capacity.","category":"page"},{"location":"40-formulation/#Minimum-Investment-Limit-of-a-Group","page":"Mathematical Formulation","title":"Minimum Investment Limit of a Group","text":"","category":"section"},{"location":"40-formulation/","page":"Mathematical Formulation","title":"Mathematical Formulation","text":"beginaligned\nsum_a in mathcalA^texti p^textgroup_a = g p^textcapacity_a cdot v^textinv_a geq p^textmin invest limit_g\n forall g in mathcalG^textai\nendaligned","category":"page"},{"location":"40-formulation/#Maximum-Investment-Limit-of-a-Group","page":"Mathematical Formulation","title":"Maximum Investment Limit of a Group","text":"","category":"section"},{"location":"40-formulation/","page":"Mathematical Formulation","title":"Mathematical Formulation","text":"beginaligned\nsum_a in mathcalA^texti p^textgroup_a = g p^textcapacity_a cdot v^textinv_a leq p^textmax invest limit_g\n forall g in mathcalG^textai\nendaligned","category":"page"},{"location":"40-formulation/#math-references","page":"Mathematical Formulation","title":"References","text":"","category":"section"},{"location":"40-formulation/","page":"Mathematical Formulation","title":"Mathematical Formulation","text":"Damcı-Kurt, P., Küçükyavuz, S., Rajan, D., Atamtürk, A., 2016. A polyhedral study of production ramping. Math. Program. 158, 175–205. doi: 10.1007/s10107-015-0919-9.","category":"page"},{"location":"40-formulation/","page":"Mathematical Formulation","title":"Mathematical Formulation","text":"Morales-España, G., Ramos, A., García-González, J., 2014. An MIP Formulation for Joint Market-Clearing of Energy and Reserves Based on Ramp Scheduling. IEEE Transactions on Power Systems 29, 476-488. doi: 10.1109/TPWRS.2013.2259601.","category":"page"},{"location":"40-formulation/","page":"Mathematical Formulation","title":"Mathematical Formulation","text":"Morales-España, G., Latorre, J. M., Ramos, A., 2013. Tight and Compact MILP Formulation for the Thermal Unit Commitment Problem. IEEE Transactions on Power Systems 28, 4897-4908. doi: 10.1109/TPWRS.2013.2251373.","category":"page"},{"location":"40-formulation/","page":"Mathematical Formulation","title":"Mathematical Formulation","text":"Tejada-Arango, D.A., Domeshek, M., Wogrin, S., Centeno, E., 2018. Enhanced representative days and system states modeling for energy storage investment analysis. IEEE Transactions on Power Systems 33, 6534–6544. doi:10.1109/TPWRS.2018.2819578.","category":"page"},{"location":"40-formulation/","page":"Mathematical Formulation","title":"Mathematical Formulation","text":"Tejada-Arango, D.A., Wogrin, S., Siddiqui, A.S., Centeno, E., 2019. Opportunity cost including short-term energy storage in hydrothermal dispatch models using a linked representative periods approach. Energy 188, 116079. doi:10.1016/j.energy.2019.116079.","category":"page"},{"location":"","page":"Welcome","title":"Welcome","text":"CurrentModule = TulipaEnergyModel","category":"page"},{"location":"#home","page":"Welcome","title":"Welcome","text":"","category":"section"},{"location":"","page":"Welcome","title":"Welcome","text":"TulipaEnergyModel.jl is an optimization model for the electricity market that can be coupled with other energy sectors (e.g., hydrogen, heat, natural gas, etc.). The optimization model determines the optimal investment and operation decisions for different types of assets (e.g., producers, consumers, conversion, storage, and transport). 
TulipaEnergyModel.jl is developed in Julia and depends on the JuMP.jl package.","category":"page"},{"location":"#Getting-Started","page":"Welcome","title":"Getting Started","text":"","category":"section"},{"location":"","page":"Welcome","title":"Welcome","text":"To start using Tulipa for your research, check out our How to Use section and Tutorials.","category":"page"},{"location":"","page":"Welcome","title":"Welcome","text":"For a more technical explanation, check out the Concepts section, or dive into the Mathematical Formulation.","category":"page"},{"location":"#bugs-and-discussions","page":"Welcome","title":"Bug reports and discussions","text":"","category":"section"},{"location":"","page":"Welcome","title":"Welcome","text":"If you think you have found a bug, feel free to open an issue. If you have a general question or idea, start a discussion here.","category":"page"},{"location":"#Contributing","page":"Welcome","title":"Contributing","text":"","category":"section"},{"location":"","page":"Welcome","title":"Welcome","text":"If you want to contribute (awesome!), please read our Contributing Guidelines and follow the setup in our Developer Documentation.","category":"page"},{"location":"#license","page":"Welcome","title":"License","text":"","category":"section"},{"location":"","page":"Welcome","title":"Welcome","text":"This content is released under the Apache License 2.0 License.","category":"page"},{"location":"#Contributors","page":"Welcome","title":"Contributors","text":"","category":"section"},{"location":"","page":"Welcome","title":"Welcome","text":"\n\n\n\n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n
\"Abel
Abel Soares Siqueira

💻 👀
\"Diego
Diego Alejandro Tejada Arango

💻 👀 🤔 🔬
\"Germán
Germán Morales

🔬 🤔 🔍 📆
\"Greg
Greg Neustroev

🤔 🔬 💻
\"Juha
Juha Kiviluoma

🤔 🔬
\"Lauren
Lauren Clisby

💻 👀 🤔 📆
\"Laurent
Laurent Soucasse

🤔
\"Mathijs
Mathijs de Weerdt

🔍 📆
\"Ni
Ni Wang

💻 👀 🤔 🔬
\"Sander
Sander van Rijn

🤔
\"Suvayu
Suvayu Ali

💻 👀 🤔
\"Zhi\"/
Zhi

🤔 🔬
\n\n\n\n\n","category":"page"}] +[{"location":"10-how-to-use/#how-to-use","page":"How to Use","title":"How to Use","text":"","category":"section"},{"location":"10-how-to-use/","page":"How to Use","title":"How to Use","text":"Pages = [\"10-how-to-use.md\"]\nDepth = 3","category":"page"},{"location":"10-how-to-use/#Install","page":"How to Use","title":"Install","text":"","category":"section"},{"location":"10-how-to-use/","page":"How to Use","title":"How to Use","text":"To use Tulipa, you first need to install the opensource Julia programming language.","category":"page"},{"location":"10-how-to-use/","page":"How to Use","title":"How to Use","text":"Then consider installing a user-friendly code editor, such as VSCode. Otherwise you will be running from the terminal/command prompt.","category":"page"},{"location":"10-how-to-use/#Starting-Julia","page":"How to Use","title":"Starting Julia","text":"","category":"section"},{"location":"10-how-to-use/","page":"How to Use","title":"How to Use","text":"Choose one:","category":"page"},{"location":"10-how-to-use/","page":"How to Use","title":"How to Use","text":"In VSCode: Press CTRL+Shift+P and press Enter to start a Julia REPL.\nIn the terminal: Type julia and press Enter","category":"page"},{"location":"10-how-to-use/#Adding-TulipaEnergyModel","page":"How to Use","title":"Adding TulipaEnergyModel","text":"","category":"section"},{"location":"10-how-to-use/","page":"How to Use","title":"How to Use","text":"In Julia:","category":"page"},{"location":"10-how-to-use/","page":"How to Use","title":"How to Use","text":"Enter package mode (press \"]\")","category":"page"},{"location":"10-how-to-use/","page":"How to Use","title":"How to Use","text":"pkg> add TulipaEnergyModel","category":"page"},{"location":"10-how-to-use/","page":"How to Use","title":"How to Use","text":"Return to Julia mode (backspace)","category":"page"},{"location":"10-how-to-use/","page":"How to Use","title":"How to Use","text":"julia> using TulipaEnergyModel","category":"page"},{"location":"10-how-to-use/#(Optional)-Running-automatic-tests","page":"How to Use","title":"(Optional) Running automatic tests","text":"","category":"section"},{"location":"10-how-to-use/","page":"How to Use","title":"How to Use","text":"It is nice to check that tests are passing to make sure your environment is working. (This takes a minute or two.)","category":"page"},{"location":"10-how-to-use/","page":"How to Use","title":"How to Use","text":"Enter package mode (press \"]\")","category":"page"},{"location":"10-how-to-use/","page":"How to Use","title":"How to Use","text":"pkg> test TulipaEnergyModel","category":"page"},{"location":"10-how-to-use/","page":"How to Use","title":"How to Use","text":"All tests should pass.","category":"page"},{"location":"10-how-to-use/#Running-a-Scenario","page":"How to Use","title":"Running a Scenario","text":"","category":"section"},{"location":"10-how-to-use/","page":"How to Use","title":"How to Use","text":"To run a scenario, use the function:","category":"page"},{"location":"10-how-to-use/","page":"How to Use","title":"How to Use","text":"run_scenario(connection)\nrun_scenario(connection; output_folder)","category":"page"},{"location":"10-how-to-use/","page":"How to Use","title":"How to Use","text":"The connection should have been created and the data loaded into it using TulipaIO. See the tutorials for a complete guide on how to achieve this. 
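As an illustration, a minimal end-to-end run could look like the following sketch (the folder names are placeholders, and the use of DuckDB with TulipaIO's read_csv_folder to create and fill the connection follows the tutorials rather than a fixed API):
julia> using DuckDB: DBInterface, DB
julia> using TulipaIO: read_csv_folder
julia> using TulipaEnergyModel
julia> connection = DBInterface.connect(DB)            # in-memory DuckDB database
julia> read_csv_folder(connection, "my-input-folder")  # placeholder folder containing the input CSVs
julia> energy_problem = run_scenario(connection; output_folder = "my-output-folder")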
The output_folder is optional if the user wants to export the output.","category":"page"},{"location":"10-how-to-use/#input","page":"How to Use","title":"Input","text":"","category":"section"},{"location":"10-how-to-use/","page":"How to Use","title":"How to Use","text":"Currently, we only accept input from CSV files that follow the Schemas. You can also check the test/inputs folder for examples.","category":"page"},{"location":"10-how-to-use/#csv-files","page":"How to Use","title":"CSV Files","text":"","category":"section"},{"location":"10-how-to-use/","page":"How to Use","title":"How to Use","text":"Below, we have a description of the files. At the end, in Schemas, we have the expected columns in these CSVs.","category":"page"},{"location":"10-how-to-use/","page":"How to Use","title":"How to Use","text":"Tip: If you modify CSV files and want to see your modifications, the normal git diff command will not be informative. Instead, you can usegit diff --word-diff-regex=\"[^[:space:],]+\"to make git treat the , as word separators. You can also compare two CSV files withgit diff --no-index --word-diff-regex=\"[^[:space:],]+\" file1 file2","category":"page"},{"location":"10-how-to-use/#graph-assets-data","page":"How to Use","title":"graph-assets-data.csv","text":"","category":"section"},{"location":"10-how-to-use/","page":"How to Use","title":"How to Use","text":"This file contains the list of assets and the static data associated with each of them.","category":"page"},{"location":"10-how-to-use/","page":"How to Use","title":"How to Use","text":"The meaning of Missing data depends on the parameter, for instance:","category":"page"},{"location":"10-how-to-use/","page":"How to Use","title":"How to Use","text":"group: No group assigned to the asset.","category":"page"},{"location":"10-how-to-use/#graph-flows-data","page":"How to Use","title":"graph-flows-data.csv","text":"","category":"section"},{"location":"10-how-to-use/","page":"How to Use","title":"How to Use","text":"The same as graph-assets-data.csv, but for flows. Each flow is defined as a pair of assets.","category":"page"},{"location":"10-how-to-use/#assets-data","page":"How to Use","title":"assets-data.csv","text":"","category":"section"},{"location":"10-how-to-use/","page":"How to Use","title":"How to Use","text":"This file contains the yearly data of each asset.","category":"page"},{"location":"10-how-to-use/","page":"How to Use","title":"How to Use","text":"The investment parameters are as follows:","category":"page"},{"location":"10-how-to-use/","page":"How to Use","title":"How to Use","text":"The investable parameter determines whether there is an investment decision for the asset or flow.\nThe investment_integer parameter determines if the investment decision is integer or continuous.\nThe investment_cost parameter represents the cost in the defined timeframe. Thus, if the timeframe is a year, the investment cost is the annualized cost of the asset.\nThe investment_limit parameter limits the total investment capacity of the asset or flow. This limit represents the potential of that particular asset or flow. 
Without data in this parameter, the model assumes no investment limit.","category":"page"},{"location":"10-how-to-use/","page":"How to Use","title":"How to Use","text":"The meaning of Missing data depends on the parameter, for instance:","category":"page"},{"location":"10-how-to-use/","page":"How to Use","title":"How to Use","text":"investment_limit: There is no investment limit.\ninitial_storage_level: The initial storage level is free (between the storage level limits), meaning that the optimization problem decides the best starting point for the storage asset. In addition, the first and last time blocks in a representative period are linked to create continuity in the storage level.","category":"page"},{"location":"10-how-to-use/#flows-data","page":"How to Use","title":"flows-data.csv","text":"","category":"section"},{"location":"10-how-to-use/","page":"How to Use","title":"How to Use","text":"The same as assets-data.csv, but for flows. Each flow is defined as a pair of assets.","category":"page"},{"location":"10-how-to-use/","page":"How to Use","title":"How to Use","text":"The meaning of Missing data depends on the parameter, for instance:","category":"page"},{"location":"10-how-to-use/","page":"How to Use","title":"How to Use","text":"investment_limit: There is no investment limit.","category":"page"},{"location":"10-how-to-use/#assets-profiles-definition","page":"How to Use","title":"assets-profiles.csv","text":"","category":"section"},{"location":"10-how-to-use/","page":"How to Use","title":"How to Use","text":"These files contain information about assets and their associated profiles. Each row lists an asset, the type of profile (e.g., availability, demand, maximum or minimum storage level), and the profile's name. These profiles are used in the intra-temporal constraints.","category":"page"},{"location":"10-how-to-use/#flows-profiles-definition","page":"How to Use","title":"flows-profiles.csv","text":"","category":"section"},{"location":"10-how-to-use/","page":"How to Use","title":"How to Use","text":"This file contains information about flows and their representative period profiles for intra-temporal constraints. Each flow is defined as a pair of assets.","category":"page"},{"location":"10-how-to-use/#rep-periods-data","page":"How to Use","title":"rep-periods-data.csv","text":"","category":"section"},{"location":"10-how-to-use/","page":"How to Use","title":"How to Use","text":"Describes the representative periods by their unique ID, the number of timesteps per representative period, and the resolution per timestep. Note that in the test files the resolution units are given as hours for understandability, but the resolution is technically unitless.","category":"page"},{"location":"10-how-to-use/#rep-periods-mapping","page":"How to Use","title":"rep-periods-mapping.csv","text":"","category":"section"},{"location":"10-how-to-use/","page":"How to Use","title":"How to Use","text":"Describes the periods of the timeframe that map into a representative period and the weight of the representative periods that construct a period. Note that each weight is a decimal between 0 and 1, and that the sum of weights for a given period must also be between 0 and 1 (but do not have to sum to 1).","category":"page"},{"location":"10-how-to-use/#profiles-rep-periods.csv","page":"How to Use","title":"profiles-rep-periods.csv","text":"","category":"section"},{"location":"10-how-to-use/","page":"How to Use","title":"How to Use","text":"Define all the profiles for the rep-periods. 
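As a rough illustration (the column names and values below are indicative only; the Schemas section at the end of this page lists the exact columns expected by the model), a few rows of such a file could look like: profile_name,rep_period,timestep,value
availability-wind,1,1,0.32
availability-wind,1,2,0.28
demand-electricity,1,1,0.95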
The profile_name is a unique identifier, the period and value define the profile, and the rep_period field informs the representative period.","category":"page"},{"location":"10-how-to-use/","page":"How to Use","title":"How to Use","text":"The profiles are linked to assets and flows in the files assets-profiles, assets-timeframe-profiles, and flows-profiles.","category":"page"},{"location":"10-how-to-use/#assets-timeframe-profiles.csv","page":"How to Use","title":"assets-timeframe-profiles.csv","text":"","category":"section"},{"location":"10-how-to-use/","page":"How to Use","title":"How to Use","text":"Like the assets-profiles.csv, but for the inter-temporal constraints.","category":"page"},{"location":"10-how-to-use/#groups-data.csv-(optional)","page":"How to Use","title":"groups-data.csv (optional)","text":"","category":"section"},{"location":"10-how-to-use/","page":"How to Use","title":"How to Use","text":"This file contains the list of groups and the methods that apply to each group, along with their respective parameters.","category":"page"},{"location":"10-how-to-use/#profiles-timeframe.csv-(optional)","page":"How to Use","title":"profiles-timeframe.csv (optional)","text":"","category":"section"},{"location":"10-how-to-use/","page":"How to Use","title":"How to Use","text":"Define all the profiles for the timeframe. This is similar to the profiles-rep-periods.csv except that it doesn't have a rep-period field and if this is not passed, default values are used in the timeframe constraints.","category":"page"},{"location":"10-how-to-use/#assets-rep-periods-partitions-definition","page":"How to Use","title":"assets-rep-periods-partitions.csv (optional)","text":"","category":"section"},{"location":"10-how-to-use/","page":"How to Use","title":"How to Use","text":"Contains a description of the partition for each asset with respect to representative periods. If not specified, each asset will have the same time resolution as the representative period, which is hourly by default.","category":"page"},{"location":"10-how-to-use/","page":"How to Use","title":"How to Use","text":"There are currently three ways to specify the desired resolution, indicated in the column specification. The column partition serves to define the partitions in the specified style.","category":"page"},{"location":"10-how-to-use/","page":"How to Use","title":"How to Use","text":"specification = uniform: Set the resolution to a uniform amount, i.e., a time block is made of X timesteps. The number X is defined in the column partition. The number of timesteps in the representative period must be divisible by X.\nspecification = explicit: Set the resolution according to a list of numbers separated by ; on the partition. Each number in the list is the number of timesteps for that time block. For instance, 2;3;4 means that there are three time blocks, the first has 2 timesteps, the second has 3 timesteps, and the last has 4 timesteps. The sum of the list must be equal to the total number of timesteps in that representative period, as specified in num_timesteps of rep-periods-data.csv.\nspecification = math: Similar to explicit, but using + and x for simplification. The value of partition is a sequence of elements of the form NxT separated by +, indicating N time blocks of length T. 
For instance, 2x3+3x6 is 2 time blocks of 3 timesteps, followed by 3 time blocks of 6 timesteps, for a total of 24 timesteps in the representative period.","category":"page"},{"location":"10-how-to-use/","page":"How to Use","title":"How to Use","text":"The table below shows various results for different formats for a representative period with 12 timesteps.","category":"page"},{"location":"10-how-to-use/","page":"How to Use","title":"How to Use","text":"Time Block :uniform :explicit :math\n1:3, 4:6, 7:9, 10:12 3 3;3;3;3 4x3\n1:4, 5:8, 9:12 4 4;4;4 3x4\n1:1, 2:2, …, 12:12 1 1;1;1;1;1;1;1;1;1;1;1;1 12x1\n1:3, 4:6, 7:10, 11:12 NA 3;3;4;2 2x3+1x4+1x2","category":"page"},{"location":"10-how-to-use/","page":"How to Use","title":"How to Use","text":"Note: If an asset is not specified in this file, the balance equation will be written in the lowest resolution of both the incoming and outgoing flows to the asset.","category":"page"},{"location":"10-how-to-use/#flow-rep-periods-partitions-definition","page":"How to Use","title":"flows-rep-periods-partitions.csv (optional)","text":"","category":"section"},{"location":"10-how-to-use/","page":"How to Use","title":"How to Use","text":"The same as assets-rep-periods-partitions.csv, but for flows.","category":"page"},{"location":"10-how-to-use/","page":"How to Use","title":"How to Use","text":"If a flow is not specified in this file, the flow time resolution will be for each timestep by default (e.g., hourly).","category":"page"},{"location":"10-how-to-use/#assets-timeframe-partitions","page":"How to Use","title":"assets-timeframe-partitions.csv (optional)","text":"","category":"section"},{"location":"10-how-to-use/","page":"How to Use","title":"How to Use","text":"The same as their assets-rep-periods-partitions.csv counterpart, but for the periods in the timeframe of the model.","category":"page"},{"location":"10-how-to-use/#schemas","page":"How to Use","title":"Schemas","text":"","category":"section"},{"location":"10-how-to-use/","page":"How to Use","title":"How to Use","text":"using Markdown, TulipaEnergyModel\n\nMarkdown.parse(\n join([\"- **`$filename`**\\n\" *\n join(\n [\" - `$f: $t`\" for (f, t) in schema],\n \"\\n\",\n ) for (filename, schema) in TulipaEnergyModel.schema_per_table_name\n ] |> sort, \"\\n\")\n)","category":"page"},{"location":"10-how-to-use/#structures","page":"How to Use","title":"Structures","text":"","category":"section"},{"location":"10-how-to-use/","page":"How to Use","title":"How to Use","text":"The list of relevant structures used in this package are listed below:","category":"page"},{"location":"10-how-to-use/#EnergyProblem","page":"How to Use","title":"EnergyProblem","text":"","category":"section"},{"location":"10-how-to-use/","page":"How to Use","title":"How to Use","text":"The EnergyProblem structure is a wrapper around various other relevant structures. 
It hides the complexity behind the energy problem, making the usage more friendly, although more verbose.","category":"page"},{"location":"10-how-to-use/#Fields","page":"How to Use","title":"Fields","text":"","category":"section"},{"location":"10-how-to-use/","page":"How to Use","title":"How to Use","text":"graph: The Graph object that defines the geometry of the energy problem.\nrepresentative_periods: A vector of Representative Periods.\nconstraints_partitions: Dictionaries that connect pairs of asset and representative periods to time partitions (vectors of time blocks).\ntimeframe: The number of periods in the representative_periods.\ndataframes: A Dictionary of dataframes used to linearize the variables and constraints. These are used internally in the model only.\ngroups: A vector of Groups.\nmodel: A JuMP.Model object representing the optimization model.\nsolution: A structure of the variable values (investments, flows, etc) in the solution.\nsolved: A boolean indicating whether the model has been solved or not.\nobjective_value: The objective value of the solved problem (Float64).\ntermination_status: The termination status of the optimization model.\ntime_read_data: Time taken (in seconds) for reading the data (Float64).\ntime_create_model: Time taken (in seconds) for creating the model (Float64).\ntime_solve_model: Time taken (in seconds) for solving the model (Float64).","category":"page"},{"location":"10-how-to-use/#Constructor","page":"How to Use","title":"Constructor","text":"","category":"section"},{"location":"10-how-to-use/","page":"How to Use","title":"How to Use","text":"The EnergyProblem can also be constructed using the minimal constructor below.","category":"page"},{"location":"10-how-to-use/","page":"How to Use","title":"How to Use","text":"EnergyProblem(connection): Constructs a new EnergyProblem object with the given connection that has been created and the data loaded into it using TulipaIO. The graph, representative_periods, and timeframe are computed using create_internal_structures. The constraints_partitions field is computed from the representative_periods, and the other fields are initialized with default values.","category":"page"},{"location":"10-how-to-use/","page":"How to Use","title":"How to Use","text":"See the basic example tutorial to see how these can be used.","category":"page"},{"location":"10-how-to-use/#Graph","page":"How to Use","title":"Graph","text":"","category":"section"},{"location":"10-how-to-use/","page":"How to Use","title":"How to Use","text":"The energy problem is defined using a graph. Each vertex is an asset, and each edge is a flow.","category":"page"},{"location":"10-how-to-use/","page":"How to Use","title":"How to Use","text":"We use MetaGraphsNext.jl to define the graph and its objects. Using MetaGraphsNext we can define a graph with metadata, i.e., associate data with each asset and flow. Furthermore, we can define the labels of each asset as keys to access the elements of the graph. 
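For example, a small sketch (assuming an energy_problem has already been created, and using asset names such as \"wind\" and \"balance\" purely for illustration): graph = energy_problem.graph
wind_data = graph[\"wind\"]             # data associated with the wind asset
flow_data = graph[\"wind\", \"balance\"]  # data associated with the flow from wind to balance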
The assets in the graph are of type GraphAssetData, and the flows are of type GraphFlowData.","category":"page"},{"location":"10-how-to-use/","page":"How to Use","title":"How to Use","text":"The graph can be created using the create_internal_structures function, or it can be accessed from an EnergyProblem.","category":"page"},{"location":"10-how-to-use/","page":"How to Use","title":"How to Use","text":"See how to use the graph in the graph tutorial.","category":"page"},{"location":"10-how-to-use/#GraphAssetData","page":"How to Use","title":"GraphAssetData","text":"","category":"section"},{"location":"10-how-to-use/","page":"How to Use","title":"How to Use","text":"This structure holds all the information of a given asset. These are stored inside the Graph. Given a graph graph, an asset a can be accessed through graph[a].","category":"page"},{"location":"10-how-to-use/#GraphFlowData","page":"How to Use","title":"GraphFlowData","text":"","category":"section"},{"location":"10-how-to-use/","page":"How to Use","title":"How to Use","text":"This structure holds all the information of a given flow. These are stored inside the Graph. Given a graph graph, a flow from asset u to asset v can be accessed through graph[u, v].","category":"page"},{"location":"10-how-to-use/#Partition","page":"How to Use","title":"Partition","text":"","category":"section"},{"location":"10-how-to-use/","page":"How to Use","title":"How to Use","text":"A representative period will be defined with a number of timesteps. A partition is a division of these timesteps into time blocks such that the time blocks are disjunct (not overlapping) and that all timesteps belong to some time block. Some variables and constraints are defined over every time block in a partition.","category":"page"},{"location":"10-how-to-use/","page":"How to Use","title":"How to Use","text":"For instance, for a representative period with 12 timesteps, all sets below are partitions:","category":"page"},{"location":"10-how-to-use/","page":"How to Use","title":"How to Use","text":"1 2 3 4 5 6 7 8 9 10 11 12\n1 2 3 4 5 6 7 8 9 10 11 12\n1 2 3 4 5 6 7 8 9 10 11 12","category":"page"},{"location":"10-how-to-use/#timeframe","page":"How to Use","title":"Timeframe","text":"","category":"section"},{"location":"10-how-to-use/","page":"How to Use","title":"How to Use","text":"The timeframe is the total period we want to analyze with the model. Usually this is a year, but it can be any length of time. A timeframe has two fields:","category":"page"},{"location":"10-how-to-use/","page":"How to Use","title":"How to Use","text":"num_periods: The timeframe is defined by a certain number of periods. For instance, a year can be defined by 365 periods, each describing a day.\nmap_periods_to_rp: Indicates the periods of the timeframe that map into a representative period and the weight of the representative period to construct that period.","category":"page"},{"location":"10-how-to-use/#representative-periods","page":"How to Use","title":"Representative Periods","text":"","category":"section"},{"location":"10-how-to-use/","page":"How to Use","title":"How to Use","text":"The timeframe (e.g., a full year) is described by a selection of representative periods, for instance, days or weeks, that nicely summarize other similar periods. For example, we could model the year into 3 days, by clustering all days of the year into 3 representative days. Each one of these days is called a representative period. 
TulipaEnergyModel.jl has the flexibility to consider representative periods of different lengths for the same timeframe (e.g., a year can be represented by a set of 4 days and 2 weeks). To obtain the representative periods, we recommend using TulipaClustering.","category":"page"},{"location":"10-how-to-use/","page":"How to Use","title":"How to Use","text":"A representative period has three fields:","category":"page"},{"location":"10-how-to-use/","page":"How to Use","title":"How to Use","text":"weight: Indicates how many representative periods are contained in the timeframe; this is inferred automatically from map_periods_to_rp in the timeframe.\ntimesteps: The number of timesteps in the representative period.\nresolution: The duration in time of each timestep.","category":"page"},{"location":"10-how-to-use/","page":"How to Use","title":"How to Use","text":"The number of timesteps and resolution work together to define the coarseness of the period. Nothing is defined outside of these timesteps; for instance, if the representative period represents a day and you want to specify a variable or constraint with a coarseness of 30 minutes, you need to set the number of timesteps to 48 and the resolution to 0.5.","category":"page"},{"location":"10-how-to-use/#Solution","page":"How to Use","title":"Solution","text":"","category":"section"},{"location":"10-how-to-use/","page":"How to Use","title":"How to Use","text":"The solution object energy_problem.solution is a mutable struct with the following fields:","category":"page"},{"location":"10-how-to-use/","page":"How to Use","title":"How to Use","text":"assets_investment[a]: The investment for each asset, indexed on the investable asset a.\nflows_investment[u, v]: The investment for each flow, indexed on the investable flow (u, v).\nstorage_level_intra_rp[a, rp, timesteps_block]: The storage level for the storage asset a within (intra) a representative period rp and a time block timesteps_block. The list of time blocks is defined by constraints_partitions, which was used to create the model.\nstorage_level_inter_rp[a, periods_block]: The storage level for the storage asset a between (inter) representative periods in the periods block periods_block.\nflow[(u, v), rp, timesteps_block]: The flow value for a given flow (u, v) at a given representative period rp, and time block timesteps_block. The list of time blocks is defined by graph[(u, v)].partitions[rp].\nobjective_value: A Float64 with the objective value at the solution.\nduals: A Dictionary containing the dual variables of selected constraints.","category":"page"},{"location":"10-how-to-use/","page":"How to Use","title":"How to Use","text":"Check the tutorial for tips on manipulating the solution.","category":"page"},{"location":"10-how-to-use/#time-blocks","page":"How to Use","title":"Time Blocks","text":"","category":"section"},{"location":"10-how-to-use/","page":"How to Use","title":"How to Use","text":"A time block is a range for which a variable or constraint is defined. It is a range of numbers, i.e., all integer numbers inside an interval. Time blocks are used for the periods in the timeframe and the timesteps in the representative period. 
Time blocks are disjunct (not overlapping), but do not have to be sequential.","category":"page"},{"location":"10-how-to-use/#group","page":"How to Use","title":"Group","text":"","category":"section"},{"location":"10-how-to-use/","page":"How to Use","title":"How to Use","text":"This structure holds all the information of a given group with the following fields:","category":"page"},{"location":"10-how-to-use/","page":"How to Use","title":"How to Use","text":"name: The name of the group.\ninvest_method: Boolean value to indicate whether or not the group has an investment method.\nmin_investment_limit: A minimum investment limit in MW is imposed on the total investments of the assets belonging to the group.\nmax_investment_limit: A maximum investment limit in MW is imposed on the total investments of the assets belonging to the group.","category":"page"},{"location":"10-how-to-use/#infeasible","page":"How to Use","title":"Exploring infeasibility","text":"","category":"section"},{"location":"10-how-to-use/","page":"How to Use","title":"How to Use","text":"If your model is infeasible, you can try exploring the infeasibility with JuMP.compute_conflict! and JuMP.copy_conflict.","category":"page"},{"location":"10-how-to-use/","page":"How to Use","title":"How to Use","text":"Note: Not all solvers support this functionality.","category":"page"},{"location":"10-how-to-use/","page":"How to Use","title":"How to Use","text":"Use energy_problem.model for the model argument. For instance:","category":"page"},{"location":"10-how-to-use/","page":"How to Use","title":"How to Use","text":"if energy_problem.termination_status == INFEASIBLE\n compute_conflict!(energy_problem.model)\n iis_model, reference_map = copy_conflict(energy_problem.model)\n print(iis_model)\nend","category":"page"},{"location":"10-how-to-use/#Storage-specific-setups","page":"How to Use","title":"Storage specific setups","text":"","category":"section"},{"location":"10-how-to-use/#seasonal-setup","page":"How to Use","title":"Seasonal and non-seasonal storage","text":"","category":"section"},{"location":"10-how-to-use/","page":"How to Use","title":"How to Use","text":"Section Storage Modeling explains the main concepts for modeling seasonal and non-seasonal storage in TulipaEnergyModel.jl. To define if an asset is one type or the other then consider the following:","category":"page"},{"location":"10-how-to-use/","page":"How to Use","title":"How to Use","text":"Seasonal storage: When the storage capacity of an asset is greater than the total length of representative periods, we recommend using the inter-temporal constraints. To apply these constraints, you must set the input parameter is_seasonal to true in the assets-data.csv.\nNon-seasonal storage: When the storage capacity of an asset is lower than the total length of representative periods, we recommend using the intra-temporal constraints. To apply these constraints, you must set the input parameter is_seasonal to false in the assets-data.csv.","category":"page"},{"location":"10-how-to-use/","page":"How to Use","title":"How to Use","text":"Note: If the input data covers only one representative period for the entire year, for example, with 8760-hour timesteps, and you have a monthly hydropower plant, then you should set the is_seasonal parameter for that asset to false. 
This is because the length of the representative period is greater than the storage capacity of the storage asset.","category":"page"},{"location":"10-how-to-use/#storage-investment-setup","page":"How to Use","title":"The energy storage investment method","text":"","category":"section"},{"location":"10-how-to-use/","page":"How to Use","title":"How to Use","text":"Energy storage assets have a unique characteristic wherein the investment is based not solely on the capacity to charge and discharge, but also on the energy capacity. Some storage asset types have a fixed duration for a given capacity, which means that there is a predefined ratio between energy and power. For instance, a battery of 10MW/unit and 4h duration implies that the energy capacity is 40MWh. Conversely, other storage asset types don't have a fixed ratio between the investment of capacity and storage capacity. Therefore, the energy capacity can be optimized independently of the capacity investment, such as hydrogen storage in salt caverns. To define whether an energy asset is one type or the other, consider the following parameter setting in the file assets-data.csv:","category":"page"},{"location":"10-how-to-use/","page":"How to Use","title":"How to Use","text":"Investment energy method: To use this method, set the parameter storage_method_energy to true. In addition, it is necessary to define:\ninvestment_cost_storage_energy: To establish the cost of investing in the storage capacity (e.g., kEUR/MWh/unit).\nfixed_cost_storage_energy: To establish the fixed cost of energy storage capacity (e.g., kEUR/MWh/unit).\ninvestment_limit_storage_energy: To define the potential of the energy capacity investment (e.g., MWh). Missing values mean that there is no limit.\ninvestment_integer_storage_energy: To determine whether the investment variables of storage capacity are integer or continuous.\nFixed energy-to-power ratio method: To use this method, set the parameter storage_method_energy to false. In addition, it is necessary to define the parameter energy_to_power_ratio to establish the predefined duration of the storage asset or ratio between energy and power. Note that all the investment costs should be allocated in the parameter investment_cost.","category":"page"},{"location":"10-how-to-use/","page":"How to Use","title":"How to Use","text":"In addition, the parameter capacity_storage_energy in the graph-assets-data.csv defines the energy per unit of storage capacity invested in (e.g., MWh/unit).","category":"page"},{"location":"10-how-to-use/","page":"How to Use","title":"How to Use","text":"For more details on the constraints that apply when selecting one method or the other, please visit the mathematical formulation section.","category":"page"},{"location":"10-how-to-use/#storage-binary-method-setup","page":"How to Use","title":"Control simultaneous charging and discharging","text":"","category":"section"},{"location":"10-how-to-use/","page":"How to Use","title":"How to Use","text":"Depending on the configuration of the energy storage assets, it may or may not be possible to charge and discharge them simultaneously. For instance, a single battery cannot charge and discharge at the same time, but some pumped hydro storage technologies have separate components for charging (pump) and discharging (turbine) that can function independently, allowing them to charge and discharge simultaneously. 
To account for these differences, the model provides users with three options for the use_binary_storage_method parameter in the assets-data.csv file:","category":"page"},{"location":"10-how-to-use/","page":"How to Use","title":"How to Use","text":"binary: the model adds a binary variable to prevent charging and discharging simultaneously.\nrelaxed_binary: the model adds a binary variable that allows values between 0 and 1, reducing the likelihood of charging and discharging simultaneously. This option uses a tighter set of constraints close to the convex hull of the full formulation, resulting in fewer instances of simultaneous charging and discharging in the results.\nIf no value is set, i.e., missing value, the storage asset can charge and discharge simultaneously.","category":"page"},{"location":"10-how-to-use/","page":"How to Use","title":"How to Use","text":"For more details on the constraints that apply when selecting this method, please visit the mathematical formulation section.","category":"page"},{"location":"10-how-to-use/#unit-commitment-setup","page":"How to Use","title":"Setting up unit commitment constraints","text":"","category":"section"},{"location":"10-how-to-use/","page":"How to Use","title":"How to Use","text":"The unit commitment constraints are only applied to producer and conversion assets. The unit_commitment parameter must be set to true to include the constraints in the assets-data.csv. Additionally, the following parameters should be set in that same file:","category":"page"},{"location":"10-how-to-use/","page":"How to Use","title":"How to Use","text":"unit_commitment_method: It determines which unit commitment method to use. The current version of the code only includes the basic version. Future versions will add more detailed constraints as additional options.\nunits_on_cost: Objective function coefficient on units_on variable. (e.g., no-load cost or idling cost in kEUR/h/unit)\nunit_commitment_integer: It determines whether the unit commitment variables are considered as integer or not (true or false)\nmin_operating_point: Minimum operating point or minimum stable generation level defined as a portion of the capacity of asset (p.u.)","category":"page"},{"location":"10-how-to-use/","page":"How to Use","title":"How to Use","text":"For more details on the constraints that apply when selecting this method, please visit the mathematical formulation section.","category":"page"},{"location":"10-how-to-use/#ramping-setup","page":"How to Use","title":"Setting up ramping constraints","text":"","category":"section"},{"location":"10-how-to-use/","page":"How to Use","title":"How to Use","text":"The ramping constraints are only applied to producer and conversion assets. The ramping parameter must be set to true to include the constraints in the assets-data.csv. 
Additionally, the following parameters should be set in that same file:","category":"page"},{"location":"10-how-to-use/","page":"How to Use","title":"How to Use","text":"max_ramp_up: Maximum ramping up rate as a portion of the capacity of asset (p.u./h)\nmax_ramp_down: Maximum ramping down rate as a portion of the capacity of asset (p.u./h)","category":"page"},{"location":"10-how-to-use/","page":"How to Use","title":"How to Use","text":"For more details on the constraints that apply when selecting this method, please visit the mathematical formulation section.","category":"page"},{"location":"10-how-to-use/#max-min-outgoing-energy-setup","page":"How to Use","title":"Setting up a maximum or minimum outgoing energy limit","text":"","category":"section"},{"location":"10-how-to-use/","page":"How to Use","title":"How to Use","text":"For the model to add constraints for a maximum or minimum energy limit for an asset throughout the model's timeframe (e.g., a year), we need to establish a couple of parameters:","category":"page"},{"location":"10-how-to-use/","page":"How to Use","title":"How to Use","text":"is_seasonal = true in the assets-data.csv. This parameter enables the model to use the inter-temporal constraints.\nmax_energy_timeframe_partition ≠ missing or min_energy_timeframe_partition ≠ missing in the assets-data.csv. This value represents the peak energy that will then be multiplied by the profile for each period in the timeframe.\nNote: These parameters are defined per period, and the default values for profiles are 1.0 p.u. per period. If the periods are determined daily, the energy limit for the whole year will be 365 times max_energy_timeframe_partition or min_energy_timeframe_partition.\n(optional) profile_type and profile_name in the assets-timeframe-profiles.csv and the profile values in the profiles-timeframe.csv. If there is no profile defined, then by default it is 1.0 p.u. for all periods in the timeframe.\n(optional) define a period partition in assets-timeframe-partitions.csv. If there is no partition defined, then by default the constraint is created for each period in the timeframe, otherwise, it will consider the partition definition in the file.","category":"page"},{"location":"10-how-to-use/","page":"How to Use","title":"How to Use","text":"Tip: If you want to set a limit on the maximum or minimum outgoing energy for a year with representative days, you can use the partition definition to create a single partition for the entire year to combine the profile.","category":"page"},{"location":"10-how-to-use/#Example:-Setting-Energy-Limits","page":"How to Use","title":"Example: Setting Energy Limits","text":"","category":"section"},{"location":"10-how-to-use/","page":"How to Use","title":"How to Use","text":"Let's assume we have a year divided into 365 days because we are using days as periods in the representative periods from TulipaClustering.jl. Also, we define the max_energy_timeframe_partition = 10 MWh, meaning the peak energy we want to have is 10MWh for each period or period partition. So depending on the optional information, we can have:","category":"page"},{"location":"10-how-to-use/","page":"How to Use","title":"How to Use","text":"Profile Period Partitions Example\nNone None The default profile is 1.0 p.u. for each period and since there are no period partitions, the constraints will be for each period (i.e., daily). 
So the outgoing energy of the asset for each day must be less than or equal to 10MWh.\nDefined None The profile definition and value will be in the assets-timeframe-profiles.csv and profiles-timeframe.csv files. For example, we define a profile that has the following first four values: 0.6 p.u., 1.0 p.u., 0.8 p.u., and 0.4 p.u. There are no period partitions, so constraints will be for each period (i.e., daily). Therefore, the outgoing energy of the asset for the first four days must be less than or equal to 6MWh, 10MWh, 8MWh, and 4MWh.\nDefined Defined Using the same profile as above, we now define a period partition in the assets-timeframe-partitions.csv file as uniform with a value of 2. This value means that we will aggregate every two periods (i.e., every two days). So, instead of having 365 constraints, we will have 183 constraints (182 every two days and one last constraint of 1 day). Then the profile is aggregated with the sum of the values inside the periods within the partition. Thus, the outgoing energy of the asset for the first two partitions (i.e., every two days) must be less than or equal to 16MWh and 12MWh, respectively.","category":"page"},{"location":"10-how-to-use/#group-setup","page":"How to Use","title":"Defining a group of assets","text":"","category":"section"},{"location":"10-how-to-use/","page":"How to Use","title":"How to Use","text":"A group of assets refers to a set of assets that share certain constraints. For example, the investments of a group of assets may be capped at a maximum value, which represents the potential of a specific area that is restricted in terms of the maximum allowable MW due to limitations on building licenses.","category":"page"},{"location":"10-how-to-use/","page":"How to Use","title":"How to Use","text":"In order to define the groups in the model, the following steps are necessary:","category":"page"},{"location":"10-how-to-use/","page":"How to Use","title":"How to Use","text":"Create a group in the groups-data.csv file by defining the name property and its parameters.\nIn the file graph-assets-data.csv, assign assets to the group by setting the name in the group parameter/column.\nNote: A missing value in the parameter group in the graph-assets-data.csv means that the asset does not belong to any group.","category":"page"},{"location":"10-how-to-use/","page":"How to Use","title":"How to Use","text":"Groups are useful to represent several common constraints; the following group constraints are available.","category":"page"},{"location":"10-how-to-use/#investment-group-setup","page":"How to Use","title":"Setting up a maximum or minimum investment limit for a group","text":"","category":"section"},{"location":"10-how-to-use/","page":"How to Use","title":"How to Use","text":"The mathematical formulation of the maximum and minimum investment limit for group constraints is available here. The parameters to set up these constraints in the model are in the groups-data.csv file.","category":"page"},{"location":"10-how-to-use/","page":"How to Use","title":"How to Use","text":"invest_method = true. This parameter enables the model to use the investment group constraints.\nmin_investment_limit ≠ missing or max_investment_limit ≠ missing. This value represents the limits that will be imposed on the investment that belongs to the group.\nNotes: A missing value in the parameters min_investment_limit and max_investment_limit means that there is no investment limit.\nThese constraints are applied to the investments each year. 
The model does not yet have investment limits to a group's accumulated invested capacity.","category":"page"},{"location":"10-how-to-use/#Example:-Group-of-Assets","page":"How to Use","title":"Example: Group of Assets","text":"","category":"section"},{"location":"10-how-to-use/","page":"How to Use","title":"How to Use","text":"Let's explore how the groups are set up in the test case called Norse. First, let's take a look at the groups-data.csv file:","category":"page"},{"location":"10-how-to-use/","page":"How to Use","title":"How to Use","text":"using DataFrames # hide\nusing CSV # hide\ninput_asset_file = \"../../test/inputs/Norse/groups-data.csv\" # hide\nassets = CSV.read(input_asset_file, DataFrame, header = 2) # hide","category":"page"},{"location":"10-how-to-use/","page":"How to Use","title":"How to Use","text":"In the given data, there are two groups: renewables and ccgt. Both groups have the invest_method parameter set to true, indicating that investment group constraints apply to both. For the renewables group, the min_investment_limit parameter is missing, signifying that there is no minimum limit imposed on the group. However, the max_investment_limit parameter is set to 40000 MW, indicating that the total investments of assets in the group must be less than or equal to this value. In contrast, the ccgt group has a missing value in the max_investment_limit parameter, indicating no maximum limit, while the min_investment_limit is set to 10000 MW for the total investments in that group.","category":"page"},{"location":"10-how-to-use/","page":"How to Use","title":"How to Use","text":"Let's now explore which assets are in each group. To do so, we can take a look at the graph-assets-data.csv file:","category":"page"},{"location":"10-how-to-use/","page":"How to Use","title":"How to Use","text":"input_asset_file = \"../../test/inputs/Norse/graph-assets-data.csv\" # hide\nassets = CSV.read(input_asset_file, DataFrame, header = 2) # hide\nassets = assets[.!ismissing.(assets.group), [:name, :type, :group]] # hide","category":"page"},{"location":"10-how-to-use/","page":"How to Use","title":"How to Use","text":"Here we can see that the assets Asgard_Solar and Midgard_Wind belong to the renewables group, while the assets Asgard_CCGT and Midgard_CCGT belong to the ccgt group.","category":"page"},{"location":"10-how-to-use/","page":"How to Use","title":"How to Use","text":"Note: If the group has a min_investment_limit, then assets in the group have to allow investment (investable = true) for the model to be feasible. If the assets are not investable then they cannot satisfy the minimum constraint.","category":"page"},{"location":"30-concepts/#concepts","page":"Concepts","title":"Concepts","text":"","category":"section"},{"location":"30-concepts/","page":"Concepts","title":"Concepts","text":"Pages = [\"30-concepts.md\"]\nDepth = 3","category":"page"},{"location":"30-concepts/#concepts-summary","page":"Concepts","title":"Summary","text":"","category":"section"},{"location":"30-concepts/","page":"Concepts","title":"Concepts","text":"TulipaEnergyModel.jl incorporates two fundamental concepts that serve as the foundation of the optimization model:","category":"page"},{"location":"30-concepts/","page":"Concepts","title":"Concepts","text":"Energy Assets: representation of a physical asset that can produce, consume, store, balance, or convert energy. 
Some examples of what these assets can represent are:\nProducer: e.g., wind turbine, solar panel\nConsumer: e.g., electricity demand, heat demand\nStorage: e.g., battery, pumped-hydro storage\nBalancing Hub: e.g., an electricity network that serves as a connection among other energy assets\nConversion: e.g., power plants, electrolyzers\nFlows: representation of the connections among assets, e.g., pipelines, transmission lines, or simply the energy production that goes from one asset to another.","category":"page"},{"location":"30-concepts/","page":"Concepts","title":"Concepts","text":"In a nutshell, the model guarantees a balance of energy for the various types of assets while considering the flow limits. It considers a set of representative periods (e.g., days or weeks) for a given timeframe (e.g., a year) the user wants to analyze. Therefore, the model has two types of temporal (time) constraints to consider the different chronology characteristics of the assets:","category":"page"},{"location":"30-concepts/","page":"Concepts","title":"Concepts","text":"Intra-temporal Constraints: These constraints limit the asset or flow within a representative period. The intra-temporal constraints help to characterize the short-term operational dynamics of the assets. So far, the model considers balance and flow limitations within the representative period, but future developments will include unit commitment, ramping, and reserve constraints.\nInter-temporal Constraints: These constraints combine the information of the representative periods and create limitations between them to recover chronological information across the whole timeframe. The inter-temporal constraints help to characterize the long-term operational dynamics of the assets (e.g., seasonality). So far, the model uses this type of constraint to model seasonal storage. Still, future developments will include, for example, maximum or minimum production/consumption for a year (or any timeframe).","category":"page"},{"location":"30-concepts/","page":"Concepts","title":"Concepts","text":"The mathematical formulation shows an overview of these constraints and the variables in the model.","category":"page"},{"location":"30-concepts/","page":"Concepts","title":"Concepts","text":"Another essential concept in the model is the flexible time resolution, which allows for each asset to be considered in a single timestep (e.g., 1, 2, 3...) or in a range of timesteps (e.g., 1:3, meaning that the asset's variable represents the value of timesteps 1, 2, and 3). This concept allows the modeling of different dynamics depending on the asset; for instance, electricity assets can be modeled hourly, whereas hydrogen assets can be modeled in a 6-hour resolution (avoiding creating unnecessary constraints and variables).","category":"page"},{"location":"30-concepts/","page":"Concepts","title":"Concepts","text":"The following sections explain these concepts in more detail.","category":"page"},{"location":"30-concepts/#flex-asset-connection","page":"Concepts","title":"Flexible Connection of Energy Assets","text":"","category":"section"},{"location":"30-concepts/","page":"Concepts","title":"Concepts","text":"In energy system modeling, it is becoming common to have hybrid assets like storage + renewable (e.g., battery + solar), electrolyzer + renewable (e.g., electrolyzer + wind), or renewable + hydro (e.g., solar + hydro) that are located at the same site and share a common connection point to the grid. 
The standard method of modeling these assets requires extra variables and constraints for them to function correctly. For example, flows from the grid are not allowed, as they either avoid charging from the grid or require green hydrogen production. Therefore, hybrid connections typically require an additional node to regulate this connection with the grid.","category":"page"},{"location":"30-concepts/","page":"Concepts","title":"Concepts","text":"The representation of the energy system in TulipaEnergyModel.jl is based on Graph Theory, which deals with the connection between vertices by edges. This representation provides a more flexible framework to model energy assets in the system as vertices and flows between energy assets as edges. By connecting assets directly to each other (i.e., without having a node in between), we reduce the number of variables and constraints needed to represent hybrid configurations, thus reducing the model size.","category":"page"},{"location":"30-concepts/","page":"Concepts","title":"Concepts","text":"Consider the following example to demonstrate the benefits of using a graph theory approach. In the classic connection approach, the nodes play a crucial role in modeling. For instance, every asset must be connected to a node with balance constraints. When a storage asset and a renewable asset are in a hybrid connection like the one described before, a connection point is needed to connect the hybrid configuration to the rest of the system. Therefore, to consider the hybrid configuration of a storage asset and a renewable asset, we must introduce a node (i.e., a connection point) between these assets and the external power grid (i.e., a balance point), as shown in the following figure:","category":"page"},{"location":"30-concepts/","page":"Concepts","title":"Concepts","text":"(Image: Classic connection)","category":"page"},{"location":"30-concepts/","page":"Concepts","title":"Concepts","text":"In this system, the phs storage asset charges and discharges from the connection point, while the wind turbine produces power that goes directly to the connection point. This connection point is connected to the external power grid through a transmission line that leads to a balance hub that connects to other assets. Essentially, the connection point acts as a balancing hub point for the assets in this hybrid configuration. Furthermore, these hybrid configurations impose an extra constraint to avoid storage charges from the power grid.","category":"page"},{"location":"30-concepts/","page":"Concepts","title":"Concepts","text":"Let's consider the modeling approach in TulipaEnergyModel.jl. As nodes are no longer needed to connect assets, we can connect them directly to each other, as shown in the figure below:","category":"page"},{"location":"30-concepts/","page":"Concepts","title":"Concepts","text":"(Image: Flexible connection)","category":"page"},{"location":"30-concepts/","page":"Concepts","title":"Concepts","text":"By implementing this approach, we can reduce the number of variables and constraints involved. For example, the balance constraint in the intermediate node and the extra constraint to avoid the storage charging from the power grid are no longer needed. Additionally, we can eliminate the variable determining the flow between the intermediate node and the power grid, because the flow from phs to balance can directly link to the external grid. 
The section comparison of different modeling approaches shows the quantification of these reductions.","category":"page"},{"location":"30-concepts/","page":"Concepts","title":"Concepts","text":"This example of a phs and a wind asset is useful for illustrating the advantages of this modeling approach and will be reused in the following sections. However, please keep in mind that there are other applications of hybrid configurations, such as battery-solar, hydro-solar, and electrolyzer-wind.","category":"page"},{"location":"30-concepts/#flex-time-res","page":"Concepts","title":"Flexible Time Resolution","text":"","category":"section"},{"location":"30-concepts/","page":"Concepts","title":"Concepts","text":"One of the core features of TulipaEnergyModel.jl is that it can handle different time resolutions on the assets and the flows. Typically, the time resolution in an energy model is hourly, like in the following figure where we have a 6-hour energy system:","category":"page"},{"location":"30-concepts/","page":"Concepts","title":"Concepts","text":"(Image: Hourly Time Resolution)","category":"page"},{"location":"30-concepts/","page":"Concepts","title":"Concepts","text":"Therefore, for this simple example, we can determine the number of constraints and variables in the optimization problem:","category":"page"},{"location":"30-concepts/","page":"Concepts","title":"Concepts","text":"Number of variables: 42 since we have six connections among assets (i.e., 6 flows x 6 hours = 36 variables) and one storage asset (i.e., 1 storage level x 6 h = 6 variables)\nNumber of constraints: 72, which are:\n24 from the maximum output limit of the assets that produce, convert, or discharge energy (i.e., H2, wind, ccgt, and phs) for each hour (i.e., 4 assets x 6 h = 24 constraints)\n6 from the maximum input limit of the storage or charging limit for the phs\n6 from the maximum storage level limit for the phs\n12 from the import and export limits for the transmission line between the balance hub and the demand\n24 from the energy balance on the consumer, hub, conversion, and storage assets (i.e., demand, balance, ccgt, and phs) for each hour (i.e., 4 assets x 6 h = 24 constraints)","category":"page"},{"location":"30-concepts/","page":"Concepts","title":"Concepts","text":"Depending on the input data and the level of detail you want to model, hourly resolution in all the variables might not be necessary. TulipaEnergyModel.jl can have different time resolutions for each asset and flow to simplify the optimization problem and approximate hourly representation. This feature is useful for large-scale energy systems that involve multiple sectors, as detailed granularity is not always necessary due to the unique temporal dynamics of each sector. For instance, we can use hourly resolution for the electricity sector and six-hour resolution for the hydrogen sector. We can couple multiple sectors, each with its own temporal resolution.","category":"page"},{"location":"30-concepts/","page":"Concepts","title":"Concepts","text":"Let's explore the flexibility of time resolution with a few examples.","category":"page"},{"location":"30-concepts/","page":"Concepts","title":"Concepts","text":"The following table shows the user input data for the definition of asset time resolution. 
Please note that the values presented in this example are just for illustrative purposes and do not represent a realistic case.","category":"page"},{"location":"30-concepts/","page":"Concepts","title":"Concepts","text":"using DataFrames # hide\nusing CSV # hide\ninput_asset_file = \"../../test/inputs/Variable Resolution/assets-rep-periods-partitions.csv\" # hide\nassets = CSV.read(input_asset_file, DataFrame, header = 2) # hide\nassets = assets[assets.asset .!= \"wind\", :] # hide","category":"page"},{"location":"30-concepts/","page":"Concepts","title":"Concepts","text":"The table shows that the H2 producer and the phs storage have a uniform definition of 6 hours. This definition means we want to represent the H2 production profile and the storage level of the phs every six hours.","category":"page"},{"location":"30-concepts/","page":"Concepts","title":"Concepts","text":"The same time resolution can be specified for the flows, for example (again, the values are for illustrative purposes and do not represent a realistic case):","category":"page"},{"location":"30-concepts/","page":"Concepts","title":"Concepts","text":"input_flow_file = \"../../test/inputs/Variable Resolution/flows-rep-periods-partitions.csv\" # hide\nflows_partitions = CSV.read(input_flow_file, DataFrame, header = 2) # hide","category":"page"},{"location":"30-concepts/","page":"Concepts","title":"Concepts","text":"The table shows a uniform definition for the flow from the hydrogen producer (H2) to the conversion asset (ccgt) of 6 hours, from the wind producer (wind) to the storage (phs) of 3 hours, and from the balance hub (balance) to the consumer (demand) of 3 hours, too. In addition, the flow from the wind producer (wind) to the balance hub (balance) is defined using the math specification of 1x2+1x4, meaning that there are two time blocks, one of two hours (i.e., 1:2) and another of four hours (i.e., 3:6). Finally, the flow from the storage (phs) to the balance hub (balance) is defined using the math specification of 1x4+1x2, meaning that there are two time blocks, one of four hours (i.e., 1:4) and another of two hours (i.e., 5:6).","category":"page"},{"location":"30-concepts/","page":"Concepts","title":"Concepts","text":"The following figure illustrates these definitions on the example system.","category":"page"},{"location":"30-concepts/","page":"Concepts","title":"Concepts","text":"(Image: Variable Time Resolution)","category":"page"},{"location":"30-concepts/","page":"Concepts","title":"Concepts","text":"So, let's recap:","category":"page"},{"location":"30-concepts/","page":"Concepts","title":"Concepts","text":"The hydrogen producer (H2) is in a 6-hour resolution represented by the range 1:6, meaning that the balance of the hydrogen produced is for every 6 hours.\nThe flow from the hydrogen producer to the ccgt power plant (H2,ccgt) is also in a 6-hour resolution 1:6.\nThe flow from the ccgt power plant to the balance hub (ccgt, balance) has hourly resolution [1,2,3,4,5,6].\nThe ccgt is a conversion plant that takes hydrogen to produce electricity. Since both sectors have different time resolutions, the energy balance in the conversion asset is defined in the lowest resolution connecting to the asset. 
In this case, the energy balance in the ccgt is defined every 6 hours, i.e., in the range 1:6.\nThe wind producer has an hourly profile of electricity production, so the resolution of the asset is hourly.\nThe wind producer output has two connections, one to the balance hub and the other to the pumped-hydro storage (phs) with different resolutions:\nThe flow from the wind producer to the phs storage (wind, phs) has a uniform resolution of two blocks from hours 1 to 3 (i.e., 1:3) and from hours 4 to 6 (i.e., 4:6).\nThe flow from the wind producer to the balance hub (wind, balance) has a variable resolution of two blocks, too, but from hours 1 to 2 (i.e., 1:2) and from hours 3 to 6 (i.e., 3:6).\nThe phs is in a 6-hour resolution represented by the range 1:6, meaning the storage balance is determined every 6 hours.\nThe flow from the phs to the balance (phs, balance) represents the discharge of the phs. This flow has a variable resolution of two blocks from hours 1 to 4 (i.e., 1:4) and from hours 5 to 6 (i.e., 5:6), which differs from the one defined for the charging flow from the wind asset.\nThe demand consumption has hourly input data with one connection to the balance hub:\nThe flow from the balance hub to the demand (balance, demand) has a uniform resolution of 3 hours; therefore, it has two blocks, one from hours 1 to 3 (i.e., 1:3) and the other from hours 4 to 6 (i.e., 4:6).\nThe balance hub integrates all the different assets with their different resolutions. The lowest resolution of all connections determines the balance equation for this asset. Therefore, the resulting resolution is into two blocks, one from hours 1 to 4 (i.e., 1:4) and the other from hours 5 to 6 (i.e., 5:6).","category":"page"},{"location":"30-concepts/","page":"Concepts","title":"Concepts","text":"Note: This example demonstrates that different time resolutions can be assigned to each asset and flow in the model. Additionally, the resolutions do not need to be uniform and can vary throughout the horizon.","category":"page"},{"location":"30-concepts/","page":"Concepts","title":"Concepts","text":"The complete input data for this example can be found here.","category":"page"},{"location":"30-concepts/","page":"Concepts","title":"Concepts","text":"Due to the flexible resolution, we must explicitly state how the constraints are constructed. For each constraint, three things need to be considered:","category":"page"},{"location":"30-concepts/","page":"Concepts","title":"Concepts","text":"Whether it is type power or type energy.\ntype power: highest resolution\ntype energy: lowest resolution (multiplied by durations)\nHow the resolution is determined (regardless of whether it is highest or lowest): the incoming flows, the outgoing flows, or a combination of both.\nHow the related parameters are treated. We use two methods of aggregation, sum or mean.","category":"page"},{"location":"30-concepts/","page":"Concepts","title":"Concepts","text":"Below is the table outlining the details for each type of constraint. 
Note min means highest resolution, and max means lowest resolution.","category":"page"},{"location":"30-concepts/","page":"Concepts","title":"Concepts","text":"Name Variables involved Profile involved Constraint type Resolution of the constraints Profile aggregation\nConsumer Balance inputs, outputs demand power min(incoming flows, outgoing flows) mean\nStorage Balance inputs, outputs, storage level inflows energy max(asset, min(incoming flows, outgoing flows)) sum\nHub Balance inputs, outputs - power min(incoming flows, outgoing flows) -\nConversion Balance inputs, outputs - energy max(incoming flows, outgoing flows) -\nProducers Capacity Constraints outputs availability power min(outgoing flows) mean\nStorage Capacity Constraints (outgoing) outputs - power min(outgoing flows) -\nConversion Capacity Constraints (outgoing) outputs - power min(outgoing flows) -\nConversion Capacity Constraints (incoming) inputs - power min(incoming flows) -\nStorage Capacity Constraints (incoming) inputs - power min(incoming flows) -\nTransport Capacity Constraints (upper bounds) flow availability power if it connects two hubs or demands then max(hub a,hub b), otherwise its own mean\nTransport Capacity Constraints (lower bounds) flow availability power if it connects two hubs or demands then max(hub a,hub b), otherwise its own mean\nMaximum Energy Limits (outgoing) outputs max_energy energy Determine by timeframe partitions. The default value is for each period in the timeframe sum\nMinimum Energy Limits (outgoing) outputs min_energy energy Determine by timeframe partitions. The default value is for each period in the timeframe sum\nMaximum Output Flow with Unit Commitment outputs, units_on availability power min(outgoing flows, units_on) mean\nMinimum Output Flow with Unit Commitment outputs, units_on availability power min(outgoing flows, units_on) mean\nMaximum Ramp Up Flow with Unit Commitment outputs, units_on availability power min(outgoing flows, units_on) mean\nMaximum Ramp Down Flow with Unit Commitment outputs, units_on availability power min(outgoing flows, units_on) mean\nMaximum Ramp Up Flow without Unit Commitment outputs availability power min(outgoing flows) mean\nMaximum Ramp Down Flow without Unit Commitment outputs availability power min(outgoing flows) mean","category":"page"},{"location":"30-concepts/","page":"Concepts","title":"Concepts","text":"For this basic example, we can describe the balance and capacity constraints in the model. For the sake of simplicity, we consider only the intra-temporal constraints, the representative period index is dropped from the equations, and there are no investment variables in the equations.","category":"page"},{"location":"30-concepts/#Energy-Balance-Constraints","page":"Concepts","title":"Energy Balance Constraints","text":"","category":"section"},{"location":"30-concepts/","page":"Concepts","title":"Concepts","text":"In the following sections, we lay out all the balance constraints of this example.","category":"page"},{"location":"30-concepts/#Storage-Balance","page":"Concepts","title":"Storage Balance","text":"","category":"section"},{"location":"30-concepts/","page":"Concepts","title":"Concepts","text":"As shown in the table, the resolution of the storage balance is energy, which is calculated by max(asset, min(incoming flows, outgoing flows)). The resolutions of the incoming and outgoing flows of the storage are 1:3, 4:6, 1:4, and 5:6, resulting in a minimum resolution of 2. The resolution of the storage is 6. 
Then, max(asset, min(incoming flows, outgoing flows)) becomes max(6, min(3, (4, 2))) which results in 6, and thus this balance is for every 6 hours. The charging and discharging flows are multiplied by their durations to account for the energy in the range 1:6.","category":"page"},{"location":"30-concepts/","page":"Concepts","title":"Concepts","text":"beginaligned\n textstorage_balance_textphs16 \n qquad v^textintra-storage_textphs16 = 3 cdot p^texteff_(textwindtextphs) cdot v^textflow_(textwindtextphs)13 + 3 cdot p^texteff_(textwindtextphs) cdot v^textflow_(textwindtextphs)46 \n qquad quad - frac4p^texteff_(textphstextbalance) cdot v^textflow_(textphstextbalance)14 - frac2p^texteff_(textphstextbalance) cdot v^textflow_(textphstextbalance)56 \nendaligned","category":"page"},{"location":"30-concepts/#Consumer-Balance","page":"Concepts","title":"Consumer Balance","text":"","category":"section"},{"location":"30-concepts/","page":"Concepts","title":"Concepts","text":"The flows coming from the balancing hub are defined every 3 hours. Therefore, the flows impose the lowest resolution and the demand is balanced every 3 hours. The input demand is aggregated as the mean of the hourly values in the input data. As with the storage balance, the flows are multiplied by their durations.","category":"page"},{"location":"30-concepts/","page":"Concepts","title":"Concepts","text":"beginaligned\n textconsumer_balance_textdemand13 \n qquad v^textflow_(textbalancetextdemand)13 = p^textpeak demand_textdemand cdot fracsum_b=1^3 p^textdemand profile_textdemandb3 \n textconsumer_balance_textdemand46 \n qquad v^textflow_(textbalancetextdemand)46 = p^textpeak demand_textdemand cdot fracsum_b=4^6 p^textdemand profile_textdemandb3 \nendaligned","category":"page"},{"location":"30-concepts/#Hub-Balance","page":"Concepts","title":"Hub Balance","text":"","category":"section"},{"location":"30-concepts/","page":"Concepts","title":"Concepts","text":"The hub balance is quite interesting because it integrates several flow resolutions. Remember that we didn't define any specific time resolution for this asset. Therefore, the highest resolution of all incoming and outgoing flows in the horizon implies that the hub balance must be imposed for all 6 blocks. 
The balance must account for each flow variable's duration in each block.","category":"page"},{"location":"30-concepts/","page":"Concepts","title":"Concepts","text":"beginaligned\n texthub_balance_textbalance11 \n qquad v^textflow_(textbalancetextdemand)13 = v^textflow_(textccgttextbalance) 11 + v^textflow_(textwindtextbalance)12 + v^textflow_(textphstextbalance)14 \n texthub_balance_textbalance22 \n qquad v^textflow_(textbalancetextdemand)13 = v^textflow_(textccgttextbalance) 22 + v^textflow_(textwindtextbalance)12 + v^textflow_(textphstextbalance)14 \n texthub_balance_textbalance33 \n qquad v^textflow_(textbalancetextdemand)13 = v^textflow_(textccgttextbalance) 33 + v^textflow_(textwindtextbalance)36 + v^textflow_(textphstextbalance)14 \n texthub_balance_textbalance44 \n qquad v^textflow_(textbalancetextdemand)46 = v^textflow_(textccgttextbalance) 44 + v^textflow_(textwindtextbalance)36 + v^textflow_(textphstextbalance)14\n texthub_balance_textbalance55 \n qquad v^textflow_(textbalancetextdemand)46 = v^textflow_(textccgttextbalance) 55 + v^textflow_(textwindtextbalance)36 + v^textflow_(textphstextbalance)56 \n texthub_balance_textbalance66 \n qquad v^textflow_(textbalancetextdemand)46 = v^textflow_(textccgttextbalance) 66 + v^textflow_(textwindtextbalance)36 + v^textflow_(textphstextbalance)56 \nendaligned","category":"page"},{"location":"30-concepts/#Conversion-Balance","page":"Concepts","title":"Conversion Balance","text":"","category":"section"},{"location":"30-concepts/","page":"Concepts","title":"Concepts","text":"The flows connected to the CCGT conversion unit have different resolutions, too. In this case, the hydrogen imposes the lowest resolution; therefore, the energy balance in this asset is also every 6 hours.","category":"page"},{"location":"30-concepts/","page":"Concepts","title":"Concepts","text":"beginaligned\n textconversion_balance_textccgt16 \n qquad 6 cdot p^texteff_(textH2textccgt) cdot v^textflow_(textH2textccgt)16 = frac1p^texteff_(textccgttextbalance) sum_b=1^6 v^textflow_(textccgttextbalance)b \nendaligned","category":"page"},{"location":"30-concepts/#Capacity-Constraints","page":"Concepts","title":"Capacity Constraints","text":"","category":"section"},{"location":"30-concepts/","page":"Concepts","title":"Concepts","text":"All capacity constraints are defined in the highest resolution to guarantee that the flows are below the limits of each asset capacity.","category":"page"},{"location":"30-concepts/#Storage-Capacity-Constraints","page":"Concepts","title":"Storage Capacity Constraints","text":"","category":"section"},{"location":"30-concepts/","page":"Concepts","title":"Concepts","text":"Since the storage unit only has one input and output, the capacity limit constraints are in the same resolution as the individual flows. 
Therefore, the constraints for the outputs of the storage (i.e., discharging capacity limit) are:","category":"page"},{"location":"30-concepts/","page":"Concepts","title":"Concepts","text":"beginaligned\n textmax_output_flows_limit_textphs14 \n qquad v^textflow_(textphstextbalance)14 leq p^textinit capacity_textphs \n textmax_output_flows_limit_textphs56 \n qquad v^textflow_(textphstextbalance)56 leq p^textinit capacity_textphs \nendaligned","category":"page"},{"location":"30-concepts/","page":"Concepts","title":"Concepts","text":"And the constraints for the inputs of the storage (i.e., charging capacity limit) are:","category":"page"},{"location":"30-concepts/","page":"Concepts","title":"Concepts","text":"beginaligned\n textmax_input_flows_limit_textphs13 \n qquad v^textflow_(textwindtextphs)13 leq p^textinit capacity_textphs \n textmax_input_flows_limit_textphs46 \n qquad v^textflow_(textwindtextphs)46 leq p^textinit capacity_textphs \nendaligned","category":"page"},{"location":"30-concepts/#Conversion-Capacity-Constraints","page":"Concepts","title":"Conversion Capacity Constraints","text":"","category":"section"},{"location":"30-concepts/","page":"Concepts","title":"Concepts","text":"Similarly, each outflow is limited to the ccgt capacity for the conversion unit.","category":"page"},{"location":"30-concepts/","page":"Concepts","title":"Concepts","text":"beginaligned\n textmax_output_flows_limit_textccgtb \n qquad v^textflow_(textccgttextbalance)b leq p^textinit capacity_textccgt quad forall b in 16 \nendaligned","category":"page"},{"location":"30-concepts/#Producer-Capacity-Constraints","page":"Concepts","title":"Producer Capacity Constraints","text":"","category":"section"},{"location":"30-concepts/","page":"Concepts","title":"Concepts","text":"The wind producer asset is interesting because the output flows are in different resolutions, i.e., 1:2, 3:6, 1:3, and 4:6. The highest resolution is 1:2, 3, and 4:6. Therefore, the constraints are as follows:","category":"page"},{"location":"30-concepts/","page":"Concepts","title":"Concepts","text":"beginaligned\n textmax_output_flows_limit_textwind12 \n qquad v^textflow_(textwindtextbalance)12 + v^textflow_(textwindtextphs)13 leq fracp^textinit capacity_textwind2 cdot sum_b=1^2 p^textavailability profile_textwindb \n textmax_output_flows_limit_textwind3 \n qquad v^textflow_(textwindtextbalance)36 + v^textflow_(textwindtextphs)13 leq p^textinit capacity_textwind cdot p^textavailability profile_textwind3 \n textmax_output_flows_limit_textwind46 \n qquad v^textflow_(textwindtextbalance)36 + v^textflow_(textwindtextphs)46 leq fracp^textinit capacity_textwind2 cdot sum_b=5^6 p^textavailability profile_textwindb \nendaligned","category":"page"},{"location":"30-concepts/","page":"Concepts","title":"Concepts","text":"Since the flow variables v^textflow_(textwind textbalance)12 and v^textflow_(textwind textbalance)13 represent power, the first constraint sets the upper bound of the power for both timestep 1 and 2, by assuming an average capacity across these two timesteps. 
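To make the averaging concrete, here is a small numerical sketch with made-up availability values (the actual profile comes from the input data): if the wind availability were 0.8 in hour 1 and 0.6 in hour 2, the right-hand side of the first constraint would be\ninit_capacity_wind = 50.0    # hypothetical installed capacity in MW\navailability = [0.8, 0.6]    # hypothetical availability profile for hours 1 and 2\nrhs = init_capacity_wind / 2 * sum(availability)  # 50 * (0.8 + 0.6) / 2 = 35.0 MW\nso the combined outgoing power in hours 1 and 2 is capped by the capacity times the average availability over that block. 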
The same applies to the other two constraints.","category":"page"},{"location":"30-concepts/","page":"Concepts","title":"Concepts","text":"The hydrogen (H2) producer capacity limit is straightforward, since both the asset and the flow definitions are in the same time resolution:","category":"page"},{"location":"30-concepts/","page":"Concepts","title":"Concepts","text":"beginaligned\n textmax_output_flows_limit_textH216 \n qquad v^textflow_(textH2textccgt)16 leq p^textinit capacity_textH2 cdot p^textavailability profile_textH216 \nendaligned","category":"page"},{"location":"30-concepts/#Transport-Capacity-Constraints","page":"Concepts","title":"Transport Capacity Constraints","text":"","category":"section"},{"location":"30-concepts/","page":"Concepts","title":"Concepts","text":"For the connection from the hub to the demand, there are associated transmission capacity constraints, which are in the same resolution as the flow:","category":"page"},{"location":"30-concepts/","page":"Concepts","title":"Concepts","text":"beginaligned\n textmax_transport_flows_limit_(textbalancetextdemand)13 \n qquad v^textflow_(textbalancetextdemand)13 leq p^textinit export capacity_(textbalancetextdemand) \n textmax_transport_flows_limit_(textbalancetextdemand)46 \n qquad v^textflow_(textbalancetextdemand)46 leq p^textinit export capacity_(textbalancetextdemand) \nendaligned","category":"page"},{"location":"30-concepts/","page":"Concepts","title":"Concepts","text":"beginaligned\n textmin_transport_flows_limit_(textbalancetextdemand)13 \n qquad v^textflow_(textbalancetextdemand)13 geq - p^textinit import capacity_(textbalancetextdemand) \n textmin_transport_flows_limit_(textbalancetextdemand)46 \n qquad v^textflow_(textbalancetextdemand)46 geq - p^textinit import capacity_(textbalancetextdemand) \nendaligned","category":"page"},{"location":"30-concepts/#Storage-Level-limits","page":"Concepts","title":"Storage Level limits","text":"","category":"section"},{"location":"30-concepts/","page":"Concepts","title":"Concepts","text":"Since the system has a storage asset, we must limit the maximum storage level. The phs time resolution is defined for every 6 hours, so we only have one constraint.","category":"page"},{"location":"30-concepts/","page":"Concepts","title":"Concepts","text":"beginaligned\n textmax_storage_level_limit_textphs16 \n qquad v^textintra-storage_textphs16 leq p^textinit storage capacity_textphs\nendaligned","category":"page"},{"location":"30-concepts/#comparison","page":"Concepts","title":"Comparison of Different Modeling Approaches","text":"","category":"section"},{"location":"30-concepts/","page":"Concepts","title":"Concepts","text":"This section quantifies the advantages of the flexible connection and flexible time resolution in the TulipaEnergyModel.jl modeling approach. 
So, let us consider three different approaches based on the same example:","category":"page"},{"location":"30-concepts/","page":"Concepts","title":"Concepts","text":"Classic approach with hourly resolution: This approach needs an extra asset, node, to create the hybrid operation of the phs and wind assets.\nFlexible connection with hourly resolution: This approach uses the flexible connection to represent the hybrid operation of the phs and wind assets.\nFlexible connection and flexible time: This approach uses both features, the flexible connection and the flexible time resolution.","category":"page"},{"location":"30-concepts/","page":"Concepts","title":"Concepts","text":"Note: The flexibility of TulipaEnergyModel.jl allows any of these three modeling approaches.","category":"page"},{"location":"30-concepts/","page":"Concepts","title":"Concepts","text":"The table below shows the constraints and variables for each approach over a 6-hour horizon. These results show the potential of flexible connections and time resolution for reducing the size of the optimization model.","category":"page"},{"location":"30-concepts/","page":"Concepts","title":"Concepts","text":"Modeling approach Nº Variables Nº Constraints Objective Function\nClassic approach with hourly resolution 48 84 28.4365\nFlexible connection with hourly resolution 42 72 28.4365\nFlexible connection and time resolution 16 29 28.4587","category":"page"},{"location":"30-concepts/","page":"Concepts","title":"Concepts","text":"By comparing the classic approach with the other methods, we can analyze their differences:","category":"page"},{"location":"30-concepts/","page":"Concepts","title":"Concepts","text":"The flexible connection with hourly resolution reduces 6 variables (12.5%) and 12 constraints (≈14%). Notice that we include the 6 extra constraints related to not allowing charging from the grid, although these constraints can also be modeled as bounds. Finally, the objective function value is the same, since we use an hourly time resolution in both cases.\nThe combination of features reduces 32 variables (≈67%) and 55 constraints (≈65%) with an approximation error of ≈0.073%.","category":"page"},{"location":"30-concepts/","page":"Concepts","title":"Concepts","text":"The level of reduction and approximation error will depend on the case study. Some cases that would benefit from this feature include:","category":"page"},{"location":"30-concepts/","page":"Concepts","title":"Concepts","text":"Coupling different energy sectors with various dynamics. For instance, methane, hydrogen, and heat sectors can be represented in energy models with lower resolutions (e.g., 4, 6, or 12h) than the electricity sector, usually modeled in higher resolutions (e.g., 1h, 30 min).\nHaving high resolutions for all assets in a large-scale case study may not be necessary. For example, if analyzing a European case study focusing on a specific country like The Netherlands, hourly details for distant countries (such as Portugal and Spain) may not be required. However, one would still want to consider their effect on The Netherlands without causing too much computational burden. In such cases, flexible time resolution can maintain hourly details in the focus country, while reducing the detail in distant countries by increasing their resolution (to two hours or more). 
This reduction allows a broader scope without over-burdening computation.","category":"page"},{"location":"30-concepts/#flex-time-res-uc","page":"Concepts","title":"Flexible Time Resolution in the Unit Commitment and Ramping Constraints","text":"","category":"section"},{"location":"30-concepts/","page":"Concepts","title":"Concepts","text":"In the previous section, we have seen how the flexible temporal resolution is handled for the model's flow capacity and balance constraints. Here, we show how flexible time resolution is applied when considering the model's unit commitment and ramping constraints. Let's consider the example in the folder test/inputs/UC-ramping to explain how all these constraints are created in TulipaEnergyModel.jl when having the flexible time resolution.","category":"page"},{"location":"30-concepts/","page":"Concepts","title":"Concepts","text":"(Image: unit-commitment-case-study)","category":"page"},{"location":"30-concepts/","page":"Concepts","title":"Concepts","text":"The example demonstrates various assets that supply demand. Each asset has different input data in the assets-data file, which activates different sets of constraints based on the method. For example, the gas producer has ramping constraints but not unit commitment constraints, while the ocgt conversion has unit commitment constraints but not ramping constraints. Lastly, the ccgt and smr assets both have unit commitment and ramping constraints.","category":"page"},{"location":"30-concepts/","page":"Concepts","title":"Concepts","text":"using DataFrames # hide\nusing CSV # hide\ninput_dir = \"../../test/inputs/UC-ramping\" # hide\nassets_data = CSV.read(joinpath(input_dir, \"assets-data.csv\"), DataFrame, header = 2) # hide\ngraph_assets = CSV.read(joinpath(input_dir, \"graph-assets-data.csv\"), DataFrame, header = 2) # hide\nassets = leftjoin(graph_assets, assets_data, on=:name) # hide\nfiltered_assets = assets[assets.type .== \"producer\" .|| assets.type .== \"conversion\", [\"name\", \"type\", \"capacity\", \"initial_units\", \"unit_commitment\", \"ramping\"]] # hide","category":"page"},{"location":"30-concepts/","page":"Concepts","title":"Concepts","text":"The assets-rep-periods-partitions file defines the time resolution for the assets in the partition column. For instance, here we can see that the time resolutions are 3h for the ccgt and 6h for the smr. These values mean that the unit commitment variables (e.g., units_on) in the model have three and six hours resolution, respectively.","category":"page"},{"location":"30-concepts/","page":"Concepts","title":"Concepts","text":"assets_partitions_data = CSV.read(joinpath(input_dir, \"assets-rep-periods-partitions.csv\"), DataFrame, header = 2) # hide\nfiltered_assets_partitions = assets_partitions_data[!, [\"asset\", \"specification\", \"partition\"]] # hide","category":"page"},{"location":"30-concepts/","page":"Concepts","title":"Concepts","text":"The flows-rep-periods-partitions file defines the time resolution for the flows. 
In this example, we have that the flows from the gas asset to the ccgt and from the ccgt asset to the demand are in a 2h resolution.","category":"page"},{"location":"30-concepts/","page":"Concepts","title":"Concepts","text":"flows_partitions_data = CSV.read(joinpath(input_dir, \"flows-rep-periods-partitions.csv\"), DataFrame, header = 2) # hide\nfiltered_flows_partitions = flows_partitions_data[!, [\"from_asset\", \"to_asset\", \"specification\", \"partition\"]] # hide","category":"page"},{"location":"30-concepts/","page":"Concepts","title":"Concepts","text":"The default value for the assets and flows partitions is 1 hour. This means that assets and flows not in the previous tables are considered on an hourly basis in the model.","category":"page"},{"location":"30-concepts/","page":"Concepts","title":"Concepts","text":"Important: It's not recommended to set up the input data partitions in such a way that the flow variables have a lower resolution than the units_on. This is because doing so will result in constraints that fix the value of the units_on in the timestep block where the flow is defined, leading to unnecessary extra variable constraints in the model. For instance, if the units_on are hourly and the flow is every two hours, then a non-zero flow in the timestep block 1:2 will require the units_on in timestep blocks 1:1 and 2:2 to be the same and equal to one. Therefore, the time resolution of the units_on should always be lower than or equal to the resolution of the flow in the asset.","category":"page"},{"location":"30-concepts/","page":"Concepts","title":"Concepts","text":"Remember that the section mathematical formulation shows the unit commitment and ramping constraints in the model considering an uniform time resolution as a reference.","category":"page"},{"location":"30-concepts/","page":"Concepts","title":"Concepts","text":"With this information, we can analyze the constraints in each of the following cases:","category":"page"},{"location":"30-concepts/","page":"Concepts","title":"Concepts","text":"Ramping in assets with multiple outputs\nUnit commitment in assets with constant time resolution\nUnit commitment and ramping in assets with flexible time resolution that are multiples of each other\nUnit commitment and ramping in assets with flexible time resolution that are not multiples of each other","category":"page"},{"location":"30-concepts/","page":"Concepts","title":"Concepts","text":"We will analyze each case in the following sections, considering the constraints resolution defined in the summary table in the flexible time resolution section. For the sake of simplicity, we only show the asset a and timestep block b_k index and the constraints as they appear in the .lp file of the example, i.e., with all the coefficients and RHS values calculated from the input parameters. The .lp file can be exported using the keyword argument write_lp_file = true in the run_scenario function.","category":"page"},{"location":"30-concepts/#Ramping-in-Assets-with-Multiple-Outputs","page":"Concepts","title":"Ramping in Assets with Multiple Outputs","text":"","category":"section"},{"location":"30-concepts/","page":"Concepts","title":"Concepts","text":"In the case of the gas asset, there are two output flows above the minimum operating point with different time resolutions. The ramping constraints follow the highest time resolution of the two flows at each timestep block. 
Since the highest resolution is always defined by the hourly output of the flow(gas,ocgt), the ramping constraints are also hourly. The figure below illustrates this situation.","category":"page"},{"location":"30-concepts/","page":"Concepts","title":"Concepts","text":"(Image: unit-commitment-gas-asset)","category":"page"},{"location":"30-concepts/","page":"Concepts","title":"Concepts","text":"Let's now take a look at the resulting constraints in the model.","category":"page"},{"location":"30-concepts/","page":"Concepts","title":"Concepts","text":"max_ramp_up(gas): The first constraint starts in the second timestep block and takes the difference between the output flows above the minimum operating point from b_k = 2:2 and b_k = 1:1. Note that since the flow(gas,ccgt) is the same in both timestep blocks, the only variables that appear in this first constraint are the ones associated with the flow(gas,ocgt). The second constraint takes the difference between the output flows from b_k = 3:3 and b_k = 2:2; in this case, there is a change in the flow(gas, ocgt); therefore, the constraint considers both changes in the output flows of the asset. In addition, the ramping parameter is multiplied by the flow duration with the highest resolution, i.e., one hour, which is the duration of the flow(gas,ocgt).","category":"page"},{"location":"30-concepts/","page":"Concepts","title":"Concepts","text":"b_k =\n2:2: -1 flow(gas,ocgt,1:1) + 1 flow(gas,ocgt,2:2) <= 1494\nb_k =\n3:3: -1 flow(gas,ocgt,2:2) + 1 flow(gas,ocgt,3:3) - 1 flow(gas,ccgt,1:2) + 1 flow(gas,ccgt,3:4) <= 1494\nb_k =\n4:4: -1 flow(gas,ocgt,3:3) + 1 flow(gas,ocgt,4:4) <= 1494\nb_k =\n5:5: -1 flow(gas,ocgt,4:4) + 1 flow(gas,ocgt,5:5) - 1 flow(gas,ccgt,3:4) + 1 flow(gas,ccgt,5:6) <= 1494","category":"page"},{"location":"30-concepts/","page":"Concepts","title":"Concepts","text":"For the maximum ramp down we have similar constraints as the ones shown above.","category":"page"},{"location":"30-concepts/#Unit-Commitment-in-Assets-with-Constant-Time-Resolution","page":"Concepts","title":"Unit Commitment in Assets with Constant Time Resolution","text":"","category":"section"},{"location":"30-concepts/","page":"Concepts","title":"Concepts","text":"For the ocgt asset, both the flow(ocgt,demand) and the asset time resolution, which defines the resolution of the units_on variable, use the default setting of one hour. As a result, the unit commitment constraints are also set on an hourly basis. This is the conventional method for representing these types of constraints in power system models. The figure below illustrates this situation.","category":"page"},{"location":"30-concepts/","page":"Concepts","title":"Concepts","text":"(Image: unit-commitment-ocgt-asset)","category":"page"},{"location":"30-concepts/","page":"Concepts","title":"Concepts","text":"Let's now take a look at the resulting constraints in the model. 
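If you want to reproduce these constraint listings yourself, a minimal sketch (assuming the UC-ramping example folder and the same workflow shown elsewhere in these docs; exact keyword usage may differ between versions) is:\nusing DuckDB, TulipaIO, TulipaEnergyModel\n\ninput_dir = \"test/inputs/UC-ramping\"  # should be the path to the UC-ramping example\nconnection = DBInterface.connect(DuckDB.DB)\nread_csv_folder(connection, input_dir; schemas = TulipaEnergyModel.schema_per_table_name)\nenergy_problem = run_scenario(connection; write_lp_file = true)  # exports the .lp file with all constraints\nThe listings below follow that .lp notation. 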
Because everything is based on an hourly timestep, the equations are simple and easy to understand.","category":"page"},{"location":"30-concepts/","page":"Concepts","title":"Concepts","text":"limit_units_on(ocgt): The upper bound of the units_on is the investment variable of the asset.","category":"page"},{"location":"30-concepts/","page":"Concepts","title":"Concepts","text":"b_k =\n1:1: -1 assets_investment(ocgt) + 1 units_on(ocgt,1:1) <= 0\nb_k =\n2:2: -1 assets_investment(ocgt) + 1 units_on(ocgt,2:2) <= 0\nb_k =\n3:3: -1 assets_investment(ocgt) + 1 units_on(ocgt,3:3) <= 0","category":"page"},{"location":"30-concepts/","page":"Concepts","title":"Concepts","text":"min_output_flow(ocgt): The minimum operating point is 10 MW, so the asset must produce an output flow greater than this value when the unit is online.","category":"page"},{"location":"30-concepts/","page":"Concepts","title":"Concepts","text":"b_k =\n1:1: 1 flow(ocgt,demand,1:1) - 10 units_on(ocgt,1:1) >= 0\nb_k =\n2:2: 1 flow(ocgt,demand,2:2) - 10 units_on(ocgt,2:2) >= 0\nb_k =\n3:3: 1 flow(ocgt,demand,3:3) - 10 units_on(ocgt,3:3) >= 0","category":"page"},{"location":"30-concepts/","page":"Concepts","title":"Concepts","text":"max_output_flow(ocgt): The capacity is 100 MW, so the asset must produce an output flow lower than this value when the unit is online.","category":"page"},{"location":"30-concepts/","page":"Concepts","title":"Concepts","text":"b_k =\n1:1: 1 flow(ocgt,demand,1:1) - 100 units_on(ocgt,1:1) <= 0\nb_k =\n2:2: 1 flow(ocgt,demand,2:2) - 100 units_on(ocgt,2:2) <= 0\nb_k =\n3:3: 1 flow(ocgt,demand,3:3) - 100 units_on(ocgt,3:3) <= 0","category":"page"},{"location":"30-concepts/","page":"Concepts","title":"Concepts","text":"For the maximum ramp down we have similar constraints as the ones shown above.","category":"page"},{"location":"30-concepts/#Unit-Commitment-and-Ramping-in-Assets-with-Flexible-Time-Resolution-that-are-Multiples-of-Each-Other","page":"Concepts","title":"Unit Commitment and Ramping in Assets with Flexible Time Resolution that are Multiples of Each Other","text":"","category":"section"},{"location":"30-concepts/","page":"Concepts","title":"Concepts","text":"In this case, the smr asset has an output flow(smr,demand) on an hourly basis, but its time resolution (i.e., partition) is every six hours. Therefore, the units_on variables are defined in timestep blocks of six hours. As a result, the unit commitment and ramping constraints are set on the highest resolution of the two, i.e., the hourly resolution of the flow(smr,demand). The figure below illustrates this situation.","category":"page"},{"location":"30-concepts/","page":"Concepts","title":"Concepts","text":"(Image: unit-commitment-smr-asset)","category":"page"},{"location":"30-concepts/","page":"Concepts","title":"Concepts","text":"Let's now take a look at the resulting constraints in the model.","category":"page"},{"location":"30-concepts/","page":"Concepts","title":"Concepts","text":"limit_units_on(smr): The units_on variables are defined every 6h; therefore, the upper bound of the variable is also every 6h. 
In addition, the smr is not investable and has one existing unit that limits the commitment variables.","category":"page"},{"location":"30-concepts/","page":"Concepts","title":"Concepts","text":"b_k =\n1:6: 1 units_on(smr,1:6) <= 1\nb_k =\n7:12: 1 units_on(smr,7:12) <= 1\nb_k =\n13:18: 1 units_on(smr,13:18) <= 1\nb_k =\n19:24: 1 units_on(smr,19:24) <= 1","category":"page"},{"location":"30-concepts/","page":"Concepts","title":"Concepts","text":"min_output_flow(smr): The minimum operating point is 150 MW, so the asset must produce an output flow greater than this value when the unit is online. Since the units_on variables are defined every 6h, the first six constraints show that the minimum operating point is multiplied by the variable in block 1:6. The next six constraints are multiplied by the units_on in block 7:12, and so on.","category":"page"},{"location":"30-concepts/","page":"Concepts","title":"Concepts","text":"b_k =\n1:1: 1 flow(smr,demand,1:1) - 150 units_on(smr,1:6) >= 0\nb_k =\n2:2: 1 flow(smr,demand,2:2) - 150 units_on(smr,1:6) >= 0\nb_k =\n3:3: 1 flow(smr,demand,3:3) - 150 units_on(smr,1:6) >= 0\nb_k =\n4:4: 1 flow(smr,demand,4:4) - 150 units_on(smr,1:6) >= 0\nb_k =\n5:5: 1 flow(smr,demand,5:5) - 150 units_on(smr,1:6) >= 0\nb_k =\n6:6: 1 flow(smr,demand,6:6) - 150 units_on(smr,1:6) >= 0\nb_k =\n7:7: 1 flow(smr,demand,7:7) - 150 units_on(smr,7:12) >= 0\nb_k =\n8:8: 1 flow(smr,demand,8:8) - 150 units_on(smr,7:12) >= 0","category":"page"},{"location":"30-concepts/","page":"Concepts","title":"Concepts","text":"max_output_flow(smr): The capacity is 200 MW, so the asset must produce an output flow lower than this value when the unit is online. Similiar to the minimum operating point constraint, here the units_on for the timestep block 1:6 are used in the first six constraints, the units_on for the timestep block 7:12 are used in the next six constraints, and so on.","category":"page"},{"location":"30-concepts/","page":"Concepts","title":"Concepts","text":"b_k =\n1:1: 1 flow(smr,demand,1:1) - 200 units_on(smr,1:6) <= 0\nb_k =\n2:2: 1 flow(smr,demand,2:2) - 200 units_on(smr,1:6) <= 0\nb_k =\n3:3: 1 flow(smr,demand,3:3) - 200 units_on(smr,1:6) <= 0\nb_k =\n4:4: 1 flow(smr,demand,4:4) - 200 units_on(smr,1:6) <= 0\nb_k =\n5:5: 1 flow(smr,demand,5:5) - 200 units_on(smr,1:6) <= 0\nb_k =\n6:6: 1 flow(smr,demand,6:6) - 200 units_on(smr,1:6) <= 0\nb_k =\n7:7: 1 flow(smr,demand,7:7) - 200 units_on(smr,7:12) <= 0\nb_k =\n8:8: 1 flow(smr,demand,8:8) - 200 units_on(smr,7:12) <= 0","category":"page"},{"location":"30-concepts/","page":"Concepts","title":"Concepts","text":"max_ramp_up(smr): The ramping capacity is 20MW, so the change in the output flow above the minimum operating point needs to be below that value when the asset is online. For constraints from 2:2 to 6:6, the units_on variable is the same, i.e., units_on at timestep block 1:6. The ramping constraint at timestep block 7:7 shows the units_on from the timestep block 1:6 and 7:12 since the change in the flow includes both variables. 
Note that if the units_on variable is zero in the timestep block 1:6, then the ramping constraint at timestep block 7:7 allows the asset to go from zero flow to the minimum operating point plus the ramping capacity (i.e., 150 + 20 = 170).","category":"page"},{"location":"30-concepts/","page":"Concepts","title":"Concepts","text":"b_k =\n2:2: -1 flow(smr,demand,1:1) + 1 flow(smr,demand,2:2) - 20 units_on(smr,1:6) <= 0\nb_k =\n3:3: -1 flow(smr,demand,2:2) + 1 flow(smr,demand,3:3) - 20 units_on(smr,1:6) <= 0\nb_k =\n4:4: -1 flow(smr,demand,3:3) + 1 flow(smr,demand,4:4) - 20 units_on(smr,1:6) <= 0\nb_k =\n5:5: -1 flow(smr,demand,4:4) + 1 flow(smr,demand,5:5) - 20 units_on(smr,1:6) <= 0\nb_k =\n6:6: -1 flow(smr,demand,5:5) + 1 flow(smr,demand,6:6) - 20 units_on(smr,1:6) <= 0\nb_k =\n7:7: -1 flow(smr,demand,6:6) + 1 flow(smr,demand,7:7) + 150 units_on(smr,1:6) - 170 units_on(smr,7:12) <= 0\nb_k =\n8:8: -1 flow(smr,demand,7:7) + 1 flow(smr,demand,8:8) - 20 units_on(smr,7:12) <= 0\nb_k =\n9:9: -1 flow(smr,demand,8:8) + 1 flow(smr,demand,9:9) - 20 units_on(smr,7:12) <= 0","category":"page"},{"location":"30-concepts/","page":"Concepts","title":"Concepts","text":"For the maximum ramp down we have similiar constraints as the ones shown above.","category":"page"},{"location":"30-concepts/#Unit-Commitment-and-Ramping-in-Assets-with-Flexible-Time-Resolution-that-are-NOT-Multiples-of-Each-Other","page":"Concepts","title":"Unit Commitment and Ramping in Assets with Flexible Time Resolution that are NOT Multiples of Each Other","text":"","category":"section"},{"location":"30-concepts/","page":"Concepts","title":"Concepts","text":"In this case, the ccgt asset has an output flow(ccgt,demand) on a two-hour basis, but its time resolution (i.e., partition) is every three hours. Therefore, the unist_on variables are defined in a timestep block every three hours. This setup means that the flow and unit commitment variables are not multiples of each other. As a result, the unit commitment and ramping constraints are defined on the highest resolution, meaning that we also need the intersections of both resolutions. The figure below illustrates this situation.","category":"page"},{"location":"30-concepts/","page":"Concepts","title":"Concepts","text":"(Image: unit-commitment-ccgt-asset)","category":"page"},{"location":"30-concepts/","page":"Concepts","title":"Concepts","text":"Let's now take a look at the resulting constraints in the model.","category":"page"},{"location":"30-concepts/","page":"Concepts","title":"Concepts","text":"limit_units_on(ccgt): The units_on variables are defined every 3h; therefore, the upper bound of the variable is also every 3h. In addition, the ccgt is investable and has one existing unit that limits the commitment variables.","category":"page"},{"location":"30-concepts/","page":"Concepts","title":"Concepts","text":"b_k =\n1:3: -1 assets_investment(ccgt) + 1 units_on(ccgt,1:3) <= 1\nb_k =\n4:6: -1 assets_investment(ccgt) + 1 units_on(ccgt,4:6) <= 1\nb_k =\n7:9: -1 assets_investment(ccgt) + 1 units_on(ccgt,7:9) <= 1","category":"page"},{"location":"30-concepts/","page":"Concepts","title":"Concepts","text":"min_output_flow(ccgt): The minimum operating point is 50 MW, so the asset must produce an output flow greater than this value when the unit is online. Here, we can see the impact of the constraints of having different temporal resolutions that are not multiples of each other. 
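One way to picture these intersections (a hand-written sketch of the idea, not the package's internal implementation) is to intersect the timestep blocks of the two resolutions directly in Julia:\nflow_blocks = [1:2, 3:4, 5:6]   # blocks of flow(ccgt,demand), 2h resolution\nunits_on_blocks = [1:3, 4:6]    # blocks of units_on(ccgt), 3h resolution\n[intersect(f, u) for f in flow_blocks for u in units_on_blocks if !isempty(intersect(f, u))]  # returns [1:2, 3:3, 4:4, 5:6]\nwhich reproduces the blocks used in the constraints listed next. 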
For instance, the constraint is defined for all the intersections, so 1:2, 3:3, 4:4, 5:6, etc., to ensure that the minimum operating point is correctly defined considering all the timestep blocks of the flow and the units_on variables.","category":"page"},{"location":"30-concepts/","page":"Concepts","title":"Concepts","text":"b_k =\n1:2: 1 flow(ccgt,demand,1:2) - 50 units_on(ccgt,1:3) >= 0\nb_k =\n3:3: 1 flow(ccgt,demand,3:4) - 50 units_on(ccgt,1:3) >= 0\nb_k =\n4:4: 1 flow(ccgt,demand,3:4) - 50 units_on(ccgt,4:6) >= 0\nb_k =\n5:6: 1 flow(ccgt,demand,5:6) - 50 units_on(ccgt,4:6) >= 0","category":"page"},{"location":"30-concepts/","page":"Concepts","title":"Concepts","text":"max_output_flows(ccgt): The capacity is 200 MW, so the asset must produce an output flow lower than this value when the unit is online. The situation is similar as in the minimum operating point constraint, we have constraints for all the intersections of the resolutions to ensure the correct definition of the maximum capacity.","category":"page"},{"location":"30-concepts/","page":"Concepts","title":"Concepts","text":"b_k =\n1:2: 1 flow(ccgt,demand,1:2) - 200 units_on(ccgt,1:3) <= 0\nb_k =\n3:3: 1 flow(ccgt,demand,3:4) - 200 units_on(ccgt,1:3) <= 0\nb_k =\n4:4: 1 flow(ccgt,demand,3:4) - 200 units_on(ccgt,4:6) <= 0\nb_k =\n5:6: 1 flow(ccgt,demand,5:6) - 200 units_on(ccgt,4:6) <= 0","category":"page"},{"location":"30-concepts/","page":"Concepts","title":"Concepts","text":"max_ramp_up(ccgt): The ramping capacity is 120MW, so the change in the output flow above the minimum operating point needs to be below that value when the asset is online. When the time resolutions of the flow and units_on are not multiples of each other, we encounter some counterintuitive constraints. For example, consider the constraint at timestep block 4:4. This constraint only involves units_on variables because the flow above the minimum operating point at timestep block 4:4 differs from the previous timestep block 3:3 only in terms of the units_on variables. As a result, the ramping-up constraint establishes a relationship between the units_on variable at 1:3 and 4:6. This means that if the unit is on at timestep 1:3, then it must also be on at timestep 4:6. However, this is redundant because there is already a flow variable defined for 3:4 that ensures this, thanks to the minimum operating point and maximum capacity constraints. Therefore, although this constraint is not incorrect, it is unnecessary due to the flexible time resolutions that are not multiples of each other.","category":"page"},{"location":"30-concepts/","page":"Concepts","title":"Concepts","text":"b_k =\n3:3: -1 flow(ccgt,demand,1:2) + 1 flow(ccgt,demand,3:4) - 120 units_on(ccgt,1:3) <= 0\nb_k =\n4:4: 50 units_on(ccgt,1:3) - 170 units_on(ccgt,4:6) <= 0\nb_k =\n5:6: -1 flow(ccgt,demand,3:4) + 1 flow(ccgt,demand,5:6) - 120 units_on(ccgt,4:6) <= 0\nb_k =\n7:8: -1 flow(ccgt,demand,5:6) + 1 flow(ccgt,demand,7:8) + 50 units_on(ccgt,4:6) - 170 units_on(ccgt,7:9) <= 0\nb_k =\n9:9: -1 flow(ccgt,demand,7:8) + 1 flow(ccgt,demand,9:10) - 120 units_on(ccgt,7:9) <= 0","category":"page"},{"location":"30-concepts/","page":"Concepts","title":"Concepts","text":"For the maximum ramp down we have similiar constraints as the ones shown above.","category":"page"},{"location":"30-concepts/","page":"Concepts","title":"Concepts","text":"Important: The time resolutions of the unit commitment constraints do not have to be multiples of each other. 
However, using multiples of each other can help avoid extra redundant constraints.","category":"page"},{"location":"30-concepts/#Unit-Commitment-and-Ramping-Case-Study-Results","page":"Concepts","title":"Unit Commitment and Ramping Case Study Results","text":"","category":"section"},{"location":"30-concepts/","page":"Concepts","title":"Concepts","text":"Let's now optimize the model for the data in the example test/inputs/UC-ramping and explore the results. The first result is the unit commitment of the assets with this method, i.e., ocgt, ccgt, and smr. One of the characteristics of having flexible time resolution on the unit commitment variables (e.g., units_on) is that it allows us to consider implicitly minimum up/down times in a simplified manner. For instance, the ccgt asset can only increase the number of units every 3h, and the smr can only start up again after 6h.","category":"page"},{"location":"30-concepts/","page":"Concepts","title":"Concepts","text":"(Image: unit-commitment-results)","category":"page"},{"location":"30-concepts/","page":"Concepts","title":"Concepts","text":"Let's now examine the hourly production balance in the results. We can see that the assets with a unit commitment method only produce electricity (e.g., flow to the demand asset) when they are on (units_on >= 1). In addition, the smr has a slow flow change due to its ramping limits.","category":"page"},{"location":"30-concepts/","page":"Concepts","title":"Concepts","text":"(Image: unit-commitment-balance)","category":"page"},{"location":"30-concepts/","page":"Concepts","title":"Concepts","text":"In this example, we demonstrated the use of unit commitment and ramping constraints with flexible time resolution in the model, and we illustrated what the results look like. The flexible time resolution applied to the unit commitment variables aids in minimizing the number of binary/integer variables in the model and simplifies the representation of the assets' minimum up and down times.","category":"page"},{"location":"30-concepts/#storage-modeling","page":"Concepts","title":"Storage Modeling","text":"","category":"section"},{"location":"30-concepts/","page":"Concepts","title":"Concepts","text":"Energy storage systems can be broadly classified into two categories: seasonal and non-seasonal storage. Seasonal storage refers to assets that can store energy for more extended periods, usually spanning months or even years. Examples of such assets include hydro reservoirs, hydrogen storage in salt caverns, or empty gas fields. 
On the other hand, non-seasonal storage refers to assets that can store energy only for a few hours, such as batteries or small pumped-hydro storage units.","category":"page"},{"location":"30-concepts/","page":"Concepts","title":"Concepts","text":"Both storage categories can be represented in TulipaEnergyModel.jl using the representative periods approach:","category":"page"},{"location":"30-concepts/","page":"Concepts","title":"Concepts","text":"Non-seasonal storage: When the storage capacity of an asset is lower than the total length of representative periods, like in the case of a battery with a storage capacity of 4 hours and representative periods of 24-hour timesteps, intra-temporal constraints should be applied.\nSeasonal storage: When the storage capacity of an asset is greater than the total length of representative periods, like in the case of a hydroplant with a storage capacity of a month and representative periods of 24-hour timesteps, inter-temporal constraints should be applied.","category":"page"},{"location":"30-concepts/","page":"Concepts","title":"Concepts","text":"The equations of intra- and inter-temporal constraints for energy storage are available in the mathematical formulation. An example is shown in the following section to explain these concepts. In addition, the section seasonal and non-seasonal storage setup shows how to set the parameters in the model to consider each type in the storage assets.","category":"page"},{"location":"30-concepts/#Example-to-Model-Seasonal-and-Non-seasonal-Storage","page":"Concepts","title":"Example to Model Seasonal and Non-seasonal Storage","text":"","category":"section"},{"location":"30-concepts/","page":"Concepts","title":"Concepts","text":"We use the example in the folder test/inputs/Storage to explain how all these concepts come together in TulipaEnergyModel.jl.","category":"page"},{"location":"30-concepts/","page":"Concepts","title":"Concepts","text":"Let's first look at this feature's most relevant input data, starting with the assets-data file. Here, we show only the storage assets and the appropriate columns for this example, but all the input data can be found in the previously mentioned folder.","category":"page"},{"location":"30-concepts/","page":"Concepts","title":"Concepts","text":"using DataFrames # hide\nusing CSV # hide\ninput_dir = \"../../test/inputs/Storage\" # hide\nassets_data = CSV.read(joinpath(input_dir, \"assets-data.csv\"), DataFrame, header = 2) # hide\ngraph_assets = CSV.read(joinpath(input_dir, \"graph-assets-data.csv\"), DataFrame, header = 2) # hide\nassets = leftjoin(graph_assets, assets_data, on=:name) # hide\nfiltered_assets = assets[assets.type .== \"storage\", [\"name\", \"type\", \"capacity\", \"capacity_storage_energy\", \"initial_storage_units\", \"initial_storage_level\", \"is_seasonal\"]] # hide","category":"page"},{"location":"30-concepts/","page":"Concepts","title":"Concepts","text":"The is_seasonal parameter determines whether or not the storage asset uses the inter-temporal constraints. The phs is the only storage asset with this type of constraint and inter-storage level variable (i.e., v^textinter-storage_textphsp), and has 100MW capacity and 4800MWh of storage capacity (i.e., 48h discharge duration). 
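As a quick check of that figure (assuming the discharge duration is simply the storage energy capacity divided by the power capacity):\n4800 / 100  # MWh / MW = 48 hours\nwhich is twice the 24-hour length of each representative period, hence the need for the inter-temporal constraints. 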
The battery will only consider intra-temporal constraints with intra-storage level variables (i.e., v^textintra-storage_textbatterykb_k), and has 10MW capacity with 20MWh of storage capacity (i.e., 2h discharge duration).","category":"page"},{"location":"30-concepts/","page":"Concepts","title":"Concepts","text":"The rep-periods-data file has information on the representative periods in the example. We have three representative periods, each with 24 timesteps and hourly resolution, representing a day. The figure below shows the availability profile of the renewable energy sources in the example.","category":"page"},{"location":"30-concepts/","page":"Concepts","title":"Concepts","text":"rp_file = \"../../test/inputs/Storage/rep-periods-data.csv\" # hide\nrp = CSV.read(rp_file, DataFrame, header = 2) # hide","category":"page"},{"location":"30-concepts/","page":"Concepts","title":"Concepts","text":"(Image: availability-profiles)","category":"page"},{"location":"30-concepts/","page":"Concepts","title":"Concepts","text":"The rep-periods-mapping relates each representative period with the periods in the timeframe. We have seven periods in this case, meaning the timeframe is a week. Each value in the file indicates the weight of each representative period in the timeframe period. Notice that each period is composed of a linear combination of the representative periods. For more details on obtaining the representative periods and the weights, please look at TulipaClustering.jl. For the sake of readability, we show here the information in the file in tabular form:","category":"page"},{"location":"30-concepts/","page":"Concepts","title":"Concepts","text":"map_file = \"../../test/inputs/Storage/rep-periods-mapping.csv\" # hide\nmap = CSV.read(map_file, DataFrame, header = 2) # hide\nunstacked_map = unstack(map, :period, :rep_period, :weight) # hide\nrename!(unstacked_map, [\"period\", \"k=1\", \"k=2\", \"k=3\"]) # hide\nunstacked_map[!,[\"k=1\", \"k=2\", \"k=3\"]] = convert.(Float64, unstacked_map[!,[\"k=1\", \"k=2\", \"k=3\"]]) # hide\nunstacked_map # hide","category":"page"},{"location":"30-concepts/","page":"Concepts","title":"Concepts","text":"The file assets-timeframe-partitions has the information on how often we want to evaluate the inter-temporal constraints that combine the information of the representative periods. In this example, we define a uniform distribution of one period, meaning that we will check the inter-storage level every day of the week timeframe.","category":"page"},{"location":"30-concepts/","page":"Concepts","title":"Concepts","text":"phs_partitions_file = \"../../test/inputs/Storage/assets-timeframe-partitions.csv\" # hide\nphs_partitions = CSV.read(phs_partitions_file, DataFrame, header = 2) # hide","category":"page"},{"location":"30-concepts/","page":"Concepts","title":"Concepts","text":"Note: For the sake of simplicity, we show how using three representative days can recover part of the chronological information of one week. 
The same method can be applied to more representative periods to analyze the seasonality across a year or longer timeframe.","category":"page"},{"location":"30-concepts/","page":"Concepts","title":"Concepts","text":"Now let's solve the example and explore the results:","category":"page"},{"location":"30-concepts/","page":"Concepts","title":"Concepts","text":"using DuckDB, TulipaIO, TulipaEnergyModel\n\ninput_dir = \"../../test/inputs/Storage\" # hide\n# input_dir should be the path to the Storage example\nconnection = DBInterface.connect(DuckDB.DB)\nread_csv_folder(connection, input_dir; schemas = TulipaEnergyModel.schema_per_table_name)\nenergy_problem = run_scenario(connection)","category":"page"},{"location":"30-concepts/","page":"Concepts","title":"Concepts","text":"Since the battery is not seasonal, it only has results for the intra-storage level of each representative period, as shown in the following figure:","category":"page"},{"location":"30-concepts/","page":"Concepts","title":"Concepts","text":"(Image: Battery-intra-storage-level)","category":"page"},{"location":"30-concepts/","page":"Concepts","title":"Concepts","text":"Since the phs is defined as seasonal, it has results for only the inter-storage level. Since we defined the period partition as 1, we get results for each period (i.e., day). We can see that the inter-temporal constraints in the model keep track of the storage level through the whole timeframe definition (i.e., week).","category":"page"},{"location":"30-concepts/","page":"Concepts","title":"Concepts","text":"(Image: PHS-inter-storage-level)","category":"page"},{"location":"30-concepts/","page":"Concepts","title":"Concepts","text":"In this example, we have demonstrated how to partially recover the chronological information of a storage asset with a longer discharge duration (such as 48 hours) than the representative period length (24 hours). This feature enables us to model both short- and long-term storage in TulipaEnergyModel.jl.","category":"page"},{"location":"91-developer/#developer","page":"Developer Documentation","title":"Developer Documentation","text":"","category":"section"},{"location":"91-developer/","page":"Developer Documentation","title":"Developer Documentation","text":"Welcome to TulipaEnergyModel.jl developer documentation. Here is how you can contribute to our Julia-based toolkit for modeling and optimization of electric energy systems.","category":"page"},{"location":"91-developer/","page":"Developer Documentation","title":"Developer Documentation","text":"Pages = [\"91-developer.md\"]\nDepth = 3","category":"page"},{"location":"91-developer/#Before-You-Begin","page":"Developer Documentation","title":"Before You Begin","text":"","category":"section"},{"location":"91-developer/","page":"Developer Documentation","title":"Developer Documentation","text":"Before you can start contributing, please read our Contributing Guidelines.","category":"page"},{"location":"91-developer/","page":"Developer Documentation","title":"Developer Documentation","text":"Also make sure that you have installed the required software, and that it is properly configured. 
You only need to do this once.","category":"page"},{"location":"91-developer/#Installing-Software","page":"Developer Documentation","title":"Installing Software","text":"","category":"section"},{"location":"91-developer/","page":"Developer Documentation","title":"Developer Documentation","text":"To contribute to TulipaEnergyModel.jl, you need the following:","category":"page"},{"location":"91-developer/","page":"Developer Documentation","title":"Developer Documentation","text":"Julia programming language.\nGit for version control.\nVSCode or any other editor. For VSCode, we recommend to install a few extensions. You can do it by pressing Ctrl + Shift + X (or ⇧ + ⌘ + X on MacOS) and searching by the extension name. - Julia for Visual Studio Code; - Git Graph.\nEditorConfig for consistent code formatting. In VSCode, it is available as an extension.\npre-commit to run the linters and formatters.\nYou can install pre-commit globally using\npip install --user pre-commit\nIf you prefer to create a local environment with it, do the following:\npython -m venv env\n. env/bin/activate\npip install --upgrade pip setuptools pre-commit\nOn Windows, you need to activate the environment using the following command instead of the previous one:\nenv/Scripts/activate\nNote that there is no leading dot (.) in the above command.\nJuliaFormatter.jl for code formatting.\nTo install it, open Julia REPL, for example, by typing in the command line:\njulia\nNote: julia must be part of your environment variables to call it from the command line.\nThen press ] to enter the package mode. In the package mode, enter the following:\npkg> activate\npkg> add JuliaFormatter\nIn VSCode, you can activate \"Format on Save\" for JuliaFormatter. To do so, open VSCode Settings (Ctrl + ,), then in \"Search Settings\", type \"Format on Save\" and tick the first result:\n(Image: Screenshot of Format on Save option)\nPrettier for markdown formatting. In VSCode, it is available as an extension.\nHaving enabled \"Format on Save\" for JuliaFormatter in the previous step will also enable \"Format on Save\" for Prettier, provided that Prettier is set as the default formatter for markdown files. To do so, in VSCode, open any markdown file, right-click on any area of the file, choose \"Format Document With...\", click \"Configure Default Formatter...\" situated at the bottom of the drop-list list at the top of the screen, and then choose Prettier - Code formatter as the default formatter. Once you are done, you can double-check it by again right-clicking on any area of the file and choosing \"Format Document With...\", and you should see Prettier - Code formatter (default).\nLocalCoverage for coverage testing. You can install it the same way you installed JuliaFormatter, that is, by opening Julia REPL in the package mode and typing:\npkg> activate\npkg> add LocalCoverage","category":"page"},{"location":"91-developer/#Forking-the-Repository","page":"Developer Documentation","title":"Forking the Repository","text":"","category":"section"},{"location":"91-developer/","page":"Developer Documentation","title":"Developer Documentation","text":"Any changes should be done in a fork. 
You can fork this repository directly on GitHub:","category":"page"},{"location":"91-developer/","page":"Developer Documentation","title":"Developer Documentation","text":"(Image: Screenshot of Fork button on GitHub)","category":"page"},{"location":"91-developer/","page":"Developer Documentation","title":"Developer Documentation","text":"After that, clone your fork and add this repository as upstream:","category":"page"},{"location":"91-developer/","page":"Developer Documentation","title":"Developer Documentation","text":"git clone https://github.com/your-name/TulipaEnergyModel.jl # use the fork URL\ngit remote add upstream https://github.com/TulipaEnergy/TulipaEnergyModel.jl # use the original repository URL","category":"page"},{"location":"91-developer/","page":"Developer Documentation","title":"Developer Documentation","text":"Check that your origin and upstream are correct:","category":"page"},{"location":"91-developer/","page":"Developer Documentation","title":"Developer Documentation","text":"git remote -v","category":"page"},{"location":"91-developer/","page":"Developer Documentation","title":"Developer Documentation","text":"You should see something similar to: (Image: Screenshot of remote names, showing origin and upstream)","category":"page"},{"location":"91-developer/","page":"Developer Documentation","title":"Developer Documentation","text":"If your names are wrong, use this command (with the relevant names) to correct it:","category":"page"},{"location":"91-developer/","page":"Developer Documentation","title":"Developer Documentation","text":"git remote set-url [name] [url]","category":"page"},{"location":"91-developer/#Configuring-Git","page":"Developer Documentation","title":"Configuring Git","text":"","category":"section"},{"location":"91-developer/","page":"Developer Documentation","title":"Developer Documentation","text":"Because operating systems use different line endings for text files, you need to configure Git to ensure code consistency across different platforms. You can do this with the following commands:","category":"page"},{"location":"91-developer/","page":"Developer Documentation","title":"Developer Documentation","text":"cd /path/to/TulipaEnergyModel.jl\ngit config --unset core.autocrlf # disable autocrlf in the EnergyModel repo\ngit config --global core.autocrlf false # explicitly disable autocrlf globally\ngit config --global --unset core.eol # disable explicit file-ending globally\ngit config core.eol lf # set Linux style file-endings in EnergyModel","category":"page"},{"location":"91-developer/#Activating-and-Testing-the-Package","page":"Developer Documentation","title":"Activating and Testing the Package","text":"","category":"section"},{"location":"91-developer/","page":"Developer Documentation","title":"Developer Documentation","text":"Start Julia REPL either via the command line or in the editor.","category":"page"},{"location":"91-developer/","page":"Developer Documentation","title":"Developer Documentation","text":"In the terminal, do:","category":"page"},{"location":"91-developer/","page":"Developer Documentation","title":"Developer Documentation","text":"cd /path/to/TulipaEnergyModel.jl # change the working directory to the repo directory if needed\njulia # start Julia REPL","category":"page"},{"location":"91-developer/","page":"Developer Documentation","title":"Developer Documentation","text":"In VSCode, first open your cloned fork as a new project. 
Then open the command palette with Ctrl + Shift + P (or + + P on MacOS) and use the command called Julia: Start REPL.","category":"page"},{"location":"91-developer/","page":"Developer Documentation","title":"Developer Documentation","text":"In Julia REPL, enter the package mode by pressing ].","category":"page"},{"location":"91-developer/","page":"Developer Documentation","title":"Developer Documentation","text":"In the package mode, first activate and instantiate the project, then run the tests to ensure that everything is working as expected:","category":"page"},{"location":"91-developer/","page":"Developer Documentation","title":"Developer Documentation","text":"pkg> activate . # activate the project\npkg> instantiate # instantiate to install the required packages\npkg> test # run the tests","category":"page"},{"location":"91-developer/#Configuring-Linting-and-Formatting","page":"Developer Documentation","title":"Configuring Linting and Formatting","text":"","category":"section"},{"location":"91-developer/","page":"Developer Documentation","title":"Developer Documentation","text":"With pre-commit installed, activate it as a pre-commit hook:","category":"page"},{"location":"91-developer/","page":"Developer Documentation","title":"Developer Documentation","text":"pre-commit install","category":"page"},{"location":"91-developer/","page":"Developer Documentation","title":"Developer Documentation","text":"To run the linting and formatting manually, enter the command below:","category":"page"},{"location":"91-developer/","page":"Developer Documentation","title":"Developer Documentation","text":"pre-commit run -a","category":"page"},{"location":"91-developer/","page":"Developer Documentation","title":"Developer Documentation","text":"Do it once now to make sure that everything works as expected.","category":"page"},{"location":"91-developer/","page":"Developer Documentation","title":"Developer Documentation","text":"Now, you can only commit if all the pre-commit tests pass.","category":"page"},{"location":"91-developer/","page":"Developer Documentation","title":"Developer Documentation","text":"Note: On subsequent occasions when you need to run pre-commit in a new shell, you will need to activate the Python virtual environment. If so, do the following:. env/bin/activate # for Windows the command is: . env/Scripts/activate\npre-commit run -a","category":"page"},{"location":"91-developer/#Code-format-and-guidelines","page":"Developer Documentation","title":"Code format and guidelines","text":"","category":"section"},{"location":"91-developer/","page":"Developer Documentation","title":"Developer Documentation","text":"This section will list the guidelines for code formatting not enforced by JuliaFormatter. 
We will try to follow these during development and reviews.","category":"page"},{"location":"91-developer/","page":"Developer Documentation","title":"Developer Documentation","text":"Naming\nCamelCase for classes and modules,\nsnake_case for functions and variables, and\nkebab-case for file names.\nUse using instead of import, in the following way:\nDon't use pure using Package, always list all necessary objects with using Package: A, B, C.\nList obvious objects, e.g., using JuMP: @variable, since @variable is obviously from JuMP in this context, or using Graph: SimpleDiGraph, because it's a constructor with an obvious name.\nFor other objects inside Package, use using Package: Package and explicitly call Package.A to use it, e.g., DataFrames.groupby.\nList all using in .","category":"page"},{"location":"91-developer/#Contributing-Workflow","page":"Developer Documentation","title":"Contributing Workflow","text":"","category":"section"},{"location":"91-developer/","page":"Developer Documentation","title":"Developer Documentation","text":"When the software is installed and configured, and you have forked the TulipaEnergyModel.jl repository, you can start contributing to it.","category":"page"},{"location":"91-developer/","page":"Developer Documentation","title":"Developer Documentation","text":"We use the following workflow for all contributions:","category":"page"},{"location":"91-developer/","page":"Developer Documentation","title":"Developer Documentation","text":"Make sure that your fork is up to date\nCreate a new branch\nImplement the changes\nRun the tests\nRun the linter\nCommit the changes\nRepeat steps 3-6 until all necessary changes are done\nMake sure that your fork is still up to date\nCreate a pull request","category":"page"},{"location":"91-developer/","page":"Developer Documentation","title":"Developer Documentation","text":"Below you can find detailed instructions for each step.","category":"page"},{"location":"91-developer/#1.-Make-Sure-That-Your-Fork-Is-Up-to-Date","page":"Developer Documentation","title":"1. Make Sure That Your Fork Is Up to Date","text":"","category":"section"},{"location":"91-developer/","page":"Developer Documentation","title":"Developer Documentation","text":"Fetch from org remote, fast-forward your local main:","category":"page"},{"location":"91-developer/","page":"Developer Documentation","title":"Developer Documentation","text":"git switch main\ngit fetch --all --prune\ngit merge --ff-only upstream/main","category":"page"},{"location":"91-developer/","page":"Developer Documentation","title":"Developer Documentation","text":"Warning: If you have a conflict on your main, it will appear now. You can delete your old main branch usinggit reset --hard upstream/main","category":"page"},{"location":"91-developer/#2.-Create-a-New-Branch","page":"Developer Documentation","title":"2. 
Create a New Branch","text":"","category":"section"},{"location":"91-developer/","page":"Developer Documentation","title":"Developer Documentation","text":"Create a branch to address the issue:","category":"page"},{"location":"91-developer/","page":"Developer Documentation","title":"Developer Documentation","text":"git switch -c ","category":"page"},{"location":"91-developer/","page":"Developer Documentation","title":"Developer Documentation","text":"If there is an associated issue, add the issue number to the branch name, for example, 123-short-description for issue #123.\nIf there is no associated issue and the changes are small, add a prefix such as \"typo\", \"hotfix\", \"small-refactor\", according to the type of update.\nIf the changes are not small and there is no associated issue, then create the issue first, so we can properly discuss the changes.","category":"page"},{"location":"91-developer/","page":"Developer Documentation","title":"Developer Documentation","text":"Note: Always branch from main, i.e., the main branch of your own fork.","category":"page"},{"location":"91-developer/#3.-Implement-the-Changes","page":"Developer Documentation","title":"3. Implement the Changes","text":"","category":"section"},{"location":"91-developer/","page":"Developer Documentation","title":"Developer Documentation","text":"Implement your changes to address the issue associated with the branch.","category":"page"},{"location":"91-developer/#4.-Run-the-Tests","page":"Developer Documentation","title":"4. Run the Tests","text":"","category":"section"},{"location":"91-developer/","page":"Developer Documentation","title":"Developer Documentation","text":"In Julia:","category":"page"},{"location":"91-developer/","page":"Developer Documentation","title":"Developer Documentation","text":"TulipaEnergyModel> test","category":"page"},{"location":"91-developer/","page":"Developer Documentation","title":"Developer Documentation","text":"To run the tests with code coverage, you can use the LocalCoverage package:","category":"page"},{"location":"91-developer/","page":"Developer Documentation","title":"Developer Documentation","text":"julia> using LocalCoverage\n# ]\npkg> activate .\n# \njulia> cov = generate_coverage()","category":"page"},{"location":"91-developer/","page":"Developer Documentation","title":"Developer Documentation","text":"This will run the tests, track line coverage and print a report table as output. Note that we want to maintain 100% test coverage. If any file does not show 100% coverage, please add tests to cover the missing lines.","category":"page"},{"location":"91-developer/","page":"Developer Documentation","title":"Developer Documentation","text":"If you are having trouble reaching 100% test coverage, you can set your pull request to 'draft' status and ask for help.","category":"page"},{"location":"91-developer/#5.-Run-the-Linter","page":"Developer Documentation","title":"5. Run the Linter","text":"","category":"section"},{"location":"91-developer/","page":"Developer Documentation","title":"Developer Documentation","text":"In the bash/git bash terminal, run pre-commit:","category":"page"},{"location":"91-developer/","page":"Developer Documentation","title":"Developer Documentation","text":". env/bin/activate # if necessary (for Windows the command is: . 
env/Scripts/activate)\npre-commit run -a","category":"page"},{"location":"91-developer/","page":"Developer Documentation","title":"Developer Documentation","text":"If any of the checks failed, find in the pre-commit log what the issues are and fix them. Then, add them again (git add), rerun the tests & linter, and commit.","category":"page"},{"location":"91-developer/#6.-Commit-the-Changes","page":"Developer Documentation","title":"6. Commit the Changes","text":"","category":"section"},{"location":"91-developer/","page":"Developer Documentation","title":"Developer Documentation","text":"When the tests are passing, commit the changes and push them to the remote repository. Use:","category":"page"},{"location":"91-developer/","page":"Developer Documentation","title":"Developer Documentation","text":"git commit -am \"A short but descriptive commit message\" # Equivalent to: git commit -a -m \"commit msg\"\ngit push -u origin ","category":"page"},{"location":"91-developer/","page":"Developer Documentation","title":"Developer Documentation","text":"When writing the commit message:","category":"page"},{"location":"91-developer/","page":"Developer Documentation","title":"Developer Documentation","text":"use imperative, present tense (Add feature, Fix bug);\nhave informative titles;\nif necessary, add a body with details.","category":"page"},{"location":"91-developer/","page":"Developer Documentation","title":"Developer Documentation","text":"Note: Try to create \"atomic git commits\". Read The Utopic Git History to learn more.","category":"page"},{"location":"91-developer/#7.-Make-Sure-That-Your-Fork-Is-Still-Up-to-Date","page":"Developer Documentation","title":"7. Make Sure That Your Fork Is Still Up to Date","text":"","category":"section"},{"location":"91-developer/","page":"Developer Documentation","title":"Developer Documentation","text":"If necessary, fetch any main updates from upstream and rebase your branch into origin/main. For example, do this if it took some time to resolve the issue you have been working on. If you don't resolve conflicts locally, you will get conflicts in your pull request.","category":"page"},{"location":"91-developer/","page":"Developer Documentation","title":"Developer Documentation","text":"Do the following steps:","category":"page"},{"location":"91-developer/","page":"Developer Documentation","title":"Developer Documentation","text":"git switch main # switch to the main branch\ngit fetch --all --prune # fetch the updates\ngit merge --ff-only upstream/main # merge as a fast-forward\ngit switch # switch back to the issue branch\ngit rebase main # rebase it","category":"page"},{"location":"91-developer/","page":"Developer Documentation","title":"Developer Documentation","text":"If it says that you have conflicts, resolve them by opening the file(s) and editing them until the code looks correct to you. 
You can check the changes with:","category":"page"},{"location":"91-developer/","page":"Developer Documentation","title":"Developer Documentation","text":"git diff # Check that changes are correct.\ngit add \ngit diff --staged # Another way to check changes, i.e., what you will see in the pull request.","category":"page"},{"location":"91-developer/","page":"Developer Documentation","title":"Developer Documentation","text":"Once the conflicts are resolved, commit and push.","category":"page"},{"location":"91-developer/","page":"Developer Documentation","title":"Developer Documentation","text":"git status # Another way to show that all conflicts are fixed.\ngit rebase --continue\ngit push --force origin ","category":"page"},{"location":"91-developer/#8.-Create-a-Pull-Request","page":"Developer Documentation","title":"8. Create a Pull Request","text":"","category":"section"},{"location":"91-developer/","page":"Developer Documentation","title":"Developer Documentation","text":"When there are no more conflicts and all the tests are passing, create a pull request to merge your remote branch into the org main. You can do this on GitHub by opening the branch in your fork and clicking \"Compare & pull request\".","category":"page"},{"location":"91-developer/","page":"Developer Documentation","title":"Developer Documentation","text":"(Image: Screenshot of Compare & pull request button on GitHub)","category":"page"},{"location":"91-developer/","page":"Developer Documentation","title":"Developer Documentation","text":"Fill in the pull request details:","category":"page"},{"location":"91-developer/","page":"Developer Documentation","title":"Developer Documentation","text":"Describe the changes.\nList the issue(s) that this pull request closes.\nFill in the collaboration confirmation.\n(Optional) Choose a reviewer.\nWhen all of the information is filled in, click \"Create pull request\".","category":"page"},{"location":"91-developer/","page":"Developer Documentation","title":"Developer Documentation","text":"(Image: Screenshot of the pull request information)","category":"page"},{"location":"91-developer/","page":"Developer Documentation","title":"Developer Documentation","text":"Your pull request will appear in the list of pull requests in the TulipaEnergyModel.jl repository, where you can track the review process.","category":"page"},{"location":"91-developer/","page":"Developer Documentation","title":"Developer Documentation","text":"Sometimes reviewers request changes. After pushing any changes, the pull request will be automatically updated. Do not forget to re-request a review.","category":"page"},{"location":"91-developer/","page":"Developer Documentation","title":"Developer Documentation","text":"Once your reviewer approves the pull request, you need to merge it with the main branch using \"Squash and Merge\". You can also delete the branch that originated the pull request by clicking the button that appears after the merge. 
For branches that were pushed to the main repo, it is recommended that you do so.","category":"page"},{"location":"91-developer/#Building-the-Documentation-Locally","page":"Developer Documentation","title":"Building the Documentation Locally","text":"","category":"section"},{"location":"91-developer/","page":"Developer Documentation","title":"Developer Documentation","text":"Following the latest suggestions, we recommend using LiveServer to build the documentation.","category":"page"},{"location":"91-developer/","page":"Developer Documentation","title":"Developer Documentation","text":"Note: Ensure you have the package Revise installed in your global environment before running servedocs.","category":"page"},{"location":"91-developer/","page":"Developer Documentation","title":"Developer Documentation","text":"Here is how you do it:","category":"page"},{"location":"91-developer/","page":"Developer Documentation","title":"Developer Documentation","text":"Run julia --project=docs in the package root to open Julia in the environment of the docs.\nIf this is the first time building the docs\nPress ] to enter pkg mode\nRun pkg> dev . to use the development version of your package\nPress backspace to leave pkg mode\nRun julia> using LiveServer\nRun julia> servedocs(launch_browser=true)","category":"page"},{"location":"91-developer/#Performance-Considerations","page":"Developer Documentation","title":"Performance Considerations","text":"","category":"section"},{"location":"91-developer/","page":"Developer Documentation","title":"Developer Documentation","text":"If you updated something that might impact the performance of the package, you can run the Benchmark.yml workflow from your pull request. To do that, add the tag benchmark in the pull request. This will trigger the workflow and post the results as a comment in your pull request.","category":"page"},{"location":"91-developer/","page":"Developer Documentation","title":"Developer Documentation","text":"Warning: This requires that your branch was pushed to the main repo. If you have created a pull request from a fork, the Benchmark.yml workflow does not work. Instead, close your pull request, push your branch to the main repo, and open a new pull request.","category":"page"},{"location":"91-developer/","page":"Developer Documentation","title":"Developer Documentation","text":"If you want to manually run the benchmarks, you can do the following:","category":"page"},{"location":"91-developer/","page":"Developer Documentation","title":"Developer Documentation","text":"Navigate to the benchmark folder\nRun julia --project=.\nEnter pkg mode by pressing ]\nRun dev .. to add the development version of TulipaEnergyModel\nNow run\ninclude(\"benchmarks.jl\")\ntune!(SUITE)\nresults = run(SUITE, verbose=true)","category":"page"},{"location":"91-developer/#Profiling","page":"Developer Documentation","title":"Profiling","text":"","category":"section"},{"location":"91-developer/","page":"Developer Documentation","title":"Developer Documentation","text":"To profile the code in a more manual way, here are some tips:","category":"page"},{"location":"91-developer/","page":"Developer Documentation","title":"Developer Documentation","text":"Wrap your code into functions.\nCall the function once to precompile it. This must be done after every change to the function.\nPrefix the function call with @time. This is the most basic timing, part of Julia.\nPrefix the function call with @btime. This is part of the BenchmarkTools package, which you might need to install. 
@btime will evaluate the function a few times to give a better estimate.\nPrefix the function call with @benchmark. Also part of BenchmarkTools. This will produce a nice histogram of the times and give more information. @btime and @benchmark do the same thing in the background.\nCall @profview. This needs to be done in VSCode, or using the ProfileView package. This will create a flame graph, where each function call is a block. The size of the block is proportional to the aggregate time it takes to run. The blocks below a block are functions called inside the function above.","category":"page"},{"location":"91-developer/","page":"Developer Documentation","title":"Developer Documentation","text":"See the file for an example of profiling code.","category":"page"},{"location":"91-developer/#Procedure-for-Releasing-a-New-Version-(Julia-Registry)","page":"Developer Documentation","title":"Procedure for Releasing a New Version (Julia Registry)","text":"","category":"section"},{"location":"91-developer/","page":"Developer Documentation","title":"Developer Documentation","text":"When publishing a new version of the model to the Julia Registry, follow this procedure:","category":"page"},{"location":"91-developer/","page":"Developer Documentation","title":"Developer Documentation","text":"Note: To be able to register, you need to be a member of the organisation TulipaEnergy and have your visibility set to public: (Image: Screenshot of public members of TulipaEnergy on GitHub)","category":"page"},{"location":"91-developer/","page":"Developer Documentation","title":"Developer Documentation","text":"Click on the Project.toml file on GitHub.\nEdit the file and change the version number according to semantic versioning: Major.Minor.Patch (Image: Screenshot of editing Project.toml on GitHub)\nCommit the changes in a new branch and open a pull request. Change the commit message according to the version number. (Image: Screenshot of PR with commit message \"Release 0.6.1\")\nCreate the pull request and squash & merge it after the review and testing process. Delete the branch after the squash and merge. (Image: Screenshot of full PR template on GitHub)\nGo to the main page of the repo and click on the commit. (Image: Screenshot of how to access commit on GitHub)\nAdd the following comment to the commit: @JuliaRegistrator register (Image: Screenshot of calling JuliaRegistrator in commit comments)\nThe bot should start the registration process. (Image: Screenshot of JuliaRegistrator bot message)\nAfter approval, the bot will take care of the PR at the Julia Registry and automatically create the release for the new version. (Image: Screenshot of new version on registry)\nThank you for helping make frequent releases!","category":"page"},{"location":"95-reference/#reference","page":"Reference","title":"Reference","text":"","category":"section"},{"location":"95-reference/","page":"Reference","title":"Reference","text":"Pages = [\"95-reference.md\"]","category":"page"},{"location":"95-reference/","page":"Reference","title":"Reference","text":"Modules = [TulipaEnergyModel]","category":"page"},{"location":"95-reference/#TulipaEnergyModel.EnergyProblem","page":"Reference","title":"TulipaEnergyModel.EnergyProblem","text":"Structure to hold all parts of an energy problem. It is a wrapper around various other relevant structures. 
It hides the complexity behind the energy problem, making the usage more friendly, although more verbose.\n\nFields\n\ngraph: The Graph object that defines the geometry of the energy problem.\nrepresentative_periods: A vector of Representative Periods.\nconstraints_partitions: Dictionaries that connect pairs of asset and representative periods to time partitions (vectors of time blocks)\ntimeframe: The number of periods of the representative_periods.\ndataframes: The data frames used to linearize the variables and constraints. These are used internally in the model only.\ngroups: The input data of the groups to create constraints that are common to a set of assets in the model.\nmodel_parameters: The model parameters.\nmodel: A JuMP.Model object representing the optimization model.\nsolved: A boolean indicating whether the model has been solved or not.\nobjective_value: The objective value of the solved problem.\ntermination_status: The termination status of the optimization model.\ntimings: Dictionary of elapsed time for various parts of the code (in seconds).\n\nConstructor\n\nEnergyProblem(connection): Constructs a new EnergyProblem object with the given connection. The constraints_partitions field is computed from the representative_periods, and the other fields are initialized with default values.\n\nSee the basic example tutorial to see how these can be used.\n\n\n\n\n\n","category":"type"},{"location":"95-reference/#TulipaEnergyModel.GraphAssetData","page":"Reference","title":"TulipaEnergyModel.GraphAssetData","text":"Structure to hold the asset data in the graph.\n\n\n\n\n\n","category":"type"},{"location":"95-reference/#TulipaEnergyModel.GraphFlowData","page":"Reference","title":"TulipaEnergyModel.GraphFlowData","text":"Structure to hold the flow data in the graph.\n\n\n\n\n\n","category":"type"},{"location":"95-reference/#TulipaEnergyModel.Group","page":"Reference","title":"TulipaEnergyModel.Group","text":"Structure to hold the group data\n\n\n\n\n\n","category":"type"},{"location":"95-reference/#TulipaEnergyModel.ModelParameters","page":"Reference","title":"TulipaEnergyModel.ModelParameters","text":"ModelParameters(;key = value, ...)\nModelParameters(path; ...)\nModelParameters(connection; ...)\nModelParameters(connection, path; ...)\n\nStructure to hold the model parameters. Some values are defined by default and some require explicit definition.\n\nIf path is passed, it is expected to be a string pointing to a TOML file with a key = value list of parameters. Explicit keyword arguments take precedence.\n\nIf connection is passed, the default discount_year is set to the minimum of all milestone years. In other words, we check the table year_data for the column year where the column is_milestone is true. Explicit keyword arguments take precedence.\n\nIf both are passed, then path has preference. 
Explicit keyword arguments take precedence.\n\nParameters\n\ndiscount_rate::Float64 = 0.0: The model discount rate.\ndiscount_year::Int: The model discount year.\n\n\n\n\n\n","category":"type"},{"location":"95-reference/#TulipaEnergyModel.RepresentativePeriod","page":"Reference","title":"TulipaEnergyModel.RepresentativePeriod","text":"Structure to hold the data of one representative period.\n\n\n\n\n\n","category":"type"},{"location":"95-reference/#TulipaEnergyModel.Timeframe","page":"Reference","title":"TulipaEnergyModel.Timeframe","text":"Structure to hold the data of the timeframe.\n\n\n\n\n\n","category":"type"},{"location":"95-reference/#TulipaEnergyModel.Year","page":"Reference","title":"TulipaEnergyModel.Year","text":"Structure to hold the data of the year.\n\n\n\n\n\n","category":"type"},{"location":"95-reference/#TulipaEnergyModel._check_initial_storage_level!-Tuple{Any, Any}","page":"Reference","title":"TulipaEnergyModel._check_initial_storage_level!","text":"_check_initial_storage_level!(df)\n\nDetermine the starting value for the initial storage level for interpolating the storage level. If there is no initial storage level given, we will use the final storage level. Otherwise, we use the given initial storage level.\n\n\n\n\n\n","category":"method"},{"location":"95-reference/#TulipaEnergyModel._construct_inter_rp_dataframes-NTuple{4, Any}","page":"Reference","title":"TulipaEnergyModel._construct_inter_rp_dataframes","text":"df = _construct_inter_rp_dataframes(assets, graph, years, asset_filter)\n\nConstructs dataframes for inter representative period constraints.\n\nArguments\n\nassets: An array of assets.\ngraph: The energy problem graph with the assets data.\nasset_filter: A function that filters assets based on certain criteria.\n\nReturns\n\nA dataframe containing the constructed dataframe for constraints.\n\n\n\n\n\n","category":"method"},{"location":"95-reference/#TulipaEnergyModel._get_graph_asset_or_flow-Tuple{Any, Any}","page":"Reference","title":"TulipaEnergyModel._get_graph_asset_or_flow","text":"_get_graph_asset_or_flow(graph, a)\n_get_graph_asset_or_flow(graph, (u, v))\n\nReturns graph[a] or graph[u, v].\n\n\n\n\n\n","category":"method"},{"location":"95-reference/#TulipaEnergyModel._interpolate_storage_level!-Tuple{Any, Any}","page":"Reference","title":"TulipaEnergyModel._interpolate_storage_level!","text":"_interpolate_storage_level!(df, time_column::Symbol)\n\nTransform the storage level dataframe from grouped timesteps or periods to incremental ones by interpolation. The starting value is the value of the previous grouped timesteps or periods or the initial value. The ending value is the value for the grouped timesteps or periods.\n\n\n\n\n\n","category":"method"},{"location":"95-reference/#TulipaEnergyModel._parse_rp_partition","page":"Reference","title":"TulipaEnergyModel._parse_rp_partition","text":"_parse_rp_partition(Val(specification), timestep_string, rp_timesteps)\n\nParses the timestep_string according to the specification. The representative period timesteps (rp_timesteps) might not be used in the computation, but it will be used for validation.\n\nThe specification defines what is expected from the timestep_string:\n\n:uniform: The timestep_string should be a single number indicating the duration of each block. Examples: \"3\", \"4\", \"1\".\n:explicit: The timestep_string should be a semicolon-separated list of integers. Each integer is a duration of a block. 
Examples: \"3;3;3;3\", \"4;4;4\", \"1;1;1;1;1;1;1;1;1;1;1;1\", and \"3;3;4;2\".\n:math: The timestep_string should be an expression of the form NxD+NxD…, where D is the duration of the block and N is the number of blocks. Examples: \"4x3\", \"3x4\", \"12x1\", and \"2x3+1x4+1x2\".\n\nThe generated blocks will be ranges (a:b). The first block starts at 1, and the last block ends at length(rp_timesteps).\n\nThe following table summarizes the formats for a rp_timesteps = 1:12:\n\nOutput :uniform :explicit :math\n1:3, 4:6, 7:9, 10:12 3 3;3;3;3 4x3\n1:4, 5:8, 9:12 4 4;4;4 3x4\n1:1, 2:2, …, 12:12 1 1;1;1;1;1;1;1;1;1;1;1;1 12x1\n1:3, 4:6, 7:10, 11:12 NA 3;3;4;2 2x3+1x4+1x2\n\nExamples\n\nusing TulipaEnergyModel\nTulipaEnergyModel._parse_rp_partition(Val(:uniform), \"3\", 1:12)\n\n# output\n\n4-element Vector{UnitRange{Int64}}:\n 1:3\n 4:6\n 7:9\n 10:12\n\nusing TulipaEnergyModel\nTulipaEnergyModel._parse_rp_partition(Val(:explicit), \"4;4;4\", 1:12)\n\n# output\n\n3-element Vector{UnitRange{Int64}}:\n 1:4\n 5:8\n 9:12\n\nusing TulipaEnergyModel\nTulipaEnergyModel._parse_rp_partition(Val(:math), \"2x3+1x4+1x2\", 1:12)\n\n# output\n\n4-element Vector{UnitRange{Int64}}:\n 1:3\n 4:6\n 7:10\n 11:12\n\n\n\n\n\n","category":"function"},{"location":"95-reference/#TulipaEnergyModel.add_expression_is_charging_terms_intra_rp_constraints!-Tuple{Any, Any, Any}","page":"Reference","title":"TulipaEnergyModel.add_expression_is_charging_terms_intra_rp_constraints!","text":"add_expression_is_charging_terms_intra_rp_constraints!(df_cons,\n df_is_charging,\n workspace\n )\n\nComputes the is_charging expressions per row of df_cons for the constraints that are within (intra) the representative periods.\n\nThis function is only used internally in the model.\n\nThis strategy is based on the replies in this discourse thread:\n\nhttps://discourse.julialang.org/t/help-improving-the-speed-of-a-dataframes-operation/107615/23\n\n\n\n\n\n","category":"method"},{"location":"95-reference/#TulipaEnergyModel.add_expression_terms_inter_rp_constraints!-NTuple{5, Any}","page":"Reference","title":"TulipaEnergyModel.add_expression_terms_inter_rp_constraints!","text":"add_expression_terms_inter_rp_constraints!(df_inter,\n df_flows,\n df_map,\n graph,\n representative_periods,\n )\n\nComputes the incoming and outgoing expressions per row of df_inter for the constraints that are between (inter) the representative periods.\n\nThis function is only used internally in the model.\n\n\n\n\n\n","category":"method"},{"location":"95-reference/#TulipaEnergyModel.add_expression_terms_intra_rp_constraints!-NTuple{5, Any}","page":"Reference","title":"TulipaEnergyModel.add_expression_terms_intra_rp_constraints!","text":"add_expression_terms_intra_rp_constraints!(df_cons,\n df_flows,\n workspace,\n representative_periods,\n graph;\n use_highest_resolution = true,\n multiply_by_duration = true,\n )\n\nComputes the incoming and outgoing expressions per row of df_cons for the constraints that are within (intra) the representative periods.\n\nThis function is only used internally in the model.\n\nThis strategy is based on the replies in this discourse thread:\n\nhttps://discourse.julialang.org/t/help-improving-the-speed-of-a-dataframes-operation/107615/23\n\n\n\n\n\n","category":"method"},{"location":"95-reference/#TulipaEnergyModel.add_expression_units_on_terms_intra_rp_constraints!-Tuple{Any, Any, 
Any}","page":"Reference","title":"TulipaEnergyModel.add_expression_units_on_terms_intra_rp_constraints!","text":"add_expression_units_on_terms_intra_rp_constraints!(\n df_cons,\n df_units_on,\n workspace,\n)\n\nComputes the units_on expressions per row of df_cons for the constraints that are within (intra) the representative periods.\n\nThis function is only used internally in the model.\n\nThis strategy is based on the replies in this discourse thread:\n\nhttps://discourse.julialang.org/t/help-improving-the-speed-of-a-dataframes-operation/107615/23\n\n\n\n\n\n","category":"method"},{"location":"95-reference/#TulipaEnergyModel.add_group_constraints!-NTuple{6, Any}","page":"Reference","title":"TulipaEnergyModel.add_group_constraints!","text":"add_group_constraints!(model, graph, ...)\n\nAdds group constraints for assets that share a common limits or bounds\n\n\n\n\n\n","category":"method"},{"location":"95-reference/#TulipaEnergyModel.add_ramping_constraints!-NTuple{12, Any}","page":"Reference","title":"TulipaEnergyModel.add_ramping_constraints!","text":"add_ramping_and_unit_commitment_constraints!(model, graph, ...)\n\nAdds the ramping constraints for producer and conversion assets where ramping = true in assets_data\n\n\n\n\n\n","category":"method"},{"location":"95-reference/#TulipaEnergyModel.calculate_annualized_cost-NTuple{5, Any}","page":"Reference","title":"TulipaEnergyModel.calculate_annualized_cost","text":"calculate_annualized_cost(discount_rate, economic_lifetime, investment_cost, years, investable_assets)\n\nCalculates the annualized cost for each asset, both energy assets and transport assets, in each year using provided discount rates, economic lifetimes, and investment costs.\n\nArguments\n\ndiscount_rate::Dict: A dictionary where the key is an asset or a pair of assets (asset1, asset2) for transport assets, and the value is the discount rate.\neconomic_lifetime::Dict: A dictionary where the key is an asset or a pair of assets (asset1, asset2) for transport assets, and the value is the economic lifetime.\ninvestment_cost::Dict: A dictionary where the key is a tuple (year, asset) or (year, (asset1, asset2)) for transport assets, and the value is the investment cost.\nyears::Array: An array of years to be considered.\ninvestable_assets::Dict: A dictionary where the key is a year, and the value is an array of assets that are relevant for that year.\n\nReturns\n\nA Dict where the keys are tuples (year, asset) representing the year and the asset, and the values are the calculated annualized cost for each asset in each year.\n\nFormula\n\nThe annualized cost for each asset in year is calculated using the formula:\n\nannualized_cost = discount_rate[asset] / (\n (1 + discount_rate[asset]) *\n (1 - 1 / (1 + discount_rate[asset])^economic_lifetime[asset])\n) * investment_cost[(year, asset)]\n\nExample for energy assets\n\ndiscount_rate = Dict(\"asset1\" => 0.05, \"asset2\" => 0.07)\n\neconomic_lifetime = Dict(\"asset1\" => 10, \"asset2\" => 15)\n\ninvestment_cost = Dict((2021, \"asset1\") => 1000, (2021, \"asset2\") => 1500,\n (2022, \"asset1\") => 1100, (2022, \"asset2\") => 1600)\nyears = [2021, 2022]\n\ninvestable_assets = Dict(2021 => [\"asset1\", \"asset2\"],\n 2022 => [\"asset1\"])\n\ncosts = calculate_annualized_cost(discount_rate, economic_lifetime, investment_cost, years, investable_assets)\n\n# output\n\nDict{Tuple{Int64, String}, Float64} with 3 entries:\n (2021, \"asset1\") => 123.338\n (2021, \"asset2\") => 153.918\n (2022, \"asset1\") => 135.671\n\nExample for transport 
assets\n\ndiscount_rate = Dict((\"asset1\", \"asset2\") => 0.05, (\"asset3\", \"asset4\") => 0.07)\n\neconomic_lifetime = Dict((\"asset1\", \"asset2\") => 10, (\"asset3\", \"asset4\") => 15)\n\ninvestment_cost = Dict((2021, (\"asset1\", \"asset2\")) => 1000, (2021, (\"asset3\", \"asset4\")) => 1500,\n (2022, (\"asset1\", \"asset2\")) => 1100, (2022, (\"asset3\", \"asset4\")) => 1600)\nyears = [2021, 2022]\n\ninvestable_assets = Dict(2021 => [(\"asset1\", \"asset2\"), (\"asset3\", \"asset4\")],\n 2022 => [(\"asset1\", \"asset2\")])\n\ncosts = calculate_annualized_cost(discount_rate, economic_lifetime, investment_cost, years, investable_assets)\n\n# output\n\nDict{Tuple{Int64, Tuple{String, String}}, Float64} with 3 entries:\n (2022, (\"asset1\", \"asset2\")) => 135.671\n (2021, (\"asset3\", \"asset4\")) => 153.918\n (2021, (\"asset1\", \"asset2\")) => 123.338\n\n\n\n\n\n","category":"method"},{"location":"95-reference/#TulipaEnergyModel.calculate_salvage_value-NTuple{5, Any}","page":"Reference","title":"TulipaEnergyModel.calculate_salvage_value","text":"calculate_salvage_value(discount_rate,\n economic_lifetime,\n annualized_cost,\n years,\n investable_assets,\n )\n\nCalculates the salvage value for each asset, both energy assets and transport assets.\n\nArguments\n\ndiscount_rate::Dict: A dictionary where the key is an asset or a pair of assets (asset1, asset2) for transport assets, and the value is the discount rate.\neconomic_lifetime::Dict: A dictionary where the key is an asset or a pair of assets (asset1, asset2) for transport assets, and the value is the economic lifetime.\nannualized_cost::Dict: A Dict where the keys are tuples (year, asset) representing the year and the asset, and the values are the annualized cost for each asset in each year.\nyears::Array: An array of years to be considered.\ninvestable_assets::Dict: A dictionary where the key is a year, and the value is an array of assets that are relevant for that year.\n\nReturns\n\nA Dict where the keys are tuples (year, asset) representing the year and the asset, and the values are the salvage value for each asset in each year.\n\nFormula\n\nThe salvage value for each asset in year is calculated using the formula:\n\nsalvage_value = annualized_cost[(year, asset)] * sum( 1 / (1 + discount_rate[asset])^(year_alias - year) for year_alias in salvage_value_set[(year, asset)] )\n\nExample for energy assets\n\ndiscount_rate = Dict(\"asset1\" => 0.05, \"asset2\" => 0.07)\n\neconomic_lifetime = Dict(\"asset1\" => 10, \"asset2\" => 15)\n\nannualized_cost =\n Dict((2021, \"asset1\") => 123.338, (2021, \"asset2\") => 153.918, (2022, \"asset1\") => 135.671)\n\nyears = [2021, 2022]\n\ninvestable_assets = Dict(2021 => [\"asset1\", \"asset2\"], 2022 => [\"asset1\"])\n\nsalvage_value = calculate_salvage_value(\n discount_rate,\n economic_lifetime,\n annualized_cost,\n years,\n investable_assets,\n)\n\n# output\nDict{Tuple{Int64, String}, Float64} with 3 entries:\n (2021, \"asset1\") => 759.2\n (2021, \"asset2\") => 1202.24\n (2022, \"asset1\") => 964.325\n\nExample for transport assets\n\ndiscount_rate = Dict((\"asset1\", \"asset2\") => 0.05, (\"asset3\", \"asset4\") => 0.07)\n\neconomic_lifetime = Dict((\"asset1\", \"asset2\") => 10, (\"asset3\", \"asset4\") => 15)\n\nannualized_cost = Dict(\n (2022, (\"asset1\", \"asset2\")) => 135.671,\n (2021, (\"asset3\", \"asset4\")) => 153.918,\n (2021, (\"asset1\", \"asset2\")) => 123.338,\n)\n\nyears = [2021, 2022]\n\ninvestable_assets =\n Dict(2021 => [(\"asset1\", \"asset2\"), (\"asset3\", \"asset4\")], 
2022 => [(\"asset1\", \"asset2\")])\n\nsalvage_value = calculate_salvage_value(\n discount_rate,\n economic_lifetime,\n annualized_cost,\n years,\n investable_assets,\n)\n\n# output\n\nDict{Tuple{Int64, Tuple{String, String}}, Float64} with 3 entries:\n (2022, (\"asset1\", \"asset2\")) => 964.325\n (2021, (\"asset3\", \"asset4\")) => 1202.24\n (2021, (\"asset1\", \"asset2\")) => 759.2\n\n\n\n\n\n","category":"method"},{"location":"95-reference/#TulipaEnergyModel.calculate_weight_for_investment_discounts-NTuple{6, Any}","page":"Reference","title":"TulipaEnergyModel.calculate_weight_for_investment_discounts","text":"calculate_weight_for_investment_discounts(social_rate,\n discount_year,\n salvage_value,\n investment_cost,\n years,\n investable_assets,\n )\n\nCalculates the weight for investment discounts for each asset, both energy assets and transport assets.\n\nArguments\n\nsocial_rate::Float64: A value with the social discount rate.\ndiscount_year::Int64: A value with the discount year for all the investments.\nsalvage_value::Dict: A dictionary where the key is a tuple (year, asset) or (year, (asset1, asset2)) for transport assets, and the value is the salvage value.\ninvestment_cost::Dict: A dictionary where the key is a tuple (year, asset) or (year, (asset1, asset2)) for transport assets, and the value is the investment cost.\nyears::Array: An array of years to be considered.\ninvestable_assets::Dict: A dictionary where the key is a year, and the value is an array of assets that are relevant for that year.\n\nReturns\n\nA Dict where the keys are tuples (year, asset) representing the year and the asset, and the values are the weights for investment discounts.\n\nFormula\n\nThe weight for investment discounts for each asset in year is calculated using the formula:\n\nweight_for_investment_discounts = 1 / (1 + social_rate)^(year - discount_year) * (1 - salvage_value[(year, asset)] / investment_cost[(year, asset)])\n\nExample for energy assets\n\nsocial_rate = 0.02\n\ndiscount_year = 2000\n\nsalvage_value = Dict(\n (2021, \"asset1\") => 759.1978422,\n (2021, \"asset2\") => 1202.2339859,\n (2022, \"asset1\") => 964.3285406,\n)\n\ninvestment_cost = Dict(\n (2021, \"asset1\") => 1000,\n (2021, \"asset2\") => 1500,\n (2022, \"asset1\") => 1100,\n (2022, \"asset2\") => 1600,\n)\nyears = [2021, 2022]\n\ninvestable_assets = Dict(2021 => [\"asset1\", \"asset2\"], 2022 => [\"asset1\"])\n\nweights = calculate_weight_for_investment_discounts(\n social_rate,\n discount_year,\n salvage_value,\n investment_cost,\n years,\n investable_assets,\n)\n\n# output\n\nDict{Tuple{Int64, String}, Float64} with 3 entries:\n (2021, \"asset1\") => 0.158875\n (2021, \"asset2\") => 0.130973\n (2022, \"asset1\") => 0.0797796\n\nExample for transport assets\n\nsocial_rate = 0.02\n\ndiscount_year = 2000\n\nsalvage_value = Dict(\n (2022, (\"asset1\", \"asset2\")) => 964.325,\n (2021, (\"asset3\", \"asset4\")) => 1202.24,\n (2021, (\"asset1\", \"asset2\")) => 759.2,\n)\n\ninvestment_cost = Dict((2021, (\"asset1\", \"asset2\")) => 1000, (2021, (\"asset3\", \"asset4\")) => 1500,\n (2022, (\"asset1\", \"asset2\")) => 1100, (2022, (\"asset3\", \"asset4\")) => 1600)\nyears = [2021, 2022]\n\ninvestable_assets = Dict(2021 => [(\"asset1\", \"asset2\"), (\"asset3\", \"asset4\")],\n 2022 => [(\"asset1\", \"asset2\")])\n\nweights = calculate_weight_for_investment_discounts(\n social_rate,\n discount_year,\n salvage_value,\n investment_cost,\n years,\n investable_assets,\n)\n\n# output\n\nDict{Tuple{Int64, Tuple{String, String}}, Float64} 
with 3 entries:\n (2022, (\"asset1\", \"asset2\")) => 0.0797817\n (2021, (\"asset3\", \"asset4\")) => 0.13097\n (2021, (\"asset1\", \"asset2\")) => 0.158874\n\n\n\n\n\n","category":"method"},{"location":"95-reference/#TulipaEnergyModel.calculate_weight_for_investment_discounts-Tuple{MetaGraphsNext.MetaGraph, Vararg{Any, 4}}","page":"Reference","title":"TulipaEnergyModel.calculate_weight_for_investment_discounts","text":"calculate_weight_for_investment_discounts(graph::MetaGraph,\n years,\n investable_assets,\n assets,\n model_parameters,\n )\n\nCalculates the weight for investment discounts for each asset, both energy assets and transport assets. Internally calls calculate_annualized_cost, calculate_salvage_value, calculate_weight_for_investment_discounts.\n\nArguments\n\ngraph::MetaGraph: A graph\nyears::Array: An array of years to be considered.\ninvestable_assets::Dict: A dictionary where the key is a year, and the value is an array of assets that are relevant for that year.\nassets::Array: An array of assets.\nmodel_parameters::ModelParameters: A model parameters structure.\n\nReturns\n\nA Dict where the keys are tuples (year, asset) representing the year and the asset, and the values are the weights for investment discounts.\n\n\n\n\n\n","category":"method"},{"location":"95-reference/#TulipaEnergyModel.compute_assets_partitions!-NTuple{4, Any}","page":"Reference","title":"TulipaEnergyModel.compute_assets_partitions!","text":"compute_assets_partitions!(partitions, df, a, representative_periods)\n\nParses the time blocks in the DataFrame df for the asset a and every representative period in the timesteps_per_rp dictionary, modifying the input partitions.\n\npartitions must be a dictionary indexed by the representative periods, possibly empty.\n\ntimesteps_per_rp must be a dictionary indexed by rep_period and its values are the timesteps of that rep_period.\n\nTo obtain the partitions, the columns specification and partition from df are passed to the function _parse_rp_partition.\n\n\n\n\n\n","category":"method"},{"location":"95-reference/#TulipaEnergyModel.compute_constraints_partitions-Tuple{Any, Any, Any}","page":"Reference","title":"TulipaEnergyModel.compute_constraints_partitions","text":"cons_partitions = compute_constraints_partitions(graph, representative_periods)\n\nComputes the constraints partitions using the assets and flows partitions stored in the graph, and the representative periods.\n\nThe function computes the constraints partitions by iterating over the partition dictionary, which specifies the partition strategy for each resolution (i.e., lowest or highest). 
For each asset and representative period, it calls the compute_rp_partition function to compute the partition based on the strategy.\n\n\n\n\n\n","category":"method"},{"location":"95-reference/#TulipaEnergyModel.compute_dual_variables-Tuple{Any}","page":"Reference","title":"TulipaEnergyModel.compute_dual_variables","text":"compute_dual_variables(model)\n\nCompute the dual variables for the given model.\n\nIf the model does not have dual variables, this function fixes the discrete variables, optimizes the model, and then computes the dual variables.\n\nArguments\n\nmodel: The model for which to compute the dual variables.\n\nReturns\n\nA named tuple containing the dual variables of selected constraints.\n\n\n\n\n\n","category":"method"},{"location":"95-reference/#TulipaEnergyModel.compute_flows_partitions!-NTuple{5, Any}","page":"Reference","title":"TulipaEnergyModel.compute_flows_partitions!","text":"compute_flows_partitions!(partitions, df, u, v, representative_periods)\n\nParses the time blocks in the DataFrame df for the flow (u, v) and every representative period in the timesteps_per_rp dictionary, modifying the input partitions.\n\npartitions must be a dictionary indexed by the representative periods, possibly empty.\n\ntimesteps_per_rp must be a dictionary indexed by rep_period and its values are the timesteps of that rep_period.\n\nTo obtain the partitions, the columns specification and partition from df are passed to the function _parse_rp_partition.\n\n\n\n\n\n","category":"method"},{"location":"95-reference/#TulipaEnergyModel.compute_rp_partition-Tuple{AbstractVector{<:AbstractVector{<:UnitRange{<:Integer}}}, Any}","page":"Reference","title":"TulipaEnergyModel.compute_rp_partition","text":"rp_partition = compute_rp_partition(partitions, :lowest)\n\nGiven the timesteps of various flows/assets in the partitions input, compute the representative period partitions.\n\nEach element of partitions is a partition with the following assumptions:\n\nAn element is of the form V = [r₁, r₂, …, rₘ], where each rᵢ is a range a:b.\nr₁ starts at 1.\nrᵢ₊₁ starts at the end of rᵢ plus 1.\nrₘ ends at some value N, that is the same for all elements of partitions.\n\nNotice that this implies that they form a disjunct partition of 1:N.\n\nThe output will also be a partition with the conditions above.\n\nStrategies\n\n:lowest\n\nIf strategy = :lowest (default), then the output is constructed greedily, i.e., it selects the next largest breakpoint following the algorithm below:\n\nInput: Vᴵ₁, …, Vᴵₚ, a list of time blocks. Each element of Vᴵⱼ is a range r = r.start:r.end. 
Output: V.\nCompute the end of the representative period N (all Vᴵⱼ should have the same end)\nStart with an empty V = []\nDefine the beginning of the range s = 1\nDefine an array with all the next breakpoints B such that Bⱼ is the first r.end such that r.end ≥ s for each r ∈ Vᴵⱼ.\nThe end of the range will be the e = max Bⱼ.\nDefine r = s:e and add r to the end of V.\nIf e = N, then END\nOtherwise, define s = e + 1 and go to step 4.\n\nExamples\n\npartition1 = [1:4, 5:8, 9:12]\npartition2 = [1:3, 4:6, 7:9, 10:12]\ncompute_rp_partition([partition1, partition2], :lowest)\n\n# output\n\n3-element Vector{UnitRange{Int64}}:\n 1:4\n 5:8\n 9:12\n\npartition1 = [1:1, 2:3, 4:6, 7:10, 11:12]\npartition2 = [1:2, 3:4, 5:5, 6:7, 8:9, 10:12]\ncompute_rp_partition([partition1, partition2], :lowest)\n\n# output\n\n5-element Vector{UnitRange{Int64}}:\n 1:2\n 3:4\n 5:6\n 7:10\n 11:12\n\n:highest\n\nIf strategy = :highest, then the output includes all the breakpoints from the input. Another way of describing it is to select the minimum end-point instead of the maximum end-point in the :lowest strategy.\n\nExamples\n\npartition1 = [1:4, 5:8, 9:12]\npartition2 = [1:3, 4:6, 7:9, 10:12]\ncompute_rp_partition([partition1, partition2], :highest)\n\n# output\n\n6-element Vector{UnitRange{Int64}}:\n 1:3\n 4:4\n 5:6\n 7:8\n 9:9\n 10:12\n\npartition1 = [1:1, 2:3, 4:6, 7:10, 11:12]\npartition2 = [1:2, 3:4, 5:5, 6:7, 8:9, 10:12]\ncompute_rp_partition([partition1, partition2], :highest)\n\n# output\n\n10-element Vector{UnitRange{Int64}}:\n 1:1\n 2:2\n 3:3\n 4:4\n 5:5\n 6:6\n 7:7\n 8:9\n 10:10\n 11:12\n\n\n\n\n\n","category":"method"},{"location":"95-reference/#TulipaEnergyModel.construct_dataframes-NTuple{4, Any}","page":"Reference","title":"TulipaEnergyModel.construct_dataframes","text":"dataframes = construct_dataframes(\n graph,\n representative_periods,\n constraints_partitions,\n years,\n)\n\nComputes the data frames used to linearize the variables and constraints. These are used internally in the model only.\n\n\n\n\n\n","category":"method"},{"location":"95-reference/#TulipaEnergyModel.create_internal_structures-Tuple{Any}","page":"Reference","title":"TulipaEnergyModel.create_internal_structures","text":"graph, representative_periods, timeframe = create_internal_structures(connection)\n\nReturn the graph, representative_periods, and timeframe structures given the input dataframes structure.\n\nThe details of these structures are:\n\ngraph: a MetaGraph with the following information:\nlabels(graph): All assets.\nedge_labels(graph): All flows, in pair format (u, v), where u and v are assets.\ngraph[a]: A TulipaEnergyModel.GraphAssetData structure for asset a.\ngraph[u, v]: A TulipaEnergyModel.GraphFlowData structure for flow (u, v).\nrepresentative_periods: An array of TulipaEnergyModel.RepresentativePeriod ordered by their IDs.\ntimeframe: Information of TulipaEnergyModel.Timeframe.\n\n\n\n\n\n","category":"method"},{"location":"95-reference/#TulipaEnergyModel.create_intervals_for_years-Tuple{Any}","page":"Reference","title":"TulipaEnergyModel.create_intervals_for_years","text":"create_intervals_for_years(years)\n\nCreate a dictionary of intervals for years. The interval is assigned to its starting year. 
The last interval is 1.\n\n\n\n\n\n","category":"method"},{"location":"95-reference/#TulipaEnergyModel.create_model!-Tuple{Any}","page":"Reference","title":"TulipaEnergyModel.create_model!","text":"create_model!(energy_problem; verbose = false)\n\nCreate the internal model of an TulipaEnergyModel.EnergyProblem.\n\n\n\n\n\n","category":"method"},{"location":"95-reference/#TulipaEnergyModel.create_model-NTuple{7, Any}","page":"Reference","title":"TulipaEnergyModel.create_model","text":"model = create_model(graph, representative_periods, dataframes, timeframe, groups; write_lp_file = false)\n\nCreate the energy model given the graph, representative_periods, dictionary of dataframes (created by construct_dataframes), timeframe, and groups.\n\n\n\n\n\n","category":"method"},{"location":"95-reference/#TulipaEnergyModel.default_parameters-Tuple{Any}","page":"Reference","title":"TulipaEnergyModel.default_parameters","text":"default_parameters(Val(optimizer_name_symbol))\ndefault_parameters(optimizer)\ndefault_parameters(optimizer_name_symbol)\ndefault_parameters(optimizer_name_string)\n\nReturns the default parameters for a given JuMP optimizer. Falls back to Dict() for undefined solvers.\n\nArguments\n\nThere are four ways to use this function:\n\nVal(optimizer_name_symbol): This uses type dispatch with the special Val type. Pass the solver name as a Symbol (e.g., Val(:HiGHS)).\noptimizer: The JuMP optimizer type (e.g., HiGHS.Optimizer).\noptimizer_name_symbol or optimizer_name_string: Pass the name in Symbol or String format and it will be converted to Val.\n\nUsing Val is necessary for the dispatch. All other cases will convert the argument and call the Val version, which might lead to type instability.\n\nExamples\n\nusing HiGHS\ndefault_parameters(HiGHS.Optimizer)\n\n# output\n\nDict{String, Any} with 1 entry:\n \"output_flag\" => false\n\nAnother case\n\ndefault_parameters(Val(:Cbc))\n\n# output\n\nDict{String, Any} with 1 entry:\n \"logLevel\" => 0\n\ndefault_parameters(:Cbc) == default_parameters(\"Cbc\") == default_parameters(Val(:Cbc))\n\n# output\n\ntrue\n\n\n\n\n\n","category":"method"},{"location":"95-reference/#TulipaEnergyModel.filter_graph-Tuple{Any, Any, Any, Vararg{Any}}","page":"Reference","title":"TulipaEnergyModel.filter_graph","text":"filter_graph(graph, elements, value, key)\nfilter_graph(graph, elements, value, key, year)\n\nHelper function to filter elements (assets or flows) in the graph given a key (and possibly year) and value (or values). In the safest case, this is equivalent to the filters\n\nfilter_assets_whose_key_equal_to_value = a -> graph[a].key == value\nfilter_assets_whose_key_year_equal_to_value = a -> graph[a].key[year] in value\nfilter_flows_whose_key_equal_to_value = f -> graph[f...].key == value\nfilter_flows_whose_key_year_equal_to_value = f -> graph[f...].key[year] in value\n\n\n\n\n\n","category":"method"},{"location":"95-reference/#TulipaEnergyModel.get_graph_value_or_missing-Tuple{Any, Any, Any}","page":"Reference","title":"TulipaEnergyModel.get_graph_value_or_missing","text":"get_graph_value_or_missing(graph, graph_key, field_key)\nget_graph_value_or_missing(graph, graph_key, field_key, year)\n\nGet graph[graph_key].field_key (or graph[graph_key].field_key[year]) or return missing if any of the values do not exist. 
We also check if graph[graph_key].active[year] is true if the year is passed and return missing otherwise.\n\n\n\n\n\n","category":"method"},{"location":"95-reference/#TulipaEnergyModel.profile_aggregation-NTuple{7, Any}","page":"Reference","title":"TulipaEnergyModel.profile_aggregation","text":"profile_aggregation(agg, profiles, key, block, default_value)\n\nAggregates the profiles[key] over the block using the agg function. If the profile does not exist, uses default_value instead of each profile value.\n\nprofiles should be a dictionary of profiles, for instance graph[a].profiles or graph[u, v].profiles. If profiles[key] exists, then this function computes the aggregation of profiles[key] over the range block using the aggregator agg, i.e., agg(profiles[key][block]). If profiles[key] does not exist, then this substitutes it with a vector of default_values.\n\n\n\n\n\n","category":"method"},{"location":"95-reference/#TulipaEnergyModel.read_parameters_from_file-Tuple{Any}","page":"Reference","title":"TulipaEnergyModel.read_parameters_from_file","text":"read_parameters_from_file(filepath)\n\nParse the parameters from a file into a dictionary. The keys and values are NOT checked to be valid parameters for any specific solvers.\n\nThe file should contain a list of lines of the following type:\n\nkey = value\n\nThe file is parsed as TOML, which is intuitive. See the example below.\n\nExample\n\n# Creating file\nfilepath, io = mktemp()\nprintln(io,\n \"\"\"\n true_or_false = true\n integer_number = 5\n real_number1 = 3.14\n big_number = 6.66E06\n small_number = 1e-8\n string = \"something\"\n \"\"\"\n)\nclose(io)\n# Reading\nread_parameters_from_file(filepath)\n\n# output\n\nDict{String, Any} with 6 entries:\n \"string\" => \"something\"\n \"integer_number\" => 5\n \"small_number\" => 1.0e-8\n \"true_or_false\" => true\n \"real_number1\" => 3.14\n \"big_number\" => 6.66e6\n\n\n\n\n\n","category":"method"},{"location":"95-reference/#TulipaEnergyModel.run_scenario-Tuple{Any}","page":"Reference","title":"TulipaEnergyModel.run_scenario","text":"energy_problem = run_scenario(connection; optimizer, parameters, write_lp_file, log_file, show_log)\n\nRun the scenario in the given connection and return the energy problem.\n\nThe optimizer and parameters keyword arguments can be used to change the optimizer (the default is HiGHS) and its parameters. The variables are passed to the solve_model function.\n\nSet write_lp_file = true to export the problem that is sent to the solver to a file for viewing. Set show_log = false to silence printing the log while running. Specify a log_file name to export the log to a file.\n\n\n\n\n\n","category":"method"},{"location":"95-reference/#TulipaEnergyModel.safe_comparison-Tuple{Any, Any, Any, Vararg{Any}}","page":"Reference","title":"TulipaEnergyModel.safe_comparison","text":"safe_comparison(graph, a, value, key)\nsafe_comparison(graph, a, value, key, year)\n\nCheck if graph[a].value (or graph[a].value[year]) is equal to value. This function assumes that if graph[a].value is a dictionary and value is not, then you made a mistake. This makes it safer, because it will not silently return false. It also checks for missing.\n\n\n\n\n\n","category":"method"},{"location":"95-reference/#TulipaEnergyModel.safe_inclusion-Tuple{Any, Any, Vector, Vararg{Any}}","page":"Reference","title":"TulipaEnergyModel.safe_inclusion","text":"safe_inclusion(graph, a, value, key)\nsafe_inclusion(graph, a, value, key, year)\n\nCheck if graph[a].value (or graph[a].value[year]) is in values. 
This correctly checks that missing in [missing] returns false.\n\n\n\n\n\n","category":"method"},{"location":"95-reference/#TulipaEnergyModel.save_solution_to_file-NTuple{4, Any}","page":"Reference","title":"TulipaEnergyModel.save_solution_to_file","text":"save_solution_to_file(output_file, graph, solution)\n\nSaves the solution in CSV files inside output_folder.\n\nThe following files are created:\n\nassets-investment.csv: The format of each row is a,v,p*v, where a is the asset name, v is the corresponding asset investment value, and p is the corresponding capacity value. Only investable assets are included.\nassets-investments-energy.csv: The format of each row is a,v,p*v, where a is the asset name, v is the corresponding asset investment value on energy, and p is the corresponding energy capacity value. Only investable assets with a storage_method_energy set to true are included.\nflows-investment.csv: Similar to assets-investment.csv, but for flows.\nflows.csv: The value of each flow, per (from, to) flow, rp representative period and timestep. Since the flow is in power, the value at a timestep is equal to the value at the corresponding time block, i.e., if flow[1:3] = 30, then flow[1] = flow[2] = flow[3] = 30.\nstorage-level.csv: The value of each storage level, per asset, rp representative period, and timestep. Since the storage level is in energy, the value at a timestep is a proportional fraction of the value at the corresponding time block, i.e., if level[1:3] = 30, then level[1] = level[2] = level[3] = 10.\n\n\n\n\n\n","category":"method"},{"location":"95-reference/#TulipaEnergyModel.save_solution_to_file-Tuple{Any, EnergyProblem}","page":"Reference","title":"TulipaEnergyModel.save_solution_to_file","text":"save_solution_to_file(output_folder, energy_problem)\n\nSaves the solution from energy_problem in CSV files inside output_folder.\n\n\n\n\n\n","category":"method"},{"location":"95-reference/#TulipaEnergyModel.solve_model","page":"Reference","title":"TulipaEnergyModel.solve_model","text":"solution = solve_model(model[, optimizer; parameters])\n\nSolve the JuMP model and return the solution. The optimizer argument should be an MILP solver from the JuMP list of supported solvers. By default we use HiGHS.\n\nThe keyword argument parameters should be passed as a list of key => value pairs. These can be created manually, obtained using default_parameters, or read from a file using read_parameters_from_file.\n\nThe solution object is a mutable struct with the following fields:\n\nassets_investment[a]: The investment for each asset, indexed on the investable asset a. To create a traditional array in the order given by the investable assets, one can run\n[solution.assets_investment[a] for a in labels(graph) if graph[a].investable]\nassets_investment_energy[a]: The investment on energy component for each asset, indexed on the investable asset a with a storage_method_energy set to true.\nTo create a traditional array in the order given by the investable assets, one can run\n[solution.assets_investment_energy[a] for a in labels(graph) if graph[a].investable && graph[a].storage_method_energy]\nflows_investment[u, v]: The investment for each flow, indexed on the investable flow (u, v). 
To create a traditional array in the order given by the investable flows, one can run\n[solution.flows_investment[(u, v)] for (u, v) in edge_labels(graph) if graph[u, v].investable]\nstorage_level_intra_rp[a, rp, timesteps_block]: The storage level for the storage asset a for a representative period rp and a time block timesteps_block. The list of time blocks is defined by constraints_partitions, which was used to create the model. To create a vector with all values of storage_level_intra_rp for a given a and rp, one can run\n[solution.storage_level_intra_rp[a, rp, timesteps_block] for timesteps_block in constraints_partitions[:lowest_resolution][(a, rp)]]\nstorage_level_inter_rp[a, pb]: The storage level for the storage asset a for a periods block pb. To create a vector with all values of storage_level_inter_rp for a given a, one can run\n[solution.storage_level_inter_rp[a, bp] for bp in graph[a].timeframe_partitions[a]]\nflow[(u, v), rp, timesteps_block]: The flow value for a given flow (u, v) at a given representative period rp, and time block timesteps_block. The list of time blocks is defined by graph[(u, v)].partitions[rp]. To create a vector with all values of flow for a given (u, v) and rp, one can run\n[solution.flow[(u, v), rp, timesteps_block] for timesteps_block in graph[u, v].partitions[rp]]\nobjective_value: A Float64 with the objective value at the solution.\nduals: A NamedTuple containing the dual variables of selected constraints.\n\nExamples\n\nparameters = Dict{String,Any}(\"presolve\" => \"on\", \"time_limit\" => 60.0, \"output_flag\" => true)\nsolution = solve_model(model, HiGHS.Optimizer; parameters = parameters)\n\n\n\n\n\n","category":"function"},{"location":"95-reference/#TulipaEnergyModel.solve_model!","page":"Reference","title":"TulipaEnergyModel.solve_model!","text":"solution = solve_model!(energy_problem[, optimizer; parameters])\n\nSolve the internal model of an energy_problem. The solution obtained by calling solve_model is returned.\n\n\n\n\n\n","category":"function"},{"location":"95-reference/#TulipaEnergyModel.solve_model!-Tuple{Any, Any, Vararg{Any}}","page":"Reference","title":"TulipaEnergyModel.solve_model!","text":"solution = solve_model!(dataframes, model, ...)\n\nSolves the JuMP model, returns the solution, and modifies dataframes to include the solution. The modifications made to dataframes are:\n\ndf_flows.solution = solution.flow\ndf_storage_level_intra_rp.solution = solution.storage_level_intra_rp\ndf_storage_level_inter_rp.solution = solution.storage_level_inter_rp\n\n\n\n\n\n","category":"method"},{"location":"20-tutorials/#tutorials","page":"Tutorials","title":"Tutorials","text":"","category":"section"},{"location":"20-tutorials/","page":"Tutorials","title":"Tutorials","text":"Here are some tutorials on how to use Tulipa.","category":"page"},{"location":"20-tutorials/","page":"Tutorials","title":"Tutorials","text":"Pages = [\"20-tutorials.md\"]\nDepth = 3","category":"page"},{"location":"20-tutorials/#basic-example","page":"Tutorials","title":"Basic example","text":"","category":"section"},{"location":"20-tutorials/","page":"Tutorials","title":"Tutorials","text":"For our first example, let's use a tiny existing dataset. 
Inside the code for this package, you can find the folder test/inputs/Tiny, which includes all the files necessary to create a model and solve it.","category":"page"},{"location":"20-tutorials/","page":"Tutorials","title":"Tutorials","text":"The files inside the \"Tiny\" folder define the assets and flows data, their profiles, and their time resolution, as well as define the representative periods and which periods in the full problem formulation they represent.¹","category":"page"},{"location":"20-tutorials/","page":"Tutorials","title":"Tutorials","text":"For more details about these files, see Input.","category":"page"},{"location":"20-tutorials/","page":"Tutorials","title":"Tutorials","text":"¹ Ignore bad-assets-data.csv, which is used for testing.","category":"page"},{"location":"20-tutorials/#Run-scenario","page":"Tutorials","title":"Run scenario","text":"","category":"section"},{"location":"20-tutorials/","page":"Tutorials","title":"Tutorials","text":"To read all data from the Tiny folder, perform all necessary steps to create a model, and solve the model, run the following in a Julia terminal:","category":"page"},{"location":"20-tutorials/","page":"Tutorials","title":"Tutorials","text":"using DuckDB, TulipaIO, TulipaEnergyModel\n\ninput_dir = \"../../test/inputs/Tiny\" # hide\n# input_dir should be the path to Tiny as a string (something like \"test/inputs/Tiny\")\n# TulipaEnergyModel.schema_per_table_name contains the schema with columns and types the file must have\nconnection = DBInterface.connect(DuckDB.DB)\nread_csv_folder(connection, input_dir; schemas = TulipaEnergyModel.schema_per_table_name)\nenergy_problem = run_scenario(connection)","category":"page"},{"location":"20-tutorials/","page":"Tutorials","title":"Tutorials","text":"The energy_problem variable is of type EnergyProblem. For more details, see the documentation for that type or the section Structures.","category":"page"},{"location":"20-tutorials/","page":"Tutorials","title":"Tutorials","text":"That's all it takes to run a scenario! 
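As a quick check of the result, you can inspect the objective value and termination status of the returned problem. This is a minimal sketch; both fields of EnergyProblem appear again later in this tutorial:

energy_problem.objective_value, energy_problem.termination_status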
To learn about the data required to run your own scenario, see the Input section of How to Use.","category":"page"},{"location":"20-tutorials/#Manually-running-each-step","page":"Tutorials","title":"Manually running each step","text":"","category":"section"},{"location":"20-tutorials/","page":"Tutorials","title":"Tutorials","text":"If we need more control, we can create the energy problem first, then the optimization model inside it, and finally ask for it to be solved.","category":"page"},{"location":"20-tutorials/","page":"Tutorials","title":"Tutorials","text":"using DuckDB, TulipaIO, TulipaEnergyModel\n\ninput_dir = \"../../test/inputs/Tiny\" # hide\n# input_dir should be the path to Tiny as a string (something like \"test/inputs/Tiny\")\nconnection = DBInterface.connect(DuckDB.DB)\nread_csv_folder(connection, input_dir; schemas = TulipaEnergyModel.schema_per_table_name)\nenergy_problem = EnergyProblem(connection)","category":"page"},{"location":"20-tutorials/","page":"Tutorials","title":"Tutorials","text":"The energy problem does not have a model yet:","category":"page"},{"location":"20-tutorials/","page":"Tutorials","title":"Tutorials","text":"energy_problem.model === nothing","category":"page"},{"location":"20-tutorials/","page":"Tutorials","title":"Tutorials","text":"To create the internal model, we call the function create_model!.","category":"page"},{"location":"20-tutorials/","page":"Tutorials","title":"Tutorials","text":"create_model!(energy_problem)\nenergy_problem.model","category":"page"},{"location":"20-tutorials/","page":"Tutorials","title":"Tutorials","text":"The model has not been solved yet, which can be verified through the solved flag inside the energy problem:","category":"page"},{"location":"20-tutorials/","page":"Tutorials","title":"Tutorials","text":"energy_problem.solved","category":"page"},{"location":"20-tutorials/","page":"Tutorials","title":"Tutorials","text":"Finally, we can solve the model:","category":"page"},{"location":"20-tutorials/","page":"Tutorials","title":"Tutorials","text":"solution = solve_model!(energy_problem)","category":"page"},{"location":"20-tutorials/","page":"Tutorials","title":"Tutorials","text":"The solution is included in the individual assets and flows, but for completeness, we return the full solution object, also defined in the Structures section.","category":"page"},{"location":"20-tutorials/","page":"Tutorials","title":"Tutorials","text":"In particular, the objective value and the termination status are also included in the energy problem:","category":"page"},{"location":"20-tutorials/","page":"Tutorials","title":"Tutorials","text":"energy_problem.objective_value, energy_problem.termination_status","category":"page"},{"location":"20-tutorials/#Manually-creating-all-structures-without-EnergyProblem","page":"Tutorials","title":"Manually creating all structures without EnergyProblem","text":"","category":"section"},{"location":"20-tutorials/","page":"Tutorials","title":"Tutorials","text":"For additional control, it might be desirable to use the internal structures of EnergyProblem directly. This can be error-prone, so use it with care. 
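Before building the structures yourself, it can help to confirm which tables were loaded into the DuckDB connection created in the code block below. This is a small sketch that queries the connection with plain DuckDB SQL through DBInterface, rather than a TulipaEnergyModel function:

using DuckDB
# List the tables created in the connection by read_csv_folder
DBInterface.execute(connection, \"SHOW TABLES\")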
The full description for these structures can be found in Structures.","category":"page"},{"location":"20-tutorials/","page":"Tutorials","title":"Tutorials","text":"using DuckDB, TulipaIO, TulipaEnergyModel\n\ninput_dir = \"../../test/inputs/Tiny\" # hide\n# input_dir should be the path to Tiny as a string (something like \"test/inputs/Tiny\")\nconnection = DBInterface.connect(DuckDB.DB)\nread_csv_folder(connection, input_dir; schemas = TulipaEnergyModel.schema_per_table_name)\nmodel_parameters = ModelParameters(connection)\ngraph, representative_periods, timeframe, groups, years = create_internal_structures(connection)","category":"page"},{"location":"20-tutorials/","page":"Tutorials","title":"Tutorials","text":"We also need a time partition for the constraints to create the model. Creating an energy problem automatically computes this data, but since we are doing it manually, we need to calculate it ourselves.","category":"page"},{"location":"20-tutorials/","page":"Tutorials","title":"Tutorials","text":"constraints_partitions = compute_constraints_partitions(graph, representative_periods, years)","category":"page"},{"location":"20-tutorials/","page":"Tutorials","title":"Tutorials","text":"The constraints_partitions has two dictionaries with the keys :lowest_resolution and :highest_resolution. The lowest resolution dictionary is mainly used to create the constraints for energy balance, whereas the highest resolution dictionary is mainly used to create the capacity constraints in the model.","category":"page"},{"location":"20-tutorials/","page":"Tutorials","title":"Tutorials","text":"Finally, we also need dataframes that store the linearized indexes of the variables.","category":"page"},{"location":"20-tutorials/","page":"Tutorials","title":"Tutorials","text":"dataframes = construct_dataframes(graph, representative_periods, constraints_partitions, years)","category":"page"},{"location":"20-tutorials/","page":"Tutorials","title":"Tutorials","text":"Now we can compute the model.","category":"page"},{"location":"20-tutorials/","page":"Tutorials","title":"Tutorials","text":"model = create_model(graph, representative_periods, dataframes, years, timeframe, groups, model_parameters)","category":"page"},{"location":"20-tutorials/","page":"Tutorials","title":"Tutorials","text":"Finally, we can compute the solution.","category":"page"},{"location":"20-tutorials/","page":"Tutorials","title":"Tutorials","text":"solution = solve_model(model)","category":"page"},{"location":"20-tutorials/","page":"Tutorials","title":"Tutorials","text":"or, if we want to store the flow, storage_level_intra_rp, and storage_level_inter_rp optimal value in the dataframes:","category":"page"},{"location":"20-tutorials/","page":"Tutorials","title":"Tutorials","text":"solution = solve_model!(dataframes, model)","category":"page"},{"location":"20-tutorials/","page":"Tutorials","title":"Tutorials","text":"This solution structure is the same as the one returned when using an EnergyProblem.","category":"page"},{"location":"20-tutorials/#Change-optimizer-and-specify-parameters","page":"Tutorials","title":"Change optimizer and specify parameters","text":"","category":"section"},{"location":"20-tutorials/","page":"Tutorials","title":"Tutorials","text":"By default, the model is solved using the HiGHS optimizer (or solver). To change this, we can give the functions run_scenario, solve_model, or solve_model! 
a different optimizer.","category":"page"},{"location":"20-tutorials/","page":"Tutorials","title":"Tutorials","text":"For instance, we run the GLPK optimizer below:","category":"page"},{"location":"20-tutorials/","page":"Tutorials","title":"Tutorials","text":"using DuckDB, TulipaIO, TulipaEnergyModel, GLPK\n\ninput_dir = \"../../test/inputs/Tiny\" # hide\nconnection = DBInterface.connect(DuckDB.DB)\nread_csv_folder(connection, input_dir; schemas = TulipaEnergyModel.schema_per_table_name)\nenergy_problem = run_scenario(connection, optimizer = GLPK.Optimizer)","category":"page"},{"location":"20-tutorials/","page":"Tutorials","title":"Tutorials","text":"or","category":"page"},{"location":"20-tutorials/","page":"Tutorials","title":"Tutorials","text":"using GLPK\n\nsolution = solve_model!(energy_problem, GLPK.Optimizer)","category":"page"},{"location":"20-tutorials/","page":"Tutorials","title":"Tutorials","text":"or","category":"page"},{"location":"20-tutorials/","page":"Tutorials","title":"Tutorials","text":"using GLPK\n\nsolution = solve_model(model, GLPK.Optimizer)","category":"page"},{"location":"20-tutorials/","page":"Tutorials","title":"Tutorials","text":"Notice that, in any of these cases, we need to explicitly add the GLPK package ourselves and add using GLPK before using GLPK.Optimizer.","category":"page"},{"location":"20-tutorials/","page":"Tutorials","title":"Tutorials","text":"In any of these cases, default parameters for the GLPK optimizer are used, which you can query using default_parameters. You can pass a dictionary using the keyword argument parameters to change the defaults. For instance, in the example below, we change the maximum allowed runtime for GLPK by setting its tm_lim parameter to 1, which will most likely cause it to fail to converge in time.","category":"page"},{"location":"20-tutorials/","page":"Tutorials","title":"Tutorials","text":"using DuckDB, TulipaIO, TulipaEnergyModel, GLPK\n\ninput_dir = \"../../test/inputs/Tiny\" # hide\nparameters = Dict(\"tm_lim\" => 1)\nconnection = DBInterface.connect(DuckDB.DB)\nread_csv_folder(connection, input_dir; schemas = TulipaEnergyModel.schema_per_table_name)\nenergy_problem = run_scenario(connection, optimizer = GLPK.Optimizer, parameters = parameters)\nenergy_problem.termination_status","category":"page"},{"location":"20-tutorials/","page":"Tutorials","title":"Tutorials","text":"For the complete list of parameters, check your chosen optimizer.","category":"page"},{"location":"20-tutorials/","page":"Tutorials","title":"Tutorials","text":"These parameters can also be passed via a file. See the read_parameters_from_file function for more details.","category":"page"},{"location":"20-tutorials/#graph-tutorial","page":"Tutorials","title":"Using the graph structure","text":"","category":"section"},{"location":"20-tutorials/","page":"Tutorials","title":"Tutorials","text":"Read about the graph structure in the Graph section first.","category":"page"},{"location":"20-tutorials/","page":"Tutorials","title":"Tutorials","text":"We will use the graph created above for the \"Tiny\" dataset.","category":"page"},{"location":"20-tutorials/","page":"Tutorials","title":"Tutorials","text":"The first thing that we can do is access all assets. 
They are the labels of the graph and can be accessed via the MetaGraphsNext API:","category":"page"},{"location":"20-tutorials/","page":"Tutorials","title":"Tutorials","text":"using MetaGraphsNext\n# Accessing assets\nlabels(graph)","category":"page"},{"location":"20-tutorials/","page":"Tutorials","title":"Tutorials","text":"Notice that the result is a generator, so if we want the actual results, we have to collect it:","category":"page"},{"location":"20-tutorials/","page":"Tutorials","title":"Tutorials","text":"labels(graph) |> collect","category":"page"},{"location":"20-tutorials/","page":"Tutorials","title":"Tutorials","text":"To access the asset data, we can index the graph with an asset label:","category":"page"},{"location":"20-tutorials/","page":"Tutorials","title":"Tutorials","text":"graph[\"ocgt\"]","category":"page"},{"location":"20-tutorials/","page":"Tutorials","title":"Tutorials","text":"This is a Julia struct, or composite type, named GraphAssetData. We can access its fields with .:","category":"page"},{"location":"20-tutorials/","page":"Tutorials","title":"Tutorials","text":"graph[\"ocgt\"].type","category":"page"},{"location":"20-tutorials/","page":"Tutorials","title":"Tutorials","text":"Since labels returns a generator, we can iterate over its contents without collecting it into a vector.","category":"page"},{"location":"20-tutorials/","page":"Tutorials","title":"Tutorials","text":"for a in labels(graph)\n println(\"Asset $a has type $(graph[a].type)\")\nend","category":"page"},{"location":"20-tutorials/","page":"Tutorials","title":"Tutorials","text":"To get all flows we can use edge_labels:","category":"page"},{"location":"20-tutorials/","page":"Tutorials","title":"Tutorials","text":"edge_labels(graph) |> collect","category":"page"},{"location":"20-tutorials/","page":"Tutorials","title":"Tutorials","text":"To access the flow data, we index with graph[u, v]:","category":"page"},{"location":"20-tutorials/","page":"Tutorials","title":"Tutorials","text":"graph[\"ocgt\", \"demand\"]","category":"page"},{"location":"20-tutorials/","page":"Tutorials","title":"Tutorials","text":"The type of the flow struct is GraphFlowData.","category":"page"},{"location":"20-tutorials/","page":"Tutorials","title":"Tutorials","text":"We can easily find all assets u for which a flow (u, a) exists for a given asset a (in this case, demand):","category":"page"},{"location":"20-tutorials/","page":"Tutorials","title":"Tutorials","text":"inneighbor_labels(graph, \"demand\") |> collect","category":"page"},{"location":"20-tutorials/","page":"Tutorials","title":"Tutorials","text":"Similarly, all assets v for which a flow (a, v) exists for a given asset a (in this case, ocgt):","category":"page"},{"location":"20-tutorials/","page":"Tutorials","title":"Tutorials","text":"outneighbor_labels(graph, \"ocgt\") |> collect","category":"page"},{"location":"20-tutorials/#solution-tutorial","page":"Tutorials","title":"Manipulating the solution","text":"","category":"section"},{"location":"20-tutorials/","page":"Tutorials","title":"Tutorials","text":"First, see the description of the solution object.","category":"page"},{"location":"20-tutorials/","page":"Tutorials","title":"Tutorials","text":"Let's consider the larger dataset \"Norse\" in this section. 
And let's talk about two ways to access the solution.","category":"page"},{"location":"20-tutorials/#The-solution-returned-by-solve_model","page":"Tutorials","title":"The solution returned by solve_model","text":"","category":"section"},{"location":"20-tutorials/","page":"Tutorials","title":"Tutorials","text":"The solution, as shown before, can be obtained when calling solve_model or solve_model!.","category":"page"},{"location":"20-tutorials/","page":"Tutorials","title":"Tutorials","text":"using DuckDB, TulipaIO, TulipaEnergyModel\n\ninput_dir = \"../../test/inputs/Norse\" # hide\n# input_dir should be the path to Norse as a string (something like \"test/inputs/Norse\")\nconnection = DBInterface.connect(DuckDB.DB)\nread_csv_folder(connection, input_dir; schemas = TulipaEnergyModel.schema_per_table_name)\nenergy_problem = EnergyProblem(connection)\ncreate_model!(energy_problem)\nsolution = solve_model!(energy_problem)\nnothing # hide","category":"page"},{"location":"20-tutorials/","page":"Tutorials","title":"Tutorials","text":"To create a traditional array in the order given by the investable assets, one can run","category":"page"},{"location":"20-tutorials/","page":"Tutorials","title":"Tutorials","text":"The solution.flow, solution.storage_level_intra_rp, and solution.storage_level_inter_rp values are linearized according to the dataframes in the dictionary energy_problem.dataframes with keys :flows, :lowest_storage_level_intra_rp, and :storage_level_inter_rp, respectively. You need to query the data from these dataframes and then use the column index to select the appropriate value.","category":"page"},{"location":"20-tutorials/","page":"Tutorials","title":"Tutorials","text":"To create a vector with all values of flow for a given (u, v) and rp, one can run","category":"page"},{"location":"20-tutorials/","page":"Tutorials","title":"Tutorials","text":"using MetaGraphsNext\ngraph = energy_problem.graph\n\n(u, v) = first(edge_labels(graph))\nrp = 1\ndf = filter(\n row -> row.rep_period == rp && row.from == u && row.to == v,\n energy_problem.dataframes[:flows],\n view = true,\n)\n[solution.flow[row.index] for row in eachrow(df)]","category":"page"},{"location":"20-tutorials/","page":"Tutorials","title":"Tutorials","text":"To create a vector with all the values of storage_level_intra_rp for a given a and rp, one can run","category":"page"},{"location":"20-tutorials/","page":"Tutorials","title":"Tutorials","text":"a = energy_problem.dataframes[:lowest_storage_level_intra_rp].asset[1]\nrp = 1\ndf = filter(\n row -> row.asset == a && row.rep_period == rp,\n energy_problem.dataframes[:lowest_storage_level_intra_rp],\n view = true,\n)\n[solution.storage_level_intra_rp[row.index] for row in eachrow(df)]","category":"page"},{"location":"20-tutorials/","page":"Tutorials","title":"Tutorials","text":"To create a vector with all the values of storage_level_inter_rp for a given a, one can run","category":"page"},{"location":"20-tutorials/","page":"Tutorials","title":"Tutorials","text":"a = energy_problem.dataframes[:storage_level_inter_rp].asset[1]\ndf = filter(\n row -> row.asset == a,\n energy_problem.dataframes[:storage_level_inter_rp],\n view = true,\n)\n[solution.storage_level_inter_rp[row.index] for row in eachrow(df)]","category":"page"},{"location":"20-tutorials/#The-solution-inside-the-graph","page":"Tutorials","title":"The solution inside the graph","text":"","category":"section"},{"location":"20-tutorials/","page":"Tutorials","title":"Tutorials","text":"In addition to the solution object, the 
solution is also stored by the individual assets and flows when solve_model! is called (i.e., when using an EnergyProblem object).","category":"page"},{"location":"20-tutorials/","page":"Tutorials","title":"Tutorials","text":"They can be accessed like any other value from GraphAssetData or GraphFlowData, which means that we recreate the values from the previous section in a new way:","category":"page"},{"location":"20-tutorials/","page":"Tutorials","title":"Tutorials","text":"years = [year.id for year in energy_problem.years]\nDict(\n (y, a) => [\n energy_problem.graph[a].investment[y]\n ] for y in years for a in labels(graph) if graph[a].investable[y]\n)","category":"page"},{"location":"20-tutorials/","page":"Tutorials","title":"Tutorials","text":"Dict(\n (y, a) => [\n energy_problem.graph[u, v].investment[y]\n ] for y in years for (u, v) in edge_labels(graph) if graph[u, v].investable[y]\n)","category":"page"},{"location":"20-tutorials/","page":"Tutorials","title":"Tutorials","text":"(u, v) = first(edge_labels(graph))\nrp = 1\ndf = filter(\n row -> row.rep_period == rp && row.from == u && row.to == v,\n energy_problem.dataframes[:flows],\n view = true,\n)\n[energy_problem.graph[u, v].flow[(rp, row.timesteps_block)] for row in eachrow(df)]","category":"page"},{"location":"20-tutorials/","page":"Tutorials","title":"Tutorials","text":"To create a vector with all the values of storage_level_intra_rp for a given a and rp, one can run","category":"page"},{"location":"20-tutorials/","page":"Tutorials","title":"Tutorials","text":"a = energy_problem.dataframes[:lowest_storage_level_intra_rp].asset[1]\nrp = 1\ndf = filter(\n row -> row.asset == a && row.rep_period == rp,\n energy_problem.dataframes[:lowest_storage_level_intra_rp],\n view = true,\n)\n[energy_problem.graph[a].storage_level_intra_rp[(rp, row.timesteps_block)] for row in eachrow(df)]","category":"page"},{"location":"20-tutorials/","page":"Tutorials","title":"Tutorials","text":"To create a vector with all the values of storage_level_inter_rp for a given a, one can run","category":"page"},{"location":"20-tutorials/","page":"Tutorials","title":"Tutorials","text":"a = energy_problem.dataframes[:storage_level_inter_rp].asset[1]\ndf = filter(\n row -> row.asset == a,\n energy_problem.dataframes[:storage_level_inter_rp],\n view = true,\n)\n[energy_problem.graph[a].storage_level_inter_rp[row.periods_block] for row in eachrow(df)]","category":"page"},{"location":"20-tutorials/#The-solution-inside-the-dataframes-object","page":"Tutorials","title":"The solution inside the dataframes object","text":"","category":"section"},{"location":"20-tutorials/","page":"Tutorials","title":"Tutorials","text":"In addition to being stored in the solution object, and in the graph object, the solution for the flow, storage_level_intra_rp, and storage_level_inter_rp is also stored inside the corresponding DataFrame objects if solve_model! 
is called.","category":"page"},{"location":"20-tutorials/","page":"Tutorials","title":"Tutorials","text":"The code below will do the same as in the two previous examples:","category":"page"},{"location":"20-tutorials/","page":"Tutorials","title":"Tutorials","text":"(u, v) = first(edge_labels(graph))\nrp = 1\ndf = filter(\n row -> row.rep_period == rp && row.from == u && row.to == v,\n energy_problem.dataframes[:flows],\n view = true,\n)\ndf.solution","category":"page"},{"location":"20-tutorials/","page":"Tutorials","title":"Tutorials","text":"a = energy_problem.dataframes[:storage_level_inter_rp].asset[1]\ndf = filter(\n row -> row.asset == a,\n energy_problem.dataframes[:storage_level_inter_rp],\n view = true,\n)\ndf.solution","category":"page"},{"location":"20-tutorials/","page":"Tutorials","title":"Tutorials","text":"a = energy_problem.dataframes[:lowest_storage_level_intra_rp].asset[1]\nrp = 1\ndf = filter(\n row -> row.asset == a && row.rep_period == rp,\n energy_problem.dataframes[:lowest_storage_level_intra_rp],\n view = true,\n)\ndf.solution","category":"page"},{"location":"20-tutorials/#Values-of-constraints-and-expressions","page":"Tutorials","title":"Values of constraints and expressions","text":"","category":"section"},{"location":"20-tutorials/","page":"Tutorials","title":"Tutorials","text":"By accessing the model directly, we can query the values of constraints and expressions. We need to know the name of the constraint and how it is indexed, and for that, you will need to check the model.","category":"page"},{"location":"20-tutorials/","page":"Tutorials","title":"Tutorials","text":"For instance, we can get all incoming flows in the lowest resolution for a given asset for a given representative period with the following:","category":"page"},{"location":"20-tutorials/","page":"Tutorials","title":"Tutorials","text":"using JuMP\na = energy_problem.dataframes[:lowest].asset[end]\nrp = 1\ndf = filter(\n row -> row.asset == a && row.rep_period == rp,\n energy_problem.dataframes[:lowest],\n view = true,\n)\n[value(energy_problem.model[:incoming_flow_lowest_resolution][row.index]) for row in eachrow(df)]","category":"page"},{"location":"20-tutorials/","page":"Tutorials","title":"Tutorials","text":"The values of constraints can also be obtained, however, they are frequently indexed in a subset, which means that their indexing is not straightforward. To know how they are indexed, it is necessary to look at the model code. For instance, to get the consumer balance, we first need to filter the :highest_in_out dataframes by consumers:","category":"page"},{"location":"20-tutorials/","page":"Tutorials","title":"Tutorials","text":"df_consumers = filter(\n row -> graph[row.asset].type == \"consumer\",\n energy_problem.dataframes[:highest_in_out],\n view = false,\n);\nnothing # hide","category":"page"},{"location":"20-tutorials/","page":"Tutorials","title":"Tutorials","text":"We set view = false to create a copy of this DataFrame so we can make our indexes:","category":"page"},{"location":"20-tutorials/","page":"Tutorials","title":"Tutorials","text":"df_consumers.index = 1:size(df_consumers, 1) # overwrites existing index","category":"page"},{"location":"20-tutorials/","page":"Tutorials","title":"Tutorials","text":"Now we can filter this DataFrame. 
Note that the names in the stored dataframes are defined as Symbol.","category":"page"},{"location":"20-tutorials/","page":"Tutorials","title":"Tutorials","text":"a = \"Asgard_E_demand\"\ndf = filter(\n row -> row.asset == a && row.rep_period == rp,\n df_consumers,\n view = true,\n)\nvalue.(energy_problem.model[:consumer_balance][df.index])","category":"page"},{"location":"20-tutorials/","page":"Tutorials","title":"Tutorials","text":"Here value. (i.e., broadcasting) was used instead of the vector comprehension from previous examples just to show that it also works.","category":"page"},{"location":"20-tutorials/","page":"Tutorials","title":"Tutorials","text":"The value of the constraint is obtained by looking only at the part with variables. So a constraint like 2x + 3y - 1 <= 4 would return the value of 2x + 3y.","category":"page"},{"location":"20-tutorials/#Writing-the-output-to-CSV","page":"Tutorials","title":"Writing the output to CSV","text":"","category":"section"},{"location":"20-tutorials/","page":"Tutorials","title":"Tutorials","text":"To save the solution to CSV files, you can use save_solution_to_file:","category":"page"},{"location":"20-tutorials/","page":"Tutorials","title":"Tutorials","text":"mkdir(\"outputs\")\nsave_solution_to_file(\"outputs\", energy_problem)","category":"page"},{"location":"20-tutorials/#Plotting","page":"Tutorials","title":"Plotting","text":"","category":"section"},{"location":"20-tutorials/","page":"Tutorials","title":"Tutorials","text":"In the previous sections, we have shown how to create vectors such as the one for flows. If you want simple plots, you can plot the vectors directly using any package you like.","category":"page"},{"location":"20-tutorials/","page":"Tutorials","title":"Tutorials","text":"If you would like more custom plots, check out TulipaPlots.jl, under development, which provides tailor-made plots for TulipaEnergyModel.jl.","category":"page"},{"location":"90-contributing/#contributing","page":"Contributing Guidelines","title":"Contributing Guidelines","text":"","category":"section"},{"location":"90-contributing/","page":"Contributing Guidelines","title":"Contributing Guidelines","text":"Great that you want to contribute to the development of Tulipa! Please read these guidelines and our Developer Documentation to get you started.","category":"page"},{"location":"90-contributing/#GitHub-Rules-of-Engagement","page":"Contributing Guidelines","title":"GitHub Rules of Engagement","text":"","category":"section"},{"location":"90-contributing/","page":"Contributing Guidelines","title":"Contributing Guidelines","text":"If you want to discuss something that isn't immediately actionable, post under Discussions. Convert it to an issue once it's actionable.\nAll PR's should have an associated issue (unless it's a very minor fix).\nAll issues should have 1 Type and 1+ Zone labels (unless Type: epic).\nAssign yourself to issues you want to address. 
Consider if you will be able to work on them in the near future (this week) — if not, leave them available for someone else.\nSet the issue Status to \"In Progress\" when you have started working on it.\nWhen finalizing a pull request, set the Status to \"Ready for Review.\" If someone specific needs to review it, assign them as the reviewer (otherwise anyone can review).\nIssues addressed by merged PRs will automatically move to Done.\nIf you want to discuss an issue at the next group meeting (or just get some attention), mark it with the \"question\" label.\nIssues without updates for 60 days (and PRs without updates in 30 days) will be labelled as \"stale\" and filtered out of view. There is a Stale project board to view and revive these.","category":"page"},{"location":"90-contributing/#Contributing-Workflow","page":"Contributing Guidelines","title":"Contributing Workflow","text":"","category":"section"},{"location":"90-contributing/","page":"Contributing Guidelines","title":"Contributing Guidelines","text":"Fork → Branch → Code → Push → Pull → Squash & Merge","category":"page"},{"location":"90-contributing/","page":"Contributing Guidelines","title":"Contributing Guidelines","text":"Fork the repository\nCreate a new branch (in your fork)\nDo fantastic coding\nPush to your fork\nCreate a pull request from your fork to the main repository\n(After review) Squash and merge","category":"page"},{"location":"90-contributing/","page":"Contributing Guidelines","title":"Contributing Guidelines","text":"For a step-by-step guide to these steps, see our Developer Documentation.","category":"page"},{"location":"90-contributing/","page":"Contributing Guidelines","title":"Contributing Guidelines","text":"We use this workflow in our quest to achieve the Utopic Git History.","category":"page"},{"location":"40-formulation/#formulation","page":"Mathematical Formulation","title":"Mathematical Formulation","text":"","category":"section"},{"location":"40-formulation/","page":"Mathematical Formulation","title":"Mathematical Formulation","text":"This section shows the mathematical formulation of TulipaEnergyModel.jl, assuming that the temporal definition of timesteps is the same for all the elements in the model (e.g., hourly). 
The concepts section shows how the model handles the flexible temporal resolution of assets and flows in the model.","category":"page"},{"location":"40-formulation/","page":"Mathematical Formulation","title":"Mathematical Formulation","text":"Pages = [\"40-formulation.md\"]\nDepth = 3","category":"page"},{"location":"40-formulation/#math-sets","page":"Mathematical Formulation","title":"Sets","text":"","category":"section"},{"location":"40-formulation/#Sets-for-Assets","page":"Mathematical Formulation","title":"Sets for Assets","text":"","category":"section"},{"location":"40-formulation/","page":"Mathematical Formulation","title":"Mathematical Formulation","text":"Name Description Elements Superset Notes\nmathcalA Energy assets a in mathcalA The Energy asset types (i.e., consumer, producer, storage, hub, and conversion) are mutually exclusive\nmathcalA^textc Consumer energy assets mathcalA^textc subseteq mathcalA \nmathcalA^textp Producer energy assets mathcalA^textp subseteq mathcalA \nmathcalA^texts Storage energy assets mathcalA^texts subseteq mathcalA \nmathcalA^texth Hub energy assets (e.g., transshipment) mathcalA^texth subseteq mathcalA \nmathcalA^textcv Conversion energy assets mathcalA^textcv subseteq mathcalA ","category":"page"},{"location":"40-formulation/","page":"Mathematical Formulation","title":"Mathematical Formulation","text":"In addition, the following asset sets represent methods for incorporating additional variables and constraints in the model.","category":"page"},{"location":"40-formulation/","page":"Mathematical Formulation","title":"Mathematical Formulation","text":"Name Description Elements Superset Notes\nmathcalA^texti Energy assets with investment method mathcalA^texti subseteq mathcalA \nmathcalA^textss Energy assets with seasonal method mathcalA^textss subseteq mathcalA This set contains assets that use the seasonal method method. Please visit the how-to sections for seasonal storage and maximum/minimum outgoing energy limit to learn how to set up this feature.\nmathcalA^textse Storage energy assets with energy method mathcalA^textse subseteq mathcalA^texts This set contains storage assets that use investment energy method. Please visit the how-to section to learn how to set up this feature.\nmathcalA^textsb Storage energy assets with binary method mathcalA^textsb subseteq mathcalA^texts setminus mathcalA^textss This set contains storage assets that use an extra binary variable to avoid charging and discharging simultaneously. Please visit the how-to section to learn how to set up this feature.\nmathcalA^textmax e Energy assets with maximum outgoing energy method mathcalA^textmax e subseteq mathcalA This set contains assets that use the maximum outgoing energy method. Please visit the how-to section to learn how to set up this feature.\nmathcalA^textmin e Energy assets with minimum outgoing energy method mathcalA^textmin e subseteq mathcalA This set contains assets that use the minimum outgoing energy method. Please visit the how-to section to learn how to set up this feature.\nmathcalA^textuc Energy assets with unit commitment method mathcalA^textuc subseteq mathcalA^textcv cup mathcalA^textp This set contains conversion and production assets that have a unit commitment method. Please visit the how-to section to learn how to set up this feature.\nmathcalA^textuc basic Energy assets with a basic unit commitment method mathcalA^textuc basic subseteq mathcalA^textuc This set contains the assets that have a basic unit commitment method. 
Please visit the how-to section to learn how to set up this feature.\nmathcalA^textramp Energy assets with ramping method mathcalA^textramp subseteq mathcalA^textcv cup mathcalA^textp This set contains conversion and production assets that have a ramping method. Please visit the how-to section to learn how to set up this feature.","category":"page"},{"location":"40-formulation/#Sets-for-Flows","page":"Mathematical Formulation","title":"Sets for Flows","text":"","category":"section"},{"location":"40-formulation/","page":"Mathematical Formulation","title":"Mathematical Formulation","text":"Name Description Elements Superset Notes\nmathcalF Flow connections between two assets f in mathcalF \nmathcalF^textin_a Set of flows going into asset a mathcalF^textin_a subseteq mathcalF \nmathcalF^textout_a Set of flows going out of asset a mathcalF^textout_a subseteq mathcalF ","category":"page"},{"location":"40-formulation/","page":"Mathematical Formulation","title":"Mathematical Formulation","text":"In addition, the following flow sets represent methods for incorporating additional variables and constraints in the model.","category":"page"},{"location":"40-formulation/","page":"Mathematical Formulation","title":"Mathematical Formulation","text":"Name Description Elements Superset Notes\nmathcalF^textt Flow between two assets with a transport method mathcalF^textt subseteq mathcalF \nmathcalF^textti Transport flow with investment method mathcalF^textti subseteq mathcalF^textt ","category":"page"},{"location":"40-formulation/#Sets-for-Temporal-Structures","page":"Mathematical Formulation","title":"Sets for Temporal Structures","text":"","category":"section"},{"location":"40-formulation/","page":"Mathematical Formulation","title":"Mathematical Formulation","text":"Name Description Elements Superset Notes\nmathcalP Periods in the timeframe p in mathcalP mathcalP subset mathbbN \nmathcalK Representative periods (rp) k in mathcalK mathcalK subset mathbbN mathcalK does not have to be a subset of mathcalP\nmathcalB_k Timesteps blocks within a representative period k b_k in mathcalB_k mathcalB_k is a partition of timesteps in a representative period k","category":"page"},{"location":"40-formulation/#Sets-for-Groups","page":"Mathematical Formulation","title":"Sets for Groups","text":"","category":"section"},{"location":"40-formulation/","page":"Mathematical Formulation","title":"Mathematical Formulation","text":"Name Description Elements Superset Notes\nmathcalG^texta Groups of energy assets g in mathcalG^texta ","category":"page"},{"location":"40-formulation/","page":"Mathematical Formulation","title":"Mathematical Formulation","text":"In addition, the following subsets represent methods for incorporating additional constraints in the model.","category":"page"},{"location":"40-formulation/","page":"Mathematical Formulation","title":"Mathematical Formulation","text":"Name Description Elements Superset Notes\nmathcalG^textai Group of assets that share min/max investment limit mathcalG^textai subseteq mathcalG^texta This set contains assets that have a group investment limit. 
Please visit the how-to section to learn how to set up this feature.","category":"page"},{"location":"40-formulation/#math-parameters","page":"Mathematical Formulation","title":"Parameters","text":"","category":"section"},{"location":"40-formulation/#Parameters-for-Assets","page":"Mathematical Formulation","title":"Parameters for Assets","text":"","category":"section"},{"location":"40-formulation/#General-Parameters-for-Assets","page":"Mathematical Formulation","title":"General Parameters for Assets","text":"","category":"section"},{"location":"40-formulation/","page":"Mathematical Formulation","title":"Mathematical Formulation","text":"Name Domain Domains of Indices Description Units\np^textinv cost_a mathbbR_+ a in mathcalA Investment cost of a unit of asset a [kEUR/MW/year]\np^textinv limit_a mathbbR_+ a in mathcalA Investment potential of asset a [MW]\np^textcapacity_a mathbbR_+ a in mathcalA Capacity per unit of asset a [MW]\np^textinit units_a mathbbZ_+ a in mathcalA Initial number of units of asset a [units]\np^textavailability profile_akb_k mathbbR_+ a in mathcalA, k in mathcalK, b_k in mathcalB_k Availability profile of asset a in the representative period k and timestep block b_k [p.u.]\np^textgroup_a mathcalG^texta a in mathcalA Group g to which the asset a belongs [-]","category":"page"},{"location":"40-formulation/#Extra-Parameters-for-Consumer-Assets","page":"Mathematical Formulation","title":"Extra Parameters for Consumer Assets","text":"","category":"section"},{"location":"40-formulation/","page":"Mathematical Formulation","title":"Mathematical Formulation","text":"Name Domain Domains of Indices Description Units\np^textpeak demand_a mathbbR_+ a in mathcalA^textc Peak demand of consumer asset a [MW]\np^textdemand profile_akb_k mathbbR_+ a in mathcalA^textc, k in mathcalK, b_k in mathcalB_k Demand profile of consumer asset a in the representative period k and timestep block b_k [p.u.]","category":"page"},{"location":"40-formulation/#Extra-Parameters-for-Storage-Assets","page":"Mathematical Formulation","title":"Extra Parameters for Storage Assets","text":"","category":"section"},{"location":"40-formulation/","page":"Mathematical Formulation","title":"Mathematical Formulation","text":"Name Domain Domains of Indices Description Units\np^textinit storage capacity_a mathbbR_+ a in mathcalA^texts Initial storage capacity of storage asset a [MWh]\np^textinit storage level_a mathbbR_+ a in mathcalA^texts Initial storage level of storage asset a [MWh]\np^textinflows_akb_k mathbbR_+ a in mathcalA^texts, k in mathcalK, b_k in mathcalB_k Inflows of storage asset a in the representative period k and timestep block b_k [MWh]\np^textinv cost energy_a mathbbR_+ a in mathcalA^textse Investment cost of a energy unit of asset a [kEUR/MWh/year]\np^textinv limit energy_a mathbbR_+ a in mathcalA^textse Investment energy potential of asset a [MWh]\np^textenergy capacity_a mathbbR_+ a in mathcalA^textse Energy capacity of a unit of investment of the asset a [MWh]\np^textenergy to power ratio_a mathbbR_+ a in mathcalA^texts setminus mathcalA^textse Energy to power ratio of storage asset a [h]\np^textmax intra level_akb_k mathbbR_+ a in mathcalA^texts setminus mathcalA^textss, k in mathcalK, b_k in mathcalB_k Maximum intra-storage level profile of storage asset a in representative period k and timestep block b_k [p.u.]\np^textmin intra level_akb_k mathbbR_+ a in mathcalA^texts setminus mathcalA^textss, k in mathcalK, b_k in mathcalB_k Minimum intra-storage level profile of storage asset a in 
representative period k and timestep block b_k [p.u.]\np^textmax inter level_ap mathbbR_+ a in mathcalA^textss, p in mathcalP Maximum inter-storage level profile of storage asset a in the period p of the timeframe [p.u.]\np^textmin inter level_ap mathbbR_+ a in mathcalA^textss, p in mathcalP Minimum inter-storage level profile of storage asset a in the period p of the timeframe [p.u.]","category":"page"},{"location":"40-formulation/#Extra-Parameters-for-Energy-Constraints","page":"Mathematical Formulation","title":"Extra Parameters for Energy Constraints","text":"","category":"section"},{"location":"40-formulation/","page":"Mathematical Formulation","title":"Mathematical Formulation","text":"Name Domain Domains of Indices Description Units\np^textmin inter profile_ap mathbbR_+ a in mathcalA^textmin e, p in mathcalP Minimum outgoing inter-temporal energy profile of asset a in the period p of the timeframe [p.u.]\np^textmax inter profile_ap mathbbR_+ a in mathcalA^textmax e, p in mathcalP Maximum outgoing inter-temporal energy profile of asset a in the period p of the timeframe [p.u.]\np^textmax energy_ap mathbbR_+ a in mathcalA^textmax e Maximum outgoing inter-temporal energy value of asset a [MWh]\np^textmin energy_ap mathbbR_+ a in mathcalA^textmin e Minimum outgoing inter-temporal energy value of asset a [MWh]","category":"page"},{"location":"40-formulation/#Extra-Parameters-for-Producers-and-Conversion-Assets","page":"Mathematical Formulation","title":"Extra Parameters for Producers and Conversion Assets","text":"","category":"section"},{"location":"40-formulation/","page":"Mathematical Formulation","title":"Mathematical Formulation","text":"Name Domain Domains of Indices Description Units\np^textmin operating point_a mathbbR_+ a in mathcalA^textuc Minimum operating point or minimum stable generation level defined as a portion of the capacity of asset a [p.u.]\np^textunits on cost_a mathbbR_+ a in mathcalA^textuc Objective function coefficient on units_on variable. 
e.g., no-load cost or idling cost of asset a [kEUR/h/units]\np^textmax ramp up_a mathbbR_+ a in mathcalA^textramp Maximum ramping up rate as a portion of the capacity of asset a [p.u./h]\np^textmax ramp down_a mathbbR_+ a in mathcalA^textramp Maximum ramping down rate as a portion of the capacity of asset a [p.u./h]","category":"page"},{"location":"40-formulation/#Parameters-for-Flows","page":"Mathematical Formulation","title":"Parameters for Flows","text":"","category":"section"},{"location":"40-formulation/","page":"Mathematical Formulation","title":"Mathematical Formulation","text":"Name Domain Domains of Indices Description Units\np^textvariable cost_f mathbbR_+ f in mathcalF Variable cost of flow f [kEUR/MWh]\np^texteff_f mathbbR_+ f in mathcalF Efficiency of flow f [p.u.]\np^textinv cost_f mathbbR_+ f in mathcalF^textt Investment cost of transport flow f [kEUR/MW/year]\np^textinv limit_f mathbbR_+ f in mathcalF^textt Investment potential of flow f [MW]\np^textcapacity_f mathbbR_+ f in mathcalF^textt Capacity per unit of investment of transport flow f (both exports and imports) [MW]\np^textinit export capacity_f mathbbR_+ f in mathcalF^textt Initial export capacity of transport flow f [MW]\np^textinit import capacity_f mathbbR_+ f in mathcalF^textt Initial import capacity of transport flow f [MW]\np^textavailability profile_fkb_k mathbbR_+ a in mathcalF, k in mathcalK, b_k in mathcalB_k Availability profile of flow f in the representative period k and timestep block b_k [p.u.]","category":"page"},{"location":"40-formulation/#Parameters-for-Temporal-Structures","page":"Mathematical Formulation","title":"Parameters for Temporal Structures","text":"","category":"section"},{"location":"40-formulation/","page":"Mathematical Formulation","title":"Mathematical Formulation","text":"Name Domain Domains of Indices Description Units\np^textduration_b_k mathbbR_+ b_k in mathcalB_k Duration of the timestep blocks b_k [h]\np^textrp weight_k mathbbR_+ k in mathcalK Weight of representative period k [-]\np^textmap_pk mathbbR_+ p in mathcalP, k in mathcalK Map with the weight of representative period k in period p [-]","category":"page"},{"location":"40-formulation/#Parameters-for-Groups","page":"Mathematical Formulation","title":"Parameters for Groups","text":"","category":"section"},{"location":"40-formulation/","page":"Mathematical Formulation","title":"Mathematical Formulation","text":"Name Domain Domains of Indices Description Units\np^textmin invest limit_g mathbbR_+ g in mathcalG^textai Minimum investment limit (potential) of group g [MW]\np^textmax invest limit_g mathbbR_+ g in mathcalG^textai Maximum investment limit (potential) of group g [MW]","category":"page"},{"location":"40-formulation/#math-variables","page":"Mathematical Formulation","title":"Variables","text":"","category":"section"},{"location":"40-formulation/","page":"Mathematical Formulation","title":"Mathematical Formulation","text":"Name Domain Domains of Indices Description Units\nv^textflow_fkb_k mathbbR f in mathcalF, k in mathcalK, b_k in mathcalB_k Flow f between two assets in representative period k and timestep block b_k [MW]\nv^textinv_a mathbbZ_+ a in mathcalA^texti Number of invested units of asset a [units]\nv^textinv energy_a mathbbZ_+ a in mathcalA^texti cap mathcalA^textse Number of invested units of the energy component of the storage asset a that use energy method [units]\nv^textinv_f mathbbZ_+ f in mathcalF^textti Number of invested units of capacity increment of transport flow f 
[units]\nv^textintra-storage_akb_k mathbbR_+ a in mathcalA^texts setminus mathcalA^textss, k in mathcalK, b_k in mathcalB_k Intra storage level (within a representative period) for storage asset a, representative period k, and timestep block b_k [MWh]\nv^textinter-storage_ap mathbbR_+ a in mathcalA^textss, p in mathcalP Inter storage level (between representative periods) for storage asset a and period p [MWh]\nv^textis charging_akb_k 0 1 a in mathcalA^textsb, k in mathcalK, b_k in mathcalB_k If an storage asset a is charging or not in representative period k and timestep block b_k [-]\nv^textunits on_akb_k mathbbZ_+ a in mathcalA^textuc, k in mathcalK, b_k in mathcalB_k Number of units ON of asset a in representative period k and timestep block b_k [units]","category":"page"},{"location":"40-formulation/#math-objective-function","page":"Mathematical Formulation","title":"Objective Function","text":"","category":"section"},{"location":"40-formulation/","page":"Mathematical Formulation","title":"Mathematical Formulation","text":"Objective function:","category":"page"},{"location":"40-formulation/","page":"Mathematical Formulation","title":"Mathematical Formulation","text":"beginaligned\ntextminimize quad assets_investment_cost + flows_investment_cost \n + flows_variable_cost + unit_on_cost\nendaligned","category":"page"},{"location":"40-formulation/","page":"Mathematical Formulation","title":"Mathematical Formulation","text":"Where:","category":"page"},{"location":"40-formulation/","page":"Mathematical Formulation","title":"Mathematical Formulation","text":"beginaligned\nassets_investment_cost = sum_a in mathcalA^texti p^textinv cost_a cdot p^textcapacity_a cdot v^textinv_a + sum_a in mathcalA^textse cap mathcalA^texti p^textinv cost energy_a cdot p^textenergy capacity_a cdot v^textinv energy_a \nflows_investment_cost = sum_f in mathcalF^textti p^textinv cost_f cdot p^textcapacity_f cdot v^textinv_f \nflows_variable_cost = sum_f in mathcalF sum_k in mathcalK sum_b_k in mathcalB_k p^textrp weight_k cdot p^textvariable cost_f cdot p^textduration_b_k cdot v^textflow_fkb_k \nunit_on_cost = sum_a in mathcalA^textuc sum_k in mathcalK sum_b_k in mathcalB_k p^textrp weight_k cdot p^textunits on cost_a cdot p^textduration_b_k cdot v^textunits on_akb_k\nendaligned","category":"page"},{"location":"40-formulation/#math-constraints","page":"Mathematical Formulation","title":"Constraints","text":"","category":"section"},{"location":"40-formulation/#cap-constraints","page":"Mathematical Formulation","title":"Capacity Constraints","text":"","category":"section"},{"location":"40-formulation/#Maximum-Output-Flows-Limit","page":"Mathematical Formulation","title":"Maximum Output Flows Limit","text":"","category":"section"},{"location":"40-formulation/","page":"Mathematical Formulation","title":"Mathematical Formulation","text":"beginaligned\nsum_f in mathcalF^textout_a v^textflow_fkb_k leq p^textavailability profile_akb_k cdot p^textcapacity_a cdot left(p^textinit units_a + v^textinv_a right) quad\n forall a in mathcalA^textcv cup left(mathcalA^texts setminus mathcalA^textsb right) cup mathcalA^textp forall k in mathcalKforall b_k in mathcalB_k\nendaligned","category":"page"},{"location":"40-formulation/","page":"Mathematical Formulation","title":"Mathematical Formulation","text":"Storage assets using the method to avoid charging and discharging simultaneously, i.e., a in mathcalA^textsb, use the following constraints instead of the previous 
one:","category":"page"},{"location":"40-formulation/","page":"Mathematical Formulation","title":"Mathematical Formulation","text":"beginaligned\nsum_f in mathcalF^textout_a v^textflow_fkb_k leq p^textavailability profile_akb_k cdot left(p^textcapacity_a cdot p^textinit units_a + p^textinv limit_a right) cdot left(1 - v^textis charging_akb_k right) quad\n forall a in mathcalA^textsb forall k in mathcalKforall b_k in mathcalB_k\nendaligned","category":"page"},{"location":"40-formulation/","page":"Mathematical Formulation","title":"Mathematical Formulation","text":"beginaligned\nsum_f in mathcalF^textout_a v^textflow_fkb_k leq p^textavailability profile_akb_k cdot p^textcapacity_a cdot left(p^textinit units_a cdot left(1 - v^textis charging_akb_k right) + v^textinv_a right) quad\n forall a in mathcalA^textsb forall k in mathcalKforall b_k in mathcalB_k\nendaligned","category":"page"},{"location":"40-formulation/#Maximum-Input-Flows-Limit","page":"Mathematical Formulation","title":"Maximum Input Flows Limit","text":"","category":"section"},{"location":"40-formulation/","page":"Mathematical Formulation","title":"Mathematical Formulation","text":"beginaligned\nsum_f in mathcalF^textin_a v^textflow_fkb_k leq p^textavailability profile_akb_k cdot p^textcapacity_a cdot left(p^textinit units_a + v^textinv_a right) quad\n forall a in mathcalA^texts setminus mathcalA^textsb forall k in mathcalKforall b_k in mathcalB_k\nendaligned","category":"page"},{"location":"40-formulation/","page":"Mathematical Formulation","title":"Mathematical Formulation","text":"Storage assets using the method to avoid charging and discharging simultaneously, i.e., a in mathcalA^textsb, use the following constraints instead of the previous one:","category":"page"},{"location":"40-formulation/","page":"Mathematical Formulation","title":"Mathematical Formulation","text":"beginaligned\nsum_f in mathcalF^textin_a v^textflow_fkb_k leq p^textavailability profile_akb_k cdot left(p^textcapacity_a cdot p^textinit units_a + p^textinv limit_a right) cdot v^textis charging_akb_k quad forall a in mathcalA^textsb forall k in mathcalKforall b_k in mathcalB_k\nendaligned","category":"page"},{"location":"40-formulation/","page":"Mathematical Formulation","title":"Mathematical Formulation","text":"beginaligned\nsum_f in mathcalF^textin_a v^textflow_fkb_k leq p^textavailability profile_akb_k cdot p^textcapacity_a cdot left(p^textinit units_a cdot v^textis charging_akb_k + v^textinv_a right) quad forall a in mathcalA^textsb forall k in mathcalKforall b_k in mathcalB_k\nendaligned","category":"page"},{"location":"40-formulation/#Lower-Limit-for-Flows-that-are-Not-Transport-Assets","page":"Mathematical Formulation","title":"Lower Limit for Flows that are Not Transport Assets","text":"","category":"section"},{"location":"40-formulation/","page":"Mathematical Formulation","title":"Mathematical Formulation","text":"v^textflow_fkb_k geq 0 quad forall f notin mathcalF^textt forall k in mathcalK forall b_k in mathcalB_k","category":"page"},{"location":"40-formulation/#uc-constraints","page":"Mathematical Formulation","title":"Unit Commitment Constraints","text":"","category":"section"},{"location":"40-formulation/","page":"Mathematical Formulation","title":"Mathematical Formulation","text":"Production and conversion assets within the set mathcalA^textuc will contain the unit commitment constraints in the model. These constraints are based on the work of Morales-España et al. (2013) and Morales-España et al. 
(2014).","category":"page"},{"location":"40-formulation/","page":"Mathematical Formulation","title":"Mathematical Formulation","text":"The current version of the code only incorporates a basic unit commitment version of the constraints (i.e., utilizing only the unit commitment variable v^textunits on). However, upcoming versions will include more detailed constraints, incorporating startup and shutdown variables.","category":"page"},{"location":"40-formulation/","page":"Mathematical Formulation","title":"Mathematical Formulation","text":"For the unit commitment constraints, we define the following expression for the flow that is above the minimum operating point of the asset:","category":"page"},{"location":"40-formulation/","page":"Mathematical Formulation","title":"Mathematical Formulation","text":"e^textflow above min_akb_k = sum_f in mathcalF^textout_a v^textflow_fkb_k - p^textavailability profile_akb_k cdot p^textcapacity_a cdot p^textmin operating point_a cdot v^texton_akb_k quad\n forall a in mathcalA^textuc forall k in mathcalKforall b_k in mathcalB_k","category":"page"},{"location":"40-formulation/#Limit-to-the-Units-On-Variable","page":"Mathematical Formulation","title":"Limit to the Units On Variable","text":"","category":"section"},{"location":"40-formulation/","page":"Mathematical Formulation","title":"Mathematical Formulation","text":"v^texton_akb_k leq p^textinit units_a + v^textinv_a quad\n forall a in mathcalA^textuc forall k in mathcalKforall b_k in mathcalB_k","category":"page"},{"location":"40-formulation/#Maximum-Output-Flow-Above-the-Minimum-Operating-Point","page":"Mathematical Formulation","title":"Maximum Output Flow Above the Minimum Operating Point","text":"","category":"section"},{"location":"40-formulation/","page":"Mathematical Formulation","title":"Mathematical Formulation","text":"e^textflow above min_akb_k leq p^textavailability profile_akb_k cdot p^textcapacity_a cdot left(1 - p^textmin operating point_a right) cdot v^texton_akb_k quad\n forall a in mathcalA^textuc basic forall k in mathcalKforall b_k in mathcalB_k","category":"page"},{"location":"40-formulation/#Minimum-Output-Flow-Above-the-Minimum-Operating-Point","page":"Mathematical Formulation","title":"Minimum Output Flow Above the Minimum Operating Point","text":"","category":"section"},{"location":"40-formulation/","page":"Mathematical Formulation","title":"Mathematical Formulation","text":"e^textflow above min_akb_k geq 0 quad\n forall a in mathcalA^textuc basic forall k in mathcalKforall b_k in mathcalB_k","category":"page"},{"location":"40-formulation/#ramp-constraints","page":"Mathematical Formulation","title":"Ramping Constraints","text":"","category":"section"},{"location":"40-formulation/","page":"Mathematical Formulation","title":"Mathematical Formulation","text":"Ramping constraints restrict the rate at which the output flow of a production or conversion asset can change. If the asset is part of the unit commitment set (e.g., mathcalA^textuc), the ramping limits apply to the flow above the minimum output, but if it is not, the ramping limits apply to the total output flow.","category":"page"},{"location":"40-formulation/","page":"Mathematical Formulation","title":"Mathematical Formulation","text":"Ramping constraints that take into account unit commitment variables are based on the work done by Damcı-Kurt et. al (2016). 
Also, please note that since the current version of the code only handles the basic unit commitment implementation, the ramping constraints are applied to the assets in the set mathcalA^textuc basic.","category":"page"},{"location":"40-formulation/","page":"Mathematical Formulation","title":"Mathematical Formulation","text":"Duration parameter: The following constraints are multiplied by p^textduration_b_k on the right-hand side to adjust for the duration of the timesteps since the ramp parameters are defined as rates. This assumption is based on the idea that all timesteps are the same in this section, which simplifies the formulation. However, in a flexible temporal resolution context, this may not hold true, and the duration needs to be the minimum duration of all the outgoing flows at the timestep block b_k. For more information, please visit the concept section on flexible time resolution.","category":"page"},{"location":"40-formulation/#Maximum-Ramp-Up-Rate-Limit-WITH-Unit-Commitment-Method","page":"Mathematical Formulation","title":"Maximum Ramp-Up Rate Limit WITH Unit Commitment Method","text":"","category":"section"},{"location":"40-formulation/","page":"Mathematical Formulation","title":"Mathematical Formulation","text":"e^textflow above min_akb_k - e^textflow above min_akb_k-1 leq p^textavailability profile_akb_k cdot p^textcapacity_a cdot p^textmax ramp up_a cdot p^textduration_b_k cdot v^texton_akb_k quad\n forall a in left(mathcalA^textramp cap mathcalA^textuc basic right) forall k in mathcalKforall b_k in mathcalB_k","category":"page"},{"location":"40-formulation/#Maximum-Ramp-Down-Rate-Limit-WITH-Unit-Commmitment-Method","page":"Mathematical Formulation","title":"Maximum Ramp-Down Rate Limit WITH Unit Commmitment Method","text":"","category":"section"},{"location":"40-formulation/","page":"Mathematical Formulation","title":"Mathematical Formulation","text":"e^textflow above min_akb_k - e^textflow above min_akb_k-1 geq - p^textavailability profile_akb_k cdot p^textcapacity_a cdot p^textmax ramp down_a cdot p^textduration_b_k cdot v^texton_akb_k-1 quad\n forall a in left(mathcalA^textramp cap mathcalA^textuc basic right) forall k in mathcalKforall b_k in mathcalB_k","category":"page"},{"location":"40-formulation/#Maximum-Ramp-Up-Rate-Limit-WITHOUT-Unit-Commitment-Method","page":"Mathematical Formulation","title":"Maximum Ramp-Up Rate Limit WITHOUT Unit Commitment Method","text":"","category":"section"},{"location":"40-formulation/","page":"Mathematical Formulation","title":"Mathematical Formulation","text":"sum_f in mathcalF^textout_a v^textflow_fkb_k - sum_f in mathcalF^textout_a v^textflow_fkb_k-1 leq p^textmax ramp up_a cdot p^textduration_b_k cdot p^textavailability profile_akb_k cdot p^textcapacity_a cdot left(p^textinit units_a + v^textinv_a right) quad\n forall a in left(mathcalA^textramp setminus mathcalA^textuc basic right) forall k in mathcalKforall b_k in mathcalB_k","category":"page"},{"location":"40-formulation/#Maximum-Ramp-Down-Rate-Limit-WITHOUT-Unit-Commitment-Method","page":"Mathematical Formulation","title":"Maximum Ramp-Down Rate Limit WITHOUT Unit Commitment Method","text":"","category":"section"},{"location":"40-formulation/","page":"Mathematical Formulation","title":"Mathematical Formulation","text":"sum_f in mathcalF^textout_a v^textflow_fkb_k - sum_f in mathcalF^textout_a v^textflow_fkb_k-1 geq - p^textmax ramp down_a cdot p^textduration_b_k cdot p^textavailability profile_akb_k cdot p^textcapacity_a cdot left(p^textinit units_a + v^textinv_a right) 
quad\n forall a in left(mathcalA^textramp setminus mathcalA^textuc basic right) forall k in mathcalKforall b_k in mathcalB_k","category":"page"},{"location":"40-formulation/#Constraints-for-Energy-Consumer-Assets","page":"Mathematical Formulation","title":"Constraints for Energy Consumer Assets","text":"","category":"section"},{"location":"40-formulation/#Balance-Constraint-for-Consumers","page":"Mathematical Formulation","title":"Balance Constraint for Consumers","text":"","category":"section"},{"location":"40-formulation/","page":"Mathematical Formulation","title":"Mathematical Formulation","text":"The balance constraint sense depends on the method selected in the asset file's parameter consumer_balance_sense. The default value is =, but the user can choose geq as an option.","category":"page"},{"location":"40-formulation/","page":"Mathematical Formulation","title":"Mathematical Formulation","text":"beginaligned\nsum_f in mathcalF^textin_a v^textflow_fkb_k - sum_f in mathcalF^textout_a v^textflow_fkb_k leftbeginarrayl = geq endarrayright p^textdemand profile_akb_k cdot p^textpeak demand_a quad forall a in mathcalA^textc forall k in mathcalKforall b_k in mathcalB_k\nendaligned","category":"page"},{"location":"40-formulation/#Constraints-for-Energy-Storage-Assets","page":"Mathematical Formulation","title":"Constraints for Energy Storage Assets","text":"","category":"section"},{"location":"40-formulation/","page":"Mathematical Formulation","title":"Mathematical Formulation","text":"There are two types of constraints for energy storage assets: intra-temporal and inter-temporal. Intra-temporal constraints impose limits inside a representative period, while inter-temporal constraints combine information from several representative periods (e.g., to model seasonal storage). For more information on this topic, refer to the concepts section or Tejada-Arango et al. (2018) and Tejada-Arango et al. (2019).","category":"page"},{"location":"40-formulation/","page":"Mathematical Formulation","title":"Mathematical Formulation","text":"In addition, we define the following expression to determine the energy investment limit of the storage assets. 
This expression takes two different forms depending on whether the storage asset belongs to the set mathcalA^textse or not.","category":"page"},{"location":"40-formulation/","page":"Mathematical Formulation","title":"Mathematical Formulation","text":"Investment energy method:","category":"page"},{"location":"40-formulation/","page":"Mathematical Formulation","title":"Mathematical Formulation","text":"e^textenergy inv limit_a = p^textenergy capacity_a cdot v^textinv energy_a quad forall a in mathcalA^texti cap mathcalA^textse","category":"page"},{"location":"40-formulation/","page":"Mathematical Formulation","title":"Mathematical Formulation","text":"Fixed energy-to-power ratio method:","category":"page"},{"location":"40-formulation/","page":"Mathematical Formulation","title":"Mathematical Formulation","text":"e^textenergy inv limit_a = p^textenergy to power ratio_a cdot p^textcapacity_a cdot v^textinv_a quad forall a in mathcalA^texti cap (mathcalA^texts setminus mathcalA^textse)","category":"page"},{"location":"40-formulation/#intra-storage-balance","page":"Mathematical Formulation","title":"Intra-temporal Constraint for Storage Balance","text":"","category":"section"},{"location":"40-formulation/","page":"Mathematical Formulation","title":"Mathematical Formulation","text":"beginaligned\nv^textintra-storage_akb_k = v^textintra-storage_akb_k-1 + p^textinflows_akb_k + sum_f in mathcalF^textin_a p^texteff_f cdot p^textduration_b_k cdot v^textflow_fkb_k - sum_f in mathcalF^textout_a frac1p^texteff_f cdot p^textduration_b_k cdot v^textflow_fkb_k quad\n forall a in mathcalA^texts setminus mathcalA^textss forall k in mathcalKforall b_k in mathcalB_k\nendaligned","category":"page"},{"location":"40-formulation/#Intra-temporal-Constraint-for-Maximum-Storage-Level-Limit","page":"Mathematical Formulation","title":"Intra-temporal Constraint for Maximum Storage Level Limit","text":"","category":"section"},{"location":"40-formulation/","page":"Mathematical Formulation","title":"Mathematical Formulation","text":"v^textintra-storage_akb_k leq p^textmax intra level_akb_k cdot (p^textinit storage capacity_a + e^textenergy inv limit_a) quad forall a in mathcalA^texts setminus mathcalA^textss forall k in mathcalKforall b_k in mathcalB_k","category":"page"},{"location":"40-formulation/#Intra-temporal-Constraint-for-Minimum-Storage-Level-Limit","page":"Mathematical Formulation","title":"Intra-temporal Constraint for Minimum Storage Level Limit","text":"","category":"section"},{"location":"40-formulation/","page":"Mathematical Formulation","title":"Mathematical Formulation","text":"v^textintra-storage_akb_k geq p^textmin intra level_akb_k cdot (p^textinit storage capacity_a + e^textenergy inv limit_a) quad forall a in mathcalA^texts setminus mathcalA^textss forall k in mathcalKforall b_k in mathcalB_k","category":"page"},{"location":"40-formulation/#Intra-temporal-Cycling-Constraint","page":"Mathematical Formulation","title":"Intra-temporal Cycling Constraint","text":"","category":"section"},{"location":"40-formulation/","page":"Mathematical Formulation","title":"Mathematical Formulation","text":"The cycling constraint for the intra-temporal constraints links the first timestep block (b^textfirst_k) and the last one (b^textlast_k) in each representative period. 
The parameter p^textinit storage level_a determines the considered equations in the model for this constraint:","category":"page"},{"location":"40-formulation/","page":"Mathematical Formulation","title":"Mathematical Formulation","text":"If parameter p^textinit storage level_a is not defined, the intra-storage level of the last timestep block (b^textlast_k) is used as the initial value for the first timestep block in the intra-temporal constraint for the storage balance.","category":"page"},{"location":"40-formulation/","page":"Mathematical Formulation","title":"Mathematical Formulation","text":"beginaligned\nv^textintra-storage_akb^textfirst_k = v^textintra-storage_akb^textlast_k + p^textinflows_akb^textfirst_k + sum_f in mathcalF^textin_a p^texteff_f cdot p^textduration_b_k cdot v^textflow_fkb^textfirst_k - sum_f in mathcalF^textout_a frac1p^texteff_f cdot p^textduration_b_k cdot v^textflow_fkb^textfirst_k quad\n forall a in mathcalA^texts setminus mathcalA^textss forall k in mathcalK\nendaligned","category":"page"},{"location":"40-formulation/","page":"Mathematical Formulation","title":"Mathematical Formulation","text":"If parameter p^textinit storage level_a is defined, we use it as the initial value for the first timestep block in the intra-temporal constraint for the storage balance. In addition, the intra-storage level of the last timestep block (b^textlast_k) in each representative period must be greater than this initial value.","category":"page"},{"location":"40-formulation/","page":"Mathematical Formulation","title":"Mathematical Formulation","text":"beginaligned\nv^textintra-storage_akb^textfirst_k = p^textinit storage level_a + p^textinflows_akb^textfirst_k + sum_f in mathcalF^textin_a p^texteff_f cdot p^textduration_b_k cdot v^textflow_fkb^textfirst_k - sum_f in mathcalF^textout_a frac1p^texteff_f cdot p^textduration_b_k cdot v^textflow_fkb^textfirst_k quad\n forall a in mathcalA^texts setminus mathcalA^textss forall k in mathcalK\nendaligned","category":"page"},{"location":"40-formulation/","page":"Mathematical Formulation","title":"Mathematical Formulation","text":"v^textintra-storage_akb^textfirst_k geq p^textinit storage level_a quad\n forall a in mathcalA^texts setminus mathcalA^textss forall k in mathcalK","category":"page"},{"location":"40-formulation/#inter-storage-balance","page":"Mathematical Formulation","title":"Inter-temporal Constraint for Storage Balance","text":"","category":"section"},{"location":"40-formulation/","page":"Mathematical Formulation","title":"Mathematical Formulation","text":"This constraint allows us to consider the storage seasonality throughout the model's timeframe (e.g., a year). The parameter p^textmap_pk determines how much of the representative period k is in the period p, and you can use a clustering technique to calculate it. 
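As a toy illustration of this mapping parameter (all numbers assumed, not package data), suppose the timeframe has two periods and two representative periods, with period 1 represented 70% by k = 1 and 30% by k = 2. The weighted sum below is how representative-period quantities such as inflows enter the inter-temporal storage balance:

```julia
# Rows: timeframe periods p; columns: representative periods k (toy data)
p_map = [0.7 0.3;   # period 1 is 70% like k = 1 and 30% like k = 2
         0.0 1.0]   # period 2 is fully represented by k = 2

# total inflow of each representative period, already summed over its timestep blocks
inflow_per_rp = [12.0, 3.0]

# inflow attributed to each period p in the inter-temporal storage balance
inflow_per_period = p_map * inflow_per_rp   # = [0.7*12.0 + 0.3*3.0, 3.0] = [9.3, 3.0]
```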
For TulipaEnergyModel.jl, we recommend using TulipaClustering.jl to compute the clusters for the representative periods and their map.","category":"page"},{"location":"40-formulation/","page":"Mathematical Formulation","title":"Mathematical Formulation","text":"For the sake of simplicity, we show the constraint assuming the inter-storage level between two consecutive periods p; however, TulipaEnergyModel.jl can handle more flexible period block definition through the timeframe definition in the model using the information in the file assets-timeframe-partitions.csv.","category":"page"},{"location":"40-formulation/","page":"Mathematical Formulation","title":"Mathematical Formulation","text":"beginaligned\nv^textinter-storage_ap = v^textinter-storage_ap-1 + sum_k in mathcalK p^textmap_pk sum_b_k in mathcalB_K p^textinflows_akb_k \n + sum_f in mathcalF^textin_a p^texteff_f sum_k in mathcalK p^textmap_pk sum_b_k in mathcalB_K p^textduration_b_k cdot v^textflow_fkb_k \n - sum_f in mathcalF^textout_a frac1p^texteff_f sum_k in mathcalK p^textmap_pk sum_b_k in mathcalB_K p^textduration_b_k cdot v^textflow_fkb_k\n forall a in mathcalA^textss forall p in mathcalP\nendaligned","category":"page"},{"location":"40-formulation/#Inter-temporal-Constraint-for-Maximum-Storage-Level-Limit","page":"Mathematical Formulation","title":"Inter-temporal Constraint for Maximum Storage Level Limit","text":"","category":"section"},{"location":"40-formulation/","page":"Mathematical Formulation","title":"Mathematical Formulation","text":"v^textinter-storage_ap leq p^textmax inter level_ap cdot (p^textinit storage capacity_a + e^textenergy inv limit_a) quad forall a in mathcalA^textss forall p in mathcalP","category":"page"},{"location":"40-formulation/#Inter-temporal-Constraint-for-Minimum-Storage-Level-Limit","page":"Mathematical Formulation","title":"Inter-temporal Constraint for Minimum Storage Level Limit","text":"","category":"section"},{"location":"40-formulation/","page":"Mathematical Formulation","title":"Mathematical Formulation","text":"v^textinter-storage_ap geq p^textmin inter level_ap cdot (p^textinit storage capacity_a + e^textenergy inv limit_a) quad forall a in mathcalA^textss forall p in mathcalP","category":"page"},{"location":"40-formulation/#Inter-temporal-Cycling-Constraint","page":"Mathematical Formulation","title":"Inter-temporal Cycling Constraint","text":"","category":"section"},{"location":"40-formulation/","page":"Mathematical Formulation","title":"Mathematical Formulation","text":"The cycling constraint for the inter-temporal constraints links the first-period block (p^textfirst) and the last one (p^textlast) in the timeframe. 
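Before the cycling cases below, the following is a simplified JuMP sketch of the inter-temporal storage balance and level limits for one seasonal-storage asset. The names and data are hypothetical, the per-period net charge is assumed to be pre-aggregated with the mapping parameter as in the toy example above, and the energy investment term is omitted for brevity; it is not the package's internal code.

```julia
using JuMP

P                = 3                  # number of periods in the timeframe (toy data)
net_charge       = [5.0, -2.0, 1.0]   # inflows + charging - discharging per period
init_storage_cap = 10.0               # p^{init storage capacity}_a
max_inter_level  = fill(1.0, P)       # p^{max inter level}_{a,p}
min_inter_level  = fill(0.0, P)       # p^{min inter level}_{a,p}

model = Model()
@variable(model, v_inter_storage[0:P] >= 0)  # inter-period storage level (index 0 = initial)

# inter-temporal storage balance between consecutive periods
@constraint(model, [p = 1:P],
    v_inter_storage[p] == v_inter_storage[p-1] + net_charge[p])

# maximum and minimum inter-temporal storage level limits
@constraint(model, [p = 1:P],
    v_inter_storage[p] <= max_inter_level[p] * init_storage_cap)
@constraint(model, [p = 1:P],
    v_inter_storage[p] >= min_inter_level[p] * init_storage_cap)
```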
The parameter p^textinit storage level_a determines the considered equations in the model for this constraint:","category":"page"},{"location":"40-formulation/","page":"Mathematical Formulation","title":"Mathematical Formulation","text":"If parameter p^textinit storage level_a is not defined, the inter-storage level of the last period block (p^textlast) is used as the initial value for the first-period block in the inter-temporal constraint for the storage balance.","category":"page"},{"location":"40-formulation/","page":"Mathematical Formulation","title":"Mathematical Formulation","text":"beginaligned\nv^textinter-storage_ap^textfirst = v^textinter-storage_ap^textlast + sum_k in mathcalK p^textmap_p^textfirstk sum_b_k in mathcalB_K p^textinflows_akb_k \n + sum_f in mathcalF^textin_a p^texteff_f sum_k in mathcalK p^textmap_p^textfirstk sum_b_k in mathcalB_K p^textduration_b_k cdot v^textflow_fkb_k \n - sum_f in mathcalF^textout_a frac1p^texteff_f sum_k in mathcalK p^textmap_p^textfirstk sum_b_k in mathcalB_K p^textduration_b_k cdot v^textflow_fkb_k\n forall a in mathcalA^textss\nendaligned","category":"page"},{"location":"40-formulation/","page":"Mathematical Formulation","title":"Mathematical Formulation","text":"If parameter p^textinit storage level_a is defined, we use it as the initial value for the first-period block in the inter-temporal constraint for the storage balance. In addition, the inter-storage level of the last period block (p^textlast) in the timeframe must be greater than this initial value.","category":"page"},{"location":"40-formulation/","page":"Mathematical Formulation","title":"Mathematical Formulation","text":"beginaligned\nv^textinter-storage_ap^textfirst = p^textinit storage level_a + sum_k in mathcalK p^textmap_p^textfirstk sum_b_k in mathcalB_K p^textinflows_akb_k \n + sum_f in mathcalF^textin_a p^texteff_f sum_k in mathcalK p^textmap_p^textfirstk sum_b_k in mathcalB_K p^textduration_b_k cdot v^textflow_fkb_k \n - sum_f in mathcalF^textout_a frac1p^texteff_f sum_k in mathcalK p^textmap_p^textfirstk sum_b_k in mathcalB_K p^textduration_b_k cdot v^textflow_fkb_k\n forall a in mathcalA^textss\nendaligned","category":"page"},{"location":"40-formulation/","page":"Mathematical Formulation","title":"Mathematical Formulation","text":"v^textinter-storage_ap^textlast geq p^textinit storage level_a quad\n forall a in mathcalA^textss","category":"page"},{"location":"40-formulation/#Constraints-for-Energy-Hub-Assets","page":"Mathematical Formulation","title":"Constraints for Energy Hub Assets","text":"","category":"section"},{"location":"40-formulation/#Balance-Constraint-for-Hubs","page":"Mathematical Formulation","title":"Balance Constraint for Hubs","text":"","category":"section"},{"location":"40-formulation/","page":"Mathematical Formulation","title":"Mathematical Formulation","text":"beginaligned\nsum_f in mathcalF^textin_a v^textflow_fkb_k = sum_f in mathcalF^textout_a v^textflow_fkb_k quad forall a in mathcalA^texth forall k in mathcalKforall b_k in mathcalB_k\nendaligned","category":"page"},{"location":"40-formulation/#Constraints-for-Energy-Conversion-Assets","page":"Mathematical Formulation","title":"Constraints for Energy Conversion Assets","text":"","category":"section"},{"location":"40-formulation/#Balance-Constraint-for-Conversion-Assets","page":"Mathematical Formulation","title":"Balance Constraint for Conversion Assets","text":"","category":"section"},{"location":"40-formulation/","page":"Mathematical Formulation","title":"Mathematical 
Formulation","text":"beginaligned\nsum_f in mathcalF^textin_a p^texteff_f cdot v^textflow_fkb_k = sum_f in mathcalF^textout_a fracv^textflow_fkb_kp^texteff_f quad forall a in mathcalA^textcv forall k in mathcalKforall b_k in mathcalB_k\nendaligned","category":"page"},{"location":"40-formulation/#Constraints-for-Transport-Assets","page":"Mathematical Formulation","title":"Constraints for Transport Assets","text":"","category":"section"},{"location":"40-formulation/#Maximum-Transport-Flow-Limit","page":"Mathematical Formulation","title":"Maximum Transport Flow Limit","text":"","category":"section"},{"location":"40-formulation/","page":"Mathematical Formulation","title":"Mathematical Formulation","text":"beginaligned\nv^textflow_fkb_k leq p^textavailability profile_fkb_k cdot left(p^textinit export capacity_f + p^textcapacity_f cdot v^textinv_f right) quad forall f in mathcalF^textt forall k in mathcalKforall b_k in mathcalB_k\nendaligned","category":"page"},{"location":"40-formulation/#Minimum-Transport-Flow-Limit","page":"Mathematical Formulation","title":"Minimum Transport Flow Limit","text":"","category":"section"},{"location":"40-formulation/","page":"Mathematical Formulation","title":"Mathematical Formulation","text":"beginaligned\nv^textflow_fkb_k geq - p^textavailability profile_fkb_k cdot left(p^textinit import capacity_f + p^textcapacity_f cdot v^textinv_f right) quad forall f in mathcalF^textt forall k in mathcalKforall b_k in mathcalB_k\nendaligned","category":"page"},{"location":"40-formulation/#Constraints-for-Investments","page":"Mathematical Formulation","title":"Constraints for Investments","text":"","category":"section"},{"location":"40-formulation/#Maximum-Investment-Limit-for-Assets","page":"Mathematical Formulation","title":"Maximum Investment Limit for Assets","text":"","category":"section"},{"location":"40-formulation/","page":"Mathematical Formulation","title":"Mathematical Formulation","text":"v^textinv_a leq fracp^textinv limit_ap^textcapacity_a quad forall a in mathcalA^texti","category":"page"},{"location":"40-formulation/","page":"Mathematical Formulation","title":"Mathematical Formulation","text":"If the parameter investment_integer in the assets-data.csv file is set to true, then the right-hand side of this constraint uses a least integer function (floor function) to guarantee that the limit is integer.","category":"page"},{"location":"40-formulation/#Maximum-Energy-Investment-Limit-for-Assets","page":"Mathematical Formulation","title":"Maximum Energy Investment Limit for Assets","text":"","category":"section"},{"location":"40-formulation/","page":"Mathematical Formulation","title":"Mathematical Formulation","text":"v^textinv energy_a leq fracp^textinv limit energy_ap^textenergy capacity_a quad forall a in mathcalA^texti cap mathcalA^textse","category":"page"},{"location":"40-formulation/","page":"Mathematical Formulation","title":"Mathematical Formulation","text":"If the parameter investment_integer_storage_energy in the assets-data.csv file is set to true, then the right-hand side of this constraint uses a least integer function (floor function) to guarantee that the limit is integer.","category":"page"},{"location":"40-formulation/#Maximum-Investment-Limit-for-Flows","page":"Mathematical Formulation","title":"Maximum Investment Limit for Flows","text":"","category":"section"},{"location":"40-formulation/","page":"Mathematical Formulation","title":"Mathematical Formulation","text":"v^textinv_f leq fracp^textinv limit_fp^textcapacity_f quad forall f in 
mathcalF^textti","category":"page"},{"location":"40-formulation/","page":"Mathematical Formulation","title":"Mathematical Formulation","text":"If the parameter investment_integer in the flows-data.csv file is set to true, then the right-hand side of this constraint uses a least integer function (floor function) to guarantee that the limit is integer.","category":"page"},{"location":"40-formulation/#inter-temporal-energy-constraints","page":"Mathematical Formulation","title":"Inter-temporal Energy Constraints","text":"","category":"section"},{"location":"40-formulation/","page":"Mathematical Formulation","title":"Mathematical Formulation","text":"These constraints allow us to consider a maximum or minimum energy limit for an asset throughout the model's timeframe (e.g., a year). It uses the same principle explained in the inter-temporal constraint for storage balance and in the Storage Modeling section.","category":"page"},{"location":"40-formulation/#Maximum-Outgoing-Energy-During-the-Timeframe","page":"Mathematical Formulation","title":"Maximum Outgoing Energy During the Timeframe","text":"","category":"section"},{"location":"40-formulation/","page":"Mathematical Formulation","title":"Mathematical Formulation","text":"beginaligned\nsum_f in mathcalF^textout_a sum_k in mathcalK p^textmap_pk sum_b_k in mathcalB_K p^textduration_b_k cdot v^textflow_fkb_k leq p^textmax inter profile_ap cdot p^textmax energy_a\n forall a in mathcalA^textmax e forall p in mathcalP\nendaligned","category":"page"},{"location":"40-formulation/#Minimum-Outgoing-Energy-During-the-Timeframe","page":"Mathematical Formulation","title":"Minimum Outgoing Energy During the Timeframe","text":"","category":"section"},{"location":"40-formulation/","page":"Mathematical Formulation","title":"Mathematical Formulation","text":"beginaligned\nsum_f in mathcalF^textout_a sum_k in mathcalK p^textmap_pk sum_b_k in mathcalB_K p^textduration_b_k cdot v^textflow_fkb_k geq p^textmin inter profile_ap cdot p^textmin energy_a\n forall a in mathcalA^textmin e forall p in mathcalP\nendaligned","category":"page"},{"location":"40-formulation/#group-constraints","page":"Mathematical Formulation","title":"Constraints for Groups","text":"","category":"section"},{"location":"40-formulation/","page":"Mathematical Formulation","title":"Mathematical Formulation","text":"The following constraints aggregate variables of different assets depending on the method that applies to the group.","category":"page"},{"location":"40-formulation/#investment-group-constraints","page":"Mathematical Formulation","title":"Investment Limits of a Group","text":"","category":"section"},{"location":"40-formulation/","page":"Mathematical Formulation","title":"Mathematical Formulation","text":"These constraints apply to assets in a group using the investment method mathcalG^textai. They help impose an investment potential of a spatial area commonly shared by several assets that can be invested there.","category":"page"},{"location":"40-formulation/","page":"Mathematical Formulation","title":"Mathematical Formulation","text":"Note: These constraints are applied to the investments each year. 
The model does not yet have investment limits to a group's accumulated invested capacity.","category":"page"},{"location":"40-formulation/#Minimum-Investment-Limit-of-a-Group","page":"Mathematical Formulation","title":"Minimum Investment Limit of a Group","text":"","category":"section"},{"location":"40-formulation/","page":"Mathematical Formulation","title":"Mathematical Formulation","text":"beginaligned\nsum_a in mathcalA^texti p^textgroup_a = g p^textcapacity_a cdot v^textinv_a geq p^textmin invest limit_g\n forall g in mathcalG^textai\nendaligned","category":"page"},{"location":"40-formulation/#Maximum-Investment-Limit-of-a-Group","page":"Mathematical Formulation","title":"Maximum Investment Limit of a Group","text":"","category":"section"},{"location":"40-formulation/","page":"Mathematical Formulation","title":"Mathematical Formulation","text":"beginaligned\nsum_a in mathcalA^texti p^textgroup_a = g p^textcapacity_a cdot v^textinv_a leq p^textmax invest limit_g\n forall g in mathcalG^textai\nendaligned","category":"page"},{"location":"40-formulation/#math-references","page":"Mathematical Formulation","title":"References","text":"","category":"section"},{"location":"40-formulation/","page":"Mathematical Formulation","title":"Mathematical Formulation","text":"Damcı-Kurt, P., Küçükyavuz, S., Rajan, D., Atamtürk, A., 2016. A polyhedral study of production ramping. Math. Program. 158, 175–205. doi: 10.1007/s10107-015-0919-9.","category":"page"},{"location":"40-formulation/","page":"Mathematical Formulation","title":"Mathematical Formulation","text":"Morales-España, G., Ramos, A., García-González, J., 2014. An MIP Formulation for Joint Market-Clearing of Energy and Reserves Based on Ramp Scheduling. IEEE Transactions on Power Systems 29, 476-488. doi: 10.1109/TPWRS.2013.2259601.","category":"page"},{"location":"40-formulation/","page":"Mathematical Formulation","title":"Mathematical Formulation","text":"Morales-España, G., Latorre, J. M., Ramos, A., 2013. Tight and Compact MILP Formulation for the Thermal Unit Commitment Problem. IEEE Transactions on Power Systems 28, 4897-4908. doi: 10.1109/TPWRS.2013.2251373.","category":"page"},{"location":"40-formulation/","page":"Mathematical Formulation","title":"Mathematical Formulation","text":"Tejada-Arango, D.A., Domeshek, M., Wogrin, S., Centeno, E., 2018. Enhanced representative days and system states modeling for energy storage investment analysis. IEEE Transactions on Power Systems 33, 6534–6544. doi:10.1109/TPWRS.2018.2819578.","category":"page"},{"location":"40-formulation/","page":"Mathematical Formulation","title":"Mathematical Formulation","text":"Tejada-Arango, D.A., Wogrin, S., Siddiqui, A.S., Centeno, E., 2019. Opportunity cost including short-term energy storage in hydrothermal dispatch models using a linked representative periods approach. Energy 188, 116079. doi:10.1016/j.energy.2019.116079.","category":"page"},{"location":"","page":"Welcome","title":"Welcome","text":"CurrentModule = TulipaEnergyModel","category":"page"},{"location":"#home","page":"Welcome","title":"Welcome","text":"","category":"section"},{"location":"","page":"Welcome","title":"Welcome","text":"TulipaEnergyModel.jl is an optimization model for the electricity market that can be coupled with other energy sectors (e.g., hydrogen, heat, natural gas, etc.). The optimization model determines the optimal investment and operation decisions for different types of assets (e.g., producers, consumers, conversion, storage, and transport). 
TulipaEnergyModel.jl is developed in Julia and depends on the JuMP.jl package.","category":"page"},{"location":"#Getting-Started","page":"Welcome","title":"Getting Started","text":"","category":"section"},{"location":"","page":"Welcome","title":"Welcome","text":"To start using Tulipa for your research, check out our How to Use section and Tutorials.","category":"page"},{"location":"","page":"Welcome","title":"Welcome","text":"For a more technical explanation, check out the Concepts section, or dive into the Mathematical Formulation.","category":"page"},{"location":"#bugs-and-discussions","page":"Welcome","title":"Bug reports and discussions","text":"","category":"section"},{"location":"","page":"Welcome","title":"Welcome","text":"If you think you have found a bug, feel free to open an issue. If you have a general question or idea, start a discussion here.","category":"page"},{"location":"#Contributing","page":"Welcome","title":"Contributing","text":"","category":"section"},{"location":"","page":"Welcome","title":"Welcome","text":"If you want to contribute (awesome!), please read our Contributing Guidelines and follow the setup in our Developer Documentation.","category":"page"},{"location":"#license","page":"Welcome","title":"License","text":"","category":"section"},{"location":"","page":"Welcome","title":"Welcome","text":"This content is released under the Apache License 2.0 License.","category":"page"},{"location":"#Contributors","page":"Welcome","title":"Contributors","text":"","category":"section"},{"location":"","page":"Welcome","title":"Welcome","text":"\n\n\n\n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n
Abel Soares Siqueira: 💻 👀
Diego Alejandro Tejada Arango: 💻 👀 🤔 🔬
Germán Morales: 🔬 🤔 🔍 📆
Greg Neustroev: 🤔 🔬 💻
Juha Kiviluoma: 🤔 🔬
Lauren Clisby: 💻 👀 🤔 📆
Laurent Soucasse: 🤔
Mathijs de Weerdt: 🔍 📆
Ni Wang: 💻 👀 🤔 🔬
Sander van Rijn: 🤔
Suvayu Ali: 💻 👀 🤔
Zhi: 🤔 🔬
","category":"page"}] }