OpenQuake Engine 3.0.0
[Michele Simionato (@micheles)]
- Fixed a bug with newlines in the logic tree path breaking the CSV exporter
  for the realizations output
- When setting the event year, each stochastic event set is now considered
  independent
- Fixed a bug in the HMTK plotting libraries and added the ability to
  customize the figure size
- Fixed a bug in the datastore: now we automatically look for the attributes
  in the parent dataset, if the dataset is missing in the child datastore
- Extended `extract_losses_by_asset` to the event based risk calculator
- Stored in `source_info` the number of events generated per source
- Added a script `utils/reduce_sm` to reduce the source model of a calculation
  by removing all the sources not affecting the hazard
- Deprecated `openquake.hazardlib.calc.stochastic.stochastic_event_set`
- Fixed the export of ruptures with a GriddedSurface geometry
- Added a check for wrong or missing `<occupancyPeriods>` in the exposure
- Fixed the issue of slow tasks in event_based_risk from precomputed GMFs
  for sites without events
- Now the engine automatically associates the exposure to a grid if
  `region_grid_spacing` is given and the sites are not otherwise specified
  (see the job.ini sketch after this list)
- Extracting the site mesh from the exposure before looking at the site model
- Added a check on `probs_occur` summing up to 1 in the SourceWriter
- `oq show job_info` now shows the received data amount while the
  calculation is progressing
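A minimal job.ini sketch of the gridded-exposure behaviour mentioned above
(the section name and the spacing value are illustrative, not taken from the
entries; the engine reads job.ini parameters regardless of the section):

    [site_params]
    # no sites, sites.csv or site model are given, so the engine builds a
    # grid over the exposure region with this spacing (in km)
    region_grid_spacing = 10.0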
[Daniele Viganò (@daniviga)]
- Removed support for Python 2 in `setup.py`
- Removed files containing Python 2 dependencies
- Added support for WebUI groups/permissions on the export outputs and
  datastore API endpoints
[Michele Simionato (@micheles)]
- Fixed `oq show` for multiuser with parent calculations
- Fixed `get_spherical_bounding_box` for GriddedSurfaces
- Implemented disaggregation by source, only for the case of a single
  realization in the logic tree (experimental)
- Replaced celery with celery_zmq as the distribution mechanism
- Extended `oq info` to work on source model logic tree files (see the
  example after this list)
- Added a check against duplicated fields in the exposure CSV
- Implemented event based with mutex sources (experimental)
- Added a utility to read XML shakemap files in hazardlib
- Added a check on IMTs for GMFs read from CSV
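A possible invocation of the extended `oq info` command (the logic tree file
name is hypothetical):

    $ oq info source_model_logic_tree.xml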
[Daniele Viganò (@daniviga)]
- Changed the default DbServer port in Linux packages from 1908 to 1907
[Michele Simionato (@micheles)]
- Logged rupture floating factor and rupture spinning factor
- Added an extract API for losses_by_asset
- Added a check against GMF csv files with more than one realization
- Fixed the algorithm setting the event year for event based with sampling
- Added a command `oq importcalc` to import a remote calculation into the
  local database
- Stored `avg_losses-stats` in event based risk if there are multiple
  realizations
- Better error message in case of overlapping sites in `sites.csv`
- Added an investigation time attribute to source models with
  nonparametric sources
- Bug fix: in some cases the calculator `event_based_rupture` was generating
  too few tasks and the same happened for classical calculations with
  `optimize_same_id_sources=true`
- Changed the ordering of the epsilons in scenario_risk
- Added the ability to use a pre-imported risk model
- Very small result values in scenario_damage (< 1E-7) are clipped to zero,
  to hide numerical artifacts
- Removed an obsolete `PickleableSequence` class
- Fixed error in classical_risk when num_statistics > num_realizations
- Fixed a TypeError when reading CSV exposures with occupancy periods
- Extended the check on duplicated source IDs to models in format NRML 0.5
- Added a warning when reading the sources if `.count_ruptures()` is
  suspiciously slow
- Changed the splitting logic: now all sources are split upfront
- Improved the splitting of complex fault sources
- Added a script to renumber source models with non-unique source IDs
- Made the datastore of calculations using GMPETables relocatable; in
  practice you can run the Canada model on a cluster, copy the .hdf5 on
  a laptop and do the postprocessing there, a feat previously impossible
  (see the sketch below).
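A rough shell sketch of the workflow described in the last entry (host name,
paths and calculation ID are illustrative assumptions, not taken from the
entry):

    # on the cluster
    oq engine --run job.ini     # suppose this becomes calculation 1234
    # copy the datastore to a laptop (~/oqdata is the default datastore dir)
    scp cluster:~/oqdata/calc_1234.hdf5 ~/oqdata/
    # postprocessing (exports, oq show, ...) can now be done locally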
[Valerio Poggi (@klunk386)]
- Included a method to export data directly from the `Catalogue()` object into
  standard HMTK format.
[Michele Simionato (@micheles)]
- Now the parameter `disagg_outputs` is honored, i.e. only the specified
  outputs are extracted from the disaggregation matrix and stored (see the
  job.ini sketch after this list)
- Implemented statistical disaggregation outputs (experimental)
- Fixed a small bug: we were reading the source model twice in disaggregation
- Added a check to discard results coming from the wrong calculation
  for the distribution mode celery_zmq
- Removed the long-time deprecated commands `oq engine --run-hazard` and
  `oq engine --run-risk`
- Added a distribution mode celery_zmq
- Added the ability to use a pre-imported exposure in risk calculations
- Substantial cleanup of the parallelization framework
- Fixed a bug with nonparametric sources producing negative probabilities
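A minimal job.ini sketch for the `disagg_outputs` parameter mentioned above
(the chosen output names are illustrative; they must correspond to axes of
the disaggregation matrix, e.g. magnitude and distance):

    # store only the magnitude, distance and magnitude-distance outputs
    disagg_outputs = Mag Dist Mag_Dist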