Merge pull request #1160 from quantumblacklabs/merge-master-to-develop
[AUTO-MERGE] Merge master into develop via merge-master-to-develop
idanov authored Jun 17, 2021
2 parents e1251eb + 72203bd commit f448367
Showing 9 changed files with 13 additions and 15 deletions.
2 changes: 1 addition & 1 deletion RELEASE.md
@@ -30,7 +30,7 @@
* If you're using `spark.SparkHiveDataSet` with `write_mode` option set to `insert`, please update this to `append` in line with the Spark styleguide. If you're using `spark.SparkHiveDataSet` with `write_mode` option set to `upsert`, please make sure that your `SparkContext` has a valid `checkpointDir` set either by `SparkContext.setCheckpointDir` method or directly in the `conf` folder.
* Edit any scripts containing `kedro pipeline package --version` to remove the `--version` option. If you wish to set a specific pipeline package version, set the `__version__` variable in the pipeline package's `__init__.py` file.

- # Upcoming release 0.17.4
+ # Release 0.17.4

## Major features and improvements
* Added the following new datasets:
2 changes: 1 addition & 1 deletion docs/source/02_get_started/03_hello_kedro.md
@@ -65,7 +65,7 @@ from kedro.io import DataCatalog, MemoryDataSet
data_catalog = DataCatalog({"my_salutation": MemoryDataSet()})
```

- Kedro provides a [number of different built-in datasets](https://kedro.readthedocs.io/en/stable/kedro.extras.datasets.html#kedro-extras-datasets) for different file types and file systems so you don’t have to write the logic for reading/writing data.
+ Kedro provides a [number of different built-in datasets](/kedro.extras.datasets) for different file types and file systems so you don’t have to write the logic for reading/writing data.

## Runner

6 changes: 3 additions & 3 deletions docs/source/03_tutorial/02_tutorial_template.md
@@ -40,7 +40,7 @@ isort~=5.0 # Used for linting code with `kedro lint`
jupyter~=1.0 # Used to open a Kedro-session in Jupyter Notebook & Lab
jupyter_client>=5.1.0, <7.0 # Used to open a Kedro-session in Jupyter Notebook & Lab
jupyterlab~=3.0 # Used to open a Kedro-session in Jupyter Lab
- kedro==0.17.3
+ kedro==0.17.4
nbstripout~=0.4 # Strips the output of a Jupyter Notebook and writes the outputless version to the original file
pytest-cov~=2.5 # Produces test coverage reports
pytest-mock>=1.7.1, <2.0 # Wrapper around the mock package for easier use with pytest
@@ -61,10 +61,10 @@ The dependencies above may be sufficient for some projects, but for the spacefli
pip install "kedro[pandas.CSVDataSet,pandas.ExcelDataSet]"
```

- Alternatively, if you need to, you can edit `src/requirements.txt` directly to modify your list of dependencies by replacing the requirement `kedro==0.17.3` with the following (your version of Kedro may be different):
+ Alternatively, if you need to, you can edit `src/requirements.txt` directly to modify your list of dependencies by replacing the requirement `kedro==0.17.4` with the following (your version of Kedro may be different):

```text
- kedro[pandas.CSVDataSet,pandas.ExcelDataSet]==0.17.3
+ kedro[pandas.CSVDataSet,pandas.ExcelDataSet]==0.17.4
```

Then run the following:
2 changes: 1 addition & 1 deletion docs/source/05_data/02_kedro_io.md
@@ -116,7 +116,7 @@ cars:
The `DataCatalog` will create a versioned `CSVDataSet` called `cars`. The actual csv file location will look like `data/01_raw/company/car_data.csv/<version>/car_data.csv`, where `<version>` corresponds to a global save version string formatted as `YYYY-MM-DDThh.mm.ss.sssZ`. Every time the `DataCatalog` is instantiated, it generates a new global save version, which is propagated to all versioned datasets it contains.

- `catalog.yml` only allows you to version your datasets but it does not allow you to choose which version to load or save. This is deliberate because we have chosen to separate the data catalog from any runtime configuration. If you need to pin a dataset version, you can either [specify the versions in a separate `yml` file and call it at runtime](https://kedro.readthedocs.io/en/stable/04_kedro_project_setup/02_configuration.html#configure-kedro-run-arguments) or [instantiate your versioned datasets using Code API and define a version parameter explicitly](#versioning-using-the-code-api).
+ `catalog.yml` only allows you to version your datasets but it does not allow you to choose which version to load or save. This is deliberate because we have chosen to separate the data catalog from any runtime configuration. If you need to pin a dataset version, you can either [specify the versions in a separate `yml` file and call it at runtime](../04_kedro_project_setup/02_configuration.md#configure-kedro-run-arguments) or [instantiate your versioned datasets using Code API and define a version parameter explicitly](#versioning-using-the-code-api).

By default, the `DataCatalog` will load the latest version of the dataset. However, it is also possible to specify an exact load version. In order to do that, you can pass a dictionary with exact load versions to `DataCatalog.from_config`:

2 changes: 1 addition & 1 deletion docs/source/09_development/03_commands_reference.md
@@ -119,7 +119,7 @@ Returns output similar to the following, depending on the version of Kedro used
| |/ / _ \/ _` | '__/ _ \
| < __/ (_| | | | (_) |
|_|\_\___|\__,_|_| \___/
- v0.17.3
+ v0.17.4
kedro allows teams to create analytics
projects. It is developed as part of
6 changes: 3 additions & 3 deletions docs/source/10_deployment/08_databricks.md
@@ -38,7 +38,7 @@ conda create --name iris_databricks python=3.7 -y
conda activate iris_databricks

# install Kedro and create a new project
- pip install "kedro~=0.17.3"
+ pip install "kedro~=0.17.4"
# name your project Iris Databricks when prompted for it
kedro new --starter pyspark-iris
```
@@ -316,10 +316,10 @@ In your newly created notebook put each code snippet from below into a separate
%sh rm -rf ~/projects/iris-databricks && git clone --single-branch --branch master https://${GITHUB_USER}:${GITHUB_TOKEN}@github.com/${GITHUB_USER}/<your-repo-name>.git ~/projects/iris-databricks
```

- * Install the latest version of Kedro compatible with version `0.17.3`
+ * Install the latest version of Kedro compatible with version `0.17.4`

```console
- %pip install "kedro[spark.SparkDataSet]~=0.17.3"
+ %pip install "kedro[spark.SparkDataSet]~=0.17.4"
```

* Copy input data into DBFS
3 changes: 1 addition & 2 deletions features/environment.py
@@ -69,8 +69,7 @@ def before_scenario(context, scenario):
if FRESH_VENV_TAG in scenario.tags:
context = _setup_minimal_env(context)

-     context.temp_dir = Path(tempfile.mkdtemp()).resolve()
-     _PATHS_TO_REMOVE.add(context.temp_dir)
+     context.temp_dir = _create_tmp_dir()


def _setup_context_with_venv(context, venv_dir):
3 changes: 1 addition & 2 deletions features/steps/cli_steps.py
@@ -177,8 +177,7 @@ def create_config_file(context):
"""
context.config_file = context.temp_dir / "config.yml"
context.project_name = "project-dummy"
-     root_project_dir = context.temp_dir / context.project_name
-     context.root_project_dir = root_project_dir
+     context.root_project_dir = context.temp_dir / context.project_name
context.package_name = context.project_name.replace("-", "_")
config = {
"project_name": context.project_name,
2 changes: 1 addition & 1 deletion kedro/__init__.py
@@ -31,7 +31,7 @@
configuration and pipeline assembly.
"""

- __version__ = "0.17.3"
+ __version__ = "0.17.4"


import logging
