\ No newline at end of file
diff --git a/components/development/_development/index.html b/components/development/_development/index.html
deleted file mode 100644
index 3b0c66705..000000000
--- a/components/development/_development/index.html
+++ /dev/null
@@ -1 +0,0 @@
- development - Open Targets Genetics
This section contains various technical information on how to develop and run the code.
2023-10-09
2023-10-09
Contributors
\ No newline at end of file
diff --git a/development/_development/index.html b/development/_development/index.html
index be17637df..4cd8d49fc 100644
--- a/development/_development/index.html
+++ b/development/_development/index.html
@@ -1 +1 @@
- Development - Open Targets Genetics
This section contains various technical information on how to develop and run the code.
2023-10-09
2023-10-09
Contributors
\ No newline at end of file
diff --git a/development/airflow/index.html b/development/airflow/index.html
index 021748a8f..dee819bf1 100644
--- a/development/airflow/index.html
+++ b/development/airflow/index.html
@@ -1,4 +1,4 @@
- Running Airflow workflows - Open Targets Genetics
The steps in this section only ever need to be done once on any particular system.
Google Cloud configuration:
1. Install Google Cloud SDK: https://cloud.google.com/sdk/docs/install.
2. Log in to your work Google Account: run gcloud auth login and follow instructions.
3. Obtain Google application credentials: run gcloud auth application-default login and follow instructions.
Check that you have the make utility installed, and if not (which is unlikely), install it using your system package manager.
Run make setup-dev to install/update the necessary packages and activate the development environment. You need to do this every time you open a new shell.
It is recommended to use VS Code as an IDE for development.
All pipelines in this repository are intended to be run in Google Dataproc. Running them locally is not currently supported.
In order to run the code:
Manually edit your local workflow/dag.yaml file and comment out the steps you do not want to run.
Manually edit your local pyproject.toml file and modify the version of the code.
This must be different from the version used by any other people working on the repository to avoid any deployment conflicts, so it's a good idea to use your name, for example: 1.2.3+jdoe.
You can also add a brief branch description, for example: 1.2.3+jdoe.myfeature.
Note that the version must comply with PEP440 conventions, otherwise Poetry will not allow it to be deployed.
Do not use underscores or hyphens in your version name. When building the WHL file, they will be automatically converted to dots, which means the file name will no longer match the version and the build will fail. Use dots instead.
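Before running the build, the chosen version string can be sanity-checked with the packaging library, which implements the same PEP440 rules Poetry enforces (a minimal sketch; the version strings are illustrative):

from packaging.version import Version

# A valid PEP440 local version: segments are separated by dots.
print(Version("1.2.3+jdoe.myfeature"))  # -> 1.2.3+jdoe.myfeature
# Underscores and hyphens in the local segment are normalised to dots,
# which is why a WHL built from such a version no longer matches its name.
print(Version("1.2.3+jdoe_myfeature"))  # -> 1.2.3+jdoe.myfeature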
Run make build.
This will create a bundle containing the necessary code, configuration and dependencies to run the ETL pipeline, and then upload this bundle to Google Cloud.
A version-specific subpath is used, so uploading the code will not affect any branches but your own.
If there was already a code bundle uploaded with the same version number, it will be replaced.
Submit the Dataproc job with poetry run python workflow/workflow_template.py
You will need to specify additional parameters; some are mandatory and some are optional. Run with --help to see usage.
The script will provision the cluster and submit the job.
The cluster will take a few minutes to be provisioned and become ready, during which the script will not output anything; this is normal.
Once submitted, you can monitor the progress of your job on this page: https://console.cloud.google.com/dataproc/jobs?project=open-targets-genetics-dev.
On completion (whether successful or a failure), the cluster will be automatically removed, so you don't have to worry about shutting it down to avoid incurring charges.
When making changes, and especially when implementing a new module or feature, it's essential to ensure that all relevant sections of the code base are modified.
- [ ] Run make check. This will run the linter and formatter to ensure that the code is compliant with the project conventions.
- [ ] Develop unit tests for your code and run make test. This will run all unit tests in the repository, including the examples appended in the docstrings of some methods.
- [ ] Update the configuration if necessary.
- [ ] Update the documentation and check it with make build-documentation. This will start a local server to browse it (the URL will be printed, usually http://127.0.0.1:8000/).
For more details on each of these steps, see the sections below.
If during development you had a question which wasn't covered in the documentation, and someone explained it to you, add it to the documentation. The same applies if you encountered any instructions in the documentation which were obsolete or incorrect.
Documentation autogeneration expressions start with :::. They will automatically generate sections of the documentation based on class and method docstrings. Be sure to update them for:
Dataset definitions in docs/reference/dataset (example: docs/reference/dataset/study_index/study_index_finngen.md)
Step definition in docs/reference/step (example: docs/reference/step/finngen.md)
Test study fixture in tests/conftest.py (example: mock_study_index_finngen in that module)
Test sample data in tests/data_samples (example: tests/data_samples/finngen_studies_sample.json)
Test definition in tests/ (example: tests/dataset/test_study_index.py → test_study_index_finngen_creation)
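As an illustration of how these pieces fit together, a minimal sketch of the fixture/test pairing (structure only; the sample record is a placeholder, and the real fixture builds a dataset object from the Spark-loaded sample file):

import json
import pytest

@pytest.fixture()
def mock_study_index_finngen():
    # Stand-in for tests/data_samples/finngen_studies_sample.json;
    # the record below is a placeholder, not real FinnGen data.
    return json.loads('[{"studyId": "FINNGEN_R9_EXAMPLE"}]')

def test_study_index_finngen_creation(mock_study_index_finngen) -> None:
    # The real test asserts that a valid study index is created.
    assert len(mock_study_index_finngen) > 0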
2023-05-30
2023-10-26
Contributors
\ No newline at end of file
diff --git a/development/troubleshooting/index.html b/development/troubleshooting/index.html
index d685c9f7f..dc96298ea 100644
--- a/development/troubleshooting/index.html
+++ b/development/troubleshooting/index.html
@@ -1 +1 @@
- Troubleshooting - Open Targets Genetics
If you see various errors thrown by Pyenv or Poetry, they can be hard to diagnose and resolve. In this case, it often helps to remove those tools from the system completely. Follow these steps:
Close your currently activated environment, if any: exit
Officially, PySpark requires Java version 8 (a.k.a. 1.8) or above to work. However, if you have a very recent version of Java, you may experience issues, as it may introduce breaking changes that PySpark hasn't had time to integrate. For example, as of May 2023, PySpark did not work with Java 20.
If you are encountering problems with initialising a Spark session, try using Java 11.
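One way to do this is to point JAVA_HOME at a Java 11 installation before initialising the session (a minimal sketch; the JDK path is an example and will vary by system):

import os
from pyspark.sql import SparkSession

# Point Spark at Java 11 before the JVM is launched; adjust the path
# to wherever Java 11 is installed on your system (example path shown).
os.environ["JAVA_HOME"] = "/usr/lib/jvm/java-11-openjdk-amd64"
spark = SparkSession.builder.master("local[*]").getOrCreate()
print(spark.version)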
If you see an error message thrown by pre-commit, which looks like this (SyntaxError: Unexpected token '?'), followed by a JavaScript traceback, the issue is likely with your system NodeJS version.
One solution which can help in this case is to upgrade your system NodeJS version. However, this may not always be possible; for example, the Ubuntu repository was several major versions behind the latest NodeJS release as of July 2023.
Another solution which helps is to remove Node, NodeJS, and npm from your system entirely. In this case, pre-commit will not try to rely on a system version of NodeJS and will install its own suitable version.
On Ubuntu, this can be done using sudo apt remove node nodejs npm, followed by sudo apt autoremove. But in some cases, depending on your existing installation, you may need to also manually remove some files. See this StackOverflow answer for guidance.
After running these commands, you are advised to open a fresh shell, and then also reinstall Pyenv and Poetry to make sure they pick up the changes (see relevant section above).
2023-07-04
2023-10-17
Contributors
\ No newline at end of file
+ Troubleshooting - Open Targets Genetics
\ No newline at end of file
diff --git a/index.html b/index.html
index 1e329946d..2110f6f99 100644
--- a/index.html
+++ b/index.html
@@ -1,4 +1,4 @@
- Open Targets Genetics - Open Targets Genetics
\ No newline at end of file
diff --git a/python_api/_python_api/index.html b/python_api/_python_api/index.html
index 74651bc3c..5375ccc98 100644
--- a/python_api/_python_api/index.html
+++ b/python_api/_python_api/index.html
@@ -1 +1 @@
- Python API - Open Targets Genetics
\ No newline at end of file
diff --git a/python_api/dataset/_dataset/index.html b/python_api/dataset/_dataset/index.html
index 895e1fe55..e0a2997da 100644
--- a/python_api/dataset/_dataset/index.html
+++ b/python_api/dataset/_dataset/index.html
@@ -1,4 +1,4 @@
- Dataset - Open Targets Genetics
Validate DataFrame schema against expected class schema.
Raises:
    ValueError: DataFrame schema is not valid
Source code in src/otg/dataset/dataset.py
87 88 89 90
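A minimal sketch of the kind of check this performs (function and variable names are assumptions; only the error message is taken from the source excerpt below):

from pyspark.sql import DataFrame
from pyspark.sql.types import StructType

def validate_schema(df: DataFrame, expected: StructType) -> None:
    # Compare the observed datatype of each column against the expected
    # class schema and collect the fields that differ.
    observed = {f.name: f.dataType for f in df.schema.fields}
    fields_with_different_observed_datatype = [
        f.name
        for f in expected.fields
        if f.name in observed and observed[f.name] != f.dataType
    ]
    if fields_with_different_observed_datatype:
        raise ValueError(
            f"The following fields present differences in their datatypes: "
            f"{fields_with_different_observed_datatype}."
        )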
@@ -476,4 +476,4 @@
raiseValueError(f"The following fields present differences in their datatypes: {fields_with_different_observed_datatype}.")
-
2023-01-15
2023-10-16
Contributors
\ No newline at end of file
+
2023-01-15
2023-10-31
Contributors
\ No newline at end of file
diff --git a/python_api/dataset/colocalisation/index.html b/python_api/dataset/colocalisation/index.html
index 571a39c8f..b88d04c05 100644
--- a/python_api/dataset/colocalisation/index.html
+++ b/python_api/dataset/colocalisation/index.html
@@ -1,17 +1,4 @@
- Colocalisation - Open Targets Genetics
\ No newline at end of file
diff --git a/python_api/dataset/gene_index/index.html b/python_api/dataset/gene_index/index.html
index cbaa4dc89..ae9657802 100644
--- a/python_api/dataset/gene_index/index.html
+++ b/python_api/dataset/gene_index/index.html
@@ -1,16 +1,4 @@
- GeneIndex - Open Targets Genetics
\ No newline at end of file
diff --git a/python_api/dataset/intervals/index.html b/python_api/dataset/intervals/index.html
index 1febf5321..9284ee362 100644
--- a/python_api/dataset/intervals/index.html
+++ b/python_api/dataset/intervals/index.html
@@ -1,15 +1,4 @@
- Intervals - Open Targets Genetics
\ No newline at end of file
diff --git a/python_api/dataset/ld_index/index.html b/python_api/dataset/ld_index/index.html
index b9e076707..b1f4b4d17 100644
--- a/python_api/dataset/ld_index/index.html
+++ b/python_api/dataset/ld_index/index.html
@@ -1,14 +1,4 @@
- LDIndex - Open Targets Genetics
\ No newline at end of file
diff --git a/python_api/dataset/study_index/index.html b/python_api/dataset/study_index/index.html
index d879c36ef..bbbeb4e0a 100644
--- a/python_api/dataset/study_index/index.html
+++ b/python_api/dataset/study_index/index.html
@@ -1,36 +1,4 @@
- StudyIndex - Open Targets Genetics
\ No newline at end of file
diff --git a/python_api/dataset/study_locus/index.html b/python_api/dataset/study_locus/index.html
index cdab180b9..0e5c7c6b8 100644
--- a/python_api/dataset/study_locus/index.html
+++ b/python_api/dataset/study_locus/index.html
@@ -1,44 +1,4 @@
- StudyLocus - Open Targets Genetics
Annotate study-locus dataset with credible set flags.
Sorts the array in the locus column elements by their posteriorProbability values in descending order and adds is95CredibleSet and is99CredibleSet fields to the elements, indicating which are the tagging variants whose cumulative sum of their posteriorProbability values is below 0.95 and 0.99, respectively.
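In pure-Python terms, the flagging logic reads roughly as follows (a sketch of the behaviour described above, not the actual Spark implementation; tags stands in for the elements of the locus array):

def flag_credible_sets(tags):
    # Sort tagging variants by posterior probability, descending.
    tags = sorted(tags, key=lambda t: t["posteriorProbability"], reverse=True)
    cumulative = 0.0
    for tag in tags:
        # A variant is flagged while the cumulative sum of the posterior
        # probabilities before it is still below the respective cutoff.
        tag["is95CredibleSet"] = cumulative < 0.95
        tag["is99CredibleSet"] = cumulative < 0.99
        cumulative += tag["posteriorProbability"]
    return tags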
Find overlapping study-locus that share at least one tagging variant. All GWAS-GWAS and all GWAS-Molecular traits are computed with the Molecular traits always appearing on the right side.
Study-Locus quality control options listing concerns on the quality of the association.
Attributes:
    SUBSIGNIFICANT_FLAG (str): p-value below significance threshold
    NO_GENOMIC_LOCATION_FLAG (str): Incomplete genomic mapping
    COMPOSITE_FLAG (str): Composite association due to variant x variant interactions
    VARIANT_INCONSISTENCY_FLAG (str): Inconsistencies in the reported variants
    NON_MAPPED_VARIANT_FLAG (str): Variant not mapped to GnomAd
    PALINDROMIC_ALLELE_FLAG (str): Alleles are palindromic - cannot harmonize
    AMBIGUOUS_STUDY (str): Association with ambiguous study
    UNRESOLVED_LD (str): Variant not found in LD reference
    LD_CLUMPED (str): Explained by a more significant variant in high LD (clumped)
Source code in src/otg/dataset/study_locus.py
26 27 28 29
@@ -1234,7 +1194,7 @@
UNRESOLVED_LD="Variant not found in LD reference"LD_CLUMPED="Explained by a more significant variant in high LD (clumped)"NO_POPULATION="Study does not have population annotation to resolve LD"
-
\ No newline at end of file
diff --git a/python_api/dataset/study_locus_overlap/index.html b/python_api/dataset/study_locus_overlap/index.html
index f40095983..76bc18763 100644
--- a/python_api/dataset/study_locus_overlap/index.html
+++ b/python_api/dataset/study_locus_overlap/index.html
@@ -1,20 +1,4 @@
- StudyLocusOverlap - Open Targets Genetics
This dataset captures pairs of overlapping StudyLocus: that is associations whose credible sets share at least one tagging variant.
Note
This is a helpful dataset for other downstream analyses, such as colocalisation. This dataset will contain the overlapping signals between studyLocus associations once they have been clumped and fine-mapped.
Source code in src/otg/dataset/study_locus_overlap.py
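As a pure-Python illustration of the overlap criterion (names are hypothetical; the actual implementation is a Spark join over tagging variants):

def find_overlaps(loci):
    # loci maps a studyLocusId to the set of tagging variant IDs in its
    # credible set; two loci overlap when those sets intersect.
    ids = sorted(loci)
    return [
        (a, b)
        for i, a in enumerate(ids)
        for b in ids[i + 1:]
        if loci[a] & loci[b]
    ]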
\ No newline at end of file
diff --git a/python_api/dataset/summary_statistics/index.html b/python_api/dataset/summary_statistics/index.html
index 557a5ce04..0982b07fa 100644
--- a/python_api/dataset/summary_statistics/index.html
+++ b/python_api/dataset/summary_statistics/index.html
@@ -1,16 +1,4 @@
- SummaryStatistics - Open Targets Genetics
\ No newline at end of file
diff --git a/python_api/dataset/variant_annotation/index.html b/python_api/dataset/variant_annotation/index.html
index fbc286bd7..2d67dd658 100644
--- a/python_api/dataset/variant_annotation/index.html
+++ b/python_api/dataset/variant_annotation/index.html
@@ -1,36 +1,4 @@
- VariantAnnotation - Open Targets Genetics
Creates a dataset with variant-to-gene assignments, flagging whether the variant is predicted to be a loss-of-function variant by the LOFTEE algorithm.
Optionally, the transcript consequences can be reduced to the universe of a gene index.
Creates a dataset with variant-to-gene assignments, with PolyPhen's predicted score on the transcript.
PolyPhen informs about the probability that a substitution is damaging. The score can be interpreted as follows:
- 0.0 to 0.15 -- Predicted to be benign.
- 0.15 to 0.85 -- Possibly damaging.
- 0.85 to 1.0 -- Predicted to be damaging.
Creates a dataset with variant-to-gene assignments, with SIFT's predicted score on the transcript.
SIFT informs about the probability that a substitution is tolerated. The score can be interpreted as follows:
- 0.0 to 0.05 -- Likely to be deleterious.
- 0.05 to 1.0 -- Likely to be tolerated.
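A small sketch of how these cutoffs translate into categories (function names are hypothetical):

def classify_polyphen(score):
    # PolyPhen: probability that the substitution is damaging.
    if score < 0.15:
        return "benign"
    return "damaging" if score >= 0.85 else "possibly damaging"

def classify_sift(score):
    # SIFT: probability that the substitution is tolerated.
    return "deleterious" if score <= 0.05 else "tolerated"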
\ No newline at end of file
diff --git a/python_api/dataset/variant_index/index.html b/python_api/dataset/variant_index/index.html
index 3c0f80250..66889e7df 100644
--- a/python_api/dataset/variant_index/index.html
+++ b/python_api/dataset/variant_index/index.html
@@ -1,23 +1,4 @@
- VariantIndex - Open Targets Genetics
\ No newline at end of file
diff --git a/python_api/dataset/variant_to_gene/index.html b/python_api/dataset/variant_to_gene/index.html
index c937235c3..d2cbc4ec5 100644
--- a/python_api/dataset/variant_to_gene/index.html
+++ b/python_api/dataset/variant_to_gene/index.html
@@ -1,17 +1,4 @@
- V2G - Open Targets Genetics
Variant-to-gene (V2G) evidence is understood as any piece of evidence that supports the association of a variant with a likely causal gene. The evidence can sometimes be context-specific and refer to specific biofeatures (e.g. cell types).
\ No newline at end of file
diff --git a/python_api/datasource/_datasource/index.html b/python_api/datasource/_datasource/index.html
index 6525a3d61..aa764a836 100644
--- a/python_api/datasource/_datasource/index.html
+++ b/python_api/datasource/_datasource/index.html
@@ -1 +1 @@
- Datasource - Open Targets Genetics
\ No newline at end of file
diff --git a/python_api/datasource/finngen/_finngen/index.html b/python_api/datasource/finngen/_finngen/index.html
index 88b4df081..a6ba86bba 100644
--- a/python_api/datasource/finngen/_finngen/index.html
+++ b/python_api/datasource/finngen/_finngen/index.html
@@ -1,6 +1,6 @@
- FinnGen - Open Targets Genetics
\ No newline at end of file
diff --git a/python_api/datasource/finngen/study_index/index.html b/python_api/datasource/finngen/study_index/index.html
index 338594f3a..ebdde7566 100644
--- a/python_api/datasource/finngen/study_index/index.html
+++ b/python_api/datasource/finngen/study_index/index.html
@@ -1,4 +1,4 @@
- Study index - Open Targets Genetics
\ No newline at end of file
diff --git a/python_api/datasource/gnomad/_gnomad/index.html b/python_api/datasource/gnomad/_gnomad/index.html
index af0eadf7d..5691fbb8d 100644
--- a/python_api/datasource/gnomad/_gnomad/index.html
+++ b/python_api/datasource/gnomad/_gnomad/index.html
@@ -1,6 +1,6 @@
- Gnomad - Open Targets Genetics
\ No newline at end of file
diff --git a/python_api/datasource/gnomad/gnomad_ld/index.html b/python_api/datasource/gnomad/gnomad_ld/index.html
index 5084dc3a2..2a7ef1b16 100644
--- a/python_api/datasource/gnomad/gnomad_ld/index.html
+++ b/python_api/datasource/gnomad/gnomad_ld/index.html
@@ -1,4 +1,4 @@
- Gnomad ld - Open Targets Genetics
The information comes from LD matrices made available by GnomAD in Hail's native format. We aggregate the LD information across 8 ancestries. The basic steps to generate the LDIndex are:
Convert a LD matrix to a Spark DataFrame.
Resolve the matrix indices to variant IDs by lifting over the coordinates to GRCh38.
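A hedged sketch of the first step in Hail (the matrix path is a placeholder, not a real GnomAD location):

import hail as hl

hl.init()
# Read one ancestry's LD BlockMatrix and flatten it into a Spark DataFrame
# of (i, j, entry) rows; indices are resolved to variant IDs afterwards.
bm = hl.linalg.BlockMatrix.read("gs://path/to/gnomad_ld.bm")  # placeholder path
ld_df = bm.entries().to_spark()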
\ No newline at end of file
diff --git a/python_api/datasource/gnomad/gnomad_variants/index.html b/python_api/datasource/gnomad/gnomad_variants/index.html
index 71ed59b72..05b3fc6d2 100644
--- a/python_api/datasource/gnomad/gnomad_variants/index.html
+++ b/python_api/datasource/gnomad/gnomad_variants/index.html
@@ -1,4 +1,4 @@
- Gnomad variants - Open Targets Genetics
\ No newline at end of file
diff --git a/python_api/datasource/gwas_catalog/_gwas_catalog/index.html b/python_api/datasource/gwas_catalog/_gwas_catalog/index.html
index bc496edce..2d5b3ad21 100644
--- a/python_api/datasource/gwas_catalog/_gwas_catalog/index.html
+++ b/python_api/datasource/gwas_catalog/_gwas_catalog/index.html
@@ -1 +1 @@
- Gwas catalog - Open Targets Genetics
\ No newline at end of file
diff --git a/python_api/datasource/gwas_catalog/associations/index.html b/python_api/datasource/gwas_catalog/associations/index.html
index a7b430a54..877607400 100644
--- a/python_api/datasource/gwas_catalog/associations/index.html
+++ b/python_api/datasource/gwas_catalog/associations/index.html
@@ -1,4 +1,4 @@
- Associations - Open Targets Genetics
\ No newline at end of file
diff --git a/python_api/datasource/gwas_catalog/study_index/index.html b/python_api/datasource/gwas_catalog/study_index/index.html
index f10b69de6..0592203f7 100644
--- a/python_api/datasource/gwas_catalog/study_index/index.html
+++ b/python_api/datasource/gwas_catalog/study_index/index.html
@@ -1,4 +1,4 @@
- Study index - Open Targets Genetics
Source code in src/otg/datasource/gwas_catalog/study_index.py
265 266 267 268
@@ -1101,4 +1101,4 @@
)
return self
-
2023-09-25
2023-10-16
Contributors
\ No newline at end of file
+
2023-09-25
2023-10-31
Contributors
\ No newline at end of file
diff --git a/python_api/datasource/gwas_catalog/study_splitter/index.html b/python_api/datasource/gwas_catalog/study_splitter/index.html
index 1c6cf36f3..37e3065ae 100644
--- a/python_api/datasource/gwas_catalog/study_splitter/index.html
+++ b/python_api/datasource/gwas_catalog/study_splitter/index.html
@@ -1,4 +1,4 @@
- Study splitter - Open Targets Genetics
If the disease assigned to the study and to the association don't agree, we assume the study needs to be split. The disease EFOs, trait names and study ID are then consolidated.
\ No newline at end of file
diff --git a/python_api/datasource/gwas_catalog/summary_statistics/index.html b/python_api/datasource/gwas_catalog/summary_statistics/index.html
index 14662a1c6..de86080b9 100644
--- a/python_api/datasource/gwas_catalog/summary_statistics/index.html
+++ b/python_api/datasource/gwas_catalog/summary_statistics/index.html
@@ -1,4 +1,4 @@
- Summary statistics - Open Targets Genetics
\ No newline at end of file
diff --git a/python_api/datasource/intervals/_intervals/index.html b/python_api/datasource/intervals/_intervals/index.html
index b52b5eca3..406afca39 100644
--- a/python_api/datasource/intervals/_intervals/index.html
+++ b/python_api/datasource/intervals/_intervals/index.html
@@ -1 +1 @@
- Intervals - Open Targets Genetics
\ No newline at end of file
diff --git a/python_api/datasource/intervals/andersson/index.html b/python_api/datasource/intervals/andersson/index.html
index 1f1da669d..196d5a8a9 100644
--- a/python_api/datasource/intervals/andersson/index.html
+++ b/python_api/datasource/intervals/andersson/index.html
@@ -1,4 +1,4 @@
- Andersson - Open Targets Genetics
\ No newline at end of file
diff --git a/python_api/datasource/intervals/javierre/index.html b/python_api/datasource/intervals/javierre/index.html
index 10cac11e6..e14181275 100644
--- a/python_api/datasource/intervals/javierre/index.html
+++ b/python_api/datasource/intervals/javierre/index.html
@@ -1,4 +1,4 @@
- Javierre - Open Targets Genetics
\ No newline at end of file
diff --git a/python_api/datasource/intervals/jung/index.html b/python_api/datasource/intervals/jung/index.html
index 06099ce04..eceb217f0 100644
--- a/python_api/datasource/intervals/jung/index.html
+++ b/python_api/datasource/intervals/jung/index.html
@@ -1,4 +1,4 @@
- Jung - Open Targets Genetics
Source code in src/otg/datasource/intervals/jung.py
21 22 23 24
@@ -331,4 +331,4 @@
DataFrame: DataFrame with raw jung data
"""
return spark.read.csv(path, sep=",", header=True)
-
2023-09-25
2023-10-16
Contributors
\ No newline at end of file
+
2023-09-25
2023-10-31
Contributors
\ No newline at end of file
diff --git a/python_api/datasource/intervals/thurman/index.html b/python_api/datasource/intervals/thurman/index.html
index 751c8c8fb..fe9f2c5d6 100644
--- a/python_api/datasource/intervals/thurman/index.html
+++ b/python_api/datasource/intervals/thurman/index.html
@@ -1,4 +1,4 @@
- Thurman - Open Targets Genetics
\ No newline at end of file
diff --git a/python_api/datasource/open_targets/_open_targets/index.html b/python_api/datasource/open_targets/_open_targets/index.html
index 1f24e07cd..52b85efde 100644
--- a/python_api/datasource/open_targets/_open_targets/index.html
+++ b/python_api/datasource/open_targets/_open_targets/index.html
@@ -1,6 +1,6 @@
- Open targets - Open Targets Genetics
The Open Targets Platform is a comprehensive resource that aims to aggregate and harmonise various types of data to facilitate the identification, prioritisation, and validation of drug targets. By integrating publicly available datasets including data generated by the Open Targets consortium, the Platform builds and scores target-disease associations to assist in drug target identification and prioritisation. It also integrates relevant annotation information about targets, diseases, phenotypes, and drugs, as well as their most relevant relationships.
Genomic data from Open Targets integrates human genome-wide association studies (GWAS) and functional genomics data including gene expression, protein abundance, chromatin interaction and conformation data from a wide range of cell types and tissues to make robust connections between GWAS-associated loci, variants and likely causal genes.
2023-10-16
2023-10-20
Contributors
\ No newline at end of file
+
The Open Targets Platform is a comprehensive resource that aims to aggregate and harmonise various types of data to facilitate the identification, prioritisation, and validation of drug targets. By integrating publicly available datasets including data generated by the Open Targets consortium, the Platform builds and scores target-disease associations to assist in drug target identification and prioritisation. It also integrates relevant annotation information about targets, diseases, phenotypes, and drugs, as well as their most relevant relationships.
Genomic data from Open Targets integrates human genome-wide association studies (GWAS) and functional genomics data including gene expression, protein abundance, chromatin interaction and conformation data from a wide range of cell types and tissues to make robust connections between GWAS-associated loci, variants and likely causal genes.