Get rid of Hardcoded paths #23
Open
Reeya123 wants to merge 8 commits into master from ScriptPolishing-Reeya
Changes from 7 of 8 commits:
e345f6b  mariacuria  Tidy up directory
2da1abe  mariacuria  Compare fasta sequences initial commit
56b0686  mariacuria  Write whether or not a given ENSP is canonical
a077d36  mariacuria  Moved scripts to proper directories
16e9007  mariacuria  Upd gitignore
1fbc6d5  Reeya123    Merge branch 'fasta' of https://github.com/GW-HIVE/biomuta-old
ca0a660  Reeya123    updated scripts with paths dynamically fetched from config.json
48d1a1e  Reeya123    Updated paths to config file
@@ -1,10 +1,12 @@
{
   "relevant_paths": {
      "repos_generated_datasets": "/data/shared/repos/biomuta-old/generated_datasets",
      "repos_downloads": "/data/shared/repos/biomuta-old/downloads",
      "downloads": "/data/shared/biomuta/downloads",
      "generated_datasets": "/data/shared/biomuta/generated/datasets",
      "mapping": "/data/shared/biomuta/pipeline/convert_step2/mapping"
   },
   "resource_init": {
      "cbioportal": ["subfolder1", "subfolder2"]
   }
}
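A quick way to sanity-check a config like this is to parse it and confirm every configured path is absolute, so the scripts behave the same regardless of working directory. The sketch below inlines the config text for illustration; in the repo it would be read from config.json:

```python
import json

# Inlined copy of the config above; in the repo this would be read from config.json
config_text = """
{
   "relevant_paths": {
      "repos_generated_datasets": "/data/shared/repos/biomuta-old/generated_datasets",
      "repos_downloads": "/data/shared/repos/biomuta-old/downloads",
      "downloads": "/data/shared/biomuta/downloads",
      "generated_datasets": "/data/shared/biomuta/generated/datasets",
      "mapping": "/data/shared/biomuta/pipeline/convert_step2/mapping"
   },
   "resource_init": {
      "cbioportal": ["subfolder1", "subfolder2"]
   }
}
"""

config = json.loads(config_text)

# Every path should be absolute so no script depends on its CWD
for name, path in config["relevant_paths"].items():
    assert path.startswith("/"), f"{name} is not an absolute path"

print(f"{len(config['relevant_paths'])} paths configured")
```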
8 changes: 6 additions & 2 deletions
...ep2/cbioportal/1_ensp_to_uniprot_batch.sh → ...ep2/cbioportal/2_ensp_to_uniprot_batch.sh
@@ -0,0 +1,47 @@
import requests
import json
from pathlib import Path

# Load config.json
config_path = Path(__file__).resolve().parent.parent / "config.json"
with open(config_path, "r") as config_file:
    config = json.load(config_file)

# Retrieve paths from config.json
repos_base = Path(config["relevant_paths"]["repos"])
ensembl_uniprot_map_path = repos_base / "2024_10_22/mapping_ids/canonical_toy.json"

# Load the JSON file containing ENSEMBL-to-UniProt mappings
with open(ensembl_uniprot_map_path, "r") as file:
    ensembl_uniprot_map = json.load(file)

def fetch_ensembl_sequence(ensembl_id):  # Fetch ENSEMBL sequence
    url = f"https://rest.ensembl.org/sequence/id/{ensembl_id}?content-type=text/plain"
    response = requests.get(url)
    if response.status_code == 200:
        return response.text.strip()
    else:
        print(f"Failed to fetch ENSEMBL sequence for {ensembl_id}")
        return None

def fetch_uniprot_sequence(uniprot_id):  # Fetch UniProt sequence
    url = f"https://rest.uniprot.org/uniprotkb/{uniprot_id}.fasta"
    response = requests.get(url)
    if response.status_code == 200:
        return response.text.split('\n', 1)[1].replace('\n', '')
    else:
        print(f"Failed to fetch UniProt sequence for {uniprot_id}")
        return None

# Compare sequences
for ensembl_id, uniprot_id in ensembl_uniprot_map.items():
    ensembl_sequence = fetch_ensembl_sequence(ensembl_id)
    uniprot_sequence = fetch_uniprot_sequence(uniprot_id)

    if ensembl_sequence and uniprot_sequence:
        if ensembl_sequence == uniprot_sequence:
            print(f"Sequences match for {ensembl_id} and {uniprot_id}")
        else:
            print(f"Sequences do not match for {ensembl_id} and {uniprot_id}")
    else:
        print(f"Could not compare sequences for {ensembl_id} and {uniprot_id}")
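The match/mismatch/incomparable decision in the loop above can be exercised without any network access by factoring it into a pure function. This is an offline sketch of that logic, not part of the PR; the sequences are made up for illustration:

```python
def compare(ensembl_sequence, uniprot_sequence):
    """Mirror of the script's decision logic, returning a label instead of printing."""
    if ensembl_sequence and uniprot_sequence:
        return "match" if ensembl_sequence == uniprot_sequence else "mismatch"
    return "incomparable"  # one or both fetches failed (returned None)

# Made-up sequences exercising all three branches
assert compare("MKTAYIA", "MKTAYIA") == "match"
assert compare("MKTAYIA", "MKTAYIG") == "mismatch"
assert compare(None, "MKTAYIA") == "incomparable"
print("comparison logic ok")
```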
@@ -0,0 +1,12 @@
import requests

def fetch_uniprot_sequence(uniprot_id):
    url = f"https://rest.uniprot.org/uniprotkb/{uniprot_id}.fasta"
    response = requests.get(url)
    if response.status_code == 200:
        return response.text.split('\n', 1)[1].replace('\n', '')
    else:
        print(f"Failed to fetch UniProt sequence for {uniprot_id}")
        return None

print(fetch_uniprot_sequence('Q9NXB0'))
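The expression `response.text.split('\n', 1)[1].replace('\n', '')` drops the FASTA header line and joins the wrapped sequence lines into one string. Demonstrated on a literal record (header and sequence are shortened, made-up stand-ins, not real Q9NXB0 data):

```python
# A made-up single-record FASTA string standing in for a UniProt response
fasta = ">sp|XXXXXX|EXAMPLE_HUMAN illustrative record\nMAEG\nVTLV\nDQHS\n"

# Drop the header line, then join the wrapped sequence lines
sequence = fasta.split('\n', 1)[1].replace('\n', '')
print(sequence)  # MAEGVTLVDQHS
```

Note the expression assumes exactly one record with a header line present: it raises IndexError on an empty body, and would concatenate multiple records if the response contained more than one.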
mariacuria marked this conversation as resolved.
This file contains bidirectional Unicode text that may be interpreted or compiled differently than what appears below. To review, open the file in an editor that reveals hidden Unicode characters.
Learn more about bidirectional Unicode characters
This file contains bidirectional Unicode text that may be interpreted or compiled differently than what appears below. To review, open the file in an editor that reveals hidden Unicode characters.
Learn more about bidirectional Unicode characters
@@ -0,0 +1,112 @@
import csv
import pickle
import sys
from pathlib import Path

sys.path.append(str(Path(__file__).resolve().parent.parent.parent.parent))
from utils import ROOT_DIR
from utils.config import get_config

# Load input_bed and ann
# Set input directory for JSON files and output file for BED format
config_obj = get_config()
wd = Path(config_obj["relevant_paths"]["generated_datasets"])
input_bed = wd / '2024_10_22' / 'liftover' / 'hg38_combined_toy.bed'  # Write a util to get latest dir
ann_dir = Path(config_obj["relevant_paths"]["downloads"])
ann = ann_dir / 'ensembl' / 'Homo_sapiens.GRCh38.113.gff3'  # GFF file with genomic features
output_file = wd / '2024_10_22' / 'mapping_ids' / 'chr_pos_to_ensp_toy.csv'


# Step 1: Load and serialize 'ann' annotations
def parse_and_serialize_ann():
    annotations = {}
    print("Parsing annotations from ann...")
    with open(ann, 'r') as f:
        for line in f:
            if not line.startswith('#'):
                fields = line.strip().split('\t')
                chrom = 'chr' + fields[0]  # Add 'chr' prefix to match format in input_bed
                feature_start = int(fields[3])
                feature_end = int(fields[4])
                if chrom not in annotations:
                    annotations[chrom] = []
                annotations[chrom].append((feature_start, feature_end, line.strip()))
    with open('annotations.pkl', 'wb') as f:
        pickle.dump(annotations, f)
    print("Serialized annotations to 'annotations.pkl'")

# Step 2: Load 'ann' from serialized file
def load_annotations():
    with open('annotations.pkl', 'rb') as f:
        return pickle.load(f)

# Step 3: Process 'input_bed' and write results in batches
def process_large_input_bed():
    # Load serialized annotations
    annotations = load_annotations()

    # Open output CSV file
    with open(output_file, 'w', newline='') as csvfile:
        # Define the new headers for the output file
        writer = csv.writer(csvfile)
        writer.writerow(['chr_id', 'start_pos', 'end_pos', 'entrez_gene_id', 'prot_change', 'ENSP'])

        batch = []
        batch_size = 10000  # Define batch size for writing

        print("Processing SNP positions from input_bed and writing to CSV...")

        with open(input_bed, 'r') as f:
            # Skip the header line (if the first line is a header)
            header_skipped = False

            for i, line in enumerate(f, start=1):
                if not header_skipped:
                    header_skipped = True
                    continue  # Skip the header

                fields = line.strip().split('\t')

                # Check that the necessary fields are numeric before proceeding
                try:
                    start = int(fields[1])  # start_pos
                    end = int(fields[2])  # end_pos
                except ValueError:
                    print(f"Skipping invalid line {i}: {line.strip()}")
                    continue  # Skip lines where start or end position is not numeric

                chrom = fields[0]  # chr_id
                entrez = fields[3]  # entrez_gene_id
                prot_change = fields[4]  # prot_change

                # Find matching annotations
                if chrom in annotations:
                    for feature_start, feature_end, annotation in annotations[chrom]:
                        if start >= feature_start and end <= feature_end:
                            ensp = None
                            for field in annotation.split(';'):
                                if 'protein_id=' in field:
                                    ensp = field.split('=')[1]
                                    break
                            if ensp:
                                # Add match to batch with renamed fields
                                batch.append([chrom, start, end, entrez, prot_change, ensp])

                # Write batch to file every 'batch_size' records
                if len(batch) >= batch_size:
                    writer.writerows(batch)
                    batch.clear()  # Clear batch after writing to file
                    print(f"Processed {i} lines so far...")  # Status update

        # Write remaining entries in the batch
        if batch:
            writer.writerows(batch)
            print("Wrote remaining records to file.")

    print(f"Process completed. Results written to {output_file}")


# Run the workflow
parse_and_serialize_ann()  # Run once to create the serialized annotations file if needed
process_large_input_bed()  # Process large 'input_bed' and write results
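The ENSP lookup in Step 3 scans the semicolon-delimited GFF3 attribute column for a `protein_id=` entry. A minimal offline reproduction of that extraction, on a made-up annotation line (fields and IDs are illustrative, not real Ensembl records):

```python
# Made-up GFF3 CDS line: seqid, source, type, start, end, score, strand, phase, attributes
annotation = ("19\tensembl\tCDS\t100\t200\t.\t+\t0\t"
              "ID=CDS:ENSP00000000001;protein_id=ENSP00000000001;Parent=transcript:ENST00000000001")

# Same extraction loop as the script: first attribute containing 'protein_id=' wins
ensp = None
for field in annotation.split(';'):
    if 'protein_id=' in field:
        ensp = field.split('=')[1]
        break
print(ensp)  # ENSP00000000001
```

Note the substring test is loose: any attribute whose name merely contains `protein_id` would also match. Splitting each attribute on `=` and comparing the key exactly would be stricter.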
"/data/shared/repos/biomuta-old/downloads" is a symlink to "/data/shared/biomuta/downloads", and "/data/shared/repos/biomuta-old/generated_datasets" is a symlink to "/data/shared/biomuta/generated/datasets".
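Because of those symlinks, the `repos_*` entries and the plain entries in config.json point at the same physical directories. The behavior can be sketched with `pathlib` in a temporary directory (standing in for the real /data/shared tree, which is assumed, not reproduced here):

```python
import tempfile
from pathlib import Path

# Recreate the layout in a temp dir: a real directory plus a symlink to it,
# mirroring how /data/shared/repos/biomuta-old/downloads points at
# /data/shared/biomuta/downloads on the server.
root = Path(tempfile.mkdtemp())
real = root / "biomuta" / "downloads"
real.mkdir(parents=True)
link_dir = root / "repos" / "biomuta-old"
link_dir.mkdir(parents=True)
link = link_dir / "downloads"
link.symlink_to(real)

# Both spellings resolve to the same physical directory
print(link.resolve() == real.resolve())  # True
```

This is why consolidating on one set of keys (rather than keeping both `repos_downloads` and `downloads`) would lose nothing: either path resolves identically.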