[SOLVED] What is the correct way to download CT-RATE v2?
Hello, I am trying to download the dataset and have tried several methods:
- Using scripts from https://github.com/sezginerr/example_download_script:
  - Using `snapshot_download.py` doesn't work: I immediately get `Fetching 0 files: 0it [00:00, ?it/s]` and the script exits.
  - Using `download_only_valid_data.py` will download the `valid` folder and not `valid_fixed`. Same for train.
  - There is no script to download the TotalSegmentor segmentations.
The last commit to that repo was last year, and v2 came out some months ago, so I presume the scripts don't support v2.
I also tried the command in https://huggingface.co/datasets/ibrahimhamamci/CT-RATE/discussions/88 (which is `huggingface-cli download ...`), but got the same output as `snapshot_download.py`.
I am currently downloading with `git clone [email protected]:datasets/ibrahimhamamci/CT-RATE`, as that is the only solution that actually downloads, but I know this is not a recommended way. The command's output is:
```
Cloning into 'CT-RATE'...
remote: Enumerating objects: 485822, done.
remote: Counting objects: 100% (3/3), done.
remote: Compressing objects: 100% (3/3), done.
remote: Total 485822 (delta 0), reused 0 (delta 0), pack-reused 485819 (from 1)
Receiving objects: 100% (485822/485822), 60.97 MiB | 10.33 MiB/s, done.
Resolving deltas: 100% (169/169), done.
Updating files: 100% (251051/251051), done.
Filtering content: 9% (23862/250996), 5309.30 GiB | 32.92 MiB/s
```
But that suggests the dataset is about 50 TB? Is that correct? I have seen some discussions saying that v1 is about 11 TB.
I would be happy to skip downloading the v1 `train`/`valid` folders, if there is a way, since I only need the fixed versions.
I'd love to hear the correct way to download v2 in a restart-safe way, e.g. with `huggingface-cli`.
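For reference, the kind of invocation I have in mind is something like the one below; this is a guess on my side, assuming the CLI's `--include` filters work on this repo (re-running the same command should resume an interrupted download):

```bash
huggingface-cli download ibrahimhamamci/CT-RATE \
    --repo-type dataset \
    --include "dataset/train_fixed/*" "dataset/valid_fixed/*" \
    --local-dir ./CT-RATE
```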
Thank you very much, and thank you for your contribution!
Note: I was able to download CT-RATE v1 BEFORE the v2 update (I can't now, as the script exits) using `snapshot_download.py` from the example_download_script repo; I just changed the script to auto-restart:
```python
import time
from huggingface_hub import snapshot_download

# Retry loop taken from https://github.com/huggingface/huggingface_hub/issues/1542
while True:
    try:
        snapshot_path = snapshot_download(repo_id="ibrahimhamamci/CT-RATE", repo_type="dataset", local_dir="./dataset")
        break
    except Exception as e:
        print(f"Error: {e}. Retrying in 1s...")
        time.sleep(1)
```
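If only the fixed folders are needed, the same retry loop should in principle be able to skip the v1 `train`/`valid` folders via `snapshot_download`'s `allow_patterns` argument; this is a sketch I haven't been able to test against v2, given the 0-files issue above:

```python
import time
from huggingface_hub import snapshot_download

# Sketch: restrict the snapshot to the v2 fixed folders with glob patterns.
# The paths assume the layout dataset/train_fixed/ and dataset/valid_fixed/.
while True:
    try:
        snapshot_path = snapshot_download(
            repo_id="ibrahimhamamci/CT-RATE",
            repo_type="dataset",
            local_dir="./dataset",
            allow_patterns=["dataset/train_fixed/*", "dataset/valid_fixed/*"],
        )
        break
    except Exception as e:
        print(f"Error: {e}. Retrying in 1s...")
        time.sleep(1)
```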
Same here. I tried all of the download scripts (download_only_train, download_only_val, snapshot_download) but got `Fetching 0 files`.
I'm not sure about the `snapshot_download.py` issue, but I downloaded v1 several months ago using the adapted version of `download_only_train_data.py` from https://huggingface.co/datasets/ibrahimhamamci/CT-RATE/discussions/64 (farrell236's comment, halfway down). Just changing the directory to `directory_name = f'dataset/{split}_fixed/'` should work, so you don't get the v1 download again; the sketch below shows the relevant change. The v2 fixed version currently seems to be downloading properly for me this way. The script I used is here: https://huggingface.co/datasets/ibrahimhamamci/CT-RATE/blob/main/download_scripts/download_dataset.py
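Roughly, the adapted loop looks like this (a minimal sketch, assuming the folder layout above and the `train_labels.csv`/`valid_labels.csv` files from the scripts repo):

```python
import os

import pandas as pd
from huggingface_hub import hf_hub_download

split = 'train'  # or 'valid'
directory_name = f'dataset/{split}_fixed/'  # the one-line change: point at the fixed folders
data = pd.read_csv(f'{split}_labels.csv')

for name in data['VolumeName']:
    # e.g. train_1_a_1.nii.gz lives under dataset/train_fixed/train_1/train_1_a/
    p = name.split('_')
    subfolder = directory_name + f'{p[0]}_{p[1]}/{p[0]}_{p[1]}_{p[2]}'
    hf_hub_download(repo_id='ibrahimhamamci/CT-RATE', repo_type='dataset',
                    token=os.getenv("HF_TOKEN"),  # the dataset is gated
                    subfolder=subfolder, filename=name, local_dir='./CT-RATE')
```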
What is the advantage of downloading with the sezginerr/example_download_script scripts compared to `git clone` or `huggingface-cli download`?
Updating this discussion: I updated the `download_dataset.py` script to download `train_fixed` and the segmentations (from TotalSegmentor only, i.e. `ts_total`, but it can be used for all segs).
I put both scripts in a repo: https://github.com/yinon-gold/CT-RATE-download-scripts, and I'm planning to open a pull request to sezginerr/example_download_script.
The `download_dataset.py` is a small change from https://huggingface.co/datasets/ibrahimhamamci/CT-RATE/blob/main/download_scripts/download_dataset.py.
This is the script I used to download the TotalSegmentor segmentations, available in the repo as `download_segmentations.py`:
```python
import shutil  # only needed if the cache cleanup below is enabled
import os
import sys

import pandas as pd
from huggingface_hub import hf_hub_download
from tqdm import tqdm

split = 'train_fixed'
batch_size = 100
start_at = 0
repo_id = 'ibrahimhamamci/CT-RATE'
directory_name = f'dataset/{split}/'
hf_token = os.getenv("HF_TOKEN")

data = pd.read_csv(f'{split.split("_")[0]}_labels.csv')  # changed: e.g. train_labels.csv

for i in tqdm(range(start_at, len(data), batch_size)):
    data_batched = data[i:i + batch_size]
    for name in data_batched['VolumeName']:
        # Volume names like train_1_a_1.nii.gz map to dataset/<split>/train_1/train_1_a/
        folder1 = name.split('_')[0]
        folder2 = name.split('_')[1]
        folder = folder1 + '_' + folder2
        folder3 = name.split('_')[2]
        subfolder = folder + '_' + folder3
        subfolder = directory_name + folder + '/' + subfolder
        ll = subfolder.split('/')  # added
        seg_folder = os.path.join(ll[0], 'ts_seg', 'ts_total', *ll[1:])  # added: segmentations live under dataset/ts_seg/ts_total/
        try:
            hf_hub_download(repo_id=repo_id,
                            repo_type='dataset',
                            token=hf_token,
                            subfolder=seg_folder,
                            filename=name,
                            local_dir='<PATH_TO_CT-RATE>',
                            resume_download=True,
                            )
        except Exception as e:
            # Log the failure to errors.txt and to stderr, then move on to the next volume
            with open('errors.txt', 'a') as f:
                f.write(f"{e} :: {name=} {seg_folder=}\n")
            print(f"{e} :: {name=} {seg_folder=}", file=sys.stderr)
            try:
                # shutil.rmtree('.cache/huggingface/download/dataset/')  # note that a .cache folder is created and should be deleted
                continue
            except Exception as e:
                print(f"Error removing cache directory: {e}", file=sys.stderr)
```
If you want to download a subset, here is the code; it works nicely for me. Just change `DEST` (the destination folder) and the number of samples you want to download.
```python
import os
from pathlib import Path

from tqdm import tqdm
from huggingface_hub import HfApi, hf_hub_download

REPO_ID = "ibrahimhamamci/CT-RATE"  # gated dataset (accept terms first)
DEST = "/......./CT_RATE"

# Create folders
Path(DEST).mkdir(parents=True, exist_ok=True)

api = HfApi()
print("Listing files from HF (this may take a moment)...")
files = api.list_repo_files(REPO_ID, repo_type="dataset", revision="main")

# Pull labels/metadata/reports
keep_dirs = (
    "dataset/metadata/",
    "dataset/multi_abnormality_labels/",
    "dataset/radiology_text_reports/",
)
meta = [f for f in files if f.startswith(keep_dirs)]

# Volumes are under train_fixed/ and valid_fixed/ as .nii.gz
vols = [f for f in files
        if (f.startswith("dataset/train_fixed/") or f.startswith("dataset/valid_fixed/"))
        and f.endswith(".nii.gz")]

# --- SUBSET HERE ---
LIMIT = 50  # change later for more
vols = vols[:LIMIT]

to_get = meta + vols
print(f"Will download {len(meta)} meta/label files and {len(vols)} volumes to {DEST}")

for f in tqdm(to_get, desc="Downloading"):
    # This writes real files (no symlinks) into DEST, preserving directories
    hf_hub_download(
        REPO_ID, filename=f, repo_type="dataset",
        local_dir=DEST, local_dir_use_symlinks=False
    )

print("Done. Example paths now on disk like:")
print("  /......./CT_RATE/dataset/train_fixed/train_1/train_1_a/train_1_a_1.nii.gz")
```
In case you're wondering why the above repository does not contain any CSV files: they are present in https://github.com/sezginerr/example_download_script.