1 What if my dataset isn't on the Hub?
1.1 Working with local and remote datasets
1.1.1 The datasets library supports several common data formats:
CSV & TSV (csv): load_dataset("csv", data_files="my_file.csv")
Text files (text): load_dataset("text", data_files="my_file.txt")
JSON & JSON Lines (json): load_dataset("json", data_files="my_file.jsonl")
Pickled DataFrames (pandas): load_dataset("pandas", data_files="my_dataframe.pkl")
Arrow (arrow): load_dataset("arrow", data_files={'train': 'train.arrow', 'test': 'test.arrow'})
1.1.2 The load_dataset() function
from datasets import load_dataset

squad_it_dataset = load_dataset("json", data_files="SQuAD_it-train.json", field="data")
squad_it_dataset["train"][0]
1.1.3 Loading split data files
data_files = {"train": "SQuAD_it-train.json", "test": "SQuAD_it-test.json"}
squad_it_dataset = load_dataset("json", data_files=data_files, field="data")
squad_it_dataset
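data_files can also map each split to a list of files or a glob pattern; a minimal sketch (the part file names below are hypothetical):
data_files = {
    "train": ["SQuAD_it-train-part1.json", "SQuAD_it-train-part2.json"],  # hypothetical shards
    "test": "SQuAD_it-test.json",
}
squad_it_dataset = load_dataset("json", data_files=data_files, field="data")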
1.1.4 Loading a remote dataset
url = "https://github.com/crux82/squad-it/raw/master/"
data_files = {"train": url + "SQuAD_it-train.json.gz","test": url + "SQuAD_it-test.json.gz",
}
squad_it_dataset = load_dataset("json", data_files=data_files, field="data")
1.1.5 Arrow: use Dataset.from_file() directly
from datasets import Dataset
dataset = Dataset.from_file("data.arrow")
Unlike load_dataset(), Dataset.from_file() memory-maps the Arrow file without preparing the dataset in the cache, which saves disk space. In this case, the cache directory used to store intermediate processing results is the directory of the Arrow file.
2 reference url
2.1 https://hugging-face.cn/docs/datasets
3 Slicing and dicing our data
3.1 The difference between slicing and dicing
3.1.1 Slice: in databases, slicing means cutting a dataset along a single dimension, such as time, geographic location, or product category. Slicing splits the dataset into parts for more detailed analysis; for example, sales data can be sliced by time into months, quarters, or years to analyze sales in each period.
3.1.2 Dice: dicing means cutting a dataset along several dimensions at once, i.e. a multi-dimensional cross analysis. For example, sales data can be diced by both time and geographic location to analyze sales for each combination of period and region, giving a fuller picture of how the data varies across dimensions.
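A rough pandas sketch of the two operations on a made-up sales table (all column names and values here are hypothetical, purely for illustration):
import pandas as pd

sales = pd.DataFrame(
    {
        "month": ["2023-01", "2023-01", "2023-02"],
        "region": ["north", "south", "north"],
        "amount": [120, 80, 150],
    }
)

# Slice: restrict along a single dimension (time)
jan_sales = sales[sales["month"] == "2023-01"]

# Dice: restrict along several dimensions at once (time and region)
jan_north_sales = sales[(sales["month"] == "2023-01") & (sales["region"] == "north")]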
3.2 grab a small random sample to get a quick feel for the type of data
3.2.1 shuffle() and select()
drug_sample = drug_dataset["train"].shuffle(seed=42).select(range(1000))
# Peek at the first few examples
drug_sample[:3]
3.2.2 use the Dataset.unique() function to verify that the number of IDs matches the number of rows in each split
for split in drug_dataset.keys():
    assert len(drug_dataset[split]) == len(drug_dataset[split].unique("Unnamed: 0"))
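If that unnamed column really is a patient ID, it can also be given a proper name with rename_column(); a minimal sketch (the new column name patient_id is an assumption):
drug_dataset = drug_dataset.rename_column(
    original_column_name="Unnamed: 0", new_column_name="patient_id"  # hypothetical new name
)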
3.3 use map()
3.3.1 map() can be combined with batching (batched=True) to speed things up
import html

new_drug_dataset = drug_dataset.map(
    lambda x: {"review": [html.unescape(o) for o in x["review"]]}, batched=True
)
With Dataset.map() and batched=True you can change the number of elements in your dataset
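A minimal sketch of that claim: a batched map() whose function returns more rows than it receives (here a naive sentence split, shown only for illustration). Because the new column no longer lines up with the old ones, the original columns are dropped via remove_columns:
def split_into_sentences(batch):
    # Naive split; real sentence segmentation would be more careful
    sentences = []
    for review in batch["review"]:
        sentences.extend(review.split(". "))
    return {"review": sentences}

sentence_dataset = drug_dataset["train"].map(
    split_into_sentences,
    batched=True,
    remove_columns=drug_dataset["train"].column_names,
)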
3.3.2 combining map() with a tokenizer
def tokenize_function(examples):
    return tokenizer(examples["review"], truncation=True)
tokenized_dataset = drug_dataset.map(tokenize_function, batched=True)
Fast or slow tokenizer mode; fast tokenizers are the default.
from transformers import AutoTokenizer

slow_tokenizer = AutoTokenizer.from_pretrained("bert-base-cased", use_fast=False)
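A slow tokenizer can be partly compensated with multiprocessing through the num_proc argument of map(); a sketch (8 processes is an arbitrary choice, and num_proc is usually unnecessary with fast tokenizers, which already parallelize internally):
tokenized_dataset = drug_dataset.map(tokenize_function, batched=True, num_proc=8)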
truncation and overflowing tokens
def tokenize_and_split(examples):
    return tokenizer(
        examples["review"],
        truncation=True,
        max_length=128,
        return_overflowing_tokens=True,
    )
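Since return_overflowing_tokens=True can turn one review into several feature rows, the old columns no longer match the new length; a common way to apply this with map() is to drop them with remove_columns (a minimal sketch):
tokenized_dataset = drug_dataset.map(
    tokenize_and_split, batched=True, remove_columns=drug_dataset["train"].column_names
)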
- GPU
- Tokenization is string manipulation.
- There is no way this could be sped up by using a GPU.
3.4 pandas
drug_dataset.set_format("pandas")
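Once the format is set, indexing a split returns a pandas.DataFrame, and reset_format() switches the output back afterwards; a sketch (the rating column name is an assumption about this dataset):
train_df = drug_dataset["train"][:]       # a pandas.DataFrame
print(train_df["rating"].value_counts())  # hypothetical column name
drug_dataset.reset_format()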
3.5 Creating a validation set
drug_dataset_clean = drug_dataset["train"].train_test_split(train_size=0.8, seed=42)
# Rename the default "test" split to "validation"
drug_dataset_clean["validation"] = drug_dataset_clean.pop("test")
# Add the "test" set to our `DatasetDict`
drug_dataset_clean["test"] = drug_dataset["test"]
drug_dataset_clean
3.6 Saving a dataset
drug_dataset_clean.save_to_disk("drug-reviews")
3.6.2 load_from_disk
from datasets import load_from_disk

drug_dataset_reloaded = load_from_disk("drug-reviews")
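Besides save_to_disk()/load_from_disk(), each split can also be exported to standalone files, e.g. JSON Lines via Dataset.to_json(); a sketch (the file names are arbitrary):
for split, dataset in drug_dataset_clean.items():
    dataset.to_json(f"drug-reviews-{split}.jsonl")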
datasets_129">4 Big data-Streaming datasets
pubmed_dataset_streamed = load_dataset(
    "json", data_files=data_files, split="train", streaming=True
)
4.1.1 streaming=True returns an IterableDataset object
4.1.2 use next() to access the data
next(iter(pubmed_dataset_streamed))
4.2 shuffle
4.2.1 shuffle() only shuffles the elements within a predefined buffer_size:
shuffled_dataset = pubmed_dataset_streamed.shuffle(buffer_size=10_000, seed=42)
next(iter(shuffled_dataset))
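Streamed datasets also provide take() and skip(), which can carve a small validation set off the front of the shuffled stream; a minimal sketch (1,000 examples is an arbitrary size):
validation_dataset = shuffled_dataset.take(1000)
train_dataset = shuffled_dataset.skip(1000)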
4.3 datasets.interleave_datasets
4.3.1 Interleave several datasets (sources) into a single dataset
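A minimal sketch, assuming a second streamed dataset (law_dataset_streamed is hypothetical here): interleave_datasets() yields examples alternately from each source.
from itertools import islice
from datasets import interleave_datasets

combined_dataset = interleave_datasets([pubmed_dataset_streamed, law_dataset_streamed])
list(islice(combined_dataset, 2))  # one example from each source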
4.4 benefits
4.4.1 Accessing memory-mapped files is faster than reading from or writing to disk.
4.4.2 Applications can access segments of data in an extremely large file without having to read the whole file into RAM first.
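A quick way to see the first benefit in practice, assuming pubmed_dataset is a regular (non-streamed) Dataset backed by a memory-mapped Arrow file: process RAM stays far below the on-disk size of the dataset.
import psutil

print(f"RAM used: {psutil.Process().memory_info().rss / (1024 ** 2):.2f} MB")
print(f"Dataset size on disk: {pubmed_dataset.dataset_size / (1024 ** 3):.2f} GB")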