
How to verify the shuffle behavior of the PyTorch DataLoader


With shuffle=False, the data order is preserved (no shuffling).

With shuffle=True, the data is returned in a random order.

import numpy as np
import h5py
import torch
from torch.utils.data import DataLoader, Dataset

# Write a small toy dataset to an HDF5 file.
h5f = h5py.File('train.h5', 'w')
data1 = np.array([[1, 2, 3],
                  [2, 5, 6],
                  [3, 5, 6],
                  [4, 5, 6]])
data2 = np.array([[1, 1, 1],
                  [1, 2, 6],
                  [1, 3, 6],
                  [1, 4, 6]])
h5f.create_dataset('data', data=data1)
h5f.create_dataset('label', data=data2)
h5f.close()

class H5Dataset(Dataset):
    def __init__(self):
        h5f = h5py.File('train.h5', 'r')
        self.data = h5f['data']
        self.label = h5f['label']

    def __getitem__(self, index):
        data = torch.from_numpy(self.data[index])
        label = torch.from_numpy(self.label[index])
        return data, label

    def __len__(self):
        assert self.data.shape[0] == self.label.shape[0], "wrong data length"
        return self.data.shape[0]

dataset_train = H5Dataset()
loader_train = DataLoader(dataset=dataset_train,
                          batch_size=2,
                          shuffle=True)

# With shuffle=True the rows arrive in a different order on each run.
for i, data in enumerate(loader_train):
    train_data, label = data
    print(train_data)
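
To make the contrast explicit, the following is a minimal sketch (my own addition, reusing the H5Dataset class and the train.h5 file created above) that builds one loader with shuffle=False and one with shuffle=True and prints the batches: the first keeps the file order, while the second typically reorders the rows on every epoch.

from torch.utils.data import DataLoader

dataset_train = H5Dataset()

loader_fixed = DataLoader(dataset=dataset_train, batch_size=2, shuffle=False)
loader_shuffled = DataLoader(dataset=dataset_train, batch_size=2, shuffle=True)

print("shuffle=False keeps the original row order:")
for batch_data, _ in loader_fixed:
    print(batch_data.tolist())

print("shuffle=True reorders the rows on every epoch:")
for epoch in range(2):
    # Iterating the loader again re-shuffles the indices for the new epoch.
    order = [batch_data.tolist() for batch_data, _ in loader_shuffled]
    print(f"epoch {epoch}: {order}")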
 

PyTorch DataLoader usage details

Background:

I initially had a question about data augmentation: I only saw the data transforms (torchvision.transforms) and could not find an explicit augmentation step. I later figured out that in PyTorch, data augmentation is produced by torchvision.transforms + torch.utils.data.DataLoader + multiple epochs working together.

The data transforms used here are the following:

composed = transforms.Compose([transforms.Resize((448, 448)),  # resize
                               transforms.RandomCrop(300),     # random crop
                               transforms.ToTensor(),
                               transforms.Normalize(mean=[0.5, 0.5, 0.5],  # normalize
                                                    std=[0.5, 0.5, 0.5])])
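
As a quick sanity check, this is a small sketch (the image path sample.jpg is a hypothetical placeholder, not from the original article) that applies the composed pipeline to a single PIL image; the result is a normalized 3x300x300 tensor.

from PIL import Image

# Hypothetical image path, used only for illustration.
im = Image.open("sample.jpg").convert("RGB")
out = composed(im)           # Resize -> RandomCrop -> ToTensor -> Normalize
print(out.shape)             # torch.Size([3, 300, 300])
print(out.min(), out.max())  # roughly within [-1, 1] after Normalize with mean/std 0.5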

A simple data-reading class that just returns a PIL-format image:

import csv
import os
from PIL import Image
from torch.utils import data

class MyDataset(data.Dataset):
    def __init__(self, labels_file, root_dir, transform=None):
        # labels_file is a CSV whose first column holds the image file names.
        with open(labels_file) as csvfile:
            self.labels_file = list(csv.reader(csvfile))
        self.root_dir = root_dir
        self.transform = transform

    def __len__(self):
        return len(self.labels_file)

    def __getitem__(self, idx):
        im_name = os.path.join(self.root_dir, self.labels_file[idx][0])
        im = Image.open(im_name)

        if self.transform:
            im = self.transform(im)

        return im
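
Before wiring this class into a DataLoader, it can be checked by indexing it directly; the sketch below (my addition, using the same hypothetical labels_file and root_dir paths as the main program below and the composed transform defined earlier) shows that each __getitem__ call returns one transformed tensor.

# Sketch only: the paths are the hypothetical example paths from the main program.
dataset = MyDataset("f:/test_temp/labels.csv", "f:/test_temp", transform=composed)
print(len(dataset))   # number of rows in labels.csv
sample = dataset[0]   # __getitem__ is what the DataLoader calls per index
print(sample.shape)   # torch.Size([3, 300, 300]) after the composed transform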

Below is the main program:

import matplotlib.pyplot as plt

labels_file = "f:/test_temp/labels.csv"
root_dir = "f:/test_temp"
dataset_transform = MyDataset(labels_file, root_dir, transform=composed)
dataloader = data.DataLoader(dataset_transform, batch_size=1, shuffle=False)
"""The original dataset has 3 images; with batch_size=1 and 2 epochs, all images (6 in total) are displayed."""
for epoch in range(2):
    plt.figure(figsize=(6, 6))
    for ind, i in enumerate(dataloader):
        a = i[0, :, :, :].numpy().transpose((1, 2, 0))
        plt.subplot(1, 3, ind + 1)
        plt.imshow(a)

[Figure: the images displayed over the two epochs, each epoch showing the three originals with a different random crop]

As the images above show, the transform is actually re-applied to the original images in every epoch, and this is what produces the data augmentation.
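
A small sketch of this effect (my addition, independent of the image files: it feeds one fixed PIL image through only the random part of the pipeline twice) shows that each pass produces a different crop, which is exactly what happens once per epoch in the loop above.

import numpy as np
import torch
from PIL import Image
from torchvision import transforms

# A fixed source image (random noise stands in for a real photo).
im = Image.fromarray(np.random.randint(0, 256, (448, 448, 3), dtype=np.uint8))

random_crop = transforms.Compose([transforms.RandomCrop(300), transforms.ToTensor()])

# Two "epochs" over the same image: the crops differ, so the network
# effectively sees a new variant of the sample each epoch.
epoch_1 = random_crop(im)
epoch_2 = random_crop(im)
print(torch.equal(epoch_1, epoch_2))  # almost certainly False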

The above is based on my personal experience; I hope it serves as a useful reference, and thank you for your continued support.