Question:

I am working on image binarization using UNet and have a dataset of 150 images together with their binarized versions. My idea is to augment the images randomly so that they look different, so I have made a function which applies any of 4-5 types of noise, skewness, shearing and so on to an image. I could have used `ImageDataGenerator(preprocessing_function=my_aug_function)` to augment the images, but the problem is that my y target is also an image. Also, I could have used a `tf.data` pipeline that maps an `encode_single_sample` function with `num_parallel_calls`, but with a larger dataset that blows up the memory, since the data needs to already be in memory. Another solution could be saving augmented images to a directory until there are 30-40K of them, and then loading those. The crucial part is that I need to augment the images on the fly, so that it looks like I have a huge dataset.

The idea is that I can use `Sequence` as the parent class, but how can I keep augmenting and generating new images on the fly together with their respective binarized y images? I have my X_DIR and Y_DIR, where the binarized and original images share the same file names but are stored in different directories. So far I have:

```python
import numpy as np

class DataGenerator():
    def __init__(self, files_path, labels_path, batch_size=32, shuffle=True, random_state=42):
        ...

    def __len__(self):
        return int(np.floor(len(self.files) / self.batch_size))

    def on_epoch_end(self):
        # I think this is responsible for augmentation, but I have no idea
        # how I should implement it or how it works.
        ...
```

Can somebody help me with the augmentation and generation of the y images?

Answer:

You are going the right way with the custom generator.

Custom Image Data Generator: first, load the directory data into a dataframe for the `CustomDataGenerator`:

```python
import os
import math

import numpy as np
import pandas as pd
import tensorflow as tf
from sklearn.utils import shuffle


def data_to_df(data_dir, subset=None, validation_split=None):
    """Collect (filename, label) pairs from one sub-directory per class."""
    filenames, labels = [], []
    for dataset in os.listdir(data_dir):
        img_list = os.listdir(os.path.join(data_dir, dataset))
        for image in img_list:
            filenames.append(os.path.join(data_dir, dataset, image))
            labels.append(dataset)
    df = pd.DataFrame({"filenames": filenames, "labels": labels})
    if subset == "train":
        split_indexes = int(len(df) * validation_split)
        return df[split_indexes:], df[:split_indexes]
    return df


train_df, val_df = data_to_df(train_dir, subset="train", validation_split=0.2)
```

```python
class CustomDataGenerator(tf.keras.utils.Sequence):
    '''
    data_frame   = pandas data frame in filenames and labels format
    batch_size   = number of samples per batch
    img_shape    = image shape in (h, w, d) format
    augmentation = data augmentation to make the model robust to overfitting
    '''

    def __init__(self, data_frame, batch_size=10, img_shape=None,
                 augmentation=True, num_classes=None):
        self.data_frame = data_frame
        self.train_len = len(data_frame)
        self.batch_size = batch_size
        self.img_shape = img_shape
        self.augmentation = augmentation
        self.num_classes = num_classes

    def __len__(self):
        # fix: on_epoch_end is not working, so shuffle here as an alternative
        self.data_frame = shuffle(self.data_frame)
        return math.ceil(self.train_len / self.batch_size)

    def __data_augmentation(self, img):
        ''' function to apply some data augmentation '''
        img = tf.image.random_flip_left_right(img)
        return img

    def __get_image(self, file_id):
        """ open image with file_id path and apply data augmentation """
        img = tf.io.decode_image(tf.io.read_file(file_id), channels=self.img_shape[-1])
        img = tf.image.resize(img, self.img_shape[:2])
        if self.augmentation:
            img = self.__data_augmentation(img)
        return img

    def __get_label(self, label_id):
        """ uncomment the below line to convert label into categorical format """
        # label_id = tf.keras.utils.to_categorical(label_id, self.num_classes)
        return label_id

    def __getitem__(self, idx):
        batch_x = self.data_frame["filenames"][idx * self.batch_size:(idx + 1) * self.batch_size]
        batch_y = self.data_frame["labels"][idx * self.batch_size:(idx + 1) * self.batch_size]
        # read your data here using the batch lists, batch_x and batch_y
        x = [self.__get_image(file_id) for file_id in batch_x]
        y = [self.__get_label(label_id) for label_id in batch_y]
        return tf.convert_to_tensor(x), tf.convert_to_tensor(y)
```

For the augmentation itself you can use libraries like albumentations and imgaug; both are good, but I have heard there are issues with random seeds in albumentations. Here's an example with imgaug, taken from its documentation, which augments an image and its segmentation map together:

```python
import imgaug.augmenters as iaa

seq = iaa.Sequential([
    iaa.Dropout([0.05, 0.2]),                     # drop 5% or 20% of all pixels
    iaa.Sharpen((0.0, 1.0)),                      # sharpen the image
    iaa.Affine(rotate=(-45, 45)),                 # rotate by -45 to 45 degrees (affects segmaps)
    iaa.ElasticTransformation(alpha=50, sigma=5)  # apply water effect (affects segmaps)
], random_order=True)

images_aug_i, segmaps_aug_i = seq(image=image, segmentation_maps=segmap)
```
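The key requirement in the image-to-image case is that every random geometric transform is applied identically to the input image and its binarized target. Here is a minimal NumPy-only sketch of that idea; the `augment_pair` helper and the toy arrays are illustrative assumptions, not part of the original post:

```python
import numpy as np

def augment_pair(img, mask, rng):
    """Apply the same random flips/rotation to an image and its mask."""
    if rng.random() < 0.5:            # horizontal flip
        img, mask = img[:, ::-1], mask[:, ::-1]
    if rng.random() < 0.5:            # vertical flip
        img, mask = img[::-1, :], mask[::-1, :]
    k = int(rng.integers(0, 4))       # rotate by a random multiple of 90 degrees
    img, mask = np.rot90(img, k), np.rot90(mask, k)
    return img.copy(), mask.copy()

rng = np.random.default_rng(42)
img = np.arange(16).reshape(4, 4)
mask = (img > 7).astype(np.uint8)     # toy "binarized" target

img_aug, mask_aug = augment_pair(img, mask, rng)

# the augmented mask still marks exactly the augmented pixels with value > 7
assert ((img_aug > 7) == mask_aug.astype(bool)).all()
```

This single draw of random parameters shared by x and y is the same thing imgaug does internally when you call an augmenter with both `image=` and `segmentation_maps=`; whichever library you use inside `__getitem__`, make sure one set of sampled parameters is applied to both images of a pair.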